\begin{document}
\title{A Time-Symmetric Resolution of the Einstein's Boxes Paradox}
\begin{abstract}
The Einstein's Boxes paradox was developed by Einstein, de Broglie, Heisenberg, and others to demonstrate the incompleteness of the Copenhagen Formulation of quantum mechanics. I explain the paradox using the Copenhagen Formulation.~I then show how a time-symmetric formulation of quantum mechanics resolves the paradox in the way envisioned by Einstein and de Broglie. Finally, I describe an experiment that can distinguish between these two formulations.
\end{abstract}
Keywords:\\
quantum foundations; time-symmetric; Einstein's boxes; Einstein--Podolsky--Rosen (EPR)
\pagebreak
\section{Introduction}
A grand challenge of modern physics is to resolve the conceptual paradoxes in the foundations of quantum mechanics~\cite{Smolin}.~Some of these paradoxes concern nonlocality and completeness. For~example, Einstein believed the Copenhagen Formulation (CF) of quantum mechanics was incomplete. He presented a thought experiment (later known as ``Einstein's Bubble'') explaining his reasoning at the 1927 Solvay conference~\cite{Solvay}. In~this experiment, an~incident particle's wavefunction diffracts at a pinhole in a flat screen and then spreads to all parts of a hemispherical screen capable of detecting the wavefunction. The~wavefunction is then detected at one point on the hemispherical screen, implying the wavefunction everywhere else vanished instantaneously. Einstein believed that this instantaneous wavefunction collapse violated the special theory of relativity, and~the wavefunction must have been localized at the point of detection immediately before the detection occurred.~Since the CF does not describe the wavefunction localization before detection, it must be an incomplete theory.~In~an earlier paper, I analyzed a one-dimensional version of this thought experiment with a time-symmetric formulation (TSF) of quantum mechanics~\cite{Heaney1}, showing that the TSF did not need wavefunction collapse to explain the experimental~results.
Einstein, de Broglie, Heisenberg, and~others later modified Einstein's original thought experiment to emphasize the nonlocal action-at-a-distance effects. In~the modified experiment, the~particle's wavefunction was localized in two boxes which were separated by a space-like interval. This modified thought experiment became known as ``Einstein's Boxes.'' Norsen wrote an excellent analysis of the history and significance of the Einstein's Boxes thought experiment using the CF~\cite{Norsen}.
Time-symmetric explanations of quantum behavior predate the discovery of the Schr\"{o}dinger equation~\cite{Tetrode} and have been developed many times over the past century~\mbox{\cite{Lewis,Eddington,Beauregard,Watanabe1,Watanabe2,Sciama,ABL,Davidon,Roberts,Rietdijk,Cramer,Hokkyo1,Sutherland,PeggBarnett,Wharton1,Hokkyo2,Miller,AV,APT,Wharton2,Gammelmark,Price,Corry, Schulman,Drummond,Heaney2,Heaney3,Heaney4}}. The~TSF used in this paper has been described in detail and compared to other TSFs before~\cite{Heaney1,Heaney2,Heaney3,Heaney4}. Note in particular that the TSF used in this paper is significantly different than the Two-State Vector Formalism (TSVF)~\cite{ABL,AV,APT}. First, the~TSVF postulates that a \textit{quantum particle} is completely described by two state vectors, written as $\langle\phi\vert\thickspace\vert\psi\rangle$, where $\vert\psi\rangle$ is a retarded state vector satisfying the retarded Schr\"{o}dinger equation $i\hbar\partial\vert\psi\rangle/\partial t=H\vert\psi\rangle$ and the initial boundary conditions, while $\langle\phi\vert$ is an advanced state vector satisfying the advanced Schr\"{o}dinger equation $-i\hbar\langle\phi\vert\partial/\partial t=\langle\phi\vert H$ and the final boundary conditions. In~contrast, the~TSF postulates that the \textit{transition} of a quantum particle is completely described by a complex transition amplitude density $\phi^\ast\psi$, defined as the algebraic product of the two wavefunctions. Second, the~TSVF postulates that wavefunctions collapse upon measurement, while the TSF has no collapse postulate.~The~particular TSF used in this paper is a type IIB model in~the classification system of Wharton and Argaman~\cite{Wharton3}.
Section~\ref{sec2} explains the paradox associated with the CF of the Einstein's Boxes thought experiment, as~described by de Broglie.~Section~\ref{sec3} reviews a CF numerical model of the thought experiment which does not resolve the paradox.~Section~\ref{sec4} describes a TSF numerical model of the thought experiment which resolves the paradox. Section~\ref{sec5} discusses the conclusions and~implications.
Note that this paper only concerns a single quantum particle interfering with itself and not multiple quantum particles entangled with each other.
\section{The Einstein's Boxes~Paradox}\label{sec2}
\noindent The Einstein's Boxes paradox was explained by de Broglie as follows~\cite{deBroglie}:
\begin{quote}
Suppose a particle is enclosed in a box $B$ with impermeable walls. The~associated wave $\psi$ is confined to the box and cannot leave it. The~usual interpretation asserts that the particle is ``potentially'' present in the whole of the box $B$, with~a probability $\vert\psi\vert^2$ at each point. Let us suppose that by some process or other, for~example, by~inserting a partition into the box, the~box $B$ is divided into two separate parts $B_1$ and $B_2$ and that $B_1$ and $B_2$ are then transported to two very distant places, for~example to Paris and Tokyo. The~particle, which has not yet appeared, thus remains potentially present in the assembly of the two boxes and its wavefunction $\psi$ consists of two parts, one of which, $\psi_1$, is located in $B_1$ and the other, $\psi_2$, in~$B_2$. The~wavefunction is thus of the form $\psi=c_1\psi_1+c_2\psi_2$, where $\vert c_1\vert^2+\vert c_2\vert^2 = 1$.
The probability laws of [the Copenhagen Formulation] now tell us that if an experiment is carried out in box $B_1$ in Paris, which will enable the presence of the particle to be revealed in this box, the~probability of this experiment giving a positive result is $\vert c_1\vert^2$, while the probability of it giving a negative result is $\vert c_2\vert^2$. According to the usual interpretation, this would have the following significance: because the particle is present in the assembly of the two boxes prior to the observable localization, it would be immediately localized in box $B_1$ in the case of a positive result in Paris. This does not seem to me to be acceptable. The~only reasonable interpretation appears to me to be that prior to the observable localization in $B_1$, we know that the particle was in one of the two boxes $B_1$ and $B_2$, but~we do not know in which one, and~the probabilities considered in the usual wave mechanics are the consequence of this partial ignorance. If~we show that the particle is in box $B_1$, it implies simply that it was already there prior to localization. Thus, we now return to the clear classical concept of probability, which springs from our partial ignorance of the true situation. But, if~this point of view is accepted, the~description of the particle given by $\psi$, though~leading to a perfectly \textit{exact} description of probabilities, does not give us a \textit{complete} description of the physical reality, because~the particle must have been localized prior to the observation which revealed it, and~the wavefunction $\psi$ gives no information about~this.
We might note here how the usual interpretation leads to a paradox in the case of experiments with a negative result. Suppose that the particle is charged, and~that in the box $B_2$ in Tokyo a device has been installed which enables the whole of the charged particle located in the box to be drained off and in so doing to establish an observable localization. Now, if~nothing is observed, this negative result will signify that the particle is not in box $B_2$ and it is thus in box $B_1$ in Paris. But~this can reasonably signify only one thing: the particle was already in Paris in box $B_1$ prior to the drainage experiment made in Tokyo in box $B_2$. Every other interpretation is absurd. How can we imagine that the simple fact of having observed \textit{nothing} in Tokyo has been able to promote the localization of the particle at a distance of many thousands of miles away?
\end{quote}
\section{The Conventional Formulation of Einstein's~Boxes}\label{sec3}
The version of Einstein's Boxes proposed by de Broglie is experimentally impractical. We will use Heisenberg's more practical version~\cite{Heisenberg}, shown in Figure~\ref{fig1}. The~Conventional Formulation (CF) postulates that a single free particle with mass $m$ is completely described by a retarded wavefunction $\psi(\vec{r},t)$ which satisfies the initial conditions and evolves over time according to the retarded Schr\"{o}dinger equation:
\begin{equation}
i\hbar\frac{\partial\psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\psi.
\label{eq.1}
\end{equation}
\begin{figure}
\caption{The modified Einstein's Boxes thought experiment. The~source $S$ can emit single-particle wavefunctions on command. Each wavefunction travels to the balanced beam splitter $BS$ and then to box $B_1$ and box $B_2$. The~two boxes are separated by a space-like~interval.}
\label{fig1}
\end{figure}
The ``retarded wavefunction'' and ``retarded Schr\"{o}dinger equation'' are simply the usual wavefunction and Schr\"{o}dinger equation, described as retarded to distinguish them from the ``advanced wavefunction'' and ``advanced Schr\"{o}dinger equation,'' which will be defined below. We will use units where $\hbar=1$ and assume the wavefunction $\psi(\vec{r},t)$ is a traveling Gaussian with an initial standard deviation $\sigma=50$, initial momentum $k_x=0.4$, and~mass $m=1$. We will also assume that each box contains a detector whose eigenstate is the same Gaussian as that emitted by the source. The~CF assumes that a single-particle wavefunction emitted from a source $S$ will travel to the beam splitter $BS$, where half of it will pass through $BS$ and continue to box $B_1$ while the other half will be reflected from $BS$ and travel to box $B_2$. Let us assume the two halves reach the boxes at the same~time.
Figure~\ref{fig2} shows how the wavefunction's CF probability density $\psi^\ast\psi$ evolves over time, assuming the initial condition is localization in the source $S$. At~$t=0$, $\psi^\ast\psi$ is localized inside the source $S$.~At~$t=1000$, $\psi^\ast\psi$ is traveling toward the beam splitter $BS$.~At~$t=3000$, $\psi^\ast\psi$ has been split in half by the beam splitter, and~the two halves are traveling toward box $B_1$ and box $B_2$.~At~$t=4000-\delta t$, half of $\psi^\ast\psi$ arrives at box $B_1$, while the other half arrives at box $B_2$.~Upon~a measurement at box $B_2$ at $t=4000$, the~CF postulates that in 50\% of the runs, the half wavefunction in box $B_2$ collapses to zero, while simultaneously, the half wavefunction in box $B_1$ collapses to a full wavefunction $\phi(\vec{r},t)$, which we will assume has the same shape and size as the initial wavefunction.~It was believed by de Broglie that this prediction of the CF was absurd: ``How can we imagine that the simple fact of having observed \textit{nothing} in [box $B_2$] has been able to promote the localization of the particle [in box $B_1$] at a distance of many thousands of miles away?''
The CF assumes that the probability $P_c$ for the collapse in box $B_1$ is $P_c=A_c^\ast A_c$, where the subscript $c$ denotes the CF and~the CF transition amplitude $A_c$ for the collapse is
\begin{equation}
A_c=\int_{-\infty}^{\infty}\phi^\ast(x,y,4000)\frac{1}{\sqrt{2}}\psi(x,y,4000)dxdy,
\label{eq.2}
\end{equation}
where $t=4000$ is the time of wavefunction collapse and~the ``quantum'' factor $\frac{1}{\sqrt{2}}$ accounts for the initial wavefunction $\psi(x,y,t)$ being split in half when it reaches box $B_1$. Plugging in numbers gives a collapse probability $P_c=0.43$. This probability is not 1/2 because the evolved wavefunction at $t=4000$ is not identical in shape to the detector eigenstate.
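As a check on these numbers, the overlap in Equation~(\ref{eq.2}) can be evaluated numerically. The sketch below is a minimal illustration (not the code used for the figures): it propagates the one-dimensional Gaussian with $\sigma=50$, $k_x=0.4$, $m=1$, $\hbar=1$ using the exact free-particle propagator in momentum space, takes the detector eigenstate to be the initial Gaussian translated to the classical arrival point $x_0=k_xt/m$, and uses the fact that the two-dimensional amplitude factorizes with $\vert A_y\vert=\vert A_x\vert$ (the $y$ axis spreads identically).

```python
import numpy as np

# Parameters from the text (hbar = 1): sigma = 50, k_x = 0.4, m = 1,
# detection at t = 4000.
sigma, k, m, t = 50.0, 0.4, 1.0, 4000.0

# Position grid wide enough to hold the packet at both t = 0 and t = 4000.
N = 4096
x = np.linspace(-600.0, 2600.0, N, endpoint=False)
dx = x[1] - x[0]

# Initial Gaussian wavepacket psi(x, 0).
psi0 = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2/(4*sigma**2) + 1j*k*x)

# Free evolution: multiply by exp(-i p^2 t / 2m) in momentum space.
p = 2*np.pi * np.fft.fftfreq(N, d=dx)
psi_t = np.fft.ifft(np.exp(-1j*p**2*t/(2*m)) * np.fft.fft(psi0))

# Detector eigenstate: the initial Gaussian translated to the classical
# arrival point x0 = k t / m, with the same momentum.
x0 = k*t/m
phi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-(x - x0)**2/(4*sigma**2) + 1j*k*(x - x0))

# Per-axis overlap; the 2D amplitude factorises and the y axis spreads
# identically, so |A_y| = |A_x| and A_c = (1/sqrt(2)) * A_x * A_y.
A_x = np.sum(np.conj(phi) * psi_t) * dx
F = abs(A_x)**2          # single-axis fidelity, < 1 because of spreading
P_c = 0.5 * F**2         # CF collapse probability
print(round(F, 3), round(P_c, 2))
```

With these parameters the spreading parameter is $t/(2m\sigma^2)=0.8$, which gives a per-axis fidelity of about $0.93$ and $P_c\approx0.43$, matching the value quoted above.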
\begin{figure}
\caption{The Conventional Formulation (CF) explanation of the Einstein's Boxes experiment with~a single-particle wavefunction emitted from source $S$.}
\label{fig2}
\end{figure}
\section{The Time-Symmetric Formulation of Einstein's~Boxes}\label{sec4}
The TSF postulates that quantum mechanics is a theory about \textit{transitions} described by the transition amplitude density $\phi^\ast\psi$, where $\psi$ is a retarded wavefunction that obeys the retarded Schr\"{o}dinger equation $i\hbar\partial\psi/\partial t=H\psi$ and satisfies the initial boundary conditions, while $\phi^\ast$ is an advanced wavefunction that obeys the advanced Schr\"{o}dinger equation $-i\hbar\partial\phi^\ast/\partial t=H\phi^\ast$ and satisfies the final boundary conditions.~As~in the TSVF, $\psi$ can be interpreted as a retarded wavefunction from the past initial conditions, and~$\phi^\ast$ can be interpreted as an advanced wavefunction from the future final conditions~\cite{Heaney1}.~We will assume the same wavefunctions $\psi(\vec{r},t)$ and $\phi(\vec{r},t)$ as in the CF~above.
An electron (e.g.) can be absorbed by a few molecules in a detector. The~number of few-molecule sites in a detector is orders of magnitude larger than the number of square centimeter sites in a detector. This makes it overwhelmingly more likely that the electron will be absorbed in an area localized to a few square nanometers than much larger areas. This could explain why the transition amplitude density refocuses to a localized area at the detector. Note that there exist two unitary solutions based on the initial conditions, but~time-symmetric theories also require the final conditions, which are that the particle is always found in either one or the other box. Let us then assume the final conditions are either a transition amplitude density localized in box $B_1$ or a transition amplitude density localized in box $B_2$.
Figure~\ref{fig3} shows the TSF explanation of the Einstein's Boxes thought experiment, assuming that the final condition is a particle transition amplitude density localized in box $B_1$. At~$t=0$, $\vert\phi^\ast\psi\vert$ is localized inside the source $S$. At~$t=1000$, $\vert\phi^\ast\psi\vert$ is traveling toward the beam splitter $BS$. At~$t=3000$, $\vert\phi^\ast\psi\vert$ has passed through the beam splitter and~is traveling toward box $B_1$. $\vert\phi^\ast\psi\vert$ is zero on the path from $BS$ to $B_2$ because $\phi^\ast$ is zero on this path. At~$t=4000-\delta t$, $\vert\phi^\ast\psi\vert$ arrives at box $B_1$. Upon~a measurement at box $B_2$ at $t=4000$, no particle transition amplitude density is found. Upon~a measurement at box $B_1$ at $t=4000$, one particle's transition amplitude density is found. The~one-particle transition amplitude density was localized inside box $B_1$ before the measurement was~made.
The TSF assumes the probability $P_t$ for the transition from localization in the source $S$ to localization in box $B_1$ is $P_t=\frac{1}{2}A_t^\ast A_t$, where the subscript $t$ denotes the TSF, the~``classical'' probability factor $\frac{1}{2}$ accounts for the fact that there are two equally likely possible final states, and~the TSF amplitude $A_t$ for the transition is
\begin{equation}
A_t=\int_{-\infty}^{\infty}\phi^\ast(x,y,t)\psi(x,y,t)dxdy.
\label{eq.3}
\end{equation}
Plugging in numbers gives a TSF transition probability $P_t=0.43$, which is identical to the CF collapse probability result. Note that there is no transition amplitude density collapse in the TSF, so there is no need to specify the time of collapse in the integrand.
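The numerical agreement of $P_t$ with $P_c$ is not accidental. Since $\psi$ and $\phi$ obey the retarded and advanced Schr\"{o}dinger equations, and $H=-\hbar^2\nabla^2/2m$ is Hermitian, the amplitude of Equation~(\ref{eq.3}) is constant in time:
\begin{equation*}
\frac{dA_t}{dt}=\int_{-\infty}^{\infty}\left(\frac{\partial\phi^\ast}{\partial t}\psi+\phi^\ast\frac{\partial\psi}{\partial t}\right)dxdy=\frac{i}{\hbar}\int_{-\infty}^{\infty}\left[\left(H\phi^\ast\right)\psi-\phi^\ast\left(H\psi\right)\right]dxdy=0,
\end{equation*}
after integrating by parts. Evaluating $A_t$ at $t=4000$ and comparing with Equation~(\ref{eq.2}) gives $A_c=A_t/\sqrt{2}$, so $P_c=A_c^\ast A_c=\frac{1}{2}A_t^\ast A_t=P_t$: the CF ``quantum'' factor inside the amplitude and the TSF ``classical'' factor outside it yield identical predictions.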
\begin{figure}
\caption{The time-symmetric formulation (TSF) explanation of the Einstein's Boxes experiment, with~a single-particle transition amplitude density emitted from source $S$ and detected at box $B_1$.}
\label{fig3}
\end{figure}
\section{Discussion}\label{sec5}
To the best of my knowledge, this is the first time a TSF has been shown to resolve the Einstein's Boxes paradox.~The~TSF resolves the paradox in the ways that Einstein and de Broglie envisioned. The~transition amplitude density $\phi^\ast\psi$ ``localises the particle during the propagation~\cite{Solvay},'' and $\phi^\ast\psi$ ``was already in Paris in box $B_1$ prior to the drainage experiment made in Tokyo in box $B_2$~\cite{deBroglie}.'' None of the problems associated with wavefunction collapse occur. The~TSF appears to give the sought-after exact description of the probabilities and a complete description of the physical~reality.
One might wonder if a theory based on transition amplitude densities will be able to reproduce all of the predictions of the CF. In~1932, Dirac showed that all the experimental predictions of the CF of quantum mechanics can be formulated in terms of transition probabilities~\cite{Dirac}.~The~TSF inverts this fact by postulating that quantum mechanics is a theory which experimentally predicts \textit{only} the transition probabilities.~This implies that the TSF has the same predictive power as the~CF.
The TSF has the additional benefit of being consistent with the classical explanation of the Einstein's Boxes thought experiment. As~the size of the ``particle'' becomes larger and it starts behaving more like a classical particle, it will always go to either one box or the other. There is a logical continuity between its behavior in the quantum and classical regimes, in~contrast to the CF~predictions.
In the TSF example above, we assumed the transition probabilities for the two boxes were the same. Now consider the case where the two transitions are not equally likely. For~a very unlikely transition, the pre-experiment estimate of the TSF transition amplitude density $\phi^\ast\psi$ is tiny, while for a very likely transition, the pre-experiment estimate of $\phi^\ast\psi$ is large.~However,~this does not mean that $\phi^\ast\psi$ itself is a smaller-sized field in the event of an unlikely outcome.~Before~an experiment is conducted, we have classical ignorance of which transition will occur. We normalize the wavefunctions $\psi$ and $\phi^\ast$ to unity and calculate the expected probability for each transition based on $\phi^\ast\psi$. After~the experiment is complete, we know which of the two possible transitions actually occurred, so we renormalize the $\phi^\ast\psi$ of that transition to give a transition probability of one and~renormalize the other $\phi^\ast\psi$ to zero. Note that this is an update of our classical ignorance of which transition occurred and not a physical wavefunction collapse. This may explain why the CF collapse postulate appears to~work.
A central issue raised by the Einstein's Boxes paradox is the question of which elements of quantum theory should be thought of as elements of reality (ontic) and~which elements are merely states of knowledge (epistemic).~The~TSF transition amplitude density $\phi^\ast\psi$ and the wavefunctions $\psi$ and $\phi^\ast$ should be thought of as elements of reality, with~the understanding that $\phi^\ast\psi$ is the TSF equivalent of a real particle wavefunction while $\psi$ and $\phi^\ast$ are the TSF equivalents of virtual particle wavefunctions. For~multiple particles, $\phi^\ast\psi$ lives in a higher dimensional configuration spacetime, which should be thought of as the stage for reality \cite{Heaney3}. The~CF concept of a superposition of paths after the beam splitter then becomes just a state of knowledge in the TSF.~In reality, only one path is taken; we just do not know in advance which one. Since the TSF assumes that the sources and sinks are randomly emitting $\psi$ and $\phi^\ast$ wavefunctions, it is a probabilistic theory. In~analogy with the classical theory of special relativity, the~TSF transition amplitude density can be thought of as a quantum worldtube. The~higher dimensional configuration spacetime is then the quantum equivalent of Minkowski~spacetime.
Finally, the~CF predicts a rapid oscillating motion of a free particle's wavefunction in empty space. Schr\"odinger discovered the possibility of this rapid oscillating motion in 1930, naming it zitterbewegung~\cite{Schroedinger}.~This prediction of the CF is inconsistent with Newton's first law, since it implies a free particle's wavefunction does not move with a constant velocity. The~TSF predicts zitterbewegung will never occur~\cite{Heaney1}. Direct measurements of zitterbewegung are beyond the capability of current technology, but~future technological developments should allow measurements to confirm or deny its existence.~Given the technology, one possible way to test for zitterbewegung would be to hold an electron in the ground state in a parabolic potential and then turn off the potential while looking for radiation at the zitterbewegung frequency of $10^{21}\,\mathrm{s}^{-1}$. This could distinguish between the CF and the~TSF.
\end{document}
\begin{document}
\title{Noise and Stability in Reaction-diffusion Equations}
\begin{abstract}
We study the stability of reaction-diffusion equations in the presence of noise. The relationship between the stability of solutions of stochastic ordinary differential equations and that of the
corresponding stochastic reaction-diffusion equations is first established. Then, by using the Lyapunov method, sufficient conditions for mean square and stochastic
stability are given. The results show that multiplicative noise can make the solution stable, but
additive noise cannot.
{\bf Keywords}: Stochastic stability; Mean square stability; Noise; Lyapunov method.
AMS subject classifications (2010): 35B35, 60H15.
\end{abstract}
\baselineskip=15pt
\section{Introduction}
\setcounter{equation}{0}
The stability of solutions is an important issue in
the theory of partial differential equations (PDEs) and has been studied by
many authors \cite{YLbook}. Many sufficient conditions are known which ensure that
solutions are stable or unstable. We note that
noise always exists in the real world.
One reason is that parameters are obtained by measurement and their true values
cannot be known exactly, so it is natural to consider ordinary (partial) differential
equations with noise perturbations.
In other words, in the microscopic world, the laws of particle motion must be described
by stochastic ordinary (partial) differential equations, while in the macroscopic world
we often consider equations whose coefficients are random or stochastic.

When the role of noise is considered, there are two cases. In the first case, the noise is regarded as a small perturbation.
The structure of solutions is then unchanged, and the largest possible change is in the
long-time behavior of the solutions; that is to say, the conditions for stability or instability
may differ from those in the deterministic case. In the second case, the noise is strong, such as $u^\gamma dW_t$
with $\gamma>1$, where $u$ is the unknown function and $W_t$ is the noise. In this case, the
structure of solutions is changed: more precisely, the noise can cause solutions to
blow up in finite time.

When noise appears in a deterministic PDE, its impact on solutions is the first thing
to be considered. In the present paper, we aim to study the impact of noise
on the stability of solutions.
We first recall some known results about the impact of noise.
Flandoli et al. \cite{FF2012,FGP2010} proved that noise can make
transport equations well-posed and can prevent the formation of
singularities in linear transport equations. Chow \cite{C2011} showed that noise can induce
singularities (finite-time blow-up of solutions); see also \cite{LD2015,LW2020}.
There is a large body of work on the impact of noise on different PDEs, for example
Hamilton--Jacobi equations \cite{GG2019},
conservation laws \cite{GS2017,GL2019}, porous media equations \cite{DGG2019},
quasilinear degenerate parabolic-hyperbolic equations \cite{GH2018}, and so on.
Besides, the impact of noise on the regularity of solutions of parabolic equations
has been studied by Lv et al. \cite{LGWW2019}.
The impact of noise has been studied for more than forty years. A main result
is that noise can stabilize the solutions of ordinary differential equations;
see the books \cite{Kh2011,Maobook1991,Maobook1994}. Meanwhile, the stochastic stability of
functional differential equations was considered by Mackey and Nechaeva \cite{MN1994}.
In the book \cite{DZbook}, the long-time behavior of solutions is considered in Chapter 11, and
a sufficient condition for mean square stability is given (Theorem 11.14). More precisely, Da Prato and Zabczyk
studied the following equation
\begin{equation}\left\{\begin{array}{lll}
dX=AXdt+B(X)dW_t, \\[1.5mm]
X(0)=x\in H,
\end{array}\right.\label{1.1}\end{equation}
where $H$ is a Hilbert space, $A$ generates a $C_0$ semigroup and $B\in L(H;L_2^0)$ (see
p.~309 of \cite{DZbook} for more details). They proved that the following statements are equivalent:
(i) There exist $M>0$ and $\gamma>0$ such that
\begin{equation*}
\mathbb{E}|X(t,x)|^2\leq Me^{-\gamma t}|x|^2, \ \ \ t\geq0.
\end{equation*}
(ii) For any $x\in H$ we have
\begin{equation*}
\mathbb{E}\int_0^\infty|X(t,x)|^2dt<\infty.
\end{equation*}
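A simple scalar illustration of this equivalence (an example for orientation, not taken from \cite{DZbook}): let $H=\mathbb{R}$, $A=a$ and $B(X)=bX$, so that $dX=aXdt+bXdW_t$. Then
\begin{equation*}
\mathbb{E}|X(t,x)|^2=e^{(2a+b^2)t}|x|^2,
\end{equation*}
so (i) holds with $M=1$, $\gamma=-(2a+b^2)$, and (ii) reads $\int_0^\infty e^{(2a+b^2)t}|x|^2dt<\infty$; both are satisfied precisely when $2a+b^2<0$.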
It is remarked that the nonlinear term is not considered in the book \cite{DZbook}. Moreover,
the example in \cite{DZbook} is a stochastic reaction-diffusion equation on a bounded domain.
Liu and Mao \cite{LM1998} also considered the stability of the trivial solution $0$ on a bounded domain.
Wang and Li \cite{WL2019} considered the
stability and moment boundedness of a stochastic linear age-structured model.
Although Chow gave an abstract result on the stability of the null solution
in the book \cite{Cb2007} (see page 233), no concrete form was given.
In the present paper, we consider concrete models and generalize the classical results.
What is more, we find some differences between stochastic partial differential equations (SPDEs)
and stochastic differential equations (SDEs).
We discuss the impact of different kinds of noise on stability.
Some interesting results are obtained: additive noise has a ``bad'' effect,
while some multiplicative noise has a ``good'' effect. In other words, multiplicative noise can make the solution stable, but
additive noise cannot, which is new for SPDEs.
This is different from the situation for SDEs; see Remark \ref{r3.2}.
What is more, we obtain a new result for the stability theory of SDEs; see Theorem \ref{t3.6}.

This paper is arranged as follows. In Section 2,
we present some preliminaries. Sections 3 and 4 are concerned with the bounded domain and
the whole space, respectively.
\section{Preliminaries}
\setcounter{equation}{0}
In the present paper, we always assume that $W_t$ is a one-dimensional standard Wiener process and that $W_t(x)$ is a Wiener random field, both defined on a complete probability
space $(\Omega,\mathcal {F},\mathbb{P})$. Firstly, we recall the definitions of stochastic stability.
We only consider the stability of constant equilibria of stochastic
reaction-diffusion equations.
Consider the following stochastic reaction-diffusion equation:
\begin{equation}\left\{\begin{array}{lll}
du=(\Delta u+f(u))dt+\sigma(u)dW, \ \ \qquad t>0,&x\in \mathbb{R}^d,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in \mathbb{R}^d,
\end{array}\right.\label{2.2}\end{equation}
where $W=W_t$ or $W_t(x)$. The Wiener random field
can be chosen to have the following properties:
$\mathbb{E}W_t(x)=0$ and its covariance function $q(x,y)$ is given by
\begin{equation*}
\mathbb{E}W_t(x)W_s(y)=(t\wedge s)q(x,y), \ \ \ x,y\in\mathbb{R}^d,
\end{equation*}
where $t\wedge s=\min\{t,s\}$ for $0\leq t,s\leq T$; alternatively, $W$ can be chosen as a one-dimensional Brownian motion.
Without loss of generality, we suppose that $f(0)=\sigma(0)=0$, that is to say, $0$ is a trivial
solution of the first equation of problem (\ref{2.2}). Let $\|\cdot\|$ denote
a norm with respect to the spatial variable.
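For intuition, the random field can be sampled on a grid. The sketch below is an illustration only; the kernel $q(x,y)=e^{-(x-y)^2}$ is a hypothetical choice, since the text leaves $q$ unspecified. It draws samples of $W_t(\cdot)$ at a fixed time via a Cholesky factor of the covariance matrix and checks that the empirical spatial covariance is close to $t\,q(x,y)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth covariance kernel (the text leaves q(x, y) unspecified).
q = lambda x, y: np.exp(-(x - y)**2)

x = np.linspace(-2.0, 2.0, 40)
Q = q(x[:, None], x[None, :])
L = np.linalg.cholesky(Q + 1e-8*np.eye(len(x)))     # jitter for stability

def sample_field(t, n):
    """n mean-zero samples of W_t(x) on the grid, covariance t * q(x, y)."""
    return np.sqrt(t) * (L @ rng.normal(size=(len(x), n)))

t = 2.0
W = sample_field(t, 20000)
emp = W @ W.T / W.shape[1]      # empirical spatial covariance at time t
err = np.max(np.abs(emp - t*Q))
print(round(err, 3))
```

In time, $W_t(x)$ has independent Brownian increments; the snapshot above only illustrates the spatial correlation structure at one fixed $t$.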
\begin{defi}\label{d2.1} The trivial solution $0$ is called mean square stable if for any
$\varepsilon>0$, there exists $\delta=\delta(\varepsilon)>0$ such that for any initial
data $u_0$,
\begin{equation*}
\|u_0\|\leq\delta\ \ \ {\rm implies}\ \ \ \mathbb{E}\|u(\cdot,t)\|^2<\varepsilon, \ \forall \ t\geq0,
\end{equation*}
and exponentially mean square stable if there exist two positive constants $c_1$ and $c_2$ such that
\begin{equation*}
\mathbb{E}\|u(\cdot,t)\|^2<c_1e^{-c_2t}\|u_0\|^2, \ \forall \ t\geq0.
\end{equation*}
\end{defi}
\begin{defi}\label{d2.2} The trivial solution $0$ is called stochastically stable if for any
$\varepsilon_1>0$ and $\varepsilon_2>0$, there exists $\delta=\delta(\varepsilon_1,\varepsilon_2)>0$ such that
the solution $u$ satisfies
\begin{equation*}
\mathbb{P}\left\{\sup_{t>0}\|u(\cdot,t)\|\leq\varepsilon_1\right\}\geq1-\varepsilon_2 \ \ \ {\rm for}\ \ \ \|u_0\|\leq\delta.
\end{equation*}
\end{defi}
Before ending this section, we consider the relationship between the stability of solutions of
stochastic differential equations and that of stochastic reaction-diffusion equations.
Assume that $u=0$ is a trivial solution of the first equation of problem (\ref{2.2}); then
$u=0$ is also a trivial solution of the following equation:
\begin{equation}
du=f(u)dt+\sigma(u)dW_t.
\label{2.3}\end{equation}
Problem (\ref{2.3}) means
equation (\ref{2.3}) with initial data $u_0$ (independent of $x$). First, we introduce a definition for (\ref{2.3}).
\begin{defi}\label{d2.3} The trivial solution $0$ is called mean square stable for problem (\ref{2.3}) if for any
$\varepsilon>0$, there exists $\delta=\delta(\varepsilon)>0$ such that for any initial
data $u_0$,
\begin{equation*}
|u_0|\leq\delta\ \ \ {\rm implies}\ \ \ \mathbb{E}|u(t)|^2<\varepsilon, \ \forall \ t\geq0,
\end{equation*}
and exponentially mean square stable if there exist two positive constants $c_1$ and $c_2$ such that
\begin{equation*}
\mathbb{E}|u(t)|^2<c_1e^{-c_2t}|u_0|^2, \ \forall \ t\geq0,
\end{equation*}
and stochastically stable if for any
$\varepsilon_1>0$ and $\varepsilon_2>0$, there exists $\delta=\delta(\varepsilon_1,\varepsilon_2)>0$ such that
the solution $u$ satisfies
\begin{equation*}
\mathbb{P}\left\{\sup_{t>0}|u(t)|\leq\varepsilon_1\right\}\geq1-\varepsilon_2 \ \ \ {\rm for}\ \ \ |u_0|\leq\delta.
\end{equation*}
\end{defi}
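These definitions, and the contrast announced in the introduction, can be illustrated for problem (\ref{2.3}) with linear coefficients (a standard textbook example, not one of the theorems of this paper). For $du=au\,dt+bu\,dW_t$ the explicit solution has Lyapunov exponent $a-b^2/2$, so strong multiplicative noise makes almost every path decay even when $a>0$; for additive noise $du=au\,dt+c\,dW_t$ with $a<0$, the second moment settles at $c^2/(2|a|)\neq0$, so the trivial solution is not stabilized.

```python
import numpy as np

rng = np.random.default_rng(0)
u0, T, M = 1.0, 10.0, 2000

# Multiplicative noise: du = a u dt + b u dW has the explicit solution
# u(t) = u0 * exp((a - b^2/2) t + b W_t). With a = 0.5 the deterministic
# equation u' = a u grows, yet for b = 2 the Lyapunov exponent
# a - b^2/2 = -1.5 is negative, so almost every path decays.
a, b = 0.5, 2.0
WT = rng.normal(0.0, np.sqrt(T), M)                 # W_T ~ N(0, T)
uT = u0 * np.exp((a - b**2/2)*T + b*WT)
frac_decayed = np.mean(np.abs(uT) < np.abs(u0))

# Additive noise: du = a u dt + c dW with a = -1, c = 1 is an
# Ornstein-Uhlenbeck process; paths do not converge to 0, and the second
# moment settles near c**2 / (2|a|) = 0.5.
steps = 10000
dt = T/steps
u = np.full(M, u0)
for _ in range(steps):                              # Euler-Maruyama
    u += -u*dt + rng.normal(0.0, np.sqrt(dt), M)
m2 = np.mean(u**2)
print(frac_decayed, round(m2, 2))
```

Note that for the multiplicative case the trivial solution is pathwise stable although $2a+b^2>0$, so it is not mean square stable: the two notions defined above genuinely differ.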
In order to establish the
relationship between problems (\ref{2.2}) and (\ref{2.3}), we need the following lemma.
Let $\eta(r)=r^-$ denote the negative part of $r$ for $r\in\mathbb{R}$.
Set
\begin{equation*}
k(r)=\eta^2(r),
\end{equation*}
so that $k(r)=0$ for $r\geq0$ and $k(r)=r^2$ for $r<0$. For
$\epsilon>0$, let $k_\epsilon(r)$ be a $C^2$-regularization of $k(r)$ defined by
\begin{equation*}
k_\epsilon(r)=\left\{\begin{array}{lllll}
r^2-\displaystyle\frac{\epsilon^2}{6}, \ \qquad \ &r<-\epsilon,\\[1mm]
-\displaystyle\frac{r^3}{\epsilon}\left(\displaystyle\frac{r}{2\epsilon}+\frac{4}{3}\right),\ \ &-\epsilon\leq r<0,\\[1mm]
0, \ \ \ &r\geq0.
\end{array}\right.\end{equation*}
Then one can check that $k_\epsilon(r)$ has the following properties.
\begin{lem}\label{l2.1}{\rm\cite[Lemma 3.1]{LD2015}} The first two derivatives $k'_\epsilon,\,k''_\epsilon$
of $k_\epsilon$ are continuous and satisfy the conditions: $k'_\epsilon(r)=0$ for $r\geq0$;
$k'_\epsilon\leq0$ and $k''_\epsilon\geq0$ for any $r\in\mathbb{R}$. Moreover, as $\epsilon\rightarrow0$,
we have
\begin{equation*}
k_\epsilon(r)\rightarrow k(r) \ \ {\rm and} \ \ k'_\epsilon(r)\rightarrow-2\eta(r),
\end{equation*}
and the convergence is uniform for $r\in\mathbb{R}$.
\end{lem}
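The stated properties of $k_\epsilon$ can also be verified directly. The sketch below is a numerical check, with the derivatives computed branch by branch by hand: it confirms continuity of $k_\epsilon$, $k'_\epsilon$, $k''_\epsilon$ at the joins $r=-\epsilon$ and $r=0$, the sign conditions of the lemma, and the uniform convergence $k_\epsilon\to k$.

```python
import numpy as np

# C^2 regularisation k_eps of k(r) = (r^-)^2, transcribed from the text.
def k_eps(r, eps):
    r = np.asarray(r, dtype=float)
    return np.where(r < -eps, r**2 - eps**2/6,
           np.where(r < 0.0, -(r**3/eps)*(r/(2*eps) + 4.0/3.0), 0.0))

def k1(r, eps):   # first derivative, differentiated branch by branch
    r = np.asarray(r, dtype=float)
    return np.where(r < -eps, 2*r,
           np.where(r < 0.0, -2*r**3/eps**2 - 4*r**2/eps, 0.0))

def k2(r, eps):   # second derivative
    r = np.asarray(r, dtype=float)
    return np.where(r < -eps, 2.0,
           np.where(r < 0.0, -6*r**2/eps**2 - 8*r/eps, 0.0))

eps = 0.5
# k_eps and its first two derivatives are continuous across the joins.
for r0 in (-eps, 0.0):
    for g in (k_eps, k1, k2):
        assert abs(g(r0 - 1e-9, eps) - g(r0 + 1e-9, eps)) < 1e-6

# Sign conditions of the lemma: k' <= 0 and k'' >= 0 on all of R.
r = np.linspace(-3.0, 3.0, 10001)
assert np.all(k1(r, eps) <= 1e-12) and np.all(k2(r, eps) >= -1e-12)

# Uniform convergence k_eps -> k as eps -> 0 (here |k_eps - k| = O(eps^2)).
gap = np.max(np.abs(k_eps(r, 1e-3) - np.minimum(r, 0.0)**2))
assert gap < 1e-5
```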
Under the condition that
both problems (\ref{2.2}) and (\ref{2.3}) have a unique strong solution, we have the following result.
Here we focus on the stability of solutions and do not address their existence.
\begin{theo}\lbl{t2.1} Assume that $W_t$ is a one-dimensional Wiener process, $f$ and $\sigma$ satisfy the global Lipschitz condition,
and $f(0)=\sigma(0)=0$. Then $0$ is a stable (exponentially mean square stable or stochastically stable)
trivial solution of (\ref{2.2}) (with the $L^\infty$ norm in the spatial variable) if and only if
$0$ is a stable (exponentially mean square stable or stochastically stable) trivial solution of (\ref{2.3}).
\end{theo}
{\bf Proof.} Clearly, if $0$ is a stable (exponentially mean square stable or stochastically stable) trivial solution of (\ref{2.2}),
then $0$ is also a stable trivial solution of (\ref{2.3}), since we can choose initial data $u_0$ that is independent of the spatial variable.
Conversely, we will show that if $0$ is a stable (exponentially mean square stable or stochastically stable) trivial solution of (\ref{2.3}),
then $0$ is a stable trivial solution of (\ref{2.2}) as well.
Assume $0$ is a stable (exponentially mean square stable or stochastically stable) trivial solution of (\ref{2.3}),
then for any $\varepsilon>0$, there exists constant $\delta=\delta(\varepsilon)>0$ such that
\begin{eqnarray*}
\mathbb{E}|u_\pm(t)|^2<\varepsilon,
\end{eqnarray*}
where $u_\pm(t)$ is the solution of (\ref{2.3}) with initial data $\pm2\delta$.
Let $u(x,t)$ be the unique solution of (\ref{2.2}) with initial data $u_0(x)$ which satisfies $|u_0(x)|\leq\delta$. For any fixed $T>0$, we will prove that
\begin{eqnarray}
u_-(t)\leq u(x,t)\leq u_+(t), \ \ \forall \ (x,t)\in\mathbb{R}^d\times[0,T], \ a.s.
\lbl{2.4}\end{eqnarray}
If the above inequality holds, then we get the desired result.
In order to prove the inequality (\ref{2.4}), we let
$v=u(x,t)-u_-(t)$; then $v$ satisfies
\begin{eqnarray}\left\{\begin{array}{lll}
dv=(\Delta v+f(u)-f(u_-))dt+(\sigma(u)-\sigma(u_-))dW_t, \ \ \qquad t>0,&x\in \mathbb{R}^d,\\[1.5mm]
v(x,0)=u_0(x)+2\delta\geq\delta>0, \ \ \ &x\in \mathbb{R}^d.
\end{array}\right.\lbl{2.5}\end{eqnarray}
Note that $\nabla u$ only makes sense in $L^\infty(\mathbb{R}^d)$, and thus we cannot
take $\Phi_\varepsilon(v(t))=(1,k_\epsilon(v(t)))$, which is different from
the choices in \cite{C2011,LW2020}. We need to introduce a new test function.
For any $R>0$, let $B_R(0)$ denote the ball centered at $0$ with radius $R$. Let $\phi_1$
be the eigenfunction of the Laplacian operator on $B_R(0)$ corresponding to
the first eigenvalue $\lambda_1$, i.e.,
\begin{eqnarray*}\left\{\begin{array}{llll}
-\Delta \phi_1=\lambda_1 \phi_1, \ \ \ \ \ \ \ \ {\rm in} \ B_R(0),\\
\ \phi_1=0, \ \ \qquad\ \ \ \qquad {\rm on}\ \partial B_R(0).
\end{array}\right. \end{eqnarray*}
Let $\psi\in C^2(\mathbb{R}^d)$ satisfy
\begin{eqnarray*}
\psi(x)=\left\{\begin{array}{lll}
\phi_1(x), \ \ \ & x\in\bar B_R(0),\\
0,\ \ \ & x\in\mathbb{R}^d\setminus B_R(0).
\end{array}\right.\end{eqnarray*}
Define
\begin{eqnarray*}
\Phi_\varepsilon(v(t))=(\psi,k_\epsilon(v(t)))=\int_{\mathbb{R}^d}\psi(x) k_\epsilon(v(x,t))dx.
\end{eqnarray*}
By It\^{o}'s formula, we have
\begin{eqnarray*}
\Phi_\varepsilon(v(t))&=&\Phi_\varepsilon(v_0)+\int_0^t\int_{\mathbb{R}^d}\psi(x) k_\epsilon'(v(x,s))\Delta v(x,s)dxds\\
&&+\int_0^t\int_ {\mathbb{R}^d}\psi(x) k_\epsilon'(v(x,s))(f(u(x,s))-f(u_-(s)))dxds\\
&&+\int_0^t\int_ {\mathbb{R}^d}\psi(x) k_\epsilon'(v(x,s))(\sigma(u(x,s))-\sigma(u_-(s)))dxdW_s\\
&&+\frac{1}{2}\int_0^t\int_{\mathbb{R}^d}\psi(x)k_\epsilon''(v(x,s))(\sigma(u(x,s))-\sigma(u_-(s)))^2dxds.
\end{eqnarray*}
By using the facts $k_\epsilon''\geq0,\,\psi(x)\geq0$, we have
\begin{eqnarray*}
&&\int_{\mathbb{R}^d}\psi(x) k_\epsilon'(v(x,s))\Delta v(x,s)dx\\
&=&-\int_{\mathbb{R}^d}\psi(x) k_\epsilon''(v(x,s))|\nabla v(x,s)|^2dx- \int_{\mathbb{R}^d}k_\epsilon'(v(x,s))\nabla\psi(x)\cdot\nabla v(x,s)dx\\
&\leq&- \int_{\mathbb{R}^d}\nabla k_\epsilon(v(x,s))\cdot\nabla\psi(x)dx\\
&=&\int_{\mathbb{R}^d} k_\epsilon(v(x,s))\Delta\psi(x)dx\\
&=&-\lambda_1\int_{\mathbb{R}^d} k_\epsilon(v(x,s))\psi(x)dx\leq0.
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
\Phi_\varepsilon(v(t))
&\leq&\Phi_\varepsilon(v_0)+\int_0^t\int_{\mathbb{R}^d}\psi(x)k_\epsilon''(v(x,s))\left(\frac{1}{2}(\sigma(u(x,s))-\sigma(u_-(s)))^2\right)dxds\\
&&+\int_0^t\int_ {\mathbb{R}^d} \psi(x)k_\epsilon'(v(x,s))(f(u(x,s))-f(u_-(s)))dxds\\
&&+\int_0^t\int_ {\mathbb{R}^d}\psi(x) k_\epsilon'(v(x,s))(\sigma(u(x,s))-\sigma(u_-(s)))dxdW_s.
\end{eqnarray*}
Taking expectation in the above inequality (the expectation of the stochastic integral vanishes) and using Lemma \ref{l2.1}, we get
\begin{eqnarray*}
\mathbb{E}\Phi_\varepsilon(v(t))
&\leq&\mathbb{E}\Phi_\varepsilon(v_0)+\mathbb{E}\int_0^t\int_{\mathbb{R}^d}\psi(x) k_\epsilon''(v(x,s))\\
&&\times\left(\frac{1}{2}(\sigma(u(x,s))-\sigma(u_-(s)))^2\right)dxds\\
&&+\mathbb{E}\int_0^t\int_{\mathbb{R}^d}\psi(x) k_\epsilon'(v(x,s))(f(u(x,s))-f(u_-(s)))dxds\\
&\leq&\mathbb{E}\Phi_\varepsilon(v_0)+\frac{L_\sigma^2}{2}\mathbb{E}\int_0^t\int_{\mathbb{R}^d}\psi(x) k_\epsilon''(v(x,s))
v(x,s)^{2}dxds\\
&&+L_f\mathbb{E}\int_0^t\int_{\mathbb{R}^d}\psi(x)|k_\epsilon'(v(x,s))||v(x,s)|dxds,
\end{eqnarray*}
where $L_f$ and $L_\sigma$ denote the Lipschitz constants of $f$ and $\sigma$.
Note that $\lim\limits_{\epsilon\rightarrow0}\mathbb{E}\Phi_\varepsilon(v(t))
=\mathbb{E}(\eta(v(t))^2,\psi)$. By
taking limits term by term as $\epsilon\rightarrow0$ and using Lemma \ref{l2.1}, we have
\begin{eqnarray*}
\mathbb{E}(\eta(v(t))^2,\psi)\leq(L^2_\sigma+2L_f)
\int_0^t\mathbb{E}(\eta(v(s))^2,\psi) ds,
\end{eqnarray*}
which, by means of Gronwall's inequality, implies that
\begin{eqnarray*}
\mathbb{E}(\eta(v(t))^2,\psi)=0, \ \ \ \ \forall \ t\in[0,T].
\end{eqnarray*}
Note that the above identity holds for every $R>0$.
Since $\psi\geq0$, it follows that $\eta(v(t))=v^-(x,t)=0$ a.s. for a.e. $x\in \mathbb{R}^d$ and for any $t\in[0,T]$.
Similarly, if we let $v=u_+(t)-u(x,t)$, then we can prove that $v^-=0$ a.s. for a.e. $x\in \mathbb{R}^d$ and for any $t\in[0,T]$.
That is to say, (\ref{2.4}) holds. $\Box$
\begin{remark}\lbl{r2.1}
It is remarked that in Theorem \ref{t2.1} the norm of the solution of problem
(\ref{2.2}) is the supremum norm over $\mathbb{R}^d$. The advantage of this norm is that
the estimates we obtain hold pointwise. Under this norm, Theorem \ref{t2.1} also holds
if equation (\ref{2.2}) is
replaced by equation (\ref{2.6}).
\end{remark}
Now, we consider a special case on a bounded domain $D\subset\mathbb{R}^d$ ($d\geq1$)
\begin{eqnarray}\left\{\begin{array}{lll}
du=(\Delta u+f(u))dt+udW_t, \qquad t>0,&x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in D,\\[1.5mm]
u(x,t)=0, \qquad \qquad \qquad \qquad \qquad t>0, &x\in\partial D.
\end{array}\right.\lbl{2.6}\end{eqnarray}
We will establish the relationship
between (\ref{2.6}) and the following SDE
\begin{eqnarray}\left\{\begin{array}{lll}
dX_t=f(X_t)dt+X_tdW_t, \ \ \qquad t>0,\\[1.5mm]
X(0)=u_0.
\end{array}\right.\lbl{2.7}\end{eqnarray}
In order to do that, we
consider the eigenvalue problem for the elliptic equation
\begin{eqnarray*}\left\{\begin{array}{llll}
-\Delta \phi=\lambda \phi, \ \ \ \ \ \ \ \ {\rm in} \ D,\\
\phi=0, \ \ \qquad\ \qquad {\rm on}\ \partial D.
\end{array}\right. \end{eqnarray*}
Then all the eigenvalues are strictly positive and form an increasing sequence, and
the eigenfunction $\phi_1$ corresponding to the smallest eigenvalue
$\lambda_1$ does not change sign in the domain $D$, as shown in \cite{GTbook}.
Therefore, we normalize it in such a way that
\begin{eqnarray*}
\phi_1(x)\geq0,\ \ \ \ \int_ D \phi_1(x)dx=1.
\end{eqnarray*}
\begin{theo}\lbl{t2.2} Assume that
\begin{eqnarray*}
(f(u),\phi_1)\leq f((u,\phi_1)).
\end{eqnarray*}
If $0$ is a stable (exponentially mean square stable or stochastically stable)
trivial solution of (\ref{2.7}), then
$0$ is also a stable (exponentially mean square stable or stochastically stable) trivial solution of (\ref{2.6}),
where we take the spatial norm as $\|u\|_{\phi_1}=(u,\phi_1)$.
\end{theo}
{\bf Proof.} It follows from \cite[Theorem 2.1]{C2011} and \cite{LD2015} that the solutions
of (\ref{2.6}) remain non-negative almost surely, i.e.,
$u(x,t)\geq 0$ a.s. for almost every $x\in D$ and for all $ t\in[0,T]$.
Moreover, the solutions exist globally.
Let
\begin{eqnarray*}
v(t)=(u,\phi_1)=\int_Du(x,t)\phi_1(x)dx.
\end{eqnarray*}
Then $v$ satisfies
\begin{eqnarray}\left\{\begin{array}{lll}
dv(t)\leq[-\lambda_1v(t)+f(v(t))]dt+v(t)dW_t, \ \ \qquad t>0,\\[1.5mm]
v(0)=v_0=(u_0,\phi_1).
\end{array}\right.\lbl{2.8}\end{eqnarray}
It is easy to prove that $v(t)$ is a sub-solution of the
following problem
\begin{eqnarray}\left\{\begin{array}{lll}
dY_t=[-\lambda_1Y_t+f(Y_t)]dt+Y_tdW_t, \ \ \qquad t>0,\\[1.5mm]
Y(0)=v_0.
\end{array}\right.\lbl{2.9}\end{eqnarray}
Since the solutions of (\ref{2.9}) remain non-negative, we obtain that
$v(t)$ is also a sub-solution of (\ref{2.8}).
Set $Z_t=Y_t-v(t)$. Similar to the proof of Theorem \ref{t2.1}, we can prove
$Z_t\geq0$ almost surely. Indeed, one can first prove that $Y_t\geq0$ almost surely
by using the same method as in the proof of Theorem \ref{t2.1}. Then it follows from the
definition of $v$ and the non-negativity of $u$ that $v(t)=(u,\phi_1)\geq0$ almost surely. Lastly,
noting that
\begin{eqnarray*}
-(Y_t^r-v(t)^r)=-r\xi^{r-1}Z_t\geq0, \ \ {\rm when}\ Z_t\leq0,
\end{eqnarray*}
where $\xi$ lies between $v(t)$ and $Y_t$ by the mean value theorem,
one can use the same method as in the proof of Theorem \ref{t2.1} to get
$Z_t\geq0$ almost surely. Similarly, one can prove $Y_t\leq X_t$ almost
surely. Therefore, if $0$ is a stable (exponentially mean square stable or stochastically stable)
trivial solution of (\ref{2.7}), then
$0$ is also a stable trivial solution of (\ref{2.6}).
The proof is complete. $\Box$
{\bf Example} Let $f(u)=au-ku^r$, where $a\in\mathbb{R}$, $k\geq0$ and $r\geq1$.
Note that $-u^r\geq0$ for $u\leq0$ when $r$ is odd. It follows from \cite[Theorem 2.1]{C2011} and \cite{LD2015} that the solutions
of (\ref{2.6}) with $f(u)=au-ku^r$ remain non-negative almost surely.
Since $\phi_1\geq0$ and $\int_D\phi_1(x)dx=1$, the H\"{o}lder inequality gives
\begin{eqnarray*}
(u,\phi_1)^r=\left(\int_Du(x,t)\phi_1(x)dx\right)^r\leq\int_Du^r(x,t)\phi_1(x)dx=(u^r,\phi_1).
\end{eqnarray*}
Therefore, all the assumptions of Theorem \ref{t2.2} are satisfied.
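The inequality $(u,\phi_1)^r\leq(u^r,\phi_1)$ used above is a weighted power-mean (Jensen-type) inequality, valid because $\phi_1\geq0$ integrates to one. A discrete numerical check (illustrative only; the grid and samples are arbitrary):

```python
import numpy as np

# Discrete check of (u, phi1)^r <= (u^r, phi1) for nonnegative u and r >= 1,
# where phi1 >= 0 carries total mass 1; this is Jensen's inequality for the
# convex map t -> t^r applied to the phi1-weighted average of u.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 1001)
phi1 = np.sin(np.pi * x)        # first Dirichlet eigenfunction on (0, 1)
w = phi1 / phi1.sum()           # discrete weights with sum(w) = 1
for r in (1.0, 1.5, 2.0, 3.0):
    for _ in range(100):
        u = rng.uniform(0.0, 2.0, size=x.size)   # nonnegative sample
        assert (w @ u) ** r <= w @ (u ** r) + 1e-9
```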
\begin{remark}\lbl{r2.2} In particular, if $f(u)=u$, then the stability of the trivial solution
$0$ for (\ref{2.6}) and the following problem
\begin{eqnarray*}\left\{\begin{array}{lll}
dY_t=(1-\lambda_1)Y_tdt+Y_tdW_t, \ \ \qquad t>0,\\[1.5mm]
Y(0)=(u_0,\phi_1),
\end{array}\right. \end{eqnarray*}
are equivalent. The above problem differs from (\ref{2.7}) because of the Laplacian operator, which contributes the term $-\lambda_1Y_t$.
\end{remark}
\section{Bounded domain}
\setcounter{equation}{0}
In this section, we consider the stability results on a bounded domain $D\subset\mathbb{R}^d$.
We focus on conditions that make the trivial solution stable. Meanwhile, we are interested in the differences between
stochastic partial differential equations and deterministic partial differential equations, and we will consider
the impact of different types of noise. Throughout this section, $\|\cdot\|$ denotes
the norm of $L^2(D)$.
We first consider the following initial-boundary value problem
\begin{eqnarray}\left\{\begin{array}{lll}
du=\mu\Delta udt+\sigma dW_t,\ \qquad t>0,&x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in D,\\[1.5mm]
u(x,t)=0, \quad\qquad \qquad \qquad t>0, &x\in\partial D,
\end{array}\right.\lbl{0.1}\end{eqnarray}
where $\mu$ and $\sigma$ are positive constants. Then we obtain
\begin{theo}\lbl{t0.1} If $\sigma^2|D|<2\mu\lambda_1\mathbb{E}\|u_0\|^2$, then the
trivial solution $0$ is mean square stable.
\end{theo}
{\bf Proof.} We take the Lyapunov function as $\|u\|^2$. By using It\^{o}'s formula, we have
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^2&=&2\mu\mathbb{E}\int_ D u\Delta u(x,t)dx+\sigma^2|D|\nonumber\\
&=&-2\mu\mathbb{E}\int_ D |\nabla u(x,t)|^2dx+\sigma^2|D|\\
&\leq&-2\mu\lambda_1\mathbb{E}\|u(\cdot,t)\|^2+\sigma^2|D|,
\end{eqnarray*}
where we used the Poincar\'{e} inequality.
Solving the above inequality, we have
\begin{eqnarray*}
\mathbb{E}\|u(\cdot,t)\|^2<\left[\mathbb{E}\|u_0\|^2-\frac{\sigma^2|D|}{2\lambda_1\mu}\right]e^{-2\lambda_1\mu t}
+\frac{\sigma^2|D|}{2\lambda_1\mu}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
\left[\mathbb{E}\|u_0\|^2-\frac{\sigma^2|D|}{2\lambda_1\mu}\right]e^{-2\lambda_1\mu t}
+\frac{\sigma^2|D|}{2\lambda_1\mu}\leq \mathbb{E}\|u_0\|^2
\end{eqnarray*}
is equivalent to
\begin{eqnarray*}
\left[\mathbb{E}\|u_0\|^2-\frac{\sigma^2|D|}{2\lambda_1\mu}\right]\left[e^{-2\lambda_1\mu t}-1\right]
\leq 0.
\end{eqnarray*}
Therefore, if $\mathbb{E}\|u_0\|^2-\frac{\sigma^2|D|}{2\lambda_1\mu}>0$, that is,
$\sigma^2|D|<2\mu\lambda_1\mathbb{E}\|u_0\|^2$, we have
\begin{eqnarray*}
\mathbb{E}\|u(\cdot,t)\|^2\leq\mathbb{E}\|u_0\|^2.
\end{eqnarray*}
The proof is complete. $\Box$
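The proof rests on the comparison ODE $y'=-2\mu\lambda_1y+\sigma^2|D|$ for $y(t)=\mathbb{E}\|u(\cdot,t)\|^2$, whose explicit solution gives the displayed bound. A quick numerical sketch with illustrative constants (not tied to any particular domain):

```python
import numpy as np

# Comparison ODE from the proof: y' = -2*mu*lam1*y + s2D, whose solution is
#   y(t) = (y0 - s2D/(2*lam1*mu)) * exp(-2*lam1*mu*t) + s2D/(2*lam1*mu).
# We check the closed form against a crude explicit Euler integration.
mu, lam1, s2D, y0 = 0.7, np.pi**2, 1.3, 2.0   # illustrative constants
T, n = 2.0, 200_000
dt = T / n
y = y0
for _ in range(n):                             # explicit Euler steps
    y += dt * (-2 * mu * lam1 * y + s2D)
ystar = s2D / (2 * lam1 * mu)
closed = (y0 - ystar) * np.exp(-2 * lam1 * mu * T) + ystar
assert abs(y - closed) < 1e-3
# The theorem's condition: if y0 > s2D/(2*lam1*mu), then y(t) <= y0 for t >= 0
assert y0 > ystar and closed <= y0
```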
\begin{remark}\lbl{r3.1} It is easy to see that if $\sigma=0$, then the
trivial solution $0$ is stable; if $\mu=0$, then the trivial solution $0$
is unstable; and when $\sigma^2>0$, $0$ is a stable trivial solution only under
additional assumptions. In this sense, one can say that additive noise has a ``bad'' effect on the stability
of the trivial solution.
By using the Chebyshev inequality, one can easily prove that the trivial solution $0$ is stochastically
stable without any further assumption; see the next theorem for the proof.
\end{remark}
We study the impact of additive noise.
Consider the following equation
\begin{eqnarray}\left\{\begin{array}{lll}
du=(\Delta_p u+f(u))dt+\sigma dW_t, \ \ \qquad t>0,&x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in D,\\[1.5mm]
u(x,t)=0, \quad \qquad \qquad \qquad \qquad \qquad t>0, &x\in\partial D,
\end{array}\right.\lbl{3.1}\end{eqnarray}
where $\Delta_pu=\nabla\cdot(|\nabla u|^{p-2}\nabla u)$ is the $p$-Laplacian.
Because we only consider the
stability of solutions, throughout this paper we will assume that each problem under consideration
admits a unique global solution. Let $C_\infty$ be the Sobolev embedding constant satisfying
\begin{eqnarray}
\|u\|_{L^\infty(D)}\leq C_\infty\|u\|_{W^{1,p}(D)},\ \ \ p>d.
\lbl{3.2}\end{eqnarray}
Note that, by the zero Dirichlet boundary condition, we may use the equivalent norm
$\|u\|_{W^{1,p}(D)}=\|\nabla u\|_{L^{p}(D)}$ for the solution $u$ of (\ref{3.1}).
\begin{theo}\lbl{t3.1} Assume that the nonlinear term satisfies
\begin{eqnarray}
uf(u)\leq au^2+bu^{2m},\ \ \ m\geq1,
\lbl{3.3}\end{eqnarray}
where $a,b\in\mathbb{R}$. Assume further that $p>\max\{2m,d\}$ and that
\begin{eqnarray}
a+\frac{2-\gamma}{2}
\left(\frac{\gamma|D||b|C_\infty^p}{2}\right)^{\frac{\gamma}{2-\gamma}}+\frac{\sigma^2|D|}{2\mathbb{E}\|u_0\|^2}<0,
\lbl{3.4}\end{eqnarray}
where $\gamma\in(0,2)$ satisfies
\begin{eqnarray*}
(2m-2+\gamma)\cdot\frac{2}{\gamma}=p.
\end{eqnarray*}
Then the trivial solution $0$ of (\ref{3.1}) is mean square stable.
\end{theo}
{\bf Proof.} We remark that when $m=1$, Theorem \ref{t3.1} becomes simpler; see
\cite{LM1998} for similar results.
We pick the Lyapunov function $V(u)=\|u\|^2$.
By It\^{o}'s formula and taking expectation, we have
\begin{eqnarray}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^2&=&2\mathbb{E}\int_ D u(\Delta_p u(x,t)+f(u))dx+\sigma^2|D|\nonumber\\
&\leq&-2\mathbb{E}\int_ D |\nabla u(x,t)|^pdx+2\mathbb{E}\int_D(au^2+bu^{2m})dx+\sigma^2|D|.
\lbl{3.5}\end{eqnarray}
By the Sobolev embedding inequality (\ref{3.2}), we have
\begin{eqnarray}
\|u\|_{L^{2m}}^{2m}=\int_ D |u|^{2m}(x,t)dx&\leq&\|u\|_{L^\infty}^{2m-2}\int_ D u^2(x,t)dx\nonumber\\
&=&\|u\|_{L^\infty}^{2m-2}\|u\|_{L^2}^{2}\nonumber\\
&\leq&|D|^{\frac{\gamma}{2}}\|u\|_{L^\infty}^{2m-2+\gamma}\|u\|_{L^2}^{2-\gamma}\nonumber\\
&\leq&C_\infty^{2m-2+\gamma}|D|^{\frac{\gamma}{2}}\|u\|_{W^{1,p}}^{2m-2+\gamma}\|u\|_{L^2}^{2-\gamma}\nonumber\\
&\leq&\frac{2-\gamma}{2}
(C_\infty)^{\frac{2(2m-2+\gamma)}{2-\gamma}}
\left(\frac{\gamma|b||D|}{2}\right)^{\frac{\gamma}{2-\gamma}}\|u\|_{L^2}^{2}\nonumber\\
&&
+\frac{1}{|b|}\|u\|_{W^{1,p}}^{(2m-2+\gamma)\cdot\frac{2}{\gamma}},
\lbl{3.6}\end{eqnarray}
where the last step uses Young's inequality.
Noting that $p>\max\{2m,d\}$,
there exists a constant $\gamma\in(0,2)$ such that
\begin{eqnarray*}
(2m-2+\gamma)\cdot\frac{2}{\gamma}=p.
\end{eqnarray*}
Substituting (\ref{3.6}) into (\ref{3.5}), we have
\begin{eqnarray}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^2\leq 2\left(a+\hat C\right)\mathbb{E}\|u\|_{L^2}^{2}+\sigma^2|D|,
\lbl{3.7}\end{eqnarray}
where
\begin{eqnarray*}
\hat C=\frac{2-\gamma}{2}
\left(\frac{\gamma|D||b|C_\infty^p}{2}\right)^{\frac{\gamma}{2-\gamma}}.
\end{eqnarray*}
Solving the differential inequality (\ref{3.7}) gives
\begin{eqnarray}
\mathbb{E}\|u(\cdot,t)\|^2\leq \left(\mathbb{E}\|u_0\|^2+\frac{\sigma^2|D|}{2(a+\hat C)}\right)
e^{2(a+\hat C)t}-\frac{\sigma^2|D|}{2(a+\hat C)}.
\lbl{3.8}\end{eqnarray}
Note that $a+\hat C<0$ implies that $e^{2(a+\hat C)t}<1$ for all $t>0$. Furthermore,
the assumption
\begin{eqnarray*}
\mathbb{E}\|u_0\|^2+\frac{\sigma^2|D|}{2(a+\hat C)}>0
\end{eqnarray*}
yields that
\begin{eqnarray*}
\mathbb{E}\|u(\cdot,t)\|^2\leq \mathbb{E} \|u_0\|^2,
\end{eqnarray*}
which completes the proof.
$\Box$
{\bf Example} Consider
\begin{eqnarray*}\left\{\begin{array}{lll}
du=(\Delta_4 u-u+\frac{1}{C_\infty^4}u^2)dt+\sigma dW_t, \ \quad t>0,&x\in (0,1),\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in (0,1),\\[1.5mm]
u(x,t)=0, \qquad \qquad \qquad \qquad \qquad \qquad t>0, &x\in\{0,1\}.
\end{array}\right.\end{eqnarray*}
It is easy to check that $\gamma=1$ satisfies $(2m-2+\gamma)\cdot\frac{2}{\gamma}=p$, where
$m=3/2$ and $p=4$. If the initial data satisfies
\begin{eqnarray*}
\frac{\sigma^2}{2\mathbb{E}\|u_0\|^2}<\frac{3}{4},
\end{eqnarray*}
then Theorem \ref{t3.1} shows that the trivial solution $0$ of the above problem is mean square stable.
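The constants in this example can be checked by direct arithmetic: with $m=3/2$, $p=4$, $\gamma=1$, $|D|=1$, $a=-1$ and $b=C_\infty^{-4}$ one gets $\hat C=\frac{1}{4}$, so condition (\ref{3.4}) reduces to $\sigma^2/(2\mathbb{E}\|u_0\|^2)<3/4$. A sketch of this computation (treating $C_\infty$ as a free positive parameter):

```python
# Arithmetic check of the example: m = 3/2, p = 4, gamma = 1, |D| = 1,
# a = -1, b = 1/C_inf^4 (C_inf is the Sobolev embedding constant, left free).
m, p, gamma, D, a = 1.5, 4, 1.0, 1.0, -1.0
assert (2 * m - 2 + gamma) * 2 / gamma == p       # gamma = 1 is admissible

for C_inf in (0.5, 1.0, 3.7):                     # arbitrary positive values
    b = 1.0 / C_inf**4
    # hat C = (2-gamma)/2 * (gamma*|D|*|b|*C_inf^p / 2)^(gamma/(2-gamma))
    hatC = (2 - gamma) / 2 * (gamma * D * abs(b) * C_inf**p / 2) ** (gamma / (2 - gamma))
    assert abs(hatC - 0.25) < 1e-12               # hat C = 1/4, independent of C_inf
# Condition (3.4): a + hatC + sigma^2*|D|/(2*E||u0||^2) < 0, i.e. the last
# ratio must be below 1 - 1/4 = 3/4; it holds at 0.74 and fails at 0.76.
assert a + 0.25 + 0.74 < 0 and not (a + 0.25 + 0.76 < 0)
```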
\begin{remark}\lbl{r3.2} (1) In Theorem \ref{t3.1}, we assume that the constant $b$ satisfies (\ref{3.4}).
Note that the constant $C_\infty$ depends on the domain $D$, the dimension $d$ and the constant
$p$, and thus it is hard to give a concrete constant in an example. The reason is that
we used the embedding inequality (\ref{3.2}). On the other hand, we can use the following embedding
in place of (\ref{3.2}):
\begin{eqnarray*}
W^{1,p}(D)\hookrightarrow W^{1,2m}(D)\hookrightarrow L^{2m}(D), \ \ p>2m.
\end{eqnarray*}
Let $C_{2m,2m}$ be the Sobolev embedding constant, i.e., $\|u\|_{L^{2m}(D)}\leq C_{2m,2m}\|u\|_{W^{1,2m}(D)}$.
Then under the assumptions that $b\leq \frac{2}{C_{2m,2m}}$ and
\begin{eqnarray*}
2a\mathbb{E}\|u_0\|^2+\sigma^2|D|<0,
\end{eqnarray*}
the trivial solution $0$ of (\ref{3.1}) is mean square stable.
(2) We now explain why we did not obtain stochastic stability results.
As in the case of stochastic differential equations, we try to use a Lyapunov function $V(u)=\|u\|^{2r}$ with $0<r<1$ to prove
stochastic stability.
For additive noise, we cannot prove that $\|u(\cdot,t)\|^2>0$ for all $t>0$.
In order to apply the It\^{o} formula, we therefore consider the Lyapunov functional
$V(u)=(\|u\|^2+\kappa)^{r}$ with $0<r<1$ and $0<\kappa\ll1$.
This leads to the expression
\begin{eqnarray}
\frac{d}{dt}\mathbb{E}(\|u\|^2+\kappa)^{r}&=&2r\mathbb{E}\left[(\|u\|^2+\kappa)^{r-1}\int_ D u(\Delta_p u(x,t)+f(u))dx\right]\nonumber\\
&&+r\sigma^2|D|\mathbb{E}\left[(\|u\|^2+\kappa)^{r-1}\right]\nonumber\\
&&
+2\sigma^2r(r-1)\mathbb{E}(\|u\|^2+\kappa)^{r-2}\left(\int_Dudx\right)^2\nonumber\\
&\leq&\mathbb{E}\left[r(\|u\|^2+\kappa)^{r-1}(a+\hat C)\|u(\cdot,t)\|^{2}\right.\nonumber\\
&&\left.+r\sigma^2(\|u\|^2+\kappa)^{r-1}\left(|D|+2(r-1)\frac{\left(\int_Dudx\right)^2}{\|u\|^2+\kappa}\right)\right].
\lbl{3.9}\end{eqnarray}
Due to the difference between $\left(\int_Dudx\right)^2$ and $\left(\int_D|u|dx\right)^2$, the last term gives no
help in controlling the term $|D|$. Note that
\begin{eqnarray*}
\int_Dudx=0
\end{eqnarray*}
may happen, so we cannot use this term. Even if the term
$\left(\int_Dudx\right)^2$ is replaced by $\left(\int_D|u|dx\right)^2$, we
cannot get the desired result, for the following reason.
The H\"{o}lder inequality implies that
\begin{eqnarray*}
\int_D|u|dx\leq|D|^{\frac{1}{2}}\left(\int_D|u|^2dx\right)^{\frac{1}{2}}.
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
\frac{\|u\|_{L^1}^2}{\|u\|_{L^2}^2}\leq |D|.
\end{eqnarray*}
Hence this inequality is of no use in (\ref{3.9}). The aim of the
above discussion is to show that the last two terms on the right-hand side of
(\ref{3.9}) are of the same order, in contrast to the first term, for
SPDEs.
For SDEs, however, the situation is different. In this case, taking $|D|=1$, we have
\begin{eqnarray*}
|D|+2(r-1)\frac{\left(\int_Dudx\right)^2}{\|u\|^2+\kappa}
=1+\frac{2(r-1)}{1+\kappa}.
\end{eqnarray*}
Taking $0<r<1/2$ such that $2r+\kappa<1$, we get
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}(\|u\|^2+\kappa)^{r}\leq0.
\end{eqnarray*}
Letting $\kappa\to0$ and using the Chebyshev inequality, we obtain stochastic stability.
In all, we find that there is a significant difference between SPDEs and SDEs in the stability theory.
\end{remark}
We remark that the noise can easily be generalized to a cylindrical Wiener process.
We first generalize the classical results for deterministic reaction-diffusion equations \cite[Theorem 4.2.1, p.~166]{YLbook}
to the following equation
\begin{eqnarray}\left\{\begin{array}{lll}
du(x,t)=(\Delta u(x,t)+f(x,t,u))dt+\sigma udW_t, \ \qquad t>0,&x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in D,\\[1.5mm]
u(x,t)=0, \qquad\qquad\qquad \qquad \qquad \qquad \qquad \ \ \qquad t>0, &x\in\partial D.
\end{array}\right.\lbl{3.10}\end{eqnarray}
\begin{theo} \lbl{t3.2}
Assume that $f(x,t,0)=0$ and $f\in C^1(D\times[0,\infty)\times(-\infty,\infty))$.
(i) If there exists a constant $\alpha>0$ such that for all $(x,t)\in D\times[0,\infty)$, $\eta\in\mathbb{R}$,
we have
\begin{eqnarray*}
f(x,t,\eta)\leq(\lambda_1-\alpha)\eta,
\end{eqnarray*}
then for initial data $u_0$ satisfying $0\leq u_0(x)\leq\rho \phi_1(x)$ with $\rho>0$, problem (\ref{3.10}) admits
a unique positive solution $u(x,t)$ and the following estimate holds almost surely
\begin{eqnarray*}
0\leq u(x,t)\leq\rho e^{-(\alpha+\frac{\sigma^2}{2})t+\sigma W_t}\phi_1(x),\ \ (x,t)\in D\times[0,\infty).
\end{eqnarray*}
Consequently,
\begin{eqnarray}
\mathbb{E}u(x,t) \leq \rho e^{-\alpha t}\phi_1(x).
\lbl{3.11}\end{eqnarray}
If in addition the initial data $u_0$ is a deterministic function, then we have
\begin{eqnarray}
\mathbb{P}\left\{\int_Du(x,t)dx\leq\rho e^{-(\alpha+\frac{\sigma^2}{2})t}\right\}\geq\frac{1}{2}.
\lbl{3.12}\end{eqnarray}
(ii) If there exists a constant $\alpha>0$ such that for all $(x,t)\in D\times[0,\infty)$, $\eta\geq0$,
we have
\begin{eqnarray*}
f(x,t,\eta)\geq(\lambda_1+\alpha)\eta,
\end{eqnarray*}
then for every $\delta>0$, when $u_0(x)\geq\delta \phi_1(x)$, problem (\ref{3.10}) admits
a unique positive solution $u(x,t)$, which either exists globally or blows up in finite time. On its lifespan, the following estimate holds almost surely
\begin{eqnarray*}
u(x,t)\geq\delta e^{(\alpha -\frac{\sigma^2}{2})t+\sigma W_t}\phi_1(x),\ \ (x,t)\in D\times[0,\infty).
\end{eqnarray*}
Consequently, $\mathbb{E}\|u(t)\|^2\geq \delta e^{\alpha t}\mathbb{E}\|u_0\|^2$.
\end{theo}
{\bf Proof.} We first transform the stochastic reaction-diffusion equation into a random
reaction-diffusion equation; then, by using the comparison principle, the desired results are obtained.
More precisely, let
$v(x,t)=e^{-\sigma W_t}u(x,t)$; then $v(x,t)$ satisfies
\begin{eqnarray}\left\{\begin{array}{llll}
\frac{\partial}{\partial t}v(x,t)=\Delta v(x,t)-\frac{\sigma^2}{2}v(x,t)
+ e^{-\sigma W_t}f(x,t,e^{\sigma W_t}v(x,t)), \ \quad & t>0,\ x\in D,\\[1.5mm]
v(x,0)=u_0(x), \ \ \ &\qquad \quad x\in D,\\[1.5mm]
v(x,t)=0, & t>0, \ x\in\partial D.
\end{array}\right.\lbl{3.13}\end{eqnarray}
By using the assumptions, we get
\begin{eqnarray*}
\frac{\partial}{\partial t}v(x,t)\leq\Delta v(x,t)-\frac{\sigma^2}{2}v(x,t)
+ (\lambda_1-\alpha)v(x,t).
\end{eqnarray*}
It is easy to check that $\bar v=\rho e^{-(\alpha+\frac{\sigma^2}{2}) t}\phi_1(x)$ is an upper solution of (\ref{3.13}) and
$0$ is a lower solution of (\ref{3.13}),
which implies that for $(x,t)\in D\times[0,\infty)$,
\begin{eqnarray*}
0\leq v(x,t)\leq \rho e^{-(\alpha+\frac{\sigma^2}{2}) t}\phi_1(x)\Longleftrightarrow
0\leq u(x,t)\leq\rho e^{-(\alpha+\frac{\sigma^2}{2})t+\sigma W_t}\phi_1(x), \ \ a.s.,
\end{eqnarray*}
which in turn implies that
\begin{eqnarray*}
0\leq\int_Du(x,t)dx\leq\rho e^{-(\alpha+\frac{\sigma^2}{2})t+\sigma W_t}, \ \ a.s..
\end{eqnarray*}
By using $\mathbb{E}[e^{\sigma W_t}]=e^{\frac{\sigma^2}{2}t}$, we obtain (\ref{3.11}):
\begin{eqnarray*}
\mathbb{E}u(x,t) \leq \rho e^{-(\alpha+\frac{\sigma^2}{2})t}e^{\frac{\sigma^2}{2}t}\phi_1(x)=\rho e^{-\alpha t}\phi_1(x).
\end{eqnarray*}
Note that
\begin{eqnarray*}
\left\{\int_Du(x,t)dx\leq\rho e^{-(\alpha+\frac{\sigma^2}{2})t}\right\}
\supset\left\{e^{\sigma W_t}\leq1\right\}
=\{W_t\leq0\},
\end{eqnarray*}
thus we have
\begin{eqnarray*}
\mathbb{P}\left\{\int_Du(x,t)dx\leq\rho e^{-(\alpha+\frac{\sigma^2}{2})t}\right\}
\geq\mathbb{P}\{W_t\leq0\} =\frac{1}{2},
\end{eqnarray*}
which proves (\ref{3.12}).
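The two probabilistic facts used above, $\mathbb{E}[e^{\sigma W_t}]=e^{\sigma^2t/2}$ and $\mathbb{P}\{W_t\leq0\}=\tfrac12$, admit a quick Monte Carlo sanity check (the constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, t, n = 0.8, 1.5, 400_000
W = rng.normal(0.0, np.sqrt(t), size=n)        # samples of W_t ~ N(0, t)
# E[exp(sigma * W_t)] = exp(sigma^2 * t / 2): mean of a lognormal variable
mc = np.mean(np.exp(sigma * W))
exact = np.exp(sigma**2 * t / 2)
assert abs(mc - exact) / exact < 0.05
# P{W_t <= 0} = 1/2 by symmetry of the centered Gaussian law
assert abs(np.mean(W <= 0.0) - 0.5) < 0.01
```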
Next, we prove (ii). Note that
\begin{eqnarray*}
\frac{\partial}{\partial t}v(x,t)\geq\Delta v(x,t)-\frac{\sigma^2}{2}v(x,t)
+ (\lambda_1+\alpha)v(x,t).
\end{eqnarray*}
One can check that $\underline{v}=\delta e^{(\alpha-\frac{\sigma^2}{2})t}\phi_1(x)$ is a lower solution of
(\ref{3.13}); thus we have the desired inequality. The proof is complete. $\Box$
\begin{remark}\lbl{r3.3} Following Theorem \ref{t3.2}, it is easy to see that,
in the mean square sense, the solution of (\ref{3.10}) keeps the same properties as in
the deterministic case, in contrast to the additive noise case; see Theorems \ref{t0.1} and \ref{t3.1}.
Of course, a big difference between the stochastic and deterministic
cases is that there exists an event of positive probability on which
the solution of the stochastic equation decays exponentially. In other words,
in (\ref{3.12}), if $-\frac{\sigma^2}{2}<\alpha<0$, then the solution $u$ of (\ref{3.10}) satisfies
$\|u(t)\|^2\leq \|u_0\|^2e^{\left(\alpha-\lambda_1-\frac{\sigma^2}{2}\right)t}$ with probability at least $\frac{1}{2}$.
From this point of view, one may say that the noise can stabilize the solutions.
The method we used in Theorem \ref{t3.2} is the comparison principle, which is different from the
Lyapunov functional method. The inequality (\ref{3.11}) holds pointwise, which is
different from the earlier results. Moreover, the index $\alpha-\lambda_1$ is different from
that obtained by the Lyapunov method; see the next theorem.
Part (ii) of Theorem \ref{t3.2} gives an instability condition for the
trivial solution $0$, which is new in this field.
Compared with stochastic ordinary differential equations, the Laplacian operator in
stochastic reaction-diffusion equations helps, through $\lambda_1$, with the stability of the trivial
solution. Indeed, the reason is the Poincar\'{e} inequality.
\end{remark}
Theorem \ref{t3.2} does not reflect the impact of the multiplicative noise. The next result addresses this.
\begin{theo}\lbl{t3.3} Assume that $f(x,t,0)=0$ and there exists a constant $K>0$ such that for all $(x,t)\in D\times[0,\infty)$, $uf(x,t,u)\leq Ku^2$. If
$K-\lambda_1+\frac{\sigma^2}{2}\leq0$,
then the trivial solution $0$ is mean square stable, and if
\begin{eqnarray}
K-\lambda_1-\frac{\sigma^2}{2}<0,
\lbl{3.14}\end{eqnarray}
then the trivial solution $0$ is stochastically stable.
\end{theo}
{\bf Proof.}
Taking the Lyapunov function $V(u)=\|u\|^2$, we have
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}\|u(t)\|^2&=&-2\mathbb{E}\int_D|\nabla u(x,t)|^2dx+\sigma^2\mathbb{E}\|u(t)\|^2+2\mathbb{E}\int_Duf(x,t,u)dx\\
&\leq&2\left(K-\lambda_1+\frac{\sigma^2}{2}\right)\mathbb{E}\|u(t)\|^2,
\end{eqnarray*}
which yields that
\begin{eqnarray*}
\mathbb{E}\|u(t)\|^2\leq \mathbb{E}\|u_0\|^2e^{2\left(K-\lambda_1+\frac{\sigma^2}{2}\right)t}.
\end{eqnarray*}
Next, we use a Lyapunov function $V(u)=\|u\|^{2r}$ with $0<r<1$ to prove
the stochastic stability. Note that in Theorem
\ref{t3.2} we proved that the solution satisfies $u\geq0$ almost surely, and thus
we can choose $\|u\|^{2r}$ as a Lyapunov functional. This leads to the expression
\begin{eqnarray}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^{2r}&=&2r\mathbb{E}\left[\|u(\cdot,t)\|^{2r-2}\int_ D u(\Delta u(x,t)+f(x,t,u))dx\right]\nonumber\\
&&+r\sigma^2\mathbb{E}\|u(\cdot,t)\|^{2r}
+2\sigma^2r(r-1)\mathbb{E}\|u(\cdot,t)\|^{2r}\nonumber\\
&\leq&\mathbb{E}\left[2r\|u(\cdot,t)\|^{2r}\left(K-\lambda_1+\sigma^2r-\frac{\sigma^2}{2}\right)\right].
\lbl{3.15}\end{eqnarray}
If $K-\lambda_1-\frac{\sigma^2}{2}<0$, we can choose $0<r<\frac{1}{2}$ such that
\begin{eqnarray*}
K-\lambda_1+\sigma^2r-\frac{\sigma^2}{2}\leq0;
\end{eqnarray*}
then from the Chebyshev inequality, stochastic stability for the solution
of (\ref{3.10}) follows from (\ref{3.15}). The proof is complete. $\Box$
\begin{remark}\lbl{r3.4} It follows from Theorem \ref{t3.3} that multiplicative
noise can make the solution stable in the sense of stochastic stability. Comparing Theorem \ref{t3.3} with
Theorem \ref{t3.2}, we can take $K=\lambda_1+\sigma^2/2-\varepsilon>\lambda_1-\alpha$, where $0<\varepsilon\ll1$.
\end{remark}
In the above theorems, we assumed that the noise term satisfies the global Lipschitz condition.
In the following theorem, we will see that this assumption can be weakened to a local Lipschitz condition.
To this end, we consider the following equation
\begin{eqnarray}\left\{\begin{array}{lll}
du=(\Delta u-k_1u^{r})dt+k_2u^mdW_t(x), \ \ \qquad &t>0,\ x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ \ x\in D,\\[1.5mm]
u(x,t)=0, \ \ \ \ &t>0,\ x\in\partial D,
\end{array}\right.\lbl{3.17}\end{eqnarray}
where $k_1,k_2,r$ and $m$ are positive constants. Moreover,
$W_t(x)$ is a Wiener random field with the covariance function $q$.
In our paper \cite{LD2015}, we proved the existence of a global solution of (\ref{3.17}) under the assumptions
of Theorem \ref{t3.4}. Moreover, we proved that the solutions remain non-negative almost surely.
\begin{theo}\lbl{t3.4} Assume that $r$ is an odd number and $1<m<\frac{1+r}{2}$.
Assume further that there exists a positive constant $q_0$ such that the covariance function $q(x,y)$ satisfies $\sup_{x,y\in\bar D}q(x,y)\leq q_0$.
If $\hat\lambda<\lambda_1$, then the trivial solution $0$ is exponentially mean square stable with index $\lambda_1-\hat\lambda$, where
\begin{eqnarray*}
\hat\lambda:=\frac{r+1-2m}{r-1}\left(\frac{k_1(r-1)}{2m-2}\right)^{-\frac{2m-2}{r+1-2m}}(q_0k_2)^{\frac{r-1}{r+1-2m}}.
\end{eqnarray*}
In particular, when $m=2$ and $r>3$, we
assume further that there exists a positive constant $q_1$ such that
the covariance function $q(x,y)$ satisfies $\inf_{x,y\in\bar D}q(x,y)\geq q_1$.
If
\begin{eqnarray*}
\lambda_1>\frac{r-3}{r-1}\left(\frac{k_1(r-1)}{2}\right)^{-\frac{2}{r-3}}(q_0k_2)^{\frac{r-1}{r-3}}-2k_2^2q_1,
\end{eqnarray*}
then the trivial solution $0$ is stochastically stable.
\end{theo}
{\bf Proof.} The proof is similar to that of Theorem \ref{t3.3}, and we give an outline of the
proof for completeness.
Taking the Lyapunov function $V(u)=\|u\|^2$, we have
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}\|u(t)\|^2&=&-2\mathbb{E}\int_D|\nabla u(x,t)|^2dx+k_2^2\mathbb{E}\int_Du^{2m}(x,t)q(x,x)dx-2k_1\mathbb{E}\int_Du^{r+1}(x,t)dx\\
&\leq&-2\lambda_1\mathbb{E}\|u(t)\|^2
-2k_1\mathbb{E}\|u(t)\|_{L^{1+r}}^{1+r}+k_2^2q_0\mathbb{E}\|u(t)\|_{L^{2m}}^{2m}.
\end{eqnarray*}
By using the interpolation inequality
\begin{eqnarray*}
\|u\|_{L^{2m}}\leq\|u\|^\theta_{L^2}\|u\|^{1-\theta}_{L^{1+r}},
\end{eqnarray*}
we have
\begin{eqnarray*}
q_0\|u\|_{L^{2m}}^{2m}&\leq&q_0\|u\|_{L^2}^{2m\theta}\|u\|_{L^{1+r}}^{2m(1-\theta)}\nonumber\\
&\leq&k_1\|u\|_{L^{1+r}}^{2m(1-\theta)\frac{1}{1-m\theta}}+
m\theta\left(\frac{k_1}{1-m\theta}\right)^{-\frac{1-m\theta}{m\theta}}
(q_0k_2)^{\frac{1}{m\theta}}\|u\|_{L^2}^{2}\nonumber\\
&=&k_1\|u\|_{L^{1+r}}^{1+r}+\hat\lambda\|u\|_{L^2}^{2},
\end{eqnarray*}
where
\begin{equation*}
\theta=\frac{r+1-2m}{mr-m},\ \ \hat\lambda:=\frac{r+1-2m}{r-1}\left(\frac{k_1(r-1)}{2m-2}\right)^{-\frac{2m-2}{r+1-2m}}(q_0k_2)^{\frac{r-1}{r+1-2m}}.
\end{equation*}
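As a quick numerical sanity check on the exponent bookkeeping (an illustration, not part of the proof), the following Python sketch verifies that $\theta$ satisfies the interpolation relation $\frac{1}{2m}=\frac{\theta}{2}+\frac{1-\theta}{1+r}$, that the Young step indeed produces the exponent $1+r$, and that the $\hat\lambda$ coming out of the Young step agrees with the closed form in the statement of the theorem; the values of $m,r,k_1,k_2,q_0$ are arbitrary test choices.

```python
import math

def theta(m, r):
    """Interpolation exponent: 1/(2m) = theta/2 + (1 - theta)/(1 + r)."""
    return (r + 1 - 2 * m) / (m * (r - 1))

def lambda_hat_young(m, r, k1, k2, q0):
    """lambda-hat as produced by the epsilon-Young step in the proof."""
    th = theta(m, r)
    return (m * th
            * (k1 / (1 - m * th)) ** (-(1 - m * th) / (m * th))
            * (q0 * k2) ** (1 / (m * th)))

def lambda_hat_closed(m, r, k1, k2, q0):
    """lambda-hat in the closed form stated in Theorem t3.4."""
    return ((r + 1 - 2 * m) / (r - 1)
            * (k1 * (r - 1) / (2 * m - 2)) ** (-(2 * m - 2) / (r + 1 - 2 * m))
            * (q0 * k2) ** ((r - 1) / (r + 1 - 2 * m)))

# r odd, 1 < m < (1 + r)/2; k1, k2, q0 are arbitrary positive test values
m, r, k1, k2, q0 = 2, 5, 0.7, 1.3, 0.9
th = theta(m, r)
assert math.isclose(1 / (2 * m), th / 2 + (1 - th) / (1 + r))    # interpolation relation
assert math.isclose(2 * m * (1 - th) / (1 - m * th), 1 + r)      # Young exponent gives L^{1+r}
assert math.isclose(lambda_hat_young(m, r, k1, k2, q0),
                    lambda_hat_closed(m, r, k1, k2, q0))
```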
Consequently, we have
\begin{equation*}
\mathbb{E}\|u(t)\|^2\leq \mathbb{E}\|u_0\|^2e^{-\left(\lambda_1-\hat\lambda\right)t},
\end{equation*}
which implies that the trivial solution $0$ is exponentially mean square stable.
Next, we use a Lyapunov function $V(u)=\|u\|^{2\gamma}$ with $0<\gamma<1$ to prove
the stochastic stability (note that the solutions of (\ref{3.17}) are non-negative, see \cite{LD2015}).
This leads to the expression
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^{2\gamma}&=&\gamma\mathbb{E}
\left[\|u(\cdot,t)\|^{2\gamma-2}\int_D u(\Delta u(x,t)-k_1u^r(x,t))dx\right]\nonumber\\
&&+\gamma k_2^2\mathbb{E}\|u(\cdot,t)\|^{2\gamma-2}\int_Du^{2m}dx\nonumber\\
&&
+2k_2^2\gamma(\gamma-1)\mathbb{E}\|u(\cdot,t)\|^{2\gamma-4}\int_D\int_Dq(x,y)u^{m}(x,t)
u^m(y,t)dxdy\\
&\leq&\gamma\mathbb{E}
\left[\|u(\cdot,t)\|^{2\gamma-2}\int_D u(\Delta u(x,t)-k_1u^r(x,t))dx\right]\nonumber\\
&&+\gamma k_2^2\mathbb{E}\|u(\cdot,t)\|^{2\gamma-2}\int_Du^{2m}dx
+2k_2^2q_1\gamma(\gamma-1)\mathbb{E}\|u(\cdot,t)\|^{2\gamma}.
\end{eqnarray*}
Then, arguing as in the proof of mean square stability, we can choose
$0<\gamma<1$ such that
\begin{equation*}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^{2\gamma}\leq0.
\end{equation*}
From the Chebyshev inequality, stochastic stability for the solution
of (\ref{3.17}) is obtained. The proof is complete. $\Box$
In paper \cite{TWW2020}, the authors considered the following problem
\begin{equation}\left\{\begin{array}{lll}
dX_t=X_t(b(X_t)+k_1X^{m-1}_t)dt+k_2X^{\frac{m+1}{2}}_t\phi(X_t)dW_t, \ \ \qquad &t>0,\ \\[1.5mm]
X_0=x>0,
\end{array}\right.\lbl{3.18}\end{equation}
where $k_1,k_2\in\mathbb{R},\,m\geq1$. In \cite{LDWW2018}, we considered
the competition between the nonlinear term and noise term. The
result of \cite{TWW2020} generalized the results of \cite{LDWW2018}.
Now we first recall the main results of \cite{TWW2020}.
\begin{prop}\lbl{p3.1}\cite[Theorem 1.1]{TWW2020} Let $k_1$ be a nonzero real number.
Assume $rb(r)\in C^1(\mathbb{R}_+)$ and there exist two positive
numbers $c_0$ and $m_0(<m)$ such that
\begin{equation*}
|b(r)|\leq c_0(1+r^{m_0-1}),\ \ \ r\in\mathbb{R}_+.
\end{equation*}
Assume in addition that $r^{\frac{m+1}{2}}\phi(r)\in C^1(\mathbb{R}_+)$. Let $\beta\in(0,1)$ and suppose there
is an $r_0>0$ such that
\begin{equation*}
\inf_{r\geq r_0}\phi(r)>\sqrt{\frac{2|k_1|}{(1-\beta)k_2^2}}.
\end{equation*}
There is a unique solution $X_t(x)$ for (\ref{3.18}) on $t\geq0$
and the solution is positive for all $t\geq0$ almost surely. Moreover,
for every $T>0$
\begin{equation*}
\sup_{0\leq t\leq T}\mathbb{E}X_t^\beta(x)<+\infty.
\end{equation*}
\end{prop}
Moreover, \cite[Theorem 1.2]{TWW2020} shows that the result in the
proposition is sharp. More precisely, if there exists $\gamma\in(\beta,1)$ such that
\begin{equation*}
\sup_{r\geq r_0}\phi(r)<\sqrt{\frac{2|k_1|}{(1-\gamma)k_2^2}},
\end{equation*}
then there is a real number $T_0>0$ such that
\begin{equation*}
\sup_{0\leq t\leq T_0}\mathbb{E}X_t^\gamma(x)=+\infty.
\end{equation*}
The above results imply that the
trivial solution $0$ is not mean square stable. However,
stochastic stability is still possible. The following
result gives a positive answer.
\begin{theo}\lbl{t3.6} Let all the assumptions of Proposition \ref{p3.1} hold.
Assume further that
\begin{equation*}
rb(r)\leq c_1r+c_2r^{m_0},\ \ 1<m_0<m,\ \ r\in\mathbb{R}_+,\ c_1<0<c_2.
\end{equation*}
(i) If $ \inf_{r\geq 0}\phi(r)\geq1$ and
\begin{equation*}
c_1+\frac{[p(k_1-\frac{k_2^2}{2})]^{-p/q}}{q}<0,
\end{equation*}
where
\begin{equation*}
p=\frac{m-1}{m_0-1}, \ \ \ q=\frac{m-1}{m-m_0},
\end{equation*}
then the trivial solution $0$ is stochastically stable.
(ii) If $\phi(r)=r^{\frac{\alpha}{2}}$ with $\alpha\geq0$ and
\begin{eqnarray*}
&&\frac{m-1}{\alpha+m-1}\left(\frac{-c_1(\alpha+m-1)}{2\alpha}\right)^{-\frac{\alpha}{m-1}}k_1^{\frac{\alpha+m-1}{m-1}}\\
&&
+\frac{m_0-1}{\alpha+m-1}\left(\frac{-c_1(\alpha+m-1)}{2(\alpha+m-m_0)}\right)^{-\frac{\alpha+m-m_0}{m_0-1}}
c_2^{\frac{\alpha+m-1}{m_0-1}}<\frac{k_2^2}{2},
\end{eqnarray*}
then the trivial solution $0$ is stochastically stable.
\end{theo}
{\bf Proof.}
(i) We use a Lyapunov function $V(X)=|X|^{\beta}$ with $0<\beta<1$ to be
fixed later to prove
the stochastic stability (noting that the solution is positive almost surely;
alternatively, one can use $(|X|+\kappa)^{\beta}$ in place of $|X|^{\beta}$ and then let
$\kappa\to0$). This leads to the expression
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}|X|^{\beta}&=&\beta\mathbb{E}
|X|^{\beta}(b(X_t)+k_1X^{m-1}_t)+\frac{1}{2}k_2^2\beta(\beta-1)\mathbb{E}|X_t|^{\beta+m-1}\phi^2(X_t)\\
&\leq&\beta\mathbb{E}|X|^{\beta}\left[c_1+c_2X_t^{m_0-1}+\left(k_1+(\beta-1)\frac{k_2^2}{2}\right)
|X_t|^{m-1}\right].
\end{eqnarray*}
Set $0<\beta\ll1$ such that
\begin{equation*}
c_1+\frac{[p(k_1-\frac{(1-\beta)k_2^2}{2})]^{-p/q}}{q}\leq0.
\end{equation*}
By using the $\varepsilon$-Young inequality, we have
\begin{equation*}
\frac{d}{dt}\mathbb{E}|X|^{\beta}\leq0.
\end{equation*}
From the Chebyshev inequality, stochastic stability for the solution
of (\ref{3.18}) is obtained.
(ii) In this case $\phi(r)=r^{\frac{\alpha}{2}}$ with $\alpha\geq0$. For every $\beta\in(0,1)$,
if we take
$r_0=\left(\sqrt{\frac{3|k_1|}{k_2^2(1-\beta)}}\right)^{\frac{2}{\alpha}}$, then
\begin{equation*}
\inf_{r\geq r_0}\phi(r)=\inf_{r\geq r_0}|r|^{\alpha/2}=\sqrt{\frac{3|k_1|}{k_2^2(1-\beta)}}
>\sqrt{\frac{2|k_1|}{(1-\beta)k_2^2}}.
\end{equation*}
Hence Proposition \ref{p3.1} holds for this case.
Similar to case (i), we use a Lyapunov function $V(X)=|X|^{\beta}$ with $0<\beta<1$.
This leads to the expression
\begin{eqnarray*}
\frac{d}{dt}\mathbb{E}|X|^{\beta}&=&\beta\mathbb{E}
|X|^{\beta}(b(X_t)+k_1X^{m-1}_t)+\frac{1}{2}k_2^2\beta(\beta-1)\mathbb{E}|X_t|^{\beta+\alpha+m-1}\\
&\leq&\beta\mathbb{E}|X|^{\beta}\left[c_1+c_2X_t^{m_0-1}+k_1|X_t|^{m-1}+(\beta-1)\frac{k_2^2}{2}
|X_t|^{\alpha+m-1}\right].
\end{eqnarray*}
Set $0<\beta\ll1$ such that
\begin{eqnarray*}
&&\frac{m-1}{\alpha+m-1}\left(\frac{-c_1(\alpha+m-1)}{2\alpha}\right)^{-\frac{\alpha}{m-1}}k_1^{\frac{\alpha+m-1}{m-1}}\\
&&
+\frac{m_0-1}{\alpha+m-1}\left(\frac{-c_1(\alpha+m-1)}{2(\alpha+m-m_0)}\right)^{-\frac{\alpha+m-m_0}{m_0-1}}
c_2^{\frac{\alpha+m-1}{m_0-1}}\leq\frac{k_2^2}{2}(1-\beta).
\end{eqnarray*}
By using the $\varepsilon$-Young inequality, we have
\begin{equation*}
\frac{d}{dt}\mathbb{E}|X|^{\beta}\leq0.
\end{equation*}
From the Chebyshev inequality, stochastic stability for the solution
of (\ref{3.18}) is obtained. $\Box$
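The $\varepsilon$-Young step used in both parts of the proof can be checked numerically. In the sketch below (an illustration only, with arbitrary test values of $m$, $m_0$ and $\varepsilon$), $p$ and $q$ are the exponents of Theorem \ref{t3.6}, and we verify on a grid the inequality $x^{m_0-1}\leq \varepsilon x^{m-1}/p+\varepsilon^{-q/p}/q$ for $x\geq0$.

```python
import math

def young_conjugates(m, m0):
    """Exponents from Theorem t3.6: (m0 - 1) * p = m - 1 and 1/p + 1/q = 1."""
    p = (m - 1) / (m0 - 1)
    q = (m - 1) / (m - m0)
    return p, q

m, m0 = 3.0, 1.5                     # any 1 < m0 < m (test values)
p, q = young_conjugates(m, m0)
assert math.isclose(1 / p + 1 / q, 1)        # p, q are conjugate
assert math.isclose((m0 - 1) * p, m - 1)     # lower power is raised to m - 1

# epsilon-Young: x^{m0-1} <= eps * x^{m-1} / p + eps^{-q/p} / q for all x >= 0
eps = 0.37                           # arbitrary positive epsilon
for i in range(2001):
    x = i * 0.01
    assert x ** (m0 - 1) <= eps * x ** (m - 1) / p + eps ** (-q / p) / q + 1e-12
```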
\begin{remark}\lbl{r3.5}
Theorem \ref{t3.6} is new for SDEs. When $k_2=0$, the solution of
(\ref{3.18}) will blow up in finite time, and thus Theorem \ref{t3.6}
implies that the multiplicative noise can make the solution stable.
\end{remark}
Unfortunately, for SPDEs we cannot obtain a result similar to
Theorem \ref{t3.6}. Before ending this section, we explain why.
For simplicity, we consider the following problem
\begin{equation}\left\{\begin{array}{lll}
du=(\Delta u+k_1u^{r})dt+k_2u^mdW_t, \ \ \qquad &t>0,\ x\in D,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ \ x\in D,\\[1.5mm]
u(x,t)=0, \ \ \ \ &t>0,\ x\in\partial D,
\end{array}\right.\lbl{3.19}\end{equation}
where $k_1,k_2,r,m\in\mathbb{R}$. Under the condition $r<m$, the
existence of a global solution was established in \cite{LW2020}. In the
following, we explain why we cannot show that the trivial solution
$0$ is stochastically stable.
We use a Lyapunov function $V(u)=\|u\|^{2\gamma}$ with $0<\gamma<1/2$ to examine
the stochastic stability (to avoid issues when $\|u\|=0$, one can use
$(\|u\|^2+\kappa)^\gamma$ instead and then let $\kappa\to0$). This leads to the expression
\begin{eqnarray}
\frac{d}{dt}\mathbb{E}\|u(\cdot,t)\|^{2\gamma}&=&\gamma\mathbb{E}
\left[\|u(\cdot,t)\|^{2\gamma-2}\int_D u(\Delta u(x,t)+k_1u^r(x,t))dx\right]\nonumber\\
&&+\gamma k_2^2\mathbb{E}\left[\|u(\cdot,t)\|^{2\gamma-2}\int_Dq(x,x)u^{2m}dx\right]\nonumber\\
&&
+2k_2^2\gamma(\gamma-1)\mathbb{E}\|u(\cdot,t)\|^{2\gamma-4}\int_D\int_Dq(x,y)u^{m+1}(x,t)
u^{m+1}(y,t)dxdy\nonumber\\
&\leq&-\gamma\lambda_1\mathbb{E}\|u(\cdot,t)\|^{2\gamma}+
k_1\gamma\mathbb{E}\left[\|u(\cdot,t)\|^{2\gamma-2}
\|u(\cdot,t)\|_{L^{1+r}}^{1+r}\right]\nonumber\\
&&+\gamma\mathbb{E}\left[\|u(\cdot,t)\|^{2\gamma-2}\left(k_2^2q_0\|u\|_{L^{2m}}^{2m}
+
2q_1k_2^2(\gamma-1)\frac{\|u(\cdot,t)\|_{L^{m+1}}^{2(m+1)}}{\|u(\cdot,t)\|^{2}}\right)\right].
\lbl{3.20}\end{eqnarray}
H\"{o}lder inequality implies that
\begin{equation}ss
\int_D| u|^{m+1}(x,t)dx \leq \left(\int_D| u|^{2m}(x,t)dx\right)^{\frac{1}{2}}\left(\int_D| u|^{2}(x,t)dx\right)^{\frac{1}{2}}.
\end{equation}ss
For the last term on the right-hand side of (\ref{3.20}), we would need
\begin{equation*}
\frac{\|u(\cdot,t)\|_{L^{m+1}}^{2(m+1)}}{\|u(\cdot,t)\|^{2}}\geq\|u\|_{L^{2m}}^{2m},
\end{equation*}
which contradicts the above H\"{o}lder inequality. This is the same obstruction as in Remark \ref{r3.2}.
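The failure of the desired reverse inequality can be illustrated numerically. The following sketch (an illustration only; the test function $u(x)=x(1-x)$ on $D=(0,1)$ and the choice $m=2$ are ours) checks the H\"{o}lder inequality by midpoint Riemann sums and confirms that the wanted bound fails for this $u$.

```python
# Riemann-sum check of the Hölder inequality on D = (0,1) for the test
# function u(x) = x(1-x) with m = 2:
#   int u^{m+1} <= (int u^{2m})^{1/2} (int u^2)^{1/2},
# and the inequality is strict here, so the desired reverse bound
#   ||u||_{L^{m+1}}^{2(m+1)} / ||u||^2 >= ||u||_{L^{2m}}^{2m}
# cannot hold.
m = 2
N = 20000
h = 1.0 / N
I_m1 = I_2m = I_2 = 0.0
for i in range(N):
    x = (i + 0.5) * h                 # midpoint rule
    u = x * (1 - x)
    I_m1 += u ** (m + 1) * h
    I_2m += u ** (2 * m) * h
    I_2 += u ** 2 * h
assert I_m1 ** 2 < I_2m * I_2         # Hölder holds, strictly for this u
assert I_m1 ** 2 / I_2 < I_2m         # the desired reverse inequality fails
```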
\section{Whole space}
\setcounter{equation}{0}
In this section, we consider the impact of noise in the whole space, for the problem
\begin{equation}\left\{\begin{array}{lll}
du=(\Delta u+f(t,u))dt+\sigma(t,u)dW(x,t), \ \ \qquad &t>0,\ x\in \mathbb{R},\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &\qquad\ \ \ x\in \mathbb{R},
\end{array}\right.\lbl{4.1}\end{equation}
where $W(x,t)$ is the space-time white noise. Throughout this section, we assume that $f(t,0)=\sigma(t,0)=0$. A mild solution to (\ref{4.1}) in the sense of Walsh \cite{walsh1986} is any
$u$ which is adapted to the filtration generated by the
white noise and satisfies the following evolution equation
\begin{equation*}
u(x,t)=\int_{\mathbb{R}}K(x-y,t)u_0(y)dy
+\int_0^t\int_{\mathbb{R}}K(x-y,t-s)\sigma(s,u(y,s))dW(y,s),
\end{equation*}
where $K(x,t)$ denotes the heat kernel of the Laplacian operator.
\begin{theo}\lbl{t4.1} Assume that there exist a non-negative constant $\alpha$ and non-negative functions $\beta(t)$, $\gamma(t)$ such that
\begin{equation}
|f(t,u)+\alpha u|\leq\beta(t) |u|, \ \ |\sigma(t,u)|\leq\gamma(t) |u|,\ \
2\int_0^t\beta(s)ds +\frac{1}{2\sqrt{\pi}}\int_0^t\frac{\gamma(s)}{(t-s)^{\frac{1}{2}}}ds<1.
\lbl{4.3}\end{equation}
Then the trivial solution $0$ is exponentially mean square stable with index $2\alpha$.
\end{theo}
{\bf Proof.} Since $f$ and $\sigma$ satisfy a global Lipschitz-type condition,
one can prove that (\ref{4.1}) admits a unique global solution.
Let $v(x,t)=e^{\alpha t}u(x,t)$. Then $v$ satisfies
\begin{equation}\left\{\begin{array}{lll}
dv=(\Delta v+e^{\alpha t}f(t,e^{-\alpha t}v)+\alpha v)dt+e^{\alpha t}\sigma(t,e^{-\alpha t}v)dW(x,t), \ \qquad &t>0,\ x\in \mathbb{R},\\[1.5mm]
v(x,0)=u_0(x), \ \ \ &x\in \mathbb{R}.
\end{array}\right.\lbl{4.2}\end{equation}
By taking the second moment and using the Walsh isometry,
we get for $t\in[0,T]$
\begin{eqnarray*}
\mathbb{E}|v(x,t)|^2&=&\mathbb{E}\left(\int_{\mathbb{R}}K(x-y,t)u_0(y)dy+\int_0^t\int_{\mathbb{R}}K(x-y,t-s)
[e^{\alpha s}f(s,e^{-\alpha s}v)+\alpha v]dyds\right)^2\\
&&
+\int_0^t\int_{\mathbb{R}}K^2(x-y,t-s)e^{2\alpha s}\mathbb{E}\sigma^2(s,e^{-\alpha s}v)dyds\\
&\leq&2\max_{x\in\mathbb{R}}|u_0|^2(x)+\left(2\int_0^t\beta(s)ds +\frac{1}{2\sqrt{\pi}}\int_0^t\frac{\gamma(s)}{(t-s)^{\frac{1}{2}}}ds\right)\max_{(x,t)\in\mathbb{R}\times[0,T]}
\mathbb{E}v^2(x,t),
\end{eqnarray*}
which implies that
\begin{equation*}
\mathbb{E}|v(x,t)|^2\leq C\max_{x\in\mathbb{R}}|u_0|^2(x),
\end{equation*}
where $C$ is independent of $t$. Since $\mathbb{E}|u(x,t)|^2=e^{-2\alpha t}\mathbb{E}|v(x,t)|^2$, the trivial solution $0$ is exponentially mean square stable with index $2\alpha$, and thus we complete the proof. $\Box$
The reason why we used the properties of the heat kernel is that the existence of a local strong solution of problem (\ref{4.1}) with space-time white noise is unknown. For general $d$, we have the following result.
\begin{theo} \lbl{t4.2} Consider the Cauchy problem
\begin{equation}\left\{\begin{array}{lll}
du=(\Delta u+f(t,u))dt+\sigma(t,u)dW_t, \ \ \qquad &t>0,\ x\in \mathbb{R}^d,\\[1.5mm]
u(x,0)=u_0(x), \ \ \ &x\in \mathbb{R}^d.
\end{array}\right.\lbl{4.4}\end{equation}
Assume that there exist constants $K\in\mathbb{R}$ and $\sigma_0>0$ such that for all $(x,t)\in \mathbb{R}^d\times[0,\infty)$,
\begin{equation*}
uf(t,u)\leq Ku^2,\ \ \ \sigma^2(t,u)\leq \sigma_0^2 u^2.
\end{equation*}
If
$K+\frac{\sigma_0^2}{2}\leq0$,
then the trivial solution $0$ is mean square stable, and if
$K-\frac{\sigma_0^2}{2}<0$,
then the trivial solution $0$ is stochastically stable.
\end{theo}
The proof of Theorem \ref{t4.2} is similar to that of Theorem \ref{t3.3} and we omit it here.
Meanwhile, we remark that in the whole space the operator $\Delta$ does not help
the stability of the trivial solution.
\noindent {\bf Acknowledgment} This research was partly supported by the NSF of China grants 11771123, 11501577, 11626085.
\begin{thebibliography}{99}\label{ref:ref}\addtolength{\itemsep}{-1.2ex}
\bibitem{Cb2007} P-L. Chow, {\em Stochastic partial differential equations},
Chapman Hall/CRC Applied Mathematics and Nonlinear Science Series. Chapman Hall/CRC,
Boca Raton, FL, 2007. x+281 pp. ISBN: 978-1-58488-443-9.
\bibitem{C2011} P-L. Chow, {\em Explosive solutions of stochastic reaction-diffusion
equations in mean $L^p$-norm}, J. Differential Equations {\bf 250} (2011) 2567-2580.
\bibitem{DZbook}G. Da Prato and J. Zabczyk, {\em Stochastic equations in infinite dimensions},
Encyclopedia of Mathematics and its Applications, 44. Cambridge University Press, Cambridge, 1992. xviii+454 pp. ISBN: 0-521-38529-6
\bibitem{DGG2019} K. Dareiotis, M. Gerencs\'{e}r and B. Gess, {\em Entropy solutions for stochastic
porous media equations}, J. Differential Equations {\bf266} (2019) 3732-3763.
\bibitem{FF2012} E. Fedrizzi, F. Flandoli, Noise prevents singularities in linear transport equations, J. Funct. Anal. 264 (6) (2012) 1329-1354.
\bibitem{FGP2010} F. Flandoli, M. Gubinelli, E. Priola, Well-posedness of the transport equation by stochastic perturbation, Invent. Math. 180 (1) (2010) 1-53.
\bibitem{GG2019} P. Gassiat and B. Gess, {\em Regularization by noise for stochastic Hamilton-Jacobi equations},
Probab. Theory Related Fields {\bf173} (2019) 1063-1098.
\bibitem{GS2017} B. Gess and P. Souganidis, {\em Long-time behavior, invariant measures,
and regularizing effects for stochastic scalar conservation laws},
Comm. Pure Appl. Math. {\bf70} (2017) 1562-1597.
\bibitem{GH2018} B. Gess and M. Hofmanov\'{a},
{\em Well-posedness and regularity for quasilinear degenerate parabolic-hyperbolic SPDE},
Ann. Probab. {\bf46} (2018) 2495-2544.
\bibitem{GL2019} B. Gess and X. Lamy, {\em Regularity of solutions to scalar conservation laws with a force},
Ann. Inst. H. Poincare Anal. Non Lineaire {\bf36} (2019) 505-521.
\bibitem{GTbook} D. Gilbarg and N. S. Trudinger, {\em Elliptic partial differential equations of second order}, 2nd
Ed., Springer-Verlag, New York, 1983.
\bibitem{Kh2011} R. Khasminskii, {\em Stochastic stability of differential equations}, Springer Heidelberg
Dordrecht London New York, 2011.
\bibitem{LM1998} K. Liu and X. Mao, {\em Exponential stability of non-linear stochastic
evolution equations}, Stochastic Process. Appl. {\bf 78} (1998) 173-193.
\bibitem{LD2015} G. Lv and J. Duan, {\em Impacts of noise on a class of partial differential equations},
J. Differential Equations, {\bf258} (2015) 2196-2220.
\bibitem{LDWW2018} G. Lv and J. Duan, L. Wang and J. Wu, {\em Impact of noise on ordinary differential
equations}, Dynamic system and Applications, {\bf27} (2018) 225-236.
\bibitem{LGWW2019} G. Lv, H. Gao, J. Wei and J-L. Wu, {\em BMO and Morrey-Campanato estimates
for stochastic convolutions and Schauder estimates for stochastic parabolic equations},
J. Differential Equations {\bf266} (2019) 2666-2717.
\bibitem{LW2020} G. Lv and J. Wei, {\em Global existence and non-existence of stochastic parabolic equations},
J. Differential Equations in press (arXiv:1902.07389).
\bibitem{MN1994} M. Mackey and G. Nechaeva, {\em Noise and stability in Differential Delay Equations},
J. Dynam. Differential Equations {\bf6} (1994) 395-426.
\bibitem{Maobook1991} X. Mao, {\em Stability of stochastic differential equations with respect to semimartingales}, Longman, 1991.
\bibitem{Maobook1994} X. Mao, {\em Exponential stability of stochastic differential equations}, Marcel Dekker, 1994.
\bibitem{TWW2020} R. Tian, J. Wei and J. Wu, {\em Generalized population dynamics equations with
environmental noises}, submitted.
\bibitem{walsh1986} John B. Walsh, {\em
An introduction to stochastic partial differential equations},
volume 1180 of Lecture Notes in Math., pages 265-439, Springer
Berlin, 1986.
\bibitem{WL2019} Z. Wang and X. Li, {\em Stability and moment boundedness of the stochastic
linear age-structured model}, J. Dynam. Differential Equations {\bf31} (2019) 2109-2125.
\bibitem{YLbook} Q. Ye and Z. Li, {\em
Introduction to reaction-diffusion equations},
Science Press, Beijing, 1990.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
In this paper, we extend the rectangular side of the shuffle conjecture by stating a rectangular analogue of the square paths conjecture. In addition, we describe a set of combinatorial objects and one statistic that are a first step towards a rectangular extension of (the rise version of) the Delta conjecture, and of (the rise version of) the Delta square conjecture, corresponding to the case $q=1$ of an expected general statement. We also prove our new rectangular paths conjecture in the special case when the sides of the rectangle are coprime.
\end{abstract}
\maketitle
\section{Introduction}
In the 90's, Garsia and Haiman set out to prove the Schur positivity of the (modified) Macdonald polynomials by showing them to be the bi-graded Frobenius characteristic of certain Garsia-Haiman modules \cite{Garsia-Haiman-PNAS-1993}. Their prediction was confirmed in 2001, when Haiman used the algebraic geometry of the Hilbert scheme to prove that the dimension of their modules equals $n!$ \cite{Haiman-nfactorial-2001}, thus proving the $n!$ theorem. In the course of these developments, it became clear that there were remarkable connections to be found between Macdonald polynomial theory and representation theory of the symmetric group. For example, during their quest for Macdonald positivity, Garsia and Haiman introduced the $\mathfrak{S}_n$-module of \emph{diagonal harmonics}, i.e.\ the coinvariants of the diagonal action of $\mathfrak{S}_n$ on polynomials in two sets of $n$ variables, and they conjectured that its Frobenius characteristic is given by $\nabla e_n$, where $\nabla$ is the \emph{nabla} operator on symmetric functions introduced in \cite{Bergeron-Garsia-Haiman-Tesler-Positivity-1999}, which acts diagonally on Macdonald polynomials. Haiman proved this conjecture in 2002 \cite{Haiman-Vanishing-2002}.
The combinatorial side of things solidified when Haglund, Haiman, Loehr, Remmel, and Ulyanov formulated the so-called \emph{shuffle conjecture} \cite{HHLRU-2005}, i.e.\ they predicted a combinatorial formula for $\nabla e_n$ in terms of labelled Dyck paths, which are lattice paths using North and East steps going from $(0,0)$ to $(n,n)$ and staying weakly above the line connecting these two points (called the \emph{main diagonal}). Several years later, Haglund, Morse and Zabrocki conjectured a \emph{compositional} refinement of the shuffle conjecture, which also specified all the points where the Dyck path returns to the main diagonal \cite{Haglund-Morse-Zabrocki-2012}. This was the statement later proved by Carlsson and Mellit in \cite{Carlsson-Mellit-ShuffleConj-2018}, implying the \emph{shuffle theorem}.
Over the years, this subject has revealed itself to be extremely fruitful and to have striking connections to other fields of mathematics including elliptic Hall algebras, affine Hecke algebras, Springer fibers, the homology of torus knots and the shuffle algebra of symmetric functions.
In this paper, we add a few (conjectural) formulas to the substantial list of variants and generalisations inspired by the success story of the shuffle theorem; that is, equations with a symmetric function related to Macdonald polynomials on one side and lattice path combinatorics on the other. Furthermore, we support one of these conjectures by proving a non-trivial special case.
One of the earliest shuffle-like formulas was conjectured in 2007 by Loehr and Warrington \cite{Loehr-Warrington-square-2007}. They predicted an expression of $\nabla \omega(p_n)$ in terms of \emph{square paths}, i.e.\ lattice paths from $(0,0)$ to $(n,n)$ using only North and East steps and ending with an East step (without the restriction of staying above the main diagonal). Their formula was proved by Sergel in \cite{Leven-2016} to be a consequence of the shuffle theorem.
Next, Haglund, Remmel and Wilson formulated the \emph{Delta conjecture} \cite{Haglund-Remmel-Wilson-2018}, a pair of conjectures for the symmetric function $\Delta'_{e_{n-k-1}}e_n$ in terms of decorated Dyck paths, where $k$ decorations are placed on either \emph{rises} or \emph{valleys} of the path. The symmetric function operator $\Delta'_f$ acts diagonally on the Macdonald polynomials and generalises $\nabla$, in a sense.
The rise version of the Delta conjecture was proved by D'Adderio and Mellit in \cite{DAdderioMellit2022CompositionalDelta}, using the compositional refinement in \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}. In \cite{DAdderio-Iraci-VandenWyngaerd-DeltaSquare-2019}, the authors stated a \emph{Delta square conjecture} (still open at the moment), which extends (the rise version of) the Delta conjecture in the same fashion as the square paths theorem extends the shuffle theorem. The valley version also has similar extensions \cites{Qiu-Wilson-2020, Iraci-VandenWyngaerd-Valley-Square-2021}, but it lacks a compositional version and it is still open.
Around the same time as the formulation of the Delta conjecture, the story has been extended to rectangular Dyck paths: paths from $(0,0)$ to $(m,n)$ staying above the main diagonal. In \cite{Bergeron-Garsia-Sergel-Xin-2016}, building on the work in \cite{Gorsky-Negut-2015}, Bergeron, Garsia, Sergel, and Xin conjectured that a certain symmetric function related to the elliptic Hall algebra studied by Schiffmann and Vasserot \cite{Schiffmann-Vasserot-2011} can be expressed in terms of rectangular Dyck paths. Their prediction was recently proved by Mellit \cite{mellit2021toric}.
In this paper, we state a rectangular analogue of the square paths conjecture, where the combinatorial objects are lattice paths from $(0,0)$ to $(m,n)$ ending with an East step. Our main result is the proof of the special case of our conjecture where the sides of the rectangle are coprime. Moreover, using the Theta operators (first introduced in \cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}), we conjecture the special case $q=1$ of a rectangular analogue of (the rise version of) the Delta conjecture and the Delta square conjecture, in terms of rectangular paths that lie above some horizontal translation of the \emph{broken diagonal}, a ``decorated'' analogue of the diagonal of the rectangle that turns out to be necessary to describe the right set of combinatorial objects.
\section{Symmetric functions}
For all the undefined notations and the unproven identities, we refer to \cite{DAdderioIraciVandenWyngaerd2022TheBible}*{Section~1}, where definitions, proofs and/or references can be found.
We denote by $\Lambda$ the graded algebra of symmetric functions with coefficients in $\mathbb{Q}(q,t)$, and by $\langle\, , \rangle$ the \emph{Hall scalar product} on $\Lambda$, defined by declaring that the Schur functions form an orthonormal basis.
The standard bases of the symmetric functions that will appear in our calculations are the monomial $\{m_\lambda\}_{\lambda}$, complete $\{h_{\lambda}\}_{\lambda}$, elementary $\{e_{\lambda}\}_{\lambda}$, power $\{p_{\lambda}\}_{\lambda}$ and Schur $\{s_{\lambda}\}_{\lambda}$ bases.
For a partition $\mu \vdash n$, we denote by \[ \widetilde{H}_\mu \coloneqq \widetilde{H}_\mu[X] = \widetilde{H}_\mu[X; q,t] = \sum_{\lambda \vdash n} \widetilde{K}_{\lambda \mu}(q,t) s_{\lambda} \] the \emph{(modified) Macdonald polynomials}, where \[ \widetilde{K}_{\lambda \mu} \coloneqq \widetilde{K}_{\lambda \mu}(q,t) = K_{\lambda \mu}(q,1/t) t^{n(\mu)} \] are the \emph{(modified) Kostka coefficients} (see \cite{Haglund-Book-2008}*{Chapter~2} for more details).
Macdonald polynomials form a basis of the algebra of symmetric functions $\Lambda$. This is a modification of the basis introduced by Macdonald \cite{Macdonald-Book-1995}.
If we identify the partition $\mu$ with its Ferrers diagram, i.e.\ with the collection of cells $\{(i,j)\mid 1\leq i\leq \mu_j, 1\leq j\leq \ell(\mu)\}$, then for each cell $c\in \mu$ we refer to the \emph{arm}, \emph{leg}, \emph{co-arm} and \emph{co-leg} (denoted respectively by $a_\mu(c), l_\mu(c), a_\mu'(c), l_\mu'(c)$) as the number of cells in $\mu$ that are strictly to the right, below, to the left and above $c$ in $\mu$, respectively (see Figure~\ref{fig:notation}).
\begin{figure}
\caption{Arm, leg, co-arm, and co-leg of a cell of a partition.}
\label{fig:notation}
\end{figure}
Let $M \coloneqq (1-q)(1-t)$. For every partition $\mu$, we define the following constants:
\[
B_{\mu} \coloneqq B_{\mu}(q,t) = \sum_{c \in \mu} q^{a_{\mu}'(c)} t^{l_{\mu}'(c)}, \qquad
\Pi_{\mu} \coloneqq \Pi_{\mu}(q,t) = \prod_{c \in \mu / (1)} (1-q^{a_{\mu}'(c)} t^{l_{\mu}'(c)}). \]
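For illustration, these constants are easy to compute from the co-arm and co-leg statistics. The following sketch (not from the paper; the 0-based cell convention $(i,j)$, with $i$ the column and $j$ the row, is our assumption) represents $B_\mu$ as a dictionary mapping $(q,t)$-exponent pairs to coefficients and checks $B_{(2,1)}=1+q+t$ and $\Pi_{(2,1)}=(1-q)(1-t)$.

```python
def cells(mu):
    """Cells (i, j) of the Ferrers diagram of mu; i = column, j = row, 0-based."""
    return [(i, j) for j, part in enumerate(mu) for i in range(part)]

def B(mu):
    """B_mu = sum over cells of q^{a'(c)} t^{l'(c)}, stored as a dict
    {(q-exponent, t-exponent): coefficient}; here a'(c) = i and l'(c) = j."""
    out = {}
    for i, j in cells(mu):
        out[(i, j)] = out.get((i, j), 0) + 1
    return out

def Pi_factors(mu):
    """Exponent pairs (a'(c), l'(c)) of the factors (1 - q^{a'} t^{l'})
    of Pi_mu, i.e. all cells except the corner cell (0, 0)."""
    return [(i, j) for i, j in cells(mu) if (i, j) != (0, 0)]

# B_{(2,1)} = 1 + q + t, and Pi_{(2,1)} = (1 - q)(1 - t)
assert B([2, 1]) == {(0, 0): 1, (1, 0): 1, (0, 1): 1}
assert sorted(Pi_factors([2, 1])) == [(0, 1), (1, 0)]
```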
We will make extensive use of the \emph{plethystic notation} (cf. \cite{Haglund-Book-2008}*{Chapter~1}). We also need several linear operators on $\Lambda$.
\begin{definition}[\protect{\cite{Bergeron-Garsia-ScienceFiction-1999}*{3.11}}]
\label{def:nabla}
We define the linear operator $\nabla \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \nabla \widetilde{H}_\mu = e_{\lvert \mu \rvert}[B_\mu] \widetilde{H}_\mu. \]
\end{definition}
\begin{definition}
\label{def:pi}
We define the linear operator $\mathbf{\Pi} \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \mathbf{\Pi} \widetilde{H}_\mu = \Pi_\mu \widetilde{H}_\mu \] where we conventionally set $\Pi_{\varnothing} \coloneqq 1$.
\end{definition}
\begin{definition}
\label{def:delta}
For $f \in \Lambda$, we define the linear operators $\Delta_f, \Delta'_f \colon \Lambda \rightarrow \Lambda$ on the eigenbasis of Macdonald polynomials as \[ \Delta_f \widetilde{H}_\mu = f[B_\mu] \widetilde{H}_\mu, \qquad \qquad \Delta'_f \widetilde{H}_\mu = f[B_\mu-1] \widetilde{H}_\mu. \]
\end{definition}
Observe that on the vector space of homogeneous symmetric functions of degree $n$, denoted by $\Lambda^{(n)}$, the operator $\nabla$ equals $\Delta_{e_n}$.
\begin{definition}[\protect{\cite{DAdderio-Iraci-VandenWyngaerd-Theta-2021}*{(28)}}]
\label{def:theta}
For any symmetric function $f \in \Lambda^{(n)}$ we define the \emph{Theta operators} on $\Lambda$ in the following way: for every $F \in \Lambda^{(m)}$ we set
\begin{equation*}
\Theta_f F \coloneqq
\left\{\begin{array}{ll}
0 & \text{if } n \geq 1 \text{ and } m=0 \\
f \cdot F & \text{if } n=0 \text{ and } m=0 \\
\mathbf{\Pi} f \left[\frac{X}{M}\right] \mathbf{\Pi}^{-1} F & \text{otherwise}
\end{array}
\right. ,
\end{equation*}
and we extend by linearity the definition to any $f, F \in \Lambda$.
\end{definition}
It is clear that $\Theta_f$ is linear. In addition, if $f$ is homogeneous of degree $k$, then so is $\Theta_f$:
\[\Theta_f \Lambda^{(n)} \subseteq \Lambda^{(n+k)} \qquad \text{ for } f \in \Lambda^{(k)}. \]
Finally, we need to refer to \cite{Bergeron-Garsia-Sergel-Xin-2016}*{Algorithm~4.1} (see also \cite{Bergeron-Garsia-Sergel-Xin-Remarkable-2016}*{Definition~1.1, Theorem~2.5}).
\begin{definition}
Let $m, n > 0$. Let $a,b,c,d \in \mathbb{N}$ such that $a+c=m$, $b+d=n$, $ad-bc = \gcd(m,n)$. We recursively define $Q_{m,n}$ as an operator on $\Lambda$ by \[ Q_{m,n} = \frac{1}{M} \left( Q_{c,d} Q_{a,b} - Q_{a,b} Q_{c,d} \right), \] with base cases \[ Q_{1,0} = D_0 = \mathsf{id} - M \Delta_{e_1} \quad \text{ and } \quad Q_{0,1} = - \underline{e_1} \] (where $\underline{f}$ is the multiplication by $f$).
\end{definition}
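The recursion above only requires some choice of the quadruple $a,b,c,d$; for small cases a brute-force search suffices. The following sketch (an illustration only; one could also obtain the splitting from continued-fraction expansions) finds such a quadruple and checks it on an example.

```python
from math import gcd

def split(m, n):
    """Find a, b, c, d >= 0 with a + c = m, b + d = n and
    a*d - b*c = gcd(m, n), as required by the recursive definition
    of Q_{m,n}.  Brute force, for illustration only."""
    g = gcd(m, n)
    for a in range(m + 1):
        for b in range(n + 1):
            c, d = m - a, n - b
            if (a, b) not in ((0, 0), (m, n)) and a * d - b * c == g:
                return (a, b), (c, d)
    raise ValueError("no splitting found")

# e.g. Q_{3,2} is built from Q_{2,1} and Q_{1,1}
assert split(3, 2) == ((2, 1), (1, 1))
```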
\begin{definition}
For a coprime pair $(a,b)$ and $f \in \Lambda^{(d)}$, we define $F_{a,b}(f)$ as follows. Let \[ f = \sum_{\lambda \vdash d} c_\lambda(q,t) \left( \frac{qt}{qt-1} \right)^{\ell(\lambda)} h_\lambda \left[ \frac{1-qt}{qt} X \right]. \] Then, we define \[ F_{a,b}(f) \coloneqq \sum_{\lambda \vdash d} c_\lambda(q,t) \prod_{i=1}^{\ell(\lambda)} Q_{\lambda_i a, \lambda_i b}(1). \]
\end{definition}
For our convenience, we use the shorthands \[ e_{m,n} \coloneqq F_{a,b}(e_d), \qquad p_{m,n} \coloneqq F_{a,b}(p_d) \] where $m = ad, n = bd$, and $\gcd(a,b) = 1$. Beware: $e_{4,2} = F_{2,1}(e_2)$, but $e_{42} = e_4 e_2$.
\section{Combinatorial definitions}
The objects we are concerned with are \emph{rectangular Dyck paths} and \emph{rectangular paths}. All the following definitions are classical for rectangular Dyck paths \cite{Bergeron-Garsia-Sergel-Xin-2016} and new for rectangular paths.
\subsection{Rectangular paths}
\begin{definition}
A \emph{rectangular path} of size $m \times n$ is a lattice path composed of unit North and East steps, going from $(0,0)$ to $(m,n)$, and ending with an East step. A \emph{rectangular Dyck path} is a rectangular path that lies weakly above the diagonal $my = nx$ (called \emph{main diagonal}).
\end{definition}
\begin{figure}
\caption{A $7 \times 9$ rectangular path with its base diagonal and the main diagonal (dashed).}
\label{fig:rectangular-path}
\end{figure}
We denote the sets of rectangular paths and rectangular Dyck paths of size $m \times n$ as $\mathsf{RP}(m,n)$ and $\mathsf{RD}(m,n)$, respectively.
\begin{definition}
For an $m \times n$ rectangular path $\pi$, let $a_i$ be the (signed) horizontal distance between the starting point of the $i$-th North step and the main diagonal. We define the \emph{area word} of the path to be the sequence $(a_1, \dots, a_n)$. Set $s \coloneqq - \min \{a_i \mid 1 \leq i \leq n\}$, which we call the \emph{shift} of the path. Note that $s = 0$ if $\pi$ is a rectangular Dyck path, and $s > 0$ otherwise.
\end{definition}
\begin{definition}
We call the diagonal $my = n(x-s)$, which is the lowest diagonal that intersects the path, the \emph{base diagonal}.
\end{definition}
\begin{definition}
The \emph{area} of a rectangular path $\pi$ is $\mathsf{area}(\pi) \coloneqq \sum_{i=1}^{n} \lfloor a_i + s \rfloor$. This is the number of whole squares that lie entirely between the path $\pi$ and its base diagonal.
\end{definition}
For example, the path in Figure~\ref{fig:rectangular-path} has area word
\[\left(0,-\frac{11}{9}, -\frac{4}{9}, \frac{1}{3}, -\frac{8}{9}, -\frac{1}{9}, \frac{2}{3}, -\frac{5}{9}, \frac{2}{9}\right) \approx
(0, \, -1.22,\, -0.44, \, 0.33, \, -0.88, \, -0.11, \, 0.66, \, -0.55, \, 0.22). \]
Thus, its shift is $\frac{11}{9}$ and its area is $5$.
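The computation of the area word, shift, and area can be sketched as follows; the explicit step string below is our reconstruction of a path consistent with the area word above (with $m=7$, $n=9$) and should be treated as an assumption, not as the exact path of the figure.

```python
from fractions import Fraction
from math import floor

def area_word(path, m, n):
    """Area word of an m x n rectangular path written as a string of
    'N'/'E' steps: a_i is the signed horizontal distance m*y/n - x from
    the start of the i-th North step to the main diagonal my = nx."""
    x, y, word = 0, 0, []
    for step in path:
        if step == 'N':
            word.append(Fraction(m * y, n) - x)
            y += 1
        else:
            x += 1
    return word

# A step string consistent with the area word displayed above
path = 'N' + 'EE' + 'NNN' + 'EE' + 'NNN' + 'EE' + 'NN' + 'E'
w = area_word(path, 7, 9)
s = -min(w)                                  # shift
area = sum(floor(a + s) for a in w)          # area
assert s == Fraction(11, 9)
assert area == 5
```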
\subsection{Decorated rectangular paths}
In a similar fashion as the rise version of the Delta conjecture \cite{Haglund-Remmel-Wilson-2018} (which is now a theorem \cites{DAdderioMellit2022CompositionalDelta,Blasiak-Haiman-Morse-Pun-Seeling-Extended-Delta-2021}), we introduce the concept of \emph{decorated rises} for rectangular paths.
\begin{definition}
The \emph{rises} of a rectangular path are the indices of the rows containing a North step that immediately follows another North step. A \emph{decorated rectangular path} is a rectangular path with a given subset $dr$ of its rises.
\end{definition}
\begin{definition}
For a decorated rectangular path of size $(m+k) \times (n+k)$ with $k$ decorated rises, we define the \emph{broken diagonal} to be the broken segment built as follows. Let $(x_1, y_1) = (0,0)$, then for $1 \leq i < n+k$, define
\begin{equation*}
(x_{i+1}, y_{i+1}) =
\left\{\begin{array}{ll}
(x_i + \frac{m}{n}, y_i+1) & \text{if } i \not \in dr \\
(x_i+1, y_i+1) & \text{if } i \in dr. \\
\end{array}
\right.
\end{equation*}
The broken diagonal is the broken segment joining $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ for all $i$, that is, the line that starts at $(0,0)$ and then proceeds with slope $\frac{n}{m}$ in rows not containing decorated rises, and with slope $1$ in rows that contain decorated rises.
\end{definition}
Note that, if the path has no decorated rises, then the broken diagonal coincides with the main diagonal.
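The construction of the broken diagonal translates directly into code (an illustrative Python sketch, not from the paper; `dr` is the set of decorated rows, and the function name is ours):

```python
from fractions import Fraction

def broken_diagonal(m, n, k, dr):
    """Vertices (x_i, y_i) of the broken diagonal of a decorated
    (m+k) x (n+k) path with decorated rises dr: the segment rises with
    slope n/m in undecorated rows and slope 1 in decorated rows."""
    pts = [(Fraction(0), Fraction(0))]
    x = Fraction(0)
    for i in range(1, n + k):
        x += 1 if i in dr else Fraction(m, n)
        pts.append((x, Fraction(i)))
    return pts
```

With no decorated rises the vertices all lie on the main diagonal, as noted below.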
\begin{definition}
We define a \emph{decorated rectangular Dyck path} to be a decorated rectangular path that lies weakly above the broken diagonal.
\end{definition}
See Figure~\ref{fig:decorated-dyck} for an example of such a path. We use a $\ast$ to mark the decorated rises.
\begin{figure}
\caption{A decorated rectangular Dyck path with its broken diagonal.}
\label{fig:decorated-dyck}
\end{figure}
The definitions of \emph{area word} and \emph{area} extend to decorated paths as well, using the broken diagonal in place of the main diagonal.
\begin{definition}
For an $(m+k) \times (n+k)$ decorated rectangular path $(\pi, dr)$ with $k$ decorated rises, let $a_i$ be the horizontal distance between the starting point of the $i$-th North step and the broken diagonal. We define the \emph{area word} of the path as the sequence $(a_1, \dots, a_{n+k})$. We define $s \coloneqq - \min \{a_i \mid 1 \leq i \leq n+k\}$ to be the \emph{shift} of the path.
\end{definition}
\begin{definition}
We define the \emph{area} of a decorated rectangular path $\pi$ as \[ \mathsf{area}(\pi) \coloneqq \sum_{i \not \in dr} \lfloor a_i + s \rfloor. \]
\end{definition}
The area of the path in Figure~\ref{fig:decorated-dyck} is equal to $3$.
\subsection{Labelled paths}
Finally, we need to introduce labelled objects.
\begin{definition}
A \emph{labelling} of a (decorated) rectangular (Dyck) path is an assignment of a positive integer label to each North step of the path, such that consecutive North steps are assigned strictly increasing labels. A \emph{labelled (decorated) rectangular (Dyck) path} is a (decorated) rectangular (Dyck) path together with a labelling.
\end{definition}
We say that a labelling is \emph{standard} if the set of labels is $[n] \coloneqq \{1, \dots, n\}$, where $n$ is the height of the path.
\begin{figure}
\caption{A $7 \times 9$ labelled rectangular path (left) and labelled decorated Dyck path (right).}
\end{figure}
We denote by $w_i$ the label assigned to the $i$-th North step of the path.
We also denote the sets of labelled rectangular paths and labelled rectangular Dyck paths of size $m \times n$ as $\mathsf{LRP}(m,n)$ and $\mathsf{LRD}(m,n)$ respectively, and the sets of labelled decorated rectangular paths and labelled decorated rectangular Dyck paths of size $(m+k) \times (n+k)$ with $k$ decorated rises as $\mathsf{LRP}(m+k,n+k)^{\ast k}$ and $\mathsf{LRD}(m+k,n+k)^{\ast k}$, respectively.
\begin{definition}
Given a labelled (decorated) rectangular (Dyck) path $(\pi, dr, w)$, we define $x^w = \prod_i x_{w_i}$. With an abuse of notation, we will sometimes write $\pi$ to mean $(\pi, dr, w)$, in which case we will have $x^\pi = x^w$.
\end{definition}
Given a rectangular (Dyck) path $\pi$, the cells in the rectangular grid going from $(0,0)$ to $(m,n)$ that lie above the path form the Ferrers diagram of a partition $\mu(\pi)$.
Here we extend the definition of dinv given in \cite{Bergeron-Garsia-Sergel-Xin-2016} (see also \cite{mellit2021toric}) for rectangular Dyck paths to any rectangular path. We will describe it in two different ways.
\begin{definition}
Let $\pi$ be an $m \times n$ rectangular path, and let $1 \leq i, j \leq n$. We say that $i$ \emph{attacks} $j$ in $\pi$ (or $(i,j)$ is an \emph{attack relation} for $\pi$) if \[ (a_i, i) <_{\text{lex}} (a_j, j) <_{\text{lex}} (a_i + {\textstyle\frac{m}{n}}, i). \]
\end{definition}
At this point, we can define the dinv of an unlabelled path.
\begin{definition}
We define the \emph{path dinv} of a rectangular path $\pi$ as \[ \mathsf{pdinv}(\pi) \coloneqq \# \left\{ c \in \mu(\pi) \mid \textstyle\frac{a}{\ell+1} \leq \frac{m}{n} < \frac{a+1}{\ell} \right\} \] where $a = a_\mu(c)$ and $\ell = \ell_\mu(c)$, and the second inequality always holds if $\ell = 0$.
\end{definition}
For labelled paths, we need some extra steps.
\begin{definition}
We define the \emph{temporary dinv} of a labelled rectangular path $(\pi, w)$ as
\[ \mathsf{tdinv}(\pi, w) \coloneqq \# \{ 1 \leq i, j \leq n \mid w_i < w_j \text{ and } i \text{ attacks } j \}. \]
\end{definition}
\begin{definition}
We define the \emph{maximal temporary dinv} of a rectangular path $\pi$ as
\[ \mathsf{maxtdinv}(\pi) \coloneqq \# \{ 1 \leq i, j \leq n \mid i \text{ attacks } j \}. \] Note that this is the same as $\max\{\mathsf{tdinv}(\pi, w) \mid w \in W(\pi)\}$, where $W(\pi)$ is the set of all possible labellings of $\pi$.
\end{definition}
The following is a simpler description for the difference $\mathsf{pdinv}(\pi) - \mathsf{maxtdinv}(\pi)$, given in \cite{Hicks-Sergel-2015}.
\begin{definition}
We define the \emph{dinv correction} of a rectangular path $\pi$ as \[ \mathsf{cdinv}(\pi) \coloneqq \# \left\{ c \in \mu(\pi) \mid \textstyle\frac{a+1}{\ell+1} \leq \frac{m}{n} < \frac{a}{\ell} \right\} - \# \left\{ c \in \mu(\pi) \mid \textstyle\frac{a}{\ell} \leq \frac{m}{n} < \frac{a+1}{\ell+1} \right\}, \] where $a = a_\mu(c)$ and $\ell = \ell_\mu(c)$.
\end{definition}
We will provide visual interpretations of $\mathsf{tdinv}$ and $\mathsf{cdinv}$ later in this section.
\begin{theorem}[\cite{Hicks-Sergel-2015}*{Theorem~2}]
\label{thm:cdinv-dyck}
For any rectangular Dyck path $\pi$, we have \[ \mathsf{cdinv}(\pi) = \mathsf{pdinv}(\pi) - \mathsf{maxtdinv}(\pi). \]
\end{theorem}
We extend this result to all rectangular paths, without the restriction of lying above the main diagonal.
\begin{theorem}
\label{thm:cdinv}
For any rectangular path $\pi$, we have \[ \mathsf{cdinv}(\pi) = \mathsf{pdinv}(\pi) - \mathsf{maxtdinv}(\pi) - \# \{ i \mid a_i(\pi) < 0 \} - \# \left\{ i \mid a_i(\pi) < -\frac{m}{n} \right\}. \]
\end{theorem}
\begin{proof}
Let $\pi'$ be the path obtained from $\pi$ by adding $n$ North steps at the beginning, and $m$ East steps at the end. By construction, $\mu(\pi') = \mu(\pi)$ and the slope is the same, so $\mathsf{cdinv}(\pi') = \mathsf{cdinv}(\pi)$. By \Cref{thm:cdinv-dyck}, this quantity is also equal to $\mathsf{pdinv}(\pi') - \mathsf{maxtdinv}(\pi')$. But again, $\mathsf{pdinv}(\pi)$ only depends on $\mu(\pi)$, so $\mathsf{pdinv}(\pi') = \mathsf{pdinv}(\pi)$.
We only need to compare $\mathsf{maxtdinv}(\pi)$ and $\mathsf{maxtdinv}(\pi')$. It is immediate that $(i,j)$ is an attack relation in $\pi$ if and only if $(n+i, n+j)$ is an attack relation in $\pi'$, so we only need to count attack relations in $\pi'$ where either $i \leq n$ or $j \leq n$. Since the first $n$ steps of $\pi'$ are all North steps by construction, we cannot possibly have attack relations where both $i$ and $j$ are at most $n$.
We have that, whenever $a_i(\pi) < 0$ (i.e.\ the corresponding North step begins strictly below the main diagonal), $n+i$ is attacked exactly once in $\pi'$ by some $j \leq n$. In fact, we have $0 \leq a_{n+i}(\pi') = m + a_i(\pi) < m$, and since $a_j(\pi') = \frac{m}{n}(j-1)$ for $j \leq n$, there exists exactly one $j$ such that $\frac{m}{n}(j-1) \leq a_{n+i}(\pi') < \frac{m}{n}j$ (which is exactly the attack relation, as $j < n+i$).
For the same reason, whenever $a_i(\pi) < -\frac{m}{n}$ (i.e.\ the corresponding North step ends strictly below the main diagonal), $n+i$ attacks exactly one $j \leq n$ in $\pi'$. In fact, if that is the case, we have $a_{n+i}(\pi') = m + a_i(\pi) < \frac{m}{n} (n-1)$, so there exists exactly one $j$ such that $a_{n+i}(\pi') < \frac{m}{n}(j-1) \leq a_{n+i}(\pi') + \frac{m}{n}$ (which is exactly the attack relation, as $n+i > j$).
Summarising, we have
\begin{align*}
\mathsf{cdinv}(\pi) & = \mathsf{cdinv}(\pi') \\
& = \mathsf{pdinv}(\pi') - \mathsf{maxtdinv}(\pi') \\
& = \mathsf{pdinv}(\pi) - \mathsf{maxtdinv}(\pi') \\
& = \mathsf{pdinv}(\pi) - \mathsf{maxtdinv}(\pi) - \# \{ i \mid a_i(\pi) < 0 \} - \# \left\{ i \mid a_i(\pi) < -\frac{m}{n} \right\}
\end{align*}
as desired.
\end{proof}
Note that the term $\# \{ i \mid a_i(\pi) < 0 \}$ counts the number of North steps of the path that begin below the main diagonal, in the same fashion as in the tertiary dinv (or bonus dinv) for square paths \cites{Loehr-Warrington-square-2007, Leven-2016}.
To obtain a unified definition of the dinv of rectangular paths that matches the expected symmetric functions, it turns out that we have to keep the term $\# \{ i \mid a_i(\pi) < 0 \}$ and disregard the term $\# \{ i \mid a_i(\pi) < -\frac{m}{n} \}$. This finally leads us to the following definition.
\begin{definition}
We define the \emph{dinv} of a labelled rectangular path $(\pi, w)$ as \[ \mathsf{dinv}(\pi, w) \coloneqq \mathsf{tdinv}(\pi, w) + \mathsf{cdinv}(\pi) + \# \{ i \mid a_i(\pi) < 0 \}. \]
\end{definition}
\begin{figure*}
\caption{Calculation of the dinv of a rectangular square path.}
\label{fig:dinv}
\end{figure*}
We now give a visual interpretation of the various summands.
The temporary dinv counts all pairs of North steps $(i,j)$ such that $w_i < w_j$ and the $j$-th North step begins between the line $y = \frac{n}{m}(x + a_i)$ and the line $y =\frac{n}{m}(x + a_i) + 1$, with ties broken by comparing $i$ and $j$. In Figure~\ref{fig:dinv}, we have drawn these two lines for all North steps of the path and marked the beginnings of the North steps that lie between them and whose labels satisfy the required inequality. We see that the contribution to the dinv is $4$.
The dinv correction is split into two parts. The first summand counts the number of cells $c$ above the path such that the two lines parallel to the main diagonal and starting from the endpoints of the East step below $c$ both intersect the North step to the right of $c$ (bottom endpoint excluded, but top endpoint included).
The second summand counts the number of cells $c$ above the path such that the two lines parallel to the main diagonal and starting from the endpoints of the North step to the right of $c$ both intersect the East step below $c$ (right endpoint included, but left endpoint excluded). Notice that the two sets cannot simultaneously be non-empty: the first one is empty if $m \leq n$, and the second one is empty if $m \geq n$. In Figure~\ref{fig:dinv}, the path has size $5 \times 7$, so the first set is empty. We have greyed out the cells counted by the second summand, which contribute $-4$ to the dinv.
The bonus dinv, as previously mentioned, counts the number of North steps of the path that begin below the main diagonal. In Figure~\ref{fig:dinv} there are $3$ such North steps.
Thus the path in Figure~\ref{fig:dinv} has dinv equal to $4 - 4 + 3 = 3$.
\section{Conjectures}
With the previous definitions in mind, we can state the \emph{rectangular shuffle theorem} \cite{mellit2021toric} and several new conjectures, which were verified by computer for all paths with semiperimeter $m+n$ up to $13$.
\begin{theorem}[\cite{mellit2021toric}]
For any $m, n \in \mathbb{N}$, we have \[ e_{m,n} = \sum_{\pi \in \mathsf{LRD}(m,n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\label{thm:rectangular-dyck}
\end{theorem}
Conjecturally, we extend this result to rectangular paths, as follows.
\begin{conjecture}
\label{conjecture:rectangular-paths}
For any $m, n \in \mathbb{N}$, and $d = \gcd(m,n)$, we have \[ \frac{[m]_q}{[d]_q} p_{m,n} = \sum_{\pi \in \mathsf{LRP}(m,n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
\begin{figure}
\caption{The set of $2 \times 3$ standard rectangular paths, with their dinv (in blue) and area (in red).}
\label{fig:rectangular-paths-23}
\end{figure}
\begin{example}
Let $m=2$ and $n=3$. In \Cref{conjecture:rectangular-paths}, we can check for example that the Hilbert series (that is, the scalar product with $h_{1^n}$) coincides with the sum over all $2 \times 3$ standard rectangular paths of the monomial $q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)}$. In fact, we have
\[ \frac{[2]_q}{[1]_q} \langle p_{2,3}, h_{1^3} \rangle = (1+q)(q+t+2) = 1 + q + 1 + t + q + q^2 + q + qt, \]
which coincides with the values in Figure~\ref{fig:rectangular-paths-23}.
\end{example}
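The displayed identity is an elementary two-variable polynomial check, easy to reproduce by machine (a throwaway sketch, not part of the paper; polynomials are encoded as dictionaries mapping exponent pairs $(\deg_q, \deg_t)$ to coefficients, and the monomial list is read off the displayed sum):

```python
from collections import Counter

def poly_mul(p, q):
    """Multiply two polynomials in q and t, encoded as {(dq, dt): coeff}."""
    out = Counter()
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            out[(a + d, b + e)] += c * f
    return +out  # drop zero coefficients

# [2]_q / [1]_q = 1 + q, and <p_{2,3}, h_{1^3}> = q + t + 2:
lhs = poly_mul({(0, 0): 1, (1, 0): 1}, {(1, 0): 1, (0, 1): 1, (0, 0): 2})

# the eight monomials 1 + q + 1 + t + q + q^2 + q + qt from the paths:
rhs = Counter([(0, 0), (1, 0), (0, 0), (0, 1), (1, 0), (2, 0), (1, 0), (1, 1)])
```

Both sides expand to $2 + 3q + q^2 + t + qt$.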
We also have (univariate) analogues of the Delta conjecture and the Delta square conjecture for rectangular (Dyck) paths, using Theta operators.
\begin{conjecture}
\label{conj:rectangular-dyck-delta}
For any $m, n \in \mathbb{N}$, we have \[ \left. \Theta_{e_k} e_{m,n} \right\rvert_{q=1} = \sum_{\pi \in \mathsf{LRD}(m+k,n+k)^{\ast k}} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
\begin{figure}
\caption{The set of $3 \times 4$ standard rectangular Dyck paths with two decorated rises, with their area.}
\label{fig:rectangular-delta}
\end{figure}
\begin{conjecture}
\label{conj:rectangular-delta}
For any $m, n \in \mathbb{N}$, and $d = \gcd(m,n)$, we have \[ \left. \frac{[m+k]_q}{[d]_q} \Theta_{e_k} p_{m,n} \right\rvert_{q=1} = \sum_{\pi \in \mathsf{LRP}(m+k,n+k)^{\ast k}} t^{\mathsf{area}(\pi)} x^\pi. \]
\end{conjecture}
Conjectures~\ref{conj:rectangular-dyck-delta} and \ref{conj:rectangular-delta} have been checked by computer up to semiperimeter $13$. See Figure~\ref{fig:rectangular-delta} for the case $m=1$, $n=2$, $k=2$: indeed \[ \left. \langle \Theta_{e_2} e_{1,2}, h_{1^4} \rangle \right\rvert_{q=1} = t^2 + 5t + 11, \] which coincides with the combinatorial expression.
These conjectures naturally raise the following open problem.
\begin{problem}
Find a statistic $\mathsf{qstat} \colon \mathsf{LRP}(m+k,n+k)^{\ast k} \rightarrow \mathbb{N}$ such that \[ \Theta_{e_k} e_{m,n} = \sum_{\pi \in \mathsf{LRD}(m+k,n+k)^{\ast k}} q^{\mathsf{qstat}(\pi)} t^{\mathsf{area}(\pi)} x^\pi \] and \[ \frac{[m+k]_q}{[d]_q} \Theta_{e_k} p_{m,n} = \sum_{\pi \in \mathsf{LRP}(m+k,n+k)^{\ast k}} q^{\mathsf{qstat}(\pi)}t^{\mathsf{area}(\pi)} x^\pi. \]
\end{problem}
Unlike in the square case, simply ignoring the decorations on the rises to compute the dinv does not give the expected $\mathsf{qstat}$.
\section{The sweep process}
In this section, we show that the sweep process in \cite{mellit2021toric}*{Subsection~4.1} also gives the correct outcome for rectangular paths, without the restriction of staying above the main diagonal.
We refer to \cite{mellit2021toric}*{Proposition~3.3} for the definitions of the operators $d_+$ and $d_-$, to \cite{mellit2021toric}*{Subsection~3.5} for the definition of characteristic function of a Dyck path with a marking, and to \cite{mellit2021toric}*{Section~4} and the first paragraph of \cite{mellit2021toric}*{Theorem~4.2} for how they relate to the following sweep process. We do not report all the definitions here because we are only interested in certain combinatorial properties of the sweep process and how they change between rectangular Dyck paths and rectangular paths, rather than in the process itself, but we encourage the interested reader to compare \Cref{thm:sweep} and \cite{mellit2021toric}*{Theorem~4.2}.
\begin{definition}[Sweep process]
For $\pi \in \mathsf{RP}(m, n)$, define $\mathsf{sweep}(\pi)$ through the algorithm that follows.
Initialize $\varphi = 1 \in V_0$.
Consider a line $l$ with slope $\frac{n}{m} - \epsilon$, with $\epsilon < \frac{1}{(2mn)^2}$ (so that it ``breaks ties'' but does not change the order in which the lattice points are hit with respect to a line of slope $\frac{n}{m}$), which stays fully above $\pi$.
Move $l$ downward and modify $\varphi$ every time $l$ passes through a lattice point $p$ weakly below $\pi$ and different from $(m, n)$.
At each lattice point $p$, modify $\varphi$ as follows:
\begin{itemize}
\item[(A)] if $p$ is between a NE pair of steps, apply $d_+$;
\item[(B)] if $p$ is between an EN pair of steps, or $p = (0, 0)$ and the path starts with a N step, apply $d_-$;
\item[(C)] if $p$ is between a NN pair of steps, apply $q^{-a}\frac{d_-d_+ - d_+d_-}{q-1}$, where $a$ is the number of vertical steps of $\pi$ crossed by $l$ to the right of $p$;
\item[(D)] if $p$ is between an EE pair of steps, or $p = (0, 0)$ and the path starts with an E step, multiply by $q^a$ (where $a$ is defined as in the previous case);
\item[(E)] if $p$ is strictly below $\pi$, multiply by $t$.
\end{itemize}
The algorithm stops when $l$ is entirely below the path $\pi$.
\end{definition}
See Figure~\ref{fig:sweeping} for an illustration of the sweeping process.
\begin{figure}
\caption{The sweeping process.}
\label{fig:sweeping}
\end{figure}
\begin{theorem}
\label{thm:sweep}
For $\pi$ any rectangular path, we have \[ \mathsf{sweep}(\pi) = t^{\mathsf{area}(\pi)} \sum_{w \in W(\pi)} q^{\mathsf{dinv}(\pi, w)} x^w, \]
where $W(\pi)$ is the set of possible labellings of $\pi$.
\end{theorem}
\begin{proof}
As in \cite{mellit2021toric}*{Theorem~4.2}, plotting the attack relations gives a Dyck path $\tilde\pi$ with a set of marked corners $\Sigma_\pi$ such that
\[ \chi(\tilde\pi, \Sigma_\pi) = \sum_{w \in W(\pi)} q^{\mathsf{tdinv}(\pi, w)} x^w, \]
where $\chi(\tilde\pi, \Sigma_\pi)$ is the characteristic function of a Dyck path (see \cite{mellit2021toric}*{Subsection~3.5}) and such that $\chi(\tilde\pi, \Sigma_\pi)$ is the result of the operations $(A), (B)$, and $(C)$ without the factor $q^{-a}$.
It is also clear that operation $(E)$ gives $t^{\mathsf{area}(\pi)}$, so all that is left to show is that the power of $q$ produced by rules $(C)$ and $(D)$ equals \[ \# \left\{ c \in \mu(\pi) \mid \textstyle\frac{a+1}{\ell+1} \leq \frac{m}{n} < \frac{a}{\ell} \right\} - \# \left\{ c \in \mu(\pi) \mid \textstyle\frac{a}{\ell} \leq \frac{m}{n} < \frac{a+1}{\ell+1} \right\} + \# \{ i \mid a_i < 0 \}. \]
Let us again define $\pi'$ to be the path obtained from $\pi$ by adding $n$ North steps at the beginning, and $m$ East steps at the end. Since $\pi'$ is a rectangular Dyck path, by the proof of \cite{mellit2021toric}*{Theorem~4.2} we know that the power of $q$ produced by rules $(C)$ and $(D)$ equals $\mathsf{cdinv}(\pi')$, which is also equal to $\mathsf{cdinv}(\pi)$ as it only depends on $\mu(\pi') = \mu(\pi)$.
We need to compare the power of $q$ produced by rules $(C)$ and $(D)$ applied to $\pi'$ and $\pi$.
The result is the same for lattice points in between steps of $\pi'$ that were already in $\pi$, as we are not adding any North step to their right. For the lattice points in between the last $m$ East steps of $\pi'$, the exponent of $q$ is always $0$, as their corresponding value of $a$ is $0$.
For the lattice points in between the first $n$ steps of $\pi'$, we have to apply rule $(C)$, so their total contribution equals minus the number of North steps of $\pi'$ intersected by some line of slope $\frac{n}{m} - \varepsilon$ starting from $(0,j)$ with $j < n$, which is exactly the number of North steps of $\pi$ ending strictly below the main diagonal, that is, the number of $i$ such that $a_i(\pi) < - \frac{m}{n}$.
Finally, the point $(0,n)$ in $\pi$ switches from rule $(D)$ to rule $(A)$, or from rule $(B)$ to rule $(C)$, depending on whether $\pi$ starts with an East or a North step respectively; in either case, the difference between its contributions in $\pi'$ and in $\pi$ is given by minus the number of North steps of $\pi$ that are crossed by the line with slope $\frac{n}{m} - \varepsilon$ starting from $(0,0)$, which is exactly the number of $i$ such that $- \frac{m}{n} \leq a_i(\pi) < 0$.
In total, we get that the difference in the exponents of $q$ produced by rules $(C)$ and $(D)$ applied to $\pi'$ and $\pi$ is $- \# \{ i \mid a_i(\pi) < 0 \}$, so we have
\begin{align*}
\mathsf{sweep}(\pi) & = t^{\mathsf{area}(\pi)} q^{\mathsf{cdinv}(\pi)} q^{\# \{ i \mid a_i(\pi) < 0 \}} \sum_{w \in W(\pi)} q^{\mathsf{tdinv}(\pi, w)} x^w \\
& = t^{\mathsf{area}(\pi)} \sum_{w \in W(\pi)} q^{\mathsf{dinv}(\pi, w)} x^w
\end{align*}
as desired.
\end{proof}
\section{The coprime case}
In this section, we prove \Cref{conjecture:rectangular-paths} in the coprime case:
\begin{theorem}
If $\gcd(m, n) = 1$, then
\[ [m]_q \, p_{m,n} = \sum_{\pi \in \mathsf{LRP}(m,n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi. \]
\label{thm:coprime-case}
\end{theorem}
\begin{proof}
Since $\gcd(m,n) = 1$, we have $e_{m,n} = p_{m,n} = F_{m,n}(e_1)$.
Therefore, in order to prove \Cref{thm:coprime-case}, it is enough to show that the set of (unlabelled) rectangular paths $\mathsf{RP}(m,n)$ can be partitioned into subsets $\mathcal{P}_1, \dotsc, \mathcal{P}_h$ of cardinality $m$ such that:
\begin{enumerate}[label=(\arabic*)]
\item each $\mathcal{P}_i$ contains exactly one Dyck path $\pi_0 \in \mathsf{RD}(m,n)$;
\item for each $\mathcal{P}_i$ and $0 \leq k < m$, there exists a (unique) element $\pi_k \in \mathcal{P}_i$ such that $\mathsf{sweep}(\pi_k) = q^k \mathsf{sweep}(\pi_0)$.
\end{enumerate}
Indeed, if such a partition exists, then
\begin{align*}
[m]_q \, p_{m,n} &= [m]_q \, e_{m,n} = [m]_q \sum_{\pi \in \mathsf{LRD}(m,n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi \\
&= [m]_q \sum_{\pi \in \mathsf{RD}(m,n)} \mathsf{sweep}(\pi) \\
&= \sum_{\pi \in \mathsf{RP}(m,n)} \mathsf{sweep}(\pi) \\
&= \sum_{\pi \in \mathsf{LRP}(m,n)} q^{\mathsf{dinv}(\pi)} t^{\mathsf{area}(\pi)} x^\pi,
\end{align*}
where we used \Cref{thm:rectangular-dyck} in the first line, \Cref{thm:sweep} in the second line, the partition $\mathsf{RP}(m,n) = \mathcal{P}_1 \sqcup \dotsb \sqcup \mathcal{P}_h$ in the third line, and \Cref{thm:sweep} again in the fourth line.
Next, we construct a partition of $\mathsf{RP}(m,n)$ with the desired properties.
Consider an (unlabelled) rectangular path $\pi \in \mathsf{RP}(m, n)$.
Denote by $d_i \in \mathbb{Q}$ the signed horizontal distance between the endpoint of the $i$-th horizontal step of $\pi$ and the main diagonal (for $1 \leq i \leq m$).
Fix now an integer $k$ with $0 \leq k < m$.
The $k$-th horizontal step divides the path $\pi$ into two parts $\pi_0$ and $\pi_1$, where $\pi_1$ starts immediately after the $k$-th horizontal step and $\pi_0$ ends with the $k$-th horizontal step.
Define the path $\phi(\pi)=\phi_k(\pi)$
as the concatenation of $\pi_1$ followed by $\pi_0$ (we fix $\phi_0 = \mathsf{id}$).
Also, let
\[ r(\pi) = r_k(\pi)
=\begin{cases} 0 & \textnormal{if } k = 0 \\
\# \left\{ i \mid d_k> d_i \geq 0 \right\} & \textnormal{if } d_k\geq 0 \\
- \# \left\{ i \mid 0 \geq d_i > d_k \right\} & \textnormal{if } d_k < 0, \end{cases}\]
that is, up to a sign, the number of horizontal steps whose endpoint lies between the main diagonal and the diagonal parallel to it that passes through the endpoint of the $k$-th horizontal step.
We partition $\mathsf{RP}(m, n)$ as follows.
If $\pi \in \mathsf{RD}(m,n)$ is the $i$-th Dyck path, define $\mathcal{P}_i = \{ \phi_k(\pi) \mid 0 \leq k < m \}$.
The sets $\mathcal{P}_1, \dotsc, \mathcal{P}_h$ form a partition of $\mathsf{RP}(m, n)$.
Since $\gcd(m, n) = 1$, $\mathcal{P}_i$ contains no Dyck path other than $\pi$, so the partition satisfies property (1) above.
By definition of $r_k$, we have that $\{ r_k(\pi) \mid 0 \leq k < m \} = \{0, 1, \dotsc, m-1\}$. See Figure~\ref{fig:partition RP} for an example.
Then the partition satisfies property (2) thanks to \Cref{lemma} below.
\end{proof}
\begin{figure}
\caption{A rectangular Dyck path $\pi$ and $\phi_k(\pi)$ for all $0 \leq k < m$. The horizontal steps of $\pi$ are marked by integers indicating their order with respect to the distance between their endpoint and the main diagonal.}
\label{fig:partition RP}
\end{figure}
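The cut-and-cycle map $\phi_k$ and the statistic $r_k$ are easy to experiment with (an illustrative Python sketch under our own conventions, not from the paper: paths are `N`/`E` strings, horizontal steps are indexed from $1$, and $r_0 = 0$, our reading of the convention $\phi_0 = \mathsf{id}$):

```python
from fractions import Fraction

def horizontal_distances(path, m, n):
    """d_i: signed horizontal distance from the endpoint of the i-th East
    step (i = 1..m) to the main diagonal my = nx."""
    x, y, d = 0, 0, []
    for step in path:
        if step == 'E':
            x += 1
            d.append(Fraction(m * y, n) - x)
        else:
            y += 1
    return d

def phi(path, k):
    """phi_k: cut after the k-th East step and swap the two pieces
    (phi_0 is the identity)."""
    if k == 0:
        return path
    seen = 0
    for idx, step in enumerate(path):
        if step == 'E':
            seen += 1
            if seen == k:
                return path[idx + 1:] + path[:idx + 1]

def r(path, k, m, n):
    """r_k: signed count of East-step endpoints lying between the main
    diagonal and its translate through the endpoint of the k-th East step."""
    if k == 0:
        return 0
    d = horizontal_distances(path, m, n)
    dk = d[k - 1]
    if dk >= 0:
        return sum(1 for di in d if dk > di >= 0)
    return -sum(1 for di in d if 0 >= di > dk)
```

For $m = 2$, $n = 3$ the two Dyck paths $NNENE$ and $NNNEE$ generate, via $\phi_k$, all four paths of $\mathsf{RP}(2,3)$, and on each orbit the values $r_k$ run over $\{0, 1\}$.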
\begin{lemma}
\label{lemma:main_char}
If $\gcd (m,n)=1$, then $\mathsf{sweep}(\phi_k(\pi))= q^{r_k(\pi)}\mathsf{sweep}(\pi)$.
\label{lemma}
\end{lemma}
\begin{proof}
The relative order of points in $\pi$ and their images in $\phi(\pi)$ does not change when performing the sweep process.
Therefore,
\[
\frac{\mathsf{sweep}(\phi(\pi))}{q^{a(\phi(\pi))}}= \frac{\mathsf{sweep}(\pi)}{q^{a(\pi)}},
\]
where $a(\pi)$ is the exponent of $q$ obtained by applying the sweep process.
To conclude, we need to show that $a(\phi(\pi))=a(\pi) + r(\pi)$.
Define $A_\pi, B_\pi, C_\pi, D_\pi$ as the sets of lattice points of $\pi$, different from the point $(m,n)$, that are between a $NE$, $EN$, $NN$, $EE$ pair of steps respectively.
We consider the point $(0,0)$ to be preceded by a virtual East step, so $(0,0)\in B_\pi$ or $(0,0)\in D_\pi$ if the first step is a North or an East step respectively.
Let $p$ be a lattice point of $\pi$.
Define $a(p) \in \mathbb{Z}$ as the number of vertical steps that intersect the ray $\rho(p) \coloneqq \{p+u \cdot (m,n) \mid u \in \mathbb{R}_+\}$, multiplied by the following coefficient $\epsilon(p)$:
\[
\epsilon(p) =
\begin{cases}
0 & \text{if $p \in A_\pi \cup B_\pi$} \\
-1 & \text{if $p \in C_\pi$} \\
1 & \text{if $p \in D_\pi$}.
\end{cases}
\]
By construction, we have that $a(\pi) = \sum_{p \in \pi} a(p)$.
For a lattice point $p$ of $\pi$, denote by $l = l(p) \in \{0, 1\}$ the index such that $p$ is a point of $\pi_l$.
For this purpose, the right endpoint of the $k$-th horizontal step is considered as a lattice point of $\pi_1$ (not $\pi_0$), whereas $(m,n)$ is not considered as a lattice point of $\pi_1$ (or of $\pi$).
Define $a'(p) \in \mathbb{Z}$ as the number of vertical steps of $\pi_{1-l}$ that intersect the line $\lambda(p) \coloneqq \{p + u \cdot (m,n) \mid u \in \mathbb{R}\}$, multiplied by the following coefficient $\epsilon'(p)$:
\[
\epsilon'(p) =
\begin{cases}
0 & \text{if $p \in A_\pi \cup B_\pi$} \\
(-1)^{l} & \text{if $p \in C_\pi$} \\
(-1)^{l+1} & \text{if $p \in D_\pi$}.
\end{cases}
\]
In other words: $a'(p)$ vanishes if $p \in A_\pi \cup B_\pi$; otherwise, $|a'(p)|$ is equal to the number of intersections between the line $\lambda(p)$ and vertical steps in the part of the path not containing $p$.
\textbf{Claim 1:} $\displaystyle a(\phi(\pi)) - a(\pi) = \sum_{p \in \pi} a'(p)$.
The intersections between rays $\rho(p) = \{p + u \cdot (m,n) \mid u \in \mathbb{R}_+\}$ and vertical steps in $\pi_{l(p)}$ are counted in both $a(\phi(\pi))$ and $a(\pi)$ (with the same sign), so they simplify.
The remaining summands in $a(\phi(\pi))$ count the intersections between rays $\rho(p)$, where $p$ is in $\pi_1$, and vertical steps in $\pi_0$ (where $\pi_0$ is translated by $(m,n)$ so that it starts from $(m,n)$).
Equivalently, they count the intersections between lines $\lambda(p)$ (where $p$ is in $\pi_1$) and vertical steps in $\pi_0$ (not translated).
Therefore, their contribution is given by $\sum_{p \in \pi_1} a'(p)$.
Note that the points in $C_\pi$ get a negative sign, as in the definition of $a(p)$.
The remaining summands in $a(\pi)$ count the intersections between rays $\rho(p)$, where $p$ is in $\pi_0$, and vertical steps in $\pi_1$.
Since $\pi_1$ comes after $\pi_0$, we can substitute the rays $\rho(p)$ with the lines $\lambda(p)$.
Their contribution is given by $\sum_{p \in \pi_0} a'(p)$.
\textbf{Intermezzo:}
We refer to a maximal sequence of consecutive North steps as a \emph{vertical segment}. Each point in $D_\pi$ (i.e., between two East steps) is considered as a vertical segment of length $0$.
This way, the path $\pi_0$ has $k$ vertical segments with $x$ coordinates equal to $0, \dotsc, k-1$, and the path $\pi_1$ has $m-k$ vertical segments with $x$ coordinates $k, \dotsc, m-1$.
Denote by $S_i$ the $i$-th vertical segment.
It is convenient to translate each vertical segment $S_i$ along the line $\{ u \cdot (m, n) \mid u \in \mathbb{R}\}$ so that its $x$ coordinate becomes $0$.
We denote this translated segment by $T_i$.
Let $y_i$ and $y_i'$ be the $y$ coordinates of the endpoints of $T_i$, with $y_i \leq y_i'$.
Therefore, the $y$ coordinates of $S_i$ are $y_i + i \cdot \frac{n}{m}$ and $y_i' + i \cdot \frac{n}{m}$.
Note that the endpoints of the $T_i$'s are all distinct because $m$ and $n$ are coprime.
\textbf{Claim 2:} $\displaystyle \sum_{p \in \pi} a'(p) = \sum_{i < k} \sum_{j \geq k} ( \delta_{T_i \supset T_j} - \delta_{T_i \subset T_j} )$.
Let us analyze the contributions to the left hand side due to the $i$-th and $j$-th vertical segments, for fixed $i < k$ and $j \geq k$.
Let $h$ be the number of integral points $p \in S_i$
(including the endpoints of $S_i$)
such that $\lambda(p)$ intersects the $j$-th vertical segment $S_j$.
If $T_i \supset T_j$, then $S_j$ has length $h$ and, for all its $h+1$ points $p'$, the line $\lambda(p')$ intersects $S_i$.
Once we exclude the endpoints, $h-1$ points remain.
On the other hand, the endpoints of $S_i$ are not among the $h$ points $p \in S_i$ such that $\lambda(p)$ intersects $S_j$.
The overall contribution of $S_i$ and $S_j$ to the left hand side is $h-(h-1) = +1$.
Note that if $T_i \supset T_j$ and $h=0$, then $S_j$ is a single point $p' \in D_\pi$ such that $\lambda(p')$ intersects $S_i$, so it contributes to the left hand side as $+1$.
In other words, vertical segments of length $0$ can still be regarded as having $h-1 = -1$ integral points other than the endpoints.
Similarly, if $T_i \subset T_j$, then the contribution is $-1$.
Finally, if neither of $T_i$ and $T_j$ contains the other, $S_j$ also has $h$ points $p'$ such that $\lambda(p')$ intersects $S_i$, so the contribution is $0$.
\textbf{Claim 3:} $\delta_{T_i \supset T_j} - \delta_{T_i \subset T_j} = \delta_{y_i < y_j} - \delta_{y_{i+1} < y_{j+1}}$ (where we set $y_m = 0$).
Clearly, we have $\delta_{T_i \supset T_j} - \delta_{T_i \subset T_j} = \delta_{y_i < y_j} - \delta_{y_i' < y_j'}$.
The top endpoint of $S_i$ has the same $y$ coordinate as the bottom endpoint of $S_{i+1}$, so $y_i' = y_{i+1} + \frac nm$.
Similarly, $y_j' = y_{j+1} + \frac nm$, so $\delta_{y_i' < y_j'} = \delta_{y_{i+1} < y_{j+1}}$.
\textbf{Claim 4:} $\displaystyle\sum_{i < k} \sum_{j \geq k} ( \delta_{y_i < y_j} - \delta_{y_{i+1} < y_{j+1}} ) = r(\pi)$.
Write $\delta_{i, j}$ as a shorthand for $\delta_{y_i < y_j}$.
The left hand side simplifies to
\begin{equation}
\sum_{k \leq j < m} \delta_{0, j} + \sum_{0 < i < k} \delta_{i, k} - \sum_{k < j \leq m} \delta_{k, j} - \sum_{0< i < k} \delta_{i, m}
= 1 + \sum_{0 \leq i < m} (\delta_{0, i} + \delta_{i, k} - 1),
\label{eq:sums}
\end{equation}
where we have used the facts that $y_m = y_0 = 0$ and $\delta_{i, j} = 1 - \delta_{j,i}$ for $i \neq j$.
If $y_k >0$, the final summation in \eqref{eq:sums} counts the horizontal steps of $\pi$ whose right endpoint lies strictly between the main diagonal and the translated diagonal $\{(0, y_k) + u \cdot (m,n) \mid u \in \mathbb{R}\}$.
The $+1$ term can be interpreted as counting the final horizontal step which ends on the main diagonal.
If $y_k < 0$, the final summation in \eqref{eq:sums} counts the same points with negative sign, but also has a $-2$ coming from the terms $i=0$ and $i=k$ (because $\delta_{0,k} = 0$).
Then $-2+1 = -1$ counts the final horizontal step with negative sign.
In all cases, the result is exactly $r(\pi)$.
\end{proof}
This completes the proof of \Cref{thm:coprime-case}.
\end{document}
\begin{document}
\setcounter{secnumdepth}{5}
\setcounter{tocdepth}{3}
\thispagestyle{empty}
\begin{figure}
\centering \scalebox{1.5}{\includegraphics{crest}}
\end{figure}
\vspace*{3cm}
\begin{center}
\LARGE{Convex Hulls of Planar Random Walks}
{\large \today}
{\Large{Chang Xu \\
Department of Mathematics and Statistics\\
University of Strathclyde\\
Glasgow, UK}\\
}
\end{center}
\begin{center}\normalsize{This thesis is submitted to the University of Strathclyde for the\\
degree of Doctor of Philosophy in the Faculty of Science.}
\end{center}
\thispagestyle{empty} \noindent The copyright of this thesis belongs
to the author under the terms of the United Kingdom Copyright Acts
as qualified by University of Strathclyde Regulation 3.50. Due
acknowledgement must always be made of the use of any material in,
or derived from, this thesis.
\setcounter{page}{1}
\pagenumbering{roman}
\section*{Notations}
\begin{longtable}{lllr}
& & & \quad Page\\
$S_n$, $Z_i$ & : & random walk with location $S_n$ and increments $Z_i$ & \pageref{S_n,Z_i} \\
$\mathop \mathrm{hull} ( S_0, \ldots, S_n )$ & : & the convex hull of random walk $S_n$ & \pageref{hull S} \\
${\mathcal{S}}_n$ & : & the random walk $\{ S_0, S_1, \ldots, S_n \}$ & \pageref{cS_n} \\
$L_n$ & : & the perimeter length of $\mathop \mathrm{hull} ( S_0, \ldots, S_n )$ & \pageref{L_n} \\
$A_n$ & : & the area of $\mathop \mathrm{hull} ( S_0, \ldots, S_n )$ & \pageref{A_n} \\
$\| \blob \|$ & : & the Euclidean norm & \pageref{2-norm} \\
$\mu$ & : & the mean drift vector & \pageref{mu} \\
$\Sigma$ & : & the covariance matrix associated with $Z_i$ & \pageref{Sigma} \\
$\Sigma^{1/2}$ & : & the matrix square-root of $\Sigma$ & \pageref{Sigma^1/2} \\
$\sigma^2$ & : & $= \trace \Sigma$ & \pageref{sigma^2} \\
$\hat \mu$ & : & $= \| \mu \|^{-1} \mu$ for $\mu \neq 0$ & \pageref{hat mu} \\
$\sigma^2_{\mu}$ & : & $= \mathbb{E}\, \left[ \left( ( Z_1 - \mu) \cdot \hat \mu \right)^2 \right]$ & \pageref{spara} \\
$\sigma^2_{\mu_\per}$ & : & $= \sigma^2 - \sigma^2_{\mu}$ & \pageref{sperp} \\
${\mathcal{C}} ( [0,T] ; \mathbb{R}^d )$ & : & the class of continuous functions from $[0,T]$ to $\mathbb{R}^d$ & \pageref{cC} \\
${\mathcal{C}}^0 ( [0,T] ; \mathbb{R}^d )$ & : & $= \{ f \in {\mathcal{C}} ( [0,T] ; \mathbb{R}^d ) : f(0) = {\bf 0} \}$ & \pageref{cC^0} \\
$\rho_\infty (\blob\, , \blob)$ & : & the supremum metric & \pageref{rho_infty} \\
$\rho({\bf x},A)$ & : & $= \inf_{{\bf y} \in A} \rho({\bf x},{\bf y})$ for $A \subseteq \mathbb{R}^d$ and a point ${\bf x} \in \mathbb{R}^d$ & \pageref{rho(x,A)} \\
${\mathcal{C}}_d$ & : & $= {\mathcal{C}} ( [0,1] ; \mathbb{R}^d )$ & \pageref{cC_d} \\
${\mathcal{C}}_d^0$ & : & $= \{ f \in {\mathcal{C}}_d : f(0) = {\bf 0} \}$ & \pageref{cC_d^0} \\
$\mathbb{S}_{d-1}$ & : & $= \{ {\bf u} \in \mathbb{R}^d : \| {\bf u} \| = 1 \}$, the unit sphere in $\mathbb{R}^d$ & \pageref{cS_d-1} \\
${\mathcal{K}}_d$ & : & the collection of convex compact sets in $\mathbb{R}^d$ & \pageref{cK_d} \\
${\mathcal{K}}_d^0$ & : & $= \{ A \in {\mathcal{K}}_d : {\bf 0} \in A \}$ & \pageref{cK_d^0} \\
$\rho_H ( \blob\, , \blob )$ & : & the Hausdorff metric & \pageref{rho_H} \\
$\pi_r (\blob)$ & : & the parallel body at distance $r$ & \pageref{pi_r} \\
$b$ & : & $=( b(s) )_{s \in [0,1]}$, standard Brownian motion in $\mathbb{R}^d$ & \pageref{b} \\
${\mathcal{A}}(\blob)$ & : & the area of convex compact sets in the plane & \pageref{cA} \\
${\mathcal{L}}(\blob)$ & : & the perimeter length of convex compact sets & \pageref{cL} \\
$h_A (\blob)$ & : & the support function of $A \in {\mathcal{K}}^0_d$ & \pageref{h_A()} \\
$H(f)$ & : & $= \mathop \mathrm{hull} \left( f [ 0,1 ] \right)$ for $f \in {\mathcal{C}}_d$ & \pageref{H()} \\
$h_t$ & : & the convex hull of the Brownian path up to time $t$ & \pageref{h_t} \\
$\ell_t$ & : & $= {\mathcal{L}} ( h_t )$ & \pageref{eqn:def of Lt At for BM} \\
$a_t$ & : & $= {\mathcal{A}} ( h_t )$ & \pageref{eqn:def of Lt At for BM} \\
$\Rightarrow$ & : & weak convergence & \pageref{thm:donsker} \\
$w$ & : & $=( w(s) )_{s \in [0,1]}$, standard Brownian motion in $\mathbb{R}$ & \pageref{w} \\
$\tilde b (s)$ & : & $= ( s , w(s) )$, for $s \in [0,1]$ & \pageref{tilde b} \\
$\tilde h_t$ & : & $= \mathop \mathrm{hull} \tilde b [0,t] \in {\mathcal{K}}_2^0$ & \pageref{tilde h_t} \\
$\tilde a_t$ & : & $= {\mathcal{A}} ( \tilde h_t )$ & \pageref{tilde a_t} \\
$\! {\,\bf 1}\{\blob \}$ & : & the indicator function & \pageref{kac} \\
$x^+$ & : & $= x {\,\bf 1}\{ x>0 \}$ & \pageref{x^+} \\
$x^-$ & : & $= -x {\,\bf 1}\{ x<0 \}$ & \pageref{x^-} \\
$u_0 ( \Sigma )$ & : & $= \mathbb{V} {\rm ar} {\mathcal{L}} ( \Sigma^{1/2} h_1 )$ & \pageref{eq:var_constants} \\
$v_+$, $v_0$ & : & defined in equation \eqref{eq:two_vars10} & \pageref{eq:two_vars10} \\
\end{longtable}
\pagenumbering{arabic}
\pagestyle{myheadings} \markright{\sc Chapter 1}
\chapter{Introduction}
\label{chapter1}
\section{Background on Random Walk}
Let $Z_1, Z_2, \dots$ \label{S_n,Z_i} be independent identically distributed (i.i.d.) random variables taking values in $\mathbb{R}^d$ and
let $S_n = \sum_{i=1}^n Z_i$. Then $(S_n)$ is a \emph{random walk} \cite[p.\,88]{gut}.
Random walk theory is a classical and well-studied topic in probability theory. In 1905, Albert Einstein studied Brownian motion in his paper
\emph{``On the Movement of Small Particles Suspended in a Stationary Liquid Demanded by the Molecular-Kinetic Theory of Heat''}.
Brownian motion is the random motion of particles suspended in a fluid, first observed by the botanist Robert Brown in 1827 \cite[Sec. 2.1]{hughes}, who noted that pollen grains in water moved about randomly.
Einstein explained in detail how the motion that Brown had observed resulted from the pollen being struck by individual water molecules.
Mathematicians subsequently formalised Brownian motion and its discrete counterpart, the random walk. The term \emph{random walk} was first used by Karl Pearson in 1905.
In a letter to Nature, he gave a simple model to describe a mosquito infestation in a forest. At each time step, a single mosquito moves a fixed length in a randomly chosen direction. Pearson wanted to know the distribution of the mosquitoes after many steps had been taken. The letter was answered by Lord Rayleigh, who had already solved a more general form of
this problem in 1880, in the context of sound waves in heterogeneous materials.
Modelling a sound wave travelling through the material can be thought of as summing up a sequence of random wave-vectors
of constant amplitude but random phase since sound waves in the material have roughly
constant wavelength, but their directions are altered at scattering sites within the material.
There are some classical results we need to bear in mind when we study random walks. First we need to introduce the concepts of recurrence and transience.
A random walk $S_n$ taking values in $\mathbb{R}^d$ is called \emph{point-recurrent} if
$$\mathbb{P}(S_n = 0 \text{ infinitely often}) = 1$$
and \emph{point-transient} if
$$\mathbb{P}(S_n = 0 \text{ infinitely often}) = 0.$$
If the random walk is not discrete then these definitions are not very useful. Instead we say that the random walk is
\emph{neighbourhood-recurrent} if for some $\varepsilon > 0$,
$$\mathbb{P}(|S_n| < \varepsilon \text{ infinitely often}) = 1 $$
and \emph{neighbourhood-transient} if
$$\mathbb{P}(|S_n| < \varepsilon \text{ infinitely often}) = 0 .$$
In the discrete case, for a simple random walk we have P\'olya's theorem \cite{polya}. A random walk $S_n = \sum_{i=1}^n Z_i$ on $\mathbb{Z}^d$ is \emph{simple} if
for any $i \in \mathbb{N}$,
$$\mathbb{P}(Z_i = e) = \begin{cases}\phantom{2} (2d)^{-1} & \text{if } e \in \mathbb{Z}^d \text{ and } \|e\|=1 , \\
\quad 0 & \text{otherwise} .
\end{cases}$$
\begin{theorem}[P\'olya]
A simple random walk $S_n = \sum_{i=1}^n Z_i$ in $\mathbb{Z}^d$ is recurrent for $d = 1$ or $d = 2$ and transient for $d \geq 3$.
\end{theorem}
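The recurrence for $d = 2$ rests on the fact that the return probability of the simple random walk satisfies $\mathbb{P}(S_{2n} = {\bf 0}) = \left( \binom{2n}{n} 2^{-2n} \right)^2 \sim (\pi n)^{-1}$, which is not summable. As an illustrative sketch (outside the formal development), this closed form can be checked against exhaustive path enumeration for small $n$:

```python
from itertools import product
from math import comb

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def return_prob(two_n):
    """Exact P(S_{2n} = 0) for simple random walk on Z^2, by enumeration."""
    hits = 0
    for path in product(STEPS, repeat=two_n):
        x = sum(s[0] for s in path)
        y = sum(s[1] for s in path)
        if x == 0 and y == 0:
            hits += 1
    return hits / 4 ** two_n

# Closed form: P(S_{2n} = 0) = (C(2n, n) / 2^(2n))^2
for n in (1, 2, 3):
    closed = (comb(2 * n, n) / 2 ** (2 * n)) ** 2
    assert abs(return_prob(2 * n) - closed) < 1e-12
```

Since these probabilities decay like $n^{-1}$ in $d=2$ but like $n^{-3/2}$ in $d=3$, the divergence of the expected number of returns yields recurrence only for $d \leq 2$.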
This theorem was generalised by Chung and Fuchs \cite{chung-fuchs} in 1951.
\begin{theorem}[Chung--Fuchs]
Let $S_n$ be a random walk in $\mathbb{R}^d$. Then,
\begin{enumerate}[(i)]
\item If $d = 1$ and $n^{-1}S_n \to 0$ in probability, then $S_n$ is neighbourhood-recurrent.
\item If $d = 2$ and $n^{-1/2}S_n$ converges in distribution to a centred normal distribution, then $S_n$ is neighbourhood-recurrent.
\item If $d \geq 3$ and the random walk is not contained in a lower-dimensional subspace, then it is neighbourhood-transient.
\end{enumerate}
\end{theorem}
\section{Background on geometric probability}
A central theme of classical geometric probability or stochastic geometry is the study of properties of random point sets in Euclidean space and associated structures. For example, a large literature is devoted to the study of the lengths of graphs on random vertex sets in Euclidean space $\mathbb{R}^d$, $d \geq 2$.
Interest lies primarily in the lengths of those graphs representing solutions to problems in Euclidean combinatorial optimization (see \cite{steele2} or \cite{yukich}).
In the classical setting, the random point sets are generated by i.i.d. random variables.
Some typical problems involve the construction of the shortest possible network of some kind:
Let $X_0, X_1, \dots, X_n$ be i.i.d. random points with common distribution on $\mathbb{R}^d$ and $V=\{X_i\}_{i=0}^n$.
\begin{enumerate}[(i)]
\item Travelling salesman problem. Find the length of the shortest closed path traversing each vertex in $V$ exactly once.
\item Minimal spanning tree. Find the minimal total edge length of a spanning tree through $V$.
\item Minimal Euclidean matching. Find the minimal total edge length of a Euclidean matching of points in $V$.
\end{enumerate}
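As an illustrative sketch of problem (ii), the following computes the total edge length of a Euclidean minimal spanning tree by Prim's algorithm on a fixed (non-random) point set; the function name and the test points are purely illustrative choices, not part of the formal development:

```python
import math

def mst_length(points):
    """Total edge length of a Euclidean minimal spanning tree (Prim's algorithm)."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    total = 0.0
    # best[i] = distance from point i to the closest point already in the tree
    best = {i: dist(points[0], points[i]) for i in range(1, n)}
    while best:
        j = min(best, key=best.get)      # greedily attach the nearest outside point
        total += best.pop(j)
        for i in best:
            best[i] = min(best[i], dist(points[j], points[i]))
    return total

# Corners of the unit square: the MST consists of three unit-length sides
assert abs(mst_length([(0, 0), (1, 0), (1, 1), (0, 1)]) - 3.0) < 1e-12
```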
Many of the questions of geometric probability or stochastic geometry are equally valid for point sets generated by random walk trajectories.
\section{Random convex hulls}
We first define the convex hull here. A set $C$ in $\mathbb{R}^d$ is \emph{convex} if it has the following property \cite[p.\,42]{gruber}:
$$(1 - \lambda)x + \lambda y \in C \ \text{for any } x, y \in C, 0 \leq \lambda \leq 1. $$
Given a set $A$ in $\mathbb{R}^d$, its \emph{convex hull} is the intersection of all convex sets
in $\mathbb{R}^d$ which contain $A$. Since the intersection of convex sets is always convex,
the convex hull of $A$ is convex and it is the smallest convex set in $\mathbb{R}^d$ with respect to set inclusion, which contains $A$.
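Computationally, the convex hull of a finite planar set can be found with Andrew's monotone chain algorithm. The sketch below (illustrative only, not part of the formal development) returns the extreme points, discarding interior and collinear points:

```python
def convex_hull(points):
    """Andrew's monotone chain: vertices of hull(points), counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    cross = lambda o, a, b: (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def half(seq):
        h = []
        for p in seq:
            # pop while the last two kept points and p fail to make a left turn
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

# Interior and edge-interior points are discarded; only extreme points remain
hull = convex_hull([(0,0), (2,0), (2,2), (0,2), (1,1), (1,0)])
assert sorted(hull) == [(0,0), (0,2), (2,0), (2,2)]
```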
One motivation for studying convex hulls is to identify the extreme values among random points. In the one-dimensional case, the extreme values are simply the maximum and the minimum. In higher dimensions, the extreme values can be read off from the convex hull.
However, the extreme values have different meanings in the two main settings of classical stochastic geometry.
In the setting of i.i.d. random points, one important concern is outlier detection in a random sample.
In the setting of trajectories of stochastic processes, extremes are important for the study of record values.
This gives two related but different streams of research, with different underlying probabilistic models and different motivating questions,
though in general the motivation comes from the multidimensional theory of extremes. See for example \cite{barnett1}, \cite{barnett2}, \cite{barnett-lewis} and \cite{nevzorov}.
\subsection{i.i.d. random points}
Convex hulls of i.i.d. random points, also known as \emph{random polytopes}, were first studied by Geffroy \cite{geffroy} (1961), R\'enyi and Sulanke \cite{renyi-sulanke} (1963), and Efron \cite{efron} (1965). In the case where the points are normally distributed, the resulting convex hulls are known as \emph{Gaussian polytopes}. See Reitzner \cite[Random polytopes, pp.~45--76]{reitzner} (2010) and Hug \cite{hug} (2013) for recent surveys.
Motivation arises in statistics (multivariate extremes) and convex geometry (approximation of convex sets), and there is a connection to the isotropic constant in functional analysis: see Reitzner \cite{reitzner}, who also lists other applications, including the analysis of algorithms and optimization.
For multivariate extremes, let $X_0, X_1, \dots, X_n$ be i.i.d. random points with common distribution on $\mathbb{R}^d$ and $V=\{X_i\}_{i=0}^n$.
In the case $d=1$, extremes of i.i.d. points are used for outlier detection in statistics.
In the case $d \geq 2$, Green \cite{green} describes the peeling algorithm for detecting multivariate outliers via the iterated removal of points on the boundary of successive convex hulls.
For the approximation of convex sets, Reitzner \cite{reitzner} discusses algorithms that efficiently compute the convex hull of a large point set in $\mathbb{R}^d$.
\subsection{Trajectories of stochastic process}
Before the study of random polytopes, L\'evy \cite{levybook} had considered the convex hull of planar Brownian motion. The study of the convex hull of random walks goes back to Spitzer and Widom \cite{sw}. Generally, the convex hull of a stochastic process is an interesting geometrical object, related to the extremes of the process, giving a multivariate analogue of \emph{record values}.
In one dimension, a value of a process is a record value if it is either less than all previous values (a lower record) or greater than all previous values (an upper record). In higher dimensions, a natural definition of ``record'' is then a point that lies outside the convex hull of all previous values.
More recent work on convex hull of Brownian motion includes Burdzy \cite{burdzy} (1985), Cranston, Hsu and March \cite{chm} (1989), Eldan \cite{eldan} (2014), Evans \cite{evans} (1985), Pitman and Ross \cite{pitman-ross} (2012).
For general stochastic processes, convex hulls and related convex \emph{minorants} or \emph{majorants}, are studied by Bass \cite{bass} (1982) and Sinai \cite{sinai} (1998).
\section{Applications for convex hulls of random walks}
In recent studies of random walks, attention has focussed on various geometrical aspects of random walk trajectories.
Many of the questions of stochastic geometry, traditionally concerned with functionals of independent random points, are also of interest for point sets generated by random walks.
Study of the convex hull of planar random walk goes back to Spitzer and Widom \cite{sw}
and the continuum analogue, convex hull of planar Brownian motion,
to L\'evy \cite[\S52.6, pp.~254--256]{levybook}; both have received renewed interest recently, in part motivated by
applications arising for example in modelling the `home range' of animals.
Random walks have been extensively used to model the movement of animals; Karl Pearson's original
motivation for the random walk problem originated with modelling the migration of animal
species such as mosquitoes, and subsequently random walks have been used to model the locomotion
of microbes: see~\cite{codling,smouse} for surveys. If the trajectory of the random walker
represents the locations visited by a roaming animal,
then the convex hull is a natural estimate of the `home range' of the animal~\cite{worton1,worton2}.
Natural properties of interest are the perimeter length and area of the convex hull.
See \cite{mcr} for a recent survey
of motivation and previous work.
The method of Chapter {\mathrm{e}}f{chapter3} in part relies on an analysis of \emph{scaling limits},
and thus links the discrete and continuum settings.
\section{Introduction of the model}
\label{sec:intro}
On each unsteady step, a drunken gardener deposits one of $n$ seeds. Once the flowers have bloomed, what is the minimum length of fencing required to enclose the garden?
Let $Z_1, Z_2, \ldots$ be a sequence of independent, identically distributed (i.i.d.)
random vectors on $\mathbb{R}^2$. Write ${\bf 0}$ for the origin in $\mathbb{R}^2$.
Define the random walk $(S_n; n \in \mathbb{Z}_+)$ by $S_0 := {\bf 0}$
and for $n \geq 1$, $S_n := \sum_{i=1}^n Z_i$.
Let
$\mathop \mathrm{hull} ( S_0, \ldots, S_n )$ \label{hull S}
be the convex hull of the positions of the walk up to and including the $n$th step, i.e., the smallest convex set that contains $S_0, S_1, \ldots, S_n$.
Let $L_n $ \label{L_n} denote the length of the perimeter of $\mathop \mathrm{hull} ( S_0, \ldots, S_n )$ and $A_n $ \label{A_n} be the area of the convex hull. (See Figure {\mathrm{e}}f{fig:Intro}.)
\begin{figure}[h!]
\center
\includegraphics[width=0.75\textwidth]{Intro}
\caption{Simulated path of a zero-drift random walk and its convex hull.}
\label{fig:Intro}
\end{figure}
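For a finite walk, $L_n$ and $A_n$ can be computed directly from the hull vertices taken in cyclic order, the area via the shoelace formula. A minimal sketch (illustrative only, not part of the formal development):

```python
import math

def hull_vertices(points):
    """Convex hull vertices in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    cross = lambda o, a, b: (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return build(pts) + build(pts[::-1])

def perimeter_and_area(points):
    """L_n and A_n for the walk positions S_0, ..., S_n (at least 3 distinct points)."""
    v = hull_vertices(points)
    L = sum(math.dist(v[i], v[(i+1) % len(v)]) for i in range(len(v)))
    # Shoelace formula for the area of a simple polygon
    A = 0.5 * abs(sum(v[i][0]*v[(i+1) % len(v)][1] - v[(i+1) % len(v)][0]*v[i][1]
                      for i in range(len(v))))
    return L, A

# Walk visiting the corners of the unit square: L_n = 4, A_n = 1
L, A = perimeter_and_area([(0, 0), (1, 0), (1, 1), (0, 1)])
assert abs(L - 4.0) < 1e-12 and abs(A - 1.0) < 1e-12
```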
We will impose a moments condition of the following form:
\begin{description}
\item[\namedlabel{ass:moments}{M$_p$}]
Suppose that $\mathbb{E}\, [ \| Z_1 \|^p ] < \infty$.
\end{description} \label{2-norm}
For almost everything that follows, we will assume that at least the
$p=1$ case of \eqref{ass:moments} holds, and frequently we
will assume the $p = 2$ case.
For several of our results we
assume that \eqref{ass:moments} holds for some $p>2$.
In any case, we will be explicit about which case we
assume at any particular point.
Given that \eqref{ass:moments} holds for some $p \geq 1$,
then
$\mu := \mathbb{E}\, Z_1 \in \mathbb{R}^2$, \label{mu} the mean drift vector
of the walk, is well defined.
If \eqref{ass:moments} holds for some $p \geq 2$,
then
$\Sigma := \mathbb{E}\, [ (Z_1 - \mu)(Z_1-\mu)^\tra]$, \label{Sigma} the covariance
matrix associated with $Z$, is well defined;
$\Sigma$ is positive semidefinite and symmetric.
We write $\sigma^2 := \trace \Sigma = \mathbb{E}\, [ \| Z_1 - \mu \|^2 ]$. \label{sigma^2} Here and elsewhere $Z_1$ and $\mu$ are viewed as column vectors, and $\| \blob \|$ is the Euclidean norm.
We also introduce the decomposition $\sigma^2 = \sigma^2_{\mu} + \sigma^2_{\mu_\per}$ with \label{sperp}
\[ \sigma^2_{\mu} := \mathbb{E}\, \left[ \left( ( Z_1 - \mu) \cdot \hat \mu \right)^2 \right] = \mathbb{E}\, [ ( Z_1 \cdot \hat \mu )^2 ] - \| \mu \|^2 \in \mathbb{R}_+.\] \label{spara}
Here and elsewhere, `$\cdot$' denotes the scalar product, $\hat \mu := \| \mu \|^{-1} \mu$ for $\mu \neq 0$, \label{hat mu}
and $\mathbb{R}_+ := [0,\infty)$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\textwidth]{hull}
\caption{Example with mean drift $\mathbb{E}\, [ Z_1]$ of magnitude $\|\mu\| = 1/4$ and $n = 10^3$ steps.}\label{fig1}
\end{figure}
Convex hulls of random points have received much attention over the last several decades: see \cite{mcr} for an extensive survey,
including more than 150 bibliographic references, and sources
of motivation more serious than our drunken gardener, such as modelling the `home-range' of animal populations.
An important tool in the study of random convex hulls
is provided by a result of Cauchy in classical convex geometry.
Spitzer and Widom \cite{sw}, using Cauchy's formula, and later Baxter \cite{baxter}, using a combinatorial argument,
showed that
\begin{equation}
\label{SW formula}
\mathbb{E}\, [ L_n ] = 2 \sum_{i=1}^n \frac{1}{i} \mathbb{E}\, \| S_i \| . \end{equation}
Note that $\mathbb{E}\, [ L_n]$ thus scales like $n$ in the case where the one-step mean drift vector
$\mathbb{E}\, [ Z_1 ] \neq {\bf 0}$ but like $n^{1/2}$
in the case where $\mathbb{E}\, [ Z_1 ] = {\bf 0}$ (provided $\mathbb{E}\, [ \| Z_1 \|^2 ] < \infty$).
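The formula \eqref{SW formula} can be verified exactly for small $n$ by enumerating all $4^n$ paths of the simple random walk on $\mathbb{Z}^2$. The sketch below (illustrative only) checks the case $n = 2$, where both sides equal $(5+\sqrt{2})/2$:

```python
from itertools import product
from math import dist, sqrt

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def hull_perimeter(points):
    """Perimeter of hull(points); a degenerate segment is counted twice."""
    pts = sorted(set(points))
    if len(pts) == 1:
        return 0.0
    cross = lambda o, a, b: (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    v = build(pts) + build(pts[::-1])
    return sum(dist(v[i], v[(i+1) % len(v)]) for i in range(len(v)))

n = 2
lhs = 0.0                      # E[L_n], by exhaustive enumeration of all 4^n paths
norm_sums = [0.0] * (n + 1)    # accumulates ||S_i|| over all paths, i = 1, ..., n
for path in product(STEPS, repeat=n):
    pos, walk = (0, 0), [(0, 0)]
    for s in path:
        pos = (pos[0] + s[0], pos[1] + s[1])
        walk.append(pos)
    lhs += hull_perimeter(walk)
    for i in range(1, n + 1):
        norm_sums[i] += sqrt(walk[i][0] ** 2 + walk[i][1] ** 2)
lhs /= 4 ** n
# Right-hand side of the Spitzer--Widom formula: 2 * sum_i E||S_i|| / i
rhs = 2 * sum(norm_sums[i] / 4 ** n / i for i in range(1, n + 1))
assert abs(lhs - rhs) < 1e-12   # both equal (5 + sqrt(2))/2
```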
The Spitzer--Widom--Baxter result, in common with much of the literature,
is concerned with first-order properties of $L_n$:
see \cite{mcr} for a summary
of results in this direction for various random convex hulls, with a specific focus on (driftless) planar Brownian motion.
Much less is known about higher-order properties of $L_n$.
There is a clear distinction between the zero drift case ($\mathbb{E}\,[Z_1]={\bf 0}$) and the non-zero drift case ($\| \mathbb{E}\,[ Z_1 ] \| > 0$). For example, let $r_n := \inf_{{\bf x} \in \partial \text{hull}(S_0, \dots, S_n)} \| {\bf x} \|$. Note that $r_n$ is non-decreasing in $n$, because $S_0 = {\bf 0} \in \text{hull}(S_0, \dots, S_n) \subseteq \text{hull}(S_0, \dots, S_{n+1})$. We investigate the asymptotic behaviour of $r_n$ in the following two cases.
\begin{proposition}
\begin{enumerate}[(i)]
\item
Suppose $\mathbb{E}\, [ \| Z_1 \|^2 ] < \infty$ and $\mathbb{E}\, [Z_1]= {\bf 0}$. Then $\lim_{n \to \infty} r_n = \infty$ a.s.
\item
Suppose $\mathbb{E}\, \| Z_1 \| < \infty$ and $\mathbb{E}\, [Z_1]\neq {\bf 0}$. Then $\lim_{n \to \infty} r_n < \infty$ a.s.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[(i)]
\item
In the first case, the random walk ($S_n; n \in \mathbb{Z}_+$) is recurrent (see e.g. \cite{durrett}). There exists $h \in \mathbb{R}_+$, depending on the distribution of $Z_1$, such that $S_n$ will visit any ball of radius at least $h$ infinitely often (e.g., in the case of simple symmetric random walk on $\mathbb{Z}^2$, it suffices to take $h=1$). Let $r>0$. Then, $S_n$ will visit $B((r+h){\bf y}; h)$ infinitely often for each ${\bf y} \in \{(1,1),(-1,1),(1,-1),(-1,-1)\}$. Here the notation $B({\bf x}; r)$ denotes a Euclidean ball (a disk) with centre ${\bf x} \in \mathbb{R}^2$ and radius $r \in \mathbb{R}_+$.
So there exists some random time $N$ with $N < \infty$ a.s. such that $\{S_0,\dots,S_N\}$ contains a point in each of these four balls, and so $\text{hull}(S_0,\dots,S_N)$ contains the square with these points as its corners, which in turn contains $B({\bf 0}; r)$. So $\liminf_{n \to \infty} r_n \geq r$ for any $r \in \mathbb{R}_+$. So $\lim_{n \to \infty} r_n =\infty$.
\item
In the second case, the random walk is transient (see \cite{durrett}). Let $W_i$ be a wedge with apex $S_i$ and an angle $\theta < \pi$ (say $\theta = \pi/4$) such that $\theta$ is bisected by $\mathbb{E}\, Z_1$. By the Strong Law of Large Numbers, $\| S_n /n - \mathbb{E}\, Z_1 \| \to 0 {\ \mathrm{a.s.}}$ and so $S_n/n \cdot \mathbb{E}\, Z_1^{\perp} \to 0 {\ \mathrm{a.s.}}$, where $\mathbb{E}\, Z_1^{\perp}$ is a vector normal to $\mathbb{E}\, Z_1$. This implies that the number of walk points outside the wedge $W_i$ is finite for any $i \in \mathbb{Z}^+$. We take some $S_k$ inside the wedge $W_0$ and denote the set of finitely many points outside $W_k$ by $\{ S_{\sigma_j}: j=1,2,\ldots,m \}$. Note that $S_0$ is outside $W_k$, so the set $\{ S_{\sigma_j} \}$ is non-empty. Hence, there must be some $S_{\sigma_t} \in \{ S_{\sigma_j} \}$ lying on the boundary of the convex hull, i.e., $S_{\sigma_t} \in \partial \text{hull}(S_0, \dots, S_n)$ for all $n \geq \sigma_t$. Then $\limsup_{n \to \infty} r_n \leq \| S_{\sigma_t} \| < \infty$, which implies $\lim_{n \to \infty} r_n < \infty$ a.s., since $r_n$ is non-decreasing.
\end{enumerate}
\end{proof}
\begin{remark}
The key property for (i) is not (compact set) recurrence, but \emph{angular recurrence} in the sense that $S_n$ visits any cone with apex at ${\bf 0}$ and non-zero angle infinitely often. Thus the same distinction between (i) and (ii) persists for random walks in $\mathbb{R}^d$, $d \geq 3$, with the notation extended in the natural way.
\end{remark}
Because of this distinction, we always separate the arguments of $L_n$ and $A_n$ into the cases of non-zero and zero drift.
To illustrate our model, we give some simulated examples (see Figure \ref{sim123}).
\begin{figure}[!h]
\includegraphics[width=0.43\textwidth]{sim1} ~~
\includegraphics[width=0.43\textwidth]{sim2}
\\
\includegraphics[width=0.43\textwidth]{sim3}
\caption{The number of steps is $n=300$ in all three examples. Top left: simple random walk on $\mathbb{Z}^2$; $Z_i$ takes values $(\pm 1,0)$, $(0,\pm 1)$ each with probability 1/4. \newline
Top right: $Z_i$ takes values $(\pm 1,0)$, $(0,\pm 1)$, $(-1,1)$, $(1,-1)$ each with probability 1/6. \newline
Bottom left: Pearson--Rayleigh random walk; $Z_i$ takes values uniformly on the unit circle. }\label{sim123}
\end{figure}
\section{Outline of the thesis}
Chapter \ref{chapter2} collects the mathematical prerequisites for our results, including the concepts underlying the objects of study and the essential tools used in the remaining chapters.
In Chapter \ref{chapter3}
we describe our scaling limit approach, and carry it through after presenting the necessary preliminaries;
the main new results of this chapter, Theorems \ref{thm:limit-zero} and \ref{thm:limit-drift},
give weak convergence statements for convex hulls of random walks in the case of zero and non-zero drift, respectively.
Armed with these weak convergence results, we present
asymptotics for expectations and variances of the quantities $L_n$ and $A_n$ in Sections 5.4, 6.4 and 6.5;
the arguments in these sections rely in part on the scaling limit apparatus, and in part on direct random walk computations.
These sections conclude with upper and lower bounds for the limiting variances.
Snyder and Steele \cite{ss} showed that $n^{-1}L_n$ converges almost surely to a deterministic limit, and proved the variance upper bound $\mathbb{V} {\rm ar}[L_n]=O(n)$.
In Chapter \ref{chapter4}, we give a different approach to proving their major results, including
the fact that $n^{-1}\mathbb{E}\,[L_n]$ converges (Proposition \ref{upper bound of E L_n}) and a simple expression for the limit in Proposition \ref{EL with drift}.
For the zero drift case, we give a new, improved limit expression in Proposition \ref{limit of E L_n}.
Chapter \ref{chapter5} gives the convergence of $n^{-1}\mathbb{V} {\rm ar}[L_n]$ in Proposition \ref{upper bound for Var L_n}, which was first proved by Snyder and Steele \cite{ss}.
They also gave a law of large numbers for $L_n$ in the non-zero drift case, but we show it is also valid in the zero drift case (Proposition \ref{LLN for L_n}).
Apart from these, the remaining major results in this chapter are new.
For the non-zero drift case, we give a simple expression for the limit of $n^{-1}\mathbb{V} {\rm ar}[L_n]$ in Theorem \ref{thm1} \cite[Theorem 1.1]{wx}, which is non-zero for walks outside a certain degenerate class. This answers a question of Snyder and Steele. It is also the only case in which the perimeter length $L_n$ has a Gaussian limit,
and we give a central limit theorem for $L_n$ in this case in Theorem \ref{thm2} \cite[Theorem 1.2]{wx}.
For the zero drift case, the limit expression for $n^{-1}\mathbb{V} {\rm ar}[L_n]$ is given in Proposition \ref{prop:var-limit-zero u0} \cite[Proposition 3.5]{wx2}, and its upper and lower bounds are given by Proposition \ref{prop:var_bounds u0} \cite[Proposition 3.7]{wx2}.
Chapter \ref{chapter6} is an analogue of Chapter \ref{chapter5} for the area $A_n$.
In Theorem \ref{prop:EA-zero} we give the asymptotics for the expected area $\mathbb{E}\, A_n$ with zero drift, in a slightly more general form than that given by Barndorff-Nielsen and Baxter \cite{bnb}.
Apart from these, the remaining major results in this chapter are new.
We give the asymptotics for the expected area $\mathbb{E}\, A_n$ with drift in Proposition \ref{prop:EA-drift} \cite[Proposition 3.4]{wx2}, and also
the asymptotics for the variance $\mathbb{V} {\rm ar}\, A_n$ in both the zero drift (Proposition \ref{prop:var-limit-zero} \cite[Proposition 3.5]{wx2}) and non-zero drift (Proposition \ref{prop:var-limit-drift v+} \cite[Proposition 3.6]{wx2}) cases.
Some upper and lower variance bounds are provided in the last section of this chapter.
\pagestyle{myheadings} \markright{\sc Chapter 2}
\chapter{Mathematical prerequisites}
\label{chapter2}
\section{Convergence of random variables}
\label{sec:convergence of random variables}
First of all, we define the different modes of convergence we will need in this thesis.
Let $X$ and $X_1, X_2, \dots $ be random variables in $\mathbb{R}$.
$X_n$ converges \emph{almost surely} to $X$ ($X_n \toas X$) as $n \to \infty$ iff
$$\mathbb{P}\left(\{ \omega : X_n(\omega) \to X(\omega) {\ \mathrm{as}\ } n \to \infty\}\right) = 1 .$$
$X_n$ converges \emph{in probability} to $X$ ($X_n \topr X$) as $n \to \infty$ iff, for every $\varepsilon > 0$,
$$\mathbb{P}\left(|X_n - X| > \varepsilon \right) \to 0 {\ \mathrm{as}\ } n \to \infty .$$
The \emph{$L^p$ norm} of $X$ is defined by
$$\|X\|_p := \left( \mathbb{E}\, |X|^p \right)^{1/p}.$$
$X_n$ converges \emph{in $L^p$} to $X$ ($X_n \tolp X$) for $p \geq 1$, as $n \to \infty$ iff
$$\mathbb{E}\, \left(|X_n - X|^p \right) \to 0,\text{ i.e. }\|X_n - X\|_p \to 0, {\ \mathrm{as}\ } n \to \infty .$$
Let $F_X(x) = \mathbb{P}(X \leq x), x \in \mathbb{R}$, be the distribution function of $X$ and let $C(F_X) = \{x : F_X(x) \text{ is continuous at } x\}$ be the continuity set of $F_X$. $X_n$ converges \emph{in distribution} to $X$ ($X_n \tod X$) as $n \to \infty$ iff
$$F_{X_n}(x) \to F_X(x) {\ \mathrm{as}\ } n \to \infty, \text{ for all } x \in C(F_X) .$$
The concept of convergence in distribution extends to random variables in $\mathbb{R}^d$ in terms of the joint distribution functions $\mathbb{P}[X_n^{(1)} \leq x^{(1)}, \dots, X_n^{(d)} \leq x^{(d)}]$.
These modes of convergence have the following logical relationships.
\begin{align*}
X_n \tolp X & \,\rotatebox{-45}{$\Longrightarrow$} \\
& \qquad X_n \topr X \Longrightarrow X_n \tod X \\
X_n \toas X & \; \rotatebox{45}{$\Longrightarrow$}
\end{align*}
Now we collect some basic convergence lemmas and theorems.
\begin{lemma}[Dominated convergence \cite{gut} p.57] \label{dominated convergence}
Let $X, Y$ and $X_1, X_2, \dots$ be random variables. Suppose that $|X_n| \leq Y$ for all $n$, where $\mathbb{E}\, Y < \infty$, and that $X_n \to X$ a.s. as $n \to \infty$. Then
$$\mathbb{E}\, |X_n - X| \to 0 {\ \mathrm{as}\ } n \to \infty .$$
In particular,
$$\mathbb{E}\, X_n \to \mathbb{E}\, X {\ \mathrm{as}\ } n \to \infty .$$
\end{lemma}
\begin{lemma}[Pratt's lemma \cite{gut} p.221] \label{pratt's lemma}
Let $X$ and $X_1, X_2, \dots$ be random variables. Suppose that $X_n \to X$ almost surely as $n \to \infty$, and that
$$ |X_n| \leq Y_n \text{ for all } n,\quad Y_n \to Y {\ \mathrm{a.s.}},\quad \mathbb{E}\, Y_n \to \mathbb{E}\, Y {\ \mathrm{as}\ } n \to \infty.$$
Then
$$ X_n \to X \text{ in } L^1 \quad\text{and}\quad \mathbb{E}\, X_n \to \mathbb{E}\, X {\ \mathrm{as}\ } n \to \infty .$$
\end{lemma}
\begin{lemma}[The Borel--Cantelli lemma \cite{gut} p.96, 98] \label{borel-cantelli}
Let $\{A_n, n \geq 1\}$ be arbitrary events. Then
$$ \sum_{n=1}^{\infty} \mathbb{P}(A_n) < \infty \Longrightarrow \mathbb{P}(A_n \ \text{i.o.}) = 0 .$$
Moreover,
suppose that $X_1, X_2, \dots $ are random variables. Then,
$$ \sum_{n=1}^{\infty} \mathbb{P}(|X_n| > \varepsilon) < \infty \text{ for any } \varepsilon > 0 \Longrightarrow X_n \to 0 {\ \mathrm{a.s.}} {\ \mathrm{as}\ } n \to \infty .$$
\end{lemma}
\begin{lemma}[Slutsky's theorem \cite{gut} p.249] \label{slutsky}
Let $X_1, X_2, \dots$ and $Y_1, Y_2, \dots$ be sequences of random variables. Suppose that
$$X_n \tod X \text{ and } Y_n \topr a {\ \mathrm{as}\ } n \to \infty ,$$
where $a$ is some constant. Then,
$$X_n + Y_n \tod X +a \text{ and } X_n \cdot Y_n \tod X \cdot a .$$
\end{lemma}
We also introduce the useful concept of uniform integrability.
A collection of random variables $X_i$, $i \in I$, is said to be \emph{uniformly integrable} if
$$\lim_{M \to \infty} \left(\sup_{i\in I} \mathbb{E}\,(|X_i| {\,\bf 1}( |X_i| > M))\right) = 0 .$$
\begin{lemma}
Let $X$ and $X_1, X_2, \dots$ be random variables. If $X_n \to X$ in probability then the following are equivalent:
\begin{enumerate}[(i)]
\item $\{X_n\}_{n=1}^\infty$ is uniformly integrable.
\item $X_n \to X$ in $L^1$.
\item $\mathbb{E}\, |X_n| \to \mathbb{E}\, |X| < \infty$.
\end{enumerate}
\end{lemma}
\begin{lemma}[Convergence of means \cite{kallenberg} p.45] \label{convergence of means}
Let $X, X_1, X_2, \dots$ be $\mathbb{R}P$-valued random variables with $X_n \tod X$. If $\{X_i\}_{i=1}^{\infty}$ is uniformly integrable, then $\mathbb{E}\, X_n \to \mathbb{E}\, X$ as $n \to \infty$.
\end{lemma}
\section{Martingales}
\label{sec:martingales}
A sequence $\{X_n\}_{n=1}^\infty$ of random variables is \emph{$\{{\mathcal{F}}_n\}$-adapted} if $X_n$ is ${\mathcal{F}}_n$-measurable for all $n$, which means for any $k \in \mathbb{R}$,
$\{\omega: X_n(\omega) \leq k\} \in {\mathcal{F}}_n$.
An integrable $\{{\mathcal{F}}_n\}$-adapted sequence $\{X_n\}$ is called a \emph{martingale} if
$$ \mathbb{E}\,(X_{n+1} \mid {\mathcal{F}}_n ) = X_n {\ \mathrm{a.s.}} \text{ for all } n \geq 0. $$
It is called a \emph{submartingale} if
$$ \mathbb{E}\,(X_{n+1} \mid {\mathcal{F}}_n ) \geq X_n {\ \mathrm{a.s.}} \text{ for all } n \geq 0, $$
and a \emph{supermartingale} if
$$ \mathbb{E}\,(X_{n+1} \mid {\mathcal{F}}_n ) \leq X_n {\ \mathrm{a.s.}} \text{ for all } n \geq 0. $$
An integrable, $\{{\mathcal{F}}_n\}$-adapted sequence $\{D_n\}$ is called a \emph{martingale difference sequence} if
$$ \mathbb{E}\, (D_{n+1} \mid {\mathcal{F}}_n)=0 \text{ for all } n \geq 0. $$
Then the sequence $M_n := \sum_{k=1}^n D_k$ is an $\{{\mathcal{F}}_n\}$-martingale, since
$$\mathbb{E}\, [M_{n+1} - M_n \mid {\mathcal{F}}_n] = \mathbb{E}\, [D_{n+1} \mid {\mathcal{F}}_n] = 0,$$
which implies
$$\mathbb{E}\, [M_{n+1} \mid {\mathcal{F}}_n] = M_n.$$
\begin{lemma}[Orthogonality of martingale differences \cite{gut} p.488] \label{martingale diff. orth.}
Let $\{D_n\}_{n=0}^\infty$ be a martingale difference sequence. Then $\mathbb{E}\,[D_m D_n]=0$ for $m \neq n$. Hence,
$$\mathbb{V} {\rm ar}\left(\sum_{i=0}^n D_i\right) = \sum_{i=0}^n \mathbb{V} {\rm ar}(D_i) .$$
\end{lemma}
We use a standard martingale difference construction based on resampling.
Consider a functional $f: \mathbb{R}^n \to \mathbb{R}$. Let $Y_1, Y_2, \dots, Y_n$ be i.i.d. random variables and $W_n = f(Y_1, \dots, Y_n)$.
Let $Y'_1, Y'_2, \dots, Y'_n$ be independent copies of $Y_1, Y_2, \dots, Y_n$ and
$$W_n^{(i)} = f(Y_1, \dots, Y_{i-1}, Y'_i, Y_{i+1}, \dots, Y_n) .$$
Let $D_{n,i} = \mathbb{E}\, [W_n - W_n^{(i)} \mid {\mathcal{F}}_i]$ where ${\mathcal{F}}_i = \sigma(Y_1, \dots, Y_i)$.
\begin{lemma} \label{resampling}
Let $n \in \mathbb{N}$. Then
\begin{enumerate}[(i)]
\item $W_n - \mathbb{E}\, W_n = \sum_{i=1}^n D_{n,i}$;
\item $\mathbb{V} {\rm ar} (W_n) = \sum_{i=1}^n \mathbb{E}\, [D_{n,i}^2]$ whenever the latter sum is finite.
\end{enumerate}
\end{lemma}
\begin{proof}
The idea is well known.
Since $W_n^{(i)}$ is independent of $Y_i$,
$$\mathbb{E}\, [W_n^{(i)} \mid {\mathcal{F}}_i] = \mathbb{E}\,[W_n^{(i)} \mid {\mathcal{F}}_{i-1}] = \mathbb{E}\, [W_n \mid {\mathcal{F}}_{i-1}] .$$
So,
$$D_{n,i} = \mathbb{E}\,[W_n \mid {\mathcal{F}}_i] - \mathbb{E}\,[W_n \mid {\mathcal{F}}_{i-1}] .$$
Hence the $D_{n,i}$ form a martingale difference sequence, since
$$\mathbb{E}\,[D_{n,i} \mid {\mathcal{F}}_{i-1}] = \mathbb{E}\,[W_n \mid {\mathcal{F}}_{i-1}] - \mathbb{E}\, [W_n \mid {\mathcal{F}}_{i-1}] = 0 $$
and
$$\sum_{i=1}^n D_{n,i} = \mathbb{E}\,[W_n \mid {\mathcal{F}}_n] - \mathbb{E}\,[W_n \mid {\mathcal{F}}_0] = W_n - \mathbb{E}\, W_n .$$
So,
$$\mathbb{E}\,\left[\left(\sum_{i=1}^n D_{n,i} \right)^2\right] = \mathbb{V} {\rm ar} (W_n) .$$
But by orthogonality of martingale differences (Lemma \ref{martingale diff. orth.}),
$$\mathbb{V} {\rm ar}(W_n) = \sum_{i=1}^n \mathbb{E}\, [D_{n,i}^2] .$$
\end{proof}
Note that by the conditional Jensen's inequality $\left(\mathbb{E}\,[\,\xi \mid {\mathcal{F}}]\right)^2 \leq \mathbb{E}\, [\,\xi^2 \mid {\mathcal{F}}]$, we have
$$D_{n,i}^2 \leq \mathbb{E}\,\left[\left(W_n - W_n^{(i)}\right)^2 \mid {\mathcal{F}}_i\right] .$$
So from part (ii) of Lemma \ref{resampling},
$$\mathbb{V} {\rm ar}(W_n) \leq \sum_{i=1}^n \mathbb{E}\,\left[\left( W_n^{(i)}-W_n \right)^2\right] .$$
This gives an upper bound for the variance of $W_n$, which is a factor of $2$ larger than the upper bound obtained from the Efron--Stein inequality (equation (2.3) in \cite{ss}):
\begin{lemma} \label{lem:efron-stein}
$$\mathbb{V} {\rm ar}(W_n) \leq \frac{1}{2}\sum_{i=1}^n \mathbb{E}\,\left[\left( W_n^{(i)}-W_n \right)^2\right] .$$
\end{lemma}
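As a quick numerical sanity check of Lemma \ref{lem:efron-stein}, the following Python sketch estimates both sides of the Efron--Stein inequality by Monte Carlo for $W_n = \max(Y_1, \dots, Y_n)$ with i.i.d.\ uniform $Y_i$, using the resampling construction above. The choices $n = 10$ and $40000$ samples are illustrative, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 40000
Y = rng.random((m, n))              # m samples of (Y_1, ..., Y_n), uniform on (0,1)
Yp = rng.random((m, n))             # independent copies (Y'_1, ..., Y'_n)
W = Y.max(axis=1)                   # W_n = f(Y_1, ..., Y_n) with f = max

# Estimate sum_i E[(W_n^{(i)} - W_n)^2] by resampling one coordinate at a time
es_sum = 0.0
for i in range(n):
    Yi = Y.copy()
    Yi[:, i] = Yp[:, i]             # replace coordinate i by its independent copy
    es_sum += float(np.mean((Yi.max(axis=1) - W) ** 2))

var_hat = float(W.var())            # empirical Var(W_n); exact value is n/((n+1)^2 (n+2))
bound_hat = 0.5 * es_sum            # Efron--Stein upper bound
```

For the maximum of uniforms the bound typically exceeds the variance by a modest constant factor, so the inequality is visible but not tight.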
\section{Reflection principle for Brownian motion}
\begin{lemma}[Reflection principle \cite{morters} p.44] \label{reflection principle}
If $T$ is a stopping time and $\{w(t):t \geq 0 \}$ is a standard 1-dimensional Brownian motion, then the process $\{w^*(t):t\geq 0 \}$, called Brownian motion reflected at $T$ and defined by
$$w^*(t) = w(t){\,\bf 1}\{t\leq T\} + \left(2 w(T)- w(t)\right){\,\bf 1}\{t>T\} $$
is also a standard Brownian motion.
\end{lemma}
\begin{corollary} \label{reflection}
Suppose $r>0$ and $\{w(t):t \geq 0 \}$ is a standard 1-dimensional Brownian motion. Then,
$$\mathbb{P}\left( \sup_{0 \leq s \leq t} w(s)>r \right)= 2\mathbb{P}\left(w(t)>r \right) .$$
\end{corollary}
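Corollary \ref{reflection} can be illustrated numerically by approximating Brownian motion on $[0,1]$ with a fine Gaussian random walk and comparing the empirical probability that the running maximum exceeds $r$ with $2\mathbb{P}(w(1) > r)$. The step count and number of trials below are illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
steps, trials, r = 1000, 40000, 1.0
increments = rng.normal(0.0, math.sqrt(1.0 / steps), size=(trials, steps))
paths = np.cumsum(increments, axis=1)           # discrete approximation of w on [0,1]

p_sup = float(np.mean(paths.max(axis=1) > r))   # empirical P(sup_{s<=1} w(s) > r)
p_end = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(r / math.sqrt(2.0))))  # 2 P(w(1) > r)
```

The discrete maximum slightly undershoots the continuous supremum, so the two probabilities agree only up to a discretisation and Monte Carlo error of a few percent.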
\section{Useful inequalities}
We collect some inequalities that will be useful in later chapters.
\begin{lemma}[Markov's inequality \cite{gut} p.120]
Let $X$ be a random variable. Suppose that $\mathbb{E}\,|X|^r < \infty$ for some $r>0$, and let $x > 0$. Then,
$$ \mathbb{P}(|X| > x) \leq \frac{\mathbb{E}\,|X|^r}{x^r} . $$
\end{lemma}
\begin{lemma}[Chebyshev's inequality \cite{gut} p.121]
Let $X$ be a random variable. Suppose that $\mathbb{V} {\rm ar} X < \infty$. Then for $x> 0$,
$$ \mathbb{P}(|X - \mathbb{E}\, X| > x) \leq \frac{\mathbb{V} {\rm ar} X}{x^2} . $$
\end{lemma}
\begin{lemma}[The Cauchy--Schwarz inequality \cite{gut} p.130] \label{Cauchy-Schwarz ineq.}
Suppose that random variables $X$ and $Y$ have finite variances. Then,
$$ |\mathbb{E}\, XY| \leq \mathbb{E}\, |XY| \leq \|X\|_2 \|Y\|_2 = \sqrt{\mathbb{E}\, (X^2) \mathbb{E}\, (Y^2)} .$$
\end{lemma}
The next result generalises the Cauchy--Schwarz inequality.
\begin{lemma}[The H\"older inequality \cite{gut} p.129]
Let $X$ and $Y$ be random variables, and let $p, q > 1$ satisfy $p^{-1} + q^{-1} = 1$. Suppose that $\mathbb{E}\, |X|^p < \infty$ and $\mathbb{E}\, |Y|^q < \infty$. Then,
$$ |\mathbb{E}\, XY | \leq \mathbb{E}\, |XY | \leq \|X\|_p \|Y\|_q = (\mathbb{E}\, |X|^p)^{1/p} (\mathbb{E}\, |Y|^q)^{1/q}.$$
\end{lemma}
\begin{lemma}[The Minkowski inequality \cite{gut} p.129] \label{minkowski}
Let $p \geq 1$. Suppose that $X$ and $Y$ are random variables, such that $\mathbb{E}\, |X|^p < \infty$ and $\mathbb{E}\, |Y|^p < \infty$. Then,
$$ \|X + Y\|_p \leq \|X\|_p + \|Y\|_p .$$
\end{lemma}
This is the triangle inequality for the $L^p$ norm.
Now we introduce some inequalities on martingales.
\begin{lemma}[Doob's inequality \cite{durrett} p.214] \label{Doob}
If $X_n$ is a martingale, then for $1 < p < \infty$,
$$ \mathbb{E}\, \left[\left( \max_{0 \leq m \leq n} |X_m| \right)^p \right] \leq \left(\frac{p}{p-1}\right)^p \mathbb{E}\,(|X_n|^p) .$$
\end{lemma}
\begin{lemma}[Azuma--Hoeffding inequality \cite{penrose} p.33] \label{azuma-hoeffding}
Let $D_{n,i}$ $(i=1,\dots,n)$ be a martingale difference sequence adapted to a filtration ${\mathcal{F}}_i$, which means $D_{n,i}$ is ${\mathcal{F}}_i$-measurable and $\mathbb{E}\,[D_{n,i}|{\mathcal{F}}_{i-1}]=0$. Then, for any $t>0$,
$$\mathbb{P}\left(\Big| \sum_{i=1}^n D_{n,i} \Big| >t\right) \leq 2 \exp\left( -\frac{t^2}{2n d_{\infty}^2} \right),$$
where $d_{\infty}$ is such that $|D_{n,i}| \leq d_{\infty}$ a.s. for all $n,i$.
\end{lemma}
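The Azuma--Hoeffding bound is easy to check empirically. A simple random walk with $\pm 1$ steps is a sum of martingale differences with $|D_i| \leq 1$, so the bound applies with $d_\infty = 1$; the parameters $n = 100$, $t = 25$ and the number of trials below are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n, t, trials = 100, 25, 20000
signs = rng.choice([-1, 1], size=(trials, n))   # martingale differences, |D_i| <= 1
S = signs.sum(axis=1)

p_emp = float(np.mean(np.abs(S) > t))           # empirical tail probability
p_azuma = 2.0 * math.exp(-t * t / (2.0 * n))    # Azuma--Hoeffding bound with d_inf = 1
```

As expected of a concentration inequality obtained by an exponential-moment argument, the bound holds with considerable slack.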
We also introduce some inequalities for sums of independent random variables.
\begin{lemma}[Marcinkiewicz--Zygmund inequality \cite{gut} p.151] \label{marcinkiewicz}
Let $p \geq 1$. Suppose that $X, X_1, X_2, \dots, X_n$ are independent, identically distributed random variables with mean 0 and $\mathbb{E}\, |X|^p < \infty$.
Set $S_n = \sum_{k=1}^n X_k$. Then there exists a constant $B_p$ depending only on $p$, such that
$$ \mathbb{E}\, |S_n|^p \leq \begin{cases} B_p n \mathbb{E}\,|X|^p, & \text{if } 1 \leq p \leq 2 , \\
B_p n^{p/2} \mathbb{E}\, |X|^p, & \text{if } p >2 .
\end{cases} $$
\end{lemma}
\begin{lemma}[Rosenthal's inequality \cite{gut} p.151] \label{rosenthal}
Let $p \geq 1$. Suppose that $X_1, X_2, \dots, X_n$ are independent random variables such that $\mathbb{E}\,|X_k|^p < \infty$ for all $k$. Set $S_n = \sum_{k=1}^n X_k$. Then,
$$ \mathbb{E}\, |S_n|^p \leq \max\left\{2^p \sum_{k=1}^n \mathbb{E}\, |X_k|^p , 2^{p^2} \left( \sum_{k=1}^n \mathbb{E}\, |X_k| \right)^p \right\}. $$
\end{lemma}
\section{Useful theorems and lemmas}
\begin{lemma}[Fubini's theorem \cite{gut} p.65] \label{fubini}
Let ($\Omega_1, {\mathcal{F}}_1, P_1$) and ($\Omega_2, {\mathcal{F}}_2, P_2$) be probability spaces, and
consider the product space ($\Omega_1 \times \Omega_2, {\mathcal{F}}_1 \times {\mathcal{F}}_2, P$), where $P = P_1 \times P_2$ is
the product measure. Suppose that ${\bf X} = (X_1, X_2)$ is a two-dimensional random variable, and that $g$ is ${\mathcal{F}}_1 \times {\mathcal{F}}_2$-measurable, and (i) non-negative or (ii) integrable. Then,
$$ \mathbb{E}\, g({\bf X}) = \int_{\Omega} g({\bf X})\, \textup{d} P = \int_{\Omega_1} \left( \int_{\Omega_2}g({\bf X})\, \textup{d} P_2\right) \textup{d} P_1 = \int_{\Omega_2} \left( \int_{\Omega_1}g({\bf X})\, \textup{d} P_1\right) \textup{d} P_2 .$$
\end{lemma}
\begin{lemma}
\label{convergence of Cesaro mean}
Let $\{y_n \}_{n=1}^{\infty}$ be a sequence of real numbers and let $y \in \mathbb{R}$. If $ y_n \to y$ as $n \to \infty$, then $n^{-1}\sum_{i=1}^n y_i \to y$ as $n \to \infty$.
\end{lemma}
\begin{proof}
By assumption, for any $\varepsilon >0$ there exists $n_0 \in \mathbb{N}$ such that $|y_n -y| \leq \varepsilon$ for all $n \geq n_0$. Then,
\begin{align*}
\left| \frac{1}{n} \sum_{i=1}^n y_i -y \right| & = \left| \frac{1}{n} \sum_{i=1}^n (y_i-y) \right| \\
& \leq \left| \frac{1}{n} \sum_{i=1}^{n_0}(y_i-y) \right| + \left| \frac{1}{n} \sum_{i=n_0+1}^n (y_i-y) \right| \\
& \leq \frac{1}{n} \sum_{i=1}^{n_0} |y_i-y| + \frac{1}{n} \sum_{i=n_0 +1}^{n} |y_i-y| \\
& \leq \frac{1}{n} \sum_{i=1}^{n_0} |y_i-y| + \varepsilon \\
& \leq 2\varepsilon ,\end{align*}
for all $n$ big enough. Since $\varepsilon >0$ was arbitrary, the result follows.
\end{proof}
\section{Multivariate normal distribution}
Let $\Sigma$ be a symmetric positive semi-definite ($d \times d$) matrix. Then, there exists a unique positive semi-definite symmetric matrix $\Sigma^{1/2}$ such that $\Sigma = \Sigma^{1/2} \Sigma^{1/2}$ \cite{mardia}. The matrix
$\Sigma^{1/2}$ can also be regarded as a linear transform of $\mathbb{R}^d$ given by ${\bf x} \mapsto \Sigma^{1/2} {\bf x}$.
For a random variable $Y$, the notation $Y \sim {\mathcal{N}}(0, \Sigma)$ means $Y$ has $d$ dimensional normal distribution with mean $0$ and covariance matrix $\Sigma$.
In the degenerate case $\Sigma = 0$, where all entries of the covariance matrix are $0$, we have $Y = 0$ almost surely.
\begin{lemma} \label{linear transformation}
Suppose $X \sim {\mathcal{N}}(0, I)$ and let $Y = \Sigma^{1/2} X$. Then $Y \sim {\mathcal{N}}(0, \Sigma)$.
\end{lemma}
\begin{lemma}[Multidimensional Central Limit Theorem \cite{mardia} p.62] \label{Mult CLT}
Suppose that $\{Z_i\}_{i=1}^\infty$ is a sequence of i.i.d. random variables on $\mathbb{R}^d$, and let $S_n = \sum_{i=1}^n Z_i$ be the associated random walk on $\mathbb{R}^d$.
If $\mathbb{E}\,(\|Z_1\|^2) < \infty$, $\mathbb{E}\, Z_1 = 0$ and $\mathbb{E}\,(Z_1 Z_1^\top)= \Sigma$, then
$$n^{-1/2} S_n \tod {\mathcal{N}}(0,\Sigma) .$$
\end{lemma}
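The multidimensional CLT can be illustrated by simulation: with non-Gaussian (here uniform) mean-zero steps of covariance $\Sigma$, the empirical covariance of $n^{-1/2} S_n$ across many independent trials should approach $\Sigma$. The matrix $\Sigma$, the value $n = 400$, and the trial count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
A = np.linalg.cholesky(Sigma)                   # Sigma = A A^T
n, trials = 400, 5000

# Non-Gaussian (uniform, variance-1) mean-zero coordinates, mixed by A
U = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(trials, n, 2))
Z = U @ A.T                                     # i.i.d. steps with covariance Sigma
S = Z.sum(axis=1) / np.sqrt(n)                  # n^{-1/2} S_n, one row per trial

Sigma_hat = np.cov(S.T)                         # empirical covariance of the limit
```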
\section{Analytic and Geometric prerequisites}
We recall a few basic facts from real analysis: \cite{rudin} is an excellent general reference.
The \emph{Heine--Borel theorem} states that a set in $\mathbb{R}^d$ is compact if and only if it is closed
and bounded \cite[p.~40]{rudin}. Compactness is preserved under continuous mappings: if $(X,\rho_X)$ is a compact metric space and $(Y, \rho_Y)$ is a metric space,
and $f : (X, \rho_X) \to (Y, \rho_Y)$ is continuous, then the image $f(X)$ is compact \cite[p.~89]{rudin}; moreover $f$ is uniformly continuous on $X$ \cite[p.~91]{rudin}.
For any such uniformly continuous $f$, there is a monotonic \emph{modulus of continuity} $\mu_f : \mathbb{R}_+ \to \mathbb{R}_+$ such that $\rho_Y ( f(x_1), f(x_2) ) \leq \mu_f ( \rho_X (x_1, x_2 ))$
for all $x_1, x_2 \in X$, and for which $\mu_f ( \rho ) \downarrow 0$ as $\rho \downarrow 0$ (see e.g.~\cite[p.~57]{kallenberg}).
Let $d$ be a positive integer.
For $T > 0$, let ${\mathcal{C}} ( [0,T] ; \mathbb{R}^d )$ \label{cC} denote the class of continuous functions
from $[0,T]$ to $\mathbb{R}^d$. Endow ${\mathcal{C}} ( [0,T] ; \mathbb{R}^d )$ with the supremum metric
\[ \rho_\infty ( f, g) := \sup_{t \in [0,T]} \rho ( f(t), g(t) ) , ~\text{for } f,g \in {\mathcal{C}} ( [0,T] ; \mathbb{R}^d ). \] \label{rho_infty}
Let ${\mathcal{C}}^0 ( [0,T] ; \mathbb{R}^d )$ \label{cC^0} denote those functions in ${\mathcal{C}} ( [0,T] ; \mathbb{R}^d )$ that map $0$ to the origin in $\mathbb{R}^d$.
Usually, we work with $T=1$, in which case we write simply
\[ {\mathcal{C}}_d := {\mathcal{C}} ( [0,1] ; \mathbb{R}^d ) , ~~\text{and}~~ {\mathcal{C}}_d^0 := \{ f \in {\mathcal{C}}_d : f(0) = {\bf 0} \} .\] \label{cC_d}
\label{cC_d^0}
For $f \in {\mathcal{C}} ( [0,T] ; \mathbb{R}^d )$ and $t \in [0,T]$, define $f [0,t] := \{ f(s) : s \in [0,t] \}$, the image of $[0,t]$ under $f$. Note that, since $[0,t]$ is compact and $f$ is continuous,
the \emph{interval image} $f [0,t]$ is compact.
We view elements $f \in {\mathcal{C}} ( [0,T] ; \mathbb{R}^d )$ as \emph{paths} indexed by time $[0,T]$, so that $f[0,t]$ is the section of the path up to time $t$.
We need some notation and concepts from convex geometry: we found \cite{gruber} to be very useful,
supplemented by \cite{sw} as a convenient reference for a little integral geometry.
Let $d$ be a positive integer.
Let $\rho({\bf x},{\bf y}) = \| {\bf x} - {\bf y}\|$ denote the Euclidean distance between ${\bf x}$ and ${\bf y}$ in $\mathbb{R}^d$. For a set $A \subseteq \mathbb{R}^d$, write
$\partial A$ for the boundary of $A$ (the intersection
of the closure of $A$ with the closure of $\mathbb{R}^d \setminus A$), and $\mathop \mathrm{int} (A) := A \setminus \partial A$ for the interior
of $A$.
For $A \subseteq \mathbb{R}^d$
and a point ${\bf x} \in \mathbb{R}^d$, set $\rho({\bf x},A) := \inf_{{\bf y} \in A} \rho({\bf x},{\bf y})$, \label{rho(x,A)}
with the usual convention that $\inf \emptyset = +\infty$.
We write $\lambda_d$ for Lebesgue measure on $\mathbb{R}^d$.
Write $\mathbb{S}_{d-1} := \{ {\bf u} \in \mathbb{R}^d : \| {\bf u} \| = 1 \}$ \label{cS_d-1}
for the unit sphere in $\mathbb{R}^d$.
Let ${\mathcal{K}}_d$ \label{cK_d} denote the collection of convex compact sets in $\mathbb{R}^d$, and write
\[ {\mathcal{K}}^0_d := \{ A \in {\mathcal{K}}_d : {\bf 0} \in A \} \] \label{cK_d^0}
for those sets in ${\mathcal{K}}_d$ that include the origin. The Hausdorff metric on ${\mathcal{K}}_d$
will be denoted
\[ \rho_H ( A, B ) := \max \Big\{ \sup_{{\bf x} \in B} \rho({\bf x},A) , \sup_{{\bf y} \in A} \rho({\bf y},B) \Big\} ~\text{for } A,B \in {\mathcal{K}}_d.\] \label{rho_H}
Given $A \in {\mathcal{K}}_d$, for $r > 0$ set
\[ \pi_r ( A) := \{ {\bf x} \in \mathbb{R}^d : \rho ({\bf x},A) \leq r \} ,\] \label{pi_r}
the \emph{parallel body} of $A$ at distance $r$.
Note that two equivalent descriptions of $\rho_H$ (see e.g.\ Proposition 6.3 of \cite{gruber}) are,
for $A, B \in {\mathcal{K}}^0_d$,
\begin{align}
\label{eq:hausdorff_minkowski} \rho_H (A, B) & = \inf \left\{ r \geq 0 : A \subseteq \pi_r ( B ) \text{ and } B \subseteq \pi_r ( A ) \right\}; \text{ and } \\
\label{eq:hausdorff_support} \rho_H (A,B) & = \sup_{e \in \mathbb{S}_{d-1} } \left| h_A (e) - h_B (e) \right| ,
\end{align}
where $h_A ( {\bf x}) := \sup_{{\bf y} \in A} ({\bf x} \cdot {\bf y} )$ is the \emph{support function} of $A$ and ${\bf x} \cdot {\bf y}$ is the inner product of ${\bf x}$ and ${\bf y}$, i.e., in the plane, $(x_1,y_1)\cdot (x_2, y_2) = x_1 x_2 + y_1 y_2$. \label{h_A()}
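The support-function description \eqref{eq:hausdorff_support} lends itself to computation, since for a polytope the supremum defining $h_A$ is a maximum over the vertices. A minimal sketch, on the concrete (assumed) example $A = [0,1]^2$ and $B = [0,2]^2$, for which the Hausdorff distance is $\sqrt{2}$, realised by the corners $(1,1)$ and $(2,2)$:

```python
import math
import numpy as np

corners_A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
corners_B = 2.0 * corners_A                     # B = 2A = [0,2]^2

thetas = np.linspace(0.0, 2.0 * math.pi, 20001) # dense grid of directions
E = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

h_A = (corners_A @ E.T).max(axis=0)             # support function h_A(e_theta)
h_B = (corners_B @ E.T).max(axis=0)             # support function h_B(e_theta)
rho_H = float(np.abs(h_A - h_B).max())          # sup_e |h_A(e) - h_B(e)|
```

The angular grid contains $\theta = \pi/4$, where the supremum is attained, so the discretised value agrees with $\sqrt{2}$ to floating-point accuracy.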
\section{Continuous mapping theorem and Donsker's Theorem}
\label{sec:CMT and Donsker}
We consider random walks in $\mathbb{R}^d$ in this section. First we need to define weak convergence on a general metric space.
Suppose $(\Omega, {\mathcal{F}}, \mathbb{P})$ is a probability space and $(M, \rho)$ is a metric space. For $n \geq 1$, suppose that
$$X_n, X: \Omega \longrightarrow M$$
are random variables taking values in $M$. If
$$\mathbb{E}\, f(X_n) \to \mathbb{E}\, f(X)\ {\ \mathrm{as}\ } n\to \infty,$$
for all bounded, continuous functionals $f: M \longrightarrow \mathbb{R}$, then we say that $X_n$ \emph{converges weakly} to $X$ and write $X_n \Rightarrow X$.
Weak convergence generalises the concept of convergence in distribution for random variables on $\mathbb{R}^d$.
\begin{lemma}[Continuous mapping theorem \cite{kallenberg} p.41] \label{continuous mapping}
Fix two metric spaces $(M_1,\rho_1)$ and $(M_2,\rho_2)$. Let $X, X_1, X_2, \dots$ be random variables taking values in $M_1$ with $X_n \Rightarrow X$. Suppose $f$ is a
mapping from $(M_1,\rho_1)$ to $(M_2,\rho_2)$ that is continuous everywhere in $M_1$ except possibly on a set $A \subseteq M_1$ with $\mathbb{P}(X \in A) = 0$.
Then, $f(X_n) \Rightarrow f(X)$.
\end{lemma}
We generalise the definition of $Z_i$ and $S_n$ a little in this section. Let $\{Z_i\}_{i=1}^\infty$ be i.i.d. random vectors on $\mathbb{R}^d$ and $S_n = \sum_{i=1}^n Z_i$.
For each $n \in \mathbb{N}$ and all $t \in [0,1]$, define
\[ X_n (t) := S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor ) \left( S_{\lfloor nt \rfloor +1} - S_{\lfloor nt \rfloor} \right) = S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor ) Z_{\lfloor nt \rfloor +1} .\]
Let $b:=( b(s) )_{s \in [0,1]}$ \label{b} denote standard Brownian motion in $\mathbb{R}^d$, started at $b(0) = 0$.
\begin{lemma}[Donsker's Theorem]
\label{thm:donsker} Let $d \in \mathbb{N}$.
Suppose that $\mathbb{E}\, (\| Z_1 \|^2) <\infty$,
$\mathbb{E}\, Z_1 = {\bf 0}$, and $\mathbb{E}\, [ Z_1 Z_1^\top ] = \Sigma$. Then, as $n \to \infty$,
\[ n^{-1/2} X_n \Rightarrow \Sigma^{1/2}b, \]
in the sense of weak convergence on $({\mathcal{C}}_d^0 , \rho_\infty )$.
\end{lemma}
\begin{remark}
Donsker's theorem generalizes the multidimensional central limit theorem (Lemma \ref{Mult CLT})
to a \emph{functional} central limit theorem,
because weak convergence of paths implies convergence in distribution of the endpoints.
Indeed, taking $t=1$ in Donsker's Theorem, the marginal convergence gives
$$n^{-1/2} X_n(1) = n^{-1/2} S_n \tod \Sigma^{1/2} b(1) .$$
Here by Lemma \ref{linear transformation}, $\Sigma^{1/2} b(1) \sim {\mathcal{N}}(0, \Sigma)$ since $b(1) \sim {\mathcal{N}}(0,I)$.
Then we have $n^{-1/2} S_n \tod {\mathcal{N}}(0, \Sigma)$, which is Lemma \ref{Mult CLT}.
\end{remark}
\section{Cauchy formula}
For this section we take $d=2$.
We consider the functionals ${\mathcal{A}}: {\mathcal{K}}_2 \to \mathbb{R}_+$ \label{cA}
and ${\mathcal{L}} : {\mathcal{K}}_2 \to \mathbb{R}_+$ \label{cL} given by the area and the perimeter length of convex compact sets in the plane. Formally,
we may define
\begin{equation}
\label{eq:L-def}
{\mathcal{A}} (A) := \lambda_2 (A) , ~~\text{and} ~~ {\mathcal{L}} (A) := \lim_{r \downarrow 0} \left( \frac{\lambda_2 ( \pi_r (A)) - \lambda_2 (A)}{r} \right),
\text{ for } A \in {\mathcal{K}}_2 .\end{equation}
The limit in \eqref{eq:L-def} exists by the \emph{Steiner formula} of integral geometry (see e.g.~\cite{sw}),
which expresses $\lambda_2 ( \pi_r(A))$ as a quadratic polynomial in $r$ whose coefficients
are given in terms of the \emph{intrinsic volumes} of $A$:
\begin{equation}
\label{eq:steiner}
\lambda_2 ( \pi_r(A)) = \lambda_2 (A) + r {\mathcal{L}} (A) + \pi r^2 {\,\bf 1} \{ A \neq \emptyset \} .\end{equation}
In particular,
\[ {\mathcal{L}} (A) = \begin{cases}\phantom{2} {\mathcal{H}}_{1} ( \partial A ) & \text{if } \mathop \mathrm{int} (A) \neq \emptyset , \\
2 {\mathcal{H}}_{1} ( \partial A ) & \text{if } \mathop \mathrm{int} (A) = \emptyset ,
\end{cases} \]
where ${\mathcal{H}}_{d}$ is $d$-dimensional Hausdorff measure on Borel sets.
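The Steiner formula \eqref{eq:steiner} is easy to verify numerically on a concrete set. A sketch for the (assumed) example of the unit square $A = [0,1]^2$ with $r = 1/2$, where $\lambda_2(\pi_r(A)) = 1 + 4r + \pi r^2$; the grid resolution is an illustrative choice.

```python
import math
import numpy as np

r = 0.5
N = 1200
xs = np.linspace(-r - 0.1, 1.0 + r + 0.1, N)    # box containing the parallel body
X, Y = np.meshgrid(xs, xs)

# Euclidean distance from each grid point to the unit square [0,1]^2
dx = np.maximum(np.maximum(-X, X - 1.0), 0.0)
dy = np.maximum(np.maximum(-Y, Y - 1.0), 0.0)
dist = np.sqrt(dx ** 2 + dy ** 2)

cell = (xs[1] - xs[0]) ** 2                      # area of one grid cell
area_grid = float((dist <= r).sum()) * cell      # ~ lambda_2(pi_r(A))
area_steiner = 1.0 + 4.0 * r + math.pi * r ** 2  # Steiner formula, d = 2
```

The grid estimate differs from the Steiner value only by a boundary term of order (perimeter of $\pi_r(A)$) $\times$ (cell width).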
We observe the translation-invariance and scaling properties
\[ {\mathcal{L}} ( x + \alpha A ) = \alpha {\mathcal{L}} (A) , ~~\text{and}~~ {\mathcal{A}} ( x+ \alpha A) = \alpha^2 {\mathcal{A}} (A) ,\]
where, for $A \in {\mathcal{K}}_2$, $x \in \mathbb{R}^2$ and $\alpha > 0$, we write $x + \alpha A = \{x + \alpha y : y \in A \} \in {\mathcal{K}}_2$.
For $A \in {\mathcal{K}}_2$, Cauchy obtained the following formula:
\begin{equation}
\label{cauchy0}
{\mathcal{L}} (A) = \int_0^\pi \left( \sup_{{\bf y} \in A} ( {\bf y} \cdot {\bf e}_\theta ) - \inf_{{\bf y} \in A} ( {\bf y} \cdot {\bf e}_\theta ) \right) \textup{d} \theta .
\end{equation}
Here ${\bf e}_\theta := (\cos \theta, \sin \theta)$ is the unit vector in direction $\theta$. We will need the following consequence of (\ref{cauchy0}).
\begin{proposition} \label{cauchyhull}
Let $K = \{ {\bf z}_0, \ldots, {\bf z}_n \}$ be a finite point set in $\mathbb{R}^2$, and let ${\mathcal{C}} = \mathop \mathrm{hull} (K)$.
Then
\begin{equation}
\label{cauchy1}
{\mathcal{L}} ( {\mathcal{C}} ) = \int_0^\pi \left( \max_{0 \leq i \leq n} ( {\bf z}_i \cdot {\bf e}_\theta ) - \min_{0 \leq i \leq n } ( {\bf z}_i \cdot {\bf e}_\theta ) \right) \textup{d} \theta.
\end{equation}
\end{proposition}
In particular, for the case of our random walk, (\ref{cauchy1}) says
\begin{equation}
\label{cauchy_}
L_n = {\mathcal{L}} ( \mathop \mathrm{hull}(S_0, \dots, S_n) ) = \int_0^\pi \left( \max_{0 \leq i \leq n} ( S_i \cdot {\bf e}_\theta ) - \min_{0 \leq i \leq n } ( S_i \cdot {\bf e}_\theta ) \right) \textup{d} \theta.
\end{equation}
An immediate but useful consequence of (\ref{cauchy_}) is that
\begin{equation}
\label{L_monotone}
L_{n+1} \geq L_n, {\ \mathrm{a.s.}}
\end{equation}
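Formula (\ref{cauchy_}) and the monotonicity \eqref{L_monotone} can both be checked numerically: discretise the integral over $\theta$ and compare with the perimeter of the convex hull computed directly from its vertices. The walk below (200 standard Gaussian steps) and the hull routine (Andrew's monotone chain) are illustrative choices, not taken from the text.

```python
import math
import numpy as np

def cross(o, a, b):
    """Signed area of the parallelogram spanned by a - o and b - o."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_perimeter(points):
    """Perimeter of the convex hull, via Andrew's monotone-chain algorithm."""
    pts = sorted(map(tuple, points))
    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    hull = chain(pts)[:-1] + chain(reversed(pts))[:-1]
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

rng = np.random.default_rng(4)
S = np.vstack([[0.0, 0.0], rng.standard_normal((200, 2)).cumsum(axis=0)])

thetas = np.linspace(0.0, math.pi, 5000, endpoint=False)
E = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
proj = S @ E.T                                   # S_i . e_theta for each theta
L_cauchy = float((proj.max(axis=0) - proj.min(axis=0)).mean() * math.pi)
L_hull = hull_perimeter(S)

# Monotonicity: the projections over a prefix of the walk can only be narrower.
proj_prefix = S[:101] @ E.T
L_prefix = float((proj_prefix.max(axis=0) - proj_prefix.min(axis=0)).mean() * math.pi)
```

The Riemann-sum value agrees with the edge-length perimeter up to the angular discretisation error, and the prefix value never exceeds the full value, directly reflecting \eqref{L_monotone}.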
In the case where $K$ is a finite point set, $\mathop \mathrm{hull} ( K)$
is a convex polygon, the boundary of which contains vertices ${\mathcal{V}} \subseteq K$
(extreme points of the convex hull) and the line-segment edges connecting them;
note that $\mathop \mathrm{hull} (K) = \mathop \mathrm{hull} ({\mathcal{V}})$.
Now, by convexity,
\[ \sup_{{\bf y} \in {\mathcal{C}}} ( {\bf y} \cdot {\bf e}_\theta ) = \max_{0 \leq i \leq n} ( {\bf z}_i \cdot {\bf e}_\theta) = \sup_{{\bf y} \in {\mathcal{V}}} ( {\bf y} \cdot {\bf e}_\theta ) ,\]
and similarly for the infimum. So (\ref{cauchy0}) does indeed imply (\ref{cauchy1}). However, to keep this presentation as self-contained as possible, we give a direct proof of (\ref{cauchy1}) without appealing to the more general result (\ref{cauchy0}).
\begin{proof}[Proof of Proposition \ref{cauchyhull}]
The above discussion shows that it suffices to consider the case where ${\mathcal{V}} = K$, i.e., all of the ${\bf z}_i$ lie on the boundary of the convex hull. Without loss of generality, suppose that ${\bf 0} \in {\mathcal{C}}$. Then we may rewrite (\ref{cauchy1}) as
$${\mathcal{L}}({\mathcal{C}})= \int_0^{2\pi} \max_{0 \leq i \leq n} ({\bf z}_i \cdot {\bf e}_{\theta}) \,\textup{d} \theta .$$
Suppose also that ${\bf z}_i = \|{\bf z}_i\|{\bf e}_{\theta_i}$ in polar coordinates, labelled so that $0 \leq \theta_0 < \theta_1 < \dots < \theta_n < 2\pi$. Thus starting from the rightmost point of $\partial {\mathcal{C}}$ on the horizontal axis and traversing the boundary anticlockwise, one visits the vertices ${\bf z}_0,{\bf z}_1,\dots,{\bf z}_n$ in order.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{pic1}
\caption{Proof of Proposition \ref{cauchyhull}}
\label{ipe 1}
\end{figure}
Let ${\bf z}_{n+1}:={\bf z}_0$. For each $k$, let ${\bf y}_k$ be the foot of the perpendicular from ${\bf 0}$ to the line through ${\bf z}_{k-1}$ and ${\bf z}_k$. For $1 \leq k \leq n+1$, let
\begin{displaymath}
\hat{{\bf z}}_k := \left\{
\begin{array}{ll}
{\bf y}_k, & \mbox{if ${\bf y}_k$ lies on the line segment $\overline{{\bf z}_{k-1}{\bf z}_k}$,} \\
{\bf z}_k, & \mbox{if ${\bf y}_k$ lies on the extension of the segment beyond ${\bf z}_k$,} \\
{\bf z}_{k-1}, & \mbox{if ${\bf y}_k$ lies on the extension of the segment beyond ${\bf z}_{k-1}$,}
\end{array}
\right.
\end{displaymath}
and let $\hat{{\bf z}}_0 := \hat{{\bf z}}_{n+1}$.
Notice that $\hat{{\bf z}}_1, \dots, \hat{{\bf z}}_{n+1}$ are ordered in the same way as ${\bf z}_0,\dots,{\bf z}_n$ (see Figure \ref{ipe 1}). Therefore,
$$\partial {\mathcal{C}} = \bigcup_{k=0}^{n}\left( \overline{{\bf z}_k \hat{{\bf z}}_{k+1}} \cup \overline{\hat{{\bf z}}_k {\bf z}_k} \right) .$$
Writing $\hat{{\bf z}}_{i} = \|\hat{{\bf z}}_i\|{\bf e}_{\hat{\theta}_i}$ for $0 \leq i \leq n+1$ in polar coordinates, we have
$$ \int_0^{2\pi} \max_{0 \leq i \leq n} ({\bf z}_i \cdot {\bf e}_{\theta}) \,\textup{d} \theta = \sum_{k=0}^{n} \int_{\hat{\theta}_k}^{\hat{\theta}_{k+1}} {\bf z}_k \cdot {\bf e}_{\theta} \,\textup{d} \theta .$$
Consider $\int_{\hat{\theta}_k}^{\hat{\theta}_{k+1}} {\bf z}_k \cdot {\bf e}_{\theta} \,\textup{d} \theta$.
Let ${\bf z}_k := (\alpha_1, \beta_1)$, ${\bf z}_{k+1} := (\alpha_2,\beta_2)$ and ${\bf z}_{k-1} := (\alpha_0,\beta_0)$. Without loss of generality, we can set $\beta_1 = 0$ and $\alpha_1 >0$. Then we have $\beta_2 \geq 0$, $\beta_0 \leq 0$, $0 \leq \hat{\theta}_{k+1} \leq \pi/2$ and $-\pi/2 \leq \hat{\theta}_{k} \leq 0$.
So,
\begin{align*}
\int_{\hat{\theta}_k}^{\hat{\theta}_{k+1}} {\bf z}_k \cdot {\bf e}_{\theta} \,\textup{d} \theta
= & \int_{\hat{\theta}_k}^{\hat{\theta}_{k+1}}(\alpha_1,0)\cdot(\cos \theta,\sin \theta) \,\textup{d}\theta \\
= & \alpha_1 (\sin \hat{\theta}_{k+1} - \sin \hat{\theta}_k) \\
= & \alpha_1 \left(\frac{\| \hat{{\bf z}}_{k+1}-{\bf z}_k \|}{\alpha_1} - \frac{-\| {\bf z}_k - \hat{{\bf z}}_k \|}{\alpha_1} \right) \\
= & \| \hat{{\bf z}}_{k+1}-{\bf z}_k \| + \| {\bf z}_k - \hat{{\bf z}}_k \| .
\end{align*}
Hence,
$$ \int_0^{2\pi} \max_{0 \leq i \leq n} ({\bf z}_i \cdot {\bf e}_{\theta}) \,\textup{d} \theta = \sum_{k=0}^{n} \int_{\hat{\theta}_k}^{\hat{\theta}_{k+1}} {\bf z}_k \cdot {\bf e}_{\theta} \,\textup{d} \theta
= \sum_{k=0}^{n} \left( \| \hat{{\bf z}}_{k+1}-{\bf z}_k \| + \| {\bf z}_k - \hat{{\bf z}}_k \| \right) = {\mathcal{L}}({\mathcal{C}}) . \qedhere$$
\end{proof}
\pagestyle{myheadings} \markright{\sc Chapter 3}
\chapter{Scaling limits for convex hulls}
\label{chapter3}
\section{Overview}
\label{sec:outline}
For some of the results that follow, scaling limit ideas are useful.
Recall that $S_n = \sum_{k=1}^n Z_k$ is the location of our random walk in $\mathbb{R}^2$ after $n$ steps. Write ${\mathcal{S}}_n := \{ S_0, S_1, \ldots, S_n \}$. \label{cS_n}
Our strategy to study properties of the random convex set $\mathop \mathrm{hull} {\mathcal{S}}_n$ (such as $L_n$ or $A_n$)
is to seek a weak limit for
a suitable scaling of $\mathop \mathrm{hull} {\mathcal{S}}_n$, which
we expect to be
the convex hull of a scaling limit of the walk ${\mathcal{S}}_n$.
In the case of zero drift ($\mu = 0$) a candidate scaling limit for the walk is readily identified
in terms
of planar Brownian motion. For the case $\mu \neq 0$, the `usual' approach of
centering and then scaling the walk (to again obtain planar Brownian motion) is not
useful in our context, as this transformation
does not act on the convex hull in any sensible way. A better idea is to scale space differently in the direction of $\mu$ and in the
orthogonal direction.
In other words, in either case we consider
$\phi_n ({\mathcal{S}}_n)$ for some \emph{affine} continuous scaling function
$\phi_n : \mathbb{R}^2 \to \mathbb{R}^2$.
The convex hull is preserved under affine transformations, so
\[ \phi_n ( \mathop \mathrm{hull} {\mathcal{S}}_n ) = \mathop \mathrm{hull} \phi_n ( {\mathcal{S}}_n ) ,\]
the convex hull of a random set
which will have a weak limit. We will then be able to deduce
scaling limits for quantities $L_n$ and $A_n$ provided, first, that we work in suitable spaces
on which our functionals of interest
enjoy continuity, so that we can appeal to the continuous mapping theorem for weak limits,
and, second, that $\phi_n$ acts on length and area by simple scaling. The usual $n^{-1/2}$ scaling
when $\mu =0$ is fine; for $\mu \neq 0$ we scale space in one coordinate by $n^{-1}$ and in the other by $n^{-1/2}$,
which acts nicely on area, but \emph{not} length. Thus these methods work exactly in the three cases
corresponding to \eqref{eq:three_vars}.
In view of the scaling limits that we expect, it is natural to work not with point
sets like ${\mathcal{S}}_n$, but with continuous \emph{paths}; instead of ${\mathcal{S}}_n$
we consider the interpolating path constructed as follows.
For each $n \in \mathbb{N}$ and all $t \in [0,1]$, define
\[ X_n (t) := S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor ) \left( S_{\lfloor nt \rfloor +1} - S_{\lfloor nt \rfloor} \right) = S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor ) Z_{\lfloor nt \rfloor +1} .\]
Note that $X_n (0) = S_0$ and $X_n (1) = S_n$.
Given $n$, we are interested in the convex hull of the image in $\mathbb{R}^2$ of the interval
$[0,1]$ under
the continuous function
$X_n$. Our scaling limits will be of the same form.
\section{Convex hulls of paths}
In this section we study some basic properties of the map from a continuous path to its convex hull.
Let $f \in {\mathcal{C}} ([0,T] ; \mathbb{R}^d)$. For any $t \in [0,T]$, $f[0,t]$ is compact, and so
Carath\'eodory's theorem for convex hulls (see Corollary 3.1 of \cite[p.\ 44]{gruber})
shows that $\mathop \mathrm{hull} ( f [0,t] )$ is compact. So $\mathop \mathrm{hull} ( f [0,t] ) \in {\mathcal{K}}_d$ is convex, bounded, and closed; in particular, it is a Borel set.
For reasons that we shall see, it mostly suffices to work with paths parametrized over the interval $[0,1]$.
For $f \in {\mathcal{C}}_d$, define
\[ H (f) := \mathop \mathrm{hull} \left( f [ 0,1 ] \right) .\] \label{H()}
First we prove continuity of the map $f \mapsto H (f)$.
\begin{lemma}
\label{lem:path-hull}
For any $f, g \in {\mathcal{C}}^0_d$, we have
\begin{equation}
\label{eq:H-comparison}
\rho_H ( H(f) , H(g) ) \leq \rho_\infty ( f,g).\end{equation}
Hence the function $H : ( {\mathcal{C}}^0_d , \rho_\infty ) \to ( {\mathcal{K}}^0_d , \rho_H )$ is continuous.
\end{lemma}
\begin{proof}
Let $f, g \in {\mathcal{C}}^0_d$. Then $H(f)$ and $H(g)$ are non-empty, as they both contain $f(0) = g(0)={\bf 0}$.
Consider ${\bf x} \in H(f)$. Since the convex hull of a set is the set of all convex combinations of points of the set (see Lemma 3.1 of \cite[p.\ 42]{gruber}),
there exist a positive integer $n$, weights $\lambda_1, \dots ,\lambda_n \geq 0$ with $\sum_{i=1}^n \lambda_i =1$, and $t_1, \dots, t_n \in [0,1]$ for which
${\bf x} = \sum_{i=1}^n \lambda_i f(t_i)$.
Then, taking ${\bf y} = \sum_{i=1}^n \lambda_i g(t_i)$, we have that ${\bf y} \in H(g)$ and, by the triangle inequality,
\[ \rho ({\bf x},{\bf y}) \leq \sum_{i=1}^n \lambda_i \rho( f(t_i) , g(t_i) )
\leq \rho_\infty (f,g) .\]
Thus, writing $r = \rho_\infty (f,g)$,
every ${\bf x} \in H(f)$ has ${\bf x} \in \pi_r ( H (g) )$,
so $H(f) \subseteq \pi_r ( H (g) )$. The symmetric argument gives
$H(g) \subseteq \pi_r ( H (f) )$. Thus, by \eqref{eq:hausdorff_minkowski},
we obtain \eqref{eq:H-comparison}.
\end{proof}
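The Lipschitz bound \eqref{eq:H-comparison} of Lemma \ref{lem:path-hull} can also be observed numerically: for hulls of finite point sets the support function is a maximum over the points, so $\rho_H$ can be computed from the support-function description \eqref{eq:hausdorff_support}. The two sampled paths below (a Gaussian random walk and a small perturbation of it) are illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
f = np.vstack([[0.0, 0.0], rng.standard_normal((300, 2)).cumsum(axis=0)])
g = f + 0.1 * rng.standard_normal(f.shape)
g[0] = 0.0                                      # both paths start at the origin

rho_inf = float(np.linalg.norm(f - g, axis=1).max())   # sup-distance of the paths

# rho_H between the hulls, via support functions over a dense set of directions
thetas = np.linspace(0.0, 2.0 * math.pi, 4000, endpoint=False)
E = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
h_f = (f @ E.T).max(axis=0)                     # support function of hull(f)
h_g = (g @ E.T).max(axis=0)                     # support function of hull(g)
rho_H = float(np.abs(h_f - h_g).max())
```

Discretising the directions only underestimates $\rho_H$, so the inequality $\rho_H \leq \rho_\infty$ is preserved by the computation.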
Given $f \in {\mathcal{C}}_d$, let $E (f) := \mathop \mathrm{ext} ( H(f) )$, the extreme points of the convex hull (see \cite[p.\ 75]{gruber}).
The set $E(f)$ is the smallest set (by inclusion) that generates $H(f)$ as its convex hull, i.e., for any
$A$ for which $\mathop \mathrm{hull} (A) = H(f)$, we have $E(f) \subseteq A$; see Theorem 5.5 of \cite[p.\ 75]{gruber}.
In particular, $E(f) \subseteq f [0,1]$.
\begin{lemma}
\label{lem:hull-max}
Let $f \in {\mathcal{C}}_d$.
Let $q : \mathbb{R}^d \to \mathbb{R}$ be continuous and convex. Then $q$ attains its supremum over $H(f)$ at a point of the path $f$, i.e.,
\[ \sup_{{\bf x} \in H(f)} q ({\bf x}) = \max_{t \in [0,1]} q (f(t)) .\]
\end{lemma}
\begin{proof}
Theorem 5.6 of \cite[p.\ 76]{gruber} shows that any continuous convex function on $H(f)$ attains its maximum
at a point of $E(f)$. Hence, since $E(f) \subseteq f [0,1]$,
\[ \sup_{{\bf x} \in H(f)} q ({\bf x}) = \sup_{{\bf x} \in E(f)} q ({\bf x}) \leq \sup_{{\bf x} \in f[0,1]} q({\bf x}) . \]
On the other hand, $f[0,1] \subseteq H(f)$, so
$\sup_{{\bf x} \in f[0,1]} q({\bf x}) \leq \sup_{{\bf x} \in H(f)} q ({\bf x})$. Hence
\[ \sup_{{\bf x} \in H(f)} q ({\bf x}) = \sup_{{\bf x} \in f[0,1]} q({\bf x}) = \sup_{t \in [0,1]} q(f(t)) .\]
Since $q \circ f$ is the composition of two continuous functions, it is itself continuous, and so the supremum is attained in the compact set $[0,1]$.
\end{proof}
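A quick numerical illustration of the lemma (purely illustrative, with my own choice of path and convex function): take $f(t) = (\cos 2\pi t, \sin 2\pi t)$, the unit circle, and $q({\bf x}) = \|{\bf x}\|^2$; the maximum of $q$ over random convex combinations of path points never exceeds the maximum over the path itself.

```python
import math
import random

random.seed(0)

# Path f(t) = (cos 2*pi*t, sin 2*pi*t): the unit circle, traversed once.
def f(t):
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

# Convex function q(x) = |x|^2.
def q(x):
    return x[0] ** 2 + x[1] ** 2

ts = [k / 1000 for k in range(1001)]
path_max = max(q(f(t)) for t in ts)  # max of q over the path itself (= 1)

# Random convex combinations of path points lie in H(f); q evaluated there
# should never exceed the path maximum, as the lemma asserts.
hull_max = 0.0
for _ in range(2000):
    pts = [f(random.random()) for _ in range(5)]
    w = [random.random() for _ in range(5)]
    s = sum(w)
    x = (sum(wi * p[0] for wi, p in zip(w, pts)) / s,
         sum(wi * p[1] for wi, p in zip(w, pts)) / s)
    hull_max = max(hull_max, q(x))

assert hull_max <= path_max + 1e-9
```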
For $A \in {\mathcal{K}}^0_d$, the \emph{support function} of $A$ is $h_A : \mathbb{R}^d \to \mathbb{R}_+$ defined by
\[ h_A ( {\bf x}) := \sup_{{\bf y} \in A} ({\bf x} \cdot {\bf y} ) .\]
For $A \in {\mathcal{K}}_2^0$,
\emph{Cauchy's formula} \eqref{cauchy0} states
\[ {\mathcal{L}} (A) = \int_{\mathbb{S}_1} h_A ({\bf u}) \textup{d} {\bf u} = \int_0^{2\pi} h_A ( {\bf e}_\theta ) \textup{d} \theta .\]
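Cauchy's formula is easy to verify numerically for a simple body. The sketch below (illustrative only; the choice of the unit square is mine) approximates $\int_0^{2\pi} h_A({\bf e}_\theta)\,\textup{d}\theta$ by a Riemann sum for $A = [0,1]^2$ and recovers the perimeter $4$.

```python
import math

# Vertices of the unit square [0,1]^2, which contains the origin.
V = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def h(theta):
    # Support function of the square evaluated at e_theta:
    # the maximum of the dot product over the vertices.
    return max(x * math.cos(theta) + y * math.sin(theta) for (x, y) in V)

# Approximate the integral of h over [0, 2*pi] by a Riemann sum.
N = 100000
integral = sum(h(2 * math.pi * k / N) for k in range(N)) * (2 * math.pi / N)

assert abs(integral - 4.0) < 1e-3  # perimeter of the unit square is 4
```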
We end this section by showing that the map $t \mapsto \mathop \mathrm{hull} ( f[0,t] )$ on $[0,T]$ is continuous if $f$ is continuous on $[0,T]$,
so that the continuous trajectory $t \mapsto f(t)$ is accompanied by a continuous `trajectory' of its convex hulls. This observation was made
by El Bachir \cite[pp.~16--17]{elbachir}; we take a different route based on the path space result Lemma \ref{lem:path-hull}. First we need a lemma.
\begin{lemma}
\label{lem:path-stretch}
Let $T >0$ and $f \in {\mathcal{C}} ( [0,T] ; \mathbb{R}^d)$. Then the map defined for $t \in [0,T]$ by
$t \mapsto g_t$, where $g_t : [0,1] \to \mathbb{R}^d$ is given by $g_t (s) = f (t s)$, $s \in [0,1]$, is
a continuous function from $([0,T], \rho)$ to $( {\mathcal{C}}_d, \rho_\infty )$.
\end{lemma}
\begin{proof}
First we fix $t \in [0,T]$ and show that $s \mapsto g_t (s)$ is continuous, so that $g_t \in {\mathcal{C}}_d$ as claimed.
Since $f$ is continuous on the compact interval $[0,T]$, it is uniformly continuous,
and admits
a monotone modulus of continuity $\mu_f$. Hence
\[ \rho ( g_t (s_1) , g_t (s_2) ) = \rho ( f(ts_1) , f(ts_2) ) \leq \mu_f ( \rho (ts_1 , ts_2)) = \mu_f ( t \rho (s_1, s_2 ) ) ,\]
which tends to $0$ as $\rho(s_1, s_2) \to 0$.
Hence $g_t \in {\mathcal{C}}_d$.
It remains to show that $t \mapsto g_t$ is continuous. But on ${\mathcal{C}}_d$,
\begin{align*} \rho_\infty ( g_{t_1}, g_{t_2} ) & = \sup_{s \in [0,1]} \rho ( f(t_1 s) , f(t_2 s) ) \\
& \leq \sup_{s \in [0,1]} \mu_f ( \rho (t_1 s, t_2 s) ) \\
& \leq \mu_f ( \rho ( t_1, t_2 ) ) ,\end{align*}
which tends to $0$ as $\rho (t_1, t_2) \to 0$, again using the uniform continuity of $f$.
\end{proof}
Here is the path continuity result for convex hulls of continuous paths; cf.\ \cite[pp.~16--17]{elbachir}.
\begin{corollary}
\label{cor:point-hull}
Let $T >0$ and $f \in {\mathcal{C}}^0 ( [0,T] ; \mathbb{R}^d)$ with $f(0) = {\bf 0}$. Then the map defined for $t \in [0,T]$ by
$t \mapsto \mathop \mathrm{hull} ( f[0,t] )$ is
a continuous function from $([0,T], \rho)$ to $( {\mathcal{K}}^0_d, \rho_H )$.
\end{corollary}
\begin{proof}
By Lemma \ref{lem:path-stretch}, $t \mapsto g_t$ is continuous, where $g_t(s) = f(ts)$, $s \in [0,1]$. Note that,
since $f(0)={\bf 0}$, $g_t \in {\mathcal{C}}_d^0$.
But the sets $f [0,t]$ and $g_t [0,1]$ coincide, so $\mathop \mathrm{hull} ( f [0,t] ) = H (g_t)$, and, by Lemma \ref{lem:path-hull}, $g_t \mapsto H(g_t)$ is continuous.
Thus $t \mapsto H(g_t)$ is the composition of two continuous functions, hence itself a continuous function:
\[ \begin{array}{ccccc}
[0,T] & \longrightarrow & {\mathcal{C}}^0_d & \longrightarrow & {\mathcal{K}}^0_d \\
t & \mapsto & g_t & \mapsto & H(g_t)
\end{array} \qedhere \]
\end{proof}
Recall the definitions of the perimeter length functional ${\mathcal{L}}$ and the area functional ${\mathcal{A}}$ from \eqref{eq:L-def}.
We now establish the following comparison inequalities in the Hausdorff metric.
\begin{lemma}
\label{lem:functional-continuity}
Suppose that $A, B \in {\mathcal{K}}^0_2$. Then
\begin{align}
\label{eq:L-comparison}
\rho ( {\mathcal{L}}(A) , {\mathcal{L}}(B) ) & \leq 2 \pi \rho_H (A,B) ;\\
\label{eq:A-comparison}
\rho ( {\mathcal{A}}(A) , {\mathcal{A}}(B) ) & \leq \pi \rho_H (A,B)^2 + ( {\mathcal{L}}(A) \vee {\mathcal{L}}(B) ) \rho_H (A,B) .
\end{align}
Hence, the functions ${\mathcal{L}}$ and ${\mathcal{A}}$ are both continuous from $({\mathcal{K}}^0_2 , \rho_H )$ to $( \mathbb{R}_+ , \rho)$.
\end{lemma}
\begin{proof}
First consider ${\mathcal{L}}$. By Cauchy's formula,
\begin{align*}
\left| {\mathcal{L}} (A) - {\mathcal{L}}(B) \right| & = \left| \int_{\mathbb{S}_1} \left( h_{A} ( {\bf u} ) - h_{B} ( {\bf u} ) \right) \textup{d} {\bf u} \right| \\
& \leq \int_{\mathbb{S}_1} \sup_{{\bf u} \in \mathbb{S}_1} \left| h_{A} ( {\bf u} ) - h_{B} ( {\bf u} ) \right| \textup{d} {\bf u} = 2 \pi \rho_H ( A , B ) ,\end{align*}
by the triangle inequality and then \eqref{eq:hausdorff_support}. This gives \eqref{eq:L-comparison}.
Now consider ${\mathcal{A}}$. Set $r = \rho_H (A,B)$. Then, by \eqref{eq:hausdorff_minkowski}, $A \subseteq \pi_r (B)$.
Hence
\[ {\mathcal{A}} (A) \leq {\mathcal{A}} ( \pi_r (B) ) \leq {\mathcal{A}}(B) + r {\mathcal{L}} (B) + \pi r^2 ,\]
by \eqref{eq:steiner}. With the analogous argument starting from $B \subseteq \pi_r (A)$, we get \eqref{eq:A-comparison}.
\end{proof}
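A concrete numerical instance of both inequalities (illustrative only; the pair of nested squares and the value of $r$ are my own choices, for which the Hausdorff distance is exact): take $A = [0,1]^2$ and $B = [-r,1+r]^2$, so that $\rho_H(A,B) = r\sqrt{2}$, attained at a corner of $B$.

```python
import math

# A = [0,1]^2 and B = [-r, 1+r]^2.  The Hausdorff distance is attained at a
# corner of B: rho_H(A, B) = r * sqrt(2).  (Exact values for this simple pair.)
r = 0.1
rho_H = r * math.sqrt(2)

L_A, L_B = 4.0, 4.0 + 8 * r          # perimeters
A_A, A_B = 1.0, (1 + 2 * r) ** 2     # areas

# The two comparison inequalities of the lemma:
assert abs(L_A - L_B) <= 2 * math.pi * rho_H
assert abs(A_A - A_B) <= math.pi * rho_H ** 2 + max(L_A, L_B) * rho_H
```

Here $|{\mathcal{L}}(A)-{\mathcal{L}}(B)| = 0.8 \leq 2\pi r\sqrt{2} \approx 0.889$, and $|{\mathcal{A}}(A)-{\mathcal{A}}(B)| = 0.44 \leq \pi r^2 \cdot 2 + 4.8\, r\sqrt{2} \approx 0.742$.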
\section{Brownian convex hulls as scaling limits}
\label{sec:Brownian-hulls}
Now we return to considering the random walk $S_n = \sum_{k=1}^n Z_k$ in $\mathbb{R}^2$.
The two different scalings outlined in Section \ref{sec:outline}, for the cases $\mu =0$ and $\mu \neq 0$,
lead to different scaling limits for the random walk. Both are associated with Brownian motion.
In the case $\mu =0$, the scaling limit is the usual planar Brownian motion, at least when $\Sigma = I$, the identity matrix.
Let $b:=( b(s) )_{s \in [0,1]}$ denote standard Brownian motion in $\mathbb{R}^2$, started at $b(0) = 0$.
For convenience we may assume $b \in {\mathcal{C}}_2^0$ (we can work on a probability space for which continuity holds for all sample points, rather than merely almost all).
For $t \in [0,1]$, let
{\bf e}gin{equation} \label{h_t}
h_t := \mathop \mathrm{hull} b[0,t] \in {\mathcal{K}}_2^0
\end{equation}
denote the convex hull of the Brownian path up to time $t$.
By Corollary \ref{cor:point-hull}, $t \mapsto h_t$ is continuous. Much is known about the properties of $h_t$: see e.g.\
\cite{chm,elbachir,evans,klm}.
We also set
\begin{equation} \label{eqn:def of Lt At for BM}
\ell_t := {\mathcal{L}} ( h_t ) , ~~\text{and}~~ a_t := {\mathcal{A}} ( h_t ) ,
\end{equation}
the perimeter length and area of the standard Brownian convex hull.
By Lemma \ref{lem:functional-continuity},
the processes $t \mapsto \ell_t$ and $t \mapsto a_t$ also have continuous sample paths.
We also need to work with the case of general covariances $\Sigma$;
to do so we introduce more notation and recall some facts about multivariate Gaussian random vectors.
For definiteness, we view vectors as Cartesian column vectors when required.
Since $\Sigma$ is positive semidefinite and symmetric,
there is a (unique) positive semidefinite symmetric matrix square-root $\Sigma^{1/2}$ \label{Sigma^1/2}
for which $\Sigma = (\Sigma^{1/2} )^2$.
The map $x \mapsto \Sigma^{1/2} x$ associated with $\Sigma^{1/2}$ is a linear transformation on $\mathbb{R}^2$
with Jacobian $\det \Sigma^{1/2} = \sqrt{ \det \Sigma}$;
hence ${\mathcal{A}} ( \Sigma^{1/2} A ) = {\mathcal{A}} (A) \sqrt{ \det \Sigma }$
for any measurable $A \subseteq \mathbb{R}^2$.
If $W \sim {\mathcal{N}} (0, I)$, then by Lemma \ref{linear transformation}, $\Sigma^{1/2} W \sim {\mathcal{N}} (0, \Sigma)$,
a bivariate normal distribution with mean $0$
and covariance $\Sigma$; the notation permits $\Sigma =0$,
in which case ${\mathcal{N}}(0,0)$ stands for the degenerate
normal distribution with point mass at $0$. Similarly, given $b$ a standard Brownian motion on $\mathbb{R}^2$, the diffusion $\Sigma^{1/2} b$
is \emph{correlated} planar Brownian motion with covariance matrix $\Sigma$.
Recall that `$\Rightarrow$' (see Section \ref{sec:CMT and Donsker}) indicates weak convergence.
\begin{theorem}
\label{thm:limit-zero}
Suppose that $\mathbb{E}\, ( \| Z_1\|^2 ) < \infty$ and $\mu =0$.
Then, as $n \to \infty$,
\[ n^{-1/2} \mathop \mathrm{hull} \{ S_0, S_1, \ldots, S_n \} \Rightarrow \Sigma^{1/2} h_1 , \]
in the sense of weak convergence on $({\mathcal{K}}_2^0 , \rho_H )$.
\end{theorem}
\begin{proof}
Donsker's theorem (see Lemma \ref{thm:donsker})
implies that $n^{-1/2} X_n \Rightarrow \Sigma^{1/2} b$
on $({\mathcal{C}}_2^0 , \rho_\infty )$.
Now, the point set $X_n [0,1]$ is the union of the line segments
$\{ S_{k} +\theta (S_{k+1} - S_k) : \theta \in [0,1] \}$
over $k=0,1,\ldots, n-1$. Since the convex hull is preserved under affine transformations,
\[
H ( n^{-1/2} X_n ) =
n^{-1/2} H (X_n) = n^{-1/2} \mathop \mathrm{hull} \{ S_0, S_1, \ldots, S_n \} .\]
By Lemma \ref{lem:path-hull}, $H$ is continuous, and so the continuous mapping theorem
(see Lemma \ref{continuous mapping}) implies that
$$n^{-1/2} \mathop \mathrm{hull} \{ S_0, S_1, \ldots, S_n \} \Rightarrow H ( \Sigma^{1/2} b ) \text{ on } ({\mathcal{K}}_2^0 , \rho_H ) .$$
Finally, invariance of the convex hull under affine transformations shows $H (\Sigma^{1/2} b ) = \Sigma^{1/2} H (b) = \Sigma^{1/2} h_1$.
\end{proof}
Theorem \ref{thm:limit-zero} together with the continuous mapping theorem and Lemma \ref{lem:functional-continuity}
implies the following distributional limit results in the case $\mu =0$. Recall that `$\tod$' (see Section \ref{sec:convergence of random variables}) denotes
convergence in distribution for $\mathbb{R}$-valued random variables.
\begin{corollary}
\label{cor:zero-limits}
Suppose that $\mathbb{E}\, ( \| Z_1\|^2 ) < \infty$ and
$\mu =0$.
Then, as $n \to \infty$,
\[ n^{-1/2} L_n \tod {\mathcal{L}} ( \Sigma^{1/2} h_1 ) , ~~\text{and} ~~ n^{-1 } A_n \tod {\mathcal{A}} ( \Sigma^{1/2} h_1 ) = a_1 \sqrt{\det \Sigma} .\]
\end{corollary}
\begin{remark}
Recall that $a_1 = {\mathcal{A}}(h_1)$ is the area of the standard 2-dimensional Brownian convex hull run for unit time.
The distributional limits for $ n^{-1/2} L_n$ and $ n^{-1 } A_n$
in Corollary \ref{cor:zero-limits} are supported on $\mathbb{R}_+$ and, as we will show in Propositions \ref{prop:var_bounds u0} and \ref{prop:var_bounds v0 v+} below, are non-degenerate if $\Sigma$ is positive definite;
hence they are \emph{non-Gaussian} excluding trivial cases.
\end{remark}
In the case $\mu \neq 0$, the scaling limit
can be viewed as a space-time trajectory of one-dimensional Brownian motion. Let $w:=( w(s) )_{s \in [0,1]}$ \label{w} denote standard Brownian motion in $\mathbb{R}$, started at $w(0) = 0$;
similarly to above, we may take $w \in {\mathcal{C}}_1^0$.
Define $\tilde b \in {\mathcal{C}}_2^0$ in Cartesian coordinates via
\[ \tilde b (s) = ( s , w(s) ) , ~ \text{for } s \in [0,1]; \] \label{tilde b}
thus $\tilde b [0,1]$ is the space-time diagram of one-dimensional Brownian motion run for unit time.
For $t \in [0,1]$, let $\tilde h_t := \mathop \mathrm{hull} \tilde b [0,t] \in {\mathcal{K}}_2^0$, \label{tilde h_t}
and define $\tilde a_t := {\mathcal{A}} ( \tilde h_t )$. \label{tilde a_t}
(Closely related to $\tilde h_t$ is the greatest \emph{convex minorant} of $w$ over $[0,t]$, which is
of interest in its own right, see e.g.~\cite{pitman-ross} and references therein.)
Suppose $\mu \neq 0$ and $\sigma^2_{\mu_\perp} \in (0,\infty)$.
Given $\mu \in \mathbb{R}^2 \setminus \{ 0 \}$,
let $\hat \mu_\perp$ be the unit vector perpendicular to $\mu$ obtained by rotating $\hat \mu$ by $\pi/2$ anticlockwise.
For $n \in \mathbb{N}$, define $\psi^\mu_n : \mathbb{R}^2 \to \mathbb{R}^2$ by the image of $x \in \mathbb{R}^2$ in Cartesian components:
\[ \psi^\mu_n ( x ) = \left( \frac{x \cdot \hat \mu}{n \| \mu \| } , \frac{x \cdot \hat \mu_\perp}{ \sqrt { n \sigma^2_{\mu_\perp} } } \right) .\]
In words, $\psi^\mu_n$ rotates $\mathbb{R}^2$, mapping $\hat \mu$ to the unit vector in the horizontal direction,
and then scales space with a horizontal shrinking factor $\| \mu \| n$ and a vertical factor $ \sqrt { n \sigma^2_{\mu_\perp} } $;
see Figure \ref{fig rotate} for an illustration.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figure_2}
\caption{Simulated path of a random walk of $n=1000$ steps with drift $\mu = ( \frac{1}{2}, \frac{1}{4} )$ and its convex hull
(\emph{top left}), and (not to the same scale) the image under $\psi_n^\mu$ (\emph{bottom right}).}
\label{fig rotate}
\end{figure}
\begin{theorem}
\label{thm:limit-drift}
Suppose that $\mathbb{E}\, ( \| Z_1\|^2 ) < \infty$, $\mu \neq 0$, and $\sigma^2_{\mu_\perp} >0$.
Then, as $n \to \infty$,
\[ \psi^\mu_n ( \mathop \mathrm{hull} \{ S_0, S_1, \ldots, S_n \} ) \Rightarrow \tilde h_1, \]
in the sense of weak convergence on $({\mathcal{K}}_2^0 , \rho_H )$.
\end{theorem}
\begin{proof}
Observe that $\hat \mu \cdot S_n$ is a random walk on $\mathbb{R}$ with one-step mean drift $\hat \mu \cdot \mu = \| \mu \| \in (0,\infty)$,
while $\hat \mu_\perp \cdot S_n$ is a walk with mean drift $\hat \mu_\perp \cdot \mu = 0$
and increment variance
\begin{align*}
\mathbb{E}\, \left[ ( \hat \mu_\perp \cdot Z )^2 \right]
& = \mathbb{E}\, \left[ ( \hat \mu_\perp \cdot ( Z - \mu) )^2 \right] \\
& = \mathbb{E}\, [ \| Z - \mu \|^2 ] - \mathbb{E}\, [ (\hat \mu \cdot (Z - \mu) )^2 ] = \sigma^2 - \sigma^2_{\mu} \\
& = \sigma^2_{\mu_\perp} .
\end{align*}
By the strong law of large numbers, for any $\varepsilon>0$ there exists an a.s.-finite $N_\varepsilon \in \mathbb{N}$ such that
$| m^{-1} \hat \mu \cdot S_m - \| \mu \| | < \varepsilon$ for all $m \geq N_\varepsilon$.
Now we have that
\begin{align*} \sup_{N_\varepsilon/n \leq t \leq 1 } \left| \frac{ \hat \mu \cdot S_{\lfloor nt \rfloor}}{n} - t \| \mu \| \right|
& \leq \sup_{N_\varepsilon/n \leq t \leq 1 } \left( \frac{\lfloor nt \rfloor}{n} \right)
\left| \frac{ \hat \mu \cdot S_{\lfloor nt \rfloor}}{\lfloor nt \rfloor} - \| \mu \| \right| \\
& \quad + \| \mu \| \sup_{0 \leq t \leq 1 } \left| \frac{\lfloor nt \rfloor}{n} - t \right| \\
& \leq \sup_{N_\varepsilon/n \leq t \leq 1}
\left| \frac{ \hat \mu \cdot S_{\lfloor nt \rfloor}}{\lfloor nt \rfloor} - \| \mu \| \right| + \frac{ \| \mu \|}{n} \\
& \leq \varepsilon + \frac{ \| \mu \|}{n}.
\end{align*}
On the other hand,
\[ \sup_{0 \leq t \leq N_\varepsilon/n } \left| \frac{ \hat \mu \cdot S_{\lfloor nt \rfloor}}{n} - t \| \mu \| \right|
\leq \frac{1}{n} \max \{ \hat \mu \cdot S_0, \ldots, \hat \mu \cdot S_{N_\varepsilon} \} + \frac{N_\varepsilon \| \mu \|}{n} \to 0, {\ \mathrm{a.s.}} ,\]
since $N_\varepsilon < \infty$ a.s. Combining these last two displays and using the fact that $\varepsilon>0$ was arbitrary,
we see that
$$\sup_{0 \leq t \leq 1} \left| n^{-1} \hat \mu \cdot S_{\lfloor nt \rfloor} - t \| \mu \| \right| \to 0, {\ \mathrm{a.s.}} ,$$
which is the functional version of the strong law. Similarly,
$$\sup_{0 \leq t \leq 1} \left| n^{-1} \hat \mu \cdot S_{\lfloor nt \rfloor +1} - t \| \mu \| \right| \to 0, {\ \mathrm{a.s.}} $$
Since $X_n(t)$ interpolates $S_{\lfloor nt \rfloor}$ and $S_{\lfloor nt \rfloor +1}$, it follows that
$$\sup_{0 \leq t \leq 1} \left| n^{-1} \hat \mu \cdot X_n(t) - t \| \mu \| \right| \to 0, {\ \mathrm{a.s.}} $$
In other words,
$(n \|\mu \|)^{-1} X_n \cdot \hat \mu$ converges a.s.\ to the identity function $t \mapsto t$ on $[0,1]$.
For the other component, Donsker's theorem (Lemma \ref{thm:donsker}) gives $( n \sigma^2_{\mu_\perp})^{-1/2} X_n \cdot \hat \mu_\perp \Rightarrow w$ on $({\mathcal{C}}_1^0, \rho_\infty)$.
It follows that, as $n \to \infty$,
$\psi^\mu_n ( X_n ) \Rightarrow \tilde b$,
on $({\mathcal{C}}_2^0 , \rho_\infty )$. Hence by Lemma \ref{lem:path-hull} and since $\psi_n^\mu$ acts as an affine transformation on $\mathbb{R}^2$,
\[ \psi_n^\mu ( H ( X_n ) ) = H ( \psi_n^\mu ( X_n ) ) \Rightarrow H ( \tilde b ) ,\]
on $({\mathcal{K}}_2^0 , \rho_H )$, and the result follows.
\end{proof}
Theorem \ref{thm:limit-drift} with the continuous mapping theorem (Lemma \ref{continuous mapping}), Lemma \ref{lem:functional-continuity},
and the fact that ${\mathcal{A}} ( \psi_n^\mu ( A )) = n^{-3/2} \| \mu \|^{-1} ( \sigma^2_{\mu_\perp} )^{-1/2} {\mathcal{A}} ( A )$
for measurable $A \subseteq \mathbb{R}^2$,
implies
the following distributional limit for $A_n$ in the case $\mu \neq 0$.
\begin{corollary}
\label{cor:A-limit-drift}
Suppose that $\mathbb{E}\, ( \| Z_1\|^2 ) < \infty$, $\mu \neq 0$, and $\sigma^2_{\mu_\perp} >0$.
Then
\[ n^{-3/2} A_n \tod \| \mu \| ( \sigma^2_{\mu_\perp} )^{1/2} \tilde a_1 , \text{ as } n \to \infty . \]
\end{corollary}
\begin{remarks}
(i) Only the $\sigma^2_{\mu_\perp} >0$ case is non-trivial, since
$\sigma^2_{\mu_\perp} =0$ if and only if $Z$ is parallel to $\pm \mu$ a.s., in which case all the points
$S_0, \ldots, S_n$ are collinear and $A_n = 0$ a.s.\ for all $n$.\\
(ii)
The limit in Corollary \ref{cor:A-limit-drift} is non-negative and non-degenerate (see Proposition \ref{prop:var_bounds v0 v+}
below) and hence non-Gaussian.
\end{remarks}
The framework of this chapter shows that whenever a discrete-time
process in $\mathbb{R}^d$ converges weakly to a limit on the space of continuous paths, the corresponding convex hulls
converge. It would be of interest to extend the framework to admit discontinuous limit processes,
such as L\'evy processes with jumps \cite{klm} that arise as scaling limits of random walks whose increments
have infinite variance.
\pagestyle{myheadings} \markright{\sc Chapter 4}
\chapter{Spitzer--Widom formula for the expected perimeter length and its consequences}
\label{chapter4}
\section{Overview}
Our contribution in this chapter is twofold: we give a new proof of the Spitzer--Widom formula in Section \ref{sec: proof of SW}, and we use that formula to derive the asymptotics for the expected perimeter length in Section \ref{sec: asy for per length}.
Firstly, we show how to deduce the Spitzer--Widom formula from the Cauchy formula.
The following theorem is Theorem 2 in \cite{sw}.
\begin{theorem}[Spitzer--Widom formula]
Suppose that $\mathbb{E}\, \|Z_1\| < \infty$. Then $$ \mathbb{E}\, L_n = 2 \sum_{k=1}^n \frac{1}{k} \mathbb{E}\, \| S_k \|. $$
\end{theorem}
The basis for our derivation of the Spitzer--Widom formula is an analogous result for
\emph{one-dimensional} random walk, stated in Lemma \ref{kac2} below, which is itself
a consequence of the combinatorial result given in Lemma \ref{kac}. Lemma \ref{kac} was stated by Kac \cite[pp.\ 502--503 and Theorem 4.2 on p.\ 508]{kac}
and attributed to Hunt; the proof given is due to Dyson. Lemma \ref{kac2} is variously attributed to Chung, Hunt, Dyson, and Kac; it is also related
to results of Sparre Andersen \cite{andersen} and is a special case of what has become known as the Spitzer or Spitzer--Baxter identity \cite[Ch.\ 9]{kallenberg} for random walks,
a more sophisticated result usually deduced from Wiener--Hopf theory.
\section{Derivation of Spitzer--Widom formula}
\label{sec: proof of SW}
Let $X_1, X_2, \dots$ be i.i.d.\ random variables. Let $T_n=\sum_{i=1}^n X_i$ and $M_n=\max\{0,T_1,\dots,T_n\}$.
Write $\sigma = (\sigma_1,\sigma_2,\dots,\sigma_n)$ for a permutation of $\{1,\dots,n\}$, and let $\pi_n$ denote the set of all such permutations,
so that $(\pi_n, \circ)$ is a group under composition. For $\sigma \in \pi_n$, let $T_n^{\sigma} = \sum_{i=1}^n X_{\sigma_i}$
and $M_n^{\sigma}=\max\{0,T_1^{\sigma},\dots,T_n^{\sigma} \}$.
\begin{lemma} \label{kac}
$$\sum_{\sigma\in \pi_n} M_n^{\sigma} = \sum_{\sigma \in \pi_n} X_{\sigma_1} \sum_{k=1}^n {\,\bf 1}\{T_k^\sigma > 0\}.$$
\end{lemma}
\begin{proof}
Note that if $T_k^\sigma \leq 0$, then $M_k^\sigma - M_{k-1}^\sigma = 0$. If $T_k^\sigma > 0$, then
$$ M_k^\sigma = \max(T_1^{\sigma},T_2^{\sigma},\dots,T_k^{\sigma}) = X_{\sigma_1} + \max(0,X_{\sigma_2},X_{\sigma_2} + X_{\sigma_3},\dots,\sum_{l=2}^k X_{\sigma_l}) .$$
Combining these two cases, we get
\begin{align*}
M_k^\sigma - M_{k-1}^\sigma =
& {\,\bf 1}\{T_k^\sigma>0\} \Bigg[ X_{\sigma_1} + \max\Big(0,X_{\sigma_2},X_{\sigma_2} + X_{\sigma_3},\dots,\sum_{l=2}^k X_{\sigma_l}\Big) \\
& - \max\Big(0,X_{\sigma_1},X_{\sigma_1}+X_{\sigma_2},\dots,\sum_{j=1}^{k-1} X_{\sigma_j}\Big) \Bigg].
\end{align*}
Fix $k \in \{1,\dots,n \}$. Let $G(\omega_{k+1},\dots,\omega_n)$ be the subset of $\pi_n$ consisting of permutations whose last $(n-k)$ indices are $\omega_{k+1},\dots,\omega_n$, where $1 \leq \omega_i \leq n$.
Then $\pi_n$ is decomposed into $\frac{n!}{k!}$ disjoint subsets $G(\omega_{k+1},\dots,\omega_n)$ of size $k!$.
Denote $$f(\sigma_1,\dots,\sigma_{k-1},\sigma_k)
:= \max\Big(0,X_{\sigma_1}, X_{\sigma_1}+X_{\sigma_2}, \dots,\sum_{j=1}^{k-1} X_{\sigma_j}\Big) .$$
Then,
$$ M_k^\sigma - M_{k-1}^\sigma = {\,\bf 1}\{T_k^\sigma>0\} \left[ X_{\sigma_1} + f(\sigma_2, \dots, \sigma_k, \sigma_1)- f(\sigma_1,\dots,\sigma_{k-1},\sigma_k) \right]. $$
Summing both sides of the equation over $\{\sigma \in \pi_n \}$, since
$$\sum_{\sigma \in \pi_n} = \sum_{1 \leq \sigma_{k+1},\dots,\sigma_n \leq n} \sum_{\sigma \in G(\sigma_{k+1},\dots,\sigma_n)} ,$$
and
$$\sum_{\sigma \in G(\sigma_{k+1},\dots,\sigma_n)} f(\sigma_2, \dots, \sigma_k, \sigma_1) = \sum_{\sigma \in G(\sigma_{k+1},\dots,\sigma_n)} f(\sigma_1,\dots,\sigma_{k-1},\sigma_k) ,$$
we get
\begin{equation} \label{M_k}
\sum_{\sigma\in \pi_n} \left( M_k^{\sigma}-M_{k-1}^{\sigma} \right) = \sum_{\sigma \in \pi_n} X_{\sigma_1} {\,\bf 1}\{T_k^\sigma > 0\} .
\end{equation}
The result follows on summing both sides of equation \eqref{M_k} from $k=1$ to $n$, noting that $M_0^\sigma = \max(0) = 0$.
\end{proof}
Here we use the notation $x^+ := x {\,\bf 1}\{ x>0 \}$ \label{x^+}
and $x^- := -x {\,\bf 1}\{ x<0 \}$ \label{x^-} for $x \in \mathbb{R}$. So $x=x^+ - x^-$ and $|x|= x^+ + x^-$.
The following result on the expected maximum of 1-dimensional random walk is variously attributed to Chung, Hunt, Dyson and Kac.
A combinatorial proof similar to the one given here can be found on pages 301--302 of \cite{chung}.
\begin{lemma} \label{kac2}
Suppose that $\mathbb{E}\, |X_k| < \infty$. Then, $$\mathbb{E}\, M_n = \sum_{k=1}^n \frac{\mathbb{E}\,(T_k^+)}{k}.$$
\end{lemma}
\begin{proof}
By Lemma \ref{kac}, we have
\begin{align*}
\mathbb{E}\, M_n = \mathbb{E}\, M_n^{\sigma}
& = \frac{1}{n!}\sum_{\sigma \in \pi_n} \mathbb{E}\, M_n^{\sigma} \\
& = \frac{1}{n!} \sum_{\sigma\in \pi_n} \mathbb{E}\, \big[X_{\sigma_1} \sum_{k=1}^n {\,\bf 1}\{T_k^{\sigma} > 0 \} \big] \\
& = \mathbb{E}\, \big[ X_1 \sum_{k=1}^n {\,\bf 1}\{T_k > 0\} \big] ,
\end{align*}
where the last equality holds since the $X_i$ are i.i.d., so that $\mathbb{E}\, ( X_1 {\,\bf 1}\{T_k>0 \}) = \mathbb{E}\,( X_i {\,\bf 1}\{T_k>0 \})$ for any $1 \leq i \leq k$. Hence also $\mathbb{E}\,(X_1 {\,\bf 1}\{ T_k>0\}) = k^{-1} \mathbb{E}\,(T_k {\,\bf 1}\{T_k>0\})$. Then,
\begin{align*}
\mathbb{E}\, \big[ X_1 \sum_{k=1}^n {\,\bf 1}\{T_k > 0\} \big]
& = \sum_{k=1}^n \mathbb{E}\, \big[ X_1 {\,\bf 1}\{T_k > 0\} \big] \\
& = \sum_{k=1}^n \mathbb{E}\, \big[ \frac{T_k}{k} {\,\bf 1}\{T_k>0\} \big] \\
& = \sum_{k=1}^n \frac{\mathbb{E}\,(T_k^+)}{k} . \qedhere
\end{align*}
\end{proof}
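Since the identity of Lemma \ref{kac2} is exact for any increment distribution with a finite mean, it can be verified by exhaustive enumeration. The following Python sketch (illustrative; the choice of the simple symmetric walk and of $n=6$ is mine) computes both sides exactly, in rational arithmetic, by averaging over all $2^n$ equally likely sign sequences.

```python
from fractions import Fraction
from itertools import product

n = 6
# Exhaustive expectation over the 2^n equally likely sign sequences of a
# simple symmetric walk with increments +1 / -1.
EM = Fraction(0)                      # E M_n
ET_plus = [Fraction(0)] * (n + 1)     # E T_k^+, k = 1..n
for steps in product((1, -1), repeat=n):
    T, M = 0, 0
    for k, x in enumerate(steps, start=1):
        T += x
        M = max(M, T)
        ET_plus[k] += Fraction(max(T, 0), 2 ** n)
    EM += Fraction(M, 2 ** n)

rhs = sum(ET_plus[k] / k for k in range(1, n + 1))
assert EM == rhs   # E M_n = sum_k E(T_k^+)/k, exactly
```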
\begin{remark}
\emph{Fluctuation theory} for one-dimensional random walks concerns a series of important identities involving the distributions
of $M_n$, $T_n$, and other quantities associated with the random walk path. A cornerstone of the theory is the celebrated double generating-function identity of
Spitzer which states that
\[ \sum_{n=0}^\infty t^n \mathbb{E}\, [ {\mathrm{e}}^{ i u M_n } ] = \exp \left\{ \sum_{k=1}^\infty \frac{t^k}{k} \mathbb{E}\, [ {\mathrm{e}}^{iu T_k^+} ] \right\} \]
for $|t| <1$. Lemma~\ref{kac2} is a corollary of Spitzer's identity, obtained on differentiating with respect to $u$ and setting $u=0$. The proof of
Spitzer's identity may be approached from an analytic perspective, using the Wiener--Hopf factorization (see e.g.\ Resnick \cite[Ch.~7]{resnick}),
or from a combinatorial one (see e.g.\ Karlin and Taylor \cite[Ch.~17]{kt2}). These references discuss many other aspects of fluctuation theory, as do
Chung \cite[\S \S 8.4 \& 8.5]{chung}, Feller \cite{feller2}, Asmussen \cite[Ch.~VIII]{asmussen}, and Tak\'acs \cite{takacs2}. In particular, Chung \cite[pp.~301--302]{chung} gives a direct
proof of Lemma~\ref{kac2} closely related to the one presented here; essentially the same proof is in \cite[p.~232]{asmussen}.
\end{remark}
\begin{proof}[Proof of the Spitzer--Widom formula] $\ $\\
\quad Denote $M_n(\theta):= \max_{0 \leq i \leq n}(S_i \cdot {\bf e}_\theta)$ and $m_n(\theta):= \min_{0 \leq i \leq n}(S_i \cdot {\bf e}_\theta)$. Note that $M_n(\theta) \geq 0$ and $m_n(\theta) \leq 0$ since ${\bf 0} \in {\mathcal{H}}_n$.
Applying Fubini's theorem (see Lemma \ref{fubini}) to the Cauchy formula \eqref{cauchy_}, we get
$$\mathbb{E}\, L_n = \int_0^\pi\left( \mathbb{E}\, M_n(\theta) - \mathbb{E}\, m_n(\theta)\right) \textup{d} \theta .$$
Observe that $S_n \cdot {\bf e}_{\theta}$ is a one-dimensional random walk on $\mathbb{R}$. Take $T_k = S_k \cdot {\bf e}_{\theta}$ in Lemma \ref{kac2}. Then,
$$ \mathbb{E}\, M_n(\theta)= \sum_{k=1}^n \frac{\mathbb{E}\, \left[(S_k \cdot {\bf e}_{\theta})^+\right]}{k} \quad \hbox{and} \quad
\mathbb{E}\, m_n(\theta)= -\sum_{k=1}^n \frac{\mathbb{E}\, \left[(-S_k \cdot {\bf e}_{\theta})^+\right]}{k} ,$$
since $m_n(\theta) = -\max_{0 \leq i \leq n}(- S_i \cdot {\bf e}_\theta)$. So, since $x^- = (-x)^+$,
\begin{align*}
\mathbb{E}\, L_n
= & \int_0^\pi \sum_{k=1}^n \frac{1}{k} \mathbb{E}\, \left[(S_k \cdot {\bf e}_{\theta})^+ + (S_k \cdot {\bf e}_{\theta})^- \right] \textup{d}\theta \\
= & \int_0^\pi \sum_{k=1}^n \frac{\mathbb{E}\, \left|S_k \cdot {\bf e}_{\theta}\right|}{k} \textup{d} \theta .
\end{align*}
Then, by Fubini's theorem,
\begin{align*}
\mathbb{E}\, L_n
= & \sum_{k=1}^n \frac{1}{k} \int_0^\pi \mathbb{E}\, \left|S_k \cdot {\bf e}_{\theta}\right| \textup{d} \theta \\
= & \sum_{k=1}^n \frac{1}{k} \mathbb{E}\, \int_0^\pi \left|S_k \cdot {\bf e}_{\theta}\right| \textup{d} \theta \\
= & 2 \sum_{k=1}^n \frac{\mathbb{E}\, \| S_k \|}{k} . \qedhere
\end{align*}
\end{proof}
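As a sanity check (not part of the proof), the formula can be verified by simulation: estimate both sides from the same sample of walks with standard Gaussian increments. The Python sketch below is illustrative; the hull routine (Andrew's monotone chain) and all parameter choices are my own.

```python
import math
import random

random.seed(1)

def hull_perimeter(points):
    # Andrew's monotone chain convex hull, then sum the edge lengths.
    pts = sorted(set(points))
    if len(pts) <= 1:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    return sum(math.dist(hull[i], hull[(i+1) % len(hull)])
               for i in range(len(hull)))

n, trials = 50, 2000
lhs = 0.0                       # Monte Carlo estimate of E L_n
norm_sums = [0.0] * (n + 1)     # accumulates ||S_k|| over trials
for _ in range(trials):
    S, path = (0.0, 0.0), [(0.0, 0.0)]
    for k in range(1, n + 1):
        S = (S[0] + random.gauss(0, 1), S[1] + random.gauss(0, 1))
        path.append(S)
        norm_sums[k] += math.hypot(*S)
    lhs += hull_perimeter(path)
lhs /= trials
rhs = 2 * sum((norm_sums[k] / trials) / k for k in range(1, n + 1))

assert abs(lhs - rhs) / rhs < 0.05   # both sides estimate E L_n
```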
\section{Asymptotics for the expected perimeter length}
\label{sec: asy for per length}
To investigate the first-order behaviour of $\mathbb{E}\, L_n$, the Spitzer--Widom formula \eqref{SW formula} suggests that we first study the first-order behaviour of $\mathbb{E}\, \| S_n \|$.
\begin{lemma}
\label{ES with drift}
If $\mathbb{E}\, \| Z_1 \| < \infty$, then $ n^{-1}\mathbb{E}\, \| S_n \| \to \|\mu\| $ as $ n \to \infty$.
\end{lemma}
\begin{proof}
The strong law of large numbers for $S_n$ gives $\| S_n /n - \mathbb{E}\, Z_1 \| \to 0 {\ \mathrm{a.s.}}$ as $n \to \infty$. Then by the triangle inequality,
$$ \| S_n/n \| = \| S_n/n-\mathbb{E}\, Z_1 + \mathbb{E}\, Z_1 \| \leq \| S_n/n-\mathbb{E}\, Z_1 \|+ \| \mathbb{E}\, Z_1\| $$
and
$$ \| \mathbb{E}\, Z_1\| \leq \| \mathbb{E}\, Z_1 - S_n/n \| + \| S_n/n \|. $$
So, $\| S_n \|/n \to \| \mathbb{E}\, Z_1 \| {\ \mathrm{a.s.}}$ as $n \to \infty$.
Similarly, setting $Y_n = \sum_{i=1}^n \| Z_i \|$, we have $Y_n/n \to \mathbb{E}\, \|Z_1 \| {\ \mathrm{a.s.}}$ as $n \to \infty$. Moreover, $\mathbb{E}\,[Y_n/n] = \mathbb{E}\, \| Z_1 \|$ and $0 \leq \| S_n \|/n \leq Y_n/n$. Hence the result follows from Pratt's lemma (see Lemma \ref{pratt's lemma}).
\end{proof}
The following asymptotic result for $\mathbb{E}\, L_n$ was obtained as equation (2.16) by Snyder \& Steele \cite{ss} under
the stronger condition $\mathbb{E}\, ( \| Z_1 \|^2 ) < \infty$; as Lemma \ref{ES with drift} shows, a finite first moment is sufficient.
\begin{proposition}
\label{EL with drift}
If $\mathbb{E}\, \| Z_1 \| < \infty$, then $n^{-1}\mathbb{E}\, L_n \to 2\|\mu\|$ as $n \to \infty$.
\end{proposition}
\begin{proof}
The result follows from the Spitzer--Widom formula \eqref{SW formula} and Lemma \ref{convergence of Cesaro mean} with $y_n=n^{-1}\mathbb{E}\, \| S_n\|$, since $y_n \to \|\mu\|$ by Lemma \ref{ES with drift}.
\end{proof}
\begin{remarks}
\label{remarks of EL with drift}
\begin{enumerate}[(i)]
\item
Proposition \ref{EL with drift} says that if $\mu \neq 0$ then $\mathbb{E}\, L_n$ is of order $n$. If $\mu=0$, it says $\mathbb{E}\, L_n = o(n)$. We will show later in Proposition \ref{limit of E L_n} that under mild extra conditions in the $\mu=0$ case, $n^{-1/2} \mathbb{E}\, L_n$ has a limit.
\item
Snyder and Steele \cite[p.\ 1168]{ss} showed that if $\mathbb{E}\,(\| Z_1 \|^2)< \infty$ and $\mu \neq 0$, then in fact $n^{-1}L_n \to 2\|\mu\| {\ \mathrm{a.s.}}$ as $n \to \infty$. We give a proof of this in Proposition \ref{LLN for L_n} below.
\end{enumerate}
\end{remarks}
For the zero drift case $\mu=0$, we have the following.
\begin{lemma}
\label{ES 0 drift}
If $\mathbb{E}\,(\| Z_1 \|^2) < \infty$ and $\mu=0$, then $\mathbb{E}\, (\| S_n \|^2)= O(n)$ and $\mathbb{E}\, \| S_n \|= O(n^{1/2})$.
\end{lemma}
\begin{proof}
Consider $\|S_{n+1}\|^2$:
\begin{equation} \label{S_n+1}
\| S_{n+1} \|^2 = \| S_n+Z_{n+1} \|^2 = \| S_n \|^2 + 2S_n \cdot Z_{n+1}+ \|Z_{n+1}\|^2 .
\end{equation}
So, $$\mathbb{E}\,(\|S_{n+1}\|^2) - \mathbb{E}\,(\|S_{n}\|^2) = \mathbb{E}\,(\|Z_{1}\|^2) ,$$
since $S_n$ and $Z_{n+1}$ are independent and $Z_{n+1}$ has mean $0$, so $\mathbb{E}\,(S_n \cdot Z_{n+1})=\mathbb{E}\, S_n \cdot \mathbb{E}\, Z_{n+1} = 0$.
Summing from $n=0$ to $m-1$ gives
$$\mathbb{E}\,(\|S_m\|^2) - \mathbb{E}\,(\|S_0\|^2)=m \mathbb{E}\,(\|Z_1\|^2).$$
Hence, $\mathbb{E}\,(\|S_n\|^2) = O(n)$. The final claim follows from Jensen's inequality: $\mathbb{E}\, \|S_n\| \leq (\mathbb{E}\,[\|S_n\|^2])^{1/2}$.
\end{proof}
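In fact, since $S_0 = {\bf 0}$, the telescoping argument gives the exact identity $\mathbb{E}\,(\|S_n\|^2) = n\, \mathbb{E}\,(\|Z_1\|^2)$ in the zero-drift case. This can be verified exactly by enumeration; the Python sketch below (illustrative; the four-step lattice increment and $n=5$ are my own choices) does so in rational arithmetic.

```python
from fractions import Fraction
from itertools import product

# Zero-drift increments: the four unit lattice steps, each with probability 1/4.
Z = [(1, 0), (-1, 0), (0, 1), (0, -1)]
EZ2 = Fraction(1)          # E ||Z_1||^2 = 1 for unit steps

n = 5
ES2 = Fraction(0)          # E ||S_n||^2, computed exactly over 4^n paths
for steps in product(Z, repeat=n):
    x = sum(s[0] for s in steps)
    y = sum(s[1] for s in steps)
    ES2 += Fraction(x * x + y * y, 4 ** n)

assert ES2 == n * EZ2      # E ||S_n||^2 = n E ||Z_1||^2, exactly
```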
\begin{remark}
Lemma \ref{ES 0 drift} only gives an upper bound for the order of $\mathbb{E}\, \| S_n \|$. Under the mild assumption $\mathbb{P} (\|Z_1\| =0) < 1$, $n^{-1/2}\mathbb{E}\,\|S_n\|$ in fact has a positive limit, as we will see in the proof of Proposition \ref{limit of E L_n} below. This extra condition is of course necessary for the positive limit,
since if $Z_1 \equiv 0$ then $\mathbb{E}\, \| S_n \| \equiv 0$.
\end{remark}
\begin{proposition} \label{upper bound of E L_n}
If $\mathbb{E}\,(\|Z_1\|^2) < \infty$ and $\mu=0$, then $\mathbb{E}\, L_n = O(n^{1/2})$.
\end{proposition}
\begin{proof}
By Lemma \ref{ES 0 drift} and the Spitzer--Widom formula \eqref{SW formula}, for some constant $C$,
$$\mathbb{E}\, L_n \leq 2 \sum_{i=1}^n \frac{C\sqrt{i}}{i} = 2C \sum_{i=1}^n i^{-1/2} = O(n^{1/2}). \qedhere $$
\end{proof}
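The $O(n^{1/2})$ bound rests on the elementary fact that $n^{-1/2}\sum_{k=1}^n k^{-1/2} \to 2$ as $n \to \infty$, which a one-line numerical check confirms (illustrative sketch; the cut-off $n=10^6$ and tolerance are my own choices).

```python
import math

# n^{-1/2} * sum_{k=1}^{n} k^{-1/2} -> 2 as n -> infinity: this is what makes
# the bound  E L_n <= 2C sum_k k^{-1/2}  of order n^{1/2}.
def ratio(n):
    return sum(k ** -0.5 for k in range(1, n + 1)) / math.sqrt(n)

assert abs(ratio(10 ** 6) - 2.0) < 0.01
```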
\begin{lemma}
\label{lem:walk_moments}
Let $p > 1$. Suppose that $\mathbb{E}\,[ \|Z_1\|^p ] < \infty$.
\begin{itemize}
\item[(i)] For any $e \in \mathbb{S}_1$ such that $e \cdot \mu = 0$, $\mathbb{E}\, [ \max_{0\leq m\leq n} |S_m \cdot e|^p ] = O(n^{1 \vee (p/2)})$.
\item [(ii)] Moreover, if $\mu = 0$, then
$\mathbb{E}\, [ \max_{0\leq m\leq n} \| S_m \|^p ] = O(n^{1 \vee (p/2)})$.
\item[(iii)] On the other hand, if $\mu \neq 0$, then
$\mathbb{E}\, [\max_{0\leq m\leq n} |S_m \cdot \hat \mu|^p ]= O ( n^p )$.
\end{itemize}
\end{lemma}
\begin{proof}
Since $e \cdot \mu =0$, $S_n \cdot e$ is a martingale, and hence, by convexity,
$| S_n \cdot e|$ is a non-negative submartingale. Then, for $p > 1$,
\[ \mathbb{E}\, \left[ \max_{0\leq m \leq n} |S_m \cdot e|^p \right] \leq \left( \frac{p}{p-1} \right)^p \mathbb{E}\, \left[ | S_n \cdot e |^p \right]
= O ( n^{ 1 \vee (p/2) } ) ,\]
where the first inequality is Doob's $L^p$ inequality (see Lemma \ref{Doob})
and the second is the Marcinkiewicz--Zygmund inequality (see Lemma \ref{marcinkiewicz}). This gives part (i).
Part (ii) follows from part (i): take an orthonormal basis $\{ e_1, e_2\}$ of $\mathbb{R}^2$ and apply part (i) to each basis vector. By the triangle inequality,
$$\max_{0\leq m\leq n} \| S_m \| \leq \max_{0\leq m\leq n} |S_m \cdot e_1 | + \max_{0\leq m\leq n} |S_m \cdot e_2 | ,$$
so, together with Minkowski's inequality (see Lemma \ref{minkowski}), we have
\begin{align*}
\mathbb{E}\, \left[ \max_{0\leq m\leq n} \| S_m \|^p \right]
&\leq \mathbb{E}\, \left[\left( \max_{0\leq m\leq n}|S_m \cdot e_1 | + \max_{0\leq m\leq n}|S_m \cdot e_2 | \right)^p \right] \\
&= \left\| \max_{0\leq m\leq n}|S_m \cdot e_1 | + \max_{0\leq m\leq n}|S_m \cdot e_2 | \right\|_p^p \\
&\leq \left( \left\|\max_{0\leq m\leq n}|S_m \cdot e_1 | \right\|_p + \left\|\max_{0\leq m\leq n}|S_m \cdot e_2 | \right\|_p \right)^p \\
&= O(n^{1 \vee (p/2)}).
\end{align*}
Part (iii) follows from the fact that
$$\max_{0 \leq m \leq n} | S_m \cdot \hat \mu | \leq \sum_{k=1}^n | Z_k \cdot \hat \mu | \leq \sum_{k=1}^n \| Z_k \|$$
and an application of Rosenthal's inequality (see Lemma {\mathrm{e}}f{rosenthal}) to the latter sum gives
{\bf e}gin{align*}
\mathbb{E}\, \left[ \max_{0\leq m\leq n} \| S_m \cdot \hat \mu \|^p \right]
&\leq \mathbb{E}\, \left[\left( \sum_{k=1}^n \|Z_k\| \right)^p\, \right] \\
&\leq \max \left\{ 2^p \sum_{k=1}^n \mathbb{E}\, \|Z_k\|^p ,\, 2^{p^2} \left( \sum_{k=1}^n \mathbb{E}\, \|Z_k\| \right)^p \right\} \\
&\leq \max \left\{ O(n) , O(n^p) \right\} \\
&\leq O(n^p). \qedhere
\end{align*}
\end{proof}
Proposition \ref{upper bound of E L_n} gives the order of $\mathbb{E}\, L_n$. The following result identifies the exact limit;
its statement is similar to an example on p.~508 of \cite{sw}.
\begin{proposition} \label{limit of E L_n}
Suppose that $\mathbb{E}\,(\|Z_1\|^2) < \infty$ and $\mu=0$. Then, for $Y \sim \mathcal{N}({\bf 0}, \Sigma)$,
$$\lim_{n \to \infty} n^{-1/2}\mathbb{E}\, L_n = \mathbb{E}\, {\mathcal{L}} (\Sigma^{1/2} h_1) = 4 \mathbb{E}\, \|Y\| .$$
\end{proposition}
\begin{proof}
The finite point-set case of Cauchy's formula gives
\begin{equation}
\label{eq:walk-cauchy}
L_n = \int_{\mathbb{S}_{1}} \max_{0 \leq k \leq n} ( S_k \cdot e) \,\textup{d} e \leq 2 \pi \max_{0 \leq k \leq n} \| S_k \|.\end{equation}
Then by Lemma \ref{lem:walk_moments}(ii) we have $\sup_n \mathbb{E}\, [ ( n^{-1/2} L_n )^{2} ] < \infty$.
Hence $n^{-1/2} L_n$ is uniformly integrable, so that Theorem \ref{thm:limit-zero} yields
$\lim_{n \to \infty} n^{-1/2}\mathbb{E}\, L_n = \mathbb{E}\, {\mathcal{L}} ( \Sigma^{1/2} h_1 )$.
It remains to show that $\lim_{n \to \infty} n^{-1/2}\mathbb{E}\, L_n = 4 \mathbb{E}\, \| Y \|$. One can use Cauchy's formula to compute
$\mathbb{E}\, {\mathcal{L}} ( \Sigma^{1/2} h_1 )$; instead we give a direct random walk argument, following \cite{sw}.
The central limit theorem for $S_n$ implies that
$n^{-1/2} \|S_n\| \to \|Y\|$ in distribution. Since $\mu = 0$, $\mathbb{E}\, [ \|S_{n+1}\|^2 ] = \mathbb{E}\, [ \|S_{n}\|^2 ] + \mathbb{E}\, [ \|Z_{n+1} \|^2 ]$,
so that $\mathbb{E}\, [ \| S_n \|^2 ] = O(n)$. It follows that $n^{-1/2} \|S_n\|$ is uniformly integrable,
and hence
$$\lim_{n \to \infty} n^{-1/2} \mathbb{E}\, \|S_n\| = \mathbb{E}\, \|Y\|.$$
So for any $\varepsilon > 0$, there is some $n_0 \in \mathbb{N}$ such that $\left| k^{-1/2}\mathbb{E}\, \|S_k\| - \mathbb{E}\,\|Y\| \right| < \varepsilon $ for all $k \geq n_0$.
Then by the Spitzer--Widom formula \eqref{SW formula}, we have
\begin{align*}
& \left| \frac{\mathbb{E}\, L_n}{\sqrt{n}} -2\mathbb{E}\,\|Y\| \frac{1}{\sqrt{n}} \sum_{k=1}^n k^{-1/2} \right| \\
&= \frac{2}{\sqrt{n}} \left| \sum_{k=1}^n \left( \frac{\mathbb{E}\, \|S_k\|}{k} - \mathbb{E}\,\|Y\| k^{-1/2} \right) \right| \\
&\leq \frac{2}{\sqrt{n}} \sum_{k=1}^n \left| \frac{\mathbb{E}\, \|S_k\|}{\sqrt{k}} - \mathbb{E}\, \|Y\| \right| k^{-1/2} \\
&= \frac{2}{\sqrt{n}} \left( \sum_{k=1}^{n_0} + \sum_{k = n_0 +1 }^n \right) \left| \frac{\mathbb{E}\, \|S_k\|}{\sqrt{k}} - \mathbb{E}\,\|Y\| \right| k^{-1/2} \\
&\leq \frac{D}{\sqrt{n}} + \frac{2}{\sqrt{n}} \sum_{k = n_0 +1}^n \left| \frac{\mathbb{E}\, \|S_k\|}{\sqrt{k}} - \mathbb{E}\,\|Y\| \right| k^{-1/2} \\
&\leq \frac{D}{\sqrt{n}} + \frac{2 \varepsilon}{\sqrt{n}} \sum_{k = n_0 +1}^n k^{-1/2},
\end{align*}
for some constant $D$ and the $n_0$ chosen above.
Note also that $\lim_{n \to \infty} n^{-1/2} \sum_{k=1}^n k^{-1/2} =2$, which follows by monotonicity of $x \mapsto x^{-1/2}$:
$$2\left[ (n+1)^{1/2}-1 \right] = \int_1^{n+1} x^{-1/2}\,\textup{d} x \leq \sum_{k=1}^n k^{-1/2} \leq \int_0^n x^{-1/2}\,\textup{d} x = 2n^{1/2} .$$
Taking $n \to \infty$ in the displayed inequalities gives
$$\limsup_{n \to \infty} \left| \frac{\mathbb{E}\, L_n}{\sqrt{n}} -2\mathbb{E}\,\|Y\| \frac{1}{\sqrt{n}} \sum_{k=1}^n k^{-1/2} \right| \leq 4\varepsilon .$$
Since $\varepsilon > 0$ was arbitrary, it follows that
$$\lim_{n \to \infty} \left| \frac{\mathbb{E}\, L_n}{\sqrt{n}} -2\mathbb{E}\,\|Y\| \frac{1}{\sqrt{n}} \sum_{k=1}^n k^{-1/2} \right| =0 .$$
Therefore,
$$\lim_{n \to \infty} \frac{\mathbb{E}\, L_n}{\sqrt{n}} = \lim_{n \to \infty}2\mathbb{E}\,\|Y\| \frac{1}{\sqrt{n}} \sum_{k=1}^n k^{-1/2} = 4\mathbb{E}\,\|Y\|. \qedhere$$
\end{proof}
Cauchy's formula applied to the line segment from $0$ to $Y$ with Fubini's theorem implies $2 \mathbb{E}\, \| Y \| = \int_{\mathbb{S}_1} \mathbb{E}\, [ ( Y \cdot e )^+ ] \textup{d} e$.
Here $Y \cdot e = e^\tra Y$ is univariate normal
with mean $0$ and variance $e^\tra \Sigma e = \|\Sigma^{1/2} e\|^2$, so that $\mathbb{E}\,[ ( Y \cdot e)^+ ]$ is
$\|\Sigma^{1/2} e\|$ times one half of the mean of the square-root of a $\chi_1^2$ random variable. Hence
$$\mathbb{E}\, \| Y \| = ( 8 \pi)^{-1/2} \int_{\mathbb{S}_1} \|\Sigma^{1/2} e\|\, \textup{d} e ,$$
which in general may be expressed via a complete elliptic integral of the second kind
in terms of the ratio of the eigenvalues of $\Sigma$.
In the particular case $\Sigma = I$, $\mathbb{E}\, \| Y \| = \sqrt{\pi / 2}$ so
then Proposition \ref{limit of E L_n} implies that
\[
\lim_{n \to \infty} n^{-1/2}\mathbb{E}\, L_n = \sqrt{8 \pi}, \]
matching the formula
$\mathbb{E}\, \ell_1 = \sqrt{8 \pi}$ of Letac and Tak\'acs \cite{letac2,takacs} (see Lemma \ref{lem:letac} below).
We also note the bounds
\begin{equation}
\label{EL-bounds}
\pi^{-1/2} \sqrt{ \trace \Sigma } \leq \mathbb{E}\, \| Y \| \leq \sqrt{ \trace \Sigma } ;\end{equation}
the upper bound here is from Jensen's inequality and the fact that $\mathbb{E}\, [ \| Y\|^2 ] = \trace \Sigma$.
The lower bound in \eqref{EL-bounds} follows from the inequality
\[ \mathbb{E}\, \| Y \| \geq \sup_{e \in \mathbb{S}_1} \mathbb{E}\, | Y \cdot e |
= \sqrt{ 2/\pi } \sup_{e \in\mathbb{S}_1} ( \mathbb{V} {\rm ar} [ Y \cdot e ] )^{1/2} \]
together with the fact that
\[ \sup_{e \in \mathbb{S}_1} \mathbb{V} {\rm ar} [ Y \cdot e ]
= \sup_{e \in \mathbb{S}_1} \|\Sigma^{1/2} e\|^2
= \| \Sigma^{1/2} \|^2_{\rm op} = \| \Sigma \|_{\rm op} = \lambda_\Sigma \geq \frac{1}{2} \trace \Sigma ,
\]
where $\| \blob \|_{\rm op}$ is the matrix operator norm and $\lambda_\Sigma $ is the largest
eigenvalue of $\Sigma$;
in statistical terminology, $\lambda_\Sigma$ is the variance of the first principal component associated with $Y$.
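These identities are easy to check numerically. The sketch below (our own illustration, with an arbitrarily chosen positive definite $\Sigma$) compares a Monte Carlo estimate of $\mathbb{E}\,\|Y\|$ with the integral expression above, and confirms that both lie between the bounds \eqref{EL-bounds}.

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[2.0, 0.7], [0.7, 0.5]])   # an arbitrary positive definite covariance
Y = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)
mc = np.linalg.norm(Y, axis=1).mean()        # Monte Carlo estimate of E||Y||

# Integral expression: E||Y|| = (8 pi)^{-1/2} int_{S_1} ||Sigma^{1/2} e|| de,
# using ||Sigma^{1/2} e||^2 = e . Sigma e.
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
E = np.stack([np.cos(theta), np.sin(theta)])
integral = np.sqrt(np.einsum('ij,ij->j', E, Sigma @ E)).mean() * 2.0 * np.pi
formula = integral / np.sqrt(8.0 * np.pi)

tr = np.trace(Sigma)
print(mc, formula)                           # the two estimates agree
print(np.sqrt(tr / np.pi), np.sqrt(tr))      # lower and upper bounds from (EL-bounds)
```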
We now give a proof of the formula of Letac and Tak\'acs \cite{letac2,takacs}.
\begin{lemma}
\label{lem:letac}
Let $\ell_1 = {\mathcal{L}}(h_1)$ (see equation \eqref{eqn:def of Lt At for BM})
be the perimeter length of the convex hull of standard Brownian motion on $[0,1]$ in $\mathbb{R}^2$. Then $\mathbb{E}\, \ell_1 = \sqrt{8 \pi}$.
\end{lemma}
\begin{proof}
Applying Fubini's theorem (Lemma \ref{fubini}) to the Cauchy formula \eqref{cauchy0} for $\ell_1$,
$$\ell_1 = \int_0^{2 \pi} \sup_{t \in [0,1]}(b(t) \cdot {\bf e}_{\theta}) \, \textup{d} \theta ,$$
we have
\begin{align*}
\mathbb{E}\, \ell_1 &= \int_0^{2 \pi} \mathbb{E}\, \sup_{t \in [0,1]}(b(t) \cdot {\bf e}_{\theta}) \, \textup{d} \theta \\
&= 2\pi \mathbb{E}\, \sup_{t \in [0,1]} (b(t) \cdot {\bf e}_{\theta}), ~\text{since $b(t) \cdot {\bf e}_{\theta}$ is a standard $1$-dimensional Brownian motion,} \\
&= 2\pi \mathbb{E}\, \sup_{t \in [0,1]} w(t).
\end{align*}
Here $w(t)$ denotes a standard $1$-dimensional Brownian motion, as in Corollary \ref{reflection}. Then we have
\begin{align*}
\mathbb{E}\, \sup_{t \in [0,1]} w(t)
&= \int_0^{\infty} \mathbb{P} \left( \sup_{t \in [0,1]} w(t) >r \right)\textup{d} r \\
&= 2 \int_0^{\infty} \mathbb{P} \left( w(1) > r \right) \textup{d} r, \text{ by the reflection principle (Corollary \ref{reflection}),} \\
&= 2 \int_0^{\infty} \frac{\textup{d} r}{\sqrt{2 \pi}} \int_r^{\infty} e^{-y^2 /2} \,\textup{d} y \\
&= \sqrt{\frac{2}{\pi}} \int_0^\infty \textup{d} y \int_0^y e^{-y^2 /2}\, \textup{d} r, \text{ by changing the order of integration,} \\
&= \sqrt{\frac{2}{\pi}} .
\end{align*}
Hence $\mathbb{E}\, \ell_1 = 2\pi \sqrt{2/\pi} = \sqrt{8\pi}$, as claimed.
\end{proof}
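The key identity $\mathbb{E}\, \sup_{t \in [0,1]} w(t) = \sqrt{2/\pi}$ can also be checked by simulating discretized Brownian paths. This is only an illustrative sketch (step count, sample size and seed are arbitrary choices); note that the discrete-time maximum slightly underestimates the continuum supremum.

```python
import numpy as np

rng = np.random.default_rng(0)
steps, paths = 2000, 2000
# Brownian increments over [0, 1]: each is N(0, 1/steps).
incs = rng.standard_normal((paths, steps)) / np.sqrt(steps)
# Running maximum of each path; the 0.0 floor accounts for w(0) = 0.
sups = np.maximum(np.cumsum(incs, axis=1).max(axis=1), 0.0)
print(sups.mean(), np.sqrt(2 / np.pi))   # sample mean is close to sqrt(2/pi) ~ 0.798
```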
\pagestyle{myheadings} \markright{\sc Chapter 5}
\chapter{Asymptotics for perimeter length of the convex hull}
\label{chapter5}
\section{Overview}
\label{sec:5.1}
To start this chapter we discuss some simulations. We considered a specific form of random walk with increments satisfying
$Z_i - \mathbb{E}\, [Z_i] = (\cos \Theta_i, \sin \Theta_i)$, where $\Theta_i$ is uniformly distributed on $[0, 2\pi)$,
corresponding to a uniform distribution on the unit circle centred at $\mathbb{E}\, [Z_i] = \mu$.
We took one example with $\mu = {\bf 0}$, and two examples with $\mu \neq {\bf 0}$ of different magnitudes.
For the expected perimeter length, the simulations (see Figure \ref{fig:expper}) are consistent with the
Spitzer--Widom--Baxter result (see the argument below \eqref{SW formula}), Proposition \ref{limit of E L_n} and Proposition \ref{LLN for L_n}.
In the case $\mu = {\bf 0}$, the walk has $\Sigma = \frac{1}{2} I$, and the result in Proposition \ref{limit of E L_n} takes the form
$\lim_{n \to \infty} n^{-1/2}\mathbb{E}\, L_n = 4 \mathbb{E}\, \|Y\| = 2\sqrt{\pi} \approx 3.545$.
In the case $\mu \neq {\bf 0}$, the result in Proposition \ref{LLN for L_n} takes the form
$n^{-1} L_n \toas 2 \|\mu\|=0.4 \text{ or } 0.72 $.
\begin{figure}[h!]
\centering
\includegraphics[width=0.31\textwidth]{expper1} \,
\includegraphics[width=0.31\textwidth]{expper2} \,
\includegraphics[width=0.31\textwidth]{expper3}
\caption{Plots of $y = \mathbb{E}\,[L_n]$ estimates against $x =$ (left to right) $n^{1/2}$, $n$, $n$ for
about $25$ values of $n$ in the range $10^2$ to $2.5 \times 10^5$ for 3 examples
with $\|\mu\| =$ (left to right) $0$, $0.2$, $0.36$. Each point is
estimated from $10^3$ repeated simulations.
Also plotted are straight lines $y = 3.532 x$ (leftmost plot),
$y=0.40x$ (middle plot) and $y= 0.721x$ (rightmost plot).}
\label{fig:expper}
\end{figure}
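The drift-case simulations just described can be reproduced with a short script. Below is a minimal sketch (our own illustrative code, with arbitrarily chosen $n$ and seed), which approximates $L_n$ by discretizing Cauchy's formula and checks that $n^{-1} L_n$ is near $2\|\mu\|$ for $\|\mu\| = 0.2$.

```python
import numpy as np

def hull_perimeter(S, n_dirs=256):
    # Discretized Cauchy formula for the perimeter of hull(S_0, ..., S_n).
    theta = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    e = np.stack([np.cos(theta), np.sin(theta)])
    proj = S @ e
    return (proj.max(axis=0) - proj.min(axis=0)).mean() * np.pi

rng = np.random.default_rng(42)
n, mu = 20_000, np.array([0.2, 0.0])
Theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
Z = mu + np.stack([np.cos(Theta), np.sin(Theta)], axis=1)  # uniform on the unit circle about mu
S = np.vstack([np.zeros(2), np.cumsum(Z, axis=0)])
print(hull_perimeter(S) / n, 2 * np.linalg.norm(mu))       # n^{-1} L_n is close to 0.4
```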
For the variance of the perimeter length with drift, the result in Theorem \ref{thm1} takes the form
$\lim_{n \to \infty} n^{-1} \mathbb{V} {\rm ar}[L_n] = 4 \mathbb{E}\,[\cos^2 \Theta_1]=2$, and in Theorem \ref{thm2},
$(2n)^{-1/2}(L_n - \mathbb{E}\,[L_n])$ converges in distribution to a standard normal distribution.
The corresponding pictures in Figures \ref{fig:varper} and \ref{fig:normal} show agreement between the simulations and the theory.
In the zero drift case, the simulations (the leftmost plot in Figure \ref{fig:varper}) suggest that
$\lim_{n \to \infty} n^{-1} \mathbb{V} {\rm ar}[L_n]$ exists, but Figure \ref{fig:normal} does not appear to be consistent with a normal limiting distribution.
\begin{figure}[h!]
\centering
\includegraphics[width=0.31\textwidth]{varper1} \,
\includegraphics[width=0.31\textwidth]{varper2} \,
\includegraphics[width=0.31\textwidth]{varper3}
\caption{Plots of $y = \mathbb{V} {\rm ar}[L_n]$ estimates against $x = n$ for
the three examples described in Figure \ref{fig:expper}.
Also plotted are straight lines $y = 0.536 x$ (leftmost plot)
and $y=2x$ (other two plots).}
\label{fig:varper}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{normal}
\caption{Simulated histogram estimates for the distribution of
$\tfrac{L_n-\mathbb{E}\, [L_n]}{\sqrt{\mathbb{V} {\rm ar}[L_n]}}$
with $n=5 \times 10^3$
in the three examples described in Figure \ref{fig:expper}. Each histogram is compiled from $10^3$ samples.}
\label{fig:normal}
\end{figure}
We will show in
Proposition \ref{prop:var-limit-zero u0} that
$$\text{if } \mu = 0: ~~ \lim_{n \to \infty} n^{-1} \mathbb{V} {\rm ar} L_n = u_0 ( \Sigma ),$$
where $u_0( \blob )$ is finite and positive provided $\sigma^2 < \infty$.
For the constant $u_0(I)$ ($I$ being the identity matrix), Table \ref{table x1} gives numerical evaluations of the rigorous bounds that we prove in
Proposition \ref{prop:var_bounds u0} below, together with an estimate from simulations. See also Section~\ref{sec:exact evaluation of limiting variances} for an explicit integral expression for $u_0(I)$.
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c|ccc}
& lower bound & simulation estimate & upper bound \\
\hline
$u_0 ( I)$ & $2.65 \times 10^{-3}$ & 1.08 & 9.87 \\
\end{tabular}
\caption{The simulation estimate is
based on $10^5$ instances of a walk of length $n = 10^5$. The final decimal digit in the numerical upper (lower)
bounds has been rounded up (down).}
\label{table x1}
\end{table}
\section{Upper bound for the variance}
Assuming that $\mathbb{E}\, [ \| Z_1 \|^2 ] < \infty$, Snyder and Steele \cite{ss} obtained an upper bound for $\mathbb{V} {\rm ar} [ L_n]$
using Cauchy's formula together with a version of the Efron--Stein inequality.
Snyder and Steele's result (Theorem 2.3 of \cite{ss}) can be expressed as
\begin{equation}
\label{ssup} n^{-1} \mathbb{V} {\rm ar} [L_n] \leq \frac{\pi^2}{2}
\left( \mathbb{E}\, [ \| Z_1 \|^2 ] - \| \mathbb{E}\, [ Z_1 ] \|^2 \right)
, ~~~ (n \in \mathbb{N} := \{1,2,\ldots \} ) . \end{equation}
As far as we are aware, there are no lower bounds for $\mathbb{V} {\rm ar} [ L_n]$ in the literature.
According to the discussion in \cite[\S 5]{ss}, Snyder and Steele had ``no compelling reason to expect that $O(n)$ is the correct order of magnitude'' in their upper bound for $\mathbb{V} {\rm ar} [L_n]$,
and they speculated that perhaps $\mathbb{V} {\rm ar} [ L_n] = o(n)$ (maybe with a distinction between the cases of zero and non-zero drift).
Our first main result settles this question under minimal conditions, confirming
that \eqref{ssup} is indeed of the correct order, except in
certain degenerate cases, while demonstrating that the constant on the right-hand side of \eqref{ssup} is not, in general, sharp.
The first step towards the variance bounds is a
martingale difference argument, based on resampling members of the sequence
$Z_1, \ldots, Z_n$, to get an expression for $\mathbb{V} {\rm ar} [L_n]$ amenable to analysis: see Section \ref{sec:martingales}.
Let ${\mathcal{F}}_0$ denote the trivial $\sigma$-algebra, and for $n \in \mathbb{N}$ set ${\mathcal{F}}_n := \sigma (Z_1, \ldots, Z_n)$, the $\sigma$-algebra generated
by the first $n$ steps of the random walk. Then $S_n$ is ${\mathcal{F}}_n$-measurable, and for $n \in \mathbb{N}$ we can write
$L_n = \Lambda_n ( Z_1, \ldots, Z_n )$ for $\Lambda_n : \mathbb{R}^{2n} \to [0,\infty)$
a measurable function.
Let $Z_1', Z_2',\ldots$ be an independent copy of the sequence $Z_1, Z_2, \ldots$.
Fix $n \in \mathbb{N}$. For $i \in \{1,\ldots, n\}$, we `resample' the $i$th increment, replacing $Z_i$ with $Z_i'$, as follows.
Set
\begin{equation}
\label{resample}
S_j^{(i)} := \begin{cases} S_j & \textrm{ if } j < i \\
S_j - Z_i + Z_i' & \textrm{ if } j \geq i ;\end{cases} \end{equation}
then $(S_j^{(i)} ; 0 \leq j \leq n)$ is a modification of the random walk $(S_j ; 0 \leq j \leq n)$ that keeps every increment except
the $i$th, which is independently resampled. We let $L_n^{(i)}$ denote
the perimeter length of the convex hull of this modified walk, namely
$\mathop \mathrm{hull} ( S_0^{(i)}, \ldots, S_n^{(i)} )$,
i.e.,
\[ L_n^{(i)} := \Lambda_n (Z_1, \ldots, Z_{i-1}, Z'_i, Z_{i+1}, \ldots, Z_n ) .\]
For $i \in \{1,\ldots, n\}$, define
\begin{equation}
\label{dni}
D_{n, i} := \mathbb{E}\, [ L_n - L_n^{(i)}\mid {\mathcal{F}}_{i} ] ;\end{equation}
in other words, $-D_{n,i}$ is the expected change in the perimeter length of the convex hull,
given ${\mathcal{F}}_i$, on replacing $Z_i$ by $Z_i'$.
The point of this construction is the following result.
\begin{lemma}
\label{lem1}
Let $n \in \mathbb{N}$. Then (i) $L_n - \mathbb{E}\, [ L_n] = \sum_{i=1}^n D_{n,i}$; and (ii)
$\mathbb{V} {\rm ar} [ L_n ] = \sum_{i=1}^n \mathbb{E}\, [ D_{n,i}^2 ]$, whenever the latter sum is finite.
\end{lemma}
\begin{proof}
Take $W_n = L_n$ in Lemma \ref{resampling}; the results follow.
\end{proof}
\begin{remark} \label{remark: upper bound for var Ln}
Lemma \ref{lem1} with the conditional Jensen's inequality gives the bound
$$ \mathbb{V} {\rm ar}[L_n] \leq \sum_{i=1}^n \mathbb{E}\,\left[\left(L_n^{(i)} - L_n \right)^2\right] ,$$
which is a factor of $2$ larger than the upper bound obtained from the Efron--Stein inequality:
$\mathbb{V} {\rm ar}[L_n] \leq 2^{-1} \sum_{i=1}^n \mathbb{E}\,\left[ (L_n^{(i)}-L_n)^2 \right]$ (see equation (2.3) in \cite{ss}).
\end{remark}
Let ${\bf e}_\theta = (\cos \theta, \sin \theta)$ be the unit vector in direction $\theta \in (-\pi, \pi]$.
For $\theta \in [0,\pi]$, define
\[ M_n (\theta) := \max_{0 \leq j \leq n} ( S_j \cdot {\bf e}_\theta ) , \textrm{ and } m_n (\theta) := \min_{0 \leq j \leq n} ( S_j \cdot {\bf e}_\theta ) .\]
Note that since $S_0 = {\bf 0}$, we have $M_n (\theta) \geq 0$ and $m_n (\theta) \leq 0$, a.s.
In the present setting (see equation \eqref{cauchy_}), Cauchy's formula for convex sets yields
\[ L_n = \int_0^\pi \left( M_n (\theta) - m_n (\theta) \right) \textup{d} \theta = \int_0^\pi R_n (\theta) \textup{d} \theta ,\]
where $R_n (\theta) := M_n (\theta) - m_n (\theta) \geq 0$ is the {\em parametrized range function}. Similarly, when the $i$th increment
is resampled,
\[ L_n^{(i)} = \int_0^\pi \left( M^{(i)}_n (\theta) - m^{(i)}_n (\theta) \right) \textup{d} \theta = \int_0^\pi R^{(i)}_n (\theta) \textup{d} \theta ,\]
where $R_n^{(i)} (\theta ) = M^{(i)}_n (\theta) - m^{(i)}_n (\theta)$, defining
\[ M^{(i)}_n (\theta) := \max_{0 \leq j \leq n} ( S^{(i)}_j \cdot {\bf e}_\theta ) , \textrm{ and } m^{(i)}_n (\theta) := \min_{0 \leq j \leq n} ( S^{(i)}_j \cdot {\bf e}_\theta ) .\]
Thus to study $D_{n,i} = \mathbb{E}\, [ L_n - L_n^{(i)} \mid {\mathcal{F}}_i]$ we will consider
\begin{equation}
\label{cauchy}
L_n - L_n^{(i)} = \int_0^\pi \left( R_n (\theta) -R^{(i)}_n (\theta) \right) \textup{d} \theta
= \int_0^\pi \Delta^{(i)}_{n} (\theta) \textup{d} \theta,\end{equation}
where $\Delta^{(i)}_{n} (\theta) := R_n (\theta) - R^{(i)}_n (\theta)$.
For $\theta \in [0,\pi]$, let
\[ \ubar J_{n} (\theta) := \argmin_{0 \leq j \leq n } ( S_j \cdot {\bf e}_\theta ) , \textrm{ and } \bar J_{n} (\theta) := \argmax_{0 \leq j \leq n } ( S_j \cdot {\bf e}_\theta ) ,\]
so $m_n (\theta) = S_{\ubar J_n (\theta)} \cdot {\bf e}_\theta$ and
$M_n (\theta) = S_{\bar J_n (\theta)} \cdot {\bf e}_\theta$.
Similarly, recalling \eqref{resample}, define
\[ \ubar J^{(i)}_{n} (\theta) := \argmin_{0 \leq j \leq n } ( S^{(i)}_j \cdot {\bf e}_\theta ) , \textrm{ and } \bar J^{(i)}_{n} (\theta) := \argmax_{0 \leq j \leq n } ( S^{(i)}_j \cdot {\bf e}_\theta ) .\]
(Apply the following conventions in the event of ties: $\argmin$ takes the maximum argument
among tied values, and $\argmax$ the minimum.)
We will use the following simple bound repeatedly in the arguments that follow.
This upper bound for $| \Delta_n ^{(i)} (\theta ) |$ is also given in Lemma 2.1 of \cite{ss}, but we give a different proof here.
\begin{lemma}
Almost surely, for any $\theta \in [0,\pi]$ and any $i \in \{ 1,2 ,\ldots , n\}$,
\begin{equation}
\label{deltabound}
| \Delta_n ^{(i)} (\theta ) | \leq | (Z_i - Z_i') \cdot {\bf e}_{\theta}| \leq \| Z_i\| + \| Z_i ' \| . \end{equation}
\end{lemma}
\begin{proof}
Consider the effect on $S_k \cdot {\bf e}_{\theta}$ when $Z_i$ is replaced by $Z_i'$. If $i > k$, then $S_k \cdot {\bf e}_{\theta} = S_k^{(i)} \cdot {\bf e}_{\theta}$.
If $i \leq k$, then $S_k \cdot {\bf e}_{\theta} = S_k^{(i)} \cdot {\bf e}_{\theta} + (Z_i - Z_i') \cdot {\bf e}_{\theta}$. Hence, for all $k$,
$$ S_k \cdot {\bf e}_{\theta} \leq S_k^{(i)} \cdot {\bf e}_{\theta} + ((Z_i - Z_i') \cdot {\bf e}_{\theta} \vee 0 ) .$$
Therefore,
$$ \max_{0 \leq k \leq n} S_k \cdot {\bf e}_{\theta} \leq \max_{0 \leq k \leq n} S_k^{(i)} \cdot {\bf e}_{\theta} + ((Z_i - Z_i') \cdot {\bf e}_{\theta} \vee 0 ) .$$
Similarly, we have
$$ \min_{0 \leq k \leq n} S_k \cdot {\bf e}_{\theta} \geq \min_{0 \leq k \leq n} S_k^{(i)} \cdot {\bf e}_{\theta} + ((Z_i - Z_i') \cdot {\bf e}_{\theta} \wedge 0 ) .$$
Subtracting the second of these inequalities from the first, we get
\begin{align*}
R_n(\theta) - R_n^{(i)}(\theta)
& \leq ((Z_i - Z_i') \cdot {\bf e}_{\theta} \vee 0) - ((Z_i - Z_i') \cdot {\bf e}_{\theta} \wedge 0 ) \\
& = |(Z_i - Z_i') \cdot {\bf e}_{\theta}| .
\end{align*}
Similarly, $R_n^{(i)}(\theta) - R_n(\theta) \leq |(Z_i' - Z_i) \cdot {\bf e}_{\theta}|$, which proves the first inequality in \eqref{deltabound}; the second follows from the Cauchy--Schwarz and triangle inequalities.
\end{proof}
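The pointwise bound \eqref{deltabound} is easy to verify numerically: resample each increment in turn and compare the change in the range function against $|(Z_i - Z_i') \cdot {\bf e}_\theta|$. The following sketch (with arbitrarily chosen walk length, test directions and seed) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
theta = rng.uniform(0.0, np.pi, size=50)           # random test directions
e = np.stack([np.cos(theta), np.sin(theta)])       # e_theta, shape (2, 50)
Z = rng.standard_normal((n, 2))                    # increments Z_1, ..., Z_n
Zp = rng.standard_normal((n, 2))                   # independent copies Z_1', ..., Z_n'
S = np.vstack([np.zeros(2), np.cumsum(Z, axis=0)])

def range_fn(S, e):
    # R_n(theta) = M_n(theta) - m_n(theta), for each direction in e.
    proj = S @ e
    return proj.max(axis=0) - proj.min(axis=0)

ok = True
for i in range(n):
    Zi = Z.copy()
    Zi[i] = Zp[i]                                  # resample one increment
    Si = np.vstack([np.zeros(2), np.cumsum(Zi, axis=0)])
    delta = range_fn(S, e) - range_fn(Si, e)       # Delta_n^{(i)}(theta)
    ok = ok and bool(np.all(np.abs(delta) <= np.abs((Z[i] - Zp[i]) @ e) + 1e-9))
print(ok)  # True
```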
The following is Lemma 2.2 in \cite{ss}.
\begin{lemma}
For all $1 \leq i \leq n$, $$\mathbb{E}\,\left[\left( \int_0^{\pi} \left|(Z_i- Z_i') \cdot {\bf e}_{\theta}\right| \textup{d} \theta \right)^2 \right]
\leq \pi^2 \left(\mathbb{E}\,\|Z_1\|^2 - \|\mu\|^2 \right)= \pi^2 \sigma^2 .$$
\end{lemma}
\begin{proof}
By the Cauchy--Schwarz inequality, we have
$$\mathbb{E}\,\left[\left( \int_0^{\pi} \left|(Z_i- Z_i') \cdot {\bf e}_{\theta}\right| \textup{d} \theta \right)^2 \right] \leq \pi \, \mathbb{E}\,\left(\int_0^{\pi}\left|(Z_i- Z_i') \cdot {\bf e}_{\theta}\right|^2 \textup{d} \theta \right) .$$
Then, since $Z_i$ and $Z_i'$ are independent and identically distributed,
\begin{align*}
\mathbb{E}\, \left[ | Z_i \cdot {\bf e}_{\theta} - Z_i' \cdot {\bf e}_{\theta} |^2 \right]
& = \mathbb{E}\, \left[ (Z_i \cdot {\bf e}_{\theta})^2 \right] + \mathbb{E}\, \left[ (Z_i' \cdot {\bf e}_{\theta})^2 \right] - 2\mathbb{E}\, \left[(Z_i \cdot {\bf e}_{\theta})(Z_i' \cdot {\bf e}_{\theta}) \right] \\
& = 2 \mathbb{V} {\rm ar} [Z_1 \cdot {\bf e}_{\theta}] \\
& = 2 \left(\sigma^2_{\mu} \cos^2\theta + \sigma^2_{\mu_\per} \sin^2\theta + 2 \cos \theta \sin \theta \rho_{\mu \mu_\per} \sigma_{\mu} \sigma_{\mu_\per} \right) ,
\end{align*}
where $\rho_{\mu \mu_\per}$ is the correlation of $(Z_1-\mu) \cdot \hat\mu$ and $(Z_1-\mu) \cdot \hat\mu_\per$.
So,
\begin{align*}
\mathbb{E}\, \int_0^{\pi} \left|(Z_i- Z_i') \cdot {\bf e}_{\theta}\right|^2 \textup{d} \theta & = 2 \left( \sigma^2_{\mu} \int_0^\pi \cos^2 \theta\,\textup{d} \theta + \sigma^2_{\mu_\per} \int_0^\pi \sin^2 \theta \,\textup{d} \theta \right) \\
&\quad + 4 \rho_{\mu \mu_\per} \sigma_\mu \sigma_{\mu_\per} \int_0^\pi \cos \theta \sin \theta \,\textup{d} \theta \\
& = \pi(\sigma^2_{\mu} + \sigma^2_{\mu_\per}) = \pi \sigma^2 .
\end{align*}
Combining the two displays proves the lemma.
\end{proof}
The next result is a version of Theorem 2.3 in \cite{ss}; like theirs, our proof uses the Efron--Stein inequality.
\begin{proposition} \label{upper bound for Var L_n}
Suppose that $\mathbb{E}\,(\|Z_1\|^2) < \infty$. Then
\begin{equation} \label{eq:ss}
\mathbb{V} {\rm ar}(L_n) \leq \frac{\pi^2 \sigma^2}{2} n.
\end{equation}
\end{proposition}
\begin{proof}
By Lemma \ref{lem:efron-stein} and equations \eqref{cauchy} and \eqref{deltabound},
\begin{align*}
\mathbb{V} {\rm ar}[L_n]
& \leq \frac{1}{2} \sum_{i=1}^n \mathbb{E}\,\left[\left(\int_0^\pi \Delta^{(i)}_{n} (\theta) \textup{d} \theta\right)^2 \right] \\
& \leq \frac{1}{2} \sum_{i=1}^n \mathbb{E}\,\left[\left(\int_0^\pi \left|(Z_i- Z_i') \cdot {\bf e}_{\theta}\right| \textup{d} \theta\right)^2 \right] \\
& \leq \frac{1}{2} \sum_{i=1}^n \pi^2 \sigma^2 \\
& = \frac{n \pi^2 \sigma^2}{2} ,
\end{align*}
using the preceding lemma for each $i$.
\end{proof}
\section{Law of large numbers}
As mentioned in Remark \ref{remarks of EL with drift}, Snyder and Steele \cite{ss} established the asymptotic behaviour of $L_n /n$.
They state their law of large numbers only for $\mu \neq 0$, but the case $\mu = 0$ works equally well.
Here we give a different proof of the law of large numbers, using the variance bound.
\begin{proposition} \label{LLN for L_n}
If $\mathbb{E}\, (\|Z_1\|^2) < \infty$, then $n^{-1} L_n \to 2 \|\mu\| {\ \mathrm{a.s.}}$ as $n \to \infty$.
\end{proposition}
\begin{proof}
We have $n^{-1} \mathbb{E}\, L_n \to 2\|\mu\| $ by Proposition \ref{EL with drift}, and
the variance bound $\mathbb{V} {\rm ar} L_n \leq C n$ by Proposition \ref{upper bound for Var L_n}. Chebyshev's inequality gives, for any $\varepsilon>0$,
$$\mathbb{P}\left(\left| \frac{L_n}{n} - \frac{\mathbb{E}\, L_n}{n} \right|> \varepsilon \right) \leq \frac{\mathbb{V} {\rm ar} (n^{-1} L_n)}{\varepsilon^2} \leq \frac{C}{\varepsilon^2 n}.$$
Take $n = n_k = k^2$; then
$$\sum_{k=1}^{\infty} \mathbb{P} \left(\left| \frac{L_{n_k}}{n_k} - \frac{\mathbb{E}\, L_{n_k}}{n_k} \right|> \varepsilon \right) \leq \frac{C}{\varepsilon^2} \sum_{k=1}^{\infty} \frac{1}{k^2} < \infty.$$
So the Borel--Cantelli lemma (see Lemma \ref{borel-cantelli}) implies that $|n_k^{-1} L_{n_k} - n_k^{-1}\mathbb{E}\, L_{n_k} | \to 0 {\ \mathrm{a.s.}}$ as $k \to \infty$.
Hence
$$ \left| \frac{L_{n_k}}{n_k} - 2\|\mu\| \right| \leq \left| \frac{L_{n_k}}{n_k} - \frac{\mathbb{E}\, L_{n_k}}{n_k} \right| + \left|\frac{\mathbb{E}\, L_{n_k}}{n_k} -2\|\mu\| \right| \to 0 {\ \mathrm{a.s.}} {\ \mathrm{as}\ } k \to \infty. $$
For any $n$, let $k=\lfloor \sqrt{n} \rfloor$. Then $n_k \leq n < n_{k+1}$.
Since $L_n$ is non-decreasing in $n$ by \eqref{L_monotone}, we have
$$\frac{L_n}{n} \leq \frac{L_{n_{k+1}}}{n} \leq \frac{L_{n_{k+1}}}{n_{k+1}} \cdot \frac{n_{k+1}}{n} \leq \frac{L_{n_{k+1}}}{n_{k+1}} \cdot \frac{n_{k+1}}{n_k} ,$$
and also
$$\frac{L_n}{n} \geq \frac{L_{n_{k}}}{n} \geq \frac{L_{n_k}}{n_k} \cdot \frac{n_k}{n} \geq \frac{L_{n_k}}{n_k} \cdot \frac{n_k}{n_{k+1}} .$$
Then as $n \to \infty$, $k \to \infty$ so
$$\frac{L_{n_k}}{n_k} \overset{a.s.}{\to} 2\|\mu\| \quad \mbox{and} \quad \frac{n_k}{n_{k+1}} = \frac{(\lfloor \sqrt{n} \rfloor)^2}{(\lfloor \sqrt{n} \rfloor+1)^2} \to 1 .$$
Therefore $n^{-1} L_n \to 2 \|\mu\| $ a.s.
\end{proof}
Proposition \ref{LLN for L_n} says that if $\mathbb{E}\,[ \| Z_1 \|^2 ] < \infty$ and $\mu =0$, then $n^{-1} L_n \to 0$ a.s. But Proposition
\ref{upper bound of E L_n} says that $\mathbb{E}\, L_n = O ( n^{1/2} )$, so we might expect to be able to improve on this `law of large numbers'. Indeed,
we have the following.
\begin{proposition} \label{2LLN for L_n}
Suppose $\mathbb{E}\,[ \| Z_1 \|^2 ] < \infty$.
\begin{enumerate}[(i)]
\item
For any $\alpha > 1/2$, as $n \to \infty$,
\[ \frac{L_n - \mathbb{E}\, L_n}{n^{\alpha} } \to 0, \text{ in probability}. \]
\item If, in addition,
$\mu =0$, then for any $\alpha > 1/2$, $n^{-\alpha} L_n \to 0$ a.s.\ as $n \to \infty$.
\end{enumerate}
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{LLN for L_n}, Chebyshev's inequality gives, for $\varepsilon >0$,
\begin{equation}
\label{eqc1}
\mathbb{P} \left( \frac{| L_n - \mathbb{E}\, L_n |}{n^\alpha} > \varepsilon \right) \leq \frac{C}{\varepsilon^2} n^{1-2\alpha} .\end{equation}
The right-hand side here tends to $0$ as $n \to \infty$ provided $\alpha > 1/2$, giving (i).
For part (ii), take $n = n_k = 2^k$ in \eqref{eqc1}. Then
\[ \sum_{k=1}^\infty \mathbb{P} \left( \frac{| L_{n_k} - \mathbb{E}\, L_{n_k} |}{n_k^\alpha} > \varepsilon \right) < \infty ,\]
provided $\alpha > 1/2$. So
\[ \lim_{k \to \infty} \frac{| L_{n_k} - \mathbb{E}\, L_{n_k} |}{n_k^\alpha} = 0, {\ \mathrm{a.s.}} \]
But
\[ \lim_{k \to \infty} \frac{ \mathbb{E}\, L_{n_k} }{n_k^\alpha} = \lim_{n \to \infty} \frac{ \mathbb{E}\, L_{n} }{n^\alpha} = 0 ,\]
by Proposition \ref{upper bound of E L_n}, and hence
\[ \lim_{k \to \infty} \frac{L_{n_k}}{n_k^\alpha} = 0, {\ \mathrm{a.s.}} \]
For every positive integer $n$, there exists $k(n) \in \mathbb{N}$ for which $2^{k(n)} \leq n < 2^{k(n)+1}$
and $k(n) \to \infty$ as $n \to \infty$. Hence, by \eqref{L_monotone},
\[ \frac{L_n}{n^\alpha} \leq \frac{L_{2^{k(n)+1}}}{(2^{k(n)})^\alpha} = 2^\alpha \frac{L_{2^{k(n)+1}}}{(2^{k(n)+1})^\alpha} ,\]
which tends to $0$ a.s.\ as $n \to \infty$.
\end{proof}
Moreover, $(L_n - \mathbb{E}\, L_n)n^{-\alpha}$ in Proposition \ref{2LLN for L_n}(i) also converges to $0$ almost surely if we assume that $\|Z_1\|$ is bounded by some constant. To show this, we use the Azuma--Hoeffding inequality (see Lemma \ref{azuma-hoeffding}).
\begin{lemma} \label{p(L_n-EL_n)}
Assume $\|Z_1\| \leq B$ a.s.\ for some constant $B$. Then, for any $t>0$,
$$\mathbb{P}\left(| L_n - \mathbb{E}\, L_n | > t \right) \leq 2 \exp\left( - \frac{t^2}{8 \pi^2 B^2 n}\right). $$
\end{lemma}
\begin{proof}
Let $D_{n,i}=\mathbb{E}\,[L_n - L_n^{(i)} \mid {\mathcal{F}}_i]$, where ${\mathcal{F}}_0$ denotes the trivial $\sigma$-algebra and, for $i \in \mathbb{N}$, ${\mathcal{F}}_i = \sigma(Z_1,\dots,Z_i)$ is the $\sigma$-algebra generated by the first $i$ steps of the random walk. So $D_{n,i}$ is ${\mathcal{F}}_i$-measurable.
Since $L_n^{(i)}$ is independent of $Z_i$, $$\mathbb{E}\, [L_n^{(i)} \mid {\mathcal{F}}_i] = \mathbb{E}\, [L_n^{(i)} \mid {\mathcal{F}}_{i-1}] = \mathbb{E}\, [L_n \mid {\mathcal{F}}_{i-1}],$$ so that
$D_{n,i}=\mathbb{E}\, [L_n \mid {\mathcal{F}}_{i}] - \mathbb{E}\, [L_n \mid {\mathcal{F}}_{i-1}]$. Hence $\mathbb{E}\, [D_{n,i} \mid {\mathcal{F}}_{i-1}]=0$.
Using equation \eqref{deltabound} and the assumption that $\|Z_1\| \leq B$ a.s., we can deduce an upper bound for $|D_{n,i}|$:
$$ |D_{n,i}| \leq \mathbb{E}\, \left[ \int_0^{\pi} |\Delta_n^{(i)}(\theta)| \textup{d} \theta \,\Big|\, {\mathcal{F}}_{i}\right] \leq \pi \left( \|Z_i\| + \mathbb{E}\, \|Z_i'\| \right) \leq 2\pi B .$$
Hence, the result follows from Lemma \ref{azuma-hoeffding} with $d_{\infty}=2 \pi B$.
\end{proof}
\begin{proposition}
Suppose $\| Z_1 \| \leq B$ a.s.\ for some constant $B$. Then for any $\alpha > 1/2$,
$$\frac{L_n - \mathbb{E}\, L_n}{n^{\alpha}} \to 0 {\ \mathrm{a.s.}} $$
\end{proposition}
\begin{proof}
The result follows from Lemma \ref{p(L_n-EL_n)} together with the Borel--Cantelli lemma (see Lemma \ref{borel-cantelli}).
\end{proof}
\section{Central limit theorem for the non-zero drift case}
\label{sec:CLT for drift}
\subsection{Control of extrema}
\label{sec:extrema}
For the remainder of this section, without loss of generality, we
suppose that $\mathbb{E}\, [ Z_1 ] = \mu {\bf e}_{\pi/2}$
with $\mu \in (0,\infty)$.
Observe that $( S_j \cdot {\bf e}_\theta ; 0 \leq j \leq n)$ is a one-dimensional random walk: indeed,
$S_j \cdot {\bf e}_\theta = \sum_{k=1}^j Z_k \cdot {\bf e}_\theta$. The mean drift of this one-dimensional random walk is
\begin{equation}
\label{mutheta}
\mathbb{E}\, [ Z_1 \cdot {\bf e}_\theta ] = \mathbb{E}\, [ Z_1] \cdot {\bf e}_\theta = \mu \sin \theta .
\end{equation}
Note that the drift $\mu \sin \theta$ is positive if $\theta \in (0, \pi)$.
This crucial fact gives us control over the behaviour of the extrema such as $M_n (\theta)$ and $m_n (\theta)$
that contribute to \eqref{cauchy}, and this will allow us to estimate the conditional expectation of the final term in \eqref{cauchy}
(see Lemma \ref{estimate} below).
For $\gamma \in (0,1/2)$ and $\delta \in (0,\pi/2)$
(two constants that will be chosen to be suitably small later in our arguments), we denote by $E_{n,i} (\delta, \gamma)$ the event that the following occur:
\begin{itemize}
\item for all $\theta \in [\delta, \pi - \delta ]$,
$\ubar J_{n} (\theta) < \gamma n$ and $\bar J_{n} (\theta) > (1 - \gamma) n$;
\item for all $\theta \in [\delta, \pi - \delta ]$,
$\ubar J^{(i)}_{n} (\theta) < \gamma n$ and $\bar J^{(i)}_{n} (\theta) > (1 - \gamma) n$.
\end{itemize}
We write $E^{\mathrm{c}}_{n,i} (\delta, \gamma)$ for the complement of $E_{n,i} (\delta, \gamma)$. The idea is that $E_{n,i} (\delta, \gamma)$ will occur with high probability, and on this event
we have good control over $\Delta^{(i)}_{n} (\theta)$.
The next result formalizes these assertions. For $\gamma \in (0,1/2)$, define $I_{n, \gamma} := \{1,\ldots, n\} \cap [ \gamma n , (1-\gamma) n ]$.
\begin{lemma}
\label{En}
For any $\gamma \in (0,1/2)$ and any $\delta \in (0,\pi/2)$, the following hold.
\begin{itemize}
\item[(i)]
If $i \in I_{n,\gamma}$, then, a.s., for any $\theta \in [\delta, \pi - \delta ]$,
\begin{equation}
\label{change}
\Delta_n^{(i)} (\theta) {\,\bf 1} ( E_{n,i}(\delta,\gamma) )
= ( Z_i - Z'_i ) \cdot {\bf e}_\theta {\,\bf 1} ( E_{n,i}(\delta,\gamma) ) .
\end{equation}
\item[(ii)] If $\mathbb{E}\, \| Z_1 \| < \infty$ and $\| \mathbb{E}\, [ Z_1]\| \neq 0$, then
$\min_{1 \leq i \leq n} \mathbb{P} [ E_{n,i} (\delta, \gamma) ] \to 1$ as $n \to \infty$.
\end{itemize}
\end{lemma}
\begin{proof}
First we prove part (i).
Suppose that $i \in I_{n,\gamma}$, so $\gamma n \leq i \leq (1-\gamma) n$.
Suppose that $\theta \in [\delta, \pi-\delta ]$.
Then on $E_{n,i} (\delta,\gamma)$, we have $\ubar J_n (\theta) < i < \bar J_n (\theta)$ and
$\ubar J_n^{(i)} (\theta) < i < \bar J_n^{(i)} (\theta)$.
Then from (\ref{resample}) it follows that in fact
$\ubar J_n (\theta) =\ubar J_n^{(i)} (\theta)$
and $\bar J_n (\theta) =\bar J_n^{(i)} (\theta)$. Hence
$m_n (\theta) = m_n^{(i)} (\theta)$ and
$$M_n^{(i)} (\theta) = S^{(i)}_{\bar J_n (\theta)} \cdot {\bf e}_\theta = M_n (\theta) + (Z_i'-Z_i) \cdot {\bf e}_\theta, \text{ by (\ref{resample}).}$$
Equation (\ref{change}) follows.
Next we prove part (ii). Suppose that $\mu = \| \mathbb{E}\, [ Z_1 ] \| >0$.
Since $\mathbb{E}\, \| Z_1 \| < \infty$, the strong law of large
numbers implies that $\| n^{-1} S_n - \mathbb{E}\, [ Z_1 ] \| \to 0$, a.s.,
as $n \to \infty$. In other words, for any $\varepsilon_1 >0$, there
exists $N:= N (\varepsilon_1)$ such that $\mathbb{P} [ N < \infty ] =1$
and
$ \| n^{-1} S_n - \mathbb{E}\, [Z_1] \| < \varepsilon_1$ for all $n \geq N$.
In particular, for $n \geq N$, by (\ref{mutheta}),
\begin{equation}
\label{project}
\left| n^{-1} S_n \cdot {\bf e}_\theta - \mu \sin \theta \right|
=
\left| n^{-1} S_n \cdot {\bf e}_\theta - \mathbb{E}\, [ Z_1] \cdot {\bf e}_\theta \right|
\leq \left\| n^{-1} S_n - \mathbb{E}\, [Z_1] \right\| < \varepsilon_1 ,\end{equation}
for all $\theta \in [0, 2\pi)$.
Take $\varepsilon_1 < \mu \sin \delta$.
If $n \geq N$, then, by (\ref{project}),
\[ S_n \cdot {\bf e}_\theta > ( \mu \sin \theta - \varepsilon_1 ) n \geq ( \mu \sin \delta - \varepsilon_1 ) n,\]
provided $\theta \in [\delta, \pi-\delta]$. By choice
of $\varepsilon_1$, the last term in the previous display is strictly
positive. Hence, for $n \geq N$, for any $\theta \in [\delta, \pi-\delta]$,
$S_n \cdot {\bf e}_\theta >0$. But, $S_0 \cdot {\bf e}_\theta =0$. So
$\ubar J_n (\theta) < N$ for all $\theta \in [\delta, \pi-\delta]$, and
\[ \mathbb{P} \left[ \cap_{\theta \in [\delta, \pi-\delta]}
\{ \ubar J_n (\theta) < \gamma n \} \right]
\geq \mathbb{P} [ N < \gamma n ] \to 1 ,\]
as $n \to \infty$, since $N < \infty$ a.s.
Now,
\begin{equation}
\label{max}
\max_{0 \leq j \leq (1-\gamma) n } S_j \cdot {\bf e}_\theta \leq
\max \left\{ \max_{0 \leq j \leq N} S_j \cdot {\bf e}_\theta , \max_{N \leq j \leq (1-\gamma) n }
S_j \cdot {\bf e}_\theta \right\} .\end{equation}
For the final term on the right-hand side of (\ref{max}), (\ref{project}) implies that
\[ \max_{N \leq j \leq (1-\gamma) n }
S_j \cdot {\bf e}_\theta
\leq \max_{0 \leq j \leq (1-\gamma) n } ( \mu \sin \theta + \varepsilon_1 ) j
\leq ( \mu \sin \theta + \varepsilon_1 ) (1-\gamma ) n .\]
On the other hand, if $n \geq N$, then (\ref{project}) implies that
$S_n \cdot {\bf e}_\theta \geq ( \mu \sin \theta - \varepsilon_1 ) n$.
Here
$$\mu \sin \theta - \varepsilon_1 \geq ( \mu \sin \theta + \varepsilon_1 )(1-\gamma)
\text{ if } \varepsilon_1 < \frac{\gamma \mu \sin \theta}{2-\gamma}.$$
Now we choose $\varepsilon_1 < \frac{\gamma \mu \sin \delta}{2}$.
Then, for any $\theta \in [\delta, \pi-\delta]$, we have that,
for $n \geq N$,
\[ S_n \cdot {\bf e}_\theta > \max_{N \leq j \leq (1-\gamma) n }
S_j \cdot {\bf e}_\theta .\]
Hence, by (\ref{max}),
\begin{align*}
\mathbb{P} \left[ \cap_{\theta \in [\delta, \pi-\delta]}
\{ \bar J_n (\theta) > (1-\gamma) n \} \right]
\geq \mathbb{P} \left[
\cap_{\theta \in [\delta, \pi-\delta]} \left\{
S_n \cdot {\bf e}_\theta > \max_{0 \leq j \leq (1-\gamma) n } S_j \cdot {\bf e}_\theta
\right\} \right] \\
\geq
\mathbb{P} \left[ N \leq n , \, \cap_{\theta \in [\delta, \pi-\delta]} \left\{
S_n \cdot {\bf e}_\theta > \max_{0 \leq j \leq N } S_j \cdot {\bf e}_\theta
\right\} \right] . \end{align*}
Also, for $n \geq N$, $S_n \cdot {\bf e}_\theta > (1- \frac{\gamma}{2}) \mu n \sin \delta$,
so we obtain
\begin{align*} \mathbb{P} \left[ \cap_{\theta \in [\delta, \pi-\delta]}
\{ \bar J_n (\theta) > (1-\gamma) n \} \right]
\geq
\mathbb{P} \left[ N \leq n , \, \max_{0 \leq j \leq N} \| S_j \| \leq \left(1- \frac{\gamma}{2}\right) \mu n \sin \delta \right] ,
\end{align*}
using the fact that $\max_{0 \leq j \leq N} S_j \cdot {\bf e}_\theta
\leq \max_{0 \leq j \leq N} \| S_j \|$ for all $\theta$.
Now, as $n \to \infty$,
$\mathbb{P} [ N > n ] \to 0$, and
\[ \mathbb{P} \left[ \max_{0 \leq j \leq N} \| S_j \| > \left(1- \frac{\gamma}{2}\right) \mu n \sin \delta \right] \to 0,\]
since $N < \infty$ a.s.
So we conclude that
\[ \mathbb{P} \left[ \cap_{\theta \in [\delta, \pi-\delta]}
\{ \ubar J_n (\theta) < \gamma n , \, \bar J_n (\theta) > (1-\gamma) n \} \right] \to 1,\]
as $n \to \infty$, and the same result holds for $\ubar J_n^{(i)} (\theta)$
and $\bar J_n^{(i)} (\theta)$, uniformly in $i \in \{1,\ldots,n\}$,
since resampling $Z_i$ does not change the distribution of the trajectory.
\end{proof}
\subsection{Approximation for the martingale differences}
The following result is a key component of our proof. Recall that $D_{n,i} = \mathbb{E}\, [ L_n - L_n^{(i)} \mid {\mathcal{F}}_i ]$.
\begin{lemma}
\label{estimate}
Suppose that $\mathbb{E}\, \| Z_1 \| < \infty$, $\gamma \in (0,1/2)$, and $\delta \in (0, \pi/2)$.
For any $i \in I_{n,\gamma}$,
\begin{align}
\label{eq2}
\left| D_{n,i} - \frac{2 ( Z_i - \mathbb{E}\, [ Z_1] ) \cdot \mathbb{E}\, [ Z_1]}{\| \mathbb{E}\, [ Z_1] \|}
\right|
& \leq
4 \delta \| Z_i \| + 4 \delta \mathbb{E}\, \| Z_1\| + 3 \pi \| Z_i\| \mathbb{P} [ E_{n,i}^{\mathrm{c}}(\delta, \gamma) \mid {\mathcal{F}}_i ] \nonumber \\
&{} \quad {} + 3 \pi \mathbb{E}\, [ \| Z_i'\| {\,\bf 1} ( E_{n,i}^{\mathrm{c}}(\delta, \gamma) ) \mid {\mathcal{F}}_i ]
, {\ \mathrm{a.s.}}
\end{align}
\end{lemma}
\begin{proof}
Taking (conditional) expectations
in (\ref{cauchy}), we obtain
\begin{equation}
\label{eq4}
D_{n,i}
= \int_0^\pi \mathbb{E}\, [ \Delta_n^{(i)} (\theta) {\,\bf 1} (E_{n,i} (\delta, \gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta + \int_0^\pi \mathbb{E}\, [ \Delta_n^{(i)} (\theta) {\,\bf 1} (E_{n,i} ^{\mathrm{c}} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta .\end{equation}
For the second term on the right-hand side of (\ref{eq4}), we have
\begin{align}
\label{eq5}
\left| \int_0^\pi \mathbb{E}\, [ \Delta_n^{(i)} (\theta) {\,\bf 1} (E_{n,i} ^{\mathrm{c}} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta \right| &
\leq \int_0^\pi \mathbb{E}\, [ | \Delta_n^{(i)} (\theta) | {\,\bf 1} (E_{n,i}^{\mathrm{c}} (\delta,\gamma) ) \mid {\mathcal{F}}_i ] \textup{d} \theta . \end{align}
Applying the bound (\ref{deltabound}), we obtain
\begin{align}
\label{eq6}
\int_0^\pi \mathbb{E}\, [ | \Delta_n^{(i)} (\theta) | {\,\bf 1} (E_{n,i}^{\mathrm{c}} (\delta,\gamma) ) \mid {\mathcal{F}}_i ] \textup{d} \theta
\leq
\pi \mathbb{E}\, [ ( \| Z_i\| + \| Z_i'\| ) {\,\bf 1} (E_{n,i}^{\mathrm{c}} (\delta,\gamma) ) \mid {\mathcal{F}}_i ] \nonumber\\
= \pi \| Z_i \| \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ]
+ \pi \mathbb{E}\, [ \| Z_i' \| {\,\bf 1} ( E_{n,i}^{\mathrm{c}} (\delta,\gamma) ) \mid {\mathcal{F}}_i ] ,\end{align}
since $Z_i$ is ${\mathcal{F}}_i$-measurable with $\mathbb{E}\, \| Z_i \| < \infty$.
We decompose the first integral on the right-hand side of ({\mathrm{e}}f{eq4}) as $I_1 + I_2 + I_3$,
where
\begin{align*} I_1 & := \int_0^{\delta} \mathbb{E}\, [ \Delta_n^{(i)} (\theta) {\,\bf 1} (E_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta , \\
I_2 & := \int_{\delta}^{\pi-\delta} \mathbb{E}\, [ \Delta_n^{(i)} (\theta) {\,\bf 1} (E_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta , \\
I_3 & := \int_{\pi-\delta}^{\pi} \mathbb{E}\, [ \Delta_n^{(i)} (\theta) {\,\bf 1} (E_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta .\end{align*}
First we deal with $I_1$ and $I_3$.
We have
\begin{align*} | I_1 | \leq \int_0^{\delta} \mathbb{E}\, [ | \Delta^{(i)}_{n} (\theta) | \mid {\mathcal{F}}_i ] \textup{d} \theta \leq
\delta \mathbb{E}\, [ \| Z_i \| + \| Z_i'\| \mid {\mathcal{F}}_i ] ,{\ \mathrm{a.s.}}, \end{align*}
by another application of (\ref{deltabound}).
Here $\mathbb{E}\, [ \| Z_i \| \mid {\mathcal{F}}_i ] = \| Z_i \|$, since $Z_i$ is ${\mathcal{F}}_i$-measurable,
and, since $Z_i'$ is independent of ${\mathcal{F}}_i$,
$\mathbb{E}\, [ \| Z_i'\| \mid {\mathcal{F}}_i ] = \mathbb{E}\, \| Z_i'\| = \mathbb{E}\, \| Z_1 \|$.
A similar argument applies to $I_3$, so that
\begin{equation}
\label{eq7}
|I_1+I_3| \leq 2 \delta \| Z_i \| + 2 \delta \mathbb{E}\, \| Z_1 \| , {\ \mathrm{a.s.}} \end{equation}
We now consider $I_2$.
From (\ref{change}), since $i \in I_{n,\gamma}$,
we have
\begin{align*} I_2 & = \int_{\delta}^{\pi - \delta} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta {\,\bf 1} (E_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta
\\
& = \int_{\delta}^{\pi - \delta} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta \mid {\mathcal{F}}_i ] \textup{d} \theta
- \int_{\delta}^{\pi - \delta} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta {\,\bf 1} (E^{\rm c}_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta
.\end{align*}
Here, by the triangle inequality,
\begin{align}
& {} \phantom{=} {} \left| \int_{\delta}^{\pi - \delta} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta {\,\bf 1} (E^{\rm c}_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta \right| \nonumber\\
& \leq \int_{0}^{\pi} \mathbb{E}\, [ ( \| Z_i\| + \| Z'_i\|) {\,\bf 1} (E^{\rm c}_{n,i} (\delta,\gamma)) \mid {\mathcal{F}}_i ] \textup{d} \theta \nonumber\\
\label{eq8}
& =
\pi \| Z_i \| \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ]
+ \pi \mathbb{E}\, [ \| Z_i' \| {\,\bf 1} ( E_{n,i}^{\mathrm{c}} (\delta,\gamma) ) \mid {\mathcal{F}}_i ]
, \end{align}
similarly to (\ref{eq6}).
Finally, similarly to (\ref{eq7}),
\begin{align} \left| \int_{\delta}^{\pi - \delta} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta \mid {\mathcal{F}}_i ] \textup{d} \theta
- \int_{0}^{\pi} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta \mid {\mathcal{F}}_i ] \textup{d} \theta \right| \nonumber\\
\leq 2 \delta \mathbb{E}\, [ \| Z_i\| + \| Z'_i \| \mid {\mathcal{F}}_i ]
\label{eq15}
= 2 \delta \left( \| Z_i \| + \mathbb{E}\, \| Z_1 \| \right) . \end{align}
We combine (\ref{eq4}) with (\ref{eq5}) and the bounds in (\ref{eq6})--(\ref{eq15})
to give
\begin{align}
\label{eq29}
\left| D_{n,i} - \int_{0}^{\pi} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta \mid {\mathcal{F}}_i ] \textup{d} \theta \right|
& \leq
4 \delta \| Z_i \| + 4 \delta \mathbb{E}\, \| Z_1\| + 3 \pi \| Z_i\| \mathbb{P} [ E_{n,i}^{\mathrm{c}}(\delta, \gamma) \mid {\mathcal{F}}_i ] \nonumber \\
&{} \quad {} + 3 \pi \mathbb{E}\, [ \| Z_i'\| {\,\bf 1} ( E_{n,i}^{\mathrm{c}}(\delta, \gamma) ) \mid {\mathcal{F}}_i ]
, {\ \mathrm{a.s.}}
\end{align}
To complete the proof of the lemma, we compute the integral on the left-hand side
of (\ref{eq29}).
First note that
$\mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta \mid {\mathcal{F}}_i ] = ( Z_i - \mathbb{E}\, [ Z'_i] ) \cdot {\bf e}_\theta$, since
$Z_i$ is ${\mathcal{F}}_i$-measurable and $Z_i'$ is independent of ${\mathcal{F}}_i$,
so that
\[ \int_{0}^{\pi} \mathbb{E}\, [ (Z_i - Z'_i) \cdot {\bf e}_\theta \mid {\mathcal{F}}_i ] \textup{d} \theta = \int_{0}^{\pi} ( Z_i -\mathbb{E}\, [ Z_i] ) \cdot {\bf e}_\theta \textup{d} \theta.\]
To evaluate the last integral, it is convenient to introduce the notation $Z_i - \mathbb{E}\, [ Z_i] = R_i {\bf e}_{\Theta_i}$
where $R_i = \| Z_i - \mathbb{E}\, [ Z_i ] \| \geq 0$ and $\Theta_i \in [0, 2\pi )$.
Then
\begin{align*} \int_{0}^{\pi} ( Z_i -\mathbb{E}\, [ Z_i] ) \cdot {\bf e}_\theta \textup{d} \theta &
= \int_0^\pi R_i {\bf e}_{\Theta_i} \cdot {\bf e}_\theta \textup{d} \theta
= R_i \int_0^\pi \cos (\theta - \Theta_i) \textup{d} \theta \\
& = 2 R_i \sin \Theta_i = 2 R_i {\bf e}_{\Theta_i} \cdot {\bf e}_{\pi/2}.\end{align*}
Now (\ref{eq2}) follows from (\ref{eq29}), and the proof is complete.
\end{proof}
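The closing integral evaluation, $\int_0^\pi \cos(\theta - \Theta)\,\textup{d}\theta = 2 \sin \Theta$, can be confirmed numerically. A quick midpoint-rule sketch (not part of the thesis) at a representative angle $\Theta = 1$:

```python
import math

# Verify int_0^pi cos(theta - Theta) d(theta) = 2 sin(Theta) numerically.
Theta = 1.0
N = 100_000
h = math.pi / N
# composite midpoint rule over [0, pi]
integral = sum(math.cos((k + 0.5) * h - Theta) for k in range(N)) * h
exact = 2 * math.sin(Theta)
```

The midpoint rule has error $O(h^2)$ here, so the agreement is far tighter than the tolerance used below.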
\subsection{Proofs for the central limit theorem}
For ease of notation, we write $Y_i := 2 \| \mathbb{E}\, [ Z_1] \|^{-1} ( Z_i - \mathbb{E}\, [ Z_1] ) \cdot \mathbb{E}\, [ Z_1]$, and
define
\[ W_{n,i} := D_{n,i} - Y_i .\]
The upper bound for $|W_{n,i}|$ in
Lemma \ref{estimate} together with Lemma \ref{En}(ii) will enable us to prove the following result, which will be the basis of our proof of Theorem \ref{thm0}.
\begin{lemma}
\label{wni}
Suppose that $\mathbb{E}\, [ \| Z_1\|^2] <\infty$ and $\| \mathbb{E}\, [Z_1]\| \neq 0$. Then
\[ \lim_{n \to \infty} n^{-1} \sum_{i=1}^n \mathbb{E}\, [ W_{n,i}^2 ] = 0 . \]
\end{lemma}
\begin{proof}
Fix $\varepsilon >0$. We take $\gamma \in (0,1/2)$ and $\delta \in (0,\pi/2)$, to be specified later. We divide the sum of interest into two parts, namely $i \in I_{n, \gamma}$ and $i \notin I_{n,\gamma}$.
Now from (\ref{cauchy}) with (\ref{deltabound}) we have
$| L_n^{(i)} - L_n | \leq \pi ( \| Z_i \| + \| Z_i' \| )$, a.s.,
so that
\[ | D_{n,i} | \leq \pi \mathbb{E}\, [ \| Z_i \| + \| Z_i' \| \mid {\mathcal{F}}_i ]
= \pi ( \| Z_i \| + \mathbb{E}\, \| Z_i \| ) .\]
It then follows from the triangle inequality that
\[ | W_{n,i} | \leq | D_{n,i} | + 2 \| Z_i - \mathbb{E}\, [ Z_i ] \|
\leq ( \pi + 2) ( \| Z_i \| + \mathbb{E}\, \| Z_i \| ) .\]
So provided $\mathbb{E}\, [ \| Z_1 \|^2 ] < \infty$, we have
$\mathbb{E}\, [ W_{n,i}^2 ] \leq C_0$ for all $n$ and all $i$, for some
constant $C_0 < \infty$, depending only on the distribution of $Z_1$.
Hence
\[ \frac{1}{n} \sum_{i \notin I_{n,\gamma} } \mathbb{E}\, [ W_{n,i}^2 ]
\leq \frac{1}{n} 2 \gamma n C_0 = 2 \gamma C_0 ,\]
using the fact that there are at most $2\gamma n$ terms in the sum.
From now on, choose $\gamma >0$ small enough so that $2 \gamma C_0 < \varepsilon$.
Now consider $i \in I_{n, \gamma}$.
For such $i$, (\ref{eq2}) shows that, for some constant $C_1 < \infty$,
\begin{align}
\label{eq30}
| W_{n,i} | & \leq C_1 (1 + \| Z_i \| ) \delta + C_1 \| Z_i \| \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \nonumber\\
& \qquad \qquad {} + C_1 \mathbb{E}\, [ \| Z_i'\| {\,\bf 1} (E_{n,i}^{\mathrm{c}} (\delta,\gamma)) \mid {\mathcal{F}}_i ] , {\ \mathrm{a.s.}} \end{align}
Here, for any $B_1 \in (0,\infty)$, a.s.,
\begin{align*}
\mathbb{E}\, [ \| Z_i'\| {\,\bf 1} (E_{n,i}^{\mathrm{c}} (\delta,\gamma)) \mid {\mathcal{F}}_i ]
& \leq \mathbb{E}\, [ \| Z_i'\| {\,\bf 1} \{ \| Z_i' \| > B_1 \} \mid {\mathcal{F}}_i ]
+ B_1 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \\
& = \mathbb{E}\, [ \| Z_i'\| {\,\bf 1} \{ \| Z_i' \| > B_1 \} ] + B_1 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] ,
\end{align*}
since $Z_i'$ is independent of ${\mathcal{F}}_i$.
Here, since $\mathbb{E}\, \| Z_i'\| = \mathbb{E}\, \|Z_1 \| < \infty$,
the dominated convergence
theorem (see Lemma \ref{dominated convergence}) implies that $\mathbb{E}\, [ \| Z_i'\| {\,\bf 1} \{ \| Z_i' \| > B_1 \} ] \to 0$
as $B_1 \to \infty$.
So we can choose $B_1 = B_1 (\delta)$ large enough so that
\[ \mathbb{E}\, [ \| Z_i'\| {\,\bf 1} (E_{n,i}^{\mathrm{c}} (\delta,\gamma)) \mid {\mathcal{F}}_i ]
\leq \delta + B_1 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ], {\ \mathrm{a.s.}} \]
Combining this with (\ref{eq30}) we see that there is a constant $C_2 < \infty$ for which
\[ | W_{n,i} | \leq C_2 (1 + \| Z_i \| ) \left(
\delta + B_1 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \right) , {\ \mathrm{a.s.}}
\]
Hence
\begin{align*}
W_{n,i}^2 & \leq C_2^2 (1 + \| Z_i \| )^2 \left( \delta^2 + 2B_1 \delta \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] +
B_1^2 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ]^2 \right) \\
& \leq C_3^2 (1 + \| Z_i \| )^2 \left( \delta + B_1^2 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \right) ,\end{align*}
for some constant $C_3 < \infty$,
using the facts that $\delta < \pi/2 < 2$ and $\mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \leq 1$.
Taking expectations we get
\[ \mathbb{E}\, [ W_{n,i}^2 ] \leq C_3^2 \delta \mathbb{E}\, [ (1 + \| Z_i \| )^2 ] + C_3^2 B_1^2 \mathbb{E}\,
\left[ (1 + \| Z_i \| )^2 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \right] .\]
Provided $\mathbb{E}\, [ \|Z_1 \|^2 ] < \infty$,
there is a constant $C_4 < \infty$ such that the first term
on the right-hand side of the last display is bounded
by $C_4 \delta$. Now fix $\delta >0$ small
enough so that $C_4 \delta < \varepsilon$; this choice also
fixes $B_1$. Then
\begin{equation}
\label{eq32}
\mathbb{E}\, [ W_{n,i}^2 ] \leq \varepsilon + C_3^2 B_1^2 \mathbb{E}\,
\left[ (1 + \| Z_i \| )^2 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ] \right] .\end{equation}
For the final term in (\ref{eq32}), observe that, for any $B_2 \in (0,\infty)$, a.s.,
\begin{align}
\label{eq31}
(1 + \| Z_i \|)^2 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta,\gamma) \mid {\mathcal{F}}_i ]
& \leq (1+B_2)^2 \mathbb{P} [ E_{n,i}^{\mathrm{c}} (\delta ,\gamma) \mid {\mathcal{F}}_i ] \nonumber\\
& {} \quad {} + (1 + \| Z_i \|)^2 {\,\bf 1} \{ \| Z_i \| > B_2 \} .\end{align}
Here
$\mathbb{E}\, [ (1 + \| Z_i \|)^2 {\,\bf 1} \{ \| Z_i \| > B_2 \} ] \to 0$
as $B_2 \to \infty$, provided $\mathbb{E}\, [ \| Z_1 \|^2] <\infty$,
by the dominated convergence theorem.
Hence, since $\delta$ and $B_1$ are fixed,
we can choose $B_2 = B_2(\varepsilon) \in (0,\infty)$ such that
$$C_3^2 B_1^2 \mathbb{E}\, \left[ (1 + \| Z_i \|)^2 {\,\bf 1} \{ \| Z_i \| > B_2 \} \right] < \varepsilon .$$
Then taking expectations in (\ref{eq31}) we obtain from (\ref{eq32}) that
\[ \mathbb{E}\, [ W_{n,i}^2 ] \leq 2 \varepsilon + C_3^2 B_1^2 (1+B_2)^2 \mathbb{P} [ E_{n,i}^{\rm c } (\delta,\gamma) ]
.\]
Now choose $n_0$ such that
$C_3^2 B_1^2 (1+B_2)^2 \mathbb{P} [ E_{n,i}^{\rm c } (\delta,\gamma) ] < \varepsilon$ for all $n \geq n_0$,
which we may do by Lemma \ref{En}(ii). So for the given $\varepsilon>0$ and $\gamma \in (0,1/2)$,
we can choose
$n_0$ such that for all $i \in I_{n,\gamma}$ and all $n \geq n_0$,
$ \mathbb{E}\, [ W_{n,i}^2 ] \leq 3 \varepsilon$. Hence
\[ \frac{1}{n} \sum_{i \in I_{n,\gamma}} \mathbb{E}\, [ W_{n,i}^2 ] \leq 3 \varepsilon ,\]
for all $n \geq n_0$.
Combining the estimates for $i \in I_{n,\gamma}$ and $i \notin I_{n,\gamma}$, we see that
\[ \frac{1}{n} \sum_{i=1}^n \mathbb{E}\, [ W_{n,i}^2 ]
\leq 2 \gamma C_0 + 3 \varepsilon \leq 4 \varepsilon ,\]
for all $n \geq n_0$. Since $\varepsilon>0$ was arbitrary, the result follows.
\end{proof}
Now we can state and prove our main theorems.
\begin{theorem}
\label{thm0}
Suppose that $\mathbb{E}\, [ \| Z_1 \|^2] < \infty$ and
$\| \mathbb{E}\, [ Z_1 ] \| \neq 0$. Then,
as $n \to \infty$,
\[ n^{-1/2} \left| L_n - \mathbb{E}\, [ L_n] - \sum_{i=1}^n \frac{ 2 (Z_i - \mathbb{E}\, [ Z_1] ) \cdot \mathbb{E}\, [ Z_1]}{\| \mathbb{E}\, [Z_1] \|} \right| \to 0 , \ \textrm{in} \ L^2 .\]
\end{theorem}
\begin{proof}
First note that
\[ \mathbb{E}\, [ W_{n,i} \mid {\mathcal{F}}_{i-1} ] = \mathbb{E}\, [ D_{n,i} \mid {\mathcal{F}}_{i-1} ] - \mathbb{E}\, [ Y_i \mid {\mathcal{F}}_{i-1} ]
= 0 - \mathbb{E}\, [ Y_i],
\]
since $D_{n,i}$ is a martingale difference sequence and $Y_i$ is independent of ${\mathcal{F}}_{i-1}$.
Here, by definition, $\mathbb{E}\, [ Y_i] =0$, and so $W_{n,i}$ is also a martingale difference
sequence. Therefore, by orthogonality,
$$n^{-1}\mathbb{E}\,\left[ \left( \sum_{i=1}^{n}W_{n,i} \right)^2 \right] =
n^{-1} \sum_{i=1}^{n} \mathbb{E}\,\left[ W_{n,i}^2 \right] \to 0 {\ \mathrm{as}\ } n \to \infty, \text{ by Lemma \ref{wni}.} $$
In other words, $n^{-1/2} \sum_{i=1}^{n}W_{n,i} \to 0$ in $L^2$,
which, with Lemma \ref{lem1}(i), implies the statement in the theorem.
\end{proof}
\begin{theorem}
\label{thm1}
Suppose that $\mathbb{E}\, [ \| Z_1 \|^2] < \infty$ and
$\| \mathbb{E}\, [ Z_1 ] \| \neq 0$. Then
\begin{equation}
\label{vlim}
\lim_{n \to \infty} n^{-1} \mathbb{V} {\rm ar} [ L_n ] = \frac{4 \mathbb{E}\, [ ( (Z_1 - \mathbb{E}\, [Z_1] ) \cdot \mathbb{E}\, [ Z_1] )^2 ]}{\| \mathbb{E}\, [ Z_1] \|^2} = 4 \sigma^2_{\mu}. \end{equation}
\end{theorem}
\begin{remarks} \label{rmk:degenerate}
\begin{enumerate}[(i)]
\item The assumptions $\mathbb{E}\, [ \| Z_1 \|^2] < \infty$ and
$\| \mathbb{E}\, [ Z_1 ] \| \neq 0$ ensure $4 \sigma^2_{\mu} < \infty$.
\item To compare the limit result (\ref{vlim}) with Snyder and Steele's upper bound (\ref{ssup}), observe that
\[ 4 \sigma^2_{\mu} = 4 \left( \frac{\mathbb{E}\, [ (Z_1 \cdot \mathbb{E}\, [ Z_1] )^2 ] - \| \mathbb{E}\,[ Z_1 ] \|^4 }{\| \mathbb{E}\, [ Z_1] \|^2 } \right)
\leq 4 \left( \mathbb{E}\,[ \| Z_1 \|^2 ] - \| \mathbb{E}\, [ Z_1 ] \|^2 \right) .\]
\item The limit $4 \sigma^2_{\mu}$ is zero if and only if $(Z_1 - \mathbb{E}\, [ Z_1] ) \cdot \mathbb{E}\, [ Z_1] =0$
with probability 1, i.e., if $Z_1 - \mathbb{E}\, [ Z_1]$ is always orthogonal to $\mathbb{E}\,[Z_1]$.
In such a degenerate case,
(\ref{vlim}) says that $\mathbb{V} {\rm ar}[L_n]=o(n)$. This is the case, for example, if $Z_1$ takes values $(1,1)$ and $(1,-1)$ each with probability $1/2$. Note that
the Snyder--Steele bound (\ref{ssup}) applied in this example says
only that $\mathbb{V} {\rm ar} [L_n] \leq (\pi^2/2) n$, which is not the correct order. Here, the two-dimensional trajectory can be viewed as a space-time trajectory of a \emph{one-dimensional} simple symmetric random walk. We conjecture that in fact $\mathbb{V} {\rm ar} [L_n] = O (\log n)$. Steele \cite{steele}
obtains variance results for the \emph{number of faces} of the convex hull
of one-dimensional simple random walk, and comments that such results for $L_n$ seem ``far out of reach'' \cite[p.\ 242]{steele}.
\end{enumerate}
\end{remarks}
\begin{proof}
Write
\begin{equation}
\label{xizeta}
\xi_n = \frac{L_n - \mathbb{E}\,[L_n]}{\sqrt { n} }; ~~\textrm{and} ~~
\zeta_n = \frac {1}{\sqrt{ n }} \sum_{i=1}^n Y_i,
~\textrm{where} ~ Y_i = \frac{2 (Z_i - \mathbb{E}\, [ Z_1 ] ) \cdot \mathbb{E}\, [ Z_1]}{\| \mathbb{E}\, [ Z_1 ] \|}.
\end{equation}
Then Theorem \ref{thm0} shows that $| \xi_n - \zeta_n | \to 0$ in $L^2$ as $n \to \infty$. Also,
with $4 \sigma^2_{\mu}$ as given by (\ref{vlim}), $\mathbb{E}\, [ \zeta_n^2 ] = 4 \sigma^2_{\mu}$. Then a computation shows that
\[ n^{-1} \mathbb{V} {\rm ar} [ L_n ] = \mathbb{E}\, [ \xi_n^2 ] = \mathbb{E}\, [ (\xi_n - \zeta_n)^2] + \mathbb{E}\, [ \zeta_n^2 ] +2 \mathbb{E}\, [ (\xi_n -\zeta_n) \zeta_n ] .\]
Here, by the $L^2$ convergence, $\mathbb{E}\, [ (\xi_n - \zeta_n)^2] \to 0$ and, by the Cauchy--Schwarz inequality (see Lemma \ref{Cauchy-Schwarz ineq.}),
$$\left| \mathbb{E}\, [ (\xi_n -\zeta_n) \zeta_n ] \right| \leq \left( \mathbb{E}\, [ (\xi_n - \zeta_n)^2] \mathbb{E}\, [ \zeta_n^2] \right)^{1/2} \to 0 \text{ as well.} $$
So $\mathbb{E}\, [ \xi_n^2] \to 4 \sigma^2_{\mu}$ as $n \to \infty$.
\end{proof}
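The variance limit just proved lends itself to a direct numerical check. The sketch below (illustrative only; the step law and parameters are assumptions, not from the thesis) simulates walks with steps $Z = (U, 1+V)$, $U, V \sim$ Uniform$(-1,1)$ independent, for which $(Z_1 - \mathbb{E} Z_1) \cdot \mathbb{E} Z_1 = V$ and hence $4\sigma^2_\mu = 4\,\mathbb{V}{\rm ar}(V) = 4/3$; the hull perimeter $L_n$ is computed with Andrew's monotone chain:

```python
import math
import random

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_perimeter(points):
    # Convex hull via Andrew's monotone chain, then sum of edge lengths.
    # A degenerate (collinear) hull correctly yields twice the segment length.
    pts = sorted(set(points))
    if len(pts) < 2:
        return 0.0
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

random.seed(1)
n, trials = 300, 1500
perims = []
for _ in range(trials):
    x = y = 0.0
    pts = [(0.0, 0.0)]
    for _ in range(n):
        x += random.uniform(-1, 1)          # zero-mean horizontal component
        y += 1.0 + random.uniform(-1, 1)    # drift mu = 1 in direction e_{pi/2}
        pts.append((x, y))
    perims.append(hull_perimeter(pts))
mean = sum(perims) / trials
ratio = sum((p - mean) ** 2 for p in perims) / (trials - 1) / n
# ratio estimates Var[L_n] / n, predicted to be near 4/3 for this step law
```

The estimate at $n = 300$ carries both Monte Carlo and finite-$n$ error, so only a loose agreement with $4/3$ should be expected.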
In the case where $\mathbb{E}\, [ \| Z_1 \|^2] < \infty$ and
$\| \mathbb{E}\, [ Z_1 ] \| = \mu > 0$, Snyder and Steele deduce from their bound (\ref{ssup}) a strong law of large numbers for $L_n$,
namely
$\lim_{n \to \infty} n^{-1} L_n = 2 \mu$, a.s.\ (see \cite[p.\ 1168]{ss}).
Given this and the variance asymptotics of Theorem \ref{thm1},
it is natural to ask whether there is an accompanying central limit theorem.
Our next result gives a positive answer in the non-degenerate case, again with essentially minimal assumptions.
In the proof of Theorem \ref{thm2} we will use two facts about convergence in distribution that we now
recall (see Lemma \ref{slutsky}).
First, if sequences of random variables
$\xi_n$ and $\zeta_n$ are such that $\zeta_n \to \zeta$ in distribution for some random variable $\zeta$ and $|\xi_n -\zeta_n| \to 0$ in probability,
then $\xi_n \to \zeta$ in distribution (this is \emph{Slutsky's theorem}). Second,
if $\zeta_n \to \zeta$ in distribution and $\alpha_n \to \alpha$ in probability, then $\alpha_n \zeta_n \to \alpha \zeta$ in distribution.
\begin{theorem}
\label{thm2}
Suppose that $\mathbb{E}\, [ \| Z_1 \|^2] < \infty$, $\| \mathbb{E}\, [ Z_1 ] \| \neq 0$ and $\sigma^2_{\mu} > 0$. Then for any $x \in \mathbb{R}$,
\begin{equation}
\label{clt}
\lim_{n \to \infty} \mathbb{P} \bigg[ \frac{L_n - \mathbb{E}\, [ L_n ] }{\sqrt{\mathbb{V} {\rm ar} [ L_n ]} } \leq x \bigg]
=
\lim_{n \to \infty} \mathbb{P} \bigg[ \frac{L_n - \mathbb{E}\, [ L_n ] }{\sqrt{4 \sigma^2_{\mu} n} } \leq x \bigg]
= \Phi (x) ,
\end{equation}
where $\Phi$ is the standard normal distribution function.
\end{theorem}
\begin{proof}
Use the notation for $\xi_n$ and $\zeta_n$ as given by (\ref{xizeta}).
Then, by Theorem \ref{thm0}, $| \xi_n - \zeta_n | \to 0$ in $L^2$,
and hence
in probability.
In the sum $\zeta_n$, the $Y_i$ are i.i.d.\ random variables
with mean $0$ and variance $\mathbb{E}\, [Y_i^2 ] = 4 \sigma^2_{\mu}$. Hence the classical
central limit theorem (see e.g.\ \cite[p.\ 93]{durrett})
shows that $\zeta_n$ converges in distribution to a normal
random variable with mean $0$ and variance $4 \sigma^2_{\mu}$.
Slutsky's theorem then implies that $\xi_n$ has the same distributional limit. Hence, for any $x \in \mathbb{R}$,
\[ \lim_{n\to \infty} \mathbb{P} \left[ \frac{ \xi_n}{\sqrt{ 4 \sigma^2_{\mu} }} \leq x \right] = \lim_{n\to \infty} \mathbb{P} \left[ \frac{L_n - \mathbb{E}\, [ L_n]}{\sqrt{ 4 \sigma^2_{\mu} n}} \leq x \right] = \Phi (x) ,\]
where $\Phi$ is the standard normal distribution function.
Moreover,
\[ \mathbb{P} \left[ \frac{L_n - \mathbb{E}\, [ L_n]}{\sqrt{ \mathbb{V} {\rm ar} [ L_n ]}} \leq x \right]
= \mathbb{P} \left[ \frac{\xi_n \alpha_n}{\sqrt { 4 \sigma^2_{\mu} }} \leq x \right] ,
\]
where $\alpha _n = \sqrt {\frac{4 \sigma^2_{\mu} n}{\mathbb{V} {\rm ar} [L_n] }} \to 1$
by Theorem \ref{thm1}. This verifies the limit statements in (\ref{clt}).
\end{proof}
\section{Asymptotics for the zero drift case}
Recall that $h_1$ is defined in \eqref{h_t} and $\Sigma$ is a covariance matrix (see Section \ref{sec:Brownian-hulls}), which is positive semidefinite and symmetric.
Define
\begin{equation}
\label{eq:var_constants}
u_0 ( \Sigma ) := \mathbb{V} {\rm ar} {\mathcal{L}} ( \Sigma^{1/2} h_1 ) .
\end{equation}
Then we have the following results.
\begin{proposition}
\label{prop:var-limit-zero u0}
Suppose that \eqref{ass:moments} holds for some $p > 2$,
and $\mu =0$. Then
\[ \lim_{n \to \infty} n^{-1} \mathbb{V} {\rm ar} L_n = u_0 ( \Sigma ).\]
\end{proposition}
\begin{proof}
From \eqref{eq:walk-cauchy} and Lemma \ref{lem:walk_moments}(ii), for $p > 2$ we have $\sup_n \mathbb{E}\, [ ( n^{-1} L_n ^2 )^{p/2} ] < \infty$.
Hence $n^{-1} L_n^2$ is uniformly integrable, and we deduce the convergence of $n^{-1} \mathbb{V} {\rm ar} L_n$ from Corollary \ref{cor:zero-limits}.
\end{proof}
The next result gives bounds on $u_0(\Sigma)$ defined in \eqref{eq:var_constants}.
\begin{proposition}
\label{prop:var_bounds u0}
\begin{equation}
\frac{263}{1080} \pi^{-3/2} {\mathrm{e}}^{-144/25}
\trace \Sigma
\leq u_0 ( \Sigma) \leq \frac{\pi^2}{2} \trace \Sigma . \label{eq:u-bounds}
\end{equation}
In addition, if $\Sigma = I$ we have the following sharper form of the lower bound:
\[ \mathbb{V} {\rm ar} \ell_1 = u_0 ( I ) \geq \frac{2}{5} \left(1-\frac{8}{25\pi} \right) {\mathrm{e}}^{-25 \pi /16 } > 0 .\]
\end{proposition}
For the proof of this result, we rely on a few facts about one-dimensional Brownian motion,
including the bound (see e.g.\ equation (2.1) of \cite{jp}), valid for all $r>0$,
\begin{equation}
\label{eq:brown_norm}
\mathbb{P} \left[ \sup_{ 0 \leq s \leq 1} | w (s) | \leq r \right] \geq
\frac{4}{\pi} \left( {\mathrm{e}}^{-\pi^2/(8r^2)} - \frac{1}{3} {\mathrm{e}}^{-9\pi^2/(8r^2)} \right) .
\end{equation}
We let $\Phi$ denote the distribution function of a standard normal random variable; we will also need
the standard Gaussian tail bound (see e.g.~\cite[p.~12]{durrett})
\begin{equation}
\label{eq:gauss-bound}
1 - \Phi( x) = \frac{1}{\sqrt{2 \pi}} \int_x^\infty {\mathrm{e}}^{-y^2/2} \textup{d} y \geq \frac{1}{x \sqrt{2 \pi}} \left(1 - \frac{1}{x^2} \right) {\mathrm{e}}^{-x^2/2} , ~~ \text{for } x > 0 .
\end{equation}
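As a quick sanity check (not in the source), the tail bound above can be verified numerically using the identity $1 - \Phi(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$:

```python
import math

def normal_tail(x):
    # 1 - Phi(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def tail_lower_bound(x):
    # right-hand side of the Gaussian tail lower bound, informative for x > 1
    return (1.0 / (x * math.sqrt(2 * math.pi))) * (1 - 1 / x**2) * math.exp(-x**2 / 2)

# the bound should sit below the true tail at each test point
checks = [(x, normal_tail(x), tail_lower_bound(x)) for x in (1.5, 2.0, 3.0, 4.0)]
```

For $x \leq 1$ the right-hand side is nonpositive and the bound is trivial; the check above covers the informative range.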
We also note that for $e \in \mathbb{S}_1$ the diffusion
$e \cdot (\Sigma^{1/2} b)$ is a one-dimensional
Brownian motion with
variance parameter $e^\tra \Sigma e$.
The idea behind the variance lower bounds is elementary. For a random variable $X$ with mean $\mathbb{E}\, X$,
we have, for any $\theta \geq 0$,
$$\mathbb{V} {\rm ar} X = \mathbb{E}\, \left[ (X- \mathbb{E}\, X )^2 \right] \geq \theta^2 \mathbb{P} \left[ | X - \mathbb{E}\, X | \geq \theta \right].$$
If $\mathbb{E}\, X \geq 0$,
taking $\theta = \alpha \mathbb{E}\, X $ for $\alpha >0$, we obtain
\begin{equation}
\label{eq:var-bound}
\mathbb{V} {\rm ar} X \geq \alpha^2 ( \mathbb{E}\, X )^2 \big( \mathbb{P} [ X \leq (1-\alpha) \mathbb{E}\, X ] + \mathbb{P} [ X \geq (1 + \alpha) \mathbb{E}\, X ] \big)
,\end{equation}
and our lower bounds use whichever of the
latter two probabilities is most convenient.
\begin{proof}[Proof of Proposition \ref{prop:var_bounds u0}]
We start with the upper bound. Snyder and Steele's bound \eqref{eq:ss} together with
the statement for $\mathbb{V} {\rm ar} L_n$ in
Proposition \ref{prop:var-limit-zero u0} gives the upper bound in \eqref{eq:u-bounds}.
We now move on to the lower bounds.
Let $e_\Sigma \in \mathbb{S}_1$ denote an eigenvector of $\Sigma$ corresponding to the principal eigenvalue $\lambda_\Sigma$.
Then since $\Sigma^{1/2} h_1$ contains the line segment from $0$ to any (other) point in $\Sigma^{1/2} h_1$, we have from monotonicity of ${\mathcal{L}}$ that
\[ {\mathcal{L}} ( \Sigma^{1/2} h_1 ) \geq 2 \sup_{0 \leq s \leq 1 } \| \Sigma^{1/2} b(s) \| \geq 2 \sup_{0 \leq s \leq 1 } \left( e_\Sigma \cdot ( \Sigma^{1/2} b (s) ) \right) .\]
Here $e_\Sigma \cdot ( \Sigma^{1/2} b )$ has the same distribution as $\lambda_\Sigma^{1/2} w$. Hence, for $\alpha > 0$,
\begin{align*} \mathbb{P} \left[ {\mathcal{L}} ( \Sigma^{1/2} h_1) \geq (1+ \alpha) \mathbb{E}\, {\mathcal{L}} ( \Sigma^{1/2} h_1) \right] &
\geq
\mathbb{P} \left[ \sup_{0 \leq s \leq 1} w(s) \geq \frac{1+\alpha}{2} \lambda_\Sigma^{-1/2} \mathbb{E}\, {\mathcal{L}} ( \Sigma^{1/2} h_1) \right] \\
& \geq
\mathbb{P} \left[ \sup_{0 \leq s \leq 1} w(s) \geq 2 (1+\alpha) \sqrt{2} \right] ,\end{align*}
using the fact that $\lambda_\Sigma \geq \frac{1}{2} \trace \Sigma$ and the upper bound
in \eqref{EL-bounds}.
Applying \eqref{eq:var-bound} to $X = {\mathcal{L}} ( \Sigma^{1/2} h_1) \geq 0$ gives, for $\alpha >0$,
\begin{align*}
\mathbb{V} {\rm ar} {\mathcal{L}} (\Sigma^{1/2} h_1 ) & \geq \alpha^2 ( \mathbb{E}\, {\mathcal{L}} ( \Sigma^{1/2} h_1) )^2 \mathbb{P} \left[ \sup_{0 \leq s \leq 1} w(s) \geq 2 (1+\alpha) \sqrt{2} \right] \\
& \geq \frac{32}{\pi}
\alpha^2 \left( \trace \Sigma \right) \left( 1 - \Phi ( 2 (1+\alpha) \sqrt{2} ) \right)
,\end{align*}
using the lower bound in \eqref{EL-bounds} and the fact that
$\mathbb{P} [ \sup_{0 \leq s \leq 1 } w(s) \geq r ] = 2 \mathbb{P} [ w(1) \geq r ] = 2 ( 1 -\Phi (r))$ for $r >0$, which is
a consequence of the reflection principle. Numerical curve sketching suggests that $\alpha = 1/5$ is close to optimal; this choice of $\alpha$ gives,
using \eqref{eq:gauss-bound},
\[ \mathbb{V} {\rm ar} {\mathcal{L}} (\Sigma^{1/2} h_1 ) \geq \frac{32}{25 \pi} \left( \trace \Sigma \right) \left( 1 - \Phi ( 12 \sqrt{2} /5 ) \right)
\geq \frac{263}{1080} \pi^{-3/2} \left( \trace \Sigma \right) \exp \left\{ -\frac{144}{25} \right\} ,\]
which is the lower bound in \eqref{eq:u-bounds}.
We get a sharper result when $\Sigma = I$ and ${\mathcal{L}} ( h_1 ) = \ell_1$, since we know $\mathbb{E}\, \ell_1 = \sqrt{ 8 \pi}$ explicitly.
Then, similarly to above, we get
\[ \mathbb{V} {\rm ar} \ell_1 \geq 8 \pi \alpha^2 \mathbb{P}
\left[ \sup_{0 \leq s \leq 1} w(s) \geq (1+\alpha) \sqrt{2 \pi} \right] , \text{ for } \alpha >0, \]
which at $\alpha = 1/4$ yields the stated lower bound.
\end{proof}
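The reflection-principle identity $\mathbb{P}[\sup_{0 \leq s \leq 1} w(s) \geq r] = 2(1 - \Phi(r))$ used in the proof can be checked by Monte Carlo, approximating $w$ by a scaled Gaussian random walk. The step and trial counts below are illustrative choices, and the discretization slightly undershoots the true supremum.

```python
import math
import random

random.seed(1)

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

N, trials, r = 500, 4000, 1.0
hits = 0
for _ in range(trials):
    s = peak = 0.0
    for _ in range(N):
        s += random.gauss(0.0, 1.0) / math.sqrt(N)  # approximates w(i/N)
        peak = max(peak, s)
    hits += peak >= r
est = hits / trials
exact = 2.0 * (1.0 - Phi(r))  # reflection principle: about 0.317 for r = 1
assert abs(est - exact) < 0.06
```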
\pagestyle{myheadings} \markright{\sc Chapter 6}
\chapter{Results on area of the convex hull}
\label{chapter6}
\section{Overview}
The aims of the present chapter are to provide first- and second-order information for $A_n$, in both the cases $\mu = 0$ and $\mu \neq 0$.
We start with some simulations, using the same form of random walk as in Section \ref{sec:5.1}.
For the expected area, the simulations (see Figure \ref{fig:exparea}) are consistent with Theorems \ref{prop:EA-zero} and \ref{prop:EA-drift}.
In the case $\mu = {\bf 0}$, Theorem \ref{prop:EA-zero} gives
$\lim_{n \to \infty} n^{-1} \mathbb{E}\, A_n = \frac{\pi}{2} \sqrt{\det \Sigma} \approx 0.785$.
In the case $\mu \neq 0$, Theorem \ref{prop:EA-drift} takes the form
$\lim_{n \to \infty} n^{-3/2} \mathbb{E}\, A_n = \frac{1}{3} \| \mu \| \sqrt{2\pi \sigma^2_{\mu_\perp}}$, which evaluates to approximately $0.236$ and $0.425$ for the two drifted examples.
\begin{figure}[h!]
\centering
\includegraphics[width=0.32\textwidth]{exparea1}
\includegraphics[width=0.32\textwidth]{exparea2}
\includegraphics[width=0.32\textwidth]{exparea3}
\caption{Plots of $y = \mathbb{E}\,[A_n]$ estimates against $x =$ (left to right) $n$, $n^{3/2}$, $n^{3/2}$ for
about $25$ values of $n$ in the range $10^2$ to $2.5 \times 10^5$ for 3 examples
with $\|\mu\| =$ (left to right) $0$, $0.4$, $0.72$. Each point is
estimated from $10^3$ repeated simulations.
Also plotted are straight lines $y = 0.781 x$ (leftmost plot),
$y=0.236x$ (middle plot) and $y= 0.425x$ (rightmost plot).} \label{fig:exparea}
\end{figure}
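A smaller-scale version of the zero-drift simulation can be sketched as follows, with standard Gaussian increments (so $\Sigma = I$ and the predicted limit of $n^{-1} \mathbb{E}\, A_n$ is $\pi/2 \approx 1.571$); the walk length, trial count, and tolerance are illustrative choices.

```python
import math
import random

random.seed(2)

def hull_area(pts):
    # Convex hull by Andrew's monotone chain; area by the shoelace formula.
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    hull = chain(pts) + chain(pts[::-1])
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

n, trials = 2000, 200
total = 0.0
for _ in range(trials):
    x = y = 0.0
    pts = [(0.0, 0.0)]
    for _ in range(n):
        x += random.gauss(0.0, 1.0)
        y += random.gauss(0.0, 1.0)
        pts.append((x, y))
    total += hull_area(pts)
est = total / (trials * n)          # estimate of n^{-1} E A_n
assert abs(est - math.pi / 2) < 0.25
```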
For the variance of the area, Propositions \ref{prop:var-limit-zero} and \ref{prop:var-limit-drift v+} show that the
limits for the variance exist in both the zero and non-zero drift cases.
For example, we will show that
\begin{alignat}{1}
\label{eq:two_vars10}
\text{if } \mu \neq 0: ~~ & \lim_{n \to \infty} n^{-3} \mathbb{V} {\rm ar} A_n = v_+ \| \mu \|^2 \sigma^2_{\mu_\perp} ; \nonumber\\
\text{if } \mu = 0: ~~ & \lim_{n \to \infty} n^{-2} \mathbb{V} {\rm ar} A_n = v_0 \det \Sigma
,\end{alignat}
where $v_0$ and $v_+$ are finite and positive, and these quantities are in fact variances associated with convex hulls of Brownian scaling limits for the walk.
These scaling limits provide the basis of the analysis in this chapter; the methods are necessarily quite different
from those in \cite{wx}.
For the constants $v_0$ and $v_+$, Table~\ref{table2} gives
numerical evaluations of rigorous bounds that we prove in Proposition~\ref{prop:var_bounds v0 v+} below, plus estimates from simulations.
\begin{table}[!h]
\center
\def\arraystretch{1.4}
\begin{tabular}{c|ccc}
& lower bound & simulation estimate & upper bound \\
\hline
$v_0$ & $8.15 \times 10^{-7}$ & 0.30 & 5.22 \\
$v_+$ & $1.44 \times 10^{-6}$ & 0.019 & 2.08
\end{tabular}
\caption{Each of the simulation estimates is
based on $10^5$ instances of a walk of length $n = 10^5$. The final decimal digit in each of the numerical upper (lower)
bounds has been rounded up (down).}
\label{table2}
\end{table}
The variance limits estimated from the simulations
(see Figure \ref{fig:vararea}) do indeed lie within the variance bounds given by Proposition \ref{prop:var_bounds v0 v+}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.31\textwidth]{vararea1} \,
\includegraphics[width=0.31\textwidth]{vararea2} \,
\includegraphics[width=0.31\textwidth]{vararea3}
\caption{Plots of $y = \mathbb{V} {\rm ar}[A_n]$ estimates against $x =$ (left to right) $n^2$, $n^{3}$, $n^{3}$ for
the three examples described in Figure \ref{fig:exparea}.
Also plotted are straight lines $y = 0.0748 x$ (leftmost plot),
$y=0.00152x$ (middle plot) and $y= 0.00480x$ (rightmost plot).} \label{fig:vararea}
\end{figure}
\section{Upper bound for the expected value and variance for the area}
\begin{proposition}
\label{lem:A_moments}
Let $p \geq 1$. Suppose that $\mathbb{E}\, [ \| Z_1 \|^{2p} ] < \infty$.
\begin{itemize}
\item[(i)] We have $\mathbb{E}\, [ A_n^p ] = O ( n^{3p/2} )$. If in addition $\mathbb{E}\,(\|Z_1 \|^{4p}) < \infty$, then $\mathbb{V} {\rm ar} (A_n^p)=O(n^{3p})$.
\item[(ii)] Moreover, if $\mu =0$ we have $\mathbb{E}\, [ A_n^p ] = O (n^{p} )$. If in addition $\mathbb{E}\,(\|Z_1 \|^{4p}) < \infty$, then $\mathbb{V} {\rm ar} (A_n^p)=O(n^{2p})$.
\end{itemize}
\end{proposition}
\begin{proof}
For part (i), it suffices to suppose $\mu \neq 0$. Then, bounding the convex hull by a rectangle,
\begin{align*}
A_n
& \leq
\left(\max_{0\leq m \leq n} S_m \cdot \hat \mu - \min_{0\leq m \leq n} S_m \cdot \hat \mu \right) \left(\max_{0\leq m \leq n} S_m \cdot \hat \mu_\perp - \min_{0\leq m \leq n} S_m \cdot \hat \mu_\perp \right)\\
& \leq
4 \left(\max_{0\leq m \leq n} |S_m \cdot \hat \mu| \right) \left(\max_{0\leq m \leq n} |S_m \cdot \hat \mu_\perp| \right) .
\end{align*}
Hence, by the Cauchy--Schwarz inequality, we have
$$ \mathbb{E}\, [ A_n^p ] \leq 4^p \left( \mathbb{E}\, \left[ \max_{0\leq m \leq n} |S_m \cdot \hat \mu|^{2p} \right] \right)^{1/2}
\left(\mathbb{E}\, \left[ \max_{0\leq m \leq n} |S_m \cdot \hat \mu_\perp|^{2p} \right] \right)^{1/2} .$$
Now an application of Proposition \ref{lem:walk_moments}(i) and (iii) gives $\mathbb{E}\, [ A_n^p ] = O ( n^{3p/2} )$.
Suppose in addition that $\mathbb{E}\,(\|Z_1 \|^{4p}) < \infty$. By the same argument as above, we have
$$A_n^{2p} \leq 4^{2p} \left(\max_{0\leq m \leq n} |S_m \cdot \hat \mu|^{2p} \right) \left(\max_{0\leq m \leq n} |S_m \cdot \hat \mu_\perp|^{2p} \right) ,$$
and $ \mathbb{E}\, (A_n^{2p}) = O(n^{3p})$.
Hence, $\mathbb{V} {\rm ar} (A_n^p) = \mathbb{E}\, \left(A_n^{2p}\right) - \left(\mathbb{E}\, A_n^p \right)^{2} = O(n^{3p})$.
For part (ii), $\mu = 0$.
Since the convex hull of $S_0, \dots, S_n$ is contained in the disk of radius $\max_{0 \leq m \leq n} \|S_m\|$ centred at $0$, we have $A_n^p \leq \pi^p \max_{0 \leq m \leq n} \|S_m\|^{2p}$ a.s.
Proposition \ref{lem:walk_moments}(ii) then yields $\mathbb{E}\, [ A_n^p ] = O (n^{p})$.
Suppose in addition that $\mathbb{E}\,(\|Z_1 \|^{4p}) < \infty$. By the same argument as above, we have $\mathbb{E}\, [ A_n^{2p} ] = O (n^{2p})$. Therefore, $\mathbb{V} {\rm ar} (A_n^p)=O(n^{2p})$.
\end{proof}
\begin{remark}
We will show below, in Theorem \ref{prop:EA-drift}, that $n^{-3/2}\mathbb{E}\, A_n$ has a limit in the non-zero drift case and, in Theorem \ref{prop:EA-zero}, that $n^{-1} \mathbb{E}\, A_n$ has a limit in the zero drift case.
\end{remark}
\section{Asymptotics for the expected area}
Let $T({\bf u},{\bf v})$ (${\bf u},{\bf v} \in \mathbb{R}^2$) be the area of a triangle with sides ${\bf u}$, ${\bf v}$, and ${\bf u} + {\bf v}$. Then,
$$T({\bf u},{\bf v})= \frac{1}{2} \sqrt{\|{\bf u}\|^2 \|{\bf v}\|^2 - ({\bf u} \cdot {\bf v})^2} .$$
For $\alpha, \beta >0$, $T(\alpha {\bf u}, \beta {\bf v}) = \alpha \beta\, T({\bf u},{\bf v})$.
\begin{lemma} \label{CLT for ET}
Suppose $\mathbb{E}\, (\|Z_1\|^2)< \infty$, $\mathbb{E}\, Z_1 ={\bf 0}$ and $\mathbb{E}\, (Z_1^{T} Z_1) = \Sigma$. Then as $m \to \infty$ and $(k-m) \to \infty$,
$$\frac{\mathbb{E}\, T(S_m, S_k-S_m)}{\sqrt{m(k-m)}} \to \mathbb{E}\, T(Y_1,Y_2) ,$$
where $Y_1$, $Y_2$ are i.i.d.\ random variables with $Y_1, Y_2 \sim {\mathcal{N}}({\bf 0}, \Sigma)$.
\end{lemma}
\begin{proof}
By the central limit theorem in $\mathbb{R}^2$ (see \cite{durrett}), $n^{-1/2} S_n \overset{d.}{\to} {\mathcal{N}}({\bf 0}, \Sigma)$.
Since $S_m$ and $S_k - S_m$ are independent, as $m \to \infty$ and $k-m \to \infty$,
$$\left( \frac{S_m}{\sqrt{m}}, \frac{S_k-S_m}{\sqrt{k-m}} \right) \overset{d.}{\to} (Y_1,Y_2) .$$
Since $T$ is continuous, the continuous mapping theorem gives
$$\frac{T(S_m, S_k-S_m)}{\sqrt{m(k-m)}} = T\left( \frac{S_m}{\sqrt{m}}, \frac{S_k-S_m}{\sqrt{k-m}} \right) \overset{d.}{\to} T(Y_1,Y_2) .$$
Also, by Lemma \ref{ES 0 drift},
\begin{align*}
\mathbb{E}\, \left(\left[ \frac{T(S_m, S_k-S_m)}{\sqrt{m(k-m)}} \right]^2 \right)
& \leq \frac{\mathbb{E}\,(\|S_m\|^2 \|S_k-S_m\|^2)}{m(k-m)} \\
& \leq \frac{\mathbb{E}\, \|S_m\|^2}{m} \cdot\frac{\mathbb{E}\, \|S_k-S_m\|^2}{k-m} < \infty.
\end{align*}
Hence $m^{-1/2}(k-m)^{-1/2}T(S_m,S_k - S_m)$ is bounded in $L^2$, and so uniformly integrable, over $(m, k)$ with $m \geq 1$, $k \geq m+1$; the result follows.
\end{proof}
We state the following result without proof. It is a two-dimensional analogue of the S--W formula \eqref{SW formula}. See Barndorff-Nielsen and Baxter \cite{baxter} for the proof.
\begin{lemma}[Barndorff-Nielsen \& Baxter]
\begin{equation} \label{E A_n formula}
\mathbb{E}\,(A_n)= \sum_{k=2}^n \sum_{m=1}^{k-1} \frac{\mathbb{E}\,\big[ T(S_m,S_k-S_m)\big]}{m(k-m)} .
\end{equation}
\end{lemma}
\begin{lemma} \label{single sum limit}
$$ \lim_{k \to \infty} \sum_{m=1}^{k-1} \frac{1}{m^{1/2}(k-m)^{1/2}} = \pi .$$
\end{lemma}
\begin{proof}
Let $f(m,k)=m^{-1/2}(k-m)^{-1/2}$. For any $\delta \in (0,1)$, we have $f(m,k) \leq f(m-\delta,k)$ if $m \leq k/2$ and $f(m,k) \geq f(m-\delta,k)$ if $m \geq k/2$.
Consider the sum as two parts,
$$\sum_{m=1}^{k-1} f(m,k) = \left( \sum_{m=1}^{\lfloor k/2 \rfloor} + \sum_{m=\lfloor k/2 \rfloor +1}^{k-1} \right) f(m,k) .$$
Then,
\begin{align*}
\sum_{m=1}^{k-1} f(m,k)
& \geq \int_{1}^{\lfloor k/2 \rfloor} f(m,k)\,\textup{d} m + \int_{\lfloor k/2 \rfloor +1}^{k-1} f(m-1,k)\,\textup{d} m, \\
& \qquad\qquad \text{ by letting } u=\frac{m}{k} \text{ and } v=\frac{m-1}{k}, \\
& = \int_{1/k}^{\lfloor \frac{k}{2} \rfloor /k} \frac{1}{\sqrt{u(1-u)}}\,\textup{d} u + \int_{\lfloor \frac{k}{2} \rfloor /k}^{1-\frac{2}{k}} \frac{1}{\sqrt{v(1-v)}}\,\textup{d} v \\
& = \int_{1/k}^{1-2/k} \frac{1}{\sqrt{u(1-u)}}\,\textup{d} u .
\end{align*}
Also,
\begin{align*}
\sum_{m=1}^{k-1} f(m,k)
& \leq \int_{1}^{\lfloor k/2 \rfloor} f(m-1,k)\,\textup{d} m + \int_{\lfloor k/2 \rfloor +1}^{k-1} f(m,k)\,\textup{d} m \\
& = \int_{0}^{\lfloor \frac{k}{2} \rfloor /k - 1/k} \frac{1}{\sqrt{u(1-u)}}\,\textup{d} u + \int_{\lfloor \frac{k}{2} \rfloor /k +1/k}^{1-\frac{1}{k}} \frac{1}{\sqrt{v(1-v)}}\,\textup{d} v \\
& \leq \int_{0}^{1-1/k} \frac{1}{\sqrt{u(1-u)}}\,\textup{d} u .
\end{align*}
Therefore,
\begin{equation}
\label{eq:pi-sum}
\lim_{k \to \infty}\sum_{m=1}^{k-1} f(m,k) = \int_0^1 [u(1-u)]^{-1/2}\,\textup{d} u = B\left(\frac{1}{2},\frac{1}{2}\right) = \Gamma\left(\frac{1}{2}\right)^2 = \pi ,\end{equation}
where $B(\blob, \blob)$ is the Beta function and $\Gamma(\blob)$ is the Gamma function.
\end{proof}
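The convergence in the lemma is easy to confirm numerically; consistent with the integral bounds in the proof, the error is $O(k^{-1/2})$.

```python
import math

# Partial sums of sum_{m=1}^{k-1} 1/sqrt(m(k-m)), which the lemma shows tend to pi.
def s(k):
    return sum(1.0 / math.sqrt(m * (k - m)) for m in range(1, k))

# The integral bounds in the proof give |s(k) - pi| = O(k^{-1/2}).
for k, tol in ((10**3, 0.2), (10**5, 0.03)):
    assert abs(s(k) - math.pi) < tol
```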
\begin{lemma} \label{double sum limit}
$$\lim_{n \to \infty} \frac{1}{n} \sum_{k=2}^n \sum_{m=1}^{k-1} \frac{1}{m^{1/2}(k-m)^{1/2}} = \pi .$$
\end{lemma}
\begin{proof}
The result follows from Lemma \ref{convergence of Cesaro mean} and Lemma \ref{single sum limit}.
\end{proof}
\begin{proposition} \label{lim E A/n with 0 drift}
Suppose $\mathbb{E}\,(\|Z_1\|^2)<\infty$ and $\mu = 0$. Then, $$ \lim_{n \to \infty}\frac{\mathbb{E}\, A_n}{n} = \pi \mathbb{E}\, T(Y_1, Y_2) ,$$ where $Y_1$, $Y_2$ are i.i.d.\ random variables with
$Y_1, Y_2 \sim {\mathcal{N}}({\bf 0}, \Sigma)$ and $\Sigma = \mathbb{E}\, (Z_1^T Z_1)$.
\end{proposition}
\begin{proof}
In \eqref{E A_n formula}, write $ g(k,m):= m^{-1/2}(k-m)^{-1/2} \mathbb{E}\,\big[ T(S_m,S_k-S_m)\big]$. Then,
\begin{equation}
\label{eq:bnb}
\mathbb{E}\, A_n= \sum_{k=2}^n \sum_{m=1}^{k-1} \frac{g(k, m)}{m^{1/2}(k-m)^{1/2}} , \end{equation}
and, by Lemma \ref{CLT for ET},
\begin{equation}
\label{eq:triangle_mean}
\lim_{m \to \infty,\ k-m \to \infty} g(k, m)=\mathbb{E}\, T(Y_1,Y_2):= \lambda . \end{equation}
So, for every $\varepsilon >0$, there exists $m_0 \in \mathbb{Z}_+$ such that for any $m \geq m_0$ and $k-m \geq m_0$ we have $|g(k,m)-\lambda|\leq \varepsilon$.
For the upper bound on $ \mathbb{E}\, A_n$, separate the sum as
\begin{align*}
\mathbb{E}\, A_n
& = \left(\sum_{k=2}^{m_0} + \sum_{k=m_0 +1}^n \right) \sum_{m=1}^{k-1}\ \frac{g(k, m)}{m^{1/2}(k-m)^{1/2}} \\
& = \sum_{k=m_0 +1}^n \sum_{m=1}^{k-1}\frac{g(k, m)}{m^{1/2}(k-m)^{1/2}} + O(1) \\
&= \sum_{k=m_0 +1}^n \left(\sum_{m=1}^{m_0} + \sum_{m=k-m_0}^{k-1} +\sum_{m=m_0+1}^{k-m_0 -1} \right) \frac{g(k, m)}{m^{1/2}(k-m)^{1/2}} + O(1) ,
\end{align*}
where
\begin{align} \label{star1}
& \sum_{k=m_0 +1}^n \left(\sum_{m=1}^{m_0} + \sum_{m=k-m_0}^{k-1} \right) \frac{g(k, m)}{m^{1/2}(k-m)^{1/2}} \notag \\
& \leq m_0 \sum_{k=m_0 +1}^n \frac{\max_{1 \leq m \leq m_0} g(k, m)}{(k-m_0)^{1/2}} + m_0 \sum_{k=m_0 +1}^n \frac{\max_{k-m_0 \leq m \leq k} g(k, m)}{(k-m_0)^{1/2}} \notag \\
& \leq \lambda' \sum_{k=m_0 +1}^n \frac{2 m_0}{(k-m_0)^{1/2}}, \quad \hbox{where}\ \lambda' := \sup_{k, m} g(k,m) < \infty, \notag \\
& = O(n^{1/2}) ,
\end{align}
and
$$ \sum_{k=m_0 +1}^n \sum_{m=m_0+1}^{k-m_0 -1} \frac{g(k, m)}{m^{1/2}(k-m)^{1/2}}
\leq (\lambda+\varepsilon) \sum_{k=2}^n \sum_{m=1}^{k-1} \frac{1}{m^{1/2}(k-m)^{1/2}} .$$
By Lemma \ref{double sum limit},
$$\limsup_{n \to \infty} \frac{1}{n} \sum_{k=m_0 +1}^n \sum_{m=m_0+1}^{k-m_0 -1} \frac{g(k, m)}{m^{1/2}(k-m)^{1/2}} \leq (\lambda + \varepsilon)\pi .$$
Hence, $\limsup_{n \to \infty} n^{-1} \mathbb{E}\, A_n \leq (\lambda + \varepsilon) \pi$ by \eqref{star1}.
So $\limsup_{n \to \infty} n^{-1} \mathbb{E}\, A_n \leq \lambda \pi$, since $\varepsilon >0$ was arbitrary.
For the lower bound,
\begin{align*}
\mathbb{E}\, A_n
& \geq \sum_{k=2}^n \sum_{m=m_0}^{k-m_0} \frac{g(k,m)}{m^{1/2}(k-m)^{1/2}} \\
& \geq (\lambda-\varepsilon) \sum_{k=2}^n \sum_{m=m_0}^{k-m_0} \frac{1}{m^{1/2}(k-m)^{1/2}} \\
& \geq (\lambda-\varepsilon) \sum_{k=2}^n \left(\sum_{m=1}^{k-1} - \sum_{m=1}^{m_0 -1} -\sum_{m=k-m_0+1}^{k-1} \right) \frac{1}{m^{1/2}(k-m)^{1/2}} \\
& \geq (\lambda-\varepsilon) \sum_{k=2}^n \sum_{m=1}^{k-1} \frac{1}{m^{1/2}(k-m)^{1/2}} - (\lambda-\varepsilon)\sum_{k=2}^n \frac{2(m_0 -1)}{(k-m_0+1)^{1/2}} .
\end{align*}
By Lemma \ref{double sum limit}, and since the subtracted term is $O(n^{1/2})$, $\liminf_{n \to \infty} n^{-1} \mathbb{E}\, A_n \geq (\lambda - \varepsilon)\pi$.
Therefore $\liminf_{n \to \infty} n^{-1} \mathbb{E}\, A_n \geq \lambda \pi$, since $\varepsilon >0$ was arbitrary. Then the result follows.
\end{proof}
\begin{lemma}
\label{lemma:EA-zero}
Let $Y_1$, $Y_2$ be i.i.d.\ random variables with $Y_1, Y_2 \sim {\mathcal{N}}({\bf 0}, \Sigma)$, where $\Sigma = \mathbb{E}\, (Z_1^T Z_1)$.
Then,
\[ \mathbb{E}\, T(Y_1,Y_2) = \frac{1}{2} \sqrt{ \det \Sigma } . \]
\end{lemma}
\begin{proof}
With $\Sigma = (\Sigma^{1/2})^2$, we have that $(Y_1, Y_2)$ is equal in distribution to $(\Sigma^{1/2} W_1, \Sigma^{1/2} W_2)$
where $W_1$ and $W_2$ are independent ${\mathcal{N}} (0, I)$ random vectors. Since $\Sigma^{1/2}$ acts as a linear transformation on $\mathbb{R}^2$
with Jacobian $\sqrt{ \det \Sigma}$,
\[ \mathbb{E}\, T(Y_1,Y_2) = \mathbb{E}\, T (\Sigma^{1/2} W_1, \Sigma^{1/2} W_2) = \sqrt{ \det \Sigma } \mathbb{E}\, T (W_1, W_2 ) .\]
Here
$$\mathbb{E}\, T (W_1, W_2 ) = \frac{1}{2} \mathbb{E}\, [ \| W_1 \| \| W_2 \| \sin \Theta ],$$
where the minimum angle $\Theta$ between $W_1$ and $W_2$ is uniform on $[0, \pi]$, and $(\| W_1\|, \|W_2\|, \Theta)$
are independent. Hence
$$\mathbb{E}\, T (W_1, W_2 ) = \frac{1}{2} ( \mathbb{E}\, \| W_1 \| )^2 ( \mathbb{E}\, \sin \Theta ) = \frac12,$$
using the facts that $\mathbb{E}\, \sin \Theta = 2/\pi$ and that $\| W_1 \|$ is the square root of a $\chi_2^2$ random variable, so that $\mathbb{E}\, \| W_1 \| = \sqrt{ \pi/2}$; the result follows.
\end{proof}
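The identity $\mathbb{E}\, T(Y_1, Y_2) = \frac{1}{2}\sqrt{\det \Sigma}$ can be checked by Monte Carlo, writing $T(u,v) = \frac{1}{2}|u_1 v_2 - u_2 v_1|$ and generating $Y_i = A W_i$ with $\Sigma = A A^T$, so that $\sqrt{\det \Sigma} = |\det A|$. The matrix $A$ below is an arbitrary illustrative choice.

```python
import math
import random

random.seed(3)

def mc_ET(a, b, c, d, trials=100000):
    # Monte Carlo estimate of E T(Y1, Y2) for Y_i = A W_i, A = [[a, b], [c, d]],
    # with W_i standard bivariate normal; T(u, v) = |u1 v2 - u2 v1| / 2.
    total = 0.0
    for _ in range(trials):
        w1 = (random.gauss(0, 1), random.gauss(0, 1))
        w2 = (random.gauss(0, 1), random.gauss(0, 1))
        y1 = (a * w1[0] + b * w1[1], c * w1[0] + d * w1[1])
        y2 = (a * w2[0] + b * w2[1], c * w2[0] + d * w2[1])
        total += 0.5 * abs(y1[0] * y2[1] - y1[1] * y2[0])
    return total / trials

# Sigma = I gives E T = 1/2; general Sigma = A A^T gives E T = |det A| / 2.
assert abs(mc_ET(1, 0, 0, 1) - 0.5) < 0.01
assert abs(mc_ET(2, 0.5, 0.5, 1) - 0.5 * abs(2 * 1 - 0.5 * 0.5)) < 0.02
```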
\begin{theorem}
\label{prop:EA-zero}
Suppose that $\mathbb{E}\, \|Z_1\|^2 < \infty$ and $\mu=0$.
Then,
\[ \lim_{n \to \infty} n^{-1}\mathbb{E}\, A_n = \frac{\pi}{2} \sqrt{ \det \Sigma } . \]
\end{theorem}
\begin{proof}
The result follows from Proposition \ref{lim E A/n with 0 drift} combined with Lemma \ref{lemma:EA-zero}.
\end{proof}
\begin{theorem}
\label{prop:EA-drift}
Suppose that \eqref{ass:moments} holds for some $p > 2$, $\mu \neq 0$,
and $\sigma^2_{\mu_\perp} >0$.
Then
\[ \lim_{n \to \infty} n^{-3/2} \mathbb{E}\, A_n = \| \mu \| ( \sigma^2_{\mu_\perp})^{1/2} \mathbb{E}\, \tilde a_1 = \frac{1}{3} \| \mu \| \sqrt{2\pi \sigma^2_{\mu_\perp}} . \]
In particular, $\mathbb{E}\, \tilde a_1 = \frac{1}{3} \sqrt{2 \pi}$.
\end{theorem}
\begin{proof}
Recall that $\tilde a_1 = {\mathcal{A}}(\tilde h_1)$ is the convex hull area of the space-time diagram of one-dimensional Brownian motion run for unit time.
Given $\mathbb{E}\, [ \|Z_1\|^p ] < \infty$ for some $p >2$, Proposition \ref{lem:A_moments}(i) shows that
$\mathbb{E}\, [ A_n^{p/2} ] = O ( n^{3p/4} )$, so that
$\mathbb{E}\, [ ( n^{-3/2} A_n )^{p/2} ]$
is uniformly bounded. Hence $ n^{-3/2} A_n$ is uniformly integrable,
so Corollary \ref{cor:A-limit-drift} implies that
\begin{equation}
\label{eq:EA-scaling-drift}
\lim_{n \to \infty} n^{-3/2} \mathbb{E}\, A_n = \| \mu \| ( \sigma^2_{\mu_\perp})^{1/2} \mathbb{E}\, \tilde a_1.
\end{equation}
In light of \eqref{eq:EA-scaling-drift}, it
remains to identify $\mathbb{E}\, \tilde a_1= \frac{1}{3} \sqrt{2 \pi}$. It does not seem straightforward to work directly with the Brownian limit;
it turns out again to be
simpler to work with a suitable random walk.
We choose a walk that is particularly convenient for
computations.
Let $\xi \sim {\mathcal{N}} (0,1)$ be a standard normal random variable,
and take $Z$ to be distributed as $Z = ( 1, \xi )$ in Cartesian coordinates. Then $S_n = ( n , \sum_{k=1}^n \xi_k )$ is
the space-time diagram of the symmetric random walk on $\mathbb{R}$ generated by i.i.d.\ copies $\xi_1, \xi_2, \ldots$ of $\xi$.
For $Z = (1, \xi)$, $\mu = (1,0)$ and $\sigma^2 = \sigma^2_{\mu_\perp} = \mathbb{E}\, [ \xi^2 ] = 1$. Thus
by \eqref{eq:EA-scaling-drift}, to complete the proof of Theorem \ref{prop:EA-drift}
it suffices to show that for this walk
$\lim_{n\to \infty} n^{-3/2} \mathbb{E}\, A_n = \frac{1}{3} \sqrt{2 \pi}$.
If $u, v \in \mathbb{R}^2$ have Cartesian components $u = (u_1, u_2)$ and $v = (v_1, v_2)$, then we
may write $T ( u, v) = \frac{1}{2} | u_1 v_2 - v_1 u_2 |$. Hence
\begin{align*}
T ( S_m, S_k - S_m ) & = \frac{1}{2} \left| ( k-m) \sum_{j=1}^m \xi_j - m \sum_{j=m+1}^k \xi_j \right| .\end{align*}
By properties of the normal distribution, the right-hand side of the last display has the same distribution as $ \frac{1}{2} | \xi \sqrt{ k m (k-m)} |$.
Hence
\[ \frac{\mathbb{E}\, T ( S_m, S_k - S_m )}{\sqrt{ m (k-m) } } = \frac{1}{2} \mathbb{E}\, | \xi \sqrt{k} | = \frac{1}{2} \sqrt{ 2 k / \pi } ,\]
using the fact that $| \xi |$ is distributed as the square-root of a $\chi_1^2$ random variable, so $\mathbb{E}\, | \xi | = \sqrt{ 2 / \pi }$.
Hence, by \eqref{eq:bnb}, this random walk enjoys the exact formula
\begin{align*} \mathbb{E}\, A_n & = \frac{1}{\sqrt{2\pi}} \sum_{k=2}^n \sum_{m=1}^{k-1} \frac{\sqrt{k}}{\sqrt{ m (k-m) }} . \end{align*}
Then from \eqref{eq:pi-sum} we obtain
$\mathbb{E}\, A_n \sim \sqrt{\pi /2} \sum_{k=2}^n k^{1/2} \sim \frac{2}{3} \sqrt{\pi/2}\, n^{3/2} = \frac{1}{3} \sqrt{2 \pi}\, n^{3/2}$, which gives the result.
\end{proof}
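For the walk $Z = (1, \xi)$ in the proof, the double-sum formula for $\mathbb{E}\, A_n$ is exact rather than merely asymptotic, so it can be compared directly with simulated hull areas at a modest $n$. In the sketch below the convex hull is computed in pure Python, and $n$ and the trial count are illustrative choices.

```python
import math
import random

random.seed(4)

def hull_area(pts):
    # Convex hull by Andrew's monotone chain; area by the shoelace formula.
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    hull = chain(pts) + chain(pts[::-1])
    return 0.5 * abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                         - hull[(i + 1) % len(hull)][0] * hull[i][1]
                         for i in range(len(hull))))

n, trials = 200, 1000
# Exact identity: E A_n = (2 pi)^{-1/2} sum_{k=2}^n sum_{m=1}^{k-1} sqrt(k/(m(k-m))).
exact = sum(math.sqrt(k / (m * (k - m)))
            for k in range(2, n + 1) for m in range(1, k)) / math.sqrt(2 * math.pi)
total = 0.0
for _ in range(trials):
    s = 0.0
    pts = [(0.0, 0.0)]
    for i in range(1, n + 1):
        s += random.gauss(0.0, 1.0)
        pts.append((float(i), s))  # space-time diagram (i, xi_1 + ... + xi_i)
    total += hull_area(pts)
assert abs(total / trials - exact) / exact < 0.05
```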
\begin{remark}
The idea used in the proof of Theorem~\ref{prop:EA-drift},
first establishing the existence of a limit for a class of models
and then choosing a particular model for which the limit can be conveniently
evaluated, goes back at least to Kac; see~\cite[p.\,293]{kac}.
\end{remark}
\section{Law of large numbers for the area}
\begin{proposition} \label{LLN for A_n with 0 drift}
Suppose $\mathbb{E}\, (\|Z_1\|^4) < \infty$ and $\mu = {\bf 0}$. Then for any $\alpha >1$, $n^{-\alpha} A_n \to 0$ a.s. as $n \to \infty$.
\end{proposition}
\begin{proof}
By Chebyshev's inequality for $A_n$,
$$ \mathbb{P} \left(\frac{|A_n - \mathbb{E}\, A_n|}{n^{\alpha}} \geq \varepsilon \right) = \mathbb{P}(|A_n - \mathbb{E}\, A_n| \geq \varepsilon n^{\alpha}) \leq \frac{\mathbb{V} {\rm ar}(A_n)}{\varepsilon^2 n^{2 \alpha}} .$$
Since $\mathbb{V} {\rm ar}(A_n) = O(n^2)$ by Proposition \ref{lem:A_moments}(ii), for any $\alpha >1$, as $n \to \infty$ we have
$$\mathbb{P} \left(\frac{|A_n - \mathbb{E}\, A_n|}{n^{\alpha}} \geq \varepsilon \right) = O(n^{2-2\alpha}) .$$
So $n^{-\alpha} (A_n - \mathbb{E}\, A_n) \to 0$ in probability.
Taking $n=n_k=2^k$ for $k \in \mathbb{N}$, we have
$$\mathbb{P} \left(\frac{|A_{n_k} - \mathbb{E}\, A_{n_k}|}{n_k^{\alpha}} \geq \varepsilon \right) = O(n_k^{2-2\alpha}) = O(4^{k(1-\alpha)}) .$$
So for any $\varepsilon > 0$,
$$\sum_{k=1}^{\infty} \mathbb{P} \left(\frac{|A_{n_k} - \mathbb{E}\, A_{n_k}|}{n_k^{\alpha}}\geq \varepsilon \right) < \infty .$$
By the Borel--Cantelli lemma (Lemma \ref{borel-cantelli}), as $k \to \infty$,
$$\frac{A_{n_k} - \mathbb{E}\, A_{n_k}}{n_k^{\alpha}} \to 0 ~{\ \mathrm{a.s.}}$$
By Proposition \ref{lem:A_moments}(ii), $n_k^{-\alpha} \mathbb{E}\, A_{n_k} \to 0$ as $k \to \infty$, so we get
$$\frac{A_{n_k}}{n_k^{\alpha}} \to 0 ~ {\ \mathrm{a.s.}} {\ \mathrm{as}\ } k \to \infty.$$
For any $n \in \mathbb{N}$, there exists $k(n) \in \mathbb{N}$ such that $2^{k(n)} \leq n < 2^{k(n)+1}$. By monotonicity of $A_n$,
$$2^{-\alpha} \frac{A_{n_{k(n)}}}{n_{k(n)}^{\alpha}} = \frac{A_{2^{k(n)}}}{(2^{k(n)+1})^{\alpha}} \leq \frac{A_n}{n^{\alpha}} \leq \frac{A_{2^{k(n)+1}}}{(2^{k(n)})^{\alpha}} = 2^{\alpha} \frac{A_{n_{k(n)+1}}}{n_{k(n)+1}^{\alpha}} .$$
The result follows by the Squeezing Theorem.
\end{proof}
\begin{proposition}
Suppose $\mathbb{E}\, (\|Z_1\|^4) < \infty$. Then, for any $\alpha > 3/2$, $n^{-\alpha} A_n \to 0$ a.s. as $n \to \infty$.
\end{proposition}
\begin{proof}
By Chebyshev's inequality for $A_n$,
$$ \mathbb{P} \left(\frac{|A_n - \mathbb{E}\, A_n|}{n^{\alpha}} \geq \varepsilon \right) = \mathbb{P}(|A_n - \mathbb{E}\, A_n| \geq \varepsilon n^{\alpha}) \leq \frac{\mathbb{V} {\rm ar}(A_n)}{\varepsilon^2 n^{2 \alpha}} .$$
Since $\mathbb{V} {\rm ar}(A_n) = O(n^3)$ by Proposition \ref{lem:A_moments}(i), for any $\alpha >3/2$, as $n \to \infty$ we have
$$\mathbb{P} \left(\frac{|A_n - \mathbb{E}\, A_n|}{n^{\alpha}} \geq \varepsilon \right) = O(n^{3-2\alpha}) .$$
So $n^{-\alpha} (A_n - \mathbb{E}\, A_n) \to 0$ in probability.
Taking $n=n_k=2^k$ for $k \in \mathbb{N}$, we have
$$\mathbb{P} \left(\frac{|A_{n_k} - \mathbb{E}\, A_{n_k}|}{n_k^{\alpha}} \geq \varepsilon \right) = O(n_k^{3-2\alpha}) = O(4^{k(3/2-\alpha)}) .$$
So for any $\varepsilon > 0$,
$$\sum_{k=1}^{\infty} \mathbb{P} \left(\frac{|A_{n_k} - \mathbb{E}\, A_{n_k}|}{n_k^{\alpha}}\geq \varepsilon \right) < \infty .$$
By the Borel--Cantelli lemma (Lemma \ref{borel-cantelli}), as $k \to \infty$,
$$\frac{A_{n_k} - \mathbb{E}\, A_{n_k}}{n_k^{\alpha}} \to 0 ~{\ \mathrm{a.s.}}$$
By Proposition \ref{lem:A_moments}(i), $n_k^{-\alpha} \mathbb{E}\, A_{n_k} \to 0$ as $k \to \infty$, so we get
$$\frac{A_{n_k}}{n_k^{\alpha}} \to 0 ~ {\ \mathrm{a.s.}} {\ \mathrm{as}\ } k \to \infty.$$
For any $n \in \mathbb{N}$, there exists $k(n) \in \mathbb{N}$ such that $2^{k(n)} \leq n < 2^{k(n)+1}$. By monotonicity of $A_n$,
$$2^{-\alpha} \frac{A_{n_{k(n)}}}{n_{k(n)}^{\alpha}} = \frac{A_{2^{k(n)}}}{(2^{k(n)+1})^{\alpha}} \leq \frac{A_n}{n^{\alpha}} \leq \frac{A_{2^{k(n)+1}}}{(2^{k(n)})^{\alpha}} = 2^{\alpha} \frac{A_{n_{k(n)+1}}}{n_{k(n)+1}^{\alpha}} .$$
The result follows by the Squeezing Theorem.
\end{proof}
\section{Asymptotics for the variance}
Recall that Proposition \ref{prop:var-limit-zero u0} shows $\lim_{n \to \infty} n^{-1} \mathbb{V} {\rm ar} L_n = u_0 ( \Sigma )$.
In this section, we will show that
\begin{alignat}{1}
\label{eq:three_vars}
\text{if } \mu \neq 0: ~~ & \lim_{n \to \infty} n^{-3} \mathbb{V} {\rm ar} A_n = v_+ \| \mu \|^2 \sigma^2_{\mu_\perp} ; \nonumber\\
\text{if } \mu = 0: ~~ & \lim_{n \to \infty} n^{-2} \mathbb{V} {\rm ar} A_n = v_0 \det \Sigma
.\end{alignat}
The quantities $ v_0$ and $v_+$ in \eqref{eq:three_vars} are finite and positive,
as is $u_0( \blob )$ provided $\sigma^2 \in (0,\infty)$,
and these quantities are in fact variances associated with convex hulls of Brownian scaling limits for the walk.
\begin{proposition}
\label{prop:var-limit-zero}
Suppose that \eqref{ass:moments} holds for some $p >4$,
and $\mu =0$. Then
\[ \lim_{n \to \infty} n^{-2} \mathbb{V} {\rm ar} A_n = v_0 \det \Sigma.\]
\end{proposition}
\begin{proof}
Proposition \ref{lem:A_moments}(ii) shows that
$\mathbb{E}\, [ A_n^{2(p/4)} ] = O ( n^{p/2} )$, so that
$\mathbb{E}\, [ ( n^{-2} A^2_n )^{p/4} ]$
is uniformly bounded. Hence $ n^{-2} A_n^2$ is uniformly integrable,
and convergence of $n^{-2} \mathbb{V} {\rm ar} A_n$ follows from Corollary \ref{cor:zero-limits}.
\end{proof}
For the case with drift, we have the following variance result.
\begin{proposition}
\label{prop:var-limit-drift v+}
Suppose that \eqref{ass:moments} holds for some $p > 4$ and $\mu \neq 0$.
Then
\[ \lim_{n \to \infty} n^{-3} \mathbb{V} {\rm ar} A_n = v_+ \| \mu \|^2 \sigma^2_{\mu_\perp}.\]
\end{proposition}
\begin{proof}
Given $\mathbb{E}\, [ \|Z_1\|^p ] < \infty$ for some $p >4$, Proposition \ref{lem:A_moments}(i) shows that
$\mathbb{E}\, [ A_n^{2(p/4)} ] = O ( n^{3p/4} )$, so that
$\mathbb{E}\, [ ( n^{-3} A^2_n )^{p/4} ]$
is uniformly bounded. Hence $ n^{-3} A_n^2$ is uniformly integrable,
so Corollary \ref{cor:A-limit-drift} yields the result.
\end{proof}
\section{Variance bounds}
\begin{proposition}
\label{prop:var_bounds v0 v+}
We have $u_0 (\Sigma) =0$ if and only if $\trace \Sigma =0$.
The following inequalities for the quantities defined at \eqref{eq:var_constants} hold.
\begin{alignat}{1}
0 < \frac{4}{49} \left( {\mathrm{e}}^{- 7\pi^2 / 12}
-
\frac{1}{3} {\mathrm{e}}^{-21 \pi^2 / 4}
\right)^2 &{} \leq v_0 \leq 16 (\log 2)^2 - \frac{\pi^2}{4}; \label{eq:v0-bounds} \\
0 < \frac{2}{225} \left(
{\mathrm{e}}^{-25 \pi/9}
-\frac{1}{3}
{\mathrm{e}}^{-25 \pi}
\right) &{} \leq v_+ \leq 4 \log 2 - \frac{2 \pi}{9}. \label{eq:v1-bounds}
\end{alignat}
\end{proposition}
\begin{proof}
Bounding $\tilde a_1$ by the area of a rectangle, we have
{\bf e}gin{equation}
\label{eq:a1-upper}
\tilde a_1 \leq r_1 \leq 2 \sup_{0 \leq s \leq 1} | w (s) |, {\ \mathrm{a.s.}} , \end{equation}
where $r_1 := \sup_{0 \leq s \leq 1} w (s) - \inf_{0 \leq s \leq 1} w (s)$.
A result of Feller \cite{feller} states that $\mathbb{E}\, [ r_1^2 ] = 4 \log 2$.
So by the first inequality in \eqref{eq:a1-upper}, we have $\mathbb{E}\, [\tilde a_1^2] \leq 4 \log 2$,
and by Theorem \ref{prop:EA-drift}
we have $\mathbb{E}\, \tilde a_1 = \frac{1}{3} \sqrt{ 2 \pi}$; the upper bound in \eqref{eq:v1-bounds} follows.
Similarly, for any orthonormal basis $\{ e_1, e_2\}$ of $\mathbb{R}^2$, we bound $a_1$ by a rectangle
\[ a_1 \leq \left( \sup_{0 \leq s \leq 1} e_1 \cdot b(s) - \inf_{0 \leq s \leq 1 } e_1 \cdot b(s) \right)
\left( \sup_{0 \leq s \leq 1} e_2 \cdot b(s) - \inf_{0 \leq s \leq 1 } e_2 \cdot b(s) \right) ,\]
and the two (orthogonal) components are independent, so $\mathbb{E}\, [ a_1^2 ] \leq ( \mathbb{E}\, [ r_1^2 ] )^2 = 16 (\log 2)^2$,
which with the fact that $\mathbb{E}\, a_1 = \frac{\pi}{2}$ gives the upper bound in \eqref{eq:v0-bounds}.
We now move on to the lower bounds. Tractable upper bounds for $a_1$ and $\tilde a_1$ are easier to come by than
lower bounds, and thus we obtain a lower bound on the variance by showing
the appropriate area has positive probability of being smaller than the corresponding mean.
Consider $a_1$; note $\mathbb{E}\, a_1 = \pi/2$ \cite{elbachir}. Since, for any orthonormal basis $\{e_1, e_2\}$ of $\mathbb{R}^2$,
\[ a_1 \leq \pi \sup_{0 \leq s \leq 1} \| b (s) \|^2 \leq \pi \sup_{0 \leq s \leq 1 } | e_1 \cdot b(s) |^2 + \pi \sup_{0 \leq s \leq 1 } | e_2\cdot b(s) |^2 ,\]
using the fact that $e_1 \cdot b$ and $e_2 \cdot b$ are independent one-dimensional Brownian motions,
\[ \mathbb{P} [ a_1 \leq r ] \geq \mathbb{P} \left[ \sup_{0 \leq s \leq 1 } | w (s) |^2 \leq \frac{r}{2\pi} \right]^2 , ~ \text{for} ~ r >0 .\]
We apply \eqref{eq:var-bound} with $X = a_1$ and $\alpha \in (0,1)$, and set $r = (1-\alpha) \frac{\pi}{2}$ to obtain
\begin{align*}
\mathbb{V} {\rm ar}\, a_1 & \geq \alpha^2 \frac{\pi^2}{4} \mathbb{P} \left[ \sup_{0 \leq s \leq 1 } | w (s) | \leq \frac{\sqrt{1-\alpha}}{2} \right]^2 \\
& \geq 4 \alpha^2 \left( \exp \left\{-\frac{\pi^2}{2(1-\alpha) } \right\}
- \frac{1}{3} \exp \left\{-\frac{9 \pi^2}{2(1- \alpha) } \right\} \right)^2 ,\end{align*}
by \eqref{eq:brown_norm}.
Taking $\alpha = 1/7$, which is close to optimal, gives the lower bound in \eqref{eq:v0-bounds}.
For $\tilde a_1$, we apply \eqref{eq:var-bound} with $X = \tilde a_1$ and $\alpha \in (0,1)$.
Using the fact
that $\mathbb{E}\, \tilde a_1 = \frac{1}{3} \sqrt{2 \pi}$ (from Theorem \ref{prop:EA-drift}) and the weaker of the two bounds in \eqref{eq:a1-upper},
we obtain
\begin{align*}
\mathbb{V} {\rm ar}\, \tilde a_1 & \geq \alpha^2 \frac{2 \pi}{9} \mathbb{P} \left[ \sup_{0 \leq s \leq 1} | w (s) | \leq \frac{ (1-\alpha) \sqrt{2 \pi} }{6} \right] \\
& \geq \frac{8}{9} \alpha^2 \left(
\exp \left\{-\frac{9\pi}{4 (1-\alpha)^2 } \right\}
- \frac{1}{3} \exp \left\{-\frac{81\pi}{4 (1-\alpha)^2 } \right\} \right) ,\end{align*}
by \eqref{eq:brown_norm}.
Taking $\alpha = 1/10$, which is close to optimal, gives the lower bound in \eqref{eq:v1-bounds}.
\end{proof}
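The closed-form bounds in \eqref{eq:v0-bounds} and \eqref{eq:v1-bounds} evaluate numerically to the values reported for $v_0$ and $v_+$ in Table~\ref{table2}; a quick check:

```python
import math

# Lower and upper bounds on v_0 and v_+ from the proposition.
v0_lo = (4 / 49) * (math.exp(-7 * math.pi**2 / 12)
                    - math.exp(-21 * math.pi**2 / 4) / 3) ** 2
v0_hi = 16 * math.log(2) ** 2 - math.pi**2 / 4
vp_lo = (2 / 225) * (math.exp(-25 * math.pi / 9) - math.exp(-25 * math.pi) / 3)
vp_hi = 4 * math.log(2) - 2 * math.pi / 9

assert abs(v0_lo - 8.15e-7) < 1e-9   # tabulated as 8.15e-7 (rounded down)
assert abs(v0_hi - 5.22) < 0.01      # tabulated as 5.22 (rounded up)
assert abs(vp_lo - 1.44e-6) < 1e-8   # tabulated as 1.44e-6 (rounded down)
assert abs(vp_hi - 2.08) < 0.01      # tabulated as 2.08 (rounded up)
```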
\begin{remark}
The main interest of the lower bounds in Proposition \ref{prop:var_bounds v0 v+} is that they are \emph{positive};
they are certainly not sharp. The bounds can surely be improved. We note just the following idea.
A lower bound for $\tilde a_1$ can be obtained by conditioning on $\theta := \sup \{ s \in [0,1] : w(s) =0\}$
and using the fact that the maximum of $w$ up to time $\theta$ is distributed as the maximum of a scaled Brownian bridge;
combining this with the previous argument improves the lower bound on $v_+$ to $2.09 \times 10^{-6}$.
\end{remark}
\pagestyle{myheadings} \markright{\sc Chapter 7}
\chapter{Conclusions and open problems}
\label{chapter7}
\section{Summary of the limit theorems}
We summarize the asymptotic behaviour of the expectation and variance of $L_n$ and $A_n$ in the following table.
\begin{table}[!h]
\center
\def\arraystretch{1.4}
\begin{tabular}{cc|ccc}
& & limit exists for $\mathbb{E}\,$ & limit exists for $\mathbb{V} {\rm ar}$ & limit law \\
\hline
\multirow{2}{*}{ $\mu = 0$ }
& $L_n$ & $n^{-1/2} \mathbb{E}\, L_n$$^\mathsection$ & $n^{-1} \mathbb{V} {\rm ar} L_n$ & non-Gaussian \\
& $A_n$ & $n^{-1} \mathbb{E}\, A_n$$^{\mathparagraph}$ & $n^{-2} \mathbb{V} {\rm ar} A_n$ & non-Gaussian \\
\hline
\multirow{2}{*}{ $\mu \neq 0$ }
& $L_n$ & $n^{-1} \mathbb{E}\, L_n$$^\mathsection$$^\dagger$ & $n^{-1} \mathbb{V} {\rm ar} L_n$$^\ddagger$ & Gaussian$^\ddagger$ \\
& $A_n$ & $n^{-3/2} \mathbb{E}\, A_n$ & $n^{-3} \mathbb{V} {\rm ar} A_n $ & non-Gaussian
\end{tabular}
\caption{Results originate from: $\mathsection$\!\cite{sw}; $\dagger$\!\cite{ss}; $\ddagger$\!\cite{wx}; $\mathparagraph$\!\cite{bnb} (in part);
the rest are new.
The limit laws exclude degenerate cases when associated variances vanish.}
\label{table1}
\end{table}
Table \ref{table x3} collects the lower and upper bounds and simulation estimates for the constants defined in equations \eqref{eq:var_constants} and \eqref{eq:three_vars}.
\begin{table}[!h]
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{c|ccc}
& lower bound & simulation estimate & upper bound \\
\hline
$u_0 ( I)$ & $2.65 \times 10^{-3}$ & 1.08 & 9.87 \\
$v_0$ & $8.15 \times 10^{-7}$ & 0.30 & 5.22 \\
$v_+$ & $1.44 \times 10^{-6}$ & 0.019 & 2.08
\end{tabular}
\caption{Each of the simulation estimates is
based on $10^5$ instances of a walk of length $n = 10^5$. The final decimal digit in each of the numerical upper (lower)
bounds has been rounded up (down).}
\label{table x3}
\end{table}
Claussen et al.\ \cite{claussen} give numerical estimates $\mathbb{V} {\rm ar}\, l_1 \approx 1.075$ and $\mathbb{V} {\rm ar}\, a_1 \approx 0.31$, in good agreement with our limiting estimates of $1.08$ and $0.30$.
\section{Exact evaluation of limiting variances}
\label{sec:exact evaluation of limiting variances}
It would, of course, be of interest to evaluate any of $u_0$, $v_0$, or $v_+$ exactly.
In general this looks hard. The paper \cite{rs} provides a key component of a possible approach to evaluating $u_0$.
By Cauchy's formula and Fubini's theorem,
\[ \mathbb{E}\, [ \ell_1^2 ] = \int_{\mathbb{S}_1} \int_{\mathbb{S}_1}
\mathbb{E}\, \left[ \left( \sup_{0 \leq s \leq 1} ( e_1 \cdot b (s) ) \right)
\left( \sup_{0 \leq t \leq 1} ( e_2 \cdot b (t) ) \right) \right]
\textup{d} e_1 \textup{d} e_2 .\]
Here, the two standard one-dimensional Brownian motions $e_1 \cdot b$ and $e_2 \cdot b$
have correlation determined by the cosine of the angle $\phi$ between them, i.e.,
\[ \mathbb{E}\, \left[ ( e_1 \cdot b (s) ) ( e_2 \cdot b (t) ) \right]
= ( s \wedge t ) \, e_1 \cdot e_2
= ( s \wedge t ) \cos \phi .\]
The result of Rogers and Shepp \cite{rs} then shows that
\[ \mathbb{E}\, \left[ \left( \sup_{0 \leq s \leq 1} ( e_1 \cdot b (s) ) \right)
\left( \sup_{0 \leq t \leq 1} ( e_2 \cdot b (t) ) \right) \right]
= c ( \cos \phi ) ,\]
where the function $c$ is given explicitly in \cite{rs}.
Using this result, we obtain
\[ \mathbb{E}\, [ \ell_1^2 ] = 4 \pi \int_{-\pi/2}^{\pi/2} c ( \sin \theta ) \textup{d} \theta
= 4 \pi \int_{-\pi/2}^{\pi/2} \textup{d} \theta \int_0^\infty \textup{d} u \cos \theta
\frac{ \cosh (u \theta )}
{ \sinh ( u \pi /2 ) } \tanh \left( \frac{ (2 \theta + \pi) u }{4} \right) .
\]
We have not been able to deal with this integral analytically, but numerical
integration gives $\mathbb{E}\, [ \ell_1^2 ] \approx 26.1677$, which with the fact that $\mathbb{E}\, \ell_1 = \sqrt{ 8 \pi}$
gives
$u_0(I) = \mathbb{V} {\rm ar} \ell_1 \approx 1.0350$, in reasonable agreement with the
simulation estimate in Table~\ref{table x3}.
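Two consistency checks on these numerics (my own sketch, not part of the text): at $\theta = 0$ the two one-dimensional Brownian motions are independent, so the inner integral should equal $(\mathbb{E} \sup_{0 \leq s \leq 1} w(s))^2 = 2/\pi$; and the quoted value of $\mathbb{E}[\ell_1^2]$ together with $\mathbb{E} \ell_1 = \sqrt{8\pi}$ reproduces the variance estimate.

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson's rule on a uniform grid (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# At theta = 0 the inner integral is int_0^infty tanh(u pi/4) / sinh(u pi/2) du,
# which should equal (E sup w)^2 = 2/pi for independent Brownian motions.
def integrand(u):
    return math.tanh(u * math.pi / 4) / math.sinh(u * math.pi / 2)

c0 = simpson(integrand, 1e-9, 40.0)

# Variance from the quoted E[l_1^2] and E l_1 = sqrt(8 pi):
u0 = 26.1677 - 8 * math.pi
```

Both checks agree: $c(0) \approx 2/\pi$ and $u_0(I) \approx 1.0350$.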
Another possible approach to evaluating $u_0$ is suggested by a remarkable computation of Goldman \cite{goldman} for the analogue of $u_0(I)= \mathbb{V} {\rm ar} \ell_1$ for the planar \emph{Brownian bridge}. Specifically, if
$b'_t$ is the standard Brownian bridge in $\mathbb{R}^2$ with $b'_0
= b'_1 = 0$, and $\ell'_1 = {\mathcal{L}} ( \mathop{\mathrm{hull}} b' [0,1] )$ is the perimeter length of its convex hull, then
\cite[Th\'eor\`eme 7]{goldman} states that
\[ \mathbb{V} {\rm ar} \ell'_1 {} = {}
\frac{\pi^2}{6} \left( 2 \pi \int_0^\pi \frac{ \sin \theta}{\theta} \textup{d} \theta - 2 - 3 \pi \right) \approx 0.34755 . \]
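Goldman's constant is easy to verify numerically; a quick stdlib-only sketch (my own check):

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule on a uniform grid (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# The sine integral Si(pi) = int_0^pi sin(t)/t dt, with sinc(0) := 1
sinc = lambda t: math.sin(t) / t if t else 1.0
si_pi = simpson(sinc, 0.0, math.pi)

# Goldman's formula for the variance of the bridge hull perimeter
var_bridge = (math.pi ** 2 / 6) * (2 * math.pi * si_pi - 2 - 3 * math.pi)
```

The computed value agrees with the stated $0.34755$ to the displayed precision.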
\section{Open problems}
\subsection{Degenerate case for $L_n$ when $\mu \neq 0$ and $\sigma_\mu^2 =0$}
\label{sec:degenerate case}
Recall Remark \ref{rmk:degenerate}(iii) for Theorem \ref{thm1}. For example, consider
$$ Z_1 = \begin{cases}
(1,1), & \text{with probability } 1/2; \\
(1,-1), & \text{with probability } 1/2.
\end{cases} $$
Then the $\sigma^2_{\mu}$ in Theorem \ref{thm1} is zero and our results on the second-order properties
of $L_n$ in Chapter \ref{chapter5} cannot be applied in this degenerate case.
See Figure \ref{fig:degenerate} for an example of a random walk in this case.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{degenerate}
\caption{Example of the degenerate case with $n=100$.} \label{fig:degenerate}
\end{figure}
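In this example the increments have a deterministic projection onto the drift direction $\mu = (1,0)$, which is exactly why $\sigma^2_\mu$ vanishes; a quick illustrative check:

```python
import random

random.seed(0)
# Increments Z = (1, +1) or (1, -1), each with probability 1/2; drift mu = E Z = (1, 0).
steps = [(1, random.choice([1, -1])) for _ in range(10000)]

# Projection of each increment onto the unit drift direction (1, 0)
proj = [z[0] for z in steps]
mean_proj = sum(proj) / len(proj)
var_proj = sum((p - mean_proj) ** 2 for p in proj) / len(proj)
```

Every projection equals $1$, so the sample variance is exactly zero, matching $\sigma^2_\mu = 0$.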
For this example, we conjecture that $\frac{\mathbb{V} {\rm ar} L_n}{\log n} \to \text{constant}$, based on simulations. See Figure \ref{fig:deg sim1} below.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{deg_sim1}
\caption{Simulation for the degenerate case, suggesting $\mathbb{V} {\rm ar}\, L_n \approx 0.6612 \log n$.} \label{fig:deg sim1}
\end{figure}
A second open question is whether in this case $\frac{L_n - \mathbb{E}\, L_n}{\sqrt{\mathbb{V} {\rm ar} L_n}}$ has a distributional limit. If so, is that limit normal?
We conjecture that there is a limit, but it is not normal (see Figure \ref{fig:deg normal}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{degHist}
\includegraphics[width=0.49\textwidth]{degQQ}
\caption{Simulations for the degenerate case.} \label{fig:deg normal}
\end{figure}
\subsection{Heavy-tailed increments}
All main results from previous chapters are based on the assumption \ref{ass:moments} for $p=2$, that the second moments of the increments are finite.
But what happens in the heavy-tailed case, in which $\mathbb{E}\,( \|Z_1\|^2) = \infty$? We give two simulation examples.
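No theory is offered here, but such simulations are easy to set up; the following stdlib-only sketch (my own, with hypothetical parameter choices) samples a walk with isotropic Cauchy-length steps, so that $\mathbb{E}\,(\|Z_1\|^2) = \infty$, and computes the perimeter of its convex hull by the monotone-chain algorithm. In this regime the hull is typically dominated by a few very large steps.

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

random.seed(1)
n = 5000
x = y = 0.0
path = [(x, y)]
for _ in range(n):
    # isotropic step with standard Cauchy-distributed length
    r = abs(math.tan(math.pi * (random.random() - 0.5)))
    phi = 2 * math.pi * random.random()
    x, y = x + r * math.cos(phi), y + r * math.sin(phi)
    path.append((x, y))

hull = convex_hull(path)
L_n = sum(math.dist(hull[i], hull[(i + 1) % len(hull)]) for i in range(len(hull)))
```

Rerunning with different seeds shows the perimeter fluctuating over many orders of magnitude, in sharp contrast with the finite-variance case.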
\subsection{Centre-of-mass process}
We can associate to a random walk trajectory $S_0, S_1, S_2, \ldots$ its \emph{centre-of-mass}
process $G_0, G_1, G_2, \ldots$ defined by $G_0 := S_0 = 0$ and for $n \geq 1$ by
$G_n = \frac{1}{n} \sum_{k=1}^n S_k$.
By convexity, the convex hull of $\{ G_0, G_1, \ldots, G_n \}$ is contained in the convex hull of $\{ S_0, S_1, \ldots, S_n\}$.
What can one say about its perimeter length or area?
Note that one may express $G_n$ as a weighted sum of the increments of the walk as
\[ G_n = \sum_{k=1}^n \left( \frac{n-k+1}{n} \right) Z_k .\]
Then, for example, we expect that the method of Section~\ref{sec:CLT for drift}
carries through to this case; this is one direction for future work.
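The weighted-sum identity for $G_n$ is easy to verify directly; a small sketch:

```python
import random

random.seed(42)
n = 200
Z = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# Partial sums S_k, and the centre of mass G_n = (1/n) sum_{k=1}^n S_k
S, x, y = [], 0.0, 0.0
for zx, zy in Z:
    x, y = x + zx, y + zy
    S.append((x, y))
G = (sum(s[0] for s in S) / n, sum(s[1] for s in S) / n)

# Weighted form: Z[k] (0-based) carries weight (n - k)/n, i.e. (n - j + 1)/n for j = k + 1
Gw = (
    sum((n - k) / n * Z[k][0] for k in range(n)),
    sum((n - k) / n * Z[k][1] for k in range(n)),
)
```

The two computations of $G_n$ agree to floating-point precision.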
\subsection{Higher dimensions}
Most of the analysis of $L_n$ in this thesis is restricted to $d=2$ because we rely on the Cauchy formula for planar convex sets. In higher dimensions,
the analogues of $L_n$ and $A_n$ are the \emph{intrinsic volumes} of the convex body. Analogues of Cauchy's formula are available,
but these seem more difficult to use as the basis for analysis.
However, the scaling limit theories in Chapter~\ref{chapter3} may have some relatively straightforward corollaries in higher dimensions.
So, some analogous results for $A_n$ in Chapter~\ref{chapter6} may not be difficult to obtain.
\begin{thebibliography}{99}
\bibitem{andersen} E.\ S.\ Andersen, On the fluctuations of sums of random variables II,
{\em Math.\ Scand.}\ {\bf 2} (1954) 195--223.
\bibitem{asmussen} S.\ Asmussen,
\emph{Applied Probability and Queues}, 2nd ed., Springer-Verlag, New York, 2003.
\bibitem{bnb} O.\ Barndorff--Nielsen and G.\ Baxter,
Combinatorial lemmas in higher dimensions,
\emph{Trans.\ Amer.\ Math.\ Soc.}\ {\bf 108} (1963) 313--325.
\bibitem{barnett1} V.\ Barnett, The ordering of multivariate data, {\em J.\ Roy.\ Statist.\ Soc.\ Ser.\ A}\ {\bf 139} (1976) no.3, 318--355.
\bibitem{barnett2} V.\ Barnett, Outliers and order statistics, {\em Comm.\ Statist.\ Theory\ Methods}\ {\bf 17} (1988) no.7, 2109--2118.
\bibitem{barnett-lewis} V.\ Barnett and T.\ Lewis, \emph{Outliers in statistical data}, 3rd ed., Wiley, Chichester-New York-Brisbane, 1994.
\bibitem{blvc} F.\ Bartumeus, M.\ G.\ E.\ Da Luz, G.\ M.\ Viswanathan and J.\ Catalan, Animal search strategies: a quantitative random-walk analysis,
{\em Ecology}\ {\bf 86} no.11, (2005) 3078--3087.
\bibitem{bass} R.\ F.\ Bass, Markov processes and convex minorants, \emph{S\'eminaire de Probabilit\'es}, Lecture Notes in Math.\ {\bf 1059}, Springer (1982).
\bibitem{baxter} G.\ Baxter, A combinatorial lemma for complex numbers,
{\em Ann.\ Math.\ Statist.}\ {\bf 32} (1961) 901--904.
\bibitem{bill} P.\ Billingsley, \emph{Convergence of Probability Measures}, 2nd ed., Wiley, New York, 1999.
\bibitem{burdzy} K.\ Burdzy, Brownian paths and cones, \emph{Ann.\ Prob.}\ {\bf 13} no.3, (1985) 1006--1010.
\bibitem{claussen} G.\ Claussen, A.\ K.\ Hartmann, and S.\ N.\ Majumdar, Convex hulls of random walks: Large-deviation properties,
\emph{Phys.\ Rev.\ E} {\bf 91} (2015) 052104.
\bibitem{chm} M.\ Cranston, P.\ Hsu, and P.\ March, Smoothness of the convex hull of planar Brownian motion,
\emph{Ann.\ Probab.}\ {\bf 17} (1989) 144--150.
\bibitem{chung} K.\ L.\ Chung, \emph{A Course in Probability Theory}, 3rd ed., Academic Press, San Diego, 2001.
\bibitem{chung-fuchs} K.\ L.\ Chung and W.\ H.\ J.\ Fuchs, On the distribution of values of sums of random variables,
\emph{Mem.\ Amer.\ Math.\ Soc.}\ {\bf 6} (1951).
\bibitem{codling} E.\ A.\ Codling, M.\ J.\ Plank and S.\ Benhamou,
Random walk models in biology,
\emph{J. R. Soc. Interface} {\bf 5} (2008) 813--834.
\bibitem{durrett} R.\ Durrett, \emph{Probability: Theory and Examples},
Wadsworth \& Brooks/Cole, Pacific Grove, CA, 1991.
\bibitem{efron} B.\ Efron, The convex hull of a random set of points, {\em Biometrika}\ {\bf 52} no. 3/4, (1965) 331--343.
\bibitem{elbachir} M.\ El Bachir, \emph{L'enveloppe convex du mouvement Brownien}, Ph.D. thesis, Universit\'e Toulouse III---Paul Sabatier, 1983.
\bibitem{eldan} R.\ Eldan, Volumetric properties of the convex hull of an n-dimensional Brownian motion, \emph{Electron.\ J.\ Prob.}\
{\bf 19} no.45, (2014) 1--34.
\bibitem{evans} S.\ N.\ Evans, On the {H}ausdorff dimension of {B}rownian cone points, \emph{Math.\ Proc.\ Camb.\ Philos.\ Soc.}\ {\bf 98} (1985) 343--353.
\bibitem{feller} W.\ Feller, The asymptotic distribution of the range of sums of independent random variables,
\emph{Ann.\ Math.\ Statist.}\ {\bf 22} (1951) 427--432.
\bibitem{feller2} W.\ Feller,
\emph{An Introduction to Probability Theory and its Applications. Vol. II.},
2nd ed., Wiley, New York, 1971.
\bibitem{geffroy} J.\ Geffroy, Localisation asymptotique du poly\`edre d'appui d'un \'echantillon laplacien \`a $k$ dimensions, \emph{Publ.\ Inst.\ Stat.\ Univ.\ Paris}\ {\bf 10} (1961) 213--228.
\bibitem{gph} L.\ Giuggioli, J.\ R.\ Potts and S.\ Harris, Animal interactions and the emergence of territoriality, {\em PLoS.\ Comput.\ Biol.}\ {\bf 7}(3) (2011) e1002008.\ doi:10.1371/journal.pcbi.1002008
\bibitem{glendinning} R.\ H.\ Glendinning, The convex hull of a dependent vector-valued process, {\em J.\ Statist.\ Comput.\ Simul.}\ {\bf 38} (1991) 219--237.
\bibitem{goldman} A.\ Goldman, Le spectre de certaines mosa\"iques poissoniennes du plan et l'enveloppe convex du pont brownien,
\emph{Probab.\ Theory Relat.\ Fields} {\bf 105} (1996) 57--83.
\bibitem{green} P.\ J.\ Green, Peeling bivariate data, pp.~3--19 in {\em Interpreting Multivariate Data}, V.\ Barnett (ed.), Wiley, 1981.
\bibitem{gruber} P.\ M.\ Gruber, \emph{Convex and Discrete Geometry}, Springer, Berlin, 2007.
\bibitem{gut} A.\ Gut, \emph{Probability: A Graduate Course}, Springer, Uppsala, 2005.
\bibitem{hug} D.\ Hug, Random polytopes, Chapter 7 in \emph{Stochastic Geometry, Spatial Statistics and Random Fields}, Springer, 2013.
\bibitem{hughes} B.\ Hughes, \emph{Random Walks and Random Environments, Vol. I.}, Oxford, 1995.
\bibitem{jp} N.\ C.\ Jain and W.\ E.\ Pruitt, The other law of the iterated logarithm,
\emph{Ann.\ Probab.}\ {\bf 3} (1975) 1046--1049.
\bibitem{kac} M.\ Kac,
Toeplitz matrices, translation kernels and a related problem in probability theory,
{\em Duke\ Math.\ J.}\ {\bf 21} (1954) 501--509.
\bibitem{kallenberg} O.\ Kallenberg, \emph{Foundations of Modern Probability}, 2nd ed., Springer, New York, 2002.
\bibitem{klm} J.\ Kampf, G.\ Last, and I.\ Molchanov, On the convex hull of symmetric stable processes,
\emph{Proc.\ Amer.\ Math.\ Soc.}\ {\bf 140} (2012) 2527--2535.
\bibitem{kt2} S.\ Karlin, and H.\ M.\ Taylor,
\emph{A Second Course in Stochastic Processes}, Academic Press,
New York, 1981.
\bibitem{letac} G.\ Letac,
An explicit calculation of the mean of the perimeter of the convex hull of a plane random walk,
{\em J.\ Theor.\ Prob.}\ {\bf 6} (1993) 385--387.
\bibitem{letac2} G.\ Letac, Advanced problem 6230, \emph{Amer.\ Math.\ Monthly} {\bf 85} (1978) 686.
\bibitem{levybook} P.\ L\'evy, \emph{Processus Stochastiques et Mouvement Brownien},
Gauthier-Villars, Paris, 1948.
\bibitem{mardia} K.\ V.\ Mardia, J.\ T.\ Kent and J.\ M.\ Bibby, \emph{Multivariate Analysis}, Academic Press, London, 1979.
\bibitem{mcr} S.\ N.\ Majumdar, A.\ Comtet, and J.\ Randon-Furling,
Random convex hulls and extreme value statistics,
{\em J.\ Stat.\ Phys.}\ {\bf 138} (2010) 955--1009.
\bibitem{mohr} C.\ O.\ Mohr, Table of equivalent populations of north American small mammals, {\em Amer. Midland Naturalist}\ {\bf 37} no.1, (1947) 223--249.
\bibitem{morters} P.\ M\"orters and Y.\ Peres, \emph{Brownian Motion}, Cambridge, 2010.
\bibitem{nevzorov} V.\ B.\ Nevzorov, \emph{Records: Mathematical Theory}, Amer. Math. Soc., 2001.
\bibitem{penrose} M.\ Penrose, \emph{Random Geometric Graphs}, Oxford, 2003.
\bibitem{pitman-ross} J.\ Pitman and N.\ Ross, The greatest convex minorant of Brownian motion, meander, and bridge,
{\em Probab.\ Theory Relat.\ Fields}\ {\bf 153} (2012) 771--807.
\bibitem{polya} G.\ P\'olya, \"Uber eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz,
\emph{Math.\ Ann.}\ {\bf 84} (1921) 149--160.
\bibitem{reitzner} M.\ Reitzner, Random polytopes, pp.~45--76 in \emph{New Perspectives
in Stochastic Geometry}, W.S.~Kendall \& I.~Molchanov (eds.), OUP, 2010.
\bibitem{renyi-sulanke} A.\ R\'enyi and R.\ Sulanke, \"Uber die konvexe h\"ulle von $n$ zuf\"allig gew\"ahlten punkten, {\em Z.\ Wahrscheinlichkeitstheorie}\ {\bf 2} (1963) 75--84.
\bibitem{resnick} S. Resnick,
\emph{Adventures in Stochastic Processes}, Birkh\"auser, Boston, 1992.
\bibitem{rs} L.C.G.\ Rogers and L.\ Shepp,
The correlation of the maxima of correlated {B}rownian motions,
\emph{J.\ Appl.\ Probab.}\ {\bf 43} (2006) 880--883.
\bibitem{rudin} W.\ Rudin, \emph{Principles of Mathematical Analysis}, 3rd ed., McGraw-Hill, 1976.
\bibitem{schneider-weil} R.\ Schneider and W.\ Weil, Classical stochastic geometry, pp.~1--42 in \emph{New Perspectives
in Stochastic Geometry}, W.S.~Kendall \& I.~Molchanov (eds.), OUP, 2010.
\bibitem{sinai} Ya.\ G.\ Sinai, Convex hulls of random processes, \emph{Amer.\ Math.\ Soc.\ Transl.}\ {\bf 186} (1998).
\bibitem{smouse} P.E. Smouse, S. Focardi, P.R. Moorcroft, J.G. Kie, J.D. Forester and J.M. Morales,
Stochastic modelling of animal movement,
\emph{Phil. Trans. R. Soc. B} {\bf 365} (2010) 2201--2211.
\bibitem{ss} T.\ L.\ Snyder and J.\ M.\ Steele, Convex hulls of random walks,
{\em Proc.\ Amer.\ Math.\ Soc.}\ {\bf 117} (1993) 1165--1173.
\bibitem{sw} F.\ Spitzer and H.\ Widom, The circumference of a convex polygon,
{\em Proc.\ Amer.\ Math.\ Soc.}\ {\bf 12} (1961) 506--509.
\bibitem{steele} J.\ M.\ Steele, The {B}ohnenblust--{S}pitzer algorithm and its applications,
{\em J.\ Comput.\ Appl.\ Math.}\ {\bf 142} (2002) 235--249.
\bibitem{steele2} J.\ M.\ Steele, \emph{Probability Theory and Combinatorial Optimization}, Soc. for Industrial and Applied Math., 1997.
\bibitem{takacs} L.\ Tak\'acs, Expected perimeter length, \emph{Amer.\ Math.\ Monthly} {\bf 87} (1980) 142.
\bibitem{takacs2} L.\ Tak\'acs,
\emph{Combinatorial Methods in the Theory of Stochastic Processes},
Wiley, New York, 1967.
\bibitem{wx} A.R.\ Wade and C.\ Xu, Convex hulls of planar random walks with drift, {\em Proc.\ Amer.\ Math.\ Soc.}\ {\bf 143} (2015) 433--445.
\bibitem{wx2} A.R.\ Wade and C.\ Xu, Convex hulls of planar random walks and their scaling limits, {\em Stoc.\ Proc.\ and\ their\ Appl.}\ {\bf 125} (2015) 4300--4320.
\bibitem{worton1} B.\ J.\ Worton, A review of models of home range for animal movement, {\em Ecol.\ Modelling}\ {\bf 38} (1987) 277--298.
\bibitem{worton2} B.\ J.\ Worton, A convex hull-based estimator of home-range size, {\em Biometrics}\ {\bf 51} no.4, (1995) 1206--1215.
\bibitem{yukich} J.\ E.\ Yukich, \emph{Probability Theory of Classical Euclidean Optimization Problems}, Springer, 1998.
\end{thebibliography}
\end{document}
\begin{document}
\author{Maciej Ga\l{}\k{a}zka}
\address{Maciej Ga\l{}\k{a}zka, Faculty of Mathematics, Computer Science, and Mechanics, University of Warsaw, ul. Banacha 2, 02-097 Warszawa, Poland}
\email{[email protected]}
\title{Multigraded Apolarity}
\date{\today}
\keywords{secant variety, Waring rank, cactus rank, border rank, toric variety, apolarity, catalecticant}
\subjclass[2010]{14M25, 14N15}
\begin{abstract}
We generalize methods to compute various kinds of rank to the case of a toric variety $X$ embedded into projective space using a very ample line
bundle $\mathcal{L}$. We find an upper bound on the cactus rank. We use this to compute rank, border rank, and cactus rank of monomials in $H^0(X,
\mathcal{L})^*$ when $X$ is $\mathbb{P}^1 \times \mathbb{P}^1$, the Hirzebruch surface $\mathbb{F}_1$, the weighted projective plane
$\mathbb{P}(1,1,4)$, or a fake weighted projective plane.
\end{abstract}
\maketitle
\tableofcontents
\pagebreak
\section{Introduction}
\subsection{Background}
The topic of calculating ranks of polynomials goes back to works of Sylvester on apolarity in the 19th century. For introductions to this subject, see
\cite{iarrobino_kanev_book_Gorenstein_algebras} and \cite{landsberg_tensorbook}. For a concise introduction to the concept of rank for different
subvarieties $X \subseteq \mathbb{P}^N$ and numerous ways to give lower bounds for rank, see \cite{teitler_geometric_lower_bounds} (see also many
references there). For a brief review of the apolarity action in the case of the Veronese map, see \cite[Section 3]{nisiabu_jabu_cactus}.
As far as we know, the notion of cactus rank was first defined in \cite[Chapter 5]{iarrobino_kanev_book_Gorenstein_algebras} (where it is called the
``scheme length''). For a motivation, basic properties and an application in the case of the Veronese embedding, see \cite{nisiabu_jabu_cactus}. We
study cactus rank, because properties of the Hilbert scheme of all zero-dimensional subschemes of a variety are better understood than properties of
the subset corresponding to smooth schemes (i.e.\ schemes of points). Another reason is that many bounds for rank work also for cactus rank, for
instance the Landsberg-Ottaviani bound for vector bundles (see \cite{mgalazka_cactus_equations}). There is also a lower bound for the cactus rank by
Ranestad and Schreyer (see \cite{ranestad_schreyer_on_the_rank_of_a_symmetric_form}).
In this paper, we see what happens when $X$ is a toric variety. For an introduction to this subject, see the newer \cite{cox_book} and the older
\cite{fulton}. For toric varieties, many invariants can be computed quite easily. This can be used to study ranks and secant varieties. In
\cite{cox_sidman}, the authors investigate the second secant variety $\sigma_2(X)$, where $X$ is a toric variety embedded into some projective space.
As they write there, ``Many classical varieties whose secant varieties have been studied are toric''. Here we take a different approach. We generalize
apolarity to toric varieties, and then, as an application, we compute rank, cactus rank and border rank of some polynomials.
\subsection{Main results}
We need to introduce some notions to state the main results. Suppose $X$ is a $\mathbb{Q}$-factorial projective toric variety. Let $S$ be the Cox ring
of $X$. By definition, it is graded by $\operatorname{Cl} X$. Since $X$ is a toric variety, $S$ is a polynomial ring with finitely many variables (see \cite[Section
5.2]{cox_book}), so we may write $S \cong \mathbb{C}[x_1,\dots,x_r]$. Introduce $T = \mathbb{C}[y_1,\ldots,y_r]$. We will think of $T$ as an
$S$-module, where the multiplication (denoted by ${\: \lrcorner \:}$) is induced by
\begin{equation}\label{equation:apolarity}
x_i {\: \lrcorner \:} y_1^{b_1}\cdot \ldots \cdot y_r^{b_r} = \begin{cases}
y_1^{b_1}\cdot\ldots\cdot y_i^{b_i-1}\cdot\ldots\cdot y_r^{b_r} & \text{if} \, b_i > 0,\\
0 & \text{otherwise.}\end{cases}
\end{equation}
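The action \eqref{equation:apolarity} extends multiplicatively to arbitrary monomials $h = x^a$: every exponent is lowered when possible, and the result is $0$ otherwise. A small illustrative sketch (representing an element of $T$ as a map from exponent vectors to coefficients; the names are mine, not from the text):

```python
def contract(a, F):
    """Apply h = x^a (given by its exponent vector a) to F in T, where F is
    a dict mapping exponent vectors of y-monomials to coefficients."""
    out = {}
    for b, coeff in F.items():
        # h . y^b is zero unless every exponent can be lowered: b_i >= a_i for all i
        if all(bi >= ai for ai, bi in zip(a, b)):
            key = tuple(bi - ai for ai, bi in zip(a, b))
            out[key] = out.get(key, 0) + coeff
    return out
```

For example, with $r = 2$, applying $x_1$ to $y_1 y_2^2$ gives $y_2^2$, while applying $x_2$ to $y_1$ gives $0$.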
We define a grading on $T$ in $\operatorname{Cl} X$ in an analogous way as on $S$:
\begin{equation*}
\deg y_1^{a_1}\cdot\ldots\cdot y_r^{a_r} = \deg x_1^{a_1}\cdot\ldots\cdot x_r^{a_r}\text{.}
\end{equation*}
Let $\alpha \in \operatorname{Pic} X$ be a very ample class. The pairing ${\: \lrcorner \:}$ gives a duality which identifies $H^0(X, \mathcal{O}(\alpha))^*$ with $T_\alpha$
(this is described in detail in Proposition \ref{proposition:duality}). Here and later, for any graded ring $R$ and any degree $\mu$, we denote by
$R_\mu$ the graded piece of $R$ of degree $\mu$. For $F \in T_\alpha$ we define $F^\perp$ as its annihilator in $S$ (with respect to the action
${\: \lrcorner \:}$).
The first main result of this paper is:
\begin{theorem}[Multigraded Apolarity Lemma]\label{theorem:multigraded_apolarity}
Let
\begin{equation*}
\varphi \colon X \hookrightarrow \mathbb{P}(H^0(X, \mathcal{O}(\alpha))^*)
\end{equation*}
be the morphism associated with the complete linear system $|\mathcal{O}(\alpha)|$. Fix a non-zero $F \in H^0(X, \mathcal{O}(\alpha))^*$.
Then for any closed subscheme $R \hookrightarrow X$ we have
\begin{equation*}
F \in \langle R \rangle \iff I(R) \subseteq F^\perp \text{.}
\end{equation*}
Here $I(R)$ is the ideal of $R$ from Definition \ref{definition:ideal_of_subscheme}, and $\langle R \rangle$ is the linear span of a subscheme (see
the beginning of Subsection \ref{subsection:cactus_rank}).
\end{theorem}
This was first proven in my master thesis (see \cite{mgalazka_master_thesis}). It was later proven independently for smooth $X$ in \cite[Lemma
1.3]{toric_ranestad}. There the authors use this result to determine varieties of apolar subschemes for $\mathbb{P}^1 \times \mathbb{P}^1$ embedded
into projective space by $\mathcal{O}(2,2)$ and $\mathcal{O}(3,3)$, and also for the Hirzebruch surface $\mathbb{F}_1$ embedded by the bundle
$\mathcal{O}(2,1)$ (in notation from Subsection \ref{subsection:hirzebruch_surface}).
We prove Theorem \ref{theorem:multigraded_apolarity} in Section \ref{section:apolarity_lemma}.
Suppose $\beta \in \operatorname{Cl} X$. Consider the restriction of the action ${\: \lrcorner \:}$ to
\begin{equation*}
S_{\beta} \times T_{\alpha} \xra{{\: \lrcorner \:}} T_{\alpha - \beta} \text{.}
\end{equation*}
For any $F \in T_\alpha$ we consider the linear map $C_F^\beta : S_\beta \to T_{\alpha - \beta}$ given by $h \mapsto h{\: \lrcorner \:} F$.
\begin{theorem}\label{theorem:catalecticant}
Fix $F \in H^0(X, \mathcal{O}(\alpha))^*$. We have the following:
\begin{enumerate}[(1)]
\item if $\beta \in \operatorname{Cl} X$, then
\begin{equation*}
\operatorname{\underline{r}}(F) \geq \operatorname{rank}(C_F^\beta)\text{,}
\end{equation*}
\item if $\beta \in \operatorname{Pic} X$, then
\begin{equation*}
\operatorname{cr}(F) \geq \operatorname{rank}(C_F^\beta)\text{.}
\end{equation*}
\end{enumerate}
Here $\operatorname{\underline{r}}(F)$ and $\operatorname{cr}(F)$ denote the border rank and the cactus rank of $F$, respectively, see Definitions
\ref{definition:sigma_rank_border_rank} and \ref{definition:cactus_rank}.
\end{theorem}
We also provide an example such that the bound in point (1) does not hold for the cactus rank, see Remark \ref{remark:catalecticant_counterexample}.
Theorem \ref{theorem:catalecticant} is proven in Corollary \ref{corollary:cactus_catalecticant_bound} and Corollary
\ref{corollary:border_catalecticant_bound}. The bound in point (2) was given in \cite[Theorem 5.3.D]{iarrobino_kanev_book_Gorenstein_algebras} for the
Veronese embedding. Also see \cite{mgalazka_cactus_equations} for a version of the bound in point (2) for vector bundles of higher rank.
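To illustrate the catalecticant bound of Theorem \ref{theorem:catalecticant} in a toy case (my own example, with hypothetical names): take $X = \mathbb{P}^1 \times \mathbb{P}^1$ with Cox ring $\mathbb{C}[x_0,x_1,y_0,y_1]$ graded by $\mathbb{Z}^2$, let $F$ be the dual monomial with exponent vector $(2,1,2,1)$, of degree $\alpha = (3,3)$, and take $\beta = (1,1)$. Since $F$ is a monomial, each basis monomial of $S_\beta$ maps to a monomial or to $0$, so $\operatorname{rank} C_F^\beta$ is the number of distinct nonzero image monomials.

```python
from itertools import product

deg = [(1, 0), (1, 0), (0, 1), (0, 1)]   # Z^2-degrees of x0, x1, y0, y1
F = (2, 1, 2, 1)                          # exponent vector of the dual monomial

def monomials(beta):
    """Exponent vectors of the monomial basis of S_beta."""
    out = []
    for e in product(*(range(sum(beta) + 1) for _ in deg)):
        d = tuple(sum(ei * di[j] for ei, di in zip(e, deg)) for j in range(2))
        if d == beta:
            out.append(e)
    return out

def rank_catalecticant(F, beta):
    images = set()
    for h in monomials(beta):
        if all(fi >= hi for fi, hi in zip(F, h)):      # h . F is again a monomial
            images.add(tuple(fi - hi for fi, hi in zip(F, h)))
    return len(images)  # distinct monomials are linearly independent

r = rank_catalecticant(F, (1, 1))
```

Here the rank is $4$, so both the border rank and the cactus rank of $x_0^2 x_1 y_0^2 y_1$ are at least $4$.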
As an application of Theorem \ref{theorem:multigraded_apolarity}, in Section \ref{section:upper_bound_on_cactus} we provide an upper bound for the
cactus rank of a polynomial.
\begin{theorem} \label{theorem:upper_bound_on_cactus}
Suppose $X$ is a smooth projective toric variety, and $\alpha \in \operatorname{Pic} X$ is a very ample class. Let $0 \neq F \in H^0(X, \mathcal{O}(\alpha))^*$.
Let $\sigma$ be any cone of the fan of $X$ of maximal dimension. Let $f$ be the dehomogenization of $F$ defined by setting all the variables
corresponding to rays not in $\sigma$ to $1$. Then
\begin{equation*}
\operatorname{cr}(F) \leq \dim S/f^\perp\text{.}
\end{equation*}
\end{theorem}
This theorem is a generalization of \cite[Theorem 3]{bernardi_ranestad_cactus_rank_of_cubics} to the multigraded setting.
From it we derive a corollary.
\begin{corollary}\label{corollary:upper_bound_segre_veronese}
Let
\begin{equation*}
\mathbb{P}^{n_1}\times \dots \times \mathbb{P}^{n_k} \xra{v_{d_1,\dots,d_k}} \mathbb{P}(\operatorname{Sym}^{d_1}\mathbb{C}^{n_1 + 1}\otimes \dots \otimes
\operatorname{Sym}^{d_k}\mathbb{C}^{n_k + 1})
\end{equation*}
be a Segre-Veronese embedding. Here $\operatorname{Sym}^i$ denotes the space of $i$-th symmetric tensors. Let $F \in \operatorname{Sym}^{d_1}\mathbb{C}^{n_1 + 1}\otimes \dots \otimes
\operatorname{Sym}^{d_k}\mathbb{C}^{n_k + 1}$ be a non-zero form, and write $d = d_1 + \dots + d_k$. Then, with both sums below ranging over integer tuples $(e_1,\dots,e_k)$ with $0 \leq e_i \leq d_i$,
\begin{align*}
\operatorname{cr}(F) &\leq \sum_{\substack{(e_1,\dots,e_k) |\\ e_1 + \dots + e_k \leq d/2 }}\binom{n_1 -1+ e_1}{e_1}\cdot\ldots\cdot \binom{n_k -1+ e_k}{e_k} \\
&+\sum_{\substack{(e_1,\dots,e_k) |\\ e_1 + \dots + e_k > d/2 }}\binom{n_1 -1+ d_1 - e_1}{d_1 - e_1}\cdot\ldots\cdot \binom{n_k -1 + d_k - e_k}{d_k - e_k}\text{.}
\end{align*}
\end{corollary}
In \cite{ballico_bernardi_gesmundo_cactus_rank_segre_veronese} the authors prove a weaker version of the bound in Corollary
\ref{corollary:upper_bound_segre_veronese}.
See Section \ref{section:upper_bound_on_cactus} for the proofs of Theorem \ref{theorem:upper_bound_on_cactus} and Corollary
\ref{corollary:upper_bound_segre_veronese}.
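The bound of Corollary \ref{corollary:upper_bound_segre_veronese} is straightforward to evaluate; a sketch (assuming $d = d_1 + \dots + d_k$ and that each $e_i$ ranges over $0,\dots,d_i$):

```python
from itertools import product
from math import comb, prod

def cactus_bound(ns, ds):
    """Evaluate the right-hand side of the corollary for
    P^{n_1} x ... x P^{n_k} with degrees d_1, ..., d_k."""
    d = sum(ds)
    total = 0
    for es in product(*(range(di + 1) for di in ds)):
        if sum(es) <= d / 2:
            total += prod(comb(n - 1 + e, e) for n, e in zip(ns, es))
        else:
            total += prod(comb(n - 1 + di - e, di - e)
                          for n, di, e in zip(ns, ds, es))
    return total
```

For instance, for binary cubics ($k = 1$, $n_1 = 1$, $d_1 = 3$) the bound is $4$, and for $\mathbb{P}^1 \times \mathbb{P}^1$ with $\mathcal{O}(1,1)$ it is also $4$.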
Finally, we use this to compute ranks of monomials when $X$ is a projective toric surface. The first example is $\mathbb{P}^1 \times \mathbb{P}^1$,
see Subsection \ref{subsection:p1timesp1}. We consider the problem of determining cactus ranks and ranks of monomials $F =
x_0^{k_0}x_1^{k_1}y_0^{l_0}y_1^{l_1}$, where $k_0 \geq k_1 \geq 1, l_0 \geq l_1 \geq 1$. We have
\begin{equation}\label{equation:obvious_rank_inequality}
\operatorname{r}(F) \leq (k_0 + 1)(l_0 + 1)\text{.}
\end{equation}
But the equality in the equation above does not always hold. For example, rank of $x_0^2 x_1 y_0^2 y_1$ is $8$, not $9$ (see \cite[Remark
16]{christandl_kjaerulff_zuiddam_tensor_rank_not_multiplicative}, \cite{chen_friedland_rank_of_tensor_product_is_eight}).
Our result is
\begin{theorem}\label{theorem:inequalities}
The following inequalities hold:
\begin{enumerate}[(i)]
\item $\operatorname{r}(F) \leq (k_0 + 1)(l_1 + 1) + (k_1 + 1)(l_0 + 1) - (k_1 + 1)(l_1 + 1)$,\label{item:first_inequality}
\item $\operatorname{r}(F) \geq (k_0 + 1)(l_1 + 1)$ for $k_0 > k_1$, $\operatorname{r}(F) \geq (k_1 + 1)(l_0 + 1)$ for $l_0 > l_1$,\label{item:second_inequality}
\item $\operatorname{r}(F) \geq (k_1 + 2)(l_1 + 2) - 1$ for $k_0 > k_1$ and $l_0 > l_1$.\label{item:third_inequality}
\end{enumerate}
\end{theorem}
Item \eqref{item:first_inequality} is stronger than the recent result \cite[Proposition
3.9]{ballico_bernardi_christandl_gesmundo_partially_symmetric}. Item \eqref{item:second_inequality} is proven independently in \cite[Proposition 4.3]{
ballico_bernardi_gesmundo_oneto_ventura_geometric_conditions}.
Let us look at the cases where rank is determined by these inequalities. When we set $l_1 = l_0$ in the first inequality of Item
\eqref{item:second_inequality}, from Equation \eqref{equation:obvious_rank_inequality} we get $\operatorname{r}(F) = (k_0 + 1)(l_0 + 1)$. Also when we set $k_0 =
k_1 + 1$ and $l_0 = l_1 + 1$, we get (by Items \eqref{item:first_inequality} and \eqref{item:third_inequality}) that $\operatorname{r}(F) = (k_1 + 2)(l_1 + 2) -
1$.
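For instance, for $F = x_0^2 x_1 y_0^2 y_1$ (so $k_0 = l_0 = 2$ and $k_1 = l_1 = 1$), the bounds of Theorem \ref{theorem:inequalities} already pin down the rank, recovering the value $8$ cited above (a quick arithmetic check):

```python
k0, k1, l0, l1 = 2, 1, 2, 1

# Item (i): upper bound on the rank
upper = (k0 + 1) * (l1 + 1) + (k1 + 1) * (l0 + 1) - (k1 + 1) * (l1 + 1)

# Item (iii): lower bound, valid since k0 > k1 and l0 > l1
lower = (k1 + 2) * (l1 + 2) - 1
```

Both bounds equal $8$, so $\operatorname{r}(x_0^2 x_1 y_0^2 y_1) = 8$.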
The next example is the Hirzebruch surface $\mathbb{F}_1$ (which can be defined as $\mathbb{P}^2$ blown up in one point), see Subsection
\ref{subsection:hirzebruch_surface}. Here we find monomials whose border rank is less than their cactus rank (and also their smoothable rank), see
Remark \ref{remark:wild_case}. Another one is the weighted projective plane $\mathbb{P}(1,1,4)$, see Subsection
\ref{subsection:weighted_projective_plane}. Here we give an example of a monomial whose cactus rank is less than its border rank. The last one is a
fake weighted projective plane (see Subsection \ref{subsection:fake_weighted_projective_plane}) --- the quotient of $\mathbb{P}^2$ by the action of
$\mathbb{Z}/3 = \{1, \varepsilon, \varepsilon^2 \}$ (where $\varepsilon^3 = 1$) given by $\varepsilon \cdot [\lambda_0,\lambda_1,\lambda_2] =
[\lambda_0, \varepsilon\lambda_1,\varepsilon^2 \lambda_2]$.
\subsection{Acknowledgments}
This article is a substantially expanded version of my master thesis, \cite{mgalazka_master_thesis}.
I thank my advisor, Jaros\l{}aw Buczy\'nski, for introducing me to this subject, his insight, many suggestions of examples, suggestions on how to
improve the presentation, many discussions, and constant support. I also thank Piotr Achinger and Joachim Jelisiejew for suggestions on how to improve
the presentation. I am also grateful to Joachim Jelisiejew and Mateusz Micha{\l}ek for helpful discussions.
I was supported by the project ``Secant varieties, computational complexity, and toric degenerations'' realized within the Homing Plus programme of
the Foundation for Polish Science, co-financed by the European Union from the Regional Development Fund, and by the Warsaw Center of Mathematics and Computer Science
financed by the Polish program KNOW. During the process of expanding the article (adding Section \ref{section:upper_bound_on_cactus} and Subsection
\ref{subsection:p1timesp1}) I was supported by the NCN project ``Algebraic Geometry: Varieties and Structures'' no. 2013/08/A/ST1/00804.
\section{Ranks and secant varieties}
In this section we review the definitions of various kinds of ranks and secant varieties.
\begin{definition}\label{definition:sigma_rank_border_rank}
Let $W$ be a finite-dimensional complex vector space, and $X$ a subvariety of $\mathbb{P}W$. Let
\begin{equation*}
\sigma_r^0(X) = \{ [F] \in \mathbb{P}W | [F] \in \langle p_1,\dots, p_r \rangle \text{ where } p_1,\dots, p_r \in X \}\text{,}
\end{equation*}
where $\langle \rangle$ denotes the (projective) linear span. Define the $r$-th \emph{secant variety} of $X \subseteq \mathbb{P}W$ by $\sigma_r(X) =
\overline{\sigma_r^0(X)}$. The overline denotes the Zariski closure. For any non-zero $F \in W$ define the
\emph{$X$-rank} of $F$:
\begin{align*}
\operatorname{r}_X(F) &= \min \{ r \in \mathbb{Z}_{\geq 1} | [F] \in \sigma_r^0(X) \} \\
&= \min \{ r \in \mathbb{Z}_{\geq 1} | [F] \in \langle p_1,\dots,p_r \rangle \text{ for some } p_1,\dots,p_r \in X\}
\end{align*}
and the \emph{$X$-border rank} of $F$:
\begin{align*}
\operatorname{\underline{r}}_X(F) &= \min \{ r \in \mathbb{Z}_{\geq 1} | [F] \in \sigma_r(X) \}\\
&= \min \{ r \in \mathbb{Z}_{\geq 1} | F \text{ is a limit of points of }X\text{-rank } \leq r \}\text{.}
\end{align*}
Usually, if $X$ is fixed, we omit the prefix and call them rank and border rank, respectively.
\end{definition}
The problem of calculating border rank of points is related to finding equations of secant varieties. Namely, if we know set-theoretic equations of
$\sigma_r(X) \subseteq \mathbb{P}W$ for $r = 1,2,3,\dots$, then we can calculate the border rank of any point (by checking if it satisfies the
equations).
\begin{example}
Let $X$ be the $d$-th Veronese variety $\mathbb{P}V \subseteq \mathbb{P}\operatorname{Sym}^d V$. Then the $X$-rank of $[F] \in \mathbb{P} \operatorname{Sym}^d V$ is the least $r$
such that $F$ can be written as $v_1^d + \dots + v_r^d$ for some $v_i \in V$. In this case the $X$-rank is called the symmetric rank, or the Waring
rank.
\end{example}
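The gap between rank and border rank is already visible for binary forms; the following computation is standard.
\begin{example}
Let $X$ be the $d$-th Veronese variety of $\mathbb{P}V$ with $\dim V = 2$ and basis $x, y$ of $V$. For $d = 2$ we have $xy = \tfrac{1}{4}\left((x+y)^2 - (x-y)^2\right)$, so $\operatorname{r}_X(xy) = 2$. For $d = 3$ the form $F = x^2 y$ has $\operatorname{r}_X(F) = 3$, but
\begin{equation*}
x^2 y = \lim_{t \to 0} \frac{1}{3t}\left((x + ty)^3 - x^3\right)\text{,}
\end{equation*}
so $F$ is a limit of forms of rank $2$, and $\operatorname{\underline{r}}_X(F) = 2$.
\end{example}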
Let us go back to the setting of a projective variety $X \subseteq \mathbb{P}W$. Here are a few results which we are going to need later.
Let $\mathbb{P}T_{q}X$ denote the projective tangent space of $X$ embedded in $\mathbb{P}W$ at point $q$, i.e.\ the projectivization of the affine
tangent space to the affine cone of $X$.
\begin{proposition}[Terracini's Lemma]\label{proposition:terracini}
Let $r$ be a positive integer. Then for $r$ general points $p_1,\dots, p_r \in X$ and a general point $q \in \langle p_1,\dots,p_r \rangle$ we have
\begin{equation*}
\mathbb{P}T_{q}\sigma_r(X) = \langle \mathbb{P}T_{p_1} X,\dots,\mathbb{P}T_{p_r} X \rangle\text{.}
\end{equation*}
\end{proposition}
For a proof, see \cite[Section 5.3]{landsberg_tensorbook} or \cite[Chapter V, Proposition 1.4]{zak_tangents}.
\begin{corollary}[of Proposition \ref{proposition:terracini}]
The dimension of $\sigma_r(X)$ is not greater than $r(\dim X + 1) - 1$.
\end{corollary}
\begin{proposition}
If $X$ is irreducible, then $\sigma_r(X)$ is irreducible for any $r \geq 1$.
\end{proposition}
\begin{definition}\label{definition:expected_dimension}
When $\dim \sigma_r(X) = \min(\dim \mathbb{P}W, r(\dim X + 1) -1)$, we say that $\sigma_r(X)$ is of \emph{expected dimension}.
\end{definition}
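Secant varieties may fail to attain the expected dimension; a classical example is the following.
\begin{example}
Let $X \subseteq \mathbb{P}^5$ be the second Veronese variety of $\mathbb{P}^2$. Identifying $\operatorname{Sym}^2 V$ with symmetric $3 \times 3$ matrices, $\sigma_2(X)$ is the locus of matrices of rank at most $2$, i.e.\ the determinant hypersurface. Hence $\dim \sigma_2(X) = 4$, while the expected dimension is $\min(5, 2 \cdot (2 + 1) - 1) = 5$.
\end{example}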
\subsection{Cactus rank}\label{subsection:cactus_rank}
For a zero-dimensional scheme $R$ (of finite type over $\mathbb{C}$), let $\operatorname{length}{R}$ denote its length, i.e.\ $\dim_{\mathbb{C}}H^0(R,
\mathcal{O}_R)$. This is equal to the degree of $R$ in any embedding into projective space. Also, for any subscheme $R \hookrightarrow \mathbb{P}W$
define $\langle R \rangle$ to be the linear span of $R$, i.e.\ the smallest projective linear subspace through which the inclusion of the scheme factors.
\begin{definition}\label{definition:cactus_rank}
Define the \emph{$X$-cactus rank} of $F \in W$:
\begin{equation*}
\operatorname{cr}_X(F) = \min \{\operatorname{length} R | R \hookrightarrow X, \dim R = 0, F \in \langle R \rangle \}\text{.}
\end{equation*}
\end{definition}
We have the following inequalities:
\begin{align*}
\operatorname{cr}(F) &\leq \operatorname{r}(F)\text{,} \\
\operatorname{\underline{r}}(F) &\leq \operatorname{r}(F)\text{.}
\end{align*}
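The first inequality may be strict; the following is a standard example for the Veronese variety.
\begin{example}
Let $X$ be the $d$-th Veronese variety of $\mathbb{P}V$ with $\dim V = 2$ and $d \geq 3$, and let $F = x^{d-1}y$. It is known that $\operatorname{r}_X(F) = d$. On the other hand, the length-$2$ subscheme $R \hookrightarrow X$ supported at $[x^d]$ (here $X$ is a rational normal curve, so there is exactly one such subscheme) spans the tangent line to $X$ at $[x^d]$, which is $\langle [x^d], [x^{d-1}y] \rangle$. Hence $F \in \langle R \rangle$ and $\operatorname{cr}_X(F) = 2$.
\end{example}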
\section{Toric varieties}\label{section:toric_varieties}
\subsection{Quotient construction and the Cox ring}
Let $M$ and $N$ be dual lattices (abelian groups isomorphic to $\mathbb{Z}^k$ for some $k \geq 1$) and $\langle \cdot,\cdot\rangle :M\times N\to
\mathbb{Z}$ be the duality between them. Let $X_\Sigma$ be the toric variety of a fan $\Sigma \subseteq N_\mathbb{R} \coloneqq N\otimes \mathbb{R}$
with no torus factors. The term ``with no torus factors'' means that the linear span of $\Sigma$ in $N_\mathbb{R}$ is the whole space. Let $\Sigma(1)$
denote the set of rays of the fan $\Sigma$. Similarly, $\sigma(1)$ denotes the set of rays in a cone $\sigma$. Then $X_\Sigma$ can be obtained as an
almost geometric quotient of an action of $G \coloneqq \Hom(\operatorname{Cl} X_\Sigma, \mathbb{C}^*)$ on $\mathbb{C}^{\Sigma(1)}\setminus Z$, where $Z$ is a
subvariety of $\mathbb{C}^{\Sigma(1)}$. Let us go briefly through the construction of this quotient. We follow \cite[Section 5.1]{cox_book}.
Since $X_\Sigma$ has no torus factors, we have an exact sequence
\begin{equation*}
0 \to M \to \mathbb{Z}^{\Sigma(1)} \to \operatorname{Cl} X_\Sigma \to 0\text{.}
\end{equation*}
After applying $\Hom(-, \mathbb{C}^*)$, this gives
\begin{equation*}
1 \to \Hom(\operatorname{Cl} X_\Sigma, \mathbb{C}^*) \to (\mathbb{C}^*)^{\Sigma(1)} \to \mathbb{C}^*\otimes N \to 1\text{.}
\end{equation*}
So $G = \Hom(\operatorname{Cl} X_\Sigma, \mathbb{C}^*)$ is a subgroup of $(\mathbb{C}^*)^{\Sigma(1)}$, and the action on $\mathbb{C}^{\Sigma(1)}$ is given by coordinatewise multiplication. Let $S =
\mathbb{C}[x_\rho | \rho \in \Sigma(1)]$, the polynomial ring with variables indexed by the rays of the fan $\Sigma$. The
ring $S$ is the coordinate ring of the affine space $\mathbb{C}^{\Sigma(1)}$. For a cone $\sigma \in \Sigma$, define
\begin{equation*}
x^{\widehat{\sigma}} = \prod_{\rho \notin \sigma(1)}{x_\rho}\text{.}
\end{equation*}
Then define a homogeneous ideal in $S$:
\begin{equation}\label{equation:irrelevant_ideal}
B = B(\Sigma) = (x^{\widehat{\sigma}} | \sigma \in \Sigma) \subseteq S\text{,}
\end{equation}
which is called the irrelevant ideal, and let $Z = Z(\Sigma) \subseteq \mathbb{C}^{\Sigma(1)}$ be the vanishing set of $B$. For a precise construction
of the quotient map $[\cdot]$
\[\begin{tikzcd}
\mathbb{C}^{\Sigma(1)}\setminus Z \arrow{r}{[\cdot]} & (\mathbb{C}^{\Sigma(1)}\setminus Z)// G = X_\Sigma
\end{tikzcd}\]
see \cite[Proposition 5.1.9]{cox_book}, where it is denoted by $\pi$.
Fix an ordering of all the rays of the fan, let $\Sigma(1) = \{\rho_1,\ldots,\rho_r\}$. Then $S$ becomes $\mathbb{C}[x_{\rho_1},\dots,x_{\rho_r}] =:
\mathbb{C}[x_1,\dots,x_r]$. The ring $S$ is the Cox ring of $X_\Sigma$. For more details, see \cite[Section 5.2]{cox_book}, where $S$ is called the total
coordinate ring. This ring is graded by the class group $\operatorname{Cl}{X_\Sigma}$, where
\begin{equation*}
\deg x_i = [D_{\rho_i}]\text{,}
\end{equation*}
and $D_{\rho_i}$ is the torus-invariant divisor corresponding to $\rho_i$, see \cite[Chapter 4]{cox_book}.
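To fix ideas, here is the simplest example beyond projective space.
\begin{example}
Let $X_\Sigma = \mathbb{P}^1 \times \mathbb{P}^1$, whose fan in $N = \mathbb{Z}^2$ has the four rays generated by $e_1$, $-e_1$, $e_2$, $-e_2$. Then $S = \mathbb{C}[x_0, x_1, y_0, y_1]$ with $\operatorname{Cl} X_\Sigma \cong \mathbb{Z}^2$, $\deg x_i = (1,0)$ and $\deg y_i = (0,1)$, so $S_{(a,b)}$ consists of the bihomogeneous polynomials of bidegree $(a,b)$. Moreover $B = (x_0 y_0, x_0 y_1, x_1 y_0, x_1 y_1)$ and $Z = \{x_0 = x_1 = 0\} \cup \{y_0 = y_1 = 0\}$.
\end{example}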
\subsection{Saturated ideals}
Take any ideals $I, J \subseteq S$. Let $(I :_S J)$ be the set of all $x \in S$ such that $x \cdot J \subseteq I$; it is an ideal of $S$. It is
sometimes called the quotient ideal, or the colon ideal. For any ideals $I, J, K \subseteq S$ we have:
\begin{itemize}
\item $I \subseteq (I :_S J)$,
\item if $J \subseteq K$, then $(I:_S J) \supseteq (I:_S K)$,
\item $(I :_S J\cdot K) = ((I:_S J):_S K)$.
\end{itemize}
Recall the irrelevant ideal $B \subseteq S$ defined in Equation \eqref{equation:irrelevant_ideal}. Take any ideal $I \subset S$. We define the
$B$-saturation of $I$ as
\begin{equation*}
\sat{I} = \bigcup_{i\geq 1}{(I:_S B^i)}\text{.}
\end{equation*}
Note that this is an increasing union because $B^i \supseteq B^j$ for $i < j$, so $\sat{I}$ is an ideal. Since $S$ is
Noetherian, the union stabilizes in a finite number of steps. We always have $I \subseteq \sat{I}$. If this is an equality, we say that $I$ is
$B$-saturated. In order to show that $I$ is $B$-saturated, it suffices to find any $i \geq 1$ such that $I = (I :_S B^i)$.
Moreover, if $I$ and $J$ are homogeneous, then so is $(I:_S J)$. It follows that for $I$ homogeneous the ideal $\sat{I}$ is homogeneous.
\begin{example}
Let us look at the projective space $\mathbb{P}_{\mathbb{C}}^k$. See \cite[Example 5.1.7]{cox_book}. Here $S =
\mathbb{C}[x_0,\dots,x_k]$, $B = (x_0,\dots,x_k) = \bigoplus_{i \geq 1}S_i$ and $Z = \{0\}$. In this case
\begin{equation*}
\sat{I} = \{f \in S | \text{ for all } i = 0,1,\dots,k \text{ there is } n \text{ such that } x_i^n\cdot f \in I \}\text{.}
\end{equation*}
Recall that in this case there is a 1-1 correspondence between closed subschemes of $\mathbb{P}_{\mathbb{C}}^k$ and homogeneous $B$-saturated ideals
of $S$. Moreover, the ideal given by $\bigoplus_{i\geq 0}H^0(X, \mathcal{I}_R\otimes \mathcal{O}(i))$, where $\mathcal{I}_R$ is the ideal sheaf of
$R$ in $\mathbb{P}_{\mathbb{C}}^k$, is $B$-saturated. For more on this, see \cite[II, Corollary 5.16 and Exercise 5.10]{hartshorne}.
\end{example}
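Here is a small explicit instance of saturation.
\begin{example}
In $S = \mathbb{C}[x_0, x_1]$ with $B = (x_0, x_1)$, let $I = (x_0^2 x_1, x_0 x_1^2)$. Then $x_0 x_1 \cdot B \subseteq I$, so $x_0 x_1 \in (I :_S B)$, and a direct check gives $\sat{I} = (I :_S B) = (x_0 x_1)$. The ideals $I$ and $(x_0 x_1)$ define the same subscheme of $\mathbb{P}^1_{\mathbb{C}}$, namely the two reduced points $[1,0]$ and $[0,1]$, but only the latter ideal is $B$-saturated.
\end{example}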
For a toric variety the situation is more complicated: there can be many $B$-saturated ideals defining the same subscheme $R$, although they all have
to agree in the $\operatorname{Pic}$ part. See \cite[Theorem 3.7 and the following discussion]{cox_homogeneous} for more details. For technical reasons we will
assume from now on that the fan $\Sigma$ is simplicial. Consider the map
\begin{equation}
\bigoplus_{\alpha \in \operatorname{Cl} X_\Sigma}H^0(X, \mathcal{I}_R\otimes \mathcal{O}(\alpha)) \to \bigoplus_{\alpha \in \operatorname{Cl} X_\Sigma}H^0(X, \mathcal{O}(\alpha))
\label{map:saturation}
\end{equation}
induced by $\mathcal{I}_R \hookrightarrow \mathcal{O}_{X_\Sigma}$. We may take $I(R)$ to be the image of this map. This is done in the proof of
\cite[Proposition 6.A.6]{cox_book}. Note that in this case for any $\alpha \in \operatorname{Pic} X_\Sigma$ the vector space $H^0(X_\Sigma,
\mathcal{I}_R\otimes\mathcal{O}(\alpha))$ can be identified with those global sections of $\mathcal{O}(\alpha)$ which vanish on $R$. So let us make
the following
\begin{definition}\label{definition:ideal_of_subscheme}
Let $X_\Sigma$ be a simplicial toric variety. Let $R \hookrightarrow X_\Sigma$ be a closed subscheme. We define $I(R) \subseteq S$, the ideal of
$R$, to be the image of homomorphism \eqref{map:saturation}.
\end{definition}
\begin{proposition}
\label{proposition:ideal}
Suppose the fan $\Sigma$ is simplicial. Let $\alpha \in \operatorname{Pic} X_\Sigma$ be the class of a Cartier divisor. Let $R \hookrightarrow X_\Sigma$ be any
closed subscheme. Then $(I(R))_\alpha = (I(R):_S B^i)_\alpha$ for any $i \geq 1$ (hence $I(R)$ agrees with $\sat{I(R)}$ in degree $\alpha$).
\end{proposition}
\begin{proof}
Take $x \in S_\alpha$ such that $x \cdot B^i \subseteq I(R)$. It is enough to show that $x$ is zero on $R$. Take any point $p \in R$. We will show
that $x$ is zero on $R$ around that point. Since the vanishing set of $B$ is empty, we know that some homogeneous element $b \in B$ is non-zero at
$p$. By taking a large enough power, we may assume $b \in (B^i)_\beta$ for some $\beta \in \operatorname{Pic} X_\Sigma$ (here we use that $\Sigma$ is
simplicial!). Because $b$ is non-zero at $p$, there is an open neighbourhood $p \in U \subseteq X_\Sigma$ such that $\mathcal{O}_{X_\Sigma}(\beta)$
is trivialized on $U$ by $b$. But then $x$ is zero when pulled back to $R$ on $U$ if and only if $x \cdot b$ is zero when pulled back to $R$ on $U$.
But the latter holds, as $x \cdot b \in I(R)$.
\end{proof}
\subsection{Isomorphism between sections and polynomials}
Let $\alpha \in \operatorname{Cl} X_\Sigma$. Recall the isomorphism of $H^0(X_\Sigma,\mathcal{O}(\alpha))$ and
$\mathbb{C}[x_1,\ldots,x_r]_\alpha$ given in \cite[Proposition 5.3.7]{cox_book}.
\begin{proposition}\label{proposition:isomorphism}
Suppose $\alpha \in \operatorname{Pic} X_\Sigma$. Take any section $s \in H^0(X_\Sigma, \mathcal{O}(\alpha))$ and the corresponding polynomial $f \in S_\alpha$.
Also let $p$ be a point in $X_\Sigma$ and take any $(\lambda_1,\ldots,\lambda_r) \in \mathbb{C}^r$ such that $[\lambda_1,\dots,\lambda_r] = p$.
Then
\begin{equation*}
s(p) = 0 \iff f(\lambda_1,\dots,\lambda_r) = 0\text{.}
\end{equation*}
\end{proposition}
\begin{proof}
Take any $\sigma$ such that $p \in U_\sigma$. We will trivialize the line bundle $\mathcal{O}(\alpha)$ on $U_\sigma$ in order to move the situation
to regular functions on $U_\sigma$. We will do it by finding a section that is nowhere zero both as a polynomial and as a section.
We know that $U_\sigma = \operatorname{Spec} (S_{x^{\widehat{\sigma}}})_0$, where $x^{\widehat{\sigma}}=\prod_{\rho \notin \sigma(1)}{x_\rho}$, the inner subscript
refers to localization, and the outer one is taking degree $0$. From the definition of $\mathcal{O}(\alpha)$ we have
$H^0(U_\sigma,\mathcal{O}(\alpha)) = (S_{x^{\widehat{\sigma}}})_\alpha$. Our goal is to find a monomial in $(S_{x^{\widehat{\sigma}}})_\alpha$ which
is nowhere zero as a section. Take any torus-invariant representative $\sum_{\rho}{a_\rho D_{\rho}}$ of class $\alpha$ (here $a_\rho \in
\mathbb{Z}$). From \cite[Theorem 4.2.8]{cox_book} there exists an $m_\sigma \in M$ such that $\langle m_{\sigma},u_\rho\rangle = -a_\rho$ for $\rho
\in \sigma(1)$ (here $M$ is the lattice of characters as in the beginning of Section \ref{section:toric_varieties}, $\sigma(1)$ is the set of rays of
the cone $\sigma$, and $u_\rho \in N$ is the generator of ray $\rho$). Then
\begin{equation*}
\sum_{\rho}{\langle m_\sigma,u_\rho \rangle D_\rho} +
\sum_{\rho}a_\rho D_\rho = \sum_{\rho \notin \sigma(1)}(\langle m_\sigma,u_\rho \rangle + a_\rho) D_\rho
\end{equation*}
belongs to the class $\alpha$ as well. This is a direct consequence of the exact sequence \cite[Theorem 4.2.1]{cox_book}. The outcome is that the
monomial
\begin{equation}\label{equation:monomial}
g \coloneqq \prod_{\rho \notin \sigma(1)}x_\rho^{\langle m_\sigma,u_\rho \rangle + a_\rho}
\end{equation}
has degree $\alpha$. Notice that it belongs
to $(S_{x^{\widehat{\sigma}}})_\alpha$.
We want to show that $g$ is nowhere zero as a section of $\mathcal{O}(\alpha)$. The polynomial $g \in S_{x^{\widehat{\sigma}}}$ is invertible, with
inverse $g^{-1} \in (S_{x^{\widehat{\sigma}}})_{-\alpha}$. But then $g^{-1}\cdot g = 1 \in (S_{x^{\widehat{\sigma}}})_0$. If $g$ were zero at some
point $p \in X_\Sigma$, then we would have $0 = g^{-1}(p)\cdot g(p) = 1$, a contradiction. An analogous proof shows that $g$ is nowhere zero as a
polynomial.
Now we can set $\bar{f} = g^{-1}f$ and then $\bar{f}$ is a regular function on $\operatorname{Spec}(S_{x^{\widehat{\sigma}}})_0$. We need to see that $\bar{f}(p)
= 0$ is equivalent to $\bar{f}(\lambda_1,\dots,\lambda_r) = 0$. In fact, even more is true: $\bar{f}(p) = \bar{f}(\lambda_1,\dots,\lambda_r)$. To
see this, consider the projection $\mathbb{C}^r \setminus Z \xra{[\cdot]} X_\Sigma$ restricted to the inverse image of $U_\sigma$. This corresponds
to the homomorphism of algebras $[\cdot]^*_\sigma : \mathbb{C}[\sigma^\vee \cap M] \to S_{x^{\widehat{\sigma}}}$ given by
\begin{equation*}
\chi^m \mapsto \prod_{\rho \in \Sigma(1)} x_\rho^{\langle m, u_\rho\rangle}\text{,}
\end{equation*}
see \cite[Proof of Theorem 5.1.11]{cox_book}. Here $\sigma^\vee$ is the dual cone, $\chi^m$ is the character corresponding to $m$, and $\langle
\cdot,\cdot \rangle$ is the standard pairing between $M$ and $N$. Let us look at the following diagram
\[\begin{tikzcd}
\mathbb{C}[\sigma^\vee \cap M] \ar{r}{[\cdot]_\sigma^*}\ar[rd, "\operatorname{ev}_p",swap]
& S_{x^{\widehat{\sigma}}} \ar[d, "\operatorname{ev}_\lambda"] \\
& \mathbb{C}\text{,}
\end{tikzcd}\]
where $\operatorname{ev}$ denotes the evaluation. When we apply the functor $\operatorname{Spec}$ to the diagram, it becomes commutative (since $[\lambda] = p$). As $\operatorname{Spec}$ is
an equivalence of categories, the original diagram is commutative. But this means that $\bar{f}(p) = \bar{f}(\lambda_1,\dots,\lambda_r)$, as
desired.
\end{proof}
\begin{corollary}\label{corollary:isomorphism}
Suppose $\alpha \in \operatorname{Pic} X_\Sigma$. Suppose $f_1, f_2 \in S_\alpha$ are polynomials and $s_1, s_2$ are the corresponding sections of
$\mathcal{O}(\alpha)$. Also fix, as above, $p \in X_\Sigma$ and $(\lambda_1,\dots,\lambda_r) \in \mathbb{C}^r$ such that
$[\lambda_1,\dots,\lambda_r] = p$. Then if $f_2(\lambda_1,\dots,\lambda_r)$ and $s_2(p)$ are non-zero, we get
\begin{equation*}
\frac{f_1(\lambda_1,\dots,\lambda_r)}{f_2(\lambda_1,\dots,\lambda_r)} = \frac{s_1(p)}{s_2(p)}\text{.}
\end{equation*}
\end{corollary}
\begin{proof}
Take $\mu \in \mathbb{C}$ such that $f_1(\lambda_1,\dots,\lambda_r) = \mu f_2(\lambda_1,\dots,\lambda_r)$. Then use the previous fact for $f_1 - \mu f_2$
and the corresponding section $s_1 - \mu s_2$.
\end{proof}
\subsection{Generators of the class group}\label{subsection:freeness_of_the_class_group}
\begin{proposition}\label{proposition:basis_of_class_group}
Let $X_\Sigma$ be a smooth complete toric variety. Pick any $\sigma \in \Sigma$ of full dimension. Let $\rho_1,\dots,\rho_d$ be the rays that are not in
$\sigma$. Then the classes $[D_{\rho_1}],\dots,[D_{\rho_d}]$ form a basis of the class group.
\end{proposition}
\begin{proof}
In the proof of Proposition \ref{proposition:isomorphism}, given a maximal cone $\sigma \in \Sigma$ and a class $\alpha \in \operatorname{Pic} X_\Sigma$, we
constructed a monomial $g$ of degree $\alpha$ in which only variables corresponding to rays outside $\sigma$ occur (see Equation \eqref{equation:monomial}). This means
that the classes $[D_{\rho_1}],\dots,[D_{\rho_d}]$ generate the class group (we use here that for smooth varieties $\operatorname{Pic} X_\Sigma = \operatorname{Cl} X_\Sigma$).
Now consider the exact sequence
\begin{equation*}
0 \to M \to \mathbb{Z}^{\Sigma(1)} \to \operatorname{Cl} X_\Sigma \to 0\text{.}
\end{equation*}
From this we get that $\operatorname{rank} \operatorname{Cl} X_\Sigma = \#\Sigma(1) - \dim M_\mathbb{C}$, which is equal to the number of rays not in $\sigma$, since the cone
$\sigma$ is smooth. But now from the exact sequence
\begin{equation*}
0 \to \mathbb{Z}^{l} \to \bigoplus_{\rho \notin \sigma(1)}\mathbb{Z}[D_{\rho}] \to \operatorname{Cl} X_\Sigma \to 0
\end{equation*}
we get that $l = 0$, so
\begin{equation*}
\operatorname{Cl} X_\Sigma \cong \bigoplus_{\rho \notin \sigma(1)}\mathbb{Z}[D_{\rho}]\text{,}
\end{equation*}
as desired.
\end{proof}
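For example:
\begin{example}
Let $X_\Sigma = \mathbb{P}^2$, whose fan has the rays generated by $e_1$, $e_2$ and $-e_1-e_2$, and let $\sigma$ be the maximal cone spanned by $e_1$ and $e_2$. The only ray not in $\sigma$ is the one generated by $-e_1-e_2$, and the class of the corresponding divisor (a line) indeed generates $\operatorname{Cl} \mathbb{P}^2 \cong \mathbb{Z}$.
\end{example}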
\subsection{Dehomogenization and homogenization}\label{subsection:dehomo_and_homo}
Let $X_\Sigma$ be a smooth projective toric variety. Denote the rays of the fan by $\rho_1,\dots,\rho_r$. Fix $\sigma \in \Sigma$. We want to
restrict $X_\Sigma$ to the affine patch $U_\sigma$. Suppose the rays that are not in $\sigma$ are $\rho_1,\dots, \rho_k$. Then the restriction
corresponds to setting $x_1,\dots, x_k$ to $1$. Denote by $\pi$ the dehomogenization on $T$ (i.e.\ setting every power $y_i^d$ for $i = 1,\dots,k$ to $1$) and by $\pi^*$
the dehomogenization on $S$ (i.e.\ setting $x_1,\dots,x_k$ to $1$).
\begin{proposition}\label{proposition:injectivity}
For any $\alpha \in \operatorname{Cl} X_\Sigma = \operatorname{Pic} X_\Sigma$ the map
\begin{equation*}
\pi : T_\alpha \to T
\end{equation*}
is injective. So is the map
\begin{equation*}
\pi^* : S_\alpha \to S \text{.}
\end{equation*}
\end{proposition}
\begin{proof}
Suppose $\pi(F) = 0$ for some $0 \neq F \in T_\alpha$. Since $\pi$ maps monomials to monomials, some cancellation must occur: there exist two different monomials $y_1^{c_1} \cdot\ldots\cdot y_r^{c_r}$ and $y_1^{d_1}\cdot\ldots\cdot
y_r^{d_r}$ of degree $\alpha$ which become equal after applying $\pi$. This means that $c_{k+1} = d_{k+1},\dots,c_r = d_r$, so $\deg
y_1^{c_1}\cdot\ldots\cdot y_k^{c_k} = \deg y_1^{d_1} \cdot\ldots\cdot y_k^{d_k}$. The tuples $(c_1,\dots,c_k)$ and $(d_1,\dots,d_k)$ are different, so this gives a
non-trivial relation between the classes corresponding to $y_1,\dots,y_k$, contradicting Proposition \ref{proposition:basis_of_class_group}.
A similar proof works for $\pi^*$.
\end{proof}
Now let us define the homogenization $f^\text{h}$ of a non-zero polynomial $f \in \mathbb{C}[x_{k + 1},\dots, x_r]$. Suppose
\begin{equation*}
f = \sum_{\alpha \in \operatorname{Cl} X_\Sigma}{f_\alpha}\text{,}
\end{equation*}
where each $f_\alpha$ is homogeneous of degree $\alpha$. Let $D_i$ be the divisor corresponding to $\rho_i$. From Proposition
\ref{proposition:basis_of_class_group} we know that the classes $[D_i]$, where $i = 1,\dots,k$, form a basis of the class group. Hence, for each
$\alpha$ such that $f_\alpha \neq 0$ we have
\begin{equation*}
\alpha = a_{\alpha,1} [D_1] + \ldots + a_{\alpha,k} [D_k] \text{,}
\end{equation*}
where $a_{\alpha,i} \in \mathbb{Z}$. Let $b_i = \max\{a_{\alpha,i} | \alpha \in \operatorname{Cl} X_\Sigma, f_\alpha \neq 0\}$. Then we set
\begin{equation*}
f^\text{h} = \sum_{\alpha \in \operatorname{Cl} X_\Sigma } {x_1^{b_1 - a_{\alpha,1}}\cdot\ldots\cdot x_k^{b_k - a_{\alpha,k}} f_\alpha}\text{.}
\end{equation*}
This is homogeneous of degree $b_1 [D_1] +\ldots + b_k [D_k]$.
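For $X_\Sigma = \mathbb{P}^2$ this recovers the usual homogenization.
\begin{example}
Let $X_\Sigma = \mathbb{P}^2$ with $S = \mathbb{C}[x_1, x_2, x_3]$, and choose the maximal cone $\sigma$ so that the only ray not in $\sigma$ is the one corresponding to $x_1$ (thus $k = 1$ and $\operatorname{Cl} X_\Sigma = \mathbb{Z}[D_1]$). For $f = x_2 + x_3^2 \in \mathbb{C}[x_2, x_3]$ we have $b_1 = 2$, and
\begin{equation*}
f^\text{h} = x_1 x_2 + x_3^2\text{,}
\end{equation*}
which is homogeneous of degree $2[D_1]$.
\end{example}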
\begin{proposition}
\label{proposition:hom_dehom}
Suppose $f \in S$ is homogeneous and non-zero and let $f = x_1^{e_1}\cdot\ldots\cdot x_k^{e_k} \hat{f}$, where the $e_i$ are natural numbers and $\hat{f}$ is not
divisible by any of the $x_1,\dots,x_k$. Then $(\pi^*(f))^\text{h} = \hat{f}$.
\end{proposition}
\begin{proof}
We know that $\pi^*(f)$ is non-zero from Proposition \ref{proposition:injectivity}. Let $\pi^*(f) = \sum_{\alpha \in \operatorname{Cl} X_\Sigma} g_\alpha$, where
each $g_\alpha$ has degree $\alpha$. Then
\begin{equation*}
f = x_1^{e_1}\cdot\ldots\cdot x_k^{e_k} \cdot \left( \sum_{\alpha \in \operatorname{Cl} X_\Sigma}{x_1^{d_{\alpha,1}}\cdot\ldots\cdot x_r^{d_{\alpha,r}} g_\alpha}\right)
\end{equation*}
for some natural $d_{\alpha,i}$. Suppose that
\begin{equation*}
\alpha = a_{\alpha,1} [D_1] + \ldots + a_{\alpha,k} [D_k] \text{,}
\end{equation*}
where $a_{\alpha,i}$ are integers. Then
\begin{equation*}
\deg f = (e_1 + d_{\alpha,1} + a_{\alpha,1}) [D_1] + \ldots + (e_k + d_{\alpha,k} + a_{\alpha,k}) [D_k] \text{.}
\end{equation*}
This is true for any $\alpha \in \operatorname{Cl} X_\Sigma$ such that $g_\alpha \neq 0$, and since the classes $[D_1],\dots,[D_k]$ form a basis, the sum $d_{\alpha,i} +
a_{\alpha,i}$ does not depend on $\alpha$; we can therefore set $c_i = d_{\alpha,i} + a_{\alpha,i}$. We know that $c_i
\geq a_{\alpha,i}$ for each $\alpha$, so $c_i \geq \max_\alpha a_{\alpha,i}$. If $c_i > \max_\alpha a_{\alpha,i}$ for some $i$, then $x_i$ divides
$\hat{f}$, which is a contradiction. Hence $c_i = \max_\alpha a_{\alpha,i}$, and therefore $d_{\alpha,i} = \max_{\alpha}a_{\alpha,i} -
a_{\alpha,i}$. It follows that $\hat{f}$ is the homogenization of $\pi^*(f)$.
\end{proof}
\begin{definition}
Suppose $I \subseteq \mathbb{C}[x_{k + 1},\dots, x_r]$ is a non-zero ideal. Let
\begin{equation*}
I^\text{h} = (f^\text{h} | f \in I\setminus \{0\})
\end{equation*}
be the homogenization of $I$.
\end{definition}
\begin{proposition}
\label{proposition:hom_saturated}
The ideal $I^\text{h}$ is saturated with respect to $x_1 \cdot\ldots\cdot x_k$.
\end{proposition}
\begin{proof}
Suppose that $x_1 \cdot\ldots\cdot x_k f \in I^\text{h}$ for some non-zero $f$. Then
\begin{equation*}
x_1 \cdot\ldots\cdot x_k f = g_1 f_1^\text{h} + \ldots + g_l f_l^\text{h}
\end{equation*}
for some $g_i \in S$ and $f_i \in I$. If we set $x_1,\dots,x_k$ to $1$, we get
\begin{equation*}
\pi^*(f) = \pi^*(g_1) f_1 + \ldots + \pi^*(g_l) f_l\text{.}
\end{equation*}
This means that $\pi^*(f) \in I$, and it follows that $(\pi^*(f))^\text{h} \in I^\text{h}$ from the definition of $I^\text{h}$. But $f$ is divisible
by $(\pi^*(f))^\text{h}$ from Proposition \ref{proposition:hom_dehom}, so $f \in I^\text{h}$.
\end{proof}
\begin{proposition}
\label{proposition:binomial}
Suppose $I \subseteq \mathbb{C}[x_{k+1},\dots,x_{r}]$ is generated by binomials of the form $x^{a} - x^{b}$. Then
\begin{equation*}
I^\text{h} = ( (x^a - x^b)^\text{h} | x^a - x^b \in I \setminus \{0\})\text{.}
\end{equation*}
\end{proposition}
\begin{lemma}
\label{lemma:binomial}
Suppose $I \subseteq \mathbb{C}[x_{k+1},\dots,x_{r}]$ is generated by binomials of the form $x^{a} - x^{b}$ and that $f \in I$. Then there are
binomials $x^{c_i} - x^{d_i} \in I$ and $\lambda_i \in \mathbb{C}$, where $i = 1,2,\dots,l$, such that
\begin{equation*}
f = \sum_{i = 1}^l \lambda_i (x^{c_i} - x^{d_i})
\end{equation*}
and every $x^{c_i}, x^{d_i}$ appears as a monomial of $f$ with a non-zero coefficient.
\end{lemma}
\begin{proof}
Suppose
\begin{equation*}
f = \sum_{i = 1}^m \kappa_i (x^{a_i} - x^{b_i})\text{,}
\end{equation*}
where $x^{a_i} - x^{b_i} \in I$ and $\kappa_i \in \mathbb{C}\setminus\{0\}$ for $i = 1,2,\dots,m$. Suppose that some monomial $x^b$ appears in the sum on the
right-hand side but does not appear on the left-hand side. Possibly changing the signs of some $\kappa_i$ (and swapping the corresponding $a_i$ and $b_i$), we may assume that there are
indices $i_1,\dots,i_n$ such that $b_{i_1} = \dots = b_{i_n} = b$ and $\sum_{j=1}^n \kappa_{i_j} = 0$, and that $b$ appears nowhere else in the sum on
the right-hand side. In this case $\kappa_{i_1} = -\sum_{j=2}^n \kappa_{i_j}$ and therefore
\begin{equation*}
f = \sum_{i | b_i \neq b}\kappa_i (x^{a_i} - x^{b_i}) + \sum_{j = 2}^{n}\kappa_{i_j}(x^{a_{i_j}} - x^{a_{i_1}}) \text{,}
\end{equation*}
where each $x^{a_{i_j}} - x^{a_{i_1}} = (x^{a_{i_j}} - x^{b_{i_j}}) - (x^{a_{i_1}} - x^{b_{i_1}}) \in I$. We have reduced the number of summands on
the right-hand side. Continuing this process, we get to the situation where every monomial on the right-hand side appears on the left-hand side with
a non-zero coefficient.
\end{proof}
\begin{proof}[Proof of Proposition \ref{proposition:binomial}]
Let $f \in I$ be a non-zero polynomial. From Lemma \ref{lemma:binomial} we get that there are $\lambda_i \in \mathbb{C}$ and $x^{c_i} - x^{d_i}$,
where $i = 1,\dots,l$, such that
\begin{equation*}
f = \sum_{i = 1}^l \lambda_i (x^{c_i} - x^{d_i})
\end{equation*}
and every $x^{c_i}, x^{d_i}$ appears as a monomial $x^{e_s}$ of $f$ with a non-zero coefficient. Let $s(i), s'(i)$ be such that $x^{c_i} =
x^{e_{s(i)}}$ and $x^{d_i} = x^{e_{s'(i)}}$. We need to get back to the definition of homogenization. Suppose $x^{e_s}$ has
class $a_{s,1} [D_1] + \dots + a_{s,k} [D_k]$. Then $x^{c_i}$ has class $a_{s(i),1} [D_1] + \dots + a_{s(i),k} [D_k]$, and $x^{d_i}$ has class
$a_{s'(i),1} [D_1] + \dots + a_{s'(i),k} [D_k]$. It follows that
\begin{equation*}
(x^{c_i} - x^{d_i})^\text{h} = x_1^{b_{i,1} - a_{s(i),1}} \cdot\ldots\cdot x_k^{b_{i,k} - a_{s(i),k}} x^{c_i} - x_1^{b_{i,1} - a_{s'(i),1}} \cdot\ldots\cdot
x_k^{b_{i,k} - a_{s'(i),k}} x^{d_i}\text{,}
\end{equation*}
where $b_{i,j} = \max(a_{s(i),j}, a_{s'(i),j})$. Let
\begin{equation*}
f = \sum_{s=1}^m \mu_s x^{e_s}\text{.}
\end{equation*}
Then
\begin{equation*}
f^\text{h} = \sum_{s=1}^m \mu_s x_1^{\bar{b}_1 - a_{s,1}} \cdot\ldots\cdot x_k^{\bar{b}_k - a_{s,k}} x^{e_s}\text{.}
\end{equation*}
Here
\begin{multline*}
\bar{b}_j = \max\{ a_{s,j} | s = 1,\dots,m \} = \max\{ a_{s(i),j}, a_{s'(i),j} | i =1,\dots,l \} \\
= \max\{ b_{i,j} | i=1,\dots,l
\}\text{,}
\end{multline*}
as every $x^{c_i}, x^{d_i}$ appears as a monomial of $f$.
Hence
\begin{align*}
f^\text{h} &= \sum_{s=1}^m \mu_s x_1^{\bar{b}_1 - a_{s,1}} \cdot\ldots\cdot x_k^{\bar{b}_k - a_{s,k}}x^{e_s} \\
&= \sum_{i=1}^l \lambda_i\Big(x_1^{\bar{b}_1 - a_{s(i),1}} \cdot\ldots\cdot x_k^{\bar{b}_k - a_{s(i),k}} x^{c_i}
- x_1^{\bar{b}_1 - a_{s'(i),1}} \cdot\ldots\cdot x_k^{\bar{b}_k - a_{s'(i),k}} x^{d_i}\Big) \\
&= \sum_{i=1}^l \lambda_i x_1^{\bar{b}_1- b_{i,1}} \cdot\ldots\cdot x_k^{\bar{b}_k - b_{i,k}} \Big(x_1^{b_{i,1}- a_{s(i),1}} \cdot\ldots\cdot x_k^{b_{i,k} - a_{s(i),k}}
x^{c_i}\\ &- x_1^{b_{i,1} - a_{s'(i),1}} \cdot\ldots\cdot x_k^{b_{i,k} - a_{s'(i),k}} x^{d_i}\Big) \\
&= \sum_{i=1}^l \lambda_i x_1^{\bar{b}_1- b_{i,1}} \cdot\ldots\cdot x_k^{\bar{b}_k - b_{i,k}}(x^{c_i} - x^{d_i})^\text{h}\text{.}
\end{align*}
\end{proof}
\subsection{Embedded tangent space}
Let $X_P$ be the toric variety embedded by a very ample polytope $P$ with vertices in lattice $M$. Let $v$ be a vertex of the polytope $P$ (which
corresponds to a torus fixed point $p \in X_P$).
\begin{proposition}\label{proposition:embedded_tangent_space}
The projective embedded tangent space at $p$ in the embedding by $P$ is given by the linear space of the lattice points of $P$ which belong to the
Hilbert basis of the semigroup $\mathbb{N}(P\cap M - v)$.
\end{proposition}
\begin{proof}
Let $z_0,\dots,z_k$ be the coordinates corresponding to the monomials in the embedding by $P$, with $z_0$ corresponding to the vertex $v$. Let us
look at the affine chart given by setting $z_0 = 1$. The equations of the toric variety in the affine chart come from integral relations between the
lattice points of $P \cap M - v$. The equations of the embedded tangent space at $p$ (in the affine chart) are given in the following way: the forms
\begin{equation*}
\sum_{i=1}^k \left.\frac{\partial f}{\partial z_i}\right|_{(z_1,\dots,z_k) = (0,\dots,0)} z_i \text{,}
\end{equation*}
where $f$ is an equation of the embedded variety $X_P$ in the affine chart, give all the equations.
Suppose $h_j$ is an element of the Hilbert basis of $\mathbb{N}(P\cap M - v)$, and that the coordinate $z_j$ corresponds to $h_j$. Any relation is
of the form
\begin{equation}\label{relation:of_lattice_points}
l h_j + l_1 h_{j_1} + \dots + l_m h_{j_m} = l_{m + 1} h_{j_{m + 1}} + \dots + l_n h_{j_n} \text{,}
\end{equation}
where $h_{j_i} \in P\cap M - v$, the $h_{j_i}$ are mutually different, $h_{j_i} \neq h_j$, $l \geq 0$ is an integer, and the $l_i \geq 1$ for $i = 1,\dots, n$ are positive
integers. In such a relation the left-hand side is not equal to $h_j$ alone, as $h_j$, being an element of the Hilbert basis, is not a non-trivial sum of elements of the semigroup. Let us look at the polynomial equation coming from
Relation \eqref{relation:of_lattice_points}. It is
\begin{equation}\label{equation:of_toric_variety}
z_j^{l} z_{j_1}^{l_1}\cdot\ldots\cdot z_{j_m}^{l_m} = z_{j_{m+1}}^{l_{m+1}} \cdot\ldots\cdot z_{j_n}^{l_n} \text{,}
\end{equation}
where the left-hand side is not equal to $z_j$, and the right-hand side does not contain $z_j$. If we differentiate this equation with respect to
$z_j$ and substitute the point $p$ (which has coordinates $(z_1,\dots,z_k) = (0,\dots,0)$), we get $0 = 0$. Therefore $z_j$ does not appear in the
equation of the embedded tangent space at $p$ coming from Equation \eqref{equation:of_toric_variety}. Hence, the point $(z_1, z_2,\dots,z_k) =
(0,\dots,0,1,0,\dots,0)$ (where the $1$ is on the $j$-th place) satisfies this equation of the embedded tangent space at $p$.
Since the point $(0,\dots,0,1,0,\dots,0)$ does not depend on the chosen Equation \eqref{equation:of_toric_variety}, it satisfies all the
equations of the embedded tangent space at $p$.
We come back to the projective coordinates. We proved that for every vector $h_j$ in the Hilbert basis of $\mathbb{N}(P \cap M - v)$ the point
$[0,\dots,0,1,0,\dots,0]$ (where the $1$ is on the $j$-th place) is in the embedded tangent space. Also the point $p = [1,0,\dots,0]$ is in this
space. As this projective space has dimension equal to the cardinality of the Hilbert basis (see \cite[Lemma 1.3.10]{cox_book}), we get the desired
equality.
\end{proof}
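As an illustration, consider the Veronese surface.
\begin{example}
Let $P = 2\Delta_2 \subseteq M_\mathbb{R} = \mathbb{R}^2$ be twice the unit simplex, giving the second Veronese embedding of $\mathbb{P}^2$ in $\mathbb{P}^5$, and let $v = (0,0)$. Then $P \cap M - v$ consists of the six points $(0,0)$, $(1,0)$, $(0,1)$, $(2,0)$, $(1,1)$, $(0,2)$, the semigroup $\mathbb{N}(P \cap M - v)$ is $\mathbb{N}^2$, and its Hilbert basis is $\{(1,0), (0,1)\}$. So the embedded projective tangent space at $p$ is the $\mathbb{P}^2$ spanned by the coordinates corresponding to $v$, $(1,0)$ and $(0,1)$, in accordance with the Veronese surface being smooth of dimension $2$.
\end{example}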
\section{Apolarity}\label{section:apolarity}
Recall the definition of the apolarity action from Equation \eqref{equation:apolarity}. When $g{\: \lrcorner \:} F = 0$ we often say that $g$ is apolar to $F$.
The grading on $T$ is the same as on $S$:
$$\deg y_i \coloneqq [D_{\rho_i}]\text{.}$$
\begin{remark}
Notice that ${\: \lrcorner \:}$ defined in Equation \eqref{equation:apolarity} could be seen as differentiation, except that we do not multiply by a constant. We
only need to replace $y_i^b$ with $b!\cdot y_i^b$. This can be done by taking $T$ to be the ring of divided powers, see \cite[Appendix
A]{iarrobino_kanev_book_Gorenstein_algebras}, or \cite[Chapter A2.4]{eisenbud} for a coordinate free version. For characteristic zero, this amounts
to setting $y_i^{(b)} = \frac{y_i^b}{b!}$. But here we do not need $T$ to be a ring, we only need it to be a module. So we might as well write
$y_i^b$ instead of $y_i^{(b)}$. It will not matter, provided we do not multiply $y_i^{b_1}$ by $y_i^{b_2}$. This will make some calculations easier.
\end{remark}
\begin{remark}
Notice that when we take $g \in S_\alpha$ and $F \in T_\beta$, then $g{\: \lrcorner \:} F$ is homogeneous of degree $\beta-\alpha$ for any $\alpha$, $\beta
\in \operatorname{Cl} X_\Sigma$. That follows from the fact that when we multiply by subsequent $x_i$'s, the degree of $F$ decreases by $[D_{\rho_i}]$. This
means that, although $T$ is not a graded $S$-module, it becomes a graded $S$-module if we define the grading by
\begin{equation*}
\deg y_i = -[D_{\rho_i}]\text{.}
\end{equation*}
\end{remark}
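Concretely, on monomials the action reads as follows.
\begin{example}
We have $x_1 x_2 {\: \lrcorner \:} y_1^2 y_2 = y_1$, while $x_1^2 {\: \lrcorner \:} y_1 y_2 = 0$, since the exponent of $y_1$ in $y_1 y_2$ is smaller than in $x_1^2$.
\end{example}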
Furthermore, if $F \in T$ is homogeneous, we will denote by $F^\perp$ its annihilator; it is a homogeneous ideal in that case.
Assume $X_\Sigma$ is a complete toric variety. Then we have $S_0 = T_0 = \mathbb{C}$ and $S_\alpha, T_\alpha$ are finite-dimensional vector spaces for
any $\alpha \in \operatorname{Cl} X_\Sigma$.
\begin{proposition}
\label{proposition:duality}
The map $S_{\alpha}\times T_{\alpha}\to T_0 = \mathbb{C}$ given by $(g, F) \mapsto g {\: \lrcorner \:} F$ makes the basis
\begin{equation*}
\{x_1^{a_1}\cdot\ldots\cdot x_r^{a_r}|[a_1D_{\rho_1}+\ldots+a_rD_{\rho_r}]=\alpha\}
\end{equation*}
dual to the basis
\begin{equation*}
\{y_1^{b_1}\cdot\ldots\cdot y_r^{b_r}|[b_1D_{\rho_1}+\ldots+b_rD_{\rho_r}]=\alpha\}
\end{equation*}
for any $\alpha \in \operatorname{Cl} X_\Sigma$. In particular, the pairing is a perfect duality in each degree.
\end{proposition}
\begin{proof}
We know that $x_1^{a_1} \cdot\ldots\cdot x_r^{a_r} {\: \lrcorner \:} y_1^{a_1} \cdot\ldots\cdot y_r^{a_r} = 1$. Consider the value of
$x_1^{a_1} \cdot\ldots\cdot x_r^{a_r} {\: \lrcorner \:} y_1^{b_1} \cdot\ldots\cdot y_r^{b_r}$ when $(a_1,\dots,a_r) \neq (b_1,\dots,b_r)$. We know that
\begin{equation}
x_1^{a_1} \cdot\ldots\cdot x_r^{a_r} {\: \lrcorner \:} y_1^{b_1} \cdot\ldots\cdot y_r^{b_r} =
\begin{cases}
y_1^{b_1-a_1}\cdot\ldots\cdot y_r^{b_r-a_r} & \text{if} \: b_i \geq a_i \: \text{for all} \: i \text{.} \\
0 & \text{otherwise.}
\end{cases}
\label{equation:result_of_duality}
\end{equation}
We want to prove that \eqref{equation:result_of_duality} is zero, so suppose otherwise, i.e.\ $b_i \geq a_i$ for all $i$. Since both monomials have
degree $\alpha$, the monomial $y_1^{b_1-a_1}\cdot\ldots\cdot y_r^{b_r-a_r}$ has degree $\alpha - \alpha = 0$. But the
only monomial whose degree is the trivial class is the constant monomial $1$ (here we use that $X_\Sigma$ is complete). This implies that $b_i = a_i$ for
all $i$, contradicting the assumption that $(a_1,\dots,a_r) \neq (b_1,\dots,b_r)$. Hence $x_1^{a_1}\cdot\ldots\cdot x_r^{a_r}
{\: \lrcorner \:} y_1^{b_1}\cdot\ldots\cdot y_r^{b_r} = 0$, as desired.
\end{proof}
As a corollary, we see that $T = \bigoplus_{\alpha \in \operatorname{Cl} X_\Sigma}{H^0(X_\Sigma, \mathcal{O}(\alpha))^*}$.
Combining Proposition \ref{proposition:duality} and Corollary \ref{corollary:isomorphism}, we get
\begin{proposition}\label{proposition:formula}
Let $X_\Sigma$ be a complete toric variety. Then for any $\alpha \in \operatorname{Pic} X_\Sigma$ such that $\mathcal{O}(\alpha)$ is basepoint free, the map
\begin{equation*}
\varphi \colon X_\Sigma \to \mathbb{P}(H^0(X_\Sigma, \mathcal{O}(\alpha))^*)
\end{equation*}
associated with the complete linear system $|\mathcal{O}(\alpha)|$ is given by
\begin{equation}\label{equation:formula}
\varphi([\lambda_1,\dots,\lambda_r]) = \left[\sum_{\substack{b_1,\ldots,b_r \in \mathbb{Z}_{\geq 0}| \\
y_1^{b_1}\cdot\ldots\cdot y_r^{b_r} \in T_\alpha}}\lambda_1^{b_1}\cdot\ldots\cdot\lambda_r^{b_r}\cdot y_1^{b_1}\cdot\ldots\cdot
y_r^{b_r}\right]\text{.}
\end{equation}
\end{proposition}
\begin{proof}
In general, if $\{s_i | i \in I \}$ is a basis of $H^0(X, \mathcal{O}(\alpha))$ ($I$ is some finite index set), and
$\{s^i | i\in I \} \subseteq H^0(X, \mathcal{O}(\alpha))^*$ is the dual basis, then
\begin{equation*}
\varphi(p) = \left [\sum_{i\in I}s_i(p)\cdot s^i\right ]\text{,}
\end{equation*}
where $s_i(p)$ means evaluating the section $s_i$ at the point $p$. Note that the value of a section at a point is not a well-defined element of $\mathbb{C}$,
but the quotient $s_i(p)/s_j(p) \in \mathbb{C}$ is well defined (whenever $s_j(p) \neq 0$), so the sum makes sense as a class in the projectivization of $H^0(X,
\mathcal{O}(\alpha))^*$.
By Proposition \ref{proposition:duality}, the monomials $y_1^{b_1}\dots y_r^{b_r} \in T_\alpha$ form a dual basis to $x_1^{b_1}\dots x_r^{b_r}$. So
from Corollary \ref{corollary:isomorphism} we know that for any $i = (b_1,\dots,b_r)$, $i' = (b_1',\dots,b_r')$ such that $s_{i'}(p)$ is non-zero we
have
\begin{equation*}
\frac{s_i(p)}{s_{i'}(p)} = \frac{(x_1^{b_1}\cdot\ldots\cdot x_r^{b_r})(p)}{(x_1^{b_1'}\cdot\ldots\cdot x_r^{b_r'})(p)} = \frac{\lambda_1^{b_1}\cdot\ldots\cdot
\lambda_r^{b_r}}{\lambda_1^{b_1'}\cdot\ldots\cdot \lambda_r^{b_r'}}\text{.}
\end{equation*}
The formula (\ref{equation:formula}) follows.
\end{proof}
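\begin{example}
As a sanity check of Formula \eqref{equation:formula}, take $X_\Sigma = \mathbb{P}^1$ and $\alpha = 2$ (identifying $\operatorname{Cl} \mathbb{P}^1 \cong \mathbb{Z}$). Then $T_\alpha$ has basis $y_1^2, y_1 y_2, y_2^2$ and
\begin{equation*}
\varphi([\lambda_1, \lambda_2]) = [\lambda_1^2\, y_1^2 + \lambda_1\lambda_2\, y_1 y_2 + \lambda_2^2\, y_2^2]\text{,}
\end{equation*}
i.e.\ $\varphi$ is the second Veronese embedding of $\mathbb{P}^1$ as a smooth conic in $\mathbb{P}^2$.
\end{example}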
\subsection{Hilbert function}\label{subsection:hilbert_function}
Let $\alpha \in \operatorname{Pic} X_\Sigma$ be a very ample class. Fix $F \in T_\alpha$. The ring $S/{F^\perp}$ is called the apolar ring of $F$. It is graded by
the class group of $X_\Sigma$. Let us denote it by $A_F$. Consider its Hilbert function $H : \operatorname{Cl} X_\Sigma \to \mathbb{Z}_{\geq 0}$ given by
\begin{equation*}
\beta \mapsto \dim_{\mathbb{C}}\left((A_F)_\beta\right)\text{.}
\end{equation*}
The Hilbert function is symmetric; the classical proof for projective space carries over to toric varieties:
\begin{proposition}\label{proposition:hilbert_function_symmetry}
Let $X_\Sigma$ be a complete toric variety. Then for any $\beta \in \operatorname{Cl} X_\Sigma$:
\begin{equation*}
\dim_{\mathbb{C}}(A_F)_\beta = \dim_{\mathbb{C}}(A_F)_{\alpha-\beta}\text{.}
\end{equation*}
\end{proposition}
\begin{proof}
We will prove that the bilinear map $(A_F)_\beta \times (A_F)_{\alpha-\beta} \to \mathbb{C} \cong (A_F)_0$ given by $(g, h) \mapsto (g\cdot
h){\: \lrcorner \:} F$ is a duality. Take any $g \in S_\beta$ such that $g {\: \lrcorner \:} F \neq 0$. Then there is $h \in S_{\alpha-\beta}$ such that $h {\: \lrcorner \:}
(g{\: \lrcorner \:} F) \neq 0$ (because ${\: \lrcorner \:}$ makes $S_{\alpha-\beta}$ and $T_{\alpha-\beta}$ dual by Proposition \ref{proposition:duality}). But this
means that $(h\cdot g) {\: \lrcorner \:} F \neq 0$. We have proven that multiplying by any non-zero $g \in (A_F)_\beta$ is non-zero as a map
$(A_F)_{\alpha-\beta} \to \mathbb{C}$. Similarly, multiplying by any non-zero $h \in (A_F)_{\alpha-\beta}$ is non-zero as a map $(A_F)_{\beta} \to
\mathbb{C}$. We are done.
\end{proof}
\begin{remark}\label{remark:catalecticant}
The values of the Hilbert function of $S/{F^\perp}$ are the same as the ranks of the catalecticant homomorphisms. More precisely, let
\begin{equation*}
C_F^\beta : S_\beta \to T_{\alpha-\beta}
\end{equation*}
be given by
\begin{equation*}
g \mapsto g {\: \lrcorner \:} F\text{.}
\end{equation*}
This map is called the catalecticant homomorphism. We have
\begin{equation*}
\operatorname{rank}{C_F^\beta} = \dim_\mathbb{C}(A_F)_\beta\text{.}
\end{equation*}
This is because the graded piece of $F^\perp$ of degree $\beta$ is the kernel of $C_F^\beta$. For more on catalecticant homomorphisms, see
\cite[Section 2]{teitler_geometric_lower_bounds} or \cite[Chapter 1]{iarrobino_kanev_book_Gorenstein_algebras}.
\end{remark}
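\begin{example}
Take $X_\Sigma = \mathbb{P}^1$, $\alpha = 3$ and $F = y_1^2 y_2 \in T_3$. Then $F^\perp = (x_1^3, x_2^2)$, so $A_F = \mathbb{C}[x_1,x_2]/(x_1^3, x_2^2)$ has Hilbert function $1, 2, 2, 1$ in degrees $0, 1, 2, 3$, which is symmetric, as predicted by Proposition \ref{proposition:hilbert_function_symmetry}. Accordingly, the catalecticant $C_F^1 \colon S_1 \to T_2$ sends $x_1 \mapsto y_1 y_2$ and $x_2 \mapsto y_1^2$, so $\operatorname{rank} C_F^1 = 2 = \dim_\mathbb{C}(A_F)_1$.
\end{example}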
\subsection{Apolarity Lemma}\label{section:apolarity_lemma}
Suppose $X$ is a projective variety over $\mathbb{C}$. Let $\mathcal{L}$ be a very ample line bundle on $X$, and $\varphi \colon X \to \mathbb{P}(H^0(X,
\mathcal{L})^*)$ the associated morphism. For a closed subscheme $i\colon R \hookrightarrow X$, $\langle R \rangle$ denotes its linear span in
$\mathbb{P}(H^0(X, \mathcal{L})^*)$, and $\mathcal{I}_R$ denotes its ideal sheaf on $X$. Recall that for any line bundle on $X$, the vector subspace
$H^0(X, \mathcal{I}_R \otimes \mathcal{L}) \subseteq H^0(X, \mathcal{L})$ consists of the sections which pull back to zero on $R$.
Let $(\cdot {\: \lrcorner \:} \cdot): H^0(X, \mathcal{L}) \otimes H^0(X, \mathcal{L})^* \to \mathbb{C}$ denote the natural pairing (this agrees with the
notation introduced in Equation \eqref{equation:apolarity}). Now we are ready to formulate the Apolarity Lemma:
\begin{proposition}[Apolarity Lemma, general version]
\label{proposition:general_apolarity}
Let $F \in H^0(X, \mathcal{L})^*$ be a non-zero element. Then for any closed subscheme $i : R \hookrightarrow X$ we have
\begin{equation*}
F \in \langle R \rangle \iff H^0(X, \mathcal{I}_R \otimes \mathcal{L}) {\: \lrcorner \:} F = 0\text{.}
\end{equation*}
\end{proposition}
\begin{proof}
Take any $s \in H^0(X, \mathcal{L})$, let $H_s$ be the corresponding hyperplane in $H^0(X, \mathcal{L})^*$. Then,
\begin{equation*}
\langle R \rangle \subseteq H_s \iff i^*(s) = 0 \iff s \in H^0(X, \mathcal{I}_R \otimes \mathcal{L}) \text{.}
\end{equation*}
Below we identify sections $s \in H^0(X, \mathcal{L})$ with hyperplanes $H_s$ in $H^0(X, \mathcal{L})^*$. Then for any $R$
\begin{align*}
F \in \langle R \rangle & \iff \forall_{s\in H^0(X, \mathcal{L})}(\langle R \rangle \subseteq H_s \implies F \in H_s) \\
& \iff \forall_{s\in H^0(X, \mathcal{L})}(s \in H^0(X, \mathcal{I}_R \otimes \mathcal{L}) \implies F \in H_s) \\
& \iff \forall_{s\in H^0(X, \mathcal{L})}(s \in H^0(X, \mathcal{I}_R \otimes \mathcal{L}) \implies s {\: \lrcorner \:} F = 0) \\
& \iff H^0(X, \mathcal{I}_R \otimes \mathcal{L}) {\: \lrcorner \:} F = 0 \text{.}
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:multigraded_apolarity}]
From Proposition \ref{proposition:general_apolarity} we know that $F \in \langle R \rangle$ if and only if $I(R)_\alpha \subseteq F^\perp_\alpha$. It
remains to prove that $I(R)_\alpha \subseteq F^\perp_\alpha$ implies $I(R) \subseteq F^\perp$. Suppose $I(R)_\alpha \subseteq F^\perp_\alpha$. Take
any $g \in I(R)_\beta$ for some $\beta \in \operatorname{Cl} X_\Sigma$. We want to show that $g {\: \lrcorner \:} F = 0$. We have $S_{\alpha-\beta}\cdot g \subseteq
I(R)_\alpha$, because $g$ is in the ideal. This means that $(S_{\alpha-\beta} \cdot g) {\: \lrcorner \:} F = 0$, i.e.\ $S_{\alpha-\beta}{\: \lrcorner \:}(g{\: \lrcorner \:} F) = 0$.
Now, $g {\: \lrcorner \:} F$ is an element of $T_{\alpha-\beta}$ annihilated by all of $S_{\alpha-\beta}$. Since $S_{\alpha-\beta} \cong T_{\alpha-\beta}^*$
by Proposition \ref{proposition:duality}, it follows that $g {\: \lrcorner \:} F = 0$.
\end{proof}
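\begin{example}
To illustrate Theorem \ref{theorem:multigraded_apolarity} in the classical setting, take $X_\Sigma = \mathbb{P}^1$, $\alpha = 3$ and $F = y_1^2 y_2$. The ideal $(x_2^2)$ is $B$-saturated and defines the length-$2$ scheme $R$ supported at $[1:0]$. Since $x_2^2 {\: \lrcorner \:} F = 0$, we have $(x_2^2)_3 \subseteq F^\perp_3$, so the theorem gives $F \in \langle R \rangle$; indeed, $\langle R \rangle$ is the tangent line to the twisted cubic at $[y_1^3]$, which is spanned by $y_1^3$ and $y_1^2 y_2$. In particular, $\operatorname{cr}(F) \leq 2$.
\end{example}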
\begin{remark}\label{remark:agreeing_ideals}
By Proposition \ref{proposition:ideal}, we might have taken $\sat{I(R)}$ instead of $I(R)$ in Theorem \ref{theorem:multigraded_apolarity}. By
\cite[Theorem 3.7]{cox_homogeneous}, we might have taken any $B$-saturated ideal defining $R$.
\end{remark}
\section{Catalecticant bounds}\label{section:catalecticant_bounds}
We prove lower bounds for various kinds of rank (called catalecticant bounds). They help us to calculate these ranks in Section
\ref{section:examples}. See \cite[Section 2]{teitler_geometric_lower_bounds} for a different viewpoint on these types of lower bounds in the cases of
the Veronese variety, Segre-Veronese variety and general varieties.
\begin{proposition}\label{proposition:zero_dimensional}
Let $X$ be a complete variety and $R$ be a zero-dimensional subscheme of $X$ with ideal sheaf $\mathcal{I}_R$. Then for any line bundle $\mathcal{L}$
\begin{equation*}
\operatorname{length}{R} \geq h^0(X, \mathcal{L}) - h^0(X, \mathcal{I}_R \otimes \mathcal{L})\text{.}
\end{equation*}
Recall that for a zero-dimensional scheme $R$, $\operatorname{length}{R} = \dim_{\mathbb{C}}H^0(R, \mathcal{O}_R)$.
\end{proposition}
\begin{proof}
We have an exact sequence
\begin{equation*}
0 \to \mathcal{I}_R \to \mathcal{O}_X \to \mathcal{O}_R \to 0\text{.}
\end{equation*}
We tensor it with $\mathcal{L}$:
\begin{equation*}
0 \to \mathcal{I}_R\otimes \mathcal{L} \to \mathcal{L} \to \mathcal{L}_{|R} \to 0\text{.}
\end{equation*}
After taking global sections (which are left-exact), we get an exact sequence
\begin{equation*}
0 \to H^0(X, \mathcal{I}_R\otimes \mathcal{L}) \to H^0(X, \mathcal{L}) \to H^0(R, \mathcal{L}_{|R})\text{.}
\end{equation*}
It follows that
\begin{equation*}
h^0(R, \mathcal{L}_{|R}) \geq h^0(X,\mathcal{L}) - h^0(X, \mathcal{I}_R \otimes \mathcal{L})\text{.}
\end{equation*}
But on a zero-dimensional scheme, every line bundle trivializes. This means $h^0(R, \mathcal{L}_{|R}) = h^0(R, \mathcal{O}_R)$, which is the length
of $R$.
\end{proof}
Let $X_\Sigma$ be a projective simplicial toric variety. Let us fix a very ample class $\alpha \in \operatorname{Pic} X_\Sigma$. Suppose $\beta \in \operatorname{Cl} X_\Sigma$.
The linear map ${\: \lrcorner \:} \colon S_{\beta}\otimes T_{\alpha} \to T_{\alpha-\beta}$ can be seen as coming from the morphism
\begin{equation*}
\mathcal{O}(\beta)\otimes \mathcal{O}(\alpha - \beta) \to \mathcal{O}(\alpha)
\end{equation*}
by taking multiplication of global sections:
\begin{equation*}
H^0(X_\Sigma, \mathcal{O}(\beta)) \otimes H^0(X_\Sigma, \mathcal{O}(\alpha - \beta)) \to H^0(X_\Sigma, \mathcal{O}(\alpha))
\end{equation*}
and rearranging the terms:
\begin{equation*}
H^0(X_\Sigma, \mathcal{O}(\beta))\otimes H^0(X_\Sigma, \mathcal{O}(\alpha))^* \xra{{\: \lrcorner \:}} H^0(X_\Sigma, \mathcal{O}(\alpha -\beta))^*\text{.}
\end{equation*}
For any $\gamma \in \operatorname{Cl} X_\Sigma$ the space $H^0(X_\Sigma, \mathcal{O}(\gamma))$ is $S_\gamma$ and we identify $T_\gamma$ with
$H^0(X_\Sigma, \mathcal{O}(\gamma))^*$ by Proposition \ref{proposition:duality}. Notice that if we fix $F \in H^0(X_\Sigma, \mathcal{O}(\alpha))^*$,
then the map above becomes the catalecticant homomorphism
\begin{equation*}
C_F^\beta : H^0(X_\Sigma, \mathcal{O}(\beta)) \to H^0(X_\Sigma, \mathcal{O}(\alpha-\beta))^*
\end{equation*}
from Remark \ref{remark:catalecticant}.
As a corollary of Proposition \ref{proposition:zero_dimensional} and the Apolarity Lemma (Theorem \ref{theorem:multigraded_apolarity}), we get the
catalecticant bound in the special case of line bundles.
\begin{corollary}[Catalecticant bound for cactus rank]\label{corollary:cactus_catalecticant_bound}
For any $\beta \in \operatorname{Pic} X_\Sigma$, and any $F \in H^0(X_\Sigma, \mathcal{O}(\alpha))^*$ we have
\begin{equation*}
\operatorname{cr}(F) \geq \operatorname{rank}{C_F^\beta}\text{.}
\end{equation*}
\end{corollary}
\begin{proof}
Take any zero-dimensional scheme $R\hookrightarrow X_\Sigma$ such that $F \in \langle R \rangle$. Let $I$ be any $B$-saturated ideal defining $R$.
We have
\begin{multline*}
\operatorname{length} R \geq h^0(X_\Sigma, \mathcal{O}(\beta)) - h^0(X_\Sigma, \mathcal{I}_R \otimes \mathcal{O}(\beta)) = \dim_{\mathbb{C}}(S/I)_\beta \\
\geq \dim_{\mathbb{C}}(S/F^\perp)_\beta = \dim_{\mathbb{C}} \operatorname{im} C_F^\beta \text{,}
\end{multline*}
where the first inequality follows from Proposition \ref{proposition:zero_dimensional}, and the second from Theorem
\ref{theorem:multigraded_apolarity}. We also used that $I(R)$ agrees with any saturated ideal defining $R$ in degrees coming from $\operatorname{Pic} X_\Sigma$,
see Remark \ref{remark:agreeing_ideals}, and the fact that values of the Hilbert function are ranks of catalecticant homomorphisms (Remark
\ref{remark:catalecticant}).
\end{proof}
The bound for cactus rank does not hold for classes $\beta \notin \operatorname{Pic} X_\Sigma$. See Subsection \ref{subsection:weighted_projective_plane} for an
example. But the bound does hold for rank and $\beta \in \operatorname{Cl} X_\Sigma$:
\begin{proposition}[Catalecticant bound for rank]\label{proposition:rank_catalecticant_bound}
For any $\beta \in \operatorname{Cl} X_\Sigma$, and any $F \in H^0(X_\Sigma, \mathcal{O}(\alpha))^*$ we have
\begin{equation*}
\operatorname{r}(F) \geq \operatorname{rank}{C_F^\beta}\text{.}
\end{equation*}
\end{proposition}
The following proof is an adaptation of \cite[the ``surprisingly quick proof'' after equation (8)]{teitler_geometric_lower_bounds}.
\begin{proof}
For any $\gamma \in \operatorname{Cl} X_\Sigma$ and any $(\lambda_1,\dots,\lambda_r) \in \mathbb{C}^r$, define a polynomial in $y_1,\dots,y_r$
\begin{equation*}
\psi_\gamma(\lambda_1,\dots,\lambda_r) = \sum_{\substack{ a_1,\dots,a_r \in \mathbb{Z}_{\geq 0}| \\ y_1^{a_1}\cdot\ldots\cdot y_r^{a_r} \in
T_\gamma}}{\lambda_1^{a_1}\cdot\ldots\cdot \lambda_r^{a_r}\cdot y_1^{a_1}\cdot\ldots\cdot y_r^{a_r}}\text{.}
\end{equation*}
First we prove the formula
\begin{equation*}
g {\: \lrcorner \:} \psi_\alpha(\lambda_1,\dots,\lambda_r) = g(\lambda_1,\dots,\lambda_r) \psi_{\alpha-\beta}(\lambda_1,\dots,\lambda_r)
\end{equation*}
for any $g \in S_{\beta}$ (here $g(\lambda_1,\dots,\lambda_r)$ means evaluating the polynomial $g$ at the $\lambda_i$'s). The formula is linear in
$g$, so we may assume $g = x_1^{b_1}\cdot\ldots\cdot x_r^{b_r}$.
Let $P$ be the set of all monomials $y_1^{a_1}\cdot\ldots\cdot y_r^{a_r}$ of degree $\alpha$ such that $g {\: \lrcorner \:} y_1^{a_1}\cdot\ldots\cdot y_r^{a_r} \neq 0$ (i.e.\
$a_i \geq b_i$ for all $i$). Then the map $g {\: \lrcorner \:} \cdot$ is a bijection from $P$ onto the set of all monomials of degree $\alpha-\beta$ in
variables $y_1,\dots,y_r$ (injectivity is clear; for surjectivity note that for any $y_1^{a_1'}\cdot\ldots\cdot y_r^{a_r'} \in T_{\alpha-\beta}$ the monomial
$y_1^{a_1' + b_1}\cdot\ldots\cdot y_r^{a_r' +b_r} \in T_\alpha$ is what we are looking for). It follows that
\begin{align*}
g {\: \lrcorner \:} \psi_\alpha(\lambda_1,\dots,\lambda_r) &= \sum_{\substack{ a'_1,\dots,a'_r \in \mathbb{Z}_{\geq 0}| \\ y_1^{a'_1}\cdot\ldots\cdot y_r^{a'_r}
\in T_{\alpha-\beta}}}\lambda_1^{a'_1 + b_1}\cdot\ldots\cdot \lambda_r^{a'_r + b_r} y_1^{a'_1}\cdot\ldots\cdot y_r^{a'_r}\\ &= \lambda_1^{b_1}\cdot\ldots\cdot
\lambda_r^{b_r}\cdot \sum_{\substack{ a'_1,\dots,a'_r \in \mathbb{Z}_{\geq 0}| \\ y_1^{a'_1}\cdot\ldots\cdot y_r^{a'_r} \in
T_{\alpha-\beta}}}\lambda_1^{a'_1}\cdot\ldots\cdot \lambda_r^{a'_r} y_1^{a'_1}\cdot\ldots\cdot y_r^{a'_r}\\ &=
g(\lambda_1,\dots,\lambda_r)\psi_{\alpha-\beta}(\lambda_1,\dots,\lambda_r)\text{.}
\end{align*}
Now we proceed to the proof of the catalecticant bound. Take $F \in H^0(X, \mathcal{O}(\alpha))^*$. Then $\operatorname{r}(F)$ is the least $l$ such that $F =
\psi_\alpha(\mb{\lambda}^1) + \dots + \psi_\alpha(\mb{\lambda}^l)$ for some $\mb{\lambda}^1,\dots,\mb{\lambda}^l \in \mathbb{C}^r$ (basically
because $\psi_\alpha$ agrees with $\varphi_{|\mathcal{O}(\alpha)|}$ from Proposition \ref{proposition:formula}). We want to bound from above the
dimension of the image of the map $C^\beta_F : S_\beta \to T_{\alpha-\beta}$, $g \mapsto g {\: \lrcorner \:} F$. But for any $g \in S_\beta$ we have
\begin{equation*} g
{\: \lrcorner \:} F = g {\: \lrcorner \:} (\psi_\alpha(\mb{\lambda}^1) + \dots + \psi_\alpha(\mb{\lambda}^l)) = g(\mb{\lambda}^1)\cdot
\psi_{\alpha-\beta}(\mb{\lambda}^1)+\dots+g(\mb{\lambda}^l)\cdot \psi_{\alpha-\beta}(\mb{\lambda}^l)\text{.}
\end{equation*}
So for any $g$ in the domain of the map, the image $C_F^\beta(g)$ is in
\begin{equation*}
\langle \psi_{\alpha-\beta}(\mb{\lambda}^1), \dots, \psi_{\alpha-\beta}(\mb{\lambda}^l) \rangle \text{.}
\end{equation*}
It follows that the rank of $C_F^\beta$ is at most $l$.
\end{proof}
\begin{proposition}\label{proposition:closed_set}
Fix $\beta \in \operatorname{Cl} X_\Sigma$. Then for any $l \in \mathbb{Z}_{+}$ the set of points $F \in \mathbb{P}(H^0(X_\Sigma,\mathcal{O}(\alpha))^*)$ such that
$\operatorname{rank}(C_F^\beta) \leq l$ is Zariski-closed.
\end{proposition}
\begin{proof}
Pick a basis of $H^0(X_\Sigma, \mathcal{O}(\beta))$ and a basis of $H^0(X_\Sigma, \mathcal{O}(\alpha - \beta))$. Then ${\: \lrcorner \:}$ becomes a matrix
with entries in $H^0(X_\Sigma, \mathcal{O}(\alpha))$. In order to get the rank of the map $C_F^\beta = \cdot {\: \lrcorner \:} F$, we evaluate the matrix at
$F \in H^0(X_\Sigma, \mathcal{O}(\alpha))^*$. Hence the set of those $F$ for which the rank of $\cdot {\: \lrcorner \:} F$ is at most $l$ is given by the
vanishing of the $(l+1)\times(l+1)$ minors of the matrix. These minors are polynomials from $\operatorname{Sym}^\bullet H^0(X_\Sigma, \mathcal{O}(\alpha))$. We are done.
\end{proof}
\begin{corollary}[Catalecticant bound for border rank]\label{corollary:border_catalecticant_bound}
For any $\beta \in \operatorname{Cl} X_\Sigma$ and any $F \in T_\alpha$ we have
\begin{equation*}
\operatorname{\underline{r}}(F) \geq \operatorname{rank} C_F^\beta\text{.}
\end{equation*}
\end{corollary}
\begin{proof}
Let $k = \operatorname{\underline{r}}(F)$. From Proposition \ref{proposition:rank_catalecticant_bound} we know that
\begin{equation*}
\sigma^0_k(X) = \{[G] \in \mathbb{P}T_\alpha| \operatorname{r}(G) \leq k\} \subseteq \{[G] \in \mathbb{P}T_\alpha | \operatorname{rank} C_G^\beta \leq k\}\text{.}
\end{equation*}
Since the set on the right hand side is closed (Proposition \ref{proposition:closed_set}), we have
\begin{equation*}
\sigma_k(X) = \overline{\sigma_k^0(X)} \subseteq \{[G] \in \mathbb{P}T_\alpha | \operatorname{rank} C_G^\beta \leq k\}\text{,}
\end{equation*}
and the claim follows.
\end{proof}
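\begin{example}
These bounds need not be sharp for rank. Take $X_\Sigma = \mathbb{P}^1$, $\alpha = 3$ and $F = y_1^2 y_2$ once more. Then $\operatorname{rank} C_F^\beta \leq 2$ for every $\beta$, while classically $\operatorname{r}(F) = 3$ for this binary monomial. On the other hand $\operatorname{\underline{r}}(F) = 2$, so the bound of Corollary \ref{corollary:border_catalecticant_bound} is attained here.
\end{example}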
\section{Upper bound on cactus rank}\label{section:upper_bound_on_cactus}
In this section, using apolarity, we improve the bound on the cactus rank given in \cite{ballico_bernardi_gesmundo_cactus_rank_segre_veronese}. We
generalize the ideas first given in \cite{bernardi_ranestad_cactus_rank_of_cubics} to the multigraded setting.
Recall the maps of dehomogenization $\pi, \pi^*$ and the notion of homogenization denoted by $f^\text{h}$ from Subsection
\ref{subsection:dehomo_and_homo}. We dehomogenize by setting $x_1,\dots,x_k$ to $1$ and by setting every power $y_i^b$ to $1$, where $1 \leq i \leq
k$. Let $F \in H^0(X, \mathcal{O}(\alpha))^*$, and let $f = \pi(F)$.
\begin{proposition}
Let $G$ be any homogeneous polynomial in $S$. Let $g = \pi^*(G)$. Suppose $g {\: \lrcorner \:} f = 0$. Then $G {\: \lrcorner \:} F = 0$.
\end{proposition}
\begin{proof}
Let $\tilde{F} = (y_1\cdot\ldots\cdot y_k)^D\cdot F$, where $D > \deg G$ and $\deg G$ denotes the total degree of $G$ as a non-homogeneous
polynomial. We will show that $\pi(G {\: \lrcorner \:} \tilde{F}) = g {\: \lrcorner \:} f$. By bilinearity of ${\: \lrcorner \:}$, we may assume that $\tilde{F}$ and $G$
are monomials, i.e.\ $G = x_1^{b_1}\cdot\ldots\cdot x_r^{b_r}$ and $\tilde{F} = y_1^{a_1}\cdot\ldots\cdot y_r^{a_r}$. We have
\begin{equation*}
G {\: \lrcorner \:} \tilde{F} = \begin{cases}
y_1^{a_1 - b_1}\cdot\ldots\cdot y_r^{a_r - b_r} & \text{if } a_i \geq b_i \text{ for all } i\text{,} \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
and
\begin{equation*}
g {\: \lrcorner \:} f = \begin{cases}
y_{k+1}^{a_{k+1} - b_{k+1}}\cdot\ldots\cdot y_r^{a_r - b_r} & \text{if } a_i \geq b_i \text{ for } i \geq k+1\text{,} \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
Since for $i\leq k$ we have $a_i \geq D > b_i$, we get that the conditions $a_i \geq b_i$ for all $i$ and $a_i \geq b_i$ for $i \geq k+1$ are
equivalent. It follows that $\pi(G {\: \lrcorner \:} \tilde{F}) = g{\: \lrcorner \:} f$.
As $\pi$ is injective (Proposition \ref{proposition:injectivity}), we immediately get that $G {\: \lrcorner \:} \tilde{F} = 0$. We know that $F =
(x_1\cdot\ldots\cdot x_k)^D {\: \lrcorner \:} \tilde{F}$, so
\begin{equation*}
G {\: \lrcorner \:} F = G {\: \lrcorner \:} ((x_1\cdot\ldots\cdot x_k)^D {\: \lrcorner \:} \tilde{F}) = (x_1\cdot\ldots\cdot x_k)^D {\: \lrcorner \:} (G {\: \lrcorner \:} \tilde{F}) = 0\text{.}
\end{equation*}
\end{proof}
This means that $(f^\perp)^\text{h} \subseteq F^\perp$, so by Theorem \ref{theorem:multigraded_apolarity} the ideal $(f^\perp)^\text{h}$ defines a scheme $R$
such that $F \in \langle R \rangle$ (notice that $(f^\perp)^\text{h}$ is $B$-saturated, as it is $x_1\cdot\ldots\cdot x_r$-saturated by Proposition
\ref{proposition:hom_saturated}). Hence, its length gives an upper bound on the cactus rank of $F$. Let us calculate this length in the case of the
Segre-Veronese embedding.
Let $\mathbb{P}^{n_1}\times \dots \times \mathbb{P}^{n_k}$ be embedded by the line bundle $\mathcal{O}(d_1,\dots,d_k)$. Here, after we dehomogenize
$F$, we get that $f$ is a polynomial in $n_1$ variables of degree $(1,0,\dots,0)$, $n_2$ variables of degree $(0,1,0,\dots,0)$, \dots, and $n_k$
variables of degree $(0,\dots,0,1)$. We need to bound from above
\begin{equation*}
\dim \pi^*(S)/f^\perp\text{.}
\end{equation*}
This is equal to the dimension of the space of all partial derivatives of $f$. We do this just as in \cite[Proof of Theorem
3]{bernardi_ranestad_cactus_rank_of_cubics}. The space of all multihomogeneous polynomials of degree $(e_1,\dots,e_k)$ in $n_1 + n_2 + \dotsb + n_k$
variables has dimension
\begin{equation*}
\binom{n_1 -1 + e_1}{e_1}\cdot\ldots\cdot \binom{n_k -1 + e_k}{e_k}\text{.}
\end{equation*}
Now let $d = d_1 +\dots + d_k$ be the total degree. The partial derivatives of $f$ obtained by differentiating with operators of total degree at most
$\frac{d}{2}$ are bounded in number by the dimension of the space of such operators, i.e.\ by the number of monomials of the corresponding multidegrees.
The remaining partials are polynomials of total degree less than $\frac{d}{2}$, so we bound them by the dimension of the space of all such polynomials. Hence,
\begin{align*}
\operatorname{cr}(F) &\leq \sum_{\substack{(e_1,\dots,e_k) |\\ e_1 + \dots + e_k \leq d/2 }}\binom{n_1 -1+ e_1}{e_1}\cdot\ldots\cdot \binom{n_k -1+ e_k}{e_k} \\
&+\sum_{\substack{(e_1,\dots,e_k) |\\ e_1 + \dots + e_k > d/2 }}\binom{n_1 -1+ d_1 - e_1}{d_1 - e_1}\cdot\ldots\cdot \binom{n_k -1 + d_k - e_k}{d_k - e_k}\text{.}
\end{align*}
This is stronger than the bound in \cite{ballico_bernardi_gesmundo_cactus_rank_segre_veronese}, since the authors there count all monomials, while we
count only monomials of bounded multidegree.
\begin{example}
Consider the Segre embedding
\begin{equation*}
\underbrace{\mathbb{P}^n \times \dots \times \mathbb{P}^n}_{k \text{ times}} \hookrightarrow \mathbb{P}(\mathbb{C}^{n+1}\otimes \dots \otimes
\mathbb{C}^{n+1})\text{,}
\end{equation*}
which is given by the line bundle $\mathcal{O}(1,\dots,1)$. Then the bound
gives for any $F$
\begin{align*}
\operatorname{cr}(F) &\leq 1 + kn + \binom{k}{2}n^2 +\dots +\binom{k}{ k/2 } n^{ k/2 } \\
&+ \binom{k}{ k/2 - 1} n^{ k/2 - 1} + \dots +\binom{k}{2}n^2 + kn + 1
\end{align*}
for $k$ even, and
\begin{align*}
\operatorname{cr}(F) &\leq 1 + kn + \binom{k}{2}n^2 +\dots +\binom{k}{\lfloor k/2 \rfloor} n^{\lfloor k/2 \rfloor} \\
&+ \binom{k}{\lfloor k/2 \rfloor} n^{\lfloor k/2 \rfloor} + \dots +\binom{k}{2}n^2 + kn + 1
\end{align*}
for $k$ odd.
\end{example}
\begin{remark}
The bound is sometimes better if we replace the condition $e_1 + \dots + e_k \leq \frac{d}{2}$ by the condition $l(e_1,\dots,e_k) \leq
\frac{l(d_1,\dots,d_k)}{2}$ for some linear form $l$.
For instance, consider $\mathbb{P}^2\times \mathbb{P}^2 \times \mathbb{P}^2$ embedded by the line bundle $\mathcal{O}(3,3,2)$. We get that
$l(e_1,e_2,e_3) = e_1 + e_2+ e_3$ gives the bound
\begin{equation*}
\operatorname{cr}(F) \leq 255
\end{equation*}
for any $F$, while the form $l(e_1,e_2,e_3) = 2e_1 + 2e_2 +3e_3$ gives the bound
\begin{equation*}
\operatorname{cr}(F) \leq 250
\end{equation*}
for any $F$, and the bound from the article \cite{ballico_bernardi_gesmundo_cactus_rank_segre_veronese} gives
\begin{equation*}
\operatorname{cr}(F) \leq 294
\end{equation*}
for any $F$.
\end{remark}
\section{Examples}\label{section:examples}
We use what we have proved to look at some examples. In this section, we denote the variables of the ring $S$ by Greek letters $\alpha,
\beta,\dots$ and the corresponding dual variables in $T$ by $x, y,\dots$ (possibly with subscripts).
We calculate ranks, cactus ranks and border ranks (denoted by $\operatorname{r}(F)$, $\operatorname{cr}(F)$, $\operatorname{\underline{r}}(F)$) of some monomials $F$ for toric surfaces embedded into
projective spaces. See Definitions \ref{definition:sigma_rank_border_rank} and \ref{definition:cactus_rank} for the definitions of these ranks.
\subsection{$\mathbb{P}^1\times\mathbb{P}^1$}\label{subsection:p1timesp1}
Consider the set
\begin{equation*}
\{\rho_{\alpha,0} = (1, 0), \rho_{\alpha,1} = (-1,0), \rho_{\beta,0} = (0, 1), \rho_{\beta,1} = (0, -1)\}\text{.}
\end{equation*}
Let $\Sigma$ be the only complete fan such that this set is its set of rays. Then $X_\Sigma$ is $\mathbb{P}^1 \times \mathbb{P}^1$, which is smooth.
\[\begin{tikzpicture}[scale = 0.7]
\draw[-latex, thin] (0,0) -- (1,0);
\draw[-latex, thin] (0,0) -- (0,1);
\draw[-latex, thin] (0,0) -- (-1,0);
\draw[-latex, thin] (0,0) -- (0,-1);
\foreach \x in {-2,...,2}
{
\foreach \y in {-2,...,2}
{
\draw[fill] (\x,\y)circle [radius=0.025];
}
}
\node[below right] at (1,0) {$\rho_{\alpha,0}$};
\node[above] at (0,1) {$\rho_{\beta,0}$};
\node[below left] at (-1,0) {$\rho_{\alpha,1}$};
\node[below] at (0,-1) {$\rho_{\beta,1}$};
\end{tikzpicture}\]
Its class group is the free abelian group on two generators $D_{\rho_{\alpha,0}} \sim D_{\rho_{\alpha,1}}$ and $D_{\rho_{\beta,0}} \sim
D_{\rho_{\beta,1}}$. Here and later in this section $D_\rho$ is the toric invariant divisor corresponding to $\rho$ (as in Section
\ref{section:toric_varieties}) and $\sim$ means the linear equivalence. Let $\alpha_0, \alpha_1, \beta_0, \beta_1$ be the variables corresponding to
$\rho_{\alpha,0}$, $\rho_{\alpha,1}$, $\rho_{\beta,0}$, $\rho_{\beta,1}$. As a result, we may think of $S$ as the polynomial ring
$\mathbb{C}[\alpha_0, \alpha_1, \beta_0, \beta_1]$ graded by $\mathbb{Z}^2$, where the grading is given by
\begin{center}
\begin{tabular}{c|cccc}
$f$ & $\alpha_0$ & $\alpha_1$ & $\beta_0$ & $\beta_1$ \\
\hline
\multirow{2}{*}{$\deg f$} & 1 & 1 & 0 & 0 \\
& 0 & 0 & 1 & 1
\end{tabular}
\end{center}
The nef cone in $(\operatorname{Cl} X_\Sigma)_{\mathbb{R}}$ is generated by $D_{\rho_{\alpha,0}}$ and $D_{\rho_{\beta,0}}$.
Let $x_0, x_1, y_0, y_1$ be the basis dual to $\alpha_0, \alpha_1, \beta_0, \beta_1$. We consider the problem of determining cactus ranks and ranks of
monomials $F = x_0^{k_0}x_1^{k_1}y_0^{l_0}y_1^{l_1}$, where $k_0 \geq k_1 \geq 1, l_0 \geq l_1 \geq 1$. The annihilator ideal is $(\alpha_0^{k_0 + 1},
\alpha_1^{k_1 + 1}, \beta_0^{l_0 + 1}, \beta_1^{l_1 + 1})$. We have
\begin{equation*}
\dim (S/F^{\perp})_{(k_1, l_1)} = (k_1 + 1)(l_1 + 1)\text{.}
\end{equation*}
It follows that
\begin{equation*}
\operatorname{cr}(F) \geq (k_1 + 1)(l_1 + 1)\text{.}
\end{equation*}
But $I = (\alpha_1^{k_1 + 1}, \beta_1^{l_1 + 1}) \subseteq F^\perp$ is a $B$-saturated ideal of a scheme of length $(k_1 + 1)(l_1 + 1)$. This is
because we can look locally, at the affine open set where $\alpha_0, \beta_0 \neq 0$. There our scheme becomes
\begin{equation*}
\operatorname{Spec} \mathbb{C}\left[\frac{\alpha_1}{\alpha_0},\frac{\beta_1}{\beta_0}\right]
/\left(\frac{\alpha_1^{k_1 + 1}}{\alpha_0^{k_1 + 1}}, \frac{\beta_1^{l_1 + 1}}{\beta_0^{l_1 + 1}}\right) \cong
\operatorname{Spec} \mathbb{C}[u,v]/(u^{k_1 + 1}, v^{l_1 + 1})
\end{equation*}
for some variables $u,v$. The scheme constructed in this way has the desired length. Hence, by Theorem \ref{theorem:multigraded_apolarity}
\begin{equation*}
\operatorname{cr}(F) = (k_1 + 1)(l_1 + 1)\text{.}
\end{equation*}
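For a quick instance of this formula, take $F = x_0^2 x_1 y_0^2 y_1$ (that is, $k_0 = l_0 = 2$ and $k_1 = l_1 = 1$). Then $\operatorname{cr}(F) = (1+1)(1+1) = 4$, realized by the length-$4$ scheme defined by $(\alpha_1^2, \beta_1^2)$ and supported at the point $([1:0],[1:0])$.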
Now we address the problem of finding ranks of such monomials. We prove Theorem \ref{theorem:inequalities}.
Let $R$ be the polynomial ring $\mathbb{C}[u,v]$.
\begin{lemma}
\label{lemma:bezout}
Consider the ideal $I = (u^m v^n - 1, u^p - v^q) \subseteq R$, where $m, n\geq 1$ and at least one of the integers $p,q$ is greater than or equal to
$1$. Then $V(I) \subset \mathbb{A}^2$ consists of $mq + np$ reduced points.
\end{lemma}
\begin{proof}
First we show that $u^m v^n - 1$ and $u^p - v^q$ intersect transversally. Let $s \in \mathbb{N}_0$ be the smallest number such that $u^s v^t - 1 \in
I$ for some $t \in \mathbb{Z}_+$. Let $i \in \mathbb{N}_0$ be the smallest number such that $u^i - v^j \in I$ for some $j \in \mathbb{Z}_+$. We
claim that $s = 0$ or $i = 0$. Assume to the contrary that $\min(s,i) > 0$. If $s \geq i$, then we have
\begin{equation*}
u^s v^t - 1 - u^{s - i}v^t(u^i - v^j) = u^{s-i}v^{j + t} -1 \in I\text{,}
\end{equation*}
which contradicts the minimality of $s$. If $s < i$, then
\begin{equation*}
v^t(u^i - v^j) - u^{i-s}(u^s v^t - 1) = u^{i-s} - v^{t + j} \in I \text{,}
\end{equation*}
which contradicts the minimality of $i$.
In either case we get that $v^{c} - 1 \in I$ for some $c \in \mathbb{Z}_+$. Similarly, by interchanging the roles of $s, i$ with $t, j$, we get that
$u^{c'} - 1 \in I$ for some $c' \in \mathbb{Z}_+$. The polynomials $v^{c} - 1$ and $u^{c'} - 1$ intersect transversally in $cc'$ points, so
$u^m v^n - 1$ and $u^p - v^q$ also intersect transversally.
We want to use B\'ezout's theorem for $\mathbb{P}^1\times \mathbb{P}^1$. In order to do so, we homogenize generators of $I$ and check that they have
no roots at infinity. We consider the dehomogenization given by the ring homomorphism $S \to R$, $\alpha_0 \mapsto u$, $\alpha_1 \mapsto 1$,
$\beta_0 \mapsto v$, $\beta_1 \mapsto 1$. Then the generators of $I$ become
\begin{equation}
\label{equation:generators}
\alpha_0^m\beta_0^n - \alpha_1^m \beta_1^n, \alpha_0^p \beta_1^q - \alpha_1^p \beta_0^q\text{.}
\end{equation}
Now we can see that if $\alpha_1 = 0$, then $\alpha_0 \neq 0$; substituting into the first generator in \eqref{equation:generators} gives
$\beta_0 = 0$, and substituting into the second gives $\beta_1 = 0$. But $\beta_0$ and $\beta_1$ cannot simultaneously be $0$.
Similarly, if $\beta_1 = 0$, then $\beta_0 \neq 0$; from the first generator we get $\alpha_0 = 0$, and from the second $\alpha_1 = 0$,
but these two equalities cannot hold at the same time.
This means that the polynomials $u^m v^n - 1, u^p - v^q$ have no common roots at infinity, so we can use the multihomogeneous B\'ezout theorem (see
\cite[Example 4.9]{MR1328833}) to get that $u^m v^n - 1, u^p -v^q$ have exactly $mq + np$ common roots.
\end{proof}
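The point count in the lemma is easy to sanity-check in a small case with a computer algebra system. The following Python/sympy sketch (purely illustrative, not part of the proof) verifies the case $(m, n, p, q) = (2, 1, 1, 2)$, where the lemma predicts $mq + np = 5$ reduced points; here \texttt{solve} returns all complex solutions of the zero-dimensional system.

```python
from sympy import symbols, solve

u, v = symbols('u v')

# Small instance of the lemma: (m, n, p, q) = (2, 1, 1, 2).
# Substituting u = v^2 into u^2 v = 1 gives v^5 = 1, so we expect
# m*q + n*p = 2*2 + 1*1 = 5 distinct solutions.
m, n, p, q = 2, 1, 1, 2
solutions = solve([u**m * v**n - 1, u**p - v**q], [u, v])
assert len(solutions) == m * q + n * p
```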
\begin{proof}[Proof of Item \eqref{item:first_inequality} of Theorem \ref{theorem:inequalities}]
If $k_0 = k_1$ or $l_0 = l_1$, Item \eqref{item:first_inequality} becomes Equation \eqref{equation:obvious_rank_inequality}, so it is true. Now
assume $k_0 > k_1$ and $l_0 > l_1$ and consider $I = (u^{k_1 + 1} - v^{l_1 + 1}, u^{k_0 + 1}v^{l_0 - l_1} - 1)$. Let
\begin{equation*}
M = (k_0 + 1)(l_1 + 1) + (k_1+ 1)(l_0 + 1) - (k_1 + 1)(l_1 + 1)\text{.}
\end{equation*}
From Proposition \ref{proposition:hom_dehom} we know that $I^\text{h}$ is $\alpha_1 \beta_1$-saturated, which implies that it is $B$-saturated (we
homogenize in the same way as in the proof of Lemma \ref{lemma:bezout}). By Lemma \ref{lemma:bezout}, we get that $I^\text{h}$ is a radical ideal of
$M$ points. We need to show that $I^\text{h} \subseteq F^\perp$. From Proposition \ref{proposition:binomial} it suffices to show that for $u^m v^n -
1 \in I$ we have $(u^m v^n - 1)^\text{h} \in F^\perp$ and that for $u^p - v^q \in I$ we have $(u^p - v^q)^\text{h} \in F^\perp$.
Since $u^{k_1 + 1} - v^{l_1 + 1} \in I$, from Lemma \ref{lemma:bezout} we get that any element of the form $u^m v^n - 1 \in I$ must satisfy $(k_1 + 1)n +
(l_1 + 1)m \geq M$.
Let us picture the polynomials on $\mathbb{Z}^2$. We put the polynomial $u^m v^n - 1$ in the point $(m,n)$ (that is the degree of the homogenization).
Then the binomials $u^m v^n - 1 \in I$ lie above or on the line connecting two points: $(k_0 + 1, l_0 - l_1)$ and $(k_0 - k_1, l_0 + 1)$. Similarly,
the binomials $u^p - v^q \in I$ lie above or on the line $(k_0 + 1)q + (l_0 - l_1)p = M$.
We have:
\emph{Claim 1:} for each binomial of the form $u^m v^n - 1 \in I$ we have either $m \geq k_0 + 1$ or $n \geq l_0 + 1$. It suffices to argue that there
are no elements $u^m v^n - 1 \in I$ in the interior of the segment connecting points $(k_0 + 1, l_0 - l_1)$ and $(k_0 - k_1, l_0 + 1)$ nor in the
interior of the triangle with vertices $(k_0 +1,l_0 - l_1), (k_0 - k_1, l_0 + 1), (k_0 + 1, l_0 + 1)$. Suppose that $u^m v^n - 1$ is such an element; then
\begin{equation*}
u^m v^n - 1 - (u^{k_0 + 1} v^{l_0 - l_1} - 1) = u^m v^{l_0 - l_1} (v^{n - l_0 + l_1} - u^{k_0 + 1 - m}) \in I\text{.}
\end{equation*}
Since $I$ is $uv$-saturated, we get that $u^{k_0 + 1 - m} - v^{n - l_0 + l_1} \in I$. But $k_0 + 1 - m < k_1 + 1$ and $n - l_0 + l_1 < l_1 + 1$
(because $m > k_0 - k_1$ and $n < l_0 + 1$, respectively), so the point $(p,q) = (k_0 + 1 - m, n - l_0 +l_1)$ lies below the line $ (k_0 + 1)q + (l_0
- l_1)p = M$, a contradiction.
\emph{Claim 2:} there are no binomials $u^p - v^q \in I$ lying in the interior of the rectangle with vertices $(k_1 + 1, 0), (k_0 + 1,0), (k_1 + 1,l_1
+ 1), (k_0 + 1, l_1+ 1)$. Indeed, for any such binomial $u^p - v^q$ we would have
\begin{equation*}
u^p - v^q - u^{p-k_1 -1} (u^{k_1 + 1} - v^{l_1 + 1}) = v^q(u^{p-k_1 - 1} v^{l_1 + 1 - q} - 1) \in I\text{,}
\end{equation*}
so also $u^{p - k_1 -1}v^{l_1 + 1 - q} - 1 \in I$. But $p - k_1 - 1 < k_0 - k_1$ and $l_1 + 1 -q < l_0 + 1$ (since $p < k_0 + 1$ and $q > 0$,
respectively), so the point $(m,n) = (p - k_1 - 1, l_1 + 1 - q)$ lies below the line $(k_1 + 1)n + (l_1 + 1)m = M$, a contradiction.
\emph{Claim 3:} there are no binomials $u^p - v^q \in I$ lying in the interior of the rectangle with vertices $(0, l_1 + 1), (0,l_0 + 1), (k_1 + 1,l_1
+ 1), (k_1 + 1, l_0+ 1)$. This is just \emph{Claim 2} with the roles of the axes reversed.
From \emph{Claims 1, 2} and \emph{3} it follows that for each $u^m v^n -1 \in I$ we have $(u^m v^n -1)^\text{h} \in F^\perp$ and for each $u^p - v^q \in I$
we have $(u^p - v^q)^\text{h} \in F^\perp$. From this we conclude that for any binomial $b \in I$ we have $b^\text{h} \in F^\perp$ (since homogenization
is well-behaved with respect to multiplication by monomials), so from Proposition \ref{proposition:binomial} we have $I^\text{h} \subseteq F^\perp$,
and we are done with Item \eqref{item:first_inequality}.
\end{proof}
\begin{proof}[Proof of Item \eqref{item:second_inequality}]
Suppose that $\operatorname{r}(F) < (k_0 + 1)(l_1 + 1)$. Then by Theorem \ref{theorem:multigraded_apolarity} there is a radical $B$-saturated ideal $I$ of at
most $(k_0 + 1)(l_1 + 1) - 1$ points such that $I \subseteq F^{\perp} = (\alpha_0^{k_0 + 1}, \alpha_1^{k_1 + 1}, \beta_0^{l_0 + 1}, \beta_1^{l_1 +
1})$. By Proposition \ref{proposition:zero_dimensional} we have that $\dim (S/I)_{(k_0,l_1)} \leq (k_0 + 1)(l_1+1) -1$. We know that $\dim
S_{(k_0,l_1)} = (k_0 + 1)(l_1 + 1)$. But this means that $\dim I_{(k_0,l_1)} \geq 1$. We have
\begin{align*}
&F^{\perp}_{(k_0,l_1)} = \\
&\alpha_1^{k_1 + 1} \cdot \langle \alpha_0^{k_0-k_1 - 1}, \alpha_0^{k_0-k_1 - 2}\alpha_1 ,\dots, \alpha_1^{k_0-k_1 -1} \rangle
\cdot \langle \beta_0^{l_1}, \beta_0^{l_1 -1} \beta_1,\dots,\beta_1^{l_1}\rangle\text{.}
\end{align*}
Hence there is a non-zero polynomial
\begin{equation}\label{equation:rank_induction}
\alpha_1^{k_1 + 1}(\eta_{k_0-k_1 -1}\alpha_0^{k_0-k_1-1} + \eta_{k_0 - k_1 -2}\alpha_0^{k_0 - k_1 -2}\alpha_1 + \ldots + \eta_0\alpha_1^{k_0-k_1-1})
\in I\text{,}
\end{equation}
where $\eta_i \in \langle \beta_0^{l_1}, \beta_0^{l_1-1} \beta_1,\ldots, \beta_1^{l_1} \rangle$.
We prove by descending induction on $j$ that we have
\begin{equation*}
\eta_{k_0 - k_1 - 1} = \eta_{k_0 - k_1 - 2} = \dots = \eta_{j+1} = 0\text{.}
\end{equation*}
The beginning of the induction is trivial ($j = k_0 - k_1 -1$). Now assume that the induction assumption
holds for a given $j$. Then (by Equation \eqref{equation:rank_induction}) for some $l \geq 1$
\begin{equation*}
\alpha_1^l(\eta_{j}\alpha_0^j + \eta_{j-1}\alpha_0^{j-1}\alpha_1 + \ldots + \eta_0\alpha_1^j) \in I\text{.}
\end{equation*}
The ideal $I$ is radical, so we know that
\begin{equation*}
\alpha_1(\eta_{j}\alpha_0^j + \eta_{j-1}\alpha_0^{j-1}\alpha_1 + \ldots + \eta_0\alpha_1^j) \in I\text{.}
\end{equation*}
But $I \subseteq F^{\perp}$, so
\begin{equation*}
\alpha_1(\eta_{j}\alpha_0^{j}+\eta_{j-1}\alpha_0^{j-1}\alpha_1 + \ldots + \eta_0\alpha_1^{j}) {\: \lrcorner \:} F = 0\text{.}
\end{equation*}
We know that
\begin{align*}
\alpha_1(\eta_{j}\alpha_0^j + \ldots + \eta_0\alpha_1^j) {\: \lrcorner \:} F =& \; \bar{\eta}_j x_0^{k_0 - j} x_1^{k_1 - 1} + \bar{\eta}_{j-1} x_0^{k_0 -j
+ 1} x_1^{k_1 - 2} + \ldots + \\ + &\; \bar\eta_{\max(0,j-k_1 + 1)} x_0^{k_0 - \max(0,j-k_1 + 1)}x_1^{\max(0,k_1 - 1 - j)} \text{,}
\end{align*}
where $\bar{\eta}_i = \eta_i {\: \lrcorner \:} y_0^{l_0} y_1^{l_1} \in \mathbb{C}[y_0, y_1]_{l_0}$ are such that for every $i$ if $\eta_i \neq 0$, then
$\bar{\eta}_i \neq 0$. As all the monomials in the sum are distinct, all $\bar{\eta}_i$ appearing in the sum are $0$, which implies that the
corresponding $\eta_i$ are $0$. At least the first summand is present in the sum, since $k_0 -j \geq k_1 + 1$ and $k_1 - 1 \geq 0$; in particular
$\eta_j = 0$, which gives the induction assumption for $j - 1$.
The fact that all the $\eta_i$ are zero gives a contradiction with the fact that the polynomial in Equation \eqref{equation:rank_induction} was
non-zero. This proves Item \eqref{item:second_inequality}.
\end{proof}
\begin{proof}[Proof of Item \eqref{item:third_inequality}]
Let $I \subseteq F^\perp$ be a
$B$-saturated radical ideal of at most $(k_1 + 2)(l_1 + 2) - 2$ points. Then $\dim (S/I)_{(k_1 + 1,l_1 + 1)} \leq (k_1 + 2)(l_1 + 2) - 2$, so $\dim
I_{(k_1 + 1,l_1 + 1)} \geq 2$. Since
\begin{align*}
F^\perp_{(k_1 + 1, l_1 + 1)} = & \; \alpha_1^{k_1 + 1} \cdot \langle \beta_0^{l_1 + 1}, \beta_0^{l_1}\beta_1,\dots,\beta_0 \beta_1^{l_1} \rangle\\
& + \langle \alpha_0^{k_1 + 1}, \alpha_0^{k_1}\alpha_1,\dots,\alpha_0 \alpha_1^{k_1}\rangle \cdot \beta_1^{l_1 + 1} + \langle \alpha_1^{k_1 +
1}\beta_1^{l_1 + 1} \rangle\text{,}
\end{align*}
we get that $I_{(k_1 + 1, l_1 + 1)}$ has a basis consisting of
\begin{align*}
t_1 &= \alpha_1^{k_1 + 1}(\kappa_1\beta_0^{l_1}\beta_1 +\dots + \kappa_{l_1}\beta_0\beta_1^{l_1}) \\
&+ (\lambda_0 \alpha_0^{k_1 + 1} + \dots + \lambda_{k_1} \alpha_0 \alpha_1^{k_1})\beta_1^{l_1 + 1} + \eta \alpha_1^{k_1 + 1}\beta_1^{l_1 + 1}\text{,} \\
t_2 &= \alpha_1^{k_1 + 1}(\mu_0 \beta_0^{l_1 + 1} + \mu_1\beta_0^{l_1}\beta_1 +\dots + \mu_{l_1}\beta_0\beta_1^{l_1}) \\
&+ (\nu_0 \alpha_0^{k_1 + 1} + \dots + \nu_{k_1}\alpha_0 \alpha_1^{k_1})\beta_1^{l_1 + 1} + \zeta \alpha_1^{k_1 + 1}\beta_1^{l_1 + 1}\text{,}
\end{align*}
where $\kappa_i, \lambda_i, \mu_i, \nu_i, \eta, \zeta \in \mathbb{C}$. If $\kappa_1 = 0$, then (since $I$ is radical and $t_1$ is divisible by
$\beta_1^2$) $t_1/\beta_1 \in I$. Out of the monomials of $t_1/\beta_1$, only the ones divisible by $\alpha_1^{k_1 + 1}$ are in $F^\perp$. Hence,
$\lambda_i = 0$ for all $i$. But then from the fact that $I$ is radical and that $t_1/\beta_1$ is divisible by $\alpha_1^2$ we get that
$t_1/(\alpha_1\beta_1) \in I$. Since none of the monomials of $t_1/(\alpha_1\beta_1)$ are in $F^\perp$, we get $\eta = \kappa_2 = \kappa_3 = \dots =
\kappa_{l_1} = 0$, a contradiction. It follows that we may assume that $\kappa_1 = 1$.
If $\mu_0 = 0$, then we consider the element $\mu_1 t_1 - t_2$. It is divisible by $\beta_1^2$, so in the same way as before we get that all the
coefficients of $\mu_1 t_1 - t_2$ are $0$, i.e.\ $t_2 = \mu_1 t_1$, which contradicts the linear independence of $t_1, t_2$. It follows that we may
assume that $\mu_0 = 1$.
In this case
\begin{align*}
&\beta_0 t_1 - \beta_1 t_2 \\
&= \alpha_1^{k_1 + 1}((\kappa_2 - \mu_1)\beta_0^{l_1}\beta_1^2 +\dots+
(\kappa_{l_1} - \mu_{l_1 - 1})\beta_0^2 \beta_1^{l_1} -\mu_{l_1}\beta_0 \beta_1^{l_1 + 1}) \\
&+(\lambda_0 \alpha_0^{k_1 + 1} + \lambda_1 \alpha_0^{k_1}\alpha_1 +\dots+ \lambda_{k_1}\alpha_0\alpha_1^{k_1})\beta_0\beta_1^{l_1 + 1} + \eta
\alpha_1^{k_1 + 1} \beta_0\beta_1^{l_1 + 1} \\
&-(\nu_0 \alpha_0^{k_1 + 1} + \nu_1 \alpha_0^{k_1}\alpha_1 +\dots+ \nu_{k_1}\alpha_0\alpha_1^{k_1})\beta_1^{l_1 + 2} + \zeta
\alpha_1^{k_1 + 1} \beta_1^{l_1 + 2} \text{.}
\end{align*}
This is divisible by $\beta_1^2$, so also
\begin{equation*}
\frac{\beta_0 t_1 - \beta_1 t_2} {\beta_1} \in I\text{.}
\end{equation*}
The monomials $\alpha_0^{k_1 + 1}\beta_0\beta_1^{l_1},\dots,\alpha_0\alpha_1^{k_1}\beta_0 \beta_1^{l_1}$ are not in $F^\perp$ (which is a monomial
ideal), so $\lambda_i = 0$ for all $i$. Hence $t_1$ is divisible by $\alpha_1^{k_1 + 1}$, and therefore $t_1/\alpha_1^{k_1} \in I\subseteq F^\perp$.
The monomial $\alpha_1 \beta_0^{l_1} \beta_1$ is not in $F^\perp$, but its coefficient in $t_1/\alpha_1^{k_1}$ is $\kappa_1 = 1$, a contradiction.
\end{proof}
\subsection{Hirzebruch surface $\mathbb{F}_1$}\label{subsection:hirzebruch_surface}
Consider the set
\begin{equation*}
\{\rho_{\alpha,0} = (1, 0), \rho_{\alpha,1} = (-1,-1), \rho_{\beta,0} = (0, 1), \rho_{\beta,1} = (0, -1)\}\text{.}
\end{equation*}
Let $\Sigma$ be the only complete fan such that this set is the set of rays of $\Sigma$. The example in \cite[Example 3.1.16]{cox_book} is the same,
only with a different ray arrangement. The variety $X_\Sigma$ is called the Hirzebruch surface $\mathbb{F}_1$; it is smooth.
\[\begin{tikzpicture}[scale = 0.7]
\draw[-latex, thin] (0,0) -- (1,0);
\draw[-latex, thin] (0,0) -- (0,1);
\draw[-latex, thin] (0,0) -- (0,-1);
\draw[-latex, thin] (0,0) -- (-1,-1);
\foreach \x in {-2,...,2}
{
\foreach \y in {-2,...,2}
{
\draw[fill] (\x,\y)circle [radius=0.025];
}
}
\node[below right] at (1,0) {$\rho_{\alpha,0}$};
\node[above] at (0,1) {$\rho_{\beta,0}$};
\node[below left] at (-1,-1) {$\rho_{\alpha,1}$};
\node[below] at (0,-1) {$\rho_{\beta,1}$};
\end{tikzpicture}\]
Its class group is the free abelian group on two generators $D_{\rho_{\alpha,0}} \sim D_{\rho_{\alpha,1}}$ and $D_{\rho_{\beta,1}}$. Moreover,
$D_{\rho_{\beta,0}} \sim D_{\rho_{\beta,1}} + D_{\rho_{\alpha, 0}}$. Let $\alpha_0, \alpha_1, \beta_0, \beta_1$ be the variables corresponding to
$\rho_{\alpha,0}$, $\rho_{\alpha,1}$, $\rho_{\beta,0}$, $\rho_{\beta,1}$. As a result, we may think of $S$ as the polynomial ring
$\mathbb{C}[\alpha_0, \alpha_1, \beta_0, \beta_1]$ graded by $\mathbb{Z}^2$, where the grading is given by
\begin{center}
\begin{tabular}{c|cccc}
$f$ & $\alpha_0$ & $\alpha_1$ & $\beta_0$ & $\beta_1$ \\
\hline
\multirow{2}{*}{$\deg f$} & 1 & 1 & 1 & 0 \\
& 0 & 0 & 1 & 1
\end{tabular}
\end{center}
The nef cone in $(\operatorname{Cl} X_\Sigma)_{\mathbb{R}}$ is generated by $D_{\rho_{\alpha,0}}$ and $D_{\rho_{\beta,0}} \sim D_{\rho_{\alpha,0}} +
D_{\rho_{\beta, 1}}$.
\begin{example}
Consider the monomial $F \coloneqq x_0 x_1 y_0 y_1$, where $x_0, x_1, y_0, y_1$ is the basis dual to $\alpha_0, \alpha_1, \beta_0, \beta_1$. It has
degree $(3, 2)$, which lies in the interior of the nef cone, so the corresponding line bundle is very ample. We claim that the rank and the cactus
rank of $F$ are four, and that the border rank is three:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\operatorname{r}(F)$ & $\operatorname{cr}(F)$ & $\operatorname{\underline{r}}(F)$ \\
\hline
$4$ & $ 4$ & $3$ \\
\hline
\end{tabular}
\end{center}
Let us compute the Hilbert function of the apolar algebra of $F$.
\[\begin{tikzpicture}
\draw[help lines] (-2,-1) grid (5,5);
\draw[->, thick] (0,0) -- (0,5);
\draw[->, thick] (0,0) -- (5,0);
\draw[fill] (0,0)circle [radius=0.025];
\draw[fill] (1,0)circle [radius=0.025];
\draw[fill] (2,0)circle [radius=0.025];
\draw[fill] (3,0)circle [radius=0.025];
\draw[fill] (0,1)circle [radius=0.025];
\draw[fill] (1,1)circle [radius=0.025];
\draw[fill] (2,1)circle [radius=0.025];
\draw[fill] (3,1)circle [radius=0.025];
\draw[fill] (0,2)circle [radius=0.025];
\draw[fill] (1,2)circle [radius=0.025];
\draw[fill] (2,2)circle [radius=0.025];
\draw[fill] (3,2)circle [radius=0.025];
\node[below left] at (0,0) {1};
\node[below left] at (1,0) {2};
\node[below left] at (2,0) {1};
\node[below left] at (3,0) {0};
\node[below left] at (0,1) {1};
\node[below left] at (1,1) {3};
\node[below left] at (2,1) {3};
\node[below left] at (3,1) {1};
\node[below left] at (0,2) {0};
\node[below left] at (1,2) {1};
\node[below left] at (2,2) {2};
\node[below left] at (3,2) {1};
\end{tikzpicture}\]
Notice that it can only be non-zero in the first quadrant. Hence, the symmetry of the Hilbert function (see Proposition
\ref{proposition:hilbert_function_symmetry}) implies it can only be non-zero in the rectangle with vertices $(0,0)$, $(3,0)$, $(3,2)$, $(0,2)$.
Computing each value of the Hilbert function is just computing the kernel of a linear map. For instance, for degree $(1,0)$, we have
\begin{equation*}
(a_0 \alpha_0 + a_1 \alpha_1) {\: \lrcorner \:} x_0 x_1 y_0 y_1 = a_0 x_1 y_0 y_1 + a_1 x_0 y_0 y_1\text{,}
\end{equation*}
which is zero if and only if $a_0 = 0$ and $a_1 = 0$. Hence, the Hilbert function is
\begin{equation*}
\dim_{\mathbb{C}} (S/F^\perp)_{(1, 0)} = \dim_{\mathbb{C}}S_{(1,0)} - \dim_{\mathbb{C}}F^\perp_{(1,0)} = 2 - 0 = 2\text{.}
\end{equation*}
For degree $(2, 1)$, we get
\begin{multline*}
(a \alpha_0^2 \beta_1 + b \alpha_0\alpha_1\beta_1 + c \alpha_1^2\beta_1 + d\alpha_0\beta_0 + e\alpha_1\beta_0) {\: \lrcorner \:} x_0 x_1 y_0 y_1
= b y_0 + d x_1 y_1 + e x_0 y_1\text{.}
\end{multline*}
So the result is zero precisely for vectors of the form $(a,0,c,0,0)$, where $a, c \in \mathbb{C}$. Then
\begin{equation*}
\dim_{\mathbb{C}} (S/F^\perp)_{(2, 1)} = 5 - 2 = 3\text{.}
\end{equation*}
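Such kernel computations are easy to mechanize. The following sympy sketch (an illustrative check; it assumes that contraction by $\alpha_i$ and $\beta_i$ acts on $F$ as differentiation by $x_i$ and $y_i$, which matches the computations above since all exponents of $F$ are squarefree and no factorial scalars appear) recovers $\dim_{\mathbb{C}} (S/F^\perp)_{(2,1)} = 3$ as the rank of the contraction map on $S_{(2,1)}$.

```python
from sympy import symbols, diff, Matrix, Poly

x0, x1, y0, y1 = symbols('x0 x1 y0 y1')
F = x0 * x1 * y0 * y1

# Exponent vectors (a0, a1, b0, b1) of the five monomials spanning S_{(2,1)};
# contraction by alpha_i (resp. beta_i) acts on F as d/dx_i (resp. d/dy_i).
basis = [(2, 0, 0, 1), (1, 1, 0, 1), (0, 2, 0, 1), (1, 0, 1, 0), (0, 1, 1, 0)]
images = [diff(F, x0, a, x1, b, y0, c, y1, d) for (a, b, c, d) in basis]

# Write the images in a common monomial basis and compute the rank.
dicts = [Poly(g, x0, x1, y0, y1).as_dict() for g in images]
monoms = sorted({mono for dct in dicts for mono in dct})
M = Matrix([[dct.get(mono, 0) for mono in monoms] for dct in dicts])

assert M.rank() == 3               # dim (S/F^perp)_{(2,1)}
assert len(basis) - M.rank() == 2  # dim F^perp_{(2,1)}, the kernel
```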
The apolar ideal $F^\perp$ is $(\alpha_0^2, \alpha_1^2, \beta_0^2, \beta_1^2)$. (It is independent of the grading, so we can just copy the result
from the Waring rank case, see \cite{ranestad_schreyer_on_the_rank_of_a_symmetric_form}.)
Firstly, we will show that the rank is at most four. By the toric version of the Apolarity Lemma (Theorem \ref{theorem:multigraded_apolarity}), it is
enough to find a reduced zero-dimensional subscheme $R$ of $X_\Sigma$ of length four (i.e.\ a set of four points in $X_\Sigma$) such that $I(R)
\subseteq F^\perp$. The subscheme defined by $I = (\alpha_0^2 -\alpha_1^2, \beta_0^2 - \alpha_1^2 \beta_1^2) \subseteq F^\perp$ satisfies these
requirements. This scheme is a reduced union of four points: $[1, 1; 1, 1], [1, 1; 1, -1], [1, -1; 1, 1], [1, -1; 1, -1]$. As a consequence, we may
write
\begin{equation*}
x_0 x_1 y_0 y_1 = \frac{1}{4}\left(\varphi(1, 1; 1, 1) - \varphi(1, 1; 1, -1) - \varphi(1, -1; 1, 1) + \varphi(1, -1; 1, -1)\right)\text{.}
\end{equation*}
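One can verify this decomposition directly. The sketch below models $\varphi$ as the map sending a point to the corresponding linear combination of the nine degree-$(3,2)$ monomials; this convention is read off from the displayed formulas in this example and is an assumption of the check, not a restatement of Proposition \ref{proposition:formula}.

```python
from sympy import symbols, Rational, expand

x0, x1, y0, y1 = symbols('x0 x1 y0 y1')

# The nine exponent vectors (i, j, k, l) with i + j + k = 3 and k + l = 2,
# i.e. the monomials of degree (3, 2) on the Hirzebruch surface F_1.
EXPS = [(i, 3 - k - i, k, 2 - k) for k in range(3) for i in range(4 - k)]

def phi(a0, a1, b0, b1):
    # Evaluation map: a point goes to the matching combination of monomials.
    return sum(a0**i * a1**j * b0**k * b1**l * x0**i * x1**j * y0**k * y1**l
               for (i, j, k, l) in EXPS)

lhs = x0 * x1 * y0 * y1
rhs = Rational(1, 4) * (phi(1, 1, 1, 1) - phi(1, 1, 1, -1)
                        - phi(1, -1, 1, 1) + phi(1, -1, 1, -1))
assert expand(lhs - rhs) == 0
```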
We will show that the cactus rank is at least four. Suppose it is at most three. Then there is a $B$-saturated homogeneous ideal $I \subseteq F^\perp$
defining a zero-dimensional subscheme $R$ of length at most three. From the calculation of the Hilbert function, we know that
$\dim_{\mathbb{C}}{F^\perp_{(2,1)}} = 2$. Let us calculate $\dim_{\mathbb{C}} I_{(2,1)}$. Since $I$ is $B$-saturated, by Proposition
\ref{proposition:ideal}, the vector subspace $I_{(2,1)} \subseteq S_{(2,1)}$ consists of the sections which vanish on $R$. But from Proposition
\ref{proposition:zero_dimensional}
\begin{equation*}
3 \geq \operatorname{length} R \geq \dim_{\mathbb{C}}S_{(2,1)} - \dim_{\mathbb{C}}I_{(2,1)} = 5 -\dim_{\mathbb{C}}I_{(2,1)}\text{,}
\end{equation*}
so
\begin{equation*}
\dim_{\mathbb{C}} I_{(2,1)} \geq 2\text{.}
\end{equation*}
By the Apolarity Lemma (Theorem \ref{theorem:multigraded_apolarity}), we have $I_{(2,1)} \subseteq (F^\perp)_{(2,1)}$. As the dimensions are equal,
it follows that $I_{(2,1)} = (F^\perp)_{(2,1)}$. This means $\alpha_0^2 \beta_1, \alpha_1^2 \beta_1 \in I$. But $I$ is $B$-saturated, so $\alpha_0
\alpha_1 \beta_1 \in I \subseteq F^\perp$, which implies that $\alpha_0 \alpha_1 \beta_1 {\: \lrcorner \:} x_0 x_1 y_0 y_1 = 0$, a contradiction.
Let us show that the border rank of $F$ is at most three. Take $p = [\lambda, 1; 1, \mu] \in \mathbb{F}_1$. Then from Proposition
\ref{proposition:formula}, we know that \begin{align*}
[\lambda, 1; 1, \mu] \mapsto \lambda \mu\cdot \bigg( & \lambda^2 \mu x_0^3 y_1^2 + \lambda \mu x_0^2 x_1 y_1^2 + \mu x_0 x_1^2 y_1^2 +
\frac{\mu}{\lambda} x_1^3 y_1^2 \\
+ {} & \lambda x_0^2 y_0 y_1 + x_0 x_1 y_0 y_1 + \frac{1}{\lambda} x_1^2 y_0 y_1 \\
+ {} & \frac{1}{\mu} x_0 y_0^2 + \frac{1}{\mu \lambda} x_1 y_0^2\bigg ) \text{.}
\end{align*}
But
\begin{equation*}
[0, 1; 1, \mu] \mapsto \mu\cdot \bigg( {\mu} x_1^3 y_1^2 + x_1^2 y_0 y_1 + \frac{1}{\mu} x_1 y_0^2 \bigg ) \text{,}
\end{equation*}
and
\begin{equation*}
[1, 0; 1, 0] \mapsto x_0 y_0^2 \text{.}
\end{equation*}
Hence,
\begin{multline*}
- x_0 x_1 y_0 y_1 + \frac{1}{\lambda \mu} \varphi([\lambda, 1; 1, \mu]) - \frac{1}{\lambda \mu} \varphi([0, 1; 1, \mu]) -
\frac{1}{\mu}\varphi([1, 0; 1, 0]) \\
= \lambda^2 \mu x_0^3 y_1^2 + \lambda \mu x_0^2 x_1 y_1^2 + \mu x_0 x_1^2 y_1^2 + \lambda x_0^2 y_0 y_1 \xrightarrow{\lambda, \mu \to 0} 0 \text{.}
\end{multline*}
It follows that $x_0 x_1 y_0 y_1$ is expressible as a limit of linear combinations of three points on $X_\Sigma$, so the border rank is at most three.
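The cancellation in this limit computation can be verified symbolically. The following sympy sketch models $\varphi$ as the degree-$(3,2)$ monomial evaluation map read off from the displayed formulas (an assumption of this check) and confirms that every surviving term carries a factor of $\lambda$ or $\mu$.

```python
from sympy import symbols, expand

x0, x1, y0, y1, lam, mu = symbols('x0 x1 y0 y1 lam mu')

# Monomials of degree (3, 2): exponents (i, j, k, l), i + j + k = 3, k + l = 2.
EXPS = [(i, 3 - k - i, k, 2 - k) for k in range(3) for i in range(4 - k)]

def phi(a0, a1, b0, b1):
    return sum(a0**i * a1**j * b0**k * b1**l * x0**i * x1**j * y0**k * y1**l
               for (i, j, k, l) in EXPS)

expr = (-x0*x1*y0*y1 + phi(lam, 1, 1, mu) / (lam * mu)
        - phi(0, 1, 1, mu) / (lam * mu) - phi(1, 0, 1, 0) / mu)

# Everything except four terms cancels; each survivor vanishes as lam, mu -> 0.
residual = (lam**2*mu*x0**3*y1**2 + lam*mu*x0**2*x1*y1**2
            + mu*x0*x1**2*y1**2 + lam*x0**2*y0*y1)
assert expand(expr - residual) == 0
```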
There is also another proof that the border rank of $F$ is at most three. We will show that the third secant variety satisfies $\sigma_3(X_\Sigma) =
\mathbb{P}^8$. It suffices to show that $\dim \sigma_3(X_\Sigma)$ is eight. We will use Terracini's Lemma (Proposition \ref{proposition:terracini}).
Since $X_\Sigma \to \mathbb{P}(H^0(X_\Sigma, \mathcal{O}(\alpha))^*)$ is given by a parametrization, we can calculate the projectivized tangent
space. Take points of the form $[1, \lambda; \mu, 1]$, where $\lambda, \mu \in \mathbb{C}$. Then
\begin{equation*}
\varphi([1, \lambda; \mu, 1]) = [1, \lambda, \lambda^2, \lambda^3, \mu, \mu \lambda, \mu \lambda^2, \mu^2, \mu^2 \lambda]\text{.}
\end{equation*}
The coordinates are in the standard monomial basis of $H^0(X_\Sigma, \mathcal{O}(\alpha))^*$. The affine tangent space at $\varphi([1, \lambda; \mu,
1])$ is spanned by the vector
\begin{equation*}
v = [1, \lambda, \lambda^2, \lambda^3, \mu, \mu \lambda, \mu \lambda^2, \mu^2, \mu^2 \lambda]
\end{equation*}
and its two derivatives with respect to $\lambda$ and $\mu$:
\begin{align*}
\frac{\partial v}{\partial \lambda} & = [0, 1, 2 \lambda, 3 \lambda^2, 0, \mu, 2\mu \lambda, 0, \mu^2]\text{,} \\
\frac{\partial v}{\partial \mu} & = [0, 0, 0, 0, 1, \lambda, \lambda^2, 2\mu, 2\mu \lambda]\text{.}
\end{align*}
If we take three general points, say $[1, x; y, 1], [1, s; t, 1], [1, u; v, 1]$, we can look at the space spanned by the three tangent
spaces. This will be the space spanned by the rows of the following matrix:
\begin{equation*}
M = \left(
\begin{matrix}
1 & x & x^2 & x^3 & y & y x & y x^2 & y^2 & y^2 x \\
0 & 1 & 2 x & 3 x^2 & 0 & y & 2y x & 0 & y^2 \\
0 & 0 & 0 & 0 & 1 & x & x^2 & 2y & 2y x \\
1 & s & s^2 & s^3 & t & t s & t s^2 & t^2 & t^2 s \\
0 & 1 & 2 s & 3 s^2 & 0 & t & 2t s & 0 & t^2 \\
0 & 0 & 0 & 0 & 1 & s & s^2 & 2t & 2t s \\
1 & u & u^2 & u^3 & v & v u & v u^2 & v^2 & v^2 u \\
0 & 1 & 2 u & 3 u^2 & 0 & v & 2v u & 0 & v^2 \\
0 & 0 & 0 & 0 & 1 & u & u^2 & 2v & 2v u \\
\end{matrix}
\right)
\end{equation*}
We can calculate the determinant using, for instance, Macaulay2:
\begin{equation*}
\det M = (s - u)(u - x)(s - x)(ys - xt - yu + tu + xv - sv)^4\text{.}
\end{equation*}
This is non-zero for general points on the variety. This means that the tangent space of the affine cone of the third secant variety at a general
point has dimension nine, so $\dim \sigma_3(X_\Sigma) = 8$, hence $\sigma_3(X_\Sigma)$ fills the whole space.
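The genericity can also be spot-checked numerically: it suffices to exhibit one concrete triple of chart points at which the determinant is non-zero. The sympy sketch below does this for the illustrative points $(1,2), (3,4), (0,5)$, chosen non-collinear so that the quartic factor of $\det M$ does not vanish.

```python
from sympy import Matrix

def tangent_rows(x, y):
    # phi([1, x; y, 1]) in the degree-(3,2) monomial basis, together with its
    # two partial derivatives: a basis of the affine tangent space of the cone.
    v = [1, x, x**2, x**3, y, y*x, y*x**2, y**2, y**2*x]
    dv_dx = [0, 1, 2*x, 3*x**2, 0, y, 2*y*x, 0, y**2]
    dv_dy = [0, 0, 0, 0, 1, x, x**2, 2*y, 2*y*x]
    return [v, dv_dx, dv_dy]

# Three concrete points, non-collinear in the (lambda, mu)-chart.
pts = [(1, 2), (3, 4), (0, 5)]
M = Matrix([row for (x, y) in pts for row in tangent_rows(x, y)])
assert M.det() != 0  # the three tangent planes span all of C^9
```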
Finally, the border rank is at least three by Corollary \ref{corollary:border_catalecticant_bound}. We use it for the class $(2,1)$; recall
that $\dim_{\mathbb{C}}(S/F^\perp)_\beta = \operatorname{rank} C_F^\beta$.
\begin{remark}\label{remark:wild_case}
We could also define the smoothable $X$-rank:
\begin{equation*}
\operatorname{sr}_X(F) = \min \{\operatorname{length} R \mid R \hookrightarrow X, \dim R = 0, F \in \langle R \rangle, R \text{ smoothable} \}\text{.}
\end{equation*}
For the definition of a smoothable scheme, see \cite[Definition 5.16]{iarrobino_kanev_book_Gorenstein_algebras}. For more on the smoothable rank,
see \cite{nisiabu_jabu_smoothable_rank_example}. We always have $\operatorname{cr}(F) \leq \operatorname{sr}(F) \leq \operatorname{r}(F)$, so in the case of $\mathbb{F}_1$ and $F = x_0
x_1 y_0 y_1$ we get $\operatorname{sr}(F) = 4$. In particular, we obtain what the authors in \cite{nisiabu_jabu_smoothable_rank_example} call a ``wild'' case,
i.e.\ the border rank is strictly less than the smoothable rank.
\end{remark}
\end{example}
\begin{example}
For a similar case on the same variety, let $F = x_0^2 x_1^2 y_0 y_1$; then $\deg F = (5,2)$. Here the line bundle $\mathcal{O}(5,2)$ gives an
embedding of $X_\Sigma$ into $\mathbb{P}^{14}$. We show that here the rank and the cactus rank are six, and that the border rank is five:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\operatorname{r}(F)$ & $\operatorname{cr}(F)$ & $\operatorname{\underline{r}}(F)$ \\
\hline
$6$ & $ 6$ & $5$ \\
\hline
\end{tabular}
\end{center}
The apolar ideal is $F^\perp = (\alpha_0^3, \alpha_1^3, \beta_0^2, \beta_1^2)$. The Hilbert function of $S/F^\perp$ is the following:
\[\begin{tikzpicture}
\draw[help lines] (-2,-1) grid (7,5);
\draw[->, thick] (0,0) -- (0,5);
\draw[->, thick] (0,0) -- (7,0);
\foreach \x in {0,1,2,3,4,5}
{
\foreach \y in {0,1,2}
{
\draw[fill] (\x,\y)circle [radius=0.025];
}
}
\node[below left] at (0,0) {1};
\node[below left] at (1,0) {2};
\node[below left] at (2,0) {3};
\node[below left] at (3,0) {2};
\node[below left] at (4,0) {1};
\node[below left] at (5,0) {0};
\node[below left] at (0,1) {1};
\node[below left] at (1,1) {3};
\node[below left] at (2,1) {5};
\node[below left] at (3,1) {5};
\node[below left] at (4,1) {3};
\node[below left] at (5,1) {1};
\node[below left] at (0,2) {0};
\node[below left] at (1,2) {1};
\node[below left] at (2,2) {2};
\node[below left] at (3,2) {3};
\node[below left] at (4,2) {2};
\node[below left] at (5,2) {1};
\end{tikzpicture}\]
The ideal $I = (\alpha_0^3 - \alpha_1^3, \beta_0^2 - \beta_1^2 \alpha_1^2) \subseteq F^\perp$ is a $B$-saturated radical homogeneous ideal defining
a subscheme of length six, so the rank is at most six.
Suppose there is a homogeneous $B$-saturated ideal $I \subseteq F^\perp$ defining a subscheme of length five. We have
\begin{align*}
S_{(3,1)} &= \langle \alpha_0^2 \beta_0, \alpha_0 \alpha_1 \beta_0, \alpha_1^2 \beta_0, \alpha_0^3 \beta_1, \alpha_0^2 \alpha_1 \beta_1,
\alpha_0 \alpha_1^2 \beta_1, \alpha_1^3 \beta_1 \rangle\text{, and} \\
(F^\perp)_{(3, 1)} &= \langle \alpha_0^3 \beta_1, \alpha_1^3 \beta_1 \rangle\text{.}
\end{align*}
From Proposition \ref{proposition:zero_dimensional} we have $\dim_{\mathbb{C}}(S/I)_{(3,1)} \leq 5$, so $\dim_{\mathbb{C}}I_{(3,1)} \geq 7 - 5 = 2$.
But $I_{(3,1)} \subseteq (F^\perp)_{(3,1)}$ from the Apolarity Lemma (Theorem \ref{theorem:multigraded_apolarity}), and also $\dim_{\mathbb{C}}
(F^\perp)_{(3,1)} = 2$. This means that $I_{(3,1)} = (F^\perp)_{(3,1)}$.
Hence, $\alpha_0^3 \beta_1, \alpha_1^3 \beta_1 \in I$. As $I$ is $B$-saturated, we get $\alpha_0^2 \alpha_1^2 \beta_1 \in I \subseteq F^\perp$, but
this is a contradiction since $\alpha_0^2 \alpha_1^2 \beta_1 {\: \lrcorner \:} F \neq 0$.
The border rank is at least five because of Corollary \ref{corollary:border_catalecticant_bound}. Similarly to what we did before, we show that
the fifth secant variety fills the whole space, so the border rank of any polynomial is at most five. Here $\varphi = \varphi_{|\mathcal{O}(5,2)|}$ is
given (in the standard monomial basis) by
\begin{equation*}
[1, \lambda; \mu, 1] \mapsto [1, \lambda, \lambda^2, \lambda^3, \lambda^4, \lambda^5, \mu, \lambda \mu, \lambda^2 \mu, \lambda^3 \mu, \lambda^4
\mu, \mu^2, \lambda \mu^2, \lambda^2 \mu^2, \lambda^3 \mu^2]\text{.}
\end{equation*}
The tangent space of the affine cone of $X_\Sigma$ is spanned by $v = \varphi([1, \lambda; \mu, 1])$ and the two derivatives
\begin{align*}
\frac{\partial v}{\partial \lambda} & = [0, 1, 2 \lambda, 3 \lambda^2, 4 \lambda^3, 5 \lambda^4,
0, \mu, 2\lambda \mu, 3 \lambda^2 \mu, 4\lambda^3 \mu, 0, \mu^2, 2 \lambda \mu^2, 3\lambda^2 \mu^2]\text{,} \\
\frac{\partial v}{\partial \mu} & = [0, 0, 0, 0,0,0, 1, \lambda, \lambda^2, \lambda^3, \lambda^4, 2\mu, 2\lambda \mu, 2\lambda^2 \mu, 2\lambda^3
\mu] \text{.}
\end{align*}
If we take five points, say $[1, x; y, 1], [1, s; t, 1], [1, u; v, 1], [1, a; b, 1], [1, c; d, 1]$, we get that the tangent space of the affine cone
of $\sigma_5(X_\Sigma)$ is spanned by the rows of the following matrix:
\begin{equation*}
\begin{pmatrix}
1& x& x^2& x^3& x^4& x^5& y& x y& x^2 y& x^3 y& x^4 y& y^2& x y^2& x^2 y^2& x^3 y^2 \\
0& 1& 2 x& 3 x^2& 4 x^3& 5 x^4& 0& y& 2x y& 3 x^2 y& 4x^3 y& 0& y^2& 2 x y^2& 3x^2 y^2\\
0& 0& 0& 0 & 0 & 0 & 1& x& x^2& x^3& x^4& 2y& 2x y& 2x^2 y& 2x^3 y\\
1& s& s^2& s^3& s^4& s^5& t& s t& s^2 t& s^3 t& s^4 t& t^2& s t^2& s^2 t^2& s^3 t^2 \\
0& 1& 2 s& 3 s^2& 4 s^3& 5 s^4& 0& t& 2s t& 3 s^2 t& 4s^3 t& 0& t^2& 2 s t^2& 3s^2 t^2\\
0& 0& 0& 0 & 0 & 0 & 1& s& s^2& s^3& s^4& 2t& 2s t& 2s^2 t& 2s^3 t\\
1& u& u^2& u^3& u^4& u^5& v& u v& u^2 v& u^3 v& u^4 v& v^2& u v^2& u^2 v^2& u^3 v^2 \\
0& 1& 2 u& 3 u^2& 4 u^3& 5 u^4& 0& v& 2u v& 3 u^2 v& 4u^3 v& 0& v^2& 2 u v^2& 3u^2 v^2\\
0& 0& 0& 0 & 0 & 0 & 1& u& u^2& u^3& u^4& 2v& 2u v& 2u^2 v& 2u^3 v\\
1& a& a^2& a^3& a^4& a^5& b& a b& a^2 b& a^3 b& a^4 b& b^2& a b^2& a^2 b^2& a^3 b^2 \\
0& 1& 2 a& 3 a^2& 4 a^3& 5 a^4& 0& b& 2a b& 3 a^2 b& 4a^3 b& 0& b^2& 2 a b^2& 3a^2 b^2\\
0& 0& 0& 0 & 0 & 0 & 1& a& a^2& a^3& a^4& 2b& 2a b& 2a^2 b& 2a^3 b\\
1& c& c^2& c^3& c^4& c^5& d& c d& c^2 d& c^3 d& c^4 d& d^2& c d^2& c^2 d^2& c^3 d^2 \\
0& 1& 2 c& 3 c^2& 4 c^3& 5 c^4& 0& d& 2c d& 3 c^2 d& 4c^3 d& 0& d^2& 2 c d^2& 3c^2 d^2\\
0& 0& 0& 0 & 0 & 0 & 1& c& c^2& c^3& c^4& 2d& 2c d& 2c^2 d& 2c^3 d\\
\end{pmatrix}
\end{equation*}
If we set $(x, y, s, t, u, v, a, b, c, d) = (1,2,3,4,5,6,7,9,0,2)$ and calculate the determinant in the field $\mathbb{Z}/101$, we get $34$, which is
in particular non-zero. This means that the determinant calculated in $\mathbb{C}$ is also non-zero at this point, so it is non-zero on a dense open
subset. Hence by Terracini's lemma (Proposition \ref{proposition:terracini}) the dimension of the affine cone of $\sigma_5(X_\Sigma)$ is fifteen. It
follows that $\sigma_5(X_\Sigma) = \mathbb{P}^{14}$. Thus the border rank of $F$ is five.
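This modular computation is easy to reproduce. The following sympy sketch rebuilds the $15 \times 15$ matrix from the parametrization at the sample values above and checks that its determinant is non-zero modulo $101$ (non-vanishing is all the argument needs).

```python
from sympy import symbols, diff, Matrix

lam, mu = symbols('lam mu')

# Monomial basis of degree (5, 2) in the chart [1, lam; mu, 1], in the order
# used in the text: lam^i * mu^k with k = 0, 1, 2 and i = 0, ..., 5 - k.
v = [lam**i * mu**k for k in range(3) for i in range(6 - k)]

def tangent_rows(a, b):
    # phi([1, a; b, 1]) and its two partial derivatives.
    at = {lam: a, mu: b}
    return [[e.subs(at) for e in v],
            [diff(e, lam).subs(at) for e in v],
            [diff(e, mu).subs(at) for e in v]]

# The sample values from the text: (x, y), (s, t), (u, v), (a, b), (c, d).
pts = [(1, 2), (3, 4), (5, 6), (7, 9), (0, 2)]
M = Matrix([row for (a, b) in pts for row in tangent_rows(a, b)])
assert M.det() % 101 != 0  # the text computes the value 34 in Z/101
```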
\end{example}
\subsection{Weighted projective plane $\mathbb{P}(1,1,4)$}\label{subsection:weighted_projective_plane}
Consider the set of rays $\{\rho_x = (-1,-4), \rho_y = (1,0), \rho_z = (0,1)\}$. Let $\Sigma$ be the complete fan determined by these rays. This is the
fan of $\mathbb{P}(1,1,4)$, the weighted projective space with weights $1,1,4$, see \cite[Section 2.0, Subsection Weighted Projective Space; and
Example 3.1.17]{cox_book}.
\[\begin{tikzpicture}[scale = 0.7]
\draw[-latex, thin] (0,0) -- (-1,-4);
\draw[-latex, thin] (0,0) -- (1,0);
\draw[-latex, thin] (0,0) -- (0,1);
\foreach \x in {-2,...,2}
{
\foreach \y in {-4,...,2}
{
\draw[fill] (\x,\y)circle [radius=0.025];
}
}
\node[below right] at (1,0) {$\rho_{y}$};
\node[above] at (0,1) {$\rho_{z}$};
\node[below] at (-1,-4) {$\rho_{x}$};
\end{tikzpicture}\]
The class group is $\mathbb{Z}$, generated by $D_{\rho_x} \sim D_{\rho_y}$, and we know that $D_{\rho_z} \sim 4 D_{\rho_x}$. The Cox ring is
$\mathbb{C}[\alpha, \beta, \gamma]$, where $\alpha, \beta, \gamma$ correspond to $\rho_x, \rho_y, \rho_z$, and the degrees are given by the vector
$(1,1,4)$. Let $x, y, z$ denote the dual coordinates. The Picard group is generated by $\mathcal{O}(4)$. The only singular point is $[0,0,1]$.
Consider the embedding given by $\mathcal{O}_{X_\Sigma}(4)$, which is a line bundle. It maps $X_\Sigma$ into $\mathbb{P}^5$ (since there are six
monomials of degree $4$: $x^4, x^3 y, x^2 y^2, xy^3, y^4, z$). We calculate various ranks of $F = x^2 y^2$. The results are shown in the following
table:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\operatorname{r}(F)$ & $\operatorname{cr}(F)$ & $\operatorname{\underline{r}}(F)$ \\
\hline
$3$ & $2$ & $3$ \\
\hline
\end{tabular}
\end{center}
The Hilbert function of $A_F$ is $(1,2,3,2,1)$ (here the first element of the sequence corresponds to $\mathcal{O}_{X_\Sigma}$, the next to
$\mathcal{O}_{X_\Sigma}(1)$, and so on). This means (by Corollary \ref{corollary:border_catalecticant_bound}) that $\operatorname{\underline{r}}(F) \geq 3$.
We know that $F^\perp = (\alpha^3, \beta^3, \gamma)$, since the annihilator remains the same if we change the grading. Let $I = (\alpha^3, \beta^3)
\subseteq F^\perp$. We show that the length of the scheme $R \coloneqq V(I)$ is two. This will mean that $\operatorname{cr}(F) \leq 2$. Since $R$ is supported at
the point $[0,0,1]$, we can look at it on the affine open $U_\sigma$, where $\sigma = \operatorname{Cone}(\rho_x, \rho_y)$. After localizing $S =
\mathbb{C}[\alpha,\beta,\gamma]$ at $\gamma$ and taking degree $0$, we get the ring
\begin{equation*}
\mathbb{C}\left[\frac{\alpha^4}{\gamma}, \frac{\alpha^3\beta}{\gamma}, \frac{\alpha^2\beta^2}{\gamma}, \frac{\alpha\beta^3}{\gamma},
\frac{\beta^4}{\gamma}\right]\text{.}
\end{equation*}
The ideal $I$ becomes the ideal generated by $\frac{\alpha^4}{\gamma}, \frac{\alpha^3\beta}{\gamma}, \frac{\alpha\beta^3}{\gamma},
\frac{\beta^4}{\gamma}$ in this ring, so the quotient is a two-dimensional vector space with basis $1, \frac{\alpha^2 \beta^2}{\gamma}$. Hence the
length of $R$ is two.
But the cactus rank cannot be $1$, since $x^2 y^2$ is not in the image of $\varphi_{|\mathcal{O}(4)|}$ (see Proposition \ref{proposition:formula}).
It follows that $\operatorname{cr}(F) = 2$.
Now consider the ideal $I = (\alpha^3 - \beta^3, \gamma) \subseteq F^\perp$. We show that the length of the scheme defined by $I$ is three. Since
$I$ is radical, the scheme given by $I$ is reduced, hence this will show that $\operatorname{r}(F) \leq 3$, as desired. But $I = (\alpha - \beta, \gamma) \cap
(\alpha - \varepsilon \beta, \gamma)\cap (\alpha -\varepsilon^2 \beta,\gamma)$, where $\varepsilon = \frac{-1 + \sqrt{3}i}{2}$, so the scheme given by $I$
is the reduced union of $[1,1,0], [\varepsilon,1,0], [\varepsilon^2,1,0]$.
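The splitting of $\alpha^3 - \beta^3$ into three distinct linear forms, which gives the three reduced points above, can be checked symbolically. A small sympy sketch (an illustration, not part of the proof):

```python
import sympy as sp

a, b = sp.symbols('alpha beta')
# Primitive cube root of unity, as in the text: eps = (-1 + sqrt(3) i)/2.
eps = sp.Rational(-1, 2) + sp.sqrt(3) * sp.I / 2

# alpha^3 - beta^3 splits into three distinct linear forms over C, so the
# scheme V(alpha^3 - beta^3, gamma) is the reduced union of three points.
product = (a - b) * (a - eps * b) * (a - eps**2 * b)
assert sp.simplify(sp.expand(product) - (a**3 - b**3)) == 0
```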
\begin{remark}\label{remark:catalecticant_counterexample}
Since in this example
\begin{equation*}
\operatorname{rank} C_F^{\mathcal{O}(2)} = \dim (A_F)_2 = \dim (S/F^\perp)_2 = 3\text{,}
\end{equation*}
and $\operatorname{cr}(F) = 2$, we see that the bound stated in point (1) of Theorem \ref{theorem:catalecticant} does not hold for the cactus rank (and
reflexive sheaves of rank one that are not line bundles).
\end{remark}
\begin{remark}
One can also calculate that the projective tangent space in this embedding at the singular point $[0,0,1]$ is the whole $\mathbb{P}^5$ (this is a
straightforward application of Proposition \ref{proposition:embedded_tangent_space}). It follows that the cactus rank of every point in
$\mathbb{P}^5$ is at most two, since any point of the tangent space at $[0,0,1]$ can be reached by a linear span of a scheme of length two
supported at $[0,0,1]$.
\end{remark}
\subsection{Fake weighted projective plane}\label{subsection:fake_weighted_projective_plane}
Consider the set of rays $\{\rho_0 = (-1, -1), \rho_1 = (2, -1), \rho_2 = (-1, 2) \}$. Let $\Sigma$ be the complete fan determined by these rays. Then
$X_\Sigma$ is an example of a fake weighted projective space, see \cite[Example 6.2]{wero_fps}.
\[\begin{tikzpicture}[scale = 0.7]
\draw[-latex, thin] (0,0) -- (-1,-1);
\draw[-latex, thin] (0,0) -- (2,-1);
\draw[-latex, thin] (0,0) -- (-1,2);
\foreach \x in {-2,...,2}
{
\foreach \y in {-2,...,2}
{
\draw[fill] (\x,\y)circle [radius=0.025];
}
}
\node[below left] at (-1,-1) {$\rho_{0}$};
\node[right] at (2,-1) {$\rho_{1}$};
\node[above] at (-1,2) {$\rho_{2}$};
\end{tikzpicture}\]
Let $\alpha_0, \alpha_1, \alpha_2$ be the corresponding coordinates in $S$. The class group is generated by $D_{\rho_0}, D_{\rho_1}, D_{\rho_2}$
with relations $D_{\rho_0} \sim 2 D_{\rho_1} - D_{\rho_2} \sim 2 D_{\rho_2} - D_{\rho_1}$. This is the same as a group with two generators
$D_{\rho_0}$ and $D_{\rho_2} - D_{\rho_1}$ with the relation $3(D_{\rho_2} - D_{\rho_1}) \sim 0$. This choice gives an isomorphism with $\mathbb{Z} \times
\mathbb{Z}/3$ sending $D_{\rho_0}$ to $(1, 0)$ and $D_{\rho_2} - D_{\rho_1}$ to $(0, 1)$. The Picard group is the subgroup generated by $3
D_{\rho_0}$. It is free.
As a result, $S = \mathbb{C}[\alpha_0, \alpha_1, \alpha_2]$ is graded by $\operatorname{Cl} X_\Sigma = \mathbb{Z}\times \mathbb{Z}/3$, where
\begin{align*}
\deg \alpha_0 &= (1,0)\text{,} \\
\deg \alpha_1 &= (1,1)\text{,} \\
\deg \alpha_2 &= (1,-1) = (1, 2)\text{,}
\end{align*}
and $\operatorname{Pic} X_\Sigma$ is generated by $(3,0)$. The singular points of $X_\Sigma$ are $[1,0,0]$, $[0,1,0]$, $[0,0,1]$.
Consider the line bundle $\mathcal{O}(6,0)$. It is ample, because by \cite[Proposition 6.3.25]{cox_book} every complete toric surface is projective,
and the line bundles $\mathcal{O}(-3m, 0)$ for $m < 0$ have no non-zero sections. By \cite[Proposition 6.1.10, (b)]{cox_book} it is very ample. It
gives an embedding $\varphi :X_\Sigma \hookrightarrow \mathbb{P}^9$. We denote the dual coordinates by $x_0, x_1, x_2$.
\begin{example}
Let $F = x_0^4 x_1 x_2 \in H^0(X_\Sigma, \mathcal{O}(6,0))^*$. The apolar ideal is $(\alpha_0^5, \alpha_1^2, \alpha_2^2)$. We claim that the cactus
rank is two, the rank is at most five, and the border rank is two.
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\operatorname{r}(F)$ & $\operatorname{cr}(F)$ & $\operatorname{\underline{r}}(F)$ \\
\hline
$\leq 5$ & $ 2$ & $2$ \\
\hline
\end{tabular}
\end{center}
Note that $F$ is not in the image of $\varphi_{|\mathcal{O}(6,0)|}$, so the cactus rank and the border rank are at least two.
We show that the cactus rank is two. Consider the ideal $I = (\alpha_1^2, \alpha_2^2) \subseteq F^\perp$. It is saturated, since $B$ in this case is
$(\alpha_0, \alpha_1, \alpha_2)$, so it is the same as in the case of $\mathbb{P}^2$. We show that the length of the subscheme given by $I$ is two.
Since the support of the scheme is the point $[1, 0, 0]$, we check it on the set $U_\sigma$, where $\sigma = \operatorname{Cone}(\rho_1, \rho_2)$. We localize
with respect to $\alpha_0$, take degree zero, and get the ring
\begin{equation}\label{equation:length_calculation}
\mathbb{C}\left[\frac{\alpha_1^3}{\alpha_0^3}, \frac{\alpha_2^3}{\alpha_0^3}, \frac{\alpha_1 \alpha_2}{\alpha_0^2}\right]
\cong \mathbb{C}[u, v, w]/(w^3 - uv)\text{.}
\end{equation}
If we factor out by the ideal generated by $\alpha_1^2$ and $\alpha_2^2$, we get
\begin{equation*}
\mathbb{C}[u,v, w]/(w^3 - uv, u, v, w^2) \cong \mathbb{C}[w]/(w^2)\text{,}
\end{equation*}
so the length of the scheme defined by $I$ is two.
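The length computation can be double-checked with a Gröbner basis: the leading terms of the ideal $(w^3 - uv, u, v, w^2)$ leave exactly the standard monomials $1$ and $w$, confirming a two-dimensional quotient. A sympy sketch (illustrative only):

```python
import sympy as sp

u, v, w = sp.symbols('u v w')
# Affine coordinate ring C[u,v,w]/(w^3 - uv); cut by the images of
# alpha_1^2 and alpha_2^2, which generate the ideal (u, v, w^2).
G = sp.groebner([w**3 - u*v, u, v, w**2], u, v, w, order='lex')
# The reduced Groebner basis is {u, v, w^2}; the standard monomials are 1 and
# w, so the quotient is a 2-dimensional vector space: the scheme has length 2.
assert set(G.exprs) == {u, v, w**2}
```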
Now we show that the rank is at most five. Take a homogeneous ideal $I = (\alpha_0^5 - \alpha_1^4 \alpha_2, \alpha_1^3 - \alpha_2^3)
\subseteq F^\perp$. We show that the length of the subscheme defined by $I$ is five. From these equations we know that no coordinate can be
zero, so we can check the length on the open subset $U_\sigma$, where $\sigma = \operatorname{Cone}(\rho_1, \rho_2)$. We get the same ring as in Equation
\ref{equation:length_calculation}, and we want to factor it out by the ideal generated by $\alpha_0^5 - \alpha_1^4 \alpha_2$ and $\alpha_1^3 -
\alpha_2^3$. The second generator gives the relation $u - v$, and the first one the relation $1 - v w$. So we get the ring
\begin{equation*}
\mathbb{C}[v, w]/(w^3 - v^2, 1 - v w)\text{.}
\end{equation*}
But notice that $1 = v w$ implies that $w$ is non-zero. Hence
\begin{multline*}
\mathbb{C}[v, w]/(w^3 - v^2, 1 - v w) \cong \mathbb{C}[v, w, w^{-1}]/(w^3 - v^2, 1 - v w) \\
\cong \mathbb{C}[v, w, w^{-1}]/(w^5 - 1, w^{-1} - v) \cong \mathbb{C}[w, w^{-1}]/(w^5 - 1)\text{.}
\end{multline*}
We get a reduced scheme of length five, so the rank is at most five.
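The chain of isomorphisms can also be checked numerically: the relation $1 = vw$ forces $v = 1/w$, which turns $w^3 = v^2$ into $w^5 = 1$, and the five distinct fifth roots of unity all satisfy the original equations. A small sympy check (illustrative):

```python
import sympy as sp

# The relation 1 = v*w forces v = 1/w, and then w^3 = v^2 becomes w^5 = 1.
roots = [sp.exp(2 * sp.pi * sp.I * k / 5) for k in range(5)]
assert len(set(roots)) == 5  # five distinct points: a reduced scheme of length 5

for r in roots:
    v = 1 / r
    assert abs(complex(r**3 - v**2)) < 1e-12  # w^3 = v^2
    assert abs(complex(1 - v * r)) < 1e-12    # 1 = v*w
```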
Now we show that $\operatorname{\underline{r}}(F) = 2$. Consider the equations given by rank one reflexive sheaves $\mathcal{O}(3,0)$
and $\mathcal{O}(3,1)$ (given by minors of matrices as in the proof of Proposition \ref{proposition:closed_set}). In order to find these equations,
we give coordinates to every point $p \in H^0(X_\Sigma, \mathcal{O}(6,0))^*$:
\begin{align*}
p = t_{6,0,0}x_0^6 + t_{0,6,0}x_1^6 + t_{0,0,6}x_2^6 + t_{4,1,1} x_0^4 x_1 x_2
+ t_{1,4,1}x_0 x_1^4 x_2 + t_{1,1,4}x_0 x_1 x_2^4 \\
+ t_{3,3,0}x_0^3x_1^3 + t_{0,3,3} x_1^3 x_2^3 + t_{3,0,3}x_0^3 x_2^3 + t_{2,2,2} x_0^2 x_1^2 x_2^2 \text{.}
\end{align*}
Now we write down the matrix of the map $(\cdot {\: \lrcorner \:} p) : S_{(3,0)} \to T_{(3,0)}$ in the standard monomial bases $\alpha_0^3, \alpha_1^3,
\alpha_2^3, \alpha_0\alpha_1\alpha_2$ and $x_0^3, x_1^3, x_2^3, x_0 x_1 x_2$:
\begin{equation*}
M = \left(
\begin{matrix}
t_{6,0,0} & t_{3,3,0} & t_{3,0,3} & t_{4,1,1} \\
t_{3,3,0} & t_{0,6,0} & t_{0,3,3} & t_{1,4,1} \\
t_{3,0,3} & t_{0,3,3} & t_{0,0,6} & t_{1,1,4} \\
t_{4,1,1} & t_{1,4,1} & t_{1,1,4} & t_{2,2,2}
\end{matrix}
\right)
\end{equation*}
We also write down the matrix of the map $(\cdot {\: \lrcorner \:} p) : S_{(3,1)}\to T_{(3,-1)}$ in the bases $\alpha_0^2 \alpha_1, \alpha_1^2 \alpha_2,
\alpha_2^2 \alpha_0$ and $x_0^2 x_2, x_1^2 x_0, x_2^2 x_1$:
\begin{equation*}
N = \left(
\begin{matrix}
t_{4,1,1} & t_{2,2,2}& t_{3,0,3} \\
t_{3,3,0} & t_{1,4,1} & t_{2,2,2}\\
t_{2,2,2} & t_{0,3,3} & t_{1,1,4}
\end{matrix}
\right)
\end{equation*}
We compute that the $3$ by $3$ minors of $M$ and $N$ define an irreducible variety of dimension $5$ over $\mathbb{Q}$. On the other hand, the
same method as in Subsection \ref{subsection:hirzebruch_surface} shows that the dimension of the second secant variety of the embedding $X_\Sigma
\hookrightarrow \mathbb{P}(H^0(X_\Sigma, \mathcal{O}(6,0))^*)$ is $5$. Hence $\sigma_2(X_\Sigma)$ is cut out set-theoretically by the $3$ by $3$
minors of $M$ and $N$ over $\mathbb{Q}$, and therefore it is also defined by these equations over $\mathbb{C}$. Finally, since $F$ satisfies
these equations, the claim follows.
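As a cross-check, one can evaluate $M$ and $N$ at the point $F = x_0^4 x_1 x_2$ (so $t_{4,1,1} = 1$ and every other coordinate vanishes) and confirm that all $3$ by $3$ minors vanish there. A sympy sketch (illustrative only):

```python
import sympy as sp

# Coordinates of F = x0^4 x1 x2: t_{4,1,1} = 1, every other t is zero.
def t(i, j, k):
    return 1 if (i, j, k) == (4, 1, 1) else 0

M = sp.Matrix([
    [t(6,0,0), t(3,3,0), t(3,0,3), t(4,1,1)],
    [t(3,3,0), t(0,6,0), t(0,3,3), t(1,4,1)],
    [t(3,0,3), t(0,3,3), t(0,0,6), t(1,1,4)],
    [t(4,1,1), t(1,4,1), t(1,1,4), t(2,2,2)],
])
N = sp.Matrix([
    [t(4,1,1), t(2,2,2), t(3,0,3)],
    [t(3,3,0), t(1,4,1), t(2,2,2)],
    [t(2,2,2), t(0,3,3), t(1,1,4)],
])

# Both matrices have rank at most 2, so every 3x3 minor of M and N vanishes
# at F, consistent with the claim that the border rank of F is 2.
assert M.rank() == 2 and N.rank() == 1
```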
\end{example}
\begin{example}
Now take $F = x_0^2 x_1^2 x_2^2 \in H^0(X_\Sigma, \mathcal{O}(6,0))^*$. Here the apolar ideal is $F^\perp = (\alpha_0^3, \alpha_1^3, \alpha_2^3)$.
We calculate the following
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\operatorname{r}(F)$ & $\operatorname{cr}(F)$ & $\operatorname{\underline{r}}(F)$ \\
\hline
$3$ & $ 3 $ & $3$ \\
\hline
\end{tabular}
\end{center}
Let $I = (\alpha_0^3 - \alpha_1^3, \alpha_1^3 - \alpha_2^3)$. In this case also no coordinate can be zero, so we may calculate the length on
$U_\sigma$ (where $\sigma$ is as before). We get the ring as in Equation \ref{equation:length_calculation} and the two generators become $1 - u$ and
$u - v$. So here the quotient ring is
\begin{equation*}
\mathbb{C}[w]/(w^3 - 1)\text{.}
\end{equation*}
This means that the rank is at most three (notice that we get a reduced scheme).
We can calculate the Hilbert function of $A_F = S/F^\perp$ (where $F = x_0^2 x_1^2 x_2^2$). We have $\dim_{\mathbb{C}} (A_F)_{(3,1)} = 3$, so from
Corollary \ref{corollary:border_catalecticant_bound} we get that $\operatorname{\underline{r}}(F) \geq 3$.
Now we show that $\operatorname{cr}(F) = 3$. We look at the polytope $P$ of the embedding by $\mathcal{O}(6,0)$.
\[\begin{tikzpicture}
\foreach \x in {-2,...,2}
{
\foreach \y in {-2,...,2}
{
\draw[fill] (\x,\y)circle [radius=0.025];
}
}
\fill[opacity=0.5, color=gray] (0,-2)--(2,2)--(-2,0)--cycle;
\draw[-latex,thick] (0,-2) -- (1,0);
\draw[-latex,thick] (0,-2) -- (0,-1);
\draw[-latex,thick] (0,-2) -- (-1,-1);
\draw[-latex,thick] (-2,0) -- (0,1);
\draw[-latex,thick] (-2,0) -- (-1,-1);
\draw[-latex,thick] (-2,0) -- (-1,0);
\draw[-latex,thick] (2,2) -- (1,1);
\draw[-latex,thick] (2,2) -- (1,0);
\draw[-latex,thick] (2,2) -- (0,1);
\node[right] at (2,2) {$(6,0,0)$};
\node[below] at (0,-2) {$(0,0,6)$};
\node[left] at (-2,0) {$(0,6,0)$};
\node[below] at (0,0) {$(2,2,2)$};
\end{tikzpicture}\]
The projective tangent space at the vertex $v$ is given by the Hilbert basis of the semigroup $\mathbb{N}(P\cap M - v)$ (see Proposition
\ref{proposition:embedded_tangent_space}). The vector $(2,2,2)$ is in none of the three Hilbert bases, which means that $x_0^2 x_1^2 x_2^2$ lies in
none of the three tangent spaces at the singular points. But the fact that $\operatorname{\underline{r}}(F) \geq 3$ means that $F$ lies neither in any projective tangent
space at a smooth point nor on any secant line passing through two points. It follows that $\operatorname{cr}(F) > 2$.
\end{example}
\end{document}
\begin{document}
\title{$\,$\vskip-2cm\bf The distribution and asymptotic behaviour of the negative Wiener-Hopf factor for L\'evy processes with rational positive jumps}
\author{ {\sc Ekaterina T. Kolkovska\/}\thanks{\'{A}rea de Probabilidad y Estad\'{\i}stica, Centro de Investigaci\'{o}n en Matem\'{a}ticas,
Guanajuato, Mexico.} \and
{\sc Ehyter M. Mart\'{\i}n-Gonz\'{a}lez}\thanks{Departamento de Matem\'aticas, Universidad de Guanajuato,
Guanajuato, Mexico.}
}
\date{ }
\maketitle
\vskip-1cm
\begin{abstract}
We study the distribution of the negative Wiener-Hopf factor for a class of two-sided jumps L\'evy processes whose positive
jumps have a rational Laplace transform. The positive Wiener-Hopf factor for this class of processes was studied in \cite{lewismordecki}.
Here we obtain a formula for the Laplace transform of the negative Wiener-Hopf factor, as well as an
explicit expression for its probability density, which is in terms of sums of convolutions of known functions. Under
additional regularity conditions on the L\'evy measure of the studied processes, we also provide asymptotic results as $u\to-\infty$ for the distribution function $F(u)$
of the negative Wiener-Hopf factor.
We illustrate our results in some particular examples.
\noindent
\textit{Keywords and phrases}: Two-sided jumps L\'evy process,
Wiener-Hopf factorization, Negative Wiener-Hopf factor, L\'evy risk processes.
\end{abstract}
\newtheorem{cond}{Condition}
\newtheorem{lemma}{Lemma}
\newtheorem{remark}{Remark}
\newtheorem{propo}{Proposition}
\newtheorem{teo}{Theorem}
\newtheorem{coro}{Corollary}
\newtheorem{defi}{Definition}
\newenvironment{proofofmainteo1}[1][Proof of Theorem \ref{laplacefactorWienerHopfnegativo}]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newenvironment{proofofmainteo2}[1][Proof of Theorem \ref{densidadfactornegativo}]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newenvironment{proofoflemaJ0}[1][Proof of Lemma \ref{lemaJ0}]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newenvironment{proofoflemmatranslationtail}[1][Proof of Lemma \ref{lemmatranslationtailminuslaplaceexponent}]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newenvironment{proofoflemanegativeWHfactor}[1][Proof of Lemma \ref{lemmanegativeWHfactor}]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\allowdisplaybreaks
\section{Introduction}
\selectlanguage{english}
The Wiener-Hopf factorization for L\'evy processes has become a very important tool
due to its applications in several branches of applied probability, such
as insurance mathematics, theory of branching processes, mathematical finance and optimal control.
For instance, when the market is modelled by a L\'evy process, the positive Wiener-Hopf factor
allows one to solve the optimal stopping problem corresponding to the pricing of a
perpetual call option,
while the negative Wiener-Hopf factor is used to solve the
optimal stopping problem corresponding to the pricing of a
perpetual put option.
This negative Wiener-Hopf factor also arises in insurance mathematics in connection with scale functions appearing in fluctuation identities. Such identities allow one to obtain the joint distribution of the first passage time below a certain level and the position of the process at this time,
which is the classical ruin problem.
For a one-dimensional L\'evy process $\mathcal{X}=\{\mathcal{X}(t),t\geq0\}$ we denote $S_t =\sup_{0\leq s\leq t}\mathcal{X}(s)$ and $I_t =\inf_{0\leq s\leq t}\mathcal{X}(s)$. The explicit distributions of $S_t$ and $I_t$ are in general difficult to obtain, but the following relation holds. Let $e_q$ be an independent exponential random variable with parameter $q>0$. The positive and negative Wiener-Hopf factors
of $\mathcal{X}$ are defined respectively as the random variables $S_{e_q}$ and $I_{e_q}$, and they satisfy the identity
\begin{equation}\label{igualdadWHfactors}
\mathbb{E}\left[ e^{ir\ S_{e_q} }\right]\mathbb{E}\left[ e^{ir\ I_{e_q} }\right]=\frac{q}{q+\psi_\mathcal{X}(r)},\quad
r\in\mathbb{R},
\end{equation}
where $\psi_\mathcal{X}(r)=-\log\mathbb{E}\left[ e^{ir \mathcal{X}(1)}\right]$ is the characteristic exponent of $\mathcal{X}$. Only a few results are known for the explicit distribution of both Wiener-Hopf factors for processes with positive and negative jumps, see e.g. \cite{feller}, \cite{borovkov}, \cite{asmussenetal}, \cite{kuznetsov} and \cite{kuznetsov2}.
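The factorization identity (\ref{igualdadWHfactors}) can be sanity-checked in the one case where everything is classical: standard Brownian motion, for which both factors are exponential. A sympy sketch (the Brownian example is ours, purely illustrative):

```python
import sympy as sp

q, r = sp.symbols('q r', positive=True)
# Standard Brownian motion: psi_X(r) = r^2/2; classically S_{e_q} ~ Exp(sqrt(2q))
# and I_{e_q} has the law of -S_{e_q}.
mu = sp.sqrt(2 * q)
pos_factor = mu / (mu - sp.I * r)  # E[exp(i r S_{e_q})]
neg_factor = mu / (mu + sp.I * r)  # E[exp(i r I_{e_q})]
rhs = q / (q + r**2 / 2)           # q / (q + psi_X(r))
assert sp.simplify(pos_factor * neg_factor - rhs) == 0
```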
While the distribution of
the positive Wiener-Hopf factor has been studied recently by several authors under some rather general conditions on the positive jumps (see,
e.g., \cite{kuznetsov},
\cite{kuznetsovpeng}, \cite{lewismordecki} and the references therein),
the distribution of the negative factor in these cases has not been obtained explicitly.
In this paper we consider
L\'evy processes $\mathcal{X}$ with two-sided jumps such that the positive jumps have rational Laplace transform,
and with general negative jumps. This class of L\'evy processes has been studied recently in \cite{lewismordecki},
where the authors obtained the explicit distribution of the positive Wiener-Hopf factor as well as
asymptotic results for the tail of $S_\infty$. The particular case of L\'evy processes whose positive jumps have a phase-type distribution has been studied in \cite{asmussenetal}, where the authors obtained the distributions of both Wiener-Hopf factors. The class of distributions having rational Laplace transforms is rich enough, since
it is dense in the class of nonnegative distributions. By inverting the Laplace transform of the random variable $-I_{e_q}$
we provide an explicit expression for the probability
density of the negative Wiener-Hopf factor $I_{e_q}$ in terms of given functions.
Under additional regularity assumptions on the L\'evy measure of $\mathcal{X}$ we obtain
asymptotic results as $u\to-\infty$ for the distribution function $F(u)$
of the negative Wiener-Hopf factor
$I_{e_q}$. Our formula for the density of the negative Wiener-Hopf factor generalizes the corresponding result in \cite{asmussenetal}.
The paper is organized as follows: in Section \ref{basicnotions}
we introduce basic concepts and notations and give some preliminary results.
In Section \ref{WHfactorssection} we obtain an expression for the Laplace transform
of $-I_{e_q} $, which we invert in order to get
an explicit formula for its probability density. In Section 4 we derive asymptotic results for the distribution of the negative Wiener-Hopf factor, while some relevant examples are given in Section \ref{examples}. In the final section we give the proof of the auxiliary Lemma 5.
\section{Preliminary results}\label{basicnotions}\setcounter{equation}{0}
We consider the class of two-sided jumps L\'evy processes $\mathcal{X}=\{\mathcal{X}(t),t\geq0 \}$, where
\begin{equation}\label{Xalpha}
\mathcal{X}(t)=ct +\gamma \mathcal{B}(t) + \mathcal{Z}(t)-\mathcal{S}(t),\ t\geq0.
\end{equation}
In the above expression, $c\geq0$ is a drift term, $\mathcal{B}=\{\mathcal{B}(t),t\geq0\}$
is a standard Brownian motion with variance parameter 2,
$\mathcal{S}=\{\mathcal{S}(t),t\geq0\}$ is a pure jump L\'evy process having only positive jumps and $\mathcal{Z}=\{\mathcal{Z}(t),t\geq0\}$
is a compound Poisson process
with L\'evy measure $\lambda_1f_1(x)\,dx$, where $\lambda_1>0$ is constant. The function $f_1$ is assumed to be a probability
density with Laplace transform of the form
\begin{equation}\label{laplacef1}
\widehat{f}_1(r)=\frac{Q(r)}{\prod\limits_{i=1}^N(\alpha_i+r)^{n_i}},
\end{equation}
where $N, n_i\in \mathbb{N}$ with $n_1+n_2+\dots +n_N=m$, $0< \alpha_1< \alpha_2< \dots< \alpha_N$ are real numbers
and $Q(r)$ is a polynomial of degree at most $m-1$.
Let $\Psi_\mathcal{S}(r)=-\log \mathbb{E}\left[ e^{-r \mathcal{S}(1)}\right]$ be
the Laplace exponent of the process $\mathcal{S}$. It is known (see \cite{sato})
that $\Psi_\mathcal{S}(r) =\int_{0+}^\infty\left(1-e^{-rx}-rx h(x)\right)\nu_\mathcal{S}(dx)$,
where $h$ is a truncation function and $\nu_\mathcal{S}$ is the L\'evy measure of $\mathcal{S}$,
which satisfies $\int_{0+}^\infty \left( x ^2 \wedge 1\right)\nu_\mathcal{S}(dx)<\infty$.
We also set $\overline{\mathcal{V}}_\mathcal{S}(u)=\int_u^\infty\nu_\mathcal{S}(dx)$.
For $\mathcal{X}$ given in (\ref{Xalpha}) we
consider the function
\begin{equation*}
\Psi_\mathcal{X}(r)=cr+\gamma^2r^2+\lambda_1\left(\frac{Q(-r)}{\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}}-1\right)-\Psi_\mathcal{S}(r).
\end{equation*}
Note that, for $0\leq r< \alpha_1$, $\Psi_\mathcal{X}(r)=\log \mathbb{E}\left[ e^{r\mathcal{X}(1)}\right]$.
When $\mathcal{S}$ is a subordinator we replace $\Psi_\mathcal{S}(r)$ in the above expression by
$G_\mathcal{S}(r)=\int^\infty_{0+}\left(1-e^{-rx}\right)\nu_\mathcal{S}(dx),$
and assume that the drift term $c$ includes the constant $\int_{0+}^\infty x h(x)\nu_\mathcal{S}(dx)$.
In what follows we consider the sets $\mathbb{C}_+=\{z\in\mathbb{C}: Re(z)\geq0\}$ and $\mathbb{C}_{++}=\{z\in\mathbb{C}:Re(z)>0\}$.
We consider the following cases:
\begin{enumerate}[\text{Case }A.]
\item $c=\gamma=0$ and $\mathcal{S}$ is a driftless subordinator other than a compound Poisson process or $\mathcal{S}$ is a compound Poisson process such that $\mathbb{E}\left[\mathcal{X}(1)\right]>0$,
\item $c>0$, $\gamma=0$ and $\mathcal{S}$ is a driftless subordinator,
\item Any other case, except when $c=\gamma=0$ and $\mathcal{S}$ is a compound Poisson process with $\mathbb{E}\left[\mathcal{X}(1)\right]\leq 0$. In this case we also assume that $\int_{0+}^\infty(x^2\wedge x)\nu_\mathcal{S}(dx)<\infty$.
\end{enumerate}
\begin{remark}
The assumption $\int_{0+}^\infty(x^2\wedge x)\nu_\mathcal{S}(dx)<\infty$ holds, for instance, when $\int_{0+}^\infty x\nu_\mathcal{S}(dx)<\infty.$
\end{remark}
The following result from \cite{lewismordecki} holds for the roots of the equation $ \Psi_\mathcal{X}(r)-q=0,$ which we call the generalized Cram\'er--Lundberg equation:
\begin{lemma}\label{lemmarootsofGLEgeneralcase}
Let $q\geq0$ and assume $\mathbb{E}\left[\mathcal{X}(1)\right]>0$ when $q=0$. Then:
\begin{enumerate}[a)]
\item In case A, the equation $ \Psi_\mathcal{X}(r)-q=0$ has $m$ roots in $\mathbb{C}_{++}$,
\item In cases B and C, the equation $ \Psi_\mathcal{X}(r)-q=0$ has $m+1$ roots in $\mathbb{C}_{++}$.
\end{enumerate}
In all the cases above, there is exactly one real root $\beta_1(q)$ in the interval $(0,\alpha_1)$, and
it satisfies $\lim\limits_{q\downarrow0}\beta_1(q)=0$.
When $q=0$, $\beta_1(0)=0$ is a simple root of $ \Psi_\mathcal{X}(r)=0$ in all cases A, B and C.
\end{lemma}
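A concrete instance of Lemma \ref{lemmarootsofGLEgeneralcase} in case B, with illustrative parameters of our own choosing: $c = 1$, $\gamma = 0$, positive jumps $\sim\mathrm{Exp}(2)$ at rate $\lambda_1 = 1$ (so $m = 1$), and $\mathcal{S}$ a compound Poisson subordinator with $\mathrm{Exp}(3)$ jumps at rate $1$, so that $G_\mathcal{S}(r) = r/(3+r)$. Clearing denominators in $\Psi_\mathcal{X}(r) = q$ gives a cubic, whose roots can be counted numerically:

```python
import sympy as sp

r = sp.symbols('r')
q = sp.Rational(1, 2)
# Psi_X(r) = c r + lambda_1 (alpha/(alpha - r) - 1) - G_S(r) with the
# illustrative parameters c = 1, lambda_1 = 1, alpha = 2, G_S(r) = r/(3 + r).
Psi = r + (2 / (2 - r) - 1) - r / (3 + r)

# Clear denominators and find all roots of Psi_X(r) - q = 0 numerically.
numerator = sp.numer(sp.together(Psi - q))
roots = sp.nroots(sp.expand(numerator))

pos = [z for z in roots if sp.re(z) > 0]
assert len(pos) == 2  # m + 1 = 2 roots in C_{++}, as case B predicts
real_small = [z for z in pos if abs(sp.im(z)) < 1e-12 and 0 < sp.re(z) < 2]
assert len(real_small) == 1  # exactly one real root in (0, alpha_1) = (0, 2)
```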
Let us assume that the equation $ \Psi_\mathcal{X}(r)-q=0$ has $R$ different roots in $\mathbb{C}_{++}$,
denoted by $\beta_1(q),\dots,\beta_R(q)$, with respectively multiplicities $k_1,k_2,\dots,k_R$,
where $\sum\limits_{j=1}^{R}k_j=m+1- 1_{\{\text{case A}\}}$, and $1_{\{\text{case A}\}}=1$ in case A and $1_{\{\text{case A}\}}=0$ in the other cases. We let $\beta_1(q)$ be the real root such
that $\beta_1(q)\in[0,\alpha_1)$, hence $k_1=1$.
The case when $q=0$ is taken in the limiting sense.
When $\mathbb{E}\left[\mathcal{X}(1)\right]\leq 0$, we have $\mathbb{P}\left[ I_\infty=-\infty\right]=1$; hence we impose the following condition:
\begin{cond}\label{condiq0}
For $q=0$, we assume that $\mathbb{E}\left[\mathcal{X}(1)\right]>0$.
\end{cond}
For $a=0,1,\dots,m+1$, we define the linear operator $\mathcal{T}_{s,a}$ by the
expression
\begin{equation*}
\mathcal{T}_{s,a}f(u)=\int_u^\infty(y-u)^ae^{-s(y-u)}f(y)dy,
\end{equation*}
for all measurable, nonnegative functions $f$ and complex numbers $s$ such that the integral above exists and is finite.
If $\nu$ is a measure such that
$\int_u^\infty(y-u)^ae^{-s(y-u)}\nu(dy)$ exists, we define for $a=0,1,\dots,m+1$,
\begin{equation}\label{operadorT}
\mathcal{T}_{s,a}\nu(u)=\int_u^\infty(y-u)^ae^{-s(y-u)}\nu(dy),
\end{equation}
and denote the Laplace transforms of these two operators by $\widehat{\mathcal{T}}_{s,a} f$
and $\widehat{\mathcal{T}}_{s,a} \nu$.
When $a=0$, we obtain the Dickson-Hipp operator $T_sf$ defined
in \cite{dicksonhipp}
and write $\mathcal{T}_{s}f(u)=\int_u^\infty e^{-s(y-u)}f(y)dy$, with the corresponding modification
when $f$ is replaced by a measure $\nu$.
We shall use
the following elementary properties and lemma:
\small
\begin{equation}\label{laplaceoperatorT}
\widehat{T}_sf(r)=\frac{\widehat{f}(r)-\widehat{f}(s)}{s-r},\quad \widehat{T}_s\nu(r)=\frac{\int_{0+}^\infty \left( e^{-rx}-e^{-sx}\right)\nu(dx)}{s-r}.
\end{equation}
\normalsize
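For a concrete test function one can verify (\ref{laplaceoperatorT}) directly; here we take $f(y) = e^{-\lambda y}$ (our choice, purely illustrative):

```python
import sympy as sp

y, u, r, s, lam = sp.symbols('y u r s lambda', positive=True)
f = sp.exp(-lam * y)  # illustrative test function

# Dickson-Hipp operator: T_s f(u) = int_u^oo exp(-s(y-u)) f(y) dy.
Tsf = sp.integrate(sp.exp(-s * (y - u)) * f, (y, u, sp.oo))

# Laplace transform (in u) of T_s f, and the right-hand side of the identity.
lhs = sp.integrate(sp.exp(-r * u) * Tsf, (u, 0, sp.oo))
fhat = sp.integrate(sp.exp(-r * y) * f, (y, 0, sp.oo))
rhs = (fhat - fhat.subs(r, s)) / (s - r)
assert sp.simplify(lhs - rhs) == 0
```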
\begin{lemma}\label{lemaderivadaseintegrales}
Let $f$ be a function (or a measure) such that $\mathcal{T}_{s,k} f(u)$ exists for every $s\in\mathbb{C}_{++}$, $k\in \mathbb{N}\cup\{0\}$
and $u>0$. For each $r\in \mathbb{C}_+$, $s\in\mathbb{C}_{++}$ and $k\in\mathbb{N}\cup\{0\}$ there holds $\frac{\partial^k}{\partial s^k}\widehat{T}_sf(r)=(-1)^k\widehat{\mathcal{T}}_{s,k}f(r)$.
\end{lemma}
The following result follows from Theorem 6.16 in \cite{kyprianou}.
\begin{lemma}\label{lemmakyprianou}
Let $S_{e_q}$ be the positive Wiener-Hopf factor of a L\'evy process, other than a compound Poisson process, and denote by $\kappa$ the joint Laplace exponent for the bivariate subordinator representing the ascending ladder process $( {\mathbb{L}}^{-1},\mathbb{H}), $
and by $\Lambda (dx,dy)$ its bivariate L\'evy measure.
Then there exists $b\geq 0$ such that, for $r\geq0$ and $q>0$, it holds that
\begin{equation*}
b r+\int _{0+}^\infty\left(1-e^{-ry}\right) \int_{0+}^\infty e^{-q x}\Lambda (dx,dy)=\kappa(q,r)-\kappa(q,0),
\end{equation*}
and
\begin{equation}\label{laplacepositiveWHfactor}
\mathbb{E}\left[ e^{-r S_{e_q}}\right]=
\mathbb{E}\left[ e^{-r \mathbb{H}_q \left( e_{\kappa(q,0)}\right)}\right]=\frac{\kappa(q,0)}{ \kappa(q,r) }.
\end{equation}
Here $e_{\kappa(q,0)}$ is an exponential random variable
with mean $1/\kappa(q,0),$ independent of the L\'evy process.
It also holds
\begin{equation}\label{kapa}
q-\Psi_{\mathcal{X}}(r)=\kappa(q, -ir)\widehat{\kappa}(q, ir),
\end{equation}
where $\widehat{\kappa}$ is the joint Laplace exponent for the bivariate subordinator representing the descending ladder process $( {\mathbb{L}}^{-1},\widehat{\mathbb{H}})$.
\end{lemma}
In order to simplify our notation, we define the following constants:
$$E(j,a,q)=\binom{k_j-1}{a}\frac{(-1)^{1-k_j+a}}{(k_j-1)!}\frac{\partial^{k_j-1-a}}{\partial s^{k_j-1-a}}\left[\frac{\prod\limits_{l=1}^N(\alpha_l-s)^{n_l}(\beta_j(q)-s)^{k_j}}{\prod\limits_{l=1}^R(\beta_l(q)-s)^{k_l}}\right]_{s=\beta_j(q)},$$
$$E_*(j,a,q)=\binom{k_j-1}{a}\frac{(-1)^{1-k_j+a}}{(k_j-1)!}\frac{\partial^{k_j-1-a}}{\partial s^{k_j-1-a}}\left[\frac{\prod\limits_{l=1}^N(\alpha_l-s)^{n_l}(\beta_j(q)-s)^{k_j}}{\prod\limits_{l=1}^R(\beta_l(q)-s)^{k_l}}s\right]_{s=\beta_j(q)},$$
for each $j=1,2,\dots,R$. The constants $E(j,0,q)$ and $E(j,a,q)$ for $a>0$ correspond, respectively, to those given in expressions (2.4)
and (2.5) in \cite{lewismordecki}.
We define the functions
\begin{align}
\ell_q(u)= \sum\limits_{j=1}^R\sum\limits_{a=0}^{k_j-1}E(j,a,q)\mathcal{T}_{\beta_j(q),a}\nu_\mathcal{S}(u),
\quad \mathcal{L}_q(u)= \sum\limits_{j=1}^R\sum\limits_{a=0}^{k_j-1}E_*(j,a,q)\mathcal{T}_{\beta_j(q),a}\overline{\mathcal{V}}_\mathcal{S}(u),\quad q\geq0,\nonumber
\end{align}
and the measure
\begin{equation}\label{funcionesL}
\chi_{q,\mathcal{S}}(dx )=\left\{
\begin{array}{ll}
\nu_\mathcal{S}(dx)+\ell_q(x)dx&\mbox{in case A},\\
&\\
\ell_q(x)dx&\mbox{in case B},\\
&\\
\left[\overline{\mathcal{V}}_\mathcal{S}(x)-\mathcal{L}_q(x)\right] dx&\mbox{in case C}.
\end{array}
\right.
\end{equation}
\section{Main results}\setcounter{equation}{0}\label{WHfactorssection}
In this section we obtain an explicit expression for the probability density of
the negative
Wiener-Hopf factor $I_{e_q}$ of
the process $\mathcal{X}$ defined in (\ref{Xalpha}). The results presented for $q=0$ are all under the assumption that Condition \ref{condiq0} holds.
For $a>0$ let $\mathcal{E}_a(x)$ denote the exponential density $\mathcal{E}_a(x)=a e^{-ax}$, $ x>0$, and define, for $q\ge0$, the function $\widehat{W}_q$ by
\begin{equation}\label{laplaceWcasogeneral}
\widehat{W}_q(r)=\frac{\kappa(q,-r)}{\left[ q- \Psi_\mathcal{X}(r)\right]}, \quad r\ge0,
\end{equation}
where $\kappa$ is given in Lemma \ref{lemmakyprianou}. By Theorem 2.2 in \cite{lewismordecki}, we know that
\begin{equation}\label{kappa1}
\kappa(q,r)=\frac{\prod_{j=1}^R(\beta_j(q)+r)^{k_j}}{\prod_{l=1}^N(\alpha_l+r)^{n_l}},
\end{equation}
so $\kappa(q,0)=\frac{\prod_{j=1}^R\beta_j^{k_j}(q)}{\prod_{l=1}^N\alpha_l^{n_l}}$. Since $\beta_1(0)=0$, it follows that $\kappa(0,0)=0$. Hence, using that by L'H\^opital's rule $\lim_{\beta_1(q) \to 0} \frac{\Psi_{\mathcal{X}}(\beta_{1}(q))}{\beta_{1}(q)}=\mathbb{E}\left[\mathcal{X}(1)\right],$
we obtain
$$\lim_{q\downarrow0}\frac{q}{\kappa(q,0)}=\lim_{q\downarrow0}\frac{q}{\beta_1(q)}\frac{\prod_{l=1}^N\alpha_l^{n_l}}{\prod_{j=2}^R\beta_j^{k_j}(q)}=\mathbb{E}\left[\mathcal{X}(1)\right]\frac{\prod_{l=1}^N\alpha_l^{n_l}}{\prod_{j=2}^R\beta_j^{k_j}(0)}.$$
We define for $q>0$, $a(q)=q/\kappa(q,0)$ and $a(0)=\lim_{q\downarrow0}a(q)$.
Hence, we have the following result.
\begin{lemma}
The Laplace transforms of $-I _{e_q}$ for $q>0$ and
$-I_\infty $ satisfy the following equalities for $r\geq0$:
\begin{equation}\label{mainteo1parteA}
\mathbb{E}\left[ e^{-r \left[-I _{e_q}\right]}\right]=a(q) \widehat{W}_q(r)\quad\text{ and }\quad\mathbb{E}\left[ e^{-r \left[-I _\infty\right]}\right]=a(0)\widehat{W}_{0}(r).
\end{equation}
Hence, for $q\geq0$, $a(q) W_q$ is the density of $-I_{e_q}$.
\end{lemma}
\begin{proof}
Using (\ref{igualdadWHfactors}), Lemma \ref{lemmakyprianou} and the relation $\psi_\mathcal{X}(s)=-\Psi_\mathcal{X}(is)$, we get
\begin{equation}\label{relacionhatkappaykappa}
\mathbb{E}\left[ e^{is I _{e_q}}\right]=\frac{q}{q-\Psi_\mathcal{X}(is)}\left(\frac{\kappa(q,-is)}{\kappa(q,0)}\right).
\end{equation}
The function on the right-hand side can be analytically extended to the negative part of the imaginary axis. Hence the result follows by taking $s=-ir$ for $r\geq0$.
The case $q=0$ follows by taking limits as $q\downarrow0$.
\end{proof}
In the following result we invert $a(q)\widehat{W}_q$.
\begin{teo}\label{laplacefactorWienerHopfnegativo}\leavevmode
\begin{enumerate}[a)]
\item The function $\widehat{W}_q$ satisfies the equalities
\begin{equation}\label{laplaceWienerHopf}
a(q) \widehat{W}_q(r)=\left\{
\begin{array}{ll}
\frac{\widehat{\mathcal{E}}_{a(q)}(r)}{1-\left(1-\widehat{\mathcal{E}}_{a(q)}(r)\right)\left(1-\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)\right)}, &\text{ in cases A and C with }\gamma=0,\\
\frac{\frac{a(q)}{c}}{1-\frac{1}{c}\widehat{\chi}_{q,\mathcal{S}}(r)}, &\text{ in case B},\\
\frac{\widehat{\mathcal{E}}_{a(q)\gamma^{-2}}(r)}{1+\gamma^{-2}\left(1-\widehat{\mathcal{E}}_{a(q)\gamma^{-2}}(r)\right)\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)}, &\text{ in case C with } \gamma>0.
\end{array}\right.
\end{equation}
\item For $q\geq0$ and $u\ge0$ the negative Wiener-Hopf factor $I_{e_q}$ has a generalized density function given by:
\begin{align}
&a(q) W_q(u)\nonumber \\
&=\left\{
\begin{array}{ll}
\mathcal{E}_{a(q)}*\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\binom{n}{k}(-1)^k\left(\overline{\chi}_{q,\mathcal{S}}+\mathcal{E}_{a(q)}-\mathcal{E}_{a(q)}*\overline{\chi}_{q,\mathcal{S}}\right)^{*k}(u)&\text{ in cases A and C with }\gamma=0,\\
&\\
\frac{a(q)}{c}\delta_0(u)+\frac{a(q)}{c}\sum\limits_{n=1}^\infty\left(\dfrac{1}{c}\right)^n\chi_{q,\mathcal{S}}^{*n}(u),&\text{ in case B},\\
&\\
\mathcal{E}_{a(q)\gamma^{-2}}*\sum\limits_{n=0}^\infty\left(-\frac{1}{\gamma^2}\right)^n\Bigg(\overline{\chi}_{q,\mathcal{S}}-\mathcal{E}_{a(q)\gamma^{-2}}*\overline{\chi}_{q,\mathcal{S}}\Bigg)^{*n}(u)&\text{ in case C with }\gamma>0,\nonumber
\end{array}\right.
\end{align}
where $\delta_0$ is Dirac's delta function.
\end{enumerate}
\end{teo}
In order to prove Theorem \ref{laplacefactorWienerHopfnegativo} we need the following lemma. Its proof is technical and lengthy, and is deferred to Section 6.
\begin{lemma}\label{lemmanegativeWHfactor}
For $q\geq0$ we have:
\begin{equation}\label{lemanegativeWHfactor}
\left[ q- \Psi_\mathcal{X}(r)\right]\left( \kappa(q,-r)\right)^{-1}=\left\{
\begin{array}{ll}
a(q)+G_\mathcal{S}(r)+\widehat{\ell}_q(0)-\widehat{\ell}_q(r), &\text{ in Case A},\\
a(q)+\widehat{\ell}_q(0)-\widehat{\ell}_q(r), &\text{ in Case B},\\
a(q) +\gamma^2 r-\frac{\Psi_\mathcal{S}(r)}{r}-\big[\widehat{\mathcal{L}}_q(0)-\widehat{\mathcal{L}}_q(r)\big], &\text{ in Case C}.
\end{array}\right.
\end{equation}
\end{lemma}
\begin{proofofmainteo1}
Clearly, part (b) follows by inverting (\ref{laplaceWienerHopf}). To prove (\ref{laplaceWienerHopf}) we assume $q>0$; the case $q=0$ follows by letting $q\downarrow0$.
From (\ref{mainteo1parteA}),
(\ref{funcionesL}),(\ref{laplaceWcasogeneral}), (\ref{lemanegativeWHfactor}) and the definition of $a(q)$ we obtain
\begin{equation}\label{caso A}
\mathbb{E}\left[ e^{-r \left[-I _{e_q}\right]}\right]=a(q) \widehat{ W}_q(r)
=\frac{a(q)}{a(q)+\int\limits_{0+}^\infty(1-e^{-rx})\chi_{q,\mathcal{S}}(dx )} =\frac{q}{q+\int\limits_{0+}^\infty(1-e^{-rx})\kappa(q,0)\chi_{q,\mathcal{S}}(dx )}.
\end{equation}
To obtain (\ref{laplaceWienerHopf}) in case A, we use
\begin{equation}\label{laplaceE}
\widehat{\mathcal{E}}_{a(q)}(r)=\frac{a(q)}{a(q)+r},
\end{equation}
and apply Fubini's theorem to $\int\limits_{0+}^\infty(1-e^{-rx})\chi_{q,\mathcal{S}}(dx )$. This yields
\begin{align}
a(q) \widehat{ W}_q(r)&=\frac{a(q)}{a(q)+\int\limits_{0+}^\infty(1-e^{-rx})\chi_{q,\mathcal{S}}(dx )}
=\frac{a(q)}{a(q)+r\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)}
=\frac{\widehat{\mathcal{E}}_{a(q)}(r)}{1-\left(1-\widehat{\mathcal{E}}_{a(q)}(r)\right)\left(1-\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)\right)}.\nonumber
\end{align}
Hence (\ref{laplaceWienerHopf}) follows in this case.
In case B, from (\ref{laplaceWcasogeneral}) and (\ref{lemanegativeWHfactor}) we have
\begin{align}
a(q)\widehat{W}_q(r)&=\frac{a(q)}{a(q)+\widehat{\ell}_q(0)-\widehat{\ell}_q(r)} =\frac{\frac{a(q)}{a(q)+\widehat{\ell}_q(0)}}{1-\frac{1}{a(q)+\widehat{\ell}_q(0)}\widehat{\ell}_q(r)}.\label{igualdad2casoB}
\end{align}
Due to (\ref{igualdadadelta}) we have $a(q)+\widehat{\ell}_q(0)=c$, and
from (\ref{funcionesL}) it follows that $\widehat{\ell}_q(r)=\widehat{\chi}_{q,\mathcal{S}}(r)$.
Substituting these two equalities into (\ref{igualdad2casoB}) and using (\ref{mainteo1parteA}) gives (\ref{laplaceWienerHopf}).
We now deal with case C. Using (\ref{laplaceWcasogeneral}) and (\ref{lemanegativeWHfactor}), we obtain $$ a(q)\widehat{W}_q(r)=\frac{a(q)}{a(q)+\gamma^2 r-\frac{\Psi_\mathcal{S}(r)}{r}-\Big[\widehat{\mathcal{L}}_q(0)-\widehat{\mathcal{L}}_q(r)\Big]}.
$$
Now we apply Fubini's theorem to $\Psi_\mathcal{S}(r)/r$ to obtain, for $\gamma>0$:
\begin{align}
a(q)\widehat{W}_q(r)
&=\frac{\frac{a(q)\gamma^{-2}}{a(q)\gamma^{-2}+r}}{1+\gamma^{-2}\frac{r}{a(q)\gamma^{-2}+r}\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)}
=\frac{\widehat{\mathcal{E}}_{a(q)\gamma^{-2}}(r)}{1+\gamma^{-2}\left(1-\widehat{\mathcal{E}}_{a(q)\gamma^{-2}}(r)\right)\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)},\nonumber
\end{align}
where we have used (\ref{laplaceE}) with $a(q)$ replaced by $a(q)\gamma^{-2}$. When $\gamma=0$ we have
\begin{align}
a(q)\widehat{W}_q(r)
&=\frac{\frac{a(q)}{a(q)+r}}{1+\frac{r}{a(q)+r}\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)}
=\frac{\widehat{\mathcal{E}}_{a(q)}(r)}{1+\left(1-\widehat{\mathcal{E}}_{a(q)}(r)\right)\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)},\nonumber
\end{align}
and we obtain (\ref{laplaceWienerHopf}) using (\ref{mainteo1parteA}).
\end{proofofmainteo1}
\begin{lemma} For all $q\ge0$ the measure $\kappa (q,0) \chi_{q,\mathcal{S}}$ is the L\'evy measure of $-I_{e_q}.$
\end{lemma}
\begin{proof}
The non-negative random variable $-I_{e_q} $ is infinitely divisible,
with Laplace transform
\begin{equation}\label{laplacefactornegativo2}
\mathbb{E}\left[ e^{-r\left(-I_{e_q} \right)}\right]=\exp\left\{ -\int_{0+}^\infty \left(1-e^{-rx}\right)\nu_q(dx)\right\},
\end{equation} where the measure
\begin{equation}\label{medidanu}
\nu_q(dx) =\int_{0}^\infty t^{-1}e^{-q t}\mathbb{P}\left[ -I_t \in dx\right] dt
\end{equation}
is the L\'evy measure of $-I_{e_q}$ (see, e.g., Lemma 6.17 in \cite{kyprianou}).
On the other hand, from the formula for Frullani's integral, we have for $\alpha,\beta>0$ and $z\leq 0$,
\begin{equation}\label{frullanisintegral}
\left(\frac{\alpha}{\alpha-z}\right)^\beta=\exp\left\{-\int_0^\infty(1-e^{zt})\beta t^{-1}e^{-\alpha t}dt\right\}.
\end{equation}
Let $\mathcal{N}_q$ denote the subordinator with L\'evy measure $\kappa(q,0)\chi_{q,\mathcal{S}},$ and denote its Laplace exponent by $\Psi_q$. From (\ref{caso A}) we have
$\mathbb{E}\left[ e^{-r(-I_{e_q} )}\right]=
\frac{q}{q+\Psi_q(r)},$ hence using (\ref{frullanisintegral}) with $\alpha=q$, $\beta=1$ and $z=-\Psi_q(r)$
we obtain
\begin{equation}\label{frullaniA}
\mathbb{E}\left[ e^{-r\left(-I_{e_q} \right)}\right]=
\exp\left\{ -\int_0^\infty\left(1-e^{-t\Psi_q(r)}\right) t^{-1}e^{-q t}dt\right\}.
\end{equation}
Since $1-e^{-t\Psi_q(r)}=\int_0^\infty \left(1-e^{-rx}\right)\mathbb{P}\left[ \mathcal{N}_q(t)\in dx\right]$, setting
\begin{equation}\label{medida}
\Pi(dx)=\int_{0}^\infty t^{-1}e^{-q t}\mathbb{P}\left[ \mathcal{N}_q(t) \in dx\right] dt,
\end{equation}
and using
Fubini's theorem in (\ref{frullaniA}) it follows that
\begin{equation}\label{frullani2}
\mathbb{E}\left[ e^{-r\left(-I_{e_q} \right)}\right]=
\exp\left\{ -\int_0^\infty\left(1-e^{-rx}\right) \Pi(dx)\right\}.
\end{equation}
Now from (\ref{laplacefactornegativo2}) we deduce that
\begin{equation}\label{nu}
\Pi= \nu_q.
\end{equation}
Using (\ref{medida}) and (\ref{medidanu}) we obtain the result.
\end{proof}
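Frullani's formula (\ref{frullanisintegral}), used in the proof above, is easy to check numerically. The following sketch compares both sides by quadrature; the parameter values $\alpha=1.7$, $\beta=0.8$, $z=-2.3$ are arbitrary test choices, not taken from the paper:

```python
import numpy as np

# Check (alpha/(alpha-z))^beta = exp(-int_0^inf (1-e^{zt}) beta t^{-1} e^{-alpha t} dt)
# for alpha, beta > 0 and z <= 0.  Parameter values are arbitrary test choices.
def trapezoid(y, t):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

alpha, beta, z = 1.7, 0.8, -2.3
t = np.linspace(1e-9, 200.0, 2_000_000)
# Near t = 0 the integrand tends to -z*beta, so it is finite at the origin.
integrand = (1.0 - np.exp(z * t)) * beta * np.exp(-alpha * t) / t
rhs = np.exp(-trapezoid(integrand, t))
lhs = (alpha / (alpha - z)) ** beta
```

Here the integral evaluates to $\beta\log\frac{\alpha-z}{\alpha}$, so both sides agree to quadrature accuracy.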
\begin{remark} Since $-I_{e_q}=\widehat{H}(e_{\widehat{\kappa}(q,0)})$ in distribution \cite[Theorem 6.16]{kyprianou}, it follows that the measure $\kappa (q,0) \chi_{q,\mathcal{S}}$ is also the L\'evy measure of the descending ladder-height process $\widehat{H}$ corresponding to $\mathcal{X}(t),$ killed at the uniform rate $\widehat{\kappa}(q,0).$
\end{remark}
\section{Asymptotic behavior of the negative Wiener-Hopf factor}\setcounter{equation}{0}
For $u>0$ we denote
$F_{I_{e_q} }(-u)=\mathbb{P}\left[ I_{e_q} <-u\right]$.
We obtain asymptotic expressions for $F_{I_{e_q} }(-u)$ as $u\to\infty$. For this, we use the following technical result.
\begin{lemma}
The equality
\begin{align}
r\widehat{F}_{I_{e_q}}(r)
&=\frac{\sum\limits_{j=1}^{m+2}A_j'r^j+\Psi_\mathcal{S}(r)\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)-\widehat{\kappa}(q,0)\sum\limits_{j=1}^{m+1}B_jr^j}{q\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^{m+2}A_j'r^j+\Psi_\mathcal{S}(r)\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)}\label{expansionPsiX}
\end{align}
holds
for some constants $A_j$, $j=1,\dots,m$, $B_k$, $k=1,\dots,m+1$, and $A_j'$, $j=1,\dots,m+2$.
\mathcal{E}nd{lemma}
\begin{proof}
Using that $r\widehat{F}_{I_{e_q}}(r)=1-\widehat{f}_{-I_{e_q}}(r)$, where $\widehat{f}_{-I_{e_q}}(r)$ denotes the Laplace transform of the density of $-I_{e_q}$, we obtain,
as in (\ref{laplacepositiveWHfactor}), \begin{equation}
r\widehat{F}_{I_{e_q}}(r)=\frac{\widehat{\kappa}(q,r)-\widehat{\kappa}(q,0)}{\widehat{\kappa}(q,r)}.\label{kappas}
\end{equation}
On the other hand, from (\ref{kapa}) and (\ref{kappa1}) it follows that \begin{equation}
\widehat{\kappa}(q,r)=\prod_{l=1}^N\left(\alpha_l-r\right)^{n_l}\frac{q-\Psi_\mathcal{X}(r)}{\prod_{j=1}^R\left(\beta_j(q)-r\right)^{k_j}}\label{kappagorro}.
\end{equation}
Since
$$q-\Psi_\mathcal{X}(r)=q-cr-\gamma^2r^2-\lambda_1\left(\frac{Q(-r)}{\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}}-1\right)+\Psi_\mathcal{S}(r),$$
and $\lambda_1\left(\frac{Q(-r)}{\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}}-1\right)$ can be written as the quotient of a polynomial $\mathcal{P}_1$ of degree $m$ and $\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}$, which is also a polynomial of degree $m$, we have
\begin{equation}\label{laplaceexponentwithpolynomials}
q-\Psi_\mathcal{X}(r)=q-cr-\gamma^2r^2-\frac{\mathcal{P}_1(r)}{\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}}+\Psi_\mathcal{S}(r).
\end{equation}
Since $\frac{Q(0)}{\prod\limits_{j=1}^N\alpha_j^{n_j}}=1$, we obtain that $\mathcal{P}_1$ is a polynomial with constant term $0$. Using $\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}=\sum_{j=0}^mA_j r^j$, it follows that
\begin{align}
\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}(q-\Psi_\mathcal{X}(r))&=qA_0+q\sum_{j=1}^mA_j r^j-c\sum_{j=0}^mA_j r^{j+1}-\gamma^2\sum_{j=0}^mA_j r^{j+2}\nonumber\\
&-\mathcal{P}_1(r)+A_0\Psi_\mathcal{S}(r)+\sum_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)\nonumber\\
&=qA_0+\sum\limits_{j=1}^{m+2}A_j'r^j+A_0\Psi_\mathcal{S}(r)+\sum_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)\label{expansionpolinomialdePsiX}.
\end{align}
Evaluating $\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}$ at $r=0$ we obtain $A_0=\prod_{l=1}^N\alpha_l^{n_l}$. Moreover, setting $r=0$ in (\ref{kappagorro}) gives
\begin{equation}\label{qA0}
q\prod\limits_{l=1}^N\alpha_l^{n_l}=\widehat{\kappa}(q,0)\prod\limits_{j=1}^R\beta_j^{k_j}(q).
\end{equation}
Hence, using that $\prod_{j=1}^R\left(\beta_j(q)-r\right)^{k_j}$ can be expressed as $\sum_{j=0}^{m+1}B_jr^j$ and substituting (\ref{qA0}) and (\ref{expansionpolinomialdePsiX}) into (\ref{kappas}), it follows that
\begin{align}
r\widehat{F}_{I_{e_q}}(r)
&=\frac{\sum\limits_{j=1}^{m+2}A_j'r^j+\Psi_\mathcal{S}(r)\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)-\widehat{\kappa}(q,0)\sum\limits_{j=1}^{m+1}B_jr^j}{q\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^{m+2}A_j'r^j+\Psi_\mathcal{S}(r)\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)}.\nonumber
\end{align}
\end{proof}
In what follows we write $f\approx c g$
for any two nonnegative functions $f$ and $g$ on $[0,\infty)$ such that $\lim\limits_{u\to\infty}\frac{f(u)}{g(u)}=c$, with $c\neq 0$.
Now we can derive the first asymptotic expression for $F_{I_{e_q} }$.
\begin{propo}\label{proporeferi}
If $r^{-\xi}\Psi_\mathcal{S}(r)\to D$ as $r\downarrow 0$ for some $\xi\in(0,1)$ and some positive constant $D$, then
$$F_{I_{e_q} }(-u)\approx \frac{D}{q\Gamma(1-\xi)}u^{-\xi},\quad u\to\infty.$$
\end{propo}
\begin{proof}
Due to Theorem 4 in \cite{feller}, page 446, we only need to prove that
\begin{equation}\label{qepd}
r^{-\xi+1}\widehat{F}_{I_{e_q} }(r)\to Dq^{-1}.
\end{equation}
From (\ref{expansionPsiX}) we have
\begin{align}
r^{-\xi+1}\widehat{F}_{I_{e_q} }(r)=\frac{\sum\limits_{j=1}^{m+2}A_j'r^{j-\xi}+r^{-\xi}\Psi_\mathcal{S}(r)\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^mA_j r^{j-\xi}\Psi_\mathcal{S}(r)-\widehat{\kappa}(q,0)\sum\limits_{j=1}^{m+1}B_jr^{j-\xi}}{q\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^{m+2}A_j'r^j+\Psi_\mathcal{S}(r)\prod\limits_{l=1}^N\alpha_l^{n_l}+\sum\limits_{j=1}^mA_j r^j\Psi_\mathcal{S}(r)}.
\end{align}
Since all the polynomial terms in the numerator have no constant term and $\xi\in(0,1)$, we have $j-\xi>0$; hence, letting $r\downarrow0$ and using the hypothesis on $r^{-\xi}\Psi_\mathcal{S}(r)$, we obtain (\ref{qepd}).
\end{proof}
Let us denote by $\overline{\chi}_{q,\mathcal{S}}$ the tail of the L\'evy measure $\chi_{q,\mathcal{S}}$.
\begin{propo}
In case B, if $1-\frac{\overline{\chi}_{q,\mathcal{S}}(u)}{\widehat{\chi}_{q,\mathcal{S}}(0)}$ is a subexponential distribution,
then $F_{I_{e_q} }(-u)\approx q^{-1}\kappa(q,0)\overline{\chi}_{q,\mathcal{S}}(u)$ as $u\to\infty$.
\end{propo}
\begin{proof}
First we note that from (\ref{igualdadadelta}) we get $c=a(q)+\widehat{\ell}_q(0)$, hence $a(q)=c-\widehat{\chi}_{q,\mathcal{S}}(0)$, and
\begin{align}
\widehat{F}_{I_{e_q} }(r)
&=\frac{1}{r}\left(\frac{\frac{1}{c}\widehat{\chi}_{q,\mathcal{S}}(0)-\frac{1}{c}\widehat{\chi}_{q,\mathcal{S}}(r)}{1-\frac{1}{c}\widehat{\chi}_{q,\mathcal{S}}(r)}\right)
=\frac{\frac{1}{c}\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)}{1-\frac{1}{c}\widehat{\chi}_{q,\mathcal{S}}(r)},\nonumber
\end{align}
where in the last equality we used that for a probability density $f$ with tail $\overline{F}$, we have $\widehat{\overline{F}}(r)=r^{-1}\left(1-\widehat{f}(r)\right)$.
It follows that
\begin{equation}\label{colacasoB}
F_{I_{e_q} }(-u)=\frac{1}{c}\overline{\chi}_{q,\mathcal{S}}*\sum\limits_{n=0}^\infty\left(\frac{1}{c}\right)^n\chi_{q,\mathcal{S}}^{*n}(u).
\mathcal{E}nd{equation}
Now we set $p=\frac{\widehat{\chi}_{q,\mathcal{S}}(0)}{a(q)+\widehat{\chi}_{q,\mathcal{S}}(0)}$. Then $p\in(0,1)$ and from (\ref{igualdadadelta}) it follows that
$p=\frac{\widehat{\chi}_{q,\mathcal{S}}(0)}{c}$. Using (\ref{colacasoB}) we get
\begin{align}
F_{I_{e_q} }(-u)&= \dfrac{1}{c}\overline{\chi}_{q,\mathcal{S}}*\sum\limits_{n=0}^\infty\left(\dfrac{1}{c}\right)^n\chi_{q,\mathcal{S}}^{*n}(u)
= p\dfrac{\overline{\chi}_{q,\mathcal{S}}}{\widehat{\chi}_{q,\mathcal{S}}(0)}*\sum\limits_{n=0}^\infty p^n\left(\dfrac{\chi_{q,\mathcal{S}}}{\widehat{\chi}_{q,\mathcal{S}}(0)}\right)^{*n}(u).\nonumber
\end{align}
Let us define the probability distribution
$H(u)=(1-p)\int\limits_0^u\sum\limits_{n=0}^\infty p^n\left(\dfrac{\chi_{q,\mathcal{S}}}{\widehat{\chi}_{q,\mathcal{S}}(0)}\right)^{*n}(x)dx$,
and denote its density by $h$.
Since $\overline{\chi}_{q,\mathcal{S}}(u)/\widehat{\chi}_{q,\mathcal{S}}(0)$ is the tail of a proper distribution, due to Corollary 3 in \cite{embrechtsetal}
and the assumption that $1-\overline{\chi}_{q,\mathcal{S}}/\widehat{\chi}_{q,\mathcal{S}}(0)$ is subexponential, we obtain
\begin{equation}\label{asymptoticG}
\overline{H}(u)\approx \frac{p}{1-p}\frac{\overline{\chi}_{q,\mathcal{S}}(u)}{\widehat{\chi}_{q,\mathcal{S}}(0)},
\end{equation}
hence $H$ is a subexponential distribution. From Lemma 2.5.2 in \cite{rolskietal} and (\ref{asymptoticG}), we have
\begin{align}
\lim\limits_{u\to\infty}\frac{F_{I_{e_q} }(-u)}{\overline{\chi}_{q,\mathcal{S}}(u)/\widehat{\chi}_{q,\mathcal{S}}(0)}&=\lim\limits_{u\to\infty}\frac{\frac{p}{1-p}\frac{\overline{\chi}_{q,\mathcal{S}}}{\widehat{\chi}_{q,\mathcal{S}}(0)}*h(u)}{\overline{\chi}_{q,\mathcal{S}}(u)/\widehat{\chi}_{q,\mathcal{S}}(0)}
=\frac{p}{1-p}=\widehat{\chi}_{q,\mathcal{S}}(0)q^{-1}\kappa(q,0),\nonumber
\end{align}
which implies the result.
\end{proof}
Let us define $\Pi(x)=\frac{\int_1^x \chi_{q,\mathcal{S}}(dy)}{\int_1^\infty \chi_{q,\mathcal{S}}(dy)}$ for $x>1$. We recall that a probability distribution $F$ is a
subexponential distribution if $\overline{F^{*2}}(x)\approx 2\overline{F}(x)$
as $x\to\infty$.
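The defining property $\overline{F^{*2}}(x)\approx 2\overline{F}(x)$ can be made concrete with a small numerical illustration for a Pareto tail $\overline{F}(x)=x^{-3/2}$ on $[1,\infty)$ (a hedged sketch; the Pareto example is ours and is not used elsewhere in the paper):

```python
import numpy as np

# For iid X1, X2 with tail Fbar(x) = x^{-1.5} on [1, inf), the ratio
# P(X1 + X2 > x) / (2 Fbar(x)) should be close to 1 for large x.
def pareto_tail(x, a=1.5):
    return np.where(x >= 1.0, np.maximum(x, 1.0) ** (-a), 1.0)

def conv_tail(x, a=1.5, n=2_000_000):
    # P(X1+X2 > x) = P(X1 > x-1) + int_1^{x-1} f(y) Fbar(x-y) dy,
    # since X2 >= 1 a.s. guarantees the sum exceeds x once X1 > x-1.
    y = np.linspace(1.0, x - 1.0, n)
    f = a * y ** (-a - 1.0)
    integrand = f * pareto_tail(x - y, a)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))
    return (x - 1.0) ** (-a) + integral

x = 1.0e4
ratio = conv_tail(x) / (2.0 * pareto_tail(x))
```

For $x=10^4$ the ratio is already close to $1$, illustrating the subexponential behavior of the convolution tail.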
\begin{propo}
Suppose that $\Pi$ is a subexponential distribution and set $\overline{\nu_q}(u)=\int\limits_u^\infty\nu_q(dy)$ where $\nu_q$ is the L\'evy measure of $-I_{e_q}.$
Then the random variable $-I_{e_q} $ has a subexponential distribution, and
\begin{equation}\label{colaasintoticamedidapi}
F_{I_{e_q} }(-u)\approx \overline{\nu_q}(u),\ u\to\infty.
\end{equation}
\end{propo}
\begin{proof}
The assertion that $-I_{e_q} $ has a subexponential distribution and that (\ref{colaasintoticamedidapi}) holds follow from
(\ref{frullani2}) and Theorem 1 in \cite{embrechtsetal}.
\end{proof}
\section{Examples}\label{examples}\setcounter{equation}{0}
In this section we apply the results from the previous section to several particular examples in which we obtain
simple asymptotic expressions for the negative Wiener-Hopf factor $F_{I_{e_q} }$.
For simplicity, in cases A and C we will assume that the
roots of $\Psi_\mathcal{X}(r)-q=0$ in $\mathbb{C}_{++}$
are all distinct. This assumption holds, e.g., when the density $f_1$ is a convex combination of exponential densities.
\textbf{Example 1 (case A)}.
We take $\Psi_\mathcal{S}(r)=r^\xi$ for $\xi \in(0,1)$, so that $\mathcal{S}$ is a $\xi$-stable subordinator and the assumptions of Proposition \ref{proporeferi} hold with $D=1$. Hence,
$$F_{I_{e_q} }(-u)\approx \frac{1}{q\Gamma(1-\xi)}u^{-\xi} \mbox{ as }u\to\infty.$$
\textbf{Example 2 (Case B)}.
1. Let us
suppose that $G_\mathcal{S}(r)=\lambda_2\widehat{f}_2(r)-\lambda_2$, i.e.\ $\mathcal{S}$ is a compound Poisson
process with L\'evy measure $\lambda_2f_2(x)\,dx$, where $\lambda_2>0$. In this case the
resulting L\'evy risk process is the classical two-sided jumps risk process.
Therefore
$$\chi_{q,\mathcal{S}}(u)=\lambda_2\sum\limits_{j=1}^R\sum\limits_{a=0}^{k_j-1}E(j,a,q)
\int_u^\infty(y-u)^a e^{-\beta_j(q)(y-u)}f_2(y)dy.$$
When $k_j=1$ for all $j=1,2,\dots,R$, and
$f_2$ is a mixture of exponential densities, the above expression can be easily calculated.
Let us consider the particular case when $f_1(x)= qe^{-qx}$, $x>0$, $\lambda_2=1$ and $f_2(x)=pe^{-px}$, $x>0$.
In this case the generalized
Lundberg equation $\Psi_\mathcal{X}(r)-q=0$ has two real roots $\beta_1(q)$ and $\beta_2(q)$ such that
$0<\beta_1(q)<q<\beta_2(q)$. Hence
$$\chi_{q,\mathcal{S}}(u)=\left(\frac{q-\beta_1(q)}{\beta_2(q)-\beta_1(q)}\frac{1}{\beta_1(q)+p}+\frac{\beta_2(q)-q}{\beta_2(q)-\beta_1(q)}\frac{1}{\beta_2(q)+p}\right) pe^{-pu}:=C_q pe^{-pu},$$
which means that the associated subordinator $\mathcal{N}_{2,q}$ is a compound Poisson process with intensity $C_q$ and jump sizes
with density $f_2$.
In this case we obtain an explicit expression for $F_{I_{e_q}}$ and its Laplace transform:
\begin{equation}\label{CP}
\widehat{F}_{I_{e_q} }(r)=\frac{\frac{C_q}{c}}{\frac{p(c-C_q)}{c}+r}\quad\mbox{and}\quad
F_{I_{e_q} }(-x)=\frac{C_q}{c}\,e^{-\frac{p(c-C_q)}{c} x}.
\end{equation}
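The algebra behind such explicit case-B formulas can be cross-checked numerically. For $\chi_{q,\mathcal{S}}(u)=C_q\,p\,e^{-pu}$, the identity $\widehat{F}_{I_{e_q}}(r)=\frac{1}{c}\widehat{\overline{\chi}}_{q,\mathcal{S}}(r)\big/\big(1-\frac{1}{c}\widehat{\chi}_{q,\mathcal{S}}(r)\big)$ from the proof of Proposition 2 reduces to an explicit exponential tail. The sketch below (with arbitrary test values of $C_q$, $p$, $c$ satisfying $0<C_q<c$, not computed from any concrete risk model) verifies the two closed forms agree and checks the inversion by quadrature:

```python
import numpy as np

# chi(u) = Cq*p*exp(-p u): density transform chi_hat(r) = Cq*p/(p+r);
# tail chibar(u) = Cq*exp(-p u): tail transform chibar_hat(r) = Cq/(p+r).
# Case-B identity: F_hat(r) = (1/c) chibar_hat(r) / (1 - (1/c) chi_hat(r)),
# which simplifies to (Cq/c) / (p(c-Cq)/c + r), i.e. the transform of
# F(-x) = (Cq/c) exp(-p(c-Cq)x/c).  Cq, p, c are arbitrary test values.
Cq, p, c = 0.6, 2.0, 1.5

def F_hat_series(r):
    return (Cq / (p + r) / c) / (1.0 - Cq * p / ((p + r) * c))

def F_hat_closed(r):
    return (Cq / c) / (p * (c - Cq) / c + r)

r = np.linspace(0.1, 10.0, 50)

# Quadrature check of the inversion at r = 1.
x = np.linspace(0.0, 200.0, 1_000_000)
F_tail = (Cq / c) * np.exp(-p * (c - Cq) * x / c)
numeric = np.sum(0.5 * ((np.exp(-x) * F_tail)[1:] + (np.exp(-x) * F_tail)[:-1]) * np.diff(x))
```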
2. Now let us suppose that $\overline{F}_2(x)=\left( \frac{\theta}{\theta+x^c}\right)^{\xi}$
for $\xi, c,\theta>0$. This corresponds to a classical two-sided
jumps risk process with claims given by a Burr distribution with parameters $\xi$, $\theta$ and $c$. Then
$$\chi_{q,\mathcal{S}}(u)=\lambda_2\sum\limits_{j=1}^{m+1}E(j,0,q)\int_u^\infty e^{-\beta_j(q)(y-u)} \frac{\xi c\theta^\xi y^{c-1}}{(\theta+y^c)^{1+\xi}}dy,$$
and applying L'H\^opital's rule twice we get
\begin{align}
\lim\limits_{u\to\infty}\frac{\overline{\chi}_{q,\mathcal{S}}(u)}{\left(\frac{\theta}{\theta+u^c}
\right)^{\xi}}&=\lambda_2\sum\limits_{j=1}^{m+1}E(j,0,q)\lim\limits_{u\to\infty}
\frac{\int_u^\infty e^{\beta_j(q)x}\int_x^\infty
e^{-\beta_j(q)y} \frac{\xi c\theta^\xi y^{c-1}}{(\theta+y^c)^{1+\xi}}dy\,dx}{\left(\frac{\theta}
{\theta+u^c}\right)^{\xi}}
=\lambda_2\sum\limits_{j=1}^{m+1}E(j,0,q)\frac{1}{\beta_j(q)}.\nonumber
\end{align}
Using the second equality in Lemma 5.2 in \cite{kolkovskamartin2}, it follows that
\begin{equation*}
\lim\limits_{u\to\infty}\frac{\overline{\chi}_{q,\mathcal{S}}(u)}
{\left(\frac{\theta}{\theta+u^c}\right)^{\xi}}=\lambda_2\prod_{i=1}^N\alpha_i^{n_i}\left(\prod_{j=1}^{m+1}\beta_j(q)\right)^{-1}=
\frac{\lambda_2a(q)}{q},
\end{equation*}
which implies that $1-\frac{\overline{\chi}_{q,\mathcal{S}}(u)}{\widehat{\chi}_{q,\mathcal{S}}(0)}$ is a subexponential distribution. Therefore,
due to Proposition 2 we obtain
$$F_{I_{e_q} }(-u)\approx \frac{\lambda_2}{q}\left(\frac{\theta}{\theta+u^c}\right)^{\xi}\quad\mbox{as}\quad u\to\infty.$$
\textbf{Example 3 (case C)}. Let us suppose that $\mathcal{S}$ is a spectrally positive $\xi$-stable process,
with $\mathbb{E}\left[\mathcal{S}(1)\right]=0$ and $\xi \in (1,2)$. In this case
$-\Psi_\mathcal{S}(r)=r^\xi$ and
$$\chi_{q,\mathcal{S}}(x)=\frac{1}{\xi} \left( x^{-\xi}-\sum_{j=1}^{m+1}\beta_j(q)E(j,0,q)\int_x^\infty e^{-\beta_j(q)(y-x)}y^{-\xi}dy\right),$$
where we have used that $E_*(j,0,q)=\beta_j(q)E(j,0,q)$.
Notice that the function $\mathcal{L}_{\xi}(x)=\sum_{j=1}^{m+1}\beta_j(q)E(j,0,q)\int_x^\infty e^{-\beta_j(q)(y-x)}y^{-\xi}dy$
coincides with the function $F_\xi$ defined in \cite{kolkovskamartin3}; hence from Proposition 1 in the aforementioned work we obtain
$$\int_x^\infty\mathcal{L}_{\xi}(y)dy\approx \left(1-\prod_{i=1}^N\alpha_i^{n_i}\left(\prod_{j=1}^{m+1}\beta_j(q)\right)^{-1}\right)\frac{x^{1-\xi}}{\Gamma(2-\xi)}\quad\mbox{as}\quad x\to\infty.$$
Since $\prod_{i=1}^N\alpha_i^{n_i}\left(\prod_{j=1}^{m+1}\beta_j(q)\right)^{-1}=\frac{a(q)}{q}$, it follows that
$\overline{\chi}_{q,\mathcal{S}}(x)\approx C_{\xi} \frac{x^{1-\xi}}{\Gamma(2-\xi)}$ as $x\to\infty,$
where $C_{\xi}=\frac{1}{\xi}\left(\frac{\Gamma(2-\xi)}{\xi-1}-1+\frac{a(q)}{q}\right)$.
Therefore, from Proposition 2 we obtain
$F_{I_{e_q} }(-u)\approx\frac{C_{\xi}}{a(q)\Gamma(2-\xi)}u^{1-{\xi}}$
as $u\to\infty$.
\section{Proof of Lemma \ref{lemmanegativeWHfactor}}\label{proofs}\setcounter{equation}{0}
This section is devoted to the proof of Lemma \ref{lemmanegativeWHfactor}. It requires some preliminary results which we state first.
Let $x_1, x_2,\dots,x_k$ be distinct complex numbers. For $m\in \{1,2,\ldots\}$ let us denote
$(x_i)_{m}= \underbrace{x_i,x_i,\dots,x_i}_{m \ \text{times}},$ $i=1,\ldots,k$.
The following result follows from standard tools in interpolation theory.
\begin{lemma}\label{diferenciasdivididaslabbe}
Let $f:\mathbb{C}\to\mathbb{C}$ be an analytic function and define $g:\mathbb{C}^m\to\mathbb{C}$ by
\begin{equation*}
g[x_1,\dots,x_m]=(-1)^{m-1}\sum\limits_{j=1}^m\frac{f(x_j)}{\prod\limits_{l\neq j}(x_l-x_j)},
\end{equation*} where $x_i \neq x_j$ for $i \neq j.$
Let $n_1$, $n_2,\ldots,n_k$ be given natural numbers, and $m=\sum_{j=1}^kn_j$.
Then the function $g$ can be extended analytically for multiple points by the expression
\begin{align}
g\left[ (x_1)_{n_1},\dots,(x_k)_{n_k}\right]&=(-1)^{m-1}\sum\limits_{j=1}^k\frac{(-1)^{m-n_j}}{(n_j-1)!}\frac{\partial ^{n_j-1}}{\partial s^{n_j-1}}\left[\frac{f(s)(x_j-s)^{n_j}}{\prod\limits_{l=1}^k(x_l-s)^{n_l}}\right]_{s=x_j}.\nonumber
\end{align}
\end{lemma}
We also need the following two lemmas. The first one is
a well-known formula in interpolation theory, while the second one is part of the proof of Proposition 5.4 in \cite{kolkovskamartin2}.
\begin{lemma}Let $m\ge1$ and let $a_1,\ldots, a_m$ be distinct numbers. Then
\begin{equation}\label{formulainterpolacion}
\sum\limits_{j=1}^m\frac{(a_j-s)^k}{\prod\limits_{l=1,l\neq j}^m(a_l-a_j)}=\left\{
\begin{array}{ll}
(-1)^{m-1},&k=m-1,\\
0,&k=0,1,\dots,m-2,\\
\frac{1}{\prod\limits_{j=1}^m(a_j-s)},&k=-1.
\end{array}
\right.
\end{equation}
\end{lemma}
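The identity (\ref{formulainterpolacion}) is easy to verify numerically. The sketch below does so for $m=4$ with arbitrarily chosen distinct nodes (the node values and the point $s$ are illustrative, not from the paper):

```python
import numpy as np

# Check sum_j (a_j - s)^k / prod_{l != j}(a_l - a_j) for m = 4 distinct nodes:
# it equals (-1)^{m-1} for k = m-1, vanishes for k = 0,...,m-2, and equals
# 1 / prod_j (a_j - s) for k = -1.  Node values are arbitrary test choices.
a = np.array([0.7, 1.3, 2.9, 4.1])
s = 0.37
m = len(a)

def interp_sum(k):
    total = 0.0
    for j in range(m):
        denom = np.prod([a[l] - a[j] for l in range(m) if l != j])
        total += (a[j] - s) ** k / denom
    return total
```

The three cases of (\ref{formulainterpolacion}) then follow by checking `interp_sum(k)` for $k=m-1$, $k=0,\dots,m-2$, and $k=-1$.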
In what follows we set $P_1(r)=\prod_{j=1}^N(\alpha_j-r)^{n_j}$.
\begin{lemma}\label{lemaJ0}
Let $r,r_1,r_2,\dots,r_{m+1}$ be $m+2$ distinct complex numbers and define
$$J_0(r)=\lambda_1\sum\limits_{j=1}^{m+1}P_1(r_j)\frac{1}{
\prod\limits_{k=1,k\neq j}^{m+1}(r_k-r_j)}
\frac{\frac{Q(-r)}{\prod_{l=1}^N(\alpha_l-r)^{n_l}}-\frac{Q(-r_j)}{\prod_{l=1}^N(\alpha_l-r_j)^{n_l}}}
{r_j-r}.$$ Then $J_0(r)=0$ for all $r\in \mathbb{C}$.
\end{lemma}
The following result is used in case C.
\begin{lemma}\label{lemmatranslationtailminuslaplaceexponent}
Let $\nu_\mathcal{S}$ be the L\'evy measure of a spectrally positive pure jump
L\'evy process such that
\begin{equation}\label{assumptionnuS}
\int_{0+}^\infty \left( x^2\wedge x\right)\nu_\mathcal{S}(dx)<\infty.
\end{equation}
Then
\begin{equation}\label{igualdadcolanu}
-\Psi_\mathcal{S}(r)=r\int_{0+}^\infty\left(1-e^{-rx}\right)\overline{\mathcal{V}}_\mathcal{S}(x)\,dx.
\end{equation}
Moreover, for any $r_1,r_2\in\mathbb{C}_+$ such that $r_1\neq r_2$ and $\Psi_\mathcal{S}$ exists, we have
\begin{equation}\label{igualdadestraslacion}
\frac{\Psi_\mathcal{S}(r_1)-\Psi_\mathcal{S}(r_2)}{r_2-r_1}=r_2\widehat{T}_{r_2}\overline{\mathcal{V}}_\mathcal{S}(r_1)-\frac{\Psi_\mathcal{S}(r_1)}{r_1}=r_1\widehat{T}_{r_2}\overline{\mathcal{V}}_\mathcal{S}(r_1)-\frac{\Psi_\mathcal{S}(r_2)}{r_2}.
\end{equation}
\end{lemma}
\begin{proof}
From assumption (\ref{assumptionnuS}) we can restrict to $h\equiv1$, and by applying Fubini's theorem we get (\ref{igualdadcolanu}). On
the other hand, we have:
\begin{align}
\frac{\Psi_\mathcal{S}(r_1)-\Psi_\mathcal{S}(r_2)}{r_2-r_1}
&=\frac{r_1\frac{\Psi_\mathcal{S}(r_1)}{r_1}-r_2\frac{\Psi_\mathcal{S}(r_2)}{r_2}}{r_2-r_1}=\frac{r_1-r_2}{r_2-r_1}\frac{\Psi_\mathcal{S}(r_1)}{r_1}+r_2\frac{\frac{\Psi_\mathcal{S}(r_1)}{r_1}-\frac{\Psi_\mathcal{S}(r_2)}{r_2}}{r_2-r_1}\nonumber \\
&=r_2\frac{\frac{\Psi_\mathcal{S}(r_1)}{r_1}-\frac{\Psi_\mathcal{S}(r_2)}{r_2}}{r_2-r_1}-\frac{\Psi_\mathcal{S}(r_1)}{r_1}.\label{igualdadPsiS}
\end{align}
Similarly,
\begin{align}
\frac{\Psi_\mathcal{S}(r_1)-\Psi_\mathcal{S}(r_2)}{r_2-r_1}&=r_1\frac{\frac{\Psi_\mathcal{S}(r_1)}{r_1}-\frac{\Psi_\mathcal{S}(r_2)}{r_2}}{r_2-r_1}-\frac{\Psi_\mathcal{S}(r_2)}{r_2},\label{igualdadPsiS2}
\end{align}
and
\begin{align}
\frac{\Psi_\mathcal{S}(r_1)}{r_1}-\frac{\Psi_\mathcal{S}(r_2)}{r_2}&=\int_{0+}^\infty\left[\frac{1-e^{-r_1x}-r_1x}{r_1}-\frac{1-e^{-r_2x}-r_2x}{r_2}\right]\nu_\mathcal{S}(dx)\nonumber\\
&=\int _{0+}^\infty\int _0^x \left[ e^{-r_1y}-1-\left( e^{-r_2y}-1\right)\right] dy\nu_\mathcal{S}(dx)=\int _{0+}^\infty\int _y^\infty \nu_\mathcal{S}(dx)\left[ e^{-r_1y}-e^{-r_2y}\right] dy\nonumber\\
&=\widehat{\overline{\mathcal{V}}}_\mathcal{S}(r_1)-\widehat{\overline{\mathcal{V}}}_\mathcal{S}(r_2),\nonumber
\end{align}
where the third equality follows from Fubini's theorem.
Substituting the last equality in
(\ref{igualdadPsiS}) and (\ref{igualdadPsiS2}), and using (\ref{laplaceoperatorT}), we obtain the result.
\end{proof}
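For a concrete illustration of (\ref{igualdadcolanu}), take $\nu_\mathcal{S}(dx)=e^{-x}dx$ (our choice, for illustration only), under the convention $-\Psi_\mathcal{S}(r)=\int_{0+}^\infty(e^{-rx}-1+rx)\,\nu_\mathcal{S}(dx)$; both sides then equal $r^2/(1+r)$:

```python
import numpy as np

# nu_S(dx) = e^{-x} dx, so its tail is nubar(x) = e^{-x}.  Under the convention
# -Psi_S(r) = int (e^{-rx} - 1 + rx) nu_S(dx), both sides of the identity
#   -Psi_S(r) = r * int (1 - e^{-rx}) nubar(x) dx
# equal r^2/(1+r).  The choice of nu_S is ours, for illustration only.
def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

x = np.linspace(1e-9, 60.0, 1_500_000)
r = 1.7
lhs = trapezoid((np.exp(-r * x) - 1.0 + r * x) * np.exp(-x), x)
rhs = r * trapezoid((1.0 - np.exp(-r * x)) * np.exp(-x), x)
exact = r * r / (1.0 + r)
```

This is exactly the Fubini interchange used in the proof, specialized to an exponential Lévy measure.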
Now we are ready to prove Lemma \ref{lemmanegativeWHfactor}.
\begin{proofoflemanegativeWHfactor}
We deal only with the case $q>0$; the case $q=0$
follows by letting $q\downarrow0$, using that the limit $\lim\limits_{q\downarrow0}\beta_j(q)$ exists and that
$ \lim\limits_{q\downarrow0}\frac{q}{\beta_1(q)}=\mathbb{E}[\mathcal{X}(1)]$.
The proof of the lemma is simpler when the roots $\beta_j(q)$ of the generalized Lundberg equation are simple (see equality (6.11) below). In the case of multiple roots we approximate the Lundberg equation by a Lundberg equation depending on a parameter $\varepsilon$ which has simple roots, chosen so that, as $\varepsilon \to 0$, the roots of the approximating equation converge to the multiple roots of the given one. At the end of the proof we let $\varepsilon \to 0$ to obtain Lemma \ref{lemmanegativeWHfactor} in the case of multiple roots $\beta_j(q)$.
First we obtain the result for the case C. Recall that $\beta_j(q)$ are the roots of the
generalized Lundberg function of $\mathcal{X}$. Let $\varepsilon\in\left( 0,E\right)$,
where
$$E=\min\left\{\left|\mathrm{Re}\left(\beta_i(q)\right)-\mathrm{Re}\left(\beta_j(q)\right)\right|: \mathrm{Re}\left(\beta_i(q)\right)-\mathrm{Re}\left(\beta_j(q)\right)\neq0 \right\},$$
and define the complex numbers
\begin{equation}\label{rhoasterisco}
\begin{array}{l}
\beta_1(q)^*=\beta_1(q),\ 
\beta_2(q)^*=\beta_1(q) + \frac{1}{m+1}\varepsilon,
\dots, \beta_{k_1}(q)^*=\beta_1(q)+\frac{k_1-1}{m+1}\varepsilon,\\
\beta_{k_1+1}(q)^*=\beta_2(q),\dots, \beta_{k_1+k_2}(q)^*=\beta_2(q)+\frac{k_2-1}{m+1}\varepsilon,\\
\dots,\\
\beta_{k_1+\dots+k_{R-1}+1}(q)^*=\beta_R(q),\dots, \beta_{m+1}(q)^*=\beta_R(q)+\frac{k_R-1}{m+1}\varepsilon,
\end{array}
\end{equation}
where we have omitted the dependence on $\varepsilon$ for simplicity. It follows that
$\lim\limits_{\varepsilon\rightarrow 0}\beta_{l_j+a_j}(q)^*=\beta_j(q),\ j=1,2,\dots,R,$
for $l_1=0,\ l_2=k_1,\ \dots,\ l_R=k_1+\dots+k_{R-1}$ and $a_j=1,2,\dots,k_j$.
This gives $m+1$ distinct numbers $\beta_1^*,\dots,\beta_{m+1}^*$
such that, as $\varepsilon\downarrow0$, the first $k_1$ numbers converge to $\beta_1(q)$,
the next $k_2$ numbers converge to $\beta_2(q)$, and so on.
From the definition of $\Psi_\mathcal{X}$,
\begin{align}
\Psi_\mathcal{X}\left[ \beta_j(q)^* \right]-q&=-\Psi_\mathcal{S}\left[ \beta_j(q)^* \right]+\lambda_1\frac{Q\left( -\beta_j(q)^*\right)}{\prod_{l=1}^N\left( \alpha_l-\beta_j(q)^*\right)^{n_l}} +c\beta_j(q)^* +\gamma^2\left[\beta_j(q)^* \right]^2-q-\lambda_1. \nonumber
\end{align}
Therefore, for each $j=1,2,\dots,m+1$ we obtain
\begin{align}
\lambda_1+q&=-\Psi_\mathcal{S}\left[ \beta_j(q)^* \right]+\lambda_1\frac{Q\left( -\beta_j(q)^*\right)}{\prod_{l=1}^N\left( \alpha_l-\beta_j(q)^*\right)^{n_l}}+c\beta_j(q)^* +\gamma^2\left[\beta_j(q)^* \right]^2-\left( \Psi_\mathcal{X}\left[ \beta_j(q)^* \right]-q\right),\nonumber
\end{align}
which yields
\begin{align}
\Psi_\mathcal{X}(r)-q&=-\Psi_\mathcal{S}(r)+\lambda_1\frac{Q\left( -r\right)}{\prod_{l=1}^N\left( \alpha_l-r\right)^{n_l}}+cr+\gamma^2 r^2+ \Psi_\mathcal{S}\left[ \beta_j(q)^* \right]-\lambda_1\frac{Q\left( -\beta_j(q)^*\right)}{\prod_{l=1}^N\left( \alpha_l-\beta_j(q)^*\right)^{n_l}}\nonumber \\
&-c\beta_j(q)^* -\gamma^2\left[\beta_j(q)^* \right]^2+ \Psi_\mathcal{X}\left[ \beta_j(q)^* \right]-q. \label{laplaceconrhoepsilon}
\end{align}
Since $P_1$ is a polynomial of
degree $m$, using Lagrange interpolation we obtain the equivalent representation
$$P_1(r)=\sum\limits_{l=1}^{m+1}\frac{\prod\limits_{j=1}^N\left[ \alpha_j-\beta_l(q)^* \right]^{n_j}}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\prod\limits_{j\neq l}\left[\beta_j(q)^* -r\right].$$
This and (\ref{laplaceconrhoepsilon}) give
\begin{align}
\left[ \Psi_\mathcal{X}(r)-q\right] P_1(r)
&=\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\prod\limits_{j\neq l}\left[\beta_j(q)^* -r\right]\left\{-(\beta_l(q)^* -r)\frac{\Psi_\mathcal{S}(r)-\Psi_\mathcal{S}(\beta_l(q)^* )}{\beta_l(q)^* -r}\right.\nonumber \\
&+\left.\phantom{\int_0^{\int}}(\beta_l(q)^* -r)\left(\lambda_1 J_0^*-c-\gamma^2(\beta_l(q)^* +r)\right)+ \Psi_\mathcal{X}(\beta_l(q)^* )-q\right\}\nonumber \\
&=\prod\limits_{j=1}^{m+1}\left[\beta_j(q)^* -r\right]\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\left\{-\frac{\Psi_\mathcal{S}(r)-\Psi_\mathcal{S}(\beta_l(q)^* )}{\beta_l(q)^* -r}\right.\nonumber \\
&\left.+\lambda_1 J_0^*-c-\gamma^2(\beta_l(q)^* +r)+\frac{ \Psi_\mathcal{X}(\beta_l(q)^* )-q}{\beta_l(q)^* -r}\right\},\label{igualdadconinterpolaciondelagrange}
\end{align}
where $J_0^*=\frac{\frac{Q(-r)}{\prod\limits_{j=1}^N(\alpha_j-r)^{n_j}}-\frac{Q(-\beta_l(q)^*)}{\prod\limits_{j=1}^N(\alpha_j-\beta_l(q)^*)^{n_j}}}{\beta_l(q)^* -r}$.
Formula (\ref{formulainterpolacion}) and Lemma \ref{lemaJ0} imply, respectively:
\begin{equation}\label{suma1}
\begin{array}{ccc}
\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}=1&\text{ and }&\lambda_1\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}J_0^*=0.
\end{array}
\end{equation}
Substituting these two equalities
in (\ref{igualdadconinterpolaciondelagrange}), using the first equality in (\ref{igualdadestraslacion})
to calculate $\frac{\Psi_\mathcal{S}(r)-\Psi_\mathcal{S}(\beta_j(q)^* )}{\beta_j(q)^* -r}$
and dividing by $\prod_{j=1}^{m+1}\left[\beta_j(q)^* -r\right]$, we obtain
\begin{align}
&\left[ \Psi_\mathcal{X}(r)-q\right] \frac{P_1(r)}{\prod_{j=1}^{m+1}\left[\beta_j(q)^* -r\right]}\nonumber \\
&=\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\Bigg\{-\beta_l(q)^* \widehat{T}_{\beta_l(q)^* }\overline{\mathcal{V}}_\mathcal{S}(r)+\frac{\Psi_\mathcal{S}(r)}{r}-c
-\gamma^2\left[ r+\beta_l(q)^* \right]
+\frac{ \Psi_\mathcal{X}\left[\beta_l(q)^* \right]-q}{\beta_l(q)^* -r}\Bigg\}\nonumber \\
&=\frac{\Psi_\mathcal{S}(r)}{r}-c-\gamma^2r-
\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\Bigg\{\beta_l(q)^* \widehat{T}_{\beta_l(q)^* }\overline{\mathcal{V}}_\mathcal{S}(r)+\gamma^2\beta_l(q)^*
-\frac{ \Psi_\mathcal{X}\left[\beta_l(q)^* \right]-q}{\beta_l(q)^* -r} \Bigg\}, \label{denominadorcasoC0}
\end{align}
where the last equality follows from (\ref{suma1}).
Using Lemma \ref{lemmakyprianou} and Theorem 2.2 in \cite{lewismordecki} we obtain $\lim\limits_{\varepsilon\downarrow0}\frac{P_1(r)}{\prod_{j=1}^{m+1}\left[\beta_j(q)^* -r\right]}=\left(\kappa(q,-r)\right)^{-1}$. Hence we let $\varepsilon\downarrow0$ in both sides of
(\ref{denominadorcasoC0}) and apply Lemma \ref{diferenciasdivididaslabbe}. This yields:
\begin{align}
&\left[ \Psi_\mathcal{X}(r)-q\right] \left(\kappa(q,-r)\right)^{-1}\nonumber \\
&=\frac{\Psi_\mathcal{S}(r)}{r}-c-\gamma^2r-\sum\limits_{l=1}^R\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1}}{\partial s^{k_l-1}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}} s\widehat{T}_s\overline{\mathcal{V}}_\mathcal{S}(r)\right]_{s=\beta_l(q)}\nonumber \\
&-\gamma^2\sum\limits_{l=1}^R\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1}}{\partial s^{k_l-1}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}} s\right]_{s=\beta_l(q)}\nonumber \\
&+\sum\limits_{l=1}^R\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1}}{\partial s^{k_l-1}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}}\frac{ \Psi_\mathcal{X}(s)-q}{s-r}\right]_{s=\beta_l(q)}.\nonumber
\end{align}
Since, for $j=1,2,\dots,R$, $\beta_j(q)$ are roots of
$ \Psi_\mathcal{X}(s)-q=0$ in $\mathbb{C}_{++}$
with respective multiplicities $k_j,$ it follows from the Leibniz rule
that
$$\sum\limits_{l=1}^R\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1}}{\partial s^{k_l-1}}\left[\frac{P_1(s)(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}}\frac{ \Psi_\mathcal{X}(s)-q}{s-r}\right]_{s=\beta_l(q)}=0.$$
Hence, substituting this in the equality above and setting $D_q=\sum\limits_{l=1}^R\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1}}{\partial s^{k_l-1}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}} s\right]_{s=\beta_l(q)}$,
we obtain:
\begin{align}
&\left[ q- \Psi_\mathcal{X}(r)\right] \left(\kappa(q,-r)\right)^{-1}\nonumber \\
&=c+\gamma^2D_q+\gamma^2r-\frac{\Psi_\mathcal{S}(r)}{r}
+\sum\limits_{l=1}^R\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1}}{\partial s^{k_l-1}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}} s\widehat{T}_s\overline{\mathcal{V}}_\mathcal{S}(r)\right]_{s=\beta_l(q)}.\label{denominadorcasoC}
\end{align}
Using the Leibniz rule and Lemma \ref{lemaderivadaseintegrales}
we get
\begin{align}
&\sum\limits_{l=1}^R\sum\limits_{a=0}^{k_l-1}\binom{k_l-1}{a}\frac{(-1)^{1-k_l}}{(k_l-1)!}\frac{\partial^{k_l-1-a}}{\partial s^{k_l-1-a}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}}s\right]_{s=\beta_l(q)}\frac{\partial^a}{\partial s^a} \left[\widehat{T}_s\overline{\mathcal{V}}_\mathcal{S}(r)\right]_{s=\beta_l(q)}\nonumber \\
&=\sum\limits_{l=1}^R\sum\limits_{a=0}^{k_l-1}E_*(l,a,q)\widehat{\mathcal{T}}_{\beta_l(q),a}\overline{\mathcal{V}}_\mathcal{S}(r)
=\widehat{\mathcal{L}}_q(r),\nonumber
\end{align}
hence from (\ref{denominadorcasoC}) we obtain
$$\left[ q- \Psi_\mathcal{X}(r)\right]\left(\kappa(q,-r)\right)^{-1}=c+\gamma^2D_q+\gamma^2r-\frac{\Psi_\mathcal{S}(r)}{r}+\widehat{\mathcal{L}}_q(r).$$
Since by L'H\^opital's rule
$\lim\limits_{r\downarrow0}\frac{\Psi_\mathcal{S}(r)}{r}=0$, it
follows that
$a(q)=c+\gamma^2D_q+\widehat{\mathcal{L}}_q(0)$. Hence
\begin{align}
\left[ q- \Psi_\mathcal{X}(r)\right]\left(\kappa(q,-r)\right)^{-1}
&=a(q)-\widehat{\mathcal{L}}_q(0)+\gamma^2r-\frac{\Psi_\mathcal{S}(r)}{r}+\widehat{\mathcal{L}}_q(r)&\nonumber \\
&=a(q)+\gamma^2r-\frac{\Psi_\mathcal{S}(r)}{r}-\left( \widehat{\mathcal{L}}_q(0)-\widehat{\mathcal{L}}_q(r)\right),\nonumber
\end{align}
and we obtain the result for case C.
To obtain the result for case B, we use the same notations as above for the $m+1$ roots of the generalized Lundberg equation, and use (\ref{igualdadconinterpolaciondelagrange})
with
$\Psi_\mathcal{S}(r)$ replaced by $G_\mathcal{S}(r)$ and setting $\gamma=0$.
This gives:
\begin{align}
&\left[ \Psi_\mathcal{X}(r)-q\right] P_1(r)\nonumber \\
&=\prod\limits_{j=1}^{m+1}\left[\beta_j(q)^* -r\right]\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\left\{-\frac{G_\mathcal{S}(r)-G_\mathcal{S}(\beta_l(q)^* )}{\beta_l(q)^* -r}+\lambda_1 J_0^*-c+\frac{ \Psi_\mathcal{X}(\beta_l(q)^* )-q}{\beta_l(q)^* -r}\right\}\nonumber \\
&=-c+\prod\limits_{j=1}^{m+1}\left[\beta_j(q)^* -r\right]\sum\limits_{l=1}^{m+1}\frac{P_1\left[\beta_l(q)^* \right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^* -\beta_l(q)^* \right]}\left\{\widehat{T}_{\beta_l(q)^* }\nu_\mathcal{S}(r)+\frac{ \Psi_\mathcal{X}(\beta_l(q)^* )-q}{\beta_l(q)^* -r}\right\},\nonumber
\end{align}
where in the second equality we used (\ref{suma1}) and the fact that
$\frac{G_\mathcal{S}(r)-G_\mathcal{S}(s)}{s-r}=-\mathcal{W}idehat{T}_s\nu_\mathcal{S}(r),$
which follows from the second equality in (\ref{laplaceoperatorT}).
Proceeding as in case C, we obtain
\begin{align}
&\left[ q- \Psi_\mathcal{X}(r)\right] \left(\kappa(q,-r)\right)^{-1}\nonumber\\
&=c-\sum\limits_{l=1}^R\sum\limits_{a=0}^{k_l-1}\binom{k_l-1}{a}\frac{(-1)^{1-k_l+a}}{(k_l-1)!}\frac{\partial^{k_l-1-a}}{\partial s^{k_l-1-a}}\left[\frac{\prod\limits_{j=1}^N(\alpha_j-s)^{n_j}(\beta_l(q)-s)^{k_l}}{\prod\limits_{j=1}^R(\beta_j(q)-s)^{k_j}}\right]_{s=\beta_l(q)}\widehat{\mathcal{T}}_{\beta_l(q),a}\nu_\mathcal{S}(r)\nonumber \\
&=c-\widehat{\ell}_q(r).\nonumber
\end{align}
Now we set $r=0$ in the above equality to obtain
\begin{equation}\label{igualdadadelta}
c=a(q)+\widehat{\ell}_q(0).
\end{equation}
This gives the result for case B.
In case A we have
$\Psi_{\mathcal{X}}(r)=\lambda_1\left(\widehat{f}_1(-r)-1\right)-G_\mathcal{S}(r)$. For now we assume that $\mathcal{S}$ is not a compound Poisson process.
In this case we know from Lemma \ref{lemmarootsofGLEgeneralcase} that the equation
$\Psi_{\mathcal{X}}(r)-q=0$ has only $m$ roots in $\mathbb{C}_{++}$, denoted as before, with respective multiplicities $k_1\equiv1,k_2,\dots,k_R$, where $\sum_{j=1}^R k_j=m$. Now we consider the numbers $\beta_1(q)^*,\dots,\beta_m(q)^*$ as
defined in (\ref{rhoasterisco}) with $\beta_{m+1}(q)^*$ replaced by $\beta_\infty(n)=\sqrt{n}$.
We also take $c_n=\frac{1}{n}$.
In this case, the function $\Psi_\mathcal{X}(r)+c_nr$ is the generalized Lundberg function of some
L\'evy process of the form (\ref{Xalpha}) with $\gamma=0$ and drift term $c_n$. Hence we
can use (\ref{igualdadconinterpolaciondelagrange}) with $\gamma=0$, $G_\mathcal{S}$ instead of
$\Psi_\mathcal{S}$ and $c_n$ instead of $c$. This gives
\begin{align}
&\left[ \Psi_\mathcal{X}(r)+c_nr-q\right] P_1(r)\nonumber \\
&=(\beta_\infty(n)-r)\prod\limits_{k=1}^m\left[\beta_k(q)^*-r\right]\sum\limits_{l=1}^m\frac{P_1\left[\beta_l(q)^*\right]}{(\beta_\infty(n)-\beta_l(q)^*)\prod\limits_{j\neq l}\left[\beta_j(q)^*-\beta_l(q)^*\right]}
\left\{-\frac{G_\mathcal{S}(r)-G_\mathcal{S}(\beta_l(q)^*)}
{\beta_l(q)^*-r}\right.\nonumber \\
&\left.+\lambda_1 \frac{\frac{Q(-r)}{\prod_{a=1}^N(\alpha_a-r)^{n_a}}-
\frac{Q(-\beta_l(q)^*)}
{\prod_{a=1}^N(\alpha_a-\beta_l(q)^*)^{n_a}}}{\beta_l(q)^*-r}
-c_n+\frac{\Psi_\mathcal{X}(\beta_l(q)^*)-q}{\beta_l(q)^*-r}
\right\}\nonumber \\
&+(\beta_\infty(n)-r)\prod\limits_{k=1}^m\left[\beta_k(q)^*-r\right]\frac{P_1\left[\beta_\infty(n)\right]}{\prod\limits_{j=1}^m\left[\beta_j(q)^*-\beta_\infty(n)\right]}\Bigg\{-\frac{G_\mathcal{S}(r)-G_\mathcal{S}(\beta_\infty(n))}{\beta_\infty(n)-r}\nonumber \\
&+\lambda_1 \frac{\frac{Q(-r)}{\prod_{a=1}^N(\alpha_a-r)^{n_a}}-\frac{Q(-\beta_\infty(n))}{\prod_{a=1}^N(\alpha_a-\beta_\infty(n))^{n_a}}}{\beta_\infty(n)-r}
-c_n+\frac{\Psi_\mathcal{X}(\beta_\infty(n))-q}{\beta_\infty(n)-r}
\Bigg\}.\nonumber
\end{align}
Again, proceeding as in case C it follows that
\begin{align}
&\left[ \Psi_\mathcal{X}(r)+c_nr-q\right] \frac{P_1(r)}{\prod_{j=1}^m\left[\beta_j(q)^*-r\right]}
=-c_n(\beta_\infty(n)-r)\nonumber \\
&+\sum\limits_{l=1}^m\frac{\beta_\infty(n)-r}{\beta_\infty(n)-\beta_l(q)^*}\frac{P_1\left[\beta_l(q)^*\right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^*-\beta_l(q)^*\right]}\Bigg\{\widehat{T}_{\beta_l(q)^*}\nu_\mathcal{S}(r)+\frac{\Psi_\mathcal{X}\left[\beta_l(q)^*\right]-q}{\beta_l(q)^*-r}\Bigg\}\nonumber \\
&+\frac{P_1\left[\beta_\infty(n)\right]}{\prod\limits_{j=1}^m\left[\beta_j(q)^*-\beta_\infty(n)\right]}\left\{\left(\beta_\infty(n)-r\right)\widehat{T}_{\beta_\infty(n)}\nu_\mathcal{S}(r)+\left( \Psi_\mathcal{X}\left[\beta_\infty(n)\right]-q\right)\right\}\nonumber\\
&=-c_n(\beta_\infty(n)-r)+\sum\limits_{l=1}^m\frac{\beta_\infty(n)-r}{\beta_\infty(n)-\beta_l(q)^*}\frac{P_1\left[\beta_l(q)^*\right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^*-\beta_l(q)^*\right]}\Bigg\{\widehat{T}_{\beta_l(q)^*}\nu_\mathcal{S}(r)
\nonumber \\
&+\frac{\Psi_\mathcal{X}\left[\beta_l(q)^*\right]-q}{\beta_l(q)^*-r}\Bigg\}
+\frac{P_1\left[\beta_\infty(n)\right]}{\prod\limits_{j=1}^m\left[\beta_j(q)^*-\beta_\infty(n)\right]}\Bigg\{-G_\mathcal{S}(r)+G_\mathcal{S}(\beta_\infty(n))
\nonumber \\
&+\left( \Psi_\mathcal{X}\left[\beta_\infty(n)\right]-q\right)\Bigg\},\nonumber
\end{align}
where in the second equality we have used that $(\beta_\infty(n)-r)\widehat{T}_{\beta_\infty(n)}\nu_\mathcal{S}(r)=(\beta_\infty(n)-r)\left(\frac{-G_\mathcal{S}(r)+G_\mathcal{S}(\beta_\infty(n))}{\beta_\infty(n)-r}\right)$.
Now we substitute $\Psi_\mathcal{X}(\beta_\infty(n))=\lambda_1\left(\frac{Q(-\beta_\infty(n))}{\prod_{a=1}^N(\alpha_a-\beta_\infty(n))^{n_a}}-1\right)+c_n\beta_\infty(n)-G_\mathcal{S}(\beta_\infty(n))$ in the above equality and rearrange terms to obtain
\begin{align}&\left[ \Psi_\mathcal{X}(r)+c_nr-q\right] \frac{P_1(r)}{\prod_{j=1}^m\left[\beta_j(q)^*-r\right]}\nonumber \\
&=-c_n(\beta_\infty(n)-r)+\sum\limits_{l=1}^m\frac{\beta_\infty(n)-r}{\beta_\infty(n)-\beta_l(q)^*}\frac{P_1\left[\beta_l(q)^*\right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^*-\beta_l(q)^*\right]}\Bigg\{\widehat{T}_{\beta_l(q)^*}\nu_\mathcal{S}(r) \nonumber \\
&+\frac{\Psi_\mathcal{X}\left[\beta_l(q)^*\right]-q}{\beta_l(q)^*-r}\Bigg\}-\frac{P_1\left[\beta_\infty(n)\right]}{\prod\limits_{j=1}^m\left[\beta_j(q)^*-\beta_\infty(n)\right]}G_\mathcal{S}(r)\nonumber \\
&+\frac{P_1\left[\beta_\infty(n)\right]}{\prod\limits_{j=1}^m\left[\beta_j(q)^*-\beta_\infty(n)\right]}\Bigg\{c_n\beta_\infty(n)+\lambda_1\frac{Q(-\beta_\infty(n))}{\prod_{a=1}^N(\alpha_a-\beta_\infty(n))^{n_a}}-\left(\lambda_1+q\right)\Bigg\} .\label{denominatorXcetagamma0}
\end{align}
Since both polynomials $P_1(r)=\prod_{j=1}^N(\alpha_j-r)^{n_j}$ and $\prod_{j=1}^m\left(\beta_j(q)^*-r\right)$
have degree $m$, and since $Q(r)$ given in (\ref{laplacef1}) is a polynomial of
degree at most $m-1$, it follows that for any fixed $r$ and $s$ such that $r\neq \beta_\infty(n)$ and $s\neq \beta_\infty(n)$,
$$
\lim\limits_{n\rightarrow\infty}\frac{\beta_\infty(n)-r}{\beta_\infty(n)-s}=1,\quad \lim\limits_{n\rightarrow\infty}\frac{P_1(\beta_\infty(n))}{\prod_{j=1}^m\left(\beta_j(q)^*-\beta_\infty(n)\right)}=1,\quad \lim\limits_{n\rightarrow\infty}\frac{Q(-\beta_\infty(n))}{\prod_{a=1}^N(\alpha_a-\beta_\infty(n))^{n_a}}=0.
$$
Hence, letting $n\rightarrow\infty$ in both sides of (\ref{denominatorXcetagamma0}) we get
\begin{align}
&\left[ q-\Psi_\mathcal{X}(r)\right]\frac{\prod_{j=1}^N(\alpha_j-r)^{n_j}}{\prod_{j=1}^m\left(\beta_j(q)^*-r\right)}\nonumber \\
&=-\sum\limits_{l=1}^m\frac{P_1\left[\beta_l(q)^*\right]}{\prod\limits_{j\neq l}\left[\beta_j(q)^*-\beta_l(q)^*\right]}\Bigg\{\widehat{T}_{\beta_l(q)^*}\nu_\mathcal{S}(r)+\frac{\Psi_\mathcal{X}\left[\beta_l(q)^*\right]-q}{\beta_l(q)^*-r}\Bigg\}+G_\mathcal{S}(r)+\lambda_1+q.\nonumber
The remaining part of the proof is done similarly as in cases B and C using Lemma \ref{lemmakyprianou}.
The case when $\mathcal{S}$ is a compound Poisson process with $\mathbb{E}\left[ \mathcal{S}(1)\right]>0$ is obtained from case B as follows:
when $c=0$ we have
$\Psi_\mathcal{X}'(r)|_{r=0+}=\lambda_1\mu_1-\lambda_2\mu_2,$
hence $\Psi_\mathcal{X}(r)-q=0$ has $m$ roots under the assumption that $\lambda_1\mu_1-\lambda_2\mu_2>0$.
If $\mathcal{X}_c$ denotes the process $\mathcal{X}$ with drift $c>0$ when $\mathcal{S}$ is a subordinator and $\mathcal{X}$ denotes the same process with $c=0$, we have
$\Psi_{\mathcal{X}_c}(r)\to\Psi_\mathcal{X}(r)$ for all $r>0$ when $c\downarrow0$.
This implies that the roots of $\Psi_{\mathcal{X}_c}(r)-q$ must converge to those of $\Psi_\mathcal{X}(r)-q$, but since the latter function has only $m$ roots, one of the roots of $\Psi_{\mathcal{X}_c}(r)-q$ (say $\beta_{m+1}(q)$) must tend to infinity as $c\downarrow0$. Hence the result is obtained by the same procedure as before, replacing $c_n$ by $c$ and $\beta_\infty(n)$ by $\beta_{m+1}(q)$.
\end{proofoflemanegativeWHfactor}
{\noindent\bf Acknowledgment\ } The authors are grateful to two anonymous referees for their careful reading of the paper and for many useful suggestions which greatly improved the presentation of the results. The first-named author appreciates partial support from CONACyT Grant No. 257867.
\begin{thebibliography}{26}
\providecommand{\natexlab}[1]{#1}
\providecommand{\url}[1]{\texttt{#1}}
\expandafter\ifx\csname urlstyle\endcsname\relax
\providecommand{\doi}[1]{doi: #1}\else
\providecommand{\doi}{doi: \begingroup \urlstyle{rm}\Url}\fi
\bibitem[Asmussen et al.(2004)]{asmussenetal}
S. Asmussen, F. Avram and M.R. Pistorius.
\newblock Russian and American put options under exponential phase-type L\'evy models.
\newblock \emph{Stochastic Processes and their Applications}, 109 (1), 79-111, 2004.
\bibitem[Bertoin(1996)]{bertoin}
J. Bertoin.
\newblock \emph{L\'evy processes}.
\newblock Cambridge University Press, 1996.
\bibitem[Borovkov(1976)]{borovkov}
A. Borovkov.
\newblock \emph{Stochastic processes in queueing theory}.
\newblock Springer-Verlag New York, 1976.
\bibitem[Dickson and Hipp(2001)]{dicksonhipp}
D.C.M. Dickson and C. Hipp.
\newblock On the time to ruin for Erlang(2) risk processes.
\newblock \emph{Insurance: Mathematics and Economics}, 29 (3), 333-344, 2001.
\bibitem[Embrechts et al.(1979)Embrechts, Goldie, and Veraverbeke]{embrechtsetal}
P. Embrechts, C. Goldie, and N. Veraverbeke.
\newblock Subexponentiality and infinite divisibility.
\newblock \emph{Zeitschrift f\"{u}r Wahrscheinlichkeitstheorie und Verwandte Gebiete}, 49 (3), 335-347, 1979.
\bibitem[Feller(1971)]{feller}
W. Feller.
\newblock \emph{An introduction to probability theory and its applications}, volume II.
\newblock Wiley and Sons, 1971.
\bibitem[Gerber and Shiu(1998)]{gerbershiu}
H.U. Gerber and E.S.W. Shiu.
\newblock On the time value of ruin.
\newblock \emph{North American Actuarial Journal}, 2 (1), 48-78, 1998.
\bibitem[Kolkovska and Mart\'in-Gonz\'alez(2016)]{kolkovskamartin2}
E.T. Kolkovska and E.M. Mart\'in-Gonz\'alez.
\newblock Path functionals of a class of L\'evy insurance risk processes.
\newblock \emph{Communications on Stochastic Analysis}, 10 (3), 363-387, 2016.
\bibitem[Kolkovska and Mart\'in-Gonz\'alez(2018)]{kolkovskamartin3}
E.T. Kolkovska and E.M. Mart\'in-Gonz\'alez.
\newblock Asymptotic behavior of the ruin probability, the severity of ruin and the surplus prior to ruin of a two-sided jumps perturbed risk process.
\newblock \emph{XII Symposium of Probability and Stochastic Processes, Progress in Probability}, p. 107-134, Birkh\"auser, 2018.
\bibitem[Kuznetsov(2010{\natexlab{a}})]{kuznetsov}
A. Kuznetsov.
\newblock Wiener-Hopf factorization and distribution of extrema for a family of L\'evy processes.
\newblock \emph{Annals of Applied Probability}, 20 (5), 1801-1830, 2010{\natexlab{a}}.
\bibitem[Kuznetsov(2010{\natexlab{b}})]{kuznetsov2}
A. Kuznetsov.
\newblock Wiener-Hopf factorization for a family of L\'evy processes related to theta functions.
\newblock \emph{Journal of Applied Probability}, 47 (4), 1023-1033, 2010{\natexlab{b}}.
\bibitem[Kuznetsov and Peng(2012)]{kuznetsovpeng}
A. Kuznetsov and X. Peng.
\newblock On the Wiener-Hopf factorization for L\'evy processes with bounded positive jumps.
\newblock \emph{Stochastic Processes and Their Applications}, 122 (7), 2610-2638, 2012.
\bibitem[Kyprianou(2006)]{kyprianou}
A. Kyprianou.
\newblock \emph{Introductory lectures on fluctuations of L\'evy processes with applications}.
\newblock Springer-Verlag, Berlin Heidelberg, 2006.
\bibitem[Lewis and Mordecki(2008)]{lewismordecki}
A. Lewis and E. Mordecki.
\newblock Wiener-Hopf factorization for L\'evy processes having positive jumps with rational transforms.
\newblock \emph{Journal of Applied Probability}, 45 (1), 118-134, 2008.
\bibitem[Rolski et al.(1999)]{rolskietal}
T. Rolski, H. Schmidli, V. Schmidt, and J. Teugels.
\newblock \emph{Stochastic Processes for Insurance and Finance}.
\newblock John Wiley and Sons, 1999.
\bibitem[Sato(1999)]{sato}
K. Sato.
\newblock \emph{L\'evy Processes and Infinitely Divisible Distributions}.
\newblock Cambridge University Press, 1999.
\end{thebibliography}
\end{document}
\begin{document}
\title{On the detailed structure of quantum control landscape for fast single qubit phase-shift gate generation}
$^1$ Department of Mathematical Methods for Quantum Technologies, Steklov Mathematical Institute of Russian Academy of Sciences, 8 Gubkina Str., Moscow, 119991, Russia\footnote{\url{www.mi-ras.ru/eng/dep51}}\\
$^2$ National University of Science and Technology ``MISiS'', 4 Leninsky Prosp., Moscow 119991, Russia\\
E-mail: [email protected], [email protected] (corresponding author)
\begin{abstract}
In this work, we study the detailed structure of the quantum control landscape for the problem of single-qubit phase shift gate generation on the fast time scale. In previous works, the absence of traps for this problem was proven on various time scales. A special critical point which was known to exist in quantum control landscapes was shown to be either a saddle or a global extremum, depending on the parameters of the control system. However, in the saddle case the numbers of negative and positive eigenvalues of the Hessian at this point and their magnitudes have not been studied. At the same time, these numbers and magnitudes determine the relative ease or difficulty of practical optimization in a vicinity of the critical point. In this work, we compute the numbers of negative and positive eigenvalues of the Hessian at this saddle point and, moreover, give estimates on the magnitudes of these eigenvalues. We also significantly simplify our previous proof of the theorem about the Hessian at this saddle point [Theorem~3 in B.O.~Volkov, O.V.~Morzhin, A.N.~Pechen, J.~Phys.~A: Math. Theor. {\bf 54}, 215303 (2021)].
\end{abstract}
\section{Introduction}
Optimal quantum control, which includes methods for manipulation of quantum systems, currently attracts high attention due to various existing and prospective applications in quantum technologies~\cite{GlaserEurPhysJD2015}. Among important topics, one problem which was posed in~\cite{RHR} is the analysis of quantum control landscapes, that is, of local and global extrema of quantum control objective functionals. Various results have been obtained in this field, e.g., in~\cite{HR,MoorePRA2011,Pechen2011,Pechen2012,IJC2012,FouquieresSchirmer,PechenIl'in2014,PechenTannorCJC2014,Larocca2018,Zhdanov2018,Russell2018,PechenIl'in2016,Volkov_Morzhin_Pechen,DalgaardPRA2022} for closed and open quantum systems. For open quantum systems, a formulation of completely positive trace-preserving dynamics as points of a {\it complex Stiefel manifold} (strictly speaking, of certain factors of complex Stiefel manifolds over some transformations) was proposed, and the theory of open-system quantum control as gradient-flow optimization over complex Stiefel manifolds was developed in detail for two-level~\cite{PechenJPA2008} and general $n$--level quantum systems~\cite{OzaJPA2009} and applied to the analysis of quantum control landscapes. Control landscapes for open-loop and closed-loop control were analyzed in a unified framework~\cite{PechenPRA2010.82.030101}. A unified analysis of classical and quantum kinematic control landscapes was performed in~\cite{PechenEPJ2010}. Computation of the numbers of positive and negative eigenvalues of the Hessian at saddles of the control landscape is an important problem~\cite{MoorePRA2011}.
In this work, we analytically study the detailed structure of the quantum control landscape around a special critical point for the problem of single-qubit phase shift gate generation on the fast time scale. The absence of traps for single-qubit gate generation was proven on the long time scale in~\cite{Pechen2012,PechenIl'in2014} and on the fast time scale in~\cite{PechenIl'in2016,Volkov_Morzhin_Pechen}, where a single special control which is a critical point was studied and shown to be a saddle. However, the numbers of negative and positive eigenvalues of the Hessian at this saddle point control were not studied. At the same time, these numbers are important as they determine the numbers of directions along which the objective decreases or increases and hence determine the level of difficulty of practical optimization starting in a vicinity of the saddle point. The numbers of positive and negative eigenvalues of the Hessian of the objective for some other examples of quantum systems were computed in~\cite{MoorePRA2011}. In this work, we compute the numbers of negative and positive eigenvalues of the Hessian at this saddle point, give estimates on the magnitudes of the eigenvalues, and also significantly simplify our previous proof of the theorem about the Hessian at this saddle point~\cite{Volkov_Morzhin_Pechen}.
In Sec.~\ref{Sec:2}, we summarize results of previous works which are relevant for our study. In Sec.~\ref{Sec:3}, the main theorem of this work is presented; its proof is provided in Sec.~\ref{Sec:4}. The concluding section summarizes this work.
\section{Previous results}\label{Sec:2}
A single qubit driven by a coherent control $f\in L^2([0,T];\mathbb R)$, where $T>0$ is the final time, in the absence of the environment evolves according to the Schr\"odinger equation for the unitary evolution operator $U_t^f$
\begin{equation}
\frac{dU_t^f}{dt}=-i(H_0+f(t)V)U_t^f,\qquad U_0^f=\mathbb I
\end{equation}
where $H_0$ and $V$ are the free and interaction Hamiltonians. A common assumption is that $[H_0,V]\ne 0$. This assumption guarantees controllability of the two-level system; otherwise the dynamics is trivial. A single qubit quantum gate is a unitary $2\times 2$ operator $W$ defined up to a physically irrelevant phase, so that one can take $W\in SU(2)$. The problem of single qubit gate generation can be formulated as
\begin{equation}
J_W[f]=\frac14 |{\rm Tr} (U_T^f W^\dagger)|^2\to\max.
\end{equation}
In~\cite{PechenIl'in2014,PechenIl'in2016,Volkov_Morzhin_Pechen}, the results on the absence of traps (points of local but not global extrema of $J_W$) for this problem were obtained. To explicitly formulate these results, consider the special constant control $f(t)=f_0$ and time $T_0$:
\begin{eqnarray}
f_0&:=&\frac{-{\rm Tr} H_0{\rm Tr} V+2{\rm Tr}(H_0V)}{({\rm Tr} V)^2-2{\rm Tr}(V^2)},\\
T_0&:=&\frac {\pi}{\|H_0-\mathbb I {\rm Tr} H_0/2+f_0(V-\mathbb I {\rm Tr} V/2)\|}.
\end{eqnarray}
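As a hypothetical numerical sanity check (not from the paper), one can verify that the special control $f_0$ defined above vanishes for $H_0=\sigma_z$ and $V=\upsilon_x\sigma_x+\upsilon_y\sigma_y$, since all the traces in the numerator are zero; the coupling values `vx`, `vy` below are arbitrary assumptions:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

vx, vy = 0.8, -0.3               # assumed coupling constants
H0, V = sz, vx * sx + vy * sy

# f0 = (-Tr H0 Tr V + 2 Tr(H0 V)) / ((Tr V)^2 - 2 Tr(V^2))
num = -np.trace(H0) * np.trace(V) + 2 * np.trace(H0 @ V)
den = np.trace(V) ** 2 - 2 * np.trace(V @ V)
f0 = (num / den).real
print(abs(f0))  # -> 0.0
```

This matches the statement below that the special control is $f_0=0$ in the case $H_0=\sigma_z$, $V=\upsilon_x\sigma_x+\upsilon_y\sigma_y$.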
The following theorem was proved in~\cite{PechenIl'in2014}.
\begin{theorem}\label{theorem2016-1}
Let $W\in SU(2)$ be a single qubit quantum gate.
If $[W,H_0+f_0V]\neq0$ then for any $T>0$ traps do not exist. If $[W,H_0+f_0V]=0$ then any control, except possibly $f\equiv f_0$, is not trap for any $T>0$ and the control $f_0$ is not trap for $T>T_0$.
\end{theorem}
Whether the control $f_0$ can be a trap for $T\leq T_0$ was partially studied in~\cite{PechenIl'in2016}. Without loss of generality it is sufficient to consider the case $H_0=\sigma_z$ and $V=\upsilon_x\sigma_x+\upsilon_y\sigma_y$, where $\upsilon_x,\upsilon_y\in \mathbb{R}$ ($\upsilon_x^2+\upsilon_y^2>0$) and $\sigma_x,\sigma_y,\sigma_z$ are the Pauli matrices:
\begin{equation}
\sigma_x=\Bigg(\begin{array}{*{20}{c}}0 & 1 \\ 1 & 0\end{array}\Bigg), \qquad \sigma_y=\Bigg(\begin{array}{*{20}{c}}0 & -i \\ i & 0\end{array}\Bigg), \qquad \sigma_z=\Bigg(\begin{array}{*{20}{c}} 1 & 0 \\ 0 & -1\end{array}\Bigg).
\end{equation}
In this case, the special time is $T_0=\frac {\pi}2$ and the special control is $f_0=0$.
By Theorem~\ref{theorem2016-1}, if $[W,\sigma_z]\neq 0$, then for any $T>0$ there are no traps for $J_W$. If $[W,\sigma_z]=0$, then $W=e^{i\varphi_W \sigma_z+i\beta}$, where $\varphi_W\in (0,\pi]$ and $\beta\in [0,2\pi)$. The phase can be neglected, so without loss of generality we set $\beta=0$. Below we consider only such gates. The following result was proved in~\cite{PechenIl'in2016}.
\begin{theorem}\label{theorem2016-2}
Let $W=e^{i\varphi_W \sigma_z}$. If $\varphi_W\in (0,\frac{\pi}{2})$, then for any $T>0$ there are no traps. If $\varphi_W\in [\frac \pi 2,\pi]$, then for any $T>\pi-\varphi_W$ there are no traps.
\end{theorem}
For fixed $\varphi_W$ and $T$ the value of the objective evaluated at $f_0$ is
\begin{equation}\label{JD2}
J_W[f_0]=\cos^2{(\varphi_W+T)}.
\end{equation}
If $\varphi_W+T=\pi$ then $J_W[f_0]=1$ and $f_0$ is a point of global maximum. If $\varphi_W+T=\frac{\pi}2$ or $\varphi_W+T=\frac{3\pi}2$ then $J_W[f_0]=0$ and $f_0$ is a point of global minimum.
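The value (\ref{JD2}) can be checked by direct evolution: for $f=0$ the propagator is $U_T=e^{-i\sigma_z T}$ and $W=e^{i\varphi_W\sigma_z}$. A minimal numerical sketch (the sample parameters below are assumptions):

```python
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)

phi_W, T = 2.0, 0.7              # assumed sample parameters
U_T = expm(-1j * sz * T)         # free evolution under H0 = sigma_z, f = 0
W = expm(1j * phi_W * sz)        # target phase shift gate

# objective J_W[f0] = (1/4) |Tr(U_T W^dagger)|^2
J = 0.25 * abs(np.trace(U_T @ W.conj().T)) ** 2
print(J, np.cos(phi_W + T) ** 2)  # the two values agree
```

Indeed, $U_T W^\dagger = e^{-i(T+\varphi_W)\sigma_z}$ has trace $2\cos(\varphi_W+T)$, reproducing $J_W[f_0]=\cos^2(\varphi_W+T)$.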
The Taylor expansion of the functional $J_W$ at $f$ up to the second order has the form (for the theory of calculus of variations in infinite dimensional spaces see~\cite{BogachevSmolyanov}):
\begin{eqnarray}
J_W[f+\delta f] &=& J_W[f]+\int_0^T\frac{\delta J_W}{\delta f(t)}\delta f(t)dt\nonumber \\
&&+\frac 12 \int_0^T\int_0^T {\rm Hess}(t,s)\delta f(t)\delta f(s)dtds+o(\|\delta f\|^2_{L^2})
\end{eqnarray}
The linear term is determined by the integral kernel of the Fr\'echet derivative,
\[
\frac{\delta J_W}{\delta f(t)}=\frac 12 \Im({\rm Tr} Y^\ast {\rm Tr}(YV_t))
\]
and determines the gradient of the objective; here as in~\cite{PechenIl'in2016} we use the notations $Y=W^\dagger U^f_T$ and $V_t=U^{f\dagger}_t VU^f_t$. The second order term is the integral kernel of the Hessian,
\[
{\rm Hess}(t,s)=
\begin{cases}
\frac 12 \Re({\rm Tr}(YV_{t}){\rm Tr}(Y^\ast V_{s})-{\rm Tr}(YV_{s}V_{t}){\rm Tr} Y^\ast)
,&\textrm{ if } s\geq t\\
\frac 12 \Re({\rm Tr}(YV_{s}){\rm Tr}(Y^\ast V_{t})-{\rm Tr}(YV_{t}V_{s}){\rm Tr} Y^\ast)
,& \textrm{ if } s<t.
\end{cases}
\]
The control $f_0=0$ is a critical point, i.e., the gradient of the objective evaluated at this control is zero. The Hessian at $f_0=0$ has the form (see~\cite{PechenIl'in2016}):
\begin{equation}\label{Hess}
{\rm Hess}(s,t)=-2\upsilon^2\cos{\varphi}\cos{(2|t-s|+\varphi)},
\end{equation}
where $\varphi=-\varphi_W-T$ and $\upsilon=\sqrt{\upsilon^2_x+\upsilon^2_y}$.
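The spectrum of the integral operator with kernel (\ref{Hess}) can be explored numerically by a Nystr\"om-type discretization. The sketch below is not from the paper; the parameters are an assumed sample point of the domain $\mathcal{D}_2$ defined next, where (by the theorem above) the Hessian has eigenvalues of both signs:

```python
import numpy as np

v = 1.0
T, phi_W = 1.4, 2.6              # assumed sample point with pi - T < phi_W < pi
phi = -phi_W - T

N = 600
t = np.linspace(0.0, T, N)
w = T / (N - 1)
# Hessian kernel Hess(s,t) = -2 v^2 cos(phi) cos(2|t-s| + phi)
Hess = -2 * v**2 * np.cos(phi) * np.cos(2 * np.abs(t[:, None] - t[None, :]) + phi)
weights = np.full(N, w); weights[0] = weights[-1] = w / 2   # trapezoid rule
A = np.sqrt(weights)[:, None] * Hess * np.sqrt(weights)[None, :]  # symmetrized

eig = np.linalg.eigvalsh(A)
scale = np.abs(eig).max()
print((eig > 1e-6 * scale).sum(), (eig < -1e-6 * scale).sum())
```

The trace of the discretized operator equals $-2\upsilon^2\cos^2\varphi\, T$ exactly, which gives a convenient consistency check on the discretization.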
For the values of the parameters $(\varphi_W,T)$, we consider the following cases (see Fig.~\ref{fig2}, where the set $\mathcal{D}_2$ is in addition divided into three subsets described in Sec.~\ref{Sec:4}):
\begin{itemize}
\item $(\varphi_W,T)$ belongs to the triangle domain
\[
\hspace*{-1.2cm}
\mathcal{D}_1 := \left\{ (\varphi_W, T)~:~ 0<T<\frac{\pi}{2}, \quad \frac{\pi}{2} \leq \varphi_W < \pi - T \right\};
\]
\item $(\varphi_W,T)$ belongs to the triangle domain
\[
\hspace*{-1.2cm}
\mathcal{D}_2 := \left\{ (\varphi_W, T)~:~ 0 < T \leq \frac{\pi}{2}, \quad \pi - T < \varphi_W < \pi, \quad (\varphi_W,T)\neq (\pi, \frac {\pi}2) \right\};
\]
\item $(\varphi_W,T)$ belongs to the square domain without the diagonal
\[
\hspace*{-1.2cm}
\mathcal{D}_3 := \left\{(\varphi_W, T)~:~ 0 < T \leq \frac{\pi}{2}, \quad 0 < \varphi_W <\frac {\pi}2 ,\quad \varphi_W+T\neq\frac{\pi}2\right\};
\]
\item $(\varphi_W,T)$ belongs to the set
\[
\hspace*{-1.2cm}
\mathcal{D}_4 := \left\{ (\varphi_W, T)~:~ 0 < T \leq \frac{\pi}{2},\quad \varphi_W=\pi \right\}.
\]
\end{itemize}
\begin{remark}
Note that these notations for the domains $\mathcal{D}_2,\mathcal{D}_3,\mathcal{D}_4$ are different from that used in~\cite{Volkov_Morzhin_Pechen}. The present notations seem to be more convenient.
\end{remark}
The following theorem was obtained in~\cite{Volkov_Morzhin_Pechen}.
\begin{theorem}
If $(\varphi_W,T)\in \mathcal{D}_1\cup \mathcal{D}_2\cup \mathcal{D}_3\cup \mathcal{D}_4$ then the Hessian of the objective functional $J_W$ at $f_0=0$ is an injective compact operator on $L^2([0,T];\mathbb{R})$. Moreover,
\begin{enumerate}
\item If $(\varphi_W,T)\in \mathcal{D}_1$, then Hessian at $f_0$ has only
negative eigenvalues.
\item If $(\varphi_W,T)\in \mathcal{D}_2\cup \mathcal{D}_3\cup \mathcal{D}_4$ then Hessian
at $f_0$ has both negative and positive eigenvalues. In this case, the special control $f_0=0$ is a saddle point for the objective functional.
\end{enumerate}
\end{theorem}
Note that the numbers of positive and negative eigenvalues mentioned in item 2 of this theorem, as well as their magnitudes, were not computed in~\cite{Volkov_Morzhin_Pechen}. However, these numbers and magnitudes are important since they determine the numbers of directions along which $J_W$ increases or decreases, and the magnitudes of the eigenvalues determine the speed of increase and decrease of the objective along these directions. All of this affects the relative ease or difficulty of practical optimization in a vicinity of $f_0$.
\section{Main theorem}\label{Sec:3}
Our main result of this work is the following theorem.
\begin{theorem}\label{theorem4} One has the following.
\begin{enumerate}
\item If $(\varphi_W,T)\in \mathcal{D}_1$, then the Hessian at $f_0$ has only
negative eigenvalues.
\item If $(\varphi_W,T)\in \mathcal{D}_2$, then the Hessian
at $f_0$ has two positive and infinitely many negative eigenvalues.
\item If $(\varphi_W,T)\in\mathcal{D}_3$ and $\varphi_W+T<\pi/2$,
then the Hessian at $f_0$ has one positive and infinitely many negative eigenvalues.
If $(\varphi_W,T)\in\mathcal{D}_3$ and $\varphi_W+T>\pi/2$, then the Hessian at $f_0$ has one negative and infinitely many positive eigenvalues.
\item If $(\varphi_W,T)\in\mathcal{D}_4$, then the Hessian at $f_0$ has one negative and infinitely many positive eigenvalues.
\end{enumerate}
\end{theorem}
\section{Proof of the main theorem}\label{Sec:4}
In this section we prove Theorem~\ref{theorem4}.
If $(\varphi_W,T)\in \mathcal{D}_1\cup \mathcal{D}_2\cup \mathcal{D}_3\cup \mathcal{D}_4$, then $\sin2\varphi=-\sin2(\varphi_W+T)\neq 0$.
Instead of the Hessian, we can consider the integral operator $K=\frac 1{\upsilon^2\sin2\varphi} {\rm Hess}$ with the integral kernel
\begin{equation}
K(s,t)=-\frac{\cos{(2|t-s|+\varphi)}}{\sin{\varphi}}.
\end{equation}
For any continuous $g$, we can find $h=Kg$ as a unique solution of the ODE:
\begin{equation}\label{eq1!}
h''(t)+4h(t)=4g(t),
\end{equation}
which satisfies the initial conditions
\begin{eqnarray}
h(0)&=&-\frac 1{\sin{\varphi}}\int_0^T\cos{(2s+\varphi)}g(s)ds, \label{bound1}\\
h'(0)&=&-\frac {2}{\sin{\varphi}}\int_0^T\sin{(2s+\varphi)}g(s)ds. \label{bound2}
\end{eqnarray}
Let $\mu$ be an eigenvalue of the operator $K$ and $g$ be the corresponding eigenfunction, so that $h=Kg=\mu g$.
Let $\lambda=1/\mu$.
Then using~(\ref{eq1!}), we obtain that
\begin{equation}\label{hh}
h''(t)=4(\lambda-1)h(t).
\end{equation}
In the following subsections, we examine whether the eigenvalues $\mu$ of $K$ can be positive.
This problem is similar to a Sturm--Liouville problem (see~\cite{Vladimirov}).
\subsection{Case $\lambda<1$}
Consider the case $\lambda<1$.
Let $a^2=(1-\lambda)$ and $a>0$. If $h$ satisfies~(\ref{hh}) then $h$ has the form $h(t)=b\cos 2at+c\sin 2at$ and
\[
g(t)=(1-a^2)(b\cos 2at+c\sin 2at).
\]
If we substitute $g$ into~(\ref{bound1}) and~(\ref{bound2}), then we get a system of two linear algebraic equations in $(b,c)$ (see~\cite{Volkov_Morzhin_Pechen}). It has a non-zero solution if and only if the determinant of this system is equal to zero.
This determinant has the form
\[
F^1_{\varphi_W,T}(a)=-2a-a^2\sin{(2aT)}\sin{(2\varphi_W)}-\sin{(2aT)}\sin{(2\varphi_W)}+2a\cos{(2aT)}\cos({2\varphi_W)}.
\]
So the function $g$ is an eigenfunction of the operator $K$ with the eigenvalue $\mu=1/\lambda=1/(1-a^2)$ if and only if
$a$ is a positive root of the function $F^1_{\varphi_W,T}$.
It is easy to see that for $(\varphi_W,T)\in\mathcal{D}_4$
the roots of the function $F^1_{\varphi_W,T}$ are $a_n=\frac{\pi n}T$. If $(\varphi_W,T)\in\mathcal{D}_1$
and $\varphi_W=\frac{\pi}2$, then the roots of the function $F^1_{\varphi_W,T}$ are $a_n=\frac{2\pi n-\pi}{2T}$. Hence in these cases
$\mu_n=\frac 1{1-a_n^2}$ belong to the spectrum of the operator $K$. These numbers are negative. We will show below that there is also one positive eigenvalue for $(\varphi_W,T)\in\mathcal{D}_4$.
\begin{lemma}
\label{lemma2}
Let $T\in(0,\frac\pi 2)$. The equation
\[
\alpha x=\tan{(Tx)}
\]
has exactly one root on $(0,1)$ if $T< \alpha <\tan{T}$, and has no roots on $(0,1)$ if $\alpha\in(-\infty,T]$ or $\alpha \in[\tan{T},+\infty)$.
\end{lemma}
The proof of this lemma is illustrated in Fig.~\ref{fig1} (left subplot).
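Lemma~\ref{lemma2} can also be checked numerically by counting sign changes of $\alpha x-\tan(Tx)$ on a fine grid. The following Python sketch is our own illustration, with $T=1$ an arbitrary choice in $(0,\pi/2)$.

```python
import math

def count_roots(alpha, T, n=4000):
    # Count sign changes of alpha*x - tan(T*x) on (0, 1); for T < pi/2 the
    # function tan(T*x) is finite on the whole interval.
    eps = 1e-6
    xs = [eps + (1.0 - 2.0 * eps) * i / n for i in range(n + 1)]
    vals = [alpha * x - math.tan(T * x) for x in xs]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

T = 1.0                            # arbitrary T in (0, pi/2); tan(T) ~ 1.557
assert count_roots(1.2, T) == 1    # T < alpha < tan(T): exactly one root
assert count_roots(0.5, T) == 0    # alpha <= T: no roots
assert count_roots(2.0, T) == 0    # alpha >= tan(T): no roots
```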
\begin{lemma}
\label{lemma3}
Let $T\in(0,\frac\pi 2)$. The equation
\[
\alpha x=\cot{(Tx)}
\]
has exactly one root on $(0,1)$ if $\cot{T}< \alpha$, and has no roots on $(0,1)$ if $\alpha\in(-\infty,\cot{T}]$.
\end{lemma}
The proof of this lemma is illustrated in Fig.~\ref{fig1} (right subplot).
\begin{figure}
\caption{Illustration for Lemma~\ref{lemma2} (left) and Lemma~\ref{lemma3} (right).}
\label{fig1}
\end{figure}
In addition, we divide the domain ${\cal D}_2$ into the following three subdomains (see Fig.~\ref{fig2}).
\begin{itemize}
\item $(\varphi_W,T)$ belongs to the set
\[
\hspace*{-1.2cm}
\mathcal{D}'_2 := \left\{ (\varphi_W, T)~:~(\varphi_W,T)\in \mathcal{D}_2,\quad T<-\tan{\varphi_W} \right\};
\]
\item $(\varphi_W,T)$ belongs to the set
\[
\hspace*{-1.2cm}
\mathcal{D}_2'':= \left\{ (\varphi_W, T)~:~(\varphi_W,T)\in \mathcal{D}_2,\quad T=-\tan{\varphi_W} \right\};
\]
\item $(\varphi_W,T)$ belongs to the set
\[
\hspace*{-1.2cm}
\mathcal{D}_2''' := \left\{ (\varphi_W, T)~:~(\varphi_W,T)\in \mathcal{D}_2,\quad T>-\tan{\varphi_W} \right\}.
\]
\end{itemize}
\begin{figure}
\caption{The domains of the rectangle $(\varphi_W,T)\in[0,\pi]\times [0,\pi/2]$.}
\label{fig2}
\end{figure}
\begin{proposition}\label{prop1}
Positive roots of the function $F^1_{\varphi_W,T}$ are
$\{a_n\}$ and $\{a'_m\}$, where $a_n\in \left(\frac{(n-1)\pi}{T},\frac{n\pi}{T}\right)$ is a solution of the equation
\begin{equation}
\label{eq1}
-x\cot{\varphi_W}=\cot{(xT)}
\end{equation}
and $a'_m\in \left(\frac{(m-1)\pi}{T},\frac{m\pi}{T}\right)$ is a solution of the equation
\begin{equation}
\label{eq2}
-x\tan{\varphi_W}=\tan{(xT)}\, ,
\end{equation}
$n\in \mathbb{N}$, $m$ runs through $\{2,\ldots\}$ for $(\varphi_W,T)\in \mathcal{D}''_2\cup \mathcal{D}'''_2\cup \mathcal{D}_3$ and runs through $\{1,2,\ldots\}$ for $(\varphi_W,T)\in \mathcal{D}_1\cup \mathcal{D}'_2$.
Then $\mu_n=\frac 1{1-a_n^2}$, $\mu'_m=\frac 1{1-{a'}^2_m}$ belong to the spectrum of the operator $K$. Moreover,
\begin{enumerate}
\item If $(\varphi_W,T)\in \mathcal{D}_1$ and $\varphi_W\neq \frac{\pi}2$, then the numbers $\{\mu_n\}$ and $\{\mu'_m\}$ are negative, where
$n,m\in\mathbb{N}$.
\item If $(\varphi_W,T)\in \mathcal{D}''_2\cup \mathcal{D}'''_2$, then $\mu_1=\frac 1{1-a_1^2}>1$ is positive. The numbers $\{\mu_n\}$ and $\{\mu'_m\}$ are negative for $n>1$ and $m\in \mathbb{N}$.
\item If $(\varphi_W,T)\in \mathcal{D}'_2$, then $\mu_1=\frac 1{1-a_1^2}>1$ and $\mu'_1=\frac 1{1-{a'_1}^2}>1$ are positive. The numbers $\{\mu_n\}$ and $\{\mu'_m\}$ are negative for $n>1$ and $m>1$.
\item If $(\varphi_W,T)\in \mathcal{D}_3$, then the numbers $\{\mu_n\}$ and $\{\mu'_m\}$ are negative, where $n\in\mathbb{N}$ and $m>1$.
\end{enumerate}
\end{proposition}
\textbf{Proof.}
Let us analyze the positive roots of the function $F^1_{\varphi_W,T}$.
For this purpose we consider the following quadratic (with respect to $x$) equation:
\[
x^2\sin{(2aT)}\sin{(2\varphi_W)}+2x(1-\cos{(2aT)}\cos{(2\varphi_W)})+\sin{(2aT)}\sin{(2\varphi_W)}=0.
\]
The roots of this quadratic equation are
\begin{eqnarray}
x_1&=&-\tan{\varphi_W}\cot{\left(aT\right)},\\
x_2&=&-\cot{\varphi_W}\tan{\left(aT\right)}
\end{eqnarray}
Hence $a$ is a root of the function $F^1_{\varphi_W,T}$
if and only if $x=a$ is a solution of either equation~(\ref{eq1}) or equation~(\ref{eq2}).
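The factorization underlying this step can be spot-checked numerically; in the following Python sketch, the parameter triples $(a,T,\varphi_W)$ are arbitrary illustrative choices of ours with all tangents finite.

```python
import math

def Q(x, a, T, phi):
    # The quadratic (in x) displayed above, with parameters a, T, phi = phi_W.
    return (x * x * math.sin(2 * a * T) * math.sin(2 * phi)
            + 2 * x * (1 - math.cos(2 * a * T) * math.cos(2 * phi))
            + math.sin(2 * a * T) * math.sin(2 * phi))

# Check that x1 = -tan(phi)cot(aT) and x2 = -cot(phi)tan(aT) are roots of Q.
for a, T, phi in [(0.3, 0.8, 0.6), (0.9, 1.2, 2.5), (1.3, 0.4, 1.0)]:
    x1 = -math.tan(phi) / math.tan(a * T)
    x2 = -math.tan(a * T) / math.tan(phi)
    assert abs(Q(x1, a, T, phi)) < 1e-9
    assert abs(Q(x2, a, T, phi)) < 1e-9
```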
The number $\mu=\frac 1{1-a^2}$
is a positive eigenvalue of the operator $K$ if and only if
$a\in (0,1)$.
If $(\varphi_W,T)\in \mathcal{D}_1$ and $\varphi_W\neq \frac{\pi}2$ then
\[
\tan{T}<\tan{(\pi-\varphi_W)}=-\tan{(\varphi_W)}.
\]
Due to Lemma~\ref{lemma3}, equation~(\ref{eq1}) has no roots on $(0,1)$.
Due to Lemma~\ref{lemma2}, equation~(\ref{eq2})
has no roots on $(0,1)$.
If $(\varphi_W,T)\in \mathcal{D}_2$, then
\[
\cot{T}<-\cot{\varphi_W}.
\]
Hence, due to Lemma~\ref{lemma3} equation~(\ref{eq1}) has one root on $(0,1)$.
If $T\geq -\tan{\varphi_W}$, then Lemma~\ref{lemma2}
implies that~(\ref{eq2}) has no solutions on $(0,1)$.
So if $(\varphi_W,T)\in \mathcal{D}''_2\cup \mathcal{D}'''_2$, then the function $F^1_{\varphi_W,T}$ has only one root $a_1$ in the interval $(0,1)$.
If $(\varphi_W,T)\in \mathcal{D}'_2$, then the function $F^1_{\varphi_W,T}$ has two roots $a_1$ and $a'_1$ in the interval $(0,1)$.
If $(\varphi_W,T)\in \mathcal{D}_3$, then $-\tan{\varphi_W}<0$, and Lemmas~\ref{lemma2} and~\ref{lemma3} imply that neither equation~(\ref{eq1}) nor equation~(\ref{eq2}) has solutions on $(0,1)$. This completes the proof.
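As an independent check of Proposition~\ref{prop1}, one can count the sign changes of $F^1_{\varphi_W,T}$ on $(0,1)$ numerically. In the following Python sketch, the sample points $(\varphi_W,T)=(2.8,0.35)\in\mathcal{D}'_2$ and $(2.0,0.5)\in\mathcal{D}_1$ are our own illustrative choices.

```python
import math

def F1(a, phi_w, T):
    # The determinant F^1_{phi_W, T}(a) from the case lambda < 1.
    return (-2 * a
            - a * a * math.sin(2 * a * T) * math.sin(2 * phi_w)
            - math.sin(2 * a * T) * math.sin(2 * phi_w)
            + 2 * a * math.cos(2 * a * T) * math.cos(2 * phi_w))

def sign_changes_on_unit_interval(phi_w, T, n=20000):
    xs = [1e-6 + (1.0 - 2e-6) * i / n for i in range(n + 1)]
    vals = [F1(x, phi_w, T) for x in xs]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

# (phi_W, T) = (2.8, 0.35) lies in D_2' (pi - T < phi_W < pi, T < -tan(phi_W)):
# two roots of F^1 in (0, 1), hence two eigenvalues mu > 1.
assert sign_changes_on_unit_interval(2.8, 0.35) == 2

# (phi_W, T) = (2.0, 0.5) lies in D_1: no roots of F^1 in (0, 1).
assert sign_changes_on_unit_interval(2.0, 0.5) == 0
```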
\subsection{Case $\lambda=1$}
If $\mu=1$ is an eigenvalue of the operator $K$, then
the corresponding eigenfunctions must have the form
\[
h(t)=g(t)=ct+b.
\]
If we substitute $g$ into~(\ref{bound1}) and~(\ref{bound2}), then we get a system of two linear algebraic equations in $(b,c)$ (see~\cite{Volkov_Morzhin_Pechen}). This system has a non-zero solution if and only if the determinant of this system is equal to zero.
This determinant has the form
\begin{equation}
\Delta=-2\sin{\varphi_W}(\sin{\varphi_W}+T\cos{\varphi_W}).
\end{equation}
\begin{proposition}\label{prop2}
$\mu'_1=1$ is an eigenvalue
of $K$ only in the following cases
\begin{enumerate}
\item $(\varphi_W,T)\in \mathcal{D}_{4}$.
\item $(\varphi_W,T)\in \mathcal{D}_2''$.
\end{enumerate}
\end{proposition}
\subsection{Case $\lambda>1$}
Consider the case $\lambda>1$.
Let $a^2=(\lambda-1)$ and $a>0$. If $h$ satisfies~(\ref{hh}) then $h$ has the form $h(t)=be^{2at}+ce^{-2at}$.
Then
\[
g(t)=(1+a^2)(be^{2at}+ce^{-2at}).
\]
If we substitute $g$ into~(\ref{bound1}) and~(\ref{bound2}), then we get a system of two linear algebraic equations in $b$ and $c$ (see~\cite{Morzhin_Pechen_LJM_2020,Morzhin_Pechen_LJM_2019}). This system has a non-zero solution if and only if the determinant of this system is equal to zero.
This determinant has the form
\begin{eqnarray}
\label{1111bound11}
F^2_{\varphi_W,T}(a) &=&-a^2\sinh{(2aT)}\sin{(2\varphi_W)} \nonumber \\
&&+2a(1-\cosh{(2aT)}\cos{(2\varphi_W)})
+\sinh{(2aT)}\sin{(2\varphi_W)}.
\end{eqnarray}
So the function $g$ is an eigenfunction of the operator $K$ with the eigenvalue $\mu=1/\lambda=1/(1+a^2)$ if and only if
$a$ is a positive root of the function $F^2_{\varphi_W,T}$.
It is easy to see that for $(\varphi_W,T)\in\mathcal{D}_4$,
and for $(\varphi_W,T)\in\mathcal{D}_1$ with $\varphi_W=\frac{\pi}2$, the function $F^2_{\varphi_W,T}$ has no positive roots.
\begin{proposition}\label{prop3} One has the following:
\begin{enumerate}
\item If $(\varphi_W,T)\in \mathcal{D}_3$, then $F^2_{\varphi_W,T}$ has only one positive root $a'_1>0$, where $a'_1$ is a solution of the equation
\begin{equation}
\label{eq12}
x\cot{\varphi_W}=\coth{(Tx)}.
\end{equation}
Then $\mu'_1=\frac1{1+{a'_1}^2}<1$ is a positive eigenvalue of the operator $K$.
\item
If $(\varphi_W,T)\in \mathcal{D}_1\cup \mathcal{D}'_2\cup \mathcal{D}''_2$ and $\varphi_W\neq \frac{\pi}2$, then the function $F^2_{\varphi_W,T}$ has no positive roots.
\item If $(\varphi_W,T)\in \mathcal{D}'''_2$,
then $F^2_{\varphi_W,T}$ has only one positive root $a'_1>0$, where $a'_1$ is a solution of the equation
\begin{equation}
\label{eq11}
-x\tan{\varphi_W}=\tanh{(Tx)}.
\end{equation}
Then $\mu'_1=\frac1{1+{a'_1}^2}<1$
is a positive eigenvalue of the operator $K$.
\end{enumerate}
\end{proposition}
\textbf{Proof.}
Let us analyze the positive roots of the function $F^2_{\varphi_W,T}$.
For this purpose we consider the following quadratic (with respect to $x$) equation:
\[
x^2\sinh{(2aT)}\sin{(2\varphi_W)}-2x(1-\cosh{(2aT)}\cos{(2\varphi_W)})-\sinh{(2aT)}\sin{(2\varphi_W)}=0.
\]
The roots of this equation are
\begin{eqnarray}
x_1&=&-\cot{\varphi_W}\tanh{\left(aT\right)},\\
x_2&=&\tan{\varphi_W}\coth{\left(aT\right)}
\end{eqnarray}
Then $a$ is a root of the function $F^2_{\varphi_W,T}$
if and only if $x=a$ is a solution of either equation~(\ref{eq11})
or equation~(\ref{eq12}).
If $(\varphi_W,T)\in \mathcal{D}_3$, then, since $\tan{\varphi_W}>0$, equation~(\ref{eq11}) has no positive roots, while equation~(\ref{eq12}) has exactly one positive root.
If $(\varphi_W,T)\in \mathcal{D}_1\cup \mathcal{D}_2$ and $\varphi_W\neq \frac \pi 2$, then, since $\cot{\varphi_W}<0$, equation~(\ref{eq12}) has no positive roots. If $(\varphi_W,T)\in \mathcal{D}_1\cup \mathcal{D}'_2\cup \mathcal{D}''_2$ and $\varphi_W\neq \frac \pi 2$,
then $T\leq-\tan\varphi_W$ and $\tanh{(Tx)}<-\tan({\varphi_W})x$ for positive $x$, so equation~(\ref{eq11}) has no positive roots.
If $(\varphi_W,T)\in \mathcal{D}'''_2$, then equation~(\ref{eq11}) has exactly one positive root.
The statement of Theorem~\ref{theorem4} follows directly from Propositions~\ref{prop1},~\ref{prop2} and~\ref{prop3}. Importantly, these propositions also provide estimates for the magnitudes of the eigenvalues.
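Proposition~\ref{prop3} admits the same kind of numerical check as Proposition~\ref{prop1}, this time by counting the positive roots of $F^2_{\varphi_W,T}$; the sample points below are our own illustrative choices.

```python
import math

def F2(a, phi_w, T):
    # The determinant F^2_{phi_W, T}(a) from the case lambda > 1.
    return (-a * a * math.sinh(2 * a * T) * math.sin(2 * phi_w)
            + 2 * a * (1 - math.cosh(2 * a * T) * math.cos(2 * phi_w))
            + math.sinh(2 * a * T) * math.sin(2 * phi_w))

def positive_roots(phi_w, T, hi=5.0, n=50000):
    # Count sign changes of F^2 on (0, hi]; for these parameter values the
    # relevant roots lie well inside this range.
    xs = [1e-6 + hi * i / n for i in range(n + 1)]
    vals = [F2(x, phi_w, T) for x in xs]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

# (phi_W, T) = (0.5, 0.5) lies in D_3: exactly one positive root of F^2,
# giving exactly one eigenvalue mu in (0, 1).
assert positive_roots(0.5, 0.5) == 1

# (phi_W, T) = (2.0, 0.5) lies in D_1: F^2 has no positive roots.
assert positive_roots(2.0, 0.5) == 0
```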
\section{Conclusions}
Analysis of either existence or absence of traps (which are points of local, but not global, extrema of the objective quantum functional) is important for quantum control. It was known that in the problem of single-qubit
phase shift quantum gate generation all controls, except maybe the special control $f_0=0$ at small times, cannot be traps. In the previous work~\cite{Volkov_Morzhin_Pechen}, we studied the spectrum of the Hessian at this control $f_0$ and investigated under what conditions this control is a saddle point of the quantum objective functional. In this work, we have calculated the numbers of negative and positive eigenvalues of the Hessian at this control point and obtained estimates for the magnitudes of these eigenvalues. At the same time, we significantly simplified the proof of Theorem~3 of the paper~\cite{Volkov_Morzhin_Pechen}.
\end{document}
\begin{document}
\title[Coefficient invariances of Convex functions]{Coefficient invariances
of Convex functions}
\begin{abstract}
We summarise known sharp bounds for coefficient invariances for convex
functions, and suggest some significant open problems
\end{abstract}
\begin{abstract}
For convex univalent functions we give instances where the sharp bounds for
various coefficient functionals are identical to the corresponding bounds
for the inverse function. We give instances where the sharp bounds
differ, and also suggest some significant open problems.
\end{abstract}
\author[D. K. Thomas]{Derek K. Thomas}
\address{Derek K. Thomas, Department of Mathematics, Swansea University Bay
Campus, Swansea, SA1 8EN, United Kingdom.}
\email{[email protected]}
\keywords{Univalent functions, convex functions, Carath\'eodory functions, inverse functions, coefficient functionals}
\subjclass{30C45}
\maketitle
\section{Introduction}
Let ${\mathcal{A}}$ denote the class of analytic functions $f$ in the unit
disk $\mathbb{D}=\{ z\in\mathbb{C}: |z|<1 \}$ normalized by $
f(0)=0=f^{\prime }(0)-1$. Then for $z\in\mathbb{D}$, $f\in {\mathcal{A}}$
has the following representation
\begin{equation*} \label{A;01}
f(z) = z+ \sum_{n=2}^{\infty}a_n z^n.
\end{equation*}
\smallskip
Let ${\mathcal{S}}$ denote the subclass of all univalent (i.e., one-to-one)
functions in ${\mathcal{A}}$.
\smallskip
The most significant subclasses of $\mathcal{S}$ are the class $\mathcal{S}^*$ of starlike functions and the class $\mathcal{C}$ of convex functions, with $\mathcal{C}$
defined as follows.
$f\in \mathcal{C}$ if, and only if, $f\in \mathcal{A}$ and
\begin{equation*}
\Re \left(1+\dfrac{zf^{\prime \prime }(z)}{f^{\prime }(z)}\right)>0,\quad
z\in \mathbb{D}.
\end{equation*}
\smallskip
A classical result using the Carath\'eodory functions shows that if $f\in
\mathcal{C}$, then $|a_n|\le 1$ for $n\ge2$; see e.g.~\cite{Pom}.
\section{Inverse coefficients}
Each $f\in\mathcal{S}$ possesses an inverse function $f^{-1}$ given by
\begin{equation*}
f^{-1}(w)=w+\sum_{n=2}^{\infty}A_{n}w^n,
\end{equation*}
valid in some disk $|w|<r_0(f)$.
Although it took from 1916 to 1985 to solve the Bieberbach conjecture, in
1923 L\"{o}wner found the sharp upper bound for $|A_n|$ for all $n\ge2$, by
showing that
\begin{equation*}
|A_n|\le \dfrac{(2n)!}{n!\,(n+1)!}.
\end{equation*}
Since the Koebe function $k(z)$ is also starlike, L\"{o}wner's bound is also
sharp for the class $\mathcal{S}^*$.
However since the Koebe function does not belong to $\mathcal{C}$, other
sharp bounds must hold for $|A_n|$ when $f\in\mathcal{C}$, and a complete
solution to this problem appears difficult to find.
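L\"{o}wner's bound, and its sharpness for the inverse of the Koebe function, can be verified by direct series reversion: the inverse coefficients of $k(z)=z/(1-z)^2$ are, up to sign, the numbers $(2n)!/(n!(n+1)!)$ (the Catalan numbers). The following Python sketch is our own computation, reverting the truncated Koebe series coefficient by coefficient.

```python
from math import comb

N = 9  # compute inverse coefficients A_2, ..., A_N

def mul(p, q):
    # Product of power series given as coefficient lists, truncated at degree N.
    r = [0] * (N + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j <= N:
                    r[i + j] += pi * qj
    return r

# Koebe function k(z) = z/(1-z)^2 = sum_{n>=1} n z^n, truncated at degree N.
a = [0] + list(range(1, N + 1))

# Solve k(g(w)) = w for g(w) = w + sum_{n>=2} A_n w^n, one coefficient at a time.
A = [0, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    g = A[:]                      # current partial inverse (A[n] still 0)
    comp = [0] * (N + 1)          # k(g(w)) truncated at degree N
    power = g[:]
    for k in range(1, N + 1):
        comp = [c + a[k] * p for c, p in zip(comp, power)]
        power = mul(power, g)
    A[n] = -comp[n]               # force the w^n coefficient of k(g(w)) to vanish

# Loewner's bound is attained: |A_n| = (2n)!/(n!(n+1)!) for the Koebe inverse.
for n in range(2, N + 1):
    assert abs(A[n]) == comb(2 * n, n) // (n + 1)
```

The computation uses only integer arithmetic, so the comparison with the closed-form bound is exact.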
The first indication that this was not a straightforward, but an interesting,
problem came in a 1979 paper of Kirwan and Schober \cite{KS}, who
showed that there exists a function in $\mathcal{C}$ such that $|A_n|>1$ for
$n\ge10$.
Then in 1982, Libera and Zlotkiewicz \cite{LZ} proved the following.
Let $f\in\mathcal{C}$ have inverse function given by
\begin{equation} \label{Inverse}
f^{-1}(w)=w+\sum_{n=2}^{\infty}A_{n}w^n;
\end{equation}
then for $2\le n\le7$, the following sharp inequality holds:
\begin{equation*}
|A_n|\le1.
\end{equation*}
Subsequently in 1984, Campschroer \cite{Camp} showed that if
$f\in\mathcal{C}$ and has inverse function given by (\ref{Inverse}), then $
|A_8|\le1,$ and the inequality is sharp.
Thus the first questions that arise are: what is the sharp bound for $|A_9|$,
and what are the sharp bounds for $|A_n|$ when $n\ge10$?
The next obvious question is to ask if there are any other invariance
properties amongst coefficient functionals in $a_n$ and $A_n$?
\smallskip
If it turns out that there are other invariant properties, then \textbf{why
is this so?}
We will see that there are instances where the sharp bound that can be found for a
functional in the $A_n$ (for instance $|A_n|$) differs from the sharp
bound for the corresponding functional in the $a_n$ (for instance $|a_n|$).
Perhaps the most natural generalisation to the class $\mathcal{C}$ of convex
functions is the class of convex functions of order $\alpha$ defined as
follows.
For $0\le\alpha<1$, denote by ${\mathcal{C}}(\alpha)$ the subclass of ${
\mathcal{C}}$ consisting of convex functions of order $\alpha$ i.e., $f \in {
\mathcal{C}}(\alpha)$ if, and only if, for $z\in\mathbb{D}$
\begin{equation*}
\Re \left\{1+ \frac{zf^{\prime \prime }(z)}{f^{\prime }(z)} \right\} >\alpha.
\end{equation*}
The first obvious question to consider is to look for invariance amongst the
initial coefficients, where we at once encounter non-invariance between $|a_3|$ and $|A_3|$.
Using elementary techniques and well-known tools, it is a simple exercise to
prove the following inequalities, all of which are sharp.
If $f\in \mathcal{C}(\alpha),$ then\newline
\begin{equation*}
|a_{2}|, |A_{2}| \leq 1-\alpha, \quad \text{and} \quad |a_3|\le \dfrac{
(3-2\alpha)(1-\alpha)}{3}.
\end{equation*}
\begin{equation*}
|A_{3}| \leq \left\{
\begin{array}{ll}
\dfrac{(3-4\alpha)(1-\alpha)}{3}, & \hbox{$0\le\alpha\leq\dfrac{1}{2}$,} \\
& \\
\dfrac{1-\alpha}{3}, & \hbox{$\dfrac{1}{2}\leq\alpha<1$.}
\end{array}
\right.
\end{equation*}
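The values $|a_2|=1-\alpha$ and $|a_3|=(3-2\alpha)(1-\alpha)/3$ are attained by the standard extremal function $f$ with $f'(z)=(1-z)^{-2(1-\alpha)}$; identifying this as the extremal function is our own assumption, not stated above. The following Python sketch expands $f'$ and checks the first two coefficients for $\alpha=0.3$.

```python
ALPHA = 0.3  # any 0 <= alpha < 1 works here

# Taylor coefficients of f'(z) = (1-z)^(-2(1-alpha)) via the generalized
# binomial recurrence c_k = c_{k-1} * (2(1-alpha) + k - 1) / k.
beta = 2 * (1 - ALPHA)
c = [1.0]
for k in range(1, 3):
    c.append(c[-1] * (beta + k - 1) / k)

# Term-by-term integration gives a_n = c_{n-1} / n.
a2 = c[1] / 2
a3 = c[2] / 3

# Compare with the stated sharp bounds for |a_2| and |a_3|.
assert abs(a2 - (1 - ALPHA)) < 1e-12
assert abs(a3 - (3 - 2 * ALPHA) * (1 - ALPHA) / 3) < 1e-12
```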
In 2016, Thomas and Verma \cite{TV} demonstrated some invariance properties
amongst the class of strongly convex functions defined as follows.
For $0<\beta\le1$, denote by ${\mathcal{C}}^{\beta}$ the subclass of ${
\mathcal{C}}$ consisting of strongly convex functions i.e., $f \in {\mathcal{
C}}^{\beta}$ if, and only if, for $z\in\mathbb{D}$
\begin{equation*} \label{def1}
\left|\arg \left(1+ \frac{zf^{\prime \prime }(z)}{f^{\prime }(z)} \right)\right|\le
\dfrac{\beta \pi}{2}.
\end{equation*}
We give first some invariance properties amongst the initial coefficients
proved for $f\in \mathcal{C}^{\beta}$ in \cite{TV}.
Let $f\in \mathcal{C}^{\beta}$; then\newline
\begin{equation*}
|a_{2}|, |A_{2}| \leq \beta, \quad \quad \quad |a_{3}|,|A_{3}| \leq \left\{
\begin{array}{ll}
\dfrac{\beta}{3}, & \hbox{$0<\beta\leq\dfrac{1}{3}$,} \\
& \\
{\beta}^2, & \hbox{$\dfrac{1}{3}\leq\beta\leq1$.}
\end{array}
\right.
\end{equation*}
\newline
\begin{equation*}
|a_{4}|,|A_{4}| \leq \left\{
\begin{array}{ll}
\dfrac{\beta}{6}, & \hbox{$0<\beta\leq\sqrt{\dfrac{2}{17}}$,} \\
& \\
\dfrac{\beta}{18}(1+17{\beta}^2), & \hbox{$\sqrt{\dfrac{2}{17}}\leq\beta
\leq1$.}
\end{array}
\right.
\end{equation*}
All the inequalities are sharp.
If $f\in \mathcal{C}^{\beta}$, then for any complex number $\nu$,\newline
\begin{equation*}
\left|a_{3}-\nu{a_{2}^{2}}\right|, \left|A_{3}-\nu{A_{2}^{2}}
\right|\leq\max\left\{\frac{\beta}{3}, \ {\beta}^2\left|1-\nu\right|\right\}.
\end{equation*}
Both inequalities are sharp.
For $f\in\mathcal{A}$, the logarithmic coefficients $\gamma_n$ of $f(z)$ are
defined by
\begin{equation*}
\log \dfrac{f(z)}{z}=2\sum_{n=1}^{\infty}\gamma_nz^n.
\end{equation*}
They play a central role in the theory of univalent functions, and formed
the basis of de Branges's proof of the Bieberbach conjecture.
We make the following definition.
Let $f\in \mathcal{C}^{\beta}$, and $\log \dfrac{f^{-1}(\omega)}{\omega}$ be
given by
\begin{equation*}
\log \dfrac{f^{-1}(\omega)}{\omega}=2\sum_{n=1}^{\infty} c_{n}{\omega}^{n}.
\end{equation*}
It was also shown in \cite{TV} that if $f\in \mathcal{C}^{\beta}$, then
\begin{equation*}
|\gamma_1|, |{c}_{1}| \leq \dfrac{\beta}{2}, \quad \quad \quad |\gamma_2|, |{
c}_{2}| \leq \left\{
\begin{array}{ll}
\dfrac{\beta}{6}, & \hbox{$0<\beta\leq\dfrac{2}{3}$,} \\
& \\
\dfrac{{\beta}^2}{4}, & \hbox{$\dfrac{2}{3}\leq\beta\leq1$,}
\end{array}
\right.
\end{equation*}
\newline
\begin{equation*}
|\gamma_3|, |{c}_{3}| \leq \left\{
\begin{array}{ll}
\dfrac{\beta}{12}, & \hbox{$0<\beta\leq\sqrt{\dfrac{2}{5}}$,} \\
& \\
\dfrac{\beta}{36}(1+5{\beta}^2), & \hbox{$\sqrt{\dfrac{2}{5}}\le
\beta\leq1$.}
\end{array}
\right.
\end{equation*}
All the inequalities are sharp.
Clearly, the more complicated the coefficient functional, the more difficult
the analysis, and the harder it is to find invariance.
We consider next the second Hankel determinants $H(2,2)(f)$ and $
H(2,2)(f^{-1})$, defined by
\begin{equation*}
H(2,2)(f)=a_2 a_4-a_3^2,
\end{equation*}
and
\begin{equation*}
H(2,2)(f^{-1})=A_2 A_4-A_3^2.
\end{equation*}
It was further shown by Thomas and Verma \cite{TV} that if $f\in\mathcal{C}
^{\beta}$, then
\begin{equation*}
|H(2,2)(f)|,|H(2,2)(f^{-1})| \le
\begin{cases}
\dfrac{\beta^2}{9}, & 0<\beta\leq \dfrac{1}{3}, \\
& \\
\dfrac{\beta(1+\beta)(1+17\beta)}{72(3+\beta)}, & \dfrac{1}{3}\le \beta\le 1.
\end{cases}
\end{equation*}
\noindent They claimed that all the inequalities are sharp.
Although the proofs of the positive results are correct, the claim that the
second inequality is sharp is false.
Note however that when $\beta=1,$ i.e., $f\in\mathcal{C}$, $
|H(2,2)(f)|,|H(2,2)(f^{-1})|\le\dfrac{1}{8},$ and these inequalities are
sharp.
Although the methods used in the proofs of the above invariance are correct,
they are not strong enough to give sharp bounds for the second inequality.
The following correction was subsequently given by Lecko, Sim and Thomas
\cite{LST}.
If $f\in\mathcal{C}^{\beta}$, then the following sharp inequalities hold.
\begin{equation*}
|H(2,2)(f)|,|H(2,2)(f^{-1})| \le
\begin{cases}
\dfrac{\beta^2}{9}, & 0<\beta\leq \dfrac{1}{3}, \\
& \\
\dfrac{\beta^2(1+\beta)(17+\beta)}{72(2+3\beta-\beta^2)}, & \dfrac{1}{3}\le \beta\le 1.
\end{cases}
\end{equation*}
Note again that when $\beta=1,$ i.e., $f\in\mathcal{C}$, $
|H(2,2)(f)|,|H(2,2)(f^{-1})|\le\dfrac{1}{8}.$
The problem of finding sharp bounds for the difference of successive
coefficients $|a_{n+1}|-|a_n|$ for functions in $\mathcal{S}$ represents one
of the most difficult areas of study in univalent function theory; the
only sharp bound known so far is when $n=2$, where Duren proved the rather
curious sharp inequality
\begin{equation*}
-1 \leq |a_3| - |a_2| \leq \frac{3}{4} + e^{-\lambda_0}(2e^{-\lambda_0}-1) =
1.029\cdots,
\end{equation*}
where $\lambda_0$ is the unique value of $\lambda$ in $0 < \lambda <1$,
satisfying the equation $4\lambda = e^{\lambda}$.
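The constant $\lambda_0$ and the resulting bound are easy to reproduce numerically; the following Python sketch is our own check.

```python
import math

# Bisection for the unique lambda in (0, 1) with 4*lambda = exp(lambda);
# g(l) = 4l - exp(l) is increasing on (0, 1) with g(0) < 0 < g(1).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if 4 * mid - math.exp(mid) < 0:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

# Duren's upper bound 3/4 + exp(-lambda_0)(2 exp(-lambda_0) - 1).
bound = 0.75 + math.exp(-lam) * (2 * math.exp(-lam) - 1)
assert abs(lam - 0.3574) < 1e-3
assert abs(bound - 1.029) < 1e-3
```

This confirms the stated value $1.029\cdots$ of the upper bound.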
Although Leung found the complete solution when $f$ is starlike by showing
that $||a_{n+1}|-|a_n||\le1$, the problem for convex functions remains
mostly open.
In 2016, Ming and Sugawa \cite{Li} made the first advance in this problem by
finding the following sharp bounds when $n\ge2$:
\begin{equation*}
|a_{n+1}|-|a_n|\le \dfrac{1}{n+1},
\end{equation*}
and also proved the sharp lower bounds
\begin{equation*}
|a_{3}|-|a_2|\ge -\dfrac{1}{2},\quad \text{and}\quad |a_{4}|-|a_3|\ge -
\dfrac{1}{3}.
\end{equation*}
Thus in particular we have the sharp bounds
\begin{equation*}
-\dfrac{1}{2}\le|a_{3}|-|a_2|\le \dfrac{1}{3} \quad \text{and}\quad -\dfrac{1
}{3}\le|a_{4}|-|a_3|\le \dfrac{1}{4}.
\end{equation*}
Sim and Thomas in 2020 \cite{ST} proved that the following inequalities hold for
the inverse coefficients, thus demonstrating another example of invariance:
\begin{equation*}
-\dfrac{1}{2}\le|A_{3}|-|A_2|\le \dfrac{1}{3},
\end{equation*}
\noindent and have recently shown (the proof of which requires much more
complicated analysis) that the following sharp inequalities hold:
\begin{equation*}
-\dfrac{1}{3}\le|A_{4}|-|A_3|\le \dfrac{1}{4}.
\end{equation*}
Any advances when $n\ge 4$ would require deeper methods of proof.
It is clear from the above that the more complicated the functional and the
class of convex functions considered, the less likely it is that there will
be invariance. Also, functionals containing coefficients $a_n$ and $A_n$ for $
n\ge3$ will similarly be difficult to deal with.
Sim and Thomas (\cite{ST2}, Proposition 1) have recently given a general
lemma concerning functions of positive real part, which provides a tool
enabling coefficient differences to be found when $n=2$, which can be
applied to many subclasses of univalent functions. This can also be used to
consider coefficient differences of the inverse function. However, the lemma
only applies when $n=2$.
In recent years it has become fashionable to consider subclasses of convex
(and starlike) functions where the function $p(z)$ is specified, often
having a range with some interesting geometrical property.
Examples of recently discussed subclasses of convex functions, where some
initial invariance properties have been found are as follows.
A most natural class of convex functions related to the exponential function
is the class $\mathcal{C}_{E}$ defined as follows:
\begin{equation*}
\mathcal{C}_{E}=\left\{ f\in \mathcal{A}:1+\frac{zf^{\prime \prime }(z)}{
f^{\prime }(z)}\prec e^{z}\right\}.
\end{equation*}
Some initial coefficients results for the class $\mathcal{C}_{E}$ were given
by Zaprawa \cite{Zap}, and similar analysis shows that the following
invariance properties hold.
\begin{equation*}
|a_{2}|, |A_2|\le \dfrac{1}{2} \quad \text{and}\quad |a_{3}|, |A_3|\le
\dfrac{1}{4}\quad \text{and}\quad |a_{4}|, |A_4|\le \dfrac{17}{144}.
\end{equation*}
All the inequalities are sharp.
Using well-known methods it is also possible to prove the following
inequalities, both of which are sharp.
\begin{equation*}
|a_2 a_3-a_4|, |A_{2} A_{3}-A_4|\le \dfrac{1}{12}.
\end{equation*}
Similarly the following sharp invariances hold for the second Hankel
determinants for $f\in\mathcal{C}_{E}$ \cite{Zap},
\begin{equation*}
|H(2,2)(f)|, |H(2,2)(f^{-1})|\le \dfrac{73}{2592}.
\end{equation*}
A class $\mathcal{C}_{SG}$ of convex functions which exhibits some
interesting invariance properties is related to a modified sigmoid function
and is defined by
\begin{equation*}
\mathcal{C}_{SG}=\left\{ f\in \mathcal{A}:1+\frac{zf^{\prime \prime }(z)}{
f^{\prime }(z)}\prec \frac{2}{1+e^{-z}}\right\} .
\end{equation*}
Here the function $2/(1+e^{-z})$ is a modified sigmoid function which maps $
\mathbb{D}$ onto the domain $\Delta _{SG}=\left\{ w\in \mathbb{C}:\left\vert \log \left( w/\left( 2-w\right) \right) \right\vert <1\right\}.$
The class $\mathcal{C}_{SG}$ was first discussed in \cite{DKT} where some
invariance properties were found. In particular the following invariance
properties hold.
\begin{equation*}
|a_{2}|, |A_2|\le \dfrac{1}{4} \quad \text{and}\quad |a_{3}|, |A_3|\le
\dfrac{1}{12}\quad \text{and}\quad |a_{4}|, |A_4|\le \dfrac{1}{24}.
\end{equation*}
All the inequalities are sharp.
Further invariance properties for functions in $\mathcal{C}_{SG}$ also hold
(see \cite{DKT1}), where the following inequality for $|a_2 a_3-a_4|$ was
proved; the inequality for $|A_2 A_3-A_4|$ follows using similar
methods:
\begin{equation*}
|a_{2}a_3-a_4|, |A_2 A_3-A_4|\le \dfrac{1}{24}.
\end{equation*}
\noindent Both inequalities are sharp.
As already mentioned, finding sharp bounds for the differences of
coefficients can present difficulties and next we give an example of proved
non-invariance for $\mathcal{C}_{SG}$, noting that \textbf{all the
inequalities are sharp}.
\begin{equation*}
-\dfrac{5}{24}\le |a_{3}|-|a_2|\le \dfrac{1}{12},\quad \text{and} \quad -
\dfrac{1}{4}\le |A_{3}|-|A_2|\le \dfrac{1}{12}.
\end{equation*}
As mentioned above, other choices of $p(z)$ have been made, primarily to
define some interesting geometric property of the range of $p(z)$; for
example:
\smallskip
(i) \ $p(z)=1+\dfrac{4}{3}z+\dfrac{2}{3}z^2$ gives a cardioid domain.
\smallskip
(ii) $p(z)=1+\dfrac{4}{5}z+\dfrac{1}{5}z^4$ gives a 3-leaf petal shaped
domain.
It is very likely that similar invariances hold for some functionals in
these classes.
Finally we note a recent interesting example of non-invariance.
It was shown by Sim, Zaprawa and Thomas in 2021 \cite{SZT} that for $f\in
\mathcal{C}(\alpha),$
\begin{equation*}
|H(2,2)(f)|\le \dfrac{(1-\alpha)^2(6+5\alpha)}{48(1+\alpha)},
\end{equation*}
\noindent and that this inequality is sharp, thus solving a long-standing
problem.
\smallskip
\noindent In the same paper it was shown that for $f\in \mathcal{C}(\alpha),$
\begin{equation*}
|H(2,2)(f^{-1})| \le
\begin{cases}
\dfrac{1}{96}(12-28\alpha+19\alpha^2), & \alpha\in \left[0,\dfrac{2}{5}\right], \\
& \\
\dfrac{1}{9}{(1-\alpha)^2}, & \alpha\in \left[\dfrac{2}{5},\dfrac{4}{5}\right], \\
& \\
\dfrac{\alpha(1-\alpha)^2(19\alpha-8)}{48(1+\alpha)(2\alpha-1)}, & \alpha\in \left[\dfrac{4}{5},1\right].
\end{cases}
\end{equation*}
\smallskip
\noindent and that all the inequalities are sharp.
\noindent Thus unless $\alpha=0$, we have non-invariance.
We note here that when $f\in \mathcal{C}^{\beta}$, there is invariance
between $|H(2,2)(f)|$ and $|H(2,2)(f^{-1})|$ for all $\beta\in (0,1]$, which
is curious since the definition of the class $\mathcal{C}^{\beta}$ involves
the power $p(z)^{\beta}$.
Checking for invariances is a matter of applying available well-known tools
to functionals.
\smallskip
But the \textbf{real problem} is to discover \textbf{WHY} invariances occur
in the classes and functionals considered.
\smallskip
The answer is probably that invariances are both class and functional
dependent, and that there is no simple rule.
Recall that in 2016, Ming and Sugawa \cite{Li} proved that if $f\in\mathcal{C
}$, then when $n\ge2$
\begin{equation*}
|a_{n+1}|-|a_n|\le \dfrac{1}{n+1},
\end{equation*}
and that the bounds are sharp.
In the same paper Ming and Sugawa \cite{Li} further proved that when $n\ge4$
\begin{equation*}
-\dfrac{1}{n}<|a_{n+1}|-|a_n|\le \dfrac{1}{n+1},
\end{equation*}
\smallskip
\noindent thus
\begin{equation*}
||a_{n+1}|-|a_n||=\mathcal{O} \big(\dfrac{1}{n}\big), \ \ \text{as}\ n\to
\infty.
\end{equation*}
\smallskip
Is it true that if $f\in \mathcal{C}$, then
\begin{equation*}
||A_{n+1}|-|A_n||=\mathcal{O} \big(\dfrac{1}{n}\big), \ \ \text{as}\ n\to
\infty?
\end{equation*}
\smallskip
Next note that Kowalczyk, Lecko and Sim \cite{KLS} have shown that if $f\in
\mathcal{C}$, then the third Hankel determinant
\begin{equation*}
|H_{3,1}(f)| =|a_3(a_2a_4 - a_3^2) - a_4(a_4 - a_2a_3) + a_5(a_3 -
a_2^2)|\le \dfrac{4}{135},
\end{equation*}
\noindent and that this inequality is sharp.
\smallskip
Is it true that the following sharp inequality holds?
\begin{equation*}
|H_{3,1}(f^{-1})| =|A_3(A_2A_4 - A_3^2) - A_4(A_4 - A_2A_3) + A_5(A_3 -
A_2^2)|\le \dfrac{4}{135}.
\end{equation*}
\end{document}
\begin{document}
\author{Bingyu Xia}
\address{The Ohio State University, Department of Mathematics, 231 W 18th Avenue, Columbus, OH 43210-1174, USA}
\email{[email protected]}
\keywords{Bridgeland stability conditions, Derived categories, Moduli Spaces, Twisted cubics}
\subjclass[2010]{14F05 (Primary); 14H45, 14J60, 18E30 (Secondary)}
\title{Hilbert scheme of twisted cubics as simple wall-crossing}
\begin{abstract}
We study the Hilbert scheme of twisted cubics in the three-dimensional projective space by using Bridgeland stability conditions. We use wall-crossing techniques to describe its geometric structure and singularities, which reproves the classical result of Piene and Schlessinger.
\end{abstract}
\maketitle
\section{Introduction}
In this paper, we study the birational transformations induced by simple wall-crossings in the space $\mathrm{Stab}(\mathbb{P}^{3})$ of Bridgeland stability conditions on $\mathbb{P}^{3}$ and show how they naturally lead to a new proof of the main result of \cite{PS85,EPS87}.
The notion of stability condition was introduced by Bridgeland in \cite{Bri07}.
It provides a new viewpoint on the study of moduli spaces of sheaves and complexes.
Simple wall-crossings are the most well-behaved wall-crossings in the space of stability conditions.
They are controlled by the extensions of a family of pairs of stable destabilizing objects: they contract a locus of extensions in the moduli space on one side of the wall, and produce a new locus of reverse extensions in the moduli space on the other side of the wall. The precise definition of a simple wall-crossing is given in Definition \ref{lihai}.
In some examples, the expectation is that a simple wall-crossing will blow up the old moduli space and add a new component that intersects the blow-up transversely along the exceptional locus.
In this paper, we will prove this is indeed the case for the Hilbert scheme of twisted cubics. The main theorem is the following:
\noindent\textbf{Main Theorem.} (See also Theorem \ref{ritian}, Theorem \ref{zhu1} and Theorem \ref{zhu2}) \textit{There is a path $\gamma$ in $\mathrm{Stab}(\mathbb{P}^{3})$ that crosses three walls and four chambers for a fixed Chern character $v=\mathrm{ch}(\mathcal{I}_{C})$, where $\mathcal{I}_{C}$ is the ideal sheaf of a twisted cubic $C$. If we list the moduli space of semistable objects in each chamber with respect to the path $\gamma$, we have:}
\textit{$(1)$ The empty space $\emptyset$;}
\textit{$(2)$ A smooth projective integral variety $\mathbf{M}_{1}$ of dimension $12$;}
\textit{$(3)$ A projective variety $\mathbf{M}_{2}$ with two irreducible components $\mathbf{B}$ and $\mathbf{P}$, where $\mathbf{P}$ is a $\mathbb{P}^{9}$-bundle over $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$ and $\mathbf{B}$ is the blow-up of $\mathbf{M}_{1}$ along a $5$-dimensional smooth center. The two components of $\mathbf{M}_{2}$ intersect transversally along the exceptional divisor of $\mathbf{B}$;}
\textit{$(4)$ The Hilbert scheme of twisted cubics $\mathbf{M}_{3}$. $\mathbf{M}_{3}$ is a blow-up of $\mathbf{M}_{2}$ along a $5$-dimensional smooth center contained in $\mathbf{P}\setminus\mathbf{B}$.}
Among the above three wall-crossings, the second and the third are simple. We will study them in great detail in Sections $4$ and $5$.
In \cite{SchB15}, Schmidt also studied certain wall-crossings on $\mathbb{P}^{3}$. We follow his construction of the path $\gamma$ in the Main Theorem. We will also follow his construction of the moduli space $\mathbf{M}_{1}$ by using quiver representations in Section $3$. For the second wall-crossing and the third wall-crossing, Schmidt reinterpreted the main result of \cite{PS85,EPS87} in the new setting of Bridgeland stability. The method of Piene and Schlessinger to study the geometric structure of the Hilbert scheme of twisted cubics is based on the deformation theory of ideals. They first used a comparison theorem to show that the Hilbert scheme of twisted cubics is isomorphic to the moduli space of ideals of twisted cubics, and then used the $\mathbf{P}\mathrm{GL}(4)$-action to reduce tangent space computations to some special ideals. Finally, they exhibited a basis of deformations of these special ideals and computed the versal deformations.
We will use a different method to directly study the second wall-crossing and the third wall-crossing without referring to \cite{PS85,EPS87}. In Section $4$, we first identify the locus $H$ in $\mathbf{M}_{1}$ that is going to be modified after the second wall-crossing. This is Proposition \ref{zhongyao} $(1)$. Then we construct two embeddings of the irreducible components into $\mathbf{M}_{2}$: one is from the projective bundle parametrizing reverse extensions of the family of pairs of destabilizing objects, the other is from the blow-up of $\mathbf{M}_{1}$ along $H$. This is the content of Proposition \ref{zhongyao} $(2)$ and Proposition \ref{ruyao} $(2)$. By definition of a simple wall-crossing, the union of the images of the two embeddings is $\mathbf{M}_{2}$, so $\mathbf{M}_{2}$ only has two irreducible components. With the help of some $\mathrm{Ext}$ computations, we show that the intersection of the two images is the exceptional divisor of the blow-up, and the two embeddings are isomorphisms outside it. This is Remark \ref{fadian}, Remark \ref{mafuyu} $(1)$ and Proposition \ref{ruyao} $(1)$. Finally we study the deformation theory of complexes on the intersection and prove that the two irreducible components of $\mathbf{M}_{2}$ intersect transversally. This is Proposition \ref{gan}. In Section $5$, again we first identify the locus $H'$ that is going to be modified after the third wall-crossing and find that it is solely contained in one irreducible component of $\mathbf{M}_{2}$. Then we construct an isomorphism between the blow-up of $\mathbf{M}_{2}$ along $H'$ and $\mathbf{M}_{3}$, where the latter is the Hilbert scheme of twisted cubics. This is Theorem \ref{haoxiang}. As a consequence, this reproves the main result of \cite{PS85, EPS87} on the geometric structures of the Hilbert scheme of twisted cubics by using stability and wall-crossing techniques. The advantage of this approach is that we avoid using the equations of special ideals.
This sometimes makes our approach easier to generalize, especially when the equations are complicated or unavailable.
The Hilbert scheme of twisted cubics is a first nontrivial example where our wall-crossing method applies, and we hope it could be applied in more general cases. Some related works in which our method may apply are: \cite{GHS16} about the moduli of elliptic quartics in $\mathbb{P}^{3}$, \cite{LLMS16} about the moduli of twisted cubics in a cubic fourfold and \cite{Tra16} about the moduli space of certain point-like objects on a surface.
\noindent\textbf{Notations.}\begin{align}
\mathrm{Coh}(\mathbb{P}^{3}) &\quad \text{abelian category of coherent sheaves on $\mathbb{P}^{3}$},\nonumber\\
\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3}) &\quad \text{bounded derived category of $\mathrm{Coh}(\mathbb{P}^{3})$},\nonumber\\
\mathcal{T}_{X} &\quad \text{tangent bundle of a smooth projective variety $X$},\nonumber\\
T_{X,x} &\quad \text{tangent space of $X$ at a point $x$},\nonumber\\
T_{f,x} &\quad \text{tangent map $T_{X,x}\longrightarrow T_{Z,f(x)}$ of a morphism $f:X\longrightarrow Z$},\nonumber\\
\mathcal{N}_{Y/X} &\quad \text{normal bundle of a smooth subvariety $Y$ in $X$},\nonumber\\
N_{Y/X,y} &\quad \text{normal space of $Y$ in $X$ at a point $y$},\nonumber\\
\mathscr{E}xt_{f}^{1}(\mathcal{F},\mathcal{G}) &\quad \text{relative $\mathrm{Ext}^{1}$ sheaf of $\mathcal{F}$ and $\mathcal{G}$ with respect to a morphism $f$},\nonumber\\
\mathscr{T}or^{1}(\mathcal{F},\mathcal{G}) &\quad \text{$\mathrm{Tor}^{1}$ sheaf of $\mathcal{F}$ and $\mathcal{G}$},\nonumber\\
\mathrm{ch}(E) &\quad \text{Chern character of an object $E\in\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$},\nonumber\\
c_{i}(E) &\quad \text{$i$-th Chern class of an object $E\in\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$}.\nonumber
\end{align}
\noindent\textbf{Acknowledgements.} I would like to thank David Anderson, Yinbang Lin, Giulia Sacc\`{a}, Benjamin Schmidt and Xiaolei Zhao for useful discussions and suggestions. I am greatly indebted to my advisor Emanuele Macr\`{i}, who introduced me to this topic and gave me advice. I express my gratitude to the mathematics department of Ohio State University for helping me handle the situation after my advisor moved. I would also like to thank Northeastern University, where most of this paper was written, for its hospitality. The research is partially supported by NSF grants DMS-1302730 and DMS-1523496 (PI Emanuele Macr\`{i}) and a Graduate Special Assignment of the mathematics department of Ohio State University.
\section{A Brief Review on Bridgeland Stability Conditions}
In this section, we review how to construct Bridgeland stability conditions on $\mathbb{P}^{3}$ and define the notion of a simple wall-crossing.
\begin{defin}
A stability condition $(Z,\mathcal{P})$ on $\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$ consists of a group homomorphism $Z:K(\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3}))\longrightarrow\mathbb{C}$ called central charge, and full additive subcategories $\mathcal{P}(\phi)\subset\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$ for each $\phi\in\mathbb{R}$, satisfying the following axioms:
$(1)$ if $E\in\mathcal{P}(\phi)$ then $Z(E)=m(E)\mathrm{exp}(i\pi\phi)$ for some $m(E)\in\mathbb{R}_{>0}$,
$(2)$ for all $\phi\in\mathbb{R}$, $\mathcal{P}(\phi+1)=\mathcal{P}(\phi)[1]$,
$(3)$ if $\phi_{1}>\phi_{2}$ and $A_{j}\in\mathcal{P}(\phi_{j})$, then $\mathrm{Hom}_{\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})}(A_{1}, A_{2})=0$,
$(4)$ for each nonzero object $E\in\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$ there are a finite sequence of real numbers
\begin{equation*}
\phi_{1}>\phi_{2}>\cdots>\phi_{n}
\end{equation*}
and a collection of triangles
\begin{displaymath}
\xymatrix {0=E_{0} \ar[r] &E_{1} \ar[d] \ar[r] &E_{2} \ar[d] \ar[r] &\cdots \ar[r] &E_{n-1} \ar[r] &E_{n}=E,\ar[d]\\
& \ar@{.>}[ul] A_{1} & \ar@{.>}[ul] A_{2} & & & \ar@{.>}[ul] A_{n}}
\end{displaymath}
with $A_{j}\in\mathcal{P}(\phi_{j})$ for all $j$.
\end{defin}
If we denote the set of all locally-finite stability conditions by $\mathrm{Stab}(\mathbb{P}^{3})$, then [Bri07, Theorem 1.2] tells us that there is a natural topology on $\mathrm{Stab}(\mathbb{P}^{3})$ making it a complex manifold.
By [Bri07, Proposition $5.3$], giving a stability condition on the bounded derived category of $\mathbb{P}^{3}$ is equivalent to giving a stability function on a heart of a bounded $t$-structure satisfying the Harder-Narasimhan property. [Tod09, Lemma 2.7] shows this is not possible for the standard heart $\mathrm{Coh}(\mathbb{P}^{3})$. In \cite{BMT14}, stability conditions are constructed on a so-called double tilt $\mathscr{A}^{\alpha,\beta}$ of the standard heart.
We identify the cohomology $\mathrm{H}^{*}(\mathbb{P}^{3},\mathbb{Q})$ with $\mathbb{Q}^{4}$ with respect to the obvious choice of basis. Let $(\alpha,\beta)\in\mathbb{R}_{>0}\times\mathbb{R}$. We define the twisted slope function for $E\in\mathrm{Coh}(\mathbb{P}^{3})$ to be
\begin{equation}
\mu_{\beta}\left(E\right)=\frac{c_{1}\left(E\right)-\beta c_{0}\left(E\right)}{c_{0}\left(E\right)}\nonumber
\end{equation} if $c_{0}(E)\neq0$, and otherwise we let $\mu_{\beta}=+\infty$. Then we set
\begin{align}
\mathcal{T}_{\beta}&=\{E\in\mathrm{Coh}(\mathbb{P}^{3}):\text{any quotient sheaf $G$ of $E$ satisfies }\mu_{\beta}\left(G\right)>0\}\nonumber\\
\mathcal{F}_{\beta}&=\{E\in\mathrm{Coh}(\mathbb{P}^{3}):\text{any subsheaf $F$ of $E$ satisfies }\mu_{\beta}\left(F\right)\leqslant0\}.\nonumber
\end{align}
$(\mathcal{F}_{\beta},\mathcal{T}_{\beta})$ forms a torsion pair in $\mathrm{Coh}(\mathbb{P}^{3})$, because Harder-Narasimhan filtrations exist for the twisted slope $\mu_{\beta}$.
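For instance, for a line bundle $\mathcal{O}(k)$ we have $c_{0}=1$ and $c_{1}=k$, so
\begin{equation*}
\mu_{\beta}\left(\mathcal{O}(k)\right)=k-\beta.
\end{equation*}
Since line bundles are slope-stable, taking $\beta_{0}=-\frac{5}{2}$ as in the proof of Theorem \ref{ritian} gives $\mu_{\beta_{0}}(\mathcal{O}(-2))=\frac{1}{2}>0$ and $\mu_{\beta_{0}}(\mathcal{O}(-3))=-\frac{1}{2}\leqslant0$, so $\mathcal{O}(-2)\in\mathcal{T}_{\beta_{0}}$ and $\mathcal{O}(-3)\in\mathcal{F}_{\beta_{0}}$.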
\begin{defin}
Let $\mathrm{Coh}^{\beta}(\mathbb{P}^{3})\subset\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$ be the extension-closure $\langle\mathcal{T}_{\beta}, \mathcal{F}_{\beta}[1]\rangle$. We define the following two functions on $\mathrm{Coh}^{\beta}(\mathbb{P}^{3})$:
\begin{align}
Z_{\alpha,\beta}&=-\left(\mathrm{ch}_{2}-\beta\mathrm{ch}_{1}+\left(\frac{\beta^{2}}{2}-\frac{\alpha^{2}}{2}\right)\mathrm{ch}_{0}\right)+i\left(\mathrm{ch}_{1}-\beta\mathrm{ch}_{0}\right),\nonumber\\
\nu_{\alpha,\beta}&=-\frac{\mathrm{Re}\left(Z_{{\alpha,\beta}}\right)}{\mathrm{Im}\left(Z_{{\alpha,\beta}}\right)}\nonumber
\end{align}
if $\mathrm{Im}(Z_{\alpha,\beta})\neq0$, and we let $\nu_{\alpha,\beta}=+\infty$ otherwise. An object $E\in\mathrm{Coh}^{\beta}(\mathbb{P}^{3})$ is called $\nu_{\alpha,\beta}$-(semi)stable if for all nontrivial subobjects $F$ of $E$, we have $\nu_{\alpha,\beta}(F)<(\leqslant)\nu_{\alpha,\beta}(E/F)$.
\end{defin}
An important inequality introduced in \cite{BMT14} and proved in \cite{Mac14} for $\nu_{\alpha,\beta}$-semistable objects is the following:
\begin{theorem}(Generalized Bogomolov-Gieseker inequality) For any $\nu_{\alpha,\beta}$-semistable object $E\in\mathrm{Coh}^{\beta}(\mathbb{P}^{3})$ satisfying $\nu_{\alpha,\beta}(E)=0$, we have the following inequality
\begin{equation}
\mathrm{ch}_{3}\left(E\right)-\beta\mathrm{ch}_{2}\left(E\right)+\frac{\beta^{2}}{2}\mathrm{ch}_{1}\left(E\right)-\frac{\beta^{3}}{6}\mathrm{ch}_{0}\left(E\right)\leqslant\frac{\alpha^{2}}{6}\left(\mathrm{ch}_{1}\left(E\right)-\beta\mathrm{ch}_{0}\left(E\right)\right).\nonumber
\end{equation}
\label{GBG}
\end{theorem}
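For instance, the inequality is an equality already for line bundles: for $E=\mathcal{O}(k)$ we have $\mathrm{ch}(E)=\left(1,k,\frac{k^{2}}{2},\frac{k^{3}}{6}\right)$, while $\nu_{\alpha,\beta}(\mathcal{O}(k))=0$ reads $\frac{(k-\beta)^{2}}{2}=\frac{\alpha^{2}}{2}$, i.e. $k-\beta=\alpha$ (the sign is positive because $\mathcal{O}(k)\in\mathrm{Coh}^{\beta}(\mathbb{P}^{3})$ forces $k>\beta$). Hence
\begin{equation*}
\mathrm{ch}_{3}\left(E\right)-\beta\mathrm{ch}_{2}\left(E\right)+\frac{\beta^{2}}{2}\mathrm{ch}_{1}\left(E\right)-\frac{\beta^{3}}{6}\mathrm{ch}_{0}\left(E\right)=\frac{\left(k-\beta\right)^{3}}{6}=\frac{\alpha^{2}}{6}\left(k-\beta\right),
\end{equation*}
so equality holds in Theorem \ref{GBG}.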
On the other hand, Harder-Narasimhan filtrations also exist for the new slope function $\nu_{\alpha,\beta}$. We repeat the above construction and define
\begin{align}
\mathcal{T}'_{\alpha,\beta}&=\{E\in\mathrm{Coh}^{\beta}(\mathbb{P}^{3}):\text{any quotient object $G$ of $E$ satisfies }\nu_{\alpha, \beta}(G)>0\}\nonumber\\
\mathcal{F}'_{\alpha,\beta}&=\{E\in\mathrm{Coh}^{\beta}(\mathbb{P}^{3}):\text{any subobject $F$ of $E$ satisfies }\nu_{\alpha, \beta}(F)\leqslant0\}.\nonumber
\end{align}
Then $(\mathcal{F}'_{\alpha,\beta},\mathcal{T}'_{\alpha,\beta})$ forms a torsion pair of $\mathrm{Coh}^{\beta}(\mathbb{P}^{3})$.
\begin{defin}
Let $\mathscr{A}^{\alpha,\beta}\subset\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$ be the extension-closure $\langle\mathcal{T}'_{\alpha,\beta},\mathcal{F}'_{\alpha,\beta}[1]\rangle$. We define the following two functions on $\mathscr{A}^{\alpha,\beta}$, for $s>0$:
\begin{align*}
Z_{{\alpha,\beta},s}&=-\left(\mathrm{ch}_{3}-\beta\mathrm{ch}_{2}-\left(\left(s+\frac{1}{6}\right)\alpha^{2}-\frac{\beta^{2}}{2}\right)\mathrm{ch}_{1}-\left(\frac{\beta^{3}}{6}-\left(s+\frac{1}{6}\right)\alpha^{2}\beta\right)\mathrm{ch}_{0}\right)\\&\qquad+i\left(\mathrm{ch}_{2}-\beta\mathrm{ch}_{1}+\left(\frac{\beta^{2}}{2}-\frac{\alpha^{2}}{2}\right)\mathrm{ch}_{0}\right)\\\lambda_{{\alpha,\beta},s}&=-\frac{\mathrm{Re}\left(Z_{{\alpha,\beta},s}\right)}{\mathrm{Im}\left(Z_{{\alpha,\beta},s}\right)}
\end{align*}
if $\mathrm{Im}(Z_{{\alpha,\beta},s})\neq0$, and we let $\lambda_{\alpha,\beta,s}=+\infty$ otherwise. An object $E\in\mathscr{A}^{\alpha,\beta}$ is called $\lambda_{\alpha,\beta,s}$-(semi)stable if for all nontrivial subobjects $F$ of $E$, we have $\lambda_{\alpha,\beta,s}(F)<(\leqslant)\lambda_{\alpha,\beta,s}(E/F)$.
\end{defin}
By [BMT14, Corollary 5.2.4] and [BMS14, Lemma 8.8], Theorem \ref{GBG} implies
\begin{propo}
The pair ($\mathscr{A}^{\alpha,\beta},Z_{{\alpha,\beta},s}$) is a Bridgeland stability condition on $\mathrm{D}^{\mathrm{b}}(\mathbb{P}^{3})$ for all $(\alpha,\beta,s)\in\mathbb{R}_{>0}\times\mathbb{R}\times\mathbb{R}_{>0}$. The function $(\alpha,\beta,s)\mapsto(\mathscr{A}^{\alpha,\beta},Z_{{\alpha,\beta},s})$ is continuous.
\end{propo}
Once the existence problem is solved, we want to study the moduli space $M_{\lambda_{\alpha,\beta,s}}(v)$ of $\lambda_{\alpha,\beta,s}$-semistable objects $E\in\mathscr{A}^{\alpha,\beta}$ with a fixed Chern character $\mathrm{ch}(E)=v$, and the wall-crossing phenomena in the space of stability conditions when varying $(\alpha,\beta,s)\in\mathbb{R}_{>0}\times\mathbb{R}\times\mathbb{R}_{>0}$. For the wall-crossing phenomena, the expectation here is something similar to [Bri08, Section 9]: there is a collection of codimension $1$ submanifolds in $\mathbb{R}_{>0}\times\mathbb{R}\times\mathbb{R}_{>0}$ called walls, and the complement of all walls is a disjoint union of open subsets called chambers. If we move a stability condition within a chamber, there is no strictly semistable object and the set of semistable objects does not change. The set of semistable objects changes only when we cross a wall. For the moduli space of semistable objects, there are two technical difficulties in its construction according to \cite{AP06}: generic flatness and boundedness. In the case of 3-folds, assuming the generalized Bogomolov-Gieseker inequality, we have the following result from [PT16, Theorem 4.2; Corollary 4.23]:
\begin{theorem}
Assume $X$ is a smooth projective 3-fold on which the generalized Bogomolov-Gieseker inequality holds for tilt-semistable objects. Then the moduli functor of Bridgeland semistable objects $\mathcal{M}_{\sigma}(v)$ for a fixed Chern character $v$ is a quasi-proper algebraic stack of finite-type over $\mathbb{C}$. If there is no strictly semistable object, then $\mathcal{M}_{\sigma}(v)$ is a $\mathbb{C}^{*}$-gerbe over a proper algebraic space $M_{\sigma}(v)$.
\end{theorem}
\noindent There is also an important region in $\mathrm{Stab}(\mathbb{P}^{3})$ called the large volume limit of Bridgeland stability. Roughly speaking, when the polarization is large enough (taking $\alpha\rightarrow+\infty$ in Proposition $2.5$), the moduli space of semistable objects coincides with the moduli space of Gieseker semistable sheaves. [Bri08, Section 14] illustrates this picture in the case of K3 surfaces.
Now we are ready to define the notion of a simple wall-crossing. Fix a wall $W$ and two adjacent chambers $C_{1}$, $C_{2}$ in $\mathrm{Stab}(\mathbb{P}^{3})$, and denote the stability conditions in the chambers $C_{1}$, $C_{2}$ by $\lambda_{1}$, $\lambda_{2}$ respectively.
\begin{defin} A wall-crossing is simple if there exist two nonempty moduli spaces $\mathbf{M}_{A}$ and $\mathbf{M}_{B}$ of semistable objects in $\mathscr{A}^{\alpha,\beta}$ with Chern characters $v_{A}$ and $v_{B}$ for stability conditions in a neighborhood of a point on $W$ meeting $C_{1}$ and $C_{2}$ such that:
$(1)$ $v_{A}+v_{B}=v$ and any $A\in\mathbf{M}_{A}$ and $B\in\mathbf{M}_{B}$ is stable;
$(2)$ if $E$ is $\lambda_{1}$-stable but not $\lambda_{2}$-stable, then there exists a unique pair $(A,B)$ in $\mathbf{M}_{A}\times\mathbf{M}_{B}$ such that $0\longrightarrow B\longrightarrow E\longrightarrow A\longrightarrow0$ is a nontrivial extension. Conversely, all nontrivial extensions of $A$ by $B$ are $\lambda_{1}$-stable but not $\lambda_{2}$-stable;
$(3)$ if $F$ is $\lambda_{2}$-stable but not $\lambda_{1}$-stable, then there exists a unique pair $(A,B)$ in $\mathbf{M}_{A}\times\mathbf{M}_{B}$ such that $0\longrightarrow A\longrightarrow F\longrightarrow B\longrightarrow0$ is a nontrivial extension. Conversely, all nontrivial extensions of $B$ by $A$ are $\lambda_{2}$-stable but not $\lambda_{1}$-stable.
\label{lihai}
\end{defin}
Now we fix $v=\mathrm{ch}(\mathcal{I}_{C})$, where $C$ is a twisted cubic in $\mathbb{P}^{3}$. We briefly recall the main ideas of finding the wall-crossings in the Main Theorem without using \cite{PS85,EPS87} as follows: First, we can formally use numerical properties of a wall together with the usual Bogomolov inequality to find the Chern characters $v_{A}$ and $v_{B}$ (actually, this procedure can be made into a computer algorithm, see [SchB15, Theorem 5.3; Theorem 6.1; Section 5.3] for more details). For the first wall-crossing, we have $v_{A}=\mathrm{ch}(\mathcal{O}(-2)^{3})$ and $v_{B}=\mathrm{ch}(\mathcal{O}(-3)[1]^{2})$. In [SchB15, Proposition 4.5], Schmidt showed that $\mathcal{O}(-2)^{3}$ and $\mathcal{O}(-3)[1]^{2}$ are the only semistable objects with those Chern characters. Since these two objects are strictly semistable, the first wall-crossing is not simple. But it is still not hard to construct the moduli space in this case via quiver representations. For the second wall-crossing, we have $v_{A}=\mathrm{ch}(\mathcal{I}_{p}(-1))$ and $v_{B}=\mathrm{ch}(\mathcal{O}_{V}(-3))$, where $p$ is a point in $\mathbb{P}^{3}$ and $V$ is a plane in $\mathbb{P}^{3}$. In [SchB15, Theorem 5.3], Schmidt showed that $\mathcal{I}_{p}(-1)$ and $\mathcal{O}_{V}(-3)$ are all the semistable objects with those Chern characters. It is also easy to check that in this case $\mathcal{I}_{p}(-1)$ and $\mathcal{O}_{V}(-3)$ are stable, so the second wall-crossing is simple, and the moduli spaces $\mathbf{M}_{A}$ and $\mathbf{M}_{B}$ in Definition \ref{lihai} are $\mathbb{P}^{3}$ and $(\mathbb{P}^{3})^{*}$ respectively. The third wall-crossing is similar to the second wall-crossing. We have $v_{A}=\mathrm{ch}(\mathcal{O}(-1))$ and $v_{B}=\mathrm{ch}(\mathcal{I}_{q/V}(-3))$, where $V$ is a plane in $\mathbb{P}^{3}$ and $q$ is a point on $V$. $\mathcal{O}(-1)$ and $\mathcal{I}_{q/V}(-3)$ are all the semistable objects with those Chern characters, and they are stable.
The third wall-crossing is also simple, with $\mathbf{M}_{A}$ being a point and $\mathbf{M}_{B}$ being the incidence hyperplane $H$ contained in $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$. The statement that $\mathbf{M}_{3}$ is the Hilbert scheme follows from the facts that the large volume limit of Bridgeland stability coincides with Gieseker stability, and that the moduli space of Gieseker semistable ideal sheaves is the same as the Hilbert scheme.
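As a consistency check on these destabilizing pairs, one can verify by elementary arithmetic that $v_{A}+v_{B}=v$ at each of the three walls. The following is an illustrative sketch (the value $\mathrm{ch}(\mathcal{I}_{C})=(1,0,-3,5)$, computed from $\chi(\mathcal{O}_{C}(t))=3t+1$, and the helper names are our own); Chern characters are recorded as rational vectors $(\mathrm{ch}_{0},\mathrm{ch}_{1},\mathrm{ch}_{2},\mathrm{ch}_{3})$ in powers of the hyperplane class:

```python
from fractions import Fraction as F

def mult(a, b):
    # truncated product of Chern characters on P^3 (degrees 0..3)
    return [sum(a[i] * b[d - i] for i in range(d + 1)) for d in range(4)]

def twist(ch, k):
    # ch(E(k)) = ch(E) * exp(k H), truncated at degree 3
    e = [F(1), F(k), F(k * k, 2), F(k ** 3, 6)]
    return mult(ch, e)

def add(a, b): return [x + y for x, y in zip(a, b)]
def smul(c, a): return [c * x for x in a]

O   = [F(1), F(0), F(0), F(0)]            # structure sheaf
Op  = [F(0), F(0), F(0), F(1)]            # skyscraper sheaf of a point
OV  = [F(0), F(1), F(-1, 2), F(1, 6)]     # plane: ch(O) - ch(O(-1))
OqV = [OV[0], OV[1], OV[2], OV[3] - 1]    # point in a plane: ch(O_V) - ch(O_q)
Ip  = add(O, smul(-1, Op))                # ideal sheaf of a point
v   = [F(1), F(0), F(-3), F(5)]           # ch(I_C) for a twisted cubic C

# wall 1: O(-2)^3 and O(-3)[1]^2 (the shift [1] negates the Chern character)
w1 = add(smul(3, twist(O, -2)), smul(-2, twist(O, -3)))
# wall 2: I_p(-1) and O_V(-3)
w2 = add(twist(Ip, -1), twist(OV, -3))
# wall 3: O(-1) and I_{q/V}(-3)
w3 = add(twist(O, -1), twist(OqV, -3))

assert w1 == w2 == w3 == v
```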
We will study the three wall-crossings of the Main Theorem in detail in the next three sections.
\section{The First Wall-crossing}
In this section, we construct the moduli space $\mathbf{M}_{1}$ and prove that it is a smooth, projective and integral variety. This part first appeared in [SchB15, Theorem 7.1]; we give more details here.
We start with a quiver $Q=(V,A):V=\{v_{1},v_{2}\},A=\{e_{i}|i=1,2,3,4\}$, where $s(e_{i})=v_{1}$ and $t(e_{i})=v_{2}$ (actually $Q$ is just $\bullet\overset{4}{\longrightarrow}\bullet$). We set the dimension vector to be $(2,3)$ and define $\theta:\mathbb{Z}\oplus\mathbb{Z}\longrightarrow\mathbb{Z}$ by $\theta(m,n)=-3m+2n$. A representation $V$ with dimension vector $(2,3)$ is $\theta$-(semi)stable if for any proper nontrivial subrepresentation $W$ we have $\theta(\mathrm{\underline{dim}}W)>(\geqslant)0$, where $\mathrm{\underline{dim}}W$ is the dimension vector of $W$. If $S$ is a scheme, we define a family of $\theta$-semistable representations of $Q$ over $S$ with dimension vector $(2,3)$ to be four homomorphisms $f_{1},f_{2},f_{3},f_{4}:V\longrightarrow W$, where $V$ and $W$ are locally free on $S$ with $\mathrm{rk}(V)=2$ and $\mathrm{rk}(W)=3$, such that the representation $f_{1s},f_{2s},f_{3s},f_{4s}:V_{s}\longrightarrow W_{s}$ is $\theta$-semistable for any closed point $s\in S$. We define $\mathcal{K}_{\theta}:\mathbf{Sch}_{\mathbb{C}}\longrightarrow\mathbf{Sets}$ to be the moduli functor sending a scheme $S$ to the set of isomorphism classes of families of $\theta$-semistable representations with dimension vector $(2,3)$ over $S$.
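The stability inequality for the dimension vector $(2,3)$ can be made concrete by listing which sub-dimension vectors would violate it. The short enumeration below is an illustrative sketch of our own, not part of the construction:

```python
def theta(m, n):
    # the weight theta(m, n) = -3m + 2n from the text
    return -3 * m + 2 * n

# theta vanishes on the chosen dimension vector, as required for
# King's notion of theta-(semi)stability
assert theta(2, 3) == 0

# sub-dimension vectors (m, n) of a proper nonzero subrepresentation;
# stability requires theta(m, n) > 0 for each one that actually occurs
violating = [(m, n) for m in range(3) for n in range(4)
             if (m, n) not in ((0, 0), (2, 3)) and theta(m, n) <= 0]
assert violating == [(1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
```

Thus one can check that a representation $(f_{1},\dots,f_{4}):\mathbb{C}^{2}\longrightarrow\mathbb{C}^{3}$ is $\theta$-stable exactly when no subrepresentation realizes one of the listed vectors; concretely, the span of $f_{1}(w),\dots,f_{4}(w)$ must be at least $2$-dimensional for every nonzero $w\in\mathbb{C}^{2}$, and the $f_{i}$ must be jointly surjective.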
\begin{propo}
The functor $\mathcal{K}_{\theta}$ is represented by a smooth projective integral variety $K_{\theta}$.
\label{hulai}
\end{propo}
\begin{proof}
By \cite{Kin94}, since the dimension vector $(2,3)$ is indivisible, $\mathcal{K}_{\theta}$ is represented by a projective variety $K_{\theta}$ and there is no strictly $\theta$-semistable representation. The path algebra of $Q$ is hereditary since there are no relations between the arrows; this implies that $K_{\theta}$ is smooth and irreducible.
\end{proof}
\begin{theorem}
The two moduli spaces $K_{\theta}$ and $\mathbf{M}_{1}$ are isomorphic.
\label{ritian}
\end{theorem}
\begin{proof}
Fix $(\alpha_{0},\beta_{0})=(\frac{1}{2}+\varepsilon,-\frac{5}{2})$, where $\varepsilon>0$ is small. By [SchB15, Theorem 5.3; Theorem 6.1], $\mathbf{M}_{1}$ is isomorphic to the moduli space $\mathbf{M}^{\mathrm{tilt}}_{\alpha_{0},\beta_{0}}(v)$ of $\nu_{\alpha_{0},\beta_{0}}$-semistable objects in $\mathrm{Coh}^{\beta_{0}}(\mathbb{P}^{3})$. Since $(\alpha_{0},\beta_{0})$ is in the interior of a chamber, there are no strictly semistable objects. Notice that $-3<\beta_{0}<-2$, so by definition $\mathcal{O}(-2)$ and $\mathcal{O}(-3)[1]$ are in $\mathrm{Coh}^{\beta_{0}}(\mathbb{P}^{3})$, and we have
\begin{align}
Z_{\alpha_{0},\beta_{0}}\left(\mathcal{O}(-2)\right)&=-\frac{1}{8}+\frac{\alpha_{0}^{2}}{2}+\frac{1}{2}i,\nonumber\\
Z_{\alpha_{0},\beta_{0}}\left(\mathcal{O}(-3)[1]\right)&=\frac{1}{8}-\frac{\alpha_{0}^{2}}{2}+\frac{1}{2}i.\nonumber
\end{align}
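These two values follow directly from the definition of $Z_{\alpha,\beta}$; as a sanity check, the computation can be replayed with exact rational arithmetic (the sample value $\alpha_{0}=\frac{3}{5}$, i.e. $\varepsilon=\frac{1}{10}$, is an assumption for concreteness):

```python
from fractions import Fraction as F

b = F(-5, 2)    # beta_0
a2 = F(9, 25)   # alpha_0^2 for the sample value alpha_0 = 3/5

def Z(ch0, ch1, ch2):
    # Z_{alpha,beta} = -(ch2 - b ch1 + (b^2/2 - a2/2) ch0) + i (ch1 - b ch0)
    re = -(ch2 - b * ch1 + (b * b / 2 - a2 / 2) * ch0)
    im = ch1 - b * ch0
    return re, im

# O(-2) has ch = (1, -2, 2); O(-3)[1] has ch = (-1, 3, -9/2)
assert Z(F(1), F(-2), F(2)) == (F(-1, 8) + a2 / 2, F(1, 2))
assert Z(F(-1), F(3), F(-9, 2)) == (F(1, 8) - a2 / 2, F(1, 2))
```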
On the other hand, we denote by $\mathrm{Rep}(Q)$ the abelian category of quiver representations of $Q$, and by $\mathscr{B}$ the extension closure of $\mathcal{O}(-2)$ and $\mathcal{O}(-3)[1]$ in $\mathrm{Coh}^{\beta_{0}}(\mathbb{P}^{3})$. By [SchB15, Theorem 5.1], all $\nu_{\alpha_{0},\beta_{0}}$-semistable objects are in $\mathscr{B}$. By [Bon89, Theorem 6.2], there is an equivalence $F:\mathrm{D}^{\mathrm{b}}(\mathscr{B})\longrightarrow\mathrm{D}^{\mathrm{b}}(\mathrm{Rep}(Q))$. This functor $F$ sends $\mathcal{O}(-3)[1]$ and $\mathcal{O}(-2)$ to the two simple representations $\mathbb{C}\longrightarrow0$ and $0\longrightarrow\mathbb{C}$. On $\mathscr{B}$, we can define a central charge $Z$ and a slope function $\eta$ by
\begin{align*}
Z\left(E\right)&=\theta\left(F^{-1}\left(E\right)\right)+i\mathrm{dim}\left(F^{-1}\left(E\right)\right),\\
\eta\left(E\right)&=-\frac{\mathrm{Re}\left(Z\left(E\right)\right)}{\mathrm{Im}\left(Z\left(E\right)\right)}=-\frac{\theta\left(F^{-1}\left(E\right)\right)}{\mathrm{dim}\left(F^{-1}\left(E\right)\right)},
\end{align*}
where $\mathrm{dim}$ is the sum of the two components of a dimension vector. This makes $\sigma:=(Z,\mathscr{B})$ a stability condition on $\mathrm{D}^{\mathrm{b}}(\mathscr{B})$ by [Bri07, Example 5.5], and $F$ sends $\sigma$-semistable objects with Chern character $v$ to $\theta$-semistable representations with dimension vector $(2,3)$. If we denote by $\mathbf{M}_{\sigma}$ the moduli of $\sigma$-semistable objects in $\mathscr{B}$ with Chern character $v$, then $F$ defines a bijection between $\mathbf{M}_{\sigma}$ and $K_{\theta}$ as sets. We will globalize this construction later and get a bijective morphism by using the existence of a universal family. Now we compute that
\begin{align}
Z\left(\mathcal{O}(-2)\right)&=2+i,\nonumber\\
Z\left(\mathcal{O}(-3)[1]\right)&=-3+i.\nonumber
\end{align}
If we view $Z$ and $Z_{\alpha_{0},\beta_{0}}|_{\mathrm{D}^{\mathrm{b}}(\mathscr{B})}$ as linear maps from $\mathbb{Z}^{2}$ to $\mathbb{R}^{2}$, then an easy computation shows they differ from each other by composing a linear map in $\mathrm{GL}^{+}(2;\mathbb{R})$. This means they define the same stability condition and hence have the same moduli of semistable objects with Chern character $v$, so $\mathbf{M}_{\sigma}=\mathbf{M}^{\mathrm{tilt}}_{\alpha_{0},\beta_{0}}(v)$.
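The determinant computation behind this claim is short. Writing the two classes as column vectors and the linear map as $M=WV^{-1}$, the sign of $\det M$ is the sign of $\det W/\det V$; a sketch of the check (again with the sample value $\alpha_{0}^{2}=\frac{9}{25}$ as an assumption):

```python
from fractions import Fraction as F

a2 = F(9, 25)   # alpha_0^2 for the sample value alpha_0 = 3/5 > 1/2

# columns: images of the classes of O(-2) and O(-3)[1], as vectors (Re, Im)
V = [[F(2), F(-3)],
     [F(1), F(1)]]                                        # under Z
W = [[F(-1, 8) + a2 / 2, F(1, 8) - a2 / 2],
     [F(1, 2), F(1, 2)]]                                  # under Z_{alpha_0, beta_0}

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert det(V) == 5
assert det(W) == a2 / 2 - F(1, 8)   # positive exactly when alpha_0 > 1/2
assert det(W) > 0                    # so M = W V^{-1} lies in GL+(2, R)
```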
It only remains to show that $K_{\theta}$ is isomorphic to $\mathbf{M}_{\sigma}$. For any $\sigma$-semistable object $E\in\mathrm{D}^{\mathrm{b}}(\mathscr{B})$ with Chern character $v$, $F(E)$ is a $\theta$-semistable representation $f_{1}, f_{2}, f_{3}, f_{4}:\mathbb{C}^{2}\longrightarrow\mathbb{C}^{3}$. We have an obvious exact sequence
\begin{center}
$\begin{CD}
0 @>>> \mathbb{C}^{2} @>>> \mathbb{C}^{2}\\
@VVV @V f_{i}VV @VVV \\
\mathbb{C}^{3} @>>> \mathbb{C}^{3} @>>> 0
\end{CD}$
\end{center}
in $\mathrm{Rep}(Q)$ which corresponds to a short exact sequence $0\longrightarrow\mathcal{O}(-2)^{3}\longrightarrow E\longrightarrow\mathcal{O}(-3)[1]^{2}\longrightarrow0$ in $\mathscr{B}$. By applying the long exact sequence for the $\mathrm{Hom}$ functor to it, we can see that $\mathrm{Ext}^{2}(E,E)=0$. But $\mathrm{Ext}^{2}(E,E)$ computes the obstruction space of $\mathbf{M}_{\sigma}$ at $E$ by \cite{Ina02} and \cite{Lie06}, so $\mathbf{M}_{\sigma}$ is smooth and hence a complex manifold. Since there is no strictly $\sigma$-semistable object, a universal family $\mathcal{U}$ of $\sigma$-semistable objects with Chern character $v$ exists on $\mathbf{M}_{\sigma}\times\mathbb{P}^{3}$, and $\mathcal{U}$ is an extension of $p^{*}\mathcal{O}(-3)^{\oplus2}[1]$ by $p^{*}\mathcal{O}(-2)^{\oplus3}$. Denote by $\mathscr{B}'$ the extension closure of $p^{*}\mathcal{O}(-3)^{\oplus2}[1]$ and $p^{*}\mathcal{O}(-2)^{\oplus3}$ in $\mathrm{D}^{\mathrm{b}}(\mathbf{M}_{\sigma}\times\mathbb{P}^{3})$, and by $\mathrm{Rep}_{K_{\theta}}(Q)$ the category of families of quiver representations over $K_{\theta}$. Then there exists an equivalence $F_{K_{\theta}}:\mathscr{B}'\longrightarrow\mathrm{D}^{\mathrm{b}}(\mathrm{Rep}_{K_{\theta}}(Q))$ such that when restricted to a fiber $x\times\mathbb{P}^{3}$, $F_{K_{\theta}}$ agrees with $F$. Because $F_{K_{\theta}}(\mathcal{U})|_{x\times\mathbb{P}^{3}}=F(\mathcal{U}|_{x\times\mathbb{P}^{3}})$ and $\mathcal{U}|_{x\times\mathbb{P}^{3}}$ is a $\sigma$-semistable object with Chern character $v$, $F_{K_{\theta}}(\mathcal{U})|_{x\times\mathbb{P}^{3}}$ is $\theta$-semistable with dimension vector $(2,3)$. This means $F_{K_{\theta}}(\mathcal{U})$ is a family of $\theta$-semistable objects with dimension vector $(2,3)$, so it induces a morphism $\varphi:\mathbf{M}_{\sigma}\longrightarrow K_{\theta}$.
As $\mathcal{U}$ is a universal family of $\sigma$-semistable objects with Chern character $v$, and $F$ is a bijection between $\sigma$-semistable objects with Chern character $v$ in $\mathscr{B}$ and $\theta$-semistable representations with dimension vector $(2,3)$, $\varphi$ is a bijective morphism. We proved that $K_{\theta}$ is smooth in Proposition \ref{hulai}, and any bijective morphism between complex manifolds is an isomorphism, so $\varphi$ is an isomorphism. Therefore $K_{\theta}$ is isomorphic to $\mathbf{M}_{1}$.
\end{proof}
\section{The Second Wall-crossing}
In this section, we study the second wall-crossing and prove $(3)$ in the Main Theorem. To be more precise, we will prove the following theorem. Let $V$ be a plane in $\mathbb{P}^{3}$ and $p$ be a point in $\mathbb{P}^{3}$.
\begin{theorem}
The second wall-crossing is simple with a family of pairs of destabilizing objects $(\mathcal{I}_{p}(-1)$, $\mathcal{O}_{V}(-3))$. The moduli space of semistable objects after the wall-crossing is a projective variety $\mathbf{M}_{2}$. $\mathbf{M}_{2}$ has two irreducible components $\mathbf{B}$ and $\mathbf{P}$, where $\mathbf{P}$ is a $\mathbb{P}^{9}$-bundle over $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$ and $\mathbf{B}$ is the blow-up of $\mathbf{M}_{1}$ along a $5$-dimensional smooth center. The two components of $\mathbf{M}_{2}$ intersect transversally along the exceptional divisor of $\mathbf{B}$.
\label{zhu1}
\end{theorem}
Throughout this section, we fix the family of pairs of destabilizing objects to be
\begin{equation}
\left(A, B\right)=\left(\mathcal{I}_{p}(-1),\mathcal{O}_{V}(-3)\right),\nonumber
\end{equation}and denote the stability conditions in the chamber of $\mathbf{M}_{1}$ (resp. $\mathbf{M}_{2}$) by $\lambda_{1}$ (resp. $\lambda_{2}$). Whenever we take an extension of $A$ and $B$, we always mean a nontrivial extension class modulo scalar multiplications. The following Hom and Ext group computations are straightforward.
\begin{lemma}
$\mathrm{Hom}(A,B)=\mathrm{Hom}(B,A)=0$, $\mathrm{Hom}(A,A)=\mathrm{Hom}(B,B)=\mathbb{C}$;
$\mathrm{Ext}^{1}(A,B)=\mathbb{C}$ if $p\in V$, and $0$ otherwise,
$\mathrm{Ext}^{1}(A,A)=\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{3}$, $\mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10}$;
$\mathrm{Ext}^{2}(A,B)=\mathbb{C}$, $\mathrm{Ext}^{2}(B,B)=0$, $\mathrm{Ext}^{2}(A,A)=\mathbb{C}^{3}$, $\mathrm{Ext}^{2}(B,A)=0$;
$\mathrm{Ext}^{3}(A,B)=\mathrm{Ext}^{3}(A,A)=\mathrm{Ext}^{3}(B,B)=\mathrm{Ext}^{3}(B,A)=0$.
\label{diao}
\end{lemma}
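One consistency check on Lemma \ref{diao} is the Euler pairing $\chi(E,F)=\sum_{i}(-1)^{i}\dim\mathrm{Ext}^{i}(E,F)$, computable from Chern characters by Hirzebruch-Riemann-Roch. The sketch below (with $\mathrm{ch}(\mathcal{I}_{p}(-1))$ and $\mathrm{ch}(\mathcal{O}_{V}(-3))$ computed by hand, in the case $p\in V$) verifies the alternating sums:

```python
from fractions import Fraction as F

todd = [F(1), F(2), F(11, 6), F(1)]        # Todd class of P^3

def mult(a, b):
    # truncated product of cohomology classes on P^3 (degrees 0..3)
    return [sum(a[i] * b[d - i] for i in range(d + 1)) for d in range(4)]

def dual(a):
    # ch of the derived dual: negate odd-degree parts
    return [a[0], -a[1], a[2], -a[3]]

def euler(a, b):
    # chi(E, F) = integral over P^3 of ch(E)^dual . ch(F) . td(P^3)
    return mult(mult(dual(a), b), todd)[3]

A = [F(1), F(-1), F(1, 2), F(-7, 6)]       # ch(I_p(-1))
B = [F(0), F(1), F(-7, 2), F(37, 6)]       # ch(O_V(-3))

# compare with the alternating sums of the dimensions in the lemma
assert euler(A, B) == 0 - 1 + 1 - 0
assert euler(B, A) == 0 - 10 + 0 - 0
assert euler(A, A) == 1 - 3 + 3 - 0
assert euler(B, B) == 1 - 3 + 0 - 0
```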
\noindent\textbf{Moduli space of nontrivial extensions.} In this subsection, we construct two moduli spaces $H$ and $\mathbf{P}$, where $H$ parametrizes nontrivial extensions of $A$ by $B$ and $\mathbf{P}$ parametrizes the reverse nontrivial extensions. We show that with the universal extensions on those moduli spaces, $H$ is embedded into $\mathbf{M}_{1}$ and $\mathbf{P}$ is embedded into $\mathbf{M}_{2}$. Then we do some detailed computations on $\mathrm{Ext}$ groups for later use.
We recall the comments after Definition \ref{lihai}: the second wall-crossing is simple and we have $\mathbf{M}_{A}=\mathbb{P}^{3}$ parametrizing $\mathcal{I}_{p}(-1)$ and $\mathbf{M}_{B}=(\mathbb{P}^{3})^{*}$ parametrizing $\mathcal{O}_{V}(-3)$. We denote the universal family of semistable objects with Chern character $v_{A}$ on $\mathbf{M}_{A}\times\mathbb{P}^{3}$ by $\mathcal{U}_{A}$, and the universal family of semistable objects with Chern character $v_{B}$ on $\mathbf{M}_{B}\times\mathbb{P}^{3}$ by $\mathcal{U}_{B}$. Denote two projections by
\begin{equation*}
\mathbf{M}_{A}\times\mathbb{P}^{3}\overset{\pi_{A}}{\longleftarrow}\mathbf{M}_{A}\times\mathbf{M}_{B}\times\mathbb{P}^{3}\overset{\pi_{B}}{\longrightarrow}\mathbf{M}_{B}\times\mathbb{P}^{3}.
\end{equation*}
We also denote the projection onto the first two factors by $\mathbf{M}_{A}\times\mathbf{M}_{B}\times\mathbb{P}^{3}\overset{\pi}{\longrightarrow}\mathbf{M}_{A}\times\mathbf{M}_{B}$. Let $H$ be the incidence divisor $\{(p,V)\in\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}|p\in V\}$, and denote the restrictions of the above three projections to $H\times\mathbb{P}^{3}$ by $\pi_{A}^{H}$, $\pi_{B}^{H}$ and $\pi_{H}$. Define $\mathcal{F}$ to be $\pi_{A}^{*}\mathcal{U}_{A}$ and $\mathcal{G}$ to be $\pi_{B}^{*}\mathcal{U}_{B}$, and define $\mathcal{F}_{H}$ to be $\left(\pi_{A}^{H}\right)^{*}\mathcal{U}_{A}$ and $\mathcal{G}_{H}$ to be $\left(\pi_{B}^{H}\right)^{*}\mathcal{U}_{B}$. Let $S\longrightarrow\mathbf{M}_{A}\times\mathbf{M}_{B}$ and $S_{H}\longrightarrow H$ be any morphisms of schemes, and denote the pullbacks of these two morphisms with respect to $\pi$ and $\pi_{H}$ by $q^{S}$ and $q_{H}^{S}$.
\begin{propo}
There exists an extension on $H\times\mathbb{P}^{3}$
\begin{equation}
0\longrightarrow\mathcal{G}_{H}\otimes \pi_{H}^{*}\mathcal{L}\longrightarrow\mathcal{U}_{E}\longrightarrow\mathcal{F}_{H}\longrightarrow0,
\end{equation}
where $\mathcal{L}=\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})^{*}$ is a line bundle. This extension is universal, on the category of noetherian $H$-schemes, for classes of nontrivial extensions of $\left(q_{H}^{S}\right)^{*}\mathcal{F}_{H}$ by $\left(q_{H}^{S}\right)^{*}\mathcal{G}_{H}$ on $\left(H\times\mathbb{P}^{3}\right)\times_{H}S_{H}$, modulo scalar multiplication by $H^{0}(S_{H},\mathcal{O}_{S_{H}}^{*})$.
\label{dadiao}
\end{propo}
\begin{proof}
We apply [Lan85, Proposition 4.2; Corollary 4.5] to $\mathcal{F}_{H}$, $\mathcal{G}_{H}$ and $\pi_{H}$. We only need to check that $\mathscr{E}xt^{0}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})=0$ and that $\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ commutes with base change, in the sense that over any point $(p_{0},V_{0})\in H$, $\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ restricts to $\mathrm{Ext}^{1}(A_{0},B_{0})$. First notice that $\mathscr{E}xt^{3}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ restricts to $\mathrm{Ext}^{3}(A_{0}, B_{0})$ over $(p_{0},V_{0})$, where the latter is $0$ by Lemma \ref{diao}. Then [Lan85, Theorem 1.4] tells us $\mathscr{E}xt^{2}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ restricts to $\mathrm{Ext}^{2}(A_{0}, B_{0})$ over $(p_{0},V_{0})$, where the latter is $\mathbb{C}$ for all points in $H$. Hence $\mathscr{E}xt^{2}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ is a line bundle. Again [Lan85, Theorem 1.4] tells us $\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ restricts to $\mathrm{Ext}^{1}(A_{0}, B_{0})$ over $(p_{0},V_{0})$. By Lemma \ref{diao} we have $\mathrm{Ext}^{1}(A_{0}, B_{0})=\mathbb{C}$ for all points in $H$, so $\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ is a line bundle. Applying [Lan85, Theorem 1.4] a third time, $\mathscr{E}xt^{0}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})$ restricts to $\mathrm{Hom}(A_{0},B_{0})$, where the latter is $0$ by Lemma \ref{diao}. Hence $\mathscr{E}xt^{0}_{\pi_{H}}(\mathcal{F}_{H},\mathcal{G}_{H})=0$.
\end{proof}
\begin{propo}
The relative Ext sheaf $\mathscr{E}xt^{1}_{\pi}(\mathcal{G},\mathcal{F})$ is locally free of rank $10$ on $\mathbf{M}_{A}\times\mathbf{M}_{B}$. If we denote its projectivization $\mathbb{P}(\mathscr{E}xt^{1}_{\pi}(\mathcal{G},\mathcal{F})^{*})$ by $\mathbf{P}$, then there exists an extension on $\mathbf{P}\times\mathbb{P}^{3}$
\begin{equation}
0\longrightarrow h^{*}\mathcal{F}\otimes\pi_{\mathbf{P}}^{*}\mathcal{O}_{\mathbf{P}}(1)\longrightarrow\mathcal{U}_{F}\longrightarrow h^{*}\mathcal{G}\longrightarrow0,
\end{equation}
where $h$ is the projection $\mathbf{P}\times\mathbb{P}^{3}\longrightarrow\mathbf{M}_{A}\times\mathbf{M}_{B}\times\mathbb{P}^{3}$, $\pi_{\mathbf{P}}$ is the projection $\mathbf{P}\times\mathbb{P}^{3}\longrightarrow \mathbf{P}$ and $\mathcal{O}_{\mathbf{P}}(1)$ is the relative $\mathcal{O}(1)$ on $\mathbf{P}$. This extension is universal, on the category of noetherian $\mathbf{M}_{A}\times\mathbf{M}_{B}$-schemes, for classes of nontrivial extensions of $\left(q^{S}\right)^{*}\mathcal{G}$ by $\left(q^{S}\right)^{*}\mathcal{F}$ on $\left(\mathbf{M}_{A}\times\mathbf{M}_{B}\times\mathbb{P}^{3}\right)\times_{\mathbf{M}_{A}\times\mathbf{M}_{B}}S$, modulo scalar multiplication by $H^{0}(S,\mathcal{O}_{S}^{*})$.
\label{juru}
\end{propo}
\begin{proof}
The proof is completely analogous to the proof of Proposition \ref{dadiao}.
\end{proof}
The existence of the above extension $\mathcal{U}_{E}$ (resp. $\mathcal{U}_{F}$) gives a flat family of $\lambda_{1}$-stable (resp. $\lambda_{2}$-stable) sheaves on $H$ (resp. $\mathbf{P}$), hence it induces a morphism $\varphi_{E}:H\longrightarrow \mathbf{M}_{1}$ (resp. $\varphi_{F}:\mathbf{P}\longrightarrow \mathbf{M}_{2}$).
\begin{propo}
(1) The induced morphism $\varphi_{E}$ is a closed embedding;
(2) The induced morphism $\varphi_{F}$ is injective on the level of sets and Zariski tangent spaces.
\label{zhongyao}
\end{propo}
\begin{proof}
On the level of sets, $\varphi_{E}$ maps an extension $0\longrightarrow B\longrightarrow E\longrightarrow A\longrightarrow0$ to $E$. If we have two extensions $0\longrightarrow B\longrightarrow E\longrightarrow A\longrightarrow0$ and $0\longrightarrow B'\longrightarrow E'\longrightarrow A'\longrightarrow0$ such that $E\cong E'$ as stable sheaves, then $E'=E$ and the isomorphism is just scalar multiplication by some $c\in\mathbb{C}^{*}$. By the definition of a simple wall-crossing with a pair of destabilizing objects, we must have $A'=A$ and $B'=B$. This implies that $\varphi_{E}$ is injective on the level of sets.
On the level of Zariski tangent spaces, a tangent vector $v$ of $H$ at a point $(p,V)$ can be represented by a morphism $\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\longrightarrow H$. By pulling back the universal extension $(1)$ to $\left(H\times\mathbb{P}^{3}\right)\times_{H}\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})=\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\times\mathbb{P}^{3}$, we get an exact sequence of flat families
\begin{equation*}
0\longrightarrow\mathcal{G}_{\varepsilon}\longrightarrow\mathcal{E}_{\varepsilon}\longrightarrow\mathcal{F}_{\varepsilon}\longrightarrow0
\end{equation*}
and $\mathcal{G}_{\varepsilon}$, $\mathcal{E}_{\varepsilon}$ and $\mathcal{F}_{\varepsilon}$ restrict to $B$, $E$ and $A$ on the closed fiber, respectively. In particular, $\mathcal{E}_{\varepsilon}$ is a flat family of $\lambda_{1}$-stable objects. It gives rise to a morphism $\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\longrightarrow\mathbf{M}_{1}$ corresponding to $T_{\varphi_{E},(p,V)}(v)$. Suppose we have two tangent vectors $v$, $v'$ represented by morphisms $\xi,\xi':\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\longrightarrow H$ with $T_{\varphi_{E},(p,V)}(v)=T_{\varphi_{E},(p,V)}(v')$. Then there exists an isomorphism $\eta:\mathcal{E}_{\varepsilon}\longrightarrow\mathcal{E}_{\varepsilon}'$ between the resulting flat families of $\lambda_{1}$-stable objects such that $\eta$ restricts to the identity on the closed fiber. By \cite{Ina02} and \cite{Lie06}, $\eta$ corresponds to the following diagram in the derived category:
\begin{center}
$\begin{CD}
E @= E \\
@V\zeta VV @V\zeta'VV \\
E[1] @>c>> E[1],
\end{CD}$
\end{center}
where $c$ denotes multiplication by some nonzero constant. By composing $\xi$ and $\xi'$ with the natural projections \begin{equation*}
\mathbf{M}_{A}=\mathbb{P}^{3}\longleftarrow H\longrightarrow(\mathbb{P}^3)^{*}=\mathbf{M}_{B},
\end{equation*}
we can complete $\zeta$ and $\zeta'$ to commutative diagrams
\begin{center}
$\begin{CD}
B @>>> E @>>> A @.\qquad B @>>> E @>>> A \\
@VVV @V\zeta VV @VVV \qquad @VVV @V\zeta' VV @VVV \\
B[1] @>>> E[1] @>>> A[1] @.\qquad B[1] @>>> E[1] @>>> A[1],
\end{CD}$
\end{center}
Via these two diagrams, the diagram of $\eta$ above induces two diagrams
\begin{center}
$\begin{CD}
B @= B @.\qquad A @= A \\
@V\zeta_{B}VV @V\zeta'_{B}VV \qquad @V\zeta_{A} VV @V\zeta'_{A}VV \\
B[1] @>c>> B[1] @.\qquad A[1] @>c>> A[1]
\end{CD}$
\end{center}
corresponding to isomorphisms $\eta_{B}:\mathcal{G}_{\varepsilon}\longrightarrow\mathcal{G}_{\varepsilon}'$ and $\eta_{A}:\mathcal{F}_{\varepsilon}\longrightarrow\mathcal{F}_{\varepsilon}'$ such that they restrict to the identity on the closed fiber and make the following diagram commutative:
\begin{center}
$\begin{CD}
0@>>>\mathcal{G}_{\varepsilon}@>>>\mathcal{E}_{\varepsilon}@>>>\mathcal{F}_{\varepsilon}@>>>0\\
@. @V\eta_{B}VV @V\eta VV @V\eta_{A}VV @.\\
0@>>>\mathcal{G}_{\varepsilon}'@>>>\mathcal{E}_{\varepsilon}'@>>>\mathcal{F}_{\varepsilon}'@>>>0,
\end{CD}$
\end{center}
which implies the two morphisms $\xi$ and $\xi'$ are the same. Therefore $v=v'$ and $T_{\varphi_{E},(p,V)}$ is injective. Since $H$ is projective and $\varphi_{E}$ is injective on the level of sets and of Zariski tangent spaces, $\varphi_{E}$ is a closed embedding. The proof of (2) is completely analogous to the above argument.
\end{proof}
Now we study the normal sequence of the embedding $\varphi_{E}:H\longrightarrow \mathbf{M}_{1}$. Fix a nontrivial extension $0\longrightarrow B\longrightarrow E\longrightarrow A\longrightarrow0$; then we have the following lemma.
\begin{lemma}
The following diagram arises from taking the long exact sequences of the $\mathrm{Hom}$ functor in both directions. It is commutative with exact rows and columns, and all boundary homomorphisms are $0$.
\label{zheteng}
\end{lemma}
\noindent$\begin{CD}
\mathrm{Ext}^{1}(A,B)=\mathbb{C} @>0>> \mathrm{Ext}^{1}(A,E)=\mathbb{C}^{2} @>>> \mathrm{Ext}^{1}(A,A)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,B)=\mathbb{C} \\
@V0VV @VVV @VVV @VVV \\
\mathrm{Ext}^{1}(E,B)=\mathbb{C}^{2} @>>> \mathrm{Ext}^{1}(E,E)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(E,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{2}(E,B)=0 \\
@VVV @VVV @VVV @VVV \\
\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{1}(B,E)=\mathbb{C}^{13} @>>> \mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{2}(B,B)=0 \\
@VVV @VVV @VVV @VVV \\
\mathrm{Ext}^{2}(A,B)=\mathbb{C} @>0>> \mathrm{Ext}^{2}(A,E)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,A)=\mathbb{C}^{3} @>>> 0
\end{CD}$
\begin{proof}
The diagram follows from a straightforward computation, using that $(A, B)=(\mathcal{I}_{p}(-1)$, $\mathcal{O}_{V}(-3))$ and that $E$ fits into a triangle $\mathcal{O}(-2)^{3}\longrightarrow E\longrightarrow\mathcal{O}(-3)[1]^{2}$.
\end{proof}
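Note that the triangle $\mathcal{O}(-2)^{3}\longrightarrow E\longrightarrow\mathcal{O}(-3)[1]^{2}$ is consistent with the Chern character of $E$: writing $h$ for the hyperplane class, one checks directly that
\begin{equation*}
\mathrm{ch}(E)=3e^{-2h}-2e^{-3h}=1-3h^{2}+5h^{3}=\mathrm{ch}(A)+\mathrm{ch}(B),
\end{equation*}
as required for an extension of $A$ by $B$.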
The Kodaira-Spencer map $\mathrm{KS}:T_{\mathbf{M}_{1},E}\longrightarrow\mathrm{Ext}^{1}(E,E)$ is known to be an isomorphism by \cite{Ina02} and \cite{Lie06}. If we let $\theta_{E}$ be the composition $\mathrm{Ext}^{1}(E,E)\longrightarrow\mathrm{Ext}^{1}(E,A)\longrightarrow\mathrm{Ext}^{1}(B,A)$ (equivalently, $\mathrm{Ext}^{1}(E,E)\longrightarrow\mathrm{Ext}^{1}(B,E)\longrightarrow\mathrm{Ext}^{1}(B,A)$) in the diagram of Lemma \ref{zheteng}, and let $K_{E}$ be the kernel of $\theta_{E}$, then we have
\begin{propo}
The Kodaira-Spencer map $\mathrm{KS}$ restricts to an isomorphism between $T_{H,E}$ and $K_{E}$, and we have the following commutative diagram:
\label{KS}
\end{propo}
\begin{center}
$\begin{CD}
0 @>>> T_{H,E} @>>> T_{\mathbf{M}_{1},E} @>>> N_{H/\mathbf{M}_{1},E} @>>> 0\\
@. @VV\mathrm{KS}V @VV\mathrm{KS}V @VVV\\
0 @>>> K_{E} @>>> \mathrm{Ext}^{1}(E,E) @>\theta_{E}>> \mathrm{Ext}^{1}(B,A)
\end{CD}$
\end{center}
\begin{proof}
$\theta_{E}$ is the composition $\mathrm{Ext}^{1}(E,E)\longrightarrow\mathrm{Ext}^{1}(E,A)\longrightarrow\mathrm{Ext}^{1}(B,A)$, where the first map is surjective with a two-dimensional kernel $\mathrm{Ext}^{1}(E,B)$ and the second map has a $3$-dimensional kernel $\mathrm{Ext}^{1}(A,A)$, by Lemma \ref{zheteng}. This implies $K_{E}$ is $5$-dimensional, since $K_{E}$ is an extension of $\mathrm{Ext}^{1}(A,A)$ by $\mathrm{Ext}^{1}(E,B)$; as $H$ is a smooth divisor of dimension $5$, we get $\mathrm{dim}K_{E}=\mathrm{dim}T_{H,E}$. On the other hand, as shown in the proof of Proposition \ref{zhongyao}, a vector $v$ in $T_{H,E}$ is represented by a commutative diagram:
\begin{center}
$\begin{CD}
B @>>> E @>>> A \\
@VVV @V\mathrm{KS}(v) VV @VVV \\
B[1] @>>> E[1] @>>> A[1]
\end{CD}$.
\end{center}
$\theta_{E}(\mathrm{KS}(v))$ is equal to the composition $B\longrightarrow E\overset{\mathrm{KS}(v)}{\longrightarrow} E[1]\longrightarrow A[1]$, which is zero by the commutativity of the diagram. Hence $T_{H,E}$ is mapped into $K_{E}$ under $\mathrm{KS}$. Since we have proved $\mathrm{dim}K_{E}=\mathrm{dim}T_{H,E}$, $\mathrm{KS}$ canonically induces an isomorphism between them.
\end{proof}
We can also define $\theta_{F}:\mathrm{Ext}^{1}(F,F)\longrightarrow\mathrm{Ext}^{1}(A,B)$ for any nontrivial extension $0\longrightarrow A\longrightarrow F\longrightarrow B\longrightarrow0$ in a similar way. Denote its kernel by $K_{F}$; then we have:
\begin{corol}
The tangent space $T_{\mathbf{P},F}$ is canonically identified with $K_{F}$ under the Kodaira-Spencer map.
\label{ciyao}
\end{corol}
\begin{proof}
The reason that $T_{\mathbf{P},F}$ is mapped into $K_{F}$ under the Kodaira-Spencer map is the same as in Proposition \ref{KS}. Conversely, take any $\zeta\in K_{F}$; then the composition $A\longrightarrow F\overset{\zeta}{\longrightarrow} F[1]\longrightarrow B[1]$ is $0$. By the universal property of a triangle in the derived category, there exist morphisms $A\longrightarrow A[1]$ and $B\longrightarrow B[1]$ making the following diagram commutative:
\begin{center}
$\begin{CD}
A @>>> F @>>> B \\
@VVV @V\zeta VV @VVV \\
A[1] @>>> F[1] @>>> B[1]
\end{CD}$.
\end{center}
This diagram will correspond to an exact sequence of flat families on $\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\times\mathbb{P}^{3}$
\begin{equation*}
0\longrightarrow\mathcal{F}_{\varepsilon}\longrightarrow\mathcal{F}_{\varepsilon}'\longrightarrow\mathcal{G}_{\varepsilon}\longrightarrow0
\end{equation*}
where $\mathcal{F}_{\varepsilon}$, $\mathcal{F}_{\varepsilon}'$ and $\mathcal{G}_{\varepsilon}$ restrict to $A$, $F$ and $B$ on the closed fiber. By the universal property of $\mathbf{P}$ proved in Proposition \ref{juru}, this sequence induces a morphism from $\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})$ to $\mathbf{P}$ corresponding to a tangent vector $v$ of $\mathbf{P}$ at $F$. It is not hard to check that $\mathrm{KS}(v)=\zeta$, so $\mathrm{KS}$ is also surjective from $T_{\mathbf{P},F}$ onto $K_{F}$.
\end{proof}
We can use the exact sequence (1) to write down the following globalization of the diagram in Proposition \ref{KS}.
\begin{propo}
The following diagram has exact rows. Among the three vertical morphisms, the left and middle ones are isomorphisms, and the right one is an injection.
\begin{displaymath}
\xymatrix{0 \ar[r] & \mathcal{T}_{H} \ar[r] \ar[d] &\mathcal{T}_{\mathbf{M}_{1}}|_{H} \ar[r] \ar[d]_{\mathrm{KS}} &\mathcal{N}_{H/\mathbf{M}_{1}} \ar[r] \ar[d] &0\\
0 \ar[r] & \mathcal{K}_{E} \ar[r] & \mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{U}_{E},\mathcal{U}_{E}) \ar[r] &\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{G}_{H}\otimes \pi_{H}^{*}\mathcal{L},\mathcal{F}_{H})}
\end{displaymath}
\label{duodiao}
\end{propo}
From this proposition we see that the normal bundle $\mathcal{N}_{H/\mathbf{M}_{1}}$ embeds into $\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{G}_{H}\otimes\pi_{H}^{*}\mathcal{L},\mathcal{F}_{H})$, hence its projectivization $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$ is embedded in $\mathbb{P}(\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{G}_{H}\otimes\pi_{H}^{*}\mathcal{L},\mathcal{F}_{H})^{*})=\mathbb{P}(\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{G}_{H},\mathcal{F}_{H})^{*})$, where the latter is the preimage of $H$ under the projection $\mathbb{P}(\mathscr{E}xt^{1}_{\pi}(\mathcal{G},\mathcal{F})^{*})=\mathbf{P}\longrightarrow\mathbb{P}^3\times(\mathbb{P}^{3})^{*}$.
Next we are going to compute the dimension of the Zariski tangent space $T_{\mathbf{M}_{2},F}\cong\mathrm{Ext}^{1}(F,F)$ for a nontrivial extension $0\longrightarrow A\longrightarrow F\longrightarrow B\longrightarrow0$. First let us introduce some notation: denote by $e:A\longrightarrow B[1]$ the class of the nontrivial extension of $A$ by $B$, and name the arrows $ B\overset{h}{\longrightarrow}E\overset{j}{\longrightarrow}A$. Similarly, let $f:B\longrightarrow A[1]$ be the extension class we fix, and name the arrows $ A\overset{k}{\longrightarrow}F\overset{l}{\longrightarrow}B$. There are three cases, and they are treated by the following three propositions.
\begin{propo}
If $F\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, then we have the following commutative diagram with exact rows and columns. All boundary homomorphisms are $0$ except at $\mathrm{Ext}^{1}(B,A)$, where the two homomorphisms $\mathrm{Ext}^{1}(F,A)\longleftarrow\mathrm{Ext}^{1}(B,A)\longrightarrow\mathrm{Ext}^{1}(B,F)$ have the same $1$-dimensional kernel $\mathbb{C}f$.
\label{cao}
\end{propo}
\begin{center}
$\begin{CD}
\mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{1}(F,A)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(A,A)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,F)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(F,F)=\mathbb{C}^{16} @>>> \mathrm{Ext}^{1}(A,F)=\mathbb{C}^{4}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{1}(F,B)=\mathbb{C}^{4} @>>> \mathrm{Ext}^{1}(A,B)=\mathbb{C}\\
@VVV @V0VV @V0VV\\
0 @>>> \mathrm{Ext}^{2}(F,A)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,A)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,F)=\mathbb{C}^{4} @>>> \mathrm{Ext}^{2}(A,F)=\mathbb{C}^{4}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,B)=\mathbb{C} @>>> \mathrm{Ext}^{2}(A,B)=\mathbb{C}
\end{CD}$
\end{center}
\begin{proof}
We show that the diagram holds if and only if $F\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$. If the diagram holds, then $\theta_{F}\neq0$, so we can find $\zeta\in\mathrm{Ext}^{1}(F,F)$ such that $e=l[1]\circ\zeta\circ k$. Now we have $f\circ e[-1]=f\circ l\circ\zeta[-1]\circ k[-1]=0$ because $f\circ l=0$. This means $f:B\longrightarrow A[1]$ factors through $h:B\longrightarrow E$, i.e. $f=x\circ h$ for some $x:E\longrightarrow A[1]$. On the other hand, from the diagram in Lemma \ref{zheteng} we see that $\mathrm{Ext}^{1}(E,E)\overset{j_{*}}{\longrightarrow}\mathrm{Ext}^{1}(E,A)$ is surjective, hence $x:E\longrightarrow A[1]$ lifts to some $\xi:E\longrightarrow E[1]$. So we have $f=j[1]\circ\xi\circ h$, and $f$ is in the image of $\theta_{E}$. By Proposition \ref{KS}, this means $F\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$. Conversely, if $F\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, then we can write $f=j[1]\circ\xi\circ h$ for some nontrivial $\xi:E\longrightarrow E[1]$. Then $f[1]\circ e=j[2]\circ\xi[1]\circ h[1]\circ e=0$ because $h[1]\circ e=0$. This means $e:A\longrightarrow B[1]$ factors through $l[1]:F[1]\longrightarrow B[1]$, i.e. $e=l[1]\circ z$ for some $z:A\longrightarrow F[1]$. On the other hand, $\mathrm{Ext}^{1}(F,F)\overset{k^{*}}{\longrightarrow}\mathrm{Ext}^{1}(A,F)$ is surjective because its cokernel injects into $\mathrm{Ext}^{2}(B,F)=0$. This implies that $z=\zeta\circ k$ for some $\zeta:F\longrightarrow F[1]$. So we have $e=l[1]\circ\zeta\circ k$, and $e$ is in the image of $\theta_{F}$. Therefore $\theta_{F}\neq0$. By Corollary \ref{ciyao}, the kernel of $\theta_{F}$ is $T_{\mathbf{P},F}$, which is $15$-dimensional since $\mathbf{P}$ is a $\mathbb{P}^{9}$-bundle over $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$. Hence $\mathrm{Ext}^{1}(F,F)=\mathbb{C}^{16}$. The rest of the diagram follows automatically from exactness.
\end{proof}
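As a consistency check on the diagram, bilinearity of the Euler pairing together with Lemma \ref{diao} gives
\begin{equation*}
\chi(F,F)=\chi(A,A)+\chi(A,B)+\chi(B,A)+\chi(B,B)=1+0-10-2=-11,
\end{equation*}
which agrees with $1-16+4-0=-11$ read off from the diagram, using $\mathrm{Hom}(F,F)=\mathbb{C}$ (as $F$ is stable) and $\mathrm{Ext}^{3}(F,F)=0$.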
\begin{propo}
If $F\in \mathbb{P}(\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{G}_{H},\mathcal{F}_{H})^{*})\setminus\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, then we have the following commutative diagram with exact rows and columns. All boundary homomorphisms are $0$ except at $\mathrm{Ext}^{1}(B,A)$, where the two homomorphisms $\mathrm{Ext}^{1}(F,A)\longleftarrow\mathrm{Ext}^{1}(B,A)\longrightarrow\mathrm{Ext}^{1}(B,F)$ have the same $1$-dimensional kernel $\mathbb{C}f$.
\end{propo}
\begin{center}
$\begin{CD}
\mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{1}(F,A)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(A,A)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,F)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(F,F)=\mathbb{C}^{15} @>>> \mathrm{Ext}^{1}(A,F)=\mathbb{C}^{3}\\
@VVV @VVV @V0VV\\
\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{1}(F,B)=\mathbb{C}^{4} @>>> \mathrm{Ext}^{1}(A,B)=\mathbb{C}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,A)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,A)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,F)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,F)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,B)=\mathbb{C} @>>> \mathrm{Ext}^{2}(A,B)=\mathbb{C}\\
\end{CD}$
\end{center}
\begin{proof}
By the proof of the previous proposition, we know that $\theta_{F}=0$, since $F$ is not in $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$. Therefore $\mathrm{Ext}^{1}(F,F)=\mathbb{C}^{15}$. By Lemma \ref{diao}, we know $\mathrm{Ext}^{1}(A,B)=\mathbb{C}$, since $F$ is mapped into $H$ under the bundle projection $\mathbf{P}\longrightarrow\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$. The rest of the diagram then follows automatically from exactness.
\end{proof}
\begin{propo}
If $F\in \mathbf{P}\setminus\mathbb{P}(\mathscr{E}xt^{1}_{\pi_{H}}(\mathcal{G}_{H},\mathcal{F}_{H})^{*})$, then we have the following commutative diagram with exact rows and columns. All boundary homomorphisms are $0$ except at $\mathrm{Ext}^{1}(B,A)$, where the two homomorphisms $\mathrm{Ext}^{1}(F,A)\longleftarrow\mathrm{Ext}^{1}(B,A)\longrightarrow\mathrm{Ext}^{1}(B,F)$ have the same $1$-dimensional kernel $\mathbb{C}f$.
\end{propo}
\begin{center}
$\begin{CD}
\mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{1}(F,A)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(A,A)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,F)=\mathbb{C}^{12} @>>> \mathrm{Ext}^{1}(F,F)=\mathbb{C}^{15} @>>> \mathrm{Ext}^{1}(A,F)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{1}(F,B)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{1}(A,B)=0\\
@VVV @V0VV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,A)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,A)=\mathbb{C}^{3}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,F)=\mathbb{C}^{4} @>>> \mathrm{Ext}^{2}(A,F)=\mathbb{C}^{4}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(F,B)=\mathbb{C} @>>> \mathrm{Ext}^{2}(A,B)=\mathbb{C}\\
\end{CD}$
\end{center}
\begin{proof}
Since $F$ is not in $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, we have $\theta_{F}=0$ and $\mathrm{Ext}^{1}(F,F)=\mathbb{C}^{15}$. By Lemma \ref{diao}, we know $\mathrm{Ext}^{1}(A,B)=0$, since $F$ is mapped outside $H$ under the bundle projection $\mathbf{P}\longrightarrow\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$. The rest of the diagram then follows automatically from exactness.
\end{proof}
\begin{remark}
From the above propositions, we see that for $F\in \mathbf{P}\setminus\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, $\mathbf{P}$ is smooth at $F$ and $\mathrm{dim}\, T_{\mathbf{P},F}=\mathrm{dim}\,T_{\mathbf{M}_{2},F}=15$. By Proposition \ref{zhongyao} (2), $T_{\varphi_{F},F}$ is injective. This implies $\varphi_{F}$ is a local isomorphism at $F$ and $\mathbf{M}_{2}$ is smooth at $F$.
\label{fadian}
\end{remark}
\noindent\textbf{Elementary modification.} In this subsection, we construct a flat family of $\lambda_{2}$-stable objects on the blow-up of $\mathbf{M}_{1}$ along $H$. The key is to perform a so-called elementary modification on the pullback of the universal family of $\lambda_{1}$-stable objects along the exceptional divisor, with respect to the extension $(1)$ in Proposition \ref{dadiao}.
Let us first introduce some notation: denote the blow-up of $\mathbf{M}_{1}$ along $H$ by $\mathbf{B}$, the blow-up morphism $\mathbf{B}\times\mathbb{P}^{3}\longrightarrow \mathbf{M}_{1}\times\mathbb{P}^{3}$ by $b$, and its restriction to the exceptional divisor $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}\longrightarrow H\times\mathbb{P}^{3}$ by $b_{H}$. Denote the universal family of $\lambda_{1}$-stable objects on $\mathbf{M}_{1}\times\mathbb{P}^{3}$ by $\mathcal{U}_{1}$. Then $\mathcal{U}_{1}|_{H\times\mathbb{P}^{3}}$ and $\mathcal{U}_{E}$ both induce the embedding $\varphi_{E}:H\longrightarrow\mathbf{M}_{1}$, so they differ by tensoring with the pullback of a line bundle from $H$ via the projection. Assume $\mathcal{U}_{1}|_{H\times\mathbb{P}^{3}}=\mathcal{U}_{E}\otimes\pi_{H}^{*}\mathcal{L}'$ for some line bundle $\mathcal{L}'$ on $H$. Consider the composition of the restriction map with the pullback by $b_{H}$ of the surjection in (1):
\begin{equation}
b^{*}\mathcal{U}_{1}\twoheadrightarrow b^{*}\mathcal{U}_{1}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}=b_{H}^{*}\mathcal{U}_{E}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\twoheadrightarrow b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\nonumber
\end{equation}
Denote the kernel of this composition by $\mathcal{K}$; then we have:
\begin{propo}
The sheaf $\mathcal{K}$ is a flat family of $\lambda_{2}$-stable objects.
\end{propo}
\begin{proof}
$\mathcal{K}$ is a flat family of $\lambda_{2}$-stable objects outside the exceptional divisor because it agrees with $b^{*}\mathcal{U}_{1}$ there. If we restrict the exact sequence $0\longrightarrow\mathcal{K}\longrightarrow b^{*}\mathcal{U}_{1}\longrightarrow b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow0$ to the exceptional divisor $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}$, we get
\begin{align*}
0\longrightarrow\mathscr{T}or^{1}(b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}',\mathcal{O}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}})\longrightarrow\mathcal{K}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}&\longrightarrow\\b_{H}^{*}\mathcal{U}_{E}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'&\longrightarrow0\nonumber
\end{align*}
On the other hand, tensoring $b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'$ to the exact sequence $0\longrightarrow\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\longrightarrow\mathcal{O}\longrightarrow\mathcal{O}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\longrightarrow0$, we have
\begin{align*}
0\longrightarrow\mathscr{T}or^{1}(b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}',\mathcal{O}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}})\overset{=}{\longrightarrow} b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\\\overset{0}{\longrightarrow} b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\overset{=}{\longrightarrow} b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow0.\nonumber
\end{align*}
Hence
\begin{align*}
\mathscr{T}or^{1}(b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}',\mathcal{O}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}})&=b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\\&= b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{N}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}^{*}.
\end{align*}
Also notice that the kernel of
\begin{equation*}
b_{H}^{*}\mathcal{U}_{E}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'
\end{equation*} is $b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'$, so $\mathcal{K}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}$ satisfies
\begin{align}
0\longrightarrow b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{N}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}^{*}\longrightarrow\mathcal{K}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}&\longrightarrow\nonumber\\ b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'&\longrightarrow0.
\end{align}
This means that on each fiber $x\times\mathbb{P}^{3}$, the restriction $\mathcal{K}_{x}$ is an extension of $B$ by $A$. In particular $\mathcal{K}_{x}$ has the same Chern character as the other fibers; therefore $\mathcal{K}$ is flat, since $\mathbf{B}$ is smooth (hence reduced). To prove it is a family of $\lambda_{2}$-stable objects, we need to show that $\mathcal{K}_{x}$ is a nontrivial extension of $B$ by $A$. Indeed, since $x\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$ represents a nonzero normal direction of $H$ in $\mathbf{M}_{1}$, we expect the extension class of $\mathcal{K}_{x}$ to be $\theta_{E}(\mathrm{KS}(x))\in\mathrm{Ext}^{1}(B,A)$, which is nonzero by Proposition \ref{KS}. This is indeed the case, because $\mathcal{K}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}$ can be interpreted in the following way: first we use the injection
\begin{equation*}
b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow b_{H}^{*}\mathcal{U}_{E}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'
\end{equation*}
to pull back the exact sequence
\begin{equation*}
0\longrightarrow b^{*}\mathcal{U}_{1}\otimes\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\longrightarrow b^{*}\mathcal{U}_{1}\longrightarrow b_{H}^{*}\mathcal{U}_{E}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow0,
\end{equation*}we get
\begin{equation*}
0\longrightarrow b^{*}\mathcal{U}_{1}\otimes\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\longrightarrow\mathcal{K}\longrightarrow b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\longrightarrow0.
\end{equation*}Then we push out the resulting exact sequence using the surjection
\begin{equation*}
b^{*}\mathcal{U}_{1}\otimes\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}\longrightarrow b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{I}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}=b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{N}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}^{*},
\end{equation*}
we get $(3)$. On a fiber $x\times\mathbb{P}^{3}$, this means that we first take an extension
\begin{equation*}
0\longrightarrow E\longrightarrow G\longrightarrow E\longrightarrow0
\end{equation*}representing $x\in\mathrm{Ext}^{1}(E,E)$, then do a pullback using $B\longrightarrow E$ followed by a pushout using $E\longrightarrow A$. The resulting extension
\begin{equation*}
0\longrightarrow A\longrightarrow\mathcal{K}_{x}\longrightarrow B\longrightarrow0
\end{equation*}is exactly $\theta_{E}(\mathrm{KS}(x))$. This shows that $\mathcal{K}$ is a flat family of $\lambda_{2}$-stable objects.
\end{proof}
If we denote the induced morphism of $\mathcal{K}$ by $\delta:\mathbf{B}\longrightarrow \mathbf{M}_{2}$, then
\begin{propo}
(1) The induced morphism $\delta$ is an isomorphism outside $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, and the restriction $\delta|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}$ coincides with $\varphi_{F}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}$;
(2) The induced morphism $\delta$ is injective on the level of sets and Zariski tangent spaces.
\label{ruyao}
\end{propo}
\begin{proof}
$\delta$ is an isomorphism outside $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$ because $\mathcal{K}$ agrees with $\mathcal{U}_{1}$ there. On the other hand, under the identification
\begin{align*}
&\mathrm{Ext}^{1}\left(b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}',b_{H}^{*}\mathcal{F}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L}'\otimes\mathcal{N}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}^{*}\right)\\&=\mathrm{Ext}^{1}\left(b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L},b_{H}^{*}\mathcal{F}_{H}\otimes\mathcal{N}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}}^{*}\right)\nonumber\\&=\mathrm{H}^{0}\left(\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),\mathscr{E}xt_{\pi_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}}^{1}\left(b_{H}^{*}\mathcal{G}_{H}\otimes b_{H}^{*}\pi_{H}^{*}\mathcal{L},b_{H}^{*}\mathcal{F}_{H}\otimes\pi_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}^{*}\mathcal{O}_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}(1)\right)\right)\\&=\mathrm{H}^{0}\left(H,\mathscr{E}xt_{\pi_{H}}^{1}\left(\mathcal{G}_{H}\otimes\pi_{H}^{*}\mathcal{L},\mathcal{F}_{H}\right)\otimes\mathcal{N}_{H/\mathbf{M}_{1}}^{*}\right)\\&=\mathrm{Hom}\left(\mathcal{N}_{H/\mathbf{M}_{1}},\mathscr{E}xt_{\pi_{H}}^{1}\left(\mathcal{G}_{H}\otimes\pi_{H}^{*}\mathcal{L},\mathcal{F}_{H}\right)\right),
\end{align*}
\noindent the extension $(3)$ corresponds to the injection $i$ from $\mathcal{N}_{H/\mathbf{M}_{1}}$ to $\mathscr{E}xt_{\pi_{H}}^{1}(\mathcal{G}_{H}\otimes\pi_{H}^{*}\mathcal{L},\mathcal{F}_{H})$ constructed in Proposition \ref{duodiao} via the Kodaira-Spencer map. Similarly, in Proposition \ref{juru}, the extension $(2)$ corresponds to the identity $id$ in $\mathrm{Hom}(\mathscr{E}xt^{1}_{\pi}(\mathcal{G},\mathcal{F}),\mathscr{E}xt^{1}_{\pi}(\mathcal{G},\mathcal{F}))=\mathrm{Ext}^{1}(h^{*}\mathcal{G},h^{*}\mathcal{F}\otimes\pi_{\mathbf{P}}^{*}\mathcal{O}_{\mathbf{P}}(1))$. Since $i$ is the restriction of $id$ to $\mathcal{N}_{H/\mathbf{M}_{1}}$, the extension $(3)$ is a restriction of $(2)$ to $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\times\mathbb{P}^{3}$ up to tensoring with the pullback of a line bundle on $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$. Therefore $\delta|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}=\varphi_{F}|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}$. In particular, $\delta|_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})}$ is injective on the level of Zariski tangent spaces since $\varphi_{F}$ is. To show $\delta$ is injective on the level of Zariski tangent spaces, it only remains to show that the normal direction $v_{x}$ of $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$ in $\mathbf{B}$ at a point $x\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$ is not sent to the image of $T_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),x}$ under $T_{\delta,x}$. Suppose it were, and let $\xi:\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\longrightarrow\mathbf{B}$ represent $v_{x}$. Notice that we have a pullback diagram
\begin{center}
$\begin{CD}
\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}) @>>> \mathbf{P}\\
@VVV @V\varphi_{F}VV\\
\mathbf{B} @>\delta>> \mathbf{M}_{2}
\end{CD}$
\end{center}
since $\delta(\mathbf{B})\cap\varphi_{F}(\mathbf{P})=\delta(\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}))$. Because $T_{\delta,x}(T_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),x})$ is contained in $T_{\varphi_{F},x}$, we can lift $\delta\circ\xi$ to $\xi':\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\longrightarrow \mathbf{P}$ that makes the pullback diagram above commutative, hence $\xi$ factors through $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$. This implies $v_{x}$ is in $T_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),x}$, which is a contradiction.
\end{proof}
\begin{remark}
(1) The last argument also shows that the normal direction $v_{x}$ is not mapped to the image of $T_{\mathbf{P},\mathcal{K}_{x}}$ under $T_{\varphi_{F},F}$. By Corollary $4.7$, $T_{\varphi_{F},F}(T_{\mathbf{P},\mathcal{K}_{x}})$ is the kernel of $\theta_{F}$, so we must have $\theta_{F}(v_{x})\neq0$;
(2) Since $T_{\varphi_{F},F}(T_{\mathbf{P},F})=\mathbb{C}^{15}$ and $T_{\delta,F}(T_{\mathbf{B},F})=\mathbb{C}^{12}$, the pullback diagram in the above proof also implies $T_{\varphi_{F},F}(T_{\mathbf{P},F})\cap T_{\delta,F}(T_{\mathbf{B},F})=T_{\delta,F}(T_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),F})=\mathbb{C}^{11}$.
\label{mafuyu}
\end{remark}
\noindent\textbf{Obstruction computation.} In this subsection, we study the deformation theory of complexes on the intersection of the two irreducible components of $\mathbf{M}_{2}$. We give explicit local equations defining $\mathbf{M}_{2}$ at a point in the intersection. In particular, this will imply the two irreducible components of $\mathbf{M}_{2}$ intersect transversely.
Recall that we have constructed two morphisms $\delta:\mathbf{B}\longrightarrow \mathbf{M}_{2}$ and $\varphi_{F}:\mathbf{P}\longrightarrow \mathbf{M}_{2}$, both of which are injective on the level of sets and Zariski tangent spaces. By the definition of a simple wall-crossing, any $\lambda_{2}$-stable object has to lie in the image of one of the two morphisms. Thus $\mathbf{M}_{2}$ has two irreducible components corresponding to the images of $\delta$ and $\varphi_{F}$. The intersection of the two components is the image of the exceptional divisor $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$ by Proposition \ref{ruyao}. Outside the intersection of the two components, $\mathbf{M}_{2}$ is smooth by Remark \ref{fadian} and Remark \ref{mafuyu} (1). To study the singularities of $\mathbf{M}_{2}$, we fix a $\lambda_{2}$-semistable object $F$ in $\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$; then we have
\begin{propo}
The tangent vectors of $\mathbf{M}_{2}$ at $F$ in the subspaces $T_{\varphi_{F},F}(T_{\mathbf{P},F})$ and $T_{\delta,F}(T_{\mathbf{B},F})$ correspond to versal deformations of $F$.
\label{mua}
\end{propo}
\begin{proof}
Suppose a Zariski tangent vector of $\mathbf{M}_{2}$ at $F$ in $T_{\varphi_{F},F}(T_{\mathbf{P},F})$ is represented by a morphism $\eta:\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\longrightarrow\mathbf{M}_{2}$. Then $\eta$ factors through $\varphi_{F}:\mathbf{P}\longrightarrow\mathbf{M}_{2}$:
\begin{displaymath}
\xymatrix {\mathrm{Spec}\mathbb{C}[\varepsilon]/(\varepsilon^{2})\ar[r]\ar[d]^{\eta'}\ar[dr]^{\eta}&\mathrm{Spec}S\ar[dl]^{\xi} \\\mathbf{P}\ar[r]^{\varphi_{F}}&\mathbf{M}_{2}
}
\end{displaymath}
Let $S$ be a finite-dimensional local Artin $\mathbb{C}$-algebra with a local surjection $S\longrightarrow\mathbb{C}[\varepsilon]/(\varepsilon^{2})$. Since $\mathbf{P}$ is smooth, we can lift $\eta'$ to $\xi:\mathrm{Spec}S\longrightarrow\mathbf{P}$. Composing $\xi$ with $\varphi_{F}$, we get a lift of $\eta$, hence $\eta$ corresponds to a versal deformation. A similar argument works for tangent vectors in $T_{\delta,F}(T_{\mathbf{B},F})$.
\end{proof}
In order to show that $T_{\varphi_{F},F}(T_{\mathbf{P},F})$ and $T_{\delta,F}(T_{\mathbf{B},F})$ exhaust all the versal deformations of $F$, we study the quadratic part of the Kuranishi map $\kappa_{2}:T_{\mathbf{M}_{2},F}\cong\mathrm{Ext}^{1}(F,F)\longrightarrow\mathrm{Ext}^{2}(F,F)$. First we give a decomposition of $T_{\mathbf{M}_{2},F}\cong\mathrm{Ext}^{1}(F,F)$ with respect to some geometric structures. In the blow-up $\mathbf{B}$, we have $T_{\mathbf{B},F}=N_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})/\mathbf{B},F}\oplus T_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),F}$, and $N_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})/\mathbf{B},F}$ is $1$-dimensional. Suppose it is generated by a vector $v_{F}$; then we have
\begin{propo}
The Zariski tangent space $T_{\mathbf{M}_{2},F}\cong\mathrm{Ext}^{1}(F,F)$ has the following decomposition
\begin{equation}
T_{\mathbf{M}_{2},F}=\mathbb{C}v_{F}\oplus T_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*}),F}\oplus N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}\oplus T_{H,E}\oplus N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}.
\end{equation}In this decomposition,
\begin{align*}
T_{\delta,F}(T_{\mathbf{B},F})&=\mathbb{C}v_{F}\oplus T_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*}),F}\oplus T_{H,E}\\ T_{\varphi_{F},F}(T_{\mathbf{P},F})&=T_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*}),F}\oplus N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}\oplus T_{H,E}\oplus N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}
\end{align*}
\label{miao}
\end{propo}
\begin{proof}
By Remark \ref{mafuyu} (1), $\theta_{F}(v_{F})\neq0$, hence we can decompose $\mathrm{Ext}^{1}(F,F)=\mathbb{C}v_{F}\oplus T_{\mathbf{P},F}$ because the kernel of $\theta_{F}$ is $T_{\mathbf{P},F}$. On the other hand, $\mathbf{P}=\mathbb{P}(\mathscr{E}xt^{1}_{\pi}(\mathcal{G},\mathcal{F})^{*})$ is a projective bundle over $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$, so we have $T_{\mathbf{P},F}=T_{\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}\oplus T_{\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},(A,B)}$. To decompose further, let $E$ denote the nontrivial extension of $A$ by $B$. By Proposition \ref{KS}, $\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})$ is embedded in $\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*})$ via the Kodaira-Spencer map, so $T_{\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}=T_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*}),F}\oplus N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$. Also notice that the incidence hyperplane $H$ is embedded in $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$, so $T_{\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},(A,B)}=T_{H,E}\oplus N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$. Combining all the decompositions above yields the proposition.
\end{proof}
The importance of this decomposition is that some of the summands are directly related to the $\mathrm{Ext}^{2}$ groups in Lemma \ref{zheteng}, Proposition \ref{KS} and Proposition \ref{cao}, which becomes crucial later when we compute $\kappa_{2}$. Fix a nontrivial $\zeta\in\mathrm{Ext}^{1}(F,F)$. Let $e:A\longrightarrow B[1]$ correspond to the nontrivial extension $E$ and $f:B\longrightarrow A[1]$ correspond to $F$, and label the arrows $A\overset{k}{\longrightarrow}F\overset{l}{\longrightarrow}B$. Then we have the following two lemmas:
\begin{lemma}
The normal space $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$ can be identified with $\mathrm{Ext}^{2}(A,A)$ under a canonical isomorphism. If $\zeta$ belongs to $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$ in (4), then $\zeta=k[1]\circ t\circ l$ for some $t\in \mathrm{Ext}^{1}(B,A)$ such that $t[1]\circ e$ is nonzero in $\mathrm{Ext}^{2}(A,A)$.
\label{kao}
\end{lemma}
\begin{proof}
By Lemma \ref{zheteng}, we know that the cokernel of $\theta_{E}:\mathrm{Ext}^{1}(E,E)\longrightarrow\mathrm{Ext}^{1}(B,A)$ is $\mathrm{Ext}^{2}(A,A)$. By Proposition \ref{KS}, we know that the Kodaira-Spencer map $\mathrm{KS}$ induces an isomorphism between the image of $\theta_{E}$ and $N_{H/\mathbf{M}_{1},E}$. On the other hand, $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$ is equal to the quotient $\mathrm{Ext}^{1}(B,A)/N_{H/\mathbf{M}_{1},E}$, so $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}\cong\mathrm{Ext}^{2}(A,A)$. To prove the second statement, we look at the square
\begin{center}
$\begin{CD}
\mathrm{Ext}^{1}(B,A) @>l^{*}>> \mathrm{Ext}^{1}(F,A)\\
@Vk[1]_{*}VV @Vk[1]_{*}VV \\
\mathrm{Ext}^{1}(B,F) @>l^{*}>> \mathrm{Ext}^{1}(F,F)
\end{CD}$
\end{center} in Proposition \ref{cao}. There is an injection $\mathrm{Ext}^{1}(B,A)/\mathbb{C}f\longrightarrow\mathrm{Ext}^{1}(F,F)$, which is the same as $T_{\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}\longrightarrow\mathrm{Ext}^{1}(F,F)$. Notice the fact that $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$ is contained in $T_{\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$, $\zeta$ has to be in $T_{\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$, this means
$\zeta=k[1]\circ t\circ l$ for some $t\in\mathrm{Ext}^{1}(B,A)$. For $\zeta$ to be nontrivial and lying in $\mathrm{Ext}^{2}(A,A)$, $t$ has to be nonzero under the cokernel map $(-)[1]\circ e:\mathrm{Ext}^{1}(B,A)\longrightarrow\mathrm{Ext}^{2}(A,A)
$, so
$t[1]\circ e\neq0$
\end{proof}
\begin{lemma}
The normal space $N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$ can be identified with $\mathrm{Ext}^{2}(A,B)$ under a canonical isomorphism. If $\zeta$ belongs to $N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$ in (4), then $\zeta$ can be completed to the following commutative diagram with $e[1]\circ t+r[1]\circ e\neq0$ in $\mathrm{Ext}^{2}(A,B)$:
\begin{center}
$\begin{CD}
A @>k>> F @>l>> B\\
@VtVV @V\zeta VV @VrVV\\
A[1] @>k[1]>> F[1] @>l[1]>> B[1]
\end{CD}$
\end{center}
\label{ri}
\end{lemma}
\begin{proof}
Recall that $K_{E}$ is the kernel of $\theta_{E}$, and by Proposition \ref{KS} it can be identified with $T_{H,E}$ via the Kodaira-Spencer map. From the diagram in Lemma \ref{zheteng}, we have an exact sequence
$\begin{CD}
0\longrightarrow K_{E}\longrightarrow\mathrm{Ext}^{1}(A,A)\oplus\mathrm{Ext}^{1}(B,B) @>(e[1]\circ-)+(-[1]\circ e)>> \mathrm{Ext}^{2}(A,B)\longrightarrow0
\end{CD}$.
\noindent On the other hand, we have the canonical normal sequence of $H$ embedded in $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$
\begin{equation*}
0\longrightarrow T_{H,E}\longrightarrow T_{\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},(A,B)}\longrightarrow N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}\longrightarrow0,
\end{equation*}
since $\mathrm{Ext}^{1}(A,A)\oplus\mathrm{Ext}^{1}(B,B)$ can also be identified with $T_{\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},(A,B)}$ via the Kodaira-Spencer map, this induces a canonical isomorphism between $N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$ and $\mathrm{Ext}^{2}(A,B)$.
Notice that $N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$ is contained in $T_{\mathbf{P},F}$, and the latter is the kernel of $\theta_{F}$; hence $\theta_{F}(\zeta)=0$. By the universal property of triangles, $\zeta$ can be completed to a commutative diagram:
\begin{center}
$\begin{CD}
A @>k>> F @>l>> B\\
@VtVV @V\zeta VV @VrVV\\
A[1] @>k[1]>> F[1] @>l[1]>> B[1]
\end{CD}$.
\end{center}
Since $\zeta$ is nontrivial, $(t,r)$ has to be sent to a nonzero element in $\mathrm{Ext}^{2}(A,B)$ under the last map of the exact sequence above, therefore $e[1]\circ t+r[1]\circ e\neq0$.
\end{proof}
With respect to the decomposition $(4)$, we let \begin{equation}
\zeta=u_{1}v_{F}+w_{1}+u_{2}s_{1}+u_{3}s_{2}+u_{4}s_{3}+w_{2}+u_{5}s_{4},
\end{equation}where $w_{1}\in T_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*}),F}$, $\{s_{1},s_{2},s_{3}\}$ forms a basis of $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$, $w_{2}\in T_{H,E}$, $\{s_{4}\}$ is a basis of $N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$, and the $u_{i}\in\mathbb{C}$ are coefficients. The expression $(5)$ is inspired by the explicit basis chosen in the proof of [PS85, Lemma 6]. In the next theorem, we will see that the equations cutting out the versal deformations using $(5)$ are the same as those obtained from Piene and Schlessinger's basis in the case of deformations of ideals.
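As a consistency check on the decomposition $(4)$ and the bases in $(5)$ (the following bookkeeping uses $\dim\mathrm{Ext}^{1}(F,F)=16$, the dimensions $15$ and $12$ of $T_{\varphi_{F},F}(T_{\mathbf{P},F})$ and $T_{\delta,F}(T_{\mathbf{B},F})$ from Remark \ref{mafuyu} (2), and the fact that $H$ is a hypersurface in the $6$-dimensional $\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*}$, so $\dim T_{H,E}=5$), the summands in $(4)$ have dimensions
\begin{equation*}
16=\underbrace{1}_{\mathbb{C}v_{F}}+\underbrace{6}_{T_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*}),F}}+\underbrace{3}_{\{s_{1},s_{2},s_{3}\}}+\underbrace{5}_{T_{H,E}}+\underbrace{1}_{\{s_{4}\}},
\end{equation*}
which is compatible with the counts $1+6+5=12$ and $6+3+5+1=15$ in Proposition \ref{miao}.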
\begin{propo} The quadratic part of the Kuranishi map takes the following form with respect to $(5)$:
\begin{equation*}
\kappa_{2}(\zeta)=\zeta\cup\zeta= \sum_{i=1}^{4}u_{1}u_{i+1}(v_{F}+s_{i})\cup(v_{F}+s_{i}),
\end{equation*}
where $\cup$ is the Yoneda pairing of extensions. $\{(v_{F}+s_{i})\cup(v_{F}+s_{i})|i=1,2,3,4\}$ forms a basis of the obstruction space $\mathrm{Ext}^{2}(F,F)$.
\label{gan}
\end{propo}
\begin{proof}
The equality $\kappa_{2}(\zeta)=\zeta\cup\zeta$ is known for complexes in \cite{Ina02}, \cite{Lie06} and \cite{KLS06}. The second equality is a straightforward computation. It only uses the fact that for any $v$ in $T_{\mathbf{B},F}$ or $T_{\mathbf{P},F}$, we have $v\cup v=0$ since $v$ is a versal deformation by Proposition \ref{mua}.
To prove the last statement, we first show that $\{(v_{F}+s_{i})\cup(v_{F}+s_{i})|i=1,2,3\}$ is linearly independent. If not, some nontrivial linear combination satisfies $\sum_{i=1}^{3}a_{i}(v_{F}+s_{i})\cup(v_{F}+s_{i})=0$. We can rewrite this as $v_{F}[1]\circ s+ s[1]\circ v_{F}=0$, where $s=\sum_{i=1}^{3}a_{i}s_{i}$ is a nontrivial first-order deformation of $F$ in $N_{\mathbb{P}(N_{H/\mathbf{M}_{1},E}^{*})/\mathbb{P}(\mathrm{Ext}^{1}(B,A)^{*}),F}$. By Lemma \ref{kao}, we can write $s=k[1]\circ t\circ l$ for some $t\in \mathrm{Ext}^{1}(B,A)$ such that $t[1]\circ e$ is nonzero in $\mathrm{Ext}^{2}(A,A)$. Now
\begin{align}
0&=\left(v_{F}[1]\circ s+s[1]\circ v_{F}\right)\circ k\nonumber\\&=v_{F}[1]\circ k[1]\circ t\circ l\circ k+k[2]\circ t[1]\circ l[1]\circ v_{F}\circ k.\nonumber
\end{align}
Since $l\circ k=0$ and $l[1]\circ v_{F}\circ k=\theta_{F}(v_{F})=e$, we have $k[2]\circ t[1]\circ e=0$. From the diagram in Proposition \ref{cao}, we know that $\mathrm{Ext}^{2}(A,A)\overset{k[2]_{*}}{\longrightarrow}\mathrm{Ext}^{2}(A,F)$ is an injection, hence $t[1]\circ e=0$, which is a contradiction.
It only remains to show that $(v_{F}+s_{4})\cup(v_{F}+s_{4})$ is not a linear combination of $\{(v_{F}+s_{i})\cup(v_{F}+s_{i})|i=1,2,3\}$. For this we will show for $i=1,2,3$
\begin{align*}
l[2]\circ\left((v_{F}+s_{i})\cup(v_{F}+s_{i})\right)&=0,\\ l[2]\circ\left((v_{F}+s_{4})\cup(v_{F}+s_{4})\right)&\neq0.
\end{align*}
By Lemma \ref{kao}, we can assume $s_{i}=k[1]\circ t_{i}\circ l$ for some $t_{i}\in\mathrm{Ext}^{1}(B,A)$ satisfying $t_{i}[1]\circ e\neq0$. Then
\begin{align}
& l[2]\circ((v_{F}+s_{i})\cup(v_{F}+s_{i}))\nonumber\\=& l[2]\circ v_{F}[1]\circ k[1]\circ t_{i}\circ l+l[2]\circ k[2]\circ t_{i}[1]\circ l[1]\circ v_{F}.\nonumber
\end{align}
Since $l[2]\circ v_{F}[1]\circ k[1]=e[1]$ and $l[2]\circ k[2]=0$, we have $l[2]\circ((v_{F}+s_{i})\cup(v_{F}+s_{i}))=e[1]\circ t_{i}\circ l$. Notice that $e[1]\circ t_{i}\in\mathrm{Ext}^{2}(B,B)=0$, so $l[2]\circ((v_{F}+s_{i})\cup(v_{F}+s_{i}))=0$. On the other hand, $s_{4}$ is a nontrivial element in $N_{H/\mathbb{P}^{3}\times(\mathbb{P}^{3})^{*},E}$. By Lemma \ref{ri}, $s_{4}$ can be completed to the following commutative diagram with $e[1]\circ t_{4}+r_{4}[1]\circ e\neq0$ in $\mathrm{Ext}^{2}(A,B)$:
\begin{center}
$\begin{CD}
A @>k>> F @>l>> B\\
@Vt_{4}VV @Vs_{4}VV @Vr_{4}VV\\
A[1] @>k[1]>> F[1] @>l[1]>> B[1]
\end{CD}$
\end{center}
\noindent Now
\begin{align}
& l[2]\circ((v_{F}+s_{4})\cup(v_{F}+s_{4}))\circ k\nonumber\\=& l[2]\circ v_{F}[1]\circ s_{4}\circ k+ l[2]\circ s_{4}[1]\circ v_{F}\circ k\nonumber\\=& l[2]\circ v_{F}[1]\circ k[1]\circ t_{4}+r_{4}[1]\circ l[1]\circ v_{F}\circ k\nonumber\\=& e[1]\circ t_{4}+r_{4}[1]\circ e\neq0.\nonumber
\end{align}By the diagram in Proposition \ref{cao}, $k^{*}:\mathrm{Ext}^{2}(F,B)\longrightarrow\mathrm{Ext}^{2}(A,B)$ is an isomorphism, hence $l[2]\circ((v_{F}+s_{4})\cup(v_{F}+s_{4}))\neq0$.
\end{proof}
\begin{corol}
The two irreducible components of $\mathbf{M}_{2}$ intersect transversely.
\end{corol}
\begin{proof}
Proposition \ref{gan} shows that $\kappa_{2}^{-1}(0)$ is cut out by the equations $u_{1}u_{2},u_{1}u_{3},u_{1}u_{4},u_{1}u_{5}$ in $\mathrm{Ext}^{1}(F,F)$, so the first order deformations that can be lifted to the second order form a union $\mathbb{C}^{15}\cup\mathbb{C}^{12}$ with $\mathbb{C}^{15}\cap\mathbb{C}^{12}=\mathbb{C}^{11}$ in $\mathrm{Ext}^{1}(F,F)$. But $T_{\varphi_{F},F}(T_{\mathbf{P},F})\cup T_{\delta,F}(T_{\mathbf{B},F})=\mathbb{C}^{15}\cup\mathbb{C}^{12}$ and $T_{\varphi_{F},F}(T_{\mathbf{P},F})\cap T_{\delta,F}(T_{\mathbf{B},F})=T_{\varphi_{F},F}(T_{\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*}),F})=\mathbb{C}^{11}$ by Remark \ref{mafuyu} (2), so we have indeed exhibited all versal deformations of $F$, and the two components of $\mathbf{M}_{2}$ intersect transversely.
\end{proof}
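Concretely, the equations in the proof give the following local model of $\mathbf{M}_{2}$ at $F$, written in coordinates $u_{1},\ldots,u_{16}$ adapted to the decomposition $(4)$ (this explicit restatement only unwinds the computation above):
\begin{equation*}
V(u_{1}u_{2},u_{1}u_{3},u_{1}u_{4},u_{1}u_{5})=\{u_{1}=0\}\cup\{u_{2}=u_{3}=u_{4}=u_{5}=0\}\subseteq\mathbb{C}^{16},
\end{equation*}
the union of a $\mathbb{C}^{15}$ and a $\mathbb{C}^{12}$ meeting transversely along the $\mathbb{C}^{11}$ where $u_{1}=\cdots=u_{5}=0$.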
We end this section by proving $\mathbf{M}_{2}$ is a projective variety.
\begin{theorem}
The moduli space $\mathbf{M}_{2}$ is a projective variety.
\end{theorem}
\begin{proof}
$\mathbf{M}_{2}$ is smooth outside the intersection of its two components by Remark \ref{fadian} and Remark \ref{mafuyu} (1). For any $F\in\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})$, since no first order deformation other than a versal one can be lifted to the second order, $\mathbf{M}_{2}$ is reduced at $F$. This proves $\mathbf{M}_{2}$ is reduced. Now we can view $\mathbf{M}_{2}$ as the pushout of the closed embeddings $\mathbf{B}\longleftarrow\mathbb{P}(\mathcal{N}_{H/\mathbf{M}_{1}}^{*})\longrightarrow \mathbf{P}$. In general a pushout diagram does not exist in the category of schemes, but it does exist when the two morphisms are closed embeddings [SchK05, Lemma 3.9]. This proves that $\mathbf{M}_{2}$ is a scheme. The fact that $\mathbf{M}_{2}$ is projective and of finite type will follow from the analysis of wall-crossing $(3)$ in the next section, where we prove that $\mathbf{M}_{3}$ is a blow-up of $\mathbf{M}_{2}$ along a smooth center contained in $\varphi_{F}(\mathbf{P})\setminus\delta(\mathbf{B})$. Since $\mathbf{M}_{3}$ is the Hilbert scheme, it is automatically projective and of finite type, so $\mathbf{M}_{2}$ is a projective variety.
\end{proof}
\section{The Third Wall-crossing}
In this section, we study the third wall-crossing and prove $(4)$ in the Main theorem. To be more precise, we will prove the following theorem. Let $V$ be a plane in $\mathbb{P}^{3}$ and $q$ be a point on $V$.
\begin{theorem}
The third wall-crossing is simple with a family of pairs of destabilizing objects $(\mathcal{O}(-1)$, $\mathcal{I}_{q/V}(-3))$.
The moduli space of semistable objects after the wall-crossing is the Hilbert scheme of twisted cubics $\mathbf{M}_{3}$. $\mathbf{M}_{3}$ is also the blow-up of $\mathbf{M}_{2}$ along a $5$-dimensional smooth center contained in $\mathbf{P}\setminus\mathbf{B}$.
\label{zhu2}
\end{theorem}
We fix the family of pairs of destabilizing objects to be
\begin{equation}
\left(A,B\right)=\left(\mathcal{O}(-1),\mathcal{I}_{q/V}(-3)\right).\nonumber
\end{equation}
The method is almost the same as in the previous section, but the situation here is easier, since we expect no extra components or singularities to occur and $\mathbf{M}_{3}$ is a blow-up of $\mathbf{M}_{2}$ along a smooth center.
The following Hom and Ext group computations are straightforward.
\begin{lemma}
$\mathrm{Hom}(A,B)=\mathrm{Hom}(B,A)=0$, $\mathrm{Hom}(A,A)=\mathrm{Hom}(B,B)=\mathbb{C}$;
$\mathrm{Ext}^{1}(A,B)=\mathbb{C}$, $\mathrm{Ext}^{1}(A,A)=0$, $\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{5}$, $\mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10}$;
$\mathrm{Ext}^{2}(A,B)=0$, $\mathrm{Ext}^{2}(B,B)=\mathbb{C}^{2}$, $\mathrm{Ext}^{2}(A,A)=0$, $\mathrm{Ext}^{2}(B,A)=\mathbb{C}$;
$\mathrm{Ext}^{3}(A,B)=\mathrm{Ext}^{3}(A,A)=\mathrm{Ext}^{3}(B,B)=\mathrm{Ext}^{3}(B,A)=0$.
\end{lemma}
Similar to Proposition \ref{dadiao}, the incidence hyperplane $H$ is the moduli space of nontrivial extensions of $A$ by $B$. Similar to Proposition \ref{zhongyao}, we can construct an embedding $\varphi'_{E}:H\longrightarrow \mathbf{M}_{2}$. Since $\mathbf{M}_{2}$ has two irreducible components $\mathbf{B}$ and $\mathbf{P}$, we want to know which component $H$ lies in.
\begin{propo}
Under the induced morphism $\varphi_{E}'$, $H$ is embedded into $\mathbf{P}\setminus\mathbf{B}$.
\end{propo}
\begin{proof}
Take any $E\in H$; we have a nontrivial extension $0\longrightarrow B\longrightarrow E\longrightarrow A\longrightarrow0$.
Applying the long exact sequences for the $\mathrm{Hom}$ functor, we get the following commutative diagram with exact rows and columns, in which all boundary homomorphisms are $0$.
\begin{center}
$\begin{CD}
\mathrm{Ext}^{1}(A,B)=\mathbb{C} @>0>> \mathrm{Ext}^{1}(E,B)=\mathbb{C}^{5} @>>> \mathrm{Ext}^{1}(B,B)=\mathbb{C}^{5}\\
@V0VV @VVV @VVV\\
\mathrm{Ext}^{1}(A,E)=0 @>>> \mathrm{Ext}^{1}(E,E) @>>> \mathrm{Ext}^{1}(E,B)\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(A,A)=0 @>>> \mathrm{Ext}^{1}(E,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{2}(A,B)=0 @>>> \mathrm{Ext}^{2}(E,B)=\mathbb{C}^{2} @>>> \mathrm{Ext}^{2}(B,B)=\mathbb{C}^{2}\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(E,E) @>>> \mathrm{Ext}^{2}(B,E)\\
@VVV @VVV @VVV\\
0 @>>> \mathrm{Ext}^{2}(E,A)=\mathbb{C} @>>> \mathrm{Ext}^{2}(B,A)=\mathbb{C}\\
\end{CD}$
\end{center}
\noindent If $E\in \mathbf{B}\setminus\mathbf{P}$, then $\mathrm{Ext}^{1}(E,E)=\mathbb{C}^{12}$, but this violates the exactness of the central column of the above diagram. If $E\in \mathbf{P}\cap\mathbf{B}$, then by Proposition $4.9$ we have $\mathrm{Ext}^{1}(E,E)=\mathbb{C}^{16}$ and $\mathrm{Ext}^{2}(E,E)=\mathbb{C}^{4}$, which also does not fit into the above diagram. Hence $E\in \mathbf{P}\setminus\mathbf{B}$.
\end{proof}
\begin{remark}
This proposition means that the third wall-crossing only modifies one irreducible component of $\mathbf{M}_{2}$, namely $\mathbf{P}$. It does not touch the other component $\mathbf{B}$.
\end{remark}
On the other hand, we can construct a morphism $\varphi_{F}':\mathbf{P}'\longrightarrow \mathbf{M}_{3}$ that is injective on the level of sets and Zariski tangent spaces, where $\mathbf{P}'$ is a $\mathbb{P}^{9}$-bundle over $H$ parametrizing all nontrivial extensions of $B$ by $A$. This implies that for any $F$ in the image of $\varphi_{F}'$, $\mathrm{Ext}^{1}(F,F)$ is at least $14$-dimensional since $\mathrm{dim}\mathbf{P}'=14$ and $\mathbf{P}'$ is smooth.
If we denote the blow-up of $\mathbf{M}_{2}$ along $H$ by $\mathbf{B}'$, then we can perform the elementary modification on the pullback of the universal family over $\mathbf{M}_{2}$ along the exceptional divisor of $\mathbf{B}'$ to get a flat family $\mathcal{K}'$. Similar to Proposition \ref{ruyao}, $\mathcal{K}'$ induces a morphism $\delta':\mathbf{B}'\longrightarrow \mathbf{M}_{3}$ which is injective on the level of sets and Zariski tangent spaces.
\begin{theorem}
The induced morphism $\delta'$ is an isomorphism.
\label{haoxiang}
\end{theorem}
\begin{proof}
$\mathcal{K}'$ is the same as the universal family over $\mathbf{M}_{2}$ outside the exceptional divisor, so $\delta'$ is an isomorphism outside the exceptional divisor. For any $F$ lying in the exceptional divisor, $\delta'$ induces an injection $T_{\mathbf{B}',F}\longrightarrow\mathrm{Ext}^{1}(F,F)=T_{\mathbf{M}_{3},F}$. To prove $\delta'$ is an isomorphism at $F$, we only need to show $\mathrm{Ext}^{1}(F,F)=\mathbb{C}^{15}=T_{\mathbf{B}',F}$. Since we have an exact sequence $0\longrightarrow A\longrightarrow F\longrightarrow B\longrightarrow0$, this can be done by writing down the long exact sequences for the $\mathrm{Hom}$ functor again.
\begin{center}
$\begin{CD}
\mathrm{Ext}^{1}(B,A)=\mathbb{C}^{10} @>>> \mathrm{Ext}^{1}(F,A)=\mathbb{C}^{9} @>>> \mathrm{Ext}^{1}(A,A)=0\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,F)=\mathbb{C}^{14} @>>> \mathrm{Ext}^{1}(F,F)=\mathbb{C}^{15} @>>> \mathrm{Ext}^{1}(A,F)=\mathbb{C}\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{1}(B,B)=\mathbb{C}^{5} @>>> \mathrm{Ext}^{1}(F,B)=\mathbb{C}^{6} @>>> \mathrm{Ext}^{1}(A,B)=\mathbb{C}\\
@V0VV @V0VV @VVV\\
\mathrm{Ext}^{2}(B,A)=\mathbb{C} @>>> \mathrm{Ext}^{2}(F,A)=\mathbb{C} @>>> \mathrm{Ext}^{2}(A,A)=0\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{2}(B,F)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(F,F)=\mathbb{C}^{3} @>>> \mathrm{Ext}^{2}(A,F)=0\\
@VVV @VVV @VVV\\
\mathrm{Ext}^{2}(B,B)=\mathbb{C}^{2} @>>> \mathrm{Ext}^{2}(F,B)=\mathbb{C}^{2} @>>> \mathrm{Ext}^{2}(A,B)=0
\end{CD}$
\end{center}
\end{proof}
\end{document}
\begin{document}
\date{}
\title{{\bf Spectra originated from semi-B-Fredholm theory and commuting perturbations}}
\large
\begin{quote}
{\bf Abstract:} ~In [\cite{Burgos-Kaidi-Mbekhta-Oudghiri}], Burgos, Kaidi, Mbekhta
and Oudghiri provided an
affirmative answer to a question of Kaashoek and Lay and proved that
an operator $F$ is power finite rank if and only
if $\sigma_{dsc}(T+F) =\sigma_{dsc}(T)$ for every operator $T$ commuting with
$F$. Later, several authors extended this result to the
essential descent spectrum, the left Drazin spectrum and the left essentially Drazin spectrum.
In this paper, using the theory of
operators with eventual topological uniform descent and the technique
used in [\cite{Burgos-Kaidi-Mbekhta-Oudghiri}], we generalize this
result to various spectra originated from semi-B-Fredholm theory. As
immediate consequences, we give affirmative answers to several
questions posed by Berkani, Amouch and Zariouh. Besides, we provide
a general framework which allows us to derive in a unified way
commuting perturbational results of Weyl-Browder type theorems and
properties (generalized or not). These commuting perturbational
results, in particular, improve many recent results of
[\cite{Berkani-Amouch}, \cite{Berkani-Zariouh partial},
\cite{Berkani Zariouh}, \cite{Berkani Zariouh Functional Analysis},
\cite{Rashid gw}] by removing certain extra assumptions.
\\
{\bf 2010 Mathematics Subject Classification:} primary 47A10, 47A11; secondary 47A53, 47A55 \\
{\bf Key words:} Semi-B-Fredholm operators; eventual topological
uniform descent; power finite rank; commuting perturbation.
\end{quote}
\section{Introduction}
\quad\,~In 1972, Kaashoek and Lay showed in [\cite{Kaashoek-Lay}] that the descent spectrum
is invariant under any commuting power finite rank perturbation $F$
(that is, $F^{n}$ is of finite rank for some $n \in \mathbb{N}$). They also
conjectured that this perturbation property characterizes
such operators $F$. In 2006, Burgos, Kaidi, Mbekhta and Oudghiri
provided in [\cite{Burgos-Kaidi-Mbekhta-Oudghiri}] an
affirmative answer to this question and proved that
an operator $F$ is power finite rank if and only
if $\sigma_{dsc}(T+F) =\sigma_{dsc}(T)$ for every operator $T$ commuting with
$F$. Later, Fredj generalized this result in [\cite{Bel}] to the
essential descent spectrum. Fredj, Burgos and Oudghiri extended this result in [\cite{Bel-Burgos-Oudghiri 3}] to
the left Drazin spectrum and the left essentially Drazin spectrum.
The present paper is concerned with commuting power finite rank
perturbations of semi-B-Fredholm operators. As seen in Theorem
\ref{2.19} (i.e., the main result), we generalize the previous results
to various spectra originated from semi-B-Fredholm theory. The proof
of our main result relies mainly on the theory of operators
with eventual topological uniform descent and the technique used in
[\cite{Burgos-Kaidi-Mbekhta-Oudghiri}].
Spectra originated from semi-B-Fredholm theory include, in
particular, the upper semi-B-Weyl spectrum $\sigma_{USBW}$ (resp.
the B-Weyl spectrum $\sigma_{BW}$) which is closely related to
generalized a-Weyl's theorem, generalized a-Browder's theorem,
property $(gw)$ and property $(gb)$ (resp. generalized Weyl's
theorem, generalized Browder's theorem, property $(gaw)$ and
property $(gab)$). Concerning the upper semi-B-Weyl spectrum
$\sigma_{USBW}$, Berkani and Amouch posed in [\cite{Berkani-Amouch}]
the following question:
\begin {question}${\label{1.1}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Do we always have
$$ \sigma_{USBW}(T+N) = \sigma_{USBW}(T) \ ?$$
\end{question}
Similarly, for the B-Weyl spectrum $\sigma_{BW}$, Berkani and Zariouh posed in [\cite{Berkani
Zariouh}] the following question:
\begin {question}${\label{1.2}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Do we always have
$$ \sigma_{BW}(T+N) = \sigma_{BW}(T) \ ?$$
\end{question}
Recently, Amouch, Zguitti, Berkani and Zariouh have given partial
answers in [\cite{Amouch partial}, \cite{AmouchM ZguittiH},
\cite{Berkani-Amouch}, \cite{Berkani-Zariouh partial}] to Question
\ref{1.1}. As immediate consequences of our main result (see Theorem
\ref{2.19}), we provide positive answers to Questions \ref{1.1} and
\ref{1.2} and some other questions posed by Berkani and Zariouh (see
Corollaries \ref{3.2}, \ref{3.4} and \ref{3.9}). Besides, we provide
a general framework which allows us to derive, in a unified way,
commuting perturbation results for Weyl-Browder type theorems and
properties (generalized or not). These commuting perturbation
results, in particular, improve many recent results of
[\cite{Berkani-Amouch}, \cite{Berkani-Zariouh partial},
\cite{Berkani Zariouh}, \cite{Berkani Zariouh Functional Analysis},
\cite{Rashid gw}] by removing certain extra assumptions (see
Corollary \ref{3.10} and Remark \ref{3.11}).
Throughout this paper, let $\mathcal{B}(X)$ denote the Banach
algebra of all bounded linear operators acting on an
infinite-dimensional
complex Banach space $X$, and let $\mathcal{F}(X)$ denote its ideal of finite rank operators on $X$.
For an operator $T \in \mathcal{B}(X)$, let $T^{*}$ denote its dual,
$\mathcal {N}(T)$ its kernel, $\alpha(T)$ its nullity, $\mathcal
{R}(T)$ its range, $\beta(T)$ its defect, $\sigma(T)$ its spectrum
and $\sigma_{a}(T)$ its approximate point spectrum. If the range
$\mathcal {R}(T)$ is closed and $\alpha(T) < \infty$ (resp.
$\beta(T) < \infty$), then $T$ is said to be $upper$
$semi$-$Fredholm$ (resp. $lower$ $semi$-$Fredholm$). If $T \in
\mathcal{B}(X)$ is both upper and lower semi-Fredholm, then $T$ is
said to be $Fredholm$. If $T \in \mathcal{B}(X)$ is either upper or
lower semi-Fredholm, then $T$ is said to be $semi$-$Fredholm$, and
its index is defined by
\begin{upshape}ind\end{upshape}$(T)$ = $\alpha(T)-\beta(T)$.
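As a quick sanity check (our own finite-dimensional illustration, not part of the theory): by rank--nullity, $\alpha(T)=\beta(T)=n-\mathrm{rank}(T)$ for every operator on $\mathbb{C}^{n}$, so the index always vanishes there; nonzero indices, such as that of the unilateral shift on $\ell^{2}$, require infinite-dimensional $X$. The function name below is ours.

```python
import numpy as np

def nullity_defect_index(T):
    """alpha(T) = dim N(T), beta(T) = dim X/R(T), ind(T) = alpha - beta,
    for a square matrix T acting on X = C^n.  By rank-nullity both
    alpha and beta equal n - rank(T), so ind(T) = 0 in finite
    dimensions; a nonzero index needs dim X = infinity."""
    n = T.shape[0]
    r = np.linalg.matrix_rank(T)
    alpha = n - r          # nullity, by rank-nullity
    beta = n - r           # codimension of the range
    return alpha, beta, alpha - beta

# Example: a rank-2 operator on C^3 has alpha = beta = 1, ind = 0.
T = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])
```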
For each $n \in \mathbb{N}$, we set $c_{n}(T) = \dim \mathcal
{R}(T^{n})/\mathcal {R}(T^{n+1})$ and $c^{'}_{n}(T) = \dim \mathcal
{N}(T^{n+1})/\mathcal {N}(T^{n}).$ It follows from
[\cite{Kaashoek M A 12}, Lemmas 3.1 and 3.2] that, for every $n \in \mathbb{N}$,
$$c_{n}(T) = \dim X / (\mathcal {R}(T) + \mathcal {N}(T^{n})), \ \ \ \ c^{'}_{n}(T) = \dim \mathcal {N}(T)
\cap \mathcal {R}(T^{n}).$$ Hence, it is easy to see that the
sequences $\{c_{n}(T)\}_{n=0}^{\infty}$ and
$\{c^{'}_{n}(T)\}_{n=0}^{\infty}$ are decreasing. Recall that the
$descent$ and the $ascent$ of $T \in \mathcal{B}(X)$ are
$dsc(T)= \inf \{n \in \mathbf{\mathbb{N}}:\mathcal {R}(T^{n})= \mathcal {R}(T^{n+1})\}$
and $asc(T)=\inf \{n \in \mathbf{\mathbb{N}}:\mathcal {N}(T^{n})= \mathcal {N}(T^{n+1})\}$,
respectively (the infimum of an empty set is defined to be $\infty$). That is,
$$dsc(T)=\inf \{n \in \mathbf{\mathbb{N}}:c_{n}(T) = 0 \}$$
and
$$asc(T)=\inf \{n \in \mathbf{\mathbb{N}}:c^{'}_{n}(T) =0 \}.$$
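In finite dimensions the chains $\mathcal{R}(T^{n})$ and $\mathcal{N}(T^{n})$ stabilize at the same step, so $asc(T)=dsc(T)$ always; ascent and descent can differ (or be infinite) only on infinite-dimensional spaces, e.g. for the shifts on $\ell^{2}$. The following Python sketch (our own illustration; the function name is ours) computes this common value via rank stabilization, since $\mathcal{R}(T^{k})=\mathcal{R}(T^{k+1})$ exactly when the ranks agree.

```python
import numpy as np

def asc_dsc(T, tol=1e-9):
    """Ascent/descent of a square matrix via rank stabilization.

    Since R(T^{k+1}) <= R(T^k) and N(T^k) <= N(T^{k+1}), both chains
    stabilize precisely when rank(T^k) == rank(T^{k+1}); in finite
    dimensions this gives asc(T) = dsc(T) (special to matrices)."""
    n = T.shape[0]
    P = np.eye(n)
    ranks = []
    for _ in range(n + 2):                      # ranks of T^0, ..., T^{n+1}
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
        P = P @ T
    for k in range(n + 1):
        if ranks[k] == ranks[k + 1]:
            return k

# A 3x3 nilpotent Jordan block: the kernel/range chains take 3 steps
# to stabilize, so asc = dsc = 3.
J = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
```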
Similarly, the $essential$ $descent$ and the $essential$ $ascent$ of $T \in
\mathcal{B}(X)$ are
$$dsc_{e}(T)= \inf \{n \in \mathbf{\mathbb{N}}:c_{n}(T) <
\infty \}$$
and $$asc_{e}(T)=\inf \{n \in
\mathbf{\mathbb{N}}:c^{'}_{n}(T) < \infty \}.$$ If $asc(T) <
\infty$ and $\mathcal {R}(T^{asc(T)+1})$ is closed, then $T$ is said
to be $left$ $Drazin$ $invertible$. If $dsc(T) < \infty $ and
$\mathcal {R}(T^{dsc(T)})$ is closed, then $T$ is said to be $right$
$Drazin$ $invertible$. If $asc(T) = dsc(T) < \infty $, then $T$ is
said to be $Drazin$ $invertible$. Clearly, $T \in \mathcal{B}(X)$ is
both left and right Drazin invertible if and only if $T$ is Drazin
invertible. If $asc_{e}(T) < \infty$ and $\mathcal
{R}(T^{asc_{e}(T)+1})$ is closed, then $T$ is said to be $left$
$essentially$ $Drazin$ $invertible$. If $dsc_{e}(T) < \infty $ and
$\mathcal {R}(T^{dsc_{e}(T)})$ is closed, then $T$ is said to be
$right$ $essentially$ $Drazin$ $invertible$.
For $T \in \mathcal{B}(X)$, let us define the $left$ $Drazin$
$spectrum$, the $right$ $Drazin$ $spectrum$, the $Drazin$
$spectrum$, the $left$ $essentially$ $Drazin$ $spectrum$, and the
$right$ $essentially$ $Drazin$ $spectrum$ of $T$ as follows
respectively:
$$ \sigma_{LD}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a left Drazin invertible operator} \};$$
$$ \sigma_{RD}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a right Drazin invertible operator} \};$$
$$ \sigma_{D}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a Drazin invertible operator} \};$$
$$ \sigma_{LD}^{e}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a left essentially Drazin invertible operator} \};$$
$$ \sigma_{RD}^{e}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a right essentially Drazin invertible operator} \}.$$
These spectra have been extensively studied by several authors; see,
e.g., [\cite{Aiena-Biondi-Carpintero}, \cite{AmouchM ZguittiH},
\cite{Berkani}, \cite{Berkani2},
\cite{Carpintero-Garcia-Rosas-Sanabria}, \cite{Bel},
\cite{Bel-Burgos-Oudghiri 3}, \cite{Mbekhta-Muller 9}].
Recall that an operator $T \in \mathcal{B}(X)$ is said to be
$Browder$ (resp. $upper$ $semi$-$Browder$, $lower$ $semi$-$Browder$)
if $T$ is Fredholm and $asc(T)=dsc(T) < \infty$ (resp. $T$ is upper
semi-Fredholm and $asc(T) < \infty$, $T$ is lower semi-Fredholm and
$dsc(T) < \infty$).
For each integer $n$, define $T_{n}$ to be the restriction of $T$ to
$\mathcal{R}(T^{n})$ viewed as the map from $\mathcal{R}(T^{n})$
into $\mathcal{R}(T^{n})$ (in particular $T_{0} = T $). If there
exists $n \in \mathbb{N}$ such that $\mathcal {R}(T^{n})$ is closed
and $T_{n}$ is Fredholm (resp. upper semi-Fredholm, lower
semi-Fredholm, Browder, upper semi-Browder, lower semi-Browder),
then $T$ is called $B$-$Fredholm$ (resp. $upper$
$semi$-$B$-$Fredholm$, $lower$ $semi$-$B$-$Fredholm$, $B$-$Browder$,
$upper$ $semi$-$B$-$Browder$, $lower$ $semi$-$B$-$Browder$). If $T
\in \mathcal {B}(X)$ is upper or lower semi-B-Browder, then $T$ is
called $semi$-$B$-$Browder$. If $T \in \mathcal {B}(X)$ is upper or
lower semi-B-Fredholm, then $T$ is called $semi$-$B$-$Fredholm$. It
follows from [\cite{Berkani semi-B-fredholm}, Proposition 2.1] that
if there exists $n \in \mathbb{N}$ such that $\mathcal {R}(T^{n})$
is closed and $T_{n}$ is
semi-Fredholm, then $\mathcal {R}(T^{m})$ is closed, $T_{m}$ is semi-Fredholm and
\begin{upshape}ind\end{upshape}$(T_{m})$ = \begin{upshape}ind\end{upshape}$(T_{n})$ for all $m \geq
n$. This enables us to define the index of a semi-B-Fredholm
operator $T$ as the index of the semi-Fredholm operator $T_{n}$,
where $n$ is an integer satisfying $\mathcal {R}(T^{n})$ is closed
and $T_{n}$ is semi-Fredholm. An operator $T \in \mathcal {B}(X)$ is
called $B$-$Weyl$ (resp. $upper$ $semi$-$B$-$Weyl$, $lower$
$semi$-$B$-$Weyl$) if $T$ is B-Fredholm and
\begin{upshape}ind\end{upshape}$(T)=0$ (resp. $T$ is upper
semi-B-Fredholm and
\begin{upshape}ind\end{upshape}$(T) \leq 0$, $T$ is
lower semi-B-Fredholm and
\begin{upshape}ind\end{upshape}$(T) \geq 0$). If $T
\in \mathcal {B}(X)$ is upper or lower semi-B-Weyl, then $T$ is
called $semi$-$B$-$Weyl$.
For $T \in \mathcal{B}(X)$, let us define the $upper$
$semi$-$B$-$Fredholm$ $spectrum$, the $lower$ $semi$-$B$-$Fredholm$
$spectrum$, the $semi$-$B$-$Fredholm$ $spectrum$, the $B$-$Fredholm$
$spectrum$, the $upper$ $semi$-$B$-$Weyl$ $spectrum$, the $lower$
$semi$-$B$-$Weyl$ $spectrum$, the $semi$-$B$-$Weyl$ $spectrum$, the
$B$-$Weyl$ $spectrum$, the $upper$ $semi$-$B$-$Browder$ $spectrum$,
the $lower$ $semi$-$B$-$Browder$ $spectrum$, the
$semi$-$B$-$Browder$ $spectrum$, and the $B$-$Browder$ $spectrum$ of
$T$ as follows respectively:
$$ \sigma_{USBF}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper semi-B-Fredholm
operator} \};$$
$$ \sigma_{LSBF}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a lower semi-B-Fredholm
operator} \};$$
$$ \sigma_{SBF}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a semi-B-Fredholm
operator} \};$$
$$ \sigma_{BF}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a B-Fredholm
operator} \};$$
$$ \sigma_{USBW}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper semi-B-Weyl
operator} \};$$
$$ \sigma_{LSBW}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a lower semi-B-Weyl
operator} \};$$
$$ \sigma_{SBW}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a semi-B-Weyl
operator} \};$$
$$ \sigma_{BW}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a B-Weyl
operator} \};$$
$$ \sigma_{USBB}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper semi-B-Browder
operator} \};$$
$$ \sigma_{LSBB}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a lower semi-B-Browder
operator} \};$$
$$ \sigma_{SBB}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a semi-B-Browder
operator} \};$$
$$ \sigma_{BB}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a B-Browder
operator} \}.$$ These spectra originating from semi-B-Fredholm theory
have also been extensively studied by several authors; see, e.g.,
[\cite{Aiena-Biondi-Carpintero}, \cite{AmouchM ZguittiH},
\cite{Berkani}, \cite{Berkani pams}, \cite{Berkani semi-B-fredholm},
\cite{Berkani Zariouh}, \cite{Carpintero-Garcia-Rosas-Sanabria}].
For any $T \in \mathcal{B}(X)$, Berkani established in
[\cite{Berkani}, Theorem 3.6] the following elegant equalities:
$$ \sigma_{LD}(T) = \sigma_{USBB}(T), \ \ \ \ \sigma_{RD}(T) = \sigma_{LSBB}(T);$$
$$ \sigma_{LD}^{e}(T) = \sigma_{USBF}(T), \ \ \ \ \sigma_{RD}^{e}(T) = \sigma_{LSBF}(T);$$
$$ \sigma_{D}(T) = \sigma_{BB}(T).$$
This paper is organized as follows. In Section 2, by using the
theory of operators with eventual topological uniform descent and the
technique used in [\cite{Burgos-Kaidi-Mbekhta-Oudghiri}], we
characterize power finite rank operators via various spectra
originating from semi-B-Fredholm theory. In Section 3, as some
applications, we provide affirmative answers to some questions of
Berkani, Amouch and Zariouh. Besides, we provide a general framework
which allows us to derive, in a unified way, commuting perturbation
results for Weyl-Browder type theorems and properties (generalized or
not). These commuting perturbation results, in particular, improve
many recent results of [\cite{Berkani-Amouch}, \cite{Berkani-Zariouh
partial}, \cite{Berkani Zariouh}, \cite{Berkani Zariouh Functional
Analysis}, \cite{Rashid gw}] by removing certain extra assumptions.
\section{Main result}
\quad\,~ We begin with the following lemmas in order to give the
proof of the main result in this paper.
\begin {lemma}${\label{2.7}}$ Let $F \in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n \in \mathbb{N}$.
If $T \in \mathcal {B}(X)$ is upper semi-B-Fredholm and commutes
with $F$, then $T+F$ is also upper semi-B-Fredholm.
\end{lemma}
\begin{proof} Since $T$ is upper semi-B-Fredholm, by [\cite{Berkani}, Theorem
3.6], $T$ is left essentially Drazin invertible. Hence by
[\cite{Bel-Burgos-Oudghiri 3}, Proposition 3.1], $T+F$ is left
essentially Drazin invertible. By [\cite{Berkani}, Theorem 3.6]
again, $T+F$ is upper semi-B-Fredholm.
\end{proof}
\begin{lemma}${\label{2.10}}$ Let $F \in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n \in \mathbb{N}$.
If $T \in \mathcal {B}(X)$ is lower semi-B-Fredholm and commutes
with $F$, then $T+F$ is also lower semi-B-Fredholm.
\end{lemma}
\begin{proof} Since $F^{n} \in \mathcal {F}(X)$ for some $n \in
\mathbb{N}$, $\mathcal{R}(F^{n})$ is a closed finite-dimensional
subspace, and hence $\dim \mathcal{R}(F^{*n})=\dim \mathcal
{N}(F^{n})^{\perp} = \dim \mathcal{R}(F^{n})$. Thus
$\mathcal{R}(F^{*n})$ is finite-dimensional, which implies that $F^{*n} \in \mathcal {F}(X^*).$
It is obvious that $T^*$ commutes with $F^*$. Since $T$ is lower
semi-B-Fredholm, by [\cite{Berkani}, Theorem 3.6], $T$ is right
essentially Drazin invertible. Then from the presentation before
Section IV of [\cite{Mbekhta-Muller 9}], it follows that $T^{*}$ is
left essentially Drazin invertible. Hence by
[\cite{Bel-Burgos-Oudghiri 3}, Proposition 3.1],
$(T+F)^{*}=T^{*}+F^{*}$ is left essentially Drazin invertible. From
the presentation before Section IV of [\cite{Mbekhta-Muller 9}]
again, it follows that $T+F$ is right essentially Drazin invertible.
Consequently, by [\cite{Berkani}, Theorem 3.6] again, $T+F$ is lower
semi-B-Fredholm.
\end{proof}
It follows from [\cite{Berkani}, Corollary 3.7 and Theorem 3.6] that
$T$ is B-Fredholm if and only if $T$ is both upper and lower
semi-B-Fredholm.
\begin {corollary}${\label{2.11}}$ Let $T \in \mathcal{B}(X)$ and let $F \in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n \in
\mathbb{N}$. If $T$ commutes with $F$, then
$(1)$ $\sigma_{USBF}(T+F) = \sigma_{USBF}(T);$
$(2)$ $\sigma_{LSBF}(T+F) = \sigma_{LSBF}(T);$
$(3)$ $\sigma_{SBF}(T+F) = \sigma_{SBF}(T);$
$(4)$ $\sigma_{BF}(T+F) = \sigma_{BF}(T).$
\end{corollary}
\begin{proof} From Lemma \ref{2.7}, the first equation follows
easily. The second equation follows immediately from Lemma
\ref{2.10}. The third equation is true because
$\sigma_{SBF}(T)=\sigma_{USBF}(T) \cap \sigma_{LSBF}(T)$, for every
$T \in \mathcal{B}(X)$.
The fourth equation is
also true because $\sigma_{BF}(T)=\sigma_{USBF}(T) \cup
\sigma_{LSBF}(T)$, for every $T \in \mathcal{B}(X)$.
\end{proof}
To continue the discussion of this paper, we recall some classical
definitions. Using the isomorphism $X/N(T^{d}) \approx R(T^{d})$ and
following [\cite{Grabiner 9}], a topology on $R(T^{d})$ is
defined as follows.
\begin{definition} ${\label{2.12}}$ Let $T \in \mathcal{B}(X)$. For every $d \in \mathbf{\mathbb{N}},$
the operator range topology on $R(T^{d})$ is defined by the norm
$\|\cdot\|_{R(T^{d})}$ such that for all $y \in R(T^{d})$,
$$\|y\|_{R(T^{d})} = \inf\{\|x\|: x \in X,\ y=T^{d}x\}.$$
\end{definition}
For a detailed discussion of operator ranges and their topologies,
we refer the reader to [\cite{Fillmore-Williams 7}] and [\cite{Grabiner 8}].
If $T\in\mathcal{B}(X)$, for each $n \in \mathbb{N}$, $T$ induces a
linear transformation from the vector space $\mathcal {R}(T^{n})/
\mathcal {R}(T^{n+1})$ to the space $\mathcal {R}(T^{n+1})/\mathcal
{R}(T^{n+2})$. We will let $k_{n}(T)$ be the dimension of the null
space of the induced map. From [\cite{Grabiner 9}, Lemma 2.3] it
follows that, for every $n \in \mathbb{N}$,
\begin{align*} \qquad \qquad \qquad k_{n}(T)
&= \dim (\mathcal {N}(T) \cap \mathcal {R}(T^{n})) / (\mathcal {N}(T) \cap \mathcal {R}(T^{n+1})) \\
&= \dim (\mathcal {R}(T) +
\mathcal {N}(T^{n+1})) / (\mathcal {R}(T) + \mathcal {N}(T^{n})).
\end{align*}
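For matrices, $k_{n}(T)$ can be evaluated from the first formula above together with the identity $\dim(U \cap V) = \dim U + \dim V - \dim(U+V)$ for subspaces of $\mathbb{C}^{m}$. The following Python sketch (a finite-dimensional illustration only; all helper names are ours) does exactly this:

```python
import numpy as np

def _rank(M, tol=1e-9):
    return int(np.linalg.matrix_rank(M, tol=tol)) if M.size else 0

def null_basis(T, tol=1e-9):
    # Columns spanning N(T), read off from the SVD of T.
    _, s, vh = np.linalg.svd(T)
    r = int((s > tol).sum())
    return vh[r:].T

def k_n(T, n, tol=1e-9):
    """k_n(T) = dim(N(T) /\ R(T^n)) - dim(N(T) /\ R(T^{n+1})), using
    dim(U /\ V) = dim U + dim V - dim(U + V)."""
    N = null_basis(T, tol)
    def cap_dim(m):
        R = np.linalg.matrix_power(T, m)        # columns span R(T^m)
        return _rank(N, tol) + _rank(R, tol) - _rank(np.hstack([N, R]), tol)
    return cap_dim(n) - cap_dim(n + 1)

# For the 3x3 nilpotent Jordan block, N(J) = span(e1) and
# R(J^2) = span(e1), so k_2(J) = 1 while k_0(J) = k_1(J) = 0.
J = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
```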
\begin {definition}${\label{2.13}}$ Let $T \in
\mathcal{B}(X)$ and let $d \in \mathbf{\mathbb{N}}$. Then $T$ has
$uniform$ $descent$ for $n \geq d$ if $k_{n}(T)=0$ for all $n \geq d$. If in addition $R(T^{n})$
is closed in the operator range topology of $R(T^{d})$ for all $n \geq d$$,$ then we say that $T$
has $eventual$ $topological$ $uniform$
$descent$$,$ and$,$ more precisely$,$ that $T$ has $topological$ $uniform$
$descent$ $for$ $n \geq d$.
\end{definition}
Operators with eventual topological uniform descent were introduced
by Grabiner in [\cite{Grabiner 9}]. This class includes all classes of
operators introduced in the Introduction of this paper. It also
includes many other classes of operators, such as operators of Kato
type, quasi-Fredholm operators, operators with finite descent and
operators with finite essential descent. A very detailed
and far-reaching account of these notions can be found in
[\cite{Aiena}, \cite{Berkani}, \cite{Mbekhta-Muller 9}]. Especially,
operators which have topological uniform descent for $n \geq 0$ are
precisely the $semi$-$regular$ operators studied by Mbekhta in
[\cite{Mbekhta}]. Discussions of operators with eventual topological
uniform descent may be found in [\cite{Berkani-Castro-Djordjevic},
\cite{Cao}, \cite{Grabiner 9}, \cite{Jiang-Zhong-Zeng},
\cite{Jiang-Zhong-Zhang}, \cite{Zeng-Zhong-Wu}].
An operator $T \in \mathcal{B}(X)$ is said to be $essentially$
$semi$-$regular$ if $\mathcal {R}(T)$ is closed and
$k(T):=\sum_{n=0}^{\infty} k_{n}(T) <\infty$. From [\cite{Grabiner
9}, Theorem 3.7] it follows that $$k(T) = \dim \mathcal
{N}(T)/(\mathcal {N}(T) \cap \mathcal {R}(T^{\infty})) = \dim
(\mathcal {R}(T) + \mathcal {N}(T^{\infty}))/\mathcal {R}(T).$$
Hence, an operator $T \in
\mathcal{B}(X)$ is essentially semi-regular if and only if $\mathcal {R}(T)$ is closed
and there exists a finite-dimensional subspace
$F \subseteq X$ such that $\mathcal {N}(T) \subseteq \mathcal
{R}(T^{\infty}) + F.$ In addition, if $T$ is essentially
semi-regular, then $T^{n}$ is essentially semi-regular,
and hence $R(T^{n})$ is closed for all $n \in \mathbf{\mathbb{N}}$ (see Theorem 1.51 of [\cite{Aiena}]).
Hence it is easy to verify that
if $T \in \mathcal{B}(X)$ is essentially semi-regular, then there exists $p \in \mathbb{N}$ such
that $T$ has topological uniform descent for
$n \geq p$.
Also, an operator $T \in \mathcal{B}(X)$ is called Riesz
if its essential spectrum $\sigma_e(T):= \{ \lambda \in \mathbb{C}:
T - \lambda I \makebox{ is not Fredholm} \}=\{0\}$. The $hyperrange$
and $hyperkernel$ of $T \in \mathcal{B}(X)$ are the subspaces of $X$
defined by $\mathcal {R}(T^{\infty}) = \bigcap_{n=1}^{\infty}
\mathcal {R}(T^{n})$ and $\mathcal {N}(T^{\infty}) =
\bigcup_{n=1}^{\infty} \mathcal {N}(T^{n})$, respectively.
\begin{lemma} ${\label{2.14}}$
Suppose that $T \in \mathcal{B}(X)$ has topological uniform descent
for $m \geq d$. If\, $S \in
\mathcal{B}(X)$ is a Riesz operator commuting with $T$
and $V=S+T$ has topological uniform descent
for $n \geq l,$ then:
$(a)\ \mathrm{dim\,} (\mathcal {R}(T^{\infty})+ \mathcal {R}(V^{\infty})) / (\mathcal {R}(T^{\infty}) \cap \mathcal {R}(V^{\infty})) <
\infty;$
$(b)\ \mathrm{dim\,} (\overline{\mathcal {N}(T^{\infty})} +
\overline{\mathcal {N}(V^{\infty})}) / ( \overline{\mathcal
{N}(T^{\infty})} \cap \overline{\mathcal {N}(V^{\infty})}) <
\infty;$
$(c)$\,\,$\mathrm{dim\,} \mathcal {R}(V^{n})/\mathcal {R}(V^{n+1})$ $=$
$\mathrm{dim\,} \mathcal {R}(T^{m})/\mathcal {R}(T^{m+1})$ for
sufficiently large $m$ and $n;$
$(d)$\,\,$\mathrm{dim\,} \mathcal {N}(V^{n+1})/\mathcal {N}(V^{n})$ $=$
$\mathrm{dim\,} \mathcal {N}(T^{m+1})/\mathcal {N}(T^{m})$ for
sufficiently large $m$ and $n$.
\end{lemma}
\begin{proof} Parts (c) and (d) follow directly from
[\cite{Zeng-Zhong-Wu}, Theorems 3.8 and 3.12 and Remark 4.5].
When $d \neq 0$ (that is, $T$ is not semi-regular), parts (a) and
(b) follow also directly from [\cite{Zeng-Zhong-Wu}, Theorems 3.8
and 3.12 and Remark 4.5].
When $d = 0$ (that is, $T$ is semi-regular), then by
[\cite{Zeng-Zhong-Wu}, Theorem 3.8] we have that $V=T+S$ is
essentially semi-regular. So, there exists $p \in \mathbb{N}$ such
that $V$ has topological uniform descent for $n \geq p$. If $p \neq
0$ (that is, $V$ is not semi-regular), then parts (a) and (b) follow
directly from [\cite{Zeng-Zhong-Wu}, Theorem 3.12 and Remark 4.5].
If $p=0$ (that is, $V$ is semi-regular), noting that $(M+N)/N
\approx M/(M \cap N)$ for any subspaces $M$ and $N$ of $X$ (see
[\cite{Kaashoek M A 12}, Lemma 2.2]), then parts (a) and (b) follow
from [\cite{Zeng-Zhong-Wu}, Theorem 3.8 and Remark 4.5].
\end{proof}
\begin {theorem}${\label{2.15}}$ Let $F \in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n \in \mathbb{N}$.
$(1)$\ If $T \in \mathcal {B}(X)$ is semi-B-Fredholm and commutes
with $F$, then
$(a)\ \mathrm{dim\,} (\mathcal {R}(T^{\infty})+ \mathcal {R}((T+F)^{\infty})) / (\mathcal {R}(T^{\infty}) \cap \mathcal {R}((T+F)^{\infty})) <
\infty;$
$(b)\ \mathrm{dim\,} (\overline{\mathcal {N}(T^{\infty})} +
\overline{\mathcal {N}((T+F)^{\infty})}) / ( \overline{\mathcal {N}(T^{\infty})} \cap
\overline{\mathcal {N}((T+F)^{\infty})}) < \infty .$
$(2)$ If $T \in \mathcal {B}(X)$ is upper \begin {upshape}(\end
{upshape}resp. lower\begin {upshape})\end {upshape} semi-B-Fredholm
and commutes with $F$, then $T+F$ is also upper \begin
{upshape}(\end {upshape}resp. lower\begin {upshape})\end {upshape}
semi-B-Fredholm and
\begin{upshape}ind\end{upshape}$(T+F)=$\begin{upshape}ind\end{upshape}$(T)$.
\end{theorem}
\begin{proof} Suppose that $F \in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n \in
\mathbb{N}$. Then $F^{n}$ is Riesz, that is,
$\sigma_e(F^{n})=\{0\}.$ By the spectral mapping theorem for the
essential spectrum, we get that $\sigma_e(F)=\{0\},$ so $F$ is
Riesz.
$(1)$ Since $T$ is semi-B-Fredholm and commutes with $F$, by Lemmas
\ref{2.7} and \ref{2.10}, $T+F$ is also semi-B-Fredholm. Since every
semi-B-Fredholm operator is an operator of eventual topological
uniform descent, by Lemma \ref{2.14}$(a)$ and $(b)$, parts $(a)$ and
$(b)$ follow immediately.
$(2)$ By Lemmas \ref{2.7} and \ref{2.10}, it remains to prove that
\text{ind}$(T+F)=$\text{ind}$(T)$. Since every semi-B-Fredholm
operator is an operator of eventual topological uniform descent, by
Lemma \ref{2.14}$(c)$ and $(d)$ and [\cite{Berkani semi-B-fredholm},
Proposition 2.1], we have that \text{ind}$(T+F) =$\text{ind}$(T)$.
\end{proof}
\begin{theorem}${\label{2.16}}$ Let $T \in \mathcal{B}(X)$ and let $F \in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n \in
\mathbb{N}$. If $T$ commutes with $F$, then
$(1)$ $\sigma_{USBW}(T+F) = \sigma_{USBW}(T);$
$(2)$ $\sigma_{LSBW}(T+F) = \sigma_{LSBW}(T);$
$(3)$ $\sigma_{SBW}(T+F) = \sigma_{SBW}(T);$
$(4)$ $\sigma_{BW}(T+F) = \sigma_{BW}(T).$
\end{theorem}
\begin{proof}It follows directly from Theorem \ref{2.15}(2).
\end{proof}
Next, we turn to the discussion of characterizations of power finite
rank operators via various spectra originating from semi-B-Fredholm
theory. Before this, some notations are needed.
For $T \in \mathcal{B}(X)$, let us define the $descent$ $spectrum$,
the $essential$ $descent$ $spectrum$ and the $eventual$
$topological$ $uniform$ $descent$ $spectrum$ of $T$ as follows
respectively: $$\sigma_{dsc}(T)=\{\lambda \in \mathbb{C}:
dsc(T-\lambda I) =\infty \};$$ $$\sigma^{e}_{dsc}(T)=\{\lambda \in
\mathbb{C}: dsc_{e}(T-\lambda I) =\infty \};$$
$$\sigma_{ud}(T)=\{\lambda \in \mathbb{C}: T-\lambda I \makebox{
does not have eventual topological uniform descent} \}.$$
In [\cite{Jiang-Zhong-Zhang}], Jiang, Zhong
and Zhang obtained a classification of the components of $eventual$
$topological$ $uniform$ $descent$ $resolvent$ $set$ $\rho_{ud}(T):=
\mathbb{C} \backslash \sigma_{ud}(T)$. As an application of the
classification, they showed that $\sigma_{ud}(T)=\varnothing$
precisely when $T$ is algebraic.
\begin {lemma}${\label{2.17}}$ \begin {upshape}([\cite{Jiang-Zhong-Zhang}, Corollary 4.5])\end {upshape}
Let $T \in \mathcal{B}(X)$ and let $\sigma_{*} \in \{\sigma_{ud},
\sigma_{dsc}, \sigma^{e}_{dsc}, \sigma_{USBF}=\sigma^{e}_{LD}, $
$\sigma_{USBB}=\sigma_{LD}, \sigma_{BB}=\sigma_{D} \}$. Then the
following statements are equivalent:
$(1)$ $\sigma_{*}(T) = \varnothing$;
$(2)$ $T$ is algebraic \begin {upshape}(\end {upshape}that is, there exists a non-zero complex
polynomial $p$ for which $p(T)=0$\begin {upshape})\end {upshape}.
\end{lemma}
\begin {corollary}${\label{2.18}}$ Let $T \in \mathcal{B}(X)$ and let $\sigma_{*} \in \{
\sigma_{ud}, \sigma_{dsc}, \sigma^{e}_{dsc},
\sigma_{USBF}=\sigma^{e}_{LD}, \sigma_{LSBF}= \sigma^{e}_{RD},
\sigma_{SBF},$ $ \sigma_{BF},\sigma_{USBW}, \sigma_{LSBW},
\sigma_{SBW}, \sigma_{BW},\sigma_{USBB}=\sigma_{LD},
\sigma_{LSBB}=\sigma_{RD}, \sigma_{SBB}, \sigma_{BB}=\sigma_{D} \}$.
Then the following statements are equivalent:
$(1)$ $\sigma_{*}(T) = \varnothing$;
$(2)$ $T$ is algebraic.
\end{corollary}
\begin{proof} If $\sigma_{*} \in \{
\sigma_{ud}, \sigma_{dsc}, \sigma^{e}_{dsc},
\sigma_{USBF}=\sigma^{e}_{LD}, $ $\sigma_{USBB}=\sigma_{LD},
\sigma_{BB}=\sigma_{D}\}$, the conclusion is given by Lemma
\ref{2.17}. Note that $$\sigma_{ud}(\cdot) \subseteq
\sigma_{SBF}(\cdot) \subseteq
\{_{\sigma_{USBF}(\cdot)=\sigma^{e}_{LD}(\cdot)}^{\sigma_{SBW}(\cdot)}
\subseteq \sigma_{USBW}(\cdot) \subseteq
\{_{\sigma_{USBB}(\cdot)=\sigma_{LD}(\cdot)}^{\sigma_{BW}(\cdot)}
\subseteq
\sigma_{BB}(\cdot)=\sigma_{D}(\cdot)$$
and that $$\sigma_{ud}(\cdot) \subseteq \sigma_{SBF}(\cdot)
\subseteq
\{_{\sigma_{LSBF}(\cdot)=\sigma^{e}_{RD}(\cdot)}^{\sigma_{SBW}(\cdot)}
\subseteq \sigma_{LSBW}(\cdot) \subseteq
\{_{\sigma_{LSBB}(\cdot)=\sigma_{RD}(\cdot)}^{\sigma_{BW}(\cdot)}
\subseteq
\sigma_{BB}(\cdot)=\sigma_{D}(\cdot).$$
By Lemma \ref{2.17}, if $\sigma_{*} \in \{ \sigma_{LSBF}=
\sigma^{e}_{RD}, \sigma_{SBF},$ $\sigma_{USBW}, \sigma_{LSBW},
\sigma_{SBW}, \sigma_{BW}, \sigma_{LSBB}=\sigma_{RD} \}$, the
conclusion follows easily. Note that $$\sigma_{ud}(\cdot) \subseteq
\sigma_{SBB}(\cdot) \subseteq
\sigma_{BB}(\cdot)=\sigma_{D}(\cdot)$$
and that $$\sigma_{ud}(\cdot) \subseteq
\sigma_{BF}(\cdot) \subseteq
\sigma_{BB}(\cdot)=\sigma_{D}(\cdot).$$
Again by Lemma \ref{2.17}, if $\sigma_{*} \in \{ \sigma_{SBB},
\sigma_{BF}\}$, the conclusion follows easily.
\end{proof}
In [\cite{Bel-Burgos-Oudghiri 3}, Theorem 3.2], O. Bel Hadj Fredj et
al. proved that $F \in \mathcal {B}(X)$ satisfies $F^{n} \in \mathcal
{F}(X)$ for some $n \in \mathbb{N}$ if and only if
$\sigma_{LD}^{e}(T+F) =\sigma_{LD}^{e}(T)$ (equivalently,
$\sigma_{LD}(T+F) =\sigma_{LD}(T)$) for every operator $T$ in the
commutant of $F$.
We are now in a position to state and prove the main
result of this paper.
\begin {theorem}${\label{2.19}}$ Let $F \in \mathcal {B}(X)$ and $\sigma_{*} \in \{
\sigma_{dsc}, \sigma^{e}_{dsc}, \sigma_{USBF}=\sigma^{e}_{LD},
\sigma_{LSBF}= \sigma^{e}_{RD}, \sigma_{SBF},$ $
\sigma_{BF},\sigma_{USBW}, \sigma_{LSBW}, \sigma_{SBW},
\sigma_{BW},\sigma_{USBB}=\sigma_{LD}, \sigma_{LSBB}=\sigma_{RD},
\sigma_{SBB}, \sigma_{BB}=\sigma_{D} \}$. Then the following
statements are equivalent:
$(1)$ $F^{n} \in \mathcal {F}(X)$ for some $n \in \mathbb{N}$;
$(2)$ $\sigma_{*}(T+F) =\sigma_{*}(T)$ for all $T \in \mathcal
{B}(X)$ commuting with $F$.
\end{theorem}
\begin{proof} For $\sigma_{*} \in \{\sigma_{dsc}, \sigma^{e}_{dsc},
\sigma_{USBB}=\sigma_{LD}, \sigma_{USBF}=\sigma^{e}_{LD}\}$, the
conclusion can be found in [\cite{Burgos-Kaidi-Mbekhta-Oudghiri},
Theorem 3.1], [\cite{Bel}, Theorem 3.1] and
[\cite{Bel-Burgos-Oudghiri 3}, Theorem 3.2]. In the following, we
prove the conclusion for the other spectra.
$(1)\Rightarrow (2)$ For $\sigma_{*} \in \{\sigma_{LSBF}=
\sigma^{e}_{RD}, \sigma_{SBF},$ $ \sigma_{BF},\sigma_{USBW},
\sigma_{LSBW}, \sigma_{SBW}, \sigma_{BW}\}$, the conclusion follows
directly from Corollary \ref{2.11} and Theorem \ref{2.16}.
For $\sigma_{*} \in \{\sigma_{LSBB}=\sigma_{RD}\}$, suppose that $F
\in \mathcal {B}(X)$ with $F^{n} \in \mathcal {F}(X)$ for some $n
\in \mathbb{N}$ and that $T \in \mathcal {B}(X)$ commutes with $F$.
It is clear that $F^* \in \mathcal {B}(X^*)$ with $F^{*n} \in
\mathcal {F}(X^*)$ and that $T^* \in \mathcal {B}(X^*)$ commutes
with $F^*$. From the presentation before this theorem, we get that
$\sigma_{LD}(T^*+F^*) =\sigma_{LD}(T^*)$, hence dually,
$\sigma_{RD}(T+F) =\sigma_{RD}(T)$.
For $\sigma_{*} \in \{\sigma_{SBB}, \sigma_{BB}=\sigma_{D}\}$,
noting that $\sigma_{SBB}(\cdot) = \sigma_{USBB}(\cdot) \cap
\sigma_{LSBB}(\cdot)$ and that $\sigma_{BB}(\cdot) =
\sigma_{USBB}(\cdot) \cup \sigma_{LSBB}(\cdot)$, the conclusion
follows.
$(2) \Rightarrow (1)$ Conversely, suppose that $\sigma_{*}(T+F)
=\sigma_{*}(T)$ for all $T \in \mathcal {B}(X)$ commuting with $F$,
where $\sigma_{*} \in \{\sigma_{LSBF}= \sigma^{e}_{RD},
\sigma_{SBF},$ $ \sigma_{BF},\sigma_{USBW}, \sigma_{LSBW},
\sigma_{SBW}, \sigma_{BW}, \sigma_{LSBB}=\sigma_{RD}, \sigma_{SBB},
\sigma_{BB}=\sigma_{D} \}$. Taking $T=0$, we get
$\sigma_{*}(F) =\sigma_{*}(0+F)=\sigma_{*}(0)=\varnothing.$ By
Corollary \ref{2.18}, we know that $F$ is algebraic. Therefore
$$X =X_{1} \oplus X_{2}\oplus \cdots \oplus X_{n},$$
where $\sigma(F)=\{\lambda_{1},\lambda_{2},\cdots ,\lambda_{n} \}$
and the restriction of $F-\lambda_{i}$ to $X_{i}$ is nilpotent for
every $1 \leq i \leq n$. We claim that if $\lambda_{i} \neq 0$, then
$\dim X_{i}$ is finite. Suppose to the contrary that $\lambda_{i}
\neq 0$ and $ X_{i}$ is infinite dimensional. For every $1 \leq j
\leq n$, let $F_{j}$ be the restriction of $F$ to $X_{j}$. Then with
respect to the decomposition $X =X_{1} \oplus X_{2}\oplus \cdots
\oplus X_{n}$,
$$F =F_{1} \oplus F_{2}\oplus \cdots \oplus F_{n}.$$ By
[\cite{Burgos-Kaidi-Mbekhta-Oudghiri}, Proposition 3.3], there
exists a non-algebraic operator $S_{i}$ on $X_{i}$ commuting with
the restriction $F_{i}$ of $F$. Let $S$ denote the extension of
$S_{i}$ to $X$ given by $S=0$ on each $X_{j}$ such that $j \neq i$.
Obviously $SF = FS$, and so $\sigma_{*}(S+F) =\sigma_{*}(S)$ by
hypothesis. On the other hand, since $F =F_{1} \oplus F_{2}\oplus
\cdots \oplus F_{n}$ is algebraic, $F_{j}$ is algebraic for every $1
\leq j \leq n$. In particular, $F_{j}$ is algebraic for every $j
\neq i$. Hence by Corollary \ref{2.18},
$\sigma_{BB}(F_{j})=\varnothing$ for every $j \neq i$; it follows
easily that $\sigma_{*}(S+F)= \sigma_{*}(S_{i}+F_{i})$. Since
$\sigma_{*}(S)= \sigma_{*}(S_{i})$, we obtain that
$\sigma_{*}(S_{i}) = \sigma_{*}(S_{i} + F_{i}) = \sigma_{*}(S_{i} +
\lambda_{i})$ because $F_{i} - \lambda_{i}$ is nilpotent. Since
$\sigma_{*}(S) \neq \varnothing$, choose $\alpha \in \sigma_{*}(S)$;
it follows that $k\lambda_{i}+\alpha \in
\sigma_{*}(S)$ for every positive integer $k$, which implies that
$\lambda_{i} = 0$, the desired contradiction.
\end{proof}
\begin {remark}${\label{2.20}}$ \begin{upshape} $(1)$ The argument we have given for the
implication $(2) \Rightarrow (1)$ in Theorem \ref{2.19} was, in fact,
discovered by following the trail marked out by Burgos, Kaidi,
Mbekhta and Oudghiri [\cite{Burgos-Kaidi-Mbekhta-Oudghiri}].
$(2)$ By [\cite{Berkani-Amouch}, Lemma 2.3], [\cite{Mbekhta-Muller
9}, pp. 135-136] and a similar argument of [\cite{Berkani pams},
Proposition 3.3], we know that $\sigma_{*}(T+F)=\sigma_{*}(T)$ for
every finite rank operator $F$, not necessarily commuting with $T$,
where $\sigma_{*} \in \{\sigma^{e}_{dsc},
\sigma_{USBF}=\sigma^{e}_{LD}, \sigma_{LSBF}= \sigma^{e}_{RD},
\sigma_{SBF},$ $ \sigma_{BF},\sigma_{USBW}, \sigma_{LSBW},
\sigma_{SBW}, \sigma_{BW}\}$. By [\cite{Mbekhta-Muller 9},
Observation 5 on p.~136], we know that $\sigma_{*}$ is not stable
under non-commuting finite rank perturbations, where $\sigma_{*} \in
\{\sigma_{dsc}, \sigma_{USBB}=\sigma_{LD},
\sigma_{LSBB}=\sigma_{RD}, \sigma_{SBB}, \sigma_{BB}=\sigma_{D}\}$.
\end{upshape}
\end{remark}
\section{Some applications}
\quad\,~Rashid claimed in [\cite{Rashid}, Theorem 3.15] that if $T \in
\mathcal{B}(X)$ and $Q$ is a quasi-nilpotent operator commuting
with $T$, then (in [\cite{Rashid}], $\sigma_{USBW}$ is denoted by
$\sigma_{SBF^{-}_{+}}$)
$$\sigma_{USBW}(T+Q)=\sigma_{USBW}(T).$$
In [\cite{Zeng-Zhong}, Example 2.13], the authors showed that this
equality does not hold in general.
As an immediate consequence of Theorem \ref{2.19} (our main
result), we obtain the following corollary which, in particular, is
a corrected version of [\cite{Rashid}, Theorem 3.15] and also
provides positive answers to Questions \ref{1.1} and \ref{1.2}.
\begin {corollary}${\label{3.2}}$ Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Then
$(1)$ $\sigma_{USBW}(T+N) = \sigma_{USBW}(T);$
$(2)$ $\sigma_{LSBW}(T+N) = \sigma_{LSBW}(T);$
$(3)$ $\sigma_{SBW}(T+N) = \sigma_{SBW}(T);$
$(4)$ $\sigma_{BW}(T+N) = \sigma_{BW}(T).$
\end{corollary}
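In finite dimensions the generalized spectra above are all empty (every matrix is algebraic, see Corollary \ref{2.18}), so Corollary \ref{3.2} cannot be tested there directly; what can be checked numerically is the classical fact behind it, that a commuting nilpotent perturbation leaves the ordinary spectrum unchanged. A small Python sketch of our own:

```python
import numpy as np

# T diagonal with a repeated eigenvalue; N is nilpotent and commutes
# with T because it only mixes the eigenspace of the eigenvalue 2.
T = np.diag([2., 2., 5.])
N = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])

assert np.allclose(T @ N, N @ T)                     # T and N commute
assert np.allclose(np.linalg.matrix_power(N, 2), 0)  # N is nilpotent

# The ordinary spectrum is unchanged: sigma(T + N) = sigma(T) = {2, 5}.
spec_T = np.sort(np.linalg.eigvals(T).real)
spec_TN = np.sort(np.linalg.eigvals(T + N).real)
```

Commutativity is essential here, just as in the corollary: a non-commuting nilpotent perturbation can move eigenvalues.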
Besides Question \ref{1.2}, Berkani and Zariouh also posed in
[\cite{Berkani Zariouh}] the following question:
\begin {question}${\label{3.3}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Under which conditions
$$\sigma_{BF}(T+N) = \sigma_{BF}(T) \ ?$$
\end{question}
As an immediate consequence of Theorem \ref{2.19}, we also obtain
the following corollary which, in particular, provides a positive
answer to Question \ref{3.3}.
\begin {corollary}${\label{3.4}}$ Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Then
$(1)$ $\sigma_{USBF}(T+N) = \sigma_{USBF}(T);$
$(2)$ $\sigma_{LSBF}(T+N) = \sigma_{LSBF}(T);$
$(3)$ $\sigma_{SBF}(T+N) = \sigma_{SBF}(T);$
$(4)$ $\sigma_{BF}(T+N) = \sigma_{BF}(T).$
\end{corollary}
We say that $\lambda \in \sigma_{a}(T)$ is a left pole of $T$ if
$T-\lambda I$ is left Drazin invertible. Let $\Pi_{a}(T)$ denote the
set of all left poles of $T$. An operator $T \in \mathcal{B}(X)$ is
called $a$-$polaroid$ if $\makebox{iso}\sigma_{a}(T)=\Pi_{a}(T)$.
Here and henceforth, for $A \subseteq \mathbb{C}$, $\makebox{iso}A$
is the set of isolated points of $A$. Besides Questions \ref{1.2}
and \ref{3.3}, Berkani and Zariouh also posed in [\cite{Berkani
Zariouh Functional Analysis}] the following three questions:
\begin {question}${\label{3.5}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Under which conditions
$$asc(T+N) < \infty \Longleftrightarrow asc(T) < \infty \ ?$$
\end{question}
\begin {question}${\label{3.6}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Under which conditions,
$\mathcal {R}((T+N)^{m})$ is closed for $m$ large enough if and only
if $\mathcal {R}(T^{m})$ is closed for $m$ large enough $?$
\end{question}
\begin {question}${\label{3.7}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Under which conditions
$$ \Pi_{a}(T+N) = \Pi_{a}(T) \ ?$$
\end{question}
We mention that Question \ref{3.5} is, in fact, an immediate
consequence of an earlier result of Kaashoek and Lay
[\cite{Kaashoek-Lay}, Theorem 2.2]. Turning to Question \ref{3.6}, suppose
that $T \in \mathcal{B}(X)$ and that $N \in \mathcal{B}(X)$ is a
nilpotent operator commuting with $T$. As a
direct consequence of Theorem \ref{2.19} (that is, our main result), we
know that if there exists $n \in \mathbb{N}$ such that $c_{n}(T) <
\infty$ or $c^{'}_{n}(T) < \infty$, then $\mathcal {R}((T+N)^{m})$
is closed for $m$ large enough if and only if $\mathcal {R}(T^{m})$
is closed for $m$ large enough.
Turning to Question \ref{3.7}, we first recall a classical result.
\begin {lemma}${\label{3.8}}$ \begin{upshape}([\cite{Mbekhta-Muller 9}])\end{upshape}
If $T \in \mathcal{B}(X)$ and $Q \in \mathcal{B}(X)$ is a
quasi-nilpotent operator commuting with $T$, then
\begin{equation}{\label{eq 3.1}}
\qquad\qquad\qquad\quad \sigma(T+Q) = \sigma(T) \makebox{ and } \sigma_{a}(T+Q) = \sigma_{a}(T).
\end{equation}
\end{lemma}
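The equalities~(\ref{eq 3.1}) can be illustrated numerically in finite dimensions. The following sketch (an illustration only, not part of any proof; the particular matrices are our own hypothetical choices) checks $\sigma(T+N)=\sigma(T)$ for a commuting nilpotent perturbation of a Jordan block.

```python
import numpy as np

# T = 2I + J on C^3, with J the nilpotent Jordan shift; N = J^2 is
# nilpotent and, being a polynomial in J, commutes with T.
J = np.diag([1.0, 1.0], k=1)   # 3x3 matrix with ones on the superdiagonal
T = 2.0 * np.eye(3) + J
N = J @ J                      # strictly upper triangular, N^2 = 0

assert np.allclose(T @ N, N @ T)                      # N commutes with T
assert np.allclose(np.linalg.matrix_power(N, 2), 0)   # N is nilpotent

# sigma(T + N) = sigma(T) = {2}, as the lemma predicts
ev_T = np.sort(np.linalg.eigvals(T))
ev_TN = np.sort(np.linalg.eigvals(T + N))
assert np.allclose(ev_T, ev_TN)
```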
As an immediate consequence of Theorem \ref{2.19} and Lemma
\ref{3.8}, we also obtain the following corollary which provides a
positive answer to Question \ref{3.7}.
\begin {corollary}${\label{3.9}}$
Let $T \in \mathcal{B}(X)$ and let $N \in \mathcal{B}(X)$ be a
nilpotent operator commuting with $T$. Then
\begin{equation} {\label{eq 3.2}}
\qquad\qquad\qquad\qquad\qquad\quad \ \ \Pi_{a}(T+N) = \Pi_{a}(T).
\end{equation}
\end{corollary}
Let $\Pi(T)$ denote the set of all poles of $T$. It is proved in
[\cite{Berkani-Zariouh partial}, Lemma 2.2] that if $T \in
\mathcal{B}(X)$ and $N \in \mathcal{B}(X)$ is a nilpotent operator
commuting with $T$, then
\begin{equation}{\label{eq 3.3}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ \Pi(T+N) = \Pi(T).
\end{equation}
Let $E(T)$ and $E_{a}(T)$ denote the set of all isolated eigenvalues
of $T$ and the set of all eigenvalues of $T$ that are isolated in
$\sigma_{a}(T)$, respectively. That is, $$E(T) = \{\lambda \in
\makebox{iso}\sigma(T): 0 < \alpha(T- \lambda I)\}$$ and
$$E_{a}(T) =
\{\lambda \in \makebox{iso}\sigma_{a}(T): 0 < \alpha(T- \lambda
I)\}.$$ An operator $T \in \mathcal{B}(X)$ is called $a$-$isoloid$
if $\makebox{iso}\sigma_{a}(T)=E_{a}(T)$.
We set
$\Pi^{0}(T)=\{\lambda \in \Pi(T): \alpha(T-\lambda I)<\infty\}$,
$\Pi_{a}^{0}(T) =\{\lambda \in \Pi_{a}(T): \alpha(T-\lambda I) < \infty\}$,
$E^{0}(T)=\{\lambda \in E(T): \alpha(T-\lambda I) < \infty\}$
and
$E_{a}^{0}(T)=\{\lambda \in E_{a}(T): \alpha(T-\lambda I) < \infty\}.$
Suppose that $T \in \mathcal{B}(X)$ and that $N \in \mathcal{B}(X)$
is a nilpotent operator commuting with $T$. Then from the proof of
[\cite{Berkani Zariouh}, Theorem 3.5], it follows that
$$0 < \alpha(T+N) \Longleftrightarrow 0 < \alpha(T)$$ and $$\alpha(T+N) < \infty \Longleftrightarrow \alpha(T) < \infty.$$
Hence by Equation (\ref{eq 3.1}), we have the following equations:
\begin{equation} {\label{eq 3.4}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ E(T+N) = E(T);
\end{equation}
\begin{equation} {\label{eq 3.5}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ E_{a}(T+N) = E_{a}(T);
\end{equation}
\begin{equation} {\label{eq 3.6}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ E^{0}(T+N) = E^{0}(T);
\end{equation}
\begin{equation} {\label{eq 3.7}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ E_{a}^{0}(T+N) = E_{a}^{0}(T).
\end{equation}
An operator $T \in \mathcal{B}(X)$ is said to be $upper$
$semi$-$Weyl$ if $T$ is upper semi-Fredholm and ind$(T) \leq 0$. An
operator $T \in \mathcal{B}(X)$ is said to be $Weyl$ if $T$ is
Fredholm and ind$(T)=0$. For $T \in \mathcal{B}(X)$, let us define
the $upper$ $semi$-$Browder$ $spectrum$, the $Browder$ $spectrum$,
the $upper$ $semi$-$Weyl$ $spectrum$ and the $Weyl$ $spectrum$ of
$T$ as follows respectively:
$$ \sigma_{USB}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper
semi-Browder operator} \};$$
$$ \sigma_{B}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a Browder
operator} \};$$
$$ \sigma_{USW}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not an upper
semi-Weyl operator} \};$$
$$ \sigma_{W}(T) = \{ \lambda \in \mathbb{C}: T - \lambda I \makebox{ is not a Weyl
operator} \}.$$
Suppose that $T \in \mathcal{B}(X)$ and that $R \in \mathcal{B}(X)$
is a Riesz operator commuting with $T$. Then it follows from
[\cite{Tylli}, Proposition 5] and [\cite{Rakocevic}, Theorem 1] that
\begin{equation} {\label{eq 3.8}}
\qquad\qquad\qquad\qquad\qquad\quad \ \ \sigma_{USW}(T+R) = \sigma_{USW}(T);
\end{equation}
\begin{equation} {\label{eq 3.9}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ \sigma_{W}(T+R) =
\sigma_{W}(T);
\end{equation}
\begin{equation} {\label{eq 3.10}}
\qquad\qquad\qquad\qquad\qquad\quad \ \ \sigma_{USB}(T+R) = \sigma_{USB}(T);
\end{equation}
\begin{equation} {\label{eq 3.11}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ \sigma_{B}(T+R) =
\sigma_{B}(T).
\end{equation}
Suppose that $T \in \mathcal{B}(X)$ and that $Q \in \mathcal{B}(X)$
is a quasi-nilpotent operator commuting with $T$. Then, noting that
$\Pi^{0}(T)= \sigma(T) \backslash \sigma_{B}(T)$ and
$\Pi_{a}^{0}(T)=\sigma_{a}(T) \backslash \sigma_{USB}(T)$ for any $T
\in \mathcal{B}(X)$, it follows from Equations (\ref{eq 3.1}),
(\ref{eq 3.10}) and (\ref{eq 3.11}) that
\begin{equation} {\label{eq 3.12}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ \Pi^{0}(T+Q) =
\Pi^{0}(T);
\end{equation}
\begin{equation} {\label{eq 3.13}}
\qquad\qquad\qquad\qquad\qquad\qquad \ \ \Pi_{a}^{0}(T+Q) =
\Pi_{a}^{0}(T).
\end{equation}
In the following table, we use the abbreviations $gaW$, $aW$, $gW$,
$W$, $(gw)$, $(w)$, $(gaw)$ and $(aw)$ to signify that an operator
$T \in \mathcal{B}(X)$ obeys generalized a-Weyl's theorem, a-Weyl's
theorem, generalized Weyl's theorem, Weyl's theorem, property
$(gw)$, property $(w)$, property $(gaw)$ and property $(aw)$. For
example, an operator $T \in \mathcal{B}(X)$ is said to obey
generalized a-Weyl's theorem (in symbols, $T \in gaW$) if
$\sigma_{a}(T) \backslash \sigma_{USBW}(T) = E_{a}(T)$. Similarly,
the abbreviations $gaB$, $aB$, $gB$, $B$, $(gb)$, $(b)$, $(gab)$ and
$(ab)$ have analogous meanings with respect to Browder's theorem and
the corresponding properties.
\begin {center}
\begin{tabular} {|l|l|l|l|} \hline
$gaW$ & $\sigma_{a}(T) \backslash \sigma_{USBW}(T) = E_{a}(T)$ & $gaB$ & $\sigma_{a}(T) \backslash \sigma_{USBW}(T) = \Pi_{a}(T)$ \\ \hline
$aW$ & $\sigma_{a}(T) \backslash \sigma_{USW}(T) = E_{a}^{0}(T)$ & $aB$ & $\sigma_{a}(T) \backslash \sigma_{USW}(T) = \Pi_{a}^{0}(T)$ \\ \hline
$gW$ & $\sigma(T) \backslash \sigma_{BW}(T) = E(T)$ & $gB$ & $\sigma(T) \backslash \sigma_{BW}(T) = \Pi(T)$ \\ \hline
$W$ & $\sigma(T) \backslash \sigma_{W}(T) = E^{0}(T)$ & $B$ & $\sigma(T) \backslash \sigma_{W}(T) = \Pi^{0}(T)$ \\ \hline
$(gw)$ & $\sigma_{a}(T) \backslash \sigma_{USBW}(T) = E(T)$ & $(gb)$ & $\sigma_{a}(T) \backslash \sigma_{USBW}(T) = \Pi(T)$ \\ \hline
$(w)$ & $\sigma_{a}(T) \backslash \sigma_{USW}(T) = E^{0}(T)$ & $(b)$ & $\sigma_{a}(T) \backslash \sigma_{USW}(T) = \Pi^{0}(T)$ \\ \hline
$(gaw)$ & $\sigma(T) \backslash \sigma_{BW}(T) = E_{a}(T)$ & $(gab)$ & $\sigma(T) \backslash \sigma_{BW}(T) = \Pi_{a}(T)$ \\ \hline
$(aw)$ & $\sigma(T) \backslash \sigma_{W}(T) = E_{a}^{0}(T)$ & $(ab)$ & $\sigma(T) \backslash \sigma_{W}(T) = \Pi_{a}^{0}(T)$ \\ \hline
\end{tabular}
\end {center}
Weyl-Browder type theorems and properties, in their classical and,
more recently, in their generalized form, have been studied by a
large number of authors. Theorem \ref{2.19} and Equations (\ref{eq
3.1})--(\ref{eq 3.13}) give us a unifying framework for
establishing commuting perturbational results for Weyl-Browder type
theorems and properties (generalized or not).
\begin {corollary}${\label{3.10}}$
$(1)$ If $T \in \mathcal{B}(X)$ obeys $gaW$
\begin {upshape}(\end {upshape}resp. $aW,$ $gW,$ $W,$ $(gw),$ $(w)$,
$(gaw),$ $(aw),$ $(gb),$ $(gab)$\begin {upshape})\end {upshape} and $N \in
\mathcal{B}(X)$ is a nilpotent operator commuting with $T$, then
$T+N$ also obeys $gaW$
\begin {upshape}(\end {upshape}resp. $aW,$ $gW,$ $W,$ $(gw),$ $(w),$
$(gaw),$ $(aw),$ $(gb),$ $(gab)$\begin {upshape})\end {upshape}.
$(2)$ If $T \in \mathcal{B}(X)$ obeys $gaB$
\begin {upshape}(\end {upshape}resp. $aB,$ $gB,$ $B$\begin {upshape})\end {upshape} and $R \in
\mathcal{B}(X)$ is a Riesz operator commuting with $T$, then $T+R$ also obeys
$gaB$ \begin {upshape}(\end {upshape}resp. $aB,$ $gB,$ $B$\begin {upshape})\end {upshape}.
$(3)$ If $T \in \mathcal{B}(X)$ obeys $(b)$
\begin {upshape}(\end {upshape}resp. $(ab)$\begin {upshape})\end {upshape} and $Q \in
\mathcal{B}(X)$ is a quasi-nilpotent operator commuting with $T$, then $T+Q$ also obeys
$(b)$ \begin {upshape}(\end {upshape}resp. $(ab)$\begin {upshape})\end {upshape}.
\end{corollary}
\begin{proof}
$(1)$ It follows directly from Theorem \ref{2.19} and Equations
(\ref{eq 3.1})--(\ref{eq 3.9}).
$(2)$ By [\cite{AmouchM ZguittiH equivalence}], we know that $T$
obeys $gB$ (resp. $gaB$) if and only if $T$ obeys $B$ (resp. $aB$)
for any $T \in \mathcal{B}(X)$. Note that $T$ obeys $B$ (resp. $aB$)
if and only if $\sigma_{W}(T)=\sigma_{B}(T)$ (resp.
$\sigma_{USW}(T)=\sigma_{USB}(T)$). Hence by Equations (\ref{eq
3.8})--(\ref{eq 3.11}), the conclusion follows immediately.
$(3)$ It follows directly from Equations (\ref{eq 3.1}), (\ref{eq
3.8}), (\ref{eq 3.9}), (\ref{eq 3.12}) and (\ref{eq 3.13}).
\end{proof}
The commuting perturbational results established in Corollary
\ref{3.10}, in particular, improve many recent results of
[\cite{Berkani-Amouch}, \cite{Berkani-Zariouh partial},
\cite{Berkani Zariouh}, \cite{Berkani Zariouh Functional Analysis},
\cite{Rashid gw}] by removing certain extra assumptions.
\begin {remark}${\label{3.11}}$ \begin{upshape}
(1) For generalized a-Weyl's theorem, part (1) of Corollary
\ref{3.10} improves
[\cite{Berkani Zariouh Functional Analysis}, Theorem 3.3] by removing the extra assumption that
$E_{a}(T) \subseteq \makebox{iso}\sigma(T)$ and extends [\cite{Berkani Zariouh Functional Analysis}, Theorem 3.2].
For property $(gw)$, on one hand, part (1) of Corollary
\ref{3.10} improves [\cite{Rashid gw}, Theorem
2.16] (resp. [\cite{Berkani-Amouch}, Theorem 3.6]) by removing the extra assumption that
$T$ is a-isoloid (resp. $T$ is a-polaroid) and extends [\cite{Berkani Zariouh}, Theorem 3.8];
on the other hand, our proof for it is a corrected proof of [\cite{Rashid gw}, Theorem
2.16]. For property $(gab)$, part (1) of Corollary
\ref{3.10} improves [\cite{Berkani Zariouh}, Theorem 3.2] by removing the extra assumption that
$T$ is a-polaroid and extends [\cite{Berkani Zariouh}, Theorem
3.4].
For generalized Weyl's theorem (resp. property $(w)$, property $(gaw)$), part (1) of Corollary
\ref{3.10} has been proved in [\cite{Berkani-Amouch}, Theorem 3.4]
(resp. [\cite{Aiena-Biondi-Villafane}, Theorem 3.8] and [\cite{Berkani-Amouch}, Theorem 3.1], [\cite{Berkani Zariouh}, Theorem 3.6])
by using a different method.
For a-Weyl's theorem,
some other commuting perturbational theorems for it have been proved in [\cite{Berkani Zariouh Functional Analysis}, \cite{Cao-Guo-Meng}, \cite{Oudghiri}].
For Weyl's theorem (resp. property $(aw)$, property $(gb)$), part (1) of Corollary
\ref{3.10} has been proved in [\cite{Oberai}, Theorem 3] (resp. [\cite{Berkani Zariouh}, Theorem
3.5], [\cite{Zeng-Zhong}, Theorem 2.6]).
(2) It was shown in [\cite{Aiena-Carpintero-Rosas}] that Browder's theorem and a-Browder's theorem are stable under
commuting Riesz perturbations.
(3) For property $(b)$ (resp. $(ab)$), part (3) of Corollary
\ref{3.10} extends [\cite{Berkani-Zariouh partial}, Theorem 2.1]
(resp. [\cite{Berkani Zariouh}, Theorem 3.1]) from commuting nilpotent perturbations to commuting quasi-nilpotent perturbations.
\end{upshape}
\end{remark}
We conclude this paper with some examples illustrating our commuting
perturbational results for Weyl-Browder type theorems and properties
(generalized or not).
The following simple example shows that $gaW$, $aW,$ $gW,$ $W,$
$(gw),$ $(w)$, $(gaw)$ and $(aw)$ are not stable under commuting
quasi-nilpotent perturbations.
\begin {example}${\label{3.12}}$ \begin{upshape}
Let $Q: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the
weighted backward shift defined by
$$Q(x_{1},x_{2},\cdots )=(\frac{x_{2}}{2},\frac{x_{3}}{3}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in
l_{2}(\mathbb{N}).$$ Then $Q$ is quasi-nilpotent,
$\sigma(Q)=\sigma_{a}(Q)=\sigma_{W}(Q)=\sigma_{USW}(Q)=\sigma_{BW}(Q)=\sigma_{USBW}(Q)$
$=\{0\}$ and $E_{a}(Q)=E_{a}^{0}(Q)=E(Q)=E^{0}(Q)=\{0\}$. Take
$T=0.$ Clearly, $T$ satisfies $gaW$ (resp. $aW,$ $gW,$ $W,$ $(gw),$
$(w)$, $(gaw)$, $(aw)$), but $T+Q=Q$ fails $gaW$ (resp. $aW,$ $gW,$
$W,$ $(gw),$ $(w)$, $(gaw)$, $(aw)$).
\end{upshape}
\end{example}
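For this $Q$ one has $\|Q^{k}\| = 1/(k+1)!$, so $\|Q^{k}\|^{1/k} \to 0$, confirming quasi-nilpotence. The following numerical sketch on a finite truncation is an illustration only (the truncation is itself nilpotent, so we inspect the decay of $\|Q^{k}\|^{1/k}$ rather than eigenvalues).

```python
import numpy as np

# Truncation of Q(x1, x2, ...) = (x2/2, x3/3, ...) to an n x n matrix:
# (Qx)_i = x_{i+1}/(i+1) in 1-based terms, i.e. Q[i, i+1] = 1/(i+2) 0-based.
n = 60
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = 1.0 / (i + 2)

# ||Q^k||^(1/k) should decrease towards 0 (the spectral radius of Q is 0).
P = np.eye(n)
roots = []
for k in range(1, 13):
    P = P @ Q                       # P = Q^k
    roots.append(np.linalg.norm(P, 2) ** (1.0 / k))

assert all(roots[k] < roots[k - 1] for k in range(1, len(roots)))
assert roots[-1] < 0.2
```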
The following example was given in [\cite{Zeng-Zhong}, Example 2.14]
to show that property $(gb)$ is not stable under commuting
quasi-nilpotent perturbations. Now, we use it to illustrate that
property $(gab)$ is also unstable under commuting quasi-nilpotent
perturbations.
\begin {example}${\label{3.13}}$ \begin{upshape}
Let $U: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the
unilateral right shift operator defined by
$$U(x_{1},x_{2},\cdots )=(0,x_{1},x_{2}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in
l_{2}(\mathbb{N}).$$ Let $V: l_{2}(\mathbb{N}) \longrightarrow
l_{2}(\mathbb{N})$ be a quasi-nilpotent operator defined by
$$V(x_{1},x_{2},\cdots )=(0,x_{1},0,\frac{x_{3}}{3},\frac{x_{4}}{4}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in
l_{2}(\mathbb{N}).$$ Let $N: l_{2}(\mathbb{N}) \longrightarrow
l_{2}(\mathbb{N})$ be a quasi-nilpotent operator defined by
$$N(x_{1},x_{2},\cdots )=(0,0,0,-\frac{x_{3}}{3},-\frac{x_{4}}{4}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in
l_{2}(\mathbb{N}).$$ It is easy to verify that $VN=NV$. We consider
the operators $T$ and $Q$ defined by $T=U \oplus V$ and $Q=0 \oplus
N$, respectively. Then $Q$ is quasi-nilpotent and $TQ=QT$. Moreover,
$$\sigma(T) = \sigma(U) \cup \sigma(V) = \{\lambda \in
\mathbb{C}: 0 \leq |\lambda| \leq 1 \},$$
$$\sigma_{a}(T) = \sigma_{a}(U) \cup \sigma_{a}(V) = \{\lambda \in
\mathbb{C}: |\lambda| = 1 \} \cup \{0\},$$
$$\sigma(T+Q) = \sigma(U) \cup
\sigma(V+N) = \{\lambda \in \mathbb{C}: 0 \leq |\lambda| \leq 1 \}$$
and
$$\sigma_{a}(T+Q) = \sigma_{a}(U) \cup \sigma_{a}(V+N) = \{\lambda \in
\mathbb{C}: |\lambda| = 1 \} \cup \{0\}.$$ It follows that
$\Pi_{a}(T)=\Pi(T)= \varnothing$ and $\{0\}= \Pi_{a}(T+Q) \neq
\Pi(T+Q)= \varnothing.$ Hence by [\cite{Berkani-Zariouh New
Extended}, Corollary 2.7], $T+Q$ does not satisfy property $(gab)$.
But since $T$ has SVEP, $T$ satisfies Browder's theorem or
equivalently, by [\cite{AmouchM ZguittiH equivalence}, Theorem 2.2],
$T$ satisfies generalized Browder's theorem. Therefore by
[\cite{Berkani-Zariouh New Extended}, Corollary 2.7] again, $T$
satisfies property $(gb)$.
\end{upshape}
\end{example}
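The commutation $VN=NV$ used above is easy to confirm on finite truncations: for these lower-banded weighted shifts, the leading $n\times n$ truncation of a product equals the product of the truncations. A numerical sketch (illustration only):

```python
import numpy as np

# 0-based matrix forms of V and N from the example:
# (Vx)_2 = x_1 gives V[1,0] = 1; (Vx)_{k+1} = x_k/k gives V[k,k-1] = 1/k, k >= 3;
# (Nx)_{k+1} = -x_k/k gives N[k,k-1] = -1/k, k >= 3.
n = 30
V = np.zeros((n, n))
N = np.zeros((n, n))
V[1, 0] = 1.0
for k in range(3, n):
    V[k, k - 1] = 1.0 / k
    N[k, k - 1] = -1.0 / k

assert np.allclose(V @ N, N @ V)       # VN = NV

# V + N is the rank-one nilpotent map (x1, x2, ...) -> (0, x1, 0, 0, ...)
S = V + N
assert np.count_nonzero(S) == 1 and S[1, 0] == 1.0
assert np.allclose(S @ S, 0)
```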
The following example was given in [\cite{Zeng-Zhong}, Example 2.12]
to show that property $(gb)$ is not preserved under commuting finite
rank perturbations. Now, we use it to illustrate that properties $(b)$
and $(ab)$ are also unstable under commuting finite rank (hence
compact) perturbations.
\begin {example}${\label{3.14}}$ \begin{upshape}
Let $U: l_{2}(\mathbb{N}) \longrightarrow l_{2}(\mathbb{N})$ be the
unilateral right shift operator defined by
$$U(x_{1},x_{2},\cdots )=(0,x_{1},x_{2}, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in
l_{2}(\mathbb{N}).$$ For fixed $0 < \varepsilon < 1$, let
$F_{\varepsilon}: l_{2}(\mathbb{N}) \longrightarrow
l_{2}(\mathbb{N})$ be a finite rank operator defined by
$$F_{\varepsilon}(x_{1},x_{2},\cdots )=(-\varepsilon x_{1},0,0, \cdots ) \makebox{\ \ \ for all \ } (x_{n}) \in
l_{2}(\mathbb{N}).$$ We consider the operators $T$ and $F$ defined
by $T=U \oplus I$ and $F=0 \oplus F_{\varepsilon}$, respectively.
Then $F$ is a finite rank operator and $TF=FT$. Moreover,
$$\sigma(T) = \sigma(U) \cup \sigma(I) = \{\lambda \in
\mathbb{C}: 0 \leq |\lambda| \leq 1 \},$$
$$\sigma_{a}(T) = \sigma_{a}(U) \cup \sigma_{a}(I) = \{\lambda \in
\mathbb{C}: |\lambda| = 1 \} ,$$
$$\sigma(T+F) = \sigma(U) \cup
\sigma(I+F_{\varepsilon}) = \{\lambda \in \mathbb{C}: 0 \leq
|\lambda| \leq 1 \}$$ and
$$\sigma_{a}(T+F) = \sigma_{a}(U) \cup \sigma_{a}(I+F_{\varepsilon}) = \{\lambda \in
\mathbb{C}: |\lambda| = 1 \} \cup \{ 1 - \varepsilon \}.$$ It
follows that $\Pi_{a}^{0}(T)=\Pi^{0}(T)= \varnothing$ and $\{1 -
\varepsilon \}= \Pi_{a}^{0}(T+F) \neq \Pi^{0}(T+F)= \varnothing.$
Hence by [\cite{Berkani-Zariouh Extended}, Corollary 2.7] (resp.
[\cite{Berkani-Zariouh New Extended}, Corollary 2.6]), $T+F$ does
not satisfy property $(b)$ (resp. $(ab)$). But since $T$ has SVEP,
$T$ satisfies a-Browder's theorem (resp. Browder's theorem),
therefore by [\cite{Berkani-Zariouh Extended}, Corollary 2.7] (resp.
[\cite{Berkani-Zariouh New Extended}, Corollary 2.6]) again, $T$
satisfies property $(b)$ (resp. $(ab)$).
\end{upshape}
\end{example}
\end{document}
\begin{document}
\title{On Strassen's rank additivity for small three-way tensors\thanks{Submitted to the editors
DATE.}}
\begin{abstract}
We address the problem of the additivity of the tensor rank.
That is, for two independent tensors we study if the rank of their direct sum is equal to the sum of their individual ranks.
A positive answer to this problem was previously known as Strassen's conjecture
until recent counterexamples were proposed by Shitov. The latter are not very explicit, and they are only known to exist asymptotically for very large tensor spaces.
In this article we prove that for some small three-way tensors the additivity holds.
For instance, if the rank of one of the tensors is at most 6, then the additivity holds.
Or, if one of the tensors lives in ${\mathbb C}^k\otimes {\mathbb C}^3\otimes {\mathbb C}^3$ for any $k$,
then the additivity also holds.
More generally, if one of the tensors is concise and its rank is at most 2
more than the dimension of one of the linear spaces, then additivity holds.
In addition we also treat some cases of the additivity of the border rank of such tensors.
In particular, we show that the additivity of the border rank holds if the direct sum tensor is contained in
${\mathbb C}^4\otimes {\mathbb C}^4\otimes {\mathbb C}^4$.
Some of our results are valid over an arbitrary base field.
\end{abstract}
{\footnotesize
\begin{keywords}
Tensor rank, additivity of tensor rank, Strassen's conjecture, slices of tensor, secant variety, border rank.
\end{keywords}
\begin{AMS}
Primary: 15A69, Secondary: 14M17, 68W30, 15A03.
\end{AMS}
\section{Introduction}\label{sect_intro}
Matrix multiplication is a bilinear map $\mu_{i,j,k}\colon {\mathcal{M}}^{i\times j} \times {\mathcal{M}}^{j\times k} \to {\mathcal{M}}^{i \times k}$,
where ${\mathcal{M}}^{l \times m}$
is the linear space of $l\times m $ matrices with coefficients in a field ${\Bbbk}$.
In particular, ${\mathcal{M}}^{l \times m} \simeq {\Bbbk}^{l \cdot m}$,
where $\simeq$ denotes an isomorphism of vector spaces.
We can interpret $\mu_{i,j,k}$ as a three-way tensor
\[
\mu_{i,j,k} \in ({\mathcal{M}}^{i \times j})^* \otimes ({\mathcal{M}}^{j \times k})^*\otimes {\mathcal{M}}^{i \times k}.
\]
Following the discoveries of Strassen \cite{strassen_gaussian_elimination_is_not_optimal}, scientists started to wonder
what is the minimal number of multiplications required to calculate the product of two matrices $MN$,
for any $M\in {\mathcal{M}}^{i \times j}$ and $N \in {\mathcal{M}}^{j \times k}$.
This is a question about the \emph{tensor rank} of $\mu_{i,j,k}$.
Suppose $A$, $B$, and $C$ are finite dimensional vector spaces over ${\Bbbk}$.
A \emph{simple tensor} is an element of the tensor space $A\otimes B \otimes C$ which can be written as $a\otimes b \otimes c$ for some $a\in A$, $b\in B$, $c\in C$.
The rank of a tensor $p\in A\otimes B \otimes C$ is the minimal number $R(p)$ of simple tensors needed,
such that $p$ can be expressed as a linear combination of simple tensors.
Thus $R(p)=0$ if and only if $p=0$, and $R(p)=1$ if and only if $p$ is a simple tensor.
In general, the higher the rank is, the more complicated $p$ ``tends'' to be.
In particular, the minimal number of multiplications needed to calculate $MN$ as above is equal to $R(\mu_{i,j,k})$.
See for instance \cite{comon_tensor_decompositions_survey}, \cite{landsberg_tensorbook},
\cite{carlini_grieve_oeding_four_lectures_on_secants} and references therein
for more details and further motivations to study tensor rank.
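As a concrete illustration of these notions, $R(\mu_{2,2,2}) \le 7$ by Strassen's algorithm. The following numerical sketch (an illustration only; the flattening of $2\times 2$ matrices to ${\mathbb C}^4$ is our own convention) verifies that the seven classical rank-one terms sum to the matrix multiplication tensor.

```python
import numpy as np

def E(i, j):
    """Basis vector of the flattened 2x2 matrix space: entry (i,j) -> slot 2i+j."""
    v = np.zeros(4)
    v[2 * i + j] = 1.0
    return v

A = {(i, j): E(i, j) for i in range(2) for j in range(2)}  # shared basis

# The matrix multiplication tensor: mu[(i,j), (j,k), (i,k)] = 1.
mu = np.zeros((4, 4, 4))
for i in range(2):
    for j in range(2):
        for k in range(2):
            mu[2 * i + j, 2 * j + k, 2 * i + k] = 1.0

# Strassen's seven rank-one terms u (x) v (x) w.
U = [A[0,0]+A[1,1], A[1,0]+A[1,1], A[0,0], A[1,1],
     A[0,0]+A[0,1], A[1,0]-A[0,0], A[0,1]-A[1,1]]
V = [A[0,0]+A[1,1], A[0,0], A[0,1]-A[1,1], A[1,0]-A[0,0],
     A[1,1], A[0,0]+A[0,1], A[1,0]+A[1,1]]
W = [A[0,0]+A[1,1], A[1,0]-A[1,1], A[0,1]+A[1,1], A[0,0]+A[1,0],
     A[0,1]-A[0,0], A[1,1], A[0,0]]

decomp = sum(np.einsum('a,b,c->abc', u, v, w) for u, v, w in zip(U, V, W))
assert np.allclose(decomp, mu)   # hence R(mu_{2,2,2}) <= 7
```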
Our main interest in this article is in the \emph{additivity} of the tensor rank.
Going on with the main example, given arbitrary four matrices
$M'\in {\mathcal{M}}^{i' \times j'}$, $N' \in {\mathcal{M}}^{j' \times k'}$, $M''\in {\mathcal{M}}^{i'' \times j''}$, $N'' \in {\mathcal{M}}^{j'' \times k''}$,
suppose we want to calculate both products $M' N'$ and $M''N''$ simultaneously.
What is the minimal number of multiplications needed to obtain the result?
Is it equal to the sum of the ranks $R(\mu_{i',j',k'}) + R(\mu_{i'',j'',k''})$?
More generally, the same question can be asked for arbitrary tensors.
If we are given two tensors in independent vector spaces, is the rank of their sum equal to the sum of their ranks?
A positive answer to this question was widely known as Strassen's Conjecture \cite[p.~194, \S4, Vermutung~3]{strassen_vermeidung_von_divisionen}, \cite[\S5.7]{landsberg_tensorbook},
until it was disproved by Shitov \cite{shitov_counter_example_to_Strassen}.
\begin{problem}[Strassen's additivity problem]\label{prob_strassen}
Suppose $A = A' \oplus A''$, $B = B' \oplus B''$, and $C = C' \oplus C''$, where all $\fromto{A}{C''}$ are finite dimensional vector spaces over a field ${\Bbbk}$.
Pick $p' \in A' \otimes B' \otimes C'$ and $p'' \in A'' \otimes B'' \otimes C''$
and let $p= p' + p''$, which we will write as $p= p'\oplus p''$.
Does the following equality hold
\begin{equation}\label{equ_additivity}
R(p) = R(p') + R(p'')?
\end{equation}
\end{problem}
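For $d=2$, i.e. for matrices, the answer is classically positive: the rank of a block-diagonal matrix equals the sum of the ranks of its blocks. A minimal numerical sketch of this baseline case (illustration only; the dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p1 = rng.standard_normal((2, 3))   # p'  in A' (x) B',  dim A' = 2, dim B' = 3
p2 = rng.standard_normal((3, 2))   # p'' in A'' (x) B''

# p = p' (+) p'': block-diagonal embedding into (A'+A'') (x) (B'+B'')
p = np.zeros((5, 5))
p[:2, :3] = p1
p[2:, 3:] = p2

r1 = np.linalg.matrix_rank(p1)
r2 = np.linalg.matrix_rank(p2)
assert np.linalg.matrix_rank(p) == r1 + r2   # additivity holds for matrices
```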
In this article we address several cases of Problem~\ref{prob_strassen} and its gen\-er\-al\-isa\-tions.
It is known that if one of the vector spaces $A'$, $A''$, $B'$, $B''$, $C'$, $C''$ is at most two dimensional,
then the additivity of the tensor rank \eqref{equ_additivity} holds:
see \cite{jaja_takche_Strassen_conjecture} for the original proof
and Section~\ref{sec_hook_shaped_spaces} for a discussion of more recent approaches.
One of our results includes the next case, that is, if say $\dim B''=\dim C'' = 3$, then \eqref{equ_additivity} holds. The following theorem summarises our main results.
\begin{thm}\label{thm_additivity_rank_intro}
Let ${\Bbbk}$ be any base field
and let $A'$, $A''$, $B'$, $B''$, $C'$, $C''$ be vector spaces over ${\Bbbk}$.
Assume $p' \in A' \otimes B' \otimes C'$ and $p'' \in A'' \otimes B'' \otimes C''$ and let
\[
p = p' \oplus p'' \in (A'\oplus A'') \otimes (B'\oplus B'') \otimes(C'\oplus C'').
\]
If at least one of the following conditions holds,
then the additivity of the rank holds for $p$, that is, $R(p) = R(p') + R(p'')$:
\begin{itemize}
\item ${\Bbbk}={\mathbb C}$ or ${\Bbbk}={\mathbb R}$ (complex or real numbers) and $\dim B''\le 3$ and $\dim C'' \le 3$.
\item $R(p'')\le \dim A'' +2$ and $p''$ is not contained in $\tilde{A''} \otimes B'' \otimes C''$
for any linear subspace $\tilde{A''} \subsetneqq A''$
(this part of the statement is valid for any field ${\Bbbk}$).
\item ${\Bbbk}={\mathbb R}$ or ${\Bbbk}$
is an algebraically closed field of characteristic $\ne 2$
and $R(p'')\le 6$.
\end{itemize}
Analogous statements hold if we exchange the roles of $A$, $B$, $C$ and/or of ${}'$ and ${}''$.
\end{thm}
The theorem summarises the content of Theorems~\ref{thm_additivity_rank_plus_2}--\ref{thm_additivity_rank_6}
proven in Section~\ref{sect_proofs_of_main_results_on_rank}.
\begin{rem}
Although most of our arguments are characteristic free,
we partially rely on some earlier results which
often are proven only over the fields of the
complex or the real numbers, or other special fields.
Specifically, we use upper bounds on the maximal rank
of small tensors,
such as \cite{bremner_hu_Kruskal_theorem} or \cite{sumi_miyazaki_sakata_maximal_tensor_rank}.
See Section~\ref{sect_proofs_of_main_results_on_rank}
for a more detailed discussion.
In particular, the consequence of the proof of
Theorem~\ref{thm_additivity_rank_6} is that if
(over any field ${\Bbbk}$) there are $p'$ and $p''$
such that $R(p'')\le 6$ and $R(p'\oplus p'')< R(p')+R(p'')$,
then $p''\in {\Bbbk}^3\otimes {\Bbbk}^3 \otimes {\Bbbk}^3$ and $R(p'')=6$.
In \cite{bremner_hu_Kruskal_theorem}
it is shown that if ${\Bbbk}={\mathbb Z}_2$ (the field with two elements),
then such tensors $p''$ with $R(p'')=6$ exist.
\end{rem}
Some other cases of additivity were shown in \cite{feig_winograd_strassen}.
Another variant of Problem~\ref{prob_strassen} asks the same question in the setting of symmetric tensors
and the symmetric tensor rank, or equivalently, for homogeneous polynomials and their Waring rank.
No counterexamples to this version of the problem are yet known,
while some partial positive results are described in
\cite{carlini_catalisano_chiantini_symmetric_Strassen_1},
\cite{carlini_catalisano_chiantini_geramita_woo_symmetric_Strassen_2},
\cite{carlini_catalisano_oneto_Strassen},
\cite{casarotti_massarenti_mella_Comon_and_Strassen},
and \cite{teitler_symmetric_strassen}.
Possible ad hoc extensions to the symmetric case of the techniques and results obtained in this article are subject of a follow-up research.
Next we turn our attention to the \emph{border rank}.
Roughly speaking, over the complex numbers, a tensor $p$ has border rank at most $r$,
if and only if it is a limit of tensors of rank at most $r$.
The border rank of $p$ is denoted by ${\underline{R}}(p)$.
One can pose the analogue of Problem~\ref{prob_strassen} for the border rank:
for which tensors $p' \in A' \otimes B' \otimes C'$ and $p'' \in A'' \otimes B'' \otimes C''$
is the border rank additive, that is, ${\underline{R}}(p'\oplus p'') = {\underline{R}}(p')+{\underline{R}}(p'')$?
In general, the answer is negative; in fact there exist examples for which ${\underline{R}}(p'\oplus p'') <{\underline{R}}(p')+ {\underline{R}} (p'')$:
Sch\"onhage \cite{schonhage_matrix_multiplication} proposed a family of counterexamples amongst which the smallest is
\[
{\underline{R}}(\mu_{2,1,3})=6,\quad {\underline{R}}(\mu_{1,2,1})=2, \quad {\underline{R}}(\mu_{2,1,3}\oplus \mu_{1,2,1})=7,
\]
see also \cite[\S11.2.2]{landsberg_tensorbook}.
Nevertheless, one may be interested in special cases of the problem. We describe one instance suggested by J.~Landsberg
(private communication, also mentioned during his lectures at Berkeley in 2014).
\begin{problem}[Landsberg]
Suppose $A', B', C'$ are vector spaces and $A''\simeq B'' \simeq C'' \simeq {\mathbb C}$.
Let $p'\in A'\otimes B' \otimes C'$ be any tensor and $p'' \in A''\otimes B'' \otimes C''$ be a non-zero tensor.
Is ${\underline{R}}(p'\oplus p'') > {\underline{R}}(p')$?
\end{problem}
Another interesting question is: what is the smallest counterexample to the additivity of the border rank?
The example of Sch\"onhage lives in ${\mathbb C}^{2+2}\otimes {\mathbb C}^{3+2}\otimes {\mathbb C}^{6+1}$,
that is, it requires using a seven-dimensional vector space.
Here we show that if all three spaces $A$, $B$, $C$ have dimensions at most $4$,
then it is impossible to find a counterexample to the additivity of the border rank.
\begin{thm}\label{thm_additivity_border_rank_in_dimension_4}
Suppose $A', A'', B', B'', C',C''$ are vector spaces over ${\mathbb C}$
and $A=A'\oplus A''$, $B=B'\oplus B''$, and $C=C'\oplus C''$.
If $\dim A, \dim B, \dim C \le 4$, then
for any $p' \in A' \otimes B' \otimes C'$ and $p'' \in A'' \otimes B'' \otimes C''$ the
additivity of the border rank holds:
\[
{\underline{R}}(p'\oplus p'') = {\underline{R}}(p')+{\underline{R}}(p'').
\]
\end{thm}
We prove the theorem in Section~\ref{sect_border_rank}
as Corollary~\ref{cor_additivity_brank_very_small_a_b_c}, Propositions~\ref{prop_br_case_3_2_2_1_2_2}
and~\ref{prop_br_case_3_3_3_1_1_1}, which in fact cover a wider variety of cases.
\subsection{Overview}
In this article, for the sake of simplicity, we mostly restrict our presentation to the case of three-way tensors,
even though some intermediate results hold more generally.
In Section~\ref{sec_basics} we introduce the notation and review known methods about tensors in general.
We review the translation of the rank and border rank of three-way tensors into statements about linear spaces of matrices.
In Proposition~\ref{prop_bound_on_rank_for_non_concise_decompositions_for_vector_spaces} we explain that any decomposition that uses elements outside of the minimal tensor space containing a given tensor must involve more terms
than the rank of that tensor.
In Section~\ref{sect_dir_sums} we present the notation related to the direct sum tensors
and we prove the first results on the additivity of the tensor rank.
In particular, we slightly generalise the proof of the additivity of the rank when one of
the tensor spaces has dimension $2$.
In Section~\ref{sec_rank_one_matrices_and_additive_rank} we analyse
rank one matrices contributing to the minimal decompositions of tensors,
and we distinguish seven types of such matrices.
Then we show that, in proving the additivity of the tensor rank, one can get rid of two of those types:
we can produce a smaller instance which avoids these two types and such that,
if the additivity holds for the smaller instance, then it also holds for the original one.
This is the core observation to prove the main result, Theorem~\ref{thm_additivity_rank_intro}.
Finally, in Section~\ref{sect_border_rank} we analyse the additivity of the border rank for small tensor spaces.
For most of the possible splittings of the triple $A={\mathbb C}^4 =A'\oplus A''$,
$B={\mathbb C}^4 =B'\oplus B''$, $C={\mathbb C}^4 =C'\oplus C''$,
there is an easy observation (Corollary~\ref{cor_additivity_brank_very_small_a_b_c})
proving the additivity of the border rank.
The remaining two pairs of triples are treated by more advanced methods,
involving in particular the Strassen type equations for secant varieties.
We conclude the article with a brief discussion of the potential analogue of
Theorem~\ref{thm_additivity_border_rank_in_dimension_4} for
$A=B=C={\mathbb C}^5$.
\section{Ranks and slices}\label{sec_basics}
This section reviews the notions of rank, border rank, slices, and conciseness. Readers who are familiar with these concepts may safely skip this section.
The main things to remember from here are Notation~\ref{notation_V_Seg} and
Proposition~\ref{prop_bound_on_rank_for_non_concise_decompositions_for_vector_spaces}.
Let $\fromto{A_1, A_2}{A_d}$, $A$, $B$, $C$, and $V$ be finite dimensional vector spaces over
a field ${\Bbbk}$.
Recall a tensor $s\in A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ is \emph{simple} if and only if it can be written as $a_1 \otimes a_2 \otimes \dotsb\otimes a_d$
with $a_i \in A_i$. Simple tensors will also be referred to as \emph{rank one tensors} throughout this paper.
If $P$ is a subset of $V$, we denote by $\langle P \rangle$ its linear span.
If $P=\setfromto{p_1}{p_r}$ is a finite subset, we will write
$\langle\fromto{p_1}{p_r}\rangle$ rather than $\langle\setfromto{p_1}{p_r}\rangle$ to simplify notation.
\begin{defin}
Suppose $W\subset A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ is a linear subspace of the tensor product space.
We define $R(W)$, \emph{the rank} of $W$, to be the minimal number $r$, such that there exist simple tensors $\fromto{s_1}{s_r}$
with $W$ contained in $\langle\fromto{s_1}{s_r}\rangle$.
For $p \in A_1 \otimes \dotsb \otimes A_d$, we write $R(p):= R(\langle p\rangle)$.
\end{defin}
In the setting of the definition, if $d=1$, then $R(W)= \dim W$.
If $d=2$ and $W = \langle p \rangle$ is $1$-dimensional, then $R(W)$ is the rank of $p$ viewed as a linear map $A_1^*\to A_2$.
If $d=3$ and $W = \langle p \rangle$ is $1$-dimensional, then $R(W)$ is equal to $R(p)$ in the sense of Section~\ref{sect_intro}.
More generally, for arbitrary $d$, one can relate the rank $R(p)$ of $d$-way tensors with the rank $R(W)$ of certain linear subspaces in the space of $(d-1)$-way tensors.
This relation is based on the \emph{slice technique}, which we are going to review in Section~\ref{section_slices}.
\subsection{Variety of simple tensors}\label{sect_variety_of_simple_tensors}
As is clear from the definition, the rank of a tensor $p$ is invariant under non-zero rescalings of $p$.
Thus it is natural and customary to consider the rank as a function on the projective space ${\mathbb P}(A_1 \otimes A_2 \otimes \dotsb \otimes A_d)$.
There the set of simple tensors is naturally isomorphic to the Cartesian product of projective spaces.
Its embedding in the tensor space is also called the \emph{Segre variety}:
\[
\Seg=\Seg_{A_1, A_2, \dotsc, A_d} := {\mathbb P} A_1 \times {\mathbb P} A_2 \times \dotsb \times {\mathbb P} A_d \subset {\mathbb P}(A_1 \otimes A_2 \otimes \dotsb \otimes A_d).
\]
We will intersect linear subspaces of the tensor space with the Segre variety.
In the language of algebraic geometry, such an intersection may carry a non-trivial scheme structure.
In this article we ignore the scheme structure, and all our intersections are set-theoretic.
To avoid ambiguity of notation,
we write $\reduced{(\cdot)}$ to underline this issue;
the reader with no background in algebraic geometry may simply ignore the symbol $\reduced{(\cdot)}$.
\begin{notation}\label{notation_V_Seg}
Given a linear subspace of a tensor space, $V \subset A_1 \otimes A_2 \otimes \dotsb \otimes A_d$,
we denote:
\[
V_{\Seg}:=\reduced{({\mathbb P} V \cap \Seg)}.
\]
Thus $V_{\Seg}$ is (up to projectivisation)
the set of rank one tensors in $V$.
\end{notation}
In this setting, we have the following trivial rephrasing of the definition of rank:
\begin{prop}\label{prop_can_choose_decomposition_containing_simple_tensors_from_W}
Suppose $W \subset A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ is a linear subspace.
Then $R(W)$ is equal to the minimal number $r$, such that there exists a linear subspace $V \subset A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ of dimension $r$
with $W \subset V$ and ${\mathbb P} V $ is linearly spanned by
$V_{\Seg}$.
In particular,
\begin{enumerate}
\item $R(W) = \dim W$ if and only if
\[
{\mathbb P} W = \langle W_{\Seg} \rangle.
\]
\item Let $U$ be the linear subspace such that ${\mathbb P} U :=\langle W_{\Seg} \rangle$.
Then $\dim U$ simple tensors from $W$ can be used in a minimal decomposition of $W$,
that is, there exist simple tensors $\fromto{s_1}{s_{R(W)}}$ with $\fromto{s_1}{s_{\dim U}}\in W_{\Seg}$ such that $W\subset \langle \fromto{s_1}{s_{R(W)}} \rangle$.
\end{enumerate}
\end{prop}
\subsection{Secant varieties and border rank}
For this subsection (and also in Section~\ref{sect_border_rank}) we assume ${\Bbbk}={\mathbb C}$.
See Remark~\ref{rem_secants_other_fields} for generalisations.
In general, the set of tensors of rank at most $r$ is neither open nor closed.
One of the very few exceptions is the case of matrices, that is, tensors in $A\otimes B$.
Instead, one defines the secant variety
$\sigma_r(\Seg_{\fromto{A_1}{A_d}}) \subset {\mathbb P}(A_1\otimes \dotsb \otimes A_d)$ as:
\[
\sigma_r = \sigma_r(\Seg_{\fromto{A_1}{A_d}}) := \overline{\set{p \in {\mathbb P}(A_1\otimes \dotsb \otimes A_d) \mid R(p) \le r}}.
\]
The overline $\overline{\set{\cdot}}$ denotes the closure in the Zariski topology; however,
in this definition the resulting set coincides with the Euclidean closure.
This is a classically studied algebraic variety
\cite{adlandsvik_joins_and_higher_secant_varieties}, \cite{palatini_secant_varieties}, \cite{zak_tangents},
and leads to a definition of border rank of a point.
\begin{defin}\label{def_br_point}
For $p \in A_1 \otimes A_2 \otimes \dotsb \otimes A_d$
define ${\underline{R}}(p)$, \emph{the border rank} of $p$,
to be the minimal number $r$, such that $\linspan{p} \in \sigma_r(\Seg_{\fromto{A_1}{A_d}})$,
where $\linspan{p}$ is the underlying point of $p$ in the projective space.
We follow the standard convention that ${\underline{R}}(p) = 0$ if and only if $p=0$.
\end{defin}
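The classical example distinguishing rank from border rank is the tensor $p = x\otimes x\otimes y + x\otimes y\otimes x + y\otimes x\otimes x$ (for linearly independent $x,y$), which has $R(p)=3$ but ${\underline{R}}(p)=2$, since it is a limit of rank-two tensors. The following minimal numerical sketch of this limit is only an illustration; the coordinates and tolerances are our own choices.

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

def simple(a, b, c):
    # the simple (rank one) tensor a (x) b (x) c as a 3-way array
    return np.einsum('i,j,k->ijk', a, b, c)

# p has rank 3 ...
p = simple(x, x, y) + simple(x, y, x) + simple(y, x, x)

# ... but border rank 2: the rank-two tensors
#   p_t = ( (x+ty) (x) (x+ty) (x) (x+ty)  -  x (x) x (x) x ) / t
# converge to p as t -> 0
for t in [1e-1, 1e-2, 1e-3]:
    xt = x + t * y
    p_t = (simple(xt, xt, xt) - simple(x, x, x)) / t
    print(t, np.max(np.abs(p_t - p)))  # error decreases like O(t)
```

Each $p_t$ is a linear combination of two simple tensors, yet no rank-two tensor equals $p$ exactly; this is why the set of tensors of rank at most $2$ fails to be closed.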
Analogously we can give the same definitions for linear subspaces.
Fix $\fromto{A_1}{A_d}$ and an integer $k$.
Denote by $\Gr(k,A_1\otimes \dotsb \otimes A_d)$
the Grassmannian of $k$-dimensional linear subspaces of the vector space $A_1\otimes \dotsb \otimes A_d$.
Let $\sigma_{r,k}(\Seg) \subset \Gr(k,A_1\otimes \dotsb \otimes A_d)$ be the \emph{Grassmann secant variety}
\cite{landsberg_jabu_ranks_of_tensors},
\cite{chiantini_coppens_Grassmannians_of_secant_varieties},
\cite{ciliberto_cools_Grassmann_secant_extremal_varieties}:
\[
\sigma_{r,k}(\Seg):=\overline{\set{W \in \Gr(k,A_1\otimes \dotsb \otimes A_d) \mid R(W) \le r}}.
\]
\begin{defin}\label{def_br_linear_space}
For $W \subset A_1 \otimes A_2 \otimes \dotsb \otimes A_d$, a linear subspace of dimension $k$,
define ${\underline{R}}(W)$, \emph{the border rank} of $W$,
to be the minimal number $r$, such that $W \in \sigma_{r,k}(\Seg_{\fromto{A_1}{A_d}})$.
\end{defin}
In particular, if $k =1$, then Definition~\ref{def_br_linear_space} coincides with Definition~\ref{def_br_point}:
${\underline{R}}(p) = {\underline{R}} (\linspan{p})$.
An important consequence of the definitions of border rank of a point or of a linear space is that it is a semicontinuous function
\[
{\underline{R}} \colon \Gr(k,A_1\otimes \dotsb \otimes A_d)\to {\mathbb N}
\]
for every $k$. Moreover, ${\underline{R}}(p)= 1$ if and only if $\linspan{p}\in \Seg$.
\begin{rem}\label{rem_secants_other_fields}
When treating the border rank and secant varieties we assume the base field is ${\Bbbk}={\mathbb C}$.
However, the results of \cite[\S 6, Prop.~6.11]{jabu_jeliesiejew_finite_schemes_and_secants}
imply (roughly) that anything that we can say about a secant variety over ${\mathbb C}$,
we can also say about the same secant variety over any field ${\Bbbk}$ of characteristic~$0$.
In particular, the same results about border rank hold over any algebraically closed field ${\Bbbk}$ of characteristic~$0$.
If ${\Bbbk}$ is not algebraically closed, then the definition of border rank above might not generalise
immediately, as there might be a difference between the closure in the Zariski topology
and the closure in some other topology, the latter being the Euclidean topology in the case ${\Bbbk}={\mathbb R}$.
\end{rem}
\subsection{Independence of the rank of the ambient space}
As defined above, the notions of rank and border rank of a vector subspace $W \subset A_1\otimes A_2 \otimes\dotsb \otimes A_d$,
or of a tensor $p \in A_1 \otimes\dotsb \otimes A_d$, might seem to depend on the ambient spaces $A_i$.
However, it is well known that the rank is actually independent of the choice of the vector spaces.
We first recall this result for tensors, then we apply the slice technique to show it in general.
\begin{lem}[{\cite[Prop.~3.1.3.1]{landsberg_tensorbook} and \cite[Cor.~2.2]{landsberg_jabu_ranks_of_tensors}}]\label{lem_rank_independent_of_ambient}
Suppose ${\Bbbk}={\mathbb C}$ and $p \in A_1' \otimes A_2' \otimes \dotsb \otimes A_d'$ for some linear subspaces $A_i'\subset A_i$.
Then $R(p)$ (respectively, ${\underline{R}}(p)$)
measured as the rank (respectively, the border rank) in $A_1' \otimes \dotsb \otimes A_d'$ is equal
to the rank (respectively, the border rank) measured in $A_1 \otimes \dotsb \otimes A_d$.
\end{lem}
We also state a stronger fact about the rank from the same references:
in the notation of Lemma~\ref{lem_rank_independent_of_ambient}, any minimal expression $W\subset \langle \fromto{s_1}{s_{R(W)}}\rangle$,
for simple tensors $s_i$, must be contained in $A_1' \otimes \dotsb \otimes A_d'$.
Here we show that the difference in the length of the decompositions
must be at least the difference of the respective dimensions.
For simplicity of notation, we restrict the presentation to the case $d=3$.
The reader will easily generalise the argument to any other number of factors.
We stress that the lemma below does not depend on the base field, in particular, it does not require algebraic closedness.
\begin{lem}\label{lem_bound_on_rank_for_non_concise_decompositions}
Suppose that $p \in A' \otimes B \otimes C$, for a linear subspace $A'\subset A$,
and that we have an expression $p \in \langle \fromto{s_1}{s_{r}}\rangle$, where $s_i = a_i \otimes b_i \otimes c_i$ are simple tensors.
Then:
\[
r \ge R(p) + \dim \langle \fromto{a_1}{a_r}\rangle - \dim A'.
\]
\end{lem}
In particular,
Lemma~\ref{lem_bound_on_rank_for_non_concise_decompositions}
implies the rank part of
Lemma~\ref{lem_rank_independent_of_ambient}
for any base field ${\Bbbk}$,
which on its own can also be seen by following
the proof
of \cite[Prop.~3.1.3.1]{landsberg_tensorbook}
or \cite[Cor.~2.2]{landsberg_jabu_ranks_of_tensors}.
\begin{prf}
For simplicity of notation, we assume that $A' \subset \langle \fromto{a_1}{a_r}\rangle$
(by replacing $A'$ with a smaller subspace if needed) and that $A = \langle \fromto{a_1}{a_r}\rangle$
(by replacing $A$ with a smaller subspace).
Set $k = \dim A -\dim A'$ and let us reorder the simple tensors $s_i$
in such a way that the first $k$ of the $a_i$'s are linearly independent
and $\linspan{A' \sqcup \setfromto{a_1}{a_k}} = A$.
Let $A'' = \linspan{\fromto{a_1}{a_k}}$ so that $A = A' \oplus A''$
and consider the quotient map $\pi\colon A \to A/A''$.
Then the composition $A' \to A \stackrel{\pi}{\to} A/A'' \simeq A'$ is an isomorphism, denoted by $\phi$.
By a minor abuse of notation, let $\pi$ and $\phi$ also denote the induced maps
$\pi\colon A\otimes B\otimes C \to (A/A'')\otimes B \otimes C$ and
$\phi\colon A'\otimes B\otimes C \simeq A'\otimes B \otimes C$.
We have
\begin{align*}
\phi(p) =\pi(p) & \in \pi\left(\linspan{ \fromto{a_1\otimes b_1\otimes c_1}{a_r\otimes b_r\otimes c_r} }\right)\\
& = \linspan{ \fromto{\pi(a_1)\otimes b_1\otimes c_1}{\pi(a_r)\otimes b_r\otimes c_r} }\\
& = \linspan{ \fromto{\pi(a_{k+1})\otimes b_{k+1}\otimes c_{k+1}}{\pi(a_r)\otimes b_r\otimes c_r} }.
\end{align*}
Using the inverse of the isomorphism $\phi$,
we get a presentation of $p$ as a linear combination of $(r-k)$ simple tensors,
that is, $R(p)\le r-k$ as claimed.
\end{prf}
\subsection{Slice technique and conciseness}\label{section_slices}
We define the notion of conciseness of tensors and we review
a standard \emph{slice technique} that replaces the calculation of rank of three way tensors with the calculation of rank of linear spaces of matrices.
A tensor $p \in A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ determines a linear map $p \colon A_1^* \to A_2 \otimes \dotsb \otimes A_d$.
Consider the image $W = p(A_1^*) \subset A_2 \otimes\dotsb \otimes A_d$.
The elements of a basis of $W$ (or the image of a basis of $A_1^*$) are called \emph{slices} of $p$.
The point is that $W$ determines $p$ essentially uniquely (up to an action of $GL(A_1)$), cf. \cite[Cor.~3.6]{landsberg_jabu_ranks_of_tensors}.
Thus the subspace $W$ captures the geometric information about $p$, in particular its rank and border rank.
\begin{lem}[{\cite[Thm~2.5]{landsberg_jabu_ranks_of_tensors}}]\label{lem_rank_of_space_equal_to_rank_of_tensor}
Suppose $p \in A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ and $W = p(A_1^*)$ as above.
Then $R(p) = R(W)$ and (if ${\Bbbk}={\mathbb C}$) ${\underline{R}}(p) = {\underline{R}}(W)$.
\end{lem}
Clearly, we can also replace $A_1$ with any of the $A_i$ to define slices as images $p(A_i^*)$ and obtain the analogue of the lemma.
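In coordinates, the slices with respect to the first factor are simply the layers of the corresponding multi-way array, and $\dim W$ is the rank of the flattening matrix whose rows are the flattened slices. A small sketch follows (the equality $R(p)=R(W)$ itself is not verified numerically, since computing tensor rank is hard in general; the sample slices are our own):

```python
import numpy as np

# A sample tensor p in A1 (x) A2 (x) A3 with dim A1 = dim A2 = dim A3 = 3,
# built from its three slices (the images of a basis of A1^*)
s1 = np.eye(3)
s2 = np.diag([1.0, 2.0, 3.0])
s3 = s1 + s2                      # linearly dependent on s1 and s2
p  = np.stack([s1, s2, s3])       # p[i] is the i-th slice

# W = p(A1^*) is the span of the slices inside A2 (x) A3; its dimension
# equals the rank of the flattening matrix whose rows are the flattened slices
flattening = p.reshape(3, -1)
print(np.linalg.matrix_rank(flattening))  # 2, since s3 = s1 + s2
```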
We can now prove the analogue of Lemmas~\ref{lem_rank_independent_of_ambient} and \ref{lem_bound_on_rank_for_non_concise_decompositions}
for higher dimensional subspaces of the tensor space. As before, to simplify the notation,
we only consider the case $d=2$, which is our main interest.
\begin{prop}\label{prop_bound_on_rank_for_non_concise_decompositions_for_vector_spaces}
Suppose $W \subset B' \otimes C'$ for some linear subspaces $B'\subset B$, $C' \subset C$.
\begin{enumerate}
\item \label{item_rank_can_be_measured_anywhere}
The numbers $R(W)$ and ${\underline{R}}(W)$ measured as the rank and border rank of $W$ in $B' \otimes C'$
are equal to its rank and border rank calculated in $B \otimes C$ (in the statement about border rank, we assume that ${\Bbbk}={\mathbb C}$).
\item \label{item_decompositions_in_larger_spaces}
Moreover, if we have an expression $W \subset \langle \fromto{s_1}{s_{r}}\rangle$,
where $s_i = b_i \otimes c_i$ are simple tensors,
then:
\[
r \ge R(W) + \dim \langle \fromto{b_1}{b_r}\rangle - \dim B'
\]
\end{enumerate}
\end{prop}
\begin{prf}
Reduce to Lemmas~\ref{lem_rank_independent_of_ambient} and \ref{lem_bound_on_rank_for_non_concise_decompositions}
using Lemma~\ref{lem_rank_of_space_equal_to_rank_of_tensor}.
\end{prf}
We conclude this section by recalling the following definition.
\begin{defin}
Let $p \in A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ be a tensor or let $W \subset A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ be a linear subspace.
We say that $p$ or $W$ is \emph{$A_1$-concise} if for all linear subspaces $V \subset A_1$, if $p \in V \otimes A_2 \otimes \dotsb \otimes A_d$
(respectively, $W \subset V \otimes A_2 \otimes \dotsb \otimes A_d$), then $V = A_1$.
Analogously, we define $A_i$-concise tensors and spaces for $i =\fromto{2}{d}$.
We say $p$ or $W$ is \emph{concise} if it is $A_i$-concise for all $i\in \setfromto{1}{d}$.
\end{defin}
\begin{rem}
Notice that $p \in A_1 \otimes A_2 \otimes \dotsb \otimes A_d$ is $A_1$-concise if and only if $p\colon A_1^* \to A_2 \otimes \dotsb \otimes A_d$ is injective.
\end{rem}
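Equivalently, in coordinates, $A_1$-conciseness amounts to the flattening matrix having full row rank. A quick sketch (the helper name and the sample tensor are ours):

```python
import numpy as np

def is_concise_in_factor(p, axis):
    # p is concise in the chosen factor iff the flattening along that
    # factor (rows = flattened slices) has full row rank, i.e. the map
    # A_i^* -> (tensor product of the remaining factors) is injective
    m = np.moveaxis(p, axis, 0).reshape(p.shape[axis], -1)
    return np.linalg.matrix_rank(m) == p.shape[axis]

# A tensor supported in a 1-dimensional subspace V of the first factor:
a = np.array([1.0, 1.0])            # spans V, a proper subspace of A1 = k^2
q = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # a rank-two matrix in A2 (x) A3
p = np.einsum('i,jk->ijk', a, q)    # p lies in V (x) A2 (x) A3

print(is_concise_in_factor(p, 0))   # False: p is not A1-concise
print(is_concise_in_factor(p, 1))   # True: p is A2-concise
```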
\section{Direct sum tensors and spaces of matrices}\label{sect_dir_sums}
Again, for simplicity of notation we restrict the presentation to the case of tensors in $A\otimes B \otimes C$ or linear subspaces of $B \otimes C$.
We introduce the following notation that will be adopted throughout this manuscript.
\begin{notation}\label{notation}
Let $A',A'',B',B'',C',C''$ be vector spaces over ${\Bbbk}$ of dimensions, respectively, $\mathbf{a}',\mathbf{a}'',\mathbf{b}',\mathbf{b}'',\mathbf{c}',\mathbf{c}''$.
Suppose $A = A' \oplus A''$, $B = B' \oplus B''$, $C = C' \oplus C''$
and $\mathbf{a}=\dim A = \mathbf{a}'+\mathbf{a}''$, $\mathbf{b}=\dim B =\mathbf{b}'+\mathbf{b}''$ and $\mathbf{c}=\dim C =\mathbf{c}'+\mathbf{c}''$.
For the purpose of illustration, we will interpret the two-way tensors in $B \otimes C$
as matrices in ${\mathcal{M}}^{\mathbf{b}\times \mathbf{c}}$.
This requires choosing bases of $B$ and $C$, but (whenever possible) we will refrain from naming the bases explicitly.
We will refer to an element of the space of matrices
${\mathcal{M}}^{\mathbf{b}\times \mathbf{c}} \simeq B\otimes C$ as a $(\mathbf{b}'+\mathbf{b}'',\mathbf{c}'+\mathbf{c}'')$ \emph{partitioned matrix}.
Every matrix $w\in{\mathcal{M}}^{\mathbf{b}\times \mathbf{c}} $ is a block matrix with four blocks of size
$\mathbf{b}'\times \mathbf{c}'$, $\mathbf{b}'\times \mathbf{c}''$,
$\mathbf{b}''\times \mathbf{c}'$ and $\mathbf{b}''\times \mathbf{c}''$ respectively.
\end{notation}
\begin{notation}\label{notation_2}
As in Section~\ref{section_slices}, a tensor $p\in A\otimes B\otimes C$ is a linear map $p:A^\ast\to B\otimes C$;
we denote by $W:=p(A^\ast)$ the image of $A^\ast$ in the space of matrices $B\otimes C$.
Similarly, if $p = p' + p'' \in (A' \oplus A'') \otimes (B' \oplus B'') \otimes (C' \oplus C'')$ is such that $p'\in A'\otimes B'\otimes C'$ and $p''\in A''\otimes B''\otimes C''$, we set $W':=p'({A'}^\ast)\subset B'\otimes C'$ and $W'':=p''({A''}^\ast)\subset B''\otimes C''$.
In such situation, we will say that $p = p' \oplus p''$ is a \emph{direct sum tensor}.
We have the following direct sum decomposition:
\[
W=W'\oplus W''\subset (B'\otimes C')\oplus (B''\otimes C'')
\]
and an induced matrix partition of type $(\mathbf{b}'+\mathbf{b}'',\mathbf{c}'+\mathbf{c}'')$ on every matrix $w\in W$ such that
\[
w=\begin{pmatrix}
w' & \underline{0} \\
\underline{0} & w''
\end{pmatrix},
\]
where $w'\in W'$ and $w''\in W''$, and the two $\underline{0}$'s denote zero matrices of size
$\mathbf{b}'\times\mathbf{c}''$ and $\mathbf{b}''\times\mathbf{c}'$ respectively.
\end{notation}
\begin{prop}
Suppose that $p$, $W$, etc.~are as in Notation~\ref{notation_2}.
Then the additivity of the rank holds for $p$, that is~$R(p) = R(p')+R(p'')$, if and only if the additivity of the rank holds for $W$, that is, $R(W) = R(W')+ R(W'')$.
\end{prop}
\begin{prf}
It is an immediate consequence of Lemma~\ref{lem_rank_of_space_equal_to_rank_of_tensor}.
\end{prf}
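For $d=2$, the additivity in the proposition reduces to the classical fact that the rank of a block-diagonal matrix is the sum of the ranks of its blocks. A quick numerical check of this baseline case (with arbitrary sample blocks of known rank):

```python
import numpy as np

# w' in B' (x) C' of rank 2, and w'' in B'' (x) C'' of rank 1
wp = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]]) @ np.array([[1.0, 2.0, 0.0, 1.0],
                                        [0.0, 1.0, 1.0, 0.0]])
wpp = np.array([[1.0],
                [2.0]]) @ np.array([[1.0, 0.0, 3.0]])

# the direct sum w = w' (+) w'' as a block-diagonal (3+2) x (4+3) matrix,
# with zero off-diagonal blocks as in the matrix partition above
w = np.zeros((5, 7))
w[:3, :4] = wp
w[3:, 4:] = wpp

print(np.linalg.matrix_rank(wp),
      np.linalg.matrix_rank(wpp),
      np.linalg.matrix_rank(w))  # 2 1 3
```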
\subsection{Projections and decompositions}
The situation we consider here again concerns the direct sums and their minimal decompositions.
We fix $W' \subset B'\otimes C'$ and $W'' \subset B''\otimes C''$ and we choose a minimal decomposition of $W'\oplus W''$,
that is, a linear subspace $V\subset B \otimes C$
such that $\dim V = R(W'\oplus W'')$, ${\mathbb P} V=\linspan{V_{\Seg}}$ and $V\supset W'\oplus W''$.
Such linear spaces $W'$, $W''$ and $V$ will be fixed for the rest of Sections~\ref{sect_dir_sums} and \ref{sec_rank_one_matrices_and_additive_rank}.
In addition to Notations~\ref{notation_V_Seg}, \ref{notation} and \ref{notation_2} we need the following.
\begin{notation}\label{notation_projection}
Under Notation~\ref{notation}, let $\pi_{C'}$ denote the projection
\[
\pi_{C'}:C \to C'',
\]
whose kernel is the space $C'$. With slight abuse of notation, we shall denote by $\pi_{C'}$ also the following projections
\[
\pi_{C'}:B\otimes C\to B\otimes C'', \text{ or } \pi_{C'}:A\otimes B\otimes C\to A \otimes B\otimes C'',
\]
with kernels, respectively, $B\otimes C'$ and $A\otimes B \otimes C'$.
The target of the projection is regarded as a subspace of $C$, $B\otimes C$, or $A\otimes B \otimes C$, so that it is possible to compose such projections, for instance:
\[
\pi_{C'} \pi_{B''} \colon B\otimes C \to B'\otimes C'', \text{ or } \pi_{C'} \pi_{B''} \colon A \otimes B\otimes C \to A\otimes B'\otimes C''.
\]
We also let $E'\subset B' $ (resp. $E''\subset B''$) be the minimal vector subspace such that
$\pi_{C'}(V)$ (resp. $\pi_{C''}(V)$) is contained in $(E'\oplus B'')\otimes C''$ (resp. $(B'\oplus E'')\otimes C'$).
By swapping the roles of $B$ and $C$, we define $F'\subset C'$ and $F''\subset C''$ analogously. By the lowercase letters $\mathbf{e}',\mathbf{e}'',\mathbf{f}',\mathbf{f}''$ we denote the dimensions of the subspaces $E',E'',F',F''$.
\end{notation}
If the differences $R(W') - \dim W'$ and $R(W'') - \dim W''$ (which we will informally call the \emph{gaps}) are large, then the spaces $E',E'',F',F''$ could be large too, in particular they can coincide with $B', B'', C', C''$ respectively.
In fact, these spaces measure ``how far'' a minimal decomposition $V$ of a direct sum $W=W' \oplus W''$ is from being a direct sum of decompositions of $W'$ and $W''$.
In particular, we will show in
Proposition~\ref{prop_SAC_if_E'=0} and Corollary~\ref{cor_small_dimensions_of_E_and_F},
that if $E'' = \set{0}$ or if both $E''$ and $F''$ are sufficiently small, then $R(W) = R(W')+ R(W'')$.
Then, as a consequence of Corollary~\ref{cor_bounds_on_es_and_fs},
if one of the gaps is at most two (say, $R(W'') = \dim W'' +2$),
then the additivity of the rank holds, see Theorem~\ref{thm_additivity_rank_plus_2}.
\begin{lem}\label{lemma_bound_r'_e'_R_w'}
In Notation~\ref{notation_projection} as above, with $W=W'\oplus W'' \subset B\otimes C$,
the following inequalities hold.
\begin{align*}
R(W') + \mathbf{e}'' & \le R(W)-\dim W'',&
R(W'')+ \mathbf{e}' & \le R(W)-\dim W', \\
R(W') + \mathbf{f}'' & \le R(W)-\dim W'',&
R(W'')+ \mathbf{f}' & \le R(W)-\dim W'.
\end{align*}
\end{lem}
\begin{proof}
We prove only the first inequality $R(W') + \mathbf{e}'' \le R(W)-\dim W''$,
the others follow in the same way by swapping $B$ and $C$ or ${}'$ and ${}''$.
By Proposition~\ref{prop_bound_on_rank_for_non_concise_decompositions_for_vector_spaces}\ref{item_rank_can_be_measured_anywhere} and \ref{item_decompositions_in_larger_spaces}
we may assume $W'$ is concise: neither $R(W')$ nor $R(W)$ is affected by replacing $B'$ with the minimal subspace by \ref{item_rank_can_be_measured_anywhere}, and by \ref{item_decompositions_in_larger_spaces} a minimal decomposition $V$ cannot involve any simple tensor from outside of this minimal subspace.
Since $V$ is spanned by rank one matrices and the projection
$\pi_{C''}$ preserves the set of matrices of rank at most one,
also the vector space $\pi_{C''}(V)$ is
spanned by rank one matrices, say
\[
\pi_{C''}(V) = \linspan{\fromto{b_1 \otimes c_1}{b_r\otimes c_r}}
\]
with $r= \dim \pi_{C''}(V)$.
Moreover, $\pi_{C''}(V)$ contains $W'$.
We claim that
\[
B'\oplus E'' = \linspan{\fromto{b_1}{b_r}}.
\]
Indeed, the inclusion $B' \subset \linspan{\fromto{b_1}{b_r}}$
follows from the conciseness of $W'$,
as $W' \subset V\cap B'\otimes C'$.
Moreover, the inclusions
$E''\subset \linspan{\fromto{b_1}{b_r}}$ and $B'\oplus E'' \supset \linspan{\fromto{b_1}{b_r}}$
follow from the definition of $E''$, cf. Notation~\ref{notation_projection}.
Thus Proposition~\ref{prop_bound_on_rank_for_non_concise_decompositions_for_vector_spaces}\ref{item_decompositions_in_larger_spaces}
implies that
\begin{equation}\label{equ_bound_on_pi_C_bis_of_V}
r= \dim\pi_{C''}(V) \ge R(W')+\underbrace{\dim\linspan{\fromto{b_1}{b_r}}}_{\mathbf{b}'+\mathbf{e}''}- \mathbf{b}' = R(W') +\mathbf{e}''.
\end{equation}
Since $V$ contains $W''$ and $\pi_{C''}(W'') = \set{0}$, we have
\[
r= \dim \pi_{C''}(V) \le \dim V - \dim W'' = R(W) - \dim W''.
\]
The claim follows from the above inequality together with \eqref{equ_bound_on_pi_C_bis_of_V}.
\end{proof}
Rephrasing the inequalities of Lemma~\ref{lemma_bound_r'_e'_R_w'}, we obtain the following.
\begin{cor}\label{cor_bounds_on_es_and_fs}
If $R(W) < R(W')+R(W'')$, then
\begin{align*}
\mathbf{e}' &< R(W') - \dim W', &
\mathbf{f}' &< R(W') - \dim W', \\
\mathbf{e}''&< R(W'') - \dim W'',&
\mathbf{f}''&< R(W'') - \dim W''.
\end{align*}
\end{cor}
This immediately recovers the known case of additivity, when the gap is equal to $0$,
that is, if $R(W')=\dim W'$, then $R(W)=R(W')+R(W'')$ (because $\mathbf{e}'\ge 0$).
Moreover, it implies that if one of the gaps is equal to $1$
(say $R(W')=\dim W'+1$),
then either the additivity holds or both $E'$ and $F'$
are trivial vector spaces.
In fact, the latter case is only possible if the former case
holds too.
\begin{lem}\label{lem_rank_at_least_2_more_than_dimension}
With Notation~\ref{notation_projection}, suppose $E'=\set{0}$ and $F'=\set{0}$.
Then the additivity of the rank holds: $R(W)= R(W') + R(W'')$.
In particular, if $R(W') \le \dim W' +1$, then the additivity holds.
\end{lem}
\begin{proof}
Since $E'=\set{0}$ and $F'=\set{0}$, by the definition of $E'$ and $F'$ we must have the following inclusions:
\[
\pi_{B''}(V) \subset B'\otimes C' \text{ and } \pi_{C''}(V) \subset B'\otimes C'.
\]
Therefore $V \subset B'\otimes C' \oplus B''\otimes C''$ and $V$ is obtained from the union of the decompositions of $W'$ and $W''$.
The last statement follows from Corollary~\ref{cor_bounds_on_es_and_fs}.
\end{proof}
Later in Proposition~\ref{prop_SAC_if_E'=0} we will show a stronger version of the above lemma,
namely that it is sufficient to assume that only one of $E'$ or $F'$ is zero.
In Corollary~\ref{cor_small_dimensions_of_E_and_F}
we prove a further generalisation based on the results in the following subsection.
\subsection{``Hook''-shaped spaces}\label{sec_hook_shaped_spaces}
It is known since \cite{jaja_takche_Strassen_conjecture} that the additivity of the tensor rank holds for tensors with one of the factors of dimension $2$, that is, using Notation~\ref{notation} and \ref{notation_2},
if $\mathbf{a}' \le 2$ then $R(p'+p'') = R(p')+R(p'')$.
The same claim is recalled in \cite[Sect.~4]{landsberg_michalek_abelian_tensors} after Theorem~4.1.
The brief comment says that if the rank of $p'$ can be calculated by the \emph{substitution method}, then the additivity of the rank holds.
Landsberg and Micha{\l}ek implicitly suggest that if $\mathbf{a}' \le 2$, then the rank of $p'$ can be calculated by the substitution method, \cite[Items~(1)--(6) after Prop.~3.1]{landsberg_michalek_abelian_tensors}.
This is indeed the case (at least over an algebraically closed field ${\Bbbk}$), although rather demanding to verify,
at least in the version of the algorithm presented in the cited article.
In particular, to show that the substitution method can calculate the rank of $p'\in {\Bbbk}^2\otimes B'\otimes C'$,
one needs to use the normal forms of such tensors \cite[\S10.3]{landsberg_tensorbook}
and understand all the cases, and it is hard to argue that this method is much simpler than the original approach of \cite{jaja_takche_Strassen_conjecture}.
Instead, the intention of the authors of \cite{landsberg_michalek_abelian_tensors} was probably slightly different,
with a more direct application of \cite[Prop.~3.1]{landsberg_michalek_abelian_tensors}
(or Proposition~\ref{proposition_for_AFT_method_coordinate_free} below).
This has been carefully detailed and described in \cite[Prop.~3.2.12]{rupniewski_mgr} and
here we present this approach to show a stronger statement about small ``hook''-shaped spaces (Proposition~\ref{prop_1_2_hook_shaped}).
We stress that our argument
for Proposition~\ref{prop_1_2_hook_shaped},
as well as \cite[Prop.~3.2.12]{rupniewski_mgr} requires the assumption of an algebraically closed base field ${\Bbbk}$,
while the original approach
of \cite{jaja_takche_Strassen_conjecture}
works over any field.
For a short while we also work over an arbitrary field.
\begin{defin}\label{def_hook_shaped_space}
For non-negative integers $e, f$, we say that a linear subspace $W \subset B \otimes C$
is \emph{$(e,f)$-hook shaped},
if $W\subset {\Bbbk}^e\otimes C + B \otimes {\Bbbk}^f$
for some choices of linear subspaces ${\Bbbk}^e\subset B$ and ${\Bbbk}^f\subset C$.
\end{defin}
The name ``hook shaped'' space comes from the fact that, under an appropriate choice of bases, the only non-zero coordinates form the shape of a hook
$\ulcorner$ situated in the upper left corner of the matrix, see Example~\ref{ex_hook}.
The integers $(e,f)$ specify how wide the edges of the hook are.
A similar name also appears in the context of Young diagrams, see for instance \cite[Def.~2.3]{berele_regev_hook_Young_diagrams_with_applications}.
\begin{example}\label{ex_hook}
In suitable coordinates, a $(1,2)$-hook shaped subspace of ${\Bbbk}^4 \otimes {\Bbbk}^4$ has only the following possibly nonzero entries:
$$\begin{bmatrix}
* & * & * & *\\
* & * & 0 & 0\\
* & * & 0 & 0\\
* & * & 0 & 0\\
\end{bmatrix}.
$$
\end{example}
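In the chosen coordinates, membership in a hook shaped space is straightforward to test: a matrix $w$ lies in ${\Bbbk}^e\otimes C + B\otimes {\Bbbk}^f$ if and only if its lower-right $(\mathbf{b}-e)\times(\mathbf{c}-f)$ block vanishes. A small sketch (the helper name is ours):

```python
import numpy as np

def is_hook_shaped(w, e, f):
    # w lies in k^e (x) C + B (x) k^f (in the fixed coordinates) iff the
    # complementary lower-right block of w is identically zero
    return not np.any(w[e:, f:])

# the matrix from the (1,2)-hook example above, with sample entries
w = np.array([[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 0.0, 0.0],
              [7.0, 8.0, 0.0, 0.0],
              [9.0, 1.0, 0.0, 0.0]])

print(is_hook_shaped(w, 1, 2))  # True
print(is_hook_shaped(w, 1, 1))  # False: column 2 has entries below row 1
```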
The following elementary observation is presented in \cite[Prop.~3.1]{landsberg_michalek_abelian_tensors} and in \cite[Lem.~B.1]{alexeev_forbes_tsimerman_Tensor_rank_some_lower_and_upper_bounds}.
Here we have phrased it in a coordinate free way.
\begin{prop} \label{proposition_for_AFT_method_coordinate_free}
Let $p \in A\otimes B \otimes C$, $R(p) = r>0$, and pick $\alpha \in A^*$ such that $p(\alpha) \in B\otimes C$ is nonzero.
Consider two hyperplanes in $A$: the linear hyperplane $\alpha^{\perp}= (\alpha =0)$
and the affine hyperplane $(\alpha=1)$.
For any $a\in (\alpha=1)$, denote
\[
\tilde{p}_{a}:= p - a\otimes p(\alpha) \in \alpha^{\perp} \otimes B \otimes C.
\]
Then:
\begin{enumerate}
\item \label{item_AFT_exists_choice_droping_rank}
there exists a choice of $a\in (\alpha=1)$ such that $R(\tilde{p}_{a}) \leq r-1$,
\item \label{item_AFT_any_choice_drops_rank_at_most_1}
if in addition $R(p(\alpha))=1$, then for any choice of $a \in (\alpha=1)$
we have $R(\tilde{p}_{a}) \geq r-1$.
\end{enumerate}
\end{prop}
See \cite[Prop.~3.1]{landsberg_michalek_abelian_tensors} for the proof (note the statement there is over the complex numbers only, but the proof is field independent) or, alternatively,
using Lemma~\ref{lem_rank_of_space_equal_to_rank_of_tensor} translate it into
the following straightforward statement on linear spaces of tensors:
\begin{prop}\label{proposition_for_AFT_method_slice_A}
Suppose $W \subset B \otimes C$ is a linear subspace, $R(W) = r$.
Assume $w \in W$ is a non-zero element.
Then:
\begin{enumerate}
\item there exists a choice of a complementary subspace $\widetilde{W}\subset W$,
such that $\widetilde{W} \oplus \linspan{w} = W$ and $R(\widetilde{W}) \leq r-1$, and
\item if in addition $R(w)=1$, then for any choice of the complementary subspace $\widetilde{W} \oplus \linspan{w} = W$
we have $R(\widetilde{W}) \geq r-1$.
\end{enumerate}
\end{prop}
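In coordinates, the construction $\tilde{p}_{a} = p - a\otimes p(\alpha)$ of Proposition~\ref{proposition_for_AFT_method_coordinate_free} is a single contraction and subtraction. The sketch below (with arbitrary sample data) only verifies the containment $\tilde{p}_{a}\in \alpha^{\perp}\otimes B\otimes C$, not the rank inequalities:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_normal((3, 4, 4))          # p in A (x) B (x) C, dim A = 3

alpha = np.array([1.0, -1.0, 2.0])          # alpha in A^* with p(alpha) != 0
p_alpha = np.einsum('ijk,i->jk', p, alpha)  # the slice p(alpha) in B (x) C

a = np.array([1.0, 0.0, 0.0])               # any a in the affine hyperplane (alpha = 1)
assert np.isclose(alpha @ a, 1.0)

# the substitution step
p_tilde = p - np.einsum('i,jk->ijk', a, p_alpha)

# p_tilde lies in alpha-perp (x) B (x) C: contracting it with alpha
# gives p(alpha) - alpha(a) * p(alpha) = 0 (up to rounding)
print(np.max(np.abs(np.einsum('ijk,i->jk', p_tilde, alpha))))
```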
Proposition~\ref{proposition_for_AFT_method_coordinate_free} is crucial in the proof that the additivity of the rank holds for vector spaces, one of which is $(1,2)$-hook shaped (provided that the base field is algebraically closed).
Before taking care of that, we use the same proposition to prove a simpler statement about $(1,1)$-hook shaped spaces, which is valid without any assumption on the field. The proof essentially follows the idea outlined in
\cite[Thm~4.1]{landsberg_michalek_abelian_tensors}.
\begin{prop}\label{prop_1_1_hook_shaped}
Suppose $W''\subset B''\otimes C''$ is $(1,1)$-hook shaped and $W'\subset B'\otimes C'$
is an arbitrary subspace.
Then the additivity of the rank holds for $W'\oplus W''$.
\end{prop}
Before commencing the proof of the proposition, we state three lemmas, which will be applied to both $(1,1)$- and $(1,2)$-hook shaped spaces.
The first lemma is analogous to \cite[Thm~4.1]{landsberg_michalek_abelian_tensors}.
In this lemma (and also in the rest of this section) we will work with a sequence of tensors, $p_0, p_1, p_2, \dotsc$ in the space $A\otimes B \otimes C$, which are not necessarily direct sums.
Nevertheless, for each $i$,
we write $p'_i = \pi_{A''} \pi_{B''} \pi_{C''}(p_i)$
(that is, this is the ``corner'' of $p_i$ corresponding to $A'$, $B'$ and $C'$).
We define $p''_i$ analogously.
\begin{lem}\label{lem_proving_additivity_via_substitution}
Suppose $W'\subset A'\otimes B'\otimes C'$
and $W''\subset A''\otimes B''\otimes C''$ are two subspaces.
Let $r''=R(W'')$ and suppose that there exists a sequence of tensors
$p_0, p_1, p_2, \dotsc, p_{r''}\in A\otimes B \otimes C$
satisfying the following properties:
\renewcommand{(\roman{enumi})}{(\arabic{enumi})}
\begin{enumerate}
\item \label{item_proof_additivity_hook_p0_eq_p}
$p_0 =p$ is such that $p(A^*) = W = W' \oplus W''$,
\item \label{item_proof_additivity_hook_p_prime_preserved}
$p'_{i+1}=p'_{i}$ for every $0\le i < r''$,
\item \label{item_proof_additivity_hook_p_i_bis_does_not_drop_rank_too_much}
$R(p''_{i+1}) \ge R(p''_{i})-1$ for every $0\le i < r''$,
\item \label{item_proof_additivity_hook_p_i_drops_rank_enough}
$R(p_{i+1}) \le R(p_{i})-1$ for each $0\le i < r''$.
\end{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
Then the additivity of the rank holds for $W'\oplus W''$
and for each $i < r''$ we must have $p''_i\ne 0$.
\end{lem}
\begin{proof}
We have
\[
R(W') + R(W'')
\stackrel{\text{\ref{item_proof_additivity_hook_p0_eq_p},\ref{item_proof_additivity_hook_p_prime_preserved}}}{=}
R(p'_{r''}) + r''
\le R(p_{r''})+r''
\stackrel{\text{\ref{item_proof_additivity_hook_p_i_drops_rank_enough}}}{\le}
R(p_0)
\stackrel{\text{\ref{item_proof_additivity_hook_p0_eq_p}}}{=}
R(W).
\]
The nonvanishing of $p_i''$ for $i < r''$ follows from
\ref{item_proof_additivity_hook_p_i_bis_does_not_drop_rank_too_much}:
since $R(p''_0) = R(W'') = r''$, we get $R(p''_i) \ge r'' - i > 0$.
\end{proof}
The second lemma tells us how to construct a single step in the above sequence.
\begin{lem}\label{lem_constructing_next_tensor_in_the_sequence}
Suppose $\Sigma \subset A\otimes B \otimes C$ is a linear subspace,
$p_i\in \Sigma$ is a tensor,
and $\gamma\in (C'')^*$ is such that:
\begin{itemize}
\item $R(p''_i(\gamma))=1$,
\item $\gamma$ preserves $\Sigma$, that is,
$\Sigma(\gamma)\otimes C \subset \Sigma$,
where $\Sigma(\gamma) = \set{t(\gamma)\mid t \in \Sigma} \subset A\otimes B$.
\item $\Sigma(\gamma)$ does not have entries in $A'\otimes B'$, that is
$
\pi_{A''}\pi_{B''}(\Sigma(\gamma)) =0.
$
\end{itemize}
Let $\gamma^{\perp} \subset C$ be the hyperplane defined by $\gamma$.
Then there exists
$p_{i+1} \in (\Sigma \cap A\otimes B \otimes \gamma^{\perp})$
that satisfies properties \ref{item_proof_additivity_hook_p_prime_preserved}--\ref{item_proof_additivity_hook_p_i_drops_rank_enough}
of Lemma~\ref{lem_proving_additivity_via_substitution}
(for a fixed $i$).
\end{lem}
\begin{proof}
As in Proposition~\ref{proposition_for_AFT_method_coordinate_free}
for $c\in (\gamma=1)$ set $(\tilde{p}_i)_c = p_i - p_i(\gamma)\otimes c \in A\otimes B\otimes \gamma^{\perp}$.
We will pick $p_{i+1}$ among the $(\tilde{p}_i)_c$.
In fact by
Proposition~\ref{proposition_for_AFT_method_coordinate_free}\ref{item_AFT_exists_choice_droping_rank}
there exists a choice of $c$ such that $p_{i+1}= (\tilde{p}_i)_c$ has rank less than $R(p_i)$,
that is, \ref{item_proof_additivity_hook_p_i_drops_rank_enough} is satisfied.
On the other hand, since $\gamma$ is in $(C'')^*$, we have $p''_{i+1} = \left(\widetilde{p''_i}\right)_{c''}$
(where $c= c' + c''$ with $c'\in C'$ and $c''\in C''$)
and by
Proposition~\ref{proposition_for_AFT_method_coordinate_free}\ref{item_AFT_any_choice_drops_rank_at_most_1}
also \ref{item_proof_additivity_hook_p_i_bis_does_not_drop_rank_too_much} is satisfied.
Property~\ref{item_proof_additivity_hook_p_prime_preserved} follows,
since $p_{i+1} - p_i = -p_i(\gamma)\otimes c$ has no entries in $A'\otimes B'\otimes C'$:
indeed, $\Sigma(\gamma)$ (in particular, $p_i(\gamma)$)
has no entries in $A'\otimes B'$.
Finally, $p_{i+1} \in \Sigma$ thanks to the assumption that $\gamma$ preserves $\Sigma$ and $\Sigma$ is a linear subspace.
\end{proof}
The next lemma is the common first step in the proofs of
additivity for $(1,1)$ and $(1,2)$ hooks: we construct a few initial elements of the sequence needed in
Lemma~\ref{lem_proving_additivity_via_substitution}.
\begin{lem}\label{lem_first_step_for_hooks}
Suppose $W''\subset B''\otimes C''$
is a $(1,f)$-hook shaped space for some integer $f$ and $W'\subset B'\otimes C'$ is arbitrary.
Fix ${\Bbbk}^1\subset B''$ and ${\Bbbk}^f\subset C''$
as in Definition~\ref{def_hook_shaped_space} for $W''$.
Then there exists a sequence of tensors
$p_0, p_1, p_2, \dotsc, p_{k}\in A\otimes B \otimes C$ for some $k$ that satisfies properties
\ref{item_proof_additivity_hook_p0_eq_p}--\ref{item_proof_additivity_hook_p_i_drops_rank_enough}
of Lemma~\ref{lem_proving_additivity_via_substitution} and in addition
$p''_{k}\in A''\otimes B''\otimes {\Bbbk}^f$
and for every $i$
we have $ p_i\in A' \otimes B' \otimes C' \oplus
A'' \otimes \left(B''\otimes {\Bbbk}^f + {\Bbbk}^1 \otimes C\right)$.
In particular:
\begin{itemize}
\item $p''_i((A'')^*)$ is a $(1,f)$-hook shaped space
for every $i<k$,
while $p''_k((A'')^*)$ is a $(0,f)$-hook shaped space.
\item Every $p_i$ is ``almost'' a direct sum tensor, that is,
$
p_i = (p'_i\oplus p''_i) + q_i,
$
where
\[
q_i\in A''\otimes {\Bbbk}^1 \otimes C'\subset A''\otimes B'' \otimes C'.
\]
\end{itemize}
\end{lem}
\begin{proof}
To construct the sequence $p_i$ we recursively apply
Lemma~\ref{lem_constructing_next_tensor_in_the_sequence}.
By our assumptions,
$p''\in A'' \otimes B''\otimes {\Bbbk}^f
+ A''\otimes \linspan{x}\otimes C''$,
where $x \in B''$ spans the fixed ${\Bbbk}^1$ and ${\Bbbk}^f \subset C''$ is as fixed in the statement.
We let
$\Sigma = A' \otimes B' \otimes C' \oplus A'' \otimes \left(B''\otimes {\Bbbk}^f + \linspan{x} \otimes C\right)$.
The tensor $p_0$ is given by property~\ref{item_proof_additivity_hook_p0_eq_p}.
Suppose we have already constructed
$p_0,\dotsc, p_i$ and that $p''_i$ is not yet contained in
$A'' \otimes B''\otimes {\Bbbk}^f$.
Therefore
there exists a hyperplane $\gamma^{\perp}=(\gamma=0) \subset C$
for some $\gamma\in (C'')^*\subset C^*$
such that ${\Bbbk}^f \subset \gamma^{\perp}$,
but $p''_{i} \notin A'' \otimes B''\otimes \gamma^{\perp} $.
Equivalently, $p''_{i}(\gamma) \ne 0$,
and moreover $p''_{i}(\gamma) \in A''\otimes \linspan{x}$.
In particular, $R(p''_{i}(\gamma))=1$
and $\Sigma(\gamma) \subset A'' \otimes \linspan{x}$.
Thus $\gamma$ preserves $\Sigma$ as in
Lemma~\ref{lem_constructing_next_tensor_in_the_sequence}
and $\Sigma(\gamma)$ has no entries in $A'\otimes B'\otimes C'$.
Thus we construct $p_{i+1}$ using
Lemma~\ref{lem_constructing_next_tensor_in_the_sequence}.
Since we are gradually reducing the dimension of the third factor
of the tensor space containing
$p''_{i+1}$, eventually we will arrive at the case
$p''_{i+1} \in A'' \otimes B''\otimes {\Bbbk}^f$, proving the claim.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop_1_1_hook_shaped}]
We construct the sequence $p_i$
as in Lemma~\ref{lem_proving_additivity_via_substitution}.
The initial elements $\fromto{p_0}{p_k}$ of the sequence are given by Lemma~\ref{lem_first_step_for_hooks}.
By the lemma and our assumptions,
$p_i''\in A'' \otimes B''\otimes \linspan{y}
+ A''\otimes \linspan{x}\otimes C''$
for some choices of $x \in B''$ and $y \in C''$ and
\[
p_k \in A' \otimes B' \otimes C'
\oplus A''\otimes \big(\linspan{x} \otimes C' \oplus
B'' \otimes \linspan{y}\big).
\]
Now suppose that we have constructed
$\fromto{p_k}{p_j}$ for some $j\ge k$ satisfying \ref{item_proof_additivity_hook_p_prime_preserved}--\ref{item_proof_additivity_hook_p_i_drops_rank_enough},
such that
\begin{equation*}
p_j \in \Sigma = A' \otimes B' \otimes C'
\oplus A''\otimes B \otimes (C'\oplus \linspan{y}).
\end{equation*}
If $p''_j=0$,
then by Lemma~\ref{lem_proving_additivity_via_substitution} we are done, as $j=r''$.
So suppose $p''_j\ne 0$,
and choose $\beta \in (B'')^*$ such that $p''_j(\beta) \ne 0$;
then automatically $R(p''_j(\beta))=1$, since $p''_j(\beta) \in A'' \otimes \linspan{y}$.
We produce $p_{j+1}$ using
Lemma~\ref{lem_constructing_next_tensor_in_the_sequence}
with the roles of $B$ and $C$ swapped
(so $\beta$ plays the role of $\gamma$, etc.).
We stop after constructing $p_{r''}$ and thus the desired sequence exists and proves the claim.
\end{proof}
In the rest of this section we will show that an analogous statement
holds for $(1,2)$-hook shaped spaces under an additional assumption
that the base field is algebraically closed.
We need the following lemma (false for nonclosed fields),
whose proof is a straightforward dimension count,
see also \cite[Prop.~3.2.11]{rupniewski_mgr}.
\begin{lem}\label{lem_dim_2_have_rank_one_matrix}
Suppose ${\Bbbk}$ is algebraically closed (of any characteristic)
and $p\in A\otimes B \otimes {\Bbbk}^2$ and $p\ne 0$.
Then at least one of the following holds:
\begin{itemize}
\item there exists a rank one matrix in $p(A^*)\subset B\otimes {\Bbbk}^2$, or
\item for any $x\in B$ there exists a rank one matrix in $p(x^{\perp}) \subset A \otimes {\Bbbk}^2$, where $x^{\perp} \subset B^*$ is the hyperplane defined by $x$.
\end{itemize}
\end{lem}
\begin{proof}
If $p$ is not ${\Bbbk}^2$-concise, then both claims hold trivially (unless $R(p)=1$, in which case only the first claim holds).
Thus, replacing $A$ and $B$ with smaller spaces if necessary, we may assume without loss of generality that $p$ is concise.
If $\dim A\ge \dim B$,
then the projectivisation of the image
${\mathbb P}(p(A^*))\subset {\mathbb P}(B\otimes {\Bbbk}^2)$ intersects the Segre variety ${\mathbb P}(B) \times {\mathbb P}^1$
by the dimension count \cite[Thm~I.7.2]{hartshorne}
(note that here we use that the base field ${\Bbbk}$ is algebraically closed).
Otherwise, $\dim A < \dim B$ and the intersection
\[
{\mathbb P}(p(B^*)) \cap ({\mathbb P}(A) \times {\mathbb P}^1)\subset
{\mathbb P}(A\otimes {\Bbbk}^2)
\]
has positive dimension by the same dimension count.
In particular, any hyperplane ${\mathbb P}(p(x^{\perp})) \subset {\mathbb P}(p(B^*))$ also intersects the Segre variety.
\end{proof}
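To spell out the dimension count: if $p$ is concise and $\dim A\ge \dim B$, then inside ${\mathbb P}(B\otimes {\Bbbk}^2)$, of dimension $2\dim B-1$, we intersect the linear subspace ${\mathbb P}(p(A^*))$, of dimension $\dim A-1$, with the Segre variety ${\mathbb P}(B) \times {\mathbb P}^1$, of dimension $\dim B$. The expected dimension of the intersection is
\[
(\dim A - 1) + \dim B - (2\dim B - 1) = \dim A - \dim B \ge 0,
\]
so the intersection is nonempty. In the second case the same count inside ${\mathbb P}(A\otimes {\Bbbk}^2)$ gives expected dimension $\dim B - \dim A > 0$ for ${\mathbb P}(p(B^*)) \cap ({\mathbb P}(A) \times {\mathbb P}^1)$, so cutting by the hyperplane ${\mathbb P}(p(x^{\perp}))$ drops the dimension by at most one and the intersection remains nonempty.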
The next proposition reproves
(under the additional assumption that ${\Bbbk}$ is algebraically closed)
and slightly strengthens the theorem of JaJa--Takche
\cite{jaja_takche_Strassen_conjecture},
which can be thought of as a theorem about $(0,2)$-hook shaped spaces.
\begin{prop}\label{prop_1_2_hook_shaped}
Suppose ${\Bbbk}$ is algebraically closed, $W''\subset B''\otimes C''$ is $(1,2)$-hook shaped and $W'\subset B'\otimes C'$
is an arbitrary subspace.
Then the additivity of the rank holds for $W'\oplus W''$.
\end{prop}
\begin{proof}
We will use Lemmas~\ref{lem_proving_additivity_via_substitution},
\ref{lem_constructing_next_tensor_in_the_sequence}
and \ref{lem_first_step_for_hooks} again.
That is, we are looking for a sequence
$p_0, \dotsc, p_{r''}\in A\otimes B \otimes C$
with the properties
\ref{item_proof_additivity_hook_p0_eq_p}--\ref{item_proof_additivity_hook_p_i_drops_rank_enough},
and the initial elements $\fromto{p_0}{p_k}$ are constructed in such a way that
$p_k \in A' \otimes B' \otimes C'
\oplus A''\otimes
\big(\linspan{x} \otimes C' \oplus
B'' \otimes {\Bbbk}^2 \big)$.
Here $x\in B''$ is such that
$W'' \subset \linspan{x} \otimes C'' +B''\otimes {\Bbbk}^2$.
We have already ``cleaned'' the part of the hook of size $1$,
and now we work with the remaining space of $\mathbf{b}''\times 2$ matrices.
Unfortunately, cleaning $p''_i$
produces rubbish in the other parts of the tensor,
and we have to control the rubbish so that it does not affect $p'_i$, see \ref{item_proof_additivity_hook_p_prime_preserved}.
Note that what is left to do is not just the plain case of Strassen's additivity for $\mathbf{c}''=2$
proven in \cite{jaja_takche_Strassen_conjecture}, since $p_k$ may already have nontrivial entries
in another block, the one corresponding to $A''\otimes B''\otimes C'$
(the small tensor $q_k$ in the statement of Lemma~\ref{lem_first_step_for_hooks}).
We set $\Sigma =A'\otimes B' \otimes C' \oplus A\otimes \left(B\otimes {\Bbbk}^2 \oplus \linspan{x} \otimes C'\right)$.
To construct $p_{j+1}$ we use Lemma~\ref{lem_dim_2_have_rank_one_matrix}
(in particular, here we exploit the algebraic closedness of ${\Bbbk}$).
Thus either there exists $\alpha \in (A'')^*$ such that $R(p_j''(\alpha)) =1$,
or there exists $\beta \in x^{\perp} \subset (B'')^*$ such that $R(p_j''(\beta)) =1$.
In both cases we apply
Lemma~\ref{lem_constructing_next_tensor_in_the_sequence}
with the roles of $A$ and $C$ swapped or the roles of $B$ and $C$ swapped.
The conditions in the lemma are straightforward to verify.
We stop after constructing $p_{r''}$ and thus the desired sequence exists and proves the claim.
\end{proof}
\section{Rank one matrices and additivity of the tensor rank}\label{sec_rank_one_matrices_and_additive_rank}
As hinted by the proof of Proposition~\ref{prop_1_2_hook_shaped},
as long as we have a rank one matrix in the linear space $W'$ or $W''$, we have a good starting point for an attempt
to prove the additivity of the rank.
Throughout this section we will make a formal statement out of this observation and prove that if there is a rank one matrix in the linear spaces,
then either the additivity holds or there exists a ``smaller'' example of failure of the additivity.
In Section~\ref{sect_proofs_of_main_results_on_rank} we exploit several versions of this claim
in order to prove Theorem~\ref{thm_additivity_rank_intro}.
Throughout this section we follow Notations~\ref{notation_V_Seg} (denoting the rank one elements in a vector space by
the subscript $\cdot_{Seg}$),
\ref{notation} (introducing the vector spaces $\fromto{A}{C''}$ and their dimensions $\fromto{\mathbf{a}}{\mathbf{c}''}$),
\ref{notation_2} (defining a direct sum tensor $p=p'\oplus p''$ and the corresponding vector spaces $W, W', W''$),
and also~\ref{notation_projection} (which explains the conventions for projections $\fromto{\pi_{A'}}{\pi_{C''}}$
and vector spaces $\fromto{E'}{F''}$, which measure how much the fixed decomposition $V$ of $W$ sticks out
from the direct sum $B'\otimes C' \oplus B''\otimes C''$).
\subsection{Combinatorial splitting of the decomposition}
We carefully analyse
the structure of the rank one matrices in $V$.
We will distinguish seven types of such matrices.
\begin{lem}\label{lemma_reduction}
Every element of $V_{\Seg}\subset {\mathbb P}(B\otimes C)$ lies in the projectivisation of one of the following subspaces
of $B\otimes C$:
\begin{enumerate}
\item \label{item_prime_bis}
$B'\otimes C'$, $B''\otimes C''$,
(\ensuremath{\operatorname{Prime}}, \ensuremath{\operatorname{Bis}})
\item \label{item_vertical_and_horizontal}
$E'\otimes (C'\oplus F'')$,
$E''\otimes (F'\oplus C'')$,
(\ensuremath{\operatorname{HL}}, \ensuremath{\operatorname{HR}})\\
$(B'\oplus E'')\otimes F'$,
$(E'\oplus B'')\otimes F''$,
(\ensuremath{\operatorname{VL}}, \ensuremath{\operatorname{VR}}) \item \label{item_mixed}
$(E'\oplus E'')\otimes (F'\oplus F'')$.
(\ensuremath{\operatorname{Mix}})
\end{enumerate}
\end{lem}
The spaces in \ref{item_prime_bis} are entirely contained in the original direct summands, hence, in some sense, they are the easiest to deal with (we will show how to ``get rid'' of them and construct a smaller example justifying a potential lack of additivity).\footnote{The word \ensuremath{\operatorname{Bis}}{} comes from the Polish way of pronouncing the ${}''$ symbol.}
The spaces in \ref{item_vertical_and_horizontal}
stick out of the original summand, but only in one direction, either horizontal (\ensuremath{\operatorname{HL}}, \ensuremath{\operatorname{HR}}), or vertical (\ensuremath{\operatorname{VL}}, \ensuremath{\operatorname{VR}})\footnote{Here the letters ``H, V, L, R'' stand for
``horizontal, vertical, left, right'' respectively.}.
The space in \ref{item_mixed} is mixed and it sticks out in all directions. It is the most difficult to deal with and we expect that the typical counterexamples to the additivity of the rank will have mostly (or only) such mixed matrices in their minimal decomposition.
The mutual configuration and layout of these spaces in the case $\mathbf{b}'=\mathbf{b}''=\mathbf{c}'=\mathbf{c}''=3$, $\mathbf{e}'=\mathbf{e}''=\mathbf{f}'=\mathbf{f}''=1$ is illustrated in Figure~\ref{fig_layout_of_Prim__Mixed}.
\begin{figure}
\caption{The layout of the spaces from Lemma~\ref{lemma_reduction}; we use Notation~\ref{notation_projection}.}
\label{fig_layout_of_Prim__Mixed}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{lemma_reduction}]
Let $b\otimes c\in V_{\Seg}$ be a matrix of rank one. Write $b=b'+b''$ and $c=c'+c''$, where $b'\in B', b''\in B'', c'\in C'$ and $c''\in C''$. We consider the image of $b\otimes c$ via the four natural projections introduced in
Notation~\ref{notation_projection}:
\begin{subequations}
\begin{alignat}{3}
\pi_{B' }(b\otimes c)&=b''\otimes c & \ \in \ && B''&\otimes(F' \oplus C''),
\label{equ_pi_B_prime}\\
\pi_{B''}(b\otimes c)&=b' \otimes c & \ \in \ && B' &\otimes(C' \oplus F''),
\label{equ_pi_B_bis}\\
\pi_{C' }(b\otimes c)&=b \otimes c''& \ \in \ && (E'\oplus B'')&\otimes C'', \text{ and}
\label{equ_pi_C_prime}\\
\pi_{C''}(b\otimes c)&=b \otimes c' & \ \in \ && (B'\oplus E'')&\otimes C'.
\label{equ_pi_C_bis}
\end{alignat}
\end{subequations}
Notice that $b'$ and $b''$ cannot be simultaneously zero, since $b\ne 0$.
Analogously, $(c',c'')\neq (0,0)$.
Equations~\eqref{equ_pi_B_prime}--\eqref{equ_pi_C_bis} prove that the non-vanishing of one of $b', b'', c', c''$ induces a restriction on another one. For instance, if $b'\ne 0$, then by \eqref{equ_pi_B_bis} we must have $c''\in F''$.
Or, if $b''\ne 0$, then \eqref{equ_pi_B_prime} forces $c' \in F'$,
and so on.
Altogether we obtain the following cases:
\begin{itemize}
\item[(1)] If $b',b'',c',c''\neq0$, then $b\otimes c\in (E'\oplus E'')\otimes (F'\oplus F'')$ (case \ensuremath{\operatorname{Mix}}).
\item[(2)] If $b',b''\neq 0$ and $c'=0$, then $b\otimes c=b\otimes c''\in (E'\oplus B'')\otimes F''$ (case \ensuremath{\operatorname{VR}}).
\item[(3)] If $b',b''\neq0$ and $c''=0$, then $b\otimes c=b\otimes c'\in (B'\oplus E'')\otimes F'$ (case \ensuremath{\operatorname{VL}}).
\item[(4)] If $b'=0$, then either $c'=0$ and therefore $b\otimes c=b''\otimes c''\in B''\otimes C''$ (case \ensuremath{\operatorname{Bis}}), or $c'\neq0$ and $b\otimes c=b''\otimes c\in E''\otimes (F'\oplus C'')$ (case \ensuremath{\operatorname{HR}}).
\item[(5)] If $b''=0$, then either $c''=0$ and thus $b\otimes c=b'\otimes c'\in B'\otimes C'$ (case \ensuremath{\operatorname{Prime}}), or $c''\neq0$ and $b\otimes c=b'\otimes c\in E'\otimes (C'\oplus F'')$ (case \ensuremath{\operatorname{HL}}).
\end{itemize}
This concludes the proof.
\end{proof}
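The case analysis can be visualised in block form: with respect to the decompositions $B = B'\oplus B''$ and $C = C'\oplus C''$ a rank one matrix decomposes as
\[
b\otimes c =
\begin{pmatrix}
b'\otimes c' & b'\otimes c'' \\
b''\otimes c' & b''\otimes c''
\end{pmatrix},
\]
and the projections $\pi_{B'}$, $\pi_{B''}$, $\pi_{C'}$, $\pi_{C''}$ delete, respectively, the first row, the second row, the first column, and the second column. Each of the seven types in Lemma~\ref{lemma_reduction} records which blocks can be nonzero and how far they stick out of $B'\otimes C' \oplus B''\otimes C''$.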
By Lemma~\ref{lemma_reduction}, every element of $V_{\Seg}\subset {\mathbb P}(B\otimes C)$
lies in one of seven subspaces of $B\otimes C$.
These subspaces may have nonempty intersection.
We will now explain our convention with respect to choosing a basis of $V$ consisting of elements of $V_{\Seg}$.
Here and throughout the article by $\sqcup$ we denote the disjoint union.
\begin{notation}\label{notation_Prime_etc}
We choose a basis ${\mathcal{B}}$ of $V$ in such a way that:
\begin{itemize}
\item ${\mathcal{B}}$ consists of rank one matrices only,
\item ${\mathcal{B}} = \ensuremath{\operatorname{Prime}} \sqcup \ensuremath{\operatorname{Bis}} \sqcup \ensuremath{\operatorname{HL}} \sqcup \ensuremath{\operatorname{HR}} \sqcup \ensuremath{\operatorname{VL}} \sqcup \ensuremath{\operatorname{VR}} \sqcup \ensuremath{\operatorname{Mix}}$,
where each of \ensuremath{\operatorname{Prime}}, \ensuremath{\operatorname{Bis}}, \ensuremath{\operatorname{HL}}, \ensuremath{\operatorname{HR}}, \ensuremath{\operatorname{VL}}, \ensuremath{\operatorname{VR}}, and \ensuremath{\operatorname{Mix}}{}
is a finite set of rank one matrices of the respective type as in Lemma~\ref{lemma_reduction}
(for instance, $\ensuremath{\operatorname{Prime}} \subset B'\otimes C'$, $\ensuremath{\operatorname{HL}}\subset E'\otimes (C'\oplus F'')$, etc.).
\item ${\mathcal{B}}$ has as many elements of $\ensuremath{\operatorname{Prime}}$ and $\ensuremath{\operatorname{Bis}}$ as possible, subject to the first two conditions,
\item ${\mathcal{B}}$ has as many elements of $\ensuremath{\operatorname{HL}}$, $\ensuremath{\operatorname{HR}}$, $\ensuremath{\operatorname{VL}}$ and $\ensuremath{\operatorname{VR}}$ as possible, subject to all of the above conditions.
\end{itemize}
Let $\mathbf{prime}$ be the number of elements of $\ensuremath{\operatorname{Prime}}$ (equivalently, $\mathbf{prime} = \dim\linspan{\ensuremath{\operatorname{Prime}}}$)
and analogously define $\mathbf{bis}$, $\mathbf{hl}$, $\mathbf{hr}$, $\mathbf{vl}$, $\mathbf{vr}$, and $\mathbf{mix}$.
The choice of ${\mathcal{B}}$ need not be unique, but we fix one for the rest of the article.
On the other hand, the numbers $\mathbf{prime}$, $\mathbf{bis}$, and $\mathbf{mix}$ are uniquely determined by $V$
(there may be some non-uniqueness in the division between $\mathbf{hl}$, $\mathbf{hr}$, $\mathbf{vl}$, and $\mathbf{vr}$).
\end{notation}
Thus to each decomposition we have associated a sequence of seven non-negative integers $(\fromto{\mathbf{prime}}{\mathbf{mix}})$.
We now study the inequalities between these integers and exploit them to get theorems about the additivity of the rank.
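Since ${\mathcal{B}}$ is a basis of the minimal decomposition $V$, these integers always satisfy
\[
R(W) = \dim V = \mathbf{prime} + \mathbf{bis} + \mathbf{hl} + \mathbf{hr} + \mathbf{vl} + \mathbf{vr} + \mathbf{mix},
\]
an identity which we use repeatedly below.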
\begin{prop}\label{prop_projection_inequalities}
In Notations~\ref{notation_projection} and \ref{notation_Prime_etc} the following inequalities hold:
\begin{enumerate}
\item \label{item_projection_inequality_short_prim}
$\mathbf{prime} + \mathbf{hl} + \mathbf{vl}
+ \min\big(\mathbf{mix},\mathbf{e}' \mathbf{f}' \big) \geq R(W')$,
\item \label{item_projection_inequality_short_bis}
$\mathbf{bis} + \mathbf{hr} + \mathbf{vr}
+ \min\big(\mathbf{mix},\mathbf{e}'' \mathbf{f}'' \big) \geq R(W'')$,
\item \label{item_projection_inequality_long_prim_H}
$\mathbf{prime} + \mathbf{hl} + \mathbf{vl}
+ \min\big(\mathbf{hr} + \mathbf{mix} , \mathbf{f}'(\mathbf{e}'+\mathbf{e}'')\big) \geq R(W') + \mathbf{e}''$,
\item \label{item_projection_inequality_long_prim_V}
$\mathbf{prime} + \mathbf{hl} + \mathbf{vl}
+ \min\big(\mathbf{vr} +\mathbf{mix},\mathbf{e}'(\mathbf{f}'+\mathbf{f}'')\big) \geq R(W') + \mathbf{f}''$,
\item \label{item_projection_inequality_long_bis_H}
$\mathbf{bis} + \mathbf{hr} + \mathbf{vr}
+ \min\big(\mathbf{hl} +\mathbf{mix},\mathbf{f}''(\mathbf{e}'+\mathbf{e}'')\big) \geq R(W'') + \mathbf{e}'$,
\item \label{item_projection_inequality_long_bis_V}
$\mathbf{bis} + \mathbf{hr} + \mathbf{vr}
+ \min\big(\mathbf{vl}+ \mathbf{mix},\mathbf{e}''(\mathbf{f}'+\mathbf{f}'')\big) \geq R(W'') + \mathbf{f}'$.
\end{enumerate}
\end{prop}
\begin{proof}
To prove Inequality~\ref{item_projection_inequality_short_prim}
we consider the composition of projections $\pi_{B''}\pi_{C''}$.
The linear space $\pi_{B''}\pi_{C''}(V)$ is spanned by rank one matrices
$\pi_{B''}\pi_{C''}({\mathcal{B}})$ (where ${\mathcal{B}} = \ensuremath{\operatorname{Prime}} \sqcup \dotsb \sqcup \ensuremath{\operatorname{Mix}}$ as in Notation~\ref{notation_Prime_etc}),
and it contains $W'$.
Thus $\dim (\pi_{B''}\pi_{C''}(V)) \ge R(W')$.
But the only elements of the basis ${\mathcal{B}}$
whose images under the composition of projections can be nonzero
are those in \ensuremath{\operatorname{Prime}}, \ensuremath{\operatorname{HL}}, \ensuremath{\operatorname{VL}}, and \ensuremath{\operatorname{Mix}}.
Thus
\[
\mathbf{prime} + \mathbf{hl} + \mathbf{vl} + \mathbf{mix} \geq \dim (\pi_{B''}\pi_{C''}(V)) \ge R(W').
\]
On the other hand, $\pi_{B''}\pi_{C''}(\ensuremath{\operatorname{Mix}}) \subset E'\otimes F'$,
thus among $\pi_{B''}\pi_{C''}(\ensuremath{\operatorname{Mix}})$ we can choose at most $\mathbf{e}'\mathbf{f}'$ linearly independent matrices.
Thus
\[
\mathbf{prime} + \mathbf{hl} + \mathbf{vl} + \mathbf{e}'\mathbf{f}' \geq \dim (\pi_{B''}\pi_{C''}(V)) \ge R(W').
\]
The two inequalities prove \ref{item_projection_inequality_short_prim}.
To show Inequality~\ref{item_projection_inequality_long_prim_H}
we may assume that $W'$ is concise as in the proof of Lemma~\ref{lemma_bound_r'_e'_R_w'}.
Moreover, as in that same proof (more precisely, Inequality~\eqref{equ_bound_on_pi_C_bis_of_V}) we show that
$
\dim\pi_{C''}(V) \ge R(W') +\mathbf{e}''.
$
But $\pi_{C''}$ sends all matrices from $\ensuremath{\operatorname{Bis}}$ and $\ensuremath{\operatorname{VR}}$ to zero, thus
\[
\mathbf{prime} + \mathbf{hl} + \mathbf{vl} + \mathbf{hr} + \mathbf{mix} \geq \dim\pi_{C''}(V) \ge R(W') + \mathbf{e}''.
\]
As in the proof of Part~\ref{item_projection_inequality_short_prim}, we can also replace $\mathbf{hr} + \mathbf{mix}$ by $\mathbf{f}'(\mathbf{e}'+\mathbf{e}'')$,
since $\pi_{C''}(\ensuremath{\operatorname{HR}}\cup \ensuremath{\operatorname{Mix}}) \subset (E' \oplus E'')\otimes F'$,
concluding the proof of \ref{item_projection_inequality_long_prim_H}.
The proofs of the remaining four inequalities are identical to one of the above,
after swapping the roles of $B$ and $C$ or ${}'$ and ${}''$ (or swapping both pairs).
\end{proof}
\begin{prop}\label{prop_SAC_if_E'=0}
With Notation~\ref{notation_projection},
if one among $E',E'',F',F''$ is zero, then $R(W)=R(W')+R(W'')$.
\end{prop}
\begin{proof}
Let us assume without loss of generality that $E'=\{0\}$.
Using the definitions of sets $\ensuremath{\operatorname{Prime}}$, $\ensuremath{\operatorname{Bis}}$, $\ensuremath{\operatorname{VR}}$,\dots as in Notation~\ref{notation_Prime_etc}
we see that $\ensuremath{\operatorname{HL}} = \ensuremath{\operatorname{VR}}= \ensuremath{\operatorname{Mix}}= \emptyset$, due to the order in which the elements of the basis ${\mathcal{B}}$ are chosen:
for instance, a potential candidate to become a member of \ensuremath{\operatorname{HL}}{} would first be elected to $\ensuremath{\operatorname{Prime}}$;
similarly, $\ensuremath{\operatorname{VR}}$ is consumed by $\ensuremath{\operatorname{Bis}}$, and $\ensuremath{\operatorname{Mix}}$ by $\ensuremath{\operatorname{HR}}$.
Thus:
\begin{equation*}
R(W) = \dim V
= \mathbf{prime} + \mathbf{bis} + \mathbf{hr} + \mathbf{vl}.
\end{equation*}
Proposition~\ref{prop_projection_inequalities}\ref{item_projection_inequality_short_prim} and \ref{item_projection_inequality_short_bis} imply
\[
R(W') +R(W'') \leq \mathbf{prime} + \mathbf{vl} + \mathbf{bis} + \mathbf{hr} = R(W),
\]
while $R(W') +R(W'') \ge R(W)$ always holds. This shows the desired additivity.
\end{proof}
\renewcommand{\labelenumi}{(\alph{enumi})}
\begin{cor}\label{cor_inequalities_HL__Mix}
Assume that the additivity fails for $W'$ and $W''$, that is, $d=R(W') + R(W'')-R(W'\oplus W'') >0$.
Then the following inequalities hold:
\begin{enumerate}
\item \label{item_mix_at_least_1}
$\mathbf{mix} \geq d \ge 1$,
\item \label{item_mix_and_horizontal_at_least_1_plus_e}
$\mathbf{hl} + \mathbf{hr} + \mathbf{mix} \geq \mathbf{e}' + \mathbf{e}'' + d \ge 3$,
\item \label{item_mix_and_vertical_at_least_1_plus_f}
$\mathbf{vl} + \mathbf{vr} + \mathbf{mix} \geq \mathbf{f}' + \mathbf{f}'' + d \ge 3$.
\end{enumerate}
\end{cor}
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{proof}
To prove \ref{item_mix_at_least_1} consider the inequalities \ref{item_projection_inequality_short_prim} and \ref{item_projection_inequality_short_bis} from Proposition~\ref{prop_projection_inequalities} and their sum:
\begin{align}
\mathbf{prime} + \mathbf{hl} + \mathbf{vl} + \mathbf{mix} &\ge R(W' ),\nonumber \\
\mathbf{bis} + \mathbf{hr} + \mathbf{vr} + \mathbf{mix} &\ge R(W''),\nonumber\\
\mathbf{prime} + \mathbf{bis} +\mathbf{hl} + \mathbf{hr} + \mathbf{vl} +\mathbf{vr} + 2 \mathbf{mix}
&\ge R(W') + R(W'') \label{equ_mix_at_least_1_plus_Rank}.
\end{align}
The left-hand side of \eqref{equ_mix_at_least_1_plus_Rank} equals $R(W)+ \mathbf{mix}$,
while the right-hand side equals $R(W)+d$.
Thus $\mathbf{mix} \ge d$, as claimed.
Similarly, using inequalities \ref{item_projection_inequality_long_prim_H}
and \ref{item_projection_inequality_long_bis_H}
of the same proposition
we obtain \ref{item_mix_and_horizontal_at_least_1_plus_e},
while \ref{item_projection_inequality_long_prim_V}
and \ref{item_projection_inequality_long_bis_V} imply~\ref{item_mix_and_vertical_at_least_1_plus_f}.
Note that $\mathbf{e}' + \mathbf{e}'' + d \ge 3$ and $\mathbf{f}' + \mathbf{f}'' + d \ge 3$
by Proposition~\ref{prop_SAC_if_E'=0}.
\end{proof}
\subsection{Replete pairs}
As we hunger after inequalities involving the integers $\fromto{\mathbf{prime}}{\mathbf{mix}}$, we distinguish a class of pairs $W', W''$
with particularly nice properties.
\begin{defin}
Consider a pair of linear spaces $W'\subset B'\otimes C'$ and $W''\subset B'' \otimes C''$
with a fixed minimal decomposition $V= \linspan{V_{\Seg}} \subset B\otimes C$ and $\fromto{\ensuremath{\operatorname{Prime}}}{\ensuremath{\operatorname{Mix}}}$
as in Notation~\ref{notation_Prime_etc}.
We say $(W', W'')$ is \emph{replete}, if $\ensuremath{\operatorname{Prime}} \subset W'$ and $\ensuremath{\operatorname{Bis}} \subset W''$.
\end{defin}
\begin{rem}
Strictly speaking, the notion of a \emph{replete pair} depends also on the minimal decomposition $V$.
But as always we consider a pair $W'$ and $W''$ with a fixed decomposition
$V=\linspan{V_{\Seg}} \supset W'\oplus W''$, so we refrain from mentioning $V$ in the notation.
\end{rem}
The first important observation is that as long as we look for pairs that fail to satisfy the additivity, we are free to replenish any pair.
More precisely, for any fixed $W'$, $W''$ (and $V$) define the \emph{repletion} of $(W', W'')$ as the pair $(\repletion{W'},\repletion{W''})$:
\begin{equation}
\begin{aligned}
\repletion{W'}:&=W'+\linspan{\ensuremath{\operatorname{Prime}}}, &
\repletion{W''}:&=W''+\linspan{\ensuremath{\operatorname{Bis}}}, &
\repletion{W}:&=\repletion{W'}\oplus \repletion{W''}.
\end{aligned}
\end{equation}
\begin{prop}\label{prop_does_not_hurt_to_replenish}
For any $(W', W'')$, with Notation~\ref{notation_Prime_etc},
we have:
\begin{align*}
R(W' ) \le R(\repletion{W' }) & \le R(W' ) + (\dim \repletion{W' } - \dim W' ),\\
R(W'') \le R(\repletion{W''}) & \le R(W'') + (\dim \repletion{W''} - \dim W''),\\
R(\repletion{W}) & = R(W).
\end{align*}
In particular, if the additivity of the rank fails for $(W',W'')$, then it also fails for
$(\repletion{W'},\repletion{W''})$.
Moreover,
\begin{enumerate}
\item \label{item_repletion_has_the_same_decomposition}
$V$ is a minimal decomposition of $\repletion{W}$;
in particular, the same distinguished basis
$\ensuremath{\operatorname{Prime}}\sqcup \ensuremath{\operatorname{Bis}} \sqcup\dotsb\sqcup \ensuremath{\operatorname{Mix}}$ works for both $W$ and $\repletion{W}$.
\item \label{item_repletion_is_replete}
$(\repletion{W'}, \repletion{W''})$ is a replete pair.
\item \label{item_gap_is_preserved_under_repletion}
The gaps $\gap{\repletion{W'}}$, $\gap{\repletion{W''}}$, and $\gap{\repletion{W}}$,
are at most (respectively)
$\gap{W'}$, $\gap{W''}$, and $\gap{W}$.
\end{enumerate}
\end{prop}
\begin{proof}
Since $W' \subset \repletion{W'}$, the inequality $R(W')\le R(\repletion{W'})$ is clear.
Moreover, $\repletion{W'}$ is spanned by $W'$ and $(\dim \repletion{W' } - \dim W' )$
additional matrices, which can be chosen from $\ensuremath{\operatorname{Prime}}$; in particular, these additional matrices all have rank $1$,
so $R(\repletion{W'}) \le R(W') + (\dim \repletion{W' } - \dim W' )$.
The inequalities about ${}''$ and $R(W)\le R(\repletion{W})$ follow similarly.
Further $\repletion{W} \subset V$, thus $V$ is a decomposition of $\repletion{W}$.
Therefore also $R(\repletion{W}) \le \dim V = R(W)$, showing $R(\repletion{W}) = R(W)$
and \ref{item_repletion_has_the_same_decomposition}.
Item~\ref{item_repletion_is_replete} follows from \ref{item_repletion_has_the_same_decomposition},
while \ref{item_gap_is_preserved_under_repletion} is a rephrasing of the initial inequalities.
\end{proof}
Moreover, if one of the inequalities of Lemma~\ref{lemma_bound_r'_e'_R_w'} is an equality,
then the respective $W'$ or $W''$ is not affected by the repletion.
\begin{lem}\label{lem_minimal_pair_is_replete}
If, say, $R(W') + \mathbf{e}'' = R(W) - \dim W''$, then $W'' = \repletion{W''}$,
and analogous statements hold for the other equalities coming from replacing $\le$ by $=$ in
Lemma~\ref{lemma_bound_r'_e'_R_w'}.
\end{lem}
\begin{proof}
By Lemma~\ref{lemma_bound_r'_e'_R_w'} applied to $\repletion{W} = \repletion{W'}\oplus \repletion{W''}$
and by Proposition~\ref{prop_does_not_hurt_to_replenish}
we have:
\begin{align*}
R(\repletion{W}) - \mathbf{e}'' & \stackrel{\text{\ref{lemma_bound_r'_e'_R_w'}}}{\ge} R(\repletion{W'}) + \dim (\repletion{W''}) \\
& \stackrel{\text{\ref{prop_does_not_hurt_to_replenish}}}{\ge} R(W') + \dim W'' \\
&\stackrel{\text{assumptions of \ref{lem_minimal_pair_is_replete}}}{=}\ \ R(W) - \mathbf{e}''
\stackrel{\text{\ref{prop_does_not_hurt_to_replenish}}}{=} R(\repletion{W}) - \mathbf{e}''.
\end{align*}
Therefore all inequalities are in fact equalities. In particular, $\dim (\repletion{W''}) = \dim W''$.
The claim of the lemma follows from $W'' \subset \repletion{W''}$.
\end{proof}
\subsection{Digestion}
For replete pairs it makes sense to consider a complement of $\linspan{\ensuremath{\operatorname{Prime}}}$ in $W'$,
and of $\linspan{\ensuremath{\operatorname{Bis}}}$ in $W''$.
\begin{defin} With Notation~\ref{notation_Prime_etc},
let $S'$ and $S''$ denote the following linear spaces:
\begin{align*}
S':&=\linspan{\ensuremath{\operatorname{Bis}} \sqcup \ensuremath{\operatorname{HL}} \sqcup \ensuremath{\operatorname{HR}} \sqcup \ensuremath{\operatorname{VL}} \sqcup \ensuremath{\operatorname{VR}} \sqcup \ensuremath{\operatorname{Mix}}}
\cap W' && \text{(we omit $\ensuremath{\operatorname{Prime}}$ in the union) and }\\
S'':&=\linspan{\ensuremath{\operatorname{Prime}}\sqcup \ensuremath{\operatorname{HL}}\sqcup \ensuremath{\operatorname{HR}}\sqcup \ensuremath{\operatorname{VL}}\sqcup \ensuremath{\operatorname{VR}}\sqcup \ensuremath{\operatorname{Mix}}} \cap W'' && \text{(we omit $\ensuremath{\operatorname{Bis}}$ in the union).}
\end{align*}
We call the pair $(S', S'')$ the \emph{digested version} of $(W',W'')$.
\end{defin}
\begin{lem}\label{lemma_dim_S'}
If $(W', W'')$ is replete,
then $W' = \linspan{\ensuremath{\operatorname{Prime}}} \oplus S'$ and $W'' = \linspan{\ensuremath{\operatorname{Bis}}} \oplus S''$.
\end{lem}
\begin{proof}
Both $\linspan{\ensuremath{\operatorname{Prime}}}$ and $S'$ are contained in $W'$.
The intersection $\linspan{\ensuremath{\operatorname{Prime}}} \cap S'$ is zero, since
the seven sets $\ensuremath{\operatorname{Prime}}, \ensuremath{\operatorname{Bis}}, \ensuremath{\operatorname{HR}}, \ensuremath{\operatorname{HL}}, \ensuremath{\operatorname{VL}}, \ensuremath{\operatorname{VR}}, \ensuremath{\operatorname{Mix}}$ are pairwise disjoint and their union is linearly independent.
Furthermore,
\[
\codim (S' \subset W') \le \codim (\linspan{\ensuremath{\operatorname{Bis}} \sqcup \ensuremath{\operatorname{HR}} \sqcup \ensuremath{\operatorname{HL}} \sqcup \ensuremath{\operatorname{VL}} \sqcup \ensuremath{\operatorname{VR}} \sqcup \ensuremath{\operatorname{Mix}}} \subset V) = \mathbf{prime}.
\]
Thus $\dim S' + \mathbf{prime} \ge \dim W'$, which concludes the proof of the first claim. The second claim is analogous.
\end{proof}
These complements $(S', S'')$ might replace the original replete pair $(W',W'')$:
as we will show, if the additivity of the rank fails for $(W',W'')$, it also fails for $(S', S'')$.
Moreover, $(S', S'')$ is still replete, but it does not involve any $\ensuremath{\operatorname{Prime}}$ or $\ensuremath{\operatorname{Bis}}$.
\begin{lem}\label{lemma_can_digest}
Suppose $(W', W'')$ is replete, define $S'$ and $S''$ as above and set $S=S'\oplus S''$.
Then
\begin{enumerate}
\item \label{item_can_digest___rank}
$R(S)=R(W) -\mathbf{prime}-\mathbf{bis} = \mathbf{hl}+\mathbf{hr}+\mathbf{vl}+\mathbf{vr}+\mathbf{mix}$
and the space $\linspan{\ensuremath{\operatorname{HL}},\ensuremath{\operatorname{HR}},\ensuremath{\operatorname{VL}},\ensuremath{\operatorname{VR}},\ensuremath{\operatorname{Mix}}}$ determines a minimal decomposition of $S$.
In particular, $(S',S'')$ is replete and both spaces $S'$ and $S''$ contain no rank one matrices.
\item \label{item_can_digest___additivity}
If the additivity of the rank $R(S) = R(S')+ R(S'')$ holds for $S$,
then it also holds for $W$, that is, $R(W) = R(W')+ R(W'')$.
\end{enumerate}
\end{lem}
\begin{proof}
Since $W = S\oplus \linspan{\ensuremath{\operatorname{Prime}}, \ensuremath{\operatorname{Bis}}}$, we must have $R(W)\le R(S)+ \mathbf{prime} +\mathbf{bis}$.
On the other hand, $S\subset \linspan{\ensuremath{\operatorname{HL}},\ensuremath{\operatorname{HR}},\ensuremath{\operatorname{VL}},\ensuremath{\operatorname{VR}},\ensuremath{\operatorname{Mix}}}$, hence $R(S)\le \mathbf{hl}+\mathbf{hr}+\mathbf{vl}+\mathbf{vr}+\mathbf{mix}$.
These two claims show the equality for $R(S)$ in \ref{item_can_digest___rank}
and that $\linspan{\ensuremath{\operatorname{HL}},\ensuremath{\operatorname{HR}},\ensuremath{\operatorname{VL}},\ensuremath{\operatorname{VR}},\ensuremath{\operatorname{Mix}}}$ gives a minimal decomposition of $S$.
Since there is no tensor of type $\ensuremath{\operatorname{Prime}}$ or $\ensuremath{\operatorname{Bis}}$ in this minimal decomposition,
it follows that the pair $(S',S'')$ is replete by definition.
If, say, $S'$ contained a rank one matrix, then by our choice of basis in
Notation~\ref{notation_Prime_etc} it would be in the span of $\ensuremath{\operatorname{Prime}}$, a contradiction.
Finally, if $R(S) = R(S')+ R(S'')$, then:
\begin{align*}
R(W) &= R(S)+ \mathbf{prime} + \mathbf{bis} \\
&= R(S')+ \mathbf{prime} + R(S'')+\mathbf{bis} \ge R(W') + R(W''),
\end{align*}
and since the opposite inequality $R(W)\le R(W')+R(W'')$ always holds, this shows statement \ref{item_can_digest___additivity} for $W$.
\end{proof}
To summarise, in our search for examples of failure of the additivity of the rank,
in the previous section we replaced a linear space $W=W'\oplus W''$ by its
repletion $\repletion{W} = \repletion{W'} \oplus \repletion{W''}$, which is possibly larger.
Here in turn, we replace $\repletion{W}$ by a smaller linear space $S = S' \oplus S''$.
In fact, $\dim W'\ge \dim S'$ and $\dim W''\ge \dim S''$, and also $R(S)\le R(W)$ and $R(S')\le R(W')$ etc.
That is, changing $W$ into $S$ makes the corresponding tensors possibly ``smaller'',
but not larger.
In addition, we gain more properties: $S$ is replete and has no $\ensuremath{\operatorname{Prime}}$'s or $\ensuremath{\operatorname{Bis}}$'s in its minimal decomposition.
\begin{cor}\label{cor_small_dimensions_of_E_and_F}
Suppose that $W = W'\oplus W''$ is as in Notation~\ref{notation_2}
and that $\mathbf{e}''$ and $\mathbf{f}''$ are as in Notation~\ref{notation_projection}.
If either:
\begin{enumerate}
\item\label{item_cor_small_dims_of_E_and_F_arbitrary_field}
${\Bbbk}$ is an arbitrary field, $\mathbf{e}''\le 1$ and $\mathbf{f}''\le 1$, or
\item\label{item_cor_small_dims_of_E_and_F_alg_closed_field}
${\Bbbk}$ is algebraically closed, $\mathbf{e}''\le 1$ and $\mathbf{f}''\le 2$,
\end{enumerate}
then the additivity of the rank $R(W) = R(W')+ R(W'')$ holds.
\end{cor}
\begin{proof}
By Proposition~\ref{prop_does_not_hurt_to_replenish} and Lemma~\ref{lemma_can_digest},
we can assume $W$ is replete and equal to its digested version.
But then (since $\ensuremath{\operatorname{Bis}} =\emptyset$) we must have $W''\subset E''\otimes C'' + B''\otimes F''$.
In particular, $W''$ is, respectively, a $(1,1)$-hook shaped space or a $(1,2)$-hook shaped space.
Then the claim follows from Proposition~\ref{prop_1_1_hook_shaped} or Proposition~\ref{prop_1_2_hook_shaped}.
\end{proof}
\subsection{Additivity of the tensor rank for small tensors}\label{sect_proofs_of_main_results_on_rank}
We conclude our discussion of the additivity of the tensor rank with the following summarising results.
\begin{thm}\label{thm_additivity_rank_plus_2}
Over an arbitrary base field ${\Bbbk}$
assume $p'\in A'\otimes B'\otimes C'$ is any tensor, while
$p''\in A''\otimes B''\otimes C''$ is concise and $R(p'')\le \mathbf{a}''+2$.
Then the additivity of the rank holds:
\[
R(p'\oplus p'') = R(p')+R(p'').
\]
The analogous statements with the roles of $A$ replaced by $B$ or $C$, or the roles of ${}'$ and ${}''$
swapped, hold as well.
\end{thm}
\begin{proof} Since $p''$ is concise, the corresponding vector subspace $W''=p''((A'')^*)$
has dimension equal to $\mathbf{a}''$.
By Corollary~\ref{cor_small_dimensions_of_E_and_F}\ref{item_cor_small_dims_of_E_and_F_arbitrary_field}
we may assume $\mathbf{e}''\ge 2$ or $\mathbf{f}''\ge 2$.
Say $\mathbf{e}'' \ge 2 \ge R(p'') - \dim W''$;
then by Corollary~\ref{cor_bounds_on_es_and_fs} the additivity must hold.
\end{proof}
\begin{thm}\label{thm_additivity_rank_in_dimensions_a_3_3}
Suppose the base field is ${\Bbbk} ={\mathbb C}$ or ${\Bbbk}={\mathbb R}$
(complex or real numbers) and
assume $p'\in A'\otimes B'\otimes C'$ is any tensor, while
$p''\in A''\otimes {\Bbbk}^3\otimes {\Bbbk}^3$ for an arbitrary vector space $A''$.
Then the additivity of the rank holds:
$R(p'\oplus p'') = R(p')+R(p'')$.
\end{thm}
\begin{proof}
By the classical Ja'Ja'-Takche Theorem~\cite{jaja_takche_Strassen_conjecture}
(in the algebraically closed case also shown in Proposition~\ref{prop_1_2_hook_shaped}),
we can assume $p''$ is concise in $A''\otimes {\Bbbk}^3\otimes {\Bbbk}^3$.
But then by \cite[Thm~5 and Thm~6]{sumi_miyazaki_sakata_maximal_tensor_rank}
the rank of $p''$ is at most $\mathbf{a}'' +2$
and the result follows from Theorem~\ref{thm_additivity_rank_plus_2}.
\end{proof}
Note that in the proof above we exploit the results
about maximal rank in ${\Bbbk}^{\mathbf{a}''} \otimes {\Bbbk}^3\otimes {\Bbbk}^3$.
In \cite{sumi_miyazaki_sakata_maximal_tensor_rank} the authors assume that the base field is ${\mathbb C}$ or ${\mathbb R}$.
We are not aware of any similar results over other fields,
with the sole exception of $\mathbf{a}''=3$; see the next proof for a discussion.
\begin{thm}\label{thm_additivity_rank_6}
Suppose the base field ${\Bbbk}$ is such that:
\begin{itemize}
\item the maximal rank of a tensor in ${\Bbbk}^3\otimes {\Bbbk}^3 \otimes {\Bbbk}^3$ is at most $5$.
\end{itemize}
(For example ${\Bbbk}$ is algebraically closed of characteristic $\ne 2$ or ${\Bbbk}= {\mathbb R}$).
Furthermore assume $R(p'') \le 6$.
Then independently of $p'$, the additivity of the rank holds: $R(p'\oplus p'')= R(p')+R(p'')$.
\end{thm}
\begin{proof}
Without loss of generality, we may assume $p''$ is concise in $A''\otimes B''\otimes C''$.
As in the previous proof, if any of the dimensions $\dim A''$, $\dim B''$, $\dim C''$ is at most $2$,
then the claim follows from \cite{jaja_takche_Strassen_conjecture}.
On the other hand, if any of the dimensions $\mathbf{a}''$, $\mathbf{b}''$, $\mathbf{c}''$ is at least $4$,
then the result follows from Theorem~\ref{thm_additivity_rank_plus_2}.
The remaining case $\mathbf{a}''=\mathbf{b}''=\mathbf{c}''=3$ also follows from Theorem~\ref{thm_additivity_rank_plus_2} by our assumption on the field ${\Bbbk}$.
The assumption is satisfied for ${\Bbbk}={\mathbb R}, {\mathbb C}$;
see \cite[Thm~5.1]{bremner_hu_Kruskal_theorem}
or \cite[Thm~5]{sumi_miyazaki_sakata_maximal_tensor_rank}.
In \cite[top of p.~402]{bremner_hu_Kruskal_theorem} the authors say
that their proof is also valid for any algebraically closed field
of characteristic not equal to $2$.
They also provide the interesting history of this question
and, furthermore, they show that the assumption about maximal rank in
${\Bbbk}^3\otimes {\Bbbk}^3\otimes {\Bbbk}^3$ fails for ${\Bbbk}={\mathbb Z}_2$.
\end{proof}
Assuming the base field is ${\Bbbk}={\mathbb C}$,
one of the smallest cases not covered by the above theorems would be
the case of $p', p''\in {\mathbb C}^4\otimes {\mathbb C}^4\otimes {\mathbb C}^3$.
The generic rank (that is, the rank of a general tensor) in ${\mathbb C}^4\otimes {\mathbb C}^4\otimes {\mathbb C}^3$
is $6$, moreover \cite[p.~6]{atkinson_stephens_maximal_rank_of_tensors}
claims the maximal rank is $7$ (see also \cite[Prop.~2]{sumi_miyazaki_sakata_maximal_tensor_rank}).
\begin{example}
Suppose $A'=A''={\mathbb C}^4$ and either $B'=B''= {\mathbb C}^4$ and $C'=C''= {\mathbb C}^3$,
or $B'=C''= {\mathbb C}^4$ and $C'=B''= {\mathbb C}^3$.
Suppose both $p'\in A'\otimes B'\otimes C'$
and $p''\in A''\otimes B''\otimes C''$ are tensors of rank $7$
and that the additivity of the rank fails for $p= p'\oplus p''$.
Let $W'$ and $W''$ be as in Notation~\ref{notation_2},
and $E'$, $\mathbf{e}'$, etc.~be as in Notation~\ref{notation_projection}.
Then:
\begin{itemize}
\item $R(p) =13$,
\item $\mathbf{e}'=\mathbf{e}'' = \mathbf{f}' =\mathbf{f}''=2$,
\item with $\ensuremath{\operatorname{Prime}}$, $\mathbf{hl}$, etc., as in Notation~\ref{notation_Prime_etc},
we have $\ensuremath{\operatorname{Prime}} =\ensuremath{\operatorname{Bis}} = \emptyset$, and the following inequalities hold:
\[
\begin{array}{|lcccl|}
\hline
\multicolumn{5}{|c|}{\text{if } \mathbf{b}''= 4, \mathbf{c}''=3}\\
\hline
\hline
2 & \le & \mathbf{hl} & \le &3\\
\hline
2 & \le & \mathbf{hr} & \le &3\\
\hline
3 & \le & \mathbf{vl} & \le &4\\
\hline
3 & \le & \mathbf{vr} & \le &4\\
\hline
1 & \le & \mathbf{mix} & \le &3\\
\hline
\multicolumn{3}{|r}{\mathbf{hl}+\mathbf{vl}} & \le &6\\
\hline
\multicolumn{3}{|r}{\mathbf{hr}+\mathbf{vr}} & \le &6\\
\hline
\end{array}
\quad \text{ or } \quad
\begin{array}{|lcccl|}
\hline
\multicolumn{5}{|c|}{\text{if } \mathbf{b}''= 3, \mathbf{c}''=4}\\
\hline
\hline
2 & \le & \mathbf{hl} & \le &3\\
\hline
3 & \le & \mathbf{hr} & \le &4\\
\hline
3 & \le & \mathbf{vl} & \le &4\\
\hline
2 & \le & \mathbf{vr} & \le &3\\
\hline
1 & \le & \mathbf{mix} & \le &3\\
\hline
\multicolumn{3}{|r}{\mathbf{hl}+\mathbf{vl}} & \le &6\\
\hline
\multicolumn{3}{|r}{\mathbf{hr}+\mathbf{vr}} & \le &6.\\
\hline
\end{array}
\]
\end{itemize}
\end{example}
\begin{proof}[Sketch of proof]
For brevity we only argue in the case $\mathbf{b}''= 4, \mathbf{c}''=3$,
while the proof of $\mathbf{b}''= 3, \mathbf{c}''=4$ is very similar.
Both tensors $p', p''\in {\mathbb C}^4 \otimes {\mathbb C}^4\otimes {\mathbb C}^3$ must be concise,
as otherwise either Theorem~\ref{thm_additivity_rank_in_dimensions_a_3_3}
or the Ja'Ja'--Takche Theorem implies the additivity of the rank.
By Corollary~\ref{cor_bounds_on_es_and_fs} we must have $\mathbf{e}' \le 2$,
and similarly for $\mathbf{f}'$, $\mathbf{e}''$, $\mathbf{f}''$.
If one of them is strictly less than $2$,
then Corollary~\ref{cor_small_dimensions_of_E_and_F}\ref{item_cor_small_dims_of_E_and_F_alg_closed_field} implies the additivity, a contradiction,
thus $\mathbf{e}'=\mathbf{e}'' = \mathbf{f}' =\mathbf{f}''=2$.
By the failure of the additivity, we must have $R(W)\le 13$, but Lemma~\ref{lemma_bound_r'_e'_R_w'}
implies also $R(W)\ge 13$, showing that $R(p)=13$.
If, say, $\ensuremath{\operatorname{Prime}} \ne \emptyset$, then the digested version $(S', S'')$ of the repletion of $(W', W'')$
is also a counterexample to the additivity by
Lemma~\ref{lemma_can_digest}\ref{item_can_digest___additivity}.
If $S=S'\oplus S''$ has lower rank than $W$, then either $S$ is not concise,
contradicting Theorem~\ref{thm_additivity_rank_in_dimensions_a_3_3}
or $S$ contradicts the above calculations of rank.
Thus also $R(S)=13$ and by Lemma~\ref{lemma_can_digest}\ref{item_can_digest___rank}
we must have $\mathbf{prime}=\mathbf{bis} =0$. In fact, $S=W$.
Let $\widetilde{E'}\subset E'$ be the smallest linear subspace such that
$\pi_{C''}(\ensuremath{\operatorname{HL}})\subset \widetilde{E'}\otimes C'$. Set ${\bf{\widetilde{e}'}}=\dim \widetilde{E'}$.
Since $\ensuremath{\operatorname{Prime}}=\emptyset$, we must have
\[
W'\subset \linspan{\pi_{C''}(\ensuremath{\operatorname{HL}}), \pi_{B''}(\ensuremath{\operatorname{VL}}), \pi_{B''}\pi_{C''}(\ensuremath{\operatorname{Mix}})}
\subset \widetilde{E'} \otimes C' + B'\otimes F'.
\]
That is, $W'$ is $({\bf{\widetilde{e}'}}, \mathbf{f}')$-hook shaped.
Since $\mathbf{f}'=2$, Proposition~\ref{prop_1_2_hook_shaped}
shows that $\mathbf{hl} \ge {\bf{\widetilde{e}'}}\ge 2$.
Similarly, $\mathbf{hr}$, $\mathbf{vl}$, $\mathbf{vr}$ are also at least $2$.
We also see that $\widetilde{E'} = E'$, that is, the elements of type $\ensuremath{\operatorname{HL}}$ are concise in $E'$.
Next, we show that $\mathbf{vl} \ne 2$, which is perhaps the most interesting part of this example.
For this purpose we consider the projection $\pi_{E'\oplus B''}\colon B \to B'/E'$.
The related map $B\otimes C \to (B'/E')\otimes C$
(which by the standard abuse we also denote $\pi_{E'\oplus B''}$),
kills all the rank one tensors of types $\ensuremath{\operatorname{HL}}$, $\ensuremath{\operatorname{HR}}$, $\ensuremath{\operatorname{VR}}$ and $\ensuremath{\operatorname{Mix}}$,
leaving only those of type $\ensuremath{\operatorname{VL}}$ alive.
The image $\pi_{E'\oplus B''}(W) \subset (B'/E')\otimes F'$ has rank at most $\mathbf{vl}$
and is concise (otherwise, either Proposition~\ref{prop_1_2_hook_shaped} shows the additivity or $p'$ is not concise, a contradiction in both cases).
Note that $(B'/E')\otimes F' \simeq {\mathbb C}^2 \otimes {\mathbb C}^2$ and there are only two (up to a change of basis)
concise linear subspaces of ${\mathbb C}^2 \otimes {\mathbb C}^2$ which have rank at most $2$.
In both cases it is straightforward to verify that there exists $\beta'\in (B'/E')^* \subset (B')^*$
such that $\beta'(p) = \beta'(p') \in A' \otimes C'$ has rank $1$.
Then, by swapping the roles of $A$ and $B$, the process of repletion and digestion
(Lemma~\ref{lemma_can_digest}) leads to a smaller tensor which is also a counterexample to the additivity of the rank, again a contradiction.
Thus $R(\pi_{E'\oplus B''}(W))$ must be at least $3$ and consequently, $\mathbf{vl} \ge 3$.
The same argument shows that $\mathbf{vr} \ge 3$.
Combining the inequalities obtained so far we also get:
\[
\mathbf{mix} = 13 - (\mathbf{hl} + \mathbf{hr} + \mathbf{vl} + \mathbf{vr}) \le 3.
\]
The inequality $\mathbf{mix} \ge 1$ follows from
Corollary~\ref{cor_inequalities_HL__Mix}\ref{item_mix_at_least_1},
and it is left to show only the last two inequalities.
To prove $\mathbf{hl}+\mathbf{vl} \le 6$,
we use Proposition~\ref{prop_projection_inequalities}\ref{item_projection_inequality_short_bis}:
\[
7\le \mathbf{hr} +\mathbf{vr} +\mathbf{mix} = R(W) - (\mathbf{hl}+\mathbf{vl}) = 13-(\mathbf{hl}+\mathbf{vl}).
\]
The last inequality follows from a similar argument.
\end{proof}
\section{Additivity of the tensor border rank}\label{sect_border_rank}
Throughout this section we will follow Notations~\ref{notation} and~\ref{notation_2}. Moreover, we restrict to the base field ${\Bbbk}={\mathbb C}$.
We turn our attention to the additivity of the border rank.
That is, we ask for which tensors $p'\in A'\otimes B'\otimes C'$ and $p''\in A''\otimes B''\otimes C''$
the following equality holds:
\[
{\underline{R}}(p'\oplus p'') = {\underline{R}}(p') + {\underline{R}}(p'').
\]
Since the known counterexamples to the additivity are much smaller than in the case of the additivity
of the tensor rank, our methods are restricted to very small cases.
We commence with the following elementary observation.
\begin{lem}\label{lemma_additivity_small_brank}
Consider concise tensors $p'\in A'\otimes B'\otimes C'$ and $p''\in A''\otimes B''\otimes C''$ with ${\underline{R}}(p')\le \mathbf{a}'$ and ${\underline{R}}(p'')\le \mathbf{a}''$ (thus in fact ${\underline{R}}(p') = \mathbf{a}'$ and ${\underline{R}}(p'')= \mathbf{a}''$).
Let $p=p' \oplus p''$.
Then the additivity of the border rank holds: ${\underline{R}}(p) = {\underline{R}}(p')+{\underline{R}}(p'')$.
\end{lem}
\begin{proof}
Since $p'$ and $p''$ are concise, the linear maps
$p'\colon (A')^* \to B'\otimes C'$ and $p''\colon (A'')^* \to B''\otimes C''$ are injective.
Then also the map $p\colon A^\ast\to B\otimes C$ is injective and
\[
{\underline{R}}(p) \ge \dim p(A^*) = \dim p'((A')^*) + \dim p''((A'')^*) = {\underline{R}}(p') + {\underline{R}}(p'').
\]
The opposite inequality always holds.
\end{proof}
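The mechanism behind this proof, namely that the slices of a direct sum sit block-diagonally, so the flattening rank is additive, is easy to check numerically. A minimal NumPy sketch (the helper names \verb|flattening| and \verb|direct_sum| and the random test tensors are our own choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def flattening(p):
    # matrix of the map A* -> B (x) C: row i is the slice p[i, :, :] flattened
    return p.reshape(p.shape[0], -1)

def direct_sum(p1, p2):
    # block-diagonal direct sum of two 3-way tensors
    shape = tuple(d1 + d2 for d1, d2 in zip(p1.shape, p2.shape))
    p = np.zeros(shape)
    a1, b1, c1 = p1.shape
    p[:a1, :b1, :c1] = p1
    p[a1:, b1:, c1:] = p2
    return p

p1 = rng.standard_normal((2, 2, 2))   # generically concise
p2 = rng.standard_normal((3, 3, 3))   # generically concise
p = direct_sum(p1, p2)

r1 = np.linalg.matrix_rank(flattening(p1))
r2 = np.linalg.matrix_rank(flattening(p2))
r = np.linalg.matrix_rank(flattening(p))
assert (r1, r2, r) == (2, 3, 5)  # dim p(A*) = dim p'((A')*) + dim p''((A'')*)
```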
\begin{cor}\label{cor_additivity_brank_very_small_a_b_c}
Suppose both triples of integers $(\mathbf{a}',\mathbf{b}',\mathbf{c}')$ and $(\mathbf{a}'',\mathbf{b}'',\mathbf{c}'')$
fall into one of the following cases:
$(a,b,1)$, $(a,1,c)$,
$(a,b,2)$ with $a\ge b \ge 2$,
$(a,2,c)$ with $a\ge c \ge 2$,
$(a,b,c)$ with $a\ge b c$.
Then for any concise tensors $p'\in A'\otimes B'\otimes C'$ and $p''\in A''\otimes B''\otimes C''$
the additivity of the border rank holds.
\end{cor}
Note that the list of triples in the corollary is somewhat redundant, as some of these triples admit no concise tensors.
However, this phrasing is convenient for further applications and for the search for unresolved pairs of triples.
\begin{proof}
After removing the triples that do not admit any concise tensor the list reduces to:
$(a,a,1)$, $(a,1,a)$,
$(a,b,2)$ (for $2 \le b \le a \le 2b$),
$(a,2,c)$ (for $2 \le c \le a \le 2c$),
$(bc,b,c)$.
We claim that in all these cases ${\underline{R}}(p') = \mathbf{a}'$ and ${\underline{R}}(p'') = \mathbf{a}''$. In fact:
\begin{itemize}
\item The claim is clear for $(a,1,a)$, $(a,a,1)$, and $(bc,b,c)$.
\item For $(a,a,2)$ and $(a,2,a)$ the claim follows from the classification of such tensors,
see the argument in the first paragraph of \cite[\S5.3]{jabu_han_mella_teitler_high_rank_loci}.
\item For $(a,b,2)$ (with $2 \le b < a \le 2b$), and $(a,2,c)$ (with $2 \le c < a \le 2c$),
the claim follows from the previous case: any such concise tensor $T$ has border rank at least $a$.
But $T$ is at the same time a (non-concise) tensor in a larger tensor space
${\mathbb C}^a \otimes {\mathbb C}^a \otimes {\mathbb C}^2$ or ${\mathbb C}^a \otimes {\mathbb C}^2 \otimes {\mathbb C}^a$.
Thus by Lemma~\ref{lem_rank_independent_of_ambient} the border rank of $T$
is at most the generic (border) rank
in this larger space, which is equal to $a$ by the previous item.
\end{itemize}
Therefore we conclude using Lemma~\ref{lemma_additivity_small_brank}.
\end{proof}
Theorem~\ref{thm_additivity_border_rank_in_dimension_4} claims that the additivity of the border rank holds for $\mathbf{a},\mathbf{b},\mathbf{c}\le 4$.
Most of the cases follow from Corollary~\ref{cor_additivity_brank_very_small_a_b_c},
with the exception of $(3+1,2+2,2+2)$ and $(3+1,3+1,3+1)$,
which are covered in Sections~\ref{sec_br_3_2_2_1_b_c} and~\ref{sec_br_3_3_3_1_b_c}.
\begin{defin}
Assume $p, q \in A\otimes B\otimes C$ are two tensors.
We say that $p$ is \emph{more degenerate} than $q$ if $p \in \overline{GL(A)\times GL(B) \times GL(C)\cdot q}$.
\end{defin}
\begin{example}\label{example_222_degenerates_to_122}
Any concise tensor in ${\mathbb C}^1\otimes {\mathbb C}^2\otimes {\mathbb C}^2$ is more degenerate
than any concise tensor in ${\mathbb C}^2\otimes {\mathbb C}^2\otimes {\mathbb C}^2$.
\end{example}
\begin{example}\label{example_degeneration_in_322}
Consider concise tensors in ${\mathbb C}^3\otimes {\mathbb C}^2\otimes {\mathbb C}^2$.
According to \cite[Table 10.3.1]{landsberg_tensorbook},
there are two orbits of such tensors under the action of $GL_3\times GL_2\times GL_2$,
both of border rank $3$.
One orbit is ``generic'', the other is \emph{more degenerate}.
The latter is represented by:
\[
p = a_1 \otimes b_1 \otimes c_1
+ a_2 \otimes b_1 \otimes c_2
+ a_3 \otimes b_2 \otimes c_1.
\]
\end{example}
\begin{lem}\label{lemma_more_degenerate}
Suppose $p'\in A'\otimes B'\otimes C'$ is an arbitrary tensor
and $p'', q'' \in A''\otimes B''\otimes C''$ are
such that ${\underline{R}} (p'')={\underline{R}}(q'')$ and $p''$ is more degenerate than $q''$.
If the additivity of the border rank holds for $p'\oplus p''$
then it also holds for $p'\oplus q''$.
\end{lem}
\begin{proof}
Since $p''$ is more degenerate than $q''$, also $p'\oplus p''$ is more degenerate than $p'\oplus q''$.
Thus
\[
{\underline{R}}(p'\oplus q'') \ge {\underline{R}}(p'\oplus p'') = {\underline{R}}(p') + {\underline{R}}(p'')= {\underline{R}}(p') + {\underline{R}}(q'').
\]
\end{proof}
\subsection{Strassen's equations of secant varieties}\label{sec_Strassen_equations}
As a criterion to determine whether a tensor is of a given border rank, we often exploit the
defining equations of the corresponding secant varieties.
We review here the type of equations that is most important for the small cases considered in this article.
First assume $\mathbf{b}=\mathbf{c}$ and consider the space of square matrices $B\otimes C$.
Let $f_\mathbf{b}:(B\otimes C)^{\times3}\to B\otimes C$ be the map of matrices defined as follows:
\begin{equation}\label{equation_adjoint}
f_\mathbf{b}(x,y,z)=x\adj (y) z-z \adj (y)x,
\end{equation}
where $\adj(y)$ denotes the adjugate (classical adjoint) matrix of $y$.
As in Section~\ref{section_slices} write
$$
p=\sum_{i=1}^{\mathbf{a}} a_i\otimes w_i,
$$
where $w_1,\dots,w_{\mathbf{a}}\in W:=p({A}^\ast)\subset B\otimes C$ are $\mathbf{b}\times\mathbf{c}$ matrices
and $\{a_1,\dots,a_\mathbf{a}\}$ is a basis of $A$.
\begin{prop}\label{prop_strassen_adj}
Assume that $p\in A\otimes B\otimes C$.
\begin{enumerate}
\item \cite{strassen_equations} \label{item_strassen_adj_3}
Suppose $\mathbf{a}=\mathbf{b}=\mathbf{c}=3$.
Then ${\underline{R}}(p)\le3$ if and only if $f_3(x,y,z)=\underline{0}$ for every $x,y,z\in W$.
\item \cite{landsbergmanivel08} \label{item_strassen_adj_more}
Suppose $\mathbf{a}=\mathbf{b}=\mathbf{c}$ and ${\underline{R}}(p)\le\mathbf{a}$.
Then $f_\mathbf{a}(x,y,z)=\underline{0}$, for every $x,y,z\in W$.
\end{enumerate}
\end{prop}
See also \cite[Thm~3.2]{friedland_salmon_problem_paper}.
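Strassen's equations can be tested numerically on small examples. The sketch below (assuming NumPy; the helpers \verb|adj| and \verb|f3| are ours) checks that $f_3$ vanishes on triples of slices of the diagonal rank-$3$ tensor, and that it does not vanish for the slices $\{\Id, E_{12}, E_{21}\}$, so by Proposition~\ref{prop_strassen_adj}\ref{item_strassen_adj_3} the corresponding tensor has border rank greater than $3$:

```python
import numpy as np

def adj(y):
    # adjugate via cofactors; works also when y is singular
    c = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(y, i, axis=0), j, axis=1)
            c[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return c.T  # adjugate = transpose of the cofactor matrix

def f3(x, y, z):
    return x @ adj(y) @ z - z @ adj(y) @ x

# slices of the diagonal rank-3 tensor: all diagonal, so f3 vanishes
diag_slices = [np.diag(np.eye(3)[i]) for i in range(3)]
vals = [np.abs(f3(x, y, z)).max()
        for x in diag_slices for y in diag_slices for z in diag_slices]
assert max(vals) < 1e-12

# slices Id, E12, E21: f3(E12, Id, E21) = E11 - E22 is nonzero
w1 = np.eye(3)
w2 = np.zeros((3, 3)); w2[0, 1] = 1.0
w3 = np.zeros((3, 3)); w3[1, 0] = 1.0
assert np.abs(f3(w2, w1, w3)).max() > 0.5
```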
We also recall Ottaviani's derivation of Strassen's equations (\cite{ottaviani_symplectic_bundles_secants_Luroth}, see also \cite[Sect.~3.8.1]{landsberg_tensorbook}) for secant varieties of three factor Segre embeddings.
Given a tensor $p:B^\ast\to A\otimes C$, consider the contraction operator
$$
p^{\wedge}_A:A\otimes B^\ast\to\Lambda^2 A\otimes C,
$$
obtained as the composition of the map $ \Id_A\otimes p:A\otimes B^\ast\to A^{\otimes2}\otimes C$ with the natural projection $A^{\otimes2}\otimes C\to\Lambda^2 A\otimes C$.
\begin{prop}[{\cite[Theorem 4.1]{ottaviani_symplectic_bundles_secants_Luroth}}]
\label{prop_ottaviani_derivation_strassen}
Assume $3\le\mathbf{a}\le \mathbf{b},\mathbf{c}$.
If ${\underline{R}}(p)\le r$, then $\rk({p}^\wedge_{A})\le r(\mathbf{a}-1)$.
\end{prop}
If $\mathbf{a}=3$, we can slice $p$ as follows (cf. Section~\ref{section_slices}):
$p=\sum_{i=1}^3a_i \otimes w_i\in A\otimes B\otimes C$, with $w_i\in B\otimes C$.
Then the matrix representation of ${p}^\wedge_{A}$ in block matrices is the following $(\mathbf{b}+\mathbf{b}+\mathbf{b},\mathbf{c}+\mathbf{c}+\mathbf{c})$ partitioned matrix
\begin{equation}\label{equation_contraction_operator}
M_3(w_1,w_2,w_3):=\left(
\begin{array}{ccc}
\underline{0}&w_3&-w_2\\
-w_3&\underline{0}&w_1\\
w_2&-w_1&\underline{0}
\end{array}\right).
\end{equation}
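As a sanity check of Proposition~\ref{prop_ottaviani_derivation_strassen}, one can assemble $M_3$ for the slices of the diagonal rank-$3$ tensor and verify that $\rk({p}^\wedge_{A}) = 6 = r(\mathbf{a}-1)$, attaining the bound. A NumPy sketch (the helper \verb|M3| is our own naming):

```python
import numpy as np

def M3(w1, w2, w3):
    # block matrix of the contraction p^_A for a = 3, as in the display above
    Z = np.zeros_like(w1)
    return np.block([
        [Z,   w3, -w2],
        [-w3, Z,   w1],
        [w2, -w1,  Z],
    ])

# slices of the diagonal tensor sum_i a_i (x) b_i (x) c_i, of rank r = 3
w1, w2, w3 = (np.diag(np.eye(3)[i]) for i in range(3))
rk = np.linalg.matrix_rank(M3(w1, w2, w3))
assert rk == 6  # equals r * (a - 1) = 3 * 2
```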
\begin{prop}[{\cite[Prop.~7.6.4.4]{landsberg_tensorbook}}]\label{prop_strassen_degree9}
If $\mathbf{a}=\mathbf{b}=\mathbf{c}=3$, the degree nine equation
\[
\det({p}^\wedge_{A})=0
\]
defines the variety $\sigma_4({\mathbb P} A\times {\mathbb P} B\times{\mathbb P} C)\subset{\mathbb P}(A\otimes B\otimes C)$.
\end{prop}
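Proposition~\ref{prop_strassen_degree9} can likewise be illustrated numerically: $\det({p}^\wedge_{A})$ vanishes for the diagonal tensor (border rank $3 \le 4$) and is generically nonzero. A hedged NumPy sketch (helper names ours; the random tensor is assumed generic, hence of border rank $5$):

```python
import numpy as np

def M3(w1, w2, w3):
    Z = np.zeros_like(w1)
    return np.block([[Z, w3, -w2], [-w3, Z, w1], [w2, -w1, Z]])

rng = np.random.default_rng(2)

# a generic tensor in C^3 (x) C^3 (x) C^3 lies outside sigma_4,
# so the degree-nine equation does not vanish on it
generic = [rng.standard_normal((3, 3)) for _ in range(3)]
assert abs(np.linalg.det(M3(*generic))) > 1e-8

# the diagonal tensor has border rank 3 <= 4, so the determinant vanishes
diag = [np.diag(np.eye(3)[i]) for i in range(3)]
assert abs(np.linalg.det(M3(*diag))) < 1e-12
```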
If $\mathbf{a}=4$ and $p=\sum_{i=1}^4a_i \otimes w_i\in A\otimes B\otimes C$, with
$w_i\in B\otimes C$, then the matrix representation of ${p}^\wedge_{A}$
in block matrices is the following $(4\cdot\mathbf{b},6\cdot\mathbf{c})$
partitioned matrix
\begin{equation}\label{equation_contraction_operator_4}
M_4(w_1,w_2,w_3,w_4):=\left(
\begin{array}{cccccc}
\underline{0}&w_3&-w_2&w_4&\underline{0}&\underline{0}\\
-w_3&\underline{0}&w_1&\underline{0}&-w_4&\underline{0}\\
w_2&-w_1&\underline{0}&\underline{0}&\underline{0}&w_4\\
\underline{0}&\underline{0}&\underline{0}&-w_1&w_2&-w_3\\
\end{array}\right).
\end{equation}
\subsection{Case \texorpdfstring{$(3+1,2+\mathbf{b}'',2+\mathbf{c}'')$}{(3+1,2+b'',2+c'')}}\label{sec_br_3_2_2_1_b_c}
Assume $\mathbf{a}'=3$, $\mathbf{b}'=\mathbf{c}'=2$ and $\mathbf{a}''=1$.
\begin{prop}\label{prop_br_case_3_2_2_1_2_2}
For any $p'\in {\mathbb C}^3\otimes{\mathbb C}^2\otimes {\mathbb C}^2$ and $p''\in {\mathbb C}^1\otimes{\mathbb C}^{\mathbf{b}''}\otimes {\mathbb C}^{\mathbf{c}''}$
the additivity of the border rank holds.
\end{prop}
\begin{proof}
We can assume $p''$ is concise, so that ${\underline{R}}(p'')=\mathbf{b}''=\mathbf{c}''$.
Also if $p'$ is not concise, then Corollary~\ref{cor_additivity_brank_very_small_a_b_c} shows the claim.
So suppose $p'$ is concise and thus ${\underline{R}}(p')=3$.
We can write $p'=a_1\otimes w'_1+a_2\otimes w'_2+a_3\otimes w'_3$ and $p''=a_4\otimes w''_4$, where $w'_1,\dots, w'_3$ are $2\times2$ matrices and $w''_4$ is an invertible $\mathbf{b}''\times \mathbf{b}''$ matrix.
As for $p'$, by Example~\ref{example_degeneration_in_322} and Lemma~\ref{lemma_more_degenerate}
we can choose the more degenerate tensor, which has the following normal form:
$$
w'_1=\left(\begin{array}{cc} 1&0\\0&0\end{array}\right),
w'_2=\left(\begin{array}{cc} 0&1\\0&0\end{array}\right),
w'_3=\left(\begin{array}{cc} 0&0\\1&0\end{array}\right).
$$
Write $p=\sum_{i=1}^4 a_i\otimes w_i$, where $w_i$ are the following $(2+\mathbf{b}'',2+\mathbf{b}'')$ partitioned matrices
$$
w_i=\left(\begin{array}{cc} w'_i&\underline{0}\\ \underline{0}&\underline{0} \end{array}\right), i=1,2,3, \
w_4=\left(\begin{array}{cc} \underline{0}&\underline{0}\\
\underline{0}&w''_4 \end{array}\right).
$$
We use the same notation as in Section~\ref{sec_Strassen_equations}.
We claim that the matrix representing the contraction operator $p^\wedge_A$,
denoted by $M_4(w_1,w_2,w_3,w_4)$ as in \eqref{equation_contraction_operator_4}, has rank $7+3\mathbf{b}''$.
By Proposition~\ref{prop_ottaviani_derivation_strassen} this gives $3\,{\underline{R}}(p)\ge \rk({p}^\wedge_{A})=7+3\mathbf{b}''$,
hence ${\underline{R}}(p) \ge 3+\mathbf{b}''={\underline{R}}(p')+{\underline{R}}(p'')$, showing the additivity.
In order to prove the claim, we observe that $M_4(w_1,w_2,w_3,w_4)$ can be transformed via permutations of rows and columns into the following $(6+3\mathbf{b}''+2+\mathbf{b}'',6+3\mathbf{b}''+2+2+2+3\mathbf{b}'')$-partitioned matrix
$$
\left(
\begin{array}{cccccc}
M_3(w'_1,w'_2,w'_3)&\underline{0}&\underline{0}&\underline{0}&\underline{0}&\underline{0}\\
\underline{0}&N&\underline{0}&\underline{0}&\underline{0}&\underline{0}\\
\underline{0}&\underline{0}&-w'_1&w'_2&-w'_3&\underline{0}\\
\underline{0}&\underline{0}&\underline{0}&\underline{0}&\underline{0}&\underline{0}\\
\end{array}\right),
$$
where $N$ is the following $3\mathbf{b}''\times3\mathbf{b}''$ matrix
$$
N=\left(\begin{array}{ccc}
w_4''&\underline{0}&\underline{0}\\
\underline{0}&-w_4''&\underline{0}\\
\underline{0}&\underline{0}&w_4''
\end{array}\right).
$$
One can compute that the rank of $M_3(w'_1,w'_2,w'_3)$ equals $5$.
Moreover, since $\rk(N)=3\mathbf{b}''$ and $\rk((-w'_1,w'_2,-w'_3))=2$, we conclude the proof of the claim.
\end{proof}
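The rank claim in the proof above can be verified numerically for a concrete value of $\mathbf{b}''$. The sketch below (assuming NumPy, with $\mathbf{b}''=2$ and $w''_4$ the identity, both our own test choices) assembles $M_4$ exactly as in \eqref{equation_contraction_operator_4} and checks that its rank is $7+3\mathbf{b}''=13$:

```python
import numpy as np

bpp = 2            # b'' (assumed value for this check)
n = 2 + bpp        # size of each block w_i

def embed(m, corner):
    # embed a small block into an n x n matrix, top-left ("tl") or bottom-right ("br")
    w = np.zeros((n, n))
    k = m.shape[0]
    if corner == "tl":
        w[:k, :k] = m
    else:
        w[n - k:, n - k:] = m
    return w

# slices of p = p' + p'' in the normal form used in the proof
w1 = embed(np.array([[1., 0.], [0., 0.]]), "tl")
w2 = embed(np.array([[0., 1.], [0., 0.]]), "tl")
w3 = embed(np.array([[0., 0.], [1., 0.]]), "tl")
w4 = embed(np.eye(bpp), "br")          # w''_4 invertible

Z = np.zeros((n, n))
M4 = np.block([
    [Z,   w3, -w2,  w4,   Z,   Z],
    [-w3, Z,   w1,  Z,  -w4,   Z],
    [w2, -w1,  Z,   Z,   Z,   w4],
    [Z,   Z,   Z,  -w1,  w2, -w3],
])
assert np.linalg.matrix_rank(M4) == 7 + 3 * bpp   # here 13
```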
\subsection{Case \texorpdfstring{$(3+1,3+\mathbf{b}'',3+\mathbf{c}'')$}{(3+1,3+b'',3+c'')}}\label{sec_br_3_3_3_1_b_c}
Recall our usual setting: $p'\in A'\otimes B'\otimes C'$, $p''\in A''\otimes B''\otimes C''$,
$\mathbf{a}':=\dim A'$, etc. (Notation~\ref{notation_2}).
In this subsection we are going to prove the following case of additivity of the border rank.
\begin{prop}\label{prop_br_case_3_3_3_1_1_1}
The additivity of the border rank holds for $p'\oplus p''$ if $\mathbf{a}'=\mathbf{b}'=\mathbf{c}'=3$, and
$p'$ is concise and $\mathbf{a}''=1$.
\end{prop}
\begin{proof}
By replacing $B''$ and $C''$ with smaller spaces
we can assume $p''$ is also concise and in particular $\mathbf{b}''=\mathbf{c}''$.
If ${\underline{R}}(p') = 3$ then Lemma~\ref{lemma_additivity_small_brank} implies the claim.
On the other hand, by Terracini's Lemma, ${\underline{R}}(p')\le 5$.
Thus it is sufficient to treat the cases ${\underline{R}}(p')=4$ and ${\underline{R}}(p')=5$.
Let $\{a_1,a_2,a_3\}$ be a basis of $A'$ and let $\{a_4\}$ be a basis of $A''\simeq {\mathbb C}$.
Write
\begin{equation}\label{equation_p'_(3,3,3)}
p'=a_1\otimes w'_1+a_2\otimes w'_2+a_3\otimes w'_3,
\end{equation}
where $w'_1,w'_2,w'_3\in W':=p'((A')^\ast)\subset B'\otimes C'$ are $3\times3$ matrices.
Similarly, let
$$
p=a_1\otimes w_1+a_2\otimes w_2+a_3\otimes w_3+a_4\otimes w_4,
$$
where $w_1,w_2,w_3,w_4\in W:=p({A}^\ast)\subset B\otimes C$ are $(3+\mathbf{b}'',3+\mathbf{b}'')$ partitioned matrices:
\begin{equation}\label{equation_slicing_border}
w_i=\left(
\begin{array}{cc}
w'_i & \underline{0} \\
\underline{0} & \underline{0} \\
\end{array}\right), \ i=1,2,3, \text{ and }
w_4=\left(
\begin{array}{cc}
\underline{0} & \underline{0}\\
\underline{0} & w_4''
\end{array}\right).
\end{equation}
We now analyse the two cases ${\underline{R}}(p') =4$ and ${\underline{R}}(p') =5$ separately.
\paragraph{The additivity holds if the border rank of $p'$ is equal to four}
Assume by contradiction that ${\underline{R}}(p)\le \mathbf{b}''+3 = {\underline{R}}(p')+{\underline{R}}(p'') -1$.
By Proposition~\ref{prop_strassen_adj}\ref{item_strassen_adj_more},
we obtain the following equations: $f_{\mathbf{b}''+3}(x',y'+y'',z')=\underline{0}$,
for every $x',y',z'\in W'= p'\left((A')^*\right)$ and $0\ne y''\in W''= p''\left((A'')^*\right)$.
We can see that $\adj(y'+y'')$ is the following $(3+\mathbf{b}'',3+\mathbf{b}'')$ partitioned matrix
$$
\adj(y'+y'')=
\left(
\begin{array}{cc}
\det(y'') \adj (y')& \underline{0} \\
\underline{0}& \det(y')\adj(y'')\\
\end{array}\right).
$$
Therefore we have
$$
x' \adj(y'+y'') z' =
\left(
\begin{array}{cc}
\det(y'') x'\adj (y')z'& \underline{0} \\
\underline{0}& 0 \\
\end{array}\right).
$$
Since $p''$ is concise, $\det(y'')\ne 0$, and
thus from the vanishing of $f_{\mathbf{b}''+3}(x', y'+y'', z')$ we also obtain that $f_3(x',y',z')=0$.
Therefore ${\underline{R}}(p')\le 3$ by Proposition~\ref{prop_strassen_adj}\ref{item_strassen_adj_3}, a contradiction.
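The block-diagonal adjugate identity used above, $\adj(\operatorname{diag}(y',y''))=\operatorname{diag}(\det(y'')\adj(y'),\,\det(y')\adj(y''))$, can be sanity-checked numerically. The sketch below (with a random $3\times 3$ block for $y'$ and $\mathbf{b}''=2$, choices of ours for illustration only) verifies it via $\adj(m)=\det(m)\,m^{-1}$:

```python
import numpy as np

def adj(m):
    # Adjugate of an invertible matrix: adj(m) = det(m) * m^{-1}.
    return np.linalg.det(m) * np.linalg.inv(m)

rng = np.random.default_rng(0)
yp = rng.standard_normal((3, 3))    # plays the role of y'  (a 3 x 3 block)
ypp = rng.standard_normal((2, 2))   # plays the role of y'' (a b'' x b'' block, b'' = 2)

# The block-diagonal matrix y' + y'' appearing in the proof.
y = np.block([[yp, np.zeros((3, 2))],
              [np.zeros((2, 3)), ypp]])

expected = np.block([
    [np.linalg.det(ypp) * adj(yp), np.zeros((3, 2))],
    [np.zeros((2, 3)), np.linalg.det(yp) * adj(ypp)],
])
assert np.allclose(adj(y), expected)
```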
\paragraph{The additivity holds if the border rank of $p'$ is equal to five}
Consider the projection $\pi: A\otimes B\otimes C \to A'\otimes B\otimes C$ given by
\begin{align*}
a_i&\mapsto a_i,\ i=1,2,3\\
a_4&\mapsto a_1+a_2+a_3.
\end{align*}
Consider $\bar{p}:=\pi(p)\in A'\otimes B\otimes C$ and write $\bar{p}=a_1\otimes \bar{w}_1+a_2\otimes \bar{w}_2+a_3\otimes \bar{w}_3$, where, for $i=1,2,3$,
$\bar{w}_i$ is the $(3+\mathbf{b}'',3+\mathbf{b}'')$ partitioned matrix
$$
\bar{w}_i=\left(\begin{array}{cc}
w'_i&0\\
0&w''_4
\end{array}\right).
$$
We claim that $\rk({\bar{p}}^\wedge_{A'})=9+2\mathbf{b}''$.
Indeed, by swapping both rows and columns of $M_3(\bar{w}_1,\bar{w}_2,\bar{w}_3)$
(see Equation~\ref{equation_contraction_operator})
we obtain the following $(9+3\mathbf{b}'',9+3\mathbf{b}'')$ partitioned matrix
$$
\left(\begin{array}{cc}
{p'}^\wedge_{A'}&\underline{0}\\
\underline{0}&M_3(w_4'',w_4'',w_4'')
\end{array}\right).
$$
Since ${\underline{R}}(p')=5$, the matrix ${p'}^\wedge_{A'}$ has rank $9$, by Proposition~\ref{prop_strassen_degree9}. Moreover $M_3(w_4'',w_4'',w_4'')$ has rank $2\mathbf{b}''$.
Therefore, by Proposition~\ref{prop_ottaviani_derivation_strassen}, we obtain ${\underline{R}}(\bar{p})\ge 5+ \mathbf{b}''$.
We conclude by observing that ${\underline{R}}(p)\ge {\underline{R}}(\bar{p})$.
\end{proof}
This concludes the proof of Theorem~\ref{thm_additivity_border_rank_in_dimension_4}, as all possible splittings
$\mathbf{a}= \mathbf{a}'+\mathbf{a}''$, $\mathbf{b}= \mathbf{b}'+\mathbf{b}''$, $\mathbf{c}= \mathbf{c}'+\mathbf{c}''$ with $\mathbf{a}, \mathbf{b},\mathbf{c}\le 4$
are covered either by Corollary~\ref{cor_additivity_brank_very_small_a_b_c}
or one of Propositions~\ref{prop_br_case_3_2_2_1_2_2} or \ref{prop_br_case_3_3_3_1_1_1}.
\begin{table}[htb]
\[
\begin{array}{|c|c|c|c|c|}
\hline
\# & (\mathbf{a}', \mathbf{b}', \mathbf{c}') & (\mathbf{a}'', \mathbf{b}'', \mathbf{c}'') & {\underline{R}}(p') & {\underline{R}}(p'') \\
\hline
1. & 3, 2, 2 & 2, 3, 2 & 3 & 3 \\
2. & 3, 3, 2 & 2, 2, 3 & 3 & 3 \\
3. & 3, 3, 3 & 2, 2, 2 & 4, 5 & 2 \\
4. & 4, 2, 2 & 1, 2, 2 & 4 & 2 \\
5. & 4, 2, 2 & 1, 3, 3 & 4 & 3 \\
6. & 4, 3, 2 & 1, 2, 2 & 4 & 2 \\
7. & 4, 3, 3 & 1, 1, 1 & 5 & 1 \\
8. & 4, 3, 3 & 1, 2, 2 & 5 & 2 \\
9. & 4, 4, 3 & 1, 1, 1 & 5, 6 & 1 \\
10. & 4, 4, 4 & 1, 1, 1 & 5, 6, 7 & 1 \\
\hline
\end{array}
\]
\caption{The list of pairs of concise tensors and their border ranks that should be checked to determine
the additivity of the border rank for $\mathbf{a}, \mathbf{b}, \mathbf{c} \le 5$.
This list contains all pairs of concise tensors not covered by
Corollary~\ref{cor_additivity_brank_very_small_a_b_c},
or Proposition~\ref{prop_br_case_3_2_2_1_2_2},
or Proposition~\ref{prop_br_case_3_3_3_1_1_1},
together with their possible border ranks,
excluding the cases covered by Lemma~\ref{lemma_additivity_small_brank}.
The maximal possible values of border ranks above have been obtained
from~\cite[Sect.~4]{abo_ottaviani_peterson_segre}.} \label{tab_cases_for_br_up_to_dim_5}
\end{table}
One could analyse the additivity for $\mathbf{a}, \mathbf{b}, \mathbf{c} \le 5$
(so for the bound one more than in Theorem~\ref{thm_additivity_border_rank_in_dimension_4})
by checking all 10 possible cases listed in Table~\ref{tab_cases_for_br_up_to_dim_5}.
We conclude the article by also solving Case $3$ from the table.
\begin{example}
If $p'\in {\mathbb C}^3\otimes {\mathbb C}^3\otimes {\mathbb C}^3$ and $p''\in {\mathbb C}^2\otimes {\mathbb C}^2\otimes {\mathbb C}^2$ are both concise,
then the additivity of the border rank holds for $p'\oplus p''$.
Indeed, by Example~\ref{example_222_degenerates_to_122}
there exists $q''\in {\mathbb C}^1\otimes {\mathbb C}^2\otimes {\mathbb C}^2$ more degenerate than $p''$,
but of the same border rank.
By Lemma~\ref{lemma_more_degenerate} it is enough to prove the additivity for $p'\oplus q''$.
This is provided by Proposition~\ref{prop_br_case_3_3_3_1_1_1}.
\end{example}
\section*{Acknowledgments}
We are enormously grateful to Joseph Landsberg for introducing us
to this topic and numerous discussions and explanations.
We also thank Michael Forbes, Mateusz Micha{\l}ek, Artie Prendergast-Smith, Zach Teitler, and Alan Thompson
for reference suggestions and their valuable comments.
We are also grateful to the referees and the journal editors for their suggestions, which have helped to improve the results and the presentation.
The research on this project was spread across a wide period of time.
It commenced while E.~Postinghel was a postdoc at IMPAN in Warsaw (Poland, 2012--2013) under the project ``Secant varieties, computational complexity, and toric degenerations'', realised within the Homing Plus programme of the Foundation for Polish Science, co-financed by the European Union Regional Development Fund.
Our collaboration in the years 2014--2019 was also made possible by many meetings,
in particular those related to special programmes, such as:
the thematic semester ``Algorithms and Complexity in Algebraic Geometry'' at
Simons Institute for the Theory of Computing (2014),
the Polish Algebraic Geometry mini-Semester (miniPAGES, 2016),
and the thematic semester ``Varieties: Arithmetic and Transformations'' (2018).
The latter two events were supported by the grant 346300 for IMPAN
from the Simons Foundation and the matching 2015-2019 Polish MNiSW fund.
We are grateful to the participants of these semesters for numerous inspiring discussions
and to the sponsors for supporting our participation.
We are grateful to Loughborough University for hosting our collaboration in May 2017 and to Copenhagen University for hosting us in January 2019.
In addition, J.~Buczy{\'n}ski is supported by
the Polish National Science Center project ``Algebraic Geometry: Varieties and Structures'',
2013/08/A/ST1/00804,
the scholarship ``START'' of the Foundation for Polish Science
and a scholarship of the Polish Ministry of Science.
E.~Postinghel was supported by a grant of the Research Foundation -- Flanders (FWO) from 2013 to 2016 and has been supported by the EPSRC grant no.~EP/S004130/1 since late 2018.
Finally, the paper is also a part of the activities of the AGATES research group.
\end{document}
\begin{document}
\title{Efficient Gradient Approximation Method for Constrained Bilevel Optimization}
\begin{abstract}
Bilevel optimization has been developed for many machine learning tasks with large-scale and high-dimensional data. This paper considers a constrained bilevel optimization problem, where the lower-level optimization problem is convex with equality and inequality constraints and the upper-level optimization problem is non-convex. The overall objective function is non-convex and non-differentiable. To solve the problem, we develop a gradient-based approach, called the gradient approximation method, which determines the descent direction by computing several representative gradients of the objective function inside a neighborhood of the current estimate. We show that the algorithm asymptotically converges to the set of Clarke stationary points, and we demonstrate its efficacy through experiments on hyperparameter optimization and meta-learning.
\end{abstract}
\section{Introduction}
A general constrained bilevel optimization problem is formulated as follows:
\begin{align}
\label{opt_intr}
&\min_{x \in \mathbb{R}^{d_x}} \ \Phi(x)=f\left(x, y^{*}(x)\right) \nonumber \\
& \text { s.t. } \ r\left(x, y^{*}(x)\right) \leq 0; \ s\left(x, y^{*}(x)\right) = 0; \\
&y^{*}(x)=\underset{y \in \mathbb{R}^{d_y}}{\arg\min } \{ g(x, y): p\left(x, y\right) \leq 0; q\left(x, y\right) = 0\}. \nonumber
\end{align}
The bilevel optimization minimizes the overall objective function $\Phi(x)$ with respect to (w.r.t.) $x$, where $y^{*}(x)$ is the optimal solution of the lower-level optimization problem and parametric in the upper-level decision variable $x$. In this paper, we assume that $y^{*}(x)$ is unique for any $x \in \mathbb{R}^{d_x}$.
Existing methods to solve problem \eqref{opt_intr} can be categorized into two classes: single-level reduction methods \cite{BARD198277, Bard1990, shi2005extended} and descent methods \cite{SAVARD1994265, dempe1998implicit}.
Single-level reduction methods use the KKT conditions to replace the lower-level optimization problem when it is convex.
Then, they reformulate the bilevel optimization problem \eqref{opt_intr} as a single-level constrained optimization problem.
Descent methods aim to find descent directions in which the new point is feasible and meanwhile reduces the objective function.
Paper \cite{SAVARD1994265} computes a descent direction of the objective function by solving a quadratic program.
Paper \cite{dempe1998implicit} applies the gradient of the objective function computed in \cite{kolstad1990derivative, fiacco1990nonlinear} to compute a generalized Clarke Jacobian, and uses a bundle method \cite{schramm1992version} for the optimization.
When applied to machine learning, bilevel optimization faces additional challenges as the dimensions of decision variables in the upper-level and lower-level problems are high \cite{liu2021investigating}.
Gradient-based methods have been shown to be effective in handling large-scale and high-dimensional data in a variety of machine learning tasks \cite{bottou2008}.
They have been extended to solve the bilevel optimization problem where there is no constraint in the lower-level optimization.
The methods can be categorized into
the approximate implicit differentiation (AID) based approaches \cite{pedregosa2016hyperparameter,gould2016differentiating,ghadimi2018approximation,grazzi2020iteration} and the iterative differentiation (ITD) approaches \cite{grazzi2020iteration,franceschi2017forward,franceschi2018bilevel,shaban2019truncated,ji2021bilevel}.
The AID based approaches evaluate the gradients of $y^{*}(x)$ and $\Phi (x)$ based on implicit differentiation \cite{bengio2000gradient}.
The ITD based approaches treat the iterative optimization steps in the lower-level optimization
as a dynamical system, impose $y^{*}(x)$ as its stationary point, and compute
$\nabla y^{*}(x)$ at each iterative step.
The gradient-based algorithms have been applied to solve several machine learning tasks,
including meta-learning \cite{franceschi2018bilevel,rajeswaran2019meta,JiLLP20},
hyperparameter optimization \cite{pedregosa2016hyperparameter,franceschi2017forward, franceschi2018bilevel}, reinforcement learning \cite{hong2020two,konda2000actor}, and network architecture search \cite{liu2018darts}.
The above methods are limited to unconstrained bilevel optimization and require the objective function to be differentiable.
They cannot be directly applied when constraints are present in the lower-level optimization, as the objective function then becomes non-differentiable.
\textbf{Contributions. }
In this paper, we consider a special case of problem \eqref{opt_intr} where the upper-level constraints $r$ and $s$ are not included.
In general, the objective function $\Phi$ is non-convex and non-differentiable, even if the upper-level and lower-level problems are convex and the functions $f$, $g$, $p$, $q$ are differentiable \cite{10.1137.0913069,liu2021investigating}.
Most methods for this bilevel optimization problem are highly complicated and computationally expensive, especially when the dimension of the problem is large \cite{liu2021investigating,10.1007/s10589-015-9795-8}.
To address this challenge, we determine the descent direction by computing several gradients that represent the gradients of the objective function at all points in a ball, and we develop a computationally efficient algorithm with a convergence guarantee for the constrained bilevel optimization problem.
The overall contributions are summarized as follows.
(i) Firstly, we derive conditions under which the lower-level optimal solution $y^{*}(x)$ is continuously differentiable or directionally differentiable. In addition, we provide analytical expressions for the gradient of $y^*(x)$ when it is continuously differentiable and for the directional derivative of $y^{*}(x)$ when it is only directionally differentiable.
(ii) Secondly, we propose the gradient approximation method, which applies a Clarke subdifferential approximation of the non-convex and non-differentiable objective function $\Phi$ within a line search method. In particular, a set of derivatives is used to approximate the gradients or directional derivatives at all points in a neighborhood of the current estimate. The Clarke subdifferential is then approximated by these derivatives, and the approximate Clarke subdifferential is employed to determine the descent direction for the line search.
(iii) Thirdly, we show that the Clarke subdifferential approximation errors are small, that the line search is always feasible, and that the algorithm asymptotically converges to the set of Clarke stationary points.
(iv) We empirically verify the efficacy of the proposed algorithm by conducting experiments on hyperparameter optimization and meta-learning.
\textbf{Related Works. }
Differentiation of the optimal solution of a constrained optimization problem has long been studied.
Sensitivity analysis of constrained optimization \cite{lemke1985introduction,fiacco1990sensitivity,fiacco1990nonlinear} shows that the optimal solution $y^*(x)$ of a convex optimization problem is directionally differentiable but may fail to be differentiable at some points.
It implies that the objective function $\Phi(x)$ in problem \eqref{opt_intr} may not be differentiable.
Based on the implicit differentiation of the KKT conditions, the papers also compute $\nabla y^*(x)$ when $y^*$ is differentiable at $x$.
Optnet \cite{amos2017input,amos2017optnet,agrawal2019differentiable} applies the gradient computation to the constrained bilevel optimization, where a deep neural network is included in the upper-level optimization problem.
In particular, the optimal solution $y^*(x)$ serves as a layer in the deep neural network and $\nabla y^*(x)$ is used as the backpropagation gradients to optimize the neural network parameters.
However, none of the above methods explicitly considers the non-differentiability of $y^{*}(x)$ and $\Phi(x)$, and none can guarantee convergence.
Recently, papers \cite{liu2021towards,sow2022constrained} consider that the lower-level optimization problem has simple constraints, such that projection onto the constraint set can be easily computed, and require that the constraint set is bounded. In this paper, we consider inequality and equality constraints, which are more general than those in \cite{liu2021towards,sow2022constrained}.
\textbf{Notations. }
Denote $a>b$ for vectors $a, b \in \mathbb{R}^{n}$, when $a_i>b_i$ for all $1 \leq i \leq n$. Notations $a \geq b$, $a=b$, $a \leq b$, and $a<b$ are defined in an analogous way. Denote the $l_2$ norm of vectors by $\| \cdot \|$.
The directional derivative of a function $f$ at $x$ on the direction $d$ with $\|d\|=1$ is defined as $\nabla_{d} f({x}) \triangleq \lim _{h \rightarrow 0^{+}} \frac{f(x+h d)-f(x)}{h}$.
A ball centered at $x$ with radius $\epsilon$ is denoted as $\mathcal{B}(x,\epsilon)$.
The complementary set of a set $S$ is denoted as $S^C$.
The distance between the point $x$ and the set $S$ is defined as $d(x, S) \triangleq \inf \{\|x-a\| \mid a \in S\}$.
The convex hull of $S$ is denoted by $\operatorname{conv} S$.
For set $S$ and function $f$, we define the image set $f(S) \triangleq \{f(x)\mid x \in S \}$.
For a finite positive integer set $I$ and a vector function $p$, we denote the subvector function $p_I \triangleq [p_{k_1}, \cdots , p_{k_j}, \cdots]^{\top} $ where $k_j \in I$.
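The one-sided limit defining the directional derivative can be approximated by a forward difference. The snippet below (with an illustrative non-smooth $f$ of our own choosing, not from the paper) shows this at a kink, where the directional derivative exists along every direction even though the gradient does not:

```python
import numpy as np

def dir_deriv(f, x, d, h=1e-6):
    # One-sided forward difference approximating the directional derivative
    # of f at x along the unit direction d (definition with h -> 0+).
    d = d / np.linalg.norm(d)
    return (f(x + h * d) - f(x)) / h

f = lambda x: abs(x[0]) + x[1]      # non-differentiable where x1 = 0
x0 = np.zeros(2)

# Along d = (1, 0) and d = (-1, 0) the directional derivative of |x1| is 1,
# even though |x1| has no gradient at x1 = 0.
print(dir_deriv(f, x0, np.array([1.0, 0.0])))    # ~ 1.0
print(dir_deriv(f, x0, np.array([-1.0, 0.0])))   # ~ 1.0
```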
\section{Problem Statement}
\label{section_1}
Consider the constrained bilevel optimization problem:
\begin{align}
\label{biopt}
&\min_{x \in \mathbb{R}^{d_x}} \ \Phi(x)=f\left(x, y^{*}(x)\right) \\
&\text{s.t. }
y^{*}(x)=\underset{y \in \mathbb{R}^{d_y}}{\arg\min } \{ g(x, y): p\left(x, y\right) \leq 0; q\left(x, y\right) = 0\}, \nonumber
\end{align}
where $f, g : \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \rightarrow \mathbb{R}$; $p : \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \rightarrow \mathbb{R}^{m}$; $q : \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \rightarrow \mathbb{R}^{n}$.
Given $x \in \mathbb{R}^{d_x}$, we denote the lower-level optimization problem in \eqref{biopt}
as $P(x)$.
The feasible set of $P(x)$ is defined as $K\left(x\right) \triangleq \{y \in \mathbb{R}^{d_y}: p\left(x, y\right) \leq 0, q\left(x, y\right) = 0\}$.
Suppose the following assumptions hold.
\begin{assumption}
\label{a1} The functions $f$, $g$, $p$ and $q$ are twice continuously differentiable.
\end{assumption}
\begin{assumption}
\label{a2}
For all ${x} \in \mathbb{R}^{d_x}$, the function $g(x,y)$ is $\mu$-strongly-convex w.r.t. $y$; $p_j(x,y)$ is convex w.r.t. $y$ for each $j$; $q_i(x,y)$ is affine w.r.t. $y$ for each $i$.
\end{assumption}
Note that the upper-level objective function $f(x,y)$ and the overall objective function $\Phi(x)$ are non-convex.
The lower-level problem $P(x)$ is convex and its Lagrangian is
$
\mathcal{L}(y, \lambda, \nu, x) \triangleq g(x, y)+ \lambda^{\top} p(x, y)+ \nu^{\top} q(x, y)
$,
where $(\lambda, \nu)$ are Lagrange multipliers and $\lambda \geq 0$.
\begin{definition}
\label{def1}
Suppose that the KKT conditions hold at ${y}$ for $P(x)$ with the Lagrangian multipliers ${\lambda}$ and ${\nu}$.
The set of {active inequality constraints} at ${y}$ for $P(x)$ is defined as:
$J(x,{y}) \triangleq \{j: 1 \leq j \leq m, \ p_{j}(x,{y})=0\}$.
An inequality constraint is called inactive if it is not included in $J(x,{y})$ and the set of inactive constraints is denoted as $J(x,y)^C$.
The set of {strictly active inequality constraints} at ${y}$ is defined as:
$J^{+}(x,{y},{\lambda}) \triangleq \{j: j \in J\left(x,{y}\right), \ {\lambda}_{j}>0\}$.
The set of {non-strictly active inequality constraints} at ${y}$ is defined as:
$J^{0}(x,{y},{\lambda}) \triangleq J(x,{y}) \setminus J^{+}(x,{y},{\lambda})$.
Notice that ${\lambda}_j \geq 0$ for $j \in J(x,{y})$ and ${\lambda}_j = 0$ for $j \in J^{0}(x,{y},{\lambda})$.
\end{definition}
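Given the constraint values and multipliers, the index sets of Definition \ref{def1} can be computed mechanically. The helper below is a hypothetical illustration (the numerical tolerance is an implementation choice of ours, not specified in the paper):

```python
def active_sets(p_vals, lam, tol=1e-9):
    """Classify inequality constraints from values p_j(x, y) and multipliers lambda_j.

    Returns (J, J_plus, J_zero): active, strictly active, non-strictly active sets.
    """
    J = {j for j, pj in enumerate(p_vals) if abs(pj) <= tol}   # p_j(x, y) = 0
    J_plus = {j for j in J if lam[j] > tol}                    # active with lambda_j > 0
    J_zero = J - J_plus                                        # active with lambda_j = 0
    return J, J_plus, J_zero

# Example: three constraints; the first two are active, only the first has lambda > 0.
J, Jp, J0 = active_sets([0.0, 0.0, -0.5], [2.0, 0.0, 0.0])
print(J, Jp, J0)   # {0, 1} {0} {1}
```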
\begin{definition}
The Linear Independence Constraint Qualification (LICQ) holds at ${y}$ for $P(x)$ if the vectors
$
\left\{\nabla_y p_{j}\left(x,{y}\right), j \in J\left(x, {y}\right) ; \nabla_y q_{i}\left(x, {y}\right), 1 \leq i \leq n \right\}
$
are linearly independent.
\end{definition}
\begin{assumption}
\label{a3}
Suppose that for all ${x} \in \mathbb{R}^{d_x}$, the solution $y^{*}(x)$ exists for $P\left(x\right)$, and the LICQ holds at $y^{*}(x)$ for $P(x)$.
\end{assumption}
\section{Differentiability and Gradient of $y^{*}(x)$}
\label{section3}
In this section, we provide sufficient conditions under which the lower-level optimal solution $y^{*}(x)$ is continuously differentiable or directionally differentiable. We compute the gradient of $y^*(x)$ when it is continuously differentiable and the directional derivative of $y^{*}(x)$ when it is only directionally differentiable.
Moreover, we give a necessary condition for $y^{*}(x)$ to be non-differentiable and illustrate it with a numerical example.
In problem \eqref{biopt},
if the upper-level objective function $f$ and the lower-level solution $y^{*}$ are continuously differentiable, then so is $\Phi$, and the chain rule for composite functions gives
\begin{equation}
\label{compositiond}
\nabla \Phi(x)=\nabla_{x} f(x, y^{*}(x))+\nabla y^{*}(x)^{\top} \nabla_{y} f(x, y^{*}(x)).
\end{equation}
It is shown in \cite{domke2012generic} that, when $p$ and $q$ are absent, $y^*$ and $\Phi$ are differentiable under certain assumptions.
The differentiability of $y^*$ and $\Phi$ is used by the AID based approaches in \cite{pedregosa2016hyperparameter,gould2016differentiating,ghadimi2018approximation,grazzi2020iteration,domke2012generic} to approximate $\nabla y^*$ and minimize $\Phi$ by gradient descent.
However, this is no longer the case when the lower-level problem in \eqref{biopt} is constrained.
Theorem \ref{th0} states the conditions under which $y^{*}(x)$ is directionally differentiable.
\begin{theorem}
\label{th0}
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold. The following properties hold for any $x$.
\begin{itemize}
\item[(\romannumeral1)] The global minimum $y^{*}(x)$ of $P\left(x\right)$ exists and is unique. The KKT conditions hold at $y^{*}(x)$ with unique Lagrangian multipliers $\lambda(x)$ and $\nu(x)$.
\item[(\romannumeral2)]
The vector function $z(x) \triangleq [y^{*}(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top}$ is continuous and locally Lipschitz.
The directional derivative of $z(x)$ on any direction exists.
\end{itemize}
\end{theorem}
As shown in part (i) of Theorem \ref{th0},
$y^{*}(x)$, ${\lambda}(x)$ and ${\nu}(x)$ are uniquely determined by $x$.
So we simplify the notations of Definition \ref{def1} in the rest of this paper: $J(x,y^{*}(x))$ is denoted as $J(x)$, $J^{+}(x,y^{*}(x),{\lambda}(x))$ is denoted as $J^{+}(x)$, and $J^{0}(x,y^*(x),\lambda(x))$ is denoted as $J^{0}(x)$.
In part (ii), the computation of the directional derivative of $z(x)$ is given in Theorem \ref{th2} in Appendix \ref{proof_app}.
\begin{definition}
\label{def3}
Suppose that the KKT conditions hold at ${y}$ for $P(x)$ with the Lagrangian multipliers ${\lambda}$ and ${\nu}$.
The Strict Complementarity Slackness Condition (SCSC) holds at ${y}$ w.r.t. ${\lambda}$ for $P(x)$, if ${\lambda}_{j}>0$ for all ${j} \in J(x,{y})$.
\end{definition}
\begin{remark}
\label{remark_scsc}
The KKT conditions include the Complementarity Slackness Condition (CSC). The SCSC is stronger than the CSC, which only requires that ${\lambda}_{j} \geq 0$ for all ${j} \in J(x,{y})$.
\end{remark}
Theorem \ref{th1} states the conditions under which $y^{*}(x)$ is continuously differentiable and derives $\nabla y^{*}(x)$.
\begin{theorem}
\label{th1}
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold.
If the SCSC holds at $y^{*}(x)$ w.r.t. $\lambda(x)$, then $z(x)$ is continuously differentiable at $x$ and the gradient is computed as
\begin{equation}
\label{eq8}
\left[
\nabla_{x} y^{*}(x)^{\top},
\nabla_{x} \lambda_{J(x)}^{\top}(x),
\nabla_{x} \nu(x)^{\top}
\right]^{\top}
= -M_{+}^{-1}(x) N_{+}(x)
\end{equation}
and $\nabla_{x} \lambda_{{J(x)}^C}(x)=0$,
where $M_{+}(x) \triangleq $
$$
\left[\begin{array}{ccccccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x)}^{\top} & \nabla_{y} q^{\top} \\
\nabla_{y} p_{J^{+}(x)} & 0 & 0 \\
\nabla_y q & 0 & 0
\end{array}\right](x,y^*(x),\lambda(x),\nu(x))
$$
is nonsingular and $N_{+}(x)\triangleq$
$$
[\nabla_{x y}^{2} \mathcal{L}^{\top}, \nabla_{x} p_{J^{+}(x)}^{\top}, \nabla_{x} q^{\top}]^{\top}(x,y^*(x),\lambda(x),\nu(x)).
$$
\end{theorem}
Theorem \ref{th1} shows that, if $z(x)$ is not continuously differentiable, then the SCSC does not hold at $y^*(x)$ w.r.t. $\lambda(x)$.
Definition \ref{def3} implies that the SCSC holds at ${y}$ w.r.t. ${\lambda}$ for $P(x)$ if and only if $J^{0}(x) =\emptyset$.
It follows that if $y^*(x)$ is not continuously differentiable at $x$, then $J^0(x) \neq \emptyset$; that is, the non-differentiability of $y^{*}(x)$ can only occur at points with non-strictly active constraints.
Example \ref{example1} illustrates this claim.
\begin{example}
\label{example1}
Consider a bilevel optimization problem $\Phi(x)=y^*(x)$ and the lower-level problem $P(x)$: $y^*(x)= {\arg\min}_y \{(y-x^2)^2 : p_1(x,y)=-x-y \leq 0\}$, where $x$, $y \in \mathbb{R}$.
The analytical solution of $z(x)=[y^*(x), \lambda(x)]$ is given by: $y^*(x)=x^2$, $\lambda(x)=0$ when $x \in (-\infty,-1] \cup [0,+\infty)$; $y^*(x)=-x$, $\lambda(x)=-2x(1+x)$ when $x \in [-1,0]$.
Correspondingly, when $x \in (-1,0)$, $J(x)=\{1\}$, $J^+(x)=\{1\}$, $J^0(x)=\emptyset$; when $x \in (-\infty,-1) \cup (0,+\infty)$, $J(x)=\emptyset$, $J^+(x)=\emptyset$, $J^0(x)=\emptyset$; when $x \in \{ -1,0 \}$, $J(x)=\{1\}$, $J^+(x)=\emptyset$, $J^0(x)=\{1\}$.
As shown in Fig. \ref{ssss1}, $y^*(x)$ is continuously differentiable everywhere except when $J^0(x) \neq \emptyset$.
\end{example}
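Since the lower-level problem in Example \ref{example1} projects the unconstrained minimizer $x^2$ onto the feasible set $\{y \ge -x\}$, the stated piecewise solution can be checked numerically. The snippet below is a sanity check of the closed form, not part of the proposed algorithm:

```python
import numpy as np

def z(x):
    # Lower-level problem of Example 1: min_y (y - x^2)^2  s.t.  y >= -x.
    # The solution projects the unconstrained minimizer x^2 onto {y >= -x};
    # stationarity 2(y - x^2) - lam = 0 then yields the multiplier.
    y = max(x**2, -x)
    lam = 2.0 * (y - x**2)
    return y, lam

for x in np.linspace(-2, 1, 301):
    y, lam = z(x)
    if -1.0 <= x <= 0.0:
        # Constrained branch: y* = -x, lam = -2x(1 + x), as stated in Example 1.
        assert np.isclose(y, -x) and np.isclose(lam, -2 * x * (1 + x))
    else:
        # Unconstrained branch: y* = x^2, lam = 0.
        assert np.isclose(y, x**2) and np.isclose(lam, 0.0)
```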
\begin{figure}
\caption{Occurrence of non-differentiability. }
\label{ssss1}
\end{figure}
The computation of the gradient of $z(x)$ in \eqref{eq8} is derived from the implicit differentiation of the KKT conditions of problem $P(x)$, which is also used in \cite{fiacco1990sensitivity, amos2017optnet, agrawal2019differentiable}.
Compared with these papers, Theorem \ref{th1} directly determines $\nabla_{x} \lambda_{{J(x)}^C}(x)=0$ and excludes $\lambda_{{J(x)}^C}(x)$ from the computation of the inverse matrix in \eqref{eq8}, when $z(x)$ is continuously differentiable. Theorem \ref{th2} in Appendix \ref{proof_app} derives the directional derivative of $z(x)$ when it is not differentiable.
Consider a special case where the lower-level optimization problem $P(x)$ is unconstrained.
Since the SCSC is not needed anymore, the assumptions in Theorem \ref{th1} reduce to that $g$ is twice continuously differentiable and $g(x,y)$ is $\mu$-strongly-convex w.r.t. $y$ for ${x} \in \mathbb{R}^{d_x}$.
By Theorem \ref{th1}, the optimal solution $y^{*}(x)$ is continuously differentiable, the matrix $\nabla_{y}^{2} g({x}, {y})$ is non-singular, and the gradient is computed as
$\nabla y^{*}(x) =-[\nabla_{y}^{2} g({x}, {y})]^{-1} \nabla_{x y}^{2} g({x}, {y})$.
These results are well-known and widely used in unconstrained bilevel optimization analysis and applications \cite{pedregosa2016hyperparameter,franceschi2017forward, franceschi2018bilevel, ji2021bilevel}.
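As a sanity check of this reduced formula, consider the strongly convex quadratic $g(x,y)=\tfrac12\|y-Ax\|^2$ with $f(x,y)=\tfrac12\|y\|^2$ (illustrative choices of ours, not from the paper): here $y^*(x)=Ax$, and the implicit-differentiation formula reproduces the true Jacobian and, via the chain rule \eqref{compositiond}, the true $\nabla\Phi$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

# g(x, y) = 0.5 * ||y - A x||^2 is 1-strongly convex in y, with y*(x) = A x.
hess_yy = np.eye(3)    # nabla^2_y g
hess_xy = -A           # nabla^2_{xy} g, i.e. d/dx of nabla_y g = y - A x

grad_y_star = -np.linalg.inv(hess_yy) @ hess_xy
assert np.allclose(grad_y_star, A)        # matches the Jacobian of y*(x) = A x

# Upper level f(x, y) = 0.5 * ||y||^2, so Phi(x) = 0.5 * ||A x||^2 and the
# chain rule gives grad Phi = grad_x f + (grad y*)^T grad_y f = A^T (A x).
x = rng.standard_normal(4)
grad_phi = grad_y_star.T @ (A @ x)
assert np.allclose(grad_phi, A.T @ A @ x)
```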
\section{The Gradient Approximation Method}
In this section, we develop the gradient approximation method to efficiently solve problem \eqref{biopt}, whose objective function is
non-differentiable and non-convex.
First, we define the Clarke subdifferential (Section \ref{sectionA}) and efficiently approximate the Clarke subdifferential of the objective function
$\Phi(x)$ (Section \ref{sectionB}). Next, we propose the gradient approximation algorithm, provide its convergence guarantee (Section \ref{sectionC}), and present its implementation details (Section \ref{sectionD}).
\subsection{Clarke Subdifferential of $\Phi$}
\label{sectionA}
As shown in Section \ref{section_1} and also shown in \cite{10.1007/s10589-015-9795-8,liu2021investigating}, the objective function $\Phi\left(x\right)$ of problem (\ref{biopt}) is usually non-differentiable and non-convex.
To deal with the non-smoothness and non-convexity,
we introduce Clarke subdifferential and Clarke stationary point.
\begin{definition}[Clarke subdifferential and Clarke stationary point \cite{clarke1975generalized}]
\label{defc}
For a locally Lipschitz function $f: \mathbb{R}^{n} \rightarrow \mathbb{R}$, the Clarke subdifferential of $f$ at $x$ is defined by the convex hull of the limits of gradients of $f$ on sequences converging to $x$, i.e.,
$\bar{\partial} f(x) \triangleq \operatorname{conv}\left\{\lim _{j \rightarrow \infty} \nabla f\left(y^{j}\right):\left\{y^{j}\right\} \rightarrow x\right.$ where $f$ is differentiable at $y^{j}$ for all $\left.j \in \mathbb{N}\right\}$.
The Clarke $\epsilon \text {-subdifferential}$ of $f$ at $x$ is defined by $
\bar{\partial}_{\epsilon} f(x)\triangleq\operatorname{conv} \{ \bar{\partial} f(x^{\prime}): x^{\prime} \in \mathcal{B}(x,\epsilon) \}$.
A point $x$ is Clarke stationary for $f$ if $0 \in \bar{\partial} f(x)$.
\end{definition}
If $y^*$ is differentiable at $x$, we have $\bar{\partial} y^*(x)=\left\{ \nabla y^*(x) \right\}$
and $\bar{\partial} \Phi(x)=\{ \nabla_{x} f(x, y^{*}(x))+\nabla y^{*}(x)^{\top} \nabla_{y} f(x, y^{*}(x)) \}$;
otherwise,
$\bar{\partial} \Phi(x)= \{\nabla_{x} f(x, y^{*}(x))+w^{\top} \nabla_{y} f(x, y^{*}(x)) : w \in \bar{\partial} y^{*}(x) \}$.
Taking the functions in Example \ref{example1} and Fig. \ref{ssss1} as an example, $\bar{\partial}_{\epsilon} \Phi(-1)=\bar{\partial}_{\epsilon} y^{*}(-1)= \operatorname{conv} \{[-2-2\epsilon,-2] \cup \{-1\}\}=[-2-2\epsilon,-1]$, and $\bar{\partial}_{\epsilon} \Phi(0)=\bar{\partial}_{\epsilon} y^{*}(0)= \operatorname{conv} \{[0,2\epsilon] \cup \{-1\}\}=[-1,2\epsilon]$.
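In one dimension these sets can also be recovered numerically by sampling gradients in the ball and taking their convex hull, which is just an interval. The sketch below is an illustration for Example \ref{example1} only:

```python
import numpy as np

def grad_phi(x):
    # Phi(x) = y*(x) from Example 1: x^2 outside (-1, 0), -x inside (-1, 0).
    return -1.0 if -1.0 < x < 0.0 else 2.0 * x

def clarke_eps(x0, eps, n=10001):
    # In 1-D, the convex hull of sampled gradients is the interval [min, max].
    xs = np.linspace(x0 - eps, x0 + eps, n)
    gs = [grad_phi(x) for x in xs]
    return min(gs), max(gs)

eps = 0.1
print(clarke_eps(-1.0, eps))   # ~ (-2 - 2*eps, -1) = (-2.2, -1.0)
print(clarke_eps(0.0, eps))    # ~ (-1, 2*eps)     = (-1.0, 0.2)
```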
\subsection{Clarke Subdifferential Approximation}
\label{sectionB}
Gradient-based methods have been applied to convex and non-convex optimization problems \cite{hardt2016train,nemirovski2009robust}. The convergence requires that the objective function is differentiable.
If there exist points where the objective function is not differentiable, the probability for the algorithms to visit these points is non-zero and the gradients at these points are not defined \cite{bagirov2020numerical}.
Moreover, oscillation may occur even if the objective function is differentiable at all visited points \cite{bagirov2014introduction}.
To handle the non-differentiability, the gradient sampling method \cite{burke2005robust,kiwiel2007convergence,bagirov2020numerical} uses gradients in a neighborhood of the current estimate to approximate the Clarke subdifferential and determine the descent direction.
Specifically, the method samples a set of points inside the neighborhood $\mathcal{B}(x^0,\epsilon)$, selects the points where the objective function is differentiable, and then computes the convex hull of the gradients at the sampled points.
However, in problem (\ref{biopt}), the point sampling is highly computationally expensive.
For each sampled point $x^j$, to check the differentiability of $\Phi$, we need to solve the lower-level optimization $P(x^j)$ to obtain $y^*(x^j)$, $\lambda(x^j)$ and $\nu(x^j)$, and check the SCSC.
Moreover, after the points are sampled, the gradient on each point is computed by \eqref{eq8}.
As the dimension $d_x$ increases, the sampling number increases to ensure the accuracy of the approximation.
More specifically, as shown in \cite{kiwiel2007convergence}, the algorithm is convergent if the sampling number is at least $d_x+1$.
The above procedure is executed in each optimization iteration.
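As a concrete one-dimensional illustration of this idea (our own toy example on $f(x)=|x|$, not the method of this paper), sampling gradients around a kink makes the convex hull contain zero, so the minimum-norm element signals approximate Clarke stationarity:

```python
import numpy as np

def min_norm_in_hull(gs):
    # Minimum-norm element of conv(gs) for scalar gradients (the hull is [lo, hi]).
    lo, hi = min(gs), max(gs)
    if lo <= 0.0 <= hi:
        return 0.0        # 0 lies in the hull: approximate Clarke stationarity
    return lo if abs(lo) < abs(hi) else hi

f_grad = lambda x: float(np.sign(x))    # gradient of f(x) = |x| away from the kink

# Samples around the kink at 0 pick up both -1 and +1, so the sampled hull
# contains 0 and the step size would be driven to zero / the method stops.
samples = np.linspace(-0.1, 0.1, 9)
g = min_norm_in_hull([f_grad(x) for x in samples])
print(g)   # 0.0
```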
To address this computational challenge, we approximate the Clarke $\epsilon \text{-subdifferential}$ by a small number of gradients that represent the gradients at all points in the neighborhood.
The following propositions distinguish two cases: $\Phi$ is continuously differentiable on $\mathcal{B}(x^0,\epsilon)$ (Proposition \ref{prop2_0}) and it is not (Proposition \ref{prop3}).
\begin{proposition}
\label{prop2_0}
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold.
Consider $x^0 \in \mathbb{R}^{d_x}$. There is a sufficiently small $\epsilon>0$ such that,
if the SCSC holds at $y^{*}(x)$ w.r.t. $\lambda(x)$ for any $x \in \mathcal{B}(x^0,\epsilon)$, then $\nabla \Phi(x^0) \in \bar{\partial}_{\epsilon} \Phi(x^0)$ and
$$|\|\nabla \Phi(x^0)\| - d(0, \bar{\partial}_{\epsilon} \Phi(x^0))|< o(\epsilon).$$
\end{proposition}
Proposition \ref{prop2_0} shows that the gradient $\nabla \Phi$ at a single point $x^0$ can be used to approximate the Clarke $\epsilon \text {-subdifferential}$ $\bar{\partial}_{\epsilon} \Phi(x^0)$ and the approximation error is in the order of $\epsilon$.
Recall that the gradient $\nabla \Phi(x^0)$ can be computed by \eqref{compositiond} and \eqref{eq8}. Fig. \ref{ssss1.5} illustrates the approximation on the problem in Example \ref{example1}: since the SCSC holds at $y^{*}(x)$ and $\Phi(x)$ is continuously differentiable on $\mathcal{B}(x^0,\epsilon)$, the set $\bar{\partial}_{\epsilon} \Phi(x^0)=[2x^0-2\epsilon, 2x^0+2\epsilon]$ can be approximated by $\nabla \Phi(x^0)= 2x^0$, with approximation error $2\epsilon$.
The approximations of $\bar{\partial}_{\epsilon}\Phi(x^1)$ and $\bar{\partial}_{\epsilon}\Phi(x^2)$ can be done in an analogous way.
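As a numeric check of this single-gradient approximation (assuming the smooth branch $\Phi(x)=x^2$, which is consistent with the quantities $\nabla\Phi(x^0)=2x^0$ and $\bar{\partial}_{\epsilon}\Phi(x^0)=[2x^0-2\epsilon,2x^0+2\epsilon]$ quoted above; the numbers are illustrative):

```python
# Hypothetical smooth branch consistent with the quantities quoted in the
# text: Phi(x) = x**2, so grad Phi(x) = 2x, and on a ball where Phi is C^1
# the eps-subdifferential is the interval of nearby gradients.
x0, eps = 1.0, 0.1
sub_lo, sub_hi = 2 * x0 - 2 * eps, 2 * x0 + 2 * eps  # eps-subdifferential interval
g = 2 * x0                                           # single-gradient approximation

# distance from 0 to the interval [sub_lo, sub_hi]
d0 = max(sub_lo, -sub_hi, 0.0)
err = abs(abs(g) - d0)   # | ||grad Phi(x0)|| - d(0, eps-subdifferential) |
print(err)               # bounded by 2*eps, matching the error quoted in the text
```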
\begin{figure*}
\caption{The SCSC holds on $\mathcal{B}(x^0,\epsilon)$, $\mathcal{B}(x^1,\epsilon)$, and $\mathcal{B}(x^2,\epsilon)$.}
\label{ssss1.5}
\caption{Subsets inside $\mathcal{B}(x^0,\epsilon)$.}
\label{ssss2}
\caption{Approximate $\bar{\partial}_{\epsilon} \Phi(x^0)$ by the representative gradients.}
\label{ssss3}
\end{figure*}
Now consider the case where $\Phi(x)$ is not continuously differentiable at some points in $\mathcal{B}(x^0,\epsilon)$.
Define the set $I^\epsilon(x^0)$ which contains all $j$ such that there exist $x^{\prime}$, $x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$ with
$
j\in J^+(x^{\prime})^C \text{ and } j\in J^+(x^{\prime\prime})
$.
Define the set $I_+^{\epsilon}(x^0)$ which contains all $j$ such that $j\in J^+(x)$ for any $x \in \mathcal{B}(x^0,\epsilon)$.
If $I^{\epsilon}(x^0)$ is not empty, there exists a point $x \in \mathcal{B}(x^0,\epsilon)$ such that the SCSC does not hold at $y^*(x)$.
The power set of $I^\epsilon(x^0)$ partitions $\mathcal{B}(x^0,\epsilon)$ into a number of subsets, where $\Phi(x)$ and $y^*(x)$ are continuously differentiable in each subset.
An illustration on the problem in Example \ref{example1} is shown in Fig. \ref{ssss2}.
The point $x^{\prime}=-1$ belongs to $(x^0-\epsilon, x^0+\epsilon)$, and the SCSC does not hold at $y^*(x^{\prime})$. Notice that $I^{\epsilon}_{+}(x^0)=\emptyset$ and $I^\epsilon(x^0)=\{1\}$, whose power set is $\{S_{(1)},S_{(2)}\}$ with $S_{(1)}=\emptyset$ and $S_{(2)}=\{1\}$.
Accordingly, $\mathcal{B}(x^0,\epsilon)$ is partitioned into two subsets: the subset where the constraint $p_1$ is inactive (blue side of the ball), which corresponds to $S_{(1)}$, and the subset where the constraint $p_1$ is strictly active (red side of the ball), which corresponds to $S_{(2)}$. Their boundary is the point $x^{\prime}$, where the constraint $p_1$ is non-strictly active.
It can be seen that $y^*(x)$ is continuously differentiable on each subset, and the gradient variations within a subset are small when $\epsilon$ is small. In contrast, the gradient variations between the two subsets are large.
Inspired by Proposition \ref{prop2_0}, we compute a representative gradient to approximate $\nabla y^*(x)$ inside each subset of $\mathcal{B}(x^0,\epsilon)$.
Now we proceed to generalize the above idea.
Recall that $\nabla \Phi(x)$ is computed by \eqref{compositiond} and $f$ is twice continuously differentiable.
Define
\begin{equation}
\begin{aligned}
\label{eq17}
G(x^{0},\epsilon)\triangleq\{ \nabla_{x} f\left(x^0, y^{*}\left(x^0\right)\right)+{w}^S(x^0)^{\top} \\
\nabla_{y} f\left(x^0, y^{*}\left(x^0\right)\right): S \subseteq I^\epsilon(x^0)\},
\end{aligned}
\end{equation}
where
${w}^S(x^0)$ is obtained by extracting the first $d_y$ rows from the matrix $-M^{S}_{\epsilon}(x^0,y^*(x^0))^{-1} N^{S}_{\epsilon}(x^0,y^*(x^0))$, with
$$M^{S}_{\epsilon} \triangleq
\left[\begin{array}{ccccccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_y p_{I^{\epsilon}_{+}(x^0)}^{\top} & \nabla_{y} q^{\top} & \nabla_y p^{\top}_{S}\\
\nabla_y p_{I^{\epsilon}_{+}(x^0)} & 0 & 0 &0\\
\nabla_y q & 0 & 0 &0\\
\nabla_y p_{S} & 0 & 0 &0
\end{array}\right],$$
and
$N^{S}_{\epsilon} \triangleq \left[\nabla_{x y}^{2}\mathcal{L}^{\top}, \nabla_{x} p_{I^{\epsilon}_{+}(x^0)}^{\top}, \nabla_{x} q^{\top}, \nabla_{x} p_{S}^{\top} \right]^{\top}$.
Here, $S$ is a subset of $I^\epsilon(x^0)$, and $w^S(x^0)$ is the representative gradient that approximates $\nabla y^*(x)$ inside the subset of $\mathcal{B}(x^0,\epsilon)$ corresponding to $S$. Proposition \ref{prop3} shows that the Clarke $\epsilon$-subdifferential $\bar{\partial}_{\epsilon} \Phi(x^0)$ can be approximated by the representative gradient set $G(x^{0},\epsilon)$, with an approximation error on the order of $\epsilon$.
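To make the construction concrete, the following sketch assembles $M^{S}_{\epsilon}$ and $N^{S}_{\epsilon}$ for a hypothetical scalar lower-level problem $\min_y \frac{1}{2}(y-x)^2$ s.t. $-y \leq 0$ (so $y^*(x)=\max(x,0)$, with $I^{\epsilon}_{+}$ empty and no equality constraints; this is not the paper's Example) and recovers the branch gradients of $y^*$:

```python
import numpy as np

# Hypothetical lower level: P(x): min_y 0.5*(y - x)**2  s.t.  p(x,y) = -y <= 0,
# so y*(x) = max(x, 0) with a kink at x = 0.  Near x0 = 0 the two subsets
# S = {} (inactive) and S = {1} (strictly active) give the two representative
# gradients of y*.
def w_S(active):
    H = np.array([[1.0]])          # Hessian of the Lagrangian in y
    Nxy = np.array([[-1.0]])       # cross derivative of the Lagrangian
    if not active:                 # S = {}: unconstrained KKT block
        M, N = H, Nxy
    else:                          # S = {1}: append the active constraint row
        gy_p = np.array([[-1.0]])  # grad_y p
        gx_p = np.array([[0.0]])   # grad_x p
        M = np.block([[H, gy_p.T], [gy_p, np.zeros((1, 1))]])
        N = np.vstack([Nxy, gx_p])
    return (-np.linalg.solve(M, N))[0, 0]  # first d_y rows -> dy*/dx

print(w_S(False), w_S(True))  # gradients of the two branches of y*(x) = max(x, 0)
```

The two returned values are exactly the slopes of the two branches of $y^*(x)=\max(x,0)$: $1$ on the inactive side and $0$ on the strictly active side.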
\begin{proposition}
\label{prop3}
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold.
Consider $x^0 \in \mathbb{R}^{d_x}$ and a sufficiently small $\epsilon>0$, and assume there exists $x \in \mathcal{B}(x^0,\epsilon)$ at which $y^{*}$ is not continuously differentiable.
Then, the following inequality holds
for any $z \in \mathbb{R}^{d_x}$,
$$| d(z, \operatorname{conv} G(x^{0},\epsilon)) - d(z, \bar{\partial}_{\epsilon} \Phi(x^0))|< o(\epsilon).$$
\end{proposition}
The computation of the representative gradient $w^S(x^0)$ of Example \ref{example1} is demonstrated in Fig. \ref{ssss3}.
Since $x^0$ is near the boundary of two subsets, Proposition \ref{prop3} employs $w^{S_{(1)}}(x^0) = \nabla y^{*}(x^0)$ to approximate the gradients of the subset with the inactive constraint (blue side), and $w^{S_{(2)}}(x^0) =\nabla \Tilde{y}^{*}(x^0)$ to approximate the gradients in the subset with the strictly active constraint (red side). The twice-differentiable function $\Tilde{y}^{*}(x)$ is an extension of $y^{*}(x)$ (refer to the definition of $x^I(\cdot)$ in (12.8) of \cite{dempe1998implicit}).
The gradients $\nabla {y}^{*}(x^0)$ and $\nabla \Tilde{y}^{*}(x^0)$ are computed in the matrices $-{M^{S_{(1)}}_{\epsilon}}^{-1} N^{S_{(1)}}_{\epsilon}$ and $-{M^{S_{(2)}}_{\epsilon}}^{-1} N^{S_{(2)}}_{\epsilon}$, respectively.
The representative gradients $w^{S_{(1)}}(x^0)$ and $w^{S_{(2)}}(x^0)$ are then used to approximate $\bar{\partial}_{\epsilon} y^*(x^0)$.
In this example, $G(x^0,\epsilon)=\{2x^0,-1\}$ and $\bar{\partial}_{\epsilon} \Phi(x^0)= [2x^0-2\epsilon,-1]$, where the kink $x^{\prime}=-1$ lies in $[x^0-\epsilon,x^0+\epsilon]$.
The approximation error $| d(z, \operatorname{conv} G(x^{0},\epsilon)) - d(z, \bar{\partial}_{\epsilon} \Phi(x^0))|$ is smaller than or equal to $2\epsilon$ for any $z$.
\begin{algorithm}[bt]
\caption{Gradient Approximation Method}
\label{alg:framework0}
\begin{algorithmic}[1]
\REQUIRE Initial point $x^{0}$;
Initial approximation radius $\epsilon_{0} \in(0, \infty)$; Initial stationarity target $\nu_{0} \in(0, \infty)$; Line search parameters $(\beta, \gamma) \in(0,1) \times(0,1)$; Termination tolerances $\left(\epsilon_{\mathrm{opt}}, \nu_{\mathrm{opt}}\right) \in[0, \infty) \times[0, \infty)$; Discount factors $\left(\theta_{\epsilon}, \theta_{\nu}\right) \in(0,1) \times(0,1)$.
\FOR{$k \in \mathbb{N}$}
\STATE \label{line2} Solve the lower-level optimization problem $P(x^k)$ and obtain $y^{*}(x^k)$, $\lambda(x^k)$, and $\nu(x^k)$
\STATE \label{line3} Check the differentiability of $y^{*}$ on $\mathcal{B}\left(x^{k}, \epsilon_{k}\right)$ by \eqref{check_dif} and \eqref{check}
\IF{$y^{*}$ is continuously differentiable on $\mathcal{B}\left(x^{k}, \epsilon_{k}\right)$}
\STATE \label{line5} Compute $g^{k}=\nabla \Phi(x^k)$ by \eqref{compositiond}
\ELSE
\STATE \label{line7} Compute $G(x^{k},\epsilon_k)$ by \eqref{eq17}
\STATE \label{line8} $\bar{\partial}_{\epsilon_k}\Phi(x^k )=\operatorname{conv} G(x^{k},\epsilon_k)$
\STATE \label{line9} Compute $g^{k}= \arg\min \{ \|g\|: g \in \operatorname{conv} G(x^{k},\epsilon_k) \}$
\ENDIF
\IF{$\left\|g^{k}\right\| \leq \nu_{\mathrm{opt}}$ and $\epsilon_{k} \leq \epsilon_{\mathrm{opt}}$}
\STATE {\bfseries Output:} $x^{k}$ and \textbf{terminate}
\ENDIF
\IF{$\left\|g^{k}\right\| \leq \nu_{k}$}
\STATE $\nu_{k+1} \leftarrow \theta_{\nu} \nu_{k}$, $\epsilon_{k+1} \leftarrow \theta_{\epsilon} \epsilon_{k}$, and $t_{k} \leftarrow 0$
\ELSE
\STATE \label{line17} Compute $t_k$ by the line search:
$t_{k} = \max \{t \in\{\gamma, \gamma^{2}, \ldots\}: \Phi(x^{k}-t g^{k})< \Phi(x^{k}) -\beta t\|g^{k}\|^{2}\}$
\STATE $\nu_{k+1} \leftarrow \nu_{k}$ and $\epsilon_{k+1} \leftarrow \epsilon_{k}$
\ENDIF
\STATE $x^{k+1} \leftarrow x^{k}-t_{k} g^{k}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{The Gradient Approximation Algorithm}
\label{sectionC}
Our proposed gradient approximation algorithm, summarized in Algorithm \ref{alg:framework0}, is a line search algorithm. It uses the approximation of the Clarke subdifferential as the descent direction for line search.
In iteration $k$, we first solve the lower-level optimization problem $P(x^k)$ to obtain $y^{*}(x^k)$, $\lambda(x^k)$ and $\nu(x^k)$. To reduce the computational complexity, the solution in iteration $k$ serves as the initial point for solving $P(x^{k+1})$ in iteration $k+1$. Second, we check the differentiability of $y^{*}$ on $\mathcal{B}\left(x^{k}, \epsilon_{k}\right)$; the implementation details are given in Section \ref{sectionD1}. If $y^{*}$ is continuously differentiable on $\mathcal{B}\left(x^{k}, \epsilon_{k}\right)$, we use $\nabla \Phi(x^k)$ to approximate $\bar{\partial}_{\epsilon} \Phi(x^k)$, which corresponds to Proposition \ref{prop2_0}. Otherwise, $G(x^{k},\epsilon_k)$ is used, which corresponds to Proposition \ref{prop3}. The details of computing $G(x^{k},\epsilon_k)$ are given in \eqref{eq17} and Section \ref{sectionD2}.
Third, the line search direction $g^k$ is the minimum-norm vector in the convex hull of $G(x^{k},\epsilon_k)$.
During the optimization, as the iteration number $k$ increases, the approximation radius $\epsilon_k$ decreases. According to Propositions \ref{prop2_0} and \ref{prop3}, the approximation error of the Clarke subdifferential therefore diminishes.
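A minimal sketch of the resulting loop, on a hypothetical one-dimensional nonsmooth objective with kinks (chosen only so that gradients are easy to see; it is not the paper's Example), combining the minimum-norm direction with the backtracking line search:

```python
# Toy nonsmooth objective: a stand-in for the value function Phi.
phi = lambda x: max(x * x, -x)                   # kinks at x = -1 and x = 0
grad = lambda x: 2 * x if x * x >= -x else -1.0  # gradient where phi is smooth

def min_norm(gs):
    # Minimum-norm element of the convex hull of finitely many scalars:
    # in 1-D the hull is an interval, so project 0 onto it.
    lo, hi = min(gs), max(gs)
    return min(max(0.0, lo), hi)

x, eps, beta, gamma = -2.0, 0.2, 0.5, 0.5
for _ in range(100):
    # crude stand-in for conv G(x, eps): gradients at the ball's endpoints
    g = min_norm([grad(x - eps), grad(x + eps)])
    if abs(g) < 1e-8:
        if eps < 1e-6:
            break                                # approximately Clarke stationary
        eps *= 0.5                               # shrink the approximation radius
        continue
    t = 1.0
    while phi(x - t * g) >= phi(x) - beta * t * g * g:
        t *= gamma                               # backtracking line search
    x -= t * g
print(x)  # lands near the Clarke-stationary point x = 0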
We next characterize the convergence of Algorithm \ref{alg:framework0}.
\begin{theorem}
\label{th_converge}
{
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold and $\Phi(x)$ is lower bounded on $\mathbb{R}^{d_x}$.
Let $\{x^{k}\}$ be the sequence generated by Algorithm \ref{alg:framework0} with $\nu_{\mathrm{opt}}=\epsilon_{\mathrm{opt}}=0$. Then,
\begin{itemize}
\item[(\romannumeral1)] For each $k$, the line search in line \ref{line17} has a solution $t_k$.
\item[(\romannumeral2)] $\lim_{k\rightarrow\infty} \nu_{k} = 0$, $\lim_{k\rightarrow\infty} \epsilon_{k} = 0$.
\item[(\romannumeral3)] $\liminf_{k\rightarrow\infty} d(0, \bar{\partial} \Phi(x^k))= 0$.
\item[(\romannumeral4)] Every limit point of $\{x^{k}\}$ is Clarke stationary for $\Phi$.
\end{itemize}}
\end{theorem}
If the objective function $\Phi$ is non-convex but smooth, property (iii) reduces to $\liminf_{k\rightarrow\infty}\|\nabla \Phi(x^k)\| = 0$, which is a widely used convergence criterion for smooth and non-convex optimization \cite{nesterov1998introductory,jin2021nonconvex}.
A sufficient condition for the existence of a limit point of $\{x^k\}$ is that the sequence is bounded.
\subsection{Implementation Details}
\label{sectionD}
\subsubsection{Check differentiability of $y^{*}$ on $\mathcal{B}\left(x^{0}, \epsilon_{0}\right)$}
\label{sectionD1}
We propose Proposition \ref{prop2} to check differentiability of $y^{*}$ on $\mathcal{B}\left(x^{0}, \epsilon_{0}\right)$, which is required by line \ref{line3} of Algorithm \ref{alg:framework0}.
\begin{proposition}
\label{prop2}
Consider $x^0 \in \mathbb{R}^{d_x}$ and $\epsilon>0$.
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold.
Then, Lipschitz constants of functions $\Phi(x)$, $\lambda_j(x)$ and $p_j(x,y^*(x))$ on $\mathcal{B}(x^0,\epsilon)$ exist and are denoted by
$l_{\Phi}(x^0,\epsilon)$, $l_{\lambda_j}(x^0,\epsilon)$ and $l_{p_j}(x^0,\epsilon)$, respectively.
Further, suppose the SCSC holds at $y^{*}(x^0)$ w.r.t. $\lambda(x^0)$. If there exists $\epsilon_1 > 0$ such that
\begin{equation}
\label{check_dif}
\begin{aligned}
&\lambda_j(x^0) > l_{\lambda_j}(x^0,\epsilon_1) \epsilon_1 \ \text{ for all } j \in J(x^0), \\
&p_j(x^0,y^*(x^0)) < -l_{p_j}(x^0,\epsilon_1) \epsilon_1 \ \text{ for all } j \not\in J(x^0),
\end{aligned}
\end{equation}
then $y^{*}$ is continuously differentiable on $\mathcal{B}(x^0,\epsilon_1)$.
\end{proposition}
Proposition \ref{prop2} shows that $y^{*}$ is continuously differentiable on a neighborhood of $x^0$ if, for every $j$, either (i) $\lambda_j$ is bounded away from zero by a non-trivial margin when the constraint $p_j(x^0,y^*(x^0))$ is active, or (ii) the constraint $p_j(x^0,y^*(x^0))$ is satisfied with a non-trivial margin when it is inactive.
For case (i), $\lambda_j(x)>0$ and the constraint is strictly active for all $x \in \mathcal{B}(x^0,\epsilon)$;
for case (ii), $p_j(x,y^*(x))<0$ and the constraint is inactive for all $x \in \mathcal{B}(x^0,\epsilon)$.
As an illustration on the problem in Example \ref{example1}, shown in Fig. \ref{ssss1.5}, $y^{*}$ is continuously differentiable on $\mathcal{B}(x^0,\epsilon)$, $\mathcal{B}(x^1,\epsilon)$ and $\mathcal{B}(x^2,\epsilon)$, and the constraint is inactive or strictly active in each ball.
{\color{black}
We evaluate the differentiability of $y^*(x)$ and $\Phi(x)$ on $\mathcal{B}(x^0,\epsilon)$ by Proposition \ref{prop2}.
In particular, if \eqref{check_dif} is satisfied, we treat $y^*$ and $\Phi$ as continuously differentiable on $\mathcal{B}(x^0,\epsilon)$; otherwise, we assume there exists $x \in \mathcal{B}(x^0,\epsilon)$ at which $y^*$ and $\Phi$ are not continuously differentiable.
The Lipschitz constants $l_{\lambda_j}(x^0,\epsilon)$ and $l_{p_j}(x^0,\epsilon)$ are computed as
\begin{equation}
\label{check}
\begin{aligned}
&l_{\lambda_j}(x^0,\epsilon)= \| \nabla \lambda_j(x^0)\|+\delta , \\
&l_{p_j}(x^0,\epsilon)= \|\nabla_x p_j(x^0,y^*(x^0))+ \\ & \quad \quad\quad \nabla y^{*}(x^0)^{\top}\nabla_y p_j(x^0,y^*(x^0))\|+\delta,
\end{aligned}
\end{equation}
where $\delta$ is a small parameter, and $\nabla_x p_j(x^0,y^*(x^0))$ and $\nabla \lambda_j(x^0)$ are given in \eqref{eq8}.
Here, for a function $f$, we approximate its Lipschitz constant on $\mathcal{B}(x^0,\epsilon)$, which is defined as $l_{f}(x^0,\epsilon) \triangleq \sup_x \{ \| \nabla f(x)\|: x \in \mathcal{B}(x^0,\epsilon)\}$, as $l_{f}(x^0,\epsilon) \approx \| \nabla f(x^0)\|+\delta$.
As $\epsilon$ decreases, $f$ on $\mathcal{B}(x^0,\epsilon)$ approaches an affine function, and the approximation error of $l_{f}(x^0,\epsilon)$ therefore decreases.}
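As a hedged sketch of the check in \eqref{check_dif} (the multipliers, constraint values, and Lipschitz constants below are illustrative numbers, not outputs of \eqref{check}):

```python
import numpy as np

# Given KKT multipliers lambda_j(x0), constraint values p_j(x0, y*(x0)) and
# their (approximate) Lipschitz constants on B(x0, eps), decide whether y*
# can be treated as continuously differentiable on the ball.
def is_smooth_on_ball(lam, pvals, l_lam, l_p, eps, tol=0.0):
    active = pvals >= -tol
    strict = lam[active] > l_lam[active] * eps        # strictly active with margin
    inactive = pvals[~active] < -l_p[~active] * eps   # inactive with margin
    return bool(strict.all() and inactive.all())

lam = np.array([0.8, 0.0, 0.0])    # multiplier of the single active constraint is 0.8
pv  = np.array([0.0, -0.5, -0.02]) # constraints 2 and 3 are inactive
l_l = np.array([1.0, 1.0, 1.0])
l_p = np.array([1.0, 1.0, 1.0])
print(is_smooth_on_ball(lam, pv, l_l, l_p, eps=0.1))   # margin 0.02 < 0.1 -> False
print(is_smooth_on_ball(lam, pv, l_l, l_p, eps=0.01))  # all margins hold  -> True
```

Shrinking $\epsilon$ makes the margin test easier to pass, which matches the algorithm's behavior of shrinking $\epsilon_k$ over iterations.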
\subsubsection{Computation of $G(x^{0},\epsilon)$}
\label{sectionD2}
To compute $G(x^{0},\epsilon)$ in line \ref{line7} of Algorithm \ref{alg:framework0}, we need to compute the sets $I_{+}^\epsilon(x^0)$ and $I^\epsilon(x^0)$ defined in Proposition \ref{prop3}.
Similar to the idea in Proposition \ref{prop2},
we evaluate $I_{+}^\epsilon(x^0)$ and $I^\epsilon(x^0)$ as
\begin{align}
\label{check_I}
& I_+^\epsilon(x^0) = \left\{ j \in J(x^0): \lambda_j(x^0) > l_{\lambda_j}(x^0,\epsilon) \epsilon \right\}, \nonumber \\
& I^\epsilon_{-}(x^0) = \left\{ j \not\in J(x^0): p_j(x^0,y^*(x^0)) < -l_{p_j}(x^0,\epsilon) \epsilon \right\}, \nonumber \\
& I^\epsilon(x^0) = \left\{ j : j \not\in I^\epsilon_+(x^0) \cup I^\epsilon_{-}(x^0) \right\}.
\end{align}
Recall that the KKT conditions hold at $y^*(x^0)$ for problem $P(x^0)$, then for any $x \in \mathcal{B}(x^0,\epsilon)$, $p_j(x,y^*(x))=0$ for $j \in I_+^\epsilon(x^0)$ and $\lambda_j(x)=0$ for $j \in I_-^\epsilon(x^0)$.
Here, we also use $l_{\lambda_j}$ and $l_{p_j}$ given in \eqref{check} as the Lipschitz constants.
When $y^*$ and $\lambda$ are not differentiable at $x^0$, we sample a point $x^{\prime}$ near $x^0$ such that $y^*$ and $\lambda$ are differentiable at $x^{\prime}$, then replace $\nabla \lambda(x^0)$ and $\nabla y^*(x^0)$ in \eqref{check} by $\nabla \lambda(x^{\prime})$ and $\nabla y^*(x^{\prime})$.
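As an illustrative sketch of the index-set computation in \eqref{check_I} (the numbers are made up; the Lipschitz constants would come from \eqref{check}):

```python
import numpy as np

def index_sets(lam, pvals, l_lam, l_p, eps):
    J = set(np.flatnonzero(pvals >= 0))                     # active constraints
    I_plus = {j for j in J if lam[j] > l_lam[j] * eps}      # strictly active on the ball
    I_minus = {j for j in range(len(pvals))
               if j not in J and pvals[j] < -l_p[j] * eps}  # inactive on the ball
    I_eps = set(range(len(pvals))) - I_plus - I_minus       # undecided: may switch
    return I_plus, I_minus, I_eps

lam = np.array([0.8, 0.0, 0.001])  # constraint 3 is active with a tiny multiplier
pv  = np.array([0.0, -0.5, 0.0])
ones = np.ones(3)
print(index_sets(lam, pv, ones, ones, eps=0.1))
```

Constraint 3 is non-strictly active, so it lands in $I^\epsilon(x^0)$ and generates the subsets $S$ used in $G(x^0,\epsilon)$.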
\section{Experiments}
\subsection{Hyperparameter Optimization}
Hyperparameter optimization has been widely studied \cite{pedregosa2016hyperparameter,franceschi2017forward,ji2021bilevel}.
However, existing methods cannot handle hyperparameter optimization of constrained learning problems, such as support vector machine (SVM) classification \cite{cortes1995support} and constrained reinforcement learning \cite{achiam2017constrained,chen2021primal,xu2021crpo}.
We apply the proposed algorithm to hyperparameter optimization of constrained learning.
\begin{figure}
\caption{Loss and accuracy vs.\ running time in hyperparameter optimization of linear and kernelized SVM.}
\label{fig:ho}
\end{figure}
\subsubsection{Hyperparameter Optimization of SVM}
We optimize the hyperparameters in the SVM optimization, i.e., the penalty terms on the separation violations.
We conduct the experiment on linear SVM and kernelized SVM with the diabetes dataset in \cite{Dua2019}.
To the best of our knowledge, this is the first work to solve hyperparameter optimization for SVM.
We provide details of the problem formulation and the implementation setting in Appendix \ref{A1}.
As shown in Fig. \ref{fig:ho}, the loss nearly converges for both linear and kernelized SVM, and the final test accuracy is much better than that of the randomly selected hyperparameters used as the initial point of the optimization.
\subsubsection{Data Hyper-Cleaning}
Data hyper-cleaning \cite{franceschi2017forward, shaban2019truncated} is to train a classifier in a setting where the labels of training data are corrupted with a probability $p$ (i.e., the corruption rate).
We formulate the problem as a hyperparameter optimization of SVM and conduct experiments on a breast cancer dataset provided in \cite{Dua2019}.
The problem formulation and the implementation setting are provided in Appendix \ref{A1}.
We compare our gradient approximation method with the direct gradient descent used in \cite{amos2017optnet,lee2019meta}.
Fig. \ref{fig:dataclean} shows that our method converges faster than the benchmark in terms of loss and accuracy in both the training and test stages. Moreover, both methods achieve a test accuracy of $96.2\%$ using the corrupted data ($p=0.4$), comparable to the $96.5\%$ test accuracy of an SVM model trained on uncorrupted data.
\begin{figure}
\caption{Comparison of gradient descent (GD) and the gradient approximation method (GAM) in data hyper-cleaning with corruption rate $p=0.4$. Left: training and test losses of GD and GAM vs.\ running time; right: training and test accuracy of GD and GAM with $p=0.4$, and training and test accuracy with $p=0$, vs.\ running time.}
\label{fig:dataclean}
\end{figure}
\begin{figure}
\caption{Comparison of MetaOptNet and the gradient approximation method (GAM). For each dataset, left: training loss vs.\ running time; right: test accuracy vs.\ running time.}
\label{fig:meta-training-shot}
\end{figure}
\subsection{Meta-Learning}
Meta-learning approaches for few-shot learning have been formulated as bilevel optimization problems in \cite{rajeswaran2019meta,lee2019meta, ji2021bilevel}. In particular, the problem in MetaOptNet \cite{lee2019meta} has the form of problem \eqref{biopt} with the lower-level constraints. However, its optimization does not explicitly consider the non-differentiability of the objective function and cannot guarantee convergence.
In the experiment, we compare our algorithm with the optimization in MetaOptNet on datasets CIFAR-FS \cite{R2D2} and FC100 \cite{TADAM}, which are widely used for few-shot learning. Appendix \ref{A2} provides details of
the problem formulation and the experiment setting.
Fig. \ref{fig:meta-training-shot} compares our gradient approximation method and the direct gradient descent in MetaOptNet \cite{lee2019meta}.
The two algorithms share all training configurations, including
the network structure, the learning rate in each epoch and the batch size.
For both CIFAR-FS
and FC100 datasets, our method converges faster
than the optimization in MetaOptNet in terms of the training loss and test accuracy, and achieves a higher final test accuracy.
Note that the only difference between the two algorithms in this experiment is the computation of the descent direction. The result shows that the Clarke subdifferential approximation in our algorithm provides a better descent direction than the plain gradient.
This is consistent with Proposition \ref{prop3}, where a set of representative gradients, rather than a single gradient, is more suitable for approximating the Clarke subdifferential.
More comparison results with other meta-learning approaches are given in Appendix \ref{A2}.
\section{Conclusion}
We develop a gradient approximation method for bilevel optimization where the lower-level problem is convex with equality and inequality constraints and the upper-level problem is non-convex. The proposed method efficiently approximates the Clarke subdifferential of the non-smooth objective function and has a theoretical convergence guarantee. Our experiments validate the theoretical analysis and demonstrate the effectiveness of the algorithm.
\onecolumn
\appendix
\noindent {\Large \textbf{Supplementary Materials}}
\section{Implementation Supplement}
\subsection{Computation of gradient matrices}
In lines \ref{line5} and \ref{line7} of Algorithm \ref{alg:framework0}, we need to compute the gradient matrices $\nabla \Phi(x^k)$ and the set of gradients $G(x^{k},\epsilon_k)$.
Notice that both $-M_{+}^{-1}(x) N_{+}(x)$ in the computation of $\nabla \Phi(x^k)$ in \eqref{eq8} and $-M^{S}_{\epsilon}(x^0,y^*(x^0))^{-1} N^{S}_{\epsilon}(x^0,y^*(x^0))$ in the computation of $G(x^{0},\epsilon)$ in \eqref{eq17} have a form of
\begin{equation}
\label{form+}
-\left[\begin{array}{ccccccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} r^{\top}\\
\nabla_{y} r & 0
\end{array}\right]^{-1}
\left[\begin{array}{c}
\nabla_{x y}^{2} \mathcal{L}\\
\nabla_{x} r
\end{array}\right],
\end{equation}
where $\nabla_{y}^{2} \mathcal{L}$ is positive definite at $(y^*(x), \lambda(x), \nu(x), x)$ for any $x$ (shown in the proof (ii) of Theorem \ref{th2} in Appendix).
We can compute \eqref{form+} as follows. First, as $\nabla_{y}^{2} \mathcal{L}$ is positive definite, we can use the conjugate gradient (CG) method \cite{hestenes1952methods} to compute $A = \nabla_{y}^{2} \mathcal{L}^{-1} \nabla_{x y}^{2} \mathcal{L}$ and $B= \nabla_{y}^{2} \mathcal{L}^{-1} \nabla_{y} {r}^{\top}$. Second, \eqref{form+} can be written as
\begin{equation}
\label{form1+}
\left[\begin{array}{c}
-A+B(\nabla_{y}{r} B)^{-1} (\nabla_{y}{r} A - \nabla_{x} r) \\
-(\nabla_{y}{r} B)^{-1} (\nabla_{y}{r} A - \nabla_{x} r)
\end{array}\right].
\end{equation}
Let the number of strictly active constraints in $P(x)$ be $m^{+}$, the number of non-strictly active constraints be $m^{-}$, and let $n$ be the number of equality constraints.
Then, $r(x,y)$ is a vector function with at most $n+m^{+}+m^{-}$ dimensions.
The dimension of $\nabla_{y}^{2} \mathcal{L}$ is $d_y$. In machine learning applications, $d_x$ is usually large and $n+m^{+}+m^{-}$ is relatively small. It is shown in \cite{ji2021bilevel} that the computation of $A$ and $B$ in the first step is achievable, and thus so is the computation of \eqref{form1+}.
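As a sketch of this two-step evaluation (with random stand-in matrices playing the roles of $\nabla^2_y\mathcal{L}$, $\nabla_y r$, $\nabla^2_{xy}\mathcal{L}$ and $\nabla_x r$), the CG-based computation of \eqref{form1+} can be cross-checked against a direct KKT solve:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
dy, dx, m = 20, 5, 3
Q = rng.normal(size=(dy, dy))
H = Q @ Q.T + dy * np.eye(dy)           # positive definite Hessian block
R = rng.normal(size=(m, dy))            # constraint Jacobian grad_y r
Nxy = rng.normal(size=(dy, dx))         # cross derivative block
Rx = rng.normal(size=(m, dx))           # grad_x r

# matrix-free solves with H via conjugate gradients (never form H^{-1})
Hop = LinearOperator((dy, dy), matvec=lambda v: H @ v, dtype=H.dtype)
solve_cols = lambda B: np.column_stack([cg(Hop, b, atol=1e-12)[0] for b in B.T])
A = solve_cols(Nxy)                     # A = H^{-1} Nxy
B = solve_cols(R.T)                     # B = H^{-1} R^T

S = np.linalg.solve(R @ B, R @ A - Rx)  # (R B)^{-1} (R A - Rx)
top = -A + B @ S                        # first block row of the formula
bottom = -S                             # second block row

# cross-check against the direct KKT solve
K = np.block([[H, R.T], [R, np.zeros((m, m))]])
ref = -np.linalg.solve(K, np.vstack([Nxy, Rx]))
print(np.allclose(np.vstack([top, bottom]), ref, atol=1e-3))
```

This only requires $d_x + m$ CG solves with the positive definite block plus one small $m \times m$ solve, avoiding the factorization of the full indefinite KKT matrix.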
Paper \cite{amos2017optnet} provides a highly efficient solver to compute the gradient of the solution of a quadratic program, which exploits fast GPU-based batch solves within a primal-dual interior point method. The tool can also be used to compute $-M_{+}^{-1}(x) N_{+}(x)$ and $-M^{S}_{\epsilon}(x^0,y^*(x^0))^{-1} N^{S}_{\epsilon}(x^0,y^*(x^0))$ in Algorithm \ref{alg:framework0}, when the lower-level optimization problem is a quadratic program.
\subsection{Satisfaction of Assumptions}
Here are two remarks on Assumption \ref{a3}.
\begin{remark}
A sufficient condition of Assumption \ref{a3} which is easier to check is that, the solution $y^{*}(x)$ exists for $P\left(x\right)$ and the LICQ holds at $y$ for $P(x)$ for all ${x} \in \mathbb{R}^{d_x}$ and $y \in \mathbb{R}^{d_y}$.
\end{remark}
\begin{remark}
The equality constraint cannot be replaced by two inequality constraints (i.e., $q = 0$ replaced by $q \leq 0$ and $-q \leq 0$); otherwise, the LICQ is violated.
\end{remark}
Note that all optimization problems in the experiments of this paper satisfy Assumptions \ref{a1}, \ref{a2}, \ref{a3}. The details are shown in Appendix \ref{A0}.
\section{Experimental Supplement}\label{sc: expsetting}
\label{A0}
All experiments are executed on a computer with a 4.10 GHz Intel Core i5 CPU and an RTX 3080 GPU.
\subsection{Hyperparameter Optimization}
\label{A1}
In a machine learning problem, given a hyperparameter $\Lambda$, the learner minimizes the training error, and the optimal parameter is denoted by $w^*(\Lambda)$.
Hyperparameter optimization searches for the best hyperparameter $\Lambda^{*}$ for the learning problem, which can be formulated as a bilevel optimization problem.
In particular, the upper-level optimization minimizes the validation error of the learner's parameter $w^{*}(\Lambda)$, where $w^{*}(\Lambda)$ is the minimizer of the training error in the lower-level optimization under the hyperparameter $\Lambda$.
Hyperparameter optimization has been widely studied in \cite{pedregosa2016hyperparameter,franceschi2017forward, franceschi2018bilevel,lorraine2020optimizing,ji2021bilevel}.
However, these methods cannot handle hyperparameter optimization of constrained learning problems, such as support vector machine (SVM) classification \cite{cortes1995support} and safe reinforcement learning \cite{achiam2017constrained,chen2021primal,xu2021crpo}.
We apply the proposed algorithm to hyperparameter optimization of constrained learning problems, formulated as
$$
\begin{aligned}
&\min _{\Lambda} \ \Phi(\Lambda) = \mathcal{L}_{\mathcal{D}_{\text {val }}}(w^{*})=\frac{1}{\left|\mathcal{D}_{\text {val }}\right|} \sum_{z \in \mathcal{D}_{\text {val }}} \mathcal{L}\left(w^{*} ; z \right) \\
& \text { s.t. } w^{*}=\underset{w}{\arg \min } \{ \mathcal{F}_{\mathcal{D}_{\text {tr }}}(\Lambda, w):
p\left(\Lambda, w\right) \leq 0; q\left(\Lambda, w\right) = 0 \},
\end{aligned}
$$
where $\mathcal{D}_{\text {val }}$ and $\mathcal{D}_{\text {tr }}$ are the validation and training data, and $\mathcal{L}_{\mathcal{D}_{\text {val }}}$ is the loss of the model parameter $w$ on the data $\mathcal{D}_{\text {val }}$.
The lower-level optimization is the training of the model parameter $w$, where $\mathcal{F}_{\mathcal{D}_{\text {tr }}}$ is the training objective on $\mathcal{D}_{\text {tr }}$ and $p$, $q$ are the constraints.
\subsubsection{Hyperparameter Optimization of SVM}
The optimization problem for SVM is:
\begin{equation}
\label{eqsvm_opt}
\begin{aligned}
(w^{*}, b^{*}, {\xi}^{*}) =&\arg\min_{w, b, \xi} \ \frac{1}{2}\|w\|^{2}+ \frac{1}{2}\sum_{i=1}^{N} {e^{c_i}} \xi_{i}^2 \\
& \text { s.t. } \ l_{i}\left(w^{\top} \phi(z_{i})+b\right) \geq 1-\xi_{i}, \ i=1,2, \ldots, N.
\end{aligned}
\end{equation}
Here, $z_i$ is the data point and $l_i$ is the label, with $(z_i, l_i) \in \mathcal{D}_{\text {tr }}$ for all $1 \leq i \leq N$. The vector function $\phi(z_i)$ is the high-dimensional feature of point $z_i$. The kernel function is defined as $K\left(z_{i}, z_{j}\right)=\phi\left(z_{i}\right)^{\top} \phi\left(z_{j}\right)$.
The hyperparameter optimization of SVM is formulated as
\begin{equation}
\label{ho_svm_upper}
\min _{c} \ \Phi(c) = \mathcal{L}_{\mathcal{D}_{\text {val }}}(w^{*},b^{*}),
\end{equation}
where $w^{*},b^{*}$ are given in \eqref{eqsvm_opt} and the optimized hyperparameter is $c=[c_1, \dots, c_N]$.
To satisfy Assumption \ref{a2}, we set the objective function of \eqref{eqsvm_opt} as $\frac{1}{2}\|w\|^{2}+ \frac{1}{2} \sum_{i=1}^{N} {e^{c_i}} \xi_{i}^2 + \frac{1}{2} \mu b^2$, where $\mu$ is a small positive number. The objective function is then strongly convex w.r.t.\ $(w, b, \xi)$. It is easy to verify that the LICQ in Assumption \ref{a3} is satisfied.
The upper-level objective function is defined as
$\mathcal{L}_{\mathcal{D}_{\text {val }}}(w^{*},b^{*})= \frac{1}{\left|\mathcal{D}_{\text {val }}\right|} \sum_{(z,l) \in \mathcal{D}_{\text {val }}} \mathcal{L}\left(w^{*},b^* ; z,l \right)$,
where
$\mathcal{L}\left(w^{*},b^* ; z,l \right)=\sigma\left(\frac{-l(z^{\top}{w^{*}}+b^*)}{\|w^{*}\|}\right)$ and $\sigma(u)=\frac{1-e^{-u}}{1+e^{-u}}$.
Here, $\frac{l(z^{\top}{w^{*}}+b^*)}{\|w^{*}\|}$ is the signed distance between point $z$ and the decision plane $z^{\top}{w^{*}}+b^*=0$; it is positive when the prediction is correct and negative when the prediction is incorrect. Thus, $\mathcal{L}_{\mathcal{D}_{\text {val }}}(w^{*},b^{*})$ is a differentiable surrogate of the validation accuracy.
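As a sketch of this accuracy surrogate (the weights and data points below are made up), the loss rewards positive signed margins:

```python
import numpy as np

# sigma(u) = (1 - e^{-u}) / (1 + e^{-u}), a sigmoid mapping R to (-1, 1)
sigma = lambda u: (1 - np.exp(-u)) / (1 + np.exp(-u))

def surrogate_loss(w, b, Z, l):
    margin = l * (Z @ w + b) / np.linalg.norm(w)  # signed distance to the plane
    return sigma(-margin).mean()                  # small when margins are positive

w, b = np.array([1.0, 0.0]), 0.0
Z = np.array([[2.0, 0.0], [-1.0, 1.0]])
l = np.array([1.0, -1.0])                         # both points correctly classified
print(surrogate_loss(w, b, Z, l))                 # negative: both margins are positive
```

Unlike the 0--1 validation error, this surrogate is differentiable in $(w^*, b^*)$, so the upper-level gradient can flow through the lower-level solution map.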
When the feature function $\phi$ is not tractable, the hyperparameter $c$ of \eqref{eqsvm_opt} cannot be directly optimized.
For example, in kernelized SVM \cite{10.1214/009053607000000677}, under most kernel functions (e.g., the Gaussian and polynomial kernels), $\phi$ is unknown or very complex.
It is then hard to compute $\nabla w^{*}(c)$ by Theorem \ref{th1}.
Therefore, in kernelized SVM, we solve the dual problem of \eqref{eqsvm_opt}:
\begin{equation}
\label{eqsvm_opt_dual}
\begin{aligned}
\min _{\alpha} \ & \frac{1}{2} \alpha^{\top} (Q + C^{-1}) \alpha -\sum_{i=1}^{N} \alpha_{i}\\
\text { s.t. } & \ \sum_{i=1}^{N} \alpha_{i} l_{i}=0, \\
&\alpha_{i} \geq 0, \ i=1,2, \ldots, N,
\end{aligned}
\end{equation}
where $Q$ is an $N \times N$ positive semi-definite matrix with $Q_{i j} \triangleq l_{i} l_{j} K\left(z_{i}, z_{j}\right)$, and $C=\operatorname{diag}\left(e^{c_{1}}, \ldots, e^{c_{N}}\right)$.
Since the strong duality holds for problem (\ref{eqsvm_opt}), we have
$$
w^{*}=\sum_{i=1}^{N} \alpha_{i}^{*} l_{i} \phi\left(z_{i}\right)
$$
and
$$
b^{*}=l_{i}\left(1-e^{-c_i}\alpha^{*}_{i}\right)-\sum_{j=1}^{N} \alpha_{j}^{*} l_{j}\phi\left(z_{j}\right)^{\top} \phi\left(z_{i}\right)=l_{i}\left(1-e^{-c_i}\alpha^{*}_{i}\right)-\sum_{j=1}^{N} \alpha_{j}^{*} l_{j} K\left(z_{j}, z_{i}\right)
$$
for any support vector $z_{i}$ with $\alpha_{i}^{*}>0$.
Following the kernel method, explicit computation of $\phi$ is not required for either model training or prediction.
The prediction for a new point $z^{new}$ is
$$\operatorname{sign}\{ {w^{*}}^{\top}\phi(z^{new})+b^*\}=\operatorname{sign}\{\sum_{i=1}^{n} \alpha_{i}^{*} y_{i} K\left(z_{i},z^{new}\right)+b^{*}\}.$$
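The kernelized prediction rule can be sketched as follows. The helper names (`poly_kernel`, `dual_predict`) and the toy dual solution are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def poly_kernel(z1, z2, gamma=1.0, r=0.0, d=3):
    # polynomial kernel K(z, z') = (gamma * z . z' + r)^d
    return (gamma * np.dot(z1, z2) + r) ** d

def dual_predict(alpha, y, Z, b, z_new, kernel=poly_kernel):
    # sign( sum_i alpha_i * y_i * K(z_i, z_new) + b ): the feature map phi
    # is never evaluated explicitly, only the kernel is
    s = sum(a * yi * kernel(zi, z_new) for a, yi, zi in zip(alpha, y, Z))
    return np.sign(s + b)

# toy dual solution (illustrative values, not from a solver)
alpha = np.array([1.0, 1.0])
y = np.array([1.0, -1.0])
Z = np.array([[1.0, 0.0], [-1.0, 0.0]])
b = 0.0
pred_pos = dual_predict(alpha, y, Z, b, np.array([3.0, 0.0]))   # class +1 side
pred_neg = dual_predict(alpha, y, Z, b, np.array([-3.0, 0.0]))  # class -1 side
```

The design point is that only kernel evaluations $K(z_i, z^{new})$ enter the sum, which is what makes intractable feature maps $\phi$ usable.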
Assumption \ref{a2} is satisfied: since $Q$ is positive semi-definite and $C^{-1}=\operatorname{diag}\left(e^{-c_{1}}, \ldots, e^{-c_{n}}\right)$ is positive definite, the objective function $\frac{1}{2} \alpha^{\top} (Q+C^{-1}) \alpha -\sum_{i=1}^{n} \alpha_{i}$ is strongly convex. Since there exists $i$ such that $\alpha_{i}^{*} > 0$, it is easy to verify that the LICQ in Assumption \ref{a3} is satisfied.
In the experiment, we consider hyperparameter optimization of the linear SVM model and the kernelized SVM model.
\textbf{Linear SVM: } The feature function is $\phi(x)=x$. Both lower-level problems (\ref{eqsvm_opt}) and (\ref{eqsvm_opt_dual}) work for hyperparameter optimization of the linear SVM. Here, we solve the bilevel problem \eqref{eqsvm_opt}.
\textbf{Kernelized SVM: } We apply the polynomial kernel, i.e., $K(z, z^{\prime})=\phi(z)^{\top} \phi(z^{\prime})=(\gamma z^{\top}z^{\prime}+r)^{d}$, where $\gamma=1$ and $d=3$. We test our algorithm on a diabetes dataset from \cite{Dua2019}.
For Algorithm \ref{alg:framework0}, we set $\gamma=0.3$, $\epsilon_0=0.3$, $\beta=0.5$ and fix the total iteration number as 60.
\subsubsection{Data Hyper-Cleaning}
We formulate data hyper-cleaning as hyperparameter optimization of the SVM, where the upper-level optimization problem is shown in \eqref{ho_svm_upper} and the lower-level optimization problem is shown in \eqref{eqsvm_opt}. After the optimization of the hyperparameter $c$, the penalty term $e^{c_i}$ corresponding to a corrupted data point $(z_i,y_i)$ will be close to $0$. Thus, the corrupted data point $(z_i,y_i)$ is detected and has almost no effect on training or on the predictions of the classifier model. We conduct experiments on a breast cancer dataset provided in \cite{Dua2019}.
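A minimal sketch of the weighting mechanism behind hyper-cleaning follows. The squared-hinge objective below is an illustrative assumption (\eqref{eqsvm_opt} is not reproduced here); the point is only that driving $e^{c_i}$ toward $0$ removes sample $i$'s influence:

```python
import numpy as np

def weighted_hinge_objective(w, b, Z, y, c):
    # hypothetical per-sample weighted squared-hinge objective (illustrative,
    # not the paper's lower-level problem): 0.5*||w||^2 + sum_i e^{c_i} * hinge_i^2
    losses = np.maximum(0.0, 1.0 - y * (Z @ w + b)) ** 2
    return 0.5 * np.dot(w, w) + np.sum(np.exp(c) * losses)

w, b = np.array([1.0, 0.0]), 0.0
Z = np.array([[2.0, 0.0], [2.0, 0.0]])   # sample 1 duplicates sample 0 ...
y = np.array([1.0, -1.0])                # ... but carries a corrupted label
obj_full = weighted_hinge_objective(w, b, Z, y, np.array([0.0, 0.0]))
obj_down = weighted_hinge_objective(w, b, Z, y, np.array([0.0, -20.0]))
# driving c_1 down makes e^{c_1} ~ 0, so the corrupted point stops
# contributing to the objective
```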
For Algorithm \ref{alg:framework0}, we set $\gamma=0.3$, $\epsilon_0=0.3$, $\beta=0.5$ and fix the total iteration number as 30.
\subsection{Meta-learning}
\label{A2}
Meta-learning for few-shot learning aims to learn a shared prior parameter across a distribution of tasks, such that a simple learning step with few-shot data based on this prior leads to good adaptation to any task from the distribution. In particular, each training task $\mathcal{T}_{i}$ is sampled from a distribution $P_{\mathcal{T}}$ and is characterized by its training data $\mathcal{D}_{i}^{tr}$ and test data $\mathcal{D}_{i}^{test}$.
The goal of meta-learning is to find a good parameter ${\phi}$ and a base learner $\mathcal{A}$, such that the task-specific parameter $w^{i}=\mathcal{A}({\phi}, \mathcal{D}_{i}^{tr})$ has a small test loss $\mathcal{L}(w^{i}, \phi, \mathcal{D}_{i}^{test})$.
The training of meta-learning can be formulated as a constrained bilevel optimization problem \cite{lee2019meta}. The upper-level optimization learns to extract features from the input data, while the multi-class SVM serves as the base learner $\mathcal{A}$ in the lower-level optimization, classifying the data based on the extracted features. In particular, the feature extraction model $f_{{\phi}}$ maps an image $x_n$ to its features, denoted $f_{{\phi}}(x_n)$.
The multi-class SVM in the lower-level optimization is a constrained optimization problem:
\begin{equation}
\label{metalearningopt}
\begin{aligned}
w^{i}=\mathcal{A}\left(\mathcal{D}_i^{\text {tr }} ; {\phi}\right)=\underset{\{{w}_{k}\},\{\xi_{n}\}}{\arg \min } \frac{1}{2} \sum_{k}\left\|{w}_{k}\right\|_{2}^{2}+C \sum_{n} \xi_{n} \\
\text{s.t. }
{w}_{y_{n}} \cdot f_{{\phi}}\left({x}_{n}\right)-{w}_{k} \cdot f_{{\phi}}\left({x}_{n}\right) \geq 1-\delta_{y_{n}, k}-\xi_{n}, \forall n, k
\end{aligned}
\end{equation}
where $\mathcal{D}^{ {tr }}_i=\left\{\left({x}_{n}, y_{n}\right)\right\}$ with image ${x}_{n}$ and its label $y_n$, $C$ is the regularization parameter, and $\delta_{\cdot,\cdot}$ is the Kronecker delta function. Here, $k$ is the class index and $n$ is the index of the data.
The upper-level optimization is
\begin{equation}
\label{metalearningupper}
\min_{\phi} \sum_{\mathcal{T}_{i} \in P_{\mathcal{T}}} \mathcal{L}\left(w^i, \phi, \mathcal{D}^{ {test }}_i \right),
\end{equation}
where
$$
\mathcal{L}\left(w^i, \phi, \mathcal{D}^{ {test }}_i \right)=
\sum_{({x}, y) \in \mathcal{D}^{ {test }}_i}\left[-\gamma {w}_{y}^i \cdot f_{\phi}({x})+\log \sum_{k} \exp \left(\gamma {w}_{k}^i \cdot f_{\phi}({x})\right)\right]
$$
and $w^i$ is given in \eqref{metalearningopt}. Here, $\mathcal{L}\left(w^i, \phi, \mathcal{D}^{ {test }}_i \right)$ is the negative log-likelihood loss under the feature extraction parameter ${{\phi}}$ and the SVM parameter ${w}^i$ optimized in lower-level optimization \eqref{metalearningopt}.
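The test loss above can be computed stably with the log-sum-exp trick. The following NumPy sketch is illustrative only (the variable names `W`, `feats` and the toy values are assumptions, not the paper's code):

```python
import numpy as np

def meta_test_loss(W, feats, labels, gamma=1.0):
    # sum over test points of: -gamma * w_y . f(x) + log sum_k exp(gamma * w_k . f(x))
    scores = gamma * feats @ W.T                      # (N, K) class scores
    m = scores.max(axis=1, keepdims=True)             # stabilise log-sum-exp
    lse = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    return float(np.sum(lse - scores[np.arange(len(labels)), labels]))

W = np.array([[1.0, 0.0], [0.0, 1.0]])                # K = 2 class weights w_k
feats = np.array([[5.0, 0.0], [0.0, 5.0]])            # well-separated features f(x)
labels = np.array([0, 1])
loss = meta_test_loss(W, feats, labels)               # small positive loss
```

Subtracting the row maximum before exponentiating avoids overflow when $\gamma {w}_{k}^i \cdot f_{\phi}({x})$ is large, without changing the value of the loss.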
The upper-level optimization \eqref{metalearningupper} then finds the best feature extraction parameter ${{\phi}}$.
Following the experimental setting in MetaOptNet \cite{lee2019meta}, we use a ResNet-12 network as the feature extraction mapping $f_{{\phi}}(\cdot)$. Since the line search in Algorithm \ref{alg:framework0} is not convenient for the meta-learning problem, we compute the descent direction as in Algorithm \ref{alg:framework0} and use SGD with Nesterov momentum of 0.9 and weight decay of 0.0005 to solve the problem. Each mini-batch consists of 8 episodes. The model was meta-trained for 30 epochs, with each epoch consisting of 1000 episodes. The learning rate was initially set to 0.1 and then changed to 0.006 and 0.0012 at epochs 10, 20, and 25, respectively. We use the same configurations for both our method and the method in MetaOptNet \cite{lee2019meta}.
The comparison to prior work on CIFAR-FS and FC100 in terms of prediction accuracy is shown in Table \ref{tab:CIFAR}. The final test accuracy of our optimization algorithm is slightly better than that of MetaOptNet-SVM \cite{lee2019meta}.
\begin{table*}[htb]
\caption{\textbf{Comparison to prior work on CIFAR-FS and FC100.} Average few-shot classification accuracies (\%) with 95\% confidence intervals on CIFAR-FS and FC100. \textsuperscript{$\ast$}CIFAR-FS results from \cite{R2D2}. FC100 results from \textsuperscript{$\dagger$}\cite{TADAM} and \textsuperscript{$\ddag$}\cite{ji2021bilevel}. All models are trained on the original training data of CIFAR-FS and FC100 in \cite{lee2019meta}, and validation data are not included.}
\label{tab:CIFAR}
\begin{center}
\begin{small}
\begin{tabular}{@{}llc@{}cc@{}c@{}cc@{}}
\toprule
& & \multicolumn{2}{c}{\textbf{CIFAR-FS 5-way}} & \phantom{ab} & \multicolumn{2}{c}{\textbf{FC100 5-way}} \\
\cmidrule{3-4} \cmidrule{6-7}
\textbf{model} && \textbf{1-shot} & \textbf{5-shot} && \textbf{1-shot} & \textbf{5-shot} \\
\midrule
MAML\textsuperscript{$\ast$}\textsuperscript{$\ddag$} \cite{MAML} && 58.9 $\pm$ 1.9 \quad & \quad 71.5 $\pm$ 1.0 && - \quad & \quad 47.2 \\
Prototypical Networks\textsuperscript{$\ast$}\textsuperscript{$\dagger$} \cite{proto-net} && 55.5 $\pm$ 0.7 \quad & \quad 72.0 $\pm$ 0.6 && 35.3 $\pm$ 0.6 \quad & \quad 48.6 $\pm$ 0.6 \\
Relation Networks\textsuperscript{$\ast$} \cite{sung2018learning} && 55.0 $\pm$ 1.0 \quad & \quad 69.3 $\pm$ 0.8 && - \quad & \quad - \\
R2D2 \cite{R2D2} && 65.3 $\pm$ 0.2 \quad & \quad 79.4 $\pm$ 0.1 && - \quad & \quad -\\
TADAM \cite{TADAM} && - \quad & \quad - && 40.1 $\pm$ 0.4 \quad & \quad 56.1 $\pm$ 0.4\\
ProtoNets \cite{proto-net} && \textbf{72.2 $\pm$ 0.7} \quad & \quad 83.5 $\pm$ 0.5 && 37.5 $\pm$ 0.6 \quad & \quad 52.5 $\pm$ 0.6 \\
MetaOptNet-RR \cite{lee2019meta} && \textbf{72.6 $\pm$ 0.7} \quad & \quad \textbf{84.3 $\pm$ 0.5} && 40.5 $\pm$ 0.6 \quad & \quad 55.3 $\pm$ 0.6\\
MetaOptNet-SVM \cite{lee2019meta} && \textbf{72.0 $\pm$ 0.7} \quad & \quad \textbf{84.2 $\pm$ 0.5} && 41.1 $\pm$ 0.6 \quad & \quad 55.5 $\pm$ 0.6 \\
MetaOptNet-SVM-GAE (ours) && \textbf{72.2 $\pm$ 0.7} \quad & \quad \textbf{84.6 $\pm$ 0.6} && \textbf{41.9 $\pm$ 0.6} \quad & \quad \textbf{56.4 $\pm$ 0.8} \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table*}
\section{Proof and Analysis}
First, we clarify the notation used in this section.
All notation is the same as in the main body of the paper, except for $J(x)$, $J^{+}(x)$, and $J^{0}(x)$ in Section \ref{section3}. In Section \ref{section3},
we simplify the notation of \eqref{def1} and denote $J(x,y^{*}(x))$ as $J(x)$, $J^{+}(x,y^{*}(x),{\lambda}(x))$ as $J^{+}(x)$, and $J^{0}(x,y^*(x),\lambda(x))$ as $J^{0}(x)$, because the optimal solution $y^{*}(x)$ and the Lagrangian multipliers ${\lambda}$ and ${\nu}$ are uniquely determined by $x$.
This section keeps all definitions in Definition \ref{def1}, and simplifies $J^{+}(x,y,{\lambda})$ as $J^{+}(x,y)$ and $J^{0}(x,y,\lambda)$ as $J^{0}(x,y)$, where the KKT conditions hold at $y$ for $P(x)$.
We can do this because the Lagrangian multipliers $\lambda$ and $\nu$ are uniquely determined by $x$ and $y$ when
the KKT conditions hold at $y$ for $P(x)$ and the LICQ holds (Assumption \ref{a3}) \cite{kyparisis1985uniqueness}.
\label{proof_app}
\subsection{Proof of Theorems \ref{th0} and \ref{th1}}
We first list Definition \ref{def4} and Theorems \ref{thm1} and \ref{thm1+}, which are shown in \cite{lemke1985introduction,Giorgi2018ATO,kojima1980strongly}; we then introduce Lemmas \ref{lemma0}, \ref{thm4}, and \ref{thm3}; and finally we prove Theorem \ref{th2}, which is the full version of the combination of Theorems \ref{th0} and \ref{th1}.
\begin{definition}
\label{def4}
Suppose that the KKT conditions hold at $\hat{y}$ for $P(x)$ with the Lagrangian multipliers $\hat{\lambda}$ and $\hat{\nu}$. The Strong Second-Order Sufficient Conditions (SSOSC) hold at $\hat{y}$ if
$$
z^{\top} \nabla_{y}^{2} \mathcal{L}\left(\hat{y}, \hat{\lambda}, \hat{\nu}, x\right) z>0
$$
for all $z \neq 0, z \in Z\left(\hat{y},x\right)$, where $\mathcal{L}$ is the Lagrangian associated with $P(x)$, and $Z\left(\hat{y},x\right)$ is defined by
$$
Z\left(\hat{y},x\right) \triangleq \{ z \in \mathbb{R}^{d_y}:
\nabla_y p_{j}\left(x,\hat{y}\right) z=0, \ \hat{\lambda}_j>0;
\nabla_y q_{i}\left(x,\hat{y}\right) z=0, \ 1 \leq i \leq n \}.
$$
\end{definition}
\begin{theorem}[\cite{lemke1985introduction,Giorgi2018ATO}]
\label{thm1}
Consider the problem $P\left(x^{0}\right)$. Suppose that Assumption \ref{a1} holds. Suppose that $y^{0} \in K\left(x^{0}\right)$ and the KKT conditions hold at $y^{0}$ with the Lagrangian multipliers $\lambda^{0}, {\nu}^{0}$. Moreover, suppose that the LICQ holds at $y^{0}$, the SCSC holds at $y^{0}$ w.r.t. $\lambda^{0}$, and the SSOSC holds at $y^{0}$ with $\left(\lambda^{0}, {\nu}^{0}\right)$. Then,
\begin{itemize}
\item[(\romannumeral1)] $y^{0}$ is a locally unique local minimum of $P\left(x^{0}\right)$, i.e., there exists $\delta > 0$ such that, for all $y \in \mathcal{B}(y^0,\delta)$, $y^{0}$ is the unique local minimum of $P\left(x^{0}\right)$. The associated Lagrangian multipliers $\lambda^{0}$ and ${\nu}^{0}$ are unique.
\item[(\romannumeral2)] There exist $\epsilon > 0$ and a unique continuously differentiable vector function
$$
z(x) \triangleq [y(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top},
$$
which is defined on $\mathcal{B}(x^0,\epsilon)$, and
$y(x)$ is a locally unique local minimum of $P(x)$. The KKT conditions hold at $y(x)$ with unique associated Lagrangian multipliers $\lambda(x)$ and $\nu(x)$.
\item[(\romannumeral3)] The LICQ and the SCSC hold at $y(x)$ for $P(x)$ for all $x \in \mathcal{B}(x^0,\epsilon)$.
\item[(\romannumeral4)] The gradient of $z(x)$ is given as
$$
\left[\begin{array}{c}
\nabla_{x} y(x^0) \\
\nabla_{x} \lambda(x^0) \\
\nabla_{x} \nu(x^0)
\end{array}\right]=-M(x^0,y^0,\lambda^{0},{\nu}^{0})^{-1} N(x^0,y^0,\lambda^{0},{\nu}^{0}),
$$
where
$$
M\triangleq
\left[\begin{array}{ccccccc}
\nabla_{y}^{2} \mathcal{L} & \left(\nabla_{y} p_{1}\right)^{\top} & \cdots & \left(\nabla_{y} p_{m}\right)^{\top} & \left(\nabla_{y} q\right)^{\top} \\
\lambda_{1} \nabla_{y} p_{1} & p_{1} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & 0 & \vdots \\
\lambda_{m} \nabla_{y} p_{m} & 0 & \cdots & p_{m} & 0 \\
\nabla_y q & 0 & 0 & 0 & 0
\end{array}\right]
$$
where $M(x^0,y^0,\lambda^{0},{\nu}^{0})$ is nonsingular, and
$$
N\triangleq
\left[\nabla_{x y}^{2} \mathcal{L}^{\top}, \lambda_1\left(\nabla_{x} p_1\right)^{\top}, \cdots, \lambda_m\left(\nabla_{x} p_m\right)^{\top}, \nabla_{x} q^{\top} \right]^{\top}.
$$
\end{itemize}
\end{theorem}
\begin{remark}
If the lower-level optimization problem $P(x^0)$ is unconstrained, the requirements in Theorem \ref{thm1} reduce to the requirement that $\nabla_{y}^{2}g\left({y}^0, x^0\right)$ be positive definite.
\end{remark}
\begin{theorem}[\cite{Giorgi2018ATO,kojima1980strongly}]
\label{thm1+}
Suppose that all requirements except the SCSC in Theorem \ref{thm1} are satisfied at $(y^0, \lambda^0, {\nu}^0)$ for $P\left(x^{0}\right)$.
Then,
\begin{itemize}
\item[(\romannumeral1)] $y^{0}$ is a locally unique local minimum of $P\left(x^{0}\right)$. The associated Lagrangian multipliers $\lambda^{0}$ and ${\nu}^{0}$ are unique.
\item[(\romannumeral2)] There exist $\epsilon > 0$ and a unique Lipschitz continuous and directionally differentiable vector function
$$
z(x) \triangleq [y(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top},
$$
which is defined on $\mathcal{B}(x^0,\epsilon)$, and
$y(x)$ is the locally unique local minimum of $P(x)$ with unique associated Lagrangian multipliers $\lambda(x)$ and $\nu(x)$.
\item[(\romannumeral3)] The LICQ hold at $y(x)$ for $P(x)$ for all $x \in \mathcal{B}(x^0,\epsilon)$.
\end{itemize}
\end{theorem}
\begin{lemma}
\label{lemma0}
Suppose that all requirements except the SCSC in Theorem \ref{thm1} hold at $(y^0, \lambda^0, {\nu}^0)$ for $P\left(x^{0}\right)$ (All requirements in Theorem \ref{thm1+} are satisfied).
Let $y(x)$ be the locally unique local minimum of $P(x)$ shown in Theorem \ref{thm1+}.
Then,
\begin{itemize}
\item[(\romannumeral1)] There exists $\beta >0$, such that for all $x \in \mathcal{B}(x^0,\beta)$, $p_{j}\left(x, y(x)\right)< 0$ for all $j \not\in J(x^0,y^0)$.
\item[(\romannumeral2)] There exists $\xi >0$ and $\delta>0$, such that for all $x^{\prime} \in \mathcal{B}(x^0,\xi)$ and $y^{\prime} \in \mathcal{B}(y(x^{\prime}),\delta)$, $p_{j}\left(x^{\prime}, y^{\prime}\right)<0$ for all $j \not\in J(x^0,y^0)$.
\end{itemize}
\end{lemma}
\begin{proof}
(\romannumeral1)
For any $j \not\in J(x^0,y^0)$, we have $p_{j}\left(x^0, y^0\right) < 0$. Then, there exists $\epsilon_1>0$ such that $p_{j}\left(x^0, y^0\right) \leq -\epsilon_1$.
Since the function $p_j$ is continuous at $(x^0, y^0)$ and $y$ is continuous at $x^0$, we have that $p_j(x,y(x))$ is continuous at $x^0$.
Then, there exists $\beta_1 > 0$ such that, for all $x \in \mathcal{B}(x^0,\beta_1)$, we have $|p_{j}\left(x, y(x)\right) -p_{j}\left(x^0, y^0\right)| \leq \frac{1}{2}\epsilon_1$, and then $p_{j}\left(x, y(x)\right)\leq -\frac{1}{2}\epsilon_1$.
By selecting the smallest $\beta_1$ over all $j \not\in J(x^0,y^0)$ as $\beta$, (\romannumeral1) is shown.
(\romannumeral2)
Since $p_j$ is continuous at $(x^0, y^0)$, there exists $\delta_1 > 0$ such that, for all $(x^{\prime},y^{\prime}) \in \mathcal{B}((x^0, y^0),\delta_1)$, $|p_{j}\left(x^{\prime}, y^{\prime}\right) -p_{j}\left(x^0, y^0\right)| \leq \frac{1}{2}\epsilon_1$.
Since $y(x)$ is continuous at $x^0$, there exists $\xi_1>0$, for all $x \in \mathcal{B}(x^0,\xi_1)$, $\|y(x)-y(x^0)\|<\delta_1/4$.
Let $\xi_2 = \min\{\delta_1/4,\xi_1\}$. Then, for all $x^{\prime} \in \mathcal{B}(x^0,\xi_2)$ and $y^{\prime} \in \mathcal{B}(y(x^{\prime}),\delta_1/2)$, $\|(x^{\prime},y^{\prime})-(x^0,y^0)\|<\delta_1$. Then $|p_{j}\left(x^{\prime}, y^{\prime}\right) -p_{j}\left(x^0, y^0\right)| \leq \frac{1}{2}\epsilon_1$, and $p_{j}\left(x^{\prime}, y^{\prime}\right)\leq -\frac{1}{2}\epsilon_1$.
By selecting the smallest $\xi_2$ over all $j \not\in J(x^0,y^0)$ as $\xi$ and the smallest $\delta_1/2$ over all $j \not\in J(x^0,y^0)$ as $\delta$, we have that, for all $x^{\prime} \in \mathcal{B}(x^0,\xi)$ and all $y^{\prime}$ with $\|y^{\prime}-y(x^{\prime})\| \leq \delta$, $p_{j}\left(x^{\prime}, y^{\prime}\right)\leq -\frac{1}{2}\epsilon_1$ for all $j \not\in J(x^0,y^0)$.
\end{proof}
\begin{remark}
Lemma \ref{lemma0} shows that the inactive constraints at $(x^0,y^0)$ are still inactive near $(x^0,y^0)$.
\end{remark}
\begin{lemma}
\label{thm4}
Suppose that all requirements in Theorem \ref{thm1} hold at $(y^0, \lambda^0, {\nu}^0)$ for $P\left(x^{0}\right)$. Define problem $\hat{P}\left(x\right)$ as:
$$
\begin{aligned}
\underset{y}{\arg\min }& \ g(x, y): \\
\text { s.t. } \ & p_{j}\left(x, y\right) \leq 0, j \in J(x^0,y^0), \\
& q\left(x, y\right) = 0.
\end{aligned}
$$
Then, the following
properties hold:
\begin{itemize}
\item[(\romannumeral1)] All requirements and all conclusions in Theorem \ref{thm1} hold at $(y^0, {\lambda}^{0}, {\nu}^0)$ for ${P}\left(x^{0}\right)$, and also hold at $(y^0, {\lambda}^{0}_{J(x^0,y^0)}, {\nu}^0)$ for $\hat{P}\left(x^{0}\right)$.
\end{itemize}
Let ${z}(x) \triangleq [{y}(x)^{\top}, {\lambda}(x)^{\top}, {\nu}(x)^{\top}]^{\top}$ be the unique continuously differentiable vector function in a neighborhood of $x^0$, such that $y(x)$ is a locally unique local minimum of $P(x)$ with unique associated Lagrangian multipliers $\lambda(x)$ and $\nu(x)$.
Let $\hat{z}(x) \triangleq [\hat{y}(x)^{\top}, \hat{\lambda}(x)^{\top}, \hat{\nu}(x)^{\top}]^{\top}$ be the unique continuously differentiable vector function in a neighborhood of $x^0$, such that $\hat{y}(x)$ is a locally unique local minimum of $\hat{P}(x)$ with unique associated Lagrangian multipliers $\hat{\lambda}(x)$ and $\hat{\nu}(x)$.
\begin{itemize}
\item[(\romannumeral2)] We have
$$
\begin{aligned}
&\nabla_{x} y(x^0)=\nabla_{x} \hat{y}(x^0), \\
&\nabla_{x} {\nu}(x^0)=\nabla_{x} \hat{\nu}(x^0), \\
&\nabla_{x} \lambda_j(x^0)=\nabla_{x} \hat{\lambda}_j(x^0) \text{ when } j \in J(x^0,y^0),\\
&\nabla_{x} \lambda_j(x^0)=0 \text{ when } j \not\in J(x^0,y^0).
\end{aligned}
$$
\end{itemize}
\end{lemma}
\begin{proof} (\romannumeral1)
Problem $\hat{P}\left(x^{0}\right)$ has the same objective function and the same equality constraints as problem $P\left(x^{0}\right)$.
The inequality constraints of $\hat{P}\left(x^{0}\right)$ are those of $P\left(x^{0}\right)$ with the inactive constraints removed.
The LICQ, the SCSC, the SSOSC, and the KKT conditions hold at $(y^0, {\lambda}^{0}, {\nu}^0)$ for ${P}\left(x^{0}\right)$. Then, it is easy to verify that the LICQ and the SCSC hold at $(y^0, {\lambda}^{0}_{J(x^0,y^0)}, {\nu}^0)$ for $\hat{P}\left(x^{0}\right)$.
By the KKT conditions at $(y^0,{\lambda}^{0}, {\nu}^0)$ for ${P}\left(x^{0}\right)$, we have $\lambda^{0}_j=0$ for $j \not\in J(x^0,y^0)$, i.e., for $p_{j}\left(x^0, y^0\right) < 0$. Then, the SSOSC and the KKT conditions hold at $(y^0, {\lambda}^{0}_{J(x^0,y^0)}, {\nu}^0)$ for $\hat{P}\left(x^{0}\right)$.
By Theorem \ref{thm1}, (\romannumeral1) holds.
Then $y^{0}$ is a locally unique local minimum of $\hat{P}\left(x^{0}\right)$, and there exists $\hat{z}(x)=[\hat{y}(x)^{\top}, \hat{\lambda}(x)^{\top}, \hat{\nu}(x)^{\top}]^{\top}$ being the unique continuously differentiable vector function in a neighborhood of $x^0$, such that $\hat{y}(x)$ is a locally unique local minimum of $\hat{P}(x)$ with unique Lagrangian multipliers $\hat{\lambda}(x)$ and $\hat{\nu}(x)$.
(\romannumeral2) Since all conclusions in Theorem \ref{thm1} hold at $(y^0, {\lambda}^{0}, {\nu}^0)$ for ${P}\left(x^{0}\right)$, there exists ${z}(x)=[{y}(x)^{\top}, {\lambda}(x)^{\top}, {\nu}(x)^{\top}]^{\top}$, the unique continuously differentiable vector function in a neighborhood of $x^0$, such that $y(x)$ is a locally unique local minimum of $P(x)$ with unique Lagrangian multipliers $\lambda(x)$ and $\nu(x)$. Next, we will show that, in a neighborhood of $x^0$,
$$
\begin{aligned}
&y(x)=\hat{y}(x), \\
&{\nu}(x)=\hat{\nu}(x), \\
&\lambda_j(x)=\hat{\lambda}_j(x) \text{ when } j \in J(x^0,y^0),\\
&\lambda_j(x)=0 \text{ when } j \not\in J(x^0,y^0).
\end{aligned}
$$
Since $y(x)$ is a locally unique local minimum of $P(x)$, there exists $\beta_3>0$ such that, for any $\|x-x^0\|\leq \beta_3$, $y(x)$ is a local minimum of $P\left(x\right)$, i.e., there exists $\delta_3 > 0$ such that $g(x, y(x)) \leq g(x, y^{\prime})$ whenever $\|y^{\prime}-y(x)\| \leq \delta_3$, $p\left(x, y^{\prime}\right) \leq 0$, and $q\left(x, y^{\prime}\right) = 0$.
Let $\beta_2$ be the $\xi$ and $\delta_2$ be the $\delta$ shown in Lemma \ref{lemma0}.
Let $\beta_4=\min{\{\beta_2,\beta_3\}}$ and $\delta_4=\min{\{\delta_2,\delta_3\}}$. Then, for all ${||x-x^0||}\leq \beta_4$, we have $\delta_4$, such that the following two statements are satisfied:
\begin{itemize}
\item[(a)] $g(x, y(x)) \leq g(x, y^{\prime})$ when ${||y^{\prime}-y(x)||} \leq \delta_4$, $p\left(x, y^{\prime}\right) \leq 0$, and $q\left(x, y^{\prime}\right) = 0$.
\item[(b)] $p_{j}\left(x, y^{\prime}\right)\leq -\frac{1}{2}\epsilon_1 <0$ for all $j \not\in J(x^0,y^0)$ when ${||y^{\prime}-y(x)||} \leq \delta_4$.
\end{itemize}
Statement (b) shows that the set $\{y^{\prime}: \|y^{\prime}-y(x)\| \leq \delta_4\} \subset \{y^{\prime}: p_{j}\left(x, y^{\prime}\right)\leq -\frac{1}{2}\epsilon_1 <0 \text{ for all } j \not\in J(x^0,y^0)\}$.
Then, $g(x, y(x)) \leq g(x, y^{\prime})$ when $\|y^{\prime}-y(x)\| \leq \delta_4$, $q\left(x, y^{\prime}\right) = 0$, and $p_j\left(x, y^{\prime}\right) \leq 0$ for $j \in J(x^0,y^0)$. This means that $y(x)$ is a local minimum of $\hat{P}\left(x\right)$ for all $\|x-x^0\|\leq \beta_4$.
For $\|x-x^0\|\leq \beta_4$, $y(x)$ is a locally unique local minimum of ${P}\left(x\right)$. Assume that $y(x)$ is not a locally unique local minimum of $\hat{P}\left(x\right)$, i.e., for any $\phi>0$, there exists $y^{\prime}(\phi)$ with $\|y^{\prime}(\phi)-y(x)\|\leq \phi$ such that $y^{\prime}(\phi)$ is a local minimum of $\hat{P}\left(x\right)$. We can take $0<\phi<\delta_4$; then $p_j(x,y^{\prime}(\phi))<0$ for all $j \not\in J(x^0,y^0)$. Then, for any such $\phi$, $y^{\prime}(\phi)$ is a local minimum of ${P}\left(x\right)$, which contradicts the fact that $y(x)$ is a locally unique local minimum of ${P}\left(x\right)$. Thus, $y(x)$ is a locally unique local minimum of $\hat{P}\left(x\right)$.
Let $z_1(x)\triangleq [{y}(x)^{\top}, {\lambda}^{\prime}(x)^{\top}, {\nu}(x)^{\top}]^{\top}$ be defined on $\{x: \|x-x^0\|\leq \beta_4\}$, where ${\lambda}^{\prime}(x)$ is the vector function containing all $\lambda_j(x)$ with $j \in J(x^0,y^0)$. Then $z_1(x)$ is a continuously differentiable vector function, and it is easy to verify that the KKT conditions hold at $y(x)$ with Lagrangian multipliers $\lambda^{\prime}(x)$ and $\nu(x)$ for $\hat{P}(x)$.
Since $\hat{z}(x)=[\hat{y}(x)^{\top}, \hat{\lambda}(x)^{\top}, \hat{\nu}(x)^{\top}]^{\top}$ is the unique continuously differentiable vector function in a neighborhood of $x^0$ such that $\hat{y}(x)$ is a locally unique local minimum of $\hat{P}(x)$, for $\|x-x^0\|\leq \beta_4$ we have $\hat{z}(x)=z_1(x)$ and
$$
\begin{aligned}
&y(x)=\hat{y}(x), \\
&{\nu}(x)=\hat{\nu}(x), \\
&\lambda_j(x)=\hat{\lambda}_j(x) \text{ when } j \in J(x^0,y^0).
\end{aligned}
$$
Since $p_{j}\left(x, y(x)\right) <0$ when $j \not\in J(x^0,y^0)$, we have $\lambda_j(x)=0$ by the KKT conditions.
Then,
$$
\begin{aligned}
&\nabla_{x} y(x^0)=\nabla_{x} \hat{y}(x^0), \\
&\nabla_{x} {\nu}(x^0)=\nabla_{x} \hat{\nu}(x^0), \\
&\nabla_{x} \lambda_j(x^0)=\nabla_{x} \hat{\lambda}_j(x^0) \text{ when } j \in J(x^0,y^0),\\
&\nabla_{x} \lambda_j(x^0)=0 \text{ when } j \not\in J(x^0,y^0).
\end{aligned}
$$
\end{proof}
Note that the SCSC holding at $\hat{y}$ w.r.t. $\hat{\lambda}$ for $P(x)$ is equivalent to $J^{0}(x,\hat{y})=\emptyset$, so that $J\left(x,\hat{y}\right) = J^{+}\left(x,\hat{y}\right)$.
We define the matrix functions
$$
M_{+}(x^0,y^0) \triangleq
\left[\begin{array}{ccccccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x^0,y^0)}^{\top} & \nabla_{y} q^{\top} \\
\nabla_{y} p_{J^{+}(x^0,y^0)} & 0 & 0 \\
\nabla_y q & 0 & 0
\end{array}\right](x^0,y^0,\lambda^{0},{\nu}^{0}),
$$
and
$$
N_{+}(x^0,y^0)\triangleq [\nabla_{x y}^{2} \mathcal{L}^{\top}, \nabla_{x} p_{J^{+}(x^0,y^0)}^{\top}, \nabla_{x} q^{\top}]^{\top}(x^0,y^0,\lambda^{0},{\nu}^{0}).
$$
We compute the gradient of $\hat{z}(x)$ as shown in Theorem \ref{thm1}.
Since $\lambda_j>0$ for all $j \in J^{+}(x^0,y^0)$, all $\lambda_j$ can be cancelled in $M$ and $N$.
We then get
$$
\left[\begin{array}{c}
\nabla_{x} \hat{y}(x^0) \\
\nabla_{x} \hat{\lambda}(x^0) \\
\nabla_{x} \hat{\nu}(x^0) \\
\end{array}\right]
= -{M}_{+}^{-1}(x^0, y^0) {N}_{+}(x^0, y^0).
$$
By Lemma \ref{thm4}, the gradient of ${z}(x)$ is computed as:
\begin{equation}
\label{compute_g0}
\begin{array}{c}
\left[\begin{array}{c}
\nabla_{x} y(x^0) \\
\nabla_{x} \lambda_{J(x^0,y^0)}(x^0) \\
\nabla_{x} \nu(x^0)
\end{array}\right]
= -M_{+}(x^0, y^0)^{-1} N_{+}(x^0, y^0), \\ \nabla_{x} \lambda_{{J(x^0,y^0)}^C}(x^0)=0. \end{array}
\end{equation}
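To make \eqref{compute_g0} concrete, here is a minimal NumPy sketch on a hypothetical one-dimensional lower-level problem (an assumption for illustration, not taken from the experiments): $P(x)\colon \min_y \frac{1}{2}y^2 - xy$ subject to $p(x,y)=y-1\leq 0$.

```python
import numpy as np

# At x0 = 2 the unconstrained minimizer y = x0 violates y <= 1, so the
# constraint is active with multiplier lambda = x0 - 1 = 1 > 0 (J^+ = {1}),
# and the reduced KKT system of \eqref{compute_g0} gives the implicit gradients.
x0 = 2.0
y0, lam0 = 1.0, x0 - 1.0

# Blocks for this problem:
#   d2L/dy2 = 1,  dp/dy = 1,  d2L/dxdy = -1,  dp/dx = 0  (no equality constraints)
M_plus = np.array([[1.0, 1.0],
                   [1.0, 0.0]])
N_plus = np.array([[-1.0],
                   [ 0.0]])

grad = -np.linalg.solve(M_plus, N_plus)   # [dy/dx; dlambda/dx]
dy_dx, dlam_dx = grad[0, 0], grad[1, 0]
```

This matches the closed-form solution $y^{*}(x)=1$ and $\lambda(x)=x-1$ for $x>1$, whose derivatives are $0$ and $1$, respectively.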
\begin{lemma}
\label{thm3}
Suppose that all requirements except the SCSC in Theorem \ref{thm1} are satisfied at $(y^0, \lambda^0, {\nu}^0)$ for $P\left(x^{0}\right)$.
Then,
\begin{itemize}
\item[(\romannumeral1)] $y^{0}$ is a locally unique local minimum of $P\left(x^{0}\right)$.
\item[(\romannumeral2)] There exist $\epsilon > 0$ and a unique Lipschitz continuous vector function
$$
z(x) \triangleq [y(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top},
$$
which is defined on $\mathcal{B}(x^0,\epsilon)$, and
$y(x)$ is the locally unique local minimum of $P(x)$ with unique associated Lagrangian multipliers $\lambda(x)$ and $\nu(x)$.
\end{itemize}
For a direction $d \in \mathbb{R}^{d_x}$,
define $J^{0}_{+}(x^0,y^0, d)$ as the set of all $j \in J^{0}(x^0,y^0)$ for which there exists ${\epsilon}_0>0$ such that, for any $0<{\epsilon}<{\epsilon}_0$,
$p_j(x^0+\epsilon d, y(x^0+\epsilon d))=0$ and $\lambda_j(x^0+\epsilon d) > 0$.
Denote $J^{0}_{-}(x^0,y^0, d) \triangleq J^{0}(x^0,y^0) \setminus
J^{0}_{+}(x^0,y^0, d)$,
\begin{equation}
\label{md16}
M_{D}(x^0, y^0, d) \triangleq
\left[\begin{array}{ccccccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x^0,y^0)}^{\top} & \nabla_{y} q^{\top} & \nabla_{y} p^{\top}_{J^{0}_{+}(x^0,y^0, d)}\\
\nabla_{y} p_{J^{+}(x^0,y^0)} & 0 & 0 & 0\\
\nabla_y q & 0 & 0 & 0 \\
\nabla_{y} p_{J^{0}_{+}(x^0,y^0, d)} & 0 & 0 & 0
\end{array}\right](x^0,y^0,\lambda^{0},{\nu}^{0})
\end{equation}
and
\begin{equation}
\label{nd17}
N_{D}(x^0, y^0,d) \triangleq
\left[\nabla_{x y}^{2} \mathcal{L}^{\top}, \nabla_{x} p_{J^{+}(x^0,y^0)}^{\top}, \nabla_{x} q^{\top}, \nabla_{x} p_{J^{0}_{+}(x^0,y^0, d)}^{\top} \right]^{\top}(x^0,y^0,\lambda^{0},{\nu}^{0}).
\end{equation}
\begin{itemize}
\item[(\romannumeral3)]
The directional derivative of $z(x)$ at $x^0$ in any direction $d \in \mathbb{R}^{d_x}$ with $\|d\|=1$ exists and is given by
\begin{equation}
\label{compute_g1}
\begin{array}{c}
\left[\begin{array}{c}
\nabla_{d} {y}(x^0) \\
\nabla_{d} \lambda_{J^{+}(x^0,y^0)}(x^0) \\
\nabla_{d} {\nu}(x^0) \\
\nabla_{d} {\lambda}_{J_+^0(x^0,y^0,d)}(x^0) \\
\end{array}\right]
= -M_{D}^{-1}(x^0, y^0, d) N_{D}(x^0, y^0,d) d, \\
\nabla_{d} {\lambda}_{J_-^0(x^0,y^0,d)}(x^0) =0, \\
\nabla_{d} {\lambda}_{J(x^0,y^0)^C}(x^0) =0,
\end{array}
\end{equation}
where $M_{D}(x^0,y^0,d)$ is nonsingular.
\end{itemize}
\end{lemma}
\begin{proof}
When the SCSC holds at $y^{0}$ w.r.t. $\lambda^{0}$, we have $J(x^0,y^0)= J^{+}(x^0,y^0)$, and this lemma is equivalent to Lemma \ref{thm4}.
When the SCSC is not satisfied, $J^{0}(x^0,y^0)\neq \emptyset$.
(\romannumeral1)
By part (\romannumeral1) of Theorem \ref{thm1+}, since the LICQ, the SSOSC, and the KKT conditions hold at $(y^0, {\lambda}^{0}, {\nu}^0)$ for ${P}\left(x^{0}\right)$, (\romannumeral1) holds.
(\romannumeral2) By part (\romannumeral2) of Theorem \ref{thm1+}, since the LICQ, the SSOSC, and the KKT conditions hold at $(y^0, {\lambda}^{0}, {\nu}^0)$ for ${P}\left(x^{0}\right)$, (\romannumeral2) holds and the directional derivative of $z(x)$ at $x^0$ in any direction exists.
(\romannumeral3)
By (\romannumeral2), the vector function $z(x) \triangleq [y(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top}$ defined on $\mathcal{B}(x^0,\epsilon)$ is the local solution of $P(x)$.
By (\romannumeral3) of Theorem \ref{thm1+} the LICQ hold at $y(x)$ for $P(x)$ for all $x \in \mathcal{B}(x^0,\epsilon)$.
Then, according to Theorem 3 in \cite{Giorgi2018ATO}, $z(x) \triangleq [y(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top}$ satisfies the KKT conditions for problem $P(x)$ for any $x \in \mathcal{B}(x^0,\epsilon)$, and $\lambda(x), \nu(x)$ are unique Lagrangian multipliers.
The directional derivative of $z(x)$ at $x^0$ on any direction exists.
(a) Consider $j \not\in J(x^0,y^0)$. From part (\romannumeral2) of Lemma \ref{lemma0}, in a small neighborhood of $(x^0,y^0)$ we have $p_{j}\left(x, y\right)< 0$ for all $j \not\in J(x^0,y^0)$, i.e., the constraints inactive at $(x^0,y^0)$ remain inactive in a neighborhood of $(x^0,y^0)$.
Then, for $j \not\in J(x^0,y^0)$, there exists $\beta_0$ such that
$p_j(x^0+\beta d, y(x^0+\beta d))<0$ and $\lambda_j(x^0+\beta d)=0$ for all $\beta<\beta_0$.
Therefore, $\nabla_{d} {\lambda}_{J(x^0,y^0)^C}(x^0) =0$.
(b) Consider $j \in J^{0}_{-}\left(x^0,y^0, d\right)$, i.e., there does not exist ${\beta}_0>0$ such that, for any ${\beta}<{\beta}_0$,
$p_j(x^0+\beta d, y(x^0+\beta d))=0$ and $\lambda_j(x^0+\beta d)>0$. There are two possible cases for the direction $d$. The first case is that there exists ${\beta}_0>0$ such that, for any ${\beta}<{\beta}_0$,
$p_j(x^0+\beta d, y(x^0+\beta d)) \leq 0$ and $\lambda_j(x^0+\beta d)=0$. The second case is that, for any ${\beta}_0>0$, we can always find ${\beta},\beta_1<{\beta}_0$ such that $p_j(x^0+\beta d, y(x^0+\beta d))=0$, $\lambda_j(x^0+\beta d) > 0$, and $p_j(x^0+\beta_1 d, y(x^0+\beta_1 d)) \leq 0$, $\lambda_j(x^0+\beta_1 d)=0$.
For the first case, we have $\nabla_{d} {\lambda}_{J_-^0(x^0,y^0,d)}(x^0) =0$.
For the second case, since the directional derivative of $y(x)$ and $\lambda(x)$ at $x^0$ on the direction exists, we have $\nabla_\beta p_j(x^0+\beta d, y(x^0+\beta d))=0$ and $\nabla_{d} {\lambda}_{J_-^0(x^0,y^0,d)}(x^0) =0$.
Thus, for $j \in J^{0}_{-}\left(x^0,y^0, d\right)$, $\nabla_{d} {\lambda}_{J_-^0(x^0,y^0,d)}(x^0) =0$.
(c) Consider $j \in J^{+}(x^0,y^0)$; we have $\lambda^0_j>0$. When $\beta$ is sufficiently small, we have $p_j(x^0+\beta d, y(x^0+\beta d))=0$ and $\lambda_j(x^0+\beta d)>0$.
(d) Consider $j \in J^{0}_{+}\left(x^0,y^0, d\right)$; by definition, $p_j(x^0+\beta d, y(x^0+\beta d))=0$ and $\lambda_j(x^0+\beta d) > 0$ for sufficiently small $\beta$.
(e) Considering the KKT conditions at $z(x)$, we have
$$
\left\{\begin{array}{l}
\nabla_{y} \mathcal{L}(y(x), \lambda(x), \nu(x), x)=0 \\
\lambda_{j}(x) p_{j}(y(x), x)=0, \ j=1, \ldots, m \\
q_j(y(x), x)=0, \ j=1, \ldots, n
\end{array}\right.
$$
for any $x \in \mathcal{B}(x^0,\epsilon)$.
Then, for any sufficiently small $\beta<\epsilon$, we have
$$
\left\{\begin{array}{l}
\nabla_{y} \mathcal{L}(y(x^0), \lambda(x^0), \nu(x^0), x^0)=0 \\
\lambda_{j}(x^0) p_{j}(y(x^0), x^0)=0, \ j=1, \ldots, m \\
q_j(y(x^0), x^0)=0, \ j=1, \ldots, n
\end{array}\right.
\text{ and }
\left\{\begin{array}{l}
\nabla_{y} \mathcal{L}(y(x^0+\beta d), \lambda(x^0+\beta d), \nu(x^0+\beta d), x^0+\beta d)=0 \\
\lambda_{j}(x^0+\beta d) p_{j}(y(x^0+\beta d), x^0+\beta d)=0, \ j=1, \ldots, m \\
q_j(y(x^0+\beta d), x^0+\beta d)=0, \ j=1, \ldots, n.
\end{array}\right.
$$
Then, we have
$$
\left.\frac{\partial \nabla_{y} \mathcal{L}\left(y\left(x^{0}+\beta d\right), \lambda\left(x^{0}+\beta d\right), \nu\left(x^{0}+\beta d\right), x^{0}+\beta d\right)}{\partial \beta}\right|_{\beta=0}=0,
$$
$$
\left.\frac{\partial \lambda_{j}(x^0+\beta d) p_{j}(y(x^0+\beta d), x^0+\beta d)}{\partial \beta}\right|_{\beta=0}=0, \ j=1, \ldots, m,
$$
$$
\left.\frac{\partial q_j(y(x^0+\beta d), x^0+\beta d)}{\partial \beta}\right|_{\beta=0}=0, \ j=1, \ldots, n.
$$
Then, we get
$$
\left[\begin{array}{ccccc}
\nabla_{y}^{2} \mathcal{L} & \left(\nabla_{y} p_{1}\right)^{\top} & \cdots & \left(\nabla_{y} p_{m}\right)^{\top} & \left(\nabla_{y} q\right)^{\top} \\
\lambda_{1} \nabla_{y} p_{1} & p_{1} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\lambda_{m} \nabla_{y} p_{m} & 0 & \cdots & p_{m} & 0 \\
\nabla_{y} q & 0 & \cdots & 0 & 0
\end{array}\right]
\left[\begin{array}{c}
\nabla_{d} y \\
\nabla_{d} \lambda_1 \\
\vdots \\
\nabla_{d} \lambda_m \\
\nabla_{d} \nu
\end{array}\right]
+
\left[\begin{array}{c}
\nabla_{x y}^{2} \mathcal{L} \\
\lambda_{1}\nabla_{x} p_{1} \\
\vdots \\
\lambda_{m}\nabla_{x} p_{m} \\
\nabla_{x} q
\end{array}\right] d =0.
$$
From (a) and (b), we have $\nabla_{d} {\lambda}_{J(x^0,y^0)^C}(x^0) =0$ and $\nabla_{d} {\lambda}_{J_-^0(x^0,y^0,d)}(x^0) =0$. Then, the equation reduces to
$$
\begin{aligned}
&\left[\begin{array}{cccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x^0,y^0)}^{\top} & \nabla_{y} p_{J^{0}_{+}(x^0,y^0, d)}^{\top} & \nabla_{y} q^{\top} \\
\lambda_{J^{+}(x^0,y^0)} \nabla_{y} p_{J^{+}(x^0,y^0)} & p_{J^{+}(x^0,y^0)} & 0 & 0 \\
{\lambda}_{J_+^0(x^0,y^0,d)} \nabla_{y} p_{J^{0}_{+}(x^0,y^0, d)} & 0 & p_{J^{0}_{+}(x^0,y^0, d)} & 0 \\
\nabla_{y} q & 0 & 0 & 0
\end{array}\right]
\left[\begin{array}{c}
\nabla_{d} y \\
\nabla_{d} \lambda_{J^{+}(x^0,y^0)} \\
\nabla_{d} {\lambda}_{J_+^0(x^0,y^0,d)} \\
\nabla_{d} \nu
\end{array}\right] \\
&+
\left[\begin{array}{c}
\nabla_{x y}^{2} \mathcal{L} \\
\lambda_{J^{+}(x^0,y^0)}\nabla_{x} p_{J^{+}(x^0,y^0)} \\
{\lambda}_{J_+^0(x^0,y^0,d)}\nabla_{x} p_{J^{0}_{+}(x^0,y^0, d)} \\
\nabla_{x} q
\end{array}\right] d =0.
\end{aligned}
$$
From (c) and (d), $p_{J^{+}(x^0,y^0)}=0$, $p_{J^{0}_{+}(x^0,y^0, d)}=0$, $\lambda_{J^{+}(x^0,y^0)}>0$, and ${\lambda}_{J_+^0(x^0,y^0,d)}>0$. Then, the positive factors $\lambda_{J^{+}(x^0,y^0)}$ and ${\lambda}_{J_+^0(x^0,y^0,d)}$ can be cancelled from the corresponding rows. We have
$$
\begin{array}{c}
\left[\begin{array}{c}
\nabla_{d} {y}(x^0) \\
\nabla_{d} \lambda_{J^{+}(x^0,y^0)}(x^0) \\
\nabla_{d} {\nu}(x^0) \\
\nabla_{d} {\lambda}_{J_+^0(x^0,y^0,d)}(x^0) \\
\end{array}\right]
= -M_{D}^{-1}(x^0, y^0, d) N_{D}(x^0, y^0,d) d
\end{array}
$$
where $M_{D}(x^0,y^0,d)$ and $N_{D}(x^0, y^0,d)$ are defined in \eqref{md16} and \eqref{nd17}. Since the LICQ holds, $M_{D}(x^0,y^0,d)$ is nonsingular.
\end{proof}
Suppose Assumptions \ref{a1}, \ref{a2}, and \ref{a3} are satisfied, and let $y^*(x)$ be the optimal solution of $P(x)$.
Define the matrix functions
$$
M_{+}(x) \triangleq
\left[\begin{array}{ccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x,y^*(x))}^{\top} & \nabla_{y} q^{\top} \\
\nabla_{y} p_{J^{+}(x,y^*(x))} & 0 & 0 \\
\nabla_y q & 0 & 0
\end{array}\right](x,y^*(x),\lambda(x),\nu(x))
$$
and
$$
N_{+}(x) \triangleq [\nabla_{x y}^{2} \mathcal{L}^{\top}, \nabla_{x} p_{J^{+}(x,y^*(x))}^{\top}, \nabla_{x} q^{\top}]^{\top}(x,y^*(x)).
$$
For a direction $d \in \mathbb{R}^{d_x}$, define
$J^{0}_{+}(x,y^*(x), d)$ as the set containing all $j \in J^{0}(x,y^*(x))$ such that there exists ${\epsilon}_0>0$ with, for any $0<{\epsilon}<{\epsilon}_0$,
$p_j(x+\epsilon d, y^*(x+\epsilon d))=0$ and $\lambda_j(x+\epsilon d) > 0$.
Denote $J^{0}_{-}(x,y^*(x), d) \triangleq J^{0}(x,y^*(x)) \setminus
J^{0}_{+}(x,y^*(x), d)$. Define
$$
M_{D}(x, d) \triangleq
\left[
\begin{array}{ccc}
M_{+} & \begin{array}{ccc}
\nabla_{y} p^{\top}_{J^{0}_{+}(x,y^*(x), d)} \\ 0 \\ 0 \\
\end{array} \\
\begin{array}{ccc}
\nabla_{y} p_{J^{0}_{+}(x,y^*(x), d)} & 0 & 0 \\
\end{array} & 0 \\
\end{array}\right](x,y^*(x))
$$
and
$$
N_{D}(x,d) \triangleq \left[N_{+}^{\top}(x), \nabla_{x} p_{J^{0}_{+}(x,y^*(x), d)}^{\top}(x,y^*(x)) \right]^{\top}.
$$
\begin{theorem}[Full version of the combination of Theorems \ref{th0} and \ref{th1}]
\label{th2}
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold. Then,
\begin{itemize}
\item[(\romannumeral1)] The global minimum $y^{*}(x)$ of $P\left(x\right)$ exists and is unique. The KKT conditions hold at $y^{*}(x)$ with unique Lagrangian multipliers $\lambda(x)$ and $\nu(x)$.
\item[(\romannumeral2)]
The vector function $z(x) \triangleq [y^{*}(x), \lambda(x), \nu(x)]$ is continuous and locally Lipschitz.
\end{itemize}
\begin{itemize}
\item[(\romannumeral3)]
The directional derivative of $z$ at $x$ in any direction $d \in \mathbb{R}^{d_x}$ with $\|d\|=1$ exists and is given by
\begin{equation}
\label{M0}
\begin{array}{c}
\left[\begin{array}{c}
\nabla_{d} {y}^{*}(x) \\
\nabla_{d} \lambda_{J^{+}(x,y^{*}(x))}(x) \\
\nabla_{d} {\nu}(x) \\
\nabla_{d} {\lambda}_{J_+^0(x,y^{*}(x),d)}(x)
\end{array}\right]
= -M_{D}^{-1}(x,d) N_{D}(x,d) d, \\
\nabla_{d} {\lambda}_{J_-^0(x,y^{*}(x),d)}(x) =0, \\
\nabla_{d} {\lambda}_{J(x,y^{*}(x))^C}(x)=0,
\end{array}
\end{equation}
where $M_{D}(x,d)$ is nonsingular.
\end{itemize}
\begin{itemize}
\item[(\romannumeral4)] If the SCSC holds at $y^{*}(x)$ w.r.t. $\lambda(x)$, then ${z}$ is continuously differentiable at $x$ and its gradient is computed by
$$
\left[\begin{array}{c}
\nabla_{x} y^{*}(x) \\
\nabla_{x} \lambda_{J(x,y^{*}(x))}(x) \\
\nabla_{x} \nu(x)
\end{array}\right]
= -M_{+}^{-1}(x) N_{+}(x),
$$
$$\nabla_{x} {\lambda}_{J(x,y^{*}(x))^C}(x) =0,$$
where ${M}_{+}(x)$ is nonsingular.
\end{itemize}
\end{theorem}
\begin{proof}
(\romannumeral1)
Firstly, we show $y^*(x)$ exists and is unique.
Function $g(x,y)$ is $\mu$-strongly-convex w.r.t. $y$, and the feasible set $K\left(x\right)$ is convex and closed for all ${x} \in \mathbb{R}^{d_x}$.
Case one: $K(x)$ is bounded. Then, $K(x)$ is a compact set and $g(x,y)$ is continuous, which imply that the minimum $y^*(x)$ exists.
Case two: $K(x)$ is not bounded.
Function $g(x,y)$ is $\mu$-strongly-convex w.r.t. $y$, then
$$g(x,y) \geq g(x,y^0) + \nabla_y g(x,y^0)^{\top} (y-y^0)+\frac{\mu}{2}\|y-y^0\|^2.$$
Then, $\lim_{\|y\| \rightarrow \infty } g(x,y) = +\infty$. Hence, for any real number $\alpha$, the sublevel set $\{y \mid g(x,y) \leq \alpha\}$ is closed and bounded, and thus compact. Choosing $\alpha = g(x,\bar{y})$ for some $\bar{y} \in K(x)$, the set $K(x) \cap \{y \mid g(x,y) \leq \alpha\}$ is nonempty and compact, as a closed subset of a compact set.
Then, the compactness of $K(x) \cap \{y \mid g(x,y) \leq \alpha\}$ and the continuity of $g(x,y)$ imply that the optimization problem $\min g(x,y) \text{ s.t. } y \in K(x) \cap \{t \mid g(x,t) \leq \alpha\}$ has a minimum, which is also the minimum of problem $P(x)$. Thus, the global minimum $y^*(x)$ exists.
Since $y^{*}(x)$ is a global minimum of $P(x)$, it is also a local minimum.
Assume $y^{*}(x)$ is not the unique local minimum of $P(x)$, i.e., there exists a local minimum $y^{\prime} \in K(x)$ with $y^{\prime} \neq y^{*}(x)$. Since $y^{*}(x)$ is a local minimum, for $\alpha$ sufficiently close to $1$ we have $g(x,y^{*}(x)) \leq g(x, \alpha y^{*}(x) +(1-\alpha) y^{\prime})$. By the strong convexity of $g(x,\cdot)$, $g(x, \alpha y^{*}(x) +(1-\alpha) y^{\prime}) < \alpha g(x,y^{*}(x)) + (1-\alpha)g(x, y^{\prime})$, so $g(x,y^{*}(x)) < g(x, y^{\prime})$. By the symmetric argument at $y^{\prime}$ (with $\alpha$ close to $0$), $g(x,y^{\prime}) < g(x,y^{*}(x))$, a contradiction.
Thus, $y^{*}(x)$ is the unique local minimum, and then it is the unique global minimum.
Since $y^{*}(x)$ is a global minimum of $P(x)$, it is a local minimum.
By Assumption \ref{a3}, the LICQ holds at $y^{*}(x)$ for $P(x)$.
Then, the KKT conditions hold at $y^{*}(x)$ with unique Lagrangian multipliers $\lambda(x)$ and $\nu(x)$ for $P(x)$ by (i) of Theorem \ref{thm1+}.
(\romannumeral2)(\romannumeral3) We first show that, for all ${x} \in \mathbb{R}^{d_x}$, the SSOSC holds at ${y}^{*}(x)$ with Lagrangian multipliers $\lambda(x)$ and $\nu(x)$ for $P(x)$.
By Assumption \ref{a2}, $g(x,y)$ is $\mu$-strongly-convex w.r.t. $y$, which implies that $\nabla_{y}^{2} g(x, {y}^{*}(x))$ is positive definite;
since $p_j(x,y)$ is convex, $\nabla_{y}^{2} p_j(x, {y}^{*}(x))$ is positive semi-definite; since $q_j(x,y)$ is affine, $\nabla_{y}^{2} q(x, {y}^{*}(x))=0$.
The KKT condition in (\romannumeral1) implies that $\lambda(x) \geq 0$.
Then,
$$
\nabla_{y}^{2} \mathcal{L}\left({y}^{*}(x), {\lambda}(x), {\nu}(x), x\right)=\nabla_{y}^{2} g(x, {y}^{*}(x))+ \lambda(x)^{\top} \nabla_{y}^{2} p(x, {y}^{*}(x))+ \nu(x)^{\top} \nabla_{y}^{2} q(x, {y}^{*}(x))
$$
is positive definite. Therefore, the SSOSC holds.
The KKT conditions, the LICQ, and the SSOSC hold at ${y}^{*}(x)$ with Lagrangian multipliers $\lambda(x)$ and $\nu(x)$ for $P(x)$.
By Lemma \ref{thm3}, for all ${x}^0 \in \mathbb{R}^{d_x}$, in a neighborhood of ${x}^0$, there exists a unique Lipschitz continuous vector function
$\hat{z}(x) \triangleq [\hat{y}^{\top}(x), \hat{\lambda}^{\top}(x), \hat{\nu}^{\top}(x)]^{\top}$ and
$\hat{y}(x)$ is a locally unique local minimum of $P(x)$. Since the local minimum $y^{*}(x)$ and its Lagrangian multipliers $\lambda(x)$ and $\nu(x)$ for $P(x)$ are unique, $\hat{z}(x)=z(x)$.
For any $x^0 \in \mathbb{R}^{d_x}$, the vector function
$
z(x) \triangleq [y^{*}(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top}
$
is Lipschitz in a neighborhood of $x^0$; hence $z(x)$ is locally Lipschitz on $\mathbb{R}^{d_x}$. The computation of the directional derivative is given in Lemma \ref{thm3}.
(\romannumeral4) The KKT conditions, the LICQ, and the SSOSC hold at ${y}^{*}(x)$ with Lagrangian multipliers $\lambda(x)$ and $\nu(x)$ for $P(x)$. By Lemma \ref{thm4}, if the SCSC holds at $y^{*}(x^0)$ w.r.t. $\lambda(x^0)$ for ${x}^0 \in \mathbb{R}^{d_x}$, then, in a neighborhood of ${x}^0$, there exists a unique continuously differentiable vector function
$\hat{z}(x) \triangleq [\hat{y}^{\top}(x), \hat{\lambda}^{\top}(x), \hat{\nu}^{\top}(x)]^{\top}$ such that
$\hat{y}(x)$ is a locally unique local minimum of $P(x)$. Since the local minimum $y^{*}(x)$ and its Lagrangian multipliers $\lambda(x)$ and $\nu(x)$ for $P(x)$ are unique, $\hat{z}(x)=z(x)$. The computation of the gradient is given in Lemma \ref{thm4}.
\end{proof}
For a constrained lower-level optimization $P(x)$ and each direction $d$, the method of \cite{ralph1995directional} computes $\nabla_d y^{*}(x)$ by solving a quadratic programming problem. In Theorem \ref{th2}, we compute $\nabla_d y^{*}(x)$ by \eqref{M0}, where the set $J^{0}_{+}(x,y^*(x), d)$ can be determined by sampling small values of ${\epsilon}$ in a neighborhood of $0$.
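To make the computation in \eqref{M0} concrete, the following sketch assembles and solves the sensitivity system for a hypothetical toy lower-level problem (a projection onto a halfspace); the problem data $a$, $b$ and all variable names are illustrative assumptions, not objects from the paper, and the result is checked against a finite difference.

```python
import numpy as np

# Toy strongly convex lower-level problem (illustrative assumption):
#   P(x): min_y 0.5*||y||^2 - x^T y   s.t.   p(y) = a^T y - b <= 0.
# When the constraint is active with lambda > 0, the KKT system is
#   y - x + lam * a = 0,   a^T y = b,
# and differentiating it in a direction d gives the linear system
#   M_D [grad_d y; grad_d lam] = -N_D d   of the theorem above.
a = np.array([1.0, 1.0])
b = 1.0
x = np.array([1.0, 1.0])
d = np.array([1.0, 0.0])

lam = (a @ x - b) / (a @ a)          # active multiplier (> 0 here)
y_star = x - lam * a                 # projection onto the halfspace

# Sensitivity system: M_D = [[I, a], [a^T, 0]],  N_D = [[-I], [0]].
M_D = np.block([[np.eye(2), a.reshape(2, 1)],
                [a.reshape(1, 2), np.zeros((1, 1))]])
N_D = np.vstack([-np.eye(2), np.zeros((1, 2))])
z = np.linalg.solve(M_D, -N_D @ d)   # [grad_d y; grad_d lam]
grad_d_y = z[:2]

# Finite-difference check of the directional derivative.
eps = 1e-6
x_eps = x + eps * d
y_eps = x_eps - ((a @ x_eps - b) / (a @ a)) * a
fd = (y_eps - y_star) / eps
print(grad_d_y, fd)  # both approximately [0.5, -0.5]
```

For this toy problem the analytic Jacobian is $I - aa^{\top}/\|a\|^2$, so both computations return $(0.5, -0.5)$ for $d=(1,0)$.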
\subsection{Proofs of Propositions \ref{prop2_0} and \ref{prop3}}
The following lemma provides a way to compute the Clarke subdifferential of $y^{*}$ without the SCSC.
\begin{lemma}
\label{prop1}
Suppose all assumptions in Theorem \ref{th1} hold.
Then,
\begin{equation}
\label{c_d}
\bar{\partial} y^{*}\left(x^{0}\right)=\operatorname{conv}\left\{ {w}^S(x^0) : S \subseteq J^{0}(x^0,y^*(x^0))\right\}.
\end{equation}
Here,
${w}^S(x^0)$ is obtained by extracting the first $d_y$ rows from the matrix $-M^{S}_+(x^0,y^*(x^0))^{-1} N^{S}_+(x^0,y^*(x^0))$. When $S$ is not empty,
$$M^{S}_+ (x,y) \triangleq
\left[\begin{array}{cccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x^0,y^*(x^0))}^{\top} & \nabla_{y} q^{\top} & \nabla_{y} p_S^{\top} \\
\nabla_{y} p_{J^{+}(x^0,y^*(x^0))} & 0 & 0 & 0 \\
\nabla_y q & 0 & 0 & 0 \\
\nabla_{y} p_S & 0 & 0 & 0
\end{array}\right](x,y),$$
and $$N^{S}_+ (x,y) \triangleq \left[\nabla_{x y}^{2} \mathcal{L}^{\top}, \nabla_{x} p_{J^{+}(x^0,y^*(x^0))}^{\top}, \nabla_{x} q^{\top}, \nabla_{x} p_{S}^{\top} \right]^{\top}(x,y).$$
When $S$ is empty, $M^{S}_+(x^0,y^*(x^0)) ={M}_{+}(x^0,y^*(x^0))$ and $N^{S}_+(x^0,y^*(x^0))= {N}_{+}(x^0,y^*(x^0))$, where
$$
M_{+}(x,y) = \left[\begin{array}{ccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_{y} p_{J^{+}(x^0,y^*(x^0))}^{\top} & \nabla_{y} q^{\top} \\
\nabla_{y} p_{J^{+}(x^0,y^*(x^0))} & 0 & 0 \\
\nabla_y q & 0 & 0
\end{array}\right](x,y),
$$
and
$$
N_{+}(x,y) = [\nabla_{x y}^{2} \mathcal{L}^{\top}, \nabla_{x} p_{J^{+}(x^0,y^*(x^0))}^{\top}, \nabla_{x} q^{\top}]^{\top}(x,y).
$$
\end{lemma}
Lemma \ref{prop1} is proved in \cite{dempe1998implicit,malanowski1985differentiability}, and can also
be derived from the directional derivative in part (iii) of Theorem \ref{th1}.
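To make Lemma \ref{prop1} concrete, the sketch below enumerates the subsets $S \subseteq J^{0}$ for a hypothetical toy problem (again a projection onto a halfspace, an illustrative stand-in rather than the paper's setting) at a degenerate point where the constraint is active with zero multiplier; the two candidate Jacobians are the extreme points of the Clarke subdifferential.

```python
import numpy as np
from itertools import combinations

# Hypothetical toy problem (illustration only): min_y 0.5*||y||^2 - x^T y
# s.t. a^T y <= b.  At a degenerate x with a^T x = b and lambda = 0, the
# degenerate set J^0 = {1}, and the lemma gives the Clarke subdifferential
# of y^* as conv{w^S : S subseteq J^0}.
a = np.array([1.0, 1.0])

def w(S):
    # S empty: the constraint is dropped, y^*(x) = x, so the Jacobian is I.
    # S = {1}: the constraint is treated as an equality, giving the
    # projection Jacobian I - a a^T / ||a||^2 (the y-block of -M^{-1}N here).
    if not S:
        return np.eye(2)
    return np.eye(2) - np.outer(a, a) / (a @ a)

J0 = (1,)
candidates = [w(S) for r in range(len(J0) + 1) for S in combinations(J0, r)]
for W in candidates:
    print(W)
```

The convex hull of the printed matrices (the identity and the rank-deficient projection Jacobian) is exactly the set \eqref{c_d} for this toy problem.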
\begin{lemma}
\label{lemma1}
Let $f: \mathbb{R}^{d_x} \longrightarrow \mathbb{R}$ be a locally Lipschitz function. Then $f$ is Lipschitz continuous on any compact set $S \subset \mathbb{R}^{d_x}$.
\end{lemma}
\begin{proof}
Since $f: \mathbb{R}^{d_x} \longrightarrow \mathbb{R}$ is locally Lipschitz, for any $x \in \mathbb{R}^{d_x}$ there are a constant $l(x)$ and ${\epsilon}(x) >0$ such that $l(x)$ is a Lipschitz constant of $f$ on $\mathcal{B}(x,\epsilon(x))$. For any compact set $S \subset \mathbb{R}^{d_x}$, we have $S \subset {\cup}_{x \in S} \mathcal{B}(x,\epsilon(x))$, so $\{\mathcal{B}(x,\epsilon(x))\}_{x \in S}$ is an open cover of $S$. Since $S$ is compact, there exists a finite set $F$ such that $S \subset {\cup}_{x \in F} \mathcal{B}(x,\epsilon(x))$, and this cover has a Lebesgue number $\delta>0$, i.e., any two points of $S$ at distance less than $\delta$ lie in a common ball $\mathcal{B}(x,\epsilon(x))$ with $x \in F$. For $u, v \in S$ with $\|u-v\|<\delta$, we have $|f(u)-f(v)| \leq \operatorname{max}_{x \in F}l(x) \|u-v\|$; for $u, v \in S$ with $\|u-v\|\geq\delta$, since $f$ is continuous and $S$ is compact, $|f(u)-f(v)| \leq 2 \operatorname{max}_{w \in S}|f(w)| \leq \frac{2 \operatorname{max}_{w \in S}|f(w)|}{\delta} \|u-v\|$. Thus, $\operatorname{max}\{\operatorname{max}_{x \in F}l(x), \frac{2 \operatorname{max}_{w \in S}|f(w)|}{\delta}\} <\infty$ is a Lipschitz constant of $f$ on $S$.
\end{proof}
\begin{lemma}
\label{lemma2}
Suppose all assumptions in Theorem \ref{th2} hold.
Given an open ball $B$, suppose that for each $j$, either
$$j\in J^+(x,y^*(x))^C = J(x,y^*(x))^C \cup J^0(x,y^*(x)) \ \text{ for all } x \in B,$$
or
$$j\in J^+(x,y^*(x)) \ \text{ for all } x \in B.$$
Then $y^{*}(x)$ is continuously differentiable on $B$.
\end{lemma}
\begin{proof}
For any $j$,
$\text{ either } p_j(x,y^*(x)) \leq 0, \lambda_j(x)=0 \text{ for all } x \in B, \text{ or } p_j(x,y^*(x))=0, \lambda_j(x)>0 \text{ for all } x \in B$.
Consider the set
$$
B^{\prime} \triangleq \{x^{\prime} \in B: \text{for all } j, \text{ either } p_j(x^{\prime},y^*(x^{\prime}))<0, \lambda_j(x^{\prime})=0 \text{ or } p_j(x^{\prime},y^*(x^{\prime}))=0, \lambda_j(x^{\prime})>0\}.
$$
(i)
By part (\romannumeral4) of Theorem \ref{th2}, for any point $x^{\prime} \in B^{\prime}$, the SCSC holds at $y^{*}(x^{\prime})$, and $y^{*}$ is continuously differentiable at $x^{\prime}$.
(ii)
Consider the points $x \in B \setminus B^{\prime}$; there exists $j$ such that $p_j(x,y^*(x))=0$ and $\lambda_j(x)=0$, i.e., $J^{0}(x,y^*(x))$ is not empty.
(ii.a) Consider $j$ such that
$p_j(x,y^*(x))\leq0$, $\lambda_j(x)=0$ for all $ x \in B$. By the definitions in Theorem \ref{th2}, $j \in J^{0}_{-}(x,y^*(x), d)$ is not added to the computation of $-M_{D}^{-1} N_{D}$ in \eqref{compute_g1} for any direction $d$.
(ii.b)
Consider $j$ such that
$p_j(x,y^*(x))=0$, $\lambda_j(x) > 0$ for all $x \in B$. By the definitions in Theorem \ref{th2}, $j \in J^{0}_{+}(x,y^*(x), d)$ is added to the computation of $-M_{D}^{-1} N_{D}$ in \eqref{compute_g1} for any direction $d$.
From (ii.a) and (ii.b), for the points $x \in B \setminus B^{\prime}$ and any $d$, the computation of $-M_{D}^{-1} N_{D}$ in \eqref{compute_g1} is identical, i.e.,
$\lim _{h \rightarrow 0} \frac{\left\|y^*(x+h)-y^*(x)+M_{D}^{-1}(x,d) N_{D}(x,d) h\right\|}{\|h\|}=0$. Thus, $y^{*}(x)$ is differentiable on $B \setminus B^{\prime}$.
Moreover, the derivative $M_{D}^{-1}(x,d) N_{D}(x,d)$ is continuous for any $x$ and $d$.
Thus, $y^{*}(x)$ is continuously differentiable on $B$.
\end{proof}
\subsubsection{Proof of Proposition \ref{prop2_0}}
\begin{proof}[Proof of Proposition \ref{prop2_0}]
By the computation of the gradient of $y^{*}$ in part (\romannumeral3) of Theorem \ref{th2}, $\Phi(x)$ and $y^{*}$ are differentiable on $\mathcal{B}(x^0,\epsilon)$; since the gradient formula is itself differentiable, they are twice-differentiable on $\mathcal{B}(x^0,\epsilon)$. When $\epsilon$ is sufficiently small, $| \|\nabla \Phi(x^0)\|- \|\nabla \Phi(x^{\prime})\| | < o(\epsilon)$ for any $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$. Then, $|\|\nabla \Phi(x^0)\| - d(0, \bar{\partial}_{\epsilon} \Phi(x^0))|< o(\epsilon)$.
\end{proof}
\subsubsection{Proof of Proposition \ref{prop3}}
\begin{proposition}[Full version of Proposition \ref{prop3}]
\label{prop4}
Suppose Assumptions \ref{a1}, \ref{a2}, \ref{a3} hold. Consider $x^0 \in \mathbb{R}^{d_x}$ and $\epsilon>0$ such that there exists $x \in \mathcal{B}(x^0,\epsilon)$ at which $y^{*}$ is not continuously differentiable.
Then, there exists at least one $j$ for which there exist $x^{\prime}$, $x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$ with
\begin{equation}
\begin{aligned}
\label{eq1600000}
j\in J^+(x^{\prime},y^*(x^{\prime}))^C = J(x^{\prime}, y^*(x^{\prime}))^C \cup J^0(x^{\prime}, y^*(x^{\prime})) \text{ and } \ j\in J^+(x^{\prime\prime},y^*(x^{\prime\prime}));
\end{aligned}
\end{equation}
Define the set $I^\epsilon(x^0)$ which contains all $j$ that satisfy \eqref{eq1600000}.
Define the set $I^{\epsilon}_{+}(x^0)$ which contains all $j$ such that, for any $x \in \mathcal{B}(x^0,\epsilon)$,
$$
j\in J^+(x, y^*(x));
$$
Define the set $I^{\epsilon}_{-}(x^0)$ which contains all $j$ such that, for any $x \in \mathcal{B}(x^0,\epsilon)$,
$$
j\in J(x, y^*(x))^C \cup J^0(x, y^*(x)).
$$
The set $G(x^{0},\epsilon)$ is defined as
$$
G(x^{0},\epsilon)\triangleq\{ \nabla_{x} f\left(x^0, y^{*}\left(x^0\right)\right)+{w}^S(x^0)^{\top}
\nabla_{y} f\left(x^0, y^{*}\left(x^0\right)\right): S \subseteq I^\epsilon(x^0)\}.
$$
Here,
${w}^S(x^0)$ is obtained by extracting the first $d_y$ rows from the matrix $-M^{S}_{\epsilon}(x^0,y^*(x^0))^{-1} N^{S}_{\epsilon}(x^0,y^*(x^0))$, with
$$M^{S}_{\epsilon} \triangleq
\left[\begin{array}{cccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_y p_{I^{\epsilon}_{+}(x^0)}^{\top} & \nabla_{y} q^{\top} & \nabla_y p^{\top}_{S}\\
\nabla_y p_{I^{\epsilon}_{+}(x^0)} & 0 & 0 &0\\
\nabla_y q & 0 & 0 &0\\
\nabla_y p_{S} & 0 & 0 &0
\end{array}\right],$$
and
$$N^{S}_{\epsilon} \triangleq \left[\nabla_{x y}^{2}\mathcal{L}^{\top}, \nabla_{x} p_{I^{\epsilon}_{+}(x^0)}^{\top}, \nabla_{x} q^{\top}, \nabla_{x} p_{S}^{\top} \right]^{\top}.$$
Consider $x^0 \in \mathbb{R}^{d_x}$, and assume there exists a sufficiently small $\epsilon>0$ such that $y^{*}$ is not continuously differentiable at some $x \in \mathcal{B}(x^0,\epsilon)$.
Then, the following holds
\begin{itemize}
\item[(\romannumeral1)] For any $g \in G\left(x^{0},\epsilon\right)$, there exists $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$ such that
$\| g-\nabla \Phi(x^{\prime}) \|< o(\epsilon)$. Conversely, for any $x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$ at which $y^*$ is differentiable, there exists $g \in G\left(x^{0},\epsilon\right)$ such that
$\| g-\nabla \Phi(x^{\prime\prime}) \|< o(\epsilon)$.
\item[(\romannumeral2)] For any $z \in \mathbb{R}^{d_x}$, $$| d(z, \operatorname{conv} G(x^{0},\epsilon)) - d(z, \bar{\partial}_{\epsilon} \Phi(x^0))|< o(\epsilon).$$
\end{itemize}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{prop4}]
Suppose that, for $x^0 \in \mathbb{R}^{d_x}$ and $\epsilon>0$, $y^{*}$ is not continuously differentiable on the open ball $\mathcal{B}(x^0,\epsilon)$.
Suppose that, for all $j$, points $x^{\prime}$, $x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$ with
\eqref{eq1600000}
do not exist. By Lemma \ref{lemma2}, $y^{*}(x)$ is continuously differentiable on $\mathcal{B}(x^0,\epsilon)$, a contradiction. Then, there exists at least one $j$ for which there exist $x^{\prime}$, $x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$ with \eqref{eq1600000}.
(\romannumeral1) Firstly, similar to the proof of Proposition \ref{prop2_0}, if the components of
$$M^{S}_{\epsilon} =
\left[\begin{array}{cccc}
\nabla_{y}^{2} \mathcal{L} & \nabla_y p_{I^{\epsilon}_{+}(x^0)}^{\top} & \nabla_{y} q^{\top} & \nabla_y p^{\top}_{S}\\
\nabla_y p_{I^{\epsilon}_{+}(x^0)} & 0 & 0 &0\\
\nabla_y q & 0 & 0 &0\\
\nabla_y p_{S} & 0 & 0 &0
\end{array}\right],$$
and
$$N^{S}_{\epsilon}= \left[\nabla_{x y}^{2}\mathcal{L}^{\top}, \nabla_{x} p_{I^{\epsilon}_{+}(x^0)}^{\top}, \nabla_{x} q^{\top}, \nabla_{x} p_{S}^{\top} \right]^{\top}.$$
are the same as the components of $M_{D}(x^0,d)$ and $N_{D}(x^0,d)$ shown in \eqref{M0},
i.e., the sets involved in the matrix computation satisfy
$$I^{\epsilon}_{+}(x^{0}) \cup S ={J^{0}_{+}(x^{\prime\prime},y^*(x^{\prime\prime}), d)} \cup J^{+}(x^{\prime\prime},y^*(x^{\prime\prime})),$$
then the difference $\|M^{S}_{\epsilon}(x^{\prime},y^*(x^{\prime}))^{-1} N^{S}_{\epsilon}(x^{\prime},y^*(x^{\prime})) - M_{D}(x^{\prime\prime},d)^{-1} N_{D}(x^{\prime\prime},d)\|< o(\epsilon)$ for all $x^{\prime}$ and $x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$, since the function
${(M^{S}_{\epsilon})}^{-1} N^{S}_{\epsilon}$ is differentiable (refer to the proof of Proposition \ref{prop2_0}).
(a) Suppose that, for any $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$ and any $d \in \mathbb{R}^{d_x}$, there exists $g^S \in G\left(x^{0},\epsilon\right)$ such that
$$I^{\epsilon}_{+}(x^{0}) \cup S ={J^{0}_{+}(x^{\prime},y^*(x^{\prime}), d)} \cup J^{+}(x^{\prime},y^*(x^{\prime}));$$
Then, for any $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$ such that $y^{*}$ is differentiable at $x^{\prime}$, there exists $g^S \in G\left(x^{0},\epsilon\right)$ such that, for a direction $d \in \mathbb{R}^{d_x}$, $\|M^{S}_{\epsilon}(x^{0},y^*(x^{0}))^{-1} N^{S}_{\epsilon}(x^{0},y^*(x^{0})) - M_{D}(x^{\prime},d)^{-1} N_{D}(x^{\prime},d)\|< o(\epsilon)$. Then, $\|w^S(x^0)-\nabla y^{*}(x^{\prime})\|< o(\epsilon)$ and $\| g^S-\nabla \Phi(x^{\prime}) \|< o(\epsilon)$.
(b) Suppose that, for any $g^S \in G\left(x^{0},\epsilon\right)$, there exists $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$ and $d \in \mathbb{R}^{d_x}$ such that
$$I^{\epsilon}_{+}(x^{0}) \cup S ={J^{0}_{+}(x^{\prime},y^*(x^{\prime}), d)} \cup J^{+}(x^{\prime},y^*(x^{\prime})).$$
Since ${J^{0}_{+}(x^{\prime},y^*(x^{\prime}), d)} \subseteq J^{0}(x^{\prime},y^*(x^{\prime}))$, from Lemma \ref{prop1} and its proof shown in \cite{dempe1998implicit,malanowski1985differentiability}, the corresponding Jacobian candidate lies in $\bar{\partial} y^{*}\left(x^{\prime}\right)$, and there exists a sequence $\left\{x^{j}\right\} \rightarrow x^{\prime}$, with $y^{*}$ differentiable at each $x^{j}$, such that the candidate equals $\lim _{j \rightarrow \infty} \nabla y^{*}\left(x^{j}\right)$.
Then, there exists $x^{\prime\prime}$ in any small neighborhood of $x^{\prime}$, such that ${J^{0}_{+}(x^{\prime},y^*(x^{\prime}), d)} \cup J^{+}(x^{\prime},y^*(x^{\prime}))= J^{+}(x^{\prime\prime},y^*(x^{\prime\prime}))$ and $y^*$ is differentiable at $x^{\prime\prime}$.
Then,
$$I^{\epsilon}_{+}(x^{0}) \cup S = J^{+}(x^{\prime\prime},y^*(x^{\prime\prime})),$$
and we have $\|w^S(x^0)-\nabla y^{*}(x^{\prime\prime})\|< o(\epsilon)$ and $\| g^S-\nabla \Phi(x^{\prime\prime}) \|< o(\epsilon)$ with
$x^{\prime\prime} \in \mathcal{B}(x^0,\epsilon)$. Then, the part (i) of the proposition is shown.
Now, we can prove (\romannumeral1) by showing that the assumptions of (a) and (b) hold.
Define $J^{N}(x,y^*(x)) \triangleq \{j: j \not\in {J^{+}(x,y^*(x))} \cup {J^{0}(x,y^*(x))} \}$. Then, for any $x^{*} \in \mathcal{B}(x^0,\epsilon)$ and any $d \in \mathbb{R}^{d_x}$,
$
I^{\epsilon}(x^0) \cup I_+^{\epsilon}(x^0) \cup I_-^{\epsilon}(x^0) =J^{0}({x^*,y^*(x^{*})}) \cup J^{+}({x^*,y^*(x^{*})}) \cup J^{N}(x^*,y^*(x^*))= J^{0}_{-}({x^*,y^*(x^{*}),d}) \cup J^{0}_{+}({x^*,y^*(x^{*}),d}) \cup J^{+}({x^*,y^*(x^{*})}) \cup J^{N}(x^*,y^*(x^*)),
$
since both sides contain all constraints $p_j$. Note that the sets $I^{\epsilon}(x^0)$, $I_+^{\epsilon}(x^0)$, and $I_-^{\epsilon}(x^0)$ are pairwise disjoint, the sets $J^{0}_{-}({x^*,y^*(x^{*}),d})$, $J^{0}_{+}({x^*,y^*(x^{*}),d})$, $J^{+}({x^*,y^*(x^{*})})$, and $J^{N}(x^*,y^*(x^*))$ are pairwise disjoint, and $J^{0}({x^*,y^*(x^{*})}) = J^{0}_{+}({x^*,y^*(x^{*}),d}) \cup J^{0}_{-}({x^*,y^*(x^{*}),d})$.
\begin{itemize}
\item [(1)] Consider $j \in I_+^{\epsilon}(x^0)$, i.e., $p_j(x,y^*(x))=0$ and $\lambda_j(x) > 0$ for all $x \in \mathcal{B}(x^0,\epsilon)$.
Then $j \in {J^{+}(x^{\prime},y^*(x^{\prime}))}$ for any $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$.
Then,
$j \in {J^{0}_{+}(x^{\prime},y^*(x^{\prime}), d)} \cup {J^{+}(x^{\prime},y^*(x^{\prime}))}$ for any $d \in \mathbb{R}^{d_x}$ and any $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$.
Thus, $$I_+^{\epsilon}(x^0) \subseteq {J^{0}_{+}(x^{\prime},y^*(x^{\prime}), d)} \cup {J^{+}(x^{\prime},y^*(x^{\prime}))}$$ for any $d \in \mathbb{R}^{d_x}$ and any $x^{\prime} \in \mathcal{B}(x^0,\epsilon)$.
\item [(2)] Consider $j \in I_-^{\epsilon}(x^0)$. If given $x \in \mathcal{B}(x^0,\epsilon)$, we have $p_j(x,y^*(x))<0$ and $\lambda_j(x) = 0$, then $j \not\in {J^{+}(x,y^*(x))} \cup {J^{0}(x,y^*(x))} $, i.e., $j \in J^{N}(x,y^*(x))$.
If given $x \in \mathcal{B}(x^0,\epsilon)$, we have $p_j(x,y^*(x))=0$, $\lambda_j(x) = 0$ and this holds for any $x \in \mathcal{B}(x^0,\epsilon)$, then $j \in {J^{0}_{-}(x,y^*(x), d)}$ for any $d \in \mathbb{R}^{d_x}$.
If given $x \in \mathcal{B}(x^0,\epsilon)$, we have $p_j(x,y^*(x))=0$ and $\lambda_j(x) = 0$ but this does not hold on all of $\mathcal{B}(x^0,\epsilon)$, then still $j \in {J^{0}_{-}(x,y^*(x), d)}$ for any $d \in \mathbb{R}^{d_x}$, since $j \not\in J^{+}$ on $\mathcal{B}(x^0,\epsilon)$ rules out $j \in {J^{0}_{+}(x,y^*(x), d)}$.
Then, $$I_-^{\epsilon}(x^0) \subseteq {J^{0}_{-}(x,y^*(x), d)} \cup J^{N}(x,y^*(x)),$$
for any $x \in \mathcal{B}(x^0,\epsilon)$ and any $d \in \mathbb{R}^{d_x}$.
\item [(3)] Consider $j \in I^{\epsilon}(x^0)$.
When $\epsilon>0$ is sufficiently small, there exists $x^{*} \in \mathcal{B}(x^0,\epsilon)$ such that $p_j(x^{*},y^*(x^{*}))=0$ and $\lambda_j(x^{*}) = 0$ for all $j \in I^{\epsilon}(x^0)$.
Then, we have $$I^{\epsilon}(x^0) \subseteq J^{0}({x^*,y^*(x^{*})}).$$ Then, for any $S \subseteq I^{\epsilon}(x^0)$, $S \subseteq J^{0}({x^*,y^*(x^{*})})$.
Also, $${J^{+}(x^*,y^*(x^*))} \subseteq I_+^{\epsilon}(x^0).$$
Indeed, suppose $j \in {J^{+}(x^*,y^*(x^*))}$ and $j \not\in I_+^{\epsilon}(x^0)$; then
$j \in I^{\epsilon}(x^0)$, so $p_j(x^{*},y^*(x^{*}))=0$ and $\lambda_j(x^{*}) = 0$, i.e., $j \in {J^{0}(x^*,y^*(x^*))}$, which contradicts $j \in {J^{+}(x^*,y^*(x^*))}$.
\end{itemize}
From (1) and (3), we have that, there exists $x^{*} \in \mathcal{B}(x^0,\epsilon)$, for any $d \in \mathbb{R}^{d_x}$,
\begin{equation}
\label{eq222}
J^{+}(x^*, y^*(x^*)) \subseteq I_+^{\epsilon}(x^0) \subseteq {J^{+}(x^*,y^*(x^*))}\cup {J_+^{0}(x^*,y^*(x^*),d)}.
\end{equation}
From Lemma \ref{prop1} and the proof of the lemma shown in \cite{dempe1998implicit,malanowski1985differentiability}, for any set $A \subseteq J^{0}({x^*,y^*(x^{*})})$, we can find a direction $d$ such that $A = {J^{0}_{+}(x^*,y^*(x^*), d)}$. Here, we can let $A= (S \cup I_+^{\epsilon}(x^0)) \setminus J^{+}(x^*,y^*(x^*))$. From \eqref{eq222}, $A \subseteq J^{0}({x^*,y^*(x^{*})})$.
Then, $(S \cup I_+^{\epsilon}(x^0)) \setminus J^{+}(x^*,y^*(x^*)) ={J^{0}_{+}(x^*,y^*(x^*), d)}$.
Then, for any $g^S \in G\left(x^{0},\epsilon\right)$,
there exists $x^{*}\in \mathcal{B}(x^0,\epsilon)$ and $d \in \mathbb{R}^{d_x}$ such that
$$I^{\epsilon}_{+}(x^{0}) \cup S ={J^{0}_{+}(x^{*},y^*(x^{*}), d)} \cup J^{+}(x^{*},y^*(x^{*})).$$
The assumption of (b) is shown.
From (2), if $j \in I_-^{\epsilon}(x^0)$, then for any $x \in \mathcal{B}(x^0,\epsilon)$ and any $d \in \mathbb{R}^{d_x}$, $j \in {J^{0}_{-}(x,y^*(x), d)} \cup J^{N}(x,y^*(x))$. Then, $j \not\in {J^{0}_{+}(x,y^*(x), d)} \cup {J^{+}(x,y^*(x))}$. Then, for any set $C \subseteq I_-^{\epsilon}(x^0)$, the intersection $C \cap ({J^{0}_{+}(x,y^*(x), d)} \cup {J^{+}(x,y^*(x))})$ is empty.
Moreover, from (1), $I_+^{\epsilon}(x^0) \subseteq {J^{0}_{+}(x,y^*(x), d)} \cup {J^{+}(x,y^*(x))}$.
Then,
for any $x \in \mathcal{B}(x^0,\epsilon)$ and any $d \in \mathbb{R}^{d_x}$, there exists
$g^S \in G\left(x^{0},\epsilon\right)$ such that
$$I^{\epsilon}_{+}(x^{0}) \cup S ={J^{0}_{+}(x,y^*(x), d)} \cup J^{+}(x,y^*(x)).$$
The assumption of (a) is shown. Then, the proof of the part (i) of the proposition is done.
(\romannumeral2) From the definition of the Clarke $\epsilon$-subdifferential, it is easy to see that
$\bar{\partial}_{\epsilon} \Phi(x^0)=\operatorname{conv} \{\nabla \Phi(x^{\prime}): x^{\prime} \in \mathcal{B}(x^0,\epsilon), y^* $ is differentiable at $x^{\prime}\}$. Then, for any $g_0 \in \bar{\partial}_{\epsilon} \Phi(x^0)$, there exist $g_1$, $g_2 \in \{\nabla \Phi(x^{\prime}): x^{\prime} \in \mathcal{B}(x^0,\epsilon), y^* $ is differentiable at $x^{\prime}\}$ and $0 \leq \theta \leq 1$ such that $g_0 = \theta g_1 + (1-\theta) g_2$.
From (\romannumeral1), we can find $g_1^{\prime}$, $g_2^{\prime} \in G\left(x^{0},\epsilon\right)$, such that $\| g_1-g_1^{\prime} \|< o(\epsilon)$ and $\| g_2-g_2^{\prime} \|< o(\epsilon)$. Then,
$$
g_0^{\prime} = \theta g_1^{\prime} + (1-\theta) g_2^{\prime} \in \operatorname{conv} G(x^{0},\epsilon),$$ and for any $z \in \mathbb{R}^{d_x}$,
$$
\begin{aligned}
| \|z- g_0\|-\|z- g_0^{\prime}\| | &= | \|z- (\theta g_1 + (1-\theta) g_2) \|-\|z- (\theta g_1^{\prime} + (1-\theta) g_2^{\prime})\| | \\ &\leq | \|\theta (z- g_1) + (1-\theta) (z-g_2) \|-\|(\theta (z- g_1^{\prime})) + (1-\theta) (z-g_2^{\prime})\| | \\
&\leq \|\theta (z- g_1) + (1-\theta) (z-g_2) -(\theta (z- g_1^{\prime})) - (1-\theta) (z-g_2^{\prime})\| \\
&\leq \|\theta (z- g_1) -(\theta (z- g_1^{\prime})) \| +\|(1-\theta) (z-g_2) - (1-\theta) (z-g_2^{\prime})\| \\ &< o(\epsilon).
\end{aligned}
$$
Let $l= d(z, \bar{\partial}_{\epsilon} \Phi(x^0))$. Then, for any $\sigma >0$, there exists $g_0 \in \bar{\partial}_{\epsilon} \Phi(x^0)$ such that
$l \leq \|z-g_0 \| < l+\sigma$. Then, there exists $g_0^{\prime} \in \operatorname{conv} G(x^{0},\epsilon)$ with $| \|z- g_0\|-\|z- g_0^{\prime}\| | < o(\epsilon)$.
Then, $l-o(\epsilon) \leq \|z-g_0^{\prime} \| < l+\sigma+o(\epsilon)$. Since $\|z-g_0^{\prime} \| \geq d(z, \operatorname{conv} G(x^{0},\epsilon))$, we have $d(z, \operatorname{conv} G(x^{0},\epsilon)) = \inf \{\|z-a\| \mid a \in \operatorname{conv} G(x^{0},\epsilon)\}< l+\sigma+o(\epsilon)$.
Then, $d(z, \operatorname{conv} G(x^{0},\epsilon)) < d(z, \bar{\partial}_{\epsilon} \Phi(x^0))+\sigma+o(\epsilon)$ for any $\sigma>0$, i.e., $d(z, \operatorname{conv} G(x^{0},\epsilon)) \leq d(z, \bar{\partial}_{\epsilon} \Phi(x^0))+o(\epsilon)$.
Similar to the proof of $d(z, \operatorname{conv} G(x^{0},\epsilon)) \leq d(z, \bar{\partial}_{\epsilon} \Phi(x^0))+o(\epsilon)$, from (\romannumeral1), we can also get $d(z, \bar{\partial}_{\epsilon} \Phi(x^0)) \leq d(z, \operatorname{conv} G(x^{0},\epsilon)) +o(\epsilon)$. Then, $| d(z, \operatorname{conv} G(x^{0},\epsilon)) - d(z, \bar{\partial}_{\epsilon} \Phi(x^0))|< o(\epsilon)$. Then, the proof of the part (ii) of Proposition \ref{prop4} is done.
\end{proof}
Note that Proposition \ref{prop3} is included in Proposition \ref{prop4}; hence Proposition \ref{prop3} is shown.
\subsection{Proof of Theorem \ref{th_converge}}
We start part (\romannumeral1) of the proof of Theorem \ref{th_converge} from Lemma \ref{lemma3}, and show part (\romannumeral2) of the proof from Lemma \ref{lemma4}.
\begin{lemma}
\label{lemma3}
Let $\emptyset \neq C \subset \mathbb{R}^{n}$ be compact and convex with $0 \notin C$.
If $u, v \in C$ and $\|u\| = d(0, C)$, then $\langle v, u\rangle \geq \|u\|^{2}$.
\end{lemma}
\begin{proof}
Suppose that $\|u\| = d(0, C)$ and $\langle v, u\rangle < \|u\|^{2}$. For $0<\theta<1$, let $w=\theta v + (1-\theta) u$, which lies in $C$ by convexity. Then $\|w\|^{2}=\|u\|^{2}+2 \theta\langle v-u, u\rangle+\theta^{2}\|v-u\|^{2}$, and since $\langle v-u, u\rangle=\langle v, u\rangle-\|u\|^{2}<0$, we have $\|w\|<\|u\|$ for sufficiently small $\theta$. Then $\|u\| \ne d(0, C)$, which is a contradiction.
\end{proof}
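A quick numerical sanity check of Lemma \ref{lemma3} on a hypothetical triangle $C$ (the vertices are an assumption chosen for illustration): the projection of the origin onto $C$ is $u=(1,0)$, and the inequality $\langle v,u\rangle \geq \|u\|^2$ holds for sampled points $v \in C$.

```python
import numpy as np

# Vertices of a compact convex set C (a triangle) with 0 not in C.
V = np.array([[1.0, 0.0], [2.0, 1.0], [2.0, -1.0]])
u = np.array([1.0, 0.0])  # projection of the origin onto C, by inspection

# Sample points of C as convex combinations of the vertices.
rng = np.random.default_rng(0)
points = rng.dirichlet(np.ones(3), size=2000) @ V

# The lemma asserts <v, u> >= ||u||^2 for every v in C.
assert np.all(points @ u >= u @ u - 1e-12)
print("inequality verified on", len(points), "samples")
```

Here $\langle v,u\rangle$ is simply the first coordinate of $v$, which is at least $1 = \|u\|^2$ for every convex combination of the vertices, so the check passes exactly.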
\begin{proof}[Proof of part (\romannumeral1) of Theorem \ref{th_converge}]
Firstly, consider the case where $y^{*}$ is differentiable at $x^k$. We have $g^{k}= \operatorname{argmin} \{ \|g\|: g \in \operatorname{conv} G(x^{k},\epsilon_k) \}$ and $\|g^k\| >0$.
The set $\operatorname{conv} G(x^{k},\epsilon_k)$ is closed, bounded, and convex, hence convex and compact.
Since $\operatorname{conv} G(x^{k},\epsilon_k)$ is closed, $g^k \in \operatorname{conv} G(x^{k},\epsilon_k)$ and $\|g^k \|= d(0, \operatorname{conv} G(x^{k},\epsilon_k))$.
By Lemma \ref{lemma3}, $\langle v, g^k\rangle \geq \|g^k\|^{2}$ for any $v \in \operatorname{conv} G(x^{k},\epsilon_k)$.
Since $y^{*}$ is differentiable at $x^k$, we have $\nabla \Phi(x^k) \in \operatorname{conv} G(x^{k},\epsilon_k)$. Then, $\langle \nabla \Phi(x^k), g^k\rangle \geq \|g^k\|^{2}$. Since $y^{*}$ is differentiable at $x^k$, $\nabla \Phi$ is differentiable at $x^k$, and then $\Phi$ is twice-differentiable at $x^k$.
Then, when $t$ is sufficiently small,
$$
\begin{aligned}
\Phi(x^{k}-t g^{k})=& \Phi(x^{k})-t \langle \nabla \Phi(x^{k}), g^{k} \rangle +O(t^2) \\
\leq&\Phi(x^{k}) - t\|g^{k}\|^{2} +O(t^2) \\
<& \Phi(x^{k}) -\beta t\|g^{k}\|^{2},
\end{aligned}
$$
for any $0<\beta<1$. Then, the line search has a positive solution $t_k$.
Second, suppose $y^{*}$ is not differentiable at $x^k$. From Proposition \ref{prop1}, Proposition \ref{prop3} and the proof of Proposition \ref{prop3} (see also \cite{dempe1998implicit,malanowski1985differentiability}), $\Phi(x)$ is given piecewise by finitely many twice-differentiable functions $\{\Phi_i(x)\}_{1 \leq i \leq m}$, and for each $1 \leq i \leq m$, $\nabla\Phi_i(x^k) \in \operatorname{conv} G(x^{k},\epsilon_k)$.
Then, when $t$ is sufficiently small, for all $1 \leq i \leq m$,
$$
\begin{aligned}
\Phi_i(x^{k}-t g^{k})=& \Phi_i(x^{k})-t \langle \nabla \Phi_i(x^{k}), g^{k} \rangle +O(t^2) \\
\leq &\Phi_i(x^{k}) - t\|g^{k}\|^{2} +O(t^2) \\
<& \Phi_i(x^{k}) -\beta t\|g^{k}\|^{2}\\
=& \Phi(x^{k}) -\beta t\|g^{k}\|^{2},
\end{aligned}
$$
for any $0<\beta<1$.
Then, we have $\Phi(x^{k}-t g^{k})<\Phi(x^{k}) -\beta t\|g^{k}\|^{2}$. Then, the line search has a positive solution $t_k$.
\end{proof}
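The backtracking line search whose solvability is established above can be sketched as follows. This is a minimal illustration under the assumption of a smooth test function $\Phi$, not the paper's full algorithm (which also builds the gradient-sampling set $\operatorname{conv} G(x^{k},\epsilon_k)$):

```python
# A minimal sketch of the backtracking line search used in the proof: find the
# largest t in {gamma, gamma^2, ...} with Phi(x - t*g) < Phi(x) - beta*t*||g||^2.
def backtracking(phi, x, g, beta=0.5, gamma=0.5, max_iter=50):
    norm_g_sq = sum(gi * gi for gi in g)
    t = gamma
    for _ in range(max_iter):
        x_new = [xi - t * gi for xi, gi in zip(x, g)]
        if phi(x_new) < phi(x) - beta * t * norm_g_sq:
            return t          # sufficient-decrease condition met
        t *= gamma            # otherwise shrink the step
    return 0.0                # no admissible step found

# Hypothetical example: Phi(x) = ||x||^2 / 2, whose gradient at x is x itself.
phi = lambda x: 0.5 * sum(xi * xi for xi in x)
x = [2.0, -1.0]
t = backtracking(phi, x, x)
assert t > 0 and phi([xi - t * gi for xi, gi in zip(x, x)]) < phi(x)
```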
\begin{lemma}
\label{lemma4}
If $\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-x\|,\|g^k\|,\epsilon_k\right\}}=0$ with $| \|g^k\|-d(0, \bar{\partial}_{\epsilon_k} \Phi(x^k))|<o(\epsilon_k)$ for sufficiently small $\epsilon_k$, then $0 \in \bar{\partial} \Phi(x)$.
\end{lemma}
\begin{proof}
If $\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-x\|,\|g^k\|,\epsilon_k\right\}}=0$, then the sequence $\{x^{k}\}$ has a subsequence $\{x^{k_n}\}$ such that $x^{k_n} \rightarrow x$, $\|g^{k_n}\| \rightarrow 0$ and $\epsilon_{k_n} \rightarrow 0$. Since $| \|g^{k_n}\|-d(0, \bar{\partial}_{\epsilon_{k_n}} \Phi(x^{k_n}))|<o(\epsilon_{k_n})$ and $\epsilon_{k_n} \rightarrow 0$, we have $d(0, \bar{\partial}_{\epsilon_{k_n}} \Phi(x^{k_n})) \rightarrow 0$, i.e., there exists a sequence $\{h^{n}\}$ with $h^{n} \in \bar{\partial}_{\epsilon_{k_n}} \Phi(x^{k_n})$ and $\|h^{n}\| \rightarrow 0$, which implies $0 \in \bar{\partial} \Phi(x)$.
\end{proof}
\begin{proof}[Proof of parts (\romannumeral2)-(\romannumeral4) of Theorem \ref{th_converge}]
(\romannumeral2.a) From part (\romannumeral1) of Theorem \ref{th_converge}, the line search has a positive solution $t_k$, so
$\Phi(x^{k+1})< \Phi(x^{k}) -\beta t_k\|g^{k}\|^{2}$. Then,
$$\sum_{k=1}^{\infty} \beta t_k\|g^{k}\|^{2} < \sum_{k=1}^{\infty} \Phi(x^{k})-\Phi(x^{k+1}) \leq \Phi(x^{1})-a<\infty,$$
where $a$ is a lower bound of $\Phi(x)$ on $\mathbb{R}^{d_x}$.
Since $x^{k+1}=x^{k}-t_k g^{k}$, we have
\begin{equation}
\label{smallinf}
\sum_{k=1}^{\infty} \|x^{k+1}-x^{k} \| \|g^k \|<\infty.
\end{equation}
(\romannumeral2.b) We now show $\lim_{k\rightarrow\infty} \nu_{k} = 0$ and $\lim_{k\rightarrow\infty} \epsilon_{k} = 0$. Assume, to the contrary, that there exist $k_1$, $\hat{\nu}$ and $\hat{\epsilon}$ such that ${\nu}_k=\hat{\nu}$, ${\epsilon}_k=\hat{\epsilon}$ and $\|g^k\|>\hat{\nu}$ for all $k>k_1$.
Then, by \eqref{smallinf}, we have $\sum_{k=1}^{\infty} \|x^{k+1}-x^{k} \|<\infty$, so $x^k$ converges to a point $\bar{x}$ and $t_k\|g^{k}\| \rightarrow 0$, which means $t_k \rightarrow 0$.
Similar to the proof of part (i) of Theorem \ref{th_converge}, when $t$ is sufficiently small,
\begin{equation}
\label{t_solution}
\begin{aligned}
\Phi(x^{k}-t g^{k})\leq& \Phi(x^{k})-t \langle \nabla \Phi_i(x^{k}), g^{k} \rangle +\alpha t^2 \\
\leq&\Phi(x^{k}) - t\|g^{k}\|^{2} +\alpha t^2 \\
<& \Phi(x^{k}) -\beta t\|g^{k}\|^{2},
\end{aligned}
\end{equation}
where $\Phi(x)$ is given piecewise by finitely many twice-differentiable functions $\{\Phi_i(x)\}_{1 \leq i \leq m}$, and $\alpha$ is an upper bound of $\{\| \nabla^2\Phi_i(x)\| \}_{1 \leq i \leq m}$ on the set $\operatorname{conv} \{x^k\}$.
Since $x^k \rightarrow \bar{x}$, there exists a bounded and closed set $A$ such that $\operatorname{conv} \{x^k\} \subset A$. Since $\nabla^2\Phi_i(x)$ is continuous on the compact set $A$, $\| \nabla^2\Phi_i(x) \|$ is bounded.
Then, the upper bound $\alpha$ exists.
From \eqref{t_solution}, there exists $t_0$ such that every $t$ satisfying $t<t_0$ and
$- t\|g^{k}\|^{2} +\alpha t^2
< -\beta t\|g^{k}\|^{2}$, i.e., $t<\frac{(1-\beta)\|g^{k}\|^{2}}{\alpha}$, also satisfies the inequality $\Phi(x^{k}-t g^{k})< \Phi(x^{k}) -\beta t\|g^{k}\|^{2}$.
Since $\|g^k\|>\hat{\nu}$ for all $k>k_1$, if $t<t_0$ and $t<\frac{(1-\beta)\hat{\nu}^{2}}{\alpha}$, then $t<\frac{(1-\beta)\|g^{k}\|^{2}}{\alpha}$ for all $k>k_1$.
Then, the line search
$t_{k} = \sup \{t \in\{\gamma, \gamma^{2}, \ldots\}: \Phi(x^{k}-t g^{k})< \Phi(x^{k}) -\beta t\|g^{k}\|^{2}\}$ has a solution $t_{k} \geq \gamma^{N}$ for all $k>k_1$, where $N$ satisfies $\gamma^{N} \leq \min\{t_0, \frac{(1-\beta)\hat{\nu}^{2}}{\alpha}\} <\gamma^{N-1}$.
Then, $t_k$ does not converge to $0$, which contradicts $t_k \rightarrow 0$.
Then, $\lim_{k\rightarrow\infty} \nu_{k} = 0$ and $\lim_{k\rightarrow\infty} \epsilon_{k} = 0$.
(\romannumeral3)
Since $\lim_{k\rightarrow\infty} \nu_{k} = 0$, for any arbitrarily small $\nu>0$ and any $k_0$, there exists $k>k_0$ such that $\|g^k\|<\nu$; hence $\liminf\limits_{k \rightarrow \infty}\|g^k\| = 0$.
Since $| \|g^k\|-d(0, \bar{\partial}_{\epsilon_k} \Phi(x^k))|<o(\epsilon_k)$ and $\epsilon_k \rightarrow 0$, we have $\liminf\limits_{k\rightarrow\infty} d(0, \bar{\partial} \Phi(x^k))= 0$.
(\romannumeral4)
We now have $\lim_{k\rightarrow\infty} \nu_{k} = 0$, $\lim_{k\rightarrow\infty} \epsilon_{k} = 0$,
and $\liminf\limits_{k \rightarrow \infty}\|g^k\| = 0$.
Let $\bar{x}$ be a limit point of $\{x^k\}$. If $x^k$ converges to $\bar{x}$, then $\|x^k-\bar{x}\| \rightarrow 0$.
So we have
$\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-\bar{x}\|,\|g^k\|,\epsilon_k\right\}}=0$.
By Proposition \ref{prop3}, if $\Phi$ is not differentiable on the ball $\mathcal{B}(x^k,\epsilon_k)$, then for sufficiently small $\epsilon_k$
we have $| \|g^k\|-d(0, \bar{\partial}_{\epsilon_k} \Phi(x^k))|= | d(0, \operatorname{conv} G(x^{k},\epsilon_k)) -d(0, \bar{\partial}_{\epsilon_k} \Phi(x^k))|<o(\epsilon_k)$.
By Proposition \ref{prop2}, if $\Phi$ is differentiable on the ball $\mathcal{B}(x^k,\epsilon_k)$, then $g^k \in \bar{\partial} \Phi(x^k)$, so $| \|g^k\|-d(0, \bar{\partial}_{\epsilon_k} \Phi(x^k))|<o(\epsilon_k)$.
Then, by Lemma \ref{lemma4}, we have $0 \in \bar{\partial} \Phi(\bar{x})$.
Now consider the case where $x^k$ does not converge to $\bar{x}$.
Note that $\bar{x}$ is still a limit point of $\{x^k\}$.
Since $\lim_{k\rightarrow\infty} \epsilon_{k} = 0$, it suffices to show $\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-\bar{x}\|,\|g^k\|\right\}}=0$; then $0 \in \bar{\partial} \Phi(\bar{x})$ follows from Lemma \ref{lemma4}. Assume that $\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-\bar{x}\|,\|g^k\|\right\}}>0$.
Since $\bar{x}$ is a limit point of $\{x^k\}$, for any $\hat{v}>0$ and $\hat{k}$, the set $K(\hat{k},\hat{v}) \triangleq \{k: k\geq \hat{k}, \|x^k-\bar{x}\|<\hat{v} \}$ is infinite.
Since $\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-\bar{x}\|,\|g^k\|\right\}}>0$,
there exist $\hat{v}>0$ and $\hat{k}$ such that $\|g^k\|>\hat{v}$ for all $k \in K(\hat{k},\hat{v})$. From \eqref{smallinf}, we have $ \sum_{k \in K(\hat{k},\hat{v})} \|x^{k+1}-x^{k} \| \|g^k \|<\infty$, and hence $ \sum_{k \in K(\hat{k},\hat{v})} \|x^{k+1}-x^{k}\|<\infty$. Since $x^k$ does not converge to $\bar{x}$, there exists $\epsilon>0$ such that, for each $k \in K(\hat{k},\hat{v})$ with $\|x^{k}-\bar{x}\|\leq \hat{v}/2$, there exists $k^{\prime} >k$ satisfying $\|x^{k}-x^{k^{\prime}}\|>\epsilon$ and $i \in K(\hat{k},\hat{v})$ for all $k \leq i< k^{\prime}$.
Then, $\epsilon<\|x^{k^{\prime}}-x^{k}\| \leq \sum_{i=k}^{k^{\prime}-1}\|x^{i+1}-x^{i}\|$. Since $ \sum_{i \in K(\hat{k},\hat{v})} \|x^{i+1}-x^{i} \| <\infty$, we can select a sufficiently large $k$ with $ \sum_{i=k}^{\infty} \|x^{i+1}-x^{i} \| <\epsilon$, a contradiction. Thus, $\liminf\limits_{k \rightarrow \infty}{\max \left\{\|x^k-\bar{x}\|,\|g^k\|\right\}}=0$.
Thus, whether $x^k$ converges to the limit point $\bar{x}$ or not, we have $0 \in \bar{\partial} \Phi(\bar{x})$. Part (\romannumeral4) of Theorem \ref{th_converge} is shown.
\end{proof}
\subsection{Proof of Proposition \ref{prop2}}
\begin{proof}[Proof of Proposition \ref{prop2}]
(\romannumeral1) By part (\romannumeral2) of Theorem \ref{th2}, the vector-valued function $z(x) \triangleq [y^{*}(x)^{\top}, \lambda(x)^{\top}, \nu(x)^{\top}]^{\top}$ is locally Lipschitz.
Lemma \ref{lemma1} implies that $z(x)$ is Lipschitz continuous on any compact set $S \subset \mathbb{R}^{d_x}$.
Then, $y^{*}(x)$, $\lambda_j(x)$ and $p_j(x,y^{*}(x))$ for all $j$ are Lipschitz continuous on any compact set.
Since $f$ and $p$ are continuously differentiable, $\Phi(x)=f\left(x, y^{*}(x)\right)$ and $p(x,y^*(x))$ are locally Lipschitz, and hence Lipschitz continuous on any compact set. This proves part (\romannumeral1).
(\romannumeral2) For all $j \in J(x^0,\hat{y})$, either $\lambda_j(x^0) > l_{\lambda_j}(x^0,\epsilon) \epsilon$ or $p_j(x^0,y^*(x^0)) < -l_{p_j}(x^0,\epsilon) \epsilon$. First, if $\lambda_j(x^0) > l_{\lambda_j}(x^0,\epsilon) \epsilon$, then for any $x \in \mathcal{B}(x^0,\epsilon)$, $|\lambda_j(x)-\lambda_j(x^0)| \leq l_{\lambda_j}(x^0,\epsilon) \|x-x^0\| \leq l_{\lambda_j}(x^0,\epsilon) \epsilon$. Hence, for any $x \in \mathcal{B}(x^0,\epsilon)$, $\lambda_j(x) > 0$; moreover, $p_j(x^0,y^*(x^0)) =0$ by the KKT conditions. Then $j \in J^{+}(x,y^*(x))$.
Second, if $p_j(x^0,y^*(x^0)) < -l_{p_j}(x^0,\epsilon) \epsilon$, then for any $x \in \mathcal{B}(x^0,\epsilon)$, $| p_j(x^0,y^*(x^0))- p_j(x,y^*(x))| \leq l_{p_j}(x^0,\epsilon) \|x-x^0 \| \leq l_{p_j}(x^0,\epsilon) \epsilon$. Hence, for any $x \in \mathcal{B}(x^0,\epsilon)$, $p_j(x,y^*(x)) <0$ and $j \not\in J(x,y^*(x))$. Thus, for any $x \in \mathcal{B}(x^0,\epsilon)$, $J^{0}(x,y^*(x))$ is empty, which implies that the SCSC holds at $y^{*}(x)$ w.r.t.\ $\lambda(x)$ for any $x \in \mathcal{B}(x^0,\epsilon)$. By part (\romannumeral3) of Theorem \ref{th2}, $y^{*}(x)$ is differentiable on $\mathcal{B}(x^0,\epsilon)$, and hence so is $\Phi(x)$.
\end{proof}
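The stability argument in part (ii), namely that a Lipschitz multiplier starting strictly above $l\epsilon$ cannot reach zero on the ball $\mathcal{B}(x^0,\epsilon)$, can be illustrated numerically. The multiplier below is hypothetical and chosen only for the check:

```python
# Hedged numeric illustration: if lambda_j is l-Lipschitz and
# lambda_j(x0) > l * eps, then lambda_j stays positive on the ball B(x0, eps).
lam = lambda x: abs(x) + 0.3      # a hypothetical 1-Lipschitz multiplier
l, eps, x0 = 1.0, 0.25, 1.0
assert lam(x0) > l * eps          # the proposition's hypothesis
# Check positivity on a grid covering the ball [x0 - eps, x0 + eps].
for i in range(101):
    x = x0 - eps + 2 * eps * i / 100
    assert lam(x) > 0
print("lambda_j > 0 throughout B(x0, eps)")
```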
\end{document}
\begin{document}
\begin{frontmatter}
\title{A note on hierarchies and bureaucracies}
\author{Shang-Guan H. Wu\corauthref{cor}}
\corauth[cor]{Corresponding author.} \ead{[email protected]}
\address{Wan-Dou-Miao Research Lab, Suite 1002, 790 WuYi Road,\\
Shanghai, 200051, China.}
\begin{abstract}
In this note, we argue that there is a bug in [Tirole, J.,
``Hierarchies and bureaucracies: On the role of collusion in
organizations,'' {\em Journal of Law, Economics and Organization},
vol.2, 181-214, 1986].
\end{abstract}
\begin{keyword}
Collusion; Incentive theory.
\end{keyword}
\end{frontmatter}
In this note, we follow the notation of Ref.~[1]. Vertical
structures are represented by three-layer hierarchies:
principal/supervisor/agent. The \emph{principal}, who is the owner
of the vertical structure or the buyer of the good produced by the
agent, or more generally, the person who is affected by the agent's
activity, lacks either the time or the knowledge required to
supervise the agent. The \emph{supervisor} is a party that exerts no
effort, receives a wage from the principal and collects information
to help the principal control the agent. The \emph{agent} is the
productive unit. The profit $x$ created by the agent's activity
depends on a productivity parameter $\theta$ and on the effort $e>0$
he exerts:
\begin{equation*}
x=\theta+e.
\end{equation*}
The agent's effort $e$ is assumed to be unobservable to the supervisor
and the principal. The agent's disutility of effort is equal, in
monetary terms, to $g(e)$.
For a given $\theta$, the supervisor's signal $s$ can take two
values: $\{\theta,\varnothing\}$, where $\varnothing$ denotes
observation ``nothing''. The report $r$ of supervisor is
\emph{verifiable}, i.e., if $s=\theta$ then $r\in
\{\theta,\varnothing\}$, otherwise $r=\varnothing$. The productivity
$\theta$ can take two values: $\underline{\theta}$ and
$\bar{\theta}$ such that $0<\underline{\theta}<\bar{\theta}$. There
are four states of \emph{nature}, indexed by $i$. State of nature
$i$ has probability $p_{i}$ ($\sum\limits_{i=1}^{4}p_{i}=1$). The
agent always observes $\theta$ before choosing his effort. The
supervisor may or may not observe $\theta$. In the following
description of the four states of nature, $S$ and $A$ stand for
supervisor and agent:\\
\emph{State 1}: $A$ and $S$ observe $\underline{\theta}$.\\
\emph{State 2}: $A$ observes $\underline{\theta}$, $S$ observes
``nothing''.\\
\emph{State 3}: $A$ observes $\bar{\theta}$, $S$ observes
``nothing''.\\
\emph{State 4}: $A$ and $S$ observe $\bar{\theta}$.
\textbf{Claim 1}: The supervisor and the principal cannot discriminate
between \emph{States} 2 and 3.\\
\emph{Proof}: The only difference between \emph{States} 2 and 3 is
that agent $A$ observes different values of the productivity. However,
this parameter is the agent's private information and is not observable to
the supervisor and the principal. In other words, \emph{States} 2 and
3 are indistinguishable to the supervisor and the principal. $\square$
\emph{Timing}.\\
1) The principal offers a contract.\\
2) $A$ learns the productivity $\theta$, $S$ learns the signal $s$.\\
3) $A$ chooses the effort $e$.\\
4) Profit $x=\theta+e$, $S$ reports $r$.\\
5) The principal transfers $S(x,r)$ and $W(x,r)$ to the supervisor
and agent respectively.
\textbf{Claim 2}: There is a bug in the agent's incentive
compatibility constraint (\emph{AIC}): $W_{3}-g(e_{3})\geq
W_{2}-g(e_{2}-\triangle\theta)$ (Page 191, Line 8, [1]).
\emph{Proof}: As specified in the timing, the agent's wage $W(x,r)$
is transferred by the principal. It depends only on the commonly
observable variables $x$ and $r$, not on the agent's private
productivity $\theta$. Since the principal cannot discriminate between
\emph{States} 2 and 3, the terms $W_{2}$ and $W_{3}$ are in fact
meaningless. $\square$
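The point can be made concrete with a toy computation (illustrative numbers only, not taken from [1]): since the wage schedule may only condition on $(x,r)$, and States 2 and 3 can generate identical observables, any contract necessarily pays the same wage in both states.

```python
# Toy check: the wage W may only depend on the observables (x, r).  In states
# 2 and 3 the supervisor reports "nothing", and the agent can always match the
# same profit x, so any contract W(x, r) pays the same wage in both states.
theta_low, theta_high = 1.0, 2.0          # illustrative productivities
delta_theta = theta_high - theta_low

def observables(theta, effort):
    return (theta + effort, "nothing")    # (profit x, report r) in states 2/3

W = lambda x, r: 3.0 * x if r == "nothing" else 2.0 * x   # arbitrary contract

e2 = 1.5                       # effort in state 2
e3 = e2 - delta_theta          # effort replicating the same profit in state 3
assert observables(theta_low, e2) == observables(theta_high, e3)
x, r = observables(theta_low, e2)
assert W(x, r) == W(*observables(theta_high, e3))   # so W_2 = W_3 necessarily
```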
This bug affects not only the condition (\emph{AIC}), but also the
whole analysis of Tirole (1986).
\section*{Acknowledgments}
The author is very grateful to Ms. Fang Chen, Hanyue Wu
(\emph{Apple}), Hanxing Wu (\emph{Lily}) and Hanchen Wu
(\emph{Cindy}) for their great support.
\end{document}
\begin{document}
\title{On compact generation of deformed schemes}
\begin{abstract}
We obtain a theorem which allows one to prove compact generation of derived categories of Grothendieck categories, based upon certain coverings by localizations. This theorem follows from an application of Rouquier's cocovering theorem in the triangulated context, and it implies Neeman's result on compact generation of quasi-compact separated schemes. We prove an application of our theorem to non-commutative deformations of such schemes, based upon a change from Koszul complexes to Chevalley-Eilenberg complexes.
\end{abstract}
\section{Introduction}
Compact generation of triangulated categories was introduced by Neeman in \cite{neeman2}. One of the motivating situations is given by derived categories of ``nice'' schemes (i.e. quasi-compact separated schemes in \cite{neeman2}, later extended to quasi-compact quasi-separated schemes by Bondal and Van den Bergh
in \cite{bondalvandenbergh}). The ideas of the proofs later crystallized in Rouquier's (co)covering theorem \cite{rouquier} which describes a certain covering-by-Bousfield-localizations situation in which compact generation (later extended to $\alpha$-compact generation
by Murfet
in \cite{murfet2}) of a number of ``smaller pieces'' entails compact generation of the whole triangulated category. The notions needed in the (co)covering concept can be interpreted as categorical versions of standard scheme constructions like unions and intersections of open subsets, and in the setup of Grothendieck categories rather than triangulated categories they have been important in non-commutative algebraic geometry (see eg. \cite{vanoystaeyen}, \cite{vanoystaeyenverschoren}, \cite{rosenberg}, \cite{smith}). In this paper we apply Rouquier's theorem in order to obtain a (co)covering theorem for Grothendieck categories based upon these notions, which can be used to prove compact generation of derived categories of Grothendieck categories (see Theorem \ref{cocovergroth} in the paper).
\begin{theorem}\label{cocovergrothintro}
Let $\ensuremath{\mathcal{C}}$ be a Grothendieck category with a compatible covering of affine localizing subcategories $\ensuremath{\mathcal{S}}_i \subseteq \ensuremath{\mathcal{C}}$ for $i \in I = \{1, \dots, n\}$. Suppose:
\begin{enumerate}
\item $D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$ is compactly generated for every $i \in I$.
\item For every $i \in I$ and $\varnothing \neq J \subseteq I \setminus \{i\}$, the essential image $\ensuremath{\mathcal{E}}$ of
$$\cap_{j \in J} \ensuremath{\mathcal{S}}_j \longrightarrow \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i$$
is such that $D_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$ is compactly generated in $D(\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i)$.
\end{enumerate}
Then $D(\ensuremath{\mathcal{C}})$ is compactly generated, and an object in $D(\ensuremath{\mathcal{C}})$ is compact if and only if its image in every $D(\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i)$ for $i \in I$ is compact.
\end{theorem}
When applied to the category of quasi-coherent sheaves over a quasi-compact separated scheme, the theorem implies Neeman's original result.
Our interest in the intermediate Theorem \ref{cocovergrothintro} comes from its applicability to Gro\-then\-dieck categories that originate as ``non-commutative deformations'' of schemes, more precisely abelian deformations of categories of quasi-coherent sheaves in the sense of \cite{lowenvandenbergh1}.
After formulating a general result for deformations (Theorem \ref{thmdefcomp}), based upon lifting compact generators under deformation, we specialize further to the scheme case in Theorem \ref{thmscheme}. We use the description of deformations from \cite{lowen4} using non-commutative twisted presheaf deformations of the structure sheaf on an affine open cover.
When all involved deformed rings are commutative, using liftability of Koszul complexes under deformation, the corresponding twisted deformations are seen to be compactly generated, a fact which also follows from \cite{toen2}.
In our main Theorem \ref{mainintro} (see Theorem \ref{maintheorem} in the paper), we show that actually all non-commutative deformations are compactly generated.
\begin{theorem} \label{mainintro} Let $X$ be a quasi-compact separated
$k$-scheme. Then every flat deformation of the abelian category $\mathbb{Q}ch(X)$ over a finite dimensional commutative local $k$-algebra has a compactly generated derived category.
\end{theorem}
The proof is based upon the following lifting result for Koszul complexes (see Theorem \ref{thmlift} in the paper):
\begin{theorem}\label{thmliftintro}
Let $A$ be a commutative $k$-algebra and $f = (f_1, \dots, f_n)$ a
sequence of elements in $A$. For $d\ge 1$ there exists a perfect
complex $X_d \in D(A)$ generating the same thick subcategory of $D(A)$
as the Koszul complex $K(f)$ and satisfying the following property: let $(R,m)$ be a finite dimensional commutative local $k$-algebra with $m^d = 0$ and $R/m
= k$. Then $X_d$ may be lifted to any $R$-deformation of~$A$.
\end{theorem}
Concretely, let $\mathfrak{n}$ be the
Lie algebra freely generated by $x_1, \dots, x_n$ subject to the relations that all expressions involving $\geq d$ brackets vanish. Sending $x_i$ to~$f_i$
makes~$A$ into a right $\mathfrak{n}$-representation. Then $X_d$ is defined as the
Chevalley-Eilenberg complex $(A\otimes_k \wedge\mathfrak{n},d_{CE})$ of~$A$.
Clearly $X_1=K(f)$.
It appears that in general one should think of $X_d$ as a kind of ``higher Koszul complex''.
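For concreteness, the $d=1$ case, i.e.\ the ordinary Koszul complex $K(f)$, can be sketched computationally. The Python fragment below (a hedged illustration with a hypothetical sequence $f$ over $A=\mathbb{Z}$) builds the Koszul differential on the exterior-algebra basis and checks $d \circ d = 0$:

```python
from itertools import combinations

# Sketch of the Koszul complex K(f) (the d = 1 case X_1 above): basis elements
# of the p-th exterior power are index subsets S, and
# d(e_S) = sum over i in S of (-1)^{pos(i,S)} * f_i * e_{S \ {i}}.
f = [2, 3, 5]          # hypothetical regular sequence in A = Z
n = len(f)

def d(chain):
    # chain: dict mapping frozenset S -> coefficient
    out = {}
    for S, c in chain.items():
        for pos, i in enumerate(sorted(S)):
            T = frozenset(S - {i})
            out[T] = out.get(T, 0) + (-1) ** pos * f[i] * c
    return {T: c for T, c in out.items() if c != 0}

# d composed with d vanishes on every exterior-degree-p basis element.
for p in range(1, n + 1):
    for S in combinations(range(n), p):
        assert d(d({frozenset(S): 1})) == {}
print("d o d = 0 verified for the Koszul complex of", f)
```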
\noindent \emph{Acknowledgement.}
The first author thanks Tobias Dyckerhoff, Dmitry Kaledin, Bernhard Keller, Alexander Kuznetsov, Daniel Murfet, Amnon Neeman, Paul Smith, Greg Stevenson and Bertrand To\"en for interesting discussions on the topic of this paper. She is especially grateful to Bernhard Keller for his valuable comments on a previous version of this paper.
\section{Coverings of Grothendieck categories}
Localization theory of abelian categories and Grothendieck categories goes back to the work of Gabriel \cite{gabriel}, which actually contains some of the important seeds of non-commutative algebraic geometry, like the fact that noetherian (this condition was later eliminated in the work of Rosenberg \cite{rosenberg2}) schemes can be reconstructed from their abelian category of quasi-coherent sheaves. In the general philosophy (due to Artin, Tate, Stafford, Van den Bergh and others) that non-commutative schemes can be represented by Grothendieck categories ``resembling'' quasi-coherent sheaf categories, localizations of such categories have been a key ingredient in the development of the subject by Rosenberg, Smith, Van Oystaeyen, Verschoren, and others (see eg. \cite{rosenberg}, \cite{smith}, \cite{vanoystaeyen}, \cite{vanoystaeyenverschoren}).
In particular, Van Oystaeyen and Verschoren investigated a notion of compatibility between localizations (see \cite{vanoystaeyen}, \cite{verschoren1}).
More recent approaches to non-commutative algebraic geometry (due to Bondal, Kontsevich, To\"en and others) take triangulated categories (and algebraic enhancements like dg or $A_{\infty}$ algebras and categories) as models for non-commutative spaces. The beautiful abelian localization theory was paralleled by an equally beautiful triangulated localization theory, based upon Verdier and Bousfield localization, see e.g. \cite{krause5} and the references therein. By considering appropriate unbounded derived categories, every Grothendieck localization gives rise to a Bousfield localization.
Recently, the notion of properly intersecting Bousfield subcategories was introduced by Rouquier in the context of his cocovering theorem concerning compact generation of certain triangulated categories \cite{rouquier}. The condition bears a striking similarity to the notion of compatibility in the Grothendieck context, which is even reinforced by the characterization proved by Murfet in \cite{murfet2}.
In this section, we introduce all the relevant notions in both contexts, and we observe that in the special situation where the right adjoints of a collection of compatible localizations of Grothendieck categories are exact, they give rise to properly intersecting Bousfield localizations, and Grothendieck coverings give rise to triangulated coverings.
We go on to deduce a covering theorem for Grothendieck categories (Theorem \ref{cocovergroth}) which allows one to prove compact generation of the derived category.
\subsection{Coverings of abelian categories} \label{parcoverab}
We first review the situation for abelian categories.
Let $\ensuremath{\mathcal{C}}$ be an abelian category. A \emph{localization} of $\ensuremath{\mathcal{C}}$ consists of an exact functor
$$a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}'$$
with a fully faithful right adjoint $i: \ensuremath{\mathcal{C}}' \longrightarrow \ensuremath{\mathcal{C}}$.
A subcategory $\ensuremath{\mathcal{S}} \subseteq \ensuremath{\mathcal{C}}$ is called a \emph{Serre subcategory} if it is closed under subquotients and extensions. A Serre subcategory gives rise to an exact \emph{Gabriel quotient}
$$a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}$$
with $\mathrm{Ker}(a) = \ensuremath{\mathcal{S}}$. The Serre subcategory $\ensuremath{\mathcal{S}}$ is called \emph{localizing} if $a$ is the left adjoint in a localization.
Now suppose $\ensuremath{\mathcal{C}}$ is Grothendieck. Then $\ensuremath{\mathcal{S}}$ is localizing precisely when $\ensuremath{\mathcal{S}}$ is moreover closed under coproducts.
Conversely, for every localization $a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}'$, $\ensuremath{\mathcal{S}} = \mathrm{Ker}(a)$ is localizing,
$a$ factors over an equivalence $\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}} \cong \ensuremath{\mathcal{C}}'$, and putting
$$\ensuremath{\mathcal{S}}^{\perp} = \{ C \in \ensuremath{\mathcal{C}} \,\, |\,\, \mathrm{Hom}_{\ensuremath{\mathcal{C}}}(S,C) = 0 = \mathrm{Ext}^1_{\ensuremath{\mathcal{C}}}(S,C)\,\, \forall S \in \ensuremath{\mathcal{S}} \},$$
the right adjoint $i: \ensuremath{\mathcal{C}}' \longrightarrow \ensuremath{\mathcal{C}}$ factors over an equivalence $\ensuremath{\mathcal{C}}' \cong \ensuremath{\mathcal{S}}^{\perp}$.
Let $\ensuremath{\mathcal{C}}$ be an abelian category.
For full subcategories $\ensuremath{\mathcal{S}}_1$, $\ensuremath{\mathcal{S}}_2$ of $\ensuremath{\mathcal{C}}$, the \emph{Gabriel product} is given by
$$\ensuremath{\mathcal{S}}_1 \ast \ensuremath{\mathcal{S}}_2 = \{ C \in \ensuremath{\mathcal{C}} \,\, |\,\, \exists\,\, S_1 \in \ensuremath{\mathcal{S}}_1, \,\, S_2 \in \ensuremath{\mathcal{S}}_2, \,\, 0 \longrightarrow S_1 \longrightarrow C \longrightarrow S_2 \longrightarrow 0\}.$$
Clearly, $\ensuremath{\mathcal{S}}$ is closed under extensions if and only if $\ensuremath{\mathcal{S}} \ast \ensuremath{\mathcal{S}} = \ensuremath{\mathcal{S}}$.
An easy diagram argument reveals that the Gabriel product is associative.
\begin{definition} \cite{vanoystaeyen}, \cite{verschoren1}
Full subcategories $\ensuremath{\mathcal{S}}_1$, $\ensuremath{\mathcal{S}}_2$ of $\ensuremath{\mathcal{C}}$ are called \emph{compatible} if
$$\ensuremath{\mathcal{S}}_1 \ast \ensuremath{\mathcal{S}}_2 = \ensuremath{\mathcal{S}}_2 \ast \ensuremath{\mathcal{S}}_1.$$
\end{definition}
For two compatible Serre subcategories, we have $\ensuremath{\mathcal{S}}_1 \ast \ensuremath{\mathcal{S}}_2 = \langle \ensuremath{\mathcal{S}}_1 \cup \ensuremath{\mathcal{S}}_2\rangle$, the smallest Serre subcategory containing $\ensuremath{\mathcal{S}}_1$ and $\ensuremath{\mathcal{S}}_2$.
Clearly, in the picture of a localization, the data of $\ensuremath{\mathcal{S}}$, $a$ and $i$ determine each other uniquely.
\begin{proposition} \cite{vanoystaeyen} \cite{vanoystaeyenverschoren}\label{propcomp0}
Consider localizations $(\ensuremath{\mathcal{S}}_1, a_1, i_1)$ and $(\ensuremath{\mathcal{S}}_2, a_2, i_2)$ of $\ensuremath{\mathcal{C}}$. Put $q_1 = i_1 a_1$ and $q_2 = i_2 a_2$. The following are equivalent:
\begin{enumerate}
\item $\ensuremath{\mathcal{S}}_1$ and $\ensuremath{\mathcal{S}}_2$ are compatible.
\item $q_1(\ensuremath{\mathcal{S}}_2) \subseteq \ensuremath{\mathcal{S}}_2$ and $q_2(\ensuremath{\mathcal{S}}_1) \subseteq \ensuremath{\mathcal{S}}_1$.
\item $q_1 q_2 = q_2 q_1$.
\end{enumerate}
\end{proposition}
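Condition (3) of the proposition has a simple finite-dimensional analogue (only an analogy, not the categorical statement itself): the compositions $q = i a$ behave like idempotent operators, and compatibility amounts to the commutation $q_1 q_2 = q_2 q_1$, as for commuting coordinate projections:

```python
# Finite-dimensional analogy for compatible localizations: the functors
# q = i o a act like idempotent projections, and condition (3) is q1 q2 = q2 q1.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

q1 = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]   # project onto span(e1, e2)
q2 = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]   # project onto span(e2, e3)

assert matmul(q1, q1) == q1 and matmul(q2, q2) == q2   # idempotent
assert matmul(q1, q2) == matmul(q2, q1)                # "compatible"
```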
In the situation of Proposition \ref{propcomp0}, we speak about \emph{compatible} localizations.
A collection of Serre subcategories (or localizations) is called \emph{compatible} if the corresponding localizations are pairwise compatible.
\begin{definition}
A collection $\Sigma$ of Serre subcategories of $\ensuremath{\mathcal{C}}$ is called a \emph{covering} of $\ensuremath{\mathcal{C}}$ if
$$\bigcap \Sigma = \bigcap_{\ensuremath{\mathcal{S}} \in \Sigma} \ensuremath{\mathcal{S}} = 0.$$
\end{definition}
By this definition, the collection of functors $a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}$ with $\ensuremath{\mathcal{S}} \in \Sigma$ ``generates'' $\ensuremath{\mathcal{C}}$ in the sense that $C \in \ensuremath{\mathcal{C}}$ is non-zero if and only if $a(C)$ is non-zero for some $\ensuremath{\mathcal{S}} \in \Sigma$.
\begin{proposition}
Consider a collection $\Sigma$ of localizing Serre subcategories of $\ensuremath{\mathcal{C}}$. The following are equivalent:
\begin{enumerate}
\item $\Sigma$ is a covering of $\ensuremath{\mathcal{C}}$.
\item The objects $i(D)$ for $D \in \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}$ and $\ensuremath{\mathcal{S}} \in \Sigma$ cogenerate $\ensuremath{\mathcal{C}}$, i.e., a morphism $C' \longrightarrow C$ in $\ensuremath{\mathcal{C}}$ is non-zero if and only if there exists a morphism $C \longrightarrow i(D)$ with $D \in \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}$ for some $\ensuremath{\mathcal{S}} \in \Sigma$ such that $C' \longrightarrow C \longrightarrow i(D)$ is non-zero.
\end{enumerate}
\end{proposition}
\begin{proof}
This easily follows from the adjunction between $a$ and $i$.
\end{proof}
The notion of a compatible covering is inspired by open coverings of schemes. For a covering collection $j_i: U_i \longrightarrow X$ of open subschemes of a scheme $X$ (i.e. $X = \cup_i U_i$), the collection of localizations $j_i^{\ast}: \mathbb{Q}ch(X) \longrightarrow \mathbb{Q}ch(U_i)$ constitutes a compatible covering of $\mathbb{Q}ch(X)$.
\subsection{Descent categories}\label{pardescent}
Consider a compatible collection $\Sigma$ of localizations $\ensuremath{\mathcal{C}}_j$ of $\ensuremath{\mathcal{C}}$ indexed by a finite set $I$. We then obtain commutative (up to natural isomorphism) diagrams of localizations
$$\xymatrix{{\ensuremath{\mathcal{C}}} \ar[r]^{a_k} \ar[d]_{a_j} & {\ensuremath{\mathcal{C}}_k} \ar[d]^{a^k_{kj}} \\ {\ensuremath{\mathcal{C}}_j} \ar[r]_{a^j_{kj}} & {\ensuremath{\mathcal{C}}_{kj}}}$$
with $\ensuremath{\mathcal{C}}_k = \ensuremath{\mathcal{S}}^{\perp}_k$, $\ensuremath{\mathcal{C}}_{kj} = (\ensuremath{\mathcal{S}}_k \ast \ensuremath{\mathcal{S}}_j)^{\perp} = \ensuremath{\mathcal{C}}_j \cap \ensuremath{\mathcal{C}}_k$.
Using associativity of the Gabriel product and compatibility of the localizations, we obtain for each $J = \{ j_1, \dots, j_p\} \subseteq I$ a localizing subcategory
$$\ensuremath{\mathcal{S}}_J = \ensuremath{\mathcal{S}}_{j_1} \ast \dots \ast \ensuremath{\mathcal{S}}_{j_p}$$
of $\ensuremath{\mathcal{C}}$ with corresponding localization
$$\ensuremath{\mathcal{C}}_J = \ensuremath{\mathcal{C}}_{j_1} \cap \dots \cap \ensuremath{\mathcal{C}}_{j_p}$$
of $\ensuremath{\mathcal{C}}$, and all the $\ensuremath{\mathcal{S}}_J$ are compatible. In particular, we obtain for every inclusion $K \subseteq J$ a further localization $a^K_J: \ensuremath{\mathcal{C}}_K \longrightarrow \ensuremath{\mathcal{C}}_J$ left adjoint to the inclusion $i^K_J: \ensuremath{\mathcal{C}}_J \longrightarrow \ensuremath{\mathcal{C}}_K$. It is easily seen that for $K \subseteq J_1$ and $K \subseteq J_2$, the localizations $a^K_{J_1}$ and $a^K_{J_2}$ are compatible. Let $\Delta_{\varnothing}$ be the category of finite subsets of $I$ ordered by inclusions, and let $\Delta$ be the subcategory of non-empty subsets. Putting $\ensuremath{\mathcal{C}}_{\varnothing} = \ensuremath{\mathcal{C}}$, the categories $\ensuremath{\mathcal{C}}_J$ for $J \subseteq I$ can be organized into a pseudofunctor
$$\ensuremath{\mathcal{C}}_{\bullet}: \Delta_{\varnothing} \longrightarrow \mathbb{C}at: J \longmapsto \ensuremath{\mathcal{C}}_J$$
with the $a^K_J$ for $K \subseteq J$ as restriction functors.
Hence, $\ensuremath{\mathcal{C}}_{\bullet}$ is a fibered category of localizations in the sense of \cite[Definition 2.4]{lowen4}.
We define the \emph{descent category} $\mathrm{Des}(\Sigma)$ of $\Sigma$ to be the descent category $\mathrm{Des}(\ensuremath{\mathcal{C}}_{\bullet}|_{\Delta})$, i.e., it is a bi-limit of the restricted pseudofunctor $\ensuremath{\mathcal{C}}_{\bullet}|_{\Delta}$.
In particular, we obtain a natural comparison functor
$$\ensuremath{\mathcal{C}} \longrightarrow \mathrm{Des}(\Sigma).$$
We conclude:
\begin{proposition}\label{propcovdes}
The compatible collection $\Sigma$ of localizations of $\ensuremath{\mathcal{C}}$ is a covering if and only if the comparison functor $\ensuremath{\mathcal{C}} \longrightarrow \mathrm{Des}(\Sigma)$ is faithful.
\end{proposition}
Conversely, descent categories yield a natural way of constructing an abelian category covered by a given collection of abelian categories. More precisely, let $I$ be a finite index set, let $\Delta$ be as above, and let
$$\ensuremath{\mathcal{C}}_{\bullet}: \Delta \longrightarrow \mathbb{C}at: J \longmapsto \ensuremath{\mathcal{C}}_J$$
be a pseudofunctor for which every $a^K_J: \ensuremath{\mathcal{C}}_K \longrightarrow \ensuremath{\mathcal{C}}_J$ for $K \subseteq J$ is a localization with right adjoint $i^K_J$ and $\mathrm{Ker}(a^K_J) = \ensuremath{\mathcal{S}}^K_J$. Suppose moreover that for $K \subseteq J_1$ and $K \subseteq J_2$ the corresponding localizations are compatible and $\ensuremath{\mathcal{S}}^K_{J_1 \cup J_2} = \ensuremath{\mathcal{S}}^K_{J_1} \ast \ensuremath{\mathcal{S}}^K_{J_2}$.
Consider the descent category $\mathrm{D}es(\ensuremath{\mathcal{C}}_{\bullet})$ with canonical functors
$$a_K: \mathrm{D}es(\ensuremath{\mathcal{C}}_{\bullet}) \longrightarrow \ensuremath{\mathcal{C}}_K: (X_J)_J \longmapsto X_K.$$
\begin{proposition}\label{desaffine}
\begin{enumerate}
\item The functor $a_K$ is a localization with fully faithful right adjoint $i_K$ with
$$a_J i_K = i^{J}_{K \cup J} a^K_{K \cup J}: \ensuremath{\mathcal{C}}_K \longrightarrow \ensuremath{\mathcal{C}}_J.$$
\item The localizations $a_K$ are compatible.
\item The localizations $a_K$ constitute a covering of $\mathrm{D}es(\ensuremath{\mathcal{C}}_{\bullet})$.
\item If the functors $i^{J}_{K \cup J}$ are exact, then so are the functors $i_K$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) For $X \in \ensuremath{\mathcal{C}}_K$, using compatibility of the localizations occurring in $\ensuremath{\mathcal{C}}_{\bullet}$, $(i^{J}_{K \cup J} a^K_{K \cup J}(X))_J$ can be made into a descent datum $i_K(X)$, and $i_K$ can be made into a functor right adjoint to $a_K$. Since $a_K i_K = 1_{\ensuremath{\mathcal{C}}_K}$, the functor $i_K$ is fully faithful. (2) This follows by a direct verification from the formula in (1) and the compatibility of the localizations occurring in $\ensuremath{\mathcal{C}}_{\bullet}$.
(3) Immediate from Proposition \ref{propcovdes}. (4) Immediate from the formula in (1).
\end{proof}
\subsection{Coverings of triangulated categories}\label{parcovertria}
Next we review the situation for triangulated categories. For an excellent introduction to the localization theory of triangulated categories, we refer the reader to \cite{krause5}.
Let $\ensuremath{\mathcal{T}}$ be a triangulated category. A \emph{(Bousfield) localization} of $\ensuremath{\mathcal{T}}$ consists of an exact functor
$$a: \ensuremath{\mathcal{T}} \longrightarrow \ensuremath{\mathcal{T}}'$$
with a fully faithful (automatically exact) right adjoint $i: \ensuremath{\mathcal{T}}' \longrightarrow \ensuremath{\mathcal{T}}$. A subcategory $\ensuremath{\mathcal{I}} \subseteq \ensuremath{\mathcal{T}}$ is called \emph{triangulated} if it is closed under cones and shifts and \emph{thick} if it is moreover closed under direct summands. A thick subcategory gives rise to an exact \emph{Verdier quotient}
$$a: \ensuremath{\mathcal{T}} \longrightarrow \ensuremath{\mathcal{T}}/\ensuremath{\mathcal{I}}$$
with $\mathrm{Ker}(a) = \ensuremath{\mathcal{I}}$. The thick subcategory $\ensuremath{\mathcal{I}}$ is called a \emph{Bousfield subcategory} if $a$ is the left adjoint in a localization.
For every localization $a: \ensuremath{\mathcal{T}} \longrightarrow \ensuremath{\mathcal{T}}'$, $\ensuremath{\mathcal{I}} = \mathrm{Ker}(a)$ is Bousfield,
$a$ factors over an equivalence $\ensuremath{\mathcal{T}}/ \ensuremath{\mathcal{I}} \cong \ensuremath{\mathcal{T}}'$, and putting
$$\ensuremath{\mathcal{I}}^{\perp} = \{ T \in \ensuremath{\mathcal{T}} \,\, |\,\, \mathrm{Hom}_{\ensuremath{\mathcal{T}}}(I,T) = 0 \,\, \forall I \in \ensuremath{\mathcal{I}} \},$$
the right adjoint $i: \ensuremath{\mathcal{T}}' \longrightarrow \ensuremath{\mathcal{T}}$ factors over an equivalence $\ensuremath{\mathcal{T}}' \cong \ensuremath{\mathcal{I}}^{\perp}$.
For full subcategories $\ensuremath{\mathcal{I}}_1$, $\ensuremath{\mathcal{I}}_2$ of $\ensuremath{\mathcal{T}}$, the \emph{Verdier product} is given by
$$\ensuremath{\mathcal{I}}_1 \ast \ensuremath{\mathcal{I}}_2 = \{ T \in \ensuremath{\mathcal{T}} \,\, |\,\, \exists\,\, I_1 \in \ensuremath{\mathcal{I}}_1, \,\, I_2 \in \ensuremath{\mathcal{I}}_2, \,\, I_1 \longrightarrow T \longrightarrow I_2 \longrightarrow \}.$$
Clearly, $\ensuremath{\mathcal{I}}$ is triangulated if and only if $\ensuremath{\mathcal{I}} = \ensuremath{\mathcal{I}} \ast \ensuremath{\mathcal{I}}$.
\begin{definition} \cite{rouquier} \cite{murfet2}
Full subcategories $\ensuremath{\mathcal{I}}_1$, $\ensuremath{\mathcal{I}}_2$ of $\ensuremath{\mathcal{T}}$ are said to \emph{intersect properly} if
$$\ensuremath{\mathcal{I}}_1 \ast \ensuremath{\mathcal{I}}_2 = \ensuremath{\mathcal{I}}_2 \ast \ensuremath{\mathcal{I}}_1.$$
\end{definition}
For two properly intersecting thick subcategories, $\ensuremath{\mathcal{I}}_1 \ast \ensuremath{\mathcal{I}}_2 = \langle \ensuremath{\mathcal{I}}_1 \cup \ensuremath{\mathcal{I}}_2\rangle$, the smallest thick subcategory containing $\ensuremath{\mathcal{I}}_1$ and $\ensuremath{\mathcal{I}}_2$.
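The following elementary observation, included here for illustration only, follows directly from the definitions above.
\begin{example}
If $\ensuremath{\mathcal{I}}_1 \subseteq \ensuremath{\mathcal{I}}_2$ are thick subcategories of $\ensuremath{\mathcal{T}}$, then $\ensuremath{\mathcal{I}}_1$ and $\ensuremath{\mathcal{I}}_2$ intersect properly. Indeed, since $0 \in \ensuremath{\mathcal{I}}_1$, every $T \in \ensuremath{\mathcal{I}}_2$ fits in a triangle $0 \longrightarrow T \longrightarrow T \longrightarrow$, whence $\ensuremath{\mathcal{I}}_2 \subseteq \ensuremath{\mathcal{I}}_1 \ast \ensuremath{\mathcal{I}}_2$; conversely, $\ensuremath{\mathcal{I}}_1 \ast \ensuremath{\mathcal{I}}_2 \subseteq \ensuremath{\mathcal{I}}_2 \ast \ensuremath{\mathcal{I}}_2 = \ensuremath{\mathcal{I}}_2$ since $\ensuremath{\mathcal{I}}_2$ is triangulated. The same argument yields $\ensuremath{\mathcal{I}}_2 \ast \ensuremath{\mathcal{I}}_1 = \ensuremath{\mathcal{I}}_2$, so $\ensuremath{\mathcal{I}}_1 \ast \ensuremath{\mathcal{I}}_2 = \ensuremath{\mathcal{I}}_2 = \ensuremath{\mathcal{I}}_2 \ast \ensuremath{\mathcal{I}}_1$.
\end{example}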
Clearly, in the picture of a localization, the data of $\ensuremath{\mathcal{I}}$, $a$ and $i$ determine each other uniquely.
\begin{proposition} \cite{rouquier} \cite{murfet2} \label{propcomp}
Consider localizations $(\ensuremath{\mathcal{I}}_1, a_1, i_1)$ and $(\ensuremath{\mathcal{I}}_2, a_2, i_2)$ of $\ensuremath{\mathcal{T}}$. Put $q_1 = i_1 a_1$ and $q_2 = i_2 a_2$. The following are equivalent:
\begin{enumerate}
\item $\ensuremath{\mathcal{I}}_1$ and $\ensuremath{\mathcal{I}}_2$ are compatible.
\item $q_1(\ensuremath{\mathcal{I}}_2) \subseteq \ensuremath{\mathcal{I}}_2$ and $q_2(\ensuremath{\mathcal{I}}_1) \subseteq \ensuremath{\mathcal{I}}_1$.
\item $q_1 q_2 = q_2 q_1$.
\end{enumerate}
\end{proposition}
In the situation of Proposition \ref{propcomp}, we speak about \emph{properly intersecting} localizations.
A collection of thick subcategories (or localizations) is called \emph{properly intersecting} if the localizations are pairwise properly intersecting.
\begin{definition}\label{defcovertria}
A collection $\Theta$ of full subcategories of $\ensuremath{\mathcal{T}}$ is called a \emph{covering} of $\ensuremath{\mathcal{T}}$ if
$$\bigcap \Theta = \bigcap_{\ensuremath{\mathcal{I}} \in \Theta} \ensuremath{\mathcal{I}} = 0.$$
\end{definition}
\begin{remark}
In \cite{rouquier}, the term cocovering is reserved for a collection of Bousfield subcategories which is a covering in the sense of Definition \ref{defcovertria} and properly intersecting.
\end{remark}
By Definition \ref{defcovertria}, for a covering collection $\Theta$ of thick subcategories, the collection of quotient functors $a: \ensuremath{\mathcal{T}} \longrightarrow \ensuremath{\mathcal{T}}/ \ensuremath{\mathcal{I}}$ with $\ensuremath{\mathcal{I}} \in \Theta$ ``generates'' $\ensuremath{\mathcal{T}}$ in the sense that $T \in \ensuremath{\mathcal{T}}$ is non-zero if and only if $a(T)$ is non-zero for some $\ensuremath{\mathcal{I}} \in \Theta$.
\begin{proposition}
Consider a collection $\Theta$ of Bousfield subcategories of $\ensuremath{\mathcal{T}}$. The following are equivalent:
\begin{enumerate}
\item $\Theta$ is a covering of $\ensuremath{\mathcal{T}}$.
\item The objects $i(D)$ for $D \in \ensuremath{\mathcal{T}}/\ensuremath{\mathcal{I}}$ and $\ensuremath{\mathcal{I}} \in \Theta$ cogenerate $\ensuremath{\mathcal{T}}$, i.e.\ an object $T$ in $\ensuremath{\mathcal{T}}$ is non-zero if and only if there exists a non-zero morphism $T \longrightarrow i(D)$ with $D \in \ensuremath{\mathcal{T}}/\ensuremath{\mathcal{I}}$ for some $\ensuremath{\mathcal{I}} \in \Theta$.
\end{enumerate}
\end{proposition}
\begin{proof}
This easily follows from the adjunction between $a$ and $i$: for $\ensuremath{\mathcal{I}} \in \Theta$, an object $T \in \ensuremath{\mathcal{T}}$ satisfies $a(T) \neq 0$ if and only if there is a non-zero morphism $a(T) \longrightarrow D$ for some $D \in \ensuremath{\mathcal{T}}/\ensuremath{\mathcal{I}}$, which by adjunction corresponds to a non-zero morphism $T \longrightarrow i(D)$.
\end{proof}
\subsection{Induced coverings}
Given the formal parallelism between Sections \ref{parcoverab} and \ref{parcovertria}, and the fact that a localization
$a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$
with right adjoint
$i: \ensuremath{\mathcal{D}} \longrightarrow \ensuremath{\mathcal{C}}$
of Grothendieck categories
gives rise to an induced
Bousfield localization
$$La = a: D(\ensuremath{\mathcal{C}}) \longrightarrow D(\ensuremath{\mathcal{D}})$$
with fully faithful right adjoint $$Ri: D(\ensuremath{\mathcal{D}}) \longrightarrow D(\ensuremath{\mathcal{C}})$$
of the corresponding derived categories, it is natural to ask what happens to the notions of compatibility and coverings under this operation of taking derived categories.
For coverings, the situation is very simple.
For a full subcategory $\ensuremath{\mathcal{S}}$ of $\ensuremath{\mathcal{C}}$, let $D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}})$ denote the full subcategory of $D(\ensuremath{\mathcal{C}})$ consisting of complexes whose cohomology lies in $\ensuremath{\mathcal{S}}$.
\begin{lemma}
Let $a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$ be an exact functor between Grothendieck categories with $\mathrm{Ker}(a) = \ensuremath{\mathcal{S}}$, and consider $La = a: D(\ensuremath{\mathcal{C}}) \longrightarrow D(\ensuremath{\mathcal{D}})$. We have $\mathrm{Ker}(La) = D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}})$.
\end{lemma}
\begin{proof}
For a complex $X \in D(\ensuremath{\mathcal{C}})$, we have $H^n(a(X)) = a(H^n(X))$.
\end{proof}
\begin{lemma}
For a collection $\Sigma$ of full subcategories of a Grothendieck category $\ensuremath{\mathcal{C}}$, we have
$$D_{\bigcap_{\ensuremath{\mathcal{S}} \in \Sigma}\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}}) = \bigcap_{\ensuremath{\mathcal{S}} \in \Sigma} D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}}).$$
\end{lemma}
\begin{proposition}\label{propindcover}
Let $\Sigma$ be a collection of full subcategories of a Grothendieck category $\ensuremath{\mathcal{C}}$. Then $\Sigma$ is a covering of $\ensuremath{\mathcal{C}}$ if and only if the collection $\{ D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}}) \,\, |\,\, \ensuremath{\mathcal{S}} \in \Sigma\}$ is a covering of $D(\ensuremath{\mathcal{C}})$.
\end{proposition}
Now consider a Grothendieck category $\ensuremath{\mathcal{C}}$ and localizations
$a_k: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}_k$
with right adjoints $i_k$, $q_k = i_k a_k$, and $\ensuremath{\mathcal{S}}_k = \mathrm{Ker}(a_k)$ for $k \in \{ 1, 2\}$.
Taking derived functors yields Bousfield localizations $La_k = a_k: D(\ensuremath{\mathcal{C}}) \longrightarrow D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_k)$
with right adjoints $Ri_k: D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_k) \longrightarrow D(\ensuremath{\mathcal{C}})$ and $\mathrm{Ker}(La_k) = D_{\ensuremath{\mathcal{S}}_k}(\ensuremath{\mathcal{C}})$.
We have the following inclusion between thick subcategories:
\begin{lemma}
We have $D_{\ensuremath{\mathcal{S}}_1}(\ensuremath{\mathcal{C}}) \ast D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}}) \subseteq D_{\ensuremath{\mathcal{S}}_1 \ast \ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$.
\end{lemma}
\begin{proof}
A triangle $X_1 \longrightarrow X \longrightarrow X_2 \longrightarrow$ with $X_k \in D_{\ensuremath{\mathcal{S}}_k}(\ensuremath{\mathcal{C}})$ gives rise to a long exact sequence $\dots \longrightarrow H^nX_1 \longrightarrow H^nX \longrightarrow H^nX_2 \longrightarrow \dots$. Since $\ensuremath{\mathcal{S}}_k$ is closed under subquotients, we obtain an exact sequence $0 \longrightarrow S_1 \longrightarrow H^nX \longrightarrow S_2 \longrightarrow 0$ with $S_k \in \ensuremath{\mathcal{S}}_k$.
\end{proof}
In general, we have:
\begin{proposition}
If $D_{\ensuremath{\mathcal{S}}_1}(\ensuremath{\mathcal{C}})$ and $D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$ are compatible in $D(\ensuremath{\mathcal{C}})$, then $\ensuremath{\mathcal{S}}_1$ and $\ensuremath{\mathcal{S}}_2$ are compatible in $\ensuremath{\mathcal{C}}$.
\end{proposition}
\begin{proof}
Immediate from Lemma \ref{lemkey0} and the characterizations (2) in Propositions \ref{propcomp0} and \ref{propcomp}. \end{proof}
\begin{lemma} \label{lemkey0}
If $Ri_1 a_1(D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})) \subseteq D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$, then $i_1 a_1(\ensuremath{\mathcal{S}}_2) \subseteq \ensuremath{\mathcal{S}}_2$.
\end{lemma}
\begin{proof}
Let $S_2 \in \ensuremath{\mathcal{S}}_2$. We have $i_1 a_1 (S_2) = R^0i_1 a_1 (S_2) = H^0 Ri_1 a_1 (S_2)$ and since $Ri_1 a_1 (S_2) \in D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$, it follows that $i_1 a_1 (S_2) \in \ensuremath{\mathcal{S}}_2$ as desired.
\end{proof}
Unfortunately, the converse implication does not hold in general. However, in the special situation where $i_1$ and $i_2$ are exact, the converse does hold and is equally straightforward.
\begin{definition}
A localization $a: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$ is called \emph{affine} if the right adjoint $i: \ensuremath{\mathcal{D}} \longrightarrow \ensuremath{\mathcal{C}}$ is exact. A localizing subcategory $\ensuremath{\mathcal{S}} \subseteq \ensuremath{\mathcal{C}}$ is called \emph{affine} if the corresponding localization $\ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}$ is affine.
\end{definition}
\begin{example}
If $X$ is a quasi-compact scheme and $U \subset X$ an affine open subscheme, then $\mathbb{Q}ch(X) \longrightarrow \mathbb{Q}ch(U)$ is an affine localization.
\end{example}
\begin{remark}
Other variants of ``affineness'' have been used in the non-commutative algebraic geometry literature. For instance, Paul Smith calls an inclusion functor $i: \ensuremath{\mathcal{D}} \longrightarrow \ensuremath{\mathcal{C}}$ affine if it has both adjoints. In particular, if $i: \ensuremath{\mathcal{D}} \longrightarrow \ensuremath{\mathcal{C}}$ is the right adjoint in a localization, it becomes a fortiori exact. In this paper, we have chosen the most convenient notion of ``affineness'' for our purposes.
\end{remark}
\begin{proposition}\label{compint}
If $\ensuremath{\mathcal{S}}_1$ and $\ensuremath{\mathcal{S}}_2$ are compatible and affine, then $D_{\ensuremath{\mathcal{S}}_1}(\ensuremath{\mathcal{C}})$ and $D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$ are compatible.
\end{proposition}
\begin{proof}
Immediate from Lemma \ref{lemkey} and the characterizations (2) in Propositions \ref{propcomp0} and \ref{propcomp}.
\end{proof}
\begin{lemma}\label{lemkey}
If $i_1a_1(\ensuremath{\mathcal{S}}_2) \subseteq \ensuremath{\mathcal{S}}_2$ and $i_1$ is exact, then $Ri_1 a_1(D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})) \subseteq D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$.
\end{lemma}
\begin{proof}
For a complex $X \in D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$, we have $Ri_1 a_1(X) = i_1 a_1 (X)$ and since $i_1 a_1$ is exact, $H^n(i_1 a_1 (X)) = i_1 a_1 (H^n(X)) \in \ensuremath{\mathcal{S}}_2$.
\end{proof}
The affineness condition in Proposition \ref{compint} does not describe the only situation where compatible abelian localizations give rise to compatible Bousfield localizations, but it is the only situation we will need in this paper. To end this section, we will describe another situation, inspired by the behaviour of large categories of sheaves of modules.
Recall that the functor $i : \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}} \longrightarrow \ensuremath{\mathcal{C}}$ has finite cohomological dimension if there exists an $N \in \mathbb{Z}$ such that
if $X \in D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}})$ has $H^n(X) = 0$ for $n > 0$, then $R^n i(X) = H^n(Ri (X)) = 0$ for $n \geq N$.
\begin{proposition}\label{propfin}
Let $\ensuremath{\mathcal{S}}_1$ and $\ensuremath{\mathcal{S}}_2$ be compatible and suppose the following conditions hold:
\begin{enumerate}
\item There exist a class of objects $\ensuremath{\mathcal{A}} \subseteq \ensuremath{\mathcal{C}}$ and classes $\ensuremath{\mathcal{A}}_k \subseteq \ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_k$ consisting of $i_k$-acyclic objects such that $a_k(\ensuremath{\mathcal{A}}) \subseteq \ensuremath{\mathcal{A}}_k$ and $i_k(\ensuremath{\mathcal{A}}_k) \subseteq \ensuremath{\mathcal{A}}$.
\item The functors $i_k$ have finite cohomological dimension.
\end{enumerate}
Then $D_{\ensuremath{\mathcal{S}}_1}(\ensuremath{\mathcal{C}})$ and $D_{\ensuremath{\mathcal{S}}_2}(\ensuremath{\mathcal{C}})$ are compatible.
\end{proposition}
\begin{example}
Let $X$ be a quasi-compact scheme with quasi-compact open subschemes $j_1: U_1 \longrightarrow X$ and $j_2: U_2 \longrightarrow X$. We have restriction functors $j_k^{\ast}: \mathbb{M}od(X) \longrightarrow \mathbb{M}od(U_k)$ between the categories of all sheaves of modules, with right adjoints $j_{k, \ast}: \mathbb{M}od(U_k) \longrightarrow \mathbb{M}od(X)$ of finite cohomological dimension. In Proposition \ref{propfin}, we can take for $\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathcal{A}}_k$ the classes of flabby sheaves. Hence the localizations $j^{\ast}_k: D(\mathbb{M}od(X)) \longrightarrow D(\mathbb{M}od(U_k))$ are compatible.
\end{example}
\subsection{Rouquier's Theorem}
Compactly generated triangulated categories were invented by Neeman \cite{neeman2} with the compact generation of derived categories of ``nice'' schemes as one of the principal motivations.
As proved in \cite{neeman2}, for a scheme with a collection of ample invertible sheaves, these sheaves constitute a collection of compact generators of the derived category. But also in \cite{neeman2}, a totally different proof of compact generation is given for arbitrary quasi-compact separated schemes. The result was further improved by Bondal and Van den Bergh in \cite{bondalvandenbergh}, where a single compact generator is constructed for quasi-compact semi-separated schemes. These proofs are by induction on the opens in a finite affine cover, and the ingredients eventually crystallized in Rouquier's theorem \cite{rouquier}, which is entirely expressed in terms of a cover of a triangulated category. Finally, in \cite{murfet2}, Murfet obtained a version of the theorem with compactness replaced by $\alpha$-compactness. We start by recalling the theorem.
\begin{theorem}\cite{murfet2}\label{cocovertria}
Let $\ensuremath{\mathcal{T}}$ be a triangulated category with a compatible covering of Bousfield subcategories $\ensuremath{\mathcal{I}}_i \subseteq \ensuremath{\mathcal{T}}$ for $i \in I = \{1, \dots, n\}$. Let $\alpha$ be a regular cardinal. Suppose:
\begin{enumerate}
\item $\ensuremath{\mathcal{T}}/ \ensuremath{\mathcal{I}}_i$ is $\alpha$-compactly generated for every $i \in I$.
\item For every $i \in I$ and $\varnothing \neq J \subseteq I \setminus \{i\}$, the essential image of
$$\cap_{j \in J} \ensuremath{\mathcal{I}}_j \longrightarrow \ensuremath{\mathcal{T}} \longrightarrow \ensuremath{\mathcal{T}}/ \ensuremath{\mathcal{I}}_i$$
is $\alpha$-compactly generated in $\ensuremath{\mathcal{T}}/ \ensuremath{\mathcal{I}}_i$.
\end{enumerate}
Then $\ensuremath{\mathcal{T}}$ is $\alpha$-compactly generated, and an object in $\ensuremath{\mathcal{T}}$ is $\alpha$-compact if and only if its image in every $\ensuremath{\mathcal{T}}/ \ensuremath{\mathcal{I}}_i$ for $i \in I$ is $\alpha$-compact.
\end{theorem}
\begin{remark}
The $\alpha = \aleph_0$-case of the theorem is Rouquier's cocovering theorem \cite{rouquier}.
\end{remark}
We now obtain the following application to Grothendieck categories:
\begin{theorem}\label{cocovergroth}
Let $\ensuremath{\mathcal{C}}$ be a Grothendieck category with a compatible covering of affine localizing subcategories $\ensuremath{\mathcal{S}}_i \subseteq \ensuremath{\mathcal{C}}$ for $i \in I = \{1, \dots, n\}$. Suppose:
\begin{enumerate}
\item $D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$ is $\alpha$-compactly generated for every $i \in I$.
\item For every $i \in I$ and $\varnothing \neq J \subseteq I \setminus \{i\}$, the essential image $\ensuremath{\mathcal{E}}$ of
$$\cap_{j \in J} \ensuremath{\mathcal{S}}_j \longrightarrow \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i$$
is such that $D_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$ is $\alpha$-compactly generated in $D(\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i)$.
\end{enumerate}
Then $D(\ensuremath{\mathcal{C}})$ is $\alpha$-compactly generated, and an object in $D(\ensuremath{\mathcal{C}})$ is $\alpha$-compact if and only if its image in every $D(\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i)$ for $i \in I$ is $\alpha$-compact.
\end{theorem}
\begin{proof}
This is an application of Theorem \ref{cocovertria} by invoking Propositions \ref{propindcover} and \ref{compint} and Lemma \ref{lemess}.
\end{proof}
\begin{lemma}\label{lemess}
With the notations of Theorem \ref{cocovergroth}, $\cap_{j \in J} \ensuremath{\mathcal{S}}_j$ is a localizing Serre subcategory which is compatible with $\ensuremath{\mathcal{S}}_i$, and the essential image $\ensuremath{\mathcal{E}}$ of
$$\cap_{j \in J} \ensuremath{\mathcal{S}}_j \longrightarrow \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}_i$$
is a localizing Serre subcategory given by the kernel of
$$\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i \longrightarrow \ensuremath{\mathcal{C}}/(\ensuremath{\mathcal{S}}_i \ast \cap_{j\in J} \ensuremath{\mathcal{S}}_j).$$
The essential image of
$$\cap_{j \in J}D_{\ensuremath{\mathcal{S}}_j}(\ensuremath{\mathcal{C}}) \longrightarrow D(\ensuremath{\mathcal{C}}) \longrightarrow D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$$
is given by $D_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$.
\end{lemma}
\begin{remark}
By the Gabriel--Popescu theorem, all Grothendieck categories are localizations of module categories, and thus their derived categories are well-generated \cite{neeman} \cite{krause6} (and thus $\alpha$-compactly generated for some $\alpha$) as localizations of compactly generated derived categories of rings. However, they are not necessarily compactly generated, as was shown in \cite{neeman3}.
\end{remark}
\begin{remark}
Compatibility between localizations can be considered a commutative phenomenon (after all, it expresses that two localization functors commute). The non-commutative topology developed by Van Oystaeyen \cite{vanoystaeyen} encompasses notions of coverings (and in fact, non-commutative Grothendieck topologies) which apply to the situation of non-commuting localizations. An investigation of whether this approach can be extended to the triangulated setup, and whether it is possible to obtain results on compact generation extending Theorems \ref{cocovertria} and \ref{cocovergroth}, is work in progress.
\end{remark}
\section{Deformations}
In this section we obtain an application of Theorem \ref{cocovergroth} to deformations of Grothendieck categories, based upon its application to the undeformed categories (Theorem \ref{thmdefcomp}). For simplicity, we focus on compact generation ($\alpha = \aleph_0$).
By the work of Keller \cite{keller1}, compact generation of the derived category $D(\ensuremath{\mathcal{C}})$ of a Grothendieck category leads to the existence of a dg algebra $A$, the derived endomorphism algebra of a generator, representing the category in the sense that $D(\ensuremath{\mathcal{C}}) \cong D(A)$. At this point, most of non-commutative derived algebraic geometry has been developed with dg algebras (or $A_{\infty}$-algebras) as models, although a definitive theory should also include more general algebraic enhancements on the level of the entire categories. For the topic of deformations, a satisfactory treatment on the level of dg algebras certainly does not exist in complete generality \cite{kellerlowen}, due to obstructions which also play an important role in the present paper.
A deformation theory for triangulated categories on the level of enhancements of the entire categories is still under construction \cite{dedekenlowen2}, and is also subject to obstructions. Thus, Grothendieck enhancements are the only ones for which a satisfactory intrinsic deformation theory exists for the moment, and for this reason our intermediate Theorem \ref{cocovergroth} is crucial.
\subsection{Deformation and localization}
Infinitesimal deformations of abelian categories were introduced in \cite{lowenvandenbergh1}.
We deform along a surjective ring map $R \longrightarrow k$ between coherent commutative rings, with a nilpotent kernel and such that $k$ is finitely presented over $R$. This includes the classical infinitesimal deformation setup in the direction of Artin local $k$-algebras.
Deformations are required to be flat in an appropriate sense, which was introduced in \cite{lowenvandenbergh1}. It was shown in the same paper that deformations of Grothendieck categories remain Grothendieck. The interaction between deformation and localization was treated in \cite[\S 7]{lowenvandenbergh1}.
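As a guiding example, we sketch how this plays out for module categories; see \cite{lowenvandenbergh1} for the precise statements.
\begin{example}
Let $A$ be a $k$-algebra and let $B$ be an $R$-algebra which is flat over $R$ and satisfies $k \otimes_R B \cong A$. Then restriction of scalars along $B \longrightarrow A$ yields a flat abelian deformation $\mathbb{M}od(A) \longrightarrow \mathbb{M}od(B)$, and by \cite{lowenvandenbergh1} every flat abelian deformation of $\mathbb{M}od(A)$ is, up to equivalence, of this form.
\end{example}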
Let $\iota: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$ be a deformation of Grothendieck categories. There are inverse bijections between the Serre subcategories of $\ensuremath{\mathcal{C}}$ and the Serre subcategories of $\ensuremath{\mathcal{D}}$ described by the maps
$$\ensuremath{\mathcal{S}} \longmapsto \bar{\ensuremath{\mathcal{S}}} = \langle \ensuremath{\mathcal{S}} \rangle_{\ensuremath{\mathcal{D}}} = \{ D \in \ensuremath{\mathcal{D}} \,\, |\,\, k \otimes_R D \in \ensuremath{\mathcal{S}}\}$$
and
$$\ensuremath{\mathcal{S}} \longmapsto \ensuremath{\mathcal{S}} \cap \ensuremath{\mathcal{C}}.$$
These restrict to bijections between localizing subcategories, and for corresponding localizing subcategories $\ensuremath{\mathcal{S}}$ of $\ensuremath{\mathcal{C}}$ and $\bar{\ensuremath{\mathcal{S}}}$ of $\ensuremath{\mathcal{D}}$, there is an induced deformation
$\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}} \longrightarrow \ensuremath{\mathcal{D}}/ \bar{\ensuremath{\mathcal{S}}}$ and there are commutative diagrams
$$\xymatrix{ {\ensuremath{\mathcal{D}}} \ar[r]^{\bar{a}} & {\ensuremath{\mathcal{D}}/\bar{\ensuremath{\mathcal{S}}}} \\ {\ensuremath{\mathcal{C}}} \ar[u] \ar[r]_a & {\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}} \ar[u]} \hspace{2cm}
\xymatrix{ {\ensuremath{\mathcal{D}}} & {\ensuremath{\mathcal{D}}/\bar{\ensuremath{\mathcal{S}}}} \ar[l]_{\bar{i}} \\ {\ensuremath{\mathcal{C}}} \ar[u] & {\ensuremath{\mathcal{C}}/ \ensuremath{\mathcal{S}}.} \ar[u] \ar[l]^i}$$
\begin{proposition}\label{liftcover}
Let $\Sigma$ be a collection of Serre subcategories of $\ensuremath{\mathcal{C}}$ and consider the corresponding collection $\bar{\Sigma} = \{ \bar{\ensuremath{\mathcal{S}}} \,\, |\,\, \ensuremath{\mathcal{S}} \in \Sigma\}$ of Serre subcategories of $\ensuremath{\mathcal{D}}$.
Then $\Sigma$ is a covering of $\ensuremath{\mathcal{C}}$ if and only if $\bar{\Sigma}$ is a covering of $\ensuremath{\mathcal{D}}$.
\end{proposition}
\begin{proof}
Immediate from Lemma \ref{lemcovdef}.
\end{proof}
\begin{lemma}\label{lemcovdef}
Let $\Sigma$ be a collection of Serre subcategories of $\ensuremath{\mathcal{C}}$. We have
$$\overline{ \cap_{\ensuremath{\mathcal{S}} \in \Sigma} \ensuremath{\mathcal{S}}} = \cap_{\ensuremath{\mathcal{S}} \in \Sigma} \overline{\ensuremath{\mathcal{S}}}.$$
\end{lemma}
\begin{proof}
Immediate from the description of the bijections between Serre subcategories of $\ensuremath{\mathcal{C}}$ and $\ensuremath{\mathcal{D}}$.
\end{proof}
\begin{proposition}\cite[Proposition 3.8]{lowen4}\label{liftcomp}
Let $\ensuremath{\mathcal{S}}_k \subseteq \ensuremath{\mathcal{C}}$ be localizing subcategories for $k \in \{ 1,2\}$. If $\ensuremath{\mathcal{S}}_1$ and $\ensuremath{\mathcal{S}}_2$ are compatible in $\ensuremath{\mathcal{C}}$, then $\overline{\ensuremath{\mathcal{S}}_1}$ and $\overline{\ensuremath{\mathcal{S}}_2}$ are compatible in $\ensuremath{\mathcal{D}}$. In this case, we have $\overline{\ensuremath{\mathcal{S}}_1} \ast \overline{\ensuremath{\mathcal{S}}_2} = \overline{\ensuremath{\mathcal{S}}_1 \ast \ensuremath{\mathcal{S}}_2}$.
\end{proposition}
We will need one more lifting result.
\begin{lemma}\label{lemee}
Let $\ensuremath{\mathcal{S}}_k \subseteq \ensuremath{\mathcal{C}}$ be compatible localizing subcategories for $k \in \{ 1,2\}$. The essential image $\ensuremath{\mathcal{E}}$ of $$\ensuremath{\mathcal{S}}_2 \longrightarrow \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_1$$ is the kernel of
$$a^1_2: \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_1 \longrightarrow \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_1 \ast \ensuremath{\mathcal{S}}_2.$$
The lift $\overline{\ensuremath{\mathcal{E}}}$ of $\ensuremath{\mathcal{E}}$ to $\ensuremath{\mathcal{D}}/\overline{\ensuremath{\mathcal{S}}_1}$ is the essential image of
$$\overline{\ensuremath{\mathcal{S}}_2} \longrightarrow \ensuremath{\mathcal{D}} \longrightarrow \ensuremath{\mathcal{D}}/\overline{\ensuremath{\mathcal{S}}_1}.$$
\end{lemma}
\subsection{Lifts of compact generators}
Let $\iota: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$ be a deformation of Grothen\-dieck categories, let $\ensuremath{\mathcal{S}}$ be a localizing Serre subcategory of $\ensuremath{\mathcal{C}}$ and let $\overline{\ensuremath{\mathcal{S}}}$ be the corresponding localizing subcategory of $\ensuremath{\mathcal{D}}$.
For an abelian category $\ensuremath{\mathcal{A}}$, let $\ensuremath{\mathsf{Ind}}(\ensuremath{\mathcal{A}})$ be the ind-completion of $\ensuremath{\mathcal{A}}$, i.e.\ the closure of $\ensuremath{\mathcal{A}}$ inside $\mathbb{M}od(\ensuremath{\mathcal{A}})$ under filtered colimits, and let $\ensuremath{\mathsf{Pro}}(\ensuremath{\mathcal{A}}) = \ensuremath{\mathsf{Ind}}(\ensuremath{\mathcal{A}}^{\mathrm{op}})^{\mathrm{op}}$ be the pro-completion of $\ensuremath{\mathcal{A}}$.
Consider the commutative diagram
$$\xymatrix{ {D_{\bar{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})} \ar[r] & {D(\ensuremath{\mathcal{D}})} \\ {D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}})} \ar[u] \ar[r] & {D(\ensuremath{\mathcal{C}})} \ar[u]_{R\iota} }$$
and the derived functor
$$k \otimes^L_R -: D(\mathsf{Pro}(\ensuremath{\mathcal{D}})) \longrightarrow D(\mathsf{Pro}(\ensuremath{\mathcal{C}})).$$
For a collection $\ensuremath{\mathcal{A}}$ of objects in a triangulated category $\ensuremath{\mathcal{T}}$, we denote by $\overline{\langle \ensuremath{\mathcal{A}} \rangle}_{\ensuremath{\mathcal{T}}}$ the smallest localizing (i.e.\ triangulated and closed under direct sums) subcategory of $\ensuremath{\mathcal{T}}$ containing $\ensuremath{\mathcal{A}}$.
Recall from \cite{dedekenlowen} that we have a balanced action
$$- \otimes^L_R - : D^{-}(\ensuremath{\mathsf{mod}} (k)) \otimes D^{-}(\ensuremath{\mathcal{D}}) \longrightarrow D^{-}(\ensuremath{\mathcal{D}}).$$
The following is a refinement of \cite[Proposition 5.9]{dedekenlowen}.
\begin{proposition}\label{compgenlift}
Consider a collection $\mathfrak{g}$ of objects of $D^{-}(\ensuremath{\mathcal{D}})$ such that the collection $k\otimes^L_R \mathfrak{g} = \{ k \otimes^L_R G \,\, |\,\, G \in \mathfrak{g}\}$ compactly generates $D_{{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{C}})$ inside $D(\ensuremath{\mathcal{C}})$. Then $\mathfrak{g}$ compactly generates $D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$ inside $D(\ensuremath{\mathcal{D}})$.
\end{proposition}
\begin{proof}
The objects of $\mathfrak{g}$ are compact by \cite[Proposition 5.8]{dedekenlowen}.
Consider $\overline{ \langle \mathfrak{g} \rangle}_{D(\ensuremath{\mathcal{D}})}$, i.e.\ the closure of $\mathfrak{g}$ in $D(\ensuremath{\mathcal{D}})$ under cones, shifts and direct sums.
We are to show that $\overline{ \langle \mathfrak{g} \rangle}_{D(\ensuremath{\mathcal{D}})} = D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$.
First we have to make sure that every $G \in \mathfrak{g}$ is contained in $D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$. Writing $I$ as a homotopy colimit of cones of finite free $k$-modules, we obtain that both $k \otimes^L_R G$ and $I \otimes^L_R G \cong I \otimes^L_k (k \otimes^L_R G)$ belong to $D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}})$ and hence to $D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$. From the triangle
$$I \otimes^L_R G \longrightarrow G \longrightarrow k \otimes^L_R G \longrightarrow$$
we deduce that $G$ also belongs to $D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$. Consequently $\overline{ \langle \mathfrak{g} \rightarrowngle}_{D(\ensuremath{\mathcal{D}})} \subseteq D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$.
Next we look at the other inclusion $D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}}) \subseteq \overline{ \langle \mathfrak{g} \rightarrowngle}_{D(\ensuremath{\mathcal{D}})}$.
For an arbitrary complex $D \in D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$, we can write $D = \mathrm{hocolim}_{n = 0}^{\infty} \tau^{\leq n}D$ with $\tau^{\leq n}D \in D^{-}_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$. Consequently, it suffices to show that $D^{-}_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}}) \subseteq \overline{ \langle \mathfrak{g} \rightarrowngle}_{D(\ensuremath{\mathcal{D}})}$. For $D \in D^-_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$, consider the triangle
$$I \otimes^L_R D \longrightarrow D \longrightarrow k \otimes^L_R D \longrightarrow.$$
First note that writing $k$ as a homotopy colimit of cones of finite free $R$-modules, we see that
\begin{equation}\label{hocok}
k \otimes^L_R D \in \overline{ \langle D \rightarrowngle}_{D(\ensuremath{\mathcal{D}})}.
\end{equation}
Using $I \otimes^L_R D \cong I \otimes^L_k (k \otimes^L_R D)$, we deduce from balancedness of the derived tensor product that $I \otimes_R^L D$ and $k \otimes_R^L D$ belong to both $D(\ensuremath{\mathcal{C}})$ and $D_{\overline{\ensuremath{\mathcal{S}}}}(\ensuremath{\mathcal{D}})$, whence to $D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}})$. Consequently, it suffices to show that $\overline{ \langle k \otimes^L_R\mathfrak{g} \rightarrowngle}_{D(\ensuremath{\mathcal{C}})} = D_{\ensuremath{\mathcal{S}}}(\ensuremath{\mathcal{C}}) \subseteq \overline{ \langle \mathfrak{g} \rightarrowngle}_{D(\ensuremath{\mathcal{D}})}$. To see this, it suffices to consider \eqref{hocok} for all $D \in \mathfrak{g}$.
\end{proof}
\subsection{Compact generation of deformations}
Putting together all our results so far, we now describe a situation in which one obtains compact generation of the derived category $D(\ensuremath{\mathcal{D}})$ of a deformation $\ensuremath{\mathcal{D}}$.
Let $\ensuremath{\mathcal{C}}$ be a Grothendieck abelian category with a deformation $\iota: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$. Let $\ensuremath{\mathcal{S}}_i \subseteq \ensuremath{\mathcal{C}}$ for $i \in I = \{1, \dots, n\}$ be a covering collection of compatible localizing subcategories of $\ensuremath{\mathcal{C}}$ and let $\overline{\ensuremath{\mathcal{S}}_i} \subseteq \ensuremath{\mathcal{D}}$ be the corresponding covering collection of compatible localizing subcategories of $\ensuremath{\mathcal{D}}$.
\begin{theorem}\label{thmdefcomp}
Suppose:
\begin{enumerate}
\item For every $i \in I$, $\overline{\ensuremath{\mathcal{S}}_i} \subseteq \ensuremath{\mathcal{D}}$ is affine.
\item For every $i \in I$, there is a collection $\mathfrak{g}_i$ of objects of $D^-(\ensuremath{\mathcal{D}}/\overline{\ensuremath{\mathcal{S}}_i})$ such that the collection $k \otimes_R^L \mathfrak{g}_i$ compactly generates $D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$.
\item For every $i \in I$ and $J \subseteq I \setminus \{i\}$, the essential image $\ensuremath{\mathcal{E}}$ of
$$\cap_{j \in J}\ensuremath{\mathcal{S}}_j \longrightarrow \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i$$
is such that there is a collection $\mathfrak{g}$ of objects of $D^-(\ensuremath{\mathcal{D}}/\overline{\ensuremath{\mathcal{S}}_i})$ for which the collection $k \otimes_R^L \mathfrak{g}$ compactly generates $D_{{\ensuremath{\mathcal{E}}}}(\ensuremath{\mathcal{C}}/{\ensuremath{\mathcal{S}}_i})$ inside $D(\ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i)$.
\end{enumerate}
Then $D(\ensuremath{\mathcal{D}})$ is compactly generated and an object in $D(\ensuremath{\mathcal{D}})$ is compact if and only if its image in each $D(\ensuremath{\mathcal{D}}/ \overline{\ensuremath{\mathcal{S}}_i})$ is compact.
\end{theorem}
\begin{proof}
By Propositions \ref{liftcover} and \ref{liftcomp} and assumption (1), we are in the basic setup of Theorem \ref{cocovergroth}. By Proposition \ref{compgenlift}, the collections $\mathfrak{g}_i$ and $\mathfrak{g}$ in assumptions (2) and (3) consitute collections of compact generators of $D(\ensuremath{\mathcal{D}}/\ensuremath{\mathcal{S}}_i)$ and of $D_{\overline{\ensuremath{\mathcal{E}}}}(\ensuremath{\mathcal{D}}/ \overline{\ensuremath{\mathcal{S}}}_i)$ inside $D(\ensuremath{\mathcal{D}}/ \overline{\ensuremath{\mathcal{S}}}_i)$ respectively. Finally, using Lemma \ref{lemee}, assumptions (1) and (2) in Theorem \ref{cocovergroth} are fulfilled and the theorem applies to the deformed situation.
\end{proof}
\section{Lifting Koszul complexes}
For an object $M$ in $D(A)$ for a $k$-algebra $A$ , we denote by
$\langle M \rightarrowngle_A$ the smallest thick subcategory of $D(A)$
containing $M$ and by $\overline{\langle M \rightarrowngle}_A$ the smallest
localizing subcategory of $D(A)$ containing $M$.
For a $k$-algebra map $A\longrightarrow B$ we have a restriction functor
\[
{}_A(-):D(B)\longrightarrow D(A)
\]
with left adjoint given by the derived tensor product
\[
B\otimes^L_A-:D(A)\longrightarrow D(B)
\]
(the actual map $A\longrightarrow B$ will never be in doubt).
\subsection{An auxiliary result}\label{paraux}
Let $k$ be a field and let $\mathbb{N}NN$ be the Lie algebra freely generated
by $x_1, \dots, x_n$ subject to the relations that all expressions
involving $\geq d$ brackets vanish. Since there are only a finite
number of expressions in $(x_i)_i$ involving $< d$ brackets, $\mathbb{N}NN$ is
finite dimensional over $k$. Let $U$ be the universal enveloping
algebra of $\mathbb{N}NN$. Let $I \subseteq U$ be the twosided ideal generated
by $([x_i, x_j])_{ij}$. Then $U/I = k[x_1, \dots, x_n]$. Our arguments
below will be mostly based on the $k$-algebra maps
\[
\xymatrix{
U\ar[r] & U/I \ar[r]^-{x_i\mapsto 0}& k
}
\]
The left $k$-module $k$ gives rise to a left $U$-module ${_U}k = U/(x_1, \dots, x_n)$ and a left
$U/I$-module ${_{U/I}}k = (U/I)/(x_1, \dots, x_n)$. Since
$U$ (as well as $U/I$) is noetherian of finite global dimension,
${_U}k$ is a perfect left $U$-module and ${_{U/I}}k$ is a perfect
$U/I$-module. By tensoring we obtain
another perfect $U/I$-module: $U/I \otimes^L_U {_U}k$.
\begin{proposition}\label{propaux}
We have the following equalities: $\langle {_{U/I}}k \rightarrowngle_{U/I} = \langle U/I \otimes^L_U {_U}k \rightarrowngle_{U/I}$ and $\overline{\langle {_{U/I}}k \rightarrowngle}_{U/I} = \overline{\langle U/I \otimes^L_U {_U}k \rightarrowngle}_{U/I}$.
\end{proposition}
\begin{proof}
We start by proving $ U/I \otimes^L_U {_U}k
\in \langle {_{U/I}}k \rightarrowngle_{U/I}$. Note that
$\langle {_{U/I}}k \rightarrowngle_{U/I}$ consists of all $U/I$-modules with
finite dimensional total cohomology. Thus, it suffices to look at
the cohomology of $U/I \otimes^L_U {_U}k$. Since we are only
interested in the underlying $k$-module, it suffices to compute ${_k}(U/I
\otimes^L_U {_U}k)$ (restriction for $k\hookrightarrow U/I$)
which we can do using a finite free resolution of $(U/I)_U$ as
a right $U$-module. Things then reduce to the trivial fact ${}_k(U\otimes^L_U {}_Uk)
\cong k$.
Thus, we have proven $U/I \otimes^L_U {_U}k \in
\langle {_{U/I}}k \rightarrowngle_{U/I} \subset \overline{\langle {_{U/I}}k \rightarrowngle}_{U/I}$.
Now $U/I \otimes^L_U {_U}k$ is a perfect left $U/I$-module and
hence it is compact in $D(U/I)$. From this it follows immediately that
it is also a compact object in $\overline{\langle {}_{U/I}k\rightarrowngle}_{U/I}$.
To prove the claims of the proposition it is sufficient to prove that
$U/I \otimes^L_U {_U}k$
is a compact generator of $\overline{\langle {}_{U/I}k\rightarrowngle}_{U/I}$. In other
words we have to prove that its right orthogonal is zero:
\[
\bigl( U/I
\otimes^L_U {_U}k\bigr)^\mathsf{per}p\cap \overline{\langle {_{U/I}}k \rightarrowngle}_{U/I} = 0.
\]
Now suppose we have $X \in
\overline{\langle {_{U/I}}k \rightarrowngle}_{U/I}$ with $\mathrm{RHom}_{U/I}(U/I
\otimes^L_U {_U}k, X) = 0$. Then we have ${}_UX \in \overline{\langle
{_U}k \rightarrowngle}_U$ and also by adjunction $\mathrm{RHom}_{U}({_U}k, {}_UX) =
0$. Since the perfect complex ${}_Uk$ is a compact generator of $\overline{\langle
{_U}k \rightarrowngle}_U$ we obtain ${}_UX=0$ which implies $X=0$.
\end{proof}
\subsection{Koszul precomplexes}\label{koszul}
Let $A$ be a possibly non-commutative $k$-algebra and consider a finite sequence of elements $x = (x_1, \dots, x_n)$ in $A$.
We will work in the category $\mathbb{M}od(A)$ of left $A$-modules.
We define a precomplex $K(x)$ of $A$-modules with
$K(x)_p = A\otimes_k \Lambda^p k^n$ the free $A$-module of rank $\begin{pmatrix} n \\ p \end{pmatrix}$ with basis $e_{i_1} \wedge \dots \wedge e_{i_p}$ with $i_1 < \dots < i_p$.
We define the $A$-linear morphism $$d_p: K(x)_p \longrightarrow K(x)_{p-1}$$
by $$d_p(e_{i_1} \wedge \dots \wedge e_{i_p}) = \sum_{k = 1}^p (-1)^{k+1} x_{i_k} e_{i_1} \wedge \dots \wedge \hat{e}_{i_k} \wedge \dots \wedge e_{i_p}.$$
The differential may be compactly written as $d=\sum_iR_{x_i}\partial/\partial e_i$
where we consider the $e_i$ as odd and $R_{x_i}(a)=ax_i$ which yields:
\[
d^2=\sum_{1 \leq i < j \leq p} R_{[x_j,x_i]}\,\partial^2/\partial e_i\partial e_j
\]
Thus $K(x)$ is a complex if and only if the $(x_{i})_i$ commute.
\subsection{Lifting Koszul complexes}\label{parliftkoszul}
Let $(R, m)$ be a finite dimensional $k$-algebra with $m^d = 0$ and
$R/m = k$ and let $A'$ be an $R$-algebra with $A'/mA' = A$. Consider a
sequence $f = (f_1, \dots, f_n)$ of element in $A$ and a sequence $f'
= (f'_1, \dots, f'_n)$ of elements in $A'$ such that the reduction of
$f'_i$ to $A$ equals $f_i$. Let $K(f)$ be the Koszul complex
associated to $f$. As soon as some of the $f'_i$ do not commute, the
Koszul precomplex $K(f')$ fails to be a complex according to \S
\ref{koszul}. For this reason, we will now use the result of \S
\ref{paraux} to lift a perfect complex generating the same localizing
subcategory as $K(f)$. In fact, this ``liftable complex'' happens to
be independent of $A'$ or $R$! Its size depends however in a major way on
$d$.
\begin{theorem}\label{thmlift}
Let $(R,m)$ be a finite dimensional algebra with $m^d = 0$ and $R/m = k$, and let $A$ be a commutative $k$-algebra and $f = (f_1, \dots, f_n)$ a sequence of elements in $A$.
There exists a perfect complex $X \in D(A)$ with $\langle K(f) \rightarrowngle_A = \langle X \rightarrowngle_A$ and
$$\overline{\langle K(f) \rightarrowngle}_A = \overline{\langle X \rightarrowngle}_A$$
which is such that for every $R$-algebra $A'$ with $A'/m = A$ there exists a perfect complex $X' \in D(A')$ with $A \otimes^L_{A'} X' = X$.
We can take $X = A \otimes^L_U {_U}k$.
\end{theorem}
\begin{proof}
Let $f' = (f'_1, \dots, f'_n)$ be an arbitrary sequence of elements in $A'$ such that the reduction of $f'_i$ to $A$ equals $f_i$.
From the definition of the algebra $U$ in \S \ref{paraux}, we obtain a commutative diagram
$$\xymatrix{{U} \ar[r] \ar[d] & {A'} \ar[d] \\ {U/I} \ar[r] & A}$$
with the horizontal maps determined by $x_i \longmapsto f'_i$ and $x_i \longmapsto f_i$ respectively.
We thus have
\begin{equation}\label{eqdiag}
A \otimes^L_{A'} (A' \otimes^L_U {{_U} k}) = A \otimes^L_{U/I} (U/I \otimes^L_U {_U}k).
\end{equation}
By Proposition \ref{propaux} we have
$\langle {_{U/I}}k \rightarrowngle_{U/I} = \langle U/I \otimes^L_U {_U}k \rightarrowngle_{U/I}$
and hence
\begin{equation}\label{eqsubcats}
\langle A \otimes^L_{U/I} {_{U/I}} k \rightarrowngle_{A} = \langle A \otimes^L_{U/I} (U/I \otimes^L_U {_U}k)\rightarrowngle_A.
\end{equation}
Over $U/I = k[x_1, \dots, x_n]$, the Koszul complex $K(x_1, \dots, x_n)$ constitutes a projective resolution of ${_{U/I}}k$. Hence, on the left hand side of \eqref{eqsubcats} we have $A \otimes^L_{U/I} {_{U/I}} k = A \otimes_{U/I} K(x_1, \dots, x_n) = K(f)$. Hence, by \eqref{eqdiag} it suffices to take $X = A \otimes^L_U {_U}k$ and $X' = A' \otimes^L_U {_U}k$.
\end{proof}
Since over $U$, the Chevalley-Eilenberg complex $V(\mathbb{N}NN)$ of $\mathbb{N}NN$ constitutes a projective resolution of ${_U}k$, in Theorem \ref{thmlift} we concretely obtain $X = A \otimes_U V(\mathbb{N}NN) = A \otimes_k \Lambda^{\ast}\mathbb{N}NN$ and $X' = A' \otimes_U V(\mathbb{N}NN) = A' \otimes_k \Lambda^{\ast}\mathbb{N}NN$, both equipped with the Chevalley-Eilenberg differential
$$\begin{aligned}
d(a \otimes y_1 \wedge \dots \wedge y_p) & = \sum_{i = 1}^p (-1)^{i +1} ay_i \otimes y_1 \wedge \dots \wedge \hat{y_i} \wedge \dots \wedge y_p \\
& + \sum_{i < j} (-1)^{i + j} a \otimes [y_i, y_j] \wedge \dots \wedge \hat{y_i} \dots \wedge \hat{y_j} \dots \wedge y_p
\end{aligned}$$
for a basis $(y_i)_i$ for $\frak{n}$.
Let us look at some examples.
If $d = 1$ or $n = 1$, we have $\mathbb{N}NN = kx_1 ^{\mathrm{op}}lus \dots ^{\mathrm{op}}lus kx_n$, $U = k[x_1, \dots, x_n]$, $V(\mathbb{N}NN) = K(x_1, \dots, x_n)$ and $X = K(f_1, \dots, f_n)$. For $n = 1$ we have $X' = K(f'_1)$.
Thus, the first non-trivial case to consider is $d = 2$ and $n = 2$. We have $\mathbb{N}NN = kx_1 ^{\mathrm{op}}lus kx_2 ^{\mathrm{op}}lus k[x_1, x_2]$ and consequently $X$ is given by the complex
$$\xymatrix{0 \ar[r] & A \ar[r]_{d_3} & {A^3} \ar[r]_{d_2} & {A^3} \ar[r]_{d_1} & A \ar[r] & 0}$$
with basis elements over $A$ given by $x_1 \wedge x_2 \wedge [x_1, x_2]$ in degree 3, $x_1\wedge x_2$, $x_2 \wedge [x_1, x_2]$, $[x_1, x_2] \wedge x_1$ in degree 2, $x_1$, $x_2$, $[x_1, x_2]$ in degree 1 and $1$ in degree 0 and differentials given by
$$d_3 = \begin{pmatrix} [f_1, f_2] \\ f_1 \\ f_2 \end{pmatrix}, \hspace{0,5cm} d_2 = \begin{pmatrix} -f_2 & 0 & [f_1, f_2] \\
f_1 & -[f_1, f_2] & 0 \\ -1 & f_2 & - f_1 \end{pmatrix}, \hspace{0,5cm} d_1 = \begin{pmatrix} f_1 & f_2 & [f_1, f_2] \end{pmatrix}.$$
Similarly $X'$ is given by the same expressions with $A$ replaced by $A'$ and $f_i$ replaced by the chosen lift $f'_i$. Note that $[f_1, f_2] = 0$ but we possibly have $[f'_1, f'_2] \neq 0$.
\section{Deformations of schemes} \label{pardefschemes}
In this section we specialize Theorem \ref{thmdefcomp} to the scheme case. In Theorem \ref{thmscheme}, we give a general formulation in the the setup of a Grothendieck deformation of the category $\mathbb{Q}ch(X)$ over a quasi-compact, separated scheme. After discussing some special cases in which direct lifting of Koszul complexes already leads to compact generation of the deformed category (like the case in which all deformed rings on an affine cover are commutative), in \S \ref{parmaintheorem} we prove our main Theorem \ref{maintheorem} which states that all non-commutative deformations are in fact compactly generated. The proof is based upon the change from Koszul complexes to liftable generators from Theorem \ref{thmlift}.
\subsection{Deformed schemes using ample line bundles}
Let $X$ be a quasi-compact separated scheme over a field $k$. If we want to investigate compact generation of $D(\ensuremath{\mathcal{D}})$ for an abelian deformation $\ensuremath{\mathcal{D}}$ of $\ensuremath{\mathcal{C}} = \mathbb{Q}ch(X)$, by Proposition \ref{compgenlift} (and in fact, its special case \cite[Proposition 5.9]{dedekenlowen}) a global approach is to look for compact generators of $D(\ensuremath{\mathcal{C}})$ that lift to $D(\ensuremath{\mathcal{D}})$ under $k \otimes^L_R -$. We easily obtain the following result:
\begin{proposition}\label{propamp}
Suppose $X$ has an ample line bundle $\ensuremath{\mathcal{L}}$. If $H^2(X, \ensuremath{\mathcal{O}}_X) = 0$, all infinitesimal deformations of $\mathbb{Q}ch(X)$ have compactly generated derived categories.
\end{proposition}
\begin{proof}
According to \cite{neeman}, $D(\mathbb{Q}ch(X))$ is compactly generated by the tensor powers $\ensuremath{\mathcal{L}}^n$ for $n \in \mathbb{Z}$. By \cite{lowen2}, the obstructions to lifting $\ensuremath{\mathcal{L}}^n$ along an infinitesimal deformation lie in $\mathrm{Ext}^2_{X}(\ensuremath{\mathcal{L}}^n, I \otimes_k \ensuremath{\mathcal{L}}^n)$ for $I \cong k^m$ for some $m$. But we have
$$\mathrm{Ext}^2_{X}(\ensuremath{\mathcal{L}}, I \otimes_k \ensuremath{\mathcal{L}}) \cong [\mathrm{Ext}^2(\ensuremath{\mathcal{L}}, \ensuremath{\mathcal{L}})]^m = [\mathrm{Ext}^2_X(\ensuremath{\mathcal{O}}_X, \ensuremath{\mathcal{O}}_X)]^m = [H^2(X, \ensuremath{\mathcal{O}}_X)]^m = 0$$
as desired.
\end{proof}
\subsection{Deformed schemes using coverings}\label{pardefschcover}
Let $(X, \ensuremath{\mathcal{O}})$ be a quasi-compact, separated scheme and put $\ensuremath{\mathcal{C}} = \mathbb{Q}ch(X)$. Since the homological condition in Proposition \ref{propamp} excludes interesting schemes, we now investigate a different approach based upon affine covers.
Let $U_i$ for $i \in I = \{1, \dots, n\}$ be an affine cover of $X$, with $U_i \cong \mathrm{Spec}(\ensuremath{\mathcal{O}}(U_i))$. Put $Z_i = X \setminus U_i$. With $\ensuremath{\mathcal{C}}_i = \mathbb{Q}ch(U_i)$ and $\ensuremath{\mathcal{S}}_i = \mathbb{Q}ch_{Z_i}(X)$, the category of quasi-coherent sheaves on $X$ supported on $Z_i$, we are in the situation of a covering collection of compatible localizations of $\ensuremath{\mathcal{C}}$.
For $J \subseteq I$, put $U_J = \cap_{j \in J}U_j$ and $\ensuremath{\mathcal{C}}_J = \mathbb{Q}ch(U_J)$. For $i \in I$ and $J \subseteq I \setminus \{i\}$, put $Z^i_J = U_i \setminus \cup_{j \in J} U_j = U_i \cap \cap_{j \in J} Z_j$. The essential image $\ensuremath{\mathcal{E}}$ of $\cap_{j \in J}\ensuremath{\mathcal{S}}_j \longrightarrow \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{C}}/\ensuremath{\mathcal{S}}_i$ is given by $\ensuremath{\mathcal{E}} = \mathbb{Q}ch_{Z^i_J}(U_i)$.
Let $\mathrm{D}elta$ and $\mathrm{D}elta_{\varnothing}$ be as in \S \ref{pardescent}. For $K \subseteq J$, we have $U_J \subseteq U_K$ and the corresponding localization is given by restriction of sheaves $a^K_J: \mathbb{Q}ch(U_K) \longrightarrow \mathbb{Q}ch(U_J)$ with right adjoint direct image functor $i^K_J$. Moreover, the localization can be entirely described in terms of module categories. If $\ensuremath{\mathcal{O}}(U_K) \longrightarrow \ensuremath{\mathcal{O}}(U_J)$ is the canonical restriction, then we have
$$a^K_J \cong \ensuremath{\mathcal{O}}(U_J) \otimes_{\ensuremath{\mathcal{O}}(U_K)} -: \mathbb{M}od(\ensuremath{\mathcal{O}}(U_K)) \longrightarrow \mathbb{M}od(\ensuremath{\mathcal{O}}(U_J))$$
and the right adjoint $i^K_J$ is simply the restriction of scalars functor, which is obviously exact.
For the resulting pseudofunctor
$$\mathbb{M}od(\ensuremath{\mathcal{O}}(U_{\bullet})): \mathrm{D}elta \longrightarrow \mathbb{C}at: J \longrightarrow \mathbb{M}od(\ensuremath{\mathcal{O}}(U_{J})),$$
we have $\mathbb{Q}ch(X) \cong \mathrm{D}es(\mathbb{M}od(\ensuremath{\mathcal{O}}(U_{\bullet})))$.
According to \cite{lowen4}, this situation is preserved under deformation. More precisely, up to equivalence an arbitrary abelian deformation $\iota: \ensuremath{\mathcal{C}} \longrightarrow \ensuremath{\mathcal{D}}$ is obtained as
$\ensuremath{\mathcal{D}} \cong \mathrm{D}es(\mathbb{M}od(\overline{\ensuremath{\mathcal{O}}}_{\bullet}))$ where
$$\overline{\ensuremath{\mathcal{O}}}_{\bullet}: \mathrm{D}elta \longrightarrow \mathbb{R}ng: J \longrightarrow \overline{\ensuremath{\mathcal{O}}}_J$$
is a pseudofunctor (a ``twisted presheaf'') deforming $\ensuremath{\mathcal{O}}(U_{\bullet})$, and $\ensuremath{\mathcal{D}}_J = \mathbb{M}od(\overline{\ensuremath{\mathcal{O}}}_J)$ is the deformation of $\ensuremath{\mathcal{C}}_J$ corresponding to $\overline{\ensuremath{\mathcal{S}}_J}$. By Proposition \ref{desaffine}, the functors
$\overline{i}_K: \ensuremath{\mathcal{D}}_K \longrightarrow \ensuremath{\mathcal{D}}$ are exact, and the $\overline{a}_k: \ensuremath{\mathcal{D}} \longrightarrow \ensuremath{\mathcal{D}}_k$ constitute a covering collection of compatible localizations of $\ensuremath{\mathcal{D}}$. We note that by taking $\mathfrak{g}_i = \{\overline{\ensuremath{\mathcal{O}}}_i\}$, condition 2 in Theorem \ref{thmdefcomp} is automatically fulfilled. We conclude:
\begin{theorem}\label{thmscheme}
Let $X$ be a quasi-compact, separated scheme with an affine cover $U_i$ for $i \in I = \{1, \dots, n\}$. Let $\iota: \mathbb{Q}ch(X) \longrightarrow \ensuremath{\mathcal{D}}$ be an abelian deformation with induced deformations $\ensuremath{\mathcal{D}}_i$ of $\mathbb{Q}ch(U_i)$. For every $i \in I$ and $J \subseteq I \setminus \{i\}$, consider $Z^i_J = U_i \cap \cap_{j \in J} Z_j$. Suppose there is a collection $\mathfrak{g}^i_J$ of objects in $D^-(\ensuremath{\mathcal{D}}_i)$ such that $k \otimes^L_R \mathfrak{g}^i_J$ compactly generates $D_{Z^i_J}(U_i)$ inside $D(U_i)$.
Then $D(\ensuremath{\mathcal{D}})$ is compactly generated and an object in $D(\ensuremath{\mathcal{D}})$ is compact if and only if its image in each $D(\ensuremath{\mathcal{D}}_i)$ is compact.
\end{theorem}
\begin{remark}
Before it makes sense to investigate the more general situation of deformations of quasi-compact, semi-separated schemes $X$, for which $D_{\mathbb{Q}ch(X)}(\mathbb{M}od(X))$ is known to be compactly generated by \cite{bondalvandenbergh}, a better understanding of the direct relation between deformations of $\mathbb{Q}ch(X)$ and $\mathbb{M}od(X)$ should be obtained. It follows from \cite{lowen4} that these two Grothendieck categories have equivalent deformation theories, the deformation equivalence passing through twisted non-commutative deformations of the structure sheaf. An interesting question in its own right is to understand whether corresponding deformations of $\mathbb{Q}ch(X)$ and of $\mathbb{M}od(X)$ are related by an inclusion functor and a quasi-coherator like in the undeformed setup. \end{remark}
\subsection{Twisted deformed schemes}\label{partwisted}
In this section we collect some observations which follow immediately from Theorem \ref{thmscheme}, based upon direct lifting of Koszul complexes. In the slightly more restrictive deformation setup of \S \ref{parmaintheorem}, all compact generation results we state here also follow from the more general Theorem \ref{maintheorem}, but there the involved generators are more complicated.
Let $A$ be a commutative $k$-algebra and $f = (f_1, \dots, f_n)$ a finite sequence of elements in $A$. Put $X = \mathrm{Spec}(A)$.
Consider the closed subset
$$Z = V(f) = V(f_1, \dots, f_n) = \{ p \in \mathrm{Spec}(A) \,\, |\,\, f_1, \dots, f_n \in p\} \subseteq X.$$
Let $\mathbb{Q}ch_Z(X)$ be the localizing subcategory of quasi-coherent sheaves on $X$ supported on $Z$ and put $D_Z(X) = D_{\mathbb{Q}ch_Z(X)}(\mathbb{Q}ch(X))$.
We recall the following:
\begin{proposition}\cite{bokstedtneeman}\label{propbok}
The category $D_Z(X)$ is compactly generated by $K(f)$ inside $D(X)$.
\end{proposition}
Let $\overline{A}$ be an $R$-deformation of $A$ and let $\ensuremath{\mathcal{D}} = \mathbb{M}od(\overline{A})$ be the corresponding abelian deformation of $\ensuremath{\mathcal{C}} = \mathbb{Q}ch(X) \cong \mathbb{M}od(A)$. Let $\overline{\mathbb{Q}ch_Z(X)} \subseteq \ensuremath{\mathcal{D}}$ be the localizing subcategory corresponding to $\mathbb{Q}ch_Z(X) \subseteq \ensuremath{\mathcal{C}}$.
Let $\overline{f} = (\overline{f_1}, \dots, \overline{f_n})$ be a sequence of lifts of the elements $f_i \in A$ to $\overline{f_i}$ in $\overline{A}$ under the canonical map $\overline{A} \longrightarrow A$.
Clearly the precomplex $K(\overline{f})$ of finite free $\overline{A}$-modules
satisfies $k \otimes_R K(\overline{f}) = K(f)$.
If $K(\overline{f})$ is a complex, we thus have $$k \otimes^L_R K(\overline{f}) = K(f).$$
From Proposition \ref{compgenlift} we deduce:
\begin{proposition}
If every two distinct elements $\overline{f_l}$ and $\overline{f_k}$ commute, then $K(\overline{f})$ compactly generates $D_{\overline{\mathbb{Q}ch_Z(X)}}(\ensuremath{\mathcal{D}})$ inside $D(\ensuremath{\mathcal{D}})$.
\end{proposition}
\begin{corollary}
If $\overline{A}$ is a commutative deformation of $A$, then $D_{\overline{\mathbb{Q}ch_Z(X)}}(\ensuremath{\mathcal{D}})$ is compactly generated inside $D(\ensuremath{\mathcal{D}})$.
\end{corollary}
\begin{corollary}\label{corsing}
If $f = (f_1)$ consists of a single element, then $D_{\overline{\mathbb{Q}ch_Z(X)}}(\ensuremath{\mathcal{D}})$ is compactly generated inside $D(\ensuremath{\mathcal{D}})$.
\end{corollary}
We can now formulate some corollaries of Theorem \ref{thmscheme}:
\begin{proposition}
Let $X$ be a quasi-compact separated scheme with affine cover $U_i$ for $i \in I = \{1, \dots, n\}$.
Let
$$\overline{\ensuremath{\mathcal{O}}}_{\bullet}: \mathrm{D}elta \longrightarrow \mathbb{R}ng: J \longmapsto \overline{\ensuremath{\mathcal{O}}}_J$$
be a pseudofunctor deforming
$$\ensuremath{\mathcal{O}}(U_{\bullet}): \mathrm{D}elta \longrightarrow \mathbb{R}ng: J \longmapsto \ensuremath{\mathcal{O}}(U_{\bullet})$$
such that all the rings $\overline{\ensuremath{\mathcal{O}}}_J$ are commutative.
Then the category $D(\ensuremath{\mathcal{D}})$ for $\ensuremath{\mathcal{D}} = \mathrm{D}es(\mathbb{M}od(\overline{\ensuremath{\mathcal{O}}}_{\bullet}))$ is compactly generated and an object in $D(\ensuremath{\mathcal{D}})$ is compact if and only if its image in each of the categories $\mathbb{M}od(\overline{\ensuremath{\mathcal{O}}}_i)$ is compact.
\end{proposition}
In particular, we recover the fact that for a smooth scheme, the components $H^2(X, \ensuremath{\mathcal{O}}_X) ^{\mathrm{op}}lus H^1(X, \ensuremath{\mathcal{T}}_X)$ of $HH^2(X)$ correspond to compactly generated deformations of $\mathbb{Q}ch(X)$, a fact which also follows from \cite{toen2}.
\begin{proposition}\label{curves}
Let $X$ be a scheme with an affine cover $U_1$, $U_2$ with $U_i \cong \mathrm{Spec}(A_i)$ such that $U_1 \cap U_2 \cong \mathrm{Spec}({A_1}_{x}) \cong \mathrm{Spec}({A_2}_y)$ for $x \in A_1$ and $y \in A_2$. Then every deformation of $\mathbb{Q}ch(X)$ is compactly generated.
\end{proposition}
Unfortunately, Proposition \ref{curves} typically applies to curves, and they tend to have no genuinely non-commutative deformations. For instance, for a smooth curve $X$ the Hochschild cohomology is seen to reduce to $HH^2(X) = H^1(X, \ensuremath{\mathcal{T}}_X)$ for dimensional reasons, whence there are only scheme deformations of $X$.
\subsection{Non-commutative deformed schemes}\label{parmaintheorem}
Let $k$ be a field and $(R,m)$ a finite dimensional $k$ algebra with $m^d = 0$ and $R/m = k$.
In this section we prove our main result, namely that non-commutative deformations of quasi-compact separated schemes are compactly generated. Based upon \S \ref{parliftkoszul}, we remedy the fact that for general non-commutative deformations of schemes, the relevant Koszul precomplexes fail to be complexes and hence cannot be used as lifts, unlike in the special cases discussed in \S \ref{partwisted}.
\begin{theorem}\label{maintheorem}
Let $X$ be a quasi-compact separated $k$-scheme with an affine cover $U_i$ for $i \in I = \{1, \dots, n\}$. Let $\iota: \mathbb{Q}ch(X) \longrightarrow \ensuremath{\mathcal{D}}$ be an abelian $R$-deformation with induced deformations $\ensuremath{\mathcal{D}}_i$ of $\mathbb{Q}ch(U_i)$. Then $D(\ensuremath{\mathcal{D}})$ is compactly generated and an object in $D(\ensuremath{\mathcal{D}})$ is compact if and only if its image in each $D(\ensuremath{\mathcal{D}}_i)$ is compact.
\end{theorem}
\begin{proof}
For $i \in I$ and $J \subseteq I \setminus \{i\}$, put $Y = U_i = \mathrm{Spec}(A)$ and $Z = U_i \cap \cap_{j \in J} Z_j$. For a finite sequence of elements $f = (f_1, \dots, f_k)$ we can write
$$Z = V(f) = \{ p \in \mathrm{Spec}(A) \,\, |\,\, f_1, \dots. f_k \in p \} \subseteq Y.$$
For the induced deformation $\ensuremath{\mathcal{D}}_i$ of $\mathbb{Q}ch(U_i) \cong \mathbb{M}od(A)$ we have $\ensuremath{\mathcal{D}}_i \cong \mathbb{M}od(\overline{A})$ for an $R$-deformation $\overline{A}$ of $A$.
By Proposition \ref{propbok}, the category $D_Z(Y)$ is compactly generated by $K(f)$ inside $D(Y)$. Now by Theorem \ref{thmlift}, there exists a perfect complex $X' \in D(\overline{A})$ for which $A \otimes^L_{\overline{A}} X' = k \otimes^L_R X'$ compactly generates $D_X(Y)$ inside $D(Y) \cong D(A)$ as desired.
\end{proof}
The theorem shows in particular that the entire second Hochschild cohomology is realized by means of compactly generated abelian deformations.
\section{Appendix: Removing obstructions}\label{parremobs}
In this appendix we discuss an approach to removing obstuctions to first order deformations from \cite{kellerlowen} based upon the Hochschild complex, which applies in the case of length two Koszul complexes and thus leads to an alternative proof of Theorem \ref{maintheorem} in the case of first order deformations of surfaces. We compare the explicit lifts we obtain in both approaches.
\subsection{Hochschild complex}
Let $\mathbb{P}P$ be a $k$-linear abelian category. Recall that the Hochschild complex $\mathbb{C}C(\mathbb{P}P)$ is the complex of $k$ modules with, for $n \geq 0$,
$$\mathbb{C}C^n(\mathbb{P}P) = \mathrm{pr}od_{P_0, \dots, P_n \in \ensuremath{\mathcal{C}}} \mathrm{Hom}_k(\mathbb{P}P(P_{n-1}, P_n) \otimes \dots \otimes \mathbb{P}P(P_0, P_1), \mathbb{P}P(P_0, P_n))$$
endowed with the familiar Hochschild differential.
Let $C(\mathbb{P}P)$ be the dg category of complexes of $\mathbb{P}P$-objects with Hochschild complex $\mathbb{C}C(C(\mathbb{P}P))$ with
$$\mathbb{C}C^n(C(\mathbb{P}P)) = \mathrm{pr}od_{C_0, \dots, C_n \in C(\mathbb{P}P)} \mathrm{Hom}_k(\mathrm{Hom}(C_{n-1}, C_n) \otimes \dots \otimes \mathrm{Hom}(C_0, C_1), \mathrm{Hom}(C_0, C_n)).$$
An element of $C^n(\mathbb{P}P)$ can be naively extended to $C(\mathbb{P}P)$, yielding
$$\mathbb{C}C^n(\mathbb{P}P) \longrightarrow \mathbb{C}C^n(C(\mathbb{P}P)): \phi \longrightarrow \phi.$$
\subsection{Linear deformations and lifts of complexes}
If $m$ is the compositon of the category $\mathbb{P}P$, then a Hochschild $2$-cocycle $\phi$ corresponds to the first order deformation
$$(\overline{\mathbb{P}P} = \mathbb{P}P[\epsilon], \overline{m} = m + \phi \epsilon).$$
Here $\mathrm{Ob}(\overline{\mathbb{P}P}) \cong \mathrm{Ob}(\mathbb{P}P)$ and we denote objects in $\overline{\mathbb{P}P}$ by $\overline{P}$ for $P \in \mathbb{P}P$.
For objects $P_0, P_1 \in \mathbb{P}P$, we have $\overline{\mathbb{P}P}(\overline{P_1}, \overline{P_0}) = \mathbb{P}P(P_1, P_0)[\epsilon]$. A morphism $f: P_1 \longrightarrow P_0$ in $\mathbb{P}P$ naturally gives rise to a morphism $\overline{f} = f + 0\epsilon: \overline{P_1} \longrightarrow \overline{P_0}$ in $\overline{\mathbb{P}P}(\overline{P_1}, \overline{P_0})$, the trivial lift.
For a complex $(P, d)$ of $\mathbb{P}P$-objects, there thus arises a natural lifted precomplex $(\overline{P}, \overline{d})$ of $\overline{\mathbb{P}P}$-objects.
In $\overline{\mathbb{P}P}$ we have
$$\overline{m}(\overline{d}, \overline{d}) = \phi(d,d) \epsilon$$
and in fact, $$[\phi(d,d)] \in K(\mathbb{P}P)(P[-2],P)$$
is precisely the obstruction to the existence of a \emph{complex} $(\overline{P}, \overline{d}')$ in $K(\overline{\mathbb{P}P})$ with $k \otimes_R (\overline{P}, \overline{d}') \cong (P,d)$ in $K(\mathbb{P}P)$ (see \cite{lowen2}).
In general this obstruction will not vanish, but in some cases it is seen to vanish on the nose.
\begin{proposition}
If the differential $d$ of $P$ has no two consecutive non-zero components $d_n: P_n \longrightarrow P_{n-1}$, then $\phi(d,d) = 0$ and $(\overline{P}, \overline{d})$ is a complex lifting $(P,d)$.
\end{proposition}
\subsection{Removing obstructions}\label{par2remobs}
If $0 \neq [\phi(d,d)] \in \mathrm{Ext}^2_{\mathbb{P}}(P,P)$, following \cite{kellerlowen} we consider the morphism $\phi(d,d): \Sigma^{-2} P \longrightarrow P$ and we turn to the related complex
$$P^{(1)} = \mathrm{cone}(\phi(d,d)) = P \oplus \Sigma^{-1} P$$
with differential
$$d^{(1)} = \begin{pmatrix} d & \phi(d,d) \\ 0 & -d \end{pmatrix}.$$
The obstruction associated to the complex $P^{(1)}$ is then given by
$$\phi^{(1)} = \phi(d^{(1)}, d^{(1)}) = \begin{pmatrix} \phi(d,d) & \phi(d, \phi(d,d)) - \phi(\phi(d,d), d) \\
0 & \phi(d,d) \end{pmatrix}.$$
According to \cite[Lemma 3.18]{kellerlowen}, the degree two morphism
$$\begin{pmatrix} \phi(d,d) & 0\\
0 & \phi(d,d) \end{pmatrix}: P^{(1)} \longrightarrow P^{(1)}$$
is nullhomotopic (a nullhomotopy is given by $\small{\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}}$).
Put $\psi^{(1)} = \phi(d, \phi(d,d)) - \phi(\phi(d,d), d)$. It follows that the obstruction associated to $P^{(1)}$ is given by
$$\tilde{\phi}^{(1)} = \begin{pmatrix} 0 & \psi^{(1)} \\ 0 & 0 \end{pmatrix} \in K(\mathbb{P})(P^{(1)}[-2], P^{(1)}).$$
In general there is no reason why $\tilde{\phi}^{(1)}$ should be nullhomotopic, but in some cases it can be seen to be zero on the nose.
\begin{proposition}\label{propconelift}
Suppose the differential $d$ of $P$ has no three consecutive non-zero components $d_n: P_n \longrightarrow P_{n-1}$. Then $\psi^{(1)} = 0$ and there is a complex $\overline{P^{(1)}} \in K(\overline{\mathbb{P}})$ with $k \otimes_R \overline{P^{(1)}} \cong P^{(1)} \in K(\mathbb{P})$.
\end{proposition}
Following \cite[Proposition 3.16]{kellerlowen}, we note that the original complex $P$ can sometimes be reconstructed from $P^{(1)}$.
\begin{proposition} \cite{kellerlowen} \label{propconegen}
If $\phi(d,d)$ is nilpotent, $P$ can be constructed from $P^{(1)}$ using cones, shifts and direct summands. This applies in particular if $d$ has no $m$ consecutive non-zero components $d_n: P_n \longrightarrow P_{n-1}$ for some $m \geq 1$.
\end{proposition}
\begin{proof}
This follows from the octahedral axiom (see \cite[Proposition 3.16]{kellerlowen}).
\end{proof}
\subsection{The case of Koszul complexes}
The approach to removing obstructions discussed in \S \ref{par2remobs} applies to the case of length two Koszul complexes. In this section we compare this approach with the solution from \S \ref{parliftkoszul}.
Let $A$ be a commutative $k$-algebra with a first order deformation $\bar{A}$ determined by a Hochschild $2$-cocycle $\phi \in \mathrm{Hom}_k(A \otimes_k A, A)$. For a sequence $f = (f_1, f_2)$ of elements in $A$ we consider the Koszul complex $(K(f), d)$ which is given by
$$\xymatrix{0 \ar[r] & A \ar[r]_-{\small{\begin{pmatrix} f_1 \\ -f_2 \end{pmatrix}}} & {A^2} \ar[r]_{\small{\begin{pmatrix} f_2 & f_1 \end{pmatrix}}} & A \ar[r] & 0}$$
As in \S \ref{par2remobs}, we take $\mathbb{P}$ to be the category of finite free $A$-modules.
The obstruction $\phi(d,d): \Sigma^{-2}K(f) \longrightarrow K(f)$ is determined by the element
$$\alpha = \phi(f_1, f_2) - \phi(f_2, f_1) \in A.$$
The complex $K(f)^{(1)} = \mathrm{cone}(\phi(d,d))$ is given by
$$\xymatrix{0 \ar[r] & A \ar[r]_{d^{(1)}_3} & {A^3} \ar[r]_{d^{(1)}_2} & {A^3} \ar[r]_{d^{(1)}_1} & A \ar[r] & 0}$$
with differentials given by
$$d^{(1)}_3 = \begin{pmatrix} 0 \\ -f_1 \\ f_2 \end{pmatrix}, \hspace{0,5cm} d^{(1)}_2 = \begin{pmatrix} -f_2 & 0 & 0 \\
f_1 & 0 & 0 \\ \ \alpha & f_2 & f_1 \end{pmatrix}, \hspace{0,5cm} d^{(1)}_1 = \begin{pmatrix} - f_1 & - f_2 & 0 \end{pmatrix}.$$
Apart from the signs, the main difference with the complex $X$ from \S \ref{parliftkoszul} lies in the fact that here $\alpha$ depends on the Hochschild cocycle, whereas in $X$ it is replaced by the constant value $1$.
The nullhomotopy $\partial$ for the obstruction $\phi^{(1)}$ gives rise to the lifted complex $K(f)^{(1)}[\epsilon]$ with differential $d^{(1)} - \partial \epsilon$. Concretely, the differential is given by
$$\bar{d}^{(1)}_3 = \begin{pmatrix} - \epsilon \\ -f_1 \\ f_2 \end{pmatrix}, \hspace{0,5cm} \bar{d}^{(1)}_2 = \begin{pmatrix} -f_2 & 0 & - \epsilon \\
f_1 & - \epsilon & 0 \\ \ \alpha & f_2 & f_1 \end{pmatrix}, \hspace{0,5cm} \bar{d}^{(1)}_1 = \begin{pmatrix} - f_1 & - f_2 & - \epsilon \end{pmatrix}.$$
On the other hand, if for $X'$ we choose $f'_i = f_i + 0 \epsilon$, then we have $[f'_1, f'_2] = \alpha \epsilon$ and hence $X'$ has differential $d'$ given by
$$d'_3 = \begin{pmatrix} \alpha \epsilon \\ f_1 \\ f_2 \end{pmatrix}, \hspace{0,5cm} d'_2 = \begin{pmatrix} -f_2 & 0 & \alpha \epsilon \\
f_1 & -\alpha \epsilon & 0 \\ -1 & f_2 & - f_1 \end{pmatrix}, \hspace{0,5cm} d'_1 = \begin{pmatrix} f_1 & f_2 & \alpha \epsilon \end{pmatrix}.$$
Clearly, the computations leading to $(\bar{d}^{(1)})^2 = 0$ and to ${d'}^2 = 0$ are almost identical, with the definition of $\alpha$ as their main ingredient.
\end{document}
\begin{document}
{\bf\LARGE
\begin{center}
Construction of symmetric Hadamard matrices of order $4v$
for $v=47,73,113$
\end{center}
}
\noindent
{\bf N. A. Balonin$^a$}, Dr. Sc., Tech., Professor, [email protected] \\
{\bf D. {\v{Z}}. {\leavevmode\lower.6ex\hbox to 0pt{\hskip-.23ex \accent"16\hss}D}okovi{\'c}$^b$}, PhD, Distinguished Professor Emeritus, [email protected] \\
{\bf D. A. Karbovskiy$^a$}, [email protected] \\
\noindent
${}^{a}$Saint-Petersburg State University of Aerospace Instrumentation,
67, B. Morskaia St., 190000, Saint-Petersburg, Russian Federation \\
${}^{b}$University of Waterloo, Department of Pure Mathematics and Institute for Quantum Computing, Waterloo, Ontario, N2L 3G1, Canada \\
\begin{abstract}
We continue our systematic search for symmetric Hadamard matrices
based on the so-called propus construction. In a previous paper
this search covered the orders $4v$ with odd $v\le41$. In this
paper we cover the cases $v=43,45,47,49,51$.
The odd integers $v<120$ for which no symmetric Hadamard matrices of order $4v$ are known are the following:
$$47,59,65,67,73,81,89,93,101,103,107,109,113,119.$$
By using the propus construction, we found several symmetric Hadamard matrices of order $4v$ for $v=47,73,113$.
{\bf Keywords:} Symmetric Hadamard matrices, Propus array, cyclic difference families, Diophantine equations.
\end{abstract}
\section{Introduction}
In this paper we continue the systematic investigation,
begun in \cite{BBDKM:2017-a}, of the propus construction of symmetric Hadamard matrices.
Let us recall that a {\em Hadamard matrix} is a $\{1,-1\}$-matrix $H$ of order $m$ whose rows are mutually orthogonal, i.e. $HH^T=mI_m$, where $I_m$ is the identity matrix of order
$m$. We say that $H$ is a {\em skew-Hadamard matrix} if also $H+H^T=2I_m$.
The famous {\em Hadamard conjecture} asserts that Hadamard matrices exist for all orders $m$ which are multiples of $4$. (They also exist for $m=1,2$.) Similar conjectures have been proposed for symmetric Hadamard matrices and skew-Hadamard matrices, see e.g. \cite[V.1.4]{CK:2007}. The smallest orders $4v$ for which such matrices have not been constructed are 668 for Hadamard matrices, 276 for skew-Hadamard matrices, and 188 for symmetric Hadamard matrices. Let us also mention that symmetric Hadamard matrices of orders 116, 156, 172 have been constructed only very recently, see
\cite{DDK:SpecMatC:2015,BBDKM:2017-a}.
Since the size of a Hadamard matrix or a skew or symmetric
Hadamard matrix can always be doubled, while preserving its type, we are interested mostly in the case where these matrices have order $4v$ with $v$ odd.
The propus construction is based on the so-called {\em Propus array}
\begin{equation} \label{Propus-array}
H=\left[ \begin{array}{cccc}
-C_1 & C_2R & C_3R & C_4R \\
C_3R & RC_4 & C_1 & -RC_2 \\
C_2R & C_1 & -RC_4 & RC_3 \\
C_4R & -RC_3 & RC_2 & C_1
\end{array} \right].
\end{equation}
In this paper, except in section \ref{sec:Exceptional}, the matrices $C_i$ will be circulants of order $v$ and the matrix
$R$ will be the back-circulant identity matrix of order $v$,
$$
\label{Matrix-R}
R=\left[ \begin{array}{ccccc}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & & 1 & 0 \\
\vdots & & & \\
0 & 1 & & 0 & 0 \\
1 & 0 & & 0 & 0
\end{array} \right].
$$
The matrix $H$ will be a Hadamard matrix if
\begin{equation} \label{eq:uslov-C}
\sum_{i=1}^{4} C_i C_i^T = 4vI_{v}.
\end{equation}
(The superscript $T$ denotes matrix transposition.)
If also $C_1^T=C_1$ and $C_2=C_3$ then $H$ will be a symmetric
Hadamard matrix.
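The assembly of the propus array and the verification of condition (\ref{eq:uslov-C}) are easy to script. The following Python sketch (assuming NumPy; the function names are ours) builds $H$ from the first rows of the four circulants; with $C_1$ symmetric and $C_2=C_3$ the result is a symmetric Hadamard matrix of order $4v$.

```python
import numpy as np

def circulant(row):
    # Circulant matrix of order v whose first row is `row`.
    return np.array([np.roll(row, i) for i in range(len(row))])

def propus(c1, c2, c3, c4):
    # Plug four {+1,-1} first rows into the propus array.
    v = len(c1)
    C1, C2, C3, C4 = (circulant(c) for c in (c1, c2, c3, c4))
    R = np.fliplr(np.eye(v, dtype=int))  # back-circulant identity matrix
    return np.block([
        [-C1,       C2 @ R,    C3 @ R,    C4 @ R],
        [ C3 @ R,   R @ C4,    C1,       -(R @ C2)],
        [ C2 @ R,   C1,       -(R @ C4),  R @ C3],
        [ C4 @ R,  -(R @ C3),  R @ C2,    C1],
    ])
```

One can then test $HH^T=4vI_{4v}$ and $H=H^T$ directly on the returned matrix.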
To construct the circulants $C_i$ satisfying the above conditions we use the cyclic propus difference families
$(A_1,A_2,A_3,A_4)$ with parameters
$(v;k_1,k_2,k_3,k_4;\lambda)$ such that $A_2=A_3$ and at least one of the base blocks $A_1,A_4$ is symmetric. The parameters must satisfy the three equations
\begin{eqnarray} \label{eq:osn}
&& \sum_{i=1}^4 k_i(k_i-1) = \lambda(v-1), \\
&& \sum_{i=1}^4 k_i = \lambda+v, \label{eq:gs} \\
&& k_2 = k_3. \label{eq:23}
\end{eqnarray}
We refer to such parameter sets as the {\em propus parameter sets}.
For the definitions of the terms that we use here
and the facts we mention below, we refer the reader to \cite{BBDKM:2017-a}. Without any loss of generality, we impose the following additional restrictions:
\begin{equation} \label{eq:add}
v/2 \ge k_1,k_2; \quad k_1 \ge k_4.
\end{equation}
For convenience we say that the propus parameter sets satisfying
these additional conditions are {\em normalized}.
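These arithmetic constraints are straightforward to verify mechanically. A short Python sketch (the helper names are ours) checks the three defining equations, the condition $k_2=k_3$, and the normalization restrictions:

```python
def is_propus_params(v, k, lam):
    # The three defining equations for a propus parameter set
    # (v; k1, k2, k3, k4; lambda), with k = (k1, k2, k3, k4).
    k1, k2, k3, k4 = k
    return (sum(ki * (ki - 1) for ki in k) == lam * (v - 1)
            and sum(k) == lam + v
            and k2 == k3)

def is_normalized(v, k, lam):
    # Adds the restrictions v/2 >= k1, k2 and k1 >= k4.
    k1, k2, k3, k4 = k
    return (is_propus_params(v, k, lam)
            and 2 * k1 <= v and 2 * k2 <= v and k1 >= k4)
```

For instance, $(47;20,22,22,18;35)$ passes both checks, while changing $\lambda$ to $34$ fails them.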
For a given odd $v$ there exist at least one normalized propus parameter set, see \cite[Theorem 1]{BBDKM:2017-a}. However, there exist even $v$ for which this is not true, see \cite[Theorem 2]{BBDKM:2017-a}.
It is conjectured in \cite{BBDKM:2017-a} that for each odd $v$
there exists at least one propus difference family in the
cyclic group ${\mbox{\bf Z}}_v$ of integers modulo $v$. But this may
fail if we specify not only $v$ (odd) but also the
parameters $k_1,k_2=k_3,k_4$.
Our computations suggest that these exceptional propus
parameter sets must have all $k_i$ equal to each other. For instance, there is no cyclic propus difference family having the parameters $(25;10,10,10,10;15)$. (This is also true for the propus difference families over the elementary abelian group
${\mbox{\bf Z}}_5\times{\mbox{\bf Z}}_5$.)
One of the authors developed a computer program to search
for propus difference families. For the description of the
algorithm used in the program we refer the reader to
\cite{BBDKM:2017-a}. We used that program on
PCs to construct many such families for odd (or even) $v$.
The first version of the program was used in the range
$v<43$. The second, improved version, was capable of finding
solutions for $v\le 51$. Some of the timings for these computations are given in section \ref{sec:app}.
In section \ref{sec:v=47} we give several examples of symmetric Hadamard matrices of new orders 188, 292, and 452.
In section \ref{Parametri} we list the normalized propus parameter sets for odd $v\in\{43,45,\ldots,59\}$ and for each
of them we indicate whether propus families with that parameter set exist and, if they do, which of the blocks A or D can
be chosen to be symmetric. This list together with a similar list in \cite{BBDKM:2017-a} shows that there is a rich supply of
propus type symmetric Hadamard matrices for orders $4v$ with
odd $v<50$. Sporadic examples are also known for $v=53,55,57$. The first undecided case is $v=59$.
In section \ref{sec:Exceptional} we focus on the case where $v=s^2$ is an odd square. We count the number of propus parameter sets $(v;x,y,y,z;\lambda)$ with $v=s^2$ by dropping the normalization condition $x\ge z$. This number, $N_s$, is also the number of positive odd integer solutions of a simple quadratic Diophantine equation, namely (\ref{eq:Dioph}). When
$s$ is an odd prime then we conjecture that
$N_s-s-1\in\{+1,-1\}$. We refer to the cases where $v$ is odd and all $k_i$ are equal as exceptional cases. They occur only when $v=s^2$. We also conjecture that every prime
$s\equiv1 \pmod{4}$ can be written uniquely as
$s=(a^2+b^2)/(a-b)$ where $a$ and $b$ are positive integers and $1<a\le (s-1)/2$. Moreover, the denominator $a-b$ is either a square or 2 times a square.
Finally in section \ref{sec:app}, for each of the normalized propus parameter sets with odd $v=43,45,\ldots,51$, but
excluding the exceptional parameter set $(49;21,21,21,21;35)$, we list one or two examples of propus difference families.
\section{Symmetric Hadamard matrices of new orders}
\label{sec:v=47}
The smallest order $4v$ for which no symmetric Hadamard
matrix was known previously is $188=4\cdot47$.
There are four propus parameter sets
$$
(47; 20, 22, 22, 18; 35), (47; 22, 20, 20, 19; 34),
(47; 23, 19, 19, 21; 35),(47; 23, 22, 22, 17; 37)
$$
with $v=47$. In each case we constructed many such matrices, but here we record just two examples for each parameter set. In all four cases, $A$ is symmetric in the first example and $D$ is symmetric in the second. As $B=C$, we omit the block $C$.
The examples are separated by semicolons.
\begin{eqnarray*}
&& (47; 20, 22, 22, 18; 35) \\
&& [1,2,6,7,12,14,15,18,22,23,24,25,29,32,33,35,40,41,45,46],\\
&& [0,1,2,3,4,7,9,10,13,14,19,26,28,30,32,34,35,36,37,39,42,46],\\
&& [0,1,2,10,12,15,20,23,26,27,28,30,33,34,39,42,43,45];\\
&& [0,1,3,4,6,7,10,11,13,15,18,19,24,29,31,33,35,37,38,45],\\
&& [0,1,2,5,8,9,10,12,13,18,19,23,24,25,27,29,31,32,38,39,41,44],\\
&& [9,10,11,12,14,16,20,21,23,24,26,27,31,33,35,36,37,38];\\
&& \\
&& (47; 22, 20, 20, 19; 34) \\
&& [1,4,5,7,8,9,11,12,16,18,21,26,29,31,35,36,38,39,40,42,43,46],\\
&& [0,1,2,3,7,15,16,19,21,23,26,27,28,29,30,32,34,37,38,44],\\
&& [0,1,2,3,8,9,11,12,13,18,20,21,26,27,32,34,36,41,44];\\
&& [0,1,2,3,7,9,10,12,14,16,17,18,20,23,26,27,28,35,37,42,43,45],\\
&& [0,1,2,3,9,11,13,14,19,23,26,27,29,30,32,33,34,35,38,43],\\
&& [0,4,6,11,15,16,18,19,22,23,24,25,28,29,31,32,36,41,43];\\
&& \\
&& (47; 23, 19, 19, 21; 35) \\
&& [0,1,4,5,7,9,10,11,12,15,19,22,25,28,32,35,36,37,38,40,42,43,46],\\
&& [0,1,2,3,5,8,9,10,12,13,17,19,22,24,28,30,34,36,37],\\
&& [0,1,2,3,13,14,17,18,19,21,25,26,27,30,32,34,35,40,41,43,44];\\
&& [0,1,2,3,5,6,7,8,11,13,15,16,18,22,23,24,26,27,29,33,38,40,45],\\
&& [0,1,2,3,5,11,12,17,22,25,29,30,31,33,34,35,37,38,41],\\
&& [0,2,4,7,8,10,11,16,17,21,23,24,26,30,31,36,37,39,40,43,45];\\
&& \\
&& (47; 23, 22, 22, 17; 37) \\
&& [0,1,4,11,12,13,15,16,18,19,21,22,25,26,28,29,31,32,34,35,36,43,46],\\
&& [0,1,2,3,4,5,8,10,13,18,19,21,23,25,27,29,30,36,39,41,42,43],\\
&& [0,1,2,5,6,10,11,12,15,21,25,26,33,38,40,41,45];\\
&& [0,1,2,3,4,7,8,10,11,12,13,16,19,21,23,25,26,29,31,33,34,35,45],\\
&& [0,1,2,3,4,8,9,12,13,14,17,18,19,20,26,27,29,31,34,37,40,44],\\
&& [0,2,6,13,15,18,20,21,22,25,26,27,29,32,34,41,45].\\
&& \\
\end{eqnarray*}
Let us give a concrete example. We choose the first parameter set above, $(47;20,22,22,18;35)$, and its first propus difference family, namely:
\begin{eqnarray*}
A &=& [1,2,6,7,12,14,15,18,22,23,24,25,29,32,33,35,40,41,45,46],\\
B=C &=& [0,1,2,3,4,7,9,10,13,14,19,26,28,30,32,34,35,36,37,39,42,46],\\
D &=& [0,1,2,10,12,15,20,23,26,27,28,30,33,34,39,42,43,45].
\end{eqnarray*}
The binary $\{+1,-1\}$-sequences $a,b=c,d$ associated with the base blocks $A,B=C,D$ are:
\begin{eqnarray*}
a &=& [1,-1,-1,1,1,1,-1,-1,1,1,1,1,-1,1,-1,-1,1,1,-1,1,1,1,-1, \\
&& -1,-1,-1,1,1,1,-1,1,1,-1,-1,1,-1,1,1,1,1,-1,-1,1,1,1,-1,-1], \\
b=c &=& [-1,-1,-1,-1,-1,1,1,-1,1,-1,-1,1,1,-1,-1,1,1,1,1,-1,1,1,1, \\
&& 1,1,1,-1,1,-1,1,-1,1,-1,1,-1,-1,-1,-1,1,-1,1,1,-1,1,1,1,-1],\\
d &=& [-1,-1,-1,1,1,1,1,1,1,1,-1,1,-1,1,1,-1,1,1,1,1,-1,1,1,-1, \\
&& 1,1,-1,-1,-1,1,-1,1,1,-1,-1,1,1,1,1,-1,1,1,-1,-1,1,-1,1].
\end{eqnarray*}
The circulant matrices $C_1,C_2=C_3,C_4$ whose first rows are the sequences $a,b=c,d$ are depicted in Figure 1. Our convention is that a white square represents $+1$ and a black square represents $-1$. Note that $C_1$ is symmetric.
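The condition (\ref{eq:uslov-C}) for this example can be confirmed directly. The following Python sketch (assuming NumPy; helper names are ours) rebuilds the three distinct circulants from the base blocks, placing $-1$ on the block positions as above:

```python
import numpy as np

v = 47
A = [1,2,6,7,12,14,15,18,22,23,24,25,29,32,33,35,40,41,45,46]
B = [0,1,2,3,4,7,9,10,13,14,19,26,28,30,32,34,35,36,37,39,42,46]
D = [0,1,2,10,12,15,20,23,26,27,28,30,33,34,39,42,43,45]

def sequence(block):
    # -1 on positions belonging to the base block, +1 elsewhere.
    return np.array([-1 if j in block else 1 for j in range(v)])

def circulant(row):
    return np.array([np.roll(row, i) for i in range(v)])

C1, C2, C4 = (circulant(sequence(X)) for X in (A, B, D))
S = C1 @ C1.T + 2 * (C2 @ C2.T) + C4 @ C4.T
```

Here `S` should equal $4vI_v = 188\,I_{47}$, and $C_1$ should be symmetric.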
\begin{figure}
\caption{The circulants $C_1,C_2=C_3,C_4$}
\label{fig:Fig1}
\end{figure}
By plugging these circulants into the propus array
(\ref{Propus-array}) we obtain the symmetric Hadamard matrix shown in Figure 2.
\begin{figure}
\caption{Symmetric Hadamard matrix of order 188}
\label{fig:Fig2}
\end{figure}
Next we give the symmetric Hadamard matrices of order $4v$
where $v=73,113$. No symmetric Hadamard matrices of these orders
were known previously.
We build the four base blocks $A,B=C,D$ as a union of orbits of a subgroup $H$ of ${\mbox{\bf Z}}^*_v$ acting on the finite field ${\mbox{\bf Z}}_v$. We choose $H=\{1,8,64\}$ for $v=73$ and $H=\{1,16,28,30,49,106,109\}$ for $v=113$.
For $v=73$ we use the parameter set $(73;36,36,36,28;63)$.
The base blocks of the two propus difference families are:
\begin{eqnarray*}
A &=& \bigcup_{i\in I} iH,\quad
I=\{1,2,3,4,9,11,18,21,26,27,36,43\} \\
B=C &=& \bigcup_{j\in J} jH,\quad
J=\{2,4,5,6,9,12,14,17,27,34,35,36\} \\
D &=& \{0\} \cup \bigcup_{k\in K} kH,\quad
K=\{1,2,3,6,7,9,18,42,43\}; \\
&& \\
A &=& \bigcup_{i\in I} iH,\quad
I=\{1,2,3,4,6,9,12,18,25,27,35,36\} \\
B=C &=& \bigcup_{j\in J} jH,\quad
J=\{2,5,7,9,13,17,25,26,33,35,36,42\} \\
D &=& \{0\} \cup \bigcup_{k\in K} kH,\quad
K=\{4,6,13,18,27,34,35,36,42\}. \\
\end{eqnarray*}
For $v=113$ we use the parameter set $(113;56,49,49,56;97)$.
The base blocks of the four propus difference families are:
\begin{eqnarray*}
A &=& \bigcup_{i\in I} iH,\quad I=\{1,4,5,6,13,17,18,20\} \\
B=C &=& \bigcup_{j\in J} jH,\quad J=\{1,5,9,11,12,17,39\} \\
D &=& \bigcup_{k\in K} kH,\quad K=\{2,3,5,10,11,12,18,20\}; \\
&& \\
A &=& \bigcup_{i\in I} iH,\quad I=\{1,4,5,6,13,17,18,20\} \\
B=C &=& \bigcup_{j\in J} jH,\quad J=\{1,2,4,11,12,13,17\} \\
D &=& \bigcup_{k\in K} kH,\quad K=\{1,2,3,5,11,12,18,20\}; \\
&& \\
\end{eqnarray*}
\begin{eqnarray*}
A &=& \bigcup_{i\in I} iH,\quad I=\{1,4,5,6,13,17,18,20\} \\
B=C &=& \bigcup_{j\in J} jH,\quad J=\{1,2,4,11,12,13,17\} \\
D &=& \bigcup_{k\in K} kH,\quad K=\{3,4,5,8,9,12,13,20\}; \\
&& \\
A &=& \bigcup_{i\in I} iH,\quad I=\{1,3,4,10,12,13,18,39\} \\
B=C &=& \bigcup_{j\in J} jH,\quad J=\{2,5,9,10,17,20,39\} \\
D &=& \bigcup_{k\in K} kH,\quad K=\{2,3,9,11,12,17,20,39\}. \\
&& \\
\end{eqnarray*}
The first three families share the same block $A$,
and the second and third family differ only in block $D$.
In spite of that, the four families are pairwise
nonequivalent.
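The orbit description above makes these families easy to reconstruct and check. As an illustration, the following Python sketch rebuilds the first family for $v=73$ and counts differences; the difference-family property requires every nonzero residue to occur exactly $\lambda=63$ times among the blocks $A,B,C=B,D$.

```python
from collections import Counter

# First propus difference family for v = 73, parameter set (73;36,36,36,28;63),
# built from orbits of H = {1, 8, 64} acting on Z_73 by multiplication.
v, H = 73, (1, 8, 64)
I = {1, 2, 3, 4, 9, 11, 18, 21, 26, 27, 36, 43}
J = {2, 4, 5, 6, 9, 12, 14, 17, 27, 34, 35, 36}
K = {1, 2, 3, 6, 7, 9, 18, 42, 43}

def orbit_union(reps):
    return {(i * h) % v for i in reps for h in H}

A, B, D = orbit_union(I), orbit_union(J), {0} | orbit_union(K)

# Multiset of differences within the blocks A, B, C = B, D.
diffs = Counter((x - y) % v for S in (A, B, B, D) for x in S for y in S if x != y)
```

The block sizes come out as $36,36,28$ and every nonzero residue occurs $63$ times, as required.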
\section{Normalized propus parameter sets}
\label{Parametri}
We list here all normalized propus parameter sets
$(v;x,y,y,z;\lambda)$ for odd $v=43,45,\ldots,59$. The cyclic propus families consisting of four base blocks
$A,B,C,D\subseteq{\mbox{\bf Z}}_v$ having sizes $x,y,y,z$, respectively, and such that $B=C$ and $A$ or $D$ is symmetric give symmetric Hadamard matrices of order $4v$. (If only $D$ is symmetric we have to switch $A$ and $D$ before plugging the blocks into the propus array.) If $x=z\ne y$ then the parameter set
$(v;y,x,x,y;\lambda)$ is also normalized and is included in our list. In the former case the two base blocks of size $y$ have to be equal, while in the latter case the base blocks of size $x$
have to be equal.
The four base blocks, subsets of ${\mbox{\bf Z}}_v$, are denoted by $A,B,C,D$. We require all propus difference families to have
$B=C$. If we know such a family exists with symmetric block $A$, we indicate this by writing the symbol $A$ after the parameter set, and similarly for the symbol $D$. If we know that there exists a propus family with both $A$ and $D$ symmetric, then we write the symbol $AD$. Finally, the question mark means that the existence of a cyclic propus difference family remains undecided.
The symbol $T$ indicates that the parameter set belongs to the Turyn series of Williamson matrices. Since in that case all four base blocks are symmetric, the symbol $T$ implies $AD$. Further, the symbol $X$ indicates that the parameter set belongs to another infinite series (see
\cite[Theorem 5]{DDK:SpecMatC:2015}) which is based on the
paper \cite{XXSW} of Xia, Xia, Seberry, and Wu. In our list
below the symbol $X$ implies $D$. More precisely, for a difference family $A,B,C,D$ in the $X$-series two blocks are equal, say $B=C$,
and one of the remaining blocks is skew, block $A$ in our list,
and the last one is symmetric, block $D$.
For odd $v$ in the range $43,45,\ldots,51$ there is only one propus parameter set, $(49;21,21,21,21;35)$, for which we failed to find a cyclic propus difference family.
(We believe that such family does not exist.)
\begin{center}
Normalized propus parameter sets with $v$ odd, $43\le v\le 59$
\end{center}
$$
\begin{array}{llll}
(43;18,21,21,16;33) & A,D & (43;19,18,18,18;30) & A,D \\
(43;21,17,17,20;32) & A,D & (43;21,19,19,16;32) & A,D \\
(43;21,21,21,15;35) & A,D & (45;18,21,21,18;33) & A,D \\
(45;19,20,20,18;32) & AD,T & (45;21,18,18,21;33) & A,D \\
(45;21,20,20,17;33) & A,D & (45;21,22,22,16;36) & A,D \\
(45;22,19,19,18;33) & A,D,X & (47;20,22,22,18;35) & A,D \\
(47;22,20,20,19;34) & A,D & (47;23,19,19,21;35) & A,D \\
(47;23,22,22,17;37) & A,D & (49;21,21,21,21;35) & ? \\
(49;22,22,22,19;36) & A,D & (49;22,24,24,18;39) & A,D \\
(49;23,20,20,22;36) & AD,T & (49;23,23,23,18;38) & A,D \\
(51;21,25,25,20;40) & AD,T & (51;23,22,22,21;37) & A,D \\
(53;22,24,24,22;39) & ? & (53;24,22,22,24;39) & ? \\
(53;24,25,25,20;41) & ? & (53;26,22,22,23;40) & D,X \\
(55;23,26,26,22;42) & AD,T & (55;24,25,25,22;41) & ? \\
(55;24,27,27,21;44) & ? & (55;26,23,23,24;41) & ? \\
(55;27,24,24,22;42) & ? & (55;27,25,25,21;43) & ? \\
(57;25,25,25,24;42) & ? & (57;27,25,25,23;43) & ? \\
(57;27,26,26,22;44) & ? & (57;28,28,28,21;48) & D,X \\
(59;26,28,28,23;46) & ? & (59;27,25,25,26;44) & ? \\
(59;28,29,29,22;49) & ? & & \\
\end{array}
$$
In order to justify the claims made in this list, we give in section \ref{sec:app} examples of the propus difference families having the required properties. (For $v=47$ the examples are listed in section \ref{sec:v=47}.)
\section{Exceptional series of propus parameter sets}
\label{sec:Exceptional}
We say that a propus parameter set $(v;k_1,k_2,k_3,k_4;\lambda)$ is {\em exceptional} if $k_1=k_2=k_3=k_4$. The exceptional parameter sets are parametrized by just one integer $s>1$ and are given by the formula
\begin{equation} \label{def:exceptional}
\Pi_s=(~s^2;~\binom{s}{2},\binom{s}{2},\binom{s}{2},
\binom{s}{2};~s(s-2)~).
\end{equation}
There exists a cyclic propus difference family with parameter set $\Pi_3$. There exists also a propus difference
family $(A,B,C,D)$ over the group ${\mbox{\bf Z}}_3\times{\mbox{\bf Z}}_3$ with
the same parameter set and such that $A$ is symmetric and
$B=C=D$. By using the finite field ${\mbox{\bf Z}}_3[\alpha]$ where
$\alpha^2=-1$, we can take
\begin{equation} \label{3x3}
A=\{0,\alpha,-\alpha\},\quad
B=C=D=\{\alpha,1-\alpha,\alpha-1\}.
\end{equation}
For $s=5$, it is reported in \cite{BBDKM:2017-a} that there are no propus difference families in ${\mbox{\bf Z}}_{25}$ having $\Pi_5$ as their parameter set. We performed another exhaustive search and found no such families in ${\mbox{\bf Z}}_5\times{\mbox{\bf Z}}_5$ either.
For $s=7$, our non-exhaustive searches found no cyclic propus difference families having the parameter set $\Pi_7$.
However, we found a cyclic difference family with parameter set $\Pi_7$ and $B=C$ with neither $A$ nor $D$ symmetric:
\begin{eqnarray*}
A&=& [0,1,2,3,4,8,11,12,14,19,21,24,26,27,29,37,38,41,44,45,46], \\
B&=&C= [0,1,2,3,5,7,11,14,15,17,24,27,28,29,32,35,38,43,44,45,47],\\
D&=& [0,1,2,5,6,8,10,11,12,14,16,18,21,22,23,30,31,32,36,37,41].\\
\end{eqnarray*}
While computing the propus parameter sets $(v;x,y,y,z;\lambda)$ in the case when $v=s^2$ is an odd square, we observed an interesting feature. Namely, if in the definition of normalized propus parameter sets we drop only the condition that
$x\ge z$ and if $s$ is an odd prime, then the number, $N_s$, of
such parameter sets is either $s$ or $s+2$.
It follows from the proof of \cite[Theorem 1]{BBDKM:2017-a} that $N_s$ is equal to the number of odd positive integer solutions of the Diophantine equation
\begin{equation} \label{eq:Dioph}
\xi^2+2\eta^2+\zeta^2=4s^2.
\end{equation}
After making additional computations, we decided to propose the
following conjecture.
\begin{conjecture} \label{cnj:primes}
For any odd prime $s$, $N_s-s-1\in\{+1,-1\}$.
\end{conjecture}
We have verified our conjecture for all odd primes less than 10000. There are 1228 such primes. For 606 of them we have $N_s=s$ and for the remaining 622 we have $N_s=s+2$. Thus the sequence $N_s-s-1$ is a $\{+1,-1\}$-sequence when $s$ runs through odd primes $<10000$. We have sketched the partial sums of this sequence in Figure 3.
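For small $s$ the count $N_s$ can be reproduced by brute force over the odd positive solutions of (\ref{eq:Dioph}). A Python sketch (the helper name is ours):

```python
from math import isqrt

def N(s):
    # Number of odd positive integer solutions (xi, eta, zeta)
    # of xi^2 + 2*eta^2 + zeta^2 = 4*s^2.
    target = 4 * s * s
    count = 0
    for eta in range(1, 2 * s, 2):
        for xi in range(1, 2 * s, 2):
            z2 = target - 2 * eta * eta - xi * xi
            if z2 <= 0:
                continue
            zeta = isqrt(z2)
            if zeta % 2 == 1 and zeta * zeta == z2:
                count += 1
    return count
```

For example, $N_3=3$, $N_5=7$ and $N_7=9$, in accordance with Conjecture \ref{cnj:primes}.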
If $s$ is a prime congruent to $1 \pmod{4}$, we observed that apart from $\Pi_s$ there is another normalized propus parameter set with $v=s^2$ and $k_2=k_3=\binom{s}{2}$. Let us denote this new parameter set by
\begin{equation} \label{def:exc-comp}
\Pi'_s=(~s^2;~\binom{s}{2}+\alpha,\binom{s}{2},\binom{s}{2},
\binom{s}{2}-\beta;~s(s-2)+\alpha-\beta~).
\end{equation}
The integers $\alpha$ and $\beta$ are positive and satisfy the quadratic Diophantine equation
\begin{equation} \label{eq:Dioph-comp}
\alpha^2+\beta^2=s(\alpha-\beta).
\end{equation}
We propose another conjecture.
\begin{conjecture} \label{cnj:primes1mod4}
For any odd prime $s\equiv1 \pmod{4}$ the Diophantine equation
(\ref{eq:Dioph-comp}), in the unknowns $\alpha$ and $\beta$, has
a unique solution $(a,b)$, where $a$ and $b$ are positive integers and $1<a\le (s-1)/2$.
Moreover, $a-b$ is either a square or 2 times a square.
\end{conjecture}
We have verified that this conjecture holds for $s<100000$.
If we drop the condition $1<a\le (s-1)/2$, then there exists one more solution, namely $(s-a,b)$.
Note also that the two solutions share the same $b$, and so
the integer $b$ is uniquely determined by $s$.
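Conjecture \ref{cnj:primes1mod4} is also easy to test by exhaustive search. A Python sketch (helper names are ours):

```python
from math import isqrt

def propus_pairs(s):
    # Positive solutions (a, b) of a^2 + b^2 = s*(a - b)
    # with 1 < a <= (s - 1)//2.
    return [(a, b) for a in range(2, (s - 1) // 2 + 1)
                   for b in range(1, a)
                   if a * a + b * b == s * (a - b)]

def square_or_twice_square(n):
    if isqrt(n) ** 2 == n:
        return True
    return n % 2 == 0 and isqrt(n // 2) ** 2 == n // 2
```

For $s=5,13,17,29$ this yields the unique pairs $(2,1),(3,2),(5,3),(14,6)$, with $a-b=1,1,2,8$ a square or twice a square, as conjectured.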
\begin{figure}
\caption{Partial sums of the sequence $N_s-s-1$, $s$ odd prime}
\label{fig:Fig3}
\end{figure}
\section{Appendix} \label{sec:app}
The cyclic propus difference families listed below, except
some of the families that belong to one of the two infinite series $T$ and $X$, have been constructed by using a computer program written by one of the authors. The program was run on two PCs, each with a single 64-bit processor.
For $v=39$ it takes about 5 minutes to obtain a solution,
about 20 minutes for $v=41$, about 1 hour for $v=43$, about
3 or 4 hours for $v=45$, about 12 hours for $v=47$, about
2 days for $v=49$, and 5 days for $v=51$.
In all families below the base block $B=C$, and to save space we omit the block $C$. The families are terminated by semicolons.
\begin{eqnarray*}
&& (43; 18, 21, 21, 16; 33) \\
&& [1,3,6,9,14,15,16,19,20,23,24,27,28,29,34,37,40,42],\\
&& [0,1,2,4,5,10,12,14,15,16,17,20,21,23,24,26,27,28,32,34,41],\\
&& [0,1,2,3,9,10,13,15,18,21,29,34,36,37,38,39];\\
&& [0,1,2,3,7,8,9,10,16,17,20,22,25,28,36,41],\\
&& [0,1,2,3,6,7,9,10,12,13,14,18,20,27,29,30,31,33,34,39,41],\\
&& [1,3,6,9,14,15,16,19,20,23,24,27,28,29,34,37,40,42];\\
&& \\
&& (43; 19, 18, 18, 18; 30) \\
&& [0,4,9,10,11,15,16,18,19,21,22,24,25,27,28,32,33,34,39],\\
&& [0,1,2,3,11,12,17,19,20,23,24,25,27,29,31,33,36,40],\\
&& [0,1,2,3,5,10,12,15,18,23,25,26,28,29,36,39,40,41];\\
&& [0,1,2,5,9,10,14,16,20,23,24,27,29,30,32,34,36,38,40],\\
&& [0,1,2,3,4,8,9,10,11,14,18,21,26,27,30,32,40,42],\\
&& [2,7,8,9,10,13,15,18,21,22,25,28,30,33,34,35,36,41];\\
&& \\
&& (43; 21, 17, 17, 20; 32) \\
&& [0,1,3,6,7,9,11,14,16,20,21,22,23,27,29,32,34,36,37,40,42],\\
&& [0,1,2,3,5,6,7,13,15,24,25,28,29,32,37,39,40],\\
&& [0,1,2,3,10,12,14,15,18,19,20,25,28,29,31,32,34,35,37,39];\\
&& [0,1,2,3,4,7,12,13,14,18,20,23,24,28,30,32,33,34,36,38,41],\\
&& [0,1,2,5,8,10,15,17,18,19,21,24,25,30,36,37,40],\\
&& [1,3,4,5,6,7,8,13,18,21,22,25,30,35,36,37,38,39,40,42];\\
&& \\
&& (43; 21, 19, 19, 16; 32) \\
&& [0,1,6,11,12,13,16,17,19,20,21,22,23,24,26,27,30,31,32,37,42],\\
&& [0,1,2,6,8,9,12,15,17,20,22,23,24,26,27,35,36,39,41],\\
&& [0,1,2,6,8,9,11,15,16,18,20,24,28,29,31,41];\\
&& [0,1,2,3,4,7,11,13,15,17,19,20,22,32,33,34,35,37,39,40,42],\\
&& [0,1,2,4,5,8,10,11,13,16,17,20,23,24,25,27,34,38,39],\\
&& [2,3,4,10,12,14,15,20,23,28,29,31,33,39,40,41];\\
\end{eqnarray*}
\begin{eqnarray*}
&& (43;21,21,21,15;35) \\
&& [0,1,2,3,4,8,9,12,14,19,22,23,26,28,29,31,32,34,38,39,41],\\
&& [1,4,6,9,10,11,13,14,15,16,17,21,23,24,25,31,35,36,38,40,41],\\
&& [0,7,9,13,14,15,17,18,25,26,28,29,30,34,36]; \\
&& [0,1,2,3,6,7,9,11,13,15,16,17,21,22,25,26,29,33,38,39,41],\\
&& [0,1,2,3,4,5,6,7,8,13,16,17,20,22,25,27,29,34,35,37,40],\\
&& [0,5,6,12,13,14,16,20,23,27,29,30,31,37,38];\\
\end{eqnarray*}
The last example consists of a D-optimal design (blocks $A$ and $D$) and two copies of the Paley difference set in ${\mbox{\bf Z}}_{43}$
(blocks $B=C$). It is taken from the paper \cite{DDK:SpecMatC:2015}.
\begin{eqnarray*}
&& (45; 18, 21, 21, 18; 33) \\
&& [4,7,8,9,10,11,16,19,20,25,26,29,34,35,36,37,38,41],\\
&& [0,1,2,3,5,7,8,12,13,16,19,22,23,27,32,34,36,39,40,42,44],\\
&& [0,1,2,3,10,12,13,15,17,19,24,25,32,34,37,38,39,41];\\
&& \\
&& (45; 19, 20, 20, 18; 32) \\
&& [0,1,6,12,13,14,16,17,20,22,23,25,28,29,31,32,33,39,44],\\
&& [1,3,7,8,10,11,12,13,17,20,25,28,32,33,34,35,37,38,42,44],\\
&& [1,6,12,13,14,16,17,20,22,23,25,28,29,31,32,33,39,44];\\
&& \\
&& (45; 21, 18, 18, 21; 33) \\
&& [0,4,6,11,12,13,14,17,18,20,22,23,25,27,28,31,32,33,34,39,41],\\
&& [0,1,2,5,6,8,9,11,13,20,21,23,31,32,34,37,38,41],\\
&& [0,1,2,3,6,10,12,17,18,19,21,22,23,25,33,35,37,38,40,41,42];\\
&& \\
&& (45; 21, 20, 20, 17; 33) \\
&& [0,2,3,4,5,10,14,15,17,19,22,23,26,28,30,31,35,40,41,42,43],\\
&& [0,1,2,3,4,8,12,13,15,22,23,26,28,30,31,36,37,39,42,43],\\
&& [0,1,2,3,4,6,10,14,20,26,27,30,33,35,37,42,43];\\
&& [0,1,2,3,8,9,15,16,18,22,23,25,28,32,33,34,36,38,39,42,43],\\
&& [0,1,2,3,5,6,8,9,12,14,16,18,19,20,24,27,29,31,32,41],\\
&& [0,1,2,7,10,12,18,19,22,23,26,27,33,35,38,43,44];\\
\end{eqnarray*}
\begin{eqnarray*}
&& (45; 21, 22, 22, 16; 36) \\
&& [0,3,4,6,7,9,11,12,13,14,18,27,31,32,33,34,36,38,39,41,42],\\
&& [0,1,2,3,4,6,9,11,14,15,16,19,20,26,28,30,34,35,36,38,42,43],\\
&& [0,1,2,3,10,11,13,17,22,23,24,27,30,34,39,42];\\
&& [0,1,2,4,6,7,9,11,12,17,21,24,25,27,28,30,32,33,34,39,43],\\
&& [0,1,2,4,5,9,10,13,14,15,18,20,21,26,28,29,35,36,38,40,42,43],\\
&& [3,9,13,15,16,17,18,19,26,27,28,29,30,32,36,42];\\
&& \\
&& (45; 22, 19, 19, 18; 33) \\
&& [2,3,4,7,8,10,12,13,14,17,20,25,28,31,32,33,35,37,38,41,42,43],\\
&& [0,1,2,5,9,11,14,18,19,20,22,24,26,27,30,31,32,33,34],\\
&& [0,1,2,7,9,10,13,16,17,19,24,27,33,35,36,38,40,43];\\
&& [0,1,2,3,7,10,11,15,16,18,19,20,25,28,30,31,35,36,37,40,42,43],\\
&& [0,1,2,4,6,12,19,20,21,24,25,29,31,32,33,35,40,42,43],\\
&& [1,3,4,5,8,10,11,18,21,24,27,34,35,37,40,41,42,44];\\
&& \\
&& (49; 22, 22, 22, 19; 36) \\
&& [1,3,5,8,9,11,12,15,16,18,19,30,31,33,34,37,38,40,41,44,46,48],\\
&& [0,1,2,3,4,5,6,9,14,15,18,25,27,30,32,33,35,37,38,42,43,44],\\
&& [0,1,2,5,6,10,12,14,18,19,21,27,32,34,35,36,40,43,45];\\
&& [0,1,2,3,4,7,10,14,15,18,19,24,26,30,31,32,33,35,37,40,41,47],\\
&& [0,1,2,3,4,6,10,11,14,16,17,23,24,31,34,36,38,39,41,42,43,47],\\
&& [0,1,3,6,8,12,18,21,22,23,26,27,28,31,37,41,43,46,48];\\
&& \\
&& (49; 22, 24, 24, 18; 39) \\
&& [2,3,6,8,9,17,19,20,21,22,24,25,27,28,29,30,32,40,41,43,46,47],\\
&& [0,1,2,3,7,8,9,16,19,21,23,25,26,28,29,32,34,36,37,38,40,
41,42,46],\\
&& [0,1,2,3,7,8,12,15,17,19,26,29,36,37,39,42,43,46];\\
&& [0,1,2,3,7,10,12,13,14,17,19,20,22,23,25,26,27,28,32,37,40,46],\\
&& [0,1,2,3,4,5,6,9,11,13,14,15,17,19,22,27,28,31,33,34,35,38,43,45],\\
&& [2,5,6,10,16,17,19,23,24,25,26,30,32,33,39,43,44,47];\\
\end{eqnarray*}
\begin{eqnarray*}
&& (49; 23, 20, 20, 22; 36) \\
&& [0,1,2,4,6,8,15,16,17,20,21,23,26,28,29,32,33,34,41,43,45,47,
48], \\
&& [3,6,10,13,14,15,20,21,23,24,25,26,28,29,34,35,36,39,43,46],\\
&& [1,2,4,6,8,15,16,17,20,21,23,26,28,29,32,33,34,41,43,45,47,
48];\\
&& \\
&& (49; 23, 23, 23, 18; 38) \\
&& [0,3,4,7,9,10,11,13,15,17,18,23,26,31,32,34,36,38,39,40,42,45,46],\\
&& [0,1,2,3,4,5,6,9,10,11,12,15,16,21,23,25,27,28,32,35,40,41,43],\\
&& [0,1,2,5,6,13,15,18,21,22,27,32,34,36,37,39,46,47];\\
&& [0,1,2,3,4,5,6,7,8,11,15,17,20,24,27,29,33,36,38,41,44,45,47],\\
&& [0,1,2,3,5,7,8,10,12,14,20,22,23,24,30,31,32,35,36,37,38,41,46],\\
&& [3,4,10,13,14,16,20,21,24,25,28,29,33,35,36,39,45,46];\\
&& \\
\end{eqnarray*}
\begin{eqnarray*}
&& (51; 23, 22, 22, 21; 37) \\
&& [0,2,4,10,11,13,16,20,21,23,24,25,26,27,28,30,31,35,38,40,
41,47,49], \\
&& [0,1,2,3,4,5,9,13,15,20,22,23,27,31,32,33,38,39,41,44,47,48],\\
&& [0,1,2,3,6,8,9,12,13,14,17,19,22,31,34,36,37,38,40,44,49];\\
&& [0,1,2,3,4,5,10,12,13,14,15,19,21,22,28,30,34,37,39,41,42,
47,49],\\
&& [0,1,2,4,5,8,10,13,18,19,21,24,25,28,29,31,33,35,38,39,40,43],\\
&& [0,2,3,4,5,6,9,13,19,20,25,26,31,32,38,42,45,46,47,48,49];\\
&& (51; 25, 25, 21, 20; 40) \\
&& [0,1,4,5,7,9,15,16,17,18,22,29,33,34,35,36,42,44,46,47,50],\\
&& [0,1,2,4,5,7,8,11,15,16,21,23,25,26,28,30,35,36,40,43,44,46,
47,49,50],\\
&& [1,4,5,7,9,15,16,17,18,22,29,33,34,35,36,42,44,46,47,50];\\
&& (53; 26, 22, 22, 23; 40) \\
&& [1,5,6,10,11,12,15,18,22,27,28,29,30,32,33,34,36,37,39,40,44,45,46,49,50,51], \\
&& [0,1,2,3,9,11,18,21,24,25,29,33,34,35,36,41,44,46,48,49,50,52],\\
&& [0,1,3,9,10,12,14,16,17,20,23,25,28,30,33,36,37,39,41,43,44,50,
52];\\
\end{eqnarray*}
\begin{eqnarray*}
&& (55; 23, 26, 26, 22; 42) \\
&& [0,6,7,10,11,15,17,18,19,21,24,26,29,31,34,36,37,38,40,44,45,48,49], \\
&& [1,2,4,8,14,16,17,18,19,23,24,25,27,28,30,31,32,36,37,38,39,41,
47,51,53,54],\\
&& [6,7,10,11,15,17,18,19,21,24,26,29,31,34,36,37,38,40,44,45,48,49];\\
&& \\
&& (57; 28, 28, 28, 21; 48) \\
&& [2,4,12,13,15,21,23,24,25,27,28,31,35,37,38,39,40,41,43,46,47,48,49,50,51,52,54,56],\\
&& [0,1,2,3,4,6,9,11,13,16,17,20,23,28,31,32,34,35,37,39,40,41,43,44,45,49,50,53],\\
&& [0,1,4,6,13,14,15,19,20,21,26,31,36,37,38,42,43,44,51,53,56];\\
&& \\
\end{eqnarray*}
\end{document}
\begin{document}
\title{Photon-pair generation in random nonlinear
layered structures}
\author{Jan Pe\v{r}ina Jr.}
\affiliation{Joint Laboratory of Optics of Palack\'{y} University
and Institute of Physics of Academy of Sciences of the Czech
Republic, 17. listopadu 50A, 772 07 Olomouc, Czech Republic}
\email{[email protected]}
\author{Marco Centini}
\author{Concita Sibilia}
\author{Mario Bertolotti}
\affiliation{Dipartimento di Energetica, Universit\`{a} La
Sapienza di Roma, Via A. Scarpa 16, 00161 Roma, Italy}
\begin{abstract}
Nonlinearity and sharp transmission spectra of random 1D nonlinear
layered structures are combined to produce photon pairs
with extremely narrow spectral bandwidths. Indistinguishable
photons in a pair are nearly unentangled. Two-photon states
with coincident frequencies can also be conveniently generated in these
structures if photon pairs generated into a certain range of
emission angles are superposed. If the two photons are emitted into
two different resonant peaks, the ratio of their spectral
bandwidths may differ considerably from one, and the two photons remain
nearly unentangled.
\end{abstract}
\pacs{42.50.Dv,42.50.Ex,42.25.Dd}
\maketitle
\section{Introduction}
Since the first successful experiment generating photon pairs in
the nonlinear process of spontaneous parametric down-conversion in
a nonlinear crystal more than 30 years ago, and the demonstration
of their unusual temporal and spectral properties \cite{Hong1987},
the properties of emitted photon pairs have been addressed in
detail in numerous investigations \cite{Mandel1995}. The key tool
in the theory of photon pairs has become a two-photon spectral
amplitude \cite{Keller1997,PerinaJr1999,DiGiuseppe1997,Grice1998}
that describes a photon pair in its full complexity. At the
beginning the effort has been concentrated on two-photon entangled
states with anti-correlated signal- and idler-field frequencies
that occur in usual nonlinear crystals. New sources able to
generate high photon fluxes have been discovered using, e.g.,
periodically-poled nonlinear crystals
\cite{Kuklewicz2005,Kitaeva2007}, four-wave mixing in nonlinear
structured fibers \cite{Li2005,Fulconis2005}, nonlinear planar
waveguides \cite{Tanzilli2002} or nonlinearities in cavities
\cite{Shapiro2000}. Chirped periodically-poled crystals opened the
door for the generation of signal and idler fields with extremely
wide spectral bandwidths \cite{Harris2007,Nasr2008}. Photon pairs
with wide spectral bandwidths can also be generated in specific
non-collinear geometries \cite{Carrasco2006}. Very sharp temporal
features are typical for such states that can be successfully
applied in metrology (see \cite{Carrasco2004} for quantum optical
coherence tomography). Later, even two-photon states with
coincident frequencies have been revealed. They can be emitted
provided that the extended phase matching conditions (phase
matching of the wave-vectors and group-velocity matching) are
fulfilled \cite{Giovannetti2002,Giovannetti2002a,Kuzucu2005}. Also
other suitable geometries for the generation of states with
coincident frequencies have been found. Nonlinear phase matching
for different frequencies at different angles of propagation of
the interacting fields can be conveniently used in this case
\cite{Torres2005,Torres2005a,Molina-Terriza2005}. Or wave-guiding
structures with transverse pumping and counter-propagating signal
and idler fields can be exploited
\cite{DeRossi2002,Booth2002,Walton2003,Ravaro2005,Lanco2006,Sciscione2006,Walton2004,PerinaJr2008}.
The last two approaches are quite flexible and allow the
generation of states having two-photon amplitudes with an
arbitrary orientation of main axes of their (approximately)
gaussian shape. These approaches emphasize the role of a
two-photon state at the boundary between the two mentioned cases.
This state is, a bit surprisingly, unentangled in frequencies and,
at first sight, should not be very interesting. The opposite is
true \cite{URen2003,URen2005} due to the fact that the signal and
idler photons are completely indistinguishable and perfectly
synchronized in time for pulsed pumping. Non-collinear
configurations of bulk crystals of a suitable length and a pump beam
with a suitable waist can also be used to generate such states
\cite{Grice2001,Carrasco2006}. These states with completely
indistinguishable photons are very useful in many
quantum-information protocols (e.g., in linear quantum computing)
that rely on polarization entanglement.
It has been shown that spectral entanglement affects polarization
entanglement and causes the lowering of polarization-entanglement
performance in quantum-information protocols. For example, the
role of spectral entanglement in polarization quantum
teleportation has been analyzed in detail in \cite{Humble2007}. We
note that this analysis is also appropriate for quantum repeaters
or quantum relays. If two photons are identical
and without any mutual entanglement, the polarization degrees of
freedom are completely separated from the spectral ones and in
principle the best possible performance of quantum information
protocols is guaranteed. In practice, mode mismatches both in
spectral and spatial domains may occur and degrade the
performance. It has been shown in \cite{Rohde2005} that Gaussian
distributed photons with large bandwidths represent states with
the best tolerance against errors. Under these conditions, high
visibilities of interference patterns created by the simultaneous
detection of, in principle, many photons are expected.
As shown in this paper, nonlinear layered structures with randomly
positioned boundaries are a natural source of unentangled photon
pairs. If the down-converted photons are generated under identical
conditions (i.e., into the same transmission peak), they are
indistinguishable and thus ideal for quantum-information
processing with polarization degrees of freedom. Moreover, their
spectra are very narrow and their temporal wave packets quite long,
so their use in experimental setups is tolerant to
unwanted temporal shifts \cite{Rohde2005}. Also, because the
down-converted photons are generated into localized states with
high electric-field amplitudes, higher photon-pair
generation rates are expected. This is important for protocols
exploiting several photon pairs. These states with very narrow
spectra can also be useful in entangled two-photon spectroscopy.
Superposition of photon pairs generated under different emission
angles is possible and two-photon states coincident in frequencies
can be engineered this way.
We note that the use of high electric-field amplitudes occurring
in localized states in random structures for the enhancement of
nonlinear processes has been studied in detail for second-harmonic
generation in \cite{Ochiai2005,Centini2006}.
The paper is organized as follows. The model of spontaneous
parametric down-conversion and formulas providing main physical
characteristics of photon pairs are presented in Sec.~II.
Properties of random 1D layered structures important for
photon-pair emission are studied in Sec.~III. A typical random
structure as a source of photon pairs is analyzed in Sec.~IV. A
scheme providing two-photon states coincident in frequencies is
analyzed in Sec.~V. Emission of a photon pair non-degenerate in
frequencies is investigated in Sec.~VI. Sec.~VII contains
Conclusions.
\section{Spontaneous parametric down-conversion and
characteristics of photon pairs}
Spontaneous parametric down-conversion in layered media can be
described by a nonlinear Hamiltonian $ \hat{H} $ \cite{Mandel1995}
in the formalism of quantum mechanics:
\begin{eqnarray}
\hat{H}(t) &=& \epsilon_0 \int_{\cal V} d{\bf r} \; \nonumber \\
& & \hspace{-10mm} {\bf d}({\bf r}):
\left[ {\bf E}_{p}^{(+)}({\bf r},t) \hat{\bf E}_{s}^{(-)}({\bf
r},t)
\hat{\bf E}_{i}^{(-)}({\bf r},t) + {\rm h.c.} \right].
\label{1}
\end{eqnarray}
Vector properties of the nonlinear interaction are characterized
by a third-order tensor of nonlinear coefficients $ {\bf d} $
\cite{Boyd1994}. Symbol $ {\bf E}_{p}^{(+)} $ denotes a
positive-frequency pump-field electric-field amplitude, whereas
negative-frequency electric-field amplitude operators $ \hat{\bf
E}_{m}^{(-)} $ for $ m=s,i $ describe the signal and idler fields.
Symbol $ \epsilon_0 $ is permittivity of vacuum, $ {\cal V} $
interaction volume, $ {\rm h.c.} $ stands for a hermitian
conjugated term, and $ : $ means tensor multiplication with
respect to three indices. The electric-field amplitudes of the
pump, signal, and idler fields can be conveniently decomposed into
forward- and backward-propagating monochromatic plane waves when
describing scattering of the considered fields at boundaries of a
layered structure \cite{PerinaJr2006,Centini2005,Vamivakas2004}.
Vector properties of the electric-field amplitudes are taken into
account using decomposition into TE and TM modes. A detailed
theory has been developed in \cite{PerinaJr2006} and this theory
is used in our calculations. We note that a suitable choice of
vector and photonic properties can result in the generation of
photon pairs with a two-photon spectral amplitude anti-symmetric
in the signal- and idler-field frequencies \cite{PerinaJr2007}.
Some of the properties of photon pairs generated in layered
structures are common with those of photon pairs originating in
nonlinear crystal super-lattices composed of several nonlinear
homogeneous pieces \cite{URen2005a,URen2006}.
In further considerations, we assume fixed polarizations of the
signal and idler fields and restrict ourselves to a scalar model.
Then the perturbation solution of Schr\"{o}dinger equation for the
signal- and idler-fields wave-function results in the following
two-photon wave-function \cite{Keller1997,PerinaJr1999}:
\begin{eqnarray}
|\psi_{s,i}(t)\rangle &=&
\int_{0}^{\infty} \, d\omega_s \int_{0}^{\infty} \, d\omega_i \;
\phi(\omega_s,\omega_i)
\nonumber \\
& & \mbox{} \hspace{-1.5cm}\times
\hat{a}_{s}^{\dagger}(\omega_s)
\hat{a}_{i}^{\dagger}(\omega_i) \exp(i\omega_s t) \exp(i
\omega_i t)
|{\rm vac} \rangle .
\label{2}
\end{eqnarray}
The two-photon spectral amplitude $ \phi(\omega_s,\omega_i) $
giving a probability amplitude of generating a signal photon at
frequency $ \omega_s $ and its idler twin at frequency $ \omega_i
$ is determined for given angles of emission and polarizations of
the signal and idler photons. Creation operator $
\hat{a}_{m}^{\dagger}(\omega_m) $ adds one photon at frequency $
\omega_m $ into field $ m $ ($ m=s,i $) and $ |{\rm vac} \rangle $
means the vacuum state for the signal and idler fields. Both
photons can propagate either forward or backward outside the
structure as a consequence of scattering inside the structure.
Here we analyze in detail the case when both photons propagate
forward and note that similar behavior is found in the remaining
three cases. More details can be found in
\cite{PerinaJr2006,Centini2005}.
The two-photon spectral amplitude $ \phi $ can be decomposed into
the Schmidt dual basis with base functions $ f_{s,n} $ and $
f_{i,n} $ \cite{Law2000,Law2004,PerinaJr2008} in order to reveal
correlations (entanglement) between the signal and idler fields:
\begin{equation}
\phi(\omega_s,\omega_i) = \sum_{n=1}^{\infty} \lambda_n
f_{s,n}(\omega_s) f_{i,n}(\omega_i) ;
\label{3}
\end{equation}
$ \lambda_n $ being coefficients of the decomposition. Entropy of
entanglement $ S $ defined as \cite{Law2000}
\begin{equation}
S = - \sum_{n=1}^{\infty} \lambda_n^2 \log_2 \lambda_n^2
\label{4}
\end{equation}
is a suitable quantitative measure of spectral entanglement
between the signal and idler fields. Symbol $ \log_2 $ stands for
logarithm with base 2. An additional measure of spectral
entanglement can be expressed in terms of the cooperativity
parameter $ K $ \cite{Law2004}:
\begin{equation}
K = \frac{1}{\sum_{n=1}^{\infty} \lambda_n^4 }.
\label{5}
\end{equation}
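Numerically, the Schmidt coefficients of Eq.~(\ref{3}) are the singular values of the discretized amplitude, from which $S$ of Eq.~(\ref{4}) and $K$ of Eq.~(\ref{5}) follow directly. The following Python sketch is our own illustration (not part of the original calculations); it uses a separable Gaussian amplitude, for which a single Schmidt mode gives $S=0$ and $K=1$:

```python
import numpy as np

def schmidt_measures(phi):
    """Schmidt coefficients lambda_n of a discretized two-photon
    amplitude [Eq. (3)] via SVD, with the entropy of entanglement S
    [Eq. (4)] and the cooperativity parameter K [Eq. (5)]."""
    lam = np.linalg.svd(phi, compute_uv=False)
    p = lam**2 / np.sum(lam**2)        # normalized weights lambda_n^2
    p = p[p > 1e-15]                   # drop numerically empty modes
    S = -np.sum(p * np.log2(p))        # entropy of entanglement
    K = 1.0 / np.sum(p**2)             # cooperativity parameter
    return S, K

# Separable amplitude phi(w_s, w_i) = f(w_s) f(w_i): one Schmidt mode.
w = np.linspace(-1.0, 1.0, 64)
f = np.exp(-w**2)
S, K = schmidt_measures(np.outer(f, f))
```

An entangled (non-separable) amplitude would give $S>0$ and $K>1$, with $K$ roughly counting the number of populated Schmidt modes.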
The two-photon spectral amplitude $ \phi(\omega_s,\omega_i) $
completely describes the properties of photon pairs. The number $ N $ of
the generated photon pairs as well as signal-field energy spectrum
$ S_s(\omega_s) $ can simply be determined using the two-photon
spectral amplitude $ \phi $ \cite{PerinaJr2006}:
\begin{eqnarray}
N &=& \int_{0}^{\infty} d\omega_s \int_{0}^{\infty} d\omega_i
|\phi(\omega_s,\omega_i)|^2 ,
\label{6} \\
S_s(\omega_s) &=& \hbar \omega_s \int_{0}^{\infty} d\omega_i
|\phi(\omega_s,\omega_i)|^2.
\label{7}
\end{eqnarray}
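Eqs.~(\ref{6}) and (\ref{7}) translate directly into quadratures over a frequency grid. A minimal Python sketch (our illustration, with $\hbar=1$ and a Gaussian test amplitude whose integral is known in closed form):

```python
import numpy as np

def pair_number_and_spectrum(phi, w):
    """Photon-pair number N [Eq. (6)] and signal-field energy spectrum
    S_s [Eq. (7)] from an amplitude phi sampled on an equidistant
    frequency grid w (hbar set to 1 in this sketch)."""
    dw = w[1] - w[0]
    p2 = np.abs(phi)**2
    marginal = p2.sum(axis=1) * dw     # integral over omega_i
    S_s = w * marginal                 # hbar * omega_s * marginal, hbar = 1
    N = marginal.sum() * dw            # remaining integral over omega_s
    return N, S_s

# Gaussian test amplitude: N = (integral of e^{-2(w-2)^2} dw)^2 = pi/2.
w = np.linspace(0.0, 4.0, 801)
f = np.exp(-(w - 2.0)**2)
N, S_s = pair_number_and_spectrum(np.outer(f, f), w)
```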
Fourier transform $ \phi(t_s,t_i) $ of the spectral two-photon
amplitude $ \phi(\omega_s,\omega_i) $,
\begin{eqnarray}
\phi(t_s,t_i) &=& \frac{1}{2\pi}
\int_{0}^{\infty} d\omega_s \; \int_{0}^{\infty} d\omega_i \;
\sqrt{\frac{\omega_s\omega_i}{\omega_s^0\omega_i^0}}
\phi(\omega_s,\omega_i) \nonumber \\
& & \mbox{} \times
\exp(-i\omega_st_s)
\exp(-i\omega_it_i),
\label{8}
\end{eqnarray}
is useful in determining temporal properties of photon pairs.
Symbol $ \omega_m^0 $ in Eq.~(\ref{8}) denotes the central
frequency of field $ m $, $ m=s,i $. This Fourier transform is
linearly proportional to the two-photon temporal amplitude $ {\cal
A} $ defined by the expression
\begin{eqnarray}
{\cal A}(t_s,t_i) &=&\langle {\rm vac} |
\hat{E}^{(+)}_{s}(0,t_0+t_s) \hat{E}^{(+)}_{i}(0,t_0+t_i)
|\psi_{s,i}(t_0)\rangle \nonumber \\
& &
\label{9}
\end{eqnarray}
and giving a probability amplitude of detecting a signal photon at
time $ t_s $ together with its twin at time $ t_i $. We have
\cite{PerinaJr2006}:
\begin{equation}
{\cal A}(t_s,t_i) = \frac{\hbar \sqrt{\omega_s^0 \omega_i^0}}{4\pi\epsilon_0 c {\cal B}}
\phi(t_s,t_i) ,
\label{10}
\end{equation}
where $ c $ is the speed of light in vacuum and $ {\cal B} $ the
transverse area of the fields. The photon flux $ {\cal N } $ of, e.g.,
the signal field is determined using a simple formula (valid for
narrow spectra) \cite{PerinaJr2006}:
\begin{equation}
{\cal N}_{s}(t_s) = \hbar\omega_s^0
\int_{-\infty}^{\infty} dt_i \; |\phi(t_s,t_i)|^2 .
\label{11}
\end{equation}
Direct measurement of the temporal properties of photon pairs is
impossible because of the short time scales involved, and so temporal
properties can be detected only indirectly. The time duration of the
(pulsed) down-converted fields as well as the entanglement time can be
experimentally addressed using a Hong-Ou-Mandel interferometer.
Normalized coincidence-count rate $ R^{\rm HOM}_n $ in this
interferometer is derived as follows \cite{PerinaJr1999}:
\begin{eqnarray}
R^{\rm HOM}_n(\tau_l) &=& 1-\tilde{\rho}(\tau_l) , \\
\tilde{\rho}(\tau_l) &=& \frac{1}{ R(0,0)}
{\rm Re} \Biggl[ \int_{0}^{\infty} d\omega_s \, \int_{0}^{\infty} d\omega_i
\; \omega_s \omega_i
\nonumber \\
& & \hspace{-15mm} \phi(\omega_s,\omega_i)
\phi^{*}(\omega_i,\omega_s)
\exp(i\omega_i\tau_l) \exp(-i\omega_s\tau_l) \Biggr] .
\label{13}
\end{eqnarray}
Normalization constant $ R(0,0) $ occurring in Eq.~(\ref{13}) is
given in Eq.~(\ref{15}) below. The relative time delay $ \tau_l $
between the signal and idler photons changes in the
interferometer.
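The Hong-Ou-Mandel dip of Eq.~(\ref{13}) can be evaluated on a frequency grid. The Python sketch below is our own illustration; the Gaussian model amplitude is an assumption, not the structure's actual $\phi$. For an exchange-symmetric amplitude the rate drops to zero at zero delay (visibility one) and returns to one for delays long compared with the wave-packet duration:

```python
import numpy as np

def hom_rate(phi, w, tau):
    """Normalized HOM coincidence-count rate 1 - rho(tau) [Eq. (13)]
    for a two-photon amplitude phi sampled on the frequency grid w."""
    W_s, W_i = np.meshgrid(w, w, indexing="ij")
    weight = W_s * W_i
    R00 = np.sum(weight * np.abs(phi)**2)          # R(0,0) of Eq. (15)
    # phi^*(omega_i, omega_s) on the grid is conj(phi.T).
    rho = np.real(np.sum(weight * phi * np.conj(phi.T)
                         * np.exp(1j * (W_i - W_s) * tau))) / R00
    return 1.0 - rho

# Symmetric separable amplitude: full dip at tau = 0.
w = np.linspace(0.8, 1.2, 128)
f = np.exp(-((w - 1.0) / 0.05)**2)
phi = np.outer(f, f)
```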
A preferred direction of correlations between the signal- and
idler-field frequencies can be determined from the orientation of
coincidence-count interference fringes in Franson interferometer
\cite{Walton2003} that is characterized by normalized
coincidence-count rate $ R^{\rm F}_n $ in the form:
\begin{eqnarray}
R^{\rm F}_n(\tau_s,\tau_i) &=& \frac{1}{4} + \frac{1}{8R(0,0)} {\rm Re}
\left\{ 2R(\tau_s,0) + 2R(0,\tau_i) \right. \nonumber \\
& & \mbox{} \left. + R(\tau_s,\tau_i) +
R(\tau_s,-\tau_i) \right\}.
\label{14}
\end{eqnarray}
The function $ R $ in Eq.~(\ref{14}) is defined as follows:
\begin{eqnarray}
R(\tau_s,\tau_i) &=& \int_{0}^{\infty} d\omega_s \, \int_{0}^{\infty} d\omega_i
\, \; \omega_s \omega_i |\phi(\omega_s,\omega_i)|^2 \nonumber \\
& & \mbox{} \times \exp(i\omega_s\tau_s) \exp(i\omega_i\tau_i).
\label{15}
\end{eqnarray}
Time delay $ \tau_m $ ($ m=s,i $) corresponds to a relative phase
shift between two arms in the path of photon $ m $.
\section{Properties of random 1D layered structures}
We consider a 1D layered structure composed of two dielectrics
with mean layer optical thicknesses equal to $ \lambda_0/4 $ ($
\lambda_0 = 1\times 10^{-6} $~m is chosen). Such a structure can,
for example, be fabricated by etching a crystal made of LiNbO$
\mbox{}_3 $ and filling the free spaces with a suitable material (SiO$
\mbox{}_2 $). The optical axis of LiNbO$ \mbox{}_3 $ is parallel
to the planes of boundaries and coincides with the direction of
fields' polarizations that correspond to TE modes. This geometry
exploits the largest nonlinear coefficient of tensor $ {\bf d} $
($ {\bf d}_{zzz} $). The fields propagate as ordinary waves with
the dispersion formula $ n^2(\omega) = 4.9048 + 0.11768/(354.8084
- 0.0475\omega^2) - 9.6398 \omega^2 $ \cite{Dmitriev1991}. The
second material (SiO$ \mbox{}_2 $) is characterized by its index
of refraction $ n(\omega) = 1.45 $. A random structure is
generated by the following algorithm:
\begin{enumerate}
\item The number $ N_{\rm elem} $ of elementary layers of optical
thicknesses $ \lambda_0/4 $ is fixed. Then the material of each
elementary layer is randomly chosen.
\item At each boundary between two materials, an additional random
shift of the boundary position governed by a gaussian distribution
with variance $ \lambda_0/40 $ is introduced.
\end{enumerate}
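The two-step algorithm above can be sketched in a few lines of Python (our illustration; the function and variable names are ours, and we read the paper's ``variance $\lambda_0/40$'' as the scale of the Gaussian shift). Runs of identical elementary layers merge into single layers before the boundary jitter is applied:

```python
import numpy as np

def random_structure(n_elem, lambda0=1.0e-6, sigma=None, rng=None):
    """One realization of the random layered structure following the
    two-step algorithm above (illustrative sketch). Thicknesses are
    optical thicknesses; materials: 0 = LiNbO3, 1 = SiO2."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = lambda0 / 40 if sigma is None else sigma
    # Step 1: randomly chosen material of each quarter-wave elementary
    # layer; adjacent identical layers merge into one thicker layer.
    materials, thicknesses = [], []
    for m in rng.integers(0, 2, size=n_elem):
        if materials and materials[-1] == m:
            thicknesses[-1] += lambda0 / 4
        else:
            materials.append(int(m))
            thicknesses.append(lambda0 / 4)
    # Step 2: Gaussian shift of each internal boundary between two
    # materials (scale lambda0/40, the paper's "variance").
    bounds = np.cumsum(thicknesses)
    bounds[:-1] += rng.normal(0.0, sigma, size=len(bounds) - 1)
    thicknesses = np.diff(np.concatenate(([0.0], bounds)))
    return materials, thicknesses

mats, ts = random_structure(250, rng=np.random.default_rng(1))
```

Since only internal boundaries are shifted, the total optical length stays exactly $N_{\rm elem}\lambda_0/4$.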
The randomness given by a random choice of the material of each
layer is crucial for the observation of features typical for
random structures. The randomness introduced by gaussian shifts of
boundary positions is only additional and does not modify
considerably properties of a random structure. It also describes
errors occurring in the fabrication process. We note that the same
generation algorithm has been used in
\cite{Centini2006,Ochiai2005} where second-harmonic generation in
random structures has been addressed.
The process of spontaneous parametric down-conversion can be
efficient in these structures provided that the signal and idler
fields are generated into transmission peaks (corresponding to
spatially localized states) and the structure is at least
partially transparent for the pump field. This imposes
restrictions on the allowed optical lengths of suitable
structures: their lengths must be such that the down-converted
fields (usually degenerate in frequencies) are localized, whereas
there occurs no localization at the pump-field wavelength. This is
possible because the shorter the wavelength the larger the
localization length \cite{Anderson1958} for a given structure. We
assume the pump-field (signal-, idler-field) wavelength $
\lambda_p $ ($ \lambda_s $, $ \lambda_i $) in the vicinity of $
\lambda_0/2 $ ($ \lambda_0 $).
Numerical simulations for the considered materials and wavelengths
have revealed that the most suitable numbers $ N_{\rm elem} $ of
elementary layers lie between 200 and 400. In general, the greater
the contrast of the indices of refraction of the two materials, the
smaller the number of layers needed. Widths of transmission peaks
corresponding to localized states in a random structure may vary
by several orders of magnitude. As an example, probability
distribution $ P_{\Delta\lambda_s} $ of widths of transmission
peaks for an ensemble of structures with $ N_{\rm elem} = 250 $
elementary layers (optical lengths of these structures lie around
$ 6 \times 10^{-5} $~m) is shown in Fig.~\ref{fig1}a. Comparison
with the distributions appropriate for $ N_{\rm elem} = 500 $ and
$ N_{\rm elem} = 750 $ in Fig.~\ref{fig1}a demonstrates that the
longer the structure, the narrower the transmission peaks that can be
expected. The localization optical length at $ \lambda_0 $
\cite{Bertolotti2005,Centini2006} in the direction perpendicular
to the boundaries lies around $ 22\times 10^{-6} $~m for the
considered structure lengths. In the simulation, we have
randomly generated several tens of thousands of structures in order
to have around $ 3\times 10^4 $ transmission peaks for the
statistical analysis. Transmission peaks can be easily
distinguished from their flat surroundings for which intensities
lower than one percent of the maximum peak intensity are typical.
Widths of intensity transmission peaks have been determined as
FWHM. Simulations have revealed that peaks with very small
intensity transmissions prevail, but there occur also peaks with
intensity transmissions close to one. Such peaks are useful for
the generation of photon pairs because of the presence of high
electric-field amplitudes inside the structure. Photon pairs can
be also generated under nonzero angles of emission. The greater
the radial angle $ \theta_s $ of signal-photon emission the
narrower the transmission peaks as documented in Fig.~\ref{fig1}b.
Also the localization lengths (projected into the direction
perpendicular to the boundaries) shorten with an increasing radial
angle $ \theta_s $ of signal-photon emission. Localization length
equal to $ 9.5 \times 10^{-6} $~m ($ 2.5 \times 10^{-6} $~m) has
been determined at the angle of signal-photon emission $ \theta_s
= 30 $~deg ($ 60 $~deg).
\begin{figure}
\caption{Probability distribution $ P_{\Delta\lambda_s} $ of widths $ \Delta\lambda_s $ of intensity transmission peaks.}
\label{fig1}
\end{figure}
Fabrication of such structures is relatively easy due to the high
tolerances allowed. A typical sample has several transmission peaks
at different angles $ \theta_s $ of emission for a given frequency
of the signal field. If we assume that the idler field is also
tuned into the same transmission peak, the (normally incident) pump
field must have twice the frequency corresponding to this peak.
In non-collinear geometry the transverse wave-vectors of the
signal and idler fields have to have the same magnitudes and
opposed signs in order to fulfill phase-matching conditions in the
transverse plane. If the angle $ \theta_s $ of signal-photon
emission increases, a given transmission peak survives, with its
central frequency $ \omega_s^0 $ increasing. Moreover, the dependence of
central frequency $ \omega_s^0 $ on radial emission angle $
\theta_s $ can be considered to be linear in a certain interval of
angles $ \theta_s $. This property leads to wide tunability in
frequencies.
\section{Properties of the emitted photon pairs}
We assume a normally incident pump beam forming a TE wave and
generation of the TE-polarized signal and idler fields into the
same transmission peak. Thus both emitted photons have the same
central frequencies and the central frequency of the pump field is
twice this frequency, i.e. the conservation law of energy is
fulfilled. If a structure is pumped by a broadband femtosecond
pulse, photon pairs are generated into a certain range of emission
angles. If a signal photon occurs at a given radial angle $
\theta_s $ its idler twin is emitted at its radial angle $
\theta_i = - \theta_s $ and wave-vectors of both photons and the
pump field lie in the same plane (i.e., their azimuthal angles $
\psi_s $ and $ \psi_i $ coincide) as a consequence of phase
matching conditions in the transverse plane. The central frequency
$ \omega_s^0 $ of a signal photon depends on the angle $ \theta_s
$ of signal-photon emission, as illustrated in Fig.~\ref{fig2}
showing the signal-field intensity spectrum $ S_s^{\rm rel} $.
Width of the spectrum $ S_s $ coincides with the width of
intensity transmission peak because the transmission peaks are
very narrow and so all frequencies inside them have nearly the
same conditions for the nonlinear process. This means that linear
properties of the photonic-band-gap structure have a dominant role
in the determination of spatial properties of the generated photon
pairs. Localization of the signal and idler fields inside the
considered photonic-band-gap structure leads to the enhancement of
photon-pair generation rate up to 5000 times (in the middle of
transmission peak) as can be deduced from Fig.~\ref{fig2}. This is
in accordance with the finding that the process of second-harmonic
generation can be enhanced by 3 or 4 orders of magnitude
\cite{Centini2006} in these structures. This enhancement is at the
expense of dramatic narrowing of the range of the allowed
frequencies of emitted photons. We estimate from 10 to $ 10^3 $
generated photon pairs into the whole emission cone per 100~mW of
the pump power depending on the width of transmission peak. The
wider the transmission peak, the higher the expected number of
pairs. We note that extremely narrow down-converted fields can
be obtained also from parametric pumping of $ \mbox{}^{87} $Rb
atoms \cite{Kolchin2006}.
\begin{figure}
\caption{Contour plot of relative energy spectrum $ S_s^{\rm rel} $ of the signal field.}
\label{fig2}
\end{figure}
Two-photon spectral amplitudes $ \phi(\omega_s,\omega_i) $ for
different angles $ \theta_s $ of signal-photon emission are very
similar in their shape; they differ in their central frequencies.
We note that the central frequencies of the signal and idler
photons are the same ($ \omega_s^0 = \omega_i^0 $) for a given
angle $ \theta_s $ of emission. A typical shape of the two-photon
spectral amplitude $ \phi(\omega_s,\omega_i) $ resembling a cross
in its contour plot (see Fig.~\ref{fig3}) reflects the fact that
the signal and idler fields are nearly perfectly separable. This
is confirmed by Schmidt decomposition of the amplitude $
\phi(\omega_s,\omega_i) $ in which only the first mode is
important ($ \lambda_1 = 1.00 $). Entropy $ S $ of entanglement
for the two-photon spectral amplitude $ \phi $ in Fig.~\ref{fig3}
equals 0.00 and cooperativity parameter $ K $ is 1.00. The
spectral dependence of the first three mode functions in the
decomposition is shown in Fig.~\ref{fig4}.
\begin{figure}
\caption{Contour plot of squared modulus $ |\phi|^2 $ of the two-photon spectral amplitude for
the transmission peak at $ \theta_s = 0 $~deg. The structure is pumped
by a pulse 250~fs long; $ |\phi|^2 $ is normalized such that
$ 4 \int d\omega_s \int d\omega_i |\phi(\omega_s,\omega_i)|^2
/ (\omega_p^0)^2 = 1 $.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{Squared modulus $ |f_n|^2 $ of mode functions with $ n=1 $ (solid line),
$ n=2 $ (solid line with *), and $ n=3 $ (solid line with $ \triangle $) of the Schmidt decomposition
of the two-photon spectral amplitude $ \phi $ in Fig.~\ref{fig3}.}
\label{fig4}
\end{figure}
A typical temporal two-photon amplitude $ {\cal A}(t_s,t_i) $
spreads over tens or hundreds of ps, reflecting the narrow frequency
spectra of the signal and idler fields. Its contour plot resembles
a droplet \cite{PerinaJr2006}, a shape that originates in the zig-zag
movement of the emitted photons inside the structure, which delays
their arrival time at the output plane of the
structure. Both photons have the same wave packets due to
identical emission conditions, as can be verified in a Hong-Ou-Mandel
interferometer showing a visibility equal to one.
Assuming a normally incident pump beam, the signal-field
intensity profile at a given frequency in the transverse plane is
nonzero around a circle due to the rotational symmetry of the
photonic-band-gap structure around the pump-beam propagation
direction. This symmetry assumes an appropriate rotation of
polarization base vectors. However, values of intensity change
around the circle depending on fields' polarizations (defined by
analyzers in front of detectors) and properties of nonlinear
tensor $ {\bf d} $.
Correlations between the signal and idler fields in the transverse
plane are characterized by a correlation area that is defined by
the probability of emitting an idler photon in a given direction
[defined by angular declinations $ \Delta\theta_i $ (radial
direction) and $ \Delta \psi_i $ (azimuthal direction) from the
ideal direction of emission] provided that the direction of
signal-photon emission is fixed. The correlation area is in general an
ellipse with characteristic widths in the radial ($ \sigma\Delta\theta_i $)
and azimuthal ($ \sigma\Delta \psi_i $) directions. These angles
are very small for plane-wave pumping, typically of the order of $
10^{-3} - 10^{-4} $~rad. However, focusing of the pump beam can
increase their values considerably as shown in Fig.~\ref{fig5},
where the pump-beam diameter $ a $ varies from 30~$ \mu $m up to
1~mm. Spread of the correlation area in radial direction is
smaller compared to that in azimuthal direction, because emission
of a photon is more restricted in radial direction (for geometry
reasons) by narrow bands of the photonic structure. Relaxation of the
strict phase-matching conditions in the transverse plane caused by
a focused pump beam (with a circular spot) affects radial and
azimuthal angles of emission in the same way.
\begin{figure}
\caption{Spread of the correlation area in radial ($ \sigma\Delta\theta_i $, solid line)
and azimuthal ($ \sigma\Delta \psi_i $, solid line with *) directions of idler-photon emission as a
function of pump-beam diameter $ a $. The transmission peak in the structure
with 250 elementary layers occurs at $ \theta_s^0 = 26.3 $~deg, i.e. the central radial
idler-photon emission angle $ \theta_i^0 $ equals - 26.3~deg. Logarithmic scale on the
$ x $ axis is used.}
\label{fig5}
\end{figure}
We note that spreading of the signal- and idler-field intensity
profiles caused by pump-beam focusing has been experimentally
observed in \cite{Zhao2008} for a type-II bulk crystal.
\section{Two-photon states coincident in frequencies}
Superposition of photon pairs with signal photons emitted under
different radial emission angles $ \theta_s $ results in states
coincident in frequencies. This spectral beam combining of fields
from different spatial modes and with different spectral
compositions can be achieved using an optical dispersion element
like a grating \cite{Augst2003}. This technique has already been
applied in the construction of a source of photon pairs that uses
achromatic phase matching and spatial decomposition of the pump
beam \cite{Torres2005,Torres2005a}.
There is an approximately linear dependence between the central
frequencies $ \omega_s^0 = \omega_i^0 $ and the radial angle $
\theta_s $ of signal-field emission for sufficiently large values
of $ \theta_s $, assuming a normally incident pump beam. The
resultant two-photon spectral amplitude $
\Phi_M(\omega_s,\omega_i) $ after superposing photon pairs from $
M $ equidistantly positioned pinholes, present in both the signal
and idler beams, can be approximately expressed as:
\begin{equation}
\Phi_M(\omega_s,\omega_i) = \sum_{n=0}^{M-1} \exp(i\varphi n)
\phi(\omega_s + n\Delta\omega,\omega_i+ n\Delta\omega) ,
\label{16}
\end{equation}
where $ \Delta\omega $ is the difference between the signal-field
central frequencies of the fields originating in adjacent pinholes.
The phase $ \varphi $ gives the difference in phase between the two
two-photon spectral amplitudes, at the central signal- and
idler-field frequencies, coming from adjacent pinholes. The
two-photon spectral amplitude $ \phi (\omega_s,\omega_i) $ in
Eq.~(\ref{16}) belongs to the two-photon state coming from the first
pinhole. The Schmidt decomposition of the two-photon spectral
amplitude $ \Phi_M $ describing photon pairs coming from $ M $
pinholes shows that there are nearly $ M $ independent modes
[their number being determined by the cooperativity parameter $ K $
introduced in Eq.~(\ref{5})]. These modes are collective modes,
i.e., they take non-negligible values in the frequency areas
associated with every pinhole (see Figs.~\ref{fig6} and \ref{fig7}
for $ M=2 $ pinholes). Such states are promising for the
implementation of various quantum-information protocols.
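The mode count implied by Eq.~(\ref{16}) can be checked numerically. The following Python sketch superposes $M$ shifted copies of a nearly separable amplitude and estimates the cooperativity parameter $K$ from the singular values of the discretized amplitude; the Gaussian model for $\phi$ and all parameter values are illustrative assumptions, not the amplitudes of the structure analyzed here.

```python
import numpy as np

# Illustrative sketch of Eq. (16): superpose M copies of a separable
# Gaussian elementary amplitude phi, each shifted by n*dw in both
# frequencies.  The Gaussian model and all parameter values are
# assumptions chosen for illustration only.
M = 2
dw = 4.0          # frequency shift between adjacent pinholes (arb. units)
varphi = 0.0      # relative phase between adjacent pinholes
w = np.linspace(-5.0 - (M - 1) * dw, 5.0, 400)
ws, wi = np.meshgrid(w, w, indexing="ij")

def phi(ws, wi, width=1.0):
    # elementary (separable) two-photon amplitude from a single pinhole
    return np.exp(-(ws**2 + wi**2) / (2 * width**2))

# Eq. (16): shifted copies phi(w_s + n*dw, w_i + n*dw) summed with phases
Phi = sum(np.exp(1j * varphi * n) * phi(ws + n * dw, wi + n * dw)
          for n in range(M))

# Schmidt decomposition = SVD of the discretized amplitude; the
# cooperativity parameter K = 1 / sum(lambda_n^2) counts effective modes.
sv = np.linalg.svd(Phi, compute_uv=False)
lam = (sv / np.linalg.norm(sv)) ** 2
K = 1.0 / np.sum(lam**2)
print(K)   # close to M for well-separated pinholes
```

For well-separated pinholes ($\Delta\omega$ much larger than the elementary bandwidth) the two Schmidt eigenvalues are nearly equal and $K$ approaches $M$, in line with the statement above.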
\begin{figure}
\caption{Contour plot of squared modulus $ |\Phi_M|^2 $ of the two-photon spectral
amplitude created by superposing photon-pair amplitudes from $ M=2 $
pinholes. The structure is pumped by a pulse 250~fs long; $ |\Phi_M|^2 $
is normalized such that $ 4 \int d\omega_s \int d\omega_i |\Phi_M(\omega_s,\omega_i)|^2
/ (\omega_p^0)^2 = 1 $.}
\label{fig6}
\end{figure}
\begin{figure}
\caption{Squared modulus $ |f_n|^2 $ of the mode functions with $ n=1 $ (solid line),
2 (solid line with *), and 3 (solid line with $ \triangle $) of the Schmidt decomposition
of the two-photon spectral amplitude $ \Phi_M $ in Fig.~\ref{fig6}.}
\label{fig7}
\end{figure}
The Fourier transform $ \Phi_M(t_s,t_i) $ defined in Eq.~(\ref{8})
can be approximately expressed as follows:
\begin{equation}
\Phi_M(t_s,t_i) \approx \sum_{n=0}^{M-1} \exp[i(\varphi -
\Delta\omega(t_s+t_i))n] \phi(t_s,t_i) ,
\label{17}
\end{equation}
where
\begin{eqnarray}
\phi(t_s,t_i) &=& \frac{1}{2\pi} \int_{0}^{\infty} d\omega_s \int_{0}^{\infty}
d\omega_i \phi(\omega_s,\omega_i) \exp(i\omega_s t_s) \nonumber
\\
& & \mbox{} \times \exp(i\omega_i t_i)
\label{18}
\end{eqnarray}
denotes the Fourier transform of the two-photon amplitude associated
with one pinhole. The expression in Eq.~(\ref{17}) indicates that
interference of photon-pair amplitudes originating in different
pinholes creates interference fringes in the sum $ t_s + t_i $ of
the occurrence times of the signal and idler photons. This is caused
by positive correlations in the signal- and idler-field
frequencies. The analysis of the sum in Eq.~(\ref{17}) shows that
the greater the number $ M $ of pinholes, the narrower the range
of allowed values of the sum $ t_s + t_i $ [$ \lim_{M\rightarrow
\infty} \sum_{n=0}^{M-1} \exp(ixn) \propto \delta(x) $].
A comparison of the shapes of the two-photon temporal amplitudes $ {\cal
A}_M(t_s,t_i) $ derived in Eq.~(\ref{10}) for $ M=2 $ and $ M=8 $
pinholes, shown in Fig.~\ref{fig8}, reveals this tendency toward
localization in the time domain.
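The narrowing of the allowed range of $t_s+t_i$ with growing $M$ can be seen from the kernel $\sum_{n=0}^{M-1}\exp(ixn)$ alone. A small Python sketch; the effective-width measure (integral of the squared modulus divided by its maximum, which equals $2\pi/M$ analytically) is our own choice of diagnostic:

```python
import numpy as np

# The factor sum_{n=0}^{M-1} exp(i n x), with x = varphi - dw*(t_s + t_i),
# concentrates around x = 0 as the number of pinholes M grows.  We measure
# its effective width: integral of |kernel|^2 over one period divided by
# the peak value, which is 2*pi/M analytically.
x = np.linspace(-np.pi, np.pi, 4001)
dx = x[1] - x[0]

def eff_width(M):
    y = np.abs(np.exp(1j * np.outer(np.arange(M), x)).sum(axis=0)) ** 2
    return y.sum() * dx / y.max()

print(eff_width(2) / np.pi)   # ~1:   width pi for M = 2
print(eff_width(8) / np.pi)   # ~1/4: four times narrower for M = 8
```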
\begin{figure}
\caption{Contour plot of the squared modulus $ |{\cal A}_M|^2 $ of the two-photon
temporal amplitude for $ M=2 $ and $ M=8 $ pinholes.}
\label{fig8}
\end{figure}
These features, originating in the positive correlations of the signal-
and idler-field frequencies, are experimentally accessible by
measuring the pattern of the coincidence-count rate $ R_n^{\rm F} $ in
a Franson interferometer for sufficiently large values of the signal-
and idler-photon delays $ \tau_s $ and $ \tau_i $. A (nearly
separable) two-photon state from one pinhole creates a chessboard
pattern tilted by 45~degrees (see Fig.~\ref{fig9}a). If amplitudes from
several pinholes are included, typical fringes oriented at 45
degrees become visible in the coincidence-count patterns given by
the rate $ R_n^{\rm F} $ (compare Figs.~\ref{fig9}a and \ref{fig9}b).
The greater the number $ M $ of pinholes, the better the fringes
are formed.
\begin{figure}
\caption{Contour plot of the normalized coincidence-count rate $ R_n^{\rm F} $
in a Franson interferometer.}
\label{fig9}
\end{figure}
Superposition of photon-pair amplitudes (with a suitable phase
compensation) from a given range of signal-field emission angles $
\theta_s $ (and the corresponding range of idler-field emission
angles $ \theta_i $), as defined by rectangular apertures, gives a
two-photon spectral amplitude $ \Phi $ having a cigar shape with
coincident frequencies (see Fig.~\ref{fig10}). The phase
compensation requires the introduction of an additional phase
shift depending on the radial emission angle $ \theta_s $ and can
be accomplished, e.g., by a wedge-shaped prism. The Schmidt
decomposition of the two-photon amplitude $ \Phi $ shows that
several modes are non-negligibly populated ($ \lambda_1 = 0.78 $,
$ \lambda_2 = 0.13 $, and $ \lambda_3 = 0.04 $). Typical profiles
of the mode functions $ f_n $ for this configuration are plotted in
Fig.~\ref{fig11}. The first and most populated mode extends over
all frequencies, whereas oscillations are characteristic of the
other modes. The larger the spectral width of the signal and idler
fields, the larger the cooperativity parameter $ K $. This means
that the number of effective independent modes can easily be
changed just by changing the height of the rectangular apertures,
which makes this source of photon pairs extraordinarily useful. Both
emitted photons are perfectly indistinguishable, providing
visibility equal to one in a Hong--Ou--Mandel interferometer. The positive
correlation in the signal- and idler-field frequencies gives
coincidence-count interference fringes in a Franson interferometer
tilted by 45 degrees for sufficiently large values of the signal-
and idler-field delays $ \tau_s $ and $ \tau_i $.
\begin{figure}
\caption{Contour plot of squared modulus $ |\Phi|^2 $ of the two-photon spectral
amplitude created by superposition of photon-pair amplitudes from a certain range
of signal-field (and idler-field) emission angles $ \theta_s $ ($ \theta_i $) with
a suitable phase compensation. The structure is pumped by a pulse 250~fs long;
$ |\Phi|^2 $ is normalized according to the condition
$ 4 \int d\omega_s \int d\omega_i |\Phi(\omega_s,\omega_i)|^2
/ (\omega_p^0)^2 = 1 $.}
\label{fig10}
\end{figure}
\begin{figure}
\caption{Squared modulus $ |f_n|^2 $ of the mode functions with $ n=1 $ (solid line),
2 (solid line with *), and 3 (solid line with $ \triangle $) of the Schmidt decomposition
of the two-photon spectral amplitude $ \Phi $ in Fig.~\ref{fig10}.}
\label{fig11}
\end{figure}
\section{Spectrally non-degenerate emission of photon pairs}
The simplest way to observe spectrally non-degenerate
emission of a photon pair is to consider collinear geometry and
exploit a random structure with two different transmission peaks.
The pump frequency is then given as the sum of the central frequencies
of these transmission peaks, into which the signal and idler photons
are emitted. Generating a suitable structure using the
algorithm and geometry presented in Sec.~III is in this case roughly
an order of magnitude more difficult than generating one that
provides frequency-degenerate photon pairs. Structures having two
transmission peaks with considerably different bandwidths
are especially interesting, since the spectral bandwidths of the
signal and idler fields differ accordingly. Such states are
interesting in some applications, e.g., in constructing heralded
single-photon sources.
The two-photon spectral amplitude $ \phi(\omega_s,\omega_i) $ has
a cigar shape elongated along the frequency of the field with the
larger spectral bandwidth (say, the signal field). On the other
hand, the two-photon temporal amplitude $ {\cal A}(t_s,t_i) $ is
broken into several islands along the axis giving the idler-photon
detection time $ t_i $. Phase modulation of the two-photon
spectral amplitude $ \phi(\omega_s,\omega_i) $, with faster changes
along the frequency $ \omega_i $ and slower changes along the
frequency $ \omega_s $, is responsible for this behavior. The photon
fluxes of the signal and idler fields are then composed of several
peaks, as documented in Fig.~\ref{fig12}. This feature is reflected
in an asymmetric dip in the coincidence-count rate $ R_n^{\rm HOM} $
in a Hong--Ou--Mandel interferometer, which also shows oscillations at
the difference of the central signal- and idler-field frequencies.
\begin{figure}
\caption{Photon fluxes of the signal ($ {\cal N}_s $) and idler ($ {\cal N}_i $)
fields, composed of several peaks.}
\label{fig12}
\end{figure}
The asymmetry of the interference dip shown in Fig.~\ref{fig13} can be
related to the shape of the spectral two-photon amplitude $ \phi $
\cite{PerinaJr1999}. The origin of these effects lies in the delays caused
by the zig-zag movement of the generated photons inside the
structure.
\begin{figure}
\caption{Coincidence-count rate $ R_n^{\rm HOM} $ in a Hong--Ou--Mandel
interferometer, showing an asymmetric interference dip.}
\label{fig13}
\end{figure}
\section{Conclusions}
Nonlinear random layered structures in which an optical analog of
Anderson localization occurs have been analyzed as a suitable
source of photon pairs with narrow spectral bandwidths and perfect
indistinguishability. Spectral bandwidths in the range from 1~nm
to 0.01~nm are available for different realizations of a random
structure. Photon pairs with the same signal- and idler-field
bandwidths as well as with considerably different bandwidths can
be generated. Random structures are flexible as for the generated
frequencies and emission angles. Two-photon states with coincident
frequencies and variable spectral bandwidth can be reached if
two-photon amplitudes of photon pairs generated into different
emission angles are superposed. Also two-photon states with
signal- and idler-field spectra composed of several peaks and
characterized by collective spectral mode functions are available.
All these states are very perspective for optical implementations
of many quantum information protocols. Possible implementation of
these sources into integrated optoelectronic circuits is a great
advantage.
\acknowledgments
The authors acknowledge support by projects IAA100100713 of GA AV
\v{C}R, COST 09026, 1M06002, and MSM6198959213 of the Czech
Ministry of Education. Support from MIUR project II04C0E3F3 Collaborazioni
Interuniversitarie ed Internazionali tipologia C and from the cooperation
agreement between Palack\'{y} University and University La Sapienza
in Roma is also acknowledged. The authors thank A. Messina for
fruitful discussions.
\end{document}
\begin{document}
\title{Event-by-event simulation of double-slit experiments with single photons}
\section{Introduction}
In 1802, Young performed a double-slit experiment with light in order to resolve the question of whether light was composed of particles,
as in Newton's corpuscular picture, or rather consisted of waves~\cite{YOUNG}.
His experiment showed that the light emerging from the slits produces a fringe pattern on the screen that is characteristic of interference, discrediting
Newton's corpuscular theory of light~\cite{YOUNG}.
It took about a hundred years until Einstein, with his explanation of the photoelectric effect in terms of photons, revived Newton's particle
picture of light~\cite{EINS05a}.
In 1924, de Broglie introduced the idea that also matter, not just light, can exhibit wave-like properties~\cite{BROG25}.
This idea has been confirmed in various double-slit experiments with massive objects such as
electrons~\cite{JONS61,MERL76,TONO89,NOEL95}, neutrons~\cite{ZEIL88,RAUC00}, atoms~\cite{CARN91,KEIT91}
and molecules such as $C_{60}$ and $C_{70}$~\cite{ARND99,BREZ02}, all showing interference.
The observation that matter and light exhibit both wave and particle character, depending
on the circumstances under which the experiment is carried out, is reconciled by introducing
the concept of particle-wave duality~\cite{HOME97}.
In most double-slit experiments, the interference pattern is built up by recording individual clicks of the detectors.
In some of these experiments~\cite{MERL76,TONO89,JACQ05} it can be argued that at any time, there is only one object
that travels from the source to the detector.
Under these circumstances, the real challenge is to explain how the detection of individual objects that do not interact with each other
can give rise to the interference patterns that are being observed.
According to Feynman, this phenomenon is ``impossible, absolutely impossible to explain in any classical way and has in it
the heart of quantum mechanics''~\cite{FEYN65}.
Later, Feynman used the double-slit experiment as an example to argue that ``far more fundamental was the discovery that in nature
the laws of combining probabilities were not those of the classical probability theory of Laplace''~\cite{FEYN65b}.
It is known that the latter statement is incorrect as it results from an erroneous application of probability theory~\cite{BALL86,BALL01}.
In this letter, we show that Feynman's former statement also needs to be revised:
We present a simple computer algorithm that reproduces, event by event
(events being defined as clicks of a detector,
just as in real double-slit experiments), the interference patterns that are usually associated with wave behavior.
We also demonstrate that our event-by-event simulation model reproduces the
wave mechanical results of a recent single-photon
interference experiment that employs a Fresnel biprism~\cite{JACQ05}.
In our simulation model every essential component of the laboratory experiment, such as
the single-photon source, the slit, the Fresnel biprism, and the detector array,
has a counterpart in the algorithm.
The data is analyzed by counting detection events, just as in Ref.~\cite{JACQ05}.
The simulation model is solely based on experimental facts and satisfies Einstein's criterion
of local causality.
In a pictorial description of our simulation model, we may speak about ``photons'' generating
the detection events. However, these so-called photons, as we will call them in the sequel,
are elements of a model or theory for the real laboratory experiment only.
The only experimental facts are the settings of the various apparatuses and the detection events.
What happens in between activating the source and the registration of the detection
events belongs to the domain of imagination.
In the simulation model, the photons have which-way information, never have direct communication with each other
and arrive one by one at a detector.
Although the photons know exactly which route they followed, they
nevertheless build up an interference pattern at the detector, thereby
contradicting the first part of Feynman's first statement, reproduced in the introduction.
With our simulation model, we demonstrate that it is possible to give
a complete description of the double-slit experiment in terms of particles only.
The event-based simulation approach that we describe in this letter
is unconventional in that it does not require knowledge of the wave amplitudes as
obtained by solving the wave mechanical problem.
Instead, the interference patterns are obtained through a simulation of locally causal,
classical dynamical systems.
Our approach provides a common-sense description of the experimental facts without invoking
concepts from quantum theory such as the particle-wave duality~\cite{HOME97}.
In our simulation approach, we adopt the point of view that quantum theory has nothing to say about individual events~\cite{HOME97}.
Therefore, the fact that there exist event-by-event simulation algorithms that reproduce the results of quantum theory
has no direct implications for the foundations of quantum theory:
These algorithms describe the process of generating events at a level of detail
that is outside the scope of what current quantum theory can describe.
The work presented here is not concerned with an interpretation or an extension of quantum theory.
For phenomena that cannot (yet) be described by a deductive theory, it is common practice to use probabilistic models.
Although Kolmogorov's probability theory provides a rigorous framework to formulate such models,
there are ample examples that illustrate how easy it is to make plausible assumptions that create all kinds of paradoxes,
also for every-day problems that have no bearing on quantum theory at all~\cite{GRIM95,TRIB69,JAYN03,BALL03}.
Subtle mistakes, such as dropping (some of the essential) conditions,
as in the discussion of the double-slit experiment~\cite{BALL86,BALL01},
mixing up the meaning of physical and statistical independence, or
exchanging one probability space for another during the course of an argument, can give rise to
all kinds of paradoxes~\cite{JAYN89,HESS01,BALL86,BALL03,HESS06}.
It seems that the
interference phenomena that we focus on in this letter
cannot be explained within the framework of Kolmogorov's probability theory~\cite{KHRE01}.
Our simulation approach does not rely on concepts of probability theory:
We strictly stay in the domain of finite-digit arithmetic.
\section{Event-by-event simulation}
In our simulation approach, the photons are regarded as messengers that travel from
a source to a detector. The messenger carries a message that may change as the
messenger encounters another object on its path such as the Fresnel biprism for instance.
The algorithms for updating the messages are designed in such a way that the
collection of many messages, that is the collection of many detection events, reproduces the results of Maxwell's theory.
The key point of these algorithms is that they define classical, dynamical systems that are adaptive.
This letter builds on earlier work~\cite{RAED05d,RAED05b,RAED05c,MICH05,RAED06c,RAED07a,RAED07b,RAED07c,ZHAO08,ZHAO08b} that
demonstrates that it is possible to simulate quantum phenomena on the level of individual events
without invoking concepts of quantum theory or probability theory.
Specifically, we have demonstrated that locally-connected networks of processing units
with a primitive learning capability can simulate event by event,
the single-photon beam splitter and Mach-Zehnder interferometer experiments of Grangier {\sl et al.}~\cite{GRAN86},
Einstein-Podolsky-Rosen experiments with photons~\cite{ASPE82a,ASPE82b,WEIH98},
universal quantum computation~\cite{MICH05,RAED05c},
and Wheeler's delayed choice experiment of Jacques {\sl et al.}~\cite{JACQ07}.
In our earlier work, there was no need to simulate the detection process itself
but, as we argue below, in an event-by-event simulation of two-beam interference it is logically impossible
to reproduce the results of wave theory without introducing a model for the detectors.
We show that the simplest algorithm that accounts for the essential features of a single-event
detector allows us to reconstruct, event by event, the interference patterns that are described by wave theory.
Incorporating this detector model in the simulation models that we reported about in our earlier work
does not change the conclusions of Refs.~\cite{RAED05d,RAED05b,RAED05c,MICH05,RAED06c,RAED07a,RAED07b,RAED07c,ZHAO08,ZHAO08b}.
In this sense, the detector model adds a new, fully compatible, component to our collection of event-based algorithms.
\begin{figure}
\caption{Schematic diagram of an interference experiment with a Fresnel biprism (FBP)~\cite{BORN64}.}
\label{DS_FBP}
\end{figure}
\begin{figure}
\caption{Schematic diagram of a double-slit experiment with two sources $S_1$ and $S_2$ of width $a$,
separated by a center-to-center distance $d$.
The sources emit light according to a uniform intensity distribution and with a uniform angular distribution,
$\beta$ denoting the angle. The light is recorded by detectors $D$ positioned on a semi-circle with radius $X$.
}
\label{doubleslit}
\end{figure}
\begin{figure}
\caption{Schematic diagram of a two-beam interference experiment with two line sources $S_1$ and $S_2$ having a spatial
Gaussian profile.
The sources are separated by a center-to-center distance $d$ and emit light according to a uniform angular distribution,
$\beta$ denoting the angle.
The light is detected by detectors $D$ positioned at $(X,y)$.}
\label{simu11}
\end{figure}
\section{Wave theory}
Figure~\ref{DS_FBP} shows a schematic diagram of a two-beam interference experiment with a Fresnel biprism~\cite{BORN64}.
A pencil of light, emitted by the source $S$, is divided by refraction into two pencils~\cite{BORN64}.
Interference can be obtained in the region where both pencils overlap, denoted by the grey area in Fig.~\ref{DS_FBP}.
As a Fresnel biprism consists of two equal prisms with small refraction angle and as the angular aperture of the pencils is small,
we may neglect aberrations~\cite{BORN64}.
The system consisting of the source $S$ and the Fresnel biprism can then be replaced by
a system with two virtual sources $S_1$ and $S_2$~\cite{BORN64}, see Fig.~\ref{DS_FBP}.
For such a system, it is straightforward to compute the intensity of the wave at the detector screen.
We consider two cases:
\begin{itemize}
\item{The sources $S_1$ and $S_2$ are slits of width $a$, separated by a center-to-center distance $d$, see Fig.~\ref{doubleslit}.
Then, in the Fraunhofer regime, we have~\cite{BORN64}
\begin{equation}
I(\theta) = A\left(\frac{\sin\frac{ka\sin\theta}{2}}{\frac{ka\sin\theta}{2}}\right)^2 \cos^2\frac{kd\sin\theta}{2},
\label{eq_slit}
\end{equation}
where $A$ is a constant, $k$ is the wave number, and $\theta$ denotes the angular position of the detector on the circular screen.}
\item{The sources $S_1$ and $S_2$ form a line source with a current distribution given by
\begin{equation}
J(x,y) = \delta(x)\sum_{s=\pm1}
e^{-(y-sd/2)^2/2\sigma^2}
,
\label{Jy}
\end{equation}
where $\sigma$ is the standard deviation and $d$ denotes the distance between the centres of the two sources.
The intensity of the overlapping pencils reads
\begin{equation}
I(y) = B\left(\cosh\frac{byd}{\sigma ^2} + \cos\frac{(1-b)kyd}{X} \right)e^{\frac{-b(y^2+d^2/4)}{\sigma ^2}},
\label{eq_Gaussian}
\end{equation}
where $B$ is a constant, $b=k^2\sigma^4/(X^2 + k^2\sigma^4)$, and $(X,y)$ are the coordinates
of the detector (see Fig.~\ref{simu11}). The closed-form expression in Eq.~(\ref{eq_Gaussian}) was obtained
by assuming that $d\ll X$ and $\sigma\ll X$.}
\end{itemize}
From Eqs.~(\ref{eq_slit}) and (\ref{eq_Gaussian}), it directly follows that the intensity distribution on the detector screen
displays fringes that are characteristic of interference.
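The closed-form result of Eq.~(\ref{eq_slit}) is easy to check numerically. The following Python sketch evaluates it for the slit parameters used in the simulations below ($a=\lambda$, $d=5\lambda$, $\lambda=670$~nm) and verifies the central maximum and the position of the first interference minimum:

```python
import numpy as np

# Sanity check of Eq. (eq_slit) with the parameters of the simulations
# below: slit width a = lambda, separation d = 5*lambda, lambda = 670 nm.
lam = 670e-9
a, d = lam, 5 * lam
k = 2 * np.pi / lam

def intensity(theta, A=1.0):
    u = 0.5 * k * a * np.sin(theta)
    v = 0.5 * k * d * np.sin(theta)
    # np.sinc(x) computes sin(pi*x)/(pi*x), hence the division by pi
    return A * np.sinc(u / np.pi) ** 2 * np.cos(v) ** 2

# the cos^2 factor vanishes at sin(theta) = (m + 1/2) * lambda / d
theta_min = np.arcsin(lam / (2 * d))
print(intensity(0.0))        # 1.0: central maximum
print(intensity(theta_min))  # ~0:  first interference minimum
```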
Results of a time-resolved experiment that is a laboratory realization
of the interference experiment schematically depicted in Fig.~\ref{simu11}
are presented in Ref.~\cite{SAVE02}.
\section{Simulation model}
Looking at Figs.~\ref{doubleslit} and \ref{simu11}, common sense leads to the conclusion that,
once we exclude the possibility of direct communication between photons,
the fact that we observe
individual events that form an interference pattern can only be due to the presence and
internal operation of the detector (we ignore the possibility that there is no common-sense explanation for this phenomenon).
Therefore, it is worthwhile to consider the detection process in more detail.
In its simplest form, a light detector consists of a material that can be ionized by light.
The electric charges that result from the ionization process are then amplified and
subsequently detected by appropriate electronic circuits.
As usual, the interaction between the incident electromagnetic field ${\mathbf E}$ and
the material takes the form ${\mathbf P}\cdot{\mathbf E}$, where ${\mathbf P}$ is the polarization vector of the material~\cite{BORN64}.
Treating this interaction in first-order perturbation theory, the detection probability is
$P(t)=\int^{t}_{0}\int^{t}_{0}\langle\langle{\mathbf E^{T}(t')}\cdot{\mathbf K(t'-t'')}\cdot{\mathbf E(t'')} \rangle\rangle dt'dt''$
where ${\mathbf K(t'-t'')}$ is a memory kernel that contains information about the material only and
$\langle\langle.\rangle\rangle$ denotes the average with respect to the initial state of the electromagnetic field~\cite{BALL03}.
Very sensitive photon detectors such as photomultipliers and avalanche diodes have an additional feature:
They are trigger devices meaning that the generated signal depends on a threshold.
From these general considerations, it is clear that a minimal model should be able
to account for the memory and the threshold behavior of real photon detectors.
As it is our intention to perform event-based simulations,
the model for the detector should, in addition to the two features mentioned earlier, operate on the basis of individual events.
\subsection{Messenger}
In our simulation approach, we view each photon as a messenger.
Each messenger carries a message, representing its time of flight.
Compatibility with the macroscopic description (Maxwell's theory) demands that the encoding
of the time of flight is modulo a distance which, in Maxwell's theory, is the wavelength of the light.
Thus, the message is conveniently encoded as a two-dimensional unit vector
${\mathbf e}_i=(e_{0,i}, e_{1,i})=(\cos\phi_i, \sin\phi_i)$, where $\phi_i$ is the event-based equivalent of the
phase of the electromagnetic wave and the subscript $i>0$ labels the messages.
\subsection{Source}
A single-photon source is trivially realized in a simulation model in which the photons are viewed as messengers.
We simply generate a message, wait until this message has been processed by the detector, and then generate the next message.
This ensures that there can be no direct communication between the messages, implying that our simulation model (trivially) satisfies Einstein's criterion of local causality.
\subsection{Detector}
A simple model for the detector that accounts for the three features mentioned earlier
contains an internal vector ${\mathbf p}_i = (p_{0,i},p_{1,i})$ with Euclidean norm less than or equal to one.
This vector is updated according to the rule
\begin{equation}
{\mathbf p}_i = \gamma {\mathbf p}_{i-1} + (1-\gamma) {\mathbf e}_i,
\label{ruleofLM}
\end{equation}
where $0<\gamma<1$. The machine generates a binary output signal $S_i$ using the threshold function
\begin{equation}
S_i = \Theta(p^{2}_{0,i}+p^{2}_{1,i}-r_i),
\label{thresholdofLM}
\end{equation}
where $0\leq r_i <1$.
The total detector count is defined as
\begin{equation}
N=\sum^{M}_{i=1}S_i,
\label{N_counts}
\end{equation}
where $M$ is the total number of messages received. Thus, $N$ counts the number of ones generated by the machine.
Although not essential for our simulations, it is convenient to use pseudo-random numbers for $r_i$.
This will mimic the unpredictability of the detector signal. The parameter $\gamma$ controls the precision
with which the message processing machine defined by Eq.~(\ref{ruleofLM}) can represent a sequence of messages with the same
${\mathbf e}_i$ and also controls the pace at which new messages affect the internal state of the machine~\cite{RAED05d}.
The internal vector ${\mathbf p}_i$ and $\gamma$ play the roles of the polarization vector ${\mathbf P}(t)$
and the memory kernel ${\mathbf K(t'-t'')}$, respectively. Notice that the formal solution of Eq.~(\ref{ruleofLM})
has the same mathematical structure as the constitutive equation
${\mathbf P}(t)=\int_0^t \chi(u) {\mathbf E}(t-u) du$ in Maxwell's theory~\cite{BORN64}.
The threshold function of the real detector is implemented through Eq.~(\ref{thresholdofLM}).
A detector screen is just a collection of identical detectors and is modeled as such.
Each detector has a predefined spatial window within which it accepts messages.
It is not easy to study the behavior of the classical, dynamical system defined by
Eqs.~(\ref{ruleofLM})--(\ref{N_counts}) by analytical methods but it is close to trivial
to simulate the model on a computer.
Note that the machine defined by Eq.~(\ref{ruleofLM}) has barely enough memory to store the equivalent of one message.
Thus, the machine derives its power from the way it processes successive messages, not from storing a lot of data.
In particular, the machine does not know about $M$, a quantity that is unknown in real experiments.
It is self-evident that the detector model defined by Eqs.~(\ref{ruleofLM}) and
(\ref{thresholdofLM}) should not be regarded as a realistic model for say, a photomultiplier.
Our aim is to show that, in the spirit of Occam's razor, this is probably the simplest event-based model
that can reproduce the interference patterns that we usually describe by wave theory.
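To make the update rules concrete, here is a minimal Python sketch of a single detector machine defined by Eqs.~(\ref{ruleofLM})--(\ref{N_counts}); the choice of pseudo-random generator, seed, and message stream is ours and serves only to illustrate the learning behavior:

```python
import numpy as np

# Minimal sketch of the detector machine of Eqs. (ruleofLM)-(N_counts):
# internal vector p_i = gamma*p_{i-1} + (1-gamma)*e_i and threshold output
# S_i = Theta(|p_i|^2 - r_i) with uniform pseudo-random r_i in [0,1).
class Detector:
    def __init__(self, gamma=0.999, seed=0):
        self.gamma = gamma
        self.p = np.zeros(2)                    # internal vector p_i
        self.rng = np.random.default_rng(seed)  # source of the r_i
        self.count = 0                          # N, the number of ones

    def process(self, phi):
        e = np.array([np.cos(phi), np.sin(phi)])  # message e_i
        self.p = self.gamma * self.p + (1 - self.gamma) * e
        s = int(self.p @ self.p > self.rng.random())
        self.count += s
        return s

# A detector fed messages with one fixed phase "learns" it: |p| -> 1,
# so almost every late message produces a click.
det = Detector()
clicks = sum(det.process(0.3) for _ in range(20000))
print(clicks > 15000)   # True: high count rate for a constant-phase stream
```

The exponential moving average makes the click probability approach $|{\mathbf p}|^2\to 1$ for a constant-phase stream, with a relaxation over roughly $1/(1-\gamma)$ messages.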
\begin{figure}
\caption{Detector counts as a function of the angular (spatial) detector position $\theta$ ($y$),
as obtained by event-by-event simulations of the interference experiments shown in Fig.~\ref{doubleslit} (a)
and Fig.~\ref{simu11} (b).}
\label{simu1}
\end{figure}
\begin{figure}
\caption{Schematic diagram of the simulation setup of a single-photon experiment with a Fresnel biprism.
The apex of the Fresnel biprism with summit angle $\alpha$ is positioned at $(X',0)$.
A line source with a current distribution given by Eq.~(\ref{Jy}) is used.}
\label{ds_real}
\end{figure}
\begin{figure}
\caption{Detector counts as a function of the detector position $y$
of the detector array positioned at $X$ (see Fig.~\ref{ds_real}).}
\label{simu22}
\end{figure}
\section{Simulation results}
First, we show that our event-by-event simulation model reproduces the wave mechanical results
Eq.~(\ref{eq_slit}) of the double-slit experiment.
Second, we simulate a two-beam interference experiment
and demonstrate that the simulation data
agrees with Eq.~(\ref{eq_Gaussian}).
Finally, we present the results for the simulation of the single-photon interference experiment with a Fresnel biprism~\cite{JACQ05},
as depicted in Fig.~\ref{DS_FBP}.
\subsection{Double-slit experiment}
As a first example, we consider sources that are slits of width $a=\lambda$ ($\lambda=670$ nm in all our simulations), separated by a distance $d=5\lambda$,
see Fig.~\ref{doubleslit}.
In Fig.~\ref{simu1}(a), we present the simulation results for a source-detector distance $X=0.05$ mm and for $\gamma=0.999$.
When a messenger (photon) travels from the source at $(0,y)$ to the detector screen positioned at $X$,
it updates its own time of flight, or equivalently its phase $\phi_i$.
This time of flight is calculated according to geometrical optics~\cite{BORN64}.
As the messenger hits a detector, the detector updates its internal state and decides whether to output a zero or a one.
This process is repeated many times.
The results of wave theory, as given by Eq.~(\ref{eq_slit}), are represented by the dashed lines.
Looking at Fig.~\ref{simu1}(a), it is clear that there is excellent agreement between the event-based simulation and wave theory.
\subsection{Two-beam interference experiment}
As a second example, we assume that the messengers leave either source $S_1$ or $S_2$ from a position $y$ that is
distributed according to a Gaussian distribution with standard deviation $\sigma$ and mean $+d/2$ or $-d/2$, respectively
(see Fig.~\ref{simu11}).
The simulation results for a source-detector distance $X=0.1$ mm and for $\gamma=0.999$ are shown in Fig.~\ref{simu1}(b).
The dashed line is the corresponding result of wave theory, see Eq.~(\ref{eq_Gaussian}).
Also in this case, the agreement between wave theory and the event-by-event simulation is extremely good.
\subsection{Experiment with a Fresnel biprism}
For simplicity, we assume that the source $S$ is located in the Fresnel biprism.
Then, the results do not depend on the dimension of the Fresnel biprism.
Figure~\ref{ds_real} shows the schematic representation of the single-photon interference experiment that we simulate.
Simulations with a Fresnel biprism of finite size yield results that differ
quantitatively only (results not shown). The time of flight of the $i$th message
is calculated according to the rules of geometrical optics~\cite{BORN64}.
In the simulation, the angle of incidence $\beta$ of the photons is selected
randomly from the interval $[-\alpha /2,\alpha /2]$, where $\alpha$ denotes
the summit angle of the Fresnel biprism.
A collection of representative simulation results for $\gamma=0.999$
is shown in Fig.~\ref{simu22}, together with the results
as obtained from wave theory. Again, we find that there is excellent quantitative agreement between
the event-by-event simulation data and wave theory.
Furthermore, the simulation data presented in Fig.~\ref{simu22} is qualitatively very similar to the results
reported in Ref.~\cite{JACQ05} (compare with Fig.~4(d) and Fig.~5(a)(b) of Ref.~\cite{JACQ05}).
\section{Conclusion}
In this letter, we have demonstrated that it is possible to give a particle-only description of
single-photon interference experiments with a double slit, with two beams, and with a Fresnel biprism. Our
event-by-event simulation model
\begin{itemize}
\item{reproduces the results from wave theory,}
\item{satisfies Einstein's criterion of local causality,}
\item{provides a pictorial description that is not in conflict with common sense.}
\end{itemize}
We do not exclude that there are other event-by-event algorithms that
reproduce the interference patterns of wave theory.
For instance, in the case of the single-electron experiment with the biprism~\cite{TONO98},
it may suffice to have an adaptive machine handle the electron-biprism interaction without having
adaptive machines modeling the detectors. We leave this topic for future research.
We hope that our simulation results stimulate the design of new time-resolved single-photon experiments
to test our particle-only model for interference. However, in order to eventually falsify our model,
it would not suffice to simply publish raw experimental data and show
that they do not agree in all details with the model described in this letter.
Indeed, the model that we have presented is far from unique
and it requires little imagination to see that one can construct similar but more
sophisticated event-based models that can explain interference without making recourse to wave theory.
\section{Acknowledgment}
We would like to thank K. De Raedt, K. Keimpema, and S. Zhao for many helpful discussions.
\end{document}
\begin{document}
\title{
Dynamics of a qubit coupled to a dissipative nonlinear quantum oscillator: an effective bath approach}
\author{Carmen\, Vierheilig$^1$, Dario Bercioux$^2$ and Milena\, Grifoni$^1$}
\affiliation{$^1$ Institut f\"{u}r Theoretische Physik, Universit\"at
Regensburg, 93035 Regensburg, Germany\\
$^2$ Freiburg Institute for Advanced Studies, Albert-Ludwigs-Universit\"at Freiburg,
79104 Freiburg im Breisgau, Germany}
\date{\today}
\begin{abstract}
We consider a qubit coupled to a nonlinear quantum oscillator, the latter coupled to an Ohmic bath, and investigate the qubit dynamics. This composed system can be mapped onto that of a qubit coupled to an effective bath. An approximate mapping procedure to determine the spectral density of the effective bath is given. Specifically, within a linear response approximation the effective spectral density follows from the knowledge of the linear susceptibility of the nonlinear quantum oscillator. To determine the actual form of this susceptibility, we consider the periodically driven counterpart of our problem, the quantum Duffing oscillator, within linear response theory in the driving amplitude. Knowing the effective spectral density, the qubit dynamics is investigated. In particular, an analytic formula for the qubit's population difference is derived. Within the regime of validity of our theory, very good agreement is found with predictions obtained from a Bloch-Redfield master equation approach applied to the composite qubit-nonlinear oscillator system.
\end{abstract}
\pacs{03.65.Yz,03.67.Lx,05.40.-a,85.25.-j} \maketitle
\section{Introduction}
The understanding of relaxation and dephasing properties of qubits due to the surrounding environment is essential for quantum computation \cite{Nielsen}. A famous model to study environmental influences on the coherent dynamics of a qubit is the spin-boson model \cite{Weiss,LeggettRevModPhysErratum,Grifoni304}, consisting of a two-level system (TLS) bilinearly coupled to a bath of harmonic oscillators. Although the bath degrees of freedom can be traced out exactly, analytical solutions are only possible within perturbative schemes.
Solutions perturbative in the coupling of the TLS to the bath are typically obtained within a Born-Markov treatment of the Liouville equation for the TLS density matrix \cite{Redfield,Blum} or within the path integral formalism \cite{Weiss}. The equivalence of both methods has been demonstrated, for low temperatures and low damping strengths, in Ref. [\onlinecite{Hartmann}]. An alternative approach is to perform perturbation theory in the tunneling amplitude of the two-level system. Within the so-termed non-interacting blip approximation (NIBA) \cite{LeggettRevModPhysErratum,Grifoni304,Weiss} this yields equations of motion for the TLS reduced density matrix which capture the case of strong TLS-bath coupling. Reality is however often more complex, as the qubit might be coupled to other quantum systems besides the thermal bath. For example, to read out its state, a qubit is usually coupled to a read-out device.\\
In the following we mostly have in mind the flux qubit \cite{Mooij1999} read out by a DC-SQUID. The latter mediates the dissipation originating from the surrounding electromagnetic bath and can be modeled either as a linear or as a nonlinear oscillator \cite{Mooij1999,Tian,vanderWal,Chiorescu2003,Chiorescu2004,Johansson2006,Lupascuquantumstatedetection,Lee,Picot,LupascuHigh,Inomata,Wirth}. Recently, the nonlinearity of qubit read-out devices, for example of a DC-SQUID \cite{Lee,Picot,LupascuHigh,Inomata,Wirth} or of a Josephson bifurcation amplifier \cite{Siddiqi2,Siddiqi,MalletSingle}, has been used to improve the measurement scheme in terms of a faster read-out and higher fidelity. Specifically, the device was operated in a regime where the dynamics exhibits bifurcation features typical of a classical nonlinear oscillator. As demonstrated e.g. in Ref. [\onlinecite{Chiorescu2004}], the quantum limit is within experimental reach as well.\\
From the theoretical side there are two different viewpoints to investigate the dynamics of a qubit coupled to an oscillator, with the latter in turn coupled to a thermal bath. The first way is to consider the TLS and the oscillator as a single quantum system coupled to the bath, while the second is an effective bath description where the effective environment seen by the qubit includes the oscillator and the original thermal bath. The mapping to an effective bath has been discussed for the case in which the TLS is coupled to a \textit{harmonic} oscillator in Ref. [\onlinecite{Garg}]. Specifically, the spectral density of the effective bath acquires a broadened peak centered around the frequency of the oscillator. This case has been investigated in Refs. [\onlinecite{Wilhelm1,Goorden2005,Goorden2004,Goorden2003,Nesi,Kleff2,Brito}] by applying standard numerical and analytical methods established for the spin-boson model. All those works showed that the peaked structure of the effective bath is essential when the eigenfrequency of the TLS becomes comparable to the oscillator frequency.\\
The first approach was used in Ref. [\onlinecite{CarmenPRA}] to describe a qubit-nonlinear oscillator (NLO) system in the deep quantum regime. There the effects of the (harmonic) thermal reservoir can be treated using standard Born-Markov perturbation theory. The price to be paid, however, is that the Hilbert space of the qubit-nonlinear oscillator system is infinite, so that practical calculations require its truncation, invoking e.g. low temperatures \cite{CarmenPRA}.\\
In contrast to the above work we investigate here the case of a qubit-NLO system, with the latter being coupled to an Ohmic bath, within an effective bath description. Due to the nonlinearity of the oscillator, the mapping to a \textit{linear} effective bath is not exact. In this case a temperature and nonlinearity dependent effective spectral density well captures the NLO influence on the qubit dynamics.\\
The paper is organized as follows: In section \ref{model} we introduce the model with the relevant quantities. In section \ref{mapping} the mapping procedure is investigated and the effective spectral density for the corresponding linear case is given. Afterwards the mapping procedure is applied to the case of a qubit coupled to a nonlinear quantum oscillator. As a consequence of the mapping the determination of the effective spectral density is directly related to the knowledge of the susceptibility of the oscillator. We show how the susceptibility can be obtained from the steady-state response of a quantum Duffing oscillator in section \ref{suslin}. In section \ref{susnonlin} the steady-state response of the dissipative quantum Duffing oscillator is reviewed and its susceptibility is put forward. The related effective spectral density is derived in section \ref{Jeff}. In section \ref{qudyn} the qubit dynamics is investigated by applying the non-interacting blip approximation (NIBA) to the kernels of the generalized master equation which governs the dynamics of the population difference of the qubit. A comparison with the results of Ref. [\onlinecite{CarmenPRA}], obtained within the first approach, is shown. Further, analogies and differences with respect to the linear case are discussed. In section \ref{Conclusions} conclusions are drawn.
\section{Hamiltonian}\label{model}
We consider a composed system built of a qubit (the system of interest) coupled to a nonlinear quantum oscillator (NLO), see Fig. \ref{linearbath}.
To read out the qubit state we couple the qubit linearly to the oscillator with coupling constant $\overline{g}$, such that dissipation also enters the qubit dynamics via the intermediate NLO.
\begin{figure}
\caption{Schematic representation of the composed system built of a qubit, an intermediate nonlinear oscillator and an Ohmic bath.}
\label{linearbath}
\end{figure}
The Hamiltonian of the composed system reads:
\begin{equation}\label{eq10}
\hat{H}_{\rm tot}=\hat{H}_{\rm S}+\hat{H}_{\rm NLO}+\hat{H}_{\rm S+NLO}+\hat{H}_{\rm NLO+B}+\hat{H}_{\rm B},
\end{equation}
where
\begin{eqnarray}\label{gl3}
\hat{H}_{\rm S}&=&\frac{\hat{p}^2}{2\mu}+U(\hat{q}),\\
\hat{H}_{\rm NLO}&=& \frac{1}{2M}\hat{P}_y^2+\frac{1}{2} M\Omega^2\hat{y}^2+\frac{\overline{\alpha}}{4}\hat{y}^4,\nonumber\\
\hat{H}_{\rm S+NLO}&=&\overline{g}\hat{y}\hat{q},\nonumber\\
\hat{H}_{\rm NLO+B}&=& \sum_j\left[-c_j \hat{x}_j \hat{y}+\frac{c_j^2}{2m_j\omega_j^2}\hat{y}^2\right],\nonumber\\
\hat{H}_{\rm B}&=&\sum_j\left[
\frac{\hat{p}_j^2}{2m_j}+\frac{1}{2}m_j \omega_j^2\hat{x}_j^2\right]\nonumber.
\end{eqnarray}
Here $\hat{H}_{\rm S}$ represents the qubit Hamiltonian, where $\mu$ is the particle's mass and $U(q)$ a one-dimensional double well potential with minima at $q=\pm q_0/2$. $\hat{H}_{\rm NLO}$ is the NLO Hamiltonian, where the parameter $\overline{\alpha}>0$ accounts for the nonlinearity. When the oscillator represents a SQUID used to read-out the qubit, the oscillator frequency $\Omega$ corresponds to the SQUID's plasma frequency. The dissipation on the NLO is modeled in the following by coupling it to an Ohmic bath, characterized by the spectral density \cite{Weiss}:
\begin{equation}\label{ohmic}
J(\omega)=\frac{\pi}{2}\sum_{j=1}^\mathcal{N}\frac{c_j^2}{m_j\omega_j}\delta(\omega-\omega_j)=\eta\omega=M\gamma\omega.
\end{equation}
In the classical limit it corresponds to a white noise source, where $\eta$ is a friction coefficient with dimensions mass times frequency.\\
In the following focus will be on the qubit dynamics in the presence of the dissipative nonlinear oscillator. Namely we will study the time evolution of the qubit's position as described by:
\begin{eqnarray}
q(t)&:=&{\rm Tr}\{\hat{\rho}_{\rm tot}(t)\hat{q}\}={\rm Tr}_{\rm S}\{\hat{\rho}_{\rm red}(t)\hat{q}\},
\end{eqnarray}
where $\hat{\rho}_{\rm tot}$ and $\hat{\rho}_{\rm red}$ are the total and reduced density operators, respectively. The latter is defined as:
\begin{eqnarray}
\hat{\rho}_{\rm red}:={\rm Tr}_{\rm B} {\rm Tr}_{\rm NLO}\{\hat{\rho}_{\rm tot}\},
\end{eqnarray}
where the trace over the degrees of freedom of the bath and of the oscillator is taken. In Fig. \ref{approachschaubild} two different approaches to determine the qubit dynamics are depicted. In the first approach, which is elaborated in Ref. [\onlinecite{CarmenPRA}], one first determines the eigenstates and eigenvalues of the composed qubit-oscillator system and then includes environmental effects via standard Born-Markov perturbation theory. The second approach exploits an effective description for the environment surrounding the qubit based on a mapping procedure. This will be investigated in the next section.
\begin{figure}
\caption{Schematic representation of the complementary approaches available to evaluate the qubit dynamics: In the first approach one determines the eigenvalues and eigenfunctions of the composite qubit plus oscillator system (yellow (light grey) box) and accounts afterwards for the harmonic bath characterized by the Ohmic spectral density $J(\omega)$. In the effective bath description one considers an environment built of the harmonic bath and the nonlinear oscillator (red (dark grey) box). In the harmonic approximation the effective bath is fully characterized by its effective spectral density $J_{\rm eff}(\omega)$.}
\label{approachschaubild}
\end{figure}
\section{Mapping}\label{mapping}
The main aim is to evaluate the qubit's evolution described by $q(t)$. This can be achieved within an effective description using a mapping procedure. Thereby the oscillator and the Ohmic bath are put together, as depicted in Figure \ref{approachschaubild}, to form an effective bath. The effective Hamiltonian
\begin{eqnarray}\label{gl1}
\hat{H}_{\rm eff}&=&\hat{H}_{\rm S}+\hat{H}_{\rm B eff}
\end{eqnarray}
is chosen such that, after tracing out the bath degrees of freedom, the same dynamical equations for $q(t)$ are obtained as from the original Hamiltonian $\hat{H}_{\rm tot}$. Due to the nonlinear character of the oscillator an exact mapping implies that $\hat{H}_{\rm B eff}$ represents a nonlinear environment. We shall show in the following subsection using linear response theory that a linear approximation for $\hat{H}_{\rm B eff}$ is justified for weak coupling $\overline{g}$. Then Eq. (\ref{gl1}) describes an effective spin-boson problem where
\begin{eqnarray}\label{effbath}
\hat{H}_{\rm B eff}&=&\frac{1}{2}\sum_{j=1}^\mathcal{N}\left[\frac{\hat{P}_j^2}{m_j}+m_j\omega_j^2\left(\hat{X}_j-
\frac{d_j}{m_j \omega_j^2} \hat{q}\right)^2\right],
\end{eqnarray}
and the associated spectral density is:
\begin{equation}
J_{\rm{eff}}(\omega)=\frac{\pi}{2}\sum_{j=1}^\mathcal{N}\frac{d_j^2}{m_j\omega_j}\delta(\omega-\omega_j).
\end{equation}
The Hamiltonian (\ref{gl1}) with (\ref{effbath}) leads to coupled equations of motion \cite{Weiss,LeggettRevModPhysErratum}:
\begin{eqnarray}
\mu\ddot{\hat{q}}(t)+U'(\hat{q})+\sum_{j=1}^\mathcal{N}\left(\frac{d_j^2}{m_j\omega_j^2}\hat{q}\right)&=&\sum_{j=1}^\mathcal{N} d_j \hat{X}_j,\nonumber\\
m_j \ddot{\hat{X}}_j+m_j\omega_j^2\hat{X}_j&=&d_j \hat{q}\nonumber,
\end{eqnarray}
where $U'(\hat{q})=\frac{d}{dq}U(\hat{q})$. By formally integrating the second equation of motion and inserting the solution into the first equation the well-known Langevin equation for the operator $\hat{q}$ is obtained. This, in turn, allows to obtain the Langevin equation for $q_{\rm eff}(t):={\rm Tr}\{\hat{\rho}_{\rm eff}\hat{q}(t)\}$\cite{Weiss}:
\begin{eqnarray}\label{eom1}
\mu \ddot{q}_{\rm eff}+\mu\int_{0}^t dt'\gamma_{\rm eff}(t-t')\dot{q}_{\rm eff}+\langle U'(\hat{q})\rangle_{\rm eff}&=&0,
\end{eqnarray}
with the effective damping kernel $\gamma_{\rm eff}(t-t')$.\\
Notice that $\langle\dots\rangle_{\rm eff}$ indicates the expectation value taken with respect to $\hat{\rho}_{\rm{eff}}$, which is the density operator associated to $\hat{H}_{\rm{eff}}$ \cite{Weiss}. In Laplace space, defined by
\begin{eqnarray}
y(t)&=&\frac{1}{2\pi i}\int_{\mathcal{C}} d\lambda y(\lambda)\exp(\lambda t),\\
y(\lambda)&=&\int_{0}^\infty dt y(t)\exp(-\lambda t)\nonumber,
\end{eqnarray}
we obtain from Eq. (\ref{eom1}) the equation of motion:
\begin{equation}\label{map2}
\mu \lambda^2 q_{\rm eff}(\lambda)+\mu \lambda\gamma_{\rm eff}(\lambda) q_{\rm eff}(\lambda)+\langle U'(\lambda)\rangle_{\rm eff}=0.
\end{equation}
The real part $\gamma'_{\rm{eff}}(\omega)=\textrm{Re}[\hat{\gamma}(\lambda=-i\omega)]$ of the effective damping kernel $\gamma_{\rm eff}(t)$ is related to the spectral density via \cite{Weiss}:
\begin{eqnarray}\label{gammaJ}
\gamma'_{\rm{eff}}(\omega)=\frac{J_{\rm eff}(\omega)}{\mu\omega}.
\end{eqnarray}
The mapping for the case of zero nonlinearity $\overline{\alpha}$ and Ohmic damping has been discussed in Ref. [\onlinecite{Garg}]. There the influence of both the intermediate harmonic oscillator and the bath is embedded into an effective peaked spectral density given by:
\begin{eqnarray}\label{linearspecdens}
J_{\rm{eff}}^{\rm HO} (\omega)&=&\frac{\overline{g}^2\gamma\omega}{M(\Omega^2 - \omega^2)^2 + M\gamma^2 \omega^2 },
\end{eqnarray}
showing Ohmic low-frequency behaviour, $J_{\rm eff}^{\rm HO}(\omega)\rightarrow \overline{g}^2\gamma\omega/(M\Omega^4)$ for $\omega\rightarrow 0$.
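Eq. (\ref{linearspecdens}) and its Ohmic low-frequency limit are easily checked numerically; the parameter values in the following sketch are illustrative and in arbitrary units, with \texttt{g2} standing for $\overline{g}^2$.

```python
import numpy as np

# Illustrative parameters in arbitrary units; g2 stands for gbar^2.
M, Omega, gamma, g2 = 1.0, 1.0, 0.1, 0.01

def J_eff_HO(w):
    """Peaked effective spectral density of Garg et al.:
    J = g2*gamma*w / (M*(Omega^2 - w^2)^2 + M*gamma^2*w^2)."""
    return g2 * gamma * w / (M * (Omega**2 - w**2)**2
                             + M * gamma**2 * w**2)

# Ohmic low-frequency limit: J_eff_HO(w) -> g2*gamma*w/(M*Omega**4).
ohmic_slope = J_eff_HO(1e-6) / 1e-6
```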
\subsection{Equation of motion for the nonlinear Hamiltonian}
As discussed above, the mapping requires the knowledge of the \textit{reduced} dynamics of the system described by the variable $q(t)$.
Therefore we start from the coupled equations of motion derived from the Hamiltonian $\hat{H}_{\rm tot}$ given in Eq. (\ref{eq10}):
\begin{subequations}
\begin{eqnarray}
\mu \ddot{\hat{q}}+U'(\hat{q})=-\overline{g}\hat{y}\label{eq7a},\\
M\ddot{\hat{y}}+\eta\dot{\hat{y}}+M\Omega^2 \hat{y}+\overline{\alpha} \hat{y}^3&=&-\overline{g}\hat{q}+\hat{\xi}(t).\label{eq7b}
\end{eqnarray}
\end{subequations}
According to Eq. (\ref{ohmic}), $\eta=M\gamma$ is the damping coefficient and
\begin{equation}
\hat{\xi}(t)=\sum_{j=1}^\mathcal{N} c_j\left[x_j^{(0)}\cos(\omega_j t)+\frac{p_j^{(0)}}{m_j\omega_j}\sin(\omega_j t)\right]-M\gamma\delta(t)\hat{y}(0)
\end{equation}
is a fluctuating force originating from the coupling to the bath. In order to eliminate $\hat{y}$ from the first equation of motion, we have to calculate $\hat{y}[\hat{q}(t)]$ from the second equation.\\
In the following we look at equations of motion for the expectation values resulting from Eqs. (\ref{eq7a}) and (\ref{eq7b}), i.e., we look at the evolution of $q(t):={\rm Tr}\{\hat{\rho}_{\rm tot}\hat{q}(t)\}$ and $y(t):={\rm Tr}\{\hat{\rho}_{\rm tot}\hat{y}(t)\}$. Since we want to calculate $y(t)$ we turn back to Eq. (\ref{eq10}) and treat the coupling term $\hat{H}_{\rm S+NLO}$ as a perturbation, $\overline{g}\ll M\Omega^2$. Then the use of linear response theory in this perturbation is justified and we find:\\
\begin{eqnarray}\label{eq1}
\lefteqn{y(t)=\langle\hat{y}(t)\rangle_0}\\ &&-\frac{i}{\hbar}\int_{-\infty}^{\infty}dt'\theta(t-t')\langle[\hat{y}(t),\hat{y}(t')]\rangle_0 \overline{g} \langle \hat{q}(t')\rangle_0\theta(t')\nonumber\\
&&+\mathcal{O}(\overline{\alpha}\,\overline{ g}^2),\nonumber
\end{eqnarray}
where $\langle\dots\rangle_0$ denotes the expectation value in the absence of the coupling $\overline{g}$, which we assume has been switched on at time $t_0=0$.\\
Notice that for a \textit{linear} system, as for example the damped harmonic oscillator, the linear response \textit{becomes exact}, such that the neglected corrections are at least of order $\mathcal{O}(\overline{\alpha}\,\overline{ g}^2)$. Moreover, the time evolution of the expectation values is the same as in \textit{the classical case}; this fact corresponds to the Ehrenfest theorem \cite{Weiss}. For \textit{nonlinear} systems the expression in Eq. (\ref{eq1}) is an approximation, because all orders in the perturbation are nonvanishing\begin{footnote}{
An extension of the concept of linear response to nonlinear systems is the so-called Volterra expansion, which provides a systematic perturbation series in the forcing \cite{Volterra}.}
\end{footnote}.\\
In Laplace space,
Eq. (\ref{eq1}) yields:\\
\begin{eqnarray}\label{f1}
\delta y(\lambda)=\chi(\lambda)\overline{g}\langle \hat{q}(\lambda)\rangle_0+\mathcal{O}(\overline{\alpha}\,\overline{ g}^2),
\end{eqnarray}
where $\delta y(\lambda)=y(\lambda)-\langle \hat{y}(\lambda)\rangle_0$ and
where $\chi(\lambda)$ is the Laplace transform of the response function or susceptibility:\\
\begin{eqnarray}
\chi(t-t')=-\frac{i}{\hbar}\theta(t-t')\langle[\hat{y}(t),\hat{y}(t')]\rangle_0.
\end{eqnarray}
Since $q(\lambda)-\langle \hat{q}(\lambda)\rangle_0=\mathcal{O}(\overline{g}^2)$,
from Eqs. (\ref{eq7a}) and (\ref{f1}) it follows:\\
\begin{eqnarray}
&&\mu \lambda^2q(\lambda)+\overline{g}^2\chi(\lambda)q(\lambda)+\mathcal{O}(\overline{\alpha}\,\overline{g}^3,\overline{g}^4)\label{eqneu}\nonumber\\
&&=-\langle U'(\lambda)\rangle-\overline{g}\langle \hat{y}(\lambda)\rangle_0.
\end{eqnarray}
That is, we have a renormalization of the mass, and a damping-like term due to the coupled equations of motion. The effect of the nonlinearity is embedded in the response function $\chi$.\\
We assume in the following that in the absence of the coupling to the qubit the NLO and bath are in thermal equilibrium, which yields $\langle\hat{y}(t)\rangle_0=0$ for all times, and thus also: $\langle\hat{y}(\lambda)\rangle_0=0$.
\subsection{Mapping of the equations of motion and generic form for the effective spectral density}
By comparison of Eqs. (\ref{map2}) and (\ref{eqneu}) we can conclude that they yield the same dynamics if:
\begin{eqnarray}\label{rel1}
\langle U'(\lambda)\rangle_{\rm eff}=\langle U'(\lambda)\rangle,
\end{eqnarray}
and the effective bath is chosen such that:
\begin{eqnarray}
\overline{g}^2\frac{\chi(\lambda)}{\mu \lambda}&=&\gamma_{\rm eff}(\lambda).
\end{eqnarray}
By comparing the last equations with the relation (\ref{gammaJ}) and replacing $\lambda=-i\omega$
it follows:
\begin{eqnarray}\label{gl20}
J_{\rm eff}(\omega)= -\overline{g}^2\chi''(\omega),
\end{eqnarray}
where $\chi''(\omega)$ is the imaginary part of the susceptibility in Fourier space. We have now reduced the problem of finding the effective spectral density to that of determining the corresponding susceptibility. Notice that for a linear system the classical and quantum susceptibility coincide and are independent of the driving amplitude! In this case it is possible to calculate $\chi(\omega)$ directly from the classical equations of motion. For a generic nonlinear system, however, the classical and quantum susceptibilities differ.
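As a consistency check of Eq. (\ref{gl20}), one may insert the susceptibility of the damped \textit{harmonic} oscillator, for which the classical result is exact; up to the sign convention chosen for $\chi''$, this returns the peaked spectral density of Eq. (\ref{linearspecdens}). The sketch below uses illustrative parameter values, with \texttt{g2} standing for $\overline{g}^2$.

```python
import numpy as np

# Illustrative parameters (arbitrary units); g2 stands for gbar^2.
M, Omega, gamma, g2 = 1.0, 1.0, 0.2, 0.05
w = np.linspace(0.01, 3.0, 500)

# Susceptibility of the damped harmonic oscillator,
# M y'' + M gamma y' + M Omega^2 y = F(t), in Fourier space:
chi = 1.0 / (M * (Omega**2 - w**2) - 1j * M * gamma * w)

# Up to the sign convention for chi'', Eq. (gl20) gives back the
# peaked spectral density of the linear case.
J_from_chi = g2 * np.abs(chi.imag)
J_HO = g2 * gamma * w / (M * (Omega**2 - w**2)**2 + M * gamma**2 * w**2)
```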
\subsection{Linear susceptibility of a Duffing oscillator}\label{suslin}
In order to evaluate the linear susceptibility, we solve the auxiliary problem of calculating the susceptibility of a quantum Duffing oscillator (DO), i.e., of the nonlinear quantum oscillator in Eq. (\ref{gl3}) additionally driven by a periodic force with driving amplitude $F$ and driving frequency $\omega_{ex}$.
The corresponding equation of motion is:
\begin{equation}
M\ddot{\hat{y}}+\eta\dot{\hat{y}}+M\Omega^2 \hat{y}+\overline{\alpha} \hat{y}^3=-F\theta(t-t_0)\cos(\omega_{ex} t)+\hat{\xi}(t).
\end{equation}
Application of linear response theory in the driving yields the equation for the expectation value of the position of the oscillator:
\begin{eqnarray}
y(t)&=& \langle \hat{y}(t)\rangle_0-\frac{i}{\hbar}\int_{t_0}^{\infty}dt'\theta(t-t')\langle[\hat{y}(t),\hat{y}(t')]\rangle_0\nonumber\\
&&\times F\cos(\omega_{ex} t') +\mathcal{O}(F^2).
\end{eqnarray}
Using the symmetry properties of the susceptibility $\chi(\omega)$ we obtain in the steady-state limit:
\begin{eqnarray}\label{chisteadystate}
y_{st}(t)=\lim_{t_0\rightarrow -\infty}y(t)&=& \langle \hat{y}(t)\rangle_0+F\cos(\omega_{ex} t)\chi'(\omega_{ex})\nonumber\\
&&+ F\sin(\omega_{ex} t)\chi''(\omega_{ex})\nonumber+\mathcal{O}(F^3)\\
&\equiv&A\cos(\omega_{ex}t+\phi)+\mathcal{O}(F^3).
\end{eqnarray}
Here the presence of the Ohmic bath implies $\lim_{t_0\rightarrow -\infty}\langle \hat{y}(t)\rangle_0=0$. Notice that due to the inversion symmetry of the NLO, corrections of $\mathcal{O}(F^2)$ vanish in Eq. (\ref{chisteadystate}).
In Eq. (\ref{chisteadystate}) $A$ and $\phi$ are the amplitude and phase of the steady-state response. It follows $\chi(\omega)=\frac{A}{F}\exp(-i\phi)$, such that $\chi''(\omega)=-\frac{A}{F}\sin\phi$.
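The identification of $\chi'$ and $\chi''$ amounts to projecting one period of the steady-state response onto the two quadratures of the drive. A numerical sketch, with assumed values of $A$, $\phi$, $F$ and $\omega_{ex}$, recovers $\chi'=(A/F)\cos\phi$ and $\chi''=-(A/F)\sin\phi$:

```python
import numpy as np

# Assumed steady-state parameters (illustrative only).
A, phi, wex, F = 0.7, -0.4, 2.0, 0.1
T = 2.0 * np.pi / wex
N = 4096
t = np.arange(N) * (T / N)            # one full drive period
y_st = A * np.cos(wex * t + phi)

# Quadrature projections of Eq. (chisteadystate):
# chi'  = (2/(F*T)) * int_0^T y_st(t) cos(wex t) dt,
# chi'' = (2/(F*T)) * int_0^T y_st(t) sin(wex t) dt.
chi_re = (2.0 / (F * N)) * np.sum(y_st * np.cos(wex * t))
chi_im = (2.0 / (F * N)) * np.sum(y_st * np.sin(wex * t))
```

The rectangle rule over exactly one period is spectrally accurate here, since the integrands are trigonometric polynomials.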
\section{Steady-state dynamics of a Duffing oscillator}\label{susnonlin}
So far we have reduced the problem of finding the effective spectral density to the one of determining the steady-state response of the Duffing oscillator in terms of the amplitude $A$ and the phase $\phi$.
These quantities were recently derived in Refs. [\onlinecite{Peano2,CarmenChemPhys}], using the framework of a Bloch-Redfield-Floquet description of the dynamics of the DO. The results in Ref. [\onlinecite{CarmenChemPhys}] are applicable in a wide range of driving frequencies around the one-photon resonance regime $\omega_{ex}=\Omega+3\overline{\alpha} y_0^4/(4\hbar)\equiv\Omega_1$ for strong enough nonlinearities: $y_0 F/(2\sqrt{2})\ll 3\overline{\alpha} y_0^4/4\ll\hbar\Omega$, where we introduced the oscillator length $y_0=\sqrt{\hbar/(M\Omega)}$.\\
As illustrated in Refs. [\onlinecite{Peano2,CarmenChemPhys}] the amplitude and phase are fully determined by the knowledge of the matrix elements of the stationary density matrix of the Duffing oscillator in the Floquet basis, see e.g. Eqs. (67)-(70) in Ref. [\onlinecite{CarmenChemPhys}]. There the master equation yielding the elements of the stationary density matrix is analytically solved in the low temperature regime $k_B T\ll\hbar\Omega$ imposing a partial secular approximation, yielding Eq. (70) of Ref. [\onlinecite{CarmenChemPhys}], and restricting to spontaneous emission processes only. Here we follow the same line of reasoning as in Ref. [\onlinecite{CarmenChemPhys}] to evaluate the amplitude and phase: we impose the same partial secular approximation and consider low temperatures $k_B T <\hbar\Omega$. However, we include now both emission and absorption processes, i.e., we use the full dissipative transition rates as in Eq. (64) of Ref. [\onlinecite{CarmenChemPhys}].
The imaginary part of the linear susceptibility $\chi$ follows from the so obtained \textit{nonlinear} susceptibility $\chi_{NL}$ in the limit of vanishing driving amplitudes:
\begin{small}
\begin{eqnarray}\label{chilarger}
\lefteqn{\chi''(\omega_{ex})=\lim_{F\rightarrow 0}\chi''_{NL}(\omega_{ex})}\\
&=&-\frac{y_0^4J(\omega_{ex})n_1(0)^4\frac{2\Omega_1}{|\omega_{ex}|+\Omega_1}}{y_0^4J(\Omega_1)^2n_1(0)^4(2 n_{th}(\Omega_1)+1)^2+4\hbar^2(|\omega_{ex}|-\Omega_1)^2},\nonumber
\end{eqnarray}
\end{small}
where
\begin{eqnarray}
n_1(0)&=&\left[1-\frac{3}{8\hbar\Omega}\overline{\alpha} y_0^4\right].
\end{eqnarray}
For consistency also $n_1^4(0)$ has to be treated up to first order in $\overline{\alpha}$ only.\\
Moreover, we used the spectral density $J(\omega)=M\gamma\omega$ and the Bose function $n_{th}(\epsilon)=\left[\coth\left(\hbar\epsilon/(2k_B T)\right)-1\right]/2$, which determines the weight of the emission and absorption processes.\\
\section{Effective spectral density for a nonlinear system} \label{Jeff}
The effective spectral density follows from Eqs. (\ref{gl20}) and (\ref{chilarger}). It reads:
\begin{small}
\begin{eqnarray}\label{Jsimpl}
\lefteqn{
J_{\rm eff}(\omega_{ex})}\\
&&=\overline{g}^2\frac{\gamma\omega_{ex}n_1(0)^4\frac{2\Omega_1}{|\omega_{ex}|+\Omega_1}}{M\gamma^2\Omega_1^2(2 n_{th}(\Omega_1)+1)^2n_1(0)^4+4M\Omega^2(|\omega_{ex}|-\Omega_1)^2}.\nonumber
\end{eqnarray}
\end{small}
As in the case of the effective spectral density $J_{\rm eff}^{\rm HO}$, Eq. (\ref{linearspecdens}), we observe Ohmic behaviour at low frequency. In contrast to the linear case, the effective spectral density is peaked at the shifted frequency $\Omega_1$. Its shape approaches the Lorentzian of the linear effective spectral density, but with its peak at the shifted frequency, as shown in Fig. \ref{CompLorentz}.\\
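Eq. (\ref{Jsimpl}) is straightforward to evaluate numerically. The sketch below, with illustrative parameter values in units $\hbar=M=1$ (and \texttt{g2} standing for $\overline{g}^2$), exhibits both properties just mentioned: Ohmic low-frequency behaviour and a peak at the shifted frequency $\Omega_1$.

```python
import numpy as np

# Illustrative parameters in units hbar = M = 1 (assumed values).
M, hbar, Omega, gamma, g2, kBT = 1.0, 1.0, 1.0, 0.05, 0.01, 0.1
alpha_bar = 0.02
y0 = np.sqrt(hbar / (M * Omega))                         # oscillator length
Omega1 = Omega + 3.0 * alpha_bar * y0**4 / (4.0 * hbar)  # shifted frequency
n1 = 1.0 - 3.0 * alpha_bar * y0**4 / (8.0 * hbar * Omega)

def n_th(eps):
    """Bose occupation 1/(exp(hbar*eps/kBT) - 1)."""
    return 1.0 / np.expm1(hbar * eps / kBT)

def J_eff(w):
    """Effective spectral density of Eq. (Jsimpl): Ohmic at low
    frequency, peaked at the shifted frequency Omega1."""
    num = g2 * gamma * w * n1**4 * 2.0 * Omega1 / (np.abs(w) + Omega1)
    den = (M * gamma**2 * Omega1**2 * (2.0 * n_th(Omega1) + 1.0)**2 * n1**4
           + 4.0 * M * Omega**2 * (np.abs(w) - Omega1)**2)
    return num / den
```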
\begin{figure}
\caption{Comparison of the effective spectral density $J_{\rm eff}(\omega)$ of Eq. (\ref{Jsimpl}) with the linear effective spectral density $J_{\rm eff}^{\rm HO}(\omega)$ of Eq. (\ref{linearspecdens}).}
\label{CompLorentz}
\end{figure}
While in Refs. [\onlinecite{CarmenChemPhys,Peano2}] the amplitude of the oscillator showed an antiresonant to resonant transition depending on the ratio of driving amplitude $F$ and damping $\gamma$, the effective spectral density, obtained in the limit $F\to 0$, displays only resonant behaviour.\\
\section{Qubit dynamics}\label{qudyn}
In the following we derive the dynamics of a qubit coupled to this effective nonlinear bath. Therefore we identify the system Hamiltonian $\hat{H}_{\rm S}$ introduced in Eq. (\ref{eq10}) with the one of a qubit, denoted in the following as $\hat{H}_{\rm TLS}$. This is verified at low energies if the barrier height of the double well potential $U(\hat{q})$ is larger than the energy separation of the ground and first excited levels in each well. In this case the relevant Hilbert space can be restricted to the two-dimensional space spanned by the ground state vectors $\ket{L}$ and $\ket{R}$ in the left and right potential well, respectively \cite{Weiss}. We start defining the actual form of the qubit Hamiltonian and its interaction with the nonlinear oscillator and afterwards introduce its dynamical quantity of interest, the population difference $P(t)$.
\subsection{Qubit}
The Hamiltonian of the TLS (qubit), given in the localized basis $\left\{|L\rangle,|R\rangle\right\}$, is:
\begin{eqnarray}
\hat{H}_{\rm TLS}&=&-\frac{\hbar}{2}\left(\varepsilon\sigma_z+\Delta\sigma_x\right),
\end{eqnarray}
where $\sigma_{i}$, $i=x,z$, are the corresponding Pauli matrices, the energy bias $\varepsilon$ accounts for an asymmetry between the two wells and $\Delta$ is the tunneling amplitude. The bias $\varepsilon$ can be tuned for a superconducting flux qubit by application of an external flux $\Phi_{\rm ext}$ and vanishes at the so-called degeneracy point \cite{WallraffSideband}. For $\varepsilon\gg\Delta$ the states $|L\rangle$ and $|R\rangle$ are eigenstates of $\hat{H}_{\rm TLS}$, corresponding to clockwise and counterclockwise currents, respectively.\\
The interaction in Eq. (\ref{gl3}) is conveniently rewritten as:
\begin{eqnarray}\label{parameter1}
\hat{H}_{\rm TLS-NLO}&=&\overline{g}\hat{q}\hat{y}\\
&=&\frac{\overline{g}}{2\sqrt{2}}q_0\sigma_z y_0(a+a^\dagger)\nonumber\\
&:=&\hbar g\sigma_z (a+a^\dagger).\nonumber
\end{eqnarray}
Likewise we express the nonlinear oscillator Hamiltonian as:
\begin{eqnarray}\label{parameter2}
\hat{H}_{\rm NLO}&=& \hbar\Omega\left(\hat{j}+\frac{1}{2}\right)+\frac{\overline{\alpha}}{4}\hat{y}^4\\
&=& \hbar\Omega\left(\hat{j}+\frac{1}{2}\right)+\frac{\overline{\alpha}y_0^4}{16}(a+a^\dagger)^4\nonumber\\
&:=&\hbar\Omega\left(\hat{j}+\frac{1}{2}\right)+\frac{\alpha}{4}(a+a^\dagger)^4.\nonumber
\end{eqnarray}
\subsection{Population difference}
The dynamics of a qubit is usually characterized in terms of the population difference $P(t)$ between the $\ket{R}$ and $\ket{L}$ states of the qubit:
\begin{eqnarray}\label{dynamics}
P(t)&:=&\langle\sigma_z\rangle\\
&=&{\rm Tr}_{\rm TLS} \{ \sigma_z \hat{\rho}_{\rm red} (t) \}\\
&=& \bra{{\rm R}} \hat{\rho}_{{\rm red}}(t) \ket{{\rm R}} - \bra{{\rm L}} \hat{\rho}_{{\rm red}}(t) \ket{{\rm L}},\nonumber
\end{eqnarray}
where $\hat{\rho}_{{\rm red}}(t)$ is the reduced density matrix of the TLS,
\begin{equation}
\hat{\rho}_{{\rm red}}(t) ={\rm Tr}_{B} \{\hat{ \rho}_{\rm eff}(t) \}.
\end{equation}
It is found after tracing out the degrees of freedom of the effective bath from the total density matrix
$\hat{\rho}_{\rm eff} (t) = e^{- \frac{{\rm i}}{\hbar} \hat{H}_{\rm eff} t} \hat{\rho}_{\rm eff} (0) e^{\frac{{\rm i}}{\hbar} \hat{H}_{\rm eff} t}$. It follows that in the two-level approximation $q_{\rm eff}(t)=\frac{q_0}{2}P(t)$, where $q_{\rm eff}(t)$ is the expectation value of the position operator introduced in Sec. \ref{mapping}.\\
As we mapped the nonlinear system onto an effective spin-boson model, the evaluation of the population difference $P(t)$ of the TLS is possible using standard approximations developed for the spin-boson model \cite{Goorden2004,Goorden2005,Nesi}.
Assuming a factorized initial condition $\hat{\rho}_{\rm eff}(0)=\hat{\rho}_{\rm TLS}(0)\otimes\exp(-\beta\hat{H}_{\rm Beff})/Z$,
the population difference $P(t)$ fulfills the generalized master equation (GME) \cite{Weiss,GrifoniExact}
\begin{equation}\label{GME}
\dot{P}(t)=-\int_0^t dt'[K^s(t-t')P(t')+K^a(t')],\quad t>0
\end{equation}
where $K^s(t-t')$ and $K^a(t-t')$ are symmetric and antisymmetric with respect to the bias, respectively. They are represented as a series in the tunneling amplitude. Since the complicated form of the exact kernels precludes an exact solution, both analytically and numerically, we impose in the following the so-called Non-Interacting Blip Approximation (NIBA) \cite{Weiss,LeggettRevModPhysErratum}. Applying the NIBA corresponds to truncating the exact kernels at first order in $\Delta^2$ and is therefore perturbative in the tunneling amplitude of the qubit. It is justified in various regimes: it is exact at zero damping; otherwise it is an approximation which works best at zero bias and/or large damping and/or high temperature \cite{Weiss}. Within the NIBA one finds
\begin{eqnarray}\label{kernels}
K^s(t)&=& \Delta^2\exp(-S(t))\cos(R(t)),\\
K^a(t)&=& \Delta^2\exp(-S(t))\sin(R(t)),\nonumber
\end{eqnarray}
where $S(\tau)$ and $R(\tau)$ are the real and imaginary part of the bath correlation function:
\begin{eqnarray}
Q(\tau)&=&S(\tau)+iR(\tau)=\int_0^\infty d\omega \frac{G_{\rm eff}(\omega)}{\omega^2}\times\\
&&\left[\coth\left(\frac{\beta\hbar\omega}{2}\right)\left(1-\cos(\omega \tau)\right)+i\sin(\omega \tau)\right]\nonumber,
\end{eqnarray}
where $G_{\rm eff}(\omega)=q_0^2J_{\rm eff}(\omega)/(\pi\hbar)$. In particular, upon introducing the dimensionless constant \mbox{$\varsigma=g^2\gamma n_1(0)^4/(\pi\Omega^3)$}, we obtain:
\begin{eqnarray}\label{GeffNL}
G_{\rm eff}(\omega)&=&2\varsigma\Omega^2\frac{\omega \frac{2\Omega_1}{|\omega|+\Omega_1}}{\overline{\gamma}^2+(|\omega|-\Omega_1)^2},
\end{eqnarray}
where we used \mbox{$\Omega_1 n_1(0)^2=\Omega+\mathcal{O}(\alpha^2)$},
and \mbox{$\overline{\gamma}_{th}:=(2n_{th}(\Omega_1)+1)\gamma/2$}. Consequently, the dynamics of the qubit is fully determined by the knowledge of the correlation function $Q(\tau)$ and hence of the effective spectral density derived in section \ref{Jeff}.\\
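The GME, Eq. (\ref{GME}), is a Volterra integro-differential equation and can be integrated numerically once the kernels are specified. The following sketch (not from the source; a simple explicit scheme) illustrates this and is checked against the limit $S=R=0$, where $K^s(t)=\Delta^2$, $K^a=0$ and the GME reduces to the free-qubit result $P(t)=\cos(\Delta t)$:

```python
import numpy as np

def solve_gme(Ks, Ka, T=10.0, N=4000):
    """Integrate P'(t) = -int_0^t [K^s(t-t')P(t') + K^a(t')] dt', P(0)=1,
    by explicit Euler stepping with trapezoidal quadrature of the memory."""
    dt = T/N
    t = np.linspace(0.0, T, N + 1)
    P = np.ones(N + 1)
    for n in range(N):
        if n == 0:
            mem = 0.0
        else:
            tau = t[:n+1]
            f = Ks(t[n] - tau)*P[:n+1] + Ka(tau)
            w = np.ones(n + 1); w[0] = w[-1] = 0.5
            mem = dt*np.sum(w*f)          # trapezoid over the history
        P[n+1] = P[n] - dt*mem
    return t, P

# Check: undamped, unbiased NIBA kernels (S = R = 0) give K^s(t) = Delta^2,
# K^a = 0, so the GME is equivalent to P'' = -Delta^2 P, i.e. P = cos(Delta t).
Delta = 1.0
t, P = solve_gme(lambda s: Delta**2*np.ones_like(s),
                 lambda s: np.zeros_like(s))
```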
We now consider the qubit dynamics for the case of the effective nonlinear bath. Therefore we determine the actual form of the correlation functions $S(\tau)$ and $R(\tau)$. From Eq. (\ref{GeffNL}) it follows:
\begin{eqnarray}
S(\tau)&=&X \tau+L\left[\exp(-\overline{\gamma}_{th}\tau)\cos\left( \Omega_1\tau\right)-1\right]\nonumber\\
&&+Z\exp(-\overline{\gamma}_{th}\tau)\sin\left(\Omega_1 \tau\right),\\
R(\tau)&=&I-\exp(-\overline{\gamma}_{th}\tau)\left[N \sin\left(\Omega_1 \tau\right)\right.\\
&&\left.+I\cos\left(\Omega_1\tau\right)\right],\nonumber
\end{eqnarray}
where
\begin{eqnarray}
I&=&\frac{2\pi\varsigma\Omega^2}{\Omega_1^2+\overline{\gamma}_{th}^2}\\
N&=&-I\left(\frac{\Omega_1}{\overline{\gamma}_{th}}-\frac{\overline{\gamma}_{th}}{\Omega_1}\right)\nonumber\\
X&=&\frac{2}{\hbar\beta}I\nonumber\\
L&=&-\frac{I}{\overline{\gamma}_{th}} \frac{1}{\cosh\left(\beta\hbar\Omega_1\right)-\cos\left(\beta\hbar\overline{\gamma}_{th}\right)}\times\nonumber\\
&&\left[\Omega_1\right.
\left.\sinh\left(\beta\hbar\Omega_1\right)-\overline{\gamma}_{th}\sin\left(\beta\hbar\overline{\gamma}_{th}\right)
\right]\nonumber
\\
Z&=&-\frac{I}{\overline{\gamma}_{th}} \frac{1}{\cosh\left(\beta\hbar\Omega_1\right)-\cos\left(\beta\hbar\overline{\gamma}_{th}\right)}\times\nonumber\\&&
\left[\overline{\gamma}_{th}\right.
\left.\sinh\left(\beta\hbar\Omega_1\right)+\Omega_1\sin\left(\beta\hbar\overline{\gamma}_{th}\right)
\right].\nonumber
\end{eqnarray}
Here we have neglected the contribution coming from the Matsubara term, which is justified if the temperature is high enough \cite{Weiss}, i.e., $k_B T\gg\hbar\overline{\gamma}/(2\pi)$. Moreover, in the contributions of the poles lying in the vicinity of $\pm\Omega_1$ we applied the approximation $2\Omega_1/(2\Omega_1\pm i\overline{\gamma}_{th})\approx 1$. This effectively corresponds to neglecting certain $\mathcal{O}(\overline{\gamma}_{th})$ contributions.
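The closed forms above can be evaluated directly; a short numerical sketch (illustrative parameter values, units $\hbar=k_B=1$; not part of the source) confirms the built-in consistency conditions $S(0)=R(0)=0$ and the long-time linear growth $S(\tau)\to X\tau$:

```python
import numpy as np

# Illustrative parameters (assumed; units hbar = k_B = 1)
Omega, Omega1 = 1.0, 1.05
gamma_th = 0.02       # \overline{gamma}_th
beta = 2.0            # inverse temperature
varsigma = 1e-3       # dimensionless coupling constant

I = 2*np.pi*varsigma*Omega**2/(Omega1**2 + gamma_th**2)
Ncoef = -I*(Omega1/gamma_th - gamma_th/Omega1)
X = 2*I/beta
den = np.cosh(beta*Omega1) - np.cos(beta*gamma_th)
L = -(I/gamma_th)*(Omega1*np.sinh(beta*Omega1) - gamma_th*np.sin(beta*gamma_th))/den
Z = -(I/gamma_th)*(gamma_th*np.sinh(beta*Omega1) + Omega1*np.sin(beta*gamma_th))/den

def S(tau):
    return (X*tau + L*(np.exp(-gamma_th*tau)*np.cos(Omega1*tau) - 1)
            + Z*np.exp(-gamma_th*tau)*np.sin(Omega1*tau))

def R(tau):
    return I - np.exp(-gamma_th*tau)*(Ncoef*np.sin(Omega1*tau) + I*np.cos(Omega1*tau))
```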
\subsection{Analytical solution for the nonlinear peaked spectral density}
In this section we derive an analytical formula for the population difference $P(t)$ in the symmetric case ($\varepsilon=0$), requiring damping strengths $\gamma$ weak enough that a \textit{weak damping approximation} of the NIBA kernels is justified, specifically $\gamma/(2\pi\Omega)\ll1$. As this calculation is analogous to the one illustrated in detail in Ref. [\onlinecite{Nesi}], we only define the relevant quantities and give the main results.\\
Due to the convolutive form of Eq. (\ref{GME}), this integro-differential equation is solved by applying a Laplace transform. In Laplace space it reads:
\begin{eqnarray}
P(\lambda)&=&\frac{1-\frac{1}{\lambda}K^a(\lambda)}{\lambda+K^s(\lambda)},
\end{eqnarray}
where $P(\lambda)=\int_0^\infty dt\exp(-\lambda t)P(t)$ and analogously for $K^{a/s}(\lambda)$.\\
Consequently, the dynamics of $P(t)$ is determined if the poles of
\begin{eqnarray}\label{poleq}
\lambda+K^s(\lambda)&=&0
\end{eqnarray}
are found and the corresponding back transformation is applied.
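As a minimal illustration (with an assumed undamped two-frequency kernel $K^s(t)=\Delta_{0}^2+\Delta_{1}^2\cos(\Omega_1 t)$, of the type that emerges from the NIBA truncation below), the pole equation reduces to a quadratic in $\lambda^2$ and can be solved numerically:

```python
import numpy as np

# Assumed illustrative kernel K^s(t) = D0**2 + D1**2*cos(W1*t) (undamped),
# with Laplace transform K^s(l) = D0**2/l + D1**2*l/(l**2 + W1**2).
D0, D1, W1 = 1.0, 0.3, 1.0

# l + K^s(l) = 0  <=>  m**2 + (D0**2+D1**2+W1**2)*m + D0**2*W1**2 = 0, m = l**2
m = np.sort(np.roots([1.0, D0**2 + D1**2 + W1**2, D0**2*W1**2]).real)
freqs = np.sqrt(-m)    # the two undamped oscillation frequencies
```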
We transform the kernels in Eq. (\ref{kernels}) to Laplace space and expand them up to first order in the damping. This procedure is called the weak damping approximation (WDA) in Ref. [\onlinecite{Nesi}]. One obtains:
\begin{eqnarray}
K^{(s)}(\lambda)&=&\Delta^2\int_0^\infty d\tau \exp(-\lambda \tau)\exp(-S_0(\tau))\\
&&\left\{\cos(R_0(\tau))[1-S_1(\tau)]-\sin(R_0(\tau))R_1(\tau)\right\},\nonumber\\
K^{(a)}(\lambda)&=&0\nonumber,
\end{eqnarray}
where the indices $\{0,1\}$ denote the actual order in the damping. Specifically,
\begin{eqnarray}
S(\tau)&=&S_0(\tau)+S_1(\tau)+\mathcal{O}(\gamma^2),\\
R(\tau)&=&R_0(\tau)+R_1(\tau)+\mathcal{O}(\gamma^2),
\end{eqnarray}
where
\begin{eqnarray}
S_0(\tau)&=&Y[\cos(\Omega_1 \tau)-1],\\
S_1(\tau)&=&A\tau\cos(\Omega_1 \tau)+B\tau+C\sin(\Omega_1 \tau),\nonumber\\
R_0(\tau)&=&W\sin(\Omega_1 \tau),\nonumber\\
R_1(\tau)&=&V\left(1-\cos(\Omega_1 \tau)-\frac{\Omega_1\tau}{2}\sin(\Omega_1 \tau)\right)\nonumber.
\end{eqnarray}
The zeroth order coefficients in the damping are given by:
\begin{eqnarray}
Y&=&-W\frac{\sinh(\beta\hbar\Omega_1)}{\cosh(\beta\hbar\Omega_1)-1},\\
W&=&\frac{4 g^2n_1(0)^4}{\Omega_1\Omega(2n_{th}(\Omega_1)+1)}\nonumber,
\end{eqnarray}
and the first order coefficients by:
\begin{eqnarray}
A&=&-\overline{\gamma}_{th} Y,\nonumber\\
B&=&\frac{2}{\hbar\beta}V,\nonumber\\
C&=&-V\frac{\beta\hbar\Omega_1+\sinh(\beta\hbar\Omega_1)}{\cosh(\beta\hbar\Omega_1)-1},\nonumber\\
V&=&\frac{2g^2n_1(0)^4\gamma}{\Omega_1^2\Omega}\nonumber.
\end{eqnarray}
With this we are able to solve the pole equation for $P(t)$, Eq. (\ref{poleq}), as an expansion up to first order in the damping around the solutions $\lambda_p$ of the undamped pole equation, i.e., $\lambda^*=\lambda_p-\gamma \kappa_p+i\gamma\upsilon+\mathcal{O}(\gamma^2)$, since $\gamma/\Omega\ll1$. Following Nesi et al. \cite{Nesi}, the kernel is rewritten in the compact form:
\begin{eqnarray}
K^{(s)}(\lambda)&=&\sum_{n=0}^\infty \int_0^\infty d\tau \exp(-\lambda \tau)\left\{\Delta_{n,c}^2\cos(n\Omega_1 \tau)\right.\nonumber\\
&&\left.[1-S_1(\tau)]
+\Delta_{n,s}^2\sin(n\Omega_1 \tau)R_1(\tau)\right\},
\end{eqnarray}
where
\begin{small}
\begin{eqnarray}
\Delta_{n,c}&=&\Delta\exp(Y/2)\sqrt{(2-\delta_{n,0})(-i)^n J_n(u_0)\cosh\left(n\frac{\hbar\beta\Omega_1}{2}\right)},\nonumber\\
\Delta_{n,s}&=&\Delta\exp(Y/2)\sqrt{(2-\delta_{n,0})(-i)^n J_n(u_0)\sinh\left(n\frac{\hbar\beta\Omega_1}{2}\right)}\nonumber,\\
\end{eqnarray}
\end{small}
and
\begin{eqnarray}
u_0&=&i\sqrt{Y^2-W^2}\\
&=&i\left(\frac{4g^2n_1(0)^4}{(2n_{th}(\Omega_1)+1)\Omega_1\Omega}\right)\frac{1}{\sinh(\beta\hbar\Omega_1/2)}
.\nonumber
\end{eqnarray}
To obtain analytical expressions, we observe that in the considered parameter regime, where $g/\Omega\ll1$ and $\beta\hbar\Omega_1>1$, one has $|u_0|<1$. Following \cite{Nesi}, this effectively allows a truncation to the $n=0$ and $n=1$ contributions in $K^{(s)}(\lambda)$, as the argument of the Bessel functions is small, leading to the following approximations:
\begin{eqnarray}
\Delta_{0,c}^2&=&\Delta^2\exp(Y)J_0(u_0)\approx\Delta^2\exp(Y)\\
&\approx&\Delta^2\exp\left(-\frac{4 g^2n_1(0)^6}{\Omega^2}\right)\nonumber\\
\Delta_{1,c}^2&=&\Delta^2\exp(Y)\sqrt{Y^2-W^2}\cosh(\beta\hbar\Omega_1/2),\nonumber\\
&\approx&\Delta_{0,c}^2\frac{4 g^2n_1(0)^6}{\Omega^2}.\nonumber
\end{eqnarray}
Solving the undamped pole equation yields:
\begin{eqnarray}\label{polecond}
\lefteqn{\lambda_p^2\equiv\lambda_{\pm}^2=-\frac{\Delta_{0,c}^2+\Delta_{1,c}^2+\Omega_1^2}{2}}\\&&
\pm\sqrt{\left(\frac{\Delta_{0,c}^2-\Omega_1^2}{2}\right)^2+\frac{\Delta_{1,c}^2}{2}\left(\Delta_{0,c}^2+\frac{\Delta_{1,c}^2}{2}+\Omega_1^2\right)}\nonumber\\
&:=&-\Omega_{\pm}^2.\nonumber
\end{eqnarray}
The last two equations determine the oscillation frequencies. Finally, within the WDA the qubit's population difference is obtained as:
\begin{small}
\begin{eqnarray}\label{Panalyt}
P(t)&=&\exp(-\gamma\kappa_- t)\frac{\lambda_-^2+\Omega_1^2}{\lambda_-^2-\lambda_+^2}\left[\cos(\Omega_- t)-\frac{\gamma\kappa_-}{\Omega_-}\sin(\Omega_- t)\right]\nonumber\\&&
+\exp(-\gamma\kappa_+ t)\frac{\lambda_+^2+\Omega_1^2}{\lambda_+^2-\lambda_-^2}\left[\cos(\Omega_+ t)-\frac{\gamma\kappa_+}{\Omega_+}\sin(\Omega_+ t)\right]\nonumber,\\
\end{eqnarray}
\end{small}
where $\kappa_{\pm}=\kappa(\lambda_\pm)$ is derived in detail in Eq. (B.1) of Ref. [\onlinecite{Nesi}]. Note that for a consistent treatment, if terms of $\mathcal{O}(g^2)$ are kept, we implicitly require $\gamma\ll g$, as only the first order in the damping is taken into account.\\\\
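Eq. (\ref{Panalyt}) is straightforward to evaluate; the sketch below (with placeholder values for $\lambda_\pm^2$ and $\kappa_\pm$, which are not taken from the source) also makes explicit that the prefactors guarantee $P(0)=1$:

```python
import numpy as np

# Illustrative values (assumed, units hbar = 1); lambda_pm^2 and kappa_pm are
# placeholders standing in for the solutions of the pole equation.
Om1 = 1.0
lam_m2, lam_p2 = -1.34836, -0.74164       # lambda_-^2, lambda_+^2 (assumed)
Om_m, Om_p = np.sqrt(-lam_m2), np.sqrt(-lam_p2)
gamma, kap_m, kap_p = 0.01, 0.5, 0.5      # weak damping; kappa_pm assumed O(1)

def P(t):
    A = (lam_m2 + Om1**2)/(lam_m2 - lam_p2)   # the two weights sum to 1
    B = (lam_p2 + Om1**2)/(lam_p2 - lam_m2)
    return (np.exp(-gamma*kap_m*t)*A*(np.cos(Om_m*t) - gamma*kap_m/Om_m*np.sin(Om_m*t))
          + np.exp(-gamma*kap_p*t)*B*(np.cos(Om_p*t) - gamma*kap_p/Om_p*np.sin(Om_p*t)))
```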
We consider two possible resonance cases: First we choose the resonance condition $\Omega_1=\Delta_{0,c}$, such that the oscillation frequencies are, to lowest order in $\Delta_{1,c}$,
\begin{eqnarray}\label{freq}
\Omega_{\pm}&=&\Omega_1\mp\frac{\Delta_{1,c}}{2}\\
&\approx&\Omega+\frac{3}{\hbar}\alpha \mp g(1-\frac{3}{2\hbar}\alpha).\nonumber
\end{eqnarray}
As a consequence we obtain the so-called Bloch-Siegert shift:
\begin{eqnarray}
\Omega_--\Omega_+
&=&2 g(1-3\alpha/2\hbar),
\end{eqnarray}
which is also obtained in Ref. [\onlinecite{CarmenPRA}].
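A small numerical check (not in the source; illustrative values) confirms that, at the resonance $\Omega_1=\Delta_{0,c}$, the exact roots of Eq. (\ref{polecond}) reproduce $\Omega_\pm=\Omega_1\mp\Delta_{1,c}/2$ up to corrections of order $\Delta_{1,c}^2/\Omega_1$:

```python
import numpy as np

Om1 = 1.0
D1 = 0.1          # Delta_{1,c}, assumed small
D0 = Om1          # resonance condition Omega_1 = Delta_{0,c}

disc = np.sqrt(((D0**2 - Om1**2)/2)**2 + (D1**2/2)*(D0**2 + D1**2/2 + Om1**2))
lam_p2 = -(D0**2 + D1**2 + Om1**2)/2 + disc
lam_m2 = -(D0**2 + D1**2 + Om1**2)/2 - disc
Om_p, Om_m = np.sqrt(-lam_p2), np.sqrt(-lam_m2)   # Omega_+ and Omega_-
```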
For comparison with \cite{CarmenPRA} we choose as second condition $\Delta=\Omega$, such that:
\begin{eqnarray}
\Omega_-&=&\Omega+\frac{3 \alpha}{2\hbar}-g+\frac{3 \alpha g}{2 \hbar\Omega },\\
\Omega_+&=&\Omega+\frac{3 \alpha}{2\hbar}+g-\frac{3 \alpha g}{2 \hbar\Omega },
\end{eqnarray}
which agrees with the results of \cite{CarmenPRA} up to first order in the nonlinearity and/or in the coupling. Comparing with \cite{CarmenPRA}, we do not observe exact agreement for the prefactors of the mixed terms of order $\mathcal{O}(\alpha g)$.\\
We show in Figs. \ref{Phighg} and \ref{Fhighg} a comparison of the analytic WDA formula, Eq. (\ref{Panalyt}), the numerical solution of the NIBA, Eq. (\ref{GME}), denoted by NIBA, and the results obtained in Ref. [\onlinecite{CarmenPRA}] from a numerical solution of the Bloch-Redfield equations, referred to as the TLS-NLO approach. We observe that the dynamics is dominated by two frequencies and is well reproduced within all three approaches. In the Fourier spectrum we observe tiny deviations of the resonance frequencies. There are two different reasons for these deviations: First, the coupling strength $g$ is large enough that higher orders in the coupling yield a finite contribution in the effective bath description. Second, Eq. (\ref{polecond}) has to be expanded in both the nonlinearity and the coupling $g$, which is not possible in the numerical implementation. However, as derived above when expanding the analytic formula, we find the same results up to lowest order in the coupling $g$ and in the nonlinearity $\alpha$. We emphasize that this small discrepancy is also seen for the corresponding linear system in the work of Hausinger et al. \cite{Hannes} when comparing the NIBA results of Ref. [\onlinecite{Nesi}] with those of the Bloch-Redfield procedure.
\begin{figure}
\caption{Comparison of the behaviour of $P(t)$ as obtained from the numerical solution of the Bloch-Redfield equations based on the TLS-NLO approach of Ref. [\onlinecite{CarmenPRA}], the numerical solution of the NIBA, Eq. (\ref{GME}), and the analytic WDA formula, Eq. (\ref{Panalyt}).}
\label{Phighg}
\end{figure}
\begin{figure}
\caption{Corresponding Fourier transform of $P(t)$ as shown in Fig. \ref{Phighg}.}
\label{Fhighg}
\end{figure}
To clarify the above statements, we also consider the case of weak coupling, i.e., $g\ll\gamma,\Omega_1$. In the regime where the coupling is much weaker than the nonlinearity ($\hbar g\ll\alpha$), Eq. (32) of Ref. [\onlinecite{CarmenPRA}] has to be expanded differently. Note that in this regime the results of Eq. (41) in Ref. [\onlinecite{CarmenPRA}] are not applicable. A proper expansion in this regime allows one to neglect terms of $\mathcal{O}(g^2)$ or higher if $\mathcal{O}(\alpha^2)$ is neglected. The transition frequencies, when choosing $\Omega=\Delta$, are then determined by Eq. (32) of Ref. [\onlinecite{CarmenPRA}]:
\begin{eqnarray}\label{rc1}
\Omega_{\pm}&=&\Omega+\frac{3}{2\hbar}\alpha\mp\frac{1}{2}\sqrt{9\alpha^2/\hbar^2}\\
&=&\left\{\begin{array}{c}
\Omega,\\
\Omega_1=\Omega+3\alpha/\hbar.
\end{array}
\right.\nonumber
\end{eqnarray}
Applying also an expansion of Eq. (\ref{polecond}) consistent with this parameter regime, we obtain:
\begin{eqnarray}
-\Omega_{\pm}^2&=&\frac{1}{2}\left(-\Omega^2-\Omega_1^2\pm\Omega^2\mp\Omega_1^2\right),
\end{eqnarray}
such that:
\begin{eqnarray}\label{rc2}
\Omega_{+}&=&\Omega,\\
\Omega_{-}&=&\Omega_1=\Omega+3\alpha/\hbar\nonumber.
\end{eqnarray}
The transition frequencies in Eqs. (\ref{rc1}) and (\ref{rc2}) coincide, and in Figs. \ref{Plowg} and \ref{Flowg} no deviation is observed when comparing the three different approaches.
\begin{figure}
\caption{As in Fig. \ref{Phighg}, but for the weak-coupling case $g\ll\gamma,\Omega_1$.}
\label{Plowg}
\end{figure}
\begin{figure}
\caption{Corresponding Fourier transform of $P(t)$ shown in Fig. \ref{Plowg}.}
\label{Flowg}
\end{figure}
\subsection{Influence of the nonlinearity on the qubit dynamics -- a comparison of the NIBA for linear and nonlinear effective spectral densities}\label{Res}
In this last section we address the effects of the nonlinearity on the qubit dynamics. The comparison of the linear versus the nonlinear case is done at the level of the numerical solution of the NIBA equation and shown in Figs. \ref{CompNLLP} and \ref{CompNLLF}. As already obtained in Ref. [\onlinecite{CarmenPRA}], we observe that the transition frequencies are shifted to higher values compared to the linear case. As a consequence, the amplitudes associated with the transitions are also modified. Moreover, we observe a decrease of the vacuum Rabi splitting compared to the linear case. Consequently, the effect of the nonlinearity of the read-out device can be observed in the qubit dynamics.
\begin{figure}
\caption{$P(t)$ within the NIBA when using the linear and the nonlinear effective spectral densities, Eqs. (\ref{linearspecdens}) and (\ref{GeffNL}).}
\label{CompNLLP}
\end{figure}
\begin{figure}
\caption{Corresponding Fourier transform of $P(t)$ shown in Fig. \ref{CompNLLP}.}
\label{CompNLLF}
\end{figure}
\section{Conclusions}\label{Conclusions}
In this work we determined the dynamics of a qubit coupled via a nonlinear oscillator (NLO) to an Ohmic bath within an effective bath description. We investigated an approximate mapping procedure based on linear response theory, which is applicable in the case of weak nonlinearities and small to moderate qubit-NLO coupling. We determined the effective spectral density in terms of the qubit-oscillator coupling and the linear susceptibility of a nonlinear oscillator. The susceptibility was calculated, for practical purposes, from the periodically driven counterpart of the original nonlinear oscillator. The spectral density so obtained shows resonant behaviour, specifically an almost Lorentzian form for the considered parameter regime, and is peaked at a shifted frequency, namely at the one-photon resonance between the ground state and the first excited state of the nonlinear oscillator. Moreover, this effective spectral density acquires a temperature dependence and behaves Ohmically at low frequencies. Based on the effective spectral density, the qubit dynamics was investigated within the NIBA. In addition, an analytical formula for the qubit dynamics was provided, which describes the dynamics at low damping very well. These results were compared to the numerical solution of Ref. [\onlinecite{CarmenPRA}], where the Bloch-Redfield equations for the density matrix of the coupled qubit nonlinear oscillator system (TLS-NLO) are solved. We find an overall agreement of the two approaches and show that deviations are of order $\mathcal{O}(\alpha g)$, where $\alpha$ is the nonlinearity and $g$ the coupling strength. This effect was analyzed for two exemplary coupling strengths $g$. We emphasize that parameters like temperature and damping, and especially the strength of the coupling $g$ and the nonlinearity $\alpha$, determine the appropriate form of the expansions in the different parameter regimes.
Due to the tunability of the parameters, various qubit dynamics are possible. In agreement with Ref. [\onlinecite{CarmenPRA}], we observed the following effects due to the nonlinearity: First, in the regime $g\gg\alpha/\hbar$, the transition frequencies of the two dominating peaks are shifted to larger values compared to the linear case. As a consequence, the amplitudes of the coherent oscillations of the population difference $P(t)$ are also modified. Moreover, the Bloch-Siegert shift is decreased due to the nonlinearity.\\
We conclude that, as in the case of the corresponding linear system \cite{Nesi,Hannes}, the effective bath description provides an alternative approach to investigating the complex dynamics of the dissipative qubit-NLO system.
\section{Acknowledgment}
We acknowledge support by the DFG under the funding program SFB 631.
\end{document}
\begin{document}
\title[]{Identities for hyperelliptic $\wp$-functions of genus one, two and three in covariant form.}
\author{Chris Athorne}
\address{Maths. Glasgow}
\email{[email protected]}
\begin{abstract}
We give a covariant treatment of the quadratic differential
identities satisfied by the $\wp$-functions on the Jacobian of
smooth hyperelliptic curves of genus $\leq 3$.
\end{abstract}
\maketitle
\section{Introduction}
A classical problem in the theory of a planar $(n,s)$ algebraic
curve is a description of the differential equations satisfied by
meromorphic, multiply periodic functions defined on its Jacobian
variety. In the genus $g$ hyperelliptic case ($n=2,\,s=2g+2$) the
field of such functions is entirely described in terms of certain
$\wp_{ij}$ functions which generalize the Weierstrass $\wp$-function
on the elliptic curve, the genus one case.
The derivation of these identities has been a major concern over the
last ten to fifteen years and many results have been published: see
\cite{BEL1997,EEL2000,EEP2003} for seminal literature.
The aim of this paper is to promote a new methodology which
considerably simplifies the derivation and presentation of these
identities by utilizing elementary representation theory. The
fundamental observation is that the underlying algebraic curves
belong to generic families permuted under an ${\mathfrak sl}_2$
action. This can be interpreted \cite{AEE2003,AEE2004} as a
covariance property that translates into covariance of the
$\wp$-function identities. This means that each polynomial identity
between derivatives of the $\wp$-function belongs to a finite
dimensional representation of ${\mathfrak sl}_2$, the knowledge of
which depends only upon a highest weight element. It is only
necessary to find these highest weight identities to generate the
other identities in the representation.
However, a requirement of this approach is that we develop the
theory for the generic member of the family of curves. This is in
contrast to former treatments where a simpler, normal form is
exploited by moving a branch point to infinity, i.e. removing the
highest degree term.
The only case where the covariant equations are written down is for
genus two hyperelliptic curves by Baker \cite{B1897}. He achieves
this by establishing the equations for the curve in normal form and
then undoing the ``normalizing'' transformation's effect on the
identities. Even so he finds it necessary to introduce a ``fudge
factor'' to restore full covariance.
This ``fudge factor'' points to another problem. Not only must the
curve be in general position but the fundamental (Kleinian)
definition of the $\wp$-function \cite{B1897,B1903,B1907,BEL1997}
must itself be rendered covariant. This problem reasserts itself in
the next highest genus and the Baker equations for the genus three
curve \cite{B1903} are nowhere written down in covariant form.
The same issues occur in purely algebraic treatments, that of
Cassels and Flynn for instance \cite{CF1996}. The formulation of
their approach, important for curves over general fields, can also
be rendered covariant and will be discussed in another publication.
In this paper we work entirely over $\mathbb C$.
In this respect a note on the approach of the papers
\cite{AEE2003,AEE2004} by the present author and collaborators is in
order. What was attempted in those papers was a radically different
approach to the analytic theory based on a very simple definition of
the $\wp$-function, quite different to Klein's but with some
philosophical proximity to that of \cite{CF1996}. However, whilst
this was an effective approach to genus two, attempts so far to
extend it to higher genus have foundered on finding the
corresponding, simple definition of the $\wp$-function.
The programme of the current paper is, therefore, firstly to define
the $\wp$ function in a covariant way and secondly to derive the
identities it satisfies by combining the traditional technique of
expansion about a chosen point with the Lie algebraic representation
theory. We do this for genera one, two and three to recover known
sets of differential equation or their equivalents. The emphasis is
placed on the methodology.
The results so obtained are rather beautiful generalizations of the
formulae found in \cite{B1903,B1907,BEL1997}. Most of all we obtain
a covariant bordered determinantal form of the set of quadratic
identities in the $\wp_{ijk}$ for genus two, familiar from Baker's
work \cite{B1907}, and a new generalization of this formula to the
genus three case involving a doubly bordered determinant. These
quadratic relations should presumably be regarded as the most
fundamental differential identities and it is a positive feature of
the covariant machinery that it produces them in a systematic manner
at the simplest level.
\section{Lie algebraic operations}
Curves of the form
\begin{equation}
v(x,y;a_0,\ldots,a_{2g+2})=y^2-\sum_{i=0}^{2g+2}\binom{2g+2}{i}a_ix^i=0
\end{equation}
are generically hyperelliptic and of genus $g$: that is, unless some
special relations obtain between the coefficients.
The family of such curves is permuted under transformations given by
\begin{eqnarray}
x\mapsto X=\frac{\alpha x+\beta}{(\gamma x+\delta)},\\
y\mapsto Y=\frac{y}{(\gamma x +\delta)^{g+1}},
\end{eqnarray}
where $$\alpha\delta-\beta\gamma=1,$$ mapping the above curve into
\begin{equation}
V(X,Y;A_0,\ldots,A_{2g+2})=Y^2-\sum_{i=0}^{2g+2}\binom{2g+2}{i}A_iX^i=0
\end{equation}
the $A_i$ being functions of the $a_i$ and the parameters
$\alpha,\beta,\gamma$ and $\delta$.
This can be restated as infinitesimal \emph{covariance} conditions,
\begin{eqnarray}
{\bf e}v(x,y;a_0,\ldots,a_{2g+2})=0\\
{\bf
f}v(x,y;a_0,\ldots,a_{2g+2})+2(g+1)xv(x,y;a_0,\ldots,a_{2g+2})=0
\end{eqnarray}
where the generators $\bf e$ and $\bf f$ are given by
\begin{eqnarray}
{\bf e}=\partial_x-\sum_{i=0}^{2g+2}(2g+2-i)a_{i+1}\partial_{a_i}\\
{\bf
f}=-x^2\partial_x-(g+1)xy\partial_y-\sum_{i=0}^{2g+2}ia_{i-1}\partial_{a_i}\\
{\bf
h}=-2x\partial_x-(g+1)y\partial_y-\sum_{i=0}^{2g+2}(2g+2-2i)a_{i}\partial_{a_i}.
\end{eqnarray}
These generators satisfy the ${\mathfrak sl}_2$ commutation
relations,
\begin{equation}
[{\bf h},{\bf e}]=2{\bf e},\quad [{\bf h},{\bf f}]=-2{\bf f},\quad
[{\bf e},{\bf f}]={\bf h}.
\end{equation}
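These statements are mechanical to verify; the following sympy sketch (not part of the text; genus $g=2$ chosen for illustration) implements $\bf e$ and $\bf f$ as derivations, defines ${\bf h}:=[{\bf e},{\bf f}]$ directly as the commutator, and checks the commutation relations together with the covariance conditions on $v$:

```python
import sympy as sp

g = 2                                  # genus (illustrative)
x, y = sp.symbols('x y')
a = sp.symbols(f'a0:{2*g+3}')          # a_0 ... a_{2g+2}

def e_op(F):
    return sp.expand(sp.diff(F, x)
                     - sum((2*g+2-i)*a[i+1]*sp.diff(F, a[i]) for i in range(2*g+2)))

def f_op(F):
    return sp.expand(-x**2*sp.diff(F, x) - (g+1)*x*y*sp.diff(F, y)
                     - sum(i*a[i-1]*sp.diff(F, a[i]) for i in range(1, 2*g+3)))

def h_op(F):                           # h := [e, f]
    return sp.expand(e_op(f_op(F)) - f_op(e_op(F)))

# the generic hyperelliptic curve
v = y**2 - sum(sp.binomial(2*g+2, i)*a[i]*x**i for i in range(2*g+3))
```

Since the generators are derivations, checking the relations $[{\bf h},{\bf e}]=2{\bf e}$ and $[{\bf h},{\bf f}]=-2{\bf f}$ on the coordinates $x$, $y$, $a_i$ suffices.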
The coefficients $a_0,a_1,\ldots,a_{2g+2}$ are a basis for a $2g+3$
dimensional representation.
The space of holomorphic differentials on the curve is spanned by
the set \[\{\frac{x^{i-1}dx}{y}|i=1,\ldots,g\}\] and the symmetric
sums of each of these differentials taken over $g$ copies of the
curve,
\begin{equation}
du_i=\sum_{j=1}^g\frac{x_j^{i-1}dx_j}{y_j}
\end{equation}
are a basis for holomorphic one-forms on the Jacobian variety of the
curve.
One checks the following action of ${\mathfrak sl_2}$:
\begin{eqnarray}
{\bf e}du_i&=&(i-1)du_{i-1}\\
{\bf f}du_i&=&(g-i)du_{i+1}
\end{eqnarray}
and it then follows that
\begin{eqnarray}\label{ef}
{\bf e}\partial_{u_i}&=&-i\partial_{u_{i+1}}\\
{\bf f}\partial_{u_i}&=&-(g-i+1)\partial_{u_{i-1}}
\end{eqnarray}
\section{Covariant Klein relations}
Our starting point will be the Kleinian definition of the doubly
indexed $\wp$ functions: $\wp_{ij}=\wp_{ji}$ \cite{BEL1997}. The
indices are to be thought of as derivatives with respect to the
variables $u_i$. There are thus integrability conditions of the
form,
\begin{equation}
\wp_{ij,k}=\wp_{ik,j}=\wp_{kj,i}\quad\forall i,j,k.
\end{equation}
For the moment we think of these objects purely as indexed symbols
satisfying algebraic rules of differentiation and a set of
identities to be specified shortly. However they are not
traditionally defined in a covariant manner, that is in a way that
respects the further relations following from (\ref{ef}), namely,
\begin{eqnarray}
{\bf e}\wp_{ij}&=&-i\wp_{i+1\,j}-j\wp_{i\,j+1}\\
{\bf f}\wp_{ij}&=&-(g-i+1)\wp_{i-1\,j}-(g-j+1)\wp_{i\,j-1}
\end{eqnarray}
In order to proceed we need to adjust the fundamental definition by
adding correction terms without destroying the fundamental
singularity properties of the $\wp_{ij}$.
How to do this is best seen by example and we explain it now for the
genus two case.
The classical definition in genus two assumes a normal form with
branch point at infinity, $a_6=0, a_5=\frac23$, and is:
\begin{equation}
\wp_{11}+(x_i+x)\wp_{12}+xx_i\wp_{22}=\frac{F(x,x_i)-yy_i}{4(x-x_i)^2}
\end{equation}
where $i=1,2$ and $\wp$ is a function of the argument $\int^xd{\bf
u}+\int^{x_1}d{\bf u}+\int^{x_2}d{\bf u}$, ${\bf u}=(u_1,u_2)$. The
function $F(x,x_i)$ is the classical polar form
\begin{equation}
F(x,x_i)=2(x+x_i)x^2x^2_i+15a_4x^2x_i^2+10a_3(x+x_i)xx_i+15a_2xx_i+3a_1(x+x_i)+a_0
\end{equation}
For the generic case one must clearly reinstate the coefficients
$a_6$ and $a_5$ but this alone is not enough to render the equation
covariant which, in this case means \emph{invariant}, it being a
single relation.
The left hand side becomes invariant on dividing by $x-x_i$ since
both $$(\wp_{11},-2\wp_{12},\wp_{22})$$ and
$${\bf X^3}=\left(\frac{2xx_i}{x-x_i},-\frac{x+x_i}{x-x_i},\frac{2}{x-x_i}\right)$$ are
three dimensional representations. Note that the $x_i$ here can be
either choice from $x_1$ and $x_2.$
On the right hand side the ratio $\frac{yy_i}{(x-x_i)^3}$ is now
also seen to be invariant but $\frac{F(x,x_i)}{(x-x_i)^3}$ is not.
Note however that there is a seven dimensional representation,
\begin{eqnarray}
{\bf X^7}=\left(\frac{6}{(x-x_i)^3},-\frac{3(x+x_i)}{(x-x_i)^3},\frac{3(x^2+3xx_i+x_i^2)}{(x-x_i)^3},-\frac{(x^3+9x^2x_i+9x_i^2x+x_i^3)}{(x-x_i)^3}\right.,\nonumber\\
\left.\frac{3(x^2+3xx_i+x_i^2)xx_i}{(x-x_i)^3},-\frac{3(x+x_i)x^2x_i^2}{(x-x_i)^3},\frac{6x^3x_i^3}{(x-x_i)^3}\right)
\end{eqnarray}
which, when taken with the coefficients
$a_0,-a_1,a_2,-a_3,a_4,-a_5,a_6$ gives an invariant. This
modification does not alter the fundamental requirement that in the
limit $x\rightarrow x_i,y\rightarrow y_i$ the $\wp_{ij}$ are regular
but have poles of order 2 when $x\rightarrow x_i,y\rightarrow -y_i$
\cite{CF1996}. Hence our modified definition is,
\begin{equation}
\wp_{11}{\bf X^3}_2+\wp_{12}{\bf X^3}_1+\wp_{22}{\bf
X^3}_0=\frac{\tilde F(x,x_i)-yy_i}{2(x-x_i)^3}
\end{equation}
where
\begin{equation}
\frac{\tilde F(x,x_i)}{(x-x_i)^3}=a_0{\bf X^7}_6+a_1{\bf
X^7}_5+a_2{\bf X^7}_4+a_3{\bf X^7}_3+a_4{\bf X^7}_2+a_5{\bf
X^7}_1+a_6{\bf X^7}_0
\end{equation}
is a covariant ``polar'' form.
The corresponding generalizations for other genera are
straightforward and depend on constructing $2g+3$ dimensional
representations, ${\bf X}^{2g+3}$, by taking highest weight elements
$(x-x_i)^{-(g+1)}$ for $\bf e$ and applying $\bf f$ successively,
with appropriate normalizations.
Thus, for instance, for genus one we write,
\begin{equation}
\wp_{11}=\frac{\tilde F(x,x_i)-yy_i}{2(x-x_i)^2}
\end{equation}
where, using $\bf X^5$,
\begin{equation}
\tilde
F(x,x_i)=a_0+2a_1(x+x_i)+a_2(x^2+4xx_i+x_i^2)+2a_3(x+x_i)xx_i+a_4x^2x_i^2.
\end{equation}
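The invariance that fixes these polar forms can be checked symbolically. In the genus one case the sketch below (not in the source; $\bf e$ and $\bf f$ act diagonally on the two copies of the curve, and the quadratic coefficients of $\tilde F$ are the ones forced by invariance) verifies that $\tilde F(x,x_1)/(x-x_1)^2$ is annihilated by both generators and that the diagonal $\tilde F(x,x)$ reproduces $a(x)$:

```python
import sympy as sp

x, x1 = sp.symbols('x x1')
a = sp.symbols('a0:5')     # a_0 ... a_4, genus one

# Covariant polar form; its diagonal reproduces a(x) = a0+4a1 x+6a2 x^2+4a3 x^3+a4 x^4
F = (a[0] + 2*a[1]*(x + x1) + a[2]*(x**2 + 4*x*x1 + x1**2)
     + 2*a[3]*(x + x1)*x*x1 + a[4]*x**2*x1**2)
Phi = F/(x - x1)**2        # candidate invariant

def e_op(G):               # e acting on both copies of the curve and on the a_i
    return (sp.diff(G, x) + sp.diff(G, x1)
            - sum((4 - i)*a[i+1]*sp.diff(G, a[i]) for i in range(4)))

def f_op(G):
    return (-x**2*sp.diff(G, x) - x1**2*sp.diff(G, x1)
            - sum(i*a[i-1]*sp.diff(G, a[i]) for i in range(1, 5)))
```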
The covariant polar form stands in a geometric relation to the
hyperelliptic curve $y^2-\sum_{i=0}^{2g+2}\binom{2g+2}{i}a_ix^i$ of
degree $g+2$ not shared by the traditional polar form; namely, the
curve $yy_i-\tilde F(x,x_i)=0$ of degree $g+1$ is tangent to order
$g+1$ to the hyperelliptic curve at the common point $(x_i,y_i)$.
For the calculations which follow we put the defining relations into
the convenient form
\begin{equation}
yy_i-{\bf x}^th{\bf x}_i=0
\end{equation}
where $h$ is a $(g+2)\times(g+2)$ matrix whose entries depend only
upon the $a_i$ and the $\wp_{ij}$. The $\bf x$'s are $(g+2)$-vectors
of monomials, e.g.
\begin{equation}
{\bf x}^t=(1,x,x^2,\ldots,x^g,x^{g+1}).
\end{equation}
\section{Differential relations in genus one}
Here we give a new, covariant treatment of the most classical case
of all: the Weierstrass $\wp$-function.
Covariance of the quartic curve
\begin{equation}
y^2=a_0+4a_1x+6a_2x^2+4a_3x^3+a_4x^4
\end{equation}
under ${\mathfrak sl}_2({\mathbb C})$ requires
\begin{eqnarray}
{\bf e}(x)&=&1\nonumber\\
{\bf e}(y)&=&0\nonumber\\
{\bf f}(x)&=&-x^2\nonumber\\
{\bf f}(y)&=&-2xy\nonumber\\
{\bf e}(a_i)&=&-(4-i)a_{i+1}\nonumber\\
{\bf f}(a_i)&=&-ia_{i-1}\nonumber
\end{eqnarray}
There is only one holomorphic differential on the curve:
$du_1=\frac{dx}{y}$ and it is clear that
\begin{eqnarray}
{\bf e}(du_1)&=&0\nonumber\\
{\bf f}(du_1)&=&0\nonumber
\end{eqnarray}
so that $\wp_{11}$, $\wp_{111}$, etc. are all invariant.
Even for this, the simplest case, it is necessary to make the Klein
definition covariant before we start by using ${\bf X}^5$ as at the
end of the last section. We apply the fundamental definition of
Klein \cite{BEL1997}, written in the form
\begin{equation}\label{K1}
yy_1-{\bf x}^th{\bf x}_1=0
\end{equation}
where ${\bf x}=(1,x,x^2)$, ${\bf x_1}=(1,x_1,x_1^2)$ but where $h$
is now the covariantly modified, three by three matrix
\begin{equation}
h=\left[
\begin{array}{ccc}
a_0&2a_1&a_2-2\wp_{11}\\
2a_1&4a_2+4\wp_{11}&2a_3\\
a_2-2\wp_{11}&2a_3&a_4
\end{array}
\right]
\end{equation}
Note that in terms of entries of $h$,
\begin{eqnarray}
y^2&=&a(x)\nonumber\\
&=&h_{33}x^4+(h_{32}+h_{23})x^3+(h_{31}+h_{22}+h_{13})x^2\nonumber\\
&&+(h_{12}+h_{21})x+h_{11}\nonumber
\end{eqnarray}
each coefficient being independent of the $\wp_{11}$ symbol.
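This bookkeeping is easily verified symbolically; a short sympy check (not in the source) expands ${\bf x}^th{\bf x}$ and confirms that it reproduces $a(x)$ with the $\wp_{11}$ contributions cancelling:

```python
import sympy as sp

x, p11 = sp.symbols('x p11')     # p11 stands for the symbol \wp_{11}
a0, a1, a2, a3, a4 = sp.symbols('a0:5')

h = sp.Matrix([[a0,          2*a1,         a2 - 2*p11],
               [2*a1,        4*a2 + 4*p11, 2*a3],
               [a2 - 2*p11,  2*a3,         a4]])
X = sp.Matrix([1, x, x**2])
quartic = sp.expand((X.T*h*X)[0])   # should equal a(x), independent of p11
```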
Take the residue of (\ref{K1}) at $x=\infty$, $y=\sqrt{h_{33}}(x^2+\frac{h_{32}}{h_{33}}x+\ldots)$:
\begin{equation}\label{K10}
\sqrt{h_{33}}y_1-h_{31}-h_{32}x_1-h_{33}x_1^2=0
\end{equation}
The two index symbol $\wp_{11}$ is \cite{BEL1997} a function of $x$
and $x_1$ in the form
\begin{equation}
\wp_{11}=\wp_{11}\left(\int^xd{\bf u}+\int^{x_1}d{\bf u}\right)
\end{equation}
Hence the effect of the operator $y\partial_x=\partial_{u_1}$ etc.
on $\wp_{11}$ is
\begin{eqnarray}
y\partial_x\wp_{11}&=&\wp_{111}\\
y_1\partial_{x_1}\wp_{11}&=&\wp_{111}
\end{eqnarray}
Now apply $y\partial_x$ to the Klein relation (\ref{K1}):
\begin{equation}
yy'y_1-y{\bf x}'^th{\bf x}_1={\bf x}^t(\partial_{u_1}h){\bf x}_1
\end{equation}
Use of the defining relation allows us to replace $yy_1$ to give:
\begin{equation}
(y'{\bf x}^t-y{\bf x'}^t)h{\bf x}_1={\bf x}^t(\partial_{u_1}h){\bf
x}_1
\end{equation}
The highest order term using $y={\sqrt
h_{33}}(x^2+\frac{h_{32}}{h_{33}}x+\ldots)$, yields
\begin{eqnarray}
h_{33}(h{\bf x}_1)_2-h_{23}(h{\bf
x}_1)_3=\sqrt{h_{33}}(\partial_{u_1}h{\bf x}_1)_3
\end{eqnarray}
where we have used subscripts $(\cdot)_2$ and $(\cdot)_3$ to denote
the second and third components of a vector quantity.
Explicitly we have the identity:
\begin{eqnarray}
\left|\begin{array}{cc}
h_{12}&h_{13}\\
h_{23}&h_{33}
\end{array}\right|
+\left|\begin{array}{cc}
h_{22}&h_{23}\\
h_{32}&h_{33}
\end{array}\right|x_1
+2{\sqrt h_{33}}\wp_{111}=0&&\nonumber
\end{eqnarray}
The same identity arises if we differentiate the Klein relation with
respect to $x_1$.
So far then $y_1$ is given by a quadratic in $x_1$, linear in
$\wp_{11}$, and $\wp_{111}$ by a relation linear in $x_1$ and
$\wp_{11}$. One further relation is afforded by the fact that
$(x_1,y_1)$ lies on the curve. Using the expression (\ref{K10}) for
$y_1$ this becomes
\begin{equation}
\left|\begin{array}{cc}h_{22}&h_{23}\\h_{32}&h_{33}\end{array}\right|x_1^2+
2\left|\begin{array}{cc}h_{12}&h_{13}\\h_{32}&h_{33}\end{array}\right|x_1+
\left|\begin{array}{cc}h_{11}&h_{13}\\h_{31}&h_{33}\end{array}\right|=0.
\end{equation}
We now eliminate $x_1$ between this quadratic relation and the
preceding linear expression for $\wp_{111}$. We obtain
\begin{equation}
\wp_{111}^2=-\frac14\left|\begin{array}{ccc}h_{11}&h_{12}&h_{13}\\h_{21}&h_{22}&h_{23}\\h_{31}&h_{32}&h_{33}\\\end{array}\right|
\end{equation}
Identifying as customary the classical $\wp$-function with
$\wp_{11}$ and its derivative, $\wp'$, with $\wp_{111}$ we have,
expanding the determinant, the equation for the $\wp$-function for
the \emph{generic} curve of genus one:
\begin{eqnarray}
\wp'^2-4\wp^3&=&-(a_0a_4-4a_1a_3+3a_2^2)\wp\\
&&-a_0a_2a_4+a_0a_3^2-2a_1a_2a_3+a_2^3+a_1^2a_4\nonumber
\end{eqnarray}
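As a check on the computation, one can expand $-\frac14\det h$ directly and confirm that it reproduces the right-hand side above; a sympy sketch (symbol names mine, with $p$ standing for $\wp_{11}$):

```python
import sympy as sp

a0, a1, a2, a3, a4, p = sp.symbols('a0 a1 a2 a3 a4 p')  # p = wp_11

h = sp.Matrix([
    [a0,       2*a1,        a2 - 2*p],
    [2*a1,     4*a2 + 4*p,  2*a3],
    [a2 - 2*p, 2*a3,        a4],
])

# wp_111^2 = -det(h)/4 should expand to the generic genus-one equation
lhs = sp.expand(-h.det() / 4)
rhs = sp.expand(4*p**3
                - (a0*a4 - 4*a1*a3 + 3*a2**2)*p
                + (-a0*a2*a4 + a0*a3**2 - 2*a1*a2*a3 + a2**3 + a1**2*a4))
assert sp.expand(lhs - rhs) == 0
```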
\subsection{Remarks}
\subsubsection{}
The coefficients $a_0a_4-4a_1a_3+3a_2^2$ and
$-a_0a_2a_4+a_0a_3^2-2a_1a_2a_3+a_2^3+a_1^2a_4$ are readily verified
to be invariants under the ${\mathfrak sl}_2({\mathbb C})$ action.
This is only to be expected from the classical approach. They sit
inside the two-fold and three-fold tensor products of the five
dimensional representation spanned by $\{a_0,a_1,a_2,a_3,a_4\}$.
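The verification can be carried out mechanically, applying $\bf e$ and $\bf f$ as derivations to both coefficients; a sympy sketch (notation mine):

```python
import sympy as sp

a = sp.symbols('a0:5')
e_im = {**{a[i]: -(4 - i)*a[i + 1] for i in range(4)}, a[4]: 0}
f_im = {**{a[i]: -i*a[i - 1] for i in range(1, 5)}, a[0]: 0}

def derive(expr, images):
    # Leibniz extension of the generator action to polynomials in the a_i
    return sum(sp.diff(expr, g) * im for g, im in images.items())

g2 = a[0]*a[4] - 4*a[1]*a[3] + 3*a[2]**2
g3 = -a[0]*a[2]*a[4] + a[0]*a[3]**2 - 2*a[1]*a[2]*a[3] + a[2]**3 + a[1]**2*a[4]

# both coefficients are annihilated by e and f, hence sl2-invariant
for inv in (g2, g3):
    assert sp.expand(derive(inv, e_im)) == 0
    assert sp.expand(derive(inv, f_im)) == 0
```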
\subsubsection{}
Specializing to the case where one branch point is moved to
$\infty$, we take $a_4=0$. By shifting $x$ we can set $a_2=0$ and by
scaling, set $a_3=1$:
\begin{equation}
\wp'^2=4\wp^3+4a_1\wp+a_0\nonumber
\end{equation}
Traditionally one associates this curve with the cubic
\[y^2=4x^3+4a_1x+a_0\]
parametrized by setting $x=\wp$ and $y=\wp'$ but we see that in fact
the origin of the factor of 4 on the left hand side is not at all
related to the value of $a_3$. It is rather an intrinsic value that
holds for the generic curve. We could of course solve the relations
obtained in the previous section to obtain $x_1$ and $y_1$ as
functions of $\wp_{11}$, $\wp_{111}$ and the $a_i$ in order to
parametrize the generic quartic, $y^2=a(x)$. This parametrization
looks, at first sight, rather unattractive although it reduces to
the classical one when the branch point is moved to $\infty$.
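The specialization is a one-line substitution into the generic equation; e.g., in sympy (names mine):

```python
import sympy as sp

a0, a1, a2, a3, a4, p = sp.symbols('a0 a1 a2 a3 a4 p')

generic = (4*p**3 - (a0*a4 - 4*a1*a3 + 3*a2**2)*p
           + (-a0*a2*a4 + a0*a3**2 - 2*a1*a2*a3 + a2**3 + a1**2*a4))

# move one branch point to infinity (a4=0), shift (a2=0) and scale (a3=1)
special = generic.subs({a4: 0, a2: 0, a3: 1})
assert sp.expand(special - (4*p**3 + 4*a1*p + a0)) == 0
```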
\subsubsection{}
The generic differential equation for the $\wp$-function above is
actually what for higher genus would be called a quadratic identity.
Consequently the coefficients in the differential equation are
polynomial in the $a_i$ and not linear.
\subsubsection{}
Why is life more complicated for higher genus? Simply because the
$\wp_{ij}$ are now a $\frac12g(g+1)$ dimensional (not, in general,
irreducible) representation and so their relations cannot be
constructed solely from invariant quantities.
\section{Differential relations in genus two}
The fundamental definition of Klein can be modified to the form
\begin{equation}\label{K2}
yy_i-{\bf x}h{\bf x}^T_i=0
\end{equation}
for $i=1,2$, where ${\bf x}=(1,x,x^2,x^3)$, ${\bf
x_i}=(1,x_i,x_i^2,x_i^3)$ and $h$ is the covariant four by four
matrix
\begin{equation}
h=\left[
\begin{array}{cccc}
a_0&3a_1&3a_2-2\wp_{11}&a_3-2\wp_{12}\\
3a_1&9a_2+4\wp_{11}&9a_3+2\wp_{12}&3a_4-2\wp_{22}\\
3a_2-2\wp_{11}&9a_3+2\wp_{12}&9a_4+4\wp_{22}&3a_5\\
a_3-2\wp_{12}&3a_4-2\wp_{22}&3a_5&a_6
\end{array}
\right]
\end{equation}
Note that in terms of entries of $h$,
\begin{eqnarray}
y^2&=&a(x)\nonumber\\
&=&h_{44}x^6+(h_{34}+h_{43})x^5+(h_{24}+h_{33}+h_{42})x^4\nonumber\\
&&+(h_{14}+h_{23}+h_{32}+h_{41})x^3\nonumber\\
&&+(h_{13}+h_{22}+h_{31})x^2+(h_{12}+h_{21})x+h_{11}\nonumber
\end{eqnarray}
each coefficient being independent of the $\wp_{ij}$ symbols.
Take the residue of (\ref{K2}) at $x=\infty$, $y={\sqrt
h_{44}}(x^3+\frac{h_{34}}{h_{44}}x^2+\ldots)$:
\begin{equation}\label{K0}
{\sqrt h_{44}}y_1-h_{41}-h_{42}x_1-h_{43}x_1^2-h_{44}x_1^3=0
\end{equation}
The two index symbols, $\wp_{ij}$ are \cite{BEL1997} functions of
$x$, $x_1$ and $x_2$ in the form
\begin{equation}
\wp_{ij}=\wp_{ij}\left(\int^xd{\bf u}+\int^{x_1}d{\bf
u}+\int^{x_2}d{\bf u}\right)
\end{equation}
Hence the effect of the operators
$y\partial_x=\partial_{u_1}+x\partial_{u_2}$ etc. on the $\wp_{ij}$
is
\begin{eqnarray}
y\partial_x\wp_{ij}&=&\wp_{ij1}+x\wp_{ij2}\\
y_1\partial_{x_1}\wp_{ij}&=&\wp_{ij1}+x_1\wp_{ij2}\\
y_2\partial_{x_2}\wp_{ij}&=&\wp_{ij1}+x_2\wp_{ij2}
\end{eqnarray}
Apply $y_2\partial_{x_2}$ to the Klein relation (\ref{K2}) with
$i=1$. By elementary algebra it reduces, for all $x$, to the form
\begin{equation}
-2(x-x_1)^2\left(A+xB\right)=0
\end{equation}
where $A$ and $B$ are functions of $x_1$, $x_2$ and the $\wp_{ijk}$.
As there can be no relation linear in $x$ between these objects
\cite{B1897}, both the coefficients $A$ and $B$ must vanish:
\begin{eqnarray}\label{KK}
\wp_{111}+(x_1+x_2)\wp_{112}+x_1x_2\wp_{122}&=&0\\
\wp_{112}+(x_1+x_2)\wp_{122}+x_1x_2\wp_{222}&=&0\nonumber
\end{eqnarray}
Now apply $y\partial_x$ to the Klein relation (\ref{K2}) with $i=1$:
\begin{equation}
yy'y_1-y{\bf x}'h{\bf x}^T_1={\bf
x}(\partial_{u_1}h+x\partial_{u_2}h){\bf x}^T_1
\end{equation}
Use of the defining relation allows us to replace $yy_1$ to give:
\begin{equation}
(y'{\bf x}^T-y{\bf x'}^T)h{\bf x}^T_1={\bf
x}(\partial_{u_1}h+x\partial_{u_2}h){\bf x}^T_1
\end{equation}
Using $y={\sqrt h_{44}}(x^3+\frac{h_{34}}{h_{44}}x^2+\ldots)$, the
highest order term yields
\begin{eqnarray}
h_{44}(h{\bf x}^T_1)_3-h_{34}(h{\bf
x}^T_1)_4=\sqrt{h_{44}}(\partial_{u_2}h{\bf x}_1^T)_4
\end{eqnarray}
where again we have used subscripts $(\cdot)_i$ to denote $i$th
components of a vector quantity.
Explicitly we have a quadratic identity:
\begin{eqnarray}\label{KKK}
\left|\begin{array}{cc}
h_{31}&h_{34}\\
h_{41}&h_{44}
\end{array}\right|
+\left|\begin{array}{cc}
h_{32}&h_{34}\\
h_{42}&h_{44}
\end{array}\right|x_1
+\left|\begin{array}{cc}
h_{33}&h_{34}\\
h_{43}&h_{44}
\end{array}\right|x_1^2&&\\+2{\sqrt
h_{44}}(\wp_{122}+x_1\wp_{222})=0&&\nonumber
\end{eqnarray}
By the general symmetry of the problem the same identity must be
satisfied by $x_2$. Thus we can obtain expressions for the symmetric
combinations $x_1+x_2$ and $x_1x_2$, namely:
\begin{eqnarray}
2{\sqrt h_{44}}\wp_{222}&=&-\left|\begin{array}{cc}
h_{32}&h_{34}\\
h_{42}&h_{44}
\end{array}\right|-\left|\begin{array}{cc}
h_{33}&h_{34}\\
h_{43}&h_{44}
\end{array}\right|(x_1+x_2)\\
2{\sqrt h_{44}}\wp_{122}&=&-\left|\begin{array}{cc}
h_{31}&h_{34}\\
h_{41}&h_{44}
\end{array}\right|+\left|\begin{array}{cc}
h_{33}&h_{34}\\
h_{43}&h_{44}
\end{array}\right|x_1x_2
\end{eqnarray}
Eliminating these symmetric combinations from the second of the pair
(\ref{KK}) we obtain the relation:
\begin{eqnarray}
\left|\begin{array}{cc}
h_{33}\wp_{112}-h_{32}\wp_{122}+h_{31}\wp_{222}& h_{34}\\
&\\h_{43}\wp_{112}-h_{42}\wp_{122}+h_{41}\wp_{222}&h_{44}
\end{array}\right|&=&0\nonumber\\
&&\nonumber
\end{eqnarray}
from which it follows that
\begin{eqnarray}
h_{33}\wp_{112}-h_{32}\wp_{122}+h_{31}\wp_{222}&=&\lambda h_{34}\\
h_{43}\wp_{112}-h_{42}\wp_{122}+h_{41}\wp_{222}&=&\lambda h_{44}
\end{eqnarray}
$\lambda$ being some constant to be determined.
All the elements of these identities belong to irreducible
representations of ${\mathfrak sl}_2$ and it is easy to show that
the identities are mutually self-consistent under the Lie algebra
action if $\lambda$ is identified with $\wp_{111}$. They then form
two of a multiplet of four identities (a four dimensional
representation of ${\mathfrak sl}_2$) summarized in matrix form as,
\begin{equation}\label{bilinear}
\left(\begin{array}{cccc}
h_{11}&h_{12}&h_{13}&h_{14}\\
h_{21}&h_{22}&h_{23}&h_{24}\\
h_{31}&h_{32}&h_{33}&h_{34}\\
h_{41}&h_{42}&h_{43}&h_{44}
\end{array}\right)\left(\begin{array}{c}\wp_{222}\\-\wp_{122}\\\wp_{112}\\-\wp_{111}\end{array}\right)=0
\end{equation}
An immediate consequence of this is the relation for the Kummer
surface, quartic in the $\wp_{ij}$:
\begin{equation}
\det(h)=0
\end{equation}
But, more than this, it follows straightforwardly from
(\ref{bilinear}) and the theory of diagonalisation of the symmetric
matrix $h$ (we do not give the argument here because it is a
simplification of that leading up to equation (\ref{seelater}) for
the genus three case) that if we define the bordered matrix,
\begin{equation}
H=\left(\begin{array}{ccccc}
h_{11}&-h_{12}&h_{13}&-h_{14}&l_0\\
-h_{21}&h_{22}&-h_{23}&h_{24}&l_1\\
h_{31}&-h_{32}&h_{33}&-h_{34}&l_2\\
-h_{41}&h_{42}&-h_{43}&h_{44}&l_3\\
l_0&l_1&l_2&l_3&0
\end{array}\right)
\end{equation}
then $\det(H)$ is, up to a factor, the expression
$(l_0\wp_{222}+l_1\wp_{122}+l_2\wp_{112}+l_3\wp_{111})^2$.
That this factor is, in fact, $-\frac14$ could be established by the
classical argument of singularity balancing between the leading
terms, quadratic in the $\wp_{ijk}$ and cubic in the $\wp_{ij}$.
However it is instructive and in keeping with the current, purely
algebraic, philosophy to establish the result by using the relation
arising by application of $y_1\partial_{x_1}$ to the Klein relation
(\ref{K2}) for $i=1$.
Immediately we have
\begin{equation}
yy_1y'_1-{\bf x}(\partial_{u_1}h+x_1\partial_{u_2}h){\bf
x_1}^T-y_1{\bf x}h{\bf x'_1}^T=0
\end{equation}
Replacing $y_1y'_1$ by $\frac12a'(x_1)$, taking the $x=\infty$
residue of
\begin{equation}
\frac12ya'(x_1)-y_1{\bf x}h{\bf x'_1}^T={\bf
x}(\partial_{u_1}h+x_1\partial_{u_2}h){\bf x_1}^T
\end{equation}
and by elimination of $y_1$ as before, we find:
\begin{equation}
\frac12\left((h{\bf x}^T_1)_4^2-h_{44}a(x_1)\right)'=2{\sqrt
h_{44}}(\wp_{112}+2\wp_{122}x_1+\wp_{222}x_1^2)\nonumber
\end{equation}
prime denoting differentiation with respect to $x_1$: that is, we
ignore the implicit $x_1$ dependence of the $\wp_{ij}$. In fact the
left hand side of this equation is easily seen to be cubic in $x_1$
and not, as at first sight it appears, quintic. Exploiting the
symmetry of $h$ gives us,
\begin{eqnarray}
{\sqrt h_{44}}(\wp_{112}+2\wp_{122}x_1+\wp_{222}x_1^2)&+&
\left|\begin{array}{cc}
h_{33}&h_{34}\\
h_{43}&h_{44}
\end{array}\right|x_1^3
+\frac32\left|\begin{array}{cc}
h_{23}&h_{24}\\
h_{43}&h_{44}
\end{array}\right|x_1^2\nonumber\\
&&+ \left(\left|\begin{array}{cc}
h_{13}&h_{14}\\
h_{43}&h_{44}
\end{array}\right|+\frac12\left|\begin{array}{cc}
h_{22}&h_{24}\\
h_{42}&h_{44}
\end{array}\right|\right)x_1
\nonumber\\
&&+\frac12\left|\begin{array}{cc}
h_{12}&h_{14}\\
h_{42}&h_{44}
\end{array}\right|=0
\end{eqnarray}
Using (\ref{KKK}) to eliminate the cubic term in $x_1$ from the
above leaves a second quadratic identity,
\begin{equation}
2{\sqrt h_{44}}(\wp_{112}-\wp_{222}x_1^2)+ \left|\begin{array}{cc}
h_{23}&h_{24}\\ h_{43}&h_{44}
\end{array}\right|x_1^2
+\left|\begin{array}{cc}
h_{22}&h_{24}\\
h_{42}&h_{44}
\end{array}\right|x_1
+\left|\begin{array}{cc}
h_{12}&h_{14}\\
h_{42}&h_{44}
\end{array}\right|=0\nonumber
\end{equation}
Again, eliminating $x_1^2$ between this and (\ref{KKK}) provides a
relation of degree one in $x_1$. Since the $x_i$ can satisfy nothing
simpler than quadratic relations the coefficient of $x_1$ and the
constant term must be identically zero. The first is
\begin{equation}
4h_{44}\wp_{222}^2= \left|
\begin{array}{cc}
h_{32}&h_{34}\\
h_{42}&h_{44}
\end{array}
\right|
\left|
\begin{array}{cc}
h_{23}&h_{24}\\
h_{43}&h_{44}
\end{array}\right|
- \left|
\begin{array}{cc}
h_{33}&h_{34}\\
h_{43}&h_{44}
\end{array}\right|
\left|
\begin{array}{cc}
h_{22}&h_{24}\\
h_{42}&h_{44}
\end{array}\right|
\end{equation}
and by a well-known identity for $3\times 3$ determinants
\cite{Aitken}:
\begin{equation}
-4\wp_{222}^2= \left|
\begin{array}{ccc}
h_{22}&h_{23}&h_{24}\\
h_{32}&h_{33}&h_{34}\\
h_{42}&h_{43}&h_{44}
\end{array}\right|
\end{equation}
This allows us to fix the value of the constant of proportionality
and we obtain a beautiful, covariant generalization of Baker's
formula \cite{B1907}:
\begin{equation}
(l_0\wp_{222}+l_1\wp_{122}+l_2\wp_{112}+l_3\wp_{111})^2=-\frac14\left|\begin{array}{ccccc}
h_{11}&-h_{12}&h_{13}&-h_{14}&l_0\\
-h_{21}&h_{22}&-h_{23}&h_{24}&l_1\\
h_{31}&-h_{32}&h_{33}&-h_{34}&l_2\\
-h_{41}&h_{42}&-h_{43}&h_{44}&l_3\\
l_0&l_1&l_2&l_3&0
\end{array}\right|\nonumber
\end{equation}
For later comparison we change the sign of $l_1$ and $l_3$:
\begin{equation}\label{quadtwo}
(l_0\wp_{222}-l_1\wp_{122}+l_2\wp_{112}-l_3\wp_{111})^2=-\frac14\left|\begin{array}{ccccc}
h_{11}&h_{12}&h_{13}&h_{14}&l_0\\
h_{21}&h_{22}&h_{23}&h_{24}&l_1\\
h_{31}&h_{32}&h_{33}&h_{34}&l_2\\
h_{41}&h_{42}&h_{43}&h_{44}&l_3\\
l_0&l_1&l_2&l_3&0
\end{array}\right|\nonumber
\end{equation}
In section \ref{fourindex} we will derive identities linear in the
$\wp_{ij}$ and $\wp_{ijk}$ from the above quadratic identities.
\emph{Presumably} all $\wp$-function identities arise from these
quadratic ones by algebraic and differential processes but, of
course, this is not immediately clear. Nor is it immediately
essential to their application in this paper.
\section{Differential relations in genus three}
The last section recovers classical results in that the covariant
identities were written down in \cite{B1907} though not there
derived in a covariant manner. By contrast a covariant treatment of
higher genus hyperelliptic (or non-hyperelliptic) curves has not
been given before. This we now do.
For genus three we have three covariant Klein equations:
\begin{equation}\label{K3}
yy_i-{\bf x}h{\bf x}^T_i=0
\end{equation}
for $i=1,2,3$, where ${\bf x}=(1,x,x^2,x^3,x^4)$, ${\bf
x_i}=(1,x_i,x_i^2,x_i^3,x_i^4)$ and $h$ is the $5\times 5$ matrix,
\begin{tiny}
\begin{equation}
\left[\begin{array}{ccccc}
a_0&4a_1&6a_2-2\wp_{11}&4a_3-2\wp_{12}&a_4-2\wp_{13}\\
4a_1&16a_2+4\wp_{11}&24a_3+2\wp_{12}&16a_4-2\wp_{22}+4\wp_{13}&4a_5-2\wp_{23}\\
6a_2-2\wp_{11}&24a_3+2\wp_{12}&36a_4+4\wp_{22}-4\wp_{13}&24a_5+2\wp_{23}&6a_6-2\wp_{33}\\
4a_3-2\wp_{12}&16a_4-2\wp_{22}+4\wp_{13}&24a_5+2\wp_{23}&16a_6+4\wp_{33}&4a_7\\
a_4-2\wp_{13}&4a_5-2\wp_{23}&6a_6-2\wp_{33}&4a_7&a_8
\end{array}
\right]\nonumber
\end{equation}
\end{tiny}
The residue of (\ref{K3}) at $x=\infty$, $y(x)={\sqrt
h_{55}}x^4+\frac{h_{45}+h_{54}}{2{\sqrt h_{55}}}x^3\ldots$ gives
\begin{equation}
{\sqrt h_{55}}y_i-(h{\bf x}^T_i)_5=0,
\end{equation}
for $i=1,2,3$.
This time the operator $y\partial_x$ and its indexed relatives are
given by $\partial_{u_1}+x\partial_{u_2}+x^2\partial_{u_3}$ etc. We
may apply $y_2\partial_{x_2}$ to (\ref{K3}) with $i=1$ and take the
residue at $x=\infty$ to obtain,
\begin{eqnarray}
\left((\frac{\partial h}{\partial u_1}+x_2\frac{\partial h}{\partial
u_2}+x_2^2\frac{\partial h}{\partial u_3}){\bf x}^T_1\right)_5=0
\end{eqnarray}
Simplifying, removing overall factors of $x_1-x_2$, gives
\begin{eqnarray}
\wp_{113}+(x_1+x_2)\wp_{123}+x_1x_2\wp_{223}+(x_1^2+x_2^2)\wp_{133}&&\nonumber\\
+(x_1+x_2)x_1x_2\wp_{233}+x_1^2x_2^2\wp_{333}&=&0
\end{eqnarray}
and, by cyclic interchange of the $x_i$,
\begin{eqnarray}
\wp_{113}+(x_2+x_3)\wp_{123}+x_2x_3\wp_{223}+(x_2^2+x_3^2)\wp_{133}&&\nonumber\\
+(x_2+x_3)x_2x_3\wp_{233}+x_2^2x_3^2\wp_{333}&=&0\\
\wp_{113}+(x_3+x_1)\wp_{123}+x_3x_1\wp_{223}+(x_3^2+x_1^2)\wp_{133}&&\nonumber\\
+(x_3+x_1)x_3x_1\wp_{233}+x_3^2x_1^2\wp_{333}&=&0.
\end{eqnarray}
From these three identities we can form three identities whose
coefficients are symmetric functions in the $x_i$, namely:
\begin{eqnarray}\label{A}
\wp_{223}-\wp_{133}+s^{(1)}\wp_{233}+s^{(2)}\wp_{333}&=&0\\
\wp_{123}+s^{(1)}\wp_{133}-s^{(3)}\wp_{333}&=&0\\
\wp_{113}-s^{(2)}\wp_{133}-s^{(3)}\wp_{233}&=&0
\end{eqnarray}
where $s^{(1)}=x_1+x_2+x_3$, $s^{(2)}=x_1x_2+x_2x_3+x_3x_1$ and
$s^{(3)}=x_1x_2x_3$.
An important observation at this point is that these three equations
are overdetermined for $s^{(1)}$, $s^{(2)}$ and $s^{(3)}$ so that
the $\wp_{ijk}$ must satisfy the quadratic identity
\begin{equation}
\wp_{113}\wp_{333}-\wp_{123}\wp_{233}+\wp_{223}\wp_{133}-\wp_{133}^2=0.
\end{equation}
This relation is in the kernel of $\bf e$ and thus is a highest
weight element for a set of relations forming a five dimensional
representation:
\begin{eqnarray}\label{trivial}
P_{\bf
5}(0)&=&\wp_{113}\wp_{333}-\wp_{123}\wp_{233}+\wp_{223}\wp_{133}-\wp_{133}^2\nonumber\\
P_{\bf
5}(1)&=&-\wp_{233}\wp_{113}-\wp_{112}\wp_{333}-\wp_{133}\wp_{222}+2\wp_{133}\wp_{123}+\wp_{233}\wp_{122}\nonumber\\
P_{\bf
5}(2)&=&\wp_{133}\wp_{122}-\wp_{133}\wp_{113}-\wp_{223}\wp_{122}+\wp_{223}\wp_{113}\nonumber\\
&&+\wp_{111}\wp_{333}+\wp_{123}\wp_{222}-2\wp_{123}^2\nonumber\\
P_{\bf
5}(3)&=&-\wp_{233}\wp_{111}-\wp_{112}\wp_{133}+\wp_{112}\wp_{223}-\wp_{113}\wp_{222}+2\wp_{113}\wp_{123}\nonumber\\
P_{\bf
5}(4)&=&-\wp_{123}\wp_{112}+\wp_{113}\wp_{122}-\wp_{113}^2+\wp_{133}\wp_{111}\nonumber
\end{eqnarray}
This gives a set of five identities quadratic in the $\wp_{ijk}$,
$P_{\bf 5}(i)=0$ for $i=0,\dots,4$.
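The overdetermination argument can be confirmed directly: solving the relations (\ref{A}) for $\wp_{113}$, $\wp_{123}$ and $\wp_{223}$ and substituting into $P_{\bf 5}(0)$ makes it vanish identically in the symmetric functions. A sympy sketch (names mine):

```python
import sympy as sp

s1, s2, s3 = sp.symbols('s1 s2 s3')
p133, p233, p333 = sp.symbols('p133 p233 p333')

# solve the three relations (A) for the remaining three-index symbols
p223 = p133 - s1*p233 - s2*p333
p123 = -s1*p133 + s3*p333
p113 = s2*p133 + s3*p233

# the compatibility condition P_5(0) then holds identically
P50 = p113*p333 - p123*p233 + p223*p133 - p133**2
assert sp.expand(P50) == 0
```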
Applying $y\partial_x$ to (\ref{K3}),
\begin{equation}
(y'(x){\bf x}-y(x){\bf x}')h{\bf x}_1^T={\bf x}(\frac{\partial
h}{\partial u_1}+x\frac{\partial h}{\partial u_2}+x^2\frac{\partial
h}{\partial u_3}){\bf x}_1^T
\end{equation}
we again use the expansion near $x=\infty$,
\[y(x)={\sqrt h_{55}}x^4+\frac{h_{45}+h_{54}}{2{\sqrt
h_{55}}}x^3\ldots\] collecting the highest order term (degree 6) in
the identity just obtained. Thus, using the symmetry of the matrix
$h$,
\begin{equation}\label{cubic}
\left|\begin{array}{cc}
(h{\bf x}_1^T)_4 & (h{\bf x}_1^T)_5\\
h_{54} & h_{55}
\end{array}\right|={\sqrt h_{55}}(\frac{\partial
h}{\partial u_3}{\bf x}_1^T)_5.
\end{equation}
This is an identity cubic in $x_1$ also satisfied by $x_2$ and
$x_3$:
\begin{eqnarray}
\left|\begin{array}{cc}
h_{44} & h_{45}\\
h_{54} & h_{55}
\end{array}\right|x_i^3+
\left|\begin{array}{cc}
h_{34} & h_{35}\\
h_{54} & h_{55}
\end{array}\right|x_i^2+
\left|\begin{array}{cc}
h_{24} & h_{25}\\
h_{54} & h_{55}
\end{array}\right|x_i+
\left|\begin{array}{cc}
h_{14} & h_{15}\\
h_{54} & h_{55}
\end{array}\right|&&\nonumber\\
+2{\sqrt h_{55}}(\wp_{133}+x_i\wp_{233}+x_i^2\wp_{333})&=&0
\end{eqnarray}
We can now eliminate the symmetric functions $s^{(1)}$, $s^{(2)}$
and $s^{(3)}$ in (\ref{A}). From the first relation we obtain,
\begin{eqnarray}
h_{24}\wp_{333}-h_{34}\wp_{233}+h_{44}(\wp_{223}-\wp_{133})-
h_{54}\lambda&=&0\nonumber\\
h_{25}\wp_{333}-h_{35}\wp_{233}+h_{45}(\wp_{223}-\wp_{133})-
h_{55}\lambda&=&0
\end{eqnarray}
where $\lambda$ is an undetermined multiplier.
Since these identities are polynomial in the $\wp_{ijk}$ and the
$h_{ij}$ only they must belong to a finite dimensional
representation of ${\mathfrak sl}_2(\mathbb C)$. Application of the
$\bf e$ and $\bf f$ operators must generate further identities. This
can only work for a special value of $\lambda$ and application of
$\bf e$ to the second of the above identities shows that it will be
a highest weight element (in the kernel of $\bf e$) only if
$\lambda=\wp_{222}-2\wp_{123}.$ Hence we have
\begin{eqnarray}\label{base1}
h_{24}\wp_{333}-h_{34}\wp_{233}+h_{44}(\wp_{223}-\wp_{133})-h_{54}(\wp_{222}-2\wp_{123})
&=&0\nonumber\\
h_{25}\wp_{333}-h_{35}\wp_{233}+h_{45}(\wp_{223}-\wp_{133})-h_{55}(\wp_{222}-2\wp_{123})
&=&0
\end{eqnarray}
We label the second of these identities $P_{\bf 9}(0)$ because it is
highest weight for a nine dimensional representation generated by
repeated application of $\bf f$, a set of nine linearly independent
identities $P_{\bf 9}(i)$ for $i=0,\ldots,8$. The last of these
identities is:
\begin{equation}
P_{\bf
9}(8)=h_{11}(\wp_{222}-2\wp_{123})-h_{12}(\wp_{122}-\wp_{113})+h_{13}\wp_{112}-h_{14}\wp_{111}=0
\end{equation}
Rather than write these out in detail now we shall summarize them in
a more compact form shortly.
Now from a linear combination of the first of the identities
(\ref{base1}) and $P_{\bf 9}(1)$ we can form the highest weight
identity for a seven dimensional representation:
\begin{eqnarray}
P_{\bf
7}(0)=-4h_{15}\wp_{333}+4h_{35}\wp_{133}-h_{45}(2\wp_{123}+\wp_{222})+4h_{55}(\wp_{122}-\wp_{113})&&\nonumber\\
-h_{34}\wp_{233}+h_{24}\wp_{333}-h_{44}(\wp_{133}-\wp_{223})&=&0
\end{eqnarray}
Proceeding in this way with the other identities obtained from
eliminating the symmetric functions from the other identities in
(\ref{A}), we obtain representations $P_{\bf 5}$, $P_{\bf 3}$ and
$P_{\bf 1}$, giving a total of $9+7+5+3+1=5^2$ relations linear in
the three index symbols.
These identities are not presented in the simplest form however.
They can be rendered more transparent by taking various linear
combinations so that one only ever has four $h$ terms arising in
each identity. We do not give the details here because it involves
routine linear algebra applied to the above identities, best
accomplished using a computer algebra package. The \emph{fact} that
this simplification is possible, however, is of significance, and it
is not clear to the present author exactly why it should be so.
After this rearrangement the identities take the form of a matrix
product
\begin{equation}\label{fourterm}
hA=0 \end{equation}
of the symmetric $5\times 5$ matrix $h$ and an
antisymmetric $5\times 5$ matrix
\begin{equation}
A=\left[
\begin{array}{ccccc}
0&-\wp_{333}&\wp_{233}&-\wp_{223}+\wp_{133}&\wp_{222}-2\wp_{123}\\
\wp_{333}&0&-\wp_{133}&\wp_{123}&-\wp_{122}+\wp_{113}\\
-\wp_{233}&\wp_{133}&0&-\wp_{113}&\wp_{112}\\
\wp_{223}-\wp_{133}&-\wp_{123}&\wp_{113}&0&-\wp_{111}\\
-\wp_{222}+2\wp_{123}&\wp_{122}-\wp_{113}&-\wp_{112}&\wp_{111}&0
\end{array}
\right]
\end{equation}
One checks that the matrix $A$ has rank at most three by virtue of
the relations (\ref{trivial}) obtained earlier. In fact the $4\times
4$ minors of $A$ are products of the $P_{\bf 5}(i)$:
\begin{equation}
A(i,j)=P_{\bf 5}(5-i)P_{\bf 5}(5-j)
\end{equation}
Further the $3\times 3$ minors also have the $P_{\bf 5}(i)$ as
factors. There are however non-vanishing $2\times 2$ minors, so the
rank of $A$ is exactly two.
Consequently the $5\times 5$ matrix $h$ has exactly a two
dimensional zero eigenspace and, being symmetric, must be similar to
a diagonal matrix of the form $h_D={\rm Diag}(0,0,h_3,h_4,h_5)$.
We can use this fact to obtain identities quadratic in the
$\wp_{ijk}$ by generalizing the argument in the genus two case as
follows.
Let $\Pi$ be the matrix which diagonalizes $h$, let $\bf l$ and $\bf
k$ be arbitrary five component column vectors, $I_2$ the two by two
identity matrix and consider
\begin{eqnarray}
\left|\left[\begin{array}{cc} \Pi^T & 0\\0 & I_2
\end{array}\right]\left[\begin{array}{ccc}
h&\bf l & \bf
k\\
\bf l^T & 0 & 0\\
\bf k^T & 0 & 0\\\end{array}\right]\left[\begin{array}{cc}\Pi & 0\\0
&
I_2\end{array}\right]\right|&=&\left|\left[\begin{array}{ccc}h_D&\Pi^T\bf
l & \Pi^T\bf
k\\
{\bf l^T}\Pi & 0 & 0\\
{\bf k^T}\Pi & 0 & 0\\\end{array}\right]\right|\nonumber\\
&=&h_3h_4h_5\left|\left[\begin{array}{cc}(\Pi^T{\bf l})_1 &
(\Pi^T{\bf l})_2\\(\Pi^T{\bf k})_1 & (\Pi^T{\bf k})_2\end{array}
\right]\right|^2
\end{eqnarray}
Now consider
\begin{eqnarray}{\bf l}^TA{\bf k}&=&{\bf l}^T\Pi A_D \Pi^T{\bf
k}\nonumber\\
&=&\alpha\left|\left[\begin{array}{cc}(\Pi^T{\bf l})_1 & (\Pi^T{\bf
l})_2\\(\Pi^T{\bf k})_1 & (\Pi^T{\bf k})_2\end{array} \right]\right|
\end{eqnarray}
where $A_D$ is the normal form of $A$
\begin{equation}
A_D=\left[\begin{array}{ccccc}
0 & \alpha & 0 & 0 & 0\\
-\alpha & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
\end{array}\right]
\end{equation}
corresponding to the diagonal form of $h$.
Combining these observations we obtain the attractive formula
\begin{equation}\label{seelater}
({\bf l}^TA{\bf k})^2=\lambda\left|\begin{array}{ccc} h&\bf l & \bf
k\\
\bf l^T & 0 & 0\\
\bf k^T & 0 & 0\\\end{array}\right|
\end{equation}
where $\lambda$ is a function yet to be determined.
The undetermined factor can be found from a (simple) singularity
argument or by a more involved algebraic expansion; as for genus
two, we present the latter.
We return to the original relation on the curve for $y_1$ of degree
8 in $x_1$. By substituting for $y_1$ we obtain a sextic in $x_1$
from which we eliminate the degree 6 and 5 terms using the cubic
expression (\ref{cubic}). The resulting quartic identity in $x_1$
has leading term
\begin{equation}
\wp_{333}^2+\frac14\left|\begin{array}{ccc}
h_{33}&h_{34}&h_{35}\\
h_{43}&h_{44}&h_{45}\\
h_{53}&h_{54}&h_{55}
\end{array}\right|.
\end{equation}
Hence $\lambda=-\frac14$ in the full quadratic identity:
\begin{equation}
({\bf l}^TA{\bf k})^2=-\frac14\left|\begin{array}{ccc} h&\bf l & \bf
k\\
\bf l^T & 0 & 0\\
\bf k^T & 0 & 0\\\end{array}\right|
\end{equation}
This formula is a new result of this paper.
\section{Identities for $\wp_{ijkl}$}\label{fourindex}
In all three cases discussed above there are identities for the four
index $\wp$-functions of the form:
\begin{equation}
\wp_{ijkl}=F(\wp_{11},\wp_{12},\wp_{22},\wp_{13},\ldots)
\end{equation}
that are obtained by differentiating the identities quadratic in the
$\wp_{ijk}$ and (for genus two and three) using certain identities
involving two and three index $\wp$-functions.
Clearly in genus one we get
\begin{equation}
\wp''=6\wp^2-\frac12(a_0a_4-4a_1a_3+3a_2^2)
\end{equation}
recalling that the two index $\wp$ function is written as $\wp$ in
this case.
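The step is just one differentiation of the generic equation, $2\wp'\wp''=(12\wp^2-g_2)\wp'$ with $g_2=a_0a_4-4a_1a_3+3a_2^2$; a minimal sympy check (names mine):

```python
import sympy as sp

p, g2 = sp.symbols('p g2')  # p = wp, g2 = a0*a4 - 4*a1*a3 + 3*a2**2

# from wp'^2 = 4 wp^3 - g2 wp + g3:  2 wp' wp'' = (12 wp^2 - g2) wp'
rhs = 4*p**3 - g2*p  # the g3 term drops on differentiation
wp_pp = sp.expand(rhs.diff(p) / 2)  # wp'' after cancelling wp'
assert wp_pp == 6*p**2 - g2/2
```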
In genus two we start with the identity for $\wp_{222}^2$.
Differentiating,
\begin{eqnarray}
-8\wp_{222}\wp_{2222}&=& \left|
\begin{array}{ccc}
4\wp_{112}&h_{23}&h_{24}\\
2\wp_{122}&h_{33}&h_{34}\\
-2\wp_{222}&h_{43}&h_{44}
\end{array}\right|+
\left|
\begin{array}{ccc}
h_{22}&2\wp_{122}&h_{24}\\
h_{32}&4\wp_{222}&h_{34}\\
h_{42}&0&h_{44}
\end{array}\right|\nonumber\\
&&+ \left|
\begin{array}{ccc}
h_{22}&h_{23}&-2\wp_{222}\\
h_{32}&h_{33}&0\\
h_{42}&h_{43}&0
\end{array}\right|\nonumber\\
&=&4\wp_{112}\left|\begin{array}{cc}h_{33}&h_{34}\\h_{43}&h_{44}\end{array}\right|
-4\wp_{122}\left|\begin{array}{cc}h_{32}&h_{34}\\h_{42}&h_{44}\end{array}\right|\nonumber\\
&&+4\wp_{222}\left(-\left|\begin{array}{cc}h_{23}&h_{24}\\h_{33}&h_{34}\end{array}\right|+\left|\begin{array}{cc}h_{22}&h_{24}\\h_{42}&h_{44}\end{array}\right|\right)\nonumber
\end{eqnarray}
The first two terms on the right hand side can be replaced by a
single term with factor $\wp_{222}$ by utilizing the identity
\begin{equation}
\wp_{112}\left|\begin{array}{cc}h_{3,3}&h_{3,4}\\h_{4,3}&h_{4,4}\end{array}\right|
-\wp_{122}\left|\begin{array}{cc}h_{3,2}&h_{3,4}\\h_{4,2}&h_{4,4}\end{array}\right|
+\wp_{222}\left|\begin{array}{cc}h_{3,1}&h_{3,4}\\h_{4,1}&h_{4,4}\end{array}\right|=0\nonumber
\end{equation}
This, in turn, is obtained from the quadratic identity by setting
$l_i=h_{i+1,j}$ to get four identities of the form
\begin{equation}
h_{1,j}\wp_{222}-h_{2,j}\wp_{122}+h_{3,j}\wp_{112}-h_{4,j}\wp_{111}=0
\end{equation}
and eliminating $\wp_{111}$ from the pair $j=3,4$. Thus
\begin{eqnarray}
-\wp_{2222}&=&\frac12\left(-\left|\begin{array}{cc}h_{2,3}&h_{2,4}\\h_{3,3}&h_{3,4}\end{array}\right|+\left|\begin{array}{cc}h_{2,2}&h_{2,4}\\h_{4,2}&h_{4,4}\end{array}\right|-\left|\begin{array}{cc}h_{3,1}&h_{3,4}\\h_{4,1}&h_{4,4}\end{array}\right|\right)\\
\frac13(-\wp_{2222}+6\wp^2_{22})&=&a_2a_6-4a_3a_5+3a_4^2+a_6\wp_{11}-2a_5\wp_{12}+a_4\wp_{22}\nonumber
\end{eqnarray}
Application of $\bf e$ and $\bf f$ to this identity shows that it is
highest weight for a five dimensional representation reproducing the
classic partial differential equations of Baker \cite{B1907}.
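The passage from the determinant expression for $-\wp_{2222}$ to the final equation can be verified by brute-force substitution of the genus-two entries of $h$; a sympy sketch (names mine):

```python
import sympy as sp

a = sp.symbols('a0:7')
p11, p12, p22 = sp.symbols('p11 p12 p22')  # p_ij = wp_ij

# the covariant genus-two matrix h from the text
h = sp.Matrix([
    [a[0],           3*a[1],          3*a[2] - 2*p11, a[3] - 2*p12],
    [3*a[1],         9*a[2] + 4*p11,  9*a[3] + 2*p12, 3*a[4] - 2*p22],
    [3*a[2] - 2*p11, 9*a[3] + 2*p12,  9*a[4] + 4*p22, 3*a[5]],
    [a[3] - 2*p12,   3*a[4] - 2*p22,  3*a[5],         a[6]],
])

def m2(r, c):  # 2x2 minor of h, 1-based indices
    return h[[i - 1 for i in r], [j - 1 for j in c]].det()

# -wp_2222 = (1/2)(-|h23 h24;h33 h34| + |h22 h24;h42 h44| - |h31 h34;h41 h44|)
minus_p2222 = sp.Rational(1, 2)*(-m2((2, 3), (3, 4))
                                 + m2((2, 4), (2, 4))
                                 - m2((3, 4), (1, 4)))

target = a[2]*a[6] - 4*a[3]*a[5] + 3*a[4]**2 + a[6]*p11 - 2*a[5]*p12 + a[4]*p22
assert sp.expand((minus_p2222 + 6*p22**2)/3 - target) == 0
```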
The fully general, covariant genus three equations have not been
written down before. We proceed as before by differentiating the
$\wp_{333}^2$ relation with respect to $u_3$:
\begin{eqnarray}
-8\wp_{333}\wp_{3333}&=&\left|\begin{array}{ccc}
4\wp_{223}-4\wp_{133}&h_{34}&h_{35}\\
2\wp_{233}&h_{44}&h_{45}\\
-2\wp_{333}&h_{54}&h_{55}
\end{array}\right|+\left|\begin{array}{ccc}
h_{33}&2\wp_{233}&h_{35}\\
h_{43}&4\wp_{333}&h_{45}\\
h_{53}&0&h_{55}
\end{array}\right|\nonumber\\
&&+\left|\begin{array}{ccc}
h_{33}&h_{34}&-2\wp_{333}\\
h_{43}&h_{44}&0\\
h_{53}&h_{54}&0
\end{array}\right|\nonumber\\
&=&4(\wp_{223}-\wp_{133})\left|\begin{array}{cc}h_{44}&h_{45}\\h_{54}&h_{55}\end{array}\right|
-4\wp_{233}\left|\begin{array}{cc}h_{34}&h_{35}\\h_{54}&h_{55}\end{array}\right|\nonumber\\
&&+4\wp_{333}\left(\left|\begin{array}{cc}h_{33}&h_{35}\\h_{53}&h_{55}\end{array}\right|-\left|\begin{array}{cc}h_{34}&h_{35}\\h_{44}&h_{45}\end{array}\right|\right)\nonumber
\end{eqnarray}
As before we can derive identities linear in the $\wp_{ijk}$ from
the general quadratic identities. Putting
$k_0=1,\,k_1=k_2=k_3=k_4=0$ and $l_i=h_{i+1,j},\,i=0\ldots 4$ gives,
for any choice of $j$,
\begin{equation}
h_{2j}\wp_{333}-h_{3j}\wp_{233}+h_{4j}(\wp_{223}-\wp_{133})-h_{5j}(\wp_{222}-2\wp_{123})=0
\end{equation}
Eliminating the $\wp_{222}-2\wp_{123}$ term between the cases $j=4$
and $j=5$ yields
\begin{equation}
(\wp_{223}-\wp_{133})\left|\begin{array}{cc}h_{44}&h_{45}\\h_{54}&h_{55}\end{array}\right|
-\wp_{233}\left|\begin{array}{cc}h_{34}&h_{35}\\h_{54}&h_{55}\end{array}\right|
+\wp_{333}\left|\begin{array}{cc}h_{24}&h_{25}\\h_{54}&h_{55}\end{array}\right|=0
\end{equation}
Use of this identity in the equation for $\wp_{333}\wp_{3333}$ gives
\begin{equation}
-2\wp_{3333}=-\left|\begin{array}{cc}h_{24}&h_{25}\\h_{54}&h_{55}\end{array}\right|
+\left|\begin{array}{cc}h_{33}&h_{35}\\h_{53}&h_{55}\end{array}\right|
-\left|\begin{array}{cc}h_{34}&h_{35}\\h_{44}&h_{45}\end{array}\right|
\end{equation}
and thus, by application of $\bf f$, the nine dimensional space of
identities,
\begin{eqnarray}
-\wp_{3333}+6\wp^2_{33}&=&10(a_4a_8-4a_5a_7+3a_6^2)\nonumber\\
&&+8a_6\wp_{33}-8a_7\wp_{23}+a_8(3\wp_{22}-4\wp_{13})\nonumber\\
-\wp_{2333}+6\wp_{23}\wp_{33}&=&10(a_3a_8-3a_4a_7+2a_5a_6)\nonumber\\
&&+12a_5\wp_{33}-10a_6\wp_{23}+4a_7(\wp_{22}-3\wp_{13})\nonumber\\
&&+2a_8\wp_{12}\nonumber\\
2(-\wp_{1333}+6\wp_{13}\wp_{33})&=&10(3a_2a_8-4a_3a_7-11a_4a_6+12a_5^2)\nonumber\\
+3(-\wp_{2233}+2\wp_{22}\wp_{33}+4\wp_{23}^2)&&+60a_4\wp_{33}-36a_5\wp_{23}-2a_6(9\wp_{22}-52\wp_{13})\nonumber\\
&&+20a_7\wp_{12}+4a_8\wp_{11}\nonumber\\
-\wp_{2223}+6\wp_{22}\wp_{23}&=&10(a_1a_8+2a_2a_7-12a_3a_6+9a_4a_5)\nonumber\\
+3(-\wp_{1233}+2\wp_{12}\wp_{33}+4\wp_{13}\wp_{23})&&+40a_3\wp_{33}-10a_4\wp_{23}+4a_5(3\wp_{22}-29\wp_{13})\nonumber\\
&&+18a_6\wp_{12}+12a_7\wp_{11}\nonumber\\
-\wp_{2222}+6\wp_{22}^2&=&10(a_0a_8+12a_1a_7-22a_2a_6-36a_3a_5+45a_4^2)\nonumber\\
+6(-\wp_{1133}+2\wp_{11}\wp_{33}+4\wp_{13}^2)&&+120a_2\wp_{33}+40a_3\wp_{23}+50a_4(\wp_{22}-12\wp_{13})\nonumber\\
+12(-\wp_{1223}+4\wp_{12}\wp_{23}+2\wp_{13}\wp_{22})&&+40a_5\wp_{12}+120a_6\wp_{11}\nonumber\\
-\wp_{1222}+6\wp_{12}\wp_{22}&=&10(a_0a_7+2a_1a_6-12a_2a_5+9a_3a_4)\nonumber\\
+3(-\wp_{1123}+4\wp_{12}\wp_{13}+2\wp_{11}\wp_{23})&&+12a_1\wp_{33}+18a_2\wp_{23}+4a_3(3\wp_{22}-29\wp_{13})\nonumber\\
&&-10a_4\wp_{12}+40a_5\wp_{11}\nonumber\\
2(-\wp_{1113}+6\wp_{11}\wp_{13})&=&10(3a_0a_6-4a_1a_5-11a_2a_4+12a_3^2)\nonumber\\
+3(-\wp_{1122}+2\wp_{11}\wp_{22}+4\wp_{12}^2)&&+4a_0\wp_{33}+20a_1\wp_{23}+2a_2(9\wp_{22}-52\wp_{13})\nonumber\\
&&-36a_3\wp_{12}+60a_4\wp_{11}\nonumber\\
-\wp_{1112}+6\wp_{11}\wp_{12}&=&10(a_0a_5-3a_1a_4+2a_2a_3)\nonumber\\
&&+2a_0\wp_{23}+4a_1(\wp_{22}-3\wp_{13})-10a_2\wp_{12}\nonumber\\
&&+12a_3\wp_{11}\nonumber\\
-\wp_{1111}+6\wp_{11}^2&=&10(a_0a_4-4a_1a_3+3a_2^2)\nonumber\\
&&+a_0(3\wp_{22}-4\wp_{13})-8a_1\wp_{12}+8a_2\wp_{11}\nonumber
\end{eqnarray}
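As a consistency check on these formulae, note that each identity is homogeneous with respect to the grading in which $a_i$ carries weight $i$ and $\wp_{jk}$ carries weight $j+k$ (so that $\wp_{ijkl}$ carries weight $i+j+k+l$): in the first identity, for instance, every term, from $10(a_4a_8-4a_5a_7+3a_6^2)$ to $a_8(3\wp_{22}-4\wp_{13})$, has weight twelve, matching $\wp_{3333}$ and $\wp_{33}^2$.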
Given that there are fifteen of the symbols $\wp_{ijkl}$ we expect
to be able to find a further six identities.
Thus, returning to the $\wp_{333}^2$ identity and differentiating
with respect to $u_1$ this time yields
\begin{eqnarray}
-8\wp_{333}\wp_{1333}&=&\left|\begin{array}{ccc}
4\wp_{122}-4\wp_{113}&h_{34}&h_{35}\\
2\wp_{123}&h_{44}&h_{45}\\
-2\wp_{133}&h_{54}&h_{55}
\end{array}\right|+\left|\begin{array}{ccc}
h_{33}&2\wp_{123}&h_{35}\\
h_{43}&4\wp_{133}&h_{45}\\
h_{53}&0&h_{55}
\end{array}\right|\nonumber\\
&&+\left|\begin{array}{ccc}
h_{33}&h_{34}&-2\wp_{333}\\
h_{43}&h_{44}&0\\
h_{53}&h_{54}&0
\end{array}\right|\nonumber\\
&=&4(\wp_{122}-\wp_{113})\left|\begin{array}{cc}h_{44}&h_{45}\\h_{54}&h_{55}\end{array}\right|
-4\wp_{123}\left|\begin{array}{cc}h_{34}&h_{35}\\h_{54}&h_{55}\end{array}\right|\nonumber\\
&&+4\wp_{133}\left(\left|\begin{array}{cc}h_{33}&h_{35}\\h_{53}&h_{55}\end{array}\right|-\left|\begin{array}{cc}h_{34}&h_{35}\\h_{44}&h_{45}\end{array}\right|\right)\nonumber
\end{eqnarray}
This time some appropriate identities arise by choosing
$k_0=0,k_1=1,k_2=0,k_3=0$ and $k_4=0$ and the $l_i$ as before:
\begin{equation}
h_{ij}\wp_{333}-h_{3j}\wp_{133}+h_{4j}\wp_{123}-h_{5j}(\wp_{122}-\wp_{113})=0
\end{equation}
These allow us to replace the terms on the right-hand side of the
$\wp_{1333}$ equation by terms involving $\wp_{333}$ and so factor
this out to leave
\begin{equation}
-2\wp_{1333}=-\left|\begin{array}{cc}h_{14}&h_{15}\\h_{44}&h_{45}\end{array}\right|+\left|\begin{array}{cc}h_{13}&h_{53}\\h_{15}&h_{55}\end{array}\right|
\end{equation}
Applying $\bf e$ to this identity gives the $\wp_{2333}$ identity
found above. Successive applications of $\bf f$, however, yield a set
of seven identities:
\begin{eqnarray}
-\wp_{1333}+6\wp_{13}\wp_{33}&=&3a_2a_8-8a_3a_7+5a_4a_6\nonumber\\
&&+3a_4\wp_{33}-10a_6\wp_{13}+4a_7\wp_{12}-a_8\wp_{11}\nonumber\\
-\wp_{1233}+2\wp_{12}\wp_{33}+4\wp_{13}\wp_{23}&=&2a_1a_8-12a_3a_6+10a_4a_5\nonumber\\
&&+4a_3\wp_{33}+2a_4\wp_{23}-20a_5\wp_{13}+6a_6\wp_{12}\nonumber\\
-\wp_{1133}+2\wp_{11}\wp_{33}+4\wp_{13}^2&=&a_0a_8+8a_1a_7-18a_2a_6-16a_3a_5+25a_4^2\nonumber\\
-\wp_{1223}+2\wp_{13}\wp_{22}+4\wp_{12}\wp_{23}&&+6a_2\wp_{33}+8a_3\wp_{23}+a_4(\wp_{22}-48\wp_{13})\nonumber\\
&&+8a_5\wp_{12}+6a_6\wp_{11}\nonumber\\
-\wp_{1222}+6\wp_{12}\wp_{22}&=&16a_0a_7+20a_1a_6-156a_2a_5+120a_3a_4\nonumber\\
+6(-\wp_{1123}+2\wp_{11}\wp_{23}+4\wp_{12}\wp_{13})&&+12a_1\wp_{33}+36a_2\wp_{23}+4a_3(3\wp_{22}-44\wp_{13})\nonumber\\
&&-4a_4\wp_{12}+52a_5\wp_{11}\nonumber\\
-\wp_{1113}+6\wp_{11}\wp_{13}&=&11a_0a_6-16a_1a_5-35a_2a_4+40a_3^2\nonumber\\
-\wp_{1122}+2\wp_{11}\wp_{22}+4\wp_{12}^2&&+a_0\wp_{33}+8a_1\wp_{23}+2a_2(3\wp_{22}-19\wp_{13})\nonumber\\
&&-12a_3\wp_{12}+21a_4\wp_{11}\nonumber\\
-\wp_{1112}+6\wp_{11}\wp_{12}&=&10(a_0a_5-3a_1a_4+2a_2a_3)\nonumber\\
&&+2a_0\wp_{23}+4a_1(\wp_{22}-3\wp_{13})-10a_2\wp_{12}\nonumber\\
&&+12a_3\wp_{11}\nonumber\\
-\wp_{1111}+6\wp_{11}^2&=&10(a_0a_4-4a_1a_3+3a_2^2)\nonumber\\
&&+a_0(3\wp_{22}-4\wp_{13})-8a_1\wp_{12}+8a_2\wp_{11}\nonumber
\end{eqnarray}
Of these seven, the last two are already represented in the previous
set, so we still seek one more. To find it, turn to the
quadratic identity for $\wp_{133}$,
\begin{equation}
-\wp_{133}^2=\frac14\left|\begin{array}{ccc}h_{11}&h_{14}&h_{15}\\h_{41}&h_{44}&h_{45}\\h_{51}&h_{54}&h_{55}\end{array}\right|
\end{equation}
and differentiate with respect to $u_1$. Using similar identities to
before we find
\begin{equation}
-2\wp_{1133}=\left|\begin{array}{cc}h_{11}&h_{15}\\h_{51}&h_{55}\end{array}\right|-
\left|\begin{array}{cc}h_{14}&h_{15}\\h_{24}&h_{25}\end{array}\right|
\end{equation}
and applying $\bf f$ successively we arrive at:
\begin{eqnarray}
2(-\wp_{1133}+6\wp_{13}^2)+4(\wp_{23}\wp_{12}-\wp_{13}\wp_{22})&=&a_0a_8-16a_3a_5+15a_4^2\nonumber\\
&&+8a_3\wp_{23}-2a_4(\wp_{22}+12\wp_{13})+8a_5\wp_{12}\nonumber\\
-\wp_{1123}+4\wp_{12}\wp_{13}+2\wp_{23}\wp_{11}&=&2a_0a_7-12a_2a_5+10a_3a_4\nonumber\\
&&+6a_2\wp_{23}-20a_3\wp_{13}+2a_4\wp_{12}+4a_5\wp_{11}\nonumber\\
-\wp_{1122}+2\wp_{11}\wp_{22}+4\wp_{12}^2&=&14a_0a_6-24a_1a_5-30a_2a_4+40a_3^2\nonumber\\
+2(-\wp_{1113}+6\wp_{11}\wp_{13})&&+12a_1\wp_{23}+6a_2(\wp_{22}-8\wp_{13})\nonumber\\
&&-12a_3\wp_{12}+24a_4\wp_{11}\nonumber\\
-\wp_{1112}+6\wp_{11}\wp_{12}&=&10a_0a_5-30a_1a_4+20a_2a_3\nonumber\\
&&+2a_0\wp_{23}+4a_1(\wp_{22}-3\wp_{13})\nonumber\\
&&-10a_2\wp_{12}+12a_3\wp_{11}\nonumber\\
-\wp_{1111}+6\wp_{11}^2&=&10a_0a_4-40a_1a_3+30a_2^2\nonumber\\
&&+a_0(3\wp_{22}-4\wp_{13})-8a_1\wp_{12}+8a_2\wp_{11}\nonumber
\end{eqnarray}
Only one of these is linearly independent of the identities we
already have.
In Appendix 1 we summarize these identities, and in Appendix 2 we
compare them with the original, non-covariant identities of Baker
\cite{B1903}, showing that they are equivalent under a simple
transformation. To this end the identities in Appendix 1 are written
in a Baker-friendly form in which each involves only one of the
four-index objects. This is not ideal from the representation-theoretic
viewpoint, however, as the identities then do not fall naturally into
multiplets.
\section{Conclusions}
This paper establishes that the use of covariant methods for
hyperelliptic curves is a practical tool in the construction and
understanding of the partial differential equations satisfied by the
$\wp$-function. In order to do so, the definition of the
$\wp$-function has to be slightly modified in a way that does not alter
its analytic properties. The resulting covariant identities for the
$\wp_{ijk}$ and $\wp_{ijkl}$ (Appendix 1) differ in detail from
those obtained by Baker (Appendix 2) but are generic and are derived
in a straightforward, economical way with minimal use of computer
algebra and in an algorithmic manner. Because the equivalence of the
two sets of equations is by no means self evident we also specify in
Appendix 2 the transformation between the two definitions of the
$\wp_{ij}$. Given Baker's equations one could have written down the
covariant genus three equations by deducing this simple
transformation by comparing the classical and covariant polar forms.
But this would not have been a test of the machinery nor would it
have provided us with the neat expression for the quadratic, genus
three identities.
By ``minimal use of computer algebra'' we mean that the derivation of
the highest weight identities was carried out by hand. A computer
algebra programme was used to implement the actions of $\bf e$ and
$\bf f$ on these highest weight identities in order to check
covariance and to generate the full sets of identities. The other
use of computer algebra, as remarked at the time, was in rearranging
by linear superposition, the identities linear in the $\wp_{ijk}$ in
the genus three case, into the form (\ref{fourterm}).
It may be remarked that Baker's equations are a little simpler than
the covariant ones. From the current point of view this is a
simplification bought at the expense of the more abstract
simplification which incorporates the representation theory. The
drawback of the simplicity is that each identity has to be obtained
independently. The advantage of the marginally more involved
covariant set is that the fifteen identities for the $\wp_{ijkl}$
decompose into sets of nine, five and one elements from each of
which one need only find a single identity using the singularity
analysis, the others following by application of the \emph{raising}
and \emph{lowering operators}, ${\bf e}$ and ${\bf f}$.
Further, the representation theory lays bare a skeleton with
intriguing symmetries that beg further study, particularly
in view of the $\sigma$ function and hyperelliptic addition laws.
However, the most pressing issue now is to apply these methods to
the more difficult non-hyperelliptic curves of low genus.
\section{Appendix 1}
Here we summarize the four-index relations for the covariant
$\wp$-function in the genus three case. $\Delta$ denotes a quadratic
in two index functions:
$\Delta=\wp_{11}\wp_{33}-\wp_{12}\wp_{23}-\wp_{13}^2+\wp_{13}\wp_{22}.$
\begin{eqnarray}
-\wp_{3333}+6\wp_{33}^2&=&8a_6\wp_{33}-8a_7\wp_{23}+a_8(3\wp_{22}-4\wp_{13})\nonumber\\
&&+10(a_4a_8-4a_5a_7+3a_6^2)\nonumber\\
-\wp_{2333}+6\wp_{23}\wp_{33}&=&12a_5\wp_{33}-10a_6\wp_{23}+4a_7(\wp_{22}-3\wp_{13})\nonumber\\
&&+2a_8\wp_{12}\nonumber\\
&&+10(a_3a_8-3a_4a_7+2a_5a_6)\nonumber\\
-\wp_{2233}+4\wp_{23}^2+2\wp_{22}\wp_{33}&=&18a_4\wp_{33}-12a_5\wp_{23}+2a_6(3\wp_{22}-14\wp_{13})\nonumber\\
&&+4a_7\wp_{12}+2a_8\wp_{11}\nonumber\\
&&+8(a_2a_8-a_3a_7-5a_4a_6+5a_5^2)\nonumber\\
-\wp_{2223}+6\wp_{22}\wp_{23}&=&28a_3\wp_{33}-16a_4\wp_{23}+4a_5(3\wp_{22}-14\wp_{13})\nonumber\\
&&+12a_7\wp_{11}\nonumber\\
&&+4(a_1a_8+5a_2a_7-21a_3a_6+15a_4a_5)\nonumber\\
-\wp_{2222}+6\wp_{22}^2-12\Delta&=&48a_2\wp_{33}-32a_3\wp_{23}+32a_4(\wp_{22}-3\wp_{13})\nonumber\\
&&-32a_5\wp_{12}+48a_6\wp_{11}\nonumber\\
&&+a_0a_8+24a_1a_7-4a_2a_6-216a_3a_5+195a_4^2\nonumber\\
-\wp_{1333}+6\wp_{13}\wp_{33}&=&3a_4\wp_{33}-10a_6\wp_{13}+4a_7\wp_{12}-a_8\wp_{11}\nonumber\\
&&+3a_2a_8-8a_3a_7+5a_4a_6\nonumber\\
-\wp_{1233}+4\wp_{13}\wp_{23}+2\wp_{12}\wp_{33}&=&4a_3\wp_{33}+2a_4\wp_{23}-20a_5\wp_{13}+6a_6\wp_{12}\nonumber\\
&&+2(a_1a_8-6a_3a_6+5a_4a_5)\nonumber\\
-\wp_{1223}+4\wp_{12}\wp_{23}+2\wp_{13}\wp_{22}&=&6a_2\wp_{33}+4a_3\wp_{23}+2a_4(\wp_{22}-18\wp_{13})\nonumber\\
+2\Delta&&+4a_5\wp_{12}+6a_6\wp_{11}\nonumber\\
&&+\frac12(a_0a_8+16a_1a_7-36a_2a_6-16a_3a_5+35a_4^2)\nonumber\\
-\wp_{1222}+6\wp_{12}\wp_{22}&=&12a_1\wp_{33}+4a_3(3\wp_{22}-14\wp_{13})\nonumber\\
&&-16a_4\wp_{12}+28a_5\wp_{11}\nonumber\\
&&+4(a_0a_7+5a_1a_6-21a_2a_5+15a_3a_4)\nonumber\\
-\wp_{1133}+4\wp_{13}^2+2\wp_{11}\wp_{33}-2\Delta&=&4a_3\wp_{23}-a_4(\wp_{22}-12\wp_{13})+4a_5\wp_{12}\nonumber\\
&&+\frac12(a_0a_8-16a_3a_5+15a_4^2)\nonumber\\
-\wp_{1123}+4\wp_{12}\wp_{13}+2\wp_{11}\wp_{23}&=&6a_2\wp_{23}-20a_3\wp_{13}+2a_4\wp_{12}+4a_5\wp_{11}\nonumber\\
&&+2(a_0a_7-6a_2a_5+5a_3a_4)\nonumber\\
-\wp_{1122}+4\wp_{12}^2+2\wp_{11}\wp_{22}&=&2a_0\wp_{33}+4a_1\wp_{23}+2a_2(3\wp_{22}-14\wp_{13})\nonumber\\
&&-12a_3\wp_{12}+18a_4\wp_{11}\nonumber\\
&&+8(a_0a_6-a_1a_5-5a_2a_4+5a_3^2)\nonumber\\
-\wp_{1113}+6\wp_{11}\wp_{13}&=&-a_0\wp_{33}+4a_1\wp_{23}-10a_2\wp_{13}+3a_4\wp_{11}\nonumber\\
&&+3a_0a_6-8a_1a_5+5a_2a_4\nonumber\\
-\wp_{1112}+6\wp_{11}\wp_{12}&=&2a_0\wp_{23}+4a_1(\wp_{22}-3\wp_{13})\nonumber\\
&&-10a_2\wp_{12}+12a_3\wp_{11}\nonumber\\
&&+10(a_0a_5-3a_1a_4+2a_2a_3)\nonumber\\
-\wp_{1111}+6\wp_{11}^2&=&a_0(3\wp_{22}-4\wp_{13})-8a_1\wp_{12}+8a_2\wp_{11}\nonumber\\
&&+10(a_0a_4-4a_1a_3+3a_2^2)\nonumber
\end{eqnarray}
\section{Appendix 2}
Here we reproduce the genus three equations from Baker's paper
\cite{B1903}. We will denote by $\wp^{\mathfrak B}$ the traditional
genus three $\wp$-function. For ease of comparison we have also
rewritten the $\lambda_i$ coefficients of the monomials in $x$ in
the octic curve in Baker's paper in terms of the $a_i$ used above.
\begin{eqnarray}
{{\wp^{\mathfrak B}}}_{3333}-6{\wp^{\mathfrak B}}_{33}^2&=&28a_6{\wp^{\mathfrak B}}_{33}+8a_7{\wp^{\mathfrak B}}_{23}\nonumber\\
&&+a_8(4{\wp^{\mathfrak B}}_{13}-3{\wp^{\mathfrak B}}_{22})\nonumber\\
&&-35a_4a_8+56a_5a_7\nonumber\\
{\wp^{\mathfrak B}}_{2333}-6{\wp^{\mathfrak B}}_{23}{\wp^{\mathfrak B}}_{33}&=&28a_6{\wp^{\mathfrak B}}_{23}+4a_7(3{\wp^{\mathfrak B}}_{13}-{\wp^{\mathfrak B}}_{22})\nonumber\\
&&+2a_8{\wp^{\mathfrak B}}_{12}-14a_3a_8\nonumber\\
{\wp^{\mathfrak B}}_{2233}-4{\wp^{\mathfrak B}}_{23}^2-2{\wp^{\mathfrak B}}_{22}{\wp^{\mathfrak B}}_{33}&=&28a_5{\wp^{\mathfrak B}}_{23}+28a_6{\wp^{\mathfrak B}}_{13}-4a_7{\wp^{\mathfrak B}}_{12}\nonumber\\
&&-2a_8{\wp^{\mathfrak B}}_{11}-14a_2a_8\nonumber\\
{\wp^{\mathfrak B}}_{2223}-6{\wp^{\mathfrak B}}_{22}{\wp^{\mathfrak B}}_{23}&=&-28a_3{\wp^{\mathfrak B}}_{33}+70a_4{\wp^{\mathfrak B}}_{23}+56a_5{\wp^{\mathfrak B}}_{13}\nonumber\\
&&-12a_7{\wp^{\mathfrak B}}_{11}\nonumber\\
&&-4a_1a_8-56a_2a_7\nonumber\\
{\wp^{\mathfrak B}}_{2222}-6{\wp^{\mathfrak B}}_{22}^2-12\Delta&=&-84a_2{\wp^{\mathfrak B}}_{33}+56a_3{\wp^{\mathfrak B}}_{23}+70a_4{\wp^{\mathfrak B}}_{22}\nonumber\\
&&+56a_5{\wp^{\mathfrak B}}_{12}-84a_6{\wp^{\mathfrak B}}_{11}\nonumber\\
&&-392a_2a_6+392a_3a_5\nonumber\\
{\wp^{\mathfrak B}}_{1333}-6{\wp^{\mathfrak B}}_{13}{\wp^{\mathfrak B}}_{33}&=&28a_6{\wp^{\mathfrak B}}_{13}-4a_7{\wp^{\mathfrak B}}_{12}+a_8{\wp^{\mathfrak B}}_{11}\nonumber\\
{\wp^{\mathfrak B}}_{1233}-4{\wp^{\mathfrak B}}_{13}{\wp^{\mathfrak B}}_{23}-2{\wp^{\mathfrak B}}_{12}{\wp^{\mathfrak B}}_{33}&=&28a_5{\wp^{\mathfrak B}}_{13}-2a_1a_8\nonumber\\
{\wp^{\mathfrak B}}_{1223}-4{\wp^{\mathfrak B}}_{12}{\wp^{\mathfrak B}}_{23}-2{\wp^{\mathfrak B}}_{13}{\wp^{\mathfrak B}}_{22}&=&70a_4{\wp^{\mathfrak B}}_{13}-8a_1a_7-\frac12a_0a_8\nonumber\\
+2\Delta&&\nonumber\\
{\wp^{\mathfrak B}}_{1222}-6{\wp^{\mathfrak B}}_{12}{\wp^{\mathfrak B}}_{22}&=&-12a_1{\wp^{\mathfrak B}}_{33}+56a_3{\wp^{\mathfrak B}}_{13}+70a_4{\wp^{\mathfrak B}}_{12}\nonumber\\
&&-28a_5{\wp^{\mathfrak B}}_{11}\nonumber\\
&&-112a_1a_6-4a_0a_7\nonumber\\
{\wp^{\mathfrak B}}_{1133}-4{\wp^{\mathfrak B}}_{13}^2-2{\wp^{\mathfrak B}}_{11}{\wp^{\mathfrak B}}_{33}&=&-\frac12a_0a_8\nonumber\\
-2\Delta&&\nonumber\\
{\wp^{\mathfrak B}}_{1123}-4{\wp^{\mathfrak B}}_{12}{\wp^{\mathfrak B}}_{13}-2{\wp^{\mathfrak B}}_{11}{\wp^{\mathfrak B}}_{23}&=&28a_3{\wp^{\mathfrak B}}_{13}-2a_0a_7\nonumber\\
{\wp^{\mathfrak B}}_{1122}-4{\wp^{\mathfrak B}}_{12}^2-2{\wp^{\mathfrak B}}_{11}{\wp^{\mathfrak B}}_{22}&=&-2a_0{\wp^{\mathfrak B}}_{33}-4a_1{\wp^{\mathfrak B}}_{23}+28a_2{\wp^{\mathfrak B}}_{13}\nonumber\\
&&+28a_3{\wp^{\mathfrak B}}_{12}-14a_0a_6\nonumber\\
{\wp^{\mathfrak B}}_{1113}-6{\wp^{\mathfrak B}}_{11}{\wp^{\mathfrak B}}_{13}&=&a_0{\wp^{\mathfrak B}}_{33}-4a_1{\wp^{\mathfrak B}}_{23}+28a_2{\wp^{\mathfrak B}}_{13}\nonumber\\
{\wp^{\mathfrak B}}_{1112}-6{\wp^{\mathfrak B}}_{11}{\wp^{\mathfrak B}}_{12}&=&-2a_0{\wp^{\mathfrak B}}_{23}+4a_1(3{\wp^{\mathfrak B}}_{13}-{\wp^{\mathfrak B}}_{22})\nonumber\\
&&+28a_2{\wp^{\mathfrak B}}_{12}-14a_0a_5\nonumber\\
{\wp^{\mathfrak B}}_{1111}-6{\wp^{\mathfrak B}}_{11}^2&=&a_0(4{\wp^{\mathfrak B}}_{13}-3{\wp^{\mathfrak B}}_{22})+8a_1{\wp^{\mathfrak B}}_{12}\nonumber\\
&&+28a_2{\wp^{\mathfrak B}}_{11}\nonumber\\
&&-35a_0a_4+56a_1a_3\nonumber
\end{eqnarray}
The Baker equivalent of the $5\times 5$ matrix $h$ we will call
$h^{\mathfrak B}$ and since our covariant form is to be replaced by
the classical polar form $h^{\mathfrak B}$ will be given by:
\begin{tiny}
\begin{equation}
\left[\begin{array}{ccccc}
a_0&4a_1&-2{\wp^{\mathfrak B}}_{11}&-2{\wp^{\mathfrak B}}_{12}&-2{\wp^{\mathfrak B}}_{13}\\
4a_1&28a_2+4{\wp^{\mathfrak B}}_{11}&28a_3+2{\wp^{\mathfrak B}}_{12}&-2{\wp^{\mathfrak B}}_{22}+4{\wp^{\mathfrak B}}_{13}&-2{\wp^{\mathfrak B}}_{23}\\
-2{\wp^{\mathfrak B}}_{11}&28a_3+2{\wp^{\mathfrak B}}_{12}&70a_4+4{\wp^{\mathfrak B}}_{22}-4{\wp^{\mathfrak B}}_{13}&28a_5+2{\wp^{\mathfrak B}}_{23}&-2{\wp^{\mathfrak B}}_{33}\\
-2{\wp^{\mathfrak B}}_{12}&-2{\wp^{\mathfrak B}}_{22}+4{\wp^{\mathfrak B}}_{13}&28a_5+2{\wp^{\mathfrak B}}_{23}&28a_6+4{\wp^{\mathfrak B}}_{33}&4a_7\\
-2{\wp^{\mathfrak B}}_{13}&-2{\wp^{\mathfrak B}}_{23}&-2{\wp^{\mathfrak B}}_{33}&4a_7&a_8
\end{array}
\right]
\end{equation}
\end{tiny}
Consequently
\begin{eqnarray}
{\wp^{\mathfrak B}}_{11}&=&\wp_{11}-3a_2\nonumber\\
{\wp^{\mathfrak B}}_{12}&=&\wp_{12}-2a_3\nonumber\\
{\wp^{\mathfrak B}}_{13}&=&\wp_{13}-\frac12a_4\nonumber\\
{\wp^{\mathfrak B}}_{22}&=&\wp_{22}-9a_4\nonumber\\
{\wp^{\mathfrak B}}_{23}&=&\wp_{23}-2a_5\nonumber\\
{\wp^{\mathfrak B}}_{33}&=&\wp_{33}-3a_6\nonumber
\end{eqnarray}
Substitution for either $\wp$ or $\wp^{\mathfrak B}$ does indeed
transform the two sets of equations into one another.
\end{document}
\begin{document}
\title{Affine connections and symmetry jets}
\author{M\'elanie Bertelson$^\ast$, Pierre Bieliavsky$^\dagger$}
\date{\today}
\maketitle
\begin{center}
$^\ast$ Chercheur Qualifi\'e F.N.R.S. \\
D\'epartement de Math\'ematique, C.P. 218 \\
Universit\'e Libre de Bruxelles \\
Boulevard du Triomphe \\
1050 Bruxelles -- Belgique\\
{\tt [email protected]} \\
\ \\
$^\dagger$ Institut de Recherche en Math\'ematique et Physique \\
Universit\'e Catholique de Louvain \\
2, Chemin du Cyclotron \\
1348 Louvain-la-Neuve -- Belgique \\
{\tt [email protected]}
\end{center}
\begin{abstract} We establish a bijective correspondence between affine connections and a class of semi-holonomic jets of local diffeomorphisms of the underlying manifold called symmetry jets in the text. The symmetry jet corresponding to a torsion free connection consists in the family of $2$-jets of the geodesic symmetries. Conversely, any connection is described in terms of the geodesic symmetries by a simple formula involving only the Lie bracket of vector fields. We then formulate, in terms of the symmetry jet, several aspects of the theory of affine connections and obtain geometric and intrinsic descriptions of various related objects involving the gauge groupoid of the frame bundle. In particular, the property of uniqueness of affine extension admits an equivalent formulation as the property of existence and uniqueness of a certain groupoid morphism. Moreover, affine extension may be carried out at all orders and this allows for a description of the tensors associated to an affine connection, namely the torsion, the curvature and their covariant derivatives of all orders, as obstructions for the affine extension to be holonomic. In addition, this framework provides a nice interpretation for the absence of other tensors.
\end{abstract}
\tableofcontents
\intro{Introduction}
The heart of this paper consists in the observation that any torsionless affine connection $\nabla$ on a smooth manifold $M$ can be described in terms of its geodesic symmetries $(s_x)_{x\in M}$ (cf.~paragraph preceding \lref{thelemma}) by the formula~:
\begin{equation}\label{conn2}
\Bigl(\nabla_X Y\Bigr)_x = \frac{1}{2} \Bigl[X, Y + (s_x)_*Y\Bigr]_x,
\end{equation}
where $x \in M$ and $X$, $Y$ are vector fields on $M$. \\
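Though logically independent of what follows, a quick check of (\ref{conn2}) in normal coordinates centered at $x$ may be helpful. In such coordinates the geodesic symmetry is exactly $s_x(u) = -u$, so $\bigl((s_x)_*Y\bigr)^k(u) = -Y^k(-u)$, and the field $W = Y + (s_x)_*Y$ satisfies $W^k(u) = Y^k(u) - Y^k(-u)$; in particular $W_x = 0$ and $\partial_j W^k(0) = 2\,\partial_j Y^k(0)$. Hence
$$\frac{1}{2}\bigl[X, W\bigr]^k_x = \frac{1}{2}\, X^j \partial_j W^k(0) = X^j \partial_j Y^k(0) = \bigl(\nabla_X Y\bigr)^k_x,$$
the last equality because the Christoffel symbols of the torsionless connection $\nabla$ vanish at the center of normal coordinates. \\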
When the ambient manifold is endowed with the structure of a symplectic symmetric space and $\nabla$ is the unique connection admitting the symmetries as affine transformations, a similar formula, involving the symplectic structure as well as the symmetries, appeared in \cite{B} (see \rref{sss} for more details). \\
D'Atri, in \cite{D'A}, gave a proof of the consequence of the previous formula that the collection of geodesic symmetries determines the connection. More precisely, he wrote down the expression (\ref{Christoffel}) that defines the Christoffel symbols in terms of the geodesic symmetries and proved that the $\Gamma$'s so defined transform as expected under a change of coordinates. D'Atri raised the question of characterizing germs of geodesic symmetries amongst germs near the diagonal of maps $s : M \times M \to M$ for which each $s(x, \cdot)$ is an involutive diffeomorphism of a neighborhood of $x$ having $x$ as an isolated fixed point. \sref{geod-sym&loc-sym-sp} provides a partial answer to that question. It is partial in that we do not provide an algebraic characterization of the geodesic symmetries. Rather, they appear as weak integral submanifolds of a (not necessarily integrable) distribution constructed from the data of the family $(j^2_xs_x)_{x \in M}$, itself arbitrary (except for the condition $j^1_xs_x = -I$ for all $x \in M$).\\
The expression (\ref{conn2}) relies only on the family of second order jets $(j^2_xs_x)_{x\in M}$ of the geodesic symmetries. Notice that the first order part of the jet $j^2_xs_x$ coincides with the linear map $-I_x: T_xM \to T_xM : X \to -X$. It is also true that a family of $2$-jets whose first order coincides with the family $-I = (-I_x)_{x \in M}$, called in the text a {\bf holonomic symmetry jet}, induces a torsionless affine connection. Moreover this correspondence between torsionless affine connections and holonomic symmetry jets is one-to-one (\tref{conn<-->sym-jet}). \\
This procedure may be enlarged to encompass affine connections with torsion. One only needs to consider semi-holonomic symmetry jets as well. Recall that a \emph{non-holonomic} $(1,1)$-jet $j^1_xb$ is the first jet at a point $x$ of a family $b(x') = j^1_{x'}f_{x'}$ of $1$-jets (cf.~\cite{Ehresmann}, \cite{Lib}, see also \aref{jet-of-bis}). It is said to be \emph{semi-holonomic} when $b(x) = j^1_xb^0$ with $b^0(x') = f_{x'}(x')$ and \emph{holonomic} when $j^1_xb$ is a genuine $2$-jet. As stated in \pref{symm-jet<-->conn}, there is a bijective correspondence between semi-holonomic symmetry jets and affine connections that extends the above-mentioned correspondence for torsionless affine connections. \\
One of our purposes from there on has been to understand how the various actors of the theory of affine connections are related to the symmetry jet. Often, although not always, the results appearing in the text are known, but their formulation is geometric and differs from the usual one. Moreover, all proofs are thoroughly intrinsic, shedding light on the true nature of the objects involved and their relationship to one another. A very simple instance thereof appears in \sref{UAE}, where affine transformations are described as leaves of a certain distribution on a groupoid naturally associated to $M$. Also, some results become quasi-obvious and consist simply in \emph{looking}, like the fact that for a locally symmetric space, the geodesic symmetries are affine transformations (cf.~\sref{lss}). \\
Another aim is to render precise the intuition that at order two an affine manifold $(M, \nabla)$ should be a locally symmetric space in the sense that at each point $x$ in $M$, there should exist a canonical germ of locally symmetric space osculatory to $M$ at $x$. The treatment of affine connections carried out in the text seems to bring us closer to that goal.\\
Coming back to the content of the paper, two important actors in the theory of connections are the torsion and the curvature tensors. A natural and short path from symmetry jets to these goes by the property of uniqueness of affine extension and its reformulation in terms of groupoids. Let us recall the classical statement. \\
{\bf Uniqueness of affine extension~:} On a manifold $M$ endowed with a torsionless affine connection $\nabla$, any linear isomorphism $\xi : T_xM \to T_yM$ may be uniquely extended to an affine $2$-jet. That is, there exists a $2$-jet $j^2_xf$ that satisfies $j^1_xf = \xi$ and
$$f_{*_x} \Bigl(\nabla_{X_x} Y \Bigr) = \nabla\bigl._{f_{*_x} X_x} f_* Y.$$
We show that this property is valid for affine connections with torsion as well and moreover admits a nice reformulation and proof in terms of groupoids. To clarify our point, we need a couple of paragraphs. \\
The general linear groupoid of the tangent bundle of $M$, also called the gauge groupoid of the frame bundle and commonly denoted by $GL(TM) \rightrightarrows M$, is the set of all linear isomorphisms $\xi : T_xM \to T_yM$ between tangent spaces to $M$ (for a good treatment of groupoids, we recommend either \cite{Mck-87}, \cite{Mck-05} or \cite{dSW}). The source and target of $\xi$, denoted respectively by $\alpha(\xi)$ and $\beta(\xi)$, are $x$ and $y$. Observe that it is indeed a groupoid as only those pairs $(\xi, \xi')$ of linear isomorphisms for which the source of $\xi$ coincides with the target of $\xi'$ can be composed. All properties of Lie groupoids are satisfied by $GL(TM) \rightrightarrows M$ (see \cite{Mck-05}, 1.1.12). \\
The first jet extension of a Lie groupoid $G \rightrightarrows M$, denoted by $\b^{(1)}(G)\rightrightarrows M$, is defined to be the collection of all $1$-jets of local bisections of $G$ (see \aref{jet-of-bis}). It is naturally endowed with the structure of a Lie groupoid over $M$. The notion of $k$-jet extension is defined similarly. This procedure can be iterated and the groupoids $\b^{(l)}(\b^{(k)}(G))$, $\b^{(m)}(\b^{(l)}(\b^{(k)}(G)))$, ... are denoted by $\b^{(l,k)}(G)$, $\b^{(m,l,k)}(G)$,... respectively. There are some noticeable inclusions like~: $\b^{(2)}(G) \subset \b^{(1,1)}(G)$. Now if $G$ is the pair groupoid $\PP(M) = M \times M \rightrightarrows M$, then $\b^{(1)}(G)$ coincides with $GL(TM)$ and a semi-holonomic symmetry jet is a section $\fs : M \to \b^{(1,1)}(\PP(M))$ whose first order part is $-I$.
\\
Now we can reformulate the property of uniqueness of affine extension in the way that is useful for us. \\
\noindent
{\bf Uniqueness of affine extension bis~:} If $\fs : M \to \b^{(1,1)}(\PP(M))$ is a symmetry jet there exists a unique groupoid morphism
$$S : \b^{(1)}(\PP(M)) \to \b^{(1,1)}(\PP(M))$$
that satisfies $S \circ (-I) = {\mathfrak s}$. Moreover, the image of $S$ consists of all affine jets. \\
The map $S$ plays a central role here. It can be thought of as a distribution on $\b^{(1)}(\PP(M))$ through the simple and well-known observation that the data of a $(1,1)$-jet $j^1_xb$ is equivalent to that of the plane $D(j^1_xb) = b_{*_x} (T_xM)$ tangent to $\b^{(1)}(\PP(M))$. Thus the map $S$ is, in another guise, the distribution $\d$ on $\b^{(1)}(\PP(M))$ defined by $\d_\xi = D(S(\xi))$ and called the affine distribution. Now affine transformations of $M$ appear as leaves of $\d$ that are bisections of $\b^{(1)}(\PP(M))$.\\
If the connection is torsionless, an affine jet is necessarily holonomic, i.e.~is a genuine $2$-jet. If the connection has torsion, this is no longer true, but some affine jets may still be holonomic. A semi-holonomic $(1,1)$-jet is holonomic when it is fixed under the canonical involution of $\b^{(1,1)}(\PP(M))$ that permutes the derivations. The torsion appears as a measure of the defect of holonomy, as the next result makes clear~: \\
{\bf \pref{prop-torsion}} The jet $S(\xi)$ is holonomic exactly when $\xi$ preserves the torsion $T$. More precisely, if $X$ and $Y$ are vector fields on $M$ and $x\in M$, setting $\X = Y_{*_x} X_x \in T_{Y_x}TM$, the relation
\begin{equation}\label{intro-torsion}
S(\xi) \cdot \X - \kappa(S(\xi)) \cdot \X = \xi\Bigl(T(X_x, Y_x)\Bigr) - T\Bigl(\xi(X_x), \xi(Y_x)\Bigr)
\end{equation}
holds, where the $\cdot$ refers to the natural action of $\b^{(1,1)}(\PP(M))$ on $T^2M$ (derived from the action of $\b^{(1)}(\PP(M))$ on $TM$). \\
Particularizing (\ref{intro-torsion}) to $\xi = -I_x$, we obtain a formula for the torsion in terms of the symmetry jet~${\mathfrak s}$~:
$$T (X_x, Y_x) = \frac{1}{2} \Bigr(\kappa\bigl({\mathfrak s}(x)\bigr) \cdot \X - {\mathfrak s}(x) \cdot \X \Bigr).$$
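Indeed, $S(-I_x) = {\mathfrak s}(x)$ by the defining property of $S$, and bilinearity of the torsion gives
$$-I_x\Bigl(T(X_x, Y_x)\Bigr) - T\Bigl(-X_x, -Y_x\Bigr) = -2\, T(X_x, Y_x),$$
so that (\ref{intro-torsion}) becomes ${\mathfrak s}(x) \cdot \X - \kappa({\mathfrak s}(x)) \cdot \X = -2\,T(X_x, Y_x)$, which is the formula above. \\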
Since the curvature involves the second order covariant derivative, it seems natural to investigate affine extensions of the next order. It is not difficult to prove inductively that affine extensions of all orders exist (\pref{uaebis} and first paragraph of \sref{why-there-are-no}). In particular, there exists a groupoid morphism
$${\mathbb S} : \b^{(1)}(\PP(M)) \to \b^{(1,1,1)}(\PP(M))$$
whose image is the collection of all affine $(1,1,1)$-jets, i.e.~jets that preserve the second covariant derivative with respect to $\nabla$ (\dref{affine111jet}). Here again an affine $(1,1,1)$-jet need not be a $3$-jet and two natural involutions on $\b^{(1,1,1)}(\PP(M))$, denoted $\kappa$ and $\kappa_*$, generate the group $S_3$ of all permutations of the derivations. The $3$-jets are the fixed points of both $\kappa$ and $\kappa_*.$ \\
Now an affine $(1,1,1)$-jet ${\mathbb S}(\xi)$, for which $S(\xi)$ is holonomic, is $\kappa$-invariant if and only if $\xi$ preserves the curvature tensor $R$. More precisely, the following formula appears in \pref{prop-curvature}~:
\begin{equation}\label{curvature}
{\mathbb S}(\xi) \cdot {\mathfrak X} - \kappa ({\mathbb S}(\xi)) \cdot {\mathfrak X} = \xi \Bigl(R(X_x, Y_x)Z_x\Bigr) - R(\xi X_x, \xi Y_x) \; \xi Z_x,
\end{equation}
where $\IX = Z_{**_{Y_x}} Y_{*_x}X_x \in T_{Z_{*_x}Y_x}T^2M$ for vector fields $X$, $Y$, $Z$ on $M$ and some point $x$ in $M$. Here $\cdot$ stands for the action of $\b^{(1,1,1)}(\PP(M))$ on $T^3M$. \\
How about invariance under $\kappa_*$~? What is the obstruction~? It is the first covariant derivative of the torsion tensor, as the following formula, a direct consequence of (\ref{intro-torsion}), implies~:
$${\mathbb S}(\xi) \cdot {\mathfrak X} - \kappa_*({\mathbb S}(\xi)) \cdot {\mathfrak X} = \xi \Bigl( (\nabla_{Z_x}T^\nabla) (X_x, Y_x) \Bigr) - \Bigl(\nabla_{\xi Z_x}T^\nabla\Bigr)(\xi X_x, \xi Y_x).$$
To summarize, an affine $(1,1,1)$-jet is a genuine $3$-jet if and only if its first order part preserves the torsion, its first covariant derivative and the curvature. \\
Setting $\xi = a I$, with $a \neq 0, 1, -1$, in the first of these relations, we obtain a description of the curvature of a torsion free connection $\nabla$ in terms of the affine extension map~:
$$R(X_x, Y_x)\,Z_x = \frac{1}{a (1 - a^2)} \Bigl({\mathbb S}(aI_x) \cdot {\mathfrak X} - \kappa ({\mathbb S}(aI_x)) \cdot {\mathfrak X}\Bigr).$$
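The coefficient is readily checked: with $\xi = aI_x$, trilinearity of $R$ reduces the right-hand side of (\ref{curvature}) to
$$aI_x\Bigl(R(X_x, Y_x)Z_x\Bigr) - R(aX_x, aY_x)\, aZ_x = (a - a^3)\, R(X_x, Y_x)\,Z_x = a(1 - a^2)\, R(X_x, Y_x)\,Z_x,$$
whence the formula, for any $a \neq 0, 1, -1$. \\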
The following alternative description, whose advantage is that it involves only the symmetry jet, is proved in \tref{curvthm}~:
$$R(X_x, Y_x) Z_x = \frac{1}{4} \Bigl(\kappa(j^1_x {\mathfrak s}) \cdot j^1_x {\mathfrak s} \cdot \IX - \;j^1_x {\mathfrak s} \cdot \kappa(j^1_x {\mathfrak s}) \cdot \IX \Bigr).
$$
It is tempting (and even true provided the symbols are suitably defined) to write
$$R = \frac{1}{4}\bigl[ \kappa(j^1\fs), j^1\fs\bigr].$$
We said earlier that affine extensions exist at all orders. A natural question therefore is~: ``what happens at order $4$ and higher~?" The $\kappa$-invariance of an affine jet of order at least $4$ is automatic. A simple explanation for that phenomenon, which relies on the formalism developed previously, is given in \sref{why-there-are-no}. It provides in particular an elementary justification for the well-known fact that there is no other natural tensor in affine geometry than the torsion and the curvature tensors (as well as their covariant derivatives of all orders of course). \\
Given a family of $TM$-valued covariant tensors $\{Q_i; i \in I\}$ on $M$, the closed set of $1$-jets that preserve all of them is a Lie subgroupoid of $\b^{(1)}(\PP(M))$ denoted hereafter by $\b(\{Q_i; i \in I\})$. Thus, considering the torsion and curvature tensors $T$ and $R$ of an affine connection, a collection of Lie subgroupoids of $\b^{(1)}(\PP(M))$ naturally appears~: $\b(T)$, $\b(T, \nabla T, R)$, etc. Their intersection
$$\b_o = \b(T, R, ..., \nabla^kT, \nabla^kR, ...)$$
is the largest subset of $\b^{(1)}(\PP(M))$ entirely foliated by leaves (of maximal dimension) of the associated affine distribution $\d$ and containing all of them.\\
In particular, when the affine connection is locally symmetric, that is satisfies $T = 0$ and $\nabla R=0$, the groupoid $\b_o$ coincides with $\b(R)$. Thus through any $1$-jet $\xi$ that preserves $R$ passes a leaf of $\d$ which is necessarily the $1$-jet extension of a partial affine transformation $\varphi_\xi$ of $(M, \nabla)$. As proved in \sref{geod-sym&loc-sym-sp}, the map $\varphi_\xi$ is the geodesic extension of $\xi$, that is
$$\varphi_\xi = \exp_y \circ \; \xi \circ \exp_x^{-1}.$$
As a by-product, we can reprove existence and uniqueness of the Levi-Civita connection of a pseudo-Riemannian metric (\pref{LC}). The specificity of pseudo-Riemannian metrics, compared to other tensors like symplectic forms, is that the collection of $2$-jets extending $-I_x$ has the same dimension as the set of $1$-jets of symmetric tensors extending a given non-degenerate symmetric tensor~$g_x$. \\
We also compare our approach with Kobayashi's correspondence between torsionless affine connections and admissible sections (\cite{K}). The latter can easily be extended to connections with torsion by enlarging the class of admissible sections, and we show how an ``admissible'' section naturally induces a symmetry jet and vice versa. In the same spirit, affine connections are a class of Lie algebroid connections and we describe the correspondence between the latter and symmetry jets. \\
The paper is organized as follows. There is a large appendix that contains all the relevant material with our preferred notation about groupoids and groupoids of jets of bisections. The detailed structure of $\b^{(1,1)}(\PP(M))$ and $\b^{(1,1,1)}(\PP(M))$, as well as that of the second and third tangent bundles $T^2M$ and $T^3M$ is investigated there. The idea is to read the appendix alongside the main text. As to the latter, we begin, in Section 1, with the correspondence between torsionless affine connections and holonomic symmetry jets. The case of affine connections with torsion is treated in Section 2. Section 3 contains a reformulation of the property of uniqueness of affine extensions in terms of groupoids and introduces the distribution $\d$. The correspondence between the defect of holonomy of the affine extension and the defect of invariance of the torsion is treated in Section 4. The description (\ref{curv-j1s}) of the curvature in terms of the first jet extension of the symmetry jet is established in Section 5. The property of uniqueness of affine extension at order $3$ is handled in Section 6. Section 7 proves the correspondence between the defect of $\kappa$-invariance (respectively $\kappa_*$-invariance) of the affine extension of order $3$ of a $1$-jet $\xi$ and the defect of invariance of the curvature (respectively first covariant derivative of the torsion) under $\xi$ and describes geometrically the integrability locus of $\d$. Section 8 investigates the question of existence and holonomy of order $4$ affine extensions. It is proven there that the $\kappa$-invariance holds automatically, explaining why no new tensor appears. Section 9 describes the horizontal distribution on an associated bundle induced from an affine connection. An alternative proof of existence and uniqueness of the Levi-Civita connection is presented in Section 10. 
A geometric correspondence between a symmetry jet and the Lie algebroid connection associated to the induced affine connection is shown in Section 11. The geometric description of parallel transport and geodesics appears in Section 12. Whence follows, in Section 13, the construction in terms of $\d$ of the $1$-jet extension of the map $\varphi_\xi = \exp_y \circ \; \xi \circ \exp_x^{-1}$ for some linear isomorphism $\xi : T_xM \to T_yM$. That section also contains the proof of the property that for a locally symmetric space, through any $\xi$ in $\b^{(1)}(\PP(M))$ that preserves the curvature passes a leaf of $\d$, which is then necessarily $j^1\varphi_\xi$. Section 14 deals with Kobayashi's correspondence between torsionless affine connections and admissible sections of the second order frame bundle.
\section*{Acknowledgments}
We wish to warmly thank Wolfgang Bertram, Wilhelm Klingenberg, the late Shoshichi Kobayashi, Alan Weinstein and Joe Wolf for very stimulating conversations regarding the content of the paper. We are grateful to Yannick Voglaire for pointing out D'Atri's article to us. Last but not least, Kirill MacKenzie has made numerous thoughtful suggestions to improve the presentation of the paper. We thank him heartily.
\section{Torsionless affine connections as symmetry jets}
An affine connection on a smooth manifold $M$ is commonly defined to be a $\mathbb R$-bilinear map
$$\nabla: \IX(M) \times \IX(M) \to \IX(M) : (X, Y) \mapsto \nabla_X Y$$
which is $C^\infty(M)$-linear in the first argument and satisfies the Leibniz rule in the second argument, that is, for all $X$, $Y$ in $\IX(M)$ and all $f \in C^\infty(M)$, we have
\begin{enumerate}
\item[-] $\nabla_{fX}Y = f \nabla_X Y$,
\item[-] $\nabla_X(fY) = Xf \; Y + f \nabla_X Y$.
\end{enumerate}
The previous relations suggest thinking of $(\nabla_X Y)_x$ as a derivative of $Y$ in the direction of $X_x$. Such a derivative exists already~: it is simply $Y_{*_x}X_x$ ($Y$ is thought of as a section $M \to TM$ of the tangent bundle). The point is that $Y_{*_x}X_x$ lies in the second tangent bundle $T^2M$, while we would like to have an element of $TM$. In fact, an affine connection is really a projection from $T^2M$ to $TM$ and the associated horizontal distribution on $TM$ is its ``kernel''. A precise statement that relies on notations introduced in \aref{second-t-b} is the content of the next lemma.
\begin{lem}\label{tilde-nabla} An affine connection on the manifold $M$ is a map
$$\widetilde{\nabla} : T^2M \to TM : \X \mapsto \widetilde{\nabla}(\X)$$
such that
\begin{enumerate}
\item[$\bullet$] $\widetilde{\nabla}$ is a morphism of vector bundles over the map $p : TM \to M$ when $T^2M$ is endowed with the vector bundle structure associated to either $p$ or $p_*$.
\item[$\bullet$] $\widetilde{\nabla} (i^p_{0_x}(V_x)) = V_x$.
\end{enumerate}
The correspondence with the classical definition of affine connection goes through the relation~:
\begin{equation}\label{alt-conn}
\nabla_{X_x}Y = \widetilde{\nabla}(Y_{*_x}X_x).
\end{equation}
\end{lem}
\noindent{\bf Proof}. An affine connection $\nabla$ induces a map $\widetilde{\nabla} : T^2M \to TM$ defined by (\ref{alt-conn}) on non-vertical vectors, which can be extended by $\widetilde{\nabla}(i^p_{0_x}(Y_x)) = Y_x$ on vertical vectors.
It is not difficult to verify the linearity conditions except for the case of two vectors $\X_1$ and $\X_2$ in the same $p$-fiber whose $p_*$-projections are linearly dependent (as we may not assume that $\X_1$ and $\X_2$ are of type $Y_{*}(X)$ for the same local vector field $Y$). Suppose then that $\X_1 = {Y_1}_{*}(X_x)$ and $\X_2 = {Y_2}_*(a X_x)$ for some $a \in \mathbb R_0$. Then it is not difficult to verify that
$$\X_1 + \X_2 = (1+a) \frac{d}{dt} \Bigl( \frac{1}{(1+a)} Y_1^t + \frac{a}{(1 + a)} Y_2^t \Bigr)\Bigr|_{t = 0}.$$
Hence $\X_1 + \X_2 = Y_*((1+a)X_x)$ with
$$Y = \frac{1}{1+a} Y_1 + \frac{a}{1 + a} Y_2.$$
Verifying now that $\widetilde{\nabla}(\X_1 + \X_2) = \widetilde{\nabla}(\X_1) + \widetilde{\nabla}(\X_2)$ is easy. \\
Conversely, given a map $\widetilde{\nabla}$ as in the statement of the lemma, defining $\nabla$ through (\ref{alt-conn}) yields an affine connection. Indeed, the Leibniz rule is the only point that might not seem to follow immediately. It is a direct consequence of \rref{Leibniz} according to which
$$(fY)_{*_x} (X_x) = X_xf \; Y_x + m_{f(x)*}\Bigl( Y_{*_x}X_x \Bigr),$$
where $X_x f \; Y_x$ really means $I(f(x)Y_x, X_xf Y_x) = i(f(x)Y_x) +_* i_{0_x}(X_xf \; Y_x)$.
\cqfd
\begin{rmk}\label{hor-dist}
The horizontal distribution $\h = \h^{\nabla}$ associated to the connection $\nabla$ is the ``kernel'' of $\widetilde{\nabla}$, that is
$$\h = \widetilde{\nabla}^{-1}(0_{TM}).$$
\end{rmk}
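The content of \lref{tilde-nabla} and \rref{hor-dist} can be made explicit in a chart; the following coordinate description is added as an illustration (with the usual identifications and with the sign convention $\nabla_XY = DY\,X + \Gamma(X, Y)$) and is not needed in the sequel.
\begin{rmk}
In a chart identifying $M$ with an open subset of $\mathbb R^n$, write an element of $T^2M$ as a quadruple $(x, v; w, z)$, the derivative at $0$ of a curve $t \mapsto (x(t), v(t))$ in $TM$ with $x(0) = x$, $v(0) = v$, $\dot x(0) = w$, $\dot v(0) = z$. Then $Y_{*_x}X_x = (x, Y(x); X_x, DY(x)X_x)$ and the map $\widetilde{\nabla}$ associated to a connection with Christoffel symbols $\Gamma^k_{ij}$ reads
$$\widetilde{\nabla}(x, v; w, z) = z + \Gamma_x(w, v), \qquad \Gamma_x(w, v)^k = \Gamma^k_{ij}(x)\, w^i v^j,$$
which is indeed linear in $(w, z)$ for fixed $(x, v)$ and in $(v, z)$ for fixed $(x, w)$, and restricts to the identity on vertical vectors $(x, 0; 0, V)$ over the zero section. The horizontal distribution of \rref{hor-dist} becomes $\h_{(x,v)} = \{(x, v; w, -\Gamma_x(w, v))\}$.
\end{rmk}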
Now, let $(s_x)_{x \in M}$ be a smooth family of smooth local diffeomorphisms of $M$ such that $s_x$ is defined near $x$ and satisfies $s_x(x) = x$ and $(s_x)_{*_x} = - \id$. We consider the bilinear map $\nabla : \IX(M) \times \IX(M) \to \IX(M) : (X, Y) \mapsto \nabla_XY$ defined by the formula
\begin{equation}\label{connection}
\Bigl(\nabla_X Y\Bigr)_x = \frac{1}{2} \Bigl[X, Y + (s_x)_*(Y)\Bigr]_x,
\end{equation}
where $x \in M$ and $X, Y \in {\mathfrak X}(M)$. To be precise, the vector field $Y + (s_x)_*(Y)$ achieves the value $Y_{x'} + (s_x)_{*_{s_x^{-1}(x')}}(Y_{s_x^{-1}(x')})$ at the point $x'$.
\begin{prop} Formula (\ref{connection}) defines a torsionless affine connection on $M$.
\end{prop}
\noindent{\bf Proof}. Let us verify that the three conditions defining a torsionless affine connection are satisfied. First of all $\nabla_{fX}Y = f\nabla_XY$ because the vector field $Y + (s_x)_*(Y)$ vanishes at $x$. \\
To prove the condition $\nabla_{X} fY = Xf \; Y + f \nabla_X Y$, observe that $(s_x)_*(fY) = (f \circ s_x^{-1}) (s_x)_*Y$. Hence
$$[X, (s_x)_*(fY)] = X(f \circ s_x^{-1}) (s_x)_*Y + (f \circ s_x^{-1})[X, (s_x)_*Y],$$
which evaluated at $x$ yields
$$[X, (s_x)_*(fY)]_x = X_x f Y_x + f(x) [X, (s_x)_*Y]_x.$$
Then
$$\begin{array}{ccl}
2\Bigl(\nabla_XfY \Bigr)_x & = & \Bigl[X, fY + (s_x)_*(fY)\Bigr]_x\\
& = & X_xf Y_x + f(x) \Bigl[X, Y\Bigr]_x + X_x f Y_x + f(x) \Bigl[X, (s_x)_*(Y)\Bigr]_x\\
& = & 2 X_xf \; Y_x + 2 f(x) \Bigl(\nabla_X Y \Bigr)_x
\end{array}$$
Finally the torsion $T^\nabla(X,Y) = \nabla_X Y - \nabla_Y X - [X, Y]$ vanishes because
$$\begin{array}{ccl}
\Bigl(\nabla_X Y - \nabla_Y X \Bigr)_x & = & \frac{1}{2} \Bigl[X, Y + (s_x)_*Y\Bigr]_x + \frac{1}{2} \Bigl[X + (s_x)_*X, Y\Bigr]_x \\
& = & \Bigl[X, Y\Bigr]_x + \frac{1}{2} \Bigl[X + (s_x)_*(X), Y + (s_x)_*(Y)\Bigr]_x \\
& = & \Bigl[X, Y\Bigr]_x.
\end{array}$$
\cqfd
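As an elementary sanity check, added for illustration, one may evaluate formula (\ref{connection}) on flat space.
\begin{rmk}
Take $M = \mathbb R^n$ with the point reflections $s_x(y) = 2x - y$, so that $s_x(x) = x$ and $(s_x)_{*_x} = -\id$. Then $\bigl((s_x)_*Y\bigr)(y) = -Y(2x - y)$, so the vector field $Z = Y + (s_x)_*(Y)$ satisfies $Z(x) = 0$ and $DZ(x) = 2\, DY(x)$. Formula (\ref{connection}) therefore yields
$$\Bigl(\nabla_X Y\Bigr)_x = \frac{1}{2} \Bigl[X, Z\Bigr]_x = \frac{1}{2}\, DZ(x)\, X_x = DY(x)\, X_x,$$
the standard flat connection on $\mathbb R^n$, whose geodesic symmetries are precisely the reflections $s_x$; accordingly, the second derivatives of $s_x$ vanish, in agreement with (\ref{Christoffel}) below.
\end{rmk}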
\begin{rmk}
The Christoffel symbols of the connection $\nabla$ with respect to local coordinates $x^1, ..., x^n$ around $x$ are
\begin{equation}\label{Christoffel}
\Gamma_{ij}^k (x) = - \frac{1}{2} \frac{\partial^2s_x^k}{\partial x^i \partial x^j} (x).
\end{equation}
\end{rmk}
Observe that, of the whole family $(s_x)_{x \in M}$, only the family $(j^2_xs_x)_{x \in M}$ of second-order jets of the $s_x$ at $x$ plays a role. In other words, any section
$${\mathfrak s} : M \to \b^{(2)}(\PP(M))$$ of the bundle $\alpha : \b^{(2)}(\PP(M)) \to M$ of $2$-jets of local diffeomorphisms of $M$ (cf.~\nref{b1} in \aref{jet-of-bis}) that projects onto the section
$$-I : M \to \b^{(1)}(\PP(M)) : x \mapsto [- I_x : X \mapsto -X]$$
via the canonical projection $p : \b^{(2)}(\PP(M)) \to \b^{(1)}(\PP(M))$ determines a connection $\nabla^{\mathfrak s}$.
\begin{dfn} A section ${\mathfrak s} : M \to \b^{(2)}(\PP(M))$ such that $p \circ {\mathfrak s} = -I$ is called hereafter a holonomic symmetry jet.
\end{dfn}
\begin{rmk}\label{sss} As described in \cite{B}, the canonical connection of a symplectic symmetric space $(M, (s_x)_{x \in M}, \omega)$ admits an expression similar to (\ref{connection}). More precisely, if $X$, $Y$ and $Z$ are vector fields on $M$, then the following expression defines both the unique $s_x$-invariant symplectic connection on the symplectic symmetric space $(M, (s_x)_{x \in M}, \omega)$ and the connection $\nabla^{\mathfrak s}$ associated to the symmetry jet ${\mathfrak s}(x) = j^2_xs_x$~:
\begin{equation}
\omega_x\Bigl(\nabla_X Y, Z\Bigr) = \frac{1}{2} X_x\Bigl(\omega(Y + (s_x)_*Y, Z)\Bigr).
\end{equation}
Indeed, it is a consequence of the following short computation
$$\begin{array}{ccl}
0 & = & \bigl(\nabla_X\omega\bigr)_x\Bigl(Y + (s_x)_*Y, Z\Bigr) \\
& = & X_x\Bigl(\omega(Y + (s_x)_*Y, Z)\Bigr) - \omega_x\Bigl(\nabla_X(Y + (s_x)_*Y), Z\Bigr) \\
& = & X_x\Bigl(\omega(Y + (s_x)_*Y, Z)\Bigr) - \omega_x\Bigl([X, Y + (s_x)_*Y], Z\Bigr).
\end{array}$$
\end{rmk}
\begin{dfn} A diffeomorphism $\varphi$ of a manifold $M$ endowed with an affine connection $\nabla$ is said to be affine if
$$\varphi_* \Bigl(\nabla_XY\Bigr) = \nabla_{\varphi_*X} \varphi_*Y$$
holds for all $X$, $Y$ in $\IX(M)$. Likewise the $2$-jet $j^2_x\varphi$ of a local diffeomorphism $\varphi : U \subset M \to V\subset M$ at a point $x$ of its domain is said to be affine if the previous relation holds at $x$, that is~:
$$\varphi_{*_x} \Bigl(\nabla_{X_x}Y\Bigr) = \nabla_{\varphi_{*_x}X_x} \varphi_*Y.$$
\end{dfn}
\begin{lem}\label{inv} The connection $\nabla^{\mathfrak s}$ admits ${\mathfrak s}(x)$ as an affine $2$-jet.
\end{lem}
\noindent{\bf Proof}. $$\begin{array}{ccl}
2\Bigl(\nabla^{\mathfrak s}_X Y\Bigr)_x & = & \Bigl[X, Y + (s_x)_*Y\Bigr]_x \\
& = & - (s_x)_{*_x} \Bigl[X, Y + (s_x)_*Y\Bigr]_x \\
& = & - \Bigl[(s_x)_*X, (s_x)_*Y + (s_x)_* \circ (s_x)_*Y\Bigr]_x \\
& = & - \Bigl[- X, (s_x)_*Y + (s_x)_* \bigl( (s_x)_*Y \bigr) \Bigr]_x \\
& = & 2 \Bigl(\nabla^{\mathfrak s}_X (s_x)_*Y\Bigr)_x.
\end{array}$$
\cqfd
\begin{rmk}
Given a whole family $(s_x)_{x \in M}$ of smooth diffeomorphisms of $M$ satisfying $s_x(x) = x$ and $(s_x)_{*_x} = -\id$, the connection $\nabla^{\mathfrak s}$, for ${\mathfrak s}(x) = j^2_xs_x$, is not globally $s_x$-invariant unless the $s_x$'s satisfy additional relations of a global nature. A typical example is a symmetric space, where the symmetries $s_x$ satisfy $s_x \circ s_x = \id$ and $s_y \circ s_x \circ s_y = s_{s_y(x)}$. In that case, the associated connection is globally $s_x$-invariant. Indeed,
$$\begin{array}{ccl}
\Bigl(\nabla_{(s_y)_*X} (s_y)_*Y\Bigr)_x
& = & \frac{1}{2} \Bigl[(s_y)_*X, (s_y)_*Y + (s_x)_* \circ (s_y)_*Y\Bigr]_x \\
& = & \frac{1}{2} \Bigl[(s_y)_*X, (s_y)_*Y + (s_y)_* \circ (s_{s_y(x)})_*Y\Bigr]_x \\
& = & \frac{1}{2} (s_y)_{*_{s_y(x)}}\Bigl[X, Y + (s_{s_y(x)})_*Y\Bigr]_{s_y(x)} \\
& = & (s_y)_{*_{s_y(x)}} \Bigl(\nabla_{X} Y\Bigr)_{s_y(x)}
\end{array}$$
\end{rmk}
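The following concrete instance of the previous remark is added for illustration.
\begin{rmk}
On the round sphere $S^n \subset \mathbb R^{n+1}$, let $s_x(y) = 2\langle x, y\rangle x - y$ be the restriction to $S^n$ of the orthogonal reflection through the line $\mathbb R x$. Then $s_x(x) = x$, $(s_x)_{*_x}v = 2\langle x, v\rangle x - v = -v$ for $v \in T_xS^n$, and $s_x \circ s_x = \id$. Moreover, each $s_y$ is an isometry sending $x$ to $s_y(x)$, and conjugating the point symmetry at $x$ by this isometry yields the point symmetry at $s_y(x)$, that is, $s_y \circ s_x \circ s_y = s_{s_y(x)}$. The computation above thus applies, and the associated connection --- the Levi-Civita connection of the round metric --- is globally $s_x$-invariant.
\end{rmk}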
We have obtained so far a connection from a symmetry jet. On the other hand, a connection induces a family of local diffeomorphisms, its geodesic symmetries. More precisely, let $\exp : \OO \subset TM \to M$ be the exponential map associated to the connection $\nabla$, that is, the map that sends a tangent vector $X$ to the value at time one of the geodesic tangent to $X$. Here $\OO$ is assumed to be some neighborhood of the $0$-section in $TM$ on which $\exp$ is defined and such that if $\OO_x$ denotes the intersection of $\OO$ with $T_xM$, then $-\OO_x = \OO_x$ and the restriction of $\exp$ to $\OO_x$ --- denoted by $\exp_x$ --- is a diffeomorphism onto some open subset $\U_x$ of $M$. The geodesic symmetry at $x$ associated to $\nabla$ is the local involutive diffeomorphism $a^\nabla_x : \U_x \to \U_x : y \mapsto \exp_x(- \exp_x^{-1} (y))$. As can be expected, the $2$-jet at $x$ of the geodesic symmetry of the connection $\nabla^{\mathfrak s}$ coincides with ${\mathfrak s}$.
\begin{lem}\label{thelemma} If $a_x$ denotes the geodesic symmetry at $x$ induced by the connection $\nabla^{\mathfrak s}$ associated to the symmetry jet ${\mathfrak s}$, then $$j^2_x a_x = {\mathfrak s}(x).$$
\end{lem}
\noindent{\bf Proof}. Let $\gamma(t) = \exp_x(tX_x)$ be a geodesic with tangent vector field $X_{\gamma(t)} = \frac{d\gamma}{dt}(t)$. The latter satisfies
$$0 = \nabla_{X_x} X = \frac{1}{2} \Bigl[X, X + (s_x)_*X\Bigr]_x.$$
Equivalently
$$0 = X_x\Bigl((X + (s_x)_*X)(f)\Bigr) = X_x(Xf) + X_x\Bigl( (s_x)_*X(f)\Bigr),$$
for all $f \in C^\infty(M)$. Developing the right hand side of the previous equality yields
$$\begin{array}{rcl}
0 & = & \displaystyle{X_x(Xf) - (s_x)_{*_x}X_x \Bigl((s_x)_*X(f)\Bigr)} \\
& = & - \displaystyle{\frac{d}{dt}\exp_x(-tX_x)\Bigl|_{t=0} (Xf) - \frac{d}{dt} s_x \circ \exp_x(tX_x)\Bigl|_{t=0} (s_x)_*X(f)} \\
& = & \displaystyle{\frac{d}{dt} -X_{\exp_x(-tX_x)} (f) \Bigl|_{t=0} - \;\frac{d}{dt}( (s_x)_*X)_{s_x \circ \, \exp_x(tX_x)}(f)\Bigl|_{t=0}} \\
& = & \displaystyle{\frac{d}{dt} \frac{d}{ds} f \circ \exp_x(-sX_x)\Bigl|_{s=t} \Bigl|_{t=0} - \;\frac{d}{dt}(s_x)_*(X_{\exp_x(tX_x)})(f)\Bigl|_{t=0}} \\
& = & \displaystyle{\frac{d^2}{dt^2} f \circ \exp_x(-tX_x)\Bigl|_{t=0} - \;\frac{d}{dt} X_{\exp_x(tX_x)} (f \circ s_x)\Bigl|_{t=0}} \\
& = & \displaystyle{\frac{d^2}{dt^2} f \circ \exp_x \circ -I_x (tX_x)\Bigl|_{t=0} - \;\frac{d}{dt} \frac{d}{ds} f \circ s_x \circ \exp_x (sX_x)\Bigl|_{s=t}\Bigl|_{t=0}} \\
& = & \displaystyle{\frac{d^2}{dt^2} f \circ \exp_x \circ -I_x (tX_x)\Bigl|_{t=0} - \;\frac{d^2}{dt^2}f \circ s_x\circ \exp_x(tX_x)\Bigl|_{t=0}}.
\end{array}$$
We claim that this implies that the maps $f \circ \exp_x \circ -I_x$ and $f \circ s_x \circ \exp_x$ coincide up to order $2$. Indeed, observe that the differentials at $0_x \in T_xM$ of these two maps coincide. Hence their second differentials (cf.~\nref{f**} in \aref{b11})
$$f_{**_x} \circ {\exp_x}_{**_{0_x}} \circ (-I_x)_{**_{0_x}} \quad \mbox{and} \quad f_{**_x} \circ {s_x}_{**_x} \circ {\exp_x}_{**_{0_x}}$$
belong to the same fiber of $p \times p_* : \b^{(2)}(\PP(M)) \to \b^{(1)}(\PP(M)) \times \b^{(1)}(\PP(M))$, so that their difference is a symmetric bilinear map $B$ from $T_xM \times T_xM$ to $T_yM$ (cf.~\rref{affine-str}) which is determined by its values on pairs $(X, X) \in T_xM \times T_xM$. Now, the previous calculation shows that $B(X, X)$ vanishes for all $X$. Whence the result.
\cqfd
\begin{rmk}
So starting from a symmetry jet ${\mathfrak s}$, we obtain in a canonical way, via the affine connection $\nabla^{\mathfrak s}$, a smooth family $(a^{\nabla^{\mathfrak s}}_x)_{x \in M}$ of local involutive diffeomorphisms which integrate pointwise the section~${\mathfrak s}$.
\end{rmk}
Given a torsionless affine connection $\nabla$, let us denote by ${\mathfrak a}^\nabla$ the symmetry jet defined by
$${\mathfrak a}^\nabla (x) = j^2_x a^\nabla_x.$$
\begin{thm}\label{conn<-->sym-jet}
The two correspondences ${\mathfrak s} \rightsquigarrow \nabla^{\mathfrak s}$ and $\nabla \rightsquigarrow {\mathfrak a}^\nabla$ are inverse to one another. In particular, any torsionless affine connection $\nabla$ is associated to the symmetry jet ${\mathfrak a}^\nabla$ consisting of the family of $2$-jets of its geodesic symmetries, through the relation
\begin{equation}\label{conn/sym}
\Bigl(\nabla_X Y\Bigr)_x = \frac{1}{2} \Bigl[X, Y + (a^\nabla_x)_*Y\Bigr]_x, \quad X, Y \in {\mathfrak X}(M)
\end{equation}
\end{thm}
\noindent{\bf Proof}. Half of \pref{conn<-->sym-jet}, namely the fact that ${\mathfrak a}^{\nabla^{\mathfrak s}} = {\mathfrak s}$, has been proven in \lref{thelemma}. The other half, that is,
$$\nabla^{{\mathfrak a}^\nabla} = \nabla,$$
is easily seen once we know that the geodesic symmetries are affine up to order $2$. Indeed, granting that $(\nabla_XY)_x = (\nabla_X(a^\nabla_x)_*Y)_x$, we have
$$\begin{array}{ccl}
\Bigl(\nabla_XY\Bigr)_x & = & \frac{1}{2} \Bigl( \nabla_X Y \Bigr)_x + \frac{1}{2} \Bigl( \nabla_X (a^\nabla_x)_*Y \Bigr)_x \\
& = & \frac{1}{2} \Bigl( \nabla_X (Y + (a^\nabla_x)_*Y) \Bigr)_x \\
& = & \frac{1}{2} \Bigl[X, Y + (a^\nabla_x)_*Y \Bigr]_x.
\end{array}$$
We prove now that the geodesic symmetries are affine up to order $2$. A linear connection $\nabla$ induces a horizontal distribution $\h = \h^\nabla$ on the tangent bundle, described in terms of $\widetilde{\nabla}$ (cf.~\lref{tilde-nabla}) as its kernel, that is,
$$\h = \widetilde{\nabla}^{-1}(0_{TM}).$$
Therefore it is sufficient to prove that $(a_x)_{**_{Y_x}}$ maps $\h_{Y_x}$ onto $\h_{-Y_x}$. Suppose $X$ and $Y$ are vector fields defined near $x$ tangent to $\h$ at $X_x$ and $Y_x$. Then $X + Y$ is also tangent to $\h$ at $X_x + Y_x$, because the connection is linear, and $[X, Y]_x = 0$, because the connection is torsionless. Besides, a vector field $Z$ that is tangent to $\h$ at $Z_x$ is also tangent to the velocity vector field of the geodesic $\exp_x(tZ_x)$, which implies that $(a_x)_*Z$ is tangent to the velocity vector field of $\exp_x(-tZ_x)$. Thus $\nabla_{Z_x} (a_x)_*Z = 0$ and
$$\begin{array}{ccl}
\Bigl(\nabla_X (a_x)_*Y\Bigr)_x & = & \Bigl(\nabla_{X + Y} (a_x)_*(X + Y)\Bigr)_x - \Bigl(\nabla_X (a_x)_*X\Bigr)_x \\
& & - \Bigl(\nabla_Y(a_x)_*Y\Bigr)_x - \Bigl(\nabla_Y(a_x)_*X\Bigr)_x\\
& = & - \Bigl(\nabla_Y(a_x)_*X\Bigr)_x \\
& = & \Bigl(\nabla_{(a_x)_*Y}(a_x)_*X\Bigr)_x\\
& = & \Bigl(\nabla_{(a_x)_*X}(a_x)_*Y\Bigr)_x + \Bigl[(a_x)_*Y, (a_x)_*X\Bigr]_x \\
& = & - \Bigl(\nabla_X(a_x)_*Y\Bigr)_x - \Bigl[Y,X\Bigr]_x \\
& = & - \Bigl(\nabla_X(a_x)_*Y\Bigr)_x.
\end{array}$$
Hence $\nabla_{X_x} (a_x)_*Y = 0$ for all $X_x \in T_xM$, which implies that $(a_x)_{**_x}$ preserves the horizontal distribution $\h$ along $T_xM$.
\cqfd
\begin{rmk}\label{Kandinsky}
Notice in particular that involutivity of a local diffeomorphism $f$ imposes no condition on its second-order derivative at $x$ when $f_{*_x} = -I_x$. In other words, when $j^1_xf = -I_x$, we have $j^2_xf^{-1} = j^2_xf$. (See also \cref{invertibility}.)
\end{rmk}
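The following one-dimensional computation, added for illustration, makes \rref{Kandinsky} explicit.
\begin{rmk}
For any $a \in \mathbb R$, the map $f(t) = -t + at^2$, defined near $0 \in \mathbb R$, satisfies $f_{*_0} = -I_0$ and
$$f(f(t)) = t - at^2 + a\bigl(-t + at^2\bigr)^2 = t + O(t^3),$$
so $f$ is involutive up to order $2$ whatever the value of $a$. Correspondingly, $f^{-1}(t) = -t + at^2 + O(t^3)$, so that $j^2_0f^{-1} = j^2_0f$.
\end{rmk}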
\begin{rmk}
An alternative proof of \pref{conn<-->sym-jet} will be provided when handling the torsion case (cf.~\pref{symm-jet<-->conn}).
\end{rmk}
\section{Connections with torsion}\label{conn-torsion}
For a connection with torsion, the $2$-jet of a geodesic symmetry is no longer affine. Indeed, on the one hand, if a map is affine up to order $2$ at a point $x$, its differential at $x$ must preserve the torsion. On the other hand, the torsion, being a $3$-tensor, cannot be preserved by $-I_x$ unless it vanishes at $x$. Nevertheless, by relaxing slightly the notion of symmetry jet as in the following definition, one may establish a bijective correspondence between symmetry jets and arbitrary affine connections. Regarding notation, we refer to \aref{b11} and \dref{bh}.
\begin{dfn} A symmetry jet is a section
$${\mathfrak s} : M \to \b_h^{(1,1)}(M)$$
of $\alpha : \b_h^{(1,1)}(M) \to M$ whose first order part $p \circ {\mathfrak s} = p_* \circ {\mathfrak s} : M \to \b^{(1)}(\PP(M))$ coincides with~$-I$. A symmetry jet is said to be holonomic if it takes its values in $\b^{(2)}(\PP(M))$ and semi-holonomic otherwise.
\end{dfn}
\begin{prop}\label{symm-jet<-->conn}
Given a symmetry jet ${\mathfrak s}$, the formula
\begin{equation}\label{sym-jet-->conn1}
\Bigl(\nabla^{\mathfrak s}_X Y\Bigr)_x = \frac{1}{2} \Bigl[X, Y + s_x (Y)\Bigr]_x,
\end{equation}
where ${\mathfrak s}(x) = j^1_x s_x$, for some local bisection $s_x : U_x \subset M \to \b^{(1)}(\PP(M))$, defines an affine connection. Moreover, this induces a bijective correspondence between symmetry jets and affine connections.
\end{prop}
The proof will show that the condition that ${\mathfrak s}(x)$ belongs to $\b^{(1,1)}(\PP(M))$ rather than $\b^{(1,1)}_{nh}(\PP(M))$ ensures that the Leibniz rule is satisfied and cannot be relaxed. Of course the symmetry jet is holonomic, or $\kappa$-invariant (cf.~\lref{kappa-prop}),
if and only if the connection is torsionless.
\begin{rmk}
Observe the following alternative expression for $\nabla^{\mathfrak s}$~:
\begin{equation}\label{sym-jet-->conn2}
\nabla^{\mathfrak s}_{X_x}Y = \frac{1}{2} \pi \Bigl(Y_{*_x} X_x, \pmb{-} {\mathfrak s}(x) \cdot Y_{*_x}X_x \Bigr),
\end{equation}
where the thick minus $\pmb{-}$ denotes the composition of the scalar multiplication by $-1$ with respect to one vector bundle structure of $T^2M$ over $TM$ with scalar multiplication by $-1$ with respect to the other one (cf.~\aref{second-t-b}), i.e.~$\pmb{-} = m_{-1} \circ m_{-1*}$. The expression ${\mathfrak s}(x) \cdot Y_{*_x}X_x$ stands for the action of the $(1,1)$-jet ${\mathfrak s}(x)$ on $Y_{*_x}X_x$ (cf.~(\ref{(1,1)-action})). The two vectors $Y_{*_x} X_x$ and $\pmb{-} {\mathfrak s}(x) \cdot Y_{*_x}X_x$ are in the same fiber of the affine fibration $T^2M \to TM \oplus TM$ (see the figure below). Whence their difference yields an element of $TM$ (cf.~(\ref{i}) and (\ref{def-i}) in \aref{second-t-b}). The fact that (\ref{sym-jet-->conn2}) coincides with (\ref{sym-jet-->conn1}) is an easy consequence of the relation between the Lie bracket and $\kappa$ (cf.~\pref{involution-prop})~:
$$\begin{array}{cll}
\Bigl[X, Y + s_x (Y)\Bigr]_x & = & \pi\Bigl((Y + s_x (Y))_{*_x}X_x, \kappa(X_{*_x}(Y + s_x (Y))_x) \Bigr)\\
& = & \pi\Bigl(Y_{*_x}X_x +_* {\mathfrak s}(x) \cdot Y_{*_x}(-X_x), \kappa(0_{X_x}) \Bigr) \quad \mbox{(thanks to (\ref{coucou}))}\\
& = & \pi\Bigl(Y_{*_x}X_x +_* m_{-1} \bigl( {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigr), {0_*}_{X_x} \Bigr) \mbox{(thanks to (\ref{prop-kappa}))} \\
& = & \pi\Bigl(Y_{*_x}X_x, \pmb{-} {\mathfrak s}(x) \cdot Y_{*_x}X_x \Bigr) \quad \mbox{(thanks to (\ref{abc})).}
\end{array}$$
\begin{figure}\label{fig-conn}
\end{figure}
\end{rmk}
\noindent
{\bf Proof of \pref{symm-jet<-->conn}}. Let ${\mathfrak s} : M \to \b^{(1,1)}(\PP(M))$ be a symmetry jet. Proving that (\ref{sym-jet-->conn2}) defines an affine connection amounts to showing that the left hand side satisfies the Leibniz rule, since the $C^\infty(M)$-linearity in the first argument is easy to see. So let $f$ be a smooth function on $M$, then
$$\nabla^{\mathfrak s}_{X_x}fY = \frac{1}{2} \pi \Bigl((fY)_{*_x} X_x, \pmb{-} {\mathfrak s}(x) \cdot (fY)_{*_x}X_x \Bigr).$$
Recall from (\ref{scalar-mult}) that
$$(fY)_{*_x} X_x = X_xf \; Y_x + m_{f(x)*} (Y_{*_x}X_x),$$
where $X_x f \; Y_x$ really means $I_p(f(x)Y_x, X_xf Y_x) = i(f(x)Y_x) +_* i^p_{0_M}(X_xf \; Y_x)$. So
$$\begin{array}{ccl}
\pmb{-}{\mathfrak s}(x) \cdot (fY)_{*_x} X_x & = & \pmb{-}{\mathfrak s}(x) \cdot \Bigl(I_p(f(x)Y_x, X_xf Y_x) + m_{f(x)*} (Y_{*_x} X_x)\Bigr) \\
& = & \pmb{-} \Bigl(I_p(- f(x)Y_x, - X_xf Y_x) + m_{f(x)*} \bigl( {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigr) \Bigr)\\
& = & \Bigl(I_p(f(x)Y_x, - X_xf Y_x) + \pmb{-} m_{f(x)*} \bigl( {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigr) \Bigr).
\end{array}$$
Then
$$\begin{array}{ccl}
\nabla^{\mathfrak s}_{X_x}fY & = & \frac{1}{2} \pi\Bigl(I_p(f(x)Y_x, X_xf Y_x) + m_{f(x)*} (Y_{*_x} X_x), \\
& & \;\;\;\;\;\;\; I_p(f(x)Y_x, - X_xf Y_x) + \pmb{-} m_{f(x)*} \bigl( {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigr)\Bigr) \\
& = & \frac{1}{2} \pi \Bigl(I_p(f(x)Y_x, X_xf Y_x), I_p(f(x)Y_x, -X_xf Y_x) \Bigr) +\\
& & \frac{1}{2} \pi \Bigl( m_{f(x)*} (Y_{*_x} X_x), m_{f(x)*} \bigl( \pmb{-} {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigl) \Bigr) \\
& = & \Bigl. X_xf Y_x + f(x) \nabla_{X_x}Y.
\end{array}$$
We explain now how to associate a symmetry jet ${\mathfrak s}$ to a connection $\nabla$. Extracting ${\mathfrak s}$ from (\ref{sym-jet-->conn2}) yields the following expression~:
\begin{equation}\label{conn-->sym-jet}
{\mathfrak s}(x) \cdot \X = \pmb{-} \X + m_{-1*} \Bigl(I(Y_x, 2 \nabla_{X_x} Y) \Bigr),
\end{equation}
where $\X = Y_{*_x}X_x$ and $\nabla = \nabla^{\mathfrak s}$. Moreover, for any connection $\nabla$, the relation (\ref{conn-->sym-jet}) defines a symmetry jet ${\mathfrak s}$ whose associated connection $\nabla^{\mathfrak s}$ is $\nabla$. Indeed,
$$\begin{array}{ccl}
\nabla^{\mathfrak s}_{X_x}Y & = & \frac{1}{2} \pi \Bigl(Y_{*_x} X_x, \pmb{-} {\mathfrak s}(x) \cdot Y_{*_x}X_x \Bigr) \\
& = & \frac{1}{2} \pi \Bigl(Y_{*_x} X_x, Y_{*_x} X_x - I(Y_x, 2 \nabla_{X_x} Y) \Bigr) \\
& = & \nabla_{X_x} Y.
\end{array}$$
\cqfd
\section{Uniqueness of affine extension revisited}\label{UAE}
This section provides an alternative description of the well-known {\sl property of uniqueness of affine extension}, which states that on a manifold $M$ endowed with a torsionless affine connection $\nabla$, any linear isomorphism $\xi : T_xM \to T_yM$, $x, y \in M$, admits a unique lift to an affine $2$-jet, meaning that there exists a local diffeomorphism $f : U \subset M \to V \subset M$ whose differential at $x$ coincides with $\xi$ and that satisfies for any pair of vector fields $X$ and $Y$
$$f_{*_x} \Bigl( \nabla_{X_x} Y \Bigr) = \nabla_{f_{*_x}(X_x)} \Bigl(f_{*} (Y)\Bigr),$$
a relation which depends only on the $2$-jet $j^2_xf$ of $f$. The property of uniqueness of affine extension holds for connections with torsion as well, provided the affine extension is allowed to belong to $\b_h^{(1,1)}(M)$ instead of $\b^{(2)}(\PP(M))$.
\begin{dfn}
An affine jet is an element $\xi = j^1_x b$ of $\b_h^{(1,1)}(M)$ such that
$$b(x)(\nabla_{X_x}Y) = \nabla_{b(x)(X_x)}b(Y).$$
\end{dfn}
\begin{rmk}\label{saffine} The image of ${\mathfrak s}$ consists of affine jets. This is a consequence of the fact that any $(1,1)$-jet in $\b^{(1,1)}(\PP(M))$ whose first order part lies in the bisection $-I$ is its own inverse (\pref{e-iota}). Indeed, we know from (\ref{coucou}) that $(s_x(Y))_{*_x} X_x = - {\mathfrak s}(x) \cdot Y_{*_x}X_x$. Thus
$$\begin{array}{ccl}
\nabla^{\mathfrak s}_{X_x}s_x(Y) & = & \frac{1}{2} \pi \Bigl( - {\mathfrak s}(x) \cdot Y_{*_x} X_x, m_{-1*} \bigl( {\mathfrak s}(x) \cdot {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigr)\Bigr) \\
& = & \frac{1}{2} \pi \Bigl(- {\mathfrak s}(x) \cdot Y_{*_x} X_x, m_{-1*} \bigl( Y_{*_x}X_x\bigr) \Bigr) \\
& = & \Bigr. \frac{1}{2} \nabla^{\mathfrak s}_{X_x}Y.
\end{array}$$
The last equality follows from $\pi(\X, \Y) = - \pi(\Y, \X) = \pi(m_{-1*} \Y, m_{-1*} \X)$.
\end{rmk}
The following proposition states the property of uniqueness of affine extension, extended to affine connections with torsion, and shows also that the map $\b^{(1)}(\PP(M)) \to \b^{(1,1)}(\PP(M))$ that associates to a $1$-jet its unique affine extension can be characterized as the unique groupoid morphism and section of $p = p_*$ that extends ${\mathfrak s}$, in the sense that $S \circ -I = {\mathfrak s}$.
\begin{prop}\label{uae} Given an affine connection $\nabla^{\mathfrak s}$ there is a unique Lie groupoid morphism $S : \b^{(1)}(\PP(M)) \to \b_h^{(1,1)}(M)$ (\cite{Mck-05}) such that $S \circ -I = {\mathfrak s}$ and $p \circ S = \id$. Moreover the set of affine jets is precisely the image of $S$. When the symmetry jet is holonomic, the morphism $S$ takes its values in $\b^{(2)}(\PP(M))$.
\end{prop}
\noindent{\bf Proof}. Let $j^1_x b \in \b_h^{(1,1)}(M)$, with $\beta(b(x)) = y$. Then
$$\begin{array}{ccl}
b(x) \bigl( \nabla_{X_x} Y\bigr) - \nabla_{b(x) X_x} b Y
& = & b(x) \Bigl(\frac{1}{2} \pi \bigl(Y_{*_x} X_x, \pmb{-} {\mathfrak s}(x) \cdot Y_{*_x}X_x \bigr)\Bigr) \\
& & - \frac{1}{2} \pi \Bigl((bY)_{*_y} (b(x) X_x), \pmb{-} {\mathfrak s}(y) \cdot (bY)_{*_y} (b(x) X_x) \Bigr)\\
& = & \frac{1}{2} \pi \Bigl(j^1_x b \cdot Y_{*_x} X_x, \pmb{-} j^1_x b \cdot {\mathfrak s}(x) \cdot Y_{*_x}X_x \Bigr) \\
& & - \frac{1}{2} \pi \Bigl(j^1_x b \cdot Y_{*_x} X_x, \pmb{-} {\mathfrak s}(y) \cdot j^1_x b \cdot Y_{*_x}X_x \Bigr) \\
& = & \frac{1}{2} \pi \Bigl( \pmb{-} {\mathfrak s}(y) \cdot j^1_x b \cdot Y_{*_x}X_x, \pmb{-} j^1_x b\cdot {\mathfrak s}(x) \cdot Y_{*_x}X_x \Bigr).
\end{array}$$
This implies that $j^1_x b$ is affine if and only if
$$j^1_x b \cdot {\mathfrak s}(x) = {\mathfrak s}(y) \cdot j^1_x b.$$
(This statement relies on \lref{111-jetsasmaps}.) Or, equivalently,
\begin{equation}\label{equation}
{\mathfrak s}(y) \cdot j^1_x b \cdot {\mathfrak s}(x) = j^1_x b.
\end{equation}
In terms of the associated plane (cf.~\rref{1jets-as-planes} in \aref{jet-of-bis}), the previous equation (\ref{equation}) is satisfied if and only if $D(j^1_xb)$, which lies in $\e_\xi$ (cf.~\rref{h-e}), satisfies
\begin{equation}\label{equation-bis}
D({\mathfrak s}(y)) \cdot D(j^1_xb) \cdot D({\mathfrak s}(x)) = D(j^1_xb).
\end{equation}
Now, for any $1$-jet $\xi$ in $\b^{(1)}(\PP(M))$, define the map
\begin{equation}\label{psi}
\psi_\xi : T_\xi \b^{(1)}(\PP(M)) \to T_\xi \b^{(1)}(\PP(M)) : X_\xi \mapsto \overline{Y}^{D({\mathfrak s}(y)), \alpha_*} \cdot X_\xi \cdot \overline{X}^{D({\mathfrak s}(x)), \beta_*},
\end{equation}
where $X = \alpha_{*_{\xi}}(X_{\xi})$, $Y = \beta_{*_{\xi}}(X_{\xi})$ and where $\overline{X}^{D({\mathfrak s}(x)), \beta_*}$ (respectively $\overline{Y}^{D({\mathfrak s}(y)), \alpha_*}$) denotes the lift of $X$ (respectively $Y$) via $\beta_*$ (respectively $\alpha_*$) in $D({\mathfrak s}(x))$ (respectively $D({\mathfrak s}(y))$). The dot in the previous formula refers to the differential of the multiplication in the groupoid $\b^{(1)}(\PP(M))$, that is the map
$$m_{*_{(\xi_2, \xi_1)}} : T_{\xi_2}\b^{(1)}(\PP(M)) \times_{(\alpha_{*_{\xi_2}}, \beta_{*_{\xi_1}})} T_{\xi_1}\b^{(1)}(\PP(M)) \longrightarrow T_{\xi_2 \cdot \xi_1} \b^{(1)}(\PP(M)),$$
\begin{equation}\label{diff-mult}
m_{*_{(\xi_2, \xi_1)}} (X_{\xi_2}, X_{\xi_1}) \stackrel{\rm not}{=} X_{\xi_2} \cdot X_{\xi_1}.
\end{equation}
\begin{figure}
\caption{The map $\psi_\xi$}
\end{figure}
The relation (\ref{equation-bis}) can be reformulated in terms of $\psi_\xi$ as follows~:
$$\psi_\xi \Bigl(D(j^1_xb)\Bigl) = D(j^1_xb).$$
The map $\psi_\xi$ is an involutive automorphism of $T_{\xi}\b^{(1)}(\PP(M))$. Indeed, on the one hand, $\alpha_{*_\xi} \circ \psi_\xi = -\alpha_{*_\xi}$ and $\beta_{*_\xi} \circ \psi_\xi = -\beta_{*_\xi}$ and on the other hand,
\begin{equation}\label{x-x=0}
\begin{array}{ccl}
\overline{X}^{D({\mathfrak s}(x)), \beta_*} \cdot \overline{-X}^{D({\mathfrak s}(x)), \beta_*} & = & \overline{X}^{D({\mathfrak s}(x)), \beta_*} \cdot -\overline{X}^{D({\mathfrak s}(x)), \beta_*} \\
& = & \overline{X}^{D({\mathfrak s}(x)), \beta_*} \cdot \iota_{*} \bigl(\overline{X}^{D({\mathfrak s}(x)), \beta_*} \bigr) \\
& = & 0_{x},
\end{array}
\end{equation}
as implied by \pref{e-iota} and, similarly, $\overline{-Y}^{D({\mathfrak s}(y)), \alpha_*} \cdot \overline{Y}^{D({\mathfrak s}(y)), \alpha_*} = 0_y$. Hence $T_\xi \b^{(1)}(\PP(M))$ decomposes into a direct sum of eigenspaces corresponding to the eigenvalues $\pm 1$~:
$$T_\xi \b^{(1)}(\PP(M)) = E^{\psi_\xi}_{+1} \oplus E^{\psi_\xi}_{-1}.$$
Clearly $E^{\psi_\xi}_{+1} = \Ker p_{*_\xi}$ for $p : \b^{(1)}(\PP(M)) \to \PP(M)$. Since $\Ker p_{*_\xi} \subset \e_\xi$, the subspaces $E^{\psi_\xi}_{-1}$ and $\e_\xi$ are transverse and therefore
$$\d_{\xi} = E^{\psi_\xi}_{-1} \cap \e_{\xi}$$
defines a distribution on $\b^{(1)}(\PP(M))$ corresponding to a section
$$S : \b^{(1)}(\PP(M)) \to \b^{(1,1)}(\PP(M))$$ of $p$
such that $D(S(\xi)) = \d_\xi$. We claim that $S$ is a groupoid morphism whose image consists of the set of affine jets. The first statement is a consequence of the following simple observation~:
$$\psi_{\xi_2 \cdot \xi_1}(X_{\xi_2} \cdot X_{\xi_1}) = \psi_{\xi_2}(X_{\xi_2}) \cdot \psi_{\xi_1}(X_{\xi_1}),$$
itself implied by (\ref{x-x=0}). As to the second statement, along the bisection $-I$, the image of $S$ coincides with ${\mathfrak s}$. Indeed, the relation (\ref{x-x=0}) implies that
$$D(\fs (x)) \subset E^{\psi_{-I_x}}_{-1}.$$
Hence $S(-I)$ consists of affine jets (cf.~\rref{saffine}). Thus, for any $\xi : T_xM \to T_yM$ in $\b^{(1)}(\PP(M))$, the property of $S$ to be a groupoid morphism that coincides with ${\mathfrak s}$ on $-I$ implies that $S(\xi) = S(-I_y \cdot \xi \cdot -I_x) = S(-I_y) \cdot S(\xi) \cdot S(-I_x) = {\mathfrak s}(y) \cdot S(\xi) \cdot {\mathfrak s}(x)$ which implies that $S(\xi)$ is affine. \\
The very last statement of the proposition is a consequence of the fact that $\kappa$ is a groupoid morphism (cf.~\lref{kappa-prop}) and of the first part of the proposition. Indeed,
$$\kappa (S(\xi)) = \kappa({\mathfrak s}(y) \cdot S(\xi) \cdot {\mathfrak s}(x)) = {\mathfrak s}(y) \cdot \kappa (S(\xi)) \cdot {\mathfrak s}(x)$$
implies that $\kappa(S(\xi)) = S(\xi)$.
\cqfd
\begin{dfn}
A section $S : \b^{(1)}(\PP(M)) \to \b^{(1,1)}(\PP(M))$ of $p$ which is a groupoid morphism is called an affine extension or the affine extension of the symmetry jet it is induced from.
\end{dfn}
\begin{prop}
Affine connections are in bijective correspondence with affine extensions.
\end{prop}
Since a $(1,1)$-jet $\xi$ may also appear as a plane $D(\xi)$ tangent to $\b^{(1)}(\PP(M))$ attached to $p(\xi)$ (cf.~\rref{1jets-as-planes} in \aref{jet-of-bis}), the data of the section $S$ is equivalent to that of a distribution, denoted by $\d$ or $\d^{\mathfrak s}$, on $\b^{(1)}(\PP(M))$. It satisfies the following properties~:
\begin{enumerate}
\item[a)] $\d$ is ``transverse'' in the sense that it is transverse to both the $\alpha$-fibers and the $\beta$-fibers.
\item[b)] The fact that $\d$ is induced from a groupoid morphism implies that it coincides with $\varepsilon_{*_x} (T_xM)$ along the identity bisection, is invariant under $\iota_*$ and is preserved by multiplication, that is, the map $m_* : TG \times_{(\alpha_*, \beta_*)} TG \to TG$ maps $\d \times_{(\alpha_*, \beta_*)} \d$ onto $\d$.
\item[c)] $\d \subset \e$ (see \dref{def-e}). Equivalently, the bouncing map
$$\fb : \b^{(1)}(\PP(M)) \to \End(TM, TM) : \xi \mapsto \beta_{*_\xi} \circ \alpha_{*_\xi}|_{\d_\xi}^{-1} = \fb(\d_\xi)$$ induced by $\d$ is the identity.
\end{enumerate}
We use the word ``transverse'' to describe condition a) because ``horizontal'' would be misleading in the groupoid case: along the bisection $-I$ at least, $\d$ is rather vertical with respect to our standard picture of a groupoid, since the bouncing map along $-I$ coincides with $-I$.
\begin{figure}
\caption{The distribution $\d$}
\end{figure}
\begin{dfn}
A distribution on a groupoid $G$ satisfying the previous condition is called an {\sl invariant transverse distribution}.
\end{dfn}
What has been said in this section can be rephrased as the following proposition.
\begin{prop}
Symmetry jets are in one-to-one correspondence with affine extensions as well as with invariant transverse distributions.
\end{prop}
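As an illustration (our own sketch, under the assumption that the flat model behaves as expected), consider $M = \mathbb{R}^n$ with its standard flat connection. The geodesic point symmetry at $x$ is $s_x(x') = 2x - x'$, and for a $1$-jet $\xi$ with linear part $A : T_x\mathbb{R}^n \to T_y\mathbb{R}^n$ one expects

```latex
\begin{gather*}
s_x(x') = 2x - x', \qquad
{\mathfrak s}(x) = j^1_x\bigl(x' \mapsto j^1_{x'} s_{x'}\bigr),\\
S(\xi) = j^1_x b, \qquad
b(x') = j^1_{x'}\bigl(x'' \mapsto y + A(x'' - x)\bigr),
\end{gather*}
```

that is, the affine extension of $\xi$ is the $1$-jet of the bisection induced by the affine map $x'' \mapsto y + A(x''-x)$. In this model the leaves of the invariant transverse distribution $\d$ are the $1$-jet extensions of the global affine transformations $x \mapsto Ax + c$.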
Now, affine local or global transformations appear as leaves of the distribution~$\d$.
\begin{prop} Let ${\mathfrak s}$ be a symmetry jet on the manifold $M$, let $\nabla$ denote the induced affine connection and $\d$ the associated distribution on $\b^{(1)}(\PP(M))$. Through the $1$-jet extension map $\varphi \to j^1\varphi$, affine (local) diffeomorphisms of $(M, \nabla^{\mathfrak s})$ correspond to (local) bisections of $\b^{(1)}(\PP(M))$ that are leaves of $\d$.
\end{prop}
\noindent{\bf Proof}. Since $\d$ is contained in $\e$, a leaf of $\d$ is necessarily, locally, a $1$-jet extension $j^1\varphi$. The latter is then affine since it is tangent to $\d$.
\cqfd
\section{Torsion and holonomy of affine jets}
Given a symmetry jet ${\mathfrak s}$ on a manifold $M$, the defect of an affine $(1,1)$-jet $S(\xi)$ from being holonomic, that is, fixed under the involution $\kappa$ (cf.~\lref{kappa-prop}), coincides with the defect of invariance of the torsion under $\xi$. The precise statement is the content of the next proposition.
\begin{prop}\label{prop-torsion} Let $S(\xi) = j^1_x b$ be an affine jet. Then, for any $\X \in T^2M$ with $p(\X) = Y_x$ and $p_*(\X) = X_x$, we have
\begin{equation}\label{torsion}
\pi \Bigl(S(\xi) \cdot \X, \kappa(S(\xi)) \cdot \X \Bigr) = \xi\Bigl(T^\nabla(X_x, Y_x)\Bigr) - T^\nabla \Bigl(\xi(X_x), \xi(Y_x)\Bigr).
\end{equation}
In particular, the affine $(1,1)$-jet $S(\xi)$ extending $\xi$ is a $2$-jet if and only if $\xi$ preserves the torsion.
\end{prop}
\noindent{\bf Proof}. Supposing that $X$ and $Y$ are vector fields on $M$ extending $X_x$ and $Y_x$ respectively and such that $\X = \kappa (X_{*_x}Y_x)$, the right hand side of (\ref{torsion}) equals
$$\begin{array}{cl}
& \displaystyle{\xi \Bigl(\nabla_{X_x}Y - \nabla_{Y_x}X - [X,Y]_x\Bigr)} \\
& \displaystyle{- \Bigl( \nabla_{\xi(X_x)} bY - \nabla_{\xi(Y_x)} bX - [bX, bY]_y \Bigr)} \\
= & \displaystyle{\Bigl(\xi(\nabla_{X_x}Y) - \nabla_{\xi(X_x)} bY\Bigr) - \Bigl(\xi(\nabla_{Y_x}X) - \nabla_{\xi(Y_x)} bX\Bigr)} \\
& \displaystyle{- \Bigl(\xi([X,Y]_x) - [bX, bY]_y\Bigr)} \\
= & \displaystyle{- \; \xi \circ \pi \Bigl( Y_{*_x}X_x, \kappa (X_{*_x}Y_x)\Bigr) + \pi \Bigl((bY)_{*_y} \xi(X_x), \kappa\bigl((bX)_{*_y} \xi(Y_x)\bigr)\Bigr)} \\
= & \displaystyle{- \; \pi \Bigl(S(\xi) \cdot Y_{*_x}X_x, S(\xi) \cdot \kappa (X_{*_x}Y_x)\Bigr)} \\
& \displaystyle{+ \; \pi \Bigl(S(\xi) \cdot Y_{*_x}X_x, \kappa(S(\xi) \cdot X_{*_x}Y_x) \Bigr)}\\
= & \displaystyle{\pi \Bigl(S(\xi) \cdot \kappa(X_{*_x}Y_x), \kappa(S(\xi)) \cdot \kappa(X_{*_x}Y_x) \Bigr)} \\
= & \displaystyle{\pi \Bigl(S(\xi) \cdot \X, \kappa(S(\xi)) \cdot \X \Bigr),}
\end{array}$$
where we have used \pref{involution-prop}, as well as relation (\ref{coucou}) from \aref{b11}.
\cqfd
Equation (\ref{torsion}) yields a geometric interpretation of the torsion of an affine connection in terms of its symmetry jet.
\begin{cor} Let ${\mathfrak s}$ be a symmetry jet, and let $\nabla$ be the corresponding affine connection. Then
$$T^{\nabla} (X_x, Y_x) = \frac{1}{2} \pi\Bigl(\kappa\bigl({\mathfrak s}(x)\bigr) \cdot \X, {\mathfrak s}(x) \cdot \X \Bigr),$$
for any $\X \in T^2M$ with $p(\X) = Y_x$ and $p_*(\X) = X_x$.
\end{cor}
In terms of the distribution $\d^\fs$, this means that for any $X_x \in T_xM$, the endomorphism $T^\nabla(X_x, \cdot)$ of $T_xM$ is the difference between the lifts $X_1$ and $X_2$ of $X_x$ in $\d_{-I_x}$ and $\kappa(\d_{-I_x})$ respectively with respect to $\alpha_*$. Indeed, the vector $X_2 - X_1$ lies in $T_{-I_x}(\b^{(1)}(\PP(M))_{x,x})$ which is naturally identified with $\End(T_xM, T_xM)$.
\begin{figure}
\caption{The torsion}
\end{figure}
\section{The curvature in terms of the symmetry jet}
In this section we present a very simple expression for the curvature of an affine connection in terms of the first jet of the symmetry jet. More precisely, we prove the following statement.
\begin{thm}\label{curvthm} Let ${\mathfrak s}$ be a symmetry jet on the manifold $M$. Then the curvature tensor $R$ of the associated affine connection admits the following expression
\begin{equation}\label{curv-j1s}
R(X_x, Y_x) Z_x = \frac{1}{4} \Pi \Bigl(\kappa(j^1_x {\mathfrak s}) \cdot j^1_x {\mathfrak s} \cdot \IX, \;j^1_x {\mathfrak s} \cdot \kappa(j^1_x {\mathfrak s}) \cdot \IX \Bigr),
\end{equation}
where $X$, $Y$ and $Z$ are vector fields on $M$ and $\IX$ stands for $Z_{**_{Y_x}} Y_{*_x} X_x \in T^3M$.
\end{thm}
\begin{rmk} One could also write, with a slight abuse of notation
$$R = \frac{1}{4}\Pi([\kappa(j^1_x{\mathfrak s}), j^1_x{\mathfrak s}]).$$
\end{rmk}
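Reading (\ref{curv-j1s}) together with the fact that $\Pi$ measures the difference of two elements in a common fiber, one obtains the following flatness criterion (our rephrasing, under that assumption on $\Pi$):

```latex
R \equiv 0
\quad\Longleftrightarrow\quad
\kappa(j^1_x{\mathfrak s}) \cdot j^1_x{\mathfrak s} \cdot \IX
= j^1_x{\mathfrak s} \cdot \kappa(j^1_x{\mathfrak s}) \cdot \IX
\quad\text{for all } x \in M,\ \IX \in T^3_xM,
```

that is, the connection is flat exactly when $j^1_x{\mathfrak s}$ and $\kappa(j^1_x{\mathfrak s})$ commute as elements acting on $T^3M$, which is the precise content of the commutator notation in the remark above.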
\begin{rmk} Observe that $\kappa(j^1_x {\mathfrak s})$ is not a $(1,1,1)$-jet since $p(j^1_x {\mathfrak s}) = {\mathfrak s}(x)$ does not coincide with $p_*(j^1_x {\mathfrak s}) = m_{-1*}$ (cf.~\rref{kappa-rem-bis}). Nevertheless, $\kappa(j^1_x {\mathfrak s})$ is an element in $\EL(T^3M)$ (cf.~\dref{el-def}) whose action on an element $\IX$ in $T^3M$ is defined by
$$\kappa(j^1_x {\mathfrak s}) \cdot \IX = \kappa(j^1_x {\mathfrak s} \cdot \kappa(\IX)).$$
Moreover,
\begin{enumerate}
\item[-] $p(\kappa(j^1_x {\mathfrak s})) = m_{-1*}$,
\item[-] $p_*(\kappa(j^1_x {\mathfrak s})) = {\mathfrak s}(x)$,
\item[-] $p_{**}(\kappa(j^1_x {\mathfrak s})) = m_{-1}$.
\end{enumerate}
\end{rmk}
\noindent{\bf Proof}. As a first step, let us compute $\nabla_{X_x} \nabla_Y Z$ in terms of the symmetry jet.
$$\nabla_{X_x} \nabla_Y Z = \displaystyle{\frac{1}{2} \pi \Bigl((\nabla_Y Z)_{*_x} (X_x), \pmb{-} {\mathfrak s}(x) \cdot (\nabla_Y Z)_{*_x} (X_x)\Bigr)}$$
Let $\gamma : (-\varepsilon , \varepsilon ) \to M$ be a path in $M$ tangent to $X_x$ at $t = 0$. Then
$$\begin{array}{ccl}
(\nabla_Y Z)_{*_x} (X_x) & = &\displaystyle{\frac{d}{dt} (\nabla_Y Z)_{\gamma(t)} \Bigr|_{t=0}} \\
& = & \displaystyle{\frac{d}{dt} \frac{1}{2} \pi \Bigl( Z_{*_{\gamma(t)}}Y_{\gamma(t)}, \pmb{-} {\mathfrak s}(\gamma(t)) \cdot Z_{*_{\gamma(t)}}Y_{\gamma(t)} \Bigr)\Bigr|_{t=0}} \\
& = & m_{\frac{1}{2}*} \Bigl(\pi_{*_{(\mathbb ZE, \pmb{-} {\mathfrak s}(x) \cdot \mathbb ZE)}} \Bigl( \IX, m_{-1*} \circ m_{-1**} \bigl( j^1_x{\mathfrak s} \cdot \IX\bigr) \Bigr)\Bigr),
\end{array}$$
where $\mathbb ZE = p(\IX) = Z_{*_x}Y_x$. Since $\IX^1 = \IX$ and $\IX^2 = m_{-1*} \circ m_{-1**} ( j^1_x{\mathfrak s} \cdot \IX)$ belong to the same $({\mathcal P}_2 = p_* \times p_{**})$-fiber, there exists a $\U \in T^{X_x}TM$ such that
$$\IX^1 = \IX^2 +_* \Bigl(e_*(\IX^2) +_{**} (i^p_{0_M})_*(\U)\Bigr) = \IX^2 +_{**} \Bigl(e_{**}(\IX^2) +_{*} (i^p_{0_M})_*(\U)\Bigr).$$
(This follows from (\ref{proj123}) in \aref{third-der}). It is not difficult to verify that $\pi_*(\IX^1, \IX^2) = \Pi_2(\IX^1, \IX^2) = \U$. Now, we claim that for an element $\xi \in \EL^{(1,1,1)}(T^3M)$,
\begin{equation}\label{pi*}
\pi_*(\xi \cdot \IX^1, \xi \cdot \IX^2) = p_*(\xi) \cdot \pi_*(\IX^1, \IX^2).
\end{equation}
This is verified as follows~:
$$\begin{array}{ccl}
\xi \cdot \IX^1 & = & \xi \cdot \Bigl(\IX^2 +_* \bigl(e_*(\IX^2) +_{**} (i^p_{0_M})_*(\U)\bigr)\Bigr) \\
& = & \xi \cdot \IX^2 +_* \bigl(\xi \cdot e_*(\IX^2) +_{**} \xi \cdot (i^p_{0_M})_*(\U)\bigr) \\
& = & \xi \cdot \IX^2 +_* \bigl(e_*(\xi \cdot \IX^2) +_{**} (i^p_{0_M})_*(p_*(\xi) \cdot \U)\bigr)
\end{array}$$
The last equality follows from \lref{111-jetsasmaps} and holds for any $\xi \in \EL(T^3M)$ such that $\xi \cdot (i^p_{0_M})_*(\U) = (i^p_{0_M})_*(p_*(\xi) \cdot \U)$. In particular for $\xi = \kappa(j^1_x \fs)$. Indeed,
$$\begin{array}{cll}
\kappa(j^1_x \fs) \cdot \Bigl( (i^p_{0_M})_*(\U) \Bigr) & = & \kappa\Bigl( j^1_x \fs \cdot \kappa\bigl( (i^p_{0_M})_*(\U) \bigr)\Bigr) \\
& = & \kappa\Bigl(j^1_x \fs \cdot i^{p_*}_{{0_*}_{TM}}(\U) \Bigr) \\
& = & \kappa \Bigl( i^{p_*}_{{0_*}_{TM}} \bigl( \fs(x) \cdot \U \bigr)\Bigr) \\
& = & (i^p_{0_M})_* \Bigl(\fs(x) \cdot \U \Bigr).
\end{array}$$
Thus
$$\nabla_{X_x} \nabla_Y Z = \displaystyle{\frac{1}{2} \pi \Bigl( m_{\frac{1}{2}*} \bigl[\pi_{*} (\IX^1, \IX^2) \bigr], \pmb{-} m_{\frac{1}{2}*} \bigl[\pi_{*} ( \kappa(j^1_x \fs) \cdot \IX^1, \kappa(j^1_x \fs) \cdot \IX^2 ) \bigr] \Bigr)}.$$
To continue this computation, observe that $\pi((m_a)_*\X, (m_a)_*\Y) = a \pi (\X, \Y)$ for two elements $\X$ and $\Y$ in the same $(p \times p_*)$-fiber and any real number $a$. Thus
$$\nabla_{X_x} \nabla_Y Z = \displaystyle{\frac{1}{4} \pi \Bigl( \pi_{*} (\IX^1, \IX^2), \pmb{-} \; \pi_{*} (\kappa(j^1_x \fs) \cdot \IX^1, \kappa(j^1_x \fs) \cdot\IX^2) \Bigr)}.$$
Moreover $m_{-1} (\pi_{*} (\IY^1,\IY^2)) = \pi_{*} (m_{-1} (\IY^1), m_{-1} (\IY^2))$ and $m_{-1*} ( \pi_{*} (\IY^1,\IY^2))$ coincides with both $\pi_{*} (m_{-1*} (\IY^1), m_{-1*} (\IY^2))$ and $\pi_{*} (m_{-1**} (\IY^1), m_{-1**} (\IY^2))$ for any $\IY^1$, $\IY^2$ in $T^3M$. Thus
\begin{equation}\label{nablacarre}
\begin{array}{ccl}
\nabla_{X_x} \nabla_Y Z & = & \displaystyle{\frac{1}{4} \pi \Bigl( \pi_{*} \bigl(\IX^1, \IX^2 \bigr), }\\
& & \displaystyle{\pi_{*} \bigl( m_{-1} \circ m_{-1**} ( \kappa(j^1_x {\mathfrak s}) \cdot \IX^1 ), m_{-1} \circ m_{-1**} ( \kappa(j^1_x {\mathfrak s}) \cdot \IX^2) \bigr) \Bigr)}
\end{array}
\end{equation}
Let us define
\begin{enumerate}
\item[-] $\widetilde{\IX^1} = m_{-1} \circ m_{-1**} ( \kappa(j^1_x {\mathfrak s}) \cdot \IX^1)$,
\item[-] $\widetilde{\IX^2} = m_{-1} \circ m_{-1**} (\kappa(j^1_x {\mathfrak s}) \cdot \IX^2) = m_{-1} \circ m_{-1*} (\kappa(j^1_x {\mathfrak s}) \cdot j^1_x{\mathfrak s} \cdot \IX)$.
\end{enumerate}
Now $\U = \pi_{*} (\IX^1, \IX^2)$, $\widetilde{\U} = \pi_{*} (\widetilde{\IX^1}, \widetilde{\IX^2})$ and $U = \pi(\pi_{*} (\IX^1, \IX^2), \pi_{*} (\widetilde{\IX^1}, \widetilde{\IX^2})) = \nabla_{X_x} \nabla_Y Z$ satisfy the relations
$$\begin{array}{ccl}
\IX^1 & = & \IX^2 +_{**} \Bigl(e_{**}(\IX^2) +_* (i^p_{0_M})_*(\U)\Bigr)\\
\widetilde{\IX^1} & = & \widetilde{\IX^2} +_{**} \Bigl(e_{**}(\widetilde{\IX^2}) +_* (i^p_{0_M})_*(\widetilde{\U})\Bigr) \\
\U & = & \widetilde{\U} +_* \bigl( e_*(\U) + i^p_{0_M}(U) \bigr).
\end{array}$$
Besides, all four elements $\IX^1$, $\IX^2$, $\widetilde{\IX^1}$ and $\widetilde{\IX^2}$ belong to the same $p_{**}$-fiber and
$$\begin{array}{ccl}
&&\IX^1 -_{**} \IX^2 -_{**} \widetilde{\IX^1} +_{**} \widetilde{\IX^2} \\
& = & \Bigl(e_{**}(\IX^2) +_{*} (i^p_{0_M})_* (\U) \Bigl) -_{**} \Bigl( e_{**}(\widetilde{\IX^2}) +_{*} (i^p_{0_M})_*(\widetilde{\U})\Bigr) \\
& = & e_{**}(\IX^2) +_* \Bigl( (i^p_{0_M})_* (\U) -_* (i^p_{0_M})_*(\widetilde{\U})\Bigr) \\
& = & e_{**}(\IX^2) +_* \Bigl( (i^p_{0_M})_* (\U -_*\widetilde{\U})\Bigr) \\
& = & e_{**}(\IX^2) +_* \Bigl( (i^p_{0_M})_* \bigl(e_*(\U) + i^p_{0_M} (U)\bigr)\Bigr)\\
& = & e_{**}(\IX^2) +_* \Bigl( (i^p_{0_M})_* \circ i_* \circ p_* (\U) + (i^p_{0_M})_* \circ i^p_{0_M} (U)\Bigr)\\
& = & e_{**}(\IX^2) +_* \Bigl( i_* \circ i_* \circ p_* \circ p_{**} (\IX^2) + I (U)\Bigr)\\
& = & e_{**}(\IX^2) +_* \Bigl( i_* \circ p_* \circ i_{**} \circ p_{**} (\IX^2) + I (U)\Bigr)\\
& = & e_{**}(\IX^2) +_* \Bigl( e_* (e_{**} (\IX^2)) + I (U)\Bigr).\\
\end{array}$$
This computation shows, in terms of the notation introduced in (\ref{pi-bis}), that
$$\nabla_{X_x} \nabla_Y Z = \frac{1}{4} \Pi \Bigl( \IX^1 -_{**} \IX^2 -_{**} \widetilde{\IX^1} +_{**} \widetilde{\IX^2} \Bigr).$$
In other words
$$\begin{array}{c}
\nabla_{X_x} \nabla_Y Z = \\
\frac{1}{4} \Pi \Bigl( \IX +_{**} \; (m_{-1})_* j^1_x{\mathfrak s} \cdot \IX +_{**} \; (m_{-1}) \; \kappa(j^1_x{\mathfrak s}) \cdot \IX +_{**} \; (m_{-1}) \circ (m_{-1})_{*} \; \kappa(j^1_x{\mathfrak s}) \cdot j^1_x{\mathfrak s} \cdot \IX \Bigr).
\end{array}$$
Now we can tackle the curvature. Without loss of generality, we may assume that $[X,Y]_x = 0$. Then
$$\begin{array}{ccl}
R(X_x, Y_x) Z_x & = & \nabla_{X_x} \nabla_Y Z - \nabla_{Y_x} \nabla_X Z \\
& = & \frac{1}{4} \Pi \Bigl( \IX^1 -_{**} \IX^2 -_{**} \widetilde{\IX^1} +_{**} \widetilde{\IX^2} \Bigr) - \\
& & \frac{1}{4} \Pi \Bigl( \IY^1 -_{**} \IY^2 -_{**} \widetilde{\IY^1} +_{**} \widetilde{\IY^2} \Bigr),
\end{array}$$
with
\begin{enumerate}
\item[-] $\IY^1 = \IY = Z_{**_{X_x}} X_{*_x} Y_x$,
\item[-] $\IY^2 = (m_{-1})_* \circ (m_{-1})_{**} j^1_x{\mathfrak s} \cdot \IY$,
\item[-] $\widetilde{\IY^1} = (m_{-1}) \circ (m_{-1})_{**} \; \kappa(j^1_x{\mathfrak s}) \cdot \IY$,
\item[-] $\widetilde{\IY^2} = (m_{-1}) \circ (m_{-1})_{*} \; \kappa(j^1_x{\mathfrak s}) \cdot j^1_x{\mathfrak s} \cdot \IY$.
\end{enumerate}
Observe that (\ref{i-kappa-bis}) implies that
$$\begin{array}{ccl}
\Pi \Bigl( \IY^1 -_{**} \IY^2 -_{**} \widetilde{\IY^1} +_{**} \widetilde{\IY^2} \Bigr) & = & \Pi \circ \kappa \Bigl( \IY^1 -_{**} \IY^2 -_{**} \widetilde{\IY^1} +_{**} \widetilde{\IY^2} \Bigr)\\
& = & \Pi \Bigl( \kappa(\IY^1) -_{**} \kappa(\IY^2) -_{**} \kappa(\widetilde{\IY^1}) +_{**} \kappa(\widetilde{\IY^2}) \Bigr).
\end{array}$$
Moreover,
\begin{enumerate}
\item[-] $\kappa(\IY^1) = \kappa (Z_{**_{X_x}} X_{*_x} Y_x) = Z_{**_{X_x}} \kappa(X_{*_x} Y_x) = Z_{**_{X_x}} Y_{*_x} X_x = \IX^1$,
\item[-] $\kappa(\IY^2) = (m_{-1}) \circ (m_{-1})_{**} \kappa(j^1_x{\mathfrak s}) \cdot \IX = \widetilde{\IX^1}$,
\item[-] $\kappa(\widetilde{\IY^1}) = (m_{-1})_* \circ (m_{-1})_{**} \; j^1_x{\mathfrak s} \cdot \IX = \IX^2$,
\item[-] $\kappa(\widetilde{\IY^2}) = (m_{-1})_* \circ (m_{-1}) \; j^1_x{\mathfrak s} \cdot \kappa(j^1_x{\mathfrak s}) \cdot \IX \neq \widetilde{\IX^2}$.
\end{enumerate}
Therefore,
$$\begin{array}{ccl}
R(X_x, Y_x) Z_x & = & \nabla_{X_x} \nabla_Y Z - \nabla_{Y_x} \nabla_X Z \\
& = & \frac{1}{4} \Pi \Bigl( \IX^1 -_{**} \IX^2 -_{**} \widetilde{\IX^1} +_{**} \widetilde{\IX^2} \Bigr) - \\
& & \frac{1}{4} \Pi \Bigl( \IX^1 -_{**} \widetilde{\IX^1} -_{**} \IX^2 +_{**} \kappa(\widetilde{\IY^2}) \Bigr) \\
& = & \frac{1}{4} \Pi \Bigl( \bigl(\IX^1 -_{**} \IX^2 -_{**} \widetilde{\IX^1} +_{**} \widetilde{\IX^2}\bigr), \\
& & \;\;\;\;\;\;\; \bigl(\IX^1 -_{**} \widetilde{\IX^1} -_{**} \IX^2 +_{**} \kappa(\widetilde{\IY^2}) \bigr) \Bigr) \\
& = & \frac{1}{4} \Pi \Bigl(\widetilde{\IX^2}, \kappa(\widetilde{\IY^2}) \Bigr)
\end{array}$$
In other words,
$$\begin{array}{ccl}
R(X_x, Y_x) Z_x & = & \frac{1}{4} \Pi \Bigl((m_{-1}) \circ (m_{-1})_{*} \; \kappa(j^1_x{\mathfrak s}) \cdot j^1_x{\mathfrak s} \cdot \IX, \\
& & \;\;\;\;\;\;\; (m_{-1})_* \circ (m_{-1}) \; j^1_x{\mathfrak s} \cdot \kappa(j^1_x{\mathfrak s}) \cdot \IX \Bigr) \\
& = & \frac{1}{4} \Pi \Bigl(\kappa(j^1_x{\mathfrak s}) \cdot j^1_x{\mathfrak s} \cdot \IX, j^1_x{\mathfrak s} \cdot \kappa(j^1_x{\mathfrak s}) \cdot \IX \Bigr)
\end{array}$$
\cqfd
\section{Third order affine extension}\label{third-order-affine-extension}
As for order $2$, one can prove that affine jets of order $3$ extending a given $1$-jet always exist, provided $(1,1,1)$-jets, that is, elements of the groupoid $\b^{(1,1,1)}_{nh}(\PP(M))$, are allowed.
Recall from \aref{b111} that the latter are of the type
$$\xi = j^1_xb,$$
where
$$b : U_x \to \b^{(1,1)}_{nh}(\PP(M)) : x' \mapsto j^1_{x'}b_{x'}$$
is some local bisection of $\b^{(1,1)}_{nh}(\PP(M))$ and the various
$$b_{x'} : U_{x'} \to \b^{(1)}(\PP(M)),$$
for $x' \in U_x$, form a smooth family of local bisections of $\b^{(1)}(\PP(M))$. Recall also that when $\xi$ lies in $\b^{(1,1,1)}(\PP(M))$, we may assume that $b_{x'}(x') = b_x(x')$ and that $b_{x'}$ is tangent to $\e$ for all $x'$ or equivalently that $(\beta \circ b_{x'})_{*_{x'}} = b_{x'}(x')$ (cf.~observation following \dref{b111h}).
\begin{dfn}\label{affine111jet} A $(1,1,1)$-jet $\xi$ is affine if it belongs to $\b^{(1,1,1)}(\PP(M))$, if its $(1,1)$-part $p(\xi) = p_*(\xi) = p_{**}(\xi)$ is affine and if for any vector fields $X$, $Y$, $Z$ in ${\mathfrak X}(M)$, we have
\begin{equation}\label{affine111}
\xi \Bigl(\nabla_{X_x}\nabla_Y Z \Bigr) = \nabla_{\xi X_x} \nabla_{b_xY} b_{\centerdot} Z.
\end{equation}
\end{dfn}
Let us say a few words about the right hand side of (\ref{affine111}). The vector field $b_xY$ is defined on a neighborhood $V_y$ of $y$ by $(b_xY)_{y'} = b_x(x') Y_{x'}$ with $b_x^0(x') = y'$. The notation $b_{\centerdot} Z$ stands for the family $T_{y'}$ of vector fields parameterized by $y' = b_x^0(x') \in V_y$~:
$$T_{y'} : V_{y'} \to TM : y'' = b_{x'}^0(x'') \mapsto (b_{x'}Z)_{y''} = b_{x'}(x'')Z_{x''}.$$
It is differentiated covariantly in the direction of the vector field $y' \mapsto (b_x Y)_{y'}$, and the result, which depends twice on the variable $y'$, is then covariantly differentiated in the direction of $\xi X_x \in T_yM$.
\begin{rmk} It is also important to notice that a $(1,1,1)$-jet that satisfies (\ref{affine111}) alone does not necessarily have an affine $(1,1)$-part. Indeed, let $b$ denote a local bisection of $\b^{(1)}(\PP(M))$ such that $j^1_xb = S(\xi)$. Then
$$\xi \Bigl(\nabla_{X_x}\nabla_Y Z \Bigr) = \nabla_{\xi X_x}b \bigl(\nabla_Y Z \bigr).$$
So $j^1_x j^1_\centerdot b_\centerdot$ is affine if and only if
$$\nabla_{\xi X_x}\Bigl(b \bigl(\nabla_Y Z \bigr) - \nabla_{b_x Y} b_{\centerdot} Z \Bigr) = 0$$
for all $X$, $Y$, $Z$ in ${\mathfrak X}(M)$. The latter relation only means that, for any vector fields $Y$ and $Z$, the two local vector fields
$$U = b \bigl(\nabla_Y Z \bigr) \quad \mbox{and} \quad V = \nabla_{b_x Y} b_{\centerdot} Z$$ induce the same map
$p^v \circ U_{*_y} = p^v \circ V_{*_y} : T_yM \to T_yM$, where $p^v$ denotes the projection $p^v : T^2M \to TM$ induced from the horizontal distribution on $T^2M$ associated to $\nabla^{\mathfrak s}$. Still, the vectors $U_y$ and $V_y$ might not agree in general. Equivalently, $j^1_x b_x$ might not coincide with $j^1_x b = S(\xi)$.
\end{rmk}
The following statement follows directly from \pref{uae}.
\begin{prop}\label{uaebis} Given a symmetry jet ${\mathfrak s} : M \to \b^{(1,1)}(\PP(M))$ and the corresponding distribution $\d$ on $\b^{(1)}(\PP(M))$, the (tautological) distribution
$${\mathfrak D}_{S(\xi)} = S_{*_\xi}(\d_\xi)$$
along $\im S \subset \b^{(1,1)}(\PP(M))$ corresponds to a groupoid morphism
$${\mathbb S} : \b^{(1)}(\PP(M)) \to \b^{(1,1,1)}(\PP(M))$$
whose image consists of affine $(1,1,1)$-jets and such that $p \circ {\mathbb S}$ coincides with $S$.
\end{prop}
\noindent{\bf Proof}. For any $\xi \in \b^{(1)}(\PP(M))$ let $a_\xi$ denote some local bisection $x_1 \mapsto a_\xi(x_1)$ such that $j^1_xa_\xi = S(\xi)$, for $x = \alpha(\xi)$. Then, for each point $a_\xi(x_1)$, the expression $a_{a_\xi(x_1)}$ denotes a local bisection, that depends smoothly on $x_1$, whose first jet at $x_1$ coincides with $S(a_\xi(x_1))$. It is tautological that the $(1,1,1)$-jet
$$j^1_x (j^1_{x_1} a_{a_\xi(x_1)}) = j^1_x (S \circ a_\xi)$$
is affine. Indeed,
$$\xi \Bigl( \nabla_{X_x} \nabla_Y Z \Bigr) = \nabla_{\xi X_x} a_\xi \Bigl( \nabla_{Y}Z \Bigr) = \nabla_{\xi X_x} \nabla_{a_\xi Y} a_{a_\xi(\centerdot)} Z.$$
Moreover, $j^1_x (S \circ a_\xi)$ belongs to $\b^{(1,1,1)}(\PP(M))$~:
\begin{enumerate}
\item[-] $p(j^1_x (S \circ a_\xi)) = S \circ a_\xi(x) = S(\xi)$,
\item[-] $p_*(j^1_x (S \circ a_\xi)) = j^1_x(p \circ S \circ a_\xi) = j^1_x a_\xi = S(\xi)$,
\item[-] $p_{**}(j^1_x (S \circ a_\xi)) = j^1_x(p_* \circ S \circ a_\xi) = j^1_x a_\xi = S(\xi)$.
\end{enumerate}
\ \\
Observe now that $D(j^1_x (S \circ a_\xi)) = (S \circ a_\xi)_{*_x}(T_xM) = S_{*_\xi}(\d_\xi)$. Furthermore, the section
$${\mathbb S} : \b^{(1)}(\PP(M)) \to \b^{(1,1,1)}(\PP(M)) : \xi \mapsto {\mathbb S}(\xi) = j^1_x(S \circ a_\xi)$$
is a groupoid morphism. Indeed, if $(\xi_1, \xi_2) \in \b^{(1)} \times_{(\alpha, \beta)} \b^{(1)}$, then
$$D\Bigl({\mathbb S}(\xi_1 \cdot \xi_2)\Bigr) = S_{*_{\xi_1 \cdot \xi_2}}(\d_{\xi_1 \cdot \xi_2}) = S_{*_{\xi_1 \cdot \xi_2}}(\d_{\xi_1} \cdot \d_{\xi_2}) = S_{*_{\xi_1}}(\d_{\xi_1} ) \cdot S_{*_{\xi_2}} (\d_{\xi_2})$$
(cf.~\rref{1jets-as-planes}) implies that ${\mathbb S}(\xi_1 \cdot \xi_2) = {\mathbb S}(\xi_1) \cdot {\mathbb S}(\xi_2)$.
\cqfd
\begin{prop}\label{uae-order3} The affine $(1,1,1)$-jet ${\mathbb S}(\xi)$ is the unique affine $(1,1,1)$-jet whose first order is $\xi$.
\end{prop}
\noindent{\bf Proof}. The idea of the proof is to compute
\begin{equation}\label{xi-nabla}
\xi \Bigl(\nabla_{X_x} \nabla_Y Z \Bigr) - \nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z,
\end{equation}
for a $(1,1,1)$-jet $\zeta = j^1_xj^1_\centerdot b_\centerdot$ in $\b^{(1,1,1)}(\PP(M))$ whose second order part is $S(\xi)$, in terms of ${\mathfrak s}$ and ${\mathbb S}$ so as to make the condition that (\ref{xi-nabla}) vanishes equivalent to $\zeta = {\mathbb S}(\xi)$. Using the formula (\ref{nablacarre}) with $\xi_o = {\mathbb S}(-I_x)$ and taking into account the facts that $p(\xi)(\pi(\X^1, \X^2)) = \pi(\xi \cdot \X^1, \xi \cdot \X^2)$ for any $(1,1)$-jet $\xi$ and $p_*(\xi) \cdot \pi_*(\IX^1, \IX^2) = \pi_* (\xi \cdot \IX^1, \xi \cdot \IX^2)$ for any $(1,1,1)$-jet $\xi$ (cf.~(\ref{pi*})), the first term $\xi(\nabla_{X_x} \nabla_Y Z)$ can be rewritten~:
$$\begin{array}{l}
\quad \displaystyle{\frac{1}{4} \xi \Bigl\{ \pi \Bigl[ \pi_{*} \Bigl(\IX, (m_{-1})_* \circ (m_{-1})_{**} j^1_x{\mathfrak s} \cdot \IX \Bigr),} \\
\quad \displaystyle{\pi_{*} \Bigl( (m_{-1}) \circ (m_{-1})_{**} \; {\mathbb S}(-I_x) \cdot \IX, (m_{-1}) \circ (m_{-1})_{*} \; {\mathbb S}(-I_x) \cdot j^1_x{\mathfrak s} \cdot \IX \Bigr) \Bigr]\Bigr\}} \\
= \displaystyle{\frac{1}{4} \pi \Bigl[ \pi_{*} \Bigl({\mathbb S}(\xi) \cdot \IX, (m_{-1})_* \circ (m_{-1})_{**} {\mathbb S}(\xi) \cdot j^1_x{\mathfrak s} \cdot \IX \Bigr),} \\
\quad \displaystyle{\pi_{*} \Bigl((m_{-1}) \circ (m_{-1})_{**} {\mathbb S}(\xi) \cdot {\mathbb S}(-I_x) \cdot \IX, (m_{-1}) \circ (m_{-1})_{*} {\mathbb S}(\xi) \cdot {\mathbb S}(-I_x) \cdot j^1_x{\mathfrak s} \cdot \IX \Bigr) \Bigr]} \\
= \displaystyle{\frac{1}{4} \pi \Bigl[ \pi_{*} \Bigl({\mathbb S}(\xi) \cdot \IX, (m_{-1})_* \circ (m_{-1})_{**} j^1_y{\mathfrak s} \cdot {\mathbb S}(\xi) \cdot \IX \Bigr),} \\
\quad \displaystyle{\pi_{*} \Bigl( (m_{-1}) \circ (m_{-1})_{**} {\mathbb S}(-I_y) \cdot {\mathbb S}(\xi) \cdot \IX, (m_{-1}) \circ (m_{-1})_{*} {\mathbb S}(-I_y) \cdot j^1_y{\mathfrak s} \cdot {\mathbb S}(\xi) \cdot \IX \Bigr) \Bigr],}
\end{array}$$
where $\IX = Z_{**_{Y_x}} Y_{*_x} X_x$. The last equality follows from the fact that ${\mathbb S}(\xi)$ commutes with ${\mathbb S}(-I_x)$ and with $j^1_x{\mathfrak s}$. This is really the key point here and the main property of ${\mathbb S}(\xi)$ that distinguishes it from other $(1,1,1)$-jets. Indeed, ${\mathbb S}$ being a groupoid morphism, we have
$${\mathbb S}(\xi) \cdot {\mathbb S}(-I_x) = {\mathbb S}(\xi \cdot -I_x) = {\mathbb S}(-I_y \cdot \xi) = {\mathbb S}(-I_y) \cdot {\mathbb S}(\xi)$$
and
$$\begin{array}{lll}
{\mathbb S}(\xi) \cdot j^1_x\fs & = & j^1_x (S \circ a_\xi) \cdot j^1_x(S \circ -I) = j^1_x \Bigl((S \circ a_\xi) \cdot (S \circ -I)\Bigr) \\
& = & j^1_x\Bigl(S \circ (a_\xi \cdot -I)\Bigr) = j^1_x\Bigl(S \circ (-I \cdot a_\xi)\Bigr) = j^1_x\Bigl((S \circ -I) \cdot (S \circ a_\xi)\Bigr) \\
& = & \Bigl.j^1_y\fs \cdot {\mathbb S}(\xi).
\end{array}$$
For the second term of (\ref{xi-nabla}), notice first that since $\zeta$ belongs to $\b^{(1,1,1)}(\PP(M))$, we may assume that $b_{x'}(x') = b_{x}(x')$ and $(\beta \circ b_{x'})_{*_{x'}} = b_{x'}(x')$. Let $\gamma : (-\varepsilon , \varepsilon ) \to M$ be a path tangent to $X_x$ at $0$. Then $\tilde{\gamma} = \beta \circ b_x \circ \gamma$ is tangent to $\xi X_x$ at $y$, and we observe that
$$\nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z = \displaystyle{\frac{1}{2} \pi \Bigl( \frac{d}{dt} \bigl(\nabla_{b_x Y} b_{\centerdot} Z\bigr)_{\tilde{\gamma}(t)} \Bigr|_{t=0}, \pmb{-} {\mathfrak s}(y) \cdot \frac{d}{dt} \bigl(\nabla_{b_x Y} b_{\centerdot} Z\bigr)_{\tilde{\gamma}(t)} \Bigr|_{t=0} \Bigr)}.$$
Now, for each $t \in (- \varepsilon , \varepsilon )$ let $\tau_t : (-\varepsilon , \varepsilon ) \to M$ be a path tangent to $Y_{\gamma(t)}$ at $0$. Again, the path $\tilde{\tau}_t = \beta \circ b_{\gamma(t)} \circ \tau_t$ is tangent to $(b_xY)_{\tilde{\gamma}(t)}$.
Then
$$\begin{array}{cll}
\Bigl(\nabla_{b_x Y} b_{\centerdot} Z\Bigr)_{\tilde{\gamma}(t)} & = & \displaystyle{ \nabla_{(b_x Y)_{\tilde{\gamma}(t)}} \bigl(b_{\gamma(t)} Z\bigr)} \\
& = & \displaystyle{\frac{1}{2} \pi \Bigl( \frac{d}{ds} (b_{\gamma(t)}Z)_{\tilde{\tau}_t(s)} \Bigr|_{s=0}, \pmb{-} {\mathfrak s}(\tilde{\gamma}(t)) \cdot \frac{d}{ds} (b_{\gamma(t)}Z)_{\tilde{\tau}_t(s)} \Bigr|_{s=0} \Bigr).}
\end{array}$$
Besides,
$$\begin{array}{cll}
\displaystyle{\frac{d}{ds} \Bigl(b_{\gamma(t)}Z \Bigr)_{\tilde{\tau}_t(s)} \Bigr|_{s=0}} & = & \displaystyle{\frac{d}{ds} b_{\gamma(t)} \bigl(\tau_t(s)\bigr) Z\bigl(\tau_t(s)\bigr) \Bigr|_{s=0}} \\
& = & j^1_{\gamma(t)} b_{\gamma(t)} \cdot Z_{*_{\gamma(t)}} Y_{\gamma(t)}
\end{array}$$
and
$$\frac{d}{dt} j^1_{\gamma(t)} b_{\gamma(t)} \cdot Z_{*_{\gamma(t)}} Y_{\gamma(t)} \Bigr|_{t=0} = j^1_x j^1_\centerdot b_\centerdot \cdot Z_{**_{Y_x}} Y_{*_x} X_x.$$
Therefore,
$$\begin{array}{cll}
\displaystyle{\frac{d}{dt} \Bigl(\nabla_{b_x Y} b_{\centerdot} Z \Bigr)_{\tilde{\gamma}(t)} \Bigr|_{t=0}} & = & (m_{\frac{1}{2}})_* \pi_* \Bigl( \zeta \cdot Z_{**_{Y_x}} Y_{*_x} X_x, \\
& & \; \qquad \qquad (m_{-1})_* \circ (m_{-1})_{**} j^1_y{\mathfrak s} \cdot \zeta \cdot Z_{**_{Y_x}} Y_{*_x} X_x \Bigr).
\end{array}$$
We pursue our computation of $\nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z$ as in the proof of \tref{curvthm}, and obtain
$$\begin{array}{cll}
\nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z & = & \displaystyle{\frac{1}{4} \pi \Bigl[ \pi_* \Bigl( \zeta \cdot \IX, (m_{-1})_* \circ (m_{-1})_{**} \;j^1_y{\mathfrak s} \cdot \zeta \cdot \IX \Bigr),} \\
& & \quad \;\;\; \pi_* \Bigl((m_{-1}) \circ (m_{-1})_{**} \; {\mathbb S}(-I_y) \cdot \zeta \cdot \IX, \\
& & \qquad \quad \;\; (m_{-1}) \circ (m_{-1})_{*} \; {\mathbb S}(-I_y) \cdot j^1_y{\mathfrak s} \cdot \zeta \cdot \IX\Bigr) \Bigr].
\end{array}$$
Thus $\nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z$ admits the same expression as $\xi(\nabla_{X_x} \nabla_Y Z)$ with $\zeta$ instead of ${\mathbb S}(\xi)$. We claim that $\zeta$ is affine if and only if it coincides with ${\mathbb S}(\xi)$. Indeed, since $\zeta \cdot \IX$ and ${\mathbb S}(\xi) \cdot \IX$ are in the same $(p \times p_* \times p_{**})$-fiber, there exists a $W \in T_yM$ such that
$${\mathbb S}(\xi) \cdot \IX = A^{\zeta \cdot \IX}_{\p} (W) \stackrel{\rm not}{=} \zeta \cdot \IX \pmb{+} W.$$
(cf.~(\ref{param-P-fiber}) in \aref{third-der}). Then the relation (\ref{action-sur-TM}) implies that
$$\begin{array}{rcl}
\displaystyle{j^1_y{\mathfrak s} \cdot {\mathbb S}(\xi) \cdot \IX} & = & j^1_y{\mathfrak s} \cdot \zeta \cdot \IX \pmb{+} -W, \\
\displaystyle{{\mathbb S}(-I_y) \cdot {\mathbb S}(\xi) \cdot \IX} & = & {\mathbb S}(-I_y) \cdot \zeta \cdot \IX \pmb{+} - W,\\
\displaystyle{{\mathbb S}(-I_y) \cdot j^1_y{\mathfrak s} \cdot {\mathbb S}(\xi) \cdot \IX} & = & {\mathbb S}(-I_y) \cdot j^1_y{\mathfrak s} \cdot \zeta \cdot \IX \pmb{+} W.
\end{array}$$
Each one of the previous relations remains valid if the two elements of $T^3M$ appearing on each side of the equality are both multiplied by an even number of negative signs. Hence, defining $\ell_i \in \EL(T^3M)$, $i =1, 2, 3$, by
\begin{enumerate}
\item[-] $\ell_1 \cdot \IY = (m_{-1})_* \circ (m_{-1})_{**} \;j^1_y{\mathfrak s} \cdot \IY$,
\item[-] $\ell_2 \cdot \IY = (m_{-1}) \circ (m_{-1})_{**} \; {\mathbb S}(-I_y) \cdot \IY$,
\item[-] $\ell_3 \cdot \IY = (m_{-1}) \circ (m_{-1})_{*} {\mathbb S}(-I_y) \cdot j^1_y{\mathfrak s} \cdot \IY$,
\end{enumerate}
we see that
$$\begin{array}{l}
\ell_i \cdot {\mathbb S}(\xi) \cdot \IX = \ell_i \cdot \zeta \cdot \IX \pmb{+} - W \qquad i = 1, 2 \\
\ell_3 \cdot {\mathbb S}(\xi) \cdot \IX = \ell_3 \cdot \zeta \cdot \IX \pmb{+} W.
\end{array}$$
This implies that
$$\begin{array}{l}
\pi_* \Bigl( {\mathbb S}(\xi) \cdot \IX, \ell_1 \cdot {\mathbb S}(\xi) \cdot \IX\Bigr) = \pi_* \Bigl(\zeta \cdot \IX, \ell_1 \cdot \zeta \cdot \IX \Bigr) \pmb{+} 2W \\
\pi_* \Bigl(\ell_2 \cdot {\mathbb S}(\xi) \cdot \IX, \ell_3\cdot {\mathbb S}(\xi) \cdot \IX \Bigr) = \pi_* \Bigl(\ell_2 \cdot \zeta \cdot \IX, \ell_3 \cdot \zeta \cdot \IX \Bigr) \pmb{+} -2W.
\end{array}$$
Hence
$$\xi \Bigl(\nabla_{X_x} \nabla_Y Z \Bigr) - \nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z,$$
which coincides with
$$\begin{array}{l}
\displaystyle{\frac{1}{4} \pi \Bigl[ \pi_{*} \Bigl({\mathbb S}(\xi) \cdot \IX, \ell_1 \cdot {\mathbb S}(\xi) \cdot \IX \Bigr), \pi_{*} \Bigl( \ell_2 \cdot {\mathbb S}(\xi) \cdot \IX, \ell_3 \cdot {\mathbb S}(\xi) \cdot \IX \Bigr) \Bigr] - }\\
\displaystyle{\frac{1}{4} \pi \Bigl[ \pi_* \Bigl( \zeta \cdot \IX, \ell_1 \cdot \zeta \cdot \IX \Bigr), \pi_* \Bigl(\ell_2 \cdot \zeta \cdot \IX, \ell_3 \cdot \zeta \cdot \IX\Bigr) \Bigr], }
\end{array}$$
is equal to $W$, i.e. we have shown that
\begin{equation}\label{nabla-nabla}
\xi \Bigl(\nabla_{X_x} \nabla_Y Z \Bigr) - \nabla_{\xi X_x} \nabla_{b_x Y} b_{\centerdot} Z = \Pi \Bigl( {\mathbb S}(\xi) \cdot \IX, \zeta \cdot \IX\Bigr).
\end{equation}
Hence $\zeta$ is affine if and only if $W = 0$, that is, if and only if $\zeta \cdot \IX = {\mathbb S}(\xi) \cdot \IX$ for all $\IX \in T_x^3M$, which in turn implies that $\zeta = {\mathbb S}(\xi)$.
\cqfd
\section{The curvature as a measure of the integrability of affine jets}
This section is devoted to proving that the affine $(1,1,1)$-jet ${\mathbb S}(\xi)$ extending $\xi$, whose existence and uniqueness were established in the previous section, is a genuine $3$-jet if and only if the $1$-jet $\xi$ preserves both the covariant derivative of the torsion and the curvature. More precisely, a $(1,1,1)$-jet in $\b^{(1,1,1)}(\PP(M))$ is holonomic if and only if it is symmetric, that is, preserved by the involutions $\kappa$ and $\kappa_*$. Moreover, a $(1,1,1)$-jet is invariant under $\kappa_*$ (respectively $\kappa$) if and only if its first order preserves the covariant derivative of the torsion (respectively the curvature). The first statement, which is the content of the next proposition, is obtained from the relation (\ref{torsion}) by differentiating both sides. The second one, \pref{prop-curvature}, involves computing the curvature tensor, evaluated on vectors $X_x, Y_x, Z_x \in T_xM$, in terms of the second derivatives of $X, Y, Z$.
\begin{prop}\label{kappa*} Let $\xi \in \b^{(1)}(\PP(M))$ and suppose $S(\xi)$ is holonomic. If $x = \alpha(\xi)$, let $X_x$, $Y_x$ and $Z_x$ be three vectors in $T_xM$ that extend to vector fields $X$, $Y$ and $Z$. Let also ${\mathfrak X} = Z_{**_{Y_x}} Y_{*_x} X_x$ in $T^3M$. Then
\begin{equation}\label{kappa*/nablaT}
\Pi \Bigl({\mathbb S}(\xi) \cdot {\mathfrak X}, \kappa_*({\mathbb S}(\xi)) \cdot {\mathfrak X}\Bigr) = \xi \Bigl( (\nabla_{Z_x}T^\nabla) (Y_x, X_x) \Bigr) - \Bigl(\nabla_{\xi Z_x}T^\nabla\Bigr)(\xi Y_x, \xi X_x).
\end{equation}
\end{prop}
Thus, when $S(\xi)$ is $\kappa$-invariant, the affine extension ${\mathbb S}(\xi)$ is $\kappa_*$-invariant if and only if $\xi$ preserves $\nabla T^\nabla$. In particular, ${\mathbb S}(\xi)$ is automatically $\kappa_*$-invariant when the connection $\nabla$ is torsionless.\\
\noindent{\bf Proof}. Let $t \in (-\varepsilon, \varepsilon) \mapsto (\xi_t, \mathbb ZE_t) \in \b^{(1)}(\PP(M)) \times_{(\alpha, p^2)} T^2M$ be a smooth path whose first component $\xi_t$ is tangent to $\d_{\xi_0}$ at $t = 0$ with non-vanishing velocity vector $\dot{\xi}_0$. Set ${\mathfrak X} = \frac{d\mathbb ZE_t}{dt}|_0$, $X_t = p(\mathbb ZE_t)$, $Y_t = p_*(\mathbb ZE_t)$, $X_x = X_0$, $Y_x = Y_0$, $Z_x = \alpha_{*_{\xi_0}}\dot{\xi}_0 = p_* \circ p_* ({\mathfrak X})$, $\Y = \frac{dX_t}{dt}|_0 = p_*({\mathfrak X})$ and $\X = \frac{dY_t}{dt}|_0 = p_{**}({\mathfrak X})$. Then, equation (\ref{torsion}) holds for any $t \in (-\varepsilon, \varepsilon)$~:
\begin{equation}\label{torsion-t}
\pi \Bigl(S(\xi_t) \cdot \mathbb ZE_t, \kappa(S(\xi_t)) \cdot \mathbb ZE_t \Bigr) = \xi_t(T^\nabla(Y_t, X_t)) - T^\nabla (\xi_t(Y_t), \xi_t(X_t)).
\end{equation}
Notice that
$$\frac{d}{dt}S(\xi_t) \cdot \mathbb ZE_t \Bigr|_{t=0} = \rho^{(1,1)}_*\Bigl(\frac{d}{dt}S(\xi_t)\Bigr|_{t=0}, \frac{d}{dt}\mathbb ZE_t \Bigr|_{t=0} \Bigr) = \rho^{(1,1)}_*\Bigl(S_{*_{\xi_0}}(\dot{\xi}_0), \IX \Bigr) = {\mathbb S}(\xi_0) \cdot \IX,$$
thanks to the hypothesis that $\dot{\xi}_0 \in \d_{\xi_0}$ and the fact that $D({\mathbb S}(\xi)) = S_{*_\xi} (\d_{\xi})$. Similarly,
$$\frac{d}{dt}\kappa(S(\xi_t)) \cdot \mathbb ZE_t \Big|_{t=0} = \kappa_*({\mathbb S}(\xi_0)) \cdot \IX,$$
thanks to \rref{kappa*-diff} which implies that $\rho^{(1,1)}_{*}(\kappa^M_*(S_{*_{\xi}}(\dot{\xi}_0)), \IX) = \kappa_*({\mathbb S}(\xi)) \cdot \IX$.
Thus, the derivative with respect to $t$ of (\ref{torsion-t}), evaluated at $t=0$, yields
\begin{multline}\label{torsion-der}
\pi_*\Bigl({\mathbb S}(\xi) \cdot {\mathfrak X}, \kappa_*({\mathbb S}(\xi)) \cdot {\mathfrak X}\Bigr) = \\
S(\xi) \cdot T^\nabla_{*_{(Y_x, X_x)}}(\X, \Y) -_* T^\nabla_{*_{(\xi Y_x, \xi X_x)}}(S(\xi) \cdot \X, S(\xi) \cdot \Y).
\end{multline}
Observe that, thanks to the hypothesis that $\xi$ preserves the torsion, both sides of (\ref{torsion-der}) are vectors in $T_{0_M}TM = TM \oplus TM$. Hence their vertical components are well-defined and agree, that is,
\begin{multline}
p^v \biggl( \pi_*\Bigl({\mathbb S}(\xi) \cdot {\mathfrak X}, \kappa_*({\mathbb S}(\xi)) \cdot {\mathfrak X}\Bigr) \biggr) = \\
p^v \biggl(S(\xi) \cdot T^\nabla_{*_{(Y_x, X_x)}}(\X, \Y) -_* T^\nabla_{*_{(\xi Y_x, \xi X_x)}}(S(\xi) \cdot \X, S(\xi) \cdot \Y) \biggr).
\end{multline}
Notice that $p^v$ coincides with $\widetilde{\nabla} : T^2M \to TM$, the vertical projection introduced in \lref{tilde-nabla}. In particular, the right-hand side of the previous relation coincides with the difference of the vertical projections of each term. Suppose that $\X = Y_{*_x} Z_x$ and $\Y = X_{*_x}Z_x$, where $X$ and $Y$ are local vector fields on $M$ and observe that
$$\widetilde{\nabla} \circ T^\nabla_{*_{(Y_x, X_x)}} (\X, \Y) = \nabla_{Z_x} \bigl(T^\nabla(Y, X)\bigr).$$
Indeed, $\widetilde{\nabla} (\W) = \nabla_{V_x} U$ when $\W = U_{*_x} V_x$. In particular, if $\X$ and $\Y$ are both tangent to the horizontal distribution $\h^\nabla = \widetilde{\nabla}^{-1}(0_{TM})$, then
$$\widetilde{\nabla} \circ T^\nabla_{*_{(Y_x, X_x)}} (\X, \Y) = \bigl(\nabla_{Z_x}T^\nabla\bigr) (Y_x, X_x).$$
Moreover, the action of an affine jet $S(\xi)$ on $T^2M$ commutes with the vertical projection $\widetilde{\nabla}$, that is
$$\widetilde{\nabla}(S(\xi) \cdot \X) = \xi \, \widetilde{\nabla}(\X).$$
Altogether, under the hypothesis that ${\mathfrak X}$ is such that $p_*({\mathfrak X})$ and $p_{**}({\mathfrak X})$ both belong to $\h^\nabla$, (\ref{torsion-der}) becomes
\begin{equation}\label{torsion-der-bis}
\Pi \Bigl({\mathbb S}(\xi) \cdot {\mathfrak X}, \kappa_*({\mathbb S}(\xi)) \cdot {\mathfrak X}\Bigr) = \xi \bigl((\nabla_{Z_x} T^\nabla) (Y_x, X_x) \bigr) - \bigl(\nabla_{\xi Z_x} T^\nabla\bigr) (\xi Y_x, \xi X_x),
\end{equation}
where we have used the fact that $p^v \circ \pi_*$ coincides with $\Pi$. \\
Finally, the assumption that $\X$ and $\Y$ belong to the horizontal distribution $\h^\nabla$ is not restrictive, since the left-hand side of (\ref{torsion-der-bis}) depends only on $X_x$, $Y_x$ and $Z_x$ (cf.~\rref{vert-part}).
\cqfd
\begin{rmk} Removing the hypothesis that $\xi$ preserves the torsion, we can still prove that
\begin{multline}\label{kappa*-gen}
\widetilde{\nabla} \Bigl( \pi_* \bigl({\mathbb S}(\xi) \cdot {\mathfrak X}, \kappa_*({\mathbb S}(\xi)) \cdot {\mathfrak X}\bigr) \Bigr) = \\
\xi \bigl((\nabla_{Z_x} T^\nabla) (Y_x, X_x) \bigr) - \bigl(\nabla_{\xi Z_x} T^\nabla\bigr) (\xi Y_x, \xi X_x).
\end{multline}
\end{rmk}
Now we will see that ${\mathbb S}(\xi)$ is $\kappa$-invariant if and only if $\xi$ preserves the curvature of $\nabla$. This will thus show that ${\mathbb S}(\xi)$ is a $3$-jet if and only if $\xi$ preserves the following three tensors~: the torsion of $\nabla$, its covariant derivative and the curvature of $\nabla$.
\begin{prop}\label{prop-curvature} Let $\xi \in \b^{(1)}(\PP(M))$ be a $1$-jet that preserves the torsion of $\nabla$, let $x = \alpha(\xi)$ and let $X_x$, $Y_x$ and $Z_x$ be three vectors in $T_xM$ that extend to vector fields $X$, $Y$ and $Z$. Let also ${\mathfrak X} = Z_{**_{Y_x}} Y_{*_x} X_x$ in $T^3M$. Then
\begin{equation}\label{curvature}
\Pi \Bigl({\mathbb S}(\xi) \cdot {\mathfrak X}, \kappa ({\mathbb S}(\xi)) \cdot {\mathfrak X} \Bigr) = \xi \Bigl(R^\nabla(X_x, Y_x)Z_x\Bigr) - R^\nabla(\xi X_x, \xi Y_x) \xi Z_x.
\end{equation}
In particular, if $\xi$ preserves the torsion tensor $T^\nabla$ and the curvature tensor $R^\nabla$, the affine extension ${\mathbb S}(\xi)$ is $\kappa$-invariant.
\end{prop}
\noindent{\bf Proof}. Write ${\mathbb S}(\xi) = j^1_x j^1_\centerdot b_\centerdot$, with $b_x$ (respectively $b_{x'}$) a local bisection of $\b^{(1)}(\PP(M))$ tangent to $\d_\xi$ (respectively $\d_{b_x(x')}$). Observe that the vector fields $b_x Y$ and $b_x Z$ extend the vectors $\xi Y_x$ and $\xi Z_x$ respectively and can therefore be used to compute the curvature, as is done below.
$$\begin{array}{lll}
&& \xi \Bigl(R^\nabla(X_x, Y_x)\; Z_x\Bigr) - R^\nabla(\xi X_x, \xi Y_x) \; \xi Z_x \\
& = & \xi \Bigl( \nabla_{X_x} \nabla_Y Z - \nabla_{Y_x} \nabla_X Z - \nabla_{[X, Y]_x} Z\Bigr) \\
&& - \Bigl( \nabla_{\xi X_x} \nabla_{b_x Y} b_x Z - \nabla_{\xi Y_x} \nabla_{b_x X} b_x Z - \nabla_{[b_x X, b_x Y]_y} b_x Z \Bigr).
\end{array}$$
The vector fields $X$ and $Y$ may be chosen so that their bracket at $x$ vanishes, or equivalently that $\kappa(X_{*_x} Y_x) = Y_{*_x} X_x$. Moreover, the assumption that $\xi$ preserves the torsion $T^\nabla$ implies that the bracket $[b_x X, b_x Y]_y$ vanishes as well, as shown below~:
$$\begin{array}{lll}
[b_x X, b_x Y]_y & = & \pi \Bigl( \bigl(b_x Y\bigr)_{*_y} (\xi X_x), \kappa \bigl((b_x X)_{*_y} (\xi Y_x) \bigr)\Bigr) \\
& = & \pi \Bigl(j^1_xb_x \cdot Y_{*_x} X_x, \kappa \bigl(j^1_x b_x \cdot X_{*_x} Y_x \bigr) \Bigr)\\
& = & \pi \Bigl(S(\xi) \cdot Y_{*_x} X_x, \kappa (S(\xi)) \cdot \kappa(X_{*_x} Y_x) \Bigr)\\
& = & \pi \Bigl(S(\xi) \cdot Y_{*_x} X_x, \kappa (S(\xi)) \cdot Y_{*_x} X_x\Bigr)\\
& = & \xi \Bigl(T^\nabla(X_x, Y_x) \Bigr) - T^\nabla \Bigl(\xi X_x, \xi Y_x \Bigr) \quad(\mbox{\pref{prop-torsion}}) \\
& = & 0\quad(\mbox{if $\xi$ preserves the torsion}).
\end{array}$$
Thus
$$\begin{array}{lll}
&& \xi \Bigl(R^\nabla(X_x, Y_x)\; Z_x\Bigr) - R^\nabla(\xi X_x, \xi Y_x) \; \xi Z_x \\
& = & \xi \Bigl( \nabla_{X_x} \nabla_Y Z \Bigr) - \Bigl( \nabla_{\xi X_x} \nabla_{b_x Y} b_x Z \Bigr) \\
&& - \xi \Bigl( \nabla_{Y_x} \nabla_X Z \Bigr) + \Bigl( \nabla_{\xi Y_x} \nabla_{b_x X} b_x Z \Bigr).
\end{array}$$
The relation (\ref{nabla-nabla}), established in the proof of \pref{uae-order3} and applied to $\zeta = j^2_x b_x$, which is also an extension of $S(\xi)$ under the hypothesis that $S(\xi)$ is holonomic, yields
$$\begin{array}{lll}
&& \xi \Bigl(R^\nabla(X_x, Y_x)\; Z_x\Bigr) - R^\nabla(\xi X_x, \xi Y_x) \; \xi Z_x \\
& = & \Pi \Bigl( {\mathbb S}(\xi) \cdot \IX, \zeta \cdot \IX \Bigr) - \Pi \Bigl( {\mathbb S}(\xi) \cdot \kappa(\IX), \zeta \cdot \kappa(\IX) \Bigr) \\
& = & \Pi \Bigl( {\mathbb S}(\xi) \cdot \IX, \zeta \cdot \IX \Bigr) - \Pi \Bigl( \kappa ( {\mathbb S}(\xi)) \cdot \IX, \kappa(\zeta) \cdot \IX \Bigr) \\
& = & \Pi \Bigl( {\mathbb S}(\xi) \cdot \IX, \zeta \cdot \IX \Bigr) - \Pi \Bigl( \kappa ( {\mathbb S}(\xi)) \cdot \IX, \zeta \cdot \IX \Bigr) \\
& = & \Pi \Bigl( {\mathbb S}(\xi) \cdot \IX, \kappa ( {\mathbb S}(\xi)) \cdot \IX \Bigr).
\end{array}$$
For the second equality, we have used the fact that $\kappa(Z_{**_{Y_x}}Y_{*_x} X_x) = Z_{**_{X_x}}X_{*_x} Y_x$, for the third the relation (\ref{i-kappa-bis}) in \aref{third-der} and for the fourth, the fact that $\kappa(j^2_xb_x) = j^2_x b_x$.
\cqfd
\begin{rmk}
It is possible to write a more general formula for (\ref{curvature}) when the assumption that $\xi$ preserves the torsion is not fulfilled.
\end{rmk}
\begin{cor} Suppose the torsion of $\nabla^{\mathfrak s}$ vanishes identically. Then the curvature $R$ of $\nabla^{\mathfrak s}$ may be recovered from (\ref{curvature}) by particularizing $\xi$. Indeed, considering some linear homothety $m_a : TM \to TM : X_x \mapsto a X_x$ with $a \neq \pm 1$, we obtain the following expression for the curvature~:
$$R (X_x, Y_x)Z_x = \frac{1}{a (1 - a^2)} \Pi \Bigl({\mathbb S}(m_a) \cdot {\mathfrak X}, \kappa ({\mathbb S}(m_a)) \cdot {\mathfrak X}\Bigr),$$
where $X$, $Y$, $Z \in \IX(M)$ and $\IX = Z_{**_{Y_x}} Y_{*_x} X_x$.
\end{cor}
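\begin{rmk}
As a sanity check on the normalization factor (an illustrative computation, not part of the original argument), the factor $\frac{1}{a(1-a^2)}$ can be read off directly from the right-hand side of (\ref{curvature}): since $m_a$ acts on tangent vectors as multiplication by $a$ and $R$ is trilinear in its vector arguments,
$$m_a \Bigl(R(X_x, Y_x)Z_x\Bigr) - R(m_a X_x, m_a Y_x)\, m_a Z_x = a\,R(X_x, Y_x)Z_x - a^3\, R(X_x, Y_x)Z_x = a(1 - a^2)\,R(X_x, Y_x)Z_x,$$
and the coefficient $a(1-a^2)$ is invertible precisely when $a \neq 0, \pm 1$.
\end{rmk}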
The subgroupoid of $1$-jets that preserve the torsion and the curvature can be characterized in terms of $\d^\fs$ as follows.
\begin{dfn}\label{int-locus} The integrability locus of a distribution $\d$ on a manifold $W$ is the set of points $w \in W$ such that the bracket of any pair of local vector fields near $w$ tangent to $\d$ belongs to $\d$ at $w$, or
$$\In(\d) = \Bigl\{w \in W; A, B \in \Gamma \d \Longrightarrow [A, B]_w \in \d\Bigr\}.$$
When $w \in \In(\d)$, we also say that $\d$ is flat at $w$.
\end{dfn}
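\begin{rmk}
For illustration: when $\d$ is involutive in the sense of Frobenius, the bracket of any two local sections of $\d$ is everywhere tangent to $\d$, so that
$$\d \mbox{ involutive} \Longrightarrow \In(\d) = W;$$
conversely, $\In(\d) = W$ is precisely the Frobenius integrability condition for $\d$.
\end{rmk}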
The following is a classical result of differential geometry~:
\begin{prop}\label{int/oscul} Let $\d$ be a distribution on a manifold $W$. If $w \in \In(\d)$, there exists an embedded submanifold $F \subset W$ which is osculatory to $\d$ at $w$. The second order jet of $F$ at $w$ is unique.
\end{prop}
\begin{lem}\label{Int-kappa} Let ${\mathfrak s}$ be a symmetry jet on $M$. Then $\xi \in \In(\d^{\mathfrak s})$ if and only if $\kappa({\mathbb S}(\xi)) = {\mathbb S}(\xi)$. In particular $\In(\d^\fs)$ is the set of $1$-jets that preserve the torsion and the curvature, that is $\In(\d^\fs) = \b(T^\fs, R^\fs)$ with the notation introduced in \dref{B(Q)}.
\end{lem}
\noindent{\bf Proof}. Observe that $\xi \in \In(\d^\fs)$ if and only if there exists a local bisection $b$ of $\b^{(1)}(\PP(M))$ that is osculatory to $\d_b^\fs$ at $x = \alpha(\xi)$. In other words, $Tb = D(j^1b)$ is tangent to $\d_b^\fs = D(S\circ b)$ at $x$, or $j^1b$ is tangent to $S \circ b$ at $x$. The latter statement is equivalent to $(j^1b)_*(T_xM) = (S \circ b)_*(T_xM) = \d_b^\fs$, that is, to $j^2_xb = {\mathbb S}(b(x))$. The latter equality says that the affine jet ${\mathbb S}(b(x))$ is $\kappa$-invariant.
\cqfd
\begin{rmk}
Observe that the integrability locus of $\d^{\mathfrak s}$ is a subgroupoid of $\b^{(1)}(\PP(M))$ that contains $I \cup -I$. It is all of $\b^{(1)}(\PP(M))$ if and only if $R^{\mathfrak s}$ vanishes identically. It would be interesting to know whether $\In(\d^{\mathfrak s})$ determines $R^{\mathfrak s}$ in general.
\end{rmk}
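\begin{rmk}
A simple consistency check (illustrative, not used in the sequel): for $M = \mathbb{R}^n$ equipped with the symmetry jet of the standard flat connection, both $T^{\mathfrak s}$ and $R^{\mathfrak s}$ vanish identically, so every $1$-jet vacuously preserves them and \lref{Int-kappa} gives
$$\In(\d^{\mathfrak s}) = \b(T^{\mathfrak s}, R^{\mathfrak s}) = \b^{(1)}(\PP(\mathbb{R}^n)),$$
in accordance with the preceding remark.
\end{rmk}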
\section{Why are there no tensors other than the torsion and the curvature~?}\label{why-there-are-no}
The purpose of this section is to explain why no new tensors appear if we pursue this procedure. First of all, (semi-holonomic) affine extensions of all orders exist. At order four, the affine extension $S^{(1,1,1,1)}(\xi)$ of $\xi \in \b^{(1)}(\PP(M))$ is defined through
$$D(S^{(1,1,1,1)}(\xi)) = {\mathbb S}_{*_{\xi}}(\d^{\mathfrak s}_\xi).$$
Set ${\mathfrak D}^{\fs}_{S(\xi)} \stackrel{\rm def}{=} S_{*_\xi}(\d^{\mathfrak s}_\xi)$. The relation
\begin{equation}\label{iterated-integrability}
[{\mathfrak D}^{\fs}, {\mathfrak D}^{\fs}]_{S(\xi)} = [S_{*} \d^{\mathfrak s}, S_* \d^{\mathfrak s}]_{S(\xi)} = S_{*_\xi} [\d^{\mathfrak s}, \d^{\mathfrak s}]_\xi
\end{equation}
implies that if $\xi$ belongs to the integrability locus of $\d^{\mathfrak s}$ (cf.~\dref{int-locus}), then $S(\xi)$ automatically belongs to that of ${\mathfrak D}^{\fs}$. As a consequence, if $\xi$ preserves $T$, $\nabla T$ and $R$, then $S^{(1,1,1,1)}(\xi)$ is automatically $\kappa$-invariant (\pref{kappa-inv}). This says that the $\kappa$-invariance of $S^{(1,1,1,1)}(\xi)$ is no longer obstructed by any tensor. In addition, differentiating the relations (\ref{kappa*/nablaT}) and (\ref{curvature}) shows that the $\kappa_*$- and $\kappa_{**}$-invariance of $S^{(1,1,1,1)}(\xi)$ depends on the $\xi$-invariance of $\nabla R$ and $\nabla \nabla T$ (cf.~\pref{next-order-inv}). This process can be iterated, showing that if $\xi$ preserves the various covariant derivatives of the torsion and the curvature, then the affine extension of $\xi$ is holonomic at any order. \\
In order to write proofs of the results announced previously, the following result, due to Tapia, is particularly useful. It generalizes to Lie groupoids the well-known theorem of E.~Cartan according to which any closed subgroup of a Lie group is an embedded Lie subgroup~:
\begin{thm}\label{closed-subgr}\cite{Ta} Let $G \rightrightarrows M$ be a locally trivial Lie groupoid. Then any closed (algebraic) subgroupoid of $G$ is an embedded Lie subgroupoid.
\end{thm}
Locally trivial Lie groupoids are those Lie groupoids for which the map $\alpha \times \beta : G \to M \times M$ is a surjective submersion. This condition is certainly satisfied by $\b^{(1)}(\PP(M))$ (cf.~\lref{loc-triv}).
\begin{dfn}\label{B(Q)} Given a family $\{Q_1, ..., Q_k\}$ of $(1, p)$-tensors on $M$, the subgroupoid of $\b^{(1)}(\PP(M))$ consisting of those $1$-jets $\xi$ that preserve all $Q_i$'s in the sense that
$$Q_i (\xi X_1, ..., \xi X_{p}) = \xi (Q_i(X_1, ..., X_{p}))$$
for all $X_1,..., X_p \in \IX(M)$ and all $i = 1, ..., k$ is denoted by $\b(Q_1, ..., Q_k)$.
\end{dfn}
\begin{cor} For any choice of tensors $Q_1, ..., Q_k$, the (algebraic) subgroupoid $\b(Q_1, ..., Q_k)$ is an embedded Lie subgroupoid. In particular, given a symmetry jet ${\mathfrak s}$, this yields a set of embedded Lie subgroupoids ${\mathcal B}(T)$, ${\mathcal B}(R)$, ${\mathcal B}(T, \nabla T, R)$, etc., for $T$ (respectively $R$) the torsion (respectively the curvature) of the associated connection $\nabla^{\mathfrak s}$.
\end{cor}
\begin{lem}\label{d-tangent-to-b(Q)} Given a tensor $Q$ on $M$, the distribution $\d^{\mathfrak s}$ is tangent to ${\mathcal B}(Q)$ at $\xi$ if and only if $\xi$ preserves the first covariant derivative of $Q$.
\end{lem}
\noindent{\bf Proof}. A vector $X_\xi = \frac{d \xi_t}{dt}\bigr|_{t=0}$ in $\d^{\mathfrak s}_\xi$ belongs to $T_\xi\b(Q)$ if and only if
$$\frac{d}{dt} \xi_t Q \Bigl(X_1^t, ..., X_p^t \Bigr) \Bigl|_{t=0} = \frac{d}{dt} Q \Bigl(\xi_t X_1^t, ..., \xi_t X_p^t \Bigr) \Bigr|_{t=0},$$
for any paths $t \mapsto X_i^t$ of vectors in $T_{\alpha(\xi_t)}M$. Equivalently,
$$S(\xi) \cdot Q_{*_{(X_1^0, ..., X_p^0)}} \Bigl( \X_1, ..., \X_p \Bigr) = Q_{*_{(\xi X_1^0, ..., \xi X_p^0)}} \Bigl( S(\xi) \cdot \X_1, ..., S(\xi) \cdot \X_p\Bigr),$$
where $\X_i = \frac{d}{dt}X_i^t \bigl|_{t=0}$. Projecting horizontally with respect to $\h^\nabla$ and assuming, without loss of generality, that $\X_i$ is tangent to the horizontal distribution $\h^\nabla$, as is done in the proof of \pref{kappa*}, we obtain
$$\xi \Bigl( \nabla_{Z_x} Q (X_1^0, ...,X_p^0)\Bigr) = \nabla_{\xi Z_x} Q \Bigl( \xi X_1^0, ..., \xi X_p^0 \Bigr),$$
where $Z_x = \alpha_{*_\xi}(X_\xi)$.
\cqfd
Now we prove existence of affine $(1,1,1,1)$-jets. Let $\xi \in \b^{(1)}(\PP(M))$, and let $a_\xi$ denote some local bisection $U \ni x_1 \mapsto a_\xi(x_1) \in \b^{(1)}(\PP(M))$ tangent to $\d^{\mathfrak s}$ at $\xi$, or equivalently such that $j^1_xa_\xi = S(\xi)$, for $x = \alpha(\xi)$. Similarly, for each $1$-jet $a_\xi(x_1)$, $x_1\in U$, let $a^2_\xi(x_1)$ (instead of $a_{a_\xi(x_1)}$) denote a local bisection tangent to $\d^{\mathfrak s}$ at the point ${a_\xi(x_1)}$. Iterating this procedure, we obtain a family of local bisections $x_k \mapsto a^k_\xi(x_1, ..., x_{k-1})(x_k)$, $k = 1, 2, ...$ of $\b^{(1)}(\PP(M))$ such that
\begin{enumerate}
\item[-] $a^k_\xi(x_1, ..., x_{k-1})(x_{k-1}) = a^{k-1}_\xi(x_1, ..., x_{k-2})(x_{k-1})$,
\item[-] $j^1_{x_{k-1}} a^k_\xi(x_1, ..., x_{k-1}) = S \bigl(a^{k-1}_\xi(x_1, ..., x_{k-2})(x_{k-1}) \bigr).$
\end{enumerate}
\begin{prop}\label{affine-ext-order-4} Let $\xi \in \b^{(1)}(\PP(M))$, with $\alpha (\xi) = x$. Then the $(1,1,1,1)$-jet
$$S^{(1,1,1,1)}(\xi) = j^1_xj^1_{x_1}a^2_\xi(x_1) = j^1_x({\mathbb S} \circ a_\xi)$$
is affine in the sense that for any $X, Y, Z, T \in \IX(M)$, we have
$$\xi \Bigl( \nabla_{X_x} \nabla_Y \nabla_Z T \Bigr) = \nabla_{(\xi X_x)} \nabla_{(a_{\xi}(x_1) Y_{x_1})} \nabla_{(a^2_{\xi}(x_1, x_2) Z_{x_2})} \bigl(a^3_\xi(x_1, x_2, x_3) T_{x_3} \bigr).$$
\end{prop}
\noindent{\bf Proof}. \pref{saffine} implies the following sequence of equalities~:
$$\begin{array}{lll}
\xi \Bigl( \nabla_{X_x} \nabla_Y \nabla_Z T \Bigr) & = & \nabla_{\xi X_x} \Bigl( a_\xi \bigl(\nabla_Y \nabla_Z T \bigr) \Bigr), \\
a_\xi(x_1) \Bigl(\nabla_{Y_{x_1}} \nabla_Z T \Bigr) & = & \nabla_{a_\xi(x_1) Y_{x_1}} \Bigl ( a^2_\xi(x_1) \bigl(\nabla_Z T \bigr) \Bigr), \\
a^2_\xi(x_1)(x_2) \Bigl(\nabla_{Z_{x_2} }T \Bigr) & = & \nabla_{a^2_\xi(x_1)(x_2) Z_{x_2}} \Bigl( a^3_\xi(x_1, x_2) T \Bigr).
\end{array}$$
\cqfd
\begin{rmk}
Observe that
$$D(S^{(1,1,1,1)}(\xi)) = {\mathbb S}_{*_\xi}(\d_\xi).$$
\end{rmk}
\begin{rmk} This argument applies to any order. Let us denote by $k\cdot (1)$ the sequence $(1, ..., 1)$ in which the number $1$ appears $k$ times. Given $\xi \in \b^{(1)}(\PP(M))$ with $x = \alpha(\xi)$, the $k \cdot (1)$-jet $S^{k \cdot (1)} (\xi) = j^1_xj^1_{x_1} ... j^1_{x_{k-1}} a^{k}(x_1, ..., x_{k-1})$ is affine in the sense that its $k$ parts of order $k-1$ are affine and agree, and that it preserves the $k$-th power of $\nabla$.
\end{rmk}
\begin{prop}\label{next-order-inv} Let $\xi \in \b^{(1)}(\PP(M))$ be such that ${\mathbb S}(\xi)$ is holonomic and let $S^{(1,1,1,1)}(\xi)$ be the affine $(1,1,1,1)$-jet extending $\xi$. Let ${\mathbb X} \in T^4_xM$ with $x = \alpha(\xi)$ and its four projections on $T_xM$ denoted by $X_x, Y_x, Z_x, T_x$. Then
\begin{multline}\label{nabla-nabla-T}
\Pi \Bigl(S^{(1,1,1,1)}(\xi) \cdot {\mathbb X}, \kappa_{**}(S^{(1,1,1,1)}(\xi)) \cdot {\mathbb X}\Bigr) \\
= \xi \Bigl( \nabla_{T_x}\nabla T^\nabla (Z_x, Y_x, X_x) \Bigr) - \Bigl(\nabla_{\xi T_x} \nabla T^\nabla \Bigr)(\xi Z_x, \xi Y_x, \xi X_x)
\end{multline}
\begin{multline}\label{nabla-R}
\Pi \Bigl(S^{(1,1,1,1)}(\xi) \cdot {\mathbb X}, \kappa_{*}(S^{(1,1,1,1)}(\xi)) \cdot {\mathbb X}\Bigr) \\
= \xi \Bigl( \nabla_{T_x} R^\nabla (X_x, Y_x, Z_x) \Bigr) - \Bigl( \nabla_{\xi T_x} R^\nabla \Bigr)(\xi X_x, \xi Y_x, \xi Z_x),
\end{multline}
where $\Pi : T^4M \times_{(P,P)}T^4M \to TM$ with $P = p \times p_* \times p_{**} \times p_{***}$ is defined in a fashion similar to the case of $T^3M$ (cf.~(\ref{Pi}) in \aref{third-der}).
\end{prop}
\noindent{\bf Proof}. The idea is, of course, to differentiate (\ref{kappa*/nablaT}) and (\ref{curvature}) with respect to $\xi$. Strictly speaking, this would not be possible because these two relations are only valid for $\xi$'s preserving the torsion. However, thanks to Tapia's \tref{closed-subgr}, we know that ${\mathcal B}(T)$ is a submanifold and we may thus differentiate these relations in the direction of its tangent space. \\
As a first step, observe that the hypothesis that $\xi$ preserves the torsion and its first covariant derivative implies (cf.~\lref{d-tangent-to-b(Q)}) that
$$\d_\xi \subset T_\xi {\mathcal B}(T).$$
In particular $\alpha_{*_\xi}$ restricted to $T_\xi \b(T)$ is a submersion. So given any ${\mathbb X} \in T^4_xM$, there exists an $X_\xi \in T_\xi {\mathcal B}(T)$ such that
$$(X_\xi, {\mathbb X}) \in T_{(\xi, \IX)} \bigl({\mathcal B}(T) \times_{(\alpha, p^3)} T^3M\bigr).$$
Let $(- \varepsilon, \varepsilon) \to {\mathcal B}(T) \times_{(\alpha, p^3)} T^3M : t \mapsto (\xi_t, \IX_t)$ be a path tangent to $(X_\xi, {\mathbb X})$. Then the relation
\begin{equation}\label{abcd}
\Pi \Bigl({\mathbb S}(\xi_t) \cdot {\mathfrak X}_t, \kappa_o({\mathbb S}(\xi_t)) \cdot {\mathfrak X}_t \Bigr) =
\xi_t \bigl(Q (Z_t, Y_t, X_t) \bigr) - Q (\xi_t Z_t, \xi_t Y_t, \xi_t X_t),
\end{equation}
for $X_t = p \circ p (\IX_t)$, $Y_t = p_* \circ p(\IX_t)$, $Z_t = p_* \circ p_*(\IX_t)$, holds true for any $t \in (- \varepsilon, \varepsilon)$, where $(\kappa_o, Q)$ is either $(\kappa_*, \nabla T)$ or $(\kappa, R)$. Differentiating both sides with respect to $t$ and evaluating at $t = 0$ yields~:
\begin{multline}\label{abdc}
\Pi_*\bigl( S^{(1,1,1,1)}(\xi) \cdot {\mathbb X}, \kappa_{o_*}(S^{(1,1,1,1)}(\xi)) \cdot {\mathbb X} \bigr) = S(\xi) \Bigl(Q_* (Z_{*_x}T_x, Y_{*_x}T_x, X_{*_x}T_x) \Bigr) -_*\\
Q_* \Bigl(S(\xi) \cdot Z_{*_x}T_x, S(\xi) \cdot Y_{*_x}T_x, S(\xi) \cdot X_{*_x}T_x \Bigr),
\end{multline}
where $T_x = \frac{d}{dt}p^3(\IX_t)\bigl|_{t=0}$. Because the left-hand side of (\ref{abcd}) vanishes at $t = 0$, the left-hand side of (\ref{abdc}) is a vector in $T_{0_x}TM = T_xM \oplus T_xM$ whose vertical component coincides with
$$\Pi \bigl( S^{(1,1,1,1)}(\xi) \cdot {\mathbb X}, \kappa_{o_*}(S^{(1,1,1,1)}(\xi)) \cdot {\mathbb X} \bigr).$$
As to the right-hand side of (\ref{abdc}), its treatment is essentially contained in the proof of \pref{kappa*}. Indeed, its vertical component coincides with the difference of the vertical projections with respect to $\h^\nabla$ of each term. So assuming the local vector fields $X$, $Y$ and $Z$ to be tangent to the horizontal distribution $\h^\nabla$, we reach the desired equalities.
\cqfd
\begin{prop}\label{kappa-inv} Suppose $\xi \in \b^{(1)}(\PP(M))$ preserves $T$, $\nabla T$ and $R$. Then the affine $(1,1,1,1)$-jet $S^{(1,1,1,1)}(\xi)$ is $\kappa$-invariant. In particular if $\xi$ preserves also $\nabla \nabla T$ and $\nabla R$, then $S^{(1,1,1,1)}(\xi)$ is holonomic.
\end{prop}
\noindent{\bf Proof}. Let $\xi \in \b(T, \nabla T, R)$. Then $\xi$ belongs to $\In (\d^{\mathfrak s})$ (cf.~\pref{Int-kappa}), which implies that $S(\xi)$ belongs to $\In ({\mathfrak D}^{\mathfrak s})$ (cf.~(\ref{iterated-integrability})). Therefore there exists a local bisection $b$ in $\im S \subset \b^{(1,1)}_{nh}(\PP(M))$, thus of type $b = S \circ b_o$ for a local bisection $b_o$ of $\b^{(1)}(\PP(M))$, which is osculatory to ${\mathfrak D}^{\mathfrak s}$ at $S(\xi)$. Equivalently, the distributions $Tb$ and ${\mathfrak D}^{\mathfrak s}_b$ are tangent at $S(\xi)$ and, therefore, the corresponding local bisections $j^1b$ and ${\mathbb S} \circ b_o$ of the groupoid $\b^{(1,1,1)}_{nh}(\PP(M))$ are tangent at $x = \alpha(\xi)$~:
\begin{equation}\label{transitory}
(j^1b)_{*_x} (T_xM) = ({\mathbb S}_{*_\xi} \circ b_{o*_x}) (T_xM).
\end{equation}
On the other hand, the fact that $T_{S(\xi)}b = {\mathfrak D}^{\mathfrak s}_{S(\xi)}$ implies that
$$(S_{*_\xi} \circ b_{o*_x}) (T_xM) = S_{*_\xi} (\d^{\mathfrak s}_\xi)$$ and therefore that $b_{o*_x} (T_xM) = \d^{\mathfrak s}_\xi$. Thus (\ref{transitory}) says that $j^2_xb = S^{(1,1,1,1)}(\xi)$. Hence the affine $(1,1,1,1)$-jet $S^{(1,1,1,1)}(\xi)$ is in fact a $(2,1,1)$-jet, and is therefore fixed by $\kappa$.
\cqfd
\begin{rmk} The subgroupoid $\b(T, R, ..., \nabla^k T, \nabla^k R, ...)$ is the largest subset of $\b^{(1)}(\PP(M))$ entirely foliated by the leaves of $\d^\fs$ and containing all of them.
\end{rmk}
\section{Distributions on associated bundles}\label{hor-ass-bdle}
The groupoid $\b^{(1)}(\PP(M))$ acts on a vector bundle $E \to M$ exactly when the latter is associated to the frame bundle (cf.~\rref{associated}). We describe the horizontal distribution induced by a symmetry jet $\fs$ on $M$ as the collection of $-1$-eigenspaces of a canonical bundle map $TE \to TE$ over the identity, in the same spirit as for the distribution $\d^\fs$. \\
Let $\rho : \b^{(1)}(\PP(M)) \times_{(\alpha, \pi)} E \to E : (\xi, e) \mapsto \xi \cdot e$ be a linear groupoid action of $\b^{(1)}(\PP(M))$ on a vector bundle $\pi : E \to M$. Then a symmetry jet ${\mathfrak s}$ induces a linear connection $\nabla^{\mathfrak s}$ on $E$ through the formula~:
\begin{equation}\label{sym-jet-->conn-on-E}
\nabla^{\mathfrak s}_{X_x} e = \frac{1}{2} \pi \Bigl(e_{*_x} X_x, \pmb{-}{\mathfrak s}(x) \cdot e_{*_x}X_x \Bigr),
\end{equation}
for $X_x$ a vector in $T_xM$ and $e$ a local section of $E$ defined near $x$. The bold minus sign $\pmb{-}$ stands for $m_{-1} \circ L_{-I*}$, where $L_{-I*}$ is the differential of the action of the bisection $-I$ on $E$, that is
$$L_{-I*} : TE \to TE : \frac{de_t}{dt}\Bigl|_{t=0} \mapsto \frac{d \rho(-I_{\pi(e_t)}, e_t)}{dt}\Bigl|_{t=0}.$$
The dot in formula (\ref{sym-jet-->conn-on-E}) denotes the derived action
$$\rho^{(1)} : \b^{(1,1)} (M) \times_{(\alpha, \pi_*)} TE \to TE$$ (cf.~\dref{derived-action}) and the map $\pi$ is defined similarly to (\ref{i}). Simply observe that
$$P : TE \to E \times TM : V_e \mapsto (p(V_e) = e, \pi_{*_e}(V_e))$$ is an affine fibration modeled on $E_{\pi(e)}$, the fiber of $\pi$ through $e$. Whence the map
$$\pi : TE \times_{(P, P)} TE \to E : (V_e, V'_e) \mapsto \pi(V_e, V'_e).$$
It is not difficult to adapt the first part of the proof of \pref{symm-jet<-->conn} and show that (\ref{sym-jet-->conn-on-E}) defines a linear connection on $E$. \\
Now given a bisection $b$ of $\b^{(1)}(\PP(M))$ and a horizontal distribution $\d$ along $b$, we have two bundle automorphisms~:
$$\Phi_b : E \to E : e \mapsto \rho(b(\pi(e)), e)$$
$$\psi_{(b, \d)} : TM \to TM : X_x \mapsto \beta_* \circ \bigr(\alpha_*\big|_{\d_{b(x)}}\bigl)^{-1} (X_x),$$
both over $\phi_b : M \to M : x \mapsto \beta \circ b(x)$. Consider the map $\Psi_{(b, \d)} : TE \to TE$ over $\Phi_b$ defined by
\begin{equation}\label{Psi}
\Psi_{(b, \d)} (V_e) = \rho_*\Bigl(\overline{\pi_{*}(V_e)}^{(\d_{b(\pi(e))}, \alpha_*)}, V_e\Bigr),
\end{equation}
where $\overline{\pi_{*}(V_e)}^{(\d_{b(\pi(e))}, \alpha_*)}$ denotes the lift of $\pi_{*}(V_e)$ in $\d_{b(\pi(e))}$ with respect to $\alpha_*$. It is also the unique vector $X$ in $\d_{b(\pi(e))}$ for which the pair $(X, V_e)$ belongs to the fiber product $TG \times_{(\alpha_*, \pi_*)}TE$ which is the source of the map $\rho_*$. \\
\begin{figure}
\caption{The map $\Psi_{(b, \d)}$.}
\end{figure}
It is easy to see that the map $\Psi_{(b, \d)}$ preserves the vertical tangent space $T^\pi E$, coincides with $(\Phi_b)_*$ vertically and with $\psi_{(b, \d)}$ horizontally, that is to say
$$\left\{
\begin{array}{lcl}\label{gen-second-deriv}
\Psi_{(b, \d)} \circ i^\pi = i^\pi \circ (\Phi_b)_*\\
\pi_* \circ \Psi_{(b, \d)} = \psi_{(b, \d)} \circ \pi_*,
\end{array}\right.$$
where $i^\pi : T^\pi E \to TE$ denotes the canonical inclusion. In particular, an invariant horizontal distribution on a groupoid $G$ that acts on a fibration $\pi : E \to M$ induces a representation of the group of bisections of $\b^{(1)}(\PP(M))$ into the group $GL(TE)$ of fiberwise linear diffeomorphisms of $TE$. \\
Notice also that if $\d$ (respectively $\d'$) is a horizontal distribution along a bisection $b$ (respectively $b'$), then
\begin{equation}\label{multiplicativity}
\Psi_{(b, \d)} \circ \Psi_{(b', \d')} = \Psi_{(b \cdot b', \d \cdot \d')}.
\end{equation}
Now suppose $b = -I$ and $\d$ coincides with the distribution $\d^\fs$ on $\b^{(1)}(\PP(M))$ induced from a symmetry jet $\fs$ on $M$. Consider the bundle map
$$\Theta_{(b,\d)} \stackrel{\Def}{=} \Psi_{(b, Tb)} \circ \Psi_{(b, \d)} = m_{-1*} \circ \Psi_{(b, \d)} = \Psi_{(I, Tb \cdot \d)}.$$
It is a bundle map over the identity on $E$ which coincides with the identity on $T^\pi E$ and with the map $\psi_{(b, Tb)} \circ \psi_{(b, \d)} = \psi_{(b, \d)}$ horizontally.
Since $b \cdot b = I$, $Tb \cdot Tb = TI$, $\d \cdot \d = TI$ (cf.~\pref{e-iota}) and $Tb \cdot \d = \d \cdot Tb$, the map $\Theta_{(b,\d)}$ is involutive~:
$$\begin{array}{ccl}
\Theta_{(b,\d)} \circ \Theta_{(b,\d)} & = & \Psi_{(b, Tb)} \circ \Psi_{(b, \d)} \circ \Psi_{(b, Tb)} \circ \Psi_{(b, \d)} \\
& = & \Psi_{(I, Tb \cdot \d \cdot Tb \cdot \d)} \\
& = & \Psi_{(I, TI)} = \id.
\end{array}$$
This implies that each tangent space $T_eE$ splits into a direct sum of eigenspaces for the eigenvalues $+1$ and $-1$. Of course $T^\pi E$ is contained in the $+1$-eigenspace of $\Theta_{(b, \d)}$. Moreover, since $\psi_{(b, Tb)} = \id$ and $\psi_{(b, \d)} = -I$, we have $\pi_* \circ \Theta_{(b, \d)} = -I \circ \pi_*$, which implies that $T^\pi E$ coincides with the $+1$-eigenspace and the $-1$-eigenspace is thus a horizontal distribution denoted by $\h = \h^{\mathfrak s}$ on $E$~:
$$\h = \Ker(\Theta_{(b,\d)} + I) = \im (- \Theta_{(b,\d)} + I).$$
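Since $\Theta_{(b,\d)}$ is an involution, the associated eigenprojectors take the usual form (a standard linear-algebra observation, recorded here for convenience):
$$P^v = \frac{1}{2}\bigl(\id + \Theta_{(b,\d)}\bigr), \qquad P^h = \frac{1}{2}\bigl(\id - \Theta_{(b,\d)}\bigr),$$
so that every $V_e \in T_eE$ decomposes as $V_e = P^v(V_e) + P^h(V_e)$ with $P^v(V_e) \in T^\pi E$ and $P^h(V_e) \in \h$.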
\begin{rmk}\label{associated} The groupoid $\b^{(1)}(\PP(M))$ acts linearly on a vector bundle $\pi : E \to M$ if and only if the latter is associated to the principal bundle of frames of $TM$.
\end{rmk}
\begin{lem} Let ${\mathfrak s}$ be a symmetry jet on $M$ and let $\d$ be the corresponding distribution on $\b^{(1)}(\PP(M))$. Then the horizontal distribution $\h^{\mathfrak s}$ on $E$ is the canonical horizontal distribution associated to the connection $\nabla^{\mathfrak s}$ (cf.~\rref{hor-dist}).
\end{lem}
\noindent{\bf Proof}. Firstly, let us show that the map $\Psi_{(b, \d)}$ coincides with the $\rho^{(1,1)}$-action of ${\mathfrak s}$ on $TE$, that is, for any $V_e \in T_eE$~:
$$\Psi_{(b, \d)} (V_e) = \rho^{(1,1)} \Bigl({\mathfrak s}(\pi(e)), V_e \Bigr).$$
Suppose $\pi_*(V_e) = X \in T_xM$. Then,
$$\begin{array}{ccl}
\Psi_{(b, \d)} (V_e) & = & \rho_{*_{(b(x), e)}} \Bigl(\overline{X}^{(\d_{b(x)}, \alpha_*)}, V_e\Bigr) \\
& = & \displaystyle{\rho^{(1,1)}\Bigl({\mathfrak s}(x), V_e \Bigr)},\\
\end{array}$$
(cf.~\rref{derived-action-of-pair-groupoid}). We now prove that a vector $V_e$ in $T_eE$ is in the eigenspace of $\Psi_{(b,\d)}$ for the eigenvalue $-1$ if and only if $\nabla^{\mathfrak s}_{X_x} e = 0$, for $X_x = \pi_*(V_e)$ and $e$ a local section of $\pi : E \to M$ such that $e_{*_x}X_x = V_e$~:
$$\nabla^{\mathfrak s}_{X_x} e = \frac{1}{2} \Bigl(e_{*_x} X_x + m_{-1*} \bigl({\mathfrak s}(x) \cdot e_{*_x}X_x \bigr) \Bigr) = 0$$
if and only if
$$m_{-1*} \Bigl({\mathfrak s}(x) \cdot e_{*_x} X_x \Bigr) = - e_{*_x} X_x,$$
or equivalently, if and only if
$$\Theta_{(b,\d)} (e_{*_x} X_x) = -e_{*_x} X_x.$$
Since $\nabla^{\mathfrak s}_{X_x} e = 0$ characterizes vectors $V_e$ in $\h^{\mathfrak s}$, the claim is proven.
\cqfd
\section{Levi-Civita connection}
Given a pseudo-Riemannian metric $g$ on a manifold $M$, it is well-known that there exists a unique torsionless affine connection $\nabla$ on $M$ that preserves the metric $g$ in the sense that $\nabla g = 0$. Let us reprove this fact in terms of symmetry jets. \\
Consider the vector bundle $\pi : S^2(M) \to M$, consisting of covariant symmetric $2$-tensors. The groupoid $\b^{(1)}(\PP(M))$ naturally acts on $S^2(M)$~:
$$\rho : \b^{(1)}(\PP(M)) \times_{(\alpha, \pi)} S^2(M) \to S^2(M) : (\xi, h) \mapsto \xi \cdot h,$$
with $(\xi \cdot h) (X_y, Y_y) = h (\xi^{-1} X_y, \xi^{-1} Y_y)$ for $X_y, Y_y \in T_yM$, $y = \beta(\xi)$. \\
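Explicitly, if $(\xi^i_j)$ is the matrix of $\xi : T_xM \to T_yM$ with respect to chosen bases, the action reads
$$(\xi \cdot h)_{ij} = \bigl(\xi^{-1}\bigr)^k_i \, \bigl(\xi^{-1}\bigr)^l_j \, h_{kl},$$
which is manifestly again a symmetric $2$-tensor at $y$. \\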
Now, any (holonomic or not) symmetry jet ${\mathfrak s} : M \to \b^{(1,1)}(\PP(M))$ induces a horizontal distribution $\d = \d^\fs$ along the bisection $b = -I$ and an involutive vector bundle map $\Psi_{(b, \d)}$ (cf.~(\ref{Psi}))~:
$$\Psi_{(b, \d)} : TS^2(M) \to TS^2(M) : X_h \mapsto \rho_{*_{(-I_x, h)}}\Bigl(\overline{\pi_*(X_h)}^{(\d_{-I_x}, \alpha_*)}, X_h \Bigr)$$
covering the identity map $S^2(M) \to S^2(M)$. Notice that in this case $\Psi_{(b, Tb)} = \id$, hence $\Psi_{(b, \d)} = \Theta_{(b, \d)}$. The $(-1)$-eigenspaces of $\Psi_{(b, \d)}$ form the horizontal distribution
$$(E_\Psi^{-1})_{h_x} = \im (- {\mathfrak s}(x) + I),$$
and any leaf $L$ of this distribution such that $\pi|_L : L \to M$ is a diffeomorphism is a parallel symmetric $2$-tensor.
\begin{rmk}
Notice that a pseudo-Riemannian metric $h$ on $M$ --- or for that matter any covariant tensor on $M$ --- determines the subgroupoid $\OO^{(1)}$ of $\b^{(1)}(\PP(M))$ consisting of $1$-jets that preserve $h$. It contains $-I$ as a subgroupoid (provided $h$ is a $2p$-tensor). Since $\OO^{(1)}$ is closed, it is necessarily a Lie subgroupoid (cf.~\tref{closed-subgr}). Denote by $\OO^{(1,1)}(M)$ (respectively $\OO^{(1,1)}_h(M)$) the groupoid of $1$-jets of local bisections of $\OO^{(1)}$ (respectively $\OO^{(1,1)}(M) \cap \b^{(1,1)}(\PP(M))$). Now if a symmetry jet $\fs$ on $M$ takes its values in $\OO^{(1,1)}_h(M)$, then $h$ is parallel for the connection $\nabla^\fs$. Indeed, the map $\Psi_{(b, \d)}$ obviously preserves the tangent spaces to the section $h$ of $S^2(M) \to M$. Conversely, if a symmetry jet $\fs$ is such that $h$ is parallel for $\nabla^\fs$, then $\fs(M) \subset \OO^{(1,1)}_h(M)$. The problem is to understand whether $\OO^{(1)}(M)$ supports a unique distribution along $-I$ which lies in $\e \cap T\OO^{(1)}(M)$. The proof of the next proposition goes along a different path.
\end{rmk}
\begin{prop}\label{LC}
Given a pseudo-Riemannian metric $h$, there is a unique holonomic symmetry jet ${\mathfrak s}$ such that the $(-1)$-eigenspace of $\Psi_{(-I, \d^\fs)}$ in $T_{h_x}S^2(M)$ coincides with $h_{*_x}(T_xM)$ at all points $x$ in $M$.
\end{prop}
\begin{rmk} A relatively convincing argument, though not a complete proof, in favor of the previous statement is that the dimension of the manifold of $2$-jets over $-I_x$ and the dimension of the space of $1$-jets of symmetric $2$-tensors over $h_x$ coincide, a fact that is specific to pseudo-Riemannian metrics (as opposed to symplectic structures for instance, see \rref{symplectic} below).
\end{rmk}
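Let us record this dimension count. The fiber of $p : \b^{(2)}(\PP(M)) \to \b^{(1)}(\PP(M))$ over $-I_x$ is an affine space modeled on the symmetric bilinear maps $T_xM \times T_xM \to T_xM$ (cf.~\rref{affine-str}), while the space of $1$-jets of symmetric $2$-tensors extending $h_x$ is an affine space modeled on $T^*_xM \otimes S^2(T^*_xM)$. Both have dimension
$$n \cdot \frac{n(n+1)}{2} = \frac{n^2(n+1)}{2}.$$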
\noindent{\bf Proof}. Fix a point $x$ in $M$. Let ${\mathfrak s}^0(x) = j^2_x s^0_x, {\mathfrak s}(x) = j^2_xs_x$ be two $2$-jets extending $-I_x$. Then ${\mathfrak s}(x) - {\mathfrak s}^0(x)$ may be thought of as a bilinear map $T_xM \times T_xM \to T_xM$, or equivalently a linear map $A : T_xM \to \End(T_xM)$, which is symmetric since both ${\mathfrak s}^0(x)$ and ${\mathfrak s}(x)$ belong to $\b^{(2)}(\PP(M))$ (cf.~\rref{affine-str}). \\
Of course a $2$-jet $\xi$ extending $-I_x$ induces an involution $\Psi_\xi$ of $T_{h_x}S^2(M)$ whose $(-1)$-eigenspace is a horizontal $n$-plane $(E_\Psi^{-1})_{h_x}$. The latter corresponds to the $1$-jet $\T(\xi) = j^1_xh$ of some local section $h$ of $S^2(M)$ extending $h_x$ through $(E_\Psi^{-1})_{h_x} = h_{*_x} (T_xM)$. Our aim is to show that the map $\T$ is a bijective correspondence as this implies that if $h$ is a pseudo-Riemannian metric on $M$, then $\fs(x) = \T^{-1}(j^1_xh)$ defines a holonomic symmetry jet for which $h$ is parallel. \\
Let $j^1_xh^0 = \T(\fs^0(x))$ and $j^1_x h = \T(\fs(x))$. Then for any $X_x \in T_xM$, the vector $h_{*_x} (X_x) - h^0_{*_x} (X_x)$ is canonically identified to an element, denoted $h(X_x)$, of the fiber of $\pi$ over $x$, that is to a symmetric tensor. More precisely, for any lift $Y_{h_x}$ of $X_x$ in $T_{h_x}S^2(M)$, we have
$$\begin{array}{ccl}
h(X_x) & = & \frac{1}{2}\Bigl(- {\mathfrak s}(x) + I \Bigr) \cdot Y_{h_x} - \frac{1}{2}\Bigl(- {\mathfrak s}^0(x) + I \Bigr) \cdot Y_{h_x} \\
& = & - \frac{1}{2}\Bigl({\mathfrak s}(x) \cdot Y_{h_x} - {\mathfrak s}^0(x) \cdot Y_{h_x} \Bigr) \\
& = & - \frac{1}{2} \Bigl(\rho_{*_{(-I_x, h_x)}} \bigl((s_x)_{*_{x}}(X_x), Y_{h_x} \bigr) - \rho_{*_{(-I_x, h_x)}}\bigl((s^0_x)_{*_{x}}(X_x), Y_{h_x}\bigr)\Bigr)\\
& = & - \frac{1}{2}\rho_{*_{(-I_x, h_x)}} \Bigl((s_x)_{*_{x}}(X_x) - (s^0_x)_{*_{x}}(X_x), Y_{h_x} - Y_{h_x}\Bigr)\\
& = & - \frac{1}{2}\rho_{*_{(-I_x, h_x)}} \Bigl(A(X_x), 0_{h_x}\Bigr) \\
& = & - \frac{1}{2} \Bigl(h_x (A(X_x) \cdot , \cdot ) + h_x ( \cdot, A(X_x) \cdot) \Bigr).
\end{array}$$
This shows in particular that for $\fs^0(x)$ and $X_x$ fixed, the correspondence $A(X_x) \mapsto h(X_x)$ is a linear map between vector spaces of the same dimension. When $h_x$ is non-degenerate and $A$ is symmetric, it is also injective, as we now prove. Suppose that for all $Y_x, Z_x \in T_xM$, we have
$$h_x (A(X_x) Y_x, Z_x) + h_x (Y_x, A(X_x) Z_x) = 0.$$
In other words, the $h_x$-adjoint of $A(X_x)$ is $-A(X_x)$. Now, let us suppose that $A$ is symmetric. Then
$$\begin{array}{cl}
h_x (A(X_x)Y_x, Z_x) & = - h_x(Y_x, A(X_x) Z_x) = - h_x(Y_x, A(Z_x) X_x) \\
& = h_x(A(Z_x) Y_x, X_x) = h_x(A(Y_x) Z_x, X_x) \\
& = - h_x(Z_x, A(Y_x) X_x) = - h_x(Z_x, A(X_x) Y_x),
\end{array}$$
which implies that $h_x (A(X_x)Y_x, Z_x) = 0$ for all $X_x, Y_x, Z_x$, hence, by non-degeneracy of $h_x$, that $A$ vanishes.
\cqfd
\begin{rmk}\label{symplectic} Let $\omega$ be a symplectic structure on $M$. Then a similar construction yields a correspondence between $2$-jets extending $-I_x$ and $1$-jets of closed $2$-forms extending a fixed one $\omega_x$. On the one hand, the dimension of the space of $2$-jets in $p^{-1}(-I_x)$ is $n^2(n+1)/2$. On the other hand, the dimension $d$ of the space of $1$-jets of closed forms, that is $j^1_x\omega$ such that $(d\omega)_x = 0$, coincides with the difference between the dimension of the space of all $1$-jets at $x$ of $2$-forms extending $\omega_x$, that is $n^2(n-1)/2$, and the dimension of the space of $3$-forms at $x$, that is $n(n-1)(n-2)/6$. Hence $d = n(n-1)(n+1)/3$, and the defect $n^2(n+1)/2 - d = n(n+1)(n+2)/6$ is precisely the dimension of the space of totally symmetric $3$-tensors at $x$, which we know parameterizes the space of symplectic torsionless affine connections \cite{T}.
\end{rmk}
\section{Relation with Lie algebroid connections}\label{groupoid}
It is known that affine connections and Lie algebroid connections on the Lie algebroid of the general linear groupoid $GL(TM) = \b^{(1)}(\PP(M))$ are the same thing. The point of this section is to show a short path between a symmetry jet $\fs$ and a Lie algebroid connection on the Lie algebroid of $\b^{(1)}(\PP(M))$. The former may be thought of as the rank $n = \dim M$ distribution $\d^\fs$ along the bisection $-I$ in $\b^{(1)}(\PP(M))$, the latter is in fact a rank $n$ distribution on $\b^{(1)}(\PP(M))$ tangent to the $\beta$-fibers. Going from one to the other is done by elementary manipulations on distributions.\\
We briefly recall Atiyah's sequence, which is another way to describe a linear connection and which motivates the definition of a Lie algebroid connection. We refer to \cite{dSW} for an elegant introduction to this topic. A Lie algebroid connection is then compared, in the case of $\b^{(1)}(\PP(M))$, to a symmetry jet. \\
To a principal fiber bundle $\pi : P \to M$ with structure group $G$ is naturally associated a Lie groupoid, called the {\sl gauge groupoid}, as described hereafter. Its set of arrows is defined to be $G_P = (P \times P)/G$, the quotient of $P\times P$ through the diagonal action of $G$, while its set of objects is $M$. Equivalently, $G_P$ may be described as the set of equivariant maps between any two fibers of $P$. The image by the source $\alpha$ (respectively target $\beta$) of an equivariant map $\xi : P_x \to P_y$ is $x$ (respectively $y$). Finally, the product is the composition of maps. Notice that for the principal bundle $\pi : \F(M) \to M$ of frames of $TM$, the gauge groupoid is precisely $\b^{(1)}(\PP(M))$. \\
The Lie algebroid (cf.~\cite{Mck-05}) of the Lie groupoid $G = \b^{(1)}(\PP(M))$ coincides with the set $T\F(M)/G$ of $G$-invariant vector fields on $\F(M)$ with anchor induced by the projection $\pi_* : T\F(M) \to TM$.
In this case the anchor is surjective. Lie algebroids with surjective anchor are called {\sl transitive Lie algebroids}. \\
Now a principal connection on $\pi : \F(M) \to M$ is an invariant horizontal distribution on $\F(M)$. Equivalently, it is a section of the anchor $\rho : L \to TM$.
\begin{dfn}
A {\sl connection on a transitive Lie algebroid} is defined to be a section of its anchor or a splitting of the exact sequence of vector bundles
$$0 \to \Ker \rho \to L \to TM \to 0.$$
The induced map $L \to \Ker \rho$ is called a {\sl connection form}.
\end{dfn}
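For instance, in the case at hand, $L = T\F(M)/GL(n, \mathbb R)$ and the kernel of the anchor consists of the invariant vertical vector fields on $\F(M)$, which are identified with sections of $\End(TM)$; the exact sequence above then becomes the classical Atiyah sequence
$$0 \to \End(TM) \to T\F(M)/GL(n, \mathbb R) \stackrel{\rho}{\to} TM \to 0,$$
and a splitting of $\rho$ amounts to an invariant horizontal distribution on $\F(M)$, that is, a principal connection. \\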
To relate the Lie algebroid point of view to the symmetry jet point of view on a linear connection, we will describe below how a Lie algebroid connection on the Lie algebroid of $\b^{(1)}(\PP(M))$ directly induces a symmetry jet. \\
Let $\sigma^\beta : TM \to T_{\varepsilon(M)}G^\beta$ be a Lie algebroid connection. The right-invariant distribution on $G$ associated to the image of $\sigma^\beta$ is denoted hereafter by $\Sigma^\beta$. Define a horizontal distribution on $G$ along $M$ by
$$\EN^\sigma = \{\sigma (X) + F(\sigma(X)); X \in TM\},$$
where $F : TG^\beta \to TG^\alpha$ is the flip automorphism, defined by
\begin{equation}\label{flip}
F : T_xG^\beta \to T_xG^\alpha : Y \to Y - \alpha_{*}(Y)
\end{equation} that exchanges the tangent space to the $\beta$-fibers over a point in $M$ with that of the corresponding $\alpha$-fiber. It is also induced by the canonical isomorphisms $T_xG^\beta \simeq T_xG/T_xM \simeq T_xG^\alpha$ with the normal bundle to $TM$. Notice that $\fb(\EN^\sigma_{x}) = -I_x$ (the \emph{bouncing} map $\fb$ is defined in \rref{bouncing}). Let $b$ denote the bisection $-I$ of $\b^{(1)}(\PP(M))$ and let $L_{b}$ denote the diffeomorphism of $G$ induced by the left action of $b$~:
$$L_{b} : G \to G : \xi \mapsto b(\beta(\xi)) \cdot \xi.$$
Then
$$\d^\sigma \stackrel{\Def}{=} (L_{b})_*(\EN^\sigma)$$
is a horizontal distribution along $b$ that is contained in the holonomic distribution $\e^{(1)}$ (cf.~\dref{def-e} and \nref{nota-e}), because $\fb(\d^\sigma_{-I_x}) = \fb(\EN^\sigma_{\varepsilon(x)}) = -I_x$; in other words, $\d^\sigma$ is a symmetry jet. \\
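As an aside, one verifies directly that the flip $F$ of (\ref{flip}) is well defined. Identifying $T_xM$ with $\varepsilon_*(T_xM)$, we have $\alpha_* \circ \varepsilon_* = \id$, so that for $Y \in T_xG^\beta$,
$$\alpha_*\bigl(F(Y)\bigr) = \alpha_*(Y) - \alpha_*(Y) = 0, \qquad \beta_*\bigl(F(Y)\bigr) = -\alpha_*(Y),$$
hence $F(Y)$ is indeed tangent to the $\alpha$-fiber. Moreover $F$ is invertible, with inverse $T_xG^\alpha \to T_xG^\beta : Z \mapsto Z - \beta_*(Z)$. \\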
Conversely, let $\d \subset \e$ be a horizontal distribution on $G = \b^{(1)}(\PP(M))$ along the bisection $b = -I$, then $\d$ induces a Lie algebroid connection $\sigma$ as follows. First translate $\d$ to a horizontal distribution $\EN^\d$ along $\varepsilon(M)$ via $L_{b}$~:
$$\EN^\d = (L_{b})_*(\d).$$
Notice that $\fb(\EN^\d_{\varepsilon(x)}) = -I_x$. Then define
$$\sigma^\d : TM \to TG^\beta : X \mapsto \frac{1}{2}\Bigl((\alpha_{*_x}\Big|_{\EN^\d})^{-1}(X) + X \Bigr).$$
One verifies easily that $\sigma^\d$ is indeed a Lie algebroid connection~:
$$\begin{array}{l}
(\alpha_* \circ \sigma^\d)(X) = \frac{1}{2}\alpha_* \Bigl((\alpha_{*_x}\Big|_{\EN^\d})^{-1}(X) + X\Bigr) = \frac{1}{2} (X + \alpha_{*_x}(X)) = X \\
(\beta_* \circ \sigma^\d)(X) = \frac{1}{2}\beta_* \Bigl((\alpha_{*_x}\Big|_{\EN^\d})^{-1}(X) + X\Bigr) = \frac{1}{2}(-X + \beta_{*_x}(X)) = 0
\end{array}$$
Finally, one can verify that a Lie algebroid connection on $\b^{(1)}(\PP(M))$ and its corresponding horizontal distribution along $-I$ are two realizations of the same linear connection. \\
To go directly from a Lie algebroid connection $\sigma^\beta$ on $G = \b^{(1)}(\PP(M))$ to the corresponding global distribution $\d$ on $\b^{(1)}(\PP(M))$, one can proceed as follows. Consider the map
$$\varphi_{\sigma^\beta} : \alpha^*TM \to T\b^{(1)}(\PP(M))$$
\begin{equation}\label{s-->d}
\varphi_{\sigma^\beta} (\xi, X) = (L_{\xi})_{*_x} \circ \sigma^\beta (X) - (R_{\xi})_{*_y} \circ F \circ \sigma^\beta \circ \xi (X).
\end{equation}
The image of $\varphi_{\sigma^\beta}$ is an invariant horizontal distribution (see \fgref{sigmad}). Let us check that the image of $\varphi_{\sigma^\beta}$ indeed consists of eigenvectors for the eigenvalue $-1$ of the map $\psi_\xi$, $\xi \in \b^{(1)}(\PP(M))$ defined in terms of $\d$ along $b = -I$ by (\ref{psi}) in \sref{UAE}. Let $X \in T_xM$, then $\sigma(X) = 1/2 (X + \overline{X})$, where $\overline{X}$ stands for the lift with respect to $\alpha_*$ of $X$ in $\EN_{\varepsilon(x)}$. Then $F(\sigma(X)) = 1/2(-X + \overline{X})$. Thus
$$\begin{array}{cll}
\psi_\xi \Bigl(\varphi_{\sigma^\beta} (\xi, X) \Bigr) & = & \overline{\xi X} \cdot \Bigl[(L_{\xi})_{*_x} \bigl(\sigma(X)\bigr) - (R_{\xi})_{*_y} \bigl(F(\sigma(\xi X))\bigr) \Bigr] \cdot -\overline{X} \\
& = & \Bigl( (L_{\xi})_{*_x} \bigl(\sigma(X)\bigr) \Bigr) \cdot -\overline{X} + \overline{\xi X} \cdot \Bigl( - (R_{\xi})_{*_y} \bigl(F(\sigma(\xi X))\bigr) \Bigr)\\
& = & 0_\xi \cdot \Bigl(\frac{1}{2} (X + \overline{X})\Bigr) \cdot \Bigl(\frac{1}{2} ( -\overline{X} - \overline{X})\Bigr) \\
&& + \Bigl(\frac{1}{2}(\overline{\xi X} + \overline{\xi X}) \Bigr) \cdot \Bigl(\frac{1}{2} (\xi X - \overline{\xi X})\Bigr) \cdot 0_\xi \\
& = & 0_\xi \cdot \Bigl(\frac{1}{2} (-\overline{X} - X)\Bigr) + \Bigl(\frac{1}{2}(\overline{\xi X} - \overline{\xi X}) \Bigr) \cdot 0_\xi \\
& = & \Bigl.- \varphi_{\sigma^\beta} (\xi, X)
\end{array}$$
\begin{figure}
\caption{The distribution $\d$ on $\b^{(1)}(\PP(M))$.}
\label{sigmad}
\end{figure}
\begin{rmk} For a general groupoid, the data of a Lie algebroid connection is not equivalent to that of an invariant horizontal distribution. Indeed, consider for instance the pair groupoid $\PP(M) = M \times M \rightrightarrows M$ on the manifold $M$. The anchor $\rho : T_xG^\beta \to T_xM$ of the associated Lie algebroid being an isomorphism, its inverse yields a canonical Lie algebroid connection. In contrast, the data of an invariant horizontal distribution $\d$ on $M \times M$ is equivalent to that of a groupoid morphism $M \times M \to \b^{(1)}(\PP(M))$ over the identity or, in other words, a global trivialization of $TM$. This implies that the manifold $M$ is parallelizable, which is a noticeable restriction on the type of pair groupoids that admit an invariant horizontal distribution. \\
In general, the groupoid morphism $G \to \b^{(1)}(\PP(M))$ over the identity is the data that is missing in a Lie algebroid connection. More precisely, a Lie algebroid connection endowed with a Lie groupoid morphism $\psi : G \to \b^{(1)}(\PP(M))$ over the identity $M \to M$ is an invariant horizontal distribution on $G$. Indeed, the formula (\ref{s-->d}) makes now sense if $\xi$ acting on $T_xM$ is replaced by $\psi (\xi)$.
\end{rmk}
\section{Geodesics and parallel transport}
Let ${\mathfrak s}$ be a symmetry jet, let $S$ be the corresponding affine extension section and $\d^{\mathfrak s}$ the associated invariant horizontal distribution on $\b^{(1)}(\PP(M))$. Let also $\sigma^\beta : TM \to T\b^{(1)}(\PP(M))^\beta$ denote the corresponding Lie algebroid connection and $\sigma^\alpha = \iota \circ \sigma^\beta$ its dual Lie algebroid connection. The right-invariant distribution on $\b^{(1)}(\PP(M))$ associated to the image of $\sigma^\alpha$ (respectively $\sigma^\beta$) is denoted hereafter by $\Sigma^\alpha$ (respectively $\Sigma^\beta$). \\
We consider smooth paths $\gamma : I \to M$ defined on some interval $I$ containing $0$ that are in addition regular, that is, $d \gamma /dt$ does not vanish. Let $\T$ be some distribution defined on $\b^{(1)}(\PP(M))$ to which $\beta_*$ restricts to a fiberwise linear isomorphism. Then a $\beta$-lift of $\gamma$ through $\xi \in \b^{(1)}(\PP(M))$ is a path
$$L(\gamma) = L_\xi^\T(\gamma) : I' \to \b^{(1)}(\PP(M))$$ defined on a possibly smaller interval $I' \subset I$, still containing $0$, such that
\begin{enumerate}
\item[-] $\beta \circ L(\gamma) = \gamma$,
\item[-] $\displaystyle{\frac{dL(\gamma)}{dt}(t) \in \T_{L(\gamma)(t)}}$ for all $t \in I'$,
\item[-] $L(\gamma)(0) = \xi$.
\end{enumerate}
An $\alpha$-lift is defined similarly, with $\alpha$ in place of $\beta$. Such a lift is the flow line through $\xi$ of the vector field obtained by lifting the velocity vector field of $\gamma$ (restricted to a subinterval where $\gamma$ is injective) to a vector field tangent to $\T$ and defined on $\beta^{-1}(\gamma(I))$, so it certainly does exist, although not in general on the entire interval $I$. \\
Given an affine connection $\nabla$, a common way to think about the induced parallel transport is as follows. Given a path $\gamma : 0 \in I \to M$, denote by $\overline{\gamma}_X(t)$ the lift of $\gamma$ through the vector $X$ in $T_xM$ tangent to the horizontal distribution $\h$ on $TM$. The parallel transport along $\gamma$ is then defined to be
$$\tau^\gamma(t) : T_{\gamma(0)}M \to T_{\gamma(t)}M : X \mapsto \overline{\gamma}_X(t).$$
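In local coordinates, with $\Gamma^k_{ij}$ the Christoffel symbols of $\nabla$, tangency of $\overline{\gamma}_X$ to $\h$ amounts to the familiar linear ordinary differential equation for the components $\overline{\gamma}^k_X$~:
$$\frac{d\overline{\gamma}^k_X}{dt} + \Gamma^k_{ij}\bigl(\gamma(t)\bigr) \frac{d\gamma^i}{dt} \, \overline{\gamma}^j_X = 0, \qquad \overline{\gamma}^k_X(0) = X^k.$$
Its solutions depend linearly on the initial condition $X$, so that $\tau^\gamma(t)$ is a linear isomorphism $T_{\gamma(0)}M \to T_{\gamma(t)}M$. \\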
\begin{lem}\label{parallel-transport} Let $\gamma : I \to M$ be a regular path in $M$ with $\gamma(0) = x$. Then the lift $L^{\Sigma^\alpha}_{\varepsilon(x)} (\gamma)$ is the parallel transport along $\gamma$. In other words,
$$L^{\Sigma^\alpha}_{\varepsilon(x)}(\gamma)(t) = \tau^\gamma(t).$$
\end{lem}
\noindent{\bf Proof}. Recall from \sref{hor-ass-bdle} the description of $\h$ as eigenspace~:
$$\h = \Ker(\Theta_{(b,\d)} + I) = \im (- \Theta_{(b,\d)} + I)$$
and notice that
$$\Theta_{(b,\d)} = \Psi_{(b, Tb)} \circ \Psi_{(b, \d)} = \Psi_{(I, \EN)},$$
since $Tb \cdot \d = \EN$. In other terms, the lift of a vector $Y \in T_xM$ to a vector $\overline{Y}$ tangent to $\h$ at $X$ admits the following expression~:
$$\begin{array}{ccl}
\overline{Y} & = & \frac{1}{2}\Bigl(- \Theta_{(b, \d)}(\overline{Y}) + \overline{Y} \Bigr) \\
& = & \frac{1}{2}\Bigl(- \rho^{(1)}_{*_{}} \bigl(\overline{Y}^{\EN_{\varepsilon(x)}, \alpha_*}, \overline{Y} \bigr) + \overline{Y} \Bigr) \\
& = &\frac{1}{2}\Bigl(\rho^{(1)}_{*_{}} \bigl( - \overline{Y}^{\EN_{\varepsilon(x)}, \alpha_*}, - \overline{Y} \bigr) + \rho^{(1)}_{*_{}} \bigl( I_* Y, \overline{Y} \bigr) \Bigr) \\
& = & \frac{1}{2} \rho^{(1)}_{*_{}} \bigl( - \overline{Y}^{\EN_{\varepsilon(x)}, \alpha_*} + I_* Y, 0_X \bigr) \Bigl.\\
& = & \rho^{(1)}_*\bigl( \sigma^\alpha(Y), 0_X \bigr). \Bigl.
\end{array}$$
This implies that the tangent vector to the path $L^{\Sigma^\alpha}_{\varepsilon(x)}(\gamma)(t) \cdot X$ belongs to~$\h$. Indeed, set
$$\tau(t) = L^{\Sigma^\alpha}_{\varepsilon(x)}(\gamma)(t).$$
Then
$$\begin{array}{ccl}
\displaystyle{\frac{d}{dt} \rho^{(1)}\Bigl(\tau(t), X\Bigr)\Bigr|_{t}} & = & \displaystyle{\rho^{(1)}_*\Bigl( \frac{d}{dt}\tau(t)\Bigr|_{t}, 0_X \Bigr)} \\
& = & \displaystyle{\rho^{(1)}_*\Bigl( (R_{\tau(t)})_{*_{\gamma(t)}} \sigma^\alpha \bigl(\frac{d \gamma}{dt}\bigr), 0_X \Bigr)} \\
& = & \displaystyle{\rho^{(1)}_*\Bigl( m_*\bigl\{\sigma^\alpha (\frac{d \gamma}{dt}), 0_{\tau(t)}\bigr\}, 0_X \Bigr)} \\
& = & \displaystyle{\rho^{(1)}_*\Bigl( \sigma^\alpha (\frac{d \gamma}{dt}), \rho^{(1)}_* \bigl(0_{\tau(t)}, 0_X \bigr) \Bigr)} \quad \mbox{(since $\rho$ is an action)} \\
& = & \displaystyle{\rho^{(1)}_*\Bigl( \sigma^\alpha \bigl(\dot{\gamma}(t)\bigr), 0_{\tau(t) \cdot X} \Bigr)} \in \h.
\end{array}$$
Observe that in the case of the distributions $\Sigma^\alpha$ (and $\Sigma^\beta$), the lift of a path is defined on the entire interval $I$. This is due to the right-invariance of $\Sigma^\alpha$ and $\Sigma^\beta$.
\cqfd
Consider now a path $\gamma : I \to M$, with $\gamma(0) = x$. Then
$$v(t) = \Bigr(L^{\Sigma^\beta}_{\varepsilon(x)}(\gamma)(t)\Bigr) \cdot \frac{d\gamma}{dt}\Bigr|_{t}$$
is a path in $T_xM$, the inverse parallel transport along $\gamma$ of $\frac{d \gamma}{dt}$. Conversely, a path $v : I \to T_xM$ is associated with a unique path $\gamma = {\mathcal I}(v)$ (${\mathcal I}$ standing for ``integration'') in $M$ such that the inverse parallel transport of $\frac{d \gamma}{dt}$ along $\gamma$ yields $v$.
The path $\gamma$ is constructed as follows. First consider the path
$$\sigma^\beta \circ v : I \to \Sigma^\beta_{\varepsilon(x)}.$$
It can be left-translated along the $\alpha$-fiber of $x$ into a path of vector fields $X_\xi = (L_\xi)_*(\sigma^\beta(v(t)))$ in $\Sigma^\beta$ along $\alpha^{-1}(x)$, which can be flipped with respect to $\d^\fs$ to become a path of vector fields $F(X_\xi)$ in $\Sigma^\alpha$ tangent to $\alpha^{-1}(x)$ (see \fgref{geodesics-fig}). Indeed, it follows from (\ref{s-->d}) that the flip of $\Sigma^\beta$ with respect to $\d^\fs$ is $\Sigma^\alpha$. The flow line of the latter through $\varepsilon(x)$, denoted $\overline{\gamma}(t)$, $t \in I' \subset I$, is the parallel transport along the path ${\mathcal I}(v)(t) = \gamma(t) = \beta \circ \overline{\gamma}(t)$. \\
\begin{figure}
\caption{The distribution $\d$ on $\b^{(1)}(\PP(M))$.}
\label{geodesics-fig}
\end{figure}
These observations allow us to characterize geodesics in terms of $\sigma$, hence, indirectly, in terms of ${\mathfrak s}$.
\begin{prop}\label{geodesics} A path $\gamma : I \to M$ is a geodesic if and only if its associated path $\underline{\gamma} : I \to T_{\gamma(0)}M$ is constant. In particular, for any $X \in TM$, the geodesic $\gamma_X$ tangent to $X$ at $0$ is the path associated to the constant path $\underline{\gamma}_X(t) = X$.
\end{prop}
Notice that the homogeneity property $\gamma_{cX} (t)= \gamma_X(ct)$ appears clearly in the construction. \\
\noindent{\bf Proof}. This follows readily from \lref{parallel-transport} and the characterization of geodesics as paths whose velocity field is parallel.
\cqfd
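In local coordinates, \pref{geodesics} recovers the usual geodesic equation: the path $\underline{\gamma}$ is constant exactly when the velocity field of $\gamma$ is parallel, that is, when
$$\frac{d^2\gamma^k}{dt^2} + \Gamma^k_{ij}\bigl(\gamma(t)\bigr) \frac{d\gamma^i}{dt} \frac{d\gamma^j}{dt} = 0.$$ \\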
\lref{parallel-transport} also yields an interpretation of the lift with respect to $\d^\fs$ of a path in $M$.
\begin{lem}\label{d-lift}
Let $\gamma : I \to M$ be a path in $M$ and $\xi : T_{\gamma(0)} M \to T_yM$ a linear map. Then the $\alpha$-lift of $\gamma$ tangent to $\d$ and passing through $\xi$ is the path~:
\begin{equation}\label{d-lift&parallel-t}
L_\xi^\d(\gamma)(t) : T_{\gamma(t)}M \to T_{\gamma'(t)}M : X \mapsto \tau^{\gamma'}(t) \circ \xi \circ (\tau^{\gamma}(t))^{-1}(X),
\end{equation}
where
$$\gamma' = {\mathcal I}\bigl(\xi \circ (\tau^\gamma(t))^{-1} (\frac{d\gamma}{dt})\bigr).$$
\end{lem}
\noindent{\bf Proof}. Setting $\widetilde{\gamma} = L_\xi^\d(\gamma)$, the relation to be proven is
$$\widetilde{\gamma}(t) = \tau^{\gamma'}(t) \circ \xi \circ (\tau^{\gamma}(t))^{-1},$$
or, in other terms,
\begin{equation}\label{d-lift-equ}
\widetilde{\gamma}(t) \circ \tau^\gamma (t) = \tau^{\gamma'}(t) \circ \xi.
\end{equation}
Equivalently, the path $\widetilde{\gamma} \circ \tau^\gamma$ is everywhere tangent to the distribution $\Sigma^\alpha$. Fix $t \in I$ and set
\begin{enumerate}
\item[-] $X = \frac{d \gamma}{dt}\bigl|_t$,
\item[-] $\overline{X}_1 = \frac{d \widetilde{\gamma}}{dt}\bigl|_t$,
\item[-] $\overline{X}_2 = \frac{d \tau^\gamma}{dt}\bigl|_t$.
\end{enumerate}
The relation (\ref{d-lift-equ}) becomes
$$\overline{X}_1 \cdot \overline{X}_2 \in \Sigma^\alpha_{\widetilde{\gamma}(t) \circ \tau^\gamma (t)},$$
for all $t \in I$, or equivalently,
$$\bigl(R_{\widetilde{\gamma}(t) \cdot \tau^\gamma(t)}^{-1}\bigr)_{*} \bigl(\overline{X}_1 \cdot \overline{X}_2\bigr) \in \im \sigma^\alpha.$$
Remembering the characterization of $\d^\fs$ as the distribution consisting of the traces in $\e$ of the $(-1)$-eigenspaces of the map $\psi$ (cf.~(\ref{psi})), we know that
$$L \cdot \overline{X}_1 \cdot R = -\overline{X}_1,$$
where $L$ (respectively $R$) is the $\alpha$-lift (respectively $\beta$-lift) of $\beta_*\overline{X}_1 = \widetilde{\gamma}(t)(X)$ (respectively $\alpha_*\overline{X}_1 = X$) in $\d_{\widetilde{\gamma}(t)}$. Equivalently,
$$\overline{X}_1 \cdot R = \iota_*(L) \cdot -\overline{X}_1,$$
and since $\iota_*(L) = -L$,
$$\overline{X}_1 \cdot R = - \bigl(L \cdot \overline{X}_1\bigr).$$
Moreover, because $\overline{X}_2$ belongs to the right invariant distribution $\Sigma^\alpha$,
$$\overline{X}_2 = (R_{\tau^\gamma(t)})_{*}\sigma^\alpha(X) = \bigl(\frac{1}{2} X + \frac{1}{2} (m_{-1})_* R\bigr) \cdot 0_{\tau^\gamma(t)}.$$
Therefore, using the linearity of the differential of the multiplication $m$ in $\b^{(1)}(\PP(M))$, we compute
$$\begin{array}{lll}
\bigl(R_{\widetilde{\gamma}(t) \cdot \tau^\gamma(t)}^{-1}\bigr)_{*} \bigl(\overline{X}_1 \cdot \overline{X}_2\bigr) & = & \overline{X}_1 \cdot \overline{X}_2 \cdot 0_{(\widetilde{\gamma}(t) \cdot \tau^\gamma(t))^{-1}} \\
& = & \overline{X}_1 \cdot \bigl(\frac{1}{2} X + \frac{1}{2} (m_{-1})_* R\bigr) \cdot 0_{\tau^\gamma(t)} \cdot 0_{(\widetilde{\gamma}(t) \cdot \tau^\gamma(t))^{-1}} \Bigl. \\
& = & \bigl( \frac{1}{2}\overline{X}_1 + \frac{1}{2} \overline{X}_1 \bigr) \cdot \bigl(\frac{1}{2} X + \frac{1}{2} (m_{-1})_* R\bigr) \cdot 0_{(\widetilde{\gamma}(t))^{-1}} \Bigl.\\
& = & \Bigl( \frac{1}{2}\bigl(\overline{X}_1\cdot X \bigr) + \frac{1}{2} \bigl( \overline{X}_1 \cdot (m_{-1})_* R\bigr) \Bigr) \cdot 0_{(\widetilde{\gamma}(t))^{-1}} \Bigl.\\
& = & \frac{1}{2}\Bigl(\overline{X}_1 + (m_{-1})_* (\overline{X}_1\cdot R) \Bigr) \cdot 0_{(\widetilde{\gamma}(t))^{-1}}\\
& = & \frac{1}{2}\Bigl(\overline{X}_1 - (m_{-1})_* (L \cdot \overline{X}_1) \Bigr) \cdot \frac{1}{2} \Bigl( \iota_*\overline{X}_1 - \iota_*\overline{X}_1 \Bigr)\\
& = & \frac{1}{2}\Bigl(\overline{X}_1 \cdot \iota_*\overline{X}_1 \Bigr) - \frac{1}{2} \Bigl((m_{-1})_* (L \cdot \overline{X}_1) \cdot \iota_*\overline{X}_1 \Bigr)\\
& = & \frac{1}{2} \widetilde{\gamma}(t)(X) - \frac{1}{2} (m_{-1})_*L \Bigl.\\
& = & \sigma^\alpha\bigl(\widetilde{\gamma}(t)(X)\bigr)\Bigl.
\end{array}$$
\cqfd
\section{Geodesic symmetries and locally symmetric spaces}\label{geod-sym&loc-sym-sp}
Let $\fs$ be a symmetry jet on $M$. We would like to describe the geodesic symmetries of the affine connection $\nabla^\fs$ directly in terms of $\d^\fs$. More generally, for each $\xi \in \b^{(1)}(\PP(M))$, we construct hereafter the extension of $\xi$ through geodesics~:
$$\varphi_\xi = \exp_{\beta(\xi)} \circ \; \xi \circ \exp_{\alpha(\xi)}^{-1} : U_{\alpha(\xi)} \to U_{\beta(\xi)},$$ where $U_{\alpha(\xi)}$ and $U_{\beta(\xi)}$ are some neighborhoods of $\alpha(\xi)$ and $\beta(\xi)$ respectively. Of course, the geodesic symmetries are the maps $s_x = \varphi_{-I_x}$, $x \in M$. We would like to show that the maps $\varphi_\xi$ are the best candidate affine transformations, that is, when $S^{k \cdot (1)}(\xi)$ is affine, it coincides with $j^k_{\alpha(\xi)}\varphi_\xi$. \\
Let $\xi$ be a fixed element in $\b^{(1)}(\PP(M))$. If $\gamma : I \to M$ is a regular path passing through $\alpha(\xi)$, let $L(\gamma) = L_\xi^{\d} (\gamma)$ be the $\alpha$-lift of $\gamma$ tangent to $\d = \d^\fs$ and passing through $\xi$. It is defined on some subinterval $I' \subset I$. Given $X \in TM$, let $\gamma_X$ denote the maximal geodesic through $X$. The homogeneity property for geodesics implies that the image of $L(\gamma_X)$ is independent of the choice of $X$ in ${\mathbb R}X$. Define $b_\xi$ to be the union over all vectors $X$ in the unit sphere $ST_xM$ of $T_xM$ (relative to some Riemannian metric on $M$) of the images of the paths $L(\gamma_X)$. Near $\xi$, the set $b_\xi$ is an embedded submanifold tangent to $\d^{\mathfrak s}$ at $\xi$, and, in fact, a local bisection. Indeed, the sphere $ST_xM$ being compact, a common domain $I'$ may be chosen for the various lifts $L(\gamma_X)$. The submanifold $b_\xi$ is a best candidate leaf of $\d^\fs$, that is, if there exists a submanifold of $\b^{(1)}(\PP(M))$ that is tangent to $\d^\fs$ up to order $k$, then $b_\xi$ is such a manifold. It follows easily from \pref{geodesics} and \lref{d-lift} that $b_\xi$ is the set of linear maps~:
$$T_{\gamma_X(t)}M \to T_{\varphi_\xi(\gamma_X(t))}M : X' \mapsto \tau^{\gamma_{\xi(X)}}(t) \circ \xi \circ (\tau^{\gamma_X}(t))^{-1}(X'),$$
where $t$ varies in $I'$ and $X$ in $ST_xM$. This is almost $j^1\varphi_\xi$, but not quite yet: only the base map is $\varphi_\xi$. So to obtain $j^1\varphi_\xi$, we apply the bouncing map
$\fb$ to $Tb_\xi$ (cf.~\rref{bouncing}), that is,
$$j^1\varphi_\xi = \fb(Tb_\xi).$$
In particular, the geodesic symmetries $s_x$, $x \in M$ are realized as local bisections in $\b^{(1)}(\PP(M))$ as follows~:
$$j^1 s_x = \fb(Tb_{-I_x}).$$
Notice that if $b_\xi$ is tangent to $\d^\fs$ up to order $k$, then $j^1\varphi_\xi$ is tangent to $\d^\fs$ up to order $k-1$. To summarize, we have proven the following proposition.
\begin{prop}\label{geodesic-symmetries} To produce $j^1 \varphi_\xi$, $\alpha$-lift each geodesic through $x$ to a path tangent to $\d^\fs$ passing through $\xi$ then make it a holonomic bisection via $\fb$. In other terms
$$j^1\varphi_\xi = \fb(Tb_\xi).$$
\end{prop}
This procedure is relatively simple. The distribution $\d^\fs$ does not admit $n$-dimensional leaves in general. Nevertheless, it always has $1$-dimensional leaves and gathering the ones passing through a given $\xi$ and that project via $\alpha$ onto the geodesics through $\alpha(\xi)$ (a natural family of paths filling a neighborhood of $\alpha(\xi)$) yields a bisection whose projection onto $M \times M$ is the graph of $\varphi_\xi$. In case $\d^\fs$ does admit a leaf $D$ through $\xi$, it will coincide near $\xi$ with $b_\xi$, which becomes automatically a holonomic bisection since $\d^\fs \subset \e$.
\begin{rmk} When $\xi \in \In{\d^\fs} = \b(T, R)$ (cf.~\dref{int-locus}), then $b_\xi$ is osculatory to $\d^\fs$ and when $\xi \in \b(T, \nabla T, R)$, then $j^1\varphi_\xi$ is osculatory to $\d^\fs$. It seems important to make the following distinction. If a bisection $b$ is tangent to $\d^\fs$ up to order $k$, that is $j^{k-1}_{\alpha(\xi)} b = S^{k \cdot (1)}(b(x))$ then the holonomic bisection $\fb(Tb)$ is tangent to $\d^\fs$ only up to order $k-1$, unless $b$ is, in addition, tangent to the holonomic distribution $\e$ up to order $k+1$, in which case the bisection $\fb(Tb)$ is tangent to $\d^\fs$ up to order $k$ as well, that is $S^{k \cdot (1)}(b(x))$ is holonomic.
\end{rmk}
It is worthwhile noticing that Emmanuel Giroux has shown in \cite{G} that a plane field admits near each point a canonical $2$-jet of {\sl path-osculatory surfaces} (our terminology, to distinguish from osculatory). Path-osculatory, means that each path through the point in the surface is osculatory to the distribution. In the context of the present paper, we obtain for each $\xi \in \b^{(1)}(\PP(M))$, a canonical surface whose second order jet at $x$ coincides with Giroux's osculatory surfaces. \\
Now it is easy to show the well-known result that for a locally symmetric space, that is a space endowed with a torsionless affine connection for which the curvature tensor is parallel, the local diffeomorphism $\varphi_\xi$ is affine if and only if $\xi \in \b(R)$ (cf.~\cite{H}, Lemma 1.2.~p.~200). In particular, for those spaces --- and only them --- each geodesic symmetry is affine.
\begin{prop}\label{lss} Let ${\mathfrak s}$ be a symmetry jet whose associated connection $\nabla$ is locally symmetric, that is satisfies $T^\nabla = 0$ and $\nabla R^\nabla = 0$. Then through any $\xi \in \b^{(1)}(\PP(M))$ that preserves the curvature passes a $n$-dimensional leaf of $\d^{\mathfrak s}$.
\end{prop}
\noindent{\bf Proof}. Consider the Lie subgroupoid $\b(R)$. Since $\nabla R = 0$, \lref{d-tangent-to-b(Q)} implies that $\d^{\mathfrak s}$ is tangent to $\b(R)$ along $\b(R)$. Moreover, \lref{Int-kappa} implies that $\d^{\mathfrak s}$ is involutive along $\b(R)$. Hence the Lie subgroupoid $\b(R)$ is foliated by leaves of~$\d^\fs$.
\cqfd
\section{Relation with Kobayashi's admissible sections}\label{Kobayashi}
A theorem due to Kobayashi \cite{K} asserts that there is a bijective correspondence between torsionless affine connections and admissible sections of the bundle of $2$-frames $\F^{(2)}(M) \to \F^{(1)}(M)$. Since jets occur in our construction as well, it seems relevant to compare the two approaches. This section contains a brief description of Kobayashi's theorem, interpreted in terms of affine extensions as in \cite{H}, and a direct correspondence between Kobayashi's admissible sections and our symmetry jets. \\
Given a manifold $M$ of dimension $n$ and a non-negative integer $k$, {\em the bundle of $k$-frames of $M$}, denoted $\pi^k : \F^{(k)}(M) \to M$ is the set of $k$-jets at $0$ of local diffeomorphisms $\mathbb R^n \to M$ defined near $0$, endowed with the canonical projection that sends a jet $j^k_0f$ to $f(0) \in M$. Observe that the bundle of $1$-frames is the usual bundle of frames of $M$. There are obvious maps $\pi^{l \to k} : \F^{(l)}(M) \to \F^{(k)}(M)$ for each pair $l > k$. The group $GL(n, \mathbb R)$ acts on the right on each $\F^{(k)}(M)$ through $j^k_0f \cdot A = j^k_0 (f \circ A)$. A section $\F^{(1)}(M) \to \F^{(2)}(M)$ is said to be admissible if it is equivariant with respect to these $GL(n, \mathbb R)$-actions. \\
The main ingredient of this correspondence is the property of uniqueness of affine extension applied to $\mathbb R^n$ endowed with the trivial connection $\nabla_o$ and $M$ endowed with a torsionless affine connection $\nabla$ (it is a consequence of \pref{uae} applied to the disconnected affine manifold $(M \cup \mathbb R^n, \nabla \cup \nabla_o)$). It ensures the existence of an admissible section $s^{\nabla} : \F^{(1)}(M) \to \F^{(2)}(M)$ that maps $\xi : \mathbb R^n \to T_{f(0)}M$ to the unique affine $2$-jet that extends it. It is indeed equivariant with respect to the $GL(n, \mathbb R)$-actions on $\F^{(1)}(M)$ and $\F^{(2)}(M)$ since $s^{\nabla}(\xi) \cdot A$ is an affine $2$-jet that extends $\xi \circ A$, implying the relation $s^{\nabla}(\xi) \cdot A = s^{\nabla}(\xi \circ A)$.\\
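To make the correspondence concrete in the simplest situation, take $M = \mathbb R^n$ itself with the trivial connection $\nabla_o$. The affine local diffeomorphisms of $(\mathbb R^n, \nabla_o)$ are the affine maps, so the unique affine $2$-jet extending a frame $\xi = j^1_0f \in \F^{(1)}(\mathbb R^n)$ is
$$s^{\nabla_o}(j^1_0 f) = j^2_0 \bigl(v \mapsto f(0) + d_0f (v)\bigr).$$
In other words, in the induced coordinates on $\F^{(2)}(\mathbb R^n)$, the section $s^{\nabla_o}$ sets the second order part of the jet equal to zero. \\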
Conversely, given an admissible section $s$, it directly induces a linear horizontal distribution $\h^s$ on $TM$, hence an affine connection $\nabla^s$. Indeed, let $\xi \in \F^{(1)}(M)$. Since $s(\xi) = j^2_0f$ for some local diffeomorphism $f : \mathbb R^n \to M$ defined near $0$, one may define for $X \in T_{f(0)}M$
$$\h_X = f_{**_0} (H_{f_{*_0}^{-1}}(X)),$$
where $H_Z$ denotes the natural horizontal plane in $T_ZT\mathbb R^n$. The $GL(n, \mathbb R)$-equivariance of $s$ guarantees that $\h$ does not depend on the initial choice of a basis $\xi$ of $T_xM$, and that $\h$ is linear, in the sense that $\h_{a X + b Y} = m_{a*} (\h_{X}) +_* m_{b*} (\h_Y)$. \\
One can alternatively observe, as is done in \cite{K}, that the pullback via $s$ of the ${\mathfrak gl}(n, \mathbb R)$-component of the canonical form $\theta^{(2)}$ on $\F^{(2)}(M)$ yields a connection form on $\F^{(1)}(M)$. \\
Now we have two natural ways to think about torsionless affine connections: in terms of admissible sections and in terms of holonomic symmetry jets. It is tempting to close the triangle and show how to induce a symmetry jet naturally from an admissible section and vice versa. In so doing, we enlarge Kobayashi's correspondence to connections with torsion. As can be expected, this simply consists in allowing admissible sections to take values in $\F^{(1,1)}(M)$, the set of $1$-jets of sections of $\F^{(1)}(M) \to M$. Observe that $\F^{(1,1)}(M)$ is a bundle over $M$ for the canonical projection $\pi^{(1,1)} (j^1_x e) = \pi^1(e(x))$ and that its elements may also be viewed as horizontal planes tangent to $\F^{(1)}(M)$, the correspondence being $j^1_xe \mapsto e_{*_x}(T_xM)$. \\
Consider the groupoid action $\rho^{(1)} : \b^{(1)}(\PP(M)) \times_{(\alpha, \pi^1)} \F^{(1)}(M) \to \F^{(1)}(M)$, its~derivative
$$\rho^{(1)}_{*} : T\b^{(1)}(\PP(M)) \times_{(\alpha_*, \pi^1_*)} T\F^{(1)}(M) \to T\F^{(1)}(M) : (X_\xi, B_e) \mapsto X_\xi \cdot B_e,$$
and the induced groupoid action
$$\rho^{(1,1)} : \b^{(1,1)}_{nh}(\PP(M)) \times_{(\alpha, \pi^{(1,1)})} \F^{(1,1)}(M) \to \F^{(1,1)}(M) : (j^1_xb, j^1_xe) \mapsto j^1_xb \cdot j^1_xe = j^1_x (b \cdot e).$$
Observe that
$$D \Bigl(j^1_xb \cdot j^1_xe \Bigr) = \rho^{(1)}_*(b_{*_x}(T_xM), e_{*_x}(T_xM))$$
and that $\b^{(1)}(\PP(M))$ and $\b^{(1,1)}_{nh}(\PP(M))$ act on $\F^{(1)}(M)$ and $\F^{(1,1)}(M)$ respectively through $GL(n, \mathbb R)$-equivariant maps. More is true~:
\begin{lem} The action $\rho^{(1,1)}$ of $\b^{(1,1)}(\PP(M))$ on $\F^{(1,1)}(M)$ is simply transitive, that is, given two elements $j^1_{x_1} e_1$ and $j^1_{x_2} e_2$ in $\F^{(1,1)}(M)$, there exists a unique element in $\b^{(1,1)}(\PP(M))$ mapping $j^1_{x_1} e_1$ to $j^1_{x_2} e_2$. It is denoted by $m(j^1_{x_1} e_1, j^1_{x_2} e_2)$.
\end{lem}
\noindent{\bf Proof}. Define $\xi$ to be the linear isomorphism $T_{x_1}M \to T_{x_2}M$ that maps the basis $e_1(x_1)$ to $e_2(x_2)$. Let also $\varphi : \Op\{x_1\} \to \Op\{x_2\}$ be a local diffeomorphism of $M$ such that $j^1_{x_1} \varphi = \xi$. The relation $j^1_{x_1}b \cdot j^1_{x_1}{e}_1 = j^1_{x_2}{e}_2$ is satisfied by the local bisection $b$ defined near $x_1$ by
$$b(x) \bigl(e_1(x)\bigr) = e_2(\varphi(x)).$$
Moreover, from its construction, $j^1_{x_1}b$ belongs to $\b^{(1,1)}(\PP(M))$. Suppose $j^1_{x_1}b_o \in \b^{(1,1)}(\PP(M))$ also satisfies $j^1_{x_1}b_o \cdot j^1_{x_1}e_1 = j^1_{x_2}e_2$. Then $j^1_{x_1}(b_o^{-1} \cdot b)$ fixes $j^1_{x_1}e_1$. In other terms,
\begin{equation}\label{DD}
D(j^1_{x_1}(b_o^{-1} \cdot b)) \cdot D(j^1_{x_1}e_1) = D(j^1_{x_1}e_1).
\end{equation}
Let $X\in T_{x_1}M$ and let $Y$ denote the $\alpha_*$-lift of $X$ in $D(j^1_{x_1}(b_o^{-1} \cdot b)) \subset T_{\varepsilon (x_1)} \b^{(1,1)}(\PP(M))$; then $Y = X + A$ with $A = \frac{da_t}{dt}\bigl|_{t=0} \in \Ker(\alpha_* \times \beta_*) \simeq \End(T_{x_1}M)$. Let also $Z$ denote the lift of $X$ in $D(j^1_{x_1}e_1)$. Then the relation (\ref{DD}) implies that
$$\begin{array}{cll}
Z & = & \rho_*^{(1)}\Bigl(X + A, Z \Bigr) \\
& = & \rho_*^{(1)}\Bigl(X, Z \Bigr) + \rho_*^{(1)}\Bigl(A, 0_{e_1(x_1)} \Bigr) \\
& = & \displaystyle{Z + \frac{d}{dt} a_t(e_1(x_1))\Bigl|_{t=0},}
\end{array}$$
which implies that $A$ vanishes. This holds for any $X \in T_{x_1}M$ and thus
$$D(j^1_{x_1}(b_o^{-1} \cdot b)) = T_{x_1}M.$$
\cqfd
Now, an admissible section $s$ induces a symmetry jet ${\mathfrak s}$ through the relation~:
$${\mathfrak s} (x) = m(s(e_x), s(-e_x)),$$
where $e_x$ is any element of the fiber of $\F^{(1)}(M)$ over $x$, whose choice does not affect the value of $m(s(e_x), s(-e_x))$ because $m$ is $GL(n, \mathbb R)$-invariant. Moreover, its first order part $p({\mathfrak s} (x))$ is $- I_x$. As ${\mathfrak s}(x)$ consists of affine $(1,1)$-jets, the induced connection $\nabla^{\mathfrak s}$ coincides with~$\nabla^s$. \\
Conversely, given a symmetry jet ${\mathfrak s}$, we obtain an admissible section $s$ as follows. The distribution $\d^\fs$ along $-I$ associated to ${\mathfrak s}$ induces the distribution $E_{-1}^\psi$ on $\F^{(1)}(M)$ consisting of eigenspaces for the eigenvalue $-1$ of the induced involution
$$\psi : T\F^{(1)}(M) \to T\F^{(1)}(M): B_{e_x} \mapsto \rho^{(1)}_{*} \Bigl(\overline{X}^{\d_{-I_x}, \alpha_*}, B_{e_x} \cdot (-I) \Bigr),$$
where $X = \pi^1_{*_{e_x}}(B_{e_x})$. The vertical tangent space $\Ker(\pi^1)_*$ consists of vectors fixed by $\psi$. Besides,
$$\pi^1_*\circ \psi = - \pi^1_*,$$
hence $E_{-1}^\psi$ is $n$-dimensional and transverse to the fibers of $\pi^1$. Define
$$s : \F^{(1)}(M) \to \F^{(1,1)}(M) : e \mapsto s(e),$$
such that $s(e) = j^1_x\tilde{e}$ if and only if $D(j^1_x\tilde{e}) = (E_{-1}^\psi)_e.$
When ${\mathfrak s}$ is holonomic, $s$ takes its values in $\F^{(2)}(M) \subset \F^{(1,1)}(M)$ thanks to the following construction. First observe that if $s(j^1_0 f)$ is the unique affine $2$-jet extension (with respect to the connection $\nabla^{\mathfrak s}$ associated to ${\mathfrak s}$) of some $1$-jet $j^1_0 f \in \F^{(1)}(M)$, then ${\mathfrak s}(f(0)) \cdot s(j^1_0 f)$ is still an affine extension, namely the affine extension of $- j^1_0 f$. Now because $s$ is admissible, $s(- j^1_0 f) = s(j^1_0 f) \cdot (-I)$, where $s(j^1_0 f) \cdot (-I)$ refers to the action of the element $-I \in GL(n, \mathbb R)$ on $\F^{(2)}(M)$. Whence $s(j^1_0 f)$ has to satisfy the following implicit relation~:
\begin{equation}\label{implicit}
{\mathfrak s}(f(0)) \cdot s(j^1_0 f) = s(j^1_0 f) \cdot (-I).
\end{equation}
Since the $GL(n, \mathbb R)$-action commutes with the action of $\b^{(2)}(\PP(M))$, the previous relation is equivalent to
\begin{equation}\label{implicit-bis}
s(j^1_0 f)^{-1} \cdot {\mathfrak s}(f(0)) \cdot s(j^1_0 f) = -I,
\end{equation}
where, if $s(j^1_0 f) = j^2_0 f$, then $s(j^1_0 f)^{-1} = j^2_{f(0)}f^{-1}$; the relation can be solved by means of \lref{sec-der-commutation}. Indeed, let $\theta_1$ be any element in $\F^{(2)}(M)$ that extends $j^1_0f$. Then
$$\theta_1^{-1} \cdot {\mathfrak s}(f(0)) \cdot \theta_1 = \eta_x,$$
for some $2$-jet $\eta_x$ of a local diffeomorphism of $\mathbb R^n$ extending $-I$. Solutions to (\ref{implicit-bis}) are in bijective correspondence with $2$-jets $\theta$ of local diffeomorphisms of $\mathbb R^n$ that extend $I$ and satisfy
$$\theta \cdot \eta_x \cdot \theta^{-1} = -I,$$
or equivalently
$$-I \cdot \theta \cdot \eta_x = \theta.$$
Supposing $\theta = j^2_xf$ and $\eta_x = j^2_xh$, \lref{sec-der-commutation} implies that
$$d^2f(X_x) = d^2(-I \cdot f \cdot h)(X_x) = - d^2f(X_x) + d^2(-h)(X_x).$$
Hence $d^2f = \frac{1}{2} d^2(-h)$.
\cqfd
\appendix
\section{Notations}\label{notation}
The following standard pieces of notation are frequently used in the text~:
\begin{enumerate}
\item For a manifold $N$, the canonical projection $TN \to N$ is denoted most often by the letter $p$ and sometimes by the symbol $p^N$. In particular, $p : T^2M \to TM$ is the projection of $TN$ onto $N$ for $N = TM$.
\item The canonical inclusion of a manifold into its tangent bundle as the zero section is systematically denoted by the letter $i$.
\item When $\pi : M \to N$ is a submersion, $i^\pi : T^\pi M \hookrightarrow TM$ denotes the canonical inclusion of the sub-bundle $T^\pi M$ consisting of vectors tangent to the fibers of $\pi$ in $TM$.
\item If $\pi : M \to N$ is a submersion and $\varphi : N_0 \hookrightarrow N$ is an immersion, the notation $T_{N_0}^\pi M$ stands for the set of vectors tangent to the fibers of $\pi$ and belonging to $\pi^{-1}(\varphi(N_0))$.
\item Given a fibration $\pi : E \to B$, the fiber $\pi^{-1}(b)$ is denoted by $E^\pi_b$ or $E_b$ when unambiguous and the natural inclusion of $E^\pi_b$ in the total space $E$ is denoted by $i^\pi_b : E^\pi_b \to E$.
\item For a vector bundle $\pi : E \to B$, we have a natural inclusion
$$i^\pi_e : E_{\pi(e)} \hookrightarrow T_e^\pi E : e' \mapsto \frac{d(e + te')}{dt}\Bigr|_{0}$$ for each element $e$ in $E$. Notice that $i^\pi_e = (i^\pi_{\pi(e)})_{*_e}$.
\item If $\pi_1 : M_1 \to N$ and $\pi_2 : M_2 \to N$ are two submersions, then $M_1 \times_{(\pi_1, \pi_2)} M_2$ denotes the fiber-product manifold $\{(m_1, m_2) \in M_1 \times M_2; \pi_1(m_1) = \pi_2(m_2)\}$. It is naturally endowed with a projection onto $N$. Observe that
$$T(M_1 \times_{(\pi_1, \pi_2)} M_2) = TM_1 \times_{(\pi_{1*}, \pi_{2*})} TM_2.$$
\item Given two vector bundles $\pi_1 : E_1 \to B_1$ and $\pi_2 : E_2 \to B_2$ and a map $\varphi : E_1 \to E_2$, the expression $\varphi : (E_1, \pi_1) \to (E_2, \pi_2)$ means that $\varphi$ is a morphism of vector bundles.
\end{enumerate}
\section{Lie groupoids of jets of bisections}\label{jet-of-bis}
We refer to the books \cite{dSW}, \cite{Mck-87} and \cite{Mck-05} for an introduction to the basics on groupoids. Notice though that, like Mackenzie but unlike da Silva and Weinstein, we call the source map $\alpha$ and the target map $\beta$. The following notions will be needed in the text~: Lie groupoid, locally trivial Lie groupoid, groupoid morphism, Lie subgroupoid, bisection, local bisection, Lie groupoid action, Lie algebroid. \\
For a Lie groupoid $G \rightrightarrows M$, the symbol $\alpha$ denotes the source map, $\beta$ the target map, $m : G \times_{(\alpha, \beta)}G \to G$ the multiplication, $\varepsilon : M \to G$ the natural inclusion and $\iota$ the inversion. The collection of smooth local bisections of $G$ is denoted by $\b_{\ell}(G)$ and the subcollection of local bisections defined on a neighborhood of $x \in M$ by $\b_{\ell, x}(G)$. Let us now introduce the jet extension groupoids (cf.~\cite{Ehresmann}). Given a Lie groupoid $G \rightrightarrows M$, a point $x \in M$ and any integer $k \geq 0$, there is an equivalence relation $\sim^k_x$ on the set $\b_{\ell, x}(G)$ of local bisections of $G \rightrightarrows M$ defined near $x$, namely $b_1\sim^k_x b_2$ if $b_1(x) = g = b_2(x)$ and $b_1$ and $b_2$ have the same Taylor expansion of order $k$ at $x$ with respect to some (hence any) pair of local coordinate systems around $x$ and $g$. The equivalence class of $b$ with respect to $\sim^k_x$ is commonly denoted by $j^k_xb$ and called the $k$-jet of $b$ at $x$.
\begin{dfn} The set of all $k$-jets of local bisections of $G$ is denoted by $\b^{(k)}(G)$ and called the $k$-jet extension of the Lie groupoid $G$.
\end{dfn}
The set $\b^{(k)}(G)$ is another Lie groupoid over $M$ when it is endowed with the obvious structure maps, namely
\begin{enumerate}
\item[-] $\alpha : \b^{(k)}(G) \to M : j^k_xb \mapsto x$,
\item[-] $\beta : \b^{(k)}(G) \to M : j^k_xb \mapsto \beta \circ b (x)$,
\item[-] $m(j^k_yb' , j^k_xb) = j^k_x (b' \cdot b)$ when $y = \beta \circ b(x)$,
\item[-] $\varepsilon : M \to \b^{(k)}(G) : x \mapsto j^k_x\varepsilon$,
\item[-] $\iota : \b^{(k)}(G) \to \b^{(k)}(G) : j^k_xb \mapsto j^k_yb^{-1}$, where $y = \beta \circ b(x)$.
\end{enumerate}
Let us observe that for $k > l$, the natural projection $p^{k \to l} : \b^{(k)}(G) \to \b^{(l)}(G) : j^k_xb \mapsto j^l_xb$ is a Lie groupoid morphism.
\begin{rmk}\label{1jets-as-planes}
Let $G \rightrightarrows M$ be a Lie groupoid and set $n = \dim M$. Consider the Grassmann bundle $\pi :\Gr_n(G) \to G$ of $n$-planes tangent to $G$. Notice that the data of an element $j^1_xb$ in $\b^{(1)}(G)$ is equivalent to the data of the map $b_{*_x} : T_xM \to T_{b(x)}G$, itself equivalent to the data of the plane $\im(b_{*_x})$. Moreover, the map
\begin{equation}\label{D}
D : \b^{(1)}(G) \to \Gr_n(G) : j^1_xb \mapsto D(j^1_xb) = b_{*_x}(T_xM)
\end{equation}
is a diffeomorphism onto the open subset of $\Gr_n(G)$ consisting of ``horizontal'' planes, that is, planes that are transverse to the $\alpha$-fibers and the $\beta$-fibers. The latter subset is denoted by $\Gr^h_n(G)$ and thus supports a groupoid structure whose source and target maps are $\alpha \circ \pi$ and $\beta \circ \pi$ respectively and whose multiplication is induced from the differential
$$m_{*_{(g_1, g_2)}} : T_{g_1}G \times_{(\alpha_{*_{g_1}}, \beta_{*_{g_2}})} T_{g_2}G \to T_{g_1 \cdot g_2}G$$
of the multiplication in $G$. The identity at $x$ is $\varepsilon_{*_x}(T_xM)$ and the inverse of $D \subset T_gG$ is $\iota_{*_g}(D)$.
\end{rmk}
\begin{nota}\label{b1} Consider the pair groupoid $\PP(M)$, that is, the set $M \times M$ endowed with the two projections $\alpha = p_2$ and $\beta = p_1$ onto $M$ and the product $(x, y) \cdot (y, z) = (x,z)$. A local bisection $b$ of $\PP(M)$ is a partial diffeomorphism $U_b = \alpha(b) \to V_b = \beta(b)$. Therefore the groupoid $\b^{(k)}(\PP(M))$ is the subset of $J^k(M,M)$ consisting of $k$-jets of partial diffeomorphisms of $M$. In particular, the groupoid $\b^{(1)}(\PP(M))$ is the set of linear isomorphisms between pairs of tangent spaces to $M$. It is called in the literature the \emph{general linear groupoid} of the vector bundle $TM$ or the \emph{gauge groupoid} of the principal bundle of frames of $M$, and is also denoted by ${\rm GL}(TM)$ or ${\rm Aut}({\mathcal F}(M))$.
\end{nota}
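For instance, for $M = \mathbb R^n$, a $1$-jet $j^1_x\varphi$ of a partial diffeomorphism is determined by the triple $(\varphi(x), x, d_x\varphi)$, which yields an identification
$$\b^{(1)}(\PP(\mathbb R^n)) \simeq \mathbb R^n \times \mathbb R^n \times GL(n, \mathbb R),$$
under which $\alpha$ and $\beta$ become the projections onto the second and first factors, and the multiplication $j^1_y\psi \cdot j^1_x\varphi = j^1_x(\psi \circ \varphi)$, with $y = \varphi(x)$, becomes $(\psi(y), x, d_y\psi \circ d_x\varphi)$, that is, the chain rule.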
The extension procedure can be iterated and the groupoid $\b^{(k_1)}(\b^{(k_2)}(G))$ is denoted hereafter by $\b^{(k_1,k_2)}(G)$. It contains $\b^{(k_1 + k_2)}(G)$ as an embedded subgroupoid. Observe that, in addition to the natural projections
$$p^{k_1 \to l_1} : \b^{(k_1,k_2)}(G) \to \b^{(l_1, k_2)}(G) : j^{k_1}_xb \mapsto j^{l_1}_xb,$$
for $l_1 = 0, ..., k_1-1$, there are projections
$$p^{k_2\to l_2}_* : \b^{(k_1,k_2)}(G) \to \b^{(k_1, l_2)}(G) : j^{k_1}_xb \mapsto j^{k_1}_x(p^{k_2 \to l_2} \circ b)$$
that are groupoid morphisms as well. On $\b^{(k_1+k_2)}(G) \subset \b^{(k_1,k_2)}(G)$, the map $p^{k_2\to 0}_*$ coincides with
$p^{k_1+k_2 \to k_1}$. Similarly, given a sequence $(k_1, ..., k_I)$ of natural numbers, the groupoid $\b^{(k_1)}(...(\b^{(k_I)}(G))...)$ is denoted by $\b^{(k_1, ...,
k_I)}(G)$ and supports a series of projections
$$p^{k_i \to l_i}_{\underbrace{* ... *}_{i-1}} : \b^{(k_1, ..., k_i, ..., k_I)}(G) \to \b^{(k_1, ..., l_i, ..., k_I)}(G).$$
Notice, for instance, that any groupoid $\b^{(k_1, ..., k_I)}(G)$ is a subgroupoid of the groupoid $\b^{(1,...,1)}(G)$ with $k_1 + ... + k_I$ copies of~$1$.
\begin{nota}\label{bkl}
Provided it does not generate any ambiguity, we will use the following abbreviations~:
\begin{enumerate}
\item[-] The projection $p^{k \to 0}$ on $\b^{(k)}(G)$, that extracts the $0$-th order part of a jet, is denoted by $p^k$ and coincides with $p \circ ... \circ p$ ($k$ factors) on $\b^{(1, ..., 1)}(G) \supset \b^{(k)}(G)$.
\item[-] Similarly, the projection $p^{k_i \to 0}_{* ... *}$ is denoted by $p^{k_i}_{* ... *}$ and coincides with $p_{* ... *} \circ ... \circ p_{* ... *}$ on $\b^{(k_1, ..., k_{i-1}, 1, ..., 1, k_{i+1}, ..., k_I)} \supset \b^{(k_1, ..., k_i, ..., k_I)}$.
\item[-] We remove the superscripts from the projections $p^1$, $p_*^1$, ..., $p^1_{* ...*}$ and denote them by $p$, $p_*$, $p_{*... *}$. Observe that
\begin{equation}\label{p*=p*}
D\Bigl( p_{\underbrace{* ... *}_{i}}(j^1_xb)\Bigr) = (p_{\underbrace{* ... *}_{i-1}})_{*_{b(x)}} \Bigl( D(j^1_xb)\Bigr).
\end{equation}
\end{enumerate}
\end{nota}
A local bisection $b$ of $G$ induces a local so-called {\bf holonomic} bisection
$$j^kb : U \to \b^{(k)}(G) : x \mapsto j^k_xb$$
of the groupoid $\b^{(k)}(G)$. When $G \rightrightarrows M$ is locally trivial, there is a nice characterization of local holonomic bisections of $\b^{(1)}(G)$ as local bisections tangent to a certain distribution that we introduce hereafter. Consider the map $p : \b^{(1)}(G) \to G$ and its differential $p_{*_\xi} : T_\xi \b^{(1)}(G) \to T_{p(\xi)}G$ at $\xi \in\b^{(1)}(G)$. Observe that $D(\xi)$ is a subspace in $T_{p(\xi)}G$.
\begin{dfn}\label{def-e}
The holonomic distribution $\e$ or $\e^G$ on $\b^{(1)}(G)$ is defined by
$$\e_\xi = p_{*_\xi}^{-1} \bigl(D(\xi)\bigr).$$
\end{dfn}
\begin{prop}\label{e}
The distribution $\e$ has rank $(n + n(k - n))$, where $n = \dim M$ and $k = \dim G$, contains the distribution $\Ker p_*$ and is transverse to the $\alpha$ and $\beta$-fibers of $\b^{(1)}(G)$. It has the property that a local bisection $b : U \to \b^{(1)}(G)$ is holonomic if and only if it is tangent to $\e$.
\end{prop}
\noindent{\bf Proof}. Observe that $\Ker p_{*_\xi}$ is $(n(k - n))$-dimensional. Therefore, the plane $\e_\xi$ has rank $\dim D(\xi) + \dim \Ker p_{*_\xi} = n + n(k-n)$. \\
Because $D(\xi)$ is transverse to the $\alpha$ and $\beta$-fibers of $G$, its lift $p_*^{-1} (D(\xi))$ enjoys the same property in $\b^{(1)}(G)$. \\
Let $b$ be a local bisection of $\b^{(1)}(G)$. The condition $T_{b(x)}b \subset \e$ is equivalent to $p_*(T_{b(x)}b) = D(b(x))$, that is, $p_*(j^1_xb) = b(x)$ (cf.~(\ref{p*=p*})) or $j^1_x(p \circ b) = b(x)$. So a bisection $b$ is tangent to $\e$ everywhere if and only if it coincides with $j^1b^0$, where $b^0 = p \circ b$.
\cqfd
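For the pair groupoid, this rank is easily made explicit~: with $G = \PP(M)$, one has $k = \dim G = 2n$, so that $\e$ has rank
$$n + n(2n - n) = n(n+1)$$
inside the $(2n + n^2)$-dimensional manifold $\b^{(1)}(\PP(M))$; the $n^2$-dimensional fibers of $p : \b^{(1)}(\PP(M)) \to M \times M$ are contained in $\e$, as stated.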
\begin{rmk} In general, a bisection $b$ of $\b^{(1,...,1)}(G)$ is holonomic if $b$ is tangent to $\e$ and its projection $p \circ b$ is a holonomic bisection.
\end{rmk}
\begin{rmk}\label{sh} The common terminology in the literature (cf.~e.g.~\cite{Lib}) is as follows. Elements in $\b^{(k)}(G)$ are called {\em holonomic} jets, and elements in $\b^{(1,..., 1)}(G)$ ($k$ copies of $1$) that do not belong to $\b^{(k)}(G)$ are called {\em non-holonomic} jets. Amongst non-holonomic jets are {\em semi-holonomic} jets, which are those non-holonomic jets for which the values of the projections $p, p_*, ..., p_{*...*}$ all agree.
\end{rmk}
\begin{dfn}\label{derived-action} A left action $\rho : G \times_{(\alpha, \pi)} E \to E$ of a groupoid $G \rightrightarrows M$ on a fiber bundle $\pi : E \to M$ (\cite{Mck-05}) induces an action of the groupoid $\b^{(1)}(G)\rightrightarrows M$ onto $\pi \circ p : TE \to M$ as follows~:
$$\rho^{(1)} : \b^{(1)}(G) \times_{(\alpha, \pi \circ p)}TE \to TE : \Bigl(j^1_x b, X_e\Bigr) \mapsto j^1_xb \cdot X_e = \rho_{*} \Bigl(b_{*_x}\bigl(\pi_{*_e} (X_e)\bigr), X_e\Bigr),$$
where $\rho_* : TG \times_{(\alpha_*, \pi_*)} TE \to TE$ is the differential of $\rho$.
Iterating this procedure, we obtain actions
$$\rho^{(1,...,1)} : \b^{(1,...,1)} \times_{(\alpha, \pi \circ p^k)} T^kE \to T^k E$$
for any $k = 1, 2, ...$.
\end{dfn}
\begin{rmk}\label{derived-action-of-pair-groupoid} In particular, starting from the trivial action of the pair groupoid $\PP(M)$ on $M$
$$\rho : (M\times M) \times_{(\alpha, \id)} M \to M : ((y, x), x) \mapsto y$$
we obtain actions $\rho^{(1)}$, $\rho^{(1,1)}$, $\rho^{(1,1,1)}$, ... of $\b^{(1)}, \b^{(1,1)}, \b^{(1,1,1)}, ...$ on $TM$, $T^2M$, $T^3M, ...$ respectively. A groupoid action $\rho : G \times_{(\alpha, \pi)} E \to E$ is said to be {\sl effective} if $\rho(g_1, e) = \rho(g_2, e)$ for all $e \in E$ with $\pi(e) = \alpha(g_1) = \alpha(g_2)$ implies that $g_1 = g_2$. The various actions $\rho^{(1)}$, $\rho^{(1,1)}$, $\rho^{(1,1,1)}$, ... are effective.
\end{rmk}
\begin{lem}\label{loc-triv} The $k$-jet extension of a locally trivial groupoid is locally trivial as well. In particular, all the groupoids $\b^{(... ,l,k)}(\PP(M)) \rightrightarrows M$ are locally trivial.
\end{lem}
\section{Second tangent bundle}\label{second-t-b}
Given a manifold $M$, its second tangent bundle $T^2M$ is defined to be the tangent bundle $p : T(TM) \to TM$ of the total space of the tangent bundle of $M$. Its elements are denoted by calligraphed letters $\X$, $\Y$, \dots. It is endowed with several pieces of structure that we describe in the present section. Note that $T^2M$ is an example of a double vector bundle as treated in Chapter 9 of \cite{Mck-05}. The main specificity of $T^2M$ is that it is canonically isomorphic to its own \emph{flip}.\\
\noindent
{\bf Vector bundle structures~:} $T^2M$ admits two distinct structures of vector bundle over the manifold $TM$~: \\
$\bullet$ $p : T^2M \to TM$ is the canonical projection, that is, if $t \mapsto X_t$ is a path in $TM$, then
$$p\Bigl(\frac{dX_t}{dt}\Bigl|_{t=0}\Bigr) = X_0.$$
Fiberwise addition and scalar multiplication are denoted, as usual, by $+ : T^2M \times_{(p,p)}T^2M \to T^2M : (\X_1, \X_2) \mapsto \X_1 + \X_2$ and $m : \mathbb R \times T^2M \to T^2M : (a, \X) \mapsto a\X = m_a(\X)$ respectively. The fiber over a vector $X_x \in TM$ is denoted by $T_{X_x}TM$. \\
$\bullet$ $p_* : T^2M \to TM$ is the differential of the canonical projection $p : TM \to M$, that is
$$p_*\Bigl(\frac{dX_t}{dt}\Bigl|_{t=0}\Bigr) = \frac{dp(X_t)}{dt}\Bigl|_{t=0}.$$
Fiberwise addition is the differential of the corresponding map on $TM$, that is
$$+_* : T^2M \times_{(p_*, p_*)} T^2M \to T^2M : \Bigl(\X = \frac{dX_t}{dt}\Bigl|_0, \Y = \frac{dY_t}{dt}\Bigl|_0\Bigr) \mapsto \frac{d(X_t + Y_t)}{dt}\Bigl|_0,$$
where the paths $t \mapsto X_t$ and $t \mapsto Y_t$ have been chosen to satisfy $p(X_t) = p(Y_t)$. This is not restrictive since $p_*(\X) = p_*(\Y)$. Similarly, scalar multiplication by a real $a$ is the differential of $m_a$ on $TM$~:
$$m_{a*} : T^2M \to T^2M : \X = \frac{dX_t}{dt}\Bigl|_{t=0} \mapsto m_{a*}(\X) = \frac{d (a X_t)}{dt}\Bigl|_{t = 0}.$$
The $p_*$-fiber over a vector $X_x \in TM$ is denoted by $T^{X_x}TM$.
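In the local coordinates $(x^i, X^i, Y^i, \X^i)$ on $T^2M$ induced by local coordinates $x^i$ on $M$ (the coordinates also used in \dref{involution} below), the two projections read
$$p(x^i, X^i, Y^i, \X^i) = (x^i, X^i), \qquad p_*(x^i, X^i, Y^i, \X^i) = (x^i, Y^i),$$
so that $+$ adds the components $(Y^i, \X^i)$ at fixed $(x^i, X^i)$, while $+_*$ adds the components $(X^i, \X^i)$ at fixed $(x^i, Y^i)$. \\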
\begin{figure}
\caption{The addition and scalar multiplication in $(T^2M, p_*)$}
\end{figure}
\begin{lem}
The map $p : T^2M \to TM$ (respectively $p_* : T^2M \to TM$) is a vector bundle morphism when $T^2M$ is endowed with the vector bundle structure induced by $p_*$ (respectively $p$).
\end{lem}
The two maps $p \circ p$ and $p \circ p_*$ from $T^2M$ onto $M$ agree and are denoted by $p^2$. In other words, the following diagram is commutative. The fiber of $p^2$ over a point $x \in M$ is denoted by $T^2_xM$.
\hspace{3cm}\xymatrix{
& T^2M\ar@{->}[ld]_p \ar@{->}[rd]^{p_*} & \\
TM \ar@{->}[rd]_p & & TM\ar@{->}[ld]^p \\
& M &}
\\
\noindent
{\bf Horizontal inclusions and projections~:} The two vector bundle structures $p, p_* : T^2M \to TM$ induce two natural inclusions of $TM$ into $T^2M$~:
$$i, i_* : TM \to T^2M : X_x \mapsto i(X_x) = 0_{X_x}, i_*(X_x) = {0_{*X_x}},$$
parameterizing the respective zero-sections denoted by $0_{TM}, 0_{*TM}$. The map $i_*$ is the differential of the inclusion $i : M \to TM$ of $M$ as the zero-section of $TM$. The inclusion $i$ (respectively $i_*$) is a vector bundle morphism between $(TM, p)$ and $(T^2M, p_*)$ (respectively $(T^2M, p)$). The associated projections of $T^2M$ onto its two zero-sections are denoted by $e = i \circ p$ and $e_* = i_* \circ p_* $. Let ${\bf i}$ denote the inclusion $i \circ i = i_* \circ i$ of $M$ into $T^2M$. It parameterizes the intersection of the two zero-sections $0_{TM} \cap {0_*}_{TM} = 0_{0_M}$. Similarly, let ${\bf e}$ denote the projection $e \circ e_* = e_* \circ e$ of $T^2M$ onto $0_{0_M}$. These different maps satisfy the relations~:
$$\begin{array}{llll}
p \circ i = id_{TM} \;\;& p \circ e = p \;\;& p_* \circ i = i \circ p \;\;& p_* \circ e = e \circ p \\
p_* \circ i_* = id_{TM} \;\; & p_* \circ e_* = p_* \;\; & p \circ i_* = i \circ p \;\; & p \circ e_* = e \circ p \\
\end{array}$$
and $${\bf e} = {\bf i} \circ p^2.$$
\noindent
{\bf Vertical inclusion of $TM$~:} There is a canonical ``vertical inclusion" from $TM$ into $T^2M$ parameterizing $T^p_{0_M}TM = p^{-1}(0_M) \cap \, p_*^{-1}(0_M)$ (cf.~\sref{notation})
$$i^p_{0_M} : TM \to T^2M : X_x \mapsto i^p_{0_M}(X_x) = \frac{d(tX_x)}{dt} \Bigl|_{t=0}.$$
The restriction of $i^p_{0_M}$ to $T_xM$ is sometimes denoted by $i^p_{0_x}$. Observe that the map $i^p_{0_M}$ is a vector bundle map for both structures on $T^2M$. In particular, a vector $\X \in T^2M$ belongs to $\im (i^p_{0_M})$ if and only if $m_a(\X) = m_{a*}(\X)$ for all $a \in \mathbb R$.\\
\noindent
{\bf Vertical inclusions of $TM \oplus TM$~:} There are also two canonical inclusions of $TM \oplus TM$ into $T^2M$ parameterizing the two ``kernels" $p^{-1}(0_M) = T_{0_M}TM$ and $p_*^{-1}(0_M) = T^pTM$ as follows~:
$$I_{p} : TM \oplus TM \to p^{-1}(0_M) \subset T^2M : (X^1_x, X_x^2) \mapsto i_{*_x}(X_x^1) + i^p_{0_M}(X_x^2)$$
$$I_{p_*} : TM \oplus TM \to p_*^{-1}(0_M) \subset T^2M : (X^1_x, X_x^2) \mapsto i(X_x^1) +_* i^p_{0_M}(X_x^2).$$
The map $I_{p}$ (respectively $I_{p_*}$) is a vector bundle morphism between $TM \oplus TM$ and $(T^2M, p)$ (respectively $(T^2M, p_*)$).
\begin{rmk}\label{Leibniz} Given vector fields $X$, $Y$ on $M$ and a function $f \in C^\infty(M)$, the section $fY$ of $p : TM \to M$ satisfies
\begin{equation}\label{scalar-mult}
(fY)_{*_x} (X_x) = X_xf \; Y_x + m_{f(x)*}(Y_{*_x}X_x),
\end{equation}
where $X_x f \; Y_x$ really means $I_{p_*}(f(x)Y_x, X_xf Y_x) = i(f(x)Y_x) +_* i^p_{0_M}(X_xf \; Y_x)$. Indeed, setting $(f \circ p) \times \id : TM \to \mathbb R \times TM : Z_x \mapsto (f(x), Z_x)$ and $m : \mathbb R \times TM \to TM : (a, X_x) \mapsto a X_x$, the section $fY$ can be rewritten as
$$m \circ ((f \circ p) \times \id) \circ Y.$$
Hence
$$\begin{array}{ccl}
(fY)_{*_x} (X_x) & = & m_{*_{(f(x), Y_x)}} \Bigl(f_{*_x} (X_x), Y_{*_x}X_x\Bigr) \\
& = & m_{*_{(f(x), Y_x)}} \Bigl(f_{*_x} (X_x), 0_{Y_x} \Bigr) + m_{*_{(f(x), Y_x)}} \Bigl(0_{f(x)}, Y_{*_x}X_x\Bigr) \\
& = & X_xf \; Y_x + m_{f(x)*} (Y_{*_x}X_x).
\end{array}$$
\end{rmk}
\noindent
{\bf Affine structure~:} The product $P = p \times p_* : T^2M \to TM \times_{(p, p)}TM$ yields yet another structure on $T^2M$, this time of an affine bundle of rank $n$, whose fiber over $(X_x, Y_x)$ is modeled on the vector space $T_xM$ and denoted by $T_{X_x}^{Y_x}TM$. Observe that for a fixed vector $\X \in T^2M$, with $p(\X) = X_x$ and $p_*(\X) = Y_x$, the two maps
\begin{equation}\label{A}
\begin{array}{l}
A_\X : T_xM \to T_{X_x}^{Y_x}TM : V_x \mapsto \X + \Bigl(e (\X) +_* i^p_{0_M}(V_x)\Bigr) \\
A_\X : T_xM \to T_{X_x}^{Y_x}TM : V_x \mapsto \X +_* \Bigl(e_*(\X) + i^p_{0_M}(V_x)\Bigr)
\end{array}
\end{equation}
coincide and parameterize the fiber of $P$ through $\X$. Moreover, there is a canonical map
\begin{equation}\label{i}
\pi : T^2M \times_{(P, P)}T^2M \to TM : (\X_1, \X_2) \mapsto \pi(\X_1, \X_2),
\end{equation}
defined by
\begin{equation}\label{def-i}
\pi(\X_1, \X_2) = V_x \quad \mbox{if} \quad \X_1 = A_{\X_2}(V_x).
\end{equation}
One could also write, with a slight abuse of notation, the expression $\pi(\X_1, \X_2) = \X_1 - \X_2$. The map $\pi$ satisfies
\begin{equation}\label{abc}
\pi(\X^1, \X^2) = \pi(\X^1 +_o \X, \X^2 +_o \X)
\end{equation}
when $+_o$ denotes either $+$ or $+_*$ and $p_o(\X) = p_o(\X^i) \in TM$ for the corresponding projection $p_o$. \\
\noindent
{\bf Canonical Involution~:} A very important piece of the structure of $T^2M$ is its canonical involution, defined below~:
\begin{dfn}\label{involution} The canonical involution on $T^2M$ is commonly defined by means of local coordinates $(x^i, X^i, Y^i, \X^i)$ induced by local coordinates $x^i$ on $M$ as being the map $\kappa = \kappa_M: T^2M \to T^2M$ that flips the two middle sets of coordinates
$$\kappa(x^i, X^i, Y^i, \X^i) = (x^i, Y^i, X^i, \X^i).$$
\end{dfn}
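One can check that this coordinate definition is chart-independent by representing elements of $T^2M$ by smooth maps $c : \mathbb R^2 \to M$~: the vector $\frac{\partial}{\partial t}\bigl|_0 \frac{\partial c}{\partial s}(t, 0)$ has coordinates $\bigl(c(0,0), \partial_s c(0,0), \partial_t c(0,0), \partial_t \partial_s c(0,0)\bigr)$, and every element of $T^2M$ is of this form. Since exchanging the roles of $t$ and $s$ preserves $c(0,0)$ and the mixed partial derivative, the definition amounts to
$$\kappa \Bigl( \frac{\partial}{\partial t}\Bigl|_0 \frac{\partial c}{\partial s}(t, 0) \Bigr) = \frac{\partial}{\partial s}\Bigl|_0 \frac{\partial c}{\partial t}(0, s),$$
which is manifestly independent of the chosen chart.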
\noindent
{\bf Properties of $\kappa$~:} The involution $\kappa$ is an isomorphism between the two distinct vector bundle structures on $T^2M$, and, as such, satisfies the relations~:
\begin{equation}\label{prop-kappa}
\begin{array}{ll}
p_* \circ \kappa = p & p \circ \kappa = p_*\\
\kappa \circ i = i_* & \kappa \circ i_* = i \\
\kappa \circ e = e_* & \kappa \circ e_* = e \\
\kappa \circ m_{a*} = m_a \circ \kappa & \kappa \circ m_a = m_{a*} \circ \kappa \\
\kappa(\X + \Y) = \kappa(\X) +_* \kappa(\Y) & \kappa(\X +_* \Y) = \kappa(\X) + \kappa(\Y).
\end{array}
\end{equation}
It is thus also an endomorphism of the affine bundle $P : T^2M \to TM \times_{(p,p)} TM$ over the reflection map $\kappa_o(X,Y) = (Y,X)$. Furthermore, $\kappa$ fixes pointwise the image of $i_{0_M}^p$~:
$$\kappa \circ i^p_{0_M} = i^p_{0_M}.$$
This and the relations (\ref{A}) imply that for two vectors $\X_1$ and $\X_2$ in a same $P$-fiber,
\begin{equation}\label{i-kappa}
\pi(\X_1, \X_2) = \pi(\kappa(\X_1), \kappa(\X_2)).
\end{equation}
The following properties will not be used in the text but provide additional insight on $\kappa$ and its relation with the Lie bracket and the flows of vector fields.
\begin{prop}\label{involution-prop}(\cite{L}, \cite{K-}) The involution $\kappa$, sometimes denoted by $\kappa^M$, satisfies the following properties, for $x \in M$ and $X, Y \in {\mathfrak X}(M)$~:
\begin{enumerate}
\item[-] $[X, Y]_x = \pi \bigl(Y_{*_x}X_x, \kappa(X_{*_x} Y_x) \bigr)$,
\item[-] $\displaystyle{\kappa(Y_{*_x} X_x) = \frac{d(\varphi_Y^t)_{*_x}X_x}{dt}\Bigl|_{0}}$, where $\varphi_Y^t$ denotes the flow of $Y$ at time $t$.
\end{enumerate}
\end{prop}
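As a consistency check, the first property can be verified in the local coordinates of \dref{involution}. One computes
$$Y_{*_x}X_x = \bigl(x^i, Y^i, X^i, X^j\partial_j Y^i\bigr), \qquad \kappa(X_{*_x}Y_x) = \bigl(x^i, Y^i, X^i, Y^j\partial_j X^i\bigr),$$
so that both vectors lie in the same $P$-fiber, and their difference, measured by $\pi$, has components $X^j\partial_j Y^i - Y^j\partial_j X^i = [X, Y]^i_x$.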
\begin{rmk}
It would be nice to have an intrinsic description of $\kappa$. Each one of the previous properties could serve as a definition of $\kappa$ on $T^2M - T^pTM$. On $T^pTM$, we want $\kappa$ to coincide with
$$\kappa \Bigl(i(X_x) + i_{0_M}(V_x) \Bigr) = i_*(X_x) +_* i_{0_M}(V_x).$$
So $\kappa$ can be defined intrinsically and separately on $T^2M - T^pTM$ and on $T^pTM$, but the smoothness of $\kappa$ across $T^pTM$ would then have to be established. So this approach does not seem completely satisfactory.
\end{rmk}
\begin{rmk}
The involution $\kappa$ allows for an alternative characterization of distributions on $M$ that are involutive~: a distribution $\d$ on $M$ is involutive if and only if the subset $T\d \cap p_*^{-1}(\d)$ of $T^2M$ is $\kappa$-invariant ($\d$ is thought of as a subbundle of $TM$; therefore $T\d \subset T^2M$).
\end{rmk}
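A small example may clarify this criterion. On $M = \mathbb R^3$ with coordinates $(x, y, z)$, let $\d$ be the (non-involutive) contact distribution spanned by $\partial_x$ and $\partial_y + x \partial_z$. The velocity $\X$ at $t=0$ of the path $t \mapsto \partial_y + t\, \partial_z \in \d_{(t,0,0)}$ has coordinates $\X = \bigl(0; (0,1,0); (1,0,0); (0,0,1)\bigr)$; it lies in $T\d$ and satisfies $p_*(\X) = \partial_x \in \d$, hence belongs to $T\d \cap p_*^{-1}(\d)$. However $\kappa(\X) = \bigl(0; (1,0,0); (0,1,0); (0,0,1)\bigr)$ is not tangent to $\d$~: a path in $\d$ through $\partial_x$ at the origin is of the form $t \mapsto a(t)\, \partial_x + b(t) \bigl(\partial_y + x(t) \partial_z\bigr)$ with $b(0) = 0$ and $x(0) = 0$, so the last set of coordinates of its velocity is $\bigl(\dot a, \dot b, \dot x\, b + x \dot b\bigr)\bigl|_{t=0} = (\dot a, \dot b, 0)$, which excludes $(0, 0, 1)$. This failure of $\kappa$-invariance reflects $[\partial_x, \partial_y + x\partial_z] = \partial_z \notin \d$.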
\section{Structure of $\b^{(1,1)}(\PP(M))$}\label{b11}
The structure of the groupoid $\b^{(1,1)}(\PP(M))$ (cf.~\aref{jet-of-bis} and \nref{b1}) follows closely that of $T^2M$. As already observed in \aref{jet-of-bis}, it is endowed with two natural projections $p$ and $p_*$ onto $\b^{(1)}(\PP(M))$, whose definition we briefly recall. An element of $\b^{(1,1)}(\PP(M))$ is of the type $j^1_xb$ for some local bisection $b : U_x \to \b^{(1)}(\PP(M))$ defined in a neighborhood $U_x$ of $x$. Then
$$\left\{ \begin{array}{lllllllll}
p &:& \b^{(1,1)}(\PP(M)) &\to& \b^{(1)}(\PP(M)) &:& j^1_xb &\mapsto& b(x) \\
p_* &&&&&&&& j^1_x(p \circ b).
\end{array}\right.$$
We thus obtain a commutative diagram~:
\hspace{1.3cm}
\xymatrix{
& \b^{(1,1)}(\PP(M)) \ar@{->}[ld]_p \ar@{->}[rd]^{p_*} & & \\
\b^{(1)}(\PP(M)) \ar@{->}[rd]_p & & \b^{(1)}(\PP(M))\ar@{->}[ld]^p & (\star)\\
& M & &}
Notice that $b^0 = p \circ b$ is a local bisection of the pair groupoid $M\times M$, that is, a section $x \mapsto (f(x), x)$ of $\alpha$ such that $\beta \circ b^0 : x \mapsto f(x)$ is a local diffeomorphism of $M$. In the sequel $b^0 = p \circ b$ will systematically be identified with the local diffeomorphism $f = \beta \circ b^0$.
\begin{rmk}\label{bouncing} Under the identification $\b^{(1,1)}(\PP(M))\sim \Gr_n^h(\b^{(1)}(\PP(M)))$ (cf.~\rref{1jets-as-planes}), the map $p_* : \b^{(1,1)}(\PP(M)) \to \b^{(1)}(\PP(M))$ coincides with the ``bouncing map'', defined as follows~:
$$\fb : \Gr_n^h(\b^{(1)}(\PP(M))) \to \b^{(1)}(\PP(M)) : P_\xi \mapsto \fb(P_\xi) = \beta_{*_\xi} \circ \Bigl(\alpha_{*_\xi}\bigr|_{P_\xi}\Bigr)^{-1}.$$
\end{rmk}
When $b$ is a local bisection of $\b^{(1)}(\PP(M))$, the map $\fb$ applied to $Tb$ yields a holonomic bisection~:
$$x \mapsto \fb(T_{b(x)}b) = j^1_xb^0.$$
\begin{nota}\label{nota-e}
The holonomic distribution $\e^{M\times M}$ on $\b^{(1)}(\PP(M))$ (cf.~\dref{def-e}) is denoted by $\e^{(1)}$.
\end{nota}
Observe that if an $n$-plane $P_\xi$ is contained in the holonomic distribution $\e^{(1)}$, then $\fb(P_\xi) = \xi$. (One could describe $\e^{(1)}$ as the union of all horizontal $n$-planes having this property.) More generally, the linear maps $\fb(P_\xi)$ and $\xi$ coincide on $\alpha_*(P_\xi \cap \e^{(1)}_\xi)$.
\begin{rmk} The distribution $\e^{(1)}$ is maximally non-integrable.
It generalizes, to the case of $1$-jets of maps from $M$ to $M$, the canonical contact form $\alpha$ on the set of $1$-jets of local maps from $M$ to the real line, $J^1(M \times \mathbb R) \simeq T^*M \times \mathbb R$, defined by $\alpha(X_\beta, V) = \beta(\pi_*(X_\beta)) - V$.
\end{rmk}
\begin{dfn}\label{bh}
The set of $(1,1)$-jets $\xi = j^1_xb$ for which $p(\xi) = p_*(\xi)$ is denoted hereafter $\b^{(1,1)}_{sh}(\PP(M))$ (``sh'' for semi-holonomic, cf.~\rref{sh}). Notice that $\b^{(2)}(\PP(M)) \subsetneqq \b^{(1,1)}_{sh}(\PP(M))$.
\end{dfn}
\begin{rmk}\label{h-e}
A $(1,1)$-jet $\xi$ belongs to $\b^{(1,1)}_{sh}(\PP(M))$ if and only if $D(\xi) \subset \e^{(1)}$.
\end{rmk}
\begin{prop}\label{e-iota}
Along the bisection $b_o = -I$, the distribution $\e^{(1)}$ coincides with the family of $(-1)$-eigenspaces of the involution $\iota_{*}$.
\end{prop}
\noindent{\bf Proof}. Let $x\in M$ and set $\xi = -I_x \in \b^{(1)}(\PP(M))$. Then
$$\iota_{*_\xi} : T_\xi \b^{(1)}(\PP(M)) \to T_\xi \b^{(1)}(\PP(M))$$
is an involutive linear isomorphism. Hence the tangent space to $\b^{(1)}(\PP(M))$ at $\xi$ decomposes into a direct sum of $+1$ and $-1$ eigenspaces for $\iota_{*_\xi}$~:
$$T_\xi \b^{(1)}(\PP(M)) = E_{+1} \oplus E_{-1}.$$
We claim that $\e^{(1)}_\xi = E_{-1}$. Let $X_\xi \in T_\xi\b^{(1)}(\PP(M))$. Then, because $p$ is a groupoid morphism and $\iota(\xi) = \xi$, we have
$$p_{*_{\xi}} \circ \iota_{*_\xi}(X_\xi) = \iota_{*_{p(\xi)}} \circ p_{*_\xi}(X_\xi).$$
Moreover $\iota_{*_{p(\xi)}} : T_xM \times T_xM \to T_xM \times T_xM : (X^1, X^2) \mapsto (X^2, X^1)$. Thus $X_\xi \in E_{-1}$ implies that $p_{*}(X_\xi) \in D(\xi)$. Conversely, $p_{*}(X_\xi) \in D(\xi)$ implies that $X_\xi \in E_{-1} + \Ker p_{*_\xi}$. But $\Ker p_{*_\xi} \subset E_{-1}$. Indeed, if $\exp(tA)$ is a one-parameter subgroup in the Lie group $p^{-1}(x,x) \subset \b^{(1)}(\PP(M))$, then $\iota (\exp (tA)) = \exp(-tA)$, whence $\iota (\xi \cdot \exp(tA)) = \xi \cdot \exp (-tA)$ ($\xi$ is central), which implies that $\iota_{*_\xi} (X) = -X$ for $X \in \Ker p_{*_\xi}$.
\cqfd
\begin{cor}\label{invertibility} Any element of $\b^{(1,1)}(\PP(M))$ whose first order part belongs to the bisection $-I \subset \b^{(1)}(\PP(M))$ is its own inverse.
\end{cor}
\noindent{\bf Proof}. Let $\xi \in \b^{(1,1)}_{sh}(\PP(M))$ with $p(\xi) = -I_x$. Then $D(\xi) \subset \e^{(1)}_{-I_x}$. Hence, from \rref{1jets-as-planes}, we know that $D(\iota(\xi))$ coincides with $\iota_{*_{-I_x}}(D(\xi))$, which equals $D(\xi)$ by the previous result, implying that $\iota(\xi) = \xi$.
\cqfd
\begin{rmk}\label{f**}
The $2$-jet $j^2_xf$ of a local diffeomorphism $f : U \subset M \to M$ of $M$ is equivalently described as the map
$$f_{**_x} : T^2_xM \to T^2_yM : \X_{X_x} \mapsto (f_{*})_{*_{X_x}}(\X_{X_x}) \stackrel{\rm not}{=} f_{**_{X_x}}(\X_{X_x}).$$
Observe (with pieces of notation introduced in \sref{second-t-b}) that
\begin{equation}\label{rel-f**}
\begin{array}{rll}
p \circ f_{**_x} &=& f_{*_x} \circ p \\
p_* \circ f_{**_x} &=& f_{*_x} \circ p_* \\
f_{**_x} \circ i^p_{0_x} &=& i^p_{0_y} \circ f_{*_x}.
\end{array}
\end{equation}
\end{rmk}
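In the local coordinates of \dref{involution}, and writing $Df$ and $D^2f$ for the first and second derivatives of a coordinate expression of $f$, the map $f_{**_x}$ reads
$$f_{**_x}(x, X, Y, \X) = \bigl(f(x), Df(x)X, Df(x)Y, Df(x)\X + D^2f(x)(Y, X)\bigr).$$
The relations (\ref{rel-f**}) can be read off directly from this expression, and the symmetry of $D^2f$ accounts for the commutation of $f_{**_x}$ with $\kappa$ stated in (\ref{f**-kappa}).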
More generally, consider the natural left action $\rho^{(1,1)}$ of $\b^{(1,1)}(\PP(M))$ on $T^2M$ (cf.~\dref{derived-action} and \rref{derived-action-of-pair-groupoid})~:
\begin{equation}\label{(1,1)-action}
\begin{array}{cllll}
\rho^{(1,1)} & : & \b^{(1,1)}(\PP(M)) \times_{(\alpha, p^2)} T^2M & \to & T^2M \\
& & \displaystyle{\Bigl(j^1_xb, \X = \frac{dX_t}{dt}\Bigr|_{0}\Bigr) } & \mapsto & \displaystyle{ j^1_xb \cdot \X = \frac{d (b \cdot X_t)}{dt}\Bigr|_{0}. }
\end{array}
\end{equation}
In particular if $\X = Y_{*_x}X_x \in T^2M$ and $\beta(b(x)) = y$, then
\begin{equation}\label{coucou}
j^1_xb \cdot \X = (bY)_{*_y}(b(x) X_x).
\end{equation}
The relevance of the next lemma is essentially that it allows one to carry the canonical involution $\kappa$ of $T^2M$ over to a canonical involution on $\b^{(1,1)}_{sh}(\PP(M))$. Indeed, given a $(1,1)$-jet $\xi$, the expression
$$\kappa(\xi) \cdot \X = \kappa \bigl( \xi \cdot \kappa(\X)\bigr)$$
defines a map $T^2_xM \to T^2_yM$, which corresponds, thanks to the next lemma, to a new $(1,1)$-jet, defined to be $\kappa(\xi)$ (see~\dref{kappa-definition}). Recall that for a groupoid $G \rightrightarrows M$ and two points $x$ and $y$ in $M$, the notation $G_{x,y}$ stands for the subset $\alpha^{-1}(x) \cap \beta^{-1}(y)$ of $G$.
\begin{lem}\label{11-jetsasmaps} Through the action $\rho^{(1,1)}$, the set of $(1,1)$-jets $(\b^{(1,1)}(\PP(M)))_{x,y}$ is in one-to-one correspondence with the set of maps $\ell : T^2_xM \to T^2_yM$ enjoying the following properties~:
\begin{enumerate}
\item[(a)] $\ell$ is a vector bundle morphism from $(T^2_xM, p)$ to $(T^2_yM, p)$ over a linear map $p(\ell) : T_xM \to T_yM$ and a vector bundle morphism from $(T^2_xM, p_*)$ to $(T^2_yM, p_*)$ over a linear map $p_*(\ell) : T_xM \to T_yM$~: \\
\hspace{1.3cm}
\xymatrix{
T^2_xM \ar[r]^{\ell} \ar[d]_p & T^2_yM \ar[d]^p & & & T^2_xM \ar[r]^{\ell} \ar[d]_{p_*} & T^2_yM \ar[d]^{p_*} \\
T_xM \ar[r]^{p(\ell)} & T_yM & & & T_xM \ar[r]^{p_*(\ell)} & T_yM
}
\item[(b)] $\ell$ coincides with $p(\ell)$ on each fiber of the vertical sub-bundle $T^p_{0_M}TM$~: \\
\hspace{1.3cm}
\xymatrix{
T^p_{0_x}TM \ar[r]^{\ell} & T^p_{0_y}TM \\
T_xM \ar[u]^{i^p_{0_x}}\ar[r]^{p(\ell)} & T_yM \ar[u]_{i^p_{0_y}}
}
\end{enumerate}
A $(1,1)$-jet $\xi$ corresponds to a map $\ell_\xi$ with $p(\ell_\xi) = p(\xi)$ and $p_*(\ell_\xi) = p_*(\xi)$. Genuine $2$-jets $j^2_xf \in \b^{(2)}(\PP(M))$ induce maps, also denoted by $f_{**_x}$, that, in addition, commute with $\kappa$, i.e.
\begin{equation}\label{f**-kappa}
f_{**_x} \circ \kappa = \kappa \circ f_{**_x}.
\end{equation}
\end{lem}
\begin{rmk}\label{rem-f**-kappa}
The relation (\ref{f**-kappa}) holds for any smooth map $f : M \to N$, where $\kappa$ stands either for the involution on $T^2M$ or on $T^2N$.
\end{rmk}
The next lemma is used in the proof of the previous one.
\begin{lem}\label{decomposition}
Let $U$ denote some open subset of $M$ and let $\{X^1, ..., X^n\}$ be a set of local vector fields in $\IX(U)$ forming a basis of each tangent space $T_{x}M$, $x \in U$. Consider a path
$$(-\varepsilon, \varepsilon) \to TM : t \mapsto X_t = \sum_{j=1}^n a_j(t) X^j(\gamma(t))$$
in $TM$, where $\gamma$ denotes the underlying path in $M$. Its velocity vector $\displaystyle{\X = \frac{dX_t}{dt}|_{t=0}}$ admits the following expression~:
$$\displaystyle{ \X = \sideset{}{_{*}}\sum_{j=1}^n m_{a_j(0)*} \bigl(X^j_{*_x} Y_x\bigr) + \Bigl[i\Bigl(a_j(0) X^j_x \Bigr) +_* i^p_{0_M} \Bigl(\frac{da_j}{dt}\Bigl|_{t=0}X^j_x \Bigr) \Bigr] },$$
where $\sideset{}{_{*}}\sum$ indicates that the addition is $+_*$ and where $\displaystyle{Y_x = \frac{d \gamma(t)}{dt}|_{t=0}}$.
\end{lem}
\noindent{\bf Proof}. It is essentially the same statement as \rref{Leibniz}. A proof is nevertheless included mainly to prepare for the proof of the corresponding statement for $T^3M$ (\lref{111-jetsasmaps}). Let us abbreviate $a_j(0)$ by $a_j$.
$$\begin{array}{lll}
\X & = & \displaystyle{\frac{d}{dt}\sum_{j=1}^n m\Bigl(a_j(t), X^j(\gamma(t))\Bigr) \Bigl|_{t=0}
= \sideset{}{_{*}}\sum_{j=1}^n m_{*_{(a_j, X^j_x)}} \Bigl( \partial_ta_j(0), X^j_{*_x} Y_x\Bigr) }\\
& = & \displaystyle{ \sideset{}{_{*}}\sum_{j=1}^n m_{*_{(a_j, X^j_x)}} \Bigl( 0_{a_j}, X^j_{*_x}Y_x \Bigr) + m_{*_{(a_j, X^j_x)}} \Bigl( \frac{da_j(t)}{dt}\Bigl|_{t=0}, 0_{X^j_x}\Bigr) }\\
& = & \displaystyle{ \sideset{}{_{*}}\sum_{j=1}^n m_{a_j*} \bigl(X^j_{*_x} Y_x \bigr) + \frac{d}{dt} m \Bigl(a_j + \frac{da_j}{dt}\Bigl|_{t=0} t, X^j_x\Bigr)\Bigr|_{t=0}}\\
& = & \displaystyle{ \sideset{}{_{*}}\sum_{j=1}^n m_{a_j*} \bigl(X^j_{*_x} Y_x \bigr) + \Bigl[ \frac{d}{dt} m\Bigl(a_j, X^j_x \Bigr)\Bigr|_{t=0} +_* m \Bigl(t, \frac{da_j}{dt}\Bigl|_{t=0} X^j_x \Bigr)\Bigr|_{t=0} \Bigr]}\\
& = & \displaystyle{ \sideset{}{_{*}}\sum_{j=1}^n m_{a_j*} \bigl( X^j_{*_x} Y_x \bigr) + \Bigl[ i\Bigl(a_j X^j_x \Bigr) +_* i^p_{0_M} \Bigl(\frac{da_j}{dt}\Bigl|_{t=0} X^j_x \Bigr) \Bigr]}
\end{array}$$
\cqfd
\noindent
{\bf Proof of \lref{11-jetsasmaps}.} It is quite obvious that a $(1,1)$-jet does enjoy the properties $(a)$ and $(b)$. A detailed proof of the converse is provided mainly because it will lighten the proof of \lref{111-jetsasmaps}. Consider a map $\ell : T^2_xM \to T^2_yM$ that satisfies $(a)$ and $(b)$. Let $\{X^1, ..., X^n\}$ be a basis of $T_xM$ and let $\{Y^1 = p(\ell)(X^1), ..., Y^n = p(\ell)(X^n)\}$ be the corresponding basis of $T_yM$. For each $1 \leq i \leq n$, let $H^i$ be an $n$-dimensional subspace of $T_{X^i}TM$ complementary to $T^p_{X^i}TM$ and let $X^i : U \to TM$ (respectively $Y^i : V \to TM$) be a local section of $TM$ defined over a neighborhood $U$ of $x$ (respectively $V$ of $y$) in $M$, passing through $X^i$ (respectively $Y^i$) and tangent to $H^i$ (respectively $\ell(H^i)$). We may assume that the set $\{X^1(x'), ..., X^n(x')\}$ (respectively $\{Y^1(y'), ..., Y^n(y')\}$) is a basis of $T_{x'}M$ (respectively $T_{y'}M$) for all $x' \in U$ (respectively $y' \in V$) and that there exists a local diffeomorphism $g : U \to V$ such that $g_{*_x} = p_*(\ell)$. Now define a local bisection $b$ of $\b^{(1)}(\PP(M))$ over $U$ as follows~:
$$b(x') : T_{x'}M \to T_{g(x')}M : \sum_{i = 1}^n a_i X^i(x') \mapsto \sum_{i=1}^n a_i Y^i(g(x')).$$
It is now trivial to verify, by means of \lref{decomposition}, that the action of the $(1,1)$-jet $j^1_xb$ on $T^2M$ coincides with $\ell$. Indeed, it amounts to verifying that the images of the vectors $X^j_{*_x}X^k_x$, $i(X^j_x)$, $i^p_{0_M}(X^j_x)$ under the action of $j^1_xb$ and under $\ell$ agree. This is implemented in the construction of $b$.
\cqfd
\begin{rmk}\label{hypoth-b} About the importance of hypothesis $(b)$. Consider the map
$$m_{-1} \circ m_{-1*} : T^2M \to T^2M.$$
It is a homomorphism for both vector bundle structures on $T^2M$ and it preserves the vertical sub-bundle $T^pTM$ but restricts to $\id$ on $T^pTM$ instead of $-\id$. Hence it is not induced by a $(1,1)$-jet and in particular does not yield a (necessarily canonical) affine connection (cf.~\pref{symm-jet<-->conn}).
\end{rmk}
\begin{dfn} A homomorphism of $T^2M$ is a bijective morphism of vector bundles $\ell : (T^2_xM, p_o) \to (T^2_yM, p_o)$, for some $x, y \in M$ and for both $p_o = p$ and $p_o = p_*$ over linear isomorphisms denoted by $p(\ell)$ and $p_*(\ell) : T_xM \to T_yM$ respectively. The set of homomorphisms of $T^2M$ is denoted by $\EL(T^2M)$ and the map $\b^{(1,1)}(\PP(M)) \to \EL(T^2M) : \xi \mapsto \ell_\xi$ by $\fL$. The set $\EL(T^2M)$, which is endowed with a Lie groupoid structure for which $\fL$ is a groupoid morphism, has the following distinguished Lie subgroupoids~:
\begin{enumerate}
\item[-] $\EL^{(1,1)}(T^2M) = \fL(\b^{(1,1)}(\PP(M)))$,
\item[-] $\EL^{(1,1)}_{sh}(T^2M) = \fL(\b^{(1,1)}_{sh}(\PP(M)))$,
\item[-] $\EL^{(2)}(T^2M) = \fL(\b^{(2)}(\PP(M)))$.
\end{enumerate}
\end{dfn}
Notice that the difference between $\EL(T^2M)$ and $\EL^{(1,1)}(T^2M)$ is that an element of $\EL(T^2M)$ may act on the vertical subbundle $\im i^p_{0_M}$ through any linear map, generally unrelated to $p(\ell)$ or $p_*(\ell)$.
\begin{dfn}\label{kappa-definition}
The natural involution $\kappa$ on $\b_{sh}^{(1,1)}(\PP(M))$ is defined through the expression~:
\begin{equation}\label{kappa}
\kappa(\xi) \cdot \X = \kappa \bigl( \xi \cdot \kappa(\X)\bigr).
\end{equation}
Basically, it exchanges the order of the two derivatives involved in a $(1,1)$-jet.
\end{dfn}
We have not been able to find a definition of a canonical involution on $(1,1)$-jets in the literature. The common definition
is a (non-canonical) involution on $J^1(J^1(Y \to X))$, for a fibration $Y \to X$, depending on the choice of a connection on $X$.
\begin{rmk}\label{kappa-rem}
Notice that if $\xi$ lies in $\b^{(1,1)}(\PP(M)) - \b^{(1,1)}_{sh}(\PP(M))$, the right-hand side of (\ref{kappa}) no longer defines a $(1,1)$-jet, as condition {\it (b)} in \lref{11-jetsasmaps} fails. Indeed,
$$\kappa(\xi) \cdot i^p_{0_M}(U) = \kappa \bigl( \xi \cdot \kappa(i^p_{0_M}(U))\bigr) = \kappa \bigl( \xi \cdot i^p_{0_M}(U)\bigr) = \kappa \bigl( i^p_{0_M}(p(\xi) \cdot U)\bigr) = i^p_{0_M}(p(\xi) \cdot U),$$
while a $(1,1)$-jet acts on a vertical vector via its $p$-component, which, in the case of $\kappa(\xi)$, must be $p_*(\xi)$. Nevertheless $\kappa$ is defined on the entire space $\EL(T^2M)$ of homomorphisms of $T^2M$.
\end{rmk}
\begin{lem}\label{kappa-prop} The involution $\kappa$ is a groupoid automorphism whose fixed point set is $\b^{(2)}(\PP(M))$ and such that $p\circ \kappa = p_*$ and $p_* \circ \kappa = p$.
\end{lem}
\begin{rmk}\label{affine-str} A $(1,1)$-jet $\xi$ is also a morphism between the affine bundles $P : T^2_xM \to T_xM \oplus T_xM$ and $P : T^2_yM \to T_yM \oplus T_yM$ over the map $p(\xi) \oplus p_*(\xi) : T_xM \oplus T_x M \to T_yM \oplus T_yM$. Its {\bf pure second order part} in the direction of two vectors $X_x$ and $Y_x$ in $T_xM$ is the affine map
$$\xi(X_x, Y_x) : T_{X_x}^{Y_x}TM \to T_{p(\xi)(X_x)}^{p_*(\xi)(Y_x)}TM,$$
where $T_{X_x}^{Y_x}TM$ denotes the intersection $p^{-1}(X_x) \cap p_*^{-1}(Y_x)$. The set of $(1,1)$-jets over a fixed map $\xi_1 \oplus \xi_2 : T_x M \oplus T_xM \to T_yM \oplus T_yM$ is an affine space modeled on the space of bilinear maps from $T_xM \times T_xM$ to $T_yM$. Indeed, let $\xi_0, \xi \in \b^{(1,1)}(\PP(M))$ be such that $p(\xi_0) = p(\xi)$ and $p_*(\xi_0) = p_*(\xi)$, then
\begin{equation}\label{xi-xi0}
\xi - \xi_0 : T_xM \times T_xM \to T_yM : (X_x, Y_x) \mapsto \pi \Bigl(\xi \cdot Y_{*_x} X_x, \xi_0 \cdot Y_{*_x} X_x \Bigr),
\end{equation}
where the map $\pi$ has been defined by (\ref{i}) in \aref{second-t-b} and where $Y$ is a local vector field extending $Y_x$, whose choice is irrelevant. Indeed, formula (\ref{scalar-mult}) implies that the right-hand side of (\ref{xi-xi0}) is $C^\infty(M)$-bilinear. Notice that if both $\xi_0$ and $\xi$ are holonomic, or $\kappa$-invariant, then $\xi - \xi_0$ is symmetric. Indeed, let $X, Y$ be local vector fields extending $X_x, Y_x$ and satisfying $[X, Y]_x = 0$. Then
$$\begin{array}{ccl}
\bigl(\xi - \xi_0\bigr) (X_x, Y_x) & = & \pi \Bigl(\xi \cdot Y_{*_x} X_x, \xi_0 \cdot Y_{*_x} X_x \Bigr) \\
& = & \pi \Bigl(\kappa \bigl(\xi \cdot Y_{*_x} X_x\bigr), \kappa \bigl(\xi_0 \cdot Y_{*_x} X_x\bigr) \Bigr) \;\;\mbox{implied by (\ref{i-kappa})} \\
& = & \pi \Bigl(\xi \cdot X_{*_x} Y_x, \xi_0 \cdot X_{*_x} Y_x \Bigr) \;\;\mbox{implied by \pref{involution-prop}} \\
& = & \bigl(\xi - \xi_0\bigr) (Y_x, X_x).
\end{array}$$
\end{rmk}
We finally prove a lemma about conjugation of $2$-jets that proves useful when dealing with torsionless affine connections (cf.~\sref{Kobayashi}). Let $f : U \subset M \to M$ be a local diffeomorphism of $M$ such that for some $x \in U$, we have $f_{*_x} = I_x : T_xM \to T_xM$. Then the map
$$f_{**_{X_x}} - I : T_{X_x}TM \to T_{X_x}TM : \X \mapsto f_{**_{X_x}}(\X) - \X$$
\begin{enumerate}
\item[-] vanishes on $T^p_{X_x}TM$ since $({f_{**}}_{X_x} - I) \circ i^p_{X_x} = 0$,
\item[-] takes value in $T^p_{X_x}TM$ since $p_{*_{X_x}}\circ ({f_{**}}_{X_x} - I) = 0$ (cf.~(\ref{rel-f**})).
\end{enumerate}
Hence there is a linear map $d^2f(X_x) : T_xM \to T_xM$ such that $({f_{**}}_{X_x} - I)$ coincides with the composition
$$T_{X_x}TM \stackrel{p_{*_{X_x}}}{\longrightarrow} T_xM \stackrel{d^2f(X_x)}{\longrightarrow} T_xM \stackrel{i^p_{X_x}}{\longrightarrow} T_{X_x}TM.$$
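In local coordinates this map is explicit. Writing $D^2f$ for the second derivative of a coordinate expression of $f$, using $f_{*_x} = I_x$ (so $Df(x) = I$), and denoting by $(Y, \X)$ the last two sets of coordinates of a vector in $T_{X_x}TM$, one finds
$$\bigl({f_{**}}_{X_x} - I\bigr)(Y, \X) = \bigl(0, D^2f(x)(Y, X_x)\bigr),$$
whence $d^2f(X_x) = D^2f(x)(\,\cdot\,, X_x)$. In particular $d^2f(X_x)$ depends linearly on $X_x$.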
\begin{lem}\label{sec-der-commutation}
Consider $2$-jets $j^2_xf, j^2_xg, j^2_yh \in \b^{(2)}(\PP(M))$ such that $p(j^2_xf) = I_x$ and $p(j^2_xg) = \iota \circ p(j^2_yh)$. Then
$$d^2 (g \circ f \circ h)(X_y) = g_{*_{x}} \circ d^2f({h_*}_y X_y) \circ h_{*_y} + d^2 (g \circ h)(X_y).$$ In particular, if $g_{*_x} = -I_x = h_{*_x}$, then
$$d^2 (g \circ f \circ h)(X_x) = - d^2f(X_x) + d^2 (g \circ h)(X_x).$$
\end{lem}
\noindent{\bf Proof}. The proof is just a short verification.
\begin{multline*}
\Bigl({(g \circ f \circ h)_{**}}_{X_x} - I\Bigr) \\
\begin{array}{ccl}
& = & \Bigl. {g_{**}}_{{h_*}_x(X_x)} \circ {f_{**}}_{{h_*}_x(X_x)} \circ {h_{**}}_{X_x} - I \\
& = & {g_{**}}_{{h_*}_x(X_x)} \circ \Bigl({f_{**}}_{{h_*}_x(X_x)} - I \Bigr) \circ {h_{**}}_{X_x} + {g_{**}}_{{h_*}_x(X_x)} \circ {h_{**}}_{X_x} - I \\
& = & {g_{**}}_{{h_*}_x(X_x)} \circ \Bigl(i^p_{{h_*}_x(X_x)} \circ d^2f({h_*}_x(X_x)) \circ {p_*}_{{h_*}_x(X_x)} \Bigr) \circ {h_{**}}_{X_x} \\
& & \Bigl. + i^p_{X_x} \circ d^2 (g \circ h)(X_x) \circ {p_*}_{X_x} \\
& = & i^p_{X_x} \circ \Bigl( {g_*}_{h(x)} \circ d^2f({h_*}_x(X_x)) \circ {h_*}_x + d^2 (g \circ h)(X_x)\Bigr) \circ {p_*}_{X_x}
\end{array}
\end{multline*}
\cqfd
\section{Third tangent bundle}\label{third-der}
Consider the third tangent bundle, denoted by $T^3M$ and defined to be the tangent bundle of the total space of $T^2M$.
Its elements will be denoted by ``frak'' letters like ${\mathfrak X}$. As described below, it is endowed with three canonical projections onto $T^2M$, giving $T^3M$ three distinct vector bundle structures over $T^2M$, three natural projections onto $TM$ and one onto $M$. It contains three vertical inclusions of $T^2M$ and six vertical inclusions of $T^2M \oplus T^2M$, and admits three involutions, each permuting two of the vector bundle structures and permuting the inclusions of $T^2M$ and $T^2M \oplus T^2M$. \\
\noindent
{\bf Vector bundle structures~:} \\
$\bullet$ $p : T^3M \to T^2M : \IX = \frac{d\mathbb Z_t}{dt}|_{t = 0} \mapsto p(\IX) = \mathbb Z_0$. The fiberwise addition is denoted by $+ : T^3M \times_{(p,p)}T^3M \to T^3M$ and the scalar multiplication by a real $a$ by $m_a : T^3M \to T^3M : \IX \mapsto m_a(\IX)= a \IX$. \\
$\bullet$ $p_* : T^3M \to T^2M : \IX = \frac{d\mathbb Z_t}{dt}|_{t = 0} \mapsto p_*(\IX) = \frac{dp(\mathbb Z_t)}{dt}|_{t = 0}$. The $p_*$-fiberwise addition is the differential of the fiberwise addition $+$ on $T^2M$~:
$$+_* : T^3M\times_{(p_*, p_*)}T^3M \to T^3M : \Bigl(\frac{d\mathbb Z_t}{dt}\Bigl|_{t=0}, \frac{d\mathbb Z'_t}{dt} \Bigr|_{t=0}\Bigr) \mapsto \frac{d(\mathbb Z_t + \mathbb Z'_t)}{dt} \Bigr|_{t=0},$$
where we assume without loss of generality that $p(\mathbb Z_t) = p(\mathbb Z'_t)$ for all $t$. Similarly, the scalar multiplication by a real $a$ is the differential of the scalar multiplication $m_a$ on $T^2M$~:
$$m_{a*} : T^3M \to T^3M : \IX = \frac{d\mathbb Z_t}{dt}\Bigl|_{t=0} \mapsto m_{a*} (\IX) = \frac{dm_a(\mathbb Z_t)}{dt}\Bigl|_{t=0}.$$
$\bullet$ $p_{**} : T^3M \to T^2M : \IX = \frac{d\mathbb Z_t}{dt}|_{t = 0} \mapsto p_{**}(\IX) = \frac{dp_*(\mathbb Z_t)}{dt}|_{t = 0}$. The $p_{**}$-fiberwise addition and scalar multiplication by a real $a$ are denoted respectively by $+_{**}$ and $m_{a**}$ and are the differentials of the $p_*$-fiberwise addition $+_*$ and scalar multiplication $m_{a*}$ on $T^2M$. \\
Let ${\mathfrak X} = \frac{d\mathbb Z_t}{dt}|_{t = 0}$ in $T^3M$, with $t \mapsto \mathbb Z_t$ a path in $T^2M$. Set $X_t = p(\mathbb Z_t)$, $Y_t = p_*(\mathbb Z_t)$, $x_t = p(X_t) = p(Y_t)$. Then
$$\begin{array}{lll}
\Bigl.p({\mathfrak X}) = \mathbb Z_0 &p \circ p ({\mathfrak X}) = X_0 & p_* \circ p ({\mathfrak X}) = Y_0 \\
\Bigl.p_*({\mathfrak X}) = \frac{dX_t}{dt}|_{t=0} \stackrel{\rm not}{=} \Y \quad & p \circ p_* ({\mathfrak X}) = p(\Y) = X_0 \quad & p_* \circ p_* ({\mathfrak X}) = p_*(\Y) \stackrel{\rm not}{=} Z \\
\Bigl.p_{**}({\mathfrak X}) = \frac{dY_t}{dt}|_{t=0} \stackrel{\rm not}{=} \X & p \circ p_{**} ({\mathfrak X}) = p(\X) = Y_0 & p_* \circ p_{**} ({\mathfrak X}) = p_*(\X) \stackrel{\rm not}{=} Z \\
\end{array}$$
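Concretely, in the local coordinates induced on $T^3M = T(T^2M)$ by coordinates $(x^i, X^i, Y^i, \X^i)$ on $T^2M$, one may write ${\mathfrak X} = (x, X, Y, \X; \dot x, \dot X, \dot Y, \dot \X)$, the dotted block being the derivative at $t = 0$ of the coordinates of the underlying path in $T^2M$. The three projections then read
$$p({\mathfrak X}) = (x, X, Y, \X), \qquad p_*({\mathfrak X}) = (x, X, \dot x, \dot X), \qquad p_{**}({\mathfrak X}) = (x, Y, \dot x, \dot Y),$$
in accordance with the table above.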
\begin{figure}
\caption{A picture of $T^3M$}
\end{figure}
A comment about this picture~: we think of $T^3M$ as the set of tangent vectors to the union, over all $X_x \in TM$, of the tangent spaces $T_{X_x}TM$, which are represented on the picture above as $2$-planes sticking out ``horizontally''. The dotted lines indicate that this copy of $TM$ does not really lie in $T^3M$; it is added to the picture in order to better represent the $p_*$ projections of the vectors in $T^2M$. \\
The projections satisfy the following relations
$$p \circ p = p \circ p_* \qquad p_* \circ p = p \circ p_{**} \qquad p_* \circ p_* = p_* \circ p_{**}.$$
For conciseness we denote these three projections $p \circ p, p_* \circ p, p_* \circ p_* : T^3M \to TM$ by $p_1$, $p_2$, $p_3$ respectively. These commutation relations are summarized in the following diagram, where the maps form the edges of a cube resting on a vertex (the altitude of a vertex depends on the number of derivatives)~:
\hspace{3cm}
\xymatrix{
& T^3M \ar[ld]_{p_*} \ar@{-->}[d]^{p} \ar[rd]^{p_{**}} & \\
T^2M \ar[d]_p \ar[rd]_<<<<{p_*} & T^2M \ar@{-->}[ld]^<<<{p} \ar@{-->}[rd] _<<<{p_*}& T^2M \ar[ld]^<<<{p_*} \ar[d]^{p}\\
TM \ar[rd]_p & TM \ar[d]^p & TM \ar[ld]^p \\
&M&
}
Observe also that each projection $p, p_*, p_{**} : T^3M \to T^2M$ is linear for the vector bundle structures associated to the two other projections. More precisely, there are the following homomorphisms of vector bundles~:
\begin{equation}\label{peti}
\begin{array}{ll}
\Bigl.p : (T^3M, p_*) \to (T^2M, p) & p : (T^3M, p_{**}) \to (T^2M, p_*) \\
\Bigl.p_* : (T^3M, p) \to (T^2M, p) & p_* : (T^3M, p_{**}) \to (T^2M, p_*)\\
\Bigl.p_{**} : (T^3M, p) \to (T^2M, p) & p_{**} : (T^3M, p_*) \to (T^2M, p_*)
\end{array}
\end{equation}
\noindent
{\bf Horizontal inclusions and projections~:} Dual to the three projections $p, p_*, p_{**} : T^3 M \to T^2M$, there are three injections $i, i_*, i_{**} : T^2M \to T^3M$ whose images are the three different zero-sections for the vector bundle structures associated to $p, p_*, p_{**}$ respectively. As $i : N \to TN$ denotes the canonical injection of a manifold into its tangent bundle as the zero section, $i_*$ is the differential of $i : TM \to T^2M$ and $i_{**}$ is the second differential of $i : M \to TM$. Each inclusion $i_o$ is a vector bundle morphism for the same structures as the corresponding projection $p_o$. The images of $i, i_*, i_{**}$ are denoted respectively by $0_{T^2M}, {0_*}_{T^2M}, {0_{**}}_{T^2M}$ and the image of a vector $\X$ by $0_\X, {0_*}_\X, {0_{**}}_\X$. Moreover, we have the following relations expressing that the cube is also commutative if one adds the arrows corresponding to $i$, $i_*$ and $i_{**}$~:
$$\begin{array}{lllll}
i \circ i = i_* \circ i && i \circ i_* = i_{**} \circ i && i_* \circ i_* = i_{**} \circ i_*
\end{array}$$
on $TM$ and
$$\begin{array}{lllll}
p \circ i = id && p \circ i_* = i \circ p && p \circ i_{**} = i_* \circ p \\
p_* \circ i = i \circ p && p_* \circ i_* = id && p_* \circ i_{**} = i_* \circ p_* \\
p_{**} \circ i = i \circ p_* && p_{**} \circ i_* = i_* \circ p_* && p_{**} \circ i_{**} = id.
\end{array}$$
on $T^2M$. Let us introduce some more notation~:
\begin{enumerate}
\item[-] $i \circ i = i_* \circ i \stackrel{\rm not}{=} I_1$,
\item[-] $i \circ i_* = i_{**} \circ i \stackrel{\rm not}{=} I_2$,
\item[-] $i_* \circ i_* = i_{**} \circ i_* \stackrel{\rm not}{=} I_3$,
\item[-] $i \circ i \circ i \stackrel{\rm not}{=}{\bf i}$.
\end{enumerate}
Notice that ${\bf i}$ coincides with any other inclusion of $M$ into $T^3M$ built from the various $i, i_*, i_{**}$'s. For convenience we will denote by $e$, $e_*$, $e_{**}$ the projections $i \circ p$, $i_* \circ p_*$, $i_{**} \circ p_{**}$ respectively which send a vector onto the zero vector in its $p$, $p_*$ or $p_{**}$-fiber respectively. The following relations are an easy consequence of (\ref{peti})~:
$$\begin{array}{lllll}
p \circ e = p && p \circ e_* = e \circ p && p \circ e_{**} = e_* \circ p \\
p_* \circ e = e \circ p_* && p_* \circ e_* = p_* && p_* \circ e_{**} = e_* \circ p_* \\
p_{**} \circ e = e \circ p_{**} && p_{**} \circ e_* = e_* \circ p_{**} && p_{**} \circ e_{**} = p_{**}. \\
\end{array}$$
Furthermore, the projections $e$, $e_*$ and $e_{**}$ commute, and the composition of two such maps is a new projection onto the image of some inclusion $I_j$ of $TM$. More precisely, set
\begin{enumerate}
\item[-] $e \circ e_* \stackrel{\rm not}{=} E_1$,
\item[-] $e \circ e_{**} \stackrel{\rm not}{=} E_2$,
\item[-] $e_* \circ e_{**} \stackrel{\rm not}{=} E_3$,
\item[-] $e \circ e_* \circ e_{**} \stackrel{\rm not}{=} {\bf e}$.
\end{enumerate}
Then $E_j$ (respectively ${\bf e}$) is the projection of $T^3M$ onto the image of $I_j$ (respectively ${\bf i}$) and it satisfies~:
\begin{equation}\label{Eandp}
\begin{array}{lll}
p \circ E_1 = e \circ p & p_* \circ E_1 = e \circ p_* & p_{**} \circ E_1 = \ee \circ p_{**} \\
p \circ E_2 = e_* \circ p & p_* \circ E_2 = \ee \circ p_* & p_{**} \circ E_2 = e \circ p_{**} \\
p \circ E_3 = \ee \circ p & p_* \circ E_3 = e_* \circ p_* & p_{**} \circ E_3 = e_* \circ p_{**}.
\end{array}
\end{equation}
\noindent
{\bf Vertical inclusions of $T^2M$~:} There are three distinct vertical inclusions of $T^2M$ that parameterize the three transverse intersections
$$\begin{array}{lll}
V_1 = p^{-1}(0_{TM}) \cap p_*^{-1}(0_{TM}) \\
V_2 = p_*^{-1}({0_*}_{TM}) \cap p_{**}^{-1}({0_*}_{TM}) \\
V_3 = p^{-1}({0_*}_{TM}) \cap p_{**}^{-1}(0_{TM})
\end{array}$$
$$\begin{array}{lll}
\Biggl. i^p_{0_{TM}} : T^2M \stackrel{\sim}{\longrightarrow} V_1 = T^p_{0_{TM}}(T^2M) & : & \V \mapsto \displaystyle{\frac{d (t \V)}{dt}\Bigl|_{t=0}} \\
\Biggl. (i^p_{0_M})_{*} : T^2M \stackrel{\sim}{\longrightarrow} V_2 = T(T^p_{0_M}TM) & : & \displaystyle{\V = \frac{d V_t}{dt}\Bigl|_{t=0} \mapsto
\frac{d(i^p_{0_M}(V_t) )}{dt} \Bigl|_{t=0}} \\
\Biggl. i^{p_*}_{{0_*}_{TM}} : T^2M \stackrel{\sim}{\longrightarrow} V_3 = T^{p_*}_{{0_*}_{TM}}(T^2M) & : & \V \mapsto \displaystyle{\frac{d (m_{t*} (\V))}{dt}\Bigl|_{t=0},}
\end{array}$$
where the second one is indeed the differential of the vertical inclusion $i^p_{0_M}$ of $TM$ into $T^2M$. These maps satisfy the following relations~:
$$\begin{array}{lll}
\Bigl.p \circ i^p_{0_{TM}} = e & p_* \circ i^p_{0_{TM}} = e & p_{**} \circ i^p_{0_{TM}} = i^p_{0_M} \circ p_* \\
\Bigl.p \circ (i^p_{0_M})_{*} = i^p_{0_M} \circ p & p_* \circ (i^p_{0_M})_{*} = e_* & p_{**} \circ (i^p_{0_M})_{*} = e_* \\
\Bigl.p \circ i^{p_*}_{{0_*}_{TM}} = e_* & p_* \circ i^{p_*}_{{0_*}_{TM}} = i^p_{0_M} \circ p & p_{**} \circ i^{p_*}_{{0_*}_{TM}} = i \circ p_*,
\end{array}$$
and are vector bundle morphisms in different ways~:
\begin{equation}\label{vert-incl}
\begin{array}{lll}
i^p_{0_{TM}} : (T^2M, p) \to (T^3M, \left\{\begin{array}{c} p \\ p_*\end{array}\right.) & i^p_{0_{TM}} : (T^2M, p_*) \to (T^3M, p_{**}) \\
(i^p_{0_{M}})_* : (T^2M, p) \to (T^3M, p) & (i^p_{0_{M}})_* : (T^2M, p_*) \to (T^3M, \left\{\begin{array}{c} p_* \\ p_{**} \end{array}\right.) \\
i^{p_*}_{{0_*}_{TM}} : (T^2M, p) \to (T^3M, p_*) & i^{p_*}_{{0_*}_{TM}} : (T^2M, p_*) \to (T^3M, \left\{\begin{array}{c} p \\ p_{**} \end{array}\right.).
\end{array}
\end{equation}
The presence of the bracket indicates that on the image of the inclusion at hand, the two linear structures coincide.
\begin{rmk} The other three intersections $p^{-1}(0_{TM}) \cap p_*^{-1}({0_*}_{TM})$, $p_*^{-1}({0_*}_{TM}) \cap p_{**}^{-1}(0_{TM})$ and $p^{-1}({0_*}_{TM}) \cap p_{**}^{-1}(0_{TM})$ could also be considered, but we do not need them here. A difference is that a vector $\IX$ in $p^{-1}(0_{TM}) \cap p_*^{-1}({0_*}_{TM})$ automatically belongs to $p^{-1}(0_{0_M})$.
\end{rmk}
\noindent
{\bf Vertical inclusion of $TM$~:} $T^3M$ supports also a vertical inclusion of $TM$~:
$$I : TM \stackrel{\sim}{\longrightarrow} T^3M : V_x \mapsto \frac{d}{dt} \Bigl(t \frac{d(sV_x)}{ds}\Bigl|_{s=0}\Bigr)\Bigl|_{t=0},$$
defined by pre-composing any vertical inclusions $i^p_{0_{TM}}$, $i^{p_*}_{{0_*}_{TM}}$ or $(i^p_{0_M})_{*}$ with the vertical inclusion $i^p_{0_M}$. It is a vector bundle morphism between $TM$ and all three vector bundle structures on $T^3M$. Moreover~:
$$
p \circ I = p_* \circ I = p_{**} \circ I = i \circ i \circ p,
$$
where in the right-hand side $p$ denotes the projection $TM \to M$.
\noindent
{\bf Vertical inclusions of $T^2M \oplus T^2M$~:} The various ``kernels'' $p^{-1}(0_X)$, $p^{-1}({0_*}_X)$, $p_*^{-1}(0_X)$, $p_*^{-1}({0_*}_X)$, $p_{**}^{-1}(0_X)$, $p_{**}^{-1}({0_*}_X)$ admit the following parameterizations by $T^2M \oplus T^2M$ (where the direct sum is either with respect to the projection $p$ or to the projection $p_*$, as indicated below)~:
\begin{enumerate}
\item[1)] $\I_p = I_p^{TM} : T^2M \times_{(p, p)}T^2M \to p^{-1}(0_{TM})$,
$$\I_p (\Y_{X}, \V_{X}) = i_{*_{X}}(\Y_{X}) + i^p_{0_{TM}}(\V_{X}).$$
\item[2)] $\I_p^* : T^2M \times_{(p, p_*)}T^2M \to p^{-1}({0_*}_{TM})$,
$$\I_p^* (\X_{X}, \V^{X}) = i_{**_{X}}(\X_{X}) + i^{p_*}_{{0_*}_{TM}}(\V^{X}).$$
\item[3)] $\I_{p_*} = I_{p_*}^{TM} : T^2M \times_{(p,p)} T^2M \to p_*^{-1}(0_{TM})$,
$$\I_{p_*} (\mathbb Z_{X}, \V_{X}) = i(\mathbb Z_X) +_* i^p_{0_{TM}}(\V_X).$$
\item[4)] $\I_{p_*}^* = (I_p)_* : T^2M \times_{(p_*,p_*)} T^2M \to p_*^{-1}({0_*}_{TM})$,
$$\I_{p_*}^*(\X^{X}, \V^{X}) = i_{**} (\X^X) +_* (i^p_{0_M})_* (\V^X).$$
\item[5)] $\I_{p_{**}} : T^2M \times_{(p_*,p_*)} T^2M \to p_{**}^{-1}(0_{TM})$,
$$\I_{p_{**}} (\Z^{X}, \V^{X}) = i(\Z^X) +_{**} i^{p_*}_{{0_*}_{TM}}(\V^X).$$
\item[6)] $\I^*_{p_{**}} = (I_{p_*})_* : T^2M \times_{(p_*, p_*)} T^2M \to p_{**}^{-1}({0_*}_{TM})$,
$$\I^*_{p_{**}} (\Y^X, \V^X) = i_* (\Y^X) +_{**} (i^p_{0_M})_* (\V^X).$$
\end{enumerate}
The rule to form these vertical inclusions can be described as follows~: if $j$ stands for nothing, for $*$ or for $**$, and $k$ stands for nothing or for $*$, then
$$p_j^{-1}\Bigl(\im (i_k)\Bigr) = \im (i_{l(j,k)}) +_{j} \Bigl(p_j^{-1}(\im i_k) \cap p_{l(j,k)}^{-1}(\im i_{l(j,k)})\Bigr),$$
where $l(j, k)$ is so that $p_j$ realizes a morphism between $(T^3M, p_{l(j,k)})$ and $(T^2M, p_k)$. \\
Observe that for $p_i = p$ or $p_*$, $i=1,2$, the direct sum $T^2M \times_{(p_1, p_2)} T^2M$ is naturally a vector bundle of rank $4n$ over $TM$ for the projection $p_1 = p_2$, but also a vector bundle of rank $3n$ over $TM \times_{(p,p)}TM$ for the projection $\hat{p}_1 \times \hat{p}_2$, where $\hat{p}_i$ denotes $p_*$ if $p_i = p$ and $p$ otherwise. With respect to these bundle structures, each inclusion of a direct sum $T^2M \oplus T^2M$ is a vector bundle morphism in two ways~:
\begin{enumerate}
\item[-] $\I_p : (T^2M \times_{(p,p)} T^2M, \left\{\begin{array}{l} p=p \\ p_* \times p_*\end{array}\right.) \to (T^3M, \left\{\begin{array}{l} p \\ p_{**}\end{array}\right.)$
\item[-] $\I_p^* : (T^2M \times_{(p,p_*)} T^2M, \left\{\begin{array}{l} p=p_* \\ p_* \times p \end{array}\right.) \to (T^3M, \left\{\begin{array}{l} p \\ p_* \end{array}\right.)$
\item[-] $\I_{p_*} : (T^2M \times_{(p,p)} T^2M, \left\{\begin{array}{l} p=p \\ p_* \times p_* \end{array}\right.) \to (T^3M, \left\{\begin{array}{l} p_* \\ p_{**} \end{array}\right.)$
\item[-] $\I_{p_*}^* : (T^2M \times_{(p_*,p_*)} T^2M, \left\{\begin{array}{l} p_*=p_* \\ p \times p \end{array}\right.) \to (T^3M, \left\{\begin{array}{l} p_* \\ p \end{array}\right.)$
\item[-] $\I_{p_{**}} : (T^2M \times_{(p_*,p_*)} T^2M, \left\{\begin{array}{l} p_*=p_* \\ p \times p \end{array}\right.) \to (T^3M, \left\{\begin{array}{l} p_{**} \\ p_* \end{array}\right.)$
\item[-] $\I^*_{p_{**}} : (T^2M \times_{(p_*, p_*)} T^2M, \left\{\begin{array}{l} p_*=p_* \\ p \times p \end{array}\right.) \to (T^3M, \left\{\begin{array}{l} p_{**} \\ p \end{array}\right.)$
\end{enumerate}
\noindent
{\bf Affine structures over $T^2M \oplus T^2M$~:} Any choice of two projections $p_1, p_2$ amongst $p, p_*, p_{**} : T^3M \to T^2M$, yields an affine fibration $p_1 \times p_2 : T^3M \to T^2M \oplus T^2M$. Altogether this provides $T^3M$ with three affine fibration structures over some fiber-product of $T^2M$ with itself~:
$$\begin{array}{lllll}
\p_1 \stackrel{\rm not}{=} p \times p_* & : & T^3M & \to & T^2M \times_{(p, p)} T^2M \\
\p_2 \stackrel{\rm not}{=} p_* \times p_{**} & : & T^3M & \to & T^2M \times_{(p_*, p_*)} T^2M \\
\p_3 \stackrel{\rm not}{=} p_{**} \times p & : & T^3M & \to & T^2M \times_{(p, p_*)} T^2M
\end{array}$$
whose respective fibers $\p_i^{-1}(\X_1, \X_2)$ admit two distinct affine structures (one for each factor of the projection $\p_i$), modeled on the fiber of either $p$ or $p_* : T^2M \to TM$. A fiber $(p_1 \times p_2)^{-1}(\X_1, \X_2)$, endowed with the affine structure induced by $p_i$, will be denoted by $((p_1 \times p_2)^{-1}(\X_1, \X_2), p_i)$. It is modeled on the vector space $p_1^{-1}(\X_1) \cap p_2^{-1}(p_2(e_o(\IX)))$, where $\IX$ is any element of the fiber and $e_o$ coincides with $e$, $e_*$ or $e_{**}$ depending on whether $p_1$ is $p$, $p_*$ or $p_{**}$. Thus
\begin{enumerate}
\item[-] $\bigl(\p_1^{-1}(\Z, \Y), p\bigr)$ is modeled on $p^{-1}(\Z) \cap p_*^{-1}(0_X)$,
\item[-] $\bigl(\p_1^{-1}(\Z, \Y), p_*\bigr)$ is modeled on $p_*^{-1}(\Y)\cap p^{-1}(0_X)$,
\item[-] $\bigl(\p_2^{-1}(\Y, \X), p_*\bigr)$ is modeled on $ p_*^{-1}(\Y)\cap p_{**}^{-1}({0_*}_Z)$,
\item[-] $\bigl(\p_2^{-1}(\Y, \X), p_{**}\bigr)$ is modeled on $p_{**}^{-1}(\X) \cap p_*^{-1}({0_*}_Z)$,
\item[-] $\bigl(\p_3^{-1}(\X, \Z), p_{**}\bigr)$ is modeled on $ p_{**}^{-1}(\X) \cap p^{-1}({0_*}_Y)$,
\item[-] $\bigl(\p_3^{-1}(\X, \Z), p\bigr)$ is modeled on $p^{-1}(\Z)\cap p_{**}^{-1}(0_Y)$.
\end{enumerate}
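As a simple illustration (with a chart convention that serves only this paragraph), write an element of $T^3M = T(T(TM))$ as $\IX = (x, y, X, Y; \dot x, \dot y, \dot X, \dot Y)$, so that
$$p(\IX) = (x, y, X, Y), \qquad p_*(\IX) = (x, y, \dot x, \dot y), \qquad p_{**}(\IX) = (x, X, \dot x, \dot X).$$
One reads off $p(p(\IX)) = p(p_*(\IX)) = (x,y)$, $p_*(p(\IX)) = p(p_{**}(\IX)) = (x,X)$ and $p_*(p_*(\IX)) = p_*(p_{**}(\IX)) = (x,\dot x)$. Moreover, a $\p_1$-fiber leaves exactly the coordinates $(\dot X, \dot Y)$ free, a $\p_2$-fiber the coordinates $(Y, \dot Y)$, and a $\p_3$-fiber the coordinates $(\dot y, \dot Y)$; each fiber is thus an affine space of dimension $2n$, as it should be for a fibration modeled on the fibers of $p$ or $p_* : T^2M \to TM$.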
Let us describe explicitly the three canonical inclusions of $T^2M$ parameterizing the various affine fibers passing through a given element ${\mathfrak X} \in T^3M$. They are denoted by the symbols $A^{\IX}_{(\p_i,p_j)}$, where $p_j$ is one of the factors of $\p_i$, indicating that we consider the $\p_i$-fiber of $\IX$ endowed with the affine structure given by the projection $p_j$. Notice that each such inclusion is also obtained by translating one of the vertical inclusions $i^p_{0_{TM}}, i^{p_*}_{{0_*}_{TM}}, (i^p_{0_M})_*$ of $T^2M$ along one of the inclusions $i, i_*, i_{**}$. More precisely,
$$\begin{array}{lllllllll}
A^{\IX}_{(\p_1, p)} & : & T_XTM & \to & T^3M & : & \V_X & \mapsto & \IX + \Bigl( e(\IX) +_* i^p_{0_X}(\V_X)\Bigr) \\
A^{\IX}_{(\p_1, p_*)} & : & T_XTM & \to & T^3M & : & \V_X & \mapsto & \IX +_* \Bigl( e_*(\IX) + i^p_{0_X}(\V_X)\Bigr) \\
A^{\IX}_{(\p_2, p_*)} & : & T^ZTM & \to & T^3M & : & \V^{Z} & \mapsto & \IX +_* \Bigl(e_* (\IX) +_{**} (i^p_{0_M})_* (\V^Z) \Bigr) \\
A^{\IX}_{(\p_2, p_{**})} & : & T^ZTM & \to & T^3M & : & \V^{Z} & \mapsto & \IX +_{**} \Bigl(e_{**} (\IX) +_* (i^p_{0_M})_* (\V^Z)\Bigr) \\
A^{\IX}_{(\p_3, p_{**})} & : & T^YTM & \to & T^3M & : & \V^{Y} & \mapsto & \IX +_{**} \Bigl( e_{**}(\IX) + i^{p_*}_{i_*(X)}(\V^{Y})\Bigr) \\
A^{\IX}_{(\p_3, p)} & : & T^YTM & \to & T^3M & : & \V^Y & \mapsto & \IX + \Bigl( e(\IX) +_{**} i^{p_*}_{i_*(X)}(\V^Y)\Bigr)
\end{array}$$
\noindent
\begin{rmks} \
\begin{enumerate}
\item[-] The placement of the parentheses above matters, because shifting them generally produces sums of elements of $T^3M$ that do not belong to a common fiber of any of the vector bundle structures on $T^3M$. Nevertheless, whenever displacing the parentheses does yield a sensible expression, it is guaranteed to agree with the initial one.
\item[-] Since $A^{\IX}_{(\p_1, p)} = A^{\IX}_{(\p_1, p_*)}$, $A^{\IX}_{(\p_2, p_*)} = A^{\IX}_{(\p_2, p_{**})}$ and $A^{\IX}_{(\p_3, p_{**})} = A^{\IX}_{(\p_3, p)}$, we may remove the second subscript $p_j$ from the notation and speak of the three maps $A^{\IX}_{\p_i}$, $i = 1, 2, 3$.
\end{enumerate}
\end{rmks}
Dually, there are three projection maps $\Pi_i$ from $T^3M \times_{(\p_i, \p_i)} T^3M$ to $T^2M$ defined by
\begin{equation}\label{proj123}
\Pi_i(\IX^1, \IX^2) = \U \iff \IX^1 = A^{\IX^2}_{\p_i} (\U)
\end{equation}
\ \\
\noindent
{\bf Affine structure modeled on $TM$~:} There is yet another structure of affine fibration on $T^3M$, obtained by considering all three projections $p$, $p_*$ and $p_{**}$. Denote by ${\mathcal P}(M)$ the image of the map $\p = p \times p_* \times p_{**} : T^3M \to T^2M \times T^2M \times T^2M$, that is the set of triples $(\X_1, \X_2, \X_3)$ of vectors in $T^2M$ such that
$$p(\X_1) = p(\X_2) \qquad p_*(\X_1) = p(\X_3) \qquad p_*(\X_2) = p_*(\X_3).$$
The map
$$\begin{array}{llcll}
\p & : & T^3M & \to & {\mathcal P}(M) \\
&&{\mathfrak X} & \mapsto & (p({\mathfrak X}), p_*({\mathfrak X}), p_{**}({\mathfrak X}))
\end{array}$$
is an affine fibration with typical fiber modeled on $T_xM$. For each element $\IX$ in $T^3M$, with $p \circ p \circ p (\IX) = x$, there is a parameterization of its $\p$-fiber by $T_xM$ that admits six different expressions~:
\begin{equation}\label{param-P-fiber}
A^{\IX}_{\p}(V_x) = \left\{
\begin{array}{l}
\IX + \Bigl( e(\IX) +_* \bigl(e_*(e(\IX)) +_{**} I(V_x)\bigr) \Bigr) \\
\IX + \Bigl( e(\IX) +_{**} \bigl(e_{**}(e(\IX)) +_{*} I(V_x)\bigr)\Bigr)\\
\IX +_* \Bigl( e_*(\IX) + \bigl(e(e_*(\IX)) +_{**} I(V_x)\bigr) \Bigr) \\
\IX +_* \Bigl( e_*(\IX) +_{**} \bigl( e_{**}(e_*(\IX)) + I(V_x) \bigr)\Bigr)\\
\IX +_{**} \Bigl( e_{**}(\IX) + \bigl(e(e_{**}(\IX)) +_* I(V_x)\bigr) \Bigr) \\
\IX +_{**} \Bigl( e_{**}(\IX) +_* \bigl( e_*(e_{**}(\IX)) + I(V_x) \bigr)\Bigr)\\
\end{array}\right.
\end{equation}
Let us explain the first equality of (\ref{param-P-fiber}). First of all, $\p(I(V_x)) = (\ii(x), \ii(x), \ii(x))$ and $p_{**}(e_*(e(\IX))) = \ee(p_{**}(\IX)) = \ii(x)$ (see (\ref{Eandp})) imply that the sum $+_{**}$ makes sense. Adding $I(V_x)$ does not change the value of $p_*$, so
$$p_*\Bigl(e_*(e(\IX)) +_{**} I(V_x)\Bigr) = p_* (e_* (e (\IX))) = p_* ( e (\IX)).$$
Hence the sum $+_*$ makes sense as well. Furthermore,
$$p\Bigl(e(\IX) +_* \bigl(e_*(e(\IX)) +_{**} I(V_x) \bigr)\Bigr) = p(e(\IX)) + p(e_*(e(\IX))) = p(\IX) + e(p(\IX)) = p(\IX),$$
so that the third sum $+$ is well-defined. The other expressions are treated similarly.\\
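In coordinates (with the illustrative chart convention $\IX = (x, y, X, Y; \dot x, \dot y, \dot X, \dot Y)$, $p(\IX) = (x,y,X,Y)$, $p_*(\IX) = (x,y,\dot x,\dot y)$, $p_{**}(\IX) = (x,X,\dot x,\dot X)$, assumed here only for this remark), the triple $\p(\IX)$ determines every coordinate of $\IX$ except $\dot Y$. The $\p$-fiber through $\IX$ is therefore parameterized by $\dot Y \in \mathbb{R}^n \cong T_xM$, and the six expressions of (\ref{param-P-fiber}) all reduce to
$$A^{\IX}_{\p}(V_x) = (x, y, X, Y; \dot x, \dot y, \dot X, \dot Y + v), \qquad V_x = (x, v),$$
which makes the equality of the six right-hand sides transparent.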
In particular, there is a map~:
\begin{equation}\label{Pi}
\Pi: T^3M \times_{(\p, \p)} T^3M \to TM : (\IX^1, \IX^2) \mapsto \Pi(\IX^1, \IX^2),
\end{equation}
such that $\Pi(\IX^1, \IX^2) = V_x$ if $\IX^1 = A^{\IX^2}_{\p}(V_x)$. It satisfies
$$\Pi(\IX^1, \IX^2) = \Pi(\IX^1 +_o \IX, \IX^2 +_o \IX)$$
where $+_o$ denotes either $+$, $+_*$ or $+_{**}$ and $p_o(\IX) = p_o(\IX^i) \in T^2M$ for the corresponding projection $p_o$. For an element $\IX \in T^3M$ that admits any one of the following descriptions
$$\IX = \left\{ \begin{array}{ll}
e(\IX) +_* \bigl(e_*(e(\IX)) +_{**} I(V_x)\bigr) & e(\IX) +_{**} \bigl(e_{**}(e(\IX)) +_{*} I(V_x)\bigr) \\
e_*(\IX) + \bigl(e(e_*(\IX)) +_{**} I(V_x)\bigr) & e_*(\IX) +_{**} \bigl( e_{**}(e_*(\IX)) + I(V_x) \bigr) \\
e_{**}(\IX) + \bigl(e(e_{**}(\IX)) +_* I(V_x)\bigr) & e_{**}(\IX) +_* \bigl( e_*(e_{**}(\IX)) + I(V_x) \bigr),
\end{array}\right.$$
we set
\begin{equation}\label{pi-bis}
\Pi(\IX) = \Pi (\IX, e(\IX)) = V_x.
\end{equation}
\noindent{\bf Involutions.} On $T^3M$, there are three natural involutive automorphisms, each permuting two of the three vector bundle structures. The first one is the natural involution $\kappa^{TM}$ of the second tangent bundle $T^2N$ of the manifold $N=TM$. It is denoted by either $\kappa_1$ or $\kappa$. The second one is the differential $\kappa_*^{M}$ of the involution $\kappa^M$ of $T^2M$. It is denoted by $\kappa_2$ or $\kappa_*$. The third one is the conjugate of $\kappa$ by $\kappa_*$ and is denoted by either $\kappa_3$ or $\kappa'$. Thus $\kappa_3 = \kappa_* \circ \kappa \circ \kappa_*$. These three involutions correspond to the three involutive automorphisms of the cube obtained by reflection relative to the planes that contain the two vertices $T^3M$ and $M$. They generate the group --- isomorphic to $S_3$ --- of ``level-preserving" automorphisms of the cube resting on its vertex $M$. More precisely,
\begin{equation}\label{kappa-proj}
\begin{array}{lll}
p \circ \kappa = p_* & p_* \circ \kappa = p & p_{**} \circ \kappa = \kappa \circ p_{**}\\
p \circ \kappa_* = \kappa \circ p & p_* \circ \kappa_* = p_{**} & p_{**} \circ \kappa_* = p_* \\
p \circ \kappa' = \kappa \circ p_{**} & p_* \circ \kappa' = \kappa \circ p_*& p_{**} \circ \kappa' = \kappa \circ p.
\end{array}
\end{equation}
The first line follows directly from the corresponding properties (\ref{prop-kappa}) of the involution $\kappa^{TM}$ and \rref{rem-f**-kappa}. The second line is obtained by differentiating the relations (\ref{prop-kappa}) for $\kappa^M$. The third line follows from the first two. More is true~: each involution is a vector bundle isomorphism between two of the three vector bundle structures on $T^3M$~:
\begin{enumerate}
\item[-] $\kappa : (T^3M, p) \stackrel{\sim}{\longrightarrow} (T^3M, p_*)$ over $\id_{T^2M}$,
\item[-] $\kappa : (T^3M, p_{**}) \stackrel{\sim}{\longrightarrow} (T^3M, p_{**})$ over $\kappa$,
\item[-] $\kappa_* : (T^3M, p_*) \stackrel{\sim}{\longrightarrow} (T^3M, p_{**})$ over $\id_{T^2M}$,
\item[-] $\kappa_* : (T^3M, p) \stackrel{\sim}{\longrightarrow} (T^3M, p)$ over $\kappa$,
\item[-] $\kappa' : (T^3M, p) \stackrel{\sim}{\longrightarrow} (T^3M, p_{**})$ over $\kappa$,
\item[-] $\kappa' : (T^3M, p_*) \stackrel{\sim}{\longrightarrow} (T^3M, p_*)$ over $\kappa$.
\end{enumerate}
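The relations (\ref{kappa-proj}) can also be read off in a chart (again with a coordinate convention that is only illustrative): writing $\IX = (x, y, X, Y; \dot x, \dot y, \dot X, \dot Y)$ with $p(\IX) = (x,y,X,Y)$, $p_*(\IX) = (x,y,\dot x,\dot y)$ and $p_{**}(\IX) = (x,X,\dot x,\dot X)$, one finds
$$\kappa(\IX) = (x, y, \dot x, \dot y; X, Y, \dot X, \dot Y), \qquad \kappa_*(\IX) = (x, X, y, Y; \dot x, \dot X, \dot y, \dot Y),$$
$$\kappa'(\IX) = (x, \dot x, X, \dot X; y, \dot y, Y, \dot Y).$$
For instance, $p(\kappa'(\IX)) = (x, \dot x, X, \dot X) = \kappa(p_{**}(\IX))$, in agreement with the third line of (\ref{kappa-proj}).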
Moreover, each involution is an isomorphism of one of the bundles $(T^3M, p_o)$ onto itself over $\kappa : T^2M \to T^2M$. Whence follows a series of equalities relating the involutions with the various inclusions and projections. \\
$$\begin{array}{lll}
\kappa \circ i = i_* & \kappa \circ i_* = i & \kappa \circ i_{**} = i_{**} \circ \kappa \\
\kappa_* \circ i = i \circ \kappa & \kappa_* \circ i_* = i_{**} & \kappa_* \circ i_{**} = i_* \\
\kappa' \circ i = i_{**} \circ \kappa & \kappa' \circ i_* = i_* \circ \kappa & \kappa' \circ i_{**} = i \circ \kappa.
\end{array}$$
Whence
$$\begin{array}{lll}
\kappa \circ e = e_* \circ \kappa & \kappa \circ e_* = e \circ \kappa & \kappa \circ e_{**} = e_{**} \circ \kappa \\
\kappa_* \circ e = e \circ \kappa_* & \kappa_* \circ e_* = e_{**} \circ \kappa_* & \kappa_* \circ e_{**} = e_*\circ \kappa_* \\
\kappa' \circ e = e_{**} \circ \kappa' & \kappa' \circ e_* = e_* \circ \kappa' & \kappa' \circ e_{**} = e \circ \kappa'.
\end{array}$$
The involutions also permute the vertical inclusions of $T^2M$~:
$$\begin{array}{lll}
\kappa \circ i^p_{0_{TM}} = i^p_{0_{TM}} & \kappa \circ (i^p_{0_M})_* = i^{p_*}_{{0_*}_{TM}} & \kappa \circ i^{p_*}_{{0_*}_{TM}} = (i^p_{0_M})_* \\
\kappa_* \circ i^p_{0_{TM}} = i^{p_*}_{{0_*}_{TM}} \circ \kappa & \kappa_* \circ (i^p_{0_M})_* = (i^p_{0_M})_* & \kappa_* \circ i^{p_*}_{{0_*}_{TM}} = i^p_{0_{TM}} \circ \kappa \\
\kappa' \circ i^p_{0_{TM}} = (i^p_{0_M})_* \circ \kappa & \kappa' \circ (i^p_{0_M})_* = i^p_{0_{TM}} \circ \kappa & \kappa' \circ i^{p_*}_{{0_*}_{TM}} = i^{p_*}_{{0_*}_{TM}}.
\end{array}$$
We may now easily deduce the action of the involutions on the vertical inclusions of $T^2M \oplus T^2M$~:
$$\begin{array}{llllll}
\kappa \circ \I_p = \I_{p_*} & \kappa_* \circ \I_p = \I_p^* \circ (\id \times \kappa) & \kappa' \circ \I_p = \I^*_{p_{**}} \circ (\kappa \times \kappa) \\
\kappa \circ \I_p^* = \I^*_{p_*} \circ (\kappa \times \id) & \kappa_* \circ \I_p^* = \I_p \circ (\id \times \kappa) & \kappa' \circ \I_p^* = \I_{p_{**}} \circ (\kappa \times \id) \\
\kappa \circ \I_{p_*} = \I_p & \kappa_* \circ \I_{p_*} = \I_{p_{**}} \circ (\kappa \times \kappa) & \kappa' \circ \I_{p_*} = \I_{p_*}^* \circ (\kappa \times \kappa) \\
\kappa \circ \I_{p_*}^* = \I_p^* \circ (\kappa \times \id) & \kappa_* \circ \I_{p_*}^* = \I^*_{p_{**}} & \kappa' \circ \I_{p_*}^* = \I_{p_*} \circ (\kappa \times \kappa) \\
\kappa \circ \I_{p_{**}} = \I^*_{p_{**}} & \kappa_* \circ \I_{p_{**}} = \I_{p_*} \circ (\kappa \times \kappa) & \kappa' \circ \I_{p_{**}} = \I_p^* \circ (\kappa \times \id) \\
\kappa \circ \I_{p_{**}}^*= \I_{p_{**}} & \kappa_* \circ \I_{p_{**}}^*= \I^*_{p_*} & \kappa' \circ \I_{p_{**}}^*= \I_p \circ (\kappa \times \kappa)
\end{array}$$
Finally, the involutions fix the vertical inclusion of $TM$~:
$$\kappa_o \circ I = I,$$
for $\kappa_o = \kappa$, $\kappa_*$ or $\kappa'$. As a consequence, the relations (\ref{param-P-fiber}) imply
\begin{equation}\label{i-kappa-bis}
\Pi\Bigl(\IX_1, \IX_2\Bigr) = \Pi\Bigl(\kappa_o(\IX_1), \kappa_o(\IX_2)\Bigr).
\end{equation}
\section{Structure of $\b^{(1,1,1)}_{nh}(\PP(M))$}\label{b111}
As to $\b^{(1,1,1)}_{nh}(\PP(M))$, its elements are called $(1,1,1)$-jets and are of the type $\xi = j^1_x j^1_\centerdot b_\centerdot$, where $(b_{x'})_{x' \in U_x}$ is a smooth family of local bisections of $\b^{(1)}(\PP(M))$ parameterized by the elements $x'$ of a neighborhood $U_x$ of $x$ in $M$. We consider the local bisection $j^1_\centerdot b_\centerdot : x' \mapsto j^1_{x'} b_{x'}$ of $\b^{(1,1)}_{nh}(\PP(M))$ and its first order jet $j^1_xj^1_{\centerdot} b_{\centerdot}$ at $x$.
\begin{figure}
\caption{The family $(b_{x'})_{x' \in U_x}$ of local bisections.}
\end{figure}
There are three natural projections from $\b^{(1,1,1)}_{nh}(\PP(M))$ to $\b^{(1,1)}_{nh}(\PP(M))$ denoted by $p$, $p_*$ and $p_{**}$ (cf.~\nref{bkl}). They admit the following description~:
\begin{enumerate}
\item[-] $p : \xi = j^1_x j^1_\centerdot b_\centerdot \mapsto (j^1_\centerdot b_\centerdot)(x) = j^1_x b_x$,
\item[-] $p_* : \xi \mapsto j^1_x(p \circ j^1_\centerdot b_\centerdot) = j^1_x(b_\centerdot(\centerdot))$,
\item[-] $p_{**} : \xi \mapsto j^1_x(p_* \circ j^1_\centerdot b_\centerdot) = j^1_x (j^1_\centerdot (p \circ b_\centerdot)) = j^1_x j^1_\centerdot (b^0_\centerdot)$.
\end{enumerate}
Remembering that the data of an element $\xi = j^1_x j^1_\centerdot b_\centerdot$ of $\b^{(1,1,1)}_{nh}(\PP(M))$ is equivalent to that of the plane
$$D(\xi) = {(j^1_\centerdot b_\centerdot)}_{*_x} (T_xM) \subset T_{j^1_x b_{x}}\b^{(1,1)}_{nh}(\PP(M)),$$
the projections $p_*$ and $p_{**}$ are just the differentials of the projections $p, p_* : \b^{(1,1)}_{nh}(\PP(M)) \to \b^{(1)}(\PP(M))$, i.e.~:
$$D(p_*(\xi)) = p_*(D(\xi)) \qquad D(p_{**}(\xi)) = (p_*)_*(D(\xi)).$$
Furthermore, the projections $p$, $p_*$ and $p_{**}$ satisfy the same relations as the corresponding projections from $T^3M$ to $T^2M$~:
\begin{enumerate}
\item[-] $p \circ p = p \circ p_* : \xi = j^1_x j^1_\centerdot b_\centerdot \mapsto b_x(x)$,
\item[-] $p_* \circ p = p \circ p_{**} : \xi = j^1_x j^1_\centerdot b_\centerdot \mapsto j^1_x b^0_x$,
\item[-] $p_* \circ p_* = p_* \circ p_{**} : \xi = j^1_x (j^1_\centerdot b_\centerdot) \mapsto j^1_x (b^0_\centerdot(\centerdot))$.
\end{enumerate}
Altogether we obtain a cube resting on a vertex whose edges consist of groupoid morphisms~: \\
\hspace{0.5cm}
$$\xymatrix{
& \b^{(1,1,1)}_{nh}(\PP(M)) \ar[ld]_{p_*} \ar@{-->}[d]^{p} \ar[rd]^{p_{**}} & \\
\b^{(1,1)}_{nh}(\PP(M)) \ar[d]_p \ar[rd]_<<<<{p_*} & \b^{(1,1)}_{nh}(\PP(M)) \ar@{-->}[ld]^<<<{p} \ar@{-->}[rd] _<<<{p_*}& \b^{(1,1)}_{nh}(\PP(M)) \ar[ld]^<<<{p_*} \ar[d]^{p}\\
\b^{(1)}(\PP(M)) \ar[rd]_p & \b^{(1)}(\PP(M)) \ar[d]^p & \b^{(1)}(\PP(M)) \ar[ld]^p \\
&M&
}$$
\begin{dfn}\label{b111h} Denote by $\b^{(1,1,1)}(\PP(M))$ the set of $(1,1,1)$-jets for which the three projections onto $\b^{(1,1)}_{nh}(\PP(M))$ coincide as well as the three projections onto $\b^{(1)}(\PP(M))$. In other terms
\begin{multline}
\b^{(1,1,1)}(\PP(M)) = \Bigl\{\xi \in \b^{(1,1,1)}_{nh}(\PP(M)) \;\Bigr|\; \\
p(\xi) = p_*(\xi) = p_{**}(\xi) \in \b^{(1,1)}(\PP(M)) \Bigr\}.
\end{multline}
\end{dfn}
When $\xi = j^1_xj^1_\centerdot b_\centerdot$ lies in $\b^{(1,1,1)}(\PP(M))$, we may assume that $b_{x'}(x') = b_x(x')$ and that $b_{x'}$ is tangent to $\e$ at $x'$.
\begin{figure}
\caption{Here $b_x(\centerdot) = b_{\centerdot}(\centerdot)$ and each $b_{x'}$ is tangent to $\e$ at $x'$.}
\end{figure}
Of course a genuine $3$-jet belongs to $\b^{(1,1,1)}(\PP(M))$ but $\b^{(1,1,1)}(\PP(M))$ contains $(1,1,1)$-jets that are not $3$-jets. \\
Remember the holonomic distribution on $\b^{(1,1)}_{nh}(\PP(M))$ introduced in \dref{def-e}~:
$$\e^{\b^{(1)}(\PP(M))}_\xi \stackrel{\rm not}{=} \e^{(1,1)}_\xi = p_{*_\xi}^{-1}(D(\xi)).$$
\begin{lem} An element $\xi$ in $\b^{(1,1,1)}_{nh}(\PP(M))$ belongs to $\b^{(1,1,1)}(\PP(M))$ if and only if $p(\xi) \in \b^{(1,1)}(\PP(M))$ and
$$D(\xi) \subset \e^{(1,1)}_{p(\xi)} \cap T_{p(\xi)}\b^{(1,1)}(\PP(M)).$$
\end{lem}
\noindent{\bf Proof}. The inclusion
$$D(\xi) \subset \e^{(1,1)}_{p(\xi)},$$
is equivalent to $p_*(D(\xi)) = D(p(\xi))$. Since $p_*(D(\xi)) = D(p_*(\xi))$, this amounts to $p_*(\xi) = p(\xi)$. Now, since the $(1,1)$-jets belonging to $\b^{(1,1)}(\PP(M))$ are characterized by the equality of their two projections onto $\b^{(1)}(\PP(M))$, the inclusion
$$D(\xi) \subset T_{p(\xi)}\b^{(1,1)}(\PP(M))$$
is equivalent to $p_*(D(\xi)) = (p_{*})_*(D(\xi))$, that is $D(p_*(\xi)) = D(p_{**}(\xi))$ or $p_*(\xi) = p_{**}(\xi)$.
\cqfd
\begin{lem}
A bisection $b$ of $\b^{(1,1)}(\PP(M))$ is everywhere tangent to $\e^{(1,1)}$ if and only if it is a $2$-jet extension~: $b = j^2f$.
\end{lem}
\noindent{\bf Proof}. \pref{e} already implies that if a local bisection $b$ of $\b^{(1,1)}_{nh}(\PP(M))$ is tangent to $\e^{(1,1)}$ then it is a holonomic bisection of $\b^{(1)}(\b^{(1)}(\PP(M)))$, that is $b = j^1b'$ for $b' = p \circ b$. Now the local bisection $b'$ is necessarily tangent to $\e$. Indeed,
$$T_{b'(x')}b' = (p \circ b)_{*_{x'}}(T_{x'}M) = p_{*_{b(x')}}\Bigl(b_{*_{x'}}(T_{x'}M)\Bigr) \subset p_{*_{b(x')}}(\e^{(1,1)}_{b(x')}) = D(b(x')) \subset \e.$$
Whence the bisection $b'$ is holonomic as well~: $b' = j^1(p \circ b') = j^1b'^0$ and thus $b = j^2b'^0$.
\cqfd
Recall the natural action $\rho^{(1,1,1)}$ of the groupoid $\b^{(1,1,1)}_{nh}(\PP(M)) \rightrightarrows M$ on the fibration $T^3M \to M$~:
$$\b^{(1,1,1)}_{nh}(\PP(M)) \times_{(\alpha, p^3)} T^3M \to T^3M : (\xi, {\mathfrak X}) \mapsto \xi \cdot {\mathfrak X},$$
where for $\xi = j^1_x b$, with $b \in \b_\ell(\b^{(1,1)}_{nh}(\PP(M)))$, and ${\mathfrak X} = \frac{d\X_t}{dt}\bigl|_{t=0}$,
$$\xi \cdot {\mathfrak X} = \frac{d(b \cdot \X_t)}{dt}\Bigr|_{t=0}.$$
We will now characterize, as has been done for $\rho^{(1,1)}$ in a previous section, the partial maps $T^3M \to T^3M$ arising from the action of a $(1,1,1)$-jet.
\begin{dfn}\label{el-def} A homomorphism of $T^3M$ is a bijective map $\ell : T^3_xM \to T^3_yM$, with $x, y \in M$, which is a vector bundle isomorphism $\ell : (T^3_xM, p_o) \to (T^3_yM, p_o)$ over a homomorphism of $T^2M$, denoted by $p_o(\ell)$, where $p_o$ stands for either $p$, $p_*$ or $p_{**}$. Moreover,
\begin{enumerate}
\item[-] $p \circ p(\ell) = p \circ p_*(\ell)$,
\item[-] $p_* \circ p(\ell) = p \circ p_{**}(\ell)$,
\item[-] $p_* \circ p_*(\ell) = p_* \circ p_{**}(\ell)$.
\end{enumerate}
Let $\EL(T^3M)$ denote the set of homomorphisms of $T^3M$. It is naturally endowed with a groupoid structure.
\end{dfn}
\begin{rmk}\label{cons-el-def} As a consequence of this definition, an element $\ell$ of $\EL(T^3M)$ preserves the various zero sections in $T^3M$. More precisely,
$$\ell \circ i = i \circ p(\ell) \quad \ell \circ i_* = i_* \circ p_*(\ell) \quad \ell \circ i_{**} = i_{**} \circ p_{**}(\ell).$$
In particular, a homomorphism $\ell$ of $T^3M$ preserves the three vertical copies $p^{-1}(0_{TM}) \cap p_*^{-1}(0_{TM})$, $p_*^{-1}({0_*}_{TM}) \cap p_{**}^{-1}({0_*}_{TM})$ and $p^{-1}({0_*}_{TM}) \cap p_{**}^{-1}(0_{TM})$ of $T^2M$. Moreover, the fact that the vertical inclusions $i^p_{0_{TM}}$, $i^{p_*}_{{0_*}_{TM}}$, $(i^p_{0_M})_*$ are vector bundle morphisms as specified in (\ref{vert-incl}) implies that a homomorphism of $T^3M$ acts on their images through homomorphisms of $T^2M$. Similarly, a homomorphism of $T^3M$ preserves the vertical inclusion $I$ of $TM$ and acts linearly on its image.
\end{rmk}
\begin{lem}\label{111-jetsasmaps}
Via the action $\rho^{(1,1,1)}$, the groupoid $\b^{(1,1,1)}_{nh}(\PP(M))$ is canonically identified with the subset of $\EL(T^3M)$, denoted by $\EL^{(1,1,1)}(T^3M)$, of homomorphisms $\ell : T^3_xM \to T^3_yM$ assuming the following specific values on the images of the vertical inclusions $i^p_{0_{TM}}, (i^p_{0_M})_*, i^{p_*}_{{0_*}_{TM}}$~:
\begin{equation}\label{values-on-vert}
\begin{array}{rll}
\ell \circ i^p_{0_{TM}} & = & i^p_{0_{TM}} \circ p(\ell) \\
\ell \circ (i^{p}_{0_M})_* & = & (i^{p}_{0_M})_* \circ p_*(\ell) \\
\ell \circ i^{p_*}_{{0_*}_{TM}} & = & i^{p_*}_{{0_*}_{TM}} \circ p(\ell).
\end{array}
\end{equation}
Given a $(1,1,1)$-jet $\xi$, the associated linear map $\ell = \ell_\xi = \rho^{(1,1,1)}(\xi, \centerdot) : T^3_xM \to T^3_yM$ satisfies $p(\ell) = \EL(p(\xi))$, $p_*(\ell) = \EL(p_*(\xi))$ and $p_{**}(\ell) = \EL(p_{**}(\xi))$.
\end{lem}
\begin{rmk}
A homomorphism $\ell : T^3_xM \to T^3_yM$ acts on the vertical inclusion $I : T_xM \to T^3_xM$ through $p(p(\ell)) = p(p_*(\ell))$, that is
$$\ell \circ I = I \circ p(p(\ell)).$$
This implies in particular that $\ell$ acts on the image of $A^{\IX}_{\p} : T_xM \to T^3M$, $\IX \in T^3M$ via $p \circ p(\ell)$ as well~:
\begin{equation}\label{action-sur-TM}
\ell \circ A^{\IX}_{\p} = A^{\ell(\IX)}_{\p} \circ \; p(p(\ell)).
\end{equation}
\end{rmk}
\begin{notas}
Let $\fL : \b^{(1,1,1)}_{nh}(\PP(M)) \to \EL(T^3M)$ denote the map $\xi \mapsto \ell_\xi$ and set
\begin{enumerate}
\item[-] $\EL^{(1,1,1)}_h(T^3M) = \fL( \b^{(1,1,1)}(\PP(M)))$,
\item[-] $\EL^{(3)}(T^3M) = \fL( \b^{(3)}_h(M))$.
\end{enumerate}
\end{notas}
The following extension to $T^3M$ of \lref{decomposition} is useful in order to prove \lref{111-jetsasmaps}.
\begin{lem}\label{decomposition-ordre-3} Let $\{X^1, ..., X^n\}$ be a local basis of sections of $TM$ and write a given $\IX \in T^3M$ as follows~:
$$\IX = \frac{d}{dt} \frac{d}{ds}\sum_{j=1}^na_j(t,s) X^j(\gamma(t,s)) \Bigl|_{s=0} \Bigr|_{t=0}.$$
Then $\IX$ admits the following expression as a linear combination of horizontal and vertical vectors~:
$$\begin{array}{c}
{\displaystyle \sideset{}{_{**}}\sum_{j=1}^n } {\displaystyle \Biggl\{\Biggl[m_{a_j**} X^j_{**_{Y_x}}Y_{*_x}Z_x + }
\Bigl[m_{a_j**} \Bigl( i \bigl(X^j_{*_x}Y_x \bigr)\Bigr) +_{**} m_{\partial_ta_j(0)**} \Bigl(i^{p_*}_{{0_*}_{TM}} \bigl( X^j_{*_x}Y_x \bigr)\Bigr) \Bigr] \Biggr]\\
+_* \Biggl[\Bigl\{ m_{a_j**} \Bigl( i_{*_{X^j_x}} \bigl(X^j_{*_{x}} Z_x \bigr) \Bigr) + \Bigl[ m_{a_j**} \Bigl({\bf i} \bigl( X^j_x \bigr) \Bigr) +_{**} m_{\partial_t a_j(0)} \Bigl(i_{*_{0_x}} \bigl(i^p_{0_M} (X^j_x) \bigr) \Bigr) \Bigr] \Bigr\} \\
\qquad +_{**} \; \Bigl\{ m_{\partial_s a_j(0)*} \Bigl( (i^p_{0_M})_* \bigl( X^j_{*_x} Z_x \bigr) \Bigr) + \Bigl[m_{\partial_s a_j(0)*} i \Bigl( i^p_{0_M} (X^j_x ) \Bigr) +_* \partial^2_{ts}a_j(0) \bigl( I ( X^j_x)\bigr) \Bigr] \Bigr\} \Biggr] \Biggr\} .
\end{array}$$
\end{lem}
\noindent{\bf Proof}. Set
\begin{enumerate}
\item[-] $a_j(t) = a_j(t,0)$,
\item[-] $a_j = a_j(0)$,
\item[-] $\partial_sa_j(t) = \frac{\partial a_j}{\partial s}(t,0)$,
\item[-] $\partial^2_{ts}a_j(0) = \frac{\partial^2 a_j}{\partial t \partial s}(0,0)$,
\item[-] $\gamma(t) = \gamma(t,0)$.
\end{enumerate}
Using \lref{decomposition} we compute $\X_t = \frac{d}{ds}\sum_{j=1}^n a_j(t,s)X^j(\gamma(t,s))|_{s=0}$~:
$$\begin{array}{c}
\displaystyle{\X_t = \sideset{}{_{*}}\sum_{j=1}^n \Bigl\{m_{a_j(t)*}X^j_{*_{\gamma(t)}} Y_t + \Bigl[ i\Bigl(a_j(t) X^j(\gamma(t)) \Bigr) +_* i^p_{0_M} \Bigl(\partial_sa_j(t) X^j(\gamma(t)) \Bigr) \Bigr] \Bigr\}},
\end{array}$$
where $Y_t = \frac{\partial \gamma(t,s)}{\partial s}|_{s=0}$. Now the vector $\IX$ is a sum of three types of vectors~:
\begin{multline}\label{IX}
\IX = \displaystyle{ \sideset{}{_{**}}\sum_{j=1}^n \Bigg\{ \frac{d}{dt} m_{a_j(t)*}X^j_{*_{\gamma(t)}} Y_t \Bigr|_{t=0} }\\
\displaystyle{ +_* \Biggl[ \frac{d}{dt} i\Bigl(a_j(t) X^j(\gamma(t)) \Bigr) \Bigr|_{t=0} } \\
\displaystyle{ +_{**} \frac{d}{dt} i^p_{0_M} \Bigl(\partial_sa_j(t) X^j(\gamma(t)) \Bigr) \Bigr|_{t=0} \Biggr] \Biggr\}. }
\end{multline}
The first term of (\ref{IX}) yields~:
$$\begin{array}{l}
\displaystyle{\frac{d}{dt} m_{a_j(t)*}X^j_{*_{\gamma(t)}} Y_t \Bigr|_{t=0} } \\
= \displaystyle{ \frac{d}{dt} m_{*} \Bigl( a_j(t), X^j_{*_{\gamma(t)}} Y_t \Bigr) \Bigr|_{t=0}}\\
= \displaystyle{ (m_*)_{*_{(a_j,X^j_{*_x}Y_x)}} \Bigl( \partial_ta_j(0), X^j_{**_{Y_x}}Y_{*_x}Z_x\Bigr)}\\
= \displaystyle{ (m_*)_{*} \Bigl( 0_{a_j}, X^j_{**_{Y_x}}Y_{*_x}Z_x\Bigr) + (m_*)_{*} \Bigl( \frac{d}{dt} a_j(0) + t\partial_ta_j(0)\Bigr|_{t=0}, 0_{X^j_{*_x}Y_{x}}\Bigr)}\\
= \displaystyle{ (m_*)_{*} \Bigl( 0_{a_j}, X^j_{**_{Y_x}}Y_{*_x}Z_x\Bigr) + \frac{d}{dt } m_* \Bigl( a_j + t \partial_ta_j(0), X^j_{*_x}Y_{x}\Bigr) \Bigr|_{t=0}}\\
= \displaystyle{ m_{a_j**} X^j_{**_{Y_x}}Y_{*_x}Z_x + \Bigl[ i \Bigl(m_{a_j*}X^j_{*_x}Y_x \Bigr) +_{**} i^{p_*}_{{0_*}_{TM}} \Bigl(m_{\partial_ta_j(0)*} X^j_{*_x}Y_x \Bigr) \Bigr]} \\
= \displaystyle{ m_{a_j**} X^j_{**_{Y_x}}Y_{*_x}Z_x + \Bigl[m_{a_j**} \Bigl( i \bigl(X^j_{*_x}Y_x \bigr)\Bigr) +_{**} m_{\partial_ta_j(0)**} \Bigl(i^{p_*}_{{0_*}_{TM}} \bigl( X^j_{*_x}Y_x \bigr)\Bigr) \Bigr], } \\
\end{array}$$
where $Y_{*_x}Z_x = \frac{dY_t}{dt}|_{t=0}$. In particular $Z_x = \frac{d\gamma(t)}{dt}|_{t=0}$. The second term yields~:
$$\begin{array}{l}
\displaystyle{ \frac{d}{dt} i\Bigl(a_j(t) X^j(\gamma(t)) \Bigr) \Bigr|_{t=0} }\\
= \displaystyle{ i_{*_{a_jX^j_x}} \Bigl[ m_{a_j*}X^j_{*_{x}} Z_x+ \Bigl( i(a_j X^j_x) +_* i^p_{0_M} \bigl(\partial_ta_j(0) X^j_x \bigr)\Bigr) \Bigr] }\\
= \displaystyle{ i_{*_{a_jX^j_x}} \Bigl(m_{a_j*}X^j_{*_{x}} Z_x \Bigr) + i_{*_{a_jX^j_x}} \Bigl( i(a_j X^j_x) +_* i^p_{0_M} \bigl(\partial_ta_j(0) X^j_x \bigr)\Bigr)}\\
= \displaystyle{ m_{a_j**} \Bigl( i_{*_{X^j_x}} \bigl(X^j_{*_{x}} Z_x \bigr) \Bigr) + i_{*_{a_jX^j_x}} \Bigl( m_{a_j*} \bigl(i(X^j_x)\bigr) +_* \partial_ta_j(0) i^p_{0_M} \bigl(X^j_x \bigr)\Bigr)}\\
= \displaystyle{ m_{a_j**} \Bigl( i_{*_{X^j_x}} \bigl(X^j_{*_{x}} Z_x \bigr) \Bigr) + \Bigl[ i_{*_{a_jX^j_x}} \Bigl(m_{a_j*} \bigl(i(X^j_x)\bigr) \Bigr) +_{**} i_{*_{0_x}} \Bigl(\partial_ta_j(0) i^p_{0_M} \bigl( X^j_x\bigr) \Bigr)\Bigr]}\\
= \displaystyle{ m_{a_j**} \Bigl( i_{*_{X^j_x}} \bigl(X^j_{*_{x}} Z_x \bigr) \Bigr) + \Bigl[ m_{a_j**} \Bigl({\bf i} \bigl( X^j_x \bigr) \Bigr) +_{**} \partial_t a_j(0) \Bigl(i_{*_{0_x}} \bigl(i^p_{0_M} (X^j_x) \bigl) \Bigr) \Bigr], }\\
\end{array}$$
and the third term yields~:
$$\begin{array}{l}
\displaystyle{ \frac{d}{dt} i^p_{0_M} \Bigl(\partial_sa_j(t) X^j(\gamma(t)) \Bigr) \Bigl|_{t=0}}\\
= \displaystyle{ \Bigl( (i^p_{0_M})_* \bigl( m_{\partial_s a_j(0)*} X^j_{*_x} Z_x \bigr) \Bigr) + (i^p_{0_M})_* \Bigl[ i\Bigl(\partial_s a_j(0) X^j_x \Bigr) +_* i^p_{0_M} \Bigl(\partial^2_{ts}a_j(0) X^j_x\Bigr) \Bigr] }\\
= \displaystyle{ m_{\partial_s a_j(0)*} \Bigl( (i^p_{0_M})_* \bigl( X^j_{*_x} Z_x \bigr) \Bigr) + \Bigl[ (i^p_{0_M})_* \Bigl(m_{\partial_s a_j(0)*} i \bigl( X^j_x \bigr) \Bigr) +_* (i^p_{0_M})_* \Bigl(\partial^2_{ts}a_j(0) i^p_{0_M} \bigl( X^j_x \bigr) \Bigr) \Bigr] }\\
= \displaystyle{ m_{\partial_s a_j(0)*} \Bigl( (i^p_{0_M})_* \bigl( X^j_{*_x} Z_x \bigr) \Bigr) + \Bigl[m_{\partial_s a_j(0)*} i \Bigl( i^p_{0_M} (X^j_x ) \Bigr) +_* \partial^2_{ts}a_j(0) \bigl( I ( X^j_x)\bigr) \Bigr]. }\\
\end{array}$$
\cqfd
\noindent
{\bf Proof of \lref{111-jetsasmaps}.} The first part of the proof consists in showing that the action of an element $\xi = j^1_xb = j^1_x j^1_{x'}b_{x'}$ of $\b^{(1,1,1)}_{nh}(\PP(M))$ on $T^3M$ is a homomorphism of $T^3M$. To handle the $p$-linearity, let $\IX_1$, $\IX_2$ belong to some $p$-fiber of $T^3_xM$ and let $a$ be a real number. Then if $Z_i$ denotes $p_* \circ p_*(\IX_i)$, $i = 1, 2$, we have
$$\begin{array}{cll}
j^1_xb \cdot \Bigl( a \IX_1 + \IX_2\Bigr) & = & \rho^{(1,1)}_* \Bigl( b_{*_x} (a Z_1 + Z_2), a\IX_1 + \IX_2\Bigr) \\
& = & a \rho^{(1,1)}_* \Bigl( b_{*_x} (Z_1), \IX_1 \Bigr) + \rho^{(1,1)}_* \Bigl( b_{*_x} (Z_2), \IX_2\Bigr).
\end{array}$$
Supposing instead that $\IX_1$ and $\IX_2$ belong to the same $p_*$-fiber, implying in particular that $Z_1 = Z_2$, we have
$$\begin{array}{cll}
j^1_xb \cdot \Bigl( m_{a*} \IX_1 +_* \IX_2\Bigr) & = & \rho^{(1,1)}_* \Bigl( b_{*_x} (Z_1), m_{a*} \IX_1 +_* \IX_2\Bigr) \\
& = & \displaystyle{\frac{d}{dt} \rho^{(1,1)}\Bigl( b (\gamma(t)), a\X_{1t} + \X_{2t}\Bigr) \Bigr|_{t=0}, }
\end{array}$$
where $\frac{d}{dt}\gamma(t)|_{t=0} = Z_1$. The $p_*$-linearity follows from the linearity of $\rho^{(1,1)}$. Supposing now that $\IX_1$ and $\IX_2$ belong to the same $p_{**}$-fiber, we see that
$$\begin{array}{cll}
j^1_xb \cdot \Bigl( m_{a**} \IX_1 +_{**} \IX_2\Bigr) & = & \rho^{(1,1)}_* \Bigl( b_{*_x} (Z_1), m_{a**} \IX_1 +_{**} \IX_2\Bigr) \\
& = & \displaystyle{\frac{d}{dt} \rho^{(1,1)}\Bigl( j^1_{\gamma(t)}b_{\gamma(t)}, m_{a*} \X_{1t} +_* \X_{2t}\Bigr) \Bigr|_{t=0}, } \\
& = & \displaystyle{\frac{d}{dt} \frac{d}{ds} \rho^{(1)}\Bigl( b_{\gamma(t)}(\gamma(t,s)), aX_{1ts} + X_{2ts}\Bigr) \Bigr|_{s=0}\Bigr|_{t=0}. }
\end{array}$$
The $p_{**}$-linearity thus follows from the linearity of $\rho^{(1)}$. \\
Now, let us show that the action of a $(1,1,1)$-jet takes the specific values (\ref{values-on-vert}) on the vertical copies of $T^2_xM$. Let $\xi = j^1_xb \in \b^{(1,1,1)}_{nh}(\PP(M))$ and $\V = \frac{dV_t}{dt}|_{t=0} \in T^2_xM$, then~:
$$\begin{array}{lll}
\xi\cdot i^p_{0_{TM}}(\V) & = & \displaystyle{j^1_x b \cdot \frac{d t\V}{dt} \Bigl|_{t=0}} = \displaystyle{\frac{d}{dt} b(x) \cdot t\V \Bigl|_{t=0}} = \displaystyle{\frac{d}{dt} t \bigl(b(x) \cdot \V \bigr) \Bigl|_{t=0}} \\
& = & \displaystyle{i^p_{0_{TM}} \Bigl(p(\xi) \cdot \V \Bigr)}, \\
\displaystyle{\xi\cdot (i^p_{0_{M}})_*(\V) } & = & \displaystyle{j^1_x b \cdot \frac{d \bigl(i^p_{0_M}(V_t)\bigr)}{dt} \Bigl|_{t=0}} = \displaystyle{\frac{d}{dt} b \cdot i^p_{0_M}(V_t) \Bigl|_{t=0}} = \displaystyle{\frac{d}{dt} i^p_{0_M}(p(b) \cdot V_t) \Bigl|_{t=0}} \\
& = & \displaystyle{(i^p_{0_{M}})_* \Bigl(p_*(\xi) \cdot \V \Bigr)}, \\
\displaystyle{\xi\cdot i^{p_*}_{{0_*}_{TM}}(\V) } & = & \displaystyle{j^1_x b \cdot \frac{d\bigl(m_{t*} \V\bigr)}{dt} \Bigl|_{t=0}} = \displaystyle{\frac{d}{dt} \bigl(b(x) \cdot m_{t*} \V \bigr)\Bigl|_{t=0}} = \displaystyle{\frac{d}{dt} m_{t*} \bigl(b(x) \cdot \V \bigr) \Bigl|_{t=0}} \\
& = & \displaystyle{i^{p_*}_{{0_*}_{TM}} \Bigl(p(\xi) \cdot \V \Bigr)}.
\end{array}$$
The main part of the proof consists in showing the surjectivity of $\fL$ onto $\EL^{(1,1,1)}(T^3M)$. So let $\ell : T^3M \to T^3M$ in $\EL$ be a homomorphism that satisfies (\ref{values-on-vert}); we will show that it coincides with the action of a $(1,1,1)$-jet. In order to prove this, we need to construct a family of linear maps
$$b_{x'}(x'') : T_{x''}M \to T_{b^0_{x'}(x'')}M,$$
with $(x', x'') \in \U = \cup_{x' \in U}\{x'\} \times U_{x'}$, where $U$ is a neighborhood of $x$ in $M$ and, for $x' \in U$, $U_{x'}$ is a neighborhood of $x'$ in $M$, that ``integrates'' $\ell$ in the sense that the action of the $(1,1,1)$-jet $\xi = j^1_x j^1_{x'} b_{x'}$ on $T^3M$ coincides with $\ell$. Let $\{X^1, ..., X^n\}$ be a local basis of vector fields defined on a neighborhood $U$ of $x$ in $M$. In view of the decomposition of any element $\IX$ of $T^3_xM$ described in \lref{decomposition-ordre-3}, together with \lref{decomposition} and \rref{cons-el-def}, it is sufficient to construct a family $b_{x'}(x'')$ for which the associated $(1,1,1)$-jet $\xi$ satisfies $\EL(p(\xi)) = p(\ell)$, $\EL(p_*(\xi)) = p_*(\ell)$ and $\EL(p_{**}(\xi)) = p_{**}(\ell)$ as well as
\begin{equation}\label{horiz-action}
\xi \cdot X^j_{**_{X^k_x}}X^k_{*_x}(T_xM) = \ell \Bigl(X^j_{**_{X^k_x}}X^k_{*_x}(T_xM)\Bigr),
\end{equation}
for all $j, k = 1, ..., n$. To achieve these conditions, first integrate the homomorphism $p_{**}(\ell)$ of $T^2M$ into a local bisection $b : U \to \b^{(1)}(\PP(M))$ $: x' \mapsto b(x') = j^1_{x'}\varphi_{x'}$ such that $\EL(j^1_xb) = p_{**}(\ell)$ (cf.~\lref{11-jetsasmaps}). Then consider a family of linear isomorphisms $b_{x}(x') : T_{x'}M \to T_{\varphi_x(x')}M$, $x' \in U$, such that $\EL(j^1_xb_x) = p(\ell)$ (once more \lref{11-jetsasmaps}). Now extend $b_x(x)$ into a family $b_{x'}(x') : T_{x'}M \to T_{\varphi_{x'}(x')}M$, $x' \in U$, such that $\EL (j^1_x(b_{\centerdot}(\centerdot))) = p_*(\ell)$. So far, we have ensured that if $b_x(x')$ and $b_{x'}(x')$ are further extended to a family $b_{x'}(x'') : T_{x''}M \to T_{\varphi_{x'}(x'')}M$, then the corresponding $(1,1,1)$-jet $\xi$ satisfies $\EL(p_o(\xi)) = p_o(\ell)$ for $p_o = p, p_*, p_{**}$. In order to guarantee the condition (\ref{horiz-action}), set
$$H^j_{x'} = X^j_{*_{x'}}(T_{x'}M) \qquad \mbox{and} \qquad \h^j = \cup_{x' \in U}H^j_{x'}.$$
Then $\h^j$ is a submanifold of $T^2M$ which is completely determined by the data of the family $X^j_{*_{x'}}X^k_{x'}$, $x' \in U$, $k=1, ..., n$ of bases of the various horizontal spaces $H^j_{x'}$. Consider now the image of the differential of $X^j_{*}X^k$, that is
$$X^j_{**_{X^k_x}}X^k_{*_x}(T_xM) \stackrel{\rm not}{=} T^j_k.$$
It is a horizontal $n$-plane in $T^3_xM$ tangent to $\h^j$ whose image under $\ell$ is a horizontal $n$-plane $\ell(T^j_k)$ in $T^3_yM$. For each pair $j,k = 1, ..., n$, let $E^j_k : V \to T^2M$ be a smooth section of $p^2 : T^2M \to M$ defined on a neighborhood $V$ of $y$ such that
\begin{enumerate}
\item[-] $(E^j_k)_{*_y} (T_yM) = \ell(T^j_k)$,
\item[-] $p \circ E^j_k(y') = b_{x'}(x') X^j(x')$, for $y' = \varphi_{x'}(x')$,
\item[-] $p_* \circ E^j_k(y') = j^1_{x'}(\varphi_\centerdot(\centerdot)) X^k_{x'}$, for $y' = \varphi_{x'}(x')$.
\end{enumerate}
The first condition implies in particular that $E^j_k(y) = p(\ell) (X^j_{*_x}X^k_x)$. Now for each $y'$ in $V$, let
$$J^j_{y'} = {\rm span} \Bigl\{E^j_k(y') ; k = 1, ..., n \Bigr\}\subset T_{b_{x'}(x') X^j_{x'}}TM.$$
It is a horizontal space because the vectors $p_*(E^j_k(y'))$ are linearly independent, and we may choose a smooth family $Y^j_{y'}$, $y' \in V$, of local vector fields such that
$$(Y^j_{y'})_{*_{y'}}(T_{y'}M) = J^j_{y'}.$$
For $y'$ sufficiently close to $y$, and $y''$ sufficiently close to $y'$, the vector fields $Y^j_{y'}(y'')$ form a basis of $T_{y''}M$. This allows us to define $b_{x'}(x'')$, for $x' \in U$, $x'' \in U_{x'}$, via
$$b_{x'}(x'') X^j_{x''} = Y^j_{b^0_x(x')}(b^0_{x'}(x'')).$$
This proves surjectivity of $\EL$. Injectivity follows from the effectiveness of the action $\rho^{(1,1,1)}$ (cf.~\rref{derived-action-of-pair-groupoid}).
\cqfd
\begin{figure}
\caption{A picture of the various actors of the proof}
\end{figure}
\begin{rmk}\label{vert-part} Denote by $\p(M)$ the subset of $\b^{(1,1)}_{nh}(\PP(M)) \times \b^{(1,1)}_{nh}(\PP(M)) \times \b^{(1,1)}_{nh}(\PP(M))$ consisting of the triples $(\xi_{1}, \xi_{2}, \xi_{3})$ for which $p(\xi_1) = p(\xi_2)$, $p_*(\xi_1) = p(\xi_3)$, $p_*(\xi_2) = p_*(\xi_3)$. It is the image of the projection $\p=p \times p_* \times p_{**} : \b^{(1,1,1)}_{nh}(\PP(M)) \to \b^{(1,1)}_{nh}(\PP(M)) \times \b^{(1,1)}_{nh}(\PP(M)) \times \b^{(1,1)}_{nh}(\PP(M))$. Now the map
$$\p : \b^{(1,1,1)}_{nh}(\PP(M)) \to \p(M)$$
is an affine bundle whose fiber over any triple $(\xi_1, \xi_2, \xi_3)$ is modeled on the set of trilinear maps $T_xM \times T_xM \times T_xM \to T_yM$, where $x = \alpha(\xi_i)$ and $y = \beta(\xi_i)$. Indeed, let $\xi_0, \xi$ be two elements in $\p^{-1}(\xi_{1}, \xi_{2}, \xi_{3})$, then
$$\xi - \xi_0 : T_xM \times T_xM \times T_xM \to T_yM : (X_x, Y_x, Z_x) \mapsto \Pi (\xi \cdot \IX, \xi_0 \cdot \IX),$$
where $\IX \in T^3M$ satisfies $p \circ p(\IX) = X_x$, $p_* \circ p(\IX) = Y_x$ and $p_* \circ p_*(\IX) = Z_x$, defines a trilinear map independent of the choice of $\IX$.
\end{rmk}
\noindent
{\bf Canonical Involutions:} \lref{111-jetsasmaps} allows us to transport to $\b^{(1,1,1)}(\PP(M))$ the involutions $\kappa$, $\kappa_*$ and $\kappa'$ on $T^3M$.
\begin{cor}
The expression
\begin{equation}\label{kappa-def}
\kappa_o(\xi) \cdot {\mathfrak X} = \kappa_o (\xi \cdot \kappa_o({\mathfrak X}))
\end{equation}
defines, for $\kappa_o = \kappa$, $\kappa_*$ or $\kappa'$, an involutive automorphism of the groupoid $\b^{(1,1,1)}(\PP(M))$ permuting two of the three fibrations. As in the case of $T^3M$, we have the relations:
$$\begin{array}{lll}
p \circ \kappa = p_* & p_* \circ \kappa = p & p_{**} \circ \kappa = \kappa \circ p_{**} \\
p \circ \kappa_* = \kappa \circ p & p_* \circ \kappa_* = p_{**} & p_{**} \circ \kappa_* = p_* \\
p \circ \kappa' = \kappa \circ p_{**} & p_* \circ \kappa' = \kappa \circ p_*& p_{**} \circ \kappa' = \kappa \circ p.\\
\end{array}$$
\end{cor}
\noindent{\bf Proof}. It suffices to prove that the right-hand side of (\ref{kappa-def}) defines a map $T^3_xM \to T^3_yM$ satisfying the hypotheses of \lref{111-jetsasmaps}. This follows from the properties of the various involutions on $T^3M$. In addition,
$$\begin{array}{ccl}
\kappa_o(\xi_1 \cdot \xi_2) \cdot \IX & = & \kappa_o\bigl((\xi_1 \cdot \xi_2) \cdot \kappa_o(\IX)\bigr) \\
& = & \kappa_o\bigl(\xi_1 \cdot (\xi_2 \cdot \kappa_o(\IX))\bigr)\\
& = & \kappa_o(\xi_1) \cdot \kappa_o(\xi_2 \cdot \kappa_o(\IX)) \\
& = & \kappa_o(\xi_1) \cdot \kappa_o(\xi_2) \cdot \IX.
\end{array}$$
\cqfd
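As an illustration, the first relation $p \circ \kappa = p_*$ can be derived directly from (\ref{kappa-def}); the computation below assumes (as used implicitly throughout) that the projections are equivariant with respect to the actions, i.e., $p(\xi \cdot \IX) = p(\xi) \cdot p(\IX)$ and $p_*(\xi \cdot \IX) = p_*(\xi) \cdot p_*(\IX)$:

```latex
$$\begin{array}{lll}
p\bigl(\kappa(\xi)\bigr) \cdot p(\IX) & = & p\bigl(\kappa(\xi) \cdot \IX\bigr)
  \;=\; p\Bigl(\kappa\bigl(\xi \cdot \kappa(\IX)\bigr)\Bigr)
  \;=\; p_*\bigl(\xi \cdot \kappa(\IX)\bigr) \\
& = & p_*(\xi) \cdot p_*\bigl(\kappa(\IX)\bigr)
  \;=\; p_*(\xi) \cdot p(\IX),
\end{array}$$
```

using the relations $p \circ \kappa = p_*$ and $p_* \circ \kappa = p$ on $T^3M$; since $p(\IX)$ runs over all of $T^2_xM$, this gives $p(\kappa(\xi)) = p_*(\xi)$.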
\begin{lem} The fixed point set of $\kappa$ (respectively $\kappa_*$) is $\b^{(2,1)}_h(M)$ (respectively $\b^{(1,2)}_h(M)$).
\end{lem}
\begin{rmk}\label{kappa-rem-bis} As for $(1,1,1)$-jets not in $\b^{(1,1,1)}(\PP(M))$, the expression (\ref{kappa-def}) does not in general define a $(1,1,1)$-jet (cf.~\rref{kappa-rem}). More precisely, we may define
\begin{enumerate}
\item[-] $\kappa(\xi)$ when $p(\xi) = p_*(\xi)$, which implies that $p_{**}(\xi) \in \b_h^{(1,1)}$.
\item[-] $\kappa_*(\xi)$ when $p_*(\xi) = p_{**}(\xi)$, which implies that $p(\xi) \in \b_h^{(1,1)}$.
\end{enumerate}
For an arbitrary element $\xi \in \b^{(1,1,1)}_{nh}(\PP(M))$, or even in $\EL(T^3M)$, one may define $\kappa(\xi)$ as the element of $\EL(T^3M)$ that satisfies
$$\kappa(\xi) \cdot \IX = \kappa(\xi \cdot \kappa(\IX)).$$
\end{rmk}
\begin{rmk}\label{kappa*-diff} Given a $(1,1,1)$-jet $\xi = j^1_xb$ for which $\kappa_*$ is defined, that is, $p_*(\xi) = p_{**}(\xi)$, the corresponding tangent plane $D(\xi)$ is contained in the tangent space to the subgroupoid $\b^{(1,1)}(\PP(M))$, and $\kappa_*$ coincides with the differential of $\kappa^M$, that is:
$$D(\kappa_*(\xi)) = (\kappa^M)_*(D(\xi)).$$
Equivalently,
$$\kappa_*(j^1_xb) = j^1_x (\kappa \circ b).$$
\end{rmk}
\end{document}
\begin{document}
\makeatletter
\newcommand{\rmnum}[1]{\romannumeral #1}
\newcommand{\Rmnum}[1]{\expandafter\@slowromancap\romannumeral #1@}
\makeatother
\preprint{APS/123-QED}
\title{Quantum algorithm for association rules mining}
\author{Chao-Hua Yu}
\affiliation{State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China}
\affiliation{State Key Laboratory of Cryptology, P.O. Box 5159, Beijing, 100878, China}
\author{Fei Gao}
\email{[email protected]}
\affiliation{State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China}
\author{Qing-Le Wang}
\affiliation{State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China}
\author{Qiao-Yan Wen}
\affiliation{State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China}
\date{\today}
\begin{abstract}
Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Its goal is to acquire the consumption habits of customers by discovering the relationships between items in a transaction database that has a large number of transactions and items. In this paper, we address ARM in the quantum setting and propose a quantum algorithm for the most computation-intensive process in ARM, i.e., finding the frequent 1-itemsets and 2-itemsets. In our algorithm, to mine the frequent 1-itemsets efficiently, we use the technique of amplitude amplification. To mine the frequent 2-itemsets efficiently, we introduce a new quantum state tomography scheme, i.e., pure-state-based tomography. It is shown that our algorithm can potentially offer polynomial speedup over the classical algorithm.
\begin{description}
\item[PACS numbers]
03.67.Dd, 03.67.Hk
\end{description}
\end{abstract}
\pacs{Valid PACS appear here}
\maketitle
\emph{Introduction.---}Quantum computing provides a paradigm that makes use of quantum mechanical principles, such as superposition and entanglement, to perform computing tasks in quantum systems (quantum computers) \cite{QCQI}. Just as classical algorithms run on classical computers, a quantum algorithm is a step-by-step procedure run on a quantum computer for solving a certain problem, which, more interestingly, is expected to outperform the classical algorithms for the same problem. As of now, various quantum algorithms have been put forward to solve a number of problems faster than their classical counterparts \cite{QA}, and they mainly fall into one of three classes \cite{PWShor}. The first class features the famous Shor's algorithm \cite{Shor} for large-number factoring and discrete logarithms, which offers exponential speedup over the classical algorithms for the same problems. The second class is represented by Grover's quantum search \cite{Grover} and its generalized version, amplitude amplification \cite{AA}, which achieve quadratic speedup over the classical search algorithm. The third class contains the algorithms for quantum simulation \cite{QS}, the original idea of which was suggested by Feynman \cite{F} to speed up the simulation of quantum systems using quantum computers.
In the past decade, quantum simulation has made great progress in efficient sparse Hamiltonian simulation \cite{BCK}, which underlies two important quantum algorithms: the quantum algorithm for solving linear systems of equations (the HHL algorithm) \cite{HHL} and quantum principal component analysis \cite{QPCA}. The former generates a pure quantum state encoding the solution of a linear system and can potentially achieve exponential speedup over the best classical algorithm for the same problem. The latter is an efficient quantum state tomography scheme for quantum states with low-rank or approximately low-rank density matrices, based on the technique of density matrix exponentiation. Inspired by these two algorithms, a number of quantum machine learning algorithms for big data have been proposed that potentially exhibit exponential speedup over their classical counterparts \cite{WBL,LMR,RML,CD}. For example, the quantum support vector machine was recently proposed for big data classification \cite{RML}. These quantum machine learning algorithms will evidently allow the tasks of big data mining to be accomplished more efficiently than by their classical counterparts.
In this letter, we address another important problem in big data mining, association rules mining (ARM), in the quantum setting. The goal of ARM is to acquire consumption habits by mining association rules from a large transaction database \cite{DM,Apriori,PCY}. More formally, given a transaction database consisting of a large number of transactions and items, the task of ARM is to discover association rules connecting two itemsets (an itemset is a set of items) $A$ and $B$ in the conditional implication form $A \Rightarrow B$, which implies that a customer who buys the items in $A$ also tends to buy the items in $B$. The core of ARM is to mine the itemsets that occur frequently in the transactions. In the classical regime, various algorithms for mining frequent itemsets have been proposed and well studied over the past decades \cite{DM}, the most famous one being the Apriori algorithm \cite{Apriori}. However, the information explosion today makes the databases to be processed extremely large, which poses great challenges to the computational capability of classical computers undertaking ARM with these algorithms. Therefore, it is of great significance to propose more efficient algorithms for ARM.
In practice, the processing cost of mining frequent 1-itemsets (a $k$-itemset is a set of $k$ items) and 2-itemsets dominates the total processing cost of mining all the frequent itemsets in ARM \cite{PCY}. Therefore, in this letter, we propose a quantum algorithm for mining frequent 1-itemsets and 2-itemsets. In our algorithm, the technique of amplitude amplification \cite{AA} is applied to efficiently mine frequent 1-itemsets. To efficiently mine frequent 2-itemsets, we introduce a new quantum state tomography scheme, pure-state-based tomography, which can be applied to efficiently reconstruct an approximately low-rank density matrix with nonnegative elements. It will be shown that this algorithm can potentially achieve polynomial speedup over the classical sampling algorithm.
\emph{Review of ARM.---}We first review some basic concepts and notation in ARM. Suppose a transaction database, the object ARM deals with, contains $N$ transactions $T=\{T_1,T_2,\cdots,T_N \}$, each of which is a subset of $M$ items $I=\{I_1,I_2,\cdots,I_M \}$, i.e., $T_i \subseteq I$. It can also be seen as an $N \times M$ binary matrix, denoted by $D$, in which the element $D_{ij}=1\ (0)$ means that the item $I_j$ is (not) contained in the transaction $T_i$.
The \emph{support} of an itemset $X$ is defined as the proportion of transactions in $T$ that contain all the items in $X$, i.e., $\supp(X)=\frac{|\{T_i| X \subseteq T_i\}|}{N}$. An association rule connects two disjoint itemsets $A$ and $B$ and is of the implication form $A \Rightarrow B$.
Its support (confidence) is defined as $\supp(A \Rightarrow B)=\supp(A\cup B)$ ($\conf(A \Rightarrow B)=\frac{\supp(A\cup B)}{\supp(A)}$). A rule is called frequent (confident) if its support (confidence) is not less than a prespecified threshold $min\_supp$ ($min\_conf$). The task of ARM is to find the rules $A \Rightarrow B$ that are both frequent and confident. This can be achieved in two phases: (1) find all the frequent itemsets $X$, defined by $\supp(X)\geq min\_supp$; (2) find all the confident rules $A \Rightarrow B$ such that $A\cup B=X$. Because the second phase is much less costly than the first, the core work of ARM lies in the first phase. Furthermore, the processing cost of discovering the frequent 1-itemsets and 2-itemsets, denoted by $F_1$ and $F_2$ respectively, dominates the total processing cost of the first phase. Therefore, reducing the cost of discovering $F_1$ and $F_2$ is of great significance.
Based on the Apriori property, which states that all nonempty subsets of a frequent itemset must also be frequent, mining $F_1$ and $F_2$ can be done in a level-wise manner. First, one computes the supports of all the items and picks out the frequent ones to constitute $F_1$. Second, one uses $F_1$ to generate the candidate 2-itemsets $C_2=F_1 \bowtie F_1=\{\{I_i,I_j\}|I_i,I_j \in F_1, I_i\neq I_j\}$, computes the supports of the itemsets in $C_2$, and then picks out the frequent ones to constitute $F_2$. In fact, computing the support of any 1-itemset or 2-itemset can be transformed into computing the inner product of two binary vectors. Suppose the transaction database corresponds to an $N\times M$ binary matrix $D=(\overrightarrow{d_1},\overrightarrow{d_2},\cdots,\overrightarrow{d_M})$; then the support of any 1-itemset $I_i$ can be computed as $S_{ii}=\frac{\overrightarrow{d_i}\cdot\overrightarrow{d_i}}{N}$ and the support of any 2-itemset $\{I_i,I_j\}$ as $S_{ij}=\frac{\overrightarrow{d_i}\cdot\overrightarrow{d_j}}{N}$. Therefore, the supports of all the 1-itemsets and 2-itemsets can be computed via $S=\frac{D^TD}{N}$, where $D^T$ is the transpose of the matrix $D$, and the supports of all the 1-itemsets (2-itemsets) correspond to the diagonal (off-diagonal) elements of $S$.
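The level-wise procedure above can be sketched classically in a few lines; the toy database $D$ and the threshold $min\_supp$ below are invented purely for illustration:

```python
import numpy as np

# Toy transaction database: N = 4 transactions, M = 3 items.
# D[i, j] = 1 iff transaction T_i contains item I_j.
D = np.array([[1, 1, 0],
              [1, 1, 1],
              [1, 0, 0],
              [0, 1, 1]])
N, M = D.shape
min_supp = 0.5  # illustrative threshold

# All 1- and 2-itemset supports at once: S = D^T D / N.
# Diagonal entries are 1-itemset supports, off-diagonal ones 2-itemset supports.
S = D.T @ D / N

# Frequent 1-itemsets F1 (item indices).
F1 = [j for j in range(M) if S[j, j] >= min_supp]

# Candidate 2-itemsets C2 = F1 join F1, filtered by support to give F2.
C2 = [(i, j) for i in F1 for j in F1 if i < j]
F2 = [(i, j) for (i, j) in C2 if S[i, j] >= min_supp]

print(F1, F2)  # here: [0, 1, 2] [(0, 1), (1, 2)]
```

Here item supports are $0.75$, $0.75$, $0.5$, and only the pairs $\{I_1,I_2\}$ and $\{I_2,I_3\}$ reach the threshold.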
\emph{Pure-state-based quantum state tomography.---}
Before giving the details of our quantum algorithm for mining $F_1$ and $F_2$, we first introduce a new quantum state tomography scheme, pure-state-based tomography, which will be used as a key subroutine in mining $F_2$. Quantum state tomography is the process of reconstructing the density matrix of an unknown quantum state by performing a series of measurements on a large number of copies of this state. In general, tomography on a $d$-dimensional quantum state requires $d^2$ measurement settings \cite{GLM}. However, in many cases, the density matrix of the state may be of low or approximately low rank \cite{QPCA,GLFBE}. That is, it can be approximately constructed from the $r\ll d$ largest eigenvalues and their corresponding eigenvectors in the spectral decomposition. In this case, two recently invented tomography techniques, quantum state tomography via compressed sensing \cite{GLM} and quantum principal component analysis \cite{QPCA}, can be applied. Both schemes require only $\mathcal{O}(rd\,\mathrm{poly}(\log d))$ measurement settings, which offers a significant improvement for large quantum systems.
However, all the above tomography schemes perform classical postprocessing on the measurement outcomes to reconstruct the classical description of the density matrix, and this takes time $\Omega(d^2)$ because $d^2$ elements of the density matrix need to be determined. Here we propose a new quantum state tomography scheme, named \emph{pure-state-based quantum state tomography}, that can potentially overcome this limit and obtain the elements of the density matrix more directly. In our scheme, we do not directly perform tomography on the state $\rho=\sum_{i,j=1}^d\rho_{ij}|i\rangle\langle j|$ but transform it into a pure state approximating $\frac{\sum_{i,j=1}^d\rho_{ij}|i\rangle|j\rangle}{\sqrt{\sum_{i,j=1}^d|\rho_{ij}|^2}}$. Once the pure state is created, one can perform measurements on it to reveal the information of $\rho_{ij}$ with bounded error. We first show how to prepare this pure state.
\begin{theorem}
Suppose a $d$-dimensional quantum state with density matrix $\rho=\sum_{i,j=1}^d\rho_{ij}|i\rangle\langle j|$ has eigenvalues $\lambda_j$ satisfying $\frac{1}{\kappa}\leq\lambda_j\leq1$ and can be generated in time $T_{\rho}$. Then a pure state $|\psi\rangle$ approximating $|\psi_{\rho}\rangle=\frac{\sum_{i,j=1}^d\rho_{ij}|i\rangle|j\rangle}{\sqrt{\sum_{i,j=1}^d|\rho_{ij}|^2}}$, in the sense that $\||\psi\rangle-|\psi_{\rho}\rangle\|\leq \epsilon$, can be created using $\mathcal{O}(\frac{\kappa^3}{\epsilon^3})$ copies of $\rho$ and time $\mathcal{O}(\frac{T_{\rho}\kappa^3}{\epsilon^3})$.
\end{theorem}
\begin{proof}
The idea for creating the pure state $|\psi\rangle$ is that performing the operation $\rho \otimes I$ on the state vector $\frac{\sum_{k=1}^{d}|k\rangle|k\rangle}{\sqrt{d}}$ yields
\begin{eqnarray}
\frac{\sum_{k=1}^{d}\{(\sum_{i,j=1}^{d}\rho_{ij}|i\rangle\langle j|)|k\rangle\}|k\rangle}{\sqrt{d}}=\frac{\sum_{i,k=1}^{d}\rho_{ik}|i\rangle |k\rangle}{\sqrt{d}}.
\label{eq:1}
\end{eqnarray}
Performing this operation can be achieved by the technique of improved phase estimation together with a controlled rotation operation, which has been applied in the HHL algorithm \cite{HHL} and in a number of quantum machine learning algorithms \cite{WBL,LMR,RML,CD}. The detailed steps are as follows:
1. Prepare three quantum registers in the initial state
\begin{eqnarray}
(\sqrt{\frac{2}{t}}\sum_{\tau=0}^{t-1}\sin(\frac{\pi(\tau+1/2)}{t})|\tau\rangle)(\frac{\sum_{k=1}^{d}|k\rangle|k\rangle}{\sqrt{d}}).
\label{eq:2}
\end{eqnarray}
2. Perform the controlled unitary operation $\sum_{\tau=0}^{t-1}|\tau\rangle\langle\tau|\otimes e^{\frac{-i\rho \tau t_0}{t}}$ on the first two registers. Here $t_0$ and $t$ are two parameters introduced to adjust the accuracy. Implementing the operation requires the technique of density matrix exponentiation and its controlled version \cite{QPCA}, which takes $t$ copies of $\rho$ and introduces error $\mathcal{O}(\frac{t_0^2}{t})$ \cite{QPCA,RML}. Thus, to ensure the error is within $\epsilon$, $t$ is set to $t=\mathcal{O}(\frac{t_0^2}{\epsilon})$.
3. Apply the Fourier transform on the first register to obtain a state close to $\frac{\sum_{j,k=1}^d|\lambda_j\rangle \langle u_j|k\rangle |u_j\rangle|k\rangle}{\sqrt{d}}$, where $\lambda_j$ and $|u_j\rangle$ are the eigenvalues and eigenvectors of $\rho$, i.e., $\rho=\sum_{j=1}^d\lambda_j |u_j\rangle\langle u_j|$. The error of the eigenvalue in the first register is within $\mathcal{O}(\frac{1}{t_0})$, and the induced error of the final state is within $\mathcal{O}(\frac{\kappa}{t_0})$ \cite{HHL}. Therefore, to ensure the final error is within $\epsilon$, $t_0$ should be set to $t_0=\mathcal{O}(\frac{\kappa}{\epsilon})$, and hence $t=\mathcal{O}(\frac{\kappa^2}{\epsilon^3})$.
4. Introduce an auxiliary qubit in the state $|0\rangle$ and perform the controlled unitary operation on the eigenvalue register and the auxiliary qubit to generate the state approximating
$\frac{\sum_{j,k=1}^d|\lambda_j\rangle \langle u_j|k\rangle |u_j\rangle|k\rangle(\sqrt{1-|C\lambda_j|^2}|0\rangle+C\lambda_j|1\rangle)}{\sqrt{d}}$,
where $C=\mathcal{O}\bigl((\max_{j}\lambda_j)^{-1}\bigr)$.
5. Erase the eigenvalue register; the resulting state is close to $\frac{\sum_{j,k=1}^d\langle u_j|k\rangle |u_j\rangle|k\rangle(\sqrt{1-|C\lambda_j|^2}|0\rangle+C\lambda_j|1\rangle)}{\sqrt{d}}$.
6. Measure the last qubit until obtaining the outcome $|1\rangle$. Then we obtain the state $|\psi\rangle$ close to
\begin{eqnarray}
\frac{\sum_{j,k=1}^d\lambda_j\langle u_j|k\rangle |u_j\rangle|k\rangle}{B}&=&\frac{(\rho \otimes I)\sum_{k=1}^{d}|k\rangle|k\rangle}{B}\nonumber \\
&=& \frac{\sum_{i,k=1}^{d}\rho_{ik}|i\rangle |k\rangle}{B},
\label{eq:3}
\end{eqnarray}
i.e., $|\psi_{\rho}\rangle$, where $B=\sqrt{\sum_{l=1}^d \lambda_l^2}=\sqrt{\sum_{i,j=1}^d|\rho_{ij}|^2}$. Obviously, this state vector is proportional to the vector in Eq.~(\ref{eq:1}). The probability of obtaining $|1\rangle$ is $\sum_{j=1}^{d}\frac{|C\lambda_j|^2}{d} = \Omega(\frac{1}{\kappa^2})$. Thus $\mathcal{O}(\kappa^2)$ measurements are needed to obtain this outcome with high probability. The technique of amplitude amplification \cite{AA} can be applied to reduce the number of repetitions to $\mathcal{O}(\kappa)$.
According to steps 3 and 6, the number of copies of $\rho$ required to create $|\psi\rangle$ scales as $\mathcal{O}(t\kappa)=\mathcal{O}(\frac{\kappa^3}{\epsilon^3})$, and thus the time complexity scales as $\mathcal{O}(\frac{T_{\rho}\kappa^3}{\epsilon^3})$. Here the time for simulating the SWAP operator (in step 2) for time $\frac{t_0}{t}$ is small enough to be neglected \cite{RML}.
\end{proof}
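The identity in Eq.~(\ref{eq:1}), on which the construction rests, is easy to check numerically. The following self-contained sketch (the dimension and the randomly generated $\rho$ are chosen only for illustration) verifies that applying $\rho \otimes I$ to $\sum_k |k\rangle|k\rangle/\sqrt{d}$ produces $\sum_{i,k}\rho_{ik}|i\rangle|k\rangle/\sqrt{d}$:

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)

# Random d-dimensional density matrix rho (Hermitian, positive, trace 1).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Maximally entangled vector sum_k |k>|k> / sqrt(d), as a length-d^2 array.
phi = np.eye(d).reshape(d * d) / np.sqrt(d)

lhs = np.kron(rho, np.eye(d)) @ phi    # (rho ⊗ I) acting on the vector
rhs = rho.reshape(d * d) / np.sqrt(d)  # sum_{i,k} rho_{ik} |i>|k> / sqrt(d)

assert np.allclose(lhs, rhs)
```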
However, for a state $\rho$ of low or approximately low rank, the lower bound $\frac{1}{\kappa}$ will be extremely small, or equivalently, $\kappa$ will be extremely large. According to the theorem above, this makes creating the pure state $|\psi\rangle$ very expensive in time. To overcome this problem, inspired by \cite{HHL,RML}, a constant $\epsilon_{eff} = \mathcal{O}(1)$ is chosen and only the eigenvalues satisfying $\epsilon_{eff} \leq\lambda_j\leq1$ are taken into account. In this case, in step 6, the probability of obtaining $|1\rangle$ will be $\Omega(\frac{\epsilon_{eff}^2}{d})$. Consequently, it will take $\mathcal{O}(\frac{\sqrt{d}}{\epsilon_{eff}^3\epsilon^3})$ copies of $\rho$ and time $\mathcal{O}(\frac{T_{\rho}\sqrt{d}}{\epsilon_{eff}^3\epsilon^3})$ to create $|\psi\rangle$.
After creating the pure state $|\psi\rangle$, one can perform measurements on it to reveal estimates of $\rho_{ij}$. Take as an example the case in which $\rho$ is approximately low-rank and all the $\rho_{ij}$ are nonnegative real numbers. One performs measurements in the computational basis on $|\psi\rangle$ and obtains the outcome $|i\rangle|j\rangle$ with proportion $p_{ij}$; then $\rho_{ij}$ can be estimated as $B\sqrt{p_{ij}}$. Here $B=\sqrt{\sum_{l=1}^d \lambda_l^2}$ can be obtained by estimating the $\lambda_l$ via quantum principal component analysis \cite{QPCA}. This incurs little cost because $\rho$ is approximately low-rank. Moreover, assuming that there are $d'$ significantly large elements in $\rho$, or equivalently in $|\psi\rangle$, $\mathcal{O}(\frac{d'}{\epsilon^2})$ measurements in the computational basis are needed to approximate $|\psi\rangle$ while ensuring the sum of the squared errors is within $\epsilon^2$. The total runtime of the tomography is $\mathcal{O}(\frac{T_{\rho}d'\sqrt{d}}{\epsilon_{eff}^3\epsilon^5})$.
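This estimation step can be simulated classically; the low-rank $\rho$ with nonnegative entries and the sample size below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# A rank-2 density matrix with nonnegative real entries (illustrative).
v = np.array([0.6, 0.8, 0.0])
w = np.array([0.0, 0.6, 0.8])
rho = 0.7 * np.outer(v, v) + 0.3 * np.outer(w, w)

B = np.sqrt((rho ** 2).sum())  # B = sqrt(sum_ij |rho_ij|^2)
p = (rho / B) ** 2             # outcome probabilities |<i,j|psi_rho>|^2

# Sample computational-basis outcomes of |psi_rho> and estimate
# each rho_ij as B * sqrt(p_ij), as described above.
n_samples = 200_000
outcomes = rng.choice(d * d, size=n_samples, p=p.reshape(-1))
counts = np.bincount(outcomes, minlength=d * d).reshape(d, d)
rho_est = B * np.sqrt(counts / n_samples)

print(np.max(np.abs(rho_est - rho)))  # small sampling error
```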
\emph{Quantum Algorithm.---}
\emph{1. Mining frequent 1-itemsets.}
Mining frequent 1-itemsets requires estimating the supports of all 1-itemsets and determining the frequent 1-itemsets.
To achieve this in the quantum setting, we assume that an oracle accessing each element of the database binary matrix $D$ is provided. More precisely, it is a unitary operator $U_O$ acting on the computational basis as
\begin{eqnarray}
|i\rangle|j\rangle|0\rangle \xrightarrow{U_O} |i\rangle|j\rangle|D_{ij}\rangle.
\end{eqnarray}
This oracle can be implemented via quantum random access memory in time $\mathcal{O}(\log(NM))$ \cite{GLM}.
Provided with the above oracle, one can use the technique of amplitude amplification \cite{AA} to create a quantum state whose density matrix is proportional to the support matrix $S$, so that measuring the state in the computational basis reveals the supports of all the 1-itemsets, i.e., $S_{ii}$. First, one prepares three quantum registers in the state $|\varphi_1\rangle=\frac{(\sum_{i=1}^{N}|i\rangle)\otimes(\sum_{j=1}^{M}|j\rangle)\otimes |0\rangle}{\sqrt{NM}}$. Second, one performs $U_O$ on $|\varphi_1\rangle$ to yield the state $|\varphi_2\rangle=\frac{\sum_{i=1}^{N}\sum_{j=1}^{M}|i\rangle|j\rangle|D_{ij}\rangle}{\sqrt{NM}}$. Third, one applies amplitude amplification to search for the entries $D_{ij}=1$ in the last qubit and then measures it until obtaining $|1\rangle$, yielding the state $|\varphi_3\rangle=\frac{\sum_{i=1}^{N}\sum_{j=1}^{M}D_{ij}|i\rangle|j\rangle}{\sqrt{W}}$, where $W$ is the total number of 1's in $D$, i.e., $W=\sum_{i=1}^N\sum_{j=1}^M D_{ij}$. The oracle complexity scales as $\mathcal{O}(\sqrt{\frac{NM}{W}})=\mathcal{O}(\sqrt{\frac{M}{a}})$ and the time complexity scales as $\mathcal{O}(\sqrt{\frac{M}{a}}\log(MN))$, where $a=\frac{W}{N}$ is the average number of items in each transaction and can be estimated by quantum counting with oracle complexity $\mathcal{O}(\sqrt{M})$ \cite{QC}. In practice, since a customer generally buys only a few items in one transaction, $a$ generally scales as $\mathcal{O}(1)$. As a consequence, the oracle complexity comes to $\mathcal{O}(\sqrt{M})$. The density matrix of the second register of $|\varphi_3\rangle$ is $\rho=\frac{D^TD}{tr(D^TD)}=\frac{D^TD}{W}=\frac{S}{a}$, so $S=a\rho$. Finally, one measures the state $\rho$ in the computational basis; the outcome $i$ occurs with probability $\rho_{ii}=\frac{S_{ii}}{a}$, and thus the supports $S_{ii}$ are estimated.
To obtain $F_1$, the classical Apriori algorithm deterministically computes the supports of all the items, which takes time $\mathcal{O}(MN)$. These supports can also be estimated via a sampling technique. Taking $N_c$ samples for each support, $S_{ii}$ can be estimated with squared error $\mathcal{O}(\frac{S_{ii}(1-S_{ii})}{N_c})$, and thus the sum of the squared errors over all the supports scales as $\mathcal{O}(\frac{a}{N_c})$. To make the sum of the squared errors less than a preset value $\epsilon^2$, $N_c$ is chosen as $\mathcal{O}(\frac{a}{\epsilon^2})=\mathcal{O}(\frac{1}{\epsilon^2})$, and thus the total time taken to estimate all $M$ supports via sampling scales as $\mathcal{O}(\frac{M}{\epsilon^2})$. In our algorithm, we sample $\rho$ by measuring it in the computational basis. Suppose we take $N_q$ samples; then $\rho_{ii}$ can be estimated with squared error $\mathcal{O}(\frac{\rho_{ii}(1-\rho_{ii})}{N_q})$, and the sum of the squared errors scales as $\mathcal{O}(\frac{1}{N_q})$. Since $S_{ii}=a\rho_{ii}$, the sum of the squared errors for estimating $S_{ii}$ is $a^2$ times that for estimating $\rho_{ii}$. Therefore, $N_q$ should be chosen as $N_q=\mathcal{O}(\frac{a^2}{\epsilon^2})=\mathcal{O}(\frac{1}{\epsilon^2})$, and thus our algorithm takes time $\mathcal{O}(\frac{\sqrt{M}\log(MN)}{\epsilon^2})$ to mine the frequent 1-itemsets.
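The sampling estimator can itself be simulated classically: measuring the item register of $|\varphi_3\rangle$ returns item $j$ with probability $S_{jj}/a$, so $S_{jj}$ is recovered as $a$ times the empirical frequency. The toy database and sample size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 1000, 8

# Sparse toy database: item j appears in a transaction with probability q_j.
q = 0.05 + rng.random(M) * 0.3
D = (rng.random((N, M)) < q).astype(int)

W = D.sum()                  # total number of 1's in D
a = W / N                    # average number of items per transaction
S_diag = D.sum(axis=0) / N   # exact 1-itemset supports S_jj

# Measuring the item register of |phi_3>: item j occurs with
# probability (column sum of j) / W = S_jj / a.
p = D.sum(axis=0) / W
n_samples = 100_000
counts = np.bincount(rng.choice(M, size=n_samples, p=p), minlength=M)
S_est = a * counts / n_samples

print(np.max(np.abs(S_est - S_diag)))  # sampling error
```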
\emph{2. Mining frequent 2-itemsets.}
Suppose there are $M_1$ frequent 1-itemsets, $F_1=\{I_{f_1},I_{f_2},\cdots,I_{f_{M_1}}\}$, on the basis of which we can mine the frequent 2-itemsets $F_2$. To do so, we need to compute all the supports of the candidate 2-itemsets $C_2=F_1 \bowtie F_1=\{\{I_{f_i},I_{f_j}\}|I_{f_i},I_{f_j} \in F_1, I_{f_i} \neq I_{f_j}\}$ and pick out the frequent ones, whose supports are not less than $min\_supp$. Just as for the matrix $S$, all the supports of the itemsets in $C_2$ are placed in the upper (or lower) off-diagonal elements of the matrix $\overline{S}=\frac{D_f^{T}D_f}{N}$, where $D_f=(\overrightarrow{d_{f_1}},\overrightarrow{d_{f_2}}, \cdots, \overrightarrow{d_{f_{M_1}}})$ is a submatrix of $D$. Following the ideas of mining frequent 1-itemsets and of pure-state-based quantum state tomography, our quantum algorithm proceeds as follows to mine the frequent 2-itemsets:
1. Using the method for mining frequent 1-itemsets, a quantum state with density matrix $\sigma=\frac{D_f^TD_f}{tr(D_f^TD_f)}\propto \overline{S}$ is created in time $\mathcal{O}(\sqrt{M_1}\log(M_1N))$. Note that the average number of items in each row of $D_f$, denoted by $a_f$, can also be estimated by quantum counting \cite{QC}. It is evidently not greater than $a$ and thus scales as $\mathcal{O}(1)$.
2. Perform the pure-state-based quantum state tomography on the state $\sigma$. Since $\sigma$ is generally approximately low-rank, the time taken for the tomography is $\mathcal{O}(\frac{M_1'M_1\log(M_1N)}{\epsilon_{eff}^3\epsilon^5})$, where we assume $\sigma$ (or equivalently $\overline{S}$) has $M_1'$ significantly large elements and $\epsilon_{eff}$ is the lower bound of the eigenvalues taken into account.
3. Estimate the supports of the candidate 2-itemsets in $C_2$. In the pure-state-based quantum state tomography on $\sigma$, a pure state close to $|\psi_{\sigma}\rangle=\sum_{ij}\psi_{\sigma}^{ij}|i\rangle|j\rangle$ is created, in which $\psi_{\sigma}^{ij}=\frac{\sigma_{ij}}{\sqrt{\sum_{j}\gamma_j^2}}$ and the $\gamma_j$ are the eigenvalues of $\sigma$. Since $\sigma$ is generally approximately low-rank, the eigenvalues $\gamma_j$ can be estimated by quantum principal component analysis \cite{QPCA} in very little time. Then the supports of $\{I_{f_i},I_{f_j}\}$ are estimated as $\overline{S}_{ij}=a_f\psi_{\sigma}^{ij}\sqrt{\sum_{j}\gamma_j^2}$.
4. Pick out the frequent candidate 2-itemsets to constitute the set $F_2$.
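The matrix identity behind these steps, $\overline{S}=\frac{D_f^{T}D_f}{N}$, can be verified classically on a toy instance; the sizes, the random data, and the \texttt{min\_sup} value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy restricted database: N transactions over M1 frequent 1-itemsets.
N, M1 = 500, 5
Df = (rng.random((N, M1)) < 0.4).astype(int)

# Off-diagonal entry (i, j) of S_bar is the support of the pair {I_fi, I_fj}.
S_bar = Df.T @ Df / N

# Check one entry against the definition of pairwise support.
i, j = 0, 2
assert np.isclose(S_bar[i, j], np.mean(Df[:, i] & Df[:, j]))

# Step 4: keep the candidate 2-itemsets whose support reaches min_sup.
min_sup = 0.15
F2 = [(p, q) for p in range(M1) for q in range(p + 1, M1)
      if S_bar[p, q] >= min_sup]
print(F2)
```

This is exactly the quantity the tomography step reconstructs; the quantum algorithm never forms $D_f^{T}D_f$ explicitly.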
To obtain $F_2$, the Apriori algorithm directly computes the supports ($\overline{S}_{ij}$) of the $\frac{M_1(M_1-1)}{2}$ candidate 2-itemsets in $C_2$ and thus takes time $\frac{M_1(M_1-1)N}{2}$.
To reduce the complexity, the sampling technique can also be applied, with $\mathcal{O}(\frac{M_1^2}{\epsilon^2})$ samples used to estimate each support so that the sum of the squared errors is within $\epsilon^2$. In our algorithm, $\mathcal{O}(\frac{M_1^{'}}{\epsilon^2})$ samples \cite{NS} in step 2 are required to make the sum of the squared errors of estimating these supports also lie within $\epsilon^2$, and thus the total runtime scales as $\mathcal{O}(\frac{M_1'M_1\log(M_1N)}{\epsilon_{eff}^3\epsilon^5})$, which offers a polynomial speedup over the classical sampling algorithm when $M_1'=\mathcal{O}(M_1^p)$ with $p<1$. In practice, since there will be a large number of pairs of items $\{I_i,I_j\}$ with weak associations, the speedup can be achieved with high probability.
The comparison of the time complexity of our quantum algorithm and the classical algorithms for mining frequent 1-itemsets and 2-itemsets is given in Table~\ref{tab:table1}. It shows that our algorithm is exponentially faster than the Apriori algorithm, due to the use of the sampling technique, and potentially polynomially faster than the classical sampling algorithm.
\begin{table}[b]
\caption{\label{tab:table1}
Comparison of time complexity between our quantum algorithm and classical Apriori algorithm and sampling algorithm for mining frequent 1-itemsets $F_1$ and 2-itemsets $F_2$.}
\begin{ruledtabular}
\begin{tabular}{ccc}
\multirow{2}{*}{Algorithm} &\multicolumn{2}{c}{Time complexity} \\
\cline{2-3}
&Mining $F_1$ &Mining $F_2$ \\ \hline
Apriori &$\mathcal{O}(MN)$ &$\mathcal{O}(M_1^2N)$ \\
Sampling &$\mathcal{O}(\frac{M}{\epsilon^2})$ &$\mathcal{O}(\frac{M_1^2}{\epsilon^2})$ \\
Quantum &$\mathcal{O}(\frac{\sqrt{M}\log(MN)}{\epsilon^2})$ &$\mathcal{O}(\frac{M_1'M_1\log(M_1N)}{\epsilon_{eff}^3\epsilon^5})$\\
\end{tabular}
\end{ruledtabular}
\end{table}
\emph{Conclusion.---}
In this letter, we have provided a quantum algorithm for ARM, focusing on the most compute-intensive process in ARM: mining frequent 1-itemsets and 2-itemsets. In our algorithm, the techniques of amplitude amplification and pure-state-based quantum state tomography are introduced, enabling our algorithm to potentially achieve a polynomial speedup over the classical sampling algorithm. Moreover, since only a limited number of quantum oracle accesses to the database are used in our algorithm, data privacy for the database can be achieved to some degree. We hope this algorithm will be useful for designing more quantum algorithms for big-data mining tasks.
\end{document}
\begin{document}
\begin{abstract}
For knots in $S^3$, the bi-graded hat version of knot Floer homology is defined over $\mathbb{Z}$; however, for an $l$-component link $L$ in $S^3$ with $l>1$, there are $2^{l-1}$ bi-graded hat versions of link Floer homology defined over $\mathbb{Z}$; the multi-graded hat version of link Floer homology, defined from holomorphic considerations, is only defined over $\mathbb{F}_2$; and there is a multi-graded version of link Floer homology defined over $\mathbb{Z}$ using grid diagrams. In this short note, we try to address this issue by extending the $\mathbb{F}_2$-valued multi-graded link Floer homology theory to $2^{l-1}$ $\mathbb{Z}$-valued theories. A grid diagram representing a link gives rise to a chain complex over $\mathbb{F}_2$, whose homology is related to the multi-graded hat version of link Floer homology of that link over $\mathbb{F}_2$. A sign refinement of the chain complex exists, and for knots, we establish that the sign refinement does indeed correspond to the sign assignment for the hat version of the knot Floer homology. For links, we create $2^{l-1}$ sign assignments on the grid diagrams, and show that they are related to the $2^{l-1}$ multi-graded hat versions of link Floer homology over $\mathbb{Z}$, and one of them corresponds to the existing sign refinement of the grid chain complex.
\end{abstract}
\keywords{sign convention; link Floer homology; grid diagram}
\subjclass[2010]{57M25, 57M27, 57R58}
\title{A note on sign conventions in link Floer homology}
\section{Introduction}\label{sec:introduction}
Knot Floer homology, primarily as an invariant for knots and links
inside $S^3$, was discovered by Peter Ozsv\'{a}th{} and Zolt\'{a}n{}
Szab\'{o}{} \cite{POZSzknotinvariants}, and independently by Jacob
Rasmussen \cite{JR}. Later, a related invariant for links, called link
Floer homology, was constructed by Peter Ozsv\'{a}th{} and Zolt\'{a}n{}
Szab\'{o}{} \cite{POZSzlinkinvariants}. However, due to certain
orientation issues, the link invariant was only constructed over
$\mathbb{F}_2$, instead of $\mathbb{Z}$. This short note is the author's effort to
understand the orientation issues that are known, and to resolve some
of the issues that are unknown.
Let us describe the algebraic structure of knot Floer homology in the
simplest case, as described in \cite{POZSzknotinvariants}. Let $K$ be
a null-homologous knot in $\#^{l-1}(S^1\times S^2)$. Then there are
$2^{l-1}$ bi-graded chain complexes over $\mathbb{Z}$, such that they all give
rise to the same complex when tensored with $\mathbb{F}_2$. The two
gradings are called the Maslov grading $M$ and the Alexander grading
$A$. The boundary maps preserve the Alexander grading, but lower the
Maslov grading by one. Therefore, the Maslov grading acts as the
homological grading while the Alexander grading acts as an extra
filtration. The homology of the chain complexes is called the hat
version of the knot Floer homology. Therefore, we get an $\mathbb{F}_2$-valued
bi-graded hat version of knot Floer homology and $2^{l-1}$ $\mathbb{Z}$-valued
bi-graded hat versions of knot Floer homology.
The reason for working with null-homologous knots in connected sums of
$S^1\times S^2$ is very simple. We want to work with links in $S^3$.
However, a link with $l$ components in $S^3$ very naturally gives rise
to a null-homologous knot in $\#^{l-1}(S^1\times S^2)$,
\cite{POZSzknotinvariants}. Therefore, what we have is the following.
Given a link $L\subset S^3$, with $l$ components, and after making
certain auxiliary choices, we get $2^{l-1}$ bi-graded chain complexes
over $\mathbb{Z}$, henceforth denoted by $\widehatcfk(L,\mathbb{Z},\mathfrak{o})$, where
$\mathfrak{o}$, called an orientation system, takes values in an indexing
set of $2^{l-1}$ elements, and records which of the $2^{l-1}$ chain
complexes is the one under consideration. All of the $2^{l-1}$ chain
complexes give rise to the same bi-graded chain complex over $\mathbb{F}_2$,
$\widehatcfk(L,\mathbb{F}_2)=\widehatcfk(L,\mathbb{Z},\mathfrak{o})\otimes \mathbb{F}_2$. The reader
should be warned that these bi-graded chain complexes,
$\widehatcfk(L,\mathbb{Z},\mathfrak{o})$ and $\widehatcfk(L,\mathbb{F}_2)$, are not
link-invariants (they might depend on the auxiliary choices that we
did not specify, but simply alluded to), but their homologies are link
invariants. Therefore, we get one $\mathbb{F}_2$-valued bi-graded hat version
of knot Floer homology $\widehathfk(L,\mathbb{F}_2)=H_*(\widehatcfk(L,\mathbb{F}_2))$, and
$2^{l-1}$ $\mathbb{Z}$-valued bi-graded hat versions of knot Floer homology
$\widehathfk(L, \mathbb{Z}, \mathfrak{o})=H_*(\widehatcfk(L,\mathbb{Z},\mathfrak{o}))$. We often
let $\widehathfk(L,\mathbb{Z})$ denote any one of the $2^{l-1}$ versions, or a canonical one, namely the one coming from the canonical choice of orientation systems in \cite{POZSzapplications}. However, to decide which of the $2^{l-1}$ groups $\widehathfk(L,\mathbb{Z},\mathfrak{o})$ is the canonical one, one needs to understand some of the other versions of link Floer homology, most notably the infinity version. This seems to be a harder problem, for reasons that we will discuss shortly.
In \cite{POZSzlinkinvariants}, the story for links is treated
in a slightly different light, and a new definition of link Floer
homology is given. Given a link $L$ with $l$ components in $S^3$,
modulo certain choices, a chain complex $\widehatcfl(L,\mathbb{F}_2)$ over $\mathbb{F}_2$
is constructed. The chain complex carries $(l+1)$ gradings: a single
Maslov grading $M$, which is lowered by one by the boundary map, and
$l$ Alexander gradings $A_1, A_2, \ldots, A_l$, one for each link
component, each of which is preserved by the boundary map. The
homology of the chain complex $\widehathfl(L,\mathbb{F}_2)=H_*(\widehatcfl(L,\mathbb{F}_2))$
is an $\mathbb{F}_2$-valued $(l+1)$-graded homology theory, called the link
Floer homology, and it is a link invariant. These two definitions,
\emph{a priori}, are different. Therefore, we have been careful
throughout; we have called the definition from
\cite{POZSzknotinvariants} the knot Floer homology (even when talking
about links), and denoted it by $\widehathfk$, and we have called the
definition from \cite{POZSzlinkinvariants} the link Floer homology,
and denoted it by $\widehathfl$. However, by a miraculous coincidence, it
turns out that if we condense the $l$ Alexander gradings in
$\widehathfl(L,\mathbb{F}_2)$ into one single Alexander grading $A=\sum_i A_i$,
then the resulting $\mathbb{F}_2$-valued bi-graded homology group is
isomorphic to $\widehathfk(L,\mathbb{F}_2)$.
In this note, we will complete the picture by constructing $2^{l-1}$
$\mathbb{Z}$-valued chain complexes, $\widehatcfl(L,\mathbb{Z},\mathfrak{o})$, each carrying a
Maslov grading $M$, and $l$ Alexander gradings $A_1,A_2,\ldots,A_l$,
such that the homologies
$\widehathfl(L,\mathbb{Z},\mathfrak{o})=H_*(\widehatcfl(L,\mathbb{Z},\mathfrak{o}))$ are link
invariants, and on condensing the $l$ Alexander gradings into one
Alexander grading $A=\sum_i A_i$, we get the $2^{l-1}$ $\mathbb{Z}$-valued
bi-graded homology groups $\widehathfk(L,\mathbb{Z},\mathfrak{o})$.
A similar story (except possibly the last bit of coincidence) holds
for the other versions of link Floer homologies, most notably the
minus, plus and infinity versions; however, the holomorphic
considerations and the orientation issues are significantly more
subtle. In particular, we will encounter boundary degenerations, and we will have to orient the relevant moduli spaces in a consistent fashion. We plan to address this problem in future work. Understanding the orientation issues for all versions of link Floer homology will help us understand which of the $2^{l-1}$ link Floer homology groups is the canonical one and whether it has some sort of a useful characterization.
For the second part of the discourse, we concentrate on the
computational aspects of the theory. Ever since knot Floer homology
saw the light of day \cite{POZSzknotinvariants}, \cite{JR},
\cite{POZSzlinkinvariants}, and some of its immense strengths were
discovered \cite{POZSzgenusbounds}, \cite{POZSzthurstonnorm}, \cite{YN}, people were
interested in algorithms to compute it. There have been several
recent developments towards computing various versions of link Floer
homology for links in $S^3$ \cite{CMPOSS}, \cite{SSJW}, \cite{POZSzcube}, \cite{POASZSz}.
We choose to concentrate on the algorithm from \cite{CMPOSS}: the link
$L$ in $S^3$ is represented by a toroidal grid diagram $G$, such that
the $i^{\text{th}}$ component is represented by $m_i$ vertical line segments
and $m_i$ horizontal line segments; an $\mathbb{F}_2$-valued $(l+1)$-graded
chain complex $C(G)$ is constructed such that its homology $H_*(C(G))$
is isomorphic to
$\widehathfl(L,\mathbb{F}_2)\otimes_i(\otimes^{m_i-1}(\mathbb{F}_2\oplus\mathbb{F}_2))$, where,
in the $(\mathbb{F}_2\oplus\mathbb{F}_2)$ that is tensored with itself $(m_i-1)$
times, for one of the generators, all the $(l+1)$ gradings are zero,
and for the other generator, the Maslov grading $M=-1$, and the
Alexander grading $A_j=-\delta_{ij}$.
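The combinatorics underlying $C(G)$ can be made concrete with a small sketch: generators of a grid of size $n$ correspond to the $n!$ permutations, and the differential counts \emph{empty rectangles} connecting generators that differ by a transposition. The sketch below is only a schematic of this counting over $\mathbb{F}_2$ (the grid size and the marking set are arbitrary illustrative choices); it is not the full construction of \cite{CMPOSS}.

```python
from itertools import permutations

n = 3  # grid size; generators of C(G) correspond to permutations of {0, ..., n-1}

def torus_span(a, b, n):
    """Indices strictly between a and b, moving cyclically upward from a."""
    out, k = [], (a + 1) % n
    while k != b:
        out.append(k)
        k = (k + 1) % n
    return out

def empty_rectangles(x, y, n, markings):
    """Empty rectangles from generator x to generator y.

    x and y must agree except in two columns, where their entries are
    swapped.  Of the rectangles on the torus with corners at those four
    points, two connect x to y; a rectangle counts only if its interior
    misses every point of x and every marking in `markings`."""
    diff = [c for c in range(n) if x[c] != y[c]]
    if len(diff) != 2:
        return []
    i, j = diff
    if x[i] != y[j] or x[j] != y[i]:
        return []
    pts = {(c, x[c]) for c in range(n)}
    rects = []
    for c0, c1 in [(i, j), (j, i)]:
        interior = {(c, r) for c in torus_span(c0, c1, n)
                    for r in torus_span(x[c0], x[c1], n)}
        if not (interior & pts) and not (interior & markings):
            rects.append((c0, c1))
    return rects

gens = list(permutations(range(n)))                        # 3! = 6 generators
print(empty_rectangles((0, 1, 2), (1, 0, 2), n, set()))    # → [(0, 1)]
```

Populating \texttt{markings} with the $X$ and $O$ markings cuts down which rectangles count; the sign assignments discussed next attach a $\pm 1$ to each surviving rectangle.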
Very shortly thereafter, \cite{CMPOZSzDT} assigned signs of $\pm 1$ to each of the boundary maps in the chain complex $C(G)$ in a well-defined way, such that it remains a chain complex and its homology (over $\mathbb{Z}$) is isomorphic to $\widehathfg(L,\mathbb{Z})\otimes_i(\otimes^{m_i-1}(\mathbb{Z}\oplus\mathbb{Z}))$, for some $(l+1)$-graded group $\widehathfg(L,\mathbb{Z})$, which is a link invariant.
A very natural question that arises is whether the new homology group $\widehathfg(L,\mathbb{Z})$ is isomorphic to $\widehathfl(L,\mathbb{Z},\mathfrak{o})$ for some $\mathfrak{o}$. We establish that the answer is in the affirmative, and indeed, we construct $2^{l-1}-1$ other sign assignments on the boundary maps of $C(G)$, such that the homologies of these $2^{l-1}$ sign-refined grid chain complexes correspond precisely to the $2^{l-1}$ $\mathbb{Z}$-valued $(l+1)$-graded homology groups $\widehathfl(L,\mathbb{Z})$. Once again, it is an interesting question whether $\widehathfg(L,\mathbb{Z})$ is isomorphic to the canonical $\widehathfl(L,\mathbb{Z})$, and once again, we are unable to answer it with our present methods. It is also an interesting endeavor to find two $l$-component links $L_1$ and $L_2$ with the following properties: $\widehatcfl(L_1,\mathbb{F}_2)$ is isomorphic to $\widehatcfl(L_2,\mathbb{F}_2)$ as $(l+1)$-graded $\mathbb{F}_2$-modules; there is a bijection between the set of $2^{l-1}$ groups $\widehatcfk(L_1,\mathbb{Z})$ and the set of $2^{l-1}$ groups $\widehatcfk(L_2,\mathbb{Z})$ such that the corresponding groups are isomorphic as bi-graded $\mathbb{Z}$-modules; $\widehathfg(L_1,\mathbb{Z})$ is isomorphic to $\widehathfg(L_2,\mathbb{Z})$ as $(l+1)$-graded $\mathbb{Z}$-modules; but there is no bijection between the set of $2^{l-1}$ groups $\widehathfl(L_1,\mathbb{Z})$ and the set of $2^{l-1}$ groups $\widehathfl(L_2,\mathbb{Z})$ such that the corresponding groups are isomorphic as $(l+1)$-graded $\mathbb{Z}$-modules.
This is a rather short paper. We expect the reader to be already
familiar with most of \cite{CMPOZSzDT}, \cite{POZSzknotinvariants},
\cite{POZSzlinkinvariants}. Despite trying our level best to be as
self-contained as possible, we will still be rather fast in our
exposition.
\subsection*{Acknowledgment}
The work was done when the author was supported by the Clay Research Fellowship. He would like to thank Robert Lipshitz, Peter Ozsv\'{a}th{} and Zolt\'{a}n{} Szab\'{o}{} for several helpful discussions. He would also like to thank the referee for providing useful comments and for pointing out the errors.
\section{Floer homology}\label{sec:floerhomology}
For the first part of the section, in the following few numbered paragraphs, we will briefly review the basics of Heegaard Floer homology. The interested reader is referred to \cite{POZSz}, \cite{POZSzapplications} for more details.
\subsection{} A \emph{Heegaard diagram} is an object $\mathcal{H}=(\Sigma_g, \alpha_1, \ldots,\alpha_{g+k-1},\beta_1,\ldots,\beta_{g+k-1},\allowbreak X_1,\ldots,X_k,O_1,\ldots,O_k)$, where: $\Sigma_g$ is a Riemann surface of genus $g$; $\alpha=(\alpha_1,\ldots,\allowbreak \alpha_{g+k-1})$ is a $(g+k-1)$-tuple of disjoint simple closed curves such that $\Sigma_g\setminus\alpha$ has $k$ components; $\beta=(\beta_1,\ldots, \beta_{g+k-1})$ is a $(g+k-1)$-tuple of disjoint simple closed curves such that $\Sigma_g\setminus\beta$ has $k$ components; the $\alpha$ circles are transverse to the $\beta$ circles; $X=(X_1,\ldots,X_k)$ is a $k$-tuple of points such that each component of $\Sigma_g\setminus\alpha$ has an $X$ marking, and each component of $\Sigma_g\setminus\beta$ has an $X$ marking; $O=(O_1,\ldots,O_k)$ is a $k$-tuple of points such that each component of $\Sigma_g\setminus\alpha$ has an $O$ marking, and each component of $\Sigma_g\setminus\beta$ has an $O$ marking; and the diagram is assumed to be \emph{admissible}, which is a technical condition that we will describe later.
\subsection{}A Heegaard diagram represents an oriented link $L$ inside a three-manifold $Y$ in the following way: the pair $(\Sigma_g,\alpha)$ represents a genus $g$ handlebody $U_{\alpha}$; the pair $(\Sigma_g,\beta)$ represents a genus $g$ handlebody $U_{\beta}$; the ambient three-manifold $Y$ is obtained by gluing $U_{\alpha}$ to $U_{\beta}$ along $\Sigma_g$; the $X$ markings are joined to the $O$ markings by $k$ simple oriented arcs in the complement of the $\alpha$ circles, and the interiors of the $k$ arcs are pushed slightly inside the handlebody $U_{\alpha}$; the $O$ markings are joined to the $X$ markings by $k$ simple oriented arcs in the complement of the $\beta$ circles, and the interiors of the $k$ arcs are pushed slightly inside the handlebody $U_{\beta}$; the union of these $2k$ oriented arcs is the oriented link $L$. Let the link have $l$ components, and let $2 m_i$ be the number of arcs that represent $L_i$, the $i^{\text{th}}$ component of the link $L$. Therefore, $k=\sum_i m_i \geq l$. In \cite{POZSzlinkinvariants}, the case $k=l$ is studied, and in \cite{POZSzknotinvariants}, the subcase $k=l=1$ is dealt with. We will always assume that $L_i$ is null-homologous in $Y$, for each $i$.
\subsection{}Consider $(g+k-1)$-tuples of points $x=(x_1,\ldots,x_{g+k-1})$, such that each $\alpha$ circle contains some $x_i$, and each $\beta$ circle contains some $x_j$. To each such tuple $x$, we can associate a $\text{Spin}^{\text{C}}$ structure $\mathfrak{s}_x$ on the ambient three-manifold $Y$. In all the three-manifolds that we will consider, we will be interested in a canonical torsion $\text{Spin}^{\text{C}}$ structure. In particular, for $Y=\#^n S^1\times S^2$, we will be interested in the unique torsion $\text{Spin}^{\text{C}}$ structure. A \emph{generator} is a $(g+k-1)$-tuple $x$ of the type described above, such that $\mathfrak{s}_x$ is the canonical $\text{Spin}^{\text{C}}$ structure. The set of all generators in a Heegaard diagram $\mathcal{H}$ is denoted by $\mathcal{G}_{\mathcal{H}}$. An \emph{elementary domain} is a component of $\Sigma_g\setminus(\alpha\cup\beta)$. A \emph{domain} $D$ joining a generator $x$ to a generator $y$, is a $2$-chain generated by elementary domains such that $\partial(\partial D|_{\alpha})=y-x$. The set of all domains joining $x$ to $y$ is denoted by $\mathcal{D}(x,y)$. A \emph{periodic domain} $P$ is a $2$-chain generated by elementary domains such that $\partial(\partial P|_{\alpha})=0$. The set of periodic domains is denoted by $\mathcal{P}_{\mathcal{H}}$, and there is a natural bijection between $\mathcal{P}_{\mathcal{H}}$ and $\mathcal{D}(x,x)$ for any generator $x$. If $D$ is a domain, and if $p$ is a point lying in an elementary domain, then $n_p(D)$ denotes the coefficient of the $2$-chain $D$ at that elementary domain. Let $n_X(D)=\sum_i n_{X_i}(D)$ and $n_O(D)=\sum_i n_{O_i}(D)$. Furthermore, let $n_{X,i}(D)$ denote the sum of $n_{X_j}(D)$ for all the $X_j$ markings that lie in $L_i$, and let $n_{O,i}(D)$ denote the sum of $n_{O_j}(D)$ for all the $O_j$ markings that lie in $L_i$. A domain is said to be \emph{non-negative} if it has non-negative coefficients in every elementary domain.
A domain $D$ is said to be \emph{empty} if $n_{X_i}(D)=n_{O_i}(D)=0$ for all $i$. A Heegaard diagram is called \emph{admissible} if there are no non-negative, non-trivial empty periodic domains. The set of all empty domains in $\mathcal{D}(x,y)$ is denoted by $\mathcal{D}^0(x,y)$, and the set of all empty periodic domains is denoted by $\mathcal{P}^0_{\mathcal{H}}$. The set $\mathcal{P}^0_{\mathcal{H}}$ forms a free abelian group of rank $b_1(Y)+l-1$.
\subsection{}Every domain $D$ has an integer-valued \emph{Maslov index} $\mu(D)$ associated to it, which satisfies certain properties that we will mention as we need them. In all the Heegaard diagrams that we will consider, the following additional restrictions will hold: if $P\in\mathcal{D}(x,x)$, then $\mu(P)=2 n_O(P)$ and, since $L_i$ is null-homologous in $Y$, $n_{X,i}(P)= n_{O,i}(P)$ for all $i$. This allows us to define $(l+1)$ relative gradings. Given two generators $x,y$, choose a domain $D\in\mathcal{D}(x,y)$ (since $\mathfrak{s}_x=\mathfrak{s}_y$, the set $\mathcal{D}(x,y)$ is non-empty), and let the \emph{relative Maslov grading} $M(x,y)=\mu(D)-2n_O(D)$, and let the \emph{relative Alexander grading} $A_i(x,y)=n_{X,i}(D)-n_{O,i}(D)$. In certain situations, with certain additional hypotheses, these gradings can be lifted to absolute gradings. However, for convenience, we will not work with absolute gradings right away. Therefore, until Lemma \ref{lem:absolutegrading}, whenever we talk about the Maslov grading $M$, or the Alexander grading $A_i$, we mean some affine lift of the corresponding relative grading, which is only well-defined up to a translation by $\mathbb{Z}$. Let $Q_i=\mathbb{Z}\oplus\mathbb{Z}$ be the $(l+1)$-graded group, with the two generators lying in gradings $(0,0,\ldots,0)$ and $(-1,-\delta_{i1},\ldots,-\delta_{il})$, where $\delta$ is the Kronecker delta function.
\subsection{}For the analytical aspects of the theory, which we are about to describe now, the reader is strongly advised to read Section 3 of \cite{POZSz}. Let $\text{Sym}^{g+k-1}(\Sigma_g)$ be the $(g+k-1)$-fold symmetric product, and let $J_s$ be a path of nearly symmetric almost complex structures on it, obtained as a small perturbation of the constant path of the nearly symmetric almost complex structure $\text{Sym}^{g+k-1}(\mathfrak{j})$, where $\mathfrak{j}$ is a fixed complex structure on $\Sigma_g$, such that $J_s$ achieves certain transversality that we will describe later. The subspaces $\mathbb{T}_{\alpha}=\alpha_1\times\cdots\times\alpha_{g+k-1}$ and $\mathbb{T}_{\beta}=\beta_1\times\cdots\times\beta_{g+k-1}$ are two totally real tori. Notice that $\mathcal{G}_{\mathcal{H}}$ is in a natural bijection with a subset of $\mathbb{T}_{\alpha}\cap\mathbb{T}_{\beta}$. Fix $\mathfrak{p}>2$. Given a domain $D\in\mathcal{D}(x,y)$, let $\mathcal{B}(D)$ be the space of all $L^{\mathfrak{p}}_1$ maps $u$ from $[0,1]\times\mathbb{R}\subset\mathbb{C}$ to $\text{Sym}^{g+k-1}(\Sigma_g)$, such that: $u$ maps $\{0\}\times\mathbb{R}$ to $\mathbb{T}_{\alpha}$; $u$ maps $\{1\}\times\mathbb{R}$ to $\mathbb{T}_{\beta}$; $\lim_{t\rightarrow\infty}u(s+it)=x$ with a certain pre-determined asymptotic behavior; $\lim_{t\rightarrow -\infty}u(s+it)=y$ with a certain pre-determined asymptotic behavior; for any point $p$ in any elementary domain, the algebraic intersection number between $u$ and $\{p\}\times \text{Sym}^{g+k-2}(\Sigma_g)$ is $n_p(D)$, or, as it is colloquially stated, the domain $D$ is the \emph{shadow} of $u$. Ozsv\'{a}th{} and Szab\'{o}{} define a vector bundle $\mathcal{L}$ over $\mathcal{B}(D)$, and a section $\xi$ of that bundle depending on $J_s$, such that the linearization of the section $D_u\xi$ is a Fredholm operator for every $u\in\mathcal{B}(D)$.
The transversality of the path $J_s$ that we mentioned earlier, simply means that the Fredholm section $\xi$ is transverse to the $0$-section of $\mathcal{L}$. The intersection of $\xi$ and the $0$-section is denoted by $\mathcal{M}_{J_s}(D)$, and it consists precisely of the $J_s$-holomorphic maps. There is an $\mathbb{R}$ action on $\mathcal{M}_{J_s}(D)$ coming from the $\mathbb{R}$ action on $[0,1]\times\mathbb{R}$, and the \emph{unparametrized moduli space} is denoted by $\widehat{\mathcal{M}_{J_s}}(D)=\mathcal{M}_{J_s}(D)/\mathbb{R}$. The virtual index bundle of the linearization map $D_u$ gives an element of the $K$-theory of $\mathcal{B}(D)$. Its dimension is the expected dimension of the moduli space $\mathcal{M}_{J_s}(D)$, and this dimension is in fact the Maslov index $\mu(D)$, that we had mentioned earlier. The determinant line bundle of the index bundle, henceforth denoted by $\det(D)$, turns out to be a trivializable line bundle over $\mathcal{B}(D)$. Therefore, a choice of a nowhere vanishing section on the trivializable line bundle $\det(D)$, produces an orientation of the moduli space $\mathcal{M}_{J_s}(D)$, and hence an orientation of the unparametrized moduli space $\widehat{\mathcal{M}_{J_s}}(D)$.
\subsection{}\label{para:special}If $D_1\in\mathcal{D}(x,y)$ and $D_2\in\mathcal{D}(y,z)$ are domains, then the $2$-chain $D_1+D_2$ lies in $\mathcal{D}(x,z)$. The asymptotic behaviors that we had mentioned earlier, along with some globally pre-determined choices, allow us to get a pre-gluing map from $\mathcal{B}(D_1)\times\mathcal{B}(D_2)$ to $\mathcal{B}(D_1+D_2)$. The pullback of the line bundle $\det(D_1+D_2)$ over $\mathcal{B}(D_1+D_2)$ can be canonically identified with the line bundle $\det(D_1)\wedge \det(D_2)$ over $\mathcal{B}(D_1)\times\mathcal{B}(D_2)$ by linearized gluing. An \emph{orientation system} $\mathfrak{o}$ is a choice of a nowhere vanishing section $\mathfrak{o}(D)$ of the line bundle $\det(D)$ for every domain $D\in\mathcal{D}(x,y)$, and for every pair of generators $x,y\in\mathcal{G}_{\mathcal{H}}$, such that if $D_1\in\mathcal{D}(x,y)$ and $D_2\in\mathcal{D}(y,z)$, then $\mathfrak{o}(D_1)\wedge\mathfrak{o}(D_2)=\mathfrak{o}(D_1+D_2)$. Therefore, two orientation systems $\mathfrak{o}_1$ and $\mathfrak{o}_2$ disagree on $D_1+D_2$ if and only if they disagree on exactly one of the two domains $D_1$ and $D_2$.
\subsection{}The following describes a method to find all possible orientation systems. Fix a generator $x\in\mathcal{G}_{\mathcal{H}}$, and for every other generator $y$, choose a domain $D_y\in\mathcal{D}(x,y)$. Then choose a set of periodic domains $P_1,\ldots,P_m$, which freely generate $\mathcal{P}_{\mathcal{H}}$. Orient the determinant line bundles over the domains $D_y$ and $P_j$ arbitrarily. Since any domain $D\in\mathcal{D}(y,z)$ can be written uniquely as $D=\sum_j a_j P_j +D_z-D_y$, this choice uniquely specifies an orientation system.
Thus, an orientation system is specified by its values on certain domains $D_y$ and certain periodic domains $P_j$. This allows us to define a chain complex over $\mathbb{Z}$, and it will turn out that the gauge equivalence class of the sign assignment on the chain complex is independent of the orientations of the line bundles $\det(D_y)$. Therefore, declare two orientation systems to be \emph{strongly equivalent} if they agree on all the periodic domains in $\mathcal{P}_{\mathcal{H}}$ (or in other words, if they agree on all the periodic domains $P_1,\ldots,P_m$). There is a second notion of equivalence, which is of some importance to us, whereby two orientation systems are declared to be \emph{weakly equivalent} if they agree on all the periodic domains in $\mathcal{P}^0_{\mathcal{H}}$. Let $\widehat{\mathcal{O}}_{\mathcal{H}}$ denote the set of weak equivalence classes of orientation systems. Then $\widehat{\mathcal{O}}_{\mathcal{H}}$ is a torsor over $\operatorname{Hom}(\mathcal{P}^0_{\mathcal{H}},\mathbb{Z}/2\mathbb{Z})$, so there are exactly $2^{b_1(Y)+l-1}$ weak equivalence classes of orientation systems.
If $D\in\mathcal{D}(x,y)$ is a domain, its unparametrized moduli space
$\widehat{\mathcal{M}_{J_s}}(D)$ is a compact, $(\mu(D)-1)$-dimensional manifold
with corners by Gromov compactness and the fact that $J_s$ achieves
transversality; an orientation system $\mathfrak{o}$ determines an
orientation on $\widehat{\mathcal{M}_{J_s}}(D)$. Therefore, if $\mu(D)=1$, then
$\widehat{\mathcal{M}_{J_s}}(D)$ is a compact oriented zero-dimensional manifold with
corners, or in other words, it is a finite number of signed
points. Let $c(D)$ be the total number of points, counted with
sign. The cornerstone of Floer homology in the present setting, is the
following lemma.
\begin{lem}\cite{POZSz}\label{lem:index2}
If $D\in\mathcal{D}(x,y)$ is a domain with $\mu(D)=2$, then
$\widehat{\mathcal{M}_{J_s}}(D)$ is an oriented one-dimensional
manifold. Furthermore, if $D=D_1+D_2$, where $D_1\in\mathcal{D}(x,z)$ and
$D_2\in\mathcal{D}(z,y)$, with $\mu(D_1)=1$ and $\mu(D_2)=1$, then the
total number of points in the boundary of $\widehat{\mathcal{M}_{J_s}}(D)$ that
correspond to a decomposition of $D$ as $D_1+D_2$, when counted with
signs induced from the orientation of $\widehat{\mathcal{M}_{J_s}}(D)$, equals
$c(D_1)c(D_2)$.
\end{lem}
An immediate corollary is the following: if all the points in the
boundary of $\widehat{\mathcal{M}_{J_s}}(D)$ correspond to such a decomposition --- in other words, if bubbling and boundary degenerations can be ruled out --- then the sum $\sum c(D_1)c(D_2)$ over all such possible decompositions
is zero. This allows us to define the following $(l+1)$-graded chain
complex over $\mathbb{Z}$. This is a well-known chain complex, and it was
first defined by Ozsv\'{a}th{} and Szab\'{o}{} for $k=1$. However, for a
general value of $k$, the chain complex was originally not defined
over $\mathbb{Z}$. There are certain subtleties that need to be resolved
before the minus version can be defined over $\mathbb{Z}$, namely, we have to
orient the boundary degenerations in a consistent manner such that the
proofs of Theorems \ref{thm:main}, \ref{thm:secondmain} and
\ref{thm:connectsum} go through; however, those
issues do not appear when we work only in the hat version.
\begin{defn}
Given an admissible Heegaard diagram $\mathcal{H}$ for $L$ and an orientation system $\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}}$, let $\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})$ be the chain complex freely generated over $\mathbb{Z}$ by the elements of $\mathcal{G}_{\mathcal{H}}$, with the $(l+1)$ gradings given by $M,A_1,\ldots,A_l$, and the boundary map given by $\partial x=\sum_{y\in\mathcal{G}_\mathcal{H}}\sum_{D\in\mathcal{D}^0(x,y),\mu(D)=1}c(D)y$.
\end{defn}
\begin{lem}
The map $\partial$ on $\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})$ reduces the Maslov grading by $1$, keeps all Alexander gradings fixed, and satisfies $\partial^2=0$.
\end{lem}
\begin{proof}
The claims regarding the gradings follow directly from the
definitions. To prove that $\partial^2=0$, by Lemma \ref{lem:index2}, we
only need to show that for any empty Maslov index $2$ domain $D$,
the boundary points of $\widehat{\mathcal{M}}(D)$ do not correspond to
bubbling or boundary degenerations. However, the shadow of a bubble
or a boundary degeneration is a $2$-chain in the Heegaard diagram,
whose boundary lies entirely within the $\alpha$ circles, or
entirely within the $\beta$ circles. Any such $2$-chain must have
non-zero coefficient at some $X$ marking, and therefore by
positivity of domains, the original domain must also have non-zero
coefficient at that $X$ marking, and therefore, could not have been
empty.
\end{proof}
Even though we did not indicate it in the notation, $\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})$ might also depend on the path of almost
complex structures $J_s$ on $\text{Sym}^{g+k-1}(\Sigma_g)$. However, the
homology $H_*(\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o}))$, as an $(l+1)$-graded
object, depends only on the link $L$, the numbers of $X$ markings,
$m_i$, that lie on the $i^{\text{th}}$ link component for each $i$,
and the weak equivalence class of the orientation system $\mathfrak{o}$.
\begin{thm}\label{thm:main}
For a fixed Heegaard diagram $\mathcal{H}$ and a fixed path of almost
complex structures $J_s$, if $\mathfrak{o}_1$ and $\mathfrak{o}_2$ are weakly
equivalent, then the two chain complexes
$\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o}_1)$ and
$\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o}_2)$ are isomorphic. If $\mathcal{H}_1$ and
$\mathcal{H}_2$ are two different Heegaard diagrams for the same link
$L$, such that in both $\mathcal{H}_1$ and $\mathcal{H}_2$, the
$i^{\text{th}}$ link component $L_i$ is represented by $m_i$ $X$
markings and $m_i$ $O$ markings, and if $J_{s,1}$ and $J_{s,2}$ are
two paths of almost complex structures on the two symmetric
products, then there is a bijection $f$ between
$\widehat{\mathcal{O}}_{\mathcal{H}_1}$ and $\widehat{\mathcal{O}}_{\mathcal{H}_2}$, such that for
every $\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}_1}$, the
homology $H_*(\widehatcfl_{\mathcal{H}_1}(L,\mathbb{Z},\mathfrak{o}))$ is isomorphic to
the homology $H_*(\widehatcfl_{\mathcal{H}_2}(L,\mathbb{Z},f(\mathfrak{o})))$, as
$(l+1)$-graded groups.
\end{thm}
\begin{proof}
Neither this type of theorem nor the idea of its proof is new.
For the first part, let $\mathfrak{o}_1$ and $\mathfrak{o}_2$ be two weakly
equivalent orientation systems. We are going to define a map
$t:\mathcal{G}_{\mathcal{H}}\rightarrow\{\pm 1\}$ in the following way. Call
two generators $x$ and $y$ to be connected if there is an empty
domain $D\in\mathcal{D}^0(x,y)$. For each connected component of
$\mathcal{G}_{\mathcal{H}}$, choose a generator $x$ in that connected
component, and declare $t(x)=1$. For every other generator $y$ in
that connected component, choose an empty domain
$D_y\in\mathcal{D}^0(x,y)$, and declare $t(y)=1$ if $\mathfrak{o}_1(D_y)$
agrees with $\mathfrak{o}_2(D_y)$, and $t(y)=-1$ otherwise. Since
$\mathfrak{o}_1$ and $\mathfrak{o}_2$ agree on all the empty periodic domains,
$t$ is a well-defined function. Furthermore, for any empty Maslov
index $1$ domain $D\in\mathcal{D}^0(x,y)$, the contribution
$c_{\mathfrak{o}_1}(D)$ coming from $\mathfrak{o}_1$ is related to the
contribution $c_{\mathfrak{o}_2}(D)$ coming from $\mathfrak{o}_2$ by the
equation $c_{\mathfrak{o}_1}(D)=t(x)t(y)c_{\mathfrak{o}_2}(D)$. That shows that
the two chain complexes are isomorphic via the map $x\mapsto t(x)x$.
For the second part of the theorem, recall the well-known fact that
if two Heegaard diagrams $\mathcal{H}_1$ and $\mathcal{H}_2$ represent the
same link $L$, such that each component of the link has the same
number of $X$ and $O$ markings in both the Heegaard diagrams, then
they can be related to one another by a sequence of isotopies,
handleslides, and stabilizations. This essentially follows from
\cite[Proposition 7.1]{POZSz} and \cite[Lemma
2.4]{CMPOSS}. However, during the isotopies, we do not require the
$\alpha$ circles to remain transverse to the $\beta$ circles.
Therefore, we can assume that $\mathcal{H}_1$ and $\mathcal{H}_2$ are related
by one of the following elementary moves: changing the path of
almost complex structures $J_s$ by an isotopy $J_{s,t}$; a
stabilization in a neighborhood of a marked point; a sequence of
isotopies and handleslides of the $\alpha$ circles in the complement of the
marked points; or a sequence of isotopies and handleslides of the
$\beta$ circles in the complement of the
marked points.
For the case of a stabilization, or an isotopy of the
path of almost complex structures, there is a natural identification
between $\mathcal{P}^0_{\mathcal{H}_1}$ and $\mathcal{P}^0_{\mathcal{H}_2}$, and a natural
identification of the determinant line bundles over the corresponding
empty periodic domains. Since a weak equivalence class of an
orientation system is determined by its values on the empty periodic
domains, this produces a natural identification between
$\widehat{\mathcal{O}}_{\mathcal{H}_1}$ and $\widehat{\mathcal{O}}_{\mathcal{H}_2}$. The proof that
the two homologies are isomorphic for the corresponding orientation
systems is immediate for the case of a stabilization, and follows from
the usual arguments of \cite{POZSz} for the other cases. We
do not encounter any new problems, since boundary degenerations are
still ruled out by the marked points.
For the remaining cases, namely, the case of isotopies and
handleslides of $\alpha$ circles or $\beta$ circles, the isomorphism is
established by counting holomorphic triangles. Let us assume that the
$\alpha$ circles are changed to the $\gamma$ circles by a sequence of
isotopies and handleslides in the complement of the marked points. Out
of the $2^{g+k-1}$ weak equivalence classes of orientation systems in
the Heegaard diagram $\mathcal{H}_3=(\Sigma,\gamma,\alpha,z,w)$, there is a
unique one $\mathfrak{o}_3$, for which the homology of $\mathcal{H}_3$ is
torsion-free. Each empty periodic domain in $\mathcal{H}_2$ can be written
uniquely as a sum of empty periodic domains in $\mathcal{H}_1$ and
$\mathcal{H}_3$. Therefore, we have a natural bijection between
$\widehat{\mathcal{O}}_{\mathcal{H}_1}$ and $\widehat{\mathcal{O}}_{\mathcal{H}_2}$: given an
orientation system $\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}_1}$, we can patch it
with $\mathfrak{o}_3$, to get an orientation system
$f(\mathfrak{o})\in\widehat{\mathcal{O}}_{\mathcal{H}_2}$. The triangle map, evaluated on
the top generator of the homology of $\mathcal{H}_3$, provides the required
isomorphism between the homology of $\mathcal{H}_1$ and the homology of
$\mathcal{H}_2$, for the corresponding orientation systems. The same proof
from \cite{POZSz} goes through without any problems since we do not encounter any boundary degenerations.
\end{proof}
Let $\vec{m}=(m_1,\ldots,m_l)$. The above theorem shows that
$H_*(\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o}))$ is an invariant of the link $L$
inside the three-manifold, a choice of a weak equivalence class of an
orientation system $\mathfrak{o}$, and the vector $\vec{m}$. Let us
henceforth denote the homology as $\widehathfl_{\vec{m}}(L,\mathbb{Z},\mathfrak{o})$. We
now investigate the dependence of
$\widehathfl_{\vec{m}}(L,\mathbb{Z},\mathfrak{o})$ on $\vec{m}$.
\begin{thm}\label{thm:secondmain}
Let $\mathcal{H}$ be a Heegaard diagram for a link $L$, where the $i^{\text{th}}$
component $L_i$ is represented by $m_i$ $X$ markings and $m_i$ $O$
markings, and let $\mathcal{H}'$ be a Heegaard diagram for the same link,
where $L_i$ is represented by $m'_i=(m_i+\delta_{i_0 i})$ $X$
markings and $m'_i$ $O$ markings, for some fixed $i_0$. Then
there is a bijection $f$ between $\widehat{\mathcal{O}}_{\mathcal{H}}$ and
$\widehat{\mathcal{O}}_{\mathcal{H}'}$ such that for every weak equivalence class
of orientation systems $\mathfrak{o}$, $\widehathfl_{\vec{m}'}(L,\mathbb{Z},\mathfrak{o})$ is
isomorphic to $\widehathfl_{\vec{m}}(L,\mathbb{Z},f(\mathfrak{o}))\otimes Q_{i_0}$ as
$(l+1)$-graded groups.
\end{thm}
\begin{proof}
Consider the Riemann sphere $S$ with one $\alpha$ circle and one
$\beta$ circle, intersecting each other at two points $p$ and $q$.
Put two $X$ markings, one $O$ marking and one $W$ marking, one in
each of the four elementary domains of
$S\setminus(\alpha\cup\beta)$, such that the boundary of either of
the two elementary domains that contain an $X$ marking runs from
$p$ to $q$ along the $\alpha$ circle, and from $q$ to $p$ along the
$\beta$ circle. Remove a small disk in the neighborhood of the point
$W$. In the Heegaard diagram $\mathcal{H}$, choose an $X$ marking that
lies in $L_{i_0}$, and remove a small disk in the neighborhood of
that point. Then connect the diagram $\mathcal{H}$ to the sphere $S$ via
the `neck' $S^1\times [0,T]$ to get a new Heegaard diagram for the
same link, where $L_i$ is represented by $m'_i$ $X$
markings, and $m'_i$ $O$ markings. This process is
shown in Figure \ref{fig:stabilization}. By Theorem \ref{thm:main},
we can assume that the new Heegaard diagram is $\mathcal{H}'$. There is a
natural correspondence between $\mathcal{P}^0_{\mathcal{H}}$ and
$\mathcal{P}^0_{\mathcal{H}'}$, and this induces the bijection $f$ between
$\widehat{\mathcal{O}}_{\mathcal{H}}$, and $\widehat{\mathcal{O}}_{\mathcal{H}'}$.
\begin{figure}
\psfrag{z}{$X$}
\psfrag{w}{$O$}
\psfrag{p}{$p$}
\psfrag{q}{$q$}
\psfrag{a}{$\alpha$}
\psfrag{b}{$\beta$}
\begin{center}
\includegraphics[width=\textwidth]{stabilization}
\end{center}
\caption{The Heegaard diagrams $\mathcal{H}$ and $\mathcal{H}'$.}\label{fig:stabilization}
\end{figure}
Fix $\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}}$. As $(l+1)$-graded groups,
$\widehatcfl_{\mathcal{H}'}(L,\mathbb{Z},\mathfrak{o})=\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},f(\mathfrak{o}))\otimes(\mathbb{Z}\oplus\mathbb{Z})$,
where one $\mathbb{Z}$ corresponds to all the generators that contain the
point $p$, and has $(M,A_1,\ldots,A_l)$ multi-grading
$(0,0,\ldots,0)$, and the other $\mathbb{Z}$ corresponds to all the generators
that contain the point $q$, and has $(M,A_1,\ldots,A_l)$ multi-grading
$(-1,-\delta_{i_0 1},\ldots,-\delta_{i_0 l})$. We simply need to show
that the same identity holds as chain complexes. For this, it is
enough to show that there are no boundary maps from the generators
that contain the point $p$ to the generators that contain the point
$q$.
Following the arguments from \cite{POZSzlinkinvariants}, we extend the
`neck length' $T$, and move the point $W$ close to the $\alpha$ circle
in $S$. After choosing $T$ sufficiently large and $W$ sufficiently
close to the $\alpha$ circle, if there is an empty positive Maslov
index $1$ domain $D$, joining a generator containing $p$ to a
generator containing $q$, such that $c(D)\neq 0$, then $D$ must
correspond to a positive, Maslov index $2$ domain in $\mathcal{H}$ that
avoids all the $O$ markings and whose boundary lies entirely on the
$\alpha$ circles. However, any non-trivial domain in $\mathcal{H}$ whose
boundary lies entirely on the $\alpha$ circles must have non-zero
coefficients at some $O$ marking, thus producing a contradiction, and
thereby finishing the proof.
\end{proof}
Henceforth, denote $\widehathfl_{(1,\ldots,1)}(L,\mathbb{Z},\mathfrak{o})$ by $\widehathfl(L,\mathbb{Z},\mathfrak{o})$.
Theorems \ref{thm:main} and \ref{thm:secondmain} imply:
\begin{thm}\label{thm:fourthmain}
Let $\mathcal{H}$ be a Heegaard diagram for a link $L\subset S^3$ with
$l$ components, such that the $i^{\text{th}}$ component $L_i$ is represented
by exactly $m_i$ $X$ markings, and exactly $m_i$ $O$ markings. Then
the $2^{l-1}$ homology groups $\widehathfl_{\vec{m}}(L,\mathbb{Z},\mathfrak{o})$ are isomorphic to
the $2^{l-1}$ groups
$\widehathfl(L,\mathbb{Z},\mathfrak{o})\otimes_i(\otimes^{m_i-1}Q_i)$.
\end{thm}
We are almost done with the construction that we set out to do. Given a link $L\subset S^3$ with $l$ components, we have produced $2^{l-1}$ $\mathbb{Z}$-valued $(l+1)$-graded homology groups $\widehathfl(L,\mathbb{Z},\mathfrak{o})$. We would like to finish this section by showing that when we combine the $l$ Alexander gradings into one, we get the $2^{l-1}$ $\mathbb{Z}$-valued bi-graded homology groups $\widehathfk(L,\mathbb{Z},\mathfrak{o})$. Recall that the groups $\widehathfk(L,\mathbb{Z},\mathfrak{o})$ are constructed by viewing the link $L\subset Y$ as a knot in $Y\#^{l-1}(S^1\times S^2)$, and then looking at the knot Floer homology. Therefore, the following theorem is all that we need.
\begin{thm}\label{thm:connectsum}
Let $\mathcal{H}$ be a Heegaard diagram for a link $L\subset Y$ with
$(l+1)$ components, such that each component is represented by one
$X$ and one $O$ marking. Let $\widetilde{L}$ be the link with $l$
components in $Y\#(S^1\times S^2)$, whose $l^{\text{th}}$ component
$\widetilde{L}_l$ is obtained by connect summing $L_{l+1}$ and $L_l$
through the one-handle, and let $\widetilde{\mathcal{H}}$ be a Heegaard diagram
for $\widetilde{L}$, where $\widetilde{L}_i$ is represented by $(1+\delta_{il})$
$X$ markings and $(1+\delta_{il})$ $O$ markings. Then, there is a
bijection $f$ between $\widehat{\mathcal{O}}_{\mathcal{H}}$ and
$\widehat{\mathcal{O}}_{\widetilde{\mathcal{H}}}$, such that for all
$\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}}$,
$H_*(\widehatcfl_{\widetilde{\mathcal{H}}}(\widetilde{L},\mathbb{Z},f(\mathfrak{o})))=\widehathfl(L,\mathbb{Z},\mathfrak{o})\otimes
Q_l$ as $(l+1)$-graded groups, where the $(l+1)$ gradings on the
left hand side are $(M,A_1,\ldots,A_{l-1},A_l+A_{l+1})$.
\end{thm}
\begin{proof}
This proof is very similar to the proof of Theorem
\ref{thm:secondmain}. Once more, consider the Riemann sphere $S$
with one $\alpha$ circle and one $\beta$ circle, intersecting each
other at two points $p$ and $q$. Put two $X$ markings and two $W$
markings, one in each of the four elementary domains of
$S\setminus(\alpha\cup\beta)$, such that the boundary of either of
the two elementary domains that contain an $X$ marking runs from
$p$ to $q$ along the $\alpha$ circle, and from $q$ to $p$ along the
$\beta$ circle. Remove two small disks in the neighborhoods of the
$W$ markings. In the Heegaard diagram $\mathcal{H}$, remove two small
disks in the neighborhoods of the two $X$ markings
that lie in $L_l$ and $L_{l+1}$. Then connect $\mathcal{H}$ to the sphere
$S$ via the two `necks,' $S^1\times[0,T_1]$ and $S^1\times[0,T_2]$,
as shown in Figure \ref{fig:connectsum}. The resulting picture is a
Heegaard diagram for the link $\widetilde{L}\subset Y\#(S^1\times S^2)$,
where the $i^{\text{th}}$ component $\widetilde{L}_i$ is represented by
$(1+\delta_{il})$ $X$ markings and $(1+\delta_{il})$ $O$ markings.
By virtue of Theorem \ref{thm:main}, we can assume that this
Heegaard diagram is $\widetilde{\mathcal{H}}$.
\begin{figure}
\psfrag{z}{$X$}
\psfrag{p}{$p$}
\psfrag{q}{$q$}
\psfrag{a}{$\alpha$}
\psfrag{b}{$\beta$}
\begin{center}
\includegraphics[width=\textwidth]{connectsum}
\end{center}
\caption{The Heegaard diagrams $\mathcal{H}$ and $\widetilde{\mathcal{H}}$.}\label{fig:connectsum}
\end{figure}
An empty periodic domain in $\mathcal{H}$ gives rise to an empty periodic
domain in $\widetilde{\mathcal{H}}$. In the other direction, an empty periodic
domain in $\widetilde{\mathcal{H}}$ gives rise to a periodic domain in $\mathcal{H}$
which does not pass through any of the $O$ markings. Since each
component of the link $L$ is null-homologous in $Y$, such a periodic
domain is an empty periodic domain. Therefore, there is a natural
correspondence between the empty periodic domains of $\mathcal{H}$ and
$\widetilde{\mathcal{H}}$, and this induces the bijection $f$ between
$\widehat{\mathcal{O}}_{\mathcal{H}}$ and $\widehat{\mathcal{O}}_{\widetilde{\mathcal{H}}}$.
Fix $\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}}$. It
is immediate that as $(l+1)$-graded groups,
$\widehatcfl_{\widetilde{\mathcal{H}}}(\widetilde{L},\mathbb{Z},f(\mathfrak{o}))=\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})\otimes Q_l$.
However, quite like the case of Theorem \ref{thm:secondmain}, for
sufficiently large `neck lengths' $T_1$ and $T_2$, and with the two
$W$ markings sufficiently close to the $\alpha$ circle on $S$, the
above identity holds even as chain complexes.
\end{proof}
Before we conclude this section, a note regarding absolute gradings is
in order. So far, we have worked with a relative Maslov grading and relative
Alexander gradings. However, for links in $S^3$, and for links in
$\#^m (S^1\times S^2)$ that we obtain from links in $S^3$ by the
connect sum process described in Theorem \ref{thm:connectsum}, there
is a well-defined way to lift these gradings to absolute gradings, as
defined in \cite[Theorem 7.1]{POZSz4manifolds}, \cite[Subsections 3.3
and 3.4]{POZSzknotinvariants} and \cite[Lemma 4.6 and Equation
24]{POZSzlinkinvariants}. Since this is an oft-studied scenario, for
such links, let us improve the earlier theorems, and
henceforth work with absolute gradings.
\begin{lem}\label{lem:absolutegrading}
For links in $\#^m(S^1\times S^2)$ that come from links in $S^3$ by
the connect sum operation as described in Theorem
\ref{thm:connectsum}, the isomorphisms in Theorems \ref{thm:main},
\ref{thm:secondmain}, \ref{thm:fourthmain} and \ref{thm:connectsum}
preserve the absolute gradings.
\end{lem}
\begin{proof}
Recall that the isomorphisms in question come from chain maps that
preserve the relative gradings. Therefore, each such chain map must
shift each absolute grading by a fixed integer on the entire
chain complex. We want to show that each of these shifts is zero.
Since the absolute gradings are defined on the generators
themselves, this shift is unchanged if instead of working over $\mathbb{Z}$,
we tensor everything with $\mathbb{F}_2$ and work over $\mathbb{F}_2$. However,
since the Heegaard Floer homology of $\#^m(S^1\times S^2)$ is
non-trivial over $\mathbb{F}_2$, in each case, the homology of the entire
chain complex is non-trivial over $\mathbb{F}_2$. Furthermore, the
maps induced on the homology over $\mathbb{F}_2$ preserve the absolute gradings
\cite{POZSz4manifolds}, \cite{POZSzknotinvariants},
\cite{POZSzlinkinvariants}. Therefore, all the shifts are zero, and each
of the chain maps preserves all the gradings.
\end{proof}
\section{Grid diagrams}\label{sec:griddiagrams}
A \emph{planar grid diagram of index $N$} is the square
$S=[0,N]\times[0,N]\subset\mathbb{R}^2$, with the following additional
structures: if $1\leq i\leq N$, the horizontal line $y=(i-1)$ is
called $\alpha_i$, the $i^{\text{th}}$ $\alpha$ arc, and the vertical line
$x=(i-1)$ is called $\beta_i$, the $i^{\text{th}}$ $\beta$ arc; there are $2N$
markings, denoted by $X_1,\ldots,X_N,O_1,\ldots,O_N$, such that each
component of $S\setminus(\bigcup_i\alpha_i)$ contains one $X$ marking and
one $O$ marking, and each component of $S\setminus(\bigcup_i\beta_i)$
contains one $X$ marking and one $O$ marking.
A \emph{toroidal grid diagram of index $N$} is obtained from a planar
grid diagram of the same index by identifying the opposite sides of
the square $S$ to form a torus $T$. A careful reader will immediately
observe that this creates a Heegaard diagram $\mathcal{H}$ for some link
$L$ in $S^3$, and for the rest of the section, we will work with this
Heegaard diagram. The $\alpha$ arcs and the $\beta$ arcs become full
circles, and they are the $\alpha$ circles and the $\beta$ circles
respectively; the $N$ components of $T\setminus(\bigcup_i\alpha_i)$ are called
the \emph{horizontal annuli}, and each of them contains one $X$
marking and one $O$ marking; the horizontal annulus with $\alpha_i$ as
the circle on the bottom is called the $i^{\text{th}}$ horizontal annulus, and
is denoted by $H_i$; the $N$ components of $T\setminus(\bigcup_i\beta_i)$ are
called the \emph{vertical annuli}, and each of them also contains one
$X$ marking and one $O$ marking; the vertical annulus with $\beta_i$
as the circle on the left is called the $i^{\text{th}}$ vertical annulus,
and is denoted by $V_i$; the $N^2$ components of
$T\setminus\bigcup_i(\alpha_i\cup\beta_i)$ are the elementary
domains. Therefore, the link $L$ that the toroidal grid diagram
represents can be obtained in the following way. We assume that the
toroidal grid diagram comes from a planar grid diagram on the square
$S$. Then in each component of $S\setminus(\bigcup_i\alpha_i)$, we join the $X$
marking to the $O$ marking by an embedded arc, and in each component
of $S\setminus(\bigcup_i\beta_i)$, we join the $O$ marking to the $X$ marking
by an embedded arc, and at every crossing, we declare the arc that
joins $O$ to $X$ to be the overpass. Henceforth, we also assume that
the link $L$ has $l$ components, and the $i^{\text{th}}$ component $L_i$ is
represented by $m_i$ $X$ markings and $m_i$ $O$ markings, and $\sum_i
m_i =N$.
There is only one $Spin^C$ structure, so generators in $\mathcal{G}_{\mathcal{H}}$
correspond to the permutations in $\mathfrak{S}_N$ as follows: a generator
$x=(x_1,\ldots,x_N)\in\mathcal{G}_{\mathcal{H}}$ comes from the permutation
$\sigma\in\mathfrak{S}_N$, where $x_i=\alpha_i\cap\beta_{\sigma(i)}$ for
each $1\leq i\leq N$. The $N$ points $x_1,\ldots,x_N$ are called the
\emph{coordinates} of the generator $x$.
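The correspondence between generators and permutations is concrete enough to enumerate directly. The following Python sketch is purely illustrative (the function name and tuple encoding are ours, not from the text): it encodes a generator by the tuple $(\sigma(1),\ldots,\sigma(N))$ and confirms that an index-$N$ grid diagram has exactly $N!$ generators.

```python
import itertools
import math

def generators(N):
    """Enumerate the generators of an index-N toroidal grid diagram.

    A generator x = (x_1, ..., x_N) chooses one intersection point
    x_i = alpha_i ∩ beta_{sigma(i)} on each alpha circle, so generators
    correspond bijectively to permutations sigma in S_N; we encode a
    generator as the tuple (sigma(1), ..., sigma(N)).
    """
    return list(itertools.permutations(range(1, N + 1)))

# Since there is a single Spin^c structure, every permutation occurs:
# an index-N grid diagram has exactly N! generators.
assert len(generators(4)) == math.factorial(4)
```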
Let $\mathfrak{j}$ be the complex structure on $T$ induced from the standard
complex structure on $S\subset\mathbb{C}$, and let $J_s$ be the constant path
of almost complex structure $\text{Sym}^N(\mathfrak{j})$ on $\text{Sym}^N(T)$. After a
slight perturbation of the $\alpha$ and the $\beta$ circles, we can ensure
that $J_s$ achieves transversality for all domains up to Maslov index
two \cite[Lemma 3.10]{RL}. Henceforth, we work with these perturbed
$\alpha$ and $\beta$ circles and this path of nearly symmetric almost
complex structure.
Consider the $2^{l-1}$ chain complexes $\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})$. The boundary maps in each of the chain complexes correspond to objects called \emph{rectangles}. A rectangle $R$ joining a generator $x$ to a generator $y$ is a $2$-chain generated by the elementary domains of $\mathcal{H}$, such that the following conditions are satisfied: $R$ only has coefficients $0$ and $1$; the closure of the union of the elementary domains where $R$ has coefficient $1$ is a disk embedded in $T$ with four corners, or in other words, it looks like a rectangle; the top-right corner and the bottom-left corner of $R$ are coordinates of $x$; the top-left corner and the bottom-right corner of $R$ are coordinates of $y$; the generators $x$ and $y$ share $(N-2)$ coordinates; and $R$ does not contain any coordinates of $x$ or any coordinates of $y$ in its interior. It is easy to check that the rectangles are precisely the positive Maslov index one domains. We denote the set of all rectangles joining $x$ to $y$ by $\mathcal{R}(x,y)\subset \mathcal{D}(x,y)$. The set $\mathcal{R}(x,y)$ is empty unless $x$ and $y$ differ in exactly two coordinates, and even then, $\left|\mathcal{R}(x,y)\right|\leq 2$.
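Since a rectangle requires $x$ and $y$ to share $(N-2)$ coordinates, a cheap combinatorial filter already rules out most pairs of generators. A minimal Python sketch of that necessary condition (the helper name and encoding are ours, for illustration only):

```python
def differ_in_exactly_two(x, y):
    """Necessary condition for R(x, y) to be non-empty: the generators,
    encoded as permutation tuples, must share exactly (N - 2)
    coordinates, i.e. differ in exactly two positions."""
    assert len(x) == len(y)
    return sum(1 for a, b in zip(x, y) if a != b) == 2

# (1,2,3) and (2,1,3) differ in two coordinates, so a rectangle joining
# them is not ruled out; (1,2,3) and (2,3,1) differ in three, so
# R(x, y) is empty.
assert differ_in_exactly_two((1, 2, 3), (2, 1, 3))
assert not differ_in_exactly_two((1, 2, 3), (2, 3, 1))
```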
\begin{lem}\cite[Theorem 1.1]{CMPOSS}\label{lem:maslovone}
If $D\in\mathcal{D}(x,y)$ is a domain with $\mu(D)\leq 0$, then the
unparametrized moduli space $\widehat{\mathcal{M}_{J_s}}(D)$ is empty. If
$D\in\mathcal{D}(x,y)$ is a Maslov index one domain such that
$\widehat{\mathcal{M}_{J_s}}(D)$ is non-empty, then $D$ is a
rectangle. Conversely, if $R\in\mathcal{R}(x,y)$ is a rectangle, then
$\widehat{\mathcal{M}_{J_s}}(R)$ consists of exactly one point, and hence
$\left|c(R)\right|=1$.
\end{lem}
If $D\in\mathcal{D}(x,y)$, we say that $D$ can be decomposed as a sum of
two rectangles if there exists a generator $z\in\mathcal{G}_{\mathcal{H}}$ and
rectangles $R_1\in\mathcal{R}(x,z)$ and $R_2\in\mathcal{R}(z,y)$ such that
$D=R_1+R_2$. It is easy to check that the domains that can
be decomposed as a sum of two rectangles are precisely the
positive Maslov index two domains. For any generator $x\in\mathcal{G}_{\mathcal{H}}$,
there are exactly $2N$ Maslov index two positive domains in
$\mathcal{D}(x,x)$, namely the ones coming from the horizontal annuli
$H_1,\ldots,H_N$ and the vertical annuli $V_1,\ldots,V_N$.
\begin{lem}\label{lem:maslovtwo}
If $D\in\mathcal{D}(x,y)$ is a Maslov index two domain such that
$\widehat{\mathcal{M}_{J_s}}(D)$ is non-empty, then $D$ can be decomposed as a
sum of two rectangles. Conversely, if $D\in\mathcal{D}(x,y)$ can be
decomposed as a sum of two rectangles, then $\widehat{\mathcal{M}_{J_s}}(D)$
is a compact $1$-dimensional manifold with exactly two
endpoints. Furthermore, if $x=y$ (i.e. if $D$ comes from a
horizontal or a vertical annulus), then one of the endpoints
corresponds to the unique way of decomposing $D$ as a sum of two
rectangles, while the other endpoint corresponds to an $\alpha$ or a
$\beta$ boundary degeneration; and if $x\neq y$, then $D$ can be
decomposed as a sum of two rectangles in exactly two ways, and the
two endpoints correspond to the two decompositions.
\end{lem}
Lemma \ref{lem:maslovone} implies that once we choose an orientation system $\mathfrak{o}$ (and not just a weak equivalence class of orientation systems), we get a function $c_{\mathfrak{o}}$ from the set of all rectangles to $\{-1,1\}$. Lemma \ref{lem:maslovtwo} in conjunction with Lemma \ref{lem:index2} implies that if a domain $D\in\mathcal{D}(x,y)$ can be decomposed as a sum of two rectangles in two different ways $D=R_1+R_2=R_3+R_4$, then $c_{\mathfrak{o}}(R_1)c_{\mathfrak{o}}(R_2)=-c_{\mathfrak{o}}(R_3)c_{\mathfrak{o}}(R_4)$. This naturally leads to the definition of a sign assignment.
\begin{defn}
A \emph{sign assignment} $s$ is a function from the set of all rectangles to the set $\{-1,1\}$, such that the following condition is satisfied: if $x,y,z,z'\in\mathcal{G}_{\mathcal{H}}$ are distinct generators, and if $R_1\in\mathcal{R}(x,z)$, $R_2\in\mathcal{R}(z,y)$, $R'_1\in\mathcal{R}(x,z')$, $R'_2\in\mathcal{R}(z',y)$ are rectangles with $R_1+R_2=R'_1+R'_2$, then $s(R_1)s(R_2)=-s(R'_1)s(R'_2)$. Two sign assignments $s_1$ and $s_2$ are said to be \emph{gauge equivalent} if there is a function $t:\mathcal{G}_{\mathcal{H}}\rightarrow\{-1,1\}$, such that $s_1(R)=t(x)t(y)s_2(R)$, for all $x,y\in\mathcal{G}_{\mathcal{H}}$ and for all $R\in\mathcal{R}(x,y)$.
\end{defn}
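The defining relation of a sign assignment is gauge invariant: if $R_1+R_2=R'_1+R'_2$ with intermediate generators $z$ and $z'$, then replacing $s$ by $s'(R)=t(x)t(y)s(R)$ multiplies both products $s(R_1)s(R_2)$ and $s(R'_1)s(R'_2)$ by the same factor $t(x)t(y)$, since $t(z)^2=t(z')^2=1$. The toy Python check below (rectangles modelled abstractly as ordered pairs of generators, under our own encoding; a sketch, not the paper's construction) verifies this for every gauge function on a four-generator configuration.

```python
from itertools import product

def gauge(s, t):
    """Gauge-transform a sign assignment: s'(R) = t(x) t(y) s(R) for a
    rectangle R joining x to y.  Rectangles are modelled abstractly as
    (initial generator, final generator) pairs."""
    return {(x, y): t[x] * t[y] * sign for (x, y), sign in s.items()}

# A toy configuration x -> z -> y and x -> z' -> y realising the same
# Maslov index two domain; the defining relation forces opposite signs.
s = {('x', 'z'): 1, ('z', 'y'): 1, ('x', 'zp'): 1, ('zp', 'y'): -1}
assert s[('x', 'z')] * s[('z', 'y')] == -s[('x', 'zp')] * s[('zp', 'y')]

# The relation survives every gauge transformation, since the factors
# t(z)^2 and t(z')^2 cancel.
for values in product((1, -1), repeat=4):
    t = dict(zip(('x', 'y', 'z', 'zp'), values))
    s2 = gauge(s, t)
    assert s2[('x', 'z')] * s2[('z', 'y')] == -s2[('x', 'zp')] * s2[('zp', 'y')]
```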
In particular, a true sign assignment, as defined in \cite[Definition 4.1]{CMPOZSzDT}, is a sign assignment. Let $f$ be the map from the set of all orientation systems to the set of all sign assignments such that for all rectangles $R$, $f(\mathfrak{o})(R)=c_{\mathfrak{o}}(R)$. In this section, we will show that there are exactly $2^{2N-1}$ gauge equivalence classes of sign assignments on the grid diagram. We will put a weak equivalence on the sign assignments, which is weaker than the gauge equivalence. We will prove that there are exactly $2^{l-1}$ weak equivalence classes of sign assignments, and the map $f$ induces a bijection $\widetilde{f}$ between the set of weak equivalence classes of orientation systems and the set of weak equivalence classes of sign assignments. This will allow us to combinatorially calculate $\widehatcfl_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})$ for all $\mathfrak{o}\in\widehat{\mathcal{O}}_{\mathcal{H}}$, and thereby calculate $\widehathfl(L,\mathbb{Z})$ in all the $2^{l-1}$ versions. As a corollary, this will also show that any sign assignment (in particular, the one constructed in \cite{CMPOZSzDT}) computes $\widehathfl(L,\mathbb{Z},\mathfrak{o})$ for some orientation system $\mathfrak{o}$.
We have an explicit (although slightly artificial) correspondence between the generators in $\mathcal{G}_{\mathcal{H}}$ and the elements of the symmetric group $\mathfrak{S}_N$, whereby a permutation $\sigma\in\mathfrak{S}_N$ gives rise to the generator $x=(x_1,\ldots,x_N)$ with $x_i=\alpha_i\cap\beta_{\sigma(i)}$. There is the following very natural partial order on the permutations: a reduction of a permutation $\tau$ is a permutation obtained by pre-composing $\tau$ by some transposition $(i,j)$ where $i<j$ and $\tau(i)>\tau(j)$; the permutation $\sigma$ is declared to be smaller than the permutation $\tau$ if $\sigma$ can be obtained from $\tau$ by a sequence of reductions. This induces a partial order $\prec$ on the elements of $\mathcal{G}_{\mathcal{H}}$.
For $x,y\in\mathcal{G}_{\mathcal{H}}$, if $y\prec x$ and there does not exist any $z\in\mathcal{G}_{\mathcal{H}}$ such that $y\prec z\prec x$, then we say that $x$ \emph{covers} $y$, and write that as $y\leftarrow x$. If we view the toroidal grid diagram as one coming from a planar grid diagram on $S=[0,N]\times[0,N]$, then $y\leftarrow x$ precisely when there is a rectangle from $x$ to $y$ contained in the subsquare $S'=[0,N-1]\times[0,N-1]$.
The poset $(\mathcal{G}_{\mathcal{H}},\prec)$ is a well-understood object \cite{shellPHE}. There is a unique minimum $p\in\mathcal{G}_{\mathcal{H}}$, which corresponds to the identity permutation. In particular, the Hasse diagram of $(\mathcal{G}_{\mathcal{H}},\prec)$, viewed as an unoriented graph, is connected. There is a unique maximum $q\in\mathcal{G}_{\mathcal{H}}$, which corresponds to the permutation that maps $i$ to $(N+1-i)$. The poset is shellable, which means that there is a total ordering $<$ on the maximal chains, such that if $\mathfrak{m}_1$ and $\mathfrak{m}_2$ are two maximal chains with $\mathfrak{m}_1<\mathfrak{m}_2$, then there exists a maximal chain $\mathfrak{m}_3<\mathfrak{m}_2$ with $\mathfrak{m}_1\cap\mathfrak{m}_2\subseteq\mathfrak{m}_3\cap\mathfrak{m}_2=\mathfrak{m}_2\setminus\{z\}$ for some $z\in\mathfrak{m}_2$. This in particular implies that given any two maximal chains $\mathfrak{m}_1$ and $\mathfrak{m}_2$, we can get from $\mathfrak{m}_2$ to $\mathfrak{m}_1$ via a sequence of maximal chains, where we get from one maximal chain to the next by changing exactly one element.
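The reduction order is small enough to explore by brute force. The following Python sketch (our own encoding, with permutations as tuples) implements reductions and checks, for $N=4$, that the identity is the unique minimum and that $i\mapsto N+1-i$ is the unique maximum; shellability is not checked here.

```python
from itertools import combinations, permutations

def reductions(tau):
    """All permutations obtained from tau by one reduction:
    pre-compose tau with a transposition (i, j), i < j, where
    tau(i) > tau(j).  Permutations are tuples (tau(1), ..., tau(N))."""
    out = []
    for i, j in combinations(range(len(tau)), 2):
        if tau[i] > tau[j]:
            sigma = list(tau)
            sigma[i], sigma[j] = sigma[j], sigma[i]
            out.append(tuple(sigma))
    return out

def smaller(tau):
    """Everything strictly below tau in the partial order, found by
    applying reductions exhaustively."""
    seen, stack = set(), [tau]
    while stack:
        for red in reductions(stack.pop()):
            if red not in seen:
                seen.add(red)
                stack.append(red)
    return seen

N = 4
identity = tuple(range(1, N + 1))
longest = tuple(range(N, 0, -1))  # the permutation i -> N + 1 - i
everyone = set(permutations(range(1, N + 1)))
# The identity is the unique minimum, and the longest permutation is
# the unique maximum.
assert all(identity in smaller(tau) for tau in everyone - {identity})
assert smaller(longest) == everyone - {longest}
```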
Given a sign assignment $s$ and a generator $x\in\mathcal{G}_{\mathcal{H}}$, we define two functions $h_{s,x},v_{s,x}:\{1,\ldots,N\}\rightarrow\{-1,1\}$, called the \emph{horizontal function} and the \emph{vertical function}, as follows: let $D\in\mathcal{D}(x,x)$ be the Maslov index two positive domain which corresponds to the horizontal annulus $H_i$; then $D$ can be decomposed as a sum of two rectangles in a unique way, and we define the horizontal function $h_{s,x}(i)$ as the product of the signs of the two rectangles. The vertical function $v_{s,x}(i)$ is constructed similarly by considering the vertical annulus $V_i$ instead. Clearly, the horizontal and the vertical functions depend only on the gauge equivalence class of the sign assignment. The following theorem shows that the functions do not depend on the choice of the generator $x$; they will henceforth be denoted by $h_s$ and $v_s$.
\begin{thm}\label{thm:projection}
For any sign assignment $s$, for any two generators $x,y\in\mathcal{G}_{\mathcal{H}}$, and for any $1\leq i\leq N$, the horizontal and the vertical functions satisfy $h_{s,x}(i)=h_{s,y}(i)$ and $v_{s,x}(i)=v_{s,y}(i)$.
\end{thm}
\begin{proof}
Fix a sign assignment $s$, and fix $i\in\{1,\ldots,N\}$. We will only prove the statement for the vertical function; the argument for the horizontal function is very similar. Given $z\in\mathcal{G}_{\mathcal{H}}$, let $(z',R_z,R'_z)$ be the unique triple with $z'\in\mathcal{G}_{\mathcal{H}}$, $R_z\in\mathcal{R}(z,z')$ and $R'_z\in\mathcal{R}(z',z)$ such that $R_z+R'_z\in\mathcal{D}(z,z)$ comes from the vertical annulus $V_i$. We simply want to show that for any two generators $x,y\in\mathcal{G}_{\mathcal{H}}$, $s(R_x)s(R'_x)=s(R_y)s(R'_y)$. Recall the partial order on $\mathcal{G}_{\mathcal{H}}$. The corresponding Hasse diagram, when viewed as an unoriented graph, is connected; therefore, it is enough to prove the above statement when $y\leftarrow x$. Thus, we can assume that there exists a rectangle $R\in\mathcal{R}(x,y)$. We end the proof by considering the following two cases.
\begin{figure}
\begin{center}
\includegraphics[height=170pt]{projection}
\end{center}
\caption{The case when $y$ and $x'$ disagree in exactly $3$ or exactly $4$ coordinates. The coordinates of $x$, $y$, $x'$ and $y'$ are denoted by white circles, black circles, white squares and black squares, respectively.}\label{fig:projection}
\end{figure}
\emph{The generators $y$ and $x'$ disagree on none of the coordinates.} In this case, $y=x'$, $y'=x$, $R_x=R'_y$ and $R_y=R'_x$. The equality $s(R_x)s(R'_x)=s(R_y)s(R'_y)$ follows trivially.
\emph{The generators $y$ and $x'$ disagree on exactly three or exactly four coordinates.} In this case, there exists a rectangle $R'\in\mathcal{R}(x',y')$, such that $R_x+R'=R+R_y\in\mathcal{D}(x,y')$ and $R'_x+R=R'+R'_y\in\mathcal{D}(x',y)$. The three essentially different types of diagrams that might appear (up to a rotation by $180^{\circ}$) are illustrated in Figure \ref{fig:projection}. Therefore, $s(R_x)s(R')=-s(R)s(R_y)$ and $s(R'_x)s(R)=-s(R')s(R'_y)$. Multiplying, we get the required identity $s(R_x)s(R'_x)=s(R_y)s(R'_y)$.
\end{proof}
The following two theorems will establish that there are exactly $2^{2N-1}$ gauge equivalence classes of sign assignments. Let $\Phi$ be the map from the set of gauge equivalence classes of sign assignments to $\{-1,1\}^{2N-1}$ given by $s\rightarrow (h_s(1),\ldots,\allowbreak h_s(N),\allowbreak v_s(1),\ldots,v_s(N-1))$.
\begin{thm}\label{thm:equivalence}
Given functions $g_h,g_v:\{1,\ldots,N\}\rightarrow\{-1,1\}$, such that $\left|g^{-1}_v(1)\right|\equiv \left|g^{-1}_h(-1)\right|\pmod{2}$, there exists a sign assignment $s$, such that $g_h=h_s$ and $g_v=v_s$. Therefore, in particular, the function $\Phi$ from the set of gauge equivalence classes of sign assignments to $\{-1,1\}^{2N-1}$ is surjective.
\end{thm}
\begin{proof}
By \cite[Theorem 4.2]{CMPOZSzDT}, there exists a sign assignment $s_0$ such that $h_{s_0}(i)=1$ and $v_{s_0}(i)=-1$ for all $i\in\{1,\ldots,N\}$. Given $g_h,g_v:\{1,\ldots,N\}\rightarrow\{-1,1\}$ with $\left|g^{-1}_v(1)\right|\equiv \left|g^{-1}_h(-1)\right|\pmod{2}$, we would like to modify $s_0$ to get $s$, such that $g_h=h_s$ and $g_v=v_s$.
The general method that we employ to modify a sign assignment $s_1$ to get another sign assignment $s_2$, is the following: we start with a multiplicative $2$-cochain $m$ which assigns elements of $\{-1,1\}$ to the elementary domains; if $D$ is a $2$-chain generated by the elementary domains, then $\langle m,D\rangle$ is simply the evaluation of $m$ on $D$; then, for a rectangle $R\in\mathcal{R}(x,y)$, we define $s_2(R)$ to be $s_1(R)\langle m,R\rangle$. It is easy to see that $s_2$ is a sign assignment if and only if $s_1$ is a sign assignment.
We prove the statement by an induction on the number $n(g_v,g_h)=\frac{1}{2}(\left|g^{-1}_v(1)\right|+\left|g^{-1}_h(-1)\right|)$. For the base case, when $n(g_v,g_h)=0$, we can simply choose $s=s_0$.
Assuming that the induction hypothesis is proved for $n=k$, let $g_h,g_v:\{1,\ldots,N\}\allowbreak\rightarrow\{-1,1\}$ be functions with $n(g_v,g_h)=k+1$. Choose functions $\widetilde{g}_h,\widetilde{g}_v:\{1,\ldots,N\}\allowbreak\rightarrow\{-1,1\}$ such that $n(\widetilde{g}_v,\widetilde{g}_h)=k$ and $\left|\{i\mid g_v(i)\neq\widetilde{g}_v(i)\}\right|+\left|\{i\mid g_h(i)\neq\widetilde{g}_h(i)\}\right|=2$. By induction, there is a sign assignment $\widetilde{s}$ such that $\widetilde{g}_h=h_{\widetilde{s}}$ and $\widetilde{g}_v=v_{\widetilde{s}}$. If $\left|\{i\mid g_v(i)\neq\widetilde{g}_v(i)\}\right|=2$, consider the two vertical annuli corresponding to the two values where $g_v$ disagrees with $\widetilde{g}_v$, choose a horizontal annulus, and let $m$ be the $2$-cochain which assigns $(-1)$ to the two elementary domains where the horizontal annulus intersects the two vertical annuli, and $1$ to every other elementary domain. Similarly, if $\left|\{i\mid g_h(i)\neq\widetilde{g}_h(i)\}\right|=2$, consider the two horizontal annuli corresponding to the two values where $g_h$ disagrees with $\widetilde{g}_h$, choose a vertical annulus, and let $m$ be the $2$-cochain which assigns $(-1)$ to the two elementary domains where the vertical annulus intersects the two horizontal annuli, and $1$ to every other elementary domain. Finally, if $\left|\{i\mid g_v(i)\neq\widetilde{g}_v(i)\}\right|=\left|\{i\mid g_h(i)\neq\widetilde{g}_h(i)\}\right|=1$, consider the vertical annulus corresponding to the value where $g_v$ disagrees with $\widetilde{g}_v$, consider the horizontal annulus corresponding to the value where $g_h$ disagrees with $\widetilde{g}_h$, and let $m$ be the $2$-cochain which assigns $(-1)$ to the elementary domain where the vertical annulus intersects the horizontal annulus, and $1$ to every other elementary domain. Let $s$ be the sign assignment obtained from $\widetilde{s}$ by modifying it by the $2$-cochain $m$. 
It is fairly straightforward to check that $g_h=h_s$ and $g_v=v_s$.
\end{proof}
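As an illustrative aside, the $2$-cochain modification used in the proof above can be sketched in a few lines of Python. This is a toy model only (all names are hypothetical and not part of the paper's formalism): rectangles are modelled as sets of elementary domains, and modifying a sign assignment by a multiplicative $2$-cochain $m$ preserves the defining relation $s(R_1)s(R_2)=-s(R'_1)s(R'_2)$, because two decompositions of the same domain cover the same elementary domains.

```python
from math import prod

def evaluate(m, region):
    """<m, D>: evaluate the multiplicative 2-cochain m on a union of squares."""
    return prod(m[sq] for sq in region)

def modify(s, m):
    """s_2(R) = s_1(R) * <m, R>, as in the proof of the theorem above."""
    return {rect: sign * evaluate(m, rect) for rect, sign in s.items()}

# Toy data: R1 + R2 and R1p + R2p are two decompositions of the same domain
# (they cover the same elementary squares), so <m,R1><m,R2> = <m,R1p><m,R2p>.
R1  = frozenset({(0, 0)});           R2  = frozenset({(1, 0), (1, 1)})
R1p = frozenset({(0, 0), (1, 0)});   R2p = frozenset({(1, 1)})
s = {R1: 1, R2: 1, R1p: 1, R2p: -1}      # satisfies s(R1)s(R2) = -s(R1p)s(R2p)
m = {(0, 0): -1, (1, 0): 1, (1, 1): -1}  # an arbitrary 2-cochain

s2 = modify(s, m)
# The defining relation is preserved by the modification:
assert s2[R1] * s2[R2] == -s2[R1p] * s2[R2p]
```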
\begin{thm}\label{thm:uniqueness}
The function $\Phi$ from the set of gauge equivalence classes of sign assignments to $\{-1,1\}^{2N-1}$ is injective.
\end{thm}
\begin{proof}
For this proof, we will closely follow the corresponding proof from \cite{CMPOZSzDT}. However, that proof uses the permutahedron whose $1$-skeleton is the Cayley graph of the symmetric group, where the generators are the adjacent transpositions. In our proof, we will use a different simplicial complex, which is the order complex of the partial order $\prec$ on $\mathcal{G}_{\mathcal{H}}$.
Recall that the poset has a unique minimum $p$, and a unique maximum $q$. View the Hasse diagram of the poset as an oriented graph $\mathfrak{g}$. Choose a maximal tree $\mathfrak{t}$ with $p$ as a root, i.e. given any vertex $x$, there is a (unique) oriented path from $p$ to $x$ in $\mathfrak{t}$. The edges of $\mathfrak{g}$ correspond to the rectangles that are supported in $[0,N-1]\times[0,N-1]$. A sign assignment endows the edges of $\mathfrak{g}$ with signs $\pm 1$.
Let us choose a $(2N-1)$-tuple in $\{-1,1\}^{2N-1}$, and let $s$ be a sign assignment such that the $(2N-1)$-tuple equals $\Phi(s)$. We would like to show that the gauge equivalence class of $s$ is determined. Since $\mathfrak{t}$ is a tree, by replacing the sign assignment $s$ by a gauge equivalent one if necessary, we can assume that $s$ labels all the edges of $\mathfrak{t}$ with $1$'s. We will show that the values of $s$ on all the other edges are now determined.
Now consider any other edge $y\leftarrow x$ in $\mathfrak{g}$. Let $\mathfrak{c}_1$ be the unique oriented path from $p$ to $x$ in $\mathfrak{t}$, and let $\mathfrak{c}_2$ be the unique oriented path from $p$ to $y$ in $\mathfrak{t}$. Choose an oriented path $\mathfrak{c}_0$ from $x$ to $q$ in $\mathfrak{g}$. Let $\mathfrak{m}_1$ be the union of $\mathfrak{c}_1$ and $\mathfrak{c}_0$, and let $\mathfrak{m}_2$ be the union of $\mathfrak{c}_2$, the edge from $y$ to $x$, and $\mathfrak{c}_0$; these can be seen as maximal chains in $(\mathcal{G}_{\mathcal{H}},\prec)$. Clearly, $($the product of the signs on the edges in $\mathfrak{m}_1)\cdot($the product of the signs on the edges in $\mathfrak{m}_2)=($the product of the signs on the edges in $\mathfrak{c}_1)\cdot($the product of the signs on the edges in $\mathfrak{c}_2)\cdot($the sign on the edge from $y$ to $x)$. Since $\mathfrak{c}_1\cup\mathfrak{c}_2\subseteq\mathfrak{t}$, the signs on the edges of $\mathfrak{c}_1$ and $\mathfrak{c}_2$ are all $1$, so the sign on the edge from $y$ to $x$ equals $($the product of the signs on the edges in $\mathfrak{m}_1)\cdot($the product of the signs on the edges in $\mathfrak{m}_2)$. Since $(\mathcal{G}_{\mathcal{H}},\prec)$ is shellable, $\mathfrak{m}_2$ can be turned into $\mathfrak{m}_1$ through maximal chains by modifying one element at a time. Changing exactly one element of exactly one of the maximal chains negates the above product, so the product depends only on the graph $\mathfrak{g}$. Thus, $s$ is determined on all the edges of $\mathfrak{g}$.
Therefore, we have shown that there exists at most one sign assignment, up to gauge equivalence, on the rectangles that lie in the subsquare $S'=[0,N-1]\times[0,N-1]$. In fact, shellability of our poset also implies that there exists a sign assignment, but we do not need it. The rest of the proof for uniqueness is very similar to the proof from \cite{CMPOZSzDT}, but for the reader's convenience, we repeat the argument. Let $S''\subset T$ be the annular subspace corresponding to the rectangle $[0,N-1]\times[0,N]$ in the planar grid diagram. Next, we show that the value of $s$ is determined on all the rectangles that lie in $S''$.
This is done by an induction on the (horizontal) width of the rectangles. For the base case, if $R\in\mathcal{R}(x,y)$ is a rectangle of width one which is not supported in $S'$, then let $R'\in\mathcal{R}(y,x)$ be the unique rectangle such that $R+R'$ is a vertical annulus. The vertical function $v_s$ determines the product of the signs $s(R)s(R')$, and thereby the sign $s(R)$.
\begin{figure}
\begin{center}
\includegraphics[height=170pt]{uniqueness}
\end{center}
\caption{The induction step. The coordinates of $x$, $y$,
$y'$ and $z$ are denoted by white circles, white squares, black
squares and black circles, respectively.}\label{fig:uniqueness}
\end{figure}
Assuming that we have proved the uniqueness of sign assignments for all
the rectangles up to width $k$, let $R\in\mathcal{R}(x,y)$ be a width
$(k+1)$ rectangle. Let $R_1\in\mathcal{R}(y,z)$ be the width one rectangle
such that the bottom-left corner of $R_1$ is the top-left corner of
$R$. Then there exists a generator $y'\neq y$, a width one rectangle
$R'\in\mathcal{R}(x,y')$ and a width $k$ rectangle $R'_1\in\mathcal{R}(y',z)$,
such that $R+R_1=R'+R'_1\in\mathcal{D}(x,z)$. The situation is illustrated
in Figure \ref{fig:uniqueness}. By induction, the value of
$s$ is determined on $R_1$, $R'$ and $R'_1$. However,
$s(R)s(R_1)=-s(R')s(R'_1)$, and this determines the sign $s(R)$. This
completes the induction and shows that the value of the sign
assignment $s$ is fixed on all the rectangles that are supported
in $S''$. A similar argument, but with the diagrams rotated by
$90^{\circ}$, shows that the value of $s$ is, in fact, determined on
all the rectangles. This completes the proof of uniqueness.
\end{proof}
\begin{lem}\label{lem:product}
For any sign assignment $s$, the product $\prod_{i=1}^N h_s(i)v_s(i)$ equals $(-1)^N$.
\end{lem}
\begin{proof}
By Theorem \ref{thm:equivalence}, there exists a sign assignment $s'$ such that $h_{s'}=h_{s}$, $v_{s'}(i)=v_s(i)$ for $i\in\{1,\ldots,N-1\}$ and $v_{s'}(N)= (-1)^N h_s(N)\prod_{i=1}^{N-1} h_s(i)v_s(i)$. Since $\Phi(s)=\Phi(s')$, by Theorem \ref{thm:uniqueness}, $s$ and $s'$ are gauge equivalent. Therefore, $\prod_{i=1}^N h_s(i)v_s(i)=\prod_{i=1}^N h_{s'}(i)v_{s'}(i)=(-1)^N$.
\end{proof}
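For the reader's convenience, the final equality can be expanded. Substituting the definition of $v_{s'}(N)$ and using that every value of $h_s$ and $v_s$ squares to $1$:

```latex
\prod_{i=1}^{N} h_{s'}(i)v_{s'}(i)
  = \left(\prod_{i=1}^{N-1} h_s(i)v_s(i)\right) h_s(N)\cdot
    (-1)^N h_s(N)\prod_{i=1}^{N-1} h_s(i)v_s(i)
  = (-1)^N\, h_s(N)^2 \left(\prod_{i=1}^{N-1} h_s(i)v_s(i)\right)^{2}
  = (-1)^N.
```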
Fix a sign assignment $s$ and fix a link component $L_i$. Let $V(L_i)=\{j\mid \text{the }X\text{ marking in }V_j\text{ is in }L_i\}$ and let $H(L_i)=\{j\mid \text{the }X\text{ marking in }H_j\text{ is in }L_i\}$. The product $(\prod_{j\in H(L_i)}h_s(j))(\prod_{j\in V(L_i)}(-v_s(j)))$ is defined to be the \emph{sign of the link component $L_i$} and is denoted by $r_s(L_i)$.
Call two sign assignments $s_1$ and $s_2$ \emph{weakly equivalent} if $r_{s_1}$ agrees with $r_{s_2}$ on each of the link components. Clearly, if two sign assignments are gauge equivalent, then they are weakly equivalent. Due to Lemma \ref{lem:product}, the product of the signs of all the link components is $1$, and this is the only restriction on these numbers $r_s(L_i)$. Therefore, there are exactly $2^{l-1}$ weak equivalence classes of sign assignments. The following observation yields a direct proof that the chain complex $\widehat{\mathrm{CFL}}_{\mathcal{H}}(L,\mathbb{Z},\mathfrak{o})$ depends only on the weak equivalence class of the sign assignment $f(\mathfrak{o})$.
\begin{lem}
If two sign assignments $s_1$ and $s_2$ are weakly equivalent, then there exists a sign assignment $s'_2$, which is gauge equivalent to $s_2$, such that $s_1$ and $s'_2$ agree on all the rectangles that avoid the $X$ markings and the $O$ markings.
\end{lem}
\begin{proof}
Since $s_1$ and $s_2$ are weakly equivalent, a proof similar to the proof of Theorem \ref{thm:equivalence} shows that there exists a $2$-cochain $m$ which assigns $1$ to every elementary domain that does not contain any $X$ or $O$ markings, such that the sign assignment $s'_2$ obtained by modifying $s_1$ by the $2$-cochain $m$ satisfies
$h_{s_2}=h_{s'_2}$ and $v_{s_2}=v_{s'_2}$. Therefore, by Theorem \ref{thm:uniqueness}, $s'_2$ is gauge equivalent to $s_2$.
\end{proof}
\begin{thm}
The map $f$ from the set of orientation systems to the set of sign assignments induces a well-defined bijection $\widetilde{f}$ from the set of weak equivalence classes of orientation systems to the set of weak equivalence classes of sign assignments.
\end{thm}
\begin{proof}
Recall that two orientation systems $\mathfrak{o}_1$ and $\mathfrak{o}_2$ are weakly equivalent if and only if, for a fixed generator $x\in\mathcal{G}_{\mathcal{H}}$, $\mathfrak{o}_1$ agrees with $\mathfrak{o}_2$ on all the domains in $\mathcal{D}(x,x)$ that correspond to the empty periodic domains of $\mathcal{P}^0_{\mathcal{H}}$. Therefore, we need to find a basis for the empty periodic domains.
For each $i\in\{1,\ldots,l\}$, let $P_i=\sum_{j\in V(L_i)}V_j-\sum_{j\in H(L_i)}H_j$. These $l$ empty periodic domains generate $\mathcal{P}^0_{\mathcal{H}}$, and $\sum_i P_i=0$ is the only relation among these domains. Therefore, the domains $P_1,\ldots,P_{l-1}$ freely generate $\mathcal{P}^0_{\mathcal{H}}$.
If $D\in\mathcal{D}(x,x)$ is a domain which corresponds to a vertical annulus $V_i$, then we know from Paragraph \ref{para:special} that $\mathfrak{o}_1$ agrees with $\mathfrak{o}_2$ on $D$ if and only if $v_{f(\mathfrak{o}_1)}(i) = v_{f(\mathfrak{o}_2)}(i)$. A similar statement holds for the horizontal annuli. A repeated application of the same principle shows that if $D\in\mathcal{D}(x,x)$ corresponds to the empty periodic domain $P_i$, then $\mathfrak{o}_1$ agrees with $\mathfrak{o}_2$ on $D$ if and only if $r_{f(\mathfrak{o}_1)}(L_i)=r_{f(\mathfrak{o}_2)}(L_i)$. Therefore, the orientation systems $\mathfrak{o}_1$ and $\mathfrak{o}_2$ are weakly equivalent if and only if the sign assignments $f(\mathfrak{o}_1)$ and $f(\mathfrak{o}_2)$ are weakly equivalent. This shows that the map in question is well-defined and injective. As both sets have $2^{l-1}$ elements, it is a bijection.
\end{proof}
A consequence of the theorems in this section is the following.
\begin{thm}
There is a bijection $\widetilde{f}$ between the weak equivalence classes of orientation systems and the weak equivalence classes of sign assignments, such that for each of the $2^{l-1}$ weak equivalence classes of orientation systems $\mathfrak{o}$, the homology of the grid chain complex, evaluated with the sign assignment $f(\mathfrak{o})$, is isomorphic as an absolutely $(l+1)$-graded group to $\widehat{\mathrm{HFL}}(L,\mathbb{Z},\mathfrak{o})\otimes_i(\otimes^{m_i-1}Q_i)$.
\end{thm}
\begin{figure}
\psfrag{x}{$X$}
\psfrag{o}{$O$}
\psfrag{xo}{$XO$}
\begin{center}
\includegraphics[width=0.8\textwidth]{grid}
\end{center}
\caption{Grid diagrams for the two-component unlink and the Hopf link.}\label{fig:grid}
\end{figure}
Let us conclude with a couple of examples. The first grid diagram in Figure \ref{fig:grid} represents the two-component unlink. There are exactly two generators and exactly two rectangles connecting the two generators. One weak equivalence class assigns the same sign to both rectangles, while the other assigns opposite signs. Therefore, for one weak equivalence class of orientation systems, the homology is $\mathbb{Z}/2\mathbb{Z}$, while for the other weak equivalence class of orientation systems, the homology is $\mathbb{Z}\oplus\mathbb{Z}$.
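As an aside, the dichotomy in the unlink example can be checked mechanically. The following is a small illustrative script (the helper name is hypothetical, not code from this paper): the complex has two generators $x,y$ and differential $\partial x=(s(R_1)+s(R_2))\,y$, so its homology is $\mathbb{Z}\oplus\mathbb{Z}$ when the signs cancel and $\mathbb{Z}/2\mathbb{Z}$ when they agree.

```python
def unlink_homology(s1, s2):
    """Homology of the two-term complex Z --(s1+s2)--> Z, as a string."""
    d = s1 + s2
    if d == 0:
        return "Z + Z"         # zero differential: both generators survive
    return f"Z/{abs(d)}Z"      # ker = 0 at x, coker = Z/|d|Z at y

assert unlink_homology(1, 1) == "Z/2Z"    # same signs on the two rectangles
assert unlink_homology(1, -1) == "Z + Z"  # opposite signs
```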
The second grid diagram in Figure \ref{fig:grid} represents the Hopf link. There are twenty-four generators and sixteen rectangles. It can be checked by direct computation that the homology is independent of the sign assignment. Therefore, the link Floer homology of the Hopf link is the same for both the weak equivalence classes of orientation systems.
\end{document}
\begin{document}
\author{W.~D.~Gillam}
\address{Department of Mathematics, Brown University, Providence, RI 02912}
\email{[email protected]}
\date{\today}
\title{Localization of ringed spaces}
\begin{abstract} Let $X$ be a ringed space together with the data $M$ of a set $M_x$ of prime ideals of $\mathcal O_{X,x}$ for each point $x \in X$. We introduce the localization of $(X,M)$, which is a locally ringed space $Y$ and a map of ringed spaces $Y \to X$ enjoying a universal property similar to the localization of a ring at a prime ideal. We use this to prove that the category of locally ringed spaces has all inverse limits, to compare them to the inverse limit in ringed spaces, and to construct a very general $\operatorname{Spec}$ functor. We conclude with a discussion of relative schemes. \end{abstract}
\maketitle
\setcounter{section}{0}
\section{Introduction} Let ${\bf Top}$, ${\bf LRS}$, ${\bf RS}$, and ${\bf Sch}$ denote the categories of topological spaces, locally ringed spaces, ringed spaces, and schemes, respectively. Consider maps of schemes $f_i : X_i \to Y$ ($i=1,2$) and their fibered product $X_1 \times_Y X_2$ as schemes. Let $\underline{X}$ denote the topological space underlying a scheme $X$. There is a natural comparison map $$\eta : \underline{X_1 \times_Y X_2} \to \underline{X}_1 \times_{\underline{Y}} \underline{X}_2 $$ which is not generally an isomorphism, even if $X_1,X_2,Y$ are spectra of fields (e.g.\ if $Y=\operatorname{Spec} \mathbb{R}$, $X_1=X_2=\operatorname{Spec} \mathbb{C}$, the map $\eta$ is two points mapping to one point). However, in some sense $\eta$ fails to be an isomorphism only to the extent to which it failed in the case of spectra of fields: According to [EGA I.3.4.7] the fiber $\eta^{-1}(x_1,x_2)$ over a point $(x_1,x_2) \in \underline{X}_1 \times_{\underline{Y}} \underline{X}_2$ (with common image $y = f_1(x_1)=f_2(x_2)$) is naturally bijective with the set $$\underline{ \operatorname{Spec} } \, k(x_1) \otimes_{k(y)} k(x_2) . $$ In fact, one can show that this bijection is a homeomorphism when $\eta^{-1}(x_1,x_2)$ is given the topology it inherits from $X_1 \times_Y X_2$. One can even describe the sheaf of rings $\eta^{-1}(x_1,x_2)$ inherits from $X_1 \times_Y X_2$ as follows: Let \begin{eqnarray*} S(x_1,x_2) & := & \{ z \in \operatorname{Spec} \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} : z|\mathcal O_{X_i,x_i} = \mathfrak{m}_{x_i} {\rm \; for \; } i=1,2 \} . 
\end{eqnarray*} Then ($\underline{\operatorname{Spec}}$ of) the natural surjection $$ \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} \to k(x_1) \otimes_{k(y)} k(x_2) $$ identifies $\underline{\operatorname{Spec}} \, k(x_1) \otimes_{k(y)} k(x_2)$ with a closed subspace of $\underline{\operatorname{Spec}} \, \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}$ and $\mathcal O_{X_1 \times_Y X_2} | \eta^{-1}(x_1,x_2) $ naturally coincides, under the EGA isomorphism, to the restriction of the structure sheaf of $\operatorname{Spec} \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}$ to the closed subspace $$\underline{\operatorname{Spec}} \, k(x_1) \otimes_{k(y)} k(x_2) \subseteq \underline{\operatorname{Spec}} \, \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}.\footnote{There is no sense in which this sheaf of rings on $\underline{\operatorname{Spec}} \, k(x_1) \otimes_{k(y)} k(x_2)$ is ``quasi-coherent". It isn't even a module over the usual structure sheaf of $\operatorname{Spec} k(x_1) \otimes_{k(y)} k(x_2)$.}$$ It is perhaps less well-known that this entire discussion remains true for ${\bf LRS}$ morphisms $f_1,f_2$.
From the discussion above, we see that it is possible to describe $X_1 \times_Y X_2$, at least as a set, from the following data: \begin{enumerate} \item the ringed space fibered product $X_1 \times_Y^{{\bf RS}} X_2$ (which carries the data of the rings $\mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}$ as stalks of its structure sheaf) and \item the subsets $S(x_1,x_2) \subseteq \operatorname{Spec} \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}$. \end{enumerate} It turns out that one can actually recover $X_1 \times_Y X_2$ as a scheme solely from this data, as follows: Given a pair $(X,M)$ consisting of a ringed space $X$ and a subset $M_x \subseteq \operatorname{Spec} \mathcal O_{X,x}$ for each $x \in X$, one can construct a locally ringed space $(X,M)^{\operatorname{loc}}$ with a map of ringed spaces $(X,M)^{\operatorname{loc}} \to X$. In a special case, this construction coincides with M.~Hakim's spectrum of a ringed topos. Applying this general construction to $$(X_1 \times_Y^{{\bf RS}} X_2, \{ S(x_1,x_2) \} ) $$ yields the comparison map $\eta$, and, in particular, the scheme $X_1 \times_Y X_2$. A similar construction in fact yields all inverse limits in ${\bf LRS}$ (\S\ref{section:inverselimits}) and the comparison map to the inverse limit in ${\bf RS}$, and allows one to easily prove that a finite inverse limit of schemes, taken in ${\bf LRS}$, is a scheme (Theorem~\ref{thm:Schinverselimits}). Using this description of the comparison map $\eta$ one can easily describe some circumstances under which it is an isomorphism (\S\ref{section:fiberedproducts}), and one can easily see, for example, that it is a \emph{localization morphism} (Definition~\ref{defn:localizationmorphism}), hence has zero cotangent complex.
The localization construction also allows us to construct (\S\ref{section:relativespec}), for any $X \in {\bf LRS}$, a very general relative $\operatorname{Spec}$ functor \begin{eqnarray*} \operatorname{Spec}_X : (\mathcal O_X-{\bf Alg})^{\rm op} & \to & {\bf LRS} / X \end{eqnarray*} which coincides with the usual one when $X$ is a scheme and we restrict to quasi-coherent $\mathcal O_X$ algebras. We can also construct (\S\ref{section:geometricrealization}) a ``good geometric realization'' functor from M.~Hakim's stack of relative schemes over a locally ringed space $X$ to ${\bf LRS} / X$.\footnote{Hakim already constructed such a functor, but ours is different from hers.} It should be emphasized at this point that there is essentially only \emph{one} construction, the localization of a ringed space of \S\ref{section:localization}, in this paper, and \emph{one} (fairly easy) theorem (Theorem~\ref{thm:localization}) about it; everything else follows formally from general nonsense.
Despite all these results about inverse limits, I stumbled upon this construction while studying \emph{direct limits}. I was interested in comparing the quotient of, say, a finite \'etale groupoid in schemes, taken in sheaves on the \'etale site, with the same quotient taken in ${\bf LRS}$. In order to compare these meaningfully, one must somehow put them in the same category. An appealing way to do this is to prove that the (functor of points of the) ${\bf LRS}$ quotient is a sheaf on the \'etale site. In fact, one can prove that for any $X \in {\bf LRS}$, the presheaf $$Y \mathfrak{m}apsto \operatorname{Hom}_{{\bf LRS}}(Y,X) $$ is a sheaf on schemes in both the fppf and fpqc topologies. Indeed, one can easily describe a topology on ${\bf RS}$, analogous to the fppf and fpqc topologies on schemes, and prove it is subcanonical. To upgrade this to a subcanonical topology on ${\bf LRS}$ one is naturally confronted with the comparison of fibered products in ${\bf LRS}$ and ${\bf RS}$. In particular, one is confronted with the question of whether $\eta$ is an epimorphism in the category of ringed spaces. I do not know whether this is true for arbitrary ${\bf LRS}$ morphisms $f_1,f_2$, but in the case of schemes it is possible to prove a result along these lines which is sufficient to upgrade descent theorems for ${\bf RS}$ to descent theorems for ${\bf Sch}$.
\noindent {\bf Acknowledgements.} This research was partially supported by an NSF Postdoctoral Fellowship.
\section{Localization} \label{section:mainconstruction} We will begin the localization construction after making a few definitions.
\begin{comment}
We will encounter the following categories:
\begin{tabular}{lll} ${\bf Set}$ & & sets \\ ${\bf Sch}$ & & schemes \\ ${\bf RS}$ & & ringed spaces \\ ${\bf LRS}$ & & locally ringed spaces \\ ${\bf PRS}$ & & primed ringed spaces (Definition~\ref{defn:PRS}) \\ ${\bf PS}(X)$ & & prime systems on the ringed space $X$ (Definition~\ref{defn:PRS}) \\ ${\bf Rings}$ & & commutative rings \\ ${\bf Rings}(X)$ & & sheaves of commutative rings on a space $X$ \\ ${\bf LAn}$ & & local commutative rings and local ring homomorphisms \\ ${\bf Top}$ & & topological spaces \\ ${\bf Ouv}(X)$ & & open subsets of the space $X$ with inclusions as morphisms \end{tabular}
\end{comment}
\begin{defn} \label{defn:localizationmorphism} A morphism $f : A \to B$ of sheaves of rings on a space $X$ is called a \emph{localization morphism}\footnote{See [Ill II.2.3.2] and the reference therein.} iff there is a multiplicative subsheaf $S \subseteq A$ so that $f$ is isomorphic to the localization $A \to S^{-1}A$ of $A$ at $S$.\footnote{This condition can be checked in stalks.} A morphism of ringed spaces $f : X \to Y$ is called a localization morphism iff $f^{\sharp} : f^{-1} \mathcal O_Y \to \mathcal O_X$ is a localization morphism. \end{defn}
A localization morphism $A \to B$ in ${\bf Rings}(X)$ is both flat and an epimorphism in ${\bf Rings}(X)$.\footnote{Both of these conditions can be checked at stalks.} In particular, the cotangent complex (hence also the sheaf of K\"ahler differentials) of a localization morphism is zero [Ill II.2.3.2]. The basic example is: For any affine scheme $X = \operatorname{Spec} A$, $\underline{A}_X \to \mathcal O_X$ is a localization morphism.
\begin{defn} \label{defn:Spec} Let $A$ be a ring, $S \subseteq \operatorname{Spec} A$ any subset. We write $\operatorname{Spec}_A S$ for the locally ringed space whose underlying topological space is $S$ with the topology it inherits from $\operatorname{Spec} A$ and whose sheaf of rings is the inverse image of the structure sheaf of $\operatorname{Spec} A$. \end{defn}
If $A$ is clear from context, we drop the subscript and simply write $\operatorname{Spec} S$. There is one possible point of confusion here: If $I \subseteq A$ is an ideal, and we think of $\operatorname{Spec} A/I$ as a subset of $\operatorname{Spec} A$, then \begin{eqnarray*} \operatorname{Spec}_A (\operatorname{Spec} A/I) & \neq & \operatorname{Spec} A/I \end{eqnarray*} (though they have the same topological space).
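For instance, take $A=k[x]$ and $I=(x)$. Both spaces consist of the single point $(x)$, but the stalks of the two structure sheaves differ:

```latex
\mathcal O_{\operatorname{Spec}_A(\operatorname{Spec} A/I),\,(x)}
  = \mathcal O_{\operatorname{Spec} A,\,(x)} = k[x]_{(x)},
\qquad \text{while} \qquad
\mathcal O_{\operatorname{Spec} A/I,\,(x)} = k.
```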
\subsection{Prime systems}
\begin{defn} \label{defn:PRS} Let $X=(X,\mathcal O_X)$ be a ringed space. A \emph{prime system} $M$ on $X$ is a map $x \mapsto M_x$ assigning a subset $M_x \subseteq \operatorname{Spec} \mathcal O_{X,x}$ to each point $x \in X$. For prime systems $M,N$ on $X$ we write $M \subseteq N$ to mean $M_x \subseteq N_x$ for all $x \in X$. Prime systems on $X$ form a category ${\bf PS}(X)$ where there is a unique morphism from $M$ to $N$ iff $M \subseteq N$. The \emph{intersection} $\cap_i M_i$ of prime systems $M_i \in {\bf PS}(X)$ is defined by \begin{eqnarray*} ( \cap_i M_i)_x & := & \cap_i (M_i)_x. \end{eqnarray*} A \emph{primed ringed space} $(X,M)$ is a ringed space $X$ equipped with a prime system $M$. Primed ringed spaces form a category ${\bf PRS}$ where a morphism $f : (X,M) \to (Y,N)$ is a morphism of ringed spaces $f$ satisfying $$ (\operatorname{Spec} f_x)(M_x) \subseteq N_{f(x)} $$ for every $x \in X$. \end{defn}
The inverse limit of a functor $i \mapsto M_i$ to ${\bf PS}(X)$ is clearly given by $\cap_i M_i$.
\begin{rem} \label{rem:inverseimage} Suppose $(Y,N) \in {\bf PRS}$ and $f : X \to Y$ is an ${\bf RS}$ morphism. The \emph{inverse image} $f^* N$ is the prime system on $X$ defined by \begin{eqnarray*} (f^*N)_x & := & (\operatorname{Spec} f_x)^{-1}(N_{f(x)}) \\ & = & \{ \mathfrak{p} \in \operatorname{Spec} \mathcal O_{X,x} : f_x^{-1}(\mathfrak{p}) \in N_{f(x)} \}. \end{eqnarray*} Formation of inverse image prime systems enjoys the expected naturality in $f$: $g^*(f^*M) = (fg)^*M$. We can alternatively define a ${\bf PRS}$ morphism $f : (X,M) \to (Y,N)$ to be an ${\bf RS}$ morphism $f : X \to Y$ such that $M \subseteq f^* N$ (i.e.\ together with a ${\bf PS}(X)$ morphism $M \to f^* N$). \end{rem}
For $X \in {\bf LRS}$, the \emph{local prime system} $\mathcal M_X$ on $X$ is defined by $\mathcal M_{X,x} := \{ \mathfrak{m}_x \}$. If $Y$ is another locally ringed space, then a morphism $f : X \to Y$ in ${\bf RS}$ defines a morphism of primed ringed spaces $f : (X,\mathcal M_X) \to (Y,\mathcal M_Y)$ iff $f$ is a morphism in ${\bf LRS}$, so we have a fully faithful functor \bne{M} \mathcal M : {\bf LRS} & \to & {\bf PRS} \\ \nonumber X & \mapsto & (X, \mathcal M_X), \end{eqnarray} and we may regard ${\bf LRS}$ as a full subcategory of ${\bf PRS}$.
At the ``opposite extreme'' we also have, for any $X \in {\bf RS}$, the \emph{terminal prime system} $\mathcal T_X$ defined by $\mathcal T_{X,x} := \operatorname{Spec} \mathcal O_{X,x}$ (i.e.\ the terminal object in ${\bf PS}(X)$). For $(Y,M) \in {\bf PRS}$, we clearly have \begin{eqnarray*} \operatorname{Hom}_{{\bf PRS}}((Y,M), (X,\mathcal T_X)) & = & \operatorname{Hom}_{{\bf RS}}(Y,X), \end{eqnarray*} so the functor \bne{T} \mathcal T : {\bf RS} & \to & {\bf PRS} \\ \nonumber X & \mapsto & (X,\mathcal T_X) \end{eqnarray} is right adjoint to the forgetful functor ${\bf PRS} \to {\bf RS}$ given by $(X,M) \mapsto X$.
\subsection{Localization} \label{section:localization} Now we begin the main construction of this section. Let $(X,M)$ be a primed ringed space. We now construct a locally ringed space $(X,M)^{\operatorname{loc}}$ (written $X^{\operatorname{loc}}$ if $M$ is clear from context), and a ${\bf PRS}$ morphism $\pi : (X^{\operatorname{loc}}, \mathcal M_{X^{\operatorname{loc}}}) \to (X,M)$ called the \emph{localization of} $X$ \emph{at} $M$.
\begin{defn} Let $X$ be a topological space, $\curly F$ a sheaf on $X$. The category ${\bf Sec} \, \curly F$ of \emph{local sections} of $\curly F$ is the category whose objects are pairs $(U,s)$ where $U$ is an open subset of $X$ and $s \in \curly F(U)$, and where there is a unique morphism $(U,s) \to (V,t)$ iff $U \subseteq V$ and $t|_U = s$. \end{defn}
As a set, the topological space $X^{\operatorname{loc}}$ will be the set of pairs $(x,z)$, where $x \in X$ and $z \in M_x$. Let $\mathcal P(X^{\operatorname{loc}})$ denote the category of subsets of $X^{\operatorname{loc}}$ whose morphisms are inclusions. For $(U,s) \in {\bf Sec} \, \mathcal O_X$, set \begin{eqnarray*} \operatorname{U}(U,s) & := & \{ (x,z) \in X^{\operatorname{loc}} : x \in U, s_x \notin z \} . \end{eqnarray*} This defines a functor \begin{eqnarray*} \operatorname{U} : {\bf Sec} \, \mathcal O_X & \to & \mathcal P(X^{\operatorname{loc}}) \end{eqnarray*} satisfying: \begin{eqnarray*} \operatorname{U}(U,s) \cap \operatorname{U}(V,t) & = & \operatorname{U}(U \cap V, s|_{U \cap V} t|_{U \cap V}) \\ \operatorname{U}(U,s^n) & = & \operatorname{U}(U,s) \quad \quad \quad \quad (n \in \mathbb{Z}_{>0}). \end{eqnarray*} The first formula implies that $\operatorname{U}({\bf Sec} \, \mathcal O_X) \subseteq \mathcal P(X^{\operatorname{loc}})$ is a basis for a topology on $X^{\operatorname{loc}}$ where a basic open neighborhood of $(x,z)$ is a set $\operatorname{U}(U,s)$ where $x \in U$, $s_x \notin z$. We always consider $X^{\operatorname{loc}}$ with this topology. The map \begin{eqnarray*} \pi : X^{\operatorname{loc}} & \to & X \\ (x,z) & \mapsto & x \end{eqnarray*} is continuous because $\pi^{-1}(U) = \operatorname{U}(U,1)$.
We construct a sheaf of rings $\mathcal O_{X^{\operatorname{loc}}}$ on $X^{\operatorname{loc}}$ as follows. For an open subset $V \subseteq X^{\operatorname{loc}}$, we let $\mathcal O_{X^{\operatorname{loc}}}(V)$ be the set of $$ s = (s(x,z)) \in \prod_{(x,z) \in V} (\mathcal O_{X,x})_z $$ satisfying the \emph{local consistency condition}: For every $(x,z) \in V$, there is a basic open neighborhood $\operatorname{U}(U,t)$ of $(x,z)$ contained in $V$ and a section $$ \frac{a}{t^n} \in \mathcal O_X(U)_t $$ such that, for every $(x',z') \in \operatorname{U}(U,t)$, we have \begin{eqnarray*} s(x',z') & = & \frac{a_{x'}}{t_{x'}^n} \in (\mathcal O_{X,x'})_{z'}. \end{eqnarray*} (Of course, one can always take $n=1$ since $\operatorname{U}(U,t)=\operatorname{U}(U,t^n)$.) The set $\mathcal O_{X^{\operatorname{loc}}}(V)$ becomes a ring under coordinatewise addition and multiplication, and the obvious restriction maps make $\mathcal O_{X^{\operatorname{loc}}}$ a sheaf of rings on $X^{\operatorname{loc}}$. There is a natural isomorphism \begin{eqnarray*} \mathcal O_{X^{\operatorname{loc}},(x,z)} & = & (\mathcal O_{X,x})_z \end{eqnarray*} taking the germ of $s=(s(x,z)) \in \mathcal O_{X^{\operatorname{loc}}}(U)$ in the stalk $\mathcal O_{X^{\operatorname{loc}},(x,z)}$ to $s(x,z) \in (\mathcal O_{X,x})_z$. This map is injective because of the local consistency condition and surjective because, given any $a/b \in (\mathcal O_{X,x})_z$, we can lift $a,b$ to $\overline{a},\overline{b} \in \mathcal O_X(U)$ on some neighborhood $U$ of $x$ and define $s \in \mathcal O_{X^{\operatorname{loc}}}(\operatorname{U}(U,\overline{b}))$ by letting $s(x',z') := \overline{a}_{x'} / \overline{b}_{x'} \in (\mathcal O_{X,x'})_{z'}.$ This $s$ manifestly satisfies the local consistency condition and has $s(x,z) = a/b$. In particular, $X^{\operatorname{loc}}$, with this sheaf of rings, is a locally ringed space.
To lift $\pi$ to a map of ringed spaces $\pi : X^{\operatorname{loc}} \to X$ we use the tautological map $$\pi^{\flat} : \mathcal O_X \to \pi_* \mathcal O_{X^{\operatorname{loc}}}$$ of sheaves of rings on $X$ defined on an open set $U \subseteq X$ by \begin{eqnarray*} \pi^{\flat}(U) : \mathcal O_X(U) & \to & (\pi_* \mathcal O_{X^{\operatorname{loc}}})(U) = \mathcal O_{X^{\operatorname{loc}}}(\operatorname{U}(U,1)) \\ s & \mapsto & ( (s_x)_z ) . \end{eqnarray*} It is clear that the induced map on stalks \begin{eqnarray*} \pi_{x,z} : \mathcal O_{X,x} & \to & \mathcal O_{X^{\operatorname{loc}},(x,z)} = (\mathcal O_{X,x})_z \end{eqnarray*} is the natural localization map, so $\pi_{x,z}^{-1}(\mathfrak{m}_z) = z \in M_x$ and hence $\pi$ defines a ${\bf PRS}$ morphism $\pi : (X^{\operatorname{loc}}, \mathcal M_{X^{\operatorname{loc}}}) \to (X,M)$.
\begin{rem} \label{rem:localization} It would have been enough to construct the localization $(X,\mathcal T_X)^{\operatorname{loc}}$ at the terminal prime system. Then to construct the localization $(X,M)^{\operatorname{loc}}$ at any other prime system, we just note that $(X,M)^{\operatorname{loc}}$ is clearly a subset of $(X,\mathcal T_X)^{\operatorname{loc}}$, and we give it the topology and sheaf of rings it inherits from this inclusion. The construction of $(X,\mathcal T_X)^{\operatorname{loc}}$ is ``classical." Indeed, M.~Hakim \cite{Hak} describes a construction of $(X,\mathcal T_X)^{\operatorname{loc}}$ that makes sense for any ringed topos $X$ (she calls it the \emph{spectrum} of the ringed topos [Hak IV.1]), and attributes the analogous construction for ringed spaces to C.~Chevalley [Hak IV.2]. Perhaps the main idea of this work is to define ``prime systems," and to demonstrate their ubiquity. The additional flexibility afforded by non-terminal prime systems is indispensable in the applications of \S\ref{section:applications}. It is not clear to me whether this setup generalizes to ringed topoi. \end{rem}
We sum up some basic properties of the localization map $\pi$ below.
\begin{prop} \label{prop:localization} Let $(X,M)$ be a primed ringed space with localization $\pi : X^{\operatorname{loc}} \to X$. For $x \in X$, the fiber $\pi^{-1}(x)$ is naturally isomorphic in ${\bf LRS}$ to $\operatorname{Spec} M_x$ (Definition~\ref{defn:Spec}).\footnote{By ``fiber" here we mean $\pi^{-1}(x) := X^{\operatorname{loc}} \times^{{\bf RS}}_X (\{ x \}, \mathcal O_{X,x})$, which is just the set theoretic preimage $\pi^{-1}(x) \subseteq X^{\operatorname{loc}}$ with the topology and sheaf of rings it inherits from $X^{\operatorname{loc}}$. This differs from another common usage of ``fiber" to mean $X^{\operatorname{loc}} \times^{{\bf RS}}_X (\{ x \}, k(x) )$. } Under this identification, the stalk of $\pi$ at $z \in M_x$ is identified with the localization of $\mathcal O_{X,x}$ at $z$, hence $\pi$ is a localization morphism (Definition~\ref{defn:localizationmorphism}). \end{prop}
\begin{proof} With the exception of the fiber description, everything in the proposition was noted during the construction of the localization. Clearly there is a natural bijection of sets $M_x = \pi^{-1}(x)$ taking $z \in M_x$ to $(x,z) \in \pi^{-1}(x)$. We first show that the topology inherited from $X^{\operatorname{loc}}$ coincides with the one inherited from $\operatorname{Spec} \mathcal O_{X,x}$. By definition of the topology on $X^{\operatorname{loc}}$, a basic open neighborhood of $z \in M_x$ is a set of the form \begin{eqnarray*} \operatorname{U}(U,s) \cap M_x & = & \{ z' \in M_x : s_x \notin z' \}, \end{eqnarray*} where $U$ is a neighborhood of $x$ in $X$ and $s \in \mathcal O_X(U)$ satisfies $s_x \notin z$. Clearly this set depends only on the stalk $s_x \in \mathcal O_{X,x}$ of $s$ at $x$, and any element $t \in \mathcal O_{X,x}$ lifts to a section $\overline{t} \in \mathcal O_X(U)$ on some neighborhood $U$ of $x$, so the basic neighborhoods of $z \in M_x$ are the sets of the form $$ \{ z' \in M_x : t \notin z' \}$$ where $t \notin z$. But for the same set of $t$, the sets \begin{eqnarray*} D(t) & := & \{ \mathfrak{p} \in \operatorname{Spec} \mathcal O_{X,x} : t \notin \mathfrak{p} \} \end{eqnarray*} form a basis for neighborhoods of $z$ in $\operatorname{Spec} \mathcal O_{X,x}$, so the result is clear.
We next show that the sheaf of rings on $M_x$ inherited from $X^{\operatorname{loc}}$ is the same as the one inherited from $\operatorname{Spec} \mathcal O_{X,x}$. Given $f \in \mathcal O_{X,x}$, a section of $\mathcal O_{X^{\operatorname{loc}}}|M_x$ over the basic open set $M_x \cap D(f)$ is an element $$s = (s(z)) \in \prod_{z \in M_x \cap D(f)} (\mathcal O_{X,x})_z $$ satisfying the local consistency condition: For all $z \in M_x \cap D(f)$, there is a basic open neighborhood $\operatorname{U}(U,t)$ of $(x,z)$ in $X^{\operatorname{loc}}$ and an element $a/t^n \in \mathcal O_X(U)_t$ such that, for all $z' \in M_x \cap D(f) \cap \operatorname{U}(U,t)$, we have $s(z') = a_x / t_x^n \in (\mathcal O_{X,x})_{z'}$. Note that \begin{eqnarray*} M_x \cap D(f) \cap \operatorname{U}(U,t) & = & M_x \cap D(ft_x) \end{eqnarray*} and $a_x / t_x^n \in \mathcal O_{\operatorname{Spec} \mathcal O_{X,x}}(D(ft_x))$. The sets $D(ft_x) \cap M_x$ cover $M_x \cap D(f) \subseteq \operatorname{Spec} \mathcal O_{X,x}$, and we have a ``global formula" $s$ showing that the stalks of the various $a_x / t_x^n$ agree at any $z \in M_x \cap D(f)$, so they glue to yield an element $g(s) \in \mathcal O_{\operatorname{Spec} \mathcal O_{X,x}}(M_x \cap D(f))$ with $g(s)_z = s(z)$. We can define a morphism of sheaves on $M_x$ by defining it on basic opens, so this defines a morphism of sheaves $g : \mathcal O_{X^{\operatorname{loc}}}|M_x \to \mathcal O_{\operatorname{Spec} \mathcal O_{X,x}}|M_x$ which is easily seen to be an isomorphism on stalks. \end{proof}
\begin{rem} \label{rem:open} Suppose $(X,M) \in {\bf PRS}$ and $U \subseteq X$ is an open subspace of $X$. Then it is clear from the construction of $\pi : (X,M)^{\operatorname{loc}} \to X$ that $\pi^{-1}(U) = (U,\mathcal O_X|U,M|U)^{\operatorname{loc}}$. \end{rem}
The following theorem describes the universal property of localization.
\begin{thm} \label{thm:localization} Let $f: (X,M) \to (Y,N)$ be a morphism in ${\bf PRS}$. Then there is a unique morphism $\overline{f} : (X,M)^{\operatorname{loc}} \to (Y,N)^{\operatorname{loc}}$ in ${\bf LRS}$ making the diagram \begin{eqnarray} \label{dia} & \xymatrix{ (X,M)^{\operatorname{loc}} \ar[d]_{\pi} \ar[r]^{\overline{f}} & (Y,N)^{\operatorname{loc}} \ar[d]^{\pi} \\ X \ar[r]^f & Y } \end{eqnarray} commute in ${\bf RS}$. Localization defines a functor \begin{eqnarray*} {\bf PRS} & \to & {\bf LRS} \\ (X,M) & \mapsto & (X,M)^{\operatorname{loc}} \\ f: (X,M) \to (Y,N) & \mapsto & \overline{f} : (X,M)^{\operatorname{loc}} \to (Y,N)^{\operatorname{loc}} \end{eqnarray*} retracting the inclusion functor $\mathcal M : {\bf LRS} \to {\bf PRS}$ and right adjoint to it: For any $Y \in {\bf LRS}$, there is a natural bijection \begin{eqnarray*} \operatorname{Hom}_{{\bf LRS}}(Y, (X,M)^{\operatorname{loc}}) & = & \operatorname{Hom}_{{\bf PRS}}((Y,\mathcal M_Y), (X,M)). \end{eqnarray*} \end{thm}
\begin{proof} We first establish the \emph{existence} of such a morphism $\overline{f}$. The fact that $f$ is a morphism of primed ringed spaces means that we have a function \begin{eqnarray*} M_x & \to & N_{f(x)} \\ z & \mapsto & f_x^{-1}(z) \end{eqnarray*} for each $x \in X$, so we can complete the diagram of topological spaces $$ \xymatrix{ X^{\operatorname{loc}} \ar[d]^{\pi} \ar[r]^{\overline{f}} & (Y,N)^{\operatorname{loc}} \ar[d]^{\pi} \\ X \ar[r]^{f} & Y } $$ (at least on the level of sets) by setting \begin{eqnarray*} \overline{f}(x,z) & := & (f(x), f_x^{-1}(z)) \in Y^{\operatorname{loc}}. \end{eqnarray*} To see that $\overline{f}$ is continuous it is enough to check that the preimage $\overline{f}^{-1}(\operatorname{U}(U,s))$ is open in $X^{\operatorname{loc}}$ for each basic open subset $\operatorname{U}(U,s)$ of $Y^{\operatorname{loc}}$. But it is clear from the definitions that \begin{eqnarray*} \overline{f}^{-1} \operatorname{U}(U,s) & = & \operatorname{U}( f^{-1}(U), f^{\sharp} f^{-1}(s) ) \end{eqnarray*} (note $(f^{\sharp} f^{-1}(s))_x = f_x(s_{f(x)})$).
Now we want to define a map $\overline{f}^{\sharp} : \overline{f}^{-1} \mathcal O_{Y^{\operatorname{loc}}} \to \mathcal O_{X^{\operatorname{loc}}}$ of sheaves of rings on $X^{\operatorname{loc}}$ (with ``local stalks") making the diagram $$ \xymatrix@C+20pt{ \mathcal O_{X^{\operatorname{loc}}} & \ar@{.>}[l]_-{\overline{f}^{\sharp}} \overline{f}^{-1} \mathcal O_{Y^{\operatorname{loc}}} \\ \pi^{-1} \mathcal O_X \ar[u] & \ar[l]_-{\pi^{-1} f^{\sharp}} \pi^{-1} f^{-1} \mathcal O_Y \ar[u] } $$ commute in ${\bf Rings}(X^{\operatorname{loc}})$. The stalk of this diagram at $(x,z) \in X^{\operatorname{loc}}$ is a diagram $$ \xymatrix@C+20pt{ (\mathcal O_{X,x})_z & \ar@{.>}[l]_-{\overline{f}_{(x,z)}} (\mathcal O_{Y,f(x)})_{f_x^{-1}z} \\ \mathcal O_{X,x} \ar[u]^{\pi_{(x,z)}} & \ar[l]^{f_x} \mathcal O_{Y,f(x)} \ar[u]_{\pi_{(f(x),f_x^{-1}(z))}} } $$ in ${\bf Rings}$ where the vertical arrows are the natural localization maps; these are epimorphisms, and the universal property of localization ensures that there is a unique local morphism of local rings $\overline{f}_{(x,z)}$ completing this diagram. We now want to show that there is actually a (necessarily unique) map $\overline{f}^{\sharp} : \overline{f}^{-1} \mathcal O_{Y^{\operatorname{loc}}} \to \mathcal O_{X^{\operatorname{loc}}}$ of sheaves of rings on $X^{\operatorname{loc}}$ whose stalk at $(x,z)$ is the map $\overline{f}_{(x,z)}$. By the universal property of sheafification, we can work with the presheaf inverse image $\overline{f}^{-1}_{\rm pre} \mathcal O_{Y^{\operatorname{loc}}}$ instead. A section $[V,s]$ of this presheaf over an open subset $W \subseteq X^{\operatorname{loc}}$ is represented by a pair $(V,s)$ where $V \subseteq Y^{\operatorname{loc}}$ is an open subset of $Y^{\operatorname{loc}}$ containing $\overline{f}(W)$ and $$ s = (s(y,z)) \in \mathcal O_{Y^{\operatorname{loc}}}(V) \subseteq \prod_{(y,z) \in V} (\mathcal O_{Y,y})_z . $$ I claim that we can define a section $f_{\rm pre}^{\sharp}[V,s] \in \mathcal O_{X^{\operatorname{loc}}}(W) $ by the formula \begin{eqnarray*} f_{\rm pre}^{\sharp}[V,s](x,z) & := & s(f(x),f_x^{-1}(z)) . \end{eqnarray*} It is clear that this element is independent of replacing $V$ with a smaller neighborhood of $\overline{f}(W)$ and restricting $s$, but we still must check that $$ f_{\rm pre}^{\sharp}[V,s] \in \prod_{(x,z) \in W} (\mathcal O_{X,x})_z$$ satisfies the local consistency condition. Suppose $$ \frac{a}{t^n} \in \mathcal O_Y(U)_t$$ witnesses local consistency for $s \in \mathcal O_{Y^{\operatorname{loc}}}(V)$ on a basic open subset $\operatorname{U}(U,t) \subseteq V$. Then it is straightforward to check that the restriction of $$ \frac{ f^{\sharp}(f^{-1}(a)) }{ f^{\sharp}(f^{-1}t^n)} \in \mathcal O_X( f^{-1}(U) )_{f^{\sharp}(f^{-1}t)} $$ to $\overline{f}^{-1} \operatorname{U}(U,t) \cap W $ witnesses local consistency of $f_{\rm pre}^{\sharp}[V,s]$ on \begin{eqnarray*} \overline{f}^{-1}(\operatorname{U}(U,t)) \cap W & = & \operatorname{U}(f^{-1}(U), f^\sharp f^{-1} t) \cap W . \end{eqnarray*} It is clear that our formula for $f^{\sharp}_{\rm pre}[V,s]$ respects restrictions and has the desired stalks and commutativity, so its sheafification provides the desired map of sheaves of rings.
This completes the construction of $\overline{f} : X^{\operatorname{loc}} \to Y^{\operatorname{loc}}$ in ${\bf LRS}$ making \eqref{dia} commute in ${\bf RS}$. We now establish the uniqueness of $\overline{f}$. Suppose $\overline{f}' : X^{\operatorname{loc}} \to Y^{\operatorname{loc}}$ is a morphism in ${\bf LRS}$ that also makes \eqref{dia} commute in ${\bf RS}$. We first prove that $\overline{f} = \overline{f}'$ on the level of topological spaces. For $x \in X$ the commutativity of \eqref{dia} ensures that $\overline{f}'(x,z) = (f(x),z')$ for some $z' \in N_{f(x)} \subseteq \operatorname{Spec} \mathcal O_{Y,f(x)}$, so it remains only to show that $z' = f_x^{-1}(z)$. The commutativity of \eqref{dia} on the level of stalks at $(x,z) \in X^{\operatorname{loc}}$ gives a commutative diagram of rings $$ \xymatrix{ (\mathcal O_{X,x})_z & \ar[l]_{\overline{f}'_{(x,z)}} (\mathcal O_{Y,f(x)})_{z'} \\ \mathcal O_{X,x} \ar[u]^{\pi_{(x,z)}} & \ar[l]^{f_x} \mathcal O_{Y,f(x)} \ar[u]_{\pi_{(f(x),z')}} } $$ where the vertical arrows are the natural localization maps. From the commutativity of this diagram and the fact that $(\overline{f}')_{(x,z)}^{-1}(\mathfrak{m}_z) = \mathfrak{m}_{z'}$ (because $\overline{f}'_{(x,z)}$ is local) we find \begin{eqnarray*} z' & = & \pi_{(f(x),z')}^{-1}(\mathfrak{m}_{z'}) \\ & = & \pi_{(f(x),z')}^{-1} (\overline{f}')_{(x,z)}^{-1}(\mathfrak{m}_z) \\ & = & f_x^{-1} \pi_{(x,z)}^{-1}(\mathfrak{m}_z) \\ & = & f_x^{-1}(z) \end{eqnarray*} as desired. This proves that $\overline{f} = \overline{f}'$ on topological spaces, and we already argued the uniqueness of $\overline{f}^{\sharp}$ (which can be checked on stalks) during its construction.
The last statements of the theorem follow easily once we prove that the localization morphism $\pi : (X,\mathcal M_X)^{\operatorname{loc}} \to X$ is an isomorphism for any $X \in {\bf LRS}$. On the level of topological spaces, it is clear that $\pi$ is a continuous bijection, so to prove it is an isomorphism we just need to prove it is open. To prove this, it is enough to prove that for any $(U,s) \in {\bf Sec} \, \mathcal O_X$, the image of the basic open set $\operatorname{U}(U,s)$ under $\pi$ is open in $X$. Indeed, \begin{eqnarray*} \pi( \operatorname{U}(U,s) ) & = & \{ x \in U : s_x \notin \mathfrak{m}_x \} \\ & = & \{ x \in U : s_x \in \mathcal O_{X,x}^* \} \end{eqnarray*} is open in $U$, hence in $X$, because invertibility at the stalk implies invertibility on a neighborhood. To prove that $\pi$ is an isomorphism of locally ringed spaces, it remains only to prove that $\pi^{\sharp} : \mathcal O_X \to \mathcal O_{X^{\operatorname{loc}}}$ is an isomorphism of sheaves of rings on $X=X^{\operatorname{loc}}$. Indeed, Proposition~\ref{prop:localization} says the stalk of $\pi^{\sharp}$ at $(x, \mathfrak{m}_x) \in X^{\operatorname{loc}}$ is the localization of the local ring $\mathcal O_{X,x}$ at its unique maximal ideal, which is an isomorphism in ${\bf LAn}$. \end{proof}
\begin{lem} \label{lem:important} Let $A \in {\bf Rings}$ be a ring, $(X,\mathcal O_X) := \operatorname{Spec} A$, and let $*$ be the punctual space. Define a prime system $N$ on $(X,\underline{A}_X)$ by $$N_x := \{ x \} \subseteq \operatorname{Spec} \underline{A}_{X,x} = \operatorname{Spec} A = X.$$ Let $a : (X,\mathcal O_X) \to (X,\underline{A}_X)$ be the natural ${\bf RS}$ morphism. Then $\mathcal M_{(X,\mathcal O_X)} = a^* N$ and the natural ${\bf PRS}$ morphisms $$(X,\mathcal O_X,\mathcal M_{(X,\mathcal O_X)}) \to (X,\underline{A}_X,N) \to (*,A,\operatorname{Spec} A) = (*,A,\mathcal T_{(*,A)}) $$ yield natural isomorphisms $$ (X,\mathcal O_X) = (X,\mathcal O_X,\mathcal M_{(X,\mathcal O_X)})^{\operatorname{loc}} = (X,\underline{A}_X,N)^{\operatorname{loc}} = (*,A,\operatorname{Spec} A)^{\operatorname{loc}} $$ in ${\bf LRS}$. \end{lem}
\begin{proof} Note that the stalk $a_x : \underline{A}_{X,x} \to \mathcal O_{X,x}$ of $a$ at $x \in X$ is the localization map $A \to A_x$, and, by definition, $(a^*N)_x$ is the set of prime ideals $z$ of $A_x$ pulling back to $x \subseteq A$ under $a_x : A \to A_x$. The only such prime ideal is the maximal ideal $\mathfrak{m}_x \subseteq A_x$, so $(a^*N)_x = \{ \mathfrak{m}_x \} = \mathcal M_{(X,\mathcal O_X),x}$.
Next, it is clear from the description of the localization of a ${\bf PRS}$ morphism that the localizations of the morphisms in question are bijective on the level of sets. Indeed, the bijections are given by $$ x \leftrightarrow (x,\mathfrak{m}_x) \leftrightarrow (x,x) \leftrightarrow (*,x),$$ so to prove that they are homeomorphisms, we just need to prove that the spaces in question all carry the same topology. Indeed, we will show that they all have the usual (Zariski) topology on $X=\operatorname{Spec} A$. This is clear for $(X,\mathcal O_X,\mathcal M_{(X,\mathcal O_X)})$ because localization retracts $\mathcal M$ (Theorem~\ref{thm:localization}), so $(X,\mathcal O_X,\mathcal M_{(X,\mathcal O_X)})^{\operatorname{loc}} = (X,\mathcal O_X)$, and it is clear for $(*,A,\operatorname{Spec} A)$ because of the description of the fibers of localization in Proposition~\ref{prop:localization}. For $(X,\underline{A}_X,N)$, we note that the sets $\operatorname{U}(U,s)$, as $U$ ranges over \emph{connected} open subsets of $X$ (or any other family of basic opens for that matter), form a basis for the topology on $(X,\underline{A}_X,N)^{\operatorname{loc}}$. Since $U$ is connected, $s \in \underline{A}_X(U) = A$, and $\operatorname{U}(U,s)$ is identified with the usual basic open subset $D(s) \subseteq X$ under the bijections above. This proves that the ${\bf LRS}$ morphisms in question are isomorphisms on the level of spaces, so it remains only to prove that they are isomorphisms on the level of sheaves of rings, which we can check on stalks using the description of the stalks of a localization in Proposition~\ref{prop:localization}. \end{proof}
\begin{rem} \label{rem:notlocal} If $X \in {\bf LRS}$, and $M$ is a prime system on $X$, the map $\pi : X^{\operatorname{loc}} \to X$ is not generally a morphism in ${\bf LRS}$, even though $X,X^{\operatorname{loc}} \in {\bf LRS}$. For example, if $X$ is a point whose ``sheaf" of rings is a local ring $(A,\mathfrak{m})$, and $M = \{ \mathfrak{p} \}$ for some $\mathfrak{p} \neq \mathfrak{m}$, then $X^{\operatorname{loc}}$ is a point with the ``sheaf" of rings $A_{\mathfrak{p}}$, and the ``stalk" of $\pi^{\sharp}$ is the localization map $l : A \to A_{\mathfrak{p}}$. Even though $A, A_{\mathfrak{p}}$ are local, this is \emph{not} a local morphism because $l^{-1}( \mathfrak{p} A_{\mathfrak{p}}) = \mathfrak{p} \neq \mathfrak{m}$. \end{rem}
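\begin{rem} For a concrete instance of Remark~\ref{rem:notlocal}, take $(A,\mathfrak{m}) = (\mathbb{Z}_{(p)}, p\mathbb{Z}_{(p)})$ and $\mathfrak{p} = (0)$. Then $A_{\mathfrak{p}} = \mathbb{Q}$ and the ``stalk" of $\pi^{\sharp}$ is the inclusion $l : \mathbb{Z}_{(p)} \hookrightarrow \mathbb{Q}$, which is not a local morphism: $l^{-1}(0) = (0) \neq p\mathbb{Z}_{(p)}$. \end{rem}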
\section{Applications} \label{section:applications} In this section we give some applications of localization of ringed spaces.
\subsection{Inverse limits} \label{section:inverselimits} We first prove that ${\bf LRS}$ has all inverse limits.
\begin{thm} \label{thm:inverselimits} The category ${\bf PRS}$ has all inverse limits, and both the localization functor ${\bf PRS} \to {\bf LRS}$ and the forgetful functor ${\bf PRS} \to {\bf RS}$ preserve them. \end{thm}
\begin{proof} Suppose $i \mapsto (X_i,M_i)$ is an inverse limit system in ${\bf PRS}$. Let $X$ be the inverse limit of $i \mapsto X_i$ in ${\bf Top}$ and let $\pi_i : X \to X_i$ be the projection. Let $\mathcal O_X$ be the direct limit of $i \mapsto \pi_i^{-1} \mathcal O_{X_i}$ in ${\bf Rings}(X)$ and let $\pi_i^{\sharp} : \pi_i^{-1} \mathcal O_{X_i} \to \mathcal O_X$ be the structure map to the direct limit, so we may regard $X=(X,\mathcal O_X)$ as a ringed space and $\pi_i$ as a morphism of ringed spaces $X \to X_i$. It is immediate from the definition of a morphism in ${\bf RS}$ that $X$ is the inverse limit of $i \mapsto X_i$ in ${\bf RS}$. Let $M$ be the prime system on $X$ given by the inverse limit (intersection) of the $\pi_i^*M_i$. Then it is clear from the definition of a morphism in ${\bf PRS}$ that $(X,M)$ is the inverse limit of $i \mapsto (X_i,M_i)$, but we will spell out the details for the sake of concreteness and future use.
Given a point $x = (x_i) \in X$, we have defined $M_x$ to be the set of $z \in \operatorname{Spec} \mathcal O_{X,x}$ such that $\pi_{i,x}^{-1}(z) \in M_{x_i} \subseteq \operatorname{Spec} \mathcal O_{X_i,x_i}$ for every $i$, so that $\pi_i$ defines a ${\bf PRS}$ morphism $\pi_i : (X,M) \to (X_i,M_i)$. To see that $(X,M)$ is the inverse limit of $i \mapsto (X_i,M_i)$, suppose $f_i : (Y,N) \to (X_i,M_i)$ are morphisms defining a natural transformation from the constant functor $i \mapsto (Y,N)$ to $i \mapsto (X_i,M_i)$. We want to show that there is a unique ${\bf PRS}$ morphism $f : (Y,N) \to (X,M)$ with $\pi_i f = f_i$ for all $i$. Since $X$ is the inverse limit of $i \mapsto X_i$ in ${\bf RS}$, we know that there is a unique map of ringed spaces $f : Y \to X$ with $\pi_i f = f_i$ for all $i$, so it suffices to show that this $f$ is a ${\bf PRS}$ morphism. Let $y \in Y$, $z \in N_y$. We must show $f_y^{-1}(z) \in M_{f(y)}$. By definition of $M$, we must show $(\pi_i)_{f(y)}^{-1}(f_y^{-1}(z)) \in M_{\pi_i(f(y))} = M_{f_i(y)}$ for every $i$. But $\pi_i f = f_i$ implies $f_y (\pi_i)_{f(y)} = (f_i)_y$, so $(\pi_i)_{f(y)}^{-1}(f_y^{-1}(z)) = (f_i)_y^{-1}(z) $ is in $M_{f_i(y)}$ because $f_i$ is a ${\bf PRS}$ morphism.
The fact that the localization functor preserves inverse limits follows formally from the adjointness in Theorem~\ref{thm:localization}. \end{proof}
\begin{cor} \label{cor:inverselimits} The category ${\bf LRS}$ has all inverse limits. \end{cor}
\begin{proof} Suppose $i \mapsto X_i$ is an inverse limit system in ${\bf LRS}$. Composing with $\mathcal M$ yields an inverse limit system $i \mapsto (X_i,\mathcal M_{X_i})$ in ${\bf PRS}$. By the theorem, the localization $(X,M)^{\operatorname{loc}}$ of the inverse limit $(X,M)$ of $i \mapsto (X_i,\mathcal M_{X_i})$ is the inverse limit of $i \mapsto (X_i,\mathcal M_{X_i})^{\operatorname{loc}}$ in ${\bf LRS}$. But localization retracts $\mathcal M$ (Theorem~\ref{thm:localization}) so $i \mapsto (X_i,\mathcal M_{X_i})^{\operatorname{loc}}$ is our original inverse limit system $i \mapsto X_i$. \end{proof}
We can also obtain the following result of C.~Chevalley mentioned in [Hak IV.2.4].
\begin{cor} \label{cor:Chevalley} The functor \begin{eqnarray*} {\bf RS} & \to & {\bf LRS} \\ X & \mapsto & (X,\mathcal T_X)^{\operatorname{loc}} \end{eqnarray*} is right adjoint to the inclusion ${\bf LRS} \hookrightarrow {\bf RS}$. \end{cor}
\begin{proof} This is immediate from the adjointness property of localization in Theorem~\ref{thm:localization} and the adjointness property of the functor $\mathcal T$: For $Y \in {\bf LRS}$ we have \begin{eqnarray*} \operatorname{Hom}_{{\bf LRS}}(Y,(X,\mathcal T_X)^{\operatorname{loc}}) & = & \operatorname{Hom}_{{\bf PRS}}((Y,\mathcal M_Y), (X,\mathcal T_X)) \\ & = & \operatorname{Hom}_{{\bf RS}}(Y,X). \end{eqnarray*} \end{proof}
Our next task is to compare inverse limits in ${\bf Sch}$ to those in ${\bf LRS}$. Let $* \in {\bf Top}$ be ``the" punctual space (terminal object), so ${\bf Rings}(*) = {\bf Rings}$. The functor \begin{eqnarray*} {\bf Rings} & \to & {\bf RS} \\ A & \mapsto & (*,A) \end{eqnarray*} is clearly left adjoint to \begin{eqnarray*} \Gamma : {\bf RS}^{\rm op} & \to & {\bf Rings} \\ X & \mapsto & \Gamma(X,\mathcal O_X) . \end{eqnarray*} By Lemma~\ref{lem:important} (or Proposition~\ref{prop:localization}) we have \begin{eqnarray*} \mathcal T(*,A)^{\operatorname{loc}} & := & (*,A,\operatorname{Spec} A)^{\operatorname{loc}} \\ & = & \operatorname{Spec} A .\end{eqnarray*} Theorem~\ref{thm:localization} yields an easy proof of the following result, which can be found in the Errata for [EGA I.1.8] printed at the end of [EGA II].
\begin{prop} \label{prop:affine} For $A \in {\bf Rings}$, $X \in {\bf LRS}$, the natural map \begin{eqnarray*} \operatorname{Hom}_{{\bf LRS}}(X,\operatorname{Spec} A) & \to & \operatorname{Hom}_{{\bf Rings}}(A, \Gamma(X,\mathcal O_X)) \end{eqnarray*} is bijective, so $\operatorname{Spec} : {\bf Rings} \to {\bf LRS}$ is left adjoint to $\Gamma : {\bf LRS}^{\rm op} \to {\bf Rings}$. \end{prop}
\begin{proof} This is a completely formal consequence of various adjunctions: \begin{eqnarray*} \operatorname{Hom}_{{\bf LRS}}(X,\operatorname{Spec} A) & = & \operatorname{Hom}_{{\bf LRS}}(X,\mathcal T(*,A)^{\operatorname{loc}}) \\ & = & \operatorname{Hom}_{{\bf PRS}}((X,\mathcal M_X),\mathcal T(*,A)) \\ & = & \operatorname{Hom}_{{\bf RS}}(X, (*,A)) \\ & = & \operatorname{Hom}_{{\bf Rings}}(A,\Gamma(X,\mathcal O_X)). \end{eqnarray*} \end{proof}
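\begin{rem} As a simple illustration of Proposition~\ref{prop:affine}, note that $\operatorname{Spec} \mathbb{Z}$ is a terminal object of ${\bf LRS}$: for any $X \in {\bf LRS}$, $$ \operatorname{Hom}_{{\bf LRS}}(X, \operatorname{Spec} \mathbb{Z}) \; = \; \operatorname{Hom}_{{\bf Rings}}(\mathbb{Z}, \Gamma(X,\mathcal O_X)) $$ is a one-point set because $\mathbb{Z}$ is initial in ${\bf Rings}$. \end{rem}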
\begin{thm} \label{thm:Schinverselimits} The category ${\bf Sch}$ has all finite inverse limits, and the inclusion ${\bf Sch} \to {\bf LRS}$ preserves them. \end{thm}
\begin{proof} It is equivalent to show that, for a finite inverse limit system $i \mapsto X_i$ in ${\bf Sch}$, the inverse limit $X$ in ${\bf LRS}$ is a scheme. It suffices to treat the case of (finite) products and equalizers. For products, suppose $\{ X_i \}$ is a finite set of schemes and $X = \prod_i X_i$ is their product in ${\bf LRS}$. We want to show $X$ is a scheme. Let $\overline{x}$ be a point of $X$, and let $x = (x_i) \in \prod_i^{{\bf RS}} X_i$ be its image in the ringed space product. Let $U_i = \operatorname{Spec} A_i$ be an open affine neighborhood of $x_i$ in $X_i$. As we saw above, the map $X \to \prod_i^{{\bf RS}} X_i$ is a localization and, as mentioned in Remark~\ref{rem:open}, it follows that the product $U := \prod_i U_i$ of the $U_i$ in ${\bf LRS}$ is an open neighborhood of $\overline{x}$ in $X$,\footnote{This is the only place we need ``finite". If $\{ X_i \}$ were infinite, the topological space product of the $U_i$ might not be open in the topology on the topological space product of the $X_i$ because the product topology only allows ``restriction in \emph{finitely many} coordinates".} so it remains only to prove that there is an isomorphism $U \cong \operatorname{Spec} \otimes_i A_i$, hence $U$ is affine.\footnote{There would not be a problem \emph{here} even if $\{ X_i \}$ were infinite: ${\bf Rings}$ has all direct and inverse limits, so the (possibly infinite) tensor product $\otimes_i A_i$ over $\mathbb{Z}$ (coproduct in ${\bf Rings}$) makes sense. 
Our proof therefore shows that any inverse limit (not necessarily finite) of \emph{affine} schemes, taken in ${\bf LRS}$, is a scheme.} Indeed, we can see immediately from Proposition~\ref{prop:affine} that $U$ and $\operatorname{Spec} \otimes_i A_i$ represent the same functor on ${\bf LRS}$: \begin{eqnarray*} \operatorname{Hom}_{{\bf LRS}}(Y,U) & = & \prod_i \operatorname{Hom}_{{\bf LRS}}(Y,U_i) \\ & = & \prod_i \operatorname{Hom}_{{\bf Rings}}(A_i, \Gamma(Y,\mathcal O_Y)) \\ & = & \operatorname{Hom}_{{\bf Rings}}(\otimes_i A_i, \Gamma(Y,\mathcal O_Y)) \\ & = & \operatorname{Hom}_{{\bf LRS}}(Y, \operatorname{Spec} \otimes_i A_i). \end{eqnarray*}
The case of equalizers is similar: Suppose $X$ is the ${\bf LRS}$ equalizer of morphisms $f,g : Y \rightrightarrows Z$ of schemes, and $x \in X$. Let $y \in Y$ be the image of $x$ in $Y$, so $f(y)=g(y)=:z$. Since $Y,Z$ are schemes, we can find affine neighborhoods $V = \operatorname{Spec} B$ of $y$ in $Y$ and $W = \operatorname{Spec} A$ of $z$ in $Z$ so that $f,g$ take $V$ into $W$. As before, it is clear that the equalizer $U$ of $f|V, g|V : V \rightrightarrows W$ in ${\bf LRS}$ is an open neighborhood of $x \in X$, and we prove exactly as above that $U$ is affine by showing that it is isomorphic to $\operatorname{Spec}$ of the coequalizer \begin{eqnarray*} C & = & B / \langle \{ f^{\sharp}(a) - g^{\sharp}(a) : a \in A \} \rangle \end{eqnarray*} of $f^{\sharp}, g^{\sharp} : A \rightrightarrows B$ in ${\bf Rings}$. \end{proof}
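\begin{rem} To illustrate the equalizer construction in the proof of Theorem~\ref{thm:Schinverselimits}, let $k$ be a ring and let $f,g : \operatorname{Spec} k[x] \rightrightarrows \operatorname{Spec} k[t]$ be the morphisms induced by $f^{\sharp}(t) = x$ and $g^{\sharp}(t) = 0$. The coequalizer of $f^{\sharp}, g^{\sharp} : k[t] \rightrightarrows k[x]$ in ${\bf Rings}$ is $$ C \; = \; k[x] / \langle f^{\sharp}(t) - g^{\sharp}(t) \rangle \; = \; k[x]/(x) \; \cong \; k, $$ so the equalizer of $f,g$ in ${\bf LRS}$ (equivalently, in ${\bf Sch}$) is the closed subscheme $\operatorname{Spec} k \hookrightarrow \operatorname{Spec} k[x]$ given by the origin of the affine line. \end{rem}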
\begin{rem} \label{rem:inverselimits} The basic results concerning the existence of inverse limits in ${\bf LRS}$ and their coincidence with inverse limits in ${\bf Sch}$ are, at least to some extent, ``folk theorems". I do not claim originality here. The construction of fibered products in ${\bf LRS}$ can perhaps be attributed to Hanno Becker \cite{HB}, and the fact that a cartesian diagram in ${\bf Sch}$ is also cartesian in ${\bf LRS}$ is implicit in the [EGA] Erratum mentioned above. \end{rem}
\begin{rem} \label{rem:topoi} It is unclear to me whether the 2-category of locally ringed topoi has 2-fibered products, though Hakim seems to want such a fibered product in [Hak V.3.2.3]. \end{rem}
\subsection{Fibered products} \label{section:fiberedproducts} In this section, we will more closely examine the construction of fibered products in ${\bf LRS}$ and explain the relationship between fibered products in ${\bf LRS}$ and those in ${\bf RS}$. By Theorem~\ref{thm:Schinverselimits}, the inclusion ${\bf Sch} \hookrightarrow {\bf LRS}$ preserves inverse limits, so these results will generalize the basic results comparing fibered products in ${\bf Sch}$ to those in ${\bf RS}$ (the proofs will become much more transparent as well).
\begin{defn} \label{defn:S} Suppose $(A,\mathfrak{m},k), (B_1,\mathfrak{m}_1,k_1), (B_2,\mathfrak{m}_2,k_2) \in {\bf LAn}$ and $f_i : A \to B_i$ are ${\bf LAn}$ morphisms, so $f_i^{-1}(\mathfrak{m}_i) = \mathfrak{m}$ for $i=1,2$. Let $i_j : B_j \to B_1 \otimes_A B_2$ ($j=1,2$) be the natural maps. Set \begin{eqnarray} \label{S} S(A,B_1,B_2) & := & \{ \mathfrak{p} \in \operatorname{Spec} (B_1 \otimes_A B_2) : i_1^{-1}(\mathfrak{p})=\mathfrak{m}_1, \; i_2^{-1}(\mathfrak{p}) = \mathfrak{m}_2 \} . \end{eqnarray} Note that the kernel $K$ of the natural surjection \begin{eqnarray*} B_1 \otimes_A B_2 & \to & k_1 \otimes_k k_2 \\ b_1 \otimes b_2 & \mapsto & [b_1] \otimes [b_2] \end{eqnarray*} is generated by the expressions $m_1 \otimes 1$ and $1 \otimes m_2$, where $m_i \in \mathfrak{m}_i$, so $$ \operatorname{Spec} (k_1 \otimes_k k_2) \hookrightarrow \operatorname{Spec} (B_1 \otimes_A B_2) $$ is an isomorphism onto $S(A,B_1,B_2)$. In particular, \begin{eqnarray*} S(A,B_1,B_2) & = & \{ \mathfrak{p} \in \operatorname{Spec} (B_1 \otimes_A B_2) : K \subseteq \mathfrak{p} \} \end{eqnarray*} is closed in $\operatorname{Spec} (B_1 \otimes_A B_2)$. \end{defn}
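\begin{rem} For example, if $(A,\mathfrak{m},k) = (\mathbb{Q},0,\mathbb{Q})$ and $B_1 = B_2 = \mathbb{Q}(i)$, then $$ k_1 \otimes_k k_2 \; = \; \mathbb{Q}(i) \otimes_{\mathbb{Q}} \mathbb{Q}(i) \; \cong \; \mathbb{Q}(i) \times \mathbb{Q}(i), $$ so $S(A,B_1,B_2)$ consists of exactly two points. In particular, $S(A,B_1,B_2)$ can have more than one element even when $A$, $B_1$, $B_2$ are fields. \end{rem}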
The subset $S(A,B_1,B_2)$ enjoys the following important property: Suppose $g_i : (B_i,\mathfrak{m}_i) \to (C,\mathfrak{n})$, $i=1,2$, are ${\bf LAn}$ morphisms with $g_1f_1 = g_2 f_2$ and $h = (g_1,g_2) : B_1 \otimes_A B_2 \to C$ is the induced map. Then $h^{-1}(\mathfrak{n}) \in S(A,B_1,B_2)$. Conversely, every $\mathfrak{p} \in S(A,B_1,B_2)$ arises in this manner: take $C = (B_1 \otimes_A B_2)_{\mathfrak{p}}$.
\noindent {\bf Setup:} We will work with the following setup throughout this section. Let $f_1 : X_1 \to Y$, $f_2 : X_2 \to Y$ be morphisms in ${\bf LRS}$. From the universal property of fiber products we get a natural ``comparison" map \begin{eqnarray*} \eta : X_1 \times^{{\bf LRS}}_Y X_2 & \to & X_1 \times^{{\bf RS}}_Y X_2 . \end{eqnarray*} Let $\pi_i : X_1 \times_Y^{{\bf RS}} X_2 \to X_i$ ($i=1,2$) denote the projections and let $g := f_1 \pi_1 = f_2 \pi_2$. Recall that the structure sheaf of $X_1 \times_Y^{{\bf RS}} X_2$ is $\pi_1^{-1} \mathcal O_{X_1} \otimes_{g^{-1} \mathcal O_Y} \pi_2^{-1} \mathcal O_{X_2}$. In particular, the stalk of this structure sheaf at a point $x = (x_1,x_2) \in X_1 \times_Y^{{\bf RS}} X_2$ is $\mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}$, where $$y := g(x) = f_1(x_1) = f_2(x_2).$$ In this situation, we set \begin{eqnarray*} S(x_1,x_2) & := & S(\mathcal O_{Y,y}, \mathcal O_{X_1,x_1}, \mathcal O_{X_2,x_2}) \end{eqnarray*} to save notation.
\begin{thm} \label{thm:comparison} The comparison map $\eta$ is surjective on topological spaces. More precisely, for any $x = (x_1,x_2) \in X_1 \times_Y^{{\bf RS}} X_2$, $\eta^{-1}(x)$ is in bijective correspondence with the set $S(x_1,x_2)$, and in fact, there is an ${\bf LRS}$ isomorphism \begin{eqnarray*} \eta^{-1}(x) & := & X_1 \times_Y^{{\bf LRS}} X_2 \times_{X_1 \times_Y^{{\bf RS}} X_2} (x, \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}) \\ & = & \operatorname{Spec}_{ \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} } S(x_1,x_2). \end{eqnarray*} In particular, $\eta^{-1}(x)$ is isomorphic as a \emph{topological space} to $\operatorname{Spec} k(x_1) \otimes_{k(y)} k(x_2)$ (but not as a ringed space). The stalk of $\eta$ at $z \in S(x_1,x_2)$ is identified with the localization map \begin{eqnarray*} \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} & \to & ( \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} )_z. \end{eqnarray*} In particular, $\eta$ is a localization morphism (Definition~\ref{defn:localizationmorphism}). \end{thm}
\begin{proof} We saw in \S\ref{section:inverselimits} that the comparison map $\eta$ is identified with the localization of $X_1 \times_Y^{{\bf RS}} X_2$ at the prime system $(x_1,x_2) \mapsto S(x_1,x_2)$, so these results follow from Proposition~\ref{prop:localization}. \end{proof}
\begin{rem} When $X_1,X_2,Y \in {\bf Sch}$, the first statement of Theorem~\ref{thm:comparison} is [EGA I.3.4.7]. \end{rem}
\begin{rem} The fact that $\eta$ is a localization morphism is often implicitly used in the theory of the cotangent complex. \end{rem}
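Here is a concrete instance of Theorem~\ref{thm:comparison}.

\begin{example} Let $X_1 = X_2 = \operatorname{Spec} \mathbb{C}$ and $Y = \operatorname{Spec} \mathbb{R}$. The ${\bf RS}$ fiber product $X_1 \times_Y^{{\bf RS}} X_2$ is a one point space whose ``structure sheaf" is the non-local ring $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \times \mathbb{C}$, while the ${\bf LRS}$ fiber product (which agrees with the fiber product of schemes) is $\operatorname{Spec} ( \mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} )$, a two point space. The comparison map $\eta$ therefore collapses two points to one: it is surjective, but not injective. \end{example}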
\begin{defn} \label{defn:rational} Let $f : X \to Y$ be an ${\bf LRS}$ morphism. A point $x \in X$ is called \emph{rational} over $Y$ (or ``over $y := f(x)$" or ``with respect to $f$") iff the map on residue fields $\overline{f}_x : k(y) \to k(x)$ is an isomorphism (equivalently: is surjective). \end{defn}
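For example, if $k$ is an algebraically closed field and $X$ is a scheme locally of finite type over $Y = \operatorname{Spec} k$, then every closed point $x \in X$ is rational over $Y$: by the Nullstellensatz, $k(x)$ is a finite extension of $k$, hence equal to $k$. On the other hand, the generic point of a positive-dimensional integral component of $X$ is never rational over $Y$.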
\begin{cor} \label{cor:comparison1} Suppose $x_1 \in X_1$ is rational over $Y$ (i.e.\ with respect to $f_1 : X_1 \to Y$). Then for any $x=(x_1,x_2) \in X_1 \times^{{\bf RS}}_Y X_2$, the fiber $\eta^{-1}(x)$ of the comparison map $\eta$ is punctual. In particular, if \emph{every} point of $X_1$ is rational over $Y$, then $\eta$ is bijective. \end{cor}
\begin{proof} Suppose $x_1 \in X_1$ is rational over $Y$. Suppose $x = (x_1,x_2) \in X_1 \times^{{\bf RS}}_Y X_2$. Set $y := f_1(x_1)=f_2(x_2)$. Since $x_1$ is rational, $k(y) \cong k(x_1)$, so $\operatorname{Spec} k(x_1) \otimes_{k(y)} k(x_2) \cong \operatorname{Spec} k(x_2)$ has a single element. On the other hand, we saw in Definition~\ref{defn:S} that this set is in bijective correspondence with the set $$ S(x_1,x_2) \subseteq \operatorname{Spec} ( \mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} )$$ appearing in Theorem~\ref{thm:comparison}, so that same theorem says that $\eta^{-1}(x)$ consists of a single point. \end{proof}
\begin{rem} Even if every $x_1 \in X_1$ is rational over $Y$, the comparison map $$\eta : X_1 \times_Y^{{\bf LRS}} X_2 \to X_1 \times_Y^{{\bf RS}} X_2$$ is not generally an isomorphism on topological spaces, even though it is bijective. The topology on $X_1 \times_Y^{{\bf LRS}} X_2$ is generally much finer than the product topology. In this situation, the set $S(x_1,x_2)$ always consists of a single element $z(x_1,x_2)$: namely, the maximal ideal of $\mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2}$ given by the kernel of the natural surjection $$\mathcal O_{X_1,x_1} \otimes_{\mathcal O_{Y,y}} \mathcal O_{X_2,x_2} \to k(x_1) \otimes_{k(y)} k(x_2) = k(x_2).$$ If we identify $X_1 \times_Y^{{\bf LRS}} X_2$ and $X_1 \times_Y^{{\bf RS}} X_2$ as sets via $\eta$, then the ``finer" topology has basic open sets \begin{eqnarray*} \operatorname{U}(U_1 \times_Y U_2,s) := \{ (x_1,x_2) \in U_1 \times_Y U_2 : s_{(x_1,x_2)} \notin z(x_1,x_2) \} \end{eqnarray*} as $U_1,U_2$ range over open subsets of $X_1,X_2$ and $s$ ranges over $$(\pi_1^{-1}\mathcal O_{X_1} \otimes_{g^{-1}\mathcal O_Y} \pi_2^{-1} \mathcal O_{X_2})(U_1 \times_Y U_2).$$ This set is not generally open in the product topology because the stalks of $$\pi_1^{-1}\mathcal O_{X_1} \otimes_{g^{-1}\mathcal O_Y} \pi_2^{-1} \mathcal O_{X_2}$$ are not generally local rings, so not being in $z(x_1,x_2)$ does not imply invertibility, hence is not generally an open condition on $(x_1,x_2)$. \end{rem}
\begin{rem} On the other hand, sometimes the topologies on $X_1$, $X_2$ are so fine that the sets $\operatorname{U}(U_1 \times_Y U_2,s)$ \emph{are} easily seen to be open in the product topology. For example, suppose $k$ is a topological field.\footnote{I require all finite subsets of $k$ to be closed in the definition of ``topological field".} Then one often works in the full subcategory ${\bf C}$ of locally ringed spaces over $k$ consisting of those $X \in {\bf LRS} / k$ satisfying the conditions: \begin{enumerate} \item \label{an1} Every point $x \in X$ is a $k$ point: the composition $k \to \mathcal O_{X,x} \to k(x) $ yields an isomorphism $k=k(x)$ for every $x \in X$. \item The structure sheaf $\mathcal O_{X}$ is \emph{continuous for the topology on} $k$ in the sense that, for every $(U,s) \in {\bf Sec} \, \mathcal O_{X}$, the function \begin{eqnarray*} s( \hspace{0.05in} {\rm \_} \hspace{0.05in} ) : U & \to & k \\ x & \mapsto & s(x) \end{eqnarray*} is a continuous function on $U$. Here $s(x) \in k(x)$ denotes the image of the stalk $s_x \in \mathcal O_{X,x}$ in the residue field $k(x) = \mathcal O_{X,x} / \mathfrak{m}_x$, and we make the identification $k=k(x)$ using \eqref{an1}. \end{enumerate} One can show that fiber products in ${\bf C}$ are the same as those in ${\bf LRS}$ and that the forgetful functor ${\bf C} \to {\bf Top}$ preserves fibered products (even though ${\bf C} \to {\bf RS}$ may not). 
Indeed, given $s \in (\pi_1^{-1}\mathcal O_{X_1} \otimes_{g^{-1}\mathcal O_Y} \pi_2^{-1} \mathcal O_{X_2})(U_1 \times_Y U_2)$, the set $\operatorname{U}(U_1 \times_Y U_2,s)$ is the preimage of $k^* \subseteq k$ under the map $s( \hspace{0.05in} {\rm \_} \hspace{0.05in} )$, and we can see that $s( \hspace{0.05in} {\rm \_} \hspace{0.05in} )$ is continuous as follows: By viewing the sheaf theoretic tensor product as the sheafification of the presheaf tensor product we see that, for any point $(x_1,x_2) \in U_1 \times_Y U_2$, we can find a neighborhood $V_1 \times_Y V_2$ of $(x_1,x_2)$ contained in $U_1 \times_Y U_2$ and sections $a_1, \dots, a_n \in \mathcal O_{X_1}(V_1)$, $b_1,\dots,b_n \in \mathcal O_{X_2}(V_2)$ such that the stalk $s_{x_1',x_2'}$ agrees with $\sum_i (a_i)_{x_1'} \otimes (b_i)_{x_2'}$ at each $(x_1',x_2') \in V_1 \times_Y V_2$. In particular, the function $s( \hspace{0.05in} {\rm \_} \hspace{0.05in} )$ agrees with the function $$(x_1',x_2') \mapsto \sum_i a_i(x_1')b_i(x_2') \in k$$ on $V_1 \times_Y V_2$. Since this latter function is continuous in the product topology on $V_1 \times_Y V_2$ (because each $a_i( \hspace{0.05in} {\rm \_} \hspace{0.05in} )$, $b_i( \hspace{0.05in} {\rm \_} \hspace{0.05in} )$ is continuous) and continuity is local, $s( \hspace{0.05in} {\rm \_} \hspace{0.05in} )$ is continuous. \end{rem}
\begin{cor} \label{cor:comparison2} Suppose $(f_1)_{x_1} : \mathcal O_{Y, f_1(x_1)} \to \mathcal O_{X_1,x_1}$ is surjective for every $x_1 \in X_1$. Then the comparison map $\eta$ is an isomorphism. In particular, $\eta$ is an isomorphism under either of the following hypotheses: \begin{enumerate} \item \label{immersion} $f_1$ is an immersion. \item \label{point} $f_1 : \operatorname{Spec} k(y) \to Y$ is the natural map associated to a point $y \in Y$. \end{enumerate} \end{cor}
\begin{proof} It is equivalent to show that $X := X_1 \times_Y^{{\bf RS}} X_2$ is in ${\bf LRS}$ and the structure maps $\pi_i : X \to X_i$ are ${\bf LRS}$ morphisms. Say $x=(x_1,x_2) \in X$ and let $y := f_1(x_1)=f_2(x_2)$. By construction of $X$, we have a pushout diagram of rings $$ \xymatrix@C+15pt{ \mathcal O_{Y,y} \ar[r]^-{(f_1)_{x_1}} \ar[d]_{(f_2)_{x_2}} & \mathcal O_{X_1,x_1} \ar[d]^{(\pi_1)_x} \\ \mathcal O_{X_2,x_2} \ar[r]^-{(\pi_2)_x} & \mathcal O_{X,x} } $$ hence it is clear from surjectivity of $(f_1)_{x_1}$ and locality of $(f_2)_{x_2}$ that $\mathcal O_{X,x}$ is local and $(\pi_1)_x, (\pi_2)_x$ are ${\bf LAn}$ morphisms. \end{proof}
\begin{cor} \label{cor:comparison3} Suppose $$ \xymatrix{ X_1 \times_Y X_2 \ar[r]^-{\pi_2} \ar[d]_{\pi_1} & X_2 \ar[d]^{f_2} \\ X_1 \ar[r]^{f_1} & Y } $$ is a cartesian diagram in ${\bf LRS}$. Then: \begin{enumerate} \item \label{rationaletafiber} If $z \in X_1 \times_Y X_2$ is rational over $Y$, then $\eta^{-1}(\eta(z)) = \{ z \}$. \item \label{rationalbasechange} Let $(x_1,x_2) \in X_1 \times^{{\bf RS}}_Y X_2$, and let $y := f_1(x_1) = f_2(x_2).$ Suppose $k(x_2)$ is isomorphic, as a field extension of $k(y)$, to a subfield of $k(x_1)$. Then there is a point $z \in X_1 \times_Y X_2$ rational over $X_1$ with $\pi_i(z)=x_i$, $i=1,2$. \end{enumerate} \end{cor>
\begin{proof} For \eqref{rationaletafiber}, set $(x_1,x_2) := \eta(z)$, $y := f_1(x_1) = f_2(x_2)$. Then we have a commutative diagram $$ \xymatrix{ k(z) & \ar[l]_-{\overline{\pi}_{2,z}} k(x_2) \\ k(x_1) \ar[u]^{\overline{\pi}_{1,z}} & \ar[l]_-{\overline{f}_{1, x_1}} k(y) \ar[u]_-{\overline{f}_{2,x_2}} } $$ of residue fields. By hypothesis, the compositions $k(y) \to k(x_i) \to k(z)$ are isomorphisms for $i=1,2$, so it must be that every map in this diagram is an isomorphism, hence the diagram is a pushout. On the other hand, according to the first statement of Theorem~\ref{thm:comparison}, $\eta^{-1}(\eta(z))$ is in bijective correspondence with $$\operatorname{Spec} (k(x_1) \otimes_{k(y)} k(x_2)) = \operatorname{Spec} k(z),$$ which is punctual.
For \eqref{rationalbasechange}, let $i : k(x_2) \hookrightarrow k(x_1)$ be the hypothesized morphism of field extensions of $k(y)$. By the universal property of the ${\bf LRS}$ fibered product $X_1 \times_Y X_2$, the maps \begin{eqnarray*} (x_2,i): \operatorname{Spec} k(x_1) & \to & X_2 \\ x_1 : \operatorname{Spec} k(x_1) & \to & X_1 \end{eqnarray*} give rise to a map $$ g : \operatorname{Spec} k(x_1) \to X_1 \times_Y X_2. $$ Let $z \in X_1 \times_Y X_2$ be the point corresponding to this map. Then we have a commutative diagram of residue fields $$ \xymatrix{ k(x_1) \\ & k(z) \ar[lu] & k(x_2) \ar[l] \ar[llu]_i \\ & k(x_1) \ar@{=}[luu] \ar[u]_{\overline{\pi}_{1,z}} & k(y) \ar[l] \ar[u] }$$ so $\overline{\pi}_{1,z} : k(x_1) \to k(z)$ must be an isomorphism. \end{proof}
\subsection{Spec functor} \label{section:relativespec} Suppose $X \in {\bf LRS}$ and $f : \mathcal O_X \to A$ is an $\mathcal O_X$ algebra. Then $f$ may be viewed as a morphism of ringed spaces $f : (X,A) \to (X,\mathcal O_X)=X$. Give $X$ the local prime system $\mathcal M_X$ as usual and $(X,A)$ the inverse image prime system $f^*\mathcal M_X$ (Remark~\ref{rem:inverseimage}), so $f$ may be viewed as a ${\bf PRS}$ morphism $$f : (X,A,f^* \mathcal M_X) \to (X,\mathcal O_X, \mathcal M_X).$$ Explicitly: \begin{eqnarray*} (f^* \mathcal M_X)_x & = & \{ \mathfrak{p} \in A_x : f_x^{-1}(\mathfrak{p}) = \mathfrak{m}_x \subseteq \mathcal O_{X,x} \}. \end{eqnarray*} By Theorem~\ref{thm:localization}, there is a unique ${\bf LRS}$ morphism $$\overline{f} : (X,A,f^* \mathcal M_X)^{\operatorname{loc}} \to (X,\mathcal O_X,\mathcal M_X)^{\operatorname{loc}}=X$$ lifting $f$ to the localizations. We call \begin{eqnarray*} \operatorname{Spec}_X A & := & (X,A,f^* \mathcal M_X)^{\operatorname{loc}} \end{eqnarray*} the \emph{spectrum} (relative to $X$) of $A$. $\operatorname{Spec}_X$ defines a functor \begin{eqnarray*} \operatorname{Spec}_X : (\mathcal O_X / {\bf Rings}(X))^{\rm op} & \to & {\bf LRS} / X . \end{eqnarray*} Note that $ \operatorname{Spec}_X \mathcal O_X = (X,\mathcal O_X,\mathcal M_X)^{\operatorname{loc}} = X$ by Theorem~\ref{thm:localization}.
Our functor $\operatorname{Spec}_X$ agrees with the usual one (cf.\ [Har II.Ex.5.17]) on their common domain of definition:
\begin{lem} Let $f : X \to Y$ be an affine morphism of schemes. Then $\operatorname{Spec}_X f_* \mathcal O_X$ (as defined above) is naturally isomorphic to $X$ in ${\bf LRS}/Y$. \end{lem}
\begin{proof} This is local on $Y$, so we can assume $Y = \operatorname{Spec} A$ is affine, and hence $X = \operatorname{Spec} B$ is also affine, and $f$ corresponds to a ring map $f^\sharp : A \to B$. Then \begin{eqnarray*} f_* \mathcal O_X & = & B^{\sim} \\ & = & \underline{B}_Y \otimes_{ \underline{A}_Y } \mathcal O_Y, \end{eqnarray*} as $\mathcal O_Y$ algebras, and the squares in the diagram $$ \xymatrix{ (Y,f_* \mathcal O_X, (f^\flat)^* \mathcal M_Y) \ar[r] \ar[d] & (Y,\mathcal O_Y,\mathcal M_Y) \ar[d] \\ (Y,\underline{B}_Y, \underline{f^{\sharp}}_Y^* N ) \ar[r] \ar[d] & (Y,\underline{A}_Y, N) \ar[d] \\ (*,B,\operatorname{Spec} B) \ar[r] & (*,A,\operatorname{Spec} A) } $$ in ${\bf PRS}$ are cartesian in ${\bf PRS}$, where $N$ is the prime system on $(Y,\underline{A}_Y)$ given by $N_y = \{ y \}$ discussed in Lemma~\ref{lem:important}. According to that lemma, the right vertical arrows become isomorphisms upon localizing, and according to Theorem~\ref{thm:inverselimits}, the diagram stays cartesian upon localizing, so the left vertical arrows also become isomorphisms upon localizing, hence \begin{eqnarray*} \operatorname{Spec}_Y f_* \mathcal O_X & := & (Y, f_*\mathcal O_X, (f^{\flat})^* \mathcal M_Y)^{\operatorname{loc}} \\ & = & \operatorname{Spec} B \\ & = & X. \end{eqnarray*} \end{proof}
\begin{rem} Hakim [Hak IV.1] defines a ``Spec functor" from ringed topoi to locally ringed topoi, but it is not the same as ours on the common domain of definition. There is no meaningful situation in which Hakim's Spec functor agrees with the ``usual" one. When $X$ ``is" a locally ringed space, Hakim's $\operatorname{Spec} X$ ``is" (up to replacing a locally ringed space with the corresponding locally ringed topos) our $(X,\mathcal T_X)^{\operatorname{loc}}$. As mentioned in Remark~\ref{rem:localization}, Hakim's theory of localization is only developed for the terminal prime system, which can be a bit awkward at times. For example, if $X$ is a locally ringed space at least one of whose local rings has positive Krull dimension, Hakim's sequence of spectra yields an infinite strictly descending sequence of ${\bf RS}$ morphisms $$ \cdots \to \operatorname{Spec} (\operatorname{Spec} X) \to \operatorname{Spec} X \to X.$$ \end{rem}
The next results show that $\operatorname{Spec}_X$ takes direct limits of $\mathcal O_X$ algebras to inverse limits in ${\bf LRS}$ and that $\operatorname{Spec}_X$ is compatible with changing the base $X$.
\begin{lem} The functor $\operatorname{Spec}_X$ preserves inverse limits. \end{lem}
\begin{proof} Let $i \mapsto (f_i : \mathcal O_X \to A_i)$ be a direct limit system in $\mathcal O_X / {\bf Rings}(X)$, with direct limit $f : \mathcal O_X \to A$, and structure maps $j_i : A_i \to A$. We claim that $\operatorname{Spec}_X A = (X,A,f^* \mathcal M_X)^{\operatorname{loc}}$ is the inverse limit of $i \mapsto \operatorname{Spec}_X A_i = (X,A_i,f_i^*\mathcal M_X)^{\operatorname{loc}}$. By Theorem~\ref{thm:inverselimits}, it is enough to show that $(X,A,f^* \mathcal M_X)$ is the inverse limit of $i \mapsto (X,A_i,f_i^* \mathcal M_X)$ in ${\bf PRS}$. Certainly $(X,A)$ is the inverse limit of $i \mapsto (X,A_i)$ in ${\bf RS}$, so we just need to show that $f^* \mathcal M_X = \cap_i j_i^*( f_i^* \mathcal M_X)$ as prime systems on $(X,A)$ (see the proof of Theorem~\ref{thm:inverselimits}), and this is clear because $j_i f_i = f$, so, in fact, $j_i^*(f_i^* \mathcal M_X) = f^* \mathcal M_X$ for every $i$. \end{proof}
\begin{lem} Let $f : X \to Y$ be a morphism of locally ringed spaces. Then for any $\mathcal O_Y$ algebra $g : \mathcal O_Y \to A$, the diagram $$ \xymatrix{ \operatorname{Spec}_X f^*A \ar[r] \ar[d] & \operatorname{Spec}_Y A \ar[d] \\ X \ar[r] & Y } $$ is cartesian in ${\bf LRS}$. \end{lem}
\begin{proof} Note $f^*A := f^{-1}A \otimes_{f^{-1} \mathcal O_Y} \mathcal O_X$ as usual. One sees easily that $$ \xymatrix{ (X, f^*A, (f^{-1}g)^* \mathcal M_X) \ar[r] \ar[d] & (Y,A,g^* \mathcal M_Y) \ar[d] \\ (X,\mathcal O_X,\mathcal M_X) \ar[r] & (Y,\mathcal O_Y,\mathcal M_Y) } $$ is cartesian in ${\bf PRS}$ so the result follows from Theorem~\ref{thm:inverselimits}. \end{proof}
\begin{example} \label{example:Spec} When $X$ is a scheme, but $A$ is not a coherent $\mathcal O_X$ module, $\operatorname{Spec}_X A$ may not be a scheme. For example, let $B$ be a local ring, $X := \operatorname{Spec} B$, and let $x$ be the unique closed point of $X$. Let $A := x_*B \in {\bf Rings}(X)$ be the skyscraper sheaf $B$ supported at $x$. Note $\mathcal O_{X,x} = B$ and \begin{eqnarray*} \operatorname{Hom}_{{\bf Rings}(X)}(\mathcal O_X,x_*B) & = & \operatorname{Hom}_{{\bf Rings}}(\mathcal O_{X,x}, B), \end{eqnarray*} so we have a natural map $\mathcal O_X \to A$ in ${\bf Rings}(X)$ whose stalk at $x$ is $\operatorname{Id} : B \to B$. Then $\operatorname{Spec}_X A = (\{ x \}, A)$ is the punctual space with ``sheaf" of rings $A$, mapping in ${\bf LRS}$ to $X$ in the obvious manner. But $(\{ x \}, A)$ is not a scheme unless $A$ is zero dimensional.
Here is a related pathological example: Proceed as above, assuming $B$ is a local \emph{domain} which is not a field and let $K$ be its fraction field. Let $A := x_* K$, and let $\mathcal O_X \to A$ be the unique map whose stalk at $x$ is $B \to K$. Then $\operatorname{Spec}_X A$ is empty. \end{example}
Suppose $X$ is a scheme, and $A$ is an $\mathcal O_X$ algebra such that $\operatorname{Spec}_X A$ is a scheme. I do not know whether this implies that the structure morphism $\operatorname{Spec}_X A \to X$ is an \emph{affine} morphism of schemes.
\subsection{Relative schemes} We begin by recalling some definitions.
\begin{defn} \label{defn:fibered} ([SGA1], [Vis 3.1]) Let $F : {\bf C} \to {\bf D}$ be a functor. A ${\bf C}$ morphism $f : c \to c'$ is called \emph{cartesian} (relative to $F$) iff, for any ${\bf C}$ morphism $g : c'' \to c'$ and any ${\bf D}$ morphism $h : Fc \to Fc''$ with $Fg \circ h = Ff$ there is a unique ${\bf C}$ morphism $\overline{h} : c \to c''$ with $F \overline{h} = h$ and $f = g \overline{h}$. The functor $F$ is called a \emph{fibered category} iff, for any ${\bf D}$ morphism $f : d \to d'$ and any object $c'$ of ${\bf C}$ with $Fc'=d'$, there is a cartesian morphism $\overline{f} : c \to c'$ with $F \overline{f}=f$. A \emph{morphism} of fibered categories $$(F : {\bf C} \to {\bf D}) \to (F' : {\bf C}' \to {\bf D}')$$ is a functor $G : {\bf C} \to {\bf C}'$ satisfying $F'G=F$ and taking cartesian arrows to cartesian arrows. If ${\bf D}$ has a topology (i.e.\ is a site), then a fibered category $F : {\bf C} \to {\bf D}$ is called a \emph{stack} iff, for any object $d \in {\bf D}$ and any cover $\{ d_i \to d \}$ of $d$ in ${\bf D}$, the category $F^{-1}(d)$ is equivalent to the category $F(\{ d_i \to d \})$ of \emph{descent data} (see [Vis 4.1]). \end{defn}
Every fibered category $F$ admits a morphism of fibered categories, called the \emph{associated stack}, to a stack universal among such morphisms [Gir I.4.1.2].
\begin{defn} \label{defn:SchX} ([Hak V.1]) Let $X$ be a ringed space. Define a category ${\bf Sch}_X^{\rm pre}$ as follows. Objects of ${\bf Sch}_X^{\rm pre}$ are pairs $(U,X_U)$ consisting of an open subset $U \subseteq X$ and a scheme $X_U$ over $\operatorname{Spec} \mathcal O_X(U)$. A morphism $(U,X_U) \to (V,X_V)$ is a pair $(U \to V, X_U \to X_V)$ consisting of an ${\bf Ouv}(X)$ morphism $U \to V$ (i.e. $U \subseteq V$) and a morphism of schemes $X_U \to X_V$ making the diagram \begin{eqnarray} \label{mo} & \xymatrix{ X_U \ar[d] \ar[r] & X_V \ar[d] \\ \operatorname{Spec} \mathcal O_X(U) \ar[r] & \operatorname{Spec} \mathcal O_X(V) } \end{eqnarray} commute in ${\bf Sch}$. The forgetful functor ${\bf Sch}_X^{\rm pre} \to {\bf Ouv}(X)$ is clearly a fibered category, where a cartesian arrow is a ${\bf Sch}_X^{\rm pre}$ morphism $(U \to V, X_U \to X_V)$ making \eqref{mo} \emph{cartesian} in ${\bf Sch}$ (equivalently in ${\bf LRS}$). Since ${\bf Ouv}(X)$ has a topology, we can form the associated stack ${\bf Sch}_X$. The category of \emph{relative schemes over} $X$ is, by definition, the fiber category ${\bf Sch}_X(X)$ of ${\bf Sch}_X$ over the terminal object $X$ of ${\bf Ouv}(X)$. \end{defn}
(The definition of relative scheme makes sense for a ringed topos $X$ with trivial modifications.)
\subsection{Geometric realization} \label{section:geometricrealization} Now let $X$ be a \emph{locally} ringed space. Following [Hak V.3], we define a functor \begin{eqnarray*} F_X : {\bf Sch}_X(X) & \to & {\bf LRS} / X \end{eqnarray*} called the \emph{geometric realization}. Although a bit abstract, the fastest way to proceed is as follows:
\begin{defn} \label{defn:LRSX} Let ${\bf LRS}_X$ be the category whose objects are pairs $(U,X_U)$ consisting of an open subset $U \subseteq X$ and a locally ringed space $X_U$ over $(U,\mathcal O_X|U)$, and where a morphism $(U,X_U) \to (V,X_V)$ is a pair $(U \to V, X_U \to X_V)$ consisting of an ${\bf Ouv}(X)$ morphism $U \to V$ (i.e. $U \subseteq V$) and an ${\bf LRS}$ morphism $X_U \to X_V$ making the diagram \begin{eqnarray} \label{mor} & \xymatrix{ X_U \ar[d] \ar[r] & X_V \ar[d] \\ (U,\mathcal O_X|U) \ar@{^(->}[r] & (V,\mathcal O_X|V) } \end{eqnarray} commute in ${\bf LRS}$. The forgetful functor $(U,X_U) \mapsto U$ makes ${\bf LRS}_X$ a fibered category over ${\bf Ouv}(X)$ where a cartesian arrow is a morphism $(U \to V, X_U \to X_V)$ making \eqref{mor} cartesian in ${\bf LRS}$. \end{defn}
In fact the fibered category ${\bf LRS}_X \to {\bf Ouv}(X)$ is a stack: one can define locally ringed spaces and morphisms thereof over open subsets of $X$ locally. Using the universal property of stackification, we define $F_X$ to be the morphism of stacks (really, the corresponding morphism on fiber categories over the terminal object $X \in {\bf Ouv}(X)$) associated to the morphism of fibered categories \begin{eqnarray*} F_X^{\rm pre} : {\bf Sch}_X^{\rm pre} & \to & {\bf LRS}_X \\ (U,X_U) & \mapsto & (U, X_U \times_{\operatorname{Spec} \mathcal O_X(U)}^{{\bf LRS}} (U,\mathcal O_X|U)) . \end{eqnarray*} The map $(U,\mathcal O_X|U) \to \operatorname{Spec} \mathcal O_X(U)$ is the adjunction morphism for the adjoint functors of Proposition~\ref{prop:affine}. This functor clearly takes cartesian arrows to cartesian arrows.
\begin{eqnarray*}gin{rem} Although we loosely follow [Hak V.3.2] in our construction of the geometric realization, our geometric realization functor differs from Hakim's on their common domain of definition. \end{rem}
\subsection{Relatively affine morphisms} Let $f : X \to Y$ be an ${\bf LRS}$ morphism. Consider the following conditions: \begin{enumerate}[label=RA\theenumi., ref=RA\theenumi] \item \label{RA11} Locally on $Y$ there is an $\mathcal O_Y(Y)$ algebra $A$ and a cartesian diagram $$ \xymatrix{ X \ar[r] \ar[d]_f & \operatorname{Spec} A \ar[d] \\ Y \ar[r] & \operatorname{Spec} \mathcal O_Y(Y) } $$ in ${\bf LRS}$. \item \label{RA} There is an $\mathcal O_Y$ algebra $A$ so that $f$ is isomorphic to $\operatorname{Spec}_Y A$ in ${\bf LRS}/Y$. \item \label{RASpecQCo} Same condition as above, but $A$ is required to be quasi-coherent. \item \label{RA4} For any $g : Z \to Y$ in ${\bf LRS} / Y$, the map \begin{eqnarray*} \operatorname{Hom}_{{\bf LRS} / Y}(Z,X) & \to & \operatorname{Hom}_{\mathcal O_Y / {\bf Rings}(Y)}( f_* \mathcal O_X, g_* \mathcal O_Z) \\ h & \mapsto & g_* h^{\flat} \end{eqnarray*} is bijective. \end{enumerate}
\begin{rem} The condition \eqref{RA11} is equivalent to both of the following conditions: \begin{enumerate}[label=RA1.\theenumi., ref=RA1\theenumi] \item \label{RA12} Locally on $Y$ there is a ring homomorphism $A \to B$ and a cartesian diagram $$ \xymatrix{ X \ar[r] \ar[d]_f & \operatorname{Spec} B \ar[d] \\ Y \ar[r] & \operatorname{Spec} A } $$ in ${\bf LRS}$. \item \label{RA13} Locally on $Y$ there is an affine morphism of schemes $X' \to Y'$ and a cartesian diagram $$ \xymatrix{ X \ar[r] \ar[d]_f & X' \ar[d] \\ Y \ar[r] & Y' } $$ in ${\bf LRS}$. \end{enumerate} The above two conditions are equivalent by definition of an affine morphism of schemes, and one sees the equivalence of \eqref{RA11} and \eqref{RA12} using Proposition~\ref{prop:affine}, which ensures that the map $Y \to \operatorname{Spec} A$ in \eqref{RA12} factors through $Y \to \operatorname{Spec} \mathcal O_Y(Y)$, hence \begin{eqnarray*} X & = & Y \times_{\operatorname{Spec} A} \operatorname{Spec} B \\ & = & Y \times_{\operatorname{Spec} \mathcal O_Y(Y)} \operatorname{Spec} \mathcal O_{Y}(Y) \times_{\operatorname{Spec} A} \operatorname{Spec} B \\ & = & Y \times_{\operatorname{Spec} \mathcal O_Y(Y)} \operatorname{Spec} (\mathcal O_Y(Y) \otimes_A B).\end{eqnarray*} \end{rem}
Each of these conditions has some claim to be the definition of a \emph{relatively affine morphism} in ${\bf LRS}$. With the exception of \eqref{RA}, all of the conditions are equivalent, when $Y$ is a scheme, to $f$ being an affine morphism of schemes in the usual sense. With the exception of \eqref{RA4}, each condition is closed under base change. For each possible definition of a relatively affine morphism in ${\bf LRS}$, one has a corresponding definition of \emph{relatively schematic morphism}, namely: $f : X \to Y$ in ${\bf LRS}$ is relatively schematic iff, locally on $X$, $f$ is relatively affine.
The notion of ``relatively schematic morphism" obtained from \eqref{RA11} is equivalent to: $f : X \to Y$ is in the essential image of the geometric realization functor $F_Y$.
\subsection{Monoidal spaces} The setup of localization of ringed spaces works equally well in other settings; for example in the category of \emph{monoidal spaces}. We will sketch the relevant definitions and results. For our purposes, a \emph{monoid} is a set $P$ equipped with a commutative, associative binary operation $+$ such that there is an element $0 \in P$ with $0+p=p$ for all $p \in P$. A morphism of monoids is a map of sets that respects $+$ and takes $0$ to $0$. An \emph{ideal} of a monoid $P$ is a subset $I \subseteq P$ such that $I+P \subseteq I$. An ideal $I$ is \emph{prime} iff its complement is a submonoid (in particular, its complement must be non-empty). A submonoid whose complement is an ideal, necessarily prime, is called a \emph{face}. For example, the faces of $\mathbb{N}^2$ are $\{ (0,0) \}$, $\mathbb{N} \oplus 0$, and $0 \oplus \mathbb{N}$; the diagonal $\Delta : \mathbb{N} \hookrightarrow \mathbb{N}^2$ is a submonoid, but not a face.
If $S \subseteq P$ is a submonoid, the \emph{localization} of $P$ at $S$ is the monoid $S^{-1}P$ whose elements are equivalence classes $[p,s]$, $p \in P$, $s \in S$ where $[p,s]=[p',s']$ iff there is some $t \in S$ with $t+p+s'=t+p'+s$, and where $[p,s]+[p',s']:=[p+p',s+s']$. The natural map $P \to S^{-1}P$ given by $p \mapsto [p,0]$ is initial among monoid homomorphisms $h : P \to Q$ with $h(S) \subseteq Q^*$. The localization of a monoid at a prime ideal is, by definition, the localization at the complementary face.
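A basic example of this construction:

\begin{example} Let $P = \mathbb{N}^2$ and let $S = \mathbb{N} \oplus 0$ be the face complementary to the prime ideal $\mathbb{N} \times \mathbb{N}_{>0}$. Since $P$ is cancellative, the assignment $[(a,b),(s,0)] \mapsto (a-s,b)$ gives a well-defined isomorphism $S^{-1}P \cong \mathbb{Z} \oplus \mathbb{N}$: localizing at $S$ inverts the first coordinate and leaves the second untouched. \end{example}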
A \emph{monoidal space} $(X,\mathcal M_X)$ is a topological space $X$ equipped with a sheaf of monoids $\mathcal M_X$. Monoidal spaces form a category ${\bf MS}$ where a morphism $f=(f, f^\dagger) : (X,\mathcal M_X) \to (Y,\mathcal M_Y)$ consists of a continuous map $f : X \to Y$ together with a map $f^\dagger : f^{-1} \mathcal M_Y \to \mathcal M_X$ of sheaves of monoids on $X$. A monoidal space $(X,\mathcal M_X)$ is called \emph{local} iff each stalk monoid $\mathcal M_{X,x}$ has a unique maximal ideal $\mathfrak{m}_x$. Local monoidal spaces form a category ${\bf LMS}$ where a morphism is a map of the underlying monoidal spaces such that each stalk map $f^\dagger_x : \mathcal M_{Y,f(x)} \to \mathcal M_{X,x}$ is \emph{local} in the sense that $(f^\dagger_x)^{-1}( \mathfrak{m}_{f(x)}) = \mathfrak{m}_x$. A \emph{primed monoidal space} is a monoidal space equipped with a set of primes $M_x$ in each stalk monoid $\mathcal M_{X,x}$. The \emph{localization} of a primed monoidal space is a map of monoidal spaces $(X,\mathcal M_X,M)^{\rm loc} \to (X,\mathcal M_X)$ from a local monoidal space constructed in an obvious manner analogous to the construction of \S\ref{section:localization} and enjoying a similar universal property. In particular, we let $\operatorname{Spec} P$ denote the localization of the punctual space with ``sheaf" of monoids $P$ at the terminal prime system. A \emph{scheme over} $\mathbb{F}_1$ is a locally monoidal space locally isomorphic to $\operatorname{Spec} P$ for various monoids $P$. (This is not my terminology.)
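The simplest nontrivial example of such a spectrum:

\begin{example} The monoid $\mathbb{N}$ has exactly two prime ideals: the empty ideal (its complement is all of $\mathbb{N}$) and $\mathbb{N}_{>0}$ (its complement is the submonoid $\{ 0 \}$). Hence $\operatorname{Spec} \mathbb{N}$ is a two point space with a generic point and a closed point; it is the scheme over $\mathbb{F}_1$ playing the role of the affine line. \end{example}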
The same ``general nonsense" arguments of this paper allow us to construct inverse limits of local monoidal spaces, to prove that a finite inverse limit of schemes over $\mathbb{F}_1$, taken in local monoidal spaces, is again a scheme over $\mathbb{F}_1$, to construct a relative $\operatorname{Spec}$ functor \begin{eqnarray*} \operatorname{Spec} : (\mathcal M_X / {\bf Mon}(X))^{\rm op} & \to & {\bf LMS} / (X,\mathcal M_X) \end{eqnarray*} for any $(X,\mathcal M_X) \in {\bf LMS}$ which preserves inverse limits, and to prove that the natural map $$\operatorname{Hom}_{{\bf LMS}}((X,\mathcal M_X),\operatorname{Spec} P) \to \operatorname{Hom}_{{\bf Mon}}(P,\mathcal M_X(X))$$ is bijective.
\begin{thebibliography}{EGA}
\bibitem[Hak]{Hak} M.~Hakim, \emph{Topos annel\'es et sch\'emas relatifs.} Ergeb. Math. Grenz. Band 64. Springer-Verlag 1972.
\bibitem[Har]{Har} R.~Hartshorne, \emph{Algebraic Geometry.} Springer-Verlag 1977.
\bibitem[HB]{HB} H.~Becker, \emph{Faserprodukt in LRS}, available at:
\begin{tt} http://www.uni-bonn.de/$\sim$habecker/Faserprodukt$\underline{ \hspace{.075in} }$in$\underline{ \hspace{.075in} }$LRS.pdf.\end{tt}
\bibitem[Ill]{Ill} L.~Illusie, \emph{Complexe Cotangent et Deformations I.} L.N.M. 239. Springer-Verlag 1971.
\bibitem[EGA]{EGA} A.~Grothendieck and J.~Dieudonn\'e, \emph{\'El\'ements de g\'eom\'etrie alg\'ebrique.} Pub. Math. I.H.E.S., 1960.
\bibitem[Gir]{Gir} J.~Giraud, \emph{Cohomologie non ab\'elienne.}
\bibitem[Vis]{Vis} A.~Vistoli, \emph{Notes on Grothendieck topologies, fibered categories, and descent theory.}
\end{thebibliography}
\end{document}
\begin{document}
\renewcommand{\sectionautorefname}{Section}
\renewcommand{\subsectionautorefname}{Subsection}
\begin{center}
\vspace*{-2em}
\noindent\makebox[\linewidth]{\rule{14cm}{0.4pt}}
{\LARGE{\textsc{\textbf{The Hodge ring of varieties\\[.3em] in positive characteristic}}}}
{\large{\textsc{Remy van Dobben de Bruyn}}}
\rule{8cm}{0.4pt}
\end{center}
\renewcommand{\abstractname}{\small\bfseries\scshape Abstract}
\begin{abstract}\noindent
Let $k$ be a field of positive characteristic. We prove that the only linear relations between the Hodge numbers $h^{i,j}(X) = \dim H^j(X,\Omega_X^i)$ that hold for \emph{every} smooth proper variety $X$ over $k$ are the ones given by Serre duality. We also show that the only linear combinations of Hodge numbers that are birational invariants of $X$ are given by the span of the $h^{i,0}(X)$ and the $h^{0,j}(X)$ (and their duals $h^{i,n}(X)$ and $h^{n,j}(X)$). The corresponding statements for compact K\"ahler manifolds were proven by Kotschick and Schreieder \cite{KS}.
\end{abstract}
\phantomsection
\section*{Introduction}
The Hodge numbers $h^{i,j}(X) = \dim H^j(X,\Omega^i_X)$ of a compact K\"ahler manifold~$X$ satisfy the relations $h^{i,j} = h^{n-i,n-j}$ (Serre duality) and $h^{i,j} = h^{j,i}$ (Hodge symmetry). Kotschick and Schreieder showed \cite[Thm.~1,~consequence~(2)]{KS} that these are the only \emph{universal} linear relations between the $h^{i,j}$, i.e.\ the only ones that are satisfied for every compact K\"ahler manifold $X$ of dimension $n$.
Serre constructed smooth projective varieties in characteristic $p > 0$ for which Hodge symmetry fails \cite[Prop.\ 16]{SerMex}; Serre duality, however, still holds. Thus, the natural analogue of the result of Kotschick--Schreieder is the following.
\begin{thm}\label{Thm Hodge}
The only universal linear relations between the Hodge numbers $h^{i,j}(X)$ of smooth proper varieties $X$ over an arbitrary field $k$ of characteristic $p > 0$ are the ones given by Serre duality.
\end{thm}
This is proven in \autoref{Thm linear relations Hodge-de Rham}. The strategy is identical to that of \cite{KS}: compute the subring of $\mathbf Z[x,y,z]$ generated by the formal Hodge polynomials
\[
h(X) = \left(\sum_{i,j} h^{i,j}(X)x^i y^j\right)z^{\dim X}.
\]
The term $z^{\dim X}$ was introduced in \cite{KS}; it ensures that varieties of different dimensions cannot be identified with each other, and it defines a grading on $\mathbf Z[x,y,z]$ by dimension, i.e.\ $\mathbf Z[x,y,z]_n = \mathbf Z[x,y]z^n$.
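To make the definition concrete, here is a small sanity check (our own illustration, not from the paper): the Hodge polynomials of $\mathbf P^1$, an elliptic curve $E$, and $\mathbf P^2$, computed from their standard Hodge diamonds, with polynomials in $\mathbf Z[x,y,z]$ represented as dicts keyed by exponent triples.

```python
# Formal Hodge polynomials h(X) = (sum_{i,j} h^{i,j} x^i y^j) z^n,
# encoded as {(i, j, n): coefficient}.  The representation is illustrative.

def mul(p, q):
    """Product in Z[x, y, z]."""
    out = {}
    for (i, j, n), a in p.items():
        for (k, l, m), b in q.items():
            key = (i + k, j + l, n + m)
            out[key] = out.get(key, 0) + a * b
    return {k: v for k, v in out.items() if v != 0}

def serre_dual(p):
    """Replace h^{i,j} by h^{n-i,n-j} in each z-degree n."""
    return {(n - i, n - j, n): a for (i, j, n), a in p.items()}

# Standard Hodge diamonds:
hP1 = {(0, 0, 1): 1, (1, 1, 1): 1}                              # (1 + xy) z
hE  = {(0, 0, 1): 1, (1, 0, 1): 1, (0, 1, 1): 1, (1, 1, 1): 1}  # (1 + x + y + xy) z
hP2 = {(0, 0, 2): 1, (1, 1, 2): 1, (2, 2, 2): 1}                # (1 + xy + x^2 y^2) z^2

# Serre duality h^{i,j} = h^{n-i,n-j} holds in each case:
assert all(serre_dual(h) == h for h in (hP1, hE, hP2))
# h is multiplicative (Kunneth): h(P^1 x P^1) = h(P^1)^2.
assert mul(hP1, hP1) == {(0, 0, 2): 1, (1, 1, 2): 2, (2, 2, 2): 1}
```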
In characteristic $0$, Kotschick and Schreieder prove \cite[Cor.\ 3]{KS} that the graded subring of $\mathbf Z[x,y,z]$ given in degree $n$ by the equations $h^{i,j} = h^{n-i,n-j}$ and $h^{i,j} = h^{j,i}$ is the free algebra $\mathbf Z[h(\mathbf P^1),h(E),h(\mathbf P^2)]$ on the Hodge polynomials of $\mathbf P^1$, an elliptic curve $E$, and~$\mathbf P^2$.
We prove the natural analogue in characteristic $p$:
\begin{thm}\label{Thm Hodge ring}
Let $\mathcal{H} \subseteq \mathbf Z[x,y,z]$ be the graded subring defined by
\[
\mathcal{H}_n = \left\{\left(\sum_{i,j=0}^n h^{i,j}x^iy^j\right)z^n\ \Bigg|\ h^{i,j} = h^{n-i,n-j} \text{ for all } i,j\right\}.
\]
Then $\mathcal{H}$ is generated by $h(\mathbf P^1)$, $h(E)$, $h(\mathbf P^2)$, and $h(S)$, where $E$ is any elliptic curve and $S$ is any surface with $h^{1,0}(S) - h^{0,1}(S) = \pm 1$. Moreover, $h(S)$ satisfies a monic quadratic relation over the free polynomial algebra $\mathbf Z[h(\mathbf P^1),h(E),h(\mathbf P^2)]$.
\end{thm}
This is proven in \autoref{Cor generated by P^1, E, P^2, and S}. An example of a surface $S$ satisfying the hypothesis is given by Serre's counterexample to Hodge symmetry \cite[Prop.~16]{SerMex}, since $h^{0,1}(S) = 0$ and $h^{1,0}(S) = 1$. We recall the construction in \autoref{Prop example Hodge symmetry}.
There are ``twice as many'' possible Hodge polynomials in characteristic $p$ as in characteristic $0$: the Hodge diamonds now only have a $\mathbf Z/2$-symmetry instead of a $(\mathbf Z/2)^2$-symmetry. Thus, a Hilbert polynomial calculation suggests that the Hodge ring of varieties in characteristic $p > 0$ should be a quadratic extension of the one in characteristic $0$. This is indeed confirmed by \autoref{Thm Hodge ring}.
Similar to \cite[Thm.\ 2]{KS}, we also get a statement on birational invariants:
\begin{thm}\label{Thm Hodge birational}
A linear combination of the Hodge numbers $h^{i,j}(X)$ is a birational invariant of the variety $X$ (over a field $k$ of characteristic $p$) if and only if it is in the span of the $h^{0,j} = h^{n,n-j}$ and the $h^{i,0} = h^{n-i,n}$.
\end{thm}
This is proven in \autoref{Thm linear combinations birational invariant} below. The result in characteristic $0$ is the same \cite[Thm.\ 2]{KS}, except that there $h^{i,0}$ is actually equal to $h^{0,i}$ by Hodge symmetry. The fact that the $h^{0,j}$ are birational invariants in characteristic $p > 0$ is due to Chatzistamatiou and R\"ulling \makebox{\cite[Thm.\ 1]{ChaRul};} for $h^{i,0}$ this is classical.
Finally we address the failure of degeneration of the Hodge--de Rham spectral sequence:
\begin{thm}\label{Thm Hodge de Rham}
The only linear relations between the Hodge numbers $h^{i,j}(X)$ and the de Rham numbers $h_{\operatorname{dR}}^i(X)$ of smooth proper varieties $X$ over $k$ are the relations spanned by
\begin{itemize}
\item Serre duality: $h^{i,j}(X) = h^{n-i,n-j}(X)$;
\item Poincar\'e duality: $h_{\operatorname{dR}}^i(X) = h_{\operatorname{dR}}^{2n-i}(X)$;
\item Components: $h^{0,0}(X) = h_{\operatorname{dR}}^0(X)$;
\item Euler characteristic: $\sum_{i,j} (-1)^{i+j} h^{i,j}(X) = \sum_i (-1)^i h_{\operatorname{dR}}^i(X)$.
\end{itemize}
\end{thm}
This is proven in \autoref{Thm linear relations Hodge-de Rham}.
\subsection*{Outline of the paper}
In \autoref{Sec Hodge symmetry} we recall Serre's example of the failure of Hodge symmetry, and make slight improvements that we will use later. In \autoref{Sec Hodge-de Rham degeneration} we recall an example of a similar flavour, due to W.\ Lang, of a surface for which the Hodge--de Rham spectral sequence does not degenerate.
In \autoref{Sec Hodge ring} we define the \emph{Hodge ring of varieties} as the graded subring of $\mathbf Z[x,y,z]$ where Serre duality holds, and show that it is generated by the Hodge polynomials of $\mathbf P^1$, any elliptic curve $E$, $\mathbf P^2$, and the surface $S$ of \autoref{Sec Hodge symmetry}. This proves \autoref{Thm Hodge}. In \autoref{Sec birational} we use this presentation to study birational invariants, proving \autoref{Thm Hodge birational}.
In \autoref{Sec de Rham} we define the \emph{de Rham ring of varieties} as the graded subring of $\mathbf Z[t,z]$ where Poincar\'e duality holds, and produce a similar presentation in terms of $\mathbf P^1$, $E$, $\mathbf P^2$, and $S$. Finally, in \autoref{Sec Hodge-de Rham} we combine the information of the previous sections into a \emph{Hodge--de Rham ring of varieties}, and we use the surface $S$ of \autoref{Sec Hodge-de Rham degeneration} to show that the Hodge--de Rham ring can be generated by varieties. This gives \autoref{Thm Hodge de Rham}.
\subsection*{Notation}
Throughout, $p$ will be a prime number, and $k$ will be a field of characteristic $p$ (fixed once and for all). A \emph{variety} over a field $k$ will be a geometrically integral, finite type, separated $k$-scheme. We write $\mathbf{Var}_k$ for the set of isomorphism classes of smooth proper $k$-varieties.
For a variety $X$ over a field $k$, we will write $H^{i,j}(X) = H^j(X,\Omega_X^i)$, and denote its dimension by $h^{i,j}(X) = h^j(X,\Omega_X^i)$ (we avoid the usual $h^{p,q}(X)$ as it leads to a clash of notation). Similarly, the algebraic de Rham cohomology is denoted $H_{\operatorname{dR}}^i(X)$, and its dimension is $h_{\operatorname{dR}}^i(X)$. We warn the reader that $h^{i,j}(X)$ and $h^{j,i}(X)$ may differ, and the sum $\sum_{i+j=m} h^{i,j}(X)$ may not equal $h_{\operatorname{dR}}^m(X)$.
The Picard functor $\mathbf{Pic}_X = R^1\pi_* \mathbf G_m$ of a smooth proper variety $\pi \colon X \to \operatorname{Spec} k$ is representable by a scheme \cite[Exp.~XII,~Cor.~1.5(a)]{SGA6}. We denote its identity component by $\mathbf{Pic}_X^0$, and the union of the torsion components by $\mathbf{Pic}^\tau_X$. Note that neither is reduced in general.
\section{Failure of Hodge symmetry}\label{Sec Hodge symmetry}
Following Serre \cite[Prop.\ 16]{SerMex}, we construct a smooth projective surface $X$ over $\mathbf F_p$ with $h^0(X,\Omega_X^1) = 0$ but $h^1(X,\mathcal O_X) = 1$. We do a little better than Serre's example: our $X$ is defined over the prime field $\mathbf F_p$, admits a lift to $\mathbf Z_p$, and we include the exact computation of $h^1(X,\mathcal O_X)$.
We start with a well-known lemma that is useful for the constructions in this section and the next.
\begin{Lemma}\label{Lem G-torsor sequence}
Let $X$ and $Y$ be proper $k$-varieties, let $G$ be a finite group scheme over $k$, and let $f \colon Y \to X$ be a $G$-torsor. Then there is an exact sequence
\[
0 \to G^\vee \to \mathbf{Pic}_X \to (\mathbf{Pic}_Y)^G
\]
on the big flat site $(\operatorname{Spec} k)_{\operatorname{fppf}}$, where $G^\vee$ is the Cartier dual of $G$.
\end{Lemma}
\begin{proof}
We have a commutative diagram
\begin{equation*}
\begin{tikzcd}
Y \ar{r}{\pi_Y}\ar{d}[swap]{f} & \operatorname{Spec} k\ar{d}[swap,yshift=.15em]{f^{\text{univ}}}\ar[start anchor=south east, end anchor=north west, equal]{rd} & \\
X \ar{r}[swap]{g} & BG \ar{r}[swap]{\pi_{BG}} & \operatorname{Spec} k
\end{tikzcd}
\end{equation*}
whose left hand square is a pullback, where $\pi_{\mathscr X} \colon \mathscr X \to \operatorname{Spec} k$ denotes the structure map of an algebraic stack $\mathscr X$ over $k$. The Grothendieck spectral sequence for
\begin{equation*}
\begin{tikzcd}
X_{\operatorname{fppf}} \ar{r}{g_*} & BG_{\operatorname{fppf}} \ar{r}{\pi_{BG,*}} & (\operatorname{Spec} k)_{\operatorname{fppf}}
\end{tikzcd}
\end{equation*}
gives an exact sequence of low degree terms
\begin{equation}
0 \to R^1\pi_{BG,*}(g_*\mathbf G_m) \to R^1\pi_{X,*}\mathbf G_m \to \pi_{BG,*}(R^1g_*\mathbf G_m).\label{Dia low degree terms}
\end{equation}
The middle term is $\mathbf{Pic}_X$, and the pullback of $R^1g_*\mathbf G_m$ along $f^{\text{univ}}$ is $\mathbf{Pic}_Y$. Since $\pi_{BG,*} \colon BG_{\operatorname{fppf}} \to (\operatorname{Spec} k)_{\operatorname{fppf}}$ is computed by pulling back along $f^{\text{univ}}$ and taking $G$-invariants, the third term of (\ref{Dia low degree terms}) is $(\mathbf{Pic}_Y)^G$.
Since $\pi_Y$ is proper with geometrically connected fibres, the same goes for $g$. This forces $g_* \mathbf G_m = \mathbf G_m$, so the first term of (\ref{Dia low degree terms}) is $R^1\pi_{BG,*}\mathbf G_m = \mathbf{Pic}_{BG}$. A line bundle on $BG$ is trivialised on $f^{\text{univ}} \colon \operatorname{Spec} k \to BG$, so it gives a cocycle on $\operatorname{Spec} k \times_{BG} \operatorname{Spec} k \cong G$. This is a function $\phi \colon G \to \mathbf G_m$, and the cocycle condition amounts to the condition that $\phi$ is a homomorphism. We finally conclude that $\mathbf{Pic}_{BG} \cong \mathbf{Hom}(G,\mathbf G_m) = G^\vee$.
\end{proof}
\vskip-\lastskip
There are more elementary proofs of \autoref{Lem G-torsor sequence} by trivialising a line bundle $\mathscr L$ on $X$ by pulling back to $Y$, and using $H^0(Y,\mathcal O_Y) = k$ to construct a morphism $G \to \mathbf G_m$. The advantage of our proof above is that it gives the obstruction to surjectivity of $\mathbf{Pic}_X \to (\mathbf{Pic}_Y)^G$: the next term in the sequence is $R^2\pi_{BG,*}\mathbf G_m$. Jensen showed \cite{Jen} that this group vanishes for most of the groups we're interested in, e.g.~$\mathbf Z/p$, $\pmb\alpha_p$, and $\pmb\mu_p$, but we don't need this.
\begin{Cor}\label{Cor quotient of hypersurface}
Let $X$ be a proper $k$-variety, $G$ a finite group scheme over $k$, and $Y \to X$ a $G$-torsor. If $\mathbf{Pic}_Y^\tau = 0$, then $\mathbf{Pic}_X^\tau \cong G^\vee$.
\end{Cor}
\begin{proof}
The image of $\mathbf{Pic}_X^\tau$ always lands in $\mathbf{Pic}_Y^\tau$, which is $0$ by assumption. The restriction of the exact sequence of \autoref{Lem G-torsor sequence} to the respective torsion components gives the result.
\end{proof}
\vskip-\lastskip
This allows us to construct a surface for which Hodge symmetry fails, following Serre \cite{SerMex}.
\begin{Prop}\label{Prop example Hodge symmetry}
There exists a smooth projective surface $X$ over $\mathbf F_p$ admitting a lift to $\mathbf Z_p$ such that $H^0(X,\Omega_X^1) = 0$ but $h^1(X,\mathcal O_X) = 1$.
\end{Prop}
\begin{proof}
Let $G \to \operatorname{Spec} \mathbf Z_p$ be the constant \'etale group scheme $\mathbf Z/p$. Choose a linear action of $G$ on $\mathbf P^N_{\mathbf Z_p}$ for some $N \gg 0$ such that the fixed point locus of the special fibre has codimension at least $3$. For example, we can take $3$ copies of the regular representation, and then projectivise. Form the quotient $\mathcal Z = \mathbf P^N/G$, which is smooth away from the image of the fixed locus.
By the codimension $\geq 3$ assumption, repeatedly applying Poonen's Bertini theorem \cite{Poo} produces a projective surface $\mathcal X \subseteq \mathcal Z^{\text{sm}}$ over $\mathbf Z_p$ whose special fibre is smooth, hence $\mathcal X$ is smooth over $\mathbf Z_p$. Its inverse image $\mathcal Y$ in $\mathbf P^N_{\mathbf Z_p}$ is a complete intersection, and $\mathcal Y \to \mathcal X$ is a $G$-torsor.
The special fibre $Y$ of $\mathcal Y$ is smooth since $X$ and $G$ are, so $H^0(Y,\Omega_Y^1) = 0$, $H^1(Y,\mathcal O_Y) = 0$ and $\mathbf{Pic}_Y^\tau = 0$ \cite[Exp.\ XI,\ Thm.\ 1.8]{SGA7II}. Then \autoref{Cor quotient of hypersurface} gives $\mathbf{Pic}_X^\tau = (\mathbf Z/p)^\vee = \pmb\mu_p$, so
\[
H^1(X,\mathcal O_X) = \operatorname{Lie}(\mathbf{Pic}_X^\tau) = \operatorname{Lie}(\pmb\mu_p)
\]
has dimension $1$. On the other hand, $Y \to X$ is \'etale since it is a $(\mathbf Z/p)$-torsor, so $H^0(X,\Omega_X^1) \to H^0(Y,\Omega_Y^1)$ is injective, forcing $H^0(X,\Omega_X^1) = 0$.
\end{proof}
\begin{Rmk}\label{Rmk Hodge-de Rham lift}
Because the surface $X$ constructed in \autoref{Prop example Hodge symmetry} lifts to $\mathbf Z_p$, the Hodge--de Rham spectral sequence degenerates \cite{DelIll}. For $H^0(X,\Omega_X^1)$ and $H^1(X,\mathcal O_X)$, this can also be checked by hand using cocycles (analogous to the proof of \autoref{Prop example Hodge-de Rham degeneration} below).
\end{Rmk}
\section{Failure of Hodge--de Rham degeneration}\label{Sec Hodge-de Rham degeneration}
Inspired by a construction of W.\ Lang \cite{Lang}, we do a precise computation of $H^0(X,\Omega_X^1)$, $H^1(X,\mathcal O_X)$, and $H_{\operatorname{dR}}^1(X)$ when $X$ is an $\pmb\alpha_p$-quotient of a (singular) complete intersection. Lang moreover shows that $X$ lifts to a degree $2$ ramified extension of $\mathbf Z_p$, but this plays no role for us.
Much has been written about the structure theory and singularities of $\pmb\alpha_p$-torsors and, more generally, purely inseparable quotients \cite{Eke, KimNiit, AraAvra,DemGab,Tzio}, but we only need elementary results.
\begin{Rmk}\label{Rmk alpha_p torsor}
If $A$ is a ring of characteristic $p$, then an $\pmb\alpha_p$-torsor over $\operatorname{Spec} A$ is given by $A \to A[z]/(z^p - f)$ for some $f \in A$, and it is the trivial torsor if and only if $f$ is a $p$-th power. Indeed, this follows from the exact sequence
\[
A \stackrel {(-)^p}\longrightarrow A \to H^1(\operatorname{Spec} A, \pmb\alpha_p) \to 0.
\]
For a general $\pmb\alpha_p$-torsor $\pi \colon Y \to X$, there are affine opens $\operatorname{Spec} A_i = U_i \subseteq X$ and sections $z_i \in \mathcal O_Y(\pi^{-1}(U_i))$ and $f_i \in \mathcal O_X(U_i)$ such that $\mathcal O_X(U_i) \to \mathcal O_Y(\pi^{-1}(U_i))$ is given by $A_i \to A_i[z_i]/(z_i^p - f_i)$. We have $f_i - f_j = g_{ij}^p$ for some $g_{ij} \in \mathcal O_X(U_{ij})$, hence the $1$-forms $\omega_i = df_i$ glue to a global $1$-form $\omega$.
\end{Rmk}
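The gluing of the local forms $\omega_i = df_i$ in the remark above rests on the identity $d(g^p) = 0$ in characteristic $p$, applied to $f_i - f_j = g_{ij}^p$. A minimal numerical sanity check (our own sketch; the list representation and the sample polynomial are arbitrary):

```python
# Check that d(g^p) vanishes mod p: polynomials in one variable t are
# coefficient lists [c_0, c_1, ...], and we verify every coefficient of
# the derivative of g^p is divisible by p.

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_pow(a, e):
    out = [1]
    for _ in range(e):
        out = poly_mul(out, a)
    return out

def derivative(a):
    return [i * ai for i, ai in enumerate(a)][1:]

p = 5
g = [3, 1, 4, 1]                       # g(t) = 3 + t + 4t^2 + t^3 (sample)
assert all(c % p == 0 for c in derivative(poly_pow(g, p)))
```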
\begin{Lemma}\label{Lem torsor}
Let $X$ be a normal integral scheme of characteristic $p$, and let $\pi \colon Y \to X$ be an $\pmb\alpha_p$-torsor, with $f_i$, $z_i$, $g_{ij}$, $\omega_i$, and $\omega$ as in \autoref{Rmk alpha_p torsor}. Then the following are equivalent:
\begin{enumerate}
\item $\pi$ is not the trivial torsor;\label{Item nontrivial torsor}
\item no $f_i$ is a $p$-th power;\label{Item not a p-th power}
\item $Y$ is integral;\label{Item integral}
\item $\omega$ is not zero in $H^0(X,\Omega_{X/\mathbf F_p}^1)$ (or even in $\Omega_{K(X)/\mathbf F_p}^1$).\label{Item df nonzero}
\end{enumerate}
\end{Lemma}
\begin{proof}
If one $f_i$ is a $p$-th power, then they all are, since they differ by $p$-th powers. Hence, $\pi$ is trivial in this case. This proves $\ref{Item nontrivial torsor} \Rightarrow \ref{Item not a p-th power}$; the converse is trivial. Since $X$ is normal, $f_i$ is a $p$-th power in $A_i$ if and only if it is so in $K = K(X)$. Thus, if no $f_i$ is a $p$-th power, then all the rings $K[z_i]/(z_i^p-f_i)$ are fields, so $A_i[z_i]/(z_i^p-f_i)$ is a domain. This proves $\ref{Item not a p-th power} \Rightarrow \ref{Item integral}$, and again the converse is trivial. Finally, $\ref{Item not a p-th power} \Leftrightarrow \ref{Item df nonzero}$ is \cite[Tag \href{https://stacks.math.columbia.edu/tag/07P2}{07P2}]{Stacks}.
\end{proof}
\begin{Lemma}\label{Lem torsor smooth}
Let $k$ be a field of characteristic $p$, let $X$ and $Y$ be finite type $k$-schemes, and let $\pi \colon Y \to X$ be a morphism of $k$-schemes that is an $\pmb\alpha_p$-torsor. If $Y$ is smooth, then so is $X$.
\end{Lemma}
\begin{proof}
We may replace $k$ by $\bar k$ and `smooth' by `regular'. Since $Y$ is regular, Kunz's theorem \cite[Thm.\ 2.1]{Kunz} implies that $\operatorname{Fr}_Y \colon Y \to Y$ is flat. We have a factorisation
\begin{equation*}
\begin{tikzcd}
Y \ar{r}{\pi}\ar[bend left]{rr}{\operatorname{Fr}_Y} & X \ar{r}\ar[bend right]{rr}[swap]{\operatorname{Fr}_X} & Y \ar{r}{\pi} & X\punct{,}
\end{tikzcd}
\end{equation*}
where $\pi$ and $\operatorname{Fr}_Y$ are flat. We conclude that $\operatorname{Fr}_X \circ \pi = \pi \circ \operatorname{Fr}_Y$ is flat, hence $\operatorname{Fr}_X$ is flat since $\pi$ is faithfully flat. Then Kunz's theorem \cite[Thm.\ 2.1]{Kunz} implies that $X$ is regular.
\end{proof}
\begin{Prop}\label{Prop example Hodge-de Rham degeneration}
There exists a smooth projective surface $X$ over $\mathbf F_p$ for which $H^0(X,\Omega_X^1)$, $H^1(X,\mathcal O_X)$, and $H_{\operatorname{dR}}^1(X)$ are all $1$-dimensional. In particular, the Hodge--de Rham spectral sequence of $X$ does not degenerate.
\end{Prop}
\begin{proof}
Let $G \to \operatorname{Spec} \mathbf F_p$ be the group scheme $\pmb\alpha_p$, and choose a linear action of $G$ on $\mathbf P^N_{\mathbf F_p}$ for some $N \gg 0$ such that the fixed point locus has codimension at least $3$. For example, we take 3 copies of the regular representation, and then projectivise \cite[Lem.\ 4.2.2]{Ray}. Form the quotient $Z = \mathbf P^N/G$, which is an $\pmb\alpha_p$-torsor away from the image $Z^{\text{fix}} \subseteq Z$ of the fixed locus \cite[Tag \href{https://stacks.math.columbia.edu/tag/07S7}{07S7}]{Stacks}. In particular, $Z$ is smooth outside $Z^{\text{fix}}$ by \autoref{Lem torsor smooth}. Since $\mathbf P^N$ is geometrically integral, the differential form $\omega$ of \autoref{Rmk alpha_p torsor} is nontrivial by \autoref{Lem torsor}.
Repeatedly applying Poonen's Bertini theorem \cite{Poo} produces a smooth projective surface $X \subseteq Z \setminus Z^{\text{fix}}$, which we may choose such that $\omega|_X$ is not identically zero (for example by specifying a tangency condition at a point $x \in Z \setminus Z^{\text{fix}}$).
The inverse image $Y \subseteq \mathbf P^N$ of $X$ is a complete intersection, and $\pi \colon Y \to X$ is an $\pmb\alpha_p$-torsor since $Y \subseteq Z\setminus Z^{\text{fix}}$. Hence $Y$ is geometrically integral by \autoref{Lem torsor}, since $\omega|_{X_{\bar{\mathbf F}_p}} \neq 0$. Since $Y$ is a geometrically integral complete intersection of dimension $\geq 2$, we find $H^1(Y,\mathcal O_Y) = 0$ and $H^0(Y,\Omega_Y^1) = 0$.
In particular, $\mathbf{Pic}_Y^0 = 0$, so \autoref{Lem G-torsor sequence} gives
\[
\mathbf{Pic}_X^0 = \pmb\alpha_p^\vee = \pmb\alpha_p.
\]
Thus, $H^1(X,\mathcal O_X) = \operatorname{Lie}(\mathbf{Pic}_X^0)$ is $1$-dimensional. The short exact sequence
\[
0 \to H^1(X,\pmb\alpha_p) \to H^1(X,\mathcal O_X) \stackrel{(-)^p}\longrightarrow H^1(X,\mathcal O_X)
\]
shows that $H^1(X,\mathcal O_X)$ is spanned by the image of the nontrivial $\pmb\alpha_p$-torsor $\pi$. In the notation of \autoref{Rmk alpha_p torsor}, the generator of $H^1(X,\mathcal O_X)$ is given by the \v Cech $1$-cocycle $(g_{ij})$. We claim that its image in $H^1(X,\Omega_X^1)$ is nonzero, i.e.\ $(dg_{ij})$ is not a \v Cech coboundary. Suppose it were; say $(dg_{ij}) = \eta_i - \eta_j$ for forms $\eta_i \in H^0(U_i,\Omega_{U_i}^1)$. Then consider the $1$-form $dz_i - \eta_i$ on $\pi^{-1}(U_i)$. We have
\[
(z_i-z_j)^p = f_i - f_j = g_{ij}^p,
\]
hence $z_i - z_j = g_{ij}$ since $Y$ is integral. Hence,
\[
(dz_i - \eta_i)- (dz_j - \eta_j) = dg_{ij} - \eta_i + \eta_j = 0,
\]
showing that the $dz_i - \eta_i$ glue to a global $1$-form on $Y$, which is nonzero because $dz_i$ is not pulled back from $U_i$. This contradicts the vanishing of $H^0(Y,\Omega_Y^1)$. We conclude that the generator $(g_{ij})$ of $H^1(X,\mathcal O_X)$ does not survive the Hodge--de Rham spectral sequence.
On the other hand, the kernel of $\Omega_X^1 \to \pi_* \Omega_Y^1$ is locally generated by $\omega_i$, since the cotangent complex of $\pi^{-1}(U_i) \to U_i$ is the complex
\[
\mathcal I_i/\mathcal I_i^2 \stackrel 0\longrightarrow \Omega_{\pi^{-1}(U_i)/U_i}^1,
\]
where $\mathcal I_i$ is the ideal sheaf on $U_i \times \mathbf A^1$ given by $z_i^p-f_i$. We get an exact sequence
\[
0 \to \omega\mathcal O_X \to \Omega_X^1 \to \pi_*\Omega_Y^1,
\]
so vanishing of $H^0(Y,\Omega_Y^1)$ gives $H^0(X,\omega\mathcal O_X) = H^0(X,\Omega_X^1)$, i.e.\ $H^0(X,\Omega_X^1)$ is $1$-dimensional and generated by $\omega$. Note that $d\omega = 0$ since $\omega$ is locally exact, so $\omega$ survives the Hodge--de Rham spectral sequence. Hence, $h_{\operatorname{dR}}^1(X) = 1$.
\end{proof}
\vskip-\lastskip
The constructions of \autoref{Prop example Hodge symmetry} and \autoref{Prop example Hodge-de Rham degeneration} are both covered by \cite[Prop.\ 4.2.3]{Ray}, but we needed a more detailed analysis of the Hodge numbers. Using Poonen's Bertini theorems, we were also able to construct examples over the prime field $\mathbf F_p$. The smoothness claim for $\pmb\alpha_p$-quotients is not explained in [\emph{loc.\ cit.}].
\begin{Rmk}\label{Rmk not smooth}
In general we cannot expect the complete intersection $Y \subseteq \mathbf P^N$ of the proof of \autoref{Prop example Hodge-de Rham degeneration} to be smooth. Indeed, the degree of $Y$ is large with respect to $p$ in order for $Y$ to be $\pmb\alpha_p$-invariant. But a smooth complete intersection of sufficiently high degree does not admit any infinitesimal automorphisms, so in particular cannot have a free $\pmb\alpha_p$-action.
\end{Rmk}
\section{The Hodge ring of varieties}\label{Sec Hodge ring}
We modify the definition of the Hodge ring of \cite{KS} to account for the failure of Hodge symmetry in characteristic $p > 0$.
\begin{Def}\label{Def Hodge ring}
Consider $\mathbf Z[x,y,z]$ as a graded ring where $x$ and $y$ both have degree $0$ and $z$ has degree $1$. The \emph{Hodge ring of varieties over $k$} is the graded subring $\mathcal{H}_* \subseteq \mathbf Z[x,y,z]$ whose $n$-th graded piece is
\[
\mathcal{H}_n = \left\{\left(\ \sum_{i,j = 0}^nh^{i,j}x^iy^j\right)z^n\ \Bigg|\ \begin{array}{c}h^{n-i,n-j} = h^{i,j} \\ \text{ for all } i,j \in \{0,\ldots,n\}\end{array}\right\}.
\]
This is different from the notation of \cite{KS}, where $\mathcal{H}_*$ denotes the ring where moreover Hodge symmetry holds.
\end{Def}
\begin{Def}\label{Def h(X)}
Write $h \colon \mathbf{Var}_k \to \mathcal{H}_*$ for the map that sends an $n$-dimensional variety $X$ to its \emph{(formal) Hodge polynomial}
\[
h(X) = \left(\ \sum_{i,j=0}^nh^{i,j}(X)x^iy^j\right)z^n,
\]
where $h^{i,j}(X) = h^j(X,\Omega_X^i)$ as usual. The K\"unneth formula \cite[Tag \href{https://stacks.math.columbia.edu/tag/0BED}{0BED}]{Stacks} shows that $h$ preserves products, i.e.\ $h(X \times Y) = h(X) \cdot h(Y)$ for all $X, Y \in \mathbf{Var}_k$.
To justify the name \emph{Hodge ring of varieties}, we show in \autoref{Cor Hodge generated by varieties} that $\mathcal{H}_*$ is the subring (equivalently, subgroup) of $\mathbf Z[x,y,z]$ generated by the image of $h$.
\end{Def}
\begin{Thm}\label{Thm Hodge ring presentation}
Consider the graded ring $\mathbf Z[A,B,C,D]$ where $A$ and $B$ have degree $1$, and $C$ and $D$ have degree $2$. Then the map
\[
\phi \colon \mathbf Z[A,B,C,D] \to \mathcal{H}_*
\]
given by
\begin{align*}
\phi(A) &= (1+xy)z, & \phi(C) &= xy \cdot z^2,\\
\phi(B) &= (x+y)z, & \phi(D) &= (x+xy^2)z^2
\end{align*}
is a surjection of graded rings, with kernel generated by
\[
G := D^2 - ABD + C(A^2+B^2-4C).
\]
\end{Thm}
\begin{Rmk}
Kotschick and Schreieder show \cite[Thm.\ 6]{KS} that $\mathbf Z[A,B,C]$ maps isomorphically onto the subring of $\mathcal{H}_*$ where Hodge symmetry holds. Our proof is a modification and simplification of theirs.
\end{Rmk}
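One can verify by direct expansion that $G$ lies in the kernel of $\phi$; the following sketch does the computation with an ad hoc dict representation of $\mathbf Z[x,y,z]$ (all helper names are ours, not from the paper):

```python
# Verify that G = D^2 - A*B*D + C*(A^2 + B^2 - 4C) maps to 0 under phi,
# with phi(A) = (1+xy)z, phi(B) = (x+y)z, phi(C) = xy z^2, phi(D) = (x+xy^2)z^2.
# Polynomials in Z[x,y,z] are dicts {(i, j, k): coefficient}.

def add(p, q, s=1):
    """p + s*q, dropping zero coefficients."""
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) + s * v
    return {k: v for k, v in out.items() if v != 0}

def mul(p, q):
    out = {}
    for (a, b, c), u in p.items():
        for (d, e, f), v in q.items():
            k = (a + d, b + e, c + f)
            out[k] = out.get(k, 0) + u * v
    return {k: v for k, v in out.items() if v != 0}

A = {(0, 0, 1): 1, (1, 1, 1): 1}   # (1 + xy) z
B = {(1, 0, 1): 1, (0, 1, 1): 1}   # (x + y) z
C = {(1, 1, 2): 1}                 # xy z^2
D = {(1, 0, 2): 1, (1, 2, 2): 1}   # (x + xy^2) z^2

G = add(add(mul(D, D), mul(mul(A, B), D), -1),
        mul(C, add(add(mul(A, A), mul(B, B)), C, -4)))
assert G == {}   # phi(G) = 0, as claimed
```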
\begin{Lemma}\label{Lem free of the same rank}
Let $\psi \colon M \to N$ be a homomorphism between free abelian groups of the same finite rank. Then $\psi$ is an isomorphism if and only if $\psi \otimes \mathbf F_\ell$ is injective for all primes $\ell$.
\end{Lemma}
\begin{proof}
The cokernel $C$ is nonzero if and only if $C \otimes \mathbf F_\ell \neq 0$ for some prime $\ell$, so the result follows from right exactness of $- \otimes \mathbf F_\ell$ and a dimension argument.
\end{proof}
\begin{Def}\label{Def r_n}
Write $r_n$ for the number
\[
r_n := \left\{\begin{array}{ll} \frac{(n+1)^2 + 1}{2}, & n \equiv 0 \pmod 2, \\ \frac{(n+1)^2}{2}, & n \equiv 1 \pmod 2.\end{array}\right.
\]
Then $\mathcal{H}_n$ is free of rank $r_n$, with basis given by
\begin{align*}
&(x^i y^j + x^{n-i}y^{n-j})z^n, & (i,j) &\neq (n-i,n-j),\\
&x^iy^j z^n, & (i,j) &= \left(\frac{n}{2},\frac{n}{2}\right).
\end{align*}
\end{Def}
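The rank formula can be double-checked by counting orbits of the Serre-duality involution $(i,j) \mapsto (n-i,n-j)$ on $\{0,\dots,n\}^2$; a brute-force sketch (ours):

```python
# r_n should equal the number of orbits of (i,j) ~ (n-i,n-j) on {0,...,n}^2:
# each orbit contributes one basis vector of H_n.

def r(n):
    return ((n + 1) ** 2 + 1) // 2 if n % 2 == 0 else (n + 1) ** 2 // 2

def orbit_count(n):
    seen, count = set(), 0
    for i in range(n + 1):
        for j in range(n + 1):
            if (i, j) not in seen:
                count += 1
                seen.update({(i, j), (n - i, n - j)})
    return count

assert all(r(n) == orbit_count(n) for n in range(20))
```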
\begin{Lemma}\label{Lem dimension of R}
The degree $n$ part of the algebra $R = \mathbf Z[A,B,C,D]/(G)$ is free of rank $r_n$.
\end{Lemma}
\begin{proof}
It has a basis given by
\[
\left\{A^iB^jC^k\ \bigg|\ \begin{array}{c}i,j,k \in \mathbf Z_{\geq 0},\\i+j+2k = n\end{array}\right\} \cup \left\{ A^iB^jC^kD\ \bigg|\ \begin{array}{c}i,j,k \in \mathbf Z_{\geq 0},\\i+j+2k = n-2\end{array}\right\}.
\]
Moreover, a simple induction shows that
\[
r'_n := \# \left\{ (i,j,k) \in \mathbf Z_{\geq 0}^3\ \bigg|\ i+j+2k = n \right\} = \left\{\begin{array}{ll} \frac{(n+2)^2}{4}, & n \equiv 0 \pmod 2, \\ \frac{(n+2)^2-1}{4}, & n \equiv 1 \pmod 2.\end{array}\right.
\]
Thus, the result follows since $\operatorname{rk} R_n = r'_n + r'_{n-2} = r_n$.
\end{proof}
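Both the closed formula for $r'_n$ and the identity $\operatorname{rk} R_n = r'_n + r'_{n-2} = r_n$ used above are easily confirmed by brute force; a sketch (ours):

```python
# r'_n = #{(i,j,k) >= 0 : i + j + 2k = n}: compare brute-force count,
# closed formula, and the identity r'_n + r'_{n-2} = r_n.

def r_prime(n):
    if n < 0:
        return 0
    return sum(1 for i in range(n + 1) for j in range(n + 1)
               for k in range(n // 2 + 1) if i + j + 2 * k == n)

def r_prime_closed(n):
    if n < 0:
        return 0
    return (n + 2) ** 2 // 4 if n % 2 == 0 else ((n + 2) ** 2 - 1) // 4

def r(n):
    return ((n + 1) ** 2 + 1) // 2 if n % 2 == 0 else (n + 1) ** 2 // 2

for n in range(20):
    assert r_prime(n) == r_prime_closed(n)
    assert r_prime(n) + r_prime(n - 2) == r(n)
```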
\begin{proof}[Proof of \autoref{Thm Hodge ring presentation}.]
Clearly $\phi$ is a homomorphism of graded rings, and one easily verifies that $G \in \ker(\phi)$. Thus, we get an induced map
\[
\psi \colon R \to \mathcal{H}_*
\]
between graded rings that have the same rank in each degree by \autoref{Def r_n} and \autoref{Lem dimension of R}. By \autoref{Lem free of the same rank} it suffices to show that $\psi_\ell = \psi \otimes \mathbf F_\ell$ is injective for every prime $\ell$.
Note that $R \otimes \mathbf F_\ell = \mathbf F_\ell[A,B,C,D]/(G)$ is a $3$-dimensional domain since $G$ is irreducible (for example, its restriction modulo $B$ is Eisenstein at $(C) \subseteq \mathbf F_\ell[A,C]$ when viewed as a polynomial in $D$). Thus, to show injectivity of $\psi_\ell$, it is enough to show that the image $\operatorname{im}(\psi_\ell) \subseteq \mathbf F_\ell[x,y,z]$ has Krull dimension $3$. But the elements $\psi_\ell(A)$, $\psi_\ell(B)$, $\psi_\ell(C)$ are algebraically independent, for example because the Jacobian
\[
J = \begin{pmatrix}
\tfrac{dA}{dx} & \tfrac{dA}{dy} & \tfrac{dA}{dz}\\[.3em]
\tfrac{dB}{dx} & \tfrac{dB}{dy} & \tfrac{dB}{dz}\\[.3em]
\tfrac{dC}{dx} & \tfrac{dC}{dy} & \tfrac{dC}{dz}
\end{pmatrix} = \begin{pmatrix}
yz & xz & 1+xy \\
z & z & x+y \\
yz^2 & xz^2 & 2xyz
\end{pmatrix}
\]
is invertible at $(x,y,z) = (0,1,1)$.
\end{proof}
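Concretely, $\det J = -1$ at $(x,y,z) = (0,1,1)$, a unit in $\mathbf F_\ell$ for every $\ell$; a quick check (helper names ours):

```python
# Evaluate the Jacobian of (phi(A), phi(B), phi(C)) = ((1+xy)z, (x+y)z, xy z^2)
# at a point and test invertibility over every F_ell, i.e. det = +-1.

def jacobian_at(x, y, z):
    return [
        [y * z,     x * z,     1 + x * y    ],  # partials of (1+xy)z
        [z,         z,         x + y        ],  # partials of (x+y)z
        [y * z * z, x * z * z, 2 * x * y * z],  # partials of xy z^2
    ]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

assert det3(jacobian_at(0, 1, 1)) in (1, -1)   # invertible over every F_ell
```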
\begin{Cor}\label{Cor generated by P^1, E, P^2, and S}
Let $E$ be an elliptic curve, and let $S$ be any surface for which $h^{1,0}(S)-h^{0,1}(S) = \pm 1$ (for example, the surface constructed in \autoref{Prop example Hodge symmetry}). Then $\mathcal{H}_*$ is generated by $\mathbf P^1$, $E$, $\mathbf P^2$, and $S$, subject only to a monic quadratic equation in $S$ over $\mathbf Z[\mathbf P^1,E,\mathbf P^2]$.
\end{Cor}
\begin{proof}
In the notation of \autoref{Thm Hodge ring presentation}, we have $A = h(\mathbf P^1)$, $B = h(E) - h(\mathbf P^1)$, and $C = h(\mathbf P^1 \times \mathbf P^1) - h(\mathbf P^2)$. Finally, $D$ can be obtained from $\pm h(S)$ by adding a suitable linear combination of $A^2$, $AB$, $B^2$, and $C$, since $h^{1,0}(S) - h^{0,1}(S) = \pm 1$. This proves the first statement, and the second follows from \autoref{Thm Hodge ring presentation} since $h(S)$ differs from $\pm D$ by a translation in $\mathbf Z[\mathbf P^1,E,\mathbf P^2]$.
\end{proof}
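The identities $A = h(\mathbf P^1)$, $B = h(E) - h(\mathbf P^1)$, and $C = h(\mathbf P^1 \times \mathbf P^1) - h(\mathbf P^2)$ can be confirmed directly from the standard Hodge diamonds; a sketch using an ad hoc dict representation of $\mathbf Z[x,y,z]$ (names ours):

```python
# Check A = h(P^1), B = h(E) - h(P^1), C = h(P^1 x P^1) - h(P^2),
# with polynomials encoded as {(i, j, k): coefficient}.

def add(p, q, s=1):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0) + s * v
    return {k: v for k, v in out.items() if v != 0}

def mul(p, q):
    out = {}
    for (a, b, c), u in p.items():
        for (d, e, f), v in q.items():
            k = (a + d, b + e, c + f)
            out[k] = out.get(k, 0) + u * v
    return {k: v for k, v in out.items() if v != 0}

hP1 = {(0, 0, 1): 1, (1, 1, 1): 1}                              # (1 + xy) z
hE  = {(0, 0, 1): 1, (1, 0, 1): 1, (0, 1, 1): 1, (1, 1, 1): 1}  # elliptic curve
hP2 = {(0, 0, 2): 1, (1, 1, 2): 1, (2, 2, 2): 1}                # projective plane

A = {(0, 0, 1): 1, (1, 1, 1): 1}   # (1 + xy) z
B = {(1, 0, 1): 1, (0, 1, 1): 1}   # (x + y) z
C = {(1, 1, 2): 1}                 # xy z^2

assert A == hP1
assert B == add(hE, hP1, -1)
assert C == add(mul(hP1, hP1), hP2, -1)
```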
\begin{Cor}\label{Cor Hodge generated by varieties}
The Hodge ring of varieties over $k$ is generated by smooth projective varieties that are defined over $\mathbf F_p$, admit a lift to $\mathbf Z_p$, and for which the Hodge--de Rham spectral sequence degenerates.
\end{Cor}
\begin{proof}
Clearly $\mathbf P^1$ and $\mathbf P^2$ can be defined over $\mathbf F_p$ and lifted to $\mathbf Z_p$. For $E$ we may choose an elliptic curve over $\mathbf F_p$ and lift to $\mathbf Z_p$, and for $S$ we may choose the surface of \autoref{Prop example Hodge symmetry}. The degeneration claim is \autoref{Rmk Hodge-de Rham lift}.
\end{proof}
\vskip-\lastskip
We deduce from the above presentation the following theorems.
\begin{Thm}
The only universal congruences between the Hodge numbers $h^{i,j}(X)$ of smooth proper varieties $X$ of dimension $n$ over $k$ are those given by Serre duality.
\qedsymbol
\end{Thm}
\begin{Thm}\label{Thm linear relations Hodge}
The only universal linear relations between the Hodge numbers $h^{i,j}(X)$ of smooth proper varieties $X$ of dimension $n$ over $k$ are those given by Serre duality.
\qedsymbol
\end{Thm}
That is, if $m, n \in \mathbf Z_{> 0}$ and $\lambda_{i,j} \in \mathbf Z$ for $i, j \in \{0, \ldots, n\}$ are such that
\begin{equation}\label{Eq cong}
\sum \lambda_{i,j} h^{i,j} (X) \equiv 0 \pmod m
\end{equation}
for every $X \in \mathbf{Var}_k$ of dimension $n$, then for all $(i,j) \neq (\tfrac{n}{2},\tfrac{n}{2})$ we have
\begin{equation}\label{Eq cong 2}
\lambda_{i,j} \equiv - \lambda_{n-i,n-j} \pmod m,
\end{equation}
and similarly if we take $\lambda_{i,j} \in \mathbf Q$ and consider equality instead of congruence mod $m$ in (\ref{Eq cong}) and (\ref{Eq cong 2}).
\begin{Rmk}
While this paper was in preparation, the results of \cite{KS} on linear relations between Hodge numbers in characteristic $0$ were improved to cover all \emph{polynomial} relations \cite{PS}. In positive characteristic, this will appear in joint work between the first author of \cite{PS} and the present author \cite{vDdBP}.
\end{Rmk}
\section{Birational invariants}\label{Sec birational}
The Hodge numbers $h^0(X,\Omega_X^i)$ are birational invariants of a smooth proper variety $X$, and Chatzistamatiou and R\"ulling proved the same for the numbers $h^i(X,\mathcal O_X)$ \cite[Thm.~1]{ChaRul}. We show that these are the only linear combinations of Hodge numbers that are birational invariants.
\begin{Def}
Let $\mathcal{I} \subseteq \mathcal{H}_*$ be the subgroup generated by the differences $h(X) - h(X')$ for $X, X' \in \mathbf{Var}_k$ birational. Note that $\mathcal{I}$ is a (homogeneous) ideal, for if $X \dashrightarrow X'$ is a birational map, then so is $X \times Y \dashrightarrow X' \times Y$ for any $Y \in \mathbf{Var}_k$. Write $\mathcal{I}_n$ for the degree $n$ part of $\mathcal{I}$.
\end{Def}
\begin{Prop}
Consider the quotient map
\[
\phi \colon \mathcal{H}_* \rightarrow \mathbf Z[x,y,z]/(xy).
\]
Then
\begin{enumerate}
\item The degree $n$ part of $\operatorname{im} \phi$ is free of rank $2n$, with basis given by
\[
\left\{y^jz^n\ \bigg|\ 0 \leq j \leq n-1\right\} \cup \left\{ x^iz^n\ \bigg|\ 0 < i \leq n-1\right\} \cup \bigg\{ (x^n + y^n)z^n\bigg\}.
\]
\item The kernel of $\phi$ satisfies $\ker \phi = (C) = \mathcal{I}$.
\end{enumerate}
\end{Prop}
\begin{proof}
The first statement is obvious, by looking at the image of the basis
\begin{align*}
&(x^i y^j + x^{n-i}y^{n-j}) z^n,& & (i,j) \neq \left(\tfrac{n}{2},\tfrac{n}{2}\right),\\
&x^i y^j \cdot z^n,& & (i,j) = \left(\tfrac{n}{2},\tfrac{n}{2}\right)
\end{align*}
of $\mathcal{H}_n$. To prove the second statement, note that $C = xy \cdot z^2 \in \ker \phi$. We have
\[
\mathcal{H}_*/(C) = \mathbf Z[A,B,D]/(D^2 - ABD),
\]
so a basis for $\mathcal{H}_*/(C)$ is given by $A^iB^j$ and $A^iB^jD$ for $i,j \in \mathbf Z_{\geq 0}$. Thus, the degree $n$ part of $\mathcal{H}_*/(C)$ is free of rank $(n+1) + (n-1) = 2n$. Since $\mathcal{H}_*/(C) \rightarrow \operatorname{im} \phi$ is surjective and the degree $n$ parts of both sides are free of the same rank, we conclude that it is an isomorphism. Thus, $(C) = \ker \phi$.
The Hodge numbers $h^{i,0}(X) = h^0(X,\Omega_X^i)$ and $h^{0,j}(X) = h^j(X,\mathcal O_X)$ are birational invariants (for the latter, see \cite{ChaRul}). Since $\phi$ only remembers the $h^{p,0}$ and the $h^{0,q}$, we get $\mathcal{I} \subseteq \ker \phi$. Finally, note that $C \in \mathcal{I}$ because $C = \Bl{\operatorname{pt}}(\mathbf P^2) - \mathbf P^2$ (or $\mathbf P^1 \times \mathbf P^1 - \mathbf P^2$). Thus, $(C) \subseteq \mathcal{I}$, which finishes the proof.
\end{proof}
\vskip-\lastskip
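The rank count in the proof can be double-checked mechanically. The following Python sketch (our illustration, not part of the argument) puts degree-$n$ monomials of $\mathbf Z[A,B,D]/(D^2 - ABD)$ into normal form via the rewriting rule $D^2 \to ABD$ and confirms that the degree-$n$ part is free of rank $2n$.

```python
# Enumerate degree-n monomials A^i B^j D^l (deg A = deg B = 1, deg D = 2)
# and reduce with the rewriting rule D^2 -> A*B*D, which raises the A- and
# B-exponents by 1 and lowers the D-exponent by 1; normal forms have l <= 1.
def normal_forms(n):
    forms = set()
    for i in range(n + 1):
        for j in range(n + 1 - i):
            rest = n - i - j
            if rest % 2 != 0:
                continue
            ii, jj, ll = i, j, rest // 2
            while ll >= 2:          # apply D^2 -> ABD
                ii, jj, ll = ii + 1, jj + 1, ll - 1
            forms.add((ii, jj, ll))
    return forms

# The degree-n part of Z[A,B,D]/(D^2 - ABD) is free of rank (n+1)+(n-1) = 2n.
for n in range(1, 12):
    assert len(normal_forms(n)) == 2 * n
```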
We deduce the following theorem, which is the analogue of Theorem 2 of \cite{KS}.
\begin{Thm}
The mod $m$ reduction of an integral linear combination of Hodge numbers is a birational invariant of smooth proper varieties if and only if the linear combination is congruent mod $m$ to a linear combination of the $h^{i,0}$ and the $h^{0,j}$ (and their duals $h^{n-i,n}$ and $h^{n,n-j}$).
\qedsymbol
\end{Thm}
\begin{Thm}\label{Thm linear combinations birational invariant}
A rational linear combination of Hodge numbers is a birational invariant of smooth proper varieties if and only if it is a linear combination of the $h^{i,0}$ and the $h^{0,j}$ (and their duals $h^{n-i,n}$ and $h^{n,n-j}$).
\qedsymbol
\end{Thm}
\section{The de Rham ring}\label{Sec de Rham}
In analogy with the Hodge ring, we define a de Rham ring $\mathcal{DR}_*$ whose elements correspond to formal de Rham polynomials of varieties.
\begin{Def}
Consider $\mathbf Z[t,z]$ as a graded ring where $t$ has degree $0$ and $z$ has degree $1$. The \emph{de Rham ring} of varieties over $k$ is the graded subring $\mathcal{DR}_* \subseteq \mathbf Z[t,z]$ whose degree $n$ part is
\[
\mathcal{DR}_n = \left\{\left(\ \sum_{i=0}^{2n}h^it^i\right)z^n\ \Bigg|\ \begin{array}{c}h^i = h^{2n-i} \text{ for all } i,\\ h^n \text{ is even if } n \text{ is odd}.\end{array}\right\}.
\]
This differs from the notation of \cite{KS}, where $\mathcal{DR}_*$ denotes the ring where moreover $h^i$ is even for every odd degree $i$.
\end{Def}
\begin{Def}
Write $\operatorname{dR} \colon \mathbf{Var}_k \to \mathcal{DR}_*$ for the map sending an $n$-dimensional variety $X$ to its \emph{(formal) de Rham polynomial}
\[
\operatorname{dR}(X) = \left(\ \sum_{i=0}^{2n}h_{\operatorname{dR}}^i(X) t^i\right)z^n.
\]
Note that the cup product defines a perfect pairing \cite[Thm.\ VII.2.1.3]{Berthelot}
\[
H_{\operatorname{dR}}^i(X) \times H_{\operatorname{dR}}^{2n-i}(X) \to k,
\]
showing that $h_{\operatorname{dR}}^i(X) = h_{\operatorname{dR}}^{2n-i}(X)$. The pairing on $H_{\operatorname{dR}}^n(X)$ is alternating if $n$ is odd, so in that case $h_{\operatorname{dR}}^n(X)$ is even, showing that $\operatorname{dR}(X)$ lands in $\mathcal{DR}_*$. The K\"unneth formula \cite[Cor.\ V.4.2.3]{Berthelot} shows that $\operatorname{dR}(X \times Y) = \operatorname{dR}(X)\cdot\operatorname{dR}(Y)$ for $X, Y \in \mathbf{Var}_k$.
\end{Def}
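To illustrate the definition concretely (this sketch is ours, not part of the text): representing a formal de Rham polynomial by its coefficient list $[h^0,\dots,h^{2n}]$ together with $n$, membership in $\mathcal{DR}_n$ and the multiplicativity under K\"unneth can be encoded as follows.

```python
# A formal de Rham polynomial is (h, n) with h = [h^0, ..., h^{2n}].
# Membership in DR_n: h^i = h^{2n-i} for all i, and h^n even when n is odd.
def in_DR(h, n):
    assert len(h) == 2 * n + 1
    symmetric = all(h[i] == h[2 * n - i] for i in range(2 * n + 1))
    parity_ok = n % 2 == 0 or h[n] % 2 == 0
    return symmetric and parity_ok

# The Kunneth formula dR(X x Y) = dR(X) * dR(Y) is convolution in t
# together with adding the z-degrees.
def kunneth(h1, n1, h2, n2):
    prod = [0] * (2 * (n1 + n2) + 1)
    for i, a in enumerate(h1):
        for j, b in enumerate(h2):
            prod[i + j] += a * b
    return prod, n1 + n2

P1 = [1, 0, 1]      # dR(P^1) = (1 + t^2) z
E  = [1, 2, 1]      # dR(elliptic curve) = (1 + 2t + t^2) z
assert in_DR(P1, 1) and in_DR(E, 1)
assert in_DR(*kunneth(P1, 1, E, 1))      # their product lies in DR_2
```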
\begin{Rmk}\label{Rmk Hodge to de Rham}
There is a natural map $s \colon \mathcal{H}_* \to \mathcal{DR}_*$ given by $x, y \mapsto t$ and $z \mapsto z$. The triangle
\begin{equation*}
\begin{tikzcd}[column sep=.1em]
& \mathbf{Var}_k \ar[start anchor={[xshift=-.1em]}]{ld}[swap]{h}\ar[start anchor={[xshift=.1em]}]{rd}{\operatorname{dR}} & \\
\mathcal{H}_* \ar{rr}[swap]{s} & & \mathcal{DR}_*\!\!\!\!\!
\end{tikzcd}
\end{equation*}
does not commute, because $s(h(X)) = \operatorname{dR}(X)$ if and only if the Hodge--de Rham spectral sequence of $X$ degenerates.
However, by \autoref{Cor Hodge generated by varieties} the Hodge ring $\mathcal{H}_*$ can be generated by varieties for which the Hodge--de Rham spectral sequence degenerates. We can use the presentation of $\mathcal{H}_*$ from \autoref{Thm Hodge ring presentation} to get a compatible presentation of $\mathcal{DR}_*$. In particular, $\mathcal{DR}_*$ is generated by the image of $\operatorname{dR}$; see \autoref{Cor de Rham generated by varieties}.
\end{Rmk}
\begin{Thm}\label{Thm de Rham ring presentation}
Consider the graded ring $\mathbf Z[A,B,C,D]$ where $A$ and $B$ have degree $1$, and $C$ and $D$ have degree $2$. Then the map
\[
\psi \colon \mathbf Z[A,B,C,D] \to \mathcal{DR}_*
\]
given by
\begin{align*}
\psi(A) &= (1+t^2)z, & \psi(C) &= t^2 \cdot z^2,\\
\psi(B) &= 2t \cdot z, & \psi(D) &= (t+t^3)z^2
\end{align*}
is a surjection of graded rings with $\psi = s \circ \phi$. The kernel of $\psi$ is given by
\[
J = (A^2C-D^2, AB-2D, B^2-4C, BD-2AC).
\]
\end{Thm}
\begin{proof}
Note that $\mathcal{DR}_n$ is free of rank $n+1$, with basis given by
\begin{align*}
&(t^i+t^{2n-i})z^n & & 0 \leq i < n,\\
&t^nz^n & & n \equiv 0 \pmod{2},\\
&2t^nz^n & & n \equiv 1 \pmod{2}.
\end{align*}
Computing the image of $s$ on the basis of $\mathcal{H}_n$ of \autoref{Def r_n}, we see that $s$ is surjective. Since $\phi$ is surjective by \autoref{Thm Hodge ring presentation}, we conclude that $\psi = s \circ \phi$ is surjective. On the other hand, one checks that $J \subseteq \ker \psi$, so we get a surjection
\[
R = \mathbf Z[A,B,C,D]/J \twoheadrightarrow \mathcal{DR}_*.
\]
It suffices to show that the degree $n$ part of $R$ is generated by $n+1$ elements. It is generated by $A^iB^jC^kD^\ell$ for $i+j+2k+2\ell = n$. By the relation $B^2 - 4C$ we may assume $j \leq 1$. The relation $AB - 2D$ then shows that we need only consider the monomials $A^iC^kD^\ell$ and $BC^kD^\ell$. Then $BD - 2AC$ allows us to restrict to $A^iC^kD^\ell$ and $BC^k$. Finally, the relation $A^2C-D^2$ shows that $R_n$ is generated by
\begin{align*}
A^iD^\ell, & & C^kD^\ell\ \ (k > 0), & & AC^kD^\ell\ \ (k > 0), & & BC^k.
\end{align*}
If $n$ is even, there are $\tfrac{n}{2}+1$, $\tfrac{n}{2}$, $0$, and $0$ monomials of these types in $R_n$ respectively, and if $n$ is odd there are $\tfrac{n+1}{2}$, $0$, $\tfrac{n-1}{2}$, and $1$ monomials of these types in $R_n$. In both cases, they add up to $n+1$ elements generating $R_n$.
\end{proof}
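The final count in the proof is elementary bookkeeping; as an illustration (ours, under the degree conventions of the theorem), the following sketch enumerates the four listed monomial types in degree $n$ and checks that they number $n+1$ in both parities.

```python
# Count the monomials A^i D^l, C^k D^l (k > 0), A C^k D^l (k > 0), B C^k of
# degree n, where deg A = deg B = 1 and deg C = deg D = 2.
def generator_count(n):
    gens = set()
    for l in range(n // 2 + 1):
        i = n - 2 * l
        gens.add(("A^i D^l", i, l))                 # always i >= 0 here
        for k in range(1, n // 2 + 1):
            if 2 * k + 2 * l == n:
                gens.add(("C^k D^l", k, l))         # k > 0
            if 1 + 2 * k + 2 * l == n:
                gens.add(("A C^k D^l", k, l))       # k > 0
    if n % 2 == 1:
        gens.add(("B C^k", (n - 1) // 2))           # 1 + 2k = n
    return len(gens)

# In both parities the four types together give n + 1 generators of R_n.
for n in range(1, 20):
    assert generator_count(n) == n + 1
```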
\begin{Cor}\label{Cor de Rham generated by varieties}
The de Rham ring of varieties over $k$ is generated by smooth projective varieties that are defined over $\mathbf F_p$, admit a lift to $\mathbf Z_p$, and for which the Hodge--de Rham spectral sequence degenerates.
\end{Cor}
\begin{proof}
This follows from the same statement in \autoref{Cor Hodge generated by varieties} and the surjectivity of $s \colon \mathcal{H}_* \to \mathcal{DR}_*$.
\end{proof}
\section{The Hodge--de Rham ring}\label{Sec Hodge-de Rham}
Because the triangle in \autoref{Rmk Hodge to de Rham} does not commute, the last thing to compute is the image of the diagonal map
\begin{align*}
\mathbf{Var}_k &\to \mathcal{H}_* \times \mathcal{DR}_*\\
X &\mapsto (h(X),\operatorname{dR}(X)).
\end{align*}
Once again, we will define a subring containing the image, and construct enough varieties that generate this subring.
\begin{Def}
Define the ring homomorphisms
\begin{align*}
\chi \colon \mathcal{H}_* &\to \mathbf Z[z] & h^{0,0} \colon \mathcal{H}_* &\to \mathbf Z[z] & \chi \colon \mathcal{DR}_* &\to \mathbf Z[z] & h^0 \colon \mathcal{DR}_* &\to \mathbf Z[z]\\
x,y &\mapsto -1, & x,y &\mapsto 0, & t &\mapsto -1, & t &\mapsto 0.
\end{align*}
Then the \emph{Hodge--de Rham ring of varieties over $k$} is the subring
\[
\mathcal{HDR}_* = \left\{\big(a,b\big) \in \mathcal{H}_* \times \mathcal{DR}_* \ \Bigg|\ \begin{array}{c}h^{0,0}(a) = h^0(b),\\\chi(a) = \chi(b).\end{array}\right\}.
\]
Note that $(h(X),\operatorname{dR}(X)) \in \mathcal{HDR}_n$ if $X$ is a smooth and proper variety over $k$ of dimension $n$. Indeed, the Euler characteristic in any spectral sequence is constant between the pages, so $\chi(h(X)) = \chi(\operatorname{dR}(X))$. Moreover, $h^{0,0}(X)$ and $h^0(X)$ agree since they both equal the number of geometric components of $X$.
\end{Def}
\begin{Lemma}\label{Lem kernel of chi times h^0}
The kernel of the morphism $(\chi,h^0) \colon \mathcal{DR}_* \to \mathbf Z[z] \times \mathbf Z[z]$ given by $t \mapsto (-1,0)$ is generated by $(t+2t^2+t^3)z^2$ and $(t^2+2t^3+t^4)z^3$.
\end{Lemma}
\begin{proof}
Clearly $I = ((t+2t^2+t^3)z^2,(t^2+2t^3+t^4)z^3)$ is contained in $\ker(\chi,h^0)$, so we get a quotient map
\[
R = \mathcal{DR}_*/I \to \mathbf Z[z] \times \mathbf Z[z].
\]
The image is generated in degree $n$ by $(1,1)$ if $n = 0$, by $(2z^n,0)$ and $(0,z^n)$ if $n$ is odd, and by $(z^n,0)$ and $(0,z^n)$ if $n > 0$ is even. Thus, it suffices to show that $R_n$ can be generated by $1$ element if $n = 0$ and by $2$ elements if $n > 0$.
Under the presentation $\mathbf Z[A,B,C,D]/J \cong \mathcal{DR}_*$ of \autoref{Thm de Rham ring presentation}, the elements $D+2C$ and $AC+BC$ map to the generators of $I$, so
\[
R = \mathbf Z[A,B,C,D]/(J+(D+2C,AC+BC)).
\]
Pass to the quotient $\mathbf Z[A,B,C,D]/(D+2C) \cong \mathbf Z[A,B,C]$, where the image of $J+(D+2C,AC+BC)$ is generated by $B^2-4C$, $AB+4C$, and $AC+BC$. Setting $E = A + B$ (corresponding to the class of an elliptic curve), we finally have to compute
\[
R \cong \mathbf Z[E,B,C]/(B^2-4C,EB,EC).
\]
Then $R_n$ is generated by $E^iB^jC^k$ for $i+j+2k=n$. Using $B^2-4C$ we reduce to $j \leq 1$. The relation $EB$ shows that for $j = 1$ we may take $i = 0$. By the relation $EC$ we may assume $i = 0$ if $k > 0$. Thus, $R_n$ is generated by
\begin{align*}
E^i, & & C^k\ \ (k > 0), & & BC^k,
\end{align*}
sitting in degree $i$, $2k$, and $2k + 1$ respectively. We see that $R_n$ is generated by $1$ element if $n = 0$ and by $2$ elements if $n > 0$.
\end{proof}
\begin{Rmk}\label{Rmk generated by a threefold}
The ideal generated by $(t+2t^2+t^3)z^2$ contains $(2t^2+4t^3+2t^4)z^3$ and $(t+2t^2+2t^3+2t^4+t^5)z^3$, so we may replace the second generator of $I$ as in \autoref{Lem kernel of chi times h^0} by any element in $I_3$ for which $h^2$ is odd.
\end{Rmk}
\begin{Thm}\label{Thm Hodge-de Rham ring presentation}
Let $\mathbf Z[A,B,C,D]$ be as in \autoref{Thm Hodge ring presentation} and \autoref{Thm de Rham ring presentation}. Let $S$ be a surface for which $h^{1,0}(S)+h^{0,1}(S) - h_{\operatorname{dR}}^1(S)$ is odd, and let $T$ be a threefold for which $h^{2,0}(T) + h^{1,1}(T) + h^{0,2}(T) - h_{\operatorname{dR}}^2(T)$ is odd. Define the map
\[
\tau \colon \mathbf Z[A,B,C,D][S,T] \to \mathcal{HDR}_* \subseteq \mathcal{H}_* \times \mathcal{DR}_*
\]
on $\mathbf Z[A,B,C,D]$ by $(\phi,\psi)$, and by
\begin{align*}
\tau(S) = (h(S),\operatorname{dR}(S)), & & \tau(T) = (h(T),\operatorname{dR}(T)).
\end{align*}
Then $\tau$ is surjective. In particular, $\mathcal{HDR}_*$ is generated by varieties.
\end{Thm}
\begin{Ex}\label{Ex S and T}
For $S$ we may take the surface of \autoref{Prop example Hodge-de Rham degeneration}. For $T$, we may take a sufficiently high degree smooth hypersurface in $S \times S$. Indeed, a K\"unneth computation shows that for $S \times S$ the difference $h^{2,0}+h^{1,1}+h^{0,2}-h_{\operatorname{dR}}^2$ is odd. In characteristic $0$, Nakano vanishing implies weak Lefschetz for algebraic de Rham cohomology. In arbitrary characteristic, replacing Nakano vanishing by Serre vanishing shows that for sufficiently high degree hypersurfaces $T \subseteq S \times S$, the Hodge and de Rham numbers in degree $i < \dim T$ agree with those of $S \times S$.
See for example \cite{vDdBP} for a proof of weak Lefschetz for algebraic de Rham cohomology using Nakano vanishing or Serre vanishing.
\end{Ex}
\begin{proof}[Proof of \autoref{Thm Hodge-de Rham ring presentation}]
By \autoref{Cor Hodge generated by varieties} the Hodge ring $\mathcal{H}_*$ is generated by varieties for which the Hodge--de Rham spectral sequence degenerates. Thus, given $(a,b) \in \mathcal{HDR}_*$, we know that $(a,s(a)) \in \operatorname{im}(\tau)$. We reduce to the case $a = 0$, hence $b \in \ker(\chi,h^0)$, which is the ideal $I$ of \autoref{Lem kernel of chi times h^0}. Again using that $(a,s(a)) \in \operatorname{im}(\tau)$, and by surjectivity of $s$ (\autoref{Thm de Rham ring presentation}), it suffices to let $b$ be one of the generators of $I$.
Since $(h(S),s(h(S)))$ is in the image of $\tau$, so is
\[
S' = \bigg(h(S),s(h(S))\bigg)-\tau(S) = \bigg(0,s(h(S))-\operatorname{dR}(S)\bigg).
\]
Writing $b = s(h(S))-\operatorname{dR}(S)$ as $b = (\sum_i b^it^i)z^2$, we have $b^1 = 1$ by assumption. Since $b \in I$ we get $h^0(b) = 0$, hence $b^0 = 0$, so we get $b^4 = 0$, $b^3 = 1$ by Poincar\'e duality. Finally, $\chi(b) = 0$ gives $b^2 = 2$. We conclude that
\[
S' = (0,(t+2t^2+t^3)z^2).
\]
Replacing $S$ by $T$ in this argument, we get an element of $I_3$ for which $h^2$ is odd. By \autoref{Rmk generated by a threefold} any such class can be taken as the second generator for $I$.
\end{proof}
\vskip-\lastskip
This gives all linear congruences between Hodge numbers and de Rham numbers.
\begin{Thm}
The only universal congruences between the Hodge numbers $h^{i,j}(X)$ and the de Rham numbers $h_{\operatorname{dR}}^i(X)$ of smooth proper varieties $X$ of dimension $n$ over $k$ are the congruences spanned by
\begin{itemize}
\item Serre duality: $h^{i,j}(X) = h^{n-i,n-j}(X)$;
\item Poincar\'e duality: $h_{\operatorname{dR}}^i(X) = h_{\operatorname{dR}}^{2n-i}(X)$;
\item Components: $h^{0,0}(X) = h_{\operatorname{dR}}^0(X)$;
\item Euler characteristic: $\sum_{i,j} (-1)^{i+j}h^{i,j}(X) = \sum_i (-1)^ih_{\operatorname{dR}}^i(X)$;
\item Parity of middle cohomology: $h_{\operatorname{dR}}^n(X) \equiv 0 \pmod 2$ if $n$ is odd.
\qed
\end{itemize}
\end{Thm}
\begin{Thm}\label{Thm linear relations Hodge-de Rham}
The only universal linear relations between the Hodge numbers $h^{i,j}(X)$ and the de Rham numbers $h_{\operatorname{dR}}^i(X)$ of smooth proper varieties $X$ of dimension $n$ over $k$ are the relations spanned by
\begin{itemize}
\item Serre duality: $h^{i,j}(X) = h^{n-i,n-j}(X)$;
\item Poincar\'e duality: $h_{\operatorname{dR}}^i(X) = h_{\operatorname{dR}}^{2n-i}(X)$;
\item Components: $h^{0,0}(X) = h_{\operatorname{dR}}^0(X)$;
\item Euler characteristic: $\sum_{i,j} (-1)^{i+j}h^{i,j}(X) = \sum_i (-1)^ih_{\operatorname{dR}}^i(X)$.
\qed
\end{itemize}
\end{Thm}
\begin{comment}
\section{Variant for rigid analytic varieties}\label{Sec rigid}
Let $k$ be a complete nonarchimedean field of mixed characteristic $(0,p)$, i.e.\ a field complete with respect to a rank $1$ valuation (not necessarily discrete) whose residue field has characteristic $p > 0$. Let $\mathbf{Rig}_k$ be the set of isomorphism classes of smooth proper rigid analytic varieties over $k$.
Then Serre duality holds for all $X \in \mathbf{Rig}_k$ \cite{vdP}, and there are Hopf surfaces $X$ with $h^1(X,\mathcal O_X) = 1$ and $h^0(X,\Omega_X^1) = 0$ \cite{Mustafin}, showing that Hodge symmetry fails in the same way as in \autoref{Prop example Hodge symmetry}.
Thus, the arguments of \autoref{Sec Hodge ring} carry over verbatim:
\begin{Thm}
The only universal congruences between the Hodge numbers $h^{i,j}(X)$ of smooth proper rigid analytic spaces $X$ of dimension $n$ over $k$ are those given by Serre duality.
\qedsymbol
\end{Thm}
\begin{Thm}
The only universal linear relations between the Hodge numbers $h^{i,j}(X)$ of smooth proper rigid analytic spaces $X$ of dimension $n$ over $k$ are those given by Serre duality.
\qedsymbol
\end{Thm}
If $k$ is discretely valued, then the Hodge--de Rham spectral sequence degenerates for all $X \in \mathbf{Rig}_k$ by \cite[Cor.\ 1.8]{Scholze}. This was generalised by Conrad--Gabber to arbitrary nonarchimedean fields $k$ of mixed characteristic $(0,p)$ by a spreading out argument \cite{ConGab}; see also \cite[Thm.\ 13.3]{BMS}. Thus, the study of the Hodge--de Rham ring as in \autoref{Sec Hodge-de Rham} is not interesting, because the de Rham numbers $h_{\operatorname{dR}}^m(X)$ are given as the sum $\sum_{i + j = m}h^{i,j}(X)$.
Finally, we address bimeromorphic invariants.
\begin{Def}
In analogy with the complex case \cite[p.~75]{AndSto}
we say that a \emph{meromorphic map} $f \colon X \dashrightarrow Y$ between smooth proper rigid analytic varieties is a map $f \colon U \to Y$ defined on the complement $U$ of a closed analytic subset $Z \subsetneq X$ such that the closure $\Gamma \subseteq X \times Y$ of the graph of $f$ is a closed analytic subset.
A meromorphic map $f \colon X \dashrightarrow Y$ is \emph{bimeromorphic} if there exist closed analytic subsets $A \subsetneq X$ and $B \subsetneq Y$ such that $f$ induces an isomorphism $X \setminus A \stackrel\sim\to Y \setminus B$; in this case $f^{-1} \colon Y \dashrightarrow X$ is meromorphic as well, and the closure of the graph of $f^{-1}$ is $\Gamma^\top \subseteq Y \times X$, where $\Gamma$ is the closure of the graph of $f$.
\end{Def}
\begin{Prop}\label{Prop bimeromorphic}
Let $X, Y \in \mathbf{Rig}_k$ be bimeromorphic. Then $h^{i,0}(X) = h^{i,0}(Y)$ and $h^{0,j}(X) = h^{0,j}(Y)$ for all $i, j$.
\end{Prop}
\begin{proof}[Sketch of proof]
Let $f \colon X \dashrightarrow Y$ be a bimeromorphic map, let $\Gamma \subseteq X \times Y$ and $\Gamma^\top \subseteq Y \times X$ be the closures of the graphs of $f$ and $f^{-1}$ respectively, and let $[\Gamma] \in H^*((X \times Y)_{\bar k}, \mathbf Q_p)$ and $[\Gamma^\top] \in H^*((Y \times X)_{\bar k}, \mathbf Q_p)$ be their cohomology classes. Then the cycles
\begin{align*}
\alpha &= \Gamma^\top \circ \Gamma - \Delta_X \in H^*((X \times X)_{\bar k}, \mathbf Q_p)\\
\beta &= \Gamma \circ \Gamma^\top - \Delta_Y \in H^*((Y \times Y)_{\bar k}, \mathbf Q_p)
\end{align*}
\end{align*}
are supported on $D \times D$ and $E \times E$ respectively, for some closed analytic subvarieties $D \subsetneq X$ and $E \subsetneq Y$.
If $k$ is discretely valued, then the Hodge--Tate decomposition \cite[Cor.~1.8]{Scholze} gives
\[
H^n((X \times X)_{\bar k}, \mathbf Q_p) \underset{\mathbf Q_p\!\!\!}{\otimes} \mathbf C_k \cong \bigoplus_{p+q=n} H^q(X, \Omega_X^p) \underset{k}\otimes \mathbf C_k(-p),
\]
where $\mathbf C_k$ is the completion of $\bar k$. Galois equivariance of the Gysin maps (with the right Tate twists) shows that the cycle $\alpha$ supported on $D \times D$ gets mapped to the inner part $0 < p < n$. Therefore, the action of $\alpha$ on $H^*(X_{\bar k},\mathbf Q_p) \to H^*(X_{\bar k},\mathbf Q_p)$ is $0$ on the outer edge of the Hodge diamond, showing that $\Gamma^\top \circ \Gamma$ acts as the identity on the outer Hodge cohomology. The same goes for $\Gamma \circ \Gamma^\top$, so $f$ induces an isomorphism on outer Hodge cohomology.
If $k$ is arbitrary, an approximation argument \`a la Conrad--Gabber \cite{ConGab} deduces the result from the discretely valued case applied in families.
\end{proof}
\vskip-\lastskip
It is plausible that there is a more general argument that also works in equicharacteristic $0$ or $p$, but for that one probably needs to construct an action of `cycles' on compactly supported Hodge cohomology, in the sense of \cite{ChaRul}.
Now the arguments from \autoref{Sec birational} carry over:
\begin{Thm}
The mod $m$ reduction of an integral linear combination of Hodge numbers is a bimeromorphic invariant of smooth proper rigid analytic spaces over $k$ if and only if the linear combination is congruent mod $m$ to a linear combination of the $h^{i,0}$ and the $h^{0,j}$ (and their duals $h^{n-i,n}$ and $h^{n,n-j}$).
\qedsymbol
\end{Thm}
\begin{Thm}
A rational linear combination of Hodge numbers is a bimeromorphic invariant of smooth proper rigid analytic spaces over $k$ if and only if it is a linear combination of the $h^{i,0}$ and the $h^{0,j}$ (and their duals $h^{n-i,n}$ and $h^{n,n-j}$).
\qedsymbol
\end{Thm}
\end{comment}
\phantomsection
\printbibliography
\end{document}
\begin{document}
\begin{center}
\LARGE
{\bf Stratified Patient Appointment Scheduling \\
for Community-based \\
Chronic Disease Management Programs \\}
\normalsize
{Martin Savelsbergh} \\
{\it Georgia Institute of Technology, Atlanta} \\
{Karen Smilowitz} \\
{\it Northwestern University, Evanston}
\end{center}
\begin{abstract}
Disease management programs have emerged as a cost-effective approach to treat chronic diseases. Appointment adherence is critical to the success of such programs; missed appointments are costly, resulting in reduced resource utilization and worsening of patients' health states. The time of an appointment is one of the factors that impacts adherence. We investigate the benefits, in terms of improved adherence and, ultimately, population health outcomes, of incorporating patients' time-of-day preferences when creating appointment schedules. Through an extensive computational study, we demonstrate, more generally, the usefulness of patient stratification in appointment scheduling in the environment that motivates our research, an asthma management program offered in Chicago. We find that capturing patient characteristics in appointment scheduling, especially their time preferences, leads to substantial improvements in community health outcomes. We also identify settings in which simple, easy-to-use policies can produce schedules that are comparable in quality to those obtained with an optimization-based approach.
\end{abstract}
\noindent \textbf{Keywords:} appointment scheduling, chronic
disease, community-based care, disease progression, patient no-show, time-of-day preference
\section{Introduction}
Disease management programs have emerged as a cost-effective
approach to treat chronic diseases; see \cite{jones2007achieving}. A
disease management program serves a patient population for a
specific chronic disease, such as asthma or diabetes; see
\cite{jones2005breathmobile} and \cite{kucukyazici2013managing}. The
asthma management program offered in Chicago by The Mobile C.A.R.E.
Foundation (MCF) is an example of such a disease management program.
Through a partnership with the Chicago public school system, MCF
serves asthmatic children through repeated visits to their school
with mobile clinics. At each visit, asthmatic children are examined
and treated by a medical team. The visits, as well as the children
that will be seen during the visits, are scheduled months in advance
by an MCF administrator.
The characteristics of this setting lead to interesting challenges in appointment scheduling. Firstly, community-based disease management programs (such as the program that motivates our research) typically serve a fixed patient population with limited care capacity and with a goal of maximizing health outcomes for the entire population. Secondly, the nature of chronic conditions requires recurring patient visits over a planning horizon to
maintain disease control. Disease progression occurs between visits,
which must be taken into account when scheduling appointments.
Thirdly, unlike traditional appointment scheduling settings in which
patients request appointments as needed, appointments in
community-based chronic disease management programs are scheduled by
the provider. This often happens far in advance and overbooking
appointment slots may not be possible. In the case of MCF, for
example, privacy issues and a lack of space to wait in the mobile
clinic prohibit overbooking. Finally, because appointments are
scheduled far in advance, there is a higher likelihood that patients
fail to show up for an appointment.
\begin{wrapfigure}{r}{0.4\textwidth}
\includegraphics[scale=0.4]{MCF_noshows}
\caption{\small Historical missed appointment percentages by time of day at
MCF. \label{f:mcf-missed}}
\end{wrapfigure}
In the case of MCF, a parent or guardian must accompany the patient, and no-show rates of more than 15\% are not uncommon. Importantly, an analysis of historical data from MCF shows that no-show rates vary with time of day, with lower no-show rates in the early morning, during lunch, and in the late afternoon; see Figure \ref{f:mcf-missed}. This is due mostly to the work schedules of parents who must accompany a patient. Consequently, it appears to be important to consider not only the interval between visits, but also the time of the visits when
creating appointment schedules. This explains the two main goals of our research: (1) to assess the benefits, in terms of health outcomes, of accounting for time-of-day preferences in appointment scheduling, and (2) to investigate how easy or hard it is to incorporate time-of-day preferences in appointment scheduling procedures.
We explore both a sophisticated appointment scheduling method, which
considers individual patient characteristics, and relatively simple
and easy-to-use appointment scheduling methods, which only
distinguish groups of patients with similar characteristics, which
we refer to as ``cohort scheduling policies''. These approaches are
compared based on their ability to maximize the health
state of a population, measured by the likelihood that patients'
disease is controlled. We design a set of stylized test instances
based on our motivating setting to better understand the importance
of accounting for patient-specific, time-dependent no-show rates.
The study demonstrates that explicitly accounting for these factors
produces appointment schedules with substantially better population
health outcomes, up to 15\% better in some settings. The study
further shows that easy-to-use cohort-based methods are effective in
settings with a fairly homogeneous patient population and in settings
in which patient preferences are known or can easily be deduced.
These results are encouraging and highlight the tremendous potential
of acquiring and using patients' time-of-day preferences to
construct more effective appointment schedules resulting in better
population health outcomes.
The remainder of the paper is organized as follows. In Section
\ref{sec:char}, we present relevant literature and discuss the
characteristics of the appointment scheduling environment we
consider. In Section \ref{s:sch_app}, we present optimization-based
and cohort-based appointment scheduling approaches. In Section
\ref{s:comp_study}, we present a computational study, in which we
evaluate the performance of scheduling methods for patient
populations with different characteristics. We end with final
remarks in Section \ref{s:final}.
\section{Scheduling patient appointments in chronic disease management}
\label{sec:char}
\cite{kucukyazici2013managing} present an overview of
community-based care programs for chronic diseases, detailing
program operations and effectiveness and highlighting three relevant
papers from the operations research literature: \cite{leff1986lp},
\cite{Deoetal2011}, and \cite{kucukyazici2011analytical}. Critical
to all of these papers is the interaction between care provided and
patient health state, as patients' health states change over time
and with access to health care. Our work builds on
\cite{Deoetal2011}, which first examined the challenges of
appointment scheduling for MCF. In that paper, the authors present
an integrated capacity allocation model to select which patients to
see each period. The model combines clinical (disease progression) and
operational (capacity constraint) factors and is shown to outperform
traditional strategies that decouple the two. However, the model
does not consider patient no-shows, and, as a result, the allocation
of time slots within a day is not part of the model.
Because overbooking is not an option for MCF, each patient no-show
implies a loss of already scarce provider capacity. \cite{MCFcase09}
highlights the impact of patient no-show rates on MCF operations and
outlines the strict policies in place to ensure that provider
capacity is used as effectively as possible: patients who miss
appointments repeatedly may be removed from the program, and schools
with excessive aggregate no-show rates may also be removed from the
program. In this paper, by explicitly considering the temporal
dependence of patient no-show rates in appointment scheduling, we
hope to reduce the occurrence of no-shows.
Appointment scheduling problems in health care settings have been
the focus of much recent work. \cite{gupta2008appointment} provide
a comprehensive overview of the subject; more recent surveys have
included appointment scheduling in reviews of operations research
techniques in a wider range of health care decisions; see
\cite{batun2013optimization} and \cite{hulshof2012taxonomic}.
\cite{gupta2008appointment} categorize the health care appointment
scheduling literature by scheduling environment, given the unique
characteristics of each setting: primary care, speciality clinic,
and elective surgery. Our setting, i.e., a chronic disease
management program, shares some characteristics with speciality
clinics and elective surgery, with a few key differences. As with
elective surgery, patients are scheduled in a ``single batch'',
meaning an administrator schedules all slots for a given time period
at once, rather than scheduling appointments as patients make
requests, as is the case in primary care and often in speciality
clinics. However, unlike surgical settings, chronic patients
require recurring visits to the provider and the interval between
these visits impacts disease progression; see
\cite{jones2007achieving}.
\cite{SSV2008} demonstrate the relevance of appointment adherence in
a study of the impact of no-shows among patients with diabetes. The
authors find that for each 10\% increment in missed appointment
rate, the odds of good control decrease by a factor of 1.12 and the
odds of poor control increase by a factor of 1.24.
\cite{gupta2008appointment} identify patient no-shows as a key
factor in scheduling and highlight approaches, e.g., open access and
overbooking, to address no-shows; see \cite{robinson2010comparison}
and \cite{liu2010dynamic} for examples of recent work. However, as
noted earlier, the lack of waiting space and the requirement that
parents accompany patients preclude such options in our setting.
A few recent papers have considered patient preferences (including
time-of-day preferences) in the presence of no-shows; see
\cite{feldman2012appointment}, \cite{gupta2008revenue}, and
\cite{wang2011adaptive}. These papers consider dynamic
settings in which patients are scheduled as they make requests.
\cite{feldman2012appointment} consider a multi-day setting in which
patients are offered a set of appointment options at the time of the
appointment request. While their paper also considers a static
model, the request arrival and scheduling setting are quite different
from the single batch scheduling in our chronic disease management
program setting. Their work, along with the others in this stream,
is more akin to revenue management models.
\cite{samoranioutpatient} study multi-day dynamic appointment
scheduling with no-show rates that vary by patient and time since
booking in an outpatient setting with open access and overbooking.
Their work includes a detailed analysis of data from a mental health
center to identify causes of no-shows among patients. Unlike our
setting, appointments are scheduled on a rolling horizon as patients
request appointments. While these differences lead to a
fundamentally different model, the authors use
column generation to handle the large number of variables in their
model, similar to our optimization-based approach.
\section{The patient appointment scheduling environment}
Next, we present the key features and characteristics of the patient
appointment scheduling environment we consider: patient appointment
schedule, patient disease progression, patient no-show
probabilities, and patient appointment schedule evaluation.
\subsection{Patient appointment schedule} \label{sb:pat_app_sch}
We consider an appointment scheduling environment where the planning horizon consists of $K$ periods, each with $T$ time slots, and where there are $P$ patients in the population. In the context of MCF, a period represents a day in which a mobile clinic visits a particular school with $P$ asthmatic students, $T$ of whom can be seen that day. For ease of notation and consistent with MCF practice, we assume that periods are equally spaced in time; however, our models can be generalized if this is not the case. Due to the limited number of time slots in each period, it is typically not possible to see all patients each period (i.e., $P > T$).
The appointment scheduling environment can be represented by means
of a layered network. Each layer represents a period and each node
within a layer represents a time slot, i.e., node $(k,t)$ represents
time slot $t$ in period $k$. A layered network with two periods and
two time slots per period is depicted in Figure \ref{f:net_flow}.
An arc $((k_1, t_1), (k_2, t_2))$ in the layered network represents
the option to schedule an appointment for a patient in time slot
$t_1$ in period $k_1$ followed by an appointment in time slot $t_2$
in period $k_2$. We assume that each patient is seen in period $0$.
While this assumption simplifies the modeling, it can be relaxed
easily.
\begin{figure}
\caption{\small Representation of a patient appointment scheduling
environment with two periods and two time slots per period.}
\label{f:net_flow}
\end{figure}
A patient appointment schedule, i.e., the periods and the time slots within these periods in which the patient is scheduled to be seen, can be represented as a path in the layered network. The appointment scheduling problem is to determine patient appointment schedules
that maximize the aggregate health status of all patients over the planning horizon and in
which no time slot is assigned to more than one patient. A population appointment schedule, i.e., the set of schedules for all patients in the population, can be represented as a set of node disjoint paths in the layered network.
\subsection{Patient disease progression} \label{sb:dis_prog}
The health state of an asthmatic patient is defined by two factors:
severity and control. Severity can be interpreted as the intrinsic
susceptibility of the patient (a factor measured at a patient's
first appointment and a factor that does not change over time).
Severity creates different classifications of patients; the common
severity levels are mild intermittent, mild persistent, moderate
persistent, and severe persistent; see \citet{NHLBI2007guidelines}. Control is the extent to which a patient's asthma is under control,
and may change over time with treatment and natural disease
progression. Categories for control vary within the asthma
community; however, MCF uses one category for controlled and three
sub-categories for uncontrolled, depending on the degree to which
the patient's asthma is not controlled.
\cite{Deoetal2011} characterize disease progression by a patient's
severity, the control state diagnosed and treatment performed at the
last visit, and the time since the last visit. The authors model
disease progression as a Markov process. Based on data from MCF,
the authors calibrate a per-period transition matrix $\mathcal{P}$ to model
natural disease progression between control states for patients and
a transition matrix $\mathcal{Q}$ to represent the treatment effect of a
scheduled visit in terms of changing a patient's control status.
Recognizing that treatment is most effective just after a visit and
that natural disease progression occurs in subsequent periods, the
transition matrix $\mathcal{Q}$ is applied to a patient's diagnosed control
state following a visit, and matrix $\mathcal{P}$ is applied in following
periods until the next scheduled visit.
In our work, we focus on the special case with only two control states (\textit{$0$ = controlled} and \textit{$1$ = uncontrolled}) and ``perfect repair''. With perfect repair, a patient returns to the controlled state after a treatment, regardless of the state diagnosed at the visit. After treatment, disease progression continues with the matrix $\mathcal{P}$. In this setting, disease progression can be characterized by severity (which influences $\mathcal{P}$ as described below) and the time since the last visit. As in \cite{Deoetal2011}, we assume that a patient's control cannot improve through natural disease progression (i.e., an uncontrolled patient can only become controlled through a scheduled visit). With these assumptions, the disease progression matrix $\mathcal{P}$ (for periods in which no visit is scheduled for a patient) and treatment matrix $\mathcal{Q}$ (for periods in which a visit is scheduled for a patient) are as follows:
\begin{align*}
\mathcal{P} = \left[ \begin{array}{cc}
\alpha &1-\alpha\\
0& 1
\end{array}\right]~~~
\mathcal{Q} = \left[ \begin{array}{cc}
1 & 0\\
1& 0
\end{array}\right],
\end{align*}
where the first row and first column correspond to being in a controlled state and the second row and the second column correspond to being in an uncontrolled state.
The parameter $\alpha$ represents the probability that a controlled
patient remains in a controlled health state in the following
period. As shown in \cite{Deoetal2011}, the value of $\alpha$
depends on the patient's severity. For ease of notation, we
formulate the optimization model using a single value of $\alpha$.
However, the computational study includes values that vary by
severity. The probability that a controlled patient remains in the
controlled state decreases as the time since the last visit,
$\delta$, increases. More specifically, the probability that a
patient is in the controlled health state $\delta$ periods after his
last visit is $\alpha^\delta$, and, therefore, the probability that
a patient is in the uncontrolled health state $\delta$ periods after
his last visit is $1-\alpha^\delta$. (See Figure \ref{f:progression}
for an example with $\alpha = 0.95$.)
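As a consistency check, the closed form $\alpha^\delta$ agrees with repeatedly applying the no-visit matrix $\mathcal{P}$ to a just-treated patient. A minimal Python sketch (function names are ours; the value $\alpha = 0.95$ is illustrative):

```python
def control_prob(alpha, delta):
    """Closed form: probability a patient is still controlled delta periods after a visit."""
    return alpha ** delta

def natural_progression(alpha, delta):
    """Apply the no-visit transition matrix P delta times to a just-treated patient."""
    state = [1.0, 0.0]  # [controlled, uncontrolled] immediately after treatment
    for _ in range(delta):
        controlled, uncontrolled = state
        # P = [[alpha, 1-alpha], [0, 1]]: control can be lost but never regained naturally
        state = [controlled * alpha, controlled * (1 - alpha) + uncontrolled]
    return state

# The matrix iteration reproduces the closed form alpha**delta.
probs = natural_progression(0.95, 10)
```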
\begin{figure}
\caption{\small Progression over time of the probability of
a patient's health state for $\alpha = 0.95$.}
\label{f:progression}
\end{figure}
\subsection{Patient no-show probabilities}
Controlling the health states of patients is often complicated by
patients' lack of adherence to scheduled appointments. In the
context of MCF, this is due mostly to parents not showing
up at their child's appointment, which means the examination and
treatment of the child cannot occur, because a parent must be
present. Thus, associated with each patient $i \in \{1, ..., P\}$
and each time slot $t \in \{1, ..., T\}$, there is a patient no-show
probability $n_{it}$. These probabilities differ by time slot $t$
due to the relative convenience of the time slots (e.g., first
appointment of the day and during lunch time). We assume that the
probabilities are the same in each period of the planning horizon,
although the model can be generalized to relax this assumption.
Determining patient no-show probabilities is challenging. As patients are seen infrequently, it is impossible to collect sufficient data to employ statistical techniques to estimate patient no-show probabilities. However, a process can be put in place to get meaningful information from the patients themselves. At an intake consultation, initial time-of-day preference information has to be collected, and follow-up phone calls or emails have to take place at regular intervals to find out if the information on file is still accurate or whether time-of-day preferences have changed. Furthermore, if a no-show occurs, it is essential to assess whether inaccurate or out-of-date patient time-of-day preference information was a contributor, and, if so, take the necessary corrective actions.
\subsection{Patient appointment schedule evaluation}
In Section \ref{sb:pat_app_sch}, we discuss how a patient
appointment schedule can be represented as a path in a layered
network, where an arc in the path links two consecutive scheduled
appointments for a patient, and, in Section \ref{sb:dis_prog}, we
discuss how asthma control deteriorates with the time between
visits (the length of an arc). In this section, we present an
approach to evaluate patient schedules based on the probability of disease control over the planning period (to be
defined precisely next). Using disease control as an indicator of the quality of an appointment schedule
seems appropriate, as \cite{briggs2006cost}, for example, link
asthma control level to a health related quality of life and
\cite{price2002development} demonstrate the links between asthma
control and attack occurrence.
First, we consider the situation with perfect schedule
adherence, i.e., $n_{it} = 0$ for all $i \in \{1, ..., P\}$ and $t
\in \{1, ...,T\}$. In this setting, we associate the quantity
\begin{equation}
\sum_{\delta=1}^{\Delta} (1-\alpha^\delta),
\end{equation}
with an arc between two consecutive appointments that are $\Delta$ periods apart. We refer to this quantity, which is the sum of probabilities that a patient is in the uncontrolled state in the periods between the two appointments, as the \textit{aggregate probability} (realizing that the value, in fact, does not represent an actual probability). The aggregate probability $U_i$ of patient
$i$ being in an uncontrolled state during the planning horizon is
then simply the sum of the aggregate probabilities associated with
the arcs in the path representing the patient's appointment
schedule.
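Under perfect adherence, evaluating a schedule thus reduces to summing the arc quantity above along the path. A small sketch (visit periods are illustrative; the terminal arc to the end of the horizon, if desired, can be added the same way):

```python
def arc_value(alpha, Delta):
    """Aggregate probability of being uncontrolled on an arc spanning Delta periods."""
    return sum(1 - alpha ** d for d in range(1, Delta + 1))

def schedule_value(alpha, visit_periods):
    """U_i under perfect adherence: sum of arc values between consecutive visits.
    visit_periods must start at 0 (every patient is seen in period 0)."""
    total = 0.0
    for a, b in zip(visit_periods, visit_periods[1:]):
        total += arc_value(alpha, b - a)
    return total

U = schedule_value(0.9, [0, 2, 5])  # visits in periods 0, 2, and 5
```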
However, when patient schedule adherence is not perfect (i.e.,
$n_{it} > 0$ for some or all $i \in \{1, ..., P\}$ and $t \in \{1,
...,T\}$), the aggregate probability associated with an arc in the
path can no longer be calculated without knowledge of prior
appointments. The time between consecutive visits of a patient is no
longer equal to the number of periods represented by the length of
an arc, since there is a positive probability that the patient did
not show up at the appointment at the tail of the arc. The presence
of no-show probabilities implies that the time between visits is
uncertain, and thus calculating the aggregate probability that a
patient is in the uncontrolled state during the planning horizon
becomes more involved.
To simplify the calculations, we model the option of not scheduling patient $i$ in a given period with a fictitious time slot $T+1$ with no-show probability $n_{i,T+1} = 1$, i.e., a patient that is scheduled \textit{not} to be seen in a period, will not be seen in that period with probability 1. By adding an additional node corresponding to this artificial time slot to the layered network (as well as the necessary arcs), a patient appointment schedule is represented by a path of exactly $K+1$ arcs, each connecting one period to the next. (Note that we allow more than one patient to be scheduled in this fictitious time slot.) We can now construct a time-since-last-visit probability tree for a patient appointment schedule. Figure \ref{f:decision_tree} presents the early periods of a
time-since-last-visit probability tree for a patient appointment
schedule with appointment time slots $t_1$, $t_2$, $t_3$, $\dots$.
\begin{figure}
\caption{\small Time-since-last-visit probability tree for a patient appointment schedule $t_1$, $t_2$, $t_3$, $\dots$.}
\label{f:decision_tree}
\end{figure}
As shown in Figure \ref{f:decision_tree}, the probability of the
time since the last visit, $l$, can be calculated explicitly at each
period (where, for presentational convenience, we have indicated the
time, $\Delta$, since the last visit on the arc into a node). Given
the assumption that all patients are seen in period 0, there are
$K+1$ possible values for the time since the last visit $l$. To
calculate the expected time since the last visit $l$, the
probability distribution function of all possible values of $l$ is
needed. Let $P_{i}^{kl}$ denote the probability that the number of
periods since the last visit of patient $i$ is $l$ immediately after
the scheduled appointment in period $k$, i.e.,
\begin{equation} \label{e:state_prob}
P_{i}^{kl} = \begin{cases}
1 - n_{it} &\text{ if } l = 0\\
n_{it} P_i^{k-1,l-1} &\text{ otherwise, }
\end{cases}
\end{equation}
assuming that the appointment in period $k$ is in time slot $t$. Since we assume that at the start of the planning horizon a patient has just been seen, we have $P_i^{00} = 1$ and $P_i^{0l} = 0, \forall l > 0$. Recall that if patient $i$ is not scheduled in
period $k$, then $n_{it} = n_{i,T+1} = 1$. Since the no-show
probability of the time slot impacts the control state of a patient
in the interval following the scheduled appointment, the control
state is calculated including the interval following the scheduled
appointment. With the distribution function defined at each
period $k$, the expected aggregate probability $E[U_i^k]$ of patient
$i$ being in an uncontrolled state after a visit in period $k$, including the interval following period $k$, is
\begin{equation} \label{e:exp_cost}
E[U_i^k] = E[U_i^{k-1}] + \sum_{l=0}^{k} P_i^{kl} (1-\alpha^{l+1}),
\end{equation}
where $E[U_i^0] = 1 - \alpha$.
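The recursions (\ref{e:state_prob}) and (\ref{e:exp_cost}) can be evaluated directly. A sketch, assuming the patient's schedule is given as the per-period no-show probability of the assigned slot (with probability 1 for the fictitious slot $T+1$):

```python
def expected_uncontrolled(alpha, no_show):
    """E[U^K] for one patient, following the recursions for P^{kl} and E[U^k].

    no_show[k-1] is the no-show probability of the slot assigned to the
    patient in period k (1.0 when the patient uses the fictitious slot T+1).
    """
    K = len(no_show)
    P = [1.0] + [0.0] * K      # after period 0 the patient has just been seen
    EU = 1 - alpha             # E[U^0]: the interval following period 0
    for k in range(1, K + 1):
        n = no_show[k - 1]
        # showing up resets the clock; a no-show shifts the distribution by one
        P = [1 - n] + [n * P[l - 1] for l in range(1, K + 1)]
        EU += sum(P[l] * (1 - alpha ** (l + 1)) for l in range(K + 1))
    return EU

# Seen every period vs. seen only in period 0 (alpha = 0.9, K = 2):
always = expected_uncontrolled(0.9, [0.0, 0.0])   # (K+1)(1-alpha) = 0.3
never = expected_uncontrolled(0.9, [1.0, 1.0])
```

With perfect adherence every interval contributes $1-\alpha$, recovering the perfect-adherence value $(K+1)(1-\alpha)$.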
\section{Patient appointment scheduling approaches} \label{s:sch_app}
As mentioned in the introduction, the no-show rates at MCF vary by time of day, which suggests that taking time-of-day preferences into account during the construction of appointment schedules may be beneficial and may improve population health outcomes. As a consequence, the central questions underlying our research are whether time-of-day preference information can be incorporated in patient scheduling algorithms and whether the benefits of employing such algorithms can be quantified. To answer these questions, we develop and deploy a sophisticated appointment scheduling method, which considers individual patient characteristics (Section \ref{s:sol_app}), as well as relatively
simple and easy-to-use appointment scheduling methods, which we refer to as ``cohort scheduling policies''; these divide the patients into groups with similar characteristics and schedule the patients within a group using a round-robin scheme (Section
\ref{s:rule-methods}).
\subsection{Optimization-based appointment scheduling methods} \label{s:sol_app}
\subsubsection{Model formulation}
Let the set of all possible patient appointment schedules be denoted
by $\mathcal{R}$. Furthermore, let $b_{kt}^r$ for
$k \in \{1,...,K\}$, $t \in \{1,...,T\}$, and $r \in \mathcal{R}$ indicate
whether a patient is seen in time slot $t$ in period $k$ in schedule
$r$ ($b_{kt}^r = 1$) or not ($b_{kt}^r = 0$), and let $u_i^r$ for $i
\in \{1,...,P\}$ and $r \in \mathcal{R}$ denote the expected
aggregate probability $E[U_i^K]$ of being in an uncontrolled state
over the planning horizon when schedule $r$ is assigned to patient
$i$ (calculated using (\ref{e:exp_cost})). Finally, let $x_i^r$ for
$i \in \{1,...,P\}$ and $r \in \mathcal{R}$ be a binary variable
indicating whether schedule $r$ is assigned to patient $i$
($x_i^r = 1$) or not ($x_i^r = 0$). Recall that we model the option
of not scheduling a patient in a given period with a time slot $T+1$
with capacity $C_{T+1} = P$; the capacity of all other time slots is
1. The optimization model is defined as
\begin{subequations}
\label{e:set_part}
\begin{equation} \label{e:set_part_obj}
\min \sum_{r \in \mathcal{R}} \sum_{i = 1}^{P} u_i^r x_i^r
\end{equation} \indent
subject to
\begin{align}
\sum_{r \in \mathcal{R}} x_i^r &= 1 & i \in \{1, ..., P\} \label{e:patient_assign} \\
\sum_{i = 1}^{P} \sum_{r \in \mathcal{R}} b_{kt}^r x_i^r & \leq C_t & k \in \{1, ..., K\}, t \in \{1, ..., T+1\} \label{e:slot_assign} \\
x_i^r &\in \{0,1\} & r \in \mathcal{R}, i
\in \{1, ..., P\}. \label{e:integer}
\end{align}
\end{subequations}
Rather than enumerating all possible patient appointment schedules
upfront, we use column generation to solve the linear programming
relaxation of (\ref{e:set_part}) and iteratively add new appointment
schedules to a restricted master problem \citep{BJNSV1998, dds2005}.
We relax constraints (\ref{e:patient_assign}) to $\sum_{r \in
\mathcal{R}} x_i^r \geq 1$ for computational efficiency. Since
all patient appointment schedules have a positive aggregate
probability of being in an uncontrolled state, this will not change
the optimal solution.
We initialize the restricted master problem with the patient
schedules derived from a simple rotation policy (see Section
\ref{s:rule-methods}). After solving the linear programming
relaxation of (\ref{e:set_part}), we use a branch-and-bound approach
to obtain an integer solution. We do not generate additional
columns throughout the branch-and-bound tree.
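For intuition only, the set-partitioning model (\ref{e:set_part}) can be solved by exhaustive enumeration on a toy instance (two periods, one real slot per period, two patients). This sketch replaces column generation with brute force and uses illustrative parameters; the schedule-evaluation recursion is restated inline so the example is self-contained:

```python
from itertools import product

def expected_uncontrolled(alpha, no_show):
    """E[U^K] via the recursions for P^{kl} and E[U^k]."""
    K = len(no_show)
    P = [1.0] + [0.0] * K
    EU = 1 - alpha
    for k in range(1, K + 1):
        n = no_show[k - 1]
        P = [1 - n] + [n * P[l - 1] for l in range(1, K + 1)]
        EU += sum(P[l] * (1 - alpha ** (l + 1)) for l in range(K + 1))
    return EU

# Toy instance: 2 periods, 1 real slot per period, 2 patients.
alpha = 0.9
patients = {"A": 0.1, "B": 0.5}   # slot no-show probability per patient

best = None
# Each period's real slot goes to patient A, patient B, or nobody
# (capacity 1), which enumerates all feasible joint assignments.
for owners in product(["A", "B", None], repeat=2):
    total = 0.0
    for name, n in patients.items():
        seq = [n if owners[k] == name else 1.0 for k in range(2)]
        total += expected_uncontrolled(alpha, seq)
    if best is None or total < best[0]:
        best = (total, owners)
```

Here the optimum gives both slots to the more reliable patient A, illustrating how no-show probabilities shape the optimal assignment.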
\subsubsection{Pricing problem formulation}
Given an optimal solution to the linear programming relaxation of
the restricted master problem, a pricing problem is solved to
determine whether there are any patient appointment schedules with
negative reduced costs. This can be done independently for each
patient.
Recall that a patient appointment schedule can be represented as a
path in a layered network. Figure \ref{f:pricing_prob} shows an
example of the layered network for an environment with four periods
and two time slots per period. Note that each layer, corresponding
to a period, includes an additional node to account for the patient
not being scheduled in that period.
\begin{figure}
\caption{\small A layered network for a pricing problem for a single patient; 4 periods and 2 time slots.}
\label{f:pricing_prob}
\end{figure}
Let $\sigma_i$ denote the dual variable associated with the
relaxation of constraint (\ref{e:patient_assign}) for patient $i$
and let $\pi_{kt}$ denote the dual variable associated with
constraint (\ref{e:slot_assign}) for period $k$ and time slot $t$.
The reduced cost of an appointment schedule for patient $i$ is given
by the expected aggregate probability of being in an uncontrolled
state of that appointment schedule plus the sum of the dual values
associated with the time slots in that appointment schedule and the
dual value associated with the constraint that ensures exactly one
appointment schedule is selected for the patient. This is
equivalent to the value of the corresponding path in the layered
network plus the sum of the dual values associated with the nodes
visited on that path and the dual value associated with the
constraint that ensures exactly one appointment schedule is selected
for the patient (this last term is independent of the path in the
layered network). Therefore, determining whether a patient
appointment schedule with negative reduced cost exists for patient
$i$ can be done by solving a shortest path problem on the layered
network.
The {\it adjusted} expected aggregate probability $\hat{E}[U_i^k]$
of a partial path for patient $i$ ending in node $(k,t)$, i.e.,
adjusted by the dual values associated with the nodes visited on the
partial path, is given by
\begin{equation} \label{e:exp_red_cost}
\hat{E}[U_i^k] = \hat{E}[U_i^{k-1}] + \sum_{l=0}^{k} P^{k,l}_i
(1-\alpha^{l+1}) + \pi_{kt},
\end{equation}
which involves the (discrete) probability distribution of the time
since the last visit, represented by the $K+1$ dimensional vector
$(P^{k,0}_i, P^{k,1}_i, \ldots, P^{k,K}_i)$ and defined by (\ref{e:state_prob}).
Note that in (\ref{e:exp_red_cost}) we use the fact that in
period $k$ the probability that the time since the last visit is
greater than $k$ is zero (because every patient is seen in period
0).
The value of the dual variable associated with the relaxation of
constraints (\ref{e:patient_assign}) that ensure exactly one
appointment schedule is selected for patient $i$, i.e., $\sigma_i$,
is added to the adjusted expected aggregate probability of a
(complete) path to determine the reduced cost of the path. The
pricing problem finds for each patient $i \in \{1, ..., P\}$ a path
with minimum reduced cost and adds the corresponding column to the
restricted master problem if the reduced cost of that path is
negative. The restricted master problem is resolved to obtain a new
optimal dual solution and the process repeats as long as any columns
with negative reduced costs are found.
\subsubsection{Pricing problem solution approaches}
With the inclusion of the ``no appointment'' node, the network has a
simple layered structure in which each layer corresponds to a
period and in which there are only arcs between consecutive layers.
Thus, any path from the source to the sink visits exactly one node
in each layer; see Figure \ref{f:pricing_prob} for an example. The
structure of the layered network is the same for all patients; only
the no-show rates and the severity differ by patient. The dual
values change each time the pricing problem is solved.
Because solving the pricing problem optimally involves solving a
multi-label shortest path problem with $K+2$ labels ($K+1$ for the
probability vector and one for the adjusted expected cost), solving
the pricing problem for large values of the planning horizon $K$ can
become prohibitive. Therefore, we consider the following heuristic
for solving the pricing problem. Rather than using the probability
distribution of the time since the last visit, we use the expected
time $E[\Delta^k]$ since the last visit, which can be calculated as
follows: $E[\Delta^k] = (1-n_{it})(1) + n_{it} (E[\Delta^{k-1}] +
1)$. This reduces the number of labels to maintain in the
multi-label shortest path problem from $K+2$ to $2$. Computational
tests show that the heuristic produces near-optimal solutions in an
acceptable amount of time.
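The heuristic can be sketched as a single forward pass over the layered network that keeps one $(\text{cost}, E[\Delta])$ label per node. The cost increment $1-\alpha^{E[\Delta^k]}$ in place of the probability sum in (\ref{e:exp_red_cost}) is a simplifying assumption of this sketch, and the dual values are taken as given:

```python
import math

def heuristic_pricing(alpha, no_show, pi, sigma):
    """Heuristic pricing pass for one patient (a sketch, not the exact
    multi-label algorithm): one (cost, E[Delta]) label is kept per node.

    no_show[k][t] and pi[k][t] cover K periods and T+1 slots, where the
    last slot is the 'no appointment' node (no_show = 1 and pi = 0 there);
    sigma is the dual of the patient's assignment constraint.
    """
    labels = [(1 - alpha, 1.0)]      # after period 0: cost E[U^0], E[Delta] = 1
    for n_row, pi_row in zip(no_show, pi):
        new_labels = []
        for n, dual in zip(n_row, pi_row):
            best_cost, best_gap = math.inf, 0.0
            for cost, gap in labels:
                e_gap = (1 - n) + n * (gap + 1)     # E[Delta^k] recursion
                cand = cost + (1 - alpha ** e_gap) + dual
                if cand < best_cost:
                    best_cost, best_gap = cand, e_gap
            new_labels.append((best_cost, best_gap))
        labels = new_labels
    return min(cost for cost, _ in labels) + sigma

# With zero duals, one perfectly reliable slot, and the fictitious slot,
# the cheapest path is to be seen every period.
rc = heuristic_pricing(0.9, [[0.0, 1.0], [0.0, 1.0]],
                       [[0.0, 0.0], [0.0, 0.0]], 0.0)
```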
\subsection{Cohort-based appointment scheduling methods}
\label{s:rule-methods}
In this section, we present cohort-based scheduling methods, which (1)
partition the patient population into cohorts based on some
differentiating factor, e.g., time-of-day preference, disease
severity, and reliability; (2) use a simple rule to assign time slots in
the planning period to each of the cohorts; and (3) apply a
simple rotation policy to assign time slots to the patients in a
cohort. Cohort-based scheduling methods are intuitive, easy to
use, and quite effective when a heterogeneous patient
population can easily be partitioned into cohorts.
\subsubsection{Cohort development} \label{sbb:cohortdevel}
In this study, we consider three differentiating factors for
grouping patients into cohorts: time-of-day preference, disease
severity, and reliability. Cohort strategies can be characterized by
the number of factors considered for differentiating patients and
the specific features used. A 1-level cohort strategy based on
time-of-day preference, for example, partitions the patient
population into two or more cohorts based on patients' time-of-day
preferences, e.g., a cohort that prefers morning time slots, a
cohort that prefers noon-time time slots, and a cohort that prefers
afternoon time slots. The 0-level cohort strategy has a single cohort consisting of the entire patient population and thus treats all patients the same.
\subsubsection{Allocating time slots to cohorts} \label{sbb:timeassign}
Due to the natural relation between time-of-day preferences and time slots, the logic for allocating time slots to cohorts determined using time-of-day preferences should be different from the logic for allocating time slots to cohorts determined using either disease severity or reliability.
First, we consider cohorts determined using time-of-day preferences. If patients are partitioned into a morning cohort and an afternoon cohort of
roughly equal size, and there are eight time slots in a period (as is the case in our computational experiments), then assigning the four morning time slots to the morning cohort and the four afternoon
time slots to the afternoon cohort is the natural course of action.
On the other hand, if there are three times as many patients in the
afternoon cohort, then assigning the first two time slots (morning) to
the morning cohort and the last six time slots (late morning and
afternoon) to the afternoon cohort is the natural course of action.
In more complex settings, where the numbers do not work out as
nicely as in the examples above, more sophisticated approaches can
be employed, somewhat similar to the one discussed next for allocating slots
to cohorts created by differentiating based on disease severity and
reliability.
For ease of presentation, we assume that there are two cohorts with $n_1$ and $n_2$ patients ($n_1 \leq n_2$), respectively, and that a total of $m = KT$ time slots have to be assigned to the patients over the planning period ($m \gg n_1 + n_2$). The first step is to divide the $m$ time slots over the two cohorts. One possibility is a proportional allocation according to the number of patients in each cohort, but in many cases this is not the best choice. For example, when the cohorts are created based on disease
severity and the number of patients in the cohorts is the same, it
is probably better to allocate more time slots to the cohort with patients
with a higher severity level. For now, suppose that a fraction $f < 0.5$ of
the time slots is allocated to the first cohort, i.e., $\lceil f m
\rceil$ time slots will be used for appointments of patients in the
first cohort. We spread these time slots equally spaced over
the total $m$ time slots by allocating time slots $\lceil \frac{j}{f} \rceil$ for $j = 1, \ldots, \lceil f m \rceil$ to the first cohort. The
remaining time slots are allocated to the second cohort. For
example, if there are 80 time slots in the planning period and 40\% of them
($f = 0.4$) are allocated to the first cohort, then the first cohort
will get time slots $3 = \lceil \frac{1}{0.4} \rceil$, $5 = \lceil
\frac{2}{0.4} \rceil$, $8 = \lceil \frac{3}{0.4} \rceil$, etc. The
scheme can easily be extended to accommodate more than two cohorts
by applying the above procedure recursively, e.g., allocate time
slots to the cohort with the smallest fraction of time slots, then
allocate time slots to the cohort with the second smallest fraction
of time slots, etc. Thus, to define a cohort strategy, one only needs to
decide on the fraction of slots that will be allocated to each of
the cohorts. Once that decision is made the time slots are
allocated automatically.
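The allocation rule is easy to verify in code; this sketch reproduces the worked example ($m = 80$, $f = 0.4$), using exact fractions to avoid floating-point edge cases when $f m$ is an exact integer:

```python
from fractions import Fraction
import math

def allocate_slots(m, f):
    """Slots (1-indexed) for the smaller cohort: ceil(j/f) for
    j = 1, ..., ceil(f*m); all remaining slots go to the other cohort."""
    count = math.ceil(f * m)
    first = [math.ceil(Fraction(j) / f) for j in range(1, count + 1)]
    second = [s for s in range(1, m + 1) if s not in set(first)]
    return first, second

first, second = allocate_slots(80, Fraction(2, 5))  # f = 0.4
```

As in the text, the first cohort receives slots 3, 5, 8, and so on, 32 slots in total.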
\subsubsection{Scheduling patients within each cohort} \label{sbb:rotation}
We use a simple rotation policy to assign patients in a cohort to
the time slots allocated to the cohort, i.e., a round-robin
scheduling rule which schedules patients by patient index. See
Algorithm \ref{alg:rot} for a more precise description.
\begin{algorithm}
\DontPrintSemicolon
$i \leftarrow 1$ \;
\For{$k\leftarrow 1$ \KwTo $K$}{
\For{$t\leftarrow 1$ \KwTo $T$}{
Assign patient $i$ to time slot $t$ in period $k$ \;
\If{$i = P$}{$i \leftarrow 1$} \Else{$i \leftarrow i+1$}
}
}
\caption{\small Creating an appointment schedule with a rotation policy. \label{alg:rot}}
\end{algorithm}
The rotation policy has the advantage that it automatically spreads
out appointments and diversifies the time slots of the appointments
(unless the number of time slots in a period is a divisor of the
number of patients in a cohort). When the number of patients is a
multiple of the number of time slots, diversification can be
introduced by using a slot-reversing rotation policy; see
Appendix A for details.
\section{Computational study} \label{s:comp_study}
We have conducted an extensive computational study to (1) assess the benefit of considering time-of-day preferences when scheduling appointments (by incorporating no-show probabilities during schedule creation), and (2) assess the qualitative differences between the optimization-based appointment scheduling method and
the simpler and easier-to-use cohort-based appointment scheduling
methods.
\subsection{Instances}
To assess the benefit of considering patient time-of-day preferences during appointment scheduling and, more generally, of accounting for different patient characteristics, we create a set of instances with varying patient profiles along the key dimensions of severity, reliability, and time-of-day preferences. Each instance has 20 patients and covers a planning horizon of 13 periods, each with 8 time slots.
\subsubsection*{Time-of-day preferences}
Time-of-day preferences are modeled in terms of no-show probabilities (e.g., low no-show probabilities for morning time slots indicate a preference for morning time slots). We consider three categories of time-of-day preference: \textit{AM}, \textit{Noon}, and \textit{PM}. The no-show probabilities associated with each of these time-of-day preferences, for both a strong preference variant and a weak preference variant, are shown in Table \ref{t:no_show}. We consider six patient population profiles, shown in the right-most part of Table \ref{t:no_show}, in which the number of patients with a specific time-of-day preference differs; Profile I \& II: homogeneous or almost homogeneous preferences; Profile III \& IV \& V: mixed AM, Noon, and PM preferences; Profile VI: balanced AM, Noon, and PM preferences.
\begin{table}[htbp]
\caption{\small Time-of-day preferences and slot-dependent no-show probabilities.}
\label{t:no_show} \centering
\scriptsize
\begin{tabular}{l|l| r r r r r r r r | r r r r r r}
\toprule
Preference & Category & \multicolumn{8}{c}{No-show probability} & \multicolumn{6}{|c}{Profiles} \\
Strength & & \multicolumn{8}{c}{(time slot)} & \multicolumn{6}{|c}{(\# patients)} \\
\hline
& & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & I & II & III & IV & V & VI \\
\hline
Strong & \textit{AM} & 0.05 & 0.05 & 0.05 & 0.05 & 0.35 & 0.35 & 0.35 & 0.35 & 20 & 16 & 10 & 10 & 5 & 7\\
& \textit{Noon} & 0.35 & 0.35 & 0.35 & 0.05 & 0.05 & 0.35 & 0.35 & 0.35 & 0 & 2 & 5 & 0 & 10 & 6\\
& \textit{PM} & 0.35 & 0.35 & 0.35 & 0.35 & 0.05 & 0.05 & 0.05 & 0.05 & 0 & 2 & 5 & 10 & 5 & 7\\
\hline
Weak & \textit{AM} & 0.05 & 0.05 & 0.05 & 0.05 & 0.15 & 0.15 & 0.15 & 0.15 & 20 & 16 & 10 & 10 & 5 & 7\\
& \textit{Noon} & 0.15 & 0.15 & 0.15 & 0.05 & 0.05 & 0.15 & 0.15 & 0.15 & 0 & 2 & 5 & 0 & 10 & 6\\
& \textit{PM} & 0.15 & 0.15 & 0.15 & 0.15 & 0.05 & 0.05 & 0.05 & 0.05 & 0 & 2 & 5 & 10 & 5 & 7\\
\bottomrule
\end{tabular}
\normalsize
\end{table}
\subsubsection*{Severity}
Patient severity is modeled with different values of $\alpha$, the probability that a controlled patient remains in a controlled health state in the period following treatment. We consider severities in the range from \textit{Mild}, modeled with $\alpha = 0.9$, to \textit{Severe}, modeled with $\alpha = 0.8$. We consider four patient population profiles, shown in the right-most part of Table \ref{t:severity}, in which the number of patients with a specific severity level differs; Profile I: homogeneous mild; Profile II: mixed mild/severe; Profile III: homogeneous severe; Profile IV: varied with severities in the interval [0.8 - 0.9] (i.e., between mild and severe). In Profile II, the patients in the population with a mild severity level are selected randomly (and thus so are the patients with a severe severity level). As a consequence, the number of mild and severe patients with a similar time-of-day preferences might not be balanced, e.g., if 10 patients have a morning time-of-day preference, it is possible that three have a mild level of severity and seven have a severe level of severity. The severity of the patients in the varied profile is drawn randomly from a uniform distribution with lower and upper bounds 0.8 and 0.9, respectively.
\begin{table}[htbp]
\caption{\small Severity profiles.}
\label{t:severity} \centering
\begin{tabular}{l | c | r r r r }
\toprule
Category & \multicolumn{1}{c}{Control probability} & \multicolumn{4}{|c}{Profiles} \\
& \multicolumn{1}{c}{($\alpha$)} & \multicolumn{4}{|c}{(\# patients)} \\
\hline
& & I & II & III & IV \\
\hline
Mild & 0.9 & 20 & 10 & 0 & 0 \\
Varying & [0.8 - 0.9] & 0 & 0 & 0 & 20 \\
Severe & 0.8 & 0 & 10 & 20 & 0 \\
\bottomrule
\end{tabular}
\end{table}
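As an illustrative sketch (not part of the paper's model, which also includes treatment and no-show dynamics), the role of $\alpha$ can be seen by computing the probability that a controlled patient remains controlled over several consecutive untreated periods; the function name is ours:

```python
def still_controlled(alpha: float, k: int) -> float:
    """P(still controlled after k consecutive periods), where alpha is the
    per-period probability that a controlled patient remains controlled."""
    return alpha ** k

# Mild (alpha = 0.9) versus severe (alpha = 0.8) patients after 5 periods:
mild = still_controlled(0.9, 5)
severe = still_controlled(0.8, 5)
print(round(mild, 3), round(severe, 3))
```

The gap widens quickly with the number of periods between visits, which is why more severe patients warrant more frequent appointments.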
\subsubsection*{Reliability}
Patient reliability is modeled by adjusting the no-show probabilities associated with time-of-day preferences. We consider reliabilities in the range from \textit{Reliable}, modeled by multiplying the no-show probabilities associated with time-of-day preferences by 0.8, to \textit{Unreliable}, modeled by keeping the no-show probabilities associated with time-of-day preferences unchanged. We consider four patient population profiles, shown in the right-most part of Table \ref{t:reliability}, in which the number of patients with a specific reliability level differs; Profile I: homogeneous reliable; Profile II: mixed reliable/unreliable; Profile III: homogeneous unreliable; Profile IV: varied with reliabilities in the interval [0.8 - 1.0] (drawn randomly from a uniform distribution with lower and upper bounds 0.8 and 1.0, respectively). Again, reliable and unreliable patients in the \textit{mixed} profile are selected randomly, and thus the number of reliable and unreliable patients with a similar time-of-day preference (and severity level) might not be balanced.
\begin{table}[htbp]
\caption{\small Reliability profiles.}
\label{t:reliability} \centering
\begin{tabular}{l | c | r r r r }
\toprule
Category & \multicolumn{1}{c}{Reliability} & \multicolumn{4}{|c}{Profiles} \\
& & \multicolumn{4}{|c}{(\# patients)} \\
\hline
& & I & II & III & IV \\
\hline
More reliable & 0.8 & 20 & 10 & 0 & 0 \\
Varying & [0.8 - 1.0] & 0 & 0 & 0 & 20 \\
Less reliable & 1.0 & 0 & 10 & 20 & 0 \\
\bottomrule
\end{tabular}
\end{table}
\noindent We have combined the above patient population profiles into a total of 42 instances as shown in the first five columns of Table \ref{t:cohort_perf}. For each of these instances, we examine two variants, one in which the time-of-day preferences are strong, and one in which the time-of-day preferences are weak.
\subsection{Computational results}
To evaluate the benefit of accounting for different patient characteristics during appointment scheduling, and to determine the effectiveness of cohort policies, we evaluate the expected aggregate control probability $z$ for the patient population of the appointment schedule produced by a particular cohort strategy, relative to the expected aggregate control probability $z^*$ for the patient population of the appointment
schedule produced by the optimization-based method.
In Table \ref{t:cohort_perf}, we present the percentage performance gap, $100\frac{z-z^*}{z^*}$. The first five columns of Table \ref{t:cohort_perf} describe the instance characteristics. Columns 6-9 present the performance gaps for instances in which patients have strong time-of-day preferences and columns 10-13 present performance gaps for instances in which patients have weak time-of-day preferences. Recall that the 0-level cohort policy is a simple rotation policy that treats all patients the same. The three 1-level cohort scheduling policies relate to the distinguishing factor used to define the cohorts, i.e., time-of-day preference (T), disease severity (S), and reliability (R). A 1-level cohort policy requires the specification of the number of slots allocated to each of the cohorts. These allocations can be found in Tables \ref{coh:tod}, \ref{coh:sev}, and \ref{coh:rel} in Appendix B, respectively. We note that because the optimization-based method uses heuristic pricing and does not generate additional columns during the tree search, it is possible to see negative gaps (indicating that a cohort strategy has produced a better solution).
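The gap metric itself is a one-line computation; the following sketch (the function name is ours) mirrors the definition above, with negative values indicating that the cohort policy produced a better solution than the optimization-based method:

```python
def performance_gap(z: float, z_star: float) -> float:
    """Percentage gap 100*(z - z_star)/z_star between the objective value z
    of a cohort scheduling policy and z_star of the optimization-based
    method. Negative values mean the cohort policy did better."""
    return 100.0 * (z - z_star) / z_star
```

For example, a cohort policy whose objective is 5\% above the optimization-based benchmark yields `performance_gap(105.0, 100.0) == 5.0`.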
\begin{table}[htbp]
\centering
\caption{\small Performance of cohort scheduling policies relative to the optimization-based approach \label{t:cohort_perf}}
\scriptsize
\centering
\begin{tabular}{r|rr|rr|r|rrr|r|rrr}
\toprule
\multicolumn{5}{c|}{Instance characteristics} & \multicolumn{4}{c|}{Strong time preference (0.05 vs.\ 0.35)} & \multicolumn{4}{c}{Weak time preference (0.05 vs.\ 0.15)} \\
\midrule
\multicolumn{1}{c|}{ToD Preference} & \multicolumn{2}{c|}{Severity} & \multicolumn{2}{c|}{Reliability} & 0-level & \multicolumn{3}{c|}{1-level} & 0-level & \multicolumn{3}{c}{1-level} \\
\multicolumn{1}{c|}{Profile} & \multicolumn{1}{c}{0.8} & \multicolumn{1}{c|}{0.9} & 0.8 & 1 & & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{S} & \multicolumn{1}{c|}{R} & & \multicolumn{1}{c}{T} & \multicolumn{1}{c}{S} & \multicolumn{1}{c}{R} \\ \hline
\multicolumn{1}{c|}{\multirow{3}[2]{*}{I}} & 20 & 0 & \multicolumn{2}{c|}{\multirow{3}[1]{*}{Random}} & 1.6\% & & & 1.9\% & 0.7\% & & & 0.9\% \\
\multicolumn{1}{c|}{} & 10 & 10 & \multicolumn{2}{c|}{} & 1.3\% & & 0.6\% & 1.4\% & 2.1\% & & 0.9\% & 2.4\% \\
\multicolumn{1}{c|}{} & 0 & 20 & \multicolumn{2}{c|}{} & 2.0\% & & & 2.5\% & 1.0\% & & & 1.3\% \\ \cline{2-13}
\multicolumn{1}{c|}{AM:20} & \multicolumn{2}{c|}{\multirow{4}[1]{*}{Random}} & 0 & 20 & 1.1\% & & 2.7\% & & 1.4\% & & 1.8\% & \\
\multicolumn{1}{c|}{Noon:0} & \multicolumn{2}{c|}{} & 10 & 10 & 2.4\% & & 3.8\% & 4.7\% & 1.5\% & & 1.9\% & 3.3\% \\
\multicolumn{1}{c|}{PM:0} & \multicolumn{2}{c|}{} & 20 & 0 & 1.1\% & & 2.3\% & & 1.3\% & & 1.7\% & \\
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Random} & 1.8\% & & 3.3\% & 2.4\% & 1.5\% & & 1.9\% & 1.9\% \\ \hline
\multicolumn{1}{c|}{\multirow{3}[2]{*}{II}} & 20 & 0 & \multicolumn{2}{c|}{\multirow{3}[1]{*}{Random}} & 6.8\% & 0.9\% & & 7.3\% & 2.3\% & 0.4\% & & 2.6\% \\
\multicolumn{1}{c|}{} & 10 & 10 & \multicolumn{2}{c|}{} & 6.8\% & 0.6\% & 5.3\% & 7.4\% & 3.5\% & 1.6\% & 1.9\% & 3.8\% \\
\multicolumn{1}{c|}{} & 0 & 20 & \multicolumn{2}{c|}{} & 8.4\% & 1.2\% & & 9.3\% & 2.9\% & 0.5\% & & 3.3\% \\ \cline{2-13}
\multicolumn{1}{c|}{AM:16} & \multicolumn{2}{c|}{\multirow{4}[1]{*}{Random}} & 0 & 20 & 8.0\% & 0.7\% & 8.9\% & & 3.4\% & 1.0\% & 3.6\% & \\
\multicolumn{1}{c|}{Noon:2} & \multicolumn{2}{c|}{} & 10 & 10 & 8.2\% & 1.3\% & 8.9\% & 9.0\% & 3.2\% & 0.9\% & 3.4\% & 4.5\% \\
\multicolumn{1}{c|}{PM:2} & \multicolumn{2}{c|}{} & 20 & 0 & 6.5\% & 0.6\% & 7.1\% & & 2.9\% & 1.0\% & 3.1\% & \\
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Random} & 7.6\% & 1.1\% & 8.4\% & 8.6\% & 3.3\% & 1.1\% & 3.5\% & 3.8\% \\ \hline
\multicolumn{1}{c|}{\multirow{3}[2]{*}{III}} & 20 & 0 & \multicolumn{2}{c|}{\multirow{3}[1]{*}{Random}} & 11.0\% & 1.4\% & & 11.9\% & 3.5\% & 0.4\% & & 3.9\% \\
\multicolumn{1}{c|}{} & 10 & 10 & \multicolumn{2}{c|}{} & 9.3\% & 2.4\% & 9.1\% & 9.4\% & 5.4\% & 2.0\% & 4.2\% & 5.7\% \\
\multicolumn{1}{c|}{} & 0 & 20 & \multicolumn{2}{c|}{} & 13.9\% & 1.7\% & & 15.2\% & 4.4\% & 0.5\% & & 4.9\% \\ \cline{2-13}
\multicolumn{1}{c|}{AM:10} & \multicolumn{2}{c|}{\multirow{4}[1]{*}{Random}} & 0 & 20 & 14.3\% & 2.0\% & 16.1\% & & 5.1\% & 1.2\% & 5.7\% & \\
\multicolumn{1}{c|}{Noon:5} & \multicolumn{2}{c|}{} & 10 & 10 & 12.9\% & 1.7\% & 14.6\% & 16.3\% & 4.8\% & 1.2\% & 5.3\% & 6.9\% \\
\multicolumn{1}{c|}{PM:5} & \multicolumn{2}{c|}{} & 20 & 0 & 11.4\% & 1.7\% & 12.8\% & & 4.3\% & 1.2\% & 4.7\% & \\
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Random } & 13.0\% & 2.1\% & 14.8\% & 14.2\% & 4.7\% & 1.2\% & 5.2\% & 5.3\% \\ \hline
\multicolumn{1}{c|}{\multirow{3}[2]{*}{IV}} & 20 & 0 & \multicolumn{2}{c|}{\multirow{3}[1]{*}{Random}} & 10.2\% & -0.1\% & & 10.3\% & 3.3\% & -0.1\% & & 3.4\% \\
\multicolumn{1}{c|}{} & 10 & 10 & \multicolumn{2}{c|}{} & 13.2\% & 2.1\% & 11.9\% & 13.5\% & 5.6\% & 2.1\% & 4.1\% & 5.8\% \\
\multicolumn{1}{c|}{} & 0 & 20 & \multicolumn{2}{c|}{} & 12.7\% & -0.2\% & & 12.9\% & 4.1\% & -0.2\% & & 4.3\% \\ \cline{2-13}
\multicolumn{1}{c|}{AM:10} & \multicolumn{2}{c|}{\multirow{4}[1]{*}{Random }} & 0 & 20 & 14.0\% & 1.0\% & 17.2\% & & 5.3\% & 1.0\% & 6.2\% & \\
\multicolumn{1}{c|}{Noon:0} & \multicolumn{2}{c|}{} & 10 & 10 & 12.7\% & 1.0\% & 15.4\% & 14.7\% & 4.8\% & 1.0\% & 5.6\% & 6.5\% \\
\multicolumn{1}{c|}{PM:10} & \multicolumn{2}{c|}{} & 20 & 0 & 11.4\% & 1.0\% & 13.8\% & & 4.4\% & 1.0\% & 5.1\% & \\
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Random } & 12.5\% & 1.0\% & 15.2\% & 12.9\% & 4.8\% & 1.0\% & 5.6\% & 5.2\% \\ \hline
\multicolumn{1}{c|}{\multirow{3}[2]{*}{V}} & 20 & 0 & \multicolumn{2}{c|}{\multirow{3}[1]{*}{Random}} & 11.4\% & -0.1\% & & 12.9\% & 3.7\% & 0.0\% & & 4.2\% \\
\multicolumn{1}{c|}{} & 10 & 10 & \multicolumn{2}{c|}{} & 14.7\% & 1.6\% & 16.6\% & 16.2\% & 6.0\% & 1.6\% & 5.4\% & 6.6\% \\
\multicolumn{1}{c|}{} & 0 & 20 & \multicolumn{2}{c|}{} & 14.3\% & -0.1\% & & 16.5\% & 4.6\% & -0.1\% & & 5.3\% \\ \cline{2-13}
\multicolumn{1}{c|}{AM:5} & \multicolumn{2}{c|}{\multirow{4}[1]{*}{Random }} & 0 & 20 & 15.6\% & 0.8\% & 18.2\% & & 5.5\% & 0.8\% & 6.3\% & \\
\multicolumn{1}{c|}{Noon:10} & \multicolumn{2}{c|}{} & 10 & 10 & 13.8\% & 0.8\% & 16.2\% & 18.5\% & 5.0\% & 0.8\% & 5.7\% & 7.4\% \\
\multicolumn{1}{c|}{PM:5} & \multicolumn{2}{c|}{} & 20 & 0 & 12.5\% & 0.8\% & 14.5\% & & 4.6\% & 0.8\% & 5.2\% & \\
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Random} & 13.7\% & 0.8\% & 16.3\% & 15.7\% & 4.9\% & 0.8\% & 5.7\% & 5.8\% \\ \hline
\multicolumn{1}{c|}{\multirow{3}[2]{*}{VI}} & 20 & 0 & \multicolumn{2}{c|}{\multirow{3}[1]{*}{Random}} & 9.8\% & 1.0\% & & 9.9\% & 3.1\% & 0.3\% & & 3.3\% \\
\multicolumn{1}{c|}{} & 10 & 10 & \multicolumn{2}{c|}{} & 12.2\% & 2.3\% & 13.0\% & 12.2\% & 5.3\% & 2.3\% & 4.4\% & 5.5\% \\
\multicolumn{1}{c|}{} & 0 & 20 & \multicolumn{2}{c|}{} & 11.3\% & 0.1\% & & 11.6\% & 3.6\% & 0.1\% & & 3.9\% \\ \cline{2-13}
\multicolumn{1}{c|}{AM:7} & \multicolumn{2}{c|}{\multirow{4}[1]{*}{Random }} & 0 & 20 & 12.1\% & 0.9\% & 14.2\% & & 4.7\% & 1.2\% & 5.4\% & \\
\multicolumn{1}{c|}{Noon:6} & \multicolumn{2}{c|}{} & 10 & 10 & 11.6\% & 1.5\% & 13.5\% & 14.8\% & 4.5\% & 1.4\% & 5.1\% & 6.5\% \\
\multicolumn{1}{c|}{PM:7} & \multicolumn{2}{c|}{} & 20 & 0 & 9.7\% & 1.0\% & 11.4\% & & 4.0\% & 1.3\% & 4.5\% & \\
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{Random} & 11.3\% & 1.5\% & 13.4\% & 11.7\% & 4.4\% & 1.3\% & 5.1\% & 4.8\% \\
\bottomrule
\end{tabular}
\normalsize
\end{table}
\noindent \textbf{Analysis of the rotation policy
(0-level cohort scheduling policy)}
\begin{observation}
A simple rotation policy performs well only when all patients have the same time-of-day preferences.
\end{observation}
When patients have the same time-of-day preferences (i.e., time-of-day patient population profile I), the simple rotation policy results in a population appointment schedule with a level of aggregate control that is reasonably close to that of the population appointment schedule produced by the more sophisticated optimization-based approach (gaps of about one to two percent). With the rotation policy, all patients are seen with the same frequency, and desirable and undesirable time slots are assigned in alternation.
The optimization-based approach exploits the full flexibility of assigning slots, i.e., the number of slots to assign to a patient, the specific periods in which to assign a slot to a patient (the spread), and the type of slot to assign to a patient (desirable or undesirable), and considers all patients in the population simultaneously. As a result, a (slightly) better population appointment schedule is obtained even in this setting.
Figure \ref{f:I_MSR_VS} shows the 13-period population appointment schedule produced by the optimization-based approach when all patients have a strong preference for morning slots, patients have mixed reliability, and severity levels vary across patients.
The first ten rows show the patient appointment schedules for the reliable patients and the second ten rows for the less reliable patients. Within each reliability group, patients are shown in nondecreasing order of severity. Note that this means, in some sense, that the patients that require the most carefully constructed appointment schedules appear in the bottom rows and the patients for which there is more leeway in constructing their appointment schedules appear in the top rows.
\begin{figure}
\caption{\small Time slots assigned in the optimization-based solution: Profile I: homogeneous AM time slot; mixed reliability; varying severity: (A) desirable morning slot (1-4); (P) less desirable afternoon slot (5-8).}
\label{f:I_MSR_VS}
\end{figure}
An examination of the population appointment schedule reveals the logic ``applied by'' the optimization-based approach. The patients that require the most carefully constructed appointment schedules (i.e., severe, but unreliable patients) are given their preferred slots and more of them when their disease is more severe. No afternoon slots are allocated to these patients. The patients for which there is more leeway in constructing their appointment schedules (i.e., reliable and less severe patients) are given few slots, some not at their preferred time. The patients in between (i.e., reliable, but more severe patients) are given many, but undesirable slots. Minor variations to this logic occur due to the total number of slots available to be assigned.
As shown in Figure \ref{f:I_MSR_VS}, even in settings involving patients with a common time-of-day preference, the population appointment schedule produced by the optimization-based approach is more complex than the one produced by a simple rotation policy, weighing the relative impact of each dimension (time-of-day preference, severity, and reliability) when assigning slots. The rotation policy simply focuses on diversifying slots and providing equal access to all patients. For settings involving patients with a common time-of-day preference, such a simple strategy works well, even if patients vary in terms of severity and/or reliability.
\begin{observation}
When patient time-of-day preferences vary, accounting for these differences in the optimization-based approach can lead to improvements of up to 15\% over the simple rotation policy that ignores these differences and treats all patients the same.
\end{observation}
As time-of-day preferences start to vary, the difference in quality of the schedule produced by the simple rotation policy and the schedule produced by the optimization-based approach increases. Even when only 20\% of the patients have a differing time-of-day preference (Profile II), performance gaps between one and two percent become performance gaps between 6.5 and 8.5 percent when time-of-day preferences are strong. In these settings, the schedule produced by the optimization-based approach ensures that, when possible, patients are given their preferred time slots, and, when not possible, a similar logic to what we have seen for the common time-of-day preference setting is employed.
Figure \ref{f:II_MSR_VS} shows the population appointment schedule produced by the optimization-based approach when most patients have a strong preference for morning slots, patients have mixed reliability, and severity levels vary across patients. The noon time slots are further differentiated to account for the fact that some are also desirable for patients with a morning preference and some are also desirable for patients with an afternoon preference, i.e., a preferred joint noon-AM slot (N/A) or a preferred joint noon-PM slot (N/P).
\begin{figure}
\caption{\small Time slots assigned in the optimization-based solution: Profile II: Predominantly AM preference; mixed reliability; varying severity: (A) morning (1-3); (P) afternoon (6-8); (N/A) noon/morning (4); (N/P) noon/afternoon.}
\label{f:II_MSR_VS}
\end{figure}
An examination of the population appointment schedule shows that patients in the two smaller cohorts (with PM and noon time preferences) are assigned their preferred slots (all 13 noon/PM slots are assigned to the patients with a noon preference and 12 of the PM slots are assigned to the patients with an afternoon preference). The allocation of the remaining slots across the patients with a morning preference employs the logic that we have seen before to assign slots to patients with a common time-of-day preference. Only preferred slots are assigned to unreliable patients and more of them if their disease is more severe; if possible, few, but preferred, slots are assigned to reliable patients, but, if not possible, more, but a mix of desirable and undesirable, slots are assigned to reliable patients.
The difference in quality between the rotation policy and the optimization-based approach is largest when the patient population has balanced AM, Noon, PM preferences (i.e., time-of-day patient population profile VI), with performance gaps between 11.5 and 15.5 percent. In these settings, the schedule produced by the optimization-based approach allocates preferred slots to each group of patients. Of course, such an allocation is naturally imbalanced, because there are fewer noon slots. (The patients with a morning or afternoon preference have three slots per period whereas the patients with a noon preference only have two slots per period.) Consider, Figure \ref{f:VI_MSR_VS}, in which we show the population appointment schedule produced by the optimization-based approach when patients have balanced, but strong, time-of-day preferences, patients have mixed reliability, and severity levels vary across patients.
\begin{figure}
\caption{\small Time slots assigned in the optimization-based solution: Profile VI: Mixed time preference (7 6 7); mixed reliability; varying severity: (A) morning; (P) afternoon; (N) noon.}
\label{f:VI_MSR_VS}
\end{figure}
We see that the average number of visits over the planning horizon is 5.6 for patients with a morning or afternoon preference and only 4.3 for patients with a noon preference. However, in this specific instance, there is a larger fraction of reliable patients with a noon time preference, compared to the fraction of reliable patients among those with a morning or afternoon preference, and, thus, fewer slots are required to produce effective appointment schedules for the patients with a noon time preference. The optimization-based approach ``recognizes'' such instance-specific characteristics and exploits them, whereas the simple rotation policy does not and simply assigns either five or six appointments to patients (with an average of 5.2).
These results suggest that for heterogeneous patient populations, stratifying and scheduling by cohorts can be beneficial. In the following, we evaluate the ability of simple cohort scheduling policies to capture patient differences and produce high-quality schedules.
\noindent \textbf{Analysis of the 1-level cohort scheduling policies}
\begin{observation}
Stratifying patients by time-of-day preference yields significant improvement over the simple rotation policy when the patients in the population have clearly distinguishable time-of-day preferences. Stratifying along other distinguishing factors can lead to low-quality schedules.
\end{observation}
Using a time-based 1-level cohort scheduling policy, which partitions the set of patients into cohorts based on their time-of-day preferences and allocates an appropriate number of slots to each cohort, can significantly increase the quality of the population appointment schedule (compared to the quality of the population appointment schedule produced by the simple rotation policy) as it can avoid assigning undesirable time slots to patients. In Table \ref{t:cohort_perf}, we see that the maximum performance gap for the time-based 1-level cohort strategy is 2.4\% when patients have strong time-of-day preferences and 2.3\% when patients have weak time-of-day preferences. The average performance gap is only 1.1\% when patients have strong preferences and 0.9\% when patients have weak preferences.
Table \ref{t:TP_perf} shows the performance of the time-based 1-level cohort scheduling policy for the different time-of-day preference profiles. Columns 2-4 show the fraction of the total number of slots allocated to each cohort, columns 5-7 show the average number of slots assigned to a patient for each cohort, columns 8-10 show the fraction of preferred time slots assigned to each cohort, and column 11 shows the average performance gap over all instances with a given profile.
\begin{table}[htbp]
\centering
\caption{\small Performance of the time-based 1-level cohort scheduling policies for different time-of-day preference profiles. \label{t:TP_perf}}
\small
\begin{tabular}{r||rrr||rrr||rrr||r}
\toprule
&\multicolumn{3}{c||}{Slot allocation} & \multicolumn{3}{c||}{Slots per patient} & \multicolumn{3}{c||}{Preferred slots} & \multicolumn{1}{c}{Avg. gap} \\
\midrule
Profile & AM & Noon & PM & AM & Noon & PM & AM & Noon & PM & \\ \hline
II (16-2-2) & $\frac{3}{4}$ & $\frac{1}{8}$ & $\frac{1}{8}$ & 4.9 & 6.5 & 6.5 & $\frac{2}{3}$ & 1 & 1 & 0.9\% \\
III (10-5-5) & $\frac{1}{2}$ & $\frac{1}{4}$ & $\frac{1}{4}$ & 5.2 & 5.2 & 5.2 & $\frac{2}{3}$ & 1 & 1 & 1.5\% \\
IV (10-0-10) & $\frac{1}{2}$ & 0 & $\frac{1}{2}$ & 5.2 & - & 5.2 & 1 & - & 1 & 0.8\% \\
V (5-10-5) & $\frac{1}{4}$ & $\frac{1}{2}$ & $\frac{1}{4}$ & 5.2 & 5.2 & 5.2 & 1 & $\frac{1}{2}$ & 1 & 0.7\% \\
VI (7-6-7) & $\frac{3}{8}$ & $\frac{2}{8}$ & $\frac{3}{8}$ & 5.6 & 4.3 & 5.6 & 1 & 1 & 1 & 1.2\% \\
\bottomrule
\end{tabular}
\normalsize
\end{table}
We see that by allocating the chosen fraction of the total number of slots to each of the cohorts, the average number of slots per patient as well as the quality of slots assigned to patients (i.e., whether a preferred or non-preferred slot is assigned) in a cohort is close to ideal. (Note that because there are 13 periods, 8 slots per period, and 20 patients, the average number of slots per patient is 5.2.) As a consequence, the average performance gaps for the different time-of-day preference profiles are small. The population appointment schedules produced by the optimization-based approach are slightly better because they better accommodate differences in reliability and disease severity, and have greater flexibility in the number and spread of visits for a patient over the planning horizon based on the desirability of the slots assigned.
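The slot arithmetic quoted above can be checked directly; this small sketch (assuming the 13-period, 8-slots-per-period, 20-patient setting and the Profile VI allocation fractions from Table \ref{t:TP_perf}) reproduces the per-patient averages:

```python
from fractions import Fraction

PERIODS, SLOTS_PER_PERIOD, PATIENTS = 13, 8, 20
total = PERIODS * SLOTS_PER_PERIOD          # 104 slots over the horizon
overall_avg = total / PATIENTS              # 5.2 slots per patient

# Profile VI (7 AM / 6 Noon / 7 PM patients) with allocation fractions
# 3/8, 2/8, 3/8 of the total slots:
cohorts = [(7, Fraction(3, 8)), (6, Fraction(2, 8)), (7, Fraction(3, 8))]
per_patient = [round(float(frac) * total / n, 1) for n, frac in cohorts]
print(overall_avg, per_patient)             # 5.2 overall; 5.6, 4.3, 5.6 by cohort
```

The computed per-cohort averages (5.6, 4.3, 5.6) match the Profile VI row of Table \ref{t:TP_perf}.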
The 1-level cohort scheduling policies that partition patients based on either their disease severity or their reliability (and ignore their time-of-day preferences) result in low-quality population appointment schedules. The average gap for the severity-based 1-level cohort scheduling policy is 13.1\% for strong time preferences and 4.8\% for weak time preferences. The average gap for the reliability-based 1-level cohort scheduling policy is 12.5\% for strong time preferences and 4.9\% for weak time preferences. We can see that ignoring time-of-day preferences, even when those preferences are weak, leads to poor schedules. When patient time-of-day preferences vary, the performance gaps for the population appointment schedules obtained with the severity-based and reliability-based 1-level cohort strategies are often worse than those obtained with the simple rotation policy.
These insights, of course, have been influenced by the characteristics of the instances in our test set. However, we are confident that the success of a 1-level cohort strategy depends on two key conditions: (1) the characteristic used to define the cohorts must have a significant impact on the quality of a population appointment schedule, and (2) the cohorts as well as an appropriate allocation of slots to cohorts must be easy to identify.
In the settings considered in our computational study, these two conditions were satisfied for the time-of-day preferences, but not for the disease severity and reliability (in both cases the first condition was not met).
In summary, our computational experiments have demonstrated that accounting for the time-of-day preferences of patients can significantly improve population appointment schedule quality. Furthermore, in many situations, relatively simple but effective time-based cohort scheduling policies can yield population appointment schedules of similar quality. Creating cohorts based on other patient characteristics did not perform well for the settings considered.
\section{Final Remarks}
\label{s:final}
We have investigated optimization- and cohort-based methods for scheduling appointments for patients in a community-based chronic disease management program with the goal of minimizing the aggregate probability of patients being in an uncontrolled health state. The
optimization-based method explicitly accounts for disease progression since the time of the last appointment and the possibility that patients fail to show up at appointments. Our computational study (1) highlights the considerable impact that time-of-day preferences can have on population health outcomes, (2) demonstrates that simple strategies, i.e., cohort-based scheduling policies, can be effective in reducing no-show rates when the patient population can easily be divided into cohorts with similar time-of-day preferences, and (3) shows that optimization-based methods are preferred and provide better health outcomes when accurate and detailed individual patient information is available. The latter suggests that developing and putting in place processes to gather such data, e.g., by specifically focusing on it during intake consultations or by including and monitoring operational data within electronic medical records, should be considered, as the benefits to population health outcomes of using that information can be substantial.
Our computational study also reveals that the highest quality population appointment schedules carefully trade off the visit frequency and the desirability of visit times to control no-show rates. In most situations, intuitive rules of thumb, i.e., higher visit frequencies for patients with more severe disease levels, spreading patient visits equally throughout the planning period, and assigning more desirable visit times to patients, perform well and, when applied in a straightforward way (as in the cohort scheduling policies), can substantially improve population health outcomes.
\section*{Acknowledgment}
This research has been supported by grant CMII-0654398 from the
National Science Foundation. The authors thank Katrese Minor, Dr.
Paul Detjen, and the entire staff at the Mobile C.A.R.E. Foundation,
and Tricia Morphew of the Asthma and Allergy Foundation of America
for their invaluable input on childhood asthma and mobile health
care programs.
\section*{Appendix A. Cohort scheduling algorithms}
Given a number of cohorts $C$, the number of periods $K$, and, for each cohort $c$, for $c = 1,...,C$, the set of patients $P^c = \{p^c_1, p^c_2, ..., p^c_{n^c}\}$ and the set of time slots $T^c = \{t^c_1, t^c_2, ..., t^c_{k^c}\}$, where $\cup^C_{c=1} P^c = \{1,...,P\}$, $\cup^C_{c=1} T^c = \{1,...,T\}$, and $P^i \cap P^j = \emptyset$ and $T^i \cap T^j = \emptyset$ for all $i,j = 1,...,C, i \neq j$, Algorithm \ref{alg:cohort} creates the population appointment schedule.
\begin{algorithm}[H]
\For{$c \leftarrow 1$ \KwTo $C$}{
$i \leftarrow 1$ \;
\For{$k\leftarrow 1$ \KwTo $K$}{
\For{$j \leftarrow 1$ \KwTo $k^c$}{
Assign patient $p^c_i$ to time slot $t^c_j$ in period $k$ \;
\lIf{$i = n^c$}{$i \leftarrow 1$} \lElse{$i \leftarrow i+1$} \;
}
}
} \caption{Creating an appointment schedule with a cohort policy.}
\label{alg:cohort}
\end{algorithm}
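A direct transcription of Algorithm \ref{alg:cohort} in Python (identifier names are ours; the paper specifies only the pseudocode above) may help clarify the round-robin rotation through each cohort:

```python
def rotation_schedule(P, T, K):
    """Return a dict {(period, slot): patient}. P[c] is the list of patients
    in cohort c, T[c] the list of time slots allocated to cohort c, and K the
    number of periods. Each cohort's slots are filled in round-robin order."""
    schedule = {}
    for c in range(len(P)):
        i = 0                              # index of the next patient in cohort c
        for k in range(K):
            for slot in T[c]:
                schedule[(k, slot)] = P[c][i]
                i = (i + 1) % len(P[c])    # rotate through the cohort's patients
    return schedule

# Example: one cohort of 3 patients, 2 slots per period, 3 periods.
s = rotation_schedule([["p1", "p2", "p3"]], [["am", "pm"]], 3)
```

In the example, patient \texttt{p1} receives the \texttt{am} slot in period 0 but the \texttt{pm} slot in period 1, illustrating the diversification the rotation provides when the cohort size is not a multiple of the number of slots.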
The rotation policy naturally introduces diversification in the time
slots assigned to a patient \textit{unless} the number of patients
is a multiple of the number of time slots, because in that case, a
patient will be assigned the same time slot in each of his visits.
When the number of patients is a multiple of the number of time
slots, diversification is accomplished by introducing a
slot-reversing rotation policy as shown in Algorithm \ref{alg:sr}.
\begin{algorithm}[H]
\For{$c \leftarrow 1$ \KwTo $C$}{
$i \leftarrow 1$ \;
$direction \leftarrow up $\;
\For{$k\leftarrow 1$ \KwTo $K$}{
\For{$j \leftarrow 1$ \KwTo $k^c$}{
\eIf{direction = up}{
Assign patient $p^c_i$ to time slot $t^c_j$ in period $k$ \;
}{
Assign patient $p^c_i$ to time slot $t^c_{k^c-j+1}$ in period $k$ \;
}
\eIf{$i = n^c$}{
$i \leftarrow 1$ \;
\lIf{direction = up}{$direction \leftarrow down$} \lElse{$direction \leftarrow up$} \;
}
{
$i \leftarrow i+1$ \;
}
}
}
} \caption{Creating an appointment schedule with a cohort policy
with slot reversing.} \label{alg:sr}
\end{algorithm}
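A Python sketch of Algorithm \ref{alg:sr} follows (identifier names are ours); the downward pass simply visits the cohort's slots in reverse order, and the direction flips each time the patient index wraps around:

```python
def slot_reversing_schedule(P, T, K):
    """Like the plain rotation policy, but the slot-filling order is reversed
    every time the patient index wraps, so that patients see varying slots
    even when the cohort size is a multiple of the number of slots."""
    schedule = {}
    for c in range(len(P)):
        i, up = 0, True
        for k in range(K):
            for j in range(len(T[c])):
                # Visit slots left-to-right on the upward pass, reversed otherwise.
                slot = T[c][j] if up else T[c][len(T[c]) - 1 - j]
                schedule[(k, slot)] = P[c][i]
                if i == len(P[c]) - 1:
                    i = 0
                    up = not up            # reverse direction on wrap-around
                else:
                    i += 1
    return schedule

# Example where plain rotation would pin each patient to one slot:
# 2 patients, 2 slots, 2 periods.
s = slot_reversing_schedule([["p1", "p2"]], [["am", "pm"]], 2)
```

Here \texttt{p1} gets \texttt{am} in period 0 but \texttt{pm} in period 1, which the plain rotation policy would not achieve for this cohort size.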
\section*{Appendix B. Slot allocations for cohort scheduling algorithms}
\begin{table}[htp]
\caption{Slot allocation for time-of-day cohorts \label{coh:tod}}
\centering
\begin{tabular}{r r r| r r r}
\hline
\multicolumn{3}{c|}{Profile} & \multicolumn{3}{c}{Cohort slot allocation (\% total)} \\
\cline{4-6}
AM &Noon &PM &Cohort 1 (AM) & Cohort 2 (Noon) & Cohort 3 (PM) \\
\hline
20 & 0 & 0 & 1 & 0 & 0 \\
16 & 2 & 2 & $\frac{3}{4}$ & $\frac{1}{8}$ & $\frac{1}{8}$ \\
10 & 5 & 5 & $\frac{1}{2}$ & $\frac{1}{4}$ & $\frac{1}{4}$ \\
10 & 0 & 10 & $\frac{1}{2}$ & 0 & $\frac{1}{2}$ \\
5 & 10 & 5 & $\frac{1}{4}$ & $\frac{1}{2}$ & $\frac{1}{4}$ \\
7 & 6 & 7 & $\frac{3}{8}$ & $\frac{2}{8}$ & $\frac{3}{8}$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[htp]
\caption{Slot allocation for severity cohorts \label{coh:sev}}
\centering
\begin{tabular}{r r | r r}
\hline
\multicolumn{2}{c|}{Profile} & \multicolumn{2}{c}{Cohort slot allocation (\% total)} \\
\cline{3-4}
Severe &Mild &Cohort 1 (Severe)& Cohort 2 (Mild) \\
\hline
20 & 0 & 1 & 0 \\
10 & 10 & $\frac{5}{8}$ & $\frac{3}{8}$ \\
0 & 20 & 0 & 1 \\
10 ($<0.85$) &10 ($>0.85$) &$\frac{5}{8}$ & $\frac{3}{8}$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[htp]
\caption{Slot allocation for reliability cohorts \label{coh:rel}}
\centering
\begin{tabular}{r r | r r}
\hline
\multicolumn{2}{c|}{Profile} & \multicolumn{2}{c}{Cohort slot allocation (\% total)} \\
\cline{3-4}
Reliable &Unreliable & Cohort 1 (Reliable) & Cohort 2 (Unreliable) \\
\hline
20 & 0 & 1 & 0 \\
10 & 10 & $\frac{3}{8}$ & $\frac{5}{8}$ \\
0 & 20 & 0 & 1 \\
12 ($<0.9$) &8 ($>0.9$) & $\frac{1}{2}$ & $\frac{1}{2}$ \\
\hline
\end{tabular}
\end{table}
\section*{Appendix C. Slot assignments for additional profiles}
\begin{figure}
\caption{\small Time slots assigned in the optimization-based solution: Profile III: Mixed time preference (10 5 5); mixed reliability; varying severity: (A) morning; (P) afternoon; (N) noon.}
\label{f:III_MSR_VS}
\end{figure}
\begin{figure}
\caption{\small Time slots assigned in the optimization-based solution: Profile V: Mixed time preference (5 10 5); mixed reliability; varying severity: (A) morning; (P) afternoon; (N) noon.}
\label{f:V_MSR_VS}
\end{figure}
\end{document}
\begin{document}
\title{Entanglement dynamics in presence of diversity under decohering environments}
\author{Fernando Galve}
\author{Gian Luca Giorgi}
\author{Roberta Zambrini}
\affiliation{IFISC (UIB-CSIC),
Instituto de F\'isica Interdisciplinar y Sistemas Complejos, UIB Campus,
E-07122 Palma de Mallorca, Spain}
\date{\today}
\begin{abstract}
We study the evolution of entanglement of a pair of coupled, non-resonant harmonic oscillators in contact with an
environment. For both the cases of a common bath and of two separate baths for each of the oscillators, a full
master equation is provided without the rotating-wave approximation. This allows us to characterize the entanglement
dynamics as a function of the diversity between the oscillators' frequencies and their mutual coupling. The
correlation between the occupation numbers is also considered, to explore the degree of quantumness of the system.
The singular effect of the resonance condition (identical oscillators) and its relationship with the possibility of preserving asymptotic
entanglement are discussed. The importance of the bath's memory properties is investigated by comparing Markovian
and non-Markovian evolutions.
\end{abstract}
\pacs{03.65.Yz, 03.65.Ud}
\maketitle
\section{Introduction}
Coupled harmonic oscillators are the first approximation to a broad class of extended
systems, not only in different fields of physics but also in chemistry and biology. Within
the quantum formalism, they are the basis of the description of electromagnetic field
interactions in quantum optics, and they approximate lattice systems in different
traps in atomic physics \cite{loudon,haroche}. Moreover, in the last few years there has
been impressive progress towards the cooling and back-action-evasion measurement of
`macroscopic' (in terms of the number of atoms) harmonic oscillators, allowing the observation
of their quantum behavior. Two main classes of experimentally realized systems are
nanoelectromechanical structures (NEMS) \cite{Naik,Schwab} and different kinds of optomechanical
systems \cite{optomech}, where nano- and micromechanical devices, cavities, or suspended
mirrors are coupled to single electrons or to light. As an example, the observation of a NEMS
extremely close to its motional ground state, with an occupation factor of just 3.8, has recently
been reported \cite{Schwab}. These experiments would allow the observation of coherent quantum
superposition states and entanglement, and the study of decoherence processes in a controllable
way on massive solid-state objects.
Phenomena associated with the {\it coupling} of these quantum oscillators have been revisited
within the context of quantum information in many theoretical studies during the last
decade \cite{entinarrays,harm_chains,suddenstuff,galve,liu,paz-roncaglia,paz-roncaglia2}.
Entropy and entanglement in extended systems with many degrees of freedom (harmonic
chains or lattices) \cite{entinarrays} have been characterized in ground and thermal
states, exploring scaling laws and connections with phase transitions
\cite{harm_chains}. An important advantage is that these systems admit Gaussian state
solutions with a well-defined \cite{simon} and computable \cite{vidal} measure of
entanglement, the logarithmic negativity \cite{volume}. The question of the generation
of entanglement has also been addressed considering oscillators whose parameters are
modulated in time \cite{suddenstuff,galve}. In these studies losses were generally
neglected, while recently the effects of decoherence on a pair of entangled oscillators
have been considered in the presence of dissipation through baths of infinitely many oscillators
\cite{liu,paz-roncaglia,paz-roncaglia2}. Our aim in this paper is to analyze a rather
unexplored aspect of this problem, namely {\it the effect of diversity} on the
entanglement between coupled harmonic oscillators in different situations. Indeed,
instead of considering identical oscillators, we look at the effects of detuning between
their frequencies, $\omega_1$ and $\omega_2$.
The interest in diversity effects on entanglement is both theoretical and related to
experimental issues. We mention, for instance, the effect of a diversity of frequencies
between two photons entering a beam splitter, which leads to a completely different output
with respect to the case of indistinguishable photons \cite{Hong1987,Yamamoto}.
\footnote{Notice also that the robustness of the quantum interference of equal light modes
has recently been tested against another kind of diversity (in the states), considering
dissimilar sources \cite{Bennett2009}.}
Coupling between different harmonic modes has also been
extensively studied in quantum optics in the presence of nonlinear interactions and
parametric coupling, allowing, for instance, the generation of entanglement between photon
pairs and between intense light beams \cite{squeezingbook,braunstein} of different colors. In
that context, however, the frequency diversity of each pair of oscillators is
compensated by a third mode and is in general not relevant, while here we focus on
pairs of mechanical oscillators off resonance and with constant couplings. The main
expected consequence is an effective decoupling of the oscillators due to the fast
rotation of their interaction term, and we will show its effects on the robustness of
entanglement.
The identity of the oscillators is in general a very strong and peculiar
assumption, not always justified, which introduces into the system a symmetry with deep
consequences. It is therefore important to clarify the effects of relaxing this
symmetry by introducing some diversity. In an extension from two coupled oscillators to an
array, this would imply breaking the translational symmetry. Apart from the fundamental
interest, we point out that in experimental realizations of engineered arrays of massive
quantum oscillators some diversity between them may actually be unavoidable. Coupled
oscillators with different frequencies have also been suggested for quantum-limited
measurements \cite{Leaci}. Finally, in many experiments the coupled oscillators actually
model different physical entities (for instance, radiation and a moving mirror in
optomechanics), and a symmetric Hamiltonian would describe only a very special case
\cite{aspelmayer}.
Our analysis of entanglement evolution encompasses both diversity between the harmonic
oscillators and dissipation. Once the oscillators are prepared in some entangled state
(for instance through a sudden switch of their coupling), we look at its robustness as
diversity and coupling strength are increased. It is well known that an object all of whose
degrees of freedom are coupled to an environment will decohere into a thermal state with the same
temperature as the heat bath \cite{haroche}. The thermal state, unless temperatures are very
low, is separable and highly entropic, so that after thermalization all entanglement shared
between the coupled oscillators disappears \cite{anders}. Since the dissipating oscillators
reach a separable state in a finite time, and not asymptotically, the name `sudden death'
has been suggested \cite{suddendeath}. Still, under certain conditions on the way in which
the oscillators dissipate, entanglement can survive asymptotically \cite{asympt_ent_osc}.
The quantum-to-classical transition in these systems is not only of fundamental interest,
but is also important in view of applications of harmonic systems operating at the quantum
limit, including quantum information processing and measurements of displacement, forces, or
charges beyond the shot-noise limit \cite{applications}. These phenomena
have generally been studied for identical oscillators; in particular, previous works
provide a master equation description in the case of a fully symmetric Hamiltonian
\cite{liu,paz-roncaglia}. Off-resonance oscillators in the presence of a common bath have
recently been considered in Ref. \cite{paz-roncaglia2}, showing that in this case lower
temperatures are needed to maintain entanglement asymptotically. In this work we analyze
systematically the role of diversity in the entanglement dynamics and provide the full
master equations for $\omega_1\neq\omega_2$, (i) both for common and for separate baths for the
two oscillators, (ii) without the rotating wave approximation, generally assumed when
modeling light fields, and (iii) comparing the results with the non-Markovian case.
Recently there have been several works focusing on memory effects and non-Markovianity. In
the case of continuous variables, non-Markovian effects on entanglement evolution have
been discussed in Refs. \cite{maniscalco,liu}. In Ref. \cite{maniscalco}, the authors considered
two identical oscillators coupled to separate baths in the very-high-temperature regime
and analyzed how the matching between the frequency of the oscillators and the spectral
density of the bath affects both Markovian and non-Markovian dynamics. In Ref. \cite{liu},
the non-Markovian evolution was compared with the case where both the Markovian limit
and the rotating wave approximation in the system-bath coupling are taken. Since the
effects of these two approximations are not easily separable, it is difficult to single
out non-Markovian corrections from that analysis. Our study of entanglement dynamics
in the presence of diversity is also extended, for the sake of comparison, to the
non-Markovian case, looking at both common and separate baths and showing that the deviations
are actually negligible.
In Sect. \ref{model} we introduce the model of a pair of oscillators with different
frequencies, coupled through their positions both to each other and to baths of
oscillators. The master equations for one common bath and for two separate baths are presented,
with all the details provided in the Appendix. The temporal decay of the entanglement between the
oscillators and of the correlations between their occupation numbers is shown in Sect.
\ref{results}, analyzing the role of frequency diversity as the coupling
strength is varied. Beyond the entanglement dynamics, we also discuss its robustness
(asymptotic entanglement for a common bath) in the context of the symmetry of the system
in Sect. \ref{asympt}. Non-Markovian deviations from these results are shown in Sect.
\ref{non-Markovian}, and further discussion and conclusions are left for Sect. \ref{concl}.
\section{Model and master equations}
\label{model}
We consider two harmonic oscillators with the same mass and different frequencies,
coupled to a thermal bath. As discussed in Ref. \cite{zell}, depending on the
distance between the two oscillators, different models of the system-bath
interaction can be adopted. We will discuss the case of two distant objects, which
amounts to considering the coupling with two independent baths, and compare it with
the zero-distance scenario (common bath). By analyzing the role of
diversity in the quantum features of this system, we also generalize previous
works on identical oscillators dissipating into common and separate baths
\cite{liu,paz-roncaglia,paz-roncaglia2}.
\subsection{Separate baths}
The model Hamiltonian, with each oscillator coupled to an infinite number of
oscillators (separate baths), is $H^{sep}=H_S+H_B^{sep}+H_{SB}^{sep}$.
The system Hamiltonian
\begin{equation}
H_{S}= \frac{p_1^2}{2}+\frac{1}{2}\omega_{1}^2x_1^2+ \frac{p_2^2}{2}+
\frac{1}{2}\omega_{2}^2x_2^2 +\lambda x_1 x_2
\end{equation}
describes two oscillators with different frequencies $\omega_{1,2}$,
coupled through their positions,
\begin{equation}
H_{B}^{sep}=\sum_{k}\sum_{i=1}^2\left(\frac{P_k^{(i)2}}{2}+\frac{1}{2}
\Omega_{k}^{(i)2} X_{k}^{(i)2}\right)
\end{equation}
is the free Hamiltonian of two (identical) bosonic baths, and
\begin{equation}
H_{SB}^{sep}=\sum_{k}\lambda_{k}^{(1)} X_{k}^{(1)}x_1+\sum_{k}\lambda_{k}^{(2)}
X_{k}^{(2)}x_2\label{sb}
\end{equation}
encompasses the system-bath interaction.
The master equation for the reduced density matrix of the two oscillators, up to
second order in $H_{SB}^{sep}$ (weak coupling limit), is, setting $\hbar=1$,
\begin{eqnarray}
\frac{d \rho}{dt}&=&-i[H_S,{\rho}]\nonumber\\&-&
\frac{1}{2}\sum_{i,j=1}^2 \big\{i\epsilon_{ij}^2[x_i x_j,\rho]+D_{ij}[x_i, [x_j,\rho]]
\nonumber\\ &+&
i\Gamma_{ij}[x_i,\{ p_j,\rho\}]-F_{ij}[x_i ,[p_j,\rho]]\big\}.\label{me_sep}
\end{eqnarray}
In the Appendix we give an explicit derivation of the master equation, together with the
definition of the coefficients: $\rho$ is subject to energy renormalization ($\epsilon_{ij}$),
dissipation ($\Gamma_{ij}$), and diffusion ($D_{ij}$, $F_{ij}$).
The coefficients in Eq. (\ref{me_sep}) depend on time, but in the following we
will consider the Markovian limit (obtained by sending $t\rightarrow \infty$ in Eqs.
(\ref{coe})-(\ref{cog})). Non-Markovian corrections will be discussed in Sect.
\ref{non-Markovian}. We anticipate that we focus on the Markovian limit because
the corrections arising beyond this approximation turn out to be negligible in most
cases, both for equal and for different frequencies $\omega_1$ and $\omega_2$, and for common or
separate baths.
To obtain an explicit expression for $\epsilon_{ij},D_{ij},\Gamma_{ij},F_{ij}$,
we need to know the density of states of the baths,
defined for both of them as
\begin{equation}\label{eq:J}
J(\Omega)=\sum_{k}\frac{\lambda_{k}^{2}}{ \Omega_{k}}\delta(\Omega-\Omega_k).
\end{equation}
Here we will consider explicitly an Ohmic environment with a Lorentz-Drude
cut-off function, whose spectral density is
\begin{equation}\label{JOhm}
J(\Omega)=\frac{2 \gamma}{\pi}\Omega\frac{\Lambda^2}{\Lambda^2+\Omega^2}.
\end{equation}
The master equation (\ref{me_sep}) shows that the bare frequencies of the
oscillators are renormalized by the presence of $\epsilon_{ij}$, and
this renormalization turns out to depend on the frequency cut-off $\Lambda$. This
undesirable, unphysical effect can be removed by adding counter-terms to the initial
Hamiltonian \cite{caldeiraleggett} which exactly compensate the asymptotic
values of $\epsilon_{ij}$. This is accomplished by replacing
$\epsilon_{ij}(t)$ with $\epsilon_{ij}(t)-\epsilon_{ij}(\infty)$
in Eq. (\ref{me_sep}); in the Markovian case we then simply drop these terms from the
master equation. This renormalization procedure amounts to redefining the natural frequency
in terms of the observed frequency.
\subsection{Common bath}
We introduce now the case of common bath modeled by
\begin{equation}
H_{B}^{c}=\sum_{k}\left(\frac{P_k^{2}}{2}+\frac{1}{2}
\Omega_{k}^{2} X_{k}^{2}\right)
\end{equation}
and we consider the same spectral density (\ref{JOhm}). The interaction term reads
\begin{equation}
H_{SB}^c=\sum_{k}\lambda_{k} X_{k}(x_1+x_2),\label{sb_common}
\end{equation}
meaning that the system is coupled to the bath through the mode
$x_+=(x_1+x_2)/\sqrt{2},$ while
the mode $x_-=(x_1-x_2)/\sqrt{2}$ is decoupled. The master equation in this case
is
\begin{eqnarray}\label{me_comm}
\frac{d \rho}{dt}&=&-i[H_S,{\rho}]- \frac{i}{2} \sum_{i\neq j}(\bar{\epsilon}_{ii}^{2}-\bar{\epsilon}_{jj}^2)x_j \rho x_i\nonumber\\
&&-\frac{1}{\sqrt{2}}\sum_{i=1}^2\big\{ i \bar{\epsilon}_{ii}^2[x_ix_+, \rho]
+{\bar D_{ii}}[x_+,[x_i,\rho]]\nonumber\\
&&+i\bar\Gamma_{ii}[x_+,\{p_i,\rho\}]-\bar F_{ii}[x_+,[p_i,\rho]]\big\},
\end{eqnarray}
(see the Appendix for the definition of coefficients).
Coupled systems dissipating into a common bath have recently been the subject of interest
because of the possibility of preserving entanglement asymptotically at high temperatures,
this being a major distinctive feature with respect to the case of separate baths. In
Sect. \ref{asympt} we will study the entanglement evolution under Eq. (\ref{me_comm}),
revisiting the possibility of maintaining entanglement asymptotically in the presence of
diversity (frequency detuning). As a matter of fact, whereas in the case of equal
frequencies the decoupled mode $x_-$ is an eigenmode of the isolated system, this is no
longer true once $\omega_1 \neq \omega_2$. We will see how asymptotic entanglement
can be preserved in the presence of diversity through a proper choice of the
system-bath coupling constants.
Another difference in the case of a common bath is the lack of the symmetry
$\lambda\rightarrow-\lambda$. A closer look at the form of the total Hamiltonian in the
presence of separate baths allows us to single out its symmetry properties.
$H^{sep}$ is clearly modified by the transformation $\lambda\rightarrow-\lambda$. However,
once the canonical transformation $U$ is introduced,
such that $U x_2 U^\dag=-x_2$ and $U p_2 U^\dag=-p_2$, it is easy to show that the master
equation is invariant under the combined action of $U$ and the flip of $\lambda$. In fact,
while $H_{S}$ is left unmodified, the change in $H_{SB}^{sep}$ (which would correspond to
flipping $\lambda_{k}^{(2)}$ into $-\lambda_{k}^{(2)}$) has no effect on the evolution
of the reduced density matrix, since $\rho$ is determined only by contributions proportional to
$(\lambda_{k}^{(i)})^2$. A simple consequence of this symmetry can be observed by studying
the evolution of the two-mode squeezed state
\begin{eqnarray}\label{TMS}
| \Psi_{TMS}
\rangle=\sqrt{1-\mu}\sum_{n=0}^\infty \mu^{n/2}|n\rangle|n\rangle,
\end{eqnarray}
where $\mu=\tanh^2 r$ and $r$ is the squeezing amplitude. For this state, the canonical
transformation $U$ amounts to changing $r$ into $-r$. The same evolution must therefore be
obtained considering $\lambda$ in $H^{sep}$ and a given squeezing $r$ in the initial
condition, or $-\lambda$ and $-r$. All these considerations hold only in the case of
separate baths: once a common bath is taken, under the action of $U$ the mode coupled to
the bath is $x_-$ instead of $x_+$. Therefore, in the case of separate baths, results for
opposite $\lambda$ are equivalent to a change in the sign of the initial squeezing, while
this is not the case for a common bath.
\section{Entanglement and quantum correlations}
\label{results}
The calculation of bipartite entanglement for mixed states is in general an unsolved task.
Nevertheless, the criterion of positivity of the partially transposed density matrix
$\rho^{T_B}$ of Gaussian two-mode states is necessary and sufficient for their separability
\cite{peres,horodecki,simon}. The amount of entanglement can be measured through the
logarithmic negativity, defined as $E_{{\cal N}}=\log_{2}||\rho^{T_B}||$, with $||\rho^{T_B}||$
the trace norm of $\rho^{T_B}$ \cite{vidal}. For any
two-mode Gaussian state, the logarithmic negativity is $E_{{\cal
N}}=\max[0,-\log_{2}2\lambda_{-}]$, where $\lambda_{-}$ is the smallest symplectic
eigenvalue of the covariance matrix of $\rho^{T_B}$.
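The two expressions above are straightforward to evaluate numerically. The following minimal Python sketch (the helper names are ours; covariance-matrix convention with vacuum variances equal to $1/2$ and $\hbar=1$, consistent with $E_{{\cal N}}=\max[0,-\log_2 2\lambda_-]$) computes the logarithmic negativity of a two-mode Gaussian state from its covariance matrix, and reproduces the known value $E_{{\cal N}}=2r/\ln 2$ for the pure two-mode squeezed vacuum (\ref{TMS}):

```python
import numpy as np

def log_negativity(sigma):
    """E_N = max(0, -log2(2 nu_-)) for a two-mode Gaussian state with 4x4
    covariance matrix sigma in (x1, p1, x2, p2) ordering (vacuum variances 1/2).
    nu_- is the smallest symplectic eigenvalue of the partial transpose."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    # Partial transposition flips the sign of det C in the invariant Delta.
    delta = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    nu_minus = np.sqrt((delta - np.sqrt(delta**2 - 4.0 * np.linalg.det(sigma))) / 2.0)
    return max(0.0, -np.log2(2.0 * nu_minus))

def tms_covariance(r):
    """Covariance matrix of the two-mode squeezed vacuum with squeezing r."""
    c, s = np.cosh(2.0 * r) / 2.0, np.sinh(2.0 * r) / 2.0
    return np.array([[c, 0.0, s, 0.0],
                     [0.0, c, 0.0, -s],
                     [s, 0.0, c, 0.0],
                     [0.0, -s, 0.0, c]])

print(log_negativity(tms_covariance(2.0)))   # 2r/ln(2) ~ 5.77 for r = 2
print(log_negativity(np.eye(4) / 2.0))       # 0.0: the two-mode vacuum is separable
```
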
Because the system is connected to heat baths, the dissipating degrees of freedom will
ultimately reach the thermal state corresponding to the system Hamiltonian. The
properties of entanglement in the thermal state are well studied; it is known that
entanglement between the two oscillators is present only at very low temperatures.
We therefore focus our attention on the time evolution of the decoherence process itself,
namely on how fast an initially entangled state decays into a separable one, for separate
(Sect. \ref{decayseparate}) and common (Sect. \ref{asympt}) baths.
\subsection{Entanglement decay time}
\label{decayseparate}
We consider as a non-separable initial state the two-mode squeezed vacuum (\ref{TMS}),
with squeezing parameter $r$. This state has an amount of entanglement proportional to $r$
and consists of the orthogonal modes $x_\pm=(x_1\pm x_2)/\sqrt{2}$ being simultaneously squeezed and
stretched, respectively, by an amount $r$. Such a state, in the limit of infinite squeezing, gives the
maximally entangled state for continuous variables, long known from the famous paper by
Einstein, Podolsky and Rosen \cite{EPR}.
The entanglement evolution in the presence of separate baths is shown in
Fig. \ref{figplots}a for different parameter choices, showing the effects of increasing
the coupling strength and the detuning. Entanglement generally decays with an
oscillatory behavior, with an oscillation amplitude depending on the strength of $\lambda$
\cite{galve}. We study the last time at which entanglement vanishes, $t_F$.
In Figs. \ref{fig0b} and \ref{fig0} we scan how long it takes for such an
entangled state (with $r=2$) to become separable as the oscillators' detuning and
coupling strength are varied. The time $t_F$ is represented in Fig. \ref{fig0} for
different values of the ratio between the frequencies ($\omega_2/\omega_1$) and of the coupling
between the oscillators ($\lambda/\omega_1^2$).
For this choice of parameters, Fig. \ref{fig0b} shows that entanglement is present for
a few tens of periods. Because temperature, time, and couplings are scaled with the
frequency of the first oscillator, $\omega_1$, the results shown in Figs.
\ref{fig0b} and \ref{fig0} are not symmetric under exchange of the oscillators' roles. In other
words, even though it is physically equivalent to have the first oscillator at half the
frequency of the second or vice versa, due to this scaling the
figures are not symmetric with respect to the line $\omega_2/\omega_1= 1$.
Figure \ref{fig0b} shows two main features: (i) an increase of the survival time $t_F$
when $\omega_2/\omega_1$ is increased; (ii) an increase of $t_F$ with $|\lambda|$ in the
absence of diversity ($\omega_2=\omega_1$), while for $\omega_2\gtrsim2\omega_1$ this
dependence is weakened. This can also be appreciated in the perspective presented in Fig.
\ref{fig0}.
Both features can be explained in terms of the eigenmodes of the system, $Q_\pm$
(\ref{eigenmodes}), and their eigenfrequencies, $\Omega_\pm$ (\ref{eigenfr}). Since the
eigenmodes do not interact with each other, they can be regarded as independent
decoherence channels, each interacting only with near-resonant frequencies in the
bath. In addition, all the variances of the thermal states they approach depend
on the ratio $\Omega_\pm /2k_BT$.
Concerning feature (i), in the absence of
coupling ($\lambda=0$, implying $\Omega_{+,-}=\omega_{2,1}$) and for $\omega_2\to\infty$,
the `effective temperature' of the final thermal state reached by the eigenmode
$Q_+$ vanishes,
$T_{\text{eff.},+}=k_BT/\omega_2\to 0$.
The eigenmodes will therefore reach, respectively,
a thermal state ($T_{\text{eff.},-}=k_BT/\omega_1$) and the ground state ($T_{\text{eff.},+}=0$).
Since the presence of entanglement results from a competition between the reduced purities of
the individual oscillators and the total purity \cite{adesso}, improving
the final purity of the mode $Q_+$ gives its time evolution an overall higher purity,
thus making the entanglement larger (and its survival time longer). This effect is equivalent
to reducing the real temperatures of the baths while keeping everything else constant:
the decoherence rate is set by the coupling $\gamma$ and is hence not reduced, but the
final state reached is purer, and entanglement is therefore seen to survive longer.
Concerning feature (ii), for $\omega_2=\omega_1$ the eigenfrequencies of the system are
$\Omega_\pm=\sqrt{\omega_1^2\pm |\lambda|}$. By increasing $|\lambda|$ up to $\omega_1^2$,
the eigenfrequency $\Omega_-$ vanishes, and the number of bath modes of similar
frequency, given by the spectral density, also vanishes ($J(\Omega_-\to 0)\to 0$). That is,
the number of bath modes with which this degree of freedom interacts tends to zero, and
with it the amount of decoherence suffered. This effect, however, is partially compensated
by the fact that a vanishing $\Omega_-$ implies that this mode reaches a thermal
state of infinite effective temperature. Were it not so, the entanglement would survive
asymptotically, as in the case of a common bath, where for $\omega_2=\omega_1$ the mode
$\Omega_-$ does not decohere.
We thus see that all the major effects related to entanglement decay between coupled
oscillators in the presence of heat baths can be explained in terms of: 1) the eigenfrequencies
(where they lie within the spectral density of the heat baths), and 2) the effective
temperatures reached by the eigenmodes after thermalization.
A minor feature is that the increase of $t_F$ with $\lambda$ and
$\omega_2$ is not completely smooth, due to the entanglement oscillations; this gives rise to
the `arena'-shaped dependence that is clearest in Fig. \ref{fig0}. In general, the dependence of the
decoherence time on the oscillation frequency of the system is negligible for optics
experiments at ambient temperature ($T_{\text{eff.},\pm}=k_BT/\Omega_\pm\simeq 0$), but it becomes
extremely important at the lower frequencies of mechanical oscillators \cite{giorgi2009}.
Notice that we have restricted the coupling to the region
$|\lambda|<\omega_1\omega_2$ (the triangle in the lower part of Fig. \ref{fig0b}); otherwise
one of the eigenfrequencies ($\Omega_-$) would become imaginary, meaning that trajectories
would be unbounded, which, though not unphysical in principle, leads to leaks in any kind
of experiment.
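The eigenfrequencies $\Omega_\pm$ used throughout this discussion follow from diagonalizing the potential matrix of $H_S$. A minimal Python sketch (unit masses, as in the model; the helper name is ours) recovers both the resonant formula $\Omega_\pm=\sqrt{\omega_1^2\pm|\lambda|}$ and the stability boundary $|\lambda|<\omega_1\omega_2$ below which both eigenfrequencies are real:

```python
import numpy as np

def eigenfrequencies(w1, w2, lam):
    """Normal-mode frequencies (Omega_+, Omega_-) of two unit-mass oscillators
    with potential (1/2)w1^2 x1^2 + (1/2)w2^2 x2^2 + lam*x1*x2."""
    V = np.array([[w1**2, lam], [lam, w2**2]])
    ev = np.linalg.eigvalsh(V)       # ascending order: [Omega_-^2, Omega_+^2]
    return np.sqrt(ev[::-1])         # real only while |lam| < w1*w2

# Resonant case: Omega_± = sqrt(w1^2 ± |lam|), as quoted in the text.
print(eigenfrequencies(1.0, 1.0, 0.3))   # ~ [1.1402, 0.8367]
# Detuned case: Omega_±^2 = (w1^2 + w2^2)/2 ± sqrt((w1^2 - w2^2)^2/4 + lam^2).
print(eigenfrequencies(1.0, 1.5, 0.5))
```
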
\begin{figure}
\caption{(Color online). Time evolution of an entangled two-mode squeezed initial state (\ref{TMS}).}
\label{figplots}
\end{figure}
\begin{figure}
\caption{(Color online). Parameter scan of $t_F$ (see text) as a function of the oscillators coupling and frequency detuning, in the case of separate baths. We have restricted the scan within a limited detuning ($\omega_2\leq2\omega_1$) as a result of the absence of any important features outside this region. In addition, the condition $\lambda<\omega_1\omega_2$ ensures reality of the eigenfrequencies in the problem. The behavior of $t_F$ is basically monotonic in $\omega_2/\omega_1$, with a superimposed arena-like shape, coming from the oscillatory nature of entanglement. See text for more details. The dots A, B, C, D are given in figure \ref{figplots}.}
\label{fig0b}
\end{figure}
\begin{figure}
\caption{(Color online). Three dimensional representation of figure \ref{fig0b}.}
\label{fig0}
\end{figure}
\subsection{Common bath}
\label{asympt}
Let us now compare the previous results with the evolution of the oscillators dissipating
through a common bath, Eq. (\ref{me_comm}). This model can be considered one limiting case,
while separate baths model the opposite one, in which the baths are completely
uncorrelated. The presence of asymptotic entanglement in this situation has been studied
in Refs. \cite{asympt_ent_osc,paz-roncaglia,liu}, showing that entanglement can survive for
sufficiently low temperatures (or, conversely, for high enough initial squeezings). For the
case of different frequencies, Paz and Roncaglia noticed in Ref. \cite{paz-roncaglia2}
that this asymptotic behavior disappears as the detuning is increased, meaning that
it depends strongly on the frequency matching between the oscillators and on the fact that
each oscillator's bath can be regarded as `perfectly correlated' with the other one. In
physical realizations these two conditions will hardly be met, and it is interesting to
know the effect of deviations from them. We have therefore studied the effects of frequency
diversity in the case of a common bath, looking again at the robustness of entanglement
in terms of the decay time $t_F$ (Fig. \ref{figplots}b). In Fig. \ref{fig1} we show results
equivalent to those represented in Fig. \ref{fig0}, but for a common bath. There we see that in
the resonant case entanglement never vanishes, so the survival time is infinite: in Fig.
\ref{fig1} diverging times $t_F$ are recognized along the line $\omega_1=\omega_2$.
When the oscillators' frequencies begin to differ, the survival time diminishes rapidly to
values similar to those of the uncorrelated-baths case. This phenomenon can be understood from
the master equation (\ref{me_comm}). Since the coupling to the bath is through
$x_1+x_2$, the mode $x_-$ remains decoupled at all times, thus keeping its coherence
and contributing positively to a nonzero entanglement. If the frequencies are
different, that mode becomes coupled to $x_+$, which is itself
coupled to the bath, and its decoherence is transferred to $x_-$. In this way both
modes decohere and end up in a thermal state, with no entanglement left; hence the
survival time is finite. It is worth stressing that even if the frequency difference is
infinitesimal, decoherence will eventually show up, although in a time inversely proportional
to that infinitesimal detuning. Increasing the frequency difference just increases the speed
at which decoherence is transferred from the mode $x_+$ to the mode $x_-$. This argument
shows how artificial the `equal frequencies' assumption is.
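The transfer mechanism just described can be made explicit by rewriting the potential of $H_S$ in the $x_\pm=(x_1\pm x_2)/\sqrt{2}$ basis. In the following minimal sketch (the helper name is ours), the off-diagonal element of the rotated potential matrix, $(\omega_1^2-\omega_2^2)/2$, is exactly the term that couples the bath-free mode $x_-$ to the lossy mode $x_+$; it vanishes only on resonance:

```python
import numpy as np

def pm_mode_potential(w1, w2, lam):
    """Potential matrix of H_S in the (x_+, x_-) basis, x_± = (x1 ± x2)/sqrt(2).
    The off-diagonal entry couples the lossy mode x_+ to the bath-free mode x_-."""
    V = np.array([[w1**2, lam], [lam, w2**2]])
    R = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # rotation to (x_+, x_-)
    return R @ V @ R.T

print(pm_mode_potential(1.0, 1.0, 0.3)[0, 1])   # 0.0 on resonance: x_- stays decoupled
print(pm_mode_potential(1.0, 1.2, 0.3)[0, 1])   # (w1^2 - w2^2)/2 ~ -0.22 off resonance
```
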
\begin{figure}
\caption{(Color online). Parameter scan of $t_F$ (see text) as a function of the oscillators' coupling and frequency detuning, in the case of a common bath. As seen here, $t_F$ is huge whenever the detuning is small, and of the order of tens of periods for high detuning. As expected, at resonance one of the eigenmodes decouples from the bath, leading to an infinite $t_F$. We have truncated the plot at a height of $\omega_1 t_F=210$; otherwise this `mountain ridge' would be infinitely high (but only strictly infinite when $\omega_1=\omega_2$).}
\label{fig1b}
\end{figure}
\begin{figure}
\caption{(Color online). 3D representation of Figure \ref{fig1b}.}
\label{fig1}
\end{figure}
We point out that asymptotic entanglement can also be found in the
presence of frequency diversity, and is not related to the symmetry present for
$\omega_1=\omega_2$. As a matter of fact, if the couplings to the common bath are
different for each oscillator, i.e. $\gamma_1\neq\gamma_2$, we can restore
asymptotic entanglement even off resonance, for $\omega_1\neq\omega_2$.
Noting the relation between $x_{1,2}$ and
the eigenmodes of the system $Q_\pm$, we see that the mode $Q_+$ can be decoupled
from the bath when the angle $\theta$ fulfills
\begin{equation}\label{cond_theta}
\cos{\theta}=\frac{\gamma_1}{\sqrt{\gamma_1^2+\gamma_2^2}}
\end{equation}
with $\theta=\theta(\omega_1,\omega_2,\lambda)$ given in the Appendix.
Thus half of the system is kept in a pure state, and we can show that entanglement
survives asymptotically in exactly the same fashion as it did for equal frequencies
above. In other words, with different couplings to the bath together
with different frequencies, the system can retain asymptotic entanglement if Eq.
(\ref{cond_theta}) is fulfilled. In any case, this phenomenon is as singular as the
`equal frequencies' one, and requires fine-tuned parameters (tuning the bath couplings or
the frequency difference) in order to be observable; it should therefore be regarded as
exceptional too. Our argument clarifies that asymptotic entanglement
is {\it not} a consequence of a symmetry of the system, but {\it comes from a finely
tuned decoherence-free degree of freedom of the system}.
\subsection{Twin oscillators}
We have considered entanglement as an indicator of the quantumness of our system, measurable for the
family of (Gaussian) states considered here. The discrimination between the predictions of classical
and quantum theories for coupled harmonic oscillators has also been studied at length in optics. In that
context, many theoretical predictions have been experimentally confirmed, focusing on the violation
of different classical inequalities or on the positivity of variances
\cite{ineq}. An example is the variance of the difference of the occupation numbers:
\begin{equation}
d=\langle:(n_1-n_2)^2:\rangle\label{vard}
\end{equation}
where, as usual, $n_i$ is the occupation number operator of each oscillator. $d$ has been considered
for the two-mode squeezed states generated by parametric oscillators in optics, to characterize twin beams
\cite{loudon}.
The quantum character of the correlations in the occupation numbers is signaled by the negativity of the
variance $d$, which can only follow from the negativity of the corresponding quasi-probability, in this
case the Glauber-Sudarshan representation. In analogy with the optical case, we consider here
{\it twin} oscillators, looking at the temporal dynamics of $d$.
The corresponding fourth-order moments can be obtained from the covariance
matrix, because the states we are dealing with are Gaussian. In Fig. \ref{fig2} we plot two examples
comparing the evolution of this variance and of the entanglement for the common- and separate-bath cases. We keep
only the negative part of this correlation, which identifies quantum behavior, and invert it
(plotting $\max(0,-d)$) for ease of comparison with the entanglement. For the initial entangled
state (\ref{TMS}),
$d=-2\mu/(1-\mu)$, which is always negative. In other words, for squeezed states the occupation number of one
oscillator determines that of the other.
We observe that, starting from this value, $d$ decays with large oscillations
and that the sudden deaths of
entanglement and of this correlation coincide up to a fraction of a period. Nevertheless, the entanglement
evolution is smoother, and the twin oscillators temporarily lose their quantum correlations even while
they remain entangled.
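The value $d=-2\mu/(1-\mu)$ quoted above is easy to recover numerically from the joint photon statistics of the two-mode squeezed vacuum, for which $n_1=n_2=n$ with probability $(1-\mu)\mu^n$ and $\mu=\tanh^2 r$. The following Python sketch is only an illustration; the squeezing value and the truncation of the sum are arbitrary choices, not taken from the text.

```python
import math

def d_tms(r, nmax=200):
    """<:(n1 - n2)^2:> for a two-mode squeezed vacuum with squeezing r.

    Joint photon statistics: p(n1, n2) = (1 - mu) mu^n delta_{n1,n2},
    mu = tanh(r)^2.  Normal ordering gives
    :(n1 - n2)^2: = n1(n1 - 1) + n2(n2 - 1) - 2 n1 n2 = -2n  when n1 = n2 = n.
    """
    mu = math.tanh(r) ** 2
    return sum((1 - mu) * mu**n * (-2 * n) for n in range(nmax))

r = 1.0
mu = math.tanh(r) ** 2
print(d_tms(r), -2 * mu / (1 - mu))   # numerical sum vs. closed form
```

The sum reproduces the closed form because the perfect number correlation makes every term contribute $-2n$ weighted by the thermal-like distribution $(1-\mu)\mu^n$.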
\begin{figure}
\caption{(Color online). Comparison between the time evolutions of the entanglement (black) and the correlation measure
$\max(0,-d)$.}
\label{fig2}
\end{figure}
\section{Non-Markovian evolution}
\label{non-Markovian}
Up to now we have considered the Markovian master equation, in the sense that the
time-dependent coefficients related to the heat bath have been replaced by their
asymptotic values, obtained by integrating up to infinite time. This implies a
complete absence of memory in the bath, which then retains no
instantaneous information on the dynamics of the two oscillators. This
simplification can, however, be dropped, leaving us with a complete description {\it
within the weak coupling approximation}. In this
section we analyze how these non-Markovian corrections affect the entanglement evolution
discussed so far.
In Ref. \cite{maniscalco}, a comparison has been made considering temperatures
two orders of magnitude greater than the frequency cut-off. The authors found that,
when the frequency of the oscillators falls inside the bath spectral density, entanglement
persists for a longer time than in a Markovian channel. When there is no resonance
between reservoir and oscillators, non-Markovian correlations accelerate
decoherence and generate entanglement oscillations.
As is known, when all the relevant frequencies
are much lower than the cut-off, the density of states of Eq.
(\ref{JOhm}) is expected to generate almost memoryless friction \cite{weiss}. Then,
taking $ \omega_1\ll\Lambda$, we compared Markovian and non-Markovian entanglement dynamics
for different choices of the parameters of the system and the bath ($\omega_2/\omega_1$,
$\lambda/\omega_1^2$, $\gamma/\omega_1$, $k_BT/\omega_1$, $r$). The following considerations
are valid for separate baths as well as for a common environment. In the case of very small coupling
($\gamma\sim 10^{-3}\omega_1$), non-Markovian corrections are practically unobservable,
independently of the value of the system parameters, up to substantial temperatures. To observe
appreciable deviations, we need to reach temperatures of the order of $10\omega_1$. This
result is understood considering that for small values of the system-bath coupling the role
played by the bath itself is highly reduced. Since non-Markovian corrections to the values of
the coefficients are relevant only during the first stage of the evolution, it can also be
predicted that, if the initial state is robust enough (as, for instance, in the case of
squeezing $r=2$ discussed in the previous sections), non-Markovianity will have marginal
effects. It is in fact true that, in the case of $\gamma \sim 10^{-3}\omega_1$ and $T\sim 10
\omega_1$, a correction to the entanglement decay time can only be observed assuming low
initial squeezing ($r\lesssim0.1$). When $\gamma$ starts to increase, it is possible to deal with
cases where some observable corrections due to non-Markovianity emerge also in the
low-temperature regime. In particular, the case of equal frequencies turns out to be the more
sensitive. To give an example, in Fig. \ref{nmars} we compared the two evolutions for two
different values of $r$. Starting from $r=2$, non-Markovian corrections are almost
negligible, changing the value of $t_F$ found within the Markovian approximation by
only a few percent.
In order to observe a noticeable difference, we must start with a factorized
state ($r=0$). As can be seen in Fig. \ref{nmars}, in this case, while memoryless
baths are able to support only a very small amount of entanglement, the initial kick given by
the non-Markovian coefficients enhances entanglement generation: its maximum amount is amplified
by about one order of magnitude, and its death time is also increased. It is, however, worth
noting that the absolute value reached remains, in any case, very low.
\begin{figure}
\caption{(Color online). Comparison between Markovian (orange) and non-Markovian (black) entanglement dynamics.
The system parameters are $\omega_1=\omega_2$, $\lambda=0.2\omega_1^2$,
$\gamma=0.01\times 2/\pi\omega_1$, $\Lambda=20\omega_1$, and $k_BT=10\omega_1$. The initial squeezings are a) $r=2$ and b) $r=0$.}
\label{nmars}
\end{figure}
\section{Discussion and conclusion}
\label{concl}
In recent years, a series of papers has investigated the entanglement dynamics of coupled
harmonic oscillators in contact with thermal environments. Nevertheless, a comprehensive
analysis of the role played by the diversity of the two oscillators was missing.
We started our discussion with the case of separate baths, which seems to be more
physically meaningful. As one should expect, after a transient, unless the temperature is
very low, the initial two-mode squeezed state $| \Psi_{TMS}\rangle$ becomes separable.
Studying the decay time, we observed a series of interesting features that we have
qualitatively explained. First of all, by increasing $\omega_2/\omega_1$ and by keeping
unchanged all other parameters, entanglement survival time increases as well. In the limit
of $\omega_2/\omega_1\gg1$, $\Omega_+$ and $\Omega_-$ tend, respectively, to $\omega_2$
and $\omega_1$. Since the asymptotic thermal state the system reaches itself depends on
the frequencies, the purity of the final state is enhanced, together with the coherence
time necessary to reach it. On the other hand, near $\omega_2=\omega_1$, the entanglement
survival time can also be raised by increasing $|\lambda|$. When $|\lambda|\rightarrow
\omega_1^2$, $\Omega_-$ approaches zero. Here, we argue that the behavior we observe results
from two competing effects: on one side, the thermal equilibrium state is less pure
because of the presence of a vanishing frequency; on the other side, the number of
decohering channels goes to zero, since $\Omega_-$ falls in a poorly populated region of
the spectral density, which increases the time necessary to reach thermal
equilibrium.
If the two oscillators share a common bath, the observation of asymptotic entanglement
at relevant temperatures becomes possible. One of the two eigenmodes of the system
can in fact be decoupled from the environment. As a consequence of this decoupling,
the asymptotic density matrix is the direct sum of a pure-state part and a thermal
part, the latter corresponding to the mode interacting with the bath. When the two
oscillators have identical frequencies and bath couplings, the
mode $x_-$ is decoupled independently of the value of $\lambda$, giving rise to
the plateau we observe in Figs. \ref{fig1b} and \ref{fig1}. It is important to stress that
this regime can be achieved not only when the oscillators are resonant, but also in
the presence of frequency detuning, through proper engineering of the system-bath
interaction. Conversely, if the bath couplings are different
($\gamma_1\neq\gamma_2$), frequencies $\omega_1$ and $\omega_2$ can be found such that this
special regime is still achieved.
We also studied the dynamical evolution of the degree of quantumness of the system by
means of the quantity $d$ introduced in Eq. (\ref{vard}). The negativity of this
quantity is direct evidence of the negativity of the Glauber-Sudarshan quasi-probability
distribution of the state. The agreement between the behavior of $d$ and that of the entanglement
is worth investigating, since no tight relation between them is known.
As for the Markovian versus non-Markovian debate, we compared the two behaviors over a wide
range of parameters. While in Ref. \cite{maniscalco} the authors investigated what changes
when the frequency of the oscillators falls outside the spectral distribution of the
bath in the high-temperature regime, we carried out an extensive analysis for
$\omega_1,\omega_2\ll\Lambda$. Our results indicate that non-Markovian corrections can be
observed, and are important, only for a reduced subset of initial conditions. To see
them, the coupling between the oscillators and the bath must be sizable, $\omega_1$ should be
close to $\omega_2$, and the initial state should be (almost) disentangled. Since the
master equation has been obtained in the weak-coupling limit, the first of these
assumptions seems at least questionable. If we relax even one of them, the Markovian and
non-Markovian evolutions are almost identical.
\acknowledgments{
We acknowledge funding from Juan de la Cierva program and from CoQuSys project.}
\begin{appendix}
\section{Derivation of master equation}
A system-bath model is usually described through the Hamiltonian $H=H_S + H_B + V$, where $H_S$ refers to the system and $H_B$ to the bath, and where $V$ is the term containing the interaction. Generally, the coupling term can be written as $V=\sum_k S_k\otimes B_k$, where $ S_k$ denote system operators and $ B_k$ bath operators.
The master equation for the reduced density matrix to the second order in the
coupling strength (for a general discussion see for instance \cite{gardiner}) is given by
\begin{eqnarray}
\frac{d \rho}{dt}&=&-i[H_S,{\rho}]\nonumber
\\&-&\int_0^td\tau\sum_{l,m}\big\{ C_{l,m}(\tau)[S_l \tilde{S}_m(-\tau)\rho-\tilde{S}_m(-\tau)\rho S_l] \nonumber
\\&+& C_{m,l}(-\tau)[\rho\tilde{S}_m(-\tau)S_l -S_l\rho\tilde{S}_m(-\tau)]\big\}.\label{a1}
\end{eqnarray}
Here, $C_{l,m}(\tau)$ are the bath's correlation functions, defined by
\begin{equation}
C_{l,m}(\tau)={\rm Tr}_B [\tilde{B}_l(\tau)\tilde{B}_m(0)R_0],\label{a2}
\end{equation}
where $R_0= e^{-\beta H_B}/{\rm Tr}\, e^{-\beta H_B}$ is the equilibrium density matrix of the bath. In Eqs. (\ref{a1}) and (\ref{a2})
the tilde indicates the interaction picture taken with respect to $H_0=H_S+H_B$. For instance, $\tilde{B}_l(\tau)=e^{i H_0 \tau} B_le^{-i H_0 \tau}$.
In the following, the correlation functions will appear combined as $C_{l,m}^A(\tau)= C_{l,m}(\tau)+ C_{m,l}(-\tau)$ and $C_{l,m}^C(\tau)= C_{l,m}(\tau)- C_{m,l}(-\tau)$.
\subsection{Separate baths}
In our model, whose Hamiltonian is $H^{sep}$, we can identify $B_1=\sum_{k}\lambda_{k}^{(1)}X_{k}^{(1)}$ and $B_2=\sum_{k}\lambda_{k}^{(2)}X_{k}^{(2)}$ and, correspondingly, $S_1=x_1,S_2=x_2$ . Their expressions in the interaction picture are
\begin{equation}
\tilde{B}_i=\sum_{k}\lambda_{k}^{(i)}\left[ X_{k}^{(i)}\cos \Omega_{k}^{(i)}t+P_{k}^{(i)}\dfrac{ \sin \Omega_{k}^{(i)}t}{m \Omega_{k}^{(i)}}\right]
\end{equation}
and $\tilde{x}_i(\tau)=\alpha_{i1}x_1+\alpha_{i2}x_2+\beta_{i1}p_1+\beta_{i2}p_2$, with
\begin{eqnarray}
\alpha_{11}&=&\cos^2\theta \cos\Omega_- \tau+\sin^2\theta \cos\Omega_+ \tau,\label{int1}\\
\alpha_{12}&=&\frac{\sin2\theta}{2}(\cos\Omega_+ \tau-\cos\Omega_- \tau),\label{int2}\\
\beta_{11}&=&\frac{\cos^2\theta }{\Omega_-} \sin\Omega_- \tau+\frac{\sin^2\theta }{\Omega_+} \sin\Omega_+ \tau,\label{int3}\\
\beta_{12}&=& \frac{\sin2\theta}{2}( \frac{\sin\Omega_+ \tau}{\Omega_+} -\frac{ \sin\Omega_- \tau}{\Omega_-} )\label{int4},\end{eqnarray}
and $\alpha_{22}(\theta)=\alpha_{11}(\pi/2-\theta)$, $\beta_{22}(\theta)=\beta_{11}(\pi/2-\theta)$, $\alpha_{21}=\alpha_{12}$, and $ \beta_{21}=\beta_{12}$.
The coefficients appearing in Eqs. (\ref{int1}-\ref{int4}) are defined as
\begin{equation}
\theta=\frac{1}{2}\arctan \left( \frac{2\lambda}{\omega_2^2-\omega_1^2}\right)
\end{equation}
and
\begin{equation}\label{eigenfr}
\Omega_\pm=\sqrt{\frac{\omega_1^2+\omega_2^2}{2}\pm\frac{\sqrt{4\lambda^{2}+
(\omega_2^2-\omega_1^2)^2}}{2}}.
\end{equation}
The normal modes for the position are
\begin{eqnarray}\label{eigenmodes}
Q_{+}=\cos\theta x_2+\sin\theta x_1,\\ Q_{-}=\cos\theta x_1-\sin\theta x_2,
\end{eqnarray}
while for the momentum
\begin{eqnarray}
P_{+}=\cos\theta p_2+\sin\theta p_1,\\ P_{-}=\cos\theta p_1-\sin\theta p_2.
\end{eqnarray}
Notice that, for $\omega_{1}=\omega_{2}$, $\theta=\pi/4$. This implies $ \alpha_{11}= \alpha_{22}$ and $ \beta_{11}= \beta_{22}$.
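The diagonalization above is easy to verify numerically: $\Omega_\pm$ of Eq. (\ref{eigenfr}) must coincide with the square roots of the eigenvalues of the potential matrix of the coupled system. The Python sketch below uses illustrative parameter values (unit masses), not those of the figures.

```python
import numpy as np

# Illustrative parameters (unit masses, arbitrary units); not the
# values used in the figures of the text.
w1, w2, lam = 1.0, 1.7, 0.4

theta = 0.5 * np.arctan(2 * lam / (w2**2 - w1**2))
disc = np.sqrt(4 * lam**2 + (w2**2 - w1**2) ** 2)
Wp = np.sqrt((w1**2 + w2**2) / 2 + disc / 2)   # Omega_+
Wm = np.sqrt((w1**2 + w2**2) / 2 - disc / 2)   # Omega_-

# Omega_+- must be the square roots of the eigenvalues of the
# potential matrix [[w1^2, lam], [lam, w2^2]].
V = np.array([[w1**2, lam], [lam, w2**2]])
freqs = np.sqrt(np.linalg.eigvalsh(V))          # sorted ascending
print(theta, Wm, Wp, freqs)
```

For $\omega_1=\omega_2$ the same script gives $\theta\to\pi/4$, consistent with the remark above.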
Since the two baths are identical and uncorrelated, we have $C_{1,1}(\tau)=C_{2,2}(\tau)=C(\tau)$ and $C_{1,2}(\tau)=C_{2,1}(\tau)=0$. From the knowledge of the density of states $J(\Omega)$, the explicit expressions of $ C^C(\tau)$ and $ C^A(\tau)$ can be obtained:
\begin{eqnarray}
C^C(\tau)&=&-i\int_0^\infty d\Omega J(\Omega) \sin \Omega \tau, \\ C^A(\tau)&=&\int_0^\infty d\Omega J(\Omega) \cos \Omega \tau \coth\frac{\Omega}{2 k_B T}.
\end{eqnarray}
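These kernels are straightforward to evaluate numerically once a spectral density is fixed. The sketch below assumes, purely for illustration, an Ohmic density with exponential cutoff, $J(\Omega)=(2\gamma/\pi)\,\Omega\,e^{-\Omega/\Lambda}$, as a stand-in for Eq. (\ref{JOhm}); all parameter values are arbitrary choices, not those of the text.

```python
import numpy as np

# Assumed Ohmic spectral density with exponential cutoff (a stand-in
# for Eq. (\ref{JOhm}) of the text); parameter values are illustrative.
gamma, Lam, kT = 0.01, 20.0, 10.0

W = np.linspace(1e-6, 10 * Lam, 200001)   # frequency grid
dW = W[1] - W[0]
J = (2 * gamma / np.pi) * W * np.exp(-W / Lam)

def C_comm(tau):
    """C^C(tau) = -i * int J(W) sin(W tau) dW  (commutator kernel)."""
    return -1j * np.sum(J * np.sin(W * tau)) * dW

def C_anti(tau):
    """C^A(tau) = int J(W) cos(W tau) coth(W / 2 kT) dW  (anticommutator kernel)."""
    return np.sum(J * np.cos(W * tau) / np.tanh(W / (2 * kT))) * dW

print(C_comm(0.5), C_anti(0.5))
```

Note that $C^C$ is purely imaginary and temperature independent, while $C^A$ carries all the temperature dependence through the $\coth$ factor, as in the formulas above.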
The master equation reads
\begin{eqnarray}
\frac{d \rho}{dt}&=&-i[H_S,{\rho}]-\int_0^t d\tau\sum_{i=1,2} \big\{[x_i,\{x_i(-\tau),\rho\}]\frac{C^C(\tau)}{2}\nonumber\\&+&[x_i,[x_i(-\tau),\rho]]\frac{C^A(\tau)}{2}\big\}.
\end{eqnarray}
By using expressions (\ref{int1}-\ref{int4}), it can be written as in Eq. (\ref{me_sep}),
with
\begin{eqnarray}
\epsilon_{ij}^2&=&-i\int_0^t d\tau \alpha_{ij}C^C(\tau),\label{coe}\\
D_{ij}&=&\int_0^t d\tau \alpha_{ij}C^A(\tau),\label{cod}\\
F_{ij}&=&\int_0^t d\tau \beta_{ij}C^A(\tau),\label{cof}\\
\Gamma_{ij}&=&i\int_0^t d\tau \beta_{ij}C^C(\tau).\label{cog}
\end{eqnarray}
The considerations about symmetry given in Sec. \ref{model} can be understood in terms of the explicit form of the master equation. The substitution $\lambda\rightarrow-\lambda$ is equivalent to sending $\theta$ to $-\theta$; then $ \alpha_{12}$ and $\beta_{12}$ also change sign. But, since these coefficients multiply one operator (position or momentum) of the first oscillator and one operator of the second, the canonical transformation $U$ compensates for the change of sign, and the evolution of $\rho$ is unchanged.
The explicit form of $\epsilon_{ij},D_{ij},\Gamma_{ij},F_{ij}$, in the Markovian case, can be calculated using the equalities
\begin{equation}
\lim_{t\rightarrow\infty}\int_0^t d\tau e^{-i(\omega-\omega_0)\tau}=\pi \delta (\omega-\omega_0)+i\frac{P}{\omega-\omega_0},
\end{equation}
where $P$ denotes the Cauchy principal value, and
\begin{equation}
\coth{\frac{\omega}{2 k_B T}}=2 k_B T\sum_{n=-\infty}^{+\infty}\frac{\omega}{\omega^2+\nu_n^2},
\end{equation}
with $\nu_n=2\pi n k_BT$.
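The Matsubara representation of $\coth$ above is easy to check numerically. The sketch below compares the truncated sum with a direct evaluation; the values of $\omega$ and $k_BT$ and the truncation are illustrative.

```python
import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

def matsubara_coth(w, kT, N=20000):
    """Truncated Matsubara sum 2 kT sum_{n=-N}^{N} w / (w^2 + nu_n^2),
    with nu_n = 2 pi n kT, approximating coth(w / 2 kT)."""
    s = 1.0 / w                       # n = 0 term
    for n in range(1, N + 1):
        nu = 2 * math.pi * n * kT
        s += 2 * w / (w**2 + nu**2)   # +n and -n terms coincide
    return 2 * kT * s

w, kT = 1.0, 0.7                      # illustrative values
print(coth(w / (2 * kT)), matsubara_coth(w, kT))
```

The tail of the sum decays as $1/n^2$, so the truncation error scales as $1/N$; $N\sim 2\times 10^4$ already gives agreement to a few parts in $10^5$ for these values.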
From the master equation, a set of closed equations of motion for the average values of the second moments can be derived ($i,j=1,2$):
\begin{widetext}
\begin{eqnarray}
\frac{d \langle x_{i}x_{j} \rangle}{dt}&=&\langle p_{i} x_{j}+p_{j} x_{i}\rangle\\
\frac{d \langle p_{i}p_{j} \rangle}{dt}&=&-\frac{1}{2}(\omega_{i}^{2}+\epsilon_{ii}^2)\langle \{x_{i}, p_{j}\}\rangle-\frac{1}{2}(\omega_{j}^{2}+\epsilon_{jj}^2)\langle \{ x_{j}, p_{i}\}\rangle\nonumber\\
&-&\frac{1}{2}(\lambda+\epsilon_{12}^2)\sum_{k=1}^2[\langle \{x_{k},p_{i}\} \rangle(1-\delta_{k,j})+ \langle \{x_{k},p_{j}\}\rangle(1-\delta_{k,i})]\nonumber\\
&-&(\Gamma_{ii}+\Gamma_{jj})\langle p_{i}p_{j} \rangle-\Gamma_{12}\sum_{k=1}^2[\langle p_{k}p_{i} \rangle(1-\delta_{k,j})+ \langle p_{k}p_{j}\rangle(1-\delta_{k,i})]+D_{ij}\\
\frac{d \langle \{x_{i},p_{j}\} \rangle}{dt}&=&2\langle p_{i} p_{j}\rangle-2(\omega_{j}^{2}+\epsilon_{jj}^2)\langle x_{i} x_{j}\rangle-2(\lambda+\epsilon_{12}^2)\left[ \langle x_{i}^{2}\rangle(1-\delta_{ij})+\delta_{ij}\langle x_1 x_2\rangle\right] \nonumber\\
&-&\Gamma_{jj} \langle \{x_{i},p_{j}\} \rangle-\Gamma_{12}\langle \{x_{i},p_{i}\} \rangle+F_{ij}
\end{eqnarray}
\end{widetext}
\subsection{Common bath}
In this case, $C_{1,1}(\tau)=C_{2,2}(\tau)=C_{1,2}(\tau)=C_{2,1}(\tau)$. Then, the master
equation can be written as
\begin{eqnarray}
\frac{d \rho}{dt}&=&-i[H_S,{\rho}]-\int_0^t d\tau\sum_{i,j=1,2} \big\{[x_i,\{x_j(-\tau),\rho\}]\frac{C^C(\tau)}{2}\nonumber\\&+&[x_i,[x_j(-\tau),\rho]]\frac{C^A(\tau)}{2}\big\}.
\end{eqnarray}
and, using Eqs. (\ref{coe}-\ref{cog}), assumes the form given in Eq. (\ref{me_comm})
with $\bar D_{ii}=D_{ii}+D_{12}$ and similarly for $\epsilon^{2},\Gamma,F$. The equations of motion for the second moments read ($ i,j=1,2$)
\begin{widetext}
\begin{eqnarray}
\frac{d \langle x_{i}x_{j} \rangle}{dt}&=&\langle p_{i} x_{j}+p_{j} x_{i}\rangle\\
\frac{d \langle p_{i}p_{j} \rangle}{dt}&=&-\sum_{k=1}^2 (\omega_{k}^{2}+\bar{\epsilon}_{kk}^2)( \langle x_{k}p_{j} \rangle\delta_{ik}+ \langle p_{i}x_{k} \rangle\delta_{jk})\nonumber\\
&-&\frac{1}{2}\sum_{k=1}^2 (\lambda+\bar{\epsilon}_{ii}^2 )\left[ \langle \{x_{k},p_{i}\}\rangle(1-\delta_{kj})+
\langle \{x_{k},p_{j}\}\rangle(1-\delta_{ki}) \right] +\frac{1}{2}(\bar D_{ii}+\bar D_{jj})\nonumber\\
&-&(\bar{\Gamma}_{ii}+\bar{\Gamma}_{jj})\langle p_{i}p_{j} \rangle-\sum_{k=1}^2\bar{\Gamma}_{kk}[\langle p_{i}p_{k} \rangle(1-\delta_{kj})+\langle p_{j}p_{k} \rangle(1-\delta_{ki})]\\
\frac{d \langle \{x_{i},p_{j}\} \rangle}{dt}&=&2\langle p_{1} p_{2}\rangle-2(\omega_{j}^{2}+\bar{\epsilon}_{jj}^2)\langle x_{i} x_{j}\rangle\nonumber\\
&-&2\sum_{k=1}^2(\lambda+\bar{\epsilon}_{kk}^2 ) \langle x_{i}x_k\rangle (1- \delta_{ik})\delta_{ij}
-2(\lambda+\bar{\epsilon}_{ii}^2 ) \langle x_{i}^2\rangle (1-\delta_{ij})\nonumber\\&-&\bar\Gamma_{jj} \langle\{ x_{i},p_{j}\} \rangle-\sum_{k=1}^2\bar\Gamma_{kk}\langle \{x_{i},p_{k}\} \rangle (1- \delta_{ik})+\bar F_{ii}\nonumber\\
\end{eqnarray}
\end{widetext}
\end{appendix}
\end{document}
\begin{document}
\title{Quasi-sure Existence of Gaussian Rough Paths and Large Deviation
Principles for Capacities}
\author{H. Boedihardjo
\thanks{Oxford-Man Institute, University of Oxford, Oxford OX2 6ED, England.
Email: [email protected]
}, X. Geng
\thanks{Mathematical Institute, University of Oxford, Oxford OX2 6GG and the
Oxford-Man Institute, Eagle House, Walton Well Road, Oxford OX2 6ED.\protect \\
Email: [email protected]
} and Z. Qian
\thanks{Exeter College, University of Oxford, Oxford OX1 3DP, England. Email:
[email protected].
}}
\date{}
\maketitle
\begin{abstract}
We construct a quasi-sure version (in the sense of Malliavin) of geometric
rough paths associated with a Gaussian process with long-time memory.
As an application we establish a large deviation principle (LDP) for
capacities for such Gaussian rough paths. Together with Lyons' universal
limit theorem, our results yield immediately the corresponding results
for pathwise solutions to stochastic differential equations driven
by such Gaussian process in the sense of rough paths. Moreover, our
LDP result implies the result of Yoshida on the LDP for capacities
over the abstract Wiener space associated with such Gaussian process.
\end{abstract}
\section{Introduction}
The theory of rough paths, established by Lyons in his groundbreaking
paper \cite{Lyons}, gives us a fundamental way of understanding path
integrals along one forms and pathwise solutions to differential equations
driven by rough signals. After his work, the study of the (geometric)
rough path nature of stochastic processes (e.g. Brownian motion, Markov
processes, martingales, Gaussian processes, etc.) becomes rather important,
since it will then immediately lead to a pathwise theory of stochastic
differential equations driven by such processes, which is one of the
central problems in stochastic analysis. The rough path regularity
of Brownian motion was first studied in the unpublished Ph.D. thesis
of Sipiläinen \cite{Sipilainen}. Later on Coutin and Qian \cite{CQ}
proved that the sample paths of fractional Brownian motion with Hurst
parameter $H>1/4$ can be lifted as geometric rough paths in a canonical
way, and such canonical lifting does not exist when $H\leqslant1/4$.
Of course their result covers the Brownian motion case. The systematic
study of stochastic processes as rough paths then appeared in the
monographs on rough path theory by Lyons and Qian \cite{LQ} and by
Friz and Victoir \cite{FV}.
The continuity of the solution map for rough differential equations,
which was also proved by Lyons \cite{Lyons} and usually known as
the universal limit theorem, is a fundamental result in rough path
theory. To some extent it gives us a way of understanding the right
topology under which differential equations are stable on rough path
space. An easy but important application of the universal limit theorem
is large deviation principles (or simply LDPs) for pathwise solutions
to stochastic differential equations according to the contraction
principle, once the LDP for the law of the driving process as rough
paths is established under the rough path topology. This is also the
main motivation of strengthening the classical LDPs for probability
measures on path space under the uniform topology to the rough path
setting. Since the rough path topology is stronger than the uniform
topology, a direct corollary is the classical Freidlin-Wentzell theory
on path space, which does not follow immediately from the contraction
principle and is in fact highly nontrivial as the solution map is
not continuous in this case. In the case of Brownian motion, Ledoux,
Qian and Zhang first established the LDP for the law of Brownian rough
paths. Their result was then extended to the case of fractional Brownian
motion by Millet and Sanz-Solé \cite{MS}. The general study of LDPs
for different stochastic processes in particular for Gaussian processes
as rough paths can be found in \cite{FV}.
We first recall some basic notions from rough path theory which we
use throughout the rest of this article. We refer the readers to \cite{FV},
\cite{LCL}, \cite{LQ} for a detailed presentation.
For $n\geqslant1,$ let
\[
T^{(n)}\left(\mathbb{R}^{d}\right)=\oplus_{i=0}^{n}\left(\mathbb{R}^{d}\right)^{\otimes i}
\]
be the truncated tensor algebra over $\mathbb{R}^{d}$ of degree $n$,
where $\left(\mathbb{R}^{d}\right)^{\otimes0}:=\mathbb{R}.$ We use $\Delta$
to denote the standard $2$-simplex $\{(s,t):\ 0\leqslant s\leqslant t\leqslant1\}$.
We call an $\mathbb{R}^{d}$-valued continuous path over $[0,1]$
\textit{smooth} if it has bounded total variation. Given a smooth
path $w,$ for $k\in\mathbb{N}$ define
\begin{equation}
w_{s,t}^{k}=\int\limits _{s<t_{1}<\cdots<t_{k}<t}dw_{t_{1}}\otimes\cdots\otimes dw_{t_{k}},\ (s,t)\in\Delta.\label{signature}
\end{equation}
From classical integration theory we know that (\ref{signature})
is well-defined as the limit of Riemann-Stieltjes sums. Let $\boldsymbol{w}:\Delta\rightarrow T^{(n)}\left(\mathbb{R}^{d}\right)$
be the functional given by
\[
\boldsymbol{w}_{s,t}=\left(1,w_{s,t}^{1},\cdots,w_{s,t}^{n}\right)\text{, \ \ \ }(s,t)\in\Delta\text{.}
\]
This is usually called the \textit{lifting} of $w$ up to degree $n$.
The additivity property of integration over disjoint intervals is
then summarized as the following so-called \textit{Chen's identity}:
\begin{equation}
\boldsymbol{w}_{s,u}\otimes\boldsymbol{w}_{u,t}=\boldsymbol{w}_{s,t}\text{ \ \ \ }\forall0\leqslant s\leqslant u\leqslant t\leqslant1.\label{Chen's identity}
\end{equation}
We use $\Omega_{n}^{\infty}\left(\mathbb{R}^{d}\right)$ to denote
the space of all such functionals which are liftings of smooth paths
$w.$ In the definition of $\Omega_{n}^{\infty}$, the starting point
of the path is irrelevant, and we always assume that paths start at
the origin.
Let $p\geqslant1$ be fixed and $[p]$ denote the integer part of
$p$ (not greater than $p$). The $p$-\textit{variation metric }$d_{p}$
on $\Omega_{[p]}^{\infty}$ is defined by
\[
d_{p}(\boldsymbol{u},\boldsymbol{w})=\max_{1\leqslant i\leqslant[p]}\sup_{D}\left(\sum_{l}\left\vert u_{t_{l-1},t_{l}}^{i}-w_{t_{l-1},t_{l}}^{i}\right\vert ^{\frac{p}{i}}\right)^{\frac{i}{p}},
\]
where the supremum $\sup_{D}$ is taken over all possible finite partitions
of $[0,1]$. The completion of $\Omega_{[p]}^{\infty}$ under $d_{p}$
is called the space of \textit{geometric $p$-rough paths} over $\mathbb{R}^{d}$,
and it is denoted by $G\Omega_{p}\left(\mathbb{R}^{d}\right)$. If
$\boldsymbol{w}=\left(1,w^{1},\cdots,w^{[p]}\right)\in G\Omega_{p}\left(\mathbb{R}^{d}\right)$,
then $\boldsymbol{w}$ also satisfies Chen's identity (\ref{Chen's identity})
in $T^{[p]}\left(\mathbb{R}^{d}\right)$, and $\boldsymbol{w}$ has
finite $p$-variation in the sense that $\sup_{D}\sum_{l}\left\vert w_{t_{l-1},t_{l}}^{i}\right\vert ^{\frac{p}{i}}<\infty$
for all $1\leqslant i\leqslant\lbrack p]$.
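For a scalar piecewise linear first-level path, the supremum over partitions in the definition of $d_p$ can be computed exactly: for $p\geqslant1$, refining a partition inside a monotone linear segment never increases the sum, so an optimal partition uses only the vertices, and a simple dynamic program over those vertices finds it. The sketch below is an illustration of this fact, not part of the text's construction.

```python
def p_variation(x, p):
    """p-variation (p >= 1) of the piecewise linear path through the
    scalar samples x: sup over partitions of sum |increment|^p, which
    for such paths is attained on partitions through the vertices."""
    best = [0.0] * len(x)        # best[j]: optimal sum over partitions ending at j
    for j in range(1, len(x)):
        best[j] = max(best[i] + abs(x[j] - x[i]) ** p for i in range(j))
    return best[-1] ** (1.0 / p)

print(p_variation([0, 1, 2, 3], 2))    # monotone: one big increment is optimal
print(p_variation([0, 1, 0, 1], 2))    # zigzag: every vertex matters
```

The quadratic-time recursion mirrors the structure of the supremum: each partition is a choice of increasing indices, and `best[j]` maximizes over the last partition point before `j`.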
The fundamental result in rough path theory is the following so-called
\textit{Lyons' universal limit theorem} (see \cite{Lyons}, and also
\cite{FV}, \cite{LQ}) for differential equations driven by geometric
rough paths.
\begin{thm}
\label{Universal Limit Theorem}Let $\{V_{1},\cdots,V_{d}\}$ be a
family of $\gamma$-Lipschitz vector fields on $\mathbb{R}^{N}$ for
some $\gamma>p.$ For any given $x_{0}\in\mathbb{R}^{N},$ define
the map
\[
F(x_{0},\cdot):\ \Omega_{[p]}^{\infty}\left(\mathbb{R}^{d}\right)\rightarrow G\Omega_{p}\left(\mathbb{R}^{N}\right)
\]
in the following way. For any $\boldsymbol{w}\in\Omega_{[p]}^{\infty}\left(\mathbb{R}^{d}\right)$
which is the lifting of some smooth path $w$, let $x$ be the unique
smooth path which is the solution in $\mathbb{R}^{N}$ of the ODE
\[
dx_{t}=\sum_{\alpha=1}^{d}V_{\alpha}(x_{t})dw_{t}^{\alpha},\ t\in[0,1],
\]
with initial value $x_{0}.$ $F(x_{0},\boldsymbol{w})$ is then defined
to be the lifting of $x$ in $\Omega_{[p]}^{\infty}(\mathbb{R}^{N}).$
Then the map $F(x_{0},\cdot)$ is uniformly continuous on bounded
sets with respect to the $p$-variation metric.
\end{thm}
\begin{rem}
Theorem \ref{Universal Limit Theorem} is not the original version
of Lyons' result in \cite{Lyons} but an equivalent form. The original
result of Lyons is formulated in terms of rough path integrals and
does not restrict to geometric rough paths only. Here we state the
result in a more elementary form to avoid the machinery of rough path
integrals.
\end{rem}
The theory of rough paths can be applied to quasi-sure analysis for
Gaussian measures on path space. The notion of quasi-sure analysis
was originally introduced by Malliavin \cite{Malliavin82} (see also
\cite{Malliavin}) for the study of non-degenerate conditioning and
disintegration of Gaussian measures on abstract Wiener spaces. The
fundamental concept in quasi-sure analysis is capacity, which specifies
more precise scales for ``negligible'' subsets of an abstract Wiener
space. In particular, a set of capacity zero is always a null set,
while in general a null set may have positive capacity. According
to Malliavin, the theory of quasi-sure analysis can be regarded as
an infinite dimensional version of non-linear potential theory. It
enables us to disintegrate a Gaussian measure continuously in the
infinite dimensional setting, which for instance applies to the study
of bridge processes and pinned diffusions. Moreover, it also leads
to sharper estimates than classical methods.
The main goal of the present article is to initiate the study of Gaussian
rough paths in the setting of quasi-sure analysis. Thanks to powerful tools
from rough path theory, our results lead to the verification of many
classical results in the quasi-sure analysis on Wiener space.
The first aim of this article is to study the quasi-sure existence
of canonical lifting for sample paths of Gaussian processes as geometric
rough paths. The Brownian motion case was studied by Inahama \cite{Inahama1}
under the $p$-variation metric, and Aida \cite{Aida}, Higuchi \cite{Higuchi},
Inahama \cite{Inahama2} and Watanabe \cite{Watanabe} independently
under the Besov norm, by exploiting methods from the Malliavin calculus.
More precisely, it was proved that, quasi-surely, Brownian sample
paths can be lifted as geometric $p$-rough paths for $2<p<3$. In
the next section, we extend this result to a class of Gaussian processes
with long-time memory which includes fractional Brownian motion, by
applying techniques both from rough path theory and the Malliavin
calculus. Combining our result with Lyons' universal limit theorem,
we obtain immediately a quasi-sure limit theorem for pathwise solutions
to stochastic differential equations driven by Gaussian processes,
which improves the Wong-Zakai type limit theorem and its quasi-sure
version (see for example Ren \cite{Ren}, Malliavin-Nualart \cite{MN}
and the references therein).
The technique we use in the next section enables us to establish a
large deviation principle for capacities for Gaussian rough paths
with long-time memory, which is the second aim of this article. LDPs
for capacities for transformations on an abstract Wiener space was
first studied by Yoshida \cite{Yoshida}. The general definition and
the basic properties of LDPs for induced capacities on a Polish space
first appeared in Gao and Ren \cite{GR}, in which the case of stochastic
flows driven by Brownian motion was also investigated. Before establishing
our LDP result, we first prove two fundamental results on transformations
of LDPs for capacities: the contraction principle and exponential
good approximations, which are both easy adaptations from the classical
results for probability measures. Our LDP result is then based on
the result and method developed in the next section and finite dimensional
approximations. It turns out that the general result of Yoshida in
the case of Gaussian processes is a direct corollary of our result
due to the continuity of the projection map from a geometric rough
path onto its first level path. The original proof of Yoshida relies
crucially on the infinite dimensional structure of abstract Wiener
space, and in particular deep properties of capacity and analytic
properties of the Ornstein-Uhlenbeck semigroup. However, our technique
here relies only on basic properties of capacity and finite dimensional
Gaussian spaces. Moreover, again from Lyons' universal limit theorem,
our LDP result immediately yields the LDPs for capacities for pathwise
solutions to stochastic differential equations driven by Gaussian
processes. In this respect our result is stronger than the result
of Yoshida since we are working in a stronger topology (the $p$-variation
topology) instead of the uniform topology, which is too weak to support
the continuity of the solution map for differential equations. It
is also interesting to note that Inahama \cite{Inahama2} was already
able to applied techniques from quasi-sure analysis to establish LDPs
for pinned diffusion measures.
\section{Quasi-sure Existence of Gaussian Rough Paths}
In the present article, we consider the following class of Gaussian
processes with long-time memory in the sense of Coutin-Qian \cite{CQ}.
\begin{defn}
\label{long time memory}A $d$-dimensional centered, continuous Gaussian
process $\{B_{t}\}_{t\geqslant0}$ starting at the origin with independent
components is said to have $h$-\textit{long-time memory} for some
$0<h<1$ if there is a constant $C_{h}$ such that
\[
\mathbb{E}\left[\left\vert B_{t}-B_{s}\right\vert ^{2}\right]\leqslant C_{h}\left\vert t-s\right\vert ^{2h}
\]
for $s,t\geqslant0$ and
\[
\left\vert \mathbb{E}\left[\left(B_{t}^{i}-B_{s}^{i}\right)\left(B_{t+\tau}^{i}-B_{s+\tau}^{i}\right)\right]\right\vert \leqslant C_{h}\tau^{2h}\left\vert \frac{t-s}{\tau}\right\vert ^{2}
\]
for $1\leqslant i\leqslant d,$ $s,t\geqslant0$, $\tau>0$ with $(t-s)/\tau\leqslant1$.
\end{defn}
A fundamental example of Gaussian processes with long-time memory
is fractional Brownian motion with $h$ being the Hurst parameter
(see \cite{LQ}).
From now on, we always assume that such Gaussian process is realized
on the path space over the finite time period $[0,1]$. This is of
course equivalent to the consideration of the process over any $[0,T].$
Let $W$ be the space of all $\mathbb{R}^{d}$-valued continuous paths
$w$ over $[0,1]$ with $w_{0}=0,$ and equip $W$ with the Borel
$\sigma$-algebra $\mathcal{B}(W)$. Let $\mathbb{P}$ be the law
on $(W,\mathcal{B}(W))$ of some Gaussian process with $h$-long-time
memory in the sense of Definition \ref{long time memory}.
It is a fundamental result of Coutin and Qian \cite{CQ} that if $h>1/4,\ 2<p<4$
with $hp>1,$ then outside a $\mathbb{P}$-null set each sample path
$w\in W$ can be lifted as geometric $p$-rough paths in a canonical
way. More precisely, for $m\geqslant1$, let $t_{m}^{k}=k/2^{m}$
($k=0,1,\cdots,2^{m}$) be the $m$-th dyadic partition of $[0,1].$
Given $w\in W$, define $w^{(m)}$ to be the dyadic piecewise linear
interpolation of $w$ by
\[
w_{t}^{(m)}=w_{t_{m}^{k-1}}+2^{m}\left(t-t_{m}^{k-1}\right)\left(w_{t_{m}^{k}}-w_{t_{m}^{k-1}}\right),\ t\in\left[t_{m}^{k-1},t_{m}^{k}\right],
\]
and let
\[
\boldsymbol{w}_{s,t}^{(m)}=\left(1,w_{s,t}^{(m),1},w_{s,t}^{(m),2},w_{s,t}^{(m),3}\right),\ (s,t)\in\Delta,
\]
be the geometric rough path associated with $w^{(m)}$ up to level
3. Let $\mathcal{A}_{p}$ be the totality of all $w\in W$ such that
$\{\boldsymbol{w}^{(m)}\}_{m\geqslant1}$ is a Cauchy sequence under
the $p$-variation metric $d_{p}$. Then $\mathcal{A}_{p}^{c}$ is
a $\mathbb{P}$-null set, and hence $\boldsymbol{w}^{(m)}$ converges
$\mathbb{P}$-almost surely to a unique geometric $p$-rough path $\boldsymbol{w}$.
The convergence holds in $L^{1}(W,\mathbb{P})$ as well.
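The dyadic piecewise linear interpolation $w^{(m)}$ defined above is straightforward to implement. The following Python sketch (with an arbitrary illustrative path $w$ and level $m$) checks that $w^{(m)}$ agrees with $w$ at the level-$m$ dyadic points and is affine on each dyadic interval.

```python
# Dyadic piecewise linear interpolation w^{(m)} of a scalar path w on [0, 1].

def dyadic_interpolation(w, m):
    """Return the function t -> w^{(m)}_t defined by the display above."""
    def wm(t):
        # index k of the dyadic interval [t_m^{k-1}, t_m^k] containing t
        k = min(int(t * 2 ** m) + 1, 2 ** m)
        t_prev, t_next = (k - 1) / 2 ** m, k / 2 ** m
        return w(t_prev) + 2 ** m * (t - t_prev) * (w(t_next) - w(t_prev))
    return wm

w = lambda t: t * t      # any continuous path with w(0) = 0
m = 4
wm = dyadic_interpolation(w, m)

# w^{(m)} agrees with w at every level-m dyadic point ...
for k in range(2 ** m + 1):
    assert abs(wm(k / 2 ** m) - w(k / 2 ** m)) < 1e-12

# ... and is affine in between: the midpoint value is the average.
t0, t1 = 3 / 2 ** m, 4 / 2 ** m
assert abs(wm((t0 + t1) / 2) - (wm(t0) + wm(t1)) / 2) < 1e-12
```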
\begin{rem}
Although a geometric $p$-rough path is defined up to level $[p],$
by Lyons' extension theorem (see \cite{Lyons}) it makes no
difference if we always work up to level $3$ under $d_{p}$, since
$2<p<4$.
\end{rem}
\begin{rem}
Coutin and Qian \cite{CQ} also showed that if $h\leqslant1/4,$ no
subsequence of $\boldsymbol{w}_{s,t}^{(m)}$ converges in probability
or in $L^{1}$, and hence such a canonical lifting of sample paths to
geometric rough paths does not exist.
\end{rem}
The goal of this section is to strengthen the result of Coutin-Qian
to the quasi-sure setting in the sense of Malliavin. The main result
and the technique developed in this section are essential for establishing
a large deviation principle for capacities, as we will see later on.
Throughout the rest of this article, we fix $h\in(1/4,1/2]$, $p\in(2,4)$
with $hp>1$ (the case of $h>1/2$ is trivial from the rough path
point of view), and consider a $d$-dimensional Gaussian process with
$h$-long-time memory.
We first recall some basic notions from the Malliavin calculus and
quasi-sure analysis. We refer the reader to \cite{Malliavin}, \cite{Nualart}
for a systematic discussion.
Let $\mathcal{H}$ be the Cameron-Martin space associated with the
corresponding Gaussian measure $\mathbb{P}$ on $W$. $\mathcal{H}$
is canonically defined to be the space of all paths in $W$ of the
form
\[
h_{t}=\mathbb{E}[Zw_{t}],\ t\in[0,1],
\]
where $Z$ is an element of the $L^{2}$ space generated by the process,
and the inner product is given by $\langle h_{1},h_{2}\rangle=\mathbb{E}[Z_{1}Z_{2}]$, where $Z_{i}$ corresponds to $h_{i}$.
It follows that the identity map $\iota$ defines a continuous and
dense embedding from $\mathcal{H}$ into $W$ which makes $\left(W,\mathcal{H},\mathbb{P}\right)$
into an abstract Wiener space in the sense of Gross. Let $\iota^{*}:\ W^{*}\rightarrow\mathcal{H}^{*}\cong\mathcal{H}$
be the dual of $\iota.$ Then the identity map $\mathcal{I}:\ W^{\ast}\hookrightarrow L^{2}(W,\mathbb{P})$
uniquely extends to an isometric embedding from $\mathcal{H}$ into
$L^{2}(W,\mathbb{P})$ via $\iota^{*}$.
If $f$ is a Schwartz function on $\mathbb{R}^{n}$ and $\varphi_{1},\cdots,\varphi_{n}\in W^{*}$,
then $F=f(\varphi_{1},\cdots,\varphi_{n})$ is called a \textit{smooth
(Wiener) functional} on $W$. The collection of all smooth functionals
on $W$ is denoted by $\mathcal{S}$. The \textit{Malliavin derivative}
of $F$ is defined to be the $\mathcal{H}$-valued functional
\[
DF=\sum_{i=1}^{n}\frac{\partial f}{\partial x^{i}}(\varphi_{1},\cdots,\varphi_{n})\iota^{*}\varphi_{i}.
\]
This definition can be generalized to smooth functionals taking values
in a separable Hilbert space $E$. Let $\mathcal{S}(E)$ be the space
of $E$-valued functionals of the form $F=\sum_{i=1}^{k}F_{i}e_{i}$,
where $F_{i}\in\mathcal{S}$, $e_{i}\in E$. The Malliavin derivative
of $F$ is defined to be the $\mathcal{H}\otimes E$-valued functional
$DF=\sum_{i=1}^{k}DF_{i}\otimes e_{i}$. This definition is independent
of the representation of $F$, and by induction we can define higher order derivatives
$D^{N}F$ for $N\in\mathbb{N}$, which is then an $\mathcal{H}^{\otimes N}\otimes E$-valued
functional. Given $q\geqslant1,\ N\in\mathbb{N}$, the Sobolev norm
$\Vert\cdot\Vert_{q,N;E}$ on $\mathcal{S}(E)$ is defined by
\[
\Vert F\Vert_{q,N;E}=\left(\sum_{i=0}^{N}\mathbb{E}\left[\left\Vert D^{i}F\right\Vert _{\mathcal{H}^{\otimes i}\otimes E}^{q}\right]\right)^{\frac{1}{q}}.
\]
The completion of $(\mathcal{S}(E),\Vert\cdot\Vert_{q,N;E})$ is called
the $(q,N)$-\textit{Sobolev space} for $E$-valued functionals over
$W$, and it is denoted by $\mathbb{D}_{N}^{q}(E)$.
For any $q>1,N\in\mathbb{N},$ the $(q,N)$-\textit{capacity} Cap$_{q,N}$
is a functional defined on the collection of all subsets of $W$.
If $O$ is an open subset of $W$, then
\[
\text{Cap}_{q,N}(O):=\inf\left\{ \Vert u\Vert_{q,N}:\ u\in\mathbb{D}_{N}^{q},\ u\geqslant1\text{ on }O,\ u\geqslant0\ \text{on }W\text{,}\ \mathbb{P}\mbox{-a.s.}\right\}
\]
and for any arbitrary subset $A$ of $W$,
\[
\text{Cap}_{q,N}(A):=\inf\left\{ \text{Cap}_{q,N}(O):\ O\ \text{open, }A\subset O\right\} .
\]
A subset $A\subset W$ is called \textit{slim} if Cap$_{q,N}(A)=0$
for all $q>1$ and $N\in\mathbb{N}$. A property for paths in $W$
is said to hold \textit{quasi-surely} if it holds outside a slim
set.
The $(q,N)$-capacity has the following basic properties:
(1) if $A\subset B$, then
\[
0\leqslant\mbox{Cap}_{q,N}(A)\leqslant\mbox{Cap}_{q,N}(B);
\]
(2) Cap$_{q,N}$ is increasing in $q$ and $N$;
(3) Cap$_{q,N}$ is sub-additive, i.e.,
\[
\mbox{Cap}_{q,N}\left(\bigcup_{i=1}^{\infty}A_{i}\right)\leqslant\sum_{i=1}^{\infty}\mbox{Cap}_{q,N}(A_{i}).
\]
The following quasi-sure versions of Tchebycheff's inequality and the Borel-Cantelli
lemma play an essential role in the study of quasi-sure convergence
in our approach. We refer the reader to \cite{Malliavin} for the
proofs.
\begin{prop}
\label{Tchebycheff and Borel-Cantelli}(1) For any $\lambda>0$ and
any $u\in\mathbb{D}_{N}^{q}$ which is lower semi-continuous, we have
\[
\mathrm{Cap}_{q,N}\left\{ w\in W:u(w)>\lambda\right\} \leqslant\frac{C_{q,N}}{\lambda}\Vert u\Vert_{q,N},
\]
where $C_{q,N}$ is a constant depending only on $q$ and $N$.
(2) For any sequence $\{A_{n}\}_{n=1}^{\infty}$ of subsets of $W,$
if $\sum_{n=1}^{\infty}\mathrm{Cap}_{q,N}(A_{n})<\infty,$ then
\[
\mathrm{Cap}_{q,N}\left(\limsup_{n\rightarrow\infty}A_{n}\right)=0.
\]
\end{prop}
Now we are in a position to state the main result of this section.
\begin{thm}
\label{quasi-sure convergence}Suppose that $\mathbb{P}$ is the Gaussian
measure on $(W,\mathcal{B}(W))$ associated with a $d$-dimensional
Gaussian process with $h$-long-time memory for some $h\in(1/4,1/2],\ p\in(2,4)$
with $hp>1.$ Then $\mathcal{A}_{p}^{c}$ is a slim set. In particular,
sample paths of the Gaussian process can be lifted to geometric
$p$-rough paths in a canonical way quasi-surely, as the limit under
$d_{p}$ of the liftings of the dyadic piecewise linear interpolations.
\end{thm}
By applying Lyons' universal limit theorem (Theorem \ref{Universal Limit Theorem})
for rough differential equations driven by geometric rough paths,
an immediate consequence of Theorem \ref{quasi-sure convergence}
is the quasi-sure existence and uniqueness for pathwise solutions
to stochastic differential equations driven by Gaussian processes
with $h$-long-time memory in the sense of geometric rough paths,
under certain regularity conditions on the generating vector fields.
The main idea of proving Theorem \ref{quasi-sure convergence} is
to use a crucial control on the $p$-variation metric which is defined
over dyadic partitions only, and to apply basic results for Gaussian
polynomials in the Malliavin calculus.
If $\boldsymbol{w}=(1,w^{1},w^{2},w^{3})$ and $\boldsymbol{\tilde{w}}=(1,\tilde{w}^{1},\tilde{w}^{2},\tilde{w}^{3})$
are two functionals on $\Delta$ taking values in $T^{3}(\mathbb{R}^{d})$,
define
\begin{equation}
\rho_{i}(\boldsymbol{w},\boldsymbol{\tilde{w}})=\left(\sum_{n=1}^{\infty}n^{\gamma}\sum_{k=1}^{2^{n}}\left\vert w_{t_{n}^{k-1},t_{n}^{k}}^{i}-\tilde{w}_{t_{n}^{k-1},t_{n}^{k}}^{i}\right\vert ^{\frac{p}{i}}\right)^{\frac{i}{p}},\label{the rho function}
\end{equation}
where $i=1,2,3$ and $\gamma>p-1$ is a fixed constant. We use $\rho_{j}(\boldsymbol{w})$
to denote $\rho_{j}(\boldsymbol{w},\boldsymbol{\tilde{w}})$ with
$\boldsymbol{\tilde{w}}=(1,0,0,0)$. These functionals were originally
introduced by Hambly and Lyons \cite{HL} for constructing the stochastic
area process associated with Brownian motion on the Sierpinski
gasket. They were then used by Ledoux, Qian and Zhang \cite{LQZ}
to establish a large deviation principle for Brownian rough paths
under the $p$-variation topology. We also use them
to prove a large deviation principle for capacities in the next section.
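To make the definition concrete, the following Python sketch evaluates a truncated version of $\rho_{1}$: the infinite sum over dyadic levels $n$ in (\ref{the rho function}) is cut off at a finite level, and the values of $p$, $\gamma$ and the test path are illustrative assumptions. The result is compared with the closed form for the straight-line path $w_{t}=t$, whose level-$n$ increments all equal $2^{-n}$.

```python
# Truncated evaluation of the functional rho_1 from the display above.

def rho_1(w, p, gamma, levels):
    """Truncated rho_1(w) for a scalar path w on [0, 1] (sum cut at `levels`)."""
    total = 0.0
    for n in range(1, levels + 1):
        level_sum = sum(
            abs(w(k / 2 ** n) - w((k - 1) / 2 ** n)) ** p
            for k in range(1, 2 ** n + 1)
        )
        total += n ** gamma * level_sum
    return total ** (1.0 / p)

p, gamma, levels = 3.0, 2.5, 12   # gamma > p - 1, as required above
w = lambda t: t                   # straight line: each level-n increment is 2^{-n}

# For this path the truncated sum is sum_{n <= levels} n^gamma 2^{n(1-p)}.
ref = sum(n ** gamma * 2.0 ** (n * (1 - p)) for n in range(1, levels + 1)) ** (1 / p)
assert abs(rho_1(w, p, gamma, levels) - ref) < 1e-9
```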
The following estimate is contained implicitly in \cite{HL} and made
explicit in \cite{LQ}.
\begin{lem}
\label{control of p-var metric as lemma}There exists a positive constant
$C_{d,p,\gamma}$ depending only on $d,p,\gamma,$ such that for any
$\boldsymbol{w},\widetilde{\boldsymbol{w}},$
\begin{eqnarray}
d_{p}(\boldsymbol{w},\widetilde{\boldsymbol{w}}) & \leqslant & C_{d,p,\gamma}\max\left\{ \rho_{1}(\boldsymbol{w},\widetilde{\boldsymbol{w}}),\rho_{2}(\boldsymbol{w},\widetilde{\boldsymbol{w}}),\rho_{1}(\boldsymbol{w},\widetilde{\boldsymbol{w}})\left(\rho_{1}(\boldsymbol{w})+\rho_{1}(\widetilde{\boldsymbol{w}})\right),\right.\nonumber \\
& & \rho_{3}(\boldsymbol{w},\widetilde{\boldsymbol{w}}),\rho_{2}(\boldsymbol{w},\widetilde{\boldsymbol{w}})\left(\rho_{1}(\boldsymbol{w})+\rho_{1}(\widetilde{\boldsymbol{w}})\right),\nonumber \\
& & \left.\rho_{1}(\boldsymbol{w},\widetilde{\boldsymbol{w}})\left(\rho_{2}(\boldsymbol{w})+\rho_{2}(\widetilde{\boldsymbol{w}})+(\rho_{1}(\boldsymbol{w})+\rho_{1}(\widetilde{\boldsymbol{w}}))^{2}\right)\right\} .\label{control of p-var metric as equality}
\end{eqnarray}
\end{lem}
The main difficulty in proving Theorem \ref{quasi-sure convergence}
is that it is unknown whether the $p$-variation metric is differentiable
in the sense of Malliavin. We get around this difficulty first by
controlling the $p$-variation metric using Lemma \ref{control of p-var metric as lemma}
and then by observing that the capacity of $\left\{ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\lambda\right\} $
is ``evenly distributed'' over the dyadic sub-intervals (see (\ref{evenly distributed})
below). Our task is then reduced to the estimation of the
Sobolev norms of certain Gaussian polynomials, which is contained
in the following basic result in the Malliavin calculus (see \cite{Nualart}).
\begin{lem}
\label{polynomial estimates}Fix $N\in\mathbb{N}$. Let $\mathcal{P}^{N}(E)$
be the space of $E$-valued polynomial functionals of degree less
than or equal to $N$. Then for any $q>2$ and $F\in\mathcal{P}^{N}(E)$, we have
\begin{equation}
\|F\|_{q;E}\leqslant(N+1)(q-1)^{\frac{N}{2}}\|F\|_{2;E}.\label{equivalence of q-norm and 2-norm}
\end{equation}
Moreover, for any $F\in\mathcal{P}^{N}(E)$ and $i\leqslant N$ we
have
\begin{equation}
\Vert D^{i}F\Vert_{2;\mathcal{H}^{\otimes i}\otimes E}\leqslant N^{\frac{i+1}{2}}\Vert F\Vert_{2;E}.\label{differential inequality}
\end{equation}
\end{lem}
The following $L^{2}$-estimates for the dyadic piecewise linear interpolation,
which are contained in a series of calculations in \cite{LQ}, are
crucial for us.
\begin{lem}
\label{L^2 estimates}Let $m,n\geqslant1$ and $k=1,\cdots,2^{n}$.
1) For $i=1,2,3$, we have
\[
\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;(\mathbb{R}^{d})^{\otimes i}}\leqslant\begin{cases}
C_{d,h}\left(\frac{1}{2^{nh}}\right)^{i}, & n\leqslant m,\\
C_{d,h}\left(\frac{2^{m(1-h)}}{2^{n}}\right)^{i}, & n>m.
\end{cases}
\]
2) We also have
\begin{eqnarray*}
\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),1}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),1}\right\Vert _{2;\mathbb{R}^{d}} & \leqslant & \begin{cases}
0, & n\leqslant m,\\
C_{d,h}\frac{2^{m(1-h)}}{2^{n}}, & n>m;
\end{cases}\\
\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),2}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),2}\right\Vert _{2;(\mathbb{R}^{d})^{\otimes2}} & \leqslant & \begin{cases}
C_{d,h}\frac{1}{2^{\frac{1}{2}(4h-1)m}2^{\frac{1}{2}n}}, & n\leqslant m,\\
C_{d,h}\frac{2^{2m(1-h)}}{2^{2n}}, & n>m;
\end{cases}\\
\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),3}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),3}\right\Vert _{2;(\mathbb{R}^{d})^{\otimes3}} & \leqslant & \begin{cases}
C_{d,h}\frac{1}{2^{\frac{1}{2}(4h-1)m}2^{\frac{1+2h}{2}n}}, & n\leqslant m,\\
C_{d,h}\frac{2^{3m(1-h)}}{2^{3n}}, & n>m.
\end{cases}
\end{eqnarray*}
Here $C_{d,h}$ is a constant depending only on $d$ and $h.$
\end{lem}
Now we can proceed to the proof of Theorem \ref{quasi-sure convergence}.
The key step is to establish estimates for the capacities of the tail
events $\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\lambda\right\} $
and $\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m)}\right)>\lambda\right\} $
($i=1,2,3$). This is contained in the following lemma.
\begin{lem}
\label{estimating tail events}Let $\theta\in\left(\left(\frac{p(2h+1)}{6}-1\right)^{+},hp-1\right)$
and let $\widetilde{N}\in\mathbb{N}$ with $\widetilde{N}>\frac{N}{2}\vee\left(2\left(h-\frac{\theta+1}{p}\right)\right)^{-1}$.
Then we have
(1)
\begin{equation}
\begin{array}{ccc}
\mathrm{Cap}_{q,N}\left\{ w:\rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\lambda\right\} & \leqslant & C_{i}\lambda^{-2\widetilde{N}}\left(\frac{1}{2^{m}}\right)^{2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1},\end{array}\label{first estimate}
\end{equation}
(2)
\begin{equation}
\mathrm{Cap}_{q,N}\left\{ w:\rho_{i}\left(\boldsymbol{w}^{(m)}\right)>\lambda\right\} \leqslant C_{i}\lambda^{-2\widetilde{N}}.\label{second estimate}
\end{equation}
Here $C_{i}$ is a positive constant of the form $C_{i}=C_{1}C_{2}^{\tilde{N}}g\left(\tilde{N};N\right)\tilde{N}^{i\tilde{N}},$
where $C_{1}$ depends only on $q$ and $N,$ $C_{2}$ depends only
on $d,p,h,\gamma,\theta,q$ and $g\left(\tilde{N};N\right)$ is a
polynomial in $\tilde{N}$ with degree depending only on $N$ and
universal constant coefficients.\end{lem}
\begin{proof}
For $i=1,2,3$, set
\begin{eqnarray*}
I_{i}(m;\lambda) & = & \mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{\left(m+1\right)},\boldsymbol{w}^{\left(m\right)}\right)>\lambda\right\} \\
& = & \mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{\left(m+1\right)},\boldsymbol{w}^{\left(m\right)}\right)^{\frac{p}{i}}>\lambda^{\frac{p}{i}}\right\} .
\end{eqnarray*}
By the definition of $\rho_{i},$ for every $\theta>0$ we have
\begin{align*}
& \left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{\left(m+1\right)},\boldsymbol{w}^{\left(m\right)}\right)^{\frac{p}{i}}>\lambda^{\frac{p}{i}}\right\} \\
\subset & \bigcup\limits _{n=1}^{\infty}\left\{ w:\ \sum_{k=1}^{2^{n}}\left\vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\vert ^{\frac{p}{i}}>C_{\gamma,\theta}\lambda^{\frac{p}{i}}\left(\frac{1}{2^{n}}\right)^{\theta}\right\} ,
\end{align*}
where $C_{\gamma,\theta}=\left(\sum_{n=1}^{\infty}n^{\gamma}2^{-n\theta}\right)^{-1}$.
Therefore,
\begin{align}
& \ I_{i}\left(m;\lambda\right)\nonumber \\
\leqslant & \ \sum_{n=1}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \sum_{k=1}^{2^{n}}\left\vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\vert ^{\frac{p}{i}}>\lambda^{\frac{p}{i}}C_{\gamma,\theta}\left(\frac{1}{2^{n}}\right)^{\theta}\right\} \nonumber \\
\leqslant & \ \sum_{n=1}^{\infty}\sum_{k=1}^{2^{n}}\mbox{Cap}_{q,N}\left\{ w:\ \left\vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\vert ^{\frac{p}{i}}>\lambda^{\frac{p}{i}}C_{\gamma,\theta}\left(\frac{1}{2^{n}}\right)^{\theta+1}\right\} .\label{evenly distributed}
\end{align}
On the other hand, for any $\tilde{N}\in\mathbb{N}$ we have
\begin{align*}
& \ \mbox{Cap}_{q,N}\left\{ \left\vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\vert ^{\frac{p}{i}}>\lambda^{\frac{p}{i}}C_{\gamma,\theta}\left(\frac{1}{2^{n}}\right)^{\theta+1}\right\} \\
= & \ \mbox{Cap}_{q,N}\left\{ f_{m,n,k}^{i}>\left[\lambda C_{\gamma,\theta}^{\frac{i}{p}}\left(\frac{1}{2^{n}}\right)^{\frac{i}{p}(\theta+1)}\right]^{2\tilde{N}}\right\} ,
\end{align*}
where
\[
\begin{array}{ccc}
f_{m,n,k}^{i}(w) & = & \left\vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\vert ^{2\tilde{N}},\ \mbox{for }w\in W.\end{array}
\]
Since $\tilde{N}$ is a natural number, $f_{m,n,k}^{i}$ are polynomial
functionals of degree $2i\tilde{N}$, and hence they are $N$ times
differentiable in the sense of Malliavin provided $\tilde{N}\geqslant\frac{N}{2}$.
Consequently, we can apply Tchebycheff's inequality (the first part
of Proposition \ref{Tchebycheff and Borel-Cantelli}) to obtain
\[
I_{i}(m;\lambda)\leqslant C_{q,N}\sum_{n=1}^{\infty}\sum_{k=1}^{2^{n}}\left(C_{\gamma,\theta}^{\frac{i}{p}}\lambda\left(\frac{1}{2^{n}}\right)^{\frac{i}{p}(\theta+1)}\right)^{-2\widetilde{N}}\left\Vert f_{m,n,k}^{i}\right\Vert _{q,N}.
\]
If $q>2,$ by (\ref{equivalence of q-norm and 2-norm}) of Lemma \ref{polynomial estimates},
we have
\begin{eqnarray*}
\left\Vert f_{m,n,k}^{i}\right\Vert _{q,N} & \leqslant & \sum_{l=0}^{N}\|D^{l}f_{m,n,k}^{i}\|_{q;\mathcal{H}^{\otimes l}}\\
& \leqslant & \left(2i\tilde{N}+1\right)(q-1)^{i\tilde{N}}\sum_{l=0}^{N}\left\Vert D^{l}f_{m,n,k}^{i}\right\Vert _{2;\mathcal{H}^{\otimes l}}.
\end{eqnarray*}
By (\ref{differential inequality}) of Lemma \ref{polynomial estimates},
we have
\[
\left\Vert D^{l}f_{m,n,k}^{i}\right\Vert _{2;\mathcal{H}^{\otimes l}}\leqslant\left(2i\tilde{N}\right)^{\frac{N+1}{2}}\left\Vert f_{m,n,k}^{i}\right\Vert _{2}.
\]
Therefore,
\[
\left\Vert f_{m,n,k}^{i}\right\Vert _{q,N}\leqslant(N+1)\left(2i\tilde{N}+1\right)(q-1)^{i\tilde{N}}\left(2i\tilde{N}\right)^{\frac{N+1}{2}}\left\Vert f_{m,n,k}^{i}\right\Vert _{2}.
\]
Moreover, since $w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}$
is an $(\mathbb{R}^{d})^{\otimes i}$-valued polynomial functional
of degree $i$, we know again from (\ref{equivalence of q-norm and 2-norm})
that
\begin{eqnarray*}
\left\Vert f_{m,n,k}^{i}\right\Vert _{2} & = & \left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{4\tilde{N};\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}\\
& \leqslant & (i+1)^{2\tilde{N}}\left(4\widetilde{N}-1\right)^{i\tilde{N}}\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}.
\end{eqnarray*}
Therefore,
\begin{align}
& \left\Vert f_{m,n,k}^{i}\right\Vert _{q,N}\nonumber \\
\leqslant & (N+1)\left((q-1)^{i}(i+1)^{2}\right)^{\tilde{N}}\left(2i\tilde{N}+1\right)\left(2i\tilde{N}\right)^{\frac{N+1}{2}}\nonumber \\
& \cdot\left(4\tilde{N}-1\right)^{i\tilde{N}}\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}\nonumber \\
\leqslant & (N+1)\left(1024(q-1)^{3}\right)^{\tilde{N}}\left(6\tilde{N}+1\right)\left(6\tilde{N}\right)^{N}\tilde{N}^{i\tilde{N}}\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}.\label{controlling the Sobolev norm}
\end{align}
Let $C_{i}$ be the constant before $\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}$
on the R.H.S. of (\ref{controlling the Sobolev norm}).
By absorbing the constant in Tchebycheff's inequality into $C_{i}$,
we arrive at
\begin{align}
& \ I_{i}(m;\lambda)\nonumber \\
\leqslant & \ C_{i}\sum_{n=1}^{\infty}\sum_{k=1}^{2^{n}}\left(C_{\gamma,\theta}^{\frac{i}{p}}\lambda\left(\frac{1}{2^{n}}\right)^{\frac{i}{p}(\theta+1)}\right)^{-2\widetilde{N}}\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m+1),i}-w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}.\label{estimating I_i}
\end{align}
Exactly the same computation yields
\begin{align}
& \ \mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{\left(m\right)}\right)>\lambda\right\} \nonumber \\
\leqslant & \ C_{i}\sum_{n=1}^{\infty}\sum_{k=1}^{2^{n}}\left(C_{\gamma,\theta}^{\frac{i}{p}}\lambda\left(\frac{1}{2^{n}}\right)^{\frac{i}{p}(\theta+1)}\right)^{-2\widetilde{N}}\left\Vert w_{t_{n}^{k-1},t_{n}^{k}}^{(m),i}\right\Vert _{2;\left(\mathbb{R}^{d}\right)^{\otimes i}}^{2\tilde{N}}.\label{estimating rho_i}
\end{align}
We now substitute the estimates in Lemma \ref{L^2 estimates} into
(\ref{estimating I_i}) and (\ref{estimating rho_i}). In what follows,
we assume that $\theta\in\left(\left(\frac{p(2h+1)}{6}-1\right)^{+},hp-1\right)$ and
$\widetilde{N}>\frac{N}{2}\vee\left(2\left(h-\frac{\theta+1}{p}\right)\right)^{-1}$
for summability reasons. We also absorb the constants $C_{d,h}$ from
Lemma \ref{L^2 estimates} and $C_{\gamma,\theta}$ into $C_{i}$.
For $i=1$, this gives
\begin{align*}
I_{1}\left(m;\lambda\right) & \leqslant C_{1}\lambda^{-2\tilde{N}}2^{2\tilde{N}m\left(1-h\right)}\sum_{n=m+1}^{\infty}\sum_{k=1}^{2^{n}}2^{-2n\tilde{N}\left(1-\frac{\theta+1}{p}\right)}\\
& \leqslant C_{1}\lambda^{-2\tilde{N}}2^{-m\left(2\tilde{N}\left(h-\frac{\theta+1}{p}\right)-1\right)}.
\end{align*}
For $i=2$, this gives
\begin{eqnarray*}
I_{2}\left(m;\lambda\right) & \leqslant & C_{2}\lambda^{-2\tilde{N}}\left(\sum_{n=1}^{m}\sum_{k=1}^{2^{n}}2^{-n\tilde{N}\left(1-\frac{4(\theta+1)}{p}\right)-m\tilde{N}\left(4h-1\right)}\right.\\
 &  & \left.+\sum_{n=m+1}^{\infty}\sum_{k=1}^{2^{n}}2^{-4n\tilde{N}\left(1-\frac{\theta+1}{p}\right)+4m\tilde{N}\left(1-h\right)}\right)\\
 & \leqslant & C_{2}\lambda^{-2\tilde{N}}2^{-m\left(4\tilde{N}\left(h-\frac{\theta+1}{p}\right)-1\right)}.
\end{eqnarray*}
For $i=3$, this gives
\begin{eqnarray*}
I_{3}\left(m;\lambda\right) & \leqslant & C_{3}\lambda^{-2\tilde{N}}\left(\sum_{n=1}^{m}\sum_{k=1}^{2^{n}}2^{-n\tilde{N}\left(1+2h-\frac{6\left(\theta+1\right)}{p}\right)-m\tilde{N}\left(4h-1\right)}\right.\\
 &  & \left.+\sum_{n=m+1}^{\infty}\sum_{k=1}^{2^{n}}2^{-6n\tilde{N}\left(1-\frac{\theta+1}{p}\right)+6m\tilde{N}\left(1-h\right)}\right)\\
 & \leqslant & C_{3}\lambda^{-2\tilde{N}}2^{-m\left(6\tilde{N}\left(h-\frac{\theta+1}{p}\right)-1\right)}.
\end{eqnarray*}
Therefore, for $i=1,2,3,$ we have
\[
I_{i}\left(m;\lambda\right)\leqslant C_{i}\lambda^{-2\tilde{N}}2^{-m\left(2i\tilde{N}\left(h-\frac{\theta+1}{p}\right)-1\right)},
\]
which gives (\ref{first estimate}). From the computation it is easy
to see that the constants $C_{i}$ here are of the form stated in
the lemma.
Similar computations yield that for $i=1,2,3,$
\begin{align*}
 & \ \mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{\left(m\right)}\right)>\lambda\right\} \\
\leqslant & \ C_{i}\left(\lambda^{-2\tilde{N}}\sum_{n=1}^{m}\sum_{k=1}^{2^{n}}2^{-2n\tilde{N}i\left(h-\frac{\theta+1}{p}\right)}\right.\\
 & \ \left.+\lambda^{-2\tilde{N}}\sum_{n=m+1}^{\infty}\sum_{k=1}^{2^{n}}2^{-2\tilde{N}i\left(n\left(1-\frac{\theta+1}{p}\right)-m\left(1-h\right)\right)}\right)\\
\leqslant & \ C_{i}\lambda^{-2\tilde{N}},
\end{align*}
with $C_{i}$ of the form stated in the lemma. This gives (\ref{second estimate}).
\end{proof}
\begin{rem}
The explicit form of the constants in Lemma \ref{estimating tail events}
is used in the next section when proving a large deviation principle
for capacities.
\end{rem}
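Before turning to the proof, it may help to see the constraints on $\theta$ and $\widetilde{N}$ in action. The following Python sketch, with illustrative values $h=0.3$, $p=3.5$, $N=2$ (none of which come from the text), checks that the admissible interval for $\theta$ is nonempty and that the exponent $2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1$ in (\ref{first estimate}) is positive, so the capacity bound decays geometrically in $m$.

```python
# Arithmetic check of the parameter constraints in the tail-event lemma.
import math

h, p = 0.3, 3.5                 # h in (1/4, 1/2], p in (2, 4), h*p = 1.05 > 1
low = max(p * (2 * h + 1) / 6 - 1, 0.0)   # ((p(2h+1)/6) - 1)^+
high = h * p - 1
assert low < high               # the interval for theta is nonempty

theta = (low + high) / 2        # any choice strictly inside the interval
gap = h - (theta + 1) / p       # positive because theta < hp - 1
assert gap > 0

N = 2
# N~ must be an integer exceeding max(N/2, 1/(2*gap))
N_tilde = math.ceil(max(N / 2, 1 / (2 * gap))) + 1
for i in (1, 2, 3):
    # exponent of 2^{-m} in the capacity bound: positive => geometric decay
    assert 2 * i * N_tilde * gap - 1 > 0
```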
Now we are in a position to complete the proof of Theorem \ref{quasi-sure convergence}.
\begin{proof}[Proof of Theorem \ref{quasi-sure convergence}]
By rewriting (\ref{control of p-var metric as equality}) as
\begin{align}
& \ d_{p}(\boldsymbol{w},\widetilde{\boldsymbol{w}})\nonumber \\
\leqslant & \ C_{d,p,\gamma}\max\left\{ \rho_{i}(\boldsymbol{w},\widetilde{\boldsymbol{w}})\left(\rho_{j}(\boldsymbol{w})+\rho_{j}(\widetilde{\boldsymbol{w}})\right)^{k}:\left(i,j,k\right)\in\mathbb{N}\times\mathbb{N}\times\mathbb{Z}_{+},i+jk\leqslant3\right\} \label{rewritting control of d_p}
\end{align}
we only need to show that there exists a positive constant $\beta$
such that for any $\left(i,j,k\right)\in\mathbb{N}\times\mathbb{N}\times\mathbb{Z}_{+}$
satisfying $i+jk\leqslant3$, we have
\begin{align}
& \ \sum_{m=1}^{\infty}\text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)\left(\rho_{j}\left(\boldsymbol{w}^{(m)}\right)+\rho_{j}\left(\boldsymbol{w}^{(m+1)}\right)\right)^{k}>\frac{1}{2^{m\beta}}\right\} \nonumber \\
< & \ \infty.\label{only need to show}
\end{align}
Indeed, if the above result holds, then by Lemma \ref{control of p-var metric as lemma},
we have
\[
\sum_{m=1}^{\infty}\text{Cap}_{q,N}\left\{ w:\ d_{p}\left(\boldsymbol{w}^{(m)},\boldsymbol{w}^{(m+1)}\right)>C'_{d,p,\gamma}\frac{1}{2^{m\beta}}\right\} <\infty,
\]
where $C'_{d,p,\gamma}$ is some constant depending only on $d,p,\gamma$.
It follows from the quasi-sure version of the Borel-Cantelli lemma (the
second part of Proposition \ref{Tchebycheff and Borel-Cantelli})
that
\[
\text{Cap}_{q,N}\left(\limsup_{m\rightarrow\infty}\left\{ w:\ d_{p}\left(\boldsymbol{w}^{(m)},\boldsymbol{w}^{(m+1)}\right)>C_{d,p,\gamma}'\frac{1}{2^{m\beta}}\right\} \right)=0.
\]
Since
\begin{align*}
\mathcal{A}_{p}^{c}= & \left\{ w:\ \boldsymbol{w}^{(m)}\text{ is not a Cauchy sequence under }d_{p}\right\} \\
\subset & \left\{ w:\ \sum_{m=1}^{\infty}d_{p}\left(\boldsymbol{w}^{(m)},\boldsymbol{w}^{(m+1)}\right)=\infty\right\} \\
\subset & \limsup_{m\rightarrow\infty}\left\{ w:\ d_{p}\left(\boldsymbol{w}^{(m)},\boldsymbol{w}^{(m+1)}\right)>C'_{d,p,\gamma}\frac{1}{2^{m\beta}}\right\} ,
\end{align*}
it follows that $\mbox{Cap}_{q,N}(\mathcal{A}_{p}^{c})=0$, which completes
the proof.
Now we prove (\ref{only need to show}).
First consider the case $k>0.$ For any $\alpha,\beta>0,$ we have
\begin{align*}
\text{} & \text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)\left(\rho_{j}\left(\boldsymbol{w}^{(m)}\right)+\rho_{j}\left(\boldsymbol{w}^{(m+1)}\right)\right)^{k}>\frac{1}{2^{m\beta}}\right\} \\
\leqslant\text{} & \text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\frac{1}{2^{m\left(\beta+\alpha\right)}}\right\} \\
& +\text{Cap}_{q,N}\left\{ w:\ \left(\rho_{j}\left(\boldsymbol{w}^{(m)}\right)+\rho_{j}\left(\boldsymbol{w}^{(m+1)}\right)\right)^{k}>2^{m\alpha}\right\} \\
\leqslant\text{} & \text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\frac{1}{2^{m\left(\beta+\alpha\right)}}\right\} \\
& +\text{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\boldsymbol{w}^{(m)}\right)>2^{\frac{m\alpha}{k}-1}\right\} \\
& +\text{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\boldsymbol{w}^{(m+1)}\right)>2^{\frac{m\alpha}{k}-1}\right\}
\end{align*}
By Lemma \ref{estimating tail events}, for $\theta\in\left(\left(\frac{p(2h+1)}{6}-1\right)^{+},hp-1\right)$,
$\widetilde{N}>\frac{N}{2}\vee\left(2\left(h-\frac{\theta+1}{p}\right)\right)^{-1}$
and $i=1,2,3,$ we have
\begin{align*}
& \ \text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\frac{1}{2^{m\left(\beta+\alpha\right)}}\right\} \\
\leqslant & \ C_{i}\left(\frac{1}{2^{m}}\right)^{2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1-2\left(\beta+\alpha\right)\widetilde{N}}.
\end{align*}
Let $\alpha,\beta>0$ be such that
\begin{equation}
\frac{2\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1}{2\tilde{N}}>\beta+\alpha>0.\label{choice of alpha and beta}
\end{equation}
It follows easily that
\begin{equation}
\sum_{m=1}^{\infty}\text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\frac{1}{2^{m\left(\beta+\alpha\right)}}\right\} <\infty.\label{first term summable}
\end{equation}
Similarly,
\[
\text{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\boldsymbol{w}^{(m)}\right)>2^{\frac{m\alpha}{k}-1}\right\} \leqslant C_{j}2^{-2\tilde{N}\left(\frac{m\alpha}{k}-1\right)},
\]
and hence
\[
\sum_{m=1}^{\infty}\text{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\boldsymbol{w}^{(m)}\right)>2^{\frac{m\alpha}{k}-1}\right\} <\infty.
\]
Combining with (\ref{first term summable}), we arrive at
\begin{align*}
& \sum_{m=1}^{\infty}\text{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)\left(\rho_{j}\left(\boldsymbol{w}^{(m)}\right)+\rho_{j}\left(\boldsymbol{w}^{(m+1)}\right)\right)^{k}>\frac{1}{2^{m\beta}}\right\} \\
< & \ \infty.
\end{align*}
The case of $k=0$ follows directly from (\ref{first term summable}),
since for all $\alpha>0$,
\[
\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\frac{1}{2^{m\beta}}\right\} \subset\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(m+1)},\boldsymbol{w}^{(m)}\right)>\frac{1}{2^{m\left(\beta+\alpha\right)}}\right\} .
\]
Now the proof is complete.
\end{proof}
\section{Large Deviations for Capacities}
In this section, we apply the previous technique to prove a large
deviation principle for capacities for Gaussian rough paths with long-time
memory.
Before stating our main result, we first recall the definition of
general LDPs for induced capacities in Polish spaces (see \cite{GR},
\cite{Yoshida}).
Let $(B,H,\mu)$ be an abstract Wiener space.
\begin{defn}
Let $q>1,N\in\mathbb{N},$ and let $\{T^{\varepsilon}\}$ be a family
of $\mbox{Cap}_{q,N}$-quasi-surely defined maps from $B$ to some
Polish space $(X,d).$ We say that the family $\{T^{\varepsilon}\}$
satisfies the $\mbox{Cap}_{q,N}$-\textit{large deviation principle}
(or simply $\mbox{Cap}_{q,N}$-$LDP$) with good rate function $I:\ X\rightarrow[0,\infty]$
if
(1) $I$ is a good rate function on $X,$ i.e. $I$ is lower semi-continuous
and for every $\alpha>0,$ the level set $\Psi_{I}(\alpha)=\{y\in X:\ I(y)\leqslant\alpha\}$
is compact in $X$;
(2) for every closed subset $C\subset X,$ we have
\begin{equation}
\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mbox{Cap}_{q,N}\left\{ w\in B:\ T^{\varepsilon}(w)\in C\right\} \leqslant-\frac{1}{q}\inf_{x\in C}I(x),\label{upper bound for LDP}
\end{equation}
and for every open subset $G\subset X,$ we have
\begin{equation}
\liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mbox{Cap}_{q,N}\left\{ w\in B:\ T^{\varepsilon}(w)\in G\right\} \geqslant-\frac{1}{q}\inf_{x\in G}I(x).\label{lower bound for LDP}
\end{equation}
\end{defn}
\begin{rem}
The appearance of the factor $1/q$ comes from the definition of $\mbox{Cap}_{q,N}$,
which gives
\begin{equation}
\mbox{Cap}_{q,N}(A)\geqslant\mbox{Cap}_{q,0}(A)=\mathbb{P}(A)^{\frac{1}{q}},\ \forall A\in\mathcal{B}(B).\label{relation between measure and capacity}
\end{equation}
This is consistent with the classical large deviation principle for
probability measures.
\end{rem}
Due to the properties of the $(q,N)$-capacity, many important results
for LDPs carry over to the capacity setting without much
difficulty, and the proofs are similar to the case of probability
measures. Here we present two fundamental results on transformations
of LDPs for capacities that are crucial for us and did not appear
in \cite{GR}, \cite{Yoshida} or the related literature.
The first result is the contraction principle.
\begin{thm}
\label{contraction principle}Let $\{T^{\varepsilon}\}$ be a family
of $\mathrm{Cap}_{q,N}$-quasi-surely defined maps from $B$ to $(X,d)$
satisfying the $\mathrm{Cap}_{q,N}$-LDP with good rate function $I.$
Let $F$ be a continuous map from $X$ to another Polish space $(Y,d').$
Then the family $\{F\circ T^{\varepsilon}\}$ of $\mathrm{Cap}_{q,N}$-quasi-surely
defined maps satisfies the $\mathrm{Cap}_{q,N}$-LDP with good
rate function
\begin{equation}
J(y)=\inf_{x:\ F(x)=y}I(x),\label{rate function in contraction principle}
\end{equation}
where we define $\inf\emptyset=\infty.$ \end{thm}
\begin{proof}
Since $I$ is a good rate function, it is not hard to see that $J$
is lower semi-continuous. Moreover, by the goodness of $I$ and the continuity
of $F$, if $J(y)<\infty$ then the infimum in (\ref{rate function in contraction principle})
is attained at some point $x\in F^{-1}(y).$ Therefore, for any $\alpha>0,$
we have
\[
\{y\in Y:\ J(y)\leqslant\alpha\}=F\left(\{x\in X:\ I(x)\leqslant\alpha\}\right),
\]
and hence $J$ is a good rate function. The $\mathrm{Cap}_{q,N}$-LDP
(the upper bound (\ref{upper bound for LDP}) and lower bound (\ref{lower bound for LDP}))
for the family $\{F\circ T^{\varepsilon}\}$ under the good rate function
$J$ follows easily from the continuity of $F.$
\end{proof}
The second result is about exponential good approximations.
We first need the following definition.
\begin{defn}
Let $\{T^{\varepsilon,m}\}$ and $\{T^{\varepsilon}\}$ be two families
of $\mbox{Cap}_{q,N}$-quasi-surely defined maps from $B$ to $(X,d).$
We say that $\{T^{\varepsilon,m}\}$ are \textit{exponentially good
approximations} of $\{T^{\varepsilon}\}$ under $\mbox{Cap}_{q,N}$,
if for any $\lambda>0,$
\begin{equation}
\lim_{m\rightarrow\infty}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mbox{Cap}_{q,N}\{w:\ d(T^{\varepsilon,m}(w),T^{\varepsilon}(w))>\lambda\}=-\infty.\label{exponential good approximation}
\end{equation}
\end{defn}
Now we have the following result.
\begin{thm}
\label{exponential approximation}Suppose that for each $m\geqslant1,$
the family $\{T^{\varepsilon,m}\}$ of $\mathrm{Cap}_{q,N}$-quasi-surely
defined maps satisfies the $\mathrm{Cap}_{q,N}$-LDP with good rate
function $I_{m}$ and $\{T^{\varepsilon,m}\}$ are exponentially good
approximations of some family $\{T^{\varepsilon}\}$ of $\mathrm{Cap}_{q,N}$-quasi-surely
defined maps. Suppose further that the function $I$ defined by
\begin{equation}
I(x)=\sup_{\lambda>0}\liminf_{m\rightarrow\infty}\inf_{y\in B_{x,\lambda}}I_{m}(y),\label{rate function given in exponential approximation}
\end{equation}
where $B_{x,\lambda}$ denotes the open ball $\{y\in X:\ d(y,x)<\lambda\},$
is a good rate function and for every closed set $C\subset X,$
\begin{equation}
\inf_{x\in C}I(x)\leqslant\limsup_{m\rightarrow\infty}\inf_{x\in C}I_{m}(x).\label{upper bound criterion in exponential approximations}
\end{equation}
Then $\{T^{\varepsilon}\}$ satisfies the $\mathrm{Cap}_{q,N}$-LDP
with good rate function $I.$\end{thm}
\begin{proof}
\textit{Upper bound.} Let $C$ be a closed subset of $X$. For any
$\lambda>0,$ let $C_{\lambda}=\{x:\ d(x,C)\leqslant\lambda\}.$ Since
\begin{align*}
& \left\{ w:\ T^{\varepsilon}(w)\in C\right\} \\
\subset & \left\{ w:\ T^{\varepsilon,m}(w)\in C_{\lambda}\right\} \bigcup\left\{ w:\ d\left(T^{\varepsilon,m}(w),T^{\varepsilon}(w)\right)>\lambda\right\} ,
\end{align*}
it follows from the $\mathrm{Cap}_{q,N}$-LDP for $\{T^{\varepsilon,m}\}$
(the upper bound) that
\begin{align*}
& \ \limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in C\right\} \\
\leqslant & \ \limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon,m}(w)\in C_{\lambda}\right\} \\
& \ \vee\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ d\left(T^{\varepsilon,m}(w),T^{\varepsilon}(w)\right)>\lambda\right\} \\
\leqslant & \ \left(-\frac{1}{q}\inf_{x\in C_{\lambda}}I_{m}(x)\right)\vee\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ d\left(T^{\varepsilon,m}(w),T^{\varepsilon}(w)\right)>\lambda\right\} .
\end{align*}
By letting $m\rightarrow\infty,$ we obtain from (\ref{exponential good approximation})
and (\ref{upper bound criterion in exponential approximations}) that
\begin{align*}
\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in C\right\} \leqslant & -\frac{1}{q}\limsup_{m\rightarrow\infty}\inf_{x\in C_{\lambda}}I_{m}(x)\\
\leqslant & -\frac{1}{q}\inf_{x\in C_{\lambda}}I(x).
\end{align*}
Now the upper bound (\ref{upper bound for LDP}) follows from a basic
property for good rate functions (see \cite{DZ}, Lemma 4.1.6) that
\[
\lim_{\lambda\rightarrow0}\inf_{x\in C_{\lambda}}I(x)=\inf_{x\in C}I(x).
\]
To prove the lower bound (\ref{lower bound for LDP}), we first show
that
\begin{eqnarray}
-\frac{1}{q}I(x) & = & \inf_{\lambda>0}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,\lambda}\right\} \nonumber \\
& = & \inf_{\lambda>0}\liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,\lambda}\right\} .\label{Key step in exponential approximation}
\end{eqnarray}
In fact, since
\begin{align}
& \left\{ w:\ T^{\varepsilon,m}(w)\in B_{x,\lambda}\right\} \nonumber \\
\subset & \left\{ w:\ T^{\varepsilon}(w)\in B_{x,2\lambda}\right\} \bigcup\left\{ w:\ d(T^{\varepsilon,m}(w),T^{\varepsilon}(w))>\lambda\right\} ,\label{event inclusion in LDP}
\end{align}
we have
\begin{align*}
& \ \mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon,m}(w)\in B_{x,\lambda}\right\} \\
\leqslant & \ \mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,2\lambda}\right\} +\mathrm{Cap}_{q,N}\left\{ w:\ d(T^{\varepsilon,m}(w),T^{\varepsilon}(w))>\lambda\right\} .
\end{align*}
It follows from the $\mathrm{Cap}_{q,N}$-LDP (the lower bound) for
$\{T^{\varepsilon,m}\}$ that
\begin{align*}
-\frac{1}{q}\inf_{y\in B_{x,\lambda}}I_{m}(y)\leqslant & \ \liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon,m}(w)\in B_{x,\lambda}\right\} \\
\leqslant & \ \liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\left(\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,2\lambda}\right\} \right.\\
& \left.\ \vee\log\mathrm{Cap}_{q,N}\left\{ w:\ d(T^{\varepsilon,m}(w),T^{\varepsilon}(w))>\lambda\right\} \right)\\
\leqslant & \ \liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,2\lambda}\right\} \\
& \ \vee\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ d(T^{\varepsilon,m}(w),T^{\varepsilon}(w))>\lambda\right\} ,
\end{align*}
and (\ref{exponential good approximation}) implies that
\begin{align*}
-\frac{1}{q}\liminf_{m\rightarrow\infty}\inf_{y\in B_{x,\lambda}}I_{m}(y)\leqslant & \ \liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,2\lambda}\right\} .
\end{align*}
Taking the infimum over $\lambda>0$, we obtain
\[
-\frac{1}{q}I(x)\leqslant\inf_{\lambda>0}\liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,2\lambda}\right\} .
\]
On the other hand, by exchanging $T^{\varepsilon,m}$ and $T^{\varepsilon}$
in (\ref{event inclusion in LDP}), the same argument yields that
(using the upper bound in the $\mathrm{Cap}_{q,N}$-LDP)
\[
\inf_{\lambda>0}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,\lambda}\right\} \leqslant-\frac{1}{q}I(x).
\]
Therefore, (\ref{Key step in exponential approximation}) follows.

\textit{Lower bound.} Let $G$ be an open subset of $X$. For any
fixed $x\in G,$ take $\lambda>0$ such that $B_{x,\lambda}\subset G.$
It follows from (\ref{Key step in exponential approximation}) that
\begin{align*}
& \liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mbox{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in G\right\} \\
\geqslant & \liminf_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mbox{Cap}_{q,N}\left\{ w:\ T^{\varepsilon}(w)\in B_{x,\lambda}\right\} \\
\geqslant & -\frac{1}{q}I(x).
\end{align*}
Therefore, the lower bound (\ref{lower bound for LDP}) holds.
\end{proof}
Consider the abstract Wiener space $(W,\mathcal{H},\mathbb{P})$ associated
with a Gaussian process satisfying the assumptions in Theorem \ref{quasi-sure convergence}.
According to \cite{FV}, the covariance function of the process has
finite $(1/2h)$-variation in the 2D sense, and $\mathcal{H}$ is
continuously embedded in the space of continuous paths with finite
$(1/2h)$-variation. Therefore, every $h\in\mathcal{H}$ admits a
natural lifting $\boldsymbol{h}$ in $G\Omega_{p}(\mathbb{R}^{d})$
in the sense of iterated Young's integrals (see \cite{Young}).
Recall that $\mathcal{A}_{p}$ is the set of paths $w\in W$ such
that the lifting $\boldsymbol{w}^{(m)}$ of the dyadic piecewise linear
interpolation of $w$ is a Cauchy sequence under $d_{p}$, and the
map
\[
F:\ w\in\mathcal{A}_{p}\mapsto\boldsymbol{w}=\left(1,w^{1},\cdots,w^{[p]}\right):=\lim_{m\rightarrow\infty}\boldsymbol{w}^{(m)}\in G\Omega_{p}\left(\mathbb{R}^{d}\right)
\]
is well-defined. For $\varepsilon>0,$ let $T^{\varepsilon}:\ \mathcal{A}_{p}\rightarrow G\Omega_{p}\left(\mathbb{R}^{d}\right)$
be the map defined by
\[
T^{\varepsilon}(w)=\delta_{\varepsilon}\boldsymbol{w}:=(1,\varepsilon w^{1},\cdots,\varepsilon^{[p]}w^{[p]}).
\]
By Theorem \ref{quasi-sure convergence}, $\mathcal{A}_{p}^{c}$ is
a slim set. Therefore, $T^{\varepsilon}$ is quasi-surely well-defined.
Let
\begin{equation}
\Lambda(w)=\begin{cases}
\frac{1}{2}\|w\|_{\mathcal{H}}^{2}, & w\in\mathcal{H};\\
\infty, & \mbox{otherwise},
\end{cases}\label{original rate function}
\end{equation}
and define $I:\ G\Omega_{p}\left(\mathbb{R}^{d}\right)\rightarrow[0,\infty]$
by
\begin{equation}
I(\boldsymbol{w})=\inf\{\Lambda(w):\ w\in\mathcal{A}_{p},\ F(w)=\boldsymbol{w}\}.\label{good rate function}
\end{equation}
We will see later in Lemma \ref{Cameron-Martin convergence} that
$\mathcal{H}\subset\mathcal{A}_{p}$ and hence
\[
I(\boldsymbol{w})=\begin{cases}
\frac{1}{2}\left\Vert \pi_{1}(\boldsymbol{w})_{0,\cdot}\right\Vert _{\mathcal{H}}^{2}, & \mbox{if}\ \pi_{1}(\boldsymbol{w})_{0,\cdot}\in\mathcal{H}\ \mbox{and }\boldsymbol{w}=F\left(\pi_{1}(\boldsymbol{w})_{0,\cdot}\right);\\
\infty, & \mbox{otherwise,}
\end{cases}
\]
where $\pi_{1}$ is the projection onto the first level path.
Now we can state the main result of this section.
\begin{thm}
\label{LDP for capacity}For any $q>1,$ $N\in\mathbb{N},$ the family
$\{T^{\varepsilon}\}$ of $\mathrm{Cap}_{q,N}$-quasi-surely defined
maps from $W$ to $G\Omega_{p}\left(\mathbb{R}^{d}\right)$ satisfies
the $\mathrm{Cap}_{q,N}$-LDP with good rate function $I$.
\end{thm}
In particular, since the projection map from $G\Omega_{p}\left(\mathbb{R}^{d}\right)$
onto the first level path is continuous, we immediately obtain the
following result of Yoshida \cite{Yoshida} in the case of Gaussian
processes with long-time memory.
\begin{cor}
The family of maps $\{\varepsilon w\}$ satisfies the $\mathrm{Cap}_{q,N}$-LDP
with good rate function $\Lambda.$
\end{cor}
Moreover, according to the universal limit theorem (Theorem \ref{Universal Limit Theorem})
and the contraction principle (Theorem \ref{contraction principle}),
a direct corollary of Theorem \ref{LDP for capacity} is the LDPs
for capacities for solutions to differential equations driven by Gaussian
rough paths with long-time memory. This generalizes the classical
Freidlin-Wentzell theory for diffusion measures to the quasi-sure
and rough path setting. Here we are again taking advantage of
working in the stronger topology (the $p$-variation topology), under
which we have nice stability for differential equations.
It should be pointed out that the lifting map $F$, which can be regarded
as the pathwise solution to a differential equation driven by $w$
with a polynomial one form, is \textit{not} continuous under the uniform
topology (see \cite{LCL}, \cite{LQ}). Therefore the contraction
principle cannot be applied directly in our context. A standard way
of getting around this difficulty, as in \cite{LQZ} for Brownian
motion and \cite{MS} for fractional Brownian motion in the case of
LDPs for probability measures, is to construct exponentially good
approximations by using dyadic piecewise linear interpolation. Here
we adopt the same idea in the capacity setting.
Let $T^{\varepsilon,m}:\ W\rightarrow G\Omega_{p}(\mathbb{R}^{d})$
be the map given by $T^{\varepsilon,m}(w)=\delta_{\varepsilon}\boldsymbol{w}^{(m)}.$
Applying Theorem \ref{exponential approximation} then essentially
consists of two parts: showing that the family $\left\{ T^{\varepsilon,m}\right\} $
satisfies a $\mathrm{Cap}_{q,N}$-LDP, and showing that $\left\{ T^{\varepsilon,m}\right\} $
are exponentially good approximations of $\{T^{\varepsilon}\}$ under
$\mathrm{Cap}_{q,N}$.
We first need to establish the $\mathrm{Cap}_{q,N}$-LDP for $\left\{ T^{\varepsilon,m}\right\} $,
and we begin by considering the standard finite dimensional abstract
Wiener space.
Let $\mu$ be the standard Gaussian measure on $\mathbb{R}^{n}.$
In this case, the Cameron-Martin space is just $\mathbb{R}^{n}$ equipped
with the standard Euclidean inner product. For clarity we use the
notation $\mathrm{Cap}_{q,N}^{(n)}$ to emphasize that the capacity
is defined on $\mathbb{R}^{n}$. Now we have the following result.
\begin{prop}
The family $\{\varepsilon x\}$ satisfies the $\mathrm{Cap}_{q,N}^{(n)}$-LDP
with good rate function
\[
J(x)=\frac{|x|^{2}}{2},\ x\in\mathbb{R}^{n}.
\]
\end{prop}
\begin{proof}
The lower bound follows immediately from the simple relation in (\ref{relation between measure and capacity})
and the classical LDP for the family $\{\mu\left(\varepsilon^{-1}dx\right)\}$
of probability measures. It suffices to establish the upper bound.
We first prove the following inequality for the one dimensional case:
\begin{equation}
\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(1)}\{x:\ \varepsilon x>b\}\leqslant-\frac{1}{2q}b^{2},\label{one dimensional case for half interval}
\end{equation}
where $b>0.$ In fact, for any $\lambda>0,$ define the non-negative
function
\[
f(x)=\mathrm{e}^{\lambda\varepsilon x-\lambda b},\ x\in\mathbb{R}^{1}.
\]
Obviously $f\in\mathbb{D}_{N}^{q}$, and $f\geqslant1$ on $\{x:\ \varepsilon x>b\}.$
Therefore, by the definition of capacity we have
\begin{eqnarray*}
\mathrm{Cap}_{q,N}^{(1)}\{x:\ \varepsilon x>b\} & \leqslant & \|f\|_{q,N}\\
& \leqslant & \sum_{i=0}^{N}\left(\int_{\mathbb{R}^{1}}\left|f^{(i)}\right|^{q}\mu(dx)\right)^{\frac{1}{q}}\\
& = & \sum_{i=0}^{N}\left(\int_{\mathbb{R}^{1}}(\lambda\varepsilon)^{qi}\mathrm{e}^{q\lambda\varepsilon x-q\lambda b}\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{x^{2}}{2}}dx\right)^{\frac{1}{q}}\\
& = & \sum_{i=0}^{N}(\lambda\varepsilon)^{i}\mathrm{e}^{\frac{q}{2}(\lambda\varepsilon)^{2}-\lambda b}.
\end{eqnarray*}
It follows that
\[
\varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(1)}\{x:\ \varepsilon x>b\}\leqslant\varepsilon^{2}\log N+\max_{0\leqslant i\leqslant N}\left\{ i\varepsilon^{2}\log(\lambda\varepsilon)\right\} +\frac{q}{2}(\lambda\varepsilon^{2})^{2}-\lambda\varepsilon^{2}b.
\]
Now take $\lambda=b/(q\varepsilon^{2})$; then we have
\[
\varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(1)}\{x:\ \varepsilon x>b\}\leqslant\varepsilon^{2}\log N+\max_{0\leqslant i\leqslant N}\left\{ i\varepsilon^{2}\log\left(\frac{b}{q\varepsilon}\right)\right\} -\frac{b^{2}}{2q},
\]
and therefore (\ref{one dimensional case for half interval}) holds.
Clearly, (\ref{one dimensional case for half interval}) still holds
if $\{x:\ \varepsilon x>b\}$ is replaced by $\{x:\ \varepsilon x\geqslant b\},$
and a similar inequality holds for $\{x:\ \varepsilon x\leqslant a\}$
for $a<0.$
Now we come back to the $n$-dimensional case.
Firstly, consider an open ball $B(a,r)\subset\mathbb{R}^{n}.$ For
any $\lambda\in\mathbb{R}^{n},$ consider the non-negative function
\[
f(x)=\mathrm{e}^{\langle\lambda,\varepsilon x\rangle+|\lambda|r-\langle\lambda,a\rangle},\ x\in\mathbb{R}^{n}.
\]
Then clearly $f\in\mathbb{D}_{N}^{q}$. Moreover, from
the fact that
\[
\langle\lambda,a\rangle-|\lambda|r=\inf_{y\in B(a,r)}\langle\lambda,y\rangle,
\]
we have $f\geqslant1$ on $\{x:\ \varepsilon x\in B(a,r)\}.$ Therefore,
similarly as before we have
\begin{eqnarray*}
\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in B(a,r)\} & \leqslant & \|f\|_{q,N}\\
& \leqslant & \sum_{i=0}^{N}\left(\int_{\mathbb{R}^{n}}\left|D^{i}f\right|^{q}\mu(dx)\right)^{\frac{1}{q}}\\
& \leqslant & \sum_{i=0}^{N}(n|\lambda|\varepsilon)^{i}\mathrm{e}^{\frac{1}{2}q(|\lambda|\varepsilon)^{2}+|\lambda|r-\langle\lambda,a\rangle}
\end{eqnarray*}
and
\begin{eqnarray*}
& & \varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in B(a,r)\}\\
& \leqslant & \varepsilon^{2}\log N+\max_{0\leqslant i\leqslant N}\left\{ i\varepsilon^{2}\log(n|\lambda\varepsilon|)\right\} +\frac{q}{2}\left(|\lambda|\varepsilon^{2}\right)^{2}\\
& & +|\lambda|\varepsilon^{2}r-\langle\varepsilon^{2}\lambda,a\rangle.
\end{eqnarray*}
Note that the function $\frac{q}{2}\left(|\lambda|\varepsilon^{2}\right)^{2}+|\lambda|\varepsilon^{2}r-\langle\varepsilon^{2}\lambda,a\rangle$
attains its minimum at
\[
\lambda=\frac{(|a|-r)^{+}}{q\varepsilon^{2}|a|}a.
\]
By taking this $\lambda$ and letting $\varepsilon\rightarrow0$,
we arrive at
\[
\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in B(a,r)\}\leqslant-\frac{1}{2q}\left((|a|-r)^{+}\right)^{2}=-\frac{1}{q}\inf_{y\in B(a,r)}J(y).
\]
Secondly, let $K$ be a compact subset of $\mathbb{R}^{n}.$ Then
for any $\delta>0,$ we can find a finite cover of $K$ by open balls
$\{B(a_{i},\delta)\}_{1\leqslant i\leqslant k(\delta)}$ where each
$a_{i}\in K.$ It follows that
\begin{align*}
& \limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in K\}\\
\leqslant & \limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\left(\log k(\delta)+\max_{1\leqslant i\leqslant k(\delta)}\log\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in B(a_{i},\delta)\}\right)\\
\leqslant & \max_{1\leqslant i\leqslant k(\delta)}\left(-\frac{1}{q}\inf_{y\in B(a_{i},\delta)}J(y)\right)\\
\leqslant & -\frac{1}{q}\inf_{y\in B(K,\delta)}J(y),
\end{align*}
where $B(K,\delta):=\{x:\ \mathrm{dist}(x,K)<\delta\}.$ By letting
$\delta\rightarrow0$ we obtain the upper bound result for the compact
set $K.$
Finally, let $C$ be an arbitrary closed subset of $\mathbb{R}^{n}.$
For $\rho>0,$ let
\[
H_{\rho}=\{x:\ \left|x^{i}\right|\leqslant\rho\ \mathrm{for\ all}\ i\}.
\]
Then we have
\[
\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in C\}\leqslant\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in C\bigcap H_{\rho}\}+\sum_{i=1}^{n}\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon\left|x^{i}\right|>\rho\}.
\]
On the other hand, from the definition of capacity, we have (see also
the proof of the following Corollary \ref{Cap-LDP for T^epsilon,m}):
\[
\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon\left|x^{i}\right|>\rho\}\leqslant\mathrm{Cap}_{q,N}^{(1)}\{x\in\mathbb{R}^{1}:\ \varepsilon|x|>\rho\}.
\]
Combining with the upper bound result for compact sets and (\ref{one dimensional case for half interval}),
we arrive at
\[
\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}^{(n)}\{x:\ \varepsilon x\in C\}\leqslant\max\left\{ -\frac{1}{q}\inf_{y\in C\bigcap H_{\rho}}J(y),-\frac{1}{q}\rho^{2}\right\} ,\ \forall\rho>0.
\]
The upper bound result for $C$ follows from letting $\rho\rightarrow\infty.$
\end{proof}
Now consider the situation where $\nu$ is a general non-degenerate
Gaussian measure on $\mathbb{R}^{n}$ with covariance matrix $\Sigma.$
In this case the Cameron-Martin space $\mathcal{H}=\mathbb{R}^{n}$
but with inner product
\[
\langle h_{1},h_{2}\rangle=h_{1}^{T}\Sigma^{-1}h_{2}.
\]
Moreover, the Cameron-Martin embedding $\iota:\ \mathcal{H}\rightarrow\mathbb{R}^{n}$
is just the identity map but the dual embedding $\iota^{*}:\ \mathbb{R}^{n}\rightarrow\mathcal{H}^{*}\cong\mathcal{H}$
is given by
\[
\iota^{*}(\lambda)=\Sigma\lambda,\ \lambda\in\mathbb{R}^{n}.
\]
Therefore, if we write $\Sigma=QQ^{T}$ for some non-degenerate matrix
$Q$, it follows from the definition of Sobolev spaces and change
of variables that
\[
\mathrm{Cap}_{q,N}^{\nu}(A)=\mathrm{Cap}_{q,N}^{\mu}\left(Q^{-1}A\right),\ \forall A\subset\mathbb{R}^{n},
\]
where the L.H.S. is the capacity for $\nu$ and the R.H.S. is the
capacity for the standard Gaussian measure $\mu.$ In other words,
capacities for non-degenerate Gaussian measures on $\mathbb{R}^{n}$
are all equivalent. As a consequence, we conclude that the family
$\{\varepsilon x\}$ satisfies the $\mathrm{Cap}_{q,N}^{\nu}$-LDP
with good rate function
\[
J(y)=\frac{1}{2}\|y\|_{\mathcal{H}}^{2}=\frac{1}{2}y^{T}\Sigma^{-1}y,\ y\in\mathbb{R}^{n}.
\]
The case of degenerate Gaussian measures follows easily by restricting
to the maximal subspace on which the covariance matrix is
positive definite.
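To sketch the change of variables behind the identity $\mathrm{Cap}_{q,N}^{\nu}(A)=\mathrm{Cap}_{q,N}^{\mu}\left(Q^{-1}A\right)$ above (we only spell out the zeroth order term; the derivative terms transform analogously under the linear map $Q$): since $\nu=\mu\circ Q^{-1},$ if $f\geqslant1$ on $A$ $\nu$-almost surely with $f\geqslant0,$ then $g:=f\circ Q$ satisfies $g\geqslant1$ on $Q^{-1}A$ $\mu$-almost surely, $g\geqslant0,$ and
\[
\int_{\mathbb{R}^{n}}|g|^{q}d\mu=\int_{\mathbb{R}^{n}}|f|^{q}d\nu.
\]
Taking infima over all such $f$ yields $\mathrm{Cap}_{q,N}^{\mu}\left(Q^{-1}A\right)\leqslant\mathrm{Cap}_{q,N}^{\nu}(A),$ and the reverse inequality follows by the same argument applied to $Q^{-1}.$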
A direct consequence of the previous discussion is the following.
\begin{cor}
\label{Cap-LDP for T^epsilon,m}For each $m\geqslant1,$ the family
$\{T^{\varepsilon,m}\}$ satisfies the $\mathrm{Cap}_{q,N}$-LDP with
good rate function
\begin{equation}
I_{m}(\boldsymbol{w})=\inf\left\{ J_{m}(x):\ x\in\left(\mathbb{R}^{d}\right)^{2^{m}}:\ \Phi_{m}(x)=\boldsymbol{w}\right\} ,\ \boldsymbol{w}\in G\Omega_{p}\left(\mathbb{R}^{d}\right),\label{Rate function in the discrete case}
\end{equation}
where $J_{m}(x)$ is the good rate function for the Gaussian measure
$\nu_{m}$ on $\left(\mathbb{R}^{d}\right)^{2^{m}}$ induced by $\left(w_{t_{m}^{1}},\cdots,w_{t_{m}^{2^{m}}}\right),$
and $\Phi_{m}$ is the map sending each $x\in\left(\mathbb{R}^{d}\right)^{2^{m}}$
to the lifting of the dyadic piecewise linear interpolation associated
with $x.$\end{cor}
\begin{proof}
Since $\Phi_{m}$ is continuous under the Euclidean and $p$-variation
topology respectively, the result follows immediately from the contraction
principle (Theorem \ref{contraction principle}) once we have established
the $\mathrm{Cap}_{q,N}$-LDP for the family $\varepsilon\pi_{m}:\ W\rightarrow\left(\mathbb{R}^{d}\right)^{2^{m}}$
where $\pi_{m}$ is defined by
\[
\pi_{m}(w)=\left(w_{t_{m}^{1}},\cdots,w_{t_{m}^{2^{m}}}\right),\ w\in W,
\]
with good rate function $J_{m}.$
To see this, first notice again that the lower bound follows from
the relation (\ref{relation between measure and capacity}) and the
classical LDP for finite dimensional Gaussian measures. Moreover,
let $U$ be an open subset of $\left(\mathbb{R}^{d}\right)^{2^{m}}$
and let $f\in\mathbb{D}_{N}^{q}\left(\nu_{m}\right)$ be a function
such that, $\nu_{m}$-almost surely,
\[
f\geqslant1\ \mathrm{on}\ U,\ f\geqslant0\ \mathrm{on}\ \left(\mathbb{R}^{d}\right)^{2^{m}},
\]
where $\mathbb{D}_{N}^{q}\left(\nu_{m}\right)$ is the Sobolev space over
$\left(\mathbb{R}^{d}\right)^{2^{m}}$ associated with $\nu_{m}$.
Define
\[
g(w)=f\left(w_{t_{m}^{1}},\cdots,w_{t_{m}^{2^{m}}}\right),\ w\in W.
\]
Clearly $g\in\mathbb{D}_{N}^{q}$, and $\mathbb{P}$-almost
surely
\[
g\geqslant1\ \mathrm{on}\ \pi_{m}^{-1}U,\ g\geqslant0\ \mathrm{on}\ W.
\]
Moreover, since $\|g\|_{q,N}=\|f\|_{q,N;\nu_{m}}$, we know that
\[
\mathrm{Cap}_{q,N}\left(\pi_{m}^{-1}U\right)\leqslant\|f\|_{q,N;\nu_{m}}.
\]
Taking the infimum over all such $f,$ we obtain
\[
\mathrm{Cap}_{q,N}\left(\pi_{m}^{-1}U\right)\leqslant\mathrm{Cap}_{q,N}^{\nu_{m}}(U).
\]
Now the upper bound result follows from the $\mathrm{Cap}_{q,N}$-LDP
for the family $\{\nu_{m,\varepsilon}:=\nu_{m}(\varepsilon^{-1}dx)\}$
of probability measures and a simple limiting argument.
\end{proof}
\begin{rem}
There is an equivalent way of expressing the rate function $I_{m},$
which is very convenient for us to prove our main result of Theorem
\ref{LDP for capacity}. In fact, from classical LDP results for Gaussian
measures (see for example \cite{DS}), we know that the family $\left\{ \mathbb{P}_{\varepsilon}:=\mathbb{P}\left(\varepsilon^{-1}dw\right)\right\} $
of probability measures on $W$ satisfies the LDP with good rate function
$\Lambda$ given by (\ref{original rate function}). Moreover, the
map $\Psi_{m}:\ W\rightarrow G\Omega_{p}(\mathbb{R}^{d})$ defined
by $\Psi_{m}(w)=\boldsymbol{w}^{(m)}$ is continuous under the uniform
and $p$-variation topology respectively. Therefore, according to
the classical contraction principle, the family $\{\mathbb{P}_{\varepsilon}\circ\Psi_{m}^{-1}\}$
of probability measures on $G\Omega_{p}\left(\mathbb{R}^{d}\right)$
satisfies the LDP with good rate function
\begin{equation}
I_{m}'(\boldsymbol{w})=\inf\left\{ \Lambda(w):\ w\in W,\ \Psi_{m}(w)=\boldsymbol{w}\right\} ,\ \boldsymbol{w}\in G\Omega_{p}\left(\mathbb{R}^{d}\right).\label{approximating rate function}
\end{equation}
On the other hand, the same argument implies that the family $\left\{ \nu_{m,\varepsilon}\circ\Phi_{m}^{-1}\right\} $
of probability measures on $G\Omega_{p}\left(\mathbb{R}^{d}\right)$
satisfies the LDP with good rate function $I_{m}$ given by (\ref{Rate function in the discrete case}).
Observe that $\mathbb{P}_{\varepsilon}\circ\Psi_{m}^{-1}=\nu_{m,\varepsilon}\circ\Phi_{m}^{-1}.$
By the uniqueness of rate functions (see \cite{DZ}, Chapter 4, Lemma
4.1.4), we conclude that $I_{m}=I_{m}'.$
\end{rem}
\begin{rem}
Of course, we could apply Yoshida's result directly together with the
contraction principle to obtain the $\mathrm{Cap}_{q,N}$-LDP for the
family $\left\{ T^{\varepsilon,m}\right\} $ with good rate function
$I'_{m}.$ We do not proceed in this way, so that in the end our result
yields Yoshida's as a corollary, and our proof relies only on basic
properties of capacities and finite dimensional Gaussian spaces.
\end{rem}
The second main ingredient of proving Theorem \ref{LDP for capacity}
is the following.
\begin{lem}
\label{Key Lemma for LDP}For any $q>1,N\in\mathbb{N}$ and $\lambda>0$,
we have
\[
\lim_{m\rightarrow\infty}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\mathrm{Cap}_{q,N}\left\{ w:\ d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(m)},\delta_{\varepsilon}\boldsymbol{w}\right)>\lambda\right\} =-\infty.
\]
Therefore, $\{T^{\varepsilon,m}\}$ are exponentially good approximations
of $\{T^{\varepsilon}\}$ under $\mathrm{Cap}_{q,N}$.\end{lem}
\begin{proof}
For any $\beta>0,$ since
\begin{align*}
& \left\{ w:\ d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(m)},\delta_{\varepsilon}\boldsymbol{w}\right)>\lambda\right\} \\
\subset & \left\{ w:\ \sum_{l=m}^{\infty}d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\lambda\right\} \\
\subset & \bigcup_{l=m}^{\infty}\left\{ w:\ d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{\beta}}\cdot\frac{1}{2^{(l-m)\beta}}\right\} ,
\end{align*}
we have
\begin{align*}
& \ \mbox{Cap}_{q,N}\left\{ w:\ d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(m)},\delta_{\varepsilon}\boldsymbol{w}\right)>\lambda\right\} \\
\leqslant & \ \sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{\beta}}\cdot\frac{1}{2^{(l-m)\beta}}\right\} ,
\end{align*}
where $C_{\beta}:=\sum_{k=0}^{\infty}2^{-\beta k}.$ It then follows
from (\ref{rewritting control of d_p}) that for any $\alpha>0$,
\begin{align}
& \ \mbox{Cap}_{q,N}\left\{ w:\ d_{p}\left(\delta_{\varepsilon}\boldsymbol{w}^{(m)},\delta_{\varepsilon}\boldsymbol{w}\right)>\lambda\right\} \nonumber \\
\leqslant & \ \sum_{i=1}^{3}\sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{1}{2^{(l-m)\beta}}\right\} \nonumber \\
& \ +\sum_{\substack{i,j,k\geqslant1\\
i+jk\leqslant3
}
}\sum_{l=m}^{\infty}\left(\mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{2^{m\beta}}{2^{l(\alpha+\beta)}}\right\} \right.\nonumber \\
& \ +\mbox{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)}\right)>2^{\frac{l\alpha}{k}-1}\right\} \nonumber \\
& \ +\mbox{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>2^{\frac{l\alpha}{k}-1}\right\} ,\label{several terms}
\end{align}
where $C_{d,p,\gamma,\beta}$ is a constant depending only on $p,d,\gamma,\beta$.
As in the proof of Theorem \ref{quasi-sure convergence}, we
estimate each term on the R.H.S. of (\ref{several terms}). Here we
choose $\alpha,\beta$ in exactly the same way as in the proof of
Theorem \ref{quasi-sure convergence}, namely, by (\ref{choice of alpha and beta}).
It should be pointed out that the choice of $\alpha,\beta$ can be
made independent of $\widetilde{N},$ since $\theta\in\left(\left(\frac{p(2h+1)}{6}-1\right)^{+},hp-1\right).$
Firstly, it follows from Lemma \ref{estimating tail events} that
for $i=1,2,3,$
\begin{align}
& \ \mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{1}{2^{(l-m)\beta}}\right\} \nonumber \\
= & \ \mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\boldsymbol{w}^{(l)},\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{\varepsilon^{-i}}{2^{(l-m)\beta}}\right\} \nonumber \\
\leqslant & \ C_{1}C_{2}^{\widetilde{N}}g\left(\widetilde{N};N\right)\widetilde{N}^{i\widetilde{N}}\cdot\left(\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{\varepsilon^{-i}}{2^{(l-m)\beta}}\right)^{-2\widetilde{N}}\cdot\left(\frac{1}{2^{l}}\right)^{2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1}\nonumber \\
= & \ C_{1}C_{3}^{\widetilde{N}}g\left(\widetilde{N};N\right)\left(\widetilde{N}\varepsilon^{2}\right)^{i\widetilde{N}}\cdot\frac{1}{2^{2m\widetilde{N}\beta}}\left(\frac{1}{2^{l}}\right)^{2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1-2\widetilde{N}\beta},\label{first term}
\end{align}
where $C_{3}=C_{2}\left(\frac{\lambda}{C_{d,p,\gamma,\beta}}\right)^{-2}.$
Note that by the choice of $\beta,$ the R.H.S. of (\ref{first term})
is summable over $l,$ and it follows that
\begin{align*}
& \ \sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{1}{2^{(l-m)\beta}}\right\} \\
\leqslant & \ C_{4}C_{3}^{\widetilde{N}}g\left(\widetilde{N};N\right)\left(\widetilde{N}\varepsilon^{2}\right)^{i\widetilde{N}}\cdot\left(\frac{1}{2^{m}}\right)^{2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1},
\end{align*}
where $C_{4}=C_{1}\left(1-2^{-\left(2i\widetilde{N}\left(h-\frac{\theta+1}{p}\right)-1-2\widetilde{N}\beta\right)}\right)^{-1}.$
By taking $\widetilde{N}=\left[\varepsilon^{-2}\right]$ for $\varepsilon$
small enough, it is easy to see that
\begin{align*}
& \limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\left(\sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{1}{2^{(l-m)\beta}}\right\} \right)\\
\leqslant & \log C_{3}+2i\left(h-\frac{\theta+1}{p}\right)\log\left(\frac{1}{2^{m}}\right).
\end{align*}
Therefore, we have
\begin{align*}
& \lim_{m\rightarrow\infty}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\left(\sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{1}{2^{(l-m)\beta}}\right\} \right)\\
= & -\infty.
\end{align*}
Again by the choice of $\alpha,\beta$ and by taking $\widetilde{N}=\left[\varepsilon^{-2}\right],$
the same computation based on Lemma \ref{estimating tail events}
yields that
\begin{align*}
& \lim_{m\rightarrow\infty}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\left(\sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{i}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)},\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>\frac{\lambda}{C_{d,p,\gamma,\beta}}\frac{2^{m\beta}}{2^{l(\alpha+\beta)}}\right\} \right)\\
= & \lim_{m\rightarrow\infty}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\left(\sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l)}\right)>2^{\frac{l\alpha}{k}-1}\right\} \right)\\
= & \lim_{m\rightarrow\infty}\limsup_{\varepsilon\rightarrow0}\varepsilon^{2}\log\left(\sum_{l=m}^{\infty}\mbox{Cap}_{q,N}\left\{ w:\ \rho_{j}\left(\delta_{\varepsilon}\boldsymbol{w}^{(l+1)}\right)>2^{\frac{l\alpha}{k}-1}\right\} \right)\\
= & -\infty,
\end{align*}
for $i,j,k\geqslant1$ with $i+jk\leqslant3.$
Now the desired result follows easily.
\end{proof}
In order to apply Theorem \ref{exponential approximation}, we need
the following convergence result in \cite{FV} for Cameron-Martin
paths.
\begin{lem}
\label{Cameron-Martin convergence} For any $\alpha>0,$ we have
\[
\lim_{m\rightarrow\infty}\sup_{\{h\in\mathcal{H}:\ \|h\|_{\mathcal{H}}\leqslant\alpha\}}d_{p}\left(\boldsymbol{h}^{(m)},\boldsymbol{h}\right)=0.
\]
In particular, $\mathcal{H}$ is contained in $\mathcal{A}_{p}.$
\end{lem}
Now we are in a position to prove Theorem \ref{LDP for capacity}.
\begin{proof}[Proof of Theorem \ref{LDP for capacity}]
It suffices to show that the function $I$ given by (\ref{good rate function})
coincides with the one given by (\ref{rate function given in exponential approximation}),
and it satisfies all conditions in Theorem \ref{exponential approximation}.
Here we use $I_{m}'$ given by (\ref{approximating rate function})
for the rate function of $\{T^{\varepsilon,m}\}.$
Firstly, by Lemma \ref{Cameron-Martin convergence} it is easy to
see that the lifting map $F$ is continuous on each level set $\{w:\ \Lambda(w)\leqslant\alpha\}\subset\mathcal{H}\subset\mathcal{A}_{p}$
of $\Lambda.$ It follows from the definition of $I$ that
\[
F(\{w:\ \Lambda(w)\leqslant\alpha\})=\{\boldsymbol{w}:\ I(\boldsymbol{w})\leqslant\alpha\},
\]
which then implies that $I$ is a good rate function.
Now we show that for any closed subset $C\subset G\Omega_{p}\left(\mathbb{R}^{d}\right),$
we have
\begin{equation}
\inf_{\boldsymbol{w}\in C}I(\boldsymbol{w})\leqslant\liminf_{m\rightarrow\infty}\inf_{\boldsymbol{w}\in C}I'_{m}(\boldsymbol{w}).\label{verifying conditions}
\end{equation}
In fact, let $\gamma_{m}=\inf_{\boldsymbol{w}\in C}I'_{m}(\boldsymbol{w})=\inf_{w\in\Psi_{m}^{-1}(C)}\Lambda(w).$
We only consider the nontrivial case $\liminf_{m\rightarrow\infty}\gamma_{m}=\alpha<\infty,$
and without loss of generality we assume that $\lim_{m\rightarrow\infty}\gamma_{m}=\alpha.$
Since $\Lambda$ is a good rate function, we know that the infimum
over the closed subset $\Psi_{m}^{-1}(C)\subset W$ is attainable.
Therefore, there exists $w_{m}\in W$ such that $\Psi_{m}(w_{m})\in C$
and $\gamma_{m}=\Lambda(w_{m}).$ It follows from Lemma \ref{Cameron-Martin convergence}
that for any fixed $\lambda>0,$ $F(w_{m})\in C_{\lambda}$ when $m$
is large, where $C_{\lambda}:=\{\boldsymbol{w}:\ d_{p}(\boldsymbol{w},C)\leqslant\lambda\}$.
Consequently, when $m$ is large, we have
\[
\inf_{\boldsymbol{w}\in C_{\lambda}}I(\boldsymbol{w})\leqslant I(F(w_{m}))=\Lambda(w_{m})=\gamma_{m},
\]
and hence
\[
\inf_{\boldsymbol{w}\in C_{\lambda}}I(\boldsymbol{w})\leqslant\alpha.
\]
Then (\ref{verifying conditions}) follows easily from \cite{DZ},
Chapter 4, Lemma 4.1.6, by taking $\lambda\rightarrow0$.
A direct consequence of (\ref{verifying conditions}) is the condition
(\ref{upper bound criterion in exponential approximations}) in Theorem
\ref{exponential approximation}. Moreover, if we let $C=\overline{B_{\boldsymbol{w},\lambda}}$
in (\ref{verifying conditions}), by taking $\lambda\rightarrow0$
we easily obtain that $I(\boldsymbol{w})\leqslant\overline{I}(\boldsymbol{w})$,
where $\overline{I}$ is the function given by (\ref{rate function given in exponential approximation}).
It remains to show that $\overline{I}(\boldsymbol{w})\leqslant I(\boldsymbol{w})$,
and we only consider the nontrivial case $I(\boldsymbol{w})=\alpha<\infty.$
It follows that $I(\boldsymbol{w})=\Lambda(w),$ where $w\in\mathcal{H}\subset\mathcal{A}_{p}$
with $F(w)=\boldsymbol{w}.$ Let $\boldsymbol{w}_{m}=\Psi_{m}(w).$
By Lemma \ref{Cameron-Martin convergence} we know that $\boldsymbol{w}_{m}\rightarrow\boldsymbol{w}$
under $d_{p}.$ Therefore, for any fixed $\lambda>0$,
\[
\inf_{\boldsymbol{w}'\in B_{\boldsymbol{w},\lambda}}I'_{m}(\boldsymbol{w}')\leqslant I'_{m}(\boldsymbol{w}_{m})\leqslant\Lambda(w)=I(\boldsymbol{w})
\]
when $m$ is large. By taking ``$\liminf_{m\rightarrow\infty}$''
and ``$\sup_{\lambda>0}$'', we obtain that $\overline{I}(\boldsymbol{w})\leqslant I(\boldsymbol{w}).$
Now the proof is complete.
\end{proof}
\begin{rem}
In some literature (in particular, in \cite{Yoshida}), the Sobolev
norms over $(W,\mathcal{H},\mathbb{P})$ are defined in terms of the
Ornstein-Uhlenbeck operator, which can be regarded as the infinite
dimensional Laplacian under the Gaussian measure $\mathbb{P}.$ An
advantage of using such norms is that they can be easily extended
to the fractional case. According to the well-known Meyer inequalities,
such norms are equivalent to the ones we have used here, which are
defined in terms of the Malliavin derivatives. Therefore, the LDP
for the corresponding capacities under these Sobolev norms holds in
exactly the same way.
\end{rem}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Quasilimiting behavior for one-dimensional diffusions with killing\thanksref{TT1}}
\runtitle{Quasilimiting behavior}
\begin{aug}
\author[A]{\fnms{Martin} \snm{Kolb}\ead[label=e1]{[email protected]}}
and
\author[A]{\fnms{David} \snm{Steinsaltz}\corref{}\ead[label=e2]{[email protected]}}
\runauthor{M. Kolb and D. Steinsaltz}
\affiliation{University of Oxford}
\address[A]{Department of Statistics\\
University of Oxford\\
1 South Parks Road\\
Oxford OX1 3TG\\
United Kingdom\\
\printead{e1}\\
\phantom{E-mail:\ }\printead*{e2}}
\end{aug}
\thankstext{TT1}{Supported in part by the New Dynamics of Ageing program.}
\received{\smonth{4} \syear{2010}}
\revised{\smonth{8} \syear{2010}}
\begin{abstract}
This paper extends and clarifies results of Steinsaltz and Evans
[\textit{Trans. Amer. Math. Soc.} \textbf{359} (\citeyear{quasistat}) 1285--1324], which found
conditions for convergence of a killed
one-dimensional diffusion conditioned on survival, to a quasistationary
distribution whose density is given by the principal eigenfunction of
the generator. Under the assumption that the limit of the killing at
infinity differs from the principal eigenvalue we prove that
convergence to quasistationarity occurs if and only if the principal
eigenfunction is integrable. When the killing at $\infty$ is
\textit{larger} than the principal eigenvalue, then the eigenfunction
is always
integrable. When the killing at $\infty$ is \textit{smaller}, the
eigenfunction is integrable only when the unkilled process is
recurrent; otherwise, the density of the process conditioned on
survival converges to 0 on any bounded interval.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60J60}
\kwd{60J70}
\kwd[; secondary ]{60J35}
\kwd{47E05}
\kwd{47F05}.
\end{keyword}
\begin{keyword}
\kwd{Killed one-dimensional diffusions}
\kwd{quasi-limiting distributions}.
\end{keyword}
\end{frontmatter}
\section{Introduction} \label{sec:intro}
\subsection{Background and history} \label{sec:background}
Killed Markov processes are central objects in probability theory. One
natural line of inquiry runs to questions about the asymptotic behavior
of the process conditioned on long-term survival.
We work with the one-dimensional diffusions $(X_t)_{t \geq0}$ on the
interval $[0,\infty)$, generated by the differential expression
$-L:=\frac{1}{2}\frac{d^2}{dx^2}+b\frac{d}{dx}$. In addition to the\vspace*{1pt}
possible killing at the boundary 0, there is a killing rate $\kappa
$, so that we will really be concerned with the differential expression
$-L^{\kappa}:=-L-\kappa$. (We leave the description of the domain,
and hence of the operator and attendant semigroup, for later, because
much of the analysis will depend on moving flexibly among various
domains on which this differential expression can operate.) Let $\nu$
be a compactly supported distribution on $[0,\infty)$. We aim to find
conditions which imply convergence of the family of distributions
\begin{equation}\label{mu}
\mu^{\nu}_t(\cdot):=\mathbb{P}_{\nu} (X_t \in\cdot\mid\tau_\partial
> t )
\end{equation}
as $t \rightarrow\infty$. This limit is sometimes called the
\textit{Yaglom limit}, after the seminal work of \citet{aY47} on
branching Markov processes conditioned on long survival. Any such limit
must be \textit{quasistationary}, in the sense that when started in
this distribution the process will remain in a multiple of the same
distribution for all times. The extensive mathematical development and
wide-ranging applications in this area---a bibliography of papers on
quasistationary distributions and Yaglom limits compiled and
periodically updated by \citet{quasibiblio} lists 403 entries
through 2010---permit us to mention only a smattering of the vast array
of applications of killed Markov processes to biology
[\citet{SV66}, \citet{gH97}, \citet{HT05}, \citet{6authors}],
demography [\citet{diffmort}, \citet{hlB76}, \citet{LA09}],
medicine [\citet{MS88}, \citet{aY07}]
and statistics [\citet{oA95}, \citet{AG03}, \citet{dM04}]. Particularly in the
demographic and medical contexts, where killed Markov processes suggest
themselves as models for populations undergoing culling by mortality or
other processes, Yaglom limits, while rarely mentioned explicitly in
the applied literature, correspond naturally to the observable
distribution of survivors.
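For later reference, we record the standard reformulation of quasistationarity: a probability distribution $\pi$ on $[0,\infty)$ is quasistationary if
\[
\mathbb{P}_{\pi} (X_t \in\cdot\mid\tau_\partial> t )=\pi(\cdot)\qquad \mbox{for all } t\geq0,
\]
equivalently, if there is a decay rate $\lambda\geq0$ with $\mathbb{P}_{\pi} (X_t \in\cdot, \tau_\partial>t )=e^{-\lambda t}\pi(\cdot)$; this is the precise sense in which the process started in $\pi$ remains in a multiple of the same distribution.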
The central concerns of this theory are to describe, for a given class
of sub-Markov processes, the quasistationary distributions (if any),
and to describe the convergence (or not) of the process conditioned on
survival to one of these quasistationary distributions. A significant
part of the literature focuses on discrete state spaces, commonly
birth--death processes (or with some more flexible localization of the
transitions), with killing only on the boundary. One of the most
general accounts of the existence and convergence to quasistationary
distributions for discrete processes of this kind can be found in
\citet{FKMP95}. One unusual contribution, outside of these
categories, is \citet{fG01}, which proves convergence to
quasistationarity for fairly arbitrary discrete Markov chains with
general killing, by imposing a stringent Lyapunov-like drift condition.
Existence and vague-convergence conditions for discrete-time Markov
chains on general metric spaces can be found in \citet{LP01}.
Killed birth--death processes naturally generalize to killed diffusions
in the continuous-space context, but these have received rather less
attention. The existence of eigenfunctions for the generator of a
one-dimensional diffusion is simpler in the continuous setting, as
we may rely upon the standard theory of ordinary differential equations.
Showing that these eigenfunctions are integrable (hence represent the
densities of distributions) and quasistationary is more involved,
though, and showing Yaglom convergence to the minimal quasistationary
distribution becomes technically challenging, particularly when the
state space is an unbounded interval. The foundation for all later work
on Yaglom convergence of diffusions was laid by \citet{pM61},
who used standard results from Sturm--Liouville theory and the spectral
theorem for self-adjoint operators to prove vague convergence (i.e.,
convergence of the distribution of the process conditioned on being in
a compact set), and uniform convergence under an assumption of strong
inward drift. These results have been substantially extended by a
shifting coalition of researchers who have produced papers
\citet{CMSM95}, \citet{MSM01}, \citet{6authors}, which
elucidate the conditions under which Yaglom convergence occurs, and
distinguish in \citet{MSM04} the $R$-positive situation from the
$R$-null---essentially, exponential-rate decay of probabilities
distinguished from decays that are asymptotically not exactly
exponential---in terms of the eigenfunctions.
One important constraint in most work in this field to date---including
Pinsky's results in \citet{rP85} for diffusions on a compact
domain with gradient-type drift---has been the assumption that killing
occurs only at the boundary. Not only is this restriction unnatural
from the perspective of many of the applications, particularly the
demographic applications discussed in \citeauthor{quasistat} (\citeyear{diffmort}, \citeyear{quasistat}), but it also
obscures the fundamental links among the spectrum, the killing rate out
at infinity, the recurrence-transience dichotomy and Yaglom
convergence. (An exception which proves the rule is the biological
application of internally killed diffusions [\citet{KT83}], which makes
no reference to any of the literature on killed diffusions and cites
only \citet{eS66} for quasistationary distributions of discrete chains.)
Note that this approach to conditioning is quite different from Doob's
\textit{h-process} or \textit{h-transform}. We can generate an $h$-transform of
a Markov process which corresponds to conditioning on the process
\textit{never} being killed. That is, we look at the distribution of $\{
X_{t}\dvtx t\in[0,s]\}$ for fixed $s$ conditioned on $\tau_\partial>T$ (where $\tau_\partial$
is the killing time), in the limit as $T\to\infty$; we may then take
a second limit $s\to\infty$ to define the process on $[0,\infty)$.
This procedure generally produces a new Markov process, which is now
unkilled. A well-known example of this is the three-dimensional Bessel
process, which may be derived from the one-dimensional Brownian motion,
conditioned never to hit 0 [\citet{sV07}, Section 6.6]. This is
intimately connected to questions about the Martin boundary.
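For orientation, we sketch the computation behind this example: $h(x)=x$ is harmonic for Brownian motion on $(0,\infty)$ killed at $0$, and the Doob transform acts on the generator by $f\mapsto h^{-1}\cdot\frac{1}{2}(hf)''$, so that
\[
h(x)^{-1}\cdot\tfrac{1}{2}\bigl(h(x)f(x)\bigr)''=\tfrac{1}{2}f''(x)+\frac{1}{x}f'(x),
\]
which is precisely the generator of the three-dimensional Bessel process.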
We will be concerned here only with the Yaglom approach, conditioning
on survival up to finite times. A key difference is that the collection of
distributions~$\mu^{\nu}_t$ for different times $t$ is not
consistent, and so cannot be analyzed directly with Markov-process
techniques. They are more amenable to an analytic semigroup
approach.
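Indeed, unwinding the definition \eqref{mu} gives, for a Borel set $A$,
\[
\mu^{\nu}_t(A)=\frac{\int_{0}^{\infty}\mathbb{P}_{x} (X_t\in A, \tau_\partial>t )\,d\nu(x)}{\int_{0}^{\infty}\mathbb{P}_{x} (\tau_\partial>t )\,d\nu(x)},
\]
where the normalization in the denominator varies with $t$; this is why the family $(\mu^{\nu}_t)_{t\geq0}$ is in general not the marginal flow of any single Markov process.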
\subsection{Heuristics} \label{sec:heur}
We begin by observing that general spectral theory---sum\-marized here in
Lemma \ref{L:sqrtcompare}---tells us that the bottom of the
spectrum~$\lambda_{0}^{\kappa}$ gives the exponential rate of decay of the distribution of
$X_t$ restricted to a compact interval.
What needs to be addressed, then, is the question of whether the
portion of the surviving mass within a compact interval dominates the
total surviving mass. There are two ways of addressing this question.
One is in terms of the spectrum of the $\mathfrak{L}^{2}$ generator. Suppose
$K:=\lim_{x\to\infty}\kappa(x)$ exists. In \citet{quasistat} the
emphasis was placed on the crucial distinction between the cases $\lambda_{0}^{\kappa}
>K$ and $\lambda_{0}^{\kappa}<K$.
It turns out that a more useful dichotomy is whether or not $\lambda_{0}^{\kappa}$ is
an isolated eigenvalue. The eigenvalue $\lambda_{0}^{\kappa}$ is isolated when the
diffusion takes place on a compact interval with two regular
boundaries, but also in cases which intuitively seem well-approximated
by a compact process, as when there is strong drift pulling the process
in from $\infty$, and when there is strong killing out toward $\infty
$. Thus the isolated-eigenvalue case includes all of the $\lambda_{0}^{\kappa}<K$
case [see Lemma \ref{spectrum}\hyperlink{it:isolated}{(v)}]. We expect the same
methods that work in finite dimensions, for powers of positive
symmetric matrices, to work in this case as well. One catch is that the
$\mathfrak{L}^{2}$ convergence need not tell us about the convergence of the
conditioned density, which is an $\mathfrak{L}^{1}$ property; indeed, it is
easy to see [cf. Proposition 2.3 of \citet{quasistat}] that the process
always escapes to $\infty$ if $\int_{0}^{\infty}\varphi(\lambda_{0}^{\kappa},x)\,d\Gamma(x)$ is
not finite, where~$\Gamma$ is the speed measure and $\varphi(\lambda
,\cdot)$ is the eigenfunction of the generator, defined as the
solution to an ordinary differential equation in Section \ref
{sec:SLspec}. We show, in Lemma \ref{L:L1L2}, that these conditions
do, in fact, suffice: That is, whenever~$\lambda_{0}^{\kappa}$ is an isolated
eigenvalue, and the corresponding eigenfunction is also integrable,
then we have convergence to the quasistationary distribution given by
the density $\varphi(\lambda_{0}^{\kappa},\cdot)/\int\varphi(\lambda_{0}^{\kappa},x)\,d\Gamma(x)$.
What about the case when $\lambda_{0}^{\kappa}$ is not an isolated eigenvalue? This
corresponds to the $R$-null and $R$-transient cases in Tweedie's theory
[\citeauthor{rT74} (\citeyear{rT74}, \citeyear{rT742})], where the decay of the transition kernel is not
exactly exponential with rate $-\lambda_{0}^{\kappa}$, but slightly faster, in the
sense that $e^{\lambda_{0}^{\kappa} t}p^{\kappa}(t,x,y)\to0$ ($p^{\kappa}$ being
the diffusion transition kernel). It turns out that in this case the
convergence lines up precisely with the standard recurrence/transience
dichotomy for the unkilled process. [Another way of putting this is to
say that when the $R$-recurrence or $R$-transience does not conform to
the properties of the unkilled process, this must be reflected in the
equality of $\lambda_{0}^{\kappa}$ and $\lim_{x\to\infty} \kappa(x)$.]
Another way of understanding the nonisolated case is by thinking about
how the condition $\lambda_{0}^{\kappa}>K$ implies that the distribution must decline
on compact sets at a faster exponential rate than would keep pace with
the killing out toward $\infty$. There are two ways this imbalance in
killing can be maintained: Either the mass vanishes toward $\infty$,
meaning that the scale (of the unkilled diffusion) is finite; this is
the $R$-transient case. Or the scale is infinite with finite speed,
which means that the excess mass keeps returning to 0, at long
intervals, and the killing rate $\lambda_{0}$ corresponds to real
killing at~0, not escape; this is the $R$-null case. In the
$R$-transient case the conditioned process escapes to infinity. In the
$R$-null case the conditioned process converges to the quasistationary
distribution. Note that the arguments for the one or the other behavior
seem to refer only to the motion, irrespective of the killing $\kappa$.
\subsection{Main results} \label{sec:main}
The core of this work is the identification of the asymptotic behavior
of killed diffusions in terms of the relation between the principal
eigenvalue of the generator, the limit behavior of $\kappa$ and the
nature of the boundary at~$\infty$. We move beyond earlier work in
removing unnecessary constraints on the drift and killing terms, and in
providing easily testable criteria for determining whether the
conditioned process converges to a quasistationary distribution for all
cases in which the bottom of the spectrum~$\lambda_{0}^{\kappa}$ does not coincide
with the limit of the killing rate at $\infty$.
We begin by summarizing the most important results. These results
presuppose general assumptions and restrictions on the processes
involved, which will be formulated fully in Section
\ref{sec:ADPR}.
The quasistationary distribution will be defined in terms of its
density $\varphi(\lambda_{0}^{\kappa},\cdot)$ with respect to $\Gamma$, where
$\varphi(\lambda_0^{\kappa},\cdot)$ is the principal eigenfunction
of the generator, defined as the solution to an ordinary differential
equation with appropriate boundary condition, stated formally in
Section \ref{sec:SLspec}.
\begin{enumerate}[(iii)]
\item[(i)] \textit{Convergence on compacta}: There is always convergence
to the quasistationary distribution on compact sets, stated formally as
Theorem~\ref{generallocalMandl}.\hypertarget{it:generallocalMandl}{}
\item[(ii)] \textit{Dichotomy}: If $\lambda_{0}^{\kappa}>\limsup_{x\to\infty} \kappa
(x)$ or $\lambda_{0}^{\kappa}<\liminf_{x\to\infty} \kappa(x)$, then the
conditioned process either converges to the quasistationary
distribution with density $\varphi(\lambda_{0}^{\kappa},\cdot) (\int_{0}^{\infty}\varphi
(\lambda_{0}^{\kappa},y)\, d\Gamma(y) )^{-1}$ with respect to $\Gamma$, or escapes
to $\infty$. This behavior is independent of the initial distribution,
provided only that it is compactly supported. (For explanation of the
terminology, see Section \ref{sec:quasi}.) This is Theorem 3.3 of
\citet{quasistat}, but it is restated here as Theorem \ref{Thm33} in
a slightly stronger form, as several restrictions have been
removed.\hypertarget{it:compacta}{}
\item[(iii)] \textit{Yaglom convergence always holds with high killing
at $\infty$}: If $\lambda_{0}^{\kappa}<\liminf_{x\to\infty}\kappa(x)$, then the
conditioned process converges to the quasistationary distribution. This
is stated as Theorem \ref{limkbiggerev}. Note that this includes the
(somewhat unintuitive) fact that a bound on the $\mathfrak{L}^{2}$ spectrum
implies that $\int_{0}^{\infty}\varphi(\lambda_{0}^{\kappa},y)\,d\Gamma(y)<\infty$, which is a
fact about the $\mathfrak{L}^{1}$ spectrum.\hypertarget{it:bigK}{}
\item[(iv)] \textit{Yaglom convergence with low killing at $\infty$ when
recurrent}: If $K:=\lim_{x\to\infty} \kappa(x)$ exists and $K<\lambda_{0}^{\kappa}
$, then the behavior of the conditioned process depends on the
transience or recurrence of the unkilled process. If the unkilled
process is transient---that is, if $\int_{0}^{\infty}\gamma(x)^{-1}\,dx<\infty
$---then the conditioned process escapes to $\infty$. If the unkilled
process is recurrent---that is, if $\int_{0}^{\infty}\gamma(x)^{-1}\,dx=\infty
$---then the conditioned process converges to the quasistationary
distribution. These results are stated in Theorems \ref{Escape} and
\ref{qsddrift}.\hypertarget{it:smallK}{}
\item[(v)] \textit{Yaglom convergence equivalent to integrability of the
principal eigenfunction}: If $\lambda_{0}^{\kappa}< \liminf_{x \rightarrow\infty
}\kappa(x)$ or $\lambda_{0}^{\kappa}> \lim_{x \rightarrow\infty}\kappa(x)$, then
convergence to quasistationarity is equivalent to the integrability of
the principal eigenfunction $\varphi(\lambda_0^{\kappa},\cdot)$.
This is stated as Theorem \ref{thm:integrability}.\hypertarget{it:integrab}{}
\end{enumerate}
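Recall, for use in \hyperlink{it:smallK}{(iv)} above, that $\int_{0}^{\infty}\gamma(x)^{-1}\,dx$ is the total mass of the scale measure of the unkilled diffusion: its scale function is
\[
s(x)=\int_{0}^{x}\gamma(y)^{-1}\,dy,
\]
and the standard criterion states that the unkilled process is recurrent if and only if $s(\infty)=\infty$, so the dichotomy in \hyperlink{it:smallK}{(iv)} can be checked directly from the drift $b$.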
Our results extend those of \citet{quasistat} in several ways:
\begin{itemize}
\item In \citet{quasistat} the authors had to impose conditions that
required the drift and killing not to grow too quickly or be too
irregular, in order to ensure that $\infty$ is of the limit-point type.
Here there is no constraint on the killing other than local
boundedness, and no constraint on the drift other than that which
implies that $\infty$ is inaccessible. The case of an entrance
boundary at infinity was excluded in \citet{quasistat}. Moreover, in
contrast to \citet{quasistat} item \hyperlink{it:generallocalMandl}{(i)} is
shown to hold without any further condition on the initial distribution
other than compact support.
\item In \citet{quasistat} an assertion of the type \hyperlink{it:bigK}{(iii)}
was shown under the assumption that $K = \lim_{x \rightarrow\infty
}\kappa(x)$ exists, together with some further growth restrictions on $b$ and
$\kappa$.
\item Item \hyperlink{it:smallK}{(iv)} describes the most substantial advance:
The case $\lim_{x \rightarrow\infty}\kappa(x)<\lambda_{0}^{\kappa}$ is now shown
to be split by the standard recurrence-transience dichotomy, which
tells us whether the conditioned process converges or escapes. In
\citet{quasistat} the dichotomy could not be decided if $\lim_{x
\rightarrow\infty}\kappa(x)<\lambda_{0}^{\kappa}$.
\item In \citet{quasistat} the assertion of item \hyperlink{it:integrab}{(v)} was established only in the case $\lambda_{0}^{\kappa}< \liminf_{x
\rightarrow\infty}\kappa(x)$.
\end{itemize}
\section{Assumptions, definitions and previous results} \label{sec:ADPR}
\subsection{Analytic terminology} \label{sec:analytic}
In general a Sturm--Liouville operator is any formal differential
operator of the form $\tau= \tau_{p,q,V}= -\frac{1}{2p}\frac
{d}{dx}q\frac{d}{dx} + V$, where $p,q\dvtx (a_1,a_2) \rightarrow(0,\infty
)$ and $V\dvtx(a_1,a_2)\rightarrow\mathbb{R}$ are sufficiently
well-behaved functions. In this work we consider only operators where
$p=q=\gamma$, $V=\kappa\geq0$ and $a_1=0$, $a_2=\infty$. Note that
the diffusion coefficient has been set to~$1$. However, the case of a
general nondegenerate diffusion coefficient can be reduced to the
present case via a time change. Thus our results can be applied to the
case of a general diffusion coefficient. This reduction simplifies the
formulas considerably.
Moreover, we always assume in this paper that $\gamma(x) = e^{2\int
_0^x b (s)\,ds}$ for some $b \in\mathfrak{L}^{1}_{loc}([0,\infty))\cap
C((0,\infty))$ and $0 \leq\kappa\in C ([0,\infty) )$.
These conditions are not entirely necessary, but this constraint still
admits a large class of one-dimensional diffusions. [However, see \citet{6authors} for a natural application to biology which requires $b$ to
be singular at 0.] Concerning the assumptions on $b$ we could replace
the condition $b \in\mathfrak{L}^{1}_{loc}([0,\infty))$ by the condition that
$\int_0^1 e^{-\int_c^x2b(s)\,ds}\,dx<\infty$ for some $c\in
(0,\infty)$, and $\int_0^1 e^{\int_c^x 2b(s)\,ds}\,dx<\infty$,
which is equivalent to saying that the boundary point $0$ is regular in
the sense of Feller and also in the sense of Weyl. In this paper we
will consistently use $\Gamma$ as a reference measure instead of the
Lebesgue measure, which is different from the convention adopted in
\citet{quasistat}. Recall that the speed measure of a one-dimensional
diffusion is also the reversing measure, with respect to which the
generator is symmetric. Unless otherwise indicated, we will always use
the bare notation $\mathfrak{L}^{2}$ to mean $\mathfrak{L}^{2} ((0,\infty),\Gamma
)$, and for $f,g\in\mathfrak{L}^{2}$ we have the inner product
\begin{equation} \label{E:innerprod}
\langle f,g\rangle=\int_{0}^{\infty} f(x)g(x)\gamma(x) \,dx.
\end{equation}
The formal differential operator $L^{\kappa} = -\frac{1}{2\gamma
}\frac{d}{dx}\gamma\frac{d}{dx} + \kappa$ gives rise to a closable
densely defined quadratic form $\tilde{q}^{\kappa, \alpha}$ in $\mathfrak{L}
^2$ by
\begin{eqnarray} \label{E:defineqk}
\qquad &&\varphi\mapsto\tilde{q}^{\kappa,\alpha}(\varphi)\nonumber\\[-8pt]\\[-8pt]
\qquad &&\qquad =
\cases{
\displaystyle \alpha\varphi(0)^{2}+\frac{1}{2}\int_0^{\infty}|\varphi
'(y)|^2\gamma(y)\,dy + \int_0^{\infty}\kappa(y)|\varphi(y)|^2
\gamma(y)\,dy, \vspace*{2pt}\cr
\qquad \mbox{if $\alpha<\infty$,}\vspace*{4pt}\cr
\displaystyle \frac{1}{2}\int_0^{\infty}|\varphi'(y)|^2\gamma(y)\,dy + \int
_0^{\infty}\kappa(y)|\varphi(y)|^2 \gamma(y)\,dy,\vspace*{2pt}\cr
\qquad \mbox{if $\alpha=\infty$},
}\nonumber
\end{eqnarray}
for any $\varphi\in\mathcal{D}_{\kappa,\alpha}$, where $\mathcal{D}_{\kappa
,\alpha}$ is defined by
\[
\mathcal{D}_{\kappa, \alpha}:=
\cases{
\lbrace\varphi\in\mathfrak{L}^2 |\varphi\in C^1(0,\infty)\cap
C([0,\infty)), \tilde{q}^{\kappa,\alpha}(\varphi)<\infty
\rbrace,\vspace*{1pt}\cr
\qquad \mbox{if $\alpha\in[0,\infty)$,} \vspace*{3pt}\cr
\lbrace\varphi\in\mathfrak{L}^2 |\varphi\in C^1(0,\infty)\cap
C([0,\infty)), \varphi(0)=0, \tilde{q}^{\kappa,\infty}(\varphi
)<\infty \rbrace,\vspace*{1pt}\cr
\qquad \mbox{if $\alpha= \infty$}.
}
\]
The closure of this quadratic form will be denoted by $q^{\kappa
,\alpha}$. To the quadratic form $q^{\kappa,\alpha}$ there
corresponds a uniquely defined positive self-adjoint
operator~$L^{\kappa,\alpha}$ with a dense domain of definition $\mathcal
{D}(L^{\kappa,\alpha})$. It is easy to see (essentially via
integration by parts) that the action of the operator $L^{\kappa
,\alpha}$ is given by
\[
L^{\kappa,\alpha}\varphi(x) = -\tfrac{1}{2}\varphi''(x) - b(x)
\varphi'(x) + \kappa(x) \varphi(x).
\]
By definition of the operator $L^{\kappa,\alpha}$ every element
$\varphi\in\mathcal{D}(L^{\kappa,\alpha})$ is absolutely
continuous and satisfies the boundary condition $2\alpha\varphi(0)=
\varphi'(0)$ [or $\varphi(0)=0$ when $\alpha=\infty$]. As in the
definition of $q^{\kappa,\alpha}$ we see that $\alpha=\infty$
corresponds to a Dirichlet condition at $0$ (instantaneous killing), and
$\alpha= 0$ to a Neumann condition (pure reflection) at $0$.
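The integration by parts alluded to above can be recorded explicitly: for $\varphi\in\mathcal{D}(L^{\kappa,\alpha})$ and smooth, compactly supported $\psi$, using $\gamma(0)=1$ and the boundary condition $2\alpha\varphi(0)=\varphi'(0)$,
\[
\langle L^{\kappa,\alpha}\varphi,\psi\rangle=\alpha\varphi(0)\psi(0)+\frac{1}{2}\int_{0}^{\infty}\varphi'(y)\psi'(y)\gamma(y)\,dy+\int_{0}^{\infty}\kappa(y)\varphi(y)\psi(y)\gamma(y)\,dy,
\]
which recovers the quadratic form \eqref{E:defineqk} for $\alpha<\infty$; the Dirichlet case $\alpha=\infty$ is analogous, with the boundary term absent.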
The bottom of the spectrum of $L^{\kappa}$ will be denoted by $\lambda
^{\kappa}_0$. The spectrum of the self-adjoint operator $L^{\kappa}$
is written $\Sigma(L^{\kappa})$. Where there is no danger of
confusion, the corresponding objects with $\kappa\equiv0$ will also
be denoted by~$q$, $L$ and $\lambda_0$ instead of $q^0$, $L^0$ and
$\lambda_0^0$, respectively (or $q^{0,\alpha}$, $L^{0,\alpha}$ and
$\lambda_{0}^{0,\alpha}$). Since $L^{\kappa}$ and $L$ are
self-adjoint operators, the spectral theorem implies the existence of
spectral resolutions $(E^{\kappa}_{\lambda})_{\lambda\in[\lambda
_0^{\kappa},\infty)}$ and $(E_{\lambda})_{\lambda\in[\lambda
_0,\infty)}$, respectively. For the basic facts concerning spectral
theory of self-adjoint operators the reader should consult \citet{jW00}.
The spectral theorem for self-adjoint operators allows us to define
functions $f(L^{\kappa})$ of the operator. For every Borel-measurable
function $f\dvtx\mathbb{R} \rightarrow\mathbb{R}$ the operator
$f(L^{\kappa})$ is defined via
\begin{eqnarray}
\mathcal{D}(f(L^{\kappa}))&=& \biggl\lbrace u \in\mathfrak{L}^2 \Big|\int
_{\Sigma(L^{\kappa})}|f(\lambda)| ^2\,d\|E^{\kappa}u\|^2(\lambda
)<\infty \biggr\rbrace,\label{spectraltheorem1}\\
f(L^{\kappa})u &=& \int_{\Sigma(L^{\kappa})}f(\lambda)\,dE^{\kappa
}(\lambda) u,\label{spectraltheorem2}\\
\|f(L^{\kappa}) u \|^{2} &=& \int_{\Sigma(L^{\kappa})}
f(\lambda)^{2} \,d\|E^{\kappa}u\|^2(\lambda). \label{spectraltheorem3}
\end{eqnarray}
Observe that for a Borel-measurable function $f\dvtx[0,\infty)\rightarrow
\mathbb{R}$ and $a \geq0$ we have $\operatorname{Ran} (f(L^{\kappa})) \subset
\mathcal{D}((L^{\kappa})^{a})$ if $[0,\infty) \ni\lambda\mapsto
|\lambda^{a} f(\lambda)|$ is bounded. This implies in particular that
the range of $e^{-t L^{\kappa}}$ is contained in the domain of all
powers of $L^{\kappa}$. Moreover the spectral theorem allows us to
clarify further the connection between the quadratic form $q^{\kappa}$
and the associated nonnegative operator $L^{\kappa}$. Let $\sqrt
{L^{\kappa}}$ denote the unique nonnegative square root of $L^{\kappa
}$, which is defined using the spectral theorem. Then we have $\mathcal
{D}(q^{\kappa})=\mathcal{D}(\sqrt{L^{\kappa}})$, and for every $f
\in\mathcal{D}(L^{\kappa})$ we have
\begin{equation}\label{formoperator}
q^{\kappa}(f,g) = \bigl\langle\sqrt{L^{\kappa}}f, \sqrt{L^{\kappa
}}g \bigr\rangle.
\end{equation}
Using the ``elliptic'' Harnack inequality and Weyl's spectral theorem
it is not difficult to see that
\begin{eqnarray}\label{possolutions}
\lambda_0^{\kappa} &=& \max\biggl\lbrace\lambda\in\mathbb{R} \mid
\mbox{there is a positive solution of $(L^{\kappa}-\lambda)u=0$}
\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{94pt}\mbox{with }u(0)=\frac{1}{1+\alpha}, \frac{1}{2}u'(0)=\frac
{\alpha}{1+\alpha} \biggr\rbrace.\nonumber
\end{eqnarray}
[This was proved by \citet{pM61} using slightly different methods.]
Equation \eqref{possolutions} already suggests that for $0 \leq
\lambda\leq\lambda_0^{\kappa}$ solutions of $(L^{\kappa}-\lambda
)u=0$ might have a probabilistic significance.
In the sequel we usually denote by $\varphi(\lambda,\cdot)$ the
solution of the eigenvalue equation
\begin{equation}\label{eigenodequation}
(L^{\kappa}-\lambda)\varphi(\lambda,\cdot)=0,\qquad \varphi
(\lambda,0)=\frac{1}{1+\alpha}, \frac{1}{2}\varphi'(\lambda
,0)=\frac{\alpha}{1+\alpha}.
\end{equation}
It might be important to note that solutions in \eqref{possolutions}
and \eqref{eigenodequation} are solutions in the sense of the theory
of ordinary differential equations. An important issue is whether the
solution also belongs to the Hilbert space $\mathfrak{L}^2$ and thus is an
eigenfunction in the sense of spectral theory. When we wish to
emphasize that certain solutions are also eigenfunctions in the sense
of spectral theory, we denote them by $u_{\lambda}$.
Crucial to much of our analysis is the fact that the asymptotic
behavior of the semigroup is wholly determined by the spectrum right
near the base of the spectral measure, which we show in Lemma \ref
{L:sqrtcompare}, and then that the base of the spectral measure for any
nonnegative function is $\lambda_{0}^{\kappa}$, which is Lemma \ref{L:supportbase}.
For $g\in\mathfrak{L}^{2}$, define $\lambda_{g}$ to be the infimum of the
support of the spectral measure of $g$; that is,
\begin{equation} \label{E:lambdag}
\lambda_{g}:= \sup \{\lambda \dvtx \|E_{\lambda} g \|=0 \},
\end{equation}
and let $\mathcal{A}_{\lambda}$ be the subspace of $\mathfrak{L}^{2}$ consisting
of functions $f$ such that $\lambda_{f}\ge\lambda$.
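The significance of $\lambda_{g}$ is that it is the exact exponential decay rate of the semigroup applied to $g$: by \eqref{spectraltheorem3} with $f(\lambda)=e^{-t\lambda}$,
\[
\|e^{-tL^{\kappa}}g\|^{2}=\int_{[\lambda_{g},\infty)}e^{-2t\lambda}\,d\|E^{\kappa}g\|^{2}(\lambda)\leq e^{-2t\lambda_{g}}\|g\|^{2},
\]
while no faster exponential rate is possible, since by definition the spectral measure of $g$ charges every neighborhood of $\lambda_{g}$.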
\begin{lemma} \label{L:sqrtcompare}
Given $g\in\mathcal{D}(L^{\kappa,\alpha})$, we have
\begin{eqnarray} \label{E:sqrtcompare}
|g(x)|&\le& C_{\alpha}(x) \bigl\|\sqrt{L^{\kappa}} g \bigr\|
+C'_{\alpha} \| g \|\nonumber\hspace*{-35pt}\\[-8pt]\\[-8pt]
&=&C_{\alpha}(x) \biggl(\int_{0}^{\infty}\lambda \,d\|E^{\kappa} g\|
^{2}(\lambda) \biggr)^{1/2}+C'_{\alpha} \biggl(\int_{0}^{\infty}d\|
E^{\kappa} g\|^{2}(\lambda) \biggr)^{1/2},\nonumber\hspace*{-35pt}
\end{eqnarray}
where
\begin{equation} \label{E:Calpha}
C_{\alpha}(x):=
\cases{
\displaystyle \max \biggl\{\sqrt{\frac{2}{\alpha}}, \biggl(2\int_{0}^{x}\gamma
(y)^{-1}\,dy \biggr)^{1/2} \biggr\},&\quad for $\alpha>0$,\cr
\displaystyle \biggl(18\int_{0}^{x}\gamma(y)^{-1}\,dy \biggr)^{1/2},&\quad for $\alpha=0$,
}\hspace*{-35pt}
\end{equation}
and
\begin{equation} \label{E:Cprimealpha}
\quad C'_{\alpha}:=
\cases{
0,&\quad if $\alpha>0$ or $ \displaystyle \int_{0}^{\infty
}\gamma(y)\,dy=\infty$,\cr
\displaystyle \biggl(\int_{0}^{\infty} \gamma(y) \,dy \biggr)^{-1/2},&\quad if
$\alpha=0$ and $\displaystyle \int_{0}^{\infty}\gamma(y)\,dy<\infty$.
}\hspace*{-35pt}
\end{equation}
For any $t>1/(2\lambda_{g})$,
\begin{equation} \label{E:sqrtcompare2}
|e^{-tL^{\kappa}}g(x)|\le \bigl(C_{\alpha}(x)\sqrt{\lambda
_{g}}+C'_{\alpha} \bigr)\|g\| e^{-t\lambda_{g}}.
\end{equation}
\end{lemma}
\begin{pf}
Suppose $\alpha\in(0,\infty)$. Since $g\in\mathcal{D}(L^{\kappa
})$ is differentiable, we have
\begin{eqnarray*}
|g(x)|&\le&|g(0)|+\int_{0}^{\infty}|g'(y) |\frac{\mathbf{1}_{[0,x]}}{\gamma(y)}
\gamma(y)\,dy\\
&=& |g(0)|+ \biggl\langle|g'|, \frac{\mathbf{1}_{[0,x]}}{\gamma}
\biggr\rangle\\
&\le&|g(0)|+ \biggl\| \frac{\mathbf{1}_{[0,x]}}{\gamma} \biggr\| \cdot\|
g'\|\qquad \mbox{(Cauchy--Schwarz inequality)}\\
&\le& \biggl(2 |g(0)|^{2}+ 2 \biggl\| \frac{\mathbf{1}_{[0,x]}}{\gamma
} \biggr\| \int_{0}^{\infty}|g'(y)|^{2}\gamma(y)\,dy \biggr)^{1/2}\\
&\le& C_{\alpha}(x) q^{\kappa,\alpha}(g)^{1/2}\\
&=& C_{\alpha}(x) \bigl\| \sqrt{L^{\kappa,\alpha}}g \bigr\|
\end{eqnarray*}
by \eqref{E:defineqk} and \eqref{formoperator}.
The spectral theorem \eqref{spectraltheorem3} allows us to represent
$\sqrt{L^{\kappa}}g$ in terms of the spectral resolution, yielding
\eqref{E:sqrtcompare}.
If $\alpha=\infty$, then $g(0)=0$, so the corresponding term drops
out of the bound.
If $\alpha=0$, we have the alternative bound
\begin{eqnarray*}
|g(x)|&\le&|g(0)|+C (q^{\kappa,0}(g) )^{1/2},\\
|g(x)|&\ge&|g(0)|-C (q^{\kappa,0}(g) )^{1/2},
\end{eqnarray*}
where $C=\sqrt{2} \| \frac{\mathbf{1}_{[0,x]}}{\gamma} \|$.
The second bound gives us
\[
\|g\|^{2}\ge \biggl( \int_{0}^{\infty}\gamma(y)\,dy \biggr) \bigl(
| g(0) |^{2}-2C | g(0) | (q^{\kappa
,0}(g) )^{1/2} \bigr),
\]
which implies that
\[
| g(0) | \le2C\sqrt{q^{\kappa,0}(g)}+ \|g\| \biggl(
\int_{0}^{\infty}\gamma(y)\,dy \biggr)^{-1/2}.
\]
We combine this with the above calculation to obtain the appropriate
version of \eqref{E:sqrtcompare}.
For any positive $t$, we have $g_{t}:=e^{-tL^{\kappa}}g\in\mathcal
{D}(L^{\kappa})$, so we may apply \eqref{E:sqrtcompare} to obtain
\[
|g_{t}(x)|\le \biggl(\int_{0}^{x}\gamma(y)^{-1}\,dy \biggr)^{1/2}
\bigl\| \sqrt{L^{\kappa}}e^{-tL^{\kappa}}g \bigr\|.
\]
Applying again the spectral theorem \eqref{spectraltheorem3}---now
with $f(x)=\sqrt{x} e^{-tx}$---yields
\begin{eqnarray*}
\| \sqrt{L^{\kappa}}e^{-tL^{\kappa}}g \|^{2}&=&\int_{0}^{\infty}
\lambda e^{-2t\lambda} \,d \| E^{\kappa} g \|^{2} (\lambda
)\\
&=&\int_{\lambda_{g}}^{\infty} \lambda e^{-2t\lambda} \,d \|
E^{\kappa} g \|^{2} (\lambda)\\[-1pt]
&\le&\lambda_{g} e^{-2t\lambda_{g}}\|g\|^{2},
\end{eqnarray*}
since $\lambda e^{-2t\lambda}$ is decreasing in $\lambda$ for $\lambda
\ge\frac{1}{2t}$ and $t>1/(2\lambda_{g})$. Similarly,
\[
\|e^{-tL^{\kappa}}g \|^{2}=\int_{\lambda_{g}}^{\infty}
e^{-2t\lambda} \,d \| E^{\kappa} g \|^{2} (\lambda)\le
e^{-2t\lambda_{g}}\|g\|^{2}.\vspace*{-2pt}
\]
\upqed
\end{pf}
\begin{lemma} \label{L:supportbase}
For any nonnegative measurable function $f \in\mathfrak{L}^2$ with\break \mbox{$\|f\|>0$},
the spectral measure $d\|E^{\kappa}f\|^{2}(\lambda)$ corresponding to
$f$ includes $\lambda_{0}^{\kappa}$ in its support.\vspace*{-2pt}
\end{lemma}
\begin{pf}
Since $e^{-L^{\kappa}}f$ is everywhere strictly positive (except perhaps at
the boundary), and its associated spectral measure has the same support
as $d\|E^{\kappa}f\|^{2}$, we may assume that any counterexample is
strictly positive away from the boundary.
Suppose there is some $\lambda_{*}>\lambda_{0}^{\kappa}$ such that
$\|E^{\kappa}_{\lambda_{*}} f\|=0$.
Then for any $h \in\mathfrak{L}^2$ with $|h| \le f$,
\begin{eqnarray*}
e^{-\lambda_{*}t}\|f\|^{2}&\ge&\int_{\lambda_{*}}^{\infty}
e^{-\lambda t} \,d\|E^{\kappa}f\|^{2}(\lambda)\\[-1pt]
&=& \bigl\| e^{-({t}/{2})L^{\kappa}} f \bigr\|^{2}\\[-1pt]
&\ge& \bigl\| e^{-({t}/{2})L^{\kappa}} h \bigr\|^{2}\\[-1pt]
&=& \int_{\lambda_{0}^{\kappa}}^{\infty} e^{-\lambda t} \,d\|E^{\kappa}h\|
^{2}(\lambda).
\end{eqnarray*}
Thus, it must be that $d\|E^{\kappa}h\|^{2}(\lambda)$ is supported on
$[\lambda_{*},\infty)$ as well. Hence, for all such $h$ we have $\|
E^{\kappa}_{\lambda_{*}} h\|=0$.
Let $f_{n}=f\cdot\mathbf{1}_{[n,\infty)}$. For any $\tilde\lambda\in(\lambda_{0}^{\kappa},\lambda
_{*})$, by \eqref{spectraltheorem1} $f_{n}$ is in the domain of the
resolvent $R_{\tilde\lambda}=(L^{\kappa}-\tilde\lambda)^{-1}$. Furthermore, by
\eqref{spectraltheorem3}, if we choose $\lambda_{**}$ large enough so
that $\|E^{\kappa}_{\lambda_{**}} f\|>0$, then
\[
\| R_{\tilde\lambda}f \|^{2} =\int(\tilde\lambda-\lambda)^{-2} \,d \|
E^{\kappa}f \|^{2}(\lambda)\ge(\tilde\lambda-\lambda_{**})^{-2} \|
E^{\kappa}_{\lambda_{**}} f\|^{2}>0.
\]
Let $g_{n}:=R_{\tilde\lambda} f_{n}/\|R_{\tilde\lambda}f_{n}\|$. Then $g_{n}$ satisfies
\[
L^{\kappa} g_{n}(x)=\tilde\lambda g_{n}(x) \qquad \mbox{for }x\le n.
\]
(In principle, the equality holds only in the $\mathfrak{L}^{2}$ sense, but it
becomes true for all $x$ since both sides are in $\mathcal{D}_{\alpha}$,
hence, in particular, continuous.) By the representation
\[
R_{\tilde\lambda}f_{n}=\int_{0}^{\infty} e^{s\tilde\lambda}e^{-sL^{\kappa}}f_{n} \,ds,
\]
we see that $g_{n}$ is nonnegative.\vadjust{\goodbreak}
The space of solutions to the ordinary differential equation $L^{\kappa
}g=\tilde\lambda g$ satisfying boundary condition \eqref{E:fellerbound} is
one-dimensional, so if we renormalize to
\[
\tilde{g}_{n}:=
\cases{
(1+\alpha)^{-1}g_{n}(0)^{-1}g_{n}, &\quad if $\alpha<\infty$,\cr
g'_{n}(0)^{-1}g_{n}, &\quad if $\alpha=\infty$,
}
\]
we have $\tilde{g}_{n}(x)=\tilde{g}_{n'}(x)$ for $x\in[0,n]$ when $n\le n'$. The
limit must then be identical with the function $\varphi(\tilde\lambda,\cdot
)$, and is everywhere nonnegative, contradicting the characterization
of $\lambda_{0}^{\kappa}$ in \eqref{possolutions}.
\end{pf}
\subsection{Boundary conditions, recurrence and transience} \label{sec:BC}
Defining the diffusion includes a boundary condition at 0, parametrised
by $\alpha\in[0,\infty]$:
\begin{equation} \label{E:fellerbound}\quad
2\alpha\phi(0) = \phi'(0)\qquad \mbox{if }\alpha<\infty,\quad \mbox{or}\quad \phi(0)=0 \qquad \mbox{if }\alpha=\infty.
\end{equation}
(It is more common in probability to use a parameter on $[0,1]$,
corresponding to $\alpha/(1+\alpha)$.) The condition $\alpha=\infty
$ corresponds to instantaneous killing at~0, while $\alpha=0$
corresponds to reflection with no killing. Intermediate parameters
correspond to ``slow killing'' at 0, so that the process is killed when
the local time at 0 reaches an exponentially distributed random
variable. The operator $L^{\kappa,\alpha}$ is associated with the
closure of the quadratic form~$\tilde{q}^{\kappa,\alpha}$.
That is, $L$ is the self-adjoint realization of the differential
expression $-\frac{1}{2\gamma}\frac{d}{dx} (\gamma\frac
{d}{dx} )+\kappa$ in $\mathfrak{L}^2$ that has boundary condition \eqref
{E:fellerbound} at $0$. The quadratic form $q$ is a~Dirichlet form, and
the canonically associated Markov process is a solution for the
martingale problem associated to the operator $L$ with the appropriate
killing or reflection at $0$. This means there exists a family of
measures $(\mathbb{P}_x)_{x \in(0,\infty)}$ on the space
$C([0,\infty),\mathbb{R})$ of real-valued continuous functions on
$[0,\infty)$ such that for every $f \in\mathfrak{L}^2$ and every $x \in
(0,\infty)$ (due to the Feller property)
\[
(e^{-tL}f)(x) = \mathbb{E}_x[f(X_t),T_0 > t],
\]
where $(X_t)$ is the canonical process on $C([0,\infty),\mathbb{R})$,
and $T_0$ is a random time defined with respect to the local time at 0.
(Again, if $\alpha=\infty$, then $T_{0}$ is the first hitting time of
0; if $\alpha=0$, then $T_{0}\equiv\infty$.) In this normalization,
the scale measure has density $\gamma(x)^{-1}$ with respect to Lebesgue measure.
It is a trivial consequence of the definition of natural scale that
$\int^{X_{t}}\gamma(x)^{-1}\,dx$ is a martingale, and so $\mathbb
{P}_x (X_{t} \mbox{ hits 0 eventually} ) = 1$ for $x>0$ if
and only if the scale function is infinite at $\infty$; that is, for
$c>0$, $\int_c^{\infty}\gamma(x)^{-1}\,dx = \infty$. When there is
killing at 0, the process is recurrent only when the scale function is
infinite at both ends. In analytic terms, recurrence means that the
associated generator is critical [see \citet{GZ91} and \citet{rP95}].
Recall that $L^{\kappa}$ is called critical iff there exists a unique
(up to constant multiples) positive solution $\psi$ of $L^{\kappa
}\psi= 0$. Otherwise $L^{\kappa}$ is called subcritical. We know from
criticality theory---for example, from Theorem~3.15 of \citet{GZ91}---that the generator must be critical if 0 is an isolated
eigenvalue. A generalization of this fact will be used in Lemma~\ref{spectrum}.
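The scale-function criterion just described can be illustrated with a toy computation (constant drift $b\equiv\mu$ is an assumed example, giving $\gamma(x)=e^{2\mu x}$ and scale density $\gamma(x)^{-1}=e^{-2\mu x}$): the truncated scale integral blows up as the truncation grows when $\mu\le0$, so the process hits 0 almost surely, and converges when $\mu>0$.

```python
import numpy as np

def scale_mass(mu, R, n=200001):
    # trapezoid rule for the truncated scale integral
    # \int_1^R gamma(x)^{-1} dx with gamma(x) = exp(2 mu x)
    x = np.linspace(1.0, R, n)
    v = np.exp(-2 * mu * x)
    return float(np.sum((v[:-1] + v[1:]) * np.diff(x)) / 2)

# mu < 0: the integral grows without bound in R -> X hits 0 a.s.
low = [scale_mass(-0.5, R) for R in (10.0, 20.0)]
assert low[1] > 2 * low[0]

# mu > 0: the integral converges -> positive probability of never hitting 0
hi = [scale_mass(0.5, R) for R in (10.0, 20.0)]
assert abs(hi[1] - hi[0]) < 1e-4
```

Here the divergent case behaves like $\int_1^R e^{x}\,dx\sim e^{R}$, while the convergent case tends to $e^{-1}$, matching the recurrence dichotomy stated in the text.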
The semigroup $e^{-tL^{\kappa}}$ has a probabilistic representation:
We consider the product space
\[
C([0,\infty)) \times[0,\infty) = \lbrace(\omega,\xi) \in
C([0,\infty))\times[0,\infty) \rbrace
\]
endowed with the natural product $\sigma$-field. Let $(\tilde{\mathbb
{P}}_x)_{x \in(0,\infty)}$ denote the family of measures which is
induced by the Dirichlet form $q^0$. For $x \in(0,\infty)$ we define
the measures
\[
\tilde{\mathbb{P}}_x \otimes e^{-\xi}\,d\xi
\]
and the stopping time
\[
T_{\kappa}(\omega,\xi) = \inf \biggl\lbrace s \geq0 \Big|\int_0^s
\kappa(\omega_u)\,du \geq\xi \biggr\rbrace.
\]
If we set
\[
\tau_{\partial} = \min ( T_0 , T_{\kappa} )
\]
then we have the Feynman--Kac representation,
\begin{equation}\label{FKrepresent}
\qquad (e^{-tL^{\kappa}}f)(x) = \tilde{\mathbb{E}}_x [f(X_t),\tau
_{\partial}>t ] = \mathbb{E}_x \bigl[e^{-\int_0^{t}\kappa
(X_s)\,ds}f(X_t),T_0 > t \bigr].
\end{equation}
It is easy to see that $e^{-tL^{\kappa}}$ is an integral operator. We
denote by $p^{\kappa}(t,x,y)$ its integral kernel with respect to the
measure $\Gamma$, that is,
\[
e^{-tL^{\kappa}}f(x) = \int_0^{\infty}p^{\kappa}(t,x,y)f(y)\Gamma
(dy) \qquad \mbox{for every $f \in L^2((0,\infty),\Gamma)$}.
\]
Since we are working with the self-adjoint version of the generator
(with respect to the measure $\Gamma$), the Feynman--Kac representation
holds in great generality, following the derivation in \citet{DvC00}.
We will generally omit the tilde, since it will be clear from context
which measure is meant.
Let us recall the usual Feller classification [see, e.g., Chapter 3 in
\citet{BL07}] of boundary points for diffusion generators $-\frac
{1}{2}\frac{d^2}{dx^2} - b(x)\frac{d}{dx}$ in an open interval
$(0,r)$.\vspace*{-2pt}
\begin{definition}
Let $c \in(0, r)$ be given and set $\gamma(x) = e^{\int_c^x 2b(y)\,
dy}$. The point $r$ is called \textit{accessible}, if $\int
_c^{r}\gamma(x)^{-1} \int_c^x\gamma(y)\,dy\,dx < \infty$, and
otherwise \textit{inaccessible}. If $r$ is an accessible boundary
point, then it is called \textit{regular} iff $\int_c^{r}\gamma
(x)\int_c^x\gamma(y)^{-1}\,dy\,dx < \infty$. If $r$ is accessible and
$\int_c^{r}\gamma(x)\int_c^x\gamma(y)^{-1}\,dy\,dx = \infty$, then
$r$ is called an \textit{exit boundary}. If $r$ is inaccessible, then
it is an \textit{entrance boundary}, iff $\int_c^{r}\gamma(x)\int
_c^x\gamma(y)^{-1}\,dy\,dx < \infty$.
If $r$ is inaccessible and $\int_c^{r}\gamma(x)\int_c^x\gamma
(y)^{-1}\,dy\,dx = \infty$, then $r$ is called \textit{natural}. Of
course the same classification holds for $0$.\vspace*{-2pt}
\end{definition}
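These integral tests can be evaluated numerically for a concrete example (the Bessel(3)-type choice $b(x)=1/x$, hence $\gamma(x)=x^{2}$ up to a constant; the truncation points and grid sizes below are ad hoc assumptions): the accessibility integral at $0$ diverges like $1/\varepsilon$, while the entrance integral converges, so $0$ is an entrance boundary.

```python
import numpy as np

def inner(a, b, g, n=2000):
    # trapezoid rule for the inner integral \int_a^b g(y) dy
    y = np.linspace(a, b, n)
    v = g(y)
    return float(np.sum((v[:-1] + v[1:]) * np.diff(y)) / 2)

def feller_integral(eps, c, g1, g2, n=2000):
    # \int_eps^c g1(x) \int_x^c g2(y) dy dx, truncated at eps > 0
    x = np.linspace(eps, c, n)
    v = np.array([g1(xi) * inner(xi, c, g2) for xi in x])
    return float(np.sum((v[:-1] + v[1:]) * np.diff(x)) / 2)

gamma = lambda s: s**2            # Bessel(3): b(x) = 1/x, gamma(x) ~ x^2
inv_gamma = lambda s: 1.0 / s**2
c = 1.0

# Accessibility integral at 0 diverges like 1/eps: 0 is inaccessible.
acc = [feller_integral(e, c, inv_gamma, gamma) for e in (1e-2, 1e-3)]
assert acc[1] > 5 * acc[0]

# The entrance integral converges as eps -> 0: 0 is an entrance boundary.
ent = [feller_integral(e, c, gamma, inv_gamma) for e in (1e-2, 1e-3)]
assert abs(ent[1] - ent[0]) < 1e-2
```

Shrinking the truncation $\varepsilon$ by a factor of 10 multiplies the accessibility integral by roughly 10 (divergence), while the entrance integral barely moves (convergence to $1/6$ in this example).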
Except where otherwise indicated, we will always assume that the
boundary point $\infty$ is inaccessible.\vadjust{\goodbreak}
It is easy to check that the boundary point $r$ is regular if and only
if $\int_c^{r}\gamma(x)\,dx < \infty$ and $\int_c^{r}\gamma(x)^{-1}\,
dx < \infty$. A boundary point is thus regular in the sense of Feller
if and only if it is regular in the sense of Weyl; cf. \citet{JR76}.
Let us recall the relevant definition from the Weyl theory of
self-adjoint extensions of singular Sturm--Liouville operators
$L^{\kappa} =- \frac{1}{2\gamma}\frac{d}{dx}(\gamma\frac
{d}{dx})+\kappa$ in $(0,r)$, adapted to our special
situation.\vspace*{-2pt}
\begin{definition}\label{lplc}
We say that boundary $r$ is of \textit{limit-point type}, if there
exists $c \in(0,r)$ and $z\in\mathbb{C}$, and a solution $f$ of
$(L^{\kappa}-z)f = 0$ such that $\int_c^{r}|f(y)|^2\gamma(y)\,dy =
\infty$. If there exists $c \in(0,\infty)$, such that for every
solution of the equation $(L^{\kappa}-z)f = 0$ the integral $\int
_{c}^{r}|f(y)|^2\gamma(y)\,dy$ is finite, then we say that $r$ is of
\textit{limit-circle type}. The analogous definition applies to the
boundary point~$0$.\vspace*{-2pt}
\end{definition}
A fundamental result in the theory of Sturm--Liouville operators is the
so-called Weyl alternative, which states that exactly one of the above
situations holds and that the limit-point/limit-circle classification
is independent of $z \in\mathbb{C}$ [see \citet{JR76}]. Moreover, if
we are in the limit-point case at $r$, then for every $z \in\mathbb
{C}\setminus\mathbb{R}$ there exists exactly one solution of the
equation $(L^{\kappa}-z)f = 0$ which satisfies $\int_c^r|f(y)|^2
\gamma(y)\,dy< \infty$. Roughly speaking, the limit-circle case at a boundary point
$r$ means that we have to specify boundary conditions at $r$ in order
to get a self-adjoint realization, whereas in the limit-point case at
$r$ no boundary conditions at $r$ are necessary.\vspace*{-2pt}
\subsection{Quasi-limiting and quasi-stationary behavior} \label{sec:quasi}
We say that $X_t$ \textit{converges from the initial distribution $\nu
$ to the quasistationary distribution $\varphi$ on compacta} if for
any positive $z$, and any Borel $A \subset[0,z]$
\[
\lim_{t\rightarrow\infty} \mathbb{P}_{\nu} ( X_t \in A |
X_t \leq z) =\frac{\int_A\varphi(y) \gamma(y)\,dy}{\int_0^{z}\varphi
(y) \gamma(y)\,dy};
\]
$X_t$ \textit{converges from the initial distribution $\nu$ to the
quasistationary distribution $\varphi$} if $\int_0^{\infty}\varphi
(y) \gamma(y)\,dy<\infty$, and for any Borel subset $A \subset
[0,\infty)$
\[
\lim_{t\rightarrow\infty} \mathbb{P}_{\nu} ( X_t \in A |
\tau_{\partial} > t) =\frac{\int_A\varphi(y) \gamma(y)\,dy}{\int
_0^{\infty}\varphi(y) \gamma(y)\,dy}.
\]
Finally we say that $X_t$ \textit{escapes from the initial
distribution $\nu$ to infinity} if
\[
\lim_{t \rightarrow\infty} \mathbb{P}_{\nu} ( X_t \leq z |
\tau_{\partial}> t ) = 0.\vspace*{-2pt}
\]
\begin{remark}\label{qsl}
In the literature there is no completely standard terminology for
quasistationary distributions. The probability measure $\frac{\varphi
(y) \Gamma(dy)}{\int_0^{\infty}\varphi(y) \gamma(y)\,dy}$ described
here is sometimes also called a quasi-limiting distribution.
A~quasistationary distribution is often defined as a
probability measure $\tilde{\nu}$ supported in $(0,\infty)$ satisfying
\[
\mathbb{P}_{\tilde{\nu}} (X_t \in A |\tau_{\partial} >
t ) = \tilde{\nu}(A) \qquad \forall\mbox{ Borel sets } A\subset
(0,\infty), t > 0.\vadjust{\goodbreak}
\]
Quasilimiting distributions are also called Yaglom limits. It is not
difficult to see that quasilimiting distributions are also
quasistationary distributions.\vspace*{-2pt}
\end{remark}
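A finite-state caricature (an illustrative assumption; the text's setting is a diffusion) shows the relation between the two notions: for an irreducible substochastic matrix, the conditioned law converges to the normalized left Perron eigenvector, which is simultaneously the Yaglom limit and a quasistationary distribution.

```python
import numpy as np

# Substochastic transition matrix on states {1, 2, 3}; the missing row
# mass is absorption ("killing").  Entries are a hypothetical example.
Q = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.4, 0.3],
              [0.1, 0.3, 0.4]])

nu = np.array([1.0, 0.0, 0.0])    # start in state 1

# Conditioned law P_nu(X_t = . | tau > t) after many steps
mu = nu.copy()
for _ in range(200):
    mu = mu @ Q                   # sub-probability vector at time t
yaglom = mu / mu.sum()

# Quasistationary distribution: normalized left Perron eigenvector of Q
w, V = np.linalg.eig(Q.T)
qsd = np.real(V[:, np.argmax(np.real(w))])
qsd = qsd / qsd.sum()

assert np.allclose(yaglom, qsd, atol=1e-8)
```

Dividing by the sum fixes the arbitrary sign of the eigenvector returned by the solver; the convergence of the conditioned law is geometric with rate given by the spectral gap of $Q$.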
\subsection{Previous results} \label{sec:previous}
Observe that, by \eqref{possolutions}, $\varphi
(\lambda_0^{\kappa},\cdot)$ is positive. \citet{quasistat} proved
a slightly weaker version of the following result; their additional
assumptions on $b$ and $\kappa$ are easily seen to be unnecessary.\vspace*{-2pt}
\begin{theorem}[{[Theorem 3.3 in \citet{quasistat}]}]\label{Thm33}
Assume that $\infty$ is a natural boundary point and that we are in
the limit-point case at $\infty$. Suppose that either
\[
\liminf_{x \rightarrow\infty}\kappa(x) > \lambda_0^{\kappa}
\qquad \mbox{or} \qquad \limsup_{x\rightarrow\infty}\kappa(x) < \lambda
_0^{\kappa}.
\]
Then either $X_t$ converges to the quasistationary distribution
$\varphi(\lambda_0^{\kappa},y)\gamma(y)\,dy/\break\int_0^{\infty}\varphi
(\lambda_0^{\kappa},y)\gamma(y)\,dy$, or $X_t$ escapes to infinity. In
the case $\liminf\kappa(x) > \lambda_0^{\kappa}$, $X_t$~converges
to the quasistationary distribution $\varphi(\lambda_0^{\kappa
},\cdot)$ if and only if $\int_0^{\infty}\varphi(\lambda_0^{\kappa
},y) \gamma(y)\,dy$ is finite.\vspace*{-2pt}
\end{theorem}
A priori it would not have been clear that the conditional distribution
converges, and that the mass cannot split, with part of the mass
remaining on a compact interval and the remainder escaping to infinity.
Having recognized that there is a dichotomy, it is natural to then
seek a simple criterion for discriminating between the cases: escape or
convergence. One such criterion is given in \citet{quasistat}:
$X_t$ converges to quasistationarity when $\lambda_0^{\kappa}
< K := \lim_{x \rightarrow\infty}\kappa(x)$ together with the
growth bound
{\renewcommand{\theequation}{\mathit{GB}'}
\begin{equation}\label{eqGB1}
\exists\tilde{b}, \tilde{\kappa}\geq0\
\forall y \mbox{ large enough: } |b(y)| \leq\tilde{b}y \mbox{
and } \kappa(y)\leq\tilde{\kappa}y
\end{equation}}
\vspace*{-\baselineskip}
\noindent or the related bound
{\renewcommand{\theequation}{\mathit{GB}''}
\begin{eqnarray}
&&\exists\bar{b}_1, \bar{b}_2, \bar{\kappa},
\beta\geq0 \ \forall y \mbox{ large enough: }\bar{b}_1y^{\beta
}\geq b(y)\geq-\bar{b}_1y, b'(y)\geq-\bar{b}_2y^2\nonumber\hspace*{-35pt}\\[-8pt]\\[-8pt]
&& \hphantom{\exists\bar{b}_1, \bar{b}_2, \bar{\kappa},
\beta\geq0 \ \forall y \mbox{ large enough: }}\mbox{and } \kappa(y)\leq
\bar{\kappa}y.\nonumber\hspace*{-35pt}
\end{eqnarray}}
\vspace*{-\baselineskip}
\noindent While these conditions are satisfied in many applications, they are,
from a~theoretical point of view, unsatisfactory. In particular, it
seems peculiar that an upper bound on the killing rate as in (\ref{eqGB1})
should be necessary. On the contrary, increasing the killing rate
$\kappa$ should, from a heuristic point of view, only strengthen the
convergence to quasistationarity.\vspace*{-2pt}
\begin{remark}
We make use of Theorem \ref{Thm33} only in the case
\[
\lambda
_0^{\kappa} > \lim_{x \rightarrow\infty}\kappa(x)
\]
and $\Gamma ((0,\infty)) < \infty$. In the other cases we use different
techniques. In the next chapter we will show that $\infty$ is always
in the limit-point case. As emphasized and explained in \citet{quasistat}, in this case the heuristic behind Theorem~\ref{Thm33} is
quite clear, but the translation of this idea into formal mathematics
is not trivial.\vadjust{\goodbreak}
\end{remark}
\section{Analytic results}
In this chapter we derive several key analytic facts about the spectra
of generators and resolvents. While some of these are standard in the
theory of Sturm--Liouville operators, and well known to specialists in
that field, they are less familiar to probabilists, and we explain them
in some detail here. In Section \ref{sec:FW} we show that the
technical conditions for a limit-point boundary at~$\infty$ may be
weakened. Section \ref{sec:SLspec} derives basic results linking the
spectrum and speed measure. Section \ref{sec:LPHI} presents the
standard parabolic Harnack inequality in the form that we will be
using. Section \ref{sec:SRLT} applies the analytic results to
convergence on compacta. Section~\ref{sec:ID} explains why strong
conditions on the initial conditions are unnecessary. Finally, Section
\ref{sec:entrance} generalizes the results to the case of an entrance
boundary at $\infty$.
\subsection{Classification of boundary points} \label{sec:FW}
We start by establishing a connection between the Feller classification
and the Weyl classification of boundary points. This has already been
investigated in \citet{nW85} for the case $\kappa= 0$; in that
work the author introduces the notion of a weak entrance boundary and
shows that one is in the limit-circle case if the boundary point is of
weak entrance type. We show that there are no weak entrance boundaries
at $\infty$ by proving that $\infty$ is in the limit-point case. The
proof we give is well known in the Schr\"{o}dinger case [see \citet{BMS02} for similar ideas in a much more general context]. We assume
regularity of the coefficients of the Sturm--Liouville expression,
although weaker assumptions would also suffice.
\begin{lemma} \label{L:SL}
Let the Sturm--Liouville expression $\tau f(x) = -\frac{1}{2\gamma
(x)} (\gamma(x) f'(x) )' + \kappa(x)f(x)$ be given. Assume
that $\gamma$ is strictly positive and locally Lipschitz in $(0,\infty
)$ and $\kappa\in\mathfrak{L}^2_{loc}([0,\infty))$ such that $\kappa(x)
\geq-C|x|^2+D$ for some constants $C,D\geq0$. Then we are in the
limit-point case at $\infty$.
\end{lemma}
\begin{pf}
We can assume, without loss of generality, that $D=0$ and that~$\gamma$
is continuous up to the boundary. The first assumption is obviously
harmless. If $\gamma$ is not continuous up to zero, we can consider the
differential expression in $(1,\infty)$ instead of $(0,\infty)$. This
shift does not change the Weyl-classification of $\tau$ at infinity.
Similarly, we may assume that the boundary condition at 0 is Dirichlet
($\alpha=\infty$), since the classification at infinity is unaffected
by the boundary condition at 0.
As usual in the theory of Sturm--Liouville operators we define the
maximal operator $T$ and the minimal operator $\widetilde{T}$
associated to the differential expression $\tau$ as
\begin{eqnarray*}
\mathcal{D}(T)&:=& \lbrace f \in\mathfrak{L}^2 | f, \gamma f' \mbox{
absolutely continuous in $(0,\infty)$, } \tau f \in\mathfrak{L}^2
\rbrace,
\\
Tf&:=&\tau f \quad \mbox{for $f \in\mathcal{D}(T)$}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{D}(\widetilde{T})&:=& \lbrace f \in\mathcal{D}(T)|
f\mbox{ has compact support in $(0, \infty)$} \rbrace,\\
\widetilde{T}f&:=& \tau f \quad \mbox{for $f \in\mathcal{D}(\widetilde{T})$},
\end{eqnarray*}
respectively. Let $T_{D}$ be the restriction of the maximal operator
$T$ to the domain
\[
\mathcal{D} = \lbrace f \in\mathcal{D}(T) | f(0) = 0
\rbrace,
\]
that is, we put Dirichlet boundary conditions at the boundary point $0$.
The deficiency indices (we refer to the short summary in the \hyperref[app]{Appendix})
of~$\widetilde{T}$ are $(1,1)$ if the limit-point case holds at
$\infty$, and $(2,2)$ if limit-circle holds at $\infty$. In the
former case, the maximal symmetric (self-adjoint) extensions of
$\widetilde{T}$ are one-dimensional; in the latter case, they are
two-dimensional. If~$T_{D}$ defines a symmetric operator---$\langle
f,T_{D}f\rangle\in\mathbb{R}$ for every $f \in\mathcal{D}$---then,
as an extension of $\widetilde{T}$, it cannot have dimension higher
than 2. But there is one free parameter
at 0; in the limit-circle case there would be two free parameters at
$\infty$. Thus, in the limit-circle case $T_{D}$ would be a
three-dimensional extension of $\widetilde{T}$, so it could not be
symmetric. If we show that $T_{D}$ is symmetric, it will follow that
the Sturm--Liouville problem is in the limit-point case at $\infty$.
Let $\varphi\in C^{\infty}_c(\mathbb{R})$ be such that $0\leq\varphi
\leq1$ and
\[
\varphi(x) =
\cases{
1, & \quad if $|x|\leq1$,\cr
0, & \quad if $|x| \geq2$.
}
\]
Further we set $\varphi_k(x) = \varphi(\frac{x}{k})$ ($k \in\mathbb
{N}$). This gives, for $f \in\mathcal{D}(T_{D})$ and $k \in\mathbb{N}$,
\begin{eqnarray}\label{intparts1}
\langle f,T_{D}f\rangle&=& \lim_{k\rightarrow\infty}\int_0^{\infty
}\varphi_k(x)^2\overline{f(x)}T_{D}f(x) \gamma(x)\,dx \nonumber \\
&=& \lim_{k\rightarrow\infty} \int_0^{\infty}\varphi
_k(x)^2\overline{f(x)} \biggl[-\frac{1}{2\gamma}(\gamma f')'(x)+
\kappa(x)f(x) \biggr]\gamma(x)\,dx \\
&=& \lim_{k\rightarrow\infty} \biggl[\frac{1}{2}\int_0^ {\infty
} (\varphi_k(x)^{2}\overline{f(x)} )'f'(x)\gamma(x)\,dx \nonumber\\
&&\hphantom{\lim_{k\rightarrow\infty} \biggl[}
{} + \int_0^{\infty}\varphi_k(x)^{2}\kappa
(x)|f(x)|^2\gamma(x)\,dx \biggr]\nonumber\\
&=& \lim_{k\rightarrow\infty} \biggl\lbrace\int_0^{\infty}\varphi
_k^2(x) \biggl(\frac{1}{2}|f'(x)|^2 + \kappa(x)|f(x)|^2 \biggr)\,\gamma
(x)\,dx \nonumber \\
&&\hspace*{69pt}{} + \int_0^{\infty}\varphi_k(x)\varphi_k'(x)\overline
{f(x)}f'(x)\gamma(x)\,dx \biggr\rbrace.\nonumber
\end{eqnarray}
Observe that in the third line the boundary term $\varphi_k^{2}\bar
{f}\gamma f'|_0^{\infty}$ coming from the integration by parts
vanishes, since $\gamma f'$ is continuous up to $0$ (since it satisfies
an ODE), $f(0)=0$ and $\varphi_k(x)$ is identically zero for $x$ large enough.
The first term on the right-hand side is real, and we have to prove
that the second term converges to $0$ as $k \rightarrow\infty$. We
have, by the Cauchy--Schwarz inequality and the properties of the
cut-off sequence $(\varphi_k)$,
\begin{eqnarray}\label{CauchySchwarzself}\qquad
&& \biggl|\int_0^{\infty}\varphi_k(x)\varphi_k'(x)\overline
{f(x)}f'(x)\gamma(x)\,dx \biggr| \nonumber \\
&&\qquad \leq \biggl(\int_0^{\infty}\varphi_k(x)^2|f'(x)|^2\gamma(x)\,dx\int
_0^{\infty}|\varphi_k'(x)|^2|f(x)|^2 \gamma(x)\,dx \biggr)^{
{1}/{2}}\\
&&\qquad \leq Ck^{-1} \biggl(\int_0^{\infty}\varphi_k(x)^2|f'(x)|^2 \gamma
(x)\,dx\int_k^{2k}|f(x)|^2\gamma(x)\,dx \biggr)^{{1}/{2}}.\nonumber
\end{eqnarray}
For the first integral on the right-hand side we integrate by parts in
a similar vein to \eqref{intparts1}. The assumptions on $\kappa$ as
well as the elementary inequality $|ab| \leq a^{2}/4 + b^2$ imply
\begin{eqnarray*}
&&\frac{1}{2}\int_0^{\infty}\varphi_k(x)^2|f'(x)|^2\gamma(x)\,dx \\
&&\qquad =
\int_0^{\infty}\varphi_k(x)^{2}\overline{f(x)}(T_{D}f(x)) \gamma
(x)\,dx \\
&&\qquad \quad {} - \int_0^{\infty}\varphi_k(x)^2\kappa(x)|f(x)|^2 \gamma
(x)\,dx - \int_0^{\infty}\varphi_k\varphi_k'(x)\overline{f(x)}f'(x)
\gamma(x)\,dx\\
&&\qquad \leq\int_0^{\infty}\varphi_k(x)|f(x)||T_{D}f(x)| \gamma
(x)\,dx + C \int_0^{\infty}\varphi_k^2x^2|f(x)|^2 \gamma(x)\,dx\\
&&\qquad \quad {} +\int_0^{\infty}|\varphi_k(x)\varphi
_k'(x)\overline{f(x)}f'(x)| \gamma(x)\,dx\\
&&\qquad \leq \int_0^{\infty}\varphi_k(x)|f(x)||T_{D}f(x)| \gamma
(x)\,dx + C \int_0^{\infty}\varphi_k^2x^2|f(x)|^2 \gamma(x)\,dx\\
&&\qquad \quad {} +\int_0^{\infty} \biggl[\frac{1}{4}|\varphi
_k(x)f'(x)|^2+ |\varphi_k'(x)\overline{f(x)}|^{2} \biggr] \gamma
(x)\,dx\\
&&\qquad \leq \|f\|\|T_{D}f\| + C(2k)^2\|f\|^2\\
& &\qquad \quad {} +\frac{1}{4}\int_0^{\infty}\varphi
_k(x)^2|f'(x)|^2 \gamma(x)\,dx + M^2 k^{-2}\|f\|^2,
\end{eqnarray*}
where $M:=\sup_{x}|\varphi'(x)|$, so that $|\varphi_k'|\leq M/k$.
This yields
\[
\int_0^{\infty}\varphi_k(x)^2|f'(x)|^2 \gamma(x)\,dx \leq4\|f\|\|
T_{D}f\| + 16Ck^2\|f\|^2 + 4 M^2k^{-2}\|f\|^2
\]
and therefore for large $k$
\begin{equation}\label{growthink}
\int_0^{\infty}\varphi_k(x)^2|f'(x)|^2\gamma(x)\,dx \leq C_1 + C_2k^2 \leq C_3k^{2}.
\end{equation}
Thus inequalities \eqref{CauchySchwarzself} and \eqref{growthink}
imply that [observe that $f \in L^2((0,\infty),\gamma)$]
\begin{eqnarray*}
\biggl|\int_0^{\infty}\varphi_k(x)\varphi_k'(x)\overline
{f(x)}f'(x) \gamma(x)\,dx \biggr| &\leq& Ck^{-1} \biggl(C_3k^2\int
_k^{2k}|f(x)|^2 \gamma(x)\,dx \biggr)^{{1}/{2}}\\
&\rightarrow&0,
\end{eqnarray*}
as $k\rightarrow\infty$. This proves that $T_{D}$ is symmetric, and
so completes the proof.
\end{pf}
\subsection{The spectrum of Sturm--Liouville operators} \label{sec:SLspec}
We begin with a version of the spectral theorem for self-adjoint
operators on a Hilbert space, specifically adapted to Sturm--Liouville
operators. A proof of it can be found in general references on the
theory of Sturm--Liouville or Schr\"{o}dinger operators, such as \citet{GZ06}, \citet{CL90}, \citet{aZ05}.
Let $\tau=-\frac{1}{2\gamma}\frac{d}{dx} (\gamma\frac{d}{dx})
+ \kappa$ be a Sturm--Liouville expression which is regular at $0$ and
in the limit-point case at infinity, and let $H$ be the self-adjoint
realization of $\tau$ in $\mathfrak{L}^2$ with boundary condition \eqref
{E:fellerbound} at $0$. Let $\varphi(z,\cdot)$ be the unique solution
of the ordinary differential equation $\tau\varphi(z,\cdot)=z\varphi
(z,\cdot)$ satisfying $\varphi(z,0)=1/(1+\alpha)$ and $\frac
{1}{2}\varphi'(z,0)=\alpha/(1+\alpha)$.
Given a continuous function $F\in C(\mathbb{R})$ and a $\sigma
$-finite measure $\mu$ on $\mathbb{R}$, we have a corresponding
maximal multiplication operator $M_{F}$ on $\mathfrak{L}^{2}(\mathbb{R}, \mu
)$ defined by
\begin{eqnarray*}
\mathcal{D}(M_{F})&=& \{g\in\mathfrak{L}^{2}(\mathbb{R}, \mu) \mbox{
s.t. }gF\in\mathfrak{L}^{2}(\mathbb{R}, \mu) \},\\
M_{F}(g)&=&Fg.
\end{eqnarray*}
\begin{theorem}[(Weyl's spectral theorem)] \label{weylspectraltheorem}
There exists a measure $\sigma$ whose support is $\Sigma(H)$, such
that the map taking a compactly supported function $h\in\mathfrak{L}
^2((0,\infty),\Gamma) $ to the function $\hat{h}\in\mathfrak{L}^2(\Sigma
(H), \sigma)$, defined by
\[
\hat{h}(\cdot)= \int_0^{\infty}h(x)\varphi(\cdot,x) \gamma(x)\,dx
\]
may be uniquely extended to a unitary mapping $U\dvtx\mathfrak{L}^{2}((0,\infty
),\Gamma)\to\mathfrak{L}^{2}(\Sigma(H),\sigma)$ with the property
\[
U F(H)U^{-1} = M_F.
\]
The spectrum of $H$ is simple, and $\Sigma(F(H)) = \operatorname{ess\,
ran}_{\sigma}(F)$.
\end{theorem}
The spectrum $\Sigma(A)$ of a self-adjoint operator $A$ may be divided
into two components: the essential spectrum $\Sigma_{\mathrm{ess}}(A)$,
comprising the limit points and eigenvalues of infinite multiplicity,
and the discrete part $\Sigma_d(A)$, comprising the isolated
eigenvalues of finite multiplicity. In the Sturm--Liouville case every
eigenvalue has finite multiplicity (no more than 2), so the essential
spectrum consists only of limit points of the spectrum. It is well
known that the essential part of the spectrum of self-adjoint operators
is invariant with respect to relatively compact perturbations [see
Theorem 9.15 of \citet{jW00}]. [We recall that an operator
$V\dvtx X\rightarrow X$ on the Banach space $X$ is called relatively compact
with respect to $T\dvtx X\to X$ if $\mathcal{D}(T) \subset\mathcal
{D}(V)$, and if for some $z \in\mathbb{C}\setminus\Sigma(T)$ the
operator $V(T-z)^{-1}$ is compact. We refer to Section 9.2 of
\citet{jW00} for further details.]
The core of our results is contained in the following analytic lemma,
which catalogs some of the key linkages among the base of the spectrum,
the scale measure and the speed measure. These take us beyond the
results of Theorem \ref{Thm33}, by separating the influence of the
drift from the effect of the killing term. Moreover, they show clearly
why the case $\lambda_0^{\kappa} < K$ will turn out to be easier than
the case $\lambda_0^{\kappa}>K$. The major results---particularly
Theorems \ref{generallocalMandl}, \ref{limkbiggerev}, \ref{qsddrift}
and \ref{Escape}---will in essence be just unpacking these analytic
results in probabilistic terminology.
\begin{lemma}\label{spectrum}
With the above definitions:
\begin{enumerate}[(vii)]
\item[(i)] if $\lim_{x \rightarrow\infty}\kappa(x) = K$, then
$\Sigma_{\mathrm{ess}}(L^{0,\alpha})+K = \Sigma_{\mathrm{ess}}(L^{\kappa,\alpha
})$;\hypertarget{it:specess1}{}
\item[(ii)] $\lambda_0^{0,\infty} > 0$ and $\int_0^{\infty}\gamma
(x)^{-1}\,dx=\infty$ imply $\Gamma(\mathbb{R}_+) =\int_{0}^{\infty}\gamma(x)\,dx <
\infty$; \hypertarget{it:speedfinite}{}
\item[(iii)] $\lambda_0^{0,\alpha} > 0$ and $\Gamma ([0, \infty
) ) = \infty$ imply $\lambda_{0}^{0,0}>0$;\hypertarget{it:neumannbottom}{}
\item[(iv)] $\lambda_0^{0,\alpha} > 0$ and $\Gamma ([0, \infty
) ) = \infty$ imply $\limsup_{r \rightarrow\infty}\frac
{1}{r}\log\Gamma([0,r)) > 0$;\hypertarget{it:speedexpon}{}
\item[(v)] if $\lambda_0^{\kappa,\alpha} < \liminf_{x\rightarrow
\infty}\kappa(x)$, then $\lambda_0^{\kappa}$ is a simple isolated
eigenvalue with a unique positive eigenfunction;\hypertarget{it:isolated}{}
\item[(vi)] if $\alpha>0$ (not pure reflection at 0) or if $ \Gamma
([0,\infty)) = \infty$, then 0 is not an isolated eigenvalue of
$L^{0,\alpha}$;\hypertarget{it:noisolated}{}
\item[(vii)] if $\alpha>0$ (not pure reflection at 0) or if $ \Gamma
([0,\infty)) = \infty$, then $\lambda_0^{\kappa} > \limsup_{x
\rightarrow\infty}\kappa(x)$ implies $\lambda_0^{0,\alpha} > 0$.\hypertarget{it:lambda0}{}
\end{enumerate}
\end{lemma}
\begin{pf}
{Assertion} \hyperlink{it:specess1}{(i)} can be derived from the fact that
the essential spectra of two self-adjoint operators $L_1$ and $L_2$
coincide if for some $z \in\mathbb{C}\setminus(\Sigma(L_1)\cup
\Sigma(L_2))$ the difference
\[
(L_1-z)^{-1}-(L_2-z)^{-1}
\]
is a compact operator; cf. Theorem 9.15 of \citet{jW00}.
Set $\kappa_n(t) =\mathbf{1}_{[0,n]}(t)(\kappa(t)-K)$. The resolvent
equation gives for $z \in\mathbb{C}\setminus\mathbb{R}$
\begin{eqnarray*}
(L^{\kappa_n}-z)^{-1} - (L-z)^{-1} &=& (L^{\kappa
_n}-z)^{-1}(L-L^{\kappa_n})(L-z)^{-1} \\
&=& -(L^{\kappa_n}-z)^{-1}\kappa_n(L-z)^{-1}.
\end{eqnarray*}
Observe now that the operator $\kappa_n(L-z)^{-1}$ is compact; that
is, the operator acting by multiplication with $\kappa_n$ is
relatively compact with respect to the operator $L$. This can be seen
by considering the explicit form of the resolvent [see Chapter~3.3 in
\citet{JR76}; similar results can be found in \citet{CL55}]. We have
\begin{eqnarray*}
&&[\kappa_n(L-z)^{-1}]g(x) \\
&&\qquad = \kappa_n(x) \frac{1}{W(v,u)} \biggl(
v(x)\int_0^x u(y)g(y)\gamma(y)\,dy + u(x)\int_x^{\infty}v(y)g(y)\gamma
(y)\,dy \biggr)
\end{eqnarray*}
where $u$ and $v$ are linearly independent solutions of
\begin{eqnarray}
(\tau-z)w = 0 \mbox{ satisfying }\nonumber \\
\eqntext{\displaystyle u(0)=\frac{1}{1+\alpha}, \frac{1}{2}u'(0)=\frac{\alpha
}{1+\alpha} \mbox{ and }
\int_1^{\infty} |v(y)|^2 \gamma(y)\,dy<\infty.}
\end{eqnarray}
Observe that here we use the fact that we are in the limit-point case
at infinity.
The Wronskian $W(f,g)$ of two locally absolutely continuous functions~$f$
and $g$ is defined by
\[
W(f,g)(x) = [f(x) g'(x)- f'(x)g(x) ]\gamma(x).
\]
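As a numerical aside (our own sketch, not part of the argument), one can check that this weighted Wronskian is indeed constant in $x$ along two solutions of $(\tau-z)w=0$. We assume, purely for illustration, the speed density $\gamma(x)=e^{-2bx}$ with $b=0.7$, for which $\tau w = -\frac{1}{2}w''+bw'$ and the equation becomes $w''=2(bw'-zw)$:

```python
import math

b, z = 0.7, -1.0                          # assumed drift and spectral parameters
gamma = lambda x: math.exp(-2 * b * x)    # assumed speed density

def rhs(x, w, dw):
    # (tau - z) w = 0 with tau w = -w''/2 + b w'  =>  w'' = 2(b w' - z w)
    return 2 * (b * dw - z * w)

def rk4(w, dw, x0, x1, n=2000):
    # integrate the system (w, w') from x0 to x1 by classical Runge--Kutta
    h = (x1 - x0) / n
    x = x0
    for _ in range(n):
        k1w, k1d = dw, rhs(x, w, dw)
        k2w, k2d = dw + h/2*k1d, rhs(x + h/2, w + h/2*k1w, dw + h/2*k1d)
        k3w, k3d = dw + h/2*k2d, rhs(x + h/2, w + h/2*k2w, dw + h/2*k2d)
        k4w, k4d = dw + h*k3d, rhs(x + h, w + h*k3w, dw + h*k3d)
        w += h/6*(k1w + 2*k2w + 2*k3w + k4w)
        dw += h/6*(k1d + 2*k2d + 2*k3d + k4d)
        x += h
    return w, dw

def wronskian(x):
    u, du = rk4(1.0, 0.0, 0.0, x)   # solution with u(0)=1, u'(0)=0
    v, dv = rk4(0.0, 1.0, 0.0, x)   # solution with v(0)=0, v'(0)=1
    return (u * dv - du * v) * gamma(x)

# W(u,v)(x) gamma(x) should stay at its value at 0, namely 1
print(round(wronskian(0.5), 6), round(wronskian(2.0), 6))
```

The constancy follows from $(\gamma(uv'-u'v))' = 0$, using $\gamma'=-2b\gamma$ and the equation itself.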
Thus $\kappa_n(L-z)^{-1}$ is an integral operator in $\mathfrak{L}^2$ with
kernel $k(\cdot,\cdot)$ given by
\[
k(x,y) =
\cases{
W(v,u)^{-1}\kappa_n(x)v(x)u(y), &\quad if $ y \leq x$, and \cr
W(v,u)^{-1}\kappa_n(x)v(y)u(x), &\quad if $ y \geq x$.
}
\]
The known properties of $u$ and $v$ imply
\begin{eqnarray*}
&&\int_0^{\infty}\int_0^{\infty}|k(x,y)|^2 \gamma(y) \gamma(x)\,dy
\,dx \\
&&\qquad = \frac{1}{W(v,u)^2}\int_0^{n} \biggl(|v(x)|^2\int_0^x|u(y)|^2
\gamma(y) \,dy + |u(x)|^2\int_x^{\infty}|v(y)|^2 \gamma(y)\,dy \biggr)\\
&&\qquad \quad \hphantom{\frac{1}{W(v,u)^2}\int_0^{n}}
{}\times|\kappa_n(x)|^{2}\gamma(x)\,dx \\
&&\qquad < \infty.
\end{eqnarray*}
Thus $\kappa_n(L-z)^{-1}$ is Hilbert--Schmidt, hence also compact.
We complete the proof by observing that the resolvent equation
\[
(L^{\kappa_n+K}-z)^{-1} - (L^{\kappa}-z)^{-1} = (L^{\kappa
_n+K}-z)^{-1}(\kappa- \kappa_n-K)(L^{\kappa}-z)^{-1},
\]
implies
\begin{eqnarray*}
\|(L^{\kappa_n+K}-z)^{-1} - (L^{\kappa}-z)^{-1} \| &\leq&\|
(L^{\kappa_n+K}-z)^{-1}\| \|\kappa-\kappa_n-K\|_{\infty} \|(L^{\kappa
}-z)^{-1}\| \\
&\leq&\frac{1}{(\Im z)^2} \|\kappa-\kappa_n-K\|_{\infty}
\rightarrow0
\end{eqnarray*}
as $n\rightarrow\infty$; that is, $L^{\kappa_{n}+K}$ converges in
the norm-resolvent sense to $L^{\kappa}$. In the second inequality we
used the fact that the operator norm of the operator that acts as
multiplication by a function $f$ is just the supremum norm of
$f$.
{Assertion} \hyperlink{it:speedfinite}{(ii)} is contained in \citet{MSM01}
and also follows from Theorem 1 of the recent work \citet{rP09}.
[In \citet{rP09} somewhat stronger conditions on the drift are imposed,
but these are not actually necessary for the proof.]
{Assertion} \hyperlink{it:neumannbottom}{(iii)}: Because $L^{0,\alpha}$ and
$L^{0,0}$ differ only in their (one-dimensional) boundary conditions,
the difference $(L^{0,0}+1)^{-1}-(L^{0,\alpha}+1)^{-1}$ has
one-dimensional range, so it is compact. For more details, see
Satz~10.17 of \citet{jW00}. The bottom of the essential spectrum of $L^{0,0}$
is then strictly positive, since it coincides with the bottom of the
essential spectrum\break of~$L^{0,\alpha}$, hence is above the bottom of the
full spectrum of $L^{0,\alpha}$. If $\lambda_0^{0,0}:=\inf\Sigma(L^{0,0})=0$,
then $0$ is necessarily an isolated eigenvalue of $L^{0,0}$. Assume, for
contradiction, that $\lambda_0^{0,0} = 0$. The unique (up to positive multiples)
nontrivial and nonnegative eigenfunction $v_{N} \in\mathfrak{L}^2$
associated to $\lambda_0^{0,0}=0$ therefore solves the boundary value problem
\[
L^{0,0} v_{N} = \lambda_0^{0,0} v_{N} = 0, \qquad v_{N}(0) = 1 \quad
\mbox{and}\quad \frac{1}{2}\frac{dv_{N}}{dx}(0) = 0.
\]
Since this ordinary differential equation has a unique solution, and
since the constant function $\mathbf{1}$ is also a solution of this
equation, we conclude that $v_N = \mathbf{1}$. Thus $\mathbf{1}\in
\mathfrak{L}^2$, which means that $\Gamma ((0,\infty) )<\infty$,
contradicting our assumption that $\Gamma$ is infinite. It follows that
$\lambda_0^{0,0}>0$.
{Assertion} \hyperlink{it:speedexpon}{(iv)} follows from the above and the
work of \citet{lN98}. His result implies that the bottom
of the essential spectrum of the operator~$L^{0,0}$ is bounded above by
$\limsup_{r \rightarrow\infty}\frac{1}{r}\log\Gamma((0,r))$. This
is $0$ if the volume growth is subexponential. Since we have already
shown that $\lambda_{0}^{0,0}>0$, the result follows.
{Assertion} \hyperlink{it:isolated}{(v)}: Assume first that $\lim_{x
\rightarrow\infty}\kappa(x)$ exists. If $\lambda^{\kappa}_0 < \lim
_{x\to\infty}\kappa(x) = K$, then an application of the result \hyperlink{it:specess1}{(i)} shows that $L^{\kappa} = L + K + (\kappa-K)$ has the
same essential spectrum as $L + K$. Since $L$ is a positive operator,
the bottom of the spectrum of $L + K$, hence a fortiori of the
essential spectrum, has to be at least $K$, hence bigger than $\lambda_{0}^{\kappa}$,
which implies \hyperlink{it:isolated}{(v)}.
Let us now assume only that $\liminf_{x \rightarrow\infty}\kappa
(x)>\lambda_0^{\kappa}$. By the decomposition principle [see Section
131 in \citet{AG81}]
it is not difficult to see that $L^{\kappa}$ has the same essential
spectrum as the operator $L^{\kappa}_a$ ($a>0$), defined as the
self-adjoint extension of $\tau^{\kappa}$ in $\mathfrak{L}^2((a,\infty),\Gamma
)$ satisfying Dirichlet boundary conditions at $a$.
If $a_0>0$ and $\varepsilon>0$ are such that $\inf_{x\geq a_0}\kappa
(x)>\lambda_0^{\kappa}+\varepsilon$ we conclude that
\begin{eqnarray*}
\inf\Sigma_{\mathrm{ess}}(L^{\kappa}) &\geq&\inf\Sigma(L_{a_{0}}^{\kappa})
\\
&\geq&\mathop{\inf_{\varphi\in C^{\infty}_c(a_0,\infty)}}_{\|
\varphi\|_{\mathfrak{L}^2((a_0,\infty),\Gamma)}=1} \int_{a_0}^{\infty
}|\varphi'(x)|^2 \gamma(x)\,dx + \int_{a_0}^{\infty}\kappa
(x)|\varphi(x)|^2\gamma(x)\,dx \\
&\geq&\lambda_0^{\kappa}+\varepsilon.
\end{eqnarray*}
Hence $\lambda_0^{\kappa}$ lies strictly below the essential spectrum, and the claim follows as before.
{Assertion} \hyperlink{it:noisolated}{(vi)}: Let $f,g$ be continuous
functions, nonnegative and not identically zero, with compact support
on $(0,\infty)$, and let $\lambda_{1}:= \inf (\Sigma(L^{0,\alpha
})\setminus\{0\} )$, where $\alpha>0$. Suppose 0 is an isolated
eigenvalue, so that $\lambda_{1}>0$. Then
\begin{eqnarray*}
\langle f, e^{-tL^{0,\alpha}}g \rangle&=& \int_{\Sigma
(L^{0,\alpha})} e^{-t\lambda}\,d \langle f, E^{0,\alpha} g \rangle
(\lambda)\\
&=& \langle f, E^{0,\alpha}(\{0\}) g \rangle+ \int
_{\lambda_{1}}^{\infty} e^{-t\lambda} \,d \langle f, E^{0,\alpha
} g \rangle(\lambda)\\
&\to&\langle f,u_{0}\rangle\langle g,u_{0}\rangle\qquad \mbox{as } t\to
\infty.
\end{eqnarray*}
This is positive, since $u_{0}$ may be chosen to be strictly positive.
By Lemma \ref{L:sqrtcompare} [observe that $e^{-tL^{0,\alpha}}g \in
\mathcal{D}(L^{0,\alpha})$ according to equation (3)] there is a
constant $C$ such that for all $x\in\operatorname{supp}(g)$,
\begin{eqnarray*}
|e^{-tL^{0,\alpha}}g(x) |&\le& C \bigl\| \sqrt{L^{0,\alpha}}
e^{-tL^{0,\alpha}} g \bigr\|\\
&\le& C \biggl(\int_{0}^{\infty} \lambda e^{-2\lambda t} d\|E^{0,\alpha
}_{\lambda} g\|^{2} \biggr)^{1/2}\\
&\le& C\|g\| e^{-\lambda_{1}t} \qquad \mbox{for $t$ sufficiently large},
\end{eqnarray*}
so that
\[
\langle f, e^{-tL^{0,\alpha}}g \rangle\le C\|g\| e^{-\lambda
_{1}t} \int_{0}^{\infty} f(x) \gamma(x)\,dx \to0
\]
as $t\to\infty$, which is a contradiction.
{Assertion} \hyperlink{it:lambda0}{(vii)}: Suppose first that the limit $K$
of $\kappa(x)$ exists. Intuitively, what we are saying is that when
the mass in a neighborhood of 0 shrinks at a rate faster than~$K$ (what
$\lambda_{0}^{\kappa}$ measures), it is being driven by drift: Either
the mass is being swept down into a region of high killing near 0, or
it is being swept up away from~0. In the latter case, the drift will
still cause the mass near 0 to shrink exponentially in the absence of
killing; in the former case, the killing at 0 will do the job, except
in the case of pure reflection at~0.\looseness=-1
By part \hyperlink{it:specess1}{(i)}, we see that $L^{\kappa} = L + K +
(\kappa-K)$ and $L + K$ have the same essential spectrum. In
particular we conclude that $\inf\Sigma_{\mathrm{ess}}(L) + K = \inf\Sigma
_{\mathrm{ess}}(L+K) \geq\lambda_0^{\kappa}$ and therefore $\inf\Sigma
_{\mathrm{ess}}(L) \geq\lambda_0^{\kappa} - K > 0$, so $\lambda_{0}^{0,\alpha}=0$ would
imply that $0$ is an isolated eigenvalue of $L^{0,\alpha}$. Since this is impossible, by
assertion~\hyperlink{it:noisolated}{(vi)}, it follows that $\lambda_0^{0,\alpha} > 0$. The
extension to the case when the limit does not exist goes exactly the
same way as in the proof of assertion \hyperlink{it:isolated}{(v)} above.
\end{pf}
\begin{remark}
$\!\!$It was shown in \citet{rP09} that conclusion \hyperlink{it:speedfinite}{(ii)}
of Lemma~\ref{spectrum} can be sharpened: assuming that absorption is
certain, one has
\begin{equation}\label{Mu}
\frac{1}{8A(b)} \leq\lambda_0 \leq\frac{1}{2A(b)},
\end{equation}
where
\[
A(b) = \sup_{x>0} \biggl(\int_{x}^{\infty} \gamma(y)\,dy \biggr)
\biggl(\int_0^x\gamma(y)^{-1}\,dy \biggr).
\]
Related analytic inequalities, usually referred to as weighted Hardy
inequalities, can be found in \citet{bM72}; indeed, the results there
imply Pinsky's bounds.
\end{remark}
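As a sanity check on \eqref{Mu} (our own illustration, not from the sources cited above), take Brownian motion with constant drift $-b$ killed at $0$, for which $\gamma(x)=e^{-2bx}$, absorption is certain, and the bottom of the spectrum is classically $\lambda_0=b^2/2$; the supremum defining $A(b)$ can be evaluated in closed form and checked numerically:

```python
import math

b = 0.7                                   # assumed drift; X is BM with drift -b killed at 0
# A = sup_x (int_x^infty gamma)(int_0^x 1/gamma) for gamma(x) = exp(-2 b x)
def product(x):
    tail = math.exp(-2 * b * x) / (2 * b)         # integral of gamma over (x, infinity)
    head = (math.exp(2 * b * x) - 1) / (2 * b)    # integral of 1/gamma over (0, x)
    return tail * head                            # = (1 - e^{-2 b x}) / (4 b^2)

A = max(product(0.01 * k) for k in range(1, 3001))  # supremum is 1/(4 b^2)

lam0 = b * b / 2   # classical bottom of the spectrum for this process
assert 1 / (8 * A) <= lam0 * (1 + 1e-9) and lam0 <= 1 / (2 * A)
print(1 / (8 * A), lam0, 1 / (2 * A))
```

Here $A(b)=1/(4b^2)$, so Pinsky's lower bound $1/(8A)=b^2/2$ is attained exactly in this example.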
\begin{remark}
The fact that the bottom of the spectrum is an isolated eigenvalue is
also of practical interest, because in this case the associated
eigenfunction can be approximated accurately by the ground states of
regular Sturm--Liouville operators on bounded intervals [see the recent
survey \citet{jW05}]. Such a result has recently been rederived
[\citet{dV09}] in the context of approximating the minimal quasistationary
distribution of a diffusion generator with discrete spectrum via
interacting particle systems of Fleming--Viot type.
\end{remark}
\begin{remark}\label{heatasympt}
Assume that $\lambda_0^{\kappa}$ is an eigenvalue with associated
eigenfunction $u_{\lambda_0^{\kappa}} \in\mathfrak{L}^2$, which by general
theory is strictly positive and simple. Then
\begin{equation} \label{E:heatasympt}
\lim_{t\rightarrow\infty}e^{\lambda_0^{\kappa}t}p^{\kappa}(t,x,y)
= c u_{\lambda_0^{\kappa}}(x)u_{\lambda_0^{\kappa}}(y),
\end{equation}
where $c$ is a normalizing constant. This was proved in \citet{bS93}
for the transition function of Brownian motion on Riemannian manifolds,
but the proof carries over without essential changes to our case.
\end{remark}
We will also make use of the following result, which is a special case
of Theorem~3.1 in \citet{quasistat}.
\begin{lemma}[{[Theorem 3.1 in \citet{quasistat}]}]\label{qsdcomp}
Let $0 \leq f \in\mathfrak{L}^2$ with compact support $\operatorname{supp}(f) \subset
[0,\infty)$ be given, and let $\nu_f$ denote the measure $f(x)\gamma
(x)\,dx$. Let $L^{\kappa}$ be as in Lemma \ref{spectrum} and let
$p^{\kappa}(t,\cdot,\cdot)$ denote the integral kernel of
$e^{-tL^{\kappa}}$. Then for arbitrary measurable bounded sets $A, B
\subset(0,\infty)$
\[
\lim_{t \rightarrow\infty}\frac{\int_0^{\infty}f(x)\int
_Bp^{\kappa}(t,x,y) \gamma(y) \gamma(x)\,dy\,dx}{\int_0^{\infty
}f(x)\int_Ap^{\kappa}(t,x,y) \gamma(y)\gamma(x)\,dy\,dx} = \frac
{\int_B\varphi(\lambda_0^{\kappa},y) \gamma(y)\,dy}{\int_A\varphi
(\lambda_0^{\kappa},y) \gamma(y)\,dy},
\]
that is, $X_t$ converges from the initial distribution $\frac{\nu
_f}{\int_0^{\infty}f(s) \gamma(s)\,ds}$ on compacta to the
quasistationary distribution $\varphi(\lambda_0^{\kappa},\cdot)$.
\end{lemma}
The above lemma can be proved directly using the spectral
representation for Sturm--Liouville operators. The reader will see the
necessary arguments later in this work in the proof of Theorem \ref
{generallocalMandl}. Our first goal is to extend this result to the
case of general compactly supported initial
distributions~$\nu$.\vadjust{\goodbreak}
We begin by deducing some consequences of Lemma \ref{qsdcomp}.
This will lead to Proposition \ref{ratiolimit}, which is a ``strong
ratio limit theorem.'' Before proving the strong ratio limit
theorem, we explain another analytic fact which has no direct relation
to spectral theory but which will turn out to be very useful.
\subsection{Local parabolic Harnack inequality} \label{sec:LPHI}
A crucial tool for smoothing analytic information about the transition
kernel between different times and sites is the local parabolic Harnack
inequality, which quite generally holds for second order parabolic
differential equations. One version appropriate to our current purposes
may be found in \citet{gL96}, and states that for fixed $x_0, t_0 \in
(0,\infty)$ and $R>0$ there is a constant $C$ such that for every weak
solution $u$ of $(\partial_t - L^{\kappa})u = 0$ which is
nonnegative in $Q((x_0,t_0),4R) \subset(0,\infty) \times(0,\infty)$,
\[
\sup_{\Theta((x_0,t_0),R/2)} u \leq C\inf_{Q((x_0,t_0),R)} u,
\]
where
\[
Q((x_0,t_0),R) = \bigl\lbrace(x,t) \in\mathbb{R}^2 |\max
\bigl(|x-x_0|,\sqrt{|t-t_0|} \bigr)<R,t<t_0\bigr\rbrace
\]
and $\Theta((x_0,t_0),R) = Q((x_0,t_0-R^2),R)$. As in Theorem 10 of
\citet{eD97} this inequality can be applied to the transition kernel
$p^{\kappa}(t,x,y)$ in order to prove that for every compact $K
\subset(0,\infty)$ and $T>0$ there is a constant $c=c(K,T)> 0$ such
that for $t\geq T$, $x_1,x_2,x_3,x_4 \in K$
\begin{equation} \label{E:harnack}
c^{-1}p^{\kappa}(t,x_1,x_2) \leq p^{\kappa}(t,x_3,x_4) \leq
cp^{\kappa}(t,x_1,x_2).
\end{equation}
Moreover the local parabolic Harnack inequality shows that there exists
a~locally bounded function $\zeta\dvtx(0,\infty)\rightarrow(0,\infty)$
such that for every $t \geq1$, $y>0$, and $x,z>0$ satisfying
$|z-x|<\frac{1}{2}\wedge\frac{|x|}{4}$
\begin{equation} \label{E:harnack2}
p^{\kappa}(t,x,y) \leq\zeta(x)p^{\kappa}(t+1,z,y).
\end{equation}
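To make \eqref{E:harnack} concrete, the following sketch (our own illustration, assuming standard Brownian motion killed at $0$, whose kernel is explicit by the reflection principle) evaluates the comparison constant on the compact set $[0.5,2]$ for several times:

```python
import math

def p(t, x, y):
    # transition density of standard Brownian motion killed at 0
    # (reflection principle); used here only as a concrete example kernel
    c = 1.0 / math.sqrt(2 * math.pi * t)
    return c * (math.exp(-(x - y) ** 2 / (2 * t)) - math.exp(-(x + y) ** 2 / (2 * t)))

K = [0.5 + 0.1 * k for k in range(16)]        # compact set K = [0.5, 2.0]
c_harnack = max(p(t, x1, y1) / p(t, x2, y2)
                for t in (1.0, 10.0, 100.0)
                for x1 in K for y1 in K for x2 in K for y2 in K)
# the comparison constant stays bounded over K and t >= 1, as (E:harnack) asserts
print(c_harnack)
assert 1.0 <= c_harnack < 20.0
```

The constant depends on the compact set and on the lower time bound $T$, but not on $t\geq T$.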
\subsection{Strong ratio limit theorem and convergence on compacta}
\label{sec:SRLT}
\begin{lemma}\label{weakcomp}
For any fixed $x_0 \in(0, \infty)$ the family of functions
\[
\biggl\lbrace[0,\infty) \times\mathbb{R}_+ \times\mathbb{R}_+\ni
(t,x,y)\mapsto\frac{p^{\kappa}(t+s,x,y)}{p^{\kappa}(s,x_0,x_0)}\Big|
s \geq1 \biggr\rbrace
\]
is relatively compact in the space $C([0,\infty)\times(0, \infty)^2, \mathbb{R})$ of
real-valued continuous functions, endowed with the
topology of locally uniform convergence.
\end{lemma}
\begin{pf}
Let $(s_n)_{n \in\mathbb{N}}$ be a sequence with $1 \leq s_n
\rightarrow\infty$, and set for $t\in[0,\infty)$, $x,y \in(0,
\infty)$
\[
r_n(t,x,y) = \frac{p^{\kappa}(t+s_n,x,y)}{p^{\kappa}(s_n,x_{0},x_{0})},\vadjust{\goodbreak}
\]
where $x_{0} \in(0,\infty)$ is fixed. The functions $(t,x,y) \mapsto
r_n(t,x,y)$ ($n\in\mathbb{N}$) are solutions to the parabolic
equation
\[
(2\partial_t + L^{\kappa}_x + L^{\kappa}_y)r_n(t,x,y)=0,
\]
where the operator $L_x^{\kappa}$ and $L_y^{\kappa}$ act as
$L^{\kappa}$ on $x$- and $y$-variable, respectively. By the local
parabolic Harnack inequality (see Section \ref{sec:LPHI}) we conclude
that for each compact set $K \subset(0,\infty)$ there exists a
constant $C_K$ such that for all $n\in\mathbb{N}$, $t \geq0$ and
$x,y,x_{0} \in K$
\[
p^{\kappa}(t+s_n,x,y) \leq C_K p^{\kappa}(t+s_n,x_{0},x_{0}).
\]
It follows from general spectral theory [see \citet{eD97}] that $r
\mapsto p^{\kappa}(r,x_0,x_0)$ is nonincreasing. Therefore we
conclude that for $t\geq0$ and $x,y \in K$
\[
\frac{p^{\kappa}(t+s_n,x,y)}{p^{\kappa}(s_n,x_{0},x_{0})} \leq C_K.
\]
Theorem 6.28 in \citet{gL96}
shows that the set $\lbrace r_n \mid n \in\mathbb{N}\rbrace$ is
locally uniformly equicontinuous. Therefore, by the Arzel\`a--Ascoli
theorem [\citet{jK55}, Theorem 17], there exists a subsequence
$(r_{n_k})_{k \in\mathbb{N}}$ which converges locally uniformly.
\end{pf}
The proof above is modeled on Theorem 2.2 of \citet{ABT02}. Since
that theorem assumed the operator was critical and the coefficients
were H\"{o}lder-continuous, some modification was required.
The analytic core of quasilimiting behavior is the convergence of
ratios of transition kernels, which we state and prove here as
Proposition \ref{ratiolimit}. This will imply convergence to the
quasistationary distribution on compacta, Theorem \ref
{generallocalMandl}. Convergence on the whole state space will then
require a~consideration of recurrence or transience, to decide
whether most of the mass stays in a compact interval or escapes to infinity.
Results comparing transition probabilities at different times and
sites, in the limit as time goes to infinity, are commonly referred to
as strong ratio limit theorems. Strong ratio limit theorems for certain
branching processes can be found in \citet{AN72}. A proof of the
strong ratio property for certain Markov chains on the integers was
given in \citet{hK95}.
\begin{Prop}\label{ratiolimit}
For any $a \in(0, \infty)$
\[
\lim_{s\rightarrow\infty}\frac{p^{\kappa}(t+s,x,y)}{p^{\kappa}(s,a,a)} = e^{-\lambda_{0}^{\kappa}
t}\frac{\varphi(\lambda^{\kappa}_0,x)\varphi(\lambda^{\kappa
}_0,y)}{\varphi(\lambda^{\kappa}_0,a)\varphi(\lambda^{\kappa}_0,a)}.
\]
\end{Prop}
\begin{pf}
For every sequence $(s_n)_{n\in\mathbb{N}} \subset(0,\infty)$
converging to infinity we know by Lemma \ref{weakcomp} that for some
subsequence $(s_{n_k})_k$ of $(s_n)$ there exists a~function $\psi$
such that
\[
\frac{p^{\kappa}(t+s_{n_k},x,y)}{p^{\kappa}(s_{n_k},a,a)}
\rightarrow\psi(t,x,y),
\]
where the convergence is locally uniform in $[0,\infty)\times
(0,\infty)^2$. Since by Lem\-ma~7.5 in \citet{quasistat} [see also
\citet{eD97}] for every $f \in\mathfrak{L}^2$ with compact support
\[
\lim_{s\rightarrow\infty}\frac{\langle e^{-(t+s)L^{\kappa
}}f,f\rangle}{\langle e^{-sL^{\kappa}}f,f\rangle} = e^{-\lambda
_0^{\kappa}t},
\]
one easily concludes that
\[
\psi(t,x,y) = e^{-\lambda_0^{\kappa}t}\psi(0,x,y).
\]
Lemma \ref{qsdcomp} shows that for every $f,g, h\in C^{\infty
}_0(0,\infty)$
\begin{eqnarray*}
&&\frac{\int_0^{\infty} g(y)\varphi(\lambda_0^{\kappa},y) \gamma
(y)\,dy}{\int_0^{\infty}h(y)\varphi(\lambda_0^{\kappa},y) \gamma
(y)\,dy} \\
&&\qquad = \lim_{k \rightarrow\infty}\frac{\int_0^{\infty}f(x)\int
_0^{\infty}g(y)p^{\kappa}(s_{n_k},x,y) \gamma(y)\gamma(x) \, dy \,
dx}{\int_0^{\infty}f(x)\int_0^{\infty}h(y)p^{\kappa}(s_{n_k},x,y)
\gamma(y)\gamma(x) \, dy \, dx} \\
&&\qquad = \lim_{k \rightarrow\infty}\frac{\int_0^{\infty}f(x)\int
_0^{\infty}g(y)({p^{\kappa}(s_{n_k},x,y)}/{p^{\kappa
}(s_{n_k},x_0,x_0)}) \gamma(y)\gamma(x) \, dy \, dx}{\int_0^{\infty
}f(x)\int_0^{\infty}h(y)({p^{\kappa}(s_{n_k},x,y)}/{p^{\kappa
}(s_{n_k},x_0,x_0)}) \gamma(y)\gamma(x) \, dy \, dx} \\
&&\qquad = \frac{\int_0^{\infty}f(x)\int_0^{\infty}g(y) \psi(0,x,y)
\gamma(y)\gamma(x) \, dy \, dx}{\int_0^{\infty}f(x)\int_0^{\infty
}h(y) \psi(0,x,y) \gamma(y)\gamma(x) \, dy \, dx} .
\end{eqnarray*}
This implies that for $x \in(0,\infty)$, $g,h\in C^{\infty
}_0(0,\infty)$
\begin{eqnarray*}
&&\frac{\int_0^{\infty} g(y)\varphi(\lambda_0^{\kappa},y) \gamma
(y)\,dy}{\int_0^{\infty}h(y)\varphi(\lambda_0^{\kappa},y) \gamma
(y)\,dy} \int_0^{\infty}h(y) \psi(0,x,y) \gamma(y)\,dy\\
&&\qquad =\int_0^{\infty
}g(y) \psi(0,x,y) \gamma(y)\,dy,
\end{eqnarray*}
and hence for every $h$
\[
\psi(0,x,y) = \varphi(\lambda_0^{\kappa},y)\frac{\int_0^{\infty
}h(z) \psi(0,x,z) \gamma(z)\,dz}{\int_0^{\infty}h(z)\varphi(\lambda
_0^{\kappa},z) \gamma(z)\,dz}.
\]
Due to the symmetry of $\psi(0,\cdot,\cdot)$ we conclude that for
some constant $c \geq0$
\[
\psi(0,x,y) = c \varphi(\lambda_0^{\kappa},x)\varphi(\lambda
_0^{\kappa},y).
\]
Since $\psi(0,a,a) = 1$, we arrive at $c^{-1} = \varphi(\lambda
_0^{\kappa},a)\varphi(\lambda_0^{\kappa},a)$. Since every sequence has a
subsequence converging to the same limit, the assertion of the proposition follows.
\end{pf}
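The proposition can be checked by hand in the simplest case (our own illustration): for standard Brownian motion killed at $0$ we have $\kappa=0$, $\lambda_0^{\kappa}=0$ and $\varphi(0,x)=x$, so the ratio should tend to $xy/a^2$:

```python
import math

def p(t, x, y):
    # kernel of standard Brownian motion killed at 0, via the reflection principle
    c = 1.0 / math.sqrt(2 * math.pi * t)
    return c * (math.exp(-(x - y) ** 2 / (2 * t)) - math.exp(-(x + y) ** 2 / (2 * t)))

x, y, a, t = 1.0, 2.0, 1.5, 5.0
for s in (1e2, 1e4, 1e6):
    # as s grows, the ratio tends to x*y/a**2 = 8/9
    print(s, p(t + s, x, y) / p(s, a, a))
```

Indeed $p(s,x,y)\sim (2\pi s)^{-1/2}\cdot 2xy/s$ for large $s$, so the ratio behaves like $(s/(t+s))^{3/2}\,xy/a^{2}\to xy/a^{2}$.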
\begin{Corollary}\label{C:ratiolimitintegrated}
If $\nu$ is any compactly supported initial distribution, and $f$ a
nonnegative compactly supported measurable function with $\nu[f]>0$,
then for any fixed $t$,
\[
|\log\mathbb{E}_{\nu}[f(X_{t+s})] -\log\mathbb{E}_{\nu}[f(X_{s})] |
\]
is bounded for $s\in\mathbb{R}^{+}$.
\end{Corollary}
\begin{pf}
Since $g(s):=\int\!\!\int p^{\kappa}(s,x,y)\gamma(y) f(y) \,dy\,d\nu(x)$ is
positive and continuous, it suffices to show that
\[
0<\liminf_{s\to\infty} \frac{g(t+s)}{g(s)}\le\limsup_{s\to\infty
} \frac{g(t+s)}{g(s)}<\infty.
\]
By \eqref{E:harnack} we may find positive $c$ such that for $s$
sufficiently large, and any $x_{*},y_{*}\in K:=\operatorname{supp}(\nu)\cup\operatorname{supp}(f)$,
\[
c^{-1} p^{\kappa}(s,x_{*},y_{*})\Gamma[f]\le g(s)\le c p^{\kappa
}(s,x_{*},y_{*})\Gamma[f].
\]
Thus,
\[
c^{-2} \frac{p^{\kappa}(s+t,x_{*},y_{*})}{p^{\kappa
}(s,x_{*},y_{*})}\le\frac{g(s+t)}{g(s)}\le c^{2} \frac{p^{\kappa
}(s+t,x_{*},y_{*})}{p^{\kappa}(s,x_{*},y_{*})}.
\]
By Proposition \ref{ratiolimit} the upper and lower bounds converge to
$c^{2}e^{-\lambda_{0}^{\kappa}t}$ and $c^{-2}e^{-\lambda_{0}^{\kappa}t}$, respectively, as $s\to
\infty$.
\end{pf}
\begin{remark}
\label{rem:Martin}
In terms of parabolic Martin boundary theory Proposition \ref
{ratiolimit} says that every sequence $(s_n,x) \subset(0,\infty)
\times(0,\infty)$ with $\lim_{n\rightarrow\infty}s_n = \infty$
converges in the parabolic Martin topology to the parabolic Martin
boundary point corresponding to the minimal parabolic function
$h_{\lambda_0^{\kappa}}(t,x) = e^{\lambda_0^{\kappa}t}\varphi
(\lambda_0^{\kappa},x)$. The parabolic function $h_{\lambda
_0^{\kappa}}$ must actually be invariant, since it corresponds to a
point in the parabolic Martin boundary whose time coordinate is $\infty$.
\end{remark}
\begin{remark}\label{Daviesconjecture}
The existence of strong ratio limits for general symmetric diffusions,
that is, the existence of
\[
\lim_{t\rightarrow\infty}\frac{p(t,x,y)}{p(t,x_0,y_0)},
\]
where $p(t,\cdot,\cdot)$ denotes the transition kernel of the
diffusion, was investigated under special conditions by \citet{eD97}
and is now often referred to as Davies's conjecture. In a private
communication, Gady Kozma disproved this conjecture by presenting a
counterexample. Proposition \ref{ratiolimit} shows that in one
dimension the Davies conjecture is true if one boundary point is
regular. It is an open question whether the Davies conjecture
holds in one dimension in general.
\end{remark}
\begin{Prop} \label{P:Harnackconstantlimit}
For all positive $z$, including $z=\infty$,
\[
\limsup_{t\to\infty} \frac{1}{t} \log\mathbb{P}_{x} \{ X_{t} \le
z \} \quad \mbox{and}\quad \liminf_{t\to\infty} \frac{1}{t}
\log\mathbb{P}_{x} \{ X_{t} \le z \}
\]
are both constant in $x>0$. Hence also
\[
\limsup_{t\to\infty} \frac{1}{t} \log\mathbb{P}_{x} \{ X_{t} \le z
|\tau_\partial>t \} \quad \mbox{and}\quad \liminf_{t\to\infty}
\frac{1}{t} \log\mathbb{P}_{x} \{ X_{t} \le z |\tau_\partial>t \}
\]
are both constant in $x>0$.
\end{Prop}
\begin{pf}
We prove only the first statement for $\limsup$, the other proof being
identical. Suppose we have $x,x'$ such that
\begin{equation} \label{E:xxprime}
\limsup_{t\to\infty} \frac{1}{t} \log\mathbb{P}_{x} \{ X_{t} \le z
\} < \limsup_{t\to\infty} \frac{1}{t} \log\mathbb{P}_{x'} \{
X_{t} \le z \} .
\end{equation}
We may assume without loss of generality that $|x-x'|< \frac
{1}{2}\wedge\frac{|x|}{4}\wedge\frac{|x'|}{4}$. (If not, then there
must be other starting points closer together where the limits differ.)
Applying \eqref{E:harnack2}, we have for all $t\geq1$,
\[
\mathbb{P}_{x'} \{ X_{t} \le z \} \leq\zeta(x')\mathbb{P}_{x} \{
X_{t+1} \le z \} ,
\]
so that
\begin{eqnarray*}
\limsup_{t\to\infty} \frac{1}{t}\log\mathbb{P}_{x'} \{ X_{t} \le
z \} &\leq&\limsup_{t\to\infty}\frac{1}{t}\log\mathbb{P}_{x} \{
X_{t} \le z \}\\
&&{} +\limsup_{t\to\infty} \frac{1}{t} \log\frac{\mathbb{P}_{x}
\{ X_{t+1} \le z \}}{\mathbb{P}_{x} \{ X_{t} \le z \}}.
\end{eqnarray*}
The limit on the second line is 0 by Corollary \ref
{C:ratiolimitintegrated}. This contradicts \eqref
{E:xxprime}, which completes the proof.
\end{pf}
\subsection{Conditions on the initial distribution} \label{sec:ID}
In their version of the ratio limit theorem [\citeauthor{quasistat} (\citeyear{quasistat}), Theorem
3.1], the authors had to impose an additional condition on the
initial distribution $\nu$, and they stated the general case as an
open problem. Their most general condition reads\looseness=1
{\renewcommand{\theequation}{\mathit{ID}'}
\begin{equation}\label{eqID}
\qquad \begin{tabular}{@{}p{300pt}@{}}
If $X_0$ has distribution $\nu$, then
$\exists s \geq0$ for which the distribution of~$X_s$ has a density
$f \in\mathfrak{L}^2$, with $\liminf_{\lambda
\downarrow\lambda_0^{\kappa}} Uf(\lambda) > - \infty$,
\end{tabular}
\end{equation}}
\vspace*{-8pt}
\setcounter{equation}{8}\looseness=0
\noindent where $U$ denotes the unitary operator from Theorem \ref
{weylspectraltheorem}. [In \citet{quasistat} the definition of the
operator $U$ is slightly different, because there the authors work in
$L^2$ spaces with respect to Lebesgue
measure.] This condition is obviously satisfied for compactly supported
initial distributions having a density, but it is not obvious how to
verify that a given general initial distribution $\nu$ with compact
support satisfies the condition (\ref{eqID}). Using some results from
spectral theory and several ideas from \citet{quasistat}, we can
remove this restriction. An essential ingredient in the proof is Lemma
\ref{L:sqrtcompare}, which allows us to ignore the upper end of the
spectrum for large $t$.
\begin{lemma}\label{L:lowspec}
Given $g\in\mathfrak{L}^{2}$, we have
\begin{equation} \label{E:lowspec}
\lim_{t\to\infty} t^{-1}\log \| e^{-tL^{\kappa}}g \|
=-\lambda_{g}.
\end{equation}
\end{lemma}
\begin{pf}
By the spectral theorem \eqref{spectraltheorem3} we know that
\begin{eqnarray*}
\limsup_{t\to\infty} t^{-1}\log \| e^{-tL^{\kappa}}g \|^{2}&=&\limsup_{t\to\infty} t^{-1}\log\int_{\lambda_{g}}^{\infty
} e^{-2t\lambda} \,d \| E^{\kappa}_{\lambda} g \|^{2}(\lambda)\\
&\le&-2\lambda_{g} +\limsup_{t\to\infty} t^{-1}\log \| g \|^{2}\\
&=& -2\lambda_{g}.
\end{eqnarray*}
For the lower bound we take any $\lambda_{*}>\lambda_{g}$, and have
\begin{eqnarray*}
\liminf_{t\to\infty} t^{-1}\log \| e^{-tL^{\kappa}}g \|
^{2}&\ge&\liminf_{t\to\infty} t^{-1}\log\int_{\lambda
_{g}}^{\lambda_{*}} e^{-2t\lambda} \,d \| E^{\kappa}_{\lambda}
g \|^{2} (\lambda)\\
&\ge&-2\lambda_{*} +\liminf_{t\to\infty} t^{-1}\log \|
E^{\kappa} ([\lambda_{g},\lambda_{*}] )g \|^{2}.
\end{eqnarray*}
Since $\lambda_{g}$ is in the support of $d\|E^{\kappa}_{\lambda}g\|^{2}$, the
norm $\|E^{\kappa}([\lambda_{g},\lambda_{*}])g\|$ is strictly positive, so the
last term vanishes and the lower bound equals $-2\lambda_{*}$. Since this holds
for any $\lambda_{*}>\lambda_{g}$, the proof of \eqref{E:lowspec} is complete.
\end{pf}
For a Radon measure $\nu$ on $(0,\infty)$ and a Borel measurable
function $f\dvtx\break(0,\infty)\mapsto\mathbb{C}$ we use the notation
$\langle\nu, f\rangle:= \int_0^{\infty} f(s) \nu(ds)$.
\begin{theorem}\label{generallocalMandl}
The killed diffusion $X_t$ converges on compacta to the quasistationary
distribution with density proportional to $\varphi(\lambda_0^{\kappa
},\cdot)$ from any initial distribution which is compactly supported
in $(0,\infty)$.
\end{theorem}
\begin{pf}
An application of Weyl's eigenfunction expansion theorem and Fubini's
theorem tells us that the operator $E^\kappa([\lambda_0^{\kappa
},\lambda_1))e^{-tL^{\kappa}}$ has a continuous integral kernel
\begin{equation}\label{truncatedintegralkernel}
(t,x,y) \mapsto h^{\lambda_1}(t,x,y) = \int_{[\lambda_0^{\kappa
},\lambda_1]}e^{-t\lambda}\varphi(\lambda,x)\varphi(\lambda,y)\,
d\sigma(\lambda)
\end{equation}
with respect to the measure $\Gamma$. This implies that for every
compact subset $K \subset[0,\infty)$ the function $E^\kappa([\lambda
_0^{\kappa},\lambda_1])e^{-tL^{\kappa}}\mathbf{1}_K$ is continuous
and therefore
\[
\langle\nu,E^\kappa([\lambda_0^{\kappa},\lambda_1])e^{-tL^{\kappa
}}\mathbf{1}_K\rangle= \int_{\mathbb{R}}E^\kappa([\lambda
_0^{\kappa},\lambda_1])e^{-tL^{\kappa}}\mathbf{1}_K(x) \,d\nu(x)
\]
is well defined. Then, for every Borel set $A \subset[0,z]$,
\begin{eqnarray}\label{kernelform}\qquad
&&\nu [ E^\kappa([\lambda_0^{\kappa},\lambda_1])e^{-tL^{\kappa
}}\mathbf{1}_A ] \nonumber\\[-8pt]\\[-8pt]
&&\qquad = \int_{[\lambda_0^{\kappa},\lambda
_1]}e^{-t\lambda} \biggl[\int_{0}^{z}\varphi(\lambda,x) \, d\nu
(x)\int_A\varphi(\lambda,y)\,d\Gamma(y) \biggr]\,d\sigma(\lambda).\nonumber
\end{eqnarray}
Let $g\dvtx[\lambda_{0}^{\kappa},\infty)\to\mathbb{R}$ be any continuous
function. Then
\begin{equation} \label{gbound}\qquad
\limsup_{t \rightarrow\infty} \biggl|\frac{\int_{[\lambda
_0^{\kappa},\lambda_1]}e^{-t\lambda}g(\lambda)\,d\sigma(\lambda
)}{\int_{[\lambda_0^{\kappa},\lambda_1]}e^{-t\lambda}\,d\sigma
(\lambda)}-g(\lambda_0^{\kappa})\biggr | \le\sup_{[\lambda
_0^{\kappa},\lambda_1]}|g(\lambda)-g(\lambda_0^{\kappa})|.
\end{equation}
As in the proof of Theorem 3.1 in \citet{quasistat}, for any $\lambda
_1,\tilde{\lambda}_1,\lambda_2,\tilde{\lambda}_2 > \lambda_0^{\kappa}$ set $\lambda
_{*}=\lambda_1\wedge\tilde{\lambda}_1\wedge\lambda_2\wedge\tilde
\lambda_2$ and $\lambda^{*}=\lambda_1\vee\tilde{\lambda}_1\vee
\lambda_2\vee\tilde\lambda_2$. Then we have the bound
\begin{eqnarray*}
&& \biggl|\frac{\int_{[\lambda_0^{\kappa},\lambda_1]}e^{-t\lambda
}g(\lambda)\,d\sigma(\lambda)}{\int_{[\lambda_0^{\kappa},\lambda
_2]}e^{-t\lambda}\,d\sigma(\lambda)}-\frac{\int_{[\lambda
_0^{\kappa},\tilde{\lambda}_1]}e^{-t\lambda}g(\lambda)\,d\sigma
(\lambda)}{\int_{[\lambda_0^{\kappa},\tilde\lambda
_2]}e^{-t\lambda}\,d\sigma(\lambda)} \biggr|\\[-2pt]
&&\qquad \leq e^{(\lambda_0^{\kappa}-\lambda_*)t}\frac{\int_{[\lambda_0^{\kappa},\lambda
^{*}]}|g(\lambda)|\,d\sigma(\lambda)}{\int_{[\lambda_0^{\kappa
},\lambda_{*}]}\,d\sigma(\lambda)},
\end{eqnarray*}
which tells us that
\begin{equation}\label{lambdaindependence}
\limsup_{t \rightarrow\infty}\frac{\int_{[\lambda_0^{\kappa
},\lambda_1]}e^{-t\lambda}g(\lambda)\,d\sigma(\lambda)}{\int
_{[\lambda_0^{\kappa},\lambda_2]}e^{-t\lambda}\,d\sigma(\lambda)}
\mbox{ is independent of }\lambda_1, \lambda_2 \in(\lambda
_0^{\kappa},\infty).
\end{equation}
Since $g$ is continuous, \eqref{gbound} and \eqref
{lambdaindependence} combine to show that for a general positive
continuous $h$,
\begin{equation}\label{gundeins}
\lim_{t \rightarrow\infty} \biggl|\frac{\int_{[\lambda_0^{\kappa
},\lambda_1]}e^{-t\lambda}h(\lambda)g(\lambda)\,d\sigma(\lambda
)}{\int_{[\lambda_0^{\kappa},\lambda_1]}h(\lambda)e^{-t\lambda}\,
d\sigma(\lambda)}-g(\lambda_0^{\kappa}) \biggr| = 0.
\end{equation}
By \eqref{kernelform} we now see that for every $\lambda_1 \in
(\lambda_0^{\kappa},\infty)$
\begin{eqnarray}\label{lokalspektralconv}
&&\lim_{t \rightarrow\infty} \frac{\nu[ E^\kappa([\lambda
_0^{\kappa},\lambda_1])e^{-tL^{\kappa}}\mathbf{1}_A]}{\nu[
E^\kappa([\lambda_0^{\kappa},\lambda_1])e^{-tL^{\kappa}}\mathbf
{1}_{[0,z]}]} \nonumber\\[-2pt]
&&\qquad = \lim_{t \rightarrow\infty}
\frac{\int_{[\lambda_0^{\kappa},\lambda_1]}e^{-t\lambda}
[\int_{0}^{z}\varphi(\lambda,x) \, d\nu(x)\int_A\varphi(\lambda
,y)\,d\Gamma(y) ]\,d\sigma(\lambda)}
{\int_{[\lambda_0^{\kappa},\lambda_1]}e^{-t\lambda} [\int
_{0}^{z}\varphi(\lambda,x) \, d\nu(x)\int_0^{z}\varphi(\lambda
,y)\,d\Gamma(y) ]\,d\sigma(\lambda)}\\[-2pt]
&&\qquad =\frac{\int_A\varphi(\lambda_0^{\kappa},y) \gamma
(y)\,dy}{\int_0^z\varphi(\lambda_0^{\kappa},y) \gamma(y)\,dy}.\nonumber
\end{eqnarray}
The assertion of the theorem follows immediately from \eqref{lokalspektralconv} once it is shown that
\begin{equation}\label{finalconvcomp}
\lim_{t \rightarrow\infty}\frac{\mathbb{P}_{\nu}(X_t \in
A)}{\mathbb{P}_{\nu}(X_t \leq z)} = \lim_{t \rightarrow\infty
}\frac{ \nu [ E^\kappa([\lambda_0^{\kappa},\lambda
_1])e^{-tL^{\kappa}}\mathbf{1}_A ]}{\nu [ E^\kappa
([\lambda_0^{\kappa},\lambda_1])e^{-tL^{\kappa}}\mathbf
{1}_{[0,z]} ]}.
\end{equation}
Observe that
\begin{eqnarray}\label{laststep}
&&\frac{\nu [ e^{-tL^{\kappa}}\mathbf{1}_A ]}{\nu [
E^\kappa([0,\lambda_1])e^{-tL^{\kappa}}\mathbf{1}_A ]}\nonumber \\[-2pt]
&&\qquad =
\frac{\nu [ E^\kappa([0,\lambda_1])e^{-tL^{\kappa}}\mathbf
{1}_A ]+\nu [ E^\kappa((\lambda_1,\infty))e^{-tL^{\kappa
}}\mathbf{1}_A ]}{\nu [ E^\kappa([0,\lambda
_1])e^{-tL^{\kappa}}\mathbf{1}_A ]}\\[-2pt]
&&\qquad = 1 +\frac{\nu [ E^\kappa((\lambda_1,\infty))e^{-tL^{\kappa
}}\mathbf{1}_A ]}{\nu [ E^\kappa([0,\lambda
_1])e^{-tL^{\kappa}}\mathbf{1}_A ]}.\nonumber
\end{eqnarray}
Since $ e^{-tL^{\kappa}}\mathbf{1}_A$ and $E^\kappa([0,\lambda
_1])e^{-tL^{\kappa}}\mathbf{1}_A$ are continuous, their difference
$E^\kappa((\lambda_1,\infty))e^{-tL^{\kappa}}\mathbf{1}_A$ must
also be continuous.\vadjust{\goodbreak} Thus $\nu[ E^\kappa((\lambda_1,\infty
))e^{-tL^{\kappa}}\mathbf{1}_A]$ is well defined. By Lemma \ref
{L:sqrtcompare},
\begin{equation}\label{Zaehler}
-\lim_{t \rightarrow\infty}\frac{1}{t}\log |\langle\nu,
E^\kappa ((\lambda_1,\infty) )e^{-tL^{\kappa}}\mathbf
{1}_A\rangle | \geq\lambda_1.
\end{equation}
As $\varphi(\lambda^{\kappa}_0,x) > 0$ for every $x \in(0,\infty)$
there is, by continuity, $\lambda_1 > \lambda^{\kappa}_0$ such that
for every $\lambda\in[\lambda^{\kappa}_0,\lambda_1]$
\[
\int_0^{\infty}\varphi(\lambda,x)\,d\nu(x) \mbox{ and } \int
_A\varphi(\lambda,y)\gamma(y)\,dy \mbox{ are both positive}.
\]
Then it is easy to see that
\begin{equation}\label{Nenner}
-\lim_{t \rightarrow\infty}\frac{1}{t}\log\!\int_{[\lambda^{\kappa
}_0,\lambda_1]}e^{-\lambda t}\!\int_0^{\infty}\!\varphi(\lambda,x)\,d\nu
(x)\!\int_A\varphi(\lambda,y) \gamma(y)\,dy\,d\sigma(\lambda) \leq
\lambda_0^{\kappa}.\hspace*{-35pt}
\end{equation}
Equations \eqref{laststep}, \eqref{Zaehler} and \eqref{Nenner}
combine to show that\vspace*{-1pt}
\[
\lim_{t \rightarrow\infty}\frac{\nu [ e^{-tL^{\kappa
}}\mathbf{1}_A ]}{\nu [E^\kappa([0,\lambda
_1])e^{-tL^{\kappa}}\mathbf{1}_A ]} = 1,
\]
and therefore \eqref{finalconvcomp}.
\end{pf}
\subsection{\texorpdfstring{Entrance boundary at $\infty$}{Entrance boundary at infinity}} \label{sec:entrance}
As mentioned above, with the exception of the recent work [\citet
{6authors}], work on these problems has generally assumed that $0$ is
regular and $\infty$ natural. Intuitively, we should expect the
problems to be easier if $\infty$ is an entrance boundary. We show
that this is indeed the case in Theorem \ref{nichtnatuerlich}: the
spectrum of the operator $L^{\kappa}$ is then purely discrete. This
interesting fact has not been mentioned by previous authors [cf.
Section 3 in \citet{6authors}] working on quasistationary distributions
for one-dimensional diffusions. The proof relies on standard ideas from
the spectral theory of differential operators.\vspace*{-2pt}
\begin{theorem}\label{nichtnatuerlich}
If $\infty$ is an entrance boundary, then the spectrum of $L^{\kappa
}$ is discrete.\vspace*{-2pt}
\end{theorem}
\begin{pf}
Assume that $0$ is a regular boundary point. We begin by
considering the case $\kappa\equiv0$. Let $f$ be a solution to the
eigenvalue equation $-\frac{1}{2\gamma} (\gamma f')'=\lambda f$ on
$(0,\infty)$, for some $\lambda>0$.
Let $x>1$ be any local maximum, and $\tilde{x}>x$ the first local
minimum following $x$ (assuming there is one). Since $f$ solves the
equation $\tau u = \lambda u$ one easily sees that local maxima are
positive and local minima negative. Integrating by parts and using the
fact that $f'(x)=f'(\tilde{x})=0$, we have
\begin{eqnarray*}
0&<&f(x) - f(\tilde{x}) = \int_{\tilde{x}}^{x}f'(y)\,dy\\
&=&\int_{\tilde{x}}^{x}(\gamma(y)f'(y)) \gamma(y)^{-1}\,dy\\
&=&\int_{x}^{\tilde{x}}(\gamma(y)f'(y))' \int_{1}^{y}\gamma(z)^{-1}\,dz \,dy\\
&=&-2\lambda\int_{x}^{\tilde{x}}\gamma(y)f(y) \int_{1}^{y}\gamma(z)^{-1}\,dz \,dy\\
&<&2\lambda \bigl(f(x)-f(\tilde{x}) \bigr)\int_{x}^{\tilde{x}}\gamma(y) \int_{1}^{y}\gamma(z)^{-1}\,dz \,dy,
\end{eqnarray*}
from which we conclude that
\[
\frac{1}{2\lambda}<\int_{x}^{\tilde{x}}\gamma(y) \int_{1}^{y}\gamma
(z)^{-1}\,dz \,dy.
\]
Since we have assumed that $\infty$ is an entrance boundary, we know that
\begin{eqnarray*}
\infty&>&\int_{1}^{\infty}\gamma(y) \int_{1}^{y}\gamma(z)^{-1}\,dz \,dy\\
&\ge&\sum_{\mathrm{pairs }\ (x,\tilde{x})}\int_{x}^{\tilde{x}}\gamma(y) \int
_{1}^{y}\gamma(z)^{-1}\,dz \,dy\\
&\ge&(2\lambda)^{-1} \cdot\#\mbox{pairs }(x,\tilde{x}).
\end{eqnarray*}
Since the zeroes of $f$ are separated by alternating local minima and
maxima, it follows that $f$ has only finitely many zeroes on $(1,\infty
)$, hence only finitely many zeroes in all. It follows from a theorem
of Hartman [\citet{jW67}, Theorem~1.1] that the spectrum of $L^{0}$ (the
operator with $\kappa\equiv0$) is discrete.
Suppose now that the spectrum of $L^{\kappa}$ is not discrete. Then
there is a $\lambda_{*}$ such that $E_{\lambda_{*}}$ has
infinite-dimensional range. Every $v \in\operatorname
{Ran}(E_{\lambda_{*}})$ is in the domain of $q^{\kappa}$ and satisfies
$q^{\kappa}(v,v)\le\lambda_{*}\|v\|^{2}$. But then it is also in
the domain of $q^{0}$ and satisfies $q^{0}(v,v)\le\lambda_{*}\|v\|
^{2}$. By the minimax principle for the discrete spectrum [cf.
\citeauthor{jW00} (\citeyear{jW00}), Theorem 8.8], this contradicts the fact that $L^{0}$ has discrete spectrum.
\end{pf}
\begin{remark}\label{logistic}
There are general necessary and sufficient conditions for the
discreteness of the spectrum of Sturm--Liouville operators obtained in
\citet{CR02}, of which Theorem \ref{nichtnatuerlich} may be seen as
a special case. However, general versions found in the literature, such
as the main result in \citet{CR02} and Theorem 1 in \citet{rP09},
do not seem to be immediately applicable.
\end{remark}
\section{Convergence to quasistationarity} \label{sec:qstat}
In this section we consider the problem of convergence to the Yaglom
limit. More precisely we ask for conditions, which ensure that $X_t$
converges to the quasistationary distribution given by the density
$\varphi(\lambda_0^{\kappa},\cdot)$. Recall that we always assume
that $0$ is regular.
\subsection{The asymptotic measure and asymptotic killing rate} \label
{sec:asympt}
We begin by collecting some basic results about the asymptotic
distribution of the process on sets which may not be compact. These
results hold whenever $\lambda_{0}^{\kappa}\ne K$, but will be required primarily in
Section \ref{sec:lowkill}, where $\lambda_{0}^{\kappa}>K$.
As in \citet{quasistat} we define, for Borel sets $A$, the family of measures
\[
F_t(\nu,A) = \mathbb{P}_{\nu} (X_t \in A |\tau_{\partial}
> t )
\]
and
\[
a_t(\nu,r) = \mathbb{P}_{\nu} ( \tau_{\partial} > t+r |
\tau_{\partial}> t ) = \int F_t(\nu,dy)\mathbb{P}_y (
\tau_{\partial} > r ).
\]
If the process $X_t$ started from the compactly supported initial
distribution~$\nu$ escapes to infinity, then for any sequence
$(t_n)_{n \in\mathbb{N}}$ converging to infinity the measures
$F_{t_n}(\nu,\cdot)$ converge weakly to the point measure $\delta
_{\infty}$. If the process $X_t$ started from $\nu$ converges to the
quasistationary distribution $\varphi$, then the limit of
$F_{t_n}(\nu,dy)$ is concentrated on $\mathbb{R}_+$ and has the
density $\frac{\varphi(\lambda_0^{\kappa},\cdot)}{\int_0^{\infty
}\varphi(\lambda_0^{\kappa},y) \gamma(y)\,dy}$ with respect to $\Gamma$.
The next lemma is in essence a combination of Lemma~5.3 and
Theorem~3.3 in \citet{quasistat}, together with our Lemma~\ref{qsdcomp}.
\begin{lemma}\label{SEcomb}
Assume that $\infty$ is a natural boundary point and suppose that
$\lambda^{\kappa}_0 \ne K$. Then the limit $a(\nu,r) = \lim
_{t\rightarrow\infty}a_{t}(\nu,r)$ exists, and satisfies
\begin{eqnarray}\label{formelfuera}
a(\nu,r) &=& F(\nu,\mathbb{R}_+) \int\varphi(\lambda_0^{\kappa
},y)\mathbb{P}_y (\tau_{\partial}>r ) \gamma(y)\,dy \nonumber\\[-8pt]\\[-8pt]
&&{}+\bigl(1-F(\nu,\mathbb{R}_+) \bigr) e^{-Kr}.\nonumber
\end{eqnarray}
Either $F(\nu,\mathbb{R}_+) = 0$ for every compactly supported
initial distribution $\nu$ or $F(\nu,\mathbb{R}_+) = 1$ for every
such $\nu$.
There exists $\eta_{\nu} \in\mathbb{R}$ (called the \textup
{asymptotic mortality rate}) such that
\begin{equation}\label{eta}
a(\nu,r) = e^{-\eta_{\nu} r}.
\end{equation}
If the process escapes to infinity then $\eta_{\nu}=K$.
\end{lemma}
\begin{pf}
Let $\nu$ be a compactly supported initial distribution. Let $(t_n)
\subset(0,\infty)$ be a sequence converging to infinity. On the
compactification $[0,\infty]$ of $(0, \infty)$ the sequence of
measures $F_{t_n}(\nu,dy)$ has a limit point. By Theorem~\ref{Thm33}
this limit point is either a measure on $(0,\infty)$ which has the
density $\frac{\varphi(\lambda_0^{\kappa},\cdot)}{\int_0^{\infty
}\varphi(\lambda_0^{\kappa},y) \gamma(y)\,dy}$\vspace*{1pt} with respect to the
measure $\Gamma$ or is the point mass at $\infty$. Theorem \ref{Thm33}
shows that there is only one limit point, and that the limit point is
independent of the sequence $(t_n)$ and the initial distribution $\nu$.
Thus $F_t(\nu,dy)$ converges weakly. If $\infty$ is natural, then
according to Proposition~3.1 in combination with Proposition 4.3 in
\citet{A74} the unkilled diffusion process ($\kappa= 0$) satisfies
$\lim_{y \rightarrow\infty}\mathbb{P}_{y}(T_a \leq t)=0$ for every
$a>0$. By our assumption on $\kappa$, given $\varepsilon>0$ there is
a sufficiently large $a>0$ such that for all $x \geq a$ we get
\begin{eqnarray*}
e^{-(K-\varepsilon)t} &\leq&\mathbb{P}_x(\tau_{\partial}>t) =
\mathbb{E}_x \bigl[e^{-\int_0^t \kappa(X_s)\,ds};T_a\leq t \bigr]+
\mathbb{E}_x \bigl[e^{-\int_0^t \kappa(X_s)\,ds};T_a > t \bigr]\\
&\leq&\varepsilon+ e^{-(K+\varepsilon)t}.
\end{eqnarray*}
Thus we conclude that
\[
\lim_{y \rightarrow\infty}\mathbb{P}_y (\tau_{\partial}>
r ) = e^{-Kr}.
\]
This shows that
\begin{eqnarray*}
\lim_{t \rightarrow\infty}\int F_t(\nu,dy)\mathbb{P}_y ( \tau
_{\partial} > r ) &=& F(\nu,\mathbb{R}_+) \int\varphi(\lambda
_0^{\kappa},y)\mathbb{P}_y (\tau_{\partial}>r ) \gamma
(y)\,dy \\
&&{} + \bigl(1-F(\nu,\mathbb{R}_+) \bigr) e^{-Kr},
\end{eqnarray*}
which is \eqref{formelfuera}.
For any $r,s\ge0$ we have
\begin{eqnarray*}
a(\nu,r+s) &=& \lim_{t \rightarrow\infty} \frac{\mathbb{P}_{\nu
}(\tau_{\partial} > t+r+s)}{\mathbb{P}(\tau_{\partial}>t)} \\
&=& \lim_{t \rightarrow\infty} \frac{\mathbb{P}_{\nu}(\tau
_{\partial} > t+r+s)}{\mathbb{P}_{\nu}(\tau_{\partial} > t+s)}
\frac{\mathbb{P}_{\nu}(\tau_{\partial}>t+s)}{\mathbb{P}_{\nu
}(\tau_{\partial}>t)}\\
&=& \biggl(\lim_{t \rightarrow\infty} \frac{\mathbb{P}_{\nu}(\tau
_{\partial} > t+r+s)}{\mathbb{P}_{\nu}(\tau_{\partial} >
t+s)} \biggr) \biggl(\lim_{t \rightarrow\infty} \frac{\mathbb
{P}_{\nu}(\tau_{\partial}>t+s)}{\mathbb{P}_{\nu}(\tau_{\partial
}>t)} \biggr)\\
&=& a(\nu,r)a(\nu,s),
\end{eqnarray*}
which directly implies \eqref{eta}. The final statement follows
directly from \eqref{formelfuera}.
\end{pf}
The quantity $\eta_{\nu}$ plays an important role. The reason is the
implication
\begin{equation}\label{elementaryfact}
\lim_{t \rightarrow\infty}\frac{\mathbb{P}_{\nu} (\tau
_{\partial}>t+r )}{\mathbb{P}_{\nu} (\tau_{\partial
}>t )}=e^{-\eta_{\nu}r} \quad \Rightarrow\quad -\lim_{t\rightarrow\infty
}\frac{1}{t}\log\mathbb{P}_{\nu} (\tau_{\partial}>t ) = \eta_{\nu},
\end{equation}
whose elementary proof is left to the reader. Thus, in order to decide
whether $X_t$ converges to the quasistationary distribution, we
investigate the asymptotic behavior of the function $r \mapsto\mathbb
{P}_{\nu} (\tau_{\partial} > r ) $ as $r \rightarrow
\infty$.
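For completeness, here is the short argument behind \eqref{elementaryfact}, added as a sketch: write $h(t):=\log\mathbb{P}_{\nu} (\tau_{\partial}>t )$, so that the hypothesis states $h(t+r)-h(t)\rightarrow-\eta_{\nu}r$ for every fixed $r$. Taking $r=1$ and telescoping,
\[
\frac{h(n)}{n}=\frac{h(0)}{n}+\frac{1}{n}\sum_{k=0}^{n-1} \bigl(h(k+1)-h(k) \bigr)\longrightarrow-\eta_{\nu}
\]
by Ces\`{a}ro convergence. Since $h$ is nonincreasing, $h(\lceil t\rceil)\le h(t)\le h(\lfloor t\rfloor)$, and therefore $h(t)/t\rightarrow-\eta_{\nu}$ along real $t$ as well.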
\begin{lemma}\label{exporder}
Suppose that $\Gamma$ is a finite measure. Then for any compactly
supported initial distribution $\nu$ we have
\[
-\lim_{t\rightarrow\infty}\frac{1}{t}\log\mathbb{P}_{\nu}
(\tau_{\partial} > t ) = \lambda_0^{\kappa}.\vadjust{\goodbreak}
\]
\end{lemma}
\begin{pf}
Since $\Gamma$ is finite, the constant function $\mathbf{1}$ is in $\mathfrak{L}
^2$. Lemma \ref{L:sqrtcompare} implies
\begin{eqnarray}
\limsup_{t \rightarrow\infty}\frac{1}{t}\log\mathbb{P}_{\nu
} (\tau_{\partial}> t ) &=&\limsup_{t \rightarrow\infty
}\frac{1}{t}\log\int
e^{-tL^{\kappa}}\mathbf{1}(x)\,d\nu(x)\nonumber\\[-8pt]\\[-8pt]
&\leq&- \lambda_0^{\kappa}.\nonumber
\end{eqnarray}
Now we need a corresponding lower bound. Fix any $z>0$, and let $\mathcal
{I}:=\{x\dvtx|x-z|<\frac{1}{2}\wedge\frac{z}{4}\}$.
By \eqref{E:harnack2} there is a constant $C(z)=\sup_{w\in\mathcal{I}}
\zeta(w)$ such that
\begin{eqnarray}\label{lowerbound}
\| e^{-(t/2)L^{\kappa}}\mathbf{1}_{\mathcal{I}} \|^{2}&=&
\langle e^{-tL^{\kappa}}\mathbf{1}_{\mathcal{I}} , \mathbf{1}_{\mathcal{I}}
\rangle\nonumber \\
&=&\int_{\mathcal{I}} \int_{\mathcal{I}} p^{\kappa}(t,x,y)\,d\Gamma(y)\,d\Gamma
(x)\nonumber\\[-8pt]\\[-8pt]
&\leq& C(z) \int_{\mathcal{I}} p^{\kappa}(t+1,z,y) \gamma(y)\,dy \nonumber \\
&\le& C(z) \mathbb{P}_z (\tau_{\partial}>t+1 ).\nonumber
\end{eqnarray}
By Lemma \ref{L:lowspec} and Lemma \ref{L:supportbase} (using that
$\mathbf{1}_{\mathcal{I}}$ is strictly positive on $\mathcal{I}$) we see that
\[
\liminf_{t \rightarrow\infty}\frac{1}{t}\log\mathbb{P}_{z}
(\tau_{\partial}> t ) \ge-\inf\operatorname{supp} d\|E^{\kappa}\mathbf
{1}_\mathcal{I}\|^2(\lambda)=-\lambda_{0}^{\kappa}.
\]
Since, by Proposition \ref{P:Harnackconstantlimit}, the exponential rate
of decay of $\mathbb{P}_z (\tau_{\partial}>t )$ is
locally constant in $z$, this completes the proof.
\end{pf}
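Lemma \ref{exporder} can be checked against a classical explicit case, offered here only as a hedged numerical sketch not taken from the text: Brownian motion with drift $-\mu$ killed at $0$, with $\kappa\equiv0$. Here $\gamma(x)=e^{-2\mu x}$, so $\Gamma$ is finite, $\lambda_0^{\kappa}=\mu^{2}/2$, and the survival probability has a closed form via the reflection principle for drifted Brownian motion.

```python
import math

def survival_prob(x, mu, t):
    """P_x(T_0 > t) for dX = -mu dt + dB, killed at 0 (closed form via
    the reflection principle for Brownian motion with drift)."""
    Phi = lambda u: 0.5 * math.erfc(-u / math.sqrt(2.0))  # standard normal cdf
    s = math.sqrt(t)
    return Phi((x - mu * t) / s) - math.exp(2.0 * mu * x) * Phi((-x - mu * t) / s)

def decay_rate(x, mu, t):
    """One-step log-ratio -log P(T_0 > t+1)/P(T_0 > t); it converges to
    lambda_0^kappa = mu^2/2 as t grows."""
    return -math.log(survival_prob(x, mu, t + 1.0) / survival_prob(x, mu, t))
```

For $\mu=1$, $x=1$ the rate estimate approaches $\mu^2/2=0.5$ from above as $t$ grows; the convergence is slow because of the polynomial prefactor $t^{-3/2}$ in the survival asymptotics.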
\subsection{\texorpdfstring{High killing at $\infty$}{High killing at infinity}}
In this section we consider the case where the asymptotic killing rate
is strictly bigger than $\lambda^{\kappa}_0$. Theorem \ref{Thm33}
shows that one has convergence to the quasistationary distribution if
and only if the lowest eigenfunction is integrable. We give a proof of
this assertion and moreover prove that the lowest eigenfunction is
actually always integrable. Therefore $\liminf\kappa>\lambda^{\kappa
}_0$ always implies convergence to the quasistationary distribution. In
contrast to \citet{quasistat}, we do not need to assume that $\infty
$ is a natural boundary. Thus $\infty$ is only assumed to be
inaccessible. Since according to Lemma \ref{spectrum}\hyperlink{it:isolated}{(v)} the bottom of the spectrum
is an isolated eigenvalue, the corresponding eigenfunction is
square-integrable, as well as $\lambda_0^{\kappa}$-invariant.
\begin{theorem}\label{limkbiggerev}
Suppose that $\liminf_{x\rightarrow\infty}\kappa(x) > \lambda
_0^{\kappa}$. Then we have for every Borel set $U \subset(0,\infty)$
\begin{equation} \label{E:recurrent}
\lim_{t\rightarrow\infty}e^{\lambda^{\kappa}_0 t}\mathbb{P}^x(X_t
\in U; \tau_{\partial} > t) = u_{\lambda^{\kappa}_0}(x)\int_U
u_{\lambda^{\kappa}_0}(y) \gamma(y)\,dy,
\end{equation}
where $u_{\lambda_0^{\kappa}} \in\mathfrak{L}^2$ denotes the uniquely
determined (up to positive multiples) eigenfunction associated\vadjust{\goodbreak} to the
eigenvalue $\lambda_0^{\kappa}$. Moreover, the process $(X_t)$
associated to the Dirichlet form $q^{\kappa}$ converges to the
quasistationary distribution with $\Gamma$-density proportional to $u_{\lambda_0^{\kappa}}$.
\end{theorem}
The theorem will be the direct consequence of two lemmas: Lemma \ref
{L:L1L2}, which states that quasilimiting convergence follows whenever
the eigenfunction $u_{\lambda_{0}^{\kappa}}$ is in $\mathfrak{L}^{2}$ and $\mathfrak{L}^{1}$; and
Lemma \ref{integrability}, which states that $u_{\lambda_{0}^{\kappa}}$ is indeed
in~$\mathfrak{L}^{1}$ when $\liminf_{x\to\infty}\kappa(x)>\lambda_{0}^{\kappa}$.
\begin{lemma}\label{L:L1L2}
Suppose $u_{\lambda_{0}^{\kappa}}\in\mathfrak{L}^{1}\cap\mathfrak{L}^{2}$. Then \eqref
{E:recurrent} holds, and $X_t$ converges to the quasistationary
distribution $u_{\lambda_{0}^{\kappa}}(y)\Gamma(dy)/\int_0^{\infty}u_{\lambda_{0}^{\kappa}}(y)\Gamma(dy)$.
\end{lemma}
\begin{pf}
We know from Lemma \ref{spectrum} [part \hyperlink{it:isolated}{(v)}] that
$\lambda^{\kappa}_0$ is an isolated eigenvalue. Therefore, the
eigenfunction $u_{\lambda^{\kappa}_0}$ is square integrable and satisfies
\begin{equation} \label{E:L2ae}
e^{-tL^{\kappa}}u_{\lambda^{\kappa}_0} = e^{-t\lambda^{\kappa
}_0}u_{\lambda^{\kappa}_0} \mbox{ in }\mathfrak{L}^{2},\mbox{ hence
identically (since $u_{\lambda_{0}^{\kappa}}$ is continuous)}.\hspace*{-35pt}
\end{equation}
By \eqref{E:harnack2}, for $r>0$ sufficiently small,
\begin{eqnarray*}
p^{\kappa}(t,x,y) &=& \frac{\int_{B_r(x)}p^{\kappa}(t,x,y)u_{\lambda
^{\kappa}_0}(\tilde{x}) \gamma(\tilde{x})\,d\tilde{x}}{\int
_{B_r(x)}u_{\lambda^{\kappa}_0}(\tilde{x}) \gamma(\tilde
{x})\,d\tilde{x}} \\
&\leq&\zeta(x)\frac{\int_{B_r(x)}p^{\kappa}(t+1,\tilde
{x},y)u_{\lambda^{\kappa}_0}(\tilde{x}) \gamma(\tilde{x})\,d\tilde
{x}}{\int_{B_r(x)}u_{\lambda^{\kappa}_0}(\tilde{x}) \gamma(\tilde
{x})\,d\tilde{x}} \\
&\leq&\zeta(x) \frac{e^{-(t+1)\lambda^{\kappa}_0}u_{\lambda
^{\kappa}_0}(y)}{\int_{B_r(x)}u_{\lambda^{\kappa}_0}(\tilde{x})
\gamma(\tilde{x})\,d\tilde{x}}.
\end{eqnarray*}
For fixed $x$, $p^{\kappa}(t,x,y)e^{t\lambda_{0}^{\kappa}}$ is dominated by a
constant times $u_{\lambda_{0}^{\kappa}}(y)$, which is in~$\mathfrak{L}^{1}$.
The dominated convergence theorem, together with \eqref{E:heatasympt},
implies that there is a constant $c$ such that for any Borel set $U$,
\begin{eqnarray}\label{laststeptoqsd}
\qquad \lim_{t\rightarrow\infty}e^{\lambda^{\kappa}_0 t}\mathbb
{P}_{x} (X_t \in U , \tau_{\partial} > t ) &=&\lim
_{t\rightarrow\infty} \int_0^{\infty} e^{\lambda^{\kappa}_0
t}p^{\kappa}(t,x,y)\mathbf{1}_U(y)
\gamma(y)\,dy\nonumber\\[-8pt]\\[-8pt]
&=& c u_{\lambda^{\kappa}_0}(x) \int_Uu_{\lambda^{\kappa}_0}(y)
\gamma(y) \,dy.\nonumber
\end{eqnarray}
Taking quotients,
\begin{eqnarray*}
\lim_{t\rightarrow\infty}\mathbb{P}^x ( X_t \in U|\tau
_{\partial} > t ) &=& \lim_{t\rightarrow\infty}\frac{\mathbb
{P}^x ( X_t \in U, \tau_{\partial} > t )}{\mathbb
{P}^x (\tau_{\partial} > t )} \\
&=& \frac{\lim_{t\rightarrow\infty}e^{\lambda^{\kappa}_0t}\mathbb
{P}^x (X_t \in U, \tau_{\partial} > t )}{\lim
_{t\rightarrow\infty}e^{\lambda^{\kappa}_0t}\mathbb{P}^x
(\tau_{\partial} > t )} \\
&=& \frac{c\int_Uu_{\lambda^{\kappa}_0}(y) \gamma(y)\,dy}{c\int
_0^{\infty}u_{\lambda^{\kappa}_0}(y) \gamma(y)\,dy}.
\end{eqnarray*}
\upqed
\end{pf}\eject
For the second part of the proof we apply an argument used in \citet
{CL90} to derive properties of eigenfunctions of Schr\"{o}dinger
operators. Some modification is required to deal with the complication
that we have a domain with boundary, and we do not know a priori that
the eigenfunctions are bounded. The one-dimensional setting helps us to
overcome these complications.
\begin{lemma}\label{integrability}
Assume that $\lambda^{\kappa}_0<K := \liminf_{x \rightarrow\infty
}\kappa(x)$. Then the square integrable nonnegative eigenfunction
$u_{\lambda^{\kappa}_0}$ associated to the isolated
eigenvalue~$\lambda^{\kappa}_0$ is integrable with respect to the measure $\Gamma$.
\end{lemma}
\begin{pf}
By \eqref{E:L2ae} and the Feynman--Kac formula
\begin{equation}\label{invariance}
e^{-\lambda^{\kappa}_0 t} u_{\lambda^{\kappa}_0}(x) = \mathbb{E}_x
\bigl[e^{-\int_0^t\kappa(X_s^{*})\,ds}u_{\lambda^{\kappa
}_0}(X_t^{*}),T_0 > t \bigr],
\end{equation}
for every $x \in[0,\infty)$, where $X^{*}_{s}$ is the diffusion which
is killed only at the boundary. For $t \geq0$ we define the martingale
\[
M_t = e^{-\int_0^t(\kappa-\lambda^{\kappa}_0)(X^*_s)\,ds}u_{\lambda
^{\kappa}_0}(X^*_t)\mathbf{1}_{\lbrace T_0 > t\rbrace}.
\]
By the assumption $\lambda^{\kappa}_0 < K$ there exist positive real
numbers $a$ and $\varepsilon$ such that $\kappa(x) - \lambda^{\kappa
}_0 > \varepsilon$ for every $x \in[a, \infty)$. Let $T_a$ be the
first hitting time of the set $[0,a]$.
By the optional sampling theorem we get for every $T>0$ and $x > a$
\begin{eqnarray}\label{optionalsampling}
u_{\lambda^{\kappa}_0}(x) &=& \mathbb{E}_x \bigl[ e^{-\int_0^{T_a
\wedge T}(\kappa-\lambda^{\kappa}_0)(X^*_s)\,ds}u_{\lambda^{\kappa
}_0}(X^*_{T_a \wedge T})\mathbf{1}_{\lbrace T_0 > T_a \wedge T\rbrace
} \bigr] \nonumber\\
&=& \mathbb{E}_x \bigl[e^{-\int_0^{T}(\kappa-\lambda^{\kappa
}_0)(X^*_s)\,ds}u_{\lambda^{\kappa}_0}(X^*_{T})\mathbf{1}_{\lbrace
T_a>T \rbrace} \bigr] \nonumber\\[-8pt]\\[-8pt]
&&{} + \mathbb{E}_x \bigl[e^{-\int_0^{T_a}(\kappa-\lambda
^{\kappa}_0)(X^*_s)\,ds}u_{\lambda^{\kappa}_0}(a)\mathbf
{1}_{\lbrace T_a \leq T \rbrace} \bigr] \nonumber \\
&\le& e^{-\varepsilon T} \mathbb{E}_x \bigl[u_{\lambda^{\kappa
}_0}(X^*_T)\mathbf{1}_{\lbrace T_0 > T\rbrace} \bigr]+ u_{\lambda
^{\kappa}_0}(a) \mathbb{E}_x \bigl[e^{-\varepsilon (T_{a}\wedge
T)} \bigr] .\nonumber
\end{eqnarray}
By Lemma \ref{L:sqrtcompare} and the spectral theorem \eqref
{spectraltheorem3} the first term is bounded by
\begin{eqnarray}\label{boundednesspotential}
&&e^{-\varepsilon T} \bigl(C_{\alpha}(x)\bigl\|\sqrt{L}e^{-TL}u_{\lambda
^{\kappa}_0}\bigr\|+C'_{\alpha}(x)\|e^{-TL}u_{\lambda^{\kappa}_0}\|
\bigr)\nonumber \\
&&\qquad = e^{-\varepsilon T} \biggl[C_{\alpha}(x) \biggl(\int_{0}^{\infty}
\lambda e^{-2T\lambda} d\|E^{0}u_{\lambda^{\kappa}_0}\|^2(\lambda
) \biggr)^{1/2}\nonumber \\
&&\qquad \quad \hspace*{33pt}{} +C'_{\alpha} \biggl(\int_{0}^{\infty} e^{-2T\lambda}
\,d\|E^{0}u_{\lambda^{\kappa}_0}\|^2(\lambda) \biggr)^{1/2} \biggr]\\
&&\qquad \le e^{-\varepsilon T}2T^{-1/2} \|u_{\lambda^{\kappa}_0}\|\nonumber \\
&&\qquad \xrightarrow{T\to\infty} 0.\nonumber
\end{eqnarray}
We have then, from \eqref{optionalsampling} and the dominated
convergence theorem, that
\begin{eqnarray}\label{fortyone}
0 &\leq& u_{\lambda_{0}^{\kappa}}(x) \le\lim_{T\to\infty} u_{\lambda^{\kappa
}_0}(a) \mathbb{E}_x \bigl[e^{-\varepsilon (T_{a}\wedge T)} \bigr] \nonumber\\[-8pt]\\[-8pt]
& =& u_{\lambda^{\kappa}_0}(a) \mathbb{E}_x [e^{-\varepsilon
T_{a}} ].\nonumber
\end{eqnarray}
We now appeal to a basic fact from potential theory [stated and proved
in much greater generality as Proposition D.15 of \citet{DvC00}; see
also page 285 of \citet{BG68}]: There is a~constant $C(a)$ such that
for all $x\ge a$,
\begin{equation} \label{E:potential}
\mathbb{E}_x [e^{-\varepsilon T_a};T_a < \infty ] = C(a)
g^{\varepsilon}(x,a),
\end{equation}
where $g^{\varepsilon}$ is the $\varepsilon$-potential, defined by
\[
g^{\varepsilon}(x,y) = \int_0^{\infty}e^{-\varepsilon t}p(t,x,y)\,dt,
\]
where $p(t,x,y)=p^0(t,x,y)$ denotes the integral kernel of the operator
$e^{-tL}$. Since $e^{-tL}$ is self-adjoint, the integral kernel
$p(t,x,y)$ is symmetric with respect to $\Gamma$, so that from \eqref{fortyone}
\begin{eqnarray*}
\int_a^{\infty} u_{\lambda_{0}^{\kappa}}(x) \gamma(x)\,dx&\le& u_{\lambda^{\kappa
}_0}(a)\int_{a}^{\infty}\mathbb{E}_x [e^{-\varepsilon T_a}
] \gamma(x) \,dx\\
&\le& C(a) u_{\lambda^{\kappa}_0}(a) \int_{0}^{\infty} g^{\varepsilon}(x,a) \gamma
(x)\,dx \\
&=& C(a)u_{\lambda^{\kappa}_0}(a) \int_{0}^{\infty}\int_{0}^{\infty} e^{-\varepsilon t}
p(t,x,a) \gamma(x) \,dx \,dt \\
&=& C(a) u_{\lambda^{\kappa}_0}(a) \int_{0}^{\infty}\int_{0}^{\infty} e^{-\varepsilon t}
p(t,a,x) \gamma(x) \,dx \,dt \\
&\le& C(a) \varepsilon^{-1}u_{\lambda^{\kappa}_0}(a) .
\end{eqnarray*}
Since $u_{\lambda_{0}^{\kappa}}(x) \gamma(x)$ is bounded on $[0,a]$, this completes
the proof.
\end{pf}
\begin{remark}
The above result reflects a general principle, which seems to be well
known to analysts and mathematical physicists: The decay of the
eigenfunctions associated with isolated eigenvalues is dictated by the
decay of Green's function, at least in regions where the potential
$\kappa$ is negligible.
\end{remark}
\subsection{\texorpdfstring{Low killing at $\infty$: The recurrent case}{Low killing at infinity: The recurrent case}} \label{sec:lowkill}
We assume for the remainder of this section that $K:=\lim_{x
\rightarrow\infty}\kappa(x)$ exists. Whereas the total surviving
mass in the case $K > \lambda^{\kappa}_0$ decays at the strictly
exponential rate $e^{-\lambda_{0}^{\kappa}t}$, in the case $\lim_{x
\rightarrow\infty}\kappa(x) < \lambda_0^{\kappa}$ one typically has
\begin{equation}\label{nonpositiverecurrence}
\lim_{t \rightarrow\infty}e^{\lambda_0^{\kappa}t} \mathbb
{P}_x(X_t \in A, \tau_{\partial} > t) = 0
\end{equation}
for every bounded Borel set $A \subset[0,\infty)$. (This can be seen
for a Brownian motion with constant\vadjust{\goodbreak} drift by direct computation.)
Equation \eqref{nonpositiverecurrence} remains true for every
diffusion if the bottom of the spectrum of the diffusion generator is
not an eigenvalue in the $\mathfrak{L}^2$-sense. Thus we cannot rely upon
arguments that assume a spectral gap.
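The direct computation mentioned above can be sketched as follows, for an illustrative special case not taken from the text: for Brownian motion with drift $-\mu$ killed at $0$ and $\kappa\equiv0$ (so that $K=0<\lambda_0^{\kappa}=\mu^{2}/2$), Girsanov's theorem and the reflection principle give the killed transition density with respect to Lebesgue measure,
\[
q(t,x,y)=e^{-\mu(y-x)-\mu^{2}t/2} \bigl(\phi_{t}(y-x)-\phi_{t}(y+x) \bigr),\qquad \phi_{t}(u)=\frac{1}{\sqrt{2\pi t}}e^{-u^{2}/2t}.
\]
Since $\phi_{t}(y-x)-\phi_{t}(y+x)=\phi_{t}(y-x) (1-e^{-2xy/t} )\le\phi_{t}(y-x)\,2xy/t$, for every bounded Borel set $A$
\[
e^{\mu^{2}t/2}\,\mathbb{P}_x(X_t\in A, \tau_{\partial}>t)=\int_A e^{-\mu(y-x)} \bigl(\phi_{t}(y-x)-\phi_{t}(y+x) \bigr)\,dy=O \bigl(t^{-3/2} \bigr)\longrightarrow0,
\]
which is \eqref{nonpositiverecurrence} with $\lambda_0^{\kappa}=\mu^{2}/2$.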
It may seem surprising that, despite the complicated relationship
between the unkilled motion and killing for determining the lifetime of
the process (and hence, whether it returns to its starting point), the
conventional transience/recurrence dichotomy for the \textit{unkilled}
process is exactly the criterion that distinguishes between convergence
and escape to infinity. We begin in this section by assuming that the
unkilled process is recurrent, which is equivalent to assuming that
$\int_0^{\infty}\gamma(x)^{-1}\,dx = \infty$, and show that this
implies convergence to quasistationarity. In particular the lowest
eigenfunction $\varphi(\lambda_0^{\kappa},\cdot)$ is integrable
(but now not necessarily square integrable) with respect to $\Gamma$. In
Section \ref{sec:transient} we then address the case when the unkilled
process is transient.\vspace*{-3pt}
\begin{theorem}\label{qsddrift}
Let infinity be a natural boundary. Suppose that \mbox{$K< \lambda^{\kappa
}_0$}, and $\int_0^{\infty}\gamma(x)^{-1}\,dx = \infty$. Then $X_t$
started from an arbitrary compactly supported initial distribution $\nu
$ converges to the quasistationary distribution with $\Gamma$-density
proportional to $\varphi(\lambda_0^{\kappa},\cdot)$. Moreover, the
asymptotic mortality rate~$\eta_{\nu}$ is independent of~$\nu$ and
equals $\lambda_0^{\kappa}$.\vspace*{-3pt}
\end{theorem}
\begin{pf}
If $X_t$ escapes to infinity then we know from Lemma \ref{SEcomb} that
\[
a(\nu,r) = \lim_{t\rightarrow\infty}\frac{\mathbb{P}_{\nu} (
\tau_{\partial}>t+r )}{\mathbb{P}_{\nu}(\tau_{\partial}>t)}=
e^{-K r}.
\]
Since by assumption $\lambda_0^{\kappa}>K$, when $\alpha>0$ part
\hyperlink{it:lambda0}{(vii)} of Lemma \ref{spectrum} tells us that $\lambda
_0>0$. The strict positivity of $\lambda_0$ together with the
assumption $\int_0^{\infty}\gamma(x)^{-1}\,dx = \infty$ allow us to
apply part \hyperlink{it:speedfinite}{(ii)} of Lemma \ref{spectrum}, to
conclude that the speed measure $\Gamma$ is finite. When $\alpha=0$,
the same reasoning shows that an infinite $\Gamma$ would lead to a
contradiction. Therefore we may assume, in any case, that $\Gamma$ is finite.
Therefore Lemma \ref{exporder} shows that for every compactly
supported measure~$\nu$
\[
-\lim_{t\rightarrow\infty}\frac{1}{t}\log\mathbb{P}_{\nu}
(\tau_{\partial}> t ) = \lambda^{\kappa}_0.
\]
In the case of escape to infinity equations \eqref{eta} and \eqref
{elementaryfact} imply
\[
-\lim_{t\rightarrow\infty} \frac{1}{t}\log\mathbb{P}_{\nu}(\tau
_{\partial}>t) = \eta_{\nu} = K \ne\lambda_0^{\kappa}.
\]
Therefore the assumption $F(\nu,\mathbb{R}_+) = 0$ cannot be true,
and thus by Theorem \ref{Thm33} we conclude $F(\nu,\mathbb{R}_+) =
1$ and $F(\nu,\{\infty\}) = 0$. Thus $X_t$ converges from every compactly
supported initial distribution $\nu$ to the quasistationary
distribution with $\Gamma$-density proportional to $\varphi(\lambda_0^{\kappa},\cdot)$.\vspace*{-3pt}
\end{pf}
The above theorem has the following corollary, which in a slightly more
restrictive form already appears in the\vadjust{\goodbreak} work of \citet{CMSM95}. The
proof presented in \citet{CMSM95} suffers from a gap, so it seems
worth presenting an alternative (and more general) proof of the assertion.
\begin{Corollary}\label{CMSM}
Suppose $\kappa\equiv0$ and $\infty$ is a natural boundary point,
and the process $X_{t}$ is recurrent, with $\alpha>0$.
\begin{itemize}
\item If $\lambda_{0}>0$, then $X_t$ converges from every compactly
supported initial distribution $\nu$ to the quasistationary
distribution with $\Gamma$-density proportional to $\varphi(\lambda
_0,\cdot)$.
\item If $\lambda_0 = 0$, then $X_t$ started from $\nu$ escapes to infinity.
\end{itemize}
\end{Corollary}
\begin{pf}
The first part of the assertion follows directly from Theorem~\ref
{qsddrift}. In order to prove the second assertion, observe that the function
\[
R(y):=
\frac{1}{1+\alpha}+\frac{2\alpha}{1+\alpha}\int_{0}^{y}\gamma(x)^{-1}\,dx
\]
satisfies $LR=0$; since $R(0)=1/(1+\alpha)$ and $\frac
{1}{2}R'(0)=\alpha/(1+\alpha)$ the function~$R$ coincides with the
unique eigenfunction $\varphi(0,\cdot)$. We have
\begin{eqnarray*}
\int_0^{\infty}\varphi(0,y) \gamma(y)\,dy &=& \frac{1}{1+\alpha}\int
_0^{\infty}\gamma(y)\,dy+\frac{2\alpha}{1+\alpha}\int_0^{\infty
}\gamma(y)\int_0^y\gamma^{-1}(x)\,dx\, dy \\
&=& \infty,
\end{eqnarray*}
by the assumption that $\infty$ is a natural boundary.
\end{pf}
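The escape to infinity in the null case $\lambda_0=0$ can be made concrete with driftless Brownian motion killed at $0$; this is a hedged illustration of the mechanism only (it uses pure absorption at $0$ rather than the corollary's $\alpha>0$ boundary condition), via the reflection principle.

```python
import math

def conditional_mass_bm(t, x=1.0, z=2.0):
    """P_x(X_t <= z | T_0 > t) for driftless Brownian motion killed at 0.
    By the reflection principle the killed density is
    phi_t(y - x) - phi_t(y + x)."""
    Phi = lambda u: 0.5 * math.erfc(-u / math.sqrt(2.0))  # standard normal cdf
    s = math.sqrt(t)
    # integrate the killed density over [0, z]
    num = (Phi((z - x) / s) - Phi(-x / s)) - (Phi((z + x) / s) - Phi(x / s))
    surv = 1.0 - 2.0 * Phi(-x / s)                        # P_x(T_0 > t)
    return num / surv
```

The conditional mass on $[0,z]$ decays like $z^{2}/2t$: the conditioned process escapes to infinity, but only polynomially fast, in contrast with the exponential escape rate of Theorem \ref{Escape}.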
\subsection{Low killing at infinity: The transient case} \label{sec:transient}
\begin{theorem}\label{Escape}
Suppose that $\infty$ is a natural boundary point and that $\int
_0^{\infty}\gamma(x)^{-1}\,dx < \infty$. If $K<\lambda_{0}^{\kappa}$, then $X_t$
escapes to infinity from every initial distribution. The rate of escape
is exponential with rate $\lambda_{0}^{\kappa}-K$, in the sense that for all $z,x>0$,
\begin{equation} \label{E:escaperate}
\limsup_{t\to\infty} \frac{1}{t}\log\mathbb{P}_x ( X_t \leq
z |\tau_{\partial} > t ) = -(\lambda_{0}^{\kappa}-K).
\end{equation}
\end{theorem}
\begin{pf}
Observe that the condition $\int_0^{\infty}\gamma(x)^{-1}\,dx <
\infty$ implies that for each $a \in(0,\infty)$ and each $x \in(a,
\infty)$ the unkilled diffusion (corresponding to the generator $L$)
started from $x$ has nonzero probability of never hitting $a$. For
$\varepsilon> 0$ we can choose $a = a_{\varepsilon} \in(0, \infty)$
such that $\kappa(x) \in(K-\varepsilon,K+\varepsilon)$ for every $x
\in[a, \infty)$. Then we have for every $x \in(a,\infty)$
\begin{eqnarray}\label{denominator}
\mathbb{P}_x ( \tau_{\partial}>t ) &=& \mathbb{E}_x
\bigl[e^{-\int_0^{t}\kappa(X_s)\,ds}, T_0>t \bigr] \nonumber \\
&\geq& e^{-(K+\varepsilon)t} \mathbb{P}_x (T_a > t ) \\
&\geq& e^{-(K+\varepsilon)t}\mathbb{P}_x (T_a = \infty ).\nonumber
\end{eqnarray}
Since $\mathbb{P}_x (T_a = \infty )$ is an increasing
function of $x$, we can apply the Markov property to see that there is
a nonzero increasing function $C(x)$ such that for all $x>0$
\begin{equation} \label{den2}
\mathbb{P}_x ( \tau_{\partial}>t ) \ge\mathbb{P}_x
( X_1\ge a+1 ) \cdot\inf_{x'\ge a+1} \mathbb{P}_{x'}
( \tau_{\partial}>t-1 )\ge
C(x) e^{-(K+\varepsilon)t}.\hspace*{-35pt}
\end{equation}
Note that $\inf_{x'\ge a+1} \mathbb{P}_{x'} ( \tau_{\partial
}>t-1 ) > 0$ because there is no explosion.
On the other hand, for any fixed $z\ge0$ we can apply the bound \eqref
{E:sqrtcompare2} and Lemma~\ref{L:supportbase} to see that
\begin{eqnarray} \label{E:num}
\mathbb{P}_x ( X_t\leq z, \tau_{\partial} > t ) &=&
\bigl(e^{-tL^{\kappa}}\mathbf{1}_{[0,z]}\bigr) (x) \nonumber\\[-8pt]\\[-8pt]
&\leq& \bigl(C_{\alpha}(x)\lambda_{0}^{\kappa}+C'_{\alpha} \bigr)\bigl\|\mathbf{1}
_{[0,z]}\bigr\| e^{-t\lambda_{0}^{\kappa}}\nonumber
\end{eqnarray}
for all $t>1/2\lambda_{0}^{\kappa}$. Combining \eqref{den2} and \eqref{E:num}, we
see that there is a constant~$C'$ such that
\begin{equation} \label{E:numden}
\mathbb{P}_x ( X_t\leq z | \tau_{\partial} > t )
\le
\bigl\|\mathbf{1}_{[0,z]}\bigr\| \frac{C'}{C(x)} e^{-(\lambda_{0}^{\kappa}-K-\varepsilon)t}.
\end{equation}
We conclude that for all $x\ge a_{\varepsilon}$,
\[
\limsup_{t\to\infty} \frac{1}{t}\log\mathbb{P}_x ( X_t \leq
z |\tau_{\partial} > t ) \le-(\lambda_{0}^{\kappa}-K)+\varepsilon.
\]
By Proposition \ref{P:Harnackconstantlimit}, since $\varepsilon$ is
arbitrary, we conclude that the limsup is no more than $-(\lambda_{0}^{\kappa}-K)$.
In particular, we have shown that the process escapes to infinity. By
Lemma~\ref{SEcomb}, it follows that $\lim_{t\to\infty} \mathbb
{P}_{x} \{\tau_\partial>t+1 | \tau_\partial>t \}=e^{-K},$ from
which we conclude using \eqref{elementaryfact} that
\[
\lim_{t\to\infty} t^{-1}\log\mathbb{P}_{x} \{\tau_\partial>t \}=-K.
\]
Lemmas \ref{L:lowspec} and \ref{L:supportbase} tell us that
\[
\lim_{t\to\infty} t^{-1}\log\mathbb{P}_{x} \{X_t \leq z
\}\ge-\lambda_{0}^{\kappa},
\]
from which we conclude that
\[
\liminf_{t\to\infty} \frac{1}{t}\log\mathbb{P}_x ( X_t \leq
z |\tau_{\partial} > t ) \ge-(\lambda_{0}^{\kappa}-K),
\]
completing the proof of \eqref{E:escaperate}.
\end{pf}
If $\kappa$ is eventually constant---that is, for some $a$ we have
$\kappa(x)=K$ for all \mbox{$x\ge a$}---then we can strengthen the conclusion
of Theorem \ref{Escape} slightly.
\begin{Corollary}\label{compareMM}
Suppose that $\kappa$ is eventually constant and that \mbox{$\lambda_{0}^{\kappa}> 0$}.
Then for every $x , z\in(0,\infty)$
\[
\sup_{t} e^{(\lambda_{0}^{\kappa}-K) t} \mathbb{P}_x (X_t \leq z |\tau
_{\partial} > t ) < \infty.
\]
\end{Corollary}
\begin{pf}
If $\kappa$ is eventually constant, then \eqref{denominator} and
\eqref{den2} hold with $\varepsilon=0$, hence \eqref{E:numden} as
well.\vadjust{\goodbreak}
\end{pf}
\begin{remark}
The case $\kappa\equiv0$ corresponds to the setting considered in
\citet{MSM01}. Theorem 4 of \citet{MSM01} includes a slightly
weaker version of the result in Corollary \ref{compareMM}, obtained by
different methods. The above theorem shows that when $\kappa\equiv0$
the principal eigenvalue $\lambda_0^{\kappa}$ gives the exponential
convergence rate at which~$X_t$ escapes to infinity.
\end{remark}
As already mentioned in Remark \ref{qsl}, a quasilimiting distribution
$\tilde{\nu}$, which in our case is a probability measure on
$(0,\infty)$ is always quasistationary in the sense that for every
Borel set $A \subset(0,\infty)$,
\[
\mathbb{P}_{\tilde{\nu}} (X_t \in A |\tau_{\partial}>
t ) = \tilde{\nu}(A);
\]
but the converse need not hold true. In the cases where we know there
is no quasilimiting distribution, though, because the process escapes
to $\infty$, we can show that there is also no quasistationary distribution.
\begin{Corollary}
Let $\infty$ be a natural boundary, with $\lambda_0^{\kappa}>K$
and\break
$\int_0^{\infty}\gamma(x)^{-1}\,dx < \infty$.
Then there is no quasistationary distribution.
\end{Corollary}
\begin{pf}
Assume that $\tilde{\nu}$ is a general quasistationary distribution.
The measure $\tilde{\nu}$ is absolutely continuous with respect to
$\Gamma$ with a positive continuous density $g\dvtx[0,\infty)\rightarrow
(0,\infty)$ (for a sketch of the proof of this fact we refer to the
\hyperref[app]{Appendix}). There is a $\lambda$ such that $\mathbb{P}_{\tilde{\nu
}}(\tau_{\partial}>t) = e^{-\lambda t}$. By \eqref{den2}, for any
positive $\varepsilon$,
\begin{equation} \label{E:killatmost}
e^{-\lambda t}=\mathbb{P}_{\tilde\nu}\{\tau_\partial>t\}\ge e^{-(K+\varepsilon
)t}\int C(x)\,d\tilde\nu(x),
\end{equation}
which means that $\lambda\le K$. For any fixed $x_0>0$,
\begin{eqnarray} \label{E:killatleast}
\qquad \mathbb{P}_{\tilde\nu}\{X_t\le z, \tau_\partial>t \}&=& \bigl\langle
g,e^{-tL^{\kappa}}\mathbf{1}_{(0,z]} \bigr\rangle\nonumber\\
&=& \bigl\langle g\mathbf{1}_{[0,x_0]},e^{-tL^{\kappa}}\mathbf
{1}_{(0,z]} \bigr\rangle+ \bigl\langle g \mathbf{1}_{(x_0,\infty
)},e^{-tL^{\kappa}}\mathbf{1}_{(0,z]} \bigr\rangle\\
&\le& \bigl\|g\mathbf{1}_{[0,x_0]} \bigr\|\cdot\bigl \|e^{-tL^{\kappa
}}\mathbf{1}_{(0,z]} \bigr\|+\sup_{x\ge x_{0}} \mathbb{P}_{x} \{X_{t}\le
z, \tau_\partial>t\}.\nonumber
\end{eqnarray}
Since $\|g\mathbf{1}_{[0,x_{0}]}\|$ and $\|\mathbf{1}_{(0,z]}\|$ are both
finite, we can use \eqref{spectraltheorem3} and \eqref{E:num} to see
that there is a constant $B$ such that
\begin{equation} \label{E:killatleast2}
\mathbb{P}_{\tilde{\nu}}\{X_t\le z, \tau_\partial>t \}\le Be^{-t\lambda_{0}^{\kappa}}.
\end{equation}
Combining \eqref{E:killatmost} and \eqref{E:killatleast2}, we see
that for all positive $t$,
\begin{equation} \label{E:stable}
\tilde{\nu} ([0,z] )=\mathbb{P}_{\tilde{\nu}}\{X_t\le z |
\tau_\partial>t \}\le Be^{-(\lambda_{0}^{\kappa}-K)t},
\end{equation}
so letting $t\to\infty$ shows that $\tilde{\nu}$ must be identically 0 on $[0,\infty)$, contradicting the assumption that $\tilde{\nu}$ is a probability measure.
\end{pf}
\subsection{Processes that may not hit 0} \label{sec:htrans}
Consider a process which is killed only at 0 (i.e., with $\kappa\equiv
0$). If the process is not almost surely absorbed at 0
eventually---that is, if $\mathbb{P}_x(T_0 = \infty) > 0$---we may
wish to condition the process at time $t$ on being killed eventually,
but not yet. That is, we consider the long-time asymptotics of
\[
\mathbb{P}_x\bigl(X_t \in\cdot\,| T_0 \in(t,\infty)\bigr).\vspace*{-2pt}
\]
Conditions of this kind can often be found in the analogous problems in
the theory of branching processes. This problem can be reduced to our
previous analysis by an h-transform. The function $h(x) = \mathbb
{P}_x(T_0 < \infty)$ is harmonic, and by general theory [see \citet
{rP95}, Chapter 4, Sections 3 and 10] the process $(X_t)$ conditioned
to hit $0$ corresponds to the generator $L^h$ whose action is given by
\[
L^{h}f=\biggl(\frac{1}{h}L(h f)\biggr)(x) = -\frac{1}{2}f''(x)+
\biggl(-b(x)-\frac{h'(x)}{h(x)} \biggr)f'(x).\vspace*{-2pt}
\]
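For the reader's convenience, the displayed formula can be checked directly (a sketch, assuming the sign convention $Lf=-\frac{1}{2}f''-bf'$ implicit in the display above):

```latex
\frac{1}{h}L(hf)
 = \frac{1}{h}\Bigl(-\tfrac{1}{2}(hf)''-b\,(hf)'\Bigr)
 = -\tfrac{1}{2}f''-\Bigl(b+\frac{h'}{h}\Bigr)f'
   +\frac{f}{h}\Bigl(-\tfrac{1}{2}h''-b\,h'\Bigr),
```

and the last term vanishes because $h$ is harmonic, $Lh=0$.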
The process associated to the operator $L^h$ can again be defined by
Dirichlet form techniques, and the associated family of measures on the
path space is denoted by $\tilde{\mathbb{P}}_x$. As explained above
we have
\[
\mathbb{P}_x(\cdot\,| T_0 < \infty) = \tilde{\mathbb{P}}_x(\cdot).\vspace*{-2pt}
\]
The operator $L^h$ can be realized as a self-adjoint operator on the
Hilbert space $\mathfrak{L}^2((0,\infty), h(x)^{2}\gamma(x)\,dx)$. The
transformation $V\dvtx\mathfrak{L}^2((0,\infty),\break h(x)^{2}\gamma(x)\,dx)\to\mathfrak{L}
^2((0,\infty),\gamma(x)\,dx)$ defined by $Vf=fh$ is unitary, and defines
a unitary equivalence between $L$ and $L^{h}$, so the spectrum is
invariant under $h$-transforms. In particular, positivity of the bottom
of the spectrum of $L$ implies the positivity of the spectrum of $L^h$.
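In symbols, the claimed unitary equivalence is a one-line check, using $Vf=fh$ and the definition $L^{h}f=h^{-1}L(hf)$:

```latex
\bigl(VL^{h}V^{-1}\bigr)g = h\,L^{h}\bigl(h^{-1}g\bigr)
 = h\cdot\frac{1}{h}\,L\bigl(h\cdot h^{-1}g\bigr) = Lg,
\qquad
\|Vf\|^{2}_{\mathfrak{L}^2(\gamma\,dx)}
 = \int |fh|^{2}\gamma\,dx
 = \|f\|^{2}_{\mathfrak{L}^2(h^{2}\gamma\,dx)} .
```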
Since absorption is certain with respect to the measure $\tilde
{\mathbb{P}}_x$ we can apply our previous results in order to conclude
that for every Borel set $A \subset(0,\infty)$
\[
\lim_{t \rightarrow\infty}\mathbb{P}_x\bigl(X_t \in A | T_0 \in
(t,\infty)\bigr) = \frac{\int_A\tilde{\varphi}^h(\lambda_0,x)h(x)\gamma
(x)\,dx} {\int_0^{\infty}\tilde{\varphi}^h(\lambda_0,x)h(x)\gamma(x)\,dx},\vspace*{-2pt}
\]
where $\tilde{\varphi}^h(\lambda_0,x)$ is the unique solution of
$(L^h-\lambda_0)u = 0$, which satisfies $\tilde{\varphi}^h(\lambda
_0,0)=0$ and $(\tilde{\varphi}^h)'(\lambda_0,0)=1$.\vspace*{-3pt}
\subsection{\texorpdfstring{The case of an entrance boundary at $\infty$}{The case of an entrance boundary at infinity}} \label{sec:entrancebound}
$\!\!\!$Observe that $\int_0^{\infty} \gamma(x)^{-1}\,dx = \infty$ if $\infty$
is an entrance boundary. This follows from the fact that in this
situation the total speed measure $\int_0^{\infty}\gamma(x) \,dx$ must
be finite. Thus, the situation is essentially the same as in Theorem
\ref{qsddrift}. Indeed, we always have convergence to
quasistationarity if $\infty$ is an entrance boundary.\vspace*{-3pt}
\begin{theorem}\label{nichtnatuerlichqsd}
Assume that $0$ is regular and that $\infty$ is an entrance boundary.
Then the bottom of the spectrum is an isolated eigenvalue with
associated nonnegative eigenfunction $u_{\lambda_0^{\kappa}}$. From
every compactly supported initial distribution $\nu$, the process
$X_t$ converges to the distribution with density $u_{\lambda_0^{\kappa
}}/\int_{0}^{\infty} u_{\lambda_{0}^{\kappa}}(x)\gamma(x)\,dx$ with respect to $\Gamma$.\vadjust{\goodbreak}
\end{theorem}
\begin{pf}
The first assertion follows from Theorem \ref{nichtnatuerlich}. Lemma
\ref{L:L1L2} directly implies that $X_t$ converges to the
quasistationary distribution $u_{\lambda_0^{\kappa}}$ from every
compactly supported initial distribution if and only if $\int
_0^{\infty}u_{\lambda_0^{\kappa}}(y) \gamma(y)\,dy$ is finite. Since
we are assuming that $0$ is regular and $\infty$ is an entrance
boundary the speed measure $\Gamma$ must be finite. Thus the $\mathfrak{L}
^{2}(\Gamma)$ function $u_{\lambda_{0}^{\kappa}}$ is also in $\mathfrak{L}^{1}(\Gamma)$.\vspace*{-2pt}
\end{pf}
\subsection{\texorpdfstring{Existence and uniqueness of quasistationary distributions when \mbox{$\kappa\equiv0$}}
{Existence and uniqueness of quasistationary distributions when kappa equivalent 0}}
In this short section we first reformulate the criterion for the existence
of quasistationary distributions in the case $\kappa\equiv0$ and
$\mathbb{P}_x(T_0< \infty)=1$. This allows a direct comparison with
the criterion for the uniqueness of the quasistationary distribution,
which has recently been established in \citet{6authors}. We consider
only the case $\alpha>0$, since otherwise there is no killing at all,
and this is merely the classical situation of a stationary distribution.
The interesting point in the next result is that the
existence of some exponential moment of the first hitting time $T_0$ of
$0$ is equivalent to the existence of quasistationary distributions for
any $\alpha>0$.\vspace*{-2pt}
\begin{theorem}
Let $0$ be regular and let infinity be inaccessible. Moreover, suppose
that $\alpha\in(0,\infty]$, $\kappa\equiv0$ and $\mathbb{P}_x(T_0
< \infty) = 1$.
\begin{itemize}[(ii)]
\item[(i)] There exists a quasistationary distribution if and only if
for some $\varepsilon> 0$ and some (hence every) $x >0$
\[
\mathbb{E}_x [ e^{\varepsilon T_0} ] < \infty.
\]
\item[(ii)] There exists a unique quasistationary distribution if and
only if for every $a > 0$ there exists $y_a > 0$ such that
\[
\sup_{x > y_a}\mathbb{E}_x [e^{a T_{y_a}} ] < \infty.
\]
This is true if and only if infinity is an entrance boundary.\vspace*{-2pt}
\end{itemize}
\end{theorem}
\begin{pf}
Assertion (ii) follows from assertion (i) in combination with Theorem~7.3 of \citet{6authors}.
In order to prove assertion (i) let us first
assume that there exists a quasistationary distribution $\nu$. Then
there exists $\bar{\lambda}$ such that $\mathbb{P}_{\nu} (\tau
_{\partial}>t ) = e^{-\bar{\lambda}t}$. Since hitting $0$ is
certain we conclude that $\bar{\lambda}>0$. As shown in Lemma \ref{lemA} of
the \hyperref[app]{Appendix} the measure $\nu$ is absolutely continuous with respect
to $\Gamma$ with a strictly positive and continuous density $\varphi
$. Therefore, we get for $0<a<b$ and some positive constant $c>0$
\begin{eqnarray}
e^{-\bar{\lambda}t}&=&\mathbb{P}_{\nu} (\tau_{\partial}>t) =
\int_0^{\infty}\varphi(y)\mathbb{P}_y(\tau_{\partial}>t)\Gamma
(dy)\nonumber \\
&\geq& c\int_a^b\varphi(y)\bigl(e^{-tL^{0,\alpha}}\mathbf
{1}_{(a,b)}\varphi\bigr)(y)\Gamma(dy) \\
&=& c \int_{[0,\infty)}e^{-t\lambda}\,d\bigl\|E^{0,\alpha}_{\lambda
}\bigl(\mathbf{1}_{(a,b)}\varphi\bigr)\bigr\|^2.\nonumber
\end{eqnarray}
Using Lemma \ref{L:supportbase} we therefore have $\lambda
_0^{0,\alpha}=\inf\operatorname{supp} d\|E^{0,\alpha}(\mathbf
{1}_{(a,b)}\varphi)\|^2(\lambda) \geq\bar{\lambda}>0$. Since the
essential spectra of $L^{0,\alpha}$ and $L^{0,\infty}$ coincide,
either $\lambda_0^{0,0}$ is strictly positive or $0$ is an isolated
eigenvalue. According to Lemma \ref{spectrum}\hyperlink{it:noisolated}{(vi)} the latter case
cannot occur. But this means, according to Corollary \ref{CMSM}, that
$\varphi(\lambda_0^{0,0},y)\Gamma(dy)/\int_0^{\infty}\varphi
(\lambda_0^{0,0},y)\Gamma(dy)$ is the quasilimiting distribution of
the diffusion killed at $0$ and therefore $\lim_{t \rightarrow\infty
}\frac{1}{t}\log\mathbb{P}_x(T_0>t)=-\lambda_0^{0,0}$. Hence
$\mathbb{E}_x[e^{\varepsilon T_0}]<\infty$ for every $0<\varepsilon<
\lambda_0^{0,0}$.
Assume now that $\mathbb{E}_x[e^{\varepsilon T_0} ]<\infty$ for
some $0<\varepsilon$. Then obviously
\[
\lim_{t \rightarrow\infty}\frac{1}{t}\log\mathbb{P}_x(T_0>t)<0.
\]
Using implication \eqref{elementaryfact} we see that the asymptotic
killing rate $\eta_x$ is strictly bigger than $0$. By Lemma \ref
{SEcomb} the process $X_t$ with absorption at $0$ does not escape to
infinity; hence it converges. By Corollary \ref{CMSM} we then have
\mbox{$\lambda_0^{0,\infty}>0$}. Using the same argument as in the first
part of the proof of assertion (i) we conclude that $\lambda
_0^{0,\alpha}>0$, which by Corollary \ref{CMSM} implies the existence
of a~quasistationary distribution.
\end{pf}
Thus uniqueness of quasistationary distributions is equivalent to the
``time of implosion from infinity into the interior'' having
exponential moments of all orders, whereas existence of a
quasistationary distribution is equivalent to the existence of \textit
{some} exponential moment of the first hitting time of $0$. Both
results together account for the existence and uniqueness of
quasistationary distributions.
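A classical illustration of the dichotomy between (i) and (ii) — not treated in this paper, and included here only as a hedged example — is Brownian motion with constant negative drift, $dX_t=-\mu\,dt+dB_t$ on $(0,\infty)$ with $\mu>0$, absorbed at $0$. The hitting time $T_0$ has a Gaussian-type tail, so that

```latex
\mathbb{E}_x\bigl[e^{\varepsilon T_0}\bigr] < \infty
\qquad\text{for every } 0<\varepsilon<\mu^{2}/2 ,
```

and quasistationary distributions exist by (i); but $\infty$ is a natural, not an entrance, boundary, so uniqueness fails by (ii), and this process is indeed known to admit a one-parameter family of quasistationary distributions.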
\begin{remark}
It seems to be a rather general principle that there are three
possibilities. The first possibility is the nonexistence of
quasistationary distributions. If there exists a quasistationary
distribution, then it is either unique or there is a whole continuum of
quasistationary distributions parameterized by a real interval. This is
at least true for birth and death processes on the nonnegative
integers; cf. \citet{jC78}.
\end{remark}
\section{The dichotomy and the integrability of the principal eigenfunction}
According to the basic dichotomy of \citet{quasistat}, as extended
here, we know that under the assumptions $K \ne\lambda_0^{\kappa}$
and the nonaccessibility of infinity either $X_t$ converges to the
quasistationary distribution $\varphi(\lambda_0^{\kappa},\cdot)/
\int_0^{\infty}\varphi(\lambda_0^{\kappa},y)\,d\gamma(y)$ or $X_t$
escapes to infinity. Moreover, we have shown that escape to infinity
occurs (under these assumptions) if and only if the boundary point
infinity is natural, the underlying unkilled diffusion is transient and
$\lambda_0^{\kappa}>K$.
Another way of expressing this dichotomy is in terms of the
integrability of the principal eigenfunction. It follows without much
effort from Theorem \ref{generallocalMandl} [see, e.g., Proposition
2.3 in \citet{quasistat}], that $\int_0^{\infty} \varphi(\lambda
_0^{\kappa},y) \Gamma(dy) = \infty$ implies escape to infinity. Is
integrability of the principal eigenfunction actually equivalent to
Yaglom convergence, at least under the condition $K \ne\lambda
_0^{\kappa}$? We answer in the affirmative, stating the result as
a~theorem because of its salience, although it might strictly be seen as
a~fairly direct corollary to the results of Section~\ref{sec:qstat}.
\begin{theorem}\label{thm:integrability}
Assume that $0$ is regular and that infinity is not accessible.
Moreover, suppose that $K \ne\lambda_0^{\kappa}$. Then $X_t$ escapes
to infinity if and only if
\[
\int_0^{\infty} \varphi(\lambda_0^{\kappa},y)\,d\Gamma(y) =
\infty.
\]
If $\int_0^{\infty} \varphi(\lambda_0^{\kappa},y)\,d\Gamma(y) <
\infty$, then $X_t$ converges to the quasilimiting distribution $\frac
{\varphi(\lambda_0^{\kappa},y)\,d\Gamma(y)}{\int_0^{\infty} \varphi
(\lambda_0^{\kappa},y)\,d\Gamma(y)}$.
\end{theorem}
\begin{pf}
All we need to show is that
\[
\int_0^{\infty} \varphi(\lambda_0^{\kappa},y)\,d\Gamma(y) = \infty
\]
holds when there is escape to infinity; that is, in the case $\lambda
_0^{\kappa}>K$, $\infty$ is natural, and $\int_0^{\infty}\gamma
(y)^{-1}\,dy < \infty$. In all other cases we know that $X_t$
converges to quasistationary, and in particular the principal
eigenfunction is integrable.
Under these assumptions we know from the proof of Theorem \ref{Escape}
[see equation \eqref{den2}] that for every $\varepsilon> 0$ there
exists a nontrivial, nonnegative increasing function $C_{\varepsilon
}(\cdot)$ such that for $y > 0$ and $t > 0$
\begin{equation}\label{1rev}
\mathbb{P}_y (\tau_{\partial}>t ) \geq C_{\varepsilon
}(y) e^{-(K+\varepsilon)t}.
\end{equation}
Let us assume that $m:=\int_0^{\infty}\varphi(\lambda_0^{\kappa
},y)\,d\Gamma(y)<\infty$ and show that this gives a~contradiction.
Integration of \eqref{1rev} with respect to the probability measure
$m^{-1}\varphi(\lambda_0^{\kappa},y)\,d\Gamma(y)$ gives
\begin{eqnarray}\label{Arev}
&&m^{-1}\int_0^{\infty}\varphi(\lambda_0^{\kappa},y)\mathbb
{P}_y (\tau_{\partial}>t )\,d\Gamma(y) \nonumber\\[-8pt]\\[-8pt]
&&\qquad \geq
e^{-(K+\varepsilon)t}m^{-1}\int_0^{\infty}C_{\varepsilon}(y)\varphi
(\lambda_0^{\kappa},y)\,d\Gamma(y).\nonumber
\end{eqnarray}
On the other hand, using the symmetry of the semigroup we have
\begin{eqnarray}\label{L1norm}
&&m^{-1}\int_0^{\infty}\varphi(\lambda_0^{\kappa},y)\mathbb
{P}_y (\tau_{\partial}>t )\,d\Gamma(y) \nonumber\\[-8pt]\\[-8pt]
&&\qquad = m^{-1}\int
_0^{\infty} (e^{-tL^{\kappa}}\varphi(\lambda_0^{\kappa},\cdot
) )(y)\,d\Gamma(y).\nonumber
\end{eqnarray}
Now observe that $\varphi(\lambda_0^{\kappa},\cdot)$ is $\lambda
_0^{\kappa}$-subinvariant [see, e.g., Lemma 7.7 in \citet{quasistat}],
that is,
\begin{equation}\label{subinv}
e^{-tL^{\kappa}}\varphi(\lambda_0^{\kappa},\cdot) \leq e^{-\lambda
_0^{\kappa}t}\varphi(\lambda_0^{\kappa},\cdot).\vadjust{\goodbreak}
\end{equation}
Using \eqref{L1norm}, \eqref{Arev} and \eqref{subinv}, for all $t>0$,
\begin{eqnarray}
m&\geq&\int_0^{\infty}\varphi(\lambda_0^{\kappa},y)\mathbb
{P}_y (\tau_{\partial}>t )\,d\Gamma(y) \nonumber\\[-8pt]\\[-8pt]
&\geq& e^{(\lambda_{0}^{\kappa}-K-\varepsilon)t}\int_0^{\infty}C_{\varepsilon}(y)\varphi(\lambda
_0^{\kappa},y)\,d\Gamma(y).\nonumber
\end{eqnarray}
As we know that $\lambda_{0}^{\kappa}-K-\varepsilon> 0$ for $\varepsilon$
sufficiently small, and the integral is assumed nonzero, the right-hand
side goes to $\infty$ as $t\to\infty$, which is a~contradiction if
$m$ is finite. Therefore $m=\int_0^{\infty}\varphi(\lambda
_0^{\kappa},y)\,d\Gamma(y)= \infty$.
\end{pf}
\begin{appendix}
\section*{Appendix}\label{app}
\setcounter{equation}{0}
In this \hyperref[app]{Appendix} we sketch a proof of the regularity of quasistationary
distributions of one-dimensional diffusions with one regular boundary.
\renewcommand{A}{A}
\begin{lemmaA}\label{lemA}
Let $L^{\kappa}$ be one of the self-adjoint realizations considered in
this work, of the Sturm--Liouville expression $\tau+ \kappa$ in $\mathfrak{L}
^2$; and let $\tilde{\nu}$ be a~quasistationary distribution. Then
$\tilde{\nu}$ is absolutely continuous with respect to the measure
$\Gamma$, with a positive and continuous density $g\dvtx[0,\infty
)\rightarrow\mathbb{R}$.
\end{lemmaA}
\begin{pf}
The main assertion of the lemma will be almost obvious to readers who
are familiar with regularity theory for stationary distributions.
Observe that the main point here is the continuity up to the boundary
$0$. Indeed, the main strategy we follow is very similar to the case of
stationary distributions. Straightforward arguments show that $\tilde
{\nu}$ is absolutely continuous with respect to the measure $\Gamma$.
Denote by $g$ the density of $\tilde{\nu}$ with respect to $\Gamma$.
The equation
\[
e^{-\lambda t}\tilde{\nu}(f) = \mathbb{E}_{\tilde{\nu}}
[f(X_t);\tau_{\partial}>t ];\qquad \lambda\geq0, f \in
C_c((0,\infty)),
\]
which results from quasistationarity of $\tilde{\nu}$, implies that
\begin{equation}\label{E:ellipticequation}
\forall f \in C^{\infty}_c((0,\infty))\dvtx\langle\tilde{\nu},
(L^{\kappa}+\lambda)f\rangle= \int g(x)(L^{\kappa}+\lambda)f(x)\,
d\Gamma(x) = 0.\hspace*{-35pt}
\end{equation}
This means that for any $0<c<d$,
\[
g \in\mathcal{D}(T_{c,d}^*) \quad \mbox{and}\quad T_{c,d}^*g = 0,
\]
where $T_{c,d}^*$ denotes the adjoint [taken in the Hilbert space $\mathfrak{L}
^2((c,d),\Gamma)$] of the minimal operator $T_{c,d}$ defined as the
restriction of the differential operator $L^{\kappa}+ \lambda$ to
$C^{\infty}_c((c,d))$. The domain of $T_{c,d}^*$ is given by
\begin{eqnarray}
\qquad \mathcal{D}(T_{c,d}^*) &=& \biggl\lbrace f \in\mathfrak{L}^2((c,d),\Gamma)
| f,\gamma f' \mbox{ absolutely continuous in $(c,d)$
and}\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{107pt}\frac{-1}{2\gamma}(\gamma f')' + (\kappa+\lambda) f \in
\mathfrak{L}^2((c,d),\Gamma) \biggr\rbrace,\nonumber
\end{eqnarray}
and for $f \in\mathcal{D}(T_{c,d}^*)$ one has $T_{c,d}^* f = \frac
{-1}{2\gamma}(\gamma f')' + (\kappa+ \lambda)f$. Since $c,d \in
(0,\infty)$ are arbitrary we conclude that
\begin{equation}
\frac{-1}{2\gamma}(\gamma g')' (x) + (\kappa+ \lambda)g (x)=0
\qquad \mbox{in $(0,\infty)$}.
\end{equation}
Due to the regularity of the boundary point $0$ we conclude (using
standard ODE theory) that
\[
\lim_{x \rightarrow0+}g(x) \in\mathbb{R}
\]
exists.
\end{pf}
\subsection{Very short summary of extension theory}
In this section we give a~summary of the analytic results we applied in
Lemma \ref{L:SL}, a full account of which can be found in Chapter 10
of \citet{jW00}. For an arbitrary symmetric operator $S$ in a complex
Hilbert space $\mathcal{H}$ let $\operatorname{Ran}(S-z)$ denote the
image of the linear operator $S-z$ and let $\operatorname
{Ran}(S-z)^{\perp}$ denote its orthogonal complement. Set $\beta
_+:=\dim\operatorname{Ran}(S+i)^{\perp}$ and $\beta_-:=\dim
\operatorname{Ran}(S-i)^{\perp}$.
Observe that in the symmetric case $ \dim\operatorname{Ran}(S\pm
i)^{\perp} = \dim\operatorname{Ker}(S^* \mp i)$. Thus the deficiency
indices give the dimension of the solution space of the equation $(S^*
\mp i)u=0$ ($u \in\mathcal{H}$).
The pair $(\beta_+,\beta_-)$ is called the pair of deficiency indices of $S$
and describes the ``number'' of self-adjoint extensions of $S$. If in
the notation of the beginning of Section 2.1 $S=\tau_{p,q,V}$ is, for
example, a (minimal) Sturm--Liouville differential expression on the
interval $(a_1,a_2)$, then one always has $\beta_+=\beta_-$. Moreover,
\[
(\beta_+,\beta_-)=
\cases{
(0,0), &\quad if $a_1$ and $a_2$ are both limit point boundaries, \vspace*{2pt}\cr
(1,1), &\quad if one boundary point is limit point and \cr
&\quad the other boundary point is limit circle,\vspace*{2pt}\cr
(2,2), &\quad if $a_1$ and $a_2$ are both limit circle boundaries.
}
\]
We note that these formulas follow immediately from the definition of
limit-point type/limit-circle type, the remark appearing after
Definition \ref{lplc} and the fact that the deficiency indices give
the dimensions of the space of solutions to eigenvalue equations. The
case $(\beta_+,\beta_-)=(0,0)$ corresponds to essential
self-adjointness, that is, the case where there is only one
self-adjoint extension. Moreover, if $(\beta_+,\beta_-)=(m,m)$, a
symmetric extension $T$ of $S$ [i.e., $\mathcal{D}(S)\subset\mathcal
{D}(T), T\restriction\mathcal{D}(S)=S$] is self-adjoint if and only
if $\mathcal{D}(T)/\mathcal{D}(S)$ has dimension $m$. Thus in this
case self-adjoint extensions of $S$ are exactly the $m$-dimensional
symmetric extensions.
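As a hedged textbook illustration (standard material, not specific to this paper): take $S$ to be the minimal operator associated with $-\frac{d^{2}}{dx^{2}}$ on $C_c^{\infty}((0,\infty))$. The deficiency equation $(S^{*}-i)u=0$ reads $-u''=iu$, whose solutions are

```latex
u(x) = c_1\,e^{(1-i)x/\sqrt{2}} + c_2\,e^{-(1-i)x/\sqrt{2}} .
```

Both solutions are square integrable near $0$ (so $0$ is limit circle), while only the decaying one is square integrable near $\infty$ (so $\infty$ is limit point); hence $(\beta_+,\beta_-)=(1,1)$, and the self-adjoint extensions form a one-parameter family, realized by the boundary conditions $u'(0)=\alpha u(0)$, $\alpha\in\mathbb{R}$, together with the Dirichlet condition $u(0)=0$.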
\end{appendix}
\section*{Acknowledgments}
The authors would like to thank Ross Pinsky for sending them his work
[\citet{rP09}] prior to publication and for his useful remarks. Moreover
we would like to thank Gady Kozma for sending us his work concerning
the Davies conjecture. The hospitality of Worcester College, Oxford for
M. Kolb in support of this collaboration is gratefully acknowledged.
Moreover, the authors thank the referee for a very thorough and careful
reading of the paper.
\printaddresses
\end{document}
\begin{document}
\begin{frontmatter}
\title{Online EM for Functional Data}
\author[ucd]{Florian Maire\corref{cor1}}
\ead{[email protected]}
\author[telecom]{Eric Moulines}
\author[onera]{Sidonie Lefebvre}
\cortext[cor1]{Corresponding author}
\address[ucd]{School of Mathematics and Statistics, University College Dublin, Ireland}
\address[telecom]{CMAP, {\'E}cole Polytechnique, 91128 Palaiseau, France }
\address[onera]{ONERA - the French Aerospace Lab, F-91761 Palaiseau, France}
\begin{abstract}
A novel approach to perform unsupervised sequential learning for functional data is proposed. Our goal is to extract reference shapes (referred to as \textit{templates})
from noisy, deformed and censored realizations of curves and images. Our model generalizes the Bayesian dense deformable template model \cite{Allassonniere:Mixture,Allassonniere:survey}, a hierarchical model in which the template is the function to be estimated and the deformation is a nuisance, assumed to be random with a known prior distribution. The templates are estimated using a Monte Carlo version of the online Expectation-Maximization (EM) algorithm, extending~\cite{Cappe:Online}. Our sequential inference framework is significantly more computationally efficient than equivalent batch learning algorithms, especially when the missing data are high-dimensional. Some numerical illustrations on the curve registration problem and on template extraction from images are provided to support our findings.
\end{abstract}
\begin{keyword}
online Expectation-Maximization algorithm, deformable templates models, unsupervised clustering, Markov chain Monte Carlo, Carlin and Chib algorithm, Big Data.
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec:intro}
Functional data analysis is concerned with the analysis of curves and shapes, which often display common patterns but also variations (in amplitude, orientation, time-space warping, etc.). The problem of extracting common patterns (referred to as \textit{templates}) from functional data,
and the related problem of curve/image registration, has given rise to a wealth of research efforts; see \cite{ramsay:functional,zhong:2008,ramsay:2011} and the references therein.
Most of the techniques proposed so far have been developed in a supervised classification context. The method typically aims at finding a time/space warping transformation that synchronizes/registers all the observations associated with a given class of curves/shapes, and at estimating a template by computing a cross-sectional mean of the aligned patterns. In most cases, the deformation is penalized to favor ``small'' time/space shifts. Many different deformation models have been proposed for curves and for images. For curves, the warping function is often assumed to be monotone increasing. In this context, the dynamic time warping algorithm is by far the most popular: it aligns curves by minimizing a cost function of the warping path, which can be solved by dynamic programming \cite{Wang:warpingTime}. Nonparametric \cite{kneip:statistical,Silvermann:nonParam,Ramsay:curveReg} as well as Bayesian approaches \cite{Telesca:bayesCurRe,Liu:Simultaneous} have also been proposed, but they are still far less popular. The situation is more complex for shapes and images. Different deformation models have been proposed, involving rigid deformations, small deformations~\cite{Castellanos:localWarp} or deformation fields governed by a differential equation; see \cite{Christensen:elas_trans}.
In this paper, we introduce a common Bayesian statistical framework for \textit{unsupervised} clustering and template extraction, with applications to curve synchronization and shape registration. Following the seminal work of \cite{Allassonniere:Mixture} and \cite{Allassonniere:Toward}, we generalize the mixture of \textit{deformable template models}. This approach models a curve/shape as a template (defined as a function of time or space), selected from a collection of templates, which undergoes a random deformation and is observed in the presence of additive noise; see \cite{Allassonniere:Toward,Bigot:defTemp,Christensen:defTemp} and \cite{Allassonniere:survey} for a complete survey. Contrary to the classical time warping/spatial registration algorithms, which synchronize all the observations of a shape in a supervised framework, the mixture of deformable template models is an unsupervised classifier: it estimates functional templates from a set of shapes/curves and considers the time warping/spatial deformations as a random nuisance parameter. It is important to stress that the model allows one to integrate the deformation conditionally on the observations while considering the templates as unknown deterministic functional parameters. In this context, the deformation might be seen as a \textit{random effect}, similar to random effects in linear mixed models in longitudinal data analysis. Whereas this change in perspective might seem rather benign, it makes a huge difference both in theory and in practice.
In our model, the warping/deformation function and the cluster index are modeled as hidden data, and we consequently turn to an Expectation-Maximization (EM)-type algorithm \cite{Dempster:EM} to estimate the templates. However, in our model the conditional expectation of the complete data log-likelihood is analytically intractable, compromising a plain EM implementation. This situation has attracted significant research interest over the last decades, and several versions of so-called stochastic EM, in which the E-step is approximated, have been successfully applied to the template extraction problem. A rough approximation of the conditional expectation was considered in \cite{ma:bayesian}, in which the posterior distribution is replaced by a point mass located at the posterior mode. Another elementary approach consists in linearizing the deformed template in the neighborhood of its nominal shape, under the assumption of small deformations. This alternative has been considered, among others, by \cite{Liu:Simultaneous} and \cite{Frey:EM_clust}, in which the transformed mixture of Gaussian models was used. Another way to handle the E-step, proposed by \cite{Gaffney:curCluAli}, consists in performing an approximate Bayesian integration, which amounts to replacing the posterior distribution of the hidden data conditionally on the observation by a Gaussian distribution, obtained from a Laplace approximation. Here again, it is not always easy to justify such approximations. The expectation can also be approximated by Markov chain Monte Carlo, an idea put forward by \cite{Allassonniere:Mixture} and \cite{Kuhn:MCMCSAEM}, extending the original Stochastic Approximation EM (SAEM) \cite{Delyon:ConvSAEM} and known as the MCMC-SAEM algorithm. This algorithm has been theoretically justified \cite{Kuhn:MCMCSAEM} and has been shown to perform satisfactorily in the template extraction application \cite{Allassonniere:Mixture}.
However, it turns out to be a time-consuming solution, especially when a large number of observations is available and the missing data is high-dimensional.
The extension of the model to multiple classes is even more computationally involved.
We propose the Monte Carlo online EM (MCoEM) algorithm, an online algorithm in which the curves/shapes are processed one at a time and only once, allowing us to estimate the unknown parameters of the mixture of deformable templates model. We adapt the online EM algorithm proposed in \cite{Cappe:Online} to settings with an intractable E-step, thereby casting MCoEM as a \textit{noisy} online EM. Our model is too general to allow the linearization or the Gaussian approximation of the complete data log-likelihood used in \cite{Liu:Simultaneous} and \cite{Gaffney:curCluAli}. We thus propose to approximate the conditional expectation with an MCMC algorithm adapted from the celebrated Carlin and Chib algorithm \cite{Carlin:Bayesian}. Indeed, working online implies processing the data on the fly without storing them afterwards, and this requires the posterior distribution exploration to be more accurate than in the MCMC-SAEM framework, which refines the state-space exploration gradually at each EM iteration. Building an online learning framework for template extraction has a two-fold motivation: (i) the data need not be stored, which can be useful should the algorithm be implemented on a portable device with limited memory/energy resources, and (ii) MCoEM significantly reduces the computational burden that would be generated by an equivalent batch algorithm such as the MCMC-SAEM \cite{Kuhn:MCMCSAEM}.
This paper is organized as follows: the mixture of dense deformable template models is generalized in Section \ref{sec:model}, and the Monte Carlo online EM algorithm is presented in Section \ref{sec:onlineEM}. The sampling method for the joint posterior distribution is proposed in Section \ref{sec:carlinChib}. Templates obtained by applying MCoEM to curves and shapes are illustrated in Section \ref{sec:illustration} and compared with those obtained using MCMC-SAEM. An application of the methodology to a classification problem is provided in Section \ref{sec:classification} and shows how competitive MCoEM is with equivalent batch algorithms. Benefits and shortcomings of the MCoEM methodology are discussed in Section \ref{sec:conclusion}, and perspectives are raised.
\section{A mixture of deformable template model}
\label{sec:model}
\subsection{A basic deformable model}
In this section, we introduce a basic model for curves and images. A \textit{template} is a function defined on a space $\mathbb{U}$ and taking, for simplicity, real values. Typically, $\mathbb{U}=\mathbb{R}$ for curves and $\mathbb{U}=\mathbb{R}^{2}$ for shapes. We denote by $\mathbb{F}$ the set of templates.
The observations are modeled as the stochastic process $Y$ indexed by $u\in\mathbb{U}$ and given by:
\begin{equation}
\label{eq:modelFunc}
Y(u)=\lambda\,f\circ D(u,\beta) + \sigma W(u)\;,
\end{equation}
where $f\in\mathbb{F}$ is a template function, $\lambda\in\mathbb{R}^{+\,\ast}$ is a scaling factor, $\sigma^{2}\in\mathbb{R}^{+\,\ast}$ is the noise variance and $W$ is a Gaussian process with zero mean, unit variance and known covariance function. $D$ is a function belonging to $\mathbb{D}$, the set of mappings from $\mathbb{U}$ to itself, parameterized by a vector $\beta\in\betaSpace$, where $\betaSpace$ is an open subset of some Euclidean space of dimension $d_{\defo}$. For curves, $\mathbb{D}$ can be chosen as the set of homothety and translation mappings and, more generally, as the set of monotone functions (with appropriate smoothness conditions). For shapes, $\mathbb{D}$ can be taken as the set of rigid transformations of the plane, such as rotations, homotheties or translations, together with a local deformation field. The models for the set of deformations $\mathbb{D}$ are problem dependent; see Section \ref{sec:illustration}.
In this setting, $\beta$ and $\lambda$ are random variables and each realization of $Y$ follows from different realizations of $\beta$ and $\lambda$. The quantity of interest is the template $f$ (a deterministic functional parameter), while the deformation $D$ and the global scaling $\lambda$ are regarded as nuisance parameters, that should be integrated out.
Finally, we assume that the set of templates $\mathbb{F}$ is the linear subspace spanned by the basis vectors $\{\phi_{\ell}\}_{1\leq \ell\leq m}$. Hence, a template $f_{\boldsymbol{\alpha}}\in\mathbb{F}$ may be expressed as:
\begin{equation}
\label{eq:template}
f_{\boldsymbol{\alpha}}=\sum_{\ell=1}^{m}\alpha_{\ell}\phi_{\ell}\;,\quad \text{where $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_{m})^{T}\in\mathcal{A}$,}
\end{equation}
where for all $\ell\in\{1,\ldots,m\}$, $\phi_\ell:\mathbb{U}\to\mathbb{R}$ and $\mathcal{A}$ is a subset of $\mathbb{R}^{m}$.
The pattern is observed at some design points denoted $\Omega=\{u_1,\ldots,u_{|\design|}\}$, where $|\design|$ is the dimension of the observations, such that for all $s\in\{1,\ldots,|\design|\}$, $u_s\in\mathbb{U}$. Let $\Phi_{\beta}$ be the $|\design|\times m$ matrix defined such that for all $(s,\ell)$ in $\{1,\ldots,|\design|\}\times\{1,\ldots,m\}$,
\begin{equation}
\label{eq:matPhi}
[\Phi_{\beta}]_{s,\ell}=\phi_{\ell}\circ D(u_s,\beta)\;.
\end{equation}
Defining $\mathbf{Y}=(Y(u_1),\ldots,Y(u_{|\design|}))^{T}$ and $\mathbf{W}=(W(u_1),\ldots,W(u_{|\design|}))^{T}$ and using \eqref{eq:modelFunc}, the vector of observations can be expressed in matrix-vector form as:
\begin{equation}
\label{eq:modelVect}
\mathbf{Y}=\lambda\Phi_{\beta}\boldsymbol{\alpha}+\sigma\mathbf{W}\;.
\end{equation}
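For concreteness, the forward model \eqref{eq:modelVect} can be simulated in a few lines. The sketch below uses an illustrative warp $D(u,\beta)$ (a shift and a log-scale) and Gaussian bumps for the basis $\{\phi_\ell\}$; all numerical values are assumptions made for this example only, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

u = np.linspace(0.0, 1.0, 50)            # design points u_1, ..., u_50
centers = np.linspace(0.0, 1.0, 8)       # basis centers (illustrative)
nu = 0.1                                 # common bandwidth (illustrative)

def warp(u, beta):
    # Toy deformation D(u, beta): shift and log-scale, beta in R^2
    return np.exp(beta[1]) * (u - beta[0])

def design_matrix(u, beta):
    # [Phi_beta]_{s,l} = phi_l(D(u_s, beta)), with Gaussian bumps phi_l
    du = warp(u, beta)[:, None] - centers[None, :]
    return np.exp(-(du / nu) ** 2)

alpha = rng.uniform(0.5, 1.5, size=centers.size)  # template coefficients
beta = np.array([0.05, 0.1])                      # deformation parameters
lam, sigma = 1.2, 0.05                            # scale and noise level

Phi = design_matrix(u, beta)                      # the matrix Phi_beta
Y = lam * Phi @ alpha + sigma * rng.standard_normal(u.size)
```

The observation `Y` is one realization of the vector form $\mathbf{Y}=\lambda\Phi_{\beta}\boldsymbol{\alpha}+\sigma\mathbf{W}$.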
\subsection{A mixture of deformable templates}
We extend the model to include multiple templates corresponding to the different ``typical'' shapes that we wish to cluster and then recognize. To that purpose, we construct a mixture of the template model introduced in the previous section. Denote by $C$ the number of classes $(\cluster{1},\ldots,\cluster{C})$. We associate to each observation $\mathbf{Y}$ a (hidden) class index $I \in \mathbb{I}$, where $\mathbb{I}=\{1,\dots,C\}$. To each class $\{\cluster{j}\}_{j\in\mathbb{I}}$ is attached a template function $\{f_{j}\}_{j\in\mathbb{I}}$ in $\mathbb{F}$, which is parameterized by $\{\boldsymbol{\alpha}_{j}\}_{j\in\mathbb{I}}\in\mathbb{R}^{m}$. Moreover, a weight $\omega_j\in(0,1)$ is assigned to the class $I=j\in\mathbb{I}$ and we denote by $\boldsymbol{\omega}=(\omega_1,\ldots,\omega_C)$ the set of prior weights $(\sum_{j=1}^C\omega_j=1)$. To sum up, we consider the following hierarchical model:
\begin{equation}
\label{eq:mixtModelVect}
\mathbf{Y}\in\cluster{j}, \qquad \, \mathbf{Y}=\lambda\Phi_{\beta}\boldsymbol{\alpha}_{j}+\sigma\mathbf{W}\;.
\end{equation}
It is assumed that the observations $\{ \mathbf{Y}_n \}_{n \geq 1}$ are independent random variables, generated as follows:
\begin{equation}
\label{eq:statModel}
\begin{cases}
I_n\sim\text{Multi}(1,\boldsymbol{\omega})\,,\\
\lambda_n \sim \mathrm{Gamma}(a,b)\,,\\
\beta_n\,|\,I_n=j\sim\mathcal{N}_{d_{\defo}}(0_{d_{\defo}},\Gamma_j)\;,
\end{cases}
\end{equation}
where $\text{Multi}$ denotes the multinomial distribution, $(a,b)$ are the parameters of the Gamma distribution (assumed known), $0_{d_{\defo}}$ is the $d_{\defo}$-dimensional null vector and $\Gamma_j$ is the deformation covariance matrix associated with the class $\cluster{j}$. In Section \ref{sec:illustration}, different covariance models are used depending on the deformation model adopted. We stress that the distribution of the scaling parameter is independent of the class index, while the deformation prior distribution is class-dependent. Indeed, on the one hand, the scaling factor accounts for different ranges of observation and is thus independent of what is actually being observed. On the other hand, considering different prior distributions for the deformation might help to learn typical relevant distortions for each class and thus ease the warping process.
In the sequel we assume that $\{ \mathbf{W}_{n} \}_{n \geq 1}$ is a vector-valued white noise with zero mean and identity covariance matrix. The extension to more general covariances is straightforward. Hence, conditionally on the class index $I_n$, the global scale $\lambda_n$ and the local deformation $\beta_n$, the likelihood of $\mathbf{Y}_n$ given the missing data is:
\begin{equation}
\label{eq:obsRV}
\mathbf{Y}_n\,|\,I_n=j, \lambda_n, \beta_n\sim\,\mathcal{N}_{|\design|}(\lambda_n\Phi_{\beta_n}\,\boldsymbol{\alpha}_{j},\sigma^{2}\mathrm{Id})\;,
\end{equation}
where $\mathrm{Id}$ is the identity matrix.
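A minimal sketch of the generative hierarchy \eqref{eq:statModel}--\eqref{eq:obsRV} follows: a class index, a Gamma-distributed scale and a Gaussian deformation are drawn, and an observation is then formed through the design matrix. The warp, the basis and all numerical values are illustrative assumptions, not the settings used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

C, d_beta, n_design, m = 2, 2, 40, 6     # classes, deformation dim, |Omega|, basis size
omega = np.array([0.4, 0.6])             # class weights, sum to 1
a, b = 2.0, 2.0                          # Gamma(a, b) parameters, b = rate
alphas = rng.normal(size=(C, m))         # one coefficient vector per class
Gammas = [0.05 * np.eye(d_beta) for _ in range(C)]   # class covariances Gamma_j
sigma = 0.1
u = np.linspace(0.0, 1.0, n_design)
centers = np.linspace(0.0, 1.0, m)

def design_matrix(u, beta):
    # illustrative warp D(u, beta) = u + beta[0] + beta[1] * u
    du = (u + beta[0] + beta[1] * u)[:, None] - centers[None, :]
    return np.exp(-(du / 0.15) ** 2)

def sample_observation():
    I = rng.choice(C, p=omega)                       # I_n ~ Multi(1, omega)
    lam = rng.gamma(a, 1.0 / b)                      # lambda_n ~ Gamma(a, b)
    beta = rng.multivariate_normal(np.zeros(d_beta), Gammas[I])  # beta_n | I_n
    Y = lam * design_matrix(u, beta) @ alphas[I] + sigma * rng.standard_normal(n_design)
    return I, lam, beta, Y

I, lam, beta, Y = sample_observation()
```

Note that `rng.gamma` is parameterized by shape and *scale*, hence the `1.0 / b` for a rate-$b$ Gamma distribution.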
Denote by $\thetaSet$ the set of parameters:
\begin{equation}
\label{eq:paramSP}
\thetaSet= \bigcup_{j =1}^{C} \bigg\{\, \big(\boldsymbol{\alpha}_j,\Gamma_{j},\omega_j,\sigma\big) \, | \, \boldsymbol{\alpha}_j\in\mathcal{A},\, \Gamma_{j}\in\posDefMat{},\,\omega_j \in (0,1),\,\sigma>0\,\bigg\} \cap \left\{ \sum_{j=1}^C \omega_j = 1 \right\} \;,
\end{equation}
where $\posDefMat{}$ is the set of $d_{\defo} \times d_{\defo}$ positive definite matrices.
Let $\mathbf{X}_n$ be the random vector $\mathbf{X}_n=(\beta_n,\lambda_n)$ taking its values in $\mathbb{X}=\betaSpace\times\mathbb{R}^{+\ast}$ with dimension $d_{\bX}=d_{\defo}+1$. In the sequel, we will use the formalism and the terminology of the incomplete data model; see \cite{mclachlan:algorithm}. In this formalism, the observation $\mathbf{Y}_n$ stands for the incomplete data, $(I_n,\mathbf{X}_n)$ are the missing data and $(I_n,\mathbf{X}_n,\mathbf{Y}_n)$ are the complete data. For a given value of the parameter $\theta \in \thetaSet$, the complete data likelihood $\text{L}_\theta$ is:
\begin{equation}
\label{eq:likelihood}
\text{L}_{\theta}(I_n,\mathbf{X}_n,\mathbf{Y}_n)=g_{\theta}(\mathbf{Y}_n\,|\,I_n,\mathbf{X}_n)p_{\theta}(\mathbf{X}_n\,|\, I_n)\omega_{I_n}\;,
\end{equation}
where, for a given value of the parameter $\theta\in\thetaSet$, $g_\theta$ is the conditional density of the observations given the missing data and $p_\theta$ is the conditional density of the scaling factor and the local deformation parameter given the class index. Using \eqref{eq:obsRV} and \eqref{eq:statModel}, these densities are
\begin{eqnarray}
\label{eq:prior1}
&&g_{\theta}(\mathbf{Y}_n\,|\,I_n,\mathbf{X}_n)\propto\exp\left(-(1/{2\sigma^{2}})\|\mathbf{Y}_n-\lambda_n\Phi_{\beta_n}\boldsymbol{\alpha}_{I_n}\|^{2}\right)\,,\\
\label{eq:prior2}
&&p_{\theta}(\mathbf{X}_n\,|\, I_n)\propto\exp\left(-(1/2) \beta_n^{T} \Gamma_{I_n}^{-1}{\beta_n}\right) \lambda_n^{a-1} \exp(-b \lambda_n)\;.
\end{eqnarray}
The incomplete data likelihood is obtained by marginalizing the complete data likelihood with respect to the missing data.
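The complete data likelihood \eqref{eq:likelihood}--\eqref{eq:prior2} can be evaluated, up to normalizing constants, directly from its three factors. The sketch below does so for illustrative stand-in inputs; the function and variable names are ours, not the paper's.

```python
import numpy as np

# Unnormalized log complete-data likelihood log L_theta(I, X, Y), combining
# the Gaussian observation term g_theta, the deformation/scale prior p_theta
# and the class weight omega_I.  All inputs are illustrative stand-ins.
def log_complete_likelihood(I, beta, lam, Y, Phi_beta, alphas, Gammas, omega,
                            sigma, a, b):
    resid = Y - lam * Phi_beta @ alphas[I]
    log_g = -0.5 * resid @ resid / sigma ** 2          # g_theta term, up to a constant
    log_p = (-0.5 * beta @ np.linalg.solve(Gammas[I], beta)
             + (a - 1) * np.log(lam) - b * lam)        # p_theta term, up to a constant
    return log_g + log_p + np.log(omega[I])

# smoke check on random inputs of compatible dimensions
rng = np.random.default_rng(5)
Y, Phi = rng.standard_normal(10), rng.standard_normal((10, 3))
val = log_complete_likelihood(0, rng.standard_normal(2), 1.0, Y, Phi,
                              [rng.standard_normal(3)], [np.eye(2)],
                              np.array([1.0]), 0.5, 2.0, 2.0)
```

Marginalizing this quantity over $(I,\mathbf{X})$ is exactly the step that has no closed form, motivating the Monte Carlo machinery of the following sections.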
\section{Sequential parameter estimation using the Online EM algorithm}
\label{sec:onlineEM}
In its original version \cite{Dempster:EM}, the Expectation-Maximization (EM) algorithm is a batch algorithm, \textit{i.e.}\ it uses a fixed set of observations, performing maximum likelihood estimation in incomplete data models. It produces a sequence of parameters such that the observed likelihood increases at each iteration. Each iteration is decomposed into two steps. In the E-step, the conditional expectation of the complete data log-likelihood function given the observations and the current fit of the parameters is computed; in the M-step, the parameters are updated by maximizing the conditional expectation computed in the E-step.
In this paper, we focus on a learning setup in which the observations are obtained sequentially and the parameters are updated as soon as a new observation is available. Among several sequential learning algorithms designed to estimate parameters in missing data models, the online EM algorithm proposed in~\cite{Cappe:Online} sticks closely to the original EM methodology~\cite{Dempster:EM}. It does not require computing the gradient of the incomplete data likelihood nor the inverse of the complete data Fisher information matrix. Under some mild assumptions, it is shown in \cite{Cappe:Online} that, even when the model is misspecified, the algorithm converges to the set of stationary points of the Kullback-Leibler divergence between the observed likelihood (which does not necessarily belong to the statistical model) and the incomplete data likelihood. For a given value of the parameter $\theta \in \thetaSet$, we denote by $\target{\theta}{}{\mathbf{Y}_n}$ the posterior distribution of the missing data $(I_n,\mathbf{X}_n)$ given the observation $\mathbf{Y}_n$. The online EM \cite{Cappe:Online} is initiated with an initial guess $\estim{0}\in\thetaSet$. At the $n$-th iteration, the E-step consists in computing the function $\hat{Q}_{n}:\thetaSet\to\mathbb{R}$ defined recursively for all $n>0$ by:
\begin{equation}
\label{eq:estep}
\hat{Q}_{n}(\theta)=\hat{Q}_{n-1}(\theta)+\varrho_{n}\left(\mathbb{E}_{\estim{n-1}}\left[\,\log L_{\theta}(I_n,\mathbf{X}_n,\mathbf{Y}_{n})\,|\,\mathbf{Y}_{n}\,\right]-\hat{Q}_{n-1}(\theta)\right)\;,
\end{equation}
where $\mathbb{E}_{\estim{n-1}}(\,\cdot\,|\,\mathbf{Y}_n)$ stands for the conditional expectation under $\target{\estim{n-1}}{}{\mathbf{Y}_n}$ and $\{\varrho_{n}\}_{n>0}$ is a decreasing sequence of positive step sizes, with $\varrho_1=1$, so that $\hat{Q}_0$ need not be specified. In the M-step, the next estimate $\estim{n}$ is obtained by maximizing:
\begin{equation}
\label{eq:mstep}
\estim{n}=\mathrm{arg}\,\max\limits_{\theta \in \thetaSet}\, \hat{Q}_{n}(\theta)\;.
\end{equation}
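To make the recursion \eqref{eq:estep}--\eqref{eq:mstep} concrete, here is the online EM run on a toy model where the E-step \emph{is} tractable: a one-dimensional mixture of two unit-variance Gaussians with unknown means and weights. The model, step-size schedule and update delay are illustrative choices of ours, not those used for templates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model with tractable E-step: 1-D mixture of two unit-variance Gaussians
# with unknown means mu and weights w.  Per-class sufficient statistics:
# s = (E[z], E[z*y]), with z the class indicator.
true_mu, true_w = np.array([-2.0, 2.0]), np.array([0.3, 0.7])

mu, w = np.array([-1.0, 1.0]), np.array([0.5, 0.5])   # initial guess
s = np.zeros((2, 2))                                  # running statistics

for n in range(1, 20001):
    k = rng.choice(2, p=true_w)                       # stream a new observation
    y = true_mu[k] + rng.standard_normal()
    # E-step: exact posterior responsibilities under the current fit
    logp = -0.5 * (y - mu) ** 2 + np.log(w)
    resp = np.exp(logp - logp.max()); resp /= resp.sum()
    sbar = np.stack([resp, resp * y], axis=1)
    # stochastic approximation with step size rho_n = n^(-0.6), rho_1 = 1
    rho = n ** -0.6
    s += rho * (sbar - s)
    # M-step, closed form in the sufficient statistics; delayed for stability
    if n > 50:
        w = s[:, 0] / s[:, 0].sum()
        mu = s[:, 1] / s[:, 0]
```

After the stream is processed, `mu` and `w` approach the data-generating values.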
Under our model specification, the complete data log-likelihood belongs to a curved exponential family. Indeed, for a given parameter $\theta\in\thetaSet$, $\log\text{L}_\theta$ can be written as
\begin{equation}
\label{eq:exp_model}
\log\text{L}_\theta(I,\mathbf{X},\mathbf{Y})=t(\theta)+\pscal{}{r(\theta)}{S(I,\mathbf{X},\mathbf{Y})}\;,
\end{equation}
where the function $t$ is given by
\begin{equation*}
\label{eq:def_t}
t(\theta)=\log\frac{b^a}{\Gamma(a)}-\frac{|\design|}{2}\log{2\pi\sigma^2}-d_{\defo}\log{2\pi}\,,
\end{equation*}
and the functions $r(\theta)=(r_1(\theta),\ldots,r_C(\theta))$ and $S(I,\mathbf{X},\mathbf{Y})=(S_1(I,\mathbf{X},\mathbf{Y}),\ldots,S_C(I,\mathbf{X},\mathbf{Y}))$ are defined such that for all $j\in\{1,\ldots,C\}$:
\begin{gather*}
r_j(\theta)=(1/2)\left(
2\log(\omega_j)-\log\det\Gamma_j,
2\sigma^{-2}\boldsymbol{\alpha}_j,
-\sigma^{-2}(\boldsymbol{\alpha}_j\boldsymbol{\alpha}_j^{T}),
-{\Gamma_j^{-1}}^{T},
-\sigma^{-2},
-2b,
2(a-1)\right)\,,\\
S_j(I,\mathbf{X},\mathbf{Y}) =\delta_{I,j}\left(
1,
\lambda\Phi_{\beta}^{T}\mathbf{Y},
\lambda^2\Phi_{\beta}^{T}\Phi_{\beta},
\beta\beta^{T},
\|\mathbf{Y}\|^{2},
\lambda,
\log{\lambda}
\right)\,.
\end{gather*}
As a consequence, the two steps of the online EM consist in (i) computing for all $j\in\{1,\ldots,C\}$ the stochastic approximation (SA) recursion
\begin{align}
\label{eq:estep_curvedExp}
&\hat{s}_{n,j}=\hat{s}_{n-1,j}+\varrho_{n}\left(\bar{s}_{n,j}(\mathcal{M}hbf{Y}_{n};\estim{n-1})-\hat{s}_{n-1,j}\right),
\end{align}
where $\bar{s}_{n,j}(\mathbf{Y}_n;\estim{n-1})= \mathbb{E}_{\estim{n-1}}\left[\,S_j(I_n,\mathbf{X}_n,\mathbf{Y}_{n})\,|\,\mathbf{Y}_n\right]$ and (ii) updating the parameters according to
\begin{align}
\label{eq:mstep_curvedExp}
&\estim{n}=\mathrm{arg}\,\max\limits_{\theta \in \thetaSet}\;\left\{t(\theta)+\sum_{j=1}^{C}\pscal{}{r_j(\theta)}{\hat{s}_{n,j}}\right\}\,.
\end{align}
The maximization is in closed form. However, this algorithm remains essentially of theoretical interest because, in our model, the conditional expectation $\bar{s}_{n,j}(\mathbf{Y}_n;\estim{n-1})$ is not analytically tractable. Intractable E-steps have already been addressed for batch EM algorithms. In \cite{Delyon:ConvSAEM}, the authors proved the convergence of the Stochastic Approximation EM (SAEM) algorithm, in which the E-step is replaced by a stochastic approximation making use of realizations of the missing data generated according to the posterior distribution. Still, extending the SAEM algorithm to the online setup is not feasible in our case. Indeed, independent and identically distributed (\textit{i.i.d.}) samples from $\target{\estim{n-1}}{}{\mathbf{Y}_n}$ cannot be simulated. An alternative to the SAEM algorithm, known as MCMC-SAEM, was proposed in \cite{Kuhn:MCMCSAEM}: the authors suggested using Markov chain Monte Carlo (MCMC) methods (see \cite{Andrieu:IntroMCMC} for an introduction) to obtain samples from the posterior distribution.
In this paper, we adapt this approach to the sequential setting outlined above, leading to the Monte Carlo online EM (MCoEM) algorithm. It is a three-step iterative algorithm. Given the current fit of the parameter $\estim{n-1}$ and a new observation $\mathbf{Y}_{n}$, the algorithm proceeds as follows:
\begin{enumerate}[(1)]
\item \emph{simulation step}: simulate, using a $\target{\estim{n-1}}{}{\mathbf{Y}_n}$-reversible Markov kernel $\K{n}$, a Markov chain $\{I_n[k],\mathbf{X}_{n}[k]\}_{k>0}$,
\item \emph{stochastic approximation step}: update for each class $j\in\{1,\ldots,C\}$, the complete data sufficient statistics using the following recursion
\begin{equation}
\label{eq:estep_SAOEM} \tilde{s}_{n,j}=\tilde{s}_{n-1,j}+\varrho_{n}\left(\frac{1}{m_n}\sum_{k=1}^{m_n}S_j(I_n[k],\mathbf{X}_n[k],\mathbf{Y}_n)-\tilde{s}_{n-1,j}\right),
\end{equation}
where $m_n$ is the number of MCMC iterations performed at the $n$-th iteration of the MCoEM algorithm,
\item \emph{maximization step}: update the parameter $\estim{n}$ by maximizing:
\begin{equation}
\label{eq:mstep_SAOEM}
\estim{n}=\mathcal{M}hrm{arg}\,\max\limits_{\theta \in \thetaSet}\;\left\{t(\theta)+ \sum_{j=1}^{C}\pscal{}{r_j(\theta)}{\tilde{s}_{n,j}}\right\}\,.
\end{equation}
\end{enumerate}
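The stochastic approximation step \eqref{eq:estep_SAOEM} can be isolated in a toy experiment: replacing the Markov chain by i.i.d.\ draws (a simplifying assumption made only for this sketch), the recursion averages out the Monte Carlo noise and converges to the true expectation of the statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy statistic S(X) = X**2 with X ~ N(0, 1), whose expectation is 1.
# The Robbins-Monro recursion below mirrors the SA step: at iteration n,
# a length-m_n batch of draws (stand-in for the Markov chain) is averaged
# and blended into the running statistic with step size rho_n.
s_tilde = 0.0
for n in range(1, 5001):
    m_n = 10                               # "chain" length at step n
    draws = rng.standard_normal(m_n)       # i.i.d. stand-in for MCMC samples
    rho = n ** -0.6                        # decreasing steps, rho_1 = 1
    s_tilde += rho * (np.mean(draws ** 2) - s_tilde)
```

With decreasing step sizes, `s_tilde` settles near $\mathbb{E}[X^2]=1$ even though each batch estimate is noisy.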
For numerical stability, it is recommended not to update the parameter $\estim{n}$ at each iteration, especially in the first iterations of the algorithm (see the discussion in Section \ref{sec:illustration}). MCoEM updates $\estim{n}$ according to a user-defined update schedule $\mathfrak{N}\subset \mathbb{N}$. Algorithm \ref{alg:MCoEM} provides a pseudo-code representation of MCoEM.
\begin{algorithm}
\caption{Monte Carlo online EM}
\label{alg:MCoEM}
\begin{algorithmic}[1]
\State {\bf{Input:}}
\begin{itemize}
\item Initial guess: $\estim{0}\in\thetaSet$
\item A stream of observations: $\mathbf{Y}_1,\mathbf{Y}_2,\ldots$
\item Parameter update schedule: $\mathfrak{N}\subseteq \mathbb{N}$
\item An iteration counter $n$, initialized to $0$
\item A sequence of positive step sizes $\{\varrho_1,\varrho_2,\ldots\}$ with $\varrho_1=1$
\item MCMC length schedule $\{m_1,m_2,\ldots\}$
\end{itemize}
\State {\textbf{When} a new observation $\mathbf{Y}$ is available \textbf{do}}
\State \hspace{.7cm} Increment the iteration counter: $n=n+1$
\State \hspace{.7cm} \textbf{Simulation step}: Sample $m_n$ missing data $\{I_{n}[k],\mathbf{X}_{n}[k]\}_{k=1}^{m_n}$ from a Markov chain targeting $\pi_{\estim{n-1}}(\,\cdot\,|\,\mathbf{Y})$
$\blacktriangleright$ See Algorithm \ref{alg:MCMC}
\State \hspace{.7cm} \textbf{SA step}: Update the sufficient statistics $\tilde{s}_{n,1},\ldots, \tilde{s}_{n,C}$ via the stochastic approximation step
$\blacktriangleright$ See Eq. \eqref{eq:estep_SAOEM}
\State \hspace{.7cm} {\textbf{If} $n\in\mathfrak{N}$ \textbf{then}}
\State \hspace{1.4cm} \textbf{Maximization step}: Update the parameter estimate to $\estim{n}$
\hspace{.7cm}$\blacktriangleright$ See Eq. \eqref{eq:mstep_SAOEM}
\State \hspace{1.4cm}{\textbf{else}}
\State \hspace{1.4cm}Set $\estim{n}=\estim{n-1}$
\State \hspace{.7cm}{\textbf{end if}}
\State {\bf{Output:}} A sequence of parameters $\estim{1},\estim{2},\ldots$.
\end{algorithmic}
\end{algorithm}
\section{Approximating the hidden data joint posterior distribution}
\label{sec:carlinChib}
In this section, we construct a transition kernel $\K{}$ to sample the target distribution $\target{\theta}{}{\mathbf{Y}}$ (for notational simplicity, the iteration index $n$ of the EM algorithm is omitted in this section).
\begin{rem}
At this stage, one might legitimately wonder why special care must be taken when choosing $\K{}$, while valid MCMC routines are by now well established and available. A closer look at the target distribution rules out standard MCMC methods such as the Gibbs sampler \cite{Gelfand:GibbsSampling,Geman:Gibbs} for simulating samples from $\target{\theta}{}{\mathbf{Y}}$. Indeed, the target distribution is not defined on the product space $\mathbb{I}\times\mathbb{X}$ but on the union of spaces $(\{1\}\times\mathbb{X})\cup\cdots\cup (\{C\}\times\mathbb{X})$. This is because, in our framework, the deformation $\mathbf{X}$ should always be consistent with the class of the observation it applies to.
\end{rem}
\subsection{MCMC on an extended state space}
\label{subsec:mcmc}
We now explain the approach we followed. The basic idea, stemming from \cite{Carlin:Bayesian}, is to specify a joint distribution over the class index $I$ and auxiliary variables $\tilde{\mathbf{X}}_{1},\ldots, \tilde{\mathbf{X}}_{C}$, where for all $j\in\{1,\ldots,C\}$, $\tilde{\mathbf{X}}_{j}\in\mathbb{X}$ is a deformation parameter associated with the class $\cluster{j}$. We stress that, in this approach, deformation parameters are sampled for every class at each iteration. To specify the joint distribution, we introduce the \textit{pseudo-priors} or \textit{linking densities}, denoted $\{\kappa_{\theta,j}\}_{j=1}^{C}$. Note that whereas knowledge of the normalizing constant is not required for an MCMC algorithm, the normalizing constants of the pseudo-priors are assumed to be known, \textit{i.e.}\ the pseudo-priors $\{\kappa_{\theta,j}\}_{j=1}^{C}$ should integrate to $1$. It is also assumed that exact sampling from the pseudo-priors is feasible (and computationally inexpensive). We define an auxiliary joint posterior density $\targetCC{\theta}{}{\mathbf{Y}}$ on the product space $\mathbb{I}\times\mathbb{X}\times\cdots\times\mathbb{X}$ by:
\begin{multline}
\targetCC{\theta}{I,\tilde{\mathbf{X}}_1,\ldots,\tilde{\mathbf{X}}_C}{\mathbf{Y}}=\target{\theta}{I,\tilde{\mathbf{X}}_I}{\mathbf{Y}}\prod_{j\neq I}\kappa_{\theta,j}(\tilde{\mathbf{X}}_j)\\
\propto g_{\theta}(\mathbf{Y}\,|\,I,\tilde{\mathbf{X}}_I)p_{\theta}(\tilde{\mathbf{X}}_I\,|\,I)\omega_I\prod_{j\neq I}\kappa_{\theta,j}(\tilde{\mathbf{X}}_j)\;,\label{eq:targetCC}
\end{multline}
where $\omega_I$, $g_{\theta}$ and $p_{\theta}$ are defined in~\eqref{eq:statModel},~\eqref{eq:prior1} and \eqref{eq:prior2} respectively. It can be noted that the marginal of $\targetCC{\theta}{}{\mathbf{Y}}$ with respect to the auxiliary deformation parameters is the target distribution $\target{\theta}{}{\mathbf{Y}}$:
\begin{equation}
\label{eq:marginale}
\target{\theta}{I,\mathbf{X}}{\mathbf{Y}}=\idotsint \targetCC{\theta}{I,\tilde{\mathbf{x}}_{1:I-1},\mathbf{X},\tilde{\mathbf{x}}_{I+1:C}}{\mathbf{Y}}\mathrm{d} \tilde{\mathbf{x}}_{-I}\;,
\end{equation}
where for all $(i,j)\in\mathbb{I}^{2}$ such that $i<j$, $a_{i:j}=(a_i, a_{i+1}, \dots, a_j)$ and for all $i\in\mathbb{I}$, $a_{-i}=\{a_{j}\}_{j=1,j\neq i}^{C}$. Remarkably, this property does not depend on the choice of pseudo-priors.
A Metropolis-within-Gibbs sampler targeting $\targetCC{\theta}{}{\mathbf{Y}}$ is used to simulate a Markov chain $(I[k],\tilde{\mathbf{X}}_1[k],\ldots,\tilde{\mathbf{X}}_C[k])$ on the product space $(\mathbb{I}\times\mathbb{X}\times\ldots\allowbreak\times\mathbb{X})$. Suppose the Markov chain is at state $(I,\tilde{\mathbf{X}}_1,\ldots,\tilde{\mathbf{X}}_C)$; the full conditional posterior distributions required for the Gibbs sampler are:
\begin{align}
\label{eq:discreteCC}
&\targetCC{\theta}{I}{\tilde{\mathbf{X}}_{1:C},\mathbf{Y}}\propto g_{\theta}(\mathbf{Y}\,|\,I,\tilde{\mathbf{X}}_{I})p_{\theta}(\tilde{\mathbf{X}}_{I}\,|\,{I})\omega_{I}\prod_{j\neq I}\kappa_{\theta,j}(\tilde{\mathbf{X}}_j)\,,\\
\label{eq:auxiliaryCC}
&\targetCC{\theta}{\tilde{\mathbf{X}}_j}{I,\tilde{\mathbf{X}}_{-j},\mathbf{Y}}\propto
\begin{cases}
g_{\theta}(\mathbf{Y}\,|\,I,\tilde{\mathbf{X}}_{I})p_{\theta}(\tilde{\mathbf{X}}_{I}\,|\,I)=\pi_{\theta}(\tilde{\mathbf{X}}_I\,|\,I,\mathbf{Y})\;,&j=I\\
\kappa_{\theta,j}(\tilde{\mathbf{X}}_j)\;,&j\neq I\,.
\end{cases}
\end{align}
From \eqref{eq:discreteCC} and \eqref{eq:auxiliaryCC}, it can be seen that sampling the class index and the auxiliary deformations from their respective full conditional posterior distributions is straightforward. However, since the deformation parameter of the current class cannot be sampled directly, a Random Walk Metropolis-Hastings (RWMH) \cite{Metropolis:metropolisAlg} kernel $P_{\theta}(\tilde{\mathbf{X}}_I;\,\cdot\,|\,\tilde{\mathbf{X}}_{-I},I,\mathbf{Y})$ having $\pi_\theta(\,\cdot\,|\,I,\mathbf{Y})$ as its stationary distribution is applied $r$ times to $\tilde{\mathbf{X}}_I$ to generate $\tilde{\mathbf{X}}'_I$. The Markov chain transition is:
\begin{enumerate}[(i)]
\item $I'\sim \targetCC{\theta}{\cdot}{\tilde{\mathbf{X}}_{1:C},\mathbf{Y}}$
\item $\tilde{\mathbf{X}}_j'\sim \kappa_{\theta,j}$, for $j\neq I'$
\item $\tilde{\mathbf{X}}_{I'}'\sim P^r_{\theta}(\tilde{\mathbf{X}}_{I'};\,\cdot\,|\,\tilde{\mathbf{X}}_{-I'}',I',\mathbf{Y})$
\end{enumerate}
and the transition kernel $\tK{}^{\texttt{CC}}$ may thus be expressed as:
\begin{equation}
\label{eq:kernelCC_extended}
\tK{}^{\texttt{CC}}(I',\mathrm{d}\tilde{\mathbf{X}}_{1:C}'\,|\,I,\tilde{\mathbf{X}}_{1:C})=\targetCC{\theta}{I'}{\tilde{\mathbf{X}}_{1:C},\mathbf{Y}}P_{\theta}(\tilde{\mathbf{X}}_{I'};\mathrm{d} \tilde{\mathbf{X}}_{I'}'\,|\,\tilde{\mathbf{X}}_{-I'}',I',\mathbf{Y})\prod_{j\neq I'}\kappa_{\theta,j}(\mathrm{d}\tilde{\mathbf{X}}_j')\;.
\end{equation}
The Markov chain $\{I[k],\tilde{\mathbf{X}}_1[k],\ldots,\tilde{\mathbf{X}}_C[k]\}_{k>0}$, simulated through a Metropolis-within-Gibbs algorithm, provides samples from $\targetCC{\theta}{}{\mathbf{Y}}$. However, only the marginal samples $\{I[k],\mathbf{X}[k]=\tilde{\mathbf{X}}_{I[k]}\}_{k>0}$, distributed under $\target{\theta}{}{\mathbf{Y}}$ \eqref{eq:marginale}, are of interest and will be used in the approximation of the E-step of the MCoEM algorithm \eqref{eq:estep_SAOEM}. Pseudo-code of the Markov chain simulation algorithm is reported in Algorithm \ref{alg:MCMC}.
\begin{algorithm}
\caption{Markov chain simulating missing data}
\label{alg:MCMC}
\begin{algorithmic}[1]
\State {\bf{Input:}}
\begin{itemize}
\item An observation: $\mathbf{Y}$
\item A parameter estimate: $\theta$
\item Number of components: $C$
\item Length of the Markov chain: $m$
\item Number of RWMH iterations: $r$
\end{itemize}
\State Specification of the pseudo-prior densities $\kappa_{\theta,1},\ldots,\kappa_{\theta,C}$
$\blacktriangleright$ See Section \ref{subsec:pseudo}
\State Set $\tilde{\mathbf{X}}_j[0]\sim \kappa_{\theta,j}$ for $j=1,\ldots,C$
\For{$k=1,\ldots,m$}
\State Class sampling: $I[k]\sim \targetCC{\theta}{I}{\tilde{\mathbf{X}}_{1:C}[k-1],\mathbf{Y}}$
$\blacktriangleright$ See Eq. \eqref{eq:discreteCC}
\State Let $i=I[k]$
\State Random Walk Metropolis-Hastings move: $\tilde{\mathbf{X}}_{i}[k]\sim P^r_{\theta}(\tilde{\mathbf{X}}_{i}[k-1];\,\cdot\,|\,i,\mathbf{Y})$
\For {$j\in\{1,\ldots,C\}\backslash\{i\}$}
\State Pseudo-prior update: $\tilde{\mathbf{X}}_j[k]\sim\kappa_{\theta,j}$
\EndFor
\State Set $\mathbf{X}[k]=\tilde{\mathbf{X}}_{i}[k]$
\EndFor
\State {\bf{Output:}} A Markov chain $(I[1],\mathbf{X}[1],\ldots,I[m],\mathbf{X}[m])$.
\end{algorithmic}
\end{algorithm}
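As an illustration of the extended-space sampler in Algorithm \ref{alg:MCMC}, here is a toy version for a target $\pi(I=j,x)\propto\omega_j\,\mathcal{N}(x;m_j,1)$ with $C=2$, $r=1$, and deliberately widened Gaussian pseudo-priors (all choices are ours for the example). Whatever the pseudo-priors, the marginal frequency of the class index should approach $\boldsymbol{\omega}$, consistent with the marginalization property \eqref{eq:marginale}.

```python
import numpy as np
from math import log

rng = np.random.default_rng(4)

# Toy Carlin-and-Chib-style sampler on (I, x).  The pseudo-priors kappa_j
# are NOT the exact conditionals (they are widened), so the correction in
# the class full conditional matters.
w = np.array([0.25, 0.75])                   # class weights omega
m = np.array([-3.0, 3.0])                    # class means
kappa_sd = 2.0                               # pseudo-prior std (a crude proxy)

def log_target(j, x):                        # log pi(I=j, x), unnormalized
    return log(w[j]) - 0.5 * (x - m[j]) ** 2

def log_kappa(j, x):                         # log pseudo-prior kappa_j (N(m_j, 4))
    return -0.5 * ((x - m[j]) / kappa_sd) ** 2 - log(kappa_sd)

x = m.copy()                                 # one auxiliary variable per class
counts = np.zeros(2)
for k in range(20000):
    # (i) class index from its full conditional on the extended space
    lp = np.array([log_target(j, x[j]) + log_kappa(1 - j, x[1 - j])
                   for j in range(2)])
    p = np.exp(lp - lp.max()); p /= p.sum()
    I = rng.choice(2, p=p)
    # (ii) refresh the other auxiliary variable from its pseudo-prior
    x[1 - I] = m[1 - I] + kappa_sd * rng.standard_normal()
    # (iii) RWMH move on the current class's variable (r = 1 here)
    prop = x[I] + 0.8 * rng.standard_normal()
    if log(rng.uniform()) < log_target(I, prop) - log_target(I, x[I]):
        x[I] = prop
    counts[I] += 1

freq = counts / counts.sum()                 # should approach w = (0.25, 0.75)
```

The constant terms dropped in `log_kappa` are identical across classes and cancel in step (i), which is why they can be omitted.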
\subsection{Choice of the pseudo-prior densities}
\label{subsec:pseudo}
The specification of the linking densities is essential for sampling efficiency. Ideally, these densities should be close to the marginal posterior: for all $j\in\{1,\ldots,C\}$, the density $\mathbf{X} \to \kappa_{\theta,j}(\,\mathbf{X}\,)$ should be chosen as a proxy for $\mathbf{X}\to\pi_{\theta}(\mathbf{X}\,|\,j,\mathbf{Y})$. One idea is, for instance, to set the pseudo-prior density as a Gaussian approximation of the target density. Such an approximation can be obtained using the Laplace method~\cite{Wolfinger:Laplace} or another approximate Bayesian sampling method. Under the (weak) assumption that the function $\mathbf{X}\to\target{\theta}{\mathbf{X}}{j,\mathbf{Y}}$ admits a maximum,
\begin{equation}
\label{eq:pseudo_mean}
\mathbf{X}^{\ast}_j=\arg\max_{\mathbf{X}\in\mathbb{X}}\pi_{\theta}(\mathbf{X}\,|\,j,\mathbf{Y})\,,
\end{equation}
the second-order Taylor expansion of the logarithm of $\pi_{\theta}(\mathbf{X}\,|\,j,\mathbf{Y})$ around this maximum is:
\begin{equation}
\label{eq:TaylorExp}
\log{\pi_{\theta}(\mathbf{X}\,|\,j,\mathbf{Y})}=\log{\pi_{\theta}(\mathbf{X}^{\ast}_j\,|\,j,\mathbf{Y})}+\frac{1}{2}(\mathbf{X}-\mathbf{X}^{\ast}_j)^{T}H_{j} (\mathbf{X}-\mathbf{X}^{\ast}_j)+o(\|\mathbf{X}-\mathbf{X}^{\ast}_j\|^{2})\,,
\end{equation}
where for all $j\in\{1,\ldots,C\}$, $H_j$ is the Hessian matrix, whose coefficients are given for all $(q,r)\in\{1,\ldots,d_{\bX}\}^{2}$ by:
\begin{equation}
\label{eq:pseudo_cov}
[H_j]_{q,r}=\left.\frac{\partial^{2}}{\partial \mathbf{X}_{q}\partial \mathbf{X}_{r}}\log{\pi_{\theta}(\mathbf{X}\,|\,j,\mathbf{Y})}\right|_{\mathbf{X}=\mathbf{X}^{\ast}_j}\,.
\end{equation}
Note that, for better readability, for all $j\in\{1,\ldots,C\}$, the dependence of the linking densities $\kappa_{\theta,j}$ and of the parameters $\mathbf{X}^{\ast}_j$ and $H_j$ on $\mathbf{Y}$ and $\theta$ is not made explicit in the notation, but does exist.
The previous discussion suggests that $\mathcal{N}_{d_{\bX}}(\mathbf{X}^{\ast}_j,-H_{j}^{-1})$ is a sensible candidate for $\kappa_{\theta,j}$. The pseudo-prior parameters $\mathbf{X}^{\ast}_j$ may be obtained using standard nonlinear optimization methods. Since $\mathbf{X}^{\ast}_j$ is only used in the pseudo-prior specification, the precision of the optimizer does not matter much and simple heuristics can be used (see the related discussion in Section \ref{sec:illustration}).
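A one-dimensional sketch of this Laplace construction: the mode is located by Newton's method with finite-difference derivatives, and the curvature at the mode supplies the Gaussian pseudo-prior's variance. The target below (a Gamma kernel) and all numerical settings are illustrative choices of ours.

```python
import numpy as np

# Illustrative skewed, non-Gaussian log-density: a Gamma(4, 1.5) kernel,
# defined for x > 0, whose mode is 3 / 1.5 = 2.
def log_post(x):
    return 3.0 * np.log(x) - 1.5 * x

# Locate the mode with Newton's method, using central finite differences
x, h = 1.0, 1e-5
for _ in range(50):
    g = (log_post(x + h) - log_post(x - h)) / (2 * h)              # gradient
    H = (log_post(x + h) - 2 * log_post(x) + log_post(x - h)) / h**2  # curvature
    x -= g / H

mode = x
H = (log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h**2
# Gaussian pseudo-prior kappa = N(mode, -H^{-1})
pseudo_mean, pseudo_var = mode, -1.0 / H
```

For this target the analytic answers are mode $=2$ and variance $-1/H = x^2/3 = 4/3$ at the mode, so the numerical construction can be checked directly; in the template model the same recipe is applied in $d_{\bX}$ dimensions.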
\begin{rem}
Our proposed kernel shares some similarities with the one proposed in \cite{Allassonniere:Mixture}, which also makes use of auxiliary variables $\{\tilde{\mathbf{X}}_1,\ldots,\tilde{\mathbf{X}}_C\}$. These authors propose to first sample the class index $I$ from $\pi_{\theta}(\,\cdot\,|\,\mathbf{Y})$ and then draw $\mathbf{X}\sim\pi_{\theta}(\,\cdot\,|\,I,\mathbf{Y})$. However, since sampling the class index from the posterior distribution is not feasible (indeed $\pi(I=j\,|\,\mathbf{Y})\propto\pi_\theta(j,\mathbf{Y})$, which is not analytically tractable), auxiliary variables $\{\tilde{\mathbf{X}}_1[k],\ldots,\tilde{\mathbf{X}}_C[k]\}_{k>0}$ are sampled from $C$ independent Markov chains, each targeting $\pi_{\theta}(\,\cdot\,|\,j,\mathbf{Y})$, $j\in\{1,\ldots,C\}$, in an attempt to approximate the posterior weights $\{\pi_{\theta}(j\,|\,\mathbf{Y})\}_{j=1}^{C}$. These approximate weights allow $I$ to be sampled, and parameter samples $\{\mathbf{X}[k]\}_{k>0}$ are then drawn using a Markov chain targeting $\pi_\theta(\,\cdot\,|\,I,\mathbf{Y})$. This scheme is all the more computationally intensive as \cite{Allassonniere:Mixture} uses a batch learning setup, which implies that at each iteration as many latent variables $\{I_n[k],\mathbf{X}_n[k]\}_{k>0}$ as there are observations in the dataset need to be sampled.
\end{rem}
\section{Numerical Illustration}
\label{sec:illustration}
We evaluate the performance of our online learning algorithm on two types of data: growth velocity curves and handwritten digits. These two examples illustrate the flexibility, stability and computational effectiveness of the proposed MCoEM. MCoEM is then compared with an equivalent SAEM algorithm on the handwritten digit template extraction task.
\subsection{Growth velocity curve study}
\label{subsec:5:1}
The growth velocity curve example is a classical benchmark in curve registration; see \cite{ramsay:functional,zhong:2008}. It is used here for illustrative purposes because the rationale of the model is easy to grasp. The growth curves are obtained from the Berkeley Growth Study data \cite{tuddenham:physical} and display the evolution of the growth velocity between $2$ and $18$ years for $39$ boys and $54$ girls; see Figure \ref{fig:curves}. Even though each observation is known to arise from either a boy or a girl, we will not make any use of this information, as MCoEM is designed for unsupervised inference on mixture models. The objective of the algorithm is therefore to retrieve a standard growth profile for boys and for girls from the unlabeled set of growth velocity curves. The curves plot the growth velocity of individuals observed at $|\design|=31$ irregularly spaced landmarks $\Omega=\{u_{1},\ldots,u_{|\design|}\}$, such that for all $s\in\{1,\ldots,|\design|\}$, $2\leq u_s\leq 18$.
\begin{figure}
\caption{Growth velocity samples and template extraction obtained through $1,000$ iterations of the MCoEM algorithm.}
\label{fig:curves}
\end{figure}
\subsubsection{Deformable template model}
Growth profiles may vary from one individual to another, both as a function of time and in amplitude. The algorithm aims to extract templates for the growth velocity curves: it associates to each observation $Y_n$ a monotonically increasing time warping function $u\mapsto D(u,\beta_n)$ as well as a global scaling parameter $\lambda_n$. We consider a mixture model with $C=2$ components, implying that we aim at retrieving templates for the boys' and girls' growth velocities separately: the class index $I_n\in\{1,2\}$ models the boys and girls clusters. In this illustration, the template is a function $f_{\boldsymbol{\alpha}_i}$ ($i\in\{1,2\}$) defined on an open segment $\mathbb{U}=\ooint{u_i}{u_f}=\ooint{2}{18}$ and parameterized as:
\begin{equation}
\label{eq:curveRegTemplate}
f_{\boldsymbol{\alpha}_i}(u)=\sum_{\ell=1}^{m}\alpha_{i,\ell}\phi_\ell(u)\;,\quad (\alpha_{i,1},\ldots,\alpha_{i,m})\in\mathcal{A}={\mathbb{R}^{+}}^{m}\,,
\end{equation}
where $\phi_\ell$ is set as $u\mapsto\phi_\ell(u)=\exp{(-{\nu}_\ell^{-2}(u-r_\ell)^2)}$ and $\{r_\ell\}_{\ell=1}^{m}$ are regularly spaced landmark points in $\mathbb{U}$. The choice of $\{\phi_\ell\}_{\ell=1}^{m}$ and $\mathcal{A}$ ensures that the template function $u\mapsto f_{\boldsymbol{\alpha}_i}(u)$ is positive, which is a natural constraint for growth velocity curves. For all $\ell\in\{1,\ldots,m\}$, the bandwidth of $\phi_\ell$ is set as ${\nu}_\ell^2=-\frac{\min_{u\in\Omega}\|r_\ell-u\|^{2}}{\log{\varepsilon}}$, where $\varepsilon \in \ooint{0}{1}$ is the value of $\phi_\ell$ at the design point nearest to $r_\ell$. This choice of bandwidth takes the irregularly spaced measurement points in $\Omega$ into account. In this implementation, we used $m=35$, so that the kernels $\phi_1,\phi_2,\ldots$ are centered on landmarks spaced at 6-month intervals, and $\varepsilon=0.1$. The deformable template model \eqref{eq:modelFunc} then reads, for all $u\in\mathbb{U}$:
$$
Y_n\in\cluster{i},\qquad Y_n(u)=\lambda_n f_{\boldsymbol{\alpha}_i}\circ D(u,\beta_n)+\sigma W_n(u)\,.
$$
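As a concrete illustration of the template parameterization and of the adaptive bandwidth rule above, here is a minimal Python sketch (this is our own illustrative code, not the authors'; the toy design points and weights are assumptions):

```python
import numpy as np

# Illustrative sketch of the template f_alpha(u) = sum_l alpha_l phi_l(u),
# with Gaussian kernels whose bandwidths follow the rule
# nu_l^2 = -min_{u in Omega} ||r_l - u||^2 / log(eps), eps in (0, 1).

def bandwidths(landmarks, design, eps=0.1):
    # squared distance from each landmark r_l to its nearest design point
    d2 = np.min((landmarks[:, None] - design[None, :]) ** 2, axis=1)
    return -d2 / np.log(eps)            # log(eps) < 0, hence nu_l^2 > 0

def template(u, alpha, landmarks, nu2):
    # f_alpha(u); positive whenever the weights alpha are nonnegative
    phi = np.exp(-(np.atleast_1d(u)[:, None] - landmarks[None, :]) ** 2 / nu2)
    return phi @ alpha

# toy setup: 31 irregular design points on [2, 18], m = 5 kernels
rng = np.random.default_rng(0)
design = np.sort(2.0 + 16.0 * rng.random(31))
landmarks = np.linspace(2.0, 18.0, 5)
nu2 = bandwidths(landmarks, design, eps=0.1)
f = template(design, np.array([1.0, 2.0, 3.0, 2.0, 1.0]), landmarks, nu2)
```

By construction, each $\phi_\ell$ takes the value $\varepsilon$ at the design point nearest to its landmark, so the kernels neither collapse nor flood over sparse regions of $\Omega$.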
In this setting, the time warping function $u\mapsto D(u,\beta)$ is monotonically increasing and should satisfy $D(u_i,\beta)\geq u_i$ and $D(u_f,\beta)\leq u_f$ (indeed, the template \eqref{eq:curveRegTemplate} vanishes outside $\ooint{u_i}{u_f}$). In order to satisfy these constraints, we write $D(\,\cdot\,,\beta)$ as:
\begin{equation}
\label{eq:locDef_curves}
D(u,\beta)=u_i+(u_f-u_i)H(u,\beta)\;,
\end{equation}
where $H(\,\cdot\,,\beta)$ is modeled as proposed in \cite{Ramsay:curveReg} with:
\begin{equation}
\label{eq:locDef_H}
H(u,\beta)=\frac{\int_{{u'_i}}^{u}\exp{\left[\sum_{k=1}^{d_{\defo}}\beta_k\psi_k(v)\right]\mathrm{d} v}}{\int_{u'_i}^{u'_f}\exp{\left[\sum_{k=1}^{d_{\defo}}\beta_k\psi_k(v)\right]\mathrm{d} v}}\;,
\end{equation}
where $u'_i\leq u_i$ and $u'_f\geq u_f$ ensure that the constraints stated above are satisfied. For all $k$ in $\{1,\ldots,d_{\defo}\}$, $\beta_k\in\mathbb{R}$ and $\{\psi_k\}_{k=1}^{d_{\defo}}$ is a dictionary of Gaussian kernels centered on the landmark points $\{q_k\}_{k=1}^{d_{\defo}}$, all with the same bandwidth $\tau^2$. In this implementation, we set $u'_i=0$, $u'_f=20$ and use $d_{\defo}=20$ regularly spaced landmark points such that $q_1=u'_i$ and $q_{d_{\defo}}=u'_f$; the kernel variance is set to $\tau^{2}=1$. Moreover, the prior distribution \eqref{eq:prior1} of $\beta$ is set with mean $(1,\dots,1)^T$ and, for all $j\in\{1,2\}$, a covariance matrix $\Gamma_j$ parameterized by the variance $\gamma_j^2$, such that $\Gamma_j=\gamma_j^{2} \mathrm{Id}_{d_{\defo}}$. The estimates of $\gamma_j^{2}$ after $n=1000$ iterations are $\hat{\gamma}_{1,1000}^{2}=0.08$ and $\hat{\gamma}_{2,1000}^{2}=0.07$. A Gamma prior with parameters $a=b=10$ is assumed for $\lambda_n$.
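The warp $D(u,\beta)=u_i+(u_f-u_i)H(u,\beta)$ can be computed numerically as in the following sketch (our own illustrative Python, using the constants quoted above; the integrals in $H$ are approximated by the trapezoidal rule, and the Gaussian form of $\psi_k$ is an assumption consistent with the text):

```python
import numpy as np

# Illustrative sketch of the monotone warp D(u, beta) = u_i + (u_f - u_i) H(u, beta),
# where H is a normalized integral of exp(sum_k beta_k psi_k(v)).
u_i, u_f = 2.0, 18.0
up_i, up_f = 0.0, 20.0                    # u'_i <= u_i and u'_f >= u_f
d, tau2 = 20, 1.0
q = np.linspace(up_i, up_f, d)            # regularly spaced landmarks, q_1 = u'_i

def warp(us, beta, grid=2001):
    v = np.linspace(up_i, up_f, grid)
    psi = np.exp(-(v[:, None] - q[None, :]) ** 2 / tau2)   # Gaussian kernels (assumed)
    w = np.exp(psi @ beta)                                 # strictly positive integrand
    # cumulative trapezoidal approximation of the numerator integral
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(v))))
    H = cum / cum[-1]                                      # H increases from 0 to 1
    return u_i + (u_f - u_i) * np.interp(us, v, H)

us = np.linspace(u_i, u_f, 100)
D = warp(us, np.ones(d))                  # prior-mean deformation parameters
```

Since the integrand is strictly positive, $H$ is strictly increasing; and because $u'_i<u_i$ and $u'_f>u_f$, we get $D(u_i,\beta)>u_i$ and $D(u_f,\beta)<u_f$, as required.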
\subsubsection{Sampling the missing data}
Figures \ref{fig:curveSamplingScheme_1}--\ref{fig:curveSamplingScheme_3} illustrate the sampling scheme proposed in Section \ref{sec:carlinChib}, taking place at a given iteration $n$ of the MCoEM algorithm (the index $n$ is omitted hereafter). For $j\in\{1,2\}$, the auxiliary variable $\tilde{\mathbf{X}}_j$ consists of $\tilde{\mathbf{X}}_j=(\lambda_j,\beta_j)$. In Figure \ref{fig:curveSamplingScheme_1}, green dots represent an observation $Y$ along with the templates as plain curves (boys on the top panel and girls on the bottom panel). In each panel, the dashed curves illustrate different realizations of the distorted template under the action of the deformation parameters $\tilde{\mathbf{X}}_1[k]=(\beta_1[k],\lambda_1[k])$ and $\tilde{\mathbf{X}}_2[k]=(\beta_2[k],\lambda_2[k])$ sampled using the kernel $\tK{}^{\texttt{CC}}$. For each new observation $\mathbf{Y}$, we used $300$ iterations of the Markov chain detailed in Subsection \ref{subsec:mcmc}, discarding the first 100 states as burn-in. The pseudo-priors $\kappa_1$ and $\kappa_2$ were set as Gaussian distributions, as specified in Subsection \ref{subsec:pseudo}. For $j\in\{1,2\}$, the means $(\lambda^{\star}_j,\beta^{\star}_j)$ were obtained through a quasi-Newton optimization method (with an early stopping rule, since a precise fit is not needed). For computational efficiency, the covariance matrix was set as $\hat{\Gamma}_{j,n}=\hat{\gamma}_{j,n}^2\mathrm{Id}_{d_{\defo}}$ (the $j$-th class prior covariance matrix estimate).
Even though the pseudo-prior distributions may provide inappropriate deformation parameters (see some samples from $D(\,\cdot\,,\beta_{1}[k])$ on the top panel of Figure \ref{fig:curveSamplingScheme_1}), they nevertheless achieve their twofold target, namely (i) allowing switches between models, as illustrated in Figure \ref{fig:curveSamplingScheme_2}, and (ii) sampling deformations that are consistent with $\mathbf{Y}$: the distorted templates tend to match the observation. Figure \ref{fig:curveSamplingScheme_2} shows the two warping functions $D(\,\cdot\,,\beta_1[k])$ and $D(\,\cdot\,,\beta_2[k])$ corresponding to the samples $\beta_1[k]$ and $\beta_2[k]$ obtained at the $k=300$-th iteration of the Markov chain. It shows that, in order to register the template with the observation, the boys' time warping function (in black, parameterized by $\beta_1$) accelerates time from 9 years old onwards much faster than its girls' counterpart (in red, parameterized by $\beta_2$). This is evidence that the observation is more likely to arise from a girl's record. The sampling of the cluster index \eqref{eq:discreteCC} makes use of the complete data log-likelihood and promotes models involving small deformations. Therefore, the class $I=2$ is more likely, as confirmed by Figure \ref{fig:curveSamplingScheme_3}, which represents the class sampling scheme throughout the $300$ MCMC iterations.
\begin{figure}
\caption{Sampling of the hidden data posterior distribution: samples $\{\lambda_1[k],\beta_1[k],\lambda_2[k],\beta_2[k],I[k]\}_{k>0}$.}
\label{fig:curveSamplingScheme_1}
\end{figure}
\begin{figure}
\caption{Time warping functions for the deformation parameters $\beta_1$ and $\beta_2$ sampled at the last iteration ($k=300$) of the Markov chain produced by $\tK{}^{\texttt{CC}}$.}
\label{fig:curveSamplingScheme_2}
\end{figure}
\begin{figure}
\caption{Class sampling $\{I[1],\ldots,I[300]\}$.}
\label{fig:curveSamplingScheme_3}
\end{figure}
\subsubsection{Template estimation}
Starting with two $m=35$ dimensional random vectors $\hat{\boldsymbol{\alpha}}_{1,0}$ and $\hat{\boldsymbol{\alpha}}_{2,0}$, the two templates $f_{\hat{\boldsymbol{\alpha}}_{1,1000}}$ and $f_{\hat{\boldsymbol{\alpha}}_{2,1000}}$ displayed in Figure \ref{fig:curves} were obtained after $N=1,000$ iterations of the MCoEM algorithm. Since a limited number of observations is available, each observation is processed several times, drawn at random throughout the iterations. The templates show that girls reach the pubertal growth spurt earlier (between $11$ and $12$ years) than boys (between $13$ and $14$ years). Moreover, we notice that the boys' growth velocity profile features a more pronounced pre-pubertal dip than the girls'.
\subsection{Handwritten digits template extraction}
\label{subsec:5:2}
The algorithm is then applied to a collection of handwritten digits from the US postal database, featuring $N=1,000$ samples of each handwritten digit from $0$ to $9$, each of which is a $16 \times 16$ pixel image. The USPS digits data were gathered at the Center of Excellence in Document Analysis and Recognition (CEDAR) at SUNY Buffalo, as part of a project sponsored by the US Postal Service; see \cite{hull:database}. The main difficulty with these data stems from the geometric dispersion \emph{within each class} of digit. Two sources of variability are considered:
\begin{enumerate}[(i)]
\item The first type is assumed meaningful, since it is intrinsically related to the class of digit, and MCoEM seeks to learn it: the templates. A digit may indeed need more than a single prototype shape to be efficiently modeled by a mixture of deformable templates. For example, a digit two may be written with or without a loop in the lower left-hand corner, and a digit seven may feature a horizontal bar across the diagonal line.
\item The second type is regarded as nuisances resulting from the presentation context and is deemed irrelevant for identifying the class of an observation. These consist of small local deformations and global deformations such as rotations, homotheties and translations. Such nuisances result from the size of the pen used, different handwriting skills, digits being partially censored by the observation window, etc.
\end{enumerate}
\subsubsection{Deformable template model}
An observation $\mathbf{Y}_n$ is a $16\times 16$ matrix, regarded as a $|\design|=256$ dimensional vector, whose coordinates correspond to the photometry of a fixed set of pixels, $(u_{1},\ldots,u_{|\design|})$, such that for all $s\in\{1,\ldots,|\design|\}$, $u_s\in\ooint{-1}{1}\times\ooint{-1}{1}$. The raw database consists of noise-free observations, such that for all $s\in\{1,\ldots,|\design|\}$, $\mathbf{Y}_{n,s}\in\ooint{0}{1}$. To make the problem more challenging, an additive Gaussian noise $W_s=\sigma\epsilon$, where $\sigma=0.2$ and $\epsilon\sim\mathcal{N}(0,1)$, is added to each pixel $\mathbf{Y}_{n,s}$ (see Figure \ref{fig:templates} (a)).
A template $f$ is a function defined on $\mathbb{U}=\mathbb{R}^2$. The dictionary of functions $\{\phi_\ell\}_{\ell=1}^{m}$ is set as Gaussian kernels with $m=256$. The landmark points $\{r_\ell\}_{\ell=1}^{m}$ are regularly spaced in the square $\ooint{-1}{1}\times\ooint{-1}{1}$ and the kernel $\phi_\ell$ is defined as $u\mapsto\phi_\ell(u)=\exp{(-{\nu}^{-2}(u-r_\ell)^2)}$ with $\nu=0.2$.
Contrary to the growth velocity curve case, where growth profiles feature different scales, the scale dispersion in the measurement space is limited in this database. As a consequence, a scaling factor $\lambda_n$ is not relevant here. To each image $\mathbf{Y}_n$ are associated a class $I_n\in\{1,\ldots,C\}$ and a deformation parameter $\beta_n$, such that a template can be geometrically deformed under the action of a function $u\mapsto D(u,\beta_n)$. We consider two complementary types of deformation:
\begin{itemize}
\item A rigid deformation $u\mapsto\rigid{u}{\upsilon_n}$, where $\upsilon_n$ parameterizes rotations, homotheties and translations. Indeed, the templates need to be allowed to rotate and to be translated in space in order to match the observations, in particular those partially censored by the observation window. Homotheties allow the templates to be zoomed in or out.
In this case, $\upsilon_n$ is a $6$-dimensional real vector, $\upsilon_n=(\varphi_n,\varrho_n,c_n,t_n)$, where $c_n$ is the center of the rotation of angle $\varphi_n$ and of the homothety of ratio $\varrho_n$, and $t_n$ is the translation vector. $\rigid{\,\cdot\,}{\upsilon_n}$ reads, for all $u\in\mathbb{U}$:
$$\rigid{u}{\upsilon_n}=\rotation{\varphi_n}(\varrho_n u+t_n-c_n)+c_n\;,$$
where $\rotation{\varphi_n}$ is the rotation matrix of angle $\varphi_n$. A Gaussian prior is set on $\upsilon_n$, with zero mean for the components $(\varphi_n,c_n,t_n)$ and mean one for $\varrho_n$. The covariance matrix is diagonal with variances set to $0.1$.
\item A smooth small deformation field, used to locally register a template with the observation. It is parameterized by a $d_{V}$-dimensional vector $\delta_n=(\delta_{n,1},\ldots,\delta_{n,d_{V}})$ and reads, for all $u\in\mathbb{U}$, as
$$
\deltaal{u}{\delta_n}=\sum_{k=1}^{d_{V}}\delta_{n,k}\psi_k(u)\;,
$$
where for all $k\in\{1,\ldots,d_{V}\}$, $\delta_{n,k}\in\mathbb{R}^{2}$, in order to allow small displacements in both directions. The smoothness of the deformation is enforced by the choice of the functions $\{\psi_k\}_{k=1}^{d_{V}}$, which belong to a dictionary of Gaussian kernels defined on $\mathbb{R}^{2}$ and centered on the landmark points $\{q_k\}_{k=1}^{d_{V}}$ with identical variance $\sigma_V^{2}$, such that for all $k\in\{1,\ldots,d_{V}\}$, $\psi_k(u)=\exp{\left(-\sigma_{V}^{-2}\|u-q_k\|^{2}\right)}$. In this implementation, we used $d_{V}=36$ landmark points at the vertices of a regular grid on the square $\ooint{-0.5}{0.5}\times\ooint{-0.5}{0.5}$ and a bandwidth $\sigma_V^2=0.16$. As a consequence, the local deformation parameter $\delta_n$ is a $72$-dimensional vector. Similarly to $\upsilon_n$, conditionally on $I_n=j$, a Gaussian distribution with zero mean and covariance matrix $\Gamma_j$ is assumed for the parameter $\delta_n$. In this implementation, for all $j\in\{1,\ldots,C\}$, $\Gamma_j$ writes $\Gamma_j=\gamma_j^{2}M$, where $M$ is a fixed matrix with ones on the diagonal and $0.2$ on the lower and upper diagonals.
\end{itemize}
Hence, the parameter $\beta_n=(\upsilon_n,\delta_n)$ is a $78$-dimensional vector belonging to the space $\betaSpace=[0,2\pi]\times\mathbb{R}^{+}\times\mathbb{R}^2\times\mathbb{R}^2\times \mathbb{R}^{72}$.
\begin{figure}
\caption{Distortion of a template $f_{\boldsymbol{\alpha}}$ under the deformation model \eqref{eq:defGeoModelDigits}.}
\label{fig:templateDeformations}
\end{figure}
Finally, in this setting the deformation model reads, for all $u\in\mathbb{U}$ and all $\beta_n\in\betaSpace$, as:
\begin{equation}
\label{eq:defGeoModelDigits}
D(u,\beta_n)=\rotation{\varphi_n}(\varrho_n u+t_n-c_n)+c_n+\sum_{k=1}^{d_{V}}\delta_{n,k}\psi_k(u)\;.
\end{equation}
It is illustrated in Figure \ref{fig:templateDeformations}.
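The composite deformation \eqref{eq:defGeoModelDigits} can be sketched in a few lines (our own illustrative Python; the landmark grid and bandwidth follow the values quoted in the text, while all function and variable names are ours):

```python
import numpy as np

# Illustrative sketch of D(u, beta) = R_phi(rho*u + t - c) + c + sum_k delta_k psi_k(u)
# on the pixel domain (-1,1)^2.
dV, sigmaV2 = 36, 0.16
g6 = np.linspace(-0.5, 0.5, 6)
q = np.stack(np.meshgrid(g6, g6), axis=-1).reshape(-1, 2)    # 36 landmarks

def deform(u, phi, rho, c, t, delta):
    """u: (n, 2) points; delta: (36, 2) local displacements."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    rigid = (rho * u + t - c) @ R.T + c                      # rotation / zoom / shift
    psi = np.exp(-np.sum((u[:, None] - q[None, :]) ** 2, axis=2) / sigmaV2)
    return rigid + psi @ delta                               # smooth local field

# identity check: neutral rigid motion and a zero local field leave u unchanged
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 16),
                            np.linspace(-1, 1, 16)), axis=-1).reshape(-1, 2)
out = deform(grid, phi=0.0, rho=1.0, c=np.zeros(2), t=np.zeros(2),
             delta=np.zeros((dV, 2)))
```

Points are stored as rows, so the rotation is applied as `u @ R.T`; with $\varphi_n=0$, $\varrho_n=1$, $c_n=t_n=0$ and $\delta_n=0$ the map reduces to the identity, as expected.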
\subsubsection{Parameter estimation}
\label{sec:5:2:2}
We consider two learning setups:
\begin{enumerate}
\item \textbf{Partially-supervised}: the templates are learnt for each digit separately, with $C_1=4$ classes, through $N_1=1,000$ iterations of MCoEM. Thus, 10 independent models are learnt, and the resulting templates are reported in Figure \ref{fig:templates}-(b). We refer to this approach as partially-supervised since MCoEM deals with images of the same digit (labeled) but assigns each observation to one of the four classes describing this type of digit in an unsupervised fashion.
\item \textbf{Fully-unsupervised}: the templates are learnt from the dataset containing all 10 digits (unlabeled), with $C_2=20$ classes and $N_2=5,000$ iterations. Thus, only one model is learnt, and the resulting templates are displayed in Figure \ref{fig:templates}-(c).
\end{enumerate}
\begin{figure}
\caption{Templates estimated in the two different schemes: (b) partially-supervised, after $N_1=1,000$ MCoEM iterations with $C_1=4$ components for each model and (c) fully-unsupervised, after $N_2=5,000$ MCoEM iterations with $C_2=20$ components. In both setups, MCoEM was applied on handwritten digit images similar to those displayed in (a).}
\label{fig:templates}
\end{figure}
The templates obtained in the two settings are similar, even though in Figure \ref{fig:templates}-(c) the algorithm makes use of a class for digits that can hardly be assigned to one of the existing mixture components (template in the bottom right corner). In addition, in the fully-unsupervised scheme, the number of classes describing a digit is ruled by the learning algorithm and may not be optimal: for instance, a digit two could be described with more than two clusters, whereas three classes for a digit nine seem a bit excessive.
Following the guidelines provided in \cite{Cappe:Online}, the sufficient statistics (and consequently the parameters) should be updated for the first time only once several observations have been gathered. Indeed, the parameter update step (see Eq. \eqref{eq:mstep_SAOEM}) requires that the sufficient statistics vector satisfy some constraints: in particular, $\{\tilde{s}_{n,j,1}\}_{j=1}^{C}$ should be nonzero scalars and $\{\tilde{s}_{n,j,3}\}_{j=1}^{C}$ should be invertible matrices. In practice, these assumptions hold when the first update happens after $n=50$ MCoEM iterations, the second after $n=75$, and then as soon as a new observation is available from $n=100$ onwards. Initialisation of the template parameters can potentially lead to degeneracy if one or more classes are initialised with pathological parameters. This issue was not encountered in the partially-supervised setup, probably because the class sampling is easier, the data being all observations of the same digit; the initial template parameters were thus set randomly. In the fully-unsupervised scheme, however, the template parameters were set as the cluster centroids returned by a k-means clustering algorithm (using the Matlab built-in routine) applied to 50 images of the dataset drawn at random. More precisely, for all $j\in\{1,\ldots,C\}$, $\hat{\boldsymbol{\alpha}}_{0,j}=(\Phi_{O_{d_{\defo}}}^{T}\Phi_{O_{d_{\defo}}})^{-1}\Phi_{O_{d_{\defo}}}^{T}\mathbf{c}_j$, where $\Phi_{\beta}$ is defined in Eq. \eqref{eq:matPhi} and $\mathbf{c}_j$ is the centroid of k-means cluster $j$.
Figure \ref{fig:estimation1} shows the parameter estimates $\{\estim{n}\}_{n=1}^{1000}$ throughout the MCoEM algorithm for the digit two learnt separately with $C=4$ classes (partially-supervised). The functions $\{f_{\boldsymbol{\alpha}_{n,j}}\}_{1\leq j\leq C}$ tend progressively to usual reference shapes, and each new available observation enhances the template estimates (\autoref{fig:estimation1}-(a)).
\begin{figure}
\caption{Templates extraction and inference}
\label{fig:estimation1}
\end{figure}
\subsubsection{Sampling the missing data}
The hidden data $\beta_n=(\upsilon_n,\delta_n)$ and $I_n$ are simulated with $t_n$ iterations of the sampling scheme proposed in Section \ref{sec:carlinChib}, where $t_n=200$ if $n\leq 100$ and $t_n=500$ otherwise. This choice of $t_n$ is motivated by the fact that when the templates are not well resolved (which occurs in the early estimates of MCoEM), a rough approximation of the conditional expectation is sufficient. Moreover, a burn-in period of $100$ iterations was applied. Finally, given the high dimension of $\beta_n$, the quasi-Newton optimization method used to estimate $\{\beta_j^{\star}\}_{j=1}^{C}$ in Eq.~\eqref{eq:pseudo_mean} is time-consuming. Therefore, the pseudo-prior parameters are set as the sample mean and covariance matrix derived from $100$ iterations of a random walk targeting the posterior distribution, taking place before the first MCMC iteration.
\begin{figure}
\caption{Sampling missing data $(I[k],\beta_1[k],\ldots,\beta_4[k])\sim\tilde{\pi}$.}
\label{fig:sampling_posterior_digits}
\end{figure}
Figure \ref{fig:sampling_posterior_digits}-(a) shows a realization of the $k=450$-th iteration of the Markov chain $\tK{n}^{\texttt{CC}}$ occurring at the $n=600$-th iteration of the MCoEM algorithm (the index $n$ is omitted hereafter). In this scenario, we aim at extracting $C=4$ templates of the digit $7$ in a partially-supervised setting (see Figure \ref{fig:templates}-(b)). Given $I[k-1]=3$, the auxiliary variables $\{\beta_{j}[k]\}_{j\neq 3}$ are sampled from the linking densities $\{\kappa_{\hat{\theta}_{n-1},j}\}_{j\neq 3}$, while $\beta_{3}[k]$ is simulated with $r=20$ iterations of a Gaussian random walk Metropolis--Hastings algorithm, whose variance is adjusted to obtain an overall acceptance rate of $40\%$ (see \cite{Andrieu:IntroMCMC}). Iterating the Metropolis--Hastings kernel $r$ times speeds up the convergence of the chain without changing its stationary distribution. Despite the rough approximation of the pseudo-prior parameters, Figure \ref{fig:sampling_posterior_digits}-(a) shows that the simulated deformations $\beta_{j}[k]$ are consistent with the observation $\mathbf{Y}_{n}$ for each model $j\in\{1,\ldots,C\}$. As a consequence, the Markov chain $\{I[k],\beta_1[k],\ldots,\beta_C[k]\}_{k>0}$ mixes well; see Figure \ref{fig:sampling_posterior_digits}-(b), which displays the class index samples $\{I[k]\}_{k>0}$ throughout the $t_n=500$ MCMC iterations. An animation of the MCMC sampling scheme can be found online at \url{http://mathsci.ucd.ie/~fmaire/MCoEM/carlinChib.html}.
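The within-model refreshment can be sketched as follows (our own illustrative Python; the stand-in target is an isotropic Gaussian, whereas in the algorithm it would be the conditional posterior $\pi_\theta(\beta\,|\,I,\mathbf{Y})$ up to a constant):

```python
import numpy as np

# Illustrative sketch: refresh the retained parameter with r iterations of a
# Gaussian random-walk Metropolis-Hastings kernel. Each step leaves the target
# invariant, so iterating r times does not change the stationary distribution.

def log_target(beta):                      # stand-in log-posterior (up to a constant)
    return -0.5 * np.sum(beta ** 2)

def rwmh(beta, r=20, step=0.1, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    accepted = 0
    for _ in range(r):
        prop = beta + step * rng.standard_normal(beta.shape)
        # accept with probability min(1, target(prop) / target(beta))
        if np.log(rng.random()) < log_target(prop) - log_target(beta):
            beta, accepted = prop, accepted + 1
    return beta, accepted / r

beta0 = np.zeros(78)                       # dimension of beta_n in the digit model
beta, rate = rwmh(beta0, r=20, step=0.1, rng=np.random.default_rng(2))
```

In practice, `step` is tuned so that the overall acceptance rate is near the $40\%$ reported above.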
\subsubsection{Comparison with SAEM-MCMC}
For conciseness, we will write from now on SAEM instead of SAEM-MCMC for the algorithm formalized by \cite{Kuhn:MCMCSAEM} and applied to perform template estimation in \cite{Allassonniere:Mixture}.
Templates estimated by MCoEM are compared with those obtained by applying SAEM \cite{Allassonniere:Mixture} to the same images, in both setups. In the partially-supervised setup, both algorithms processed the same $n=300$ images for each class of digit during a 10-hour runtime. In the fully-unsupervised approach, MCoEM and SAEM processed the same $n=500$ images ($50$ images of each digit) during a 40-hour runtime experiment. SAEM is a batch stochastic EM algorithm that processes all the data at each iteration. In the mixture of deformable models context, this means that SAEM has to register each observation with the set of templates estimated at each iteration, which generates a significant computational burden. As a consequence, in a 10-hour running time experiment, SAEM could only perform 23 iterations while MCoEM completed nearly 2,000 iterations. Figures \ref{fig:templates_mcoem_saem} and \ref{fig:templates_mcoem_saem_unsup} report the sets of templates extracted by both methods in the two setups.
In the partially-supervised setup, the two sets of estimated templates show similar features (Fig. \ref{fig:templates_mcoem_saem}), highlighting that, in spite of processing the data on the fly, MCoEM yields a stability similar to that of SAEM. From a qualitative perspective, performing far more iterations than SAEM is beneficial for MCoEM, whose templates look much smoother and have a better resolution. An animation of the template estimation in this setup can be found online at \url{http://mathsci.ucd.ie/~fmaire/MCoEM/templates.html}.
The templates estimated by MCoEM and SAEM in the fully-unsupervised setup, implemented with $C=15$ components, are reported in Figure \ref{fig:templates_mcoem_saem_unsup}. The first ten templates are consistent across both algorithms, while the last five templates differ significantly. On the one hand, MCoEM only makes use of 13 of the 15 available classes. The two remaining classes correspond to the 12th and 15th templates in the middle column of Figure \ref{fig:templates_mcoem_saem_unsup}. Figure \ref{fig:weight_est} plots the weight evolution of each class as MCoEM moves forward and shows that those two classes quickly became unused by the algorithm. The weights of the first ten classes are slightly below $1/10$, which is in line with the dataset. On the other hand, SAEM keeps all 15 classes alive throughout the algorithm. In this example, SAEM appears more robust than MCoEM for inferring a mixture model. However, we believe that the stability of MCoEM can be improved by increasing the number of iterations before the first parameter update (only 50 in our simulation), hence avoiding this degeneracy problem. Indeed, from Figure \ref{fig:weight_est} it is clear that those two classes were left empty after the first 50 iterations, paving the way for the pathological effect observed at the subsequent updates.
\begin{figure}
\caption{Templates extracted by MCoEM (b) and SAEM (c) from the same dataset, consisting of $n=300$ handwritten digit images of each type of digit, in a partially-supervised way and during a 10-hour running time experiment. Each model comprises $C=2$ classes. (a) represents the initial templates $\hat{\boldsymbol{\alpha}}_{0}$, drawn at random.}
\label{fig:templates_mcoem_saem}
\end{figure}
\begin{figure}
\caption{Templates extracted by MCoEM (b) and SAEM (c) from the same dataset, consisting of $n=500$ handwritten digit images ($50$ of each type of digit), in a fully-unsupervised way and during a 40-hour running time experiment. The model comprises $C=15$ mixture components. (a) represents the initial templates based on k-means clustering applied to 50 random images. See \url{http://mathsci.ucd.ie/~fmaire/MCoEM/templates.html}.}
\label{fig:templates_mcoem_saem_unsup}
\end{figure}
\begin{figure}
\caption{Evolution of the weight of each of the $C=15$ classes of the fully-unsupervised mixture of templates model inferred by MCoEM. Circled data points correspond to the two classes whose weights vanish.}
\label{fig:weight_est}
\end{figure}
\section{Classification}
\label{sec:classification}
When considering real-time classification applications, the MCoEM methodology may prove more adequate than SAEM: indeed, as soon as the first estimate of $\theta$ is available, a classifier can be implemented. Of course, the rate of correct classification is expected to improve upon random guessing as soon as the templates take shape. Both algorithms produce a sequence of parameter estimates; however, since iterations of MCoEM and SAEM have different complexities, we consider $\hat{\theta}_t$, the parameter estimate after a runtime of $t$ time units, as a fair way to compare the two methods.
Learning the parameters of the mixture of deformable models \eqref{eq:mixtModelVect} allows classification of labeled observations $\{(\mathbf{\tilde{Y}}_1,V_1),\ldots,(\mathbf{\tilde{Y}}_N,V_N)\}$ gathered in a testing dataset. There is no overlap between these testing observations and the data $\{\mathbf{Y}_1,\ldots,\mathbf{Y}_n\}$ processed by the algorithms during the learning phase. Let $\rho_t$ be the \textit{live} error rate at time $t$, defined as the empirical rate of incorrect classification (based on $N=1,000$ testing observations) obtained using the parameters estimated by the algorithms at time $t$:
$$
\rho_t=\frac{1}{N}\sum_{k=1}^N\mathds{1}_{\hat{V}_{k,t}\neq V_k}\,,
$$
where $\hat{V}_{k,t}$ is the class of digit assigned to $\mathbf{\tilde{Y}}_k$ by the classifier using the estimate $\hat{\theta}_t$. In this section, we compare the live error rates on the handwritten digits example, with $V_k\in\{0,\ldots,9\}$ (see Section \ref{subsec:5:2}), based on estimates from MCoEM (processing a new observation at each iteration), SAEM-50 and SAEM-300, \textit{i.e.}\, SAEM using $n=50$ and $n=300$ learning observations, respectively. Both the partially-supervised and fully-unsupervised learning setups are considered.
\subsection{Partially-supervised learning}
In this approach, each type of digit $v\in\{0,\ldots,9\}$ is described at time $t$ by a set of parameters $(\hat{\theta}_{1,t}^{(v)},\ldots,\hat{\theta}_{C,t}^{(v)})$. We used $C=2$ classes per digit in this implementation. The following unnormalized probabilities
\begin{equation}
\label{eq:classif1}
\text{for all }v\in\{0,\ldots,9\},\quad \pi_v(\mathbf{\tilde{Y}}_k,\hat{\theta}_{t})=\sum_{i=1}^{C}\mathbb{E}_{\hat{\theta}_{i,t}^{(v)}}\left[g_{\theta}(\mathbf{\tilde{Y}}_k\,|\,I_k,\mathbf{X}_k)\,|\,\mathbf{\tilde{Y}}_k, I_k=i\right]\,,
\end{equation}
are calculated and the guess $\hat{V}_{k,t}$ is defined as
\begin{equation}
\label{eq:classif1_V}
\hat{V}_{k,t}(\hat{\theta}_t)=\arg\max_{v\in\{0,\ldots,9\}}\pi_v(\mathbf{\tilde{Y}}_k,\hat{\theta}_{t})\,.
\end{equation}
The conditional expectation in \eqref{eq:classif1} is intractable and is approximated by the sample mean of a Metropolis--Hastings Markov chain targeting the posterior distribution of $\mathbf{X}_k$, $\pi(\,\cdot\,|\,\mathbf{\tilde{Y}}_k,I_k=i)$.
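The resulting decision rule can be sketched as follows (our own illustrative Python; the likelihood $g$ and the posterior draws are toy Gaussian stand-ins, since a self-contained example cannot reproduce the deformable-template likelihood or its MCMC sampler):

```python
import numpy as np

# Illustrative sketch of the classifier: approximate each unnormalized score
# pi_v by a sample mean over draws of the latent variable, then take the argmax.

rng = np.random.default_rng(4)

def g(y, x, mu):
    """Toy conditional likelihood of y given latent x, for a class with mean mu."""
    return np.exp(-0.5 * np.sum((y - mu - x) ** 2))

def classify(y, class_means, n_draws=200):
    scores = []
    for mu in class_means:                             # one component per class here
        xs = 0.1 * rng.standard_normal((n_draws, y.size))  # stand-in for MCMC draws
        scores.append(np.mean([g(y, x, mu) for x in xs]))  # Monte Carlo expectation
    return int(np.argmax(scores))

means = [np.zeros(2), np.array([3.0, 3.0])]            # two well-separated toy classes
v_hat = classify(np.array([2.9, 3.1]), means)          # assigned to the second class
```

In the paper's setting, the inner loop would sum the per-component Monte Carlo estimates over all components $i$ attached to digit $v$, exactly as in \eqref{eq:classif1}.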
\begin{figure}
\caption{Live error rate for MCoEM, SAEM-50 and SAEM-300, applied in the partially-supervised setup. For each algorithm, a ball at time $t$ represents the error rate $\rho_t$, obtained by comparing the estimated classes $\hat{V}_{k,t}$ with the true classes $V_k$.}
\label{fig:classif_partially}
\end{figure}
Figure \ref{fig:classif_partially} reports the live error rate for the three algorithms applied in the partially-supervised setup. For each algorithm, the error rate at $t=0$ is $\rho_0=.9$, since the initial templates are uninformative (see Figure \ref{fig:templates_mcoem_saem}-(a)). The second left-most ball corresponds to the error rate using the first estimate produced by each algorithm, and the time per iteration of each algorithm is reported in Table \ref{tab:classRunTime}. Note that, for numerical stability, MCoEM's first parameter update occurred after 10 iterations, the second after 15 iterations, and then at each iteration from the 20-th iteration onwards. SAEM yields a large error rate when learning from only $n=50$ observations. The error rate is significantly reduced when using $n=300$ observations, but this improvement comes at the price of a prohibitively large computational cost: the first estimate produced by SAEM-300 is available only after $t_1=1,570$ s. At this time, MCoEM has already performed nearly 80 iterations and exhibits a significantly lower live error rate: approximately $\rho_{t_1}=.24$ for MCoEM versus $\rho_{t_1}=.34$ for SAEM-300. MCoEM thus strikes a successful tradeoff between SAEM with small $n$, which allows quick estimates but a poor error rate, and SAEM with larger $n$, which allows a lower error rate but slower estimation. In addition, since MCoEM makes use of new data at each iteration, its live error rate is expected to keep decreasing, while SAEM's error rate, using a fixed dataset, seems to flatten once convergence of the parameters is reached.
\begin{table}
\centering
\caption{CPU time of an iteration of MCoEM and SAEM with $n=50$ and $n=300$ and $C=2$ mixture components.\label{tab:classRunTime}}
\begin{tabular}{c|c|c|c}
& MCoEM & SAEM-50 & SAEM-300 \\
\hline
CPU time / iteration (s) & 20 & 225 & 1,570\\
\end{tabular}
\end{table}
\subsection{Fully-unsupervised learning}
\label{sec:classification:2}
Since in this learning setup an object (\textit{e.g.}\, a digit nine) may be described by several templates (3 in the simulation of Figure \ref{fig:templates}-(c) and 2 in that of Figure \ref{fig:templates_mcoem_saem_unsup}-(b)), an intermediate layer of classes is required. Based on this observation, an external agent must specify the mapping $M:\mathbb{I}\to\{0,1,\ldots,9\}$ that links each class designed by MCoEM/SAEM to the object it describes. Classification is then carried out as in the partially-supervised setup. More precisely, given the estimate $\hat{\theta}_t$, the following unnormalized probabilities
\begin{equation}
\label{eq:classif2}
\text{for all }v\in\{0,\ldots,9\},\quad \pi_v(\mathbf{\tilde{Y}}_k,\hat{\theta}_t)=\sum_{i\in M(v)}\mathbb{E}_{\hat{\theta}_{i,t}}\left[g_{\theta}(\mathbf{\tilde{Y}}_k\,|\,I_k,\mathbf{X}_k)\,|\,\mathbf{\tilde{Y}}_k,I_k=i\right]\,,
\end{equation}
are approximated by an MCMC estimate and the guess $\hat{V}_{k,t}$ is derived as in \eqref{eq:classif1_V}.
Figure \ref{fig:classif_unsup} reports the live error rate of MCoEM and SAEM-500 in the fully-unsupervised setup.
At time $t=0$, the error rate $\rho_0$ is .65 and not .9 as in the previous setup. This is because the initial templates are derived from k-means centroids based on the same $n=50$ data (see Section \ref{sec:5:2:2}) and are thus no longer uninformative. SAEM is clearly penalized by processing $n=500$ observations and estimating $C=15$ classes of parameters: it takes nearly 5 hours of computation to obtain the first SAEM estimate. Interestingly, SAEM's first estimate is nearly as ``good'', in the error rate sense, as the MCoEM estimate obtained after 5 hours. Nevertheless, MCoEM offers a practitioner the possibility to classify new observations much more quickly.
\begin{figure}
\caption{Live error rate for MCoEM and SAEM-500, applied in the fully-unsupervised setup. Dashed lines are used for readability only and do not convey any error rate outside the balls.}
\label{fig:classif_unsup}
\end{figure}
\begin{table}
\centering
\caption{CPU time of an iteration of MCoEM and SAEM with $n=500$ and $C=15$ mixture components.\label{tab:classRunTime_unsup}}
\begin{tabular}{c|c|c}
& MCoEM & SAEM-500 \\
\hline
CPU time / iteration (s) & 170 & 17,570\\
\end{tabular}
\end{table}
\section{Discussion}
\label{sec:conclusion}
We have proposed a statistical framework to perform sequential and unsupervised inference in a deformable template model, with application to curve synchronization and to shape extraction and registration. It makes use of the Monte Carlo online EM algorithm (MCoEM), derived from \cite{Cappe:Online}, and of a novel MCMC sampling method, based on the Carlin and Chib sampler \cite{Carlin:Bayesian}, which allows sampling from the otherwise intractable joint distribution of the cluster index and deformation parameters. The method has been applied successfully to extract reference templates from several data sets featuring high temporal/geometric dispersion.
Our work was primarily motivated by the computational gain arising from processing one observation at a time. Indeed, when the missing data is a large vector and many observations are available, stochastic batch EM algorithms such as SAEM \cite{Delyon:ConvSAEM} are prohibitively slow for practical use. This has been illustrated with the classification problem (Section \ref{sec:classification:2}), in which SAEM's error rate after nearly 5 hours of computation is still at the initial level. In comparison, it took MCoEM less than 20 minutes to reach less than half the initial error rate. From this perspective, MCoEM can be regarded as a linearization of stochastic batch EM algorithms, which is particularly appealing in a Big Data context.
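To illustrate the recursion underlying this linearization, here is a minimal Python sketch of an online-EM-style stochastic-approximation update for a toy model (estimating a Gaussian mean, where the complete-data sufficient statistic is directly observable). In MCoEM the E-step below would be replaced by an MCMC estimate over the missing data; the function names and the step-size schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def online_em_mean(stream, step=lambda t: 1.0 / (t + 1) ** 0.6):
    """Online EM recursion (in the spirit of Cappe & Moulines) for a toy
    model: one observation per iteration, Robbins-Monro step sizes."""
    s = 0.0                       # running sufficient statistic
    for t, y in enumerate(stream):
        s_hat = y                 # E-step (trivial here; Monte Carlo in MCoEM)
        s = (1 - step(t)) * s + step(t) * s_hat   # stochastic approximation
    return s                      # M-step: theta = s for this toy model

data = rng.normal(loc=2.0, scale=1.0, size=5000)
print(online_em_mean(data))       # close to the true mean 2.0
```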
In terms of implementation, the main concern when inferring a mixture model with MCoEM is class degeneracy. To mitigate this risk, two points have been discussed. First, particular care should be given to the way initial parameters are set, especially the templates. We have suggested using k-means clustering on a limited set of observations to initialize the templates. Second, the number of EM iterations between the first parameter updates should be large enough to assign at least one observation to each class. Adaptive implementations have not been considered but could yield an automated update schedule.
The handwritten digit example studied in this paper suggests that MCoEM inherits SAEM's asymptotic behaviour. Indeed, (i) qualitatively, the template shapes extracted by both algorithms are similar and (ii) quantitatively, the error rates are comparable. This result calls for further investigation, as a theoretical framework is yet to be developed to establish the convergence of MCoEM. Both the SAEM and online EM proofs of convergence rely on stochastic approximation arguments. However, those proofs cannot be straightforwardly extended to MCoEM since it combines two approximations: one of the conditional expectation (as in SAEM) and one of the data generating process (as in the online EM). We therefore leave this for future work. An interesting question is to assess to what extent the convergence rate of the online EM \cite{Cappe:Online}, known to be optimal, is degraded when the expectation of the sufficient statistics is replaced by an unbiased estimate.
\section*{Acknowledgments}
\noindent This work has been supported by ONERA, the French Aerospace Lab, and by the DGA, the French Procurement Agency.
\end{document}
\begin{document}
\title{Error models for mode-mismatch in linear optics quantum computing}
\author{Peter P. Rohde}
\email[]{[email protected]}
\homepage{http://www.physics.uq.edu.au/people/rohde/}
\author{Timothy C. Ralph}
\affiliation{Centre for Quantum Computer Technology, Department of Physics\\ University of Queensland, Brisbane, QLD 4072, Australia}
\date{\today}
\begin{abstract}
One of the most significant challenges facing the development of linear optics quantum computing (LOQC) is mode-mismatch, whereby photon distinguishability is introduced within circuits, undermining quantum interference effects. We examine the effects of mode-mismatch on the parity (or fusion) gate, the fundamental building block in several recent LOQC schemes. We derive simple error models for the effects of mode-mismatch on its operation, and relate these error models to current fault tolerant threshold estimates.
\end{abstract}
\pacs{03.67.Lx,42.50.-p}
\maketitle
\section{Introduction}
Linear optics quantum computing (LOQC), as it was originally proposed \cite{bib:KLM01}, suffered the problem of unfavorable scaling in physical resource requirements. Recently, several proposals have been made which significantly reduce these requirements. Most notably, schemes employing cluster states \cite{bib:Raussendorf01,bib:Raussendorf03,bib:Nielsen04,bib:BrowneRudolph05,bib:Varnava05} and parity states \cite{bib:GilchristHayes05,bib:RalphHayes05} have been suggested. The fundamental building block of many such schemes is the parity gate \cite{bib:Weinfurter94,bib:BraunsteinMann95} (also referred to as the type-II fusion gate \cite{bib:BrowneRudolph05}), which projects an incident two-photon state into the even- or odd-parity sub-space.
The parity gate is implemented physically using a polarizing beamsplitter (PBS) and post-selection, described in Fig.~\ref{fig:PBS_measurement}. Parity measurement has many applications and, for example, forms the basis of the linear optics controlled-{\sc NOT} ({\sc CNOT}) gate described in Ref.~\cite{bib:Pittman01}, the entanglement purification protocol of Ref.~\cite{bib:Pan01}, the cluster state LOQC scheme of Ref.~\cite{bib:BrowneRudolph05} and the parity encoded LOQC scheme of Refs.~\cite{bib:RalphHayes05,bib:GilchristHayes05}.
\begin{figure}
\caption{Parity measurement using a PBS, which completely transmits horizontally polarized and completely reflects vertically polarized photons. Upon post-selection on detecting exactly one photon at each beamsplitter output, only the $\ket{HH}$ and $\ket{VV}$ terms survive.}
\label{fig:PBS_measurement}
\end{figure}
One of the most significant challenges facing the experimental realization of LOQC circuits is mode-mismatch \cite{bib:RohdeRalph05,bib:RohdePryde05,bib:RohdeRalph05b}, whereby photon indistinguishability is compromised within a circuit, undermining the desired quantum interference effects. In this paper we consider the effects of mode-mismatch on the parity gate and derive a general error model describing these effects. We apply this model specifically to the cluster state approach to LOQC. Our results suggest that in this context mode-mismatch can be tolerated using existing fault tolerance techniques for dealing with general depolarizing noise. We relate physical parameters, such as the degree of mode-mismatch and photo-detector characteristics, to current fault tolerance threshold estimates.
\section{General error model}
Consider an arbitrary $n$-qubit, polarization encoded state. This can be expressed generally in the form
\begin{eqnarray}
\ket{\psi}=\sum_{i_1,\dots,i_n\in\{H,V\}}\lambda_{i_1,\dots,i_n}\ket{i_1}\dots\ket{i_n}
\end{eqnarray}
Expanding around the first two qubits, an equivalent expression is
\begin{eqnarray} \label{eq:state_rep}
\ket{\psi}&=&\alpha_{HH}\ket{HH}\ket{\phi_{HH}}+\alpha_{HV}\ket{HV}\ket{\phi_{HV}}\nonumber\\
&+&\alpha_{VH}\ket{VH}\ket{\phi_{VH}}+\alpha_{VV}\ket{VV}\ket{\phi_{VV}}
\end{eqnarray}
where the $\alpha_{xy}$ coefficients denote the amplitude of the corresponding terms and $\ket{\phi_{xy}}$ the state of the rest of the system for the respective state of the first two qubits.
We wish to perform the parity gate between the two factored qubits. We express these qubits in terms of their temporal wave-functions, $\psi_A(t)$ and $\psi_B(t)$,
\begin{widetext}
\begin{eqnarray}
\ket{\psi}&=&\alpha_{HH}\left(\int_{-\infty}^{\infty}\psi_A(t)\hat{a}^\dag_H(t)\,\mathrm{d}t\ket{0}\right)\left(\int_{-\infty}^{\infty}\psi_B(t)\hat{b}^\dag_H(t)\,\mathrm{d}t\ket{0}\right)\ket{\phi_{HH}}\nonumber\\
&+&\alpha_{HV}\left(\int_{-\infty}^{\infty}\psi_A(t)\hat{a}^\dag_H(t)\,\mathrm{d}t\ket{0}\right)\left(\int_{-\infty}^{\infty}\psi_B(t)\hat{b}^\dag_V(t)\,\mathrm{d}t\ket{0}\right)\ket{\phi_{HV}}\nonumber\\
&+&\alpha_{VH}\left(\int_{-\infty}^{\infty}\psi_A(t)\hat{a}^\dag_V(t)\,\mathrm{d}t\ket{0}\right)\left(\int_{-\infty}^{\infty}\psi_B(t)\hat{b}^\dag_H(t)\,\mathrm{d}t\ket{0}\right)\ket{\phi_{VH}}\nonumber\\
&+&\alpha_{VV}\left(\int_{-\infty}^{\infty}\psi_A(t)\hat{a}^\dag_V(t)\,\mathrm{d}t\ket{0}\right)\left(\int_{-\infty}^{\infty}\psi_B(t)\hat{b}^\dag_V(t)\,\mathrm{d}t\ket{0}\right)\ket{\phi_{VV}}
\end{eqnarray}
\end{widetext}
where $\hat{a}^\dag(t)$ and $\hat{b}^\dag(t)$ are the time-specific creation operators for the first two photons. Note that while we specifically make reference to temporal wave-functions, the same arguments hold in any photonic degree of freedom, such as spatial or spectral.
Next we apply the parity gate between qubits $A$ and $B$ and post-select upon detecting exactly one photon at each beamsplitter output, the required \emph{success} signature. Measurements are modeled using the photo-detector model described in Ref.~\cite{bib:RohdeRalph05c}. In this model, photo-detectors are characterized by two parameters -- their \emph{resolution} ($\delta$) and \emph{bandwidth} ($\Delta$). The resolution characterizes the spectral uncertainty in a measurement event and the detector is unable to distinguish between spectral components within this range. The bandwidth characterizes the total range of frequencies the detector responds to. See Ref.~\cite{bib:RohdeRalph05c} for a complete description and physical motivation. Based on this model, each measurement can be expressed generally in the form
\begin{widetext}
\begin{equation}
\hat\rho_\mathrm{measured}=\mathrm{tr}_D\left[\int_{-\Delta}^{\Delta}\left(\int_{\omega_0-\delta}^{\omega_0+\delta}\ket{\omega}_D\bra{\omega}_D\,\mathrm{d}\omega\right)\hat\rho_\mathrm{in}\left(\int_{\omega_0-\delta}^{\omega_0+\delta}\ket{\omega}_D\bra{\omega}_D\,\mathrm{d}\omega\right)\mathrm{d}\omega_0\right]
\end{equation}
\end{widetext}
where $\ket{\omega}_D\bra{\omega}_D$ is the projector onto the frequency eigenstate $\omega$, acting on photon $D$, the one being detected. $\hat\rho_\mathrm{in}$ is the incident state and $\hat\rho_\mathrm{measured}$ is the state following photo-detection.
\section{Error model for the parity gate}
In the case of parity measurement the output state can be expressed in the form
\begin{eqnarray}
f_{II}\ket{\psi}&=&|\alpha_{HH}|^2\ket{\phi_{HH}}\bra{\phi_{HH}}\nonumber\\
&+&\gamma\alpha_{HH}\alpha_{VV}^*\ket{\phi_{HH}}\bra{\phi_{VV}}\nonumber\\
&+&\gamma\alpha_{VV}\alpha_{HH}^*\ket{\phi_{VV}}\bra{\phi_{HH}}\nonumber\\
&+&|\alpha_{VV}|^2\ket{\phi_{VV}}\bra{\phi_{VV}}
\end{eqnarray}
where $f_{II}$ denotes the parity gate operation and normalization factors have been omitted for simplicity. $\gamma$ is a coherence parameter, and is non-trivially related to the detectors' resolution and bandwidth, as well as the integral overlap of the interacting photons' wave-functions which characterizes the degree of mode-mismatch between them. $\gamma$ obeys $0\leq\gamma\leq1$, where $\gamma=1$ corresponds to complete photon indistinguishability, the ideal case, and $\gamma=0$ to complete photon distinguishability.
In the case of ideal photo-detectors (\emph{i.e.} infinite bandwidth and zero resolution, see Ref.~\cite{bib:RohdeRalph05c}), $\gamma$ is equal to the integral overlap of the interacting photons,
\begin{equation}
\gamma_\mathrm{ideal}=\left|\int_{-\infty}^\infty\psi_A(\omega)\psi_B^*(\omega)\,\mathrm{d}\omega\right|^2
\end{equation}
and is related to the Hong-Ou-Mandel (HOM) \cite{bib:HOM87} visibility \footnote{HOM interference is a physically different scenario, where two photons interact on a 50/50 beamsplitter rather than a PBS. We include this relationship because HOM visibility is a commonly quoted measure of mode-mismatch and therefore gives some insight into what values for $\gamma$ are realistically achievable.} by
\begin{equation}
\gamma_\mathrm{ideal}=\frac{2V}{1+V}
\end{equation}
Ideally, the output state is the coherent superposition
\begin{equation} \label{eq:psi_ideal}
\ket{\psi_\mathrm{ideal}}=\alpha_{HH}\ket{\phi_{HH}}+\alpha_{VV}\ket{\phi_{VV}}
\end{equation}
In the presence of mode-mismatch, $\gamma<1$, the output state decoheres into a mixture of this state and the corresponding negative superposition
\begin{equation} \label{eq:psi_error}
\ket{\psi_\mathrm{error}}=\alpha_{HH}\ket{\phi_{HH}}-\alpha_{VV}\ket{\phi_{VV}}
\end{equation}
where the degree of decoherence is related to $\gamma$. The output state can be expressed in the form
\begin{equation} \label{eq:parity_error_model}
f_{II}\ket{\psi}=(1-p_\mathrm{error})\ket{\psi_\mathrm{ideal}}\bra{\psi_\mathrm{ideal}}+p_\mathrm{error}\ket{\psi_\mathrm{error}}\bra{\psi_\mathrm{error}}
\end{equation}
where $p_\mathrm{error}$ is the error probability. This error model can be understood intuitively according to Fig.~\ref{fig:PBS_measurement_MM}.
\begin{figure}
\caption{Parity measurement in the presence of mode-mismatch (graphically represented as a temporal displacement in one of the incident photons). Now the detection process reveals \emph{some} distinguishing information about the incident photons, partially decohering the post-selected state.}
\label{fig:PBS_measurement_MM}
\end{figure}
$p_\mathrm{error}$ can be expressed in terms of $\gamma$,
\begin{equation}
p_\mathrm{error}=\frac{1-\gamma}{2}
\end{equation}
The relationship between the degree of mode-mismatch, detector characteristics and error probability is shown in Fig.~\ref{fig:p_delta}. We assume photons have transform-limited Gaussian temporal wave-packets. Fig.~\ref{fig:p_delta} indicates that if error rates are to be minimized, photo-detector bandwidth and resolution ought to be kept as small as possible. This can be achieved through narrowband filtering, a technique currently employed in many coincidence-type experiments. However, it should be noted that employing such filtering reduces the overall success probability of the gate. In schemes where the gate is used to progressively construct resource states, this has the effect of incurring a polynomial overhead in resource requirements.
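As a concrete numerical illustration (mine, not from the paper), the sketch below assumes transform-limited Gaussian temporal wave-packets $\psi(t)\propto e^{-t^2/4\sigma^2}$ displaced by $\tau$, for which the ideal-detector overlap evaluates analytically to $\gamma=e^{-\tau^2/4\sigma^2}$, and combines this with the visibility relation $\gamma=2V/(1+V)$ and the error probability $p_\mathrm{error}=(1-\gamma)/2$ given above.

```python
import numpy as np

def gamma_ideal(tau, sigma):
    """Coherence parameter for two transform-limited Gaussian wave-packets
    of width sigma, temporally displaced by tau, with ideal detectors
    (analytic value of the overlap integral under this width convention)."""
    return np.exp(-tau**2 / (4 * sigma**2))

def gamma_from_visibility(V):
    """gamma expressed through the HOM visibility: gamma = 2V/(1+V)."""
    return 2 * V / (1 + V)

def p_error(gamma):
    """Probability of projecting onto the error state."""
    return (1 - gamma) / 2

print(p_error(gamma_ideal(0.0, 1.0)))             # perfect overlap -> 0.0
print(round(p_error(gamma_ideal(1.0, 1.0)), 3))   # ~0.111
```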
In Fig.~\ref{fig:int_det} we consider the limits of \emph{frequency-integrated} and \emph{time-integrated} detection, where the photo-detectors' resolution and bandwidth in the respective domains are assumed to be infinite.
\begin{figure*}
\caption{Error probability ($p_\mathrm{error}$) as a function of the degree of mode-mismatch and detector characteristics.}
\label{fig:p_delta}
\end{figure*}
\begin{figure}
\caption{Error probability ($p_\mathrm{error}$) in the limits of frequency-integrated and time-integrated detection.}
\label{fig:int_det}
\end{figure}
Since the parity gate operates non-deterministically, it is necessary to consider its behavior upon failure. Failure occurs when two photons are detected at one beamsplitter output port. Upon failure the $\ket{HV}$ and $\ket{VH}$ cases can be distinguished based on where the photons were detected. For example, if both photons are measured in the upper mode, the incident state must have been $\ket{VH}$. Thus, upon detecting both photons at one of the output ports, we have effectively performed a $Z$-measurement on both photons. The projection performed upon failure is unaffected by mode-mismatch, since we already have complete \emph{which-path} information, and mode-mismatch does not provide any additional distinguishing information.
The behavior of the parity gate can be subtly modified through the application of single qubit rotations prior to the gate. For example, in the type-II fusion gate described in Ref.~\cite{bib:BrowneRudolph05} both incident photons are first rotated by $45^\circ$. This has the effect of transforming the error model of Eqs.~\ref{eq:psi_ideal} and \ref{eq:psi_error} to
\begin{eqnarray}
\ket{\psi_\mathrm{ideal}}&=&\alpha_{HH}\ket{\phi_{HH}}+\alpha_{VV}\ket{\phi_{VV}}\nonumber\\
\ket{\psi_\mathrm{error}}&=&\alpha_{HV}\ket{\phi_{HV}}+\alpha_{VH}\ket{\phi_{VH}}
\end{eqnarray}
Thus, the error is no longer a phase error, but rather manifests itself as a probability of projecting into the wrong parity sub-space. Furthermore, upon failure the gate now performs an $X$-measurement (\emph{i.e.} in the $+/-$ basis) instead of a $Z$-measurement.
\begin{figure*}
\caption{Error probability ($p_\mathrm{error}$).}
\label{fig:p_delta_zoom}
\end{figure*}
\section{The cluster state model for quantum computation}
The standard model for quantum computation is closely analogous to our understanding of classical circuits -- we prepare an input state, apply a series of gates, and measure the outputs. The cluster state \cite{bib:Raussendorf01,bib:Raussendorf03} model provides us with a completely different, yet computationally equivalent, model for quantum computing. We begin by preparing a maximally entangled, multi-qubit state, known as a \emph{cluster state}. Once a cluster state has been prepared, an arbitrary algorithm can be implemented by performing a sequence of single qubit measurements, which are trivial in an optical scenario. The order of these measurements and the choice of measurement bases determines the algorithm. Thus, cluster states act as a resource for universal quantum computation.
A cluster state can be represented as a graph. Nodes represent qubits initially prepared in the \mbox{$\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$} state. Edges between nodes represent the application of controlled-sign ({\sc CZ}) gates between the respective qubits.
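This node/edge prescription is easy to state in code. The following Python sketch (illustrative only, not from the paper) builds the state vector of a small cluster by starting every qubit in $\ket{+}$ and applying a {\sc CZ} phase across each edge of the graph.

```python
import numpy as np

def cluster_state(n_qubits, edges):
    """State vector of the cluster state for a given graph: every node
    starts in |+>, and a controlled-sign (CZ) gate acts across each edge."""
    dim = 2 ** n_qubits
    psi = np.ones(dim) / np.sqrt(dim)        # |+> on every qubit
    for a, b in edges:
        for idx in range(dim):
            if (idx >> a) & 1 and (idx >> b) & 1:
                psi[idx] *= -1               # CZ: phase -1 when both bits are 1
    return psi

# Two-qubit cluster (a single edge): (|00> + |01> + |10> - |11>)/2
print(cluster_state(2, [(0, 1)]))
```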
The cluster state model is particularly useful in the optical scenario, since it provides a means for performing LOQC far more efficiently \cite{bib:Nielsen04} than previous proposals. This is achieved by using non-deterministic {\sc CZ} gates to probabilistically produce a resource of small \emph{micro-clusters}. Larger clusters are constructed by progressively fusing micro-clusters onto the main cluster. When this fails the qubits being fused together are removed. When it succeeds we have successfully \emph{grown} the cluster. By making the micro-clusters sufficiently large we can ensure that \emph{on average} the cluster grows as we repeat this process. Thus, we can grow arbitrarily large cluster states using physical resources which grow polynomially with the size of the final cluster.
\section{The redundantly-encoded cluster state scheme}
We specifically consider the scheme described in Ref.~\cite{bib:BrowneRudolph05}, whereby each logical qubit is encoded using a redundant array of physical qubits. Specifically, $\ket{0}_L\equiv\ket{H}^{\otimes n}$ and $\ket{1}_L\equiv\ket{V}^{\otimes n}$, where $n$ is the level of encoding. We assume a resource of such states is available and therefore restrict ourselves to considering the errors introduced during the fusion processes. The {\sc CZ} gates are applied between one physical qubit from each logical qubit, referred to as the \emph{detachable} qubits. Because the physical qubits within each logical qubit are correlated, this is equivalent to performing the {\sc CZ} gate between the logical qubits. Destructive {\sc CZ} gates can be implemented by applying a Hadamard gate to one qubit and applying a parity gate between them. Because we are utilizing redundant encoding, performing destructive {\sc CZ} gates does not destroy the logical qubits, but reduces their level of encoding by one, as shown in Fig.~\ref{fig:redundant_enc}.
\begin{figure}
\caption{The redundantly encoded cluster state scheme. Large circles represent logical cluster qubits, while smaller circles represent redundant physical qubits. A destructive {\sc CZ} gate between the detachable qubits reduces the level of encoding of each logical qubit by one.}
\label{fig:redundant_enc}
\end{figure}
We now show this in detail. Consider a completely general state where we have factored out the logical qubits being fused,
\begin{eqnarray}
\ket{\psi}&=&\alpha_{HH}\ket{H}^{\otimes n}\ket{H}^{\otimes n}\ket{\phi_{HH}}\nonumber\\
&+&\alpha_{HV}\ket{H}^{\otimes n}\ket{V}^{\otimes n}\ket{\phi_{HV}}\nonumber\\
&+&\alpha_{VH}\ket{V}^{\otimes n}\ket{H}^{\otimes n}\ket{\phi_{VH}}\nonumber\\
&+&\alpha_{VV}\ket{V}^{\otimes n}\ket{V}^{\otimes n}\ket{\phi_{VV}}
\end{eqnarray}
Factorizing the detachable qubits and applying a Hadamard gate to the first qubit we obtain
\begin{eqnarray}
(H\otimes I)\ket{\psi}&=&\left[\alpha_{HH}\ket{H}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{HH}}\right]\ket{H}\ket{H}\nonumber\\
&+&\left[\alpha_{HH}\ket{H}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{HH}}\right]\ket{V}\ket{H}\nonumber\\
&+&\left[\alpha_{HV}\ket{H}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{HV}}\right]\ket{H}\ket{V}\nonumber\\
&+&\left[\alpha_{HV}\ket{H}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{HV}}\right]\ket{V}\ket{V}\nonumber\\
&+&\left[\alpha_{VH}\ket{V}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{VH}}\right]\ket{H}\ket{H}\nonumber\\
&-&\left[\alpha_{VH}\ket{V}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{VH}}\right]\ket{V}\ket{H}\nonumber\\
&+&\left[\alpha_{VV}\ket{V}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{VV}}\right]\ket{H}\ket{V}\nonumber\\
&-&\left[\alpha_{VV}\ket{V}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{VV}}\right]\ket{V}\ket{V}
\end{eqnarray}
Following the parity gate we are left with
\begin{eqnarray} \label{eq:psi_f2_ideal}
f_{II}(H\otimes I)\ket{\psi}&=&\alpha_{HH}\ket{H}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{HH}}\nonumber\\
&+&\alpha_{HV}\ket{H}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{HV}}\nonumber\\
&+&\alpha_{VH}\ket{V}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{VH}}\nonumber\\
&-&\alpha_{VV}\ket{V}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{VV}}
\end{eqnarray}
which is equivalent to the application of a {\sc CZ} gate between the factored logical qubits.
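This equivalence can be checked numerically for the smallest nontrivial case, $n=2$. The Python sketch below (an illustrative check of mine, not the paper's formalism) prepares the redundantly encoded state, applies a Hadamard to one detachable qubit, coherently keeps the components in which the detachable pair has equal polarization, as in the ideal ($\gamma=1$) parity gate, and verifies that the surviving qubits carry the sign pattern of a logical {\sc CZ}.

```python
import numpy as np

H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# n = 2 redundant encoding: |0_L> = |HH>, |1_L> = |VV>.  Physical qubits are
# ordered (q0 q1 | q2 q3), q0 most significant; q1 and q2 are the detachable
# pair.  alpha holds (alpha_HH, alpha_HV, alpha_VH, alpha_VV).
alpha = np.array([0.5, 0.5j, -0.5, 0.5])
psi = np.zeros(16, dtype=complex)
for k, (x, y) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    psi[12 * x + 3 * y] = alpha[k]          # basis index of |x x y y>

psi = kron_all(np.eye(2), H_gate, np.eye(2), np.eye(2)) @ psi  # H on q1

# Ideal (gamma = 1) parity gate on (q1, q2): coherently keep the components
# with q1 = q2, discard the detected pair, and retain qubits (q0, q3).
out = np.zeros(4, dtype=complex)
for x in range(2):
    for y in range(2):
        out[2 * x + y] = psi[8 * x + 4 * y + 2 * y + y]   # q1 = q2 = y
out /= np.linalg.norm(out)

# A logical CZ flips the sign of the alpha_VV component only.
expected = np.array([0.5, 0.5j, -0.5, -0.5])
print(np.allclose(out, expected))   # True
```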
\section{Error model for cluster states}
We now consider the application of the parity gate error model to the redundantly encoded cluster state scheme. In the presence of mode-mismatch the gate has a probability of projecting into the wrong parity sub-space. Therefore,
\begin{eqnarray}
\ket{\psi_\mathrm{error}}&=&\left[\alpha_{HH}\ket{H}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{HH}}\right]\nonumber\\
&+&\left[\alpha_{HV}\ket{H}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{HV}}\right]\nonumber\\
&-&\left[\alpha_{VH}\ket{V}^{\otimes n-1}\ket{H}^{\otimes n-1}\ket{\phi_{VH}}\right]\nonumber\\
&+&\left[\alpha_{VV}\ket{V}^{\otimes n-1}\ket{V}^{\otimes n-1}\ket{\phi_{VV}}\right]
\end{eqnarray}
which differs from the ideal case (Eq.~\ref{eq:psi_f2_ideal}) through the application of a phase-flip to the first logical qubit. Thus, following fusion the state can be expressed
\begin{equation}
f_{II}(H\otimes I)\ket{\psi}=(1-p_\mathrm{error})\ket{C}\bra{C}+p_\mathrm{error}\hat{Z}_i\ket{C}\bra{C}\hat{Z}_i
\end{equation}
where $\ket{C}$ is the desired cluster state and $i$ denotes the fused qubit. This is simply a dephasing error model, as shown in Fig.~\ref{fig:cluster_fig}. It has been shown that quantum error correction is possible for such error models \cite{bib:NielsenDawson04}. Fault tolerant thresholds for a full Pauli error model with loss on cluster states have been estimated to be on the order of $10^{-4}$ \cite{bib:Dawson05}. Dephasing is a subset of this error model and can therefore be corrected for in principle. As before, achieving error probabilities within this threshold is possible, assuming sufficient control over detector characteristics and filtering. Once again, this comes at the expense of success probability, which incurs a polynomial physical resource overhead.
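The dephasing channel above is straightforward to simulate. A minimal Python sketch (illustrative; the operator and variable names are my own) applies the error model to a density matrix and shows that a qubit's off-diagonal coherence shrinks by the factor $(1-2p_\mathrm{error})$.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def z_on_qubit(i, n):
    """Pauli Z on qubit i of an n-qubit register (identity elsewhere)."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, Z if q == i else np.eye(2))
    return op

def fusion_dephasing(rho, p_error, i, n):
    """Error model of the fusion step: with probability p_error a
    phase-flip hits the fused qubit i of the n-qubit cluster state."""
    Zi = z_on_qubit(i, n)
    return (1 - p_error) * rho + p_error * (Zi @ rho @ Zi)

# Example: dephasing |+><+| on a single qubit shrinks the off-diagonal
# coherence by (1 - 2 p_error) while the populations are untouched.
plus = np.full((2, 2), 0.5)
rho = fusion_dephasing(plus, 0.1, 0, 1)
print(rho[0, 1])   # 0.5 * (1 - 0.2) = 0.4
```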
\begin{figure}
\caption{Error model for the fusion gate in the construction of redundantly encoded cluster states. When the parity gate succeeds the two clusters are joined together. With some probability a $Z$-error will be introduced onto one of the fused qubits. When the parity gate fails the fused qubits are removed from the clusters.}
\label{fig:cluster_fig}
\end{figure}
Upon gate failure, both physical qubits are effectively measured in the computational basis, which removes the respective logical qubits from the cluster but does not destroy the remainder of the cluster. Because our model employs non-ideal detectors, which have finite bandwidth, it is also possible that fewer than two photons in total are detected at the output ports of the parity gate. This is equivalent to photon loss. When this happens the affected logical qubits are irrecoverably destroyed. The remainder of the cluster can be recovered by measuring all neighboring qubits in the computational basis. In terms of the physical resource overhead, this is clearly more costly than a standard gate failure. However, the overhead is nonetheless polynomial, and scalable quantum computation is still possible in principle.
\section{Conclusion}
We constructed an error model for mode-mismatch in the parity gate, which forms the basis of several recent proposals for scalable linear optics quantum computing and other quantum optics experiments. This model was applied to the cluster state model for quantum computing. We related our results to current estimates for fault tolerant thresholds and found that mode-mismatch can be tolerated using existing quantum error correction techniques, assuming sufficient control over photo-detector characteristics and filtering. This comes at the expense of success probability, which affects the overall scaling of such schemes. However, the scaling of these schemes is polynomial with failure rate, and therefore in principle does not inhibit scalable linear optics quantum computing. While we specifically applied our model to a cluster state approach for LOQC, our model could easily be applied to other proposals where the parity gate is the fundamental building block.
\begin{acknowledgments}
We thank Daniel E. Browne, Terry Rudolph and Henry L. Haselgrove for helpful discussions. This work was supported by the Australian Research Council and the QLD State Government.
\end{acknowledgments}
\end{document}
\begin{document}
\title[Propagation of a probe pulse inside a BEC under conditions of
EIT]{Propagation of a probe pulse inside a Bose-Einstein condensate
under conditions of electromagnetically induced transparency}
\author{Pablo Barberis-Blostein}
\address{Instituto de
Investigaciones en Matem\'aticas Aplicadas y en Sistemas,
Universidad Nacional Aut\'onoma de M\'exico, Circuito Escolar s/n
Ciudad Universitaria
C.P. 04510
M\'exico, D.F.
}
\author{Omar Aguilar-Loreto}
\address{Centro Universitario de la Costa Sur, Universidad de
Guadalajara, Av. Independencia Nacional 151, Autl\'an, Jalisco,
M\'exico}
\begin{abstract}
We obtain a partial differential equation for a pulse travelling
inside a Bose-Einstein condensate under conditions of electromagnetically
induced transparency. The equation is valid for a weak probe pulse.
We solve the equation for the case of a three-level BEC in $\Lambda$
configuration with one of its ground state spatial profiles
initially constant. The solution characterizes, in detail, the
effect that the evolution of the condensate wave function has on
pulse propagation, including the process of stopping and releasing
it.
\end{abstract}
\pacs{42.50.Gy,42.50.Dv,03.75.Nt}
\maketitle
\section{Introduction}\label{sec:introduction}
A medium composed of three-level atoms in $\Lambda$ configuration (two
ground states, $1$ and $2$, dipole-coupled to an excited state $0$,
see Fig.~\ref{fig:atomo}) can be rendered transparent using a
technique known as electromagnetically induced transparency (EIT). We
will call the field mode coupling states $2$ and $0$ the probe field
and the field mode coupling states $1$ and $0$ the control field. In
the usual EIT configuration the probe field is weak compared with the
control field, which, in turn, is in resonance with the transition
frequency between states $1$ and $0$. We denote by $\delta$ the
detuning of the probe field from resonance with respect to the
transition frequency between states $2$ and $0$. The range of
frequency detunings, $\delta$, where absorption of the probe field by
the medium is negligible is known as the transparency window; its size
is proportional to the control field. The appearance of this
transparency window using the control field is known as
electromagnetically induced transparency; see
Fig.~\ref{fig:absorption} for an example of the typical absorption
spectrum of a probe field interacting with atoms showing EIT. In
Ref.~\cite{rv:marangos,rv:harris} the fundamentals of EIT are
explained in detail. The probe field inside a medium composed of
several three-level atoms exhibiting EIT propagates without distortion
with a velocity that is a function of the control field
\cite{rv:lukincollo,rv:rmpfleisch}; this can be used to slow down the
probe velocity as much as desired \cite{rv:hau,
PhysRevLett.111.033601}.
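The transparency window can be illustrated numerically. The sketch below uses the standard weak-probe susceptibility of a $\Lambda$ system (in the form found in the EIT reviews cited above; the parameter names and values are my own illustrative choices), and checks that absorption nearly vanishes at $\delta=0$ while peaking near $\delta=\pm G/2$.

```python
import numpy as np

def absorption(delta, G, gamma0=1.0, gamma21=1e-3):
    """Im[chi] (arbitrary units) for a weak probe on a Lambda system.
    gamma0: excited-state decay, gamma21: ground-state decoherence,
    G: control-field Rabi frequency, delta: probe detuning."""
    chi = 1j * (gamma21 - 1j * delta) / (
        (gamma0 - 1j * delta) * (gamma21 - 1j * delta) + G**2 / 4)
    return chi.imag

G = 2.0
deltas = np.linspace(-4, 4, 801)
spectrum = absorption(deltas, G)
# Transparency at line centre; absorption peaks near delta = +/- G/2:
print(absorption(0.0, G) < 0.01 * spectrum.max())   # True
```

In the limit $G\to 0$ the expression reduces to the usual two-level Lorentzian, so the code also reproduces the no-control-field absorption profile.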
For an atom, the atomic wave function has a dependence on its
position; when dealing with pulse propagation inside a medium composed
of atoms, we can ignore this dependence when the pulse size is much
larger than the variance of the atomic position; in this case only the
expectation value of the atomic position enters the description and we
say that we treat the atoms as point-like. The theory developed in
Ref.~\cite{rv:lukincollo,rv:rmpfleisch} treats the atoms as
point-like. When the atoms that compose the medium are in a
Bose-Einstein condensate, they can no longer be treated as
point-like. In this case the condensate wave function together with
its dynamics have a strong effect on the probe field propagation as is
shown through numerical simulations in
Ref.~\cite{Dutton2004,Dutton2002,Hau2008}.
To observe this effect it is necessary that the
time during which the pulse interacts with the medium is larger than
the time scale characterizing the condensate wave function dynamics.
Since a
Bose-Einstein condensate has a well defined spatial phase relation,
the system evolution does not necessarily decohere the information
carried by the pulse, opening the possibility of using slow light and
BEC to manipulate optical information \cite{Hau2008,Riedl2013}.
The main question we address in this paper is: How does the evolution
of the condensate wave function modify the propagating pulse?
We address it in a form where analytical results can be obtained.
Answering this question requires solving the Gross-Pitaevskii
equations, describing the condensate evolution, together with the
Maxwell equations, describing the field propagation. We obtain a
partial differential equation for the pulse propagation inside a
Bose-Einstein condensate with atoms in $\Lambda$ configuration. This
equation is much simpler than the set of equations that are usually
used to find the pulse propagation and the condensate wave function.
For some simple cases it can be solved analytically, showing
explicitly how the different elements -- probe pulse, control field
and wave functions -- interact and evolve. The equation is obtained
using a weak probe and approximations similar to those made in
Ref.~\cite{rv:memoria2}, but taking into account the condensate wave
function. For a condensate wave function with a spatial profile that
is initially constant, we write the solution in integral form. The
evolution of the field resembles the evolution of the wave
function for atoms in state $1$, but with the potential described in a
reference frame that moves with the pulse; we use this similarity to
analyze the pulse propagation in the medium.
A general overview of a three-level Bose-Einstein condensate
interacting with two modes of the field is presented in
section~\ref{sec:model}, where known approximations are used to
express the model in a simplified way. In
section~\ref{sec:prop-equat-probe} we obtain the propagation equation
for the case of the probe pulse. In
section~\ref{sec:slow-light-inside} we study how the pulse propagates
inside the condensate showing EIT and discuss some physical
implications derived from the analytic solution. Concluding remarks
are given in section~\ref{sec:conclusion}.
\begin{figure}
\caption{(a) Three-level atom in $\Lambda$ configuration interacting
with two modes of the field. Transitions from state $|1\rangle$ to
$|0\rangle$ and $|2\rangle$ to $|0\rangle$ are dipole
allowed. Transitions from state $|1\rangle$ to $|2\rangle$ are not
dipole allowed. The control mode of the field couples, resonantly,
state $|1\rangle$ with $|0\rangle$. The probe mode of the field
couples, with a detuning $\delta$, state $|2\rangle$ with
$|0\rangle$. The rate of spontaneous decay from the excited state to
the ground state is given by $\gamma$. (b) An example of the
absorption profile for a weak probe field, with detuning $\delta$,
interacting with three-level atoms. The interaction of the control
field with the corresponding dipole transition is
characterized by the Rabi frequency $G$. Absorption
spectra for two different values of $G$ are shown in the plot;
note how the width of the transparency window depends on the Rabi frequency. }
\label{fig:atomo}
\label{fig:absorption}
\end{figure}
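The dependence of the transparency window on the control Rabi frequency, described in the figure caption, can be illustrated with a standard textbook form of the weak-probe EIT susceptibility. This is a generic illustration, not derived from the equations of this paper; the function name and all parameter values are assumptions for the sketch:

```python
import numpy as np

def eit_absorption(delta, G, gamma=1.0, gamma_g=1e-3):
    """Imaginary part of the weak-probe susceptibility (arb. units) for a
    Lambda system with a resonant control field of Rabi frequency G.
    Standard textbook form: chi ~ i / (gamma/2 - i*delta + G^2/(gamma_g - i*delta)),
    with gamma the excited-state decay rate and gamma_g a small
    ground-state coherence decay (both illustrative)."""
    denom = gamma / 2 - 1j * delta + G**2 / (gamma_g - 1j * delta)
    return np.imag(1j / denom)

# probe detuning in units of gamma; two control strengths as in the caption
delta = np.linspace(-3.0, 3.0, 601)
weak = eit_absorption(delta, G=0.3)
strong = eit_absorption(delta, G=1.0)
```

On resonance the absorption nearly vanishes (transparency), and the window is wider for the larger control Rabi frequency, reproducing the qualitative behavior the caption refers to.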
\section{Model}\label{sec:model}
We only consider dynamics in the $x$ direction. The Hamiltonian of a three-level Bose-Einstein condensate interacting with two modes of light is
given by
\begin{equation}\label{eq:hamiltonian}
H=H_0+H_{int}\, ,
\end{equation}
with
\begin{eqnarray}
H_0 &=& -\hbar \omega_{2} \int dx \, \hat{\psi}_2^{\dagger}(x)\hat{\psi}_2(x)-\hbar \omega_{1} \int dx \, \hat{\psi}_1^{\dagger}(x)\hat{\psi}_1(x)\nonumber\\
& &+\sum_j \int dx \, \hat{\psi}_j^{\dagger}(x) \left ( -\frac{\hbar^2}{2 M}\frac{\partial^2}{\partial
x^2} +V_j(x) \right ) \hat{\psi}_j(x)
\nonumber\\ & & + \hbar \sum_j \int dx \, u_j(x) \hat{\psi}_j^{\dagger}(x)\hat{\psi}_j^{\dagger}(x)\hat{\psi}_j(x)\hat{\psi}_j(x) \nonumber\\
& & + \frac{\hbar}{2} \sum_{i\neq j} \int dx \, u_{ij}(x) \hat{\psi}_j^{\dagger}(x)\hat{\psi}_i^{\dagger}(x)\hat{\psi}_j(x)\hat{\psi}_i(x)\, ,
\nonumber\\
\end{eqnarray}
where $\hat{\psi}^{\dagger}_j(x),\hat{\psi}_j(x)$ are bosonic
operators that create and destroy a particle, with mass $M$, in the
state $j$, with $j=0,1,2$, at position $x$; these obey commutation rules
\begin{equation}\label{eqn:commutator}
\left [ \hat{\psi}_i(x),\hat{\psi}^{\dagger}_j(x') \right ] = \delta_{ij} \,
\delta (x-x')\, .
\end{equation}
The functions $V_{j}(x)$ are the trapping potentials for the atoms in
state $j$. The constants $u_j$ are potentials due to elastic
collisions between particles in the same level $j$, and the $u_{ij}$ are
potentials due to elastic collisions between particles in level $i$
and particles in level $j$. We set the energy of the state $j=0$ as
the zero of the energy scale. The energy of atomic state $j=1,2$ is
given by $-\hbar \omega_j$, and the interaction Hamiltonian reads
\begin{eqnarray}
H_{int}&=&\hbar G(t) \int dx \,\left ( \hat{\psi}_0^{\dagger}(x)\hat{\psi}_1(x) e^{i (k_G x-\omega_L t)} +h.c. \right )
\nonumber\\& &+ \hbar g \int dx \, \mathcal{E}(x,t)
\left ( \hat{\psi}_0^{\dagger}(x)\hat{\psi}_2(x) +h.c. \right )\, ,
\end{eqnarray}
where $G(t)$ is proportional to the strength of the continuous control
pulse, with wave number $k_G$ and frequency $\omega_L$,
$\mathcal{E}(x,t)$ is the incoming probe pulse, assumed to be centered
at a frequency $\bar{\omega}$, and $g$ is the atom-field coupling.
In order to calculate the system evolution we use the Heisenberg
representation, where the state vectors are time-independent and the
evolution is given by time-dependent operators that obey the
Heisenberg equations of motion. The Heisenberg
equations of motion for the bosonic operators are
\begin{equation}
\abl{t} \hat{\psi}_j(x,t)=\frac{1}{i \hbar } \left [ \hat{\psi}_j(x,t),H \right ]\, ,
\end{equation}
substituting the Hamiltonian $H$ given by Eq.~(\ref{eq:hamiltonian}), we
obtain the following differential equations for the bosonic operators
\begin{eqnarray}\label{eq:complete_system}
\abl{t} \hat{\psi}_0 &=& \left ( \frac{i \hbar}{2M}\frac{\partial^2}{\partial
x^2}
-\frac{i}{\hbar} V_0 -2i u_0 \hat{\psi}^{\dagger}_0\hat{\psi}_0 - i u_{01}
\hat{\psi}^{\dagger}_1\hat{\psi}_1 \right . \nonumber\\ & & \left .-i u_{02} \hat{\psi}^{\dagger}_2\hat{\psi}_2
\right )\hat{\psi}_0 -i G(t) e^{i (k_G x-\omega_L t)} \hat{\psi}_1 -i
g \mathcal{E} \hat{\psi}_2 \, , \nonumber \\
\abl{t} \hat{\psi}_1 &=& \left ( i\hbar\omega_1+\frac{i \hbar
}{2M}\frac{\partial^2}{\partial
x^2} -\frac{i}{\hbar} V_1 -2i u_1 \hat{\psi}^{\dagger}_1\hat{\psi}_1
- i u_{12} \hat{\psi}^{\dagger}_2\hat{\psi}_2 \right .\nonumber \\ & & \left .
-i u_{10} \hat{\psi}^{\dagger}_0\hat{\psi}_0
\right )\hat{\psi}_1 -i G(t) e^{i (k_G x-\omega_L t)} \hat{\psi}_0 \, ,\nonumber \\
\abl{t}
\hat{\psi}_2 &=& \left ( i\hbar\omega_2+ \frac{i \hbar}{2M}\frac{\partial^2}{\partial
x^2} -\frac{i}{\hbar} V_2
-2i u_2 \hat{\psi}^{\dagger}_2\hat{\psi}_2 - i u_{20} \hat{\psi}^{\dagger}_0\hat{\psi}_0
\right . \nonumber\\ & & \left .-i u_{21} \hat{\psi}^{\dagger}_1\hat{\psi}_1 \right )\hat{\psi}_2 -i g \mathcal{E}^* \hat{\psi}_0\, .
\end{eqnarray}
Here we have omitted the time and space dependence of the operators and
variables, and we have assumed that $u_{ij}=u_{ji}$.
We proceed now by writing the field operator as the sum of its
expectation value plus fluctuations~\cite{Dalfovo1999}
\begin{equation}
\label{eq:sum_cnumber_operator}
\hat{\psi}_i=\psi_i(x,t)1\!\!1+\delta\hat{\psi}_i\, ,
\end{equation}
where $1\!\!1$ is the identity operator and $\psi_i(x,t)=\langle
\hat{\psi}_i \rangle$ ( $\langle\cdots\rangle$ indicates the quantum
expectation value) is known as the mean field. The Gross-Pitaevskii
equations are partial differential equations where the fluctuations,
$\delta\hat{\psi}_i$, are not taken into account. Their solutions
give an approximation to the evolution of the mean field,
$\psi_i(x,t)$, whose squared norm gives the condensate spatial profile
of the atoms in level $i$. The Gross-Pitaevskii equations are valid
when the mean field dominates the fluctuations; this is the case when
the interactions between atoms are weak and the system temperature is
well below the transition temperature for Bose condensation.
Using Eq.~(\ref{eq:sum_cnumber_operator}) in
Eqs.~(\ref{eq:complete_system}) and neglecting fluctuations, we obtain
a set of coupled Gross-Pitaevskii equations; they look similar to
Eqs.~(\ref{eq:complete_system}), but with functions $\psi_i(x,t)$
instead of operators $\hat{\psi}_i(x,t)$.
We assume that the control field is in resonance with the dipole
transition $1\rightarrow 0$ of atoms at rest, $\omega_1=\omega_L$. The
dominant frequency of the probe field is $\overline{\omega}$, assumed
to be in resonance with the dipole transition $2\rightarrow 0$ of
atoms at rest, $\overline{\omega}=\omega_2$. In order to eliminate
rapidly rotating terms we make the following transformations:
$\psi_1\rightarrow e^{i \omega_1 t}\psi_1$, $\psi_2\rightarrow e^{i
\overline{\omega} t}\psi_2$, $\psi_0\rightarrow e^{-i \omega_L
t}e^{-i \overline{\omega} t}\psi_0$ and $\mathcal{E}\rightarrow
\mathcal{E} e^{i \omega_2 t}$. All the atoms are initially in state
$2$, i.e. $\psi_0(x,t=0)=\psi_1(x,t=0)=0$. We use the weak probe approximation, $G\gg g\mathcal{E}$, and
assume that the wave functions $\psi_j$ can be expanded in powers of
$\Omega=g\mathcal{E}$:
\begin{equation}\label{eq:expansion_omega}
\psi_j=\psi_j^{(0)}+\psi_j^{(1)} \Omega+O(\Omega^2)\, .
\end{equation}
The expansion limits the validity of our results to the case where the
Rabi frequency of the probe field is much weaker than the Rabi
frequency of the control field; this is the standard approximation in
EIT-based slow-light physics.
To zeroth order we obtain
\begin{equation}\label{eq:futuragp}
\abl{t} \psi_2^{(0)} = \left ( \frac{i \hbar}{2M}\frac{\partial^2}{\partial
x^2}
-\frac{i}{\hbar} V_2 -2i u_2
\psi^{*(0)}_2\psi^{(0)}_2 \right )\psi^{(0)}_2 \, ,
\end{equation}
and $\psi_1^{(0)}=\psi_0^{(0)}=0$.
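A minimal numerical sketch of how an equation of the form of Eq.~(\ref{eq:futuragp}) could be integrated is the standard split-step Fourier method, written here for a dimensionless GP equation. The units, grid, and nonlinearity coefficient are illustrative assumptions, not the values of this paper:

```python
import numpy as np

def gp_step(psi, dt, k, V, g_nl):
    """One split-step Fourier step of the dimensionless GP equation
    i dpsi/dt = (-1/2 d^2/dx^2 + V + g_nl |psi|^2) psi.
    Half a nonlinear/potential phase, a full kinetic step in Fourier
    space, then the second half of the nonlinear/potential phase."""
    psi = psi * np.exp(-1j * dt / 2 * (V + g_nl * np.abs(psi) ** 2))
    psi = np.fft.ifft(np.exp(-1j * dt * k**2 / 2) * np.fft.fft(psi))
    psi = psi * np.exp(-1j * dt / 2 * (V + g_nl * np.abs(psi) ** 2))
    return psi

# Gaussian initial state in an illustrative harmonic trap
n = 256
x = np.linspace(-10, 10, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x**2
psi = np.exp(-x**2 / 2) / np.pi**0.25
for _ in range(200):
    psi = gp_step(psi, 0.005, k, V, g_nl=1.0)
norm = np.sum(np.abs(psi) ** 2) * dx
```

Each factor in the step is a pure phase, so the method conserves the norm of the mean field, which is a useful sanity check on any GP integration.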
At first order we get
\begin{eqnarray}
\abl{t} \psi_0^{(1)} &=& \left(\frac{i \hbar }{2M}\frac{\partial^2}{\partial
x^2}
-\frac{i}{\hbar} V_0- i u_{02}
\psi^{*(0)}_2\psi^{(0)}_2-i \Delta -\frac{\gamma}{2}\right)
\psi_0^{(1)} \nonumber \\ & & -i G(t) e^{i k_G x} \psi_1^{(1)} -i g \mathcal{E}
\psi_2^{(0)} \, ,\label{eq:psi0}\\
\abl{t} \psi_1^{(1)} &=& \left(\frac{i
\hbar}{2M}\frac{\partial^2}{\partial
x^2} -\frac{i}{\hbar} V_1- i u_{12}
\psi^{*(0)}_2\psi^{(0)}_2\right)\psi_1^{(1)} \nonumber\\ & &
-i G(t) e^{-i k_G
x} \psi_0^{(1)}\, ,\label{eq:expanionb} \\
\abl{t} \psi_2^{(1)} &=& \left (
\frac{i \hbar}{2M}\frac{\partial^2}{\partial
x^2} -\frac{i}{\hbar} V_2 -2i u_2
\psi^{*(0)}_2\psi^{(0)}_2 \right )\psi^{(1)}_2 \nonumber \\
& & -i u_2
\psi^{*(1)}_2\psi^{(0)}_2\psi^{(0)}_2\, ,
\label{eq:expanionc}
\end{eqnarray}
where $\Delta=\omega_L-\omega_2$. We inserted spontaneous decay from
state $0$, at rate $\gamma$, phenomenologically in
Eq.~(\ref{eq:psi0}). Since the probe pulse is weak, the population
of level $1$ of the condensate is small, which justifies keeping
only terms up to first order in $\Omega$ in
Eq.~(\ref{eq:expansion_omega}).
\section{Propagation Equation for the Probe Pulse}\label{sec:prop-equat-probe}
In the slowly varying envelope approximation, the equation for the
probe electromagnetic field becomes \cite{lb:scully}
\begin{equation}
\left(\partiell{t}{} +c \partiell{x}{} \right )\mathcal{\overline{E}} =i g e^{-i k_F x}\langle\psi^{*}_2 (x,t) \psi_0(x,t)\rangle\, ,
\end{equation}
with $\mathcal{\overline{E}}= \mathcal{E} e^{i k_F x}$ being the
slowly varying part of the electromagnetic field with
$k_F=k(\bar{\omega})$. To first order in $\Omega$, the
dipole operator in the previous equation is
\[
\langle\psi^{*}_2 (x,t) \psi_0(x,t)\rangle\approx
\langle\psi^{*(0)}_2 (x,t) \psi^{(1)}_0(x,t)\rangle\, .
\]
Eq.~(\ref{eq:expanionb}) implies
\begin{equation}\label{eqn:psi2psi0}
\psi_0^{(1)}=\frac{i e^{i k_G x}}{G} \left ( \abl{t} -\frac{i
\hbar}{2M}\frac{\partial^2}{\partial
x^2} +\frac{i}{\hbar} V_1+i u_{12} \psi^{*(0)}_2\psi^{(0)}_2\right )\psi_1^{(1)}\, .
\end{equation}
We assume that $\psi_0^{(1)}$ reaches its stationary value very
quickly compared with the evolution time scale of the other
wave functions. Note that in the stationary regime the system reaches
electromagnetically induced transparency and the population of
state $0$ is negligible. These two considerations allow us to write
Eq.~(\ref{eq:psi0}) as
\begin{equation}\label{eq:dip1}
\psi_1^{(1)}\approx -\frac{g\mathcal{\overline{E}} e^{i k_F x}}{G}
e^{-i k_G x}\psi^{(0)}_2\, ;
\end{equation}
this expression is equivalent to Eq.~(27) in Ref.~\cite{rv:memoria2}.
We insert Eq.~(\ref{eq:dip1}) into Eq.~(\ref{eqn:psi2psi0}) and denote
the condensate wave function of level $2$, when there is no field, as
\[
\psi_2^{(0)}=\alpha(x,t)\, ,
\]
obtaining
\begin{eqnarray}\label{eqn:dipole2quant}
\langle\psi^{*(0)}_2 (x,t) \psi^{(1)}_0(x,t)\rangle&=&-\frac{ig
e^{i k_Fx}\alpha^*(x,t)}{G(t)} \partiell{t}{} \frac{\mathcal{\overline{E}}(x,t)
\alpha(x,t)}{G(t)}\nonumber\\&& -\frac{g e^{i k_G x}\alpha^*(x,t)}{G^2(t)} \frac{\hbar
}{2M}\frac{\partial^2}{\partial
x^2}
e^{i (k_F-k_G
)x}\alpha(x,t)\mathcal{\overline{E}}(x,t)\\&& +\frac{g e^{i k_F
x}\mathcal{\overline{E}}(x,t)|\alpha(x,t)|^2}{\hbar G^2(t)} V_1+\frac{g e^{i k_F
x}\mathcal{\overline{E}}(x,t)u_{12}|\alpha(x,t)|^4}{G^2(t)}\nonumber\, .
\end{eqnarray}
Substituting in the propagation equation we obtain
\begin{eqnarray}\label{eq:progen}
\left(\partiell{t}{} +c \partiell{x}{} \right )\mathcal{\overline{E}}(x,t) &=&-\frac{g^2
\alpha^*(x,t)}{G(t)}\left\{\partiell{t}{}\frac{\mathcal{\overline{E}}(x,t)\alpha(x,t)}{G(t)} \right.\nonumber\\&&\left. -\frac{i \hbar}{2 M G(t)} \left[-(k_G-k_F)^2 \mathcal{\overline{E}}(x,t)
\alpha(x,t)\right.\right.\nonumber\\ &&\left.\left. -i 2 (k_G-k_F)
\partiell{x}{}\mathcal{\overline{E}}(x,t) \alpha(x,t)+\frac{\partial^2}{\partial
x^2}\mathcal{\overline{E}}(x,t) \alpha(x,t) \right]\right.\nonumber\\&&\left.+\frac{i
\mathcal{\overline{E}}(x,t)\alpha(x,t)}{\hbar G(t)} V_1+\frac{i
\mathcal{\overline{E}}(x,t)u_{12}|\alpha(x,t)|^2\alpha(x,t)}{G(t)} \right\}\, ,\nonumber\\
\end{eqnarray}
which is one of our main results. The function
$\alpha(x,t)=\psi_2^{(0)}(x,t)$ is given by the solution of
Eq.~(\ref{eq:futuragp}) and does not depend on the field or the
condensate wave functions of the other atomic levels.
In the following section we take advantage of this to obtain an
analytic expression for the solution in the particular case of an
initially spatially uniform condensate. This solution exemplifies how
the different terms on the right side of Eq.~(\ref{eq:progen})
contribute to the probe pulse evolution.
\section{Slow light inside a condensate}\label{sec:slow-light-inside}
In this section, we study how the probe pulse propagates inside the
condensate showing EIT. We will focus on the case where
$\alpha(x,t)=\alpha e^{i\mu t}$, with $\alpha$ a constant, is a
solution of the Gross-Pitaevskii equation for the state $2$ of the
condensate. Spatially uniform condensates have been achieved
experimentally \cite{gaunt_bose-einstein_2013}. The potential, $V_2$,
that generates this distribution, could be a square well; it can also
be a wide harmonic potential and the following discussion is valid in
the region near the center of the potential, where there is little
change in the density distribution \cite{pethick2002bose}. This case
avoids the effect that density changes of the atoms in level $2$ have
on the pulse propagation; we are interested in the effect that the state
evolution of atoms in level $1$ has on the pulse. This approximation
allows us to obtain an analytic expression for the solution that
completely characterizes both the pulse propagation and the atomic-state
evolution.
Defining $k_t=k_G-k_F$,
Eq.~(\ref{eq:progen}) reads
\begin{eqnarray}
\left( \frac{\partial }{\partial t}+c\frac{\partial }{\partial x}\right)
\mathcal{E}\left( x,t\right) &=&-\frac{g^2 \left\vert \alpha \right\vert ^{2}}{G\left( t\right) }\left(
\frac{\partial }{\partial t} \left( \frac{\mathcal{E}\left( x,t\right)
}{G\left( t\right) }\right) +i\mu \left( \frac{\mathcal{E}\left( x,t\right)
}{G\left( t\right) }\right)\right.\nonumber\\&&\left. +\frac{i \mathcal{E}(x,t)}{\hbar G(t)}
V_1(x)+\frac{i \mathcal{E}(x,t)u_{12}|\alpha|^2}{G(t)} \right
) \nonumber\\&&+\frac{i g^2 \left\vert \alpha \right\vert ^{2} \hbar}{ 2 M G^{2}\left( t\right) }\left[
-k_{t}^{2}-i2k_{t}\frac{\partial }{\partial x}+ \frac{\partial
^{2}}{\partial x^{2}}\right] \mathcal{E}\left( x,t\right)\, .
\end{eqnarray}
To solve this equation we use a reference system that moves
with the probe pulse. With the substitution
\begin{equation}
\label{eq:cambio_variable}
u=-\frac{x}{c}+\int_0^t\frac{G^2(\xi)}{g^2 \left\vert \alpha \right\vert ^{2}+G^2(\xi)}d\xi
\end{equation}
we obtain
\begin{equation}\label{eq:solconst}
\mathcal{E}(u,t)=\frac{G(t)e^{i\phi(t)}}{\sqrt{G^2(t)+g^2 \left\vert \alpha \right\vert ^{2}}}
\exp{\left (\int_0^t \frac{-i}{\hbar}\tilde{H}(\tau) d \tau\right )}\,\,\mathcal{E}(u,t=0)\, ,
\end{equation}
where
\begin{equation}
\label{eq:fakehamiltonian}
\tilde{H}=\frac{1}{(1+G^2(t)/g^2 \left\vert \alpha \right\vert ^{2})}\left ( \frac{\hbar^2}{2 M}(k_t+\frac{i}{c}\frac{\partial}{\partial
u})^2 +V_1(u,t)\right )\,
\end{equation}
and
\begin{equation}\label{eq:phidet}
\phi(t)=(\mu+u_{12}|\alpha|^2)\left(\int_0^t\frac{G^2(\tau)}{G^2(\tau)+g^2 \left\vert \alpha \right\vert ^{2}} d\tau-t\right)\,\, .
\end{equation}
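Equations (\ref{eq:cambio_variable}) and (\ref{eq:phidet}) are simple quadratures and can be evaluated numerically for any control-field history $G(t)$. The following sketch does this by trapezoidal integration; the function names and all parameter values ($g$, $\alpha$, $c$, $\mu$, $u_{12}$) are illustrative assumptions:

```python
import numpy as np

def comoving_coordinate(x, t_grid, G, g=1.0, alpha=1.0, c=1.0):
    """u(x,t) from Eq. (eq:cambio_variable):
    u = -x/c + int_0^t G^2(xi) / (g^2 |alpha|^2 + G^2(xi)) dxi,
    evaluated by trapezoidal quadrature on t_grid."""
    frac = G(t_grid) ** 2 / (g**2 * abs(alpha) ** 2 + G(t_grid) ** 2)
    integral = np.concatenate(
        ([0.0], np.cumsum((frac[1:] + frac[:-1]) / 2 * np.diff(t_grid))))
    return -x / c + integral

def phase(t_grid, G, mu=0.5, u12=0.2, g=1.0, alpha=1.0):
    """phi(t) from Eq. (eq:phidet); the phase accumulates fastest while
    the pulse is stopped (G ~ 0) and is negligible for large G."""
    frac = G(t_grid) ** 2 / (G(t_grid) ** 2 + g**2 * abs(alpha) ** 2)
    integral = np.concatenate(
        ([0.0], np.cumsum((frac[1:] + frac[:-1]) / 2 * np.diff(t_grid))))
    return (mu + u12 * abs(alpha) ** 2) * (integral - t_grid)

t = np.linspace(0.0, 10.0, 2001)
G_const = lambda tt: np.full_like(tt, 2.0)  # constant control field
u_const = comoving_coordinate(0.0, t, G_const)
ph = phase(t, G_const)
```

For a constant control field the integrand is the constant group-velocity reduction factor $G^2/(g^2|\alpha|^2+G^2)$, so the quadrature can be checked against the closed form.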
The solution for the field shown in (\ref{eq:solconst}) includes some
of the usual characteristic behavior of light traveling through media
showing EIT. The pulse velocity depends on the value of $G(t)$; as
$G(t)\rightarrow 0$ the pulse is stopped and, as can be seen from
expression (\ref{eq:dip1}), its information is transferred to the
atoms \cite{rv:lukincollo}. Note that if $k_t=0$ and the pulse is
already stopped ($G(t)=0$), the operator $\tilde{H}$, defined in
(\ref{eq:fakehamiltonian}), coincides with the Hamiltonian operator of
an atom trapped in the potential $V_1$. As shown in
relation (\ref{eq:dip1}), the wave function of atoms in state $1$ is given
by the probe field once the pulse is trapped; then, while the pulse is
trapped, the atomic wave function for atoms in state $1$ evolves under
its own Hamiltonian. As solution (\ref{eq:solconst}) shows, its
profile is imprinted on the pulse once it is released
($G(t)\rightarrow \infty$). If $k_t\neq 0$, the photons transfer
momentum to the atoms that go from state $2$ to state $1$. This
appears in (\ref{eq:fakehamiltonian}) precisely as a displacement of the
momentum operator by $k_t$. This effect has been used to move a pulse
by physically moving the atoms in state $1$ and then releasing the
pulse once the atoms are in a new position \cite{rv:hau2condensados}.
It is clear from the solution that the pulse is modified as it travels
through the medium and in the process of stopping it; the strength of
this modification depends on $G(t)$ and on the trapping potential for
atoms in state $1$. The behavior described above is the expected
behavior for trapping a pulse in a BEC \cite{Dutton2004}. The
advantage of the analytic expression for the field,
(\ref{eq:solconst}), is that it tells us how the pulse evolves: as if
it were a quantum particle with the Hamiltonian given by
Eq.~(\ref{eq:fakehamiltonian}). Note that $\tilde{H}$ looks similar to
the Hamiltonian operator for atoms in state $1$, but multiplied by the
factor $1/(1+G^2(t)/g^2|\alpha|^2)$ and written in a reference frame that
moves with the pulse; when the control field is zero, $u=x$ and
$\tilde{H}$ coincides with the Hamiltonian of atoms in state $1$. We
therefore claim that the way the probe pulse propagation is
affected by the evolution of the wave function of atoms in state $1$
is given by $\tilde{H}$. With this interpretation, the
factor $1/(1+G^2(t)/g^2 \left\vert \alpha \right\vert ^{2})$ in the
Hamiltonian (\ref{eq:fakehamiltonian}) gives the strength with which the
evolution of the atomic wave function affects the propagating pulse.
We will now explore some of the
physical insights that solution (\ref{eq:solconst}) exhibits.
The field, shown in (\ref{eq:solconst}), does not depend on $u_1$:
collisions between atoms in state $1$ do not affect the field
propagation. In the limit of a weak probe field, the number of atoms
transferred to state $1$ is small and collisions between them are
negligible.
The field has a global phase which depends on $\phi(t)$ (see
(\ref{eq:solconst}) and (\ref{eq:phidet})), which in turn depends on the term
$u_{12}$ that characterizes the strength of collisions between atoms
in state $2$ and atoms in state $1$. Pulses that propagate in the medium
with different control fields, $G(t)$, will have different phases; the
phase accumulation is maximal when the pulse is stopped, and it is
negligible when $G(t)$ is large. Depending on which parameters are
known, making these pulses interfere could be used to extract the
value of $u_{12}$, $\mu$ or $g$. If we interfere a pulse that has
been stopped and then released with a pulse that propagates at
constant velocity, we could extract information on the phase advance
due to the process of trapping and releasing the pulse.
For intermediate cases ($G(t)\neq 0,\infty$) the pulse is modified;
how it is modified depends on the wave function dynamics of atoms in
state $1$ and on the coupling between this dynamics and the pulse,
$1/(1+G^2(t)/g^2 \left\vert \alpha \right\vert ^{2})$. For example,
if $V_1$ is constant inside the region where the pulse propagates, the width
of the wave function for state $1$ expands as that of a free particle, and
the probe pulse width expands as if it were a free particle with mass
$M\,(1+G^2(t)/g^2|\alpha|^2)$. In this case, not taking into account
that the field strength changes as a function of the control field,
the evolution of the field, in the process of trapping it, is
similar to the evolution of a free-particle wave function whose mass
changes with time.
The pulse propagation can be manipulated
using $V_1$. For example, if $V_1=M\omega^2 u^2$, the center of the
pulse (in the moving frame) will behave as if it were a harmonic
oscillator with mass $M\,(1+G^2(t)/g^2|\alpha|^2)$ and frequency
$\omega/(1+G^2(t)/g^2|\alpha|^2)$.
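The rescaled oscillator parameters quoted above follow directly from the prefactor of $\tilde{H}$ in Eq.~(\ref{eq:fakehamiltonian}); a tiny helper (hypothetical function name, illustrative parameter values) makes the scaling explicit:

```python
def effective_params(M, omega, G, g=1.0, alpha=1.0):
    """Effective mass and frequency of the pulse centre in the moving
    frame, read off from Eq. (eq:fakehamiltonian) with
    V_1 = M omega^2 u^2: mass scales by s = 1 + G^2/(g^2 |alpha|^2),
    frequency by 1/s."""
    s = 1.0 + G**2 / (g**2 * abs(alpha) ** 2)
    return M * s, omega / s

m_eff, w_eff = effective_params(M=1.0, omega=1.0, G=2.0)
```

For $G=0$ the pulse centre inherits the bare atomic parameters, while a strong control field makes the effective oscillator heavy and slow, consistent with the weak coupling of the pulse to the atomic dynamics in that limit.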
\section{Conclusion}\label{sec:conclusion}
When a probe pulse of light propagates through a three-level
Bose-Einstein condensate under conditions of electromagnetically induced
transparency, its shape is modified by the evolution of the condensate
wave function. The strength of this modification is a function of the
control field. This is in contrast with the case where atoms are
assumed point-like, where the probe pulse propagates without
modification.
We derived a partial differential equation whose solution gives the
probe pulse propagation inside a three-level Bose-Einstein condensate
under conditions of electromagnetically induced transparency. We obtained an
analytic expression when the initial profile for the atoms in state
$2$ is constant. The solution shows explicitly how different
parameters of the system -- trapping potential of the atoms, time
dependent control field, scattering lengths, atom-field coupling --
affect the pulse propagation and the evolution of the condensate wave
function.
The form of the solution for the probe propagation,
expression (\ref{eq:solconst}), is similar to the formal solution of the
Schr\"odinger equation for the wave function of an atom in state $1$, but
with the Hamiltonian written in a reference frame that moves with the
pulse.
The solution allows us to see how to manipulate the pulse by choosing
different trapping potentials for atoms in state $1$ and different
control fields. For example, if the potentials are zero, the width of
the probe pulse expands in time as if it were a free quantum particle,
but with an expansion rate which is a function of the control
field. The propagated pulse acquires a global phase that depends on
the control field and the scattering length between atoms in state $1$
and $2$. This phase could be measured by performing interference
experiments with pulses that propagate with different control fields.
We expect that our results will be useful in the discussion of
manipulating light memories using condensates.
\section{Acknowledgments}
We thank Giovanna Morigi and Stefan Rist for useful discussions. Work
supported in part by DGAPA-UNAM grant PAPIIT IN103714.
\end{document}
\begin{document}
\title[A Posteriori Estimate and Adaptivity for Finite Range A/C 2D]{A Posteriori Error Estimate and Adaptive Mesh Refinement Algorithm for Atomistic/Continuum Coupling with Finite Range Interactions in Two Dimensions}
\author{M. Liao}
\address{Mingjie Liao\\
Department of Applied Mathematics and Mechanics\\
University of Science and Technology Beijing\\
No. 30 Xueyuan Road, Haidian District\\
Beijing 100083\\
China}
\email{[email protected]}
\author{P. Lin}
\address{Ping Lin\\
Department of Mathematics\\
University of Dundee\\
Dundee, DD1 4HN, Scotland\\
United Kingdom}
\email{[email protected]}
\author{L. Zhang}
\address{Lei Zhang \\ School of Mathematical Sciences,
Institute of Natural Sciences, and Ministry of Education Key
Laboratory of Scientific and Engineering Computing (MOE-LSC) \\
Shanghai Jiao Tong University \\ 800 Dongchuan Road \\ Shanghai
200240 \\ China}
\email{[email protected]}
\thanks{ML and PL were partially supported by National Natural Science Foundation of China grant 91430106. LZ was partially supported by National Natural Science Foundation of China grants 11471214 and 11571314 and the One Thousand Plan of China for young scientists.}
\begin{abstract}
In this paper, we develop residual based a posteriori error estimates and the corresponding adaptive mesh refinement algorithm for atomistic/continuum (a/c) coupling with finite range interactions in two dimensions. We systematically derive a new explicitly computable stress tensor formula for finite range interactions. In particular, we use the geometric reconstruction based consistent atomistic/continuum (GRAC) coupling scheme, which is optimal if the continuum model is discretized by $P^1$ finite elements. The numerical results of the adaptive mesh refinement algorithm are consistent with the optimal a priori error estimates.
\end{abstract}
\subjclass[2000]{65N12, 65N15, 70C20, 82D25}
\keywords{atomistic models, coarse graining, atomistic-to-continuum coupling, quasicontinuum method, a posteriori error estimate}
\maketitle
\section{Introduction}
\label{sec:introduction}
Atomistic/continuum (a/c) coupling methods are a class of computational multiscale methods for crystalline solids with defects that aim to optimally balance the accuracy of the atomistic model and the efficiency of the continuum model \cite{Ortiz:1995a, Shenoy:1999a, Gumbsch:1989}. The construction and analysis of different a/c coupling methods have attracted considerable attention in the research community in recent years. Rigorous a priori analysis and systematic benchmarking have been carried out in, for example, \cite{LinP:2006a, OrtnerShapeev:2011, LiOrShVK:2014, COLZ2013, OrZh:2016, MiLu:2011}. We refer readers to \cite{Miller:2008, LuOr:acta} for a review. The study of a/c coupling methods has not only provided an analytical framework for prototypical problems \cite{Ehrlacher:2016}, but also opened avenues for coupling schemes in more complicated physical situations \cite{QMMM:2016, QMMM:2017, Fang:2018}.
Like many multiscale methods dealing with defects or singularities, adaptivity is key for the efficient implementation of a/c coupling methods. In contrast to the a priori analysis, the development of a posteriori analysis for a/c coupling methods is still lagging behind. Although heuristic methods have been proposed in the engineering literature \cite{Kochman:2016, prud06, Shenoy:1999a}, previous mathematical justifications are largely limited to one-dimensional cases \cite{prud06, arndtluskin07b, arndtluskin07c}. In particular, residual based a posteriori error bounds for a/c coupling schemes were first derived in \cite{OrtnerSuli:2008a, Ortner:qnl.1d, OrtnerWang:2014} by Ortner et al. in one dimension.
In \cite{APEst:2017}, we carried out a rigorous a posteriori analysis of the residual, the stability constant, and the error bound for a consistent atomistic/continuum coupling method \cite{PRE-ac.2dcorners} with nearest neighbour interactions in two dimensions. A corresponding adaptive mesh refinement algorithm was designed and implemented based on the a posteriori error estimates, and the convergence rate with respect to the number of degrees of freedom is the same as that of the optimal a priori error estimates. This is the first rigorous a posteriori analysis for an a/c coupling method in two dimensions. With the a posteriori error estimates and the adaptive algorithm, we can not only automatically move the a/c interface and adjust the discretization of the continuum region, but also change the size of the computational domain. We have also introduced the so-called ``stress tensor correction'' technique, which marks the essential difference between the two-dimensional results and previous one-dimensional results.
In this paper, we treat the more general case of a/c coupling with finite range interactions, which is physically more relevant and computationally more involved. The a priori analysis of the GRAC scheme has been extended from the nearest neighbour case in \cite{PRE-ac.2dcorners} to finite range interactions in \cite{COLZ2013}. $\ell^{1}$-minimization is introduced to resolve the issue of non-uniqueness of the reconstruction parameters, and a stabilisation mechanism is proposed in \cite{2013-stab.ac} to reduce the stability gap between the a/c coupling scheme and the original atomistic model.
The analytical framework for both the a priori and the a posteriori analysis of a/c coupling methods relies strongly on the stress based formulation. In \cite{Or:2011a, OrtnerTheil2012}, an explicit formulation of the stress tensor is proposed based on a mollified version of a line measure supported on the interaction bonds; from this one obtains an integral representation of finite differences, and hence an integral representation of the first variation of the interaction energy. This representation greatly simplifies the expression of the stress tensor and plays a significant role in the a priori analysis. However, the resulting stress tensor is a function of the continuous space variable; it is therefore difficult to compute in practice and not suitable for a posteriori estimates and adaptive computation.
In this paper, we derive an expression for the stress tensor for finite range interactions which is, to the best of our knowledge, new. The stress tensor is piecewise constant and depends only on a local neighbourhood; it is therefore computable, and the assembly cost is linear in the number of bonds. This stress formulation allows for the convenient derivation of a posteriori error estimates and the efficient implementation of adaptive algorithms.
The paper is organized as follows. We set up the atomistic-to-continuum (a/c) coupling models for point defects in \S~\ref{sec:formulation} and introduce the general GRAC formulation in \S~\ref{sec:grac}. In \S~\ref{sec:stress}, we present the stress formulation and the stress tensor assembly algorithm for finite range interactions. In \S~\ref{sec:numerics}, the adaptive algorithm based on the rigorous a posteriori error estimates is derived and complemented by numerical experiments. We draw conclusions and make suggestions for future research in \S~\ref{sec:conclusion}. Some auxiliary results are given in Appendix \S~\ref{sec:appendix}.
\def\Rdef{R^{\textrm{def}}}
\def\Rg{\mathcal{R}}
\def\Rgnn{\mathcal{N}}
\def\rcut{r_{\textrm{cut}}}
\def\Lamhom{\Lambda^{\textrm{hom}}}
\def\Ddef{D^{\textrm{def}}}
\def\Lamdef{\Lambda^{\textrm{def}}}
\def\Nh{\mathcal{N}_h}
\def\Ush{\mathscr{U}_h}
\def\Ra{R^{\textrm{a}}}
\def\Rb{R^{\textrm{b}}}
\def\Rc{R^{\textrm{c}}}
\def\Eb{\mathscr{E}^{\textrm{b}}}
\def\DOF{{\textrm{DOF}}}
\def\Omh{\Omega_h}
\def\Thr{{\mathcal{T}_{h,R}}}
\def\vor{\textrm{vor}}
\def\UshR{\mathscr{U}_{h,R}}
\def\Ta{\mathcal{T}_{\textrm{a}}}
\def\Th{\mathcal{T}_{\textrm{h}}}
\def\sigh{\sigma^{\textrm{h}}}
\section{Model Formulation}
\label{sec:formulation}
In this section, we set up an atomistic model for crystal defects in an infinite lattice in the spirit of \cite{Ehrlacher:2016} in \S~\ref{sec:formulation:atm} and then introduce the Cauchy-Born continuum model in \S~\ref{sec:formulation:continuum}. We give a generic form of a/c coupling schemes in \S~\ref{sec:formulation:ac}.
\subsection{Atomistic Model}
\label{sec:formulation:atm}
\subsubsection{Atomistic lattice and defects}
\label{sec:formulation:atm:lattice}
Given $d \in \{2, 3\}$ and a non-singular ${\textsf{A}} \in \mathbb{R}^{d \times d}$, $\Lambda^{\textrm{hom}} := {\textsf{A}} \mathbb{Z}^d$ is the \textit{homogeneous reference lattice}, which represents a perfect \textit{single lattice crystal} formed by identical atoms. $\Lambda \subset \mathbb{R}^d$ is the \textit{reference lattice} with some local \textit{defects}. The mismatch between $\Lambda$ and $\Lambda^{\textrm{hom}}$ represents possible defects $\Lambda^{\textrm{def}}$, which are contained in localized \textit{defect cores} $D^{\textrm{def}}$ such that the atoms in $\Lambda \setminus D^{\textrm{def}}$ do not interact with the defects $\Lambda^{\textrm{def}}$. Vacancies, interstitials and impurities are typical point defects.
\subsubsection{Lattice functions and lattice function spaces}
\label{sec:formulation:atm:function}
Given $m\in\{1,2,3\}$, denote the set of vector-valued \textit{lattice functions} by
\begin{displaymath}
\mathscr{U} := \{v: \Lambda \to \mathbb{R}^m \}.
\end{displaymath}
A \textit{deformed configuration} is a \textit{lattice function} $y \in \mathscr{U}$. Let $x$ be the identity map; the \textit{displacement} $u\in \mathscr{U}$ is defined by $u(\ell) = y(\ell)-x(\ell) = y(\ell) - \ell$ for any $\ell\in\Lambda$.
For each $\ell\in \Lambda$, we prescribe an \textit{interaction neighbourhood} $\mathcal{N}_{\ell} := \{ \ell' \in \Lambda \,|\, 0<|\ell'-\ell| \leq r_{\textrm{cut}} \}$ with some cut-off radius $r_{\textrm{cut}}$. The \textit{interaction range} $\mathcal{R}_\ell := \{\ell'-\ell \,|\, \ell'\in \mathcal{N}_\ell\}$ is the set of lattice vectors given by the differences between the lattice points in $\mathcal{N}_{\ell}$ and $\ell$. Define the ``finite difference stencil'' $Dv(\ell):= \{D_\rho v(\ell)\}_{\rho \in \mathcal{R}_\ell} :=\{v(\ell+\rho)-v(\ell)\}_{\rho \in \mathcal{R}_\ell}$.
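For concreteness, the interaction neighbourhood $\mathcal{N}_\ell$ and the finite-difference stencil $Dv(\ell)$ can be sketched as follows. This is a minimal Python illustration on a patch of the square lattice (${\textsf{A}} = I$, $d=2$); the function names and the patch size are our own illustrative choices, not part of any library.

```python
import math

def neighbourhood(site, lattice, r_cut):
    """Interaction neighbourhood N_l: lattice points within distance r_cut of `site`."""
    return [p for p in lattice
            if p != site and math.dist(p, site) <= r_cut]

def stencil(v, site, lattice, r_cut):
    """Finite-difference stencil {D_rho v(l)}_{rho in R_l}, keyed by rho = l' - l."""
    return {tuple(pi - si for pi, si in zip(p, site)): v[p] - v[site]
            for p in neighbourhood(site, lattice, r_cut)}

# 5x5 patch of the square lattice Z^2 around the origin
lattice = [(i, j) for i in range(-2, 3) for j in range(-2, 3)]
v = {p: p[0] + 2 * p[1] for p in lattice}   # a linear lattice function v(x) = x1 + 2 x2
D = stencil(v, (0, 0), lattice, 1.0)
```

For a linear lattice function, $D_\rho v(\ell) = \rho \cdot \nabla v$, which the toy stencil reproduces exactly.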
The homogeneous lattice $\Lambda^{\textrm{hom}} = {\textsf{A}}\mathbb{Z}^d$ naturally induces a simplicial micro-triangulation $\mathcal{T}^{\textrm{a}}$. In two dimensions, $\mathcal{T}^{\textrm{a}} = \{{\textsf{A}}\xi + \hat{T}, {\textsf{A}}\xi-\hat{T} \,|\, \xi\in \mathbb{Z}^2\}$, where $\hat{T} = {\textrm{conv}}\{0, e_1, e_2\}$. Let $\bar{\zeta}\in W^{1, \infty}(\Lambda^{\textrm{hom}}; \mathbb{R})$ be the $P_1$ nodal basis function associated with the origin; namely, $\bar{\zeta}$ is piecewise linear with respect to $\mathcal{T}^{\textrm{a}}$, with $\bar{\zeta}(0) = 1$ and $\bar{\zeta}(\xi)=0$ for $\xi\in \Lambda^{\textrm{hom}}\setminus\{0\}$. The nodal interpolant of $v\in \mathscr{U}$ can be written as
\begin{displaymath}
\bar{v}(x):=\sum_{\xi\in\mathbb{Z}^d}v(\xi)\bar{\zeta}(x-\xi).
\end{displaymath}
We introduce the discrete homogeneous Sobolev space
\begin{displaymath}
\mathscr{U}^{1,2} :=\{u\in \mathscr{U} \,|\, \nabla \bar{u}\in L^2\},
\end{displaymath}
with semi-norm $\|\nabla \bar{u}\|_{L^2}$.
\subsubsection{Interaction potential}
\label{sec:formulation:atm:potential}
We consider general multibody interaction potentials of the \textit{generic pair functional form} \cite{TadmorMiller:2012}, which includes widely used potentials such as the EAM (Embedded Atom Method) potential \cite{Daw1984a} and the Finnis--Sinclair model \cite{Finnis1984}. Namely, the potential is a function of the distances between atoms within the interaction range and has no angular dependence. For example, let $V_\ell(y)$ denote the \textit{site energy} associated with the lattice site $\ell \in \Lambda$; the EAM potential reads
\begin{align}
V_{\ell}(y) := & \sum_{\ell' \in \mathcal{N}_{\ell}} \Phi(|y(\ell)-y(\ell')|) + F\Big(
{\textstyle \sum_{\ell' \in \mathcal{N}_{\ell}} \psi(|y(\ell)-y(\ell')|)} \Big) \nonumber\\
= &\sum_{\rho \in \mathcal{R}_{\ell}} \Phi\big(|D_\rho y(\ell)|\big) + F\Big(
{\textstyle \sum_{\rho \in \mathcal{R}_{\ell}}} \psi\big( |D_\rho y(\ell)|\big) \Big),
\label{eq:eam_potential}
\end{align}
with the pair potential $\Phi$, the electron density function $\psi$ and the embedding function $F$.
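The EAM site energy \eqref{eq:eam_potential} can be sketched directly from a stencil of finite differences. This is a minimal Python illustration; the toy choices $\Phi(r)=(r-1)^2$, $\psi(r)=e^{-r}$, $F(t)=-\sqrt{t}$ in the test are our own assumptions, not fitted EAM functions.

```python
import math

def eam_site_energy(stencil, pair, density, embed):
    """EAM site energy: sum_rho pair(|D_rho y|) + embed(sum_rho density(|D_rho y|)).

    `stencil` maps rho -> D_rho y(l) (a 2D vector), as in eq. (eam_potential)."""
    dists = [math.hypot(*d) for d in stencil.values()]   # the distances |D_rho y(l)|
    return sum(pair(r) for r in dists) + embed(sum(density(r) for r in dists))

# Nearest-neighbour stencil of the identity deformation on the square lattice
st = {(1, 0): (1.0, 0.0), (-1, 0): (-1.0, 0.0),
      (0, 1): (0.0, 1.0), (0, -1): (0.0, -1.0)}
E = eam_site_energy(st,
                    pair=lambda r: (r - 1.0) ** 2,      # toy pair potential Phi
                    density=lambda r: math.exp(-r),     # toy electron density psi
                    embed=lambda t: -math.sqrt(t))      # toy embedding function F
```

For the identity deformation all bond lengths equal $1$, so the pair part vanishes and only the embedding term $F(4e^{-1}) = -2/\sqrt{e}$ remains.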
\begin{remark}
\label{rem:pairfuncational}
For convenience, and with a slight abuse of notation, we will write $V_{\ell}(D_\rho y)$ and $V_{\ell}(|D_\rho y|)$ instead of $\widehat{V}_{\ell}(\{D_\rho y(\ell) \}_{\rho\in \mathcal{R}_\ell})$ and $\widetilde{V}_{\ell}(\{|D_\rho y (\ell) |\}_{\rho\in \mathcal{R}_\ell})$ when there is no confusion in the context.
\end{remark}
We assume that the potential satisfies $V_\ell \in C^k((\mathbb{R}^d)^{\mathcal{R}_\ell})$ with $k \geq 2$. We also assume that $V_\ell$ is \textit{homogeneous} outside the defect region $D^{\textrm{def}}$, namely, $V_\ell = V$ and $\mathcal{R}_\ell = \mathcal{R}$ for $\ell \in \Lambda \setminus D^{\textrm{def}}$. Furthermore, $V$ and $\mathcal{R}$ have the following point symmetry: $\mathcal{R} = -\mathcal{R}$, and $V(\{-g_{-\rho}\}_{\rho\in\mathcal{R}}) = V(g)$.
For an infinite lattice with macroscopic applied strain $\textsf{B}\in\mathbb{R}^{d\times d}$, we redefine the potential $V_\ell(y)$ as the difference $V_\ell(y) - V_\ell(y^{\textsf{B}})$. We define the energy functional $\mathscr{E}(y)$ as the infinite sum of the redefined potentials over $\Lambda$, which is well-defined for $y-y^{\textsf{B}}\in\mathscr{U}^{1,2}$ \cite{Ehrlacher:2016}:
\begin{equation}
\label{eqn:Ea}
\mathscr{E}(y) = \sum_{\ell \in \Lambda} V_\ell(y).
\end{equation}
Under the above conditions, the goal of the atomistic problem is to find a \textit{strongly stable} equilibrium $y$ such that
\begin{equation}
\label{eq:min}
y \in \arg\min \big\{ \mathscr{E}(y) \,\big|\, y-y^{\textsf{B}}\in \mathscr{U}^{1,2} \big\}.
\end{equation}
Here $y$ is \textit{strongly stable} if there exists $c_0 > 0$ such that
\begin{displaymath}
\langle \delta^2 \mathscr{E}(y) v, v \rangle \geq c_0 \| \nabla v \|_{L^2}^2, \quad \forall v \in \mathscr{U}^{1,2}.
\end{displaymath}
\subsection{Continuum model}
\label{sec:formulation:continuum}
From the atomistic model, a continuum model can be derived by coarse graining; computationally, it allows for a reduction of the degrees of freedom where the deformation is smooth. A typical choice in the multi-scale context is the Cauchy--Born continuum model \cite{E:2007a, OrtnerTheil2012}. The Cauchy--Born strain energy density function $W : \mathbb{R}^{d \times d} \to \mathbb{R}$ is defined by
\begin{displaymath}
W(\textsf{F}) := (\det {\textsf{A}})^{-1}\, V(\textsf{F} \cdot \mathcal{R}).
\end{displaymath}
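The Cauchy--Born density simply evaluates the homogeneous site potential on the uniformly deformed interaction range, scaled by the unit cell volume. A minimal Python sketch (the toy pair-functional $V$ and the nearest-neighbour range in the test are illustrative assumptions):

```python
import math

def cauchy_born_density(F, V, R, det_A=1.0):
    """W(F) = det(A)^{-1} V(F . R): the site potential V evaluated on the
    homogeneously deformed interaction range {F rho : rho in R}."""
    FR = [tuple(sum(F[i][k] * rho[k] for k in range(len(rho)))
                for i in range(len(F)))
          for rho in R]
    return V(FR) / det_A

# Toy homogeneous potential: sum of (bond length - 1)^2 over the range
R = [(1, 0), (-1, 0), (0, 1), (0, -1)]
V = lambda gs: sum((math.hypot(*g) - 1.0) ** 2 for g in gs)
I2 = [[1.0, 0.0], [0.0, 1.0]]      # identity deformation gradient
S = [[1.1, 0.0], [0.0, 1.1]]       # 10% isotropic stretch
```

For the identity gradient the toy density vanishes; a $10\%$ stretch gives $4 \times 0.1^2 = 0.04$.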
\subsection{Generic Formulation of Energy-Based Atomistic/Continuum Coupling}
\label{sec:formulation:ac}
We give a generic formulation of the a/c coupling method, employing concepts and notation from various earlier works, such as \cite{Ortiz:1995a,Shenoy:1999a,Shimokawa:2004,2012-optbqce, COLZ2013}, adapted to the setting of this paper.
The computational domain $\Omega_R = \Omega_R^{\textrm{a}} \cup \Omega_R^{\textrm{c}} \subset \mathbb{R}^d$ is a simply connected, polygonal, closed set consisting of the atomistic region $\Omega_R^{\textrm{a}}$ and the continuum region $\Omega_R^{\textrm{c}}$, where $R$ is the radius of $\Omega_R$. Given the reference lattice $\Lambda$ with some local defects, we decompose the set $\Lambda^{\textrm{a},\textrm{i}} := \Lambda \cap \Omega^{\textrm{a}}_R = \Lambda^{\textrm{a}} \cup \Lambda^{\textrm{i}}$ into a core atomistic set $\Lambda^{\textrm{a}}$ and an interfacial atomistic set $\Lambda^{\textrm{i}}$ such that $\Lambda\cap D^{\textrm{def}} \subset \Lambda^{\textrm{a}}$, where $D^{\textrm{def}}$ represents the defect core. Let $\mathcal{T}^{\textrm{a}}_{h,R}$ (respectively $\mathcal{T}^{\textrm{i}}_{h,R}$) be the canonical triangulation induced by $\Lambda^{\textrm{a}}$ (respectively $\Lambda^{\textrm{i}}$), and let $\mathcal{T}^{\textrm{c}}_{h,R}$ be a shape-regular simplicial partition of the continuum region. We denote by $\mathcal{T}_{h,R} = \mathcal{T}^{\textrm{c}}_{h,R} \cup \mathcal{T}^{\textrm{i}}_{h,R} \cup \mathcal{T}^{\textrm{a}}_{h,R}$ the triangulation of the a/c coupling configuration. See Figure \ref{figs:plotMesh} for an illustration of the computational mesh.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=0.7]{./plotmesh.eps}
\caption{Illustration of the computational mesh. The computational domain is $\Omega_R$, and the corresponding triangulation is $\mathcal{T}_{h,R}$. The circles ($\circ$) in the DimGrey region are atoms of $\Lambda^{\textrm{a}}$. For next-nearest-neighbour interaction, $\Lambda^{\textrm{i}}$ contains the filled circles ($\bullet$) in the LightGrey interface region. The squares ($\Box$) are continuum degrees of freedom.}
\label{figs:plotMesh}
\end{center}
\end{figure}
The space of \textit{coarse-grained} displacements is
\begin{align*}
\mathscr{U}_{h,R} := \big\{ u_h : \Omega_{h, R} \to \mathbb{R}^m \,\big|\, ~&
\text{ $u_h$ is continuous and p.w. affine w.r.t. $\mathcal{T}_{h,R}$, } \\[-1mm]
& \text{ $u_h = 0$ on $\partial \Omega_R$ } \big\}.
\end{align*}
The subscript $R$ in the above definitions can be dropped when there is no confusion; for example, we may write $\mathcal{T}_h$ instead of $\mathcal{T}_{h,R}$.
Let $\textrm{vor}(\ell)$ denote the Voronoi cell associated with $\ell$ in the homogeneous reference lattice $\Lambda^{\textrm{hom}} = {\textsf{A}} \mathbb{Z}^d$; we have $|\textrm{vor}(\ell)| = \det {\textsf{A}}$ for each $\ell \in \Lambda^{\textrm{hom}}$. For each $\ell \in \Lambda$, denote its effective cell by $\nu_\ell$ (see \cite{PRE-ac.2dcorners}), and let $\omega_\ell:=\displaystyle{\frac{|\nu_\ell|}{|\textrm{vor}(\ell)|}}$ be the effective volume associated with $\ell$. For each element $T \in \mathcal{T}_h$ we define the effective volume of $T$ by
\begin{displaymath}
\omega_T := \Big|T \setminus \Big(\bigcup_{\ell \in \Lambda^{\textrm{a}}} {\textrm{vor}}(\ell)\Big)\setminus\Big(\bigcup_{\ell \in \Lambda^{\textrm{i}}} \nu^{\textrm{i}}_\ell\Big)\Big|.
\end{displaymath}
We note that $\omega_T =0$ if $T\in \mathcal{T}^{\textrm{a}}_h\setminus\mathcal{T}^{\textrm{i}}_h$, $\omega_T= |T|$ if $T\in \mathcal{T}^{\textrm{c}}_h\setminus \mathcal{T}^{\textrm{i}}_h$, and $0\leq \omega_T < |T|$ if $T\in \mathcal{T}^{\textrm{i}}_h$, depending on how we define $\nu_{\ell}^{\textrm{i}}$, the effective cell of $\ell\in\Lambda^{\textrm{i}}$.
The choices of $\nu_\ell$ and $\omega_T$ satisfy $\sum_{\ell\in \Lambda^{\textrm{a},\textrm{i}}} |\nu_\ell| + \sum_{T\in \mathcal{T}_h} \omega_T = |\Omega_{h,R}|$.
Now we are ready to define the generic a/c coupling energy functional $\mathscr{E}^{\textrm{h}}$,
\begin{align}
\label{eq:generic_ac_energy}
\mathscr{E}^{\textrm{h}}(y_h) := & \sum_{ \ell \in \Lambda^{\textrm{a}}} V_\ell(y_h) + \sum_{\ell \in \Lambda^{\textrm{i}}} \omega_\ell V^{\textrm{i}}_\ell(y_h) + \sum_{T \in \mathcal{T}_h} \omega_T W(\nabla y_h|_T),
\end{align}
where $V_\ell^{\textrm{i}}$ is a modified interface site potential.
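The assembly of the generic a/c coupling energy, a weighted sum over core sites, interface sites, and continuum elements, can be sketched as follows. The function signature and the trivial numbers in the test are illustrative assumptions; they only mirror the structure of \eqref{eq:generic_ac_energy}.

```python
def ac_energy(core, interface, elements, V, V_int, W, omega_site, omega_elem):
    """Generic a/c coupling energy (sketch):
       E^h = sum_{l in core} V(l)
             + sum_{l in interface} omega_l * V^i(l)
             + sum_{T in elements}  omega_T * W(T)."""
    return (sum(V(l) for l in core)
            + sum(omega_site[l] * V_int(l) for l in interface)
            + sum(omega_elem[T] * W(T) for T in elements))

# Trivial one-site / one-element example: 2 + 0.5*4 + 3*1 = 7
E = ac_energy(core=['a'], interface=['i'], elements=['T'],
              V=lambda l: 2.0, V_int=lambda l: 4.0, W=lambda T: 1.0,
              omega_site={'i': 0.5}, omega_elem={'T': 3.0})
```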
The goal of a/c coupling is to find
\begin{equation}
\label{eq:min_ac}
y_{h,R} \in \arg\min \big\{ \mathscr{E}^{\textrm{h}}(y_h) \,\big|\, y_h - y^{\textsf{B}} \in \mathscr{U}_{h,R} \big\}.
\end{equation}
The subscript $R$ in $y_{h,R}$ and $\mathscr{U}_{h,R}$ can be omitted when there is no confusion.
The first variation of the a/c coupling variational problem \eqref{eq:min_ac} is to find $y_h-y^{\textsf{B}} \in \mathscr{U}_{h, R}$ such that
\begin{equation}
\label{eqn:firstvariationeh}
\langle\delta\mathscr{E}^{\textrm{h}}(y_h), v_h\rangle = 0, \quad\forall v_h\in \mathscr{U}_{h, R}.
\end{equation}
Spurious forces can occur at the interface of energy-based couplings, even for homogeneous deformations \cite{Shenoy:1999a}; they are dubbed ``ghost forces''. The issue of ghost force removal has received considerable attention in recent years, and consistent a/c coupling methods without ghost forces were developed in \cite{Shimokawa:2004,E:2006} in one dimension and in \cite{PRE-ac.2dcorners,Shapeev:2010a} in two dimensions. We introduce the consistent GRAC formulation in \S~\ref{sec:grac}.
\section{General GRAC formulation}
\label{sec:grac}
In this section, we describe the construction of the \textit{geometric reconstruction based consistent atomistic/continuum} (GRAC) coupling energy for multibody potentials with general interaction range and arbitrary interfaces.
Given the homogeneous site potential $V\big( D y(\ell) \big)$, we can represent the interface potential $V_\ell^{\textrm{i}}$ in \eqref{eq:generic_ac_energy} in terms of $V$. For each $\ell \in \Lambda^{\textrm{i}}$ and $\rho, \varsigma \in \mathcal{R}_\ell$, let $C_{\ell;\rho,\varsigma}$ be free parameters, and define
\begin{equation}
\label{eq:defn_Phi_int}
V_\ell^{\textrm{i}}(y) := V \Big( \big( {\textstyle \sum_{\varsigma \in \mathcal{R}_\ell} C_{\ell;\rho,\varsigma} D_\varsigma y(\ell) } \big)_{\rho \in \mathcal{R}_\ell} \Big).
\end{equation}
A convenient short-hand notation is
\begin{displaymath}
V_\ell^{\textrm{i}}(y) = V( C_\ell \cdot Dy(\ell) ), \quad \text{where}
\quad
\begin{cases}
C_\ell := (C_{\ell;\rho,\varsigma})_{\rho,\varsigma \in \mathcal{R}_\ell}, & \text{and} \\
C_\ell \cdot Dy := \big( {\textstyle \sum_{\varsigma \in \mathcal{R}_\ell} C_{\ell;\rho,\varsigma} D_\varsigma y } \big)_{\rho \in \mathcal{R}_\ell}. &
\end{cases}
\end{displaymath}
We call the parameters $C_{\ell;\rho,\varsigma}$ the \textit{reconstruction parameters}.
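The reconstruction operator $C_\ell \cdot Dy$ is a linear recombination of finite differences. A minimal Python sketch (dictionaries keyed by lattice vectors are our own illustrative data layout):

```python
def reconstruct(C, Dy):
    """(C . Dy)_rho = sum_sigma C[rho][sigma] * D_sigma y, applied componentwise
    to the vector-valued finite differences D_sigma y."""
    dim = len(next(iter(Dy.values())))   # spatial dimension of the differences
    return {rho: tuple(sum(C[rho][s] * Dy[s][k] for s in Dy) for k in range(dim))
            for rho in C}

# Identity reconstruction (C_{rho,sigma} = delta_{rho sigma}) leaves Dy unchanged
Dy = {(1, 0): (1.0, 0.0), (0, 1): (0.5, 1.0)}
C_id = {r: {s: (1.0 if r == s else 0.0) for s in Dy} for r in Dy}
```

The identity choice trivially satisfies the patch tests in the interior; the nontrivial reconstruction parameters are only needed near the interface.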
To construct a consistent a/c coupling energy, we need to enforce the so-called \textit{patch tests} for the energy functional $\mathscr{E}^{\textrm{h}}$, namely, the energy patch test \eqref{eq:energy_pt} and the force patch test \eqref{eq:force_pt}. These patch tests in turn prescribe the conditions \eqref{eq:E_pt_grac} and \eqref{eq:F_pt_grac} for the reconstruction parameters $C$. In general, the reconstruction parameters satisfying the patch tests are not unique; an $\ell^{1}$-minimization technique can be introduced to choose the ``optimal'' parameters \cite{COLZ2013}. Also, a stabilisation mechanism can be applied to improve the stability of the GRAC coupling scheme \cite{2013-stab.ac}.
\subsection{Energy patch test}
To guarantee that $\mathscr{E}^{\textrm{h}}$ approximates the atomistic energy $\mathscr{E}=\sum_{\ell\in\Lambda}V_{\ell}(y)$, it is reasonable to require that the interface potentials satisfy an \textit{energy patch test},
\begin{equation}
\label{eq:energy_pt}
V_\ell^{\textrm{i}}(y^{\textsf{F}}) = V(y^{\textsf{F}}) \qquad \forall\,\ell \in \Lambda^{\textrm{i}}, \quad \textsf{F} \in \mathbb{R}^{m \times d};
\end{equation}
namely, the interface potential coincides with the atomistic potential for uniform deformations.
For the GRAC coupling scheme, a necessary and sufficient condition for the energy patch test is that $\textsf{F} \cdot \mathcal{R}_\ell = C_\ell \cdot (\textsf{F} \cdot \mathcal{R}_\ell)$ for all $\textsf{F} \in \mathbb{R}^{m \times d}$ and $\ell \in \Lambda^{\textrm{i}}$. This is equivalent to
\begin{equation}
\label{eq:E_pt_grac}
\rho = \sum_{\varsigma \in \mathcal{R}_\ell} C_{\ell; \rho,\varsigma}\, \varsigma \qquad
\forall\, \ell \in \Lambda^{\textrm{i}}, \quad \rho \in \mathcal{R}_\ell.
\end{equation}
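The geometric condition $\rho = \sum_\varsigma C_{\ell;\rho,\varsigma}\,\varsigma$ is easy to verify numerically for a candidate reconstruction. A minimal Python check (a hypothetical helper, not part of the paper's implementation):

```python
def energy_patch_test(C, tol=1e-12):
    """Check rho == sum_sigma C[rho][sigma] * sigma for every direction rho,
    i.e. the algebraic form of the energy patch test condition."""
    for rho, row in C.items():
        rec = tuple(sum(c * s[k] for s, c in row.items()) for k in range(len(rho)))
        if any(abs(rec[k] - rho[k]) > tol for k in range(len(rho))):
            return False
    return True

R = [(1, 0), (0, 1)]
C_id = {r: {s: (1.0 if r == s else 0.0) for s in R} for r in R}   # passes
C_bad = {r: {s: 0.5 for s in R} for r in R}                        # fails
```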
\subsection{Force patch test}
\label{sec:force_pt_lineqn}
We call the following condition the \textit{force patch test}: for $\Lambda = \Lambda^{\textrm{hom}}$ and $V_{\ell} = V$,
\begin{equation}
\label{eq:force_pt}
\langle \delta \mathscr{E}^{\textrm{h}}(y^{\textsf{F}}), v_h \rangle = 0 \qquad \forall v_h \in \mathscr{U}_h,
\quad \textsf{F} \in \mathbb{R}^{m \times d},
\end{equation}
where $y^{\textsf{F}}$ denotes the deformation with uniform deformation gradient $\textsf{F}$. This says that there is no artificial ``ghost force'' under uniform deformations.
From the general GRAC formulation \eqref{eq:generic_ac_energy}, we can decompose the first variation of the a/c coupling energy into three parts,
\begin{equation}
\langle\delta\mathscr{E}^{\textrm{h}}(y^{\textsf{F}}), v_h\rangle = \langle\delta\mathscr{E}^{\textrm{a}}(y^{\textsf{F}}), v_h\rangle
+ \langle\delta\mathscr{E}^{\textrm{i}}(y^{\textsf{F}}), v_h\rangle + \langle\delta\mathscr{E}^{\textrm{c}}(y^{\textsf{F}}), v_h\rangle, \quad \forall v_h \in \mathscr{U}_h.
\label{eq:threeparts}
\end{equation}
To simplify the notation, we drop the $y^{\textsf{F}}$ dependence from the expressions in this section; for example, we write $\mathscr{E}^{\textrm{a}}$ instead of $\mathscr{E}^{\textrm{a}}(y^{\textsf{F}})$, $\nabla_\rho V$ instead of $\nabla_\rho V(Dy^{\textsf{F}})$, and so forth. Here, $\nabla_\rho V$ denotes the partial derivative of $V$ with respect to the $D_\rho y$ component. Since $\nabla_\rho V = -\nabla_{-\rho} V$, we only consider half of the directions in the interaction range: fix $\mathcal{R}^+ \subset \mathcal{R}$ such that $\mathcal{R}^+ \cup (-\mathcal{R}^+) = \mathcal{R}$ and $\mathcal{R}^+ \cap (-\mathcal{R}^+) = \emptyset$.
As proposed in \cite{COLZ2013}, a necessary and sufficient condition on the reconstruction parameters $C_\ell$ to satisfy the force patch test \eqref{eq:force_pt} for all $V \in C^\infty((\mathbb{R}^d)^{\mathcal{R}})$ is
\begin{equation}
\label{eq:F_pt_grac}
c^{\textrm{a}}_\rho(\ell) + c^{\textrm{i}}_\rho(\ell) + c^{\textrm{c}}_\rho(\ell) = 0,
\end{equation}
for $\ell\in\Lambda^{\textrm{i}}+\mathcal{R}$ and $\rho\in\mathcal{R}^+$. The coefficients $c^{\textrm{a}}_\rho(\ell)$, $c^{\textrm{i}}_\rho(\ell)$ and $c^{\textrm{c}}_\rho(\ell)$ are geometric parameters determined by the underlying lattice and the interface geometry, obtained by collecting the coefficients of the terms $\nabla_\rho V \cdot v_h(\ell)$ in the first variations of the a/c coupling energy, as in the following equations \eqref{eq:firstvar}. The interface coefficients $c^{\textrm{i}}_\rho(\ell)$ depend linearly on the unknown reconstruction parameters $C_{\ell;\rho,\varsigma}$. Then, by \eqref{eq:threeparts}, we have
\begin{align}
\begin{split}
\langle\delta\mathscr{E}^{\textrm{a}}, v_h\rangle & = \sum_{\ell\in\Lambda^{\textrm{a}} +\mathcal{R}}\sum_{\rho\in\mathcal{R}^+}
c^{\textrm{a}}_\rho(\ell) \big[\nabla_\rho V \cdot v_h(\ell)\big] \\
&= \sum_{\substack{\rho \in \mathcal{R}^+ \\ \ell \in
\Lambda^{\textrm{a}}-\rho}} \big[\nabla_\rho V \cdot v_h(\ell) \big] - \sum_{\substack{\rho \in
\mathcal{R}^+ \\ \ell\in\Lambda^{\textrm{a}}+\rho}}\big[ \nabla_\rho V \cdot v_h(\ell)\big], \\
\langle\delta\mathscr{E}^{\textrm{i}}, v_h\rangle & = \sum_{\ell\in\Lambda^{\textrm{i}}
+\mathcal{R}}\sum_{\rho\in\mathcal{R}^+}c^{\textrm{i}}_\rho(\ell) \big[ \nabla_\rho V \cdot v_h(\ell) \big] \\
&= \sum_{\substack{\varsigma \in \mathcal{R} \\ \ell \in
\Lambda^{\textrm{i}}+\varsigma}} \omega_{\ell-\varsigma}^{\textrm{i}} \sum_{\rho\in\mathcal{R}^+}
(C_{\ell-\varsigma;\rho,\varsigma}-C_{\ell-\varsigma;-\rho,\varsigma}) \big[\nabla_\rho V \cdot v_h(\ell)\big]\\
& \qquad
-\sum_{\ell\in\Lambda^{\textrm{i}}} \omega_\ell^{\textrm{i}} \sum_{\rho\in\mathcal{R}^+}
\sum_{\varsigma\in\mathcal{R}} (C_{\ell;\rho,\varsigma}-C_{\ell;-\rho,\varsigma}) \big[\nabla_\rho
V \cdot v_h(\ell) \big], \quad \text{and} \\
\langle\delta\mathscr{E}^{\textrm{c}}, v_h\rangle &= \sum_{T\in\mathcal{T}_h}\sum_{\rho\in\mathcal{R}^+}\sum_{i=1}^3
2\frac{\omega_T}{\det{\textsf{A}}}\nabla_T\phi_i^T\cdot \rho \,\big[ {\nabla_\rho V} \cdot v^T_{h,i} \big],
\end{split}
\label{eq:firstvar}
\end{align}
where the nodes $\ell_{i}^{T}$ are the three corners of the triangle $T$, $v_{h,i}^T = v_h(\ell_i^T)$, and $\phi_i^T$ are the three nodal linear basis functions corresponding to $v_{h,i}^T$, $i = 1,2,3$.
The force patch test is automatically satisfied by the atomistic model and by the Cauchy--Born continuum model. Therefore, we only need to consider the force consistency for sites in the extended interface region $\Lambda^{\textrm{i}}+\mathcal{R}:=\{\ell\in\Lambda \,|\, \exists\, \ell'\in\Lambda^{\textrm{i}}, \exists\,\rho\in\mathcal{R},\text{ such that } \ell = \ell'+\rho\}$. Thus \eqref{eq:F_pt_grac} for $\ell\in\Lambda^{\textrm{i}}+\mathcal{R}$, together with \eqref{eq:E_pt_grac} for $\ell\in\Lambda^{\textrm{i}}$, forms a linear system for the unknown parameters $C_{\ell;\rho,\varsigma}$.
\begin{remark}
In \cite{PRE-ac.2dcorners,COLZ2013}, we choose $\nu_\ell^{\textrm{i}} = {\textrm{vor}}(\ell)$ for all $\ell \in \Lambda^{\textrm{i}}$. To reduce the number of unknown reconstruction parameters and also the number of constraint equations, we can use a variation of the local reflection method in \cite{COLZ2013}. Namely, we choose $\nu_\ell^{\textrm{i}} = {\textrm{vor}}(\ell) \cap \Omega^{\textrm{a}}$ and enforce $C_{\ell; \rho,\varsigma} = 0$ for $\ell \in \Lambda^{\textrm{i}}$ and $\ell+\varsigma\in \Omega^{\textrm{c}}$; then we only need to impose the force balance equations for $(\Lambda^{\textrm{i}} + \mathcal{R}) \cap \Lambda^{\textrm{a}}$. It has been shown in \cite{COLZ2013} that the linear system for the reconstruction parameters is underdetermined: a solution exists (up to numerical accuracy) but is not unique.
\label{rm:rank_def}
\end{remark}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{./effvol_W.eps}
\caption{Effective Voronoi cells for the interface nodes (filled circles) are the shaded areas in the figure. $\omega_\ell^{\textrm{i}}<1$ for the outermost interface atoms, which are adjacent to the continuum region.}\label{fig:effvol}
\end{center}
\end{figure}
\subsection{Consistency and Optimisation of Reconstruction Parameters}
\label{sec:optim_coeffs}
Due to the non-uniqueness of the reconstruction parameters $C_{\ell;\rho,\varsigma}$, we need to choose the ``optimal'' parameters. A naive idea is to use a least-squares approach and minimize $\|C\|_{\ell^2}$:
\begin{equation}
\label{eq:least_squares}
\text{minimize } \sum_{\ell \in \Lambda^{\textrm{i}}} \sum_{\rho,\varsigma \in
\mathcal{R}_\ell} |C_{\ell;\rho,\varsigma}|^2 \quad \text{ subject to \eqref{eq:E_pt_grac} and \eqref{eq:F_pt_grac}.}
\end{equation}
However, the resulting parameters do not give a convergent method.
It is shown in \cite[Thm.~6.1]{Or:2011a} that, under the assumptions that $d = 2$ and that the atomistic region $\Omega^{\textrm{a}}$ is connected, any a/c coupling scheme of the type \eqref{eq:generic_ac_energy} satisfying the energy and force patch tests \eqref{eq:energy_pt} and \eqref{eq:force_pt} is first-order consistent: if $y = y^{\textsf{B}}$ in $\Lambda \setminus \Omega_R$ and if $\tilde{y}$ is an $H^2_{\textrm{loc}}$-conforming interpolant of $y$, then
\begin{equation}
\label{eq:consistency}
\big\langle \delta \mathscr{E}(y) - \delta \mathscr{E}^{\textrm{h}}(I_h y), v_h \big\rangle \leq C_1
\| h \nabla^2 \tilde{y} \|_{L^2(\tilde{\Omega}^{\textrm{c}})} \|\nabla v_h\|_{L^2},
\end{equation}
where $C_1$ is independent of $y$. The dependence of $C_1$ on the reconstruction parameters $C_\ell$ is analysed in \cite{COLZ2013}:
\begin{align}
\label{eq:C1_dependence}
& C_1 \leq C_1' \, (1 + \textrm{width}(\Lambda^{\textrm{i}})) \, \sum_{\rho,\varsigma \in
\mathcal{R}} |\rho|\,|\varsigma|\, M_{\rho,\varsigma} + C_1'', \\
\notag & \text{where} \quad M_{\rho,\varsigma} = \max_{\ell \in \Lambda^{\textrm{i}}}
\sum_{\tau,\tau' \in \mathcal{R}_\ell} |V_{\tau,\tau'}(C_\ell \cdot Dy(\ell))| \,
|C_{\ell;\tau,\rho}|\,|C_{\ell;\tau',\varsigma}|.
\end{align}
Here $C_1'$ is a generic constant and $C_1''$ does not depend on the reconstruction parameters.
Intuitively, one may think of
\begin{displaymath}
M(\ell) := \sum_{\rho,\varsigma} |\rho|\,|\varsigma| \sum_{\tau,\tau'}
\big|V_{\tau,\tau'}(C_\ell \cdot Dy(\ell))\big| \,
|C_{\ell;\tau,\rho}|\,|C_{\ell;\tau',\varsigma}|
\end{displaymath}
as a realistic ($\ell$-dependent) pre-factor. Under the generic structural assumption $|V_{\tau,\tau'}(C_\ell \cdot Dy(\ell))| \lesssim \omega(|\tau|)\,\omega(|\tau'|)$, where $\omega$ has a decay determined by the specific interaction potential (see the discussion of an EAM-type potential in \cite[App.~B.2]{OrtnerTheil2012}), we obtain
\begin{align*}
M(\ell) &\lesssim \sum_{\rho,\varsigma} |\rho|\,|\varsigma| \sum_{\tau,\tau'}
\omega(|\tau|)\,\omega(|\tau'|) \,
|C_{\ell;\tau,\rho}|\,|C_{\ell;\tau',\varsigma}| \\
&= \Big(\sum_{\rho,\tau} |\rho|\, \omega(|\tau|)\,
|C_{\ell;\tau,\rho}|\Big)\,
\Big(\sum_{\varsigma,\tau'} |\varsigma|\, \omega(|\tau'|)\,
|C_{\ell;\tau',\varsigma}|\Big) \\
&= \Big(\sum_{\rho,\tau} |\rho|\, \omega(|\tau|)\,
|C_{\ell;\tau,\rho}|\Big)^2.
\end{align*}
This indicates that, instead of $\|C\|_{\ell^2}$, we should minimise
$\max_{\ell \in \Lambda^{\textrm{i}}} \sum_{\rho,\tau} |\rho|\, \omega(|\tau|)\,
|C_{\ell;\tau,\rho}|$. Since we do not in general know the generic
weights $\omega$, we simply drop them and instead minimise
$\sum_{\rho,\tau} |C_{\ell;\tau,\rho}|$. Further, taking the maximum
over $\ell \in \Lambda^{\textrm{i}}$ leads to a difficult and computationally expensive
multi-objective optimisation problem. Instead, we propose to minimise
the $\ell^1$-norm of all the coefficients:
\begin{equation}
\label{eq:ell1-min}
\text{ minimise } \sum_{\ell \in \Lambda^{\textrm{i}}} \sum_{\rho,\varsigma \in
\mathcal{R}_\ell} |C_{\ell;\rho,\varsigma}| \quad \text{ subject to
\eqref{eq:E_pt_grac} and \eqref{eq:F_pt_grac}.}
\end{equation}
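To see why the $\ell^1$ objective favours sparse solutions of an underdetermined constraint system while least squares does not, consider a minimal Python sketch. The single constraint $c_1 + 2c_2 = 2$ and the brute-force grid search are toy stand-ins of our own, not the paper's actual patch-test linear system.

```python
def argmin_on_line(cost, step=1e-3, lo=-3.0, hi=3.0):
    """Brute-force minimiser of cost(c2) over a grid; c1 is eliminated via the
    toy constraint c1 + 2*c2 = 2, i.e. c1 = 2 - 2*c2."""
    best = min((lo + step * k for k in range(int((hi - lo) / step) + 1)), key=cost)
    return 2 - 2 * best, best   # (c1, c2)

# l1-minimiser is sparse: (c1, c2) ~ (0, 1); l2-minimiser is not: ~ (0.4, 0.8)
l1 = argmin_on_line(lambda c2: abs(2 - 2 * c2) + abs(c2))
l2 = argmin_on_line(lambda c2: (2 - 2 * c2) ** 2 + c2 ** 2)
```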
We remark that, with these simplifications, the reconstruction parameters depend only on the interaction range and not on the particular form of the interaction potential. Moreover, $\ell^1$-minimisation tends to generate ``sparse'' reconstruction parameters, which may reduce the computational cost of the energy and force assembly routines for $\mathscr{E}^{\textrm{h}}$.
\subsection{Stability and stabilisation}
\label{sec:stab}
The stability of a/c coupling methods is of great importance for the numerical analysis, and the issue of critical stability is essential for practitioners.
The stability and (in)stability of a/c coupling schemes are discussed in detail in \cite{2013-stab.ac}; in particular, a stabilisation mechanism is proposed there to reduce the stability gap between the a/c coupling methods and the corresponding atomistic model (the ground truth). We sketch those results in this subsection.
We expect the a/c coupling method to satisfy a stability estimate of the form
\begin{equation}
\label{eq:stability}
\langle \delta^2 \mathscr{E}^{\textrm{h}}(I_h y) v_h, v_h \rangle
\geq c_0 \| \nabla v_h \|_{L^2}^2.
\end{equation}
Together with consistency and other technical assumptions, this allows us to prove the convergence of the a/c coupling method via the inverse function theorem, with an estimate of the form $\|u - u_h\| \lesssim \text{(consistency error)} / c_0$.
We also observe that, for any uniform deformation, the stability constant of an a/c coupling method is less than or equal to the stability constant of the corresponding atomistic model. This motivates the following stabilisation of the interface potential,
\begin{equation}
\label{eq:qnl_stab}
V_\ell^{\textrm{i}}(y_h) :=
V \big( C_\ell \cdot Dy_h(\ell) \big) + \kappa \big|D_{\textrm{nn}}^2 y_h(\ell)\big|^2,
\end{equation}
where $\kappa \geq 0$ is a stabilisation parameter, and $|D_{\textrm{nn}}^2 y_h(\ell)|^2$ is defined as follows: we choose $m \geq d$ linearly independent ``nearest-neighbour'' directions $a_1, \dots, a_m$ in the lattice, and set
\begin{displaymath}
\big|D_{\textrm{nn}}^2 y_h(\ell)\big|^2 := \sum_{j = 1}^m \big| y_h(\ell+a_j) -
2 y_h(\ell) + y_h(\ell-a_j) \big|^2.
\end{displaymath}
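The stabilising term vanishes on homogeneous deformations, so it does not perturb uniform states. A minimal numerical sketch (the directions and deformations below are illustrative choices, not the paper's model problem):

```python
import numpy as np

def d2nn_sq(y, ell, dirs):
    """|D_nn^2 y(ell)|^2: sum of squared second differences of y at ell."""
    return sum(
        float(np.sum((y(ell + a) - 2.0 * y(ell) + y(ell - a)) ** 2))
        for a in dirs
    )

a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])
dirs = [a1, a2, a2 - a1]            # three independent NN directions

F = np.array([[1.1, 0.2], [0.0, 0.9]])
affine = lambda x: F @ x + np.array([0.3, -0.1])   # homogeneous deformation
bent   = lambda x: affine(x) + 0.05 * np.array([x[0] ** 2, 0.0])

ell = np.array([2.0, 1.0])
```

For the affine map the second differences cancel exactly, while the perturbed map picks up a strictly positive stabilisation energy.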
If $\kappa$ is an $O(1)$ constant, the stabilisation term does not affect consistency, and the reconstruction parameters $C_\ell$ are still determined by \eqref{eq:ell1-min}. However, the stabilisation does affect the computation of the stress tensor in the interface region, which we introduce in the following section.
In \cite{2013-stab.ac}, we proved theoretically for certain symmetric configurations, and verified numerically for prototypical examples, that such a stabilisation reduces the stability gap and suppresses the spurious critical modes of the a/c coupling methods.
\section{Stress formulation}
\label{sec:stress}
The stress formulation plays a significant role in the analysis of a/c coupling methods. Given a deformation $y$ with $y-y^{\textsf{B}}\in \mathscr{U}^{1,2}$ and an energy functional $\mathscr{E}(y)$, the first Piola--Kirchhoff stress $\sigma(y)$ satisfies
\begin{displaymath}
\langle\delta\mathscr{E}(y), v\rangle = \int_{\mathbb{R}^d}\sigma(y):\nabla v \,{\rm d}x
\end{displaymath}
for any $v \in \mathscr{U}^{1,2}$.
From the first variation of the a/c energy functional \eqref{eq:threeparts} and \eqref{eq:firstvar}, we expect to derive its stress formulation: for any $v_h\in\mathscr{U}^{\textrm{h}}$,
\begin{align*}
\langle\delta\mathscr{E}^{\textrm{h}}(y_h), v_h\rangle = & \langle\delta\mathscr{E}^{\textrm{a}}, v_h\rangle + \langle\delta\mathscr{E}^{\textrm{i}}, v_h\rangle + \langle\delta\mathscr{E}^{\textrm{c}}, v_h\rangle \\
= & \int_{\Omega^{\textrm{a}}}\sigma^{\textrm{a}}:\nabla v_h \,{\rm d}x + \int_{\Omega^{\textrm{i}}}\sigma^{\textrm{i}}:\nabla v_h \,{\rm d}x + \int_{\Omega^{\textrm{c}}}\sigma^{\textrm{c}}:\nabla v_h \,{\rm d}x \\
= & \int_{\Omega} \sigma^{\textrm{h}} : \nabla v_h \,{\rm d}x.
\end{align*}
In this section, we first review, in \S~\ref{sec:stress:apriori}, the stress tensor formula derived in \cite{OrtnerTheil2012}, which is essential for the a priori analysis but not suitable for a posteriori estimates. In \S~\ref{sec:stress:apost} we introduce a novel computable stress tensor expression, which is convenient for a posteriori error estimation and adaptive algorithms. We discuss the assembly of the stress tensor for our model problem in \S~\ref{sec:stress:assm}.
\subsection{The Stress Tensor Formulation of \cite{OrtnerTheil2012}}
\label{sec:stress:apriori}
We first discuss the formula for $\sigma^{\textrm{a}}$. For simplicity, consider the full atomistic energy $\mathscr{E}$ in \eqref{eqn:Ea}. The ``canonical weak form'' of $\delta\mathscr{E}$ is
\begin{equation}
\langle\delta \mathscr{E}(u), v\rangle = \sum_{\ell\in \Lambda}\sum_{\rho\in\mathcal{R}} V_{\ell, \rho}(u)\cdot D_\rho v(\ell), \quad \text{for } v\in \mathscr{U}^{1,2}.
\label{eq:weakform}
\end{equation}
Now we define a modified version of the canonical weak form. Let
\begin{equation}
v^{\ast} := \bar{\zeta} \ast \bar{v}.
\label{eq:vast}
\end{equation}
The finite differences $D_{\rho}v^{\ast}(\ell)$ can be expressed as in \cite{Shapeev2012},
\begin{align}
\nonumber D_{\rho}v^{\ast}(\ell) = & \int_{s=0}^{1}\nabla_{\rho}v^{\ast}(\ell+s\rho) \,{\rm d}s = \int_{\mathbb{R}^d}\int_{s=0}^{1}\bar{\zeta}(\ell+s\rho-x)\nabla_{\rho}\bar{v}(x)\,{\rm d}s\,{\rm d}x \\
=& \int_{\mathbb{R}^d} \chi_{\ell,\rho}(x)\nabla_{\rho}\bar{v}(x)\,{\rm d}x,
\label{eq:dvast}
\end{align}
where the weighting function $\chi_{\ell,\rho}$, defined below, can be understood as a mollified version of the line measure on the bond $(\ell,\ell+\rho)$:
\begin{equation}
\label{eq:chi}
\chi_{\ell,\rho}(x) := \int_{0}^{1}\bar{\zeta}(\ell+t\rho-x)\,{\rm d}t.
\end{equation}
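A one-dimensional sketch of \eqref{eq:chi}, with an assumed unit-mass hat function standing in for $\bar{\zeta}$: the resulting $\chi_{\ell,\rho}$ is nonnegative and integrates to one, as expected of a mollified line (here: interval) measure.

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule (kept explicit for self-containment)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def zeta_bar(x):
    return np.maximum(1.0 - np.abs(x), 0.0)   # hat kernel with unit mass

def chi(x, ell, rho, nt=801):
    """chi_{ell,rho}(x) = int_0^1 zeta_bar(ell + t*rho - x) dt."""
    t = np.linspace(0.0, 1.0, nt)
    return trapz(zeta_bar(ell + t * rho - x), t)

ell, rho = 0.0, 3.0
xs = np.linspace(-2.0, 6.0, 2001)
vals = np.array([chi(x, ell, rho) for x in xs])
mass = trapz(vals, xs)                         # should be close to 1
```

The plateau value $\approx 1/|\rho|$ in the interior of the bond reflects the line-measure interpretation.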
Now, we replace the test function $v$ in \eqref{eq:weakform} with $v^{\ast}$,
\begin{align*}
\langle\delta\mathscr{E}(y), v^{\ast}\rangle = & \sum_{\ell\in\Lambda^{\textrm{a}}}\sum_{\rho\in\mathcal{R}}\partial_{\rho}V_{\ell}\cdot\int_{\mathbb{R}^d} \chi_{\ell,\rho}(x)\nabla_{\rho}\bar{v}(x)\,{\rm d}x \\
=& \int_{\mathbb{R}^d}\left\{ \sum_{\ell\in\Lambda^{\textrm{a}}}\sum_{\rho\in\mathcal{R}}\left[\partial_{\rho}V_{\ell}\otimes\rho\right]\chi_{\ell,\rho}(x) \right\}:\nabla\bar{v}\,{\rm d}x.
\end{align*}
Thus, we have shown that, for $y-y^{\textsf{B}}\in\mathscr{U}^{1,2}$ and $v\in\mathscr{U}$ with compact support,
\begin{displaymath}
\langle\delta\mathscr{E}(y), v^{\ast}\rangle = \int_{\mathbb{R}^d}\sigma^{\textrm{a}}(y;x):\nabla\bar{v} \,{\rm d}x,
\end{displaymath}
with
\begin{equation}
\label{eq:sa:priori}
\sigma^{\textrm{a}}(y;x):=\sum_{\ell\in\Lambda^{\textrm{a}}}\sum_{\rho\in\mathcal{R}}\left[\partial_{\rho}V_{\ell}\otimes\rho\right]\chi_{\ell,\rho}(x).
\end{equation}
By an analogous argument, $\langle\delta\mathscr{E}^{\textrm{i}}(y), v^{\ast}\rangle$ can be written as
\begin{displaymath}
\langle\delta\mathscr{E}^{\textrm{i}}(y), v^{\ast}\rangle = \int_{\mathbb{R}^d}\sigma^{\textrm{i}}(y;x):\nabla\bar{v} \,{\rm d}x,
\end{displaymath}
where
\begin{equation}
\label{eq:si:priori}
\sigma^{\textrm{i}}(y;x):=\sum_{\ell\in\Lambda^{\textrm{i}}}\sum_{\rho\in\mathcal{R}}\left[\omega^{\textrm{i}}_\ell\,\partial_{\rho}V^{\textrm{i}}_{\ell}\otimes\rho\right]\chi_{\ell,\rho}(x).
\end{equation}
Finally, the first Piola--Kirchhoff stress of the Cauchy--Born model is
\begin{equation}
\label{eq:sc:priori}
\sigma^{\textrm{c}}(y;x)=\partial W(\nabla y(x)).
\end{equation}
Under suitable regularity assumptions on $V$ and $y$, it can be shown that $\sigma^{\textrm{c}}(y;x)$ is second-order consistent with $\sigma^{\textrm{a}}(y;x)$.
The stress tensor expressions introduced in this section are important for the a priori analysis of the Cauchy--Born continuum model in \cite{OrtnerTheil2012} and of the blended a/c coupling method in \cite{LiOrShVK:2014}. However, since $\sigma^{\textrm{a}}(y;x)$ and $\sigma^{\textrm{i}}(y;x)$ depend on $x$ through $\chi_{\ell,\rho}(x)$, their values are relatively difficult to compute, whereas computability is crucial for a posteriori error estimates and adaptive algorithms. In the next section, we introduce a computable stress tensor expression.
\subsection{A Computable Stress Tensor Formulation}
\label{sec:stress:apost}
For nearest-neighbour interactions \cite{APEst:2017}, we used the canonical expression of $\delta \mathscr{E}$ (resp.\ $\delta\mathscr{E}^{\textrm{h}}$) to define $\sigma^{\textrm{a}}$ (resp.\ $\sigma^{\textrm{h}}$). In that case, $\sigma^{\textrm{a}}$ ($\sigma^{\textrm{h}}$) is piecewise constant over the triangulation $\mathcal{T}_{\textrm{a}}$ ($\mathcal{T}_{\textrm{h}}$). In this section, we extend this formulation to general finite-range interactions.
Consider the canonical weak form of $\delta\mathscr{E}$ in \eqref{eq:weakform}. Instead of replacing $v$ with $v^{\ast}$, we distribute $D_\rho v$ over the relevant triangles, in order to transfer the sum over atoms to a sum over the elements of the micro-triangulation $\mathcal{T}_{\textrm{a}}$ induced by the reference lattice $\Lambda$. We express $D_\rho v$ as
\begin{equation}
D_{\rho}v(\ell) = \sum_{T\in\mathcal{T}_\ell^\rho}\omega_{\ell}^\rho (T)\,\nabla_T v\cdot\rho,
\label{eq:stress:drho}
\end{equation}
where $\mathcal{T}_\ell^\rho := \{T\in \mathcal{T}_{\textrm{a}} \,|\, \textrm{length}(T\cap(\ell, \ell+\rho))>0\}$ is the set of elements covering the bond $(\ell, \ell+\rho)$, and $\omega_{\ell}^\rho (T)$ is an appropriate weight function. The implementation details for the two-dimensional triangular lattice are discussed in \S~\ref{sec:stress:assm}.
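One natural choice (an assumption on our part, not necessarily the convention adopted in \S~\ref{sec:stress:assm}) is to take $\omega_\ell^\rho(T)$ proportional to the length of the bond segment inside $T$. The sketch below computes such weights by clipping the parameter interval $t\in[0,1]$ of the bond $\ell+t\rho$ against a triangle's three edge half-planes:

```python
import numpy as np

def overlap_fraction(ell, rho, tri):
    """Fraction of the bond segment ell + t*rho, t in [0, 1], inside tri."""
    lo, hi = 0.0, 1.0
    v = [np.asarray(p, dtype=float) for p in tri]
    for i in range(3):
        a, edge = v[i], v[(i + 1) % 3] - v[i]
        n = np.array([-edge[1], edge[0]])        # normal to this edge
        if np.dot(n, v[(i + 2) % 3] - a) < 0.0:  # orient it inward
            n = -n
        c0, c1 = np.dot(n, ell - a), np.dot(n, rho)
        if abs(c1) < 1e-14:                      # segment parallel to edge
            if c0 < 0.0:
                return 0.0
        elif c1 > 0.0:                           # constraint c0 + t*c1 >= 0
            lo = max(lo, -c0 / c1)
        else:
            hi = min(hi, -c0 / c1)
    return max(hi - lo, 0.0)

def bond_weights(ell, rho, triangles):
    """Normalised overlap fractions; a candidate for omega_ell^rho(T)."""
    fr = np.array([overlap_fraction(ell, rho, T) for T in triangles])
    return fr / fr.sum()

# unit square split along its diagonal into two micro-elements
T1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
T2 = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
w = bond_weights(np.array([0.0, 0.25]), np.array([1.0, 0.5]), [T1, T2])
```

For the bond above, which crosses the diagonal at its midpoint, the two elements each receive weight $1/2$, and the weights always sum to one.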
For any $y-y^{\textsf{B}}\in\mathscr{U}^{1,2}$ and $v\in\mathscr{U}^{1,2}$,
\begin{equation}
\begin{split}
\langle\delta\mathscr{E}(y), v\rangle&=\sum_{\ell \in \Lambda}\sum_{\rho \in \mathcal{R}_\ell}\partial_{\rho}V_{\ell}\cdot D_{\rho}v \\
&=\sum_{\ell \in \Lambda}\sum_{\rho \in \mathcal{R}_\ell}\partial_{\rho}V_{\ell}\cdot\bigg(\sum_{T\in\mathcal{T}_\ell^\rho}\omega_{\ell}^{\rho}(T)\,\nabla_{T} v\cdot\rho\bigg) \\
&=\sum_{\ell \in \Lambda}\sum_{\rho \in \mathcal{R}_\ell}\sum_{b=(\ell, \ell+\rho)\cap T \neq \emptyset}\frac{2|T|}{\det{\textsf{A}}}\,\omega_{\ell}^{\rho}(T)\,\partial_{\rho}V_{\ell}\otimes\rho:\nabla_{T} v \\
&=\sum_{T\in \mathcal{T}_{\textrm{a}}}\frac{|T|}{\det {\textsf{A}}}\sum_{\rho \in \mathcal{R}_\ell}\sum_{b=(\ell, \ell+\rho)\cap T \neq \emptyset}2\,\omega_{\ell}^{\rho}(T)\,\partial_{\rho}V_{\ell}\otimes\rho:\nabla_{T} v, \quad \forall v\in\mathscr{U}.
\end{split}
\label{eq:atomvar}
\end{equation}
Here $\omega_{\ell}^\rho(T)$ represents the contribution of $\partial_{\rho}V(\nabla_{T}y_h)\otimes\rho$ from element $T$, which depends on the specific type of $\rho\in\mathcal{R}$.
Recall from \eqref{eq:generic_ac_energy} and \eqref{eq:defn_Phi_int} that the GRAC a/c coupling energy functional has the form
\begin{displaymath}
\mathscr{E}^{\textrm{h}}(y_h) := \sum_{ \ell \in \Lambda^{\textrm{a}}} V_\ell(y_h) + \sum_{\ell \in \Lambda^{\textrm{i}}} \omega_\ell V^{\textrm{i}}_\ell(y_h) + \sum_{T \in \mathcal{T}_{\textrm{h}}} \omega_T W(\nabla y_h|_T),
\end{displaymath}
with
\begin{displaymath}
V_\ell^{\textrm{i}}(y) := V_{\ell} \Big( \Big( \sum_{\varsigma \in
\mathcal{R}_\ell} C_{\ell;\rho,\varsigma} D_\varsigma y(\ell) \Big)_{\rho \in
\mathcal{R}_\ell} \Big).
\end{displaymath}
With the decomposition of the first variation of GRAC in \eqref{eq:threeparts}, the first variation of the interface energy $\mathscr{E}^{\textrm{i}}$ takes the following form,
\begin{equation}
\begin{split}
\Big\langle\delta\sum_{\ell \in \Lambda^{\textrm{i}}} \omega_{\ell} V^{\textrm{i}}_{\ell}(y_h), v_{h}\Big\rangle &= \Big\langle\delta\sum_{\ell \in \Lambda^{\textrm{i}}} \omega_{\ell} V_{\ell} \Big( \Big( \sum_{\varsigma \in\mathcal{R}_\ell} C_{\ell;\rho,\varsigma} D_{\varsigma} y_h \Big)_{\rho \in\mathcal{R}_\ell} \Big), v_{h}\Big\rangle \\
&= \sum_{\ell \in \Lambda^{\textrm{i}}} \omega_\ell \sum_{\rho \in\mathcal{R}_\ell} \partial_{\rho}V_{\ell} \cdot \bigg(\sum_{\varsigma \in\mathcal{R}_\ell} C_{\ell;\rho,\varsigma} D_\varsigma v_h(\ell)\bigg) \\
&= \sum_{\ell \in \Lambda^{\textrm{i}}} \omega_\ell \sum_{\rho \in\mathcal{R}_\ell} \sum_{\varsigma \in\mathcal{R}_\ell} \partial_{\rho}V_{\ell}\, C_{\ell;\rho,\varsigma} \cdot \bigg(\Big(\sum_{b=(\ell, \ell+\varsigma)\cap T \neq \emptyset}\omega_{\ell}^{\varsigma}(T)\,\nabla_{T} v_{h}\Big)\cdot\varsigma \bigg) \\
&= \sum_{\ell \in \Lambda^{\textrm{i}}} \frac{2\omega_\ell |T|}{\det{\textsf{A}}} \sum_{\varsigma \in\mathcal{R}_\ell} \bigg(\sum_{\rho \in\mathcal{R}_\ell}C_{\ell;\rho,\varsigma}\, \partial_{\rho}V_{\ell} \otimes \varsigma\bigg) : \bigg(\sum_{b=(\ell, \ell+\varsigma)\cap T \neq \emptyset}\omega_{\ell}^{\varsigma}(T)\,\nabla_{T} v_{h}\bigg) \\
&= \sum_{T\in\mathcal{T}_{\textrm{h}}^{\textrm{i}}} \frac{|T|}{\det{\textsf{A}}} \sum_{\varsigma \in\mathcal{R}_\ell} \sum_{b=(\ell, \ell+\varsigma)\cap T \neq \emptyset} 2\,\omega_{\ell}\,\omega_{\ell}^{\varsigma}(T)\bigg(\sum_{\rho \in\mathcal{R}_\ell}C_{\ell;\rho,\varsigma}\, \partial_{\rho}V_{\ell} \otimes \varsigma\bigg) : \nabla_{T} v_{h}.
\end{split}
\label{eq:intfvar}
\end{equation}
For the continuum model, for any $y_h - y^{\textsf{B}}\in \mathscr{U}^{\textrm{h}}$ and $v_h\in\mathscr{U}^{\textrm{h}}$, we have
\begin{align}
\langle\delta\mathscr{E}^{\textrm{c}}(y_h),v_h\rangle &=\sum_{T\in\mathcal{T}_{\textrm{h}}}|T|\,\partial W(\nabla_{T} y_h):\nabla_{T} v_h \label{eq:contvar} \\
&= \sum_{T\in\mathcal{T}_{\textrm{h}}}\frac{|T|}{\det{\textsf{A}}}\sum_{\rho\in\mathcal{R}} \partial_{\rho} V(\nabla_T y_h) \otimes \rho:\nabla_{T} v_{h}. \nonumber
\end{align}
Adding the stabilisation term as in \eqref{eq:qnl_stab}, we obtain the first variation of the (stabilised) GRAC coupling method,
\begin{align}
\langle\delta\mathscr{E}^{\textrm{h}}(y_h),v_h\rangle&=\sum_{T\in\mathcal{T}_{\textrm{h}}^{\textrm{a}}\cup\mathcal{T}_{\textrm{h}}^{\textrm{i}}} \frac{|T|}{\det {\textsf{A}}} \sum_{\rho \in\mathcal{R}_\ell} \sum_{b=(\ell, \ell+\rho)\cap T \neq \emptyset}2\,\omega_{\ell}\, \omega_{\ell}^{\rho}(T)\,\partial_{\rho}V_{\ell}^{h} \otimes \rho : \nabla_{T} v_{h}
\nonumber \\
&\quad + \sum_{T\in\mathcal{T}_{\textrm{h}}} \frac{\omega_T}{\det {\textsf{A}}}\sum_{\rho\in\mathcal{R}} \partial_{\rho} V(\nabla_T y_h) \otimes \rho:\nabla_{T} v_{h} \label{eq:acvar} \\
&\quad + C_{\textrm{stab}}\sum_{T'\in\mathcal{T}_{\textrm{h}}^{\textrm{i}}}\sum_{\zeta\in\mathcal{R}_{\textrm{nn}}}\sum_{b'=(\ell, \ell+\zeta)\cap T' \neq \emptyset}2\,\omega_{\ell}^{\zeta}(T')\, \textrm{d}_{\textrm{nn}}^{3}(\ell, \zeta)\otimes\zeta : \nabla_{T'} v_h, \nonumber
\end{align}
where
\begin{displaymath}
\partial_{\rho}V_{\ell}^{h} := \begin{cases}
\partial_{\rho}V_{\ell}, & \ell\in\Lambda^{\textrm{a}}, \\
\sum_{\varsigma \in\mathcal{R}_\ell}C_{\ell;\varsigma,\rho}\, \partial_{\varsigma}V_{\ell}, & \ell\in\Lambda^{\textrm{i}},
\end{cases}
\end{displaymath}
and
\begin{displaymath}
\textrm{d}_{\textrm{nn}}^{3}(\ell, \zeta) := -y_h(\ell+2\zeta)+3y_h(\ell+\zeta)-3y_h(\ell)+y_h(\ell-\zeta).
\end{displaymath}
Here $C_{\textrm{stab}}$ is a constant equal to $1$ if stabilisation is applied, and $0$ otherwise.
We define the \textit{atomistic stress tensor} $\sigma^{\textrm{a}}$ (with respect to the micro-triangulation $\mathcal{T}_{\textrm{a}}$), the \textit{continuum stress tensor} $\sigma^{\textrm{c}}$, and the \textit{a/c stress tensor} $\sigma^{\textrm{h}}$ through the first variations of the respective models,
\begin{align}
\langle\delta\mathscr{E}^{\textrm{a}}(y),v\rangle&=\sum_{T\in\mathcal{T}_{\textrm{a}}}|T|\,\sigma^{\textrm{a}}(y;T):\nabla_{T} v, \quad \forall v\in\mathscr{U},
\label{eq:atomstress}\\
\langle\delta\mathscr{E}^{\textrm{c}}(y_h),v_h\rangle&=\sum_{T\in\mathcal{T}_{\textrm{h}}}|T|\,\sigma^{\textrm{c}}(y_h;T):\nabla_{T} v_h, \quad \forall v_h\in\mathscr{U}^{\textrm{h}},
\label{eq:contstress}\\
\langle\delta\mathscr{E}^{\textrm{h}}(y_h),v_h\rangle&=\sum_{T\in\mathcal{T}_{\textrm{h}}}|T|\,\sigma^{\textrm{h}}(y_h;T):\nabla_{T} v_h, \quad \forall v_h\in\mathscr{U}^{\textrm{h}}.
\label{eq:acstress}
\end{align}
From the above discussion, we have the following ``canonical'' choices for $\sigma^{\textrm{a}}$, $\sigma^{\textrm{c}}$ and $\sigma^{\textrm{h}}$:
\begin{align}
\label{eq:defn_Sa}
\sigma^{\textrm{a}}(y; T) :=~& \frac{1}{\det {\textsf{A}}}\sum_{\rho \in \mathcal{R}_\ell}\sum_{b=(\ell, \ell+\rho)\cap T \neq \emptyset}2\,\omega_{\ell}^{\rho}(T)\,\partial_{\rho}V_{\ell}\otimes\rho, \\
\label{eq:defn_Sc}
\sigma^{\textrm{c}}(y_h; T) :=~& \frac{1}{\det {\textsf{A}}}\sum_{\rho\in\mathcal{R}} \partial_{\rho} V(\nabla_T y_h) \otimes \rho, \\
\label{eq:defn_Sh}
\sigma^{\textrm{h}}(y_h; T) :=~&\frac{1}{\det {\textsf{A}}}\bigg(\sum_{\rho \in\mathcal{R}_\ell} \sum_{b=(\ell, \ell+\rho)\cap T \neq \emptyset} 2\,\omega_{\ell}\,\omega_{\ell}^{\rho}(T)\, \partial_{\rho}V_{\ell}^{h} \otimes \rho \\
& +C_{\textrm{stab}}\sum_{\zeta\in\mathcal{R}_{\textrm{nn}}}\sum_{\substack{b'=(\ell, \ell+\zeta)\cap T' \neq \emptyset \\ T'\in\mathcal{T}^{\textrm{i}}}} 2\,\omega_{\ell}^{\zeta}(T')\, \textrm{d}_{\textrm{nn}}^{3}(\ell,\zeta)\otimes\zeta \bigg) + \frac{\omega_T}{|T|}\, \sigma^{\textrm{c}}(y_h; T). \nonumber
\end{align}
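A quick numerical sketch of the canonical continuum stress \eqref{eq:defn_Sc} for an illustrative pair potential (the potential and the nearest-neighbour-only range below are stand-ins, not the paper's model problem): with a hexagonally symmetric interaction range, $\sigma^{\textrm{c}}$ evaluated at the identity deformation gradient comes out isotropic, i.e.\ proportional to the identity matrix.

```python
import numpy as np

phi_prime = lambda r: -12.0 * r**-13 + 6.0 * r**-7   # d/dr of an LJ-type phi

# six nearest-neighbour directions of the triangular lattice
A6 = np.array([[np.cos(np.pi / 3), -np.sin(np.pi / 3)],
               [np.sin(np.pi / 3),  np.cos(np.pi / 3)]])
a = [np.array([1.0, 0.0])]
for _ in range(5):
    a.append(A6 @ a[-1])

detA = np.sqrt(3.0) / 2.0    # unit-cell volume, det of the lattice matrix

def sigma_c(F):
    """sigma^c(F) = (1/det A) * sum_rho  dV/d(F rho)  (outer)  rho."""
    S = np.zeros((2, 2))
    for rho in a:
        Fr = F @ rho
        r = np.linalg.norm(Fr)
        dV = phi_prime(r) * Fr / r     # derivative of phi(|F rho|) w.r.t. F rho
        S += np.outer(dV, rho)
    return S / detA

S = sigma_c(np.eye(2))
```

The off-diagonal entries cancel by the six-fold symmetry of the directions, and the two diagonal entries coincide.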
However, the stress tensors defined through \eqref{eq:atomstress}--\eqref{eq:acstress} are not unique, as the following results show.
\begin{definition}
We call a piecewise constant tensor field $\sigma\in \textrm{P}_0(\mathcal{T})^{2\times 2}$ \textit{divergence free} if
\begin{displaymath}
\sum_{T\in\mathcal{T}}|T|\,\sigma(T):\nabla_{T} v\equiv 0, \quad \forall v\in (\textrm{P}_1(\mathcal{T}))^2.
\end{displaymath}
\end{definition}
\begin{corollary}
By definition \eqref{eq:acstress}, the force patch test condition \eqref{eq:force_pt} is equivalent to the requirement that $\sigma^{\textrm{h}}(y_{\textsf{F}})$ is divergence free for every constant deformation gradient $\textsf{F}$.
\end{corollary}
The discrete divergence-free tensor fields over a triangulation $\mathcal{T}$ can be characterised by the non-conforming Crouzeix--Raviart finite elements \cite{PRE-ac.2dcorners, Or:2011a}. The Crouzeix--Raviart finite element space over $\mathcal{T}$ is defined as
\begin{align*}
N_{1}(\mathcal{T})=\Big\{c:\bigcup_{T\in\mathcal{T}}\textrm{int}(T)\to\mathbb{R} \quad \Big|& \quad c \textrm{ is piecewise affine w.r.t.\ }\mathcal{T}\textrm{, and} \\
& \textrm{ continuous at the edge midpoints } q_{f},\ \forall f\in\mathcal{F}\Big\}.
\end{align*}
The following lemma from \cite{PRE-ac.2dcorners} characterises the discrete divergence-free tensor fields.
\begin{lemma}
A tensor field $\sigma\in\textrm{P}_{0}(\mathcal{T})^{2\times 2}$ is divergence free if and only if there exist a constant $\sigma_{0}\in\mathbb{R}^{2\times 2}$ and a function $c\in N_{1}(\mathcal{T})^{2}$ such that
\begin{displaymath}
\sigma = \sigma_{0} + \nabla c\,\textsf{J}, \qquad \textrm{where}\quad\textsf{J} = \mymat{0 & -1 \\ 1 & 0}\in{\textsf{SO}}(2).
\end{displaymath}
\label{lem:divfree}
\end{lemma}
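The scalar version of Lemma \ref{lem:divfree} can be checked numerically on a small structured mesh (the mesh and the random Crouzeix--Raviart function below are illustrative choices): for $\sigma = \nabla c\,\textsf{J}$ the discrete divergence $\sum_T |T|\,\sigma(T):\nabla_T v$ vanishes when tested with a compactly supported $\textrm{P}_1$ hat function $v$.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3x3 grid of nodes on [0,2]^2; each unit square is split along its
# lower-left to upper-right diagonal: 8 triangles, one interior node (1,1).
tris = []
for i in range(2):
    for j in range(2):
        a, b, c, d = (i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)
        tris += [(a, b, c), (a, c, d)]

def grad_affine(pts, vals):
    """Gradient of the affine function taking vals at the 3 points pts."""
    p = np.asarray(pts, dtype=float)
    M = np.array([p[0] - p[2], p[1] - p[2]])
    return np.linalg.solve(M, [vals[0] - vals[2], vals[1] - vals[2]])

# random scalar Crouzeix-Raviart function: one value per edge midpoint
mid = {}
def midval(u, v):
    return mid.setdefault(tuple(sorted((u, v))), rng.standard_normal())

J = np.array([[0.0, -1.0], [1.0, 0.0]])
total = 0.0
for T in tris:
    mids = [(np.asarray(T[k], float) + np.asarray(T[(k + 1) % 3], float)) / 2
            for k in range(3)]
    cvals = [midval(T[k], T[(k + 1) % 3]) for k in range(3)]
    gc = grad_affine(mids, cvals)            # gradient of the CR function c
    # P1 hat function v attached to the interior node (1, 1)
    vvals = [1.0 if T[k] == (1, 1) else 0.0 for k in range(3)]
    gv = grad_affine([np.asarray(p, float) for p in T], vvals)
    total += 0.5 * np.dot(J @ gc, gv)        # |T| * (rotated grad c) . grad v
```

The cancellation happens edge by edge: the midpoint rule is exact for affine functions, and the shared midpoint values make the contributions of the two triangles adjacent to each interior edge cancel.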
The following corollary provides a representation of the stress tensors defined in \eqref{eq:atomstress}--\eqref{eq:acstress}.
\begin{corollary}
\label{cor:divfree}
The stress tensors in the definitions \eqref{eq:atomstress}--\eqref{eq:acstress} are not unique.
Let $\sigma \in \textrm{P}_{0}(\mathcal{T})^{2\times 2}$ be a stress tensor satisfying one of the definitions \eqref{eq:atomstress}--\eqref{eq:acstress}, where $\mathcal{T}$ is the corresponding triangulation, and define the admissible set $\textrm{Adm}(\sigma):=\{\sigma + \nabla c\, \textsf{J} \,:\, c\in N_1(\mathcal{T})^2\}$. Then every $\sigma'\in \textrm{Adm}(\sigma)$ satisfies the same definition.
\end{corollary}
\section{Adaptive Algorithms and Numerical Experiments}
\label{sec:numerics}
\newcommand{\fig}[1]{Figure \ref{#1}}
\newcommand{\tab}[1]{Table \ref{#1}}
We have derived the following a posteriori estimate in \cite[Theorem 3.1]{APEst:2017}: let $y_h$ be the a/c solution; then, for any $v\in\mathscr{U}^{1,2}$, the residual $\textsf{R}[v] = \langle \delta\mathscr{E}(I_{\textrm{a}} y_h), v \rangle$ is bounded by the sum of the following estimators,
\begin{equation}
\langle \delta\mathscr{E}(I_{\textrm{a}} y_h), v \rangle \leq \big(\eta_T(y_h) + \eta_M(y_h) + \eta_C(y_h) \big)\|\nabla v\|_{L^2},
\label{eq:residual}
\end{equation}
where $\eta_T$ is the truncation error estimator (the $L^2$ norm of the atomistic stress tensor close to the outer boundary), $\eta_M$ is the modelling error estimator (the difference between the a/c stress tensor and the atomistic stress tensor), and $\eta_C$ is the coarsening error estimator (the jump of the a/c stress tensor across interior edges). They are given by
\begin{equation}
\label{eqn:etat}
\eta_T(y_h) := C_1\|\sigma^{\textrm{a}}(I_{\textrm{a}} y_h)-\sigma^{\textsf{B}} \|_{L^2(\Omega_R\setminus B_{R/2})},
\end{equation}
where $\displaystyle\sigma^{\textsf{B}} = \frac{1}{\det {\textsf{A}}}\sum_{\rho \in \mathcal{R}_\ell}\partial_{\rho}V(\textsf{B} a)\otimes\rho$;
\begin{equation}
\label{eqn:etam}
\eta_M(y_h) := C_2\bigg\{\sum_{T\in\mathcal{T}_{\textrm{a}}}|T|\bigg[\sigma^{\textrm{a}}(I_{\textrm{a}} y_h, T) - \sum_{T'\in \mathcal{T}_{\textrm{h}},\, T'\cap T\neq \emptyset}\frac{|T'\cap T|}{|T|}\,\sigma^{\textrm{h}}(y_h, T')\bigg]^2\bigg\}^{\frac12};
\end{equation}
\begin{equation}
\label{eqn:etac}
\eta_C(u_h) := C_3\bigg(\sum_{f\in\mathcal{F}_h} \big(h_f\,\llbracket\sigma^{\textrm{h}}\rrbracket_f\big)^2\bigg)^{\frac12},
\end{equation}
where $C_1$, $C_2$, and $C_3$ are independent of $R$ and $u_h$; in fact, $C_3 = \sqrt{3}\,C^{\textrm{Tr}}C'_{\mathcal{T}_{\textrm{h}}}$ depends only on the shape regularity of $\mathcal{T}_{\textrm{h}}$.
In this section, we propose an adaptive mesh refinement algorithm for finite-range interactions based on the a posteriori error estimate \eqref{eq:residual}. Numerical experiments show that our algorithm achieves an optimal convergence rate in terms of accuracy versus the number of degrees of freedom, which is consistent with the optimal a priori error estimates.
\subsection{Adaptive mesh refinement algorithm}
\label{sec:numerics:algorithm}
We develop the adaptive mesh refinement algorithm for the GRAC method with finite-range interactions. In \cite{APEst:2017}, we designed the adaptive algorithm for GRAC with nearest-neighbour interactions, for which a special set of reconstruction parameters exists \cite{PRE-ac.2dcorners}. For finite-range interactions, however, we have to apply $\ell^{1}$-minimisation to solve for the reconstruction parameters from \eqref{eq:E_pt_grac} and \eqref{eq:F_pt_grac}. Furthermore, we need to stabilise the coupling scheme by adding a stabilisation term in the interface region, which effectively adds a second-order term to the interface stress tensor. We compare the effects of $\ell^1$-minimisation and stabilisation in the numerical experiments.
\subsubsection{Stress tensor correction}
\label{sec:Interfacial-stress}
By \eqref{eq:defn_Sa}--\eqref{eq:defn_Sh}, the error estimators $\eta_T$, $\eta_M$, and $\eta_C$ depend on the stress tensors $\sigma^{\textrm{h}}$ and $\sigma^{\textrm{a}}$, which are unique only up to divergence-free tensor fields by Lemma \ref{lem:divfree}. In principle, we should minimise $\eta(y_h) := \tilde{\eta}(\sigma^{\textrm{a}}(I_{\textrm{a}} y_h), \sigma^{\textrm{h}}(y_h)) = \eta_T(y_h)+\eta_M(y_h)+\eta_C(y_h)$ over all admissible stress tensors,
\begin{equation}
\langle \delta\mathscr{E}(I_{\textrm{a}} y_h), v \rangle \leq \min_{c_a\in N_1(\mathcal{T}_{\textrm{a}})^2,\, c_h\in N_1(\mathcal{T}_{\textrm{h}})^2}\tilde{\eta}\big(\sigma^{\textrm{a}}(I_{\textrm{a}} y_h)+\nabla c_a\textsf{J},\, \sigma^{\textrm{h}}(y_h)+\nabla c_h\textsf{J}\big)\,\|\nabla v\|_{L^2}.
\label{eq:sharpresidual2}
\end{equation}
To save computational effort, we can choose a ``good'' a/c stress tensor instead of the ``optimal'' one. A ``good'' a/c stress tensor should satisfy the following natural conditions:
\begin{itemize}
\item it equals the atomistic stress tensor in the atomistic domain;
\item it equals the continuum stress tensor for uniform deformations.
\end{itemize}
Accordingly, we choose $c_a\equiv 0$ and $c_h(q_f) =0$ in \eqref{eq:sharpresidual2}, where $q_f$ is the midpoint of any $f\in \mathcal{F}_h$ with $f\cap\Lambda^{\textrm{i}}=\emptyset$. Moreover, since $\eta_T$ and $\eta_C$ are higher-order contributions to the estimator, we only need to minimise the modelling error $\eta_M$ with respect to the degrees of freedom of $\sigma^{\textrm{h}}$ adjacent to the interface.
We propose Algorithm \ref{alg:etc} for the approximate stress tensor correction.
\begin{algorithm}
\label{alg:etc}
Approximate stress tensor correction.
\begin{enumerate}
\item Take $\sigma^{\textrm{a}}(I_{\textrm{a}} y_h)$ and $\sigma^{\textrm{h}}(y_h)$ in the canonical forms \eqref{eq:defn_Sa} and \eqref{eq:defn_Sh}, respectively.
\item Let $q_f$ denote the midpoint of $f\in \mathcal{F}_h$ with $f\cap\Lambda^{\textrm{i}}\neq\emptyset$. Find the $c_h$ that minimises
\begin{equation}
\sum_{T\in\mathcal{T}^{\textrm{i}}}|T|\left[\sigma^{\textrm{a}}(I_{\textrm{a}} y_h, T) - \big(\sigma^{\textrm{h}}(I_{\textrm{a}} y_h, T)+\nabla c_h \textsf{J}\big)\right]^2
\end{equation}
subject to the constraint $c_h(q_f)=0$ for all $f$ with $f\cap \Lambda^{\textrm{i}} = \emptyset$.
\item Update $\sigma^{\textrm{h}}(y_h) \leftarrow \sigma^{\textrm{h}}(y_h) + \nabla c_h\textsf{J}$, and compute $\eta_M$, $\eta_T$ and $\eta_C$ with $\sigma^{\textrm{a}}(I_{\textrm{a}} y_h)$ and the corrected $\sigma^{\textrm{h}}(y_h)$.
\end{enumerate}
\end{algorithm}
\subsubsection{Local error estimator}
We evaluate the global estimator $\eta$ by computing its three components individually; these components correspond to different operations in the adaptive algorithm. The truncation error $\eta_T$ is determined by the domain size: the larger the computational domain, the smaller $\eta_T$. Hence, we regard $\eta_T$ as a criterion for controlling the domain size. The modelling error $\eta_M$ and the coarsening error $\eta_C$ indicate local error contributions, and we need to assign them to local elements properly.
We define
\begin{displaymath}
\eta_{M}(T, T^{a}) := |T^{a}\cap T|\left[\sigma^{\textrm{a}}(I_{\textrm{a}} y_h, T^{a}) - \sum_{T'\in \mathcal{T}_{\textrm{h}},\, T'\cap T^{a}\neq \emptyset}\frac{|T'\cap T^{a}|}{|T^{a}|}\,\sigma^{\textrm{h}}(y_h, T')\right]^2
\end{displaymath}
for $T^{a}\in\mathcal{T}_{\textrm{a}}$, and then let
\begin{displaymath}
\eta_{M}(T) = \sum_{T^a\in \mathcal{T}_{\textrm{a}},\, T^a\cap T\neq \emptyset} \eta_M(T, T^a), \quad \textrm{for } T\in\mathcal{T}_{\textrm{h}},
\end{displaymath}
\begin{displaymath}
\eta_{C}(T) = \sqrt{3}\,C^{\textrm{Tr}}C'_{\mathcal{T}_{\textrm{h}}}\sum_{f\in \mathcal{F}_h,\, f\subset\partial T}\frac12\big(h_f\,\llbracket\sigma^{\textrm{h}}\rrbracket_f\big)^2.
\end{displaymath}
Once all the local estimators are assigned, we define the indicator $\rho_{T}$:
\begin{equation}
\rho_{T} = (C^{\textrm{Tr}})^2\frac{\eta_{M}(T)}{\eta_M}+(\sqrt{3}\,C^{\textrm{Tr}}C'_{\mathcal{T}_{\textrm{h}}})^2\frac{\eta_{C}(T)}{\eta_C}.
\label{eq:localestimator1}
\end{equation}
Notice that the sum of the local estimators $\rho_T$, together with the truncation error $\eta_T$, equals the global estimator $\eta$.
The constants $C^{\textrm{Tr}}$ and $C'_{\mathcal{T}_{\textrm{h}}}$ in \eqref{eq:localestimator1} are not known a priori; instead, we use empirical estimates of them in the numerical implementation.
The main algorithm is Algorithm \ref{alg:size}, which adopts the D\"{o}rfler adaptive strategy \cite{Dorfler:1996}.
\bigegin{algorithm}
\label{alg:size}
A posteriori mesh refinement with size control.
\begin{enumerate}
\item[Step 0] Set $\Omega_{R_0}$, $\mathcal{T}_h$, $N_{\max}$, $\rho_{\textrm{tol}}$, $\tau_1$, $\tau_2$, $\tau_{3}$ and $R_{\max}$.
\item[Step 1] \textit{Solve:} Solve for the a/c solution $y_{h, R}$ of \eqref{eq:min_ac} on the current mesh $\mathcal{T}_{h, R}$.
\item[Step 2] \textit{Estimate:} Carry out the stress tensor correction step in Algorithm \ref{alg:etc}, and compute the error indicator $\rho_{T}$ for each $T\in\mathcal{T}_h$, including the contribution from the truncation error $\eta_T$. Set $\rho_{T} = 0$ for $T\in\mathcal{T}^a\cap \mathcal{T}_h$. Compute the number of degrees of freedom $N$ and the error estimator $\rho = \sum_{T}\rho_T$. Stop if $N>N_{\max}$, $\rho < \rho_{\textrm{tol}}$ or $R>R_{\max}$.
\item[Step 3] \textit{Mark:}
\begin{enumerate}
\item[Step 3.1]: Choose a minimal subset $\mathcal{M}\subset \mathcal{T}_h$ such that
\begin{displaymath}
\sum_{T\in\mathcal{M}}\rho_{T}\geq\frac{1}{2}\sum_{T\in\mathcal{T}_h}\rho_{T}.
\end{displaymath}
\item[Step 3.2]: Find the marked interface elements within $k$ layers of atomistic distance, $\mathcal{M}^k_{\rm i}:=\{T\in\mathcal{M}\cap \mathcal{T}_h^{\rm c}: \textrm{dist}(T, \Lambda_{\rm i})\leq k \}$. Choose $K\geq 1$ and find the first $k\leq K$ such that
\begin{equation}
\sum_{T\in \mathcal{M}^k_{\rm i}}\rho_{T}\geq \tau_1\sum_{T\in\mathcal{M}}\rho_{T},
\label{eq:interface2}
\end{equation}
with tolerance $0<\tau_1<1$. If such a $k$ can be found, let $\mathcal{M} = \mathcal{M}\setminus \mathcal{M}^k_{\rm i}$.
\end{enumerate}
\item[Step 4] \textit{Refine:} If \eqref{eq:interface2} holds, expand the interface $\Lambda_{\rm i}$ outward by $k$ layers. If $\eta_{T}\geq \tau_3 \rho$, enlarge the computational domain. Bisect all elements $T\in \mathcal{M}$. Stop if $\frac{\eta_T}{\eta_M + \eta_C}\geq \tau_2$; otherwise, go to Step 1.
\end{enumerate}
\end{algorithm}
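Step 3.1 is the standard D\"{o}rfler (bulk-chasing) marking criterion. A minimal Python sketch, assuming the error indicators $\rho_T$ are stored in a dictionary keyed by element (the helper name \texttt{doerfler\_mark} and the data layout are illustrative, not from the paper):

```python
def doerfler_mark(indicators, theta=0.5):
    """Return a minimal set of elements whose summed indicators reach
    the fraction ``theta`` of the total error (Step 3.1)."""
    total = sum(indicators.values())
    marked, acc = [], 0.0
    # Taking the largest indicators first yields a subset of minimal
    # cardinality satisfying the bulk criterion.
    for elem, rho in sorted(indicators.items(), key=lambda kv: -kv[1]):
        if acc >= theta * total:
            break
        marked.append(elem)
        acc += rho
    return marked
```

For indicators $\{T_1\!: 4, T_2\!: 3, T_3\!: 2, T_4\!: 1\}$ and $\theta = 1/2$, the marked set is $\{T_1, T_2\}$.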
\subsection{Numerical Experiments}
\label{sec:numerics:problem}
We present numerical experiments for the following model problem. We consider the two-dimensional triangular lattice $\Lambda^{\textrm{hom}} :={\mathsf{A}}\mathbb{Z}^{2}$ with
\begin{equation}
{\mathsf{A}} = \begin{pmatrix} 1 & \cos(\pi/3) \\ 0 & \sin(\pi/3) \end{pmatrix}.
\label{eq:Atrilattice}
\end{equation}
Let $a_1 = (1,0)^T$; then $a_j = {\mathsf{A}}_6^{j-1}a_1$, $j=1,\dots, 6$, are the nearest-neighbour directions in $\Lambda^{\textrm{hom}}$, where ${\mathsf{A}}_6$ is the matrix of a clockwise planar rotation by $\pi/3$. Let $\mathcal{R}_{\textrm{nn}} = \{a_i\}_{i=1}^{6}$ denote the set of nearest-neighbour interacting bonds. Given the cut-off radius $r_{\textrm{cut}}$, each interaction direction $\rho\in\mathcal{R}$ can be uniquely represented as $\rho = \alpha a_{i} + \beta a_{i+1}$, where $\alpha\geq 0$, $\beta\geq 0$, $\alpha+\beta\leq r_{\textrm{cut}}$ and $a_i\in \mathcal{R}_{\textrm{nn}}$.
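As a quick numerical illustration of this construction, the six directions $a_j$ can be generated by iterating the rotation ${\mathsf{A}}_6$ (a sketch; the function name is ours):

```python
import math

def nearest_neighbour_directions():
    """Generate the six nearest-neighbour directions a_1, ..., a_6 of the
    triangular lattice by repeatedly applying the clockwise pi/3 rotation."""
    c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
    # Clockwise rotation by pi/3: [[c, s], [-s, c]].
    rotate = lambda v: (c * v[0] + s * v[1], -s * v[0] + c * v[1])
    dirs = [(1.0, 0.0)]  # a_1
    for _ in range(5):
        dirs.append(rotate(dirs[-1]))
    return dirs
```

Note that $a_4 = -a_1$, and all six directions are unit vectors, as expected for nearest-neighbour bonds.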
Recall the EAM potential defined in \eqref{eq:eam_potential}. Let
\begin{displaymath}
\phi(r)=\exp(-2a(r-1))-2\exp(-a(r-1)),\quad \psi(r)=\exp(-br)
\end{displaymath}
\begin{displaymath}
F(\tilde{\rho})=C\left[(\tilde{\rho}-\tilde{\rho}_{0})^{2}+(\tilde{\rho}-\tilde{\rho}_{0})^{4}\right]
\end{displaymath}
with parameters $a=4.4$, $b=3$, $C=5$ and $\tilde{\rho}_{0}=6\exp(-b)$, which is the same as in the numerical experiments of the a priori analysis of the GRAC method \cite{COLZ2013}.
To generate a defect, we remove $k$ atoms from $\Lambda^{\textrm{hom}}$:
\begin{align*}
\Lambda_{k}^{\textrm{def}}:=\{-(k/2)e_{1}, \ldots, (k/2-1)e_{1}\}, & \qquad{\textrm{if}}\ k\ \textrm{is even},\\
\Lambda_{k}^{\textrm{def}}:=\{-((k-1)/2)e_{1}, \ldots, ((k-1)/2)e_{1}\}, & \qquad{\textrm{if}}\ k\ \textrm{is odd},
\end{align*}
and $\Lambda = \Lambda^{\textrm{hom}}\setminus \Lambda_{k}^{\textrm{def}}$.
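The even/odd defect sets above can be sketched as follows (a hypothetical helper returning the integer multiples of $e_1$ to be removed):

```python
def defect_sites(k):
    """Integer multiples m of e_1 removed from the lattice, following the
    even/odd definition of Lambda_k^def above."""
    if k % 2 == 0:
        return list(range(-(k // 2), k // 2))            # {-k/2, ..., k/2-1}
    return list(range(-((k - 1) // 2), (k - 1) // 2 + 1))  # symmetric for odd k
```

In both cases exactly $k$ atoms are removed; for the micro-crack example with $k=11$ this reproduces $\{-5e_1,\dots,5e_1\}$.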
For $\ell\in\Lambda$, consider the next-nearest-neighbour interaction, $\mathcal{N}_{\ell} := \{ \ell' \in \Lambda \,|\, 0<|\ell'-\ell| \leq 2 \}$, and the interaction range $\mathcal{R}_\ell := \{\ell'-\ell \,|\, \ell'\in \mathcal{N}_\ell\} \subseteq \{a_j, j=1,\dots, 18\}$. The defect core is defined by $\Omega^{\textrm{def}} = \{x: \textrm{dist} (x, \Lambda_{k}^{\textrm{def}})\leq 2 \}$; $\Lambda \cap \Omega^{\textrm{def}}$ is the first layer of atoms around $\Lambda_{k}^{\textrm{def}}$.
\subsubsection{Di-vacancy}
\label{sec:numerics:di-vacancy}
In this section, we numerically assess the performance of the proposed adaptive mesh refinement algorithm.
We take the same di-vacancy example as in \cite{COLZ2013}, namely, setting $k=2$ in $\Lambda_{k}^{\textrm{def}}$. We apply an isotropic stretch $\mathrm{S}$ and a shear $\gamma_{II}$ by setting
\begin{displaymath}
{\mathsf{B}}=\left(
\begin{array}{cc}
1+\mathrm{S} & \gamma_{II} \\
0 & 1+\mathrm{S}
\end{array} \right)
\cdot{\mathsf{F}_{0}},
\end{displaymath}
where $\mathsf{F}_{0} \propto \mathrm{I}$ minimizes the Cauchy--Born energy density $\mathrm{W}$, and $\mathrm{S}=\gamma_{II}=0.03$.
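A minimal sketch of how this macroscopic deformation acts on reference positions, taking $\mathsf{F}_0 = f_0\,\mathrm{I}$ (the proportionality constant $f_0$ and the helper name are our assumptions, not from the paper):

```python
def apply_macro_deformation(points, stretch, shear, f0=1.0):
    """Map reference positions x to B x, where
    B = [[1+S, gamma], [0, 1+S]] . (f0 * I)
    combines an isotropic stretch S with a shear gamma."""
    a = (1.0 + stretch) * f0
    return [(a * x + shear * f0 * y, a * y) for (x, y) in points]
```

With $\mathrm{S}=\gamma_{II}=0.03$, the unit vector $e_1$ is stretched to $(1.03, 0)$ and $e_2$ is sheared to $(0.03, 1.03)$.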
\subsubsection{Micro-crack}
\label{sec:micro-crack}
In the microcrack experiment, we remove a longer segment of atoms,
$\L^{\a}mbda^{\topextrm{def}}_{11}=\{-5e_1,\dots,5e_1\}$ from the computational
domain. The body is then loaded in mixed mode ${\rm I}$ \& ${\rm II}$,
by setting,
\begin{displaymath}
\textsf{B} := \begin{pmatrix} 1 & \gamma_{\rm II} \\ 0 & 1+\gamma_{\rm I} \end{pmatrix}\cdot \textsf{F}_0,
\end{displaymath}
where $\textsf{F}_0 \propto I$ minimizes $W$, and $\gamma_{\rm I}=\gamma_{\rm II}=0.03$ ($3\%$ shear and $3\%$ tensile stretch).
We take $\tau_3 = 0.7$ in the numerical implementation of Algorithm \ref{alg:size}. We compare results with and without stabilization, and also with different optimization approaches for obtaining the reconstruction parameters.
From the numerical results in Figures \ref{fig:nV2-H1}--\ref{fig:nV11-En}, we can see that the least-squares method for the reconstruction parameters does not converge at all. For the $\ell^1$-minimization approach without stabilization, there are large errors in the pre-asymptotic regime, though the solutions tend to converge with an increasing number of degrees of freedom. For $\ell^1$-minimization combined with stabilization, we obtain the optimal convergence rate that is consistent with the a priori result.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{./H1.eps}
\caption{Numerical results by Algorithm \ref{alg:size}: $H^1$ error vs. $N$, using the a posteriori estimator in the $H^1$ norm. In the legend, l1 denotes the $\ell^{1}$-minimization approach and l2 the least-squares approach; s1 indicates with stabilization and s0 without.}
\label{fig:nV2-H1}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{./En.eps}
\caption{Numerical results by Algorithm \ref{alg:size}: energy error vs. $N$, using the a posteriori estimator in the $H^1$ norm. In the legend, l1 denotes the $\ell^{1}$-minimization approach and l2 the least-squares approach; s1 indicates with stabilization and s0 without.}
\label{fig:nV2-En}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{./nV11-H1.eps}
\caption{Numerical results by Algorithm \ref{alg:size}: $H^1$ error vs. $N$, using the a posteriori estimator in the $H^1$ norm. In the legend, l1 denotes the $\ell^{1}$-minimization approach and l2 the least-squares approach; s1 indicates with stabilization and s0 without.}
\label{fig:nV11-H1}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.55]{./nV11-En.eps}
\caption{Numerical results by Algorithm \ref{alg:size}: energy error vs. $N$, using the a posteriori estimator in the $H^1$ norm. In the legend, l1 denotes the $\ell^{1}$-minimization approach and l2 the least-squares approach; s1 indicates with stabilization and s0 without.}
\label{fig:nV11-En}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we construct an adaptive algorithm for a class of consistent (ghost-force-free) atomistic/continuum coupling schemes with finite-range interactions, based on rigorous a posteriori error estimates. In contrast to the localization formula for the stress tensor used in the a priori analysis \cite{Or:2011a, OrShSu:2012, OrtnerTheil2012}, we develop a computable formulation of the stress tensor. Combined with the $\ell^1$-minimization approach for the reconstruction parameters and with stabilization, the numerical results of the corresponding adaptive algorithm are comparable to those predicted by the optimal a priori analysis.
The extension to straight screw dislocations in two dimensions and to point defects in three dimensions is straightforward. More practical problems, for example, the study of dislocation nucleation and dislocation interaction by a/c coupling methods, have attracted considerable attention since the early stage of a/c coupling methods \cite{Tadmor:1999, Phillips:1999}. The difficulty lies in dealing with the boundary conditions and the complicated geometric changes of the interface.
For general atomistic/continuum coupling schemes, such as BQCE, BQCF and BGFC, the a priori analyses in \cite{MiLu:2011, LiOrShVK:2014, OrZh:2016} provide a general analytical framework in which the stress-tensor-based formulation plays a key role. Therefore, the a posteriori analysis for those coupling schemes can inherit this analytical framework and the stress tensor formulation. The techniques developed in this paper will be essential for the efficient implementation of the corresponding adaptive algorithms.
\section{Appendix}
\label{sec:appendix}
\subsection{Derivation of \eqref{eq:stress:drho} for the Two-Dimensional Triangular Lattice}
\label{sec:stress:assm}
In this section, we derive \eqref{eq:stress:drho} for the two-dimensional triangular lattice introduced in \S~\ref{sec:numerics:problem}.
We first classify the interaction bonds into two types, according to how they intersect the corresponding elements.
Type I interaction bonds are parallel to one of the nearest-neighbour directions. The set of all Type I bonds is defined by $\mathsf{B}_{\rm I} = \{\rho \,|\, \exists\, a_i \in \mathcal{R}_{\textrm{nn}} \text{ such that } \rho = |\rho|\, a_i\}$, as illustrated in Figure \ref{figs:a2b0}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.45]{./a2b0.eps}
\caption{Illustration of the corresponding elements (marked gray) for the bond $\rho\in\mathsf{B}_{\rm I}$ with $\rho = 2a_3$.}
\label{figs:a2b0}
\end{center}
\end{figure}
For each $\rho\in\mathsf{B}_{\rm I}$ with starting point $\ell\in\Lambda$ and $v\in\mathscr{U}_e$, we have
\begin{align}
D_\rho v &= \sum_{k=1}^{|\rho|} D_{a_i}v(\ell+(k-1)a_i) \nonumber \\
& = \left(\sum_{k=1}^{|\rho|} \left(\frac12\nabla_{T_{k}^{+}}v+\frac12\nabla_{T_{k}^{-}}v\right)\right)\cdot \frac{\rho}{|\rho|} \\
& = \sum_{k=1}^{|\rho|} \frac{1}{2|\rho|} \left(\nabla_{T_{k}^{+}}v\cdot \rho+\nabla_{T_{k}^{-}}v\cdot \rho\right), \nonumber
\end{align}
where $T_{k}^{+}$ and $T_{k}^{-}$ are the triangles sharing the common edge $\left(\ell+(k-1)a_i, \ell+k a_i\right)$, for $k=1,\ldots,|\rho|$; the superscript ``$+$'' denotes the element located on the left of the bond. In this case, the contribution in \eqref{eq:stress:drho} is
$$\omega_{\ell}^\rho(T_{k}^{\pm})=\frac{1}{2|\rho|}.$$
Type II interaction bonds are not parallel to any of the nearest-neighbour directions, and therefore cross the interiors of the elements they intersect. The set of all Type II bonds is defined by $\mathsf{B}_{\rm II} = \{\rho \,|\, \rho \text{ is not parallel to any } a_i\in\mathcal{R}_{\textrm{nn}}\}$, as illustrated in Figure \ref{figs:a1b2}.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.35]{./a1b2.eps}
\caption{Illustration of the intersecting elements (marked gray) with respect to the bond $\rho\in\mathsf{B}_{\rm II}$, $\rho = a_1 + 2a_2$.}
\label{figs:a1b2}
\end{center}
\end{figure}
For any Type II interaction bond $\rho\in\mathsf{B}_{\rm II}$, there exist nearest-neighbour directions $a_i$, $a_{i+1}$ such that $\rho = \alpha a_i + \beta a_{i+1}$ with $\alpha,\beta>0$. Let $(\ell, \ell+\rho)$ consecutively intersect the elements $T_k$, $k = 1, \cdots, n_\rho$. For $v\in\mathscr{U}_e$, we have
\begin{align}
D_\rho v &= \sum_{k=1}^{n_\rho} D_{\rho_{k}}v\Big(\ell+\sum_{t=1}^{k}\rho_{t-1}\Big) \nonumber \\
& = \sum_{k=1}^{n_\rho} \nabla_{T_k}v\cdot \rho_k \\
& = \sum_{k=1}^{n_\rho} \nabla_{T_k}v\cdot \frac{|\rho_k|}{|\rho|}\rho \nonumber \\
& = \sum_{k=1}^{n_\rho} \frac{|\rho_k|}{|\rho|} \nabla_{T_k}v\cdot \rho, \nonumber
\end{align}
where $\{\rho_k\}_{k=1}^{n_\rho}$ is the partition of $\rho$ with $\rho = \sum_{k=1}^{n_\rho}\rho_k$, $\rho_0 = \mathbf{0}$, $\frac{\rho_k}{|\rho_k|}=\frac{\rho}{|\rho|}$ for $k=1,\ldots,n_\rho$, and $\rho_k = T_k \cap (\ell, \ell+\rho)$. For the two-dimensional triangular lattice of \S~\ref{sec:numerics:problem}, we have $n_\rho = 2(\alpha + \beta -1)$ if $\alpha \neq \beta$, and $n_\rho = 2\alpha$ if $\alpha = \beta$. In this case,
$$\omega_{\ell}^\rho(T_k)=\frac{|\rho_k|}{|\rho|}.$$
For Type II bonds with $\alpha = \beta$, in the hexagonal site geometry, the contribution factor is the same for every element related to the bond $\rho$, with the value $\omega_{\ell}^{\rho}(T) = \frac{1}{2\alpha}$, as can be seen from Figure~\ref{figs:a2b2}.
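The element counts and weights above can be sketched as follows (the helper name is illustrative):

```python
def n_intersected_elements(alpha, beta):
    """Number n_rho of elements crossed by the bond
    rho = alpha*a_i + beta*a_{i+1} (alpha, beta > 0),
    following the two cases above for the triangular lattice."""
    return 2 * alpha if alpha == beta else 2 * (alpha + beta - 1)
```

For example, $\rho = a_1 + 2a_2$ crosses $n_\rho = 4$ elements, and in the $\alpha=\beta$ case the uniform weights $\frac{1}{2\alpha}$ sum to one over the $2\alpha$ crossed elements.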
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.45]{./a2b2.eps}
\caption{Illustration of the corresponding elements (marked gray) for the bond $\rho\in\mathsf{B}_{\rm II}$ with $\rho = 2a_1+2a_2$.}
\label{figs:a2b2}
\end{center}
\end{figure}
\bibliographystyle{plain}
\bibliography{qc-1}
\end{document}
\begin{document}
\title{Time-Optimal Control Problem With State Constraints In A Time-Periodic Flow Field
\thanks{This work was supported by FCT (Portugal): support of FCT R\&D Unit SYSTEC --
POCI-01-0145-FEDER-006933/SYSTEC funded by ERDF $|$ COMPETE2020 $|$ FCT/MEC $|$ PT2020
extension to 2018, and NORTE-01-0145-FEDER-000033, by ERDF $|$ NORTE 2020.
Results described in Section~\ref{sec: numercial results} were obtained
during a research visit of R. Chertovskih to the Federal Research Center
``Computer Science and Control'' (Russian Academy of Sciences)
supported by the Russian Science Foundation (project no. 19-11-00258).
}
}
\titlerunning{Time-Optimal Control Problem In A Time-Periodic Flow Field}
\author{Roman Chertovskih\inst{1}\orcidID{0000-0002-5179-4344} \and
Nathalie T. Khalil\inst{1}\orcidID{0000-0002-4761-5868} \and
Fernando L. Pereira\inst{1}\orcidID{0000-0002-9602-2452}}
\authorrunning{Chertovskih, Khalil, Pereira}
\institute{Research Center for Systems and Technologies (SYSTEC), Electrical and Computer
Engineering Department, Faculty of Engineering, University of Porto, 4200-465, Portugal
\email{[email protected], [email protected], [email protected]}\\
\url{https://www.nathaliekhalil.com/, https://paginas.fe.up.pt/~flp/}}
\maketitle
\begin{abstract}
This article contributes to a framework for a computational indirect method based on the Pontryagin maximum principle to efficiently solve a class of state constrained time-optimal control problems in the presence of a time-dependent flow field.
Path-planning in a flow with tidal variations is an emblematic real-life task that serves as an example falling
in the class of problems considered here. Under rather natural assumptions on the flow field, the problem is
regular and the measure Lagrange multiplier, associated with the state constraint, is continuous.
These properties (regularity and continuity) play a critical role in computing the field of extremals by solving
the two-point boundary value problem given by the maximum principle. Moreover, some sufficient conditions
preventing the emergence of singular control processes are given.
A couple of examples of time-periodic fluid flows are studied and the corresponding optimal solutions are found.
\keywords{
path-planning \and
optimal control \and
state constraints \and
indirect method \and
Pontryagin maximum principle \and
two-point boundary value problems
}
\end{abstract}
\section{Introduction}
In this article, we consider a state-constrained time-optimal control problem in the presence of a time-periodic flow field, the so-called ``navigation problem''. We are interested in computing its set of extremals using an indirect method based on the necessary optimality conditions provided by the maximum principle
\cite{Pontryagin_1962}. However, indirect methods represent significant challenges for optimal control problems with
state constraints. Indeed, the difficulty is due to the fact that the state constraint Lagrange multiplier appearing
in the necessary optimality conditions whenever the state constraint becomes active is a mere Borel measure. Thus,
in general, this multiplier is discontinuous, and this leads to serious difficulties in computing the set of extremals at the times in which the state trajectory meets the boundary of the state constraint set.
Therefore, in order to overcome this difficulty, we employ a not so commonly used form of the maximum principle, the
so-called Gamkrelidze form, and impose a regularity condition on the data of the problem that entails the continuity of the measure Lagrange multiplier. This is crucial to ensure the appropriate behavior of the proposed numerical
procedure to find the corresponding set of extremals.
Moreover, we also present certain conditions that, once satisfied, prevent the emergence of singular control
processes. These may be helpful in guiding the computational procedures, by enabling to check the absence of singular controls.
To better grasp the proposed indirect method in the framework of regular problems, we study a navigation problem
in $\mathbb R^2$. More precisely, we consider an object moving in a closed state domain, subject to a time-dependent
fluid flow vector field. The dynamics of the proposed model is affine in the control variable whose values are
constrained to the unit square in $\mathbb{R}^2$, and is affected by the vector flow field action. For this model,
we are interested in computing the optimal time of a path which connects two given distinct initial and terminal
points. For the problem in question, the regularity condition is satisfied under non-restrictive conditions on the
vector flow field. In order to develop the proposed indirect method, we derive the corresponding necessary
optimality conditions in the non-degenerate Gamkrelidze form, from which the expression of the optimal control is
computed.
Moreover, the regularity of the problem entails an explicit expression for the measure multiplier. The points where
the extremal trajectories reach the boundary of the state constraint can be computed as a result of the continuity of
the measure Lagrange multiplier. The two expressions - of the control and the measure multiplier - are functions of
the state and adjoint variables, and are replaced in the associated two-point boundary value problem, solved via a
shooting algorithm. From the set of all extremals, only the optimal ones, i.e., with the minimal time, are selected as
solutions to the given time-optimal problem. We discuss some examples of time-periodic vector flow fields, and
we plot the corresponding set of extremals.
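The shooting idea behind the indirect method can be illustrated on a minimal example (a toy linear two-point boundary value problem, not the navigation problem itself; all names are ours): integrate the state forward with a guessed initial condition and adjust the guess by bisection until the terminal condition is met.

```python
import math

def integrate(slope, t_end=math.pi / 2, n=2000):
    """RK4 integration of the toy system x' = p, p' = -x with
    x(0) = 0, x'(0) = slope; returns x(t_end)."""
    h = t_end / n
    x, p = 0.0, slope
    f = lambda x, p: (p, -x)
    for _ in range(n):
        k1 = f(x, p)
        k2 = f(x + h / 2 * k1[0], p + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], p + h / 2 * k2[1])
        k4 = f(x + h * k3[0], p + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x

def shoot(target=1.0, lo=0.0, hi=2.0, iters=60):
    """Bisection on the unknown initial derivative so that x(pi/2) = target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Here the exact solution is $x(t) = \sin t$, so the recovered initial derivative is $1$; in the navigation problem the unknowns are instead the initial adjoint values $\psi(0)$.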
The proposed numerical approach based on the indirect method was discussed in \cite{Khalil et al. IEEE TAC 2019} for the steady flow field case. Our paper extends the analysis to time-periodic flow fields. The dependence of the flow on time is crucial for many realistic path-planning problems. For example, tides play an important role in shaping water
velocity fields in rivers, mainly near their mouths~\cite{jos}. Moreover, in our paper we discuss, in more detail, sufficient conditions for the non-occurrence of singular controls for a particular choice of the control set.
The area of state-constrained optimal control problems has been widely investigated in the literature, cf. \cite{Gamkrelidze_1960,Berkovitz_1962,Warga_1964,Gamkrelidze_1965,Dubovitskii_Milyutin_1965,Hestenes_1966,Neustadt_1966,Neustadt_1967,Halkin_1970,Russak_1970,Ioffe_Tikhomirov_1979}. Questions related to the non-degeneracy of the necessary optimality conditions can be found in \cite{Arutyunov_Tynyanskiy_1985,Dubovitskii_Dubovitskii_1985,Vinter_Fereira_1994,Arutyunov_Aseev_1997,Arutyunov_2000,Vinter_2000,Arutyunov_Karamzin_Pereira_2011,Bettiol_Khalil_2016}. Issues on the continuity of the measure multiplier are extensively studied in \cite{Hager_1979,Maurer_1979,Milyutin_1990,Galbraith_Vinter_2003,Bonnans_2009,Arutyunov_2012,Arutyunov_Karamzin_2015,Arutyunov_Karamzin_Pereira_2018}. Numerical methods for computing the set of extremals in the presence of state constraints can be found in \cite{Bryson_1969,Jacobson_1969,Betts_1993,Fabien_1996,Maurer_2000,Vasiliev_2002,Pytlak_2006,Haberkorn_2011,Keulen_2014}. For indirect methods, we refer the reader to \cite{Jacobson_1969,Fabien_1996,Maurer_2000} \cite{Vasiliev_2002,Pytlak_2006,Haberkorn_2011,Keulen_2014}, among many others.
The article is organized as follows: the formulation of the time-optimal navigation problem is described in Section \ref{sec: problem formulation} and the regularity concept is discussed. In Section \ref{sec: maximum pronciple}, the necessary optimality conditions in the Gamkrelidze's form are given in a non-degenerate form. Section
\ref{sec: applications} is devoted to the application of the maximum principle to the problem in question when the
control set is constrained to the unit square in $\mathbb{R}^2$ and to derive the explicit formulas for the corresponding
measure multiplier and extremal control. Sufficient conditions for the non-occurrence of singular controls are also discussed.
Numerical results and the description of the algorithm are featured in Section \ref{sec: numercial results}. Section \ref{sec: conclusion} concerns a conclusion, and the Appendix contains detailed proofs of the key results.
\section{Problem Formulation: Navigation Problem}\label{sec: problem formulation}
We consider an object driven by a dynamical system in a two-dimensional time- and space-dependent flow field $v(t,x)$, while subject to affine state constraints. The ultimate goal is to compute, within the set of extremals, i.e., the set of control processes satisfying the maximum principle conditions, a control process that yields the minimum transit time between two given starting and final points $A$ and $B$. The corresponding problem is described as follows:
\begin{equation}
\begin{aligned}\label{problem}
& {\text{Minimize}}& & T \\
& \text{subject to} & & \dot x = u + v(t,x), \\
&&& x(0)=A,\;\;x(T)=B, \\
&&& |x_1|\le 1, \\
&&& u \in U: = \{ u \ : \ \varphi(u) \leq 0 \},
\end{aligned}
\end{equation}
where $x={\it col}(x_1,x_2)\in AC ([0,T]; \mathbb{R}^2)$, and $u={\it col}(u_1,u_2)\in L^1([0,T]; \mathbb{R}^2)$ are, respectively, the state and the control variables. The point $A$ is the starting point, while $B$ is the terminal point, and $v:[0,T] \times \mathbb{R}^2 \to \mathbb{R}^2$ is a smooth map which defines a fluid flow varying in time and space. Moreover, $\varphi=(\varphi_1, \varphi_2)$ is a given vector-valued mapping representing the control constraint set $U\subset \mathbb{R}^2$. The state constraint set is represented by the inequality $|x_1| \le 1$.
The terminal time $T$ is to be minimized by the optimal control process.
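A minimal forward simulation of the dynamics $\dot x = u + v(t,x)$ (a forward-Euler sketch under our own naming; not part of the solution method of the paper):

```python
def simulate(u, v, x0, t_grid):
    """Forward-Euler integration of x' = u(t) + v(t, x); u and v are
    user-supplied callables returning 2-vectors."""
    xs = [tuple(x0)]
    for t0, t1 in zip(t_grid, t_grid[1:]):
        h = t1 - t0
        x = xs[-1]
        ux, uy = u(t0)
        vx, vy = v(t0, x)
        xs.append((x[0] + h * (ux + vx), x[1] + h * (uy + vy)))
    return xs
```

For instance, a time-periodic flow could be supplied as `v = lambda t, x: (0.5 * math.sin(2 * math.pi * t), 0.0)`; state feasibility $|x_1|\le 1$ would be checked along the returned trajectory.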
\subsection{Regularity Condition}
For the problem (\ref{problem}) we consider here, the function $\Gamma(t,x,u): [0,T] \times \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$, defined by the scalar product of the gradient of the function defining the considered active inequality state constraint and the corresponding dynamics, as
\[ \Gamma(t,x,u) := u_1 + v_1(t,x) . \]
Denote by $I_\varphi(u)$ the set of indices $i$ such that $\varphi_i(u)=0$ ($i=1,2$). Following the regularity concept in \cite{Arutyunov_Karamzin_2015}, we state the following definition:
\begin{definition}\label{def: regularity}
Assume that, for all $t\in [0,T]$, $x\in\mathbb R^2$ and $u\in \mathbb{R}^2$, such that $|x_1|=1$, $\varphi(u)=0$ and $\Gamma(t,x,u)=0$, the set of vectors $\pdv{\Gamma}{u}$ and $\nabla \varphi_i$, for all $i\in I_\varphi(u)$ is linearly independent. Then, we say that problem (\ref{problem}) is regular with respect to the state constraint.
\end{definition}
As will be explained in the coming sections, the regularity of the problem is, in the context of our paper, crucial for the appropriate behavior of the proposed numerical approach at points in which the trajectory meets the boundary of the state constraint set. The regularity condition might seem restrictive; however, for a large class of engineering problems it is automatically satisfied under natural assumptions. An example will be featured in Section \ref{sec: applications} for a specific choice of the control set $U$.
\section{Maximum Principle}\label{sec: maximum pronciple}
In this section, we derive non-degenerate necessary optimality conditions in the Gamkrelidze's form
for problem (\ref{problem}).
We start by considering the extended time-dependent Hamilton-Pontryagin function
$$
\bar H(t,x,u,\psi,\mu,\lambda) = \langle\psi,u + v(t,x)\rangle - \mu \Gamma(t,x,u) -\lambda,
$$
where $\psi \in \mathbb R^2$, $\mu \in \mathbb R$ and $\lambda \in \mathbb R$.
To simplify the notation in what follows, we denote by $f^*(y,z)$ the function $f(x,y,z)$ in which $x$ is
replaced by the reference value $x^*$.
\begin{theorem}\label{theorem: maximum principle}
We assume that problem (\ref{problem}) is regular in the sense of Definition \ref{def: regularity}. Then, for an optimal process $(x^*,u^*,T^*)$, there exist a set of Lagrange multipliers: a number $\lambda \in [0, 1]$, an absolutely continuous adjoint arc $\psi=(\psi_1,\psi_2) \in W_{1,\infty}([0, T^*]; \mathbb R^2)$, and a scalar function $\mu(\cdot)$, such that:
\begin{itemize}
\item[(a)]\label{item: adjoint system}Adjoint equation
\begin{align*} \dot \psi(t) & = -\frac{\partial{\bar H}}{\partial x}(t,x^*(t),u^*(t),\psi(t),\mu(t),\lambda)
\qquad \text{for a.a. } t\in [0,T^*];
\end{align*}
\item[(b)]\label{item: max condition} Maximum condition
\begin{align*}u^*(t) & \in \mathop{\rm argmax}_{\varphi(u) \le 0} \left\{{\bar H} (t,x^*(t),u,\psi(t),\mu(t),\lambda) \right\} \qquad \text{for a.a. } t\in [0,T^*];
\end{align*}
\item[(c)]\label{item: conservation law} Time-transversality condition
\[ h(T^*)=0 \; \mbox{ where }\; h(t):= \max_{\varphi(u) \le 0} \left\{ \bar H^*(t,u) \right\} \; \mbox{ satisfies } \; \dot h= \pdv{\bar H^*}{t} (t)\]
a.a. $ t \in [0,T^*]$;
\item[(d)]\label{item: measure continuity} $\mu(t)$ is constant on the time intervals where $ |x_1^*(t)|< 1 $,
increasing on $\{t\in [0,T]:x_1^*(t)=-1\}$, and decreasing on $\{t\in [0,T]: x_1^*(t)=1\}$.
Moreover, $\mu(\cdot)$ is continuous on $[0,T^*]$;
\item[(e)] \label{item: nontriviality condition} Non-triviality condition
$$\begin{array}{ll} &\lambda + |\psi_1(t)-\mu(t)| + |\psi_2(t)| > 0,\end{array} \qquad \text{for all }\ t\in [0,T^*].$$
\end{itemize}
\end{theorem}
\begin{remark}\label{remark: extended problem}
\noindent
\begin{itemize}
\item[a)] The proof of Theorem \ref{theorem: maximum principle} can be found in the Appendix. It relies on a time-reparametrization technique converting problem (\ref{problem}) into a fixed and time-independent one, and on results of \cite{Arutyunov_2000}, \cite{Karamzin_Pereira_2019}, and \cite[Theorem 4.1]{Arutyunov_Karamzin_Pereira_2011}.
\item[b)] From the regularity property of the problem, the expression of the measure multiplier can be found in terms of the state and the adjoint variables. The junction points -- the points at which the trajectory meets the state constraint boundary -- can be computed as a result of the continuity of the measure multiplier $\mu$ (condition (d)). From these considerations, explicit formulae for the measure multiplier can be obtained. This is the core of our computational scheme proposed in this article for finding the set of extremals.
\item[c)] The non-triviality condition (e), which asserts the non-degeneracy of the Maximum Principle, implies that
\begin{equation} \label{eq: result from the non-triviality} |\psi_1(t)-\mu(t)| + |\psi_2(t)| > 0 \quad \text{for all } t \in [0,T^*]. \end{equation}
A proof is provided in the Appendix.
\end{itemize}
\end{remark}
\section{Applications: Control Set Constrained to the Square} \label{sec: applications}
In this section, we consider the specific case of a control set represented as the unit square in $\mathbb{R}^2$, i.e.
\begin{equation} \label{def: control set square} U: = \{ u \in \mathbb{R}^2 \ : \ \varphi_1(u):= |u_1| \le 1 \text{ and } \varphi_2(u) := |u_2| \le 1 \}.
\end{equation}
We study how a simple assumption on the vector flow field can automatically lead to regularity in the sense of Definition \ref{def: regularity}. Thereafter, we use the necessary optimality condition derived in Section \ref{sec: maximum pronciple} to obtain explicit expressions of the optimal control and the measure Lagrange multiplier in terms of the state and adjoint variables. These expressions will be substituted in the associated
boundary-value problem to numerically find the set of extremals (see Section \ref{sec: numercial results}).
\subsection{Sufficient Condition for Regularity}
In the problem considered here, the following simple assumption suffices to ensure regularity as defined above.
\begin{itemize}
\item[(H)] $|v_1(t,x)| < 1$ for all $(t, x)\in \mathbb{R}\times\mathbb{R}^2$.
\end{itemize}
Indeed, if the starting and/or terminal positions are in the interior of the state constraint domain, and if
the flow is much faster at the boundary of the state constraint set, $|x_1|=1$, assumption (H) is crucial to
guarantee that the moving object is able to overcome the flow field effect, and, thus, leave the boundary of
the state constraint, and move across the river along the axis $0x_1$.
\begin{proposition}
Assume that (H) is satisfied. Then, the problem (\ref{problem}), with the specific choice of the control set $U$, as defined in (\ref{def: control set square}), is regular in the sense of Definition \ref{def: regularity}.
\end{proposition}
\begin{proof}
If $\Gamma(t,x,u)=0$ at the boundary of the state constraint set, then, by (H), $|u_1| = |v_1(t,x)| < 1$ ${\cal L}$-a.e., so the strict inequality $\varphi_1(u)< 0$ holds there. The regularity condition is then satisfied if the vectors $\nabla\varphi_2$ and $\pdv{\Gamma}{u}$ form a linearly independent set. For our particular problem, $\nabla\varphi_2= {\it col}(0,1)$ and $\pdv{\Gamma}{u} = {\it col}(1,0)$. Therefore, problem (\ref{problem}) with the particular choice (\ref{def: control set square}) is regular.
\end{proof}
\begin{remark}
Under assumption (H), the necessary conditions of optimality expressed by Theorem \ref{theorem: maximum principle}, guaranteeing the non-degeneracy of the Lagrange multipliers, and the continuity of the Borel measure $\mu$, can be applied.
\end{remark}
\subsection{Explicit formulas for $u^*$ and $\mu$}
From the maximum condition (b) of Theorem \ref{theorem: maximum principle}, for a.a. $t\in [0,T^*]$,
\begin{align*} & \max_{|u_1| \le 1,|u_2| \le 1} \left\{\left(\psi_1(t) - \mu(t)\right)u_1 + \psi_2(t) u_2\right\} = \left(\psi_1(t) - \mu(t)\right)u_1^*(t)+ \psi_2(t) u_2^*(t).
\end{align*}
This implies that the value of the optimal control process $(u_1^*,u_2^*)$ varies w.r.t. the sign of $\psi_1-\mu$ and $\psi_2$, as follows:
\begin{equation}
\label{uop2}
\begin{cases}
\text{if}\ \psi_1 - \mu \neq 0, & \text{then}\ u_1^* = \mathrm{sgn}(\psi_1 - \mu)\\
\text{if}\ \psi_2 \neq 0, & \text{then}\ u_2^* = \mathrm{sgn}(\psi_2).\\
\end{cases}
\end{equation}
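As a sanity check, this switching law can be transcribed directly into code. The sketch below is purely illustrative (the function names are ours, not part of any solver in this article), and the singular case, where a switching function vanishes, is simply flagged rather than resolved:

```python
def sgn(s):
    """Sign function: +1, -1, or 0."""
    return (s > 0) - (s < 0)

def optimal_control(psi1, psi2, mu):
    """Switching law (uop2): u1* = sgn(psi1 - mu) if psi1 - mu != 0,
    and u2* = sgn(psi2) if psi2 != 0.  A component is None when its
    switching function vanishes; there the maximum condition is
    uninformative (a candidate singular arc)."""
    u1 = sgn(psi1 - mu) if psi1 != mu else None
    u2 = sgn(psi2) if psi2 != 0 else None
    return u1, u2
```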
The expressions of $u^*$ and $\mu$ differ between points on the boundary of the state constraint set and points in its interior. Next, we discuss these two cases.
When the trajectory stays on the boundary of the state constraint set during a certain time interval $\Delta$, we have $u_1^*(t)=-v_1(t,x^*(t))$ ${\cal L}$-a.e. on $\Delta$ and, by continuity, everywhere on $\Delta$. Thus, under assumption (H), $|u^*_1(t)|<1$. Therefore, from the maximum condition, we have
\begin{equation}
\label{mu2}
\mu(t)=\psi_1(t) \qquad \text{for all } t \text{ such that } |x_1^*(t)|=1.
\end{equation}
Moreover, as a result of (\ref{eq: result from the non-triviality}), we have $\psi_2(t) \neq 0$ for all $t$ such that $|x_1^*(t)|=1$ and, thus, $u_2^*= \pm 1$.
Now, let $\Delta$ be a time interval during which the trajectory lies in the interior of the state constraint set, i.e., $|x_1^*(t)|<1$ for all $t\in\Delta$. On $\Delta$, the Lagrange measure multiplier $\mu$ is constant. Suppose that $\psi_1(t) - \mu(t) = 0$ for $t$ in a nonzero Lebesgue measure subset of $\Delta$; since $\mu(t)$ is constant on $\Delta$, we have $\dot \psi_1 (t)=0$ on this subset. From the adjoint equations, we conclude that $\displaystyle \psi_2(t)\frac{\partial v _2(x)}{\partial x_1} = 0$. From the non-triviality condition, we must have $\psi_2(t) \neq 0 $ ${\cal L}$-a.e. on $\Delta$ and, thus, $\displaystyle \frac{\partial v_2(x)}{\partial x_1} = 0$ ${\cal L}$-a.e. on $\Delta$. In this case, the maximum condition is not informative for the first component of the control. This is one case of so-called singular control, i.e., the control cannot be determined from the maximum condition on a non-zero measure set. Another singular situation corresponds to the case when $\psi_2(t)=0$ for $t$ in a nonzero Lebesgue measure subset of $\Delta$. Next, we present and prove sufficient conditions that preclude the emergence of singular controls by imposing additional conditions on the flow vector field $v$.
Let $ S_0=\{t\in[0,T]: |x_1(t)|=1\}$ and $S_{-}=\{t\in[0,T]: |x_1(t)|<1\}$, let $\Delta\subset [0,T]$ be such that ${\cal L}\mbox{-meas}(\Delta) >0 $, and define
\begin{eqnarray*}
S_1&=&\{t\in\Delta\subset S_{-}: \psi_1(t)-\mu(t) =0 \mbox{ and }\psi_2\neq 0 \},\\
S_2&=&\{t\in\Delta\subset S_{-}: \psi_1(t)-\mu(t) \neq 0 \mbox{ and }\psi_2 = 0 \},\\
S_3&=&\{t\in\Delta\subset S_0: \psi_1(t)-\mu(t) =0 \mbox{ or }\psi_2 = 0 \}.
\end{eqnarray*}
In what follows, we suppress the $t$-dependence of $v$ as it does not play any role in the developments.
\begin{proposition}\label{proposition: regularity sufficient condition-2}
\noindent
\begin{itemize}
\item[a)] If $\displaystyle \frac{\partial v_2(x(t))}{\partial x_1} \neq 0$ on $S_1$, then ${\cal L}\mbox{-meas}(S_1)=0 $.
\item[b)] If $\displaystyle \frac{\partial v_1(x(t))}{\partial x_2} \neq 0$ on $S_2$, then ${\cal L}\mbox{-meas}(S_2)=0 $.
\item[c)] On $S_3$ we always have ${\cal L}\mbox{-meas}(S_3)=0 $.
\end{itemize}
\end{proposition}
\begin{proof}
\noindent
Proof of item $a)$. Clearly, for all $t\in S_1$, $\displaystyle - \dot \psi_1(t) = \psi_2(t)\frac{\partial v_2(x(t))}{\partial x_1}$, and, thus, $\dot \psi_1(t) \neq 0$ for all $t\in S_1$. Since $S_1\subset S_{-}$, the multiplier $\mu$ is constant there, so $\psi_1 = \mu$ would be constant on $S_1$ and $\dot\psi_1 = 0$ ${\cal L}$-a.e. on $S_1$ if ${\cal L}$-meas$(S_1)>0$;
we readily conclude that ${\cal L}$-meas$(S_1)=0$.
\noindent Proof of item $b)$. Now, for $t\in S_2$, we have $\displaystyle - \dot \psi_2(t) = (\psi_1(t)-\mu(t)) \frac{\partial v_1(x(t))}{\partial x_2}$. Thus, if $\frac{\partial v_1(x(t))}{\partial x_2}\neq 0$, then $\dot \psi_2 (t) \neq 0$, and, thus, ${\cal L}$-meas$(S_2)=0$.
\noindent Proof of item $c)$. Consider some $t\in S_3$. The system of adjoint equations can be written as follows
\begin{equation}\label{adjoint2}
-\dot \psi(t) = D_x^T v(x(t))\psi(t)-\mu(t) \nabla_x v_1(x(t))
\end{equation}
Since $\psi_1(t) -\mu(t) = 0 $ on $ S_0$, we have
$$ -\dot \psi_1(t) = \frac{\partial v_2(x(t))}{\partial x_1} \psi_2(t) \; \mbox{ and } -\dot \psi_2(t) = \frac{\partial v_2(x(t))}{\partial x_2} \psi_2(t).$$
If there exists some $\bar t \in \Delta$ such that $\psi_2 (\bar t) =0$, then $\psi_2(t) =0$ on a nonzero Lebesgue measure subset of $\Delta$. From the above, it also follows that $\psi_1(t) $ is constant on this subset, which contradicts $\psi_1(t)-\mu(t) =0 $ on $S_0$. From this, and the non-triviality condition of the multipliers, we obtain the desired conclusion.
\end{proof}
\begin{remark}\label{remark: extra regularity condition sufficient condition}
Proposition \ref{proposition: regularity sufficient condition-2} provides sufficient conditions for the avoidance of singular controls. These conditions allow
the maximum condition to remain informative and to define the optimal controls outside a set of zero measure.
Since the conditions are only sufficient, there may be cases in which singular controls do not occur even though these conditions are not satisfied.
\end{remark}
\section{Numerical Results}\label{sec: numercial results}
In this section, we present and discuss numerical results for the optimal control problem stated above, obtained by
an indirect method based on the considered Maximum Principle. The conditions of this Maximum Principle
lead to the following two-point Boundary Value Problem (BVP):
\begin{subequations}
\label{goveq}
\begin{align}
\dot{x}&=u+v,\label{ge1}\\
\dot{\psi}&=-\psi \frac{\partial v}{\partial x} + \mu \nabla v_1,\label{ge2}\\
&x(0)=A,\label{xini}\\
&x(T)=B\label{xterm}.
\end{align}
\end{subequations}
The control variables $u_1(t)$ and $u_2(t)$ are given by (\ref{uop2}) and the measure Lagrange multiplier
$\mu(t)$ is constant for trajectories not meeting the boundary and is defined by (\ref{mu2}) along the boundary
of the state constraint.
Boundary conditions at $0$ and $T$ for the adjoint arc $\psi(t)$ are absent.
The BVP problem (\ref{goveq}) is solved by a variant of the shooting method (see, e.g., \cite{nr} for a brief overview of the shooting methods). The shooting parameter is the angle $\theta$ parameterizing the initial boundary condition for $\psi$:
\begin{equation}
\label{psiini}
\psi(0)=(\cos(\theta),\sin(\theta)).
\end{equation}
Starting from the initial conditions (\ref{xini}) and (\ref{psiini}) for a given value of $\theta$, the Cauchy
problem for the system of ordinary differential equations (\ref{ge1})--(\ref{ge2}) is solved by the classical
fourth-order Runge--Kutta method with the constant time step $\tau=10^{-4}$.
The set of solutions to the BVP (\ref{goveq}) constitutes the field of extremals. When integrating the system (\ref{ge1})--(\ref{ge2}) forward in time, the measure Lagrange multiplier $\mu(t)$ is
set to zero while the trajectory is in the interior of the domain (i.e., when $|x_1|<1$).
If it reaches the boundary, $|x_1|=1$, at, say, $t=t^*$, then $\mu^*=\mu(t^*)$ is computed by (\ref{mu2}).
By the continuity of the measure Lagrange multiplier \cite{Arutyunov_Karamzin_2015}, we deem the point a junction point of an extremal if $|\mu^*|<10^{-3}$, and the integration of the system is continued
along the domain boundary. At each time step along the boundary, the trajectories leaving it with constant values of $\mu$ are computed.
This is done to find another junction point or a segment joining the boundary with the terminal point $B$.
If at a certain time the terminal boundary condition (\ref{xterm}) is satisfied to the accuracy $10^{-3}$,
the corresponding trajectory represents an extremal.
To find all extremals, the parameter $\theta$ in (\ref{psiini}) is varied from $0$ to $2\pi$ with
a constant step of $10^{-2}$, with the bisection method used if the required accuracy is not achieved.
Once all extremals (i.e., the field of extremals) are computed, the one possessing the minimal travelling time among
them is the optimal solution to the original control problem (\ref{problem}).
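To illustrate the forward pass of this shooting scheme, the following self-contained Python sketch integrates the state-adjoint system (\ref{ge1})--(\ref{ge2}) with a classical fourth-order Runge--Kutta step, for the steady flow $v(x)=(\frac{1}{4}x_1,\,-x_1^2)$ of the first example and with $\mu\equiv 0$, i.e., interior arcs only. Junction detection, boundary arcs, and the bisection refinement of $\theta$ are omitted, and all names and step sizes are illustrative, not taken from the authors' solver:

```python
import math

def sgn(s):
    return (s > 0) - (s < 0)

def rhs(x, psi, mu=0.0):
    """Right-hand side of (ge1)-(ge2) for v(x) = (x1/4, -x1**2).
    Jacobian of v: dv1/dx1 = 1/4, dv1/dx2 = 0, dv2/dx1 = -2*x1, dv2/dx2 = 0."""
    u1, u2 = sgn(psi[0] - mu), sgn(psi[1])   # bang-bang law (uop2); sgn(0)=0
    v1, v2 = 0.25 * x[0], -x[0] ** 2
    dx = (u1 + v1, u2 + v2)
    # psi' = -psi * Dv + mu * grad(v1); mu = 0 on interior arcs
    dpsi = (-(0.25 * psi[0] - 2.0 * x[0] * psi[1]) + 0.25 * mu, 0.0)
    return dx, dpsi

def rk4_step(x, psi, tau):
    """One classical fourth-order Runge-Kutta step of size tau."""
    axp = lambda a, b, h: tuple(ai + h * bi for ai, bi in zip(a, b))
    k1x, k1p = rhs(x, psi)
    k2x, k2p = rhs(axp(x, k1x, tau / 2), axp(psi, k1p, tau / 2))
    k3x, k3p = rhs(axp(x, k2x, tau / 2), axp(psi, k2p, tau / 2))
    k4x, k4p = rhs(axp(x, k3x, tau), axp(psi, k3p, tau))
    comb = lambda y, a, b, c, d: tuple(
        yi + tau / 6 * (ai + 2 * bi + 2 * ci + di)
        for yi, ai, bi, ci, di in zip(y, a, b, c, d))
    return comb(x, k1x, k2x, k3x, k4x), comb(psi, k1p, k2p, k3p, k4p)

def shoot(theta, steps=1000, tau=1e-3):
    """Forward pass from A = (0,0) with psi(0) = (cos(theta), sin(theta));
    returns the state reached after steps*tau time units."""
    x, psi = (0.0, 0.0), (math.cos(theta), math.sin(theta))
    for _ in range(steps):
        x, psi = rk4_step(x, psi, tau)
    return x
```

Note that for some values of $\theta$ the integrated trajectory leaves $|x_1|\le 1$; the full method would instead detect the junction and continue along the boundary with $\mu$ given by (\ref{mu2}).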
The first example considers the steady fluid flow
\begin{equation}
\label{eq:steady}
v(x)=(\frac{1}{4}x_1,\, -x_1^2),
\end{equation}
which is represented by the black arrows in Fig.~\ref{fig1}. We note that, for this specific field, $\displaystyle \pdv{v_1}{x_2}=0$ everywhere, while $\displaystyle \pdv{v_2}{x_1}=0$ at points where $x_1=0$. However, the field of extremals can still be numerically found, supporting Remark \ref{remark: extra regularity condition sufficient condition}. The initial and final positions are $A=(0,0)$ and $B=(-0.5,-6)$, respectively. The field of extremals is also shown in Fig.~\ref{fig1}. The optimal extremal is the red one, with a travelling time of $3.43$ time units.
In the next example, a time-periodic perturbation of the flow (\ref{eq:steady}) is considered for the same
trajectory endpoints $A$ and $B$. Here,
$$v(x,t)=\left(\frac{1}{4}x_1+\sin\left(\frac{\pi t}{2}\right),\, -x_1^2\right),$$
which reflects the fact that tidal variations affect only the component transversal to the main flow. Even though Proposition \ref{proposition: regularity sufficient condition-2} is violated at some points, the field of extremals is computed and shown in Fig.~\ref{fig2}, displaying four extremals as in the steady case considered above. In contrast to the problem for the steady flow (\ref{eq:steady}),
only one extremal (shown in blue) does not meet the boundary; two extremals (black) meet the right boundary, and
only one (red) includes the left boundary segment. The optimal extremal is again the one (shown in red, with
travelling time $3.63$ time units) involving the left boundary segment; however, the travelling time along the extremal
(black) involving the right boundary, $3.77$ time units, is not significantly larger.
\begin{figure}
\caption{\small Field of extremals for the steady fluid flow \hbox{$v(x)=(\frac{1}{4}x_1,\, -x_1^2)$}.}
\label{fig1}
\caption{\small Field of extremals for the time-periodic fluid flow \hbox{$v(x,t)=(\frac{1}{4}x_1+\sin(\frac{\pi t}{2}),\, -x_1^2)$}.}
\label{fig2}
\end{figure}
The last example concerns the flow vector field
\begin{equation*}
\label{eq:steady-time}
v(x,t)=\bigg(\frac{x_1}{4} + \frac{x_2}{10},\, -x_1^2- \frac{1}{2}\sin^2\bigg(\frac{\pi t}{2}\bigg)\bigg).
\end{equation*}
For this field, $\displaystyle \pdv{v_1}{x_2} \neq 0$, while $\displaystyle \pdv{v_2}{x_1} = 0$ when $x_1=0$. The corresponding field of extremals is computed and shown in Fig. \ref{fig3}. This field contains five inner trajectories not touching the state constraint boundary, one right trajectory, and an optimal one that reaches the final point $B$ after $3.19$ time units.
\begin{figure}
\caption{
Field of extremals for the fluid flow \hbox{$v(x,t)=\left(\frac{x_1}{4}+\frac{x_2}{10},\, -x_1^2-\frac{1}{2}\sin^2\left(\frac{\pi t}{2}\right)\right)$}.
\label{fig3}
\end{figure}
\section{Conclusion}\label{sec: conclusion} In this article, we presented an approach based on the maximum principle amenable to the numerical
computation of solutions to a regular class of state-constrained optimal control navigation problems
subject to flow field effects. In order to overcome the computational difficulties due to the Borel measure
associated with the state constraints, a less common version of the maximum principle, the so-called Gamkrelidze form, was adopted, and a regularity condition on the data of the problem was imposed to ensure
the continuity of the Borel measure multiplier.
We showed how this property plays a significant role for trajectories meeting the state constraint boundary.
We also showed that this regularity assumption is not restrictive, being naturally satisfied by a wide
class of optimal control problems.
The theoretical analysis was supported by several illustrative examples (for steady and time-periodic
flows mimicking real river currents) for which the corresponding fields of extremals were constructed, and optimal solutions were found.
\section*{Appendix}
In the first part of this appendix, the proof of Theorem \ref{theorem: maximum principle} is provided. The second part concerns the proof of condition (\ref{eq: result from the non-triviality}).
{\bf Proof of Theorem \ref{theorem: maximum principle}.} Following Remark \ref{remark: extended problem} a), a time-reparametrization is needed. Indeed, for a minimizer $(x^*,u^*,T^*)$ of the original problem (\ref{problem}), we consider the extended fixed-time and time-independent optimal control problem associated with (\ref{problem}).
\begin{equation}
\begin{aligned}\label{problem:extended}
& {\text{Minimize}}
& & x_0(T^*) \\
& \text{subject to}
& & \dot x_0 = u_0, \quad \dot x = (u + v(x_0,x))u_0, \quad \text{ a.e. } t \in [0,T^*] \\
&&& x(0)=A,\;\;x(T^*)=B, \; \; x_0(0)=0,\;\;x_0(T^*) \in \mathbb{R}\\
&&& |x_1|\le 1, \quad \text{ for all }\ t \in [0,T^*]\\
&&& u_0 \in [1-\alpha, 1+\alpha] \quad \text{ a.e. } t \in [0,T^*] \\
&&& u \in U: = \{ u \ : \ \varphi(u) \leq 0 \},
\end{aligned}
\end{equation}
where $x_0$ is a new state variable which parameterizes the time, and $u_0$ is the associated control taking values in $[1-\alpha,1+\alpha]$, where $\alpha\in (0,1)$. One can prove that if $(x^*,u^*,T^*)$ is a minimizer for (\ref{problem}), then $({x}_0^*(t)=t, u_0^*= 1, x^*, u^*)$ is a minimizer for (\ref{problem:extended}).
The corresponding time-independent Hamiltonian is denoted by $\bar H_0$ and defined as follows:
\[ \bar H_0 := \bar H u_0 + \psi_0u_0 \qquad \text{where } \psi_0 \in \mathbb{R}. \]
Here $\bar H$ is the extended Hamiltonian for the original problem (\ref{problem}).
The application of the maximum principle to (\ref{problem:extended}) yields the existence of a number
$\lambda\in [0,1]$, $(\psi_0,\psi) \in W_{1,\infty}([0,T^*];\mathbb{R}) \times W_{1,\infty}([0,T^*];\mathbb{R}^2)$, and a
scalar function $\mu(.)$, such that
\begin{itemize}
\item[(i)] $
(\dot \psi_0 (t), \dot \psi(t)) = -\left(\pdv{\bar H_0}{x_0}, \pdv{\bar H_0}{x}\right)
({x}_0^*, u_0^*, x^*,u^*,\psi_0,\psi,\mu,\lambda) \qquad \text{ for a.e. } t \in [0,T^*];
$
\item[(ii)] $(u_0^*(t),u^*(t)) \in \mathop{\rm argmax}\limits_{u_0\in [1-\alpha, 1+\alpha], \ \varphi(u) \le 0} \{{\bar H_0}(x_0^*(t), u_0,x^*(t),u,\psi_0(t), \psi(t),\mu(t),\lambda) \}$
for a.e. $t\in [0,T^*]$;
\item[(iii)] (Conservation law)
$ \max\limits_{u_0\in [1-\alpha, 1+\alpha],\ \varphi(u) \le 0} \{\bar H_0(u_0,u) \} =0 \qquad \forall t\in [0,T^*]; $
\item[(iv)] $\mu(t)$ is constant on the time intervals where
$ |x_1^*(t)|< 1,$
increasing on $\{t\in [0,T]:x_1^*(t)=-1\}$, and decreasing on $\{t\in [0,T]: x_1^*(t)=1\}$.
Moreover, $\mu(\cdot)$ is continuous on $[0,T^*]$;
\item[(v)]
$\lambda + |\psi_0(t)| + |\psi_1(t)-\mu(t)| + |\psi_2(t)| > 0, \qquad \forall t\in [0,T^*].$
\end{itemize}
\begin{remark} These necessary optimality conditions are results of \cite{Arutyunov_2000}, \cite{Karamzin_Pereira_2019}, and \cite[Theorem 4.1]{Arutyunov_Karamzin_Pereira_2011}. Moreover, the non-triviality condition (v) is implied by the regularity of the extended problem (\ref{problem:extended}), in the sense of Definition \ref{def: regularity}. More details can be found in \cite{Khalil et al. IEEE TAC 2019}. \end{remark}
We now make the necessary conditions (i)--(v) explicit.
Condition (i) is equivalent to the following:
\begin{align*}
\dot \psi(t) & =\left(-\psi(t) \frac{\partial v}{\partial x} (x_0^*(t),x^*(t)) + \mu(t) \frac{\partial \Gamma}{\partial x}(x_0^*(t),x^*(t),u^*(t))\right).u_0^*(t) \\
& =-\psi(t) \frac{\partial v}{\partial x} (t,x^*(t)) + \mu(t) \frac{\partial \Gamma}{\partial x}(t,x^*(t),u^*(t))
\end{align*}
which proves condition (a) of Theorem \ref{theorem: maximum principle}, and
\begin{align*}
\dot \psi_0(t) & =\bigg(-\psi(t) \frac{\partial v}{\partial x_0} (x_0^*(t),x^*(t)) + \mu(t) \frac{\partial \Gamma}{\partial x_0}(x_0^*(t),x^*(t),u^*(t))\bigg).u_0^*(t) \\
& =-\psi(t) \frac{\partial v}{\partial x_0} (t,x^*(t)) + \mu(t) \frac{\partial \Gamma}{\partial x_0}(t, x^*(t),u^*(t)).
\end{align*}
Making the maximum condition (ii) explicit yields
\begin{align} \nonumber
&\max\limits_{u_0\in [1-\alpha, 1+\alpha], \ u \in U} \left\{ u_0 \left( \psi_0(t) + (\psi_1(t)-\mu(t)) u_1 + \psi_2(t) u_2 \right) \right\} \\
& \label{eq: adjoint system extra adjoint variable}\qquad \qquad = \psi_0(t) + (\psi_1(t)-\mu(t)) u^*_1(t) + \psi_2(t) u_2^*(t),
\end{align}
i.e.
\[ \psi_0(t) + (\psi_1(t)-\mu(t)) u^*_1(t) + \psi_2(t) u_2^*(t) \ge u_0 \left( \psi_0(t) + (\psi_1(t)-\mu(t)) u_1 + \psi_2(t) u_2 \right) \]
for all $u\in U$ and $u_0 \in [1-\alpha, 1+\alpha]$, in particular for $u_0=1$. Therefore, for all $u\in U$
\[ (\psi_1(t)-\mu(t)) u^*_1(t) + \psi_2(t) u_2^*(t) \ge (\psi_1(t)-\mu(t)) u_1 + \psi_2(t) u_2 \]
which confirms that the maximum condition (b) of Theorem \ref{theorem: maximum principle} holds true.
Furthermore, the maximum condition (ii) is equivalent to
\[ u_0(\psi_0+\bar H(u))\le\psi_0+\bar H(u^*)\qquad\forall u_0 \in [1-\alpha, 1+\alpha],\ u \in U. \]
This allows us to deduce that $\psi_0 = - \bar H(u^*)$. Therefore, owing to the adjoint equation (\ref{eq: adjoint system extra adjoint variable}) associated with $\psi_0$, we obtain
\begin{align*}
\dot \psi_0(t) = \diff{\psi_0}{t} (t) = - \diff{\bar H(u^*)}{t} (t)= \pdv{(-\bar H_0)}{x_0} ({x}_0^*, u_0^*, x^*,u^*,\psi_0,\psi,\mu,\lambda)
\end{align*}
However, $\bar H_0 = \bar H u_0 + \psi_0 u_0$. We deduce that
\[\diff{\bar H(u^*)}{t} = \pdv{\bar H}{t} (u^*) \]
confirming the time-transversality condition (c) of Theorem \ref{theorem: maximum principle}.
The non-triviality condition (e) is a direct consequence of (\ref{eq: result from the non-triviality}), which is proved by contradiction in the second part of this appendix. Finally, condition (d) follows directly from condition (iv). Therefore, Theorem \ref{theorem: maximum principle} is proved.
\vskip3ex
{\bf Proof of Condition (\ref{eq: result from the non-triviality}).} We recall that condition (\ref{eq: result from the non-triviality}) is
\begin{equation*} |\psi_1(t)-\mu(t)| + |\psi_2(t)| > 0 \quad \text{for all } t \in [0,T^*]. \end{equation*}
Condition (v) above implies that
\begin{equation}\label{item: non-trivaility condition for the extended problem}
|\psi_0(t)| + |\psi_1(t)-\mu(t)| + |\psi_2(t)| > 0, \qquad \text{for all } t\in [0,T^*].
\end{equation}
Indeed, by contradiction, if there exists some $t_1 \in [0,T^*]$ such that (\ref{item: non-trivaility condition for the extended problem}) is violated, then the conservation law (iii)
\[ \max_{u_0\in [1-\alpha, 1+\alpha], \ \varphi(u) \le 0} \{ (\psi_1-\mu)(u_1+v_1)u_0 + \psi_2(u_2+v_2)u_0 + \psi_0 u_0 \} = \lambda\]
implies that $\lambda=0$, contradicting (v). Now, to prove (\ref{eq: result from the non-triviality}), we suppose that there exists $t_1\in [0,T^*]$ such that (\ref{eq: result from the non-triviality}) is violated. Then, substituting in the maximum condition (ii), we obtain
\[ \psi_0(t_1) = \max_{u_0\in [1-\alpha, 1+\alpha]} \psi_0(t_1) u_0 \]
which implies that $\psi_0(t_1)=0$ (recalling that $u_0\neq 0$). Therefore, (\ref{item: non-trivaility condition for the extended problem}) is violated. This proves that (\ref{eq: result from the non-triviality}) holds true.
\end{document}
\begin{document}
\title{Reducing T-count with the ZX-calculus}
\author{Aleks Kissinger}
\affiliation{Radboud University Nijmegen}
\email{[email protected]}
\homepage{https://www.cs.ru.nl/A.Kissinger}
\author{John van de Wetering}
\affiliation{Radboud University Nijmegen}
\email{[email protected]}
\homepage{http://vdwetering.name}
\begin{abstract}
Reducing the number of non-Clifford quantum gates present in a circuit is an important task for efficiently implementing quantum computations, especially in the fault-tolerant regime.
We present a new method for reducing the number of T-gates in a quantum circuit based on the ZX-calculus, which matches or beats previous approaches to T-count reduction on the majority of our benchmark circuits in the ancilla-free case, in some cases yielding up to 50\% improvement. Our method begins by representing the quantum circuit as a ZX-diagram, a tensor network-like structure that can be transformed and simplified according to the rules of the ZX-calculus. We then show that a recently-proposed simplification strategy can be extended to reduce T-count using a new technique called phase teleportation. This technique allows non-Clifford phases to combine and cancel by propagating non-locally through a generic quantum circuit. Phase teleportation does not change the number or location of non-phase gates and the method also applies to arbitrary non-Clifford phase gates as well as gates with unknown phase parameters in parametrised circuits. Furthermore, the simplification strategy we use is powerful enough to validate equality of many circuits. In particular, we use it to show that our optimised circuits are indeed equal to the original ones.
We have implemented the routines of this paper in the open-source library PyZX.
\end{abstract}
\noindent{\it Keywords}: Quantum Circuit Optimisation, T-count Optimisation, ZX-calculus, Phase Polynomials, Local Complementation and Pivoting
\section{Introduction}
Quantum circuits give a simple, universal language for describing quantum computations at a low level. It is often useful when studying circuits to distinguish between two kinds of primitive operations: Clifford gates and non-Clifford gates. Circuits consisting only of Clifford gates can be efficiently classically simulated~\cite{aaronsongottesman2004}, and can be implemented in a fault-tolerant manner with relative ease within many quantum error correcting codes such as the surface code~\cite{RaussendorfHarrington,horsman2012surface}. However, achieving universality requires at least one non-Clifford gate, such as the $T$ gate. While techniques such as magic state distillation and injection allow for fault-tolerant implementation of $T$ gates, they typically require an order of magnitude more resources than Clifford gates~\cite{CampbellRoads}. Hence, minimisation of non-Clifford gates within a circuit is of paramount importance to fault-tolerant quantum computation.
There are methods for computing exact-optimal solutions to the problem of T-count minimisation, but they do so at the cost of an exponential running time \cite{amy2013meet,di2016parallelizing}. To date, the most successful scalable approaches to $T$-count minimisation have been based on \textit{phase polynomials}. Such methods rely on an efficient representation of circuits consisting of just CNOT and Z-phase gates in terms of their action on basis states. The first heuristic method for efficiently reducing T-count and T-depth using this representation, called \textit{Tpar}, was introduced by Amy et al in Ref.~\cite{amy2014polynomial}. The results of that paper were later improved upon in Ref.~\cite{amy2016t} and \cite{heyfron2018efficient} by exploiting equivalences between the phase polynomial optimisation problem and other known hard problems: respectively, least-distance Reed-Muller decoding, and least-rank factorisation of certain 3-tensors.
Phase-polynomial methods share the limitation that they cannot deal directly with arbitrary quantum circuits. In particular, an arbitrary circuit will also contain Hadamard gates, which destroy the phase polynomial structure.
Na\"ively, one can simply cut the circuit into Hadamard-free sections and apply the optimisation locally. This can be significantly improved by preprocessing to produce larger Hadamard-free sections: either by simple gate transformations~\cite{abdessaied2014quantum,nam2018automated} or introducing ancillae and classical control~\cite{heyfron2018efficient}.
While these approaches introduce various tricks and refinements, they share a reliance on phase polynomials as a common core. In this paper, we propose a new approach to reducing non-Clifford gate count based on the theoretical framework laid out in Ref.~\cite{cliff-simp}. We first transform a circuit into a special kind of tensor network called a \emph{ZX-diagram}~\cite{CD1,CD2}. This diagram is then subject to a collection of graphical transformation rules called the \textit{ZX-calculus}~\cite{Backens1}. By breaking the rigid circuit structure, ZX-diagrams are then subject to simplifications that have no circuit analogue.
It was noted in Ref.~\cite{cliff-simp} that non-Clifford phases (i.e. angles which are not multiples of $\pi/2$) form an obstruction to the simplification. To overcome this issue, we introduce one crucial refinement to the simplification procedure: the \textit{gadgetization} of non-Clifford phases. By splitting a node containing a phase into two parts consisting of the node itself, and a new \textit{phase gadget}, phases can propagate non-locally through a ZX-diagram and potentially cancel or combine with each other. In the case where there are no Hadamard gates in the circuit, these gadgets correspond to phase-parity terms in the representation of a phase polynomial, hence this non-local propagation can be seen as a generalisation of phase polynomial techniques to general circuits.
After performing a combination of phase-gadgetization, simplification, and phase-gadget cancellation, we can use a variation on the technique described in Ref.~\cite{cliff-simp} to re-extract a quantum circuit from the ZX-diagram with fewer non-Clifford phases. Alternatively, we can exploit the fact that our simplification procedure is completely parametric in non-Clifford phase angles to do something more lightweight: rather than combining two phase-gadgets into one, we can simply let the angle from one phase gadget `jump' onto the other one: $(\alpha_i, \alpha_j) \leadsto (\alpha_i + \alpha_j, 0)$. Since this doesn't have any effect on the graphical structure of the \zxdiagram, performing this modification to the phases of the original circuit will result in a new circuit that reduces to the same \zxdiagram as before. As a consequence, the new circuit is provably equivalent to the old one.
Hence, rather than re-extracting a circuit from a ZX-diagram, we use it as a tool for discovering phases that can be shifted around non-locally without changing the computed unitary. We call this technique \textit{phase teleportation}. A pleasant property of phase teleportation, as opposed to the simplify-and-extract method, is that it leaves the structure of the quantum circuit completely intact, only changing the parameters. Hence, 2-qubit gate count is never increased and gates are always applied between the same pairs of qubits as before. As pointed out in Ref.~\cite{nam2018automated}, this could be advantageous when the circuit has been designed with limited qubit connectivity of the physical qubits in mind. Both optimisation routines are implemented in the open source Python library \emph{PyZX}~\cite{PyZXGithub}.
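The phase-jump step $(\alpha_i, \alpha_j) \leadsto (\alpha_i + \alpha_j, 0)$ amounts to simple bookkeeping on a list of phase angles once the simplifier has identified a matching pair of gadgets (finding that match is where the ZX-calculus does the actual work). The following self-contained Python toy is an illustration only, not the PyZX implementation:

```python
from math import pi

def jump(phases, i, j):
    """Transfer the phase of gadget j onto gadget i:
    (a_i, a_j) -> (a_i + a_j, 0).  The total phase of the matched
    pair is preserved (mod 2*pi), so the unitary is unchanged."""
    phases = list(phases)
    phases[i] = (phases[i] + phases[j]) % (2 * pi)
    phases[j] = 0.0
    return phases

def t_count(phases, tol=1e-9):
    """Number of non-Clifford phases, i.e. angles that are not
    (close to) an integer multiple of pi/2."""
    return sum(1 for a in phases
               if min(a % (pi / 2), pi / 2 - a % (pi / 2)) > tol)
```

For instance, two matched $T$ phases ($\pi/4$ each) combine into a single Clifford $S$ phase ($\pi/2$), lowering the T-count by two.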
By leaving the circuit model we can sometimes `look around' obstructions such as Hadamard gates to find more optimisations. This is reflected in our results: in benchmark circuits with an abundance of Hadamard gates we significantly outperform previous methods.
We also use the simplification routine for ZX-diagrams to validate equality of circuits. We do this by composing the adjoint of the optimised circuit with the original circuit and checking whether our simplification routine reduces the resulting ZX-diagram to the identity. While this method cannot detect errors in a circuit, the set of rewrite rules forms a certificate of equality when it does reduce a circuit to the identity.
While the general problem of circuit equality validation is QMA-hard~\cite{bookatz2012qmacomplete}, and hence a general efficient validation strategy is unlikely to exist, our method is powerful enough to validate equality of our optimised circuits as well as those produced in Ref.~\cite{nam2018automated}.
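For intuition, the same equality-up-to-global-phase check can be done by brute force on circuits small enough to write down as matrices. The sketch below is a toy analogue for single-qubit circuits over the gate set in (\ref{eq:gate-set}); it is not the ZX-rewriting validation used in this paper, which avoids building exponentially large matrices:

```python
import cmath

def gate(name, alpha=0.0):
    """Single-qubit matrices from (eq:gate-set): H and Z_alpha."""
    if name == "H":
        s = 1 / 2 ** 0.5
        return [[s, s], [s, -s]]
    return [[1, 0], [0, cmath.exp(1j * alpha)]]  # Z_alpha

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adjoint(m):
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def unitary(circuit):
    """Compose a list of (name, alpha) gates, applied in list order,
    into one 2x2 unitary."""
    u = [[1, 0], [0, 1]]
    for name, alpha in circuit:
        u = mul(gate(name, alpha), u)
    return u

def equal_up_to_phase(a, b, tol=1e-9):
    """Check a^dagger b == e^{i*phi} * I, i.e. the circuits agree
    up to a global phase."""
    m = mul(adjoint(a), b)
    phase = m[0][0]
    return (abs(abs(phase) - 1) < tol and abs(m[1][1] - phase) < tol
            and abs(m[0][1]) < tol and abs(m[1][0]) < tol)
```

For example, the two-gate circuit $T\cdot T$ validates against the single gate $S = Z_{\pi/2}$, while a circuit containing a Hadamard does not.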
It should be noted that we target ancilla-free optimisation and compare ourselves to the best known results for ancilla-free T-count reduction. It is already known that the required number of T gates can decrease when ancillae are allowed~\cite{amy2014polynomial}. Ref.~\cite{heyfron2018efficient} obtains lower T-counts on many of the circuits we consider by introducing ancillae via a technique called \textit{Hadamard gadgetization}. Using a hybrid of our approach and more advanced phase polynomial techniques, we can obtain ZX-diagrams which \textit{in principle} exhibit very low T-counts, but re-extracting a quantum circuit from the ZX-diagram becomes an obstacle, and will almost certainly require introducing ancillae. This remains an open problem, which we will discuss in more detail in Section~\ref{sec:discussion}.
\noindent \textbf{Note}: A few days after the first version of this paper appeared as a preprint~\cite{tcountpreprint}, Zhang and Chen reported nearly identical numbers to those in Table~\ref{fig:results}, using an independent approach based on Pauli rotations~\cite{zhang2019optimizing}. It is a topic of ongoing research as to why these seemingly quite different methods produce the same T-counts.
\noindent \textbf{Acknowledgements}: The authors are supported in part by AFOSR grant FA2386-18-1-4028. They would like to thank Earl Campbell and Luke Heyfron for useful discussions and help running the Topt tool as well as Matthew Amy for checking various optimised circuits in the Feynman verifier~\cite{AmyVerification}. They furthermore thank Quanlong Wang and Niel de Beaudrap for interesting discussions on the ZX-calculus, phase gadgets, and circuit optimisation. Finally, they would like to thank Will Zeng and the Unitary Fund for their support of the PyZX project.
\section{Results}
Our procedure simplifies quantum circuits formed from the following primitive set of gates:
\begin{equation}\label{eq:gate-set}
\CX \ :=\
\left(\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{matrix}\right)
\qquad
Z_\alpha \ :=\
\left(\begin{matrix}
1 & 0 \\
0 & e^{i \alpha}
\end{matrix}\right)
\qquad
H \ :=\ \frac{1}{\sqrt{2}}
\left(\begin{matrix}
1 & 1 \\
1 & -1
\end{matrix}\right)
\end{equation}
with the goal of reducing the total number of gates of the form $Z_{\alpha}$ where $\alpha \neq k \cdot \frac\pi2$ for $k \in \mathbb Z$. For all of the benchmark circuits, all such gates are $T := Z_{\pi/4}$, so henceforth we will simply refer to this number as the T-count.
\begin{figure}
\caption{Original circuit}
\label{fig:tof3-circuit}
\caption{The circuit expanded as a \zxdiagram, with 21 T gates.}
\label{fig:tof3-zx-circuit}
\caption{Simplified \zxdiagram.}
\label{fig:tof3-zx-opt}
\caption{15 T gates remain after phase-teleportation.}
\label{fig:tof3-circuit-opt}
\caption{Overview of the steps in our phase-teleportation scheme.
}
\label{fig:overview}
\end{figure}
\begin{table}
\centering
\begin{tabu}{lrrrrrr}
Circuit & $n$& T& Best prev. & Method & PyZX & PyZX+TODD \\
\hline
\rowfont{\bf}
adder$_8$ \cite{AmyGithub} & 24 & 399 & 213 & RM$_m$ & 173 & \bf 167 \\
Adder8 \cite{NRSCMGithub} & 23 & 266 & 56 & NRSCM & 56 & 56 \\
Adder16 \cite{NRSCMGithub} & 47 & 602 & 120 & NRSCM & 120 & 120 \\
Adder32 \cite{NRSCMGithub} & 95 & 1274 & 248 & NRSCM & 248 & 248 \\
Adder64 \cite{NRSCMGithub} & 191 & 2618 & 504 & NRSCM & 504 & 504 \\
barenco-tof$3$ \cite{NRSCMGithub} & 5 & 28 & 16 & Tpar & 16 & 16 \\
barenco-tof$4$ \cite{NRSCMGithub} & 7 & 56 & 28 & Tpar & 28 & 28 \\
barenco-tof$5$ \cite{NRSCMGithub} & 9 & 84 & 40 & Tpar & 40 & 40 \\
barenco-tof$10$ \cite{NRSCMGithub} & 19 & 224 & 100 & Tpar & 100 & 100 \\
tof$_3$ \cite{NRSCMGithub} & 5 & 21 & 15 & Tpar & 15 & 15 \\
tof$_4$ \cite{NRSCMGithub} & 7 & 35 & 23 & Tpar & 23 & 23 \\
tof$_5$ \cite{NRSCMGithub}& 9 & 49 & 31 & Tpar & 31 & 31 \\
tof$_{10}$ \cite{NRSCMGithub} & 19 & 119 & 71 & Tpar & 71 & 71 \\
\rowfont{\it}
csla-mux$_3$ \cite{NRSCMGithub} & 15 & 70 & 58 & RM$_r$ & 62 & \bf 45 \\
\rowfont{\it}
csum-mux$_9$ \cite{NRSCMGithub} & 30 & 196 & 76 & RM$_r$ & 84 & \bf 72 \\
\rowfont{\bf}
cycle17$_3$ \cite{AmyGithub} & 35 & 4739 & 1944 & RM$_m$ & 1797 & 1797 \\
\rowfont{\it}
gf($2^4$)-mult \cite{NRSCMGithub} & 12 & 112 & 56 & TODD & 68 & \bf 52 \\
\rowfont{\it}
gf($2^5$)-mult \cite{NRSCMGithub} & 15 & 175 & 90 & TODD & 115 & \bf 86 \\
\rowfont{\it}
gf($2^6$)-mult \cite{NRSCMGithub} & 18 & 252 & 132 & TODD & 150 & \bf 122 \\
\rowfont{\it}
gf($2^7$)-mult \cite{NRSCMGithub} & 21 & 343 & 185 & TODD & 217 & \bf 173 \\
\rowfont{\it}
gf($2^8$)-mult \cite{NRSCMGithub} & 24 & 448 & 216 & TODD & 264 & \bf 214 \\
ham15-low \cite{AmyGithub} & 17 & 161 & 97 & Tpar & 97 & 97 \\
\rowfont{\bf}
ham15-med \cite{AmyGithub} & 17 & 574 & 230 & Tpar & 212 & 212 \\
ham15-high \cite{AmyGithub} & 20 & 2457 & 1019 & Tpar & 1019 & \bf 1013 \\
hwb$_6$ \cite{AmyGithub} & 7 & 105 & 75 & Tpar & 75 & \bf 72 \\
\rowfont{\bf}
hwb$_8$ \cite{AmyGithub} & 12 & 5887 & 3531 & RM$_{m\&r}$ & 3517 & \bf 3501 \\
\rowfont{\it}
mod-mult-55 \cite{NRSCMGithub} & 9 & 49 & 28 & TODD & 35 & \bf 20 \\
mod-red-21 \cite{NRSCMGithub} & 11 & 119 & 73 & Tpar & 73 & 73 \\
\rowfont{\bf}
mod5$_4$ \cite{NRSCMGithub} & 5 & 28 & 16 & Tpar & 8 & \bf 7 \\
\rowfont{\bf}
nth-prime$_6$ \cite{AmyGithub} & 9 & 567 & 400 & RM$_{m\&r}$ & 279 & 279 \\
\rowfont{\it}
nth-prime$_8$ \cite{AmyGithub} & 12 & 6671 & 4045 & RM$_{m\&r}$ & 4047 & \bf 3958 \\
qcla-adder$_{10}$ \cite{NRSCMGithub} & 36 & 589 & 162 & Tpar & 162 & \bf 158 \\
\rowfont{\it}
qcla-com$_7$ \cite{NRSCMGithub} & 24 & 203 & 94 & RM$_m$ & 95 & \bf 91 \\
qcla-mod$_7$ \cite{NRSCMGithub} & 26 & 413 & 235${}^\textrm{a}$ & NRSCM & 237 & \bf 216 \\
rc-adder$_6$ \cite{NRSCMGithub} & 14 & 77 & 47 & RM$_{m\&r}$ & 47 & 47 \\
vbe-adder$_3$ \cite{NRSCMGithub} & 10 & 70 & 24 & Tpar & 24 & 24
\end{tabu}
\caption{Benchmark circuits. The columns \emph{$n$} and \emph{T} contain the number of qubits and T gates in the original circuit. \emph{Best prev.} is the previous best-known ancilla-free T-count for that circuit and \emph{Method} specifies which method was used: \emph{RM$_m$} and \emph{RM$_r$} refer to the \emph{maximum} and \emph{recursive} Reed-Muller decoder of Ref.~\cite{amy2016t}, \emph{Tpar} is Ref.~\cite{amy2014polynomial}, \emph{TODD} is Ref.~\cite{heyfron2018efficient} and \emph{NRSCM} refers to Ref.~\cite{nam2018automated}. \emph{PyZX} and \emph{PyZX + TODD} specify the T-counts produced by the procedures outlined in the Methods section. Numbers shown in bold are better than previous best, and italics are worse. The superscript (a) indicates an error in a previously reported T-count. The error was found using Amy's Feynman tool~\cite{AmyVerification}.
\label{fig:results}}
\end{table}
The four main steps of our procedure are depicted in Fig.~\ref{fig:overview} and described in detail in Section~\ref{sec:simplifymain}. If a circuit has gates which are not in the gate set~\eqref{eq:gate-set} (as in Fig.~\ref{fig:tof3-circuit}), we first expand those gates in terms of \CX, H, and T gates and translate that circuit into a ZX-diagram (Fig.~\ref{fig:tof3-zx-circuit}). We then apply the simplification procedure described in Section~\ref{sec:simplify-zx-full} to obtain a reduced ZX-diagram (Fig.~\ref{fig:tof3-zx-opt}). Finally, we use the data about corresponding phases obtained from this simplification to perform \textit{phase teleportation} in the original circuit to reduce T-count (Fig.~\ref{fig:tof3-circuit-opt}).
For our benchmarks, we have used all of the Clifford+T benchmark circuits from Refs.~\cite{amy2014polynomial,nam2018automated}, minus some of the larger members of the \texttt{gf($2^n$)-mult} family. These benchmark circuits were also used in Refs.~\cite{amy2016t,heyfron2018efficient} and include components that are likely to be of interest to quantum algorithms, such as adders or Grover oracles. Of these 36 benchmark circuits, we match or improve upon the state of the art for 26 circuits ($\sim$72\%), and strictly improve upon it for 6 ($\sim$17\%). If we apply some simple post-processing afterwards and feed the resulting circuit into the TODD phase polynomial optimiser~\cite{heyfron2018efficient}, we improve on the state of the art for 20 circuits ($\sim$56\%). These two methods seem to complement each other well in the ancilla-free regime, obtaining significantly better numbers than either method alone, and matching or beating all other methods for every circuit tested. These results are given in Table~\ref{fig:results}.
We achieve a T-count identical to the previously best-known result for 20 of the 36 circuits.
This is interesting, since the methods we use are quite different from previous ones, and it can be seen as evidence that those T-counts lie in some kind of `local optimum', at least in the ancilla-free case. The circuits where PyZX does considerably better are ones that contain many Hadamard gates.
The fact that PyZX achieves improvements when there are many Hadamard gates is as expected, as most other successful methods employ a dedicated phase-polynomial optimiser~\cite{amy2014polynomial,amy2016t,nam2018automated,heyfron2018efficient} that is hampered by the existence of Hadamard gates. On the other hand, the only circuits where phase polynomial techniques significantly out-perform our methods are in the \texttt{gf($2^n$)-mult} family. After some simple pre-processing, these circuits have almost no Hadamard gates, hence they are very well-suited to phase polynomial techniques.
It should be noted that while the circuits of Table~\ref{fig:results} are all written in the Clifford+T gate set, our optimisation routine is agnostic to the values of the non-Clifford phases. We have also tested our routine on the quantum Fourier transform circuits of Ref.~\cite{nam2018automated} that include more general non-Clifford phases, and in each case found that our non-Clifford gate count exactly matched their results.
The optimisation routines are implemented in the open source Python library \emph{PyZX}~\cite{PyZXGithub}. All circuit optimisations were run on a consumer laptop. Our method took a few seconds to run for most circuits, with some of the bigger ones taking up to a few minutes. We tested the correctness of the optimisation by generating the matrix of the original and the optimised circuit for thousands of small randomly generated circuits and checking equality, in addition to doing the same for the smaller benchmark circuits.
We can also use the ZX-calculus for verification of equality~\cite{chancellor2016coherent}. For all benchmark circuits, we composed the original circuit with the adjoint of the optimised one, and then ran our simplification routine on this circuit. In every case, the resulting circuit was reduced to the identity. Since the set of rewrites needed to do this is vastly different from the ones used to produce the original optimised circuit, this is strong evidence that the optimised circuit is correct, as it is very unlikely that an error in our rewrites would cancel itself in this way. With the same validation scheme we have also verified correctness of all the optimised benchmark circuits of Ref.~\cite{nam2018automated}, except for \texttt{qcla-mod$_7$}, which was then shown to be incorrect using the Feynman tool~\cite{AmyVerification}.
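The semantic fact this validation certifies, namely that composing the optimised circuit's adjoint with the original yields the identity up to a global phase, can be sketched numerically. The following toy check uses plain matrix arithmetic rather than the ZX-based simplifier, with the single-qubit identity $HZH = X$ standing in for an "optimisation"; all function names are ours, for illustration only:

```python
def mat_mul(A, B):
    # naive square-matrix product
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A))]

def equal_up_to_phase(A, B, tol=1e-9):
    # fix the global phase at the first nonzero entry of B, then compare
    for i in range(len(A)):
        for j in range(len(A)):
            if abs(B[i][j]) > tol:
                phase = A[i][j] / B[i][j]
                return all(abs(A[r][c] - phase * B[r][c]) < tol
                           for r in range(len(A)) for c in range(len(A)))
    return False

H = [[1 / 2 ** 0.5, 1 / 2 ** 0.5], [1 / 2 ** 0.5, -1 / 2 ** 0.5]]
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

orig = mat_mul(H, mat_mul(Z, H))   # "original" circuit H.Z.H
check = mat_mul(adjoint(X), orig)  # adjoint of the "optimised" circuit, composed
assert equal_up_to_phase(check, I)  # reduces to the identity, as expected
```

The ZX validation establishes the same equation diagrammatically, without ever computing matrices, which is what makes it tractable for circuits on dozens of qubits.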
\section{Discussion}\label{sec:discussion}
We have implemented a novel quantum circuit optimisation routine that uses \zxdiagrams to go beyond the rigid structure of the circuit model. This routine matches or beats the previous best-known T-count for the majority of the benchmark circuits we have tested. We have furthermore shown that combining our routine with the TODD compiler~\cite{heyfron2018efficient} achieves T-counts that are better than either of these methods separately. Finally, our simplification routine is powerful enough to validate the correctness of our optimised circuits.
There are quite a few ways in which our routine can be improved or be made more versatile.
Our method currently does not affect the amount of CNOT or Hadamard gates in the circuit. This is because we are not actually making use of the simplified \zxdiagram to extract a new circuit, but we are simply using this diagram to track cancellation of phase gates. It is possible to extract a circuit directly from the \zxdiagram produced by our routine, but at this stage such a circuit often contains more gates than we started out with. For future work, an obvious direction then is to improve our circuit extraction methods to produce better circuits directly from the \zxdiagrams.
Our method currently only supports ancilla-free optimisation. It has been shown~\cite{amy2014polynomial,heyfron2018efficient} that allowing additional ancillae can greatly decrease the required T-count. A promising approach to introducing ancillae into our simplification procedure is the following. We can treat the reduced ZX-diagram as a phase polynomial circuit, where every non-input corresponds to introducing an ancilla in the $\ket{+}$ state and every non-output corresponds to projecting (i.e. post-selecting) onto $\bra{+}$. Indeed we can transform it into a circuit of this form using the \SpiderRule rule of the \zxcalculus (cf.~Section~\ref{sec:zxcalculus}):
\ctikzfig{tof3-zx-opt-anc}
The middle part of the right-hand side can be described as a phase polynomial (cf.~Section~\ref{sec:phase-gadgets}), and hence can be further reduced by powerful phase polynomial techniques such as Reed-Muller decoding~\cite{amy2016t} or 3-tensor factorisation~\cite{heyfron2018efficient}. The resulting circuit contains post-selection and cannot be realised deterministically in general. However, in Ref.~\cite{cliff-simp}, we showed that, if certain graph-theoretic constraints are respected, these interior vertices (i.e.~ancillae) can always be eliminated. Since phase polynomial techniques typically break those constraints, it is an interesting open problem to see if deterministic computation can still be recovered, possibly by introducing measurements and classical control.
While the \zxcalculus forms a powerful language for reasoning about low-level gate sets (e.g. Clifford+T), it can only reason about Toffoli and CCZ gates in an indirect manner, by first translating those gates into Clifford+T representations. The \emph{ZH-calculus}~\cite{zh-calculus}, in contrast, makes it straightforward to reason about mid-level quantum gates such as Toffoli and CCZ gates. It then stands to reason that we can achieve further optimisations for circuits defined in terms of these mid-level gates (such as adders and classical oracles), by first doing simplifications in the ZH-calculus, then translating the diagram into a Clifford+T gate set, and doing further simplifications in the \zxcalculus.
It was already noted in the introduction that our simplification is completely parametric in non-Clifford phase angles. Indeed we show the correctness of the phase teleportation routine in Section~\ref{sec:teleport} by working on a representation of a circuit where all non-Clifford angles are replaced with free parameters. The procedure itself then proceeds by combining and eliminating some parameters. An immediate consequence is that our simplification procedure generalises from concrete circuits to parametrised circuits, where the analogue of T-count reduction is elimination of redundant free parameters. This could potentially yield significant improvements in the performance of hybrid classical/quantum algorithms relying on parametrised circuits, such as the quantum variational eigensolver~\cite{peruzzo2014variational}.
One final open question concerns the power of our circuit equality validation scheme, using the ZX-calculus simplifier. We have already noted that this scheme seems to be powerful enough to validate any optimisation made by our simplification routine or the one found in Ref.~\cite{nam2018automated}. It is then an interesting question to explore the exact power (and limitations) of this scheme.
\section{Methods}\label{sec:simplifymain}
In this section we will explain our main contributions in depth, namely how to do T-count optimisation using the \zxcalculus. On a high level this proceeds in the following way:
\begin{itemize}
\item First we translate a quantum circuit into a \zxdiagram. See Section~\ref{sec:zxcalculus}.
\item We apply the diagrammatic simplification procedure \textbf{ZX-simplify} laid out in Sections~\ref{sec:simplify-zx-cliff}-\ref{sec:simplify-zx-full}.
\item Finally, by keeping track of certain simplification steps, and how they affect phases in the original circuit, we will decrease the T-count of the circuit by means of \textit{phase teleportation}. See Section~\ref{sec:teleport}.
\end{itemize}
Section~\ref{sec:TODD} explains how our PyZX-produced circuit is combined with post-processing and the TODD compiler.
\subsection{Background: the \zxcalculus}\label{sec:zxcalculus}
We will provide a brief overview of the \zxcalculus. For an in-depth
reference see Ref.~\cite{CKbook}.
The \zxcalculus is a diagrammatic language similar to the familiar
quantum circuit notation. A \emph{\zxdiagram} (or simply
\emph{diagram}) consists of \emph{wires} and \emph{spiders}. Wires
entering the diagram from the left are \emph{inputs}; wires exiting to
the right are \emph{outputs}. Given two diagrams we can compose them
by joining the outputs of the first to the inputs of the second, or
form their tensor product by simply stacking the two diagrams.
Spiders are linear operations which can have any number of input or output
wires. There are two varieties: $Z$ spiders depicted as green dots and $X$ spiders depicted as red dots:\footnote{If you are reading this
document in monochrome or otherwise have difficulty distinguishing green and red, $Z$ spiders will appear lightly-shaded and $X$ darkly-shaded.}
\[
\small
\tikzfig{Zsp-a} \ := \ \ketbra{0...0}{0...0} +
e^{i \alpha} \ketbra{1...1}{1...1}
\qquad
\tikzfig{Xsp-a} \ := \ \ketbra{+...+}{+...+} +
e^{i \alpha} \ketbra{-...-}{-...-}
\]
The diagram as a whole corresponds to a linear map built from the
spiders (and permutations) by the usual composition and tensor product
of linear maps. As a special case, diagrams with no inputs represent
(unnormalised) state preparations.
\begin{example}\label{ex:basic-maps-and-states}
We can immediately write down some simple state preparations and
unitaries in the \zxcalculus:
\[
\begin{array}{rclcrcl}
\tikzfig{ket-+} & = & \ket{0} + \ket{1} \ \propto \ket{+} &
\qquad\qquad &
\tikzfig{ket-0} & = & \ket{+} + \ket{-} \ \propto \ket{0} \\
&\quad& & & \quad \\
\tikzfig{Z-a} & = & \ketbra{0}{0} + e^{i \alpha} \ketbra{1}{1} =
Z_\alpha &
&
\tikzfig{X-a} & = & \ketbra{+}{+} + e^{i \alpha} \ketbra{-}{-} = X_\alpha
\end{array}
\]
In particular we have the Pauli matrices:
\[
\tikzfig{Z} \ \ =\ \ Z \qquad\qquad \tikzfig{X}\ \ =\ \ X \qquad\qquad
\]
\end{example}
It will be convenient to introduce a symbol -- a yellow square -- for
the Hadamard gate. This is defined (up to a global phase) by the equation:
\begin{equation}\label{eq:Hdef}
\tikzfig{had-alt}
\end{equation}
We will often use an alternative notation to simplify the diagrams,
and replace a Hadamard between two spiders by a blue dashed edge, as
illustrated below.
\ctikzfig{blue-edge-def}
Both the blue edge notation and the Hadamard box can always be
translated back into spiders when necessary. We will refer to the blue
edge as a \emph{Hadamard edge}.
Two diagrams are considered \emph{equal} when one can be deformed to
the other by moving the vertices around in the plane, bending,
unbending, crossing, and uncrossing wires, as long as the connectivity
and the order of the inputs and outputs is maintained. Equivalently, a
ZX-diagram can be considered as a graphical depiction of a tensor network,
as in e.g.~\cite{Penrose}. Then, just like any other tensor network, we can observe that the interpretation of a ZX-diagram is unaffected by deformation. As tensors, Z and X spiders can be written as follows:
\begin{align*}
\left( \ \tikzfig{Zsp-nolegs} \ \right)_{i_1...i_m}^{j_1...j_n} & =
{\small \begin{cases}
1 & \textrm{ if } i_1 = ... = i_m = j_1 = ... = j_n = 0 \\
e^{i \alpha} & \textrm{ if } i_1 = ... = i_m = j_1 = ... = j_n = 1 \\
0 & \textrm{ otherwise}
\end{cases}} \\
\left( \ \tikzfig{Xsp-nolegs} \ \right)_{i_1...i_m}^{j_1...j_n} & =
{\small \frac{1}{\sqrt{2}} \cdot
\begin{cases}
1 + e^{i \alpha} & \textrm{ if } i_1 \oplus ... \oplus i_m \oplus j_1 \oplus ... \oplus j_n = 0 \\
1 - e^{i \alpha} & \textrm{ if } i_1 \oplus ... \oplus i_m \oplus j_1 \oplus ... \oplus j_n = 1
\end{cases}}
\end{align*}
where all $i_\alpha, j_\beta$ range over $\{0,1\}$.
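These tensor formulas can be transcribed directly into code as a sanity check. The sketch below (function names are ours, not PyZX's) builds the spider tensors entrywise, and confirms that a 1-input, 1-output $Z$ spider is the phase gate $\operatorname{diag}(1, e^{i\alpha})$, and that the $X$ spider with $\alpha = \pi$ is the Pauli $X$ up to the scalar $\sqrt 2$ left over by the normalisation written above:

```python
import cmath

def z_spider(m, n, alpha):
    """Tensor entries of a Z spider with m inputs, n outputs and phase alpha."""
    d = {}
    for bits in range(2 ** (m + n)):
        idx = tuple((bits >> k) & 1 for k in range(m + n))
        if all(b == 0 for b in idx):
            d[idx] = 1.0
        elif all(b == 1 for b in idx):
            d[idx] = cmath.exp(1j * alpha)
        else:
            d[idx] = 0.0
    return d

def x_spider(m, n, alpha):
    """Tensor entries of an X spider: (1 +/- e^{i alpha}) / sqrt(2) by parity."""
    d = {}
    for bits in range(2 ** (m + n)):
        idx = tuple((bits >> k) & 1 for k in range(m + n))
        parity = sum(idx) % 2
        d[idx] = (1 + cmath.exp(1j * alpha) * (-1) ** parity) / 2 ** 0.5
    return d

# 1-input/1-output Z spider is the phase gate diag(1, e^{i alpha})
zs = z_spider(1, 1, cmath.pi / 4)
assert abs(zs[(0, 0)] - 1) < 1e-9
assert abs(zs[(1, 1)] - cmath.exp(1j * cmath.pi / 4)) < 1e-9
assert abs(zs[(0, 1)]) < 1e-9

# 1-input/1-output X spider with alpha = pi is Pauli X, up to the scalar sqrt(2)
xs = x_spider(1, 1, cmath.pi)
assert abs(xs[(0, 0)]) < 1e-9
assert abs(xs[(0, 1)] - 2 ** 0.5) < 1e-9
```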
In addition to simple deformations, ZX-diagrams satisfy a set of equations called the \zxcalculus. There exist several variations of the ZX-calculus. The set of rules we will use is presented in Figure~\ref{fig:zx-rules}. Diagrams that can be transformed into each other using the rules of the ZX-calculus correspond to equal linear maps (up to normalisation). ZX-diagrams with arbitrary angles are expressive enough to represent any linear map~\cite{CD2}. It is often useful to consider \textit{Clifford ZX-diagrams}, by analogy to Clifford circuits, where all angles are restricted to multiples of $\pi/2$. In that case, the rules given in Figure~\ref{fig:zx-rules} are \textit{complete} with respect to linear map equality~\cite{Backens1}. That is, if two Clifford ZX-diagrams describe the same linear map, one can be transformed into the other using the rules in Figure~\ref{fig:zx-rules}. Extensions of the calculus exist that are complete for larger families of \zxdiagrams/maps, including \textit{Clifford+T} \zxdiagrams \cite{SimonCompleteness}, where phases are multiples of $\pi/4$, and arbitrary \zxdiagrams where any phase is allowed~\cite{HarnyAmarCompleteness,JPV-universal,euler-zx}.
\begin{figure}
\caption{The rules of the \zxcalculus.}
\label{fig:zx-rules}
\end{figure}
Quantum circuits can be translated into \zxdiagrams in a straightforward manner.
We will take as our starting point circuits constructed
from the following universal set of gates:
\[
\CX \ :=\
\left(\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{matrix}\right)
\qquad\qquad
Z_\alpha \ :=\
\left(\begin{matrix}
1 & 0 \\
0 & e^{i \alpha}
\end{matrix}\right)
\qquad\qquad
H \ :=\ \frac{1}{\sqrt{2}}
\left(\begin{matrix}
1 & 1 \\
1 & -1
\end{matrix}\right)
\]
This gate set admits a convenient representation in terms of
spiders:
\begin{align}\label{eq:zx-gates}
\CX & = \tikzfig{cnot} &
Z_\alpha & = \tikzfig{Z-a} &
H & = \tikzfig{h-alone}
\end{align}
Note that for the CNOT gate, the green spider is the first (i.e.~control) qubit and the red spider is the second (i.e.~target) qubit.
Other common gates can easily be expressed in terms of these gates. In particular, $S := Z_{\frac\pi2}$, $T := Z_{\frac\pi4}$ and:
\begin{align}\label{eq:zx-derived-gates}
X_\alpha & = \tikzfig{X-a-expanded} &
\ensuremath{\textrm{CZ}}\xspace & = \tikzfig{cz-small}
\end{align}
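The matrix identities underlying this gate set are easy to verify numerically. A small illustrative sketch checks that $T^2 = S$, $S^2 = Z$, and that conjugating $Z_\alpha$ by Hadamards produces exactly $X_\alpha = \ketbra{+}{+} + e^{i \alpha} \ketbra{-}{-}$:

```python
import cmath

def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

def Z_rot(alpha):
    # the Z_alpha phase gate from the gate set above
    return [[1, 0], [0, cmath.exp(1j * alpha)]]

H = [[1 / 2 ** 0.5, 1 / 2 ** 0.5], [1 / 2 ** 0.5, -1 / 2 ** 0.5]]

T = Z_rot(cmath.pi / 4)
S = Z_rot(cmath.pi / 2)
Z = Z_rot(cmath.pi)

assert close(mat_mul(T, T), S)   # T^2 = S
assert close(mat_mul(S, S), Z)   # S^2 = Z

# X_alpha = H . Z_alpha . H equals |+><+| + e^{i alpha} |-><-|
alpha = 0.7
X_alpha = mat_mul(H, mat_mul(Z_rot(alpha), H))
e = cmath.exp(1j * alpha)
expected = [[(1 + e) / 2, (1 - e) / 2], [(1 - e) / 2, (1 + e) / 2]]
assert close(X_alpha, expected)
```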
\begin{figure}
\caption{Example of a graph-like \zxdiagram.}
\label{fig:graph-like}
\end{figure}
The first step of our simplification procedure is to transform the circuit into something we call a \emph{graph-like} \zxdiagram (see Fig.~\ref{fig:graph-like} for an example).
\begin{definition}\label{def:graph-form}
A \zxdiagram is \emph{graph-like} when:
\begin{enumerate}
\item All spiders are Z-spiders.
\item Z-spiders are only connected via Hadamard edges.
\item There are no parallel Hadamard edges or self-loops.
\item Every input or output is connected to a Z-spider and every Z-spider is connected to at most one input or output.
\end{enumerate}
\end{definition}
In Ref.~\cite{cliff-simp} it is shown that any \zxdiagram can efficiently be transformed into a graph-like \zxdiagram using the rules of the ZX-calculus. This transformation essentially amounts to turning all X spiders into Z spiders with the \HadamardRule rule, fusing as many spiders together as possible with \SpiderRule, and removing parallel edges/self-loops with the following derived rules:
\begin{equation}\label{eq:parallel-edges-loops}
\tikzfig{par-edge-rem} \qquad
\tikzfig{self-loop-rem} \qquad
\tikzfig{h-self-loop-rem}
\end{equation}
In particular, the number of non-Clifford phases in a diagram is never increased, and can actually be decreased by the \SpiderRule rule, as phases are added together.
We call this \textit{graph-like} because the resulting \zxdiagram is essentially an undirected, simple graph whose vertices are labelled by phase angles.
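Since a pair of parallel Hadamard edges cancels by the first rule of~\eqref{eq:parallel-edges-loops}, the edge set of a graph-like diagram behaves like a set with mod-2 ("toggle") insertion. A minimal sketch of this bookkeeping (illustrative only; PyZX's actual data structures differ, and the self-loop rules are omitted here):

```python
def toggle_edge(edges, u, v):
    """Insert a Hadamard edge mod 2: adding a parallel edge cancels both."""
    if u == v:
        return edges  # self-loops are handled by separate rules, ignored here
    e = frozenset((u, v))
    if e in edges:
        return edges - {e}
    return edges | {e}

g = frozenset()
g = toggle_edge(g, 'a', 'b')   # add a Hadamard edge between spiders a and b
assert frozenset(('a', 'b')) in g
g = toggle_edge(g, 'a', 'b')   # a parallel Hadamard edge cancels the first
assert g == frozenset()
```

This mod-2 behaviour is what makes the local complementation and pivoting rules of the next section act as edge \emph{complementations} rather than edge additions.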
\subsection{Clifford simplification of ZX-diagrams}\label{sec:simplify-zx-cliff}
A spider connected to an input or an output is called a \textit{boundary spider}, whereas all other spiders are called \textit{interior spiders}. If we interpret an $N$-qubit circuit as a ZX-diagram, there are precisely $N$ inputs and $N$ outputs, hence there are at most $2N$ boundary spiders. On the other hand, there will in general be a very large number of interior spiders.
The main idea behind the first part of our simplification strategy is to remove as many interior \textit{Clifford spiders}, i.e. spiders whose phase is a multiple of $\pi/2$, as possible. We do this by using two rewrite rules based on the graph-theoretic operations of \emph{local complementation} and \emph{pivoting}. For the proof of correctness of these rewrite rules we refer to Ref.~\cite{cliff-simp}.
The first rule, based on local complementation, deletes a spider with a phase of $\pm\pi/2$ and introduces edges between all of its neighbours:
\begin{equation}\label{eq:lc-simp}
\tikzfig{lc-simp}
\end{equation}
Since parallel edges vanish (cf. equations~\eqref{eq:parallel-edges-loops}), this can be seen as complementing the set of edges connecting the neighbours of the deleted vertex, hence the name.
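The graph-theoretic part of this rule is easy to state in code. The sketch below (our own illustration; the $\mp\pi/2$ phase updates on the neighbours are omitted) deletes a vertex and complements the edges among its neighbours:

```python
def local_complement_and_delete(adj, v):
    """Delete spider v and toggle all edges among its neighbours.

    adj maps each vertex to the set of its neighbours; phase bookkeeping
    is omitted in this sketch."""
    nbrs = list(adj[v])
    new = {u: set(n) for u, n in adj.items() if u != v}
    for u in new:
        new[u].discard(v)
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            a, b = nbrs[i], nbrs[j]
            if b in new[a]:            # edge present: complementation removes it
                new[a].remove(b); new[b].remove(a)
            else:                      # edge absent: complementation adds it
                new[a].add(b); new[b].add(a)
    return new

# a star: centre c connected to 1, 2, 3.  Deleting c yields a triangle on 1, 2, 3.
adj = {'c': {1, 2, 3}, 1: {'c'}, 2: {'c'}, 3: {'c'}}
out = local_complement_and_delete(adj, 'c')
assert out == {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
```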
The second rule deletes pairs of \textit{Pauli spiders}, i.e. spiders whose phase is a multiple of $\pi$. For a pair of connected Pauli spiders $u, v$, we can split the neighbourhood of $\{u,v\}$ into three pieces: $U$ the spiders only connected to $u$, $V$ the spiders only connected to $v$, and $W$, the spiders connected to both. We can then delete the pair of spiders $u, v$ provided we introduce complete bipartite graphs on $(U,W)$, $(V,W)$ and $(U,V)$:
\begin{equation}\label{eq:pivot-simp}
\tikzfig{pivot-simp}
\end{equation}
Again, thanks to \eqref{eq:parallel-edges-loops}, this can be seen as complementing the sets of edges present in the three bipartite graphs $(U,W)$, $(V,W)$ and $(U,V)$.
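The pivot's effect on the underlying graph can be sketched in the same style as local complementation (again omitting the phase updates, which this rule also performs):

```python
def pivot_and_delete(adj, u, v):
    """Delete the connected Pauli pair u, v, complementing the bipartite edge
    sets between U (exclusive neighbours of u), V (exclusive neighbours of v)
    and W (common neighbours).  Phase bookkeeping omitted."""
    Nu, Nv = adj[u] - {v}, adj[v] - {u}
    U, V, W = Nu - Nv, Nv - Nu, Nu & Nv
    new = {w: set(n) - {u, v} for w, n in adj.items() if w not in (u, v)}
    for A, B in ((U, W), (V, W), (U, V)):
        for a in A:
            for b in B:
                if b in new[a]:
                    new[a].remove(b); new[b].remove(a)
                else:
                    new[a].add(b); new[b].add(a)
    return new

# u sees v and 1; v sees u and 2.  Pivoting on (u, v) connects 1 and 2.
adj = {'u': {'v', 1}, 'v': {'u', 2}, 1: {'u'}, 2: {'v'}}
out = pivot_and_delete(adj, 'u', 'v')
assert out == {1: {2}, 2: {1}}
```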
Since the rules \LcompSimp and \PivotSimp both delete at least one spider, we can simply apply them repeatedly until no rule matches. This gives us a terminating procedure for simplifying our diagram. Note that we do not target the spiders in any specific order. Different orders of application will yield different diagrams (i.e. these rules are not \textit{confluent}), but we always obtain the same number of non-Clifford spiders at the end.
At this point, the simplification procedure in Ref.~\cite{cliff-simp} employs a variation of \PivotSimp to remove a few more Pauli spiders and terminates. In particular, nothing is done to eliminate \textit{non-Clifford} spiders. This is the goal of the next two sections.
\subsection{Phase gadgets}\label{sec:phase-gadgets}
We first introduce a new concept for \zxdiagrams: a \textit{phase gadget}. A phase gadget is simply an arity-1 spider with angle $\alpha$, connected via a Hadamard edge to a spider with no angle:
\[
\tikzfig{phase-gadget}
\]
Phase gadgets are a useful tool for working with \zxdiagrams corresponding to unitaries. For example, the diagram
\begin{equation}\label{eq:phase-gadget-unitary}
\tikzfig{phase-gadget-unitary}
\end{equation}
yields an $n$-qubit unitary $U$ defined by:
\[
U ::
\ket{x_1, ..., x_n} \mapsto
e^{i \alpha (x_1 \oplus \ldots \oplus x_n)} \ket{x_1, ..., x_n}
\]
In fact, it is straightforward to show concretely (or in the ZX-calculus) that this unitary is equal to a ladder of CNOT gates, followed by a single phase gate, followed by the reverse ladder of CNOT gates. For example, on 4 qubits:
\begin{equation}\label{eq:phasegadget}
\tikzfig{phase-gadget-circ}
\end{equation}
Since these gates are diagonal in the computational basis, they commute with each other. This also follows straightforwardly from the \SpiderRule rule:
\ctikzfig{phase-gadget-commute}
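The equality in~\eqref{eq:phasegadget} can be checked by simulating the circuit on computational basis states, on which CNOTs act as XORs and the phase gate contributes $e^{i\alpha}$ exactly when the accumulated parity is 1. A small sketch, assuming the descending-ladder shape shown above:

```python
import cmath

def gadget_ladder_phase(bits, alpha):
    """Simulate the CNOT-ladder circuit on a computational basis state:
    the ladder accumulates the parity on the last qubit, a Z_alpha phase
    gate fires on it, and the reverse ladder restores the input state."""
    state = list(bits)
    n = len(state)
    for i in range(n - 1):                # downward CNOT ladder
        state[i + 1] ^= state[i]
    phase = cmath.exp(1j * alpha) if state[-1] else 1.0   # Z_alpha on last qubit
    for i in reversed(range(n - 1)):      # reverse ladder undoes the first one
        state[i + 1] ^= state[i]
    assert state == list(bits)            # basis state is restored
    return phase

alpha = cmath.pi / 4
for bits_int in range(16):                # all 4-qubit basis states
    bits = [(bits_int >> k) & 1 for k in range(4)]
    parity = bits[0] ^ bits[1] ^ bits[2] ^ bits[3]
    expected = cmath.exp(1j * alpha * parity)
    assert abs(gadget_ladder_phase(bits, alpha) - expected) < 1e-9
```

Since the circuit is diagonal, checking it on basis states suffices.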
Arbitrary diagonal unitaries, i.e. unitaries of the form:
\[
U ::
\ket{x_1, ..., x_n} \mapsto
e^{i f(x_1, \ldots, x_n)} \ket{x_1, ..., x_n}
\]
for some $f : \{0,1\}^n \to \mathbb R$, can easily be expressed in terms of phase gadgets. For example:
\[
\tikzfig{phase-poly}
\ \ ::\ \
\ket{x_1,x_2,x_3,x_4}
\mapsto
e^{i (
\frac \pi 4 x_1 \oplus x_4 +
\frac \pi 8 x_1 \oplus x_2 -
\frac \pi 4 x_1 \oplus x_3)}
\ket{x_1,x_2,x_3,x_4}
\]
In fact, the angles appearing in the phase gadgets correspond to the Fourier expansion\footnote{A brief discussion of the form~\eqref{eq:fourier}, and its relation to the usual Fourier transform of a semi-boolean function, can be found in the appendix of Ref.~\cite{amy2018cnot}.} of the semi-boolean function $f$. That is, we can express any function $f : \{0,1\}^n \to \mathbb R$ as follows:
\begin{equation}\label{eq:fourier}
f(\vec x) = \alpha + \sum_{\vec y} \alpha_{\vec y} (x_1 y_1 \oplus \ldots \oplus x_n y_n)
\end{equation}
where $\vec x, \vec y \in \{ 0, 1 \}^n$ and $\alpha, \alpha_{\vec y} \in \mathbb R$. In the context of diagonal unitaries, $\alpha$ yields a global phase (which we ignore), and each $\alpha_{\vec y}$ corresponds to a phase gadget.
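The coefficients in~\eqref{eq:fourier} can be computed by brute force via the $\pm 1$-character transform $\hat f(\vec y) = 2^{-n} \sum_{\vec x} f(\vec x)(-1)^{\vec x \cdot \vec y}$: using $(-1)^p = 1 - 2p$ for $p \in \{0,1\}$ gives $\alpha = \sum_{\vec y} \hat f(\vec y)$ and $\alpha_{\vec y} = -2 \hat f(\vec y)$ for $\vec y \neq \vec 0$. A sketch (the bitmask encoding of $\{0,1\}^n$ is our own convention):

```python
import math

def fourier_gadget_coefficients(f, n):
    """Compute alpha and the phase-gadget angles alpha_y of Eq. (fourier)
    for a semi-boolean f : {0,1}^n -> R, with bitstrings encoded as ints."""
    N = 1 << n
    def parity(x, y):
        return bin(x & y).count('1') % 2
    fhat = {y: sum(f(x) * (-1) ** parity(x, y) for x in range(N)) / N
            for y in range(N)}
    alpha = sum(fhat.values())                       # global-phase constant
    coeffs = {y: -2 * fhat[y] for y in range(1, N)}  # one gadget per parity
    return alpha, coeffs

# e.g. a controlled-phase-style function pi/4 * x1 * x2 on 3 bits
n = 3
f = lambda x: math.pi / 4 * ((x >> 0) & 1) * ((x >> 1) & 1)
alpha, coeffs = fourier_gadget_coefficients(f, n)

# the expansion reproduces f on every input
for x in range(1 << n):
    recon = alpha + sum(c * (bin(x & y).count('1') % 2)
                        for y, c in coeffs.items())
    assert abs(recon - f(x)) < 1e-9
```

For this $f$ the expansion is $\frac\pi8 x_1 + \frac\pi8 x_2 - \frac\pi8 (x_1 \oplus x_2)$, matching the hand computation $x_1 x_2 = \tfrac12(x_1 + x_2 - x_1 \oplus x_2)$.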
\textit{Phase polynomial} techniques perform transformations on the function $f$ in order to reduce the T-count needed to implement $U$ (or some $U'$ that is Clifford-equivalent to $U$). In the sequel, we will consider not just phase gadgets arising from unitaries such as \eqref{eq:phase-gadget-unitary}, but phase gadgets appearing in arbitrary graph-like \zxdiagrams. Hence, our simplification procedure can be seen as a generalisation of phase polynomial techniques.
\subsection{Full simplification of ZX-diagrams}\label{sec:simplify-zx-full}
In this section, we will introduce rules that reduce the number of non-Clifford spiders in the \zxdiagram, and hence the T-count in the resulting circuit.
First, it is worth noting that the \PivotSimp rule from section~\ref{sec:simplify-zx-cliff} was only able to remove an interior Pauli spider adjacent to another interior Pauli spider. We can now introduce two variations of this rule, \PivotSimpGadget and \PivotSimpGadgetBoundary, which together allow us to remove any remaining interior Pauli spider, at the cost of introducing a phase gadget.
\begin{equation*}\label{eq:pivot-simp-gadget}
\tikzfig{pivot-simp-gadget}
\end{equation*}
\begin{equation*}\label{eq:pivot-simp-boundary-gadget}
\tikzfig{pivot-simp-boundary-gadget}
\end{equation*}
We apply \PivotSimpGadget when the interior Pauli spider is connected to any other interior spider, while \PivotSimpGadgetBoundary is applied when it is connected to some boundary spider.
Applying these rules to every remaining interior Pauli spider yields a diagram where every internal spider is either non-Clifford or part of a phase-gadget. If the phase-gadget is Clifford, then it can be removed by either \PivotSimp or by two applications of \LcompSimp. Hence we can reduce to a case where all phase-gadgets are non-Clifford.
We can now apply the following two rules, which both strictly decrease the number of non-Clifford spiders:
\begin{equation*}
\tikzfig{id-simp-1}\qquad\qquad \tikzfig{gadget-simp}
\end{equation*}
When a phase gadget is connected to exactly one other spider, its phase can be combined with the phase on that spider via \IDSimp. This is essentially an application of the rules \IdRule and \HHRule.
When two phase gadgets are connected to exactly the same set of spiders, they can be fused into one via the gadget-fusion rule \GadgetSimp. This rule can be shown using the ZX-calculus:
\begin{center}
\scalebox{0.9}{\tikzfig{gf-proof}}
\end{center}
where \NBialgRule is the $n$-ary generalisation of the rule \BialgRule, which follows from the other rules (see e.g.~\cite{CKbook}, \S9.4). For unitaries of the form~\eqref{eq:phase-gadget-unitary}, it corresponds to a well-known simplification used in phase-polynomial circuits, where two phases acting on the same parity of the input qubits can be summed together.
Each of the rewrite rules \IDSimp and \GadgetSimp removes a non-Clifford spider, and transforms another non-Clifford spider into a Clifford spider, which can be removed by one of the previous rules. We can now fully describe our simplification procedure for graph-like \zxdiagrams.
\begin{algorithm}
\textbf{ZX-simplify}: Starting with a graph-like ZX-diagram, do the following:
\begin{enumerate}
\item Apply \LcompSimp until all interior proper Clifford vertices are removed.
\item Apply \PivotSimp, \PivotSimpGadget and \PivotSimpGadgetBoundary until all interior Pauli vertices are removed or transformed into phase-gadgets.
\item Remove all Clifford phase-gadgets using \LcompSimp and \PivotSimp.
\item Apply \IDSimp and \GadgetSimp wherever possible. If any matches were found, go back to step 1, otherwise we are done.
\end{enumerate}
\end{algorithm}
This algorithm always terminates as every step either removes a spider or a phase-gadget.
In terms of complexity, if the original diagram had $n$ spiders, this algorithm takes at most $n$ steps. Each step may require toggling the connectivity of all the neighbours of the involved spider. As this spider has at most $n$ neighbours, this could involve $n^2$ operations on the diagram. The complexity of the algorithm is therefore bounded above by $O(n^3)$ elementary graph operations. In practice though, the \zxdiagrams resulting from quantum circuits are quite sparse, and we tend to see a time scaling roughly between $O(n)$ and $O(n^2)$ on our benchmark circuits.
It will be useful to have a name for the diagrams produced by this simplification procedure.
\begin{definition}
We say a graph-like \zxdiagram is in \emph{reduced gadget form} when
\begin{itemize}
\item Every internal spider is a non-Clifford spider or part of a non-Clifford phase-gadget.
\item Every phase-gadget has more than one target.
\item No two phase-gadgets have the same set of targets.
\end{itemize}
\end{definition}
\subsection{Phase teleportation}\label{sec:teleport}
The simplification procedure described in the previous section produces a \zxdiagram which does not look at all like a circuit. In order to get a new, simplified circuit out, we could apply (a variation of) the circuit extraction procedure described in Ref.~\cite{cliff-simp}. Alternatively, we can short-circuit the extraction using a trick we refer to as \textit{phase teleportation}.
We begin by replacing every non-Clifford phase in our starting circuit $C$ with a fresh variable name, $\alpha_1, \ldots, \alpha_n$, and storing the angles in a separate table $\tau : \{1, \ldots, n\} \to \mathbb R$.
We can then perform the simplification procedure described in the previous section \textit{symbolically}. That is, we work on a \zxdiagram whose spiders are labelled not just with phase angles, but with polynomials over the variables $(\alpha_1, \ldots, \alpha_n)$.
Then, consider what happens when two variables are added together during the \IDSimp and \GadgetSimp rules. One of two things can occur: $(a)$ the two variables have the same sign or $(b)$ they have different signs:
\[
(a) \ \ \ \tikzfig{gf-symbolic}
\]
\[
(b) \ \ \ \tikzfig{gf-symbol-diff}
\]
Since none of our simplifications will copy any of the variables we started with, these are the only occurrences of $\alpha_i$ and $\alpha_j$ in the \zxdiagram. Hence, in case $(a)$, if we replace $\alpha_i$ with $\alpha_i + \alpha_j$ and $\alpha_j$ with $0$, we get an equivalent diagram.
Put another way, in case $(a)$, we can update our table $\tau$ by setting $\tau'(i) := \tau(i) + \tau(j)$, $\tau'(j) := 0$, and $\tau'(k) := \tau(k)$ for $k \notin \{i,j\}$.
Then, $(C, \tau)$ and $(C, \tau')$ describe circuits which are provably equivalent by the rules of \zxcalculus. Case $(b)$ is similar, except we should set $\tau'(i) := \tau(i) - \tau(j)$.
This observation yields the following algorithm:
\begin{algorithm}
\textbf{Phase teleportation}: Starting with a circuit, do the following:
\begin{enumerate}
\item Choose unique variables $\alpha_1, \ldots, \alpha_n$ for each non-Clifford phase and store the pair $(C, \tau)$, where $C$ is the parametrised circuit and $\tau : \{1, \ldots, n\} \to \mathbb R$ assigns each variable to its phase.
\item Interpret $C$ as a \zxdiagram and run the \textbf{ZX-simplify} algorithm on it, while doing the following:
\begin{quote}
Whenever \IDSimp or \GadgetSimp are applied to a pair of vertices or phase-gadgets containing variables $\alpha_i$ and $\alpha_j$, respectively, update the phase table $\tau$ as described for cases $(a)$ and $(b)$ above.
\end{quote}
\item When \textbf{ZX-simplify} finishes, the pair $(C, \tau')$ describes an equivalent circuit.
\end{enumerate}
\end{algorithm}
Even though we do compute the reduced gadget form of the circuit $C$, the new circuit we output has the same structure as $C$ itself, but with some of the phases changed. As a result, no new gates are introduced, but many non-Clifford phase gates will have their angles set to $0$ or to multiples of $\pi/2$. Hence, running a dedicated gate minimising circuit optimisation routine afterwards will often be much more effective.
\subsection{Circuit optimisation and TODD}\label{sec:TODD}
We now briefly describe a combined optimisation routine consisting of first running the phase teleportation procedure, then doing some simple post-processing, and finally applying the \emph{TODD} algorithm described in Ref.~\cite{heyfron2018efficient}.
The circuit post-processing works by doing forward and backward passes through the circuit. During the forward pass, we commute 1-qubit gates as far forward as possible using standard gate commutation rules, cancelling and combining gates whenever we can.
We then take the adjoint of the circuit and repeat the process, and keep repeating the process until no more gates are removed.
We then apply the ancilla-free version of the TODD algorithm using the C++ tool Topt~\cite{Topt}. This tool is designed to optimise CNOT+Phase circuits, so we first cut our circuit into Hadamard-free chunks. Then, before running Topt on each chunk, we again use standard gate commutation laws to pull as many gates as possible from neighbouring chunks into the current one. Since Topt is non-deterministic, we run it multiple times and we take the best result. Running Topt on each chunk in this manner then yields the T-counts reported in the last column of Table~\ref{fig:results}.
\end{document}
\begin{document}
\title{Interactive Learning Based Realizability and 1-Backtracking Games}
\begin{abstract} We prove that interactive learning based classical realizability (introduced by Aschieri and Berardi for first order arithmetic \cite{Aschieri}) is sound with respect to Coquand game semantics. In particular, any realizer of an implication-and-negation-free arithmetical formula embodies a winning recursive strategy for the 1-Backtracking version of Tarski games. We also give examples of realizer and winning strategy extraction for some classical proofs. We also sketch some ongoing work about how to extend our notion of realizability in order to obtain completeness with respect to Coquand semantics, when it is restricted to 1-Backtracking games.
\end{abstract}
\section{Introduction}
In this paper we show that learning based realizability (see Aschieri and Berardi \cite{Aschieri}) relates to 1-Backtracking Tarski games as intuitionistic realizability (see Kleene \cite{Kleene}) relates to Tarski games. It is well known that Tarski games (see definition \ref{definition-TarskiGame} below) are just a simple way of rephrasing the concept of classical truth in terms of a game between two players - the first one trying to show the truth of a formula, the second its falsehood - and that an intuitionistic realizer gives a winning recursive strategy to the first player. The result is quite expected: since a realizer gives a way of computing all the information about the truth of a formula, the player trying to prove the truth of that formula has a recursive winning strategy. However, by no means does \emph{every} classically provable arithmetical formula admit a winning recursive strategy for that player; otherwise, the decidability of the Halting problem would follow. In \cite{Coquand}, Coquand introduced a game semantics for Peano Arithmetic such that, for any provable formula $A$, the first player has a recursive winning strategy, coming from the proof of $A$. The key idea of that remarkable result is to modify Tarski games, allowing players to correct their mistakes and backtrack to a previous position. Here we show that learning based realizers have a direct interpretation as winning recursive strategies in 1-Backtracking Tarski games (which are a particular case of Coquand games; see \cite{BerCoq} and definition \ref{definition-1BacktrackingGames} below). The result, again, is expected: interactive learning based realizers are, by design, similar to strategies in games with backtracking: they improve their computational ability by learning from interaction and counterexamples in a convergent way; eventually, they gather enough information about the truth of a formula to win the game.
An interesting step towards our result was the Hayashi realizability \cite{Hayashi1}. Indeed, a realizer in the sense of Hayashi represents a recursive winning strategy in 1-Backtracking games. However, from the computational point of view, such realizers do not relate to 1-Backtracking games in a significant way: Hayashi winning strategies work by exhaustive search and do not actually learn from the game or from the \emph{interaction} with the other player. As a consequence, constructive upper bounds on the length of games cannot be obtained, whereas with our realizability this is possible. For example, in the case of the 1-Backtracking Tarski game for the formula $\exists x \forall y f(x)\leq f(y)$, the Hayashi realizer checks all the natural numbers until an $n$ such that $\forall y f(n)\leq f(y)$ is found; on the contrary, our realizer yields a strategy which bounds the number of backtrackings by $f(0)$, as shown in this paper. In this case, the Hayashi strategy is the one suggested by the classical \emph{truth} of the formula, whereas one is instead interested in the constructive strategy suggested by its classical \emph{proof}.
Since learning based realizers are extracted from proofs in $\HA+ \EM_1$ (Heyting Arithmetic with excluded middle over existential sentences, see \cite{Aschieri}), one also has an interpretation of classical proofs as learning strategies. Moreover, studying learning based realizers in terms of 1-Backtracking games also sheds light on their behaviour and offers an interesting case study in program extraction and interpretation in classical arithmetic.
The plan of the paper is the following. In section \S \ref{calculusandrealizability}, we recall the calculus of realizers and the main notion of interactive learning based realizability. In section \S \ref{gamesrealizability}, we prove our main theorem: a realizer of an arithmetical formula embodies a winning strategy in its associated 1-Backtracking Tarski game. In section \S \ref{examples}, we extract realizers from two classical proofs and study their behaviour as learning strategies. In section \S \ref{completeness}, we define an extension of our realizability and formulate a conjecture about its completeness with respect to 1-Backtracking Tarski games.
\section{The Calculus $\SystemTClass$ and Learning-Based Realizability}\label{calculusandrealizability}
The whole content of this section is based on Aschieri and Berardi \cite{Aschieri}, where the reader may also find full motivations and proofs. We recall here the definitions and the results we need in the rest of the paper.\\
The winning strategies for 1-Backtracking Tarski games will be represented by terms of $\SystemTClass$ (see \cite{Aschieri}). $\SystemTClass$ is a system of typed lambda calculus which extends G\"odel's system $\SystemT$ by adding symbols for non computable functions and a new type $\State$ (denoting a set of states of knowledge) together with two basic operations over it. The terms of $\SystemTClass$ are computed with respect to a state of knowledge, which represents a finite approximation of the non computable functions used in the system.
For a complete definition of $\SystemT$ we refer to Girard \cite{Girard}. $\SystemT$ is simply typed $\lambda$-calculus, with atomic types $\Nat$ (representing the set $\NatSet$ of natural numbers) and $\Bool$ (representing the set $\BoolSet = \{\mbox{True},\mbox{False}\}$ of booleans), product types $T \times U$ and arrow types $T \rightarrow U$, constants $0: \Nat$, $\mathsf{S}:\Nat\rightarrow \Nat$, $\True, \False: \Bool$, pairs $\langle.,.\rangle$, projections $\pi_0, \pi_1$, conditional ${\tt if}_T$ and primitive recursion ${\tt R}_T$ in all types, and the usual reduction rules $(\beta),(\pi),({\tt if}),({\tt R})$ for $\lambda$, $\langle .,.\rangle,{\tt if}_T,{\tt R}_T$. From now on, if $t, u$ are terms of $\SystemT$, with $t=u$ we denote provable equality in $\SystemT$. If $k \in \NatSet$, the numeral denoting $k$ is the closed normal term $ S^k(0)$ of type $\Nat$. All closed normal terms of type $\Nat$ are numerals. Any closed normal term of type $\Bool$ in $\SystemT$ is ${\True}$ or ${\False}$.
We introduce a notation for ternary projections: if $T = A \times (B \times C)$, with $p_0, p_1, p_2$ we respectively denote the terms $\pi_0$, $\lambda x:T.\pi_0(\pi_1(x))$, $\lambda x:T.\pi_1(\pi_1(x))$.
If $u = \langle u_0,\langle u_1,u_2\rangle \rangle : T$, then $\proj_iu=u_i$ in $\SystemT$ for $i=0,1,2$. We abbreviate $\langle u_0,\langle u_1,u_2\rangle \rangle :T $ with $\langle u_0,u_1,u_2\rangle : T$.
\begin{definition} [States of Knowledge and Consistent Union]\label{definition-StateOfKnowledge}\begin{enumerate}
\item
A $k$-ary {\em predicate} of $\SystemT$ is any closed normal term $P:\Nat^{k}\rightarrow \Bool$ of $\SystemT$.
\item
An atom is any triple $\langle P,\vec{n},{m}\rangle $, where $P$ is a $(k+1)$-ary predicate of $\SystemT$, and $\vec{n},m$ are $(k+1)$ numerals, and $P\vec{n}m = \True$ in $\SystemT$.
\item
Two atoms $\langle P,\vec{n},{m}\rangle $, $\langle P',\vec{n'},{m'}\rangle $ are {\em consistent} if $P = P'$ and $\vec{n} = \vec{n'}$ in $\SystemT$ imply $m = m'$.
\item
A state of knowledge, shortly a {\em state}, is any finite set $S$ of pairwise consistent atoms.
\item
Two states $S_1, S_2$ are consistent if $S_1 \cup S_2$ is a state.
\item
$\StateSet$ is the set of all states of knowledge.
\item
The {\em consistent union} $S_1 \CupSem S_2$ of $S_1, S_2 \in \StateSet$ is $S_1 \cup S_2 \in \StateSet$ minus all atoms of $S_2$ which are inconsistent with some atom of $S_1$.
\end{enumerate}
\end{definition}
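The consistent union of the definition above can be sketched in Python (atoms are encoded here as triples of a predicate name, an argument tuple and a witness; this encoding is our own, chosen only for illustration):

```python
def consistent(a1, a2):
    """Two atoms <P, n, m> and <P', n', m'> are consistent unless they
    share the predicate and arguments but disagree on the witness."""
    (p1, n1, m1), (p2, n2, m2) = a1, a2
    return not (p1 == p2 and n1 == n2 and m1 != m2)

def consistent_union(s1, s2):
    """S1 (+) S2: the union of S1 and S2, minus those atoms of S2
    that are inconsistent with some atom of S1."""
    return s1 | {a for a in s2 if all(consistent(b, a) for b in s1)}

s1 = {("P", (3,), 5)}
s2 = {("P", (3,), 7), ("Q", (0,), 1)}
print(consistent_union(s1, s2))  # keeps <Q,(0,),1>, drops <P,(3,),7>
```

Note that the consistent union is not commutative: clashing atoms of the second state are discarded in favour of those of the first.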
For each state of knowledge $S$ we assume there is a unique constant $ \makestate{S}$ denoting it; if there is no ambiguity, we just assume that state constants are strings of the form $\{\langle P,\vec{n_1},m_1\rangle,\ldots, \langle P, \vec{n_k}, m_k\rangle\}$, denoting a state of knowledge. We define $\SystemTState = \SystemT + \State + \{\makestate{S}|S \in \StateSet\}$ as the extension of $\SystemT$ with one atomic type $\State$ denoting $\StateSet$, a constant $ \makestate{S} : \State$ for each $S \in \StateSet$, and {\em no} new reduction rules. Computation on states will be defined by a set of algebraic reduction rules we call ``functional''.
\begin{definition}[Functional set of rules]\label{definition-functional}
Let $C$ be any set of constants, each one of some type $A_1\rightarrow \ldots \rightarrow A_n\rightarrow A$, for some $A_1,\ldots,A_n, A \in\{ \Bool, \Nat, \State\}$. We say that $\mathcal{R}$ is a {\em functional set of reduction rules} for $C$ if $\mathcal{R}$ consists, for all $c\in C$ and all closed normal terms ${a_1}:A_1,\ldots, {a_n}:A_n$ of $\SystemT_\State$, of exactly one rule $c {a_1}\ldots {a_n}\mapsto {a}$, where ${a}:A$ is a closed normal term of $\SystemT_\State$.
\end{definition}
We define two extensions of $\SystemT_\State$: an extension $\SystemTClass$ with symbols denoting non-computable maps $X_P:\Nat^k\rightarrow \Bool, \Phi_P: \Nat^k\rightarrow \Nat$ (for each $(k+1)$-ary predicate $P$ of $\SystemT$) and no computable reduction rules; and another extension $\SystemTLearn$, with computable approximations $\chi_P,\phi_P$ of $X_P, \Phi_P$ and a computable set of reduction rules. $X_P$ and $\Phi_P$ are intended to represent respectively the oracle mapping $\vec{n}$ to the truth value of $\exists x P\vec{n}x$, and a Skolem function mapping $\vec{n}$ to an element $m$ such that $\exists x P\vec{n}x$ holds iff $P\vec{n}m=\True$. We use the elements of $\SystemTClass$ to represent non-computable realizers, and the elements of $\SystemTLearn$ to represent computable ``approximations'' of realizers. We denote terms of type $\State$ by $\rho, \rho', \ldots$.
\begin{definition} \label{definition-TermLanguageL1}
Assume $P:\Nat^{k+1}\rightarrow \Bool$ is a $(k+1)$-ary predicate of $\SystemT$. We introduce the following constants:
\begin{enumerate}
\item
$\chi_P:\State \rightarrow \Nat^k\rightarrow \Bool$
and
$\varphi_P:\State \rightarrow \Nat^k\rightarrow \Nat$.
\item
$X_P:\Nat^k\rightarrow \Bool$ and $\Phi_P: \Nat^k \rightarrow \Nat$.
\item
$\Cup:\State\rightarrow \State \rightarrow \State$ (we denote $\Cup\rho_1\rho_2$ with $\rho_1\Cup\rho_2$).
\item
$\Add_P:\Nat^{k+1} \rightarrow \State$ and $\add_P:\State \rightarrow \Nat^{k+1} \rightarrow \State$.
\end{enumerate}
\begin{enumerate}
\item
$\Xi_\State$ is the set of all constants $\chi_P,\varphi_P, \Cup, \add_P$.
\item
$\Xi$ is the set of all constants $X_P,\Phi_P, \Cup, \Add_P$.
\item
$\SystemTClass = \SystemT_\State + \Xi$.
\item
A term $t \in \SystemTClass$ has state $\makestate{\emptyset}$ if it has no state constant different from $\makestate{\emptyset}$.
\end{enumerate}
\end{definition}
Let $\vec{t} = t_1\ldots t_k$. We interpret $\chi_P{s} \vec{t}$ and $\varphi_P{s}\vec{t}$ respectively as ``guesses'' for the values of the oracle $X_P$ and the Skolem map $\Phi_P$ on $\exists y.P\vec{t}y$, computed w.r.t. the knowledge state denoted by the constant $s$. There is no set of computable reduction rules for the constants $\Phi_P, X_P \in \Xi$, and therefore no set of computable reduction rules for $\SystemTClass$. If $\rho_1, \rho_2$ denote the states $S_1, S_2 \in \StateSet$, we interpret $\rho_1 \Cup \rho_2$ as denoting the consistent union $S_1 \CupSem S_2$ of $S_1, S_2$. $\Add_P$ denotes the map constantly equal to the empty state $\emptyset$. $\add_P{\makestate{S}} \vec{n}m$ denotes the empty state $\emptyset$ if we cannot add the atom $\langle P, \vec{n},m\rangle$ to $S$, either because $\langle P,\vec{n},m'\rangle \in S$ for some numeral $m'$, or because $P\vec{n}m={\False}$; it denotes the state $\{\langle P, \vec{n},m \rangle\}$ otherwise. We define a system $\SystemTLearn$ with reduction rules over $\Xi_\State$ given by a functional reduction set $\mathcal{R}_\State$.
\begin{definition}[The System $\SystemTLearn$] \label{definition-EquationalTheoryL1}
Let $s, s_1, s_2$ be state constants denoting the states $S, S_1, S_2$. Let $\langle P, \vec{n},m \rangle$ be an atom. $\mathcal{R}_\State$ is the following functional set of reduction rules for $\Xi_\State$:
\begin{enumerate}
\item
If $\langle P,\vec{n},{m}\rangle \in S$, then
$\chi_P{s}\vec{n} \mapsto {\True}$ and $\varphi_P{s}\vec{n} \mapsto {m}$, else
$\chi_P{s}\vec{n} \mapsto {\False}$ and $\varphi_P{s}\vec{n} \mapsto {0}$.
\item
${s_1}\Cup{s_2} \mapsto \makestate{S_1 \CupSem S_2}$
\item
$\add_P{s}\vec{n}{m} \mapsto \makestate{\emptyset}$ if either $\langle P,\vec{n},{m'} \rangle \in S$ for some numeral $m'$ or $P\vec{n}{m} = {\False}$, and $\add_P{s}\vec{n}{m} \mapsto \makestate{\{\langle P,\vec{n},{m} \rangle\}}$ otherwise.
\end{enumerate}
We define $\SystemTLearn = \SystemT_\State + \Xi_\State + \mathcal{R}_\State$.
\end{definition}
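A minimal Python sketch of these reduction rules, with states encoded as sets of triples and the predicate $P$ passed both as a name and as a boolean function (an encoding of our own choosing, not part of the formal system):

```python
def chi(P, state, n):
    """chi_P s n: True iff some atom <P, n, m> belongs to the state."""
    return any(p == P and args == n for (p, args, _) in state)

def phi(P, state, n):
    """varphi_P s n: the recorded witness m for <P, n, m>, or 0."""
    for (p, args, m) in state:
        if p == P and args == n:
            return m
    return 0

def add(P, pred, state, n, m):
    """add_P s n m: the singleton {<P, n, m>} if the atom can be
    added (no witness recorded yet for <P, n> and P n m holds),
    and the empty state otherwise."""
    if chi(P, state, n) or not pred(*n, m):
        return set()
    return {(P, n, m)}

s = {("P", (3,), 5)}
print(chi("P", s, (3,)), phi("P", s, (3,)))               # True 5
print(add("P", lambda x, y: y == x + 1, set(), (3,), 4))  # {('P', (3,), 4)}
```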
\textbf{Remark.} $\SystemTLearn$ is nothing but $\SystemTState$ with some ``syntactic sugar''. $\SystemTLearn$ is strongly normalizing, has the Church-Rosser property for closed terms of atomic type, and:
\begin{proposition}[Normal Form Property for $\SystemTLearn$]\label{proposition-normalform} Assume $A$ is either an atomic type or a product type. Then any closed normal term $t \in \SystemTLearn$ of type $A$ is: a numeral ${n}:\Nat$, or a boolean $\True,\False:\Bool$, or a state constant $s:\State$, or a pair $\langle u,v \rangle: B \times C$.
\end{proposition}
\begin{definition} Assume $t \in \SystemTClass$ and $s$ is a state constant. We call ``approximation of $t$ at state $s$'' the term $t[{s}]$ of $\SystemTLearn$ obtained from $t$ by replacing each constant $X_P$ with $\chi_P{s}$, each constant $\Phi_P$ with $\varphi_P{s}$, each constant $\Add_P$ with $\add_P{s}$.
\end{definition}
If $s, s'$ are state constants denoting $S, S' \in \StateSet$, we write $s \le s'$ for $S \subseteq S'$. We say that a sequence $\{s_i\}_{i\in\NatSet}$ of state constants is a weakly increasing chain of states (is w.i. for short), if $s_i\le s_{i+1}$ for all $i\in\NatSet$.
\begin{definition}[Convergence]
\label{definition-Convergence} Assume
that $\{s_i\}_{i\in\NatSet} $ is a w.i. sequence of state constants,
and $u, v \in \SystemTClass$.
\begin{enumerate}
\item
$u$ converges in $\{s_i\}_{i\in\NatSet}$ if $\exists i\in\NatSet.
\forall j\geq i.u[s_j]=u[s_{i}]$ in $\SystemTLearn$.
\item
$u$ converges if $u$ converges in every w.i. sequence of state constants.
\end{enumerate}
\end{definition}
Our realizability semantics relies on two properties of the non computable terms of atomic type in $\SystemTClass$. First, if we repeatedly increase the knowledge state $s$, eventually the value of $t[s]$ stops changing. Second, if $t$ has type $\State$, and contains no state constants but $\makestate{\emptyset}$, then we may effectively find a way of increasing the knowledge state $s$ such that eventually we have $t[s]=\makestate{\emptyset}$.
\begin{theorem}[Stability Theorem] \label{theorem-StabilityTheorem}
Assume $t \in \SystemTClass$ is a closed term of atomic type $A$ ($A\in\{\Bool,\Nat,\State\}$). Then $t$ is convergent.
\end{theorem}
\begin{theorem}[Fixed Point Property]\label{Fixed Point Property}
Let $t:\State$ be a closed term of $\SystemTClass$ of state $\makestate{\emptyset}$, and $s = \makestate{S}$. Define $\tau(S) = S'$ if $t[\makestate{S}] = \makestate{S'}$, and $f(S) = S \cup \tau(S)$.
\begin{enumerate}
\item
For any $n\in\NatSet$, define $f^0(S)=S$ and $f^{n+1}(S)=f(f^n(S))$. There are $h\in\NatSet$, $S'\in\StateSet$ such that ${S}' = f^h({S})\supseteq S$, $f({S}')={S}'$ and $\tau(S') = \emptyset$.
\item
We may effectively find a state constant $s' \ge s$ such that $t[s'] = \makestate{\emptyset}$.
\end{enumerate}
\end{theorem}
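The iteration in part 1 of the theorem can be sketched as follows (a toy Python sketch of our own; $\tau$ is modelled as a set-valued function on states, and the toy example is merely illustrative):

```python
def find_fixpoint(tau, s):
    """Iterate f(S) = S ∪ τ(S) until τ(S') is empty; by the Fixed
    Point Property this terminates for terms of state ∅."""
    while True:
        delta = tau(frozenset(s))
        if not delta:
            return s
        s = s | delta

# toy tau: the term "asks" for the atoms <P,(n,),n> with n < 3,
# one at a time, and returns the empty state once it has them all
def toy_tau(state):
    for n in range(3):
        if ("P", (n,), n) not in state:
            return {("P", (n,), n)}
    return set()

print(sorted(find_fixpoint(toy_tau, set())))
```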
\begin{definition}[The language $\mathcal{L}$ of Peano Arithmetic] \label{definition-extendedarithmetic}
\begin{enumerate}
\item
The terms of $\mathcal{L}$ are all $t \in \SystemT$, such that $t:\Nat$ and $FV(t) \subseteq \{x_1^\Nat, \ldots, x_n^\Nat\}$ for some $x_1, \ldots, x_n$.
\item
The atomic formulas of $\mathcal{L}$ are all $Qt_1\ldots t_n \in \SystemT$, for some $Q:\Nat^{n}\rightarrow \Bool$ {\em closed term of $\SystemT$}, and some terms $t_1,\ldots,t_n$ of $\mathcal{L}$.
\item
The formulas of $\mathcal{L}$ are built from atomic formulas of $\mathcal{L}$ by the connectives $\lor,\land,\rightarrow,\forall,\exists$ as usual.
\end{enumerate}
\end{definition}
\begin{definition}[Types for realizers]
\label{definition-TypesForRealizers} For each
arithmetical formula $A$ we define a type $|A|$ of $\SystemT$ by
induction on $A$:
$|P(t_1,\ldots,t_n)|=\State$,
$|A\wedge B|=|A|\times |B|$,
$|A\vee B|= \Bool\times (|A|\times |B|)$,
$|A\rightarrow B|=|A|\rightarrow |B|$,
$|\forall x A|=\Nat\rightarrow |A|$,
$|\exists x A|= \Nat\times |A|$.
\end{definition}
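The translation $A \mapsto |A|$ can be sketched as a recursive function (a Python sketch over a toy tuple encoding of formulas; the encoding and the rendering of types as strings are our own choices):

```python
def realizer_type(formula):
    """Compute |A| as a string, for formulas given as nested tuples:
    ('atom',), ('and', A, B), ('or', A, B), ('imp', A, B),
    ('all', A), ('ex', A)."""
    tag = formula[0]
    if tag == 'atom':
        return 'State'
    if tag == 'and':
        return f"({realizer_type(formula[1])} x {realizer_type(formula[2])})"
    if tag == 'or':
        return f"(Bool x ({realizer_type(formula[1])} x {realizer_type(formula[2])}))"
    if tag == 'imp':
        return f"({realizer_type(formula[1])} -> {realizer_type(formula[2])})"
    if tag == 'all':
        return f"(Nat -> {realizer_type(formula[1])})"
    if tag == 'ex':
        return f"(Nat x {realizer_type(formula[1])})"

# |forall x. (exists y P(x,y)) or (forall y ~P(x,y))| -- the EM_1 shape
print(realizer_type(('all', ('or', ('ex', ('atom',)), ('all', ('atom',))))))
```

The printed type matches the shape of the realizer $E_P$ of $\EM_1$ used later in the paper: a function from numerals to a triple of a boolean guess, an existential witness with its state, and a universal realizer.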
We now define our notion of realizability, which is relativized to a knowledge state $s$, and which differs from Kreisel's modified realizability in a single detail: a realized atomic formula need not be true, unless its realizer evaluates to the empty state in $s$.
\begin{definition}[Realizability]
\label{lemma-IndexedRealizabilityAndRealizability}
Assume $s$ is a state constant, $t\in \SystemTClass$ is a closed term of state $\makestate{\emptyset}$, $A \in \mathcal{L} $ is a closed formula, and $t:|A|$. Let $\vec{t} = t_1, \ldots, t_n : \Nat$.
\begin{enumerate}
\item
$t\Vvdash_s P(\vec{t})$ if and only if $t[s] = \makestate{\emptyset}$ in $\SystemTLearn$ implies
$P(\vec{t})={\True}$
\item
$t\Vvdash_s{A\wedge B}$ if and only if $\pi_0t \Vvdash_s{A}$ and $\pi_1t\Vvdash_s{B}$
\item
$t\Vvdash_s {A\vee B}$ if and only if either $\proj_0t[{s}]={\True}$ in $\SystemTLearn$ and $\proj_1t\Vvdash_s A$, or $\proj_0t[{s}]={\False}$ in $\SystemTLearn$ and $\proj_2t\Vvdash_s B$
\item
$t\Vvdash_s {A\rightarrow B}$ if and only if for all $u$, if $u\Vvdash_s{A}$,
then $tu\Vvdash_s{B}$
\item
$t\Vvdash_s {\forall x A}$ if and only if for all numerals $n$,
$t{n}\Vvdash_s A[{n}/x]$
\item
$t\Vvdash_s \exists x A$ if and only if for some numeral $n$, $\pi_0t[{s}]= {n}$ in $\SystemTLearn$ and $\pi_1t \Vvdash_s A[{n}/x]$
\end{enumerate}
We define $t \Vvdash A$ if and only if $t\Vvdash_s A$ for all state constants $s$.
\end{definition}
\begin{theorem}\label{Realizability Theorem}
If $A$ is a closed formula provable in $\HA + \EM_1$ (see \cite{Aschieri}), then there
exists $t\in \SystemTClass$ such that $t\Vvdash A$.
\end{theorem}
\section{Games, Learning and Realizability}\label{gamesrealizability}
\label{section-GamesLearningandRealizability}
In this section, we define the notion of game, its 1-Backtracking version and Tarski games. We also prove our main theorem, connecting learning based realizability and 1-Backtracking Tarski games.
\begin{definition}[Games]
\label{definition-Games}
\begin{enumerate}
\item
A \emph{game} $G$ between two players is a quadruple $(V,E_1,E_2, W)$,
where $V$ is a set, $E_1,E_2$ are subsets of $V\times V$ such that
$Dom(E_1)\cap Dom(E_2)=\emptyset$, where $Dom(E_i)$ is the domain of $E_i$, and $W$ is a set of sequences,
possibly infinite, of elements of $V$. The elements of $V$ are
called \emph{positions} of the game; $E_1$, $E_2$ are the transition
relations respectively for player one and player two:
$(v_1,v_2)\in E_i$ means that player $i$ can legally move from the
position $v_1$ to the position $v_2$.
\item We define a \emph{play} to be a walk, possibly infinite, in the
graph $(V,E_1\cup E_2)$, i.e. a sequence, possibly void, $v_1::v_2::\ldots:: v_n::\ldots $ of elements of $V$ such that $(v_i, v_{i+1})\in E_1\cup E_2$ for every $i$. A play of the form $v_1:: v_2:: \ldots:: v_n::\ldots $ is said to \emph{start from} $v_1$. A play is said to be
\emph{complete} if it is either infinite or is equal to $v_1::\ldots:: v_n$ and $v_n\notin Dom(E_1\cup E_2)$. $W$ is required to be a set of
complete plays. If $p$ is a complete play and $p\in W$,
we say that player one wins in $p$. If $p$ is a complete play and $p\notin
W$,
we say that player two wins in $p$.
\item
Let $P_G$ be the set of finite plays. Consider a function $f:
P_G\rightarrow V$; a play $v_1::\ldots:: v_n::\ldots$ is said to be
$f$-correct if $f(v_1,\ldots, v_i)=v_{i+1}$ for every $i$ such that
$(v_i,v_{i+1})\in E_1$
\item
A \emph{winning strategy} from position $v$ for player one is a function
$\omega: P_G\rightarrow V$ such that every complete
$\omega$-correct play $v::v_1::\ldots :: v_n::\ldots $ belongs to $W$.
\end{enumerate}
\end{definition}
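The notion of $f$-correct play in point 3 can be sketched as a simple check (a Python sketch; the encoding of plays as lists and of $E_1$ as a set of pairs is our own):

```python
def f_correct(play, f, E1):
    """A play v1::...::vn is f-correct when, for every i such that
    (v_i, v_{i+1}) is a player-one move, v_{i+1} is exactly the move
    prescribed by f on the prefix v1,...,v_i."""
    for i in range(len(play) - 1):
        if (play[i], play[i + 1]) in E1 and f(play[:i + 1]) != play[i + 1]:
            return False
    return True

E1 = {(0, 1), (2, 3)}
f = lambda prefix: prefix[-1] + 1   # "always move to the successor"
print(f_correct([0, 1, 2, 3], f, E1))   # True
```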
\textbf{Notation.} If, for $i=1,\ldots, n$, $p_i=(p_i)_{0}:: \ldots :: (p_i)_{n_i}$ is a finite sequence of elements, with $p_1::\ldots ::p_n$ we denote the sequence \[(p_1)_0::\ldots ::(p_1)_{n_1}:: \ldots :: (p_n)_0::\ldots :: (p_n)_{n_n}\] where $(p_i)_j$ denotes the $j$-th element of the sequence $p_i$. \\\\
Suppose that $a_1::a_2::\ldots :: a_n$ is a play of a game $G$,
representing, for some reason, a bad situation for player one (for
example, in the game of chess, $a_n$ might be a configuration of
the
chessboard in which player one has just lost his queen). Then,
having learnt the lesson, player one might wish to erase some of his moves
and come back to the point where the play was just, say, $a_1::a_2$, and
choose, say, $b_1$ in place of $a_3$; in other words, player one
might wish to \emph{backtrack}. Then, the game might go on as
$a_1 :: a_2 ::b_1::\ldots :: b_m$ and, once again, player one might want to
backtrack to, say, $a_1::a_2::b_1::\ldots :: b_i$, with $i< m$, and so
on... As there is no learning without remembering, player one
must keep in mind the errors made during the play. This is the
idea
of 1-Backtracking games (for more motivations, we refer the reader to \cite{BerCoq} and \cite{BerLig}) and here is our definition.
\begin{definition}[1-Backtracking Games]
\label{definition-1BacktrackingGames} Let
$G=(V,E_1,E_2,W)$ be a game.
\begin{enumerate}
\item
We define $1Back(G)$ as the game $(P_G, E_1',E_2', W')$, where:\\
\item $P_G$ is the set of finite plays of $G$\\
\item $E_2':=\{(p::a,\ p::a::b)\ |\ p, p::a\in P_G,
(a,b)\in
E_2 \}$ and \[E_1':=\{(p::a,\ p::a::b)\ |\ p, p::a\in P_G, (a,b)\in
E_1\}\ \cup\] \[\{(p::a::q::d,\ p::a)\ |\ p, q\in P_G, p::a::q::d\in P_G, a\in Dom(E_1)\]\[
d\notin Dom(E_2), p::a::q::d\notin W \};\]
\item $W'$ is the set of finite complete plays $p_1::\ldots :: p_n$ of
$(P_G, E_1', E_2')$ such that $p_n\in W$.
\end{enumerate}
\end{definition}
\textbf{Note.} The pair $(p::a::q::d,\ p::a)$ in the definition above of $E_1'$ codifies a {\em backtracking move} by player one (and we point out that $q::d$ might be the empty sequence).\\
\textbf{Remark.} Differently from \cite{BerCoq}, in which both players are allowed to backtrack, we only consider the case in which only player one may do so (as in \cite{Hayashi1}). This is not because our results would fail otherwise: clearly, the proofs in this paper would work just as well for the definition of 1-Backtracking Tarski games given in \cite{BerCoq}. However, as noted in \cite{BerCoq}, any player-one recursive winning strategy in our version of the game can be effectively transformed into a winning strategy for player one in the other version of the game. Hence, adding backtracking for the second player does not increase the computational challenge for player one.
Moreover, the notion of winner of the game given in \cite{BerCoq} is strictly non constructive, and games played by player one with the correct winning strategy may not even terminate. With our definition, instead, we can formulate our main theorem as a program termination result: whatever the strategy chosen by player two, the game terminates with the win of player one. This is also the spirit of realizability and hence of this paper: the constructive information must be computed in a finite amount of time, not in the limit. \\
In the well known Tarski games, there are two players and a formula
on the board. The second player - usually called Abelard - tries to
show that the formula is false, while the first player - usually
called Eloise - tries to show that it is true. Let us see the
definition.
\begin{definition}[Tarski Games]
\label{definition-TarskiGame} Let $A$ be a closed
implication and negation free arithmetical formula of $\mathcal{L}$. We define the
Tarski
game for $A$ as the game $T_A=(V, E_1, E_2, W)$, where:
\begin{enumerate}
\item
$V$ is the set of all subformula occurrences of $A$; that is, $V$ is the smallest set of formulas containing $A$ and such that, if either $B\lor C$ or $B\land C$ belongs to $V$, then $B,C\in V$, and if either $\forall x B(x)$ or $\exists x B(x)$ belongs to $V$, then $B(n)\in V$ for all numerals $n$. \\
\item
$E_1$ is the set of pairs $(A_1,A_2)\in V\times V$ such that $A_1=\exists x
A(x)$ and $A_2=A(n)$, or $A_1=A\lor B$ and either $A_2=A$ or
$A_2=B$;\\
\item
$ E_2$ is the set of pairs $(A_1,A_2)\in V\times V$ such that $A_1=\forall x
A(x)$ and $A_2=A(n)$, or $A_1=A\land B$ and $A_2=A$ or
$A_2=B$;\\
\item
$W$ is the set of finite complete plays $A_1::\ldots ::A_n$ such that
$A_n=\True$.
\end{enumerate}
\end{definition}
\textbf{Note.} We stress that Tarski games are defined only for implication and negation free formulas. Indeed, $1Back(T_A)$, when $A$ contains implications, would be much more involved and less intuitive (for a definition of Tarski games for every arithmetical formula see for example Berardi \cite{Ber2}).\\
What we want to show is that if $t\Vvdash A$,
$t$ gives to player one a recursive winning strategy in
$1Back(T_A)$. The idea of the proof is the following. Suppose we play as player one. Our strategy is relativized to a knowledge state and we start
the game by fixing the actual state of knowledge as $\makestate{\emptyset}$.
Then we play in the same way as we would do in the Tarski game. For
example, if there is $\forall x A(x)$ on the board and
$A(n)$ is chosen by player two, we recursively play the
strategy given by $tn$; if there is $\exists x A(x)$ on the
board, we calculate $\pi_0t[\makestate{\emptyset}]=n$ and play
$A(n)$, and recursively the strategy given by $\pi_1t$. If there is $A\lor B$ on the board, we calculate $\proj_0t[\makestate{\emptyset}]$, and according to whether it equals $\True$ or $\False$, we play the strategy recursively given by $\proj_1t$ or $\proj_2t$.
If there is an atomic formula on the board and it is true, we win; otherwise, we extend the current state with the state $\emptyset \Cup t[\makestate{\emptyset}]$, backtrack, and play with respect to the new state of knowledge, trying to keep as close as possible to the previous game.
Eventually, we will reach a state large enough to enable our
realizer to always give correct answers, and we will win. Let us first consider an example and then the formal definition of the winning strategy for Eloise.\\
\textbf{Example ($\EM_1$)}. Given a predicate $P$ of $\SystemT$, and its boolean negation predicate $\neg P$ (which is representable in $\SystemT$), the realizer $E_P$ of \[\EM_1:=\forall x.\ \exists y\
P(x, y)\vee \forall y \neg P(x,y)\]
is defined as \[\lambda \alpha^{\Nat}
\langle X_P\alpha,\ \langle
\Phi_P{\alpha},\
\makestate{\emptyset} \rangle ,\ \lambda m^{\Nat}\
\Add_P {\alpha}m\rangle \]
According to the rules of the game $1Back(T_{\EM_1})$, Abelard is the first to move and, for some numeral $n$, chooses the formula
\[\exists y\ P(n, y)\vee \forall y \neg P(n,y)\]
Now it is Eloise's turn,
and she plays the strategy given by the term
\[\langle X_Pn,\ \langle
\Phi_P{n},\
\makestate{\emptyset} \rangle ,\ \lambda m^{\Nat}\
\Add_P nm\rangle \]
Hence, she computes $X_Pn[\makestate{\emptyset}]=\chi_P\makestate{\emptyset} n=\False$ (by definition \ref{definition-EquationalTheoryL1}), so she plays the formula
\[\forall y \neg P(n,y)\]
and Abelard chooses $m$ and plays
\[\neg P(n,m)\]
If $\neg P(n,m)=\True$, Eloise wins. Otherwise, she plays the strategy given by
\[ (\lambda m^{\Nat}\ \Add_P {n}m )m[\makestate{\emptyset}]= \add_P \makestate{\emptyset} n m=\{\langle P,n,m\rangle\}\]
So, the new knowledge state is now $\{\langle P,n,m\rangle\}$ and she backtracks to the formula
\[\exists y\ P(n, y)\vee \forall y \neg P(n,y)\]
Now, by definition \ref{definition-EquationalTheoryL1}, $X_Pn[\{\langle P,n,m\rangle\}]=\True$ and she plays the formula
\[\exists y\ P(n, y)\]
calculates the term \[\pi_0\langle
\Phi_Pn,\
\makestate{\emptyset} \rangle[\{\langle P,n,m\rangle\}]=\varphi_P\{\langle P,n,m\rangle\}n=m\]
plays $P(n,m)$ and wins.\\\\
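The play just described can be simulated in miniature (a toy Python sketch of our own: Eloise's strategy for $1Back(T_{\EM_1})$, with the knowledge state as a dictionary of learned witnesses; Abelard's universal moves are supplied as a stream, and for termination we assume he eventually plays a true instance $P(n,m)$ whenever one exists):

```python
def eloise_em1(P, n, abelard_moves):
    """Play Eloise's strategy for 1Back(T_EM1) against Abelard's fixed
    choice n and a stream of his universal moves; returns the
    transcript of the play."""
    state = {}                     # knowledge state: n -> learned witness m
    log = [f"Abelard: n={n}"]
    while True:
        if n in state:             # chi_P answers True: play the exists side
            m = state[n]
            log.append(f"Eloise: exists y P({n},y), then P({n},{m})")
            return log
        log.append(f"Eloise: forall y ~P({n},y)")
        m = next(abelard_moves)
        log.append(f"Abelard: ~P({n},{m})")
        if not P(n, m):            # ~P(n,m) is true: Eloise wins as is
            return log
        state[n] = m               # learn the counterexample and backtrack
        log.append("Eloise: backtrack")

P = lambda n, m: m == n * n
print("\n".join(eloise_em1(P, 3, iter([9]))))
```

With $P(n,m) \equiv (m = n^2)$ and $n = 3$, Abelard's move $m = 9$ refutes Eloise's universal claim, so she learns the atom $\langle P, 3, 9\rangle$, backtracks, and wins on the existential side, mirroring the formal play above.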
\textbf{Notation.} In the following, we shall denote with upper case letters $A, B,C$
closed arithmetical formulas, with lower case letters $p,q,r$
plays of $T_A$ and with upper case letters $P,Q,R$ plays of
$1Back(T_A)$ (and all those letters may be indexed by numbers). To avoid confusion with the plays of $T_A$, plays of 1Back($T_A$) will be denoted as $p_1,\ldots, p_n$ rather than $p_1::\ldots :: p_n$. Moreover, if $P=q_1,\ldots, q_m$, then $P, p_1,\ldots, p_n$ will denote the sequence $q_1, \ldots, q_m, p_1,\ldots p_n $.
\begin{definition}\label{adaptrealizer}
Fix $u$ such that $u\Vvdash A$. Let $p$ be a finite play of
$T_A$ starting with $A$. We define by induction on the length of
$p$ a term $\rho(p)\in \SystemTClass$ (read as `the realizer adapted
to $p$') in the following way: \begin{enumerate}
\item If $p=A$, then
$\rho(p)=u$. \item If $p=(q:: \exists x B(x):: B(n))$ and
$\rho(q:: \exists x B(x))=t$, then $\rho(p)=\pi_1t$.
\item If $p=(q::
\forall x B(x):: B(n))$ and $\rho(q:: \forall x B(x))=t$,
then $\rho(p)=tn$.
\item If $p=(q:: B_0\land B_1:: B_i)$
and $\rho(q:: B_0\land B_1)=t$, then $\rho(p)=\pi_it$.
\item If
$p=(q:: B_1\lor B_2:: B_i)$ and $\rho(q:: B_1\lor B_2)=t$, then
$\rho(p)=\proj_it$.\end{enumerate}
Given a play $P=Q, q::B$ of $1Back(T_A)$, we set
$\rho(P)=\rho(q::B)$.
\end{definition}
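The clauses of this definition can be sketched as a fold over the play. This is only an illustration under our own encoding (not the paper's): realizers are nested tuples or functions, and \texttt{moves} records how each later formula of the play was obtained.

```python
def rho(u, moves):
    """Sketch of the 'realizer adapted to a play': u realizes the play's
    first formula; moves lists how each subsequent formula was chosen."""
    t = u
    for kind, arg in moves:
        if kind == "exists":     # q :: Ex x B(x) :: B(n)   ->  pi_1 t
            t = t[1]
        elif kind == "forall":   # q :: All x B(x) :: B(n)  ->  t n
            t = t(arg)
        elif kind == "and":      # q :: B0 /\ B1 :: B_i     ->  pi_i t
            t = t[arg]           # arg in {0, 1}
        else:                    # q :: B1 \/ B2 :: B_i     ->  proj_i t
            t = t[arg]           # arg in {1, 2}; slot 0 holds the boolean
    return t
```

For instance, with a toy realizer of $\forall x\exists y\ (y=x+1)$, following the moves "instantiate $x:=5$, then descend into the existential" peels the realizer down to the atomic witness.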
\begin{definition}\label{adaptstate}
Fix $u$ such that $u\Vvdash A$. Let $\rho$ be as in definition \ref{adaptrealizer} and $P$ be a finite play of $1Back(T_A)$ starting with $A$. We
define by induction on the length of $P$ a state $\Sigma(P)$ (read
as `the state associated to $P$') in the following way:
\begin{enumerate}
\item If $P=A$, then $\Sigma(P)=\varnothing$.
\item If $P=(Q, p::B, p::B::C)$ and $\Sigma(Q, p::B)=s$, then
$\Sigma(P)=s$.
\item If $P=(Q, p::B::q, p::B)$ and $\Sigma(Q,
p::B::q)=s$ and $\rho(Q, p::B::q)=t$, then if $t:\State$, then
$\Sigma(P)=s\Cup t[s]$, else $\Sigma(P)=s$.\end{enumerate}
\end{definition}
\begin{definition}[Winning strategy for 1Back($T_A$)]\label{definition-winningstrategy}
Fix $u$ such that $u\Vvdash A$. Let $\rho$ and $\Sigma$ be as in definitions \ref{adaptrealizer} and \ref{adaptstate}, respectively. We define a function $\omega$ from the set of finite plays of
$1Back(T_A)$ to the set of finite plays of $T_A$; $\omega$ is intended
to be a recursive winning strategy from $A$ for player one in $1Back(T_A)$.
\begin{enumerate}
\item If $\rho(P,q::\exists x B(x))=t$,
$\Sigma(P,q::\exists x B(x))=s$ and $(\pi_0t)[s]={n}$,
then \[\omega(P,q::\exists x B(x))= q::\exists x B(x)::
B({n})\]
\item If $\rho(P, q::B\lor C)=t$ and $\Sigma(P, q::B\lor C)=s$,
then if $(\proj_0t)[s]=\True$ then \[\omega(P, q::B\lor C)=q::B\lor
C::B\] else \[\omega(P, q::B\lor C)= q::B\lor C:: C\]
\item If $A_n$ is atomic, $A_n=\False$, $\rho(P, A_1::\cdots::
A_n)=t$ and $\Sigma(P, A_1::\cdots ::A_n)=s$, then \[\omega(P,
A_1::\cdots:: A_n)= A_1::\cdots:: A_i\] where $i$ is equal to the smallest
$j< n$ such that $\rho( A_1::\cdots:: A_j)=w$ and either
\[A_j=\exists x C(x)\land A_{j+1}= C({n}) \land
(\pi_0w)[s\Cup t[s]]\neq {n}\]
or \[A_j=B_1\lor B_2\land
A_{j+1}= B_1 \land (\proj_0w)[s\Cup t[s]]=\False\]
or \[A_j=B_1\lor B_2\land A_{j+1}= B_2 \land
(\proj_0w)[s\Cup t[s]]=\True\] If such $j$ does not exist, we set $i=n$.
\item
In the other cases, $\omega(P,q)=q$.
\end{enumerate}
\end{definition}
\begin{lemma} \label{preservationlemma}
\label{lemma-Completenessof1Backtracking}Suppose $u\Vvdash A$ and let $\rho,\Sigma,\omega$ be as in definition \ref{definition-winningstrategy}. Let $Q$ be a finite
$\omega$-correct play of $1Back(T_A)$ starting with $A$, $\rho(Q)=t$, $\Sigma(Q)=s$. If
$Q=Q',q'::B$, then $t\Vvdash_s B$. \end{lemma}
\textit{Proof.} By a straightforward induction on the length of
$Q$.
\comment{
\textit{Proof.} See the full version of this paper \cite{Aschierifull}.
By a straightforward induction on the length of
$Q$.
\begin{enumerate}
\item If $Q=A$, then $t=\rho(Q)=u\Vvdash_s A$.\\
\item If $Q=P,q::\exists x B(x),q::\exists x B(x)::
B({n})$, then let $t'=\rho(P,q::\exists x B(x))$. By
definition of $\Sigma$, $s=\Sigma(P,q::\exists x B(x))$. Since $Q$
is $\omega$-correct and $(q::\exists x B(x),q::\exists x B(x)::
B({n}))\in E_1$, we have $\omega(P,q::\exists x
B(x))=q::\exists x B(x):: B({n})$ and so
${n}=(\pi_0t')[s]$. Moreover, by definition of $\rho$,
$t=\pi_1t'$; by induction hypothesis, $t'\Vvdash_s \exists x B(x)$;
so, $t=\pi_1 t' \Vvdash_s B({n})$.\\
\item If $Q=P,q::B\lor C,q::B\lor C:: B$, then let
$t'=\rho(P,q::B\lor C)$. By definition of $\Sigma$,
$s=\Sigma(P,q::B\lor C)$. Since $Q$ is $\omega$-correct and
$(q::B\lor C,q::B\lor C:: B)\in E_1$, we have $\omega(P,q::B\lor
C)=q::B\lor C:: B$ and so $(\proj_0t')[s]=\True$. Moreover, by definition
of $\rho$, $t=\proj_1t'$; by induction hypothesis, $t'\Vvdash_s
B\lor C$; so, $t=\proj_1t' \Vvdash_s B$.
The other case is analogous.\\
\item
If $Q=P,q::\forall x B(x), q::\forall x B(x)::B({n})$,
then let $t'=\rho(P,q::\forall x B(x))$. By definition of $\Sigma$, $s=\Sigma(P, q::\forall x B(x))$. By definition of $\rho$,
$t=t'{n} $; by induction hypothesis, $t'\Vvdash_s \forall x
B(x)$; hence, $t=t'n \Vvdash_s B({n})$.\\
\item
If $Q=P,q::B\land C, q::B\land C::B$, then let
$t'=\rho(P,q::B\land
C)$. By definition of $\Sigma$, $s=\Sigma (P, q::B\land C)$. By definition of $\rho$, $t=\pi_0t'$; by induction hypothesis,
$t'\Vvdash_s B\land C$; hence, $t=\pi_0 t' \Vvdash_s B$. The other case is analogous.\\
\item
If $Q=P, A_1::\cdots:: A_n, A_1::\cdots:: A_i$, $i<n$, $A_n$ atomic, then $A_1=A$. Furthermore, if $\Sigma(P,A_1::\cdots:: A_n)=s'$
and $t'=\rho(P,A_1::\cdots::A_n)$, then $s=s'\Cup t'[s']$. Let
$t_j=\rho(A_1::\cdots :: A_j)$, for $j=1,\ldots, i$. We prove by
induction on $j$ that $t_j\Vvdash_s A_j$, and hence the thesis.
If $j=1$, then $t_1=\rho(A_1)=\rho(A)=u\Vvdash_s A=A_1$.\\ If $j>1$,
by induction hypothesis $t_{k}\Vvdash_s A_{k}$, for every $k<j$. If either $A_{j-1}=\forall x
C(x)$ or $A_{j-1}=C_0\wedge C_1$, then
either $t_j=t_{j-1}{n}$ and $A_j=C({n})$, or
$t_j=\pi_mt_{j-1}$ and $A_j=C_m$: in both cases, we have
$t_j\Vvdash_s A_j$, since $t_{j-1}\Vvdash_s A_{j-1}$. Therefore, by definition of $\omega$ and $i$ and the $\omega$-\emph{correctness} of $Q$, the remaining possibilities are that either
$A_{j-1}=\exists x C(x)$, $A_j=
C({n})$, $t_j=\pi_1t_{j-1}$, with $(\pi_0t_{j-1})[s]= {n}$; or
$A_{j-1}=C_1\lor C_2$, $A_j=C_m$, $t_j=\proj_m t_{j-1}$ and
$(\proj_0t_{j-1})[s]=\True$ if and only if $m=1$; in both cases, we
have $t_j\Vvdash_s A_j$. \end{enumerate}
\qed
}
\begin{theorem}[Soundness Theorem] Let $A$ be a closed, negation- and implication-free arithmetical formula. Suppose that $u\Vvdash A$ and consider the game
$1Back(T_A)$. Let $\omega$ be as in definition \ref{definition-winningstrategy}. Then $\omega$ is a recursive winning strategy from $A$ for player one. \end{theorem}
\textit{Proof.} The theorem will be proved in the full version of this paper. The idea is to argue by contradiction, assuming there is an infinite $\omega$-correct play. Then one can produce an increasing sequence of states. Using theorems \ref{theorem-StabilityTheorem} and \ref{Fixed Point Property}, one can show that Eloise's moves eventually stabilize and that the game ends in a winning position for Eloise.
\comment{
\textit{Proof.} We begin by showing that there is no infinite
$\omega$-correct play.\\ Let $P=p_1,\ldots, p_n,\ldots$ be, for the
sake of contradiction, an infinite $\omega$-correct play, with
$p_1=A$. Let $A_1::\cdots :: A_k$ be the \emph{longest} play of $T_A$ such that there exists $j$ such that for every
$n\geq j$, $p_n$ is of the form $A_1::\cdots ::A_k::q_n$. $A_1::\cdots::A_k$ is well defined, because: $p_n$ is of the form $A::q'_n$ for every $n$; the length of $p_n$ is at most the degree of the formula $A$; the sequence of maximum length is unique because any two such sequences are one the prefix of the other, and therefore are equal. Moreover, let $\{n_i\}_{i\in\NatSet}$ be the infinite increasing sequence
of all indexes $n_i$ such that $p_{n_i}$ is of the form $A_1::\cdots:: A_k::q_{n_i}$ and
$p_{n_i+1}=A_1::\cdots :: A_k$ (indeed, $\{n_i\}_{i\in\NatSet}$
must be infinite: if it were not so, then there would be an index
$j'$ such that for every $n\geq j'$, $p_n=A_1::\cdots
::A_k::A_{k+1}::q$, violating the assumption on the maximal length of
$A_1::\cdots::A_k$). $A_k$, if not atomic, is a disjunction or an
existential statement.\\ Let now $s_i=\Sigma(p_1,\ldots, p_{i})$
and
$t=\rho(A_1::\cdots:: A_k)$. For every $i$, $s_i\leq s_{i+1}$, by definition of $\Sigma$.
There
are three cases:
1) $A_k=\exists x A(x)$. Then, by the Stability Theorem (Theorem \ref{theorem-StabilityTheorem}), there exists
$m$ such that for every $a$, if $n_a\geq m$, then $(\pi_0t)[s_{n_a}]=(\pi_0t)[s_m]$. Let $h:=(\pi_0t)[s_{n_a}\Cup t_1[s_{n_a}]]=(\pi_0t)[s_{n_a+1}]$, where $t_1=\rho(p_1,\ldots, p_{n_a})$. So let $a$ such that $n_a\geq m$; then \[p_{n_a+2}=\omega(p_1,\ldots,
p_{n_a+1})=\omega(p_1,\ldots, A_1::\cdots :: A_k)=A_1::\cdots ::A_k::A(h)\] Moreover, by hypothesis, and since $p_{n_a+1}=A_1::\cdots::A_k$
\[p_{n_{(a+1)}}=A_1::\cdots ::A_k::q_{n_{(a+1)}}= A_1::\cdots ::A_k::A(h)::q'\]
for some $q'$ and $p_{n_{(a+1)}+1}=A_1::\cdots ::A_k$: contradiction, since \[h=(\pi_0t)[s_{n_{a}+1}]=(\pi_0t)[s_{n_{(a+1)}+1}]=(\pi_0t)[s_{n_{(a+1)}}\Cup t_2[s_{n_{(a+1)}}]]\]
where $t_2=\rho(p_1,\ldots, p_{n_{(a+1)}})$, whilst $h\neq (\pi_0t)[s_{n_{(a+1)}}\Cup t_2[s_{n_{(a+1)}}]]$ should hold, by definition of $\omega$.
2) $A_k=A\lor B$. This case is totally analogous to the preceding.
3) $A_k$ is atomic. Then, for every $n\geq j$, $p_n=A_1::\cdots::
A_k$. So, for every $n\geq j$, $s_{n+1}=s\Cup t[s_n]$ and hence, by Theorem \ref{Fixed Point Property} there
exists $m\geq j$ such that $t[s_{m}]=\makestate{\emptyset}$. But $t
\Vvdash_{s_{m}} A_k$, by Lemma \ref{preservationlemma}; hence, $A_k$ must equal $\True$, and so it is
impossible that $(p_m, p_{m+1})=(A_1::\cdots ::A_k, A_1::\cdots
::A_k)\in E_1'$: contradiction.\\
Let now $p=p_1,\ldots, p_n$ be a
complete finite $\omega$-correct play. $p_n$ must equal
$B_1::\cdots::B_k$, with $B_k$ atomic and $B_k=\True$: otherwise,
$p$ wouldn't be complete, since player one would lose the play $p_n$ in $T_A$ and hence would be allowed to backtrack by definition \ref{definition-1BacktrackingGames}.\\
\qed }
\section{Examples}
\label{examples}
\comment{
\textbf{Example ($\Sigma^0_1$ and $\Pi^0_2$ formulas).} Suppose
$u\Vdash \exists x P(x)$, with $P(x)$ atomic. Let $n$ be the
smallest natural number such that $s:=(\pi_1u)^n[\varnothing]$ is a
fixed point of $\pi_1u$. We have $u\Vdash_s \exists x P(x)$ and so
$\pi_1u\Vdash_s P(\overline{m})$, where $\overline{m}=(\pi_0u)[s]$.
Since $(\pi_1u)[s]=s$, $P(\overline{m})$ must be true. Hence, here
is the algorithm (in pseudo code) to find the witness for $\exists
x
P(x)$:\\ \\$s:=\varnothing$;\\ repeat $s:=(\pi_1u)[s]$ until
$(\pi_1u)[s]=s$;\\
return $(\pi_0u)[s];$\\ Suppose now $t\Vdash \forall x\exists y
P(x,y)$. Then, given $n\in\NatSet$, $t\overline{n}\Vdash\exists
yP(\overline{n},y)$. So we can apply the algorithm for $\Sigma^0_1$
formulas. Hence, the extracted algorithm is the following:\\
$u:=t\overline{n}$;\\ $s:=\varnothing$;\\ repeat $s:=(\pi_1u)[s]$
until
$(\pi_1u)[s]=s$;\\ return $(\pi_0u)[s];$\\
}
\textbf{Minimum Principle for functions over natural numbers.} The
minimum principle states that every function $f$ over natural
numbers has a minimum value, i.e. there exists $n\in \NatSet$
such that for every $m\in\NatSet$, $f(m)\geq f(n)$. We can prove
this principle in $\HA + \EM_1$, for any $f$ in the language. We set $P(y,x)\equiv f(x)<y$ but, in order to
enhance readability, we will write $f(x)<y$ rather than the obscure
$P(y,x)$. We define:\\ $Lessef(n):= \exists \alpha\ f(\alpha)\leq
n$\\ $Lessf(n):=\exists \alpha\ f(\alpha)<n$\\ $Notlessf(n):=
\forall \alpha\ f(\alpha)\geq n$\\ Then we formulate the
minimum principle, in equivalent form, as: \[Hasminf:=\exists y.\ Notlessf(y)\wedge
Lessef(y)\] The informal argument goes as follows. As the base case of the induction, we just observe that $f(k)\leq 0$ implies that $f$ has a minimum value (namely $f(k)$). Next, if $Notlessf(f(0))$ holds, we are done: we have found the minimum. Otherwise, $Lessf(f(0))$ holds, and hence $f(\alpha)<f(0)$ for some $\alpha$ given by the oracle. Then $f(\alpha)\leq f(0)-1$, and we conclude that $f$ has a minimum value by the induction hypothesis.
Now we give the formal proofs, which are natural deduction trees, decorated with terms of $\SystemTClass$, as formalized in \cite{Aschieri}. We first prove
that
$\forall n.\ (Lessef(n)\rightarrow Hasminf)\rightarrow
(Lessef(S(n))\rightarrow Hasminf)$ holds.
\def\proofSkipAmount{\vskip-2ex plus.1ex minus.1ex}
\begin{prooftree}
\small
\AxiomC{$E_P: \forall n.\ Notlessf(S(n))\lor Lessf(S(n))$}
\UnaryInfC{$E_Pn: Notlessf(S(n))\lor Lessf(S(n))$}
\AxiomC{$[Notlessf(S(n))]$}
\noLine
\UnaryInfC{$D_1$}
\noLine
\UnaryInfC{$Hasminf$}
\AxiomC{$[Lessf(S(n))]$}
\noLine
\UnaryInfC{$D_2$}
\noLine
\UnaryInfC{$Hasminf$}
\TrinaryInfC{$D: Hasminf$}
\UnaryInfC{$\lambda w_2 D: Lessef(S(n))\rightarrow Hasminf$}
\UnaryInfC{$\lambda w_1\lambda w_2 D: (Lessef(n)\rightarrow
Hasminf)\rightarrow (Lessef(S(n))\rightarrow Hasminf)$}
\UnaryInfC{$\lambda n \lambda w_1\lambda w_2 D: \forall n.\
(Lessef(n)\rightarrow Hasminf)\rightarrow (Lessef(S(n))\rightarrow
Hasminf)$}
\end{prooftree}where the term $D$ will be examined later, $D_1$ is the proof
\def\proofSkipAmount{\vskip0ex plus.1ex minus.1ex}\begin{prooftree}
\small
\AxiomC{$v_1: Notlessf(S(n))$}
\AxiomC{$w_2: Lessef(S(n))$}
\BinaryInfC{$\langle v_1,w_2\rangle :
Notlessf(S(n))\land
Lessef(S(n))$}
\UnaryInfC{$\langle S(n),\langle v_1,w_2\rangle
\rangle : Hasminf$}
\end{prooftree}
and $D_2$ is the proof
\def\proofSkipAmount{\vskip-3ex plus.1ex minus.1ex}
\begin{prooftree}
\small
\AxiomC{$v_2: [Lessf(S(n))]$}
\AxiomC{$w_1:
[Lessef(n)\rightarrow
Hasminf]$}
\AxiomC{$[x_2: f(z)< S(n)]$}
\UnaryInfC{$x_2: f(z)\leq
n$}
\UnaryInfC{$\langle
z,x_2\rangle :
Lessef(n)$}
\BinaryInfC{$w_1\langle
z,x_2\rangle :
Hasminf$}
\BinaryInfC{$w_1\langle
\pi_0v_2,\pi_1v_2\rangle :
Hasminf$}
\end{prooftree}
We prove now that $Lessef(0)\rightarrow Hasminf$
\def\proofSkipAmount{\vskip-4ex plus.1ex minus.1ex}
\begin{prooftree}
\small
\AxiomC{$w: [Lessef(0)]$}
\AxiomC{$x_1: [f(z)\leq 0]$}
\UnaryInfC{$x_1: f(z)=0$}
\UnaryInfC{$x_1: f(\alpha)\geq f(z)$}
\UnaryInfC{$\lambda \alpha x_1: Notlessf(f(z))$}
\AxiomC{$\emptyset: f(z)\leq f(z)$}
\UnaryInfC{$\langle z,\emptyset\rangle : Lessef(f(z))$}
\BinaryInfC{$\langle \lambda\alpha x_1,\langle z,\emptyset\rangle
\rangle : Notlessf(f(z))\land
Lessef(f(z))$}
\UnaryInfC{$\langle f(z),\langle \lambda\alpha x_1,\langle
z,\emptyset\rangle \rangle \rangle : Hasminf$}
\BinaryInfC{$\langle f(\pi_0w),\langle \lambda\alpha \pi_1
w,\langle \pi_0w,\emptyset\rangle \rangle \rangle :
Hasminf$}
\UnaryInfC{$F:=\lambda w\langle f(\pi_0w),\langle \lambda\alpha
\pi_1
w,\langle \pi_0w,\emptyset\rangle \rangle \rangle : Lessef(0)\rightarrow
Hasminf$}
\end{prooftree}
Therefore we can conclude with the induction rule that \[\lambda
\alpha^\Nat\ {\tt R}F(\lambda n\lambda w_1\lambda w_2 D)\alpha: \forall x.
Lessef(x)\rightarrow Hasminf\] And now the thesis:
\def\proofSkipAmount{\vskip-1ex plus.1ex minus.1ex} \begin{prooftree}
\small
\AxiomC{$\emptyset: f(0)\leq f(0)$}
\UnaryInfC{$\langle 0, \emptyset\rangle : Lessef (f(0))$}
\AxiomC{$\lambda
\alpha^\Nat\ {\tt R}F(\lambda n\lambda w_1\lambda w_2 D)\alpha: \forall x. Lessef(x)\rightarrow Hasminf$}
\UnaryInfC{${\tt R}F(\lambda n\lambda w_1\lambda w_2
D)f(0): Lessef(f(0))\rightarrow Hasminf$}
\BinaryInfC{$M:={\tt R}F(\lambda n\lambda w_1\lambda w_2 D)f(0)\langle
0,\emptyset\rangle :
Hasminf$}
\end{prooftree}
Let us now take a closer look at $D$. We have defined \[D:={\tt if}\
X_P S(n)\ {\tt then}\ w_1\langle \Phi_P S(n), \emptyset
\rangle \
{\tt else}\ \langle S(n),\langle \lambda \beta\ (\Add_P)
S(n)\beta,w_2\rangle \rangle
\]
Let $s$ be a state, and let us consider $M$, the realizer of $Hasminf$, first in the base case of the recursion and then in its general form during the computation: ${\tt R}F(\lambda n\lambda w_1\lambda w_2 D)f(0)\langle m,\emptyset\rangle[s]$. If $f(0)=0$,
\[M[s]={\tt R}F(\lambda n\lambda w_1\lambda
w_2 D)f(0)\langle{0},\emptyset\rangle
[s]=F\langle 0,\emptyset\rangle
=\langle f(0),\langle \lambda
\alpha\ \emptyset, \langle 0,\emptyset\rangle \rangle \rangle\]
If $f(0)=S(n)$, we have two other cases. If $\chi_PsS({n})=\True$, then \[{\tt R}F(\lambda n\lambda w_1\lambda
w_2 D)S({n})\langle{m},\emptyset\rangle
[s]=\]\[=(\lambda n\lambda w_1\lambda w_2 D){n}({\tt R}F(\lambda
n\lambda w_1\lambda w_2 D){n})\langle
{m},\emptyset\rangle [s]= \]\[={\tt R}F(\lambda n\lambda
w_1\lambda w_2 D){n}\langle \Phi_P
(S({n})),\emptyset\rangle [s]\]
If $\chi_PsS({n})=\False$, then \[{\tt R}F(\lambda n\lambda
w_1\lambda w_2 D)S({n})\langle {m},\emptyset\rangle
[s]=\]\[=(\lambda n\lambda w_1\lambda w_2 D){n}({\tt R}F(\lambda
n\lambda w_1\lambda w_2
D){n})\langle {m},\emptyset\rangle [s]= \]\[=
\langle S({n}),\langle \lambda \beta\ (\add_P)s S(n)\beta,\langle {m},\emptyset\rangle \rangle
\rangle \]
In the first case, the minimum value of $f$ has been found. In the second case, the operator ${\tt R}$, starting from $S(n)$, recursively calls itself on $n$; in the third case, it reduces to its normal form. From these equations, we easily deduce the behavior of
the realizer of $Hasminf$. In a pseudo imperative programming
language, for the witness of $Hasminf$ we would write:\\
$n:=f(0);$\\
while ($\chi_Psn=\True$, i.e.\ there exists $m$ such that $f({m})< n$
belongs to $s$)\\
do $n:=n-1$;\\
return $n$; \\
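A direct transcription of this pseudo-code; as an assumption of the sketch, the state \texttt{s} is represented simply as the set of arguments $m$ whose values $f(m)$ have been recorded.

```python
def hasminf_witness(f, s):
    """Candidate minimum value of f, given the facts f(m) < n recorded in s."""
    n = f(0)
    while any(f(m) < n for m in s):  # chi_P s n = True
        n -= 1                       # the state refutes n: try n - 1
    return n
```

With an empty state it simply returns $f(0)$; each learned counterexample pushes the candidate down, exactly as the while loop above describes.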
Hence, when $f(0)>0$, we have, for some numeral $k$
\[M[s]=\langle k,\langle \lambda \beta\ (\add_P)s k\beta,\langle
\varphi_Psk,\emptyset\rangle \rangle \rangle \]
It is clear that $k$ is the
minimum value of $f$, according to the partial information provided by
$s$
about $f$, and that $f(\varphi_Psk)\leq k$. If $s$ is sufficiently complete, then $k$
is the true minimum of $f$.\\
The normal form of the realizer $M$ of $Hasminf$ is so simple that we can immediately extract the winning strategy $\omega$ for the 1-Backtracking version of the Tarski game for $Hasminf$.
Suppose the current state of the game is $s$. If $f(0)=0$, Eloise chooses the formula
\[Notlessf(0)\land Lessef (0)\]
and wins. If $f(0)>0$, she chooses
\[Notlessf(k)\land Lessef (k)=\forall \alpha\ f(\alpha) \geq k \land \exists \alpha\ f(\alpha)\leq k \]
If Abelard chooses $\exists \alpha\ f(\alpha)\leq k$, she wins, because she responds with $f(\varphi_Psk)\leq k$, which holds. Suppose hence Abelard chooses \[\forall \alpha\ f(\alpha) \geq k\] and then $f(\beta)\geq k$. If it holds, Eloise wins. Otherwise, she adds to the current state $s$
\[(\lambda \beta\ (\add_P)s k\beta)\beta= (\add_P)s k\beta=\{f(\beta)<k\}\]
and backtracks to $Hasminf$ and then plays again. This time, she chooses
\[Notlessf(f(\beta))\land Lessef (f(\beta))\]
(using $f(\beta)$, which was Abelard's counterexample to the minimality of $k$ and is smaller than her previous choice for the minimum value). After at most $f(0)$ backtrackings, she wins. \\
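The whole exchange, learning included, can be sketched as a loop in which Eloise keeps a candidate minimum and Abelard supplies counterexamples. The function \texttt{abelard} below is a hypothetical oracle standing in for his moves, not part of the formal development.

```python
def learn_minimum(f, abelard):
    """Eloise's strategy: candidate minimum value k, attained at point x."""
    k, x = f(0), 0
    while True:
        beta = abelard(k)             # Abelard attacks "forall a. f(a) >= k"
        if beta is None or f(beta) >= k:
            return k, x               # no counterexample: Eloise wins
        k, x = f(beta), beta          # learn f(beta) < k and backtrack
```

Since the candidate strictly decreases at each backtracking, the loop terminates after at most $f(0)$ iterations, matching the bound stated above.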
\textbf{Coquand's Example.} We now investigate an example, due to
Coquand, in our framework of realizability. We want to prove that
for every function over natural numbers and for every
$a\in\NatSet$ there exists $x\in\NatSet$ such that $f(x)\leq
f(x+a)$. Thanks to the minimum principle, we can give a very easy
classical proof:
\comment{
\begin{prooftree}
\AxiomC{$h: Hasminf$}
\AxiomC{$w: Notlessf(\mu)\land Lessef(\mu)$}
\UnaryInfC{$\pi_1w: Lessef(\mu)$}
\AxiomC{$w: Notlessf(\mu)\land Lessef(\mu)$}
\UnaryInfC{$\pi_0 w: Notlessf (\mu)$}
\UnaryInfC{$\pi_0w(z+a): f(z+a)\geq \mu$}
\AxiomC{$v: f(z)\leq \mu$}
\BinaryInfC{$\pi_0w(z+a)\Cup v: f(z)\leq f(z+a)$}
\UnaryInfC{$\langle z,\pi_0w(z+a)\Cup v\rangle :
\exists x
f(x)\leq f(x+a)$}
\UnaryInfC{$\lambda a \langle z,\pi_0w(z+a)\Cup
v\rangle :
\forall a \exists x f(x)\leq f(x+a)$}
\BinaryInfC{$\lambda a
\langle \pi_0\pi_1w,\pi_0w(\pi_0\pi_1w+a)\Cup
\pi_1\pi_1w\rangle : \forall a \exists
x f(x)\leq f(x+a)$}
\BinaryInfC{$\lambda a
\langle \pi_0\pi_1\pi_1h,\pi_0\pi_1h(\pi_0\pi_1\pi_1h+a)\Cup
\pi_1\pi_1\pi_1h\rangle : \forall a \exists x f(x)\leq f(x+a)$}
\end{prooftree}}
\def\proofSkipAmount{\vskip-1ex plus.1ex minus.1ex}
\begin{prooftree}
\scriptsize\AxiomC{$Hasminf$}
\AxiomC{$ [Notlessf(\mu)\land Lessef(\mu)]$}
\UnaryInfC{$ Lessef(\mu)$}
\AxiomC{$ [Notlessf(\mu)\land Lessef(\mu)]$}
\UnaryInfC{$ Notlessf (\mu)$}
\UnaryInfC{$ f(z+a)\geq \mu$}
\AxiomC{$ [f(z)\leq \mu]$}
\BinaryInfC{$ f(z)\leq f(z+a)$}
\UnaryInfC{$\exists x
f(x)\leq f(x+a)$}
\UnaryInfC{$\forall a \exists x f(x)\leq f(x+a)$}
\BinaryInfC{$ \forall a \exists
x f(x)\leq f(x+a)$}
\BinaryInfC{$\forall a \exists x f(x)\leq f(x+a)$}
\end{prooftree}
The extracted realizer is \[\lambda a
\langle \pi_0\pi_1\pi_1M,\pi_0\pi_1M(\pi_0\pi_1\pi_1M+a)\Cup
\pi_1\pi_1\pi_1M\rangle\] where $M$ is the realizer of $Hasminf$.
$m:=\pi_0\pi_1\pi_1M[s]$ is a point at which the purported minimum value $\mu:=\pi_0M$ of $f$ is attained, according to the information in the state $s$ (i.e. $f(m)\leq \mu$). So, if Abelard chooses
\[\exists x\ f(x)\leq f(x+a)\]
Eloise chooses
\[f(m)\leq f(m+a)\]
We have to
consider the term \[U[s]:=\pi_0\pi_1M(\pi_0\pi_1\pi_1M+a)\Cup
\pi_1\pi_1\pi_1M[s]\] which updates the current state $s$. Surely,
$\pi_1\pi_1\pi_1M[s]=\emptyset$. $\pi_0\pi_1M[s]$ is equal either
to $\lambda \beta\ (\add_P)s \mu\beta$
or to $\lambda \alpha \emptyset$.
So,
what does $U[s]$
actually do? We have:
\[U[s]=\pi_0\pi_1M(\pi_0\pi_1\pi_1M+a)[s]
=\pi_0\pi_1M(m+a)[s]\]
with either $\pi_0\pi_1M(m+a)[s]=\emptyset$ or \[\pi_0\pi_1M(m+a)[s]= \{f(m+a)<f(m)\}\]
So $U[s]$ tests whether $f(m+a)< f(m)$; if this is not the
case, Eloise wins; otherwise, she enlarges the state $s$ with the information
$f(m+a)<f(m)$ and backtracks to $\exists x\ f(x)\leq f(x+a)$. Starting from the state $\emptyset$,
after $k+1$ backtrackings a state $s'$ will be reached, of the
form $\{ f((k+1)a)<f(ka),\ldots, f(2a)<f(a), f(a)<f(0)\}$,
and Eloise will play $f((k+1)a)\leq f((k+1)a+a)$. Hence, the extracted
algorithm for Eloise's witness is the following:\\\\
$n:=0$; while $f(n)>f(n+a)$ do $n:=n+a$; return $n$;
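The same algorithm written as a function; this is only a sketch, and termination holds because $f(0)>f(a)>f(2a)>\cdots$ cannot decrease forever.

```python
def coquand_witness(f, a):
    """Follow Abelard's counterexamples in steps of a."""
    n = 0
    while f(n) > f(n + a):
        n += a   # the state grows by the fact f(n + a) < f(n); backtrack
    return n     # now f(n) <= f(n + a)
```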
\section{Partial Recursive Learning Based Realizability and Completeness}\label{completeness}
In this section we extend our notion of realizability and increase the computational power of our realizers, in order to be able to represent any partial recursive function and, in particular, we conjecture, every recursive strategy for 1-Backtracking Tarski games. So, we choose to add to our calculus a fixed point combinator $\mathsf{Y}$, such that for every term $u:A\rightarrow A$, $\mathsf{Y}u=u(\mathsf{Y}u)$.
\begin{definition}[Systems $\PRclass$ and $\PRlearn$ ]
We define $\PRclass$ and $\PRlearn$ to be, respectively, the extensions of $\SystemTClass$ and $\SystemTLearn$ obtained by adding for every type $A$ a constant $\mathsf{Y}_A$ of type $(A\rightarrow A)\rightarrow A$ and a new equality axiom $\mathsf{Y_A}u=u(\mathsf{Y_A}u)$ for every term $u: A\rightarrow A$.
\end{definition}
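In a call-by-value language, the equation $\mathsf{Y}u=u(\mathsf{Y}u)$ must be guarded by a thunk to avoid unfolding forever; a Python sketch illustrating the typing rule (not the paper's equational theory):

```python
def Y(u):
    """Fixed point combinator: Y(u) behaves as u(Y(u)),
    with the recursive call delayed behind a lambda."""
    return u(lambda *args: Y(u)(*args))
```

For example, `Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))` is the factorial function, obtained without explicit recursion.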
Since $\PRclass$ contains a schema for unbounded iteration, properties like convergence no longer hold (think of a term taking a state $s$ and returning the largest $n$ such that $\chi_Psn=\True$). So we have to {\em ask} our realizers to be convergent. Hence, for each type $A$ of $\PRclass$ we define a set $\|A\|$ of
terms $u: A$ which we call the set of {\em stable terms} of
type $A$. We define stable terms by lifting the notion of
convergence from atomic types (having a special case for the atomic
type $\State$, as we said) to arrow and product types.
\begin{definition}[Convergence]
\label{definition-Convergence2} Assume
that $\{s_i\}_{i\in\NatSet} $ is a w.i. sequence of state constants,
and $u, v \in \PRclass$.
\begin{enumerate}
\item
$u$ converges in $\{s_i\}_{i\in\NatSet}$ if there exists a normal form $v$ such that $\exists i
\forall j\geq i.u[s_j]=v$ in $\PRlearn$.
\item
$u$ converges if $u$ converges in every w.i. sequence of state constants.
\end{enumerate}
\end{definition}
\begin{definition}[Stable Terms]
\label{definition-StableTerms} Let
$\{s_i\}_{i\in\NatSet}$ be a w.i. chain of states and $s \in
\StateSet$. Assume $A$ is a
type. We define a set $\|A\|$ of terms
$t \in\PRclass$ of type $A$, by induction on $A$.
\begin{enumerate}
\item
$\|\State\|=\{t:\State \ |\ t\mbox{ converges}\}$
\item
$\|\Nat\|=\{t:\Nat \ |\ t\mbox{ converges}\}$
\item
$\|\Bool\|=\{t:\Bool \ |\ t\mbox{ converges}\}$
\item
$\|A\times B\|=\{t:A \times B \ |\
\pi_0t\in\|A\|,\pi_1t\in\|B\|\}$
\item
$\|A\rightarrow B\|=\{t:A\rightarrow B \ |\ \forall u\in
\|A\|, tu\in\|B\|\}$
\end{enumerate}
If $t\in\|A\|$, we say that $t$ is a {\em stable} term of type
$A$.
\end{definition}
Now we extend the notion of realizability with respect to $\PRclass$ and $\PRlearn$.
\begin{definition}[Realizability]
\label{lemma-IndexedRealizabilityAndRealizability2}
Assume $s$ is a state constant, $t\in \PRclass$ is a closed term of state $\makestate{\emptyset}$, $A \in \mathcal{L} $ is a closed formula, and $t\in \|A\|$. Let $\vec{t} = t_1, \ldots, t_n : \Nat$.
\begin{enumerate}
\item
$t\Vvdash_s P(\vec{t})$ if and only if $t[s] = \makestate{\emptyset}$ in $\PRlearn$ implies
$P(\vec{t})={\True}$
\item
$t\Vvdash_s{A\wedge B}$ if and only if $\pi_0t \Vvdash_s{A}$ and $\pi_1t\Vvdash_s{B}$
\item
$t\Vvdash_s {A\vee B}$ if and only if either $\proj_0t[{s}]={\True}$ in $\PRlearn$ and $\proj_1t\Vvdash_s A$, or $\proj_0t[{s}]={\False}$ in $\PRlearn$ and $\proj_2t\Vvdash_s B$
\item
$t\Vvdash_s {A\rightarrow B}$ if and only if for all $u$, if $u\Vvdash_s{A}$,
then $tu\Vvdash_s{B}$
\item
$t\Vvdash_s {\forall x A}$ if and only if for all numerals $n$,
$t{n}\Vvdash_s A[{n}/x]$
\item
$t\Vvdash_s \exists x A$ if and only if, for some numeral $n$, $\pi_0t[{s}]= {n}$ in $\PRlearn$ and $\pi_1t \Vvdash_s A[{n}/x]$
\end{enumerate}
We define $t \Vvdash A$ if and only if $t\Vvdash_s A$ for all state constants $s$.
\end{definition}
The following conjecture will be addressed in the next version of this paper:
\begin{theorem}[Conjecture]
Suppose there exists a recursive winning strategy for player one in $1Back(T_A)$. Then there exists a term $t$ of $\PRclass$ such that $t\Vvdash A$.
\end{theorem}
\section{Conclusions and Further work}
The main contribution of this paper is conceptual, rather than technical, and it should be useful for understanding the significance and possible uses of learning based realizability. We have shown how learning based realizers may be understood in terms of backtracking games, and that this interpretation offers a way of eliciting constructive information from them. The idea is that playing games represents a way of challenging realizers; they react to the challenge by learning from failure and counterexamples. In the context of games, it is also possible to appreciate the notion of convergence, i.e. the fact that realizers stabilize their behaviour as they increase their knowledge. Indeed, similar ideas appear to be useful for understanding other classical realizabilities (see, for example, Miquel \cite{Miq}).
A further step will be taken in the full version of this paper, where we plan to settle the conjecture about the completeness of learning based realizability with respect to 1-Backtracking games. As pointed out by a referee, the conjecture could also be interesting for a problem of game semantics, namely whether all recursive innocent strategies are interpretations of terms of PCF.
\end{document}
\begin{document}
\setstcolor{red}
\title{Isoenergetic cycle for the quantum Rabi model}
\author{G. Alvarado Barrios}
\email[Gabriel Alvarado]{\quad [email protected]}
\affiliation{Departamento de F\'isica, Universidad de Santiago de Chile (USACH),
Avenida Ecuador 3493, 9170124, Santiago, Chile}
\author{Francisco J. Pe\~na}
\affiliation{Departamento de F\'isica, Universidad T\'ecnica Federico Santa Mar\'ia Casilla 110V,Valpara\'iso, Chile}
\author{F. Albarr\'an-Arriagada}
\affiliation{Departamento de F\'isica, Universidad de Santiago de Chile (USACH),
Avenida Ecuador 3493, 9170124, Santiago, Chile}
\author{P. Vargas}
\affiliation{Departamento de F\'isica, Universidad T\'ecnica Federico Santa Mar\'ia Casilla 110V,Valpara\'iso, Chile}
\affiliation{Center for the Development of Nanoscience and Nanotechnology 9170124, Estaci\'on Central, Santiago, Chile}
\author{J. C. Retamal}
\affiliation{Departamento de F\'isica, Universidad de Santiago de Chile (USACH),
Avenida Ecuador 3493, 9170124, Santiago, Chile}
\affiliation{Center for the Development of Nanoscience and Nanotechnology 9170124, Estaci\'on Central, Santiago, Chile}
\date{\today}
\begin{abstract}
The isoenergetic cycle is a purely mechanical cycle composed of adiabatic and isoenergetic processes. In the latter, the system interacts with an energy bath, keeping the expectation value of the Hamiltonian constant. This cycle has mostly been studied in systems consisting of particles confined in a power-law trap. In this work, we study the performance of the isoenergetic cycle for a system described by the quantum Rabi model, for the cases of controlling the coupling strength parameter, the resonator frequency, and the two-level system frequency. For the cases of controlling either the coupling strength parameter or the resonator frequency, we find that it is possible to reach maximal unit efficiency when the parameter is sufficiently increased in the first adiabatic stage. In addition, for the first two cases the maximal work extracted is obtained at parameter values corresponding to high efficiency, which constitutes an improvement over current proposals for this cycle.
\end{abstract}
\maketitle
\section{Introduction}
The possibility of creating nano-scale devices that are more efficient than their classical counterparts motivates the study of quantum versions of the well-known cycles of classical thermodynamics
\cite{Scully2002,Rezek2006Irreversible,Quan2007,Esposito2010,Alvarado2017}. The quantum nature of the working substance and the first law of thermodynamics are the basic ingredients for establishing a relationship between classical thermodynamics and quantum mechanics.
In the early 2000s, a thermodynamical cycle with no classical analogue, termed the ``isoenergetic cycle'', was proposed by Bender \textit{et al.} \cite{Bender2000}, who envisioned the replacement of the heat baths with so-called ``energy baths''. This was originally presented as a proposal for substituting the concept of temperature with the expectation value of the system Hamiltonian \cite{Abe2011,Wang2012,Liu2016,Santos2017}. When the system is coupled to an energy bath, it evolves through an isoenergetic process, during which the expectation value of the Hamiltonian is constant. This cycle was originally proposed for a non-relativistic single particle confined in a one-dimensional potential well. Recently, it was extended to the relativistic regime by considering the single-particle Dirac spectrum \cite{Munoz2012,Pena2016}, and it has also been applied to an ideal N-particle Fermi system \cite{Wang2015}.
On the other hand, light-matter systems are described at the most fundamental level by the quantum Rabi model \cite{Rabi1937}. This model describes the interaction of a single electromagnetic mode with a two-level system (TLS), and it has been studied over a wide range of the coupling parameter \cite{Shore1993,niemczyk2010circuit,Casanova2010DSC}. In particular, the ultrastrong-coupling (USC) regime, which has been experimentally realized \cite{niemczyk2010circuit}, corresponds to the case where the coupling strength and the resonator frequency become comparable. The light-matter interaction in the USC regime presents interesting properties, such as parity symmetry and an anharmonic energy spectrum \cite{Braak2011}. These properties have led to remarkable applications of systems described by the USC, also termed quantum Rabi systems (QRS), such as fast quantum gates \cite{Romero2012ultrafast}, efficient energy transfer \cite{Kyaw2017,Cardenas-Lopez2017}, and the generation of non-classical states \cite{Nori2010,Albarran-Arriagada2017}. Further, current progress in superconducting circuit technology has enabled the manipulation of several parameters of QRSs \cite{Peropadre2010switchable,Gustavsson2012driven,Wallquist2006selective,Sandberg2009exploring,Paauw2009tuning,schwarz2013gradiometric,Koch2007Transmon,Barends2013Xmon,Martinis2014Tunable}. This progress, together with the anharmonic spectrum of the QRS, makes it an interesting system in which to investigate the performance of the isoenergetic cycle.
In this work we study the isoenergetic cycle with a working substance that corresponds to a two-level system interacting with a single electromagnetic mode, described by the quantum Rabi model. We consider an analytical approximation of the energy levels which allows for a qualitative and quantitative description of the thermodynamical quantities, depending on the range of validity of the approximation. We obtain the total work extracted and the efficiency of the cycle under the variation of each of the parameters of the model, namely, the coupling strength, the resonator frequency, and the two-level system frequency. For the cases where the energy spectrum shows nonlinearity and degeneracy, we see that the cycle performance is improved. In particular, we find that the nonlinear dependence of the energy levels on either the coupling strength, $g$, or the resonator frequency, $\omega$, allows the cycle efficiency to reach its maximal unit value when the parameter is sufficiently increased in the first adiabatic stage.
\subsection{Quantum Rabi model}
We will consider a working substance composed of a light-matter system described by the quantum Rabi model \cite{Rabi1937,Braak2011}, which reads as
\begin{figure}
\caption{Two lowest energy levels of the quantum Rabi model as a function of (a) the coupling strength $g$, with $\omega=\Omega$, (b) the resonator frequency $\omega$, with $g=\Omega$, and (c) the TLS frequency $\Omega$, with $g=\omega$. The solid line denotes the exact diagonalization of Eq.~(\ref{Hamiltonian}).}
\label{Fig1}
\end{figure}
\begin{equation}
H = \frac{\hbar \Omega}{2} \sigma_{z} + \hbar \omega a^{\dagger}a + \hbar g \sigma_{x} (a^{\dagger} + a),
\label{Hamiltonian}
\end{equation}
where $a$ ($a^{\dagger}$) corresponds to the bosonic annihilation (creation) operator of the resonator mode, while $\sigma_{x}$ and $\sigma_{z}$ stand for the Pauli operators describing the two-level system.
In addition, $\Omega$, $\omega$, and $g$ correspond to the TLS frequency, the resonator frequency, and the TLS-resonator coupling strength, respectively.
This model has been considered for several applications in quantum information processing \cite{Nataf2011,Romero2012ultrafast,kyaw2015scalable,Kyaw2015QEC,joshi2017qubit,AlbarranArriagada2018}. The ratio between the coupling strength and the resonator frequency, $g/\omega$ ($\omega\sim\Omega$), separates the behavior of the system into different regimes \cite{Wolf2013,Rossatto2017}. In the strong coupling regime, where the coupling strength is much larger than any decoherence or dephasing rate in the system, and for values $g/\omega \lesssim 10^{-2}$, one can perform the rotating wave approximation (RWA) and the system can be described by the Jaynes-Cummings model \cite{JC1963}. As the ratio $g/\omega$ is increased beyond the strong coupling regime, there is a breakdown of the RWA and the system must be described by the full quantum Rabi model. We distinguish two main regimes for the latter case: the ultrastrong-coupling (USC) regime \cite{niemczyk2010circuit,FornDiaz2010BSS,Bourassa2009USC}, where the coupling strength is comparable to the resonator frequency, $g \lesssim \omega$, and the deep-strong-coupling (DSC) regime \cite{yoshihara2017DSC,Casanova2010DSC}, where the interaction parameter is greater than the relevant frequencies, $g > \omega$.
In this work we study the isoenergetic cycle for a working substance described by the two lowest energy levels of the quantum Rabi model. In order to better describe the behavior of the thermodynamical figures of merit, we will use a simple approximation for the two lowest energy levels, employed in a recent work \cite{Alvarado2017} based on Refs.~\cite{Irish2007,Yu2012analytical}. The approximate energy levels are given by
\begin{eqnarray}
\label{levels}
E_{\textbf{0}} =&& - \hbar \frac{g^2}{\omega} - \hbar \frac{\Omega}{2} e^{-2\frac{g^{2}}{\omega^{2}}},\nonumber\\
E_{\textbf{1}} =&& - \hbar \frac{g^2}{\omega} + \hbar \frac{\Omega}{2} e^{-2\frac{g^{2}}{\omega^{2}}}, \label{Approximation}
\end{eqnarray}
where $E_{\textbf{0}}$ and $E_{\textbf{1}}$ refer to the energies of the ground and first excited state, respectively. Figure \ref{Fig1} shows $E_{\textbf{0}}$ and $E_{\textbf{1}}$ as a function of each of the parameters, $g$, $\omega$, and $\Omega$, as obtained from Eq.~(\ref{levels}), compared with the numerical diagonalization of Eq.~(\ref{Hamiltonian}). We can see that the approximation given by Eq.~(\ref{levels}) captures the behavior of the spectrum for all values of $g$ and $\omega$ considered, while for the case of $\Omega$ it is not a good approximation for $\Omega>\omega$. Therefore, we will only consider the numerical calculation for the latter case.
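A quick numerical cross-check of Eq.~(\ref{levels}) is to diagonalize Eq.~(\ref{Hamiltonian}) in a truncated Fock basis and compare the two lowest eigenvalues. The sketch below is only illustrative: the Fock cutoff of 40 photons and the units $\hbar=1$ are arbitrary choices, and the $\Omega/2$ prefactor of the TLS term is the convention that reproduces the $g=0$ gap $\hbar\Omega$ of Eq.~(\ref{levels}).

```python
import numpy as np

def rabi_levels(g, omega, Omega, n_max=40):
    """Two lowest eigenvalues of H = (Omega/2) s_z + omega a^dag a + g s_x (a^dag + a),
    diagonalized in a Fock basis truncated at n_max photons (hbar = 1)."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)      # annihilation operator
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = (0.5 * Omega * np.kron(sz, np.eye(n_max))
         + omega * np.kron(np.eye(2), a.T @ a)
         + g * np.kron(sx, a + a.T))
    return np.linalg.eigvalsh(H)[:2]                  # eigvalsh sorts ascending

def approx_levels(g, omega, Omega):
    """Approximate levels of Eq. (levels): -g^2/omega -/+ (Omega/2) exp(-2g^2/omega^2)."""
    shift = -g**2 / omega
    half_gap = 0.5 * Omega * np.exp(-2 * g**2 / omega**2)
    return shift - half_gap, shift + half_gap
```

At $g=0$ both expressions give $\mp\hbar\Omega/2$, and both show the closing of the gap with increasing $g$ that bounds the operation range of the cycle.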
\subsection{\label{sec:firstlaw} First law of thermodynamics}
Let us consider a system with discrete energy levels whose Hamiltonian $\hat{H}\left(\xi\right)$ depends explicitly on a parameter $\xi$ that can be varied at an arbitrarily slow rate. We define the eigenstates and eigenenergies of $\hat{H}\left(\xi\right)$ by $\hat{H}\vert n;\xi\rangle=E_{n}(\xi)\vert n;\xi\rangle$. Then, for a state $\ket{\psi}=\sum_{n=0}c_n\ket{n;\xi}$, the average energy $\mean{E}=\mean{\hat{H}}$ of the system takes the form
\begin{equation}
\mean{E}=\sum_{n}p_{n}\left(\xi\right)E_{n}\left(\xi\right),
\end{equation}
where $p_{n}=\abs{c_{n}}^2$. The change in the average energy due to an arbitrary quasistatic process involving the modulation of the parameter $\xi$ is given by
\begin{eqnarray}
\delta\mean{E}=&&\sum_{n}E_{n}\left(\xi\right)\frac{\partial}{\partial\xi} p_{n}\left(\xi\right)\delta\xi +\sum_{n}p_{n}\left(\xi\right)\frac{\partial}{\partial\xi}E_{n}\left(\xi\right)\delta\xi \nonumber\\
=&&\delta Q + \delta W,
\label{eq1}
\end{eqnarray}
where
\begin{eqnarray}
\delta Q &&= \sum_{n}E_{n}\left(\xi\right)\frac{\partial}{\partial\xi} p_{n}\left(\xi\right)\delta\xi, \nonumber\\
\delta W &&= \sum_{n}p_{n}\left(\xi\right)\frac{\partial}{\partial\xi}E_{n}\left(\xi\right)\delta\xi. \label{WorkP}
\end{eqnarray}
Equation (\ref{eq1}) is cast in a form reminiscent of the first law of thermodynamics. However, the first term of Eq.~(\ref{eq1}) can only be associated with heat when it is possible to define a temperature in the system, as is the case of an interaction with a thermal reservoir in an isochoric process. Since this is not the case for isoenergetic processes, $\delta Q$ is known as the energy exchange \cite{Pena2016,Wang2015}, while the second term, $\delta W$, can be identified with the work done. That is, the work done corresponds to the change in the eigenenergies $E_{n}\left(\xi\right)$, in agreement with the fact that work can only be performed through a change in the generalized coordinates of the system, which in turn gives rise to a change in the eigenenergies.
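The split in Eq.~(\ref{eq1}) can be illustrated with a toy numerical example. The sketch below uses purely illustrative (non-Rabi) choices for $E_{n}(\xi)$ and $p_{n}(\xi)$ and verifies by finite differences that $\delta\mean{E}=\delta Q+\delta W$ as in Eq.~(\ref{WorkP}).

```python
import math

# Toy two-level model: the functional forms below are illustrative choices only.
def E(n, xi):            # eigenenergies E_0(xi), E_1(xi)
    return (-1) ** (n + 1) * 0.5 * math.exp(-xi)

def p(n, xi):            # normalized occupations, p_0 + p_1 = 1
    q = 0.5 + 0.3 * math.tanh(xi)
    return q if n == 0 else 1.0 - q

def mean_E(xi):
    return sum(p(n, xi) * E(n, xi) for n in (0, 1))

def dQ_dW(xi, h=1e-6):   # central finite differences for the two terms of Eq. (WorkP)
    dQ = sum(E(n, xi) * (p(n, xi + h) - p(n, xi - h)) / (2 * h) for n in (0, 1))
    dW = sum(p(n, xi) * (E(n, xi + h) - E(n, xi - h)) / (2 * h) for n in (0, 1))
    return dQ, dW

xi = 0.7
dQ, dW = dQ_dW(xi)
dE = (mean_E(xi + 1e-6) - mean_E(xi - 1e-6)) / 2e-6
assert abs(dE - (dQ + dW)) < 1e-6   # first-law split: d<E>/dxi = dQ/dxi + dW/dxi
```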
\subsection{\label{sec:isoenergetic} Isoenergetic Cycle}
\begin{figure}
\caption{Diagram of the Isoenergetic cycle for (a) $\xi\equiv g$, (b) $\xi\equiv \omega$ and (c) $\xi\equiv \Omega$. Stages $1 \rightarrow 2$ and $3\rightarrow4$ correspond to isoenergetic processes, while stages $2 \rightarrow 3 $ and $4 \rightarrow 1$ correspond to adiabatic processes.}
\label{Fig2}
\end{figure}
The isoenergetic cycle is composed of two adiabatic processes and two isoenergetic ones (see Fig.~\ref{Fig2}). The central idea of the isoenergetic process is to keep the initial energy expectation value constant throughout the procedure, which means $\delta Q + \delta W=0$. Therefore, both the work and the energy exchange are nonzero during this process. This means that for $\xi$ $\in$ $\left[\xi_{k},\xi_{\ell}\right]$, we have
\begin{equation}
\sum_{n}p_{n}(\xi_{k})E_{n}(\xi_{k})=\sum_{n}p_{n}(\xi)E_{n}(\xi)=\sum_{n}p_{n}(\xi_{\ell})E_{n}(\xi_{\ell}),
\label{eq6}
\end{equation}
where $k$ and $\ell$ refer to the end points of the expansion process ($k=1$, $\ell=2$) or the compression process ($k=3$, $\ell=4$). If we consider that the states at the ends of the isoenergetic process correspond to the ground state and first excited state of the system, as shown in Fig.~\ref{Fig2}, the processes are termed maximal expansion for $E_{\textbf{0}}(\xi_{1}) = E_{\textbf{1}}(\xi_{2})$, and maximal compression for $E_{\textbf{1}}(\xi_{3}) = E_{\textbf{0}}(\xi_{4})$. These conditions yield $\xi_{2}$ as a function of $\xi_{1}$, and $\xi_{4}$ as a function of $\xi_{3}$, and are referred to as the isoenergetic conditions.
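For the approximate spectrum of Eq.~(\ref{levels}), each isoenergetic condition is a one-dimensional root-finding problem. A minimal sketch (with the illustrative units $\hbar=\omega=\Omega=1$) solves $E_{\textbf{1}}(g_{2})=E_{\textbf{0}}(g_{1})$ by bisection, using the fact that $E_{\textbf{1}}$ decreases monotonically with $g$:

```python
import math

W, OM = 1.0, 1.0  # omega = Omega = hbar = 1 (illustrative units)

def E0(g):  # approximate ground level, Eq. (levels)
    return -g**2 / W - 0.5 * OM * math.exp(-2 * g**2 / W**2)

def E1(g):  # approximate first excited level
    return -g**2 / W + 0.5 * OM * math.exp(-2 * g**2 / W**2)

def g2_of_g1(g1, hi=10.0, it=200):
    """Solve the maximal-expansion condition E1(g2) = E0(g1) for g2 > g1 by
    bisection; E1 is monotonically decreasing in g, so the root is unique."""
    target, lo = E0(g1), g1
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if E1(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g1 = 0.5
g2 = g2_of_g1(g1)   # approximately 0.826 in these units
```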
For a two-level system, the energy exchange along the isoenergetic process of maximal expansion is given by \cite{Santos2017,Munoz2012}
\begin{eqnarray}
Q_{\textrm{in}}^{k\rightarrow \ell} = E_{\textbf{0}}(\xi_{k}) &\times& \ln\left[\Bigg|\frac{E_{\textbf{0}}(\xi_{\ell})-E_{\textbf{1}}(\xi_{\ell})}{E_{\textbf{0}}(\xi_{k})-E_{\textbf{1}}(\xi_{k})}\Bigg|\right] \nonumber \\
&+& \int_{\xi_{k}}^{\xi_{\ell}}\frac{E_{\textbf{0}}\frac{dE_{\textbf{1}}}{d\xi} - E_{\textbf{1}}\frac{dE_{\textbf{0}}}{d\xi}}{E_{\textbf{0}}(\xi)-E_{\textbf{1}}(\xi)} d\xi. \quad
\label{energyexchange}
\end{eqnarray}
where $k=1$ and $\ell=2$. For a maximal compression process we refer to the energy exchange as $Q_{\textrm{out}}^{k\rightarrow \ell}$ ($k=3$, $\ell=4$), and it is obtained by swapping the subscripts $\textbf{0}$ and $\textbf{1}$ in Eq.~(\ref{energyexchange}). The subscripts ``in'' and ``out'' denote that energy enters or leaves the system, respectively.
In an isoenergetic process, work is performed as the parameter $\xi$ is changed, as can be seen from Eq.~(\ref{WorkP}). At the same time, the energy exchange $Q_{\text{in(out)}}^{k\rightarrow\ell}$ is supplied by the energy bath in order to keep the expectation value of the Hamiltonian constant. Since the average energy change is zero in this process, we write
\begin{equation}
Q_{\text{in(out)}}^{k \rightarrow \ell} + W_{\textrm{iso}}^{k\rightarrow \ell}=0.
\end{equation}
where $W_{\textrm{iso}}^{k\rightarrow\ell}$ is the work done by the system; from this, we obtain $W_{\textrm{iso}}^{k\rightarrow \ell} = -Q_{\text{in(out)}}^{k \rightarrow \ell}$. As will be seen in the following sections, the isoenergetic processes are the only contribution to the total work extracted.
On the other hand, in a generic adiabatic process the occupation probabilities $p_{n}(\xi)$ are constant and only work is performed by the system, which is given by \cite{Quan2007}
\begin{eqnarray}
\nonumber
W_{\text{(ad)}}^{i\rightarrow j} &=&\int_{\xi_{i}}^{\xi_{j}}d\xi\left(\frac{\partial E}{\partial \xi}\right)_{p_{n}(\xi_{i})=p_{n}(\xi_{j})=\text{constant}} \\ &=&\sum_{n}p_{n}(\xi_{i})\left[E_{n}(\xi_{j})-E_{n}(\xi_{i})\right],
\end{eqnarray}
where the superscripts $(i, j)$ can take the values $(i=2,j=3)$ for the adiabatic expansion and $(i=4,j=1)$ for the adiabatic compression, respectively. From Fig.~\ref{Fig2} it is clear that, for each case, the net contribution of the adiabatic processes cancels out, that is, ${W_{\text{(ad)}}^{2 \rightarrow 3} + W_{\text{(ad)}}^{4 \rightarrow 1} = 0}$. Therefore, the total work extracted is obtained from the isoenergetic processes, and reads
\begin{equation}
W_{\textrm{total}} = W^{1\rightarrow 2}_{\textrm{iso}} + W^{3\rightarrow 4}_{\textrm{iso}}.
\end{equation}
Finally, the efficiency of the cycle is
\begin{equation}
\eta= \frac{W_{\textrm{total}}}{Q_{\textrm{in}}} = 1 - \frac{Q_{\text{out}}^{3\rightarrow 4}}{Q_{\textrm{in}}^{1\rightarrow 2}}.
\end{equation}
It is evident from this expression that improving the efficiency of an isoenergetic cycle requires reducing the ratio $Q^{3\rightarrow 4}_{\textrm{out}}/Q^{1\rightarrow 2}_{\textrm{in}}$. As will be shown later, the spectrum of the quantum Rabi system allows a better minimization of this ratio than most other systems previously considered.
The isoenergetic cycle is specified by the initial parameter $\xi_{1}$ and $\alpha^{(\xi)} \equiv \xi_{3}/\xi_{2}$, which characterizes the adiabatic process.
The quantum Rabi model depends on three parameters: the coupling strength $g$, the resonator frequency $\omega$, and the TLS frequency $\Omega$. In our cycle, we will fix two of them and vary the third, and we will consider the cases of varying each of the three parameters.
We have chosen the first isoenergetic process to be of maximal expansion, which will determine whether $\xi$ should be increased or decreased during the first isoenergetic stage. For the case of $\xi = g$ we must increase the parameter, whereas for $\xi = \omega$ and $\xi = \Omega$ we must decrease the parameter.
\section{Quantum Rabi Model as a Working Substance}
\subsection{\label{sec:varying} Case of $\xi\equiv g$}
\begin{figure}
\caption{(a) $g_{2}$ as a function of $g_{1}$, given by the isoenergetic condition $E_{\textbf{0}}(g_{1})=E_{\textbf{1}}(g_{2})$, and (b) $g_{4}$ as a function of $g_{3}$, given by the isoenergetic condition $E_{\textbf{1}}(g_{3})=E_{\textbf{0}}(g_{4})$.}
\label{Fig3}
\end{figure}
Let us start by considering the case of $\xi \equiv g$ as the parameter to be varied, and fix $\omega = \Omega$. This is motivated by experimentally reported control of the coupling strength \cite{Peropadre2010switchable,Gustavsson2012driven}. Figure~\ref{Fig2}~(a) shows the diagram of the isoenergetic cycle corresponding to this case.
Let us first consider the isoenergetic expansion and compression stages. The first isoenergetic process is subject to the isoenergetic condition $E_{\textbf{0}}(g_{1})=E_{\textbf{1}}(g_{2})$, which yields $g_{2}$ as a function of $g_{1}$. This is shown in Fig.~\ref{Fig3}~(a). Due to the structure of the energy levels, the range of values of $g_{1}$ in which the cycle can be operated is approximately $0<g_{1}<1.5\,\Omega$. Beyond this value, the energy levels become degenerate and we expect no energy exchange in the isoenergetic process. Therefore, the energy spectrum imposes a bound on the range of values of $g_{1}$ for the operation of the isoenergetic cycle. Similarly, we consider the isoenergetic condition for the compression stage, $E_{\textbf{1}}(g_{3}) = E_{\textbf{0}}(g_{4})$, and obtain the values of $g_{4}$ for given $g_{3}$, as shown in Fig.~\ref{Fig3}~(b).
\begin{figure}
\caption{(a) Energy exchange $Q_{\textrm{in}}^{1\rightarrow 2}$ and (b) total work extracted $W_{\textrm{total}}$, as functions of $g_{1}$ for different values of $\alpha^{(g)}$.}
\label{Fig4}
\end{figure}
\begin{figure}
\caption{Efficiency $\eta$ as a function of $g_{1}$ for different values of $\alpha^{(g)}$.}
\label{Fig5}
\end{figure}
From Eq. (\ref{energyexchange}), we obtain the energy exchange for the isoenergetic expansion and compression process as
\begin{eqnarray}
\label{Q12}
Q^{1\rightarrow 2}_{\textrm{in}}=\frac{2}{\omega^{2}}\left(g_{2}^{2}-g_{1}^{2}\right)\left(\frac{\hbar g_{1}^{2}}{\omega}+\frac{\hbar\Omega}{2}e^{-\frac{2g_{1}^{2}}{\omega^{2}}}\right) \\ \nonumber - \frac{\hbar}{\omega^{3}}\left(\omega^{2}\left(g_{2}^{2}-g_{1}^{2}\right)+\left(g_{2}^{4}-g_{1}^{4}\right)\right),
\end{eqnarray}
\begin{eqnarray}
\label{Q34}
Q^{3\rightarrow 4}_{\textrm{out}}=\frac{2}{\omega^{2}}\left(g_{3}^{2}-g_{4}^{2}\right)\left(-\frac{\hbar g_{3}^{2}}{\omega}+\frac{\hbar\Omega}{2}e^{-\frac{2g_{3}^{2}}{\omega^{2}}}\right) \\ \nonumber + \frac{\hbar}{\omega^{3}}\left(\omega^{2}\left(g_{3}^{2}-g_{4}^{2}\right)+\left(g_{3}^{4}-g_{4}^{4}\right)\right).
\end{eqnarray}
We can see from Eqs.~(\ref{Q12}) and (\ref{Q34}) that the energy exchange that enters or leaves the system is proportional to ${g_2^2-g_1^2}$ or ${g_3^2-g_4^2}$, respectively. Then, by inspecting Fig.~\ref{Fig2}~(a), we would expect that $Q^{1\rightarrow 2}_{\textrm{in}}/Q^{3\rightarrow 4}_{\textrm{out}}>1$, and that this ratio should be increased by incrementing $\alpha^{(g)}$.
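As a consistency check, the closed form of Eq.~(\ref{Q12}) can be compared against a direct numerical evaluation of Eq.~(\ref{energyexchange}). The sketch below uses the illustrative units $\hbar=\omega=\Omega=1$ and arbitrary endpoints (the two expressions agree for any $g_{1}<g_{2}$), evaluating the logarithmic term analytically and the integral by Simpson quadrature:

```python
import math

def E0(g):  # approximate levels of Eq. (levels), hbar = omega = Omega = 1
    return -g**2 - 0.5 * math.exp(-2 * g**2)

def E1(g):
    return -g**2 + 0.5 * math.exp(-2 * g**2)

def q12_closed(g1, g2):
    """Closed form of Eq. (Q12) with hbar = omega = Omega = 1."""
    return (2 * (g2**2 - g1**2) * (g1**2 + 0.5 * math.exp(-2 * g1**2))
            - ((g2**2 - g1**2) + (g2**4 - g1**4)))

def q12_numeric(g1, g2, n=2000, h=1e-6):
    """Direct evaluation of Eq. (energyexchange) with Simpson quadrature."""
    def d(f, g):                      # central finite-difference derivative
        return (f(g + h) - f(g - h)) / (2 * h)
    def integrand(g):
        return (E0(g) * d(E1, g) - E1(g) * d(E0, g)) / (E0(g) - E1(g))
    log_term = E0(g1) * math.log(abs((E0(g2) - E1(g2)) / (E0(g1) - E1(g1))))
    s = integrand(g1) + integrand(g2)
    dg = (g2 - g1) / n
    for k in range(1, n):             # Simpson weights 4, 2, 4, ...
        s += integrand(g1 + k * dg) * (4 if k % 2 else 2)
    return log_term + s * dg / 3
```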
On the other hand, for the first and second adiabatic processes the work done is given by $W^{2\rightarrow 3}=E_{\textbf{1}}(g_{3})-E_{\textbf{1}}(g_{2})$ and $W^{4\rightarrow 1}=E_{\textbf{0}}\left(g_{1}\right)-E_{\textbf{0}}\left(g_{4}\right)$, respectively, where $g_{3}=\alpha^{(g)} g_{2}$ and $g_{4}$ is specified by the second isoenergetic condition.
The total work extracted, $W_{\textrm{total}}$, depends on $g_{1}$ and $\alpha^{(g)}$, as shown in Fig.~\ref{Fig4}~(b). We see from the figure that incrementing the value of the adiabatic parameter $\alpha^{(g)}$ increases the total work extracted, as would be expected. In addition, the total work extracted vanishes as $g_{1}\rightarrow 1.5\,\Omega$, which is a consequence of the energy levels becoming degenerate at these values of the coupling strength.
Figure \ref{Fig5} shows the efficiency, $\eta$, of the cycle as a function of $g_{1}$ for different values of $\alpha^{(g)}$. From this figure, we see that the efficiency increases with $g_{1}$ as well as with $\alpha^{(g)}$. This is a consequence of the nonlinear dependence of the energy spectrum on the parameter $g$. Additionally, we see that for finite values of $g_{1}$ the efficiency quickly approaches its maximal theoretical value, instead of asymptotically converging to it \cite{Santos2017,Munoz2012,Pena2016}. This can be understood from Fig.~\ref{Fig2}~(a) and Fig.~\ref{Fig3}: as $g_{1}$ and $\alpha^{(g)}$ increase, we can expect the ratio $Q^{3\rightarrow 4}_{\textrm{out}}/Q^{1\rightarrow 2}_{\textrm{in}}$ to be minimized. This is because the nonlinearity of the energy spectrum is such that the second isoenergetic process happens closer to the region where the energy levels become degenerate, and from Eq.~(\ref{Q34}) we see that if $g_{4} \rightarrow g_{3}$, then $Q^{3\rightarrow 4}_{\textrm{out}}\rightarrow 0$. However, this happens as $W_{\text{total}}\rightarrow 0$, as can be seen from Fig.~\ref{Fig4}~(b). On the other hand, in the region of maximal total work extracted we find efficiencies in the range $0.5 <\eta <0.95$, depending on the value of $\alpha^{(g)}$.
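The full $\xi\equiv g$ cycle can be sketched numerically from Eqs.~(\ref{levels}), (\ref{Q12}), and (\ref{Q34}): solve both isoenergetic conditions by bisection, then form the efficiency from the magnitudes of the two energy exchanges (taking magnitudes keeps the ratio independent of sign conventions). The units $\hbar=\omega=\Omega=1$ and the sample point $g_{1}=0.5$, $\alpha^{(g)}=2$ are illustrative choices.

```python
import math

def E0(g):  # approximate levels of Eq. (levels), hbar = omega = Omega = 1
    return -g**2 - 0.5 * math.exp(-2 * g**2)

def E1(g):
    return -g**2 + 0.5 * math.exp(-2 * g**2)

def bisect_dec(f, target, lo, hi, it=200):
    """Root of f(x) = target for a monotonically decreasing f on [lo, hi]."""
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q12(g1, g2):  # Eq. (Q12)
    return (2 * (g2**2 - g1**2) * (g1**2 + 0.5 * math.exp(-2 * g1**2))
            - ((g2**2 - g1**2) + (g2**4 - g1**4)))

def q34(g3, g4):  # Eq. (Q34)
    return (2 * (g3**2 - g4**2) * (-g3**2 + 0.5 * math.exp(-2 * g3**2))
            + ((g3**2 - g4**2) + (g3**4 - g4**4)))

def cycle(g1, alpha):
    g2 = bisect_dec(E1, E0(g1), g1, 10.0)   # maximal expansion: E1(g2) = E0(g1)
    g3 = alpha * g2                         # first adiabatic stage
    g4 = bisect_dec(E0, E1(g3), 0.0, g3)    # maximal compression: E0(g4) = E1(g3)
    q_in, q_out = abs(q12(g1, g2)), abs(q34(g3, g4))
    return q_in, q_out, 1.0 - q_out / q_in  # efficiency eta = 1 - Q_out/Q_in
```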
\subsection{\label{sec:varomega} Case of $\xi \equiv \omega$}
\begin{figure}
\caption{(a) $\omega_{2}^{-1}$ as a function of $\omega_{1}^{-1}$, given by the isoenergetic condition $E_{\textbf{1}}(\omega_{2})=E_{\textbf{0}}(\omega_{1})$, and (b) $\omega_{4}^{-1}$ as a function of $\omega_{3}^{-1}$, given by $E_{\textbf{0}}(\omega_{4})=E_{\textbf{1}}(\omega_{3})$.}
\label{Fig6}
\end{figure}
Now, we consider the choice of $\xi \equiv \omega$ as the parameter to be varied, and fix $g = \Omega$. This is motivated by experimentally reported control of the resonator frequency \cite{Wallquist2006selective,Sandberg2009exploring}.
For this case, the energy exchange for maximal expansion and compression are given by
\begin{eqnarray}
Q^{1 \rightarrow 2}_{\textrm{in}} &=& \left(-2g^{2}\left(\frac{1} {\omega_{2}^{2}}-\frac{1}{\omega_{1}^{2}}\right)\right) E_{\textbf{0}}\left(\omega_{1}\right)\\ \nonumber
&-& \frac{4}{3}\hbar g^{4}\left(\frac{1}{\omega_{2}^{3}}-\frac{1}{\omega_{1}^{3}}\right) -\hbar g^{2}\left(\frac{1}{\omega_{2}}-\frac{1}{\omega_{1}}\right),
\end{eqnarray}
\begin{eqnarray}
Q^{3\rightarrow 4}_{\textrm{out}}&=& \left(2g^{2}\left(\frac{1} {\omega_{3}^{2}}-\frac{1}{\omega_{4}^{2}}\right)\right) E_{\textbf{1}}\left(\omega_{3}\right)\\ \nonumber
&+& \frac{4}{3}\hbar g^{4}\left(\frac{1}{\omega_{3}^{3}}-\frac{1}{\omega_{4}^{3}}\right) +\hbar g^{2}\left(\frac{1}{\omega_{3}}-\frac{1}{\omega_{4}}\right).
\end{eqnarray}
where $\omega_{2}$ and $\omega_{4}$ are obtained from the isoenergetic conditions $E_{\textbf{1}}(\omega_{2}) = E_{\textbf{0}}(\omega_{1})$ and $E_{\textbf{0}}(\omega_{4}) = E_{\textbf{1}}(\omega_{3})$, respectively. This is presented in Fig.~\ref{Fig6}. In what follows, we find it convenient to express the results in terms of $1/\omega_{1}$.
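The isoenergetic endpoints for this case can be obtained in the same way as for $\xi\equiv g$. A minimal sketch (illustrative units $\hbar=g=\Omega=1$, with the arbitrary choices $\omega_{1}=2$ and $\alpha^{(\omega)}=0.8$) solves both conditions by bisection, using the monotonicity of the approximate levels in $\omega$ over the relevant range:

```python
import math

G, OM = 1.0, 1.0  # g = Omega = hbar = 1 (illustrative units)

def E0(w):  # approximate ground level of Eq. (levels) as a function of omega
    return -G**2 / w - 0.5 * OM * math.exp(-2 * G**2 / w**2)

def E1(w):  # approximate first excited level
    return -G**2 / w + 0.5 * OM * math.exp(-2 * G**2 / w**2)

def bisect_inc(f, target, lo, hi, it=200):
    """Root of f(x) = target for a monotonically increasing f on [lo, hi]."""
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w1 = 2.0
w2 = bisect_inc(E1, E0(w1), 0.1, w1)   # maximal expansion: E1(w2) = E0(w1), w2 < w1
alpha = 0.8
w3 = alpha * w2                        # first adiabatic stage decreases omega further
w4 = bisect_inc(E0, E1(w3), w3, w1)    # maximal compression: E0(w4) = E1(w3)
```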
\begin{figure}
\caption{(a) Energy exchange $Q_{\textrm{in}}^{1\rightarrow 2}$ and (b) total work extracted $W_{\textrm{total}}$, as functions of $\omega_{1}^{-1}$ for different values of $\alpha^{(\omega)}$.}
\label{Fig7}
\end{figure}
\begin{figure}
\caption{Efficiency $\eta$ as a function of $\omega_{1}^{-1}$ for different values of $\alpha^{(\omega)}$.}
\label{Fig8}
\end{figure}
In this case, the range of values of $\omega$ for the operation of the isoenergetic cycle is bounded from below by $\omega = 0.5 \, \Omega$. Below this value, the energy levels become degenerate and there is neither total work extracted nor energy exchange, as can be seen from Fig.~\ref{Fig7}.
For the first and second adiabatic processes the work done is given by $W^{2\rightarrow 3} = E_{\textbf{1}}\left(\omega_{3}\right)-E_{\textbf{1}}\left(\omega_{2}\right)$ and ${W^{4\rightarrow 1} = E_{\textbf{0}}\left(\omega_{1}\right)-E_{\textbf{0}}\left(\omega_{4}\right)}$, respectively, where ${\omega_{3}=\alpha^{(\omega)}\omega_{2}}$ and $\omega_{4}$ is specified by the second isoenergetic condition.
The total work extracted, $W_{\textrm{total}}$, is shown in Fig.~\ref{Fig7}~(a) as a function of $\omega_{1}^{-1}$. We see that for ${0.35 \lesssim \omega_{1}^{-1} \lesssim 0.45}$ (in units of $\Omega^{-1}$) we obtain the region of maximal $W_{\textrm{total}}$ for different values of $\alpha^{(\omega)}$. In addition, in Fig.~\ref{Fig7} we see that as $\omega_{1}^{-1}\rightarrow 2 \, \Omega^{-1}$, then $Q_{\text{in}}^{1\rightarrow 2}\rightarrow 0$ and $W_{\text{total}}\rightarrow 0$, which is a consequence of the energy levels becoming degenerate beyond this value of the resonator frequency.
\begin{figure}
\caption{(a) $\Omega_{2}$ as a function of $\Omega_{1}$, given by the first isoenergetic condition, and (b) $\Omega_{4}$ as a function of $\Omega_{3}$, given by the second isoenergetic condition.}
\label{IsoCond_WQ}
\end{figure}
In Fig.~\ref{Fig8} we show the efficiency as a function of $\omega_{1}^{-1}$ for different values of $\alpha^{(\omega)}$, where we see that the efficiency increases as $\alpha^{(\omega)}$ is reduced. Notice that the efficiency approaches its maximal theoretical value within the range of $\omega_{1}$ considered. The reason for this is similar to the case of $\xi\equiv g$: the nonlinearity and degeneracy of the energy spectrum lead to a minimization of the ratio $Q^{3\rightarrow 4}_{\textrm{out}}/Q^{1\rightarrow 2}_{\textrm{in}}$, as can be seen in Fig.~\ref{Fig2}~(b). At the same time, the maximization of the efficiency occurs as the energy exchange and the total work extracted go to zero. On the other hand, in the region of maximal total work extracted we find efficiencies in the range $0.1 <\eta <0.65$, depending on the value of $\alpha^{(\omega)}$.
For both the $\xi\equiv g$ case and the $\xi\equiv\omega$ case, the nonlinearity and degeneracy of the energy spectrum allow the isoenergetic cycle to reach maximal efficiency.
\subsection{Case of $\xi \equiv \Omega$}
For the final case, we consider the choice $\xi \equiv \Omega$ as the parameter to be varied, and fix $g=\omega$. This is motivated by the experimentally reported control of the TLS frequency \cite{Paauw2009tuning,schwarz2013gradiometric}. Since the approximation of Eq.~(\ref{Approximation}) breaks down for $\Omega>\omega$, we will only consider numerical calculations of the figures of merit.
The solution of the isoenergetic condition is shown in Fig.~\ref{IsoCond_WQ}. This case differs from the previous ones in that there is no need to limit the parameter $\Omega$ to a specific range of values, because there is no degeneracy of the energy levels. Nonetheless, we have restricted the values of $\Omega$ to the range $0.5<\Omega<6$ (in units of $\omega$) to facilitate the comparison with the other cases.
The total work extracted is shown in Fig.~\ref{Work_WQ}. It can be seen that it is considerably smaller than in the previous cases, as expected from inspecting the energy spectrum in Fig.~\ref{Fig1}~(c). Since in this case there is no degeneracy, the total work extracted does not vanish within the chosen range of the parameter.
In Fig.~\ref{Eff_WQ} we show the efficiency as a function of $\Omega_{1}^{-1}$ for different values of $\alpha^{(\Omega)}$. Here, the efficiency is smaller than in the previous cases. This is because the dependence of the energy levels on $\Omega$ is closer to linear than for the other two parameters previously considered.
\begin{figure}
\caption{(a) Energy exchange $Q_{\textrm{in}}^{1\rightarrow 2}$ and (b) total work extracted $W_{\textrm{total}}$, as functions of $\Omega_{1}^{-1}$ for different values of $\alpha^{(\Omega)}$.}
\label{Work_WQ}
\end{figure}
\begin{figure}
\caption{Efficiency $\eta$ as a function of $\Omega_{1}^{-1}$ for different values of $\alpha^{(\Omega)}$.}
\label{Eff_WQ}
\end{figure}
\section{Conclusions}
We have studied the performance of an isoenergetic cycle with a working substance described by the quantum Rabi model. We have considered the variation of each of the parameters of the system: $g$, $\omega$, and $\Omega$. We used a simple approximation of the energy levels which helps to understand the behavior of the figures of merit.
We find that the nonlinear dependence of the energy levels on either the coupling strength, $g$, or the resonator frequency, $\omega$, allows for the cycle efficiency to reach maximal unit value. This occurs when the parameter is sufficiently increased (for $g$) or decreased (for $\omega$) in the first adiabatic stage. On the other hand, maximal total work extracted is found at efficiencies in the range of $0.5<\eta<0.95$ for the variation of $g$, and in the range of $0.1< \eta <0.65$ for the variation of $\omega$, which depend on the changes induced by the adiabatic processes.
Finally, we considered the case of varying the TLS frequency $\Omega$. We find that the total work extracted and the efficiency are considerably smaller than in the previous cases. This is because the dependence of the energy levels on $\Omega$ is closer to linear than for the other two parameters.
Therefore, we can distinguish the degeneracy and nonlinearity of the energy spectrum of the working substance as optimal conditions to consider for the isoenergetic cycle.
The authors acknowledge support from CONICYT Doctorado Nacional 21140587, CONICYT Doctorado Nacional 21140432, Direcci\'on de Postgrado USACH, FONDECYT-postdoctoral 3170010, Financiamiento Basal para Centros Cient\'ificos y Tecnol\'ogicos de Excelencia, under Project No. FB 0807 (Chile), USM-DGIIP grant number PI-M-17-3 (Chile) and FONDECYT under grant No. 1140194.
\begin{thebibliography}{44}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Scully}(2002)}]{Scully2002}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~O.}\ \bibnamefont
{Scully}},\ }\href {\doibase 10.1103/PhysRevLett.88.050602} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {88}},\ \bibinfo {pages} {050602} (\bibinfo {year}
{2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rezek}\ and\ \citenamefont
{Kosloff}(2006)}]{Rezek2006Irreversible}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Rezek}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Kosloff}},\ }\href {http://stacks.iop.org/1367-2630/8/i=5/a=083} {\bibfield
{journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo
{volume} {8}},\ \bibinfo {pages} {83} (\bibinfo {year} {2006})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Quan}\ \emph {et~al.}(2007)\citenamefont {Quan},
\citenamefont {Liu}, \citenamefont {Sun},\ and\ \citenamefont
{Nori}}]{Quan2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~T.}\ \bibnamefont
{Quan}}, \bibinfo {author} {\bibfnamefont {Y.-x.}\ \bibnamefont {Liu}},
\bibinfo {author} {\bibfnamefont {C.~P.}\ \bibnamefont {Sun}}, \ and\
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\href {\doibase
10.1103/PhysRevE.76.031105} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. E}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {031105}
(\bibinfo {year} {2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Esposito}\ \emph {et~al.}(2010)\citenamefont
{Esposito}, \citenamefont {Kawai}, \citenamefont {Lindenberg},\ and\
\citenamefont {Van~den Broeck}}]{Esposito2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Esposito}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kawai}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Lindenberg}}, \ and\
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Van~den Broeck}},\ }\href
{\doibase 10.1103/PhysRevE.81.041106} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo
{pages} {041106} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Barrios}\ \emph {et~al.}(2017)\citenamefont
{Barrios}, \citenamefont {Albarr\'an-Arriagada}, \citenamefont
{C\'ardenas-L\'opez}, \citenamefont {Romero},\ and\ \citenamefont
{Retamal}}]{Alvarado2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont
{Barrios}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Albarr\'an-Arriagada}}, \bibinfo {author} {\bibfnamefont {F.~A.}\
\bibnamefont {C\'ardenas-L\'opez}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Romero}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~C.}\
\bibnamefont {Retamal}},\ }\href {\doibase 10.1103/PhysRevA.96.052119}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {96}},\ \bibinfo {pages} {052119} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bender}\ \emph {et~al.}(2000)\citenamefont {Bender},
\citenamefont {Brody},\ and\ \citenamefont {Meister}}]{Bender2000}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Bender}}, \bibinfo {author} {\bibfnamefont {D.~C.}\ \bibnamefont {Brody}}, \
and\ \bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont {Meister}},\
}\href {http://stacks.iop.org/0305-4470/33/i=24/a=302} {\bibfield {journal}
{\bibinfo {journal} {Journal of Physics A: Mathematical and General}\
}\textbf {\bibinfo {volume} {33}},\ \bibinfo {pages} {4427} (\bibinfo {year}
{2000})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Abe}\ and\ \citenamefont {Okuyama}(2011)}]{Abe2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Abe}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Okuyama}},\
}\href {\doibase 10.1103/PhysRevE.83.021121} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo
{pages} {021121} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wang}\ and\ \citenamefont {He}(2012)}]{Wang2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Wang}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {He}},\
}\href {http://aip.scitation.org/doi/abs/10.1063/1.3681295} {\bibfield
{journal} {\bibinfo {journal} {Journal of Applied Physics}\ }\textbf
{\bibinfo {volume} {111}},\ \bibinfo {pages} {043505} (\bibinfo {year}
{2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Liu}\ and\ \citenamefont {Ou}(2016)}]{Liu2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Liu}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ou}},\
}\href {http://www.mdpi.com/1099-4300/18/6/205} {\bibfield {journal}
{\bibinfo {journal} {Entropy}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo
{pages} {205} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Santos}\ and\ \citenamefont
{Bernardini}(2017)}]{Santos2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.~G.}\
\bibnamefont {Santos}}\ and\ \bibinfo {author} {\bibfnamefont {A.~E.}\
\bibnamefont {Bernardini}},\ }\href {\doibase 10.1140/epjp/i2017-11538-1}
{\bibfield {journal} {\bibinfo {journal} {The European Physical Journal
Plus}\ }\textbf {\bibinfo {volume} {132}},\ \bibinfo {pages} {260} (\bibinfo
{year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mu\~noz}\ and\ \citenamefont
{Pe\~na}(2012)}]{Munoz2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Mu\~noz}}\ and\ \bibinfo {author} {\bibfnamefont {F.~J.}\ \bibnamefont
{Pe\~na}},\ }\href {\doibase 10.1103/PhysRevE.86.061108} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume}
{86}},\ \bibinfo {pages} {061108} (\bibinfo {year} {2012})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Pe\~na}\ \emph {et~al.}(2016)\citenamefont {Pe\~na},
\citenamefont {Ferr\'e}, \citenamefont {Orellana}, \citenamefont {Rojas},\
and\ \citenamefont {Vargas}}]{Pena2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~J.}\ \bibnamefont
{Pe\~na}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ferr\'e}},
\bibinfo {author} {\bibfnamefont {P.~A.}\ \bibnamefont {Orellana}}, \bibinfo
{author} {\bibfnamefont {R.~G.}\ \bibnamefont {Rojas}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Vargas}},\ }\href {\doibase
10.1103/PhysRevE.94.022109} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. E}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {022109}
(\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2015)\citenamefont {Wang},
\citenamefont {Ma},\ and\ \citenamefont {He}}]{Wang2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ma}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {He}},\ }\href
{http://stacks.iop.org/0295-5075/111/i=2/a=20006} {\bibfield {journal}
{\bibinfo {journal} {EPL (Europhysics Letters)}\ }\textbf {\bibinfo {volume}
{111}},\ \bibinfo {pages} {20006} (\bibinfo {year} {2015})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Rabi}(1937)}]{Rabi1937}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~I.}\ \bibnamefont
{Rabi}},\ }\href {\doibase 10.1103/PhysRev.51.652} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo {volume} {51}},\
\bibinfo {pages} {652} (\bibinfo {year} {1937})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Shore}\ and\ \citenamefont
{Knight}(1993)}]{Shore1993}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~W.}\ \bibnamefont
{Shore}}\ and\ \bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont
{Knight}},\ }\href {\doibase 10.1080/09500349314551321} {\bibfield {journal}
{\bibinfo {journal} {Journal of Modern Optics}\ }\textbf {\bibinfo {volume}
{40}},\ \bibinfo {pages} {1195} (\bibinfo {year} {1993})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Niemczyk}\ \emph {et~al.}(2010)\citenamefont
{Niemczyk}, \citenamefont {Deppe}, \citenamefont {Huebl}, \citenamefont
{Menzel}, \citenamefont {Hocke}, \citenamefont {Schwarz}, \citenamefont
{Garcia-Ripoll}, \citenamefont {Zueco}, \citenamefont {H{\"u}mmer},
\citenamefont {Solano}, \citenamefont {Marx},\ and\ \citenamefont
{Gross}}]{niemczyk2010circuit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Niemczyk}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Deppe}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Huebl}}, \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Menzel}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Hocke}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Schwarz}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Garcia-Ripoll}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Zueco}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {H{\"u}mmer}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Solano}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Marx}}, \ and\ \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Gross}},\ }\href {\doibase 10.1038/nphys1730} {\bibfield
{journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume}
{6}},\ \bibinfo {pages} {772} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Casanova}\ \emph {et~al.}(2010)\citenamefont
{Casanova}, \citenamefont {Romero}, \citenamefont {Lizuain}, \citenamefont
{Garc\'{\i}a-Ripoll},\ and\ \citenamefont {Solano}}]{Casanova2010DSC}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Casanova}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Romero}},
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Lizuain}}, \bibinfo
{author} {\bibfnamefont {J.~J.}\ \bibnamefont {Garc\'{\i}a-Ripoll}}, \ and\
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\href
{\doibase 10.1103/PhysRevLett.105.263603} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo
{pages} {263603} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Braak}(2011)}]{Braak2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Braak}},\ }\href {\doibase 10.1103/PhysRevLett.107.100401} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {107}},\ \bibinfo {pages} {100401} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Romero}\ \emph {et~al.}(2012)\citenamefont {Romero},
\citenamefont {Ballester}, \citenamefont {Wang}, \citenamefont {Scarani},\
and\ \citenamefont {Solano}}]{Romero2012ultrafast}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Romero}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ballester}},
\bibinfo {author} {\bibfnamefont {Y.~M.}\ \bibnamefont {Wang}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \ and\ \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\href {\doibase
10.1103/PhysRevLett.108.120501} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages}
{120501} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kyaw}\ \emph {et~al.}(2017)\citenamefont {Kyaw},
\citenamefont {Allende}, \citenamefont {Kwek},\ and\ \citenamefont
{Romero}}]{Kyaw2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont
{Kyaw}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Allende}},
\bibinfo {author} {\bibfnamefont {L.-C.}\ \bibnamefont {Kwek}}, \ and\
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Romero}},\ }\href
{http://stacks.iop.org/2058-9565/2/i=2/a=025007} {\bibfield {journal}
{\bibinfo {journal} {Quantum Science and Technology}\ }\textbf {\bibinfo
{volume} {2}},\ \bibinfo {pages} {025007} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {C{\'{a}}rdenas-L{\'{o}}pez}\ \emph
{et~al.}(2017)\citenamefont {C{\'{a}}rdenas-L{\'{o}}pez}, \citenamefont
{Albarr{\'{a}}n-Arriagada}, \citenamefont {Barrios}, \citenamefont
{Retamal},\ and\ \citenamefont {Romero}}]{Cardenas-Lopez2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont
{C{\'{a}}rdenas-L{\'{o}}pez}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Albarr{\'{a}}n-Arriagada}}, \bibinfo {author}
{\bibfnamefont {G.~A.}\ \bibnamefont {Barrios}}, \bibinfo {author}
{\bibfnamefont {J.~C.}\ \bibnamefont {Retamal}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Romero}},\ }\href
{https://www.nature.com/articles/s41598-017-04467-1} {\bibfield {journal}
{\bibinfo {journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {7}},\
\bibinfo {pages} {4157} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ashhab}\ and\ \citenamefont {Nori}(2010)}]{Nori2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Ashhab}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\
}\href {\doibase 10.1103/PhysRevA.81.042311} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo
{pages} {042311} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Albarr{\'{a}}n-Arriagada}\ \emph
{et~al.}(2017)\citenamefont {Albarr{\'{a}}n-Arriagada}, \citenamefont
{{Alvarado Barrios}}, \citenamefont {C{\'{a}}rdenas-L{\'{o}}pez},
\citenamefont {Romero},\ and\ \citenamefont
{Retamal}}]{Albarran-Arriagada2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Albarr{\'{a}}n-Arriagada}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {{Alvarado Barrios}}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {C{\'{a}}rdenas-L{\'{o}}pez}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Romero}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Retamal}},\ }\href
{http://iopscience.iop.org/article/10.1088/1751-8121/aa66a0/meta} {\bibfield
{journal} {\bibinfo {journal} {Journal of Physics A: Mathematical and
Theoretical}\ }\textbf {\bibinfo {volume} {50}},\ \bibinfo {pages} {184001}
(\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Peropadre}\ \emph {et~al.}(2010)\citenamefont
{Peropadre}, \citenamefont {Forn-D\'{\i}az}, \citenamefont {Solano},\ and\
\citenamefont {Garc\'{\i}a-Ripoll}}]{Peropadre2010switchable}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Peropadre}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Forn-D\'{\i}az}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Solano}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont
{Garc\'{\i}a-Ripoll}},\ }\href {\doibase 10.1103/PhysRevLett.105.023601}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {105}},\ \bibinfo {pages} {023601} (\bibinfo {year}
{2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gustavsson}\ \emph {et~al.}(2012)\citenamefont
{Gustavsson}, \citenamefont {Bylander}, \citenamefont {Yan}, \citenamefont
{Forn-D\'{\i}az}, \citenamefont {Bolkhovsky}, \citenamefont {Braje},
\citenamefont {Fitch}, \citenamefont {Harrabi}, \citenamefont {Lennon},
\citenamefont {Miloshi}, \citenamefont {Murphy}, \citenamefont {Slattery},
\citenamefont {Spector}, \citenamefont {Turek}, \citenamefont {Weir},
\citenamefont {Welander}, \citenamefont {Yoshihara}, \citenamefont {Cory},
\citenamefont {Nakamura}, \citenamefont {Orlando},\ and\ \citenamefont
{Oliver}}]{Gustavsson2012driven}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Gustavsson}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Bylander}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Yan}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Forn-D\'{\i}az}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bolkhovsky}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Braje}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Fitch}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Harrabi}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Lennon}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Miloshi}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Murphy}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Slattery}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Spector}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Turek}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Weir}}, \bibinfo {author} {\bibfnamefont {P.~B.}\
\bibnamefont {Welander}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Yoshihara}}, \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Cory}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nakamura}}, \bibinfo
{author} {\bibfnamefont {T.~P.}\ \bibnamefont {Orlando}}, \ and\ \bibinfo
{author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\href {\doibase
10.1103/PhysRevLett.108.170503} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages}
{170503} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wallquist}\ \emph {et~al.}(2006)\citenamefont
{Wallquist}, \citenamefont {Shumeiko},\ and\ \citenamefont
{Wendin}}]{Wallquist2006selective}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Wallquist}}, \bibinfo {author} {\bibfnamefont {V.~S.}\ \bibnamefont
{Shumeiko}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Wendin}},\ }\href {\doibase 10.1103/PhysRevB.74.224506} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{74}},\ \bibinfo {pages} {224506} (\bibinfo {year} {2006})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Sandberg}\ \emph {et~al.}(2009)\citenamefont
{Sandberg}, \citenamefont {Persson}, \citenamefont {Hoi}, \citenamefont
{Wilson},\ and\ \citenamefont {Delsing}}]{Sandberg2009exploring}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Sandberg}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Persson}},
\bibinfo {author} {\bibfnamefont {I.~C.}\ \bibnamefont {Hoi}}, \bibinfo
{author} {\bibfnamefont {C.~M.}\ \bibnamefont {Wilson}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Delsing}},\ }\href
{http://stacks.iop.org/1402-4896/2009/i=T137/a=014018} {\bibfield {journal}
{\bibinfo {journal} {Physica Scripta}\ }\textbf {\bibinfo {volume} {2009}},\
\bibinfo {pages} {014018} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Paauw}\ \emph {et~al.}(2009)\citenamefont {Paauw},
\citenamefont {Fedorov}, \citenamefont {Harmans},\ and\ \citenamefont
{Mooij}}]{Paauw2009tuning}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~G.}\ \bibnamefont
{Paauw}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fedorov}},
\bibinfo {author} {\bibfnamefont {C.~J. P.~M.}\ \bibnamefont {Harmans}}, \
and\ \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Mooij}},\ }\href
{\doibase 10.1103/PhysRevLett.102.090501} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo
{pages} {090501} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Schwarz}\ \emph {et~al.}(2013)\citenamefont
{Schwarz}, \citenamefont {Goetz}, \citenamefont {Jiang}, \citenamefont
{Niemczyk}, \citenamefont {Deppe}, \citenamefont {Marx},\ and\ \citenamefont
{Gross}}]{schwarz2013gradiometric}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Schwarz}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goetz}},
\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jiang}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Niemczyk}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Deppe}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Marx}}, \ and\ \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Gross}},\ }\href
{http://stacks.iop.org/1367-2630/15/i=4/a=045001} {\bibfield {journal}
{\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume}
{15}},\ \bibinfo {pages} {045001} (\bibinfo {year} {2013})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Koch}\ \emph {et~al.}(2007)\citenamefont {Koch},
\citenamefont {Yu}, \citenamefont {Gambetta}, \citenamefont {Houck},
\citenamefont {Schuster}, \citenamefont {Majer}, \citenamefont {Blais},
\citenamefont {Devoret}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{Koch2007Transmon}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Koch}}, \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Yu}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gambetta}}, \bibinfo
{author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}}, \bibinfo {author}
{\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont {M.~H.}\
\bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href {\doibase 10.1103/PhysRevA.76.042319}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {76}},\ \bibinfo {pages} {042319} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Barends}\ \emph {et~al.}(2013)\citenamefont
{Barends}, \citenamefont {Kelly}, \citenamefont {Megrant}, \citenamefont
{Sank}, \citenamefont {Jeffrey}, \citenamefont {Chen}, \citenamefont {Yin},
\citenamefont {Chiaro}, \citenamefont {Mutus}, \citenamefont {Neill},
\citenamefont {O'Malley}, \citenamefont {Roushan}, \citenamefont {Wenner},
\citenamefont {White}, \citenamefont {Cleland},\ and\ \citenamefont
{Martinis}}]{Barends2013Xmon}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Barends}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Neill}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {O'Malley}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Wenner}}, \bibinfo {author} {\bibfnamefont {T.~C.}\
\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Cleland}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Martinis}},\ }\href {\doibase 10.1103/PhysRevLett.111.080502} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {111}},\ \bibinfo {pages} {080502} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2014)\citenamefont {Chen},
\citenamefont {Neill}, \citenamefont {Roushan}, \citenamefont {Leung},
\citenamefont {Fang}, \citenamefont {Barends}, \citenamefont {Kelly},
\citenamefont {Campbell}, \citenamefont {Chen}, \citenamefont {Chiaro},
\citenamefont {Dunsworth}, \citenamefont {Jeffrey}, \citenamefont {Megrant},
\citenamefont {Mutus}, \citenamefont {O'Malley}, \citenamefont {Quintana},
\citenamefont {Sank}, \citenamefont {Vainsencher}, \citenamefont {Wenner},
\citenamefont {White}, \citenamefont {Geller}, \citenamefont {Cleland},\ and\
\citenamefont {Martinis}}]{Martinis2014Tunable}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Neill}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Leung}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Fang}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Campbell}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Chiaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author}
{\bibfnamefont {J.~Y.}\ \bibnamefont {Mutus}}, \bibinfo {author}
{\bibfnamefont {P.~J.~J.}\ \bibnamefont {O'Malley}}, \bibinfo {author}
{\bibfnamefont {C.~M.}\ \bibnamefont {Quintana}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Vainsencher}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Wenner}}, \bibinfo {author} {\bibfnamefont {T.~C.}\
\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Geller}}, \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Cleland}},
\ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\
}\href {\doibase 10.1103/PhysRevLett.113.220502} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\
\bibinfo {pages} {220502} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nataf}\ and\ \citenamefont
{Ciuti}(2011)}]{Nataf2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Nataf}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ciuti}},\
}\href {\doibase 10.1103/PhysRevLett.107.190402} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {107}},\
\bibinfo {pages} {190402} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kyaw}\ \emph
{et~al.}(2015{\natexlab{a}})\citenamefont {Kyaw}, \citenamefont {Felicetti},
\citenamefont {Romero}, \citenamefont {Solano},\ and\ \citenamefont
{Kwek}}]{kyaw2015scalable}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont
{Kyaw}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Felicetti}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Romero}}, \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Solano}}, \ and\ \bibinfo {author}
{\bibfnamefont {L.-C.}\ \bibnamefont {Kwek}},\ }\href {\doibase
10.1038/srep08621} {\bibfield {journal} {\bibinfo {journal} {Scientific
Reports}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {8621}
(\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kyaw}\ \emph
{et~al.}(2015{\natexlab{b}})\citenamefont {Kyaw}, \citenamefont
{Herrera-Mart\'{\i}}, \citenamefont {Solano}, \citenamefont {Romero},\ and\
\citenamefont {Kwek}}]{Kyaw2015QEC}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont
{Kyaw}}, \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Herrera-Mart\'{\i}}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Solano}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Romero}}, \
and\ \bibinfo {author} {\bibfnamefont {L.-C.}\ \bibnamefont {Kwek}},\ }\href
{\doibase 10.1103/PhysRevB.91.064503} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo
{pages} {064503} (\bibinfo {year} {2015}{\natexlab{b}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Albarr\'an-Arriagada}\ \emph
{et~al.}(2018)\citenamefont {Albarr\'an-Arriagada}, \citenamefont {Lamata},
\citenamefont {Solano}, \citenamefont {Romero},\ and\ \citenamefont
{Retamal}}]{AlbarranArriagada2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Albarr\'an-Arriagada}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Lamata}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Romero}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.~C.}\ \bibnamefont {Retamal}},\ }\href {\doibase
10.1103/PhysRevA.97.022306} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {022306}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wolf}\ \emph {et~al.}(2013)\citenamefont {Wolf},
\citenamefont {Vallone}, \citenamefont {Romero}, \citenamefont {Kollar},
\citenamefont {Solano},\ and\ \citenamefont {Braak}}]{Wolf2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont
{Wolf}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Vallone}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Romero}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Kollar}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Solano}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Braak}},\ }\href {\doibase
10.1103/PhysRevA.87.023835} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {023835}
(\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rossatto}\ \emph {et~al.}(2017)\citenamefont
{Rossatto}, \citenamefont {Villas-B\^oas}, \citenamefont {Sanz},\ and\
\citenamefont {Solano}}]{Rossatto2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~Z.}\ \bibnamefont
{Rossatto}}, \bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont
{Villas-B\^oas}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sanz}},
\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}},\ }\href
{\doibase 10.1103/PhysRevA.96.013849} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo
{pages} {013849} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Jaynes}\ and\ \citenamefont
{Cummings}(1963)}]{JC1963}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~T.}\ \bibnamefont
{Jaynes}}\ and\ \bibinfo {author} {\bibfnamefont {F.~W.}\ \bibnamefont
{Cummings}},\ }\href {\doibase 10.1109/PROC.1963.1664} {\bibfield {journal}
{\bibinfo {journal} {Proceedings of the IEEE}\ }\textbf {\bibinfo {volume}
{51}},\ \bibinfo {pages} {89} (\bibinfo {year} {1963})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Forn-D\'{\i}az}\ \emph {et~al.}(2010)\citenamefont
{Forn-D\'{\i}az}, \citenamefont {Lisenfeld}, \citenamefont {Marcos},
\citenamefont {Garc\'{\i}a-Ripoll}, \citenamefont {Solano}, \citenamefont
{Harmans},\ and\ \citenamefont {Mooij}}]{FornDiaz2010BSS}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Forn-D\'{\i}az}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Lisenfeld}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Marcos}},
\bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Garc\'{\i}a-Ripoll}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Solano}}, \bibinfo
{author} {\bibfnamefont {C.~J. P.~M.}\ \bibnamefont {Harmans}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Mooij}},\ }\href
{\doibase 10.1103/PhysRevLett.105.237001} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo
{pages} {237001} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bourassa}\ \emph {et~al.}(2009)\citenamefont
{Bourassa}, \citenamefont {Gambetta}, \citenamefont {Abdumalikov},
\citenamefont {Astafiev}, \citenamefont {Nakamura},\ and\ \citenamefont
{Blais}}]{Bourassa2009USC}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Bourassa}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Gambetta}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Abdumalikov}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Astafiev}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nakamura}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blais}},\ }\href
{\doibase 10.1103/PhysRevA.80.032109} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo
{pages} {032109} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yoshihara}\ \emph {et~al.}(2017)\citenamefont
{Yoshihara}, \citenamefont {Fuse}, \citenamefont {Ashhab}, \citenamefont
{Kakuyanagi}, \citenamefont {Saito},\ and\ \citenamefont
{Semba}}]{yoshihara2017DSC}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Yoshihara}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Fuse}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ashhab}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Kakuyanagi}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Saito}}, \ and\ \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Semba}},\ }\href {\doibase
10.1038/nphys3906} {\bibfield {journal} {\bibinfo {journal} {Nature
Physics}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {44} (\bibinfo
{year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Irish}(2007)}]{Irish2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~K.}\ \bibnamefont
{Irish}},\ }\href {\doibase 10.1103/PhysRevLett.99.173601} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {99}},\ \bibinfo {pages} {173601} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2012)\citenamefont {Yu},
\citenamefont {Zhu}, \citenamefont {Liang}, \citenamefont {Chen},\ and\
\citenamefont {Jia}}]{Yu2012analytical}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Yu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhu}}, \bibinfo
{author} {\bibfnamefont {Q.}~\bibnamefont {Liang}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Chen}}, \ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Jia}},\ }\href {\doibase
10.1103/PhysRevA.86.015803} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {015803}
(\bibinfo {year} {2012})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Singular measures and convolution operators}
\thanks{Research supported by grant BFM2000-0206-C04-03}
\author{J.~M. Aldaz}
\ead{[email protected]}
\author{Juan L. Varona}
\ead{[email protected]}
\address{Departamento de Matem\'aticas y Computaci\'on,
Universidad de La Rioja,\\
26004 Logro\~no, Spain}
\begin{abstract}
We show that in the study of certain convolution operators,
functions can be replaced by measures without changing the size of the
constants appearing in weak type $(1,1)$ inequalities. As an application,
we prove that the best constants for the centered
Hardy-Littlewood maximal operator associated to parallelotopes do not
decrease with the dimension.
\end{abstract}
\begin{keyword}
Hardy-Littlewood maximal function
\MSC 42A85
\end{keyword}
\end{frontmatter}
\section{Introduction}
The method of discretization for convolution
operators, due to M.~de Guzm\'an
(cf.~\cite{Guz}, Theorem~4.1.1), and further developed by
M.~T.~Men\'ar\-guez and F.~Soria (cf.~Theorem~1 of~\cite{MeSo})
consists in replacing functions by finite sums of Dirac
deltas in the study of the operator.
So far, the main applications of these theorems have been
related to the Hardy-Littlewood maximal function, and
more precisely, to the determination of bounds for the best
constants $c_d$ appearing in the weak type $(1,1)$ inequalities
(cf.~\cite{MeSo},~\cite{A1},~\cite{Mel1}, and \cite{Mel2} for the
one dimensional case, and for higher dimensions,
\cite{MeSo} and~\cite{A2}).
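For readers outside the area, the objects behind the constants $c_d$ are standard; the following definitions are implicit in the discussion above rather than stated in the text (we supply them here for concreteness):

```latex
% Centered Hardy-Littlewood maximal operator over cubes Q(x,r) centered
% at x with half-side r (standard definitions, not stated in the text):
Mf(x) = \sup_{r > 0} \frac{1}{\lambda^d\bigl(Q(x,r)\bigr)}
        \int_{Q(x,r)} \lvert f(y) \rvert \, dy ,
% and c_d is the smallest constant in the weak type (1,1) inequality
\lambda^d\bigl(\{ x : Mf(x) > t \}\bigr)
        \le \frac{c_d}{t}\, \lVert f \rVert_1
\qquad (t > 0,\ f \in L^1(\mathbb{R}^d)).
```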
In this paper we complement de Guzm\'an's Theorem by proving
that one can
consider arbitrary measures instead of finite discrete measures, and the same
conclusions still hold (Theorem~\ref{th:equiv}). A special case of our
theorem (where the space is the real line and the convolution operator
is precisely the Hardy-Littlewood maximal function) appears
in~\cite{Mel2} (see Theorem~2).
Regarding upper bounds
for $c_d$, E.~M.~Stein and J.~Str\"omberg (see~\cite{StSt}) showed
that the constants grow at most like $O(d \log d)$ for arbitrary balls, and
like $O(d)$ in the case of euclidean balls. With respect to lower
bounds for the maximal function associated to cubes, it is shown
in~\cite{MeSo}, Theorem~6,
that $c_d \ge \big( \frac{1+2^{1/d}}{2} \big)^d$. These
bounds, which decrease with the dimension to $\sqrt{2}$,
were conjectured to be optimal in~\cite{Me}.
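That these lower bounds decrease to $\sqrt{2}$ can be checked directly; the following one-line computation is ours, not spelled out in the text:

```latex
% Since 2^{1/d} = e^{(\ln 2)/d} = 1 + (\ln 2)/d + O(d^{-2}), we get
\Bigl( \frac{1 + 2^{1/d}}{2} \Bigr)^{d}
  = \Bigl( 1 + \frac{\ln 2}{2d} + O(d^{-2}) \Bigr)^{d}
  \xrightarrow[d \to \infty]{} e^{(\ln 2)/2} = \sqrt{2}.
```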
The ``optimality part'' of the conjecture was refuted
in~\cite{A2}, where
it was proved that $\liminf_d c_d \ge \frac{47 \sqrt{2}}{36}$.
It is an easy consequence of Theorem~\ref{th:equiv}
that the ``decreasing part'' of the conjecture
is also false:
For cubes the inequality $c_d\le c_{d+1}$ holds in every dimension $d$
(Theorem~\ref{th:ineq}). In dimensions $1$ and $2$ the stronger result
$c_1 < c_2$ is known, thanks to the recent determination by Antonios
D.~Melas of the exact value of $c_1$ as $\frac{11
+ \sqrt{61}}{12}$ (Corollary~1 of \cite{Mel2}). Since $c_2 \ge
\sqrt\frac32 + \frac{3-\sqrt2}4$, by Proposition~1.4 of \cite{A2},
Melas's result entails that the first inequality is strict.
Finally, we note that the original question of Stein and Str\"omberg (see
also~\cite{BH}, Problem~7.74~c, proposed by A.~Carbery) as to whether
$\lim_d c_d < \infty$ or $\lim_d c_d = \infty$, remains open.
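As a quick numerical sanity check of the constants quoted above (a sketch for the reader's convenience, not part of the paper's argument; all values come from the cited results):

```python
import math

# Melas's exact one-dimensional constant (Corollary 1 of [Mel2])
c1 = (11 + math.sqrt(61)) / 12            # approx 1.5675

# Lower bound for c_2 (Proposition 1.4 of [A2])
c2_lower = math.sqrt(3 / 2) + (3 - math.sqrt(2)) / 4   # approx 1.6212

# Bound refuting the optimality part: liminf_d c_d >= 47*sqrt(2)/36
liminf_lower = 47 * math.sqrt(2) / 36     # approx 1.8463, above sqrt(2)

# The lower bounds ((1 + 2^(1/d))/2)^d of [MeSo] decrease to sqrt(2)
bounds = [((1 + 2 ** (1 / d)) / 2) ** d for d in range(1, 50)]
```

Evaluating these expressions confirms that $c_1 < c_2$ and that the bound $\liminf_d c_d \ge 47\sqrt{2}/36$ exceeds $\sqrt{2}$, while the sequence of bounds from~\cite{MeSo} is monotonically decreasing toward $\sqrt{2}$.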
\section{Convolution operators and measures}
We shall state the main theorem of this note in terms of a
locally compact group~$X$.
Denote by $C(X)$ the family of all continuous
functions $g \colon X\to \mathbb{R}$, by $C_c(X)$ the continuous
functions with compact support, and by $\lambda$ the left Haar
measure on~$X$. If $X = \mathbb{R}^d$,
$\lambda^d$ will stand for the $d$-dimensional Lebesgue
measure. As usual, we shall write $dx$ instead of $d\lambda (x)$.
A finite real valued Borel measure $\mu$ on $X$ is \emph{Radon} if
$|\mu|$ is inner
regular with respect to the compact sets. It is well
known that if $X$ is a locally compact separable metric space, then
every finite Borel measure is automatically Radon.
Let $\mathcal{N}$ be a neighborhood base at $0$ such that each element
of $\mathcal{N}$ has compact closure, and let
$\{h_U : U \in \mathcal{N}\}$ be an approximate identity, i.e.,
a family of nonnegative Borel functions such that for every
$U \in \mathcal{N}$, $\operatorname{supp} h_U \subset U$ and $\|h_U\|_1 =1$.
Furthermore, since for every neighborhood $U$ of $0$ there
is a continuous function $g_U$ with values in $[0,1]$,
$g_U(0) = 1$, and $\operatorname{supp} g_U \subset U$, we may assume that each
function in the approximate identity is continuous (obtain $h_U$
by normalizing~$g_U$).
Let $\mu$ be a finite, nonnegative Radon measure on~$X$.
Recall that
\[
h*f(x) = \int f(y^{-1}x) h(y) \, dy
\text{\qquad and \qquad}
\mu*f(x) = \int f(y^{-1}x) \, d\mu(y).
\]
Let $g\in C_c(X)$;
we shall utilize the following well known results:
$\mu*(h_U*g) = (\mu*h_U)*g$, and
$h_U*g \to g$ uniformly as $U \downarrow 0$.
The idea of the proof below consists simply in replacing the measure
$\mu$ with the continuous function $\mu*h_U$, using the fact that
$\| \mu*h_U \|_1 = \mu(X)$.
Throughout this paper, the $L_1$ norm refers to Haar measure.
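For completeness, the identity $\| \mu*h_U \|_1 = \mu(X)$ follows from the Fubini--Tonelli Theorem and the left invariance of $\lambda$, since $h_U \ge 0$, $\mu \ge 0$, and $\|h_U\|_1 = 1$:
\[
\| \mu*h_U \|_1
= \int \int h_U(y^{-1}x) \, d\mu(y) \, dx
= \int \Big( \int h_U(y^{-1}x) \, dx \Big) d\mu(y)
= \|h_U\|_1 \, \mu(X) = \mu(X).
\]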
\begin{lem}
\label{lem:family}
Let $\{k_{\beta}\}$ be a family of
nonnegative lower semicontinuous real valued functions, defined
on~$X$. Set $k^*v := \sup_{\beta}|v*k_{\beta}|$, where $v$ is either a
function or a measure. Then, for every
finite real valued Radon measure $\mu$ on $X$,
and every $\alpha > 0$,
\[
\lambda \{k^* \mu > \alpha\} \le
\sup \big\{ \lambda \{k^* f > \alpha\} : \|f\|_1 = |\mu|(X) \big\}.
\]
The same result holds if $\{k_n\}$ is a sequence of
nonnegative real valued Borel functions.
\end{lem}
\begin{pf}
Consider first the case where $\{k_{\beta}\}$ is a family of
lower semicontinuous functions. We shall assume that
functions and measures are nonnegative.
There is no loss of generality in doing so since
$k^* f \le k^*|f|$ and
$k^*\mu \le k^*|\mu|$
always. Also,
by lower semicontinuity,
$\int k_{\beta} \, d\mu = \sup \{\int g_{\gamma,\beta} \, d\mu :
0\le g_{\gamma ,\beta} \le k_\beta$,
$g_{\gamma,\beta} \in C_c(X)\}$ (Corollary~7.13 of~\cite{Fo}).
It follows that for every $x$,
$\sup_\beta \mu *k_{\beta} (x) = \sup_{\gamma, \beta}
\{\mu *g_{\gamma,\beta} (x): 0\le g_{\gamma ,\beta} \le k_\beta$,
$g_{\gamma ,\beta} \in C_c(X)\}$. Therefore
we may assume that the family $\{k_{\beta}\}$
consists of nonnegative continuous functions with compact support.
Next, let $\{h_U: U \in \mathcal{N}\}$ be an approximate identity as above,
with each $h_U$ continuous, and
let $C \subset \{k^* \mu > \alpha\}$
be a compact set. It suffices to show that there exists
a function $f$ with $\| f \|_1 = \mu(X)$ and
$C \subset \{k^* f > \alpha\}$. We shall take $f$ to be $\mu*h_{U_0}$,
for a suitably chosen neighborhood $U_0$. Since
$\{k^* \mu > \alpha\} = \cup_\beta \{\mu *k_\beta > \alpha\}$ and each
$\mu *k_\beta $ is continuous, there exists a finite subcollection of
indices $\{\beta_1, \dots , \beta_\ell\}$ with
$C \subset \cup_1^\ell \{\mu *k_{\beta_i} > \alpha\}$, so the continuous
function $\max_{1\le i\le \ell} \mu *k_{\beta_i} $ attains a minimum
value $\alpha + a$ on $C$, with $a$ strictly positive.
Because $\mu$ is a finite measure and $h_U * k_{\beta_i}$
converges uniformly to $k_{\beta_i}$ as $U \to 0$,
$\mu * h_U * k_{\beta_i}$ also
converges uniformly to $\mu * k_{\beta_i}$. Hence, there exists a $U_0 \in
\mathcal{N}$ such that for every $V \subset U_0$, $V \in \mathcal{N}$, and
every $i\in \{1, \dots, \ell\}$,
$\| \mu * k_{\beta_i} - \mu * h_V * k_{\beta_i} \|_\infty < a /2$.
In particular, it follows that
\[
C \subset \big\{ \max_{1 \le i \le \ell} \mu * h_{U_0} * k_{\beta_i}
> \alpha \big\}
\subset \big\{ k^*(\mu*h_{U_0})> \alpha \big\}.
\]
The case where $\{k_n\}$ is a sequence of
nonnegative bounded Borel functions can be proven by reduction
to the previous one. Choose a finite Radon measure $\mu$
and fix $\alpha > 0$.
Given $\epsilon \in (0, 1)$, for every $n$ let
$g_n \ge k_n$ be a bounded, lower semicontinuous function with
\[
\| g_n - k_n \|_1 < \frac{\epsilon^2}{2^{n+1}\mu(X)}
\]
(cf.~Proposition~7.14 of~\cite{Fo}).
Then, for any $ f \in L_1(\lambda)$, using the Fubini-Tonelli Theorem and
left invariance we have
\begin{gather*}
\| g^*f - k^*f \|_1
\\
= \Big\| \sup_n \int g_n (y^{-1}x) f(y) \,dy
- \sup_n \int k_n (y^{-1}x) f(y) \,dy \Big\|_1
\\
\le \sum_n \iint (g_n (y^{-1}x) - k_n (y^{-1}x)) |f(y)| \,dy\,dx
\\
= \sum_n \int |f(y)| \int (g_n (y^{-1}x) - k_n (y^{-1}x)) \,dx\,dy
\\
= \sum_n \|f\|_1 \|g_n - k_n\|_1
< \|f\|_1 \epsilon^2 (\mu(X))^{-1}.
\end{gather*}
In particular, if $\| f \|_1 = \mu(X)$,
we have that
\[
\| g^*f - k^*f \|_1 < \epsilon^2,
\]
from which
\[
\lambda \{g^*f - k^*f \ge \epsilon\}
\le \frac{\| g^* f - k^* f\|_1}{\epsilon} < \epsilon
\]
follows.
Now
$\{g^* f > \alpha + \epsilon \} \subset \{k^* f > \alpha\} \cup
\{g^* f - k^* f > \epsilon\}$, so
\begin{gather*}
(\alpha + \epsilon) \lambda \{ k^*\mu > \alpha + \epsilon \}
\le (\alpha + \epsilon ) \lambda \{ g^*\mu > \alpha + \epsilon \}
\\
\le (\alpha + \epsilon) \sup \{ \lambda \{g^*f > \alpha + \epsilon\}
: \|f\|_1 = \mu(X) \}
\\
\le (\alpha + \epsilon) ( \sup \{\lambda \{k^*f > \alpha\}
: \|f\|_1 = \mu(X)\} + \epsilon ),
\end{gather*}
and the result is obtained by letting $\epsilon \downarrow 0$.
\end{pf}
\begin{thm}
\label{th:equiv}
Let $\{k_{\beta}\}$ be a family of
nonnegative lower semicontinuous real valued functions, defined on
$X$, and let $c>0$ be a fixed constant. Then the
following are equivalent:
(i) For every function $f\in L_1 (\lambda )$,
and every $\alpha > 0$,
\[
\alpha \lambda \{k^*f > \alpha\} \le c \|f\|_1.
\]
(ii) For every finite real valued Radon measure $\mu$ on $X$,
and every $\alpha > 0$,
\[
\alpha \lambda \{k^*\mu > \alpha\} \le c |\mu|(X).
\]
The same result holds if $\{k_{n}\}$ is a sequence of
nonnegative real valued Borel functions.
\end{thm}
\begin{pf}
(i) is the special case of~(ii) where $d\mu(y) = f(y)\,dy$.
For the other direction, by Lemma~\ref{lem:family} and part~(i) we have
\[
\alpha \lambda \{k^* \mu > \alpha\}
\le \alpha \sup \{ \lambda \{k^*f > \alpha\} : \|f\|_1 = |\mu|(X) \}
\le c |\mu|(X).
\]
\end{pf}
\begin{rem}
By the discretization theorem of M.~de Guzm\'an
(see~\cite{Guz}, Theorem~4.1.1), further refined by
M.~T.~Men\'arguez and F.~Soria (Theorem~1 of~\cite{MeSo}), in
$\mathbb{R}^d$ conditions~(i) and~(ii) of Theorem~\ref{th:equiv} are
both equivalent to
\textit{
(iii) For every finite collection $\{\delta_{x_1}, \dots , \delta_{x_N}\}$
of Dirac deltas on $X$, and every $\alpha > 0$,
\[
\alpha \lambda \big\{ k^* \sum_1^N \delta_{x_i} > \alpha \big\} \le c N.
\]
}
From the viewpoint of obtaining lower
bounds, the usefulness of~(ii) is due to the fact that it allows one to choose
among a wider class of potential examples than just finite sums of Dirac deltas.
Both~(ii) and~(iii) will be utilized in the next section.
\end{rem}
\section{Behavior of constants for the Hardy-Littlewood maximal operator}
\label{sec:constants}
Let $B \subset \mathbb{R}^d$ be an open, bounded, convex set,
symmetric about zero.
We shall call $B$ a ball, since each norm on $\mathbb{R}^d$ yields sets of this
type, and each bounded $B$, convex and symmetric about zero, defines a norm.
The (centered) Hardy-Littlewood maximal operator associated to $B$
is defined for locally integrable
functions $f \colon \mathbb{R}^d \to \mathbb{R}$ as
\[
M_{d,B} f(x) := \sup_{r > 0}
\frac{\chi_{rB}}{r^d \lambda^d (B)}*|f| (x).
\]
We denote by $c_{d,B}$ the best constant in the weak type $(1,1)$
inequality
$\alpha \lambda^d \{ M_{d,B} f > \alpha \} \le c \|f\|_1$,
where $c$ is independent of
$f \in L^1(\mathbb{R}^d)$ and $\alpha > 0$. Let $s :=\{r_n\}_{-\infty}^\infty$
be a lacunary (bi)sequence (i.e., a sequence
that satisfies $r_{n+1}/r_n \ge c$ for some fixed constant
$c > 1$ and every $n \in \mathbb{Z}$). Then the associated maximal operator
is defined via
\[
M_{s,d,B} f(x) := \sup_{n \in \mathbb{Z}}
\frac{\chi_{r_n B}}{r_n^d \lambda^d(B)}*|f| (x).
\]
The arguments given below are applicable to both the maximal function
and to lacunary versions of it, so we shall not introduce
a different notation for the best constants in the lacunary case.
In particular, Lemma~\ref{lem:linear} and Theorem~\ref{th:ineq}
refer to all of these maximal operators, but only the usual maximal operator
shall be mentioned in the proofs.
Given a finite sum
$\mu = \sum_1^k \delta_{x_i}$ of Dirac deltas, where
the $x_i$'s need not be all different, let $\sharp (x + B)$ be the number
of point masses from $\mu$ contained in $x + B$.
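For a finite sum of Dirac deltas, the supremum defining $M_{d,B}\mu(x)$ can be computed exactly when $B$ is the unit-volume cube $Q_d$: the count $\sharp (x + rQ_d)$ is a step function of $r$ that jumps as $r$ passes twice an $\ell_\infty$ distance to a point mass, so the supremum is a maximum over finitely many candidates. A minimal sketch of this observation (our own illustration, assuming $x$ does not itself carry a point mass; `maximal_cube` is a hypothetical helper name):

```python
def maximal_cube(masses, x):
    """M_{d,Q} mu(x) for mu = sum of Dirac deltas at `masses`, Q the open
    unit-volume cube centered at 0: sup_{r>0} #(x + rQ) / r^d."""
    d = len(x)
    # l_inf distances from x to each point mass, sorted increasingly
    dists = sorted(max(abs(a - b) for a, b in zip(m, x)) for m in masses)
    # x + rQ contains the i-th closest mass iff dists[i] < r/2; letting r
    # decrease to 2*dists[j] captures the supremum over each step of the count
    return max((j + 1) / (2 * dj) ** d for j, dj in enumerate(dists) if dj > 0)
```

For instance, a single unit mass at the origin gives $M_{1,Q}\mu(x) = 1/(2|x|)$ in dimension one, in agreement with a direct computation.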
\begin{lem}
\label{lem:linear}
Let $B$ be a ball in $\mathbb{R}^d$. Then for
every linear transformation $T \colon \mathbb{R}^d \to \mathbb{R}^d$ with
$\det T \ne 0$, $c_{d,B} = c_{d,T(B)}$.
\end{lem}
\begin{pf}
Given $\mu := \sum_1^k \delta_{x_i}$ and
$T \mu := \sum_1^k \delta_{T(x_i)}$, we have that
\[
M_{d,B} \mu (x) := \sup_{r > 0} \frac{\sharp (x + rB)}{r^d\lambda^d (B)}
\]
and
\[
M_{d,T(B)} T\mu (x) := \sup_{r > 0}
\frac{\sharp (x + rT(B))}{r^d\lambda^d (T(B))}.
\]
Then
$x \in \{ M_{d,B} \mu > \alpha \}$
iff
$T(x)\in \{ M_{d,T(B)} T\mu > (\alpha / |\det T|)\}$.
Since
\[
|\det T| \lambda^d \{ M_{d,B} \mu > \alpha \}
= \lambda^d \{ M_{d,T(B)} T\mu > (\alpha / |\det T|) \},
\]
we have
\[
\alpha \lambda^d \{ M_{d,B} \mu > \alpha \}
= (\alpha / |\det T|) \lambda^d \{ M_{d,T(B)} T\mu
> (\alpha / |\det T|) \},
\]
and the result follows.
\end{pf}
\begin{thm}
\label{th:ineq}
For each $d \in \mathbb{N} \setminus \{0\}$ let $B_d$ be
a $d$-dimensional parallelotope centered at zero. Then
$c_{d,B_d} \le c_{d+1,B_{d+1}}$ for both the maximal operator
and for lacunary operators.
\end{thm}
\begin{pf}
Since every such $B_d$ is the image under a nonsingular
linear transformation of the $d$-dimensional cube $Q_d$ centered at zero
with sides parallel to the axes and
volume $1$, we may assume that in fact $B_d = Q_d$.
With the convex bodies fixed, we will write
$c_d$ and $M_d$ rather than $c_{d,B_d}$ and $M_{d,B_d}$. Given $\alpha > 0$,
$\mu_d = \sum_1^k \delta_{x_i}$ on $\mathbb{R}^d$
and a constant $c > 0$ such that
$\alpha \lambda^d\{ M_d \mu_d > \alpha \} > c \mu_d (\mathbb{R}^d)$, we want to
find a measure $\mu_{d+1}$ on $\mathbb{R}^{d+1}$ such that
$\alpha \lambda ^{d+1}\{ M_{d+1} \mu_{d+1} > \alpha \} >
c \mu_{d+1} (\mathbb{R}^{d+1})$. This will imply that $c_d\le c_{d+1}$.
Let $L := (k/\alpha)^{1/d}$. Note that if $r \ge L$, then for every
$x\in \mathbb{R}^d$,
$\frac{\sharp (x + rQ_d) }{r^d} \le \alpha$. Choose $N \gg L$
such that $\alpha \frac{N-L}{N} \lambda^d \{ M_d \mu_d > \alpha \} > c k$,
and let $\mu_{d+1} := \mu_d \times \lambda_{[-N,N]}$, where
$\lambda_{[-N,N]}$ stands for the restriction of linear Lebesgue measure
to the interval $[-N, N]$. We claim that
$\{ M_d \mu_d > \alpha \} \times [-N+L, N-L] \subset
\{ M_{d+1} \mu_{d+1} > \alpha \}$. In order to establish the
claim, the following notation shall be used:
If $x = (x_1, \dots , x_d) \in \mathbb{R}^d$,
by $(x, x_{d+1})$ we denote the point
$(x_1, \dots , x_d, x_{d+1}) \in \mathbb{R}^{d+1}$. Now if
$x\in \{ M_d \mu_d > \alpha \}$, then there exists an $r(x)\in (0, L)$
such that $r(x)^{-d} \mu_d (x + r(x) Q_d) > \alpha$, so for every
$y\in [-N + L, N -L]$,
\begin{gather*}
r(x)^{-d-1} \mu_{d+1} ((x, y) + r(x) Q_{d+1})
\\
= r(x)^{-d-1} \, \mu_d (x + r(x) Q_d)
\times \lambda_{[-N,N]}( [y - \tfrac{r(x)}2, y + \tfrac{r(x)}2] )
\\
= r(x)^{-d} \mu_d (x + r(x) Q_d)> \alpha,
\end{gather*}
as desired. But now
\begin{gather*}
\alpha \lambda^{d+1} \{ M_{d+1} \mu_{d+1} > \alpha \}
\ge 2 \alpha (N-L) \lambda^d \{ M_d \mu_d > \alpha \}
\\
= 2 \alpha N \frac{N-L}{N} \lambda^d \{ M_d \mu_d > \alpha \}
> 2 N c k = c \mu_{d+1} (\mathbb{R}^{d+1}).
\end{gather*}
\end{pf}
\begin{rem}
Recall from the Introduction that for the
$\ell_\infty$ balls (i.e., cubes with sides parallel to the axes)
$
c_1 < c_2.
$
Since the $\ell_1$ unit ball in dimension 2 is a square, it
follows from Lemma~\ref{lem:linear} that the best constant in dimension 2
is equal for the $\ell_1$ and the $\ell_\infty$ norms. It follows that
$c_1 < c_2$ in the $\ell_1$ case also. It would
be interesting to know whether or not the best constants associated
to the $\ell_p$ balls are all the same.
Note that establishing bounds of the type
$a^{-1} c_{d,2} \le c_{d,p} \le a c_{d,2}$ (where the constant
$a \ge 1$ is independent of the dimension $d$ and $c_{d,p}$
denotes the best constant associated to the $\ell_p$ ball),
would show that the bounds
$O(d)$ (which hold for euclidean balls by
\cite{StSt}) extend to $\ell_p$ balls.
\end{rem}
\end{document}
\begin{document}
\title{Compressed sensing quantum process tomography for superconducting
quantum gates}
\author{Andrey V. Rodionov$^1$, Andrzej Veitia$^1$, R. Barends$^2$, J. Kelly$^2$, Daniel Sank$^2$, J. Wenner$^2$, John M. Martinis$^2$, Robert L. Kosut$^3$, and Alexander N. Korotkov$^1$}
\affiliation{$^1$Department of Electrical Engineering, University of California, Riverside, California 92521, USA \\
$^2$Department of Physics, University of California, Santa Barbara,
California 93106, USA \\
$^3$SC Solutions, 1261 Oakmead Parkway, Sunnyvale, California 94085, USA
}
\date{\today}
\begin{abstract}
We apply the method of compressed sensing (CS) quantum process
tomography (QPT) to characterize quantum gates based on
superconducting Xmon and phase qubits. Using experimental data for a
two-qubit controlled-Z gate, we obtain an estimate for the process
matrix $\chi$ with reasonably high fidelity compared to full QPT,
but using a significantly reduced set of initial states and
measurement configurations. We show that the CS method still works
when the amount of used data is so small that the standard QPT would
have an underdetermined system of equations. We also apply the CS
method to the analysis of the three-qubit Toffoli gate with
numerically added noise, and similarly show that the method works
well for a substantially reduced set of data. For the CS calculations we
use two different bases in which the process matrix $\chi$ is approximately
sparse, and show that the resulting estimates of the process
matrices match each other with reasonably high fidelity. For both
two-qubit and three-qubit gates, we characterize the quantum process
by not only its process matrix and fidelity, but also by the
corresponding standard deviation, defined via variation of the state
fidelity for different initial states.
\end{abstract}
\pacs{03.65.Wj, 03.67.Lx, 85.25.Cp}
\maketitle
\section{Introduction}
An important challenge in quantum information science and quantum
computing is the experimental realization of high-fidelity quantum
operations on multi-qubit systems. Quantum process tomography
(QPT)~\cite{N-C, ChuangQPT1997, Poyatos-97} is a procedure devised
to fully characterize a quantum operation. The role of QPT in
experimental characterization of quantum gates is twofold. First, it
allows us to quantify the quality of the gate; that is, it tells us
how close the actual and desired quantum operations are. Second, QPT
may aid in diagnosing and correcting errors in the experimental
operation~\cite{Boulant-03,Bendersky-08,Mohseni-09,Kofman,
Korotkov-13}. The importance of QPT has led to extensive theoretical
research on this subject (e.g.,
\cite{Leung-03,DAriano-03,Emerson-07,Mohseni-06,Wolf-08,Bogdanov-10}).
Although conceptually simple, QPT suffers from a
fundamental drawback: the number of required experimental
configurations scales exponentially with the number of qubits
(e.g.,~\cite{MohseniResourseAnalysis}). An $N$-qubit quantum
operation can be represented by a $4^{N}\times 4^{N}$ process matrix
$\chi$~\cite{N-C} containing $16^{N}$ independent real parameters
(or $16^{N}-4^{N}$ parameters for a trace-preserving operation)
which can be determined experimentally by QPT. Therefore, even for
few-qubit systems, QPT involves collecting large amounts of
tomographic data and heavy classical postprocessing. To alleviate
the problem of exponential scaling of QPT resources, alternative
methods have been developed, e.g., randomized
benchmarking~\cite{Knill2008, Emerson-05, Magesan-12} and Monte
Carlo process certification~\cite{FlammiaMonteCarlo2011,daSilva-11}.
These protocols, however, find only the fidelity of an operation
instead of its full process matrix. Both randomized benchmarking and
Monte Carlo process certification have been demonstrated
experimentally for superconducting qubit gates
(see~\cite{Chow-09,Corcoles2013,SteffenMonteCarlo2012} and
references therein). Although these protocols are efficient tools
for the verification of quantum gates, their limitation lies in the
fact that they do not provide any description of particular errors
affecting a given process and therefore they cannot be used to
improve the performance of the gates.
Recently, a new
approach to QPT which incorporates ideas from signal processing
theory has been proposed~\cite{KosutSVD, ShabaniKosut}. The basic
idea is to combine standard QPT with compressed sensing (CS) theory
\cite{Candes2006, Candes2008, Donoho, CandesWakin}, which asserts
that sparse signals may be efficiently recovered even when heavily
undersampled. As a result, compressed sensing quantum process
tomography (CS QPT) enables one to recover the process matrix $\chi$
from far fewer experimental configurations than standard QPT. The
method proposed in~\cite{KosutSVD, ShabaniKosut} is hoped to provide
an exponential speed-up over standard QPT. In particular, for a
$d$-dimensional system the method is supposed to require only
$O(s\log{d})$ experimental probabilities to produce a good estimate
of the process matrix $\chi$, if $\chi$ is known to be
$s$-compressible~\cite{s-sparse} in some known basis. (For
comparison, standard QPT requires at least $d^4$ probabilities,
where $d=2^{N}$ for $N$ qubits.) Note that there are bases in which
the process matrix describing the target process (the desired
unitary operation) is maximally sparse, i.e.\ containing only one
non-zero element; for example, this is the case for the so-called
singular-value-decomposition (SVD) basis \cite{KosutSVD} and the
Pauli-error basis \cite{Korotkov-13}. Therefore, if the actual
process is close to the ideal (target) process, then it is plausible
to expect that its process matrix is approximately sparse when written in
such a basis \cite{ShabaniKosut}.
The CS QPT
method was experimentally validated in Ref.~\cite{ShabaniKosut} for
a photonic two-qubit controlled-Z (CZ) gate. In that experiment, sufficiently accurate
estimates for the process matrix were obtained via CS QPT
using much fewer experimental configurations than the
standard QPT.
The CS idea also inspired another (quite different) algorithm for
quantum state tomography (QST) \cite{GrossFlammia2010,Flammia2012},
which can be generalized to QPT \cite{Flammia2012,Baldwin-14}. This matrix-completion method of
CS QST estimates the density matrices of nearly pure (low rank~$r$)
$d$-dimensional quantum states from expectation values of only $O(r
d \, {\rm poly} \log d)$ observables, instead of $d^2$ observables required for
standard QST. It is important to mention that this method does not
require any assumption about the quantum state of a system, except
that it must be a low-rank state (in particular, we do not need to
know the state approximately). The CS QST method has been used to
reconstruct the quantum states of a 4-qubit photonic
system~\cite{Yuan} and cesium atomic spins~\cite{SmithDeutsch}. In
Ref.~\cite{Flammia2012} it has been shown that using the
Jamio\l{}kowski process-state isomorphism~\cite{Jamiolkowski} the formalism of CS
QST can also be applied to the QPT, requiring $O(r d^{2}\, {\rm poly}\log d)$
measured probabilities (where $r$ is the rank of the Jamio\l{}kowski
state) to produce a good estimate of the process matrix $\chi$.
This amounts, roughly, to a square-root speedup compared with
standard QPT.
Note that this algorithm requires exponentially more
resources than the CS QPT method of Ref.\ \cite{ShabaniKosut}, but
it does not require knowing a particular basis in which the matrix
$\chi$ is sparse. The performance of these two methods has been compared in the recent paper \cite{Baldwin-14} for a simulated quantum system with dimension $d=5$; the reported result is that the method of Ref.\ \cite{Flammia2012} works better for coherent errors, while the method of Ref.\ \cite{ShabaniKosut} is better for incoherent errors.
In this paper we apply the method of Ref.\ \cite{ShabaniKosut} to
the two-qubit CZ gate realized with superconducting qubits. Using
the experimental results, we find that CS QPT works reasonably well
when the number of experimental configurations used is up to $\sim$7
times smaller than for standard QPT. Using simulations for a
three-qubit Toffoli gate, we find that the reduction factor is
$\sim$40, compared with standard QPT. In the analysis we calculate
two fidelities: the fidelity of the CS QPT-estimated process matrix
$\chi_{\rm CS}$ compared with the matrix $\chi_{\rm full}$ from the
full data set and compared with $\chi_{\rm ideal}$ for the ideal
unitary process. Besides calculating the fidelities, we also
calculate the standard deviation of the fidelity, defined via the
variation of the state fidelity for different initial states. We
show that this characteristic is also estimated reasonably well by
using the CS QPT.
Our paper is structured as follows. Section~\ref{SQPT} is a brief
review of standard QPT and CS QPT. In Sec.~\ref{Configurations} we
discuss the set of measurement configurations used to collect QPT
data for superconducting qubits, and also briefly discuss our way to
compute the process matrix $\chi$ via compressed sensing. In
Sec.~\ref{results-two-qubits} we present our numerical results for
the CS QPT of a superconducting two-qubit CZ gate. In this section
we also compare numerical results obtained by applying the CS QPT
method in two different operator bases, the Pauli-error basis and
the SVD basis. In Sec.~\ref{OurResults-ThreeQu} we study the CS QPT
of a simulated three-qubit Toffoli gate with numerically added
noise. Then in Sec.~\ref{FidSquaredSection} we use the process
matrices obtained via compressed sensing to estimate the standard
deviation of the state fidelity, with varying initial state. Section
\ref{Conclusion} is a conclusion. In Appendices we discuss the
Pauli-error basis (Appendix A), SVD basis (Appendix B), and
calculation of the average square of the state fidelity (Appendix
C).
\section{Methods of Quantum Process Tomography}
\label{SQPT}
\subsection{ Standard Quantum Process Tomography}
\label{QPT-standard}
The idea behind QPT is to reconstruct a quantum operation $\rho^{\rm
in}\mapsto \rho^{\rm fin}= \mathcal{E}(\rho^{\rm in})$ from
experimental data. The quantum operation is a
completely positive map, which for an $N$-qubit system prepared in the state
with density matrix $\rho^{\rm in}$ can be written as
\begin{equation}
\label{MainDefinition}
\mathcal{E}(\rho^{\rm in})=\sum_{\alpha,\beta=1}^{d^{2}}\chi_{\alpha \beta} E_{\alpha}\rho^{\rm in} E_{\beta}^{\dagger},
\end{equation}
where $d=2^{N}$ is the dimension of the system, $\chi \in
\mathbb{C}^{d^{2}\times d^{2}}$ is the process matrix and
$\{E_{\alpha} \in \mathbb{C}^{d\times d} \} $ is a chosen basis of
operators. We assume that this basis is orthogonal,
$\braket{E_{\alpha}|E_{\beta}} \equiv
\tr(E_{\alpha}^{\dagger}E_{\beta})=Q \, \delta_{\alpha \beta}$,
where $Q=d$ for the Pauli basis and Pauli-error basis, while $Q=1$
for the SVD basis (see Appendices A and B). Note that for a trace-preserving operation ${\rm Tr}(\chi)=1$ if $Q=d$, while ${\rm Tr}(\chi)=d$ if $Q=1$.
In this paper we
implicitly assume the usual normalization $Q=d$, unless mentioned otherwise. The process matrix
$\chi$ is positive semidefinite (which implies being Hermitian), and
we also assume it to be trace-preserving,
\begin{eqnarray}
\label{ChiPositive}
\chi \geq 0 \quad (\text{positive semidefinite}), \\
\label{ChiTracePreserving}
\sum_{\alpha, \beta=1}^{d^{2}} \chi_{\alpha \beta}E_{\beta}^{\dagger}E_{\alpha}= \mathbb{I}_{d} \quad (\text{trace preserving}).\\ \notag
\end{eqnarray}
These conditions ensure that $\rho^{\rm fin}=\mathcal{E}(\rho^{\rm in})$ is a legitimate density matrix for an arbitrary (legitimate) input state
$\rho^{\rm in}$. The condition (\ref{ChiTracePreserving}) reduces the
number of real independent parameters in $\chi$ from $d^4$ to $d^{4}-d^{2}$. Hence, the number of parameters needed to fully specify the map
$\mathcal{E}$ scales as $O(16^{N})$ with the number of qubits $N$. Note that the set of allowed process matrices $\chi$ defined by
Eqs.~(\ref{ChiPositive})~and~(\ref{ChiTracePreserving}) is convex \cite{Kosut2004Convex,KosutSVD}.
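As a concrete illustration of Eq.~(\ref{MainDefinition}) and the constraints above, the process matrix of an ideal unitary $U$ is rank one in the Pauli basis: writing $U = \sum_\alpha c_\alpha E_\alpha$ with $c_\alpha = \tr(E_\alpha^\dagger U)/d$ gives $\chi = c\, c^\dagger$, with ${\rm Tr}(\chi)=1$ for $Q=d$. A minimal numerical sketch (our own helper names, not part of the paper's analysis), using the CZ gate:

```python
import numpy as np
from itertools import product

# Single-qubit Paulis; tensor products form the N-qubit Pauli basis
# {E_alpha}, with tr(E_alpha^† E_beta) = d * delta_{alpha,beta} (Q = d)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_basis(n):
    basis = []
    for factors in product((I, X, Y, Z), repeat=n):
        Em = np.array([[1.0 + 0j]])
        for p in factors:
            Em = np.kron(Em, p)
        basis.append(Em)
    return basis

def chi_of_unitary(U):
    # U = sum_a c_a E_a with c_a = tr(E_a^† U)/d, so U rho U^† =
    # sum_{a,b} c_a conj(c_b) E_a rho E_b^†, i.e. chi = c c^† (rank one)
    d = U.shape[0]
    basis = pauli_basis(int(np.log2(d)))
    c = np.array([np.trace(Em.conj().T @ U) / d for Em in basis])
    return np.outer(c, c.conj())

CZ = np.diag([1, 1, 1, -1]).astype(complex)
chi = chi_of_unitary(CZ)
```

Since CZ decomposes over only four Paulis, $CZ = (II + IZ + ZI - ZZ)/2$, the resulting $\chi$ has just $16$ nonzero entries out of $256$, illustrating the sparsity of ideal process matrices in suitable bases.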
The essential idea of standard QPT is to exploit the linearity of
the map (\ref{MainDefinition}) by preparing the qubits in different
initial states, applying the quantum gate, and then measuring a set
of observables until the collected data allows us to obtain the
process matrix $\chi$ through matrix inversion or other methods.
More precisely, if the qubits are prepared in the state $\rho^{\rm
in}_{k}$, then the probability of finding them in the (measured)
state $\ket{\phi_{i}}$ after applying the gate is given by
\begin{equation}
\label{Probability}
P_{ik}=\tr(\Pi_{i} \mathcal{E}(\rho^{\rm in}_{k}))=\sum_{\alpha, \beta}\tr( \Pi_{i} E_{\alpha}\rho^{\rm in}_{k} E^{\dagger}_{\beta}) \chi_{\alpha
\beta},
\end{equation}
where $\Pi_{i}=\ket{\phi_{i}}\bra{\phi_{i}}$. By preparing the qubits in one of the linearly independent input states $\{\rho^{\rm in}_{1},
\ldots, \rho^{\rm in}_{N_{\rm in}} \}$ and performing a series of projective measurements $ \{\Pi_{1},\ldots, \Pi_{N_{\rm meas}}\}$ on the output states,
one obtains a set of $m= N_{\rm in}N_{\rm meas}$ probabilities $ \{ P_{ik}\} $ which, using Eq.~(\ref{Probability}), may be written as
\begin{equation}
\label{Vectorized}
\vec{P}(\chi)=\Phi \vec{\chi},
\end{equation}
where $\vec{P}(\chi) \in \mathbb{C}^{m\times 1}$ and $\vec{\chi} \in
\mathbb{C}^{d^{4}\times 1}$ are vectorized forms of $\{P_{ik}\} $
and $ \chi_{\alpha \beta}$, respectively. The $m\times d^{4}$
transformation matrix $\Phi$ has entries given by $\Phi_{ik, \alpha
\beta}=\tr( \Pi_{i}
E_{\alpha}\rho^{\rm in}_{k} E^{\dagger}_{\beta})$.\\
In principle, for tomographically complete sets of input states
$\{\rho^{\rm in}_{1}, \ldots, \rho^{\rm in}_{N_{\rm in}}\}$ and
measurement operators $\{\Pi_{1},\ldots, \Pi_{N_{\rm meas}}\}$, one
could invert Eq.~(\ref{Vectorized}) and thus uniquely find $\chi$
by using the experimental set of probabilities
$\vec{P}^{\textrm{exp}}$. In practice, however, because of
experimental uncertainties present in $\vec{P}^{\textrm{exp}}$,
the process matrix thus obtained may be non-physical, that is, inconsistent
with the conditions (\ref{ChiPositive}) and
(\ref{ChiTracePreserving}). In standard QPT this problem is remedied
by finding the {\it physical} process matrix [satisfying (\ref{ChiPositive}) and
(\ref{ChiTracePreserving})] that minimizes (in some
sense) the difference between the probabilities $\vec{P}(\chi)$
and the experimental
probabilities $\vec{P}^{\textrm{exp}}$.\\
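For a single qubit, the inversion of Eq.~(\ref{Vectorized}) can be illustrated directly. The following is a minimal, noiseless sketch (our own example, not the experimental procedure of this paper) with a hypothetical tomographically complete choice of inputs and projectors, $\ket{0}$, $\ket{1}$, $\ket{+}$, $\ket{+i}$:

```python
import numpy as np

# Pauli operator basis {E_alpha} for one qubit (d = 2)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
E = [I, X, Y, Z]

# Tomographically complete input states / projectors: |0>, |1>, |+>, |+i>
kets = [np.array([1, 0], dtype=complex),
        np.array([0, 1], dtype=complex),
        np.array([1, 1], dtype=complex) / np.sqrt(2),
        np.array([1, 1j], dtype=complex) / np.sqrt(2)]
states = [np.outer(k, k.conj()) for k in kets]
projs = states  # the same rank-1 operators serve as measurements

# Transformation matrix Phi_{ik,ab} = tr(Pi_i E_a rho_k E_b^†)
Phi = np.array([[np.trace(P @ E[a] @ rho @ E[b].conj().T)
                 for a in range(4) for b in range(4)]
                for P in projs for rho in states])

# Simulate noiseless probabilities for a Hadamard gate and invert
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
c = np.array([np.trace(Ea.conj().T @ H) / 2 for Ea in E])
chi_true = np.outer(c, c.conj())
P_exp = Phi @ chi_true.reshape(-1)
chi_est = (np.linalg.pinv(Phi) @ P_exp).reshape(4, 4)
```

With these spanning sets the $16\times 16$ matrix $\Phi$ is invertible, so the pseudoinverse recovers $\chi$ exactly in the absence of noise; with noisy data the recovered matrix may violate Eqs.~(\ref{ChiPositive}) and~(\ref{ChiTracePreserving}), which is precisely the problem addressed in the text.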
Two popular methods used to estimate a physical process matrix
$\chi$ compatible with the experimental data are the maximum
likelihood (ML) method ~\cite{MLEIterative,RiebeMLEQPT,Micuda-14}
(see also \cite{James2001,Bogdanov-04}) and the least-squares (LS)
method ~\cite{OBrien,Chow-09,Fuchs-12}. The ML method minimizes the
cost function \cite{MLEIterative}
\be
{\cal C}_{ML}=- \sum\nolimits_j P^{\rm exp}_j \ln P_j(\chi),
\label{cost-ML}\ee
where the index $j$ labels the measured probabilities, while the LS
method (often also called maximum likelihood) minimizes the
difference between $\vec{P}(\chi)$ and $\vec{P}^{\textrm{exp}}$ in
the $\ell_{2}$-norm sense~\cite{NormDef}, so the minimized cost
function is
\be
{\cal C}_{LS}= ||\vec{P}(\chi)-\vec{P}^{\rm exp}||_{\ell_{2}}^2=
\sum\nolimits_j [P^{\rm exp}_j- P_j(\chi)]^2.
\label{cost-LS}\ee
In both methods the conditions (\ref{ChiPositive}) and
(\ref{ChiTracePreserving}) should be satisfied to ensure that $\chi$
corresponds to a physical process. This can be done in a number of
ways, for example, using the Cholesky decomposition, or Lagrange
multipliers, or just stating the conditions (\ref{ChiPositive}) and
(\ref{ChiTracePreserving}) as a constraint (if an appropriate software package is
used). The ML method (\ref{cost-ML}) is natural when the inaccuracy of
$\vec{P}^{\rm exp}$ is dominated by the statistical error due to a
limited number of experimental runs. However, this method does not
work well if a target probability $P_j$ is near zero, but $P_j^{\rm
exp}$ is not near zero due to experimental imperfections (e.g.,
``dark counts''); this is because the cost function (\ref{cost-ML})
is very sensitive to changes in $P^{\rm exp}_j$ when
$P_j(\chi)\approx 0$. Therefore, the LS method (\ref{cost-LS}) is a
better choice when the inaccuracy of $\vec{P}^{\rm exp}$ is not due
to a limited number of experimental runs.
Note that other cost functions can also be used for minimization in
the procedure. For example, by replacing $\ln P_j(\chi)$ in Eq.\
(\ref{cost-ML}) with $\ln [P_j(\chi)/P^{\rm exp}_j]$ (this obviously
does not affect optimization), then expanding the logarithm to
second order, and using condition $\sum_j P_j(\chi)=\sum_j P^{\rm
exp}_j$ (which cancels the first-order term), we obtain
\cite{James2001} ${\cal C}_{ML}\approx {\rm const} + \sum_j [P^{\rm
exp}_j- P_j(\chi)]^2/2P^{\rm exp}_j$. This leads to another natural
cost function
\be
{\cal C}=\sum_j \frac{[P_j(\chi)-P^{\rm exp}_j]^2}{P^{\rm exp}_j+a},
\label{cost-3}\ee
where we phenomenologically introduced an additional parameter $a$,
so that for $a\gg 1$ the minimization reduces to the LS method,
while for $a \ll 1$ it is close to the ML method (the parameter $a$
characterizes the relative importance of non-statistical and
statistical errors). One more natural cost function is similar to
Eq.\ (\ref{cost-3}), but with $P^{\rm exp}_j$ in the denominator
replaced by $P^{\rm exp}_j(1-P^{\rm exp}_j)$ (see
\cite{MLEIterative}), which corresponds to the binomial distribution
variance.
In this paper we use the LS method (\ref{cost-LS}) for the standard
QPT. In particular, we find the process matrix $\chi_{\rm full}$ for
the full data set $\vec{P}^{\text{exp}}_{\rm full}$ by minimizing
$||\vec{P}(\chi_{\rm full})-\vec{P}^{\text{exp}}_{\rm
full}||_{\ell_{2}}$, subject to conditions Eqs.~(\ref{ChiPositive})
and (\ref{ChiTracePreserving}). Note that such minimization is a
convex optimization problem and therefore computationally tractable.
\subsection{Compressed Sensing Quantum Process Tomography}
\label{CS-QPT}
If the number of available experimental probabilities is smaller
than the number of independent parameters in the process matrix
(i.e., $m< d^{4}- d^{2}$), then the set of linear equations
(\ref{Vectorized}) for the process matrix $\chi$ becomes
underdetermined. The LS method may still formally work in
this case for some range of $m$, but, as discussed in Secs.\
\ref{Sec-2q-LS} and \ref{OurResults-ThreeQu}, it is less effective.
By using the ideas of compressed sensing \cite{Candes2006, Donoho,
Candes2008, CandesWakin}, the method of CS QPT requires a
significantly smaller set of experimental data to produce a
reasonably accurate estimate of the process matrix. Let us formulate
the problem mathematically as follows: we wish to find the physical
process matrix $\vec{\chi}_{0}$ satisfying the equation
\begin{equation}
\label{CSProblem}
\vec{P}^{\textrm{exp}}=\Phi \vec{\chi}_{0}+ \vec{z},
\end{equation}
where the vector $\vec{P}^{\rm exp} \in \mathbb{C}^{m}$ (with $ m<
d^{4}-d^{2}$) and the matrix $\Phi \in \mathbb{C}^{m\times d^{4}}$
are given, while $\vec{z}\in \mathbb{C}^{m}$ is an unknown noise
vector, whose elements are assumed to be bounded (in the
root-mean-square sense) by a known limit $\varepsilon$,
$||\vec{z}||_{\ell_{2}}/\sqrt{m} \leq \varepsilon$. While this
problem seems to be ill-posed since the available information is
both noisy and incomplete, it was shown in Ref.~\cite{Candes2006}
that if the vector $\vec{\chi}_{0}$ is sufficiently sparse and the matrix
$\Phi$ satisfies the restricted isometry property (RIP), then $\vec{\chi}_{0}$
can be accurately estimated from Eq.~(\ref{CSProblem}). Note that
the CS techniques of Ref.~\cite{Candes2006} were developed in the
context of signal processing; to adapt these techniques to QPT
\cite{KosutSVD}, we also need to include the positivity and
trace-preservation conditions, Eqs.\ (\ref{ChiPositive}) and
(\ref{ChiTracePreserving}).
The idea of CS QPT \cite{ShabaniKosut} is to minimize the
$\ell_1$-norm~\cite{NormDef} of $\vec{\chi}$ in a basis where
${\chi}$ is assumed to be approximately sparse. Mathematically, the method
solves the following convex optimization problem:
\begin{eqnarray}
\label{mainL1problem} && \text{minimize}\,\,\,
{||\vec{\chi}||}_{\ell_1} \, ,
\\
\label{ConditionsL1Problem} && \text{subject to \,}{||\vec{P}(\chi)
-\vec{P}^{\rm exp}||}_{\ell_2}\bigl/\sqrt{m} \le \varepsilon \qquad \\
&& \hspace{1.6cm} \text{and conditions
(\ref{ChiPositive}), (\ref{ChiTracePreserving}).} \nonumber
\end{eqnarray}
As shown
in Refs.\ \cite{Candes2008, ShabaniKosut}, a faithful
recovery of an approximately $s$-sparse process matrix~$\chi_0$ via this
optimization is guaranteed (see below) if (i) the matrix~$\Phi$
satisfies the RIP condition,
\begin{equation}
\label{RIP} 1-\delta_s \le \displaystyle\frac{||\Phi\vec\chi_1 -
\Phi\vec\chi_2 ||^2_{\ell_2}}{||\vec\chi_1 -
\vec\chi_2||^2_{\ell_2}}\le 1+\delta_s ,
\end{equation}
for all $s$-sparse vectors (process matrices) $\vec{\chi}_1$ and
$\vec{\chi}_2$,
(ii) the isometry constant $\delta_s$ is sufficiently small, $\delta_s < \sqrt{2} - 1$, and
(iii) the number of data points is sufficiently large,
\be
m\ge C_0 s \log(d^4/s)=O(sN),
\label{ineq-m}\ee
where $C_0$ is a constant. Quantitatively, if $\chi_{\rm CS}$ is
the solution of the optimization problem [Eqs.~(\ref{mainL1problem})
and (\ref{ConditionsL1Problem})], then the estimation error
${||\chi_{\rm CS}-\chi_{0}||}_{\ell_{2}} $ is bounded as
\begin{equation}
\label{bounds}
\frac{ || \chi_{\rm CS} - \chi_{0} ||_{\ell_{2}}}{\sqrt{m}} \leq
\frac{ C_{1} ||\chi_{0}(s)-\chi_{0}||_{\ell_{1}} }{\sqrt{m s}}+C_{2\,} \varepsilon,
\end{equation}
where $\chi_{0}(s)$ is the best $s$-sparse approximation of
$\chi_{0}$, while $C_{1}$ and $C_{2}$ are constants of order
$O(\delta_{s})$. Note that in the noiseless case ($\varepsilon=0$)
the recovery is exact if the process matrix $\chi_0$ is $s$-sparse.
Also note that while the required number of data points $m$ and the
recovery accuracy depend on the sparsity $s$, the method itself
[Eqs.~(\ref{mainL1problem}) and (\ref{ConditionsL1Problem})] does
not depend on $s$, and therefore $s$ need not be known.
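The recovery guarantee above can be illustrated on a toy real-valued analogue of the problem (\ref{mainL1problem}) and (\ref{ConditionsL1Problem}). The sketch below uses a plain ISTA (soft-thresholding) solver for the Lagrangian form of the $\ell_1$ problem and drops the quantum constraints (\ref{ChiPositive}) and (\ref{ChiTracePreserving}); all sizes and parameter values are illustrative, not taken from the experiments in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real-valued analogue of the CS recovery problem: reconstruct a sparse
# vector chi0 from m < n noisy linear measurements P_exp = Phi @ chi0 + z.
# The quantum constraints (positivity, trace preservation) are omitted here.
n, m, s = 64, 24, 3                         # vector size, measurements, sparsity
chi0 = np.zeros(n)
chi0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random rows, RIP-friendly
P_exp = Phi @ chi0 + 1e-4 * rng.standard_normal(m)

# ISTA: gradient step on ||Phi x - P_exp||^2 / 2, then soft thresholding,
# which solves the Lagrangian form of the l1 problem.
lam = 1e-3                                  # l1 regularization weight
t = 1.0 / np.linalg.norm(Phi, 2) ** 2       # step size <= 1 / ||Phi||_2^2
x = np.zeros(n)
for _ in range(5000):
    g = x - t * Phi.T @ (Phi @ x - P_exp)
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)

print("recovery error:", np.linalg.norm(x - chi0))
```

Even though the linear system is strongly underdetermined ($m=24$ equations for $n=64$ unknowns), the $\ell_1$ minimization recovers the 3-sparse vector up to the noise and soft-threshold bias.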
The inequality (\ref{ineq-m}) and the first term in the inequality
(\ref{bounds}) indicate that the CS QPT method is supposed to work
well only if the actual process matrix $\chi_0$ is sufficiently
sparse. Therefore, it is important to use an operator basis
$\{E_{\alpha}\}$ [see Eq.~(\ref{MainDefinition})], in which the
ideal (desired) process matrix $\chi_{\rm ideal}$ is maximally
sparse, i.e., it contains only one nonzero element. Then it is
plausible to expect the actual process matrix $\chi_0$ to be approximately
sparse \cite{ShabaniKosut}. In this paper we will use two bases in
which the ideal process matrix is maximally sparse. These are the
so-called Pauli-error basis \cite{Korotkov-13} and the SVD basis of
the ideal unitary operation~\cite{KosutSVD}. In the Pauli-error
basis $\{E_{\alpha}\}$, the first element $E_1$ coincides with the
desired unitary $U$, while other elements are related via the
$N$-qubit Pauli matrices $\cal P$, so that $E_\alpha=U{\cal
P}_\alpha$. In the SVD basis $E_1=U /\sqrt{d}$, and other elements
are obtained via a numerical SVD procedure. More details about the
Pauli-error and SVD bases are discussed in Appendices A and B.
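The Pauli-error basis is easy to construct explicitly. The sketch below (our own convention; the paper's normalization may differ) builds $E_\alpha=U{\cal P}_\alpha$ for the two-qubit CZ gate and checks that $E_1$ coincides with $U$ and that the basis is trace-orthogonal, which is what makes the ideal process matrix maximally sparse in this basis:

```python
import numpy as np

# Pauli-error basis for the two-qubit CZ gate: E_alpha = U @ P_alpha,
# with P_alpha the N-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis_1q = [I2, X, Y, Z]

U_cz = np.diag([1, 1, 1, -1]).astype(complex)       # ideal CZ unitary
paulis_2q = [np.kron(a, b) for a in paulis_1q for b in paulis_1q]
basis = [U_cz @ P for P in paulis_2q]               # Pauli-error basis

# E_1 coincides with the desired unitary U, and the basis is orthogonal:
# Tr(E_a^dag E_b) = d * delta_ab with d = 4.
gram = np.array([[np.trace(Ea.conj().T @ Eb) for Eb in basis] for Ea in basis])
print(np.allclose(basis[0], U_cz), np.allclose(gram, 4 * np.eye(16)))
```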
As mentioned previously, the method of CS QPT involves the RIP condition
(\ref{RIP}) for the transformation matrix~$\Phi$. In Ref.\ \cite{ShabaniKosut}
it was shown that if the
transformation matrix~$\Phi$ in Eq.~(\ref{Vectorized}) is
constructed from randomly selected input states~$\rho_{k}^{\rm in}$
and random measurements~$\Pi_{i}$, then~$\Phi$ obeys the RIP
condition with high probability. Notice that once a basis
$\{E_{\alpha}\}$ and a tomographically complete (or overcomplete)
set $\{\rho^{\rm in}_{k}, \Pi_{i}\}$ have been chosen, the matrix
$\Phi_{\textrm{full}}$ corresponding to the full data set is fully
defined, since it does not depend on the experimental outcomes.
Therefore, the above-mentioned result of Ref.~\cite{ShabaniKosut}
tells us that if we build a matrix~$\Phi_{m}$ by randomly selecting
$m$ rows from~$\Phi_{\textrm{full}}$, then~$\Phi_{m}$ is very likely
to satisfy the RIP condition. Hence, the
submatrix $\Phi_{m}\in\mathbb{C}^{m\times d^4}$, together with the
corresponding set of experimental outcomes
$\vec{P}^{\textrm{exp}}\in\mathbb{C}^{m}$, can be used to produce
an estimate of the process matrix via the $\ell_{1}$-minimization
procedure (\ref{mainL1problem}) and (\ref{ConditionsL1Problem}).
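One way to build intuition for the random-row construction is to subsample a matrix and probe the near-isometry ratio of Eq.~(\ref{RIP}) on random sparse vectors. The sketch below uses a random Gaussian stand-in for $\Phi_{\rm full}$ (the real $\Phi$ is structured, so this is only a spot check, not a RIP proof):

```python
import numpy as np

rng = np.random.default_rng(3)

# Build Phi_m from randomly selected rows of a stand-in Phi_full and probe
# the near-isometry ratio of the RIP condition on random sparse vectors.
M, n, s = 576, 256, 4                  # full data size, d^4 for two qubits, sparsity
Phi_full = rng.standard_normal((M, n))

m = 160
rows = rng.choice(M, size=m, replace=False)   # random row selection
Phi_m = Phi_full[rows] / np.sqrt(m)    # normalize so the ratio is near 1

ratios = []
for _ in range(200):
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(Phi_m @ x) ** 2 / np.linalg.norm(x) ** 2)
print(min(ratios), max(ratios))        # spread around 1
```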
\section{Standard and CS QPT of multi-qubit superconducting gates}
\label{Configurations}
There are several different ways to perform standard QPT for an
$N$-qubit quantum gate realized with superconducting qubits
\cite{Mariantoni-11,Bialczak-10,Reed-12,Dewes-12,FedorovImplementToffoli,
Chow-12,Chow-13,Yamamoto-10}. The differences are as follows.
First, it can be performed using either $n_{\rm in}=4$ initial
states for each qubit
\cite{Bialczak-10,Reed-12,Dewes-12,FedorovImplementToffoli},
e.g., $\{\ket{0}, \ket{1},
(\ket{0}+\ket{1})/\sqrt{2}, (\ket{0}+i\ket{1})/\sqrt{2}\}$, or using
$n_{\rm in}=6$ initial states per qubit \cite{Chow-12,Chow-13},
$\{\ket{0}, \ket{1}, (\ket{0}\pm\ket{1})/\sqrt{2}, (\ket{0}\pm
i\ket{1})/\sqrt{2}\}$, so that the total number of initial states is
$N_{\rm in}=n_{\rm in}^N$. (It is tomographically sufficient to use
$n_{\rm in}=4$, but the set of 6 initial states is more symmetric,
so it can reduce the effect of experimental imperfections.) Second, the
final measurement of the qubits can be realized in the computational
basis after one out of $n_{\rm R}=3$ rotations per qubit
\cite{Bialczak-10,Dewes-12}, e.g., $\mathcal{R}_{\rm meas}=\{\mathbb{I},
R_{y}^{-\pi/2}, R_{x}^{\pi/2}\}$, or $n_{\rm R}=4$ rotations
\cite{Chow-09,Reed-12,Chow-13}, e.g., $\mathcal{R}_{\rm meas}=\{\mathbb{I}, R_{y}^{\pi}, R_{y}^{\pi/2}, R_{x}^{\pi/2}\}$, or $n_{\rm R}=6$ rotations
\cite{Mariantoni-11,Chow-12,Yamamoto-10}, e.g., $\mathcal{R}_{\rm meas}=\{\mathbb{I}, R_{y}^{\pi},
R_{y}^{\pm \pi/2}, R_{x}^{\pm\pi/2}\}$. This gives $N_{\rm R
}=n_{\rm R}^N$ measurement ``directions'' in the Hilbert space.
Third, it may be possible to measure the state of each qubit simultaneously
\cite{Bialczak-10,Mariantoni-11,Dewes-12}, so that the probabilities for all $2^N$ outcomes
are measured, or it may be technically possible to measure the probability of only one state (say, $|0\dots 0\rangle$) or a weighted sum of the probabilities \cite{Chow-12,Reed-12,FedorovImplementToffoli}.
Therefore, the number of measured
probabilities for each configuration is either $N_{\rm prob}=2^N$ (with $2^N-1$
independent probabilities, since their sum equals 1) or $N_{\rm
prob}=1$. Note that if $N_{\rm prob}=2^N$, then using $n_{\rm R}=6$
rotations per qubit formally gives the same probabilities as for
$n_{\rm R}=3$, and in an experiment this formal symmetry can be used
to improve the accuracy of the results. In contrast, in the case
when $N_{\rm prob}=1$, the use of $n_{\rm R}=4$ or $n_{\rm R}=6$ is
natural for complete tomography.
Thus, the number of measurement configurations (including input
state and rotations) in standard QPT is $M_{\rm conf}=N_{\rm
in}N_{\rm R}=n_{\rm in}^N n_{\rm R}^N$, while the total number of
probabilities in the data set is $M=M_{\rm conf}N_{\rm prob}$. This
number of probabilities can be as large as $M=72^N$ for $n_{\rm
in}=6$, $n_{\rm R}=6$, and $N_{\rm prob}=2^N$ (with $72^N-36^N$
independent probabilities). Since only $16^N-4^N$ independent
probabilities are necessary for the standard QPT, a natural choice
for a shorter experiment is $n_{\rm in}=4$, $n_{\rm R}=3$, and
$N_{\rm prob}=2^N$; then $M=24^N$, with $24^N-12^N$ independent
probabilities. If $N_{\rm prob}=1$ due to the limitations of the
measurement technique, then the natural choices are $n_{\rm in}=4$
and $n_{\rm R}=4$, giving $M=16^N$, or $n_{\rm in}=4$ and $n_{\rm
R}=6$, giving $M=24^N$.
In this paper we focus on the case $n_{\rm in}=4$, $n_{\rm R}=3$,
and $N_{\rm prob}=2^N$. Then for a two-qubit quantum gate there are
$M_{\rm conf}=12^N=144$ measurement configurations and $M=24^N=576$
probabilities (432 of them independent). For a three-qubit gate
there are $M_{\rm conf}=1728$ configurations and $M=13824$
probabilities (12096 of them independent).
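The bookkeeping above is simple enough to script; the helper below (hypothetical name, ours) reproduces the quoted counts for $n_{\rm in}=4$, $n_{\rm R}=3$, and $N_{\rm prob}=2^N$:

```python
# Counts of QPT configurations and probabilities for n_in = 4, n_R = 3,
# N_prob = 2^N, reproducing the numbers quoted in the text.
def qpt_counts(N, n_in=4, n_R=3):
    M_conf = (n_in * n_R) ** N      # measurement configurations
    M = M_conf * 2 ** N             # measured probabilities
    M_indep = M - M_conf            # independent ones (one sum rule per config)
    return M_conf, M, M_indep

print(qpt_counts(2))   # (144, 576, 432)
print(qpt_counts(3))   # (1728, 13824, 12096)
```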
The main experimental data used in this paper are for the two-qubit
CZ gate realized with Xmon qubits \cite{Xmon}. The data were
obtained with $n_{\rm in}=6$, $n_{\rm R}=6$, and $N_{\rm prob}=2^N$.
However, since the main emphasis of this paper is the analysis of
QPT with a reduced data set, we start by reducing the data set to
$n_{\rm in}=4$ and $n_{\rm R}=3$, using only the corresponding
probabilities and removing the other data. We will refer to these data
as ``full data'' (with $M_{\rm conf}=144$ and $M=24^N=576$). For
testing the CS method we randomly choose $m_{\rm conf} \leq M_{\rm
conf}$ configurations, with corresponding $m=4m_{\rm conf}$
experimental probabilities ($3m_{\rm conf}$ of them independent).
Since the process matrix $\chi$ is characterized by $16^N-4^N=240$
independent parameters, the power of the CS method is most evident
when $m_{\rm conf}<80$, so that the system of equations
(\ref{Vectorized}) is underdetermined. [For a three-qubit gate the
system of equations becomes underdetermined for $m_{\rm
conf}<(16^N-4^N)/(2^N-1)=576$.]
The data used for the analysis here were taken on a different device
from the one used in Ref.\ \cite{Barends-14}. For the device used
here the qubits were coupled via a bus, and the entangling gate
between qubits A and B was implemented with three multiqubit
operations: 1) swap state from qubit B to bus, 2) CZ gate between
qubit A and bus, 3) swap back to qubit B. The swap was done with the
resonant Strauch gate \cite{Strauch-03}, by detuning the frequency
of qubit A with a square pulse. Generating a square pulse is
experimentally challenging; moreover, this gate has a single optimum
in pulse amplitude and time. We also note that the qubit frequency
control was not optimized for imperfections in the control wiring,
as described in Ref.\ \cite{Kelly-14} (see also Fig.\ S4 in
Supplementary Information of \cite{Barends-14}).
The combination of device, non-optimal control, and multiple
operations makes the experimental process fidelity $F_\chi = 0.91$
of the CZ gate used for the analysis here significantly lower than
the randomized benchmarking fidelity $F_{RB} = 0.994$ reported in
\cite{Barends-14}. Moreover, QPT necessarily includes state
preparation and measurement (SPAM) errors \cite{Magesan-12}, while
randomized benchmarking does not suffer from these errors. This is
why we intentionally used the data for a not-well-optimized CZ gate
so that the gate error dominates over the SPAM errors. (Note that we
use correction for the imperfect measurement fidelity
\cite{Mariantoni-11}; however, it does not remove the measurement
errors completely.) It should also be mentioned that in the ideal
case $1-F_\chi =(1-F_{\rm RB})\times (1+2^{-N})$, so the QPT
fidelity is supposed to be slightly less than the randomized
benchmarking fidelity.
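As a sanity check of the last relation, plugging in the reported $F_{RB}=0.994$ for $N=2$ gives the QPT fidelity expected in the ideal case:

```python
# Ideal-case relation 1 - F_chi = (1 - F_RB) * (1 + 2^{-N}), evaluated for
# the reported two-qubit randomized benchmarking fidelity F_RB = 0.994.
N, F_RB = 2, 0.994
F_chi_ideal = 1 - (1 - F_RB) * (1 + 2 ** -N)
print(round(F_chi_ideal, 4))   # 0.9925
```

The resulting 0.9925 is far above the measured $F_\chi=0.91$, consistent with the gate and SPAM errors discussed above dominating the ideal-case correction.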
For the full data set, we first calculate the process matrix
$\chi_{\rm full}$ by using the least-squares method described at the
end of Sec.\ \ref{QPT-standard}. For that we use three different
operator bases \{$E_{\alpha}$\}: the Pauli basis, the Pauli-error
basis, and the SVD basis. The pre-computed transformation matrix
$\Phi$ in Eq.\ (\ref{Vectorized}) depends on the choice of the
basis, thus giving a basis-dependent result for $\chi_{\rm full}$.
We then check that the results essentially coincide by converting
$\chi_{\rm full}$ between the bases and calculating the fidelity
between the corresponding matrices (the infidelity is found to be
less than $10^{-7}$). The fidelity between two process matrices
$\chi_{1}$ and $\chi_{2}$ is defined as the square of the Uhlmann
fidelity~\cite{Uhlmann,Jozsa},
\begin{equation}
\label{DefinitionFidelity} F(\chi_{1},\chi_{2}) =
\Bigl({\rm Tr}\sqrt{\chi_{1}^{1/2}\,\chi_{2}\,\chi_{1}^{1/2}}\Bigr)^2,
\end{equation}
so that it reduces to $F(\chi_{1},\chi_{2})=\tr(\chi_{1}\chi_{2})$
\cite{GilchristNielsen} when at least one of the process matrices
corresponds to a unitary operation. Since $0\leq F\leq 1$, we refer
to $1-F$ as the infidelity.
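Equation (\ref{DefinitionFidelity}) is straightforward to implement numerically. The sketch below (our own helper functions, not the paper's code) takes the matrix square roots via the eigendecomposition of the Hermitian positive matrices and checks the rank-1 (unitary-process) reduction to ${\rm Tr}(\chi_1\chi_2)$:

```python
import numpy as np

def sqrtm_psd(A):
    """Matrix square root of a Hermitian positive-semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def process_fidelity(chi1, chi2):
    """Squared Uhlmann fidelity, Eq. (DefinitionFidelity)."""
    s = sqrtm_psd(chi1)
    return np.real(np.trace(sqrtm_psd(s @ chi2 @ s))) ** 2

# Sanity checks on a random unit-trace positive matrix chi.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
chi = A @ A.conj().T
chi /= np.trace(chi).real
print(np.isclose(process_fidelity(chi, chi), 1.0))   # True

# For a rank-1 chi2 = |v><v| (a unitary process), F reduces to Tr(chi1 chi2).
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v /= np.linalg.norm(v)
chi2 = np.outer(v, v.conj())
print(np.isclose(process_fidelity(chi, chi2),
                 np.real(np.trace(chi @ chi2))))     # True
```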
After calculating $\chi_{\rm full}$ for the full data set, we can calculate its fidelity compared to the process matrix $\chi_{\rm ideal}$ of the desired ideal unitary operation, $F_\chi = F_{\rm full}=F(\chi_{\rm full}, \chi_{\rm ideal})$. This is the main number used to characterize the quality of the quantum operation.
Then we calculate the compressed-sensing process matrix $\chi_{\rm
CS}$ by solving the $\ell_{1}$-minimization problem described by
Eqs.\ (\ref{mainL1problem}) and (\ref{ConditionsL1Problem}), using
the reduced data set. It is obtained from the full data set by
randomly selecting $m_{\rm conf}$ configurations out of the full
number $M_{\rm conf}$ configurations. We use the fidelity
$F(\chi_{\rm CS},\chi_{\rm full})$ to quantify how well the process
matrix $\chi_{\rm CS}$ approximates the matrix $\chi_{\rm full}$
obtained from full tomographic data. Additionally, we calculate the
process fidelity $F(\chi_{\rm CS},\chi_{\rm ideal})$ between
$\chi_{\rm CS}$ and the ideal operation, to see how closely it
estimates the process fidelity $F_{\rm full}$, obtained using the
full data set.
Since both the least-squares and the $\ell_{1}$-norm minimization
are convex optimization problems~\cite{KosutSVD,BoydConvex}, they
can be efficiently solved numerically. We used two ways for
MATLAB-based numerical calculations: (1) using the package CVX
\cite{CVXBoyd}, which calls the convex solver SeDuMi
\cite{SeDuMi13}; or (2) using the package YALMIP
\cite{YALMIPLofberg}, which calls the convex solver SDPT3
\cite{SDPT3}. In general, we have found that for our particular realization of the computation, CVX with the SeDuMi solver works better than the YALMIP-SDPT3 combination (more details below).
\section{Results for two-qubit CZ gate}
\label{results-two-qubits}
In this section we present results for the experimental CZ gate
realized with superconducting Xmon qubits \cite{Xmon,Barends-14}.
As explained above, the full data set consists of $M=576$ measured
probabilities (432 of them independent), corresponding to $M_{\rm
conf}=4^2\times 3^2=144$ configurations, with 4 probabilities (3 of
them independent) for each configuration. The LS method using the
full data set produces the process matrix $\chi_{\rm full}$, which
has the process fidelity $F(\chi_{\rm full}, \chi_{\rm
ideal})=0.907$ relative to the ideal CZ operation. Note that our
full data set is actually a subset of an even larger data set (as
explained in the previous section), and the $\chi$ matrix calculated
from the initial set corresponds to the process fidelity of 0.928;
the difference gives a crude estimate of the overall accuracy of the
procedure.
The CS method calculations were mainly done in the Pauli-error
basis, using the CVX-SeDuMi combination for the $\ell_1$-norm
minimization; these choices are implicitly assumed throughout this
section unless specified otherwise. Note that the CS-method optimization is
very different from the LS method. Therefore, even for the full data
set we would expect the process matrix $\chi_{\rm CS}$ to be
different from $\chi_{\rm full}$. Moreover, $\chi_{\rm CS}$ depends
on the noise parameter $\varepsilon$ [see Eq.\
(\ref{ConditionsL1Problem})], which to some extent is arbitrary. To
clarify the role of the parameter $\varepsilon$, we will first
discuss the CS method applied to the full data set, with varying
$\varepsilon$, and then discuss the CS QPT for a reduced data set,
using either near-optimal or non-optimal values of $\varepsilon$.
\subsection{Full data set, varying $\varepsilon$ }
\label{varyingeps}
We start by calculating the process matrix $\chi_{\rm CS}$ by solving
the $\ell_{1}$-minimization problem, Eqs.~(\ref{mainL1problem}) and
(\ref{ConditionsL1Problem}), using the full data set and varying the
noise parameter $\varepsilon$. The resulting matrix is compared with
the LS result $\chi_{\rm full}$ and with the ideal matrix $\chi_{\rm
ideal}$. Figure \ref{fig1} shows the corresponding fidelities
$F(\chi_{\rm CS}, \chi_{\rm full})$ and $F(\chi_{\rm CS}, \chi_{\rm
ideal})$ as functions of $\varepsilon$. We see that $\chi_{\rm CS}$
coincides with $\chi_{\rm full}$ [so that $F(\chi_{\rm CS},
\chi_{\rm full})=1$] at the optimal value $\varepsilon_{\rm
opt}=0.0199$. This is exactly the noise level corresponding to the
LS procedure, $||\vec{P}^{\rm exp}_{\rm full}-\Phi \vec{\chi}_{\rm
full}||_{\ell_2}/\sqrt{M}=0.0199$. With $\varepsilon$ increasing above
this level, the relative fidelity between $\chi_{\rm CS}$ and
$\chi_{\rm full}$ decreases, but it remains above 0.95 for
$\varepsilon <0.028$. Correspondingly, the process fidelity reported
by $\chi_{\rm CS}$, i.e.\ $F(\chi_{\rm CS}, \chi_{\rm ideal})$, also
changes. It starts with $F(\chi_{\rm CS}, \chi_{\rm
ideal})=F(\chi_{\rm full}, \chi_{\rm ideal})=0.907$ for
$\varepsilon=0.0199$, then increases with increasing $\varepsilon$,
then remains flat above $\varepsilon =0.025$, and then decreases at
$\varepsilon> 0.032$. We note that for another set of experimental
data (for a CZ gate realized with phase qubits) there was no
increasing part of this curve, and the dependence of $F(\chi_{\rm
CS}, \chi_{\rm ideal})$ on $\varepsilon$ remained practically flat
for a wide range of $\varepsilon$; one more set of experimental data
for phase qubits did show the increasing part of this curve.
\begin{figure}
\caption{(color online) The CS QPT procedure applied to the full
data set, with varying noise parameter $\varepsilon$. The red
(upper) line shows the fidelity $F(\chi_{\rm CS},\chi_{\rm full})$
and the blue (lower) line shows the process fidelity $F(\chi_{\rm
CS},\chi_{\rm ideal})$, both as functions of $\varepsilon$. The
inset shows the numerical noise level $\varepsilon_{\rm num}$ versus
$\varepsilon$.}
\label{fig1}
\end{figure}
To check how close the result of $\ell_1$-optimization
(\ref{mainL1problem}) is to the upper bound of the condition
(\ref{ConditionsL1Problem}), we calculate the numerical value
$\varepsilon_{\rm num}=||\vec{P}^{\rm exp}_{\rm full}-\Phi
\vec{\chi}_{\rm CS}||_{\ell_2}/\sqrt{M}$ as a function of
$\varepsilon$. The result is shown in the inset of Fig.\ \ref{fig1};
we see that $\varepsilon_{\rm num}$ is quite close to $\varepsilon$.
The CVX-SeDuMi package does not solve the optimization problem for
values of the noise parameter $\varepsilon$ below the optimal value
$\varepsilon_{\rm opt}$.
Finding a proper value of $\varepsilon$ to be used in the CS method
is not a trivial problem, since for a reduced data set we
cannot find $\varepsilon_{\rm opt}$ in the way used above. Therefore,
the value of $\varepsilon$ should be estimated either from some
prior information about the noise level in the system or by trying
to solve the $\ell_1$-minimization problem with varying values of
$\varepsilon$. Note that the noise level $||\vec{P}^{\rm exp}-\Phi
\vec{\chi}_{\rm ideal}||_{\ell_2}/\sqrt{M}$ defined by the ideal
process is not a good estimate of $\varepsilon_{\rm opt}$; in
particular, for our full data set it is 0.035, which is significantly
higher than $\varepsilon_{\rm opt}=0.0199$.
\subsection{Reduced data set, near-optimal $\varepsilon$ }
\begin{figure}
\caption{(color online) The CS method results using a reduced data
set with randomly chosen $m_{\rm conf}$ configurations out of
$M_{\rm conf}=144$: the fidelity $F(\chi_{\rm CS},\chi_{\rm full})$
(upper line) and the process fidelity $F(\chi_{\rm CS},\chi_{\rm
ideal})$ (lower line) versus $m_{\rm conf}$. The error bars show the
standard deviations over 50 random choices of the configurations.}
\label{fig2}
\end{figure}
Now we apply the CS method to a reduced data set, by randomly
choosing $m_{\rm conf}$ out of $M_{\rm conf}=144$ configurations,
while using all 4 probabilities for each configuration. (Therefore
the number of used probabilities is $m=4m_{\rm conf}$ instead of
$M=4M_{\rm conf}$ in the full data set.) For the noise level
$\varepsilon$ we use a value slightly larger than $\varepsilon_{\rm
opt}$ \cite{ShabaniKosut}. If a value too close to $\varepsilon_{\rm
opt}$ is used, then the optimization procedure often does not find a
solution; this happens when we choose configurations with a
relatively large level of noise in the measured probability values.
For the figures presented in this subsection we used
$\varepsilon=0.02015$, which for the full data set corresponds to
the fidelity of 0.995 compared with $\chi_{\rm full}$ and to the process fidelity of 0.910 (see Fig.\ \ref{fig1}).
Figure \ref{fig2} shows the fidelities $F(\chi_{\rm CS}, \chi_{\rm
full})$ (upper line) and $F(\chi_{\rm CS}, \chi_{\rm ideal})$ (lower
line) versus the number $m_{\rm conf}$ of used configurations. For
each value of $m_{\rm conf}$ we repeat the procedure 50 times,
choosing different random configurations. The error bars in Fig.\
\ref{fig2} show the standard deviations ($\pm \sigma$) calculated
using these 50 numerical experiments, while the central points
correspond to the average values.
We see that the upper (red) line starts with fidelity $F(\chi_{\rm CS},
\chi_{\rm full})=0.995$ for the full data set ($m_{\rm conf}=144$) and decreases with decreasing $m_{\rm conf}$. It is important that this decrease is not very strong, so that we can reconstruct the process matrix reasonably accurately, using only a small fraction of the QPT data. We emphasize that the system of equations (\ref{Vectorized}) in the standard QPT procedure becomes underdetermined at $m_{\rm conf}<80$; nevertheless, the CS method reconstructs $\chi_{\rm full}$ quite well for $m_{\rm conf}\agt 40$ and still gives reasonable results for $m_{\rm conf}\agt 20$. In particular, for $m_{\rm conf}$ between 40 and 80, the reconstruction fidelity $F(\chi_{\rm CS},
\chi_{\rm full})$ changes between 0.96 and 0.98.
The lower (blue) line in Fig.\ \ref{fig2} shows that the process fidelity $F_{\chi}=F(\chi_{\rm CS},\chi_{\rm ideal})$ can also be found quite accurately using only $m_{\rm conf}\agt 40$ configurations (the line remains practically flat), and the CS method still works reasonably well down to $m_{\rm conf}\agt 20$. Even though the blue line remains practically flat down to $m_{\rm conf}\simeq 40$, the error bars grow, which means that in a particular experiment with a substantially reduced set of QPT data the estimated process fidelity $F_\chi$ may noticeably differ from the actual value. It is interesting that the error bars become very large at approximately the same value ($m_{\rm conf}\simeq 20$) at which the average values for the red and blue lines become unacceptably low.
\begin{figure}
\caption{(color online) (a) The process matrix $\chi_{\rm full}$
obtained from the full data set, and the CS-estimated process
matrices $\chi_{\rm CS}$ for (b) $m_{\rm conf}=72$ and (c) $m_{\rm
conf}=36$, drawn in the Pauli-error basis.}
\label{fig-chi}
\end{figure}
Figure \ref{fig-chi} shows examples of the CS-estimated process
matrices $\chi_{\rm CS}$ for $m_{\rm conf}=72$ (middle panel) and
$m_{\rm conf}=36$ (lower panel), together with the full-data process
matrix $\chi_{\rm full}$ (upper panel). The process matrices are
drawn in the Pauli-error basis to display the process imperfections
more clearly. The peak $\chi_{II,II}$ is off the scale and is cut
arbitrarily. We see that the CS-estimated process matrices are
different from the full-data matrix; however, the positions of the
main peaks are reproduced exactly, and their heights are also
reproduced rather well (for a small number of selected
configurations the peaks sometimes appear at wrong positions). It is
interesting to see that the CS procedure suppressed the height of
the minor peaks. Note that both presented $\chi_{\rm CS}$ matrices are based on
data sets corresponding to an underdetermined system of equations.
The computer resources needed for the calculation of the results presented in Fig.\ \ref{fig2} are not demanding: each individual calculation requires about 30 MB of memory and 2--4 seconds on a modest PC (less time for a smaller number of configurations).
\begin{figure}
\caption{(color online) Similar to Fig.\ \ref{fig2}, but for a CZ
gate based on phase qubits.}
\label{fig-phase}
\end{figure}
Besides the presented results, we have also performed analysis for
the CS QPT of two CZ gates based on phase qubits. The results are
qualitatively similar, except that the process fidelity for the
phase-qubit gates was significantly lower: 0.62 and 0.51. The results for one of
these gates are presented in Fig.\ \ref{fig-phase}. Comparing with
Fig.\ \ref{fig2}, we see that CS QPT works better for this
lower-fidelity gate. In particular, the blue line in Fig.\
\ref{fig-phase} is practically flat down to $m_{\rm conf}\simeq 20$
and the error bars are quite small. We think that the CS QPT works
better for a lower-fidelity gate because experimental imperfections
affect the measurement error relatively less in this case than for a
higher-fidelity gate.
Thus, our results show that for a CZ gate realized with
superconducting qubits CS QPT can reduce the number of used QPT
configurations by up to a factor of 7 compared with full QPT, and up
to a factor of 4 compared with the threshold at which the system of
equations for the standard QPT becomes underdetermined.
\subsection{Reduced data set, nonoptimal $\varepsilon$ }
As mentioned above, in a QPT experiment with a reduced data set,
there is no straightforward way to find the near-optimal value of
the noise parameter $\varepsilon$ (which we find here from the full
data set). Therefore, it is important to check how well the CS
method works when a nonoptimal value of $\varepsilon$ is used.
Figure \ref{fig-nonopt} shows the results similar to those in Fig.\
\ref{fig2}, but with several values of the noise parameter:
$\varepsilon/\varepsilon_{\rm opt}=1.01$, 1.2, 1.4, 1.6, and 1.8.
The upper panel shows the fidelity between the matrix $\chi_{\rm
CS}$ and the full-data matrix $\chi_{\rm full}$; the lower panel
shows the process fidelity $F(\chi_{\rm CS}, \chi_{\rm ideal})$. We
see that the fidelity of the $\chi$ matrix estimation, $F(\chi_{\rm
CS}, \chi_{\rm full})$, becomes monotonically worse with increasing
$\varepsilon$, while the estimated process fidelity, $F(\chi_{\rm
CS}, \chi_{\rm ideal})$, may become larger when a nonoptimal
$\varepsilon$ is used.
\begin{figure}
\caption{(color online) (a) Fidelity $F(\chi_{\rm CS},\chi_{\rm
full})$ and (b) process fidelity $F(\chi_{\rm CS},\chi_{\rm ideal})$
versus $m_{\rm conf}$, for several values of the noise parameter:
$\varepsilon/\varepsilon_{\rm opt}=1.01$, 1.2, 1.4, 1.6, and 1.8.}
\label{fig-nonopt}
\end{figure}
Similar results (not presented here) for the CZ gate based on phase
qubits (see Fig.\ \ref{fig-phase}) have shown significantly better
tolerance to a nonoptimal choice of $\varepsilon$; in particular,
even for $\varepsilon =3 \varepsilon_{\rm opt}$ the process fidelity
practically coincides with the blue line in Fig.\ \ref{fig-phase}
(obtained for $\varepsilon \approx \varepsilon_{\rm opt}$). We
believe the lower gate fidelity for phase qubits is responsible for
this relative insensitivity to the choice of $\varepsilon$.
\subsection{Comparison between Pauli-error and SVD bases}
So far for the CS method we have used the Pauli-error basis, in
which the process matrix $\chi$ is expected to be approximately
sparse because the ideal process matrix $\chi_{\rm ideal}$ contains
only one non-zero element, $\chi_{{\rm ideal}, II,II}=1$. However,
there is an infinite number of operator bases with this
property: for example, the SVD basis (see Appendix B) suggested in
Refs.\ \cite{KosutSVD} and \cite{ShabaniKosut}. The process matrix
is different in the Pauli-error and SVD bases; therefore, the CS
optimization should produce different results. To compare the
results, we do the CS optimization in the SVD basis, then convert
the resulting matrix $\chi$ into the Pauli-error basis, and
calculate the fidelity $F(\chi_{\rm CS-SVD},\chi_{\rm CS})$ between
the transformed process matrix and the matrix $\chi_{\rm CS}$
obtained using optimization in the Pauli-error basis directly.
\begin{figure}
\caption{(color online) Comparison between the CS results obtained
in the SVD and Pauli-error bases. The green line shows the relative
fidelity $F(\chi_{\rm CS-SVD},\chi_{\rm CS})$ versus $m_{\rm conf}$;
also shown are the fidelities of $\chi_{\rm CS-SVD}$ with respect to
$\chi_{\rm full}$ and $\chi_{\rm ideal}$, together with the
Pauli-error-basis lines of Fig.\ \ref{fig2} (dashed).}
\label{comp-SVD-Pauli}
\end{figure}
The green line in Fig.\ \ref{comp-SVD-Pauli} shows $F(\chi_{\rm
CS-SVD},\chi_{\rm CS})$ as a function of the selected size of the
data set for the CZ gate realized with Xmon qubits, similar to Fig.\
\ref{fig2} (the same $\varepsilon$ is used). We also show the
fidelity between the SVD-basis-obtained matrix $\chi_{\rm CS-SVD}$
and the full-data matrix $\chi_{\rm full}$ as well as the ideal
process matrix $\chi_{\rm ideal}$. For comparison we also include
the lines shown in Fig.\ \ref{fig2} (dashed lines), obtained using the Pauli-error
basis. As we see, the results obtained in the two bases are close to
each other, though the SVD basis seems to work a little better at
small data sizes, $m_{\rm conf}\simeq 20$. The visual comparison of
$\chi$-matrices obtained in these bases (as in Fig.\ \ref{fig-chi},
not presented here) also shows that they are quite similar. It
should be noted that the calculations in the SVD basis are somewhat
faster ($\sim$2 seconds per point) and require less memory ($\sim$6
MB) than the calculations in the Pauli-error basis. This is because
the matrix $\Phi$ defined in Eq.\ (\ref{Vectorized}) for the CZ gate
contains about half as many non-zero elements in the SVD basis
as in the Pauli-error basis.
All results presented here are obtained using the CVX-SeDuMi
package. The results for the CZ gate obtained using the YALMIP-SDPT3
package are similar when the same value of $\varepsilon$ is used.
Surprisingly, in our realization of the computation, the YALMIP-SDPT3
package still finds reasonable solutions when $\varepsilon$ is
significantly smaller than $\varepsilon_{\rm opt}$ (even when
$\varepsilon$ is zero or negative), in which case the problem
formally has no solution; apparently, in this case the solver
increases the value of $\varepsilon$ until a solution is found. This may seem to be a
good feature of YALMIP-SDPT3. However, using $\varepsilon <
\varepsilon_{\rm opt}$ should decrease the accuracy of the result
(see the next subsection). Moreover, YALMIP-SDPT3 does not work well
for the Toffoli gate discussed in Sec.\ \ref{OurResults-ThreeQu}.
Thus we conclude that the CVX-SeDuMi package is better than the
YALMIP-SDPT3 package for our CS calculations. (Note that this
finding may be specific to our system.)
\subsection{Comparison with least-squares minimization}
\label{Sec-2q-LS}
Besides using the CS method for reduced data sets, we also used the
LS minimization [with constraints (\ref{ChiPositive}) and
(\ref{ChiTracePreserving})] for the same reduced sets. Solid lines
in Fig.\ \ref{fig-LS-2q} show the resulting fidelity $F(\chi_{\rm
LS},\chi_{\rm full})$ compared with the full-data process matrix and
the estimated process fidelity $F(\chi_{\rm LS},\chi_{\rm ideal})$.
\begin{figure}
\caption{(color online) Comparison between the results obtained by
the LS and CS methods. The solid lines are for the LS method; the
dashed lines (same as in Fig.\ \ref{fig2}) are for the CS method.}
\label{fig-LS-2q}
\end{figure}
Somewhat surprisingly, the LS method still works (though less well)
in a significantly underdetermined regime. Naively, we would expect
that in this case Eq.\ (\ref{Vectorized}) can be satisfied exactly,
and there are many exact solutions corresponding to the null space
of the selected part of the matrix $\Phi$. However, numerical
results show that in reality Eq.\ (\ref{Vectorized}) cannot be
satisfied exactly unless the selected data set is very small. The
reason is that the matrix $\chi$ has to be positive, and the
(corrected) experimental probabilities can be close to the limits of
the physical range or even outside it.
The problem is that the experimental probabilities are not directly
obtained from the experiment, but are corrected for imperfect
measurement fidelity \cite{Mariantoni-11}. As a result, they may
become larger than one or smaller than zero. This happens fairly
often for high fidelity gates because for an ideal operation the
measurement results are often zeros and ones, so the experimental
probabilities should also be close to zero or one. Any additional
deviation due to imperfect correction for the measurement fidelity
may then push the probabilities outside of the physical range. It is obvious
that in this case Eq.\ (\ref{Vectorized}) cannot be satisfied
exactly for any physical $\chi$. To resolve this problem one could
consider rescaling the probabilities in such instances, so that they
are exactly one or zero instead of lying outside the range. However,
this also does not help much because a probability of one means that
the resulting state is pure, so this strongly reduces the number of free
parameters in the process matrix $\chi$. As a result, Eq.\
(\ref{Vectorized}) cannot be satisfied exactly, and the LS
minimization is formally possible even in the underdetermined case.
Another reason why Eq.\ (\ref{Vectorized}) may be impossible to
satisfy in the underdetermined case is that the randomly selected
rows of the matrix $\Phi$ can be linearly dependent. Then
mathematically some linear relations between the experimental
probabilities must be satisfied, while in reality they are obviously
not satisfied exactly.
These reasons make the LS minimization a mathematically possible
procedure even in the underdetermined regime. However, as we see
from Fig.\ \ref{fig-LS-2q}, in this case the procedure works less
well than the compressed sensing, estimating the process matrix and
process fidelity with a lower accuracy. Similar calculations for the
CZ gate realized with phase qubits (not presented here) also show
that the LS method does not work well at relatively small $m_{\rm
conf}$.
The advantage of the
compressed sensing in comparison with the LS minimization becomes
even stronger for the three-qubit Toffoli gate considered in the
next section. Note though that when the selected data set is large
enough to give an overdetermined system of equations
(\ref{Vectorized}), the LS method works better than the CS method.
Therefore, the compressed sensing is beneficial only for a
substantially reduced (underdetermined) data set, which is exactly
the desired regime of operation.
\section{Three-Qubit CS QPT for Toffoli gate}
\label{OurResults-ThreeQu}
In this section we apply the compressed sensing method to simulated
tomographic data corresponding to a three-qubit Toffoli gate
\cite{N-C,CoryToffoli,MonzRealizeToffoli,Mariantoni-11,FedorovImplementToffoli}.
As discussed in Sec.\ \ref{Configurations}, the process matrix of a
three-qubit gate contains $16^3-4^3 = 4032$ independent real
parameters, while the full QPT requires $M_{\rm conf} =12^3= 1728$
measurement configurations yielding a total of $M = 12^3\times 2^3 =
13824$ experimental probabilities, if we use $n_{\rm in}=4$ initial
states and $n_{\rm R}=3$ measurement rotations per qubit, with all
qubits measured independently. If we work with a partial data set,
the system of equations (\ref{Vectorized}) becomes underdetermined
if the number $m_{\rm conf}$ of used configurations is less than
$4032/7=576$. In such a regime the traditional maximum likelihood or
LS methods are not expected to provide a good estimate of the process
matrix. In this section we demonstrate that for our simulated
Toffoli gate the compressed sensing method works well even for a
much smaller number of configurations, $m_{\rm conf}\ll 576$.
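The parameter counting above is easy to verify in a few lines (a minimal sketch; the factor of 7 arises because only 7 of the 8 outcome probabilities per configuration are independent):

```python
# Parameter counting for three-qubit QPT (N = 3 qubits, d = 2**N = 8).
d = 2 ** 3
n_params = d ** 4 - d ** 2           # 16^3 - 4^3 = 4032 independent real parameters
m_conf_full = (4 * 3) ** 3           # n_in = 4 states, n_R = 3 rotations per qubit -> 12^3
m_probs = m_conf_full * 2 ** 3       # 8 outcome probabilities per configuration
threshold = n_params // 7            # below this m_conf the system is underdetermined
print(n_params, m_conf_full, m_probs, threshold)
```
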
For the analysis we have simulated experimental data corresponding
to a noisy Toffoli gate by adding truncated Gaussian noise with
a small amplitude to each of $M=13824$ ideal measurement
probabilities~$P^{\rm {ideal}}_i$. We assumed the set of
experimental probabilities in Eq.\ (\ref{Vectorized}) to be of the
form $P^{\textrm{exp}}_i = P^{\textrm{ideal}}_i + \Delta{P}_i$,
where $\Delta{P}_{i}$ are random numbers sampled from the normal
distribution with zero mean and a small standard deviation $\sigma$.
By choosing different values of the standard
deviation $\sigma$ we can change the process fidelity of the simulated
Toffoli gate: a smaller value of $\sigma$ makes the process fidelity
closer to 1. After adding the Gaussian
noise~$\Delta{P}_i$ to the ideal probabilities~$P^{\rm ideal}_i$, we
check whether the resulting simulated
probabilities~$P^{\textrm{exp}}_i$ are in the interval $ [0,1]$. If
a~$P^\textrm{{exp}}_i$ happens to be outside the interval $[0,1]$,
we repeat the procedure until the condition $P^{\textrm{exp}}_i\in
[0,1]$ is satisfied. Finally, we renormalize each set of 8
probabilities corresponding to the same measurement configuration so
that these probabilities add up to~$1$.
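The noise-injection procedure described above can be sketched as follows (an illustrative reimplementation with our own function names, not the code used for the actual simulations):

```python
import random

def add_truncated_noise(p_ideal, sigma, rng):
    """Add Gaussian noise to one probability, resampling until the result is in [0, 1]."""
    while True:
        p = p_ideal + rng.gauss(0.0, sigma)
        if 0.0 <= p <= 1.0:
            return p

def simulate_config(p_ideal_config, sigma, rng):
    """Noisy probabilities for one measurement configuration (8 outcomes for 3 qubits),
    renormalized so that they add up to 1."""
    noisy = [add_truncated_noise(p, sigma, rng) for p in p_ideal_config]
    total = sum(noisy)
    return [p / total for p in noisy]

rng = random.Random(0)
# Example: an ideal configuration in which one outcome is certain.
p_exp = simulate_config([1.0] + [0.0] * 7, 0.01, rng)
```
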
Thus the simulated imperfect quantum process is defined by $M=13824$
probabilities, corresponding to $M_{\rm conf}=1728$ configurations;
the process fidelity for a particular realization (used here) with
$\sigma=0.01$ is $F_\chi=F(\chi_{\rm full},\chi_{\rm
ideal})=0.959$. We then test the efficiency of the compressed sensing
method by randomly selecting $m_{\rm conf}\leq 1728$ configurations,
finding the corresponding process matrix $\chi_{\rm CS}$, and
comparing it with the full-data matrix $\chi_{\rm full}$ by
calculating the fidelity $F(\chi_{\rm CS},\chi_{\rm full})$. We also
calculate the process fidelity $F(\chi_{\rm CS},\chi_{\rm ideal})$
given by $\chi_{\rm CS}$.
\begin{figure}
\caption{(color online) CS QPT for a simulated Toffoli gate. Red
line: fidelity $F(\chi_{\rm CS},\chi_{\rm full})$; blue line:
process fidelity $F(\chi_{\rm CS},\chi_{\rm ideal})$, as functions
of the number $m_{\rm conf}$ of randomly selected configurations.}
\label{Fig-3q-CS}
\end{figure}
The red line in Fig.\ \ref{Fig-3q-CS} shows the fidelity
$F(\chi_{\rm CS},\chi_{\rm full})$ as a function of the number
$m_{\rm conf}$ of randomly selected configurations. The value of
$\varepsilon$ is chosen to be practically equal to $\varepsilon_{\rm
opt}=||(\vec{P}^{\rm exp}_{\rm full} - \Phi\vec{\chi}_{\rm
full})||_{\ell_2}/\sqrt{M}=0.01146$ (the relative difference is less
than $10^{-3}$). The $\ell_1$-minimization is done using the
CVX-SeDuMi package. The error bars are calculated by repeating the
procedure of random selection 7 times. We see a reasonably high
fidelity $F(\chi_{\rm CS},\chi_{\rm full})$ of the reconstructed
process matrix even for small numbers of selected configurations.
For example, $F(\chi_{\rm CS},\chi_{\rm full})=0.95$ for only
$m_{\rm conf}=40$ configurations, which represents a reduction by
more than a factor of 40 compared with the full QPT and
approximately a factor of 15 compared with the threshold of the
underdetermined system of equations.
The blue line in Fig.\ \ref{Fig-3q-CS} shows the process fidelity
$F(\chi_{\rm CS},\chi_{\rm ideal})$ calculated by the CS method. We
see that it remains practically flat for $m_{\rm conf}\agt 40$,
which means that $\chi_{\rm CS}$ can be used efficiently to estimate
the actual process fidelity.
\begin{figure}
\caption{(color online) Comparison between the calculations using CS
and LS methods for the simulated Toffoli gate. Solid lines are for
the LS method; dashed lines (the same as in Fig.\ \ref{Fig-3q-CS})
are for the CS method.}
\label{Fig9}
\end{figure}
Figure \ref{Fig9} shows similar results calculated using the LS
method (for comparison the lines from Fig.\ \ref{Fig-3q-CS} are
shown by dashed lines). We see that the LS method still works in the
underdetermined regime ($m_{\rm conf}<576$); however, it works
significantly worse than the CS method. As an example, for $m_{\rm
conf}=40$ the fidelity of the process matrix estimation using the LS
method is $F(\chi_{\rm LS},\chi_{\rm full})=0.86$, which is
significantly less than $F(\chi_{\rm CS},\chi_{\rm full})=0.95$ for
the CS method. Similarly, for $m_{\rm conf}=40$ the process fidelity
obtained via the CS method, $F(\chi_{\rm CS},\chi_{\rm ideal})=0.96$
is close to the full-data value of 0.959, while the LS-method value,
$F(\chi_{\rm LS},\chi_{\rm ideal})=0.85$, is quite different.
Besides using the Pauli-error basis for the results shown in Fig.\ \ref{Fig-3q-CS}, we have also performed the calculations using the SVD basis. The results (not shown) are very close to those in Fig.\ \ref{Fig-3q-CS}, and the relative fidelity $F(\chi_{\rm CS-SVD}, \chi_{\rm CS})$ is above 0.98 for $m_{\rm conf}>200$ and above 0.95 for $m_{\rm conf}>40$. We have also performed the calculations using non-optimal values of the noise parameter $\varepsilon$. In comparison with the results for the CZ gate shown in Fig.\ \ref{fig-nonopt}, the results for the Toffoli gate (not shown) are more sensitive to the variation of $\varepsilon$. In particular, the fidelity $F(\chi_{\rm CS}, \chi_{\rm full})$ is about 0.93 for $\varepsilon =1.2 \varepsilon_{\rm opt}$ (not significantly depending on $m_{\rm conf}$ for $m_{\rm conf}>40$) and the process fidelity $F(\chi_{\rm CS}, \chi_{\rm ideal})$ for $\varepsilon =1.2 \varepsilon_{\rm opt}$ is approximately 0.93 instead of the actual value 0.96.
Compared with the two-qubit case, it takes significantly more
computing time and memory to solve the $\ell_{1}$-minimization
problem for three qubits. In particular, our calculations in the
Pauli-error basis took about 8 hours per point on a personal
computer for $m_{\rm conf}\simeq 1500$ and about 1.5 hours per point
for $m_{\rm conf}\simeq 40$; this is three orders of magnitude
longer than for two qubits. The amount of used computer memory was
3--10 GB, which is two orders of magnitude larger than for two
qubits. (The calculations in the SVD basis for the Toffoli gate took
1--3 hours per point and $\sim$2 GB of memory.) Such a strong
scaling of required computer resources with the number of qubits
seems to be the limiting factor in extending the CS QPT beyond three
qubits, unless a more efficient algorithm is found. (Note that the LS calculations required a similar amount of memory, but the computation time was much shorter.)
The presented results have been obtained using the CVX-SeDuMi
package. We also attempted to use the YALMIP-SDPT3 package. However,
in our implementation the calculation results were very unreliable
for $m_{\rm conf}<200$ using the SVD basis, and even worse when the
Pauli-error basis was used. Therefore we decided to use only the
CVX-SeDuMi package for the three-qubit CS procedure.
\section{Standard deviation of state fidelity}
\label{FidSquaredSection}
As shown in previous sections, the process matrices $\chi_{\rm CS}$ obtained via the CS method allow us to estimate reliably the process fidelity $F_\chi = F(\chi, \chi_{\rm ideal})$ of a gate using just a small fraction of the full experimental data. While $F_\chi$ is the most widely used characteristic of an experimental gate accuracy, it is not the only one. An equivalent characteristic (usually used in randomized benchmarking) is the average state fidelity, defined as $\overline{F_{\rm st}}=\int {\rm Tr} (\rho_{\rm actual} \rho_{\rm ideal})\, d |\psi_{\rm in}\rangle / \int d |\psi_{\rm in}\rangle$, where the integration is over the initial pure states $|\psi_{\rm in}\rangle$ (using the Haar measure; it is often assumed that $\int d |\psi_{\rm in}\rangle=1$), while the states $\rho_{\rm ideal}$ and $\rho_{\rm actual}$ are the ideal and actual final states for the initial state $|\psi_{\rm in}\rangle$. The average state fidelity $\overline{F_{\rm st}}$ is
sometimes called the ``gate fidelity'' \cite{Magesan-12} and is naturally measured in randomized benchmarking ($F_{\rm RB}=\overline{F_{\rm st}}$); it is linearly related~\cite{Horodecki,NielsenAveGateFidelity} to the process fidelity, $\overline{F_{\rm st}}=(F_\chi d +1)/(d+1)$, where $d=2^N$ is the Hilbert space dimension.
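This linear relation is trivial to evaluate; for instance, for the simulated Toffoli gate of Sec.\ \ref{OurResults-ThreeQu} ($F_\chi=0.959$, $N=3$) it gives $\overline{F_{\rm st}}\approx 0.964$ (a minimal sketch):

```python
def avg_state_fidelity(f_chi, n_qubits):
    """Average state fidelity from the process fidelity: (F_chi*d + 1)/(d + 1), d = 2^N."""
    d = 2 ** n_qubits
    return (f_chi * d + 1) / (d + 1)

# A perfect process gives a perfect average state fidelity.
assert avg_state_fidelity(1.0, 3) == 1.0

# Three-qubit example with F_chi = 0.959 (the simulated Toffoli gate value):
f_st = avg_state_fidelity(0.959, 3)
```
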
Besides the average state fidelity, an obviously important characteristic of a gate operation is the worst-case state fidelity $F_{\rm st, min}$, which is minimized over the initial state. Unfortunately, the minimum state fidelity is hard to find computationally even when the process matrix $\chi$ is known. Another natural characteristic is the standard deviation of the state fidelity,
\be
\Delta F_{\rm st}=\sqrt{\overline{F^2_{\rm st}}-\overline{F_{\rm st}}^{\,2}},
\label{std-def}\ee
where $\overline{F_{\rm st}^2}=\int [{\rm Tr} (\rho_{\rm actual}
\rho_{\rm ideal})]^2\, d |\psi_{\rm in}\rangle / \int d |\psi_{\rm
in}\rangle$ is the average square of the state fidelity. The
advantage of $\Delta F_{\rm st}$ in comparison with $F_{\rm st,
min}$ is that $\overline{F_{\rm st}^2}$ and $\Delta F_{\rm st}$ can
be calculated from $\chi$ in a straightforward way \cite{Molmer2008,
Emerson2011}. Our way of calculating $\overline{F_{\rm st}^2}$ is
described in Appendix C [see Eq.\ (\ref{fidsquared})].
\begin{figure}
\caption{(color online) Blue (upper) line: average state infidelity $1-\overline{F_{\rm st}}$; lower line: standard deviation of the state fidelity $\Delta F_{\rm st}$, for the experimental CZ gate, as functions of the number of selected configurations $m_{\rm conf}$.}
\label{Fig-std-2q}
\end{figure}
We have analyzed numerically how well the CS QPT estimates $\Delta F_{\rm st}$ from the reduced data set, using the previously calculated process matrices $\chi_{\rm CS}$ for the experimental CZ gate and the simulated Toffoli gate (considered in Secs.\ \ref{results-two-qubits} and \ref{OurResults-ThreeQu}). The results are presented in Figs.\ \ref{Fig-std-2q} and \ref{Fig-std-3q}. We show the average state infidelity, $1-\overline{F_{\rm st}}$, and the standard deviation of the state fidelity, $\Delta F_{\rm st}$, as functions of the number of selected configurations, $m_{\rm conf}$. The random selection of used configurations is repeated 50 times for Fig.\ \ref{Fig-std-2q} (7 times for Fig.\ \ref{Fig-std-3q}); the error bars show the statistical variation, while the dots show the average values.
As seen from Figs.\ \ref{Fig-std-2q} and \ref{Fig-std-3q}, the CS method estimates reasonably well not only the average state fidelity $\overline{F_{\rm st}}$ (which is equivalent to $F_{\chi}$ presented in Figs.\ \ref{fig2} and \ref{Fig-3q-CS}), but also its standard deviation $\Delta F_{\rm st}$.
It is interesting to note that $\Delta F_{\rm st}$ is significantly smaller than the infidelity $1-\overline{F_{\rm st}}$, which means that the state fidelity ${\rm Tr} (\rho_{\rm actual}\rho_{\rm ideal})$ does not vary significantly for different initial states [the ratio $\Delta F_{\rm st}/(1-\overline{F_{\rm st}})$ is especially small for the simulated Toffoli gate, though this may be an artifact of our particular simulation procedure].
\begin{figure}
\caption{(color online) The same as in Fig.\ \ref{Fig-std-2q}, but for the simulated Toffoli gate.}
\label{Fig-std-3q}
\end{figure}
\section{Conclusion}\label{Conclusion}
In this paper we have numerically analyzed the efficiency of
compressed sensing quantum process tomography (CS QPT)
\cite{KosutSVD,ShabaniKosut} applied to superconducting qubits (we
did not consider the CS method of Refs.\
\cite{GrossFlammia2010,Flammia2012}). We have used experimental data
for two-qubit CZ gates realized with Xmon and phase qubits, and
simulated data for the three-qubit Toffoli gate with numerically
added noise. We have shown that CS QPT permits a reasonably high
fidelity estimation of the process matrix from a substantially
reduced data set compared to the full QPT. In particular, for the CZ
gate (Fig.\ \ref{fig2}) the amount of data can be reduced by a
factor of $\sim$7 compared to the full QPT (which is a factor of $\sim$4
compared to the threshold of the underdetermined system of equations).
For the Toffoli gate (Fig.\ \ref{Fig-3q-CS}) the data reduction
factor is $\sim$40 compared to the full QPT ($\sim$15 compared to
the threshold of underdeterminacy).
In our analysis we have primarily used two characteristics. The
first characteristic is the comparison between the CS-obtained
process matrix $\chi_{\rm CS}$ and the matrix $\chi_{\rm full}$
obtained from the full data set; this comparison is quantitatively
represented by the fidelity $F(\chi_{\rm CS},\chi_{\rm full})$. The
second characteristic is how well the CS method estimates the
process fidelity $F_\chi$, i.e., how close $F(\chi_{\rm
CS},\chi_{\rm ideal})$ is to the full-data value $F(\chi_{\rm
full},\chi_{\rm ideal})$. Besides these two characteristics, we have
also calculated the standard deviation of the state fidelity $\Delta
F_{\rm st}$ [Eq.\ (\ref{std-def})] and checked how well the CS
method estimates $\Delta F_{\rm st}$ from a reduced data set (Figs.\
\ref{Fig-std-2q} and \ref{Fig-std-3q}). Our compressed sensing
method depends on the choice of the basis, in which the process
matrix should be approximately sparse, and also depends on the choice of
the noise parameter $\varepsilon$ [see Eq.\
(\ref{ConditionsL1Problem})]. We have used two bases: the
Pauli-error basis and the SVD basis. The results obtained in both
bases are similar to each other, though the SVD basis required less
computational resources. The issue of choosing proper $\varepsilon$
is not trivial. In our calculations we have used a value slightly
larger than the noise level calculated from the full data set.
However, in an experiment with a reduced data set this way of
choosing $\varepsilon$ is not possible, so its value should be
chosen from an estimate of the inaccuracy of the experimental
probabilities. We have shown that the CS method tolerates some
inaccuracy of $\varepsilon$ (up to $\sim$60$\%$ for the results
shown in Fig.\ \ref{fig-nonopt}); however, finding a proper way of
choosing $\varepsilon$ is still an open issue.
We have also compared the performance of the CS method with the
least squares optimization. Somewhat surprisingly, the LS method can
still be applied when the system of equations (\ref{Vectorized}) is
underdetermined (unless the data set size is too small). This is
because the requirement that the process matrix be physical
(positive, trace-preserving) usually makes it impossible to satisfy
Eq.\ (\ref{Vectorized}) exactly. However, even though the LS method formally works, it
gives a less accurate estimate of $\chi$ than the CS method in the
significantly underdetermined regime (although it does give a better
estimate in the overdetermined regime). The advantage of the CS
method over the LS method is more pronounced for the Toffoli gate
(Fig.\ \ref{Fig9}).
Thus the CS QPT is useful for two-qubit and three-qubit quantum
gates based on superconducting qubits. The method offers a very
significant reduction of the needed amount of experimental data.
However, the scaling of the required computing resources with the
number of qubits seems to be prohibitive: in our calculations it
took three orders of magnitude longer and two orders of magnitude
more memory for the three-qubit-gate calculation than for two
qubits. Such a scaling of computing resources seems to be a limiting
factor in the application of our implementation of the CS method for
QPT of four or more qubits. Therefore, the development of more
efficient numerical algorithms for the CS QPT is an important task
for future research.
\section{Pauli-error basis}
\label{AppendixChiTilde}
In this Appendix we discuss the definition of the Pauli-error basis
used in this paper. The detailed theory of the QPT in the
Pauli-error basis is presented in Ref.\ \cite{Korotkov-13}.
Let us start with a description of a quantum process $\mathcal{E}$ in
the Pauli basis $\{ {\cal P}_\alpha\}$,
\be
\rho^{\rm in} \mapsto \mathcal{E}(\rho^{\rm in})=\sum_{\alpha,\beta
= 1}^{d^{2}}\chi_{\alpha \beta} {\cal P}_{\alpha}\rho^{\rm in} {\cal
P}_{\beta}^{\dagger},
\label{AppA-Pauli}\ee
where for generality ${\cal P}$ is not necessarily Hermitian (to
include the modified Pauli basis, in which $Y=-i\sigma_y$). Recall
that $d=2^N$ is the dimension of the Hilbert space for $N$ qubits.
In order to compare the process $\mathcal{E}$ with a desired unitary rotation $U$ [i.e.\ with the map $\mathcal{U}(\rho^{\rm in})=U\rho^{\rm in}U^\dagger$], let us formally apply the inverse unitary $U^{-1}=U^\dagger$ after the process $\mathcal{E}$. The resulting composed process
\be
\tilde{\mathcal{E}} = \mathcal{U}^{-1} \circ \mathcal{E}
\label{tilde-E}\ee
characterizes the error: if $\mathcal{E}$ is close to the desired $\mathcal{U}$, then $\tilde{\mathcal{E}}$ is close to the identity (memory) operation.
The process matrix $\tilde{\chi}$ of $\tilde{\mathcal{E}}$ in the Pauli basis is what we call in this paper the process matrix in the Pauli-error basis.
The process matrix $\tilde{\chi}$ obviously satisfies the relation
\be
\sum_{\alpha,\beta }\tilde\chi_{\alpha \beta} {\cal P}_{\alpha}\rho^{\rm in} {\cal P}_{\beta}^{\dagger}=
U^{-1}\left( \sum_{\alpha,\beta }\chi_{\alpha \beta}{\cal P}_{\alpha}\rho^{\rm in} {\cal P}_{\beta}^{\dagger}\right) U,
\ee
which can be rewritten as
\be
\sum_{\alpha,\beta }\tilde\chi_{\alpha \beta} (U{\cal P}_{\alpha}) \rho^{\rm in} (U{\cal P}_{\beta})^{\dagger}=
\sum_{\alpha,\beta }\chi_{\alpha \beta}{\cal P}_{\alpha}\rho^{\rm in} {\cal P}_{\beta}^{\dagger}.
\ee
Therefore the error matrix $\tilde\chi$ is formally the process matrix of the original map $\mathcal{E}$, expressed in the operator basis
\be
E_\alpha = U{\cal P}_\alpha.
\ee
This is the Pauli-error basis used in our paper. (Another obvious way to define the error basis is to use $E_\alpha={\cal P}_\alpha U$ \cite{Korotkov-13}; however, we do not use this second definition here.) The Pauli-error basis matrices $E_\alpha$ have the same normalization as the Pauli matrices,
\be
\langle E_\alpha | E_\beta \rangle = {\rm Tr} (E^\dagger_\alpha E_\beta)=d\, \delta_{\alpha\beta}.
\ee
The matrices $\chi$ and $\tilde\chi$ (in the Pauli and Pauli-error bases) are related via unitary transformation,
\be
\tilde\chi=V\chi V^\dagger, \,\,\, V_{\alpha \beta}={\rm Tr}({\cal P}_\alpha^\dagger U^\dagger {\cal P}_\beta)/d.
\ee
The matrix $\tilde\chi$ has a number of convenient properties
\cite{Korotkov-13}. It has only one large element, which is at the
upper left corner and corresponds to the process fidelity,
$\tilde\chi_{\cal II}=F_\chi=F(\chi, \chi_{\rm ideal})$. All other
non-zero elements of $\tilde\chi$ describe imperfections. In
particular, the imaginary elements in the left column (or upper row)
characterize unitary imperfections (assuming the standard
non-modified Pauli basis), other off-diagonal elements are due to
decoherence, and the diagonal elements correspond to the error
probabilities in the Pauli-twirling approximation.
\section{SVD basis}
\label{AppendixSVD}
The SVD basis used in this paper is introduced following Ref.\
\cite{KosutSVD}. Let us start with the so-called natural basis for
$d\times d$ matrices, which consists of matrices $E^{\rm
nat}_\alpha$, having one element equal to one, while other elements
are zero. The numbering corresponds to the vectorized
form obtained by stacking the columns: for $\alpha = (d-1)i+j$ the matrix is
$(E^{\rm nat}_\alpha )_{lk}=\delta_{il}\delta_{jk}$. For a desired
unitary rotation $U$, the process matrix $\chi^{\rm nat}$ in the
natural basis can be obtained by expanding $U$ in the natural basis,
$U=\sum_\alpha u_\alpha E^{\rm nat}_\alpha$, and then constructing
the outer product,
\be
\chi^{\rm nat}_{\alpha\beta}= u_\alpha u_\beta^*.
\ee
For example, for the ideal CZ gate the components $u_\alpha$ are $(1 ,0 ,0 ,0 ,0 ,1 ,0 ,0 ,0 ,0 ,1 ,0 ,0 ,0 ,0 ,-1)$, and $\chi^{\rm nat}$ has 16 non-zero elements, equal to $\pm 1$. Note that $\chi^{\rm nat}$ is a rank-1 matrix with ${\rm Tr}(\chi^{\rm nat})=\sum_\alpha |u_\alpha|^2=d$.
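This construction for the CZ example can be checked directly (a minimal sketch):

```python
# chi^nat for the ideal CZ gate: chi^nat_{ab} = u_a u_b^*, with u the
# expansion coefficients of U_CZ = diag(1, 1, 1, -1) in the natural basis.
u = [1, 0, 0, 0,
     0, 1, 0, 0,
     0, 0, 1, 0,
     0, 0, 0, -1]
chi_nat = [[ua * ub for ub in u] for ua in u]    # u is real, so no conjugation needed

trace = sum(chi_nat[a][a] for a in range(16))    # = sum_a |u_a|^2 = d = 4
nonzero = [chi_nat[a][b] for a in range(16) for b in range(16) if chi_nat[a][b] != 0]
```
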
We then apply a numerical SVD, which diagonalizes the matrix $\chi^{\rm nat}$ for the desired unitary process,
\be
\chi^{\rm nat}= V \text{diag}(d,0,\ldots, 0) V^{\dagger},
\label{AppB-decomp}\ee
where $V$ is a unitary $d^2\times d^2$ matrix and the only non-zero eigenvalue is equal to $d$ because ${\rm Tr}(\chi^{\rm nat})=d$. The columns of the resulting transformation matrix $V$ are the vectorized forms of the SVD-basis matrices $E_\alpha^{\rm SVD}$,
\begin{equation}
E_\alpha^{\rm SVD} =\sum\limits_{\beta=1}^{d^2}V_{\beta \alpha}\, E_{\beta}^{\rm nat}.
\label{AppB-ESVD}\end{equation}
Note that the notation $V$ used in Appendix A has a different meaning.
The matrices of the SVD basis introduced via Eqs.\ (\ref{AppB-decomp}) and (\ref{AppB-ESVD}) have a different normalization than the Pauli basis,
\begin{equation}
{\rm Tr} (E_{\alpha}^{{\rm SVD}\dagger}E_{\beta}^{\rm SVD})=\delta_{\alpha \beta}.
\end{equation}
Correspondingly, the normalization of the process matrix $\chi^{\rm SVD}$ in the SVD basis is ${\rm Tr}\, \chi^{\rm SVD}=d$ (for a trace-preserving process). For the ideal unitary process the matrix $\chi^{\rm SVD}$ has only one non-zero (top left) element, which is equal to $d$. For an imperfect realization of the desired unitary operation the top left element is related to the process fidelity as $\chi^{\rm SVD}_{11}=F_\chi d$.
Note that when the numerical SVD procedure (\ref{AppB-decomp}) is applied to $\chi^{\rm nat}$ of the ideal CZ and/or Toffoli gates, most of the resulting SVD-basis matrices $E_\alpha^{\rm SVD}$ coincide with matrices of the natural basis $E_\alpha^{\rm nat}$. Since these matrices contain only one non-zero element, the matrix $\Phi$ in Eq.\ (\ref{Vectorized}) is simpler (has more zero elements) than for the Pauli or Pauli-error basis. (The number of non-zero elements of $\Phi$ in the SVD basis is roughly a factor of 2 smaller for the CZ gate and a factor of 4 smaller for the Toffoli gate.) As a result, from the computational point of view it is easier to use the SVD basis than the Pauli-error basis: less memory and less computation time are needed.
\section{Average square of state fidelity}
\label{AppendixFidSquared}
In this appendix we present a detailed derivation of an explicit
formula for the squared state fidelity $\overline{F^{2}_{\rm st}}$,
averaged over all pure initial states, for a quantum operation,
represented via Kraus operators. We follow the same steps as in
Ref.\ \cite{Emerson2011}, where a closed-form expression for
$\overline{F^{2}_{\rm st}}$ in terms of the process matrix $\chi$
was presented. Although our approach is not new, we show it here for
completeness.
We begin by writing the quantum operation as $\mathcal{E}=\mathcal{U}\circ \tilde{\mathcal{E}}$ [see Eq.\ (\ref{tilde-E})], where $\mathcal{U}$ corresponds
to the ideal (desired) unitary operation, while the map $\tilde{\mathcal{E}}$ accounts for the errors in the actual gate. Let
\begin{equation}
\label{Lambda}
\tilde{\mathcal{E}} (\rho)=\sum_{n} A_{n} \rho A_{n}^{\dagger}
\end{equation}
be the operator-sum representation of $\tilde{\mathcal{E}}$, where $\{A_{n}\}_{n=1}^{d^{2}}$ are Kraus operators satisfying the trace-preservation
condition $\sum_{n}A^{\dagger}_{n} A_{n}=\mathbb{I} $. The Kraus operators can be easily obtained from the process matrix $\chi_{\alpha
\beta}$ describing the operation $\mathcal{E}$. Note that by diagonalizing $\chi$, i.e., $\chi= V D V^{\dagger}$, where $V$ is
unitary and $D=\text{diag}(\lambda_{1}, \lambda_{2},\ldots)$ with $\lambda_{n}\geq 0$, we can
express the Kraus operators in Eq.~(\ref{Lambda}) as $A_{n}=\sqrt{\lambda_{n}}\, U^{\dagger} \sum_{\alpha}E_{\alpha} V_{\alpha n}$, where $U$ is the desired unitary.
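As a simple illustration of this prescription (our own example, not taken from the experimental data): for a $\chi$ that is already diagonal in the Pauli basis (a Pauli channel, so $V=\mathbb{I}$) and a target $U=\mathbb{I}$, the Kraus operators reduce to $A_n=\sqrt{\lambda_n}\,{\cal P}_n$, and the trace-preservation condition is easy to verify:

```python
# Illustration (ours): Kraus operators for a chi diagonal in the Pauli basis.
# Then A_n = sqrt(lam_n) P_n; we verify sum_n A_n^dag A_n = I.
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
paulis = [I2, X, Y, Z]
lam = [0.9, 0.04, 0.03, 0.03]        # example eigenvalues of chi (sum to 1)

def dag(a):                          # conjugate transpose
    n = len(a)
    return [[a[j][i].conjugate() for j in range(n)] for i in range(n)]

def mul(a, b):                       # product of square matrices
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

kraus = [[[lam[n] ** 0.5 * x for x in row] for row in paulis[n]] for n in range(4)]

S = [[0, 0], [0, 0]]                 # accumulates sum_n A_n^dag A_n
for A in kraus:
    AdA = mul(dag(A), A)
    S = [[S[i][j] + AdA[i][j] for j in range(2)] for i in range(2)]
```
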
Now, the state fidelity $F_{\phi}$ (assuming a pure initial state
$|\phi\rangle$)
can be written in terms of $\{A_{n}\}$ as follows:
\begin{equation}
F_{\phi} \equiv \bra{\phi} \tilde{\mathcal{E}} (\phi)\ket{\phi}=
\sum_{n}\bra{\phi}A_{n}\ket{\phi}\bra{\phi}A_{n}^{\dagger}\ket{\phi}.
\end{equation}
Notice that by using the identity $ \tr(A\otimes B)=\tr(A)\tr(B)$,
one can rewrite the above expression for $F_{\phi}$ as
\begin{equation}
F_{\phi}= \sum_{n} \tr{[(A_{n}\otimes A_{n}^{\dagger})( \ket{\phi}\bra{\phi}^{\otimes 2})]},
\end{equation}
where the notation $\ket{\phi} \bra{\phi}^{\otimes k} \equiv
\underbrace{\ket{\phi}\bra{\phi}\otimes \ket{\phi}\bra{\phi} \ldots
\otimes\ket{\phi}\bra{\phi}}_{k}$ means that the state is copied in
$k$ identical Hilbert spaces. Similarly, one can express the squared
state fidelity as
\begin{eqnarray}
\label{fsquaredS4}
F_{\phi}^{2}&=& \sum_{n,m}\bra{\phi}A_{n}\ket{\phi}\bra{\phi}A_{n}^{\dagger}\ket{\phi}
\bra{\phi}A_{m}\ket{\phi}\bra{\phi}A_{m}^{\dagger}\ket{\phi}\notag \\
&=&\sum_{n,m}\tr\big[ (A_{n}\otimes A_{n}^{\dagger}\otimes A_{m}\otimes A_{m}^{\dagger} )
( \ket{\phi} \bra{\phi}^{\otimes 4} ) \big]. \qquad
\end{eqnarray}
In order to compute the average state fidelity $\overline{F_{\rm
st}}=\int F_{\phi}\, d\phi$, the average square of the state
fidelity $\overline{F^{2}_{\rm st}}=\int F_{\phi}^{2} \, d\phi$,
and higher powers of $F_{\rm st}$ (we assume the normalized integration over the initial pure states, $\int d\phi=1$), one can use the following
result~\cite{Poulin2004}
\begin{equation}
\label{HilbertIntegrals}
\int \ket{\phi} \bra{\phi}^{\otimes k} d\phi =
\frac{1}{ {\binom{k+d-1}{d-1}}} \, \Pi_{k}, \quad
\Pi_{k}\equiv \frac{1}{k!}\sum_{\sigma \in S_{k}} P_{\sigma}.
\end{equation}
Here $\sigma$ is an element of the permutation group $S_{k}$ (the
$k!$ permutations of $k$ objects) and the operator $P_{\sigma}$ is
the representation of $\sigma$ in $\mathcal{H}^{\otimes
k}=\underbrace{\mathcal{H}\otimes \ldots \mathcal{H}}_{k}$, i.e.,
\begin{equation}
P_{\sigma}(\ket{\phi_{1}}\otimes\ket{\phi_{2}}\ldots \otimes\ket{\phi_{k}})=\ket{\phi_{\sigma(1)}}\otimes \ket{\phi_{\sigma(2)}} \ldots
\otimes\ket{\phi_{\sigma(k)}}.
\label{AppC6}\end{equation}
(The operator $P_\sigma$ acts on the wavefunction of $kN$ qubits by permuting $k$ blocks, each containing $N$ qubits.)
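For $k=1$, Eq.\ (\ref{HilbertIntegrals}) reduces to $\int \ket{\phi}\bra{\phi}\, d\phi=\mathbb{I}/d$, which can be checked by Monte Carlo averaging over Haar-random states (our own sketch; normalized complex Gaussian vectors are Haar-distributed):

```python
import random

# Monte Carlo check of the k = 1 case: the Haar average of |phi><phi| is I/d.
rng = random.Random(1)
d, n_samples = 2, 20000
avg = [[0.0 + 0.0j] * d for _ in range(d)]

for _ in range(n_samples):
    # Haar-random pure state: normalized complex Gaussian vector.
    v = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
    norm = sum(abs(a) ** 2 for a in v) ** 0.5
    v = [a / norm for a in v]
    for i in range(d):
        for j in range(d):
            avg[i][j] += v[i] * v[j].conjugate() / n_samples
# avg should be close to I/d = diag(1/2, 1/2)
```
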
In view of the above discussion, we see that the $k$th moment
$\overline{F^{k}_{\rm st}}\equiv \int F_{\phi}^{k}\, d\phi$ can be
expressed
as a sum of $(2k)!$ terms corresponding to the elements in
$S_{2k}$ [note that $k$ in Eqs.\ (\ref{HilbertIntegrals}) and (\ref{AppC6}) is now replaced with $2k$],
\begin{equation}
\label{kth-moment} \overline{F^{k}_{\rm st}}=\frac{\displaystyle \sum_{n_{1}\ldots n_{k}} \sum_{\sigma \in
S_{2k}} \tr [( A_{n_{1}}\otimes
A^{\dagger}_{n_{1}}\otimes \ldots
A_{n_{k}}\otimes
A^{\dagger}_{n_{k}}) P_{\sigma}]}{\binom{2k+d-1}{d-1}{(2k)!}}.
\end{equation}
For example, the average state fidelity $\overline{F_{\rm st}}$ is
determined by the sum over $S_{2}$,
\begin{eqnarray}
&& \tr(A_{n} \otimes A_{n}^{\dagger} \ \Pi_{2})= \frac{1}{2}\sum_{\sigma \in S_{2}} \tr(A_{n} \otimes A_{n}^{\dagger} P_{\sigma})\notag \\
&& \hspace{1cm} =\frac{1}{2}\sum_{\sigma \in
S_{2}}\sum_{i_{1},i_{2}}\bra{i_{1},i_{2}}A_{n}\otimes
A_{n}^{\dagger} \ket{\sigma(i_{1}),
\sigma(i_{2})}\notag \qquad \\
&& \hspace{1cm} =\frac{1}{2}\big(
\underbrace{\tr(A_{n})\tr(A_{n}^{\dagger})}_{\textrm{identity}}+
\underbrace{\tr(A_{n}A_{n}^{\dagger})}_{\textrm{transposition}}\big),
\end{eqnarray}
which yields the well-known result~\cite{NielsenAveGateFidelity}
\begin{equation}
\overline{F_{\rm st}}=\frac{1}{d(d+1)}\left( \sum_{n}|\tr(A_{n})|^2+
d\right).
\end{equation}
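As a numerical sanity check (a sketch of ours, not part of the original derivation; the function name is our choice), this formula can be evaluated directly from a set of Kraus operators:

```python
import numpy as np

def avg_fidelity(kraus, d):
    """Average state fidelity for a channel with Kraus operators A_n
    on a d-dimensional space: (sum_n |tr A_n|^2 + d) / (d(d+1))."""
    s = sum(abs(np.trace(A)) ** 2 for A in kraus)
    return (s + d) / (d * (d + 1))
```

For the identity channel (single Kraus operator $\mathbb{I}$) this gives $\overline{F_{\rm st}}=1$, and for the qubit dephasing channel with Kraus operators $\sqrt{p}\,\mathbb{I}$ and $\sqrt{1-p}\,Z$ it gives $(4p+2)/6$.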
In order to express $\overline{F^{2}_{\rm st}}$ in terms of Kraus operators, it is convenient to write each element of the group $S_{4}$ as a
product of disjoint cycles. The 24 elements of the permutation groups $S_{4}$ can be grouped as follows (we use the so-called cycle notation for permutations):
$\bullet$ Identity (1 element): (1)(2)(3)(4) (this notation means that no change of position occurs for all numbers in the sequence 1234);
$\bullet$ Transpositions (6 elements): (12), (13), (14), (23), (24), and (34) (this notation means that only the two specified numbers in the sequence are exchanged);
$\bullet$ 3-cycles (8 elements): (123), (132), (124), (142), (134), (143), (234), and (243) [here the notation (123) means the permutation 1$\rightarrow$2$\rightarrow$3$\rightarrow$1, while the remaining number does not change];
$\bullet$ Products of transpositions (3 elements): (12)(34), (13)(24), and (14)(23) (two pairs of numbers exchange);
$\bullet$ 4-cycles (6 elements): (1234), (1243), (1324), (1342), (1423), and (1432) [here (1234) means the permutation 1$\rightarrow$2$\rightarrow$3$\rightarrow$4$\rightarrow$1].
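The counts above ($1+6+8+3+6=24$ elements) are easy to verify by brute-force enumeration; the following snippet is our illustration only, not part of the derivation:

```python
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    """Sorted cycle lengths of a permutation, given as a tuple
    with perm[i] = image of i."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:  # walk the cycle containing `start`
            seen.add(j)
            j = perm[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

counts = Counter(cycle_type(p) for p in permutations(range(4)))
# (1,1,1,1): identity, (1,1,2): transpositions, (1,3): 3-cycles,
# (2,2): products of transpositions, (4,): 4-cycles
```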
This classification simplifies keeping track of the terms $N_{\sigma}\equiv \sum_{n,m}\tr\big[ \big( A_{n}\otimes A_{n}^{\dagger}\otimes
A_{m}\otimes A_{m}^{\dagger} \big) P_{\sigma}\big]$ in Eq.~(\ref{kth-moment}). The corresponding contributions to the sum $\sum_{\sigma \in
S_{4}} N_{\sigma}$ are the following:
\begin{align*}
&\text{Identity:}\notag \\
& \big(\sum_{n}|\tr(A_{n})|^{2}\big)^{2}. \notag \\
&\text{Transpositions:} \notag \\
& 2d \sum_{n}|\tr(A_{n})|^2+ 2\sum_{n,m}\tr(A_{n}A_{m}^{\dagger})\tr(A_{n}^{\dagger})\tr(A_{m})\notag \\
& + \sum_{n,m}(\tr(A_{n}A_{m})\tr(A_{n}^{\dagger})\tr(A_{m}^{\dagger})+{\rm h.c.}).\notag \\
&\text{3-cycles:} \notag \\
& 4 \sum_{n}|\tr(A_{n})|^2 +2\sum_{n,m}(\tr(A_{n}A_{n}^{\dagger}A_{m})\tr(A_{m}^{\dagger})+{\rm h.c.}).\notag \\
&\text{Products of transpositions:} \notag \\
&d^2+\sum_{n,m}(|\tr(A_{n}A_{m})|^{2}+ |\tr(A_{n}A_{m}^{\dagger})|^{2}). \notag \\
&\text{4-cycles:} \notag \\
& 3d +\sum_{n,m}\tr(A_{n}A_{n}^{\dagger}A_{m}A_{m}^{\dagger})+2\sum_{n,m}\tr(A_{n}A_{m}A_{n}^{\dagger} A_{m}^{\dagger}).
\end{align*}
(We used the trace-preservation condition $\sum_{n} A_{n}^{\dagger}A_{n}=\mathbb{I}$.) Substituting the above terms in
Eq.~(\ref{kth-moment}) (with $k=2$), we finally obtain the average square of the state fidelity,
\begin{eqnarray}
\label{fidsquared}
&& \hspace{-0.5cm} \overline{F^{2}_{\rm st}}=\frac{1}{d(d+1)(d+2)(d+3)}\Big(d^{2}+3d
\nonumber\\
&& +2(d+2)\sum_{n}|\tr(A_{n})|^2+\big( \sum_{n}|\tr(A_{n})|^2\big)^2
\nonumber\\
&& +\sum_{n,m}\big( |\tr(A_{n}A_{m})|^2+ |\tr(A_{n}A_{m}^{\dagger})|^2\big)
\nonumber \\
&& + 2\sum_{n,m}\tr(A_{n}A_{m} A_{n}^{\dagger}A_{m}^{\dagger})
+\sum_{n,m}\tr(A_{n}A_{n}^{\dagger}A_{m}A_{m}^{\dagger}) \nonumber\\
&& +2\sum_{n,m} \tr(
A_{n}A_{m}^{\dagger}) \tr(A_{n}^{\dagger})\tr(A_{m})
\nonumber \\
&&+2 \sum_{n,m} {\rm Re}[ \tr(A_{n}A_{m})\tr(A_{n}^{\dagger})\tr(A_{m}^{\dagger})]
\nonumber \\
&& +4\sum_{n,m} {\rm Re} [\tr(A_{n}A_{n}^{\dagger}A_{m}^{\dagger})\tr(A_{m})
] \Big).
\end{eqnarray}
This is the formula we used in this paper to calculate $\overline{F^{2}_{\rm st}}$.
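The following sketch (ours, not from the original text; the function name is our choice) implements Eq.~(\ref{fidsquared}) term by term, which is convenient for spot-checking the algebra above:

```python
import numpy as np

def avg_fidelity_squared(kraus, d):
    """Second moment of the state fidelity, following Eq. (fidsquared)
    term by term for Kraus operators on a d-dimensional space."""
    tr = np.trace
    dag = lambda A: A.conj().T
    s1 = sum(abs(tr(A)) ** 2 for A in kraus)
    # terms not involving double sums over (n, m)
    total = d ** 2 + 3 * d + 2 * (d + 2) * s1 + s1 ** 2
    for A in kraus:          # A plays the role of A_n
        for B in kraus:      # B plays the role of A_m
            total += abs(tr(A @ B)) ** 2 + abs(tr(A @ dag(B))) ** 2
            total += 2 * tr(A @ B @ dag(A) @ dag(B))
            total += tr(A @ dag(A) @ B @ dag(B))
            total += 2 * tr(A @ dag(B)) * tr(dag(A)) * tr(B)
            total += 2 * (tr(A @ B) * tr(dag(A)) * tr(dag(B))).real
            total += 4 * (tr(A @ dag(A) @ dag(B)) * tr(B)).real
    return (total / (d * (d + 1) * (d + 2) * (d + 3))).real
```

For the identity channel this returns $1$, and for any channel the output should satisfy $\overline{F_{\rm st}}^{\,2} \leq \overline{F^{2}_{\rm st}} \leq \overline{F_{\rm st}}$.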
\end{document}
|
\begin{document}
\title{Definability of initial segments}
\section{Introduction}
Let $T$ be a first order theory formulated in the language $L$ and
$P$ a new relation symbol not in $L$.
Let $\varphi(P)$ be an $L \cup \{P\}$-sentence.
Let us say that $\varphi(P)$ defines $P$ implicitly in $T$
if $T$ proves $\varphi(P) \wedge \varphi(P') \rightarrow
\forall x(P(x) \leftrightarrow P'(x))$, where $P'$ is a second new relation symbol.
Beth's definability theorem states that if
$\varphi(P)$ defines $P$ implicitly in $T$ then
$P(x)$ is equivalent to an $L$-formula.
However, if we consider implicit definability in a
given model alone, the situation changes.
For a more precise explanation,
let us say that a subset $A$ of a given model $M$
of $T$ is implicitly definable if there exists a sentence $\varphi(P)$
such that $A$ is the unique set with $(M,A) \models \varphi(P)$.
It is easy to find a structure
in which two kinds of definability (implicit definability and first order definability) are different.
For example, let us consider the structure $M=(\mathbb N \sqcup \mathbb Z,<)$,
the disjoint union of $\mathbb N$ and $\mathbb Z$, where $<$ is the
total order in which any element of the $\mathbb Z$-part is greater
than any element of the $\mathbb N$-part.
The $\mathbb N$-part is not first order definable in $M$, because
the theory of $M$ admits quantifier elimination after adding the
constant $0$ (the least element) and
the successor function to the language.
But the $\mathbb N$-part is implicitly definable in $M$,
because it is the unique nontrivial initial segment
without a last element.
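Concretely (the following sentence is our illustration, not from the original text), one may take $\varphi(P)$ to say that $P$ is a nonempty proper initial segment with no last element:

```latex
\varphi(P) \ \equiv\ \exists x\, P(x)
\;\wedge\; \exists x\, \neg P(x)
\;\wedge\; \forall x \forall y\,\big(x<y \wedge P(y) \rightarrow P(x)\big)
\;\wedge\; \forall x\,\big(P(x) \rightarrow \exists y\,(x<y \wedge P(y))\big).
```

Any other nonempty proper initial segment of $M$ has a last element (either in the $\mathbb N$-part or in the $\mathbb Z$-part), so the $\mathbb N$-part is the unique solution.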
On the other hand, for a given structure,
we can easily find an elementary extension in which
the two notions coincide.
In this paper, we shall consider implicit definability of the
standard part $\{0,1,...\}$ in nonstandard models of Peano
arithmetic ($PA$).
Needless to say, the standard part of a nonstandard
model of $PA$
is not first order definable. As is stated above, there is a model
in which every set defined implicitly is first order definable.
So we ask whether there is a model of $PA$ in which the
standard part is implicitly definable.
In \S 2, we define a certain class of formulas, and show that
in any model of $PA$ the standard part
is not implicitly defined by formulas of this class.
\S 3 is the main section of the present paper;
there we shall construct a model of $PA$ in which the standard
part is implicitly defined.
To construct such a model, first we assume a set theoretic
hypothesis $\diamondsuit_{S_\lambda^{\lambda^+}}$,
a combinatorial principle asserting the existence of a certain guessing sequence.
Then we shall eliminate the hypothesis using absoluteness
for the existence of a model having a tree structure with a certain
property.
In this paper $L$ is a first order countable language.
$L$-structures are denoted by $M$, $N$, $M_i$, $\cdots$.
We do not strictly distinguish a structure and its universe.
$A$, $B$, $\cdots$ will be used for denoting subsets of
some $L$-structure. Finite tuples of elements from
some $L$-structure are denoted by $\bar a$, $\bar b, \cdots$.
We simply write $A \subset M$ for expressing that $A$ is
a subset of the universe of $M$.
\section{Undefinability result}
Let us first recall the definition of implicit definability.
\begin{definition}\rm
Let $M$ be an $L$-structure. Let $P$ be a unary
second order variable. A subset $A$ of $M$ is said to be
{\it implicitly definable} in $M$ if there is an
$L \cup \{P\}$-sentence $\varphi(P)$ with parameters
such that $A$ is the unique solution to $\varphi(P)$, i.e.
$\{A\}=\{B \subset M: M \models \varphi(B)\}$.
\end{definition}
In this section $L$ is the language $\{0,1,+,\cdot,<\}$, and
$PA$ denotes the Peano arithmetic formulated in $L$.
We shall prove that the standard part is not
implicitly definable in any model of $PA$
by formulas of a certain form.
We fix a model $M$ of $PA$, and work on $M$.
\begin{definition}\rm
An $L \cup \{P\}$-formula $\varphi(\bar y)$ (with parameters) will
be called simple if it is equivalent to a prenex normal form
$
Q_1 \bar x_1\cdots Q_n \bar x_n
[
P(f(\bar x_1,...,\bar x_n, \bar y)) \rightarrow
P(g(\bar x_1,...,\bar x_n,\bar y))
]
$
where
$Q_i$'s are quantifiers and
$f$ and $g$ are definable functions.
If $Q_1=\forall$ then $\varphi$ will be called a simple
$\Pi_n$-formula. Similarly it is called a simple $\Sigma_n$-formula
if $Q_1=\exists$.
\end{definition}
\begin{remark}\rm
If $P$ is an initial segment of $M$, then
\begin{enumerate}
\item $a_1 \in P \wedge a_2 \in P$ is equivalent to
$\max\{a_1,a_2\} \in P$;
\item $a_1 \in P \vee a_2 \in P$ is equivalent to
$\min\{a_1,a_2\}\in P$.
\end{enumerate}
An $L$-formula $\varphi(\bar x)$ is equivalent to a formula of
the form $P(f(\bar x))$, where $f$ is a definable function
such that $f(\bar x)=0$ if $\varphi(\bar x)$ holds and
$f(\bar x)=a$ ($a$ is a nonstandard element) otherwise.
In what follows,
an initial segment $I \subsetneq M$ will be called a {\it cut }
if $I$ is closed under successor. The statement that $P$ is a
cut is expressed by a simple $\Pi_2$-formula.
\end{remark}
We shall prove that
the standard part is not implicitly definable
by a finite number of simple $\Pi_2$-formulas.
In fact we can prove more.
\begin{proposition}\label{proposition}
Let $I_0$ be a cut of $M$ with $I_0 <a$ i.e.
any element of $I_0$ is smaller than $a$.
Let $\{\varphi_i(P):i\leq n\}$ be a finite set of simple formulas.
If $I_0$ satisfies $\{\varphi_i(P):i\leq n\}$, then
there is another cut $I<a$ which also satisfies
$\{\varphi_i(P):i\leq n\}$.
\end{proposition}
Let us say that a cut $I$ is approximated by a decreasing
$\omega$-sequence, if
there is a definable function $f(x)$ with
$
I=\{a \in M: \ (\forall m \in \omega)\ a \leq f(m) \}.
$
Similarly we say that $I$ is approximated by an increasing
$\omega$-sequence if there is a definable function
$g(x)$ with $
I=\{a \in M: (\exists m \in \omega)\ a \leq g(m) \}.
$
Notice that no cut of $M$ is approximated by both a decreasing
$\omega$-sequence
and an increasing $\omega$-sequence.
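For illustration (an example of ours, not in the original text): fix a nonstandard $a \in M$. The cut

```latex
I_{1}=\{x \in M:\ (\forall m \in \omega)\ x \leq f(m)\},
\qquad f(m)=\lfloor a/2^{m} \rfloor,
```

is approximated by a decreasing $\omega$-sequence, while the standard cut $\omega$ itself is approximated by the increasing $\omega$-sequence $g(m)=m$; by the observation just made, $\omega$ is therefore not approximated by any decreasing $\omega$-sequence.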
\bigbreak\noindent
{\it Proof of Proposition \ref{proposition}:}
For $i \leq n$, let $\varphi_i(P)$ have the form
$
\forall \bar x \exists \bar y
[
P(f_i(\bar x,\bar y)) \rightarrow P(g_i(\bar x,\bar y))
].
$
By the remark just after Proposition \ref{proposition}, we can assume
that $I_0$ cannot be approximated by a decreasing
$\omega$-sequence.
We shall show that there is an initial segment $I$ with
$I_0 \subsetneq I < a$ and
$M \models \bigwedge_{i \leq n}\varphi_i(I)$.
Since $I_0$ satisfies $\varphi_i(P)$, for each
$b_0 \in M$ with $I_0<b_0<a$, we have
$
M \models
\bigwedge_{i \leq n}
\forall \bar x \exists \bar y
[
f_i(\bar x, \bar y) \in \omega \rightarrow
g_i(\bar x, \bar y) \leq b_0
]
$.
By overspill there is an element $b_1$ with $I_0 < b_1 <b_0$
such that
$$
M \models
\bigwedge_{i \leq n}
\forall \bar x \exists \bar y[
f_i(\bar x, \bar y) \leq b_1 \rightarrow
g_i(\bar x, \bar y) \leq b_0
]
$$
By choosing the maximum such $b_1 < b_0$, we may assume
that $b_1 \in \dcl(\bar a,b_0)$, where
$\bar a$ are parameters necessary for defining $f_i$'s and $g_i$'s.
So we can choose an $L(\bar a)$-definable function, $h(x)$
such that
(i) $I_0 <b <a$ implies $I_0<h(b)<b$ and (ii)
$M \models
\bigwedge_{i \leq n}
\forall \bar x \exists \bar y
[
f_i(\bar x, \bar y) \leq h(b) \rightarrow
g_i(\bar x, \bar y) \leq b
]$, for any nonstandard $b \in M$.
By using recursion we can choose a definable function
$l(x)$ with
$
l(m)=h^m (a)
$ (the $m$-time application of $h$)
for each $m \in \omega$.
Now we put
$$
I=
\{
d \in M: ( \forall m \in \omega)\ d \leq l(m)
\}.
$$
Since $m < l(m)$ holds for any $m \in \omega$,
by overspill, there is a nonstandard $m^*$ such that
$m^* < l(m^*)$. This shows that
$I$ is an initial segment different from $I_0$.
Now we show:
\proclaim Claim. For all $i \leq n$ and for all $\bar d \in M$,
there is $\bar e \in M$ such that
$$
f_i(\bar d, \bar e) \in I\ \rightarrow \ g_i(\bar d, \bar e) \in I.
$$
\par
Let $\bar d \in M$ and $i \leq n$ be given. We can assume that
$
\forall \bar y (f_i(\bar d, \bar y) \in I)
$
holds in $M$.
So by the definitions of $I$ and $l$, for all $k \in \omega$,
we have
$
M \models
\forall \bar y (f_i(\bar d, \bar y) \leq l(k)).
$
Hence, for some nonstandard $k^* \in M$ with
$k^* \leq l(k^*)$, we have
$$
M \models
\forall \bar y (f_i(\bar d, \bar y)
\leq l(k^*)).
$$
On the other hand, by our choice of $h$ and $l$, we can
find $\bar e$ with
$$
M \models
f_i(\bar d, \bar e) \leq l(k^*) \rightarrow
g_i(\bar d, \bar e) \leq l(k^*-1).
$$
Hence, for this $\bar e$, we have
$
g_i(\bar d, \bar e) \leq l(k^*-1) \in I.
$ \qed
\begin{corollary}
The standard part is not implicitly definable by a finite number of
simple $\Sigma_3$-formulas.
\end{corollary}
\section{Definability result}
In this section we aim to prove the following theorem:
\begin{theorem}\label{theorem}
There is a model of $PA$ in which the standard part is
implicitly definable.
\end{theorem}
Instead of proving the theorem, we prove a more general result
(Theorem \ref{main}), from which Theorem \ref{theorem} easily follows.
For stating the result, we need some preparations.
We assume the language $L$ contains a binary predicate
symbol $<$, a constant symbol $0$ and a unary function symbol $S$.
We fix a complete $L$-theory $T$ with
a partial definable function $F(x,y)$ such that the following sentences
are members of $T$:
\begin{itemize}
\item $<$ is a linear order with the first element $0$;
\item For each $x$, $S(x)$ is the immediate successor of $x$ with respect to $<$;
\item $\forall y_1,...,y_n \forall z_1,...,z_n\exists x
(\bigwedge_{i\neq j} y_i \neq y_j \ \rightarrow \
\bigwedge_{i=1}^{n}F(x,y_i)=z_i)$\ \ (for $n \in \omega$).
\end{itemize}
\begin{remark}\rm
Any completion of $PA$ satisfies our requirements stated
above.
\end{remark}
Let $P$ be a new unary predicate symbol not in $L$.
Throughout this section $\psi^*(P)$ is
the conjunction of the following $L\cup \{P\}$-sentences:
\begin{enumerate}
\item $P$ is a cut
(non-empty proper initial segment closed under $S$), i.e.
$\neg (\forall x P(x)) \wedge P(0) \wedge
\forall x \forall y (P(y) \wedge x<y
\rightarrow P(x)) \wedge \forall x (P(x) \rightarrow P(S(x)))$;
\item For no $x$ and $z$ with $P(z)$, is $\{F(x,y):y<z \} \cap P$
unbounded in $P$, i.e.
$\forall x \forall z [ P(z) \rightarrow \exists w
( P(w) \wedge \forall y (P(F(x,y)) \rightarrow F(x,y) < w))]$.
\end{enumerate}
It is clear that in any model $M$ of $T$, the ``standard'' part
$I=\{S^n(0):n \in \omega\}$ satisfies $\psi^*(P)$, i.e.
the sentence $\psi^*(P)$ holds in the $L \cup \{P\}$-structure $(M,I)$.
\begin{definition}\rm
A model $M$ of $T$ will be called $\psi^*$-appropriate if
the following two conditions are satisfied:
\begin{enumerate}
\item $M \neq \{S^n(0):n \in \omega\}$;
\item If $(M, I) \models \psi^*(P)$ then
(a) $I=\{S^n(0):n \in \omega\}$ or (b) $I$ is definable
in $M$ by an $L$-formula with parameters.
\end{enumerate}
\end{definition}
\begin{remark}\rm
In case that $T$ is a completion of $PA$, the part (b)
of the condition 2 in the above definition does not occur, because
in any model of $T$ no definable proper subset is closed under $S$.
\end{remark}
\begin{theorem}\label{main}
There is a $\psi^*$-appropriate model of $T$.
\end{theorem}
We shall prove the theorem above by a series of claims.
For the time being, we fix an infinite cardinal $\lambda$.
First we need some definitions.
\begin{definition}\rm
Let $M$ be a model of $T$ and $\varphi(x, \bar a)$ a formula
with parameters from $M$. We say that $\varphi(x, \bar a)$ is
$\Gamma^{\rm sind}_F$-big (in $M$) if in some (any)
$|T|^+$-saturated model $N \succ M$ there is $A \subset N $
with $|A| \leq |T|$
such that for any finite number of distinct elements
$a_1,...,a_n \in N \setminus A$,
and any elements $b_1,...,b_n \in N $, we have
$$
N \models
\exists x [\varphi(x,\bar a) \wedge \bigwedge_{i=1}^nF(x,a_i) =b_i].
$$
In the above definition, if $\lambda=\aleph_0$, we replace the
condition $|A| \leq |T|$ by $|A|<\aleph_0$.
\end{definition}
Let us briefly recall the definition of {\it bigness} defined in [2].
Let $R \notin L$ be a unary predicate symbol.
A statement (or an infinitary $L \cup \{R\}$-sentence) $\Gamma(R)$
is called a notion of
bigness for $T$, if any model $M$ of $T$ satisfies the following axioms,
for all formulas $\varphi(x,\bar y)$ and $\psi(x,\bar y)$
(where $\Gamma(\varphi(x,\bar y))$ means that setting
$R(x)=\varphi(x,\bar y)$ [so $\bar y$ is a parameter] makes
$\Gamma$ true):
\begin{enumerate}
\item $\forall \bar y (\forall x(\varphi \rightarrow \psi) \wedge
\Gamma(\varphi) \ \rightarrow \ \Gamma(\psi))$;
\item $\forall \bar y (\Gamma(\varphi \vee \psi) \ \rightarrow \
\Gamma(\varphi) \vee \Gamma(\psi))$;
\item $\forall \bar y (\Gamma(\varphi) \rightarrow \exists^{ \geq 2} x
\varphi)$;
\item $\Gamma(x=x)$.
\end{enumerate}
Now let $\Gamma(\varphi)$ be the statement
``$\varphi$ is $\Gamma_F^{\rm sind}$-big''. Then this $\Gamma$ satisfies the above
four axioms: It is easy to see that our $\Gamma$ satisfies
Axioms 1, 3 and 4. So let us prove Axiom 2.
Suppose that neither $\varphi$ nor $\psi$ is big.
Let $M$ be a model of $T$ and $N \succ M$ be $|T|^+$-saturated.
Let $A$ be a subset of $ N $ of cardinality $\leq |T|$.
Since $\varphi$ is not big, $A$ cannot witness the definition of
bigness, so there are
a finite number of elements
$a_1,...,a_n \in N \setminus A$ with no repetition
and $b_1,...,b_n \in N$
such that
$N \models \forall x [\bigwedge_{i \leq n} F(x,a_i)=b_i \rightarrow
\neg \varphi(x)]$.
Since $\psi$ is not big,
$A'=A \cup \{a_1,...,a_n\}$ cannot witness the definition of
bigness, hence there are $a_{n+1},...,a_m \in
N \setminus (A \cup \{a_1,...,a_n\})$ with no repetition and
$b_{n+1},...,b_m \in N$ such that
$N \models \forall x [\bigwedge_{n+1 \leq i \leq m}
F(x,a_i)=b_i \rightarrow \neg \psi(x)]$. So $N \models
\forall x [\bigwedge_{i \leq m} F(x,a_i)=b_i \rightarrow
\neg (\varphi(x) \vee \psi(x))]$. Since $A$ was chosen
arbitrarily, this shows that $\varphi \vee \psi$ is not big.
\medbreak
For simplicity we assume $\lambda > |T|$. (This assumption
is for simplicity only.)
\begin{claim}
(Under $\diamondsuit_{S_\lambda^{\lambda^+}}+\diamondsuit_\lambda$,
where $\lambda=\lambda^{<\lambda}$, $S_\lambda^{\lambda^+}=\{\delta<\lambda^+:{\rm cf}(\delta)=\lambda\}$)
There are a continuous elementary chain
$\langle M_i:i<\lambda^+ \rangle$ of models of $T$ and
a sequence $\langle a_i: i< \lambda^+ \rangle$ of elements
$a_i \in M_{i+1} \setminus M_i$ such that
\begin{description}
\item{(a)} $|M_i|= \lambda$;
\item{(b)} $M_i$ is saturated except when $\aleph_0 \leq {\rm cf}(i)
\leq \lambda$;
\item{(c)} $\tp_{M_{i+1}}(a_i/M_i)$
is $\Gamma_F^{\rm sind}$-big, i.e. each formula in it is $\Gamma_F^{\rm sind}$-big.
\item{(d)} $M_i \subset \{F^{M_{i+1}}(a_i,c): M_i \models c<b\}$
if $b \in M_i \setminus \{S^n(0): n \in \omega\}$,
\item {(e)} if $(C_1,C_2)$ is a Dedekind cut of
$M=\bigcup_{i < \lambda^+}M_i$ of cofinality
$(\lambda^+,\lambda^+)$ then $C_1$ is
a subset of $M$ definable with parameters.
(A Dedekind cut of $M$ of cofinality $(\mu_1,\mu_2)$ is a pair
$(C_1,C_2)$ such that
(i) $M=C_1 \cup C_2$,
(ii) $\forall x \in C_1 \forall y \in C_2 [x<^M y]$,
(iii) the cofinality of $C_1$ with respect to $<$ is
$\mu_1$ and
(iv) the coinitiality of $C_2$ (i.e. the cofinality of $C_2$
with respect to the reverse ordering) is $\mu_2$.
\end{description}
\end{claim}
\proof See [2]. For more details, see [3].
\bigbreak
Now we expand the language $L$ by adding new binary predicate
symbols.
Let $L^* = L \cup \{ E_1,E_2,<_{\mathrm lev}, <_{\mathrm tr} \}$.
We expand the $L$-structure $M$ defined in Claim A
to an $L^*$-structure $M^*$ by the following interpretation.
For $a \in M$, let $i(a)= \min \{ i < \lambda^+: a \in M_{i+1}\}$.
\begin{enumerate}
\item $E_1^{M^*}=\{(a,b): i(a)=i(b)\}$;
\item $E_2^{M^*}=\{(a,b): i(a)=i(b) \text{ and
$M \models (c<a \equiv c<b)$ for every
$c \in M_{i(a)}$} \}$, \\
In other words, $(a,b) \in E_2^{M^*}$ iff $a$ and $b$ realize
the same Dedekind cut of $M_{i(a)} (=M_{i(b)})$;
\item $<_{\mathrm lev }^{M^*}=\{(a,b): i(a)<i(b)\}$;
\item $<_{\mathrm tr }^{M^*}=\{(a,b): i(a)<i(b) \text{ and
$M \models (c<a \equiv c<b)$ for every
$c \in M_{i(a)}$} \}$.
\end{enumerate}
The relation $<_{\mathrm tr}$ defines a preorder on $M^*$ and
induces a tree structure on the $E_2$-equivalence classes.
This tree structure $(M^*/E_2, <_{\mathrm tr})$ is a definable object of
${M^*}^{\rm eq}$.
(We do not use a new symbol for the order induced by
$<_{\mathrm tr}$.)
Similarly $<_{\mathrm lev}$ induces a linear order on
the $E_1$-equivalence classes.
Let $R$ be the definable function which maps $a_{E_2}$ to
$a_{E_1}$. $R$ is considered as a rank function which
assigns a level to each node of the tree.
Then $\langle <_{\mathrm tr}, <_{\mathrm lev},R \rangle$ is
an {\it $L^*$-tree} in the sense of [1].
A subset $B$ of $M^*/E_2$ will be called a {\it branch} of the tree if
(i) it is linearly ordered by $<_{\mathrm tr}$, (ii) $a_{E_2} \in B$ and
$b \leq_{\mathrm tr} a$ imply $b_{E_2} \in B$ and (iii)
the set $\{R(a_{E_2}):a_{E_2} \in B\}$ of all levels in $B$ is
unbounded in $M^*/E_1$.
\begin{claim}
Every branch of
the tree $(M^*/E_2,<_{\mathrm tr}, <_{\mathrm lev},R)$ is definable in $M^*$.
\end{claim}
\proof
Let $B$ be a branch of the tree
$(M^*/E_2,<_{\mathrm tr}, <_{\mathrm lev},R)$.
We show that $B$ is definable in $M^*$.
Let $I$ be the $<$-initial segment determined by $B$,
i.e.
$$
I=
\{a \in M^*: M^* \models
(\forall b_{E_2} \in B)(\exists c_{E_2}\in B)[b_{E_2} <_{\mathrm tr}
c_{E_2} \wedge a < c ]\}.
$$
It is easy to see that $I$ and $B$ are interdefinable in $M^*$.
In fact, we have $b_{E_2} \in B$ if and only if
there exist $c \in I$ and $d \in M^* \setminus I$ such that
\begin{itemize}
\item $b_{E_2}$ intersects the
interval $[c,d]$,
\item if $b_{E_2} \subset I$ then
any other $ b'_{E_2} $ with $[c,d] \cap I \cap b'_{E_2} \neq \emptyset$
has a strictly larger level than $b_{E_2}$ and
\item
if $b_{E_2} \subset M^* \setminus I$ then any other
$ b'_{E_2} $ with $[c,d] \cap (M^* \setminus I) \cap b'_{E_2}
\neq \emptyset$ has
a strictly larger level than $b_{E_2}$.
\end{itemize}
If the cofinality of $(I,M^* \setminus I)$ is
$(\lambda^+,\lambda^+)$, then $I$ is definable in $M$ by the
property (e) of Claim A, so $B$ is definable in $M^*$.
So we may assume that the cofinality is not $(\lambda^+,\lambda^+)$.
First suppose that $ {\rm cf}(I) \leq \lambda$.
Then we can choose a set
$\{a_i : i< \lambda\}$ which is cofinal in $I$.
Choose $j<\lambda^+$ with ${\rm cf}(j)=\lambda$ and
$\{a_i : i< \lambda\} \subset M_j$.
If $M_j \setminus I$ is bounded from below in
$M^* \setminus I$, say by $d \in M^* \setminus I$, then
$I$ is defined in $M^*$ by the formula
$\exists y [ x < y < d \wedge y <_{\mathrm lev} e ]$,
where $e$ is an element from $M_{j+1} \setminus M_j$.
So we may assume that there is a set $\{a_i':i<\lambda\}
\subset M_j \setminus I$ which is
coinitial in $M^* \setminus I$.
(We shall derive a contradiction from this.)
Let $b_{E_2} \in B$ with $b \notin M_j$. Since the other
case can be treated similarly, we can
assume that $b \in I$. Then $b_{E_2}$ is
included in some interval $[0, a_i]$.
By the definition of $I$, there is $c_{E_2} \in B$ such
that $b_{E_2} <_{\mathrm tr} c_{E_2}$ and $a_i < c$.
But then $b$ and $c$ determine different Dedekind cuts
of $M_j$, hence $b$ and $c$ are not comparable
with respect to $<_{\mathrm tr}$. This contradicts our assumption
that $B$ is a branch.
Second suppose that the coinitiality of $M^* \setminus I$
is $\leq \lambda$ and that the cofinality of $I$ is $\lambda^+$.
As in the first case, we can choose $j<\lambda^+$ such that
$M_j \setminus I$ is coinitial in $M^* \setminus I$.
Choose $d \in I $ which bounds $I \cap M_j$ from above and
an element $e \in M_{j+1} \setminus M_j$.
Then $I$ is defined by the formula
$
\forall y[ d<y \wedge y <_{\mathrm lev} e \rightarrow x < y].
$
Lastly the case where the cofinality of $(I, M^* \setminus I)$ is
$(\mu_1,\mu_2)$ with $\mu_1, \mu_2 \leq \lambda$ is
impossible by the definition of branch.
\qed
Let $T^*$ be the $L^*$-theory of $M^*$.
Under the hypothesis of Claim A
(i.e. $\diamondsuit_{S_\lambda^{\lambda^+}}$ etc),
we have proven the existence of $M^* \models T^*$
having a tree with the property
stated in Claim B.
However, by the absoluteness
(e.g. Theorem 6 in [1]), the existence of such a model
can be proven without the hypothesis.
Moreover, as $T^*$ is countable, we can assume that relevant
properties of $M^*$ expressed by one
$L^*_{\omega_1\omega}(Q)$-sentence are
also possessed by such models. ($Q$ is the quantifier which
expresses ``there are uncountably many''.)
Thus in ZFC we can show
\begin{claim}
There is a model $N^* \models T^*$ of cardinality $\aleph_1$
that satisfies:
\begin{enumerate}
\item The tree $(N^*/E_2, <_{\mathrm tr})$ has no undefinable branch;
\item The set $N^*/E_1$ of levels has the cardinality $\aleph_1$, but
for each $b/E_1 \in N^*/E_1$,
$\{c/E_1: c/E_1 <_{\mathrm lev}b/E_1\}$ is countable;
\item If $I$ is a definable subset of $N^*$ with
the Dedekind cut $(I,N^* \setminus I)$ of cofinality
$(\aleph_1,\aleph_1)$, then $I$ is definable in $N$;
\item The clause (d) of Claim A,
namely, for each level $d_{E_1}$ there is $a \in N^*$ such that
if $b \in N^* \setminus \{S^n(0): n \in \omega\}$
then $\{F(a,c): c < b\}$ includes $\{c \in N^*: c \leq_{\mathrm lev} d\}$.
\end{enumerate}
\end{claim}
\begin{claim}
Let $N^*$ be a model of $T^*$ with the properties stated in
Claim C.
Then the reduct $N$ of $N^*$ to the language $L$ is
$\psi^*$-appropriate.
\end{claim}
\proof
Toward a contradiction, we assume that there is an undefinable
(in the sense of $N$) subset $I \subset N$
with $(N,I) \models \psi^*(P)$ and $I \neq \{S^n(0): n \in \omega\}$.
We show that the cofinality of $(I, N^* \setminus I)$ is
$(\aleph_1,\aleph_1)$. Suppose that this is not the case.
First assume that the cofinality of $(I,<)$ is less than $\aleph_1$.
As $(N^*/E_1, <_{\mathrm lev})$ has the cofinality $\aleph_1$,
there is $d/E_1$ such that $\{c \in I: c \leq_{\mathrm lev} d\}$
is unbounded in $I$. Since $I \neq \{S^n(0): n \in \omega\}$,
we can choose
$b \in I \setminus \{S^n(0):n \in \omega\}$.
By the fourth condition of Claim C, there is $a \in N^*$
such that
$\{F(a,c): c < b\}$ includes $\{c \in I: c \leq_{\mathrm lev} d\}$.
So $\{F(a,c): c < b\} \cap I$ is unbounded in
$I$. This contradicts the last clause in the definition of
$\psi^*$.
Second assume that the coinitiality of $N^* \setminus I$ is
less than $\aleph_1$.
For a similar reason as in the first case,
we can find $d_{E_1}$ such that $\{c \in N^* \setminus I:
c \leq_{\mathrm lev} d\}$
is unbounded from below in $N^* \setminus I$.
Also we can choose $a \in N^*$ and $b \in I$ such that
$\{F(a,c): c < b\}$ includes $\{c \in I: c \leq_{\mathrm lev} d\}$.
If $I \cap \{F(a,c): c < b\}$ were bounded
(from above) say by $e \in I$,
then $I$ would be definable in $N$ by the $L$-formula
$$
\varphi(x,a,b,e) \stackrel{\rm def}{\equiv}
\forall z
[(e < z \wedge \exists y ( y<b \wedge z=F(a,y) ))\rightarrow x < z],
$$
contradicting our assumption that $I$ is not definable.
So $I \cap \{F(a,c): c < b\}$ is not
bounded in $I$. Again this contradicts the last clause in
the definition of $\psi^*$.
So we have proven that the cofinality of $(I,N^* \setminus I)$
is $(\aleph_1,\aleph_1)$.
As in the proof of Claim B, we shall define a set
$\{(b_i)_{E_2} :i<\aleph_1\}$ and definable
intervals $J_i \subset N^*$ $(i<\aleph_1)$ such that
for each $i<\aleph_1$,
\begin{itemize}
\item $J_i$'s are decreasing;
\item $b_i \in J_i$, $J_i \cap I \neq \emptyset$, $J_i \cap (N^* \setminus I) \neq \emptyset$;
\item there is no element $d \in J_i$ with
$d <_{\mathrm lev} b_i$.
\end{itemize}
Suppose that we have chosen $b_j$'s and $J_j$'s for all $j < i$.
Since the cofinality of $I$ and the coinitiality of $N^* \setminus I$
are both $\aleph_1$, $\bigcap_{j<i} J_j$ intersects both $I$ and
$N^* \setminus I$. Choose $b \in \bigcap_{j<i} J_j \cap I$ and
$c \in \bigcap_{j<i} J_j \cap (N^* \setminus I)$.
Then we put $J_i =\{e \in N^*: N^* \models b <e <c\}$.
Choose $b_i \in J_i$ of the minimum level.
(Such $b_i$ exists and $(b_i)_{E_2}$ is unique,
because every nonempty definable
subset of $N^*/E_1$ has the minimum element with respect to
$<_{\mathrm lev}$. If there are
two such elements, they are distinguished by elements of
lower levels, contradicting the minimality.)
We claim that $\{(b_i)_{E_2}: i<\aleph_1\}$ determines a branch
$B=\{c_{E_2}: c_{E_2} \leq_{\mathrm tr} (b_i)_{E_2}
\text{ for some $i$}\}$.
For this it is sufficient to show that the $b_i$'s are linearly ordered by
$\leq_{\mathrm tr}$.
Let $i \leq i' <\aleph_1$. Then both $b_i$ and $b_{i'}$ are members of
the interval $J_i$.
Suppose that $b_i$ and $b_{i'}$ are not comparable with respect to
$ \leq_{\mathrm tr}$.
They determine different Dedekind cuts of the elements
of lower levels. So there is an element $c \in J_i$ with
$c <_{\mathrm lev} b_i$. This contradicts our choice of
$b_i \in J_i$.
By our assumption (the first condition in Claim C),
the branch $B=\{(b_i)_{E_2}:i<\aleph_1\}$ is definable
in $N^*$.
It is easy to see that $I$ and $B$ are interdefinable in $N^*$.
So $I$ is also definable in $N^*$, hence $I$ is definable in $N$ by the
third condition in Claim C.
This contradicts our assumption that $I$ is undefinable in $N$.
\qed
\begin{center}
{\bf References.}
\end{center}
[1] S. Shelah,
Models with second-order properties II.
Trees with no undefined branches, Annals of Mathematical
Logic 14 (1978), pp. 73-87. [Sh:73].
[2] S. Shelah,
Models with second order properties IV.
A general method and eliminating diamonds, Annals of Pure and
Applied Logic 25 (1983), pp. 183-212.
[Sh:107].
[3] S. Shelah,
Non structure theory. In preparation. [Sh:e].
\end{document}
|
\begin{document}
\title[]{Unique continuation results for Ricci curvature and
Applications}
\author[]{Michael T. Anderson and Marc Herzlich}
\thanks{The first author is partially supported by NSF Grant DMS
0604735; the second author
is partially supported by ANR project GeomEinstein 06-BLAN-0154. \\
MSC Classification: 58J32, 58J60, 53C21. Keywords: Einstein metrics,
unique continuation }
\abstract{Unique continuation results are proved for metrics with
prescribed
Ricci curvature in the setting of bounded metrics on compact manifolds
with
boundary, and in the setting of complete conformally compact metrics on
such
manifolds. Related to this issue, an isometry extension property is
proved:
continuous groups of isometries at conformal infinity extend into the
bulk of
any complete conformally compact Einstein metric. Relations of this
property
with the invariance of the Gauss-Codazzi constraint equations under
deformations
are also discussed.}
\endabstract
\maketitle
\setcounter{section}{0}
\section{Introduction.}
\setcounter{equation}{0}
In this paper, we study certain issues related to the boundary
behavior of metrics with prescribed Ricci curvature. Let $M$ be a
compact $(n+1)$-dimensional manifold with compact non-empty boundary
$\partial M$. We consider two possible classes of Riemannian metrics
$g$ on $M$. First, $g$ may extend smoothly to a Riemannian metric on
the closure $\bar M = M \cup \partial M$, thus inducing a Riemannian
metric $\gamma = g|_{\partial M}$ on $\partial M$. Second, $g$ may be
a complete metric on $M$, so that $\partial M$ is ``at infinity''. In
this case, we assume that $g$ is conformally compact, i.e.~there exists
a defining function $\rho$ for $\partial M$ in $M$ such that the
conformally equivalent metric
\begin{equation} \label{e1.1}
\bar g = \rho^{2}g
\end{equation}
extends at least $C^{2}$ to $\partial M$. The defining function $\rho$ is
unique only up to multiplication by positive functions; hence only the
conformal class $[\gamma]$ of the associated boundary metric $\gamma =
\bar g|_{\partial M}$ is determined by $(M, g)$.
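A standard example to keep in mind (added here for illustration) is the hyperbolic metric on the open unit ball $B^{n+1}$:

```latex
g = \frac{4}{(1-|x|^{2})^{2}}\, g_{\mathrm{Eucl}},
\qquad \rho(x) = \tfrac{1}{2}\big(1-|x|^{2}\big),
\qquad \rho^{2} g = g_{\mathrm{Eucl}}.
```

Here $\rho$ is a defining function for $\partial M = S^{n}$, the compactified metric extends smoothly to the closed ball, and the conformal infinity $[\gamma]$ is the conformal class of the round metric on $S^{n}$.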
The issue of boundary regularity of Riemannian metrics $g$ with
controlled Ricci curvature has been addressed recently in several
papers. Thus, [4] proves boundary regularity for bounded metrics $g$
on $M$ with controlled Ricci curvature, assuming control on the
boundary metric $\gamma$ and the mean curvature of $\partial M$ in $M$.
In [16], boundary regularity is proved for conformally compact Einstein
metrics with smooth conformal infinity; this was previously proved by
different methods in dimension 4 in [3], cf.~also [5].
One purpose of this paper is to prove a unique
continuation property at the boundary $\partial M$ for bounded metrics
or for
conformally compact metrics. We first state a version of the result for
Einstein
metrics on bounded domains.
\begin{theorem} \label{t 1.1.}
Let $g$ be a $C^{3,\alpha}$ Riemannian metric on a compact manifold $M$
with boundary, with induced metric $\gamma = g|_{\partial M}$ on $\partial M$,
and let $A$ be the $2^{\rm nd}$ fundamental form of $\partial M$
in $M$. Suppose the Ricci curvature $\operatorname{Ric}_{g}$ satisfies
\begin{equation} \label{e1.2}
\operatorname{Ric}_{g} = \lambda g,
\end{equation}
where $\lambda$ is a fixed constant.
Then $(M, g)$ is uniquely determined, up to local isometry and inclusion,
by the Cauchy data $(\gamma, A)$ on an arbitrary open set $U \subset \partial M$.
\end{theorem}
Thus, if $(M_{1}, g_{1})$ and $(M_{2}, g_{2})$ are a pair of Einstein metrics
as above, whose Cauchy data $(\gamma, A)$ agree on an open set $U$ common to
both $\partial M_{1}$ and $\partial M_{2}$, then after passing to suitable
covering spaces $\bar M_{i}$, either there exist isometric embeddings
$\bar M_{1} \subset \bar M_{2}$ or $\bar M_{2} \subset \bar M_{1}$ or there
exists an Einstein metric $(\bar M_{3}, g_{3})$ and isometric embeddings
$(\bar M_{i}, g_{i}) \subset (\bar M_{3}, g_{3})$. Similar results hold for
metrics which satisfy other covariant equations involving the metric to
$2^{\rm nd}$ order, for example the Einstein equations coupled to other
fields; see Proposition 3.7.
For conformally compact metrics, the $2^{\rm nd}$ fundamental form
$A$ of the
compactified metric $\bar g$ in (1.1) is umbilic, and completely
determined by
the defining function $\rho$. In fact, for conformally compact Einstein
metrics,
the higher order Lie derivatives ${\mathcal L}_{N}^{(k)}\bar g$ at
$\partial M$, where
$N$ is the unit vector in the direction $\bar \nabla \rho$, are
determined by the
conformal infinity $[\gamma]$ and $\rho$ up to order $k < n$. Supposing
$\rho$ is a
geodesic defining function, so that $||\bar \nabla \rho|| = 1$, let
\begin{equation}\label{e1.3}
g_{(n)} = \tfrac{1}{n!}{\mathcal L}_{N}^{(n)}\bar g.
\end{equation}
More precisely, $g_{(n)}$ is the $n^{\rm th}$ term in the
Fefferman-Graham expansion
of the metric $g$; this is given by (1.3) when $n$ is odd, and in a
similar way when $n$
is even, cf. [18] and \S 4 below. The term $g_{(n)}$ is the natural
analogue of $A$ for
conformally compact Einstein metrics.
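In outline, and referring to [18] for precise statements: if $\rho = t$ is
a geodesic defining function, the metric may be written near $\partial M$
as $g = t^{-2}(dt^{2} + g_{t})$, and for $n$ odd the Fefferman-Graham
expansion takes the schematic form
$$g_{t} \sim \gamma + t^{2}g_{(2)} + t^{4}g_{(4)} + \cdots + t^{n}g_{(n)} + \cdots,$$
where the terms $g_{(2k)}$, $2k < n$, are determined locally by the boundary
metric $\gamma$, while $g_{(n)}$ is the first term in the expansion not
determined by $\gamma$.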
\begin{theorem} \label{t 1.2.}
Let $g$ be a $C^{2}$ conformally compact Einstein metric on a compact
manifold $M$ with
$C^{\infty}$ smooth conformal infinity $[\gamma]$, normalized so that
\begin{equation} \label{e1.4}
\operatorname{Ric}_{g} = -ng.
\end{equation}
Then the Cauchy data $(\gamma, g_{(n)})$ restricted to any open set
$U$ of
$\partial M$ uniquely determine $(M, g)$ up to local isometry and
determine
$(\gamma, g_{(n)})$ globally on $\partial M$.
\end{theorem}
The recent boundary regularity result of Chru\'sciel et al.~[16],
implies that $(M, g)$
is $C^{\infty}$ polyhomogeneous conformally compact, so that the
hypotheses of Theorem 1.2
imply the term $g_{(n)}$ is well-defined on $\partial M$. A more
general version of Theorem 1.2, without the smoothness assumption
on $[\gamma]$, is proved in \S 4, cf.~Theorem 4.1. For conformally
compact metrics coupled to other fields, see Remark 4.5.
Of course neither Theorem 1.1 nor Theorem 1.2 holds when just the boundary
metric
$\gamma$ on $U \subset \partial M$ is fixed. For example, in the
context of
Theorem 1.2, by [20] and [16], given any $C^{\infty}$ smooth boundary
metric $\gamma$
sufficiently close to the round metric on $S^{n}$, there is a smooth
(in the polyhomogeneous
sense) conformally compact Einstein metric on the $(n+1)$-ball
$B^{n+1}$, close to the
Poincar\'e metric. Hence, the behavior of $\gamma$ in $U$ is
independent of its
behavior on the complement of $U$ in $\partial M$.
Theorems 1.1 and 1.2 have been phrased in the context of ``global''
Einstein
metrics, defined on compact manifolds with compact boundary. However,
the proofs
are local, and these results hold for metrics defined on an open
manifold with
boundary. From this perspective, the data $(\gamma, A)$ or $(\gamma, g_{(n)})$
on $U$ determine whether an Einstein metric $g$ has a global extension to an
Einstein metric on a compact manifold with boundary, (or a conformally compact
Einstein metric), and how smooth that extension is at the global boundary.
A second purpose of the paper is to prove the following isometry
extension result which
is at least conceptually closely related to Theorem 1.2. However, while
Theorem 1.2 is
valid locally, this result depends crucially on global properties.
\begin{theorem} \label{t1.3}
Let $g$ be a $C^{2}$ conformally compact Einstein metric on a compact
manifold $M$ with
$C^{\infty}$ boundary metric $(\partial M, \gamma)$, and suppose
\begin{equation}\label{e1.5}
\pi_{1}(M, \partial M) = 0.
\end{equation}
Then any connected group of isometries of $(\partial M, \gamma)$
extends to an action
by isometries on $(M, g)$.
\end{theorem}
The condition (1.5) is equivalent to the statement that $\partial M$
is connected
and the inclusion map $\iota: \partial M \rightarrow M$ induces a
surjection
$\pi_{1}(\partial M) \rightarrow \pi_{1}(M) \rightarrow 0$.
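This equivalence follows from the exact homotopy sequence of the pair
$(M, \partial M)$,
$$\pi_{1}(\partial M) \overset{\iota_{*}}{\longrightarrow} \pi_{1}(M)
\longrightarrow \pi_{1}(M, \partial M) \longrightarrow \pi_{0}(\partial M)
\longrightarrow \pi_{0}(M):$$
when $\pi_{1}(M, \partial M) = 0$, exactness at $\pi_{1}(M)$ forces
$\iota_{*}$ to be surjective, while exactness at $\pi_{0}(\partial M)$
forces $\partial M$ to be connected, $M$ itself being connected.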
Rather surprisingly, this result is closely related to the equations
at conformal
infinity induced by the Gauss-Codazzi equations on hypersurfaces
tending to $\partial M$.
It turns out that isometry extension from the boundary at least into a
thickening of
the boundary is equivalent to the requirement that the Gauss-Codazzi
equations induced
at $\partial M$ are preserved under arbitrary deformations of the
boundary metric. This
is discussed in detail in \S 5, see e.g.~Proposition 5.4. We note that
this result does
not hold for complete, asymptotically (locally) flat Einstein metrics,
cf. Remark 5.8.
A simple consequence of Theorem 1.3 is the following uniqueness
result:
\begin{corollary} \label{c1.4}
A $C^{2}$ conformally compact Einstein metric with conformal infinity
given by the
class of the round metric $g_{+1}$ on the sphere $S^{n}$ is necessarily
isometric
to the Poincar\'e metric on the ball $B^{n+1}$.
\end{corollary}
Results similar to Theorem 1.3 and Corollary 1.4 have previously been
proved in a number
of different special cases by several authors, see for example [7],
[9], [31], [33];
the proofs in all these cases are very different from the proof given
here.
It is well-known that unique continuation does not hold for large
classes of elliptic
systems of PDEs, even for general small perturbations of systems which
are diagonal at
leading order; see for instance [23] and references therein for a
discussion related to
geometric PDEs. The proofs of Theorems 1.1 and 1.2 rely on unique
continuation results
of Calder\'on [13], [14] and Mazzeo [27] respectively, based on
Carleman estimates. The
main difficulty in reducing the proofs to these results is the
diffeomorphism covariance
of the Einstein equations and, more importantly, that of the
``abstract'' Cauchy data
$(\gamma, A)$ or $(\gamma, g_{(n)})$ at $\partial M$. The unique
continuation theorem
of Mazzeo requires a diagonal (i.e.~uncoupled) Laplace-type system of
equations, at
leading (second) order. The unique continuation result of Calder\'on is
more general,
but again requires strong restrictions on the structure of the leading
order symbol
of the operator. For emphasis and clarity, these issues are discussed
in more detail
in \S 2. The proofs of Theorems 1.1, 1.2 and 1.3 are then given in \S
3, \S 4 and \S 5
respectively.
Very recently, while the writing of this paper was being completed,
O. Biquard [12] has given a different proof of Theorem 1.2, which avoids
some of the gauge issues discussed above. However, his method apparently
requires $C^{\infty}$ smoothness of the boundary data, which limits the
applicability of this result; for instance the applications in [5]
or [6] require finite or low differentiability of the boundary data.
We would like to thank Michael Taylor for interesting discussions
on geodesic-harmonic coordinates, Piotr Chru\'sciel and Erwann Delay for
interesting discussions concerning Theorem 1.3, and Olivier Biquard for
informing us of his independent work on unique continuation.
\section{Local Coordinates and Cauchy Data}
\setcounter{equation}{0}
In this section, we discuss in more detail the remarks in the Introduction
on classes of local coordinate systems, and their relation with Cauchy
data on the boundary $\partial M$.
Thus, consider for example solutions to the system
\begin{equation} \label{e2.1}
\operatorname{Ric}_{g} = 0,
\end{equation}
defined near the boundary $\partial M$ of an $(n+1)$-dimensional manifold $M$.
Since the Ricci curvature involves two derivatives of the metric,
Cauchy data at $\partial M$ consist of the boundary metric $\gamma $
and its first derivative, invariantly represented by the $2^{\rm nd}$
fundamental form $A$ of $\partial M$ in $M$. Thus, we assume $(\gamma , A)$
are prescribed at $\partial M$, (subject to the Gauss and Gauss-Codazzi
equations), and call $(\gamma, A)$ abstract Cauchy data. Observe that
the abstract Cauchy data are invariant under diffeomorphisms of $M$ equal
to the identity at $\partial M$.
The metric $g$ determines the geodesic defining function
$$t(x) = \operatorname{dist}_{g}(x, \partial M).$$
The function $t$ depends of course on $g$; however, given any
other smooth metric $g'$, there is a diffeomorphism $F$ of a
neighborhood of $\partial M$, equal to the identity on $\partial M$,
such that $t'(x) = \operatorname{dist}_{F^{*}g'}(x, \partial M)$ satisfies $t' = t$.
As noted above, this normalization does not change the abstract Cauchy
data $(\gamma, A)$ and preserves the isometry class of the metric.
Let $\{y^{\alpha}\}$, $0 \leq \alpha \leq n$, be any local coordinates
on a domain $\Omega $ in $M$ containing a domain $U$ in $\partial M$.
We assume that $\{y^{i}\}$ for $1 \leq i \leq n$ form local coordinates
for $\partial M$ when $y^{0} = 0$, so that $\partial / \partial y^{0}$ is
transverse to $\partial M$. Throughout the paper, Greek indices $\alpha$,
$\beta$ run from $0$ to $n$, while Latin indices $i$, $j$ run from $1$
to $n$. If $g_{\alpha\beta}$ are the components of $g$ in these coordinates,
then the abstract Cauchy problem associated to (2.1) in the local coordinates
$\{y^{\alpha}\}$ is the system
\begin{equation} \label{e2.2}
(\operatorname{Ric}_{g})_{\alpha\beta} = 0, \ \ {\rm with} \ \
g_{ij}|_{U} = \gamma_{ij}, \ \tfrac{1}{2}({\mathcal L}_{\nabla
t}g)_{ij}|_{U} =
a_{ij},
\end{equation}
where $\gamma_{ij}$ and $a_{ij}$ are given on $U$, (subject to the constraints
of the Gauss and Gauss-Codazzi equations). Here one immediately sees a
problem, in that (2.2) on $U \subset \partial M$ involves only the
tangential part $g_{ij}$ of the metric (at 0 order), and not the full metric
$g_{\alpha\beta}$ at $U$. The normal $g_{00}$ and mixed $g_{0i}$ components
of the metric are not prescribed at $U$. As seen below, these components
are gauge-dependent; they cannot be prescribed ``abstractly'', independent
of coordinates, as is the case with $\gamma$ and $A$.
In other words, if (2.1) is expressed in local coordinates $\{y^{\alpha}\}$
as above, then a well-defined Cauchy or unique continuation problem has
the form
\begin{equation} \label{e2.3}
(\operatorname{Ric}_{g})_{\alpha\beta} = 0, \ \ {\rm with} \ \
g_{\alpha\beta} = \gamma_{\alpha\beta}, \
\tfrac{1}{2}\partial_{t}g_{\alpha\beta} = a_{\alpha\beta}, \ {\rm on} \
U \subset
\partial M ,
\end{equation}
where $\Omega$ is an open set in $({\mathbb R}^{n+1})^{+}$ with
$\partial \Omega = U$ an
open set in $\partial ({\mathbb R}^{n+1})^{+} ={\mathbb R}^{n}$.
Formally, (2.3) is a determined system, while (2.2) is underdetermined.
Let $g_{0}$ and $g_{1}$ be two solutions to (2.1), with the same
Cauchy data $(\gamma , A)$, and with geodesic defining functions
$t_{0}$, $t_{1}$. Changing the metric $g_{1}$ by a diffeomorphism
if necessary, one may assume that $t_{0} = t_{1}$. One may then write
the metrics with respect to a Gaussian or geodesic boundary coordinate
system $(t, y^{i})$ as
\begin{equation} \label{e2.4}
g_{k} = dt^{2} + (g_{k})_{t},
\end{equation}
where $(g_{k})_{t}$ is a curve of metrics on $\partial M$ and $k = 0, 1$.
Here $y^{i}$ are coordinates on $\partial M$ which are extended into $M$
to be invariant under the flow of the vector field $\nabla t$. The metric
$(g_{k})_{t}$ is the metric induced on $S(t)$ and pulled back to
$\partial M$ by the flow of $\nabla t$. One has $(g_{k})_{0} = \gamma$ and
$\frac{1}{2}\frac{d}{dt}(g_{k})_{t}|_{t=0} = A$.
Since $g_{0\alpha} = \delta_{0\alpha}$ in these coordinates,
$\nabla t = \partial_{t}$,
and hence the local coordinates are the same for both metrics, (or at least
may be chosen to be the same). Thus, geodesic boundary coordinates are natural
from the point of view of the Cauchy or unique continuation problem, since
in such local coordinates the system (2.2), together with the prescription
$g_{0\alpha} = \delta_{0\alpha}$, is equivalent to the system (2.3).
However, the Ricci curvature is not elliptic or diagonal to leading order
in these coordinates. The expression of the Ricci curvature in such
coordinates does not satisfy the hypotheses of Calder\'on's theorem [14],
and it appears to be difficult to establish unique continuation of solutions
in these coordinates by working directly on the equations on the metric
(see, however, [12] for another approach).
Next suppose that $\{x^{\alpha}\}$ are boundary harmonic coordinates,
defined as follows. For $1 \leq i \leq n$, let $\hat x^{i}$ be local harmonic
coordinates on a domain $U$ in $(\partial M, \gamma)$. Extend $\hat x^{i}$ into
$M$ to be harmonic functions in $(\Omega, g)$, $\Omega \subset M$,
with Dirichlet boundary data; thus
\begin{equation} \label{e2.5}
\Delta_{g}x^{i} = 0, \ \ x^{i}|_{U} = \hat x^{i}.
\end{equation}
Let $x^{0}$ be a harmonic function on $\Omega$ with 0 boundary data, so
that
\begin{equation} \label{e2.6}
\Delta_{g}x^{0} = 0, \ \ x^{0}|_{U} = 0.
\end{equation}
Then the collection $\{x^{\alpha}\}$, $0 \leq \alpha \leq n$, form a
local harmonic coordinate chart on a domain $\Omega \subset (M, g)$. In
such coordinates, one has
\begin{equation} \label{e2.7}
(\operatorname{Ric}_{g})_{\alpha\beta} =
-\tfrac{1}{2}g^{\mu\nu}\partial_{\mu}\partial_{\nu}g_{\alpha\beta}
+ Q_{\alpha\beta}(g, \partial g),
\end{equation}
where $Q(g, \partial g)$ depends only on $g$ and its first derivatives.
This is an elliptic operator, diagonal at leading order, and satisfies the
hypotheses of Calder\'on's theorem. However, in general, the local Cauchy problem
(2.3) is not well-defined in these coordinates; if $g_{0}$ and $g_{1}$ are two
solutions of (2.1), each with corresponding local boundary harmonic
coordinates, then the components
$(g_{0})_{0\alpha}$ and $(g_{1})_{0\alpha}$ in general will differ at
$U \subset
\partial M$. This is of course closely related to the fact that there
are many possible
choices of harmonic functions $x^{\alpha}$ satisfying (2.5) and (2.6),
and to the fact
that the behavior of harmonic functions depends on global properties of
$(\Omega, g)$.
In any case, it is not known how to set up a well-defined Cauchy
problem in these
coordinates for which one can apply standard unique continuation
results.
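For orientation, we recall why harmonic coordinates reduce the Ricci
curvature to the form (2.7). In arbitrary local coordinates one has,
schematically,
$$(\operatorname{Ric}_{g})_{\alpha\beta} =
-\tfrac{1}{2}g^{\mu\nu}\partial_{\mu}\partial_{\nu}g_{\alpha\beta}
+ \tfrac{1}{2}\left( g_{\mu\alpha}\partial_{\beta}\Gamma^{\mu}
+ g_{\mu\beta}\partial_{\alpha}\Gamma^{\mu}\right)
+ Q_{\alpha\beta}(g, \partial g),$$
where $\Gamma^{\mu} = g^{\alpha\beta}\Gamma^{\mu}_{\alpha\beta} =
-\Delta_{g}x^{\mu}$. The harmonicity conditions (2.5)--(2.6) are exactly the
statement $\Gamma^{\mu} = 0$, which kills the middle terms and leaves the
leading part diagonal.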
Consider then geodesic-harmonic coordinates ``intermediate'' between
geodesic
boundary and boundary harmonic coordinates. Thus, let $t$ be the
geodesic distance
to $\partial M$ as above. Choose local harmonic coordinates $\hat
x^{i}$ on
$\partial M$ as before and extend them into $M$ to be harmonic on the
level sets
$S(t)$ of $t$, i.e.~locally on $S(t)$,
\begin{equation} \label{e2.8}
\Delta_{U(t)}x^{i} = 0, \ \ x^{i}|_{\partial U(t)} = \hat
x^{i}|_{\partial U(t)};
\end{equation}
here the boundary value $\hat x^{i}$ is the extension of $\hat x^{i}$
on $U$
into $M$ which is invariant under the flow $\phi_{t}$ of $\nabla t$,
and $U(t) = \phi_{t}(U) \subset S(t)$. The functions $(t, x^{i})$ form
a coordinate
system in a neighborhood $\Omega$ in $M$ with $\Omega\cap\partial M =
U$.
It is not difficult to prove that geodesic-harmonic coordinates
preserve the
Cauchy data, in the sense that the data (2.2) in such coordinates imply
the data
(2.3). However, the Ricci curvature is not an elliptic operator in the
metric
in these coordinates, nor is it diagonal at leading order; the main
reason is
that the mean curvature of the level sets $S(t)$ is not a priori
controlled.
So again, it remains an open question whether unique continuation can
be proved
in these coordinates.
Having listed these attempts which appear to fail, a natural choice of
coordinates which does satisfy the necessary requirements is that of
$H$-harmonic coordinates $(\tau , x^{i})$, whose $\tau$-level surfaces
$S_{\tau}$ are of prescribed mean curvature $H$ and with $x^{i}$
harmonic on $S_{\tau}$. These coordinates were introduced by
Andersson-Moncrief [8] to prove a well-posedness result for the
Cauchy problem for the Einstein equations in general relativity,
and, as shown in [8], have a number of advantageous properties.
Thus, adapting some of the arguments of [8], we show in \S 3 that
the Einstein equations (1.2) are effectively elliptic in such
coordinates, and
such coordinates preserve the Cauchy data in the sense above,
(i.e.~(2.2) implies (2.3)). It will then be shown that unique
continuation holds in such coordinates, via application of the
Calder\'on theorem.
\section{Proof of Theorem 1.1}
\setcounter{equation}{0}
Theorem 1.1 follows from a purely local result, which we formulate as follows.
Let $C$ be a domain diffeomorphic to a cylinder $I\times B^{n} \subset
{\mathbb R}^{n+1}$. Let $U = \partial_{0}C = \{0\}\times B^{n}$,
diffeomorphic to a ball in ${\mathbb R}^{n}$, be the horizontal boundary,
and let $\partial C = I\times S^{n-1}$ be the vertical boundary.
Let $g$ be a Riemannian metric on $C$ which is $C^{k-1,\alpha}$ up to the
boundary of $C$ in the given standard coordinate
system $\{y^{\alpha}\} = \{y^0,y^i\}$ with $y^{0} = 0$ on $U$ and $k\geq 2$.
Without loss of generality, we assume that $C$ is chosen sufficiently
small so that $g$ is close to the Euclidean metric $\delta$ in the
$C^{k-1,\alpha}$ topology. For simplicity, we shall rescale $C$ and
the coordinates $\{y^{\alpha}\}$ if necessary so that $(C, g)$ is
$C^{k-1,\alpha}$ close to the standard cylinder $(I\times B^{n}(1),
B^{n}(1)) \subset ({\mathbb R}^{n+1},{\mathbb R}^{n})$, $I = [0,1]$.
We will prove the following local version of Theorem 1.1.
\begin{theorem} \label{t 3.1.}
Let $g_{0}$, $g_{1}$ be two $C^{k-1,\alpha}$ metrics as above on $C$,
$k\geq 4$, satisfying
\begin{equation} \label{e3.1}
\operatorname{Ric}_{g_{i}} = \lambda g_{i}, \quad i = 0,1
\end{equation}
for some fixed constant $\lambda$. Suppose $g_{0}$ and
$g_{1}$ have the same abstract Cauchy data on $U$ in the sense of \S 2,
so that $\gamma_{0} = \gamma_{1}$ and $A_{0} = A_{1}$.
Then $(C, g_{0})$ is isometric to $(C, g_{1})$, by an isometry
equal to the identity on $U$. In particular, Theorem 1.1
holds.
\end{theorem}
The proof of Theorem 3.1 will proceed in several steps, organized
around several Lemmas. We first work with a fixed
metric $g$ on $C$ as above. Let $N$ be the inward unit normal to $U$ in
$C$ and let $A = \nabla N$ be the corresponding second fundamental
form, with mean curvature $H = \operatorname{tr}_{g}A$ on $U$. By the initial assumptions
above, $A$ and $H$ are close to $0$ in $C^{k-2,\alpha}$; more precisely,
one may assume that
$$||A||_{C^{k-2,\alpha}} = O(\varepsilon)$$
with $\varepsilon$ positive but as small as needed, by a further rescaling
of the coordinates (this will play an important role at various places
below). Note moreover that the rescaling process turns the Einstein constant
$\lambda$ into $\varepsilon\lambda$. Abusing notation here, we denote $y^{0} = t$
and without loss of generality assume that the coordinates $y^{i}$ are harmonic
on $U$.
To begin, we construct certain systems of $H$-harmonic coordinates discussed
at the end of \S 2. Let $\phi: C \rightarrow C$ be a diffeomorphism of the
cylinder $C$, (in other words a change of coordinates), so that
$y^{\alpha} = \phi^{\alpha}(x^{\beta})$, where $x^{\beta}$ is another
coordinate system for $C$. As above, we write $x^{\alpha} = (\tau, x^{i})$
and assume that $\phi$ is close to the identity map. The level surfaces
$S_{\tau} = \{\tau\}\times B^{n}$ are mapped under $\phi$ to a foliation
$\Sigma_{\tau}$ of $C$, with each leaf given by the graph of the function
$\phi_{\tau}$ over $B^{n}$. We assume $\phi_{0} = id$, so that $\phi = id$
on $U$. Let $f:\partial C \rightarrow \partial C$ be the induced diffeomorphism
on the boundary $\partial C$.
\begin{lemma} \label{l 3.2.}
Let $k\geq 2$. Given a $C^{k,\alpha}$ mapping $f$ on $\partial C$ as above,
close to the identity in $C^{k,\alpha}$, and a metric $g$ close to the
Euclidean metric $\delta$ in $C^{k-1,\alpha}$ on $C$, there exists a
unique $\phi \in \operatorname{Diff}^{k,\alpha}(C)$ such that, with respect to the
pull-back metric $\phi^{*}(g)$,
\begin{equation} \label{e3.2}
H^{\phi^{*}(g)}(S_{\tau}) = H^{\phi^{*}(g)}(S_{0}), \ \ {\rm and} \ \
\Delta^{\phi^{*}(g)}_{S_{\tau}}x^{i} = 0,
\end{equation}
with the property that $\phi_{|\partial C} = f$. Thus, the leaves
$\tau = const$ have mean curvature independent of $\tau$, in the
$x^{\alpha}$-coordinates, and the coordinate functions $x^{i}$ are
harmonic on each $S_{\tau}$.
\end{lemma}
{\bf Proof:} Let
$$ \mathcal{H} : \operatorname{Met}^{k-1,\alpha}(C)\times \operatorname{Diff}^{k,\alpha}_0(C)
\longrightarrow C^{k-2,\alpha}(C)\times \prod_{1}^{n}C^{k-2,\alpha}(C)
\times \operatorname{Diff}^{k,\alpha}_0(\partial C) $$
$$ \mathcal{H}(g,F) = (H^{F^{*}(g)}(S_{\tau}) - H^{F^{*}(g)}(S_{0}),
\Delta^{F^{*}(g)}_{S_{\tau}}x^{i}, F_{|\partial C}),$$
where $\operatorname{Diff}_{0}^{k,\alpha}(C)$ is the space of $C^{k,\alpha}$ diffeomorphisms
on the cylinder equal to the identity on $C_{0} = \{0\}\times B^{n}$.
The map $\mathcal{H}$ is clearly a smooth map of Banach spaces, and its
linearization at $(\delta, id)$ in the second variable is
$$ L(v) = (\Delta_{\delta}v^{0}, \Delta_{\delta}v^{i}, v_{|\partial C}), $$
where $\Delta_{\delta}$ is the Laplacian with respect to the flat metric $\delta$
on $S_{\tau}$. The operator $L$ is clearly an isomorphism, and by the implicit
function theorem in Banach spaces, it follows that there is a smooth map
$$\Phi : \mathcal{U}\times\mathcal{V} \subset \operatorname{Met}^{k-1,\alpha}(C)\times
\operatorname{Diff}^{k,\alpha}_0(\partial C)
\longrightarrow \operatorname{Diff}^{k,\alpha}_0(C),$$
$$\Phi(g,f) = \phi^g(f)$$
from a neighbourhood of the Euclidean metric and the identity map such that
$\left(\phi^g(f)\right)_{|\partial C} = f$, and satisfying (3.2).
Note moreover that $\phi^g(f)$ is $C^{k,\alpha}$-close to the identity if
$f$ is close to it on $\partial C$ and $g$ is $C^{k-1,\alpha}$-close to
the Euclidean metric on $C$. This implies that the family $\{\Sigma_{\tau}\}$
forms a $C^{k,\alpha}$ foliation of $C$.
{\qed
}
The metric $g$ in the $x^{\alpha} = (\tau, x^{i})$ coordinates,
i.e.~$\phi^{*}g$, may be written in lapse/shift form, commonly used in general
relativity, as
\begin{equation}\label{e3.3}
g = u^{2}d\tau^{2} + g_{ij}(dx^{i} + \sigma^{i}d\tau)(dx^{j} +
\sigma^{j}d\tau),
\end{equation}
where $u$ is the lapse and $\sigma$ is the shift in the $x$-coordinates
and $g_{ij}$ is the induced metric on the leaves $S_{\tau} = \{\tau = const\}$.
A simple computation shows that lapse and shift are related to the metric $g =
g_{\alpha\beta}^ydy^{\alpha}dy^{\beta}$ in the initial $(y^{\alpha})$ coordinates
by the equations
\begin{equation}\label{e3.4}
u^{2} + |\sigma|^{2} =
g_{\alpha\beta}^{y}(\partial_{\tau}\phi^{\alpha})(\partial_{\tau}\phi^{\beta}),
\end{equation}
\begin{equation}\label{e3.5}
g_{ij}\sigma^{j} =
g_{\alpha\beta}^{y}(\partial_{\tau}\phi^{\alpha})(\partial_{i}\phi^{\beta}),
\end{equation}
\begin{equation}\label{e3.6}
g_{ij} = g_{\alpha\beta}^y(\partial_{i}\phi^{\alpha})(\partial_{j}\phi^{\beta}).
\end{equation}
A computation using \eqref{e3.5} shows that
$|\sigma|^{2} = g^{ij}g_{\alpha\beta}^{y}g_{\mu\nu}^{y}
\partial_{\tau}\phi^{\alpha}\partial_{\tau}
\phi^{\mu}\partial_{i}\phi^{\beta}\partial_{j}\phi^{\nu}$.
From $g_{0j} = g_{ij}\sigma^i$ and $g_{00} = u^2 + |\sigma|^2$,
one may compute $g^{\alpha\beta}$ and, expanding, this yields $g^{00} = u^{-2}$
and $\sigma^i = - u^2 g^{0i}$. The unit normal $N$ to the foliation $\Sigma_{\tau}$
is given by
\begin{equation}\label{e3.7}
N = u^{-1}(\partial_{\tau} - \sigma),
\end{equation}
so that, for instance, $g(N,\cdot)=ud\tau$ (this will be useful later on).
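For later reference, the computation indicated above may be spelled out.
From \eqref{e3.3}, the components of $g$ in the $(\tau, x^{i})$ coordinates
are
$$g_{00} = u^{2} + |\sigma|^{2}, \quad g_{0i} = g_{ij}\sigma^{j} =
\sigma_{i}, \quad g_{ij} = (g_{S})_{ij},$$
and a block inversion gives
$$g^{00} = u^{-2}, \quad g^{0i} = -u^{-2}\sigma^{i}, \quad
g^{ij} = (g_{S})^{ij} + u^{-2}\sigma^{i}\sigma^{j}.$$
One checks directly that $N = u^{-1}(\partial_{\tau} - \sigma)$ then
satisfies $g(N, N) = 1$ and $g(N, \partial_{i}) = 0$, as required of the
unit normal to the foliation.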
It is now important to notice that the construction of $H$-harmonic coordinates
in Lemma 3.2 can be done for any choice of boundary diffeomorphism $f$. We
shall show that there is a (unique) choice of $f$ close to the identity with
$f = id$ on $\partial_{0}C = \{0\}\times S^{n-1}$, such that $u$ is
identically $1$ and the shift $\sigma$ vanishes on the vertical boundary
$\partial C = I \times S^{n-1}$.
\begin{lemma}\label{l3.3}
For any $k \geq 3$, there exists a $C^{k,\alpha}$ diffeomorphism
$f: \partial C \rightarrow \partial C$ such that the lapse $u$ and shift
$\sigma$ of $g$ in \eqref{e3.3} satisfy
\begin{equation}\label{e3.8}
u = 1, \ {\rm and} \ \sigma = 0, \ \ {\rm on} \ \ \partial C.
\end{equation}
\end{lemma}
{\bf Proof:} Consider the operator
\begin{equation}\label{7}
\Xi: \operatorname{Met}^{k-1,\alpha}(C)\times \operatorname{Diff}_{0}^{k,\alpha}(\partial C) \rightarrow
C^{k-1,\alpha}(\partial C)\times \prod_{1}^{n}C^{k-1,\alpha}(\partial C),
\end{equation}
$$\Xi(g, f) = (g_{\alpha\beta}^{y}(\partial_{\tau}\phi^{\alpha})
(\partial_{\tau}\phi^{\beta}) - |\sigma|^{2}(\phi), \sigma^{i}(\phi)) ,$$
where $\phi = \phi^g(f)$ is defined above in the proof of Lemma 3.2; recall that
$\phi|_{\partial C} = f$. More precisely, $\Xi$ is defined in the neighborhoods
${\mathcal U}$ and ${\mathcal V}$ defined in Lemma 3.2 above. From \eqref{e3.5},
one has $\sigma^{i} = g^{ij}g_{\alpha\beta}^{y}\partial_{\tau}
\phi^{\alpha}\partial_{j}\phi^{\beta}$.
Note that for the map $f = id$ on $\partial C$, and at the metric $g_0 = \delta$,
one has $\phi^{g_{0}}(id) = id$ and $|\sigma|^{2}(id) = 0$, so that
$\Xi(g_{0}, id) = (1, 0)$. Thus,
\begin{equation}\label{8}
\Xi(g, id) = (1 + O(\varepsilon), O(\varepsilon))
\end{equation}
where, as already discussed, $\varepsilon$ is positive and may be taken
as small as needed. We would like to apply the implicit function theorem
to assert that for any $g \in {\mathcal U}$, where ${\mathcal U}$ is
sufficiently small, there exists $f = f(g) \in {\mathcal V}$, such that
\begin{equation}\label{9}
\Xi(g, f(g)) = (1, 0).
\end{equation}
If such $f$ exists, then, for any $g \in {\mathcal U}$, the pair $(g, f)$ defines
a $C^{k,\alpha}$ diffeomorphism $\phi: C \rightarrow C$ and the resulting
metric $\phi^{*}g$ satisfies \eqref{e3.8}. Thus it suffices to solve \eqref{9}.
There is however a loss of one derivative in the map $\Xi$ and its
derivative in the second variable, as is obvious by looking at its value
at the metric $g_0=\delta$:
\begin{equation}\label{10}
(D_2\Xi)_{(g_0,id)}(h) =
(2\partial_{\tau}h^{0},\partial_{\tau}h^{i} + \partial_{i}h^{0}).
\end{equation}
Thus, we need to use the Nash-Moser inverse function theorem. We use this
in the form given in [34, \S 6.3], and in particular [34, Thm.~6.3.3, Cor.~1,
Cor.~2]. Following Zehnder's notation, (with $s$ in place of $\sigma$), let
$X_{s} = \operatorname{Met}^{k-1,\alpha}(C)$, $Y_{s} = \operatorname{Diff}^{k,\alpha}_{0}(\partial C)$,
and $Z_{s} = C^{k-1,\alpha}(\partial C)\times \prod_{1}^{n}C^{k-1,\alpha}
(\partial C)$, so that $s$ is a linear function of $k+\alpha$. Thus we write
$X_{s} = \operatorname{Met}^{s + 1 + \varepsilon}(C)$, for some arbitrary but fixed
$\varepsilon > 0$ (recall we start at $k \geq 2$), $Y_{s} = \operatorname{Diff}^{s + 2 +
\varepsilon}_{0}(\partial C)$ and $Z_{s} = \prod_{0}^{n}C^{s + 1 +
\varepsilon}(\partial C)$. We check the hypotheses of Zehnder's theorem:
(H1) When $s = 0$, $\Xi$ is $C^{2}$ in $f$, with uniform bounds in $Y_{0}$.
This is clearly true.
(H2) $\Xi$ is Lipschitz in $X_{0}$, also true.
(H3) $\Xi$ is of order $s = \infty$, with growth $\delta = 1$. This follows
from
$$||\Xi(g, f)||_{C^{k-2,\alpha}} \leq C(k)(||g||_{C^{k-1,\alpha}}
+ ||f||_{C^{k,\alpha}}).$$
(H4) Existence of right inverse of loss $\gamma = 1$. Let
$(D_{2}\Xi)_{(g,f)}$ be the derivative of $\Xi$ with respect to the 2nd variable
$f$ at $(g,f)$. Then varying $f$ in the direction $v$, $f_{s} = f + sv$, it
is easy to see that the operator $(D_{2}\Xi)_{(g,f)}$ is a 1st order linear
PDE in $v$, with all coefficients in $C^{k-1,\alpha}$. As in (3.12), the
boundary $S^{n-1} = \{\tau = 0\}$ is non-characteristic. Hence, for any
$h \in C^{k-1,\alpha}$, there exists a unique $C^{k-1,\alpha}$ smooth
solution $v$ to $$D_{2}\Xi_{(g,f)}(v) = h$$ with initial value $v_{0} = 0$
on $S^{n-1}$. This gives the existence of an inverse operator $L_{(g,f)}$
to $D_{2}\Xi_{(g,f)}$, with a loss of 1-derivative. One has
$L: Z_{s} \rightarrow Y_{s- 1}$ with $D_{2}\Xi_{(g,f)} \circ L = id$. The
remaining conditions of (H4) are easily checked to hold.
It follows then from [34, Cor.~2, p.~241] that for any $g$ close
to $g_{0}$ in $X_{2 + \varepsilon}$ there exists $f \in Y_{1}$, (depending
continuously on $g$), which satisfies \eqref{9}, (and similarly for higher $s$).
This shows that, for any $g \in \operatorname{Met}^{k,\alpha}(C)$ close to $g_{0}$,
with $k \geq 3$, there exists $f \in C^{k,\alpha'}(\partial C)$, which
solves \eqref{9}. Pulling back as above gives, for any initial
$g \in C^{k,\alpha}$, a $C^{k-1,\alpha'}$ metric $\phi^{*}g$ in
$H$-harmonic coordinates and satisfying \eqref{e3.8}.
{\qed
}
For the remainder of the proof, we work in the fixed $H$-harmonic coordinate
system satisfying \eqref{e3.8}. Next, we derive the form of the Einstein
equations for the metric $g$ in (\ref{e3.3}). First, the $2^{\rm nd}$
fundamental form $A = \frac{1}{2}{\mathcal L}_{N}\, g_{S_{\tau}}$ of
the leaves $S_{\tau}$ has the form
\begin{equation}\label{e3.17}
A = {\tfrac{1}{2}}u^{-1}(\mathcal{L}_{\partial_{\tau}}g_{S} -
{\mathcal L}_{\sigma}g_{S}),
\end{equation}
where we have denoted by $g_{S}$ the restriction of $g$ to $S_{\tau}$. More
precisely, and since we shall compute on the $(n+1)$-dimensional manifold
with tensors living on the $n$-dimensional slices $S_{\tau}$,
$$ g_{S} = g(\Pi_{S}\cdot,\Pi_{S}\cdot)$$
where $\Pi_{S}$ is the orthogonal projection operator on $S_{\tau}$. Thus,
$g_{S} = g_{ij}(dx^i + \sigma^i d\tau)(dx^j + \sigma^j d\tau)$, as
in \eqref{e3.3}. Clearly \eqref{e3.17} is the same as
\begin{equation} \label{e3.18}
\mathcal{L}_{\partial_{\tau}}g_{S} = 2uA + {\mathcal L}_{\sigma}g_{S} .
\end{equation}
A straightforward computation from commuting derivatives gives the Riccati
equation
\begin{equation} \label{e3.19}
{\mathcal L}_{N}A = A^{2} - u^{-1}D^{2}u - R_{N},
\end{equation}
where $R_{N} = g_{S}(R(\cdot,N)N,\cdot)_{|TS\otimes TS}$ and
$A^{2}$ is the bilinear form associated through $g_{S}$ to the square
of the shape operator of $S_{\tau}$. (The equation (\ref{e3.19}) may also be
derived from the $2^{\rm nd}$ variation formula). Using the fact
that $A$ is tangential, (i.e.~$A(N, \cdot ) = 0$), this gives
\begin{equation} \label{e3.20}
\partial_{\tau}A = {\mathcal L}_{\sigma}A - D^{2}u + uA^{2} - uR_{N}.
\end{equation}
Another straightforward calculation via the Gauss equations shows that
$R_{N} = \operatorname{Ric}_{g} - \operatorname{Ric}_{S_{\tau}} + HA - A^{2}$, which, via
\eqref{e3.18} and \eqref{e3.20} gives the system of ``evolution'' equations for
$g_{ij}$ and $A = A_{ij}$ on $S_{\tau}$:
\begin{equation} \label{e3.21}
\partial_{\tau}g = 2uA + {\mathcal L}_{\sigma}g_{S},
\end{equation}
\begin{equation} \label{e3.22}
\partial_{\tau}A = {\mathcal L}_{\sigma}A - D_{S}^{2}u + u\left( \operatorname{Ric}_{S}
- \operatorname{Ric}_{g} + 2A^{2} - HA \right).
\end{equation}
(Up to sign differences, these are the well-known Einstein evolution
equations in general relativity, cf. \cite{8,32}). Substituting the
expression of $A$ given by \eqref{e3.21} in (\ref{e3.22}) gives the
$2^{\rm nd}$-order evolution equation for $g$:
\begin{equation}\label{e3.23}
(\mathcal{L}_{\partial_{\tau}}\mathcal{L}_{\partial_{\tau}}
+ \mathcal{L}_{\sigma}\mathcal{L}_{\sigma} -
2\mathcal{L}_{\partial_{\tau}}\mathcal{L}_{\sigma})g_{S} =
udu(N) A - 2u D_{S}^{2}u + 2u^2\left( \operatorname{Ric}_{S} - \operatorname{Ric}_{g} + 2A^{2}
- HA \right).
\end{equation}
We now shift from these intrinsic equations to their expressions in coordinates.
Any tangential $1$-form on $S_{\tau}$ necessarily is of the form
$$ \alpha = \alpha_i(dx^i + \sigma^i d\tau),$$ thus it is enough to work with
the $(i,j)$ components only.
Using (\ref{e2.7}), (along the slices $S_{\tau}$), one obtains
\begin{equation} \label{e3.24}
u^{2}\Delta_{S} g_{ij} + \left( (\mathcal{L}_{\partial_{\tau}}\mathcal{L}_{\partial_{\tau}}
+ \mathcal{L}_{\sigma}\mathcal{L}_{\sigma}
- 2\mathcal{L}_{\partial_{\tau}}\mathcal{L}_{\sigma})g_{S}\right)_{ij}
= -2u^{2}(\operatorname{Ric}_{g})_{ij}-2u(D_{S}^{2}u)_{ij} + Q_{ij}(g,\partial g),
\end{equation}
where $Q_{ij}$ is a term involving at most the first order derivatives
of $g_{\alpha\beta}$ in all $x^{\alpha}$ directions. Now,
$$ (\mathcal{L}_{\partial_{\tau}}g_{S})_{ij} = \partial_{\tau}g_{ij},\quad
(\mathcal{L}_{\sigma}g_{S})_{ij} = \sigma^k\partial_k g_{ij}
+ g_{kj}\partial_i\sigma^k + g_{ik}\partial_j\sigma^k , $$
so that, for Einstein metrics,
\begin{equation} \label{e3.25}
(\partial_{\tau}^{2}+ u^{2}\Delta
- 2\sigma^k\partial_k\partial_{\tau} + \sigma^k\sigma^l\partial^2_{kl})g_{ij} =
- 2u(D^{2}u)_{ij} + S_{ij}(g,\partial g) + Q_{ij}(g,\partial g),
\end{equation}
where $Q_{ij}$ has the same general form as before and $S_{ij}$ contains tangential
first and second derivatives of $\sigma$.
The $0i$ and $00$ components of the Ricci curvature in the bulk are given by
the `constraint' equations along each leaf $S_{\tau}$:
\begin{equation} \label{e3.26} \begin{split}
\delta (A - Hg) & = - \operatorname{Ric}_{g}(N, \cdot ) = 0, \\
|A|^{2} - H^{2} + R_{S_{\tau}} & = R_{g} - 2\operatorname{Ric}_{g}(N,N) = (n-1)\lambda .
\end{split}
\end{equation}
Next, we derive the equations for the lapse $u$ and shift $\sigma$
along the leaves $S_{\tau}$.
\begin{lemma} \label{l 3.4.}
The lapse $u$ and shift $\sigma$ satisfy the following equations:
\begin{equation} \label{e3.27}
\Delta u + |A|^{2}u + \lambda u = -uN(H) =
-(\partial_{\tau} - \sigma)H.
\end{equation}
\begin{equation} \label{e3.28}
\Delta\sigma^{i} = -2u\langle D^{2}x^{i}, A \rangle - u\langle dx^{i},
dH \rangle - 2\langle dx^{i}, A(\nabla u) - {\tfrac{1}{2}}Hdu \rangle .
\end{equation}
\end{lemma}
{\bf Proof:} The lapse equation is derived by taking the trace of
(\ref{e3.19}), and noting that
$$ tr {\mathcal L}_{N}A = N(H) + 2|A|^{2}.$$
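Spelling out this trace computation: tracing the right-hand side of
(\ref{e3.19}) with respect to $g_{S}$, and using $tr(A^{2}) = |A|^{2}$,
$tr(u^{-1}D^{2}u) = u^{-1}\Delta u$ and $tr R_{N} = \operatorname{Ric}_{g}(N,N) = \lambda$
for Einstein metrics, gives
$$N(H) + 2|A|^{2} = |A|^{2} - u^{-1}\Delta u - \lambda .$$
Multiplying by $u$ and rearranging yields the lapse equation (\ref{e3.27}).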
For the shift equation, since the functions $x^{i}$ are harmonic on
$S_{\tau}$, one has
$$\Delta ((x^{i})') + (\Delta')(x^{i}) = 0, $$
where $'$ denotes the Lie derivative with respect to $uN$
and the Laplacian is taken with respect to the induced metric on
the slices $S_{\tau}$. Moreover $(x^{i})' = - \sigma^{i}$ (see above),
and from standard formulas, cf. \cite[Ch.~1K]{11} for example, one has
$$(\Delta')(x^{i}) = -2\langle D^{2}x^{i}, \delta^{*}uN\rangle
+ 2\langle dx^{i}, \beta (\delta^{*}uN)\rangle, $$
where all the terms on the right are along $S_{\tau}$ and $\beta$
is the Bianchi operator, $\beta(k) = \delta k + \frac{1}{2}dtr k$.
Moreover, along $S_{\tau}$ one has $\delta^{*}(uN) = uA$, and the shift components $\sigma^{i}$
satisfy
$$\Delta\sigma^{i} = -2u\langle D^{2}x^{i}, A\rangle + 2u\langle dx^{i},
\delta A - {\tfrac{1}{2}}dH\rangle + 2\langle dx^{i}, A(\nabla u) -
{\tfrac{1}{2}}Hdu \rangle.$$
The relation (\ref{e3.28}) then follows from the constraint equation
(\ref{e3.26}).{\qed
}
Summarizing the work above, the Einstein equations in local $H$-harmonic
coordinates imply the following system on the data $(g_{ij}, u, \sigma)$:
\begin{equation} \label{e3.31}
(\partial_{\tau}^{2}+ u^{2}\Delta
- 2\sigma^k\partial_k\partial_{\tau} + \sigma^k\sigma^l\partial^2_{kl})g_{ij} =
- 2u(D^{2}u)_{ij} + S_{ij}(g_{\alpha\beta}, \partial g_{\alpha\beta})
+ Q_{ij}(g_{\alpha\beta}, \partial g_{\alpha\beta}),
\end{equation}
\begin{equation} \label{e3.32}
\Delta u + |A|^{2}u + \lambda u = dH_0(\sigma).
\end{equation}
\begin{equation} \label{e3.33}
\Delta\sigma^{i} = -2u\langle D^{2}x^{i}, A \rangle -
u\partial_{i}H_0 -
2\left( A^i_j\nabla^j u - {\tfrac{1}{2}}H_0 \nabla^i u \right) ,
\end{equation}
where $H_0$ denotes the mean curvature of the $\{\tau=0\}$-slice $U$.
\begin{remark} \label{r 3.6.}
{\rm The system (\ref{e3.31})-(\ref{e3.33}) is essentially an elliptic system
in $(g_{ij}, u, \sigma)$, given that $H = H_{0}$ is prescribed. Thus, assuming
$u \sim 1$ and $\sigma \sim 0$, the operator $P = \partial_{\tau}^{2}+
u^{2}\Delta - 2\sigma^i\partial_{i}\partial_{\tau} +
\sigma^k\sigma^l\partial_{kl}^{2}$ is elliptic on $C$ and acts diagonally on
$\{g_{ij}\}$, as is the Laplace operator on the slices $S_{\tau}$
acting on $(u, \sigma)$. The system (\ref{e3.31})-(\ref{e3.33}) is of course
coupled, but the couplings are all of lower order, i.e.~$1^{\rm st}$ order,
except for the term $D^{2}u$ in (\ref{e3.31}). However, this term can be
controlled or estimated by elliptic regularity applied to the lapse equation
(\ref{e3.32}) (as discussed further below).
Given the above, it is not difficult to deduce that local $H$-harmonic
coordinates have the optimal regularity property, i.e.~if $g$ is in
$C^{m,\alpha}(C)$ in some local coordinate system, then $g$ is in
$C^{m,\alpha}(C)$ in $H$-harmonic coordinates. Since this will not
actually be used here, we omit further details of the proof. }
\end{remark}
Next we show that the lapse and shift, and their $\tau$-derivatives,
are determined by the tangential metric $g_{S}$ and its $\tau$-derivative.
\begin{lemma} \label{l 3.5.}
Suppose the metric $g$ is close to the Euclidean metric in the $C^{2,\alpha}$
topology. Then in local $H$-harmonic coordinates $(\tau,x^i)$ as defined above,
the lapse-shift components $(u,\sigma^{i})$ and their derivatives
$(\partial_{\tau}u,\partial_{\tau}\sigma^{i})$, are uniquely determined
either by the tangential metric $g_{ij}$ and $2^{\rm nd}$ fundamental form
$A_{ij}$ on each $S_{\tau}$, or by the tangential metric $g_{ij}$ and its
time derivatives $\partial_{\tau}g_{ij}$ on each $S_{\tau}$.
\end{lemma}
{\bf Proof:} The system \eqref{e3.32}-(\ref{e3.33}) is a coupled elliptic
system in the pair $(u, \sigma)$ on $S_{\tau}$, with boundary values on
$\partial S_{\tau}$ given by
\begin{equation} \label{e3.29}
u|_{\partial S_{\tau}} = 1 , \ \ \sigma|_{\partial S_{\tau}} = 0.
\end{equation}
In the $x^{i}$ coordinates, all the coefficients of (\ref{e3.32})-(\ref{e3.33})
are bounded in $C^{\alpha}$. Since the metric $g_{ij}$ is close to the flat
metric in the $C^{2,\alpha}$ topology, it is standard that there is then
a unique solution to the elliptic boundary value problem
(\ref{e3.32})-(\ref{e3.33})-(\ref{e3.29}), cf.~[19]. The solution $(u, \sigma)$
is uniquely determined by the coefficients $(g_{ij}, A_{ij})$ and the
terms or coefficients containing derivatives of $H$ and the $x^i$. But
these are also determined by $(g_{ij}, A_{ij})$. Combining the facts above,
it follows that $(u, \sigma)$ is uniquely determined by $(g_{ij}, A_{ij})$.
The second claim is obtained in the same manner: rewrite the equations by
replacing all the occurrences of $A_{ij}$ by its expression in \eqref{e3.17}.
The equations are then non-linear equations in $(u,\sigma^i)$. Considering
them as a non-linear operator from $C^{2,\alpha}$ to $C^{\alpha}$ depending
also on the metric, a simple computation shows that the operator linearized
at the Euclidean metric is invertible. Invertibility of the non-linear operator
then follows from the implicit function theorem.
Next we claim that $\partial_{\tau}g_{0\alpha}$ is also determined by
$(g_{ij}, A_{ij})$ along $S_{\tau}$. To see this, first note that
$$ \mathcal{L}_Ng = \mathcal{L}_Ng_{S} + \mathcal{L}_N(g(N,\cdot))\otimes
g(N,\cdot) + g(N,\cdot)\otimes \mathcal{L}_N(g(N,\cdot)) ,$$
where $\mathcal{L}_Ng_{S}=2A$, $g(N,\cdot) = u d\tau$ and
$$ \mathcal{L}_N(g(N,\cdot))(\partial_i) = - g(N,[u^{-1}(\partial_{\tau} -
\sigma),\partial_i])= u^{-1}du(\partial_i), \quad \mathcal{L}_N(g(N,\cdot))(N)
= 0.$$
This shows that all components of $\mathcal{L}_Ng$ are determined by $(g_{ij},
A_{ij})$, (since $u$ and $\sigma$ are already so determined). Now write
$\mathcal{L}_Ng(\partial_{\tau}, \partial_{\alpha}) = N(g_{0\alpha}) +
l_{0\alpha}$. One has $N(g_{0\alpha}) = u^{-1}\partial_{\tau}g_{0\alpha} -
u^{-1}\partial_{\sigma}g_{0\alpha}$, and the second term is again determined
by $(g_{ij}, A_{ij})$. Calculating the term $l_{0i}$ above explicitly,
one easily finds that it also depends only on $(g_{ij}, A_{ij})$, so that
$$\partial_{\tau}g_{0i} = \phi_i ,$$
is determined by an explicit formula in $g_{ij}$, $A_{ij}$, $u$, $\sigma$ and
their tangential derivatives, and so implicitly by $g_{ij}$, $A_{ij}$.
Working now in the same way shows that the same is true for
$\partial_{\tau}g_{00}$. This completes the proof.
{\qed
}
\subsection*{Proof of Theorems 3.1 and 1.1.}
Suppose that $g$ and $\widetilde{g}$ are two Einstein metrics on $C$
with identical $(\gamma, A)$ on $\partial_{0}C = U$. One may construct
$H$-harmonic coordinates for each, and via a diffeomorphism identifying these
coordinates, assume that the resulting pair of metrics $g$ and $\widetilde g$
have fixed $H$-harmonic coordinates $(\tau, x^{i})$, and both metrics satisfy
the system (\ref{e3.31})-(\ref{e3.33}). Let
\begin{equation} \label{e3.34}
h = h_{ij} = \widetilde g_{ij}- g_{ij}.
\end{equation}
One then takes the difference of both equations (\ref{e3.31}) and freezes the
coefficients at $g$ to obtain a linear equation in $h$. Thus, for example,
$\Delta_{\widetilde g}\widetilde g_{ij} - \Delta_{g}g_{ij} =
\Delta_{g}(h_{ij}) - (g^{ab} - \widetilde g^{ab})
\partial_{a}\partial_{b}\widetilde g_{ij}$. The second
term here is of zero order, (rational), in the difference $h$, with
coefficients depending on two derivatives of $\widetilde g$. Carrying out
the same procedure on the remaining terms in \eqref{e3.31} gives the equation
$$(\partial_{\tau}^{2}+ u^{2}\Delta -
2\partial_{\sigma}\partial_{\tau} + \partial_{\sigma}^{2})h_{ij} = -
2(\widetilde u(\widetilde D^{2}\widetilde u)_{ij} - u(D^{2}u)_{ij}) +
\mathcal{Q}_{ij}(h_{\alpha\beta},\partial_{\mu} h_{\alpha\beta}),$$
where we have denoted $\partial_{\sigma}=\sigma^k\partial_k$,
$\partial_{\sigma}^2=\sigma^k\sigma^l\partial_{kl}$, and $\mathcal{Q}$ is
a term depending on two derivatives of the background $(g,\widetilde g)$
and linear in its arguments, whose precise value may change from line to line.
Similarly,
$\widetilde D^{2}\widetilde u - D^{2}u = D^{2}v + (\widetilde D^{2} -
D^{2})\widetilde u$, where $v = \widetilde u - u$ and the second term is
of the form $Q$ above. Hence,
\begin{equation} \label{e3.35}
(\partial_{\tau}^{2}+ u^{2}\Delta -
2\partial_{\sigma}\partial_{\tau} + \partial_{\sigma}^{2})h_{ij} = -
2u(D^{2}v)_{ij} + \mathcal{Q}_{ij}(h_{\alpha\beta},\partial_{\mu}
h_{\alpha\beta}).
\end{equation}
Note that since we have linearized, $\mathcal{Q}$ depends linearly on
$h_{\alpha\beta}$ and $\partial_{\mu} h_{\alpha\beta}$, with nonlinear
coefficients depending on $\widetilde g$ and $g$.
Next we use the lapse and shift equations (\ref{e3.32})-(\ref{e3.33})
to estimate the differences $v = \widetilde u - u$ and
$\chi = \widetilde \sigma - \sigma$. Thus, as before,
$\Delta_{\widetilde g}\widetilde u
- \Delta_{g}u = \Delta_{g}v + D^{2}_{h}(\widetilde u)$,
where $D^{2}_{h}$ is a $2^{\rm nd}$-order differential operator on $\widetilde u$
with coefficients depending on the difference $h$, to first order. The remaining
terms in (\ref{e3.32})-(\ref{e3.33}) can all be treated in the same way, using
\eqref{e3.17} to replace occurrences of $A_{ij}$ by derivatives in $\tau$ and
$\sigma$. Taking the difference, it then follows from (\ref{e3.32}) and
(\ref{e3.33}) that
\begin{equation} \label{e3.36}\begin{cases}
\Delta v + |A|^2 v +\lambda v = Q(h_{ij}, \partial_{\mu}h_{ij}) \\
\Delta \chi^i +2v\langle D^2x^i,A\rangle + v \partial_iH_0
+2 (A_j^i-\frac{1}{2}H_0\delta_j^i)g^{jk}\partial_kv
= Q^{i}(h_{kl}, \partial_{\mu}h_{kl})
\end{cases}
\end{equation}
where the terms $Q$ are linear in the arguments and their
coefficients depend on first derivatives of
$\widetilde g$. Note also that the zeroth- and first-order terms in $(v,\chi)$
are small if the metric is close to the Euclidean metric. Thus, the left-hand
side operators are invertible with $v = 0$ and $\chi=0$ on $\partial S_{\tau}$,
and elliptic regularity applied to the system \eqref{e3.36} then gives
\begin{equation} \label{e3.37}
||v||_{L_{x}^{2,2}} \leq C(\widetilde g, g) ||h_{ij}||_{L_{(\tau ,x)}^{1,2}} ,
\end{equation}
and
\begin{equation} \label{e3.38}
||\chi||_{L_{x}^{2,2}} \leq C(\widetilde g, g)
||h_{ij}||_{L_{(\tau ,x)}^{1,2}} .
\end{equation}
It follows from (\ref{e3.35}) and (\ref{e3.37})-(\ref{e3.38}) that
\begin{equation} \label{e3.39}
||P(h_{ij})||_{L_{x}^{2}} \leq C(\widetilde{g}_{\alpha\beta}, g_{\alpha\beta})
\, ||h_{\alpha\beta}||_{L_{(\tau ,x)}^{1,2}},
\end{equation}
where $P$ is given in Remark \ref{r 3.6.}.
Now by applying Lemma \ref{l 3.5.} to $(u, \sigma)$ and $(\widetilde u,
\widetilde \sigma)$ and taking the difference as above, it follows that $v$
and $\chi$, as well as $\partial_{\tau}v$ and $\partial_{\tau}\chi$ are
given by a linear expression in $h_{ij}$ and its first derivatives
(in every direction). Hence, \eqref{e3.39} becomes
\begin{equation}\label{e3.40}
||P(h_{ij})||_{L_{x}^{2}} \leq C(\widetilde g, g)
||h_{ij}||_{L_{(\tau ,x)}^{1,2}}.
\end{equation}
We are now in position to apply the Calder\'on unique continuation
theorem [14]. Thus, the operator $P$ is elliptic and diagonal, and the
Cauchy data for $P$ vanish at $U$, i.e.
\begin{equation} \label{e3.41}
h = \partial_{\tau}h = 0 \ \ {\rm at} \ \ U.
\end{equation}
We claim that $P$ satisfies the hypotheses of the Calder\'on unique
continuation theorem [14]. Following [14], decompose the symbol of $P$ as
\begin{equation} \label{e3.42}
A_{2}(\tau ,x,\xi) = (u^{2}g^{kl}\xi_{k}\xi_{l} -
2\sigma^{k}\sigma^{l}\xi_{k}\xi_{l})I,
\end{equation}
$$A_{1}(\tau ,x,\xi) = \sigma^{k}\xi_{k}I, $$
where $I$ is the $N\times N$ identity matrix, $N = \frac{1}{2}n(n+1)$,
equal to the cardinality of $\{ij\}$. Setting $|\xi|^{2} = 1$, \eqref{e3.42}
becomes
$$A_{2}(\tau ,x,\xi) = (u^{2} -
2\sigma^{k}\sigma^{l}\xi_{k}\xi_{l})I,$$
$$A_{1}(\tau ,x,\xi) = \sigma^{k}\xi_{k}I.$$
Now form the matrix
\begin{equation}\label{e3.43}
M = \left(
\begin{array}{cc}
0 & -I \\
A_{2} & A_{1}
\end{array}
\right)
\end{equation}
The matrices $A_{1}$ and $A_{2}$ are diagonal, and it is then easy to
see that $M$ is diagonalizable, i.e.~has a basis of eigenvectors over
${\mathbb C}$. This implies that $M$ satisfies the hypotheses of
[14, Thm.~11(iii)], cf.~also [14, Thm.~4]. The bound \eqref{e3.40} is
substituted in the basic Carleman estimate of [14, Thm.~6], cf.~also
[29, (6.1)], showing that $h_{ij}$ satisfies the unique continuation
property. It follows from \eqref{e3.41} and the Calder\'on unique
continuation theorem that
$$h_{ij} = \widetilde g_{ij} - g_{ij} = 0, $$
in an open neighborhood $\Omega \subset C$.
By Lemma \ref{l 3.5.} once again, this implies $h_{\alpha\beta} = 0$, i.e.
$$\widetilde g_{\alpha\beta} = g_{\alpha\beta}, $$
in $\Omega$, so that $\widetilde{g}$ is isometric to $g$ in $\Omega$.
By construction, the isometry from $\widetilde{g}$ to $g$ equals the identity
on $U$. This shows that the metric $g$ is uniquely determined in $\Omega$,
up to isometry, by the abstract Cauchy data on $U$. Since Einstein
metrics are real-analytic in the interior in harmonic coordinates, a
standard analytic continuation argument, (cf.~[25] for instance), then
implies that $g$ is unique up to isometry everywhere in $C$. This
completes the proof of Theorem 3.1.
In the context of Theorem 1.1, the same analytic continuation argument
shows that a pair of Einstein metrics $(M_{i}, g_{i})$, $i = 0,1$, whose
Cauchy data agree on a common open set $U$ of $\partial M_{i}$ are
everywhere locally isometric, i.e.~they become isometric in suitable
covering spaces, modulo restriction or extension of the domain, as
discussed following Theorem 1.1. This then also completes the proof
of Theorem 1.1.
{\qed
}
As an illustration, suppose $(M_{1}, g_{1})$ and $(M_{2}, g_{2})$ are a
pair of Einstein metrics on compact manifolds-with-boundary and the Cauchy
data for $g_{1}$ and $g_{2}$ agree on an open set $U$ of the boundary.
Suppose $M_{i}$ are connected and the topological condition (1.5) holds
for each $M_{i}$. Then, modulo isometry, either $M_{1} \subset M_{2}$,
$M_{2} \subset M_{1}$, or $M_{i}$ are subdomains in a larger Einstein
manifold $M_{3} = M_{1}\cup M_{2}$.
We conclude this section with a discussion of generalizations of
Theorem 1.1. First, one might consider the unique continuation problem
for
\begin{equation} \label{e3.45}
\operatorname{Ric}_{g} = T,
\end{equation}
where $T$ is a fixed symmetric bilinear form on $M$, at least
$C^{\alpha}$ up to $\bar M$. However, this problem is not natural,
in that it is not covariant under changes by diffeomorphism. For metrics
alone, the Einstein equation (1.2) is the only equation covariant under
diffeomorphisms which involves at most the $2^{\rm nd}$ derivatives of
the metric. Nevertheless, the proof of Theorem 1.1 shows that if
$\widetilde g$ and $g$ are two solutions of \eqref{e3.45} which have
common $H$-harmonic coordinates near (a portion of) $\partial M$ on which
$(\gamma, A) = (\widetilde \gamma, \widetilde A)$, then $\widetilde g$ is
isometric to $g$ near (a portion of) $\partial M$.
Instead, it is more natural to consider the Einstein equation
coupled (covariantly) to other fields $\chi$ besides the metric; such
equations arise naturally in many areas of physics. For example, $\chi$
may be a function on $M$, i.e.~a scalar field, or $\chi$ may be a
connection 1-form (gauge field) on a bundle over $M$. We assume that
the field(s) $\chi$ arise via a diffeomorphism-invariant Lagrangian
${\mathcal L} = {\mathcal L}(g, \chi)$, depending on $\chi$ and its
first derivatives in local coordinates, and that $\chi$ satisfies
field equations, i.e.~Euler-Lagrange equations, coupled to the metric.
For example, for a free massive scalar field, the equation is the
eigenfunction equation
\begin{equation} \label{e3.46}
\Delta_{g}\chi = \mu\chi ,
\end{equation}
while for a connection 1-form, the equations are the Yang-Mills
equations, (or Maxwell equations when the bundle is a $U(1)$ bundle):
\begin{equation} \label{e3.47}
dF = d^{*}F = 0,
\end{equation}
where $F$ is the curvature of the connection $\chi$. Associated to
such fields is the stress-energy tensor $T = T_{\mu\nu}$; this is a
symmetric bilinear form obtained by varying the Lagrangian for $\chi$
with respect to the metric, cf.~[22] for example. For the free massive
scalar field $\chi$ above, one has
$$T = d\chi\cdot d\chi - \tfrac{1}{2}(|d\chi|^{2} + \mu\chi^{2})g, $$
while for a connection 1-form
$$T = F \cdot F - \tfrac{1}{4}|F|^{2}g, $$
where $(F \cdot F)_{ab} = F_{ac}F_{bd}g^{cd}$.
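Note, for example, that $tr_{g}(F \cdot F) = F_{ac}F_{bd}g^{cd}g^{ab} =
|F|^{2}$, so that
$$tr_{g}T = |F|^{2} - \tfrac{1}{4}(n+1)|F|^{2} = \tfrac{3-n}{4}|F|^{2};$$
thus the Yang-Mills stress-energy tensor is trace-free exactly in dimension
$n+1 = 4$, the conformally invariant case.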
When the part of the Lagrangian involving the metric to $2^{\rm nd}$
order
only contains the scalar curvature, i.e.~the Einstein-Hilbert action,
the
resulting coupled Euler-Lagrange equations for the system $(g, \chi)$
are
\begin{equation} \label{e3.48}
\operatorname{Ric}_{g} - \frac{R}{2}g = T, \ \ E_{g}(\chi) = 0.
\end{equation}
By taking the trace, this can be rewritten as
\begin{equation} \label{e3.49}
\operatorname{Ric}_{g} = \hat T = T - \frac{1}{n-1}(tr_{g}T)g, \ \ E_{g}(\chi) = 0.
\end{equation}
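Explicitly, tracing the first equation in \eqref{e3.48} over the
$(n+1)$-dimensional manifold gives $R - \frac{n+1}{2}R = tr_{g}T$, i.e.
$R = -\frac{2}{n-1}tr_{g}T$, and hence
$$\operatorname{Ric}_{g} = T + \frac{R}{2}g = T - \frac{1}{n-1}(tr_{g}T)g .$$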
Here we assume $E_{g}(\chi)$ is a $2^{\rm nd}$ order elliptic system
for
$\chi$, with coefficients depending on $g$, as in \eqref{e3.46} or
\eqref{e3.47}, (the latter viewed as an equation for the connection).
In case the field(s) $\chi$ have an internal symmetry group, as in
the case of gauge fields, this will require a particular choice of
gauge for $\chi$ in which the Euler-Lagrange equations become an
elliptic system in $\chi$. It is also assumed that solutions $\chi$
of $E_{g}(\chi) = 0$ satisfy the unique continuation property; for
instance $E_{g}$ satisfies
the hypotheses of the Calder\'on theorem [14]. Theorem 1.1 now easily
extends to cover
\eqref{e3.48} or \eqref{e3.49}.
\begin{proposition} \label{p3.7}
Let $M$ be a compact manifold with boundary $\partial M$. Then
$C^{3,\alpha}$ solutions $(g, \chi)$ of \eqref{e3.48} on $\bar M$
are uniquely determined, up to local isometry and inclusion, by
the Cauchy data $(\gamma, A)$ of $g$ and the Cauchy data
$(\chi, \partial_{t}\chi)$ on an open set $U \subset \partial M$.
\end{proposition}
{\bf Proof:}
The proof is the same as the proof of Theorem 1.1. Briefly, via a
suitable
diffeomorphism equal to the identity on $\partial M$, one brings a pair
of solutions
of \eqref{e3.48} with common Cauchy data into a fixed system of $H$-harmonic
coordinates
for each metric. As before, one then applies Calder\'on uniqueness to
the resulting
system \eqref{e3.48} in the difference of the metrics and fields. Further
details are left
to the reader.
{\qed
}
\section{Proof of Theorem 1.2.}
\setcounter{equation}{0}
Let $g$ be a conformally compact metric on a compact $(n+1)$-manifold
$M$ with boundary
which has a $C^{2}$ geodesic
compactification
\begin{equation} \label{e4.1}
\bar g = t^{2}g,
\end{equation}
where $t(x) = dist_{\bar g}(x, \partial M)$. By the Gauss Lemma, one
has the splitting
\begin{equation} \label{e4.2}
\bar g = dt^{2} + g_{t},
\end{equation}
near $\partial M,$ where $g_{t}$ is a curve of metrics on $\partial M$
with $g_{0} = \gamma$ the boundary metric. The curve $g_{t}$ is
obtained by taking the induced metric on the level sets $S(t)$ of $t$,
and pulling back by the flow of $N = \bar \nabla t$. Note that if
$r = -\log t$, then $g = dr^{2} + t^{-2}g_{t}$, so the integral curves
of $\nabla r$ with respect to $g$ are also geodesics. Each choice of
boundary
metric $\gamma \in [\gamma]$ determines a unique geodesic defining
function $t$.
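To spell out the change of variables: with $r = -\log t$ one has
$dr = -t^{-1}dt$, so by \eqref{e4.2},
$$g = t^{-2}\bar g = t^{-2}dt^{2} + t^{-2}g_{t} = dr^{2} + t^{-2}g_{t};$$
in particular $|\nabla r|_{g} = 1$, so the integral curves of $\nabla r$ are
unit-speed $g$-geodesics.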
Now suppose $g$ is Einstein, so that (1.4) holds and suppose for the
moment that
$g$ is $C^{2}$ conformally compact with $C^{\infty}$ smooth boundary
metric $\gamma$.
Then the boundary regularity result of [16] implies that $\bar g$ is
$C^{\infty}$
smooth when $n$ is odd, and is $C^{\infty}$ polyhomogeneous when $n$ is
even. Hence,
the curve $g_{t}$ has a Taylor-type series in $t$, called the
Fefferman-Graham
expansion [18]. The exact form of the expansion depends on whether $n$
is odd
or even. If $n$ is odd, one has a power series expansion
\begin{equation} \label{e4.3}
g_{t} \sim g_{(0)} + t^{2}g_{(2)} + \cdots + t^{n-1}g_{(n-1)} +
t^{n}g_{(n)}
+ \cdots ,
\end{equation}
while if $n$ is even, the series is polyhomogeneous,
\begin{equation} \label{e4.4}
g_{t} \sim g_{(0)} + t^{2}g_{(2)} + \cdots + t^{n}g_{(n)} + t^{n}\log t
\ {\mathcal H} +
\cdots .
\end{equation}
In both cases, this expansion is even in powers of $t$, up to $t^{n}$.
It is important
to observe that the coefficients $g_{(2k)}$, $k \leq [n/2]$, as well as
the coefficient
${\mathcal H}$ when $n$ is even, are explicitly determined by the
boundary metric $\gamma =
g_{(0)}$ and the Einstein condition (1.4), cf.~[18], [20]. For $n$
even, the series (4.4)
has terms of the form $t^{n+k}(\log t)^{m}$.
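For illustration, the first of these determined coefficients is, for
$n \geq 3$,
$$g_{(2)} = -\frac{1}{n-2}\Bigl( \operatorname{Ric}_{\gamma} -
\frac{R_{\gamma}}{2(n-1)}\gamma \Bigr),$$
i.e.~minus the Schouten tensor of the boundary metric $\gamma = g_{(0)}$,
cf.~[18].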
For any $n$, the divergence and trace (with respect to $g_{(0)} =
\gamma$) of $g_{(n)}$
are determined by the boundary metric $\gamma$; in fact there is a
symmetric bilinear form
$r_{(n)}$ and scalar function $a_{(n)}$, both depending only on
$\gamma$ and its derivatives
up to order $n$, such that
\begin{equation} \label{e4.5}
\delta_{\gamma}(g_{(n)} + r_{(n)}) = 0, \ \ {\rm and} \ \
tr_{\gamma}(g_{(n)} + r_{(n)}) =
a_{(n)}.
\end{equation}
For $n$ odd, $r_{(n)} = a_{(n)} = 0$. (The divergence-free tensor
$g_{(n)} + r_{(n)}$ is
closely related to the stress-energy of a conformal field theory on
$(\partial M, \gamma)$,
cf.~[17]). The relations (4.5) will be discussed further in \S 5.
However, beyond the relations (4.5), the term $g_{(n)}$ is not
determined by $g_{(0)}$;
it depends on the ``global'' structure of the metric $g$. The higher
order coefficients
$g_{(k)}$ of $t^{k}$ and coefficients $h_{(km)}$ of $t^{n+k}(\log
t)^{m}$, are then
determined by $g_{(0)}$ and $g_{(n)}$ via the Einstein equations. The
equations (4.5)
are constraint equations, and arise from the Gauss-Codazzi and Gauss
and Riccati equations
on the level sets $S(t) = \{x: t(x) = t\}$ in the limit $t \rightarrow
0$; this is also
discussed further in \S 5.
In analogy to the situation in \S 3, the term $g_{(n)}$ corresponds
to the $2^{\rm nd}$
fundamental form $A$ of the boundary, in that, modulo the constraints
(4.5), it is freely
specifiable as Cauchy data, and is the only such term depending on
normal derivatives of
the boundary metric.
Suppose now $g_{0}$ and $g_{1}$ are two solutions of
\begin{equation} \label{e4.6}
\operatorname{Ric}_{g} + ng = 0,
\end{equation}
with the same $C^{\infty}$ conformal infinity $[\gamma]$. Then there
exist
geodesic defining functions $t_{k}$ such that $\bar g_{k} =
(t_{k})^{2}g_{k}$
have a common boundary metric $\gamma \in [\gamma]$, and both metrics
are
defined for $t_{k} \leq \varepsilon$, for some $\varepsilon > 0$.
The hypotheses of Theorem 1.2, together with the discussion above
concerning
(4.3) and (4.4), then imply that
\begin{equation} \label{e4.7}
|g_{1} - g_{0}| = o(e^{-nr}) = o(t^{n}),
\end{equation}
where the norm is taken with respect to $g_{1}$, (or $g_{0}$).
Given this background, we prove the following more general version of
Theorem 1.2,
analogous to Theorem 3.1. Let $\Omega$ be a domain diffeomorphic to
$I\times
B^{n}$, where $B^{n}$ is a ball in ${\mathbb R}^{n}$ with boundary $U =
\partial
\Omega$ diffeomorphic to a ball in ${\mathbb R}^{n} \simeq \{0\}\times
{\mathbb R}^{n}$.
\begin{theorem} \label{t4.1}
Let $g_{0}$ and $g_{1}$ be a pair of conformally compact Einstein
metrics on
a domain $\Omega$ as above. Suppose $g_{0}$ and $g_{1}$ have
$C^{3,\alpha}$
geodesic compactifications, and \eqref{e4.7} holds in $\Omega$.
Then $(\Omega, g_{0})$ is isometric to $(\Omega, g_{1})$, by an
isometry equal
to the identity on $\partial \Omega$. Hence, if $(M_{0}, g_{0})$ and
$(M_{1}, g_{1})$
are conformally compact Einstein metrics on compact manifolds with
boundary, and
\eqref{e4.7} holds on some open domain $\Omega$ in $M_{0}$ and $M_{1}$,
then the manifolds
$M_{0}$ and $M_{1}$ are diffeomorphic in some covering space of each
and the lifted
metrics $g_{0}$ and $g_{1}$ are isometric.
\end{theorem}
The proof of Theorem 4.1 is very similar to that of Theorem 3.1. For
clarity, we first prove the result in case the metrics $g_{i}$, $i =
0,1$, have a common $C^{\infty}$ boundary metric $\gamma$ and then show
how the proof can be extended to cover the more general case of metrics
with less regularity.
By applying a diffeomorphism if necessary, one may assume that the
metrics $g_{i}$ have a common geodesic defining function $t$ defined
near $\partial\Omega$ and common geodesic boundary coordinates. By
[16],
the geodesically compactified metrics $\bar g_{i} = t^{2}g_{i}$ are
$C^{\infty}$ polyhomogeneous and extend $C^{\infty}$ polyhomogeneously
to $\partial\Omega$. It follows from the discussion of the Fefferman-Graham
expansion following (4.5) that $g_{0}$ and $g_{1}$ agree to infinite order
at $\partial U$, i.e.
\begin{equation} \label{e4.8}
k = g_{1} - g_{0} = O(t^{\nu}),
\end{equation}
for any $\nu < \infty$. Of course $k_{0\alpha} = 0$.
For the rest of the proof, we work in the setting of the compactified
metrics $\bar g_{i}$. As in the proof of Theorem 3.1, we assume
that the domain $\Omega$, now denoted $C$, is sufficiently small so that
$(C, \bar g_{i})$ is close to the flat metric on the standard cylinder
$C = I\times B^{n}$, with $\bar A = 0$ on $U = \partial_{0}C$.
(Note that $g_{(1)} = 0$ in (4.3)-(4.4)). In particular, near
$\partial_{0}C$, $\bar H = O(t)$. One may construct a foliation
$S_{\tau}$ with $\bar H_{S_{\tau}} = 0$, together with corresponding
$H$-harmonic coordinates $(\tau , x^{i})$, exactly as in Lemmas 3.2
and 3.3, and satisfying the boundary conditions \eqref{e3.8}. All of
the analysis carried out in \S 3 carries over to this situation with
only a single difference. Namely, for the term $\operatorname{Ric}_{g}$ in \eqref{e3.23}
or \eqref{e3.24}, one now no longer has $\operatorname{Ric}_{g} = \lambda g$, but instead
the Ricci curvature $\bar \operatorname{Ric}$ of the compactified metric $\bar g$.
Using the facts that $\operatorname{Ric}_{g} = -ng$ and the compactification
$\bar g$ is geodesic, standard formulas for the behavior of Ricci
curvature under conformal change give
\begin{equation} \label{e4.9}
\bar \operatorname{Ric} = -(n-1)t^{-1}\bar D^{2}t - t^{-1}\bar \Delta t \bar g.
\end{equation}
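This follows from the standard conformal transformation rule for Ricci
curvature: for $g = t^{-2}\bar g$ on an $(n+1)$-manifold,
$$\operatorname{Ric}_{g} = \operatorname{Ric}_{\bar g} + (n-1)t^{-1}\bar D^{2}t +
\bigl( t^{-1}\bar \Delta t - nt^{-2}|\bar \nabla t|^{2} \bigr)\bar g .$$
Setting $\operatorname{Ric}_{g} = -ng = -nt^{-2}\bar g$ and using $|\bar \nabla t| = 1$ for
the geodesic compactification, the $t^{-2}$ terms cancel, leaving \eqref{e4.9}.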
One has $\bar D^{2}t = {\mathcal L}_{\nabla t}\bar g = O(t)$.
If $(t, y^{i})$ are geodesic boundary coordinates, then $\partial_{x^{i}}
= \sum(1-\varepsilon (\tau))\partial_{y^{j}} + \varepsilon (\tau)\nabla t$,
where $\varepsilon(\tau) = O(\tau)$. Similarly, $\tau /t = 1 +
\varepsilon(\tau)$. (The specific form of $\varepsilon (\tau)$
of course differs in each occurrence above, but this is insignificant).
Since $\bar D^{2}t$ vanishes on $\nabla t$, it follows from (4.9)
that in the $x^{i}$ coordinates on $S_{\tau}$,
\begin{equation} \label{e4.10}
\bar \operatorname{Ric}_{ij} = -(n-1)(1-\varepsilon)^{2}t^{-1}({\mathcal L}_{\nabla t}
\bar g)_{ij} - (1-\varepsilon)^{2}t^{-1}(\bar \Delta t)\bar g_{ij} +
\varepsilon t^{-1}(\bar \Delta t)q_{ij},
\end{equation}
where $q_{ij}$ depends only on $\bar g_{0\alpha}$ to zero-order. Next
$({\mathcal L}_{\nabla t}\bar g) = (1 - \varepsilon)\partial_{\tau}\bar
g +
\varepsilon (\tau)\partial_{x^{\alpha}}\bar g$ and similarly for the
Laplace term in (4.10). Substituting (4.10) in \eqref{e3.24}, it follows
that the analogue of \eqref{e3.25} in this context is the `evolution equation'
\begin{equation} \label{e4.11}
\tau^{2}(\partial_{\tau}^{2}+ u^{2}\Delta -
2\partial_{\sigma}\partial_{\tau} + \partial_{\sigma}^{2})g_{ij} = -
2\tau^{2}u(D^{2}u)_{ij} + S_{ij}(g,\tau\partial g) + Q_{ij}(g,\tau\partial g),
\end{equation}
where $S_{ij}$ and $Q_{ij}$ have the same meaning as before. Here and below,
we drop the bar from the notation.
The lapse $u$ and shift $\sigma$ satisfy essentially the same
equations as before, namely
\begin{equation} \label{e4.12}
\Delta u + |A|^{2}u - (t^{-1}\Delta t)u = 0,
\end{equation}
\begin{equation} \label{e4.13}
\Delta\sigma^{i} = -2u\langle D^{2}x^{i}, A \rangle - 2\langle dx^{i},
A(\nabla u) \rangle .
\end{equation}
Comparing with (\ref{e3.27})-(\ref{e3.28}), one has here $H = 0$, with the
$\lambda$ term replaced by $-t^{-1}\Delta t$. Lemma 3.6 holds
as before, since $t^{-1}\Delta t$ is smooth up to $\partial_{0}C$.
One now proceeds just as in the proof of Theorem 3.1, taking the
difference of the equation (4.11) to obtain a linear equation on $h =
\widetilde g - g$; (recall that the bars have been removed from the
notation). Note that by (4.8), together with elliptic regularity
applied to (4.12)-(4.13), as in the proof of Lemma 3.6, one has
\begin{equation} \label{e4.14}
h_{\alpha\beta} = O(t^{\nu}),
\end{equation}
for all $\nu < \infty$. The estimates \eqref{e3.37}-\eqref{e3.39}
and \eqref{e3.40} hold as before.
Let $P(h_{ij}) = \tau^{2}(\partial_{\tau}^{2}+ u^{2}\Delta -
2\partial_{\sigma}\partial_{\tau} + \partial_{\sigma}^{2})$. Then
$P$ is a fully degenerate $2^{\rm nd}$ order elliptic operator, with
smooth coefficients, and one has
$$||P(h_{ij})||_{L_{x}^{2}} \leq C||h_{ij}||_{L_{\tau,x}^{1,2}},$$
where the $1^{\rm st}$ order derivatives on the right are of the form
$\tau\partial$. Further, by (4.14), $h$ vanishes to infinite order at
$\partial_{0}C$. It then follows from a unique continuation theorem
of Mazzeo, [27, Thm.~14], that
$$h_{ij} = 0 $$
in $\Omega \subset C$. The vanishing of $h = h_{\alpha\beta}$ in
$C$ then follows as before in the proof of Theorem 3.1.
Next suppose $g_{0}$ and $g_{1}$ have only a $C^{3,\alpha}$ geodesic
compactification with a common boundary metric $\gamma$, but that
(4.7) holds. All of the arguments above remain valid, except the
infinite order vanishing property (4.8), and the corresponding (4.14),
which are replaced by the statements $k = o(t^{n})$ and $h = o(t^{n})$
respectively. The unique continuation result in [27] per se, requires
the infinite order decay (4.14). Thus, it suffices to show that (4.14)
does in fact hold.
To do this, we first show that $k = O(t^{\nu})$ weakly, for all $\nu
< \infty$. This will imply $h = O(t^{\nu})$ weakly, and the strong or
pointwise decay (4.14) then follows from elliptic regularity.
In geodesic boundary coordinates, the geodesic compactification of a
conformally compact Einstein metric satisfies the equation
\begin{equation} \label{e4.15}
t\ddot g - (n-1)\dot g - 2Hg^{T}-2t\operatorname{Ric}_{S(t)} + tH\dot g - t(\dot
g)^{2} = 0,
\end{equation}
where $\dot g$ is the Lie derivative of $g$ with respect to $\nabla t$,
cf.~[18] or [21]. Thus $\dot g = 2A$, where $A$ is the $2^{\rm nd}$
fundamental form of the level set $S(t)$ of $t$, (with respect to the
inward normal). Also $H = tr A$, $T$ denotes restriction or projection
onto $S(t)$ and $\operatorname{Ric}_{S(t)}$ is the intrinsic Ricci curvature of
$S(t)$.
(The equation (4.15) may be derived from \eqref{e3.22} by setting $u = 1$
and $\sigma = 0$). We recall, as above, that the bar has been removed
from the notation.
As above, the metrics $g_{0}$ and $g_{1}$ are assumed to have a fixed
geodesic defining function $t$ with common boundary metric $\gamma$ and
common geodesic boundary coordinates. Taking the difference of the
equation (4.15) evaluated on $g_{1}$ and $g_{0}$ gives the following
equation for $k = g_{1} - g_{0}$ as in (4.8):
\begin{equation} \label{e4.16}
t\ddot k - (n-1)\dot k = tr(\dot k) g_{0}^{T} + 2t(\operatorname{Ric}_{S(t)}^{1} -
\operatorname{Ric}_{S(t)}^{0})
+ O(t)k + O(t^{2})\dot k,
\end{equation}
where $O(t^{k})$ denotes terms of order $t^{k}$ with coefficients depending
smoothly on $g_{0}$. The term $\operatorname{Ric}_{S(t)} = D_{x}^{2}(g_{ij})$ is a
$2^{\rm nd}$ order operator on $g_{ij}$, so that (4.16) gives
\begin{equation} \label{e4.17}
t\ddot k - (n-1)\dot k = tr(\dot k) g_{0}^{T} + 2tD_{x}^{2}(k) + O(t)k +
O(t^{2})\dot k .
\end{equation}
The (positive) indicial root of the trace-free part of (4.16) or (4.17)
is $n$, in that the formal power series solution of (4.17) has an
undetermined coefficient at order $t^{n}$, as in the Fefferman-Graham
expansion (4.3)-(4.4).
The hypothesis (4.7) implies that
\begin{equation} \label{e4.18}
k = o(t^{n}),
\end{equation}
so that this $n^{\rm th}$ order coefficient vanishes. However, taking the
trace of (4.17) gives
$$t \, tr\ddot k - (2n-1)tr \dot k = tr (O(t)k + O(t^{2})\dot k) +
2t \, tr (D_{x}^{2}(k)),$$
which has indicial root $2n$. To see that $tr k$ is in fact formally
determined at order $2n$, one uses the trace of the Riccati equation
\eqref{e3.19}, (with $u = 1$ and $\sigma = 0$), which gives
\begin{equation}\label{e4.19}
\dot H + |A|^{2} = -\operatorname{Ric}(T,T).
\end{equation}
Via (4.9), this is easily seen to be equivalent to
$$t\dot H - H = -t|A|^{2}.$$
This holds for each compactified metric $g_{1}$ and $g_{0}$, and so
taking
the difference, and computing as in (4.16)-(4.17) gives the equation
\begin{equation} \label{e4.20}
t\frac{d^{2}}{dt^{2}}(tr k) - \frac{d}{dt}(tr k) = O(t)k +
O(t^{2})\dot k .
\end{equation}
The positive indicial root of (4.20) is 2, and by (4.7), the
$O(t^{2})$
component of the formal expansion of $tr k$ vanishes. Similarly, the
trace-free
part $k_{0}$ of $k$ satisfies the equation
\begin{equation} \label{e4.21}
t\ddot k_{0} -(n-1)\dot k_{0} = 2t(D_{x}^{2}(k))_{0} + [O(t)k]_{0} +
[O(t^{2})\dot k]_{0} ,
\end{equation}
with indicial root $n$. As in [18], by repeated differentiation of
(4.20)
and (4.21) it follows from (4.7) that the formal expansion of $k$
vanishes.
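The indicial roots quoted above all come from the same elementary computation, which we record for convenience: substituting a formal power $t^{a}$ into the homogeneous operator $t\frac{d^{2}}{dt^{2}} - m\frac{d}{dt}$ gives
$$t\frac{d^{2}}{dt^{2}}t^{a} - m\frac{d}{dt}t^{a} = \big(a(a-1) - ma\big)t^{a-1} = a(a-m-1)t^{a-1},$$
so the indicial roots are $a = 0$ and $a = m+1$. Taking $m = n-1$ gives the root $n$ for (4.17) and (4.21), $m = 2n-1$ gives the root $2n$ for the trace equation, and $m = 1$ gives the root $2$ for (4.20).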
Next we show that (4.8) holds weakly.
\begin{lemma} \label{l4.2}
Suppose $k = o(t^{n})$ weakly, in that, with respect to the
compactified metric
$(S(t), g)$, ($g = g_{0}$),
\begin{equation} \label{e4.22}
\int_{S(t)}\langle k, \phi \rangle = o(t^{n}), \ {\rm as} \ t
\rightarrow 0 ,
\end{equation}
where $\phi$ is any symmetric bilinear form, $C^{\infty}$ smooth up to
$U = \partial_{0}C$ and vanishing to infinite order on $\partial C$.
Then
\begin{equation} \label{e4.23}
k = o(t^{\nu}), \ {\rm weakly},
\end{equation}
for any $\nu < \infty$, i.e.~{\rm (4.22)} holds, with $\nu$ in place of
$n$.
\end{lemma}
{\bf Proof:}
Here smoothness is measured with respect to the given geodesic
coordinates
$(t, x^{i})$ covering $C$. The proof proceeds by induction,
starting at
the initial level $n$. As above, the trace-free and pure trace cases
are treated
separately, and so we assume in the following first that $\phi$ is
trace-free.
Pair $k$ with $\phi$ and integrate (4.17) over the level sets $S(t)$ to
obtain
\begin{equation} \label{e4.24}
t\int_{S(t)}\langle \ddot k, \phi \rangle - (n-1)\int_{S(t)}\langle
\dot k,
\phi \rangle = t\int_{S(t)}\langle k, P_{2}(\phi) \rangle +
\int_{S(t)}\langle O(t)k, \phi \rangle + \int_{S(t)}\langle
O(t^{2})\dot k, \phi \rangle .
\end{equation}
Here $P_{2}(\phi)$ is obtained by integrating the $D_{x}^{2}$ term on
the right in
(4.17) by parts over $S(t)$. Thus $P_{2}(\phi)$, and more generally,
$P_{k}(\phi)$
denote differential operators of order $k$ on $\phi$ with coefficients
depending
on $g$ and $g_{1}$ and their derivatives up to order 2 and so at least
continuous
up to $\bar \partial \Omega$. We use these expressions generically, so
their
exact form may change from line-to-line below. Note also there are no
boundary
terms at $\partial S(t)$ arising from the integration by parts, by the
vanishing
hypothesis on $\partial C$.
For the terms on the right in (4.24) one then has
$$\int_{S(t)}\langle O(t)k, \phi \rangle = t\int_{S(t)}\langle k,
P_{0}(\phi) \rangle ,$$
while, since $A = O(t)$ and $H = O(t)$,
$$\int_{S(t)}\langle O(t^{2})\dot k, \phi \rangle =
t^{2}\int_{S(t)}\langle \dot k, P_{0}(\phi) \rangle
= t^{2}\frac{d}{dt}\int_{S(t)}\langle k, P_{0}(\phi) \rangle -
t^{2}\int_{S(t)}\langle k, P_{1}(\phi) \rangle .$$
Similarly, for the terms on the left in (4.24), one has
$$\int_{S(t)}\langle \dot k, \phi \rangle =
\frac{d}{dt}\int_{S(t)}\langle k, \phi \rangle - t \int_{S(t)}\langle
k, P_{1}(\phi) \rangle ,$$
while
$$\int_{S(t)}\langle \ddot k, \phi \rangle =
\frac{d^{2}}{dt^{2}}\int_{S(t)}\langle k, \phi \rangle
- 2t\frac{d}{dt}\int_{S(t)}\langle k, P_{1}(\phi) \rangle +
\int_{S(t)}\langle k, P_{1}(\phi) \rangle + t\int_{S(t)}\langle k,
P_{2}(\phi) \rangle .$$
Now let
$$f = f(t) = \int_{S(t)}\langle k, \phi \rangle .$$
Then the computations above give
\begin{equation} \label{e4.25}
t\ddot f - (n-1)\dot f = t\int_{S(t)}\langle k, P_{2}(\phi) \rangle +
(1 + t^{2})\int_{S(t)}
\langle k, P_{1}(\phi) \rangle
\end{equation}
$$+ \frac{d}{dt}\int_{S(t)}t^{2}\langle k, P_{0}(\phi) \rangle
+ \frac{d}{dt}\int_{S(t)}t\langle k, P_{1}(\phi) \rangle .$$
First observe that
\begin{equation}\label{e4.26}
\int_{S(t)}\langle k, \phi \rangle = o(t^{n}) \Rightarrow
\int_{S(t)}\langle k,
P_{k}(\phi) \rangle = o(t^{n}),
\end{equation}
for all $C^{\infty}$ forms $\phi$ vanishing to infinite order at
$\partial C$.
For if the left side of (4.26) holds, then $\int_{S(t)}\langle k,
\partial^{k}\phi
\rangle = o(t^{n})$, since the hypotheses on $\phi$ are closed under
differentiation.
The coefficients of $P_{k}$ are at least continuous, and it is
elementary to verify
that if $\int_{S(t)}\langle k, \partial^{k}\phi \rangle = o(t^{n})$, then
$\int_{S(t)}\langle k, f \partial^{k}\phi \rangle = o(t^{n})$, for any
function $f$ continuous on $\bar C$. Note that the same result holds
with $p$ in
place of $n$, for any $p < \infty$.
It follows from (4.26) and the initial hypothesis (4.22) that the
first two terms
on the right in (4.25) are $o(t^{n})$ as $t \rightarrow 0$. Since
$t\ddot f -
(n-1)\dot f = t^{n}\frac{d}{dt}(\frac{\dot f}{t^{n-1}})$, this gives
$$\frac{d}{dt}(\frac{\dot f}{t^{n-1}}) = o(1) +
t^{-n}\frac{d}{dt}\int_{S(t)}t\langle k,
P_{1}(\phi) \rangle + t^{-n}\frac{d}{dt}\int_{S(t)}t^{2}\langle k,
P_{0}(\phi) \rangle .$$
Integrating from $0$ to $t$ implies
$$\frac{\dot f}{t^{n-1}} = o(t) + t^{-n+1}\int_{S(t)}\langle k,
P_{1}(\phi) \rangle
+ n\int_{0}^{t}t^{-n}\int_{S(t)}\langle k, P_{1}(\phi) \rangle + c_{1}
= o(t)
+ c_{1} ,$$
where $c_{1}$ is a constant. A further integration using (4.26) again
gives
\begin{equation} \label{e4.27}
f = o(t^{n+1}) + c_{1}'t^{n} + c_{2},
\end{equation}
where $c_{1}' = \frac{c_{1}}{n}$. Once more by (4.22), this implies
that
$$f = o(t^{n+1}).$$
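The integrating-factor identity $t\ddot f - (n-1)\dot f = t^{n}\frac{d}{dt}(\frac{\dot f}{t^{n-1}})$ used above may be verified directly:
$$t^{n}\frac{d}{dt}\Big(\frac{\dot f}{t^{n-1}}\Big) = t^{n}\Big(\frac{\ddot f}{t^{n-1}} - (n-1)\frac{\dot f}{t^{n}}\Big) = t\ddot f - (n-1)\dot f .$$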
Note the special role played by the indicial root $n$ here; if instead
one had only
$k = O(t^{n})$, then the argument above does not give $k = O(t^{n+1})$
weakly.
This first estimate holds in fact for any given trace-free $\phi$
which is $C^{2}$
on $\bar C$ and vanishes to first order on $\partial C$. Working in the
Working in the
same way with the trace equation (4.20) shows that the same result
holds for pure
trace terms. In particular, it follows that
\begin{equation} \label{e4.28}
k = o(t^{n+1}) \ {\rm weakly} .
\end{equation}
One now just repeats this argument inductively, with the improved
estimate
(4.28) in place of (4.22), using (4.26) inductively. Note that each
inductive
step requires higher differentiability of the test function $\phi$ and
its higher
order vanishing at $\partial C$.
{\qed
}
Lemma 4.2 proves that $k = k_{\alpha\beta} = O(t^{\nu})$ weakly, for
any
$\nu < \infty$. As discussed in \S 3, the transition from geodesic
boundary
coordinates to $H$-harmonic coordinates is $C^{2,\alpha}$ and hence
\begin{equation}\label{e4.29}
h = h_{\alpha\beta} = O(t^{\nu}),
\end{equation}
weakly, with the level sets $S(t)$ replaced by $S_{\tau}$. Next,
as in
Remark 3.5 and the proof of Theorem 3.1, the equations (4.11)-(4.13)
satisfy
elliptic estimates, and elliptic regularity in weighted H\"older
spaces, cf.~[26], [20],
shows that the weak decay (4.29) implies strong or pointwise decay,
i.e.~(4.14) holds.
The proof of Theorem 4.1 and thus Theorem 1.2 is now completed as
before in the
$C^{\infty}$ smooth case.
{\qed
}
\begin{remark} \label{r 4.3}
{\rm In [3, Thm.~3.2], a proof of unique continuation of conformally
compact
Einstein metrics was given in dimension 4, using the fact that the
compactified metric $\widetilde g$ in (1.1) satisfies the Bach
equation,
together with the Calder\'on uniqueness theorem. However, the proof
in [3] used harmonic coordinates; as discussed in \S 2, such
coordinates
do not preserve the Cauchy data. The first author is grateful to
Robin Graham for pointing this out. Theorem 1.2 thus corrects this
error, and generalizes the result to any dimension. }
\end{remark}
For the work to follow in \S 5, we note that Theorem 4.1 also holds
for
linearizations of the Einstein equations, i.e.~forms $k$ satisfying
\begin{equation}\label{e4.30}
\frac{d}{ds}(\operatorname{Ric}_{g+sk} + n(g+sk))|_{s=0} = 0.
\end{equation}
Thus, if $k$ satisfies (4.30) and the analog of (4.7), i.e.~$|k| =
o(t^{n})$, then
$k$ is pure gauge in $\Omega$, in that $k = \delta^{*}Z$, where $Z$ is
a vector
field on $\Omega$ with $Z = 0$ on $\partial \Omega$. The proof of this
is exactly
the same as the proof of Theorem 4.1, replacing the finite difference
$k = g_{1} - g_{0}$ by an infinitesimal difference.
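Here $\delta^{*}$ denotes, with the usual convention, the symmetrized covariant derivative, so that
$$(\delta^{*}Z)_{ij} = {\tfrac{1}{2}}(\nabla_{i}Z_{j} + \nabla_{j}Z_{i}) = {\tfrac{1}{2}}({\mathcal L}_{Z}g)_{ij};$$
thus $k = \delta^{*}Z$ with $Z = 0$ on $\partial \Omega$ expresses that $k$ is tangent to the orbit of the group of diffeomorphisms fixing $\partial \Omega$, i.e.~$k$ is pure gauge.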
This has the following consequence:
\begin{corollary}\label{c4.4}
Let $(M, g)$ be a conformally compact Einstein manifold with metric $g$
having a
$C^{3,\alpha}$ geodesic compactification. Suppose the topological
condition
{\rm (1.5)} holds, i.e.~$\pi_{1}(M, \partial M) = 0$.
If $k$ is an infinitesimal Einstein deformation on $M$ as in {\rm
(4.30)},
in divergence-free gauge, i.e.
\begin{equation}\label{e4.31}
\delta k = 0,
\end{equation}
with $k = o(t^{n})$ on approach to $\partial M$, then
$$k = 0 \ \ {\rm on} \ \ M.$$
\end{corollary}
{\bf Proof:} The topological condition (1.5), together with the same
analytic continuation argument at the end of the proof of Theorem 3.1,
implies that $k$ is pure gauge globally on $M$, in that $k = \delta^{*}Z$
on $M$ with $Z = 0$ on $\partial M$. (Recall that (1.5) implies that
$\partial M$ is connected). From (4.31), one then has
$$\delta \delta^{*}Z = 0,$$
on $M$. Pairing this with $Z$ and integrating over $B(t)$, it follows
that
$$\int_{B(t)}|\delta^{*}Z|^{2} = \int_{S(t)}\delta^{*}Z(Z, N),$$
where $N$ is the unit outward normal. Since $|Z|_{g}$ is bounded and
$|\delta^{*}Z|vol(S(t)) = o(1)$, (since $|k| = o(t^{n})$), it follows
that
$$\int_{M}|\delta^{*}Z|^{2} = 0,$$
which gives the result.
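The decay claim in the last step may be spelled out as follows: with respect to the conformally compact metric $g$, the induced metric on $S(t)$ is comparable to $t^{-2}\gamma$, so $vol(S(t)) = O(t^{-n})$. Hence
$$\Big|\int_{S(t)}\delta^{*}Z(Z, N)\Big| \leq C \sup_{S(t)}|\delta^{*}Z| \cdot vol(S(t)) = o(t^{n})\,O(t^{-n}) = o(1),$$
using that $|Z|$ is bounded and $|\delta^{*}Z| = |k| = o(t^{n})$.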
{\qed
}
Of course, analogs of these results also hold for bounded domains,
via the
proof of Theorem 3.1; the verification is left to the reader.
\begin{remark} \label{r4.5}
{\rm The analogue of Proposition 3.7 most likely also holds in the
setting of
conformally compact metrics, for fields $\tau$ whose Euler-Lagrange
equation is a diagonal system of Laplace-type operators to leading
order, as in \eqref{e3.46} or \eqref{e3.47}. The proof of this is
basically the same as that of Proposition 3.7, using the proof of
Theorem 1.2 and with the Mazzeo unique continuation result in place
of that of Calder\'on. However, we will not carry out the details
of the proof here. }
\end{remark}
\section{Isometry Extension and the Constraint Equations.}
\setcounter{equation}{0}
In this section, we prove Theorem 1.3, that continuous groups of
isometries at the boundary extend to isometries in the interior of
complete conformally compact Einstein metrics, and we relate this
issue in general to the constraint equations induced by the
Gauss-Codazzi equations.
We begin with the following elementary consequence of Theorem 4.1.
\begin{proposition} \label{p5.1}
Let $(\Omega, g)$ be a $C^{n}$ polyhomogeneous conformally compact
Einstein metric
on a domain $\Omega \simeq B^{n+1}$ with boundary metric $\gamma$ on
$\partial
\Omega \simeq B^{n}$. Suppose $X$ is a Killing field on $(\partial
\Omega, \gamma)$
and
\begin{equation} \label{e5.1}
{\mathcal L}_{X}g_{(n)} = 0,
\end{equation}
where $g_{(n)}$ is the $n^{\rm th}$ term in the Fefferman-Graham
expansion {\rm (4.3)}
or {\rm (4.4)}.
Then $X$ extends to a Killing field on $(\Omega, g)$.
\end{proposition}
{\bf Proof:}
Extend $X$ to a smooth vector field on $\Omega$ by requiring $[X, N] =
0$, where
$N = \nabla \log t$ and $t$ is the geodesic defining function
determined by $g$ and
$\gamma$. Let $\phi_{s}$ be the corresponding 1-parameter group of
diffeomorphisms
and set $g_{s} = \phi^{*}_{s}g$. Then $t$ is the geodesic defining
function for
$g_{s}$ for any $s$, and the pair $(g, g_{s})$ satisfy the hypotheses
of Theorem 4.1.
Theorem 4.1 then implies that $g_{s}$ is isometric to $g$, i.e.~there
exist diffeomorphisms $\psi_{s}$ of $\Omega$, equal to the identity on
$\partial \Omega$, such that $\psi_{s}^{*}\phi_{s}^{*}g = g$. Thus
$\phi_{s}\circ \psi_{s}$ is a 1-parameter group of isometries of $g$
defined in
$\Omega$, with $Y$ the corresponding Killing field. (In fact, $Y = X$,
since any Killing
field $Y$ tangent to $\partial\Omega$ preserves the geodesics tangent
to $N$, and so
$[Y, N] = 0$. This determines $Y$ uniquely in terms of its value at
$\partial \Omega$.
Since $X$ satisfies the same equation with the same initial value, this
gives the claim).
{\qed
}
We point out that the same result, and proof, also hold in the
case of
Einstein metrics on bounded domains, via Theorem 3.1; the condition
(5.1) is
of course replaced by ${\mathcal L}_{X}A = 0$. For some examples and
discussion
in the bounded domain case, see [1], [2].
Suppose now that $(M, g)$ is a (global) conformally compact Einstein
metric and
there is a domain $\Omega$ as in Proposition 5.1 contained in $M$ on
which (5.1)
holds. Then by analytic continuation as discussed at the end of the
proof of Theorem
3.1, $X$ extends to a local Killing field on all of $M$, i.e.~$X$
extends to a Killing
field on the universal cover $\widetilde M$. In particular, if the
condition \eqref{e1.5} holds, i.e.~$\pi_{1}(M, \partial M) = 0$,
then $X$ extends to a global Killing field on $M$. Again, the same
result holds in the context of bounded domains.
\begin{remark}\label{r5.2}
{\rm A natural analogue of Proposition 5.1 holds for conformal Killing
fields on
$(\partial \Omega, \gamma)$, i.e.~vector fields which preserve the
conformal class
$[\gamma]$ at conformal infinity. Such vector fields satisfy the
conformal Killing
equation
\begin{equation}\label{e5.2}
\hat{\mathcal L}_{X}\gamma = {\mathcal L}_{X}\gamma - \frac{tr(\mathcal
L_{X}\gamma)}{n}
\gamma = 0 .
\end{equation}
Namely, since we are working locally, it is well-known - and easy to
prove - that any
non-vanishing conformal Killing field is Killing with respect to a
conformally related
metric $\widetilde \gamma = \lambda^{2}\gamma$, so that
$${\mathcal L}_{X}\widetilde \gamma = 0 .$$
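Concretely, writing $\widetilde \gamma = \lambda^{2}\gamma$, one computes
$${\mathcal L}_{X}\widetilde \gamma = X(\lambda^{2})\gamma + \lambda^{2}{\mathcal L}_{X}\gamma = \lambda^{2}\big({\mathcal L}_{X}\gamma + 2X(\log \lambda)\gamma \big),$$
so if $X$ satisfies (5.2), i.e.~${\mathcal L}_{X}\gamma = \frac{tr({\mathcal L}_{X}\gamma)}{n}\gamma$, it suffices to choose $\lambda$ (locally, near points where $X \neq 0$) solving $X(\log \lambda) = -\frac{1}{2n}tr({\mathcal L}_{X}\gamma)$ along the flow of $X$.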
Hence, if ${\mathcal L}_{X}\widetilde g_{(n)} = 0$, then Proposition
5.1 implies that
$X$ extends to a Killing field on $\Omega$.
One may express $\widetilde g_{(n)}$ in terms of $\lambda$ and the
lower order
terms $g_{(k)}$, $k < n$ in the Fefferman-Graham expansion (4.3)-(4.4);
however, the
expressions become very complicated for $n$ even and large, cf.~[17].
Thus, while the
equation (5.2) is conformally invariant, the corresponding conformally
invariant
equation for $g_{(n)}$ will be complicated in general. }
\end{remark}
Next we consider the constraint equations (4.5) in detail, i.e.
\begin{equation}\label{e5.3}
\delta \tau_{(n)} = 0 \ \ {\rm and} \ \ tr \,\tau_{(n)} = a_{(n)},
\end{equation}
where $\tau_{(n)} = g_{(n)} + r_{(n)}$; $r_{(n)}$ and $a_{(n)}$ are
explicitly determined
by the boundary metric $\gamma = g_{(0)}$ and its derivatives up to
order $n$. Both vanish
when $n$ is odd.
As will be seen below, the most important issue is the divergence
constraint in
(5.3), which arises from the Gauss-Codazzi equations. To see this, in
the setting
of \S 4, on $S(t) \subset (M, g)$, the Gauss-Codazzi equations are
\begin{equation} \label{e5.4}
\delta(A - Hg) = -\operatorname{Ric}(N, \cdot),
\end{equation}
as 1-forms on $S(t)$; here $N = -t\partial_{t}$ is the unit outward
normal. The same
equation holds on a geodesic compactification $(M, \bar g)$. If $g$ is
Einstein, then
$\operatorname{Ric}(N, \cdot) = \overline{\operatorname{Ric}}(\bar N, \cdot) = 0$; the latter equality
follows from (4.9).
The equation (5.4) holds for all $t$ small, and differentiating $(n-1)$
times with
respect to $t$ gives rise to the divergence constraint in (5.3).
The Gauss-Codazzi equations are not used in the derivation and
properties of
the Fefferman-Graham expansion (4.3)-(4.4) per se. The derivation of
these equations
involves only the tangential $(ij)$ part of the Ricci curvature. The
asymptotic
behavior of the normal $(00)$ part of the Ricci curvature gives rise to
the trace
constraint in (5.3), cf.~(4.19)-(4.20).
Let ${\mathcal T}$ be the space of pairs $(g_{(0)}, \tau_{(n)})$
satisfying (5.3).
If $\tau_{(n)}^{0}$ is any fixed solution of (5.3), then any other
solution with
the same $g_{(0)}$ is of the form $\tau_{(n)} = \tau_{(n)}^{0} + \tau$,
where
$\tau$ is transverse-traceless with respect to $g_{(0)}$. (Of
course if $n$
is odd, one may take $\tau_{(n)}^{0} = 0$). The space ${\mathcal T}$
naturally projects onto $\operatorname{Met}(\partial M)$ with fiber at $\gamma$ an
affine space of symmetric tensors
and is a subset of the product
$\operatorname{Met}(\partial M) \times {\mathbb S}^{2}(\partial M) \simeq
T(\operatorname{Met}(\partial M))$. Let
\begin{equation}\label{e5.5}
\pi: {\mathcal T} \rightarrow \operatorname{Met}(\partial M)
\end{equation}
be the projection onto the base space $\operatorname{Met}(\partial M)$, (the first
factor projection).
By the discussion in \S 4, $(g_{(0)}, \tau_{(n)}) \in {\mathcal T}$
if and only
if the corresponding pair $(g_{(0)}, g_{(n)})$ determine a formal
polyhomogenous solution
to the Einstein equations near conformal infinity, i.e.~formal series
solutions
containing $\log$ terms, as in (4.3)-(4.4). In fact, if $g_{(0)}$ and
$g_{(n)}$ are
real-analytic on $\partial M$, a result of Kichenassamy [24] implies
that the series
(4.3) or (4.4) converges, and gives an Einstein metric $g$, defined in
a neighborhood
of $\partial M$. The metric $g$ is complete near $\partial M$ and has a
conformal
compactification inducing the given data $(g_{(0)}, g_{(n)})$ on
$\partial M$.
Here we recall from the discussion in \S 4 that all coefficients of the
expansion
(4.3) or (4.4) are determined by $g_{(0)}$ and $g_{(n)}$.
In this regard, consider the following:
{\bf Problem.} Is $\pi: {\mathcal T} \rightarrow \operatorname{Met}(\partial M)$ an
open map?
Thus, given any $(g_{(0)}, \tau_{(n)}) \in {\mathcal T}$ and any
boundary metric
$\widetilde g_{(0)}$ sufficiently close to $g_{(0)}$, does there exist
$\widetilde \tau_{(n)}$ close to $\tau_{(n)}$ such that $(\widetilde
g_{(0)},
\widetilde \tau_{(n)}) \in {\mathcal T}$?
Although $\pi$ is obviously globally surjective, the problem above is
whether
$\pi$ is locally surjective. For example, a simple fold map $x
\rightarrow x^{3}-x$
is not locally surjective near $\pm\sqrt{3}/3$. Observe that the trace
condition in
(5.3) imposes no constraint on $g_{(0)}$; given any $g_{(0)}$, it is
easy to find
$g_{(n)}$ such that $tr_{g_{(0)}}(g_{(n)}+r_{(n)}) = a_{(n)}$; this
equation can readily
be solved algebraically for many $g_{(n)}$.
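Returning to the fold map mentioned above, the failure of local surjectivity there can be made explicit: for $f(x) = x^{3} - x$ one has
$$f'(x) = 3x^{2} - 1 = 0 \ \Leftrightarrow \ x = \pm \frac{\sqrt{3}}{3},$$
and, for instance, $x = -\frac{\sqrt{3}}{3}$ is a local maximum with critical value $f(-\frac{\sqrt{3}}{3}) = \frac{2\sqrt{3}}{9}$, so values slightly larger than $\frac{2\sqrt{3}}{9}$ are not attained by points near $-\frac{\sqrt{3}}{3}$.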
By the inverse function theorem, it suffices, (and is probably also
necessary), to
examine the problem above at the linearized level. However the
linearization of the
divergence condition in (5.3) gives a non-trivial constraint on the
variation $h_{(0)}$
of $g_{(0)}$. Namely, the linearization in this case gives
\begin{equation} \label{e5.6}
\delta'(\tau_{(n)}) + \delta(\tau_{(n)})' = 0 ,
\end{equation}
where $\delta ' = \frac{d}{du}\delta_{g_{(0)}+uh_{(0)}}$, and similarly
for
$(\tau_{(n)})'$.
Whether (5.6) is solvable for any $h_{(0)} \in S^{2}(\partial M)$
depends on the data $g_{(0)}$ and $g_{(n)}$. For example, it is
trivially solvable when
$\tau_{(n)} = 0$. For compact $\partial M$, one has
\begin{equation} \label{e5.7}
\Omega^{1}(\partial M) = Im \delta \oplus Ker \delta^{*},
\end{equation}
where $\Omega^{1}$ is the space of 1-forms, so that solvability in
general requires that
\begin{equation}\label{e5.8}
\delta'(\tau_{(n)}) \in Im \delta = (Ker \delta^{*})^{\perp}.
\end{equation}
Of course $Ker \delta^{*}$ is exactly the space of Killing fields on
$(\partial M, \gamma)$,
and so this space serves as a potential obstruction space.
Clearly then $\pi$ is locally surjective when $(\partial M, g_{(0)})$
has no Killing fields.
On the other hand, it is easy to construct examples where $(\partial M,
\gamma)$ does
have Killing fields and $\pi$ is not locally surjective:
\begin{example} \label{ex5.3}
{\rm Let $(\partial M, g_{(0)})$ be the flat metric on the $n$-torus
$T^{n}$, $n \geq 3$,
and define $g_{(n)} = -(n-2)(d\theta^{2})^{2} +(d\theta^{3})^{2} +
\cdots + (d\theta^{n})^{2}$.
Then $g_{(n)}$ is transverse-traceless with respect to $g_{(0)}$. Let
$f = f(\theta^{1})$.
Then $\hat g_{(n)} = fg_{(n)}$ is still transverse-traceless with
respect to $g_{(0)}$, so that
$(g_{(0)}, \hat g_{(n)}) \in {\mathcal T}$, at least for $n$ odd.
It is then not difficult to see via a direct calculation, or more
easily via
Proposition 5.4 below, that (5.8) does not hold, so that $\pi$ is not
locally surjective. }
\end{example}
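The transverse-traceless property in Example 5.3 may be checked directly. Since $g_{(0)}$ is flat and all coefficients of $g_{(n)}$ are constant,
$$tr_{g_{(0)}}(f g_{(n)}) = f\big(-(n-2) + (n-2)\big) = 0, \qquad \delta(f g_{(n)})_{j} = -\sum_{i}\partial_{i}\big(f (g_{(n)})_{ij}\big) = 0,$$
the divergence vanishing because $(g_{(n)})_{ij} = 0$ whenever $i = 1$ or $j = 1$, while $f$ depends only on $\theta^{1}$.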
Next we relate these two issues, i.e.~the general solvability of the
divergence
constraint (5.6) and the extension of Killing fields on the boundary
into the bulk.
The following result in fact holds for any $\phi \in S^{2}(\partial M)$
with $\delta \phi = 0$ on $\partial M$ in place of $\tau_{(n)}$.
\begin{proposition} \label{p5.4}
If $X$ is a Killing field on $(\partial M, \gamma)$, with $\partial M$
compact, then
\begin{equation}\label{e5.9}
\int_{\partial M}\langle {\mathcal L}_{X}\tau_{(n)}, h_{(0)} \rangle
dV=
-2\int_{\partial M}\langle \delta'(\tau_{(n)}), X \rangle dV,
\end{equation}
where $\delta' = \frac{d}{ds}\delta_{\gamma + sh_{(0)}}$.
In particular, {\rm (5.1)} holds for all Killing fields on $(\partial
M, \gamma)$ if
and only if the linearized divergence constraint vanishes, i.e.~{\rm
(5.6)} holds for all $h$.
\end{proposition}
{\bf Proof:}
Since $X$ is a Killing field on $(\partial M, \gamma)$, one has
\begin{equation} \label{e5.10}
\int_{\partial M}\langle {\mathcal L}_{X}\tau, h \rangle dV_{\gamma} =
-
\int_{\partial M}\langle \tau, {\mathcal L}_{X}h \rangle dV_{\gamma}.
\end{equation}
Setting $\gamma_{s} = \gamma + sh$, the divergence theorem gives
\begin{equation}\label{e5.11}
0 = \int_{\partial M}\delta_{\gamma_{s}}(\tau(X))dV_{\gamma_{s}} =
\int_{\partial M}\langle \delta_{\gamma_{s}}\tau, X \rangle
dV_{\gamma_{s}}
- {\tfrac{1}{2}}\int_{\partial M}\langle \tau, {\mathcal
L}_{X}\gamma_{s}
\rangle dV_{\gamma_{s}},
\end{equation}
where the second equality is a simple computation from the definitions;
the
inner products are with respect to $\gamma_{s}$. Taking the derivative
with
respect to $s$ at $s = 0$, and using the facts that $X$ is Killing and
$\delta(\tau) = 0$, it follows that
\begin{equation}\label{e5.12}
\int_{\partial M}\langle \delta' \tau, X \rangle dV
- {\tfrac{1}{2}}\int_{\partial M}\langle \tau, {\mathcal L}_{X}h
\rangle dV = 0.
\end{equation}
Combining this with (5.10) then gives (5.9); note that
${\mathcal L}_{X}r_{(n)} = 0$ in this case, since $r_{(n)}$
is determined by the boundary metric.
To prove the last statement, by (5.9), (5.1) holds if and only if
$\int_{\partial M}\langle \delta'(\tau_{(n)}), X \rangle = 0$, for all
variations $h$. If (5.6) holds, then $\delta'(\tau_{(n)}) = \delta
h_{(n)}'$,
for some $h_{(n)}'$ and so $\int_{\partial M}\langle
\delta'(\tau_{(n)}), X \rangle
= \int_{\partial M}\langle h_{(n)}', \delta^{*}X \rangle = 0$, since
$X$ is
Killing. The converse of this argument holds equally well.
{\qed
}
Proposition 5.4 implies that in general, Killing fields on $\partial
M$ do not
extend to Killing fields in a neighborhood of $\partial M$,
(cf.~Example 5.3).
(Exactly the same result and proof hold in the bounded domain case,
when the term
$\tau_{(n)}$ is replaced by $A - Hg$).
Now as noted above, whether isometry extension holds or not depends
on the
term $\tau_{(n)} = g_{(n)}+r_{(n)}$, or more precisely on the relation
of the
boundary metric $g_{(0)}$ with $\tau_{(n)}$. For Einstein metrics which
are
globally conformally compact, the term $\tau_{(n)}$ is determined, up
to a finite
dimensional moduli space, by the boundary metric $g_{(0)}$; (this is
discussed
further below). Thus, whether isometry extension holds or not is quite
a delicate
issue; if so, it must depend crucially on the global structure of $(M,
g)$.
Before beginning the proof of Theorem 1.3, we first need to discuss
some
background material from [5]-[6].
Let $E_{AH}$ be the space of conformally compact, or equivalently
asymptotically
hyperbolic Einstein metrics on $M$ which have a $C^{\infty}$
polyhomogeneous
conformal compactification with respect to a fixed smooth defining
function $\rho$,
as in (1.1). In [5], it is shown that $E_{AH}$ is a smooth, infinite
dimensional
manifold. One has a natural smooth boundary map
\begin{equation}\label{e5.13}
\Pi: E_{AH} \rightarrow \operatorname{Met}(\partial M),
\end{equation}
sending $g$ to its boundary metric $\gamma$.
The moduli space ${\mathcal E}_{AH}$ is the quotient
$E_{AH}/{\mathcal D}_{1}$, where ${\mathcal D}_{1}$ is the
group of smooth (polyhomogeneous) diffeomorphisms $\phi$ of $M$
equal to the identity on $\partial M$. Thus, $g' \sim g$ if
$g' = \phi^{*}g$, with $\phi \in {\mathcal D}_{1}$. Changing the
defining function $\rho$ in (1.1) changes the boundary metric conformally.
Also, if $\phi \in {\mathcal D}_{1}$ then $\rho \circ \phi$ is another
defining function, and all defining functions are of this form near
$\partial M$. Hence if ${\mathcal C}$ denotes the space of smooth
conformal classes of metrics on $\partial M$, then the boundary map
(5.13) descends to a smooth map
\begin{equation}\label{e5.14}
\Pi: {\mathcal E}_{AH} \rightarrow {\mathcal C}
\end{equation}
independent of the defining function $\rho$. The boundary map $\Pi$
in (5.14) is Fredholm, of Fredholm index 0.
The linearization of the Einstein operator $\operatorname{Ric}_{g} + ng$ at an
Einstein metric
$g$ is given by
\begin{equation}\label{e5.15}
\hat L = (\operatorname{Ric}_{g} + ng)' = \tfrac{1}{2}D^{*}D - R - \delta^{*}\beta,
\end{equation}
acting on the space of symmetric 2-tensors $S^{2}(M)$ on $M$, cf.~[10].
Here, (as
in \S 3), $\beta$ is the Bianchi operator, $\beta(h) = \delta h +
\frac{1}{2}d \, tr \, h$.
Thus, $h \in T_{g}E_{AH}$ if and only if
$$\hat L(h) = 0.$$
The operator $\hat L$ is not elliptic, due to the $\delta^{*}\beta$
term. As is
well-known, this arises from the diffeomorphism group, and to obtain an
elliptic
linearization, one needs a gauge choice to break the diffeomorphism
invariance
of the Einstein equations. We will use a slight modification of the
Bianchi
gauge introduced in [11].
To describe this, given any fixed $g_{0} \in E_{AH}$ with geodesic
defining
function $t$ and boundary metric $\gamma_{0}$, let $\gamma$ be a
boundary metric
near $\gamma_{0}$ and define the hyperbolic cone metric $g_{\gamma}$ on
$\gamma$
by setting
$$g_{\gamma} = t^{-2}(dt^{2} + \gamma);$$
$g_{\gamma}$ is defined in a neighborhood of $\partial M$. Next, set
\begin{equation}\label{e5.16}
g(\gamma) = g_{0} + \eta(g_{\gamma} - g_{\gamma_{0}}),
\end{equation}
where $\eta$ is a non-negative cutoff function supported near $\partial
M$ with
$\eta = 1$ in a small neighborhood of $\partial M$. Any conformally
compact metric
$g$ near $g_{0}$, with boundary metric $\gamma$ then has the form
\begin{equation}\label{e5.17}
g = g(\gamma) + h,
\end{equation}
where $|h|_{g_{0}} = O(t^{2})$; equivalently $\bar h = t^{2}h$
satisfies $\bar h_{ij}
= O(t^{2})$ in any smooth coordinate chart near $\partial M$. The space
of such
symmetric bilinear forms $h$ is denoted by ${\mathbb S}_{2}(M)$ and the
space of
metrics $g$ of the form (5.17) is denoted by $\operatorname{Met}_{AH}$.
The Bianchi-gauged Einstein operator, (with background metric
$g_{0}$), is defined
by
\begin{equation}\label{e5.18}
\Phi_{g_{0}}: \operatorname{Met}_{AH} \rightarrow {\mathbb S}_{2}(M)
\end{equation}
$$\Phi_{g_{0}}(g) = \Phi (g(\gamma) + h) = \operatorname{Ric}_{g} + ng +
(\delta_{g})^{*}\beta_{g(\gamma)}(g),$$
where $\beta_{g(\gamma)}$ is the Bianchi operator with respect to
$g(\gamma)$. By
[11, Lemma I.1.4],
\begin{equation}\label{e5.19}
Z_{AH} \equiv \Phi^{-1}(0)\cap\{\operatorname{Ric} < 0\} \subset E_{AH},
\end{equation}
where $\{\operatorname{Ric} < 0\}$ is the open set of metrics with negative Ricci
curvature. In fact, if $g \in E_{AH}$ is close to $g_{0}$, and $\Phi (g)
= 0$, then $\beta_{g(\gamma)}(g) = 0$ and moreover
\begin{equation}\label{e5.20}
\delta_{g(\gamma)}(g) = 0 \ \ {\rm and} \ \ tr_{g(\gamma)}(g) = 0.
\end{equation}
The space $Z_{AH}$ is a local slice for the action of ${\mathcal D}_{1}$
on $E_{AH}$: for any $g\in E_{AH}$ near $g_{0}$, there exists a
diffeomorphism $\phi \in {\mathcal D}_{1}$ such that $\phi^{*}g\in
Z_{AH}$, cf.~again [11].
The linearization of $\Phi$ at $g_{0} \in E_{AH}$ with respect to
the $2^{\rm nd}$
variable $h$ has the simple form
\begin{equation}\label{e5.21}
(D_{2}\Phi)_{g_{0}}(\dot h) = \tfrac{1}{2}D^{*}D \dot h -
R_{g_{0}}(\dot h),
\end{equation}
while the variation of $\Phi$ at $g_{0}$ with respect to the $1^{\rm
st}$ variable
$g(\gamma)$ has the form
\begin{equation}\label{e5.22}
(D_{1}\Phi)_{g_{0}}(\dot g(\gamma)) = (D_{2}\Phi)_{g_{0}}(\dot
g(\gamma))
- \delta_{g_{0}}^{*}\beta_{g_{0}}(\dot g(\gamma)) = (\operatorname{Ric}_{g} +
ng)'(\dot g(\gamma)),
\end{equation}
as in (5.15). Clearly $\dot g(\gamma) = \eta t^{-2}\dot \gamma$.
The kernel of the elliptic self-adjoint linear operator
\begin{equation}\label{e5.23}
L = \tfrac{1}{2}D^{*}D - R
\end{equation}
acting on the $2^{\rm nd}$ variable $h$, represents the space of
non-trivial
infinitesimal Einstein deformations vanishing on $\partial M$. Let $K$
denote the
$L^{2}$ kernel of $L$. This is the same as the kernel of $L$ on
${\mathbb S}_{2}(M)$,
cf.~[11], [26]. An Einstein metric $g_{0} \in E_{AH}$ is called {\it
non-degenerate}
if
\begin{equation} \label{e5.24}
K = 0.
\end{equation}
For $g_{0} \in {\mathcal E}_{AH}$ the kernel $K = K_{g_{0}}$ equals
the kernel of the linear map $D\Pi: T_{g_{0}}{\mathcal E}_{AH} \rightarrow
T_{\Pi(g_{0})}{\mathcal C}$. Hence, $g_{0}$ is non-degenerate if and
only if $g_{0}$ is a regular point of the boundary map $\Pi$ in which
case $\Pi$ is a local diffeomorphism near $g_{0}$. From now on, we
denote $g_{0}$ by $g$.
By the regularity result of Chru\'sciel et al.~[16], any $\kappa \in K$
has a $C^{\infty}$ smooth polyhomogeneous expansion, analogous to the
Fefferman-Graham expansion (4.3)-(4.4), with leading order terms satisfying
\begin{equation}\label{e5.25}
\kappa = O(t^{n}), \ \ \kappa(N,Y) = O(t^{n+1}), \ \ \kappa (N,N) =
O(t^{n+1+\mu}),
\end{equation}
where $N = -t\partial_{t}$ is the unit outward normal vector to the
$t$-level set $S(t)$, $Y$ is any $g$-unit vector tangent to $S(t)$
and $\mu > 0$; cf.~also [28, Prop.~5]. Here $\kappa = O(t^{n})$ means
$|\kappa|_{g} = O(t^{n})$. Also by an argument similar to the one
leading to (5.20), any $\kappa \in K$ is
transverse-traceless, i.e.
\begin{equation}\label{e5.26}
\delta \kappa = tr \kappa = 0.
\end{equation}
Given this background, we are now ready to begin the proof of Theorem
1.3.
{\bf Proof of Theorem 1.3.}
Let $\bar g = t^{2}g$ be a geodesic compactification of $g$ with
boundary metric $\gamma$. By the boundary regularity result of [16],
$\bar g$ is $C^{\infty}$ polyhomogeneous on $\bar M$. It suffices to
prove
Theorem 1.3 for arbitrary 1-parameter subgroups of the isometry group
of
$(\partial M, \gamma)$. Thus, let $\phi_{s}$ be a local 1-parameter
group
of isometries of $\gamma$ with $\phi_{0} = id$, so that
$$\phi_{s}^{*}\gamma = \gamma .$$
The diffeomorphisms $\phi_{s}$ of $\partial M$ may be extended to
diffeomorphisms of $M$, so that the curve
\begin{equation} \label{e5.27}
g_{s} = \phi_{s}^{*}g
\end{equation}
is a smooth curve in $E_{AH}$. By construction then, $\Pi[g_{s}] =
[\gamma]$,
so that $[h] = [\frac{dg_{s}}{ds}] \in Ker D\Pi$, for $\Pi$ as in
(5.14). One
may then alter the diffeomorphisms $\phi_{s}$ by composition with
diffeomorphisms
in ${\mathcal D}_{1}$ if necessary, so that $h = \frac{dg_{s}}{ds} \in
K_{g}$,
where $K_{g}$ is the kernel in (5.24). Denoting $h = \kappa$, it
follows that
\begin{equation}\label{e5.28}
\kappa = \delta^{*}X,
\end{equation}
where $X = d\phi_{s}/ds$ is smooth up to $\bar M$.
Thus it suffices to prove that $\delta^{*}X = 0$, since this will
imply that
$g_{s} = g$, (when $g_{s}$ is modified by the action of ${\mathcal
D}_{1}$). If
$K_{g} = 0$, i.e.~if $g$ is a regular point of the boundary map $\Pi$,
then
this is now obvious, (from the above), and proves the result in this
special
case; (the proof in this case requires only that $(M, g)$ be
$C^{2,\alpha}$
conformally compact).
We give two different, (although related), proofs of Theorem 1.3, one
conceptual and
one more computational. The first, conceptual, proof involves an
understanding of the
cokernel of the map $D\Pi_{g}$ in $\operatorname{Met}(\partial M)$, and so one first
needs to give an
explicit description of this cokernel. To begin, recall the derivative
\begin{equation} \label{e5.29}
(D\Phi)_{g}: T_{g}\operatorname{Met}_{AH}(M) \rightarrow T_{\Phi(g)}{\mathbb
S}_{2}(M).
\end{equation}
Via (5.17), one has $T_{g}\operatorname{Met}_{AH} = T_{\gamma}\operatorname{Met}(\partial M)\oplus
T_{h}{\mathbb S}_{2}(M)$
and the derivative with respect to the second factor is given by
(5.21). If $K = 0$, then
$D_{2}\Phi$ is surjective at $g$, (since $D_{2}\Phi$ has index 0, and we
recall that the kernel and cokernel here are equal to their $L^2$ counterparts),
and hence so is $D\Pi$.
In general, to understand $Coker D\Pi$, we show that $D\Phi$ is always
surjective; this
follows from the claim that for any non-zero $\kappa\in K$ there is a
tangent vector
$\dot g(\gamma) \in T_{\gamma}\operatorname{Met}(\partial M) \subset T_{g}\operatorname{Met}_{AH}$
such that
\begin{equation} \label{e5.30}
\int_{M}\langle (D_{1}\Phi)_{g}(\dot g(\gamma)), \kappa \rangle dV_{g}
\neq 0.
\end{equation}
Thus, the boundary variations $\dot g(\gamma)$ satisfying (5.30) for
some $\kappa$
correspond to the cokernel. To prove (5.30), let $B(t) = \{x \in M:
t(x) \geq t\}$
and $S(t) = \partial B(t) = \{x \in M: t(x) = t\}$. Apply the
divergence theorem to
the integral (5.30) over $B(t)$; twice for the Laplace
term in (5.22) and once for the $\delta^{*}$ term in (5.22). Since
$$\kappa \in Ker L \ {\rm and} \ \delta \kappa = 0,$$
it follows that the integral (5.30) reduces to an integral over the
boundary,
and gives
\begin{equation} \label{e5.31}
\int_{B(t)}\langle (D_{1}\Phi)_{g}(\dot g(\gamma)), \kappa \rangle
dV_{g} =
{\tfrac{1}{2}}\int_{S(t)}(\langle \dot g(\gamma), \nabla_{N}\kappa
\rangle -
\langle \nabla_{N}\dot g(\gamma), \kappa \rangle -
2\langle \beta(\dot g(\gamma)), \kappa(N) \rangle )dV_{S(t)}.
\end{equation}
Of course $dV_{S(t)} = t^{-n}dV_{\gamma} + O(t^{-(n-1)})$. By (5.25)
the last
term in (5.31) is then $O(t)$ and so may be ignored. Let
\begin{equation}\label{e5.32}
\widetilde \kappa = t^{-n}\kappa,
\end{equation}
so that by (5.25), $|\widetilde \kappa|_{g}|_{S(t)} \leq C$. Setting
$\hat \kappa = t^{2}\widetilde \kappa$, one has $|\hat \kappa|_{\bar g} =
|\widetilde \kappa|_{g}$, so that $|\hat \kappa|_{\bar g}$ is also
uniformly bounded on $S(t)$. From the
definition (5.16), a
straightforward computation shows that near $\partial M$,
$$\dot g(\gamma) = t^{-2}\dot \gamma, \ \ {\rm and} \ \ \nabla_{N}\dot
g(\gamma) = 0.$$
Note that $|\dot g(\gamma)|_{g} \sim 1$ as $t \rightarrow 0$. Hence,
$$(\langle \dot g(\gamma), \nabla_{N}\kappa \rangle_{g} -
\langle \nabla_{N}\dot g(\gamma), \kappa \rangle_{g})dV_{S(t)} =
t^{2}\langle \nabla_{N}\kappa , \dot \gamma \rangle_{\gamma}dV_{S(t)}
+ O(t)$$
$$= \langle \nabla_{N}\hat \kappa - (n-2)\hat \kappa, \dot \gamma
\rangle_{\gamma}
dV_{\gamma} + O(t).$$
Thus,
\begin{equation}\label{e5.33}
\int_{B(t)}\langle (D_{1}\Phi)_{g}(\dot g(\gamma)), \kappa \rangle
dV_{g} =
{\tfrac{1}{2}}\int_{S(t)}\langle \nabla_{N}\hat \kappa - (n-2)\hat
\kappa,
\dot \gamma \rangle_{\gamma}dV_{\gamma} + O(t).
\end{equation}
Now suppose, (contrary to (5.30)),
\begin{equation}\label{e5.34}
\nabla_{N}\hat \kappa - (n-2)\hat \kappa = O(t),
\end{equation}
as forms on $(S(t), \bar g)$; note however that $\nabla$ is taken with
respect to $g$ in (5.34). It follows from the smooth polyhomogeneity of
$\hat \kappa$ near $\partial M$ and elementary integration that (5.34)
gives
\begin{equation}\label{e5.35}
\kappa = o(t^{n}).
\end{equation}
The form $\kappa$ is an infinitesimal Einstein deformation, divergence-free
by (5.26). Thus Corollary 4.4 and (5.35), together with the assumption in
Theorem 1.3 that $\pi_{1}(M, \partial M) = 0$, imply that
$$\kappa = 0 \ \ {\rm on} \ \ M,$$
giving a contradiction. This proves the relation (5.30).
The proof above shows that the form
\begin{equation}\label{e5.36}
\dot g(\gamma) = \lim_{t\rightarrow 0}\hat \kappa|_{S(t)},
\end{equation}
on $\partial M$ satisfies (5.30). The limit here exists by the
smooth polyhomogeneity of $\kappa$ at $\partial M$. Thus, the
space
\begin{equation}\label{e5.37}
\hat K = \{\hat \kappa = \lim_{t\rightarrow 0}t^{-(n-2)}\kappa|_{S(t)}:
\kappa \in K\},
\end{equation}
is naturally identified with the cokernel of $D\Pi_{g}$ in
$T_{\gamma}\operatorname{Met}(\partial M)$.
Note that $\dim \hat K = \dim K$ and also that the estimates (5.25)
show that
$\hat \kappa = \hat \kappa^{T}$ on $\partial M$. This means that
infinitesimal deformations of the boundary metric $\gamma$ in the
direction $\hat \kappa$, $\hat \kappa \in \hat K$, are not realized as
$\frac{d}{ds}\Pi(g_{s})|_{s=0}$, where $g_{s}$ is a curve in $E_{AH}$
through $g$, i.e.~a curve of {\it global} Einstein metrics on $M$.
On the other hand, suppose that $\kappa = \delta^{*}X$, i.e.~(5.28) holds
for some $\kappa \in K$ and vector field $X$ on $M$ (necessarily) inducing
a Killing field on $(\partial M, \gamma)$. Consider the local curve of metrics
\begin{equation}\label{e5.38}
g_{s} = g + s\delta^{*}(\frac{X}{t^{n}})
\end{equation}
defined in a neighborhood of $\partial M$. The curve $g_{s}$ is Einstein
to $1^{\rm st}$ order in $s$ at $s = 0$. The induced variation of the
boundary metric on $S(t)$ is, by construction, $(\widetilde \kappa)^{T}|_{S(t)}
\sim \widetilde \kappa|_{S(t)}$, which, by rescaling, compactifies to
$\hat \kappa$ at $\partial M$; here $\widetilde \kappa$ is given as in
(5.32). Now note that the linearized divergence constraint (5.6) or (5.8)
only involves the behavior at $\partial M$, or equivalently, the limiting
behavior on $(S(t), \gamma_{t})$, $\gamma_{t} = \bar g|_{S(t)}$, as
$t \rightarrow 0$. This basically shows that the constraint (5.6) may
be solved in the direction $h_{(0)} = \hat \kappa$; a complete justification
of this is given in the more computational proof to follow. Also, a simple
calculation, cf.~(5.42) below, gives ${\mathcal L}_{X}\tau_{(n)} =
{\mathcal L}_{X}g_{(n)} = 2\hat \kappa$. (The first statement follows
since the term $r_{(n)}$ is intrinsic to the boundary metric $\gamma$,
so that ${\mathcal L}_{X}r_{(n)} = 0$). Hence, it follows from Proposition
5.4 that
\begin{equation}\label{e5.39}
2\int_{\partial M}|\hat \kappa|^{2}dV_{\gamma} = \int_{\partial M}
\langle {\mathcal L}_{X}\tau_{(n)}, \hat \kappa \rangle dV_{\gamma} =
2\int_{\partial M}\langle \delta(\tau_{(n)}'), X \rangle dV_{\gamma} =
2\int_{\partial M}\langle \tau_{(n)}', \delta^{*}X \rangle dV_{\gamma} = 0,
\end{equation}
and thus ${\mathcal L}_{X}\tau_{(n)} = 0$ on $(\partial M, \gamma)$.
Corollary 4.4 or Proposition 5.1 and the assumption $\pi_{1}(M, \partial M)
= 0$ then imply that $\kappa = 0$ on $M$, so that $X$ is a Killing field on
$M$. This completes the first proof of Theorem 1.3. {\qed
}
From the converse part of Proposition 5.4, one also obtains:
\begin{corollary}\label{c5.5}
Let $g$ be a conformally compact Einstein metric on a compact manifold
$M$ with $C^{\infty}$ boundary metric $\gamma$. Then the linearized
divergence constraint equation {\rm (5.6)} is always solvable on
$(\partial M, \gamma)$, i.e.~the map $\pi$ in {\rm (5.5)} is locally
surjective at $(\gamma, \tau_{(n)})$.
\end{corollary}
It is useful and of interest to give another, direct computational
proof of Theorem 1.3, without using the identification (5.37) as the
cokernel of $D\Pi$. The basic idea is to compute as in Proposition 5.4
on $(S(t), g_{t})$, with $A - Hg_{t}$ in place of $\tau_{(n)}$, and
then pass to the limit on $\partial M$. Throughout the proof, we
assume (5.28) holds.
Before starting the proof per se, we note that the estimates (5.25)
and (5.28) imply that $X$ is tangential, i.e.~tangential to $(S(t), g)$,
to high order, in that
\begin{equation}\label{e5.40}
\langle X, N \rangle = O(t^{n+1+\mu}).
\end{equation}
To see this, one has $(\delta^{*}X)(N, N) = \langle \nabla_{N}X,
N \rangle = N \langle X, N \rangle$. Thus (5.40) follows from (5.25)
and the claim that $\langle X, N \rangle = 0$ on $\partial M$. To
prove the latter, consider the compactified metric $\bar g = t^{2}g$.
One has ${\mathcal L}_{X}\bar g = {\mathcal L}_{X}(t^{2}g) =
2\frac{X(t)}{t}\bar g + O(t^{n})$. Thus for the induced metric
$\gamma$ on $\partial M$, ${\mathcal L}_{X}\gamma =
2\lambda \gamma$, where $\lambda = \lim_{t\rightarrow 0}\frac{X(t)}{t}$.
Since $X$ is a Killing field on $(\partial M, \gamma)$, this gives
$\lambda = 0$, which is equivalent to the statement that
$\lim_{t\rightarrow 0}\langle X, N \rangle_{g} = 0$. Note also
that since $X$ is smooth up to $\partial M$, $|X|_{g} = O(t^{-1})$.
We claim also that
\begin{equation}\label{e5.41}
[X, N] = O(t^{n+1}),
\end{equation}
in norm. First, $\langle [X, N], N\rangle = \langle \nabla_{X}N -
\nabla_{N}X, N \rangle
= - (\delta^{*}X)(N,N) = O(t^{n+1+\mu})$. On the other hand, on
tangential $g$-unit vectors
$Y$, $\langle [X, N], Y\rangle = \langle \nabla_{X}N - \nabla_{N}X, Y
\rangle
\sim \langle \nabla_{X}N, Y \rangle - 2(\delta^{*}X)(N,Y) + \langle
\nabla_{Y}X, N \rangle
\sim - 2(\delta^{*}X)(N,Y) = O(t^{n+1})$, as claimed. Here $\sim$
denotes equality modulo
terms of order $o(t^{n})$. We have also used the fact that $\langle
\nabla_{X}N, Y \rangle
+ \langle \nabla_{Y}X, N \rangle \sim X\langle N, Y \rangle = 0$.
Now, to begin the proof itself, (assuming (5.28)), as above write
$$g_{s} = g + s\kappa + O(s^{2}) = g + s\delta^{*}X + O(s^{2}).$$
If $t_{s}$ is the geodesic defining function for $g_{s}$, (with
boundary metric
$\gamma$), then the Fefferman-Graham expansion gives $\bar g_{s} =
dt_{s}^{2} +
(\gamma + t_{s}^{2}g_{(2),s} + \dots + t_{s}^{n}g_{(n),s}) +
O(t^{n+1})$. The
estimate (5.40) implies that $t_{s} = t + sO(t^{n+2+\alpha}) +
O(s^{2})$,
so that modulo lower order terms, we may view $t_{s} \sim t$. Taking
the derivative
of the FG expansion with respect to $s$ at $s = 0$, and using the fact
that
$X$ is Killing on $(\partial M, \gamma)$, together with the fact that
the lower
order terms $g_{(k)}$, $k < n$, are determined by $\gamma$, it follows
that, for
$\hat \kappa$ as in (5.37),
\begin{equation}\label{e5.42}
\hat \kappa = {\tfrac{1}{2}}{\mathcal L}_{X}g_{(n)},
\end{equation}
at $\partial M$. Here both $\hat \kappa$ and ${\mathcal L}_{X}g_{(n)}$
are viewed
as forms on $(\partial M, \gamma)$.
Next, we claim that on $(S(t), g_{t})$,
\begin{equation}\label{e5.43}
{\mathcal L}_{X}A = -{\tfrac{n-2}{2}}t^{n-2}{\mathcal L}_{X}g_{(n)} +
O(t^{n-1}).
\end{equation}
To see this, one has $A = \frac{1}{2}{\mathcal L}_{N}g =
-\frac{1}{2}{\mathcal L}_{t\partial_{t}}g = -\frac{1}{2}{\mathcal
L}_{t\partial_{t}}(t^{-2}g_{t})$.
But ${\mathcal L}_{t\partial_{t}}(t^{-2}g_{t}) = \sum {\mathcal
L}_{t\partial_{t}}(t^{-2+k}g_{(k)}) =
\sum (k-2)t^{k-2}g_{(k)}$. The same reasoning as before then gives
(5.43).
Given these results, we now compute
$$\int_{S(t)}\langle {\mathcal L}_{X}(A - Hg_{t}), \widetilde \kappa
\rangle_{g_{t}} dV_{S(t)};$$
compare with the left side of (5.9). First, by (5.43),
$$\int_{S(t)}\langle {\mathcal L}_{X}A , \widetilde \kappa
\rangle_{g_{t}} dV_{S(t)} =
-{\tfrac{n-2}{2}}\int_{S(t)}\langle {\mathcal L}_{X}g_{(n)}, \hat
\kappa \rangle_{\gamma}
dV_{\gamma} + O(t).$$
Next, one has ${\mathcal L}_{X}(Hg_{t}) = X(H)g_{t} + H{\mathcal
L}_{X}g_{t}$. For the
first term, $X(H) = tr {\mathcal L}_{X}A + O(t^{n}) =
-\frac{n-2}{2}t^{n-2}tr {\mathcal L}_{X}g_{(n)}
+ O(t^{n})$. Since $tr g_{(n)}$ is intrinsic to $\gamma$ and $X$ is
Killing on
$(\partial M, \gamma)$, it follows that $X(H) = O(t^{n-1})$. Also,
$\langle g_{t},
\widetilde \kappa \rangle = tr^{T}\widetilde \kappa$, where $tr^{T}$ is
the tangential
trace. By (5.25) and the fact that $\kappa$ is trace-free, $\langle
g_{t}, \widetilde \kappa
\rangle = O(t^{1+\alpha})$. Hence $X(H)\langle g_{t}, \widetilde \kappa
\rangle dV_{S(t)} =
O(t^{\alpha})$. Similarly, from (5.41) one computes ${\mathcal
L}_{X}g_{t} = {\mathcal L}_{X}g
+ O(t^{n+1}) = 2t^{n}\widetilde \kappa + O(t^{n+1})$. Since $H \sim n$,
using (5.42) this gives
$$-\int_{S(t)}\langle {\mathcal L}_{X}(Hg_{t}), \widetilde \kappa
\rangle dV_{S(t)} =
-n\int_{S(t)}\langle {\mathcal L}_{X}g_{(n)}, \hat \kappa
\rangle_{\gamma}
dV_{\gamma} + O(t^{\alpha}).$$
Combining these computations then gives
\begin{equation}\label{e5.44}
\int_{S(t)}\langle {\mathcal L}_{X}(A - Hg_{t}), \widetilde \kappa
\rangle_{g_{t}} dV_{S(t)} =
-({\tfrac{n-2}{2}}+n)\int_{\partial M}\langle {\mathcal L}_{X}g_{(n)},
\hat \kappa \rangle_{\gamma} dV_{\gamma} + o(1).
\end{equation}
On the other hand, one may use the method of proof of Proposition
5.4 to compute the left side of (5.44). First since on $S(t)$, $\tau
= A - Hg_{t}$ is divergence-free, a slight extension of the calculation
(5.9) gives, for any vector field $Y$ tangent to $S(t)$ and variation $h$
of $g_{t} = g|_{S(t)}$,
\begin{equation}\label{e5.45}
\int_{S(t)}\langle {\mathcal L}_{Y}\tau, h\rangle_{g_{t}} dV_{g_{t}} =
-2\int_{S(t)}\langle \delta'(\tau), Y \rangle dV_{g_{t}} +
\int_{S(t)}[\delta Y \langle \tau, h \rangle + \langle \tau,
\delta^{*}Y \rangle tr h]dV_{g_{t}}.
\end{equation}
Now let the tangential variation $h$ be given by $h =
(\delta^{*}\frac{X}{t^{n}})^{T}$, where $\delta^{*} = \delta_{g}^{*}$. Thus
$h = (t^{-n}\kappa)^{T} = (\widetilde \kappa)^{T}$, for $\widetilde \kappa$
as in (5.32). Also, set $Y = X^{T}$. Observe that the estimate (5.40)
implies that $X$ agrees with $X^{T}$ to high degree, in that
$X = X^{T} + O(t^{n+1+\mu})$. This has the effect that one may use
$X$ and $X^{T}$ interchangeably in the computations below. For example,
since $\kappa$ is trace-free, $\delta_{g}X = 0$ and hence a simple
calculation shows that $\delta_{g_{t}}Y = O(t^{n+1+\mu})$. Similarly,
$tr^{T}h = O(t^{1+\mu})$, while $\delta^{*}Y = O(t^{n})$. In particular,
the second term on the right in \eqref{e5.45} is $O(t)$, and hence may
be ignored.
Next, the deformation $h$ above is the tangential part of a (trivial)
infinitesimal Einstein deformation, and hence the linearized divergence
constraint (5.6) holds along $S(t)$, in the direction $h$. Arguing then
as in the proof of Proposition 5.4, it follows from \eqref{e5.45} and
the fact that $X^{T} \sim X$ to high order that
\begin{equation}\label{e5.46}
\int_{S(t)}\langle {\mathcal L}_{X}(A - Hg_{t}), \widetilde \kappa
\rangle_{g_{t}} dV_{S(t)} =
2\int_{S(t)}\langle (A - Hg_{t})', (\delta^{*}X)^{T} \rangle
dV_{g_{t}} + O(t).
\end{equation}
Now $A' = \frac{d}{ds}(A_{g+s\widetilde \kappa}) = \frac{1}{2}
({\mathcal L}_{N}\widetilde \kappa + {\mathcal L}_{N'}g) =
\frac{1}{2}\nabla_{N}\widetilde \kappa + \widetilde \kappa + O(t)$.
Similarly, $(Hg_{t})' = H'g_{t} + H(g_{t})'$. The first term here,
when paired with $(\delta^{*}X)^{T}$ and integrated, gives $O(t)$,
while the second term is $n\widetilde \kappa$ to leading order.
Thus one has
\begin{equation}\label{e5.47}
2\int_{S(t)}\langle (A - Hg_{t})', (\delta^{*}X)^{T} \rangle dV_{g_{t}} =
\int_{S(t)}\langle \nabla_{N}\widetilde \kappa - (2n-2)\widetilde \kappa,
\kappa \rangle_{g_{t}} dV_{g_{t}} + O(t).
\end{equation}
A straightforward calculation, essentially the same as that preceding (5.33)
shows that
\begin{equation}\label{e5.48}
\int_{S(t)}\langle \nabla_{N}\widetilde \kappa - (2n-2)\widetilde \kappa,
\kappa \rangle_{g_{t}} dV_{g_{t}} =
\int_{S(t)}[{\tfrac{1}{2}}N(|\hat \kappa|^{2}) - (2n-2)|\hat \kappa|^{2}]
dV_{\gamma} + O(t),
\end{equation}
where the norms on the right are with respect to $\bar g$. The first term on
the right in \eqref{e5.48} is $O(t)$, and comparing \eqref{e5.48} with
\eqref{e5.44}-\eqref{e5.47} shows that
$$\hat \kappa = 0$$
on $\partial M$. Hence via Corollary 4.4, $\kappa = 0$ on $M$, as before.
This completes the second proof of Theorem 1.3.
{\qed
}
{\bf Proof of Corollary 1.4.}
Suppose $(M, g)$ is a conformally compact Einstein metric with
boundary metric
given by the round metric $S^{n}(1)$ on $S^{n}$. Theorem 1.3 implies
that the
isometry group of $(M, g)$ contains the isometry group of $S^{n}$. This
reduces
the Einstein equations to a simple system of ODE's, and it is easily
seen that
the only solution is given by the Poincar\'e metric on the ball
$B^{n+1}$.
{\qed
}
\begin{remark}\label{r5.6}
{\rm By means of Obata's theorem [30], Theorem 1.3 remains true for
continuous groups of conformal isometries at conformal infinity. Thus,
the class of the round metric on $S^{n}$ is the only conformal class
which supports an essential conformal Killing field, i.e.~a field
which is not Killing with respect to some conformally related metric.
Corollary 1.4 shows that any $g \in E_{AH}$ with boundary metric
$S^{n}(1)$ is necessarily the hyperbolic metric $g_{-1}$ on
the ball. For $g_{-1}$, it is well-known that essential conformal
Killing fields on $S^{n}$ extend to Killing fields on
$({\mathbb H}^{n+1}, g_{-1})$.
We expect that a modification of the proof of Theorem 1.3 would give
this result
directly, without the use of Obata's theorem. In fact, such would
probably give
(yet) another proof of Obata's result. }
\end{remark}
Corollary 5.5 shows, in the global situation, that the projection
$\pi$ of
the constraint manifold ${\mathcal T}$ to $\operatorname{Met}(\partial M)$ is always
locally surjective.
Hence there exists a formal solution, and an exact solution in the
analytic case, for
any nearby boundary metric, which is defined in a neighborhood of the
boundary. However,
the full boundary map $\Pi$ in (5.13) or (5.14) on global metrics is
not locally surjective
in general; nor is it always globally surjective.
The simplest example of this behavior is provided by the family of
AdS Schwarzschild
metrics. These are metrics on ${\mathbb R}^{2}\times S^{n-1}$ of the
form
$$g_{m} = V^{-1}dr^{2} + Vd\theta^{2} + r^{2}g_{S^{n-1}(1)},$$
where $V = V(r) = 1 + r^{2} - \frac{2m}{r^{n-2}}$. Here $m > 0$ and $r
\in [r_{+}, \infty)$,
where $r_{+}$ is the largest root of the equation $V(r_{+}) = 0$. The
locus $\{r = r_{+}\}$ is a totally geodesic round $S^{n-1}$ of radius
$r_{+}$. Smoothness of the metric at $\{r = r_{+}\}$ requires that the
circular parameter $\theta$ run over the interval
$[0,\beta]$, where
$$\beta = \frac{4\pi r_{+}}{nr_{+}^{2}+(n-2)}.$$
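For completeness, we recall the computation behind this formula. The
equation $V(r_{+}) = 0$ gives $2m = r_{+}^{n-2}(1 + r_{+}^{2})$, and hence
$$V'(r_{+}) = 2r_{+} + \frac{2(n-2)m}{r_{+}^{n-1}} =
\frac{nr_{+}^{2} + (n-2)}{r_{+}}.$$
Smoothness of the metric at $r = r_{+}$ requires that $\theta$ have period
$\beta = 4\pi/V'(r_{+})$, which is the value above.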
The metrics $g_{m}$ are isometrically distinct for distinct values of
$m$, and form
a curve in $E_{AH}$ with conformal infinity given by the conformal
class of the
product metric on $S^{1}(\beta)\times S^{n-1}(1)$. As $m$ ranges over
the interval
$(0, \infty)$, $\beta$ attains a maximum value, at $r_{+} =
\sqrt{(n-2)/n}$, of
$$\beta \leq \beta_{\max} = \frac{2\pi}{\sqrt{n(n-2)}}.$$
As $m \rightarrow 0$ or $m \rightarrow \infty$, $\beta \rightarrow 0$.
Hence, for $L > \beta_{\max}$, the metrics $S^{1}(L)\times S^{n-1}(1)$
are not realized as $\Pi(g_{m})$ for any $m$. In fact these boundary
metrics are not in $Im(\Pi)$
generally,
for any manifold $M^{n+1}$. For Theorem 1.3 implies that any
conformally compact
Einstein metric with boundary metric $S^{1}(L)\times S^{n-1}(1)$ has an
isometry
group containing the isometry group of $S^{1}(L)\times S^{n-1}(1)$.
This again
reduces the Einstein equations to a system of ODE's and it is easy to
see, (although
we do not give the calculations here), that any such metric is an AdS
Schwarzschild metric.
\begin{remark}\label{r5.7}
{\rm In the context of Propositions 5.1 and 5.4, it is natural to
consider the
issue of whether local Killing fields of $\partial M$, (i.e.~Killing
fields defined
on the universal cover), extend to local Killing fields of any global
conformally
compact Einstein metric. Note that Proposition 5.1 and Proposition 5.4
are both
local results, the latter by using variations $h_{(0)}$ which are of
compact support.
However, the linearized constraint condition (5.8) is not invariant
under covering
spaces; even the splitting (5.7) is not invariant under coverings,
since a Killing
field on a covering space need not descend to the base space.
We claim that local Killing fields do not extend even locally into
the interior in
general. As a specific example, let $N^{n+1}$ be any complete,
geometrically finite
hyperbolic manifold, with conformal infinity $(\partial N, \gamma)$,
and which has
at least one parabolic end, i.e.~a finite volume cusp end, with cross
sections given
by flat tori $T^{n}$. There exist many such manifolds. The metric at
conformal infinity
is conformally flat, so there are many local Killing fields on
$\partial N$. For example,
in many cases $N$ itself is a compact hyperbolic manifold. Of course
the local
(conformal) isometries of $\partial N$ extend here to local isometries
of $N$.
However, as shown in [15], the cusp end may be capped off by Dehn
filling with a
solid torus, to give infinitely many distinct conformally compact
Einstein metrics
with the same boundary metric $(\partial N, \gamma)$. These Dehn-filled
Einstein metrics
cannot inherit all the local conformal symmetries of the boundary. }
\end{remark}
\begin{remark} \label{r5.8}
{\rm We point out that Theorem 1.3 fails for complete Ricci-flat
metrics which are ALE
(asymptotically locally Euclidean). The simplest counterexamples are
the family of
Eguchi-Hanson metrics, which have boundary metric at infinity given by
the round metric
on $S^{3}/{\mathbb Z}_{2}$. The symmetry group of these metrics is
strictly smaller than
the isometry group $Isom (S^{3}/{\mathbb Z}_{2})$ of the boundary.
Similarly, the
Gibbons-Hawking family of metrics with boundary metric the round metric
on
$S^{3}/{\mathbb Z}_{k}$ have only an $S^{1}$ isometry group, much
smaller than
the group $Isom (S^{3}/{\mathbb Z}_{k})$.
This indicates that, despite a number of proposals, some important
features of holographic
renormalization in the AdS context cannot carry over to the
asymptotically flat case. }
\end{remark}
\begin{center}
September, 2007
\end{center}
\noindent
{\address Department of Mathematics\\
S.U.N.Y. at Stony Brook\\
Stony Brook, NY 11794-3651, USA\\
E-mail: [email protected]}
\noindent
{\address Institut de math\'ematiques et de mod\'elisation de
Montpellier\\
CNRS et Universit\'e Montpellier II\\
34095 Montpellier Cedex 5, France\\
E-mail: [email protected]}
\end{document}
\begin{document}
\title[Dichotomy for strictly increasing bisymmetric maps]{A dichotomy result for strictly increasing bisymmetric maps}
\author{P\'al Burai, Gergely Kiss and Patricia Szokol}
\address{P\'al Burai, \newline Budapest University of Technology and Economics, \newline 1111 Budapest,
Műegyetem rkp. 3., HUNGARY}
\email{[email protected]}
\address{Gergely Kiss, \newline Alfr\'ed R\'enyi Institute of Mathematics,\newline 1053 Budapest, Re\'altanoda street 13-15, HUNGARY}
\email{[email protected]}
\address{Patricia Szokol, \newline University of Debrecen,\newline
Faculty of Informatics, University of Debrecen,
MTA-DE Research Group ``Equations, Functions and Curves'', \newline
4028, Debrecen, 26 Kassai road, Hungary}
\email{[email protected]}
\keywords{Bisymmetry, quasi-arithmetic mean, reflexivity, symmetry, regularity of bisymmetric maps}
\subjclass[2010]{26E60, 39B05, 39B22, 26B99}
\maketitle
\begin{abstract}
In this paper we present some remarkable consequences of the method used to prove that every bisymmetric, symmetric, reflexive, strictly monotonic binary map on a proper interval is continuous and, in particular, a quasi-arithmetic mean. We demonstrate that this result can be refined: the symmetry condition can be weakened to symmetry at a single pair of distinct points of the interval.
\end{abstract}
\section{Introduction}
The role of bisymmetry in the characterization of binary quasi-a\-rith\-metic means goes back to the research of J\'anos Acz\'el (see \cite{Aczel1948}). This led him to a new approach, fundamentally different from the earlier multivariate characterizations of quasi-arithmetic means by Kolmogoroff \cite{Kolmogoroff1930}, Nagumo \cite{Nagumo1930} and de Finetti \cite{deFinetti1931}. Since that time quasi-arithmetic means have become central objects in the theory of functional equations, especially in the theory of means (see e.g. \cite{Burai2013c, Daroczy2013,Duc2020,Glazowska2020,Kiss2018,LPZ2020,Nagy2019,Pales2011,Pasteczka2020},
in particular \cite{Daroczy2002} and \cite{Jar2018} and the references therein).
In the proof of Acz\'el's characterization (Theorem \ref{Aczel1}, for details see \cite[Theorem 1 on p. 287]{Aczel1989}) the assumption of continuity was used in an essential way. It seemed that continuity could not be omitted from the conditions of Theorem \ref{Aczel1}, until quite recently the authors showed that the characterization of two-variable quasi-arithmetic means is possible without the assumption of continuity (Theorem \ref{T:bisymmetryimpliescontinuity}, for details see \cite[Theorem 8]{BKSZ2021}). It was proved that every bisymmetric, symmetric, reflexive, strictly monotonic binary mapping $F$ on a proper interval $I$ is continuous, and in particular is a quasi-arithmetic mean.
In this paper we show a nontrivial consequence of Theorem \ref{T:bisymmetryimpliescontinuity}.
We prove a dichotomy result concerning the symmetry of bisymmetric, reflexive, strictly monotonic operations on an interval (Corollary \ref{cor1}). Namely, such functions are either symmetric everywhere or nowhere symmetric.
In this sense this paper can be seen as the next step toward the characterization of bisymmetric, partially strictly monotone operations (see Open Problem \ref{op1}).
The remaining part of this paper is organized as follows.
In Section \ref{S:preliminary} we introduce the basic definitions and preliminary results. Section \ref{s3} is devoted to our main result (Theorem \ref{T:bisymmetryimpliescontinuity2}) and its consequences. Here we show some illustrative examples for the sharpness of our main result.
The proof of Theorem \ref{T:bisymmetryimpliescontinuity2} is a quite lengthy and technical one. Therefore, we introduce at first the needed concepts and prove some important lemmata in Section \ref{s31}, while Section \ref{s32} is devoted to the proof of Theorem \ref{T:bisymmetryimpliescontinuity2}. We finish this short note with some concluding remarks.
\section{Notations}\label{S:preliminary}
We keep the following notations throughout the text. Let $I\subseteq\mathbb{R}$ be a proper interval (i.e. the interior of $I$ is nonempty)
and $F\colon I\times I\to\mathbb{R}$ be a map.
Then $F$ is said to be
\begin{enumerate}[(i)]
\item {\it reflexive}, if $F(x,x)=x$ for every $x\in I$;
\item {\it partially strictly increasing}, if the functions $$x\mapsto F(x,y_0),\quad y\mapsto F(x_0,y)$$ are strictly increasing for every fixed $x_0\in I$ and $y_0\in I$. One can define partially strictly monotone, partially monotone, partially increasing functions similarly;
\item {\it symmetric}, if $F(x,y)=F(y,x)$ for every $x,y\in I$;
\item {\it bisymmetric}, if
\begin{equation*}\label{E:bisymmetry}
F\big(F(x,y),F(u,v)\big)=F\big(F(x,u),F(y,v)\big)
\end{equation*}
for every $x,y,u,v\in I$;
\item \emph{left / right cancellative}, if $F(x,a)=F(y,a)$ / $F(a,x)=F(a,y)$ implies $x=y$ for every $x,y,a\in I$. If $F$ is both left and right cancellative, then we simply say that $F$ is {\it cancellative};
\item \emph{mean}, if \begin{equation*}
\min\{x,y\}\leq F(x,y)\leq\max\{x,y\}
\end{equation*} for every $x,y\in I$. $F$ is a {\it strict mean} if the previous inequalities are strict whenever $x\ne y$.
\end{enumerate}
\begin{obvs}\label{o1}
If a map $F:I^2\to I$ is partially strictly increasing, then it is cancellative.
\end{obvs}
The following fundamental result is due to Acz\'el \cite{Aczel1948} (see also \cite[Theorem 1 on p. 287]{Aczel1989}).
\begin{thm}\label{Aczel1}
A function $F:I^2\to I$ is a continuous, reflexive, partially strictly monotonic, symmetric and bisymmetric mapping if and only if there is a continuous, strictly increasing function $f$
that satisfies
\begin{equation}\label{eqa1}
F(x,y)=f \left(\frac{f^{-1}(x)+f^{-1}(y)}{2}\right),\qquad x,y\in I.
\end{equation}
\end{thm}
A function $F$ which satisfies \eqref{eqa1} is called a {\it quasi-arithmetic mean}. In other words, a quasi-arithmetic mean is a conjugate of the arithmetic mean by a continuous bijection $f$.
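For instance, the choices $f = \mathrm{id}$, $f = \exp$ and $f(x) = 1/x$ in \eqref{eqa1} (each on a suitable interval) yield the arithmetic, geometric and harmonic means, respectively; e.g.\ for $f = \exp$, so that $f^{-1} = \ln$ and $I = \,]0,\infty[$,
$$F(x,y) = \exp\left(\frac{\ln x + \ln y}{2}\right) = \sqrt{xy}.$$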
In \cite{BKSZ2021} the authors showed that the assumption of continuity for $F$ in Theorem \ref{Aczel1} can be omitted, inasmuch as it is a consequence of the remaining conditions.
\begin{thm}\label{T:bisymmetryimpliescontinuity}
A function $F\colon I^2\to I$ is a reflexive, partially strictly increasing, symmetric and bisymmetric mapping if and only if there is a continuous, strictly monotonic function $f$ such that
\begin{equation}\label{Eq_foalak}
F(x,y)=f\left(\frac{f^{-1}(x)+f^{-1}(y)}{2}\right),\qquad x,y\in I.
\end{equation}
In particular, every reflexive, partially strictly increasing, symmetric and bisymmetric binary mapping defined on $I$ is continuous.
\end{thm}
\section{Dichotomy result on the symmetry of bisymmetric, strictly monotone, reflexive binary functions}\label{s3}
We prove that a reflexive, bisymmetric, partially strictly increasing map is either totally symmetric or totally non-symmetric on the whole domain.
The main result of this section runs as follows:
\begin{thm}\label{T:bisymmetryimpliescontinuity2}
Let us assume that $I$ is a proper interval and $F\colon I^2\to I$ is a reflexive, partially strictly increasing and bisymmetric map. Suppose that there are $\alpha,\beta\in I$ ($\alpha\ne \beta$) such that $F(\alpha,\beta)=F(\beta,\alpha)$. Then $F$ is symmetric on $I$ and continuous, i.e., $F$ is a quasi-arithmetic mean.
\end{thm}
As an immediate consequence of Theorem \ref{T:bisymmetryimpliescontinuity2} we get the following dichotomy result.
\begin{cor}\label{cor1}
Let $I$ be a proper interval, then
every bisymmetric, partially strictly increasing, reflexive, binary function $F\colon I^2\to I$ is
either symmetric everywhere on $I$ or nowhere symmetric on $I$.
\end{cor}
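The dichotomy in Corollary \ref{cor1} can be illustrated numerically. A minimal sketch (the example below is our choice): the weighted arithmetic mean $F(x,y)=(2x+y)/3$ is reflexive, partially strictly increasing and bisymmetric, yet nowhere symmetric.

```python
import itertools

# Weighted arithmetic mean: reflexive, partially strictly increasing,
# bisymmetric, and F(x, y) != F(y, x) whenever x != y.
def F(x, y):
    return (2 * x + y) / 3

pts = [0.0, 0.25, 0.5, 0.75, 1.0]
for x in pts:
    assert F(x, x) == x                                   # reflexive
for x, y, u, v in itertools.product(pts, repeat=4):
    assert abs(F(F(x, y), F(u, v)) -
               F(F(x, u), F(y, v))) < 1e-12               # bisymmetric
for x, y in itertools.product(pts, repeat=2):
    if x != y:
        assert F(x, y) != F(y, x)                         # nowhere symmetric
```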
We illustrate the strictness of our main results with some examples.
The map
\[
F\colon[0,1]^2\to[0,1],\quad F(x,y):=\begin{cases}
\frac{2xy}{x+y}&\mbox{if } x\in[0,1], \ \mbox{and } y\in [0,\tfrac12[\\
\sqrt{xy}&\mbox{if } x\in [0,\tfrac12],\ \mbox{and } y\in [\tfrac12,1]\\
\frac{x+y}{2}&\mbox{otherwise}
\end{cases}
\]
is reflexive and partially strictly increasing, but not bisymmetric; it is symmetric for some pairs of $[0,1]^2$ and non-symmetric for others.
The map
\[
F\colon[0,1]^2\to[0,1],\quad F(x,y):=\begin{cases}
y&\mbox{if } x,y\in[\tfrac12,1]\\
\min\{x,y\}&\mbox{otherwise}
\end{cases}
\]
is reflexive, bisymmetric and partially increasing (but not strictly); it is symmetric for some pairs of $[0,1]^2$ and non-symmetric for others.
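The stated properties of the two maps above can be checked numerically. The sketch below (grid-based; the convention $F(0,0)=0$ in the harmonic branch is our choice, consistent with reflexivity) finds symmetric pairs, non-symmetric pairs and, for the first map, a violation of bisymmetry.

```python
import itertools

def F1(x, y):
    # First example map on [0,1]^2 (0/0 convention: F1(0,0) = 0).
    if y < 0.5:
        return 2 * x * y / (x + y) if x + y > 0 else 0.0
    if x <= 0.5:
        return (x * y) ** 0.5
    return (x + y) / 2

def F2(x, y):
    # Second example map on [0,1]^2.
    if x >= 0.5 and y >= 0.5:
        return y
    return min(x, y)

grid = [i / 10 for i in range(11)]

def sym_and_asym_pairs(F):
    sym = [(x, y) for x, y in itertools.product(grid, repeat=2)
           if x != y and abs(F(x, y) - F(y, x)) < 1e-12]
    asym = [(x, y) for x, y in itertools.product(grid, repeat=2)
            if abs(F(x, y) - F(y, x)) > 1e-12]
    return sym, asym

for F in (F1, F2):
    sym, asym = sym_and_asym_pairs(F)
    assert sym and asym   # neither everywhere symmetric nor nowhere symmetric

# F1 is not bisymmetric: a violating quadruple exists on the grid.
viol = [(x, y, u, v) for x, y, u, v in itertools.product(grid, repeat=4)
        if abs(F1(F1(x, y), F1(u, v)) - F1(F1(x, u), F1(y, v))) > 1e-9]
assert viol

# F2 is bisymmetric (no violation on the grid).
assert all(abs(F2(F2(x, y), F2(u, v)) - F2(F2(x, u), F2(y, v))) < 1e-12
           for x, y, u, v in itertools.product(grid, repeat=4))
```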
Concerning the relaxation of reflexivity condition we can formulate the following open problem.
\begin{open} \label{op1}
Is it true or not that every bisymmetric, partially strictly increasing map is automatically continuous?
\end{open}
If the answer is affirmative, then the resulting map can be written in the following form (see \cite[Exercise 2, p. 296]{Aczel1989}).
\[
F(x,y)=k^{-1}(ak(x)+bk(y)+c),
\]
where $k$ is an invertible, continuous function, and $a,b,c$ are arbitrary real constants such that $ab\not=0$. In this case $F$ is either symmetric or non-symmetric everywhere. It is reflexive only if $c=0$ and $a+b=1$.
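A numerical sketch of this form (with the illustrative choices $k(x)=x^3$, $a=0.7$, $b=0.4$, $c=0.1$) confirms bisymmetry and exhibits the failure of reflexivity when $c\ne 0$ and $a+b\ne 1$.

```python
import itertools

# The cited form F(x,y) = k^{-1}(a k(x) + b k(y) + c) with sample data;
# k, a, b, c below are our illustrative choices (ab != 0).
k = lambda x: x ** 3
k_inv = lambda t: t ** (1.0 / 3.0)   # valid for the positive values used here
a, b, c = 0.7, 0.4, 0.1

def F(x, y):
    return k_inv(a * k(x) + b * k(y) + c)

pts = [0.2, 0.6, 1.1, 1.7]
for x, y, u, v in itertools.product(pts, repeat=4):
    # bisymmetry: both sides equal k^{-1}(a^2 k(x) + ab k(y) + ab k(u)
    #                                    + b^2 k(v) + (a + b + 1) c)
    assert abs(F(F(x, y), F(u, v)) - F(F(x, u), F(y, v))) < 1e-9

# Not reflexive, since c != 0 and a + b != 1: F(1,1) = (1.2)^(1/3) != 1.
assert abs(F(1.0, 1.0) - 1.0) > 1e-6
```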
We could not find a map which is bisymmetric, partially strictly increasing, not reflexive, and symmetric for some pairs of $I^2$ while non-symmetric for others.
\subsection{Auxiliary results and needed concepts}\label{s31}
\phantom{nnn}
Let $(u,v,F)_n$ denote the set of all expressions that can be built by at most $n$ nested applications of $F$ to $u$ and $v$.
For instance
\begin{align*}
(u,v,F)_0=&\{u,v\}\\
(u,v,F)_1=&\{F(u,u),F(u,v), F(v,u), F(v,v)\}\\
(u,v,F)_2=&\{F(F(u,u),u),F(F(u,v),u), F(F(v,u),u), F(F(v,v),u),\\ F(F(u,u),v)&,F(F(u,v),v), F(F(v,u),v), F(F(v,v),v), F(u,F(u,u)), \\ F(u,F(u,v))&, F(u,F(v,u)), F(u,F(v,v)),F(v,F(u,u)),F(v,F(u,v)), \\ F(v,F(v,u))&, F(v,F(v,v))\}.
\end{align*}
Moreover, let $(u,v,F)_{\infty}$ denote the set of all expressions that can be built from $u$ and $v$ by any finite number of applications of $F$. Formally, $$(u,v,F)_{\infty}=\bigcup_{n=1}^{\infty}(u,v,F)_{n}. $$
Reflexivity implies that $(u,v,F)_k\subset (u,v,F)_n$, if $k<n$. Hence, for the sake of convenience, we can introduce the notion of the length of expressions of $(u,v,F)_{\infty}$ as follows.
Let $x\in (u,v,F)_{\infty}$ and set
\[
k_0=\min\{k\in\mathbb{N} : x\in (u,v,F)_k\}.
\]
Then $k_0$ is called the length of $x$. Notation: $\mathcal{L}(x)=k_0$.
For example, the length of $x = F(F(u,u),v)=F(u,v)$ is $1$.
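The sets $(u,v,F)_n$ are easiest to picture for the arithmetic mean. The following sketch (our illustration; it collects the attainable values rather than the formal expressions) shows that for $F(x,y)=(x+y)/2$, $u=0$, $v=1$, the values obtained by at most $n$ applications of $F$ are exactly the dyadic rationals of level $n$.

```python
from fractions import Fraction

def F(x, y):
    # arithmetic mean, computed exactly on Fractions
    return (x + y) / 2

def values(u, v, n):
    """Values of expressions built from u, v with at most n applications of F."""
    s = {Fraction(u), Fraction(v)}
    for _ in range(n):
        s |= {F(x, y) for x in s for y in s}
    return s

def dyadics(n):
    return {Fraction(k, 2 ** n) for k in range(2 ** n + 1)}

assert values(0, 1, 1) == dyadics(1)   # {0, 1/2, 1}
assert values(0, 1, 2) == dyadics(2)   # adds 1/4 and 3/4
assert values(0, 1, 3) == dyadics(3)   # adds the multiples of 1/8
```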
We continue this subsection with some technical lemmata.
\begin{lem} If $F(u,v)=F(v,u)$, then F is symmetric for any pair $(t,s)$, where $t,s\in (u,v, F)_{\infty}$. \end{lem}
\begin{proof}
We prove it by induction with respect to the length of the elements of $(u,v, F)_{\infty}$. It is easy to check that $F$ is symmetric for any pair of elements of $(u,v,F)_k$ for $k=0,1$. For instance, $F$ is symmetric for the pair $(u,F(u,v))$.
Indeed, applying the reflexivity and bisymmetry of $F$ and $F(u,v)=F(v,u)$, we get
\begin{eqnarray*}
F(u,F(u,v))=F(F(u,u),F(v,u))=F(F(u,v),u).
\end{eqnarray*}
Similarly, $F$ is symmetric for $\{(u,F(v,u)), (v,F(u,v)), (v,F(v,u)) \}$, and hence for any pair of elements of $(u,v,F)_1$.
Now, we prove that $F$ is symmetric for any pair of elements of $(u,v,F)_k$, where $k\le n+1$, under the assumption that $F$ is symmetric for any pair of elements of $(u,v,F)_k$, where $k\le n$.
Let $x$ and $y$ be two elements of $(u,v, F)_{\infty}$ such that
$\mathcal{L}(x)=k$, $\mathcal{L}(y)=l$ where $k,l \le n+1$. Then, there exist $a,b,c,d \in (u,v, F)_{\infty}$ such that $\mathcal{L}(a), \mathcal{L}(b), \mathcal{L}(c), \mathcal{L}(d)\leq n$ and
\[
x=F(a,b), \qquad y=F(c,d).
\]
By the inductive hypothesis we get that $F$ is symmetric for each pair of the set $\{a,b,c,d\}$. Applying the bisymmetry of $F$ we obtain
\begin{eqnarray*}
F(x,y)&=&F(F(a,b),F(c,d))=F(F(a,c),F(b,d))\\
&=&F(F(c,a),F(d,b))=F(F(c,d),F(a,b))=F(y,x).
\end{eqnarray*}
\end{proof}
\begin{lem}\label{l1}
Let $I$ be a proper interval, and $F:I^2\to I$ be a bisymmetric, partially strictly increasing and reflexive function. Suppose that
there are $u,v\in I,\ u<v$ such that $F(u,v)=F(v,u)$. Then there is an invertible, continuous function $f\colon[0,1]\to [u,v]$ such that $F$ can be written in the form
\begin{equation}\label{eqam}
F(x,y)=f\left(\frac{f^{-1}(x)+f^{-1}(y)}{2}\right),\qquad x,y\in [u,v].
\end{equation}
In particular, $F(s,t)=F(t,s)$ holds for all $s,t\in [u,v]$.
\end{lem}
\begin{proof}
The argument is similar to the proof\footnote{for the details see \cite[proof of Theorem 8 on page 479]{BKSZ2021}} of Theorem \ref{T:bisymmetryimpliescontinuity}. The main observation is that the proof
uses the symmetry of $F$ only on the images of the dyadic numbers, which in our case is exactly the set $(u,v,F)_{\infty}$. For convenience we briefly sketch the crucial steps of the proof.
\begin{itemize}
\item Define $f\colon[0,1]\to[u,v]$ recursively from the set of dyadic numbers $\mathcal{D}$ to the set $(u,v,F)_{\infty}$, so that $f(0)=u$, $f(1)=v$, $f(\frac{1}{2})=F(u,v)$ and $f$ satisfies the identity \begin{equation}\label{identity_on_dyadics}
f\left(\frac{d_1+d_2}{2}\right)=F(f(d_1), f(d_2))
\end{equation}
for every $d_1,d_2\in\mathcal{D}$. It can be proved that such an $f$ is well-defined and strictly increasing. In this argument we crucially use the fact that $F$ is symmetric on $(u,v,F)_{\infty}$. By its recursive definition, it is clear that $f(\mathcal{D})=(u,v,F)_{\infty}$. (See also Acz\'el and Dhombres \cite{Aczel1989} on pages $287$--$290$.)
\item The closure of $f(\mathcal{D})$ has uncountably many two-sided accumulation points\footnote{A point $\alpha$ in a set $H$ is a {\it two-sided accumulation point} if for every
$\varepsilon>0$, we have
\[
]\alpha-\varepsilon,\alpha[~\cap~H\not=\emptyset\quad\mbox{ and }\quad ]\alpha,\alpha+\varepsilon[~\cap~H\not=\emptyset.
\].}.
\item If $f(\mathcal{D})$ is not dense in $[u,v]$, i.e., there are $X,Y\in [u,v]$ with $X<Y$ such that $]X,Y[~\cap ~ f(\mathcal{D})= \emptyset$, then one can show that for arbitrary two-sided accumulation points $s\not=t$ we have
\[
]F(X,s),F(Y,s)[~\cap~ ]F(X,t),F(Y,t)[ ~=\emptyset.
\]
Hence we would obtain uncountably many pairwise disjoint nonempty intervals, one for each two-sided accumulation point, which is impossible in $\mathbb{R}$. Thus, $f(\mathcal{D})$ has to be dense in $[u,v]$.
\item If $f(\mathcal{D})$ is dense in $[u,v]$, then $f$ can be defined strictly increasingly on $[0,1]$, so that $f$ is continuous and satisfies \eqref{eqam}.
\end{itemize}
\end{proof}
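The recursive construction of $f$ on the dyadic numbers via \eqref{identity_on_dyadics} can be tested numerically. In the sketch below, the data $F(x,y)=\sqrt{xy}$, $u=1$, $v=4$ are our illustrative choices; since this $F$ is the quasi-arithmetic mean generated by $\exp$, the construction should reproduce $f(d)=4^{d}$.

```python
import math
from fractions import Fraction

def F(x, y):
    # geometric mean: the quasi-arithmetic mean generated by exp
    return math.sqrt(x * y)

def f(d, u=1.0, v=4.0):
    """f on dyadic rationals, built from f(0)=u, f(1)=v and
    f((d1 + d2)/2) = F(f(d1), f(d2))."""
    d = Fraction(d)
    if d == 0:
        return u
    if d == 1:
        return v
    step = Fraction(1, d.denominator)            # d = k/2^n with k odd;
    return F(f(d - step, u, v), f(d + step, u, v))  # (k±1)/2^n have lower level

# The constructed f agrees with d -> 4^d on all dyadics up to level 5.
for n in range(1, 6):
    for k in range(2 ** n + 1):
        d = Fraction(k, 2 ** n)
        assert math.isclose(f(d), 4.0 ** float(d))
```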
\begin{lem}\label{l:union}
Let $I_1, I_2\subseteq I$ be two intervals such that $F$ is symmetric on $I_1$ and $I_2$. Then $F$ is symmetric on $F(I_1, I_2):=\{\ F(x_1,x_2)\ |\ x_1\in I_1,\ x_2\in I_2\ \}$.
Furthermore, if $I_1\cap I_2\not=\emptyset$, then $F$ is symmetric on $I_1\cup I_2$.
\end{lem}
\begin{proof}
We have to show that $F(z_1, z_2)=F(z_2, z_1)$ for $z_1, z_2\in F(I_1, I_2)$.
Let $x_1,x_2\in I_1$ and $y_1,y_2\in I_2$ such that $F(x_1, y_1)=z_1$ and $F(x_2, y_2)=z_2$.
Then \begin{align*}
&F(z_1, z_2)=F(F(x_1, y_1), F(x_2, y_2))=F(F(x_1, x_2), F(y_1, y_2))=\\&F(F(x_2, x_1),F(y_2, y_1))=F(F(x_2,y_2),F(x_1, y_1))=F(z_2, z_1),
\end{align*}
where in the second and fourth equalities we use bisymmetry and the third equality holds by the symmetry of $F$ on $I_1$ and $I_2$.
Now, let us assume that $z\in I_1\cap I_2$ and let $x\in I_1$, $y\in I_2$ be arbitrary. We have to show that $F(x,y)=F(y,x)$.
Using $F(x,z)=F(z,x)$, $F(y,z)=F(z,y)$ and the bisymmetry of $F$, we get
\begin{eqnarray*}
F(F(x,y),F(z,z))&=&F(F(x,z),F(y,z))=\\F(F(z,x),F(y,z))&=&F(F(z,y),F(x,z))=\\
F(F(y,z),F(x,z))&=&F(F(y,x),F(z,z)).
\end{eqnarray*}
Since $F$ is partially strictly increasing, by Observation \ref{o1}, it is cancellative and hence $F(x,y)=F(y,x)$.
\end{proof}
Now, we are in the position to prove our main theorem.
\subsection{Proof of Theorem \ref{T:bisymmetryimpliescontinuity2}}\label{s32}
\phantom{nnn}
Let us assume first that $I$ is a proper compact interval.
Let $\sim$ be defined on $I$ such that for any $a,b\in I$ we have $a\sim b$ if and only if $F(a,b)=F(b,a)$. Then $\sim$ is an equivalence relation.
Indeed, $\sim$ is clearly reflexive and symmetric. Transitivity is a direct consequence of Lemma \ref{l:union}.
Lemma \ref{l1} guarantees that if two points are in the same equivalence class, then the interval between them belongs to the same class. Combining this fact with the transitivity of $\sim$, we can obtain that every equivalence class is an interval.
One can introduce an ordering $<$ between the equivalence classes of $\sim$ as follows.
For two equivalence classes $I_1, I_2$ ($I_1\ne I_2$) we say that $I_1$ is smaller than $I_2$ (denoted by $I_1<I_2$) if every element of $I_1$ is smaller than every element of $I_2$. As every equivalence class is an interval, this definition is meaningful and gives a natural total ordering on the equivalence classes of $F$ in $I$.
\textbf{Step 1:} {\it Let $I_1,I_2\subseteq I$ be two equivalence classes such that $I_1 < I_2$, then $F(I_1, I_3)< F(I_2, I_3)$ (resp. $F(I_3, I_1)< F(I_3, I_2)$) for every equivalence class $I_3$. In particular, if $I_1<I_2$, then $I_1<F(I_1, I_2)<I_2$.}
By Lemma \ref{l:union}, we get that $F$ is symmetric on $F(I_1,I_3)$ and on $F(I_2,I_3)$. If these sets are disjoint, then $F(I_1, I_3)< F(I_2, I_3)$, since $F$ is partially strictly increasing. Now, assume that there exists a common element of $F(I_1,I_3)$ and $F(I_2,I_3)$, i.e., there exist $x_1\in I_1$, $x_2\in I_2$ and $y_1,y_2 \in I_3$ such that $F(x_1,y_1)=F(x_2, y_2)$. Hence,
\[
F(F(x_1,y_1),F(x_2,y_2))=F(F(x_2,y_2),F(x_1,y_1)).
\]
By bisymmetry, the left-hand side is equal to $F(F(x_1,x_2),F(y_1,y_2))$. Concerning the right-hand side, bisymmetry and the fact $y_1\sim y_2$ imply that
\[
F(F(x_1,x_2),F(y_1,y_2))=F(F(x_2,x_1),F(y_1,y_2)).
\]
Moreover, $F$ is partially strictly increasing and hence, by Observation \ref{o1}, it is cancellative. Consequently, $F(x_1,x_2)=F(x_2,x_1)$, which is a contradiction, since $x_1$ and $x_2$ belong to two different equivalence classes.
Similarly, we can get that $F(I_3, I_1)< F(I_3, I_2)$ for any $I_3$, if $I_1< I_2$.
In particular, the choice $I_1=I_3$ gives that $F(I_1, I_1)=I_1<F(I_1, I_2)$.
Analogously, substituting $I_3=I_2$ into $F(I_1, I_3)< F(I_2, I_3)$ we have $F(I_1, I_2)<F(I_2, I_2)=I_2$.
Thus, if $I_1<I_2$, then $I_1<F(I_1, I_2)<I_2$.
\textbf{Step 2:} \emph{Every equivalence class is a closed interval.}
As we have seen, every equivalence class is a (not necessarily proper) interval. Let $I_1$ be an equivalence class with endpoints $a$ and $b$. If $a=b$, then $I_1$ is a singleton and we are done. Now we assume that $a\ne b$. Suppose that $b\not\in I_1$. Then there is an equivalence class $I_2$ containing $b$.
However, Step 1 implies that $I_1<F(I_1,I_2)<I_2$, which is a contradiction, since $b$ is on the common boundary of $I_1$ and $I_2$, so there is no room for $F(I_1, I_2)$. Thus $b\in I_1$. A similar argument shows that $a\in I_1$, hence $I_1$ is closed.
It is important to note that the equivalence classes can be singletons, but according to our assumption $F(\alpha,\beta)=F(\beta,\alpha)$ holds for given $\alpha,\beta\in I$, hence $\alpha\sim \beta$ and there is at least one equivalence class $I_{\alpha\beta}$ which is a proper interval that contains $\alpha$ and $\beta$.
\textbf{Step 3:} \emph{Let $J_1$ be a proper interval and $J_2$ an arbitrary interval. Then $F(J_1, J_2)$ (resp.\ $F(J_2, J_1)$) is contained in an equivalence class which is a proper interval.}
If $J_1$ is proper, then $F(J_1, J_2)$ has at least two elements. Hence, the equivalence class containing $F(J_1, J_2)$ is an interval which is proper.
If $J_1\cap J_2\not=\emptyset$, then the statement follows immediately from Lemma \ref{l:union}.
If $J_1\cap J_2=\emptyset$, then the statement can be deduced from Step 1 and Step 2.
\textbf{Step 4:} \emph{The whole interval $I$ where $F$ is defined constitutes one equivalence class.}
Let us assume that we have at least two different equivalence classes $I_1$ and $I_2$. Without loss of generality, we can assume that $I_1<I_2$ and that at least one of the intervals is proper (e.g. $I_1=I_{\alpha\beta}$).
Iterating the fact (by Step 1) that $I_1<I_2$ implies $I_1<F(I_1, I_2)<I_2$, we obtain infinitely many equivalence classes that are proper intervals. Indeed, the sequence
\[
I_{\alpha\beta},\ F(I_{\alpha\beta}, I_2),\ F(F(I_{\alpha\beta}, I_2), I_2),\ F(F(F(I_{\alpha\beta}, I_2), I_2), I_2),\ \dots
\]
gives such equivalence classes, where $I_{\alpha\beta}$ was defined in Step 2.
Let us denote the cardinality of the set of equivalence classes by $\kappa$ and index the equivalence classes as $I_{j}$ for $j<\kappa$.
We distinguish two cases:
\begin{enumerate}
\item $\kappa=\aleph_0$: In this case we have countably infinitely many closed, disjoint intervals that cover the compact interval $I$. This is not possible by the following theorem of Sierpi\'nski \cite{Si} (see also \cite[p. 173]{Ku}).
\begin{thm*}[Sierpinski]
Let $X$ be a compact connected Hausdorff space (i.e. continuum). If $X$ has a countable cover $\{X_i\}_{i=1}^{\infty}$ by pairwise disjoint closed subsets, then at most one of the sets $X_i$ is non-empty.
\end{thm*}
\noindent In our case this implies that one of the equivalence classes must be the whole interval $I$.
\item $\kappa >\aleph_0$: In this case we take $\{F(I_{\alpha\beta},I_j): j<\kappa\}$. By Step 2 and by Step 3, for all $j<\kappa$ the sets $F(I_{\alpha\beta},I_j)$ are contained in equivalence classes that are pairwise disjoint and proper intervals. So we can find uncountably many disjoint, proper intervals in $\mathbb{R}$, which is impossible.
\end{enumerate}
Thus we get that every point of $I$ is in one equivalence class, hence $F$ is symmetric on $I$. In particular, $F$ is a quasi-arithmetic mean on the compact, proper interval $I$.
\textbf{Step 5:} \emph{If $F$ is a quasi-arithmetic mean on every compact subinterval of an arbitrary interval $I$, then it is a quasi-arithmetic mean on $I$.}
The proof is based on a standard compact exhaustion argument. The interested reader is referred to the proof of Theorem 1 in \cite[p. 287]{Aczel1989}.
This finishes the proof of Theorem \ref{T:bisymmetryimpliescontinuity2}.
\section{Concluding remarks}
One of the main goals concerning the characterization of bisymmetric operations without any regularity assumption is formalized in Open Problem \ref{op1}, which asks whether bisymmetry together with strict increasingness implies continuity. In the first joint paper of the authors \cite{BKSZ2021} it was proved that this is true if the operation is also symmetric and reflexive (see Theorem \ref{T:bisymmetryimpliescontinuity}). Following Acz\'el's idea in the investigation of bisymmetric, strictly increasing maps \cite{Aczel1948}, the next step would be to settle the case where the symmetry condition is not assumed.
\begin{open}
Is it true or not that every bisymmetric, partially
strictly increasing, reflexive map is automatically continuous?
\end{open}
At this moment we do not know the exact answer to this question, although we believe it to be affirmative. In this direction, the present investigation is an initial step, showing the dichotomy of symmetry for bisymmetric, strictly increasing, reflexive operations.
Furthermore, it is important to note that reflexivity has not been used centrally in the proof of Theorem \ref{T:bisymmetryimpliescontinuity2}, only implicitly in Lemma \ref{l1}. This observation leads to the following open question.
\begin{open}
Is it true or not that every bisymmetric, partially
strictly increasing, symmetric map is automatically continuous?
\end{open}
If the answer is affirmative, then with its proof we may get the analogue of Lemma \ref{l1} without reflexivity.
Moreover, it would automatically imply the analogues of Theorem \ref{T:bisymmetryimpliescontinuity2} and the dichotomy result Corollary \ref{cor1} without the assumption of reflexivity.
\end{document}
\begin{document}
\numberwithin{equation}{section}
\thispagestyle{empty}
\baselineskip=6.3mm
\begin{center}
{\large{\bf Split general quasi-variational inequality problem}}
{ \bf {K.R. Kazmi{\footnote {emails: [email protected]; [email protected] (K.R. Kazmi)}}}}
{\it Department of Mathematics, Aligarh Muslim University, Aligarh-202002, India}\\
\end{center}
\parindent=0mm
{\bf Abstract:} In this paper, we introduce a split general quasi-variational inequality problem which is a natural extension of the split variational inequality problem and of quasi-variational and variational inequality problems in Hilbert spaces. Using the projection method, we propose an iterative algorithm for the split general quasi-variational inequality problem and discuss some of its special cases. Further, we discuss the convergence criteria of these iterative algorithms. The results presented in this paper generalize, unify and improve many previously known results for quasi-variational and variational inequality problems.
\parindent=0mm
{\bf Keywords:} Split general quasi-variational inequality problem, Split quasi-variational inequality problem, Split general variational inequality problem, iterative algorithms, convergence analysis.
\noindent {\bf AMS Subject Classifications:} Primary 47J53, 65K10; Secondary 49M37, 90C25.\\
\parindent=8mm
\noindent {\bf 1. Introduction}
Throughout the paper unless otherwise stated, for each $i\in\{1,2\}$, let $H_i$ be a real Hilbert space with inner product $\langle \cdot,\cdot \rangle $ and norm $\| \cdot \|$; let $C_i$ be a nonempty, closed and convex subset of $H_i$.
The {\it variational inequality problem} (in short, VIP) is to find $x_1\in C_1$ such that
$$\langle f_1(x_1),y_1-x_1\rangle\geq 0,~~\forall y_1\in C_1, \eqno(1.1)$$
where $f_1:C_1\to H_1$ is a nonlinear mapping.
Variational inequality theory, introduced independently by Stampacchia [1] and Fichera [2] in the early sixties in potential theory and mechanics, respectively, constitutes a significant extension of variational principles. It has been shown that variational inequality theory provides a natural, unified and efficient framework for the general treatment of a wide class of unrelated linear and nonlinear problems arising in elasticity, economics, transportation, optimization, control theory and the engineering sciences; see for instance [3-8]. The development of variational inequality theory can be viewed as the simultaneous pursuit of two different lines of research. On the one hand, it reveals fundamental facts about the qualitative behavior of solutions to important classes of problems. On the other hand, it enables us to develop highly efficient and powerful numerical methods to solve, for example, obstacle, unilateral, free and moving boundary value problems. In the last five decades, considerable interest has been shown in developing various classes of variational inequality problems, both for their own sake and for their applications.
An important generalization of the variational inequality problem is the quasi-variational inequality problem, introduced and studied by Bensoussan, Goursat and Lions [9] in connection with impulse control problems. More precisely, for each $i$, let $C_i: H_i \to 2^{H_i}$ be a set-valued mapping with nonempty, closed and convex values, where $2^{H_i}$ denotes the family of all nonempty subsets of $H_i$. The {\it quasi-variational inequality problem} (in short, QVIP) is to find $x_1\in H_1$ such that $x_1\in C_1(x_1)$ and
$$\langle f_1(x_1),y_1-x_1\rangle\geq 0,~~\forall y_1\in C_1(x_1), \eqno(1.2)$$
where $f_1:H_1\to H_1$ is a nonlinear mapping.
We observe that if $C_1(x_1)=C_1$ for all $x_1 \in H_1$, then QVIP(1.2) reduces to VIP(1.1). In many important applications, $C_1(x_1)=m(x_1)+C_1$ for each $x_1 \in H_1$, where $m: H_1 \to H_1$ is a single-valued mapping; see for instance [4,5]. Since then, various generalizations of QVIP(1.2) have been proposed and analyzed; see for instance [10-14].
Recently, Censor {\it et al.} [15] introduced and studied the following {\it split variational inequality problem} (in short, SpVIP): For each $i\in\{1,2\}$, let $f_i:H_i\to H_i$ be a nonlinear mapping and let $A:H_1\to H_2$ be a bounded linear operator with adjoint operator $A^*$. Then the SpVIP is to:
\noindent Find $x_1^*\in C_1$ such that
$$\langle f_1(x_1^*),x_1-x_1^*\rangle\geq 0,~~\forall x_1 \in C_1,\eqno(1.3a)$$
and such that
$$x_2^*=Ax_1^*\in C_2~~{\rm solves}~\langle f_2(x_2^*),x_2-x_2^*\rangle\geq 0,~~\forall x_2 \in C_2.\eqno(1.3b)$$
SpVIP(1.3a-b) amounts to saying: find a solution of the variational inequality problem VIP(1.3a) whose image under a given bounded linear operator is a solution of VIP(1.3b). It is worth mentioning that SpVIP is quite general and permits split minimization between two spaces, so that the image of a minimizer of a given function under a bounded linear operator is a minimizer of another function. SpVIP(1.3a-b) is an important generalization of VIP(1.1). It also includes as special cases the split zero problem and the split feasibility problem, which have already been studied and used in practice as a model in intensity-modulated radiation therapy treatment planning; see [16-18]. For further related work, we refer to Moudafi [19], Byrne {\it et al.} [20], Kazmi and Rizvi [21-24] and Kazmi [25].
In this paper, we introduce the following natural generalization of SpVIP(1.3a-b): For each $i\in \{1,2\}$, let $C_i: H_i \to 2^{H_i}$ be a set-valued mapping with nonempty, closed and convex values, let $f_i:H_i\to H_i$ and $g_i:H_i\to H_i$ be nonlinear mappings, and let $A:H_1\to H_2$ be a bounded linear operator with adjoint operator $A^*$. Then we consider the problem:
\noindent Find $x_1^*\in H_1$ such that $g_1(x_1^*)\in C_1(x_1^*)$ and
$$\langle f_1(x_1^*),x_1-g_1(x_1^*)\rangle\geq 0,~~\forall x_1 \in C_1(x_1^*),\eqno(1.4a)$$
and such that
$$x_2^*=Ax_1^*\in H_2,~~g_2(x_2^*)\in C_2(x_2^*)~~{\rm solves}~\langle f_2(x_2^*),x_2-g_2(x_2^*)\rangle\geq 0,~~\forall x_2 \in C_2(x_2^*).\eqno(1.4b)$$
We call problem (1.4a-b) the {\it split general quasi-variational inequality problem} (in short, SpGQVIP).
Now, we observe some special cases of SpGQVIP(1.4a-b).
If we set $g_i=I_i$, where $I_i$ is the identity operator on $H_i$, then SpGQVIP(1.4a-b) is reduced to the following {\it split quasi-variational inequality problem} (in short, SpQVIP):
\noindent Find $x_1^*\in H_1$ such that $x_1^*\in C_1(x_1^*)$ and
$$\langle f_1(x_1^*),x_1-x_1^*\rangle\geq 0,~~\forall x_1 \in C_1(x_1^*),\eqno(1.5a)$$
and such that
$$x_2^*=Ax_1^*\in H_2, x_2^*\in C_2(x_2^*)~~{\rm solves}~\langle f_2(x_2^*),x_2-x_2^*\rangle\geq 0,~~\forall x_2 \in C_2(x_2^*),\eqno(1.5b)$$
which appears to be new.
If we set $C_i(x_i)=C_i$ for all $x_i \in H_i$, then SpGQVIP(1.4a-b) is reduced to the following {\it split general variational inequality problem} (in short, SpGVIP):
\noindent Find $x_1^*\in H_1$ such that $g_1(x_1^*)\in C_1$ and
$$\langle f_1(x_1^*),x_1-g_1(x_1^*)\rangle\geq 0,~~\forall x_1 \in C_1,\eqno(1.6a)$$
and such that
$$x_2^*=Ax_1^*\in H_2,~~g_2(x_2^*)\in C_2~~{\rm solves}~\langle f_2(x_2^*),x_2-g_2(x_2^*)\rangle\geq 0,~~\forall x_2 \in C_2,\eqno(1.6b)$$
which appears to be new.
Further, if we set $C_i(x_i)=C_i$ for all $x_i \in H_i$, and $g_i=I_i$, then SpGQVIP(1.4a-b) is reduced to SpVIP(1.3a-b).
Furthermore, if we set $H_2=H_1$; $C_2(x_1)=C_1(x_1)$ for all $x_1$; $f_2=f_1$; and $g_i=I_i$, then SpGQVIP(1.4a-b) is reduced to QVIP(1.2).
Using the projection method, we propose an iterative algorithm for SpGQVIP(1.4a-b) and discuss some of its special cases, which are iterative algorithms for SpQVIP(1.5a-b), SpGVIP(1.6a-b), SpVIP(1.3a-b) and QVIP(1.2). Further, we discuss the convergence criteria of these iterative algorithms. The results presented in this paper generalize, unify and improve many previously known results for quasi-variational and variational inequality problems.
\noindent{\bf 2. Iterative algorithms }
For each $i\in\{1,2\}$, the mapping $P_{C_i}$ is said to be the {\it metric projection} of $H_i$ onto $C_i$ if for every point $x_i \in H_i$, there exists a unique nearest point in $C_i$, denoted by $P_{C_i}(x_i)$, such that
$$ \|x_i-P_{C_i}(x_i)\|\leq \|x_i-{\bar{x}}_i\|, ~~ \forall {\bar{x}}_i \in C_i.$$
It is well known that $P_{C_i}$ is a nonexpansive mapping and satisfies
$$\langle x_i-{\bar{x}}_i ,P_{C_i}(x_i)-P_{C_i}({\bar{x}}_i) \rangle \geq \|P_{C_i}(x_i)-P_{C_i}({\bar{x}}_i)\|^2, ~~\forall x_i,{\bar{x}}_i \in H_i.\eqno(2.1)$$
Moreover, $P_{C_i}(x_i)$ is characterized by:
$$\langle x_i-P_{C_i}(x_i),{\bar{x}}_i-P_{C_i}(x_i) \rangle \leq 0, ~~ \forall {\bar{x}}_i \in C_i.\eqno(2.2)$$
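Properties (2.1) and (2.2) are easy to check numerically for a concrete projection. The sketch below (the box $C=[0,1]^2$ in $H=\mathbb{R}^2$ is our illustrative choice) verifies both inequalities at random points.

```python
import random

def proj(x):
    # metric projection onto the box C = [0,1]^2: componentwise clamp
    return tuple(min(max(t, 0.0), 1.0) for t in x)

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

def sub(a, b):
    return tuple(s - t for s, t in zip(a, b))

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-2, 3), random.uniform(-2, 3))
    y = (random.uniform(-2, 3), random.uniform(-2, 3))
    px, py = proj(x), proj(y)
    # firm nonexpansiveness (2.1): <x - y, Px - Py> >= ||Px - Py||^2
    assert dot(sub(x, y), sub(px, py)) >= dot(sub(px, py), sub(px, py)) - 1e-12
    # characterization (2.2): <x - Px, c - Px> <= 0 for every c in C
    c = (random.uniform(0, 1), random.uniform(0, 1))
    assert dot(sub(x, px), sub(c, px)) <= 1e-12
```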
Further, it is easy to see that the following is true:
$$x_1^* {\rm ~is~a~solution~of~ QVIP(1.2)}\Leftrightarrow x_1^*=P_{C_1(x_1^*)} (x_1^*-\rho_1 f_1(x_1^*)),~~\rho_1>0.$$
Hence, SpGQVIP(1.4a-b) can be reformulated as follows:~~ Find $x_1^* \in H_1 $ with $x_2^*=Ax_1^*$ such that $g_i(x_i^*)\in C_i(x_i^*)$ and
$$g_i(x_i^*)= P_{C_i(x_i^*)}(g_i(x_i^*)- \rho_i f_i(x_i^*)), \eqno(2.3)$$
for $\rho_i >0$.
Based on the above arguments, we propose the following iterative algorithm for approximating a solution to SpGQVIP(1.4a-b).
Let $\{\alpha^n\} \subseteq (0,1)$ be a sequence such that $\sum \limits^{\infty}_{n=1} \alpha^n=+\infty$, and let $\rho_1,~ \rho_2,~ \gamma$ be positive parameters.
\noindent {\bf Iterative algorithm 2.1.} Given $x_1^0\in H_1,$ compute the iterative sequences $\{x_1^n\}$ defined by the iterative schemes:
$$g_1(y^n)= P_{C_1({x_1^n})}(g_1(x_1^n)- \rho_1 f_1(x_1^n)) \eqno(2.4a)$$
$$g_2(z^n)= P_{C_2({Ay^n})}(g_2(Ay^n)- \rho_2 f_2(Ay^n)) \eqno(2.4b)$$
$$x_1^{n+1}=(1-\alpha^n)x_1^n +\alpha^n[y^n+\gamma A^*(z^n-Ay^n)] \eqno(2.4c)$$
for all $n=0,1,2,\ldots$; $\rho_i, \gamma >0$.
If we set $g_i=I_i$, then Iterative algorithm 2.1 is reduced to the following iterative algorithm for SpQVIP(1.5a-b):
\noindent {\bf Iterative algorithm 2.2.} Given $x_1^0\in H_1,$ compute the iterative sequences $\{x_1^n\}$ defined by the iterative schemes:
$$y^n= P_{C_1({x_1^n})}(x_1^n- \rho_1 f_1(x_1^n)) \eqno(2.5a)$$
$$z^n= P_{C_2({Ay^n})}(Ay^n- \rho_2 f_2(Ay^n)) \eqno(2.5b)$$
$$x_1^{n+1}=(1-\alpha^n)x_1^n +\alpha^n[y^n+\gamma A^*(z^n-Ay^n)] \eqno(2.5c)$$
for all $n=0,1,2,\ldots$; $\rho_i, \gamma >0$.
If we set $C_i(x_i)=C_i$ for all $x_i \in H_i$, then Iterative algorithm 2.1 is reduced to the following iterative algorithm for SpGVIP(1.6a-b):
\noindent {\bf Iterative algorithm 2.3.} Given $x_1^0\in H_1,$ compute the iterative sequences $\{x_1^n\}$ defined by the iterative schemes:
$$g_1(y^n)= P_{C_1}(g_1(x_1^n)- \rho_1 f_1(x_1^n)) \eqno(2.6a)$$
$$g_2(z^n)= P_{C_2}(g_2(Ay^n)- \rho_2 f_2(Ay^n)) \eqno(2.6b)$$
$$x_1^{n+1}=(1-\alpha^n)x_1^n +\alpha^n[y^n+\gamma A^*(z^n-Ay^n)] \eqno(2.6c)$$
for all $n=0,1,2,\ldots$; $\rho_i, \gamma >0$.
If we set $C_i(x_i)=C_i$ for all $x_i \in H_i$, and $g_i=I_i$, then Iterative algorithm 2.1 is reduced to the following iterative algorithm for SpVIP(1.3a-b):
\noindent {\bf Iterative algorithm 2.4 [25].} Given $x_1^0\in H_1,$ compute the iterative sequences $\{x_1^n\}$ defined by the iterative schemes:
$$y^n= P_{C_1}(x_1^n- \rho_1 f_1(x_1^n))$$
$$z^n= P_{C_2}(Ay^n- \rho_2 f_2(Ay^n)) $$
$$x_1^{n+1}=(1-\alpha^n)x_1^n +\alpha^n[y^n+\gamma A^*(z^n-Ay^n)] $$
for all $n=0,1,2,\ldots$; $\rho_i, \gamma >0$.
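To illustrate how Iterative algorithm 2.4 behaves, consider the following toy instance of SpVIP(1.3a-b); all problem data ($C_1=[0,1]$, $C_2=[0,2]$, $f_1(x)=x-0.5$, $f_2(y)=y-1$, $Ax=2x$) are our illustrative choices, for which $x_1^*=0.5$ solves (1.3a) and $Ax_1^*=1$ solves (1.3b).

```python
# Toy run of Iterative algorithm 2.4 in H1 = H2 = R.  Here A is
# multiplication by 2, so A* is also multiplication by 2.
def P(lo, hi, t):
    # metric projection onto the interval [lo, hi]
    return min(max(t, lo), hi)

rho1, rho2, alpha = 0.5, 0.5, 0.5
gamma = 0.2                       # gamma in (0, 2/||A||^2) = (0, 0.5)
A = lambda x: 2.0 * x

x = 0.0                           # initial guess x_1^0
for n in range(200):
    y = P(0.0, 1.0, x - rho1 * (x - 0.5))          # first projection step
    z = P(0.0, 2.0, A(y) - rho2 * (A(y) - 1.0))    # second projection step
    x = (1 - alpha) * x + alpha * (y + gamma * 2.0 * (z - A(y)))  # Mann step

assert abs(x - 0.5) < 1e-8        # converges to x_1^* = 0.5
assert abs(A(x) - 1.0) < 1e-7     # and A x_1^* = 1 solves the second VIP
```

For this instance the update reduces to the affine contraction $x^{n+1}=0.65\,x^n+0.175$, whose unique fixed point is $0.5$.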
If we set $H_2=H_1$; $C_2(x_1)=C_1(x_1)$ for all $x_1$; $f_2=f_1$; and $g_i=I_i$, then Iterative algorithm 2.1 is reduced to the following Mann iterative algorithm for QVIP(1.2):
\noindent {\bf Iterative algorithm 2.5.} Given $x_1^0\in H_1,$ compute the iterative sequences $\{x_1^n\}$ defined by the iterative schemes:
$$y^n= P_{C_1}(x_1^n- \rho_1 f_1(x_1^n))$$
$$x_1^{n+1}=(1-\alpha^n)x_1^n +\alpha^n y^n $$
for all $n=0,1,2,\ldots$; $\rho_1>0$.
\noindent {\bf Assumption 2.1.} For all $x_i, y_i, z_i \in H_i$, the operator $P_{C_i({x_i})}$ satisfies the condition:
$$\|P_{C_i({x_i})}(z_i)-P_{C_i({y_i})}(z_i)\| \leq \nu_i\|x_i-y_i\|,$$
for some constant $\nu_i>0$.
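Assumption 2.1 holds, for instance, for the classical moving set $C(x)=m(x)+C$ mentioned in Section 1: since $P_{m(x)+C}(z)=m(x)+P_{C}(z-m(x))$, the triangle inequality and nonexpansiveness of $P_C$ give $\|P_{C(x)}(z)-P_{C(y)}(z)\|\le 2\|m(x)-m(y)\|$. The sketch below (with our illustrative choices $H=\mathbb{R}$, $C=[0,1]$, $m(x)=0.3x$, so one may take $\nu=0.6$) checks this bound at random points.

```python
import random

def m(x):
    # illustrative Lipschitz shift with constant 0.3
    return 0.3 * x

def proj_C(t):
    # projection onto C = [0, 1]
    return min(max(t, 0.0), 1.0)

def proj_Cx(x, z):
    # projection onto the moving set C(x) = m(x) + C
    return m(x) + proj_C(z - m(x))

nu = 2 * 0.3   # elementary bound nu = 2 * Lip(m)
random.seed(1)
for _ in range(1000):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    assert abs(proj_Cx(x, z) - proj_Cx(y, z)) <= nu * abs(x - y) + 1e-12
```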
\noindent{\bf Definition 2.1.} Let $g_1:H_1 \to H_1$ be a given mapping. A nonlinear mapping $f_1:H_1 \to H_1$ is said to be:
\begin{enumerate}
\item[(i)] $\alpha_1$-{\it strongly monotone with respect to $g_1$}, if there exists a constant $\alpha_1 >0$ such that
$$\langle f_1(x)-f_1(\bar{x}), g_1(x)-g_1(\bar{x})\rangle~ \geq \alpha_1\|x-\bar{x}\|^2, ~~\forall x,\bar{x} \in H_1;$$
\item[(ii)] $\beta_1$-{\it Lipschitz continuous}, if there exists a constant $\beta_1>0$ such that
$$ \|f_1(x)-f_1(\bar{x})\|\leq \beta_1\|x-\bar{x}\|, ~~\forall x,\bar{x} \in H_1.$$
\end{enumerate}
\noindent {\bf 3. Results}
Now, we study the convergence of the Iterative algorithm 2.1 for SpGQVIP(1.4a-b).
\noindent{\bf Theorem 3.1.} For each $i\in \{1,2\}$, let $C_i: H_i \to 2^{H_i}$ be a set-valued mapping with nonempty, closed and convex values. Let $f_i:H_i \to H_i$ be $\alpha_i$-strongly monotone with respect to $g_i$ and $\beta_i$-Lipschitz continuous, and let $g_i:H_i \to H_i$ be $\delta_i$-Lipschitz continuous with $(g_i-I_i)$ $\sigma_i$-strongly monotone, where $I_i$ is the identity operator on $H_i$. Let $A: H_1 \to H_2$ be a bounded linear operator and $A^*$ its adjoint operator. Suppose $x_1^* \in H_1$ is a solution to SpGQVIP(1.4a-b) and that Assumption 2.1 holds. Then the sequence $\{x_1^n\}$ generated by Iterative algorithm 2.1 converges strongly to $x_1^*$, provided that the constants $\rho_i$ and $\gamma$ satisfy the following conditions:
$$\left|\rho_1 - \frac{\alpha_1}{\beta_1^2} \right|<\frac{\sqrt{\alpha_1^2-\beta_1^2(\delta_1^2-k_1^2)}}{\beta_1^2}$$
$$\alpha_1 > \beta_1 \sqrt{\delta_1^2-k_1^2};~~ k_1= \left[\frac{\sqrt{2\sigma_1+1}}{1+2\theta_2}-\nu_1\right];~~\delta_1> \left|k_1 \right|;$$
$$0< \theta_2=\frac{1}{\sqrt{2\sigma_2+1}}\left\{\sqrt{\delta_2^2-2\rho_2 \alpha_2 + \rho_2^2 \beta_2^2}+\nu_2\right\};~~\rho_2>0;~~ \gamma \in \left(0, \frac{2}{\|A\|^2} \right) $$
\parindent=8mm
\noindent{\bf Proof.} Since $x_1^* \in H_1$ is a solution to SpGQVIP(1.4a-b), with $x_2^*=Ax_1^*$ we have $g_i(x_i^*)\in C_i(x_i^*)$ and
$$g_1(x_1^*)= P_{C_1(x_1^*)}(g_1(x_1^*)- \rho_1 f_1(x_1^*)), \eqno(3.1)$$
$$g_2(Ax_1^*)= P_{C_2(Ax_1^*)}(g_2(Ax_1^*)- \rho_2 f_2(Ax_1^*)), \eqno(3.2)$$
for $\rho_i >0$.
From Iterative algorithm 2.1(2.4a), Assumption 2.1 and (3.1), we have
$$\|g_1(y^n)-g_1(x_1^*)\|=\|P_{C_1(x_1^n)}(g_1(x_1^n)- \rho_1 f_1(x_1^n))-P_{C_1(x_1^*)}(g_1(x_1^*)- \rho_1 f_1(x_1^*))\|$$
$$\leq \|P_{C_1(x_1^n)}(g_1(x_1^n)- \rho_1 f_1(x_1^n))-P_{C_1(x_1^n)}(g_1(x_1^*)- \rho_1 f_1(x_1^*))\|$$
$$+\|P_{C_1(x_1^n)}(g_1(x_1^*)- \rho_1 f_1(x_1^*))-P_{C_1(x_1^*)}(g_1(x_1^*)- \rho_1 f_1(x_1^*))\|$$
$$\leq \|g_1(x_1^n)-g_1(x_1^*)- \rho_1 (f_1(x_1^n)- f_1(x_1^*))\|+\nu_1 \|x_1^n-x_1^*\|.$$
Now, using the fact that $f_1$ is $\alpha_1$-strongly monotone with respect to $g_1$ and $\beta_1$-Lipschitz continuous, and $g_1$ is $\delta_1$-Lipschitz continuous, we have
$$\|g_1(x_1^n)-g_1(x_1^*)-\rho_1 (f_1(x_1^n)- f_1(x_1^*))\|^2 \hspace{2.5in}$$
$$ = \|g_1(x_1^n)-g_1(x_1^*)\|^2 -2 \rho_1 \langle f_1(x_1^n)- f_1(x_1^*), g_1(x_1^n)-g_1(x_1^*)\rangle $$
$$+\rho_1^2 \|f_1(x_1^n)- f_1(x_1^*)\|^2\hspace{2.5in}$$
$$\leq (\delta_1^2-2\rho_1 \alpha_1 + \rho_1^2 \beta_1^2)\|x_1^n-x_1^*\|^2.\hspace{1.8in}$$
As a result, we obtain
$$\|g_1(y^n)-g_1(x_1^*)\| \leq \left\{\sqrt{\delta_1^2-2\rho_1 \alpha_1 + \rho_1^2 \beta_1^2} +\nu_1\right\} \|x_1^n-x_1^*\|. \eqno(3.3)$$
Since $(g_1-I_1)$ is $\sigma_1$-strongly monotone, we have
$$\|y^n-x_1^*\|^2 \leq \|g_1(y^n)-g_1(x_1^*)\|^2 -2\langle (g_1-I_1)y^n-(g_1-I_1)x_1^*, y^n-x_1^* \rangle$$
$$\leq \|g_1(y^n)-g_1(x_1^*)\|^2 -2\sigma_1\|y^n-x_1^*\|^2,\hspace{.5in}$$
which implies
$$\|y^n-x_1^*\| \leq \frac{1}{\sqrt{2\sigma_1+1}}\|g_1(y^n)-g_1(x_1^*)\|. \eqno(3.4)$$
From (3.3) and (3.4), we have
$$\|y^n-x_1^*\| \leq \theta_1 \|x_1^n-x_1^*\|, \eqno(3.6)$$
where $\theta_1=\frac{1}{\sqrt{2\sigma_1+1}}\left\{\sqrt{\delta_1^2-2\rho_1 \alpha_1 + \rho_1^2 \beta_1^2} +\nu_1\right\}.$
Similarly, from Iterative algorithm 2.1(2.4b), Assumption 2.1 and (3.2) and using the fact that $f_2$ is $\alpha_2$-strongly monotone with respect to $g_2$ and $\beta_2$-Lipschitz continuous, and $(g_2-I_2)$ is $\sigma_2$-strongly monotone, and $g_2$ is $\delta_2$-Lipschitz continuous, we have
$$\|g_2(z^n)-g_2(Ax_1^*)\| \leq \left\{\sqrt{\delta_2^2-2\rho_2 \alpha_2 + \rho_2^2 \beta_2^2}+\nu_2\right\} \|Ay^n-Ax_1^*\|,\eqno(3.7)$$
and
$$\|z^n-Ax_1^*\| \leq \theta_2 \|Ay^n-Ax_1^*\|, \eqno(3.8)$$
where $\theta_2=\frac{1}{\sqrt{2\sigma_2+1}}\left\{\sqrt{\delta_2^2-2\rho_2 \alpha_2 + \rho_2^2 \beta_2^2}+\nu_2\right\}.$
Next, from Iterative algorithm 2.1(2.4c), we have
$$\|x_1^{n+1}-x_1^*\|\leq (1-\alpha^n)\|x_1^{n}-x_1^*\|+ \alpha^n[\|y^n-x_1^*-\gamma A^*(Ay^n-Ax_1^*)\|+\gamma\|A^*(z^n-Ax_1^*)\|]\eqno(3.9)$$
Further, using the definition of $A^*$, the fact that $A^*$ is a bounded linear operator with $\|A^*\|=\|A\|$, and the given condition on $\gamma$, we have
$$\|y^n-x_1^*-\gamma A^*(Ay^n-Ax_1^*)\|^2=\|y^n-x_1^*\|^2-2\gamma \langle y^n-x_1^*,A^*(Ay^n-Ax_1^*)\rangle + \gamma^2\|A^*(Ay^n-Ax_1^*)\|^2$$
$$\leq\|y^n-x_1^*\|^2- \gamma(2- \gamma\|A\|^2) \|Ay^n-Ax_1^*\|^2 $$
$$\leq \|y^n-x_1^*\|^2\hspace{2.1in}\eqno(3.10)$$
and, using (3.8), we have
$$\|A^*(z^n-Ax_1^*)\|\leq \|A\|\|z^n-Ax_1^*\|\hspace{1.5in}$$
$$ \leq \theta_2 \|A\|\|Ay^n-Ax_1^*\|$$
$$\leq \theta_2 \|A\|^2\|y^n-x_1^*\|. \hspace{.2in}\eqno(3.11)$$
Combining (3.10) and (3.11) with inequality (3.9), we obtain
$$\|x_1^{n+1}-x_1^*\|\leq [1-\alpha^n(1-\theta)] \|x_1^{n}-x_1^*\|,$$
where $\theta= \theta_1(1+\gamma \|A\|^2 \theta_2)$.
Hence, after $n$ iterations, we obtain
$$\|x_1^{n+1}-x_1^*\|\leq \prod\limits^{n}_{j=1}[1-\alpha^j(1-\theta)] \|x_1^0-x_1^*\|. \eqno(3.12)$$
It follows from the conditions on $\rho_1$ and $\rho_2$ that $\theta \in (0,1)$. Since $\sum \limits^{\infty}_{n=1} \alpha^n=+\infty$ and $\theta \in (0,1)$, it follows in the light of [10] that
$$\lim_{n \to \infty}\prod\limits^{n}_{j=1}[1-\alpha^j(1-\theta)]=0.$$
Thus it follows from (3.12) that $\{x_1^n\}$ converges strongly to $x_1^*$ as $n \to +\infty$. Since $A$ is continuous, it follows from (3.3), (3.6), (3.7) and (3.8) that $y^n \to x_1^*$, $g_1(y^n) \to g_1(x_1^*)$, $Ay^n \to Ax_1^*$, $z^n\to Ax_1^*$ and $g_2(z^n)\to g_2(Ax_1^*)$ as $n \to +\infty$. This completes the proof.\\
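The scheme analyzed above can be illustrated numerically. The following sketch is ours, not the authors' implementation: it runs the recursion reconstructed from inequality (3.9), namely $y^n=P_{C_1}(x_1^n-\rho_1 f_1(x_1^n))$, $z^n=P_{C_2}(Ay^n-\rho_2 f_2(Ay^n))$, $x_1^{n+1}=(1-\alpha^n)x_1^n+\alpha^n[y^n-\gamma A^*(Ay^n-z^n)]$, on the toy instance $H_1=H_2=\mathbb{R}$, $g_i=I_i$, $f_i(x)=x$, $C_1=[0,1]$, $C_2=[0,2]$, $Ax=2x$, whose solution is $x_1^*=0$; the constants are chosen to satisfy the conditions of Theorem 3.1, in particular $\gamma\in(0,2/\|A\|^2)=(0,0.5)$.

```python
# Toy 1-D sketch of the split variational inequality iteration
# (our simplified instance, not the paper's general Hilbert-space setting):
#   H1 = H2 = R,  f1(x) = x,  f2(y) = y,  A x = 2x,
#   C1 = [0, 1],  C2 = [0, 2],  so the solution is x* = 0.

def proj(v, lo, hi):
    """Metric projection of v onto the interval [lo, hi]."""
    return min(max(v, lo), hi)

def split_vi_iteration(x0, rho1=0.5, rho2=0.5, gamma=0.2, alpha=0.5, n_iter=200):
    # gamma must lie in (0, 2/||A||^2) = (0, 0.5) here, since ||A|| = 2;
    # the constant relaxation alpha satisfies sum_n alpha = +infinity.
    A = 2.0
    x = x0
    for _ in range(n_iter):
        y = proj(x - rho1 * x, 0.0, 1.0)       # y^n = P_{C1}(x^n - rho1 f1(x^n))
        Ay = A * y
        z = proj(Ay - rho2 * Ay, 0.0, 2.0)     # z^n = P_{C2}(A y^n - rho2 f2(A y^n))
        # x^{n+1} = (1 - alpha) x^n + alpha [ y^n - gamma A*(A y^n - z^n) ]
        x = (1 - alpha) * x + alpha * (y - gamma * A * (Ay - z))
    return x

print(abs(split_vi_iteration(0.8)))   # converges to the solution x* = 0
```

With these choices one finds $x_1^{n+1}=0.65\,x_1^n$, comfortably inside the theoretical contraction factor $1-\alpha^n(1-\theta)=0.85$ obtained from (3.12) with $\theta_1=\theta_2=0.5$ and $\theta=0.7$.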
If we set $g_i=I_i,$ then Theorem 3.1 reduces to the following result for the convergence of the Iterative algorithm 2.2 for SpQVIP(1.5a-b).
\noindent{\bf Corollary 3.1.} For each $i\in \{1,2\}$, let $C_i: H_i \to 2^{H_i}$ be a nonempty, closed and convex set valued mapping. Let $f_i:H_i \to H_i$ be $\alpha_i$-strongly monotone and $\beta_i$-Lipschitz continuous and let $A: H_1 \to H_2$ be a bounded linear operator and $A^*$ be its adjoint operator. Suppose $x_1^* \in H_1$ is a solution to SpQVIP(1.5a-b) and Assumption 2.1 holds. Then the sequence $\{x_1^n\}$ generated by Iterative algorithm 2.2 converges strongly to $x_1^*$ provided that the constants $ \rho_i $ and $\gamma$ satisfy the following conditions:
$$\left|\rho_1 - \frac{\alpha_1}{\beta_1^2} \right|<\frac{\sqrt{\alpha_1^2-\beta_1^2(1-k_1^2)}}{\beta_1^2}$$
$$\alpha_1 > \beta_1 \sqrt{1-k_1^2};~~ k_1= \frac{1}{1+2\theta_2}-\nu_1;~~\left|k_1 \right|<1;$$
$$0< \theta_2=\left\{\sqrt{1-2\rho_2 \alpha_2 + \rho_2^2 \beta_2^2}+\nu_2\right\};~~\rho_2>0;~~ \gamma \in \left(0, \frac{2}{\|A\|^2} \right) $$
If we set $C_i(x_i)=C_i,~ \forall x_i\in H_i$, then Theorem 3.1 reduces to the following result for the convergence of the Iterative algorithm 2.3 for SpGVIP(1.6a-b).
\noindent{\bf Corollary 3.2.} For each $i\in \{1,2\}$, let $C_i$ be a nonempty, closed and convex set in $H_i$. Let $f_i:H_i \to H_i$ be $\alpha_i$-strongly monotone with respect to $g_i$ and $\beta_i$-Lipschitz continuous and let $g_i:H_i \to H_i$ be $\delta_i$-Lipschitz continuous and $(g_i-I_i)$ be $\sigma_i$-strongly monotone, where $I_i$ is the identity operator on $H_i$. Let $A: H_1 \to H_2$ be a bounded linear operator and $A^*$ be its adjoint operator. Suppose $x_1^* \in H_1$ is a solution to SpGVIP(1.6a-b) and Assumption 2.1 holds. Then the sequence $\{x_1^n\}$ generated by Iterative algorithm 2.3 converges strongly to $x_1^*$ provided that the constants $ \rho_i $ and $\gamma$ satisfy the following conditions:
$$\left|\rho_1 - \frac{\alpha_1}{\beta_1^2} \right|<\frac{\sqrt{\alpha_1^2-\beta_1^2(\delta_1^2-k_1^2)}}{\beta_1^2}$$
$$\alpha_1 > \beta_1 \sqrt{\delta_1^2-k_1^2};~~ k_1=\frac{\sqrt{2\sigma_1+1}}{1+2\theta_2};~~\delta_1> \left|k_1 \right|;$$
$$0< \theta_2= \sqrt{\frac{\delta_2^2-2\rho_2 \alpha_2 + \rho_2^2 \beta_2^2}{2\sigma_2+1}};~~\rho_2>0;~~ \gamma \in \left(0, \frac{2}{\|A\|^2} \right) $$
If we set $H_2=H_1;~ C_2(x_2)=C_1(x_1)~ \forall x_i$; $f_2=f_1;~ A=I_1$, and $g_i=I_i$, then Theorem 3.1 reduces to the following result for the convergence of the Iterative algorithm 2.5 for QVIP(1.2).
\noindent{\bf Corollary 3.3.} Let $C_1: H_1 \to 2^{H_1}$ be a nonempty, closed and convex set valued mapping. Let $f_1:H_1 \to H_1$ be $\alpha_1$-strongly monotone and $\beta_1$-Lipschitz continuous. Suppose $x_1^* \in H_1$ is a solution to QVIP(1.2) and Assumption 2.1 holds. Then the sequence $\{x_1^n\}$ generated by Iterative algorithm 2.5 converges strongly to $x_1^*$ provided that the constant $ \rho_1 $ satisfies the following conditions:
$$\left|\rho_1 - \frac{\alpha_1}{\beta_1^2} \right|<\frac{\sqrt{\alpha_1^2-\beta_1^2(1-k_1^2)}}{\beta_1^2}$$
$$\alpha_1 > \beta_1 \sqrt{1-k_1^2};~~ k_1= 1-\nu_1;~~\left|k_1 \right|<1.$$
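With $g_1=I_1$ the recursion behind Corollary 3.3 is the projection iteration $x_1^{n+1}=P_{C_1(x_1^n)}(x_1^n-\rho_1 f_1(x_1^n))$. The sketch below is a minimal illustration of ours under the further simplification of a constant set $C_1$ (so that $\nu_1=0$, $k_1=1$, and the condition on $\rho_1$ reduces to $\rho_1\in(0,2\alpha_1/\beta_1^2)$), with a toy mapping $f_1(x)=2x-1$ of our own choosing on $C_1=[0,2]$, for which the solution is $x_1^*=1/2$.

```python
def proj(v, lo=0.0, hi=2.0):
    """Metric projection onto the (constant) set C1 = [0, 2]."""
    return min(max(v, lo), hi)

def f(x):
    # 2-strongly monotone and 2-Lipschitz; the VI solution on [0, 2] is x* = 0.5
    return 2.0 * x - 1.0

def projection_iteration(x0, rho=0.2, n_iter=100):
    """x^{n+1} = P_C(x^n - rho f(x^n)); here it contracts with factor |1 - 2 rho|."""
    x = x0
    for _ in range(n_iter):
        x = proj(x - rho * f(x))
    return x

print(projection_iteration(2.0))   # approaches x* = 0.5
```

Here $\alpha_1=\beta_1=2$, so any $\rho_1\in(0,1)$ satisfies the corollary's condition; $\rho_1=0.2$ gives the contraction factor $0.6$ per step.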
\noindent{\bf Remark 3.1.}~ It is of further research effort to extend the iterative method presented in this paper for solving the split variational inclusions [19] and the split equilibrium problem [22].
\noindent{\bf{References}}
\begin{enumerate}
\item [{1.}] Stampacchia, G: Formes bilin\'eaires coercitives sur les ensembles convexes. C.R. Acad. Sci. Paris {\bf{258}}, 4413-4416 (1964)
\item [{2.}] Fichera, G: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei. Mem. Cl. Sci. Nat. Sez. Ia {\bf{7}}(8), 91-140 (1963/64)
\item [{3.}] Bensoussan, A, Lions, JL: Applications of Variational Inequalities to Stochastic Control. North-Holland, Amsterdam, 1982
\item [{4.}] Bensoussan, A, Lions, JL: Impulse Control and Quasivariational Inequalities. Gauthiers Villers, Paris, 1984.
\item [{5.}] Baiocchi, C, Capelo, A: Variational and Quasi-variational Inequalities. Wiley, New York, 1984
\item [{6.}] Crank, J: Free and Moving Boundary Problems. Clarendon Press, Oxford, 1984
\item [{7.}] Glowinski, R: Numerical Methods for Nonlinear Variational Problems. Springer, Berlin, 1984
\item [{8.}] Kikuchi, N, Oden, JT: Contact Problems in Elasticity. SIAM, Philadelphia, 1998
\item [{9.}] Bensoussan, A, Goursat, M, Lions, JL: Contr\^{o}le impulsionnel et in\'equations quasi-variationnelles stationnaires. C.R. Acad. Sci. {\bf 276}, 1279-1284 (1973)
\item [{10.}] Kazmi, KR: On a class of quasivariational inequalities. New Zealand J. Math. {\bf 24}, 17-23 (1995)
\item [{11.}] Kazmi, KR: Mann and Ishikawa type perturbed iterative algorithms for generalized quasi-variational inclusions. J. Math. Anal. Appl. {\bf 209}, 572-584 (1997)
\item [{12.}] Kazmi, KR, Bhat, MI, Khan, FA: A class of multi-valued quasi-variational inequalities. J. Nonlinear Convex Anal. {\bf 6}(3), 487-495 (2005)
\item [{13.}] Kazmi, KR: Iterative algorithm for a class of generalized quasi-variational inclusions with fuzzy mappings in Banach spaces. J. Comput. Appl. Math. {\bf 188}(1), 1-11 (2006)
\item [{14.}] Kazmi, KR, Khan, FA, Shahzad, M: Existence and iterative approximation of a unique solution of a system of general quasi-variational inequality problems, Thai J. Math. {\bf 8}(2), 405-417 (2010)
\item [{15.}] Censor, Y, Gibali, A, Reich, S: Algorithms for the split variational inequality problem. Numerical Algorithms {\bf 59}, 301-323 (2012)
\item [{16.}] Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity modulated radiation therapy. Physics in Medicine and Biology {\bf 51}, 2353-2365 (2006)
\item [{17.}] Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numerical Algorithms {\bf 8}, 221-239 (1994)
\item [{18.}] Combettes, PL: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. {\bf 95}, 155-270 (1996)
\item[{19.}] Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. {\bf 150}, 275-283 (2011)
\item[{20.}] Byrne, C, Censor, Y, Gibali, A, Reich, S: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. {\bf 13}, 759-775 (2012)
\item[{21.}] Kazmi, KR, Rizvi, SH: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egyptian Math. Soc. {\bf 21}, 44-51 (2013)
\item[{22.}] Kazmi, KR, Rizvi, SH: Iterative approximation of a common solution of a split generalized equilibrium problem and a fixed point problem for nonexpansive semigroup, Mathematical Sciences {\bf 7}, Art. 1 (2013) (doi 10.1186/2251-7456-7-1)
\item[{23.}] Kazmi, KR, Rizvi, SH: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. In Press, Optimization Letters (2013)(doi 10.1007/s11590-013-0629-2)
\item[{24.}] Kazmi, KR, Rizvi, SH: Implicit iterative method for approximating a common solution of split equilibrium problem and fixed point problem for a nonexpansive semigroup. In Press, Arab J. Math. Sci. (2013)(doi 10.1016/j.ajmsc.2013.04.002)
\item[{25.}] Kazmi, KR: Split nonconvex variational inequality problem. Mathematical Sciences {\bf 7}, Art. 36 (2013) (doi 10.1186/2251-7456-7-36)
\end{enumerate}
\end{document}
\begin{document}
\title{Quantum theory: the role of
microsystems and macrosystems}
\author{Ludovico \surname{Lanz}}
\author{Bassano \surname{Vacchini}}
\affiliation{Dipartimento di Fisica
dell'Universit\`a di Milano and INFN,
Sezione di Milano
\\
Via Celoria 16, I--20133, Milan, Italy}
\author{Olaf \surname{Melsheimer}}
\affiliation{Fachbereich Physik, Philipps-Universit\"at
\\
Renthof 7, D--35032, Marburg, Germany}
\date{\today}
\begin{abstract}
We stress the notion of statistical experiment, which is mandatory
for quantum mechanics, and recall Ludwig's foundation of quantum
mechanics, which provides the most general framework to deal with
statistical experiments giving evidence for particles. In this
approach particles appear as interaction carriers between
preparation and registration apparatuses. We further briefly point
out the more modern and versatile formalism of quantum theory,
stressing the relevance of probabilistic concepts in its
formulation. At last we discuss the role of macrosystems, focusing
on quantum field theory for their description and introducing for
them objective state parameters.
\end{abstract}
\pacs{03.65.Ta,03.65.Ca,03.65.Yz}
\maketitle
\section{Introduction}
\label{intro}
Quantum theory is an increasingly successful theory of matter and some
typical features that have appeared paradoxical, such as EPR
correlations, are now on the way to becoming a technological resource in
the realm of quantum communication. There is however a fundamental
difficulty: it appears as a theory of measurements that runs into
troubles if one describes in the most naive quantum mechanical way a
measuring device at work. A precise proof of this was recently given
by Bassi and Ghirardi\cite{Bassi-decoh}, further pointing to unitary
quantum mechanical evolution as the basic flaw and proposing the GRW
modification of the Schr\"odinger equation by a universal stochastic
process\cite{Bassi-GRW-review}. Many other proposals have appeared to make
quantum theory less measuring device dependent; just to mention a few
let us recall the Bohmian interpretation (see\cite{Duerr} and
literature therein for recent review and\cite{Duerr04} for present
developments), together with a new more general framework suggested by
Adler\cite{Adler}, also motivated by open problems in high energy
physics, by which stochastic modifications to the Schr\"odinger
equation can become a more natural low energy effective theory. Our
aim in this paper is to recall and refresh the, in our opinion, very
deep reformulation of the foundations of quantum mechanics that was given
by Ludwig: in this approach the very concept of microsystem is
investigated and quantum theory turns out to be a naturally incomplete
theory of it, completely satisfactory at a non-relativistic level. Its
extension to general systems cannot dispense with thermodynamics and,
in our opinion, provides a natural opening to quantum field theory.
Section~\ref{sec:from-syst-micr} stresses the essential role played by
statistical experiments in quantum mechanics; in Section~\ref{micro}
the concept of microsystem as proposed by Ludwig is recalled; in
Section~\ref{modern} we briefly review the modern formulation of
quantum mechanics, which besides Ludwig's approach also arose quite
independently in other contexts; in Section~\ref{microtomacro} the
generalization to many types of microsystems is considered together
with the role played by quantum field theory in order to cope with
this situation; in Sect.~\ref{sec:conclusions-outlook} we finally
briefly summarize the contents and main message of the paper, and
discuss possible future developments.
\section{Statistical experiments}
\label{sec:from-syst-micr}
Quantum theory marked a turning point in physics, since the basic
Galilean concept of reproducible experiments encountered a fundamental
crisis and it became necessary to weaken and clarify the notion of
\textit{reproducibility}. After Galileo, in the pre--quantum era, the
concept of reproducible experiment made it possible to delimit a part of
the world ruled by physics, eventually arriving at an underlying
atomistic model, with the interactions between the elementary components
as the universal unifying core of the huge phenomenological complexity.
The quantum era is characterized by the evidence that naive
reproducibility fails in experiments focused on microsystems and must
be generalized by the much subtler concept of reproducibility of
statistical experiments, so that just the most fundamental physics
becomes \textit{essentially statistical}. An experiment deals with a
system reprepared in a well fixed way for a large number of
independent experimental runs, and only the frequencies of the well
defined events one is looking for in these runs are the result of the
experiment and have a counterpart in the underlying theory. What is
going on during a single run obviously belongs to reality, but not in
all aspects to physics. To physics belongs what we have described as
\textit{fixable} in the repetitions and the frequencies of well
identified changes. Under suitable conditions the essentially statistical
character we have pointed out can be neglected and then the previously
mentioned classical description of matter appears and provides a
conceptual structure leading through a procedure called
\textit{quantization} to the actual quantum theory. This could feed
the belief that \textit{quantization} of classical frameworks is
fundamental enough to catch the extremely vast phenomenology of
physical systems. On the contrary, building on Ludwig's foundations of
quantum theory we shall take such phenomenology as the starting point
of quantum theory and as its natural completion. It is present day
technology offering high vacuum techniques, highly efficient
detectors, highly controllable sources and devices for trapping and
handling single microsystems that provides most direct evidence for
the peculiarity of microsystems. In the more commonly observed
phenomenology what happens inside a certain space region until a
certain time influences at future times the adjacent space regions,
involving simultaneously an infinity of space points. On the contrary,
if an elementary microsystem is prepared in this region (e.g. an
attenuated source is located there) a process is generated starting at
some future time point from \textit{one} single space point at an
appreciable distance (perhaps even an enormous distance in
astrophysical extrapolations) which then expands inside its future
light cone. In these preparations an effect is triggered only at one
space-time point in the universe even if detectors would be placed
everywhere. The position of this point and often the whole process
contained in its future light-cone is a stochastic variable; by many
runs frequencies can be established and repeating this whole procedure
the reproducibility of these frequencies can be controlled and one
gets evidence of a statistical law. This statistical law depends in
general strongly also on the devices placed between the source and the
points from which macroscopic processes can start. Still more
impressive is the most direct generalization of a microsystem
consisting of two elementary components: a source of such a system
causes two processes inside the future light cones of two space-time
points, with stochasticity as before but with the restriction of
absolutely strict correlations between the two parts of the whole
processes, associated to conservation rules, quite independently of
the distance of the two points. This is a preparation such that a
process begins, due to the preparation, at only two space-time points
in the universe, consisting of two parts, extremely strongly correlated
with each other, each part by itself showing a stochastic character. Often the
processes started by microsystems behave as further sources of
elementary and composed microsystems randomly involving a finite
number of space-time points correlated among themselves and starting
points of further correlated processes. The fundamental consequence
of quantum theory is the statistical flavour attached to experiments,
by this refinement of the concept of reproducibility. Since
operatively a statistical experiment is a much more intriguing
enterprise, where some conditions must be explicitly selected and then
they must be controlled and guaranteed during several runs, one can
indeed expect that the mathematical representation of what is done in
an experiment must have a higher complexity than it had in pre--quantum
physics. Ludwig's work can be understood as a fundamental
justification of the new mathematical tools. In setting up an
experiment aiming to establish results in some physical context,
previous chapters of physics are taken as consolidated, some
pretheories are taken as given and already enter in the language that
is used by people setting the experiment. So physicists, engineers,
technicians building apparatuses for a high energy experiment aiming
e.g. to verify aspects of the standard model, use experience and
knowledge coming from a well established and much simpler
phenomenology: it turns out that they have under complete control a
huge part of what goes on in the experiment, but obviously not those
aspects of reality that the experiment is challenging. In experiments
in the quantum era the technology of suitable triggering of
apparatuses with correlation and anticorrelation settings is
superposed to more classical arrangements related e.g. to Euclidean
geometry, to time determinations, to phase space of classical
mechanics and so on.
\section{The notion of microsystem}
\label{micro}
In his axiomatic approach to the foundations of quantum mechanics
Ludwig proposed to take as fundamental domain of the theory the
statistical experiments with single microsystems and the frequencies
of the related phenomena. Instead of the particles themselves one
considers the macroscopic setup of any real experiment, which can be
divided into a preparation procedure and a registration procedure, both
to be described in terms of pretheories. A simple example of
preparation apparatus could be an accelerator plus target, while a
typical registration apparatus could be a bubble chamber. Once this
experimental setup is suitably described, one considers the rate
according to which microsystems prepared with the given preparation
apparatus trigger the assigned registration apparatus: these are the
frequencies to be compared with the quantum mechanical laws. The
general scheme of a statistical experiment can be depicted as follows,
with preparation and registration apparatuses displayed as boxes
(so-called \textit{Ludwig's Kisten}), the latter acted upon by the
former by means of a directed interaction brought about by a
microsystem
\begin{displaymath}
\fbox{
\hphantom{sp}
\vrule height 20 pt depth 20 pt width 0 pt
\mbox{$\mathrm{preparation}
\atop\mathrm{apparatus}
$}
\vrule height 20 pt depth 20 pt width 0 pt
\hphantom{sp}
}
\quad
{{\mathrm{directed}\atop\mathrm{interaction}}\atop \longrightarrow}
\quad
\fbox{
\hphantom{sp}
\vrule height 20 pt depth 20 pt width 0 pt
\mbox{$
\textrm{registration}
\atop
\textrm{apparatus}
$}
\vrule height 20 pt depth 20 pt width 0 pt
\hphantom{sp}
} \ .
\end{displaymath}
In this spirit we want to introduce the notion of microsystem as of
something which has been prepared by a preparation apparatus and
registered by a registration apparatus. To do this we need a
statistical theory, in terms of which the general structures of
preparation and registration, which can be applied both to microsystem
and macrosystem, can be described. We will almost verbatim follow the
introductory reference\cite{Ludwig-Grundlagen} when recalling the
basic axioms, but we shall proceed in a less detailed way with respect
to the full axiomatic approach initiated in the sixties and described
in\cite{Ludwig-Foundations}.
\subsection{Statistical selection procedures}
\label{sec:stat-select-proc}
Let $M$ be the set having as elements the representatives
of the physical features whose statistics we want to
describe (in the present case we shall concentrate on microsystems).
The statistics is related to selection procedures, by which
special features may be selected. A selection procedure is
to be described by a subset $a\subset M$, corresponding to the
subset of features (microsystems) that satisfy the given
selection procedure.
We define as \textsc{Selection Procedure} the
following mathematical structure: a set $M$ and a subset
${\cal S} \subset {P}(M)$ (where ${P}(M)$ denotes the power set of $M$) such that
\begin{description}
\item[S~1.1]
$
a, b \in {\cal S}, a\subset b \Rightarrow
b \backslash a \in {\cal S}
$
\item[S~1.2]
$
a, b \in {\cal S} \Rightarrow
a \cap b \in {\cal S}.
$
\end{description}
We call selection procedure both ${\cal S}$ and an element $a$ of
${\cal S}$. {\bf S~1.2} says that the selection procedure consisting
in selecting both according to $a$ and $b$ exists. If $a \subset b$
we say that $a$ is {\em finer} than $b$. {\bf S~1.1} says that if we
use two selection procedures $a$ and $b$, where $a$ is finer than $b$,
the rest of the objects $x \in b$, which do not satisfy the finer
criterion $a$, still constitute a selection procedure. Note that in
this construction it is not necessarily $M\in{\cal S}$. Only if some
selection has been performed on its elements does $M$ acquire a physical
meaning; therefore we shall not assume that $M$ itself belongs to
$\mathcal{S}$. In fact $a\in {\cal S}$, $M\in{\cal S}$ would lead to
$M\backslash a\in{\cal S}$ contrary to physical meaningfulness. Let
us consider in fact a given source for a certain type of particles.
For the prepared particles we can make important assertions about the
experiments for which the particles are used, but we cannot make any
definite statement for all other particles of the same type not
prepared from this source. Thus it is meaningful not to require that
$M$ be a selection procedure. Let us note that ${\cal S}(a)\equiv \{
b \in {\cal S} | b \subset a \} $ is a Boolean ring, while ${\cal S}$
need not even be a lattice, because $a, b \in{\cal S}$ does not
automatically imply $a \cup b \in {\cal S}$. In particular two
selection procedures $a, b \in {\cal S}$ are called
\textsc{Coexistent} relative to $c$ if both $a \subset c$ and $b
\subset c$.
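On a finite toy set the axioms \textbf{S~1.1} and \textbf{S~1.2} can be checked mechanically. The sketch below is purely illustrative (the helper \texttt{is\_selection\_procedure} and the families are our own): it also exhibits a family that is a selection procedure without containing $M$ and without being a lattice, as discussed above.

```python
from itertools import product

def is_selection_procedure(S):
    """Check S 1.1 and S 1.2 on a family S of frozensets (finite toy model)."""
    S = {frozenset(s) for s in S}
    for a, b in product(S, repeat=2):
        if a <= b and (b - a) not in S:   # S 1.1: a, b in S, a subset of b => b \ a in S
            return False
        if (a & b) not in S:              # S 1.2: a, b in S => intersection in S
            return False
    return True

# M = {1, 2, 3, 4}; the family selects only among {1, 2}, so M itself is not in S.
S = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
print(is_selection_procedure(S))                      # True

# S need not be a lattice: {1} union {2} = {1, 2} is absent, yet S 1.1/S 1.2 hold.
S_no_lattice = [frozenset(), frozenset({1}), frozenset({2})]
print(is_selection_procedure(S_no_lattice))           # True

# Dropping the empty set breaks the axioms ({1} and {2} intersect in the
# empty set, which is then missing from the family).
print(is_selection_procedure([frozenset({1}), frozenset({2})]))   # False
```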
It often happens in applications that two selection
procedures $a$ and $b$, where $b$ is finer than $a$,
are not statistically independent.
Consider for example an experiment in which of the $N$ systems
prepared according to the selection procedure $a$, $N_1$
also satisfy the selection criterion of $b$: we say that
$N_1/N$ is the relative frequency of $b$ relative to $a$.
If this frequency proves to be reproducible and is
confirmed by experiments with a great number of systems, we say that $b$
is statistically dependent on $a$.
Let ${\cal
S} \subset {P}(M)$ be a selection procedure, for which
{\bf S~1.1} and {\bf S~1.2} hold, and let ${\cal T} \equiv
\{
(a,b) | a, b \in {\cal S}, b \subset a, a \ne \emptyset
\}
$: we say that ${\cal S}$ is a \textsc{Statistical
Selection Procedure} whenever a real function $\lambda(a,b)$
with $0 \leq \lambda(a,b) \leq 1$ is defined on ${\cal T}$
such that
\begin{description}
\item[S~2.1]
$
a_1, a_2 \in {\cal S},
a_1 \cap a_2 = \emptyset,
a_1 \cup a_2 \in {\cal S}
\Rightarrow
\lambda(a_1 \cup a_2,a_1) + \lambda(a_1 \cup
a_2,a_2)=1
$
\item[S~2.2]
$
a_1, a_2, a_3 \in {\cal S},
a_1 \supset a_2 \supset a_3,
a_2 \ne \emptyset
\Rightarrow
\lambda(a_1,a_3)= \lambda(a_1,a_2)\lambda(a_2,a_3)
$
\item[S~2.3]
$
a_1, a_2 \in {\cal S}, a_1 \supset a_2,
a_2 \ne \emptyset
\Rightarrow
\lambda(a_1,a_2) \ne 0
$ .
\end{description}
$\lambda(a,b)$ is usually called the {\em conditional probability}
of $b$ relative to $a$ and represents the frequency with
which systems selected by $a$ also satisfy $b$.
If $a_1 \cup a_2$ is a selection procedure, both $a_1$ and
$a_2$ are finer than $a_1 \cup a_2$; if $a_1 \cap a_2 =
\emptyset $
they exclude each other. If $N$ systems are selected
according to $a_1 \cup a_2$, of which $N_1$ also satisfy
$a_1$ and $N_2$ satisfy $a_2$, because of $a_1 \cap a_2 =
\emptyset $
we have $N_1 + N_2 = N$: this explains {\bf S~2.1}.
If for three selection procedures we have $a_1 \supset a_2 \supset a_3$
and $N_1$ systems are selected according to $a_1$, between
these $N_2$ according to $a_2$, between these again $N_3$
according to $a_3$, we simply have
$N_3 / N_1 = (N_2 / N_1) (N_3 / N_2)$, that is to say {\bf
S~2.2}.
If $a_1 \supset a_2 \ne \emptyset$, of the $N$ systems
chosen according to $a_1$ certainly a nonzero fraction will also
satisfy $a_2$, which is {\bf S~2.3}.
From these axioms follows $\lambda(a_1,\emptyset)=0$ and
$\lambda(a_1,a_1)=1$; moreover, if $a_2 \cap a_3 = \emptyset$, $a_2,
a_3 \subset a_1$, we have
$
\lambda(a_1, a_2 \cup a_3) = \lambda(a_1,a_2) +
\lambda(a_1,a_3)
$.
Note that $\mu(b)=\lambda(a,b)$ is an additive measure
on the Boolean ring ${\cal S}(a)$ and for $a \supset a_1
\supset a_2$ we have $\lambda(a_1,a_2)=\mu(a_2) / \mu(a_1)$.
On the Boolean ring ${\cal S}(a)$ one can therefore recover all the
conditional probabilities $\lambda(a,b)$ from the
probability function $\mu(b)$.
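For a finite toy model the conditional probability can be realized by counting, $\lambda(a,b)=|b|/|a|$, and the axioms \textbf{S~2.1}--\textbf{S~2.3}, together with the additivity of $\mu(b)=\lambda(a,b)$, can then be verified exactly. The sketch below is our illustration (the sets and the helper \texttt{lam} are hypothetical choices); exact rational arithmetic avoids floating-point artifacts.

```python
from fractions import Fraction

def lam(a, b):
    """Conditional probability lambda(a, b) = |b| / |a| for b contained in a, a nonempty."""
    assert b <= a and len(a) > 0
    return Fraction(len(b), len(a))

a1 = frozenset(range(12))      # a chain a1 > a2 > a3 of nonempty sets
a2 = frozenset(range(6))
a3 = frozenset(range(2))

# S 2.1: for disjoint b1, b2 the two conditional probabilities sum to one
b1, b2 = frozenset({0, 1, 2}), frozenset({3, 4})
assert lam(b1 | b2, b1) + lam(b1 | b2, b2) == 1

# S 2.2: multiplicativity along the chain a1 > a2 > a3
assert lam(a1, a3) == lam(a1, a2) * lam(a2, a3)

# S 2.3: lambda(a1, a2) is nonzero for nonempty a2 contained in a1
assert lam(a1, a2) != 0

# additivity of mu(b) = lambda(a1, b) on disjoint subsets of a1, as derived above
c1, c2 = frozenset({0, 1}), frozenset({2, 3, 4})
assert lam(a1, c1 | c2) == lam(a1, c1) + lam(a1, c2)

print("S 2.1 - S 2.3 hold for the counting-measure model")
```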
Such a structure is very general, and only the additional criteria by
which the family $\mathcal{S}$ is selected out of ${P}(M)$ allow one to
recognize a relationship with physical procedures. These procedures come
from phenomenology, from known sectors of physics and technology. They
can be implemented in laboratories since appropriate language and
techniques have been developed and it is rather obvious that
apparatuses used in experimental settings are related to the whole
technical evolution by which materials were produced, that can be
sorted and adequately transformed. General thermodynamical concepts
such as local equilibrium are immediately of relevance, non
equilibrium being producible by putting different components together;
basic physical indexes, such as temperature, were recognized and led
to the feasibility of increasingly sophisticated selections. Families
$\mathcal{S}$ of subsets satisfying only the requirements
\textbf{S~1}, which we have simply called selection procedures,
acquire the fundamental properties \textbf{S~2} only if an adequate
degree of selection has been attained, generally including some
suitable isolation or shielding device. Then when a sufficiently
selected subset $a \in \mathcal{S}$ has been obtained, further
partitions of $a$ into disjoint subsets show the statistical
regularity expressed by \textbf{S~2}, so that $\mathcal{S}$ is a
statistical selection procedure and physics can start to explain the
probability function $\lambda(a,b)$, $b\subset a$. Condition
\textbf{S~1.1} means that once we are able to perform selection $a$
and selection $b$, it is possible to build an equipment which produces
selections $a$ and $b$ together, i.e. we are considering only
compatible selection procedures: this is the practical way to produce,
starting with two selection procedures $a$ and $b$, another one $a
\cap b$ finer than $a$ and $b$ since $a \cap b\subset a$, $a \cap
b\subset b$. Phenomenology used in setting experiments seems to
satisfy this very simple and general criterion; this is often but
misleadingly described as a \textit{classical} character of the
macroscopic world. Taking the concepts of time and space as already
established, associating selection procedures to space-time regions
one is led, by relativistic causality, to assume that selection
procedures associated with two space time regions at space-like
separation from each other are compatible. It is a selection
procedure to prepare a physical system during a certain initial time
interval inside a finite space region, then finer selections can be
done over a longer time interval and if these are statistical
selection procedures a very general statistical description of
dynamics is achieved, i.e. control of the system in the initial time
interval is often enough to allow a statistical regularity during the
time evolution of the system.
We now introduce a mathematical expression for the notion of
experimental mixture. Considering a selection procedure ${\cal S}$, a
partition of $a\in{\cal S}$ of the form $a=\cup_{i=1}^{n} b_i$, with
$b_i \in {\cal S}$ and mutually disjoint is called a
\textsc{Decomposition} of $a$ in the $b_i$, and $a$ is called a
\textsc{Mixture} of the $b_i$. Since the set ${\cal S}(a)$ is a
Boolean ring a decomposition of $a$ is simply a disjoint partition of
the unit element $a$ of ${\cal S}(a)$. With the above defined additive
measure $\mu(b)$ over ${\cal S}(a)$ we have $\sum_{i=1}^{n}
\mu(b_i)=1$, $\mu(b_i)=\lambda(a,b_i)$ being the weights of $b_i$ in
$a$. If we experimentally select $N$ systems according to $a$, and of
these $N_i$ are further selected according to $b_i$, the relations
$N_i / N \approx \mu(b_i)$ must hold within physical approximation.
This should however not induce the reader to confuse the notion of
selection procedure with that of ensemble, which will be introduced
later on.
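The frequency relation $N_i/N\approx\mu(b_i)$ lends itself to a small numerical illustration. The following sketch (Python with \texttt{numpy}; the three weights are hypothetical) draws $N$ systems according to a decomposition of $a$ and checks that the observed frequencies reproduce the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decomposition of a into b_1, b_2, b_3 with weights mu(b_i).
weights = np.array([0.5, 0.3, 0.2])
N = 100_000

# Select N systems according to a; each one falls into exactly one b_i.
outcomes = rng.choice(len(weights), size=N, p=weights)
frequencies = np.bincount(outcomes, minlength=len(weights)) / N

# N_i / N approximates mu(b_i), and the weights sum to one.
assert np.allclose(frequencies, weights, atol=0.01)
assert abs(frequencies.sum() - 1.0) < 1e-12
```

Larger $N$ tightens the agreement, in line with the statistical reproducibility required of a statistical selection procedure.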
\subsection{Preparation and registration}
\label{sec:prep-registr}
Exploiting the above-defined notions of selection procedure
and of statistical selection procedure, we want to introduce on
$M$ (which is expected to become the set of microsystems)
suitable mathematical structures, so as to interpret its
elements as physical systems, in that they can be prepared
and registered. Let a structure ${\cal Q}\subset{P}(M)$
be given on $M$, which we call the set of \textsc{Preparation
Procedures}, such that ({\bf A} standing for axiom)
\begin{description}
\item[A~1] ${\cal Q}$ is a statistical selection procedure.
\end{description}
The elements of ${\cal Q}$ are representatives of
well-defined technical processes, to be described by
pretheories and not by quantum mechanics itself,
thanks to which microsystems can be produced in large
numbers. The mathematical relation $x\in a$ ($a \in {\cal
Q}$) means: $x$ has been obtained according to the
preparation procedure $a$. There are very many examples of
preparation procedures, e.g., an ion-accelerator together with
the apparatus which generates the ion-beam.
We denote by $\lambda_{{\cal Q}}(a,b)$ the probability
function defined over ${\cal Q}$.
We now consider a specific physical example, in order to
make this construction clearer. We take an experimental
apparatus which generates couples (1,2) of spin 1/2 particles with
total spin 0 and emits them in opposite directions. As
preparation procedure for the system 1 we consider the
apparatus consisting of the preparation apparatus for the
couple (1,2) and an apparatus detecting the $z$ component of
the spin of system 2. This apparatus
gives us three different preparation procedures for the
system 1. Preparation procedure $a_1^3$: all prepared
systems 1, independent of the detection on system 2;
preparation procedure $a_1^{3+}$: all systems 1 for which a
positive $z$ component has been detected on system 2;
preparation procedure $a_1^{3-}$: all systems 1 for which a
negative $z$ component has been detected on system 2.
We obviously have
$a_1^{3+} \subset a_1^3$,
$a_1^{3-} \subset a_1^3$,
$a_1^{3+} \cap a_1^{3-}= \emptyset$ and
$a_1^3=a_1^{3-} \cup a_1^{3+}$ represents a decomposition
of $a_1^3$. In this particular case the weights are given by
$\mu(a_1^{3\pm})=\lambda(a_1^3,a_1^{3\pm})= {1/2}$.
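The weights $\mu(a_1^{3\pm})=1/2$ can be checked numerically from the singlet state of the pair. The following sketch (Python with \texttt{numpy}; the basis ordering is our convention) computes the probabilities of detecting a positive or negative $z$ component on system 2:

```python
import numpy as np

# Singlet state of the pair (1,2) in the product z basis |s1 s2>,
# ordered |++>, |+->, |-+>, |-->: psi = (|+-> - |-+>)/sqrt(2).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_12 = np.outer(psi, psi)

# Effects acting on system 2 alone: identity on 1 tensor the z projectors on 2.
plus, minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])
P2_plus = np.kron(np.eye(2), np.outer(plus, plus))
P2_minus = np.kron(np.eye(2), np.outer(minus, minus))

# Weights of a_1^{3+} and a_1^{3-} in a_1^3.
w_plus = np.trace(rho_12 @ P2_plus).real
w_minus = np.trace(rho_12 @ P2_minus).real
assert abs(w_plus - 0.5) < 1e-12 and abs(w_minus - 0.5) < 1e-12
```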
We now want to introduce the notion of registration. Let
there be on $M$ two further structures, the set of \textsc{Registration
Procedures} ${\cal R}\subset {P}(M)$ and the set of
\textsc{Registration Methods} ${\cal R}_0 \subset {P}(M)$, satisfying
\begin{description}
\item[A~2\hphantom{.1}] ${\cal R}$ is a selection procedure
\item[A~3\hphantom{.1}] ${\cal R}_0$ is a
statistical selection procedure
\item[A~4.1] ${\cal R}_0 \subset {\cal R}$
\item[A~4.2] From $b \in {\cal R}$ and ${\cal
R}_0 \ni b_0 \subset b$ follows $b\in {\cal R}_0$
\item[A~4.3] To each $b \in {\cal R}$ there exists
a $b_0 \in {\cal R}_0$ for which
$b\subset b_0$.
\end{description}
These two structures correspond to the two steps of a typical
registration process: the construction and utilization of the
registration apparatus and the selection according to the changes
which have occurred or not occurred in the registration apparatus. Let
us consider for example a proportional counter: $b_0 \in {\cal R}_0$
is the set of all microsystems which have been applied to the counter;
the elements of ${\cal R}_0$ characterize therefore the construction
of the registration apparatus and its application to microsystems. For
a particular microsystem $x\in b_0$ the counter may or may not
respond: let $b_+$ (with $b_+ \subset b_0$) be the selection procedure
of all $x\in b_0$ for which the counter has responded and $b_-$ the
set of all $x\in b_0$ for which the counter has not responded. $b_+$
and $b_-$ are elements of ${\cal R}$. {\bf A~3} accounts for the fact
that the apparatus, apart from triggering by microsystems, is a
macrosystem with statistically reproducible features. It is instead
extremely important that we do not require ${\cal R}$ to be a
statistical selection procedure. To understand this point let us come
back to the previous example. The counter characterized by $b_0$ may
respond or not, so that $b_0$ is decomposed into the two sets $b_+$ and
$b_-$, such that $b_0 = b_+ \cup b_-$ and $b_+ \cap b_- = \emptyset$.
There is however in nature no reproducible frequency $\lambda_{{\cal
R}}(b_0,b_+)$: if in a real experiment $N$ microsystems
$x_1, x_2, \ldots , x_N$ are applied to the counter, i.e., $x_1 \in
b_0, x_2 \in b_0, \ldots , x_N \in b_0$, and the counter has responded
for $N_+$ of these, the frequency $N_+ / N$ depends in an essential
way on the previous history of the microsystems; it cannot be
reproduced on the basis of the registration procedure alone.
Let us call ${\cal S}$ the smallest set of selection
procedures containing all $a\cap b$ with $a\in{\cal Q}$ and
$b\in{\cal R}$ (remember that $a\cap b$ is the set of all
microsystems that have been prepared according to $a$ and
registered according to $b$). We have ${\cal S}\subset{P}(M)$, but in the general case neither
${\cal Q}\subset{\cal S}$ nor
${\cal R}\subset{\cal S}$ will be true.
We now come to a most important statement, according to
which preparation {\em and} registration procedures
together give reproducible frequencies
\begin{description}
\item[A~5] ${\cal S}$ is a statistical selection procedure.
\end{description}
Of course there will be some relations between the
statistics in ${\cal S}$ and those in ${\cal Q}$ and ${\cal
R}_0$.
We now want to express the fact that preparation
procedures and registration methods are
independent of each other; denoting with $\lambda_{{\cal
S}}(c,c')$ the probability function in ${\cal S}$ we have
\begin{description}
\item[A~6.1] If $a,a'\in {\cal Q}$, $a'\subset a$
and $b_0\in{\cal R}_0$ then
$\lambda_{\cal S}(a\cap b_0,a'\cap b_0)
=\lambda_{\cal Q}(a,a')$
\item[A~6.2] If $a\in {\cal Q}$
and $b_0,b'_0 \in{\cal R}_0$, $b'_0\subset b_0$,
then $\lambda_{\cal S}(a\cap b_0, a\cap b'_0)=
\lambda_{{\cal R}_0}
(b_0,b'_0)$.
\end{description}
On the contrary in general $\lambda_{\cal S}(a\cap b,a'\cap b)
\not =\lambda_{\cal Q}(a,a')$, where
$\lambda_{\cal Q}(a,a')$ is the frequency with which
microsystems prepared according to $a$ satisfy the finer
selection $a'$.
\textbf{A~6.1} and \textbf{A~6.2} mean that, except through the
microsystem, the preparation and registration apparatuses do not interact.
Thus {\bf A~6.1} and {\bf A~6.2} express the directedness of the
interaction, which goes from the preparation to the registration
apparatus.
A set $M$ with three structures ${\cal Q}\subset{P}(M)$, ${\cal
R}\subset{P}(M)$, ${\cal R}_0 \subset{P}(M)$, satisfying
{\bf A~1} to {\bf A~6} is a set of physical systems selected by a
measuring process. As stressed at the beginning of this section the
structures we have used to introduce the notion of physical system are
not restricted to the case of microsystems, they can describe
measurements on macroscopic systems as well. Thanks to the axioms {\bf
A~1} to {\bf A~6}, implying the independence of the preparation
procedure with respect to the registration procedure, the entities that
we have called physical systems have some reality beyond that of the
direct interpretation in terms of preparation and registration
procedures. Intuitively this means that in the preparation
\textit{something} is produced which can be afterwards detected by the
registration apparatus. Nevertheless the physical systems that we have
introduced are still closely related to the associated production and
detection methods; it does not seem that they can be described in
terms of the objective properties that we are accustomed to ascribe to
physical systems. Speaking of self-existing objects which do not
suffer or exert any influence on the rest of the world would be
physically meaningless and, from a logical point of view,
self-contradictory. Nevertheless in physics one seeks to describe
portions of the world as if they were isolated, in the sense that on a
given description level their interactions with the rest of the world
may be neglected. To the extent that this is possible one may
attribute objective properties to the considered system. The
introduced scheme is so far very general, being applicable both to
macrosystems and microsystems: the selection procedures in ${\cal S}$
describe a conventional \textit{classical statistics}, not exhibiting the
\textit{typical} quantum mechanical structure. The transition to quantum
statistics will be made only later with axiom {\bf QM}, thus coming to
the notion of microsystem.
\subsection{Equivalence classes}
\label{sec:equivalence-classes}
From
{\bf S~2} and
{\bf A~6} one can prove that the probability function
$\lambda_{\cal S}(c,c')$ is uniquely determined by
$\lambda_{\cal Q}$ and by the special values
\begin{equation}
\label{2.1}
\lambda_{\cal S}(a\cap b_0,a\cap b) ,
\end{equation}
with $a\in {\cal Q}$,
$b\in{\cal R}$, $b_0\in{\cal R}_0$ and $b\subset b_0$.
$\lambda_{\cal S}(a\cap b_0,a\cap b)$ gives the frequencies
with which microsystems prepared by $a$ and
applied to the apparatus characterized by $b_0$ trigger it
according to $b$.
The values (\ref{2.1}) are just the values
the experimental physicist obtains to compare with the theory: $N$
systems are prepared according to the preparation
procedure $a$ and applied to the registration method
specified by $b_0$, then one counts the number $N_+$ of
microsystems which trigger the registration apparatus in a
definite way, specified by $b$. Within physical
approximations the number $N_+ / N$ should agree with
(\ref{2.1}): the whole statistics of experiments with
microsystems is contained in (\ref{2.1}).
To proceed further let us introduce the set ${\cal F}$ of
\textsc{Effect Processes}: ${\cal F}\equiv \{ {(b_0,b)} | b_0\in{\cal
R}_0, b_0\ne\emptyset , b\in{\cal R},b\subset b_0 \}$. A couple
${(b_0,b)}$ in ${\cal F}$ exactly describes the experimental situation
corresponding to the generation of an effect. We may now write in a
simpler way the function (\ref{2.1}): denoting by $g={(b_0,b)}$ a
couple in ${\cal F}$ we define $\lambda_{\cal S}(a\cap b_0,a\cap
b)=\mu (a,g)$, where the function $\mu(a,g)$ is defined on the whole
${\cal Q} \times {\cal F}$. Setting $a_1 \sim a_2$ whenever
$\mu (a_1,g)=\mu(a_2,g)$ for all $g\in{\cal F}$ defines an
equivalence relation on ${\cal Q}$, which allows us to partition it into
equivalence classes. We call ${\cal K}$ the set of all equivalence classes in
${\cal Q}$: an element of ${\cal K}$ is called \textsc{Ensemble} (or
{\em state}) and ${\cal K}$ is the set of ensembles. Let us stress the
fact that an ensemble $w\in{\cal K}$ is not a subset of $M$, that is
to say, an ensemble $w$ is not a set of prepared microsystems: it is a
class of sets $a$ of prepared microsystems. The difference between
ensembles and preparation procedures is very important. Analogously
to what has been done in ${\cal Q}$, one can introduce an equivalence
relation in ${\cal F}$: $g_1\sim g_2$ whenever $ \mu(a,g_1)=
\mu(a,g_2) $ for all $a\in{\cal Q}$. We denote by ${\cal L}$ the set
of all equivalence classes in ${\cal F}$: an element $f\in{\cal L}$ is
called \textsc{Effect} and ${\cal L}$ is the set of all effects. Once
again one should not confuse effects and effect processes. Through
${\tilde \mu}(w,f)=\mu(a,g)$ for $w\in{\cal K}$, $f\in{\cal L}$ and
$a\in w$, $g\in f$ a function ${\tilde \mu}(w,f)$ is defined on ${\cal
K}\times{\cal L}$ (in the following we will simply write $\mu$
instead of $\tilde\mu$). For the real function ${\mu}(w,f)$ on ${\cal
K}\times{\cal L}$ we have:
\begin{enumerate}
\item $0\leq \mu(w,f)\leq 1$,
\item $\mu(w_1,f)=\mu(w_2,f)\ \forall f\in{\cal L}
\Rightarrow w_1=w_2$,
\item $\mu(w,f_1)=\mu(w,f_2)\ \forall w\in{\cal K}
\Rightarrow f_1=f_2$,
\item $\exists !\ f_0\in{\cal L}$ (also
denoted by 0) such that
$\mu(w,f_0)=0\ \forall w\in{\cal K}$,
\item $\exists !\ f_1\in{\cal L}$ (also
denoted by \openone) such that
$\mu(w,f_1)=1\ \forall w\in{\cal K}$.
\end{enumerate}
Mixtures on $\mathcal{Q}$ are transferred on $\mathcal{K}$, as one can
show taking into account $\lambda_{\cal S}(a\cap b_0, a'\cap b_0)=\lambda_{\cal Q}(a,a')$: let $a=\cup_{i=1}^{n}a_i$,
$a_i\in\mathcal{Q}$, $a_i\ne\emptyset$, $a_i\cap a_j=\emptyset$ for $i\ne
j$; then, if $a\in w$ and $a_i\in w_i$, one has
\begin{displaymath}
w=\sum_{i=1}^{n} \lambda_{\cal Q}(a,a_i)w_i.
\end{displaymath}
By this fundamental statistical property a preparation procedure
$a\in\mathcal{Q}$ of a microsystem becomes very close to an element
$w\in \mathcal{K}$. It is however very important to be aware of the
fact that the passage from $\mathcal{Q}$ to $\mathcal{K}$ is a step by
which new mathematical entities are introduced which have a basic role
in describing the physics of a microsystem under \textit{all possible
preparation and detection procedures}: then $w$ does not simply
represent one concrete preparation procedure of a microsystem.
The introduction of equivalence classes gives $\mu (w,f)$ a universal
character: in the case of a microsystem these equivalence classes
contain a huge number of elements.
Actually something of the particular experimental situation described
by $a\in\mathcal{Q}$ is lost when the equivalence class $w$ to which
$a$ belongs is considered, and the inverse passage from $w$ to $a$
cannot be made if one relies only on the quantum theory of microsystems.
Some paradoxes in quantum mechanics, e.g. the EPR paradox, have their roots
precisely in neglecting this fact: typically an equivalence class $w\in
{\cal K}$ can contain two preparation procedures $a, a' \in\mathcal{Q}$
which are \textit{incompatible}, $a\cap a'=\emptyset$, i.e.
the two concrete selection procedures cannot be performed together. From
Ludwig's point of view the debated question of the completeness of quantum
mechanics is thus not an issue from the very beginning.
The introduced partitions into equivalence classes of the sets ${\cal
Q}$ and ${\cal F}$ are most important. These partitions do not
simply amount to make the theory of the considered physical systems
independent from inessential features in the construction of the
apparatuses $a\in{\cal Q}$ and $b_0\in{\cal R}_0$. They have a much
deeper significance with regard to the physical theory. For example
the partition of ${\cal Q}$ depends in an essential way on which and
how many effect processes are physically realizable. Restricting the
set ${\cal F}$ to a subset $\widetilde{{\cal F}}$ could imply a
coarser partition of ${\cal Q}$. Axioms about the extension of the
sets ${\cal Q}$ and ${\cal F}$ amount to specifying the theory one is
dealing with, thus indirectly identifying the described physical
systems and the realizable experiments.
\subsection{Quantum Mechanics}
\label{ro and f in mq}
So far we have introduced the quantities that connect
theory and experiment, that is to say the elements of
${\cal Q}$, ${\cal R}$, ${\cal R}_0$ and the functions
$\lambda_{\cal Q}$,
$\lambda_{{\cal R}_0}$,
$\lambda_{\cal S}$.
Note that contrary to the usual formulations of quantum
mechanics, neither the statistical operators (or in
particular the pure states) nor the self-adjoint operators
(describing the so-called observables) will be used for
direct comparison with experiment: the relationship between
mathematical description and experiment exclusively rests
upon the preparation and the registration procedures and
the probability function $\lambda_{\cal S}$.
We now add an axiom connecting this general
theoretical scheme to the usual Hilbert space quantum
mechanics ({\bf QM} standing for quantum mechanics).
\begin{description}
\item[QM] There is a bijective map $\mathcal{W}$ of
${\cal K}$ onto the set $\mathcal{K}
(\mathcal{H})$ of positive self-adjoint
operators $W$ on a Hilbert space ${\cal H}$ with
${\mbox{Tr}}(W)=1$ and a bijective map $\mathcal{F}$
of ${\cal L}$ onto the set $\mathcal{L}
(\mathcal{H})\subset \mathcal{B}
(\mathcal{H})$ of all self-adjoint
operators with $0\leq F \leq \openone$, so that
$\mu(w,f)=\Tr (WF)$ holds where $W=\mathcal{W}[w]$,
$F=\mathcal{F}[f]$.
\end{description}
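Axiom {\bf QM} can be made concrete in a two-dimensional toy example. The following sketch (Python with \texttt{numpy}; the matrices are hypothetical, chosen only to satisfy the constraints) checks that $W$ is a statistical operator, that $F$ is an effect, and that $\mu(w,f)=\Tr (WF)$ lies between 0 and 1:

```python
import numpy as np

# Hypothetical qubit ensemble w <-> statistical operator W (positive, trace one).
W = np.array([[0.7, 0.2],
              [0.2, 0.3]])
assert np.all(np.linalg.eigvalsh(W) >= -1e-12) and abs(np.trace(W) - 1) < 1e-12

# Hypothetical effect f <-> operator F with 0 <= F <= 1 (an unsharp yes-no test).
F = np.array([[0.9, 0.1],
              [0.1, 0.2]])
ev = np.linalg.eigvalsh(F)
assert np.all(ev >= -1e-12) and np.all(ev <= 1 + 1e-12)

# The probability mu(w, f) = Tr(WF) of a positive answer.
mu = np.trace(W @ F).real
assert abs(mu - 0.73) < 1e-9 and 0.0 <= mu <= 1.0
```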
Because of {\bf QM} one simply identifies ${\cal K}$ with $\mathcal{K}
(\mathcal{H})$, ${\cal L}$ with $\mathcal{L} (\mathcal{H})$ and
$\mu(w,f)$ with $\Tr (WF)$. The convex set $\mathcal{K}
(\mathcal{H})$ is the base of the base-norm space $\mathcal{T}
(\mathcal{H})$ of trace-class operators on ${\cal H}$, while
$\mathcal{L} (\mathcal{H})$ is the order unit interval of the order
unit space $\mathcal{B} (\mathcal{H})$ of bounded operators on ${\cal
H}$. The Banach space $\mathcal{B} (\mathcal{H})$ is the dual of the
Banach space $\mathcal{T} (\mathcal{H})$, the canonical bilinear form
being given by $\langle W,A \rangle = \Tr
(W^{\scriptscriptstyle\dagger}A)$ with $W\in \mathcal{T}
(\mathcal{H})$ and $A\in \mathcal{B} (\mathcal{H})$. The axiom {\bf
QM} is prepared by introducing the functions $\mu (w,f)$ and their
affine dependence on $w$. It can be guessed when quantum mechanics is
introduced in the usual textbook way and the basic statistical
interpretation is given. A deeper axiomatic effort has been made by
Ludwig\cite{Ludwig-Foundations,Ludwig-Axiomatic} in order to obtain the
Hilbert space structure on the basis of physically more transparent axioms.
After the introduction of {\bf QM} we call $M$ the set of
\textsc{Microsystems}. So far we have considered only one type of
microsystems, a more general situation will be considered later on.
It seems very stimulating that the simple physical fact of essentially
statistical regularity of processes leads in Ludwig's point of view to
a concept of microsystem which goes much beyond the classical concept
of atom brought in by chemistry: it is no longer so strictly
associated with \textit{smallness} and with the role of
\textit{component} of matter. We shall recall in the next section how
naturally mathematics of quantum theory is born if one describes this
concept of microsystem. On the basis of the above formulation of the
foundations of quantum mechanics it is clear that the Hilbert space
does not directly describe a physical structure. It is a mathematical
tool which permits us to cleverly handle the structure of the convex
set $\mathcal{K} (\mathcal{H})$. Since the positive affine functionals
on $\mathcal{K} (\mathcal{H})$ are identical to the elements of the
positive cone of $\mathcal{B} (\mathcal{H})$ (of which $\mathcal{L}
(\mathcal{H})$ is the basis), it is the structure of $\mathcal{K}
(\mathcal{H})$ alone which determines the physical structure of
microsystems.
\section{The modern formulation of quantum mechanics}
\label{modern}
In the introduction we have tried to give a brief exposition of the
main ideas behind Ludwig's axiomatic approach to quantum mechanics.
One of his aims was to put aside the ill-defined notions of state and
observable, primarily focusing on a proper description of the
statistical experiments one is actually faced with in quantum
mechanics. In a typical experiment a macroscopic apparatus realizing a
classically described preparation procedure triggers some detector
which gives as output a macroscopic signal, according to a suitably
devised registration procedure. The notion of microsystem is only
recovered as a convenient way to describe the most simple among such
statistical experiments, in which a preparation procedure triggers
with a definite reproducible frequency some registration procedure,
the microsystem acting as correlation carrier from the former to the
latter. The mathematical entity describing an equivalence class of
preparation apparatuses is then identified with the state of the
microsystem, while the mathematical entity corresponding to an
equivalence class of registration apparatuses, originally called
\textit{Effekt} by Ludwig, contains the information about what has
been experimentally measured\cite{Kraus}. The space in which the latter
objects live is the dual of that of the former; the relative
frequency with which the preparation triggers the registration is
obtained by means of the canonical bilinear form between the two spaces. This
frequency characterizes the yes-no answer of the registration or
measuring apparatus when affected by the preparation apparatus. As a
result of Ludwig's analysis in the quantum case states, to be seen as
mathematical representatives of equivalence classes of actual
preparation procedures, are given by statistical operators, while
observables, to be seen as mathematical representatives of equivalence
classes of actual registration or measuring procedures, are given by
effects. Taking into account the fact that registration apparatuses
associated to effects are naturally endowed with a scale (e.g. an
interval on the real line for a position measurement), the notion of
effect immediately leads to the concept of observable as positive
operator-valued measure\cite{Grabowski}. Note that the consideration
of equivalence classes is actually a key point. Utterly different and
incompatible (in the sense that they cannot be performed together)
preparation procedures might lead to one and the same state, i.e. to
equivalent statistical selection procedures. The different preparation procedures
in the same equivalence class are related to the, generally infinite,
possible decompositions of a given statistical operator, corresponding
to generally incompatible macroscopic procedures, as stressed by the
EPR paradox. In the present section we will give a very brief
presentation of the more general and more flexible formulation of
quantum mechanics, which naturally comes out of Ludwig's approach.
This modern formulation of quantum mechanics, giving the most general
description of statistical experiments and transformation of states,
is obviously the result of research work by very many authors, often
starting from quite different standpoints. Among the many possible
references on the subject we recall the work by
Ludwig\cite{Ludwig-Foundations} and by Holevo\cite{HolevoOLD,HolevoNEW},
referring to these books for a more extensive bibliography. Let us
mention that the modern formulation of quantum mechanics can also be
recovered within the Bohmian approach\cite{Duerr04}.
\subsection{Description of quantum measurements}
\label{sec:stat-struct-quant}
A state in quantum mechanics, to be understood as the mathematical
representative of an equivalence class of preparation procedures, is
given by a statistical operator, i.e. a trace class operator, positive
and with trace equal to one. We recall that the set $\mathcal{T}
(\mathcal{H})$ of trace class operators on a Hilbert space
$\mathcal{H}$ forms a Banach space and is in particular an
ideal of the Banach space of bounded operators $\mathcal{B}
(\mathcal{H})$, which is the dual space of $\mathcal{T}
(\mathcal{H})$, the duality form being given by the trace. In
particular the set of statistical operators $\mathcal{K}
(\mathcal{H})$
\begin{displaymath}
\mathcal{K}
(\mathcal{H})=\{ \rho\in \mathcal{T}
(\mathcal{H}) | \rho\geq 0 \quad \Tr \rho=1 \}
\end{displaymath}
is a convex subset of the space of self-adjoint elements in $\mathcal{T}
(\mathcal{H})$ and is the base of the cone of positive elements which
generates the space of self-adjoint elements in $\mathcal{T}
(\mathcal{H})$. The convex structure of the set naturally accounts for
the possibility to consider statistical mixtures, i.e.
\begin{displaymath}
\rho_i\in \mathcal{K}
(\mathcal{H}), \quad \lambda_i\geq 0 \quad \sum_i \lambda_i=1
\Rightarrow \sum_i \lambda_i\rho_i\in \mathcal{K}
(\mathcal{H}),
\end{displaymath}
while pure states in the sense of one dimensional projections appear as
extreme points of the convex set $\mathcal{K}
(\mathcal{H})$, i.e. elements which do not admit any further
demixture
\begin{displaymath}
\rho=\lambda \rho_1+ (1-\lambda)\rho_2 \quad 0<\lambda<1 \quad \rho_1,\rho_2\in\mathcal{K}
(\mathcal{H}) \Rightarrow \rho=\rho_1=\rho_2,
\end{displaymath}
corresponding to the highest control in the preparation
procedure. Being compact and self-adjoint, any statistical operator can
be represented as a convex combination of pure states
\begin{displaymath}
\rho=\sum_{i} \lambda_i |\psi_i\rangle\langle \psi_i|
\quad
\lambda_i\geq 0 \quad \sum_i \lambda_i=1 \quad \|\psi_i\|=1.
\end{displaymath}
One such representation is given by the spectral representation of
$\rho$, however in general infinitely many such representations are
possible, not necessarily involving orthogonal vectors; these
different representations do generally correspond to different and
incompatible preparation procedures, in the sense that they cannot be
performed together (think e.g. of a device preparing spin 1/2
particles in terms of their spin states, fully unpolarized states can
be obtained by observing the spin along any axis, no device however
can simultaneously measure the spin along two different axes). An
important confirmation that statistical operators give the most
general mathematical representative of a preparation comes from the
following highly nontrivial theorem by Gleason\cite{Gleason}. Let us
consider the set $\mathcal{P} (\mathcal{H})$ of orthogonal projections
in $\mathcal{H}$, in one to one correspondence with the closed
subspaces of $\mathcal{H}$, building up the so-called quantum logic of
events\cite{Cassinelli}. We first define a probability measure on
$\mathcal{P} (\mathcal{H})$ as a real function $\mu: \mathcal{P}
(\mathcal{H}) \rightarrow \mathbb{R}$ such that $0\leq \mu (E)\leq 1 \
\forall E\in \mathcal{P} (\mathcal{H})$ and $\mu (\sum_i E_i)=\sum_i
\mu (E_i)$ for $\{ E_i\}\subset \mathcal{P} (\mathcal{H}), E_iE_j=0\
i\ne j$ (i.e. $\{ E_i\}$ are compatible projections corresponding to
orthogonal subspaces). Then according to Gleason, for $\dim
\mathcal{H}\geq 3$ any such probability measure has the form $\mu (E)=
\Tr \rho E$ $\forall E\in \mathcal{P} (\mathcal{H})$, with $\rho$ a
statistical operator.
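The non-uniqueness of demixtures mentioned above can be verified directly for the fully unpolarized qubit state. The following sketch (Python with \texttt{numpy}) shows that equal mixtures prepared along the $z$ axis or along the $x$ axis, two incompatible concrete procedures, yield one and the same statistical operator:

```python
import numpy as np

def proj(v):
    """Rank-one projector |v><v| onto a normalized vector."""
    return np.outer(v, v.conj())

# Preparation 1: equal mixture of spin up/down along z.
z_up, z_dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho_z = 0.5 * proj(z_up) + 0.5 * proj(z_dn)

# Preparation 2: equal mixture of spin up/down along x.
x_up = np.array([1.0,  1.0]) / np.sqrt(2)
x_dn = np.array([1.0, -1.0]) / np.sqrt(2)
rho_x = 0.5 * proj(x_up) + 0.5 * proj(x_dn)

# Both concrete procedures belong to the same equivalence class (state).
assert np.allclose(rho_z, rho_x)
assert np.allclose(rho_z, np.eye(2) / 2)  # the fully unpolarized state
```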
\subsection{Generalized notion of observable}
\label{sec:gener-noti-observ}
In order to describe the statistics of a given experiment, once the
state has been characterized one needs to specify the probability that
the registered value of the quantity one is trying to measure lies in
a given interval within the physically allowed range (in the following
$\mathbb{R}$ for the sake of simplicity). This amounts to defining an
affine mapping (i.e. one preserving convex linear combinations) from the
convex set $\mathcal{K} (\mathcal{H})$ of statistical operators to the
set of probability measures on $\mathcal{B} (\mathbb{R})$ (the Borel
$\sigma$-algebra on the space of outcomes $\mathbb{R}$). In full
generality such mappings take the form $\Tr \rho F (M)$, where $\rho$
is the statistical operator and $F (M)$ is a uniquely defined
positive operator-valued measure\cite{HolevoNEW} ($M$ being an
element of the Borel $\sigma$-algebra $\mathcal{B} (\mathbb{R})$). As
is well known, a positive operator-valued measure is a mapping
defined on the $\sigma$-algebra $\mathcal{B} (\mathbb{R})$ and taking
values in the space of positive bounded operators such that $0\leq F
(M)\leq \openone$ and $F(\mathbb{R})=\openone$, so that one has the normalization
necessary for the probabilistic interpretation, and
$\sigma$-additivity holds, in the sense that $F (\cup_i M_i)=\sum_i F
(M_i)$ for any disjoint partition $\{ M_i\}$ of $\mathbb{R}$. For
fixed $M\in\mathcal{B} (\mathbb{R})$ the operator $F (M)$ is an
effect, i.e. a positive operator between 0 and $\openone$, and $\Tr
\rho F (M)$ tells us the probability that in an experiment, whose
preparation procedure is described by $\rho$, we will actually find
that our registration procedure gives a positive answer to the
question whether the measured quantity lies in the fixed interval $M$.
The above-introduced structure, in which registrations are naturally
associated to points or intervals in $\mathbb{R}$, arises directly
in Ludwig's construction of effects. Note that the statistical nature
of the experiment only requires $\Tr \rho F (M)$ to be a number
between zero and one. There is no reason to ask $F (M)$ to be a
projection-valued measure, i.e. to require that for any fixed
$M\in\mathcal{B} (\mathbb{R})$ the operator $F (M)$ be an orthogonal
projection (also called a \textit{decision effect} by Ludwig); this will
only happen for a subset of the possible registration procedures,
corresponding to the most \textit{sensitive} measurements, which moreover
generally do not exhaust the set of extreme points of the convex set
of positive operator-valued measures. If $F (M)$ is a
projection-valued measure, it then uniquely corresponds to the
spectral measure of a self-adjoint operator in $\mathcal{H}$. The
usual notion of observable in the sense of a self-adjoint operator is
thus recovered for particular observables. What one is really
interested in is the probability distribution of the different
possible outcomes of an experiment, once the state has been fixed, and
only in particular, though often very relevant, cases this can be done
by identifying a self-adjoint operator and its associated spectral
measure. Note that contrary to classical mechanics different
generalized observables have different probability distributions and
not all observables can be described in terms of a joint probability
distribution, leading to a structure known as quantum probability,
generalizing the classical notion of probability
theory\cite{FagnolaPROYEC-Fannes-Strocchi}. In fact it has been argued
that the passage from classical to quantum theory is actually a
generalization of probability theory\cite{StreaterJMP}. Note that
within Ludwig's approach the coexistence of observables is related to
the actual possibility of constructing concrete measuring apparatuses.
This (most concise) presentation of how to express the quantum
mechanical theoretical predictions for a statistical experiment,
leading to the notion of state as statistical operator and generalized
observable as positive operator-valued measure (also called
non-orthogonal resolution of identity in the mathematical literature)
is certainly not in the spirit of textbooks on quantum mechanics (even
quite recent ones); it is much closer to the presentation of
quantum mechanics one finds in the introductory chapters of books
concerned with quantum information and communication,
e.g.\cite{Nielsen-Chuang}. Note however that in quantum information
and communication one is often only concerned with finite-dimensional
Hilbert spaces, so that the range of the positive operator-valued
measure is given by a denumerable set of operators $0\leq F_i\leq
\openone$ summing up to the identity, $\sum_i F_i = \openone$.
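A standard example of a positive operator-valued measure that is not projection-valued is the so-called trine POVM for a qubit. The following sketch (Python with \texttt{numpy}) verifies positivity, normalization, and the fact that none of the effects is an orthogonal projection:

```python
import numpy as np

# Trine POVM: F_i = (2/3)|psi_i><psi_i| with three real qubit states
# whose Bloch vectors point at 120-degree angles in the x-z plane.
angles = [0.0, 2*np.pi/3, 4*np.pi/3]
vs = [np.array([np.cos(t/2), np.sin(t/2)]) for t in angles]
F = [(2/3) * np.outer(v, v) for v in vs]

# Normalization: the effects sum to the identity.
assert np.allclose(sum(F), np.eye(2))

for Fi in F:
    ev = np.linalg.eigvalsh(Fi)
    assert np.all(ev >= -1e-12) and np.all(ev <= 1 + 1e-12)  # 0 <= F_i <= 1
    assert not np.allclose(Fi @ Fi, Fi)  # F_i is not an orthogonal projection

# Outcome statistics for the pure state rho = |0><0|.
rho = np.diag([1.0, 0.0])
p = [np.trace(rho @ Fi).real for Fi in F]
assert np.allclose(p, [2/3, 1/6, 1/6])
```

If the measure were projection-valued, the three outcomes would have to correspond to mutually orthogonal subspaces, which is impossible for three nonzero outcomes in dimension two.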
\subsection{Measurements as mappings on states}
\label{sec:meas-as-mapp}
Up to now we have only given the general description of the statistics
of the outcomes of a possible measurement. More generally one might be
interested in how a state is transformed as a consequence of a given
measurement. Note that the shift from pure states, corresponding to
state vectors, to statistical operators from a mathematical standpoint
shifts the attention from operators in $\mathcal{H}$ to affine
mappings on the convex set $\mathcal{K} (\mathcal{H})$. The way in
which a state is changed as a consequence of some registration
procedure applied to it is generally described in terms of an
instrument, a notion first introduced by Davies and
Lewis\cite{Davies-Lewis}. An instrument is a mapping $\mathcal{M}$
defined on the $\sigma$-algebra $\mathcal{B} (\mathbb{R})$ giving the
possible outcomes of an experiment and taking values in the set of
operations, i.e., of contracting, positivity-preserving affine mappings
on $\mathcal{K} (\mathcal{H})$, first introduced by Haag and
Kastler~\cite{Haag-Kastler} and called \textit{Umpr\"aparierung} by
Ludwig in his axiomatic construction. In particular an instrument
$\mathcal{M}$ is such that: $\mathcal{M} (M)$ is an operation $\forall
M\in\mathcal{B} (\mathbb{R})$, i.e., $\mathcal{M} (M)[\rho]\geq 0$ and
$\Tr \mathcal{M} (M)[\rho]\leq 1$; $\Tr \mathcal{M}
(\mathbb{R})[\rho]=1$, accounting for normalization; $\mathcal{M}
(\cdot)$ is $\sigma$-additive, i.e., $\mathcal{M} (\cup_i M_i)=\sum_i
\mathcal{M} (M_i)$ for any collection of pairwise disjoint sets $\{
M_i\}$. The interpretation is as follows: $\mathcal{M} (M)[\rho]$
gives the statistical subcollection obtained by selecting the prepared
state described by $\rho$ according to the fact that the measurement
outcome lies in $M$, $\mathcal{M} (\mathbb{R})[\rho]$ is the
transformed state obtained if no selection is made according to the
measurement outcome. Of course knowledge of the instrument
corresponding to a certain state transformation related to a given
measurement also provides the full statistics of the outcomes,
obtained by the positive operator-valued measure given by
$\mathcal{M}' (M)[\openone]$, where the prime denotes the adjoint with
respect to the trace operation. However, different instruments can lead to
the same positive operator-valued measure, reflecting the fact that
the very same quantity can be measured in different ways,
leading to states that are transformed differently depending on the
actual experimental apparatus used to implement the
measurement. Knowledge of the transformed state allows one to deal with
subsequent measurements, both discrete and continuous, and in fact the
notion of instrument leads to a formulation of continual measurement
in quantum mechanics. The field of continual measurement is by now
well established, providing the necessary theoretical background for
important experiments in quantum optics (see \cite{BarchielliLNM} for
a recent review mainly in the spirit of quantum stochastic
differential equations and~\cite{Davies,continue1-continue2} for
earlier work). Once again this description of a measurement as a
repreparation of the incoming state depending on the measurement
outcome is certainly not emphasized in quantum mechanics textbooks,
but is a natural and fruitful standpoint in quantum information and
communication theory. Indeed the very notion of microsystem as
something which is prepared by a macroscopic apparatus and
subsequently registered in a registration apparatus, i.e., as a
correlation carrier between macroscopically operated apparatuses,
naturally emerging from Ludwig's axiomatic studies, is a particularly
fertile viewpoint in quantum mechanics and especially in quantum
information and communication, as advocated by Werner~\cite{Alber}; it
is no coincidence that the key concepts of preparation and registration
are naturally renamed sender and receiver.
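As a concrete toy instance of an instrument (our own sketch; the L\"uders instrument of a two-outcome projective measurement is the standard example, not a construction taken from the text above), one can check positivity, the two trace conditions, and the fact that the adjoint applied to the identity recovers the outcome statistics:

```python
import numpy as np

# Lüders instrument of a two-outcome projective qubit measurement:
# M({i})[rho] = P_i rho P_i is the selected (unnormalized) subcollection,
# M(R)[rho] = sum_i P_i rho P_i is the non-selective transformed state.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # projections for outcomes 0, 1

def instrument(outcomes, rho):
    return sum(P[i] @ rho @ P[i] for i in outcomes)

rho = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # pure state |+><+|

# Each operation is positive and trace non-increasing.
sel = instrument([0], rho)
assert np.linalg.eigvalsh(sel).min() >= -1e-12
assert sel.trace() <= 1 + 1e-12

# Normalization: the non-selective transformation preserves the trace.
nonsel = instrument([0, 1], rho)
assert np.isclose(nonsel.trace(), 1.0)

# The adjoint M'(M)[1] recovers the outcome statistics: here the associated
# positive operator-valued measure is the projection-valued measure itself,
# so Tr(M({i})[rho]) = Tr(rho P_i).
for i in range(2):
    assert np.isclose(instrument([i], rho).trace(), np.trace(rho @ P[i]))
```

The non-selective output here is $\mathrm{diag}(1/2,1/2)$: the coherences of $|+\rangle$ are destroyed, illustrating that the repreparation is in general not reversible.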
\subsection{Open systems and irreversibility}
\label{decoh}
As a last remark we note that the operational approach we have very
briefly and incompletely sketched, stressing the relevance of mappings
acting on states living in the space of trace class operators,
corresponding to transformation of states (Schr\"odinger picture),
together with the adjoint mappings acting in the space of bounded
operators (Heisenberg picture), does not apply only to the description
of a measurement process. These mappings also describe the
spontaneous repreparation of a system as time elapses, i.e., its
dynamics. If the system is closed, so that one has reversibility,
then it can be shown that the time-evolution mapping necessarily has
the form $\mathcal{L}[\rho]=U (t)\rho U^{\dagger} (t)$, with $U (t)$ a
unitary operator, and no measuring decomposition, consisting in
sorting statistical subcollections on the basis of a certain
measurement outcome, applies. In the general case of an open system,
however, irreversibility comes in, either due to the interaction with some
environment or to the effect of some measuring apparatus, so that more
general mappings appear in order to describe this wider class of
transformations of quantum states and observables. This is an open and
very active field of research, of interest both to mathematicians and
physicists~\cite{HolevoNEW,Petruccione}, where the relevance of
concepts and techniques inherited from, generalized from, or inspired by the
classical theory of probability and stochastic processes cannot be
overstated. A general characterization of such mappings has been
obtained only in a few cases, exploiting their property of being
completely positive. For example, in the description of irreversible
Markovian dynamics a landmark result has been obtained by Gorini,
Kossakowski, Sudarshan, and Lindblad~\cite{GoriniJMP76-Lindblad},
leading to the so-called Lindblad structure of the master equation, very
useful in applications~\cite{Alicki,Petruccione}. Important hints and
restrictions on the structure of such mappings come from the
requirement of covariance under the action of some symmetry group
relevant for the system at hand~\cite{HolevoNEW}. Guided by our own
interests and work, let us mention recent results in this framework,
dealing with quantum Brownian motion~\cite{art3-art5-art6-art7-art10}
and decoherence due to momentum-transfer events~\cite{garda03-art12},
where the relevance of covariance and of probabilistic concepts appears
at work.
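A minimal numerical sketch of a master equation of Lindblad structure (our illustration, assuming the standard amplitude-damping Lindblad operator for a qubit, which is not discussed in the text above) exhibits the two hallmarks mentioned here: a trace-preserving, completely positive evolution and irreversible decay.

```python
import numpy as np

# Lindblad master equation
#   d rho/dt = -i [H, rho] + gamma (L rho L^+ - (1/2){L^+ L, rho})
# for qubit amplitude damping, integrated with a small Euler step.
H = np.zeros((2, 2))                    # no Hamiltonian part, for simplicity
L = np.array([[0.0, 1.0], [0.0, 0.0]])  # lowering operator |0><1|
gamma, dt, steps = 1.0, 1e-4, 20000     # evolve up to t = 2

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

rho = np.diag([0.0, 1.0]).astype(complex)  # start in the excited state
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

t = dt * steps
assert np.isclose(np.trace(rho).real, 1.0, atol=1e-6)             # trace preserved
assert np.isclose(rho[1, 1].real, np.exp(-gamma * t), atol=1e-3)  # exponential decay
```

No unitary $U(t)$ can reproduce this contraction of the excited-state population, which is exactly the irreversibility referred to above.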
\section{From microsystems to macrosystems}
\label{microtomacro}
In Ludwig's point of view space-time symmetries arise as follows:
consider an experiment with preparation part ${\cal Q}$ and
registration part ${\cal R}_0$ ${\cal R}$; placing a reference
frame on ${\cal Q}$, one obtains a family of symmetry-transformed
registration parts $g {\cal R}_0$ $g{\cal R}$ for any reference-frame
transformation $g$ belonging to the relevant symmetry group, and a new
experiment ${\cal Q}$, $g {\cal R}_0$ $g{\cal R}$ can be considered. By
an appropriate treatment~\cite{Ludwig-Foundations} one recovers the
typical results of usual symmetry theory based on Wigner's theorem,
where the Hilbert space $\mathcal{H}$ associated with a single
microsystem carries a unitary projective representation of the Galilei
group: if the microsystem is elementary such a representation is
irreducible\cite{Mackey}.
So far only a single microsystem has been treated; however, one has
evidence of different elementary microsystems and of a huge set of
non-elementary ones. The general description of different types of
microsystems, labelled $1,2,\ldots,n$, requires an $n$-tuple of
Hilbert spaces $\mathcal{H}_1,\ldots,\mathcal{H}_n$, an element of
${\cal K}$ being an $n$-tuple of positive trace-class operators
$W_1,\ldots,W_n$, normalised according to $\sum_{i=1}^n \Tr W_i =1$,
and an element of ${\cal L}$ being an $n$-tuple $F_1,\ldots,F_n$ of
operators on $\mathcal{H}_1,\ldots,\mathcal{H}_n$ such that $0\leq F_i
\leq \openone_i$, $i=1,\ldots,n$; finally $\mu (w,f) =
\sum_{i=1}^n \Tr (W_i F_i)$. The effect $f_i =
(0,\ldots,0,\openone_i,0,\ldots,0)$, with probability $\mu (w,f_i) =
\Tr (W_i)$, corresponds to the registration of a microsystem of type
$i$. It turns out that non-elementary microsystems are described in
Hilbert spaces
\begin{equation}
\mathcal{H}_i = h^{(\mathrm{e})}_{\alpha_1} \otimes \ldots \otimes h^{(\mathrm{e})}_{\alpha_{\kappa_i}}
\label{eq:mm2}
\end{equation}
where $h^{(\mathrm{e})}_{\alpha_j}$ is the Hilbert space of an
elementary microsystem (the superscript $(\mathrm{e})$ standing for
elementary) and ${\kappa_i}$ is the number of elementary components:
the basic simplification is the restricted number of the latter ones.
A large variety of non-elementary microsystems can be understood taking
electrons and nuclei as the elementary microsystems in the context of
electromagnetism; with deeper understanding it was discovered that
nuclei are not elementary microsystems, and a smaller number of more
fundamental microsystems is introduced in present-day subnuclear
physics. In the tensor product (\ref{eq:mm2}) many factors are
repeated: of these repeated factors only the completely symmetric or
antisymmetric part must be taken. This is a very important correction
to the simple structure (\ref{eq:mm2}), which we indicate simply by
$\mathcal{H}^{\sigma}_{i}$, the superscript $\sigma$ standing for the
aforementioned symmetrizations.
In the non relativistic case symmetry transformations for non elementary microsystems
can be obtained from those associated with the elementary components.
Here the self-adjoint generators of the one-parameter subgroups acquire an
outstanding importance, and the projection-valued measures associated with
them provide observables with a straightforward physical interpretation,
such as position, momentum, angular momentum, and energy. Apart from the
energy, these have an additive structure and their expectation values
take a simple form:
\begin{equation}
\langle A \rangle = \sum_i \Tr_{\mathcal{H}^{\sigma}_{i}} (A_i W_i) \qquad
A_i = \sum_{l=1}^{\kappa_i} A^{(\mathrm{e})}_l .
\label{eq:mm3}
\end{equation}
In the case of the energy a non-additive contribution generally arises,
called the \textit{interaction energy}, responsible for binding
elementary microsystems into composite microsystems.
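The additive structure of Eq.~(\ref{eq:mm3}) can be checked in the smallest nontrivial case (our own sketch, using two spin-1/2 elementary components and a product state; these choices are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Additive observable A = a (x) 1 + 1 (x) a on a composite of two elementary
# spin-1/2 systems; on a product state the expectation splits into the sum
# of the elementary expectations, as in Eq. (mm3).
sz = 0.5 * np.diag([1.0, -1.0])         # elementary observable (spin-z)
I2 = np.eye(2)
A = np.kron(sz, I2) + np.kron(I2, sz)   # additive composite observable

# Product statistical operator W = w1 (x) w2.
w1 = np.diag([0.8, 0.2])
w2 = np.diag([0.3, 0.7])
W = np.kron(w1, w2)

lhs = np.trace(A @ W)                   # <A> on the composite system
rhs = np.trace(sz @ w1) + np.trace(sz @ w2)
assert np.isclose(lhs, rhs)             # additivity of expectation values
```

For the energy the analogous check fails as soon as an interaction term such as $\lambda\, a \otimes a$ is added, which is precisely the non-additive interaction-energy contribution mentioned above.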
When a classical limit holds, the picture emerges of elementary
microsystems as elementary classical particles and of composite
microsystems as structures of these interacting particles, showing
quantities built up collectively from elementary contributions: a
composite system with very many components is then a macrosystem. A phase
space $\Gamma$ emerges, a state of the macrosystem being a point $P$
in this space; a selection procedure can be represented by suitable
subsets of $\Gamma$, and it becomes a statistical selection procedure
when a probability density $\rho (P)$ is given on $\Gamma$. Of course
$\Gamma$ is a huge space, and giving $\rho (P)$ and calculating the
functions $\lambda (a,b)$ can be very difficult. It is actually along
this way that experimental settings are invented, realized, and finally
made to work; a feeling is also established which helps in correctly
guessing the $W$ and $F$ associated with $a \in {\mathcal Q}$ and
$b_0 b \in {\mathcal R}_0 {\mathcal R}$. However, all this is an
approximation which, e.g., cannot really grasp the typical quantum
feature of $\mathcal{H}^{\sigma}_{i}$ replacing ${\mathcal{H}}_i$: it is
the absence of $\Gamma$ that makes it difficult to represent statistical
selection procedures. A problem thus appears in closing Ludwig's point
of view consistently within present-day quantum theory. Ludwig aims
at a more comprehensive theory, which should provide in a natural way
a state space for a macroscopic system.
We shall now conclude this discussion by briefly indicating a way we have
taken to face this problem~\cite{torun99-holevo-qic}. First of all let
us stress a peculiar role that quantum field theory can have with
respect to macrosystems. A macrosystem is the physical support of all
possible types of microsystems; we shall not rely on the naive atomistic
point of view that it is composed of them; instead, it is the carrier of
all of them.
If we consider the microsystems prepared when a macrosystem evolves until
a time $t$, the corresponding $W_t \in {\mathcal K}$ shows, through the
structure $W_{1t},W_{2t},\ldots,W_{nt}$, which types of microsystems have
been prepared; this typology varies with the time $t$, and the number of
microsystems $N_{it}$ becomes an interesting quantity. The question
immediately arises of an underlying Hilbert space such that the
$\mathcal{H}^{\sigma}_{i}$ are isomorphic to subspaces of it and the
connection $\mathcal{H}_i \to \mathcal{H}^{\sigma}_{i}$ possibly
becomes natural; an observable then arises to be interpreted as the
number of microsystems of type $\alpha$.
It is well known how quantum field theory solves this question in a
brilliant way: for each elementary microsystem
a Fock space ${\mathcal{H}_{F}}_{\alpha}$ is defined and the Hilbert space
is given by
\begin{equation}
\mathfrak{H}=\prod_{\alpha}\otimes {\mathcal{H}_{F}}_{\alpha}
\label{eq:4}
\end{equation}
where the factors are the Fock spaces associated with each type of
elementary microsystem.
In this setting \eqref{eq:mm3} is replaced by $\langle A \rangle = \Tr (A W)$,
$W$ being a statistical operator on $\mathfrak{H}$
and $A$ a self-adjoint operator in $\mathfrak{H}$.
\subsection{Quantum field theory and macrosystems}
\label{sec:from-micr-macr}
While the operators on the Hilbert spaces ${\mathcal H}_i$ are
constructed in terms of fundamental operators $x$ and $p$, having the
meaning of position and momentum with a clear classical limit, so that
quantum theory appears close to classical atomistic physics via a
\textit{quantization} procedure, in this new setting related to
$\mathfrak{H}$ given by~\eqref{eq:4} there appear fundamental operators,
by which $W$ and $A$ are constructed, that connect the subspaces
characterized by a fixed number of elementary microsystems, acting as
creation and annihilation operators of the elementary microsystems.
Therefore the Hilbert space $\mathfrak{H}$ and the related set of
statistical operators appear as natural candidates for a quantum
theory of a macrosystem, and one can expect that by focusing quantum
field theory on macrosystems one can both improve the characterization
of physically meaningful statistical operators in $\mathfrak{H}$ and
account for the objectivity elements which should characterize
macrosystems. One is immediately confirmed in this idea by the fact
that precisely this new framework provides field observables $A
(\mathbf{x})$ as densities of generators of symmetry transformations,
which obey typical balance equations, so that $\Tr (A (\mathbf{x})
W)=\langle A (\mathbf{x})\rangle$ can be interpreted as the expectation
of the physical quantities one needs in the phenomenological
description of macroscopic systems. Furthermore there is the
well-known example of a macrosystem at equilibrium. It is described by
the statistical operator:
\begin{equation}
\label{eq:1}
W\equiv \frac{e^{-\beta (H_{\Omega}-\mu N)}}{\Tr e^{-\beta (H_{\Omega}-\mu N)}},
\end{equation}
which is built in terms of the relevant observables: the energy
$H_{\Omega}$ and the new observable $N$, typical of the passage
$h^{(\mathrm{e})}_{\alpha}\rightarrow {\mathcal{H}_{F_{\alpha}}}$, in the
simplest case of only one type of elementary microsystem. $H_{\Omega}$ is
the operator in $\mathfrak{H}$ constructed in terms of an energy density
$H (\mathbf{x})$:
$H_{\Omega}=\int_{\Omega} d^3\! \mathbf{x} \, H (\mathbf{x})$, where
$H (\mathbf{x})$ is obtained in terms of the fundamental field
operator $\psi (\mathbf{x})$ as established by quantum field
techniques, also taking into account boundary conditions on $\Omega$. The
input of all this is the Hamilton operator, which comes from time
translations of a microsystem; by this resetting one actually ends up
with a self-adjoint operator $H_{\Omega}$ having a point spectrum, so
that the trace-class operator $W$ can be constructed when the
parameters $\beta$ and $\mu$ are in appropriate ranges. These
parameters label the different equilibrium macrosystems and have a
precise phenomenological meaning as temperature and chemical
potential, entering in a primary way in any macroscopic selection
procedure.
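The construction of $W$ in Eq.~\eqref{eq:1} can be made concrete in the simplest setting (our own sketch, assuming a single bosonic mode with $H_{\Omega}=\omega N$ on a truncated Fock space; the truncation stands in for the point spectrum that makes $W$ trace class):

```python
import numpy as np

# Grand-canonical statistical operator W = exp(-beta (H - mu N)) / Z for a
# single bosonic mode on a truncated Fock space, with H = omega N.
dim, omega, beta, mu = 60, 1.0, 1.0, -0.5   # truncation and parameters
n = np.arange(dim)                          # eigenvalues of N in the Fock basis
N = np.diag(n.astype(float))

w = np.exp(-beta * (omega - mu) * n)        # unnormalized Gibbs weights
W = np.diag(w / w.sum())                    # statistical operator, Tr W = 1

assert np.isclose(np.trace(W), 1.0)

# <N> reproduces the Bose-Einstein occupation 1/(e^{beta(omega-mu)} - 1),
# up to a truncation error which is negligible for dim = 60 here.
mean_n = np.trace(N @ W)
assert np.isclose(mean_n, 1.0 / (np.exp(beta * (omega - mu)) - 1.0), atol=1e-10)
```

Varying $\beta$ and $\mu$ sweeps through the family of equilibrium macrostates, exactly the phenomenological labelling by temperature and chemical potential described above.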
\subsection{The role of non-equilibrium states}
\label{sec:role-non-equilibrium}
The statistical operator given in~\eqref{eq:1} is an element of $\mathcal{K}
(\mathfrak{H})$, constructed as a function of the observables
$H_{\Omega}$ and $N$, so that the total mass is related to a
superselection rule. The impact of thermodynamics at equilibrium is so
fruitful that one wonders whether one can generalize it outside the
very particular and in a sense too strongly idealized situation
described as \textit{equilibrium}, still satisfying the superselection
rule for the total mass. Once a suitable set of
\textit{relevant} linearly independent field observables $A_j
(\mathbf{x})$ is given in $\mathfrak{H}$ one considers a set of
classical fields $\zeta_j (\mathbf{x})$ such that the operator $\Phi
(\zeta)\equiv \sum_j\int_{\Omega} d^3\! \mathbf{x}\, \zeta_j
(\mathbf{x})A_j (\mathbf{x})$ is essentially self-adjoint and
$e^{-\Phi (\zeta)}$
is trace class so that a
statistical operator, that we call macroscopic reference state, can be
defined:
\begin{equation}
\label{gibbsref}
W_{\zeta}= \frac{\mathrm{e}^{-\Phi (\zeta)}}{\Tr \mathrm{e}^{-\Phi (\zeta)}}.
\end{equation}
The classical fields represent a \textit{local generalization} of the previous
equilibrium parameters $\beta$, $\mu$: the field operators $A_j
(\mathbf{x})$ have a quasi-local character in the sense that they
depend on ${\psi}(\mathbf{x})$,
${\psi}^{\dagger} (\mathbf{y})$ for
$|\mathbf{x}-\mathbf{y}|\ll\delta$ with $\delta$ much smaller than the
typical variation scale of the state parameters $\zeta_j
(\mathbf{x})$. Such a quasi-local character emerges if one considers
the fundamental mechanical densities, which we recall in the
non-relativistic case: mass density, momentum density, and kinetic energy
density, where higher derivatives inside the different expressions
loosely mean less locality. The field $\psi(\mathbf{x})=\int d^3\!
\mathbf{x}_1 \ldots d^3\! \mathbf{x}_k\, g
(\mathbf{x},\mathbf{x}_1,\ldots,\mathbf{x}_k) \psi_1
(\mathbf{x}_1)\ldots\psi_k (\mathbf{x}_k)$ refers to a field
composed by elementary ones, $g
(\mathbf{x},\mathbf{x}_1,\ldots,\mathbf{x}_k)$ being a suitable
structure function of microsystems, concentrated for
$|\mathbf{x}-\mathbf{x}_{i}|\ll\delta$, $i=1,2,\ldots,k$. The
reference state~\eqref{gibbsref}, which we call a \textit{macrostate}
with state parameters $\zeta(\mathbf{x})$, provides a geometrical
structure~\cite{Streater} which replaces the missing phase space, and the
expression $-k\Tr (W_{\zeta}\log W_{\zeta} )$ acquires the role
of thermodynamical entropy. The subtlety with the relevant variables is
that their linear span is not invariant under time evolution.
The far-reaching consequence of this is that the general dynamics of a
macrosystem cannot be described only by a family of
\textit{macrostates} $W_{\zeta_t}$ with a suitable choice of
time-dependent state parameters ${\zeta_t}$: a more general framework is
necessary, and in addition to the relevant variables, \textit{irrelevant}
ones impose themselves. One succeeds, however, in constructing
statistical operators $\rho_t$, solutions of the Liouville--von Neumann
equation, having as input the reference state~\eqref{gibbsref},
displaying the whole history $\zeta_{t'}(\mathbf{x})$, $t'<t$, of
the state parameters, and also introducing irreversibility in a
fundamental way. This is also the philosophy behind the
non-equilibrium statistical operator formalism initially proposed by
Zubarev~\cite{Zubarev} and extensively used in non-equilibrium
thermodynamics~\cite{Roepke}. Now we come to the main difficulty: the
construction of $\rho_t$ by means of \textit{one} state
$W_{\zeta_{t}}$, i.e., of a $\rho_t$ carrying only \textit{one} family of
state parameters, is in general successful only for suitable time
intervals. Mixtures of several reference states with the
structure~\eqref{gibbsref}, but modified through the creation of
microsystems in suitable states ${\psi_\alpha}_t\in\mathcal{H}_i$, also
appear, providing a much richer parametrization of $\rho_t$ in terms
of states ${\psi_\alpha}_t$ of microsystems with certain statistical
weights and state parameters ${\zeta_{\alpha}}_t$ influenced by the
microsystem. In conclusion, on the space $\mathfrak{H}$ state
parameters having a direct relevance in macroscopic phenomenology
can be introduced in a natural way. Deterministic evolution of one set
of state parameters means that a selection has been made fine enough
to spare them a statistical description; in general, however, the
dynamics is not deterministic, at least piecewise, and this is described
by the appearance of microsystems.
Looking at quantum field theory in this way, microsystems become
intertwined with the reference macrostate, which in a sense replaces
the vacuum state of the usual field-theoretical treatment of
microsystems: this can also have an impact on the general description
of interactions in the context of relativistic quantum field theory.
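The entropy functional $-k\Tr (W_{\zeta}\log W_{\zeta})$ attached to a macrostate of the form~\eqref{gibbsref} can be evaluated spectrally. The following sketch is our own illustration (with $k=1$ and a single two-level ``cell'' standing in for the operator $\Phi(\zeta)$, an assumption made purely for concreteness); it checks that the functional reduces to the Shannon entropy of the spectrum of $W_{\zeta}$.

```python
import numpy as np

# von Neumann entropy -Tr(W log W), evaluated through the spectrum of W.
def von_neumann_entropy(W):
    ev = np.linalg.eigvalsh(W)
    ev = ev[ev > 1e-15]          # 0 log 0 = 0 by convention
    return -np.sum(ev * np.log(ev))

# Reference state of Gibbs form W = e^{-Phi}/Tr e^{-Phi} for a toy two-level cell.
Phi = np.array([[0.0, 0.2], [0.2, 1.0]])   # stand-in "relevant" operator Phi(zeta)
E, V = np.linalg.eigh(Phi)
w = np.exp(-E) / np.exp(-E).sum()          # spectrum of the reference state
W = V @ np.diag(w) @ V.T

S = von_neumann_entropy(W)
assert np.isclose(np.trace(W), 1.0)
assert np.isclose(S, -np.sum(w * np.log(w)))   # Shannon entropy of the spectrum
assert 0.0 <= S <= np.log(2) + 1e-12           # bounds for a two-level system
```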
\section{Conclusions and Outlook}
\label{sec:conclusions-outlook}
We have recalled the modern formulation of quantum theory, with
observables given by positive operator-valued measures and the evolution
of possibly open systems given by mappings on states, pointing in
particular to the starting point of Ludwig's approach. In this framework
the essentially statistical character of the phenomenology of
microsystems appears as a universal feature of typical non-equilibrium
systems. We have argued that quantum field theory, which emerges as
the underlying theory of all microsystems, appears as a natural framework
in which statistical operators can be constructed carrying objective
state parameters consisting of classical fields, which generalize the
well-known parametrization in terms of temperature and chemical
potential. Evidence of microsystems is related to the breakdown of
the deterministic time evolution of these state parameters: then, by
facing stochasticity in their dynamics, the quantum theory of microsystems
can emerge in a natural way from quantum field theory, initially
focused on macrosystems, putting in a new light the question of a
proper separation of their dynamics from the macroscopic background.
\begin{acknowledgments}
The authors gratefully acknowledge financial support by MIUR under FIRB
and PRIN05.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Economical Convex Coverings and Applications\thanks{An earlier version of this paper appeared in the \textit{Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms} (SODA), pp. 1834--1861, 2023.}}
\author{
Sunil Arya\thanks{Research supported by the Research Grants Council of Hong Kong, China under project numbers 16213219 and 16214721. The work of David Mount was supported by NSF grant CCF--1618866. The work of Guilherme da Fonseca was supported by the French ANR PRC grant ADDS (ANR-19-CE48-0005).}\\
Department of Computer Science and Engineering \\
The Hong Kong University of Science and Technology, Hong Kong\\
[email protected]
\and
Guilherme D. da Fonseca\footnotemark[1]\\
Aix-Marseille Universit\'{e} and LIS Lab, France\\
[email protected]
\and
David M. Mount\footnotemark[1]\\
Department of Computer Science and
Institute for Advanced Computer Studies \\
University of Maryland, College Park, Maryland \\
[email protected]
}
\date{}
\maketitle
\begin{abstract}
Coverings of convex bodies have emerged as a central component in the design of efficient solutions to approximation problems involving convex bodies. Intuitively, given a convex body $K$ and $\varepsilon > 0$, a \emph{covering} is a collection of convex bodies whose union covers $K$ such that a constant factor expansion of each body lies within an $\varepsilon$ expansion of $K$. Coverings have been employed in many applications, such as approximations for diameter, width, and $\varepsilon$-kernels of point sets, approximate nearest neighbor searching, polytope approximations with low combinatorial complexity, and approximations to the Closest Vector Problem (CVP).
It is known how to construct coverings of size $n^{O(n)} / \varepsilon^{(n-1)/2}$ for general convex bodies in $\mathbb{R}^n$. In special cases, such as when the convex body is the $\ell_p$ unit ball, this bound has been improved to $2^{O(n)} / \varepsilon^{(n-1)/2}$. This raises the question of whether such a bound generally holds. In this paper we answer the question in the affirmative.
We demonstrate the power and versatility of our coverings by applying them to the problem of approximating a convex body by a polytope, where the error is measured through the Banach-Mazur metric. Given a well-centered convex body $K$ and an approximation parameter $\varepsilon > 0$, we show that there exists a polytope $P$ consisting of $2^{O(n)} / \varepsilon^{(n-1)/2}$ vertices (facets) such that $K \subset P \subset K(1+\varepsilon)$. This bound is optimal in the worst case up to factors of $2^{O(n)}$. (This bound has been established recently using different techniques, but our approach is arguably simpler and more elegant.) As an additional consequence, we obtain the fastest $(1+\varepsilon)$-approximate CVP algorithm that works in any norm, with a running time of $2^{O(n)} / \varepsilon^{(n-1)/2}$ up to polynomial factors in the input size, and we obtain the fastest $(1+\varepsilon)$-approximation algorithm for integer programming. We also present a framework for constructing coverings of optimal size for any convex body (up to factors of $2^{O(n)}$).
\end{abstract}
\noindent\textbf{Keywords:} Approximation algorithms, high dimensional geometry, convex coverings, Banach-Mazur metric, lattice algorithms, closest vector problem, Macbeath regions
\section{Introduction} \label{s:intro}
Convex bodies are of fundamental importance in mathematics and computer science, and given the high complexity of exact representations, concise approximate representations are essential to many applications. There are a number of ways to define the distance between two convex bodies (see, e.g., \cite{Bor00}), and each gives rise to a different notion of approximation. While Hausdorff distance is commonly studied, it is not sensitive to the shape of the convex body. In this paper we will consider a common linear-invariant distance, called the Banach-Mazur distance.
Given two convex bodies $X$ and $Y$ in real $n$-dimensional space, $\mathbb{R}^n$, both of which contain the origin in their interiors, their \emph{Banach-Mazur distance}, denoted $\dist_{\text{BM}}(X,Y)$, is defined to be the minimum value of $\ln \lambda$ such that there exists a linear transformation $T$ with $T X \subseteq Y \subseteq \lambda \cdot T X$. Given $\delta > 0$, we say that $Y$ is a \emph{Banach-Mazur $\delta$-approximation} of $X$ if $\dist_{\text{BM}}(X,Y) \leq \delta$. $T$ will be the identity transformation in our constructions, and thus, given a convex body $K$ in $\mathbb{R}^n$ and $\varepsilon > 0$, we seek a convex polytope $P$ such that $K \subseteq P \subseteq (1+\varepsilon) K$. This implies that $\dist_{\text{BM}}(K,P) \leq \ln (1+\varepsilon)$, which is approximately $\varepsilon$ for small $\varepsilon$. The scaling takes place about the origin, and it is standard practice to assume that $K$ is well-centered in the sense that the origin lies within $K$ and is not too close to $K$'s boundary. (See Section~\ref{s:centrality} for the formal definition.) Unlike the Hausdorff distance, the Banach-Mazur measure has the desirable property of being sensitive to $K$'s shape, being more accurate where $K$ is narrower and less accurate where $K$ is wider.
The principal question is the following: given $n$ and $\varepsilon > 0$, what is the minimum number of vertices (or facets) needed to $\varepsilon$-approximate any convex body $K$ in $\mathbb{R}^n$ by a polytope in the above sense? This problem has been well studied. Existing bounds hold under the assumption that $K$ is well-centered. We say that a bound is \emph{nonuniform} if it holds for all $\varepsilon \leq \varepsilon_0$, where $\varepsilon_0$ depends on $K$. Typical nonuniform bounds assume that $K$ is smooth, and the value of $\varepsilon_0$ depends on $K$'s smoothness. Our focus will be on uniform bounds, where $\varepsilon_0$ does not depend on $K$.
Dudley~\cite{Dud74} and Bronshtein and Ivanov~\cite{BrI76} provided uniform bounds in the Hausdorff context, but their results can be recast under Banach-Mazur, where they imply the existence of an approximating polytope with $n^{O(n)} / \varepsilon^{(n-1)/2}$ vertices (facets). For smooth convex bodies, B{\"{o}}r{\"{o}}czky \cite{Bor00,Gru93b} established a nonuniform bound of $2^{O(n)} / \varepsilon^{(n-1)/2}$. Barvinok~\cite{Bar14} improved the bound in the uniform setting for symmetric convex bodies. Ignoring a factor that is polylogarithmic in $1/\varepsilon$, his bound is $2^{O(n)} / \varepsilon^{n/2}$. Finally, Nasz{\' o}di, Nazarov, and Ryabogin obtained a worst-case optimal approximation of size $2^{O(n)} / \varepsilon^{(n-1)/2}$~\cite{NNR20}. Their bound is uniform and holds for general convex bodies.
The main result of this paper is an alternative asymptotically optimal construction of an $\varepsilon$-approximation of a convex body $K$ in $\mathbb{R}^n$ in the Banach-Mazur setting. Our construction is superior to that of \cite{NNR20} in two ways. First, while the construction presented in \cite{NNR20} is very clever, it involves combining a number of technical elements (transforming the body to standard position, rounding it, computing a Bronshtein--Ivanov net, and filtering to reduce the sample size). In contrast, ours is quite simple. We employ a greedy process that samples points from $K$'s interior, and the final approximation is just the convex hull of these points. Second, our construction is more powerful in that it provides an additional covering structure for $K$. Each sample point is associated with a centrally symmetric convex body, and together these bodies form a cover of $K$ such that their union lies within the expansion $(1+\varepsilon) K$. As a direct consequence of this additional structure, we obtain the fastest approximation algorithm to date for the closest vector problem (CVP) that operates in any norm.
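The flavor of such a greedy process can be conveyed in the plane ($n=2$), where the bound predicts $O(1/\sqrt{\varepsilon})$ vertices. The following sketch is our own drastic simplification, not the paper's general algorithm: for $K$ the unit disk, boundary samples with maximum angular gap $\Delta$ have a convex hull $P$ satisfying $K \subseteq (1+\varepsilon)P$ exactly when $(1+\varepsilon)\cos(\Delta/2) \geq 1$, so we greedily bisect the largest gap until this holds.

```python
import math

# Greedy vertex sampling for n = 2 and K the unit disk (simplified sketch).
# Maintain boundary sample angles and bisect the largest angular gap until the
# hull P of the samples satisfies K subset (1+eps) P, i.e.
# (1+eps) * cos(max_gap / 2) >= 1.
def greedy_disk_approx(eps):
    angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]   # start with a triangle
    while True:
        angles.sort()
        gaps = [(angles[(i + 1) % len(angles)] - angles[i]) % (2 * math.pi)
                for i in range(len(angles))]
        i = max(range(len(gaps)), key=gaps.__getitem__)
        if (1 + eps) * math.cos(gaps[i] / 2) >= 1:
            return angles                              # hull is close enough
        angles.append(angles[i] + gaps[i] / 2)         # bisect the worst gap

for eps in [0.1, 0.01, 0.001]:
    m = len(greedy_disk_approx(eps))
    # The vertex count scales like 1/eps^{(n-1)/2} = 1/sqrt(eps) for n = 2.
    assert m <= 10 / math.sqrt(eps)
```

The general construction, of course, must cope with arbitrary well-centered bodies in high dimension through a membership oracle; the planar disk merely makes the $1/\varepsilon^{(n-1)/2}$ scaling visible.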
\subsection{Our Results} \label{s:results}
Throughout, we assume that $K$ is a full-dimensional convex body in $\mathbb{R}^n$, which is well-centered about the origin. There are a number of notions of centrality that suffice for our purposes (see Section~\ref{s:centrality} for formal definitions). Our first result involves the existence of concise coverings. Given a convex body $K$ that contains the origin in its interior and reals $c \geq 1$ and $\varepsilon > 0$, a \emph{$(c,\varepsilon)$-covering} of $K$ is a collection $\mathcal{Q}$ of bodies whose union covers $K$ such that a factor-$c$ expansion of each $Q \in \mathcal{Q}$ about its centroid lies within $(1+\varepsilon) K$ (see Figure~\ref{f:cover-basic}). Coverings have emerged as an important tool in convex approximation. They have been applied to several problems in the field of computational geometry, including combinatorial complexity~\cite{AAFM22, AFM17c, AFM12b}, approximate nearest neighbor searching~\cite{AFM17a}, and computing the diameter and $\varepsilon$-kernels~\cite{AFM17b}.
\begin{figure}
\caption{A $(c,\varepsilon)$-covering: the expansion of each covering body by a factor $c$ about its centroid lies within $(1+\varepsilon) K$.}
\label{f:cover-basic}
\end{figure}
Given a convex body in $\mathbb{R}^n$, a constant $c \geq 1$, and a parameter $\varepsilon > 0$, what is the minimum size of a $(c,\varepsilon)$-covering as a function of $n$ and $\varepsilon$? Abdelkader and Mount considered the problem in spaces of constant dimension~\cite{AbM18}. They did not analyze their bounds for the high-dimensional case, but based on results from \cite{AFM17a}, it can be shown that their results yield an upper bound of $n^{O(n)} / \varepsilon^{(n-1)/2}$ in $\mathbb{R}^n$. A number of special cases have been explored in the high-dimensional setting. Nasz{\' o}di and Venzin demonstrated the existence of $(2,\varepsilon)$-coverings of size $2^{O(n)} / \varepsilon^{n/2}$ when $K$ is an $\ell_p$ ball for any fixed $p \geq 2$~\cite{NaV22}. For the $\ell_{\infty}$ ball, Eisenbrand, H{\"a}hnle, and Niemeier showed the existence of $(2,\varepsilon)$-coverings of size $2^{O(n)} \log^n (1/\varepsilon)$, consisting of axis-parallel rectangles~\cite{EHN11}. They also presented a nearly matching lower bound of $2^{-O(n)} \log^n(1/\varepsilon)$, even when the covering may consist of parallelepipeds.
In this paper we establish the following bound on the size of $(c,\varepsilon)$-coverings, which holds for any well-centered convex body in $\mathbb{R}^n$.
\begin{theorem} \label{thm:cover-worst}
Let $0 < \varepsilon \leq 1$ be a real parameter and $c \geq 2$ be a constant. Let $K \subseteq \mathbb{R}^n$ be a well-centered convex body. Then there is a $(c,\varepsilon)$-covering for $K$ consisting of at most $2^{O(n)} / \varepsilon^{(n-1)/2}$ centrally symmetric convex bodies.
\end{theorem}
It is not difficult to prove a lower bound of $2^{-O(n)} / \varepsilon^{(n-1)/2}$ on the size of any $(2,\varepsilon)$-covering for Euclidean balls (see, e.g., Nasz{\'o}di and Venzin \cite{NaV22}). Therefore, the above bound is optimal with respect to its dependence on $\varepsilon$. In Section~\ref{s:instance-opt} (Theorem~\ref{thm:cover-inst}), we prove that for any constant $c \geq 2$, our construction is in fact instance optimal to within a factor of $2^{O(n)}$. That is, for any well-centered convex body $K$, the size of our covering exceeds that of the smallest $(c,\varepsilon)$-covering for $K$ by a factor of at most $2^{O(n)}$. In Section~\ref{s:apps-cvp}, we present a randomized algorithm that constructs a slightly larger covering (by a factor of $\log(1/\varepsilon)$). Following standard convention, our constructions assume that access to $K$ is provided by a weak membership oracle (defined in Section~\ref{s:apps}).
We present a number of applications of this result. First, in Section~\ref{s:approx-BM} we show that the convex hull of the center points of the covering elements yields an approximation in the Banach-Mazur metric.
\begin{theorem} \label{thm:approx-BM}
Given a well-centered convex body $K$ and an approximation parameter $\varepsilon > 0$, there exists a polytope $P$ with at most $2^{O(n)} / \varepsilon^{(n-1)/2}$ vertices (alternatively, facets) such that $K \subset P \subset (1+\varepsilon) K$.
\end{theorem}
There are also applications to lattice problems. In the \emph{Closest Vector Problem} (CVP), an $n$-dimensional lattice $L$ in $\mathbb{R}^n$ is given (that is, the set of integer linear combinations of $n$ basis vectors) together with a target vector $t \in \mathbb{R}^n$. The problem is to return a vector in $L$ closest to $t$ under some given norm. This problem has applications to cryptography~\cite{Odl90, JoS98, NgS01}, integer programming~\cite{Len83, DPV11, DaK16}, and factoring polynomials over the rationals~\cite{LLL82}, among several other problems. The problem is NP-hard for any $\ell_p$ norm~\cite{vEB81} and cannot be solved exactly in $2^{(1-\gamma)n}$ time for constant $\gamma > 0$, under certain conditional hardness assumptions~\cite{BGS17}.
This problem has a considerable history. The first solution proposed for CVP under the $\ell_\infty$ norm takes $2^{O(n^3)}$ time through integer linear programming~\cite{Len83}, which was later improved to $n^{O(n)}$~\cite{Kan87}. For the $\ell_2$ norm, Micciancio and Voulgaris presented an algorithm that runs in single exponential $2^{O(n)}$ time~\cite{MiV13}, and currently the fastest algorithm for exact Euclidean CVP is by Aggarwal, Dadush, and Stephens-Davidowitz~\cite{ADS15} and runs in $2^{n + o(n)}$ time. However, solving CVP exactly in single exponential time for norms other than Euclidean remains an open problem. (For additional information, see~\cite{HPS11}.) Dadush, Peikert, and Vempala~\cite{DPV11} considered CVP and the related Shortest Vector Problem (SVP) in the context of (possibly asymmetric) norms defined by convex bodies. Their work demonstrated a rich connection between lattice algorithms and convex geometry.
In the approximate version of the CVP problem, denoted \emph{$(1+\varepsilon)$-CVP}, we are also given a parameter $\varepsilon > 0$, and the goal is to find a lattice vector whose distance to $t$ is at most $1+\varepsilon$ times the optimum. CVP is NP-hard to approximate~\cite{Aro94, DKRS03} and conditional hardness results show that for $p \geq 1$ CVP in $\ell_p$ is hard to approximate in $2^{(1-\gamma)n}$ time for constant $\gamma > 0$, except when $p$ is even~\cite{ABGS21}.
The randomized sieving approach of Ajtai, Kumar, and Sivakumar~\cite{AKS01} was extended to approximate CVP for $\ell_p$ norms by Bl{\"o}mer and Naewe~\cite{BN09} and to the general case of well-centered norms by Dadush~\cite{Dad14}. These algorithms run in time and space $2^{O(n)} / \varepsilon^{2n}$. Building on the Voronoi cell approach~\cite{MiV13, DPV11}, Dadush and Kun~\cite{DaK16} presented deterministic algorithms that improved the running time to $2^{O(n)} / \varepsilon^{n}$ and space to $\widetilde{O}(2^n)$.
Eisenbrand, H{\"a}hnle, and Niemeier~\cite{EHN11} and Nasz{\'o}di and Venzin~\cite{NaV22} have explored the use of $(c,\varepsilon)$-coverings of the unit ball in the norm to obtain efficient algorithms for approximate CVP by ``boosting'' a weak constant-factor approximation to a strong $(1+\varepsilon)$-approximation. By exploiting the unique properties of hypercubes, Eisenbrand \textit{et al.}~\cite{EHN11} improved the running time for the $\ell_\infty$ norm to $2^{O(n)} \log^n(1/\varepsilon)$ time. Nasz{\'o}di and Venzin~\cite{NaV22} extended this approach to $\ell_p$ norms. The running time of their algorithm is $2^{O(n)} / \varepsilon^{n/2}$ for $p \ge 2$ and $2^{O(n)} / \varepsilon^{n/p}$ for $1 \le p \le 2$. The constants in the $2^{O(n)}$ term in the running time depend on $p$.
By applying our covering within existing algorithms, we obtain the fastest algorithm to date for $(1+\varepsilon)$-approximate CVP that operates in any norm. The algorithm is randomized and runs in single exponential time, $2^{O(n)} / \varepsilon^{(n-1)/2}$. (Following standard practice, we ignore factors that are polynomial in the input size.) The result is stated formally below.
\begin{theorem} \label{thm:cvp}
There is a randomized algorithm that, given any well-centered convex body $K$ and lattice $L$, solves the $(1+\varepsilon)$-CVP problem in the norm defined by $K$, in $2^{O(n)} / \varepsilon^{(n-1)/2}$-time and $O(2^{n})$-space, with probability at least $1 - 2^{-n}$.
\end{theorem}
Finally, through a reduction from approximate CVP to approximate integer programming (IP) due to Dadush~\cite{Dad14}, we present a randomized algorithm for approximate IP (see Theorem~\ref{thm:approx-ip} in Section~\ref{s:apps-ip}).
\subsection{Techniques} \label{s:techniques}
As mentioned above, coverings are a powerful tool in obtaining efficient solutions to approximation problems involving convex bodies. The fundamental problem tackled here involves the sizes of $(c,\varepsilon)$-coverings for general convex bodies in $\mathbb{R}^n$ and especially the dependencies on $\varepsilon$. Our approach employs a classical concept from convex geometry, called a \emph{Macbeath region}~\cite{Mac52}. Given a convex body $K$ and a point $x \in K$, the Macbeath region $M_K(x)$ is the largest centrally symmetric body centered at $x$ and contained in $K$ (see Figure~\ref{f:macbeath}(a)). Macbeath regions have found numerous uses in the theory of convex sets and the geometry of numbers (see B\'{a}r\'{a}ny~\cite{Bar00} for an excellent survey). They have also been applied to several problems in the field of computational geometry, including lower bounds~\cite{BCP93, AMM09b, AMX12}, combinatorial complexity~\cite{AFM12b, MuR14, AFM17c, DGJ19, AAFM22}, approximate nearest neighbor searching~\cite{AFM17a}, and computing the diameter and $\varepsilon$-kernels~\cite{AFM17b}.
\begin{figure}
\caption{(a)~The Macbeath region $M_K(x)$, the largest centrally symmetric body centered at $x$ and contained in $K$; (b)~a covering element obtained by shrinking the Macbeath region at $x$ with respect to the expanded body $K_{\varepsilon}$.}
\label{f:macbeath}
\end{figure}
In the context of $(c,\varepsilon)$-coverings, the obvious (and indeed maximal) choice for a covering element centered at any point $x$ is to take the Macbeath region centered at $x$ with respect to the expanded body $K_{\varepsilon} = (1+\varepsilon) K$, and then scale it by a factor of $\frac{1}{c}$ about $x$ (see Figure~\ref{f:macbeath}(b)). The construction and analysis of such Macbeath-based coverings is among the principal contributions of this paper. In their work on the economical cap cover, B\'{a}r\'{a}ny and Larman observed how Macbeath regions serve as an efficient agent for covering the region near the boundary of a convex body~\cite{BaL88}. While Macbeath regions can be quite elongated, especially near the body's boundary, they behave in many respects like fixed-radius balls in a metric space. (Vernicos and Walsh proved that shrunken Macbeath regions are similar in shape to fixed-radius balls in the Hilbert geometry induced by $K$~\cite{AbM18, VeW16}.) This leads to a very simple covering construction based on computing a maximal set of points such that the suitably shrunken Macbeath regions centered at these points are pairwise disjoint. The covering is then constructed by uniformly increasing the scale factor so the resulting Macbeath regions cover $K$.
Two challenges arise in implementing and analyzing this construction. The first is how to compute these Macbeath regions efficiently. The second is proving that this simple construction yields the desired bound on the size of the covering. A natural approach to the latter is a packing argument based on volume considerations. Unfortunately, this fails because Macbeath regions may have very small volume. Our approach for dealing with small Macbeath regions is to exploit a Mahler-like reciprocal property in the volumes of the Macbeath regions in the original body $K$ and its polar, $K^*$ (see Section~\ref{s:centrality} for definitions). In the low-dimensional setting, the analysis exploits a correspondence between caps in $K$ and $K^*$, such that the volumes of these caps have a reciprocal relationship (see, e.g., \cite{AAFM22}). As a consequence, for each Macbeath region in $K$ of small volume, there is a Macbeath region in $K^*$ of large volume. Thus, by randomly sampling in both $K$ and $K^*$, it is possible to hit all the Macbeath regions.
Generalizing this to the high-dimensional setting involves overcoming a number of technical difficulties. A straightforward generalization of the methods of \cite{AAFM22} yields a covering of size $n^{O(n)} / \varepsilon^{(n-1)/2}$. A critical step in the analysis involves relating the volumes of two $(n-1)$-dimensional convex bodies that arise by projecting caps and dual caps. In earlier works, where the dimension was assumed to be a constant, a crude bound sufficed. But in the high-dimensional setting, it is essential to avoid factors that depend on the dimension. A key insight of this paper is that it is possible to avoid these factors through the use of the difference body. (See Lemma~\ref{lem:sandwich-dualcaps} in Section~\ref{s:diff-body}.) Through the use of this more refined geometric analysis, we establish this Mahler-like relationship in Sections~\ref{s:mahler} (particularly Lemmas~\ref{lem:vol-product} and~\ref{lem:mahler-mac}). We apply this in Section~\ref{s:worst-opt} to obtain our bounds on the size of the covering. In Section~\ref{s:approx-BM} we show how this leads to an $\varepsilon$-approximation in the Banach-Mazur measure. The sampling process is described in Section~\ref{s:apps} along with applications.
\section{Preliminaries} \label{s:prelim}
In this section, we introduce terminology and notation, which will be used throughout the paper. This section can be skipped on first reading (moving directly to Section~\ref{s:mahler}).
\subsection{Lengths and Measures} \label{s:length}
Given vectors $u, v \in \mathbb{R}^n$, let $\ang{u,v}$ denote their dot product, and let $\|v\| = \sqrt{\ang{v,v}}$ denote $v$'s Euclidean length. Throughout, we will use the terms \emph{point} and \emph{vector} interchangeably. Given points $p,q \in \mathbb{R}^n$, let $\|p q\| = \|p - q\|$ denote the Euclidean distance between them. Let $\vol(\cdot)$ and $\area(\cdot)$ denote the $n$-dimensional and $(n-1)$-dimensional Lebesgue measures, respectively.
Throughout, $K \subseteq \mathbb{R}^n$ will denote a full-dimensional compact convex body with the origin $O$ in its interior. Let $\|x\|_K = \inf \{s \ge 0: x \in s K\}$ denote $K$'s associated Minkowski functional, or \emph{gauge function}. If $K$ is centrally symmetric, its gauge function defines a norm, but we will abuse notation and use the term ``norm'' even when $K$ is not centrally symmetric. Given $\varepsilon > 0$, define $K_{\varepsilon} = (1+\varepsilon)K$ to be a uniform scaling of $K$ by $1+\varepsilon$.
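For example, if $K = [-1,1]^n$ is the unit hypercube, then $\|x\|_K = \max_i |x_i|$ is the usual $\ell_\infty$ norm. When $K$ is not centrally symmetric, the gauge function can be asymmetric. For instance, taking $K = [-1,2] \subseteq \mathbb{R}^1$,
\[
\|1\|_K ~ = ~ \frac{1}{2}
\qquad \text{while} \qquad
\|{-1}\|_K ~ = ~ 1,
\]
so in general $\|x\|_K \neq \|{-x}\|_K$.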
Given a convex body $K \subseteq \mathbb{R}^n$, its \emph{difference body}, denoted $\Delta(K)$, is defined to be the Minkowski sum $K \oplus -K$. The difference body is convex and centrally symmetric and satisfies the following property.
\begin{lemma}[Rogers and Shephard~\cite{RoS57}] \label{lem:vol-diffbody}
Given a convex body $K \subseteq \mathbb{R}^n$, $\vol(\Delta(K)) \le 4^n \vol(K)$.
\end{lemma}
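For example, if $K$ is a triangle in the plane, then $\Delta(K)$ is a centrally symmetric hexagon with $\vol(\Delta(K)) = 6 \vol(K)$; simplices achieve equality in the sharp form of the Rogers--Shephard inequality, $\vol(\Delta(K)) \leq \binom{2n}{n} \vol(K) \leq 4^n \vol(K)$. At the other extreme, if $K$ is centrally symmetric about the origin, then $\Delta(K) = 2K$, and hence $\vol(\Delta(K)) = 2^n \vol(K)$.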
\subsection{Polarity and Centrality Properties} \label{s:centrality}
Given a bounded convex body $K \subseteq \mathbb{R}^n$ that contains the origin $O$ in its interior, define its \emph{polar}, denoted $K^*$, to be the convex set
\[
K^*
~ = ~ \{ u \,:\, \ang{u,v} \le 1, \hbox{~for all $v \in K$} \}.
\]
The polar enjoys many useful properties (see, e.g., Eggleston~\cite{Egg58}). For example, it is well known that $K^*$ is bounded and $(K^*)^* = K$. Further, if $K_1$ and $K_2$ are two convex bodies both containing the origin such that $K_1 \subseteq K_2$, then $K_2^* \subseteq K_1^*$.
Given a nonzero vector $v \in \mathbb{R}^n$, we define its ``polar'' $v^*$ to be the hyperplane that is orthogonal to $v$ and at distance $1/\|v\|$ from the origin, on the same side of the origin as $v$. The polar of a hyperplane is defined as the inverse of this mapping. We may equivalently define $K^*$ as the intersection of the closed halfspaces that contain the origin, bounded by the hyperplanes $v^*$, for all $v \in K$.
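For example, the polar of the hypercube $[-1,1]^n$ is the cross-polytope $\{ x : \sum_{i=1}^n |x_i| \leq 1 \}$ (the unit $\ell_1$ ball), and vice versa, while the Euclidean unit ball is its own polar. Polarity also inverts scaling: for any $s > 0$,
\[
(s K)^* ~ = ~ \frac{1}{s} \, K^*.
\]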
Given a convex body $K \subseteq \mathbb{R}^n$, there are many ways to characterize the property that $K$ is centered about the origin~\cite{Gru63, Tot15}. In this section we explore a few relevant measures of centrality.
First, define $K$'s \emph{Mahler volume} to be the product $\vol(K) \cdot \vol(K^*)$. The Mahler volume is well studied (see, e.g.~\cite{San49,MeP90,Sch93}). It is invariant under linear transformations, and it depends on the location of the origin within $K$. In the following definitions, any fixed constant may be used in the $O(n)$ term.
\begin{description}
\item[Santal{\'o} property:] The Mahler volume of $K$ is at most $2^{O(n)} \cdot \omega_n^2$, where $\omega_n$ denotes the volume of the $n$-dimensional unit Euclidean ball ($\omega_n = \pi^{n/2} / \Gamma\big(\half{n}+1\big)$).
\item[Winternitz property:] For any hyperplane passing through the origin, the ratio of the volume of the portion of $K$ on each side of the hyperplane to the volume of $K$ is at least $2^{-O(n)}$.
\item[Kovner-Besicovitch property:] The ratio of the volume of $K \cap -K$ to the volume of $K$ is at least $2^{-O(n)}$.
\end{description}
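For example, the Euclidean unit ball $B$ centered at the origin satisfies all three properties with the best possible constants: $\vol(B) \cdot \vol(B^*) = \omega_n^2$ since $B^* = B$, every hyperplane through the origin bisects $B$, and $B \cap -B = B$. In contrast, as the origin approaches the boundary of $K$, all three properties fail: the Mahler volume grows without bound, some halfspace bounded by a hyperplane through the origin captures an arbitrarily small fraction of $\vol(K)$, and $\vol(K \cap -K) / \vol(K)$ tends to zero.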
Following Dadush, Peikert, and Vempala~\cite{DPV11}, we say that $K$ is \emph{well-centered} if it satisfies the Kovner-Besicovitch property. More generally, $K$ is \emph{well-centered} about a point $x$ if $K-x$ is well-centered. For our purposes, however, any of the above properties can be used, as shown in the following lemma.
\begin{lemma}
\label{lem:centroid}
The three centrality properties (Santal{\'o}, Winternitz, and Kovner-Besicovitch) are equivalent in the sense that a convex body $K \subseteq \mathbb{R}^n$ that satisfies any one of them satisfies the other two subject to a change in the $2^{O(n)}$ factor. Further, if the origin coincides with $K$'s centroid, these properties are all satisfied.
\end{lemma}
Let us first introduce some notation. Given a hyperplane $h$, let $h^+$ and $h^-$ denote its two halfspaces. Given $0 < \delta < \frac{1}{2}$, let $h$ be a hyperplane that intersects $K$ such that $\vol(K \cap h^+) = \delta \cdot \vol(K)$. Define the \emph{$\delta$-floating body}, denoted $K_\delta$, to be the intersection of halfspaces $h^-$ for all such hyperplanes $h$. For $t > 0$, define the \emph{$t$-Santal{\'o} region} $S(K,t) \subseteq K$ to be the set of points $x \in K$ such that the Mahler volume of $K$ with respect to $x$ is at most $t \, \omega_n^2$, where $\omega_{n}$ denotes the volume of the $n$-dimensional unit Euclidean ball. Both the floating body and the Santal{\'o} region (when nonempty) are convex subsets of $K$, and Meyer and Werner showed that they satisfy the following property.
\begin{lemma}[Meyer and Werner~\cite{MeW98}]
\label{lem:float-San49}
For all $0 < \delta < \frac{1}{2}$, $K_{\delta} \subseteq S(K,t)$, where $t = 1/(4\delta(1-\delta))$.
\end{lemma}
We also need the following result by Milman and Pajor~\cite{MiP00} (Remark~4 following Corollary~3), which implies that if $K$ satisfies Santal{\'o}, then it satisfies Kovner-Besicovitch.
\begin{lemma}[Milman and Pajor~\cite{MiP00}] \label{lem:santalo-kb}
Let $K$ be a convex body with the origin $O$ in its interior such that $\vol(K) \cdot \vol(K^*) \leq s \, \omega_n^2$, where $s$ is a parameter. Then $\vol(K \cap -K) / \vol(K) \geq 2^{-O(n)} / s$.
\end{lemma}
We are now ready to prove Lemma~\ref{lem:centroid}.
\begin{proof}[Proof of Lemma~\ref{lem:centroid}]
First, suppose that $K$ satisfies Kovner-Besicovitch, that is, $\vol(K \cap -K) \ge 2^{-O(n)} \cdot \vol(K)$. Consider any hyperplane $h$ passing through the origin. As $K \cap -K$ is centrally symmetric, half of this body lies on each side of $h$. Thus, the volume of the portion of $K$ on either side of $h$ is at least $2^{-O(n)} \cdot \vol(K)$, and so $K$ satisfies the Winternitz property.
Next, suppose that $K$ satisfies Winternitz. Observe that any point outside the floating body $K_{\delta}$ is contained in a halfspace $h^+$ such that $\vol(K \cap h^+) \leq \delta \cdot \vol(K)$. By Winternitz, every halfspace containing the origin captures a portion of $K$ of volume at least $2^{-O(n)} \cdot \vol(K)$, and so the origin is contained within the floating body $K_{\delta}$ for $\delta = 2^{-O(n)}$. It follows from Lemma~\ref{lem:float-San49} that the origin lies within the Santal{\'o} region $S(K,t)$ for some $t = 2^{O(n)}$. Thus, $K$ satisfies the Santal{\'o} property.
Finally, if $K$ satisfies Santal{\'o}, then it follows from Lemma~\ref{lem:santalo-kb} that it satisfies the Kovner-Besicovitch property. This establishes the equivalence of the three centrality properties.
Milman and Pajor~\cite{MiP00} (Corollary 3) showed that if the origin coincides with $K$'s centroid, then $K$ satisfies Kovner-Besicovitch, implying that it satisfies the other properties as well.
\end{proof}
Lower bounds on the Mahler volume have also been extensively studied~\cite{BoM87,Kup08,Naz12}. Recalling the value of $\omega_n$ from the Santal{\'o} property, the following lower bound holds irrespective of the location of the origin within a convex body~\cite{BoM87}.
\begin{lemma}
\label{lem:mahler-bounds}
Given a convex body $K \subseteq \mathbb{R}^n$ whose interior contains the origin, $\vol(K) \cdot \vol(K^*) \geq 2^{-O(n)} \cdot \omega_n^2$.
\end{lemma}
\subsection{Caps, Rays, and Relative Measures}
Consider a compact convex body $K$ in $n$-dimensional space $\mathbb{R}^n$ with the origin $O$ in its interior. A \emph{cap} $C$ of $K$ is defined to be the nonempty intersection of $K$ with a halfspace. Letting $h_1$ denote a hyperplane that does not pass through the origin, let $\pcap{K}{h_1}$ denote the cap resulting from intersecting $K$ with the halfspace bounded by $h_1$ that does not contain the origin (see Figure~\ref{f:widray}(a)). Define the \emph{base} of $C$, denoted $\base(C)$, to be $h_1 \cap K$. Letting $h_0$ denote a supporting hyperplane for $K$ and $C$ parallel to $h_1$, define an \emph{apex} of $C$ to be any point of $h_0 \cap K$.
\begin{figure}
\caption{(a)~A cap $\pcap{K}{h_1}$, its base, apex, and width, together with ray distances; (b)~extending widths and ray distances to objects lying outside of $K$ through the polar transformation.}
\label{f:widray}
\end{figure}
We define the \emph{absolute width} of cap $C$ to be $\dist(h_1,h_0)$. When a cap does not contain the origin, it will be convenient to define the \emph{relative width} of $C$, denoted $\width_K(C)$, to be the ratio $\dist(h_1,h_0) / \dist(O,h_0)$. We extend the notion of width to hyperplanes by defining $\width_K(h_1) = \width_K(\pcap{K}{h_1})$. Observe that as a hyperplane is translated from a supporting hyperplane to the origin, the relative width of its cap ranges from 0 to a limiting value of 1.
We also characterize the closeness of a point to the boundary in both absolute and relative terms. Given a point $p_1 \in K$, let $p_0$ denote the point of intersection of the ray $O p_1$ with the boundary of $K$. Define the \emph{absolute ray distance} of $p_1$ to be $\|p_1 p_0\|$, and define the \emph{relative ray distance} of $p_1$, denoted $\ray_K(p_1)$, to be the ratio $\|p_1 p_0\| / \|O p_0\|$. Relative widths and relative ray distances are both affine invariants, and unless otherwise specified, references to widths and ray distances will be understood to be in the relative sense.
We can also define volumes in a manner that is affine invariant. Recall that $\vol(\cdot)$ denotes the standard Lebesgue volume measure. For any region $\Lambda \subseteq K$, define the \emph{relative volume} of $\Lambda$ with respect to $K$, denoted $\vol_K(\Lambda)$, to be $\vol(\Lambda)/\vol(K)$.
With the aid of the polar transformation we can extend the concepts of width and ray distance to objects lying outside of $K$. Consider a hyperplane $h_2$ parallel to $h_1$ that lies beyond the supporting hyperplane $h_0$ (see Figure~\ref{f:widray}(a)). It follows that $h_2^* \in K^*$, and we define $\width_K(h_2) = \ray_{K^*}(h_2^*)$ (see Figure~\ref{f:widray}(b)). Similarly, for a point $p_2 \notin K$ that lies along the ray $O p_1$, it follows that the hyperplane $p_2^*$ intersects $K^*$, and we define $\ray_K(p_2) = \width_{K^*}(p_2^*)$. By properties of the polar transformation, it is easy to see that $\width_K(h_2) = \dist(h_0,h_2) / \dist(O,h_2)$. Similarly, $\ray_K(p_2) = \|p_0 p_2\| / \|O p_2\|$. Henceforth, we will omit references to $K$ when it is clear from context.
Some of our results apply only when we are sufficiently close to the boundary of $K$. Given $0 \leq \alpha \leq 1$, we say that a cap $C$ is \emph{$\alpha$-shallow} if $\width(C) \le \alpha$, and we say that a point $p$ is \emph{$\alpha$-shallow} if $\ray(p) \le \alpha$. We will simply say \emph{shallow} to mean $\alpha$-shallow, where $\alpha$ is a sufficiently small constant.
Given any cap $C$ and a real $\lambda > 0$, we define its $\lambda$-expansion, denoted $C^{\lambda}$, to be the cap of $K$ cut by a hyperplane parallel to the base of $C$ such that the absolute width of $C^{\lambda}$ is $\lambda$ times the absolute width of $C$. (Note that if the expansion of a cap is large enough it may be the same as $K$.)
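As a simple one-dimensional illustration, let $K = [-1,1]$ with the origin at its center, and consider the cap $C = [1-w, 1]$ for $0 < w \leq 1$, cut by the hyperplane $h_1 = \{1-w\}$ with supporting hyperplane $h_0 = \{1\}$. Both the absolute and relative widths of $C$ equal $w$, since $\dist(O, h_0) = 1$. The expansion satisfies $C^{\lambda} = [1 - \lambda w, 1]$ whenever $\lambda w \leq 2$ (and $C^{\lambda} = K$ otherwise), and a point $p = 1 - t \in C$ satisfies $\ray(p) = t \leq w = \width(C)$, as guaranteed by Lemma~\ref{lem:raydist-width} below.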
We now present a number of useful technical results on ray distances and cap widths in both their absolute and relative forms.
\begin{lemma} \label{lem:raydist-width}
Let $C$ be a cap of $K$ that does not contain the origin and let $p$ be a point in $C$. Then $\ray(p) \leq \width(C)$.
\end{lemma}
\begin{proof}
Let $h$ be the hyperplane passing through the base of $C$, and let $h_0$ be the supporting hyperplane of $K$ parallel to $h$ at $C$'s apex. Let $q$, $p_0$, and $q_0$ denote the points of intersection of the ray $O p$ with $h$, $\partial K$, and $h_0$, respectively. Since $p \in C$, the order of these points along the ray is $\ang{O, q, p, p_0, q_0}$. By considering the hyperplanes parallel to $h$ passing through these points, we have
\[
\ray(p)
~ = ~ \frac{\|p p_0\|}{\|O p_0\|}
~ \leq ~ \frac{\|q p_0\|}{\|O p_0\|}
~ \leq ~ \frac{\|q p_0\| + \|p_0 q_0\|}{\|O p_0\| + \|p_0 q_0\|}
~ = ~ \frac{\|q q_0\|}{\|O q_0\|}
~ = ~ \frac{\dist(h,h_0)}{\dist(O,h_0)}
~ = ~ \width(C). \qedhere
\]
\end{proof}
There are two natural ways to associate a cap with any point $p \in K$. The first is the \emph{minimum volume cap}, which is a cap of minimum volume among all caps whose base passes through $p$. For the second, assume that $p \neq O$, and let $p_0$ denote the point of intersection of the ray $O p$ with the boundary of $K$. Let $h_0$ be any supporting hyperplane of $K$ at $p_0$. Take the cap $C$ induced by a hyperplane parallel to $h_0$ passing through $p$. As shown in the following lemma, this is the cap of minimum width containing $p$.
\begin{lemma}
\label{lem:min-width-cap}
For any $p \in K \setminus \{O\}$, consider the cap $C$ defined above. Then $\width(C) = \ray(p)$ and further, $C$ has the minimum width over all caps that contain $p$.
\end{lemma}
\begin{proof}
Let $h$ denote the hyperplane passing through $p$ parallel to $h_0$ (defined above). By similar triangles, we have
\[
\width(C)
~ = ~ \frac{\dist(h,h_0)}{\dist(O,h_0)}
~ = ~ \frac{\|p p_0\|}{\|O p_0\|}
~ = ~ \ray(p).
\]
By Lemma~\ref{lem:raydist-width}, for any cap $C'$ that contains $p$, $\ray(p) \leq \width(C')$, and hence $\width(C) \leq \width(C')$.
\end{proof}
The following lemma gives a simple lower and upper bound on the absolute volume of a cap.
\begin{lemma}
\label{lem:vol-cap}
Let $C$ be a $\frac{1}{2}$-shallow cap, let $a = \area(\base(C))$, and let $w$ denote $C$'s absolute width. Then $a w/n \leq \vol(C) \leq 2^{n-1} a w$.
\end{lemma}
\begin{proof}
Let $p$ be an apex of $C$, and let $P = \conv(\base(C) \cup \{p\})$. Clearly, $P \subseteq C$ and $\vol(P) = a w/n$, which yields the lower bound. To see the upper bound, observe that $C$ lies within the generalized infinite cone whose apex is $O$ and whose base is $\base(C)$. Because $\width(C) \leq \frac{1}{2}$, any slice of this cone cut by a hyperplane parallel to $\base(C)$ within the slab containing $C$ is a copy of $\base(C)$ scaled by a factor of at most $2$, and hence its area exceeds $a$ by a factor of at most $2^{n-1}$. Integrating these slices over the width of $C$ yields $\vol(C) \leq 2^{n-1} a w$.
\end{proof}
An easy consequence of convexity is that, for $\lambda \ge 1$, $C^{\lambda}$ is a subset of the region obtained by scaling $C$ by a factor of $\lambda$ about its apex. This implies the following lemma.
\begin{lemma} \label{lem:cap-exp}
Given any cap $C$ and a real $\lambda \geq 1$, $\vol(C^{\lambda}) \leq \lambda^n \vol(C)$.
\end{lemma}
Another consequence of convexity is that containment of caps is preserved under expansion. This is a straightforward adaptation of Lemma~4.4 in~\cite{AFM17c}.
\begin{lemma} \label{lem:cap-containment-exp}
Given two caps $C_1 \subseteq C_2$ and a real $\lambda \geq 1$, $C_1^{\lambda} \subseteq C_2^{\lambda}$.
\end{lemma}
The following lemma is a technical result, which shows that if a ray hits the interior of the base of a cap of width at least $\varepsilon$, then it hits the interior of the base of a cap of width exactly $\varepsilon$ that is contained in the original.
\begin{lemma} \label{lem:cap-tech}
Let $0 < \varepsilon < 1$, and let $K \subseteq \mathbb{R}^n$ be a convex body containing the origin in its interior. Let $r$ be a ray shot from the origin, and let $D$ be a cap of $K$ of width at least $\varepsilon$ such that ray $r$ intersects the interior of its base. Then there exists a cap $E \subseteq D$ of width $\varepsilon$ such that ray $r$ intersects the interior of its base.
\end{lemma}
\begin{proof}
Let $p$ be the point of intersection of ray $r$ with the boundary of $K$. Let $F \subseteq D$ be the cap whose base passes through $p$ and is parallel to the base of $D$. We now consider two cases.
If the width of cap $F$ is less than $\varepsilon$, then we let $E$ be the cap of width $\varepsilon$ obtained by translating the base of $F$ parallel to itself (towards the base of $D$, as shown in Figure~\ref{f:subcap}(a)). Clearly $E \subseteq D$ and satisfies the conditions specified in the lemma.
\begin{figure}
\caption{The construction of the cap $E$ in Lemma~\ref{lem:cap-tech}: (a)~translating the base of $F$ parallel to itself; (b)~rotating the base of $F$ about $p$.}
\label{f:subcap}
\end{figure}
Otherwise, if the width of cap $F$ is at least $\varepsilon$, then intuitively, we can rotate its base about $p$ (shrinking cap $F$ in the process), until its width is infinitesimally smaller than $\varepsilon$ (Figure~\ref{f:subcap}(b)). More formally, let $u_F$ denote the normal vector for $F$'s base and let $u_p$ denote any surface normal vector to $K$ at $p$ (both of unit length). Since $p$ is on the boundary, the cap orthogonal to $u_p$ and passing through $p$ has width zero. Since $F$ has width at least $\varepsilon$, $u_F \neq u_p$.
Considering the 2-dimensional linear subspace spanned by $u_F$ and $u_p$, we rotate continuously from $u_F$ to $u_p$, and consider the hyperplane passing through $p$ orthogonal to this vector. Clearly, the width of the associated cap varies continuously from $\width(F)$ to zero. Thus, there must be an angle where the cap width is infinitesimally smaller than $\varepsilon$. We can expand this cap by translating its base parallel to itself to obtain a cap $E$ of width $\varepsilon$, which satisfies all the conditions specified in the lemma.
\end{proof}
\subsection{Dual Caps and Cones} \label{s:dcaps}
It will be useful to consider the notion of a cap in a dual setting (see, e.g., \cite{AFM12b, AFM12a}). Given a convex body $K \subseteq \mathbb{R}^n$ and a point $z$ that is exterior to $K$, we define the \emph{dual cap} of $K$ with respect to $z$, denoted $\dcap{K}{z}$, to be the set of $(n-1)$-dimensional hyperplanes that pass through $z$ and do not intersect $K$'s interior (see Figure~\ref{f:dual-cap-def}). In this paper, $K$ will be either full dimensional or one dimension less. We define the polar of a dual cap to be the set of points that results by taking the polar of each hyperplane of the dual cap.
\begin{figure}
\caption{The dual cap $\dcap{K}{z}$, the set of hyperplanes passing through $z$ that do not intersect $K$'s interior.}
\label{f:dual-cap-def}
\end{figure}
Given $z$ exterior to $K$, consider the cap of $K^*$ induced by the hyperplane $z^*$. By standard properties of the polar transformation, a hyperplane $h \in \dcap{K}{z}$ if and only if the point $h^*$ lies in $K^* \cap z^*$. As an immediate consequence, we obtain the following relationship between caps and dual caps.
\begin{lemma} \label{lem:polardcap}
Let $K \subseteq \mathbb{R}^n$ be a full dimensional convex body that contains the origin and let $z \not\in K$. Then $(\dcap{K}{z})^* = \base(\pcap{K^*}{z^*})$.
\end{lemma}
Another useful concept involves cones induced by external points. A convex body $K$ and a point $z \not\in K$ naturally define two infinite convex cones. The \emph{inner cone}, denoted $\icone{K}{z}$, is the intersection of all the halfspaces that contain $K$ whose bounding hyperplanes pass through $z$ (see Figure~\ref{f:dualcaps2}). Equivalently, $\icone{K}{z}$ is the set of points $p$ such that the ray $z p$ intersects $K$. The \emph{outer cone}, denoted $\ocone{K}{z}$, is defined analogously as the intersection of halfspaces passing through $z$ that do not contain any point of $K$ (see Figure~\ref{f:cone}). It is easy to see that $\ocone{K}{z}$ is the reflection of $\icone{K}{z}$ about $z$. The following lemma shows that membership in the outer cone and containment of caps are related through duality.
\begin{figure}
\caption{The outer cone $\ocone{K}{z}$ induced by a convex body $K$ and a point $z \not\in K$.}
\label{f:cone}
\end{figure}
\begin{lemma} \label{lem:ocone}
Let $K$ be a convex body with the origin $O$ in its interior. Then $u \in \ocone{K}{z}$ if and only if $\pcap{K^*}{z^*} \subseteq \pcap{K^*}{u^*}$.
\end{lemma}
\begin{proof}
By definition, $u \in \ocone{K}{z}$ if and only if any hyperplane $h$ that separates $z$ from $K$ also separates $u$ from $K$. Also, by standard properties of the polar transformation, a hyperplane $h$ separates $z$ from $K$ if and only if the point $h^* \in \pcap{K^*}{z^*}$. Similarly, hyperplane $h$ separates $u$ from $K$ if and only if the point $h^* \in \pcap{K^*}{u^*}$. Thus, the condition $u \in \ocone{K}{z}$ is equivalent to the condition $\pcap{K^*}{z^*} \subseteq \pcap{K^*}{u^*}$.
\end{proof}
\subsection{Macbeath Regions}
Given a convex body $K$, a point $x \in K$, and a scaling factor $\lambda > 0$, the \emph{Macbeath region} $M_K^\lambda(x)$ is defined as
\[
M_K^\lambda(x) ~= ~ x + \lambda ((K - x) \cap (x - K)).
\]
It is easy to see that $M_K^1(x)$ is the intersection of $K$ with the reflection of $K$ around $x$, and so $M_K^1(x)$ is centrally symmetric about $x$. Indeed, it is the largest centrally symmetric body centered at $x$ and contained in $K$. Furthermore, $M_K^\lambda(x)$ is a copy of $M_K^1(x)$ scaled by the factor $\lambda$ about the center $x$ (see the right side of Figure~\ref{f:mahlermac}). We will omit the subscript $K$ when the convex body is clear from the context. As a convenience, we define $M(x) = M^1(x)$.
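As a simple example, let $K = [0,1]^n$ and $x = \big(t, \frac{1}{2}, \ldots, \frac{1}{2}\big)$ for $0 < t \leq \frac{1}{2}$. A direct computation gives
\[
M_K(x) ~ = ~ x + \left( [-t, t] \times \left[ -\frac{1}{2}, \frac{1}{2} \right]^{n-1} \right),
\]
an axis-parallel box of volume $2t$. As $x$ approaches the facet $\{x_1 = 0\}$, the Macbeath region shrinks only in the direction normal to that facet, illustrating how Macbeath regions near the boundary can be highly elongated while their volume tends to zero.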
We now present lemmas that encapsulate standard properties of Macbeath regions. The first lemma implies that a (shrunken) Macbeath region can act as a proxy for any other (shrunken) Macbeath region overlapping it~\cite{ELR70,BCP93}. Our version uses different parameters and is proved in~\cite{AFM17a} (Lemma~2.4).
\begin{lemma} \label{lem:mac-mac}
Let $K$ be a convex body and let $\lambda \le \frac{1}{5}$ be any real. If $x, y \in K$ such that $M^{\lambda}(x) \cap M^{\lambda}(y) \neq \emptyset$, then $M^{\lambda}(y) \subseteq M^{4\lambda}(x)$.
\end{lemma}
The following lemmas are useful in situations when we know that a Macbeath region overlaps a cap of $K$, and allow us to conclude that a constant factor expansion of the cap will fully contain the Macbeath region. The first applies to shrunken Macbeath regions and the second to Macbeath regions with any scaling factor. The proof of the first appears in~\cite{AFM17c} (Lemma~2.5), and the second is an immediate consequence of the definition of Macbeath regions.
\begin{lemma} \label{lem:mac-cap}
Let $K$ be a convex body. Let $C$ be a cap of $K$ and $x$ be a point in $K$ such that $C \cap M^{1/5}(x) \neq \emptyset$. Then $M^{1/5}(x) \subseteq C^2$.
\end{lemma}
\begin{lemma} \label{lem:mac-cap-var}
Let $K$ be a convex body and $\lambda > 0$. If $x$ is a point in a cap $C$ of $K$, then $M^\lambda(x) \cap K \subseteq C^{1+\lambda}$.
\end{lemma}
Points in a shrunken Macbeath region are similar in many respects. For example, they have similar ray distances.
\begin{lemma}
\label{lem:core-ray}
Let $K$ be a convex body. If $x$ is a $\frac{1}{2}$-shallow point in $K$ and $y \in M^{1/5}(x)$, then $\ray(x)/2 \leq \ray(y) \leq 2 \ray(x)$.
\end{lemma}
\begin{proof}
Let $C_x$ denote the minimum width cap for $x$. By Lemma~\ref{lem:min-width-cap}, $\width(C_x) = \ray(x)$. Also, by Lemma~\ref{lem:mac-cap}, we have $M^{1/5}(x) \subseteq C_x^2$ and so $y \in C_x^2$. It follows from Lemma~\ref{lem:raydist-width} that $\ray(y) \leq \width(C_x^2) = 2 \width(C_x)$. Thus $\ray(y) \leq 2 \ray(x)$, which proves the second inequality. To prove the first inequality, note that this follows trivially unless $\ray(y) \leq \frac{1}{4}$ (since $\ray(x) \leq \frac{1}{2}$). If $\ray(y) \leq \frac{1}{4}$, consider the minimum width cap $C_y$ for $y$. By Lemma~\ref{lem:min-width-cap}, $\width(C_y) = \ray(y)$. Also, by Lemma~\ref{lem:mac-cap}, we have $M^{1/5}(x) \subseteq C_y^2$ and so $x \in C_y^2$. It follows from Lemma~\ref{lem:raydist-width} that $\ray(x) \leq \width(C_y^2) = 2 \width(C_y)$. Thus $\ray(x) \leq 2 \ray(y)$, which completes the proof.
\end{proof}
The remaining lemmas in this section relate caps with the associated Macbeath regions.
\begin{lemma}[B{\'a}r{\'a}ny~\cite{Bar07}] \label{lem:min-vol-cap1}
Given a convex body $K \subseteq \mathbb{R}^n$, let $C$ be a $\frac{1}{3}$-shallow cap of $K$, and let $p$ be the centroid of $\base(C)$. Then $C \subseteq M^{2n}(p)$.
\end{lemma}
\begin{lemma}
\label{lem:wide-cap}
Let $0 < \beta < 1$ be any constant. Let $K \subseteq \mathbb{R}^n$ be a well-centered convex body, $p \in K$, and $C$ be the minimum volume cap associated with $p$. If $C$ contains the origin or $\width(C) \geq \beta$, then $\vol_K(M(p)) \geq 2^{-O(n)}$.
\end{lemma}
\begin{proof}
We claim that $K$ satisfies the Winternitz property with respect to $p$. Note this is equivalent to the claim that $\vol_K(C) \geq 2^{-O(n)}$.
We consider two cases. First, suppose that $C$ contains the origin. Since $K$ is well-centered, by Lemma~\ref{lem:centroid}, $K$ satisfies the Winternitz property with respect to the origin. It follows that $\vol_K(C) \geq 2^{-O(n)}$. Otherwise, if $C$ does not contain the origin, then since the width of $C$ is at least $\beta$, the expanded cap $C^{1/\beta}$ contains the origin. By Lemma~\ref{lem:cap-exp}, $\vol(C^{1/\beta}) \leq 2^{O(n)} \vol(C)$. Again, using the fact that $K$ satisfies the Winternitz property with respect to the origin, we have $\vol_K(C^{1/\beta}) \geq 2^{-O(n)}$. Thus, in both cases, $\vol_K(C) \geq 2^{-O(n)}$, which proves the claim.
Since $K$ satisfies the Winternitz property with respect to $p$, by Lemma~\ref{lem:centroid}, it must satisfy the Kovner-Besicovitch property with respect to $p$. Thus $\vol_K(M(p)) = \vol_K((K-p) \cap (p-K)) \geq 2^{-O(n)}$, as desired.
\end{proof}
\begin{lemma}
\label{lem:min-vol-cap2}
Given a convex body $K \subseteq \mathbb{R}^n$, let $C$ be a $\frac{1}{3}$-shallow cap of $K$, and let $p$ be the centroid of $\base(C)$. We have
\[
2^{-O(n)} \cdot \vol(C)
~ \leq ~ \vol(M(p))
~ \leq ~ 2 \cdot \vol(C).
\]
\end{lemma}
\begin{proof}
The second inequality holds easily because half of $M(p)$ lies inside $C$. To prove the first inequality, let $B = \base(C)$, let $a = \area(B)$ denote its $(n-1)$-dimensional volume, and let $B' = M(p) \cap B$. Treating $p$ as the origin of the coordinate system, by definition of Macbeath regions, $B' = B \cap - B$. By applying Lemma~\ref{lem:centroid} (to the hyperplane containing $B$) we have $\area(B') \geq a / 2^{O(n)}$.
Let $x$ denote the apex of $C$, and let $x'$ be the farthest point on segment $\overline{p x}$ that is contained in $M(p)$. By Lemma~\ref{lem:min-vol-cap1}, $\|p x'\| \geq \|p x\| / 2 n$. By convexity, the generalized cone $P = \conv(B' \cup \{x'\})$ is contained within $M(p)$. Letting $w$ denote the absolute width of $C$, the height of this cone is at least $w/2 n$. Thus
\[
\vol(M(p))
~ \geq ~ \vol(P)
~ \geq ~ \frac{\area(B') \cdot w/2 n}{n}
~ \geq ~ \frac{(a / 2^{O(n)}) \cdot w/2 n}{n}
~ = ~ \frac{a w}{ n^2 2^{O(n)}}.
\]
By Lemma~\ref{lem:vol-cap}, $\vol(C) \leq 2^{n-1} a w$, and thus,
\[
\vol(M(p))
~ \geq ~ 2^{-O(n)} \cdot \vol(C),
\]
as desired.
\end{proof}
\begin{corollary}
\label{cor:min-vol-cap2}
Let $K \subseteq \mathbb{R}^n$ be a convex body, $p \in K$, and $C$ be the minimum volume cap associated with $p$. We have
\[
2^{-O(n)} \cdot \vol(C)
~ \leq ~ \vol(M(p))
~ \leq ~ 2 \cdot \vol(C).
\]
\end{corollary}
\begin{proof}
The second inequality holds for the same reason as in Lemma~\ref{lem:min-vol-cap2}. To prove the first inequality, recall the well-known property of minimum volume caps that $p$ is the centroid of the base of its associated minimum volume cap~\cite{ELR70}. Treating the centroid of $K$ as the origin, we consider two cases. If $C$ is $(1/3)$-shallow, then the corollary follows from Lemma~\ref{lem:min-vol-cap2}. Otherwise, $C$ contains the origin or its width is at least $1/3$. Noting that $K$ is well-centered with respect to the centroid (Lemma~\ref{lem:centroid}) and applying Lemma~\ref{lem:wide-cap}, it follows that $\vol_K(M(p)) \ge 2^{-O(n)}$. That is, $\vol(M(p)) \geq 2^{-O(n)} \vol(K) \geq 2^{-O(n)} \vol(C)$, which completes the proof.
\end{proof}
\subsection{Similar Caps}
\label{s:similar}
The Macbeath regions of a convex body $K$, and more specifically, its shrunken Macbeath regions, provide an affine-invariant notion of the closeness between points, through the property that both points lie within the same shrunken Macbeath region. We would like to define a similar affine-invariant notion of closeness between caps. We say that two caps $C_1$ and $C_2$ are \emph{$\lambda$-similar} for $\lambda \ge 1$, if $C_1 \subseteq C_2^{\lambda}$ and $C_2 \subseteq C_1^{\lambda}$ (see Figure~\ref{f:similar}(a)). If two caps are $\lambda$-similar for a constant $\lambda$, we say that the caps are \emph{similar}.
\begin{figure}
\caption{\label{f:similar}
\label{f:similar}
\end{figure}
It is natural to conjecture that these two notions of similarity are related through duality. In order to establish such a relationship consider the following mapping. Consider a point $z \in K^*$. Take a point $\hat{z} \not\in K^*$ on the ray $O z$ such that $\ray(\hat{z}) = \varepsilon$ (see Figure~\ref{f:similar}(b)). The dual hyperplane $\hat{z}^*$ intersects $K$, and so induces a cap, which we call $z$'s \emph{$\varepsilon$-representative cap} (see Figure~\ref{f:similar}(c)). The main result of this section is Lemma~\ref{lem:sandwich}, which shows that points lying within the same shrunken Macbeath region have similar representative caps. Before proving this, we begin with a technical lemma.
\begin{lemma} \label{lem:guarding}
Let $\alpha \le \frac{1}{8}$. Let $y \in K^*$ be an $\alpha$-shallow point. Consider two rays $r$ and $r'$ shot from the origin through $M^{1/5}(y)$ (see Figure \ref{f:guarding}). Let $z \not\in K^*$ be an $\alpha$-shallow point on $r$ and let $u \not\in K^*$ be a point on $r'$ such that $\ray(u) > 4 \ray(y) + 2 \ray(z)$. Then $\pcap{K}{z^*} \subseteq \pcap{K}{u^*}$.
\end{lemma}
\begin{figure}
\caption{\label{f:guarding}
\label{f:guarding}
\end{figure}
\begin{proof}
Let $h$ be any hyperplane passing through $z$ that does not intersect $K^*$. We will show that $h$ separates $u$ from $K^*$. This would imply that $u \in \ocone{K^*}{z}$, and the result would then follow from Lemma~\ref{lem:ocone}.
Let $p$ be any point in $r \cap M^{1/5}(y)$. By Lemma~\ref{lem:core-ray}, we have $\ray(p) \le 2 \ray(y)$. Consider a hyperplane $h'$ that is parallel to $h$ and passes through $p$ (see Figure \ref{f:guardingproof}). Let $C$ be the cap induced by $h'$. Letting $t$ denote the point of intersection of ray $r$ with $\partial K^*$, we have
\begin{equation} \label{eq:guarding}
\width(C)
~ \leq ~ \frac{\|p z\|}{\|O z\|}
~ = ~ \frac{\|p t\| + \|t z\|}{\|O z\|}
~ \leq ~ \frac{\|p t\|}{\|O t\|} + \frac{\|t z\|}{\|O z\|}
~ = ~ \ray(p) + \ray(z)
~ \leq ~ 2 \ray(y) + \ray(z).
\end{equation}
Since $C$ intersects $M^{1/5}(y)$, by Lemma~\ref{lem:mac-cap}, the cap $C^2$ encloses $M^{1/5}(y)$. Since $y$ and $z$ are $\alpha$-shallow for $\alpha = \frac{1}{8}$, by Eq.~\eqref{eq:guarding} we have $\width(C) \le 3/8$. It follows $\width(C^2) < 1$, and hence $O$ lies outside $C^2$. Let $h''$ denote the hyperplane passing through the base of $C^2$. Since $r'$ intersects $M^{1/5}(y)$, it follows that $r'$ must intersect $h''$ and $h$. Let $z'$ denote the point of intersection of $r'$ with $h$. We will show that $\ray(z') \le 4 \ray(y) + 2 \ray(z)$. Recalling from the statement of the lemma that $\ray(u) > 4 \ray(y) + 2 \ray(z)$, this would imply that $h$ separates $u$ from $K^*$, as desired.
\begin{figure}
\caption{\label{f:guardingproof}
\label{f:guardingproof}
\end{figure}
Let $x$ and $x'$ denote the points of intersection of the rays $r$ and $r'$, respectively, with $h''$. By similar triangles we have $\ray(z') \le \|x' z'\| / \|O z'\| = \|x z\| / \|O z\|$. Observe that the distance between $h''$ and $h'$ is no more than the distance between $h'$ and $h$, and so $\|x z\| \le 2 \|p z\|$. Combining this with Eq.~\eqref{eq:guarding}, we obtain
\[
\ray(z')
~ \leq ~ \frac{\|x z\|}{\|O z\|}
~ \leq ~ \frac{2 \|p z\|}{\|O z\|}
~ \leq ~ 2 (2 \ray(y) + \ray(z))
~ = ~ 4 \ray(y) + 2 \ray(z),
\]
which completes the proof.
\end{proof}
We now establish the main result of this section.
\begin{lemma}
\label{lem:sandwich}
Let $\varepsilon \leq \frac{1}{16}$, and let $y \in K^*$ such that $\ray(y) \leq \varepsilon$. For any two points $x, z \in M^{1/5}(y)$, their respective $\varepsilon$-representative caps are 8-similar.
\end{lemma}
\begin{proof}
Let $x_1$ and $z_1$ be points external to $K^*$ both at ray distance $\varepsilon$ on the rays $O x$ and $O z$, respectively (see Figure \ref{f:sandwichproof}(a)). Let $C_x$ and $C_z$ denote the $\varepsilon$-representative caps of $x$ and $z$, respectively (see Figure \ref{f:sandwichproof}(b)). Recall that $C_x$ and $C_z$ are the caps in $K$ induced by $x_1^*$ and $z_1^*$, respectively. By standard properties of the polar transformation $\width(C_x) = \ray(x_1) = \varepsilon$, and similarly, $\width(C_z) = \ray(z_1) = \varepsilon$. Let $x_2$ and $z_2$ be points external to $K^*$ both at ray distance $8 \varepsilon$ on the rays $O x$ and $O z$, respectively (see Figure~\ref{f:sandwichproof}). By our bound on $\varepsilon$, these ray distances are at most $\frac{1}{2}$. Clearly, $x_2^*$ and $z_2^*$ induce the caps $C_x^8$ and $C_z^8$ in $K$, respectively.
\begin{figure}
\caption{\label{f:sandwichproof}
\label{f:sandwichproof}
\end{figure}
Since $\ray(x_2) = 8\varepsilon, \ray(y) \leq \varepsilon$ and $\ray(z_1) = \varepsilon$, we have $\ray(x_2) > 2\ray(z_1) + 4\ray(y)$. It follows from Lemma~\ref{lem:guarding} that $C_z \subseteq C_x^8$. A symmetrical argument shows that $C_x \subseteq C_z^8$. Therefore $C_x$ and $C_z$ are 8-similar, as desired.
\end{proof}
The next lemma shows that similarity holds, even if ray distances are altered by a constant factor.
\begin{corollary} \label{cor:sandwich}
Let $\varepsilon \leq \frac{1}{16}$, and let $y \in K^*$ such that $\ray(y) \leq \varepsilon$. Let $C_x$ be a cap of $K$ such that $\varepsilon/2 \leq \width(C_x) \leq 2\varepsilon$, and such that the ray shot from the origin orthogonal to the base of $C_x$ intersects $M^{1/5}(y)$. Then the cap $C_x$ and the $\varepsilon$-representative cap $C_z$ of any point $z \in M^{1/5}(y)$ are 16-similar.
\end{corollary}
\begin{proof}
Let $r$ denote the ray shot from the origin orthogonal to the base of $C_x$. Let $x$ be any point that lies in $r \cap M^{1/5}(y)$. Let $C_x'$ be the $\varepsilon$-representative cap of $x$. By Lemma~\ref{lem:sandwich}, the caps $C_x'$ and $C_z$ are 8-similar. Also, it follows from our choice of point $x$ that the caps $C_x$ and $C_x'$ have parallel bases and their widths differ by a factor of at most two. Thus $C_x$ and $C_x'$ are 2-similar. Using the fact that $C_x'$ and $C_z$ are 8-similar, and applying Lemma~\ref{lem:cap-containment-exp}, it is easy to see that $C_x$ and $C_z$ are 16-similar.
\end{proof}
\section{Caps in the Polar: Mahler Relationship} \label{s:mahler}
As mentioned in Section~\ref{s:techniques}, a central element of our analysis is establishing a Mahler-like reciprocal relationship between volumes of caps in $K$ and corresponding caps of $K^*$. While our new result is similar in spirit to those given by Arya {\textit{et al.}}~\cite{AAFM22} and that of Nasz{\'o}di {\textit{et al.}}~\cite{NNR20}, it is stronger than both. Compared to \cite{AAFM22}, the dependency of the Mahler volume on dimension is improved from $2^{-O(n\log n)}$ to $2^{-O(n)}$, which is critical in the high-dimensional setting in reducing terms of the form $n^{O(n)}$ to $2^{O(n)}$. Further, our result is presented in a cleaner form, which is affine-invariant. Compared to Nasz{\'o}di {\textit{et al.}}~\cite{NNR20}, which was focused on sampling from just the boundary of $K$, our results can be applied to caps of varying widths, and hence it applies to sampling from the interior of $K$. This fact too is critical in the applications we consider. Our improvements are obtained by a more sophisticated geometric analysis and our affine-invariant approach.
For the sake of concreteness, we state the lemmas of this section in terms of an arbitrary direction, which we call ``vertical,'' and any hyperplane orthogonal to this direction is called ``horizontal.'' Since the direction is arbitrary, there is no loss of generality.
\subsection{Dual Caps and the Difference Body} \label{s:diff-body}
This subsection is devoted to a key construction in our analysis. Given a full dimensional convex body $K$ and a point $z \not\in K$, the following lemma identifies an $(n-1)$-dimensional body $\Upsilon$ such that $\dcap{\Upsilon}{z} = \dcap{K}{z}$, where $\Upsilon$ is related to the base $B$ of a certain $\varepsilon$-width cap in the sense that $\Upsilon$ can be sandwiched between $B$ and a scaled copy of the difference body of $B$.
\begin{figure}
\caption{\label{f:dualcaps2}
\label{f:dualcaps2}
\end{figure}
\begin{lemma} \label{lem:sandwich-dualcaps}
Let $\varepsilon \le \frac{1}{8}$. Let $K$ be a convex body with the origin $O$ in its interior. Let $z \notin K$ be a point on the ray from the origin directed vertically upwards such that $\ray(z) = 2\varepsilon$. Consider an $\varepsilon$-width cap $C$ above the origin whose base $B$ intersects $Oz$ and is horizontal. Let $H_b$ be the hyperplane passing through the base $B$, and let $\Upsilon = \icone{K}{z} \cap H_b$. Let $x$ denote the point of intersection of $B$ with $Oz$, and let $B_{\Delta} = 5\Delta(B) + x$. Then $B \subseteq \Upsilon \subseteq B_{\Delta}$ (see Figure~\ref{f:dualcaps2}).
\end{lemma}
\begin{proof}
By definition, $K \subseteq \icone{K}{z}$, and so $B \subseteq \Upsilon$. Thus, it suffices to show that $\Upsilon \subseteq B_{\Delta}$. To prove this, we will show that $K \subseteq \icone{B_{\Delta}}{z}$.
Let $a$ denote an apex of $C$ and let $a'$ be the point obtained by projecting $a$ orthogonally onto $O z$ (see Figure~\ref{f:dualcaps3}). Without loss of generality, assume that $\|O a'\| = 1 $. Note that $\|xa'\| = \varepsilon$, where $x$ is the point of intersection of the ray $O z$ with the base of cap $C$. It is easy to check that $\varepsilon \le \|a'z\| \le 3 \varepsilon$.
\begin{figure}
\caption{\label{f:dualcaps3}
\label{f:dualcaps3}
\end{figure}
For the remainder of this proof, it will be convenient to imagine that the origin is at $x$. Our strategy will be to show that $C \subseteq \icone{2(1+2\varepsilon)B}{z}$ and $K \setminus C \subseteq \icone{4(1+2\varepsilon) \Delta(B)}{z}$. Since $B$ contains the origin, it follows easily that $B \subseteq \Delta(B)$. This implies that $K \subseteq \icone{4(1+2\varepsilon) \Delta(B)}{z} \subseteq \icone{5\Delta(B)}{z}$ since $\varepsilon \le \frac{1}{8}$. By definition of $B_{\Delta}$, this would complete the proof.
First, we will prove that $C \subseteq \icone{2(1+2\varepsilon)B}{z}$. It follows from convexity that $C$ is contained in the truncated portion of $\icone{B}{O}$ between the hyperplane $H_b$ and the hyperplane above $H_b$ that is parallel to it at distance $\varepsilon$ (call it $H_a$). Note that $\icone{B}{O} \cap H_a$ is the $(n-1)$-dimensional convex body obtained by scaling $B$ about $x$ by a factor of $1/(1-\varepsilon)$ and translating it vertically upwards by amount $\varepsilon$. Call
this body $B_a$. (Formally, $B_a = (1/(1-\varepsilon)) B + a'$.) It
is easy to see that $C \subseteq \icone{B_a}{z}$. Since $\|z x\| \le 2 \|z a'\|$, it follows that $\icone{B_a}{z} \cap H_b \subseteq 2 (1/(1-\varepsilon)) B$. Thus $C \subseteq \icone{2 (1/(1-\varepsilon)) B}{z} \subseteq \icone{2 (1+2\varepsilon) B}{z}$, where in the last containment we have used the fact that $\varepsilon \le \frac{1}{8}$.
It remains to prove that $K \setminus C \subseteq \icone{4(1+2\varepsilon) \Delta(B)}{z}$. By convexity, it follows that $K \setminus C \subseteq \icone{B}{a}$. Define $t = a' - a$ and $B^+ = \conv(B \cup (B+t))$. We claim that $K \setminus C \subseteq \icone{B^+}{a'}$. To prove this, let $p$ be any point in $K \setminus C$. Since $K \setminus C \subseteq \icone{B}{a}$, it follows that $\overline{a p}$ intersects the base $B$; let $b$ denote this point of intersection. Since $b \in B$, we have $b \in B^+$. Define $b' = b + t$. Clearly $b' \in B + t$ and hence $b' \in B^+$. Note that the points $b, b', a', a$ form a parallelogram (because $b'-a' = b-a$). By elementary geometry, $p$ also lies in the 2-dimensional flat of this parallelogram and $\overline{a' p}$ intersects $\overline{b b'}$. Since $b, b' \in B^+$ and $B^+$ is convex, it follows that $\overline{b b'}$ is contained in $B^+$. Thus $\overline{a' p}$ intersects $B^+$, which implies that $p \in \icone{B^+}{a'}$. This proves that $K \setminus C \subseteq \icone{B^+}{a'}$, as desired.
Next consider the cone obtained by translating $\icone{B^+}{a'}$ vertically upwards to $z$. Clearly the resulting cone contains $K \setminus C$, and since $\|z x\| \le 4\|a' x\|$, it follows that the intersection of this cone with $H_b$ is contained in $4B^+$. Thus $K \setminus C \subseteq \icone{4B^+}{z}$.
To complete the proof we need to relate $B^+$ to $\Delta(B)$. To be precise, we will show that $B^+ \subseteq (1+2\varepsilon) \Delta(B)$. Recall that $B^+ = \conv(B \cup (B+t))$. By our earlier remarks, $a \in B_a$ and hence $-t = a - a' \in (1/(1-\varepsilon)) B$. It follows that $B + t \subseteq (1/(1-\varepsilon)) B - (-t) \subseteq \Delta((1/(1-\varepsilon)) B)$, where the first containment is trivial and the second is immediate from the definition of difference bodies. Also, $B \subseteq \Delta((1/(1-\varepsilon)) B)$ holds trivially. By convexity of difference bodies, it follows that $B^+ \subseteq \Delta((1/(1-\varepsilon)) B)$. Thus $B^+ \subseteq (1/(1-\varepsilon)) \Delta(B) \subseteq (1+2\varepsilon) \Delta(B)$. Recalling that $K \setminus C \subseteq \icone{4B^+}{z}$, it follows that $K \setminus C \subseteq \icone{4 (1+2\varepsilon) \Delta(B)}{z}$, which completes the proof.
\end{proof}
\subsection{Relating Caps in the Primal and Polar} \label{s:primal-polar}
In order to establish a Mahler-like relation between the volumes of caps of $K$ and $K^*$, it will be helpful to consider projections in one lower dimension, $n-1$. We will make use of a special case of a result appearing in \cite{AAFM22} (Lemma~{3.1}).
Consider a convex body $K$ lying on an $(n-1)$-dimensional hyperplane and a point $z$ that lies on the opposite side of this hyperplane from the origin (see Figure~\ref{f:polars}). The polar of the dual cap of $K$ with respect to $z$ is an $(n-1)$-dimensional convex body on the hyperplane $z^*$. Letting $G$ denote this object, the following lemma shows that if we project both $K$ and $G$ onto a suitable $(n-1)$-dimensional hyperplane, $G$ is the polar of $K$ up to scale factor.
\begin{lemma}[Arya {\textit{et al.}}~\cite{AAFM22}] \label{lem:polars}
Let $z \in \mathbb{R}^n$ be a point that lies on a vertical ray from the origin $O$, and let $K$ be an $(n-1)$-dimensional convex body whose interior intersects the segment $O z$ at some point $x$. Further, suppose that $K$ lies on a hyperplane orthogonal to $Oz$. Let $G = (\dcap{K}{z})^*$ and let $t$ be the point of intersection of the vertical ray from $O$ with $z^*$. Then $G - t = \alpha (K - x)^*$, where $\alpha = \|x z\| / \|O z\|$.
\end{lemma}
\begin{figure}
\caption{\label{f:polars}
\label{f:polars}
\end{figure}
The following lemma describes the correspondence between caps in $K$ and its polar $K^*$, and it establishes the critical Mahler-type relationship between the volumes of these caps.
\begin{lemma} \label{lem:vol-product}
Let $0 < \varepsilon \leq \frac{1}{8}$, and let $K \subseteq \mathbb{R}^n$ be a well-centered convex body. Let $C$ be a cap of $K$ of width at least $\varepsilon$. Consider the ray shot from the origin orthogonal to the base of $C$, and let $D$ be a cap of $K^*$ of width at least $\varepsilon$ such that this ray intersects the interior of its base (see Figure~\ref{f:volprod}). Then
\[
\vol_K(C) \cdot \vol_{K^*}(D)
~ \geq ~ 2^{-O(n)} \varepsilon^{n+1}.
\]
\end{lemma}
\begin{figure}
\caption{\label{f:volprod}
\label{f:volprod}
\end{figure}
\begin{proof}
Let $C'$ be a cap of width $2\varepsilon$ whose base is parallel to the base of $C$ and which is on the same side of the origin as $C$. Clearly such a cap can be obtained by translating the base of $C$ parallel to itself. Note that $C' \subseteq C^2$ and so, by Lemma~\ref{lem:cap-exp}, it follows that $\vol(C') \le 2^{O(n)} \cdot \vol(C)$. Let $r$ denote the ray in the polar space, emanating from the origin of $K^*$ in a direction orthogonal to the base of $C$ (see Figure~\ref{f:volprodproof}). Recall that $r$ intersects the interior of the base of $D$. By Lemma~\ref{lem:cap-tech}, we can find a cap $D' \subseteq D$ whose width is $\varepsilon$ and such that ray $r$ intersects the interior of the base of $D'$. It is now easy to see that it suffices to prove the lemma with $C'$ and $D'$ in place of $C$ and $D$, respectively. As a convenience, in the remainder of this proof, we will write $C$ and $D$ in place of $C'$ and $D'$, respectively.
\begin{figure}
\caption{\label{f:volprodproof}
\label{f:volprodproof}
\end{figure}
As the product considered in this lemma is affine-invariant, we will apply a suitable linear transformation to simplify the subsequent analysis. Specifically, we apply a linear transformation in the polar space such that the base of $D$ becomes horizontal while the ray $r$ is directed vertically upwards. It is easy to see that the effect of this transformation in the original space is to make the base of cap $C$ horizontal (because it is the polar of a point on ray $r$). To summarize, after the transformation, the hyperplanes passing through the bases of the caps $C$ and $D$ are horizontal and above the origin and as relative measures the widths of both caps are unchanged. Further, the ray $r$ is directed vertically upwards in the polar and intersects the interior of the base of $D$. Also, after uniform scaling, we may assume that the absolute distance between the origin and the supporting hyperplane of cap $C$ that is parallel to its base is unity.
Let $B_C$ denote the base of cap $C$ and $H_C$ denote the hyperplane passing through $B_C$. Also, let $B_D$ denote the base of cap $D$ and $H_D$ denote the hyperplane passing through $B_D$. Define $z = H_C^*$. Note that $z$ lies outside $K^*$ on the ray from the origin directed vertically upwards and $\ray(z) = \width(C) = 2\varepsilon$. By Lemma~\ref{lem:polardcap}, $B_C = (\dcap{K^*}{z})^*$. Define $\Upsilon = \icone{K^*}{z} \cap H_D$. Clearly $\dcap{K^*}{z} = \dcap{\Upsilon}{z}$. Thus $B_C = (\dcap{\Upsilon}{z})^*$.
Let $y$ denote the point of intersection of the vertical ray from $O$ with $B_C$, and let $x$ denote the point of intersection of the vertical ray from $O$ with $B_D$. Henceforth, in this proof, we will treat $y$ as the origin in the primal space and $x$ as the origin in the polar space. Applying Lemma~\ref{lem:polars} (setting $K$ in that lemma to $\Upsilon$), it follows that $B_C = \alpha \Upsilon^*$, where $\alpha = \|x z\| / \|O z\|$. Noting that $B_C$ is $(n-1)$-dimensional and $\alpha = \Theta(\varepsilon)$, it follows that
\[
\area(B_C) ~ \geq ~ 2^{-O(n)} \varepsilon^{n-1} \cdot \area(\Upsilon^*).
\]
By Lemma~\ref{lem:vol-cap}, we have $\vol(C) \geq 2^{-O(n)} \varepsilon \cdot \area(B_C)$ and $\vol(D) \geq 2^{-O(n)} \varepsilon \cdot \area(B_D)$. Thus,
\begin{equation} \label{eq:mah1}
\vol(C) \cdot \vol(D)
~ \geq ~ 2^{-O(n)} \varepsilon^2 \cdot \area(B_C) \cdot \area(B_D)
~ \geq ~ 2^{-O(n)} \varepsilon^{n+1} \cdot \area(\Upsilon^*) \cdot \area(B_D).
\end{equation}
By Lemma~\ref{lem:sandwich-dualcaps}, $\Upsilon \subseteq B_{\Delta}$, where $B_\Delta = 5 \Delta(B_D)$. Recalling from Lemma~\ref{lem:vol-diffbody} that $\area(\Delta(B_D)) \leq 4^{n-1} \cdot \area(B_D)$, we have
\[
\area(\Upsilon)
~ \leq ~ \area(B_{\Delta})
~ = ~ 5^{n-1} \cdot \area(\Delta(B_D))
~ \leq ~ 5^{n-1} \cdot 4^{n-1} \cdot \area(B_D)
~ \leq ~ 2^{O(n)} \cdot \area(B_D).
\]
Substituting this bound into Eq.~\eqref{eq:mah1}, we obtain
\[
\vol(C) \cdot \vol(D)
~ \geq ~ 2^{-O(n)} \varepsilon^{n+1} \cdot \area(\Upsilon^*) \cdot \area(\Upsilon)
~ \geq ~ 2^{-O(n)} \varepsilon^{n+1} \cdot \omega_{n-1}^2,
\]
where we have applied Lemma~\ref{lem:mahler-bounds} to lower bound the Mahler volume in the last step. Since $K$ is well-centered, it follows from Lemma~\ref{lem:centroid} that $K$ satisfies the Santal{\'o} property, that is, $\vol(K) \cdot \vol(K^*) \leq 2^{O(n)} \cdot \omega_n^2$. Recalling the definition of $\omega_n$ from Section~\ref{s:centrality}, we have $\omega_{n-1} / \omega_n = \Theta(\sqrt{n})$. Thus
\[
\vol_K(C) \cdot \vol_{K^*}(D)
~ \geq ~ 2^{-O(n)} \varepsilon^{n+1},
\]
as desired.
\end{proof}
Finally, we present the main ``take-away'' of this section. This lemma shows that the bound on the product of volumes from the previous lemma holds within the neighborhood of the ray, specifically to any shrunken Macbeath region that intersects the ray.
\begin{lemma} \label{lem:mahler-mac}
Let parameter $\varepsilon$, convex body $K$ and cap $C$ of $K$ be as defined in Lemma~\ref{lem:vol-product}. Suppose that the ray $r$ shot from the origin orthogonal to the base of $C$ intersects a Macbeath region $M^{1/5}(x)$ of $K^*$, where $\ray(x) = \varepsilon$ (see Figure~\ref{f:mahlermac}). Then
\[
\vol_K(C) \cdot \vol_{K^*}(M^{1/5}(x))
~ \geq ~ 2^{-O(n)} \varepsilon^{n+1}.
\]
\end{lemma}
\begin{figure}
\caption{\label{f:mahlermac}
\label{f:mahlermac}
\end{figure}
\begin{proof}
Let $y$ be a point in the intersection of the ray $r$ with $M^{1/5}(x)$ and let $D$ denote the minimum volume cap of $K^*$ that contains $y$. Since $M^{1/5}(y) \cap M^{1/5}(x) \neq \emptyset$, by Lemma~\ref{lem:mac-mac}, we have $M^{1/5}(y) \subseteq M^{4/5}(x)$. Thus $\vol(M^{1/5}(x)) \geq 2^{-O(n)} \cdot \vol(M^{1/5}(y))$. Also, by Corollary~\ref{cor:min-vol-cap2}, we have $\vol(M^{1/5}(y)) \geq 2^{-O(n)} \cdot \vol(D)$. Thus $\vol(M^{1/5}(x)) \geq 2^{-O(n)} \cdot \vol(D)$. To complete the proof, it suffices to show the inequality given in the statement of the lemma with $D$ in place of $M^{1/5}(x)$. By Lemma~\ref{lem:core-ray}, we have $\ray(y) \geq \ray(x) / 2$, and by Lemma~\ref{lem:raydist-width}, we have $\width(D) \geq \ray(y)$. Thus $\width(D) \geq \ray(x)/2 = \varepsilon / 2$. Applying Lemma~\ref{lem:vol-product} on caps $C$ and $D$, the desired inequality now follows.
\end{proof}
\section{Covers of Convex Bodies} \label{s:cover}
As mentioned earlier, we employ a Macbeath region-based adaptation of $(c,\varepsilon)$-coverings in our solution to approximate CVP. Since our construction will involve composing coverings of various regions of $K$, we define our coverings in the following restricted manner. Let $K \subseteq \mathbb{R}^n$ be a convex body, let $\Lambda$ be an arbitrary subset of $\interior(K)$, and let $c \geq 2$ be any constant. Define a \emph{$\Lambda$-limited $c$-covering} to be a collection $\mathcal{Q}$ of convex bodies that cover $\Lambda$, such that the $c$-factor expansion of each body about its centroid is contained within $K$.
Our coverings will be based on Macbeath regions. Given $X \subseteq K$, define $\mathscr{M}_K^{\lambda}(X) = \{ M_K^{\lambda}(x) : x \in X\}$. Define a \emph{$(K, \Lambda, c)$-MNet} to be any maximal set of points $X \subseteq \Lambda$ such that the shrunken Macbeath regions $\mathscr{M}_K^{1/4c}(X)$ are pairwise disjoint. Through basic properties of Macbeath regions, we can obtain a covering by suitable expansion as shown in the following lemma, which summarizes the properties of MNets.
\begin{lemma} \label{lem:delone}
Given a convex body $K \subseteq \mathbb{R}^n$, $\Lambda \subset \interior(K)$, and $c \ge 2$, a $(K,\Lambda,c)$-MNet $X$ satisfies the following properties:
\begin{enumerate}
\item[$(a)$] (Packing) The elements of $\mathscr{M}_K^{1/4c}(X)$ are pairwise disjoint.
\item[$(b)$] (Covering) The union of $\mathscr{M}_K^{1/c}(X)$ covers $\Lambda$.
\item[$(c)$] (Buffering) The union of $\mathscr{M}_K(X)$ is contained within $K$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part~(a) is an immediate consequences of the definition. Part~(c) follows by basic properties of Macbeath regions. To prove part (b), let $\lambda = 1/c$ and consider any point $y \in \Lambda$. By maximality, there is $x \in X$ such that $M^{\lambda/4}(x)$ overlaps $M^{\lambda/4}(y)$. By Lemma~\ref{lem:mac-mac}, $M^{\lambda/4}(y) \subseteq M^{\lambda}(x)$, which implies that $y \in M^{\lambda}(x)$.
\end{proof}
Observe that property~(b) implies that if $X$ is a $(K, \Lambda, c)$-MNet, then $\mathscr{M}_K^{1/c}(X)$ is a $\Lambda$-limited $c$-covering. Further, recalling that $K_{\varepsilon} = (1+\varepsilon)K$, if $X$ is a $(K_{\varepsilon}, K, c)$-MNet, then $\mathscr{M}_K^{1/c}(X)$ is a $(c,\varepsilon)$-covering of $K$ (see Figure~\ref{f:cover}).
\begin{figure}
\caption{\label{f:cover}
\label{f:cover}
\end{figure}
\subsection{Instance Optimality} \label{s:instance-opt}
In this section we show that an MNet for $K_{\varepsilon}$ naturally generates an \emph{instance optimal} $(2,\varepsilon)$-covering in the sense that its size cannot exceed that of any $(2,\varepsilon)$-covering of $K$ by a factor of $2^{O(n)}$ (Lemma~\ref{lem:cover-inst} and Theorem~\ref{thm:cover-inst}). It is worth noting that this fact holds irrespective of the location of the origin in $\interior(K)$. In other words, we require no centrality assumptions for this result.
We begin with two lemmas that are straightforward adaptations of lemmas in \cite{NaV22}. The first lemma shows that one incurs a size penalty of only $2^{O(n)}$ by restricting $c$-coverings to centrally symmetric convex bodies. The second shows that a constant change in the expansion factor results in a similar penalty.
\begin{lemma} \label{lem:cover-sym}
Let $c \ge 2$ be a constant. Let $Q \subseteq \mathbb{R}^n$ be a convex body with its centroid at the origin. There exists a set of $2^{O(n)}$ centrally symmetric convex bodies which together cover $Q$, such that the central $c$-expansion of any of these bodies is contained within $2 Q$.
\end{lemma}
\begin{proof}
Let $R = M_Q(O) = Q \cap -Q$, and let $R' = \frac{1}{c} R$ and $R'' = \frac{1}{2c} R$. Clearly, all these bodies are centrally symmetric about the origin. By Lemma~\ref{lem:centroid}, $\vol(R) \geq 2^{-O(n)} \vol(Q)$, and since $c$ is a constant, the volumes of $R'$ and $R''$ are similarly bounded. Let $X \subset Q$ be a maximal discrete set of points such that the translates $X \oplus R'' = \{x + R'' : x \in X\}$ are pairwise disjoint. We will show that the bodies $X \oplus R'$ satisfy the lemma.
To establish the expansion property, observe that for all $x \in X$, $x + c R' = x + R \subseteq Q \oplus R \subseteq 2 Q$. To prove the size bound, by disjointness we have
\[
|X| \cdot \vol(R'')
~ \leq ~ \vol(2 Q)
~ \leq ~ 2^{O(n)} \vol(Q)
~ \leq ~ 2^{O(n)} \vol(R''),
\]
and therefore $|X| = 2^{O(n)}$. Finally, to prove coverage, consider any $y \in Q$. By maximality there exists $x \in X$ such that $x + R''$ overlaps $y + R''$. Since $c \geq 2$, it follows that $y \in x + 2 R'' = x + R'$.
\end{proof}
\begin{lemma} \label{lem:cover-transform}
Let $K \subseteq \mathbb{R}^n$ be a convex body, let $\Lambda \subset \interior(K)$, and let $c \ge 2$ be a constant. Let $\mathcal{Q}$ be a $\Lambda$-limited $c$-covering with respect to $K$. For any constant $c' \ge 2$, there exists a $\Lambda$-limited $c'$-covering with respect to $K$ consisting of centrally symmetric convex bodies whose size is at most $2^{O(n)} |\mathcal{Q}|$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:cover-sym}, we can replace each body $Q \in \mathcal{Q}$ by a set of $2^{O(n)}$ centrally symmetric convex bodies which together cover $Q$ and such that the $c'$-expansion of any of these bodies is contained within the 2-expansion of $Q$ (about its centroid). It is easy to see that the resulting set of bodies is a $\Lambda$-limited $c'$-covering with respect to $K$ with the desired size.
\end{proof}
We are now ready to show that a $(K,\Lambda,c)$-MNet can be used to generate an instance-optimal limited covering.
\begin{lemma} \label{lem:cover-inst}
Let $K \subseteq \mathbb{R}^n$ be a convex body, let $\Lambda \subset \interior(K)$, and let $c \geq 2$ be a constant. Let $X$ be a $(K,\Lambda,c)$-MNet, and let $\mathscr{M} = \mathscr{M}_K^{1/c}(X)$ be the associated $\Lambda$-limited $c$-covering with respect to $K$. Given any $\Lambda$-limited $c$-covering $\mathcal{Q}$ with respect to $K$, $|\mathscr{M}| \leq 2^{O(n)} |\mathcal{Q}|$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:cover-transform}, there exists a $\Lambda$-limited 5-covering with respect to $K$ consisting of at most $2^{O(n)} |\mathcal{Q}|$ centrally symmetric convex bodies. Let $\mathcal{Q}'$ denote this covering, and let $Y$ denote the set of centers of these bodies. Consider any $Q \in \mathcal{Q}'$, and let $y$ denote its center. By definition, $M(y) = M_K(y)$ is the largest centrally symmetric body centered at $y$ that is contained within $K$. Since $Q$ is a centrally symmetric convex body whose 5-expansion about $y$ is contained within $K$, it follows that $Q \subseteq M^{1/5}(y)$. Therefore, $\mathscr{M}^{1/5}(Y)$ is a $\Lambda$-limited 5-covering of the same cardinality as $\mathcal{Q}'$.
By the packing property of Lemma~\ref{lem:delone}, the Macbeath regions $\mathscr{M}^{1/4c}(X)$ are pairwise disjoint. To relate these two coverings, assign each $x \in X$ to any $y \in Y$ such that $x \in M^{1/5}(y)$. We will show that at most $2^{O(n)}$ elements of $X$ are assigned to any $y \in Y$. Assuming this for now, we have
\[
|\mathscr{M}|
~ = ~ |X|
~ \leq ~ 2^{O(n)} |Y|
~ = ~ 2^{O(n)} |\mathcal{Q}'|
~ \leq ~ 2^{O(n)} |\mathcal{Q}|,
\]
thus completing the proof.
To prove the assertion, consider any $x \in X$ assigned to some $y \in Y$. Since $M^{1/5}(x) \cap M^{1/5}(y) \neq \emptyset$, by Lemma~\ref{lem:mac-mac} and the fact that $c \geq 2$, we have
\[
M^{1/4c}(x)
~ \subseteq ~ M^{1/5}(x)
~ \subseteq ~ M^{4/5}(y).
\]
Lemma~\ref{lem:mac-mac} also implies that $M^{1/5}(y) \subseteq M^{4/5}(x)$, and so $\vol(M^{1/4c}(x)) \geq 2^{-O(n)} \vol(M^{4/5}(y))$. Since the Macbeath regions of $M^{1/4c}(X)$ are pairwise disjoint, by a simple packing argument, the number of points of $X$ assigned to any $y \in Y$ is at most $2^{O(n)}$, as desired.
\end{proof}
Recall that a $K$-limited $c$-covering with respect to $K_{\varepsilon} = (1+\varepsilon) K$ is a $(c,\varepsilon)$-covering for $K$. Applying the above lemma in this case, we obtain the main result of this section.
\begin{theorem} \label{thm:cover-inst}
Let $0 < \varepsilon \leq 1$, let $K \subseteq \mathbb{R}^n$ be a convex body such that $O \in \interior(K)$, and let $c \ge 2$ be a constant. Let $X$ be a $(K_{\varepsilon}, K, c)$-MNet, and let $\mathscr{M} = \mathscr{M}_{K_\varepsilon}^{1/c}(X)$ be the associated $(c,\varepsilon)$-covering with respect to $K$. Given any $(c,\varepsilon)$-covering $\mathcal{Q}$ with respect to $K$, $|\mathscr{M}| \leq 2^{O(n)} |\mathcal{Q}|$.
\end{theorem}
\subsection{Worst-Case Optimality} \label{s:worst-opt}
Our main result in this section, given in Lemma~\ref{lem:cover-worst}, establishes the existence of a $(c,\varepsilon)$-covering of size $2^{O(n)}/\varepsilon^{(n-1)/2}$. This directly implies Theorem~\ref{thm:cover-worst}. Before presenting this result, it will be useful to first establish a bound on the maximum number of disjoint Macbeath regions associated with $\Theta(\varepsilon)$-width caps. The proof is based on the relationship between caps in $K$ and $K^*$.
Let $K \subseteq \mathbb{R}^n$ be a well-centered convex body. Given $0 < \varepsilon \leq \frac{1}{32}$, let $\Lambda \subseteq K$ denote the centroids of the bases of all caps whose relative widths are between $\varepsilon$ and $2\varepsilon$. Given a constant $c \geq 2$, let $X$ be a $(K, \Lambda, c)$-MNet, and let $\mathscr{M}(X) = \mathscr{M}_K^{1/c}(X)$ be the associated covering. We will show that $|X| \le 2^{O(n)} / \varepsilon^{(n-1)/2}$, which will imply a similar bound on the size of the associated $\Lambda$-limited $c$-covering.
Recall that for any region $\Lambda \subseteq K$, its relative volume is $\vol_K(\Lambda) = \vol(\Lambda)/\vol(K)$. Let $t = \varepsilon^{(n+1)/2}$. Define $X_{\geq t} = \{x \in X : \vol_K(M_K^{1/c}(x)) \geq t\}$ to be the centers of the ``large'' Macbeath regions in the covering of relative volume at least $t$, and let $X_{< t} = X \setminus X_{\geq t}$ denote the centers of the remaining ``small'' Macbeath regions.
To bound the number of small Macbeath regions, we will make use of the polar body $K^*$. Let $\Lambda'$ denote the boundary of $(1-\varepsilon) K^*$. Let $Y$ be a $(K^*, \Lambda', 5)$-MNet, and let $\mathscr{M}(Y) = \mathscr{M}_{K^*}^{1/5}(Y)$ be the associated covering. Let $t' = 2^{-O(n)} \varepsilon^{(n+1)/2}$, where the constant hidden in $O(n)$ is sufficiently large, and analogously define $Y_{\geq t'} = \{y \in Y : \vol_{K^*}(M_{K^*}^{1/5}(y)) \geq t'\}$ to be the set of centers of the ``large'' Macbeath regions in the polar covering $\mathscr{M}(Y)$ whose relative volume is at least $t'$.
The following lemma summarizes the essential properties of the resulting Macbeath regions.
\begin{lemma} \label{lem:layer}
Given a well-centered convex body $K \subseteq \mathbb{R}^n$, $0 < \varepsilon \leq \frac{1}{32}$, constant $c \ge 2$, and the entities $\Lambda$, $\Lambda'$, $X$, $Y$, $t$, and $t'$ defined above, the following hold:
\begin{enumerate}
\item[$(a)$] The regions $\mathscr{M}_K^{1/c}(X)$ are contained in $\Lambda_K(\varepsilon) = K \setminus (1-4\varepsilon)K$, and $\vol_K(\Lambda_K(\varepsilon)) = O(n \varepsilon)$.
\item[$(b)$] For any $x \in X_{\geq t}$, $\vol_K(M^{1/c}(x)) \geq \varepsilon^{(n+1)/2}$, and $|X_{\geq t}| \le 2^{O(n)} / \varepsilon^{(n-1)/2}$.
\item[$(c)$] The regions $\mathscr{M}_{K^*}^{1/5}(Y)$ are contained in $\Lambda_{K^*}(\varepsilon) = K^* \setminus (1-2\varepsilon)K^*$, and $\vol_{K^*}(\Lambda_{K^*}(\varepsilon)) = O(n \varepsilon)$.
\item[$(d)$] For any $y \in Y_{\geq t'}$, $\vol_{K^*}(M^{1/5}(y)) \geq 2^{-O(n)} \varepsilon^{(n+1)/2}$, and $|Y_{\geq t'}| \le 2^{O(n)} / \varepsilon^{(n-1)/2}$.
\item[$(e)$] For any $x \in X_{< t}$, there is $y \in Y_{\geq t'}$ such that for any point $z \in M^{1/5}(y)$, we have $M^{1/c}(x) \subseteq C_z^{32}$, and $\vol(M^{1/c}(x)) \geq 2^{-O(n)} \vol(C_z^{32})$, where $C_z \subseteq K$ is $z$'s $\varepsilon$-representative cap.
\item[$(f)$] $|X| \le 2^{O(n)} / \varepsilon^{(n-1)/2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove~(a), let $x$ be any point of $X$ and let $M_x = M^{1/c}(x)$ be the associated covering Macbeath region. Because $X$ is a $(K, \Lambda, c)$-MNet, $M_x$ is centered at the centroid of the base of a cap $C_x$ of width between $\varepsilon$ and $2\varepsilon$. Since $c \geq 1$, by Lemma~\ref{lem:mac-cap-var}, $M_x \subseteq C_x^2$. As $C_x^2$ has width at most $4\varepsilon$, it follows that $C_x^2 \subseteq \Lambda_K(\varepsilon)$, and so too is $M_x$. Clearly, $\vol_K(\Lambda_K(\varepsilon)) = 1 - (1-4\varepsilon)^n = O(n\varepsilon)$.
To prove~(b), observe that the Macbeath regions $\mathscr{M}^{1/4c}(X_{\geq t})$ are pairwise disjoint, and each has relative volume at least $t/4^n \geq 2^{-O(n)} \varepsilon^{(n+1)/2}$. By a simple packing argument, $|X_{\geq t}| \leq \vol_K(\Lambda_K(\varepsilon)) / (t/4^n) \leq 2^{O(n)} / \varepsilon^{(n-1)/2}$.
To prove~(c), let $y$ be any point of $Y$ and let $M_y = M^{1/5}(y)$ be the associated covering Macbeath region. Since $y$ lies on the boundary of $(1-\varepsilon) K^*$, $y$ lies on the base of a cap $C_y$ of $K^*$ induced by the supporting hyperplane of $(1-\varepsilon) K^*$. By Lemma~\ref{lem:mac-cap-var}, $M_y \subseteq C_y^2$. Since $C_y^2$ has width $2\varepsilon$, it follows that $C_y^2 \subseteq \Lambda_{K^*}(\varepsilon)$, and so too is $M_y$. Also, $\vol_{K^*}(\Lambda_{K^*}(\varepsilon)) = 1 - (1-2\varepsilon)^n = O(n\varepsilon)$.
To prove~(d), observe that by Lemma~\ref{lem:delone}, the Macbeath regions $\mathscr{M}^{1/(4 \cdot 5)}(Y_{\geq t'})$ are pairwise disjoint, and each has relative volume at least $t'/4^n = 2^{-O(n)} \varepsilon^{(n+1)/2}$. By a simple packing argument, $|Y_{\geq t'}| \leq \vol_{K^*}(\Lambda_{K^*}(\varepsilon)) / (t'/4^n) \leq 2^{O(n)} / \varepsilon^{(n-1)/2}$.
To prove~(e), let $x$ be any point of $X_{< t}$ and let $M_x = M^{1/c}(x)$ be the associated covering Macbeath region. As in~(a), $M_x$ is centered at the centroid of the base of a cap $C_x$ of width between $\varepsilon$ and $2\varepsilon$. Since $c$ is a constant, by Lemma~\ref{lem:min-vol-cap2}, $\vol(C_x) \leq 2^{O(n)} \vol(M_x)$. Since $\vol_K(M_x) \leq t = \varepsilon^{(n+1)/2}$, we have $\vol_K(C_x) \leq 2^{O(n)} \varepsilon^{(n+1)/2}$.
In the polar, consider the ray $r$ shot from the origin orthogonal to the base of $C_x$. This ray will intersect some covering Macbeath region $M_y = M^{1/5}(y)$, for some $y \in Y$. We will show that $y$ satisfies all the properties given in part~(e). As $K$ is well-centered, we can apply the Mahler-like volume relation from Lemma~\ref{lem:mahler-mac} to obtain $\vol_K(C_x) \cdot \vol_{K^*}(M_y) \geq 2^{-O(n)} \varepsilon^{n+1}$. Using the upper bound on $\vol_K(C_x)$ shown above, it follows that $\vol_{K^*}(M_y) \geq 2^{-O(n)} \varepsilon^{(n+1)/2}$. Thus, $y \in Y_{\geq t'}$.
It is easy to verify that the preconditions of Corollary~\ref{cor:sandwich} are satisfied where $C_x$ plays the role of $C$, $M_y$ plays the role of $M^{1/5}(y)$, and $z$ is any point in $M_y$. It follows that the caps $C_x$ and $C_z$ are 16-similar, that is, $C_x \subseteq C_z^{16}$ and $C_z \subseteq C_x^{16}$. By Lemma~\ref{lem:mac-cap-var}, $M_x \subseteq C_x^2$, and by Lemma~\ref{lem:cap-containment-exp}, $C_x^2 \subseteq C_z^{32}$. Thus $M_x \subseteq C_z^{32}$. Also, since $C_z \subseteq C_x^{16}$, it follows from Lemma~\ref{lem:cap-exp} that $\vol(C_x) \geq 2^{-O(n)} \vol(C_z)$. By Lemma~\ref{lem:min-vol-cap2}, $\vol(M_x) \geq 2^{-O(n)} \vol(C_x)$. Thus $\vol(M_x) \geq 2^{-O(n)} \vol(C_z) \ge 2^{-O(n)} \vol(C_z^{32})$, which establishes~(e).
Finally, to prove~(f), observe that in light of~(b), it suffices to show that $|X_{< t}| \le 2^{O(n)} / \varepsilon^{(n-1)/2}$. This quantity can be bounded by the following charging argument. For each $y \in Y_{\geq t'}$, we say that it \emph{charges} all the points $x \in X$ whose Macbeath region $M^{1/4c}(x)$ is contained in $C_y^{32}$ and whose volume is at least $2^{-O(n)} \vol(C_y^{32})$, where the constant hidden in $O(n)$ is sufficiently large. Note that any point of $Y_{\geq t'}$ charges at most $2^{O(n)}$ points of $X$. Applying part (e), it follows that every $x \in X_{< t}$ is charged by some $y \in Y_{\geq t'}$. Since $|Y_{\geq t'}| \leq 2^{O(n)} / \varepsilon^{(n-1)/2}$ and each point of $Y_{\geq t'}$ charges at most $2^{O(n)}$ points of $X$, it follows that $|X_{< t}| \leq 2^{O(n)} / \varepsilon^{(n-1)/2}$, which completes the proof.
\end{proof}
We are now ready to present the main result of this section. Recall that $K \subseteq \mathbb{R}^n$ is a well-centered convex body. Given $0 < \varepsilon \leq 1$, define a \emph{layered decomposition} of $K$ as follows. Recalling that $K_{\varepsilon} = (1+\varepsilon)K$, for each $x \in K$, define its \emph{width}, denoted $\width(x)$, to be the width of the associated minimum volume cap of $K_{\varepsilon}$. Since $\ray_{K_{\varepsilon}}(x) \geq \varepsilon / (1 + \varepsilon) \geq \varepsilon / 2$, it follows from Lemma~\ref{lem:raydist-width} that $\width(x) \geq \varepsilon / 2$. Let $\beta$ be a sufficiently small constant, and let $k_0 = \ceil{\log\frac{\beta}{\varepsilon}}$. For $0 \leq i \leq k_0$, define layer $i$ to be the set of points $x \in K$ such that $\width(x) \in [2^{i-1},2^i)\varepsilon$. Define layer $k_0 + 1$ to be the set of remaining points of $K$, which have width at least $\beta$. Note that the number of layers is $O(\log\frac{1}{\varepsilon})$.
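The assignment of points to layers can be sketched directly from the dyadic boundaries above. This is an illustrative sketch only; in particular, we take the logarithm in the definition of $k_0$ to be base 2, matching the dyadic layers, and the function names are ours.

```python
import math

def layer_index(width, eps, beta):
    """Layer of a point whose minimum-volume cap of K_eps has the given
    width: layer i (0 <= i <= k0) collects widths in [2^(i-1), 2^i)*eps,
    and the final layer k0 + 1 collects widths of at least beta.
    Logs are base 2, matching the dyadic layer boundaries."""
    k0 = math.ceil(math.log2(beta / eps))
    if width >= beta:
        return k0 + 1
    # width in [2^(i-1), 2^i)*eps  <=>  log2(width/eps) in [i-1, i)
    return max(0, math.floor(math.log2(width / eps)) + 1)
```

Since every point has width at least $\varepsilon/2 = 2^{-1}\varepsilon$, the smallest index returned is layer $0$, and the number of distinct layers is $k_0 + 2 = O(\log\frac{1}{\varepsilon})$.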
\begin{lemma} \label{lem:cover-worst}
Let $0 < \varepsilon \leq 1$, let $K \subseteq \mathbb{R}^n$ be a well-centered convex body, and let $c \ge 2$ be a constant. Let $X$ be a $(K_{\varepsilon},K,c)$-MNet, and let $\mathscr{M} = \mathscr{M}_{K_{\varepsilon}}^{1/c}(X)$. Then $\mathscr{M}$ is a $(c,\varepsilon)$-covering for $K$ consisting of at most $2^{O(n)} / \varepsilon^{(n-1)/2}$ centrally symmetric convex bodies.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:delone}, $\mathscr{M}$ is a $(c,\varepsilon)$-covering for $K$. We will bound the size of the covering by partitioning the points of $X$ based on the layered decomposition (defined above) and then use Lemma~\ref{lem:layer} to bound the number of points in each layer.
For $0 \leq i \leq k_0$, let $X_i$ be the subset of points of $X$ that lie in layer $i$. Since $K$ is well-centered, $K_{\varepsilon}$ is also well-centered. By Lemma~\ref{lem:layer}(f), $|X_i| \leq 2^{O(n)} /(2^i \varepsilon)^{(n-1)/2}$. Summing $|X_i|$ over all layers $0$ through $k_0$, we obtain at most $2^{O(n)} / \varepsilon^{(n-1)/2}$ points in all these layers.
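In more detail, the per-layer bound decreases geometrically in $i$, so the sum is dominated by the $i = 0$ term (a sketch, assuming $n \geq 2$):
\[
  \sum_{i=0}^{k_0} |X_i|
    ~ \leq ~ \frac{2^{O(n)}}{\varepsilon^{(n-1)/2}} \sum_{i=0}^{k_0} \big(2^{-(n-1)/2}\big)^{i}
    ~ \leq ~ \frac{2^{O(n)}}{\varepsilon^{(n-1)/2}},
\]
where the final step bounds the geometric series by a constant, since its ratio is at most $1/\sqrt{2}$ when $n \geq 2$.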
It remains only to bound $|X_{k_0+1}|$. Consider the set $\mathscr{M}_{K_{\varepsilon}}^{1/4c}(X_{k_0+1})$ of the associated packing Macbeath regions. By Lemma~\ref{lem:delone}, these Macbeath regions are pairwise disjoint. Recall that the minimum volume cap of any point in $X_{k_0+1}$ has width at least $\beta$ (used in the definition of $k_0$). Hence by Lemma~\ref{lem:wide-cap} (and the fact that $c$ is a constant), each of these Macbeath regions has relative volume of at least $2^{-O(n)}$. By a simple packing argument, it follows that $|X_{k_0+1}| \leq 2^{O(n)}$, which completes the proof.
\end{proof}
\section{Applications: Banach-Mazur Approximation} \label{s:approx-BM}
In this section we show that the convex hull of the centers of any $(c,\varepsilon)$-covering implies the existence of an approximating polytope in the Banach-Mazur distance. The main result is given in the following lemma. Combining this with our covering from Theorem~\ref{thm:cover-worst} establishes Theorem~\ref{thm:approx-BM}.
\begin{lemma} \label{lem:approx-BM}
Let $0 < \varepsilon < 1$, let $K \subseteq \mathbb{R}^n$ be a well-centered convex body, and let $c \ge 2$ be a constant. Let $X$ be the set of centers of any $(c,\varepsilon')$-covering of $K(1 + \varepsilon/c)$, where $\varepsilon' = \frac{1+\varepsilon}{1+\varepsilon/c} - 1$. Then $K \subset \conv(X) \subset K(1+\varepsilon)$.
\end{lemma}
\begin{proof}
Let $\mathscr{M}$ denote the covering mentioned in the statement of the lemma. By definition, the bodies of $\mathscr{M}$ together cover $K(1+\varepsilon/c)$ and the $c$-expansion of any such body about its center is contained within $K (1+\varepsilon)$. Since each body of $\mathscr{M}$ is contained within $K(1+\varepsilon)$, it follows that $X \subset K(1+\varepsilon)$ and so $\conv(X) \subset K(1+\varepsilon)$. To prove that $K \subset \conv(X)$, it suffices to show that there is a point of $X$ in every cap of $K(1+\varepsilon)$ defined by a supporting hyperplane of $K$.
\begin{figure}
\caption{Proof of Lemma~\ref{lem:approx-BM}.}
\label{f:thm2}
\end{figure}
Let $C$ be a cap of $K(1+\varepsilon)$ defined by a supporting hyperplane $H$ of $K$. Let $x$ be a point at which $H$ touches $K$. For the sake of concreteness, assume that $H$ is horizontal and $K$ lies below $H$. Consider the ray emanating from the origin passing through $x$. Suppose that this ray intersects the boundary of $K(1+\varepsilon/c)$ at $y$ and the boundary of $K(1+\varepsilon)$ at $z$. Let $H_z$ denote the supporting hyperplane of $K(1+\varepsilon)$ at $z$. Clearly $H_z$ is parallel to $H$ and the distance between $H$ and $H_z$ is $c$ times the distance between $y$ and $H$.
Consider any body $B$ of $\mathscr{M}$ that contains point $y$. We claim that the center $p$ of the body $B$ is contained within $C$. By our earlier remarks, $p \in K(1+\varepsilon)$. Thus, we only need to show that $p$ cannot lie below $H$. To see this, recall that the body formed by expanding $B$ about its center $p$ by a factor of $c$ is contained within $K(1+\varepsilon)$. In particular, the point $p + c (y-p) \in K(1+\varepsilon)$. However, if $p$ lies below $H$, then the point $p + c (y-p)$ would lie above $H_z$, and hence outside $K(1+\varepsilon)$. It follows that $p$ cannot lie below $H$, which completes the proof.
\end{proof}
By Lemma~\ref{lem:cover-worst}, there exists a $(c,\varepsilon')$-covering $\mathscr{M}$ for $K(1 + \varepsilon/c)$ consisting of at most $2^{O(n)} / (\varepsilon')^{(n-1)/2}$ centrally symmetric convex bodies. The bound on vertices in Theorem~\ref{thm:approx-BM} now follows immediately from the above lemma (setting $P = \conv(X)$ and noting that $\varepsilon' = \Theta(\varepsilon)$), and the bound on facets follows via polarity and scaling by a factor of $(1+\varepsilon)$.
\section{Applications: Approximate CVP and IP} \label{s:apps}
\subsection{Preliminaries} \label{s:apps-prelim}
An $n$-dimensional lattice $L \subseteq \mathbb{R}^n$ is the set of all integer linear combinations of a basis $b_1, \ldots, b_n$ of $\mathbb{R}^n$. Given a lattice $L$, a convex body $K$, and a target $t \in \mathbb{R}^n$, the \emph{closest vector problem} (CVP) seeks a closest vector in $L$ to $t$ under $\|\cdot\|_K$. Given a parameter $\varepsilon > 0$, the \emph{$(1+\varepsilon)$-approximate CVP} seeks any lattice vector whose distance to $t$ under $\|\cdot\|_K$ is at most $(1+\varepsilon)$ times the distance of the true closest vector.
We employ a standard computational model in our $(1+\varepsilon)$-CVP algorithm. Given reals $0 < r \leq r'$ and $x \in \mathbb{R}^n$, we say that a convex body $K \subseteq \mathbb{R}^n$ is \emph{$(x,r,r')$-centered} if $x + r B_2^n \subseteq K \subseteq x + r' B_2^n$, where $B_2^n$ is the unit Euclidean ball centered at the origin. We assume that the convex body $K$ inducing the norm is $(O,r,r')$-centered, where both $r$ and $r'$ are given explicitly as inputs.
We assume that the basis vectors of the lattice $L$ are presented as an $n \times n$ matrix over the rationals. Input size is measured as the total number of bits used to encode $r$, $r'$, $t$, $\varepsilon$, and the basis vectors of $L$ (all rationals).
Following standard conventions, we assume that access to $K$ is provided through a \emph{membership oracle}, which on input $x \in \mathbb{R}^n$ returns 1 if $x \in K$ and 0 otherwise. Our algorithms apply more generally where $K$ is presented using a \emph{weak membership oracle}, which takes an extra parameter $\delta > 0$ and only needs to return the correct answer when $x$ is at Euclidean distance at least $\delta$ from the boundary of $K$.
In the oracle model of computation, the running time is measured by the number of oracle calls and bit complexity of arithmetic operations. Note that the running time of our $(1+\varepsilon)$-CVP algorithm will be exponential in the dimension $n$. We will follow standard practice and suppress polynomial factors in $n$ and the input size. We will also simplify the presentation by expressing our algorithms assuming exact oracles, but the adaptation to weak oracles is straightforward.
Our approach to approximate CVP follows one introduced by Eisenbrand {\textit{et al.}}~\cite{EHN11} for $\ell_{\infty}$ and later extended in a number of works~\cite{NaV22,EiV22,RoV22}, which employs coverings of $K$. Given any constant $c \geq 2$, a \emph{$(c,\varepsilon)$-covering} of an $(O,r,r')$-centered convex body $K$ is a collection $\mathcal{Q}$ of convex bodies that together cover $K$, such that a factor-$c$ expansion of each $Q \in \mathcal{Q}$ about its centroid lies within $K_{\varepsilon}$. Nasz{\'o}di and Venzin showed that a $(2,\varepsilon)$-covering of $K$ can be used to boost the approximation factor of any $2$-CVP solver for general norms.
\begin{lemma}[Nasz{\'o}di and Venzin~\cite{NaV22}] \label{lem:boost}
Let $L$ be a lattice and let $K$ be an $(O,r,r')$-centered convex body. Given a $(2,\varepsilon)$-covering of $K$ consisting of $N$ centrally symmetric convex bodies, we can solve $(1 + 7\varepsilon)$-CVP under $\|\cdot\|_K$ with $\widetilde{O}(N)$ calls to a 2-CVP solver for norms (where $\widetilde{O}$ conceals polylogarithmic factors).
\end{lemma}
\subsection{CVP Algorithm} \label{s:apps-cvp}
As in Lemma~\ref{lem:cover-worst}, let $K \subseteq \mathbb{R}^n$ be a well-centered convex body. In this section, we present our algorithm for computing a $(1+\varepsilon)$-approximation to the closest vector (CVP) under the norm defined by $K$.
Given a convex body $K \subseteq \mathbb{R}^n$, $0 < \varepsilon \leq 1$, and a constant $c \geq 2$, a \emph{$(c,\varepsilon)$-enumerator} is a procedure that outputs the elements of a $(c,\varepsilon)$-covering for $K$. Each of the elements of the covering is represented as an oracle for an $(a,r,r')$-centered convex body, where $a$, $r$, and $r'$ are given explicitly in the output (as rationals). Our enumerator will be randomized in the Monte Carlo sense, meaning that it achieves a stated running time, but the output may fail to be a $(c,\varepsilon)$-covering with some given probability. Define an enumerator's \emph{overhead} to be its total running time divided by the number of elements output, and its \emph{space complexity} to be the amount of memory it needs.
Our enumerator is based on constructing hitting sets for coverings associated with certain MNets. The following lemma will be useful.
\begin{lemma} \label{lem:hitting-set}
Let $K \subseteq \mathbb{R}^n$ be a convex body, $\Lambda \subset \interior(K)$, and $c \geq 2$. Let $X$ be a $(K,\Lambda,4c)$-MNet and let $\mathscr{M} = \mathscr{M}_K^{1/4c}(X)$ be the associated covering. Let $Y$ be any hitting set for $\mathscr{M}$ in the sense that for each $M \in \mathscr{M}$, $Y \cap M \neq \emptyset$. Then $\mathscr{M}_K^{1/c}(Y)$ is a $\Lambda$-limited $c$-covering with respect to $K$.
\end{lemma}
\begin{proof}
Since $c > 1$, the $c$-expansion of any Macbeath region of $M^{1/c}(Y)$ is contained within $K$. To prove the covering property, let $z$ be any point of $\Lambda$. By Lemma~\ref{lem:delone}, there is a point $x \in X$ such that $z \in M^{1/4c}(x)$. Let $y$ be a point of $Y$ that is contained in $M^{1/4c}(x)$. Since $M^{1/4c}(x) \cap M^{1/4c}(y) \neq \emptyset$, by Lemma~\ref{lem:mac-mac}, $M^{1/4c}(x) \subseteq M^{1/c}(y)$. Thus $z \in M^{1/c}(y)$. It follows that $M^{1/c}(Y)$ is a $\Lambda$-limited $c$-covering with respect to $K$.
\end{proof}
The following lemma shows that membership oracles for $K$ can be extended to its polar as well as Macbeath regions and caps that are $\varepsilon$-deep.
\begin{lemma} \label{lem:oracle}
Given an $(O,r,r')$-centered convex body $K$, specified by a weak membership oracle, in time polynomial in $n$, $\log\frac{1}{\varepsilon}$, and $\log\frac{r'}{r}$ we can do the following:
\begin{enumerate}
\item[$(i)$] Construct a weak membership oracle for $K^*$.
\item[$(ii)$] Given a point $x \in K$ such that $\ray(x) \geq \varepsilon$, construct a weak membership oracle for $M^{\lambda}_K(x)$ for any constant $\lambda > 0$.
\item[$(iii)$] Given a hyperplane $h$ intersecting $K$ which induces a cap $C$ of width at least $\varepsilon$, construct a weak membership oracle for $C$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assertion~(i) follows directly from standard reductions (see Theorem~{4.3.2} and Lemma~{4.4.1} from Gr{\"o}tschel, Lov{\'a}sz, and Schrijver~\cite{GLS88}).
Note that $K^*$ is $\big(O,\frac{1}{r'},\frac{1}{r}\big)$-centered. To prove~(ii), note that we can construct a membership oracle for $M(x)$ by using the fact that $y \in M(x)$ if and only if $y \in K$ and $2 x - y \in K$. If $\ray(x) \ge \varepsilon$, it is straightforward to show that $M(x)$ is $(x, \Omega(\varepsilon r), r')$-centered. The generalization of this construction to $M^{\lambda}_K(x)$ for any constant $\lambda > 0$ is immediate. Finally, to prove~(iii), observe that the membership oracle is easy to obtain, but centering is the issue. We first determine the apex $a$ of $C$ (approximately) by finding the supporting hyperplane of $K$ that is parallel to $h$. We let $b$ denote the midpoint of the portion of the segment $Oa$ lying between the base of the cap and $a$. It is easy to show that a Euclidean ball of radius $\Omega(\varepsilon r)$ centered at $b$ is contained within $C$. Thus $C$ is $(b,\Omega(\varepsilon r), 2 r')$-centered.
\end{proof}
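The reduction in part~(ii) is simple enough to sketch in code: membership in $M(x)$ requires only two queries to the oracle for $K$, and a general scale $\lambda$ is handled by first mapping $y$ to $x + (y-x)/\lambda$. For illustration we take $K$ to be the Euclidean unit ball (an assumption made purely so the oracle for $K$ is one line); the function names are ours.

```python
# Membership oracle for a Macbeath region M_K^lam(x), built from a
# membership oracle for K: y is in M(x) iff y is in K and 2x - y is in K.

def in_K(p):
    """Membership oracle for K = Euclidean unit ball (illustrative choice)."""
    return sum(t * t for t in p) <= 1.0

def in_macbeath(x, y, lam=1.0):
    """Membership oracle for M_K^lam(x), via two queries to in_K."""
    z = tuple(xi + (yi - xi) / lam for xi, yi in zip(x, y))  # undo the scaling
    refl = tuple(2.0 * xi - zi for xi, zi in zip(x, z))      # reflect z about x
    return in_K(z) and in_K(refl)
```

For example, with $x = (0.5, 0)$ the unscaled region $M(x)$ is the set of points $z$ with both $z$ and its reflection $2x - z$ inside the ball, and shrinking with $\lambda = 1/2$ halves the region about $x$.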
We will make use of standard sampling results (see, e.g., \cite{DFK91,Vem05}), which state that given $\eta > 0$, there exists an algorithm that outputs an $\eta$-uniform sample $X \in K$ using at most $\poly\big(n, \ln \frac{1}{\eta}, \ln \frac{r'}{r}\big)$ calls to a membership oracle for $K$ and arithmetic operations. (A random point $X \in K$ is \emph{$\eta$-uniform} if the total variation distance between $X$ and a uniformly distributed point in $K$ is at most $\eta$.) As with membership oracles, it will simplify the presentation to state our constructions in terms of a true uniform sampler, but the generalization is straightforward.
\begin{lemma} \label{lem:sample-cover}
Given $0 < \varepsilon \leq 1$, constant $c \geq 2$, and an oracle for a convex body $K \subseteq \mathbb{R}^n$ which is both well-centered and $(O,r,r')$-centered, there exists a randomized $(c,\varepsilon)$-enumerator for $K$, which generates a covering of size
\[
2^{O(n)} \cdot \frac{1}{\varepsilon^{(n-1)/2}} \cdot \log\frac{1}{\varepsilon},
\]
such that the cover elements are $(a, O(\varepsilon r), r')$-centered. The enumerator succeeds with probability $1 - 2^{-O(n)}$, and its overhead and space complexity are both polynomial in $n$, $\log\frac{r'}{r}$ and $\log\frac{1}{\varepsilon}$.
\end{lemma}
In our construction, the elements of the covering will be centrally symmetric, and more specifically, the covering element centered at a point $a \in K$ will be a Macbeath region of the form $M_{K_{\varepsilon}}^{1/c'}(a)$, where $c' = O(c)$.
\begin{proof}
Recall the layered decomposition of $K$ described just before Lemma~\ref{lem:cover-worst}. For $0 \le i \le k_0$, layer $i$ consists of points $x \in K$ such that $\width(x) \in [2^{i-1},2^i)\varepsilon$, and layer $k_0+1$ consists of the remaining points of $K$. Note that for points in layer $k_0+1$, $\width(x) \ge \beta$. Here $\beta$ is a constant, and the number of layers is $k_0+2 = O(\log\frac{1}{\varepsilon})$. Let $\Lambda_i$ denote the points in layer $i$. Our enumerator runs in phases, where the $i$-th phase generates the elements of a $\Lambda_i$-limited $c$-covering with respect to $K_{\varepsilon}$. Clearly, the elements generated in all the phases together constitute a $(c,\varepsilon)$-covering for $K$.
For $0 \le i \le k_0$, to describe phase $i$ of the enumerator, it will simplify notation to write $K, \Lambda, \varepsilon$, and $c$ for $K_{\varepsilon}, \Lambda_i, 2^{i-1}\varepsilon$, and $4c$, respectively. Our (new) objective is to generate a $\Lambda$-limited $(c/4)$-covering in this phase. Let $X$ be a $(K,\Lambda,c)$-MNet, let $\mathscr{M} = \mathscr{M}_{K}^{1/c}(X)$ be the associated covering, and let $X'$ be a hitting set for $\mathscr{M}$. By Lemma~\ref{lem:hitting-set}, $\mathscr{M}_K^{4/c}(X')$ is a $\Lambda$-limited $(c/4)$-covering.
We show how to generate the hitting set $X'$ for $\mathscr{M}$ along with the elements of $\mathscr{M}_K^{4/c}(X')$ in the desired form. In addition to the quantities $K, \Lambda, \varepsilon, c, X$ defined above, define also the quantities $\Lambda',Y,t,t'$, as in Lemma~\ref{lem:layer}. By Lemma~\ref{lem:layer}(a), the regions of $\mathscr{M}$ are contained in $\Lambda_K(\varepsilon) = K \setminus (1-4\varepsilon) K$. Recall the distinction between ``large'' and ``small'' Macbeath regions of $\mathscr{M}$, based on whether its relative volume is at least $t$. We will use a different strategy for hitting these two kinds of regions.
First, let us consider the large Macbeath regions. We claim that it suffices to choose $(2^{O(n)} / \varepsilon^{(n-1)/2}) \cdot \log\frac{1}{\varepsilon}$ points uniformly in $\Lambda_K(\varepsilon)$ to hit all the large Macbeath regions with high probability. Before proving this, note that we can sample $\Lambda_K(\varepsilon)$ uniformly by first choosing a point $p$ from the uniform distribution in $K$ and then choosing a point uniformly from the portion of the ray $Op \cap \Lambda_K(\varepsilon)$. Using binary search, we can find such a point with constant probability in $O(\log\frac{r'}{r} + \log\frac{1}{\varepsilon})$ steps. We omit the straightforward details.
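The two-step ray trick can be sketched concretely. The sketch below takes $K$ to be the Euclidean unit ball (an assumption; for general $K$ the second step instead binary-searches along the ray using the membership oracle), and the radial draw is uniform along the ray rather than exactly uniform in volume, which for thin shells agrees with the volume measure up to constant factors.

```python
import random

def sample_shell(n, eps):
    """Two-step ray sampler for the shell Lambda_K(eps) between
    (1 - 4*eps)K and K, with K the Euclidean unit ball: draw p uniformly
    in K by rejection, then draw a point uniformly from the segment of
    the ray O p that crosses the shell."""
    while True:
        p = [random.uniform(-1.0, 1.0) for _ in range(n)]
        r2 = sum(t * t for t in p)
        if 0.0 < r2 <= 1.0:          # accept p uniform in the ball
            break
    norm = r2 ** 0.5
    u = [t / norm for t in p]        # point where the ray O p exits K
    t = random.uniform(1.0 - 4.0 * eps, 1.0)
    return [t * ui for ui in u]      # uniform along the segment in the shell
```

Every returned point has norm in $[1 - 4\varepsilon, 1]$, i.e., it lies in the shell, and the rejection step succeeds with constant probability per trial in fixed dimension.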
To prove the claim, let $M$ be a large Macbeath region. By Lemma~\ref{lem:layer}(a) and (b), $M \subseteq \Lambda_K(\varepsilon)$, $\vol_K(M) \ge \varepsilon^{(n+1)/2}$, and $\vol_K(\Lambda_K(\varepsilon)) = O(n\varepsilon)$. Thus $\vol(M) / \vol(\Lambda_K(\varepsilon)) \geq 2^{-O(n)} \varepsilon^{(n-1)/2}$. Also, by Lemma~\ref{lem:layer}(b), the number of large Macbeath regions is at most $2^{O(n)}/\varepsilon^{(n-1)/2}$. A standard calculation implies that the probability of failing to hit some large Macbeath region in a layer is no more than $\varepsilon^{O(n)}$.
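The standard calculation can be spelled out (a sketch; we write $2^{-an}$ for the constant hidden in the per-trial hit probability $q \geq 2^{-an} \varepsilon^{(n-1)/2}$ and take $m = 2^{a'n} \varepsilon^{-(n-1)/2} \ln\frac{1}{\varepsilon}$ samples with $a' > a$):
\[
  (1-q)^m
    ~ \leq ~ e^{-qm}
    ~ \leq ~ \exp\Big(-2^{(a'-a)n} \ln \tfrac{1}{\varepsilon}\Big)
    ~ = ~ \varepsilon^{2^{(a'-a)n}},
\]
and a union bound over the at most $2^{O(n)}/\varepsilon^{(n-1)/2}$ large Macbeath regions leaves a failure probability of at most $\varepsilon^{O(n)}$ for a suitable choice of $a'$.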
Next we show how to generate a hitting set for the small Macbeath regions. Intuitively, as these are small, they cannot be stabbed efficiently by uniform sampling in $\Lambda_K(\varepsilon)$. Instead, we will hit them by exploiting the relationship between the small Macbeath regions of $\mathscr{M}$ and the large Macbeath regions of $\mathscr{M}' = \mathscr{M}_{K^*}^{1/5}(Y)$. Recall that $Y$ is a $(K^*,\Lambda',5)$-MNet, where $\Lambda'$ is the boundary of $(1-\varepsilon)K^*$, and the large Macbeath regions of $\mathscr{M}'$ have volume at least $t' = 2^{-O(n)} \varepsilon^{(n+1)/2}$. Our high-level idea for hitting the small Macbeath regions of $\mathscr{M}$ is to hit the large Macbeath regions of $\mathscr{M}'$ and then uniformly sample the associated $\varepsilon$-representative cap of $K$.
More precisely, we perform $(2^{O(n)} / \varepsilon^{(n-1)/2}) \cdot \log(1/\varepsilon)$ iterations of the following procedure. First, we choose a point $p$ uniformly in $\Lambda_{K^*}(\varepsilon) = K^* \setminus (1-2\varepsilon) K^*$. (This can be done in a manner analogous to uniformly sampling $\Lambda_K(\varepsilon)$, which we described above.) Next, we sample uniformly in the cap $C_p^{32}$, where $C_p$ is $p$'s $\varepsilon$-representative cap in $K$. We claim that this procedure stabs all the small Macbeath regions of $\mathscr{M}$ with high probability.
To see why, recall from Lemma~\ref{lem:layer}(e) that for any small Macbeath region $M \in \mathscr{M}$, there is a large Macbeath region $M' \in \mathscr{M}'$ with the following properties. Let $y$ be any point in $M'$ and let $C_y$ be $y$'s $\varepsilon$-representative cap in $K$. Then $M \subseteq C_y^{32}$ and $\vol(M) \ge 2^{-O(n)} \vol(C_y^{32})$. Also, by properties (c) and (d) of Lemma~\ref{lem:layer}, we have $M' \subseteq \Lambda_{K^*}(\varepsilon)$, $\vol_{K^*}(M') \ge 2^{-O(n)} \varepsilon^{(n+1)/2}$, and $\vol_{K^*}(\Lambda_{K^*}(\varepsilon)) = O(n\varepsilon)$. It follows that the probability of hitting a fixed small Macbeath region $M$ of $\mathscr{M}$ in any one trial (\textit{i.e.}, sampling $p$ uniformly in $\Lambda_{K^*}(\varepsilon)$, followed by sampling a point uniformly in the cap $C_p^{32}$) is at least $2^{-O(n)} \varepsilon^{(n-1)/2}$. Also, by Lemma~\ref{lem:layer}(f), the number of small Macbeath regions of $\mathscr{M}$ is at most $2^{O(n)}/\varepsilon^{(n-1)/2}$. The same calculation as for large Macbeath regions implies that the probability of failing to hit some small Macbeath region of $\mathscr{M}$ is no more than $\varepsilon^{O(n)}$.
Putting it together, it follows that we can hit the Macbeath regions in all the layers $i$, $0 \leq i \leq k_0$, with failure probability bounded by $2^{-O(n)}$.
Finally, we describe phase $k_0+1$ of the enumerator. Recall that $\Lambda_{k_0+1}$ consists of points such that the associated minimum volume cap has width at least $\beta$, where $\beta$ is a constant. Let $X$ be a $(K_{\varepsilon},\Lambda_{k_0+1},4c)$-MNet and let $\mathscr{M} = \mathscr{M}_{K_{\varepsilon}}^{1/4c}(X)$ be the associated covering. By Lemma~\ref{lem:wide-cap}, the Macbeath regions of $\mathscr{M}$ have relative volume at least $2^{-O(n)}$. Thus, we can hit all the Macbeath regions of $\mathscr{M}$ with $2^{O(n)}$ uniformly sampled points in $K$ with failure probability no more than $2^{-O(n)}$.
In closing, we mention that Lemma~\ref{lem:oracle} shows that the enumerator can construct the three membership oracles it needs for its operation. Specifically, for each point in the hitting set, by part (ii), we can construct an oracle for the associated Macbeath region. By part (i), we can construct an oracle for $K^*$, which we need to sample uniformly in $K^*$, and by part (iii), we can construct oracles for the caps of $K$ which need to be sampled uniformly. This completes the proof.
\end{proof}
Our algorithm and its analysis follow the general structure presented by Eisenbrand \textit{et al.}~\cite{EHN11} and Nasz{\'o}di and Venzin~\cite{NaV22}. We solve the $(1+\varepsilon)$-CVP in the norm $\|\cdot\|_K$ by reducing it to the $(1+\varepsilon)$-gap CVP problem in this norm. In the $(1+\varepsilon)$-gap CVP problem, given a target $t$ and a number $\gamma > 0$, we have to either find a lattice vector whose distance to $t$ is at most $\gamma$ or assert that all lattice vectors have distance more than $\gamma / (1+\varepsilon)$. We solve the $(1+\varepsilon)$-CVP problem via binary search on the distance from the target. Given the problem parameters $n$, $\varepsilon$, $\rho = \frac{r'}{r}$, and letting $b$ denote the number of bits in the numerical inputs, the number of different distance values that need to be tested can be shown to be $O(\log n + \log\frac{1}{\varepsilon} + \log \rho + \log b)$. Let $\Phi(n, \varepsilon, \rho, b)$ denote this quantity. For each distance, we need to solve the $(1+\varepsilon)$-gap CVP problem. In turn, the $(1+\varepsilon)$-gap CVP problem is solved by invoking the $(c,\varepsilon)$-enumerator. For each of the $N$ bodies generated by the enumerator, we need to call a 2-gap CVP solver. For this purpose, we use Dadush and Kun's deterministic algorithm~\cite{DaK16} as the 2-gap CVP solver. As this 2-gap CVP solver always yields the correct answer, the only source of error in our algorithm arises from the fact that a valid covering may not be generated. The failure rate of our $(c,\varepsilon)$-enumerator is $2^{-O(n)}$, which we reduce further by running it $\log \Phi(n, \varepsilon, \rho, b)$ times. This ensures that all the coverings generated over the course of solving the $(1+\varepsilon)$-CVP problem are correct with probability at least $1-2^{-O(n)}$. 
Recalling that the algorithm by Dadush and Kun takes $2^{O(n)}$ time and $O(2^n)$ space, we have established Theorem~\ref{thm:cvp} (neglecting polynomial factors in the input size).
\subsection{Approximate Integer Programming} \label{s:apps-ip}
Through a reduction by Dadush, our CVP result also implies a new algorithm for approximate integer programming (IP). We are given a convex body $K \subseteq \mathbb{R}^n$ and an $n$-dimensional lattice $L \subset \mathbb{R}^n$, and we are to determine either that $K \cap L = \emptyset$ or return a point $y \in K \cap L$. The best algorithm known for this problem takes $n^{O(n)}$ time~\cite{Kan87}, which has sparked interest in the approximate version. In approximate integer programming, the algorithm must return a lattice point in $(1+\varepsilon)K$ (where the $(1+\varepsilon)$-expansion of $K$ is about the centroid), or assert that there are no lattice points in $K$.
Dadush~\cite{Dad14} has shown that approximate IP can be reduced to the $(1+\varepsilon)$-CVP problem under a well-centered norm. His method is to first find an approximate centroid $p$ and then make one call to a $(1+\varepsilon)$-CVP solver for the norm induced by $K-p$. By plugging in our solver, we obtain an immediate improvement with respect to the $\varepsilon$-dependencies (neglecting polynomial factors in the input size).
\begin{theorem} \label{thm:approx-ip}
There exists a $2^{O(n)}/\varepsilon^{(n-1)/2}$-time and $O(2^{n})$-space randomized algorithm which solves the approximate integer programming problem with probability at least $1 - 2^{-n}$.
\end{theorem}
\section{Conclusions} \label{s:conc}
In this paper we have demonstrated the existence of concise coverings for convex bodies. In particular, we have shown that given a real parameter $0 < \varepsilon \leq 1$ and constant $c \geq 2$, any well-centered convex body $K$ in $\mathbb{R}^n$ has a $(c,\varepsilon)$-covering for $K$ consisting of at most $2^{O(n)} / \varepsilon^{(n-1)/2}$ centrally symmetric convex bodies. This bound is optimal with respect to $\varepsilon$-dependencies. Furthermore, we have shown that the size of the covering is instance-optimal up to factors of $2^{O(n)}$. Coverings are useful structures. One consequence of our improved coverings is a new (and arguably simpler) construction of $\varepsilon$-approximating polytopes in the Banach-Mazur metric. We have also demonstrated improved approximation algorithms for the closest-vector problem in general norms and integer programming.
In contrast to earlier approaches, our covering elements are based on scaled Macbeath regions for the body $K$. This raises the question of what is the best choice of covering elements. Eisenbrand \textit{et al.}~\cite{EHN11} showed that the size of any covering based on ellipsoids grows as $\Omega(n^{n/2})$, even when the domain being covered is a hypercube. Our Macbeath-based approach results in a reduction of the dimensional dependence to $2^{O(n)}$ for any convex body. Macbeath regions have many nice properties, including the fact that it is easy to construct membership oracles from a membership oracle for the original body. Unfortunately, Macbeath regions have drawbacks, including the fact that their boundary complexity can be as high as $K$'s boundary complexity.
It is natural to wonder whether we can do better than ellipsoids while still using uniform covering elements. For example, can we build more economical coverings based on affine transformations of some other fixed convex body? Recent results from the theory of volume ratios imply that this is not generally possible. The work of Galicer, Merzbacher, and Pinasco \cite{GMP21} (combined with polarity) implies that for any convex body $L$, there exists a convex body $K$, such that for any affine transformation $T$, if $T(L)$ is contained within $K$, then $\vol(T(L))$ is at most $\vol(K) / (b n)^{n/2}$, where $b$ is an absolute constant. A straightforward packing argument implies that if we restrict covering elements to affine images of a fixed convex body, the worst-case size of a $(c,\varepsilon)$-covering grows as $\Omega(n^{n/2})$ (independent of $\varepsilon$).
\pdfbookmark[1]{References}{s:ref}
\setlength{\bibitemsep}{1ex}
\DeclareFieldFormat[article, inproceedings, incollection]{title}{#1}
\printbibliography
\end{document}
\begin{document}
\RUNAUTHOR{Lyu et al.}
\RUNTITLE{Building Formulations for Piecewise Linear Relaxations of Nonlinear Functions}
\TITLE{Building Formulations for Piecewise Linear Relaxations of Nonlinear Functions}
\ARTICLEAUTHORS{
\AUTHOR{Bochuan Lyu}
\AFF{Department of Computational Applied Mathematics and Operations Research, Rice University\\
Houston, TX, 77005, \EMAIL{[email protected]}}
\AUTHOR{Illya V. Hicks}
\AFF{Department of Computational Applied Mathematics and Operations Research, Rice University\\
Houston, TX, 77005, \EMAIL{[email protected]}}
\AUTHOR{Joey Huchette}
\AFF{Google Research, Cambridge, MA, 02142, \EMAIL{[email protected]}}
}
\ABSTRACT{We study mixed-integer programming formulations for the piecewise linear lower and upper bounds (in other words, piecewise linear relaxations) of nonlinear functions that can be modeled by a new class of combinatorial disjunctive constraints (CDCs), generalized $n$D-ordered CDCs. We first introduce a general formulation technique to model piecewise linear lower and upper bounds of univariate nonlinear functions concurrently so that it uses fewer binary variables than modeling the bounds separately. Next, we propose logarithmically sized ideal non-extended formulations to model the piecewise linear relaxations of univariate and higher-dimensional nonlinear functions under the CDC and independent branching frameworks. We also perform computational experiments for the approaches modeling the piecewise linear relaxations of univariate nonlinear functions and show significant speed-ups of our proposed formulations. Furthermore, we demonstrate that piecewise linear relaxations can provide strong dual bounds of the original problems with an order of magnitude less computational time.
}
\KEYWORDS{Mixed-integer programming, Piecewise linear relaxations, Combinatorial disjunctive constraints}
\maketitle
\section{Introduction}
Many optimization problems in chemical engineering~\citep{codas2012mixed,codas2012integrated,silva2014computational}, robotics~\citep{dai2019global,deits2014footstep} and marketing~\citep{bertsimas2017robust,camm2006conjoint,wang2009branch} contain nonlinear functions of the form $f: D \rightarrow \mathbb{R}$, where the domain $D \subseteq \mathbb{R}^n$ is bounded and can be partitioned into polyhedral pieces. One of the natural approaches to outer-approximate or relax the nonlinear function $f$ is to use (continuous) piecewise linear functions to create lower and upper bounds $\underl{f}: D \rightarrow \mathbb{R}$ and $\bar{f}: D \rightarrow \mathbb{R}$ such that $\underl{f}(x) \leq f(x) \leq \bar{f}(x)$ for any $x \in D$, as it can lead to optimization problems that are easier to solve computationally than the original problems and provide valid dual bounds~\citep{bergamini2005logic,bergamini2008improved,geissler2012using,misener2012global,misener2011apogee}. Next, the domain of $\underl{f}$ is partitioned into a finite family of polytopes (i.e., bounded polyhedra) $\{\underl{C}^i\}_{i=1}^{\underl{d}}$. Within each polytope, there exists a function $\underl{f}^i: \underl{C}^i \rightarrow \mathbb{R}$ such that $\underl{f}(x) = \underl{f}^i(x)$ for $x \in \underl{C}^i$. Similarly, the domain of $\bar{f}$ can be partitioned into $\{\bar{C}^i\}_{i=1}^{\bar{d}}$ with functions $\bar{f}(x) = \bar{f}^i(x)$ for $x \in \bar{C}^i$. Each $\underl{f}^i(x)$ or $\bar{f}^i(x)$ is an affine function over $\underl{C}^i$ or $\bar{C}^i$. Then, we call $\underl{f}(x) \leq f(x) \leq \bar{f}(x)$ a \textit{piecewise linear relaxation} of $f(x)$, where $\underl{f}(x)$ is a \textit{piecewise linear lower bound} and $\bar{f}(x)$ is a \textit{piecewise linear upper bound}.
If $D$ is a polyhedron and $\{(x, y): x \in D, \underl{f}(x) \leq y = f(x) \leq \bar{f}(x)\}$ is convex, i.e., $\underl{f}(x)$ is convex and $\bar{f}(x)$ is concave for $x \in D$, then the optimization with the constraint $\underl{f}(x) \leq f(x) \leq \bar{f}(x)$ could be formulated as a linear programming (LP) problem. However, optimization problems involving piecewise linear functions are NP-hard in general~\citep{keha2006branch}. To solve piecewise linear optimization problems, many specialized algorithms have been designed: \citet{beale1970special} introduced a concept of ordered sets for nonconvex functions and exploited a branch-and-bound algorithm; \citet{keha2006branch} studied a branch-and-cut algorithm for solving LPs with continuous separable piecewise-linear cost functions without introducing binary variables; \citet{de2008special} proposed a special ordered set approach for optimizing a discontinuous separable piecewise linear function; and \citet{de2013branch} worked on a branch-and-cut algorithm for piecewise linear optimization problems with semi-continuous constraints.
Another popular approach for optimization problems involving piecewise linear functions is to formulate those functions as mixed-integer linear programming (MILP) constraints with auxiliary integer decision variables, which has been a very active research area for decades~\citep{croxton2003comparison,d2010piecewise,huchette2022nonconvex,jeroslow1984modelling,jeroslow1985experimental,keha2004models,padberg2000approximating,vielma2010mixed,vielma2011modeling}. In particular, \citet{vielma2010mixed} summarized those formulations and provided a unifying framework for piecewise linear functions in optimization.\footnote{Those formulations in the literature and our proposed formulations can also be applied to other mixed-integer programming formulations, but we focus on MILP formulations in this work.} In more recent work, \citet{huchette2022nonconvex} worked on computationally more efficient formulations for univariate and bivariate piecewise linear functions and compared computational performances among different formulations. We will review some logarithmically sized ideal formulations of univariate piecewise linear functions in Section~\ref{sec:pwr_pre_brief}. We say that a MILP formulation is \textit{ideal} if each extreme point of its linear programming (LP) relaxation also satisfies the integrality conditions in the MILP formulation.
If the domain $D$ can be represented by a union of polyhedral pieces $\{C^i\}_{i=1}^{d}$ such that each $\{(x, y) \in C^{i} \times \mathbb{R}: \underl{f}(x) \leq y \leq \bar{f}(x) \}$ is also a polytope for $i \in \llbracket d \rrbracket$, where $\llbracket d \rrbracket := \{1, \hdots, d\}$, then the piecewise linear relaxation can be reformulated as a combinatorial disjunctive constraint (CDC)~\citep{huchette2019combinatorial} formally defined in Section~\ref{sec:pwr_cdc}. The idea of modeling piecewise linear relaxations for bilinear terms has been studied in recent works~\citep{misener2012global,castro2015tightening,castro2016normalized,castillo2018global} to provide valid dual bounds of nonconvex quadratic problems. \citet{sundar2021piecewise} also studied the MILP formulation of the piecewise linear relaxations of multilinear terms. We will discuss how to use CDC to reformulate piecewise linear relaxation in Section~\ref{sec:pwr_cdc}. Then, we will use the independent branching scheme introduced by~\citet{vielma2011modeling} to obtain new logarithmically sized
ideal MILP formulations of the piecewise linear relaxation, $\underl{f}(x) \leq f(x) \leq \bar{f}(x)$.
Consider the relaxation of the nonlinear function $f(x)$ depicted in Figure~\ref{fig:pwl}. The relaxation can be viewed as the union of 8 triangular sets; standard lower bounds indicate that this can be modeled using $\lceil \log_2(8) \rceil = 3$ binary variables according to Proposition 1~\citep{huchette2019combinatorial}. However, separately formulating the upper and lower bounds will require at least $2 \lceil \log_2(9) \rceil = 8$ binary variables. In Section~\ref{sec:mupwr}, we will show that, by jointly formulating the upper and lower bounding functions, we can produce an ideal MILP formulation with $\lceil \log_2(16) \rceil = 4$ binary variables. Then, in Section~\ref{sec:univariate_pwr}, by constructing MILP formulations directly on the disjunctive representation of the relaxation, we produce MILP formulations that attain the lower bound with only 3 binary variables. In Section~\ref{sec:pwr_computational}, we will show that the MILP formulations with fewer binary variables, all else being equal, tend to perform better computationally.
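The binary-variable counts in this example are simple $\lceil \log_2(\cdot) \rceil$ arithmetic, which a few lines of illustrative Python confirm (the variable names are ours):

```python
import math

def ceil_log2(d):
    # Binary variables needed to select among d alternatives.
    return math.ceil(math.log2(d))

# Counts for the Figure 1 example (8 triangles in the relaxation):
direct = ceil_log2(8)        # disjunctive lower bound
separate = 2 * ceil_log2(9)  # lower and upper bounds modeled separately
joint = ceil_log2(16)        # one SOS2 over the merged breakpoints

print(direct, separate, joint)  # 3 8 4
```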
\begin{figure}
\caption{A piecewise linear relaxation of a univariate nonlinear function.}
\label{fig:pwl}
\end{figure}
\noindent \textbf{Our contributions}
\begin{enumerate}
\item In Section~\ref{sec:mupwr}, we develop a framework using one set of binary variables or one $\operatorname{SOS} 2$ constraint to model multiple piecewise linear functions if they share the same domain and input variable. We show that using one set of binary variables to model multiple univariate piecewise linear functions at the same time could yield up to \textbf{6x speed-ups} compared with modeling each piecewise linear function separately in our experiments.
\item In Section~\ref{sec:univariate_pwr}, we obtain computationally more efficient formulations of univariate piecewise linear relaxations via the combinatorial disjunctive constraint and the independent branching frameworks. We define a general class of CDCs for modeling univariate piecewise linear relaxations to be \textbf{generalized 1D-ordered CDCs}, which model the piecewise linear relaxations directly as unions of polytopes. Then, we present two families of \textbf{logarithmically sized ideal MILP formulations} (Gray code and biclique cover formulations) for generalized 1D-ordered CDCs.
\item In Section~\ref{sec:higher_pwr}, we generalize the class of generalized 1D-ordered CDCs to \textbf{generalized $n$D-ordered CDCs} for modeling the piecewise linear relaxations in higher dimensions and present a class of \textbf{logarithmically sized ideal MILP formulations} of generalized $n$D-ordered CDCs.
\item In Section~\ref{sec:pwr_computational}, we use a 2D inverse kinematics problem from robotics (a 2D version of~\citep{dai2019global}) and a stochastic share-of-choice problem in marketing~\citep{bertsimas2017robust,camm2006conjoint,wang2009branch} as instances to test the computational performance of univariate piecewise linear relaxation formulations. Our proposed methods achieve up to \textbf{2x speed-ups} on harder instances compared with other formulations modeling piecewise linear relaxations directly and up to \textbf{4x speed-ups} over the fastest existing formulations modeling piecewise linear lower and upper bounds simultaneously.\footnote{We only test the performance of formulations modeling piecewise linear lower and upper bounds simultaneously for the harder instances because modeling piecewise linear lower and upper bounds separately performs poorly for easy instances.} Furthermore, we show that piecewise linear relaxation problems could provide strong dual bounds within \textbf{1/100 of the solving time} of the original nonlinear optimization problems.
\end{enumerate}
We call a nonlinear function $f: D \rightarrow \mathbb{R}$ a \textit{univariate} nonlinear function if $D \subseteq \mathbb{R}$. The piecewise linear relaxation of $f$ is then called a \textit{univariate} piecewise linear relaxation of $f$. We also want to note that the generation procedure of biclique cover formulations of generalized 1D-ordered CDCs improves on the algorithms by~\citet{lyu2022modeling} and~\citet{lyu2023maximal}: no conflict graphs are needed, and there is no need to check whether the merged bicliques are subgraphs of conflict graphs within the generation procedure, which could reduce the computational time for building the formulations when the conflict graphs are large.
\section{Univariate Piecewise Linear Function Formulations and Special Ordered Sets of Type 2} \label{sec:pwr_pre_brief}
In this section, we will review some formulations for univariate piecewise linear functions, as well as important concepts related to those formulations: special ordered sets of type 2 and Gray codes. We refer readers to~\citet{vielma2010mixed} and~\citet{huchette2022nonconvex} for a comprehensive review of formulations modeling univariate piecewise linear functions. In Appendix~\ref{sec:pwr_pre}, we will also provide incremental (Inc), multiple choice (MC), convex combination (CC), logarithmic disaggregated convex combination (DLog)~\citep{vielma2010mixed}, logarithmic independent branching (LogIB)~\citep{huchette2019combinatorial}, logarithmic embedding (LogE)~\citep{vielma2018embedding}, binary zig-zag (ZZB), and general integer zig-zag (ZZI)~\citep{huchette2022nonconvex} formulations for our computational experiments in Section~\ref{sec:pwr_computational}.
One of the popular approaches to model univariate piecewise linear functions is through special ordered sets of type 2 (SOS 2), as defined in Definition~\ref{def:sos2}. We denote $\Delta^N := \{\lambda \in \mathbb{R}^N_{\geq 0}: \sum_{i=1}^N \lambda_i = 1\}$ for a positive integer $N$. Also, note that $\mathbb{R}^N_{\geq 0} := \{x \in \mathbb{R}^N: x \geq 0\}$, $\llbracket N\rrbracket := \{1,2,\hdots, N\}$, and $\llbracket N_1, N_2 \rrbracket := \{N_1, \hdots, N_2\}$.
\begin{definition}[special ordered sets of type 2] \label{def:sos2}
A \textit{special ordered set of type 2 (SOS 2)} constraint for $\lambda \in \mathbb{R}^N$ can be expressed as
\begin{align}
\lambda \in \operatorname{SOS} 2(N) &:= \bigcup_{i=1}^{N-1} \left\{\lambda \in \Delta^{N}: \lambda_j = 0, \forall j \in \llbracket N\rrbracket \setminus \{i, i+1\} \right\}.
\end{align}
\end{definition}
Let $f(x)$ be a univariate piecewise linear function with $N$ breakpoints $L= \hat{x}_1 < \hat{x}_2 < \hdots < \hat{x}_N = U \in \mathbb{R}$, and write $\hat{y}_i = f(\hat{x}_i)$ for simplicity. Then, $\{(x, y): y=f(x), x \in [L, U]\}$ can be modeled by a special ordered set of type 2, $\operatorname{SOS} 2(N)$:
\begin{subequations} \label{eq:pwl_by_sos2}
\begin{alignat}{2}
& y = \sum_{v=1}^N \lambda_v \hat{y}_v, \qquad & x = \sum_{v=1}^N \lambda_v \hat{x}_v\\
& \lambda \in \operatorname{SOS} 2(N), & x, y \in \mathbb{R}.
\end{alignat}
\end{subequations}
Although \eqref{eq:pwl_by_sos2} is not a mixed-integer linear programming formulation because of $\lambda \in \operatorname{SOS} 2(N)$, there are several existing techniques to model the $\operatorname{SOS} 2$ constraints in MILP formulations, including logarithmic independent branching (LogIB)~\citep{huchette2019combinatorial}, logarithmic embedding (LogE)~\citep{vielma2018embedding}, binary zig-zag (ZZB), and general integer zig-zag (ZZI)~\citep{huchette2022nonconvex} formulations.
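To make \eqref{eq:pwl_by_sos2} concrete, the following sketch (the helper names are ours, and sorted breakpoints are assumed) builds a feasible $\lambda \in \operatorname{SOS} 2(N)$ for a given $x$ and recovers the function value as $\sum_v \lambda_v \hat{y}_v$:

```python
def sos2_lambda(x, breakpoints):
    """A feasible lambda for SOS2(N): at most two nonzeros, in adjacent
    positions, interpolating x between consecutive breakpoints."""
    N = len(breakpoints)
    lam = [0.0] * N
    for i in range(N - 1):
        lo, hi = breakpoints[i], breakpoints[i + 1]
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)
            lam[i], lam[i + 1] = 1.0 - t, t
            return lam
    raise ValueError("x outside [L, U]")

def pwl_value(x, xhat, yhat):
    # y = sum_v lambda_v * yhat_v, as in the formulation above.
    lam = sos2_lambda(x, xhat)
    return sum(l * y for l, y in zip(lam, yhat))
```

For example, with breakpoints $(0,1,2,3)$ and values of $f(x)=x^2$ at those points, `pwl_value(1.5, ...)` returns the chord value $2.5$ rather than $1.5^2 = 2.25$.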
Three of these formulations (LogIB, LogE, and ZZB) require only $\lceil \log_2(d) \rceil$ binary variables to formulate $\operatorname{SOS} 2(d+1)$, while the fourth (ZZI) requires $\lceil \log_2(d) \rceil$ general integer variables~\citep{huchette2019combinatorial,huchette2022nonconvex,vielma2018embedding}. All of these formulations are based on a Gray code, a sequence of distinct binary vectors encoding a sequence of numbers in which each consecutive pair of vectors differs in only one entry. A binary reflected Gray code is a simple and concrete example of a Gray code in which the length of the binary vectors is only logarithmic in the number of encoded values.
\begin{definition}[Gray codes] \label{def:gc}
A \textit{Gray code} for $d$ numbers is a sequence of distinct binary vectors $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ where $h^i \neq h^j$ for any $i \neq j$ and each adjacent pair $h^i$ and $h^{i+1}$ differs in exactly one entry.
\end{definition}
\begin{definition}[binary reflected Gray codes] \label{def:brgc}
A Gray code $\{h^i\}_{i=1}^{2^b} \subseteq \{0, 1\}^{b}$ is a \textit{binary reflected Gray code} (BRGC) if it satisfies the following properties:
\begin{enumerate}
\item $h^1 = (0)$ and $h^2 = (1)$ if $b = 1$.
\item Let $\{g^i\}_{i=1}^{2^{b-1}} \subseteq \{0, 1\}^{b-1}$ be a binary reflected Gray code. Then, $h^i = (0, g^i)$ for $i = 1, \hdots, 2^{b - 1}$ and $h^i = (1, g^{2^b - i + 1})$ for $i = 2^{b-1}+1, \hdots, 2^b$; that is, the second half prepends a $1$ to the first half taken in reversed order.
\end{enumerate}
\end{definition}
Note that $(\cdot, \cdot)$ is a concatenation operator.
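A minimal sketch of Definition~\ref{def:brgc} (the function name is ours): prepending a $1$ to the reversed first half is exactly what makes consecutive vectors differ in one entry.

```python
def brgc(b):
    """Binary reflected Gray code on b bits, returned as a list of tuples."""
    if b == 1:
        return [(0,), (1,)]
    prev = brgc(b - 1)
    # First half: prepend 0; second half: prepend 1 to the reversed list.
    return [(0,) + g for g in prev] + [(1,) + g for g in reversed(prev)]

print(brgc(2))  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```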
\section{Multiple Univariate Piecewise Linear Functions With a Same Input Variable} \label{sec:mupwr}
In this section, we will introduce a modeling technique to use one $\operatorname{SOS} 2$ constraint for multiple piecewise linear constraints:
\begin{alignat}{2} \label{eq:multi_pwl}
& y^i = f^i(x), x \in [L, U], \qquad & \forall i \in \llbracket k \rrbracket.
\end{alignat}
It is not hard to see that to build a piecewise linear relaxation of $y = f(x)$ for some nonlinear function $f$ and $x \in [L, U]$, we can construct piecewise linear lower and upper bounds $\underl{y} = \underl{f}(x)$ and $\bar{y} = \bar{f}(x)$. This can be viewed as a special case of modeling multiple piecewise linear constraints with the same input variable $x$ (Proposition~\ref{prop:multi_pwl_by_sos2} with $k=2$).
\begin{proposition} \label{prop:multi_pwl_by_sos2}
Given $k$ piecewise linear functions $f^i: [L, U] \rightarrow \mathbb{R}$ and corresponding breakpoints: $L=\hat{x}^i_1 < \hdots < \hat{x}^i_{d_i+1}=U$ for $i \in \llbracket k \rrbracket$, then a valid formulation for $\{(x, y): x \in [L, U], y^i = f^i(x), i \in \llbracket k \rrbracket\}$ is
\begin{subequations} \label{eq:multi_pwl_by_sos2}
\begin{alignat}{2}
& y^i = \sum_{v=1}^{d+1} \lambda_v f^i(\hat{x}_v), \qquad & \forall i \in \llbracket k \rrbracket \\
& x = \sum_{v=1}^{d+1} \lambda_v \hat{x}_v\\
& \lambda \in \operatorname{SOS} 2(d+1), & x \in \mathbb{R}, y \in \mathbb{R}^k, \label{eq:multi_pwl_by_sos2_c}
\end{alignat}
\end{subequations}
\noindent where $d = |\bigcup_{i \in \llbracket k \rrbracket} \{\hat{x}^i_j\}_{j=1}^{d_i+1}| - 1$ and $\{\hat{x}_v\}_{v=1}^{d+1} = \bigcup_{i \in \llbracket k \rrbracket} \{\hat{x}^i_j\}_{j=1}^{d_i+1}$ such that $\hat{x}_v < \hat{x}_{v+1}$ for $v \in \llbracket d \rrbracket$.
\end{proposition}
Note that $\lambda \in \operatorname{SOS} 2(d+1)$ in \eqref{eq:multi_pwl_by_sos2_c} can be modeled by any formulation of $\operatorname{SOS} 2$, such as LogIB, LogE, ZZB, or ZZI. In the same manner, by merging all the breakpoints, we can also construct merged formulations for other univariate piecewise linear functions. We will discuss how to improve the incremental formulation in Appendix~\ref{sec:mupwr_merged}.
By using~\eqref{eq:multi_pwl_by_sos2}, we can reduce the number of binary variables compared with modeling each piecewise linear constraint separately. For example, if we use LogE or ZZB formulation for $\operatorname{SOS} 2$ in~\eqref{eq:multi_pwl_by_sos2}, the formulation only needs $\lceil \log_2(d) \rceil$ instead of $\sum_{i=1}^k \lceil \log_2(d_i) \rceil$ binary variables.
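The saving can be checked on a small hypothetical example (the breakpoint grids below are our own, chosen so the two bounds share only the endpoints and the midpoint):

```python
import math

# Hypothetical breakpoint grids on [0, 1] for a lower and an upper bound:
lower_bp = [i / 8 for i in range(9)]    # d_1 = 8 segments
upper_bp = [i / 10 for i in range(11)]  # d_2 = 10 segments

# Merged grid: d = |union of all breakpoints| - 1.
merged = sorted(set(lower_bp) | set(upper_bp))
d = len(merged) - 1  # 17 merged breakpoints, so d = 16

# Binary variables: separate SOS2 per bound vs. one merged SOS2.
separate = math.ceil(math.log2(8)) + math.ceil(math.log2(10))  # 3 + 4 = 7
joint = math.ceil(math.log2(d))                                # 4
```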
We have made some improvements over modeling piecewise linear lower and upper bounds separately, but~\eqref{eq:multi_pwl_by_sos2} still introduces some potentially unnecessary binary or integer variables and some unnecessary nonconvexity into the model. For example, we need $\operatorname{SOS} 2(17)$ in~\eqref{eq:multi_pwl_by_sos2_c} for the piecewise linear relaxation shown in Figure~\ref{fig:pwl}. The LogE or ZZB formulation of $\operatorname{SOS} 2(17)$ requires $\lceil \log_2(16) \rceil = 4$ binary variables. However, if we view the piecewise linear relaxation in Figure~\ref{fig:pwl} as a union of polytopes (in this case triangles), we can see that there are only 8 triangles, so only 3 binary variables are needed. Thus, in the following sections, we will introduce combinatorial disjunctive constraints to model the piecewise linear relaxation directly.
\section{Combinatorial Disjunctive Constraints, Independent Branching, and Graph Theory Notations} \label{sec:pwr_cdc}
In this section, we will introduce combinatorial disjunctive constraints (CDCs) and a general framework, independent branching, to build MILP formulations of CDCs. The study of disjunctive constraints originates with~\citet{balas1975disjunctive,balas1979disjunctive,balas1998disjunctive}. A disjunctive constraint has the form
\begin{align} \label{eq:pwr_dc_poly}
x \in \bigcup_{i=1}^d P^i,
\end{align}
\noindent where each $P^i$ is a polyhedron. In particular, if each $P^i$ is also bounded, then $P^i$ can also be expressed as the convex combination of the finite set of its extreme points $V^i$ by the Minkowski-Weyl Theorem~\citep{minkowski1897allgemeine,weyl1934elementare}:
\begin{align} \label{eq:pwr_poly_convex}
P^i = \operatorname{conv}(V^i) := \left\{\sum_{v \in V^i} \lambda_v v: \sum_{v \in V^i} \lambda_v = 1, \lambda \geq 0 \right\}.
\end{align}
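As a small numeric illustration of \eqref{eq:pwr_poly_convex} (the triangle and weights below are our own example):

```python
# Extreme points of a triangle P = conv(V) and one convex combination of them.
V = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
lam = [0.2, 0.3, 0.5]

# lam lies in the simplex: nonnegative entries summing to one.
assert all(l >= 0 for l in lam) and abs(sum(lam) - 1.0) < 1e-12

# The point sum_v lam_v * v lies in P.
x = tuple(sum(l * v[k] for l, v in zip(lam, V)) for k in range(2))
```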
By keeping only the combinatorial structure of the disjunctive constraint, a more general approach is to model the continuous variables, say $\lambda$, on a collection of index sets $\mathbfcal{S} = \{S^i\}_{i=1}^d$, where each $S^i$ contains the indices of the extreme points of $P^i$. We formally define combinatorial disjunctive constraints in Definition~\ref{def:cdc_s}.
\begin{definition}[combinatorial disjunctive constraints] \label{def:cdc_s}
A \textit{combinatorial disjunctive constraint} (CDC) represented by the collection of index sets $\mathbfcal{S}$ is
\begin{align} \label{eq:pwr_cdc}
\lambda \in \operatorname{CDC}(\mathbfcal{S}) := \bigcup_{S \in \mathbfcal{S}} Q(S),
\end{align}
\noindent where $Q(S) := \{\lambda \in \mathbb{R}^J: \sum_{v \in J} \lambda_v = 1, \lambda_{J \setminus S} = 0, \lambda \geq 0\}$ and $J = \cup_{S \in \mathbfcal{S}} S$.
\end{definition}
We say a MILP formulation for $\operatorname{CDC}(\mathbfcal{S})$ is \textit{non-extended} if it does not require auxiliary continuous variables other than $\lambda$. An alternative way to represent~\eqref{eq:pwr_cdc} is the \textit{independent branching} (IB) scheme framework introduced by \citet{vielma2011modeling} and generalized by \citet{huchette2019combinatorial}. In this framework, we rewrite \eqref{eq:pwr_cdc} as the intersection of $t$ constraints, each the union of $k$ alternatives:
\begin{align} \label{eq:pwr_k_IB}
\operatorname{CDC}(\mathbfcal{S}) = \bigcap_{j=1}^t \left( \bigcup_{i=1}^k Q(L^j_i) \right).
\end{align}
If $\operatorname{CDC}(\mathbfcal{S})$ can be rewritten as the intersection of $t$ constraints with 2 alternatives each, i.e., $\operatorname{CDC}(\mathbfcal{S}) = \bigcap_{j=1}^t \left( Q(L^j) \bigcup Q(R^j) \right)$, then we say the CDC is \textit{pairwise IB-representable}.
\begin{definition}[pairwise IB-representable]
A combinatorial disjunctive constraint $\operatorname{CDC}(\mathbfcal{S})$ is \textit{pairwise IB-representable} if it can be written as
\begin{align}
\operatorname{CDC}(\mathbfcal{S}) = \bigcap_{j=1}^t \left( Q(L^j) \bigcup Q(R^j) \right),
\end{align}
\noindent for some $L^j, R^j \subseteq J$. We denote that $\{\{L^j, R^j\}\}_{j=1}^t$ is a \textit{pairwise IB-scheme} for $\operatorname{CDC}(\mathbfcal{S})$.
\end{definition}
We note that not every CDC is pairwise IB-representable; a necessary and sufficient condition is given in Proposition~\ref{prop:pairwise_IB_at_most_two}. We also formally define feasible and infeasible sets in Definition~\ref{def:pwr_feasible_sets}.
\begin{definition}[feasible and infeasible sets] \label{def:pwr_feasible_sets} A set $S \subseteq J$ is a \textit{feasible set} with respect to $\operatorname{CDC}(\mathbfcal{S})$ if $S \subseteq T$ for some $T \in \mathbfcal{S}$. It is an \textit{infeasible set} otherwise. A \textit{minimal infeasible set} is an infeasible set $S \subseteq J$ such that any proper subset of $S$ is a feasible set.
\end{definition}
\begin{proposition}[Theorem 1~\citep{huchette2019combinatorial}\footnote{We only consider the case when $k=2$ and we use minimal infeasible set directly without defining a hypergraph as in the work~\citep{huchette2019combinatorial}.}] \label{prop:pairwise_IB_at_most_two}
A pairwise IB-scheme exists for $\operatorname{CDC}(\mathbfcal{S})$ if and only if each minimal infeasible set has cardinality at most 2.
\end{proposition}
\citet{huchette2019combinatorial} discovered that small and strong mixed-integer programming (MIP) formulations of pairwise IB-representable combinatorial disjunctive constraints can be built by solving minimum biclique cover problems on the conflict graphs of the CDCs, where biclique covers are defined in Definition~\ref{def:pwr_biclique_covers} and conflict graphs in Definition~\ref{def:pwr_conflict_graphs}.
Before we define biclique covers, we introduce some basic graph notation for the paper. A \textit{simple graph} is a pair $G := (V, E)$, where $V$ is a finite set of vertices and $E \subseteq \{uv: u, v\in V, u \neq v\}$. We use $V(G)$ and $E(G)$ to denote the vertex set and edge set of the graph $G$. A \textit{subgraph} $G' := (V', E')$ of $G$ is a graph where $V' \subseteq V$ and $E' \subseteq \{uv \in E: u, v \in V'\}$. The \textit{induced subgraph} of $G$ on a vertex set $A$ is denoted $G(A) = (A, E_A)$, where $E_A = \{uv \in E: u, v \in A\}$. A graph is a \textit{cycle} if its vertices and edges are $V = \{v_1, v_2, \hdots, v_n\}$ and $E = \{v_1v_2, v_2v_3, \hdots, v_{n-1}v_n, v_nv_1\}$. A graph is a \textit{path} if its vertices and edges are $V = \{v_1, v_2, \hdots, v_n\}$ and $E = \{v_1v_2, v_2v_3, \hdots, v_{n-1}v_n\}$. A graph $G := (V, E)$ is \textit{connected} if there exists a path between $u$ and $v$ for any $u, v \in V$. A graph is a \textit{tree} if it is connected and does not have any subgraph that is a cycle. A \textit{bipartite} graph $G = (L \cup R, E)$ is a graph where $L$ and $R$ are disjoint vertex sets and the edge set satisfies $E \subseteq L \times R$. We refer readers to~\citet{bondy2008graph} for further graph theory background and definitions.
\begin{definition}[biclique covers]~\label{def:pwr_biclique_covers}
A \textit{biclique} graph is a complete bipartite graph $(L \cup R, \{uv: u \in L, v \in R\})$, which is denoted as $\{L, R\}$. A \textit{biclique cover} of graph $G = (J, E)$ is a collection of biclique subgraphs of $G$ that covers the edge set $E$.
\end{definition}
\begin{definition}[conflict graphs]~\label{def:pwr_conflict_graphs}
A \textit{conflict graph} for a $\operatorname{CDC}(\mathbfcal{S})$ is denoted as $G^c_{\mathbfcal{S}} := (J, \bar{E})$ with $\bar{E} = \{uv: \{u, v\} \text{ is an infeasible set}, u, v \in J, u \neq v\}$.
\end{definition}
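The conflict graph of Definition~\ref{def:pwr_conflict_graphs} can be computed directly from $\mathbfcal{S}$, since its edges are exactly the infeasible sets of size 2. A minimal sketch (the function name is ours):

```python
from itertools import combinations

def conflict_graph_edges(S_list):
    """Edges of the conflict graph: pairs {u, v} of ground-set elements
    contained in no common S (i.e., infeasible sets of cardinality 2)."""
    J = sorted(set().union(*(set(S) for S in S_list)))
    return sorted(
        (u, v) for u, v in combinations(J, 2)
        if not any({u, v} <= set(S) for S in S_list)
    )

# For SOS2(4), exactly the non-consecutive pairs are in conflict.
print(conflict_graph_edges([{1, 2}, {2, 3}, {3, 4}]))  # [(1, 3), (1, 4), (2, 4)]
```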
In Proposition~\ref{prop:pwr_cdc_bc}, we show that any biclique cover of the conflict graph of a $\operatorname{CDC}(\mathbfcal{S})$ can provide an ideal formulation of $\operatorname{CDC}(\mathbfcal{S})$.
\begin{proposition}[Theorem 3~\citep{huchette2019combinatorial}, Corollary 1~\citep{lyu2022modeling}] \label{prop:pwr_cdc_bc}
Given a biclique cover $\{\{L^j, R^j\}\}_{j=1}^t$ of the conflict graph $G^c_{\mathbfcal{S}}$ for a pairwise IB-representable $\operatorname{CDC}(\mathbfcal{S})$, the following is an ideal formulation for $\operatorname{CDC}(\mathbfcal{S})$ with $J = \bigcup_{S \in \mathbfcal{S}} S$:
\begin{subequations} \label{eq:pwr_bc_ideal_formulation}
\begin{alignat}{2}
& \sum_{v \in L^j} \lambda_v \leq z_j, & \forall j \in \llbracket t\rrbracket\\
& \sum_{v \in R^j} \lambda_v \leq 1 - z_j, \quad & \forall j \in \llbracket t\rrbracket \\
& \sum_{v \in J} \lambda_v = 1, & \lambda \geq 0 \\
& z_j \in \{0, 1\}, & \forall j \in \llbracket t\rrbracket .
\end{alignat}
\end{subequations}
\end{proposition}
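The projection logic behind formulation~\eqref{eq:pwr_bc_ideal_formulation} can be checked by brute force: a support of $\lambda$ admits a feasible binary $z$ if and only if it meets at most one side of every biclique $\{L^j, R^j\}$. The sketch below (function name and the particular cover are our own illustrative choices, not taken from the text) verifies that, for one biclique cover of the $\operatorname{SOS} 2(4)$ conflict graph, the representable supports are exactly the feasible sets.

```python
from itertools import chain, combinations

def representable_supports(bicliques, J):
    """Supports admitting a binary z under constraints of the form
    sum_{v in L^j} lam_v <= z_j and sum_{v in R^j} lam_v <= 1 - z_j:
    a support works iff it meets at most one side of every biclique."""
    all_supports = chain.from_iterable(
        combinations(sorted(J), r) for r in range(1, len(J) + 1))
    return [set(S) for S in all_supports
            if all(not (set(S) & set(L)) or not (set(S) & set(R))
                   for L, R in bicliques)]

# One biclique cover of the SOS2(4) conflict graph (edges {1,3}, {1,4}, {2,4}).
cover = [({1}, {3, 4}), ({1, 2}, {4})]
supports = representable_supports(cover, {1, 2, 3, 4})
print(supports)  # singletons and consecutive pairs: the feasible sets of SOS2(4)
```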
\section{Univariate Piecewise Linear Relaxations} \label{sec:univariate_pwr}
In this section, we use $\operatorname{CDC}(\mathbfcal{S})$ to formulate univariate piecewise linear relaxations directly. As shown in Figure~\ref{fig:pwl}, the feasible region of a piecewise linear relaxation is a nonconvex polygon, which can be partitioned into convex polygons, i.e., two-dimensional polytopes (a classic computational geometry problem: convex partitioning~\citep{o1998computational}). Moreover, if the piecewise lower and upper bounds are chosen appropriately, the convex partitioning can be trivial. For example, in Figure~\ref{fig:pwl}, the feasible region of the piecewise linear relaxation is a union of 8 triangles. Furthermore, we assume that the convex polygons can be ordered in a sequence such that only consecutive polygons can share vertices or extreme points. We call the class of CDCs that describe such a set of convex polygons generalized 1D-ordered CDCs, as in Definition~\ref{def:g_1d_ordered_cdc}.
\begin{definition}[generalized 1D-ordered CDCs]~\label{def:g_1d_ordered_cdc}
$\operatorname{CDC}(\mathbfcal{S})$ is a generalized 1D-ordered CDC if $\mathbfcal{S} = \{S^i\}_{i=1}^d$ such that $S^i \cap S^j = \emptyset$ for $|i - j| \geq 2$ and $i, j \in \llbracket d \rrbracket$.
\end{definition}
To use biclique covers of the conflict graphs associated with generalized 1D-ordered CDCs, we need to prove that such CDCs are pairwise IB-representable.
\begin{proposition} \label{prop:g_1d_ordered_cdc_pairwise}
If $\operatorname{CDC}(\mathbfcal{S})$ is a generalized 1D-ordered CDC, then $\operatorname{CDC}(\mathbfcal{S})$ is pairwise IB-representable.
\end{proposition}
We note that Proposition~\ref{prop:g_1d_ordered_cdc_pairwise} is a direct consequence of Theorem 1 of~\citet{lyu2022modeling}.
\begin{remark}
The $\operatorname{SOS} 2$ constraint $\lambda \in \operatorname{SOS} 2(N)$ is a generalized 1D-ordered CDC.\footnote{The $\operatorname{SOS} 1$ constraint is also in the class of generalized 1D-ordered CDCs, but $\operatorname{SOS} k$ for $k \geq 3$ is not.}
\end{remark}
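The disjointness condition of Definition~\ref{def:g_1d_ordered_cdc} is easy to check mechanically. The sketch below (helper name ours) confirms that $\operatorname{SOS} 2$ index sets satisfy it, while length-3 windows in the style of $\operatorname{SOS} 3$ do not, consistent with the footnote above.

```python
def is_generalized_1d_ordered(S_list):
    """Check the definition: S^i and S^j are disjoint whenever |i - j| >= 2."""
    d = len(S_list)
    return all(not (set(S_list[i]) & set(S_list[j]))
               for i in range(d) for j in range(i + 2, d))

sos2 = [{i, i + 1} for i in range(1, 5)]          # S^i = {i, i+1}
sos3 = [{i, i + 1, i + 2} for i in range(1, 4)]   # windows of length 3
print(is_generalized_1d_ordered(sos2))  # True
print(is_generalized_1d_ordered(sos3))  # False: S^1 and S^3 share index 3
```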
\subsection{Gray Code Formulations} \label{sec:gray_code_form}
In this section, we will introduce a class of ideal formulations of the generalized 1D-ordered CDCs obtained by Gray codes.
\begin{theorem}\label{thm:g1d_ideal}
Given a generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$ with $\mathbfcal{S} = \{S^i\}_{i=1}^d$ and an arbitrary Gray code $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$, one can provide an ideal formulation for $\lambda \in \operatorname{CDC}(\mathbfcal{S})$:
\begin{subequations} \label{eq:g1d_ideal_form}
\begin{alignat}{2}
& \sum_{v \in L^j} \lambda_v \leq z_j, & \forall j \in \llbracket t \rrbracket \label{eq:g1d_ideal_form_a}\\
& \sum_{v \in R^j} \lambda_v \leq 1 - z_j, \quad & \forall j \in \llbracket t \rrbracket \label{eq:g1d_ideal_form_b} \\
& \sum_{v \in J} \lambda_v = 1, & \lambda \geq 0 \\
& z_j \in \{0, 1\}, & \forall j \in \llbracket t \rrbracket .
\end{alignat}
\end{subequations}
where $L^j_{\operatorname{pre}} = \bigcup_{i=1: h^i_j=0}^d S^i$, $R^j_{\operatorname{pre}} = \bigcup_{i=1: h^i_j=1}^d S^i$, $L^j = L^j_{\operatorname{pre}} \setminus R^j_{\operatorname{pre}}$, and $R^j = R^j_{\operatorname{pre}} \setminus L^j_{\operatorname{pre}}$.
\end{theorem}
The proof of Theorem~\ref{thm:g1d_ideal} is in Appendix~\ref{sec:proof_g1d_ideal}.
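The construction of $L^j$ and $R^j$ in Theorem~\ref{thm:g1d_ideal} can be sketched as follows (function name ours); for $\operatorname{SOS} 2(4)$ with the first three BRGC words, it recovers one pair of sides per bit.

```python
def gray_code_sides(S_list, H):
    """L^j and R^j of the theorem: union the S^i whose Gray codeword has
    bit j equal to 0 (resp. 1), then remove the overlap on each side."""
    t = len(H[0])
    sides = []
    for j in range(t):
        L_pre = set().union(*(set(S) for S, h in zip(S_list, H) if h[j] == 0))
        R_pre = set().union(*(set(S) for S, h in zip(S_list, H) if h[j] == 1))
        sides.append((L_pre - R_pre, R_pre - L_pre))
    return sides

# SOS2(4): S^1, S^2, S^3 paired with the first three BRGC words 00, 01, 11.
S = [{1, 2}, {2, 3}, {3, 4}]
H = [(0, 0), (0, 1), (1, 1)]
print(gray_code_sides(S, H))  # [({1, 2}, {4}), ({1}, {3, 4})]
```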
Since we will use the constraints in~\eqref{eq:g1d_ideal_form} to construct ideal formulations for CDCs of higher dimensional piecewise linear relaxations, we provide a notation in Remark~\ref{rm:pwl_1d_ideal} for simplicity.
\begin{remark} \label{rm:pwl_1d_ideal}
We denote $\lambda \in \operatorname{g1d}(\mathbfcal{S}, \{h^i\}_{i=1}^d, z)$ for the ideal formulation in~\eqref{eq:g1d_ideal_form_a} and~\eqref{eq:g1d_ideal_form_b} given a set of indices $\mathbfcal{S}$, a Gray code $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$, and binary variables $z \in \{0, 1\}^t$.
\end{remark}
We also want to remark that the LogIB formulation in Proposition~\ref{prop:sos2_loge} of Appendix~\ref{sec:pwr_pre} is a special case of Theorem~\ref{thm:g1d_ideal}.
\begin{remark}
Given $\operatorname{CDC}(\mathbfcal{S}) = \operatorname{SOS} 2(d+1)$ and the first $d$ binary vectors $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ of the BRGC with $t = \lceil \log_2(d) \rceil$, the formulation in~\eqref{eq:g1d_ideal_form} of Theorem~\ref{thm:g1d_ideal} is the same as the LogIB formulation.
\end{remark}
\subsection{Gray Codes and Reversed Edge Rankings}
Before constructing the biclique cover formulation of generalized 1D-ordered CDCs in Section~\ref{sec:biclique_form}, we introduce the \textit{reversed edge ranking}. It turns out that reversed edge rankings can also be used to construct Gray codes, which builds a connection between the Gray code formulations and the biclique cover formulations. They also provide us with a balanced Gray code whose Gray code formulation is computationally more efficient than the one obtained from the BRGC.
\begin{definition} \label{def:reversed_edge_ranking}
Given a tree $T = (V, E)$, a mapping $\varphi_T: E \rightarrow \{1, 2, \hdots, r\}$ is a \textit{reversed edge ranking} of $T$ if for any $e_1, e_2 \in E$ with $\varphi_T(e_1) = \varphi_T(e_2)$, there exists $e_3 \in E$ on the path between $e_1$ and $e_2$ such that $\varphi_T(e_3) < \varphi_T(e_1) = \varphi_T(e_2)$.\footnote{In the literature~\citep{iyer1991edge,lam2001optimal,de1995optimal,zhou1995finding}, \textit{edge ranking} is defined with $\varphi_T(e_3) > \varphi_T(e_1) = \varphi_T(e_2)$. Thus, we call the mapping in Definition~\ref{def:reversed_edge_ranking} a \textit{reversed edge ranking}.} The number of ranks used by $\varphi_T$ is $r$. We also call $\varphi_T(e)$ the label of $e$.
\end{definition}
\begin{figure}
\caption{Two mappings of the edges of a path graph with 6 vertices, $P_6$, to $\{1,2,3\}$.}
\label{fig:edge_ranking_p_6}
\end{figure}
To construct Gray codes, we only need to find reversed edge rankings of path graphs. In Figure~\ref{fig:edge_ranking_p_6}, we show two mappings of the edges of a path graph $P_6$ to $\{1,2,3\}$. The top one is a reversed edge ranking because an edge labeled $1$ lies between each pair of edges labeled $2$ and each pair labeled $3$. The bottom one is not, since the only edge between the pair of edges labeled $2$ carries the larger label $3$.
\begin{theorem} \label{thm:edge_ranking_to_gray_code}
Given a reversed edge ranking $\varphi$ of a path graph $P_n$ with $r$ ranks, the sequence $\{h^i\}_{i=1}^n \subseteq \{0, 1\}^r$ defined by the following rules is a Gray code for $n$ numbers:
\begin{enumerate}
\item $h^i_1 = 0$ for $i \in \llbracket n \rrbracket$.
\item $h^i_j = \begin{cases}1 - h^{i-1}_j, \text{ if } \varphi(v_{i-1}v_i) = j \\ h^{i-1}_j, \text{ otherwise,} \end{cases}$ for $i \in \{2, \hdots, n\}$ and $j \in \llbracket r \rrbracket$.
\end{enumerate}
\end{theorem}
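The construction of Theorem~\ref{thm:edge_ranking_to_gray_code} can be sketched directly in code (function name ours; the ranking below is one valid reversed edge ranking of $P_6$, not necessarily the one depicted in the figure): start from the all-zero word and, at each step along the path, flip the single bit given by the label of the traversed edge.

```python
def ranking_to_gray_code(phi, n):
    """phi[i] is the label of edge v_i v_{i+1} (i = 1..n-1). Start from the
    all-zero word and flip bit phi(v_{i-1} v_i) when moving to vertex v_i."""
    r = max(phi.values())
    H = [[0] * r]
    for i in range(2, n + 1):
        h = H[-1][:]
        h[phi[i - 1] - 1] ^= 1   # flip exactly one bit per step
        H.append(h)
    return H

# A reversed edge ranking of P_6: labels 2, 3, 1, 3, 2 on the five edges.
H = ranking_to_gray_code({1: 2, 2: 3, 3: 1, 4: 3, 5: 2}, 6)
# Consecutive codewords differ in exactly one bit, and all words are distinct.
print(H)
```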
Furthermore, we design a procedure in Algorithm~\ref{alg:path_edge_ranking_heuristic} to generate reversed edge rankings of a path graph $P_n$.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\State \textbf{Input}: A path graph with $n$ vertices $P_n$.
\State \textbf{Output}: A reversed edge ranking $\varphi$ of $P_n$.
\State Initialize $\varphi(e) \leftarrow 0$ for $e \in E(P_n)$.
\State $\Call{Label}{P_n, 1, \varphi}$
\State \textbf{return} $\varphi$
\Function{Label}{$P, \textit{level}, \varphi$}
\If{$|V(P)| \leq 1$}
\State \textbf{return}
\EndIf
\State Select an arbitrary edge $e$ to cut $P$ into two components $P^1$ and $P^2$.
\State $\varphi(e) \leftarrow \textit{level}$
\State $\Call{Label}{P^1, \textit{level}+1, \varphi}$; $\Call{Label}{P^2, \textit{level}+1, \varphi}$
\State \textbf{return}
\EndFunction
\end{algorithmic}
\caption{A general procedure to generate a reversed edge ranking of a path graph $P_n$.} \label{alg:path_edge_ranking_heuristic}
\end{algorithm}
\begin{theorem} \label{thm:path_edge_ranking_heuristic}
Algorithm~\ref{alg:path_edge_ranking_heuristic} returns a reversed edge ranking of $P_n$.
\end{theorem}
Algorithm~\ref{alg:path_edge_ranking_heuristic} can also be used to generate the \textit{balanced Gray code}, which, as we show in Section~\ref{sec:pwr_computational}, can provide a computationally more efficient Gray code formulation than the BRGC does.
\begin{remark} \label{rm:balanced_gc}
We call the Gray code generated by Theorem~\ref{thm:edge_ranking_to_gray_code} and Algorithm~\ref{alg:path_edge_ranking_heuristic} the \textit{balanced Gray code} when the edge cutting $P$ into $P^1$ and $P^2$ is always selected such that the vertices in $P^1$ have smaller indices than those in $P^2$, $|V(P^1)| = \lfloor |V(P)| / 2 \rfloor$, and $|V(P^2)| = \lceil |V(P)| / 2 \rceil$. We call the reversed edge ranking produced by Algorithm~\ref{alg:path_edge_ranking_heuristic} in this manner the \textit{balanced reversed edge ranking}.
\end{remark}
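The balanced cut rule of Remark~\ref{rm:balanced_gc} can be sketched as a short recursion (function name ours): each subpath assigns the next rank to the edge after its first $\lfloor |V|/2 \rfloor$ vertices, so the number of ranks is logarithmic in $n$.

```python
def balanced_reversed_edge_ranking(n):
    """Balanced cut rule: the left part keeps floor(|V|/2) vertices.
    Edges are numbered 1..n-1, where edge i joins v_i and v_{i+1}."""
    phi = {}
    def label(lo, hi, level):          # label edges of the subpath v_lo..v_hi
        if hi <= lo:
            return
        cut = lo + (hi - lo + 1) // 2 - 1   # last vertex of the left half
        phi[cut] = level                     # edge v_cut v_{cut+1}
        label(lo, cut, level + 1)
        label(cut + 1, hi, level + 1)
    label(1, n, 1)
    return phi

# P_6: the middle edge gets rank 1, then each half is split again.
print(balanced_reversed_edge_ranking(6))  # {3: 1, 1: 2, 2: 3, 4: 2, 5: 3}
```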
\subsection{Biclique Cover Formulations} \label{sec:biclique_form}
In this section, we introduce a formulation of generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$ motivated by the ``divide and conquer'' algorithm in Algorithm 1 of~\citet{lyu2022modeling}. A generalized 1D-ordered CDC is a CDC admitting junction trees as defined in Definition~\ref{def:pwr_junction_tree}, the focus of study in~\citet{lyu2022modeling}. Because of that, we can design a more specific procedure to find small biclique covers of the conflict graphs associated with generalized 1D-ordered CDCs.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\State \textbf{Input}: A path graph $\mathcal{P}$ and a reversed edge ranking $\varphi$ of $\mathcal{P}$.
\State \textbf{Output}: A set of pairs of indices $\{\{I^{\varphi(e), e}, J^{\varphi(e), e}\}: e \in E(\mathcal{P})\}$.
\Function{Separation}{$\mathcal{P}, \varphi$}
\If{$|V(\mathcal{P})| \leq 1$}
\State \textbf{return} $\{\}$
\EndIf
\State Find the edge $e$ such that $\varphi(e)$ is minimized. \Comment{This edge is unique, since $\varphi$ is a reversed edge ranking.}
\State Let $\mathcal{P}^1$ and $\mathcal{P}^2$ be the two paths of $\mathcal{P} \setminus e$ such that the indices of the vertices in $\mathcal{P}^1$ are smaller than those in $\mathcal{P}^2$.
\State Let $I^{\varphi(e), e} \leftarrow \{i: S^i \in \mathcal{P}^1\}$ and $J^{\varphi(e), e} \leftarrow \{j: S^j \in \mathcal{P}^2\}$.
\State \textbf{return} $\{\{I^{\varphi(e), e}, J^{\varphi(e), e}\}\} \cup \Call{Separation}{\mathcal{P}^1, \varphi} \cup \Call{Separation}{\mathcal{P}^2, \varphi}$
\EndFunction
\end{algorithmic}
\caption{A separation subroutine.} \label{alg:g1d_biclique_sep}
\end{algorithm}
\begin{definition}[junction trees] \label{def:pwr_junction_tree}
A \textit{junction tree} of $\operatorname{CDC}(\mathbfcal{S})$ is denoted as $\mathcal{T}_{\mathbfcal{S}} = (\mathbfcal{S}, \mathcal{E})$, where $\mathcal{T}_{\mathbfcal{S}}$ is a tree and $\mathcal{E}$ satisfies:
\begin{itemize}
\item For any $S^1, S^2 \in \mathbfcal{S}$, the unique path $\mathcal{P}$ between $S^1$ and $S^2$ in $\mathcal{T}_{\mathbfcal{S}}$ satisfies that $S^1 \cap S^2 \subseteq S$ for any $S \in V(\mathcal{P})$, or equivalently $S^1 \cap S^2 \subseteq \operatorname{mid}(e)$ for any $e \in E(\mathcal{P})$.
\end{itemize}
The \textit{middle set} of the edge $S^1 S^2$ is defined as $\operatorname{mid}(S^1 S^2) := S^1 \cap S^2$.
\end{definition}
A junction tree of a generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$ is just a path graph as described in Remark~\ref{rm:g1d_junction_tree}.
\begin{remark} \label{rm:g1d_junction_tree}
Given a generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$ with $\mathbfcal{S} = \{S^i\}_{i=1}^d$, the path graph $\mathcal{P}$ with edge set $\{S^i S^{i+1}: i \in \llbracket d-1 \rrbracket\}$ is a junction tree of $\operatorname{CDC}(\mathbfcal{S})$.
\end{remark}
In Algorithm~\ref{alg:g1d_biclique_sep}, we modify the \textsc{Separation} subroutine in Algorithm 1 of~\citet{lyu2022modeling} to focus on generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$. A class of ideal formulations can then be found by Algorithm~\ref{alg:g1d_biclique}.
\begin{algorithm}[h]
\begin{algorithmic}[1]
\State \textbf{Input}: A set of indices $\mathbfcal{S} = \{S^i\}_{i=1}^d$, a path graph $\mathcal{P}$ with $E(\mathcal{P}) = \{S^i S^{i+1}\}_{i=1}^{d-1}$, and a reversed edge ranking $\varphi$ of $\mathcal{P}$ with the number of ranks $r$.
\State \textbf{Output}: A set of pairs of indices $\{\{A^{j}, B^{j}\}: j \in \llbracket r\rrbracket\}$.
\State $\{\{I^{\varphi(e), e}, J^{\varphi(e), e}\}: e \in E(\mathcal{P})\} \leftarrow \Call{Separation}{\mathcal{P}, \varphi}$.
\State Initialize $A^k \leftarrow \{\}$ and $B^k \leftarrow \{\}$ for $k \in \llbracket r \rrbracket$.
\For {$j \in \llbracket r \rrbracket$}
\State $\operatorname{count} \leftarrow 1$
\For {$e \in \{e' \in E(\mathcal{P}): \varphi(e') = j\}$} \Comment{Note that the order of edges in $E(\mathcal{P})$ is $S^1 S^2, S^2 S^3, \hdots$}
\State If $\operatorname{count}$ is odd, then $A^j \leftarrow A^j \cup I^{\varphi(e), e}$ and $B^j \leftarrow B^j \cup J^{\varphi(e), e}$. Otherwise, $A^j \leftarrow A^j \cup J^{\varphi(e), e}$ and $B^j \leftarrow B^j \cup I^{\varphi(e), e}$.
\State $\operatorname{count} \leftarrow \operatorname{count} + 1$
\EndFor
\EndFor
\State \textbf{return} $\{\{A^{j}, B^{j}\}: j \in \llbracket r\rrbracket\}$
\end{algorithmic}
\caption{An approach to generate a set of pairs of indices representing a biclique cover of the conflict graph associated with $\operatorname{CDC}(\mathbfcal{S})$.} \label{alg:g1d_biclique}
\end{algorithm}
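Algorithms~\ref{alg:g1d_biclique_sep} and~\ref{alg:g1d_biclique} can be sketched together on the part indices $1, \hdots, d$ directly (function names ours): the separation recursion cuts at the unique minimum-label edge, and the assembly loop alternates the orientation of consecutive cuts of the same rank.

```python
def separation(parts, phi):
    """Separation sketch on part indices 1..d: cut at the unique
    minimum-label edge, record the left/right index sets, then recurse."""
    if len(parts) <= 1:
        return {}
    e = min(zip(parts, parts[1:]), key=lambda uv: phi[uv])
    k = parts.index(e[0]) + 1
    out = {e: (set(parts[:k]), set(parts[k:]))}
    out.update(separation(parts[:k], phi))
    out.update(separation(parts[k:], phi))
    return out

def biclique_index_pairs(d, phi, r):
    """Assembly sketch: per rank, alternate the orientation of consecutive
    cuts (odd cut: left into A, right into B; even cut: swapped)."""
    cuts = separation(list(range(1, d + 1)), phi)
    pairs = []
    for j in range(1, r + 1):
        A, B = set(), set()
        for count, e in enumerate(sorted(k for k in cuts if phi[k] == j), 1):
            I, J = cuts[e]
            A |= I if count % 2 == 1 else J
            B |= J if count % 2 == 1 else I
        pairs.append((A, B))
    return pairs

# d = 4 parts; balanced ranking on edges (1,2), (2,3), (3,4): ranks 2, 1, 2.
print(biclique_index_pairs(4, {(1, 2): 2, (2, 3): 1, (3, 4): 2}, 2))
# [({1, 2}, {3, 4}), ({1, 4}, {2, 3})]
```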
\begin{theorem}\label{thm:g1d_biclique_ideal}
Given a generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$ with $\mathbfcal{S} = \{S^i\}_{i=1}^d$,
we can construct a junction tree of $\operatorname{CDC}(\mathbfcal{S})$: a path graph $\mathcal{P}$ with $E(\mathcal{P}) = \{S^i S^{i+1}\}_{i=1}^{d-1}$. Then, an arbitrary reversed edge ranking $\varphi$ of $\mathcal{P}$ with $r$ ranks provides an ideal formulation for $\lambda \in \operatorname{CDC}(\mathbfcal{S})$:
\begin{subequations} \label{eq:g1d_ideal_form_biclique}
\begin{alignat}{2}
& \sum_{v \in L^j} \lambda_v \leq z_j, & \forall j \in \llbracket r \rrbracket \label{eq:g1d_ideal_form_biclique_a}\\
& \sum_{v \in R^j} \lambda_v \leq 1 - z_j, \quad & \forall j \in \llbracket r \rrbracket \label{eq:g1d_ideal_form_biclique_b} \\
& \sum_{v \in J} \lambda_v = 1, & \lambda \geq 0 \\
& z_j \in \{0, 1\}, & \forall j \in \llbracket r \rrbracket ,
\end{alignat}
\end{subequations}
where $\{\{A^j, B^j\}: j \in \llbracket r\rrbracket\}$ is the output of Algorithm~\ref{alg:g1d_biclique} with inputs: $\mathbfcal{S}, \mathcal{P}, \varphi, r$; and
\begin{align}
\left\{\left\{L^j := \bigcup_{i \in A^j} S^i \setminus \bigcup_{i \in B^j} S^i, R^j := \bigcup_{i \in B^j} S^i \setminus \bigcup_{i \in A^j} S^i \right\}\right\}_{j=1}^{r}.
\end{align}
\end{theorem}
The proof of Theorem~\ref{thm:g1d_biclique_ideal} is in Appendix~\ref{sec:proof_g1d_biclique_ideal}. Both the Gray code formulation and the biclique cover formulation are logarithmically sized ideal formulations if the length of the binary vectors in the Gray code, or the number of ranks of the reversed edge ranking, is logarithmic in $d$; this is the case for the balanced Gray code and the balanced reversed edge ranking in Remark~\ref{rm:balanced_gc}. In Appendix~\ref{sec:g1d_diff}, we discuss an example where the Gray code formulation in Theorem~\ref{thm:g1d_ideal} differs from the biclique cover formulation in Theorem~\ref{thm:g1d_biclique_ideal}.
\section{Higher Dimensions} \label{sec:higher_pwr}
The idea of generalized 1D-ordered CDCs can be easily extended to higher dimensions, which can be used to provide piecewise linear relaxations of nonlinear functions with multivariate inputs.
\subsection{Generalized 2D-Ordered CDCs}
Consider an optimization problem involving a constraint $y = f(x)$, where $y \in \mathbb{R}$ and $x \in [L_1, U_1] \times [L_2, U_2] \subseteq \mathbb{R}^2$. We can partition $[L_1, U_1] \times [L_2, U_2]$ into a $d_1 \times d_2$ rectangular grid and provide a polytope relaxation of the nonlinear function $f(x)$ within each rectangle. We let $S^{i_1, i_2}$ be the indices representing the vertices or extreme points of the relaxation polytope of $f(x)$ in the $(i_1, i_2)$-th rectangle. Then, because of the geometric locations, $S^{i_1, i_2}$ can share vertices with $S^{j_1, j_2}$ only if $|i_1 - j_1| \leq 1$ and $|i_2 - j_2| \leq 1$. Furthermore, to guarantee pairwise IB-representability, we assume that $S^{i_1, i_2} \cap S^{i_1+1, i_2+1} \subseteq S^{i_1, i_2+1}$ and $S^{i_1, i_2} \cap S^{i_1+1, i_2+1} \subseteq S^{i_1+1, i_2}$ for any $i_1 \in \llbracket d_1 - 1 \rrbracket$ and $i_2 \in \llbracket d_2 - 1 \rrbracket$.
\begin{definition}[generalized 2D-ordered CDCs] \label{def:g_2d_ordered_cdc}
$\operatorname{CDC}(\mathbfcal{S})$ is a generalized 2D-ordered CDC if $\mathbfcal{S} = \{S^{i_1, i_2}: i_1 \in \llbracket d_1 \rrbracket, i_2 \in \llbracket d_2 \rrbracket\}$ such that
\begin{enumerate}
\item $S^{i_1, i_2} \cap S^{j_1, j_2} = \emptyset$ if $|i_1 - j_1| \geq 2$ or $|i_2 - j_2| \geq 2$ for $i_1, j_1 \in \llbracket d_1 \rrbracket$ and $i_2, j_2 \in \llbracket d_2 \rrbracket$.
\item $S^{i_1, i_2} \cap S^{i_1+1, i_2+1} \subseteq S^{i_1, i_2+1}$ and $S^{i_1, i_2} \cap S^{i_1+1, i_2+1} \subseteq S^{i_1+1, i_2}$ for $i_1 \in \llbracket d_1 - 1 \rrbracket$ and $i_2 \in \llbracket d_2 - 1 \rrbracket$.
\end{enumerate}
\end{definition}
The combinatorial disjunctive constraints for the piecewise McCormick relaxation~\citep{castro2015tightening} of the bilinear term $y = x_1 x_2$ can be viewed as an example of generalized 2D-ordered CDCs. Suppose that $y = x_1 x_2$ where $x_1 \in [L_1, U_1]$ and $x_2 \in [L_2, U_2]$. Then, the convex hull of the points
\begin{align*}
\{(L_1, L_2, L_1 L_2), (L_1, U_2, L_1 U_2), (U_1, L_2, U_1 L_2), (U_1, U_2, U_1 U_2)\}
\end{align*}
\noindent contains the set $\{(x_1, x_2, y) \in [L_1, U_1] \times [L_2, U_2] \times \mathbb{R}: y = x_1x_2\}$; this is the McCormick envelope~\citep{mccormick1976computability}. Suppose that we have the breakpoints $L_1= \hat{x}^1_1 < \hdots < \hat{x}^1_{d_1+1} = U_1$ and $L_2 = \hat{x}^2_1 < \hdots < \hat{x}^2_{d_2+1} = U_2$. We let $(i, j)$ represent the point $(\hat{x}^1_i, \hat{x}^2_j)$. Then, the combinatorial disjunctive constraint for the piecewise McCormick relaxation of the bilinear term $y = x_1 x_2$ with breakpoints $(\hat{x}^1_i, \hat{x}^2_j)$ can be expressed as $\operatorname{CDC}(\mathbfcal{S})$, where
\begin{align*}
\mathbfcal{S} = \{\{(i, j), (i, j+1), (i+1, j), (i+1, j+1)\}: i \in \llbracket d_1 \rrbracket, j \in \llbracket d_2 \rrbracket\}.
\end{align*}
Then, we can write down a formulation for this combinatorial disjunctive constraint $\lambda \in \operatorname{CDC}(\mathbfcal{S})$:
\begin{alignat*}{2}
&\mu_i = \sum_{j=1}^{d_2+1} \lambda_{i, j}, &\forall i \in \llbracket d_1 + 1 \rrbracket \\
&\rho_j = \sum_{i=1}^{d_1+1} \lambda_{i, j}, & \forall j \in \llbracket d_2 + 1 \rrbracket \\
& \mu \in \operatorname{SOS} 2(d_1 + 1), \qquad & \rho \in \operatorname{SOS} 2(d_2 + 1) \\
&\sum_{i=1}^{d_1+1}\sum_{j=1}^{d_2+1} \lambda_{i,j} = 1, & \lambda \geq 0.
\end{alignat*}
The idea of using two $\operatorname{SOS} 2$ constraints to represent the combinatorial disjunctive constraint for the piecewise McCormick relaxation of bilinear term motivates us to use generalized 1D-ordered CDCs to model generalized 2D-ordered CDCs as we will show in Theorem~\ref{thm:g2d_ideal}.
We note that a generalized 2D-ordered $\operatorname{CDC}(\mathbfcal{S})$ might not be a CDC admitting junction trees. A simple counterexample is $S^{1, 1} = \{1, 2\}, S^{1, 2} = \{1, 4\}, S^{2, 1} = \{2, 3\}, S^{2, 2} = \{3,4\}$, as demonstrated in Figure~\ref{fig:g2d_counter_example}. However, this counterexample is still pairwise IB-representable. Thus, it is important to show that any generalized 2D-ordered CDC is pairwise IB-representable.
\begin{figure}
\caption{A generalized 2D-ordered CDC that is not a CDC admitting junction trees but is pairwise IB-representable.}
\label{fig:g2d_counter_example}
\end{figure}
\begin{theorem} \label{thm:g2d_ib}
If $\operatorname{CDC}(\mathbfcal{S})$ is a generalized 2D-ordered CDC, then it is pairwise IB-representable.
\end{theorem}
Theorem~\ref{thm:g2d_ib} can be viewed as a corollary of Theorem~\ref{thm:gnd_ib} in Section~\ref{sec:gnd}.
Then, because of the pairwise IB-representability of generalized 2D-ordered CDCs, we can construct ideal formulations by finding biclique covers of the associated conflict graphs (Proposition~\ref{prop:pwr_cdc_bc}). Recall that we have defined $\operatorname{g1d}$ in Remark~\ref{rm:pwl_1d_ideal}.
\begin{theorem} \label{thm:g2d_ideal}
Given a generalized 2D-ordered CDC with $\mathbfcal{S} = \{S^{i_1, i_2}: i_1 \in \llbracket d_1 \rrbracket, i_2 \in \llbracket d_2 \rrbracket\}$, two arbitrary Gray codes $\{h^i\}_{i=1}^{d_1} \subseteq \{0, 1\}^{t_1}$ and $\{g^i\}_{i=1}^{d_2} \subseteq \{0, 1\}^{t_2}$ can provide an ideal formulation for $\lambda \in \operatorname{CDC}(\mathbfcal{S})$:
\begin{subequations} \label{eq:g2d_ideal_form}
\begin{alignat}{2}
& \lambda \in \operatorname{g1d}\left(\mathbfcal{S}^1, \{h^i\}_{i=1}^{d_1}, z^1\right), \qquad && \lambda \in \operatorname{g1d}\left(\mathbfcal{S}^2, \{g^i\}_{i=1}^{d_2}, z^2 \right) \label{eq:g2d_ideal_form_a} \\
& \sum_{v \in J} \lambda_v = 1, && \lambda \geq 0 \\
& z^1 \in \{0, 1\}^{t_1}, && z^2 \in \{0, 1\}^{t_2},
\end{alignat}
\end{subequations}
where $\mathbfcal{S}^1 = \left\{\bigcup_{i_2 \in \llbracket d_2 \rrbracket} S^{i_1, i_2} \right\}_{i_1=1}^{d_1}$ and $\mathbfcal{S}^2 =\left\{\bigcup_{i_1 \in \llbracket d_1 \rrbracket} S^{i_1, i_2} \right\}_{i_2=1}^{d_2}$.
\end{theorem}
Again, Theorem~\ref{thm:g2d_ideal} is a corollary of Theorem~\ref{thm:gnd_ideal} in Section~\ref{sec:gnd}.
\subsection{Generalized $n$D-Ordered CDCs} \label{sec:gnd}
Generalized 1D-ordered and 2D-ordered CDCs can also be extended to higher dimensions. Note that $||x||_1 = \sum_{i=1}^n |x_i|$ is the $\ell^1$ norm of a vector $x \in \mathbb{R}^n$ and $||x||_{\infty} = \max_i |x_i|$ is the infinity norm.
Consider a nonlinear function $f: D \rightarrow \mathbb{R}$, where $D \subseteq \mathbb{R}^n$ is bounded and can be partitioned into a finite number of hyperrectangles forming a $\llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$ grid, $\{C^{\mathbf{i}}: \mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket\}$. Let $\underl{f}: D \rightarrow \mathbb{R}$ and $\bar{f}: D \rightarrow \mathbb{R}$ be continuous piecewise linear lower and upper bounds of $f$ such that $\underl{f}(x) \leq f(x) \leq \bar{f}(x)$ for $x \in D$ and $P^{\mathbf{i}} = \{(x, y) \in C^{\mathbf{i}} \times \mathbb{R}: \underl{f}(x) \leq y \leq \bar{f}(x)\}$ is a polytope for $\mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$. In addition, we assume that
\begin{itemize}
\item If $\operatorname{ext}(P^{\mathbf{i}}) \cap \operatorname{ext}(P^{\mathbf{j}}) \neq \emptyset$, then $||\mathbf{i} - \mathbf{j}||_{\infty} \leq 1$.
\item $\operatorname{ext}(P^{\mathbf{i}}) \cap \operatorname{ext}(P^{\mathbf{j}}) \subseteq \operatorname{ext}(P^{\mathbf{v}})$ if $||\mathbf{i} - \mathbf{v}||_1 + ||\mathbf{j} - \mathbf{v}||_1 = ||\mathbf{i} - \mathbf{j}||_1$. \footnote{To ensure the pairwise IB-representability.}
\end{itemize}
We call the combinatorial disjunctive constraint $\operatorname{CDC}(\mathbfcal{S})$ representing the union of such $P^{\mathbf{i}}$, $\bigcup_{\mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket} P^{\mathbf{i}}$, a generalized $n$D-ordered CDC.
\begin{definition}[generalized $n$D-ordered CDCs] \label{def:g_nd_ordered_cdc}
$\operatorname{CDC}(\mathbfcal{S})$ is a generalized $n$D-ordered CDC if $\mathbfcal{S} = \{S^{\mathbf{i}}: \mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket\}$ such that
\begin{enumerate}
\item $S^{\mathbf{i}} \cap S^{\mathbf{j}} = \emptyset$ if $||\mathbf{i} - \mathbf{j}||_{\infty} \geq 2$ for $\mathbf{i}, \mathbf{j} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$.
\item $S^{\mathbf{i}} \cap S^{\mathbf{j}} \subseteq S^{\mathbf{v}}$ if $||\mathbf{i} - \mathbf{v}||_1 + ||\mathbf{j} - \mathbf{v}||_1 = ||\mathbf{i} - \mathbf{j}||_1$.
\end{enumerate}
\end{definition}
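Both conditions of Definition~\ref{def:g_nd_ordered_cdc} are easy to verify by enumeration for small instances. The sketch below (function name ours) confirms them for the grid-corner CDC on a $2 \times 2$ grid and for the counterexample of Figure~\ref{fig:g2d_counter_example}, which, as stated earlier, is generalized 2D-ordered despite not admitting a junction tree.

```python
from itertools import product

def is_generalized_nd_ordered(S, dims):
    """Check both conditions of the definition for S: index tuple -> set."""
    idx = list(product(*(range(1, d + 1) for d in dims)))
    linf = lambda a, b: max(abs(x - y) for x, y in zip(a, b))
    l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    for i in idx:
        for j in idx:
            if linf(i, j) >= 2 and S[i] & S[j]:
                return False            # condition 1 violated
            for v in idx:
                if l1(i, v) + l1(j, v) == l1(i, j) and not (S[i] & S[j] <= S[v]):
                    return False        # condition 2 violated
    return True

# Grid-corner CDC on a 2 x 2 grid: S^{(i, j)} holds the four cell corners.
grid = {(i, j): {(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)}
        for i in (1, 2) for j in (1, 2)}
print(is_generalized_nd_ordered(grid, (2, 2)))  # True

# The counterexample from the 2D discussion is also generalized 2D-ordered.
counter = {(1, 1): {1, 2}, (1, 2): {1, 4}, (2, 1): {2, 3}, (2, 2): {3, 4}}
print(is_generalized_nd_ordered(counter, (2, 2)))  # True
```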
Note that $n$ can also take the values 1 and 2, so Definition~\ref{def:g_nd_ordered_cdc} can be viewed as a generalization of Definitions~\ref{def:g_1d_ordered_cdc} and~\ref{def:g_2d_ordered_cdc}.
Then, we can show the pairwise IB-representability of generalized $n$D-ordered CDCs and provide logarithmically sized ideal formulations.
\begin{theorem} \label{thm:gnd_ib}
If $\operatorname{CDC}(\mathbfcal{S})$ is a generalized $n$D-ordered CDC, then it is pairwise IB-representable.
\end{theorem}
\begin{theorem} \label{thm:gnd_ideal}
Given a generalized $n$D-ordered CDC with $\mathbfcal{S} = \{S^{\mathbf{i}}: \mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket\}$, $n$ arbitrary Gray codes $\{h^{i, j}\}_{i=1}^{d_j} \subseteq \{0, 1\}^{t_j}$ for $j \in \llbracket n \rrbracket$ can provide an ideal formulation for $\lambda \in \operatorname{CDC}(\mathbfcal{S})$:
\begin{subequations} \label{eq:gnd_ideal_form}
\begin{alignat}{2}
& \lambda \in \operatorname{g1d}(\mathbfcal{S}^j, \{h^{i,j}\}_{i=1}^{d_j}, z^j), \qquad && \forall j \in \llbracket n \rrbracket \label{eq:gnd_ideal_form_a}\\
& \sum_{v \in J} \lambda_v = 1, && \lambda \geq 0 \\
& z^j \in \{0, 1\}^{t_j}, && \forall j \in \llbracket n \rrbracket,
\end{alignat}
\end{subequations}
where $\mathbfcal{S}^j = \{\bigcup_{\mathbf{i}_v \in \llbracket d_v \rrbracket: v \neq j} S^{\mathbf{i}}\}_{\mathbf{i}_j=1}^{d_j}$ for $j \in \llbracket n \rrbracket$.
\end{theorem}
The proofs of Theorems~\ref{thm:gnd_ib} and~\ref{thm:gnd_ideal} are in Appendix~\ref{sec:proofs_gnd}.
Note that the constraints $\lambda \in \operatorname{g1d}(\mathbfcal{S}^j, \{h^{i,j}\}_{i=1}^{d_j}, z^j)$ in~\eqref{eq:gnd_ideal_form_a} can also be replaced by other constraints found via biclique covers of the conflict graphs $G^c_{\mathbfcal{S}^j}$, such as~\eqref{eq:g1d_ideal_form_biclique_a} and~\eqref{eq:g1d_ideal_form_biclique_b} in Theorem~\ref{thm:g1d_biclique_ideal}.
\section{Computational Results} \label{sec:pwr_computational}
In this section, we test the computational performance of different modeling approaches and formulations for piecewise linear relaxations of univariate nonlinear functions.\footnote{The code of our experiments is available at \\ \href{https://github.com/BochuanBob/PiecewiseLinearRelaxation.jl}{https://github.com/BochuanBob/PiecewiseLinearRelaxation.jl}.} We select 2D inverse kinematics problems and share-of-choice product design problems as two applications. In both applications, nonlinear functions such as $\sin$, $\cos$, and $\exp$ appear in the constraints. Thus, a piecewise linear approximation alone cannot provide either a primal solution or a dual bound directly from the solver, whereas the piecewise linear relaxation approach provides a dual bound directly from the solving process. Note that we do not test the piecewise linear relaxation approach on multicommodity transportation problems as in~\citep{vielma2010mixed,huchette2022nonconvex}, since there the nonlinear functions appear only in the objective function and there is no need to provide both piecewise linear lower and upper bounds. In Section~\ref{sec:nonlinear}, we also compare the piecewise linear relaxation approach with a nonlinear solver, SCIP.
First, we introduce the computational experiments within the piecewise linear relaxation framework. In Sections~\ref{sec:2d_inverse_experiments} and~\ref{sec:share_of_choice_experiments}, we use Gurobi v10.0.0~\citep{gurobi} as the MILP solver and JuMP v1.5.0~\citep{DunningHuchetteLubin2017} as the modeling language, with four threads on a Red Hat Enterprise Linux version 7.9 workstation with 16 GB of RAM and an Intel(R) Xeon(R) W-2102 CPU with 4 cores @ 2.90GHz. We compare the performance of three methods, each with several different formulations, in the experiments:
\begin{itemize}
\item \textit{Base}: Use piecewise linear function formulations to model the piecewise linear lower bound and the piecewise linear upper bound separately. The formulations for each piecewise linear function include \textit{Inc}: incremental in~\eqref{eq:pwl_inc}; \textit{CC}: convex combination in~\eqref{eq:sos2_cc}; \textit{MC}: multiple choice in~\eqref{eq:pwl_mc}; \textit{DLog}: logarithmic disaggregated convex combination in~\eqref{eq:pwl_DLog}; \textit{LogE}: logarithmic embedding in~\eqref{eq:sos2_loge}; \textit{ZZB}: binary zig-zag in~\eqref{form:zzb}; \textit{ZZI}: general integer zig-zag in~\eqref{form:zzi}.
\item \textit{Merged}: Use Proposition~\ref{prop:multi_pwl_by_sos2} or Proposition~\ref{prop:multi_pwl_inc} to formulate the piecewise linear lower and upper bounds at the same time. The formulations include \textit{Inc}: incremental in~\eqref{eq:multi_pwl_inc};
\textit{DLog}: logarithmic disaggregated convex combination in~\eqref{eq:pwl_DLog} with similar modification as incremental in Proposition~\ref{prop:multi_pwl_inc};
\textit{LogE}: logarithmic embedding for~\eqref{eq:multi_pwl_by_sos2_c} in~\eqref{eq:multi_pwl_by_sos2};
\textit{SOS2}: the default $\operatorname{SOS} 2$ constraint in Gurobi for~\eqref{eq:multi_pwl_by_sos2_c} in~\eqref{eq:multi_pwl_by_sos2};
\textit{ZZB}: binary zig-zag for~\eqref{eq:multi_pwl_by_sos2_c} in~\eqref{eq:multi_pwl_by_sos2};
\textit{ZZI}: general integer zig-zag for~\eqref{eq:multi_pwl_by_sos2_c} in~\eqref{eq:multi_pwl_by_sos2}.
\item \textit{PWR}: Model the piecewise linear relaxations directly with combinatorial disjunctive constraint and independent branching framework. The formulations include
\textit{Inc}: incremental in~\eqref{eq:pwr_inc_formulation};
\textit{DLog}: logarithmic disaggregated convex combination in~\eqref{eq:pwr_dLog_formulation};
\textit{BRGC}: use binary reflected Gray code in Definition~\ref{def:brgc} for~\eqref{eq:g1d_ideal_form} of Theorem~\ref{thm:g1d_ideal};
\textit{Balanced}: use balanced Gray code defined in Remark~\ref{rm:balanced_gc} for~\eqref{eq:g1d_ideal_form} of Theorem~\ref{thm:g1d_ideal};
\textit{Biclique}: use balanced reversed edge ranking defined in Remark~\ref{rm:balanced_gc} for~\eqref{eq:g1d_ideal_form_biclique} of Theorem~\ref{thm:g1d_biclique_ideal}.
\end{itemize}
We want to note that \textit{BRGC}, \textit{Balanced}, and \textit{Biclique} are our proposed formulations, where \textit{BRGC} and \textit{Balanced} are Gray code formulations using different Gray codes and \textit{Biclique} is a biclique cover formulation. We also refer the reader to Appendices~\ref{sec:pwr_pre} (\textit{Base}), \ref{sec:mupwr_merged} (\textit{Merged}), and~\ref{sec:inc_dlog} (\textit{PWR}) for the \textit{MC}, \textit{CC}, \textit{Inc}, \textit{DLog}, \textit{LogE}, \textit{LogIB}, \textit{ZZB}, and \textit{ZZI} formulations.
\subsection{How to Obtain Piecewise Linear Relaxations}
In our computational experiments, we focus on univariate nonlinear functions $f: [L, U] \rightarrow \mathbb{R}$ such that $f$ is differentiable and $[L, U]$ can be partitioned into line segments $\{[\hat{x}_i, \hat{x}_{i+1}]\}_{i=1}^{N-1}$ where $f$ is convex or concave in each $[\hat{x}_i, \hat{x}_{i+1}]$ and $N$ is the total number of breakpoints; for example, $f(x) = \sin(x)$, $f(x) = \cos(x)$, or $f(x) = \exp(x)$.
Two major parameters affect the feasible region of the piecewise linear relaxations: $N^{\operatorname{pre}}$ and $N^{\operatorname{seg}}$. The value of $N^{\operatorname{pre}}$ determines the number of polytope pieces, and the value of $N^{\operatorname{seg}}$ determines the shape of the polytope of each piece of the relaxation. First, we need to obtain the line segments $\{[\hat{x}_i, \hat{x}_{i+1}]\}_{i=1}^{N-1}$ where $f$ is convex or concave in each piece. We start by constructing $\{[\hat{x}_i, \hat{x}_{i+1}]\}_{i=1}^{N^{\operatorname{pre}}-1}$ with breakpoints equally spaced between $L$ and $U$ inclusive. Then, we add the breakpoints needed to obtain line segments $\{[\hat{x}_i, \hat{x}_{i+1}]\}_{i=1}^{N-1}$ where $f$ is convex or concave in each piece. Note that $N^{\operatorname{pre}} - 1$ is not necessarily equal to the number of polytope pieces in the piecewise linear relaxation because of the additional breakpoints.
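This breakpoint construction can be sketched as follows (a minimal Python sketch; the function name \texttt{convexity\_breakpoints} and its arguments are our own illustrative choices, and we assume the inflection points of $f$ are known):

```python
import math

def convexity_breakpoints(L, U, n_pre, inflections):
    """Place n_pre equally spaced breakpoints on [L, U], then insert the
    inflection points of f that fall strictly inside [L, U], so that f is
    convex or concave on every resulting segment."""
    pts = [L + (U - L) * i / (n_pre - 1) for i in range(n_pre)]
    pts += [p for p in inflections if L < p < U]
    return sorted(set(pts))

# For f(x) = sin(x) on [0, 5], the only interior inflection point is pi,
# so N^pre = 6 pre-breakpoints grow to N = 7 breakpoints (6 segments).
breaks = convexity_breakpoints(0.0, 5.0, 6, [math.pi])
```

The example illustrates why $N^{\operatorname{pre}} - 1$ can differ from the final number of pieces: the inserted inflection point splits one pre-segment in two.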
\begin{figure}
\caption{The shape of the polytope in each relaxation piece when $N^{\operatorname{seg}} = 1$ and $N^{\operatorname{seg}} = 2$.}
\label{fig:n_seg_demo}
\end{figure}
After obtaining the line segments $\{[\hat{x}_i, \hat{x}_{i+1}]\}_{i=1}^{N-1}$ where $f$ is convex or concave in each piece, we create a polytope that relaxes the nonlinear function $f$ in each piece. As shown in Figure~\ref{fig:n_seg_demo}, we are interested in creating the polytope relaxation of $f$ (green line) between $A$ and $B$. When $N^{\operatorname{seg}} = 1$, we find the tangent lines of $f$ at $A$ and $B$, which intersect at $C$. The points $A, B, C$ are the extreme points of the polytope relaxation of $f$. We also use $A, B, C$ to create the piecewise linear lower and upper bounds for the \textit{Base} and \textit{Merged} methods. In this case, $f$ is concave between the points $A$ and $B$, so $AC$ and $CB$ are the line segments of the piecewise linear upper bound and $AB$ is the line segment of the piecewise linear lower bound. When $N^{\operatorname{seg}} = 2$, we first project $C$ to the point $D$ on the function $f$ with the same $x$-value. Then, we use the tangent lines of $f$ at $A$, $D$, and $B$ to find the points $E$ and $F$. The polytope relaxation for $N^{\operatorname{seg}}=2$ thus has the extreme points $A, B, E, F$. Note that $D$ lies on the line segment $EF$, so it is not an extreme point. When $N^{\operatorname{seg}}$ is larger than 2, we proceed in the same manner to obtain polytopes with more extreme points.
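For $N^{\operatorname{seg}} = 1$, the vertex $C$ is the intersection of the two endpoint tangents and can be computed in closed form. The following is a minimal sketch (the helper name \texttt{tangent\_vertex} is ours, not part of the experimental code):

```python
def tangent_vertex(a, fa, dfa, b, fb, dfb):
    """Intersection C of the tangent lines of f at A = (a, f(a)) and
    B = (b, f(b)), where f is convex or concave on [a, b]; with
    N^seg = 1 the relaxation polytope is the triangle A, B, C."""
    xc = (fb - fa + dfa * a - dfb * b) / (dfa - dfb)
    yc = fa + dfa * (xc - a)
    return xc, yc

# f(x) = x^2 on [0, 1]: the tangents y = 0 and y = 2x - 1 meet at C = (0.5, 0).
xc, yc = tangent_vertex(0.0, 0.0, 0.0, 1.0, 1.0, 2.0)
```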
\subsection{2D Inverse Kinematics Problems} \label{sec:2d_inverse_experiments}
The first nonlinear optimization problem for testing the performance of different formulations is 2D inverse kinematics. In this optimization problem, we want to control the angles of $n$ joints, $\theta^1, \hdots, \theta^n$, so as to place the end effector or ``hand'' of the robot arm at a target position $x^{\operatorname{des}} \in \mathbb{R}^2$ with a target angle $\theta^{\operatorname{des}} \in \mathbb{R}$. The angle of each joint $\theta^i$ lies within a range $[L^i, U^i] \subseteq \mathbb{R}$. The vector $v^i \in \mathbb{R}^2$ represents the length vector of the $i$-th link in the initial position, i.e., $\theta^i = 0$. In the objective function, we simultaneously minimize the $L_1$ distance between $x^{\operatorname{sum}}$ and $x^{\operatorname{des}}$ and the $L_1$ distance between $\theta^{\operatorname{sum}}$ and $\theta^{\operatorname{des}}$, where $\theta^{\operatorname{init}}$ is the initial angle of the end effector. We also introduce a weight $\beta$ on the angle difference in the objective function.
\begin{subequations} \label{eq:robot_form}
\begin{alignat}{2}
\min_{x, \theta, t} \qquad & t^x_1 + t^x_2 + \beta \cdot t^{\theta}
\label{eq:robot_form_a} \\
\text{s.t.} \qquad & t^x \geq x^{\operatorname{sum}} - x^{\operatorname{des}} & t^x \geq x^{\operatorname{des}} - x^{\operatorname{sum}} \label{eq:robot_form_b} \\
& t^{\theta} \geq {\theta}^{\operatorname{sum}} - {\theta}^{\operatorname{des}} & t^{\theta} \geq \theta^{\operatorname{des}} - \theta^{\operatorname{sum}} \label{eq:robot_form_c}\\
& \begin{bmatrix} \cos(\sum_{j=1}^i \theta^j) & -\sin(\sum_{j=1}^i \theta^j) \\ \sin(\sum_{j=1}^i \theta^j) & \cos(\sum_{j=1}^i \theta^j) \\ \end{bmatrix} v^i = x^i, & \forall i \in \llbracket n \rrbracket \label{eq:robot_form_d} \\
& x^{\operatorname{sum}} = \sum_{i=1}^n x^i, & \theta^{\operatorname{sum}} = \theta^{\operatorname{init}} + \sum_{i=1}^n \theta^i \\
& x^i, t^x \in \mathbb{R}^2, \theta^i \in [L^i, U^i], t^\theta \in \mathbb{R}, & \forall i \in \llbracket n \rrbracket.
\end{alignat}
\end{subequations}
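The forward kinematics encoded in constraint~\eqref{eq:robot_form_d} can be sketched outside the optimization model as follows (a minimal Python sketch with illustrative names; it simply composes the rotation matrices and sums the rotated link vectors to obtain $x^{\operatorname{sum}}$):

```python
import math

def end_effector(thetas, links):
    """Forward kinematics behind constraint (eq:robot_form_d): rotate each
    link vector v^i by the cumulative angle sum_{j<=i} theta^j and add up
    the rotated links."""
    x = y = phi = 0.0
    for th, (vx, vy) in zip(thetas, links):
        phi += th
        x += math.cos(phi) * vx - math.sin(phi) * vy
        y += math.sin(phi) * vx + math.cos(phi) * vy
    return x, y

# Two unit links along the x-axis with both joints bent by 90 degrees:
# the arm goes up one unit, then doubles back, ending at (-1, 1).
pos = end_effector([math.pi / 2, math.pi / 2], [(1.0, 0.0), (1.0, 0.0)])
```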
\begin{table}[h]
\begin{adjustbox}{angle=0,scale=0.58}
\centering
\begin{tabular}{lll|lllllll|llllll|lllll}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multicolumn{7}{c|}{Base} & \multicolumn{6}{c|}{Merged} & \multicolumn{5}{c}{PWR} \\
\multicolumn{1}{c}{$N^{\operatorname{pre}}$} & \multicolumn{1}{c}{$N^{\operatorname{seg}}$} & \multicolumn{1}{c|}{Metric} & \multicolumn{1}{c}{Inc} & \multicolumn{1}{c}{CC} & \multicolumn{1}{c}{MC} & \multicolumn{1}{c}{DLog} & \multicolumn{1}{c}{LogE} & \multicolumn{1}{c}{ZZB} & \multicolumn{1}{c|}{ZZI} & \multicolumn{1}{c}{Inc} & \multicolumn{1}{c}{DLog} & \multicolumn{1}{c}{LogE} & \multicolumn{1}{c}{SOS2} & \multicolumn{1}{c}{ZZB} & \multicolumn{1}{c|}{ZZI} & \multicolumn{1}{c}{Inc} & \multicolumn{1}{c}{DLog} & \multicolumn{1}{c}{\textbf{BRGC}} & \multicolumn{1}{c}{\textbf{Balanced}} & \multicolumn{1}{c}{\textbf{Biclique}} \\ \hline
50 & 1 & Mean (s) & 1.69 & 7.27 & 9.31 & 3.67 & 2.21 & 1.24 & 1.25 & 2.66 & 2.88 & 1.19 & 0.85 & 0.76 & 0.73 & 9.96 & 1.13 & 0.83 & 0.73 & \textbf{0.69} \\
& & Std & 0.81 & 4.70 & 5.91 & 1.26 & 1.00 & 0.61 & 0.59 & 1.51 & 1.17 & 0.53 & 0.47 & 0.36 & \textbf{0.29} & 5.67 & 0.46 & 0.38 & 0.33 & \textbf{0.29} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 3 & 3 & 0 & 0 & 1 & \textbf{5} & \textbf{5} \\
& & Fail & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
50 & 2 & Mean (s) & 4.82 & 18.35 & 19.29 & 5.10 & 3.21 & 3.12 & 2.18 & 6.17 & 3.26 & 1.33 & 1.03 & 1.08 & 0.86 & 13.98 & 1.33 & 1.25 & 0.94 & \textbf{0.85} \\
& & Std & 4.21 & 19.98 & 12.03 & 1.88 & 1.26 & 1.57 & 0.98 & 3.06 & 1.27 & 0.49 & 0.57 & 0.65 & 0.37 & 8.95 & 0.55 & 0.64 & \textbf{0.30} & 0.32 \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 3 & \textbf{6} & 0 & 0 & 0 & 2 & \textbf{6} \\
& & Fail & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
50 & 4 & Mean (s) & 21.01 & 73.63 & 115.57 & 31.27 & 14.07 & 12.30 & 11.54 & 35.43 & 10.49 & 6.40 & 2.72 & 2.60 & 2.97 & 88.25 & 2.48 & 2.47 & 2.21 & \textbf{1.88} \\
& & Std & 11.14 & 67.17 & 57.41 & 15.68 & 10.24 & 5.68 & 5.62 & 17.33 & 3.36 & 2.20 & 1.54 & 1.17 & 1.03 & 90.49 & 0.91 & 1.03 & 0.92 & \textbf{0.86} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 0 & 1 & 0 & 3 & 1 & 2 & \textbf{8} \\
& & Fail & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
100 & 1 & Mean (s) & 5.31 & 15.45 & 47.10 & 6.53 & 3.92 & 2.49 & 2.34 & 9.91 & 4.71 & 2.10 & 1.51 & 1.26 & 1.38 & 85.22 & 2.21 & 1.68 & 1.31 & \textbf{1.21} \\
& & Std & 3.04 & 16.62 & 35.42 & 2.71 & 1.79 & 1.20 & 1.27 & 5.01 & 1.97 & 0.82 & 1.20 & 0.61 & 0.85 & 61.33 & 0.91 & 0.76 & 0.54 & \textbf{0.45} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{6} & 4 & 3 & 0 & 0 & 0 & 5 & 2 \\
& & Fail & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
100 & 2 & Mean (s) & 12.25 & 29.92 & 79.14 & 10.18 & 6.18 & 5.29 & 3.84 & 20.50 & 5.66 & 2.78 & 1.75 & 2.07 & 1.85 & 154.46 & 2.77 & 2.28 & 1.85 & \textbf{1.57} \\
& & Std & 5.89 & 18.78 & 62.33 & 4.59 & 2.57 & 2.79 & 1.68 & 10.02 & 2.08 & 1.24 & 1.00 & 1.16 & 0.70 & 176.56 & 1.37 & 1.07 & 0.82 & \textbf{0.59} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 0 & 2 & 0 & 0 & 0 & 2 & \textbf{9} \\
& & Fail & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & 2 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
100 & 4 & Mean (s) & 76.26 & 299.58 & 478.80 & 79.22 & 31.57 & 29.21 & 32.42 & 115.35 & 19.29 & 13.68 & 452.06 & 5.90 & 8.01 & 426.20 & 5.51 & 5.60 & 3.69 & \textbf{3.60} \\
& & Std & 48.22 & 204.47 & 204.26 & 51.01 & 14.60 & 15.40 & 19.14 & 66.90 & 3.04 & 5.66 & 262.95 & 2.34 & 4.17 & 251.85 & 2.62 & 3.38 & 1.15 & \textbf{0.98} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 1 & 2 & 5 & \textbf{9} \\
& & Fail & \textbf{0} & 4 & 14 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & 15 & \textbf{0} & \textbf{0} & 11 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
200 & 1 & Mean (s) & 18.41 & 25.86 & 270.75 & 13.52 & 7.10 & 4.55 & 4.46 & 32.39 & 9.34 & 4.23 & 2.61 & 2.66 & 2.64 & 434.52 & 5.73 & 3.41 & 2.52 & \textbf{2.31} \\
& & Std & 10.72 & 17.45 & 200.06 & 6.71 & 2.99 & 1.58 & 2.22 & 16.44 & 2.93 & 1.68 & 1.49 & 1.27 & 1.02 & 254.81 & 3.20 & 1.73 & 1.01 & \textbf{0.84} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 2 & 2 & 0 & 0 & 0 & 2 & \textbf{9} \\
& & Fail & \textbf{0} & \textbf{0} & 4 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & 11 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
200 & 2 & Mean (s) & 42.91 & 120.74 & 408.94 & 23.86 & 11.29 & 11.57 & 8.23 & 60.20 & 10.66 & 6.48 & 3.88 & 4.43 & 4.04 & 438.63 & 4.96 & 4.30 & 3.09 & \textbf{3.06} \\
& & Std & 26.33 & 144.38 & 221.26 & 11.08 & 3.96 & 6.42 & 3.03 & 30.57 & 4.40 & 2.21 & 2.25 & 2.07 & 1.22 & 259.15 & 1.49 & 2.00 & 1.56 & \textbf{1.06} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 2 & 0 & 0 & 0 & 0 & \textbf{7} & \textbf{7} \\
& & Fail & \textbf{0} & 1 & 9 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & 14 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\
200 & 4 & Mean (s) & 274.95 & 462.92 & 593.31 & 189.25 & 93.58 & 66.02 & 95.27 & 356.63 & 44.26 & 27.99 & 432.28 & 16.37 & 15.57 & 462.33 & 14.78 & 14.46 & 10.22 & \textbf{9.82} \\
& & Std & 156.74 & 237.11 & 30.02 & 104.40 & 46.10 & 26.50 & 54.44 & 175.96 & 11.72 & 9.34 & 265.61 & 6.16 & 7.87 & 245.04 & 6.18 & 7.18 & 3.40 & \textbf{3.28} \\
& & Win & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 2 & 7 & \textbf{8} \\
& & Fail & \textbf{0} & 14 & 19 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & 1 & \textbf{0} & \textbf{0} & 14 & \textbf{0} & \textbf{0} & 15 & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}
\end{tabular}
\end{adjustbox}
\caption{Computational results for 2D inverse kinematics problems.} \label{tb:robot}
\end{table}
Note that~\eqref{eq:robot_form_a}, \eqref{eq:robot_form_b}, and~\eqref{eq:robot_form_c} are equivalent to $\min_{x, \theta} ||x^{\operatorname{sum}} - x^{\operatorname{des}}||_1 + \beta \cdot |\theta^{\operatorname{sum}} - \theta^{\operatorname{des}}|$, where $||\cdot||_1$ is the $\ell^1$ norm. The matrix in~\eqref{eq:robot_form_d} is a two-dimensional rotation matrix that rotates the vector $v^i$ by an angle of $\sum_{j=1}^i \theta^j$.
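As a quick numerical check of this equivalence, the following minimal sketch (the helper name \texttt{l1\_objective} is ours, not from the experiments) evaluates the value that the epigraph constraints collapse to at an optimum:

```python
def l1_objective(x_sum, x_des, th_sum, th_des, beta):
    """Value minimized by (eq:robot_form_a)-(eq:robot_form_c): at an optimum,
    each epigraph variable t >= v, t >= -v collapses to |v|."""
    return (abs(x_sum[0] - x_des[0]) + abs(x_sum[1] - x_des[1])
            + beta * abs(th_sum - th_des))

# ||(1, 2) - (0.5, 2.5)||_1 + 0.1 * |1.0 - 0.2| = 0.5 + 0.5 + 0.08 = 1.08
obj = l1_objective((1.0, 2.0), (0.5, 2.5), 1.0, 0.2, 0.1)
```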
We compare the computational performance of the \textit{Base}, \textit{Merged}, and \textit{PWR} methods on 20 randomly generated 2D inverse kinematics instances with $n=4$ joints, $\beta = 0.1$, lower bounds on the joint angles $-\frac{\pi}{2}, -\frac{\pi}{4}, -\frac{\pi}{4}, -\frac{\pi}{4}$, and upper bounds $\frac{\pi}{2}, \frac{\pi}{4}, \frac{\pi}{4}, \frac{\pi}{4}$. We alter the piecewise linear relaxation parameters $N^{\operatorname{pre}} \in \{50, 100, 200\}$ and $N^{\operatorname{seg}} \in \{1, 2, 4\}$ and set the time limit of the solver to 600 seconds. We let the solver stop when the relative gap between the primal and dual bounds is less than $10^{-6}$. We record the computational results in Table~\ref{tb:robot}. For each method and each formulation, we record the average solving time (Mean) in seconds, the standard deviation (Std), the number of instances solved in the shortest time (Win), and the number of timeouts (Fail).
As we can see in Table~\ref{tb:robot}, the performance of the logarithmically sized formulations\footnote{The logarithmically sized formulations include: \textit{DLog}, \textit{LogE}, \textit{ZZB}, \textit{ZZI} of \textit{Base} and \textit{Merged}; \textit{DLog}, \textit{BRGC}, \textit{Balanced}, \textit{Biclique} of \textit{PWR}.} tends to remain stable as the number of polytope pieces and the polytope shape parameter, i.e., $N^{\operatorname{pre}}$ and $N^{\operatorname{seg}}$, get larger. On the other hand, the solving time of the linearly sized formulations\footnote{The linearly sized formulations include: \textit{Inc}, \textit{MC}, \textit{CC} of \textit{Base}; \textit{Inc} of \textit{Merged} and \textit{PWR}.} increases significantly as those two parameters get larger. The \textit{Merged} approaches generally perform better than the \textit{Base} approaches. The \textit{Biclique} formulation of \textit{PWR} performs the best among all approaches and formulations. Note that \textit{Inc} of \textit{PWR} is slower than \textit{Inc} of \textit{Base} and \textit{Merged}: the \textit{Inc} formulations of \textit{Base} and \textit{Merged} exploit the structure of univariate piecewise linear functions, whereas the \textit{Inc} formulation of \textit{PWR} uses a general framework for all disjunctive constraints.
\subsection{Share-of-Choice Product Design Problems} \label{sec:share_of_choice_experiments}
To test \textit{Merged} and \textit{PWR} methods on larger optimization problems, we consider a share-of-choice product design problem in marketing~\citep{bertsimas2017robust,camm2006conjoint,wang2009branch} that is also used to test the performance of PiecewiseLinearOpt package~\citep{huchette2022nonconvex}. The optimization problem can be expressed as
\begin{subequations}\label{eq:soc}
\begin{alignat}{2}
\max_{x, \mu, \bar{\mu}, p, \bar{p}} \qquad & \sum_{i=1}^v \lambda_i \bar{p}_i, \\
\text{s.t.} \qquad & \mu_i^s = \beta^{i, s} \cdot x & \forall s \in \llbracket S \rrbracket, i \in \llbracket v \rrbracket \\
& p^s_i = \frac{1}{1 + \exp(u_i - \mu^s_i)} \qquad & \forall s \in \llbracket S \rrbracket, i \in \llbracket v \rrbracket \label{eq:soc_c}\\
& \bar{\mu}_i = \frac{1}{S} \sum_{s=1}^S \beta^{i, s} \cdot x & \forall i \in \llbracket v \rrbracket \\
& \bar{p}_i = \frac{1}{1 + \exp(u_i - \bar{\mu}_i)}\qquad & \forall i \in \llbracket v \rrbracket \label{eq:soc_e} \\
& \sum_{i=1}^v \lambda_i p^s_i \geq C \sum_{i=1}^v \lambda_i \bar{p}_i & \forall s \in \llbracket S \rrbracket \label{eq:soc_f} \\
& 0 \leq x_j \leq 1 & \forall j \in \llbracket \eta \rrbracket,
\end{alignat}
\end{subequations}
\noindent where the product design space $x \in [0, 1]^\eta$, $v$ is the number of types of customers with shares of market $\lambda \in [0,1]^v$, and $\beta^{i, s} \in \mathbb{R}^{\eta}$ is the preference vector of each customer type $i$ of each scenario $s$ of $S$ scenarios. In~\eqref{eq:soc_c}, $p_i^s$ describes the probability of purchase from customer type $i$ under scenario $s$, where $u_i$ is a minimum ``utility hurdle''. In~\eqref{eq:soc_e}, $\bar{p}_i$ describes the overall probability (considering all scenarios) of purchase from customer type $i$. The constant $C$ is a nonnegative percentage, and~\eqref{eq:soc_f} ensures that the expected number of purchases in each scenario is greater than a certain percentage of the expected number of overall purchases. The objective of the optimization problem~\eqref{eq:soc} is to maximize the overall expected number of purchases among all scenarios.
The nonlinear function for which we need to find a piecewise linear relaxation has the form $f_u(x) = \frac{1}{1 + \exp(u - x)}$. By taking the second derivative, we know that the only additional breakpoint needed to ensure the convexity or concavity of each line segment is $u$.
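This observation can be checked numerically. In the following minimal sketch (illustrative names only), a finite-difference estimate of $f_u''$ is positive to the left of $x = u$ and negative to the right, so $u$ is the single inflection point:

```python
import math

def f_u(x, u):
    """Purchase probability curve from (eq:soc_c) and (eq:soc_e)."""
    return 1.0 / (1.0 + math.exp(u - x))

def second_derivative(x, u, h=1e-4):
    """Central finite-difference estimate of f_u''(x)."""
    return (f_u(x + h, u) - 2.0 * f_u(x, u) + f_u(x - h, u)) / h ** 2

# f_u is convex to the left of x = u and concave to the right, so u is the
# only breakpoint that must be added to keep each segment convex or concave.
u = 2.0
```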
In our computational experiments, we generate 20 random instances of the share-of-choice product design problem with the nonnegative percentage $C=0.2$, the number of customer types $v=10$, the number of scenarios $S=6$, and the dimension of the product design space $\eta=15$. We only compare the computational performance of the \textit{Merged} and \textit{PWR} methods and do not include any formulations from the \textit{Base} method because of its poor performance on the 2D inverse kinematics problems shown in Table~\ref{tb:robot}. For the same reason, we also exclude the \textit{Inc} formulation of \textit{PWR}. We use $N^{\operatorname{pre}}=50$, alter $N^{\operatorname{seg}} \in \{1,2,4\}$, set the time limit to 1800 seconds and the threshold of the relative gap between primal and dual bounds to $10^{-6}$, and report the results in Table~\ref{tb:soc}. As with the 2D inverse kinematics problems, for each method and each formulation, we record the average solving time (Mean) in seconds, the standard deviation (Std), the number of instances solved in the shortest time (Win), and the number of timeouts (Fail).
As shown in Table~\ref{tb:soc}, the \textit{Inc} formulation of \textit{Merged} is the fastest approach for $N^{\operatorname{pre}} = 50$ and $N^{\operatorname{seg}} = 1$. The \textit{Balanced} and \textit{Biclique} formulations of \textit{PWR} have the best performance among all approaches, with \textit{Biclique} slightly better than \textit{Balanced}.
\begin{table}[H]
\centering
\begin{adjustbox}{angle=0,scale=0.72}
\begin{tabular}{ll|llllll|llll}
&& \multicolumn{6}{c|}{Merged} & \multicolumn{4}{c}{PWR} \\
$N^{\operatorname{seg}}$ & Metric & \multicolumn{1}{c}{Inc} & \multicolumn{1}{c}{DLog} & \multicolumn{1}{c}{LogE} & \multicolumn{1}{c}{SOS2} & \multicolumn{1}{c}{ZZB} & \multicolumn{1}{c|}{ZZI} & \multicolumn{1}{c}{DLog} & \multicolumn{1}{c}{\textbf{BRGC}} & \multicolumn{1}{c}{\textbf{Balanced}} & \multicolumn{1}{c}{\textbf{Biclique}} \\ \hline
1& Mean (s) & 74.43 & 777.19 & 90.60& 1620.13& 172.81& \textbf{72.34} & 163.22 & 134.90 & 134.82 & 79.36\\
& Std& \textbf{43.15} & 577.12 & 142.59 & 553.62 & 206.92& 70.51 & 386.89 & 392.30 & 392.52 & 133.30 \\
& Win& 3 & 0& 2& 2& 0 & 1 & 0& \textbf{4}& \textbf{4}& \textbf{4}\\
& Fail & \textbf{0} & 3& \textbf{0}& 18 & \textbf{0} & \textbf{0} & 1& 1& 1& \textbf{0}\\
2& Mean (s) & 199.46& 1396.09& 166.88 & 1350.50& 322.35& 187.53& 130.31 & 158.13 & 79.58& \textbf{67.31}\\
& Std& 233.65& 432.28 & 378.76 & 798.80 & 305.14& 382.89& 94.93& 387.38 & \textbf{31.44}& 39.56\\
& Win& 0 & 0& 2& 5& 0 & 2 & 1& 2& 2& \textbf{6}\\
& Fail & \textbf{0} & 7& \textbf{0}& 15 & \textbf{0} & 1 & \textbf{0}& 1& \textbf{0}& \textbf{0}\\
4& Mean (s) & 1412.30 & 1789.85& 1230.25& 1531.24& 818.40& 1172.91 & 328.62 & 328.45 & 251.48 & \textbf{199.07} \\
& Std& 403.53& \textbf{45.44}& 616.62 & 656.41 & 536.85& 691.58& 351.28 & 364.75 & 254.98 & 114.74 \\
& Win& 0 & 0& 0& 3& 0 & 0 & 3& 0& 3& \textbf{11} \\
& Fail & 6 & 19 & 9& 17 & 3 & 9 & \textbf{0}& 1& \textbf{0}& \textbf{0}
\end{tabular}
\end{adjustbox}
\caption{Computational results for share-of-choice problems with $N^{\operatorname{pre}} = 50$.}
\label{tb:soc}
\end{table}
\subsection{In Comparison with a Nonlinear Solver} \label{sec:nonlinear}
We test our piecewise linear relaxation approach against the mixed-integer nonlinear programming (MINLP) solver of SCIP v8.0.2~\citep{bestuzheva2021scip} with one thread on a Red Hat Enterprise Linux version 7.9 workstation with 16 GB of RAM and an Intel(R) Xeon(R) W-2102 CPU with 4 cores @ 2.90GHz. Gurobi v10.0.0 is used as the MILP solver for our piecewise linear relaxation approach. We generate 20 random instances of the share-of-choice product design problem with the nonnegative percentage $C=0.2$, the number of customer types $v=10$, the number of scenarios $S=6$, and the dimension of the product design space $\eta=15$. We test four methods:
\begin{enumerate}
\item \textit{MINLP}: Solve the original nonlinear problem by MINLP solver of SCIP.
\item \textit{MILP Tiny}: Use Gurobi's MILP solver to solve the \textit{Inc} formulation of \textit{Merged} with $N^{\operatorname{pre}}=10$ and $N^{\operatorname{seg}}=1$.
\item \textit{MILP Small}: Use Gurobi's MILP solver to solve the \textit{Inc} formulation of \textit{Merged} with $N^{\operatorname{pre}}=10$ and $N^{\operatorname{seg}}=2$.
\item \textit{MILP Large}: Use Gurobi's MILP solver to solve the \textit{Balanced} formulation of \textit{PWR} with $N^{\operatorname{pre}}=50$ and $N^{\operatorname{seg}}=2$.
\end{enumerate}
We want to note that the methods and formulations chosen for the piecewise linear relaxation might not be the fastest among all approaches listed in Section~\ref{sec:pwr_computational}. The time limit is set to 600 seconds and the threshold of the relative gap between primal and dual bounds is set to $10^{-6}$.
\begin{table}[h]
\centering
\begin{adjustbox}{angle=0,scale=0.7}
\begin{tabular}{l|lll|lll|lll|lll}
& \multicolumn{3}{c|}{MINLP} & \multicolumn{3}{c|}{MILP Tiny} & \multicolumn{3}{c|}{MILP Small} & \multicolumn{3}{c}{MILP Large} \\
Instance & Primal & Dual & Time (s) & Primal & Dual & Time (s) & Primal & Dual & Time (s) & Primal & Dual & Time (s) \\ \hline
1 & \textbf{0.7244} & 0.8352 & 600.00 & 0.7633 & 0.7633 & \textbf{2.40} & 0.7477 & 0.7477 & 3.93 & 0.7250 & \textbf{0.7250} & 72.58 \\
2 & \textbf{0.4952} & 0.6785 & 600.00 & 0.5206 & 0.5206 & \textbf{2.41} & 0.5051 & 0.5051 & 4.16 & 0.4960 & \textbf{0.4960} & 79.85 \\
3 & \textbf{0.3229} & 0.5531 & 600.00 & 0.3737 & 0.3737 & \textbf{2.32} & 0.3693 & 0.3693 & 4.34 & 0.3242 & \textbf{0.3242} & 130.21 \\
4 & \textbf{0.5200} & 0.8619 & 600.00 & 0.5676 & 0.5676 & \textbf{3.58} & 0.5464 & 0.5464 & 14.99 & 0.5207 & \textbf{0.5207} & 145.95 \\
5 & \textbf{0.3687} & 0.5740 & 600.00 & 0.4142 & 0.4142 & \textbf{7.45} & 0.3910 & 0.3910 & 17.22 & 0.3710 & \textbf{0.3710} & 263.65 \\
6 & \textbf{0.4323} & 0.5904 & 600.00 & 0.4784 & 0.4784 & \textbf{6.96} & 0.4655 & 0.4655 & 14.48 & 0.4333 & \textbf{0.4333} & 125.73 \\
7 & \textbf{0.4540} & 0.7296 & 600.00 & 0.4797 & 0.4797 & \textbf{9.25} & 0.4627 & 0.4627 & 21.58 & 0.4545 & \textbf{0.4545} & 132.88 \\
8 & \textbf{0.2932} & 0.6504 & 600.00 & 0.3202 & 0.3202 & \textbf{12.87} & 0.3016 & 0.3016 & 19.62 & 0.2940 & \textbf{0.2940} & 358.16 \\
9 & \textbf{0.5677} & 0.6142 & 600.00 & 0.6039 & 0.6039 & \textbf{1.68} & 0.5930 & 0.5930 & 3.04 & 0.5692 & \textbf{0.5692} & 89.97 \\
10 & \textbf{0.3028} & \textbf{0.3028} & \textbf{3.76} & 0.3248 & 0.3248 & 15.97 & 0.3164 & 0.3164 & 93.13 & 0.2638 & 0.3108 & 600.00 \\
11 & \textbf{0.3160} & 0.6123 & 600.00 & 0.3424 & 0.3424 & \textbf{14.20} & 0.3374 & 0.3374 & 23.73 & $-\infty$ & \textbf{0.3199} & 600.00 \\
12 & \textbf{0.2621} & 0.5256 & 600.00 & 0.3175 & 0.3175 & \textbf{2.83} & 0.3101 & 0.3101 & 16.84 & 0.2672 & \textbf{0.2672} & 124.11 \\
13 & \textbf{0.4110} & 0.7255 & 600.00 & 0.4229 & 0.4229 & \textbf{9.22} & 0.4160 & 0.4160 & 10.93 & 0.4114 & \textbf{0.4114} & 93.33 \\
14 & \textbf{0.4600} & \textbf{0.4600} & \textbf{0.21} & 0.5068 & 0.5068 & 2.74 & 0.4897 & 0.4897 & 21.97 & 0.4607 & 0.4607 & 160.21 \\
15 & \textbf{0.4172} & 0.6003 & 600.00 & 0.4454 & 0.4454 & \textbf{1.95} & 0.4312 & 0.4312 & 3.60 & 0.4181 & \textbf{0.4181} & 85.07 \\
16 & \textbf{0.4439} & 0.7993 & 600.00 & 0.4900 & 0.4900 & \textbf{10.65} & 0.4770 & 0.4770 & 17.37 & 0.4452 & \textbf{0.4452} & 152.47 \\
17 & \textbf{0.2395} & 0.2769 & 600.00 & 0.2483 & 0.2483 & \textbf{1.41} & 0.2413 & 0.2413 & 6.03 & 0.2397 & \textbf{0.2397} & 68.24 \\
18 & \textbf{0.3490} & 0.5918 & 600.00 & 0.3933 & 0.3933 & \textbf{10.22} & 0.3872 & 0.3872 & 9.77 & 0.3529 & \textbf{0.3529} & 181.15 \\
19 & \textbf{0.3869} & 0.8080 & 600.00 & 0.4138 & 0.4138 & \textbf{3.42} & 0.4069 & 0.4069 & 3.98 & 0.3876 & \textbf{0.3876} & 111.61 \\
20 & \textbf{0.3573} & 0.4461 & 600.00 & 0.3946 & 0.3946 & \textbf{1.33} & 0.3662 & 0.3662 & 1.51 & 0.3580 & \textbf{0.3580} & 97.14
\end{tabular}
\end{adjustbox}
\caption{Comparison between solving original nonlinear problems and piecewise linear relaxation problems.}
\label{tb:nlp}
\end{table}
We present the computational results in Table~\ref{tb:nlp}. Note that the primal solutions to the relaxation problems might not be feasible in the original problems. In most instances, solving the piecewise linear relaxation problems with the MILP solver of Gurobi is much faster than solving the original problems with the MINLP solver of SCIP. In particular, the MILP solver can quickly find high-quality dual bounds of the original problems in most cases. Furthermore, increasing the number of polytope pieces $N^{\operatorname{pre}}$ and the polytope shape parameter $N^{\operatorname{seg}}$ improves the dual bounds of the relaxation problems. However, there are several instances, such as the 10th and 11th, in which the MILP solver struggles to find good feasible solutions. A combination of MINLP heuristics for the primal bound and piecewise linear relaxation for the dual bound could lead to a faster solving process for this type of problem.
\section{Conclusions and Future Work}
This paper studies MILP formulations of piecewise linear relaxations of nonlinear functions. For univariate nonlinear functions, we review the MILP formulations for piecewise linear functions and discuss how to use them to formulate piecewise linear relaxations. Then, we introduce generalized 1D-ordered CDCs and present Gray code and biclique cover formulations. We demonstrate both the relations and the differences between Gray code and biclique cover formulations and build connections with optimal edge rankings of trees. Next, we extend the idea to higher dimensions, generalized $n$D-ordered CDCs, and provide logarithmically sized ideal formulations. We also test our formulations of piecewise linear relaxations of univariate nonlinear functions against existing formulations, with applications in 2D inverse kinematics and share-of-choice product design problems. Computational results show that the Gray code (\textit{Balanced}) and biclique cover (\textit{Biclique}) formulations achieve significant speed-ups over existing approaches.
Several research directions can follow from this work. Could we design computationally more efficient formulations for generalized 1D-ordered CDCs? What are the computational performances of different approaches for piecewise linear relaxations with more than one variable? Could we design an efficient procedure to find a piecewise linear relaxation of a given multivariate nonlinear function such that it can be modeled by generalized $n$D-ordered CDCs?
\theendnotes
\ACKNOWLEDGMENT{}
\begin{APPENDICES}
\section{Some Univariate Piecewise Linear Function Formulations} \label{sec:pwr_pre}
In this section, we also review the incremental (Inc), multiple choice (MC), convex combination (CC), and logarithmic disaggregated convex combination (DLog)~\citep{vielma2010mixed} formulations used in our computational experiments in Section~\ref{sec:pwr_computational}.
\begin{proposition}
Given a univariate piecewise linear function $f(x)$ where $f$ has $d+1$ breakpoints $L = \hat{x}_1 < \hat{x}_2 < \hdots < \hat{x}_{d+1} = U \in \mathbb{R}$ and, for simplicity, $\hat{y}_i = f(\hat{x}_i)$, the incremental formulation of $\{(x, y): y=f(x), L \leq x \leq U\}$ can be described by
\begin{subequations} \label{eq:pwl_inc}
\begin{alignat}{2}
& x = \hat{x}_1 + \sum_{i=1}^d \delta_i \cdot (\hat{x}_{i+1} - \hat{x}_i), \qquad & y = \hat{y}_1 + \sum_{i=1}^d \delta_i \cdot (\hat{y}_{i+1} - \hat{y}_i) \\
& \delta_{i+1} \leq z_i \leq \delta_i, \quad \forall i \in \llbracket d \rrbracket & x,y \in \mathbb{R}, \delta \in [0, 1]^{d+1}, z \in \{0, 1\}^d.
\end{alignat}
\end{subequations}
\noindent Furthermore, the multiple choice formulation of $\{(x, y): y=f(x), L \leq x \leq U\}$ is
\begin{subequations} \label{eq:pwl_mc}
\begin{alignat}{2}
& x^{\operatorname{copy}}_i \geq \hat{x}_i z_i, \quad x^{\operatorname{copy}}_i \leq \hat{x}_{i+1} z_i, & \forall i \in \llbracket d \rrbracket\\
& y^{\operatorname{copy}}_i = \hat{y}_i z_i + \frac{\hat{y}_{i+1} - \hat{y}_i}{\hat{x}_{i+1} - \hat{x}_i} \cdot (x^{\operatorname{copy}}_i - \hat{x}_i z_i), \qquad & \forall i \in \llbracket d \rrbracket\\
& \sum_{i=1}^d x^{\operatorname{copy}}_i = x, & x^{\operatorname{copy}} \in \mathbb{R}^{d} \\
& \sum_{i=1}^d y^{\operatorname{copy}}_i = y, & y^{\operatorname{copy}} \in \mathbb{R}^d \\
& \sum_{i=1}^d z_i = 1, & x, y \in \mathbb{R}, z \in \{0, 1\}^d.
\end{alignat}
\end{subequations}
\end{proposition}
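As a concrete illustration of how the incremental formulation encodes a point, the following Python sketch (our own helper, with illustrative breakpoint data, not code from the paper) computes a feasible $(\delta, z)$ assignment for a given $x$ and recovers $y = f(x)$ from it.

```python
# Illustrative sketch (not from the paper): encode a point x on a univariate
# piecewise linear function via the incremental formulation's (delta, z)
# variables and recover y = f(x). Breakpoint data below are assumptions.

def incremental_encode(xs, ys, x):
    """Return (delta, z, y) consistent with the incremental formulation."""
    d = len(xs) - 1                      # number of segments
    delta = [0.0] * d
    z = [0] * d
    for i in range(d):
        if x >= xs[i + 1]:               # segment i fully traversed
            delta[i], z[i] = 1.0, 1
        elif x > xs[i]:                  # x lies inside segment i
            delta[i] = (x - xs[i]) / (xs[i + 1] - xs[i])
    y = ys[0] + sum(delta[i] * (ys[i + 1] - ys[i]) for i in range(d))
    return delta, z, y

xs = [0.0, 1.0, 2.0, 3.0]                # breakpoints (illustrative)
ys = [0.0, 1.0, 0.5, 2.0]                # f at each breakpoint
delta, z, y = incremental_encode(xs, ys, 1.5)
# here delta == [1.0, 0.5, 0.0], z == [1, 0, 0], and y == 0.75, so the
# ordering constraints delta[i+1] <= z[i] <= delta[i] all hold
```

The "filled prefix" pattern of $\delta$ and $z$ is exactly what the ordering constraints $\delta_{i+1} \leq z_i \leq \delta_i$ enforce.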
Then, we will introduce the convex combination (CC) formulation of $\operatorname{SOS} 2$, which can be used to formulate univariate piecewise linear functions.
\begin{proposition}
Given a positive integer $d$, a valid formulation (convex combination formulation) of $\lambda \in \operatorname{SOS} 2(d+1)$ is
\begin{subequations} \label{eq:sos2_cc}
\begin{alignat}{2}
& \lambda_1 \leq z_1, & \lambda_{d+1} \leq z_d \\
& \lambda_i \leq z_{i-1} + z_i, \qquad & \forall i \in \{2, \hdots, d\}\\
& \sum_{i=1}^d z_i = 1, \qquad & z \in \{0, 1\}^d, \lambda \in \Delta^{d+1}.
\end{alignat}
\end{subequations}
\end{proposition}
Another formulation for $\{(x, y): y=f(x), x \in [L, U]\}$ where $f: [L, U] \rightarrow \mathbb{R}$ is the \textit{logarithmic disaggregated convex combination} formulation, denoted DLog~\citep{vielma2010mixed}. We use the version implemented in PiecewiseLinearOpt~\citep{huchette2022nonconvex}.
\begin{proposition}
Given a univariate piecewise linear function $f(x)$ where $f$ has $d+1$ breakpoints $L= \hat{x}_1 < \hat{x}_2 < \hdots < \hat{x}_{d+1} = U \in \mathbb{R}$ and $\hat{y}_i = f(\hat{x}_i)$ for simplicity, let $r = \lceil \log_2(d) \rceil$ and $\{h^{i}\}_{i=1}^{d} \subseteq \{0, 1\}^{r}$ be the first $d$ binary vectors of a BRGC for $2^r$ elements. Then, the DLog formulation of $\{(x, y): y=f(x), L \leq x \leq U\}$ can be described by
\begin{subequations} \label{eq:pwl_DLog}
\begin{alignat}{2}
& \gamma_{1}^1 + \gamma_{d+1}^d + \sum_{i=2}^{d}(\gamma_{i}^{i-1} + \gamma_{i}^{i}) = 1 \\
& x = \gamma_{1}^1 \hat{x}_1 + \gamma_{d+1}^d \hat{x}_{d+1} + \sum_{i=2}^{d}(\gamma_{i}^{i-1} + \gamma_{i}^{i}) \hat{x}_{i} \\
& y = \gamma_{1}^{1} \hat{y}_1 + \gamma_{d+1}^{d} \hat{y}_{d+1} + \sum_{i=2}^{d}(\gamma_{i}^{i-1} + \gamma_{i}^{i}) \hat{y}_{i} \\
& \sum_{i=1}^d (\gamma_i^i + \gamma_{i+1}^i) h^i_j = z_j, & \forall j \in \llbracket r \rrbracket \\
& 0 \leq \gamma \leq 1 & x,y\in \mathbb{R}, z \in \{0, 1\}^r.
\end{alignat}
\end{subequations}
\end{proposition}
We also denote by $K^r = \{K^{r, i}\}_{i=1}^{2^r} \subseteq \{0, 1\}^{r}$ a BRGC for $2^r$ elements, and define $C^r = \{C^{r, i}\}_{i=1}^{2^r} \subseteq \mathbb{Z}_{\geq 0}^r$ by $C^{r,i}_k = \sum_{j=2}^i |K^{r,j}_k - K^{r, j-1}_k|$ for each $i \in \llbracket 2^r \rrbracket$ and $k \in \llbracket r \rrbracket$. In other words, $C^{r, i}_k$ is the number of value changes in the sequence $(K^{r, 1}_k, \hdots, K^{r, i}_k)$.
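The BRGC $K^r$ and the change counts $C^r$ used below by the ZZI and ZZB formulations can be generated mechanically; the sketch below is our own helper code, not part of the paper.

```python
# Sketch: build a binary reflected Gray code (BRGC) K^r and the cumulative
# change counts C^r, mirroring the definitions in the text.

def brgc(r):
    """K^r: 2**r binary vectors in which adjacent vectors differ in one entry."""
    if r == 0:
        return [[]]
    prev = brgc(r - 1)
    return [[0] + v for v in prev] + [[1] + v for v in reversed(prev)]

def change_counts(K):
    """C^{r,i}_k = number of value changes in (K^{r,1}_k, ..., K^{r,i}_k)."""
    r = len(K[0])
    C = [[0] * r]
    for i in range(1, len(K)):
        C.append([C[-1][k] + abs(K[i][k] - K[i - 1][k]) for k in range(r)])
    return C

K = brgc(3)
# Gray code property: adjacent codewords differ in exactly one entry
assert all(sum(a != b for a, b in zip(K[i], K[i + 1])) == 1
           for i in range(len(K) - 1))
C = change_counts(K)          # e.g. the last row C[-1] equals [1, 2, 4]
```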
\begin{proposition} \label{prop:sos2_loge}
Let $r = \lceil \log_2(d) \rceil$ and $K^r = \{K^{r, i}\}_{i=1}^{d} \subseteq \{0, 1\}^{r}$ be the first $d$ binary vectors of a BRGC for $2^r$ elements. Then, an ideal formulation (LogIB) of $\lambda \in \operatorname{SOS} 2 (d+1)$ can be expressed as
\begin{subequations} \label{eq:sos2_logib}
\begin{alignat}{2}
& \sum_{v \in L^j} \lambda_v \leq z_j, \sum_{v \in R^j} \lambda_v \leq 1 - z_j, \qquad & \forall j \in \llbracket r\rrbracket\\
& \lambda \in \Delta^{d+1}, & z \in \{0, 1\}^r,
\end{alignat}
\end{subequations}
\noindent where $L^j = \{\tau \in \llbracket d+1 \rrbracket: K^{r, \tau-1}_j = 1 \text{ and } K^{r, \tau}_j = 1\}$ and $R^j = \{\tau \in \llbracket d+1 \rrbracket: K^{r, \tau-1}_j = 0 \text{ and } K^{r, \tau}_j = 0\}$ for $j \in \llbracket r \rrbracket$. Note that $K^{r, 0} \equiv K^{r, 1}$ and $K^{r, d} \equiv K^{r, d+1}$ for simplicity.
Under the same settings, an ideal formulation (LogE) of $\lambda \in \operatorname{SOS} 2(d+1)$ can be expressed as
\begin{subequations} \label{eq:sos2_loge}
\begin{alignat}{2}
& \sum_{v=1}^{d+1} \min\{K^{r, v-1}_j, K^{r, v}_j\} \lambda_{v} \leq z_j \leq \sum_{v=1}^{d+1} \max\{K^{r, v-1}_j, K^{r, v}_j\} \lambda_{v}, \quad & \forall j \in \llbracket r \rrbracket \\
& \lambda \in \Delta^{d+1} \\
& z_j \in \{0, 1\}, & \forall j \in \llbracket r\rrbracket.
\end{alignat}
\end{subequations}
\end{proposition}
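To make the sets $L^j$ and $R^j$ concrete, here is a small sketch (the helper name is our own) that derives them from the first $d$ BRGC codewords under the boundary conventions $K^{r,0} \equiv K^{r,1}$ and $K^{r,d+1} \equiv K^{r,d}$.

```python
# Sketch: the LogIB index sets L^j, R^j from the first d BRGC codewords.

def logib_sets(K, d):
    """L[j] = {tau in [d+1]: K^{tau-1}_j = K^{tau}_j = 1}; R[j] likewise with 0."""
    r = len(K[0])
    padded = [K[0]] + K[:d] + [K[d - 1]]     # padded[tau] plays K^{r,tau}
    L = [{t for t in range(1, d + 2)
          if padded[t - 1][j] == padded[t][j] == 1} for j in range(r)]
    R = [{t for t in range(1, d + 2)
          if padded[t - 1][j] == padded[t][j] == 0} for j in range(r)]
    return L, R

K = [[0, 0], [0, 1], [1, 1], [1, 0]]         # first 4 BRGC codewords, r = 2
L, R = logib_sets(K, d=4)
# first constraint pair: lambda_4 + lambda_5 <= z_1, lambda_1 + lambda_2 <= 1 - z_1
```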
\begin{proposition} \label{prop:sos2_zzi_zzb}
Let $r = \lceil \log_2(d) \rceil$ and denote $C^{r, 0} \equiv C^{r, 1}$ and $C^{r, d} \equiv C^{r, d+1}$ for simplicity. Then, two ideal formulations, ZZI~\eqref{form:zzi} and ZZB~\eqref{form:zzb}, for $\lambda \in \operatorname{SOS} 2 (d+1)$ are given by
\begin{alignat}{2} \label{form:zzi}
& \sum_{v=1}^{d+1} C^{r,v-1}_k \lambda_v \leq z_k \leq \sum_{v=1}^{d+1} C^{r, v}_k \lambda_v, \qquad & \forall k \in \llbracket r \rrbracket, (\lambda, z) \in \Delta^{d+1} \times \mathbb{Z}^r
\end{alignat}
\noindent and
\begin{alignat}{2} \label{form:zzb}
& \sum_{v=1}^{d+1} C^{r,v-1}_k \lambda_v \leq z_k + \sum_{l=k+1}^r 2^{l-k-1} z_l \leq \sum_{v=1}^{d+1} C^{r, v}_k \lambda_v, \qquad & \forall k \in \llbracket r \rrbracket, (\lambda, z) \in \Delta^{d+1} \times \{0, 1\}^r.
\end{alignat}
\end{proposition}
\section{Merged Incremental Formulation} \label{sec:mupwr_merged}
In this section, we will provide the formulations of \textit{Merged} approach in our computational experiments (Section~\ref{sec:pwr_computational}).
\begin{proposition} \label{prop:multi_pwl_inc}
Given $k$ piecewise linear functions $f^i: [L, U] \rightarrow \mathbb{R}$ and corresponding breakpoints $L=\hat{x}^i_1 < \hdots < \hat{x}^i_{d_i+1}=U$ for $i \in \llbracket k \rrbracket$, a valid formulation for $\{(x, y^1, \hdots, y^k): x \in [L, U], y^i = f^i(x), \forall i \in \llbracket k \rrbracket\}$ is
\begin{subequations} \label{eq:multi_pwl_inc}
\begin{alignat}{2}
& y^i = f^i(\hat{x}_1) + \sum_{v=1}^d \delta_v \cdot (f^i(\hat{x}_{v+1}) - f^i(\hat{x}_v)), & \forall i \in \llbracket k \rrbracket \\
& x = \hat{x}_1 + \sum_{v=1}^d \delta_v \cdot (\hat{x}_{v+1} - \hat{x}_v) \\
& \delta_{v+1} \leq z_v \leq \delta_v, \quad \forall v \in \llbracket d \rrbracket & \delta \in [0, 1]^{d+1}, z \in \{0, 1\}^d.
\end{alignat}
\end{subequations}
\noindent where $d = |\bigcup_{i \in \llbracket k \rrbracket} \{\hat{x}^i_j\}_{j=1}^{d_i+1}| - 1$ and $\{\hat{x}_v\}_{v=1}^{d+1} = \bigcup_{i \in \llbracket k \rrbracket} \{\hat{x}^i_j\}_{j=1}^{d_i+1}$ such that $\hat{x}_v < \hat{x}_{v+1}$ for $v \in \llbracket d \rrbracket$.
\end{proposition}
The approach of Proposition~\ref{prop:multi_pwl_inc} can also be applied to other univariate piecewise linear function formulations, such as the MC and DLog formulations in Appendix~\ref{sec:pwr_pre}.
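The merged breakpoint grid in Proposition~\ref{prop:multi_pwl_inc} is simply the sorted union of the individual functions' breakpoints, so a single set of $(\delta, z)$ variables serves every $f^i$; a minimal sketch with illustrative data:

```python
# Sketch: the shared breakpoint grid for k piecewise linear functions on [L, U].

def merged_breakpoints(breakpoint_lists):
    """Sorted union of all breakpoint lists."""
    return sorted(set().union(*map(set, breakpoint_lists)))

bps = merged_breakpoints([[0.0, 1.0, 3.0], [0.0, 2.0, 3.0]])
# bps == [0.0, 1.0, 2.0, 3.0], so d = len(bps) - 1 = 3 shared segments
```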
\section{Incremental and DLog Formulations of Generalized 1D-Ordered CDCs} \label{sec:inc_dlog}
\citet{yildiz2013incremental} generalized the incremental formulation for piecewise linear functions to any finite union of polyhedra with identical recession cones. We adapt it for generalized 1D-ordered CDCs.
\begin{proposition}
Given $\mathbfcal{S} = \{S^i\}_{i=1}^d$ such that $\operatorname{CDC}(\mathbfcal{S})$ is a generalized 1D-ordered CDC, a formulation for $\lambda \in \operatorname{CDC}(\mathbfcal{S})$ can be expressed as
\begin{subequations} \label{eq:pwr_inc_formulation}
\begin{alignat}{2}
& \lambda_v = \sum_{S \in \mathbf{S}: v \in S} \gamma^S_v, & \forall v \in J \\
& \sum_{v \in S^i} \gamma^{S^i}_v = z_i, &\forall i \in \llbracket d \rrbracket \\
& u_{i} \geq u_{i+1}, &\forall i \in \llbracket d-2 \rrbracket \\
& z_{i+1} = u_{i} - u_{i+1}, &\forall i \in \llbracket d-2 \rrbracket \\
& z_1 = 1 - u_1, & z_d = u_{d-1} \\
& 0 \leq \gamma \leq 1 & z \in [0, 1]^{d}, u \in \{0, 1\}^{d-1},
\end{alignat}
\end{subequations}
\noindent where $J = \bigcup_{S \in \mathbfcal{S}} S$.
\end{proposition}
We can also construct a DLog formulation for generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$. The formulation is summarized by \citet{vielma2010mixed} based on ideas from \citet{ibaraki1976integer,vielma2011modeling}.
\begin{proposition}
Given $\mathbfcal{S} = \{S^i\}_{i=1}^d$ such that $\operatorname{CDC}(\mathbfcal{S})$ is a generalized 1D-ordered CDC, let $r = \lceil \log_2(d) \rceil$ and $\{h^{i}\}_{i=1}^{d} \subseteq \{0, 1\}^{r}$ be the first $d$ binary vectors of a BRGC for $2^r$ elements. Then, a formulation for $\lambda \in \operatorname{CDC}(\mathbfcal{S})$ can be expressed as
\begin{subequations} \label{eq:pwr_dLog_formulation}
\begin{alignat}{2}
& \lambda_v = \sum_{S \in \mathbf{S}: v \in S} \gamma^S_v, & \forall v \in J \\
& \sum_{v \in S^i} \gamma^{S^i}_v = z_i, &\forall i \in \llbracket d \rrbracket \\
& \sum_{i=1:h^i_j=0}^d z_i \leq u_j, & \forall j \in \llbracket r \rrbracket \\
& \sum_{i=1:h^i_j=1}^d z_i \leq 1-u_j, & \forall j \in \llbracket r \rrbracket \\
& 0 \leq \gamma \leq 1 & z \in [0, 1]^{d}, u \in \{0, 1\}^{r},
\end{alignat}
\end{subequations}
\noindent where $J = \bigcup_{S \in \mathbfcal{S}} S$.
\end{proposition}
\section{Proof of Theorem~\ref{thm:g1d_ideal}} \label{sec:proof_g1d_ideal}
We start by exploring some properties of Gray codes. First, we remark that $t \geq \lceil \log_2(d) \rceil$ for any Gray code $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ because the vectors $h^i$ are distinct.
\begin{remark}
Let $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ be an arbitrary Gray code for $d$ numbers. Then, $t \geq \lceil \log_2(d) \rceil$.
\end{remark}
Lemma~\ref{lm:gray_code_3} shows that, given an arbitrary Gray code $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ for $d \geq 3$ numbers and three distinct indices $i', i'', i''+1 \in \llbracket d \rrbracket$, there must exist an entry $j$ such that $1 - h^{i'}_j = h^{i''}_j = h^{i''+1}_j$. Lemma~\ref{lm:gray_code_4} then establishes a second property: given an arbitrary Gray code $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ for $d \geq 4$ numbers and distinct indices $i', i'+1, i'', i''+1 \in \llbracket d \rrbracket$, there must exist an entry $j$ such that $1 - h^{i'}_j = 1 - h^{i'+1}_j = h^{i''}_j = h^{i''+1}_j$.
\begin{lemma} \label{lm:gray_code_3}
Let $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ be an arbitrary Gray code for $d$ numbers and $i', i'', i''+1 \in \llbracket d \rrbracket$ be three distinct integers. Then, there exists $j \in \llbracket t \rrbracket$ such that $1 - h^{i'}_j = h^{i''}_j = h^{i''+1}_j$.
\end{lemma}
\proof{Proof}
Since $d \geq 3$, we have $t \geq \lceil \log_2(3) \rceil = 2$. Since $h^{i''}$ and $h^{i''+1}$ differ in exactly one entry, denote that entry by $j'$. Thus, $h^{i''}_{j'} = 1 - h^{i''+1}_{j'}$ and $h^{i''}_{j} = h^{i''+1}_j$ for $j \neq j'$. Since $h^{i'}_{j'} \in \{0, 1\}$, either $h^{i'}_{j'} = h^{i''}_{j'}$ or $h^{i'}_{j'} = h^{i''+1}_{j'}$; without loss of generality, assume $h^{i'}_{j'} = h^{i''}_{j'}$. Because $h^{i'}$ and $h^{i''}$ are distinct binary vectors, there exists $j \neq j'$ such that $1 - h^{i'}_{j} = h^{i''}_{j} = h^{i''+1}_{j}$.
\Halmos\endproof
\begin{lemma} \label{lm:gray_code_4}
Let $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ be an arbitrary Gray code for $d$ numbers and $i', i'' \in \llbracket d -1\rrbracket$ be two distinct integers such that $|i' - i''| \geq 2$. Then, there exists $j \in \llbracket t \rrbracket$ such that $1 - h^{i'}_j = 1 - h^{i'+1}_j = h^{i''}_j = h^{i''+1}_j$.
\end{lemma}
\proof{Proof}
Since $d \geq 4$, we have $t \geq \lceil \log_2(4) \rceil = 2$. Since $h^{i'}$ and $h^{i'+1}$ differ in exactly one entry, denote that entry by $j'$; similarly, $h^{i''}$ and $h^{i''+1}$ differ in exactly one entry, which we denote by $j''$. If $j' = j''$, then there must exist $j \in \llbracket t \rrbracket$ such that $1 - h^{i'}_j = 1 - h^{i'+1}_j = h^{i''}_j = h^{i''+1}_j$; otherwise, $h^{i'} = h^{i''}$ or $h^{i'+1} = h^{i''}$, a contradiction.
If $j' \neq j''$, we may assume $j' < j''$ without loss of generality. Then, we define
\begin{align*}
H := \begin{bmatrix}
h^{i'}_{j'} & h^{i'}_{j''} \\ h^{i'+1}_{j'} & h^{i'+1}_{j''} \\
h^{i''}_{j'} & h^{i''}_{j''} \\ h^{i''+1}_{j'} & h^{i''+1}_{j''} \\
\end{bmatrix} =
\begin{bmatrix} a & b \\ 1 - a & b \\ d & c \\ d & 1 - c \\ \end{bmatrix},
\end{align*}
\noindent for some $a, b, c, d \in \{0, 1\}$. It is not hard to see that, no matter which values $a, b, c, d \in \{0, 1\}$ take, one of $H_{1, :}$ and $H_{2, :}$ must equal one of $H_{3, :}$ and $H_{4, :}$, where $H_{i,:}$ is the $i$-th row of $H$. Thus, $t \geq 3$.
For any $j \in \llbracket t \rrbracket \setminus \{j', j''\}$, $h^{i'}_j = h^{i'+1}_j$ and $h^{i''}_j = h^{i''+1}_j$. Note that $h^{i'}$, $h^{i'+1}$, $h^{i''}$, and $h^{i''+1}$ are distinct binary vectors. Thus, there exists $j \in \llbracket t \rrbracket \setminus \{j', j''\}$ such that $1 - h^{i'}_j = 1 - h^{i'+1}_j = h^{i''}_j = h^{i''+1}_j$.
\Halmos\endproof
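Both lemmas can also be spot-checked by brute force on a concrete Gray code. The script below (our own; it checks only the BRGC for $d = 8$, whereas the lemmas hold for every Gray code) verifies them exhaustively on that instance.

```python
# Sketch: exhaustive check of the two Gray code lemmas on the BRGC with d = 8.

def brgc(r):
    if r == 0:
        return [[]]
    prev = brgc(r - 1)
    return [[0] + v for v in prev] + [[1] + v for v in reversed(prev)]

h = brgc(3)                  # Gray code for d = 8 numbers, t = 3 entries
d, t = len(h), len(h[0])

# First lemma: for i1 distinct from i2 and i2+1, some entry j separates
# h[i1] from the pair (h[i2], h[i2+1]).
for i1 in range(d):
    for i2 in range(d - 1):
        if i1 in (i2, i2 + 1):
            continue
        assert any(1 - h[i1][j] == h[i2][j] == h[i2 + 1][j] for j in range(t))

# Second lemma: for disjoint adjacent pairs, some entry j separates both pairs.
for i1 in range(d - 1):
    for i2 in range(d - 1):
        if abs(i1 - i2) < 2:
            continue
        assert any(1 - h[i1][j] == 1 - h[i1 + 1][j] == h[i2][j] == h[i2 + 1][j]
                   for j in range(t))
```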
Next, we show in Proposition~\ref{prop:g1d_gray_code} that Gray codes can be used to construct biclique covers of the conflict graphs of generalized 1D-ordered CDCs. Lemmas~\ref{lm:gray_code_3} and~\ref{lm:gray_code_4} are used to show that the constructed bicliques cover all edges.
\begin{proposition} \label{prop:g1d_gray_code}
Let $\operatorname{CDC}(\mathbfcal{S})$ be a generalized 1D-ordered CDC with $\mathbfcal{S} = \{S^i\}_{i=1}^d$ and $\{h^i\}_{i=1}^d \subseteq \{0, 1\}^t$ be an arbitrary Gray code for $d$ numbers, then
\begin{align}
\left\{L^j := \bigcup_{i=1: h^i_j=0}^d S^i \setminus \bigcup_{i=1: h^i_j=1}^d S^i, R^j := \bigcup_{i=1: h^i_j=1}^d S^i \setminus \bigcup_{i=1: h^i_j=0}^d S^i \right\}_{j=1}^t
\end{align}
\noindent is a biclique cover of the conflict graph $G_{\mathbfcal{S}}^c$ of $\operatorname{CDC}(\mathbfcal{S})$.
\end{proposition}
\proof{Proof}
Let $u \in L^j$ and $v \in R^j$. It is not hard to see that $\{u, v\} \not\subseteq S^i$ for any $i \in \llbracket d \rrbracket$. Thus, $\{L^j, R^j\}$ is a biclique subgraph of $G_{\mathbfcal{S}}^c$.
For convenience, we partition $S^{i'}$ into three parts: $X^{i'} = S^{i'} \setminus \bigcup_{j' \neq i'} S^{j'}$, $Y^{i'-1} = S^{i'-1} \cap S^{i'}$, and $Y^{i'} = S^{i'} \cap S^{i'+1}$ for $i' \in \{2, \hdots, d-1\}$. Similarly, we partition $S^1$ into two parts: $X^1 = S^1 \setminus \bigcup_{j \neq 1} S^j$ and $Y^1 = S^1 \cap S^2$; we also partition $S^d$ into two parts: $X^d = S^d \setminus \bigcup_{j \neq d} S^j$ and $Y^{d-1} = S^{d-1} \cap S^d$.
It is not hard to see that the edges of the conflict graph $G_{\mathbfcal{S}}^c$ of $\operatorname{CDC}(\mathbfcal{S})$ can be partitioned into three parts:
\begin{enumerate}
\item The edges between $X^{i'}$ and $X^{i''}$, i.e., the biclique $\{X^{i'}, X^{i''}\}$, for $i' \neq i'' \in \llbracket d \rrbracket$.
\item The edges between $X^{i'}$ and $Y^{i''}$ for $i'' \neq i'$, $i'' \neq i'-1$, $i'' \in \llbracket d-1 \rrbracket$, and $i' \in \llbracket d \rrbracket$.
\item The edges between $Y^{i'}$ and $Y^{i''}$ for $|i' - i''| \geq 2$ and $i', i'' \in \llbracket d-1 \rrbracket$.
\end{enumerate}
We want to note that $X^i \subseteq L^j$ if and only if $h^i_j = 0$; $X^i \subseteq R^j$ if and only if $h^i_j = 1$. Also, $Y^i \subseteq L^j$ if and only if $h^i_j = h^{i+1}_j = 0$; $Y^i \subseteq R^j$ if and only if $h^i_j = h^{i+1}_j = 1$.
First, given $i' \neq i'' \in \llbracket d \rrbracket$, $h^{i'} \neq h^{i''}$ by the definition of Gray code. In other words, there exists $j \in \llbracket t \rrbracket$ such that $h^{i'}_j \neq h^{i''}_j$. Therefore, without loss of generality, $X^{i'} \subseteq L^j$ and $X^{i''} \subseteq R^j$.
Second, given arbitrary $i'' \neq i'$, $i'' \neq i'-1$, $i'' \in \llbracket d-1 \rrbracket$, and $i' \in \llbracket d \rrbracket$, we know that there exists $j \in \llbracket t \rrbracket$ such that $1 - h^{i'}_j = h^{i''}_j = h^{i''+1}_j$ by Lemma~\ref{lm:gray_code_3}. Thus, $\{X^{i'}, Y^{i''}\}$ must be a biclique subgraph of $\{L^j, R^j\}$.
Third, given arbitrary $|i' - i''| \geq 2$ and $i', i'' \in \llbracket d-1 \rrbracket$, we know by Lemma~\ref{lm:gray_code_4} that there exists $j \in \llbracket t \rrbracket$ such that $1 - h^{i'}_j = 1 - h^{i'+1}_j = h^{i''}_j = h^{i''+1}_j$. Thus, $\{Y^{i'}, Y^{i''}\}$ must be a biclique subgraph of $\{L^j, R^j\}$.
\Halmos\endproof
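The construction in Proposition~\ref{prop:g1d_gray_code} can be verified on a small instance. The sketch below (illustrative sets and our own code, not from the paper) builds $\{L^j, R^j\}_{j=1}^t$ from a Gray code and checks that the resulting bicliques cover exactly the edges of the conflict graph.

```python
# Sketch: Gray code biclique cover of a small generalized 1D-ordered CDC.
from itertools import combinations

S = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {7, 8, 9}]   # illustrative sets
h = [[0, 0], [0, 1], [1, 1], [1, 0]]               # Gray code for d = 4
d, t = len(S), len(h[0])
ground = set().union(*S)

# conflict graph: u, v adjacent iff no S^i contains both
conflict = {frozenset(e) for e in combinations(ground, 2)
            if not any(set(e) <= s for s in S)}

covered = set()
for j in range(t):
    zeros = set().union(*(S[i] for i in range(d) if h[i][j] == 0))
    ones = set().union(*(S[i] for i in range(d) if h[i][j] == 1))
    Lj, Rj = zeros - ones, ones - zeros
    covered |= {frozenset((u, v)) for u in Lj for v in Rj}

assert covered == conflict    # the bicliques cover exactly the conflict edges
```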
\proof{Proof of Theorem~\ref{thm:g1d_ideal}}
Since we can use a Gray code to find a biclique cover of the conflict graph of a generalized 1D-ordered CDC, Theorem~\ref{thm:g1d_ideal} is a direct result of the combination of Proposition~\ref{prop:pwr_cdc_bc} and Proposition~\ref{prop:g1d_gray_code}.
\Halmos\endproof
\section{Proof of Theorems~\ref{thm:edge_ranking_to_gray_code} and~\ref{thm:path_edge_ranking_heuristic}}
\proof{Proof of Theorem~\ref{thm:edge_ranking_to_gray_code}}
We start by proving that $h^{i-1}$ and $h^i$ differ in exactly one entry. It is not hard to see that $h^i_j = 1 - h^{i-1}_j$ if $j = \varphi(v_{i-1}v_i)$ and $h^i_j = h^{i-1}_j$ for any $j \in \llbracket r \rrbracket$ with $j \neq \varphi(v_{i-1}v_i)$.
Then, we show that $h^{i'}$ and $h^{i''}$ are different for arbitrary distinct $i', i'' \in \llbracket n \rrbracket$. Let $j'$ be the smallest label of the edges on the path between $v_{i'}$ and $v_{i''}$. Then, exactly one edge on the path between $v_{i'}$ and $v_{i''}$ has the label $j'$; otherwise, $\varphi$ would not be a reversed edge ranking. Thus, $h^{i'}_{j'} \neq h^{i''}_{j'}$, i.e., $h^{i'}$ and $h^{i''}$ are different.
\Halmos\endproof
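The construction in this proof can be executed directly: start from the all-zero vector and, walking along the path, flip the entry indexed by each edge's label. The sketch below is our own illustration, not the paper's notation.

```python
# Sketch: turn a reversed edge ranking of a path into a Gray code by flipping,
# at step i, the entry given by the label of the edge between v_{i-1} and v_i.

def ranking_to_gray(phi, r):
    h = [[0] * r]                  # h^1 is the all-zero vector
    for label in phi:              # labels of the path's edges, in order
        nxt = h[-1][:]
        nxt[label - 1] ^= 1        # flip the labeled entry
        h.append(nxt)
    return h

h = ranking_to_gray([3, 2, 3, 1, 3, 2, 3], r=3)   # a reversed edge ranking of P_8
# all 8 codewords are distinct, and adjacent ones differ in exactly one entry
```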
\proof{Proof of Theorem~\ref{thm:path_edge_ranking_heuristic}}
We prove this statement by induction. Within the recursion of $\Call{Label}{P, \textit{level}, \varphi}$, if $|V(P)| \in \{1, 2\}$, it is obvious that for every pair of edges $u, v$ in $P$ with $\varphi(u) = \varphi(v)$, there exists an edge $w$ on the path between $u$ and $v$ such that $\varphi(w) < \varphi(u) = \varphi(v)$.
Assume that $\Call{Label}{P, \textit{level}, \varphi}$ yields a reversed edge ranking $\varphi$ of $P$ whenever $|V(P)| < k$ for some positive integer $k$. If $|V(P)| = k$, then $\Call{Label}{P, \textit{level}, \varphi}$ maps exactly one edge, the one between $P^1$ and $P^2$, to $\textit{level}$, and the edges in $P^1$ and $P^2$ are mapped to labels at least $\textit{level} + 1$. Let $u \in E(P^1)$ and $v \in E(P^2)$ with $\varphi(u)= \varphi(v)$. Then, the path between $u$ and $v$ contains the edge mapped to $\textit{level}$, which is less than $\varphi(u)= \varphi(v)$. Since $|V(P^1)| < k$ and $|V(P^2)| < k$, $\varphi$ is also a reversed edge ranking of $P^1$ and of $P^2$. Thus, $\varphi$ is a reversed edge ranking of $P$ for any path graph $P$ labeled by $\Call{Label}{P, \textit{level}, \varphi}$. Hence, Algorithm~\ref{alg:path_edge_ranking_heuristic} returns a reversed edge ranking of $P_n$.
\Halmos\endproof
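A plausible reading of the bisection strategy behind Algorithm~\ref{alg:path_edge_ranking_heuristic} is sketched below (function and variable names are our own, not the paper's pseudocode): the middle edge of the current sub-path gets the smallest available label, and both halves are labeled recursively.

```python
# Sketch: reversed edge ranking of a path by recursive bisection.

def reversed_edge_ranking(n_vertices):
    """phi[i] = label of the edge between vertices i and i+1 of the path."""
    phi = [0] * (n_vertices - 1)

    def label(lo, hi, level):      # label edges lo..hi-1
        if lo >= hi:
            return
        mid = (lo + hi) // 2
        phi[mid] = level           # middle edge gets the smallest label
        label(lo, mid, level + 1)
        label(mid + 1, hi, level + 1)

    label(0, n_vertices - 1, 1)
    return phi

phi = reversed_edge_ranking(8)
# phi == [3, 2, 3, 1, 3, 2, 3]: any two equally labeled edges are separated by
# an edge with a strictly smaller label, using ceil(log2(8)) = 3 ranks
```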
\section{Proof of Theorem~\ref{thm:g1d_biclique_ideal}} \label{sec:proof_g1d_biclique_ideal}
In this section, we prove the logarithmically sized ideal formulations for generalized 1D-ordered CDCs in Theorem~\ref{thm:g1d_biclique_ideal} by using Propositions~\ref{prop:g1d_biclique_sep} and~\ref{prop:g1d_biclique}.
\begin{proposition} \label{prop:g1d_biclique_sep}
Given a generalized 1D-ordered $\operatorname{CDC}(\mathbfcal{S})$ with $\mathbfcal{S} = \{S^i\}_{i=1}^d$ and a reversed edge ranking $\varphi$ of $\mathcal{P}$ with $E(\mathcal{P}) = \{S^i S^{i+1}: \forall i \in \llbracket d-1 \rrbracket\}$, let $\left\{\left\{I^{\varphi(e), e}, J^{\varphi(e), e}\right\}: e \in E(\mathcal{P})\right\}$ be the output of $\Call{Separation}{\mathcal{P}, \varphi}$ in Algorithm~\ref{alg:g1d_biclique_sep}. Then,
\begin{align*}
\left\{\left\{\bigcup_{i \in I^{\varphi(e), e}} S^i \setminus \operatorname{mid}(e), \bigcup_{j \in J^{\varphi(e), e}} S^j \setminus \operatorname{mid}(e) \right\}: e \in E(\mathcal{P})\right\}.
\end{align*}
\noindent is a biclique cover of the conflict graph of $\operatorname{CDC}(\mathbfcal{S})$.
\end{proposition}
\proof{Proof}
This is a corollary of Theorem 3 of~\citet{lyu2022modeling}.
\Halmos\endproof
Because a junction tree of $\operatorname{CDC}(\mathbfcal{S})$ is a path and $S^i \cap S^j = \emptyset$ if $|i - j| \geq 2$, we can merge the bicliques represented by $\left\{I^{\varphi(e), e}, J^{\varphi(e), e} \right\}$ for each label value $\varphi(e)$. We remark that the indices within $I^{\varphi(e), e}$, $J^{\varphi(e), e}$, and $I^{\varphi(e), e} \cup J^{\varphi(e), e}$ are all consecutive. The merging procedure is visualized in Figure~\ref{fig:merge_procedure}. All edges with label $j$ are ordered as $e^1, \hdots, e^n$. Then, we merge $I^{\varphi(e^k), e^k}$ and $J^{\varphi(e^k), e^k}$ alternately into $A^j$ and $B^j$: $A^j$ is the union of the sets $I^{\varphi(e^k), e^k}$ with odd $k$ and the sets $J^{\varphi(e^k), e^k}$ with even $k$; similarly, $B^j$ is the union of the sets $I^{\varphi(e^k), e^k}$ with even $k$ and the sets $J^{\varphi(e^k), e^k}$ with odd $k$.
\begin{figure}
\caption{A merging procedure to obtain a pair of index sets $\{A^j, B^j\}$.}
\label{fig:merge_procedure}
\end{figure}
\begin{proposition} \label{prop:g1d_biclique}
Given the same settings as in Proposition~\ref{prop:g1d_biclique_sep}: $\operatorname{CDC}(\mathbfcal{S}), \varphi$, and $\mathcal{P}$, let $\left\{\left\{I^{\varphi(e), e}, J^{\varphi(e), e}\right\}: e \in E(\mathcal{P})\right\}$ be the output of $\Call{Separation}{\mathcal{P}, \varphi}$ in Algorithm~\ref{alg:g1d_biclique_sep}. We denote the number of ranks of $\varphi$ by $r$. Given an arbitrary $j \in \llbracket r \rrbracket$, assume that
\begin{enumerate}
\item $e^1, \hdots, e^n$ be all the edges of $\mathcal{P}$ such that $\varphi(e^k) = j$ and $e^1, \hdots, e^n$ follow the same order as $S^1 S^2, \hdots, S^{d-1}S^d$.
\item $A^j = \bigcup_{k = 1: k \text{ is odd}}^n I^{\varphi(e^k), e^k} \cup \bigcup_{k = 1: k \text{ is even}}^n J^{\varphi(e^k), e^k}$.
\item $B^j = \bigcup_{k = 1: k \text{ is odd}}^n J^{\varphi(e^k), e^k} \cup \bigcup_{k = 1: k \text{ is even}}^n I^{\varphi(e^k), e^k}$.
\end{enumerate}
Then,
\begin{align*}
\left\{L^j := \bigcup_{i \in A^j} S^i \setminus \bigcup_{i \in B^j} S^i, R^j := \bigcup_{i \in B^j} S^i \setminus \bigcup_{i \in A^j} S^i \right\}
\end{align*}
\noindent is a biclique subgraph of the conflict graph $G^c_{\mathbfcal{S}}$ and
\begin{align*}
\left\{X^e := \bigcup_{i \in I^{\varphi(e), e}} S^i \setminus \operatorname{mid}(e), Y^e := \bigcup_{i \in J^{\varphi(e), e}} S^i \setminus \operatorname{mid}(e) \right\}
\end{align*}
is a biclique subgraph of $\{L^j, R^j\}$ for any edge $e$ such that $\varphi(e) = j$.
\end{proposition}
\proof{Proof}
First, we prove that $\{L^j, R^j\}$ is a biclique subgraph of the conflict graph $G^c_{\mathbfcal{S}}$. Assume not. Then, there must exist $u \in L^j, v \in R^j$ such that $\{u, v\} \subseteq S^{i_1}$ for some $i_1 \not\in A^j \cup B^j$, while $\{u, v\} \not\subseteq S^i$ for any $i \in A^j \cup B^j$. Let $i_2 \in A^j$ be such that $u \in S^{i_2}$, and let $i_3 \in B^j$ be such that $v \in S^{i_3}$. By the definition of generalized 1D-ordered CDCs, $|i_1 - i_2| \leq 1$ and $|i_1 - i_3| \leq 1$. As shown in Algorithm~\ref{alg:g1d_biclique_sep}, $\max(I^{\varphi(e^k), e^k}) = \min(J^{\varphi(e^k), e^k})-1$, so it is not possible that $\max(I^{\varphi(e^k), e^k}) < i_1 < \min(J^{\varphi(e^k), e^k})$. If instead $\max(J^{\varphi(e^k), e^k}) < i_1 < \min(I^{\varphi(e^{k+1}), e^{k+1}})$, then $|i_1 - i_2| \leq 1$ and $|i_1 - i_3| \leq 1$ is also impossible, since $I^{\varphi(e^{k+1}), e^{k+1}}$ and $J^{\varphi(e^k), e^k}$ are both subsets of $A^j$ or both subsets of $B^j$. This is a contradiction. Thus, $\{L^j, R^j\}$ is a biclique subgraph of $G^c_{\mathbfcal{S}}$.
Second, we prove that $\{X^{e^k}, Y^{e^k}\}$ is a biclique subgraph of $\{L^j, R^j\}$ for an arbitrary edge $e^k$ with $\varphi(e^k) = j$. Assume not. Then, there exist $u \in X^{e^k}$ and $v \in Y^{e^k}$ such that $uv$ is not an edge of $\{L^j, R^j\}$. Without loss of generality, assume that $k$ is odd, $u \not\in L^j$, and $u \not\in R^j$. Since $u \not\in Y^{e^k}$, we know that $u \not\in S^i$ for any $i \in J^{\varphi(e^k), e^k}$. By the definition of generalized 1D-ordered CDCs, $u \not\in S^i$ for any $i \in I^{\varphi(e^{k-1}), e^{k-1}}$. Thus, $u \not\in S^i$ for any $i \in B^j$. However, $u \in S^i$ for some $i \in A^j$. Thus, $u \in L^j$, a contradiction.
\Halmos\endproof
\proof{Proof of Theorem~\ref{thm:g1d_biclique_ideal}}
By Propositions~\ref{prop:g1d_biclique_sep} and~\ref{prop:g1d_biclique}, we know that $\{\{L^j, R^j\}\}_{j=1}^r$ is a biclique cover of the conflict graph of $\operatorname{CDC}(\mathbfcal{S})$. Then, we can complete the proof by Proposition~\ref{prop:pwr_cdc_bc}.
\Halmos\endproof
\section{The Difference Between Gray Code and Biclique Cover Formulations for Generalized 1D-Ordered CDCs} \label{sec:g1d_diff}
In this section, we provide an example demonstrating the difference between the Gray code formulation in Theorem~\ref{thm:g1d_ideal} and the biclique cover formulation in Theorem~\ref{thm:g1d_biclique_ideal}. Assume that we have a generalized 1D-ordered CDC, $\operatorname{CDC}(\mathbfcal{S})$, where
\begin{align*}
\mathbfcal{S} = \{\{1, 2, 3\}, \{3, 4, 5\}, \{5, 6, 7\}, \{7, 8, 9\}, \{9, 10, 11\}, \{11, 12, 13\}\}.
\end{align*}
For both the Gray code and biclique cover formulations, we use the reversed edge ranking in Figure~\ref{fig:diff_example}. Note that a reversed edge ranking generates a Gray code via Theorem~\ref{thm:edge_ranking_to_gray_code}:
\begin{align*}
\{&\{0, 0, 0\} \\
&\{0, 0, 1\} \\
\{h^i\}_{i=1}^6 = \quad &\{0, 1, 1\} \\
&\{1, 1, 1\} \\
&\{1, 0, 1\} \\
&\{1, 0, 0\}
\}.
\end{align*}
Then, we obtain the Gray code formulation of $\lambda \in \operatorname{CDC}(\mathbfcal{S})$:
\begin{alignat*}{2}
& \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 + \lambda_5 + \lambda_6 \leq z_1, \qquad & \lambda_8 + \lambda_9 + \lambda_{10} + \lambda_{11} + \lambda_{12} + \lambda_{13} \leq 1 - z_1 \\
& \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 + \lambda_{10} + \lambda_{11} + \lambda_{12} + \lambda_{13} \leq z_2, & \lambda_6 + \lambda_7 + \lambda_8 \leq 1 - z_2 \\
& \lambda_1 + \lambda_2 + \lambda_{12} + \lambda_{13} \leq z_3 & \lambda_4 + \lambda_5 + \lambda_6 + \lambda_7 + \lambda_8 + \lambda_9 + \lambda_{10} \leq 1 - z_3\\
&\sum_{v=1}^{13} \lambda_v = 1, & \lambda \geq 0
\end{alignat*}
Similarly, the set $\{\{A^j, B^j\}: j \in \llbracket 3 \rrbracket\}$ generated by Algorithm~\ref{alg:g1d_biclique} with the reversed edge ranking in Figure~\ref{fig:diff_example} can be represented by the code
\begin{align*}
\{&\{0, 0, 0\} \\
&\{0, 0, 1\} \\
\{g^i\}_{i=1}^6 = \quad &\{0, 1, -1\} \\
&\{1, 1, -1\} \\
&\{1, 0, 1\} \\
&\{1, 0, 0\}
\},
\end{align*}
\noindent where $A^j = \{i \in \llbracket 6 \rrbracket: g^i_j = 0\}$ and $B^j = \{i \in \llbracket 6 \rrbracket: g^i_j = 1\}$. Then, the biclique cover formulation of $\lambda \in \operatorname{CDC}(\mathbfcal{S})$ is
\begin{alignat*}{2}
& \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 + \lambda_5 + \lambda_6 \leq z_1, \qquad & \lambda_8 + \lambda_9 + \lambda_{10} + \lambda_{11} + \lambda_{12} + \lambda_{13} \leq 1 - z_1 \\
& \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 + \lambda_{10} + \lambda_{11} + \lambda_{12} + \lambda_{13} \leq z_2, & \lambda_6 + \lambda_7 + \lambda_8 \leq 1 - z_2 \\
& \lambda_1 + \lambda_2 + \lambda_{12} + \lambda_{13} \leq z_3 & \lambda_4 + \lambda_5 + \lambda_9 + \lambda_{10} \leq 1 - z_3\\
&\sum_{v=1}^{13} \lambda_v = 1, & \lambda \geq 0
\end{alignat*}
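The difference is easy to reproduce computationally. The sketch below (our own code) rebuilds the third pair of constraints under the Gray code $h$ and the signed code $g$: the biclique cover version drops $\lambda_6, \lambda_7, \lambda_8$ from the right-hand side because the $-1$ entries exclude $S^3$ and $S^4$ from that biclique.

```python
# Sketch: the third biclique under the Gray code h vs. the signed code g.
S = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {7, 8, 9}, {9, 10, 11}, {11, 12, 13}]
h3 = [0, 1, 1, 1, 1, 0]       # third entries of h^1, ..., h^6
g3 = [0, 1, -1, -1, 1, 0]     # third entries of g^1, ..., g^6 (-1 = unused)

def sides(code):
    A = set().union(*(S[i] for i in range(6) if code[i] == 0))
    B = set().union(*(S[i] for i in range(6) if code[i] == 1))
    return A - B, B - A

L_gray, R_gray = sides(h3)    # ({1, 2, 12, 13}, {4, 5, 6, 7, 8, 9, 10})
L_bc, R_bc = sides(g3)        # ({1, 2, 12, 13}, {4, 5, 9, 10})
```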
\begin{figure}
\caption{A reversed edge ranking for a path graph with vertices $\{S^i\}_{i=1}^{6}$.}
\label{fig:diff_example}
\end{figure}
\section{Proofs of Theorems~\ref{thm:gnd_ib} and~\ref{thm:gnd_ideal}} \label{sec:proofs_gnd}
In this section, we will prove the pairwise IB-representability of generalized $n$D-ordered CDCs (Theorem~\ref{thm:gnd_ib}) and show the logarithmically sized ideal formulations (Theorem~\ref{thm:gnd_ideal}).
\proof{Proof of Theorem~\ref{thm:gnd_ib}}
Let $\mathbfcal{S} = \{S^{\mathbf{i}}: \forall \mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket\}$. Assume that $\operatorname{CDC}(\mathbfcal{S})$ is not pairwise IB-representable. Then, by Proposition~\ref{prop:pairwise_IB_at_most_two}, there exists a minimal infeasible set $I$ such that $|I| \geq 3$. Let $x_1, x_2, x_3 \in I$ be three distinct elements. Then, $I \setminus \{x_j\}$ is a feasible set, and we can assume that $I \setminus \{x_j\} \subseteq S^{\mathbf{i}^j}$ for $j \in \{1,2,3\}$. Thus, any pair out of $S^{\mathbf{i}^1}, S^{\mathbf{i}^2}, S^{\mathbf{i}^3}$ shares some common elements.
By the definition of generalized $n$D-ordered CDCs, we know that $||\mathbf{i}^1 - \mathbf{i}^2||_\infty \leq 1$, $||\mathbf{i}^1 - \mathbf{i}^3||_\infty \leq 1$, and $||\mathbf{i}^2 - \mathbf{i}^3||_\infty \leq 1$. Then, $\mathbf{i}^1_i, \mathbf{i}^2_i, \mathbf{i}^3_i$ cannot take three distinct values for any $i \in \llbracket n \rrbracket$: otherwise, we could assume $\mathbf{i}^1_i + 1 = \mathbf{i}^2_i = \mathbf{i}^3_i - 1$ without loss of generality, which would imply $||\mathbf{i}^1 - \mathbf{i}^3||_\infty > 1$. Thus, we can construct $\mathbf{i}' \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$ such that $\mathbf{i}'_i$ is equal to at least two of $\mathbf{i}^1_i, \mathbf{i}^2_i, \mathbf{i}^3_i$ for each $i \in \llbracket n \rrbracket$. Then, we have
\begin{align*}
||\mathbf{i}^1 - \mathbf{i}'||_1 + ||\mathbf{i}' - \mathbf{i}^2||_1 &= ||\mathbf{i}^1 - \mathbf{i}^2||_1 \\
||\mathbf{i}^1 - \mathbf{i}'||_1 + ||\mathbf{i}' - \mathbf{i}^3||_1 &= ||\mathbf{i}^1 - \mathbf{i}^3||_1 \\
||\mathbf{i}^2 - \mathbf{i}'||_1 + ||\mathbf{i}' - \mathbf{i}^3||_1 &= ||\mathbf{i}^2 - \mathbf{i}^3||_1.
\end{align*}
Thus, by property 2 of Definition~\ref{def:g_nd_ordered_cdc},
\begin{align*}
S^{\mathbf{i}^1} \cap S^{\mathbf{i}^2} &\subseteq S^{\mathbf{i}'} \\
S^{\mathbf{i}^1} \cap S^{\mathbf{i}^3} &\subseteq S^{\mathbf{i}'} \\
S^{\mathbf{i}^2} \cap S^{\mathbf{i}^3} &\subseteq S^{\mathbf{i}'}.
\end{align*}
\noindent Hence, $I \subseteq S^{\mathbf{i}'}$, which is a contradiction.
\Halmos\endproof
\proof{Proof of Theorem~\ref{thm:gnd_ideal}}
The constraints of $\lambda \in \operatorname{g1d}\left(\mathbfcal{S}^j, \{h^{i,j}\}_{i=1}^{d_j}, z^j \right)$ are obtained from biclique covers of the conflict graphs of $\operatorname{CDC}(\mathbfcal{S}^j)$ for $j \in \llbracket n \rrbracket$. Thus, we only need to show that the union of the edge sets of the conflict graphs $G^c_{\mathbfcal{S}^j}$ is exactly the edge set of $G^c_{\mathbfcal{S}}$.
We start by showing that $E(G^c_{\mathbfcal{S}^1}) \subseteq E(G^c_{\mathbfcal{S}})$. Let $uw$ be an arbitrary edge of $E(G^c_{\mathbfcal{S}^1})$. Then, $\{u, w\} \not \subseteq \bigcup_{\mathbf{i}_v \in \llbracket d_v \rrbracket: v \neq 1} S^{\mathbf{i}}$ for any $\mathbf{i}_1 \in \llbracket d_1 \rrbracket$. Thus, $\{u, w\} \not\subseteq S^{\mathbf{i}}$ for any $\mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$. Hence, $E(G^c_{\mathbfcal{S}^1}) \subseteq E(G^c_{\mathbfcal{S}})$. Similarly, $E(G^c_{\mathbfcal{S}^j}) \subseteq E(G^c_{\mathbfcal{S}})$ for any $j \in \llbracket n \rrbracket$.
Then, we prove that $\bigcup_{j=1}^n E(G^c_{\mathbfcal{S}^j}) = E(G^c_{\mathbfcal{S}})$. Let $uw$ be an arbitrary edge of $E(G^c_{\mathbfcal{S}})$. Then, we know that $\{u, w\} \not\subseteq S^{\mathbf{i}}$ for any $\mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$. Let
\begin{align*}
\mathbf{I}^u &= \{\mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket: u \in S^{\mathbf{i}}\}, \\
\mathbf{I}^u_j &= \{\mathbf{i}_j: \mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket, u \in S^{\mathbf{i}}\}, \qquad \forall j \in \llbracket n \rrbracket.
\end{align*}
We define $\mathbf{I}^w, \mathbf{I}^w_j$ in the same way. Note that $\mathbf{I}^u \cap \mathbf{I}^w = \emptyset$ since $\{u, w\} \not\subseteq S^{\mathbf{i}}$ for any $\mathbf{i} \in \llbracket d_1 \rrbracket \times \hdots \times \llbracket d_n \rrbracket$. Also, $\mathbf{I}^u = \mathbf{I}^u_1 \times \hdots \times \mathbf{I}^u_n$ and $\mathbf{I}^w = \mathbf{I}^w_1 \times \hdots \times \mathbf{I}^w_n$ by property 2 of Definition~\ref{def:g_nd_ordered_cdc}.
Thus, there must exist $j_1 \in \llbracket n \rrbracket$ such that $\mathbf{I}^u_{j_1} \cap \mathbf{I}^w_{j_1} = \emptyset$, which means $\{u, w\} \not\subseteq \bigcup_{\mathbf{i}_v \in \llbracket d_v \rrbracket: v \neq j_1} S^{\mathbf{i}}$ for any $\mathbf{i}_{j_1} \in \llbracket d_{j_1} \rrbracket$. Hence, $uw \in E(G^c_{\mathbfcal{S}^{j_1}})$.
\Halmos\endproof
\end{APPENDICES}
\end{document}
\begin{document}
\title{Additive Manufactured and Topology Optimized Permanent Magnet Spin-Rotator for Neutron Interferometry Applications}
\author{Wenzel Kersten$^1$}
\author{Laurids Brandl$^1$}
\author{Richard Wagner$^1$}
\author{Christian Huber$^{2,3}$}
\author{Florian Bruckner$^{2,3}$}
\author{Yuji Hasegawa$^{1,4}$}
\author{Dieter Suess$^{2,3}$}
\author{Stephan Sponar$^1$}
\affiliation{
$^1$Atominstitut, TU Wien, Stadionallee 2, 1020 Vienna, Austria \\
$^2$Physics of Functional Materials, University of Vienna, 1090 Vienna, Austria \\
$^3$Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1090 Vienna, Austria \\
$^4$Department of Applied Physics, Hokkaido University, Kita-ku, Sapporo 060-8628, Japan}
\date{\today}
\hyphenpenalty=800\relax
\exhyphenpenalty=800\relax
\sloppy
\setlength{\parindent}{0pt}
\noindent
\begin{abstract}
In neutron interferometric experiments using polarized neutrons, coherent spin-rotation control is required. In this letter we present a new method for Larmor spin-rotation around an axis parallel to the outer guide field using topology optimized 3D printed magnets. The use of 3D printed magnets instead of magnetic coils avoids unwanted inductances and dissipates no heat, which prevents a potential loss of interferometric contrast due to temperature gradients in the interferometer. We use topology optimization to arrive at a magnet geometry that is optimized for homogeneity of the magnetic action over the neutron beam profile and that is adjustable by varying the distance between the 3D printed magnets. We verify the performance in polarimetric and interferometric neutron experiments.
\end{abstract}
\maketitle
\begin{SCfigure*}[]
\centering
\includegraphics[width=0.75\textwidth]{Helmholtz_box_fig}
\caption[Current neutron interferometric setup.]{Current neutron interferometric setup. (a) Placement of the box inside the neutron interferometer together with the phase shifter plate $\chi$.
(b) Schematic view of the water-cooled Larmor spin-rotator box with coils in Helmholtz geometry and magnetic field direction indicated in magenta. Also depicted are in and outlets for temperature-controlled cooling water. The box needs to be waterproof, which makes manufacture tedious. (c) Design and field domain for an optimized 3D printed permanent magnetic Larmor spin-rotator.}
\label{fig:interferometer_2}
\end{SCfigure*}
\section{Introduction}
Since its first demonstration in 1974, neutron interferometry \cite{Rauch74,RauchBook} has proven itself a powerful tool to study fundamental concepts of quantum mechanics. For instance, the spin-superposition law \cite{Badurek83Direct,Badurek83TimeDepend}, gravitationally induced phases (Colella-Overhauser-Werner effect) \cite{Colella75} and the Sagnac effect \cite{Werner79} have been verified experimentally.
Experiments challenging our views on reality include the Einstein-Podolsky-Rosen experiments, in which a violation of Bell's inequality, derived under the assumption of local hidden variables, can be measured. Using pairs of entangled photons \cite{BookBertlmannZeilinger,Freedman72,Aspect82,Kwiat95,Weihs98,Tittel98,Rowe01,Zeilinger15,Hensen15} it can be shown that quantum mechanics is non-local, i.e., it cannot be reproduced by local realistic theories. Entanglement can be achieved not only between spatially separated particles but also between different degrees of freedom \cite{Moehring04,Matsukevich2008,hasegawa2003violation,Sakai06,Geppert18}, e.g., path, spin, and energy in the case of neutrons. This enables the experimental violation of Bell's inequality for non-contextual hidden variables using neutron interferometric setups \cite{Hasegawa06,Bartosik09}, giving proof that measurement outcomes are not predetermined and therefore depend on the experimental context \cite{Bell66,Mermin93}. In 1989, Greenberger, Horne, and Zeilinger found that, for at least tripartite entanglement, quantum mechanics and local hidden variable theories are not merely in statistical tension but in outright contradiction \cite{GHZ89Pro,GHZ90}, which is also feasible with neutrons \cite{Hasegawa10}.
More recently, a neutron optical approach for obtaining weak values \cite{Sponar15}, a new type of quantum variable introduced by Aharonov already three decades ago \cite{Aharonov88}, has been realized. With this novel technique quantum paradoxes such as the Quantum Cheshire Cat \cite{Denkmayr14} or a violation of the classical pigeonhole principle \cite{Cai17} have been studied. In addition, a direct tomographic state reconstruction technique based on weak values, independent of the measurement strength, has been established \cite{Denkmayr17}. Common to all these experiments is the requirement of high interference contrast and, even more importantly, a high efficiency in spin manipulation, i.e., full control of the spin state.
\section{3D printed spin-rotator}
The current setup for tests of fundamental quantum mechanical phenomena, such as Bell's inequality violations, at the Institut Laue-Langevin in Grenoble, France, is shown in Fig.\,\ref{fig:interferometer_2}. The Larmor spin-rotator has a Helmholtz coil geometry. It applies a local field in addition to the guide field in the $z$-direction, thereby locally changing the Larmor frequency with which the spin precesses around the field in the $xy$-plane. The rotation angle is given by
\begin{align}
\alpha(B_z) = \frac{2\mu_N}{\hbar} B_z \frac{l}{v}
\label{eq:alpha}
\end{align}
where $\mu_N=-9.6623647\times10^{-27}$~J/T is the magnetic moment of the neutron, $l$ is the length of the coil, and $v$ is the velocity of the neutrons. Since the local field and the guide field are parallel, the field transition can be adiabatic.
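As a quick plausibility check of the rotation angle formula, the following sketch (our own illustration, not part of the original analysis; the field-length product of $35$~mT$\cdot$mm is the action value quoted later in the text) solves the equation for the neutron velocity at which $\alpha=\pi$:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
MU_N = 9.6623647e-27     # magnitude of the neutron magnetic moment, J/T

def larmor_angle(Bz_l, v):
    """Rotation angle alpha = (2 mu_N / hbar) * B_z * l / v for a uniform field,
    written in terms of the product Bz_l = B_z * l (in T m)."""
    return 2.0 * MU_N / HBAR * Bz_l / v

# field-length product of 35 mT mm, expressed in T m
bz_l = 35e-3 * 1e-3

# velocity at which this product produces a pi rotation
v_pi = 2.0 * MU_N / HBAR * bz_l / math.pi
print(f"alpha = pi at v = {v_pi:.0f} m/s")  # ~2e3 m/s, i.e., thermal neutrons
```

The resulting velocity of roughly $2000$~m/s is characteristic of thermal neutrons, consistent with the quoted action producing a $\pi$ rotation.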
Such a Larmor spin-rotator must have a highly homogeneous field along the neutron beam path, because an inhomogeneous field would lead to a dephasing of the neutron and therefore to a loss in contrast of the interferogram. Another crucial point is the thermal stability of the setup. A change of temperature during the measurement leads to a loss in contrast, as phase drifts occur, e.g., a temperature change of $1$~$^\circ$C results in $1.92$~rad phase shift \cite{geppert2014improvement}. For this reason the Helmholtz coil Larmor spin-rotator is water-cooled (Fig.\,\ref{fig:interferometer_2}(b)), which complicates the setup of the experiment, because the temperature of the cooling water has to be optimized. In addition, the manufacture of waterproof boxes to hold the Helmholtz coils is tedious.
\subsection{Requirements}
To improve on the design at hand, the actual operating conditions should be examined in more detail. The neutron path is approximated by a field box $\Omega_f$ with a size of $7\times7\times40$~mm$^3$ ($a\times a\times L$). To describe the influence of the magnetic field on the phase shift of the neutrons, the action $\Theta$ is defined as
\begin{align}
\Theta=\frac{1}{a^2}\int_{\Omega_{f}}| B_z | \mathrm{d} \boldsymbol{r}.
\end{align}
In order to rotate the neutron spin by an angle $\alpha=\pi$, an action of $\Theta=35~\text{mT}\cdot\text{mm}$ is necessary. The action of the Helmholtz coil geometry is calculated using the finite element method (FEM) tool Magnum.fe \cite{magnumfe}. Fig.~\ref{fig:spinflipper_helmholz}(a) shows the geometry as well as the field box with vectors of the magnetic field. The field shows an inhomogeneous behavior outside the coil. The current $I$ needed to reach an action of $\Theta=35~\text{mT}\cdot\text{mm}$ is plotted in Fig.\,\ref{fig:spinflipper_helmholz}(b). The relative error of the field homogeneity is plotted in Fig.\,\ref{fig:gap_mag_action}(b). An optimized design should be found that meets the requirements presented in the next subsection.
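To illustrate how the action $\Theta$ can be evaluated numerically over the field box, a minimal sketch follows (our own cell-centered Riemann sum on a uniform test field, not the Magnum.fe FEM computation used in the paper):

```python
import numpy as np

a, L = 7e-3, 40e-3   # field box cross-section and length in m (7 x 7 x 40 mm^3)

def action(Bz, n=32):
    """Theta = (1/a^2) * integral of |B_z| over the field box,
    approximated by a cell-centered Riemann sum with n^3 cells."""
    xs = (np.arange(n) + 0.5) * a / n - a / 2
    zs = (np.arange(n) + 0.5) * L / n - L / 2
    X, Y, Z = np.meshgrid(xs, xs, zs, indexing="ij")
    dV = (a / n) ** 2 * (L / n)
    return np.sum(np.abs(Bz(X, Y, Z))) * dV / a**2

# sanity check: a hypothetical uniform field B_z reduces to Theta = B_z * L
theta = action(lambda x, y, z: 1.18e-3 * np.ones_like(z))
print(theta * 1e6, "mT*mm")  # 1.18 mT over 40 mm gives 47.2 mT*mm
```

Note that the real field is not uniform along the path, which is why a central field of $1.18$~mT corresponds to $\Theta=35$~mT$\cdot$mm in the actual geometry rather than the uniform-field value.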
The goal is to find an optimized permanent magnetic Larmor spin-rotator using the inverse stray field and topology optimization framework \cite{bruckner2017solving,huber20173d,huber2017topology}. The following design parameters are given:
\begin{itemize}
\item Size of the field box $\Omega_f$: $7\times7\times40$~mm$^3$ ($a\times a\times L$).
\item Maximum design volume $\Omega_p$: $24\times24\times20$~mm$^3$ ($a\times a\times L$).
\item Action $\Theta$ of the external field: $\Theta=35$~mT$\cdot$mm.
\item $\Theta$ is adjustable in the range of $\pm 5$~mT$\cdot$mm.
\item Homogeneous magnetic field density $\boldsymbol{B}(\boldsymbol{r})$ along $z$-axis: $\boldsymbol{B}(\boldsymbol{r})=(0,0,B_z)$.
\end{itemize}
A challenge for a permanent magnetic system is to make the action $\Theta$ adjustable. For the Helmholtz coil geometry, $\Theta$ is easily adjustable by the current through the coils. The easiest way to adjust $\Theta$ for a permanent magnetic setup is to change the distance between the neutron path (field box) and the magnets. To realize a homogeneous magnetic field density in the field box $\Omega_f$, the functional for the minimization problem can be given by
\begin{align}
\label{eq:j_spin}
J = \int_{\Omega_{f}} \left( | \triangledown B_x |^2 + | \triangledown B_y |^2 + | (\triangledown B_z)_y |^2 + | (\triangledown B_z)_z |^2\right) \mathrm{d} \boldsymbol{r}
\end{align}
\begin{figure}\label{fig:spinflipper_helmholz}
\end{figure}
\subsection{Two Candidate Designs}
\label{sec:spin_flipper_designs}
Two different initial designs are investigated to find a proper replacement of the Helmholtz coils. The first design is a modified Halbach cylinder \cite{bjork2010comparison}. To achieve a better field homogeneity outside the magnet, it is divided into two rows, which is schematically illustrated in Fig.\,\ref{fig:spinflipper_new_design_result}. To adjust $\Theta$, the design is split in the middle and the gap $\Delta z$ between both halves is adjustable. In total, 20 segments are used. Each permanent magnetic segment has a constant remanence $|\boldsymbol{B_r}|$, but the direction of $\boldsymbol{B_r}$ is free and defines the optimization parameter for the inverse stray field optimization. Fig.\,\ref{fig:spinflipper_new_design}(a) shows the initial design of the modified Halbach cylinder. Since only the direction of the remanence is an optimization parameter, no regularization parameter is necessary. Fig.\,\ref{fig:spinflipper_new_design_result}(b) shows the result of the inverse stray field simulation for an action of $\Theta=35$~mT$\cdot$mm. In general, the magnetization (remanence) vectors have the same directions as in a standard Halbach cylinder; only the segments at the top and bottom of the field box show a deviating direction, which makes the field in the field box more homogeneous.
\begin{figure}\label{fig:spinflipper_new_design_result}
\end{figure}
The second investigated design is topology optimized. Fig.\,\ref{fig:spinflipper_new_design}(a) shows the design domain $\Omega_p$ and the field box $\Omega_f$ in which $J$ of equation~\eqref{eq:j_spin} should be minimized. To adjust $\Theta$, the design consists of two halves, and the gap $\Delta z$ is adjustable. The mesh of the design domain consists of $256{,}542$ tetrahedral elements. No volume constraint is applied for the optimization. Fig.\,\ref{fig:spinflipper_new_design}(b) shows the topology optimized version of a permanent magnetic Larmor spin-rotator.
Several numerical simulations are performed to find the optimal parameters of both designs. To adjust $\Theta$ in both directions, a gap of $\Delta z=2.25$~mm is chosen. To reach an action of $\Theta=35$~mT$\cdot$mm, a remanence of $B_r=61$~mT is necessary for the topology optimized version and $B_r=68$~mT for the Halbach design. How $\Delta z$ and $B_r$ influence $\Theta$ is plotted in Fig.\,\ref{fig:gap_mag_action}. There is a linear correlation between the action and the gap size. The topology optimized design is less sensitive to changes in $\Delta z$.
\begin{figure}\label{fig:spinflipper_new_design}
\end{figure}
\begin{figure}\label{fig:gap_mag_action}
\end{figure}
The homogeneity of $\boldsymbol{B}(\boldsymbol{r})$ in the field box $\Omega_f$ has a crucial impact on the performance. The relative error is defined as
\begin{align}
\delta e = \frac{J}{ \int_{\Omega_f} |\boldsymbol{B}|^2 \mathrm{d} \boldsymbol{r} }
\end{align}
with the functional $J$ of equation~\eqref{eq:j_spin}. Fig.\,\ref{fig:gap_action_relerror}(a) shows the relative error $\delta e$ as a function of the gap $\Delta z$. The topology optimized design has a much lower relative error than the Halbach cylinder design, and its dependency on the gap size is much lower as well. However, the main question is whether the optimized permanent magnetic designs have a lower relative error, i.e., a better performance, than the current Helmholtz coil geometry. Fig.\,\ref{fig:gap_action_relerror}(b) shows a plot of the relative error as a function of the action for the current design and both optimized designs. For the current design, $\delta e$ is independent of $\Theta$. In the range of approximately $\Theta=30$--$40$~mT$\cdot$mm the topology optimized version shows a better performance. This translates to a range of $\Delta z \simeq 1 - 3.5 ~\text{mm}$ for the tunable gap distance.
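The relative error $\delta e$ can be sketched numerically as follows (our own finite-difference discretization of the functional $J$ from equation~\eqref{eq:j_spin} on a regular grid, not the FEM evaluation used above):

```python
import numpy as np

def homogeneity_error(B, h):
    """delta_e = J / integral(|B|^2), with J the gradient penalty of the
    minimization functional: all components of grad(B_x) and grad(B_y),
    but only the y- and z-components of grad(B_z), enter J.
    B has shape (3, nx, ny, nz); h is the grid spacing (the volume
    element cancels in the ratio)."""
    gx = np.gradient(B[0], h, axis=(0, 1, 2))  # [d/dx, d/dy, d/dz] of B_x
    gy = np.gradient(B[1], h, axis=(0, 1, 2))
    gz = np.gradient(B[2], h, axis=(0, 1, 2))
    J = (sum(np.sum(g**2) for g in gx)
         + sum(np.sum(g**2) for g in gy)
         + np.sum(gz[1]**2) + np.sum(gz[2]**2))
    return J / np.sum(B**2)

# a perfectly uniform field along z is fully homogeneous: delta_e = 0
B = np.zeros((3, 8, 8, 8))
B[2] = 1.18e-3
print(homogeneity_error(B, 1e-3))  # 0.0
```

Any spatial variation of the field components penalized by $J$ increases $\delta e$ above zero, which is what the optimization suppresses.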
\begin{figure}\label{fig:gap_action_relerror}
\end{figure}
\begin{figure}\label{fig:spin_flipper_2}
\end{figure}
\begin{figure*}\label{fig:exp_setups}
\end{figure*}
\subsection{Fabrication}
\label{sec:spin_flipper_validation}
For the manufacturing process of the permanent magnets, we use a conventional low-cost end-user 3D printer without any modifications \cite{huber20163d} to print the topology optimized system. As printing material, the prefabricated compound material (Neofer$^\circledR$~25/60p) from Magnetfabrik Bonn GmbH is used. All four segments have the same shape. Fig.\,\ref{fig:spinflipper_new_design}(c) shows a picture of the printed segments. After the printing process, the magnets must be magnetized. Compared to all previous applications, only a weak magnetic field is necessary. The segments must have a remanence of exactly $B_r=61$~mT to generate an action of $\Theta=35$~mT$\cdot$mm for a gap of $\Delta z=2.25$~mm.
Magnetization of the segments is performed with an electromagnet with a maximum magnetic flux density of $1.9$~T in permanent operation mode. A jig with the exact positions of the segments is 3D printed. The jig is inserted into the electromagnet and the external field is increased in small steps. After each step, the magnetic field density of the segments is measured with a 3D Hall probe. An FEM simulation of the arrangement yields a field of $B_z=1.18$~mT in the center for the desired action $\Theta$. With this approach, a good adjustment of the remanence $B_r$ is possible.
After the magnetization procedure, the magnetic field between the segments is measured and compared with simulation results. Fig.\,\ref{fig:spin_flipper_2}(b) shows a volume scan between the segments and a line scan of $B_z$ at $y=0$~mm, $z=0$~mm for the measured topology optimized version and the former Helmholtz coil geometry. Simulation results for $B_r=61$~mT and measurements are in good agreement. The vector fields illustrate the homogeneity of the measurements and simulations, respectively.
\begin{figure}\label{fig:spin_flipper_result}
\end{figure}
\section{Performance in neutron optical experiments}
\subsection{Polarimeter experiment}
The real performance of the 3D printed Larmor spin-rotator can only be tested in a neutron experiment. For this reason, interference measurements with neutrons are performed at the TRIGA MARK-II reactor at the Atominstitut. Fig.~\ref{fig:exp_setups} shows an illustration of the experimental setups. The gap $\Delta z$, and therefore the action $\Theta$, can be adjusted by a 3D printed mounting system with counter-rotating threads. In the polarimeter experiment, depicted in Fig.\,\ref{fig:exp_setups}(a), the first direct current spin turner DC1 rotates the spin of the neutrons into the flight direction by an angle of $\pi/2$; a cadmium aperture then reduces the beam. After the aperture, the beam enters the 3D printed Larmor spin-rotator. The intensity modulation is created by varying the position of DC2, which causes another $\pi/2$ spin-rotation around the $x$-axis. After DC2, the neutrons pass through a supermirror analyzer, transmitting only the $\ket{+z}$-spin component into the detector.
Interference patterns for different $\Delta z$ are plotted in Fig.\,\ref{fig:spin_flipper_result}(b). These are only test measurements with a small aperture opening of $3\times5$~mm$^2$. The reference measurement (black curve), from which the phase shifts are measured, is done without the Larmor spin-rotator. The measured phase shifts directly translate to the spin-rotation angle $\alpha$. First measurements show a spin-rotation of $\alpha=\pi$ for a gap of $\Delta z=2.25$~mm with the original remanence of $B_r = 61$~mT, which agrees very well with the simulation for this action $\Theta$. Another crucial parameter is the spin contrast $C_S$ of the setup, which can be calculated from $C_S = \frac{I_0 - I_\pi}{I_0 + I_\pi}$, where $I_\pi$ ($I_0$) is the intensity behind the spin analyzer after a $\pi$-flip (no flip). Here, a contrast of more than $95$~\% is achieved for a gap of $\Delta z=2.25$~mm.
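The contrast formula above is a one-liner; the detector counts below are hypothetical and chosen only to be consistent with the reported contrast of more than $95$~\%:

```python
def spin_contrast(I_0, I_pi):
    """Spin contrast C_S = (I_0 - I_pi) / (I_0 + I_pi),
    from the intensities without and with a pi-flip."""
    return (I_0 - I_pi) / (I_0 + I_pi)

# hypothetical detector counts (no flip vs. pi-flip)
C_S = spin_contrast(1000, 20)
print(f"C_S = {C_S:.3f}")  # 0.961
```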
However, the measurements showed such a promising linearity of the phase shift with $\Delta z$, that the 3D printed magnets were magnetized to a four times higher remanence of $B_r = 244$ mT, in order to have a wider tuning range of the spin-rotation angle $\alpha$. Fig.\,\ref{fig:polarimeter_result} shows plots of $\alpha$ calculated from phase shifts of the interference patterns against $\Delta z$. These polarimetric measurements are carried out with an improved motorized adjustment mechanism for the gap distance and a wider aperture of $7\times 7$~mm$^2$. They show a good linearity of the spin-rotation angle against the gap size.
\begin{figure}\label{fig:polarimeter_result}
\end{figure}
\begin{figure}\label{fig:result_magnetfeld}
\end{figure}
\subsection{Interferometer experiment}
To demonstrate the performance of the newly fabricated spin-rotator at its intended application inside a perfect crystal neutron interferometer, measurements are carried out at the Atominstitut. The experiment is comparable to the first demonstration of the $4\pi$-spinor symmetry of neutrons \cite{rauch1975fourpi}; its setup is depicted in Fig.\,\ref{fig:exp_setups}(b). The unpolarized neutron beam with a cross section of $10 \times 10$~mm$^2$ is split into paths $\ket{I}$ and $\ket{II}$ at the first interferometer plate. The phase shifter plate can be rotated in order to create a phase difference $\chi$ between the two paths. The 3D printed Larmor spin-rotator with variable $\Delta z$ is placed in path $I$. At the last interferometer plate, the two paths are recombined and leave the interferometer as two separate beams, the O beam in the forward direction and the reflected H beam. Only the O beam undergoes the same number of reflections and transmissions in both paths and is therefore able to show maximum interferometric contrast $C$ when the phase shift $\chi$ is varied. The O beam intensity of an empty interferometer as a function of $\chi$ can be written as \cite{RauchBook}
\begin{align}
I_O(\chi) = \lvert \Psi_0 \rvert^2 (1 + C \cos\chi).
\end{align}
Here, $C$ is an empirical parameter that depends on the interferometer used, temperature gradients, vibrations, etc. The density matrix of an unpolarized neutron in the interferometer, assuming perfect contrast $C=1$, can be written as the direct product
\begin{align}
\rho = \frac{1}{2}\dyad{\psi_i}\otimes\frac{1}{2}(\ket{\uparrow}\!\!\bra{\uparrow}+\dyad{\downarrow}),
\end{align}
where the first term in the product is the path state $\ket{\psi_i} = \frac{1}{\sqrt{2}}(\ket{I} + e^{i \chi}\ket{II})$, which depends on the phase shift $\chi$, and the second term is the maximally mixed spin state. The interaction with the local magnetic field in path $I$ can be modeled as
\begin{align}
U_{int}(\alpha) = \dyad{I}\otimes e^{i \frac{\alpha}{2} \sigma_z} + \dyad{II} \otimes \mathbbm{1},
\end{align}
where $\alpha$ depends on $B_{loc}(\Delta z)$ and is given by equation \eqref{eq:alpha} when $B$ is homogeneous. At the last interferometer plate, the projector $P_{f}=\dyad{\psi_f}\otimes\mathbbm{1}$ acts on the path state and projects it onto the state $\ket{\psi_f} = \frac{1}{\sqrt{2}}(\ket{I} + \ket{II})$. Therefore, the intensity at the O detector is given by
\begin{equation}
\begin{split}
I_O(\chi,\alpha(\Delta z))
&\propto \frac{1}{C}\Tr(P_{f} U_{int} \rho U_{int}^\dagger P_{f}^\dagger) + \frac{C-1}{C} \\
&= \frac{1}{2}\left(1 + C \cos(\frac{\alpha}{2})\cos(\chi)\right),
\end{split}
\end{equation}
where the empirical prefactor $C$ has been reintroduced in front of the interference term. The interferometric contrast $C\cos(\alpha/2)$ is reduced to zero for odd multiples $\alpha = \pi, 3\pi, \hdots$ of the spin-rotation angle, i.e., when the spin states in the two paths are orthogonal. At $\alpha = 2 \pi$ the interferogram exhibits a phase shift of $\pi$ compared to $\alpha = 0$, i.e., with the magnets removed. Only at $\alpha = 4\pi$ does the interferogram return to its initial contrast and phase, which demonstrates the $4\pi$-symmetry of the neutron as a spin-$1/2$ particle.
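The $4\pi$-symmetry encoded in the intensity formula can be checked directly; the following sketch (our own illustration, idealized $C=1$) evaluates the effective contrast factor $\cos(\alpha/2)$ at the special rotation angles:

```python
import math

def I_O(chi, alpha, C=1.0):
    """O-beam intensity I_O = (1/2) * (1 + C * cos(alpha/2) * cos(chi))."""
    return 0.5 * (1.0 + C * math.cos(alpha / 2.0) * math.cos(chi))

# effective contrast C*cos(alpha/2) at the special spin-rotation angles
for alpha in (0.0, math.pi, 2.0 * math.pi, 4.0 * math.pi):
    print(f"alpha = {alpha / math.pi:.0f} pi -> contrast factor {math.cos(alpha / 2.0):+.3f}")
# alpha = pi: contrast vanishes; alpha = 2 pi: sign flip (phase shift of pi);
# alpha = 4 pi: contrast and phase fully restored
```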
In order to experimentally demonstrate this $4\pi$-symmetry in a neutron interferometer experiment, three different magnetization strengths of the 3D printed Larmor spin-rotator are used, each identified by the magnetic field strength $B_{\rm{ref}}$ measured in the center point at $\Delta z = 2.25$ mm. Measurements of the dependence of the magnetic field on the distance $\Delta z$ are plotted in Fig.\,\ref{fig:result_magnetfeld}. For distances upwards of 8 mm, the magnetic field ceases to be linear. Nevertheless, interferograms are recorded for $\Delta z$ up to 10 mm and plotted against the magnetic field calculated from fits with a quadratic polynomial. In Fig.\,\ref{fig:interferometer_result} the results of the interferometric measurements are depicted, which clearly reproduce the $4\pi$-symmetry in the experiment.
\begin{figure}\label{fig:interferometer_result}
\end{figure}
\section{Discussion and Outlook}
The combination of 3D printing and topology optimization using FEM methods offers new possibilities for tackling design problems, e.g., restrictions in the available space to implement physical interactions. The design goal of this work, a Larmor spin-rotator producing a region of homogeneous magnetic field density that is also tunable in magnitude, is met by a setup of four segments located at the sides of the field box. Tunability of the magnetic field strength is realized by introducing a variable gap between the top and bottom halves of the spin-rotator. The new method can be used in applications where electromagnetic coils are unfavorable, e.g., due to heat dissipation or undesired inductances. Future developments could include different field configurations. One example is a spin-rotation field perpendicular to the guide field, which requires a non-adiabatic field transition. Another possible field configuration is that of a wiggler, which consists of stacked regions of anti-parallel magnetic fields along the beam path. The spatial variation of the magnetic fields in the wiggler leads to resonant Rabi flops of the spin, similar to the working principle of a resonant-frequency spin-flipper, where the variation happens in the time domain. Using such devices, it is also possible to shape the beam profile in momentum space. The new Larmor spin-rotator design developed in this work can be seen as a first step toward custom magnetic field shaping using 3D printed permanent magnets in neutron optics.
\section{Conclusion}
We have developed a new method for coherent Larmor spin-rotation control using topology optimized 3D printed magnets for applications in neutron interferometer experiments where the rotation axis is parallel to the outer guide field. The magnetic action can be adjusted linearly by varying the distance between the magnets. We have shown that spin-rotation angles of more than $4 \pi$ are possible, depending on the initial magnetization strength. Measurements are in good agreement with simulations and show a spin contrast of more than 95\%, comparable to older methods using Helmholtz coils. The advantages of this new method are that unwanted inductances are avoided and that no heat is dissipated by current-carrying wires, which prevents a reduction in interferometric contrast due to temperature gradients and removes the need for water cooling.
\begin{thebibliography}{40}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Rauch}\ \emph {et~al.}(1974)\citenamefont {Rauch},
\citenamefont {Treimer},\ and\ \citenamefont {Bonse}}]{Rauch74}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Rauch}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Treimer}}, \
and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Bonse}},\ }\href
{\doibase DOI: 10.1016/0375-9601(74)90132-7} {\bibfield {journal} {\bibinfo
{journal} {Phys. Lett. A}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo
{pages} {369 } (\bibinfo {year} {1974})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rauch}\ and\ \citenamefont
{Werner}(2000)}]{RauchBook}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Rauch}}\ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont
{Werner}},\ }\href@noop {} {\emph {\bibinfo {title} {Neutron
Interferometry}}}\ (\bibinfo {publisher} {Clarendon Press, Oxford},\
\bibinfo {year} {2000})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Summhammer}\ \emph {et~al.}(1983)\citenamefont
{Summhammer}, \citenamefont {Badurek}, \citenamefont {Rauch}, \citenamefont
{Kischko},\ and\ \citenamefont {Zeilinger}}]{Badurek83Direct}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Summhammer}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Rauch}}, \bibinfo
{author} {\bibfnamefont {U.}~\bibnamefont {Kischko}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1103/PhysRevA.27.2523} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {27}},\ \bibinfo {pages} {2523}
(\bibinfo {year} {1983})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Badurek}\ \emph {et~al.}(1983)\citenamefont
{Badurek}, \citenamefont {Rauch},\ and\ \citenamefont
{Summhammer}}]{Badurek83TimeDepend}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Badurek}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Rauch}}, \
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Summhammer}},\
}\href {\doibase 10.1103/PhysRevLett.51.1015} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo
{pages} {1015} (\bibinfo {year} {1983})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Colella}\ \emph {et~al.}(1975)\citenamefont
{Colella}, \citenamefont {Overhauser},\ and\ \citenamefont
{Werner}}]{Colella75}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Colella}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{Overhauser}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont
{Werner}},\ }\href {\doibase 10.1103/PhysRevLett.34.1472} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {34}},\ \bibinfo {pages} {1472} (\bibinfo {year}
{1975})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Werner}\ \emph {et~al.}(1979)\citenamefont {Werner},
\citenamefont {Staudenmann},\ and\ \citenamefont {Colella}}]{Werner79}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont
{Werner}}, \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont
{Staudenmann}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Colella}},\ }\href {\doibase 10.1103/PhysRevLett.42.1103} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {42}},\ \bibinfo {pages} {1103} (\bibinfo {year}
{1979})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {{Bertlmann}}\ and\ \citenamefont
{{Zeilinger}}(2002)}]{BookBertlmannZeilinger}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~A.}\ \bibnamefont
{{Bertlmann}}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Zeilinger}}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum
{[Un]}speakables, from {Bell} to Quantum Information}}},\ edited by\ \bibinfo
{editor} {\bibfnamefont {R.~A.}\ \bibnamefont {{Bertlmann}}}\ and\ \bibinfo
{editor} {\bibfnamefont {A.}~\bibnamefont {{Zeilinger}}},\ Springer Verlag,
Heidelberg\ (\bibinfo {year} {2002})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Freedman}\ and\ \citenamefont
{Clauser}(1972)}]{Freedman72}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{Freedman}}\ and\ \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont
{Clauser}},\ }\href {\doibase 10.1103/PhysRevLett.28.938} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {28}},\ \bibinfo {pages} {938} (\bibinfo {year} {1972})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Aspect}\ \emph {et~al.}(1982)\citenamefont {Aspect},
\citenamefont {Grangier},\ and\ \citenamefont {Roger}}]{Aspect82}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Aspect}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Grangier}}, \
and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Roger}},\ }\href
{\doibase 10.1103/PhysRevLett.49.91} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {49}},\ \bibinfo
{pages} {91} (\bibinfo {year} {1982})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kwiat}\ \emph {et~al.}(1995)\citenamefont {Kwiat},
\citenamefont {Mattle}, \citenamefont {Weinfurter}, \citenamefont
{Zeilinger}, \citenamefont {Sergienko},\ and\ \citenamefont
{Shih}}]{Kwiat95}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~G.}\ \bibnamefont
{Kwiat}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mattle}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}}, \bibinfo {author}
{\bibfnamefont {A.~V.}\ \bibnamefont {Sergienko}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Shih}},\ }\href {\doibase
10.1103/PhysRevLett.75.4337} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo {pages}
{4337} (\bibinfo {year} {1995})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Weihs}\ \emph {et~al.}(1998)\citenamefont {Weihs},
\citenamefont {Jennewein}, \citenamefont {Simon}, \citenamefont
{Weinfurter},\ and\ \citenamefont {Zeilinger}}]{Weihs98}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Weihs}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Simon}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1103/PhysRevLett.81.5039} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages}
{5039} (\bibinfo {year} {1998})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Tittel}\ \emph {et~al.}(1998)\citenamefont {Tittel},
\citenamefont {Brendel}, \citenamefont {Zbinden},\ and\ \citenamefont
{Gisin}}]{Tittel98}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Tittel}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Brendel}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Zbinden}}, \ and\
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}},\ }\href
{\doibase 10.1103/PhysRevLett.81.3563} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo
{pages} {3563} (\bibinfo {year} {1998})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rowe}\ \emph {et~al.}(2001)\citenamefont {Rowe},
\citenamefont {Kielpinski}, \citenamefont {Meyer}, \citenamefont {Sackett},
\citenamefont {Itano}, \citenamefont {Monroe},\ and\ \citenamefont
{Wineland}}]{Rowe01}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Rowe}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Kielpinski}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Meyer}}, \bibinfo
{author} {\bibfnamefont {C.~A.}\ \bibnamefont {Sackett}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Itano}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Monroe}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\
\bibnamefont {Wineland}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {409}},\ \bibinfo
{pages} {791} (\bibinfo {year} {2001})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Giustina}\ \emph {et~al.}(2015)\citenamefont
{Giustina}, \citenamefont {Versteegh}, \citenamefont {Wengerowsky},
\citenamefont {Handsteiner}, \citenamefont {Hochrainer}, \citenamefont
{Phelan}, \citenamefont {Steinlechner}, \citenamefont {Kofler}, \citenamefont
{Larsson}, \citenamefont {Abell\'an}, \citenamefont {Amaya}, \citenamefont
{Pruneri}, \citenamefont {Mitchell}, \citenamefont {Beyer}, \citenamefont
{Gerrits}, \citenamefont {Lita}, \citenamefont {Shalm}, \citenamefont {Nam},
\citenamefont {Scheidl}, \citenamefont {Ursin}, \citenamefont {Wittmann},\
and\ \citenamefont {Zeilinger}}]{Zeilinger15}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Giustina}}, \bibinfo {author} {\bibfnamefont {M.~A.~M.}\ \bibnamefont
{Versteegh}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Wengerowsky}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Handsteiner}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Hochrainer}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Phelan}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Steinlechner}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Kofler}}, \bibinfo {author}
{\bibfnamefont {J.-A.}\ \bibnamefont {Larsson}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Abell\'an}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Amaya}}, \bibinfo {author} {\bibfnamefont
{V.}~\bibnamefont {Pruneri}}, \bibinfo {author} {\bibfnamefont {M.~W.}\
\bibnamefont {Mitchell}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Beyer}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Gerrits}},
\bibinfo {author} {\bibfnamefont {A.~E.}\ \bibnamefont {Lita}}, \bibinfo
{author} {\bibfnamefont {L.~K.}\ \bibnamefont {Shalm}}, \bibinfo {author}
{\bibfnamefont {S.~W.}\ \bibnamefont {Nam}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Scheidl}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Ursin}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Wittmann}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase
10.1103/PhysRevLett.115.250401} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages}
{250401} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hensen}\ \emph {et~al.}(2015)\citenamefont {Hensen},
\citenamefont {Bernien}, \citenamefont {Dreau}, \citenamefont {Reiserer},
\citenamefont {Kalb}, \citenamefont {Blok}, \citenamefont {Ruitenberg},
\citenamefont {Vermeulen}, \citenamefont {Schouten}, \citenamefont {Abellan},
\citenamefont {Amaya}, \citenamefont {Pruneri}, \citenamefont {Mitchell},
\citenamefont {Markham}, \citenamefont {Twitchen}, \citenamefont {Elkouss},
\citenamefont {Wehner}, \citenamefont {Taminiau},\ and\ \citenamefont
{Hanson}}]{Hensen15}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Hensen}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bernien}},
\bibinfo {author} {\bibfnamefont {A.~E.}\ \bibnamefont {Dreau}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Reiserer}}, \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Kalb}}, \bibinfo {author} {\bibfnamefont
{M.~S.}\ \bibnamefont {Blok}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Ruitenberg}}, \bibinfo {author} {\bibfnamefont {R.~F.~L.}\
\bibnamefont {Vermeulen}}, \bibinfo {author} {\bibfnamefont {R.~N.}\
\bibnamefont {Schouten}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Abellan}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Amaya}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Pruneri}}, \bibinfo
{author} {\bibfnamefont {M.~W.}\ \bibnamefont {Mitchell}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Markham}}, \bibinfo {author} {\bibfnamefont
{D.~J.}\ \bibnamefont {Twitchen}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Elkouss}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Wehner}}, \bibinfo {author} {\bibfnamefont {T.~H.}\
\bibnamefont {Taminiau}}, \ and\ \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Hanson}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Nature (London)}\ }\textbf {\bibinfo {volume} {526}},\
\bibinfo {pages} {682} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Moehring}\ \emph {et~al.}(2004)\citenamefont
{Moehring}, \citenamefont {Madsen}, \citenamefont {Blinov},\ and\
\citenamefont {Monroe}}]{Moehring04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont
{Moehring}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Madsen}}, \bibinfo {author} {\bibfnamefont {B.~B.}\ \bibnamefont {Blinov}},
\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}},\ }\href
{\doibase 10.1103/PhysRevLett.93.090410} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {090410} (\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Matsukevich}\ \emph {et~al.}(2008)\citenamefont
{Matsukevich}, \citenamefont {Maunz}, \citenamefont {Moehring}, \citenamefont
{Olmschenk},\ and\ \citenamefont {Monroe}}]{Matsukevich2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~N.}\ \bibnamefont
{Matsukevich}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Maunz}},
\bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont {Moehring}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Olmschenk}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Monroe}},\ }\href {\doibase
10.1103/PhysRevLett.100.150404} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages}
{150404} (\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hasegawa}\ \emph {et~al.}(2003)\citenamefont
{Hasegawa}, \citenamefont {Loidl}, \citenamefont {Badurek}, \citenamefont
{Baron},\ and\ \citenamefont {Rauch}}]{hasegawa2003violation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Hasegawa}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Loidl}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Baron}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Rauch}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {425}},\
\bibinfo {pages} {45} (\bibinfo {year} {2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sakai}\ \emph {et~al.}(2006)\citenamefont {Sakai},
\citenamefont {Saito}, \citenamefont {Ikeda}, \citenamefont {Itoh},
\citenamefont {Kawabata}, \citenamefont {Kuboki}, \citenamefont {Maeda},
\citenamefont {Matsui}, \citenamefont {Rangacharyulu}, \citenamefont
{Sasano}, \citenamefont {Satou}, \citenamefont {Sekiguchi}, \citenamefont
{Suda}, \citenamefont {Tamii}, \citenamefont {Uesaka},\ and\ \citenamefont
{Yako}}]{Sakai06}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Sakai}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Saito}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Ikeda}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Itoh}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Kawabata}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Kuboki}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Maeda}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Matsui}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Rangacharyulu}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Sasano}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Satou}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Sekiguchi}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Suda}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Tamii}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Uesaka}}, \
and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Yako}},\ }\href
{\doibase 10.1103/PhysRevLett.97.150405} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo
{pages} {150405} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Geppert-Kleinrath}\ \emph {et~al.}(2018)\citenamefont
{Geppert-Kleinrath}, \citenamefont {Denkmayr}, \citenamefont {Sponar},
\citenamefont {Lemmel}, \citenamefont {Jenke},\ and\ \citenamefont
{Hasegawa}}]{Geppert18}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Geppert-Kleinrath}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Denkmayr}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sponar}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lemmel}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Jenke}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Hasegawa}},\ }\href {\doibase
10.1103/PhysRevA.97.052111} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages} {052111}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hasegawa}\ \emph {et~al.}(2006)\citenamefont
{Hasegawa}, \citenamefont {Loidl}, \citenamefont {Badurek}, \citenamefont
{Baron},\ and\ \citenamefont {Rauch}}]{Hasegawa06}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Hasegawa}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Loidl}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Baron}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Rauch}},\ }\href
{http://link.aps.org/abstract/PRL/v97/e230401} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {97}},\
\bibinfo {eid} {230401} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bartosik}\ \emph {et~al.}(2009)\citenamefont
{Bartosik}, \citenamefont {Klepp}, \citenamefont {Schmitzer}, \citenamefont
{Sponar}, \citenamefont {Cabello}, \citenamefont {Rauch},\ and\ \citenamefont
{Hasegawa}}]{Bartosik09}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bartosik}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Klepp}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Schmitzer}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Sponar}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Cabello}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Rauch}}, \ and\ \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hasegawa}},\ }\href {\doibase
10.1103/PhysRevLett.103.040403} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {103}},\ \bibinfo {pages}
{040403} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bell}(1966)}]{Bell66}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Bell}},\ }\href {\doibase 10.1103/RevModPhys.38.447} {\bibfield {journal}
{\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {38}},\
\bibinfo {pages} {447} (\bibinfo {year} {1966})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mermin}(1993)}]{Mermin93}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont
{Mermin}},\ }\href {\doibase 10.1103/RevModPhys.65.803} {\bibfield {journal}
{\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {65}},\
\bibinfo {pages} {803} (\bibinfo {year} {1993})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Greenberger}\ \emph {et~al.}(1989)\citenamefont
{Greenberger}, \citenamefont {Horne},\ and\ \citenamefont
{Zeilinger}}]{GHZ89Pro}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Greenberger}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Horne}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Zeilinger}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Bell's
Theorem, Quantum Theory, and Concepts of the Universe}}},\ \bibinfo {editor}
{edited by\ \bibinfo {editor} {\bibfnamefont {M.}~\bibnamefont {Kafatos}}}\
(\bibinfo {publisher} {Kluwer Academic, Dordrecht, The Netherlands},\
\bibinfo {year} {1989})\ pp.\ \bibinfo {pages} {73--76}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Greenberger}\ \emph {et~al.}(1990)\citenamefont
  {Greenberger}, \citenamefont {Horne}, \citenamefont {Shimony},\ and\
  \citenamefont {Zeilinger}}]{GHZ90}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Greenberger}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Horne}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Shimony}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\
}\href {\doibase 10.1119/1.16243} {\bibfield {journal} {\bibinfo
{journal} {Am. J. Phys.}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo
{pages} {1131} (\bibinfo {year} {1990})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hasegawa}\ \emph {et~al.}(2010)\citenamefont
{Hasegawa}, \citenamefont {Loidl}, \citenamefont {Badurek}, \citenamefont
{Durstberger-Rennhofer}, \citenamefont {Sponar},\ and\ \citenamefont
{Rauch}}]{Hasegawa10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Hasegawa}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Loidl}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Durstberger-Rennhofer}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Sponar}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Rauch}},\ }\href {\doibase
10.1103/PhysRevA.81.032121} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {032121}
(\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Sponar}\ \emph {et~al.}(2015)\citenamefont {Sponar},
\citenamefont {Denkmayr}, \citenamefont {Geppert}, \citenamefont {Lemmel},
\citenamefont {Matzkin}, \citenamefont {Tollaksen},\ and\ \citenamefont
{Hasegawa}}]{Sponar15}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Sponar}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Denkmayr}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Geppert}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Lemmel}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Matzkin}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Tollaksen}}, \ and\ \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hasegawa}},\ }\href {\doibase 10.1103/PhysRevA.92.062121}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {92}},\ \bibinfo {pages} {062121} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(1988)\citenamefont
{Aharonov}, \citenamefont {Albert},\ and\ \citenamefont
{Vaidman}}]{Aharonov88}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Aharonov}}, \bibinfo {author} {\bibfnamefont {D.~Z.}\ \bibnamefont
{Albert}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Vaidman}},\ }\href {\doibase 10.1103/PhysRevLett.60.1351} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {60}},\ \bibinfo {pages} {1351} (\bibinfo {year}
{1988})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Denkmayr}\ \emph {et~al.}(2014)\citenamefont
{Denkmayr}, \citenamefont {Geppert}, \citenamefont {Sponar}, \citenamefont
{Lemmel}, \citenamefont {Matzkin}, \citenamefont {Tollaksen},\ and\
\citenamefont {Hasegawa}}]{Denkmayr14}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Denkmayr}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Geppert}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sponar}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Lemmel}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Matzkin}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Tollaksen}}, \ and\ \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hasegawa}},\ }\href {\doibase 10.1038/ncomms5492}
{\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo
{volume} {5}},\ \bibinfo {pages} {4492} (\bibinfo {year} {2014})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Waegell}\ \emph {et~al.}(2017)\citenamefont
{Waegell}, \citenamefont {Denkmayr}, \citenamefont {Geppert}, \citenamefont
{Ebner}, \citenamefont {Jenke}, \citenamefont {Hasegawa}, \citenamefont
{Sponar}, \citenamefont {Dressel},\ and\ \citenamefont {Tollaksen}}]{Cai17}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Waegell}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Denkmayr}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Geppert}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Ebner}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Jenke}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hasegawa}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Sponar}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Dressel}}, \ and\ \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Tollaksen}},\ }\href {\doibase 10.1103/PhysRevA.96.052131}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {96}},\ \bibinfo {pages} {052131} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Denkmayr}\ \emph {et~al.}(2017)\citenamefont
{Denkmayr}, \citenamefont {Geppert}, \citenamefont {Lemmel}, \citenamefont
{Waegell}, \citenamefont {Dressel}, \citenamefont {Hasegawa},\ and\
\citenamefont {Sponar}}]{Denkmayr17}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Denkmayr}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Geppert}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lemmel}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Waegell}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Dressel}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Hasegawa}}, \ and\ \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Sponar}},\ }\href {\doibase
10.1103/PhysRevLett.118.010402} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo {pages}
{010402} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Geppert}\ \emph {et~al.}(2014)\citenamefont
{Geppert}, \citenamefont {Denkmayr}, \citenamefont {Sponar}, \citenamefont
{Lemmel},\ and\ \citenamefont {Hasegawa}}]{geppert2014improvement}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Geppert}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Denkmayr}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sponar}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Lemmel}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Hasegawa}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nuclear Instruments and Methods in Physics
Research Section A: Accelerators, Spectrometers, Detectors and Associated
Equipment}\ }\textbf {\bibinfo {volume} {763}},\ \bibinfo {pages} {417}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Abert}\ \emph {et~al.}(2013)\citenamefont {Abert},
\citenamefont {Exl}, \citenamefont {Bruckner}, \citenamefont {Drews},\ and\
\citenamefont {Suess}}]{magnumfe}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Abert}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Exl}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Bruckner}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Drews}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Suess}},\ }\href {\doibase
https://doi.org/10.1016/j.jmmm.2013.05.051} {\bibfield {journal} {\bibinfo
{journal} {Journal of Magnetism and Magnetic Materials}\ }\textbf {\bibinfo
{volume} {345}},\ \bibinfo {pages} {29} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bruckner}\ \emph {et~al.}(2017)\citenamefont
{Bruckner}, \citenamefont {Abert}, \citenamefont {Wautischer}, \citenamefont
{Huber}, \citenamefont {Vogler}, \citenamefont {Hinze},\ and\ \citenamefont
{Suess}}]{bruckner2017solving}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Bruckner}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Abert}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Wautischer}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Huber}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Vogler}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Hinze}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Suess}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo
{pages} {40816} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Huber}\ \emph
{et~al.}(2017{\natexlab{a}})\citenamefont {Huber}, \citenamefont {Abert},
\citenamefont {Bruckner}, \citenamefont {Groenefeld}, \citenamefont
{Schuschnigg}, \citenamefont {Teliban}, \citenamefont {Vogler}, \citenamefont
{Wautischer}, \citenamefont {Windl},\ and\ \citenamefont
{Suess}}]{huber20173d}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Huber}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Abert}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bruckner}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Groenefeld}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Schuschnigg}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Teliban}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Vogler}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Wautischer}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Windl}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Suess}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo
{pages} {9419} (\bibinfo {year} {2017}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Huber}\ \emph
{et~al.}(2017{\natexlab{b}})\citenamefont {Huber}, \citenamefont {Abert},
\citenamefont {Bruckner}, \citenamefont {Pfaff}, \citenamefont {Kriwet},
\citenamefont {Groenefeld}, \citenamefont {Teliban}, \citenamefont {Vogler},\
and\ \citenamefont {Suess}}]{huber2017topology}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Huber}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Abert}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bruckner}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Pfaff}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Kriwet}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Groenefeld}}, \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Teliban}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Vogler}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Suess}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Journal of Applied Physics}\ }\textbf {\bibinfo {volume} {122}},\
\bibinfo {pages} {053904} (\bibinfo {year} {2017}{\natexlab{b}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Bj{\o}rk}\ \emph {et~al.}(2010)\citenamefont
{Bj{\o}rk}, \citenamefont {Bahl}, \citenamefont {Smith},\ and\ \citenamefont
{Pryds}}]{bjork2010comparison}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Bj{\o}rk}}, \bibinfo {author} {\bibfnamefont {C.~R.~H.}\ \bibnamefont
{Bahl}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Smith}}, \ and\
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Pryds}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Journal of Magnetism and Magnetic
Materials}\ }\textbf {\bibinfo {volume} {322}},\ \bibinfo {pages} {3664}
(\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Huber}\ \emph {et~al.}(2016)\citenamefont {Huber},
\citenamefont {Abert}, \citenamefont {Bruckner}, \citenamefont {Groenefeld},
\citenamefont {Muthsam}, \citenamefont {Schuschnigg}, \citenamefont {Sirak},
\citenamefont {Thanhoffer}, \citenamefont {Teliban}, \citenamefont {Vogler}
\emph {et~al.}}]{huber20163d}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Huber}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Abert}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bruckner}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Groenefeld}}, \bibinfo {author}
{\bibfnamefont {O.}~\bibnamefont {Muthsam}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Schuschnigg}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Sirak}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Thanhoffer}}, \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Teliban}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Vogler}}, \emph {et~al.},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Applied Physics Letters}\ }\textbf {\bibinfo
{volume} {109}},\ \bibinfo {pages} {162401} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rauch}\ \emph {et~al.}(1975)\citenamefont {Rauch},
\citenamefont {Zeilinger}, \citenamefont {Badurek}, \citenamefont {Wilfing},
\citenamefont {Bauspiess},\ and\ \citenamefont {Bonse}}]{rauch1975fourpi}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Rauch}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Badurek}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Wilfing}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Bauspiess}}, \ and\ \bibinfo {author}
{\bibfnamefont {U.}~\bibnamefont {Bonse}},\ }\href {\doibase
https://doi.org/10.1016/0375-9601(75)90798-7} {\bibfield {journal} {\bibinfo
{journal} {Physics Letters A}\ }\textbf {\bibinfo {volume} {54}},\ \bibinfo
{pages} {425} (\bibinfo {year} {1975})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title[Optimal consumption of multiple goods]{Optimal consumption of multiple goods in incomplete markets}
\author{Oleksii Mostovyi}
\address{Oleksii Mostovyi, Department of Mathematics, University of Connecticut (USA)}
\email{[email protected]}
\thanks{
The author would like to thank Robert C. Merton for suggesting this problem and for a discussion on the subject of the paper. The author is also thankful to an anonymous referee for valuable comments. The author's research is supported by NSF Grant DMS-1600307. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect those of the National Science Foundation.}
\date{\today}
\subjclass[2010]{91G10, 93E20. \textit{JEL Classification:} C61, G11.}
\keywords{optimal consumption, multiple goods, utility maximization, no unbounded profit with bounded risk, arbitrage of the first kind, local martingale deflator, duality theory, semimartingale, incomplete market, optimal investment}
\begin{abstract}
We consider the problem of optimal consumption of multiple goods in incomplete semimartingale markets. We formulate the dual problem and identify conditions that allow for existence and uniqueness of the solution and give a characterization of the optimal consumption strategy in terms of the dual optimizer.
We illustrate our results with examples in both complete and incomplete models. In particular, we construct closed-form solutions in some incomplete models.
\end{abstract}
\maketitle
\section{Introduction}
The problem of optimal consumption of multiple goods has been investigated in \cite{Fisher75, Breeden79}.
For a single consumption good in continuous-time settings, the problem was first formulated in \cite{Merton69}. Since then, it has been analyzed in a large number of papers, in both complete and incomplete settings, using techniques based on Hamilton--Jacobi--Bellman equations, backward stochastic differential equations, and convex duality.
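For orientation, recall the classical closed-form solution of \cite{Merton69} in the single-good Black--Scholes setting with CRRA utility $U(c)=c^{1-\gamma}/(1-\gamma)$, risk aversion $\gamma>0$, drift $\mu$, volatility $\sigma$, interest rate $r$, and discount rate $\rho$: over an infinite horizon, the optimal portfolio weight and consumption-to-wealth ratio are constant,
\begin{equation*}
\pi^{*}=\frac{\mu-r}{\gamma\sigma^{2}},
\qquad
\frac{c^{*}_{t}}{W_{t}}=\frac{\rho}{\gamma}+\Bigl(1-\frac{1}{\gamma}\Bigr)\Bigl(r+\frac{(\mu-r)^{2}}{2\gamma\sigma^{2}}\Bigr),
\end{equation*}
provided the latter expression is positive (for $\gamma=1$, logarithmic utility, the consumption rate is simply $\rho$). Multiple goods and market incompleteness generally destroy such explicit formulas.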
In the present paper, we formulate a problem of optimal consumption of multiple goods in a general incomplete semimartingale model of a financial market. We construct the dual problem and characterize optimal consumption policies in terms of the solution to the dual problem. We also identify mathematical conditions that allow for existence and uniqueness of the solution and a dual characterization. We illustrate our results by examples, where in particular we obtain closed-form solutions in incomplete markets. Our proofs rely on certain results on weakly measurable correspondences for Carath\'eodory functions, multidimensional convex-analytic techniques, and some recent advances in stochastic analysis in mathematical finance, in particular, the characterization of the ``no unbounded profit with bounded risk'' condition in terms of non-emptiness of the set of equivalent local martingale deflators from \cite{MostovyiNUPBR, KabanovKardarasSong} and sharp conditions for solvability of the expected utility maximization problem in a single good setting from \cite{Mostovyi2015}.
The remainder of this paper is organized as follows: in Section \ref{sec:2} we specify the model setting, formulate the problem, and state main results (in Theorem \ref{mainTheorem}). In Section \ref{Examples} we discuss various specific cases. In particular, we present there the structure of the solution in complete models and the additive utility case as well as closed-form solutions in some incomplete models (with and without an additive structure of the utility). We conclude the paper with Section \ref{proofs}, which contains proofs.
\section{Setting and main results} \label{sec:2}
\subsection{Setting} \label{sec:setting}
Let $\widetilde S=(\widetilde S_t)_{t\geq0}$ be an $\mathbb{R}^d$-valued semimartingale, representing the discounted prices\footnote{Since we allow preferences to be stochastic (see the definition below), there is no loss of generality in assuming that asset prices are discounted, see \cite[Remark 2.2]{Mostovyi2015} for a more detailed explanation of this observation.} of $d$ risky assets on a complete stochastic basis $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,\infty)},\mathbb{P})$, with $\mathcal{F}_0$ being
the trivial $\sigma$-algebra.
We fix a \emph{stochastic clock} $\kappa=(\kappa_t)_{t\geq0}$, which is a nondecreasing, c\`adl\`ag, adapted process, such that
\begin{equation} \label{clock}
\kappa_0 = 0,
\qquad
\mathbb{P}(\kappa_{\infty}>0)>0
\qquad\text{and}\qquad
\kappa_{\infty}\leq \bar A,
\end{equation}
where $\bar A$ is a positive constant.
The stochastic clock $\kappa$ specifies times when consumption is assumed to occur.
Various optimal investment-consumption problems can be recovered from the present general setting by suitably specifying the clock process $\kappa$. For example, the problem of maximizing expected utility of terminal wealth at some finite investment horizon $T<\infty$ can be recovered by simply letting $\kappa\triangleq\mathbb{I}_{\dbraco{T,\infty}}$. Likewise, maximization of expected utility from consumption only up to a finite horizon $T<\infty$ can be obtained by letting $\kappa_t\triangleq \min(t, T)$, for $t\geq0$.
Other specifications include maximization of utility from lifetime consumption, from consumption at a finite set of stopping times, and from terminal wealth at a random horizon, see e.g., \cite[Examples 2.5-2.9]{Mostovyi2015} for a description of possible standard choices of the clock process~$\kappa$.
We suppose that there are $m$ different consumption goods, where $S^k_t$ denotes the discounted price of commodity $k$ at time $t$. We assume that for each $k\in\{1,\dots, m\}$, $S^k=(S^k_t)_{t\geq0}$ is a {\it strictly} positive optional process
on $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,\infty)},\mathbb{P})$.
A \emph{portfolio} is defined by a triplet $\Pi=(x,H,c)$, where $x\in\mathbb{R}$ represents an initial capital, $H=(H_t)_{t\geq0}$ is a $d$-dimensional $\widetilde S$-integrable process,
$H^j_t$ represents the holdings in the $j$-th risky asset at time $t$, $j \in\{1,\dots,d\}$, $t\geq 0$, and $c$ is an $m$-dimensional {\it consumption process}, whose every component $(c^k_t)_{t\geq0}$ is a nonnegative optional process representing the consumption {\it rate} of commodity $k$, $k \in \{1,\dots, m\}$.
The
wealth process $X=(X_t)_{t\geq0}$ of a portfolio $\Pi=(x,H,c)$ is defined as
\begin{equation}\label{X}
X_t \triangleq x + \int_0^tH_u\,\mathrm d \widetilde S_u - \sum\limits_{k=1}^m\int_0^tc^k_uS^k_u\,\mathrm d \kappa_u,
\qquad t\geq0.
\end{equation}
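As a quick illustration (not part of the formal development), the accounting identity \eqref{X} can be checked on a discrete time grid: the wealth increment at each step is the trading gain minus the consumption expenditure on all $m$ goods, weighted by the clock increment. The Python sketch below uses arbitrary illustrative data; all names and values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d, m = 5, 2, 3          # time steps, risky assets, consumption goods
x0 = 100.0                 # initial capital x

dS = rng.normal(0.0, 1.0, size=(T, d))        # increments of the discounted asset prices
H = rng.uniform(-1.0, 1.0, size=(T, d))       # holdings chosen before each increment
S_goods = rng.uniform(0.5, 2.0, size=(T, m))  # discounted commodity prices S^k_t
c = rng.uniform(0.0, 1.0, size=(T, m))        # consumption rates c^k_t
dkappa = np.full(T, 1.0 / T)                  # increments of the stochastic clock kappa

X = np.empty(T + 1)
X[0] = x0
for t in range(T):
    gains = H[t] @ dS[t]                              # H_u dS_u
    spending = (c[t] * S_goods[t]).sum() * dkappa[t]  # sum_k c^k_u S^k_u dkappa_u
    X[t + 1] = X[t] + gains - spending

# the discrete accounting identity X_T = x + int H dS - sum_k int c^k S^k dkappa
total_gains = np.einsum('td,td->', H, dS)
total_spend = np.einsum('tm,tm,t->', c, S_goods, dkappa)
assert np.isclose(X[-1], x0 + total_gains - total_spend)
```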
\subsection{Absence of arbitrage} \label{sec:NUPBR}
The main objective of this part is to specify the no-arbitrage-type condition \eqref{NUPBR} below. As is commonly done in the literature (see for example \cite{KS99}), we begin by defining $\mathcal{X}$ to be the collection of all nonnegative wealth processes associated to portfolios of the form $\Pi=(1,H,0)$,~i.e.,
\[
\mathcal{X} \triangleq \left\{X\geq0 : X_t=
1 + \int_0^tH_u\mathrm d \widetilde S_u,\quad t\geq0\right\}.
\]
In this paper, we suppose the following no-arbitrage-type condition:
\begin{equation} \tag{NUPBR} \label{NUPBR}
\text{the set }
\mathcal{X}_T \triangleq \bigl\{X_T : X\in\mathcal{X}\bigr\}
\text{ is bounded in probability, for every }T\in\mathbb{R}_+,
\end{equation}
where (NUPBR) stands for {\it no unbounded profit with bounded risk}. This condition was originally introduced in
\cite{KaratzasKardaras07}. It is proven in \cite[Proposition 1]{Kardaras10} that \eqref{NUPBR} is equivalent to another (weak) no-arbitrage condition, namely absence of \emph{arbitrages of the first kind} on $[0,T]$, see \cite[Definition 1]{Kar14}.
A useful characterization of \eqref{NUPBR} is given via the set of \emph{equivalent local martingale deflators (ELMD)} that is defined as follows:
\begin{equation}\label{setZ}
\begin{array}{rl}
\mathcal{Z} \triangleq \bigl\{ Z >0 :\;& Z \text{ is a c\`adl\`ag local martingale such that } Z_0 =1 \text{ and } \\
& ZX = (Z_tX_t)_{t\geq 0} \text{ is a local martingale for every } X \in \mathcal{X} \bigr\}.
\end{array}
\end{equation}
It is proven in \cite[Proposition 2.1]{MostovyiNUPBR} (see also \cite{KabanovKardarasSong}) that condition \eqref{NUPBR} holds if and only if $\mathcal{Z}\neq\emptyset$.
This result was previously established in the one-dimensional case in the finite time horizon in \cite[Theorem 2.1]{Kardaras12}. Also, \cite[Theorem 2.6]{TS14} contains a closely related result (in a finite time horizon) in terms of {\it strict $\sigma$-martingale densities}, see \cite{TS14} for the corresponding definition and details.
\begin{rem} \label{rem:comp_mostovyi}
Condition \eqref{NUPBR} is weaker than the existence of an equivalent martingale measure (see for example \cite[p. 463]{DS94} for the definition of an equivalent martingale measure), another classical no-arbitrage type assumption, which in the infinite time horizon is even stronger than
\begin{equation}\label{NFLVR}
\{Z\in\mathcal{Z} : Z\text{ is a martingale}\}\neq\emptyset.
\end{equation}
Note that in the {\it finite time horizon} setting, \eqref{NFLVR} is equivalent to the existence of an equivalent martingale measure. Besides, \eqref{NFLVR} is stronger than \eqref{NUPBR} (by comparison of \eqref{setZ} and \eqref{NFLVR} combined with \cite[Proposition 2.1]{MostovyiNUPBR}), and in general it is strictly stronger. We also point out that \eqref{NFLVR} holds in every original formulation of \cite{Merton69}, where the problem of optimal consumption from investment (in a single consumption good setting) was introduced, including the infinite-time horizon case. A classical example, where \eqref{NUPBR} holds but \eqref{NFLVR} fails, corresponds to the three-dimensional Bessel process driving the stock price, see e.g., \cite[Example 4.6]{KaratzasKardaras07}.
\end{rem}
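The role of a deflator is easiest to see in a toy one-period binomial model with prices already discounted. The sketch below, with illustrative parameters of my own choosing, builds the state-by-state density $Z_T=\mathrm d\mathbb Q/\mathrm d\mathbb P$ from the risk-neutral up-probability and checks the two defining properties: $\mathbb E[Z_T]=1$ and $\mathbb E[Z_T S_T]=S_0$ (so that $ZX$ is a martingale for wealth processes in $\mathcal X$).

```python
# One-period binomial model with zero interest (prices already discounted).
S0, u, d = 100.0, 1.2, 0.9     # today's price and up/down factors (illustrative)
p = 0.6                        # physical probability of the up move

q = (1.0 - d) / (u - d)        # risk-neutral up probability for the discounted market
Z_up, Z_dn = q / p, (1.0 - q) / (1.0 - p)   # deflator Z_T = dQ/dP, state by state

EZ = p * Z_up + (1.0 - p) * Z_dn                       # E[Z_T], should equal 1
EZS = p * Z_up * u * S0 + (1.0 - p) * Z_dn * d * S0    # E[Z_T S_T], should equal S_0

assert abs(EZ - 1.0) < 1e-12
assert abs(EZS - S0) < 1e-9
```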
\subsection{Admissible consumptions}
For a given initial capital $x>0$, an $m$-dimensional optional consumption process $c$ is said to be \emph{$x$-admissible} if there exists an $\mathbb{R}^d$-valued predictable $\widetilde S$-integrable process $H$ such that the wealth process $X$ in \eqref{X}, corresponding to the portfolio $\Pi=(x,H,c)$ is nonnegative; the set of $x$-admissible consumption processes corresponding to a stochastic clock $\kappa$ is denoted by $\mathcal{A}(x)$.
For brevity, we denote $\mathcal{A}\triangleq\mathcal{A}(1)$.
\subsection{Preferences of a rational economic agent} \label{sec:invest}
Building from the formulation of \cite{MertonCTF}, we assume that preferences of a rational economic agent are represented by an \emph{optional utility-valued process} (or simply a {\it utility process}) $U=U(t,\omega,x):[0,\infty)\times\Omega\times[0,\infty)^m\rightarrow\mathbb{R}\cup\{-\infty\}$, where for every $(t, \omega)\in [0,\infty)\times \Omega$, $U(t,\omega, \cdot)$ is an Inada-type utility function, i.e., $U(t,\omega, \cdot)$ satisfies the following (technical) assumption.
\begin{as} \label{as:U}
For every $(t,\omega)\in[0,\infty)\times\Omega$, the function $$\mathbb{R}^m_{+} \ni x\mapsto U(t,\omega,x)\in \mathbb{R}\cup\{-\infty\}$$ is strictly concave, strictly increasing in every component, finite-valued and continuously differentiable in the interior of the positive orthant, and satisfies the Inada conditions
\[
\underset{x_i\downarrow0}{\lim} \, {\partial_{x_i}U}(t,\omega,x) = \infty
\quad\text{and}\quad
\underset{x_i\uparrow\infty}{\lim} \, {\partial_{x_i}U}(t,\omega,x) = 0,\quad i = 1,\dots, m,
\]
where ${\partial_{x_i}U}(t,\omega, \cdot):\mathbb{R}^m_{++} \mapsto\mathbb{R}$ is the partial derivative of $U(t,\omega,\cdot)$ with respect to the $i$-th spatial variable
\footnote{For the results below, we only need to specify the gradient of $U(t,\omega, \cdot)$ in the {\it interior} of the first orthant, i.e., at the points $x\in\mathbb{R}^m$, where $U(t,\omega, x)$ is (finite-valued and) differentiable.}. On the boundary of the first orthant, by upper semicontinuity, we suppose that $U(t,\omega,x)=\limsup\limits_{x'\to x}U(t,\omega,x')$ (note that some of these values may be $-\infty$ and that $U(t,\omega,x) = \lim\limits_{s\downarrow 0}U(t,\omega,x + s(x'-x))$, where $x'$ is an arbitrary element in the interior of the first orthant, see \cite[Proposition B.1.2.5]{LH04}). Finally, for every $x\in \mathbb{R}^m_{+}$, we assume that the stochastic process $U(\cdot,\cdot,x)$ is optional.
\end{as}
\begin{rem} The Inada conditions in Assumption \ref{as:U} were introduced in \cite{Inada63}. These are technical assumptions that have natural economic interpretations and that allow for a deeper tractability of the problem (as e.g., in \cite{KS99}).
Likewise, the semicontinuity of $U$ is imposed for regularity purposes. It is also used in e.g., \cite{Pietro1, Pietro2}.
\end{rem}
In particular, modeling preferences via a utility process allows one to take into account utility maximization problems under a change of num\'eraire (see e.g., \cite[Example 4.2]{MostovyiRE}). This is the primary reason why we suppose that the prices of the traded stocks are discounted, as this simplifies notation without any loss of generality.
Note also that Assumption \ref{as:U} does not make any requirement on the {\it asymptotic elasticity} of $U$, introduced in \cite{KS99}.
To a utility process $U$ satisfying Assumption \ref{as:U}, we associate the \emph{primal value function}, defined as
\begin{equation} \label{primalProblem}
u(x) \triangleq \sup_{{\mathbf c} = (c^1,\dots,c^m)\in\mathcal{A}(x)}\mathbb{E}\left[\int_0^{\infty}U(t,\omega,{\mathbf c}_t)\,\mathrm d\kappa_t\right],\quad x>0.
\end{equation}
To ensure that the integral above is well-defined, we use the convention
\begin{equation}\label{9171}
\mathbb{E}\left[\int_0^{\infty}U(t,\omega,{\mathbf c}_t)\,\mathrm d\kappa_t\right]\triangleq-\infty\quad \text{if}\quad \mathbb{E}\left[\int_0^{\infty}U^-(t,\omega,{\mathbf c}_t)\,\mathrm d\kappa_t\right]=\infty,
\end{equation}
where $U^-(t,\omega,\cdot)$ is the negative part of $U(t,\omega,\cdot)$. Note that formulation \eqref{primalProblem} is a generalization of the formulation in \cite[p. 205]{MertonCTF}: in the form \eqref{primalProblem}, we allow for stochastic preferences and include several standard formulations as particular cases.
\subsection{Dual problem} \label{sec:dualProblem}
In order to specify model assumptions that ensure existence and uniqueness of solutions to \eqref{primalProblem} and to give a characterization of this solution, we need to formulate an appropriate dual problem.
Let us define
\begin{equation}\label{def:U*}
U^{*}(t,\omega, x)\triangleq \sup\limits_{\substack{(x_1,\dots,x_m)\in\mathbb{R}^m_{+}:\\\sum\limits_{k = 1}^mS^k_t(\omega)x_k \leq x}}U\left(t,\omega,x^1,\dots,x^m\right),\quad (t,\omega,x)\in [0,\infty)\times \Omega\times[0,\infty).
\end{equation}
Let us set a family of transformations $A:[0,\infty)\times\Omega\times\mathbb{R}^m\mapsto \mathbb{R}$, as $$A (t,\omega, x_1,\dots,x_m)\triangleq S^1_t(\omega)x_1 + \dots + S^m_t(\omega)x_m,\quad (t,\omega,x_1,\dots,x_m)\in[0,\infty)\times\Omega\times[0,\infty)^m.$$
Note that for every $(t,\omega)\in[0,\infty)\times \Omega$, $A(t,\omega,\cdot)$ is a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}$ and $U^{*}(t,\omega,\cdot)$ is the image of $U(t,\omega,\cdot)$ under $A(t,\omega,\cdot)$ (see e.g., \cite[p. 96]{LH04} for the definition and properties of the {\it image of a function under a linear mapping}\footnote{Equivalently, see \cite[Theorem 5.2]{Rok}, where $U^*(t,\omega, \cdot)$ is named the image of $U(t,\omega,\cdot)$ under the linear transformation $A(t,\omega,\cdot)$
, $(t,\omega)\in[0,\infty)\times\Omega$.}).
We define a stochastic field $V^{*}$ as the pointwise conjugate of $U^{*}$ (equivalently, as the pointwise conjugate of the image function of $U$ under $A$) in the sense that
\[
V^{*}(t,\omega,y) \triangleq \sup_{x>0}\left(U^*(t,\omega,x)-xy\right),
\qquad (t,\omega,y)\in[0,\infty)\times\Omega\times[0,\infty),
\]
where $\sup\limits_{x>0}$ and $\sup\limits_{x\geq 0}$ coincide thanks to continuity of $U^*$ established in Lemma \ref{lem:U*}.
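The construction of $U^{*}$ and $V^{*}$ can be sanity-checked numerically. The sketch below, for the illustrative choice $U(c_1,c_2)=\log c_1+\log c_2$ and fixed prices, computes $U^{*}$ by a grid search over the budget set and $V^{*}$ as its pointwise conjugate, and compares both with the elementary closed forms $U^*(x)=2\log(x/2)-\log(S^1S^2)$ and $V^*(y)=-2\log y-\log(S^1S^2)-2$; grid sizes and tolerances are ad hoc.

```python
import numpy as np

s = np.array([2.0, 0.5])           # commodity prices S^1, S^2 (illustrative)

def U(c1, c2):                     # additive log utility of two goods
    return np.log(c1) + np.log(c2)

def U_star(x, n=200_000):
    """Image of U under the budget map: sup of U(c) over s.c <= x."""
    c1 = np.linspace(1e-6, x / s[0] - 1e-6, n)
    c2 = (x - s[0] * c1) / s[1]    # spend the whole budget (U is increasing)
    return np.max(U(c1, c2))

def V_star(y):
    """Pointwise conjugate of U*: sup_x (U*(x) - x*y), on a coarse grid."""
    xs = np.linspace(1e-3, 50.0, 2_000)
    return max(U_star(x, n=2_000) - x * y for x in xs)

# closed forms: the optimum splits spending equally, c_i = x / (2 s_i)
x, y = 10.0, 1.0
assert abs(U_star(x) - (2 * np.log(x / 2) - np.log(s[0] * s[1]))) < 1e-4
assert abs(V_star(y) - (-2 * np.log(y) - np.log(s[0] * s[1]) - 2)) < 1e-2
```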
We also introduce the following set of dual processes:
\begin{align*}
\mathcal{Y}(y) \triangleq \cl\bigl\{Y :\;& Y\text{ is c\`adl\`ag adapted and }\\
&
0\leq Y\leq yZ \text{ $(\mathrm d\kappa\times\mathbb{P})$-a.e. for some }Z\in\mathcal{Z}\bigr\},
\end{align*}
where the closure is taken in the topology of convergence in measure $(\mathrm d\kappa\times\mathbb{P})$ on the measure space of real-valued optional processes $\left(\Omega\times [0,\infty), \mathcal{O},\mathrm d\kappa\times\mathbb{P}\right)$, where $\mathcal{O}$ is the optional sigma-field. We write $\mathcal{Y}\triangleq\mathcal{Y}(1)$ for brevity. Note that $\mathcal Y$ is closely related to, but different from, the set with the same name in \cite{KS99}.
The value function of the dual optimization problem, or equivalently, the \emph{dual value function}, is then defined as
\begin{equation} \label{dualProblem}
v(y) \triangleq \inf_{Y\in\mathcal{Y}(y)}\mathbb{E}\left[\int_0^{\infty}V^{*}(t,\omega,Y_t)\,\mathrm d\kappa_t\right]{,\quad y>0},
\end{equation}
with the convention $\mathbb{E}[\int_0^{\infty}V^{*}(t,\omega,Y_t)\,\mathrm d\kappa_t]\triangleq\infty$ if $\mathbb{E}[\int_0^{\infty}{V^{*}}^+(t,\omega,Y_t)\,\mathrm d\kappa_t]=\infty$, where ${V^{*}}^+(t,\omega,\cdot)$ is the positive part of ${V^{*}}(t,\omega,\cdot)$.
We are now in a position to state the following theorem,
which is the main result of this paper.
\begin{thm} \label{mainTheorem}
Assume that conditions \eqref{clock} and \eqref{NUPBR} hold true and let $U$ satisfy Assumption \ref{as:U}. Let us also suppose that
\begin{equation} \label{eq:finiteness}
v(y)<\infty\quad\text{ for every }y>0
\quad\text{and}\quad
u(x)>-\infty\quad\text{ for every }x>0.
\end{equation}
Then we have
\begin{enumerate}
\item[(i)]
$u(x)<\infty$, for every $x>0$, and $v(y)>-\infty$, for every $y>0$, i.e., the value functions are \underline{finite-valued}.
\item[(ii)]
The functions $u$ and $-v$ are continuously differentiable on $(0,\infty)$, strictly concave, strictly increasing and satisfy the Inada conditions
\begin{equation}\label{453}
\begin{array}{rclccrcl}
\underset{x\downarrow0}{\lim} \, u'(x) & = &\infty, &\quad&
\underset{y\downarrow0}{\lim} \, -v'(y) &=& \infty, \\
\underset{x\rightarrow\infty}{\lim} \, u'(x) &=& 0, &\quad&
\underset{y\rightarrow\infty}{\lim} \, -v'(y) &=& 0. \\
\end{array}
\end{equation}
\item[(iii)]
For every $x>0$ and $y>0$, the solutions $\widehat{c}(x)=(\widehat{c}^1(x),\dots, \widehat{c}^m(x))$ to \eqref{primalProblem} and $\widehat{Y}(y)$ to \eqref{dualProblem} exist and are unique and, if $y=u'(x)$, we have the optimality characterizations
\begin{equation}\label{eq:3212}
\widehat Y_t(y)(\omega)=\frac{
{\partial_{x_i}U}\bigl(t,\omega, \widehat c^1_t(x)(\omega),\dots,\widehat c^m_t(x)(\omega)\bigr)}{S^i_t(\omega)},\quad (\mathrm d\kappa\times \mathbb{P})\text{-a.e.},\quad i = 1,\dots, m,
\end{equation}
and
\begin{equation}\label{eq:3221}
\widehat{Y}_t(y)(\omega) = U^*_x\bigl(t,\omega,\sum\limits_{i = 1}^m\widehat{c}^i_t(x)(\omega) S^i_t(\omega) \bigr),
\qquad (\mathrm d\kappa\times\mathbb{P})\text{-a.e.},
\end{equation}
with ${U_x^{*}}$ denoting the partial derivative of $U^*$ with respect to its third argument.
\item[(iv)]
For every $x>0$, the constraint $x$ is binding in the sense that
\begin{equation}\label{451}
\mathbb{E}\left[\int_0^{\infty}\sum\limits_{i = 1}^m\widehat{c}^i_t(x)S^i_t\dfrac{\widehat{Y}_t(y)}{y}\,\mathrm d\kappa_t\right] = x,\quad \text{where } y=u'(x).
\end{equation}
\item[(v)]
The functions $u$ and $v$ are Legendre conjugate, i.e.,
\begin{equation}\label{452}
v(y) = \underset{x>0}{\sup} \bigl(u(x)-xy\bigr),\quad y>0, \qquad
u(x) = \underset{y>0}{\inf} \bigl(v(y)+xy\bigr),\quad x>0.
\end{equation}
\item[(vi)]
The dual value function $v$ can be represented as
\begin{equation} \label{eq:v_defl}
v(y) = \inf_{Z\in\mathcal Z}\mathbb{E}\left[\int_0^{\infty}V^{*}(t,\omega,yZ_t(\omega))\,\mathrm d\kappa_t(\omega)\right],\quad y>0.
\end{equation}
\end{enumerate}
\end{thm}
\begin{rem}[On sufficient conditions for the validity of \eqref{eq:finiteness}]
Condition \eqref{eq:finiteness} holds if there exists one primal element $c\in\mathcal A$ and one dual element $Y\in\mathcal Y$ such that
$$\mathbb E\left[\int_0^\infty U\left(t,\omega,zc^1_t,\dots,zc^m_t\right)\mathrm d\kappa_t\right]>-\infty\quad \text{and}\quad \mathbb E\left[\int_0^\infty V^{*}\left(t,\omega,zY_t\right)\mathrm d\kappa_t\right]<\infty,\quad z>0.$$
In particular,
for every $x>0$, since the $m$-dimensional optional process with constant value $\left(\tfrac{x}{\bar A m},\dots,\tfrac{x}{\bar A m}\right)$ belongs to $\mathcal A(x)$, a sufficient condition in \eqref{eq:finiteness} for the finiteness of $u$ is
$$\mathbb E\left[\int_0^\infty U\left(t,\omega,\tfrac{x}{\bar A m},\dots,\tfrac{x}{\bar A m}\right)\mathrm d\kappa_t\right]>-\infty,\quad x>0,$$
which typically holds if $U$ is nonrandom. Likewise, as $\mathcal{Z}\neq\emptyset$ (by \eqref{NUPBR} and \cite[Proposition 2.1]{MostovyiNUPBR}), finiteness of $v$ holds if for one equivalent local martingale deflator $Z$, we have
$$ \mathbb E\left[\int_0^\infty V^{*}\left(t,\omega,yZ_t\right)\mathrm d\kappa_t\right]<\infty,\quad y>0.$$
\end{rem}
\section{Examples}\label{Examples}
\section*{Complete market solution and dual characterization}
If the model is complete, the dual characterization of the optimal consumption policies has a particularly nice form,
as $\mathcal{Z}$ contains a unique element, $Z$. The solutions corresponding to different $y$'s in the dual problem \eqref{dualProblem} are $yZ$, $y>0$. Therefore, in \eqref{eq:3221} and \eqref{eq:3212} we have $\widehat Y(y)= yZ$, $y>0$.
\section*{Special case: Additive utility}
An important example of $U^{*}$ corresponds to $U$ having an additive form with respect to its spatial components, i.e., when
$$U(t,\omega, c_1,\dots, c_m) = U^1(t,\omega,c_1)+\dots+U^m(t,\omega, c_m),\quad (t,\omega)\in[0,\infty)\times \Omega,$$
where for every $k=1,\dots,m$, $U^k$ is a utility process in the sense of \cite[Assumption 2.1]{Mostovyi2015} and a utility process in the sense of Assumption \ref{as:U} with $m=1$. In this case, for every $(t,\omega)\in[0,\infty)\times\Omega$, $U^*(t,\omega,\cdot)$ is given by the (concave analogue of the) {\it infimal convolution} of the price-rescaled functions $x\mapsto U^k(t,\omega,x/S^k_t(\omega))$, see the definition in e.g., \cite[p. 34]{Rok}. Let $V^i(t,\omega,\cdot)$ denote the convex conjugate of $U^i(t,\omega,\cdot)$, $i=1,\dots,m$. Then the convex conjugate $V^{*}(t,\omega,\cdot)$ of $U^{*}(t,\omega,\cdot)$ is given by
$$V^*(t,\omega,y) = V^1(t,\omega,S^1_t(\omega)y) + \dots +V^m(t,\omega,S^m_t(\omega)y),\quad y>0.$$
This result was established e.g., in \cite[Theorem 16.4, p. 145]{Rok}. In this case, the optimal $\widehat c(x) = (\widehat c^1(x),\dots,\widehat c^m(x))$ has a more explicit characterization via $I_i(t,\omega,\cdot) \triangleq \left(U^i_x\right)^{-1}(t,\omega,\cdot)$, the pointwise inverse of the partial derivative of $U^i(t,\omega,\cdot)$ with respect to the third argument, as \eqref{eq:3212} can be solved for $\widehat c^i(x)$, $i = 1,\dots,m$, as follows
\begin{equation}\label{eq:3261}
\widehat c^i_t(x)(\omega)= I_i\left(t,\omega,\widehat Y_t(y)(\omega)S^i_t(\omega)\right),\quad (\mathrm d\kappa\times \mathbb{P})\text{-a.e.},\quad i = 1,\dots, m.
\end{equation}
Using \eqref{eq:3221}, we can restate \eqref{eq:3261} as
\begin{equation}\nonumber
\widehat c^i_t(x)(\omega)= I_i\left(t,\omega,U^*_x\bigl(t,\omega,\widehat c^{*}_t(x)(\omega) \bigr)S^i_t(\omega)\right),\quad (\mathrm d\kappa\times \mathbb{P})\text{-a.e.},\quad i = 1,\dots, m,
\end{equation}
where $\widehat c^{*}(x)$ is the optimizer to the auxiliary problem \eqref{eq:u*} corresponding to the initial wealth $x>0$.
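A numerical sanity check of this additive case, with illustrative prices and the power utilities $U^i(c)=-1/c$ (i.e., $p=-1$, so $V^i(y)=-2\sqrt y$): conjugation turns the constrained supremum defining $U^{*}$ into a sum of the one-dimensional conjugates evaluated at the price-rescaled argument. All numerical parameters below are arbitrary.

```python
import numpy as np

s = np.array([2.0, 0.5])                 # commodity prices (illustrative)

def V_i(y):                              # conjugate of U^i(c) = -1/c  (p = -1)
    return -2.0 * np.sqrt(y)

def U_star(x, n=4_000):                  # sup of -1/c1 - 1/c2 over s.c <= x, by grid
    c1 = np.linspace(1e-4, x / s[0] - 1e-4, n)
    c2 = (x - s[0] * c1) / s[1]
    return np.max(-1.0 / c1 - 1.0 / c2)

def V_star(y):                           # conjugate of U*, by grid over x
    xs = np.linspace(1e-3, 100.0, 5_000)
    return max(U_star(x) - x * y for x in xs)

y = 1.0
# conjugation acts additively on the price-rescaled one-dimensional conjugates
assert abs(V_star(y) - (V_i(s[0] * y) + V_i(s[1] * y))) < 1e-2
```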
\begin{rem}
In the following three examples, we consider some incomplete models that admit closed-form solutions for one good and show how these results apply to multiple-good settings.
\end{rem}
\section*{Example of a closed form solution in an incomplete model with additive logarithmic utility}
Let us suppose that $d$ traded discounted assets are modeled by It\^o processes of the form
\begin{equation}\label{4161}
d\widetilde S^i_t = \widetilde S^i_t b^i_tdt + \widetilde S^i_t \sum\limits_{j=1}^n \sigma^{ij}_tdW^j_t,\quad i = 1,\dots,d,\quad \widetilde S_0\in\mathbb{R}^d,
\end{equation}
where $W$ is an $\mathbb{R}^n$-valued standard Brownian motion and $b^i$, $\sigma^{ij}$, $i = 1,\dots,d$, $j=1,\dots, n$, are predictable processes, such that the unique strong solution to \eqref{4161} exists, see e.g., \cite{KS98}. Let us suppose that there are $m$ consumption goods and that the value function of a rational economic agent is given by
$$\sup\limits_{c\in \mathcal A(x)}\mathbb E\left[\int_0^T e^{-\nu t}\log(c^1_t\cdots c^m_t)\,dt\right],\quad x>0,$$
(with the same convention as the one specified after \eqref{primalProblem}),
where an impatience rate $\nu$ and a time horizon $T$ are positive constants. Note that in this case
$\kappa_t = \frac{1 - e^{-\nu t}}{\nu}$, $t\in[0,T]$, i.e., $\kappa$ is deterministic.
Let us also suppose that there exists an $\mathbb{R}^d$-valued process $\gamma$, such that
$$b_t - \sigma_t\sigma_t^T \gamma_t = 0\quad (\mathrm d \kappa\times \mathbb P)\text{-a.e.}$$
Let $\mathcal E$ denote the Dol\'eans-Dade exponential. Then, using \cite[Theorem 3.1 and Example 4.2]{GollKallsen00} and Theorem \ref{mainTheorem}, we get
$$\widehat c^{*}_t(x) = \frac{x\nu}{1 - e^{-\nu T}}\mathcal E\left(\int_0^{\cdot}\gamma^T_sd\widetilde S_s\right)_t,\quad x>0,$$
$$\widehat c^i_t(x) = \frac{\widehat c^{*}_t(x)}{S^i_t m},\quad i=1,\dots,m, \quad x>0,$$
$$\widehat Y_t(y) = \frac{y}{\mathcal E\left(\int_0^{\cdot}\gamma^T_s d\widetilde S_s\right)_t},\quad y>0,\quad t\in[0,T].$$
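The static optimization behind the formula for $\widehat c^i_t(x)$ above (splitting the expenditure $\widehat c^{*}_t(x)$ equally across the $m$ goods under logarithmic utility) can be checked directly; prices and values in this Python sketch are illustrative.

```python
import numpy as np

m = 3
S = np.array([1.5, 0.8, 2.0])      # commodity prices at some fixed time (illustrative)
c_star = 12.0                      # total consumption expenditure at that time

# closed-form allocation from the example: c^i = c* / (S^i * m)
c_hat = c_star / (S * m)
assert np.isclose((c_hat * S).sum(), c_star)        # the budget is spent exactly

def logU(spend):                   # log(c1*...*cm) as a function of the spending vector
    return np.log(spend / S).sum()

# equal spending maximizes the log utility: any alternative split does no better
rng = np.random.default_rng(1)
for _ in range(100):
    w = rng.dirichlet(np.ones(m)) * c_star          # random alternative spending plan
    assert logU(c_hat * S) >= logU(w) - 1e-12
```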
\section*{Example of a closed-form solution and dual characterization in an incomplete additive case}
Let us fix a filtered probability space $(\Omega, \mathcal F, (\mathcal F_t)_{t\geq 0}, \mathbb P)$, where $(\mathcal F_t)_{t\geq 0}$ is the augmentation of the filtration generated by a two-dimensional Brownian motion $(W^1, W^2)$.
Let us suppose that there are two traded securities: a risk-free asset $B$, such that $$B_t = e^{rt},\quad t>0,$$
where $r$ is a nonnegative constant, and a risky stock $\widetilde S$ with the dynamics
$$d\widetilde S_t = \widetilde S_t\mu_t dt + \widetilde S_t\sigma_t dW^1_t,\quad t\geq 0,\quad\widetilde S_0 \in\mathbb{R}_{+},$$
where processes $\mu$ and $\sigma$ are such that $\theta_t = \frac{\mu_t - r}{\sigma_t}$, $t\geq 0$, the market price of risk process, follows the Ornstein-Uhlenbeck process
$$d\theta_t = -\lambda_{\theta}(\theta_t - \bar \theta)dt + \sigma_\theta\left(\rho dW^1_t + \sqrt{1-\rho^2} dW^2_t\right),\quad t\geq 0,\quad \theta_0\in\mathbb{R}_{+},$$
where $\lambda_{\theta}$, $\sigma_{\theta}$, and $\bar \theta$ are positive constants, $\rho\in(-1,1)$.
Let us also assume that
$\kappa$ corresponds to the expected utility maximization from terminal wealth, i.e., $\kappa =\mathbb{I}_{\dbraco{T,\infty}}$, $T\in\mathbb{R}_{+}$, that there are $m$ consumption goods, where $S^i$, $i = 1,\dots,m$, are {\it deterministic}, and $$U(T,\omega, c_1,\dots,c_m) = \frac{c_1^p}{p} + \dots + \frac{c_m^p}{p},\quad (c_1,\dots,c_m)\in \mathbb{R}^m_{+},\quad \omega\in\Omega,$$
where $p<0$.
Let us set
$$q\triangleq \frac{p}{1-p},\quad A\triangleq \sum\limits_{i=1}^m(S^i_T)^{-q},\quad \text{and}\quad B\triangleq A^{1-p}.$$
Then, by direct computations, we get $$U^*(T,\omega,x) = \frac{x^p}{p}B,\quad x>0.$$
Using the argument in \cite{KO96}, one can express the optimal trading strategy $\widehat H(x)$ in closed form in terms of a solution to a system of (nonlinear) ordinary differential equations (see \cite[p. 147]{KO96}), where $\widehat H_t(x)$ is the number of shares of the risky asset in the portfolio at time $t$, $t\in[0,T]$. With $\widehat X(x)$ such that $$d\widehat X_t(x) = \widehat H_t(x)d\widetilde S_t + (\widehat X_t(x)-\widehat H_t(x)\widetilde S_t)rdt,\quad \widehat X_0(x) = x,$$ using Theorem \ref{mainTheorem}, we get
$$\widehat {c}^{*}_T(x) = \widehat X_T(x),\quad x>0,$$
$$\widehat Y_T(y) = \frac{y}{\mathbb E\left[\left(\widehat {c}^{*}_T(1) \right)^p\right]}\left(\widehat {c}^{*}_T(1)\right)^{p-1},\quad y>0,$$
$$\widehat c^i_T(x) = \frac{\widehat c^{*}_T(x)}{A}(S^i_T)^{-(1+ q)},\quad x>0.$$
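The allocation $\widehat c^i_T(x)$ above can be verified against the static first-order conditions: it exhausts the budget, and it equalizes marginal utility per unit of expenditure, $(\widehat c^i)^{p-1}/S^i$, across goods. A sketch with illustrative parameters:

```python
import numpy as np

p = -1.5                              # risk aversion parameter, p < 0 (illustrative)
q = p / (1.0 - p)
S_T = np.array([1.2, 0.7, 2.5])       # terminal commodity prices (illustrative)
A = (S_T ** (-q)).sum()
x_star = 8.0                          # terminal wealth to be spent, c*_T(x)

# allocation from the example: c^i = (c*/A) * (S^i)^{-(1+q)}
c_hat = (x_star / A) * S_T ** (-(1.0 + q))

# the budget is exhausted ...
assert np.isclose((c_hat * S_T).sum(), x_star)

# ... and marginal utility per dollar, (c^i)^(p-1) / S^i, is the same for every i
mu_per_dollar = c_hat ** (p - 1.0) / S_T
assert np.allclose(mu_per_dollar, mu_per_dollar[0])
```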
\section*{Example of a closed-form solution and dual characterization in an incomplete non-additive case}
Here we will suppose that
$\kappa =\mathbb{I}_{\dbraco{T,\infty}}$, where $T\in \mathbb{R}_{+}$, and let
$$U(t,\omega,c_1,c_2) = - \frac{c_1^{p_1}}{p_1}\frac{c_2^{p_2}}{p_2},\quad p_1<0,p_2<0,$$
i.e., there are two consumption goods.
One can see that $U(t,\omega,\cdot)$ is jointly concave, since the Hessian of $-U(t,\omega,\cdot)$ is positive definite on $\mathbb{R}^2_{++}$. We also extend $U(t,\omega,\cdot)$ to the boundary of $\mathbb{R}^2_{+}$ by $-\infty$. Then, with $p \triangleq p_1 + p_2<0$, $U^*$ is given by
$$U^*(t,\omega, x) = \frac{x^p}{p}\frac{(-p_1)^{p_1 - 1}(-p_2)^{p_2-1}}{(-p)^{p-1}}(S^1_t)^{-p_1}(S^2_t)^{-p_2},\quad x>0.$$
Let us define $G \triangleq \frac{(-p_1)^{p_1 - 1}(-p_2)^{p_2-1}}{(-p)^{p-1}}(S^1_T)^{-p_1}(S^2_T)^{-p_2}$.
Then $U^*(T,\omega, x) = \frac{x^p}{p}G(\omega)$, $x>0$.
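The closed form for $U^*$ can be confirmed by a brute-force grid search over the budget set. The sketch below uses the illustrative choice $p_1=p_2=-1$ (so $p=-2$) and arbitrary fixed prices in place of $S^1_T$, $S^2_T$.

```python
import numpy as np

p1 = p2 = -1.0
p = p1 + p2
s1, s2 = 1.5, 0.6                     # prices standing in for S^1_T, S^2_T (illustrative)

def U(c1, c2):                        # non-additive product utility
    return -(c1 ** p1 / p1) * (c2 ** p2 / p2)

def U_star_grid(x, n=200_000):        # sup of U over the budget set, by grid search
    c1 = np.linspace(1e-5, x / s1 - 1e-5, n)
    c2 = (x - s1 * c1) / s2
    return np.max(U(c1, c2))

# closed form: U*(x) = (x^p / p) * G
G = ((-p1) ** (p1 - 1) * (-p2) ** (p2 - 1) / (-p) ** (p - 1)) * s1 ** (-p1) * s2 ** (-p2)
x = 5.0
assert abs(U_star_grid(x) - (x ** p / p) * G) < 1e-6
```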
Let us suppose that $W^1$ and $W^2$ are two Brownian motions with a fixed correlation $\rho$ such that $0<|\rho|<1$.
Let $(\mathcal F_t)_{t\geq 0}$ be the usual augmentation of the filtration generated by $W^1$ and $W^2$ and $(\mathcal G_t)_{t\geq 0}$ be the usual augmentation of the filtration generated by $W^2$.
We also assume that there is a bond $B$ and a stock $\widetilde S$ on the market. Their dynamics are given by
$$d\widetilde S_t = \widetilde S_t(\mu_t dt + \sigma_t dW^1_t ),\quad \widetilde S_0 \in\mathbb{R}_{+},$$
$$dB_t = B_tr_t dt,\quad B_0 = 1,$$
where the drift $\mu$, volatility $\sigma$, and spot interest rate $r$ are bounded, progressively measurable processes with respect to $({\mathcal G}_t)_{t\in[0,T]}$, and $\sigma$ is strictly positive.
Let us suppose that $S^1_T$ and $S^2_T$ are $\mathcal G_T$-measurable random variables with moments of all orders. Then $G$ is also a $\mathcal G_T$-measurable random variable with moments of all orders (by H\"older's inequality), and the auxiliary value function $u^*$ defined in \eqref{eq:u*} satisfies the setting of \cite{Tehranchi04}.
Also, as $u^{*}(x)\geq\frac{x^p}{p}\mathbb E[G] >-\infty$ and since $V^{*}(T,\omega, \cdot)$ is negative-valued (thus, $v(y)\leq0$), assumption \eqref{eq:finiteness} holds.
Let us set $$\lambda_t \triangleq \frac{\mu_t - r_t}{\sigma_t},\quad \delta \triangleq \frac{1 - p}{1 - p + \rho^2 p},\quad \frac{d\mathbb Q}{d\mathbb P} \triangleq \exp\left(-\frac{\rho^2 p^2}{2(1-p)^2}\int_0^T\lambda^2_sds + \frac{\rho p}{1-p}\int_0^T\lambda_sdW^2_s \right),$$
$$K_t \triangleq \frac{p}{(1-p)}\left(\lambda_t + \rho\delta\frac{\beta_t}{\mathbb {E^Q}[\exp(\int_0^T (r_s/\delta) ds)|\mathcal F_t]}\right),\quad t\in[0,T].$$
Then, using \cite[Proposition 3.4]{Tehranchi04} and Theorem \ref{mainTheorem}, we deduce that
$$\widehat c^{*}_T(x) = x\exp\left(\int_0^T\left(r + K_s\lambda_s -\tfrac{1}{2}K^2_s\right)ds + \int_0^T K_sdW^1_s\right),\quad x>0,$$
$$\widehat Y_T(y) =\frac{y}{\mathbb E\left[\left( \widehat c^{*}_T(1)\right)^p\right]}\exp\left(\int_0^T(p-1)\left(r + K_s\lambda_s -\tfrac{1}{2}K^2_s\right)ds + \int_0^T (p-1)K_sdW^1_s\right) ,\quad y>0,$$
$$\widehat c^i_T(x) = \frac{\widehat c^{*}_T(x) p_i}{pS^i_T},\quad i = 1,2,\quad x>0,$$
are the optimizers to \eqref{eq:u*}, \eqref{dualProblem}, and \eqref{primalProblem}, respectively. From Theorem \ref{mainTheorem}, we conclude that for every $x>0$, $\widehat c^i_T(x)$, $i=1,2,$ and $\widehat Y_T(u'(x))$ are related via \eqref{eq:3212} and \eqref{eq:3221}.
\section{Proofs}\label{proofs}
We begin with a characterization of the utility process $U^*$ defined in \eqref{def:U*}.
\begin{lem}\label{lem:U*} Let $U$ satisfy Assumption \ref{as:U} and $U^{*}$ be defined in \eqref{def:U*}. Then, $U^{*}$ is an Inada-type utility process for $m=1$ in the sense of Assumption \ref{as:U}, i.e., $U^{*}$ satisfies:
\begin{enumerate}\item
For every $(t,\omega)\in[0,\infty)\times\Omega$, the function $x\mapsto U^{*}(t,\omega,x)$ is finite-valued on $(0,\infty)$, strictly concave, and strictly increasing.
\item For every $(t,\omega)\in[0,\infty)\times\Omega$, the function $x\mapsto U^{*}(t,\omega,x)$ is continuously differentiable on $(0,\infty)$ and satisfies the Inada conditions
\[
\underset{z\downarrow0}{\lim} \, {U_x^{*}}(t,\omega,z) = \infty
\qquad\text{and}\qquad
\underset{z\uparrow \infty}{\lim} \, {U_x^{*}}(t,\omega,z) = 0.
\]
\item For every $(t,\omega)\in[0,\infty)\times\Omega$, at $z=0$, we have $$U^{*}(t,\omega,0)=\lim_{z\downarrow0}U^{*}(t,\omega,z)$$ (note that this value may be $-\infty$).
\item For every $z\geq0$, the stochastic process $U^{*}(\cdot,\cdot,z)$ is optional.
\end{enumerate}
\end{lem}
\begin{proof}
For every $(t,\omega)\in[0,\infty)\times\Omega$,
$U^{*}(t,\omega, \cdot)$ is the image of the {\it concave} function $U(t,\omega, \cdot)$ under an appropriate linear transformation; therefore, using e.g., \cite[Theorem B.2.4.2]{LH04}, one can show that $U^{*}(t,\omega, \cdot)$ is concave.
In order to show strict concavity of $U^{*}(t,\omega, \cdot)$, one can proceed as follows. First, for some positive numbers $x_1\neq x_2$, let $\mathbf c^i= (c^{i,1}, \dots, c^{i,m})$ be such that
\begin{equation}\label{9172}\begin{array}{rcl}
\sum\limits_{k=1}^mS^k_tc^{i, k}&\leq &x_i,\quad \text{and}\\
U^{*}\left(t,\omega,x_i\right) &=& U(t,\omega, c^{i,1},\dots,c^{i,m}),\quad i = 1,2.\\
\end{array}\end{equation}
The existence of such $\mathbf c^i$'s follows from compactness of the domain of the optimization problem in the definition of $U^{*}(t,\omega, x)$ (for every $x>0$) and upper semicontinuity of $U(t,\omega,\cdot)$. Since in \eqref{9172}, $\mathbf c^i$ necessarily satisfies inequality $\sum\limits_{k=1}^mS^k_tc^{i, k}\leq x_i$ with equality, $i=1,2$, from the strict monotonicity of $U(t,\omega,\cdot)$ in every spatial component and $x_1\neq x_2$, we deduce that $\mathbf c^1\neq \mathbf c^2$. Consequently, from {\it strict} concavity of $U(t,\omega,\cdot)$, we get
\begin{displaymath}\begin{array}{rcl}
U^{*}\left(t,\omega,\tfrac{x_1 + x_2}{2}\right) &=& \sup\limits_{\substack{(c_1,\dots,c_m)\in\mathbb{R}^m_{+}:\\\sum\limits_{k = 1}^mc_kS^k_t(\omega) \leq \tfrac{x_1 + x_2}{2}}} U(t,\omega, c_1,\dots,c_m) \\
&\geq & U\left(t,\omega, \tfrac{c^{1,1} +c^{2,1}}{2} ,\dots,\tfrac{c^{1,m}+c^{2,m}}{2}\right)\\
&> & \tfrac{1}{2}U\left(t,\omega,c^{1,1},\dots,c^{1,m}\right) +
\tfrac{1}{2}U\left(t,\omega, c^{2,1} ,\dots,c^{2,m}\right)\\
&=&\tfrac{1}{2}U^{*}\left(t,\omega,x_1\right) + \tfrac{1}{2}U^{*}\left(t,\omega,x_2\right). \\
\end{array}
\end{displaymath}
Therefore, $U^{*}(t,\omega, \cdot)$ is strictly concave.
As $U^{*}(t,\omega, \cdot)$ is increasing and strictly concave, it is {\it strictly} increasing.
For every $(t,\omega)\in[0,\infty)\times\Omega$ and $x>0$, using the
Inada conditions for $U(t,\omega, \cdot)$ one can show that there exists $(c_1,\dots,c_m)$ in the {\it interior} of the first orthant, such that $\sum\limits_{i = 1}^mc_iS^i_t(\omega) = x$ and $U^*(t,\omega, x) = U(t,\omega, c_1,\dots,c_m)$.
As a result, differentiability of $U^{*}(t,\omega, \cdot)$ (in the third argument) follows from differentiability of $U(t,\omega, \cdot)$ and general properties of the subgradient of the image function, see e.g., \cite[Corollary D.4.5.2]{LH04}.
As $U^{*}(t,\omega, \cdot)$ is concave and differentiable, we deduce that $U^{*}(t,\omega, \cdot)$ is {\it continuously} differentiable in the interior of its domain, see \cite[Theorem D.6.2.4]{LH04}.
The Inada conditions for $U^{*}(t,\omega, \cdot)$ follow from the (version of the) Inada conditions for $U(t,\omega, \cdot)$ and \cite[Theorem D.4.5.1, p.192]{LH04}.
For every $(t,\omega)\in[0,\infty)\times\Omega$, as $U(t,\omega,\cdot)$ is a closed concave function, using e.g., \cite[Theorem 9.2, p. 75]{Rok}, we deduce that $U^{*}(t,\omega,\cdot)$ is also a {\it closed} concave function\footnote{Note that in general, the image of a closed convex or concave function under a linear transformation need not be closed, see a discussion in \cite[p.97]{LH04}.}. In particular, we get $$U^{*}(t,\omega,0)=\lim_{z\downarrow0}U^{*}(t,\omega,z),\quad(t,\omega)\in[0,\infty)\times\Omega.$$
Finally, for every $x\geq 0$, $U^{*}(\cdot, \cdot, x)$ is optional as a supremum of countably many optional processes (where from continuity of $U(t,\omega, \cdot)$ in the relative interior of its effective domain, it is enough to take the supremum (in the definition of $U^*(t,\omega, \cdot)$) over the $m$-dimensional vectors, whose components take only {\it rational} values).
\end{proof}
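As a quick numerical illustration of the reduction behind Lemma \ref{lem:U*} (not part of the proof), the following Python sketch works through a hypothetical one-period instance: the utility $U(c_1,c_2)=\sqrt{c_1}+\sqrt{c_2}$ and the prices $S^1,S^2$ are illustrative assumptions, not the paper's general setting. For this choice, $U^{*}(x)=\sqrt{Kx}$ with $K=1/S^1+1/S^2$ in closed form, so strict concavity, strict monotonicity, and the first-order relation $S^i\,U^{*}_x(x)=\partial_{c_i}U$ at the optimizer can all be checked directly.

```python
import math

# Toy utility (an assumption for illustration, not the paper's general U)
def U(c1, c2):
    return math.sqrt(c1) + math.sqrt(c2)

def U_star(x, S1, S2, n=2000):
    """sup{ U(c1, c2) : c1*S1 + c2*S2 <= x, c1, c2 >= 0 }, by search over the
    budget line (the supremum is attained on the line by monotonicity of U)."""
    best = -float("inf")
    for i in range(n + 1):
        c1 = (x / S1) * i / n        # spend a fraction of the budget on asset 1
        c2 = (x - c1 * S1) / S2      # and the remainder on asset 2
        best = max(best, U(c1, c2))
    return best

S1, S2 = 2.0, 0.5
K = 1.0 / S1 + 1.0 / S2              # for this toy U, U*(x) = sqrt(K*x)
for x in (0.5, 1.0, 4.0):
    assert abs(U_star(x, S1, S2) - math.sqrt(K * x)) < 1e-3
# strict midpoint concavity and strict monotonicity at sample points
assert U_star(2.25, S1, S2) > 0.5 * (U_star(0.5, S1, S2) + U_star(4.0, S1, S2))
assert U_star(4.0, S1, S2) > U_star(1.0, S1, S2)
# first-order condition: S_i * (U*)'(x) = dU/dc_i at the optimizer
x = 4.0
c1s, c2s = x / (K * S1 ** 2), x / (K * S2 ** 2)   # closed-form optimizer
lam = 0.5 * math.sqrt(K / x)                       # = (U*)'(x)
assert abs(0.5 / math.sqrt(c1s) - lam * S1) < 1e-9
assert abs(0.5 / math.sqrt(c2s) - lam * S2) < 1e-9
```

The last two assertions mirror the marginal-rate relation \eqref{eq:3212} in this toy setting.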
\begin{rem}
Lemma \ref{lem:U*} asserts that $U^{*}$ satisfies Assumption 2.1 in \cite{Mostovyi2015}.
\end{rem}
For every $x>0$, we denote by $\mathcal{A}^*(x)$ the set of $1$-dimensional optional processes $c^{*}$, for which there exists an $\mathbb{R}^d$-valued predictable $\widetilde S$-integrable process $H$, such that
\begin{equation}\nonumber
X_t \triangleq x + \int_0^tH_u\,\mathrm d \widetilde S_u - \int_0^tc^{*}_u\,\mathrm d \kappa_u,
\qquad t\geq0,
\end{equation}
is nonnegative, $\mathbb{P}$-a.s. We also define
\begin{equation} \label{eq:u*}
u^*(x) \triangleq \sup_{c^* \in\mathcal{A}^*(x)}\mathbb{E}\left[\int_0^{\infty}U^*(t,\omega,c^{*}_t(\omega))\,\mathrm d\kappa_t(\omega)\right],\quad x>0.
\end{equation}
Here we use the convention analogous to \eqref{9171}:
\begin{equation} \nonumber
\mathbb{E}\left[\int_0^{\infty}U^*(t,\omega,c^{*}_t(\omega))\,\mathrm d\kappa_t(\omega)\right]\triangleq-\infty,\quad \text{if} \quad \mathbb{E}\left[\int_0^{\infty}{U^*}^{-}(t,\omega,c^{*}_t(\omega))\,\mathrm d\kappa_t(\omega)\right] = \infty.
\end{equation}
\begin{proof}[Proof of Theorem \ref{mainTheorem}]
Let $x> 0$ be fixed and $c\in \mathcal A(x)$. Then $c^{*}_t \triangleq \sum\limits_{k= 1}^mc^k_tS^k_t$, $t\geq 0$, is an optional process such that $c^*\in\mathcal A^*(x)$. Therefore,
\begin{equation}\label{eq:3211}
u^*(x) \geq u(x) >-\infty,\quad x>0.
\end{equation}
Since $U^*$ satisfies the assertions of Lemma \ref{lem:U*}, standard techniques in convex analysis show that $-V^{*}$ has the same properties as $U^*$.
Therefore, the optimization problems \eqref{eq:u*} and \eqref{dualProblem} satisfy the assumptions of \cite[Theorem 3.2]{Mostovyi2015}. Consequently, \cite[Theorem 3.2]{Mostovyi2015} applies, which in particular asserts that $u^*$ and $v$ are finite-valued and that for every $x>0$, there exists a strictly positive optional process $\widehat{c^{*}}(x)$, the unique maximizer to \eqref{eq:u*}.
Let us consider
\begin{equation}\label{eq:tmp}
\sup\limits_{\substack{(x_1,\dots,x_m)\in\mathbb{R}^m_{+}:\\\sum\limits_{k = 1}^mx_kS^k_t(\omega) \leq \widehat{c^{*}_t}(x)(\omega)}} U\left(t,\omega,x_1,\dots,x_m\right),\quad (t,\omega)\in [0,\infty)\times \Omega,
\end{equation}
and define a correspondence
$\varphi:[0,\infty)\times \Omega \twoheadrightarrow \mathbb{R}^m$
as follows $$\varphi(t,\omega) \triangleq \left\{(x_1,\dots,x_m)\in\mathbb{R}^m_{+}:\sum\limits_{k = 1}^mx_k S^k_t(\omega) \leq\widehat{c^{*}_t}(x)(\omega)\right\}.$$
From {\it strict} positivity of the $S^k$'s and positivity and $(\mathrm d \kappa\times \mathbb{P})$-a.e. finiteness of $\widehat {c^{*}}(x)$ (by \cite[Theorem 3.2]{Mostovyi2015}), we deduce that $\varphi$ has nonempty\footnote{Note that the origin in $\mathbb{R}^m$ is in $\varphi(t,\omega)$ for every $(t,\omega)\in[0,\infty)\times \Omega$.} compact values $(\mathrm d \kappa\times \mathbb{P})$-a.e. Let us consider the lower inverse $\varphi^l$ of $\varphi$, defined by
$$\varphi^l(G)\triangleq \left\{(t,\omega)\in[0,\infty)\times \Omega:~\varphi(t,\omega)\cap G \neq \emptyset \right\},\quad G\subset{\mathbb{R}^m}.$$
Let us also consider a subset of $\mathbb{R}^m$ of the form $A\triangleq [a_1,b_1]\times \dots \times [a_m,b_m],$ where the $a_i$'s and $b_i$'s are real numbers. In view of the weak measurability of $\varphi$ (see \cite[Definition 18.1, p. 592]{AB}) that we plan to establish, it is enough to consider $b_i\geq 0$, $i=1,\dots,m$. In addition, let us set $\bar a_i = \max(0, a_i)$. One can see that for such a set $A$, as
$$\varphi^l(A) = \varphi^l([\bar a_1, b_1]\times \dots\times [\bar a_m, b_m]),$$
we have $$\varphi^l(A) = \left\{(t,\omega):~\sum\limits_{i = 1}^m \bar a_iS^i_t(\omega)\leq \widehat{c^{*}_t}(x)(\omega)\right\}.$$
As $\widehat{c^{*}}(x)$ and $S^i$'s are optional processes and since $\varphi^l\left(\bigcup\limits_{n\in\mathbb N}A_n\right) = \bigcup\limits_{n\in\mathbb N}\varphi^l(A_n)$ (see \cite[Section 17.1]{AB}, where $A_n$'s are subsets of $\mathbb{R}^m$), we deduce that $\varphi^l(G)\in\mathcal{O}$ for every open subset $G$ of $\mathbb{R}^m$,
i.e., $\varphi$ is weakly measurable.
As $U$ is a Carath\'eodory function (see \cite[Definition 4.50, p. 153]{AB}), we conclude from \cite[Theorem 18.19, p. 605]{AB} that there exists an {\it optional} $\mathbb{R}^m$-valued process $\widehat c_t(x)$, $t\geq0$, the maximizer of \eqref{eq:tmp} for $(\mathrm d \kappa\times \mathbb{P})$-a.e. $(t,\omega)\in[0,\infty)\times \Omega$. The uniqueness of such a maximizer follows from {\it strict} concavity of $U(t,\omega, \cdot)$ (for every $(t,\omega)\in [0,\infty)\times\Omega$)\footnote{\cite[Theorem 18.19, p. 605]{AB} gives a maximizer that is a measurable multifunction; by uniqueness of the maximizer, it is a single-valued multifunction, for which the concept of measurability coincides with measurability for functions.}. As $\widehat{c^{*}}(x)\in\mathcal A^*(x)$, we deduce that $\widehat c(x)\in\mathcal{A}(x)$. Combining this with \eqref{eq:3211}, we conclude that $\widehat c(x)$ is the unique (up to an equivalence class) maximizer to \eqref{primalProblem}.
For $x>0$, let $\widehat c^i_t(x)$, $i = 1,\dots,m$, denote the components of $\widehat c_t(x)$.
As $\sum\limits_{i = 1}^m \widehat c^i_t(x)(\omega) S^i_t(\omega) = \widehat c_t^*(x)(\omega)$, $(\mathrm d\kappa\times \mathbb P)$-a.e. (where the argument is similar to the discussion after \eqref{9172}), relations \eqref{453}, \eqref{eq:3221}, \eqref{451}, and \eqref{452} follow from \cite[Theorem 3.2]{Mostovyi2015}, whereas \eqref{eq:v_defl} results from \cite[Theorem 3.3]{Mostovyi2015} (equivalently, from \cite[Theorem 2.4]{MostovyiNUPBR}). In turn, combining \eqref{eq:3221} with
\cite[Theorem D.4.5.1]{LH04}, we get
\begin{equation}\nonumber
\begin{array}{rcl}
\widehat Y_t(\omega) &=& U^*_x\left(t,\omega,\widehat {c_t^*}(x)(\omega)\right) \\
&=& \left\{ s(t,\omega) \in \mathbb{R}:~
S^i_t(\omega)s(t,\omega) = {\partial_{x_i}U}\left(t,\omega, \widehat c^1(x)(\omega),\dots,\widehat c^m(x)(\omega)\right),~i = 1,\dots,m\right\} \\
&&\hspace{114mm} (\mathrm d\kappa\times \mathbb P){\text -a.e.,} \\
\end{array}
\end{equation}
i.e., \eqref{eq:3212} holds.
\end{proof}
\end{document}
\begin{document}
\title{Density of 4-edge paths in graphs with fixed edge density}
\author{D\'{a}niel T. Nagy\footnote{E\"{o}tv\"{o}s Lor\'and University, Budapest. [email protected]}}
\maketitle
\begin{abstract}
We investigate the number of 4-edge paths in graphs with a fixed number of vertices and edges. An asymptotically sharp upper bound is given to this quantity. The extremal construction is the quasi-star or the quasi-clique graph, depending on the edge density. An easy lower bound is also proved. This answer resembles the classic theorem of Ahlswede and Katona about the maximal number of 2-edge paths, and a recent theorem of Kenyon, Radin, Ren and Sadun about $k$-edge stars.
\end{abstract}
\section{Introduction}
The aim of this paper is to asymptotically determine the maximal and minimal number of 4-edge paths in graphs with a fixed number of vertices and edges.
The first result of this kind is due to Ahlswede and Katona \cite{ahl}, who described the graphs with a fixed number of vertices and edges containing the maximal number of 2-edge paths. To state this result, we need some simple definitions.
\begin{definition}
The quasi-clique $C_n^e$ is a graph with $n$ vertices and $e$ edges, defined as follows. Take the unique representation
$$e=\binom{a}{2}+b,~~~~~~0\le b<a,$$
connect the first $a$ vertices to each other, and connect the $(a+1)$-th vertex to the first $b$ vertices.
\end{definition}
\begin{definition}
The quasi-star $S_n^e$ is a graph with $n$ vertices and $e$ edges, defined as follows. Take the unique representation
$$\binom{n}{2}-e=\binom{p}{2}+q,~~~~~~0\le q<p,$$
connect each of the first $n-p-1$ vertices to all other vertices, and connect the $(n-p)$-th vertex to the first $n-q$ vertices.
\end{definition}
It is easy to see that $S_n^e$ is isomorphic to the complement of $S_n^{\binom{n}{2}-e}$.
\begin{notation}
The number of 2-edge paths in $C_n^e$ and $S_n^e$ is denoted by $C(n,e)$ and $S(n,e)$ respectively, while the number of $k$-edge stars is denoted by $C_k(n,e)$ and $S_k(n,e)$ respectively.
\end{notation}
\begin{theorem} {\bf (Ahlswede and Katona, 1978, \cite{ahl})} \label{ahlkat}
Let $G$ be a simple graph with $n$ vertices and $e$ edges. Then the number of 2-edge paths in $G$ is at most $\max(C(n,e),~ S(n,e))$.
Furthermore,
$$\max(C(n,e),~ S(n,e))=
\begin{cases}
S(n,e)~~~~~~\textrm{if}~~ 0\le e\le\frac{1}{2}\binom{n}{2}-\frac{n}{2}, \\
C(n,e)~~~~~~\textrm{if}~~ \frac{1}{2}\binom{n}{2}+\frac{n}{2}\le e \le \binom{n}{2}.
\end{cases}$$
\end{theorem}
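As a sanity check of the definitions and of the theorem (purely illustrative, not part of the exposition), the following Python sketch builds $C_n^e$ and $S_n^e$ as defined above, counts 2-edge paths via $\sum_v \binom{\deg v}{2}$, and verifies the Ahlswede--Katona bound by brute force over all graphs on $5$ vertices.

```python
from itertools import combinations
from math import comb

def quasi_clique_edges(n, e):
    """Edge set of C_n^e: write e = C(a,2) + b with 0 <= b < a, take a clique
    on vertices 0..a-1 and join vertex a to vertices 0..b-1."""
    a = 1
    while comb(a + 1, 2) <= e:
        a += 1
    b = e - comb(a, 2)
    edges = set(combinations(range(a), 2))
    edges |= {(i, a) for i in range(b)}
    return edges

def quasi_star_edges(n, e):
    """Edge set of S_n^e, realized as the complement of C_n^{C(n,2)-e}."""
    return set(combinations(range(n), 2)) - quasi_clique_edges(n, comb(n, 2) - e)

def p2_count(n, edges):
    """Number of 2-edge paths, i.e. the sum over vertices of C(deg, 2)."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(comb(d, 2) for d in deg)

# Brute-force check of the Ahlswede--Katona theorem for n = 5 and every e.
n = 5
all_edges = list(combinations(range(n), 2))
for e in range(comb(n, 2) + 1):
    assert len(quasi_clique_edges(n, e)) == e == len(quasi_star_edges(n, e))
    bound = max(p2_count(n, quasi_clique_edges(n, e)),
                p2_count(n, quasi_star_edges(n, e)))
    best = max(p2_count(n, E) for E in combinations(all_edges, e))
    assert best == bound
```

The brute force is feasible here because a graph on $5$ vertices has at most $\binom{10}{e}$ edge sets for each $e$.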
Roughly speaking, this theorem states that if the edge density is smaller than $\frac{1}{2}$, then the quasi-star is the extremal example, while for higher edge densities the quasi-clique becomes extremal. (The transition between the two cases happens in a nontrivial way.)
Recently, Kenyon, Radin, Ren and Sadun proved a similar result for $k$-edge stars, using the notion of graphons. Translating the result back to the language of graphs, we get the following theorem:
\begin{theorem} {\bf (Kenyon, Radin, Ren and Sadun, 2014, \cite{star})} \label{starthm}
Let $G$ be a simple graph with $n$ vertices and $e$ edges, and let $2\le k\le 30$. Then the number of $k$-edge stars in $G$ is at most $$\max(C_k(n,e),~ S_k(n,e))(1+O(e^{-\frac{1}{2}})).$$
\end{theorem}
The theorem is conjectured to hold for all values of $k$. (All that remains to prove it is a complicated extremal value problem.) Similarly to the case of the $2$-edge path, $C_k(n,e)<S_k(n,e)$ if the edge density is small, and $S_k(n,e)<C_k(n,e)$ if it is greater. The point of transition depends on $k$.
Now let us discuss three theorems with just one fixed parameter: the number of edges. (So $n$ is not fixed.) We will start with a general theorem of Alon.
\begin{theorem} {\bf (Alon, 1981, \cite{alon1})} \label{alonthm}
Let $N(G,H)$ denote the number of subgraphs of $G$ that are isomorphic to $H$. Assume that $H$ is a graph that has a spanning subgraph which is the vertex-disjoint union of edges and cycles. Then
$$N(G,H)\le (1+O(e^{-\frac{1}{2}}))N(C_n^e, H).$$
\end{theorem}
This means that for such graphs $H$, the asymptotically extremal example is always the quasi-clique. Note that this theorem can be applied in the case of fixed $n$ and $e$, since the extremal example provided by it is the quasi-clique. (No matter how many vertices we are given, we just have to construct a quasi-clique with $e$ edges.)
Also note that this theorem provides upper bounds for all graphs with a perfect matching (for example, all paths with an odd number of edges) and all Hamiltonian graphs (for example, complete graphs). In the case of the triangle graph $K_3$, the asymptotically best lower bound was proved by Razborov \cite{raz}.
The problem of finding the maximal number of 4-edge paths in graphs with $e$ edges (and an unlimited number of vertices) was solved by Bollob\'{a}s and Sarkar.
\begin{theorem} {\bf (Bollob\'{a}s and Sarkar, 2003, \cite{sarkar2})}
The number of 4-edge paths among graphs with $e$ edges is maximized by the graph that is obtained by taking the complete bipartite graph $K_{2,\lceil e/2\rceil}$, and deleting an edge if $e$ is odd.
\end{theorem}
Bollob\'as and Sarkar also proved asymptotic results for $2k$-edge paths \cite{sarkar1}. The extremal example in this case is the complete bipartite graph with $k$ vertices on one side. For $(2k+1)$-edge paths, the asymptotically extremal example is the quasi-clique. This follows from Theorem \ref{alonthm}, and is also proved in \cite{sarkar1}.
Alon had a conjecture for star-forests (vertex-disjoint union of stars), which was partially verified by F\"uredi.
\begin{conjecture} {\bf (Alon, 1986, \cite{alon2})}
Let $H$ be a star-forest. For any $e>0$, the graph maximizing the number of subgraphs isomorphic to $H$ among graphs with $e$ edges is a star-forest.
\end{conjecture}
\begin{theorem}{\bf (F\"uredi, 1992, \cite{furedi})}
Let $H$ be a star-forest consisting of components with $a_1, a_2, \dots, a_t$ edges. Assume that $a_i>\log_2(t+1)$ holds for all $1\le i\le t$. Let $e$ be sufficiently large. Then the graph maximizing the number of subgraphs isomorphic to $H$ among those with $e$ edges is a star-forest with $t$ components.
\end{theorem}
Considering the above results, investigating the number of 4-edge paths seems to be the ``natural'' choice in the case of fixed $(n,e)$. In this paper, an asymptotic upper bound will be given to this quantity. Similarly to the case of $k$-edge stars, the asymptotically extremal graphs are the quasi-stars and the quasi-cliques. We will also prove an easy asymptotic lower bound.
\section{Proof of the main result}
\begin{theorem} \label{main}
Let $G$ be a simple graph with $n$ vertices and $e$ edges. Let $c=\frac{2e}{n^2}$. (Then $0\le c \le 1$.) Let $N$ denote the number of 4-edge paths in $G$. Then
$$\frac{1}{2}c^4n^5(1-O(n^{-1}))\le N \le \frac{1}{2}\max((1-\sqrt{1-c})^2((c+1)\sqrt{1-c}+c),~c^\frac{5}{2})n^5.$$
\end{theorem}
\begin{proof}
Let $N'$ denote the number of sequences $(v_0, v_1, v_2, v_3, v_4)$ where the $v_i$ are (not necessarily distinct) vertices of $G$ and $v_{i-1}v_i\in E(G)$ for $i=1,\dots,4$. Here, we count every 4-edge path twice (once in each direction). We also count some walks of length 4 with repeated vertices. However, the number of such walks is only $O(n^4)$. Therefore $2N \le N'\le 2N+O(n^4)$, so it suffices to prove
$$c^4\le \frac{N'}{n^5} \le \max((1-\sqrt{1-c})^2((c+1)\sqrt{1-c}+c),~c^\frac{5}{2}).$$
Let us note that $\frac{N'}{n^5}$ is often referred to as the homomorphism density of the 4-edge path $P^4$ in $G$, and denoted by $t(P^4,G)$. (See \cite{lovaszbook} for an overview of the topic of graph homomorphisms.)
First, we prove the lower bound, which is much easier. If we want to select a 4-edge walk, we can start by choosing $v_2$, then $v_1$ and $v_3$ (we have to pick them from $N(v_2)$), and finally $v_0$ and $v_4$ ($\deg(v_1)$ and $\deg(v_3)$ possibilities). So we can write $N'$ as below, and estimate it by using twice that $\displaystyle\sum_{i=1}^n x_i^2\ge \frac{1}{n}\left(\displaystyle\sum_{i=1}^n x_i\right)^2$ holds for all real numbers $x_1,\dots,x_n$.
$$N'=\sum_{v_2\in V(G)}\left(\sum_{v_i\in N(v_2)} \deg(v_i) \right)^2\ge \frac{1}{n}\left(\sum_{v_2\in V(G)} \left(\sum_{v_i\in N(v_2)} \deg(v_i)\right)\right)^2=$$
$$\frac{1}{n}\left(\sum_{v_i\in V(G)} \deg(v_i)^2 \right)^2\ge
\frac{1}{n}\left(\frac{1}{n}\left(\sum_{v_i\in V(G)} \deg(v_i)\right)^2 \right)^2=
\frac{1}{n}\left(\frac{1}{n}\left(cn^2\right)^2 \right)^2=c^4n^5.$$
Now we move on to the proof of the upper bound. Let $\textrm{codeg}(v,w)$ denote the number of common neighbours of the vertices $v$ and $w$. Note that
$$N'=\sum_{v_1, v_3\in V(G)} \deg(v_1)\deg(v_3)\textrm{codeg}(v_1,v_3),$$
since after fixing $v_1$ and $v_3$, we have $\deg(v_1)$ candidates for $v_0$, $\deg(v_3)$ candidates for $v_4$, and $\textrm{codeg}(v_1,v_3)$ candidates for $v_2$. Obviously, $\textrm{codeg}(v_1,v_3)\le \min(\deg(v_1), \deg(v_3))$, therefore
$$N'\le\sum_{v_1, v_3\in V(G)} \deg(v_1)\deg(v_3)\min(\deg(v_1), \deg(v_3)).$$
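The three ways of counting $N'$ used above (direct enumeration of walks, the middle-vertex formula, and the codegree formula) can be cross-checked on a small random graph; the Python sketch below is purely illustrative, and the graph itself is an arbitrary test instance.

```python
import random
from itertools import product

# Build a random graph on n vertices as an arbitrary test instance.
random.seed(1)
n = 8
adj = [[False] * n for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < 0.5:
            adj[u][v] = adj[v][u] = True

deg = [sum(adj[v]) for v in range(n)]
codeg = [[sum(adj[u][w] and adj[v][w] for w in range(n)) for v in range(n)]
         for u in range(n)]

# direct enumeration of all sequences (v0, ..., v4) with consecutive vertices adjacent
walks = sum(1 for w in product(range(n), repeat=5)
            if all(adj[w[i]][w[i + 1]] for i in range(4)))
# middle-vertex formula: N' = sum_{v2} (sum_{v in N(v2)} deg(v))^2
mid = sum(sum(deg[u] for u in range(n) if adj[v2][u]) ** 2 for v2 in range(n))
# codegree formula: N' = sum_{v1, v3} deg(v1) deg(v3) codeg(v1, v3)
cod = sum(deg[v1] * deg[v3] * codeg[v1][v3]
          for v1 in range(n) for v3 in range(n))
assert walks == mid == cod
# lower bound c^4 n^5 <= N', with c = 2e/n^2
c = sum(deg) / n ** 2
assert c ** 4 * n ** 5 <= walks
```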
\begin{definition}
Let $G$ be a simple graph with $n$ vertices labeled $w_1, w_2, \dots w_n$. $A_G:[0,1)^2\rightarrow [0,1]$ is the function that is 1 on all rectangles $[\frac{i-1}{n}, \frac{i}{n})\times [\frac{j-1}{n}, \frac{j}{n})$ where $w_iw_j\in E(G)$, and 0 elsewhere.
\end{definition}
\begin{definition}
Let $A:[0,1)^2\rightarrow [0,1]$ be an integrable function satisfying $A(x,y)=A(y,x)$ for all $0\le x,y<1$. Then for all $0\le x<1$ let
$$\ell(x)=\int_0^1 A(x,y) \, \mathrm{d}y$$
and let
$$S(A)=\int_0^1 \int_0^1 \ell(x)\ell(y)\min(\ell(x), \ell(y)) \, \mathrm{d}x \mathrm{d}y.$$
\end{definition}
Note that $A_G$ satisfies $\int_0^1 \int_0^1 A_G(x,y) \, \mathrm{d}x \mathrm{d}y=c$ and $A_G(x,y)=A_G(y,x)$. If $x\in [\frac{i-1}{n}, \frac{i}{n})$, then $\ell(x)=\frac{\deg(w_i)}{n}$, so
$$S(A_G)=\frac{1}{n^5}\sum_{1\le i,j \le n} \deg(w_i)\deg(w_j)\min(\deg(w_i), \deg(w_j))\ge \frac{N'}{n^5}.$$
\begin{definition}
Let $0\le c \le 1$. Then let $A_1(c):[0,1)^2\rightarrow [0,1]$ be the function satisfying $A_1(x,y)=1$ if $\min(x,y)<1-\sqrt{1-c}$ and $A_1(x,y)=0$ otherwise. Let $A_2(c):[0,1)^2\rightarrow [0,1]$ be the function satisfying $A_2(x,y)=1$ if $\max(x,y)<\sqrt{c}$ and $A_2(x,y)=0$ otherwise. (It is easy to see that $\int_0^1 \int_0^1 A_i(x,y) \, \mathrm{d}x \mathrm{d}y=c$ holds for $i=1,2$.) See Figure \ref{a1a2}.
\end{definition}
\begin{figure}
\caption{The functions $A_1(c)$ and $A_2(c)$.}
\label{a1a2}
\end{figure}
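The degree profiles of $A_1(c)$ and $A_2(c)$ are step functions, so $S(A_1(c))$ and $S(A_2(c))$ can be approximated by a crude midpoint rule and compared with the closed forms appearing in Theorem \ref{main}. The following Python sketch (illustrative only, with an ad-hoc grid size) does this for a few values of $c$.

```python
from math import sqrt

def S_profile(ell, m):
    """Midpoint-rule approximation of S(A) = int int l(x) l(y) min(l(x), l(y)) dx dy
    for a degree profile l sampled at the midpoints of an m-point grid."""
    return sum(a * b * min(a, b) for a in ell for b in ell) / m ** 2

m = 500
for c in (0.2, 0.5, 0.8):
    s = 1 - sqrt(1 - c)
    ell1 = [1.0 if (i + 0.5) / m < s else s for i in range(m)]              # profile of A_1(c)
    ell2 = [sqrt(c) if (i + 0.5) / m < sqrt(c) else 0.0 for i in range(m)]  # profile of A_2(c)
    # both functions have edge density c
    assert abs(sum(ell1) / m - c) < 2e-3
    assert abs(sum(ell2) / m - c) < 2e-3
    # S matches the closed forms from the statement of the theorem
    f1 = (1 - sqrt(1 - c)) ** 2 * ((c + 1) * sqrt(1 - c) + c)
    f2 = c ** 2.5
    assert abs(S_profile(ell1, m) - f1) < 1e-2
    assert abs(S_profile(ell2, m) - f2) < 1e-2
```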
Theorem \ref{main} will be an easy consequence of the following theorem.
\begin{theorem} \label{thm2}
Let $0\le c \le 1$ and $K\in\mathbb{N}^+$ be fixed numbers. Assume that $A:[0,1)^2\rightarrow [0,1]$ is a function satisfying $\int_0^1 \int_0^1 A(x,y) \, \mathrm{d}x \mathrm{d}y=c$ and $A(x,y)=A(y,x)$ for all $0\le x,y<1$, and that there are some numbers $0=q_0<q_1<q_2<\dots < q_K=1$ such that $A$ is constant on $[q_{i-1}, q_i)\times [q_{j-1}, q_j)$ for all $1\le i,j\le K$. Then
$$S(A)\le \max(S(A_1(c)),S(A_2(c)))=\max((1-\sqrt{1-c})^2((c+1)\sqrt{1-c}+c),~c^\frac{5}{2}).$$
\end{theorem}
\begin{proof}
We will use the following notations. $I_i=[q_{i-1},q_i)$, $t_i=q_i-q_{i-1}=|I_i|$, $\ell_i=\ell(x)$ for any $x\in I_i$, $A_{i,j}$ is the value of $A$ in the rectangle $I_i\times I_j$. We will refer to the sets of the form $[0,1)\times I_i$ and $I_i\times [0,1)$ as rows and columns respectively.
The function $S$ is continuous on the compact set defined by the conditions, so its maximum is attained for some $A$. Let $A$ be a function maximizing $S$, and let
$$T(A)=\int_0^1 \int_0^1 \int_0^1 \int_0^1 |A(x_1,y_1)-A(x_2,y_2)| \, \mathrm{d}x_1 \mathrm{d}y_1 \mathrm{d}x_2 \mathrm{d}y_2=\sum_{1\le a_1, a_2, b_1, b_2 \le K} t_{a_1}t_{b_1}t_{a_2}t_{b_2}|A_{a_1,b_1}-A_{a_2,b_2}|.$$
By a similar compactness argument, the minimum of $T$ is also attained for some $A$ (among those that maximize $S$). Such an $A$ cannot have four rectangles $I_{i_1}\times I_{j_1}, I_{i_1}\times I_{j_2}, I_{i_2}\times I_{j_1}$ and $I_{i_2}\times I_{j_2}$ satisfying $A_{i_1,j_1}< A_{i_2,j_1}$ and $A_{i_1,j_2}> A_{i_2,j_2}$. Suppose, for contradiction, that it does.
For some $\varepsilon>0$, replace the values $A_{i_1,j_1}, A_{i_2,j_1}, A_{i_1,j_2}$ and $A_{i_2,j_2}$ by $A_{i_1,j_1}+\frac{\varepsilon}{t_{i_1}t_{j_1}}, A_{i_2,j_1}-\frac{\varepsilon}{t_{i_2}t_{j_1}}, A_{i_1,j_2}-\frac{\varepsilon}{t_{i_1}t_{j_2}}$ and $A_{i_2,j_2}+\frac{\varepsilon}{t_{i_2}t_{j_2}}$ respectively. By choosing a small enough $\varepsilon$, the value of $A$ remains greater in $I_{i_2}\times I_{j_1}$ and $I_{i_1}\times I_{j_2}$ than in $I_{i_1}\times I_{j_1}$ and $I_{i_2}\times I_{j_2}$ respectively. Note that such a change does not change the values $\ell(x)$, therefore not changing $S(A)$. (To see that, take a line that intersects two of the four rectangles where the value of $A$ changes. It increases in one of them, while decreasing in the other one. This results in a 0 net change in the integral of $A$ over that line, since if one of the rectangles intersect the line in a segment $\lambda$ times as long as the other one, then its area is $\lambda$ times greater, so the change in the value of $A$ is $\lambda$ times smaller.)
Now we show that the value $T(A)$ decreases during this transformation. $T(A)$ is the sum of the differences between the values $A_{i,j}$, weighted with the areas of these rectangles. Assume that the value of $A$ is greater in $r_1$ than in $r_2$ for two rectangles $r_1$ and $r_2$. If we decrease the value of $A$ in $r_1$ by $\frac{\varepsilon}{Area(r_1)}$, and increase it in $r_2$ by $\frac{\varepsilon}{Area(r_2)}$ for a small enough $\varepsilon$, then $T(A)$ decreases. To see that, note that
$$Area(r_1)Area(r_2)|A_{r_1}-A_{r_2}|>
Area(r_1)Area(r_2)\left|A_{r_1}-\frac{\varepsilon}{Area(r_1)}-\left(A_{r_2}+\frac{\varepsilon}{Area(r_2)}\right)\right|$$
and for any rectangle $r_3\not\in\{r_1,r_2\}$
$$Area(r_1)Area(r_3)|A_{r_1}-A_{r_3}|+Area(r_2)Area(r_3)|A_{r_2}-A_{r_3}|\ge$$
$$Area(r_1)Area(r_3)\left|A_{r_1}-\frac{\varepsilon}{Area(r_1)}-A_{r_3}\right|+Area(r_2)Area(r_3)\left|A_{r_2}+\frac{\varepsilon}{Area(r_2)}-A_{r_3}\right|.$$
Applying this to $(r_1,r_2)=(I_{i_2}\times I_{j_1}, I_{i_1}\times I_{j_1})$ and $(r_1,r_2)=(I_{i_1}\times I_{j_2}, I_{i_2}\times I_{j_2})$ the desired result follows.
The symmetry of $A$ can be ruined by this transformation, but replacing $A(x,y)$ by $\frac{A(x,y)+A(y,x)}{2}$ for all $0\le x,y\le 1$ fixes this while not increasing $T(A)$ and not changing $S(A)$. (The fact that $T(A)$ does not increase can be verified by the above calculation dealing with the decrease of $A$ in a high-valued rectangle and its increase in a lower-valued one.)
Rearrange the intervals $I_i$ such that $\ell_1\ge \ell_2\ge\dots \ge \ell_K$. The property we just proved for the rectangles implies that for any four rectangles of the form $I_{i_1}\times I_{j_1}, I_{i_1}\times I_{j_2}, I_{i_2}\times I_{j_1}$ and $I_{i_2}\times I_{j_2}$, we have
$$A_{i_1,j_1}< A_{i_2,j_1} \Rightarrow A_{i_1,j_2}\le A_{i_2,j_2}.$$
Now we prove that $A$ is decreasing in both variables. (Since $A(x,y)=A(y,x)$, it is enough to show that for one variable.) Assume to the contrary that for some $i_1<i_2$ and $j$ we have $A_{i_1,j}<A_{i_2,j}$. Then for all $1\le p \le K$ we have $A_{i_1,p}\le A_{i_2,p}$. It results in $\ell_{i_1}<\ell_{i_2}$, a contradiction.
This decreasing property implies that if $\ell_i=\ell_{i+1}$ then $A$ is identical in $I_i\times [0,1)$ and $I_{i+1}\times [0,1)$ so we can merge all such intervals and assume that $\ell_1 > \ell_2 > \dots > \ell_{k}$ for some $k\le K$.
Note that $S(A)$ can be expressed as
\begin{equation}
\label{sa}
S(A)=\sum_{i=1}^k \sum_{j=1}^k t_i t_j \ell_i \ell_j \min(\ell_i, \ell_j).
\end{equation}
Consider an $A$ that meets the theorem's requirements, maximizes $S(A)$ and is decreasing in both variables. We claim that there cannot be two rectangles in the same row (or column) where the value of $A$ is neither 0 nor 1. Assume that for some $1\le a<b\le k$ and $1\le p\le k$ we have $0<A_{a,p}<1$ and $0<A_{b,p}<1$. Pick some $\varepsilon\in\mathbb{R}$ and change $A_{a,p}$ and $A_{p,a}$ to $A_{a,p}+\frac{\varepsilon}{t_a t_p}$ while changing $A_{b,p}$ and $A_{p,b}$ to $A_{b,p}-\frac{\varepsilon}{t_b t_p}$. This transformation changes only two $\ell$ values: $\ell_a$ becomes $\ell_a+\frac{\varepsilon}{t_a}$ and $\ell_b$ becomes $\ell_b-\frac{\varepsilon}{t_b}$. If $|\varepsilon|$ is small enough then $0<A_{a,p}+\frac{\varepsilon}{t_a t_p}, A_{b,p}-\frac{\varepsilon}{t_b t_p}<1$ and the order of the $\ell$ values is preserved. Now we show that $S(A)$ is a strictly convex function of $\varepsilon$ in a neighborhood of 0. Consider the $k^2$ terms in the expression (\ref{sa}). The terms in which at least one index differs from $a$ and $b$ are obviously convex functions of $\varepsilon$, since $\varepsilon$ appears in them at a power of at most 2, with a positive coefficient when the power is 2. So the terms of $S(A)$ corresponding to pairs of indices other than $(a,a), (a,b), (b,a)$ and $(b,b)$ are convex functions of $\varepsilon$. All we have to prove is that the sum of the terms corresponding to these four pairs is strictly convex at $\varepsilon=0$.
This sum equals
$$t_a^2\left(\ell_a+\frac{\varepsilon}{t_a}\right)^3+t_b^2\left(\ell_b-\frac{\varepsilon}{t_b}\right)^3+2 t_a t_b \left(\ell_a+\frac{\varepsilon}{t_a}\right)\left(\ell_b-\frac{\varepsilon}{t_b}\right)^2.$$
Differentiating twice with respect to $\varepsilon$ and substituting $\varepsilon=0$, we get
$$6\ell_a-2\ell_b+\frac{4t_a \ell_a}{t_b}.$$
It is positive, because $a<b$ implies $\ell_a>\ell_b$. Therefore $S(A)$ is a strictly convex function of $\varepsilon$ in a neighborhood of 0, so it cannot have a maximum at 0. This proves that the $A$ under investigation has at most one rectangle in every row and column with a value different from 0 or 1, as depicted in Figure \ref{atmost1}.
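The second derivative computed above can be double-checked numerically: the displayed expression is a cubic polynomial in $\varepsilon$, so a central second difference reproduces it exactly up to rounding. The following sketch (illustrative only) tests the formula $6\ell_a-2\ell_b+\frac{4t_a\ell_a}{t_b}$ on random data with $\ell_a>\ell_b$.

```python
import random

random.seed(0)

def second_diff(f, h=1e-3):
    """Central second difference at 0; exact for cubic polynomials."""
    return (f(h) - 2.0 * f(0.0) + f(-h)) / h ** 2

for _ in range(100):
    ta, tb = random.uniform(0.1, 1.0), random.uniform(0.1, 1.0)
    lb = random.uniform(0.0, 1.0)
    la = lb + random.uniform(0.01, 1.0)          # l_a > l_b, as in the proof
    f = lambda e, ta=ta, tb=tb, la=la, lb=lb: (
        ta ** 2 * (la + e / ta) ** 3
        + tb ** 2 * (lb - e / tb) ** 3
        + 2 * ta * tb * (la + e / ta) * (lb - e / tb) ** 2)
    target = 6 * la - 2 * lb + 4 * ta * la / tb
    assert abs(second_diff(f) - target) < 1e-5
    assert target > 0
```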
\begin{figure}
\caption{Example of a function considered at this point in the proof}
\label{atmost1}
\end{figure}
Now we will prove that actually there are no such rectangles at all. Since $A$ is decreasing in both variables, each row (and column) starts with some 1-valued rectangles, then it might include a single rectangle with value between 0 and 1, then it contains 0-valued rectangles. (Of course, a row or column does not necessarily contain all three types of rectangles.) If $A$ has a single rectangle of size $t\times t$ with nonzero value, then $A$'s value is $\frac{c}{t^2}$ there. (So $\frac{c}{t^2}\le 1$.) Then $S(A)=t^2\cdot (t\cdot \frac{c}{t^2})^3=\frac{c^3}{t}\le c^{\frac{5}{2}}$. If there are multiple nonzero-valued rectangles, then $A_{1,1}=1$.
Assume that $0<\lambda=A_{1,j}=A_{j,1}<1$ for some $j$. (Then $j\in\{k-1,k\}$). We will show that it is possible to modify $A$ to increase $S(A)$, so this case is not possible. (From now on we will modify the lengths of the intervals too, not just the value of $A$ in the rectangles.) Divide the interval $I_j$ into two intervals $I_{j'}$ and $I_{j''}$ of length $t_{j'}=\lambda\cdot t_j$ and $t_{j''}=(1-\lambda)\cdot t_j$ respectively. Then divide $I_1\times I_j$ into two rectangles of size $t_1\times t_{j'}$ and $t_1\times t_{j''}$, and set $A$ to be 1 and 0 respectively in them. Modify $A$ in $I_j\times I_1$ similarly to keep $A$ symmetric.
After this modification, we get $\ell_{j'}=\frac{\ell_j}{\lambda}$ and $\ell_{j''}=0$. This means that the terms with $j''$ can be ignored in (\ref{sa}). The only changes in (\ref{sa}) are $t_j$ becoming $t_{j'}=\lambda\cdot t_j$ and $\ell_j$ becoming $\ell_{j'}=\frac{\ell_j}{\lambda}$. Since the power of $\ell_j$ is not smaller than the power of $t_j$ in any term of (\ref{sa}), and greater than it in $t_j^2\cdot\ell_j^3$, the value of $S(A)$ increases by this modification. So we can assume that $A$ takes only 0 and 1 values in $I_1\times[0,1)$ and $[0,1)\times I_1$. To show that noninteger values are not possible in the other places, we need a technical lemma.
Note that the variables $t_i$ and $\ell_i$ appearing in the following lemma should be considered real numbers with no connection to any function $A: [0,1)^2\rightarrow [0,1]$, but the same notations are used, since the lemma will be applied in such settings.
\begin{lemma} \label{trlem}
Let $t_1, t_2, \dots, t_k$ be positive reals and let $\ell_1 > \ell_2 > \dots > \ell_{k}\ge 0$. Assume that there is a neighborhood $H$ of $t_\beta$ such that for some $\alpha\in\{\beta-1,\beta+1\}$ and $x\in H$ we can replace the numbers $t_\alpha, t_\beta, \ell_\beta$ by $t_\alpha(x)=t_\alpha+t_\beta-x$, $t_\beta(x)=x$ and $\ell_\beta(x)=\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)$ respectively, while preserving the nonnegativity of the variables and the order of the $\ell$'s. The other variables are left unchanged: $t_i(x)=t_i$ if $i\not\in\{\alpha,\beta\}$ and $\ell_i(x)=\ell_i$ if $i\not=\beta$. (Roughly speaking, this transformation preserves the sum $c=\displaystyle\sum_{i=1}^k t_i(x)\ell_i(x)$, while changing only three of the values: two neighboring $t$'s and the $\ell$ corresponding to one of them.) Then the function
\begin{equation} \label{lemmaeq}
S(x)=\sum_{i=1}^k \sum_{j=1}^k t_i(x) t_j(x) \ell_i(x) \ell_j(x) \min(\ell_i(x), \ell_j(x))
\end{equation}
is strictly convex at $x=t_\beta$. Therefore it has no maximum there.
\end{lemma}
\begin{proof}
Consider the formula (\ref{lemmaeq}) and select all the terms depending on $x$. We can ignore the terms where one of the indices is $\alpha$ or $\beta$, and the other is greater than $\max(\alpha, \beta)$ because
$$(t_\alpha+t_\beta-x)t_p \ell_\alpha \ell_p^2 + x t_p \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right)\ell_p^2
=t_\alpha t_p \ell_\alpha \ell_p^2+t_\beta t_p \ell_\beta \ell_p^2$$
does not depend on $x$. The sum of the other terms depending on $x$ can be written as $2S_1(x)+S_2(x)$. Here $S_1(x)$ is the sum of the terms corresponding to pairs of indices where one of the elements is $\alpha$ or $\beta$ and the other one is smaller than both $\alpha$ and $\beta$. $S_2(x)$ denotes the sum of the terms corresponding to the pairs of indices $(\alpha, \alpha), (\alpha, \beta), (\beta, \alpha)$ and $(\beta, \beta)$.
$$S_1(x)=\left((t_\alpha+t_\beta-x)\cdot \ell_\alpha^2 +
x\cdot \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right)^2\right)\cdot\sum_{p=1}^{\min(\alpha,\beta)-1} t_p\ell_p,$$
$$S_2(x)=(t_\alpha+t_\beta-x)^2\cdot \ell_\alpha^3+x^2\cdot \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right)^3+$$
$$2(t_\alpha+t_\beta-x)x \ell_\alpha \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right) \min\left(\ell_\alpha, \ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right).$$
We have to show that $S_1(x)$ and $S_2(x)$ are strictly convex at $x=t_\beta$, so that $S(x)$ does not take its maximum there. We start with $S_1(x)$. We can disregard the constant factor on the right, as it does not change convexity. The left factor can be expressed as $\lambda_1 x+\lambda_0+\lambda_{-1} x^{-1}$. Since $\lambda_{-1}=t_\beta^2(\ell_\beta-\ell_\alpha)^2>0$, $S_1$ is strictly convex.
Now we consider $S_2(x)$. First, assume that $\beta=\alpha+1$, and therefore $\ell_\alpha>\ell_\beta$. In this case we have
$$S_2(x)=(t_\alpha+t_\beta-x)^2\cdot \ell_\alpha^3+x^2\cdot \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right)^3+
2(t_\alpha+t_\beta-x)x \ell_\alpha \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right)^2.$$
Differentiating twice by $x$ and setting $x=t_\beta$ we get
$$2t_\beta^{-1}(\ell_\beta-\ell_\alpha)^2\big((\ell_\alpha+\ell_\beta)t_\beta+2\ell_\alpha t_\alpha\big)>0.$$
If $\beta=\alpha-1$, and therefore $\ell_\alpha<\ell_\beta$, we have
$$S_2(x)=(t_\alpha+t_\beta-x)^2\cdot \ell_\alpha^3+x^2\cdot \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right)^3+
2(t_\alpha+t_\beta-x)x \ell_\alpha^2 \left(\ell_\alpha+\frac{t_\beta}{x}(\ell_\beta-\ell_\alpha)\right).$$
Differentiating twice by $x$ and setting $x=t_\beta$ we get
$$2(\ell_\beta-\ell_\alpha)^3>0.$$
In both cases, $S_2(x)$ is strictly convex at $x=t_\beta$. This concludes the proof of the lemma.
\end{proof}
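The convexity claim of the lemma can also be checked numerically. The following Python sketch (with arbitrary sample data chosen for illustration, not part of the proof) evaluates $S$ along the transformation with $\beta=2$, $\alpha=\beta+1$ and verifies that the second difference at $x=t_\beta$ is positive:

```python
def S(t, l):
    """The double sum in (lemmaeq) for step data t, l."""
    k = len(t)
    return sum(t[i] * t[j] * l[i] * l[j] * min(l[i], l[j])
               for i in range(k) for j in range(k))

def transformed(t, l, a, b, x):
    """Replace t_a, t_b, l_b as in the lemma (0-indexed a, b)."""
    t2, l2 = list(t), list(l)
    t2[a] = t[a] + t[b] - x
    t2[b] = x
    l2[b] = l[a] + t[b] / x * (l[b] - l[a])
    return t2, l2

t = [0.3, 0.2, 0.1]      # sample t values (positive)
l = [0.5, 0.3, 0.1]      # sample ell values (strictly decreasing)
a, b = 2, 1              # beta = index 1, alpha = beta + 1 (0-indexed)
h = 1e-4
Sx = lambda x: S(*transformed(t, l, a, b, x))
# central second difference approximates S''(t_beta)
second_diff = (Sx(t[b] + h) - 2 * Sx(t[b]) + Sx(t[b] - h)) / h**2
assert second_diff > 0   # S is strictly convex at x = t_beta
```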
Now we can continue the proof of the theorem. Assume that $A$ is not entirely 0-1 valued. Let $I_p\times [0,1)$ be the first column containing a rectangle with a value different from 0 and 1. We already proved that $p\ge 2$. Let $I_p\times I_q$ be the unique rectangle in the $p$-th column with $0<A_{p,q}<1$. Since $A_{p,q}=A_{q,p}$, we know that $p\le q$. We will show that $A$ admits the type of transformation described in Lemma \ref{trlem}, and therefore does not maximize $S(A)$. We will describe transformations in each case that change only two neighboring $t$ values and the $\ell$ corresponding to one of them.
{\it Case a}: First, assume that $p<q<k$ and $A_{p-1,q+1}=1$. Then the rows $[0,1)\times I_q$ and $[0,1)\times I_{q+1}$ differ only in the $p$-th column. We can move the point separating the intervals $I_q$ and $I_{q+1}$, keeping $A_{p,q+1}=0$ while adjusting $A_{p,q}$ such that the integral of $A$ over the whole square remains unchanged. We apply these changes to the other side of the main diagonal to keep $A$ symmetric. During this transformation the only $\ell$ value to change is $\ell_q$. So Lemma \ref{trlem} can be applied with $\alpha=q+1, \beta=q$.
\begin{figure}
\caption{Illustration for Case a (left) and Case b (right).}
\end{figure}
{\it Case b}: Now assume that $p<q=k$ or $p<q<k$ and $A_{p-1, q+1}=0$. Then the columns $I_{p-1}\times [0,1)$ and $I_{p}\times [0,1)$ differ only in the $q$-th row. We can move the point separating the intervals $I_{p-1}$ and $I_{p}$, keeping $A_{p-1,q}=1$ while adjusting $A_{p,q}$ such that the integral of $A$ over the whole square stays the same. We apply these changes to the other side of the main diagonal to keep $A$ symmetric. During this transformation the only $\ell$ value to change is $\ell_p$. So Lemma \ref{trlem} can be applied with $\alpha=p-1, \beta=p$.
\begin{figure}
\caption{Illustration for Case c (left) and Case d (right).}
\end{figure}
{\it Case c}: Assume that $p=q<k$ and $A_{p-1,p+1}=1$. Then the rows $[0,1)\times I_p$ and $[0,1)\times I_{p+1}$ differ only in the $p$-th column. We can move the point separating the intervals $I_p$ and $I_{p+1}$, keeping $A_{p,p+1}=0$ while adjusting $A_{p,p}$ such that the integral of $A$ over the whole square stays the same. (We apply these changes to the intervals defining the rows and columns simultaneously to keep $A$ symmetric.) During this transformation the only $\ell$ value to change is $\ell_p$. So Lemma \ref{trlem} can be applied with $\alpha=p+1, \beta=p$.
{\it Case d}: Now assume that $p=q=k$ or $p=q<k$ and $A_{p-1, p+1}=0$. Then the columns $I_{p-1}\times [0,1)$ and $I_{p}\times [0,1)$ differ only in the $p$-th row. We can move the point separating the intervals $I_{p-1}$ and $I_{p}$, keeping $A_{p-1,p}=1$ while adjusting $A_{p,p}$ such that the integral of $A$ over the whole square stays the same. (We apply these changes to the intervals defining the rows and columns simultaneously to keep $A$ symmetric.) During this transformation the only $\ell$ value to change is $\ell_p$. So Lemma \ref{trlem} can be applied with $\alpha=p-1, \beta=p$.
With this, we have covered all the possibilities. From now on, we can assume that $A: [0,1)^2\rightarrow \{0,1\}$.
Since $A$ is 0-1 valued and decreasing in both variables, there exists some $k'\in \{k-1,k\}$ such that $k'$ of the $\ell$ values are positive and $A_{i,j}=1$ if and only if $i+j\le k'+1$. Now we will show that $A$ cannot maximize $S(A)$ if $k'\ge 4$.
Assume that $k'\ge 4$. It is possible to move the point separating the intervals $I_{k'-1}$ and $I_{k'}$, while keeping $\ell_{k'-1}$ unchanged and adjusting $\ell_{k'}$ to keep the integral over the whole square unchanged. When these changes are applied to the other side of the main diagonal to preserve the symmetry, we see that the point separating $I_1$ and $I_2$ moves, $\ell_1$ remains unchanged, but $\ell_2$ changes. During this transformation, the following values change (see Figure \ref{k4}):
$t_{k'}\rightarrow x,$
$t_{k'-1}\rightarrow t_{k'-1}+t_{k'}-x,$
$\ell_{k'}\rightarrow \ell_{k'-1}+\frac{t_{k'}}{x}(\ell_{k'}-\ell_{k'-1}),$
$\ell_2\rightarrow\ell_1-x,$
$t_2\rightarrow \frac{t_2(\ell_1-\ell_2)}{x},$
$t_1\rightarrow t_1+t_2-\frac{t_2(\ell_1-\ell_2)}{x}.$
\begin{figure}
\caption{The case $k'\ge 4$}
\label{k4}
\end{figure}
Now we investigate how $S(A)$ changes during such a transformation. First, apply the changes only to $t_{k'}, t_{k'-1}$ and $\ell_{k'}$. Lemma \ref{trlem} states that $S$ is now a strictly convex function of $x$. (Because we changed only two neighboring $t$'s and the $\ell$ corresponding to one of them, while preserving the sum $\displaystyle\sum_{i=1}^{k'} t_i\ell_i$.) Now apply the changes to $t_1, t_2$ and $\ell_2$ too. Since $t_1\ell_1+t_2\ell_2$ does not change during the transformation, for any $3\le s \le k'$, the sum of the terms in (\ref{sa}) corresponding to the pairs of indices (1,$s$), ($s$,1), (2,$s$) and ($s$,2), which is
$$2t_1t_s\ell_1\ell_s^2+2t_2t_s\ell_2\ell_s^2=2t_s\ell_s^2(t_1\ell_1+t_2\ell_2),$$
does not change. Therefore it is enough to consider the terms where both indices are 1 or 2.
$$\left(t_1+t_2-\frac{t_2(\ell_1-\ell_2)}{x}\right)^2\ell_1^3+\left(\frac{t_2(\ell_1-\ell_2)}{x}\right)^2(\ell_1-x)^3+
2\left(t_1+t_2-\frac{t_2(\ell_1-\ell_2)}{x}\right)\left(\frac{t_2(\ell_1-\ell_2)}{x}\right)\ell_1(\ell_1-x)^2.$$
Differentiating twice with respect to $x$ and setting $x=t_{k'}=\ell_1-\ell_2$ we get $\frac{2t_2^2 \ell_1^2}{t_{k'}}>0$, so the above formula is strictly convex at $x=t_{k'}$. As the sum of two strictly convex functions, $S$ is a strictly convex function of $x$, and therefore $A$ does not maximize $S(A)$.
Now assume that $k'=3$. We state that if $A: [0,1)^2\rightarrow \{0,1\}$ is a symmetric function decreasing in both variables with at most three positive $\ell$ values, then there is another such function $B$ with at most two positive $\ell$ values such that $\int_0^1 \int_0^1 A(x,y) \, \mathrm{d}x \mathrm{d}y=\int_0^1 \int_0^1 B(x,y) \, \mathrm{d}x \mathrm{d}y$ and $S(B)\ge S(A)$.
If $A$ is such a function then $\ell_1=t_1+t_2+t_3$, $\ell_2=t_1+t_2$ and $\ell_3=t_1$. Without loss of generality we can assume that $t_1+t_2+t_3=\ell_1=1$. (Replacing $A(x,y)$ with $A(\lambda x, \lambda y)$ changes $S(A)$ by a factor of $\lambda^5$, so the rescaling does not change our problem.) Note that in this case the two parameters $s=1-\int_0^1 \int_0^1 A(x,y) \, \mathrm{d}x \mathrm{d}y$ and $x=t_3$ are enough to define $A$. (See Figure \ref{k3}.) We have
$\ell_1=1,$
$\ell_2=1-x,$
$t_1=\ell_3=1-\frac{s+x^2}{2x},$
$t_2=\frac{s-x^2}{2x},$
$t_3=x.$
\begin{figure}
\caption{The case $k'=3$}
\label{k3}
\end{figure}
Note that $x$ can take any value from $[1-\sqrt{1-s},\sqrt{s}]$. The two endpoints correspond to functions with at most two intervals. (After scaling back, we get a step function with at most two positive $\ell$ values.)
$$S(A)=f(x)=t_1^2\ell_1^3+t_2^2\ell_2^3+t_3^2\ell_3^3+2t_1t_2\ell_1\ell_2^2+2t_1t_3\ell_1\ell_3^2+2t_2t_3\ell_2\ell_3^2=$$
$$\left(1-\frac{s+x^2}{2x}\right)^2+\left(\frac{s-x^2}{2x}\right)^2(1-x)^3+x^2\left(1-\frac{s+x^2}{2x}\right)^3+$$
$$2\left(1-\frac{s+x^2}{2x}\right)\left(\frac{s-x^2}{2x}\right)(1-x)^2+
2\left(1-\frac{s+x^2}{2x}\right)x\left(1-\frac{s+x^2}{2x}\right)^2+
2\left(\frac{s-x^2}{2x}\right)x(1-x)\left(1-\frac{s+x^2}{2x}\right)^2.$$
We will prove that for a fixed $s$, $f(x)$ is either increasing in $[1-\sqrt{1-s},\sqrt{s}]$ or there exists an $x_0\in [1-\sqrt{1-s},\sqrt{s}]$ such that $f(x)$ is strictly decreasing in $[1-\sqrt{1-s},x_0]$ and strictly increasing in $[x_0,\sqrt{s}]$. In both cases, $f(x)$ must take its maximum at one of the endpoints. Differentiate $f(x)$ with respect to $x$. We need $f'(x)$ to be either positive in $(1-\sqrt{1-s},\sqrt{s})$, or negative in $(1-\sqrt{1-s},x_0)$ and positive in $(x_0,\sqrt{s})$. Since $x>0$, it is sufficient to prove the same for $f'(x)x$. An elementary calculation shows that $\lim_{x\searrow 0} f'(x)x=-\infty$ and $f'(\sqrt{s})\sqrt{s}=0$. If we can show that $f'(x)x$ is strictly concave in $[0,\sqrt{s}]$, then the desired result follows. After further calculation
$$4(f'(x)x)''=-50x^3+96x^2+(27s-54)x-16s-3s^2(2-s)\frac{1}{x^3}.$$
We are going to prove that the above formula is negative if $0<x,s<1$. Since $s<1$, it is enough to show that
$$g(x,s)=-50x^3+96x^2+(27s-54)x-16s-3s^2\frac{1}{x^3}\leq 0.$$
This is a polynomial in $s$ of degree 2. For a fixed $x$, it takes its maximum at $s=\frac{27x^4-16x^3}{6}$.
If $\frac{27x^4-16x^3}{6}\ge 1$, then $x\ge 0.89$ and
$$g(x,s)\le g(x,1)=-50x^3+96x^2-27x-16-3x^{-3}\le 0.$$
If $\frac{27x^4-16x^3}{6}\le 1$, then $x\le 0.9$ and
$$g(x,s)\le g\left(x,\frac{27x^4-16x^3}{6}\right)=\frac{1}{12}(729x^5-864x^4-344x^3+1152x^2-648x)\le 0.$$
(Both of the above results follow by elementary calculus.) This concludes the proof of the case $k'=3$. We obtained that $S(A)$ is maximized by a function with at most two positive $\ell$ values.
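The key inequality $g(x,s)\le 0$ for $0<x,s<1$ can also be checked numerically. The following Python sketch (an illustration, not a substitute for the elementary-calculus argument) evaluates $g$ on a grid over the open unit square:

```python
# Grid check that g(x, s) <= 0 for 0 < x, s < 1; g approaches 0 near (1, 1).
def g(x, s):
    return -50*x**3 + 96*x**2 + (27*s - 54)*x - 16*s - 3*s**2 / x**3

worst = max(g(i / 200, j / 200)
            for i in range(1, 200) for j in range(1, 200))
assert worst <= 0
```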
Now we can assume that $k'\le 2$. Then $A$ is completely defined by the parameters $\int_0^1 \int_0^1 A(x,y) \, \mathrm{d}x \mathrm{d}y=c$ and $t_1=x$. (See Figure \ref{k2}.)
$t_1=\ell_2=x,$
$t_2=\frac{c-x^2}{2x},$
$\ell_1=\frac{c+x^2}{2x}.$
\begin{figure}
\caption{The case $k'=2$}
\label{k2}
\end{figure}
Note that $1-\sqrt{1-c} \le x\le \sqrt{c}$, and $x=1-\sqrt{1-c}$ corresponds to $A_1(c)$, while $x=\sqrt{c}$ corresponds to $A_2(c)$. (See Figure \ref{a1a2}.)
$$S(A)=t_1^2\ell_1^3+t_2^2\ell_2^3+2t_1t_2\ell_1\ell_2^2=x^2 \left(\frac{c+x^2}{2x}\right)^3+ \left(\frac{c-x^2}{2x}\right)^2 x^3 + 2x\left(\frac{c-x^2}{2x}\right)\left(\frac{c+x^2}{2x}\right)x^2=$$
$$\frac{1}{8}\left(\frac{c^3}{x}+9c^2x-cx^3-x^5\right).$$
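This simplification can be sanity-checked numerically; the Python sketch below (with a few illustrative admissible pairs $(c,x)$, i.e.\ $1-\sqrt{1-c}\le x\le\sqrt{c}$) compares the term-by-term sum with the closed form:

```python
# Compare S(A) written out in terms of t_1, t_2, l_1, l_2 with the
# simplified closed form (1/8)(c^3/x + 9 c^2 x - c x^3 - x^5).
def S_terms(c, x):
    l1 = (c + x**2) / (2 * x)    # l_1
    t2 = (c - x**2) / (2 * x)    # t_2; recall t_1 = l_2 = x
    return x**2 * l1**3 + t2**2 * x**3 + 2 * x * t2 * l1 * x**2

def S_closed(c, x):
    return (c**3 / x + 9 * c**2 * x - c * x**3 - x**5) / 8

for c, x in [(0.3, 0.4), (0.5, 0.5), (0.8, 0.6)]:
    assert abs(S_terms(c, x) - S_closed(c, x)) < 1e-12
```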
Using the substitution $y=\frac{x}{\sqrt{c}}$ (where $\frac{1-\sqrt{1-c}}{\sqrt{c}} \le y\le 1$) we get
$$S(A)=\frac{c^{\frac{5}{2}}}{8}\left(\frac{1}{y}+9y-y^3-y^5\right).$$
We want to show that this function takes its maximum at one of the endpoints of its domain. It suffices to show that there exists a real number $0<y_0<1$ such that the function $y\mapsto \frac{1}{y}+9y-y^3-y^5$ is strictly decreasing in $(0,y_0)$ and strictly increasing in $(y_0,1)$.
Differentiating once we get the function $f(y)=-\frac{1}{y^2}+9-3y^2-5y^4$. We need that there is some $0<y_0<1$ such that $f(y)<0$ if $0<y<y_0$ and $f(y)>0$ if $y_0<y<1$. Consider the function $g(y)=f(\sqrt{y})y$. It is obvious that $g$ has the desired property if and only if $f$ has it. Since $g(y)=-5y^3-3y^2+9y-1$, a polynomial of degree 3, this property can be verified for $g$ by elementary calculus. With this, Theorem \ref{thm2} is proved.
\end{proof}
With this, we proved that $\frac{N'}{n^5}\le S(A_G)\le \max(A_1(c), A_2(c))$, finishing the proof of the upper bound. Using the formula (\ref{sa}), we find that $A_1(c)=(1-\sqrt{1-c})^2((c+1)\sqrt{1-c}+c)$ and $A_2(c)=c^{5/2}$. This concludes the proof of Theorem \ref{main}. By plotting these two functions we can conclude that there is some $c_0\approx 0.0865$ such that $A_1(c)\ge A_2(c)$ in $[0,c_0]$ and $A_1(c)\le A_2(c)$ in $[c_0,1]$.
\end{proof}
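The crossover density $c_0$ can be located numerically rather than by plotting; the following Python sketch (an illustration of the plot-based claim) bisects on $A_1(c)-A_2(c)$:

```python
# Locate c0 where A1(c) = A2(c), using the closed forms from the proof.
def A1(c):
    r = (1 - c) ** 0.5
    return (1 - r) ** 2 * ((c + 1) * r + c)

def A2(c):
    return c ** 2.5

lo, hi = 0.05, 0.2          # A1 > A2 at c = 0.05, A1 < A2 at c = 0.2
for _ in range(60):
    mid = (lo + hi) / 2
    if A1(mid) > A2(mid):
        lo = mid
    else:
        hi = mid
c0 = (lo + hi) / 2
assert abs(c0 - 0.0865) < 5e-4   # agrees with the stated c0 ~ 0.0865
```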
\section{Remarks and open questions}
First, we note that the bounds in Theorem \ref{main} are asymptotically sharp.
\begin{remark}
Let $n$ and $e$ be fixed positive integers satisfying $e\le \binom{n}{2}$. Let $c=\frac{2e}{n^2}$. Then there is a simple graph $G_1$ with $n$ vertices and $e$ edges containing at most $\frac{1}{2}c^4n^5$ 4-edge paths. Additionally, there are simple graphs $G_2$ and $G_3$ with $n$ vertices and $e$ edges that contain at least $\frac{1}{2}c^\frac{5}{2}n^5(1-O(n^{-1}))$ and $\frac{1}{2}(1-\sqrt{1-c})^2((c+1)\sqrt{1-c}+c)n^5(1-O(n^{-1}))$ 4-edge paths respectively.
\end{remark}
\begin{proof}
Let $G_1$ be a graph with $n$ vertices and $e$ edges such that the degrees of any two vertices differ by at most 1. (It is well-known that such a graph exists.) Then the degree of any vertex is at most $\frac{2e}{n}+1=cn+1$. So the number of 4-edge paths is at most
$$\frac{1}{2}n(cn+1)(cn)(cn-1)(cn-2)\le\frac{1}{2}n(cn)^4=\frac{1}{2}c^4n^5.$$
Now we show that we can choose the quasi-clique $C_n^e$ for $G_2$. $C_n^e$ contains an $a$-clique, where $a$ is the greatest integer satisfying $\binom{a}{2}\le e$. Therefore $\binom{a+1}{2}\ge e$, implying $a\ge \sqrt{2e}-1$. The number of 4-edge paths in this clique is
$$\frac{1}{2}a(a-1)(a-2)(a-3)(a-4)\ge \frac{1}{2}(2e)^\frac{5}{2}(1-O(e^{-\frac{1}{2}}))\ge \frac{1}{2}c^\frac{5}{2}n^5(1-O(n^{-1})).$$
A similar (but more complicated) calculation gives that we can choose the quasi-star $S_n^e$ for $G_3$.
\end{proof}
We conclude the paper with a few open questions.
\begin{question}
We proved that either the quasi-star or the quasi-clique asymptotically maximizes the number of 4-edge paths in graphs with given edge density. Is it true that this maximum is actually exactly (not just asymptotically) achieved by either the quasi-star or the quasi-clique?
\end{question}
Theorem \ref{ahlkat} states that the above is true for 2-edge paths.
\begin{question}
Is it true for all graphs $H$ that the number of subgraphs isomorphic to $H$ in graphs with given edge density is (asymptotically) maximized by either the quasi-star or the quasi-clique?
\end{question}
It is true for 4-edge paths and $k$-edge stars, when $2\le k \le 30$ (see Theorem \ref{starthm}). When $H$ is a graph having a spanning subgraph that is a vertex-disjoint union of edges and cycles, only the quasi-clique comes into play (see Theorem \ref{alonthm}).
\begin{question}
Is it true that for every graph $H$, there is a constant $c_H<1$ such that among graphs with $n$ vertices and edge density $c>c_H$, the number of subgraphs isomorphic to $H$ is (asymptotically) maximized by the quasi-clique?
\end{question}
\noindent
{\bf Acknowledgement} I would like to thank Gyula O.H. Katona for his help with the creation of this paper.
\end{document}
\begin{document}
\journal{Information and Computation}
\begin{frontmatter}
\title{Model Checking Markov Population Models by Stochastic Approximations}
\author[TS,SA]{Luca Bortolussi}
\address[TS]{DMG, University of Trieste, Trieste, Italy.}
\address[SA]{MOSI, Department of Computer Science, Saarland University, Saarbr\"ucken, Germany.}
\ead{[email protected]}
\author[LU]{Roberta Lanciani}
\author[LU]{Laura Nenzi}
\address[LU]{IMT, Lucca, Italy.}
\ead{[email protected]}
\ead{[email protected]}
\maketitle
\begin{abstract}
Many complex systems can be described by population models, in which a pool of agents interacts and produces complex collective behaviours. We consider the problem of verifying formal properties of the underlying mathematical representation of these models, which is a Continuous Time Markov Chain, often with a huge state space.
To circumvent the state space explosion, we rely on stochastic approximation techniques, which replace the large model by a simpler one, guaranteed to be probabilistically consistent. We show how to efficiently and accurately verify properties of random individual agents, specified by Continuous Stochastic Logic extended with Timed Automata (CSL-TA), and how to lift these specifications to the collective level, approximating the number of agents satisfying them using second or higher order stochastic approximation techniques.
\end{abstract}
\begin{keyword}
Stochastic Model Checking \sep Fluid Model Checking \sep Stochastic Approximation \sep Moment Closure \sep Linear Noise \sep Population Models \sep Maximum Entropy.
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:intro}
Many real-life examples of large complex systems, ranging from (natural) biochemical pathways to (artificial) computer networks, exhibit \textit{collective behaviours}. These global dynamics are the result of intricate interactions between the large number of individual entities that comprise the populations of these systems. Understanding, predicting and controlling these emergent behaviours is becoming an increasingly important challenge for the scientists of the modern era. In particular, the development of an efficient and well-founded mathematical and computational modelling framework is essential to master the analysis of such complex collective systems.
In the Formal Methods community, powerful automatic verification techniques have been developed to validate the performance of a model of a system. In such \textit{model checkers} \cite{mcbook}, the model and a property of interest are given in input to an algorithm which verifies whether or not the requirement is satisfied by the representation of the system.
As the dynamics of a collective system is intrinsically subject to noisy behaviours, especially when the population is not very large, the formal analysis and verification of a collective system have to rely on appropriate \textit{Stochastic Model Checking} techniques. For instance, in \cite{ctmcmc}, Continuous Stochastic Logic formulae are checked against models of the system expressed as Continuous Time Markov Chains (CTMC, \cite{norris}), which are a natural mathematical framework for population models. These approaches are based on an
exhaustive exploration of the state space of the model, which limits their practical use, due to \textit{state space explosion}: when the number of interacting agents in the population increases, the number of states of the underlying CTMC quickly reaches astronomical values.
To deal with this problem, some of the most successful applications of Stochastic Model Checking to large population models are based on statistical analysis \cite{prism,jha2009statistical,bortolussi2016smoothed}, which still remains costly from a computational point of view, because of the need to run simulation algorithms a large number of times.
In this work, we take a different approach, exploiting a powerful class of methods to accurately approximate the dynamics of the individuals and the population, that goes under the name of \emph{Stochastic Approximations} \cite{tutorial}.
\vspace{3mm}
\noindent\textbf{Related Work.}
Stochastic approximation methods have been successfully used in the computational biochemistry community \cite{grima2010, vankampen} to approximate the noisy behaviour of collective systems by a simpler process whose behaviour can be extracted by solving a (numerically integrable) set of \textit{Differential Equations} (DEs), resulting in a fast and easy way of obtaining an estimation of the dynamics of the model.
Moreover, for almost all the techniques that we are going to consider in this work, the quality of the estimations improves as the number of agents in the system increases, keeping constant the computational cost and reaching exactness in the limit of an infinite population. In this way, such approximation methods actually take advantage of the large sizes of the collective systems, making them a fast, accurate and reliable approach to deal with the curse of the state space explosion. Among the many types of Stochastic Approximations present in the literature, we are going to exploit the \textit{Fluid Approximation} (FA) \cite{tutorial,fluidmc}, the \textit{Central Limit Approximation} (CLA) \cite{vankampen, kurtz}, and the \textit{System Size Expansion} (SSE) \cite{schnoerr2016approximation}. We are also going to use \textit{Moment Closure} (MC) \cite{schnoerr2016approximation} combined with distribution reconstruction techniques based on the \textit{Maximum Entropy} principle \cite{andreychenko2015model}.
Stochastic Approximations entered the model checking scene only recently. Pioneering work focussed on checking CSL properties \cite{concur12,fluidmc,bertinoro13} or deterministic automata specifications \cite{HaydenTCS, HaydenTSE} for a single random individual in a population. Following this line of work, more complex individual properties have been considered, in particular rewards \cite{qapl15} and timed automata with one clock \cite{formats15}.
Another direction of integration of stochastic approximations and model checking is related to the so called local-to-global specifications \cite{qest}, in which individual properties, specified by timed automata (with some restrictions), are lifted at the collective level by counting how many agents satisfy a local specification. This lifting is obtained by applying the CLA to approximate the distribution of agents \cite{qest} or by moment closure to obtain bounds \cite{HaydenTCS, HaydenTSE}. A simpler approach, focussing on expected values at the collective level, is \cite{anjaL2G}.
Finally, stochastic approximation has been used also to approximate global reachability properties, either exploiting central limit results for hitting times \cite{epew}, or by a clever discretisation of the Gaussian processes obtained by the CLA \cite{qest16}.
\vspace{3mm}
\noindent\textbf{Contributions.}
In this paper, we start from the approach of \cite{qest} for the approximation of satisfaction probabilities of local-to-global properties, and extend it in several directions:
\begin{itemize}
\item We extend fluid model checking \cite{fluidmc} to a subset of CSL-TA \cite{cslta}, a logic specifying temporal properties by means of Deterministic Timed Automata (DTA) with a single clock. We consider in particular DTAs in which the clock is never reset, and provide a model checking algorithm also for nested formulae, leveraging fluid approximation.
\item We lift CSL-TA properties to the collective level, exploiting the central limit approximation, thus extending the approach of \cite{qest} to a more complex set of properties. We also remove some restrictions on the class of models considered with respect to those discussed in \cite{qest}.
\item We extend both \cite{qest} and \cite{fluidmc} by showing how to effectively use higher order approximations to correct for finite size effects. This requires integrating, within the model checking framework, either moment closure or higher-order SSE \cite{schnoerr2016approximation}, together with maximum entropy distribution reconstruction \cite{andreychenko2015model}.
\end{itemize}
Throughout the paper, we make use of a simple but instructive running example of an epidemic model to illustrate the presented techniques.
\vspace{3mm}
\noindent\textbf{Paper Structure.}
The paper is organised as follows. In Section \ref{sec:popmod}, we introduce the class of models we consider, and in Section \ref{sec:property}, the property specification language. Section \ref{sec:cla} contains an introduction on stochastic approximation techniques. Section \ref{sec:checkCSLTA} shows how to model check local properties described by CSL-TA, while Section \ref{sec:modelchecking} deals with local-to-global properties. Conclusions are drawn in Section \ref{sec:conc}. The appendix contains novel proofs.
\section{Markov Population Models}
\label{sec:popmod}
In this section, we introduce a formalism to specify \emph{Markovian Population Models}. These models consist of typically large collections of interacting components, or \textit{agents}.
Each component is a finite state machine, which can change internal state by interacting with other agents or with the environment. Agents can be of different kinds or classes. Interactions are described by specifying which kinds of agents participate in the interaction and the rate at which it happens. The rate is a function of the collective state of the model, i.e.\ of the counts of agents in each state. This information, together with an initial state, describes the full population model and defines a Markov chain in continuous time.
More specifically, an agent class $\mathscr{A}$ defines its (finite) state space and its (finite) set of \textit{local} transitions. In the following, the descriptor '\textit{local}' refers to the fact that we are formalizing the model at the agent level, whereas we use '\textit{global}' to define state spaces and transitions when modelling the entire population.
\begin{definition}[Agent class]
\label{class}
An \emph{agent class} $\mathscr{A}$ is a pair $(S, E)$ where $S = \{s_1, \ldots, s_n\}$ is the state space of the agent and $E = \{\loctr{1}, \ldots, \loctr{m}\}$ is the set of local transitions of the form $\loctr{i} = \locarr{i}$, $i \in \{1, \ldots, m\}$, where $\loclab{i}$ is the transition label, taken from the label set $\mathscr{L}$.
\end{definition}
The label $\loclab{i}$ of a local transition $\loctr{i} = \locarr{i}$ may not be unique. Without loss of generality, we require that two local transitions having the same initial and final states must have different labels (if not, just rename labels).
An agent belonging to the class $\mathscr{A}_h = (S_h, E_h)$ is identified by a random variable $Y_h(t) \in S_h$, denoting the state of the agent at time $t$, and by the initial state $Y_h(0) \in S_h$.
\begin{figure}
\caption{ The automaton representation of a network node.}
\label{SIRagent}
\end{figure}
\noindent\textbf{Running Example.}
In order to illustrate the definitions and the techniques of the paper, we consider a simple example of a worm epidemic in a peer-to-peer network composed of $N$ nodes (see e.g.\ \cite{meanfieldP2P} for mean field analysis of network epidemics). Each node is modelled by the simple agent shown in Figure \ref{SIRagent}, which has three states: susceptible to infection (S), infected (I), and patched/immune to infection (R).
The contagion of a susceptible node can occur due to an event external to the network ($ext$), like the reception of an infected email, or by file sharing with an infected node within the network ($inf$). Nodes can also be patched, at different rates, depending if they are infected ($patch_1$) or not ($patch_0$). A patched node remains immune from the worm for some time, until immunity is lost ($loss$), modelling for instance the appearance of a new version of the worm.
The agent class $\mathscr{A}_{node} = (S_{node}, E_{node})$ of the network node can be easily reconstructed from the automaton representation in Figure \ref{SIRagent}: its local states are $S_{node} = \{s_S,s_I,s_R\}$, which we also denote as $\{S,I,R\}$, and its local transitions are $E_{node} =\{S\xrightarrow{inf} I, S\xrightarrow{ext} I, I\xrightarrow{inf} I, I\xrightarrow{patch_1} R, S\xrightarrow{patch_0} R, R\xrightarrow{loss} S \}$.
In the following, without loss of generality, we consider populations of $N$ agents $Y^{(N)}_k$, $k \in \{1, \ldots, N\}$, all belonging to the same class $\mathscr{A} = (S, E)$ with $S = \{s_1, \ldots, s_n\}$. All the results we will present in the following hold for models with multiple classes of agents, at the price of keeping an extra index to identify the class of each agent.
We further make the classical assumption that agents in the same state are indistinguishable, hence the state of the population model can be described by \textit{collective} or \textit{counting variables} $\mathbf{X}^{(N)} = (X^{(N)}_{1}, \ldots, X^{(N)}_{n})$, $X^{(N)}_j \in \{0, \ldots, N\}$, defined by $X^{(N)}_{j} = \sum^{N}_{k = 1} \mathds{1}\{Y^{(N)}_{k} = j\}$.
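As a concrete illustration of the counting abstraction, the following Python sketch (with made-up agent states for a tiny population) derives the collective state from the individual states:

```python
# Counting variables: X_j = number of agents currently in local state j.
states = ["S", "I", "R"]
agents = ["S", "S", "I", "R", "S", "I"]      # Y_k for N = 6 agents (made up)
X = tuple(agents.count(s) for s in states)   # (X_S, X_I, X_R)
assert X == (3, 2, 1)
assert sum(X) == len(agents)                 # conservation: sum_j X_j = N
```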
The initial state $\mathbf{x}_0^{(N)}$ is given by $\mathbf{x}_0^{(N)} = \mathbf{X}^{(N)}(0)$, and the counting variables satisfy the conservation relation $\sum_{j \in S} X^{(N)}_{j} = N$. To complete the definition of a population model, we need to specify its \emph{global} transitions, describing all the possible events that can change the state of the system.
\begin{definition}[Population model]
\label{populationModel}
A \textit{population model} $\mathcal{X}^{(N)}$ of size $N$ is a tuple $\mathcal{X}^{(N)} = (\mathscr{A}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$, where:
\begin{itemize}
\item $\mathscr{A}$ is an agent class, as in Definition \ref{class};
\item $\mathcal{T}^{(N)} = \{ \tau_1, \ldots, \tau_\ell \}$ is the set of global transitions of the form $\tau_i = (\syncset{i}, f^{(N)}_i)$, where:
\begin{itemize}
\item $\syncset{i} = \{ s_1 \xrightarrow{\loclab{1}} s'_1, \ldots, s_p \xrightarrow{\loclab{p}} s'_p \}$ is the (finite) set of local transitions synchronized by $\tau_i$;
\item $f^{(N)}_{i}: \mathds{R}^n \longrightarrow \mathds{R}_{\geq 0}$ is the (Lipschitz continuous) global rate function.
\end{itemize}
\item$\mathbf{x}_0^{(N)}$ is the initial state.
\end{itemize}
\end{definition}
The rate $f^{(N)}_i$ gives the expected frequency of transition $\tau_i$ as a function of the state of the system. We assume $f^{(N)}_i$ equal to zero if there are not enough agents available to perform the transition. The synchronization set $\syncset{i}$, instead, specifies how many agents are involved in the transition $\tau_i$ and how they change state: when $\tau_i$ occurs, we see the local transitions $s_1 \xrightarrow{\loclab{1}} s'_1, \ldots, s_p \xrightarrow{\loclab{p}} s'_p$ fire at the (local) level of the $p$ agents involved in $\tau_i$.
\begin{figure}
\caption{Global transitions of the network epidemic model}
\label{fig:SIRtransitions}
\end{figure}
\begin{example} \label{ex:popmodel}
The population model $\mathcal{X}^{(N)}_{net} = (\mathscr{A}_{node}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$ for the epidemic example has population variables $\mathbf{X} = (X_S, X_I, X_R)$. The initial conditions $\mathbf{x}_0^{(N)}$ are simply a network of susceptible nodes, $\mathbf{x}_0^{(N)} = (N,0,0)$. The set $\mathcal{T}^{(N)}$, instead, specifies five global transitions: $\tau_{ext}, \tau_{loss}, \tau_{patch_0}, \tau_{patch_1}, \tau_{inf} \in \mathcal{T}^{(N)}$, detailed in Figure~\ref{fig:SIRtransitions}. \\
As an example, consider the transition $\tau_{ext}$ encoding the external infection. Its synchronisation set $\{S \xrightarrow{ext} I\}$ specifies that only one susceptible node is involved and changes state from $S$ to $I$ at a rate given by $f^{(N)}_{ext}(\mathbf{X} ) = \kappa_{ext} X_S$, corresponding to a rate of infection $\kappa_{ext}$ per node.
The transitions $\tau_{loss}, \tau_{patch_0}, \tau_{patch_1}$ have a similar format, while the internal infection $\tau_{inf}$ involves one $S$-node and one $I$-node, having synchronization set $\{I \xrightarrow{inf} I, S \xrightarrow{inf} I\}$. Furthermore, we assume that an infected node sends infectious messages at rate $\kappa_{inf}$ to a random node, giving a classical density dependent rate function $f^{(N)}_{inf}(\mathbf{X} ) = \frac{1}{N} \kappa_{inf} X_S X_I$ \cite{epidbook}.
\end{example}
\begin{remark}
\label{rem:restrictions}
The population models of Definition \ref{populationModel} have one main restriction: the population size is constant.
This limitation can be removed, as the approximation techniques we will exploit do not rely on it. However, extra care has to be taken when treating local properties, as discussed in \cite{fluidmc}.
\end{remark}
Given a population model $\mathcal{X}^{(N)} = (\mathscr{A}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$ and a global transition $\mathbf{v}r{t}u = (\syncset{\mathbf{v}r{t}u}, f^{(N)}_{\mathbf{v}r{t}u}) \in \mathcal{T}^{(N)}$ with $\syncset{\mathbf{v}r{t}u} = \{ s_1 \mathbf{x}rightarrow{\loclab{1}} s'_1, \ldots, s_p \mathbf{x}rightarrow{\loclab{p}} s'_p \}$, we encode the net change in $\mathbf{X}^{(N)}$ due to $\mathbf{v}r{t}u$ in the {\it update vector} $\textbf{v}_\mathbf{v}r{t}u= \sum_{i = 1}^{p} (\textbf{e}_{s'_i} - \textbf{e}_{s_i})$,
where
$\textbf{e}_{s_i}$ is the vector that is equal to $1$ in position $s_i$ and zero elsewhere.
The CTMC $\mathbf{X}^{(N)}(t)$ associated with $\mathcal{X}^{(N)}$ has state space $\mathcal{S}^{(N)} = \{ (z_1, \ldots, z_{n}) \in \mathbb{N}^{n}\ \lvert\ \sum^{n}_{i = 1} z_i = N\}$, initial probability distribution concentrated on $\mathbf{x}_0^{(N)}$, and \textit{infinitesimal generator matrix} $\textbf{Q}$ defined for $\mathbf{v}r{x},\mathbf{v}r{x'}\in\mathcal{S}^{(N)}$, $\mathbf{v}r{x} \neq \mathbf{v}r{x'}$, by
\[
q_{\mathbf{v}r{x}, \mathbf{v}r{x'}} = \sum_{\mathbf{v}r{t}u \in \mathcal{T}^{(N)} \,|\, \textbf{v}_\mathbf{v}r{t}u = \mathbf{v}r{x'} - \mathbf{v}r{x}} f^{(N)}_\mathbf{v}r{t}u(\mathbf{v}r{x}).
\]
Equipped with this definition, we can analyse a model either by numerical integration of the Kolmogorov equations of the CTMC \cite{norris}, or by relying on stochastic simulation and statistical analysis of the sampled trajectories \cite{prism,jha2009statistical}. The first approach is not feasible for large populations ($N$ large), due to the severe state space explosion of this class of models. The second approach scales better, but is still computationally intensive, requiring many simulations, whose cost typically scales (linearly) with $N$.
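To make the simulation route concrete, the following is a minimal sketch of Gillespie's stochastic simulation algorithm for the epidemic population model. The rate constants \texttt{K\_EXT}, \texttt{K\_LOSS}, \texttt{K\_INF} are illustrative assumptions (not values from the paper), and only three of the five global transitions are included:

```python
import random

# Illustrative sketch: Gillespie SSA for a reduced epidemic population model.
# Parameters and the choice of transitions are assumptions for illustration.
K_EXT, K_LOSS, K_INF = 0.01, 0.05, 0.4

def transitions(state, N):
    """Return (rate, update vector) pairs for the global transitions."""
    xs, xi, xr = state
    return [
        (K_EXT * xs,          (-1, +1, 0)),   # ext:  S -> I (external infection)
        (K_LOSS * xi,         (0, -1, +1)),   # loss: I -> R (node patched)
        (K_INF * xs * xi / N, (-1, +1, 0)),   # inf:  S + I -> I + I (internal)
    ]

def ssa(N, t_max, seed=1):
    random.seed(seed)
    state, t, traj = (N, 0, 0), 0.0, []
    while t < t_max:
        trs = [(r, v) for r, v in transitions(state, N) if r > 0]
        total = sum(r for r, _ in trs)
        if total == 0:          # no enabled transition: absorbing state
            break
        t += random.expovariate(total)          # time to next event
        u, acc = random.uniform(0, total), 0.0  # pick event proportionally
        for r, v in trs:
            acc += r
            if u <= acc:
                state = tuple(x + dx for x, dx in zip(state, v))
                break
        traj.append((t, state))
    return traj

traj = ssa(N=100, t_max=10.0)
```

Each sampled trajectory keeps the population size constant (updates sum to zero), matching the restriction of Remark \ref{rem:restrictions}.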
\section{Individual and Collective Properties}
\label{sec:property}
We now introduce the class of properties considered in the paper. We distinguish two levels of properties: local properties, describing the behaviour of individual agents; and global properties, representing the collective behaviour of
agents with respect to a local property of interest.
In this respect, our approach is similar to \cite{anjaL2G,HaydenTCS}.
Local properties are expressed in terms of a temporal logic. To improve the expressiveness of the specifications, we go beyond CSL, as used in \cite{fluidmc,anjaL2G}, and consider CSL-TA \cite{cslta}, an extension of CSL in which path properties are specified by Deterministic Timed Automata (DTA) with a single clock. To rely on fluid approximation techniques, we consider here DTA in which the clock always refers to the global time, i.e.\ it cannot be reset. DTA are used to specify time-bounded properties, and we consistently restrict to time-bounded quantification in the CSL layer. This is justified because dealing with steady state properties is always problematic in the context of fluid approximation; see \cite{fluidmc,tutorial,HaydenTCS} for further discussion on this point.
The global property layer allows us to specify queries about the fraction of agents that satisfy a given local specification. In particular, given a path or a state formula $\phi$, we want to compute the probability that the fraction of agents satisfying $\phi$ is smaller or larger than a threshold $\Sigmaha$. This is captured by appropriate operators, that can be then combined to specify more complex global queries, as in \cite{anjaL2G}.
Let us fix a population model composed of agents from a class $\mathscr{A} = (S, E)$. Path properties are specified by a \textit{1-global-clock Deterministic Timed Automata} (1gDTA). The edges of the 1gDTA are labelled by a triple composed of: an action name, taken from the set $\mathscr{L}$ of transition names of the population model; a boolean formula, interpreted on the states $S$ of agent $\mathscr{A}$; and a clock constraint, specifying when the transition can fire. The use of actions and formulae to label edges is similar to asCSL \cite{ascsl}, and deviates from the original definition of CSL-TA \cite{cslta}. The intended meaning is that a transition $s \mathbf{x}rightarrow{\Sigmaha} s'$ matches an edge with label $\Sigmaha,\phi$ in the 1gDTA if and only if the action name $\Sigmaha$ is the same and the state $s$ satisfies the formula $\phi$. This allows the specification of more complex path properties
and provides a mechanism to nest CSL-TA specifications.
Let us call $\Gamma_{\pstsp}$ this set of \textit{state propositions on} $S$, and call $\mathcal{B}(\Gamma_{\pstsp})$ the set of boolean combinations over $\Gamma_{\pstsp}$, denoting with $\models_{\Gamma_{\pstsp}}$ the satisfaction relation over $\mathcal{B}(\Gamma_{\pstsp})$ formulae, defined in the standard way. We use the letter $\phi$ to range over formulae in $\mathcal{B}(\Gamma_{\pstsp})$.
We consider a single clock variable $x \in \mathds{R}_{\geq 0}$, called the \textit{global clock}, which is never reset. Let $\mathcal{V}$ be the set of \textit{valuations}, i.e.\ functions $\eta: \{x\} \longrightarrow \mathds{R}^{\geq 0}$ that assign a nonnegative real value to the global clock $x$, and let $\mathcal{C}\mathcal{C}$ be the set of \textit{clock constraints}, which are boolean combinations of basic clock constraints of the form $x \leq a$, where $a \in \mathds{Q}^{\geq 0}$. Finally, we write $\eta(x) \models_{\mathcal{C}\mathcal{C}} c$ if and only if $c \in \mathcal{C}\mathcal{C}$ is satisfied when its clock variable takes the value $\eta(x)$. We are now ready to define 1gDTA.
\begin{definition}[1-global-clock Deterministic Timed Automaton]
\label{def:1gDTA}
A 1gDTA is specified by the tuple $\mathbb{D} = (\mathscr{L}, \Gamma_{\pstsp}, Q, q_0, F, \rightarrow)$ where $\mathscr{L}$ is the label set of $\mathscr{A}$; $\Gamma_{\pstsp}$ is the set of atomic state propositions; $Q$ is the (finite) set of states of the DTA, with initial state $q_0 \in Q$; $F \subseteq Q$ is the set of final (or accepting) states; and $\rightarrow \subseteq Q \times \mathscr{L} \times \mathcal{B}(\Gamma_{\pstsp}) \times \mathcal{C}\mathcal{C} \times Q$ is the edge relation, where $(q, \Sigmaha, \phi, c, q') \in \rightarrow$ is usually denoted by $q \mathbf{x}rightarrow{\Sigmaha, \phi, c}q'$. The edges of $\mathbb{D}$ further satisfy:
\begin{enumerate}
\item (\emph{determinism}) for each $\Sigmaha\in\mathscr{L}$, $s\in S$ and clock valuation $\eta(x)\in\mathbb{R}_{\geq 0}$, there is exactly one edge $q \mathbf{x}rightarrow{\Sigmaha, \phi, c}q'$ such that $s\models_{\Gamma_{\pstsp}}\phi$ and $\eta(x) \models_{\mathcal{C}\mathcal{C}} c$.
\item (\emph{absorption} of final states) all final states $q\in F$ are absorbing, i.e. all transitions starting from a final state are self-loops.
\end{enumerate}
\end{definition}
When writing a 1gDTA, we do not want to specify all possible transitions explicitly. Hence, we assume that all non-specified edges are self-loops on the automaton state.
Specifically, given $\Sigmaha$, $s$, and $\eta(x)$, if there is no specified edge from state $q$ with label $\Sigmaha$, formula satisfied by $s$, and clock constraint satisfied by $\eta(x)$, then we assume the existence of an edge looping on $q$ and satisfying all these conditions.
The determinism condition of Definition \ref{def:1gDTA} can be easily enforced by considering 1gDTA with additional restrictions on the edges, using only formulae $\phi_s$, $s\in S$, which are true only in state $s$, and requiring that any two transitions with the same label $\Sigmaha$ and formula $\phi_s$ have mutually exclusive clock constraints. We call these 1gDTA \emph{explicitly deterministic}. All examples of properties in this paper will be of this kind, and it is easy to see that any 1gDTA can be converted into an equivalent explicitly deterministic one, by duplicating edges and splitting state formulae and clock constraints appropriately.
\begin{example} \label{ex:property}
As an example, consider the agent class of the network epidemics shown in Figure \ref{SIRagent}, and the 1gDTA specification of Figure \ref{DTAprop} (a),
where the formula $\phi_S$ is true in local state $S$ and false in states $I$ and $R$. The property is satisfied when a susceptible node is infected by an internal infection after the first $\mathbf{v}r{t}u$ units of time. The sink state $q_b$ is used to discard agents being infected before $\mathbf{v}r{t}u$ units of time. The use of the state formula $\phi_S$ allows us to focus only on agents that get infected, rather than also on agents that spread the contagion.
More complex properties can be described, such as the 1gDTA specification of Figure \ref{DTAprop} (b). This automaton describes the fact that an agent is infected by an internal contact twice, the first infection happening between time 1 and 2, and the second before time 4. Here too, the sink state $q_b$ is used to discard agents infected for the first time before time 1.
\end{example}
\begin{figure}
\caption{The 1gDTA specifications discussed in Example \ref{ex:property}}
\label{step0}
\label{step1}
\label{DTAprop}
\end{figure}
A run $\rho$ of a 1gDTA $\mathbb{D}$ is a sequence of states of $Q$, actions and times taken to change state, $q_0\mathbf{x}rightarrow{\Sigmaha_0,t_0}q_1\mathbf{x}rightarrow{\Sigmaha_1,t_1}\ldots q_n$, such that clock constraints are satisfied.
A run is accepting if $q_n\in F$.
Consider now a population model $\mathcal{X}^{(N)}$, and focus on a single individual agent
of class $\mathscr{A}$ in the population. A path $\sigma$ of length $n$ for such an agent is a sequence of the form $s_0 \mathbf{x}rightarrow{\Sigmaha_0,t_0} s_1 \mathbf{x}rightarrow{\Sigmaha_1,t_1} s_2 \mathbf{x}rightarrow{\Sigmaha_2,t_2} \ldots s_n$, where $s_i \in S$, $t_i\in \mathbb{R}_{\geq 0}$ is the time spent in the local state $s_i$, and $\Sigmaha_i$ is the action taken at step $i$. The set of such paths will be denoted by $Path^n[\mathscr{A}]$ and the set of paths of finite length by $Path^*[\mathscr{A}]$. Given $\sigma$, we let $\mathbf{v}r{t}u[\sigma] = \sum_{i=0}^{|\sigma|-1} t_i$ be the total time taken to go from state $s_0$ to state $s_n$, and $\mathbf{v}r{t}u_i[\sigma]$ the time taken to reach state $s_i$.
The set of paths of total duration equal to $T\in\mathbb{R}_{\geq 0}$ is denoted by $Path^T[\mathscr{A}]$.
Given a path $\sigma$ of length $n$, we define the run $\rho_\sigma$ of a 1gDTA $\mathbb{D}$ induced by $\sigma$ to be $q_0\mathbf{x}rightarrow{\Sigmaha_0,t_0}q_1\mathbf{x}rightarrow{\Sigmaha_1,t_1}\ldots q_n$, where state $q_{i+1}$ is determined by the unique transition $q_i \mathbf{x}rightarrow{\Sigmaha_i,\phi,c} q_{i+1}$ such that $s_i \models_{\Gamma_{\pstsp}} \phi$ and $\mathbf{v}r{t}u_{i+1}[\sigma] \models_{\mathcal{C}\mathcal{C}} c$. If $\rho_\sigma$ is accepting, we write $\sigma \models \mathbb{D}$.
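The induced run and the acceptance check can be sketched as follows. The edge encoding, the helper \texttt{run\_dta}, and the time threshold \texttt{TAU} are illustrative assumptions; unspecified edges are treated as implicit self-loops, as described above, and clock constraints are evaluated on the total elapsed time when each action fires:

```python
# Illustrative sketch: running a 1gDTA (single, never-reset global clock)
# over an agent path.  Edges are (source, action, state_formula,
# clock_constraint, target); non-matching steps default to self-loops.

def run_dta(edges, q0, final, path):
    """path: list of (state, action, sojourn_time) triples."""
    q, clock = q0, 0.0
    for s, a, t in path:
        clock += t  # global clock value when this action fires
        for (src, act, phi, cc, tgt) in edges:
            if src == q and act == a and phi(s) and cc(clock):
                q = tgt
                break
        # no matching edge: implicit self-loop, q is unchanged
    return q in final

# Property in the spirit of Example ex:property (a): an S-node is infected
# by an internal contact after time TAU (TAU = 2.0 assumed for illustration).
TAU = 2.0
phi_S = lambda s: s == "S"
edges = [
    ("q0", "inf", phi_S, lambda x: x > TAU,  "qf"),  # late infection: accept
    ("q0", "inf", phi_S, lambda x: x <= TAU, "qb"),  # early infection: sink
]
ok  = run_dta(edges, "q0", {"qf"}, [("S", "inf", 3.0)])  # after TAU
bad = run_dta(edges, "q0", {"qf"}, [("S", "inf", 1.0)])  # before TAU -> q_b
```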
Given a 1gDTA $\mathbb{D}$, we indicate it with $\mathbb{D}[\phi_1,\ldots,\phi_k]$ when we want to explicitly list all the atomic propositions $\Gamma_{\pstsp}$ used to build the state propositions $\mathcal{B}(\Gamma_{\pstsp})$. This will be needed to define the local logic CSL-TA.
\begin{definition}[CSL-TA]
\label{def:CSLTA}
A CSL-TA formula $\mathbf{P}hi$ on an agent class $\mathscr{A}$ is defined recursively as
\[ \mathtt{true}~|~a~|~\neg\mathbf{P}hi~|~\mathbf{P}hi_1\wedge\mathbf{P}hi_2~|~\mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\mathbf{P}hi_1,\ldots,\mathbf{P}hi_k] \right),\]
where $a$ is an atomic proposition interpreted on $S$, $T\in \mathbb{R}_{\geq 0}$, $p\in[0,1]$, $\bowtie \in \{<,\leq,\geq,>\}$, and $\mathbb{D}[\mathbf{P}hi_1,\ldots,\mathbf{P}hi_k]$ is a 1gDTA with atomic formulae taken to be CSL-TA formulae $\mathbf{P}hi_1,\ldots,\mathbf{P}hi_k$.
\end{definition}
This definition is similar to \cite{cslta}, the only differences being the use of a restricted class of DTA and the time bound $T$ on the probability operator. The satisfaction relation is defined relative to a state $s\in S$ of an individual agent $Y(t)$ in $\mathcal{X}^{(N)}$ of class $\mathscr{A}$ and an initial time $t_0$. The only interesting case is the one involving 1gDTA specifications, which requires a 1gDTA path property to hold with probability satisfying the bound $\bowtie p$:
\[ s,t_0 \models \mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\mathbf{P}hi_1,\ldots,\mathbf{P}hi_k] \right)\ \text{iff}\ \mathbb{P}\{ \sigma\in Path^T[\mathscr{A}] ~|~\sigma\models \mathbb{D}[\mathbf{P}hi_1,\ldots,\mathbf{P}hi_k] \} \bowtie p \]
We now turn to global properties. Here, we consider basic properties that lift local specifications to the collective level, looking at the fraction/number of agents in the population that satisfy a local specification, given either as a CSL-TA state formula $\mathbf{P}hi$ or as a path property specified by a 1gDTA $\mathbb{D}$. More specifically, we ask whether the fraction/number of agents satisfying $\mathbb{D}$ (resp.\ $\mathbf{P}hi$) is included in the interval $[a,b]$, which we denote by $\mathbb{D} \in [a,b]$. This is a random event, hence we need to compute its probability. In atomic global properties, we will compare this probability with a given threshold. Therefore, we have two kinds of global atomic properties:
\begin{description}
\item[Path properties:] $\gp{\gprop{\mathbb{D}}{a}{b}}{}{\bowtie p}$: the probability that the fraction/number of agents that satisfy the local path property $\mathbb{D}$ is in the interval $[a,b]$ is $\bowtie p$, for $\bowtie\in\{<,\leq,\geq,>\}$;
\item[State properties:] $\gp{\gprop{\mathbf{P}hi}{a}{b}}{}{\bowtie p}$: the probability that the fraction/number of agents that satisfy the local state property $\mathbf{P}hi$ is in the interval $[a,b]$ is $\bowtie p$, for $\bowtie\in\{<,\leq,\geq,>\}$.
\end{description}
Both properties above can be checked at any starting time $t_0$.
Atomic global properties are then combined by boolean operators to define more expressive queries. We therefore define a \emph{collective or global property} as follows.
\begin{definition}
\label{def:globalProp}
Given a population model $\mathcal{X}^{(N)}$, a collective/global property on $\mathcal{X}^{(N)}$ is given by the following syntax:
\[ \mathbf{P}si = \mathtt{true}~|~\gp{\gprop{\mathbb{D}}{a}{b}}{\bowtie p}~|~\gp{\gprop{\mathbf{P}hi}{a}{b}}{\bowtie p}~|~\neg\mathbf{P}si~|~\mathbf{P}si_1\wedge\mathbf{P}si_2\]
\end{definition}
\begin{example}
As an example, consider again the 1gDTA property $\mathbb{D}$ of Figure \ref{DTAprop}(b). Then the atomic global property $\gp{\gpropl{\mathbb{D}}{\frac{N}{3}}}{}{\geq 0.8}$ specifies that, with probability at least 0.8, no more than one third of the network nodes are infected twice by an internal contact within 4 time units, with the first infection happening between time 1 and 2.
\end{example}
\begin{remark} \label{rem:timeDependentDTA}
In Definition \ref{def:CSLTA}, we allow the arbitrary nesting of CSL properties within 1gDTA. As discussed in \cite{fluidmc}, this operation requires some care when we want to apply fluid approximations to estimate probabilities. The problem is that individual agents are time-dependent non-Markovian processes (in fact, they are projections, i.e.\ marginal distributions, of a Markov process, the global model), for which the satisfaction of a CSL-TA formula involving the probability quantifier depends on the initial time at which the formula is evaluated. Hence, the satisfaction of a CSL-TA formula is a time-dependent function, while 1gDTA require time-independent state formulae. This discrepancy can be reconciled by encoding the time dependency in the 1gDTA itself, using clock constraints. For instance, a state formula that is true in $s$ up to time 5 and false afterwards will give rise to two edges in the 1gDTA: the first with a state formula in which $s$ is true and the additional clock constraint $x\leq 5$, the second with a state formula falsified by $s$ and the additional clock constraint $x>5$.
More specifically, we consider a family $\phi_t$ of boolean state propositions $\Gamma_{\pstsp}$, indexed by a time index $t\in[0,T]$, whose satisfaction value can change a finite number of times up to time $T$. This means that the interval $[0,T]$ of interest can be partitioned into a finite interval cover $[0,t_1)$, $[t_1,t_2)$,\ldots, $[t_n,T]$, such that the satisfaction of $\phi_t$ in each $[t_i,t_{i+1})$ is constant, meaning that for each state $s$ and times $t,t'\in [t_i,t_{i+1})$, $s\models \phi_t$ if and only if $s\models \phi_{t'}$. To reduce such automata to 1gDTA, the idea is to replace the edge $q \mathbf{x}rightarrow{\Sigmaha, \phi_t, c}q'$ with a collection of edges $\{q \mathbf{x}rightarrow{\Sigmaha, \phi_{t_i}, c \wedge (t_i \leq x < t_{i+1})}q'~|~i=0,\ldots,n\}$. This operation will be referred to in the following as \emph{structural resolution of time-varying properties}.
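The structural resolution step can be sketched operationally as follows; the edge representation and the helper \texttt{resolve} are illustrative assumptions, and the example encodes a formula true in state $S$ up to time 5 and false afterwards:

```python
# Illustrative sketch: structural resolution of a time-varying state
# formula phi_t, piecewise constant on [0,t1), [t1,t2), ..., into one
# clock-constrained edge per interval of the cover.

def resolve(edge, breakpoints, phi_of_interval):
    """edge = (q, action, clock_constraint, q').
    breakpoints = [0, t1, ..., T]; phi_of_interval[i] is the (constant)
    state formula valid on [t_i, t_{i+1})."""
    q, act, cc, q2 = edge
    out = []
    for i, phi in enumerate(phi_of_interval):
        lo, hi = breakpoints[i], breakpoints[i + 1]
        # conjoin the original constraint with the interval constraint
        new_cc = lambda x, cc=cc, lo=lo, hi=hi: cc(x) and lo <= x < hi
        out.append((q, act, phi, new_cc, q2))
    return out

# phi_t: true in state "S" up to time 5, false afterwards (horizon T = 10):
edges = resolve(("q0", "inf", lambda x: True, "q1"),
                [0.0, 5.0, 10.0],
                [lambda s: s == "S", lambda s: False])
```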
Once this is done, one has to check that the so-obtained DTA satisfies the determinism condition. This follows trivially if the original DTA is explicitly deterministic.
\end{remark}
\section{Stochastic Approximations}
\label{sec:cla}
In this section, we briefly present several approaches to approximate a population model by a simpler system that allows us to keep the state space explosion under control.
Probably the most widespread way to approximate population models is the mean field or fluid limit, which is usually accurate for large populations \cite{tutorial}. In the mesoscopic regime, i.e.\ for populations in the order of hundreds of individuals in which fluctuations cannot be neglected, one can rely on the linear noise approximation, treating the state space as continuous and noise as Gaussian \cite{vankampen}. As an alternative, when we are interested in moments of the population process, we can rely on a large family of moment closure techniques \cite{schnoerr2015}.
\paragraph{Fluid Approximation}
Given a population model $\mathcal{X}^{(N)} = (\mathscr{A}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$, the Fluid Approximation provides an estimate of the stochastic dynamics of $\mathcal{X}^{(N)}$, exact in the limit of an {\it infinite} population. In particular, we consider an infinite sequence $(\mathcal{X}^{(N)})_{N \in \mathds{N}}$ of population models, all sharing the same structure, for increasing population size $N \in \mathds{N}$ (e.g.\ the network models $(\mathcal{X}^{(N)}_{net})_{N \in \mathds{N}}$ with an increasing number of network nodes).
To compare the dynamics of the models in the sequence, we consider the \textit{normalised counting variables} $\hat{\mathbf{X}} = \frac{1}{N}\mathbf{X}$ (known also as \textit{population densities} or \textit{occupancy measures}, see \cite{tutorial} for further details) and we define the \textit{normalized population models} $\mathcal{X}n^{(N)} = (\mathscr{A}, \hat{\mathcal{T}}^{(N)},\hat{\mathbf{x}}_0^{(N)})$, obtained from $\mathcal{X}^{(N)}$ by making the rate functions depend on the normalised variables and rescaling the initial conditions.
For simplicity, we assume that the rate function of each transition $\mathbf{v}r{t}u\in\hat{\mathcal{T}}^{(N)}$ satisfies the \textit{density dependent condition} $\frac{1}{N} f_{\mathbf{v}r{t}u}^{(N)}(\hat{\mathbf{X}}) = f_\mathbf{v}r{t}u(\hat{\mathbf{X}})$ for some Lipschitz function $f_\mathbf{v}r{t}u: \mathds{R}^n \longrightarrow \mathds{R}_{\geq 0}$, i.e.\ that rates on normalised variables are independent of $N$. The \textit{drift} $\textbf{F}$ of $\mathcal{X}^{(N)}$, that is, the mean instantaneous change of the normalised variables, is then given by $\textbf{F}(\hat{\mathbf{X}}) = \sum_{\mathbf{v}r{t}u \in \hat{\mathcal{T}}^{(N)}} \textbf{v}_{\mathbf{v}r{t}u} f_{\mathbf{v}r{t}u}(\hat{\mathbf{X}})$ and is thus also independent of $N$.
The unique solution\footnote{The solution exists and is unique because $F$ is Lipschitz continuous, as each $f_\mathbf{v}r{t}u$ is.} \[\boldsymbol\mathbf{P}hi: \mathds{R}_{\geq 0} \longrightarrow \mathds{R}^n\] of the differential equation
\begin{equation}
\label{eqn:fluid}
\frac{d\boldsymbol\mathbf{P}hi(t)}{dt} = \textbf{F}(\boldsymbol\mathbf{P}hi(t)),\quad \text{given } \boldsymbol\mathbf{P}hi(0) = \hat{\mathbf{x}}_0^{(N)},
\end{equation}
is the {\it Fluid Approximation} of the CTMC $\hat{\mathbf{X}}^{(N)}(t)$ associated with $\mathcal{X}n^{(N)}$, and has been successfully used to describe the collective behaviour of complex systems with large populations \cite{tutorial}. The correctness of this approximation in the limit of an infinite population is guaranteed by the Kurtz Theorem \cite{kurtz,tutorial}, which states that $\sup_{t \in [0, T]}\| \hat{\mathbf{X}}^{(N)}(t) - \boldsymbol\mathbf{P}hi(t)\|$ converges to zero (almost surely) as $N$ goes to infinity:
\begin{theorem}
\label{th:kurtz}
Suppose $\lim_{N \rightarrow \infty} \hat{\mathbf{x}}_0^{(N)} = \textbf{x}_0$. For every \textnormal{finite} time horizon $T > 0$:
$$
\lim_{N \rightarrow \infty} \sup_{t \in [0, T]}\left\| \hat{\mathbf{X}}^{(N)}(t) - \boldsymbol\mathbf{P}hi(t)\right\| = 0\qquad\text{almost surely}.
$$
\end{theorem}
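As a concrete illustration of the fluid equation above, the sketch below integrates the drift of the epidemic model with a plain Euler scheme on normalised variables; the rate constants and step size are assumptions for illustration (a proper ODE solver would be used in practice):

```python
# Illustrative sketch: Euler integration of the fluid (mean field) limit
# for the epidemic model on normalised densities (S, I, R).
K_EXT, K_LOSS, K_INF = 0.01, 0.05, 0.4  # assumed parameters

def drift(x):
    s, i, r = x
    f_ext, f_inf, f_loss = K_EXT * s, K_INF * s * i, K_LOSS * i
    return (-f_ext - f_inf,          # dS/dt
            f_ext + f_inf - f_loss,  # dI/dt
            f_loss)                  # dR/dt

def fluid(x0, t_max, dt=1e-3):
    x = x0
    for _ in range(int(t_max / dt)):
        f = drift(x)
        x = tuple(xi + dt * fi for xi, fi in zip(x, f))
    return x

# start from a fully susceptible population (normalised densities sum to 1)
phi_T = fluid((1.0, 0.0, 0.0), t_max=10.0)
```

Note that the drift components sum to zero, so the total density is conserved along the trajectory, mirroring the constant population size of the CTMC.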
\paragraph{Fast Simulation.}
The mean field convergence theorem can also be used to approximate the behaviour of a random individual agent in a large population model. The idea is that, in the limit of large populations, the behaviours of individual agents become independent, influenced by the rest of the system only through the solution of the mean field equation. This result is known in the literature as fast simulation \cite{darling,boudec,fluidmc}.
More formally, call $Y^{(N)}(t)$ the stochastic process of a random individual agent, with state space $S$, and $\mathbf{X}^{(N)}(t)$ the CTMC associated with the population model. If we consider an individual agent conditional on the state $\mathbf{x}$ of the full population model, we can write down its infinitesimal generator matrix $Q(\mathbf{x})$ as a function of $\mathbf{x}$. Formally this is obtained by computing the fraction of the total rate of a transition seen by a random agent in a given state $s$, i.e. by dividing the total rate by the number of individuals in state $s$. A more formal treatment for population models will be given in the next section.
For the moment, observe that in a finite population, $Y^{(N)}(t)$ and $\mathbf{X}^{(N)}(t)$ are not independent, and to track the behaviour of an individual agent, we need to solve the full model $(Y^{(N)}(t),\mathbf{X}^{(N)}(t))$. However, the fast simulation theorem below proves \cite{darling} that, in the limit of $N$ going to infinity, we can approximate the behaviour of $Y^{(N)}(t)$ by the agent $y(t)$, with state space $S$ and time-dependent infinitesimal generator matrix given by $Q( \boldsymbol\mathbf{P}hi(t))$, with $ \boldsymbol\mathbf{P}hi(t)$ the solution of the fluid equation presented above:
\begin{theorem} \label{th:fastsim}
For any $T<\infty$, \[\lim_{N\rightarrow \infty}P\{Y^{(N)}(t)\neq y(t),\ \text{for some } t\leq T\} = 0.\]
\end{theorem}
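The fast simulation idea can be sketched numerically: the limit agent $y(t)$ is a time-inhomogeneous CTMC whose generator is evaluated along the fluid trajectory. In the sketch below, the per-agent infection rate for an $S$-agent is the total infection rate divided by the number of susceptibles (as described above), and the parameters are illustrative assumptions:

```python
# Illustrative sketch of fast simulation: propagate the law of a single
# agent y(t) on {S, I, R} via the Kolmogorov forward equation
# dp/dt = p Q(Phi(t)), with Phi(t) the fluid limit, using Euler steps.
K_EXT, K_LOSS, K_INF = 0.01, 0.05, 0.4  # assumed parameters

def drift(phi):
    s, i, _ = phi
    f_ext, f_inf, f_loss = K_EXT * s, K_INF * s * i, K_LOSS * i
    return (-f_ext - f_inf, f_ext + f_inf - f_loss, f_loss)

def fast_sim(t_max, dt=1e-3):
    phi = (1.0, 0.0, 0.0)   # fluid limit Phi(t), normalised densities
    p = [1.0, 0.0, 0.0]     # law of y(t): (P(S), P(I), P(R))
    for _ in range(int(t_max / dt)):
        # per-agent rate S -> I: external rate + internal pressure K_INF*phi_I
        lam = K_EXT + K_INF * phi[1]
        p = [p[0] - dt * lam * p[0],
             p[1] + dt * (lam * p[0] - K_LOSS * p[1]),
             p[2] + dt * K_LOSS * p[1]]
        f = drift(phi)
        phi = tuple(x + dt * fx for x, fx in zip(phi, f))
    return p

p_T = fast_sim(10.0)
```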
\paragraph{Linear Noise}
While the Fluid Approximation correctly describes the transient collective behaviour for very large populations, it is less accurate for \textit{mesoscopic} systems, i.e.\ systems with populations in the order of hundreds of individuals, whose dynamics remains intrinsically probabilistic. Indeed, stochastic fluctuations become increasingly relevant as the size of the population decreases. The technique of \textit{Central Limit Approximation} (CLA), also known as \textit{Linear Noise Approximation}, provides an alternative and more accurate estimation of the stochastic dynamics of mesoscopic systems. In particular, in the CLA, the probabilistic fluctuations about the average deterministic behaviour (described by the fluid limit) are approximated by a Gaussian process.
Two equivalent approaches can be followed to introduce the Central Limit Approximation: the more intuitive one by van Kampen \cite{vankampen}, known as System Size Expansion (SSE), and a more rigorous derivation by Kurtz \cite{kurtz}, exploiting the theory of stochastic integral equations.
In both these approaches, the idea is to focus on the process $\textbf{Z}^{(N)}(t)\ :=\ N^{\frac{1}{2}}\left(\hat{\mathbf{X}}^{(N)}(t) - \boldsymbol\mathbf{P}hi(t)\right)$, capturing the rescaled fluctuations of the Markov chain around the fluid limit.
Then, by relying on convergence results for Brownian motion, one shows that $\{\textbf{Z}^{(N)}(t)\ |\ t \in \mathds{R}\}$, for large populations $N$, can be approximated \cite{vankampen,kurtz} by the Gaussian process\footnote{A Gaussian process $\textbf{Z}(t)$ is characterised by the fact that the joint distribution of $\textbf{Z}(t_1),\ldots,\textbf{Z}(t_k)$ is a multivariate normal distribution for any $t_1,\ldots,t_k$.} $ \{\textbf{Z}(t) \in \mathds{R}^n\ |\ t \in \mathds{R}\}$ (\textit{independent of} $N$), whose mean $\mathbf{E} [t]$ and covariance $\mathbf{C} [t]$ are given by
\begin{equation}\label{mean}
\begin{cases}
\frac{\partial \mathbf{E}[t]}{\partial t} = \mathbf{J}_{\textbf{F}}(\boldsymbol\mathbf{P}hi(t)) \mathbf{E}[t]\\
\mathbf{E} [0] = 0
\end{cases}
\end{equation}
and
\begin{equation}
\begin{cases}\label{covariance}
\frac{\partial \mathbf{C}[t]}{\partial t} = \mathbf{J}_{\textbf{F}} (\boldsymbol\mathbf{P}hi(t))\mathbf{C}[t] + \mathbf{C}[t]\mathbf{J}^{T}_{\textbf{F}}(\boldsymbol\mathbf{P}hi(t)) + \textbf{G}(\boldsymbol\mathbf{P}hi(t))\\
\mathbf{C}[0] = 0,\end{cases}
\end{equation}
where $\mathbf{J}_{\textbf{F}}(\boldsymbol\mathbf{P}hi(t))$ denotes the Jacobian of the limit drift $\textbf{F}$ calculated along the deterministic fluid limit $\boldsymbol\mathbf{P}hi: \mathds{R}_{\geq 0} \longrightarrow \mathds{R}^n$, and $\textbf{G}(\hat{\mathbf{X}}) = \sum_{\mathbf{v}r{t}u \in \hat{\mathcal{T}}^{(N)}} \textbf{v}_{\mathbf{v}r{t}u}\textbf{v}_{\mathbf{v}r{t}u}^T f_{\mathbf{v}r{t}u}(\hat{\mathbf{X}})$ is called the \emph{diffusion} term.
The nature of the approximation of $\textbf{Z}^{(N)}(t)$ by $\textbf{Z}(t)$ is captured in the following theorem~\cite{kurtz}.
\begin{theorem}
\label{th:central}
Let $\textbf{Z}(t)$ be the Gaussian process with mean (\ref{mean}) and covariance (\ref{covariance}) and $\textbf{Z}^{(N)}(t)$ be the random variable given by $\textbf{Z}^{(N)}(t)\ :=\ N^{\frac{1}{2}}\left(\hat{\mathbf{X}}^{(N)}(t) - \boldsymbol\mathbf{P}hi(t)\right)$. Assume that $\lim_{N \rightarrow \infty} \textbf{Z}^{(N)}(0) = \textbf{Z}(0)$. Then, $\textbf{Z}^{(N)}(t)$ converges in distribution to $\textbf{Z}(t)$.\footnote{Formally, $\{\textbf{Z}^{(N)}(t)\}_{t\in \mathds{R}_{\geq 0}} \Rightarrow \{\textbf{Z}(t)\}_{t\in \mathds{R}_{\geq 0}}$, i.e. the convergence in distribution is of $\textbf{Z}^{(N)}$ to $\textbf{Z}$, seen as random variables taking values in the space of cadlag functions from $\mathds{R}^n$ to $\mathds{R}$. }
\end{theorem}
The \textit{Central Limit Approximation} then approximates the normalized CTMC $\hat{\mathbf{X}}^{(N)}(t) = \boldsymbol\mathbf{P}hi(t) + N^{-\frac{1}{2}} \textbf{Z}^{(N)}(t)$ associated with $\mathcal{X}n^{(N)}$ by the stochastic process
\begin{equation}\label{vanassumption}
\boldsymbol\mathbf{P}hi(t) + N^{-\frac{1}{2}} \textbf{Z}(t).
\end{equation}
Theorem \ref{th:central} guarantees its asymptotic correctness in the limit of an infinite population.
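The covariance equation of the CLA can be sketched numerically for the epidemic model; since $\mathbf{E}[0]=0$ implies $\mathbf{E}[t]=0$ for all $t$, only the covariance ODE is integrated along the fluid trajectory. Parameters, the Euler scheme, and the hand-coded Jacobian are illustrative assumptions:

```python
# Illustrative sketch: Euler integration of the linear noise covariance
# ODE  dC/dt = J C + C J^T + G  along the fluid limit of the epidemic model.
K_EXT, K_LOSS, K_INF = 0.01, 0.05, 0.4          # assumed parameters
V = [(-1, 1, 0), (-1, 1, 0), (0, -1, 1)]        # update vectors: ext, inf, loss
rates = lambda s, i: (K_EXT * s, K_INF * s * i, K_LOSS * i)

def jac(phi):                                   # Jacobian of the drift F
    s, i, _ = phi
    return [[-K_EXT - K_INF * i, -K_INF * s,          0.0],
            [ K_EXT + K_INF * i,  K_INF * s - K_LOSS, 0.0],
            [ 0.0,                K_LOSS,             0.0]]

def diffusion(phi):                             # G = sum_tau v v^T f_tau
    fs = rates(phi[0], phi[1])
    return [[sum(f * v[a] * v[b] for f, v in zip(fs, V)) for b in range(3)]
            for a in range(3)]

def mat_add(*ms):
    return [[sum(m[a][b] for m in ms) for b in range(3)] for a in range(3)]

def mat_mul(m, n):
    return [[sum(m[a][k] * n[k][b] for k in range(3)) for b in range(3)]
            for a in range(3)]

def transpose(m):
    return [list(r) for r in zip(*m)]

def cla(t_max, dt=1e-3):
    phi = (1.0, 0.0, 0.0)
    C = [[0.0] * 3 for _ in range(3)]
    for _ in range(int(t_max / dt)):
        J = jac(phi)
        dC = mat_add(mat_mul(J, C), mat_mul(C, transpose(J)), diffusion(phi))
        C = [[C[a][b] + dt * dC[a][b] for b in range(3)] for a in range(3)]
        f = [sum(v[a] * r for v, r in zip(V, rates(phi[0], phi[1])))
             for a in range(3)]
        phi = tuple(x + dt * fx for x, fx in zip(phi, f))
    return phi, C

phi_T, C_T = cla(5.0)
```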
In the derivation of van Kampen \cite{vankampen}, the idea is to start from the master equation and introduce an expansion around a small parameter given by the inverse of the system size. In this way, truncating at the lowest order, one obtains that fluctuations obey a linear Fokker-Planck equation, whose solution is the Gaussian process described by the mean and covariance functions introduced above. The advantage of this derivation is that one can keep higher orders in the expansion \cite{grima2010}, obtaining a non-linear Fokker-Planck equation, which cannot be solved analytically, but which can either be integrated numerically, or from which equations for moments can be extracted. These equations have higher-order correction terms depending on the system size, which vanish in the thermodynamic limit, collapsing to mean field. We do not present the derivation here in detail, but refer the interested reader to the appropriate papers. It is worth mentioning that this approach has been implemented in the tool iNA \cite{ina} and in the MATLAB toolbox CERENA \cite{CERENA}.
\paragraph{Moment Closure}
The class of approximation techniques known as moment closure \cite{schnoerr2016} is a viable alternative to mean field, linear noise, or SSE, when the interest is in the moments of the population process for a finite population rather than in the limit behaviour for large population sizes. These methods start from a general ODE for the moments of a stochastic process, known as Dynkin's formula \cite{kallenberg}. If $h(\mathbf{x})$ is any sufficiently smooth function with domain $\mathbb{R}^n$, then
\[ \frac{d}{dt}\mathbb{E}_t[h(\mathbf{X}^{(N)}(t))] = \sum_{\mathbf{v}r{t}u \in \mathcal{T}^{(N)}} \mathbb{E}_{t}[ (h(\mathbf{X}^{(N)}(t) + \textbf{v}_{\mathbf{v}r{t}u}) - h(\mathbf{X}^{(N)}(t)) )f^{(N)}_{\mathbf{v}r{t}u}(\mathbf{X}^{(N)}(t)) ]. \]
In the formula above, $\mathbf{v}r{t}u \in \mathcal{T}^{(N)}$ are the transitions of the population model, each with rate $f^{(N)}_{\mathbf{v}r{t}u}(\mathbf{x})$ and update vector $\textbf{v}_{\mathbf{v}r{t}u}$, see Section \ref{sec:popmod}.
Starting from this formula, one can easily obtain the exact ODEs for (non-centred) moments by using a suitable polynomial expression for $h$. For instance, with $h(\mathbf{x}) = x_i$ one obtains the mean of population $i$; with $h(\mathbf{x}) = x_i^2$ one obtains the second moment of population $i$, from which the variance can be computed as $\mathbb{E}[X_i^2] - \mathbb{E}[X_i]^2$; and so on.
It is useful to see what happens when $h$ is the vector-valued identity, $h(\mathbf{x}) = \mathbf{x}$, giving the mean of all variables. In this case, the ODE above becomes
\[ \frac{d}{dt}\mathbb{E}_t[\mathbf{X}^{(N)}(t)] = \sum_{\mathbf{v}r{t}u \in \mathcal{T}^{(N)}} \textbf{v}_{\mathbf{v}r{t}u} \mathbb{E}_{t}[ f^{(N)}_{\mathbf{v}r{t}u}(\mathbf{X}^{(N)}(t)) ], \]
hence the right-hand side depends on the expectation of the rate functions with respect to the state of the Markov process $\mathbf{X}^{(N)}(t)$ at time $t$. If all rate functions are linear, the previous equation is closed, i.e.\ it depends only on the mean $\mathbb{E}[\mathbf{X}^{(N)}(t)]$. However, when the rate functions are non-linear polynomials, as in the epidemic example, or more complex non-linear functions, this is no longer the case. For example, in the epidemic model of Section \ref{sec:popmod}, the internal infection rate is $\frac{1}{N}\kappa_{inf} X_S X_I$, giving rise to a term in the ODE for the mean proportional to $\mathbb{E}[X_S X_I]$, i.e.\ the second-order joint moment of $X_S$ and $X_I$. This is the \emph{leitmotiv} of non-linearity: the differential equation for a moment of order $m$ will depend on moments of higher order, thus giving rise to an infinite-dimensional ODE system.
The goal of moment closure is to find clever ways to truncate this infinite-dimensional ODE system into a finite-dimensional set of equations up to order $m$, to be solved by numerical integration.
In the literature, there are different techniques to close the moment equations. Typically they impose some condition satisfied by a family of probability distributions on moments of order $m$ and higher (e.g.\ normal, log-normal, low dispersion), but they may also try to match the derivatives to those of the true equations, or use other ideas; see \cite{schnoerr2014,schnoerr2016,CERENA} for a presentation of different moment closure strategies.
Typically, the higher the order $m$ of truncation, the more accurate will be the estimate of lower order moments, like the mean. This is not always true, as the accuracy of moment closures depends on the system under consideration in complex ways, see \cite{schnoerr2014} for a discussion in this sense.
In the paper, we make use of the low dispersion moment closure, which can be obtained by setting the centred moments to zero from order $m$ on, see \cite{schnoerr2016,CERENA} .\\
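To make the closure concrete, the sketch below integrates the first-order (mean-field) closure of a SIR-like model, in which the non-linear term $\mathbb{E}[X_S X_I]$ is replaced by the product of the means. The function name, parameter values, and the use of forward Euler are illustrative choices, not taken from the tools cited above.

```python
def mean_field_sir(s0, i0, r0, k_inf, k_rec, dt=1e-3, horizon=10.0):
    """Euler integration of the closed mean equations of a SIR model.

    First-order closure: the non-linear term E[X_S X_I] is replaced
    by the product of means (here s * i, working with densities).
    """
    s, i, r = s0, i0, r0
    steps = int(round(horizon / dt))
    for _ in range(steps):
        infection = k_inf * s * i   # closed version of k_inf * E[X_S X_I]
        recovery = k_rec * i
        s, i, r = s - dt * infection, i + dt * (infection - recovery), r + dt * recovery
    return s, i, r
```

A second-order (low dispersion) closure would instead keep ODEs for the second moments as well, setting third-order centred moments to zero.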
\section{Model Checking CSL-TA Properties on Individual Agents}
\label{sec:checkCSLTA}
In this section, we show how to check CSL-TA formulae on individual agents. The challenging task is that of computing path probabilities for 1gDTAs, restricting to constant atomic propositions (in light of Remark \ref{rem:timeDependentDTA}). In this paper, we consider efficient approximations of the path probability based on fluid approximations (Section \ref{sec:singlePath}).
The computation of path probabilities starts from the construction of an enriched agent, obtained by a product construction between the agent model and the automaton of the property (Section \ref{sec:singleSynch}). We then show how to compute path probabilities from a fixed initial time (Section \ref{sec:singlePathFixed}), and as a function of the initial time (Section \ref{sec:singlePathVariable}).
The next step is to provide an approximate model checking algorithm for CSL-TA (Section \ref{sec:singleMC}), discussing its decidability and the convergence to the exact value for large populations (Section \ref{sec:singleConvergence}).
Finally, we show how to improve the accuracy of the approximation for finite populations, exploiting higher order moment closure techniques (Section \ref{sec:singleHO}).
\subsection{Computing path probabilities} \label{sec:singlePath}
\subsubsection{Synchronisation of agents and 1gDTAs.} \label{sec:singleSynch}
The first step in the computation of path probabilities is to synchronize the agent and the property, constructing an extended Markov population model in which the state space of each agent is combined with the specific path property we are observing.
The main difficulty in this procedure is the presence of time constraints in the path property specification. However, thanks to the restriction to a single global clock, we can partition the time interval of interest into a finite set of subintervals or regions, within which no clock constraint changes status. Thus, in each region, we can remove the clock constraints, deleting all the edges that cannot fire because their clock constraints are false. In this way, we generate a sequence of Deterministic Finite Automata (DFA), that are then combined with the local model $\mathscr{A}$ by a standard product of automata.
Let $\mathscr{A} = (S, E)$ be an agent class, $\mathbb{D} = (\mathscr{L}, \Gamma_{\pstsp}, Q, q_0, F, \rightarrow)$ be a local path property, and $T> 0$ be the time horizon.
\vspace{0.2cm}
\noindent \textbf{First step: enforcing uniqueness of transition labels}. We define a new agent class $\bar{\mathscr{A}} = (S, \bar{E})$ by renaming the local transitions in $E$ to make their labels unique. This allows us to remove edge formulae in $\mathbb{D}$, simplifying the product construction. In particular, if there exist $s_1 \xrightarrow{\loclab{}} s'_1, \ldots, s_m \xrightarrow{\loclab{}} s'_m \in E$ having the same label $\loclab{}$, we rename them by $\loclab{s_1}, \ldots, \loclab{s_m}$, obtaining $s_1 \xrightarrow{\loclab{s_1}} s'_1, \ldots, s_m \xrightarrow{\loclab{s_m}} s'_m \in \bar{E}$. The 1gDTA $\dta$ is updated accordingly, by substituting each edge $q\xrightarrow{\alpha,\phi,c}q'$ with the set of edges $q\xrightarrow{\alpha_{s_i},\phi,c}q'$, for $i=1,\ldots,m$. We call $\bar{\mathscr{L}}$ the label set of $\bar{\mathscr{A}}$.
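The renaming step can be sketched as follows; the tuple encoding of transitions (source state, label, target state) is an illustrative choice, not the paper's implementation.

```python
from collections import defaultdict

def rename_labels(edges):
    """Make transition labels unique by suffixing the source state.

    edges: list of (s, label, s') triples. Returns the renamed edge
    list and the mapping old_label -> new labels, which is what is
    needed to update the 1gDTA edges accordingly.
    """
    by_label = defaultdict(list)
    for (s, lab, s2) in edges:
        by_label[lab].append((s, lab, s2))
    renamed, mapping = [], {}
    for lab, group in by_label.items():
        if len(group) == 1:            # already unique: keep as is
            renamed.append(group[0])
            mapping[lab] = [lab]
        else:                          # duplicated: suffix source state
            mapping[lab] = []
            for (s, _, s2) in group:
                new_lab = f"{lab}_{s}"
                renamed.append((s, new_lab, s2))
                mapping[lab].append(new_lab)
    return renamed, mapping
```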
\vspace{0.2cm}
\noindent \textbf{Second step: removal of state conditions}.
We remove from the edge relation of $\dta$ all the edges $q\xrightarrow{\alpha_{s_i},\phi,c}q'$ such that $s_i\not\models_{\Gamma_{\pstsp}} \phi$, where $s_i$ is the source state of the (now unique) transition of $\bar{\mathscr{A}}$ labeled by $\alpha_{s_i}$. At this point, the information carried by state propositions becomes redundant, thus we drop them, writing
$q\xrightarrow{\alpha_{s_i},c}q'$ in place of $q\xrightarrow{\alpha_{s_i},\phi,c}q'$.
\vspace{0.2cm}
\noindent \textbf{Third step: removal of clock constraints}. Let $t_1, \ldots, t_{k}$ be the ordered sequence of constants (smaller than $T$) appearing in the clock constraints of the edges of $\dta$. We extend this sequence by letting $t_0 = 0$ and $t_{k+1} = T$. Let $I_j = [t_{j-1}, t_{j}]$, $j = 1, \ldots, {k+1}$, be the $j$-th sub-interval of $[0,T]$ identified by such a sequence. For each $I_j$, we define a Deterministic Finite Automaton (DFA), $\mathbb{D}_{j} = (\mathscr{L}, Q, q_0, F, \tr{j}{})$, whose edge relation $\tr{j}{}$ is obtained from that of $\dta$ by selecting only the edges for which the clock constraints are satisfied in $I_j$, and dropping the clock constraint. Hence, from $q\xrightarrow{\alpha_{s_i},c}q'$ such that $\eta(x)\models_{\mathcal{C}\mathcal{C}} c$ whenever $\eta(x)\in (t_{j-1},t_{j})$, we obtain the DFA edge $(q,\alpha_{s_i},q')\in\tr{j}{}$, denoted also by $q\tr{j}{\alpha_{s_i}} q'$.
\vspace{0.2cm}
\noindent \textbf{Fourth step: synchronization}. To keep track of the behaviour of the agents with respect to the property specified by $\dta$, we synchronize the agent class $\bar{\mathscr{A}} = (S, \bar{E})$ with each DFA $\mathbb{D}_{j}$
through the standard product of automata. The sequence of deterministic automata obtained in this procedure is called the \textit{agent class associated with the local property} $\dta$.
\begin{definition}[Agent class associated with the local property $\dta$]
\label{def:synchAgentClass}
The agent class ${\mathscr{P}}$ associated with the local property $\dta$ is the sequence ${\mathscr{P}} = (\aca{1}, \ldots, \aca{k+1})$ of deterministic automata $\aca{j} = (\hat{\cstsp}, \hat{\ctrsp}_{j})$, $j = 1, \ldots, k+1$, where $\hat{\cstsp} = S \times Q$ is the state space and $\hat{\ctrsp}_{j}$ is the set of local transitions $\loctr{i}^{j} = (s, q) \xrightarrow{\loclab{s}} (s', q')$, such that $s\xrightarrow{\loclab{s}} s'$ is a local transition in $\bar{\mathscr{A}}$ and $q\xrightarrow{\loclab{s}} q'$ is an edge in $\mathbb{D}_{j}$.
\end{definition}
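A minimal sketch of this product construction, assuming labels have already been made unique (first step) and clock constraints removed (third step); the data encoding is hypothetical.

```python
def product_automaton(agent_edges, dfa_edges):
    """Standard product of an agent class (unique labels) and one DFA.

    agent_edges: list of (s, label, s') triples of the renamed agent;
    dfa_edges: dict mapping (q, label) -> q' (deterministic relation).
    Returns the product transitions ((s, q), label, (s', q')): one for
    each agent edge whose label the DFA can read in state q.
    """
    dfa_states = {q for (q, _) in dfa_edges} | set(dfa_edges.values())
    product = []
    for (s, lab, s2) in agent_edges:
        for q in sorted(dfa_states):
            if (q, lab) in dfa_edges:
                product.append(((s, q), lab, (s2, dfa_edges[(q, lab)])))
    return product
```

In the paper's setting, this construction is repeated once per region $I_j$, yielding the sequence $\aca{1},\ldots,\aca{k+1}$.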
\begin{example} \label{ex:syncA_DTA}
As an example, in Figure \ref{fig:sync}, we show the synchronisation steps of the SIR automata described in Figure \ref{SIRagent} and the 1gDTA specification of Figure \ref{DTAprop} (a). After the first step, Figure \ref{fig:sync} (a), the new agent class $\bar{\mathscr{A}}_{node} = (S_{node}, \bar{E}_{node})$ of the network node has local transitions $\bar{E}_{node} =\{S\xrightarrow{inf_S} I, S\xrightarrow{ext} I, I\xrightarrow{inf_I} I, I\xrightarrow{patch_1} R, S\xrightarrow{patch_0} R, R\xrightarrow{loss} S \}$. The 1gDTA $\dta$ is updated accordingly, by substituting the edge $q_0\xrightarrow{inf,\phi_S, c}q_i$ with the set of edges $q_0 \xrightarrow{inf_S,\phi_S,c}q_i$, for $i= b,f$. Then, Figure \ref{fig:sync} (b), we remove the redundant state conditions of $\dta$.
In the third step, Figure \ref{fig:sync} (c), we remove the clock constraints. For the considered 1gDTA we have two time intervals, $I_1 = [0, \tau]$ and $I_2 = [\tau ,T]$. We then define two DFAs, $\mathscr{D}_{[0, \tau]}$ and $\mathscr{D}_{[\tau, T]}$. Finally, Figure \ref{fig:sync} (d), we synchronise the agent class $\bar{\mathscr{A}}_{node}$ with each DFA.
\begin{figure}
\caption{Synchronisation steps of the SIR automata described in Figure \ref{SIRagent}.}
\label{step1}
\label{step2}
\label{step3}
\label{step4}
\label{fig:sync}
\end{figure}
\end{example}
\subsubsection{The stochastic model of the single agent.} \label{sec:agentModel}
The synchronisation of an agent with a property enables us to monitor if and when the agent satisfies it. To progress further into verification, we need to tweak the system model so that one agent is tagged and monitored during model execution. The idea is simple: we couple the agent with the population model and, each time a global transition fires which can change the current state of the agent, we choose whether to update the tagged agent or another untagged one. The best way to formalize this is to define the infinitesimal generator of an individual agent, conditional on the state of the population model. In turn, to specify this we just need to specify, for each local transition of the agent class of Definition \ref{def:synchAgentClass}, the rate at which an individual agent will see this transition happen, given the current state $\mathbf{X}^{(N)}(t)$ of the global model. To fix the notation, let us denote by $\tilde{Y}^{(N)}(t)$ the state of the tagged agent at time $t$.
Consider now a transition $\loctr{i}^{j} = (s, q) \xrightarrow{\loclab{s}} (s', q')$ of the agent of class ${\mathscr{P}}_j$,\footnote{We consider the behaviour in a single interval $I_j$ of ${\mathscr{P}}$.} and let $\tau$ be a global transition of the population model such that $s \xrightarrow{\loclab{}} s'$ belongs to its synchronisation set $\syncset{\tau}$. Let $f^{(N)}_{\tau}: \mathds{R}^n \longrightarrow \mathds{R}_{\geq 0}$ be the rate function of the transition. Furthermore, let $m_{\tau}$ be the multiplicity of $s \xrightarrow{\loclab{}} s'$ in $\syncset{\tau}$. Then, the fraction of the rate of $\tau$ seen by the individual agent can be computed by dividing the global rate by the number of agents in state $s$, and correcting for the multiplicity $m_{\tau}$, as shown in the following proposition.
\begin{proposition} \label{prop:individualRates}
The rate of transition $(s, q) \xrightarrow{\loclab{s}} (s', q')$ of an individual agent due to global transition $\tau$, given that the population model is in state $\mathbf{X}^{(N)}(t) = \mathbf{x}$, is
\[ g^{(N)}_{\tau}((s,q),(s',q')) = \frac{m_\tau}{x_s} f^{(N)}_{\tau}(\mathbf{x}).\]
\end{proposition}
We can now easily define the infinitesimal generator of an individual agent of class ${\mathscr{P}}_j$, conditional on the population model being in state $\mathbf{X}^{(N)}(t) = \mathbf{x}$ as the matrix $Q_j^{(N)}(\mathbf{x})$ such that
\[Q^{(N)}_{j,(s,q),(s',q')}(\mathbf{x}) = \sum_{(s, q) \xrightarrow{\loclab{s}} (s', q')\in \hat{\ctrsp}_j } \sum_{\tau| s \xrightarrow{\loclab{}} s' \in\syncset{\tau}} g^{(N)}_{\tau}((s,q),(s',q')). \]
Furthermore, as customary, let $Q^{(N)}_{j,(s,q),(s,q)}(\mathbf{x})= - \sum_{(s',q') \neq (s,q)} Q^{(N)}_{j,(s,q),(s',q')}(\mathbf{x})$.
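Proposition \ref{prop:individualRates} amounts to a one-line computation. The sketch below instantiates it on the infection transition of the SIR example; the numerical values ($N$, rate constant, population counts) are hypothetical.

```python
def individual_rate(multiplicity, x_s, global_rate):
    # g((s,q),(s',q')) = (m_tau / x_s) * f_tau(x): the tagged agent
    # sees its share of the global rate, divided by the number of
    # agents in state s and corrected for the multiplicity.
    return multiplicity / x_s * global_rate

# Hypothetical SIR numbers: N = 100 nodes, 60 susceptible, 30 infected.
N, k_inf = 100, 2.0
x_S, x_I = 60.0, 30.0
f_inf = k_inf / N * x_S * x_I        # global rate of tau_inf
g = individual_rate(1, x_S, f_inf)   # reduces to k_inf / N * x_I
```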
\begin{example} \label{ex:Q1agent}
Let us consider the synchronisation of a SIR agent and the 1gDTA specification described in the previous subsection (Example \ref{ex:syncA_DTA}). We couple this agent with the population model $\mathcal{X}^{(N)}_{net} = (\mathscr{A}_{node}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$, that has population variables $\mathbf{X} = (X_S, X_I, X_R)$. We can define the infinitesimal generator of the individual agent of class $\mathscr{P}_{[0, \tau]}$ and $\mathscr{P}_{[\tau, T]}$ conditional on the population model being in state $\mathbf{X}^{(N)}(t) = (x_S, x_I, x_R)$.
For example, to change its state from $S_0$ to $I_b$ the automaton can execute the local transition $S_0 \xrightarrow{inf_{S,0}} I_b$, which belongs to the synchronisation set of the global transition $\tau_{inf}$, with multiplicity $m_{\tau_{inf}}=1$. The global transition has rate function $f^{(N)}_{inf}(\mathbf{x}) = \frac{1}{N} \kappa_{inf} x_S x_I$. The rate of the individual agent is then equal to $g^{(N)}_{\tau_{inf}}(S_0,I_b) = \frac{1}{x_S}\frac{\kappa_{inf}}{N} x_S x_I = \frac{\kappa_{inf}}{N} x_I$, where $\mathbf{x} = (x_S, x_I, x_R)$ is the density of the population model, which can be computed at a given time by the fluid approximation. As this is the only transition that allows the agent to move from $S_0$ to $I_b$, we have $ Q^{(N)}_{[0,\tau],(S_0,I_b)}(\mathbf{x}) =\frac{\kappa_{inf}}{N} x_I$. In a similar way we can compute the other entries of the generator matrix: $Q^{(N)}_{[0,\tau],(S_0,I_0)}(\mathbf{x})=\kappa_{ext}$, $Q^{(N)}_{[0,\tau],(S_0,R_0)}(\mathbf{x})=\kappa_{patch_0}$, and
$ Q^{(N)}_{[0,\tau],(S_0,S_0)}(\mathbf{x}) = - \kappa_{ext} - \frac{\kappa_{inf}}{N} x_I-\kappa_{patch_0}$. $Q_{[\tau,T]}(\mathbf{x})$ is equal to $Q_{[0, \tau]}(\mathbf{x})$ except for $Q_{[\tau,T], (S_0,I_b)}(\mathbf{x})=0$ and $Q_{[\tau,T], (S_0,I_f)}(\mathbf{x})= \frac{\kappa_{inf}}{N} x_I$.
\end{example}
\subsubsection{Computing path probabilities for a fixed initial time} \label{sec:singlePathFixed}
Verifying the property $\mathbb{D}$ on an individual agent requires computing the probability of the set of paths that satisfy it. This can be done by synchronising the agent with the property, as in Definition \ref{def:synchAgentClass}, and then computing the probability at the time horizon $T$ of being in an accepting state. This is sufficient because of the absorption property of final states in a 1gDTA (condition 2 of Definition \ref{def:1gDTA}), which guarantees that whenever an agent enters a final state of the 1gDTA, it will never leave it, i.e. that the second component of a state $(s,q_f)$, $q_f\in F$, will never change.
Let the individual agent $Y^{(N)}(t)$ be in state $s_0$ at the initial time $t_0$. Then the synchronised agent will start from state $(s_0,q_0)$, and
\begin{equation}
\label{eq:prob}
P(s_0,t_0 \models \mathbb{D}) = P( Y^{(N)},s_0,t_0 \models \mathbb{D} ) = \sum_{(s,q)|q\in F} P(\tilde{Y}^{(N)}(t_0+T) = (s,q)).
\end{equation}
The problem with the formula above is that to compute the probabilities of $\tilde{Y}^{(N)}$ one needs to solve the joint process $(\tilde{Y}^{(N)}(t),\mathbf{X}^{(N)}(t))$, as the rates of $\tilde{Y}^{(N)}$ depend on the state of the global model. To speed up this computation, the idea is to plug in an approximation. The simplest choice, which typically works well for moderate to large population sizes, is to rely on fast simulation, approximating $\tilde{Y}^{(N)}$ by $\hat{Y}(t)$, the individual agent model with time-dependent rates, plugging into $Q_j$ the solution $\mathbf{x}(t)$ of the mean field equation for the global model: $Q_j = Q_j(\mathbf{x}(t))$. This is the idea pursued in \cite{fluidmc}, which gives a speedup of many orders of magnitude.
In this section, we proceed along this direction, but follow a different derivation which makes it easier to correct the model for finite size effects, using higher order moment closure techniques. Consider the joint distribution $P(\tilde{Y}^{(N)}(t),\mathbf{X}^{(N)}(t))$ and write it as $P(\tilde{Y}^{(N)}(t)|\mathbf{X}^{(N)}(t))P(\mathbf{X}^{(N)}(t))$. We now plug in the crucial approximation, which is a consequence of the fast simulation theorem: we assume $\tilde{Y}^{(N)}(t)$ and $\mathbf{X}^{(N)}(t)$ to be independent. This guarantees that $\tilde{Y}^{(N)}(t)$ is a Markov process, so that we can derive the forward Kolmogorov equations for the marginal process $\tilde{Y}^{(N)}(t)$, as
\begin{equation} \label{eq:Keq}
\frac{d}{dt} P_j(t|t_0) = P_j(t|t_0) \mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))]
\end{equation}
i.e. by marginalising over the global population model. Here $P_j(t|t_0) $ is a matrix of transition probabilities: $P_j(t|t_0) [(s,q),(s',q')]$ is the probability of being in state $(s',q')$ at time $t$, starting from state $(s,q)$ at time $t_0$.
Now, the fast simulation regime introduces the further approximation
\[ \mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))] \approx Q_j(\mathbb{E}[\mathbf{X}^{(N)}(t)]) \approx Q_j(\mathbf{x}(t))\]
where we made a first order approximation of $\mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))]$ by Taylor expanding it around the mean $\mathbb{E}[\mathbf{X}^{(N)}(t)]$, and then approximated this mean at first order with the solution of the mean field equation: $\mathbb{E}[\mathbf{X}^{(N)}(t)]\approx\mathbf{x}(t)$ \cite{vankampen,gardiner,bortolussi2008}.
Notice that, even at first order, we obtain a time-inhomogeneous model for the individual agents, with rates modulated by the average behaviour of the full process. Obviously, there is no need to stop at first order, and we can introduce higher order approximations of the average $\mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))] $, by relying on moment closure approximation. We will investigate this direction in Section \ref{sec:singleHO}.
To compute the probability of the property $\mathbb{D}$, we now need to take into account the structure of clock constraints. The idea is that we can apply the approximation discussed above to each synchronised model ${\mathscr{P}}_j$, and then combine the probabilities so obtained by multiplying the probability transition matrices. More specifically, call $P_j(t_j|t_{j-1})$ the probability transition matrix of ${\mathscr{P}}_j$, computed by solving the approximate Kolmogorov equations \eqref{eq:Keq}.
Let $t_1,\ldots,t_N$ be the time points delimiting the regions of the clock constraints. Then we define
\begin{equation} \label{eqn:fullProbIndividual}
P(t_N|t_0) = P_1(t_1|t_0)P_2(t_2|t_1)\cdots P_N(t_N|t_{N-1})
\end{equation}
and then let $P(\tilde{Y}^{(N)}(t_0+T) = (s,q)) = P(t_N|t_0)[(s_0,q_0),(s,q)]$ in equation \eqref{eq:prob}. The satisfaction probability $P(s_0,t_0 \models \mathbb{D})$ can now be calculated according to equation \eqref{eq:prob}.
\begin{example} \label{ex:st1agentfixTime}
In the previous subsection (Example \ref{ex:Q1agent}), we coupled the automata $\mathscr{P}_{[0, \tau]}$ and $\mathscr{P}_{[\tau, T]}$ with the population model $\mathcal{X}^{(N)}_{net} = (\mathscr{A}_{node}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$ and computed their infinitesimal generator matrices conditional on the population model being in state $\mathbf{X}^{(N)}(t) = (x_S, x_I, x_R)$. Let us denote by $Y^{(N)}(t)$ the state of the tagged agent at time $t$ and suppose it is in state $S$ at time $t_0=0$. The synchronised agent will start from state $S_0$. We want to compute $P( Y^{(N)},S,t_0 \models \mathbb{D} ) = P(\tilde{Y}^{(N)}(t_0+T) = S_f) + P(\tilde{Y}^{(N)}(t_0+T) = I_f) +P(\tilde{Y}^{(N)}(t_0+T) = R_f)$. To do that, we approximate $\tilde{Y}^{(N)}$ by $\hat{Y}(t)$, computing the infinitesimal generator matrices $Q_{[0,\tau]}$ and $Q_{[\tau,T]}$ and integrating the approximate Kolmogorov equations:
\[ \frac{d}{dt} P_{[0,\tau]}(t|0) = P_{[0,\tau]}(t|0) Q_{[0,\tau]}(\mathbf{x}(t)) \qquad\qquad \frac{d}{dt} P_{[\tau, T]}(t|\tau) = P_{[\tau,T]}(t|\tau) Q_{[\tau,T]}(\mathbf{x}(t)) .\]
Note that we have only one clock constant, $\tau$.
We can then compute $ P(T|0)= P_{[0,\tau]}(\tau|0) P_{[\tau, T]}(T|\tau) $. The total satisfaction probability is equal to $P( Y^{(N)},S_0,t_0 \models \mathbb{D} ) = P(T|0)[S_0,S_f ]+ P(T|0)[S_0,I_f]+ P(T|0)[S_0,R_f]$.
In Figure \ref{1agfixtimefluid}, we report a comparison of the fast simulation (FS) described above with the statistical estimation (using the Gillespie algorithm, SSA with 10000 runs) of the path probabilities, as a function of the time horizon $T$, computed for different values of the population size $N$.
In Table \ref{table:error1agfixtime}, we report the average computational cost of SSA, the computational cost of FS and the relative speedup (SSAcost/FScost), and the mean and maximum absolute and relative errors of the fast simulation in $[0,T]$. We also report the error at the final time of the simulation, when the probability has stabilised to its limit value. It can be seen that both the average and the maximum errors decrease with $N$, as expected, and are already quite small for $N=50$ (for the first property, the maximum difference in the path probability for all runs is of the order of 0.06, while the average error is 0.003). For $N=100$, the FS is practically indistinguishable from the (estimated) true probability. Moreover, the solution of the ODE system is computationally independent of $N$, and also much faster than the simulation-based method, as can be seen from the computational costs.
\begin{figure}
\caption{Comparison of the fast simulation (FS) and a statistical estimate (using the Gillespie algorithm, SSA with 10000 runs) of the path probabilities of the 1gDTA property of Figure \ref{DTAprop}, as a function of the time horizon $T$, for different population sizes $N$.}
\label{1agfixtimefluid}
\end{figure}
\begin{table}[!t]
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$N$ & SSAcost & FScost & Speedup & max(err) & $\mathbb{E}$[err] & err($T$) & $\mathbb{E}$[Relerr] & Relerr($T$) \\
\hline
20 & 68.870 & 0.2040& 337.598 & 0.0159 & 0.0086 & 0.0158& 0.0579 & 0.0631\\
\hline
50 & 77.591 & 0.2040 & 380.350 & 0.0121 & 0.0062 & 0.0120 & 0.0406 & 0.0487 \\
\hline
100 & 97.490 & 0.2040 & 477.892 & 0.0045 & 0.0017 & 0.0020 & 0.0166 & 0.0084 \\
\hline
200 & 103.598 &0.2040 & 507.833 & 0.0044 & 0.0017 & 0.0018& 0.0147 & 0.0077\\
\hline
500 & 119.3612 &0.2040 & 585.104 &0.0041 & 0.0015 & 0.0008 & 0.0214 & 0.0033 \\
\hline
\end{tabular}
\end{small}
\end{center}
\caption{Computational cost of the statistical estimation (SSAcost, 10000 runs) and of the fast simulation (FScost), the relative speedup (SSAcost/FScost), and the errors of the fast simulation: maximum and average absolute error (max(err), $\mathbb{E}$[err]), error at the final time horizon $T$ (err($T$)), and average and final relative error ($\mathbb{E}$[Relerr], Relerr($T$)). Data is shown as a function of the network size $N$.}
\label{table:error1agfixtime}
\end{table}
\end{example}
\subsubsection{Computing path probabilities as a function of the initial time} \label{sec:singlePathVariable}
In the previous section we showed how to compute the satisfaction probability of a path formula for a fixed initial time, in the approximate single agent model. As the rates of the individual agent depend on the global system through the expected values of some functions of the global variables, the individual agent is a time-inhomogeneous CTMC, hence the same property evaluated at different initial times can in principle have different probability values. In order to properly deal with nesting in the model checking algorithm for the CSL-TA logic, following the approach of \cite{fluidmc}, we need to compute the path probability as a function of the initial time.
We can apply a similar approach as in \cite{fluidmc}, which we quickly recall here.
Consider the probability transition matrix $P(t_0+T|t_0)$. Fixing the time horizon $T$, we need to compute it as a function of $t_0$. To achieve that, we can combine the forward and backward Kolmogorov equations:
\[ \frac{\partial}{\partial t} P(t|s) = P(t|s)Q(t)\qquad\qquad \frac{\partial}{\partial s} P(t|s) = -Q(s)P(t|s) .\] obtaining the following ODE for $P(t_0+T|t_0)$:
\begin{equation}
\label{eq:forwbackKE}
\frac{d}{dt_0} P(t_0+T|t_0) = P(t_0+T|t_0)Q(t_0+T) -Q(t_0)P(t_0+T|t_0) .
\end{equation}
To lift this computation to the level of equation \eqref{eqn:fullProbIndividual}, we compute each $P_j$ separately by numerically integrating the corresponding ODE, with initial conditions given by the identity matrix, and then take their product at each initial time of interest, relying on the Markovian nature of the approximate single agent model.
Note that if $t_1,\ldots,t_N$ are the fixed clock constants, with $T_j = t_{j+1} - t_j$ the fixed interval between consecutive constants, then the Kolmogorov equations are defined on the translated constants $\tilde{t}_1,\ldots,\tilde{t}_N$, with $\tilde{t}_i = t_0 + t_i$. In this way, we obtain $P(t_0+T|t_0)$ as a function of $t_0$. The path probability $P(s_0,t_0 \models \mathbb{D})$ can then be computed according to equation \eqref{eq:prob}. See the next example for further details.
\begin{example}
\label{ex:st1agentFcnTime}
Consider again the running example. Fixing the time horizon $T = t_N$ and the clock constant $\tau$, let $t_{\tau} = t_0 + \tau$ and $T_{f} = T-\tau$. We integrate the Kolmogorov equations (\ref{eq:forwbackKE}) for $P_{[t_0,\tau]}(t_0+\tau|t_0)$ and $P_{[\tau,T]}(t_\tau+T_{f}|t_\tau)$.
Then we have that $ P(t_0 + T|t_0)= P_{[0,\tau]}(t_0 + \tau|t_0) P_{[\tau,T]}(t_\tau+T_{f}|t_\tau) = P_{[0,\tau]}(t_0 + \tau|t_0) P_{[\tau, T]}(t_0 + T|t_0 + \tau) $. In Figure \ref{subfig:prop1agfcnT0}, we plot the satisfaction probability of the 1gDTA property of Figure \ref{DTAprop} as a function of the initial time $t_0$, for a single agent and a SIR population with $N=100$. We can see that the satisfaction probability decreases for the first 5 time units and then increases until reaching a steady state around $t=50$ time units. This is in accordance with the behaviour of the population (Figure \ref{subfig:SIR}), where the number of infected nodes rapidly increases for the first 5 time units. The property that we are verifying requires that the agent becomes infected only after the first $10$ time units; this implies that the higher the number of infected at time $t_0$, the lower the probability of satisfying the property. We can also observe that the value for $t_0=0$ is exactly the same as the one we obtained in the previous example, computing the probability as a function of the time horizon $T$, fixed here to $T=300$.
\begin{figure}
\caption{(a) Satisfaction probability of the 1gDTA property of Figure \ref{DTAprop} as a function of the initial time $t_0$; (b) behaviour of the SIR population.}
\label{subfig:prop1agfcnT0}
\label{subfig:SIR}
\end{figure}
\end{example}
\begin{remark}
The equation \eqref{eq:forwbackKE} is a matrix-valued ODE that can be solved with standard numerical routines. However, it is typically very stiff, and its direct integration may turn out to be impossible due to numerical instabilities. Typically this appears when integrating over an interval larger than a constant $T'$, which opens a way to tackle the instability using the Markov property of the process; see the appendix of \cite{bertinoro13} for a discussion of an algorithm that keeps numerical errors under control.
\end{remark}
\subsection{Model Checking 1gDTA} \label{sec:singleMC}
We now present the full model checking algorithm for the individual specification properties. The routine presented in the previous section to approximate the path probabilities is the core of the approach. In fact, the difficult property to check in the logic of Definition~\ref{def:CSLTA} is the formula $\mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\Phi_1,\ldots,\Phi_k] \right)$, whose truth is easily obtained once the function
$P(s_0,t_0 \models \mathbb{D})$ is computed by fluid approximation for the product model of the agent-property. The only extra operation that we need to perform is to check whether $P(s_0,t_0 \models \mathbb{D}) \bowtie p$. This is not necessarily a trivial operation, because for nested subformulae we need to do this check for each initial time $t_0$, and there are uncountably many of them. The solution is to resort to numerical routines that look for all the zeros of the function $P(s_0,t_0 \models \mathbb{D})-p$, possibly relying on root finding algorithms embedded in ODE solvers \cite{burden}. In this way, we can compute a boolean-valued function returning the truth value for each state and initial time $t_0$. The complete procedure
to check $\Phi = \mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\Phi_1,\ldots,\Phi_k] \right)$ is sketched in Algorithm \ref{mcpath}. It takes as input time-dependent CSL-TA formulae $\Phi_1(t)$,\ldots, $\Phi_k(t)$; it first performs the structural resolution of the time-varying properties $\Phi_j(t)$, as discussed in Remark \ref{rem:timeDependentDTA}, then computes the path probability $P(s_0,t_0 \models \mathbb{D})$ according to the previous section, and finally solves $P(s_0,t_0 \models \mathbb{D}) \bowtie p$ to return the time-dependent truth value $\Phi(t_0)$.
Algorithm \ref{mcpath} is called from the full model checking procedure, which solves the model checking problem recursively on the parse tree of a CSL-TA formula. Boolean operations on time-dependent truth profiles $\Phi_i(t)$ are performed pointwise in time. To this end, we can rely on the algorithms for boolean signals developed in \cite{maler2004} for the logic STL.
\begin{algorithm}
\caption{Model checking algorithm for $\mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\Phi_1,\ldots,\Phi_k] \right)$}
\label{mcpath}
\begin{algorithmic}[1]
\Procedure{check}{$\mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\Phi_1,\ldots,\Phi_k] \right)$,$\Phi_1(t)$,\ldots,$\Phi_k(t)$,$\mathscr{A}$,$\mathcal{X}^{(N)}$}
\State Construct the structural reduction $\mathbb{D}'$ of $\mathbb{D}[\Phi_1,\ldots,\Phi_k]$ for the timed properties $\Phi_1(t)$,\ldots,$\Phi_k(t)$.
\State Construct the product ${\mathscr{P}}$ between the agent $\mathscr{A}$ and the property $\mathbb{D}'$.
\State Compute the solution of the mean field equations $\mathbf{x}(t)$.
\State Compute the solution of the Kolmogorov equations $P(t_0+T|t_0)$ for ${\mathscr{P}}$ and $P(s_0,t_0 \models \mathbb{D})$.
\State Compute $\Phi(t_0) \equiv P(s_0,t_0 \models \mathbb{D}) - p \bowtie 0$.
\State \textbf{return} $\Phi(t_0)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Computability and convergence} \label{sec:singleConvergence}
In this section, we briefly discuss the computability and convergence of the model checking algorithm. Computability is not straightforward, as the model of the single agent we are checking depends on the solution of the fluid or of a moment closure equation, hence standard results about CTMCs \cite{aziz} do not hold. Even more complicated is the fact that we need to compare $P(s_0,t_0 \models \mathbb{D})$ with the threshold $p$ not for a single time point, but for uncountably many. Hence, we need conditions guaranteeing that the solution of $P(s_0,t_0 \models \mathbb{D}) - p = 0$ is computable and that the number of zeros is finite. The problem is analogous to the one discussed in \cite{fluidmc}, hence the same recipe can be applied here. The idea is to restrict the admissible rate functions of the population model to (piecewise) real analytic functions \cite{realanalytic},\footnote{A function is real analytic if it admits a power series expansion.} which guarantees that the solutions of the fluid and of the moment closure equations are still (piecewise) real analytic, and so are the solutions of the Kolmogorov equations for the individual agent. This in turn implies that $P(s_0,t_0 \models \mathbb{D}) - p = 0$ has a finite number of zeros, and that these zeros are computable for almost all values of the threshold $p$. Indeed, the computation of the zero of a function is possible only for \emph{transversal zeros}, which are points at which the function changes sign while crossing the zero axis. This leaves out tangential zeros (the function touches zero at a minimum or maximum point). In \cite{fluidmc} it was proved that the function $P(s_0,t_0 \models \mathbb{D})-p$ has tangential zeros only for a set of values of $p$ of Lebesgue measure zero in $[0,1]$.
This justifies the introduction of the notion of \emph{quasi-computability} for the model checking problem, requiring the model checking algorithm to terminate for all but a subset of measure zero of the threshold values $p$ in the probability quantifiers. Invoking the results of \cite{fluidmc}, we can then conclude that:
\begin{theorem} \label{th:quasicomp}
The model checking problem based on fast simulation for local CSL-TA properties is quasi-computable, for population models with (piecewise) real analytic rate functions. $
\blacksquare$\newline
\end{theorem}
An orthogonal but related issue is the convergence of the path probabilities computed in this way to the true values, i.e.\ the accuracy of the approximation. Here we can rely on the fast simulation result (Theorem \ref{th:fastsim}) and, applying arguments similar to those in \cite{fluidmc}, obtain the following result:
\begin{theorem} \label{th:convergenceIndividual}
If $\hat{Y},s_0,t_0 \models \Phi$ is computable, then
$\hat{Y},s_0,t_0 \models \Phi$ if and only if there is $N_0$ such that for all $N\geq N_0$, $Y^{(N)},s_0,t_0 \models \Phi$. $
\blacksquare$\newline
\end{theorem}
The previous theorem holds for almost every formula: if $\Phi$ contains $k$ probabilistic quantifiers, we need to discard a set of thresholds $p$ of measure zero in $[0,1]^k$.
\begin{remark}
The method presented in this section is an extension to CSL-TA of the approach of \cite{fluidmc} for CSL. There is, however, a remarkable difference between the two approaches: nesting probabilistic operators in CSL introduces discontinuities in the function $P(s_0,t_0 \models \mathbb{D})$, which makes the verification of nested properties very challenging. Discontinuities arise because, when a subformula of the until operator changes truth value in a given state at a given time, this induces a change of the goal or unsafe set in the corresponding reachability problem \cite{fluidmc,ctmcmc}. CSL-TA does not suffer from this problem: when a subformula in $\mathbf{P}^{\leq T}_{\bowtie p}\left(\mathbb{D}[\Phi_1,\ldots,\Phi_k] \right)$ changes truth value, there is a change in the \emph{edges} of the 1gDTA, inducing a change in the transitions that the individual
agent synchronised with the 1gDTA can make. This changes the dynamics, i.e.\ the vector field (introducing a discontinuity in the derivatives), but not the value of the path probability.
\end{remark}
\subsection{Higher-order corrections} \label{sec:singleHO}
One way to increase the accuracy of the proposed model checking algorithm based on fluid approximation and fast simulation is to improve the approximation of the mean $\mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))] \approx Q_j(\mathbf{x}(t))$. In particular, we can rely on higher-order corrections of $\mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))]$, using either the system size expansion or moment closure techniques.
Doing this algorithmically is straightforward: one can derive equations for the different terms in $\mathbb{E}_{\mathbf{X}^{(N)}(t)}[Q_j(\mathbf{X}^{(N)}(t))]$, which are typically monomials in the variables of $\mathbf{X}^{(N)}(t)$, i.e.\ they correspond to means or higher-order moments (for non-polynomial terms, a Taylor expansion is required). These solutions then determine corrected time-dependent rates for the infinitesimal generator of the individual agent, which can be used to solve the Kolmogorov equations as in fast simulation.
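As an illustration of the corrected-rate idea, the following Python sketch integrates the forward Kolmogorov equations of a hypothetical two-state absorbing agent whose jump rate is modulated by an assumed corrected mean trajectory `m(t)`; both the toy model and the trajectory are invented for the example and are not the epidemic model of the paper.

```python
import numpy as np

def solve_kolmogorov(pi0, Q_of_t, t_end, dt=1e-3):
    """Forward Kolmogorov equations d pi/dt = pi * Q(t) for a time-inhomogeneous
    CTMC, integrated with explicit Euler.  Q_of_t returns the generator whose
    rates are evaluated on a (possibly moment-closure-corrected) mean trajectory."""
    pi = np.asarray(pi0, dtype=float)
    t = 0.0
    while t < t_end:
        h = min(dt, t_end - t)
        pi = pi + h * (pi @ Q_of_t(t))
        t += h
    return pi

# Hypothetical 2-state agent: the jump rate is modulated by a corrected mean m(t),
# a stand-in for an improved estimate of E[Q_j(X(t))].
m = lambda t: 1.0 + 0.5 * np.exp(-t)
Q = lambda t: np.array([[-m(t), m(t)], [0.0, 0.0]])
pi_T = solve_kolmogorov([1.0, 0.0], Q, 2.0)
print(pi_T)  # most probability mass has been absorbed into state 1
```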
In the following, we discuss how these higher order corrections work on the epidemic example, how much they improve the accuracy, and at which computational cost.
\begin{example}
In this example, considering the same model and property of Example \ref{ex:Q1agent}, we compare the result of the fast simulation (FS) with higher-order corrections. In particular, for the moment closure we have considered a low-dispersion closure of order 4 (hence we have set to zero all the moments of order greater than or equal to 5), while for the system size expansion we used the {\it Effective Mesoscopic Rate Equation} (EMRE).
In Figure \ref{1AgfluidHighorder20}, we report the results for $N=20$ and $N=50$. As we can immediately see, both EMRE and MM improve the estimate. In Table \ref{table:error1agHighorder}, we report the maximum and mean absolute and relative errors obtained by the FS and the higher-order approximation for $N=20,50,100$.
We remark that the high value of the maximum relative error (RE) is misleading: it is reached at the beginning of the simulation time, when the true satisfaction probability is very close to zero and the statistical estimate is unreliable. The RE then decays very fast, as can be seen in Figure \ref{error1ag}, where we can also clearly see that EMRE and MM noticeably improve the approximation. Note also how EMRE and MM are most effective for small populations, bringing less significant contributions for larger ones.
\label{ex:st1agenthighorder}
\begin{figure}
\caption{Comparison of the results obtained by the fast simulation (FS), the Moment Closure (MM), the Effective Mesoscopic Rate Equation (EMRE) and the statistical estimate (SSA) of the path probabilities of the 1gDTA property of Figure \ref{DTAprop}.}
\label{1AgfluidHighorder20}
\end{figure}
\begin{table}[!t]
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$N$ & max(erFS) & $\mathbb{E}$[erFS] & max(erEMRE) & $\mathbb{E}$[erEMRE]& max(erMM) & $\mathbb{E}$[erMM] \\
\hline
20 & 0.0159 & 0.0086 & 0.0089 & 0.0040 & 0.0088 & 0.0039 \\
\hline
50 & 0.0121 & 0.0062 & 0.0076 & 0.0029 & 0.0069 & 0.0025 \\
\hline
100 & 0.0045 & 0.0017 & 0.0056 & 0.0027& 0.0056 & 0.0028 \\
\hline
\hline
$N$ & max(RerFS) & $\mathbb{E}$[RerFS] & max(RerEMRE) & $\mathbb{E}$[RerEMRE]& max(RerMM) & $\mathbb{E}$[RerMM] \\
\hline
20 & 0.8966 & 0.0584 & 0.8859 & 0.0278 & 0.8837 & 0.0274 \\
\hline
50 & 0.8506 & 0.0406 & 0.8447 & 0.0251 & 0.8443 & 0.0228 \\
\hline
100 & 0.5267 & 0.0166 & 0.5173 & 0.0227& 0.5170 & 0.0227\\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\caption{Maximum and mean absolute and relative error on the reachability probability estimations obtained by the Fast Simulation (FS), the EMRE and the Moment Closure (MM) in the experiments of Figure \ref{1AgfluidHighorder20}.}
\vspace{-2ex}
\label{table:error1agHighorder}
\end{table}
\begin{figure}
\caption{Plot of the absolute errors (left) and the relative errors (right) for $N=20$.}
\label{error1ag}
\end{figure}
\end{example}
\section{Model Checking Collective Properties}
\label{sec:modelchecking}
In this section, we show how to deal with the collective properties of Definition \ref{def:globalProp}. The mechanism is similar to the one for individual properties: starting from the synchronisation of an agent class with a property, we construct a new collective model, in which population variables count how many agents of that class are in a given agent-property product state. Once this collective model is constructed, we can use it to compute the probabilities of collective path formulae, which are the main challenge in this case as well. For this, we need to rely on the linear noise approximation, or on some higher-order moment closure technique combined with distribution reconstruction routines, such as maximum entropy \cite{andreychenko2015}.
Finally, we will show how to check the other collective properties and in particular the collective state properties (boolean combinations are trivial).
While presenting the method, we will comment also on its asymptotic correctness.
\subsection{Collective Synchronisation of Agents and Path Properties}
\label{sec:collectivesynch}
In order to model check collective path properties, we need to update the population model $\mathcal{X}^{(N)} = (\mathscr{A}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$ so that we can count how many agents of class $\mathscr{A} = (S, E)$ satisfy a local specification $\mathbb{D} = (\mathscr{L}, \Gamma_{\pstsp}, Q, q_0, F, \rightarrow)$.
We do this by defining the population model associated with the local property $\dta$ as a sequence $\boldsymbol\pop^{(N)} = (\mathcal{X}_{I_{1}}^{(N)}, \ldots, \mathcal{X}_{I_{k}}^{(N)})$ of population models. Since the agent states are synchronised with the property automaton, each transition in the population model needs to be replicated several times in the extended collective model, to account for all possible combinations of the extended local state space. Furthermore, we also need to take care of rate functions, in order not to change the global rate. Recall from Section \ref{sec:singleSynch} the definition of the agent class ${\mathscr{P}} = (\aca{1}, \ldots, \aca{k+1})$ associated with the property $\dta$, which contains a sequence of deterministic automata $\aca{j} = (\hat{\cstsp}, \hat{\ctrsp}_{j})$, $j = 1, \ldots, k+1$, one for each DTA with time constraints resolved.
Let us focus on the $j$-th element $\aca{j}$ of the agent class ${\mathscr{P}}$ associated with the property $\dta$. The state space of each $\aca{j}$ is $S\times Q$, hence to construct the global model we need $nm$ counting variables ($n=|S|$, $m=|Q|$), where $X_{s,q}$ counts how many agents are in the local state $(s,q)$.
Let $\tau = (\syncset{\tau},f^{(N)}) \in \mathcal{T}^{(N)}$ be a global transition. Apply the relabelling of action labels according to step 1 of Section \ref{sec:singleSynch}, and focus on the synchronisation set $\syncset{\tau} = \{ s_1\xrightarrow{\alpha_{s_1}} s_1',\ldots, s_k\xrightarrow{\alpha_{s_k}} s_k' \}$.
We need to consider all possible ways of associating states of $Q$ with the different states $s_1,\ldots,s_k$ in $\syncset{\tau}$. Indeed, each choice $\vec{q} = (q_1,\ldots,q_k)\in Q^k$ generates a different transition in $\mathcal{X}_{I_{j}}^{(N)}$, with synchronisation set $\syncset{\tau,\vec{q}} = \{ (s_1,q_1)\xrightarrow{\alpha_{s_1}} (s_1',q_1'),\ldots, (s_k,q_k)\xrightarrow{\alpha_{s_k}} (s_k',q_k') \}$, where $q_i'$ is the unique state of $Q$ such that $q_i\xrightarrow{\alpha_{s_i}} q_i'$. The rate function $f_{\vec{q}}^{(N)}$ associated with this instance of $\tau$ needs to be a fraction of the total rate function $f^{(N)}$ of $\tau$, proportional, for each $i=1,\ldots,k$, to the fraction of the agents in state $s_i$ that are in the combined state $(s_i,q_i)$, accounting for the correct multiplicity in the synchronisation set. In particular, let $\kappa_{s_i}^{q_i}$ be the multiplicity with which $(s_i,q_i)$ appears in the left hand side of a rule in $\syncset{\tau,\vec{q}}$, and ${\kappa}_{s_i}$ be the multiplicity of $s_i$ as a left hand side in $\syncset{\tau}$.
The simplest way to proceed is to fix an ordering of the elements of $\syncset{\tau}$ and count how many ordered tuples of agents we can form in the current state $\mathbf{X}$ of the system, where element $j$ of the tuple is an agent in state $s_j$, as specified in $\syncset{\tau}$. Doing the same for $\syncset{\tau,\vec{q}}$ and taking the ratio of the two quantities, we obtain the following formula for the rate:
\begin{equation}
\label{eqn:globalRate}
f_{\vec{q}}^{(N)} (\mathbf{X}) = \frac{\prod_{(s,q)\in LHS(\syncset{\tau,\vec{q}})} \frac{X_{s,q}!}{(X_{s,q}-\kappa_{s}^{q} )!}}{\prod_{s\in LHS(\syncset{\tau})} \frac{X_{s}!}{(X_{s}-\kappa_{s} )!} } f^{(N)} (\widetilde{\mathbf{X}})
\end{equation}
where $LHS(\syncset{\tau})$ is the set containing all the states appearing on the left hand side of a rule in $\syncset{\tau}$, and similarly for $LHS(\syncset{\tau,\vec{q}})$.
Moreover, $\widetilde{\mathbf{X}} = (X_1, \ldots, X_n)$ with $X_s = \sum_{r = 1}^{m} X_{s,r}$. Due to the restrictions enforced in Definition \ref{populationModel}, summing up the rates $f_{\vec{q}}^{(N)} (\mathbf{X})$ over all possible choices of $\vec{q}=(q_1,\ldots,q_k)\in Q^k$, we obtain $f^{(N)}(\widetilde{\mathbf{X}})$:
\begin{proposition}
\label{prop:global_rates}
With the definitions above, it holds that
$\sum_{\vec{q}\in Q^k} f_{\vec{q}}^{(N)} (\mathbf{X}) = f^{(N)} (\widetilde{\mathbf{X}})$, i.e.
\[ \sum_{\vec{q}\in Q^k} \frac{\prod_{(s,q)\in LHS(\syncset{\tau,\vec{q}})} \frac{X_{s,q}!}{(X_{s,q}-\kappa_{s}^{q} )!}}{\prod_{s\in LHS(\syncset{\tau})} \frac{X_{s}!}{(X_{s}-\kappa_{s} )!}} = 1\]
\end{proposition}
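The normalisation stated in Proposition \ref{prop:global_rates} can be checked numerically. The following Python sketch enumerates all assignments $\vec{q} \in Q^k$ for a hypothetical synchronisation set in which two agents share the same state $s$, and verifies that the rate fractions of Eq.~(\ref{eqn:globalRate}) sum to one; the function names and counts are illustrative only.

```python
from collections import Counter
from itertools import product
from math import prod

def falling(x, k):
    """Falling factorial x! / (x - k)!."""
    return prod(x - i for i in range(k))

def normalised_rate_fractions(lhs_states, Q, X):
    """Fractions f_{vec q} / f of the rate formula: for every assignment of DTA
    states to the agents on the left-hand side of the synchronisation set, the
    ratio of ordered tuples in the product space to those in the base space."""
    # Denominator: ordered tuples formed from the projected counts X_s.
    denom = prod(falling(sum(X[s, q] for q in Q), k)
                 for s, k in Counter(lhs_states).items())
    fractions = {}
    for qs in product(Q, repeat=len(lhs_states)):
        multi = Counter(zip(lhs_states, qs))      # multiplicities kappa_s^q
        fractions[qs] = prod(falling(X[sq], k) for sq, k in multi.items()) / denom
    return fractions

# Two agents in state 's' synchronise (kappa_s = 2); DTA states Q = {0, 1}.
X = {('s', 0): 3, ('s', 1): 4}                    # hypothetical counts X_{s,q}
fr = normalised_rate_fractions(['s', 's'], [0, 1], X)
print(sum(fr.values()))  # by the proposition, the fractions sum to 1
```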
The discussion above is encapsulated into the following:
\begin{definition}[Population model associated with a local property]
\label{def:popModelProp}
The population model associated with the local property $\dta$ is the sequence $\boldsymbol\pop^{(N)} = (\mathcal{X}_{I_{1}}^{(N)}, \ldots, \mathcal{X}_{I_{k}}^{(N)})$. The elements $\mathcal{X}_{I_{j}}^{(N)} = (\aca{j}, \mathcal{T}_j^{(N)})$ are such that $\aca{j}$ is the $j$-th element of the agent class associated with $\dta$ and $\mathcal{T}_j^{(N)}$ is the set of global transitions of the form $\tau_i^j = (\syncset{i}^j, f_{j,i}^{(N)})$, as defined above.\footnote{Initial conditions of population models in $\boldsymbol\pop^{(N)}$ are dropped, as they are not required in the following. The initial condition at time zero is obtained from that of $\mathcal{X}^{(N)}$ by letting $(x_0)_{s,q_0} = (x_0)_s$, where $q_0$ is the initial state of $\dta$ and $s\in S$.}
\end{definition}
\subsection{Model Checking Collective Path Properties}
Consider a population model $\mathcal{X}^{(N)}$, for a fixed population size $N$, and a global path property $\gp{\gprop{\dta(T)}{a}{b}}{}{\bowtie p}$. This requires us to compute the probability $\gp{\gprop{\dta(T)}{a}{b}}{}{}$ that, at time $T$, the fraction of agents satisfying the local specification $\dta$ is contained in $[a,b]$. We will achieve this by exploiting the construction of Section \ref{sec:collectivesynch}, which yields a sequence of population models $\boldsymbol\pop^{(N)} = (\mathcal{X}_{I_{1}}^{(N)}, \ldots, \mathcal{X}_{I_{k}}^{(N)})$, synchronising local agents with the sequence of deterministic automata associated with $\dta$. In this construction we identified a sequence of times $0=t_0,t_1,\ldots,t_{k}=T$ such that, within each interval $I_j = [t_{j-1},t_j]$, the satisfaction of the clock constraints does not change.
Therefore, in order to compute $\gp{\gprop{\dta(T)}{a}{b}}{}{}$, we can rely on \emph{transient analysis} algorithms for CTMCs \cite{ctmcmc}: first we compute the probability distribution at time $t_1$ for the first population model $\mathcal{X}_{I_{1}}^{(N)}$; then we use this result as the initial distribution for the CTMC associated with the population model $\mathcal{X}_{I_{2}}^{(N)}$ and we compute its probability distribution at time $t_2$; and so on, until we obtain the probability distribution for $\mathcal{X}_{I_{k}}^{(N)}$ at time $t_k = T$.
At this point, we just need to observe that the desired probability can be obtained by summing the probability of all those states $\mathbf{X} \in \mathcal{S}^{(N)}$ satisfying $\sum_{s\in S, q\in F} \hat{X}_{s,q} \in [a,b]$. This works because of the absorbing property of the final states in the 1gDTA (condition 2 of Definition \ref{def:1gDTA}), which guarantees that whenever an agent enters a final state of the property, it will never leave it, hence the quantity $\sum_{s\in S, q\in F} \hat{X}_{s,q} (T)$ collects the number of agents that have reached a final state of the property by time $T$.
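The chained transient analysis can be sketched as follows in Python: each interval $I_j$ is handled by computing $\pi \mapsto \pi e^{Q_j \Delta t}$ via uniformisation, and the resulting distribution seeds the next interval. The two-state generators below are a toy stand-in for the synchronised population models, not the epidemic example.

```python
import numpy as np

def transient(pi, Q, t, tol=1e-12):
    """Distribution pi * exp(Q t) of a CTMC via uniformisation: Poisson-weighted
    powers of the uniformised jump matrix P = I + Q / lam."""
    n = Q.shape[0]
    lam = max(1e-12, max(-Q[i, i] for i in range(n)))
    P = np.eye(n) + Q / lam
    v = np.asarray(pi, dtype=float)
    w = np.exp(-lam * t)               # Poisson(lam * t) weight for k = 0
    acc, total, k = w * v, w, 0
    while total < 1.0 - tol and k < 10_000:
        k += 1
        v = v @ P
        w *= lam * t / k
        acc, total = acc + w * v, total + w
    return acc

def piecewise_transient(pi0, generators, breakpoints):
    """Chain the transient analyses of the models X_{I_1}, ..., X_{I_k}: the
    distribution at t_j is the initial distribution for interval I_{j+1}."""
    pi, t_prev = np.asarray(pi0, dtype=float), 0.0
    for Q, t in zip(generators, breakpoints):
        pi = transient(pi, Q, t - t_prev)
        t_prev = t
    return pi

# Hypothetical 2-state toy chain whose rate drops at t = 1 (a clock constraint).
Q1 = np.array([[-2.0, 2.0], [0.0, 0.0]])
Q2 = np.array([[-0.5, 0.5], [0.0, 0.0]])
print(piecewise_transient([1.0, 0.0], [Q1, Q2], [1.0, 3.0]))  # ~ [e^-3, 1 - e^-3]
```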
Unfortunately, this direct, numerical approach to model checking suffers from state space explosion, which is severe even for population sizes of a few hundred individuals.
For very large populations, when fluctuations are very small and the process behaves almost deterministically, we could rely on the fluid approximation and conclude that the probability of the path formula is approximately equal to one if and only if $\sum_{s\in S, q\in F} \Phi_{s,q}(T) \in (a,b)$, where $\boldsymbol\Phi$ is the solution of the fluid equation, and to zero otherwise (excluding the border cases in which the sum equals either $a$ or $b$).
Populations of the order of a few hundred individuals, however, are too small to invoke the fluid limit in this way, and fluctuations still play a major role. Hence, in order to apply a stochastic approximation to estimate the satisfaction probability, we need to rely on a technique giving information about the distribution of the process at a given time. It is here that the Central Limit Approximation enters the picture.
The idea is simply to compute the mean and covariance matrix of the approximating Gaussian process by solving the ODEs shown in Section \ref{sec:cla}. In doing this, we have to take proper care of the different population models associated with the time intervals $I_j$. Then, we integrate the Gaussian density of the approximating distribution at time $T$ to estimate the probability $\gp{\gprop{\dta(T)}{a}{b}}{}{}$. The justification of this approach lies in Theorem \ref{th:central}, which guarantees that the estimated probability is asymptotically correct; in practice, we obtain good approximations also for relatively small populations, of the order of hundreds of individuals.
\subsubsection*{Verification algorithm by Central Limit Approximation.}
The \textit{input} of the verification algorithm is:
\begin{itemize}
\item[$\bullet$] an agent class $\mathscr{A} = (S, E)$ and a population model $\mathcal{X}^{(N)} = (\mathscr{A}, \mathcal{T}^{(N)}, \mathbf{x}_0^{(N)})$;
\item[$\bullet$] a local property specified by a 1gDTA $\dta = (\mathscr{L}, \Gamma_{\pstsp}, Q, q_0, F, \rightarrow)$;
\item[$\bullet$] a global property $\gp{\gprop{\dta(T)}{a}{b}}{}{\bowtie p}$ with time horizon $T > 0$.
\end{itemize}
The \textit{steps} of the algorithm are:
\begin{enumerate}
\item \textbf{Construction of the population model associated with $\dta$}. Construct the \emph{normalised} population model $\mathcal{X}n^{(N)} = (\mathcal{X}n_{I_{1}}^{(N)}, \ldots, \mathcal{X}n_{I_{k}}^{(N)})$ associated with $\dta$ according to the recipe of Section \ref{sec:collectivesynch}. Then modify it by adding to its vector of counting variables $\hat{\mathbf{X}}^{(N)}$ a new variable $\hat{X}_{Final}$ that keeps track of the fraction of agents entering any of the final states $(s,q)$, $q\in F$.\footnote{Namely, this variable is increased by one for each transition entering a final state, and never decreased.}
\item \textbf{Integration of the central limit equations}.
For each $j=1,\ldots,k$, generate and solve numerically the system of ODEs that describes the fluid limit $\boldsymbol\Phi_j(t)$ and the Gaussian covariance $\mathbf{C}_j[\textbf{Z}(t)]$ for the population model $\mathcal{X}_{I_{j}}^{(N)}$ in the interval $I_j = [t_{j-1},t_j]$, with initial conditions $\boldsymbol\Phi_j(t_{j-1}) = \boldsymbol\Phi_{j-1}(t_{j-1})$ and $\mathbf{C}_j[\textbf{Z}(t_{j-1})]= \mathbf{C}_{j-1}[\textbf{Z}(t_{j-1})]$ for $j>1$, and $\boldsymbol\Phi_1(0) = \mathbf{x}_0$, $\mathbf{C}_1[\textbf{Z}(0)] = 0$.\\
Define the population mean as $\mathbf{E}^{(N)}[\mathbf{X}(t)] = N \boldsymbol\Phi_j(t)$ and the population covariance as $\mathbf{C}^{(N)}[\mathbf{X}(t)] = N \mathbf{C}_j[\textbf{Z}(t)]$, for $t\in I_j$. Finally, identify the component $E_{Final}^{(N)}[\mathbf{X}(t)]$ and the diagonal entry $C_{Final}^{(N)}[\mathbf{X}(t)]$ corresponding to $X_{Final}$.
\item \textbf{Computation of the probability}.
Let $g(x~|~\mu,\sigma^2)$ be the probability density of a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. Then, approximate $\gp{\gprop{\dta(T)}{a}{b}}{}{}$ by
\[ \tilde{P}^{(N)}_\dta(T) = \int_{N a}^{N b} g(x~|~E_{Final}^{(N)}[\mathbf{X}(T)],C_{Final}^{(N)}[\mathbf{X}(T)])\text{d}x,\]
and compare the result with the probability bound $\bowtie p$.
\end{enumerate}
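Step 3 of the algorithm reduces to a closed-form normal integral. A minimal Python sketch, with invented mean and variance for $X_{Final}$, is:

```python
from math import erf, sqrt

def cla_path_probability(mean_final, var_final, a, b, N):
    """Integrate the Gaussian density of X_Final over [N a, N b] using the
    closed-form normal CDF (step 3 of the verification algorithm)."""
    def cdf(x):
        return 0.5 * (1.0 + erf((x - mean_final) / sqrt(2.0 * var_final)))
    return cdf(N * b) - cdf(N * a)

# Hypothetical CLA output: E[X_Final] = 52, Var[X_Final] = 25 for N = 100.
p = cla_path_probability(52.0, 25.0, a=0.5, b=1.0, N=100)
print(round(p, 4))  # mass of N(52, 25) lying above the threshold N a = 50
```

The result is then compared with the probability bound $\bowtie p$ of the formula.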
The asymptotic correctness of this procedure is captured in the next theorem, whose proof is obtained by an application of Theorem \ref{th:central}, and reported in the appendix. We denote by ${P}^{(N)}_\dta(T)$ the exact value of $\gp{\gprop{\dta(T)}{a}{b}}{}{}$ and by $\tilde{P}^{(N)}_\dta(T)$ the approximate value computed by the Central Limit Approximation.
\begin{theorem}
\label{th:convergence}
Under the hypothesis of Theorem \ref{th:central}, it holds that \linebreak $\lim_{N\rightarrow\infty}\|{P}^{(N)}_\dta(T) - \tilde{P}^{(N)}_\dta(T)\| = 0 $. $
\blacksquare$\newline
\end{theorem}
\begin{remark}
\label{rem:FinalVar}
The introduction of the counting variable $X_{Final}$ is needed to correctly capture the variance in entering one of the final states of the property. Indeed, it holds that $X_{Final} = \sum_{s\in S,q\in F} X_{s,q}$, and in principle we could have applied the CLA to the model without $X_{Final}$, using the fact that the sum of Gaussian variables is Gaussian (with mean and variance given by the sum of means and variances of the addends). In doing this, though, we would have overestimated the variance of $X_{Final}$, because we would implicitly take into account the dynamics within the final components, adding their variance. The introduction of $X_{Final}$, instead, avoids this problem, as its variance depends only on the events that allow the agents to enter one of the final states.
\end{remark}
\begin{example}
\label{ex:CLA}
We now discuss the quality of the Central Limit Approximation for mesoscopic populations from an experimental perspective. We present a detailed investigation of the behaviour of the network epidemic model described in Figure \ref{SIRagent}.
We consider the two local properties expressed as 1gDTAs shown in Figure \ref{DTApropExa}. The first property $\dta_1$ has no clock constraints on the edges of the automaton, so the 1gDTA reduces to a DFA. The property is satisfied if an infected node is patched before being able to infect other nodes in the network, thus checking the effectiveness of the antivirus deployment strategy. The second property $\dta_2$, instead, is properly timed, and it is the same property that we used in the previous section for the single agent.
It is satisfied when a susceptible node is infected by an internal infection after the first $\tau$ units of time. The corresponding global properties that we consider are $\gp{\gpropg{\dta_1(T)}{\alpha_1}}{}{\bowtie p}$ and $\gp{\gpropg{\dta_2(T)}{\alpha_2}}{}{\bowtie p}$, where $\alpha_i$ is the fraction of agents that has to satisfy $\dta_i$.
In Fig.~\ref{fig:glob1} we report the final step of the synchronisation procedure for the first property $\dta_1$. The synchronisation procedure for $\dta_2$ was already reported in Example~\ref{ex:syncA_DTA}, Fig.~\ref{fig:sync}. Note that the state space of $\mathscr{P}$ is $S \times Q_1$, where $S =\{S, I, R\}$ is the state space of the SIR automaton and $Q_1= \{ q_b, q_0, q_f\}$ is the state space of the $\dta_1$ local property; hence, the global model $\mathscr{P}$ has $nm = 9$ counting variables ($n=|S|$, $m=|Q_1|$).
\begin{figure}\label{DTApropExa}
\end{figure}
\begin{figure}
\caption{(a) A DFA specification. (b) Synchronisation of the SIR automaton described in Figure \ref{SIRagent}.}
\label{fig:glob1}
\end{figure}
The population model associated with the local property $\dta_1$ is then
$\boldsymbol\pop^{(N)} = ( \aca, \mathcal{T}^{(N)})$, where $\mathcal{T}^{(N)}$ is the set of global transitions.
We modify it by adding a new variable $\hat{X}_{Final}$ that keeps track of the fraction of agents entering the final state $(I, q_f) = I_f$; this can happen only with the transition $I_0 \xrightarrow{inf} R_f$. Hence, whenever such a transition fires, we also increase the value of $\hat{X}_{Final}$ appropriately, by a straightforward modification of the update vector associated with the transition. Then, we integrate the central limit equations and identify the component $E_{Final}^{(N)}[\mathbf{X}(t)]$ and the diagonal entry $C_{Final}^{(N)}[\mathbf{X}(t)]$ corresponding to $X_{Final}$. A similar procedure is applied for the property $\dta_2$.
In Figure \ref{experimentl2g}, we show the approximate probability $\tilde{P}^{(N)}_{\dta_{i}}(T)$ of $\gp{\gprop{\dta_i(T)}{\alpha_i}{1}}{}{}$ as a function of the time horizon $T$, for different values of $N$ and a specific configuration of parameters ($\kappa_{\mathit{inf}}=0.05$, $\kappa_{patch_1}=0.02$, $\kappa_{loss}=0.01$, $\kappa_{ext}=0.05$, $\kappa_{patch_0}=0.001$, $\alpha_1=0.5$, $\alpha_2=0.2$). The CLA is compared with a statistical estimate, obtained from 10000 simulation runs.
As we can see, the accuracy in the transient phase increases rapidly with $N$, and the estimate is already very good for both properties at $N=100$. The same parameter configuration was used to measure the computational costs (in seconds) shown in Table \ref{table:speedupl2g}. By construction, the Central Limit Approximation (CLA) is independent of the population size $N$, and its computational cost is hundreds of times smaller than that of the statistical estimate (the Gillespie algorithm) for both properties. The values shown in Figure \ref{experimentl2g} can then be easily compared with the probability bound $\bowtie p$ to check the satisfaction of the property $\gp{\gpropg{\dta_i(T)}{\alpha_i}}{}{\bowtie p}$.
\begin{figure}\label{experimentl2g}
\end{figure}
\begin{table}[!t]
\begin{center}
\textbf{First Property}\\
\begin{small}
\begin{tabular}{|c||c|c|c|}
\hline
$N$ & SSAcost & CLAcost & Speedup \\
\hline
20 & 22.4114 & 0.0618 & 362.6440\\
\hline
50 & 23.3467 & 0.0618 & 377.7783\\
\hline
100 & 24.2689 & 0.0618 & 392.7006\\
\hline
200 & 26.1074 & 0.0618 & 442.4498\\
\hline
500 & 28.8754 & 0.0618 & 467.2395\\
\hline
\end{tabular}
\end{small}
\mathbf{v}space{0.2cm}
\textbf{Second Property}\\
\begin{small}
\begin{tabular}{|c||c|c|c|}
\hline
$N$ & SSAcost & CLAcost & Speedup\\
\hline
20 & 32.0598 & 0.3035 & 105.6336\\
\hline
50 & 29.0915 & 0.3035 & 95.8534\\
\hline
100 & 28.8651 & 0.3035 & 95.1074\\
\hline
200 & 33.9825 & 0.3035 & 111.9687\\
\hline
500 & 43.4737 & 0.3035 & 143.2412\\
\hline
\end{tabular}
\end{small}
\end{center}
\caption[Computational costs of the Central Limit Approximation in the validation of Local-to-Global Properties.]{Average computational costs (in seconds) of the Gillespie Algorithm (SSAcost) and the Central Limit Approximation (CLAcost), and the relative speedup (SSAcost/CLAcost). The data are shown as a function of the population size $N$ (by definition, the CLA is independent of $N$).}
\label{table:speedupl2g}
\end{table}
Furthermore, in order to check the quality of the approximation more extensively, also as a function of the system parameters, we ran the following experiment. We considered five different values of $N$ ($N=20,50,100,200,500$). For each of these values, we randomly chose 20 different combinations of parameter values, sampling uniformly from: $\kappa_{\mathit{inf}} \in [0.05, 5]$, $\kappa_{patch_1} \in [0.02, 2]$, $\kappa_{loss} \in [0.01, 1]$, $\kappa_{ext} \in [0.05, 5]$, $\kappa_{patch_0} \in [0.001, 0.1]$, $\alpha_1 \in [0.1, 0.95]$, $\alpha_2 \in [0.1, 0.3]$.
For each parameter set, we compared the CLA of the probability of each global property with a statistical estimate (from 5000 runs), measuring the error in a grid of 1000 equi-spaced time points. We then computed the maximum error and the average error. In Table \ref{table:errorl2g}, we report the mean and maximum values of these quantities over the 20 runs, for each considered value of $N$. We also report the error at the final time of the simulation, when the probability has stabilised to its limit value.\footnote{For this model, we can extend the analysis to steady state, as the fluid limit has a unique, globally attracting steady state. This is not possible in general, cf. \cite{tutorial}.} It can be seen that both the average and the maximum errors decrease with $N$, as expected, and are already quite small for $N=100$ (for the first property, the maximum difference in the path probability for all runs is of the order of 0.06, while the average error is 0.003). For $N=500$, the CLA is practically indistinguishable from the (estimated) true probability. For the second property, the errors are slightly worse, but still reasonably small.
\begin{table}[!t]
\begin{center}
\textbf{First Property}
\begin{small}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
$N$ & MaxEr & $\mathbb{E}$[MaxEr] & Max$\mathbb{E}$[Er] & $\mathbb{E}$[$\mathbb{E}$[Er]] & MaxEr($T$) & $\mathbb{E}$[Er($T$)] \\
\hline
20 & 0.1336 & 0.0420 & 0.0491 & 0.0094 & 0.0442 & 0.0037 \\
\hline
50 & 0.0866 & 0.0366 & 0.0631 & 0.0067 & 0.0128 & 0.0018 \\
\hline
100 & 0.0611 & 0.0266 & 0.0249 & 0.0030 & 0.0307 & 0.0017 \\
\hline
200 & 0.0504 & 0.0191 & 0.0055 & 0.0003 & 0.0033 & 0.0002 \\
\hline
500 & 0.0336 & 0.0120 & 0.0024 & 0.0003 & 0.0002 & 9.5e-6 \\
\hline
\end{tabular}
\end{small}
\mathbf{v}space{0.2cm}
\textbf{Second Property}
\begin{small}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
$N$ & MaxEr & $\mathbb{E}$[MaxEr] & Max$\mathbb{E}$[Er] & $\mathbb{E}$[$\mathbb{E}$[Er]] & MaxEr($T$) & $\mathbb{E}$[Er($T$)] \\
\hline
20 & 0.2478 & 0.1173 & 0.1552 & 0.0450 & 0.1662 & 0.0448 \\
\hline
50 & 0.2216 & 0.0767 & 0.1233 & 0.0340 & 0.1337 & 0.0361 \\
\hline
100 & 0.1380 & 0.0620 & 0.0887 & 0.0216 & 0.0979 & 0.0208 \\
\hline
200 & 0.1365 & 0.0538 & 0.0716 & 0.0053 & 0.0779 & 0.0162 \\
\hline
500 & 0.1187 & 0.0398 & 0.0585 & 0.0100 & 0.0725 & 0.0108 \\
\hline
\end{tabular}
\end{small}
\end{center}
\caption[Errors obtained by the Central Limit Approximation in the validation of Local-to-Global Properties.]{Errors obtained by the Central Limit Approximation in the validation of Local-to-Global Properties. Maximum and mean of the maximum error (MaxEr, $\mathbb{E}$[MaxEr]) for each parameter configuration; maximum and mean of the average error with respect to time (Max$\mathbb{E}$[Er]), $\mathbb{E}$[$\mathbb{E}$[Er]]) for each parameter configuration; maximum and average error at the final time horizon $T$ (MaxEr($T$), $\mathbb{E}$[Er($T$)] ) for each parameter configuration. Data is shown as a function of the network size $N$. }
\label{table:errorl2g}
\end{table}
Finally, we considered the problem of understanding which aspects most influence the error. To this end, we regressed the observed error against the following features: the probability value estimated by the CLA, the error in the predicted average and variance of $X_{Final}$ (between the CLA and the statistical estimates), and statistical estimates of the mean, variance, skewness and kurtosis of $X_{Final}$. We used Gaussian Process regression with Automatic Relevance Determination (GP-ARD, \cite{gp}), which performs a regularised regression searching for the best fit in an infinite dimensional subspace of continuous functions, and allowed us to identify the most relevant features by learning the hyperparameters of the kernel function. We used a squared exponential kernel, a quadratic kernel, and a combination of the two, with a training set of 500 points selected randomly from the experiments performed. The mean prediction error on a test set of another 500 points (independently of $N$) is around 0.015 for all the considered kernels. Furthermore, GP-ARD selected the quadratic kernel as most relevant and, in particular, the following two features: the estimated probability and the error in the mean of $X_{Final}$. This suggests that moment closure techniques improving the prediction of the average could reduce the error of the method.
\end{example}
\subsubsection{Finite-Size Threshold Correction}
Results obtained by the CLA can be further improved for small values of $N$ by introducing a correction on the thresholds $a$ and $b$ of a property $\gp{\gprop{\dta(T)}{a}{b}}{}{\bowtie p}$, taking into account the discrepancy between the discrete nature of population counts and their continuous approximation. To explain the correction, we illustrate it on a property of the form $\gp{\gpropg{\dta(T)}{\alpha}}{}{\bowtie p}$. In the algorithm presented above, the CLA works by integrating the Gaussian approximation of the variable $X_{Final}$, as computed by the CLA, from $\alpha N$ to infinity. For small $N$, however, this neglects the discrete nature of the state space. Suppose we would like to compute the probability of $X_{Final} = i$. Using the Gaussian approximation, we would always obtain zero, unless we integrate in a region around $i$. The obvious candidate is $[i-\frac{1}{2},i+\frac{1}{2}]$, which corresponds to a partition of the interval $[0,N]$ into subintervals of the form $[i-\frac{1}{2},i+\frac{1}{2}]$.\footnote{The extremes $0$ and $N$ have to be treated in a special way: $(-\infty,\frac{1}{2}]$ for $0$ and $[N-\frac{1}{2},\infty)$ for $N$.}
Following this line of reasoning, instead of integrating the Gaussian approximation for $X_{final}$ from $\alpha N$, we should start from $j-\frac{1}{2}$, where $j$ is the smallest integer greater than or equal to $\alpha N$, i.e. $j=\lceil \alpha N \rceil$. Note that $j$ is the smallest value that $X_{final}$ can take to satisfy the property, when verifying it in the discrete stochastic model. Similarly, when dealing with properties of the form $\gp{\gpropl{\dta(T)}{\alpha}}{}{\bowtie p}$, we would need to integrate up to $\lfloor \alpha N \rfloor + \frac{1}{2}$, combining the two corrections when dealing with threshold intervals $[a,b]$. In several experimental tests, we observed that this simple correction considerably improves the approximation, and that its effect becomes less significant for large $N$.
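As a minimal sketch of the correction (the mean and standard deviation below are hypothetical stand-ins for the CLA output, not values from the paper), the corrected and uncorrected estimates of $P(X_{final} \geq \alpha N)$ differ only in the lower limit of integration:

```python
from math import ceil, erf, sqrt

def gaussian_cdf(x, mu, sigma):
    """Cumulative distribution function of Normal(mu, sigma^2) at x."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def prob_at_least(alpha, N, mu, sigma, corrected=True):
    """Approximate P(X_final >= alpha * N) under the Gaussian approximation
    of X_final.  With the finite-size correction, integration starts at
    ceil(alpha * N) - 1/2 rather than at alpha * N itself."""
    lower = ceil(alpha * N) - 0.5 if corrected else alpha * N
    return 1.0 - gaussian_cdf(lower, mu, sigma)

# Hypothetical CLA output for N = 20, alpha = 0.5 (mean 9.2, std 2.1):
p_plain = prob_at_least(0.5, 20, 9.2, 2.1, corrected=False)  # integrate from 10
p_corr = prob_at_least(0.5, 20, 9.2, 2.1)                    # integrate from 9.5
# The correction recovers the mass between 9.5 and 10 that is otherwise lost.
```

The corrected estimate is always at least as large as the uncorrected one for lower-threshold properties, since the integration region is enlarged by half a unit.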
\begin{example}
\label{ex:alphacorrect}
In Figure~\ref{fig:ln_cor} we see the correction at work for $N=20$ and the first property of Example \ref{ex:CLA}, in which $\alpha = 0.5$, hence $\lceil \alpha N \rceil = 10$. We can see what happens if we integrate from $9.5$ instead of $10$: integrating from $10$, some probability mass is lost, and the CLA under-approximates the true solution. The correction allows us to recover some of this lost mass, considerably improving the quality of the approximation.
\begin{figure}
\caption{Comparison of a statistical estimate (using the Gillespie algorithm, SSA), the Central Limit Approximation (CLA), and the CLA with the finite-size threshold correction (CLAc), for the first property of Example \ref{ex:CLA}.}
\label{fig:ln_cor}
\end{figure}
\end{example}
\subsubsection{Verification Algorithm by Higher Order Approximations} The method presented above relies on the central limit approximation, hence it works well when the latter gives an accurate description of the dynamics at time $T$. In particular, if the distribution of $X^{(N)}_{final}(T)$ is skewed or deviates significantly from a Gaussian, or if the fluid approximation gives a poor estimate of the mean, then this approach will give poor results.
One way to improve the accuracy of the central limit approximation is to use higher order approximations, either a higher order system size expansion or moment closure techniques. In order to use these techniques to approximately verify our property, however, we need to know how the quantity $X^{(N)}_{final}$ is distributed at time $T$. The Central Limit Approximation is quite special in this respect: it entails that the distribution of $X^{(N)}_{final}$ is Gaussian, hence mean and covariance suffice to characterise it. Higher-order approximations, instead, provide no closed form for the distribution of $X^{(N)}_{final}$; they only give access to information about its moments. Unfortunately, even the knowledge of all moments is not enough to uniquely identify a distribution.
To tackle this problem and construct a plausible probability density function for $X^{(N)}_{final}$, we need to apply a moment reconstruction technique, which takes a finite number of moments of a distribution and produces a plausible approximation. In this work, we leverage an information theoretic \textit{moment-reconstruction} technique based on the \textit{maximum entropy principle} \cite{abramov2010,andreychenko2015,chapter2015}, which we already exploited in \cite{epew}, and which is quite commonly used in systems biology and in population models \cite{andreychenko2015}.
More specifically, the idea is to find the distribution $p(x)$ that maximises the entropy
\[ p = \text{argmax}_q H[q], \qquad H[q] = -\int q(x)\log q(x)\, dx, \]
subject to the moment matching constraints $\mathbb{E}_q[x^k] = \int x^k q(x)\,dx = \mu_k$, $k=1,\ldots, m$, where $\mu_k$ are given non-centred moments, together with $\int q(x)\,dx =1$ and $q(x) \geq 0$.
By introducing Lagrange multipliers and applying the Kuhn-Tucker theorem \cite{berger_maximum_1996}, one can show that the solution to the distribution reconstruction problem takes the form
\[ p(x) = \frac{1}{Z} \exp \left( -\sum\limits_{k=1}^{m} \lambda_k x^k \right), \]
where $Z$ is the partition function, i.e. the normalisation constant making $p$ a distribution, and the multipliers $\lambda_k$ are obtained by numerically minimising the dual formulation of the optimisation problem, namely the convex function
\[ \Psi(\lambda)=\ln Z + \sum\limits_{k=1}^{m} \lambda_k \mu_{k}. \]
More details can be found in \cite{abramov2010,andreychenko2015,chapter2015}.
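As an illustrative sketch (not the implementation used for the experiments in this paper), the dual minimisation can be carried out with plain gradient descent on a truncated grid: the gradient of $\Psi$ with respect to $\lambda_k$ is $\mu_k - \mathbb{E}_p[x^k]$, so the fixed point matches the prescribed moments. Reconstructing from the first two moments of a standard Gaussian recovers, as expected for the order-two case, the Gaussian itself:

```python
import numpy as np

def max_entropy_density(moments, grid, lr=0.02, iters=10000):
    """Reconstruct p(x) = exp(-sum_k lambda_k x^k) / Z on a uniform `grid`
    from the non-centred moments mu_1..mu_m, by gradient descent on the
    convex dual  Psi(lambda) = ln Z + sum_k lambda_k mu_k."""
    m = len(moments)
    mu = np.asarray(moments, dtype=float)
    dx = grid[1] - grid[0]
    powers = np.vstack([grid ** (k + 1) for k in range(m)])  # rows: x^1..x^m
    lam = np.full(m, 0.1)  # small positive start keeps the density bounded
    for _ in range(iters):
        unnorm = np.exp(-(lam @ powers))
        p = unnorm / (unnorm.sum() * dx)         # normalise: divide by Z
        model = (powers * p).sum(axis=1) * dx    # E_p[x^k], k = 1..m
        lam -= lr * (mu - model)                 # grad Psi_k = mu_k - E_p[x^k]
    return p

# Reconstruction from two moments (mu_1 = 0, mu_2 = 1):
grid = np.linspace(-6.0, 6.0, 1201)
p = max_entropy_density([0.0, 1.0], grid)
```

In practice a Newton-type method on the convex dual converges much faster than this plain gradient descent, but the sketch shows the structure of the computation.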
Hence, to improve the estimation of ${P}^{(N)}_\dta(T)$ given in the previous section, we compute moments up to order $m$ by solving moment closure or higher order system size expansion equations, and then apply the maximum entropy moment reconstruction discussed above. Typically, $m$ is not very large, usually ranging between 2 and 8, with a typical value of 4 (using moment closure equations truncated at order 5, to have a more accurate estimate of the fourth order moment). This is also due to the fact that high order moment equations tend to be stiff and difficult to integrate numerically \cite{schnoerr2016,andreychenko2015}. The drawback of this approach is that it requires the solution of a multidimensional optimization problem, followed by the numerical integration of the resulting function $p(x)$ over the range $[a,b]$ specified by the property.
It is worth noting that the maximum entropy solution for the first two moments only is a Gaussian distribution. Hence, stopping at order two gives a fast way to estimate the probability ${P}^{(N)}_\dta(T)$, using corrected mean and variance in place of those of the linear noise approximation. If we use the system size expansion correction, we still retain convergence as the population size diverges. The behaviour of other moment closure techniques for large populations, instead, is less clear \cite{schnoerr2014}.
\begin{example}
\label{ex:highorderPop}
Let us consider again the untimed property of Example~\ref{ex:CLA} with the same parameters, reported in the caption of the figure. For moment closure, we have considered a low dispersion closure of order 4, hence we have set to zero all the moments of order greater than or equal to 5.
In Figure \ref{fig:highorderMC_IOS}, we compare the results obtained by the CLA and the statistical estimates from 10000 runs of the Gillespie algorithm (SSA) with the probabilities estimated by the IOS and the MC, for $N=20$, without (left) and with (right) the finite-size correction of the thresholds.
In this setting, the performance of the three types of approximation (CLA, IOS and MC) is comparable; IOS and MC show a slight improvement over CLA. This is best seen in Figure \ref{fig:highorderMC_IOS} (left), where we did not use the finite-size correction of the thresholds, hence the curves are more clearly separated.

In Figure \ref{fig:ios}, instead, we compare the results obtained by the CLA and the statistical estimates from 10000 runs (SSA) with the probabilities estimated by EMRE and IOS, for a different parameter set, population size $N=20$, and the finite-size correction of the thresholds. We can see that in this case the CLA is not very accurate, while EMRE and IOS considerably improve the estimate. In this figure we have not reported the results of MC with maximum entropy reconstruction, due to numerical instabilities in the optimization phase. This is a known issue with the maximum entropy method, and a more careful implementation of the optimization is needed to circumvent such effects, a task which is beyond the scope of this paper.
\begin{figure}
\caption{Comparison of the CLA, the statistical estimate (SSA), the System Size Expansion (IOS), and Moment Closure (MC), for $N=20$, without (left) and with (right) the finite-size correction of the thresholds.}
\label{fig:highorderMC_IOS}
\end{figure}
\begin{figure}
\caption{Comparison of the results obtained by the CLA, the statistical estimate (SSA), the Effective Mesoscopic Rate Equation (EMRE), and the System Size Expansion (IOS), for $N=20$ and a different set of parameters (see text).}
\label{fig:ios}
\end{figure}
\end{example}
As we have seen in the previous example, the use of higher-order corrections improves the quality of the estimates. However, this comes at an increased computational cost, which scales as $O(n^k)$, where $n$ is the number of different local states of the agents and $k$ is the order of the higher order correction. Note that this cost is still independent of the population size $N$.
In the future, we plan to stress test the three model checking procedures on more complex properties, to better understand their relative performance, the quality of the estimates, and their behaviour with respect to the population size $N$. In particular, we want to investigate scenarios where fluid and central limit approximations are known to perform poorly, such as systems exhibiting multi-stable behaviour. In these cases, we expect higher-order corrections to bring an even more evident gain in accuracy.
\subsection{The Model Checking Algorithm for Collective Properties}
We turn now to discuss how to verify the other collective properties of Definition \ref{def:globalProp}, starting from state properties $\gp{\gprop{\Phi}{a}{b}}{}{\bowtie p}$.
These are fairly simple to verify, relying on the model checking algorithm for CSL-TA for individual agents discussed in Section \ref{sec:checkCSLTA}. Essentially, we first run this algorithm and check whether the CSL-TA formula $\Phi$ is satisfied in each state $s\in S$ of an agent class $\mathscr{A}$ at a given initial time $t_0$. Let us call $S(\Phi,t_0) = \{s\in S~|~s,t_0\models \Phi \}$ the set of states satisfying it. Then, checking the collective property requires us to compute the probability $P^{(N)}_{\Phi\in [a,b]}(t_0)$ with which the variable $X^{(N)}_{\Phi}(t_0) = \sum_{s\in S(\Phi,t_0)} X^{(N)}_s(t_0)$
belongs to $[a,b]$.
This is difficult to do exactly, but we can rely on the same approximations introduced above for the path probability. More specifically, we consider the basic population model (in this case, there is no need to perform the product construction at the global level, as this has already been taken care of while checking the local properties), and compute its moments either by the linear noise approximation or by higher order moment closure techniques. Given the moments of the variables $X^{(N)}_s$, we can easily obtain the moments of $X^{(N)}_{\Phi}$,\footnote{For the $k$-th non-centred moment, expand the expression $(\sum_{s\in S(\Phi,t_0)} X^{(N)}_s(t_0) )^k$ and use the values of the moments up to order $k$ of the $X_s$ variables.} from which we can approximate the distribution of $X^{(N)}_{\Phi}$, either by a Gaussian (for linear noise) or by maximum entropy reconstruction. Finally, we numerically integrate this distribution over $[a,b]$ to obtain an approximation $\tilde{P}^{(N)}_{\Phi\in [a,b]}(t_0)$ of the probability $P^{(N)}_{\Phi\in [a,b]}(t_0)$. Verification of $\gp{\gprop{\Phi}{a}{b}}{}{\bowtie p}$ is concluded by comparing this value with the threshold $p$.
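For the linear noise case, the computation above amounts to summing entries of the mean vector and covariance matrix and integrating a one-dimensional Gaussian. The following sketch illustrates this; the state names and moment values are hypothetical, standing in for the output of a linear noise analysis:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """CDF of Normal(mu, sigma^2) at x."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def prob_count_in_interval(sat_states, mean, cov, a, b):
    """Gaussian (linear-noise) approximation of
    P( sum_{s in sat_states} X_s in [a, b] ): a sum of jointly Gaussian
    counts is Gaussian, with mean and variance obtained by summing the
    relevant entries of the mean vector and covariance matrix."""
    mu = sum(mean[s] for s in sat_states)
    var = sum(cov[(s, t)] for s in sat_states for t in sat_states)
    return norm_cdf(b, mu, sqrt(var)) - norm_cdf(a, mu, sqrt(var))

# Hypothetical linear-noise output for three local states S, I, R:
mean = {"S": 8.0, "I": 7.0, "R": 5.0}
cov = {("S", "S"): 4.0, ("I", "I"): 3.0, ("R", "R"): 2.0,
       ("S", "I"): -1.0, ("I", "S"): -1.0,
       ("S", "R"): 0.0, ("R", "S"): 0.0,
       ("I", "R"): -0.5, ("R", "I"): -0.5}
# Probability that the number of agents in a satisfying state lies in [10, 15]:
p = prob_count_in_interval(["I", "R"], mean, cov, a=10.0, b=15.0)
```

With maximum entropy reconstruction, the Gaussian integration step is replaced by numerical integration of the reconstructed density over $[a,b]$.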
By Theorem \ref{th:convergenceIndividual}, for $N$ large enough the set $S(\Phi,t_0)$ will contain all and only the states satisfying the local specification $\Phi$. Furthermore, if we use the linear noise approximation, we can rely on Theorem \ref{th:central} for the convergence of the distribution of each $X_s$ to its linear noise approximation. The combination of these two results allows us to show that
\begin{theorem}
\label{th:convergence_state}
Under the hypothesis of Theorems \ref{th:central} and \ref{th:convergenceIndividual}, it holds that \linebreak $\lim_{N\rightarrow\infty}\|P^{(N)}_{\Phi\in [a,b]}(t_0) - \tilde{P}^{(N)}_{\Phi\in [a,b]}(t_0)\| = 0 $. $
\blacksquare$\newline
\end{theorem}
This concludes the presentation of the model checking algorithm for collective properties, as Boolean operators are straightforward.
\section{Experimental Analysis}
\label{sec:exper}
\subsection{Results of the Central Limit Approximation} \label{subsec:l2g:results}
We now discuss the quality of our method from an experimental perspective. We present a detailed investigation of the behaviour of the running example, the network epidemics model used throughout the paper. We consider the two local properties expressed as 1gDTAs shown in Figure \ref{DTAprop}. As said before, the first property $\dta_1$ has no clock constraints on the edges of the automaton, hence the 1gDTA reduces to a DFA. The property is satisfied if an infected node is patched before being able to infect other nodes in the network, thus checking the effectiveness of the antivirus deployment strategy. The second property $\dta_2$, instead, is properly timed.
It is satisfied when a susceptible node is infected by an internal infection after the first $\tau$ units of time. The corresponding global properties that we consider are $\gp{\gpropg{\dta_1(T)}{\alpha_1}}{}{}$ and $\gp{\gpropg{\dta_2(T)}{\alpha_2}}{}{}$.
In Table \ref{table:speedupl2g}, we show the computational costs (in seconds) of computing the probability of the two global properties as a function of the time horizon $T$, for different values of $N$ and a specific configuration of parameters ($\kappa_{\mathit{inf}}=0.05$, $\kappa_{patch_1}=0.02$, $\kappa_{loss}=0.01$, $\kappa_{ext}=0.05$, $\kappa_{patch_0}=0.001$, $\alpha_1=0.5$, $\alpha_2=0.2$). The CLA is compared with a statistical estimate, obtained from 10000 simulation runs. As we have seen, by definition the Central Limit Approximation is independent of the population size $N$, and its computational cost is up to hundreds of times smaller than that of the statistical estimate (the Gillespie algorithm) for both properties.
\begin{figure}\label{experimentl2g}
\end{figure}
\begin{table}[!t]
\begin{center}
\textbf{First Property}\\
\begin{small}
\begin{tabular}{|c||c|c|c|}
\hline
$N$ & SSAcost & CLAcost & Speedup \\
\hline
20 & 22.4114 & 0.0618 & 362.6440\\
\hline
50 & 23.3467 & 0.0618 & 377.7783\\
\hline
100 & 24.2689 & 0.0618 & 392.7006\\
\hline
200 & 26.1074 & 0.0618 & 442.4498\\
\hline
500 & 28.8754 & 0.0618 & 467.2395\\
\hline
\end{tabular}
\end{small}
\vspace{0.2cm}
\textbf{Second Property}\\
\begin{small}
\begin{tabular}{|c||c|c|c|}
\hline
$N$ & SSAcost & CLAcost & Speedup\\
\hline
20 & 32.0598 & 0.3035 & 105.6336\\
\hline
50 & 29.0915 & 0.3035 & 95.8534\\
\hline
100 & 28.8651 & 0.3035 & 95.1074\\
\hline
200 & 33.9825 & 0.3035 & 111.9687\\
\hline
500 & 43.4737 & 0.3035 & 143.2412\\
\hline
\end{tabular}
\end{small}
\end{center}
\caption[Computational costs of the Central Limit Approximation in the validation of Local-to-Global Properties.]{Average computational costs (in seconds) of the Gillespie Algorithm (SSAcost) and the Central Limit Approximation (CLAcost), and the relative speedup (SSAcost/CLAcost). The data are shown as a function of the population size $N$ (by definition, the cost of the CLA is independent of $N$).}
\label{table:speedupl2g}
\end{table}
To check the quality of the approximation extensively, also as a function of the system parameters, we ran the following experiment. We considered five different values of $N$ ($N=20,50,100,200,500$). For each of these values, we randomly chose 20 different combinations of parameter values, sampling uniformly from: $\kappa_{\mathit{inf}} \in [0.05, 5]$, $\kappa_{patch_1} \in [0.02, 2]$, $\kappa_{loss} \in [0.01, 1]$, $\kappa_{ext} \in [0.05, 5]$, $\kappa_{patch_0} \in [0.001, 0.1]$, $\alpha_1 \in [0.1, 0.95]$, $\alpha_2 \in [0.1, 0.3]$.
For each parameter set, we compared the CLA of the probability of each global property with a statistical estimate (from 5000 runs), measuring the error on a grid of 1000 equi-spaced time points. We then computed the maximum error and the average error. In Table \ref{table:errorl2g}, we report the mean and maximum values of these quantities over the 20 runs, for each considered value of $N$. We also report the error at the final time of the simulation, when the probability has stabilised to its limit value.\footnote{For this model, we can extend the analysis to steady state, as the fluid limit has a unique, globally attracting steady state. This is not possible in general, cf. \cite{tutorial}.} It can be seen that both the average and the maximum errors decrease with $N$, as expected, and are already quite small for $N=100$ (for the first property, the maximum difference in the path probability over all runs is of the order of 0.06, while the average error is 0.003). For $N=500$, the CLA is practically indistinguishable from the (estimated) true probability. For the second property, the errors are slightly worse, but still reasonably small.
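The error metrics reported in the table can be computed as in the following sketch, with toy curves standing in for the actual CLA and SSA probability estimates:

```python
import numpy as np

def error_metrics(p_approx, p_ref):
    """Maximum error, time-averaged error, and final-time error between an
    approximate probability curve and a reference curve, both sampled on
    the same grid of time points."""
    err = np.abs(np.asarray(p_approx) - np.asarray(p_ref))
    return err.max(), err.mean(), err[-1]   # MaxEr, E[Er], Er(T)

# Toy curves on a grid of 1000 equi-spaced time points: the "CLA" curve is
# the "SSA" curve perturbed by a small oscillation.
t = np.linspace(0.0, 1.0, 1000)
p_ssa = 1.0 - np.exp(-3.0 * t)
p_cla = p_ssa + 0.02 * np.sin(6.28 * t)
max_er, mean_er, final_er = error_metrics(p_cla, p_ssa)
```

In the experiments, these three quantities are then aggregated (maximum and mean) over the 20 random parameter configurations for each value of $N$.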
\begin{table}[!t]
\begin{center}
\textbf{First Property}
\begin{small}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
$N$ & MaxEr & $\mathbb{E}$[MaxEr] & Max$\mathbb{E}$[Er] & $\mathbb{E}$[$\mathbb{E}$[Er]] & MaxEr($T$) & $\mathbb{E}$[Er($T$)] \\
\hline
20 & 0.1336 & 0.0420 & 0.0491 & 0.0094 & 0.0442 & 0.0037 \\
\hline
50 & 0.0866 & 0.0366 & 0.0631 & 0.0067 & 0.0128 & 0.0018 \\
\hline
100 & 0.0611 & 0.0266 & 0.0249 & 0.0030 & 0.0307 & 0.0017 \\
\hline
200 & 0.0504 & 0.0191 & 0.0055 & 0.0003 & 0.0033 & 0.0002 \\
\hline
500 & 0.0336 & 0.0120 & 0.0024 & 0.0003 & 0.0002 & 9.5e-6 \\
\hline
\end{tabular}
\end{small}
\vspace{0.2cm}
\textbf{Second Property}
\begin{small}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
$N$ & MaxEr & $\mathbb{E}$[MaxEr] & Max$\mathbb{E}$[Er] & $\mathbb{E}$[$\mathbb{E}$[Er]] & MaxEr($T$) & $\mathbb{E}$[Er($T$)] \\
\hline
20 & 0.2478 & 0.1173 & 0.1552 & 0.0450 & 0.1662 & 0.0448 \\
\hline
50 & 0.2216 & 0.0767 & 0.1233 & 0.0340 & 0.1337 & 0.0361 \\
\hline
100 & 0.1380 & 0.0620 & 0.0887 & 0.0216 & 0.0979 & 0.0208 \\
\hline
200 & 0.1365 & 0.0538 & 0.0716 & 0.0053 & 0.0779 & 0.0162 \\
\hline
500 & 0.1187 & 0.0398 & 0.0585 & 0.0100 & 0.0725 & 0.0108 \\
\hline
\end{tabular}
\end{small}
\end{center}
\caption[Errors obtained by the Central Limit Approximation in the validation of Local-to-Global Properties.]{Errors obtained by the Central Limit Approximation in the validation of Local-to-Global Properties. Maximum and mean of the maximum error (MaxEr, $\mathbb{E}$[MaxEr]) for each parameter configuration; maximum and mean of the average error with respect to time (Max$\mathbb{E}$[Er], $\mathbb{E}$[$\mathbb{E}$[Er]]) for each parameter configuration; maximum and average error at the final time horizon $T$ (MaxEr($T$), $\mathbb{E}$[Er($T$)]) for each parameter configuration. Data is shown as a function of the network size $N$.}
\label{table:errorl2g}
\end{table}
Finally, we considered the problem of understanding which aspects most influence the error. To this end, we regressed the observed error against the following features: the probability value estimated by the CLA, the error in the predicted average and variance of $X_{Final}$ (between the CLA and the statistical estimates), and the statistical estimates of the mean, variance, skewness and kurtosis of $X_{Final}$. We used Gaussian Process regression with Automatic Relevance Determination (GP-ARD, \cite{gp}), which performs a regularised regression, searching for the best fit in an infinite dimensional space of continuous functions, and allowed us to identify the most relevant features by learning the hyperparameters of the kernel function. We used a squared exponential kernel, a quadratic kernel, and a combination of the two, with a training set of 500 points selected randomly from the experiments performed. The mean prediction error on a test set of 500 further points (independently of $N$) is around 0.015 for all the considered kernels. Furthermore, GP-ARD selected as most relevant the quadratic kernel, and in particular the following two features: the estimated probability and the error in the mean of $X_{Final}$. This suggests that moment closure techniques improving the prediction of the average can possibly reduce the error of the method.
\subsection{Results of System Size Expansion and Moment Closure}\label{exp:sse}
\section{Conclusion}
\label{sec:conc}
In this paper, we presented a framework for fast and reliable approximate verification of certain classes of properties of population models. In particular, we considered properties of random individuals, referred to as local properties, expressed by the logic CSL-TA (an extension of CSL using Deterministic Timed Automata as temporal modalities), and their lifting to the collective level, computing the probability that the number of agents satisfying the local property meets a given threshold. In order to efficiently compute reachability probabilities, we relied on several stochastic approximations. For individual properties, we exploited the fluid approximation and fast simulation, thus extending fluid model checking \cite{fluidmc} to CSL-TA properties, and we also considered higher-order corrections, exploiting moment closure methods. For the collective properties, we extended the class of properties considered in \cite{qest} to nested CSL-TA specifications, leveraging the central limit approximation and higher order corrections combined with maximum entropy distribution reconstruction routines. For both classes of properties, we provided theoretical results guaranteeing convergence in the limit of infinite populations, and experimental evidence of the effectiveness of the method.
From a practical point of view, the approach we presented is computationally efficient, outperforming even statistical model checking, while being accurate already for populations of moderate size. Furthermore, its complexity depends only on the number of local states and transitions of a model, and not on the population level, and the convergence results guarantee that the error decreases as the population increases. Hence, this approach is very effective for medium and large populations, on the order of hundreds of individuals or more, precisely when exact and statistical methods start to suffer from a prohibitively large computational cost.
This work can be extended in a few directions. First, by providing a tool automating the steps required to check a property. Secondly, by considering a more general class of local properties, i.e. by removing the restriction that clocks cannot be reset. At the individual level, we can rely on the results of \cite{formats15}, building on top of the fluid approximation of population models with deterministic time delays \cite{qest12,hayden}. In \cite{formats15}, clock resets introduce deterministic delays in the model of an individual agent synchronised with a property, which is reflected in the fluid equations and the Kolmogorov equations for individuals becoming Delay Differential Equations.
The challenge with these Delay Differential Equations is their stiffness, which calls for effective numerical solution routines to make them usable in practice. Lifting to the collective level requires a central limit approximation, which can be crafted building on the results of \cite{hayden}. Moment closure for this class of models, instead, is still an open research problem.
Finally, probably the most challenging and rewarding direction to investigate is that of providing tight error bounds that could be used to assess when the approximation is good and when it may be questionable. This is challenging, as the known error bounds even for the fluid approximation tend to be over-conservative, being based on worst-case inequalities such as Gronwall's inequality \cite{kurtz}. A possible direction is to generalise and exploit the approach of \cite{bortolussi2013}.
\appendix
\section{Proofs}
\label{app:proofs}
In this appendix, we provide the proofs of the main results of the paper.
\noindent\textbf{Proposition~\ref{prop:individualRates}.}
The rate of the transition $(s, q) \xrightarrow{\loclab{s}} (s', q')$ of an individual agent due to the global transition $\tau$, given that the population model is in state $\mathbf{X}^{(N)}(t) = \mathbf{x}$, is
\[ g^{(N)}_{\tau}((s,q),(s',q')) = \frac{m_\tau}{x_s} f^{(N)}_{\tau}(\mathbf{x}).\]
\proof To obtain the expression for $g^{(N)}_{\tau}((s,q),(s',q'))$, we need to compute the probability that the tagged agent is one of the randomly chosen agents in state $s$ that are updated by the transition $\tau$, and multiply the global rate of a $\tau$ transition by it. As there are $x_s$ agents in state $s$, and $m_\tau$ of them are involved in the transition, this probability is readily computed as the fraction of subsets of $m_\tau$ elements of a set of $x_s$ elements that contain a fixed element. This number is
\[ \frac{\binom{x_s-1}{m_\tau-1}}{\binom{x_s}{m_\tau}} = \frac{m_\tau}{x_s}. \]
$
\blacksquare$\newline
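The counting argument above can be checked numerically; the following snippet (using the transition multiplicity $m_\tau$ and count $x_s$ from the proposition) verifies that the ratio of binomial coefficients indeed reduces to $m_\tau/x_s$:

```python
from math import comb

def tagged_agent_prob(x_s, m_tau):
    """Fraction of the m_tau-element subsets of an x_s-element set that
    contain one fixed element (the tagged agent)."""
    return comb(x_s - 1, m_tau - 1) / comb(x_s, m_tau)

# The ratio of binomial coefficients collapses to m_tau / x_s:
assert all(
    abs(tagged_agent_prob(x, m) - m / x) < 1e-12
    for x in range(1, 30)
    for m in range(1, x + 1)
)
```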
\noindent\textbf{Proposition~\ref{prop:global_rates}.}
With the definitions of Section \ref{sec:collectivesynch}, it holds that
$\sum_{\vec{q}\in Q^k} f_{\vec{q}}^{(N)} (\mathbf{X}) = f^{(N)} (\mathbf{X})$, i.e.
\[ \sum_{\vec{q}\in Q^k} \frac{\prod_{(s,q)\in LHS(\syncset{\tau,\vec{q}})} \frac{X_{s,q}!}{(X_{s,q}-\kappa_{s}^{q} )!}}{\prod_{s\in LHS(\syncset{\tau})} \frac{X_{s}!}{(X_{s}-\kappa_{s} )!}} = 1\]
\proof We start by providing a more detailed derivation of the formula for the rate $f_{\vec{q}}^{(N)} (\mathbf{X})$. As stated in the main text, we fix an ordering of the elements of $\syncset{\tau}$, and count how many ordered tuples we can construct in the aggregated state $\widetilde{\mathbf{X}} = (X_1, \ldots, X_n)$, where $X_s = \sum_{r = 1}^{m} X_{s,r}$. The $j$-th element of a tuple built in this way is an agent in state $s_j$, where $s_j$ is the left hand side of the $j$-th update rule in $\syncset{\tau}$. Now, if state $s$ appears $\kappa_{s}$ times in the lhs of a rule in $\syncset{\tau}$, then each time it appears we pick an agent of type $X_s$. The first time there are $X_s$ possible choices, the second time $X_s-1$, and so on. It follows that the contribution of agents in state $s$ to the number of $\syncset{\tau}$-tuples is $X_s(X_s-1)\cdots (X_s-\kappa_s +1) = \prod_{h< \kappa_s}(X_s-h) = \frac{X_{s}!}{(X_{s}-\kappa_{s} )!}$. Here we are implicitly assuming $X_s \geq \kappa_s$, and set the product to zero otherwise. To count the number of tuples, we then multiply these expressions for each state $s$ appearing in the lhs of a rule of $\syncset{\tau}$. What we get is the denominator $\prod_{s\in LHS(\syncset{\tau})} \frac{X_{s}!}{(X_{s}-\kappa_{s} )!}$. The numerator is computed similarly, considering only the rules in $\syncset{\tau,\vec{q}}$, for a fixed $\vec{q}$. \\
Now, to prove the formula, consider agents in the product model, with states $(s,q)$, and build $\syncset{\tau}$-tuples with them, ignoring the part of the state coming from the property, i.e. $q$. Each agent in state $s$ in such a tuple nonetheless also has a property state $q$ associated with it. If we enumerate these property states $q$ for all the agents in the tuple, we get a vector $\vec{q}$. In this way, we can assign each such tuple to one vector $\vec{q}$, hence partitioning the set of $\syncset{\tau}$-tuples built ignoring the property state into disjoint subsets, one for each $\vec{q}$. Clearly, if we count how many tuples of type $\vec{q}$ there are, and sum this number over all $\vec{q}$, we are counting the cardinality of the set of all $\syncset{\tau}$-tuples. By the discussion above, the number of tuples of type $\vec{q}$ is $\prod_{(s,q)\in LHS(\syncset{\tau,\vec{q}})} \frac{X_{s,q}!}{(X_{s,q}-\kappa_{s}^{q} )!}$, and the number of $\syncset{\tau}$-tuples is $\prod_{s\in LHS(\syncset{\tau})} \frac{X_{s}!}{(X_{s}-\kappa_{s} )!}$. Hence
\[ \sum_{\vec{q}\in Q^k} \prod_{(s,q)\in LHS(\syncset{\tau,\vec{q}})} \frac{X_{s,q}!}{(X_{s,q}-\kappa_{s}^{q} )!} = \prod_{s\in LHS(\syncset{\tau})} \frac{X_{s}!}{(X_{s}-\kappa_{s} )!}. \] $
\blacksquare$\newline
\noindent\textbf{Theorem~\ref{th:convergence}.}
Under the hypothesis of Theorem \ref{th:central}, it holds that \linebreak $\lim_{N\rightarrow\infty}\|{P}^{(N)}_\dta(T) - \tilde{P}^{(N)}_\dta(T)\| = 0 $.
\proof
Recall that, by Theorem \ref{th:central}, the sequence of random processes $\textbf{Z}^{(N)}(t)\ :=\ N^{\frac{1}{2}}\left(\hat{\mathbf{X}}^{(N)}(t) - \boldsymbol\Phi(t)\right)$ converges to the Gaussian random process $\textbf{Z}(t)$ obtained by the central limit approximation. Assume for the moment that the population model associated with the property $\dta$ is composed of a single model. It is easy to verify that the conditions to apply Theorem \ref{th:central} are satisfied (all rate functions of the modified population models are Lipschitz continuous). In particular, the initial conditions for $\textbf{Z}^{(N)}(t)$ and $\textbf{Z}(t)$ converge by definition.
As we are interested in the value of those processes at a fixed time $T>0$, let $\textbf{Z}^{(N)} = \textbf{Z}^{(N)}(T)$ and $\textbf{Z} = \textbf{Z}(T)$. Theorem \ref{th:central} implies that $\textbf{Z}^{(N)} \Rightarrow \textbf{Z}$ (weak convergence).
First of all, we transform the interval $[a,b]$ into an $N$-dependent interval $[a^{(N)},b^{(N)}]$, so that we can evaluate ${P}^{(N)}_\dta(T)$ as $\mathbb{P}\{Z_{Final}^{(N)}\in [a^{(N)},b^{(N)}]\}$ and $\tilde{P}^{(N)}_\dta(T)$ as $\mathbb{P}\{Z_{Final} \in [a^{(N)},b^{(N)}]\}$, where $Z_{Final}^{(N)}$ and $Z_{Final}$ are the marginal distributions of $\textbf{Z}^{(N)}$ and $\textbf{Z}$ on the coordinate corresponding to $X_{Final}$. By the definition of $\textbf{Z}^{(N)}$, it easily follows that $a^{(N)} = N^{\frac{1}{2}}\left(a - \boldsymbol\Phi_{Final}(T)\right)$ and $b^{(N)} = N^{\frac{1}{2}}\left(b - \boldsymbol\Phi_{Final}(T)\right)$.
Ideally, to prove the convergence of the probability values, we would like to invoke the Portmanteau theorem\footnote{See P. Billingsley. Convergence of Probability Measures, 2nd edition. Wiley, 1999.}, using the weak convergence of $\textbf{Z}^{(N)}$ to $\textbf{Z}$. However, this does not work here, as the sets for which we have to evaluate the probability depend on $N$. Hence, we need a slightly trickier argument.
By the triangular inequality, we have
\[
\begin{split}
\| \mathbb{P}\{Z_{Final}^{(N)}\in [a^{(N)},b^{(N)}]\} & - \mathbb{P}\{Z_{Final}\in [a^{(N)},b^{(N)}]\} \| \leq \\
& \underbrace{\| \mathbb{P}\{Z_{Final}^{(N)}\in [a^{(N)},b^{(N)}]\} - \mathbb{P}\{Z_{Final}^{(N)}\in [a^\infty,b^\infty]\} \|}_{(a)} +\\
& \underbrace{\| \mathbb{P}\{Z_{Final}\in [a^\infty,b^\infty]\} - \mathbb{P}\{Z_{Final}\in [a^{(N)},b^{(N)}]\} \|}_{(b)}
\end{split}
\]
where $[a^\infty,b^\infty]$ is the limit set to which $[a^{(N)},b^{(N)}]$ converges as $N$ goes to infinity. Clearly, $a^\infty = \lim_{N\rightarrow\infty} a^{(N)}$, and similarly for $b^\infty$. We have four cases, depending on the relative values of $a$ and $b$ with respect to $\boldsymbol\Phi_{Final}(T)$:
\begin{enumerate}
\item if $a,b > \boldsymbol\Phi_{Final}(T)$ or $a,b < \boldsymbol\Phi_{Final}(T)$, then $[a^\infty,b^\infty] = \emptyset$: in the first case both $a^\infty = +\infty$ and $b^\infty = +\infty$, and in the second case both are $-\infty$;
\item if $a< \boldsymbol\Phi_{Final}(T)$ and $b > \boldsymbol\Phi_{Final}(T)$, then $[a^\infty,b^\infty] = [-\infty,+\infty] = \mathbb{R}$;
\item if $a = \boldsymbol\Phi_{Final}(T)$ and $b > \boldsymbol\Phi_{Final}(T)$, then $[a^\infty,b^\infty] = [0,+\infty]$;
\item if $a< \boldsymbol\Phi_{Final}(T)$ and $b = \boldsymbol\Phi_{Final}(T)$, then $[a^\infty,b^\infty] = [-\infty,0]$.
\end{enumerate}
\end{enumerate}
The term (b) in the inequality above goes to zero, due to the convergence of $[a^{(N)},b^{(N)}]$ to $[a^\infty,b^\infty]$. To deal with term (a), instead, we can exploit the fact that, as $Z_{Final}^{(N)}\Rightarrow Z_{Final}$ and $\mathbb{R}$ is a Polish space, by the Prohorov theorem $Z_{Final}^{(N)}$ is uniformly tight; hence, for each $\varepsilon>0$ there is $k_\varepsilon>0$ such that, for all $N$, $\mathbb{P}\{Z_{Final}^{(N)} \in [-k_\varepsilon,k_\varepsilon]\}>1-\varepsilon$. We deal with the four cases above separately:
\begin{enumerate}
\item Fix $\varepsilon>0$ and let $N_0$ be such that, for $N\geq N_0$, $[a^{(N)},b^{(N)}]\cap [-k_\varepsilon,k_\varepsilon] = \emptyset$. It follows that $\mathbb{P}\{Z_{Final}^{(N)}\in [a^{(N)},b^{(N)}]\} < \varepsilon$. As $\mathbb{P}\{Z_{Final}^{(N)}\in [a^\infty,b^\infty]\} = 0$, the term (a) is less than $\varepsilon$, which implies that (a) goes to zero as $N$ goes to infinity.
\item Fix $\varepsilon>0$ and let $N_0$ be such that, for $N\geq N_0$, $[a^{(N)},b^{(N)}]\cap [-k_\varepsilon,k_\varepsilon] = [-k_\varepsilon,k_\varepsilon]$. As $\mathbb{P}\{Z_{Final}^{(N)}\in [a^\infty,b^\infty]\} = 1$, it follows that (a) is smaller than $\varepsilon$, hence it has limit 0.
\item Fix $\varepsilon>0$ and let $N_0$ be such that, for $N\geq N_0$, $[a^{(N)},b^{(N)}]\cap [-k_\varepsilon,k_\varepsilon] = [0,k_\varepsilon]$. By the monotonicity of the probability distributions, term (a) is smaller than $\mathbb{P}\{Z_{Final}^{(N)} > k_\varepsilon\}$, which is itself smaller than $\varepsilon$. Also in this case, it follows that (a) has limit 0.
\item This case is symmetric with respect to case 3.
\end{enumerate}
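The case analysis above can be illustrated numerically. The following is a minimal sketch (assuming, purely for illustration, a Bernoulli population whose rescaled final mean plays the role of $Z_{Final}^{(N)}$; all names are my own): in case 3, with $a$ equal to the fluid limit and $b$ above it, $[a^{(N)},b^{(N)}]$ converges to $[0,+\infty)$ and the probability tends to $1/2$.

```python
import numpy as np

# Illustration only: Bernoulli(0.3) individuals, Z^(N) = sqrt(N)(mean - p).
# Case 3 of the proof: a equals the limit p, b lies above it, so the
# rescaled interval [a^N, b^N] converges to [0, +inf).
rng = np.random.default_rng(0)
p, a, b = 0.3, 0.3, 0.5

for N in (100, 1000, 10000):
    means = rng.binomial(N, p, size=200_000) / N      # empirical means
    zN = np.sqrt(N) * (means - p)                     # rescaled deviations
    aN, bN = np.sqrt(N) * (a - p), np.sqrt(N) * (b - p)
    prob = np.mean((zN >= aN) & (zN <= bN))
    print(N, round(prob, 3))                          # tends to 1/2
```

The printed probabilities approach $\mathbb{P}\{Z\in[0,+\infty)\}=1/2$ as $N$ grows, as the theorem predicts.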
Putting it all together, we have shown that $\| \mathbb{P}\{Z_{Final}^{(N)}\in [a^{(N)},b^{(N)}]\} - \mathbb{P}\{Z_{Final}\in [a^{(N)},b^{(N)}]\} \|$ goes to zero as $N$ goes to infinity, as desired.
In order to deal with the cases in which the population model associated with the property $\dta$ is a sequence of $k>1$ models, we can rely on the fact that the time constants defining the intervals $I_j$ are fixed, hence Theorem \ref{th:central} holds inductively for each model of the sequence. In fact, the initial conditions for model $\mathcal{X}n_{I_{j}}$ are given by the final state of model $\mathcal{X}n_{I_{j-1}}$, which converges by the inductive hypothesis. Therefore, we just need to apply the argument discussed above to the final model of the sequence. $\blacksquare$\newline
\end{document}
\begin{document}
\title{Deterministic Teleportation and Universal Computation Without Particle Exchange}
\author{Hatim Salih}
\email{[email protected]}
\affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK}
\affiliation{Quantum Technology Enterprise Centre, HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, UK}
\author{Jonte R. Hance}
\affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK}
\author{Will McCutcheon}
\affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK}
\affiliation{Institute of Photonics and Quantum Science, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK}
\author{Terry Rudolph}
\affiliation{Department of Physics, Imperial College London, Prince Consort Road, London SW7 2AZ, United Kingdom}
\author{John Rarity}
\affiliation{Quantum Engineering Technology Laboratory, Department of Electrical and Electronic Engineering, University of Bristol, Woodland Road, Bristol, BS8 1UB, UK}
\date{\today}
\begin{abstract}
Teleportation is a cornerstone of quantum technologies, and has played a key role in the development of quantum information theory. Pushing the limits of teleportation is therefore of particular importance. Here, we apply a different aspect of quantumness to teleportation---namely exchange-free computation at a distance. The controlled-phase universal gate we propose, where no particles are exchanged between control and target, allows the full repertoire of quantum computation, including complete Bell detection among two remote parties, and is experimentally feasible. Our teleportation-with-a-twist, which we extend to telecloning, then requires no pre-shared entanglement nor classical communication between sender and receiver, with the teleported state gradually appearing at its destination.
\end{abstract}
\maketitle
\section{Introduction}
In the popular imagination, teleportation has come to refer to the process by which a body or an object is transported from one place to another without taking the actual journey. While teleportation is a staple of science fiction, science by contrast seemed to rule it out based on the uncertainty principle, which places a fundamental limit on the accuracy of measurement \cite{Kennard1927Quantenmechanik}. No wonder that when, in 1993, Bennett and colleagues proposed the first quantum teleportation protocol \cite{Bennett1993Teleporting}, it was soon recognised as a seminal moment in physics. Relying on the non-classical resource of pre-shared entanglement between the communicating parties, an unknown quantum state of a physical system is jointly measured by the sender with one part of an entangled pair, in such a way as to allow its reconstruction at the remote entangled partner held by the receiver, while leaving behind its physical constituents. Classical communication is typically required to complete this disembodied transport.
Not only has quantum teleportation become a backbone of quantum technologies such as quantum communication, quantum computing, and quantum networks, it has also played a crucial role in the development of formal quantum information theory. As such, pushing the limits of quantum teleportation is of significant importance, which is what we intend to do here by invoking yet another aspect of quantumness: exchange-free computation at a distance. While Gedanken, or thought, experiments have historically played a crucial conceptual role in physics (the EPR proposal, for instance, was famously conceived as such), we try to go beyond theory by proposing a feasible demonstration.
In exchange-free communication, also known as counterfactual communication, a classical message is sent by means of quantum processes without the communicating parties exchanging any particles. (We use the term exchange-free in place of counterfactual since the term counterfactual has historically been used differently in the literature. Moreover, the term exchange-free more accurately describes the protocol, namely information being sent without exchange of particles.) With its roots in the phenomena of interaction-free measurement and the quantum Zeno effect \cite{Elitzur1993Bomb,Kwiat1995IFM,Kwiat1999Interrogation,Rudolph2000Zeno,Mitchison2001CFComputation,Hosten2006CounterComp,Hance2021CFGI}, the first such deterministic protocol was proposed by Salih et al \cite{Salih2013Protocol}, before being experimentally demonstrated by Pan and colleagues \cite{Pan2017Experiment}. While once controversial, the once-heated debate over whether exchange-free communication was permitted by the laws of physics (for both bit values) seems now to be resolving; Nature does allow exchange-free communication, and consequently computation at a distance \cite{Vaidman2014SalihCommProtocol,Salih2014ReplyVaidmanComment,Griffith2016Path,Salih2018CommentPath,Griffiths2018Reply,Aharonov2019Modification,Salih2018Laws,Hance2021Quantum}.
This counterfactual communication was generalised to sending quantum information exchange-free for the first time in the Salih14 protocol \cite{Salih2016Qubit,*Salih2014Qubit}, also known as counterportation, proposing an exchange-free quantum CNOT gate as a new computing primitive. The exchange-free CNOT was later employed by Zaman et al to propose exchange-free Bell analysis, albeit with a 50\% theoretical efficiency limit \cite{Zaman2018SalihBell}.
By contrast, the controlled $\hat{R}_z$-rotation we propose here, based on the above-mentioned CNOT gate, is universal and has no theoretical limit on efficiency. We then combine quantum teleportation with exchange-free computation at a distance to propose deterministic exchange-free teleportation. The core of this gate is set up by the entangling operation enabled by a one-dimensional atom--cavity system. The ground state of an atom in a cavity can be put into a superposition of being on-resonant (a zero) and reflecting, or off-resonant (a one) and transmitting, a photon \cite{Hu2009QuantumDot,Reiserer2015Cavity}. We then construct a counterfactual way of probing whether Bob is blocking/not blocking (transmitting/reflecting) using the standard counterfactual communication-style protocol.
\section{Results}
\begin{figure*}
\caption{Our setup for an experimentally feasible, exchange-free controlled-$\hat{R}_z$ rotation gate.}
\label{fig:CZGate}
\end{figure*}
We first go through the chained quantum Zeno effect (CQZE) unit, as given in Fig.\ref{fig:CZGate}. This is based on Salih's exchange-free CNOT gate, which has Bob enacting a superposition of blocking and not blocking his side of the communication channel \cite{Salih2016Qubit,*Salih2014Qubit, Salih2018Paradox}.
The switchable mirror, SM1, is first switched off to allow the photon into the outer interferometer, before being switched on again. The switchable polarisation rotator, SPR1, rotates the photon's polarisation from $H$ towards $V$ by a small angle $\frac{\pi}{2M}$:
\begin{equation}
\begin{split}
\left| H \right\rangle \to \cos \frac{\pi}{2M} \left| H \right\rangle + \sin \frac{\pi}{2M} \left| V \right\rangle\\
\left| V \right\rangle \to \cos \frac{\pi}{2M} \left| V \right\rangle - \sin \frac{\pi}{2M} \left| H \right\rangle
\end{split}
\end{equation}
The polarising beam-splitter, PBS2, passes the $H$-polarised component towards the mirror below it, while reflecting the small $V$-polarised component towards the inner interferometer. The switchable mirror, SM2, is then switched off to allow the $V$-polarised component into the inner interferometer, before being switched on again. The switchable polarisation rotator, SPR2, rotates the $V$-polarised component by a small angle $\frac{\pi}{2N}$:
\begin{equation}
\left| V \right\rangle \to \cos \frac{\pi}{2N} \left| V \right\rangle - \sin \frac{\pi}{2N} \left| H \right\rangle
\end{equation}
The polarising beamsplitter, PBS3, then reflects the $V$-polarised component up, towards a mirror while passing the $H$-polarised component towards Bob's trapped atom, which is in a superposition, $\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle$, of reflecting back any photon, and transmitting it through to the loss detector. If the atom reflects (is in state $\ket{0}$), the $H$-polarised component survives---if the atom transmits ($\ket{1}$), the component is lost. This means the overall action of the inner interferometer, assuming the photon is not lost to Bob's detector $D_B$, can be described as
\begin{equation}
\begin{split}
&\left| V \right\rangle (\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle) \to\\
&\alpha (\cos \frac{\pi}{2N} \left| V \right\rangle - \sin \frac{\pi}{2N} \left| H \right\rangle)\left| 0 \right\rangle + \beta \cos \frac{\pi}{2N} \left| V \right\rangle \left| 1 \right\rangle
\end{split}
\end{equation}
This represents one inner cycle. The photonic superposition has now been brought back together by PBS3 towards SM2. After $N$ such cycles the state is,
\begin{align}
\left| V \right\rangle (\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle) \to \alpha \left| H \right\rangle \left| 0 \right\rangle + \beta {\cos}^N \frac{\pi}{2N} \left| V \right\rangle \left| 1 \right\rangle
\end{align}
The switchable mirror, SM2, is then switched off to let the photonic component inside the inner interferometer out. Since for large $N$, ${\cos}^N \frac{\pi}{2N}\rightarrow1$, the state becomes,
\begin{align}
\left| V \right\rangle (\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle) \to \alpha \left| H \right\rangle \left| 0 \right\rangle + \beta \left| V \right\rangle \left| 1 \right\rangle
\end{align}
Similarly, the first outer cycle, starting with the photon at SM1, assuming the photon is neither lost to Alice's detector $D_A$, nor to Bob's $D_B$ inside the inner interferometer, implements
\begin{equation}
\begin{split}
&\left| H \right\rangle (\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle) \to\\
&\alpha \cos \frac{\pi}{2M} \left| H \right\rangle \left| 0 \right\rangle + \beta (\cos \frac{\pi}{2M} \left| H \right\rangle + \sin \frac{\pi}{2M} \left| V \right\rangle) \left| 1 \right\rangle
\end{split}
\end{equation}
This represents one outer cycle, containing $N$ inner cycles. The photonic superposition has now been brought back together by PBS2 towards SM1. After $M$ such cycles, the state is
\begin{align}
\left| H \right\rangle (\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle) \to \alpha {\cos}^M \frac{\pi}{2M} \left| H \right\rangle \left| 0 \right\rangle + \beta \left| V \right\rangle \left| 1 \right\rangle
\end{align}
Since for large $M$, ${\cos}^M \frac{\pi}{2M}$ approaches 1, this goes to,
\begin{align}
\left| H \right\rangle (\alpha \left| 0 \right\rangle + \beta \left| 1 \right\rangle) \to \alpha \left| H \right\rangle \left| 0 \right\rangle + \beta \left| V \right\rangle \left| 1 \right\rangle
\end{align}
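The cycle-by-cycle evolution above can be checked numerically. The sketch below (function and variable names are my own; ideal, lossless optics is assumed, and components lost to the detectors are simply dropped, i.e. post-selection on no click at $D_A$ or $D_B$; the inner interferometer is summarised by its net effect, per the equations above) reproduces the CNOT action $\alpha\ket{H}\ket{0}+\beta\ket{V}\ket{1}$ for large $M$ and $N$:

```python
import numpy as np

def cqze_cnot(alpha, beta, M=40, N=1000):
    """Post-selected CQZE evolution: photon starts H-polarised, the atom
    in alpha|0> + beta|1>.  psi[atom] = [amp_H, amp_V]; components lost
    to detectors D_A / D_B are dropped (unnormalised amplitudes)."""
    c, s = np.cos(np.pi / (2 * M)), np.sin(np.pi / (2 * M))
    SPR1 = np.array([[c, -s], [s, c]])    # H -> cH + sV,  V -> cV - sH
    kappa = np.cos(np.pi / (2 * N)) ** N  # inner V-survival, -> 1 for large N
    psi = {0: alpha * np.array([1.0, 0.0]),
           1: beta * np.array([1.0, 0.0])}
    for _ in range(M):                    # M outer cycles
        for atom in (0, 1):
            psi[atom] = SPR1 @ psi[atom]  # SPR1 rotation
        # atom |0>: the reflecting atom lets the inner Zeno rotation carry
        # the whole V component into H, which is then lost at D_A;
        psi[0][1] = 0.0
        # atom |1>: each inner H component is blocked (lost at D_B), so the
        # V component survives, attenuated by cos^N(pi/2N) ~ 1:
        psi[1][1] *= kappa
    return psi    # -> approximately alpha|H>|0> + beta|V>|1>
```

With $M=40$, $N=1000$ and atom state $0.6\ket{0}+0.8\ket{1}$, the surviving amplitudes are close to $0.6\ket{H}\ket{0}+0.8\ket{V}\ket{1}$, the CNOT action stated in the last equation.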
The switchable mirror SM1 is now switched off to let the photon out. Crucially, this last equation describes the action of a quantum CNOT gate with Bob's atom as the control qubit, acting on Alice's $H$-polarised photon.
\begin{figure*}
\caption{Our protocol for exchange-free teleportation. In this protocol, Alice has a photon-polarisation qubit, and Bob has a maximally-entangled pair of qubits, one implemented as a trapped atom enacting a superposition of blocking and not blocking the communication channel, and the other as photon polarisation. Alice's qubit begins in the state to be teleported, $\ket{\psi}$.}
\label{fig:Teleprot}
\end{figure*}
We now explain the rest of the setup, which uses the CQZE unit to implement a universal, general-input controlled-$\hat{R}_z$ rotation gate (Fig.\ref{fig:CZGate}). We begin with a superposition state at Alice, $a\ket{V}+b\ket{H}$. This is split at PBS1, with the $H$-polarised component going into an optical loop, and the $V$-polarised component going through a $\pi/2$ half wave plate flipping its polarisation to $H$, before being admitted into the CQZE unit by turning off the switchable mirror SM0, which is then turned on again. Upon exiting the CQZE unit, the component is reflected back by SM0, acquiring a phase of $\theta+\pi$ if $V$-polarised (0 if $H$) from the phase shifter, before going through another run of the CQZE unit. This always produces an $H$-polarised state, as noted in \cite{Li2019CFStateEx}. Note that the $\pi$ term in the phase shifter is a correction term. The photonic component now exits through SM0, which is switched off, before being flipped back to $V$-polarisation at the $\pi/2$ half wave plate, having acquired a $\theta$ phase shift. It then recombines with the $H$-polarised component at PBS1.
Given the initial state of the overall system is
\begin{equation}
(a\ket{V}+b\ket{H})\otimes(\alpha\ket{0}+\beta\ket{1})
\end{equation}
and that only the initially $V$-polarised component ``interacts'' with the trapped atom, we get the final state
\begin{equation}
a\ket{V}(\alpha\ket{0}+\beta e^{i\theta}\ket{1})+b\ket{H}(\alpha\ket{0}+\beta\ket{1})
\end{equation}
This is an entangled state between Alice's polarisation qubit and Bob's trapped ion qubit: a controlled-phase rotation, with Alice's as the control qubit and Bob's as the target. Due to the symmetry of control and target qubits for this type of rotation, it can also be factorised as
\begin{equation}
\alpha\ket{0}(a\ket{V}+b\ket{H})+\beta\ket{1}(ae^{i\theta}\ket{V}+b\ket{H})
\end{equation}
the same controlled-phase rotation expressed differently, now with Bob's as the control qubit and Alice's as the target. Taking the special case when $\theta=\pi$, we get a controlled-Z gate.
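The equivalence of the two factorisations is easy to verify directly. As a minimal numerical sketch (the coefficients are chosen arbitrarily; the basis labelling is my own):

```python
import numpy as np

theta = 0.7
a, b = 0.6, 0.8          # Alice's polarisation amplitudes
alpha, beta = 0.8, 0.6   # Bob's atom amplitudes
V0, V1, H0, H1 = np.eye(4, dtype=complex)   # joint basis |V0>,|V1>,|H0>,|H1>

# Alice as control: phase e^{i theta} on Bob's |1> when Alice is V.
s1 = a * (alpha * V0 + beta * np.exp(1j * theta) * V1) \
   + b * (alpha * H0 + beta * H1)
# Bob as control: phase e^{i theta} on Alice's |V> when Bob is |1>.
s2 = alpha * (a * V0 + b * H0) \
   + beta * (a * np.exp(1j * theta) * V1 + b * H1)

assert np.allclose(s1, s2)   # the two factorisations are the same state
```

Both expressions describe the same joint state; only the grouping of control and target differs, which is the symmetry exploited in the text.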
On universality, our exchange-free controlled-$\hat{R}_z$, as a two-qubit gate, allows efficient implementation of any quantum circuit when combined with local operations. But there's another sense in which it is universal. As explained later, this gate can be operated differently, allowing one party with classical action to enact any desired operation on a second party's remote photonic qubit, exchange-free. This classical action can even control a two-qubit gate at the second party, as shown in \cite{Salih2021EFQubit}. Our controlled-$\hat{R}_z$ gate can therefore be thought of as a universal set in its own right.
Bob needs a way to implement a superposition of reflecting, bit ``0'', and blocking, bit ``1'', Alice's photon. There are many ways to go about this; however, recent breakthroughs in trapped atoms inside optical cavities \cite{Reiserer2015Cavity}, such as the demonstration of light-matter quantum logic gates \cite{Tiecke2014NanophotonicAtom,Reiserer2014a}, make trapped atoms an obvious choice.
Bob's qubit is a single $^{87}$Rb atom trapped inside a high-finesse optical resonator by a three-dimensional optical lattice \cite{Reiserer2014a,Reiserer2014}. Depending on which of its two internal ground states the $^{87}$Rb atom is in, a photon impinging on the cavity in Fig. \ref{fig:CZGate} will either be reflected as a result of strong coupling, or otherwise enter the cavity on its way towards detector $D_B$. For this, the cavity needs mirror reflectivities such that a photon entering it exits towards detector $D_B$, similar to \cite{Mucke2010}. By placing the $^{87}$Rb atom in a superposition of its two ground states, by means of Raman transitions applied through a pair of Raman lasers, Bob implements the desired superposition of reflecting Alice's photon back and blocking it. Note that the coherence time for such a system is $\mathcal{O}(10^{-4}\,\mathrm{s})$ \cite{Reiserer2014}, with longer times possible. Therefore, if the protocol is completed within $\mathcal{O}(10^{-5}\,\mathrm{s})$, lower-bounded by the $\mathcal{O}(10^{-9}\,\mathrm{s})$ switching speed of the switchable components, then decoherence effects can be ignored.
We now move to an exchange-free implementation of teleportation. This is based on the quantum teleportation protocol first devised by Bennett et al \cite{Bennett1993Teleporting}, but recast such that there is no need for previously-shared entanglement between Alice and Bob, nor classical communication between them. Our teleportation scheme is shown in Fig.\ref{fig:Teleprot}.
In this protocol, we have a photon-polarisation qubit at Alice, and an entangled pair of qubits, one trapped-atom and the other photon-polarisation, at Bob. Alice's qubit is instantiated in the state to be teleported, e.g. by a third party, while Bob's two modes are in the maximally entangled state
\begin{equation}
\frac{\ket{H}\ket{0}+\ket{V}\ket{1}}{\sqrt{2}}
\end{equation}
(which can be created by Bob sending an $H$-polarised photon into the cavity, when it is in the superposition $(\ket{0}+\ket{1})/\sqrt{2}$, and then recombining the photon's components from the transmitted and reflected outputs at a polarising beamsplitter, after applying a $H\rightarrow V$ rotation on the path from the transmitted side).
To enact teleportation, Bob first applies a Hadamard gate to his trapped-atom qubit, before Alice applies an exchange-free controlled-Z gate, with her photonic qubit as the control and Bob's trapped-atom qubit as the target. Bob and Alice then apply Hadamard gates to their respective qubits, before measuring the states in the computational basis for Bob, and in the $H/V$ basis for Alice, together performing a complete Bell measurement. Bob then either flips or doesn't flip the polarisation of his photonic qubit based on the classical measurement outcome of his trapped-atom qubit. Alice then, based on the classical measurement outcome of her qubit, either performs an exchange-free controlled-Z on Bob's photonic qubit with her control set to $\ket{1}$ by blocking both runs, or else sets her control to $\ket{0}$ by blocking neither run. These last two steps by Bob and Alice respectively act as the feed-forward step of teleportation (which next-generation trapped atoms are expected to allow), leaving Bob's photonic qubit in the state of Alice's original qubit.
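The sequence of steps just described can be checked against the standard teleportation identity. Below is a minimal simulation (the qubit ordering, function names, and $H\to\ket{0}$, $V\to\ket{1}$ encoding are my own; the exchange-free controlled-Z is idealised as a perfect CZ):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def teleport(psi):
    """Qubits: 0 = Alice's photon, 1 = Bob's atom, 2 = Bob's photon.
    Bob's pair starts in (|00> + |11>)/sqrt2, i.e. (|H0> + |V1>)/sqrt2."""
    bell = np.zeros(4, complex)
    bell[0] = bell[3] = 1 / np.sqrt(2)
    state = np.kron(np.asarray(psi, complex), bell)
    CZ01 = np.diag([1, 1, 1, 1, 1, 1, -1, -1]).astype(complex)
    state = kron(I2, H, I2) @ state   # Bob: Hadamard on his atom
    state = CZ01 @ state              # exchange-free CZ (qubit 0 ctrl, 1 tgt)
    state = kron(H, H, I2) @ state    # Hadamards on Alice's qubit and the atom
    outs = []
    for mA in (0, 1):                 # Alice's measurement outcome
        for mB in (0, 1):             # Bob's atom outcome
            sub = state.reshape(2, 2, 2)[mA, mB]   # Bob's photon, unnormalised
            # Bob's conditional flip X^mB, then Alice's exchange-free Z^mA:
            corr = np.linalg.matrix_power(Z, mA) @ np.linalg.matrix_power(X, mB)
            out = corr @ sub
            outs.append(out / np.linalg.norm(out))
    return outs
```

For every one of the four measurement outcomes, the corrected state of Bob's photon matches Alice's input state, confirming that the Hadamard--CZ--Hadamard sequence implements a complete Bell measurement with the appropriate feed-forward.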
\begin{figure}
\caption{Average fidelity of our exchange-free teleportation protocol, shown as a function of the number of outer and inner cycles, M and N. This is for an imperfect trapped-atom at Bob that fails to reflect an incident photon 34\% of the time when it should reflect, and fails to block the photon 8\% of the time when it should block. Fidelity is averaged over 100 points evenly distributed on the Bloch sphere of possible states Alice could send.}
\label{fig:Fid}
\end{figure}
\section{Discussion}
Our exchange-free protocol bears all the hallmarks of teleportation as given by Pirandola et al \cite{Pirandola2015AdvsTele}. Alice's input state is unknown to her, and can be provided by a third party who also verifies the teleported state at Bob. The protocol allows complete Bell detection, which in our case is jointly carried out by Alice and Bob exchange-free. The protocol allows the possibility of real-time correction on Bob's photonic qubit. Lastly, even for the smallest number of cycles, achievable fidelity for our protocol exceeds the 2/3 limit of ``classical teleportation'', which comes from the no-cloning theorem \cite{Wootters1982NoCloning}. Fig.\ref{fig:Fid} gives the average fidelity $F(\theta,\phi)$, where
\begin{equation}
F(\theta,\phi) = \braket{\Psi_{in}}{\Psi_{out}}\braket{\Psi_{out}}{\Psi_{in}}
\end{equation}
and $\Psi_{in}$ and $\Psi_{out}$ are the input and output states of the protocol, $\theta$ and $\phi$ parameterise the input state's Bloch sphere (polar and azimuthal angles respectively), and the average fidelity is $F(\theta,\phi)$ averaged over $\theta$ and $\phi$ \cite{Oh2002TeleFid}. The average efficiency of the protocol is 30\% for a number of cycles $M$ equal to 10 and $N$ equal to 100, but improves for larger numbers of cycles.
Speaking of cloning: since quantum telecloning combines approximate cloning with teleportation to transport multiple approximate copies of a state, one might expect our exchange-free teleportation protocol to allow telecloning to be carried out exchange-free. In fact, the telecloning scheme of Murao et al \cite{Murao1999Telecloning}, which employs a Bell measurement, along with local operations at the receiver based on the Bell detections, can be made exchange-free in a similar manner to how we made teleportation exchange-free. Their scheme starts with an already prepared multipartite entangled state \cite{Murao1999Telecloning}, which for our purposes we take to be located at Bob, with one of the entangled qubits in the form of, say, a trapped atom, and the output qubits where the approximate copies appear all photonic. Alice and Bob jointly perform an exchange-free Bell measurement between Alice's photonic input qubit and Bob's trapped-atom `port' qubit, as we show in Fig.\ref{fig:Telecloning}.
\begin{figure*}
\caption{An entanglement diagram for exchange-free telecloning. The diagram shows the initial entangled state between port-qubit P, copy-qubits C$_q$, and ancilla-qubits A$_{q-1}$.}
\label{fig:Telecloning}
\end{figure*}
Based on the classical outcomes of the Bell measurement, Alice applies suitable exchange-free controlled-rotations (Pauli operations) to recover the approximate copies at Bob. The fidelity of these copies is limited by the no-cloning theorem to
\begin{equation}
\gamma=\frac{2q+1}{3q}
\end{equation}
where $q$ is the number of approximate copies of our state that we want to send. For the protocol, when the trapped-atom interaction is ideal, we always reach this limit of fidelity (5/6 for two copies) even for the smallest number of cycles. In Fig.\ref{fig:avgFidTC}, we give the fidelity for an imperfect trapped-atom at Bob that fails to reflect an incident photon 34\% of the time when it should reflect, and fails to block the photon 8\% of the time when it should block, for different values of $M$ and $N$. Average efficiency is 14\% for $M$ equal to 10 and $N$ equal to 100, but improves for larger numbers of cycles.
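For concreteness, the bound can be tabulated (a trivial check; the function name is my own, and the $q=2$ value matches the 5/6 quoted above):

```python
from fractions import Fraction

def telecloning_fidelity_bound(q):
    """No-cloning bound gamma = (2q+1)/(3q) on q approximate copies."""
    return Fraction(2 * q + 1, 3 * q)

assert telecloning_fidelity_bound(2) == Fraction(5, 6)   # two copies
# The bound decreases towards 2/3 as the number of copies grows:
print([str(telecloning_fidelity_bound(q)) for q in (1, 2, 3, 10)])
# -> ['1', '5/6', '7/9', '7/10']
```

Note that $q=1$ gives fidelity 1, i.e. plain teleportation, while the limit $2/3$ for large $q$ is the classical-teleportation bound mentioned earlier.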
\begin{figure}
\caption{Average fidelity for exchange-free telecloning using the exchange-free controlled-Z gate described above, plotted for different numbers of outer ($M$) and inner ($N$) cycles. Fidelity is calculated for an imperfect trapped-atom at Bob that fails to reflect an incident photon 34\% of the time when it should reflect, and fails to block the photon 8\% of the time when it should block. Fidelity is averaged over 100 points evenly distributed on the Bloch sphere of possible states Alice could send.}
\label{fig:avgFidTC}
\end{figure}
An interesting modification of Salih et al's 2013 protocol was recently proposed by Aharonov and Vaidman, satisfying their criterion, based on weak trace, for exchange-free communication \cite{Aharonov2019Modification}. Following Salih's 2018 paper on counterportation \cite{Salih2018Paradox}, we now show how to implement it in our protocol. In the CQZE module, after applying SPR2 inside the inner interferometer for the $N$th cycle, Alice now makes a measurement by blocking the entrance to the channel leading to Bob. (She may alternatively flip the polarisation and use a PBS to direct the photonic component away from Bob.) Instead of switching SM2 off, it is kept turned on for a duration corresponding to $N$ more inner cycles, after which SM2 is switched off as before. One has to compensate for the added time by means of optical delays. The idea here is that, for the case of Bob not blocking, any lingering $V$ component inside the inner interferometer after $N$ inner cycles (because of weak measurement or otherwise) will be rotated towards $H$ over the extra $N$ inner cycles. This has the effect that, at least as a first order approximation, any weak trace in the channel leading to Bob is made negligibly small.
While alternative proposals for counterfactual communication have been given by Arvidsson-Shukur et al. \cite{Arvidsson2016Communication,Calafell2019Trace}, the protocols' definitions of counterfactuality have been the subject of debate \cite{Vaidman2019Analysis,Hance2021Quantum,Wander2021Three}. Their adoption of Fisher information as a tool for analysing counterfactuality is nonetheless interesting.
As Vaidman points out in \cite{Vaidman2016Counterfactual}, Salih's 2014 protocol---also known as counterportation \cite{Salih2018Paradox}---achieves the same end goal of Bennett et al.'s (1993) teleportation \cite{Bennett1993Teleporting}, namely the disembodied transport of an unknown quantum state, albeit over a large number of protocol cycles. It is an entirely different protocol though, as can be seen from their respective quantum-circuit diagrams. Our current protocol, by contrast to counterportation, is directly based on the 1993 teleportation protocol, but implemented using our universal, newly proposed counterfactual Z-gate, with the aim of exploring the foundations of this most central of quantum information protocols.
{\color{red}Interestingly, Aharonov et al. have recently shown that counterfactual processes such as the ones presented here conserve modular angular momentum, much like the quantum Cheshire Cat effect \cite{Aharonov2020Nonlocal,Aharonov2021Dynamical}}.
In our recent paper, \textit{Exchange-Free Computation on an Unknown Qubit at a Distance}, we give a protocol that allows Bob to implement any phase on Alice's qubit, exchange-free \cite{Salih2021EFQubit}. This then forms the basis of a device that we called a phase unit, allowing Bob to apply any arbitrary single-qubit unitary to the qubit, exchange-free. However, an issue that the phase unit displayed was that the time at which the photon exited the device was correlated with the phase applied by Bob. While we provided a way for Bob to undo this time-binning after the fact, it is generally desirable to remove it altogether.
By adapting the controlled phase-rotation above, a phase unit can be constructed that doesn't exhibit this time-binning. We use the set-up in Fig.\ref{fig:CZGate}, but instead with a classical Bob either blocking or not blocking, and with SM0 keeping Alice's photon in the device for $2L$ runs (rather than 2). Bob sets $\theta$ to $\pi/L$, blocking for $2k$ of the runs and not blocking for $2(L-k)$, in units of 2 runs where he either blocks for both or does not block for both. This allows Bob to set a phase on Alice's photon of $2\pi k/L$. We place three of these devices in series, interspersed with a $-\pi/4$-aligned quarter wave plate, $\hat{\textbf{U}}_{QWP}=\hat{\textbf{R}}_x(-\pi/2)$, and its adjoint, $\hat{\textbf{U}}^{\dagger}_{QWP}=\hat{\textbf{R}}_x(\pi/2)$, to create a chained $\hat{R}_z \hat{R}_x \hat{R}_z$ set of rotations, into which any single-qubit unitary can be decomposed. Bob can thus apply any arbitrary single-qubit unitary to Alice's qubit---both exchange-free and without time-binning. This also, as we show in \cite{Salih2021EFQubit}, allows classical control of a universal two-qubit gate, which enables Bob to directly enact, in principle, any desired algorithm on a remote Alice's programmable quantum circuit.
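That any single-qubit unitary decomposes, up to a global phase, into such an $\hat{R}_z\hat{R}_x\hat{R}_z$ chain is a standard Euler-angle fact. As a small sketch (using the usual rotation-gate conventions, which I assume here), the Hadamard gate itself:

```python
import numpy as np

def Rz(t):
    """Rotation about Z: diag(e^{-it/2}, e^{it/2})."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def Rx(t):
    """Rotation about X."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Euler decomposition: H = e^{i pi/2} Rz(pi/2) Rx(pi/2) Rz(pi/2)
U = 1j * Rz(np.pi / 2) @ Rx(np.pi / 2) @ Rz(np.pi / 2)
assert np.allclose(U, H)
```

The global phase $e^{i\pi/2}$ is physically irrelevant, so three phase units realising $\hat{R}_z(\pi/2)$, $\hat{R}_x(\pi/2)$, $\hat{R}_z(\pi/2)$ suffice to enact a Hadamard on Alice's qubit.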
In conclusion, we have shown how the chained quantum Zeno effect can be employed to construct an experimentally feasible, exchange-free controlled-$\hat{R}_z$ operation, which is not only a universal gate, but can be considered a universal set. This allowed us to propose a protocol for deterministic teleportation of an unknown quantum state between Alice and Bob, without exchanging particles. The fact that the multiple cycles cause teleportation to happen gradually, in slow-motion so to speak, as opposed to standard quantum teleportation where the teleported state appears at once, is as interesting as it is surprising.
\begin{filecontents}{Detteleref.bib}
@article{Kennard1927Quantenmechanik,
title={Zur quantenmechanik einfacher bewegungstypen},
author={Kennard, Earle H},
journal={Zeitschrift f{\"u}r Physik},
volume={44},
number={4-5},
pages={326--352},
year={1927},
publisher={Springer},
doi={10.1007/BF01391200}
}
\end{filecontents}
\end{document}
\begin{document}
\title{Systemic Risk and Heterogeneous Mean Field Type Interbank Network}
\author{Li-Hsien Sun\thanks{Institute of Statistics, National Central University, Chung-Li, Taiwan, 32001 {\em [email protected]}. Work supported by MOST grant 108-2118-M-008-002-MY2}}
\date{}
\begin{abstract}
We study a system of heterogeneous interbank lending and borrowing based on the relative average of log-capitalization, given by a linear combination of the average within groups and the global ensemble average, and describe the evolution of log-capitalization by a system of coupled diffusions. The model incorporates a game feature with homogeneity within groups and heterogeneity between groups, where banks search for the optimal lending or borrowing strategies by minimizing heterogeneous linear quadratic costs in order to avoid approaching the default barrier.
Due to the complexity of the lending and borrowing system, the closed-loop Nash equilibria and the open-loop Nash equilibria are both driven by coupled Riccati equations. In the two-group case with a sufficiently large number of banks, the existence of the equilibria is guaranteed by the solvability of the coupled Riccati equations as the number of banks in each group goes to infinity. The equilibria consist of a mean-reverting term identical to that of the one-group game and a group-average term owing to heterogeneity. In addition, the corresponding heterogeneous mean field game with an arbitrary number of groups is also discussed, and the existence of the $\varepsilon$-Nash equilibrium in the general case of $d$ heterogeneous groups is verified. Finally, as a financial implication, we observe that the Nash equilibria are governed by the mean-reverting term and a linear combination of the ensemble averages of the individual groups, and we study the influence of the relative parameters on the liquidity rate through numerical analysis.
\end{abstract}
\textbf{Keywords:} Systemic risk, inter-bank borrowing and lending system, heterogeneous group, relative ensemble average, Nash equilibrium, Mean Field Game.
\section{Introduction}\label{sec:intro}
Toward a deeper understanding of the systemic risk created by lending and borrowing behavior in a heterogeneous environment, we extend the model studied in \cite{R.Carmona2013} from one homogeneous group to several groups with heterogeneity.
The evolution of monetary reserves is described by a system of interacting diffusions with homogeneity within groups and heterogeneity between groups. Banks borrow money from a central bank when they remain below the global average of capitalization, treated as the critical level, and lend money to a central bank when they stay above the same critical level, minimizing the cost of the corresponding lending or borrowing given by linear quadratic objective functions with varying parameters. Furthermore, motivated by \cite{Touzi2015}, instead of the global ensemble average we propose the relative ensemble average, a linear combination of the group average and the global ensemble average. In the case of finitely many players in heterogeneous groups, we solve for the closed-loop Nash equilibria using the dynamic programming principle and the corresponding fully coupled backward Hamilton--Jacobi--Bellman (HJB) equations. In addition, through the Pontryagin minimum principle and the corresponding adjoint forward-backward stochastic differential equations (FBSDEs), we obtain the open-loop Nash equilibria.
Due to the complexity introduced by heterogeneity, the closed-loop and open-loop Nash equilibria are given by coupled Riccati equations. Hence, in the two-group case, we propose a solvability condition for the coupled Riccati equations as the number of banks in each group goes to infinity, which guarantees the existence of the closed-loop and open-loop Nash equilibria when the number of banks is sufficiently large. Furthermore, we discuss heterogeneous mean field games (MFGs) with common noises where the number of groups is arbitrary. Due to the complexity generated by the common noises, the $\varepsilon$-Nash equilibria cannot be obtained using the HJB equations; hence, the adjoint FBSDEs are applied to solve for the equilibria. The existence of the $\varepsilon$-Nash equilibria is also proved under some sufficient conditions. As in \cite{R.Carmona2013}, the closed-loop and open-loop Nash equilibria converge to the $\varepsilon$-Nash equilibria.
We observe that, owing to the linear quadratic structure, the equilibria are linear combinations of a mean-reverting term identical to that of the one-group system discussed in \cite{R.Carmona2013} and the group ensemble averages arising from heterogeneity. In addition, the numerical studies show that if banks prefer tracing the global ensemble average rather than the average of their own group, they tend to increase the liquidity rate, and a larger sample size also implies a larger liquidity rate.
In the literature, this type of interaction in continuous time is studied in several models. \cite{Fouque-Sun, R.Carmona2013} describe systemic risk based on coupled Ornstein--Uhlenbeck (OU) type processes. An extension of this OU type model with delayed obligations is proposed by \cite{Carmona-Fouque2016}. \cite{Fouque-Ichiba, Sun2016} investigate system crashes through Cox--Ingersoll--Ross (CIR) type processes. Rare events related to systemic risk in a bistable model are discussed in \cite{Garnier-Mean-Field, GarnierPapanicolaouYang}. The stabilization provided by a central agent in the model of \cite{Garnier-Mean-Field, GarnierPapanicolaouYang} is studied in \cite{Papanicolaou-Yang2015}.
The asymptotic equilibria, called $\varepsilon$-Nash equilibria, of stochastic differential games with one homogeneous group are solved by the MFGs proposed by \cite{MFG1, MFG2, MFG3}. Here, the interactions are given by empirical distributions, and the solution satisfies a coupled system consisting of an HJB equation backward in time and a Kolmogorov equation forward in time. Independently, the Nash certainty equivalence, treated as a similar case of MFGs, was developed by \cite{HuangCainesMalhame1,HuangCainesMalhame2}. In addition, the probabilistic approach in the form of FBSDEs to obtain $\varepsilon$-Nash equilibria is studied in
\cite{Bensoussan_et_al, CarmonaDelarueLachapelle, Carmona-Lacker2015,MFG-book-1}. \cite{Lacker-Webster2014, Lacker2015, MFG-book-2} discuss MFGs with common noise and the master equations. In particular, \cite{Lacker-Zari2017} study optimal investment under heterogeneous relative performance in the mean field limit.
The paper is organized as follows. In Section \ref{Heter}, we analyze the case of two heterogeneous groups with relative performance and study the corresponding closed-loop Nash equilibria in Section \ref{Sec:NE}. In particular, Section \ref{MFG} is devoted to solving for the $\varepsilon$-Nash equilibria with common noises in MFGs with heterogeneous groups using the coupled FBSDEs. The financial implication is illustrated in Section \ref{FI}. Concluding remarks are given in Section \ref{conclusions}.
\section{Heterogeneous Groups}\label{Heter}
Following the interbank lending and borrowing system discussed in \cite{R.Carmona2013}, it is natural to consider a model of interbank lending and borrowing with $d$ groups, where $N_k$ denotes the number of banks in group $k=1,\cdots,d$ and $N=\sum_{j=1}^dN_j$. The $i$-th bank in group $k$ obtains its optimal strategy by minimizing its own linear quadratic cost function
\begin{eqnarray}
\label{objective}J^{(k)i}(\alpha)=\mathbb{E} \bigg\{\int_{0}^{T} f^{N}_{(k)}(X^{(k)i}_t, X^{-(k)i}_t, \alpha^{(k)i}_t)
dt+ g_{(k)}(X^{(k)i}_T, X^{-(k)i}_T )\bigg\},
\end{eqnarray}
where $X=(X^{(1)1},\cdots,X^{(d)N_d})$, $x=(x^{(1)1},\cdots,x^{(d)N_d})$, $\alpha=(\alpha^{(1)1},\cdots,\alpha^{(d)N_d})$, and $ X^{-{(k)i}}=( X^{(1)1},\cdots, X^{(k)i-1}, X^{(k)i+1},\cdots, X^{(d)N_d})$. The running cost is
\begin{eqnarray}
&&f^N_{(k)}(x^{(k)i}, x^{-(k)i},\alpha)=\frac{\alpha^2}{2}-q_k\alpha \left(\overline x^{\lambda_k} -x^{(k)i}\right)
+\frac{\varepsilon_k}{2}\left(\overline x^{\lambda_k} -x^{(k)i}\right)^2,\label{running_cost}
\end{eqnarray}
and the terminal cost is
\begin{equation}\label{teminal_cost}
g^N_{(k)}(x^{(k)i}, x^{-(k)i})=\frac{c_k}{2}\left(\overline x^{\lambda_k} -x^{(k)i} \right)^2,
\end{equation}
with $x^{-(k)i}=( x^{(1)1},\cdots, x^{(k)i-1}, x^{(k)i+1},\cdots, x^{(d)N_d})$, where the relative ensemble average is
\[
\overline x^{\lambda_k}=(1-\lambda_k)\overline x^{(k)}+\lambda_k\overline{x},
\]
and the global average and the group average of capitalization are written as
\[
\overline{x} =\frac{1}{N}\sum_{k=1}^d\sum_{i=1}^{N_k}x^{(k)i} ,\quad\overline{x}^{(k)}=\frac{1}{N_k}\sum_{i=1}^{N_k}x^{(k)i},
\]
under the constraint
\begin{eqnarray}\label{diffusions}
\nonumber dX^{(k)i}_t &=& (\alpha_{t}^{(k)i}+\gamma^{(k)}_{t})dt\\
&&+\sigma_k\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)i}_t\right)\right),
\end{eqnarray}
for $i=1,\cdots,N_k$, with nonnegative diffusion parameters $\sigma_k$ and nonnegative deterministic growth rates $\gamma^{(k)}$ in the $L^\infty$-space of bounded measurable functions on $[0,T]$. We further assume that $W^{(k)i}_t$, for $k=1,\cdots,d$ and $i=1,\cdots,N_k$, are standard Brownian motions, and that $W^{(0)}_t$ and $W^{(k)}_t$, for $k=1,\cdots,d$, are the common noises between groups and within groups, with the parameters $\rho$ and $\rho_k$, for $k=1,\cdots,d$, denoting the correlations between groups and within groups, respectively. Note that all Brownian motions are defined on a filtered probability space $(\Omega,{\cal{F}},{\cal{P}},\{{\cal{F}}_t\})$; see Definition 5.8 of Chapter 2 in \cite{Karatzas2000}. The initial value $X_0^{(k)i}$ may also be a square-integrable random variable $\xi^{(k)}$, for $k=1,\cdots,d$, independent of the Brownian motions defined on $(\Omega,{\cal{F}},{\cal{P}},\{{\cal{F}}_t\})$. Namely, the system of interbank lending and borrowing features homogeneity within groups and heterogeneity between groups. Note that $\alpha_\cdot$ is a progressively measurable control process, and $\alpha^{(k)i}_\cdot$ is admissible if it satisfies the integrability condition
\begin{equation}\label{admissible}
\mathbb{E} \int_0^T\left|\alpha_s^{(k)i}\right|^2ds<\infty.
\end{equation}
In addition, the parameters $q_k$, $\varepsilon_k$, and $c_k$ are positive and satisfy $q_k^2\leq \varepsilon_k$ in order to guarantee that $\alpha\rightarrow f_{(k)}^{N}(x,\alpha)$ is convex for any $x$ and $x\rightarrow f_{(k)}^{N}(x,\alpha)$ is convex for any $\alpha$.
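In fact, the condition $q_k^2\leq\varepsilon_k$ yields joint convexity of the running cost in $(\alpha,y)$, where $y$ denotes the deviation $\overline x^{\lambda_k}-x^{(k)i}$: the Hessian is $\left(\begin{smallmatrix}1 & -q_k\\ -q_k & \varepsilon_k\end{smallmatrix}\right)$, which is positive semidefinite if and only if $\varepsilon_k\geq q_k^2$. A quick numerical check of this fact follows; the parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Running cost f(alpha, y) = alpha^2/2 - q*alpha*y + (eps/2)*y^2,
# where y stands for the deviation (relative ensemble average minus x).
# Its Hessian in (alpha, y) is [[1, -q], [-q, eps]]:
# positive semidefinite iff eps >= q^2.
def hessian(q, eps):
    return np.array([[1.0, -q], [-q, eps]])

# Assumed illustrative parameter pairs satisfying q^2 <= eps.
for q, eps in [(0.5, 0.25), (1.0, 2.0), (0.3, 0.5)]:
    assert np.linalg.eigvalsh(hessian(q, eps)).min() >= -1e-12  # jointly convex

# When q^2 > eps, joint convexity fails (separate convexity still holds).
print(np.linalg.eigvalsh(hessian(2.0, 1.0)).min())  # negative eigenvalue
```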
The parameters $0\leq\lambda_k\leq 1$ for $k=1,\cdots,d$ describe the relative consideration given to the group average and the global average. A large $\lambda_k$ means that banks trace the global average rather than the group average. Finally, $q_k$ represents the incentive for lending and borrowing of banks in group $k$, as described in \cite{Carmona-Fouque2016,R.Carmona2013}.
For simplicity, in the case of finitely many players, we study the case of two heterogeneous groups, where $d=2$.
Hence, the dynamics for both groups are written as
\begin{eqnarray}
\nonumber dX^{(1)i}_t &=& (\alpha_{t}^{(1)i}+\gamma^{(1)}_{t})dt\\
\label{diffusion-major}&&+\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right),
\end{eqnarray}
and
\begin{eqnarray}
\nonumber dX^{(2)i}_t &=& (\alpha_{t}^{(2)i}+\gamma^{(2)}_{t})dt\\
\label{diffusion-minor}&&+\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)i}_t\right)\right),
\end{eqnarray}
with the initial values $X_0^{(k)i}$ for $k=1,2$ and $i=1,\cdots,N_k$. In particular, with the first group consisting of larger banks and the second group consisting of smaller banks, we may further assume that $ 0\leq\lambda_1<\lambda_2 \leq 1$, since large banks intend to trace their own group ensemble average $\overline X^{(1)}$ rather than the global average $\overline X$. On the contrary, small banks prefer tracing large banks through the global ensemble average. In addition, the number of large banks $N_1$ is usually smaller than the number of small banks $N_2$.
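The two-group dynamics \eqref{diffusion-major} and \eqref{diffusion-minor} can be simulated with a standard Euler--Maruyama scheme. The sketch below is illustrative only: the group sizes, parameter values, and the choice of zero control $\alpha\equiv 0$ are assumptions, not values from the paper. It also computes the relative ensemble averages $\overline X^{\lambda_k}$ at the terminal time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not from the paper).
N = [5, 20]                       # group sizes N_1 (large banks), N_2 (small banks)
sigma = [0.4, 0.2]                # diffusion parameters sigma_k
rho, rho_k = 0.3, [0.5, 0.5]      # correlations between / within groups
lam = [0.2, 0.8]                  # lambda_1 < lambda_2
gamma = [0.05, 0.05]              # growth rates gamma^{(k)}
T, n_steps = 1.0, 200
dt = T / n_steps

X = [np.zeros(Nk) for Nk in N]    # log-capitalizations, started at 0

for _ in range(n_steps):
    dW0 = rng.normal(0.0, np.sqrt(dt))            # common noise W^(0)
    for k in range(2):
        dWk = rng.normal(0.0, np.sqrt(dt))        # group noise W^(k)
        dWi = rng.normal(0.0, np.sqrt(dt), N[k])  # idiosyncratic noises W^(k)i
        noise = sigma[k] * (rho * dW0
                + np.sqrt(1 - rho**2) * (rho_k[k] * dWk
                + np.sqrt(1 - rho_k[k]**2) * dWi))
        # zero control alpha = 0 for illustration; drift is gamma^(k)
        X[k] = X[k] + gamma[k] * dt + noise

# relative ensemble average: (1 - lam_k) * group average + lam_k * global average
xbar = np.mean(np.concatenate(X))
xbar_lam = [(1 - lam[k]) * X[k].mean() + lam[k] * xbar for k in range(2)]
print(xbar_lam)
```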
\section{Construction of Nash Equilibria }\label{Sec:NE}
This section is devoted to obtaining the closed-loop and open-loop Nash equilibria in the finite-player game. The closed-loop Nash equilibria are given by the HJB approach, and the open-loop Nash equilibria are obtained using the FBSDEs based on the Pontryagin minimum principle.
\subsection{Closed-loop Nash Equilibria}\label{HJB-approach}
In order to solve the closed-loop Nash equilibrium, given the optimal strategies $\hat\alpha^{(k)j}$ for $j\neq i$ with the corresponding trajectories $$\hat X^{-{(k)i}}=(\hat X^{(1)1},\cdots,\hat X^{(k)i-1},\hat X^{(k)i+1},\cdots,\hat X^{(2)N_2}),$$ bank $(1)i$ and bank $(2)j$ intend to minimize the objective functions through the value functions written as
\begin{equation}\label{value-function-1}
V^{(1)i}(t,x)=\inf_{\alpha^{(1)i}\in{\cal{A}} }\mathbb{E} _{t,x}\left\{\int_t^T f^N_{(1)}(X^{(1)i}_s, \hat X^{-(1)i}_s, \alpha^{(1)i}_s )ds+ g^N_{(1)}(X_{T}^{(1)i},\hat X^{-(1)i}_T )\right\},
\end{equation}
and
\begin{equation}\label{value-function-2}
V^{(2)j}(t,x)=\inf_{\alpha^{(2)j}\in{\cal{A}}}\mathbb{E} _{t,x}\left\{\int_t^T f^N_{(2)}(X^{(2)j}_s, \hat X^{-(2)j}_s, \alpha^{(2)j}_s )ds+ g^N_{(2)}(X_{T}^{(2)j},\hat X^{-(2)j}_T )\right\},
\end{equation}
subject to
\begin{eqnarray}\label{coupled-1}
\nonumber dX^{(1)i}_t &=& (\alpha_{t}^{(1)i}+\gamma^{(1)}_{t})dt\\
&&+\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right) ,
\end{eqnarray}
and
\begin{eqnarray}\label{coupled-2}
\nonumber dX^{(2)j}_t &=& (\alpha_{t}^{(2)j}+\gamma^{(2)}_{t})dt\\
&&+\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right),
\end{eqnarray}
where $W^{(k)i}_t$, for $k=1,2$, $i=1,\cdots,N_1$, and $j=1,\cdots,N_2$, are standard Brownian motions, and $W^{(0)}_t$ and $W^{(k)}_t$ for $k=1,2$ are the common noises between groups and within groups, with the parameters $\rho$ and $\rho_k$ for $k=1,2$ denoting the correlations between groups and within groups, respectively. The initial value $X_0^{(k)i}$ may also be a square-integrable random variable $\xi^{(k)}$. The control process $\alpha^{(k)i}$ is progressively measurable, and ${\cal{A}}$ denotes the admissible set for $\alpha^{(k)i}$ given by \eqref{admissible}.
Note that $\mathbb{E} _{t,x}$ denotes the expectation given $X_t=x$.
\vskip 0.5 in
\begin{theorem}\label{Hete-Nash}
Assume $q_k^2\leq \varepsilon_k$ for $k=1,2$, and let $\eta^{(i)}_t$ and $\phi^{(i)}_t$ for $i=1,\cdots,10$ satisfy \eqref{eta1} to \eqref{phi10}. Then the value functions of the closed-loop Nash equilibria for the problem \eqref{value-function-1} and \eqref{value-function-2} subject to \eqref{coupled-1} and \eqref{coupled-2} are given by
\begin{eqnarray}
\nonumber V^{(1)i}(t,x)&=&\frac{\eta^{(1)}_t}{2}( \overline x^{(1)}-x^{(1)i})^2+\frac{\eta^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\eta^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber&&+\eta^{(4)}_t( \overline x^{(1)}-x^{(1)i})\overline x^{(1)}+\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})\overline x^{(2)}+\eta^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\eta^{(7)}_t( \overline x^{(1)}-x^{(1)i})+\eta^{(8)}_t\overline x^{(1)}+\eta^{(9)}_t\overline x^{(2)}+\eta^{(10)}_t,\label{ansatz-1-prop}
\end{eqnarray}
and
\begin{eqnarray}
\nonumber V^{(2)j}(t,x)&=&\frac{\phi^{(1)}_t}{2}( \overline x^{(2)}-x^{(2)j})^2+\frac{\phi^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\phi^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber&&+\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(1)}+\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(2)}+\phi^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\phi^{(7)}_t(\overline x^{(2)}-x^{(2)j})+\phi^{(8)}_t\overline x^{(1)}+\phi^{(9)}_t\overline x^{(2)}+\phi^{(10)}_t,\label{ansatz-2-prop}
\end{eqnarray}
and the corresponding closed-loop Nash equilibria are
\begin{eqnarray}
\label{optimal-finite-ansatz-V1}
&&\hat\alpha^{(1)i}(t,x)=(q_1+\widetilde\eta^{(1)}_t)( \overline x^{(1)}-x^{(1)i})+\widetilde\eta^{(4)}_t\overline x^{(1)}+\widetilde\eta^{(5)}_t\overline x^{(2)}+\widetilde\eta_t^{(7)},\\
&&\hat\alpha^{(2)j}(t,x)=(q_2+\widetilde\phi^{(1)}_t)( \overline x^{(2)}-x^{(2)j})+\widetilde\phi^{(4)}_t\overline x^{(1)}+\widetilde\phi^{(5)}_t\overline x^{(2)}+\widetilde\phi_t^{(7)},\label{optimal-finite-ansatz-V2}
\end{eqnarray}
for $i=1,\cdots,N_1$ and $j=1,\cdots,N_2$ where
\begin{eqnarray}
&&\nonumber \widetilde\eta^{(1)}_t=(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t, \quad\widetilde\eta^{(4)}_t=(1-\frac{1}{N_1})\eta^{(4)}_t-\frac{1}{N_1}\eta^{(2)}_t-\lambda_1(1-\beta_1)q_1,\\
&&\label{tildeeta}\widetilde\eta^{(5)}_t=(1-\frac{1}{N_1})\eta^{(5)}_t-\frac{1}{N_1}\eta^{(6)}_t+\lambda_1\beta_2q_1,\quad\widetilde\eta^{(7)}_t=(1-\frac{1}{N_1})\eta^{(7)}_t-\frac{1}{N_1}\eta^{(8)}_t,
\end{eqnarray}
and
\begin{eqnarray}
&&\nonumber \widetilde\phi^{(1)}_t=(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi^{(5)}_t, \quad \widetilde\phi^{(4)}_t=(1-\frac{1}{N_2})\phi^{(4)}_t-\frac{1}{N_2}\phi^{(6)}_t+\lambda_2\beta_1q_2,\\
\nonumber&&\widetilde\phi^{(5)}_t=(1-\frac{1}{N_2})\phi^{(5)}_t-\frac{1}{N_2}\phi^{(3)}_t-\lambda_2(1-\beta_2)q_2,\quad \widetilde\phi^{(7)}_t=(1-\frac{1}{N_2})\phi^{(7)}_t-\frac{1}{N_2}\phi^{(9)}_t.\\
\label{tildephi}
\end{eqnarray}
\end{theorem}
\begin{proof}
See Appendix \ref{Appex-1}.
\end{proof}
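The coefficient ODEs \eqref{eta1}--\eqref{phi10} must be integrated backward in time from their terminal conditions. As a minimal illustration of this backward integration, consider a scalar Riccati equation of the same type, $\dot\eta_t = 2q\eta_t + \eta_t^2 - (\varepsilon - q^2)$ with terminal condition $\eta_T = c$, which corresponds to the one-group game of \cite{R.Carmona2013}. The sketch below integrates it backward with a fourth-order Runge--Kutta step; the parameter values are illustrative assumptions.

```python
# Backward RK4 integration of the scalar Riccati equation
#   d(eta)/dt = 2*q*eta + eta**2 - (eps - q**2),  eta(T) = c,
# the decoupled one-group equation.  Parameter values below
# are illustrative assumptions (they satisfy q^2 <= eps).
q, eps, c, T = 1.0, 2.0, 0.5, 1.0
n = 1000
dt = T / n

def f(eta):
    return 2.0 * q * eta + eta**2 - (eps - q**2)

eta = c
for _ in range(n):
    # step backward in time: eta(t - dt) from eta(t), RK4 with step -dt
    k1 = f(eta)
    k2 = f(eta - 0.5 * dt * k1)
    k3 = f(eta - 0.5 * dt * k2)
    k4 = f(eta - dt * k3)
    eta -= dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

eta0 = eta
print(eta0)  # eta at t = 0
```

For these values the backward solution decreases from $c=0.5$ toward the stable root $-q+\sqrt{\varepsilon}=\sqrt 2 - 1\approx 0.414$ of the right-hand side, so it stays positive and bounded; the same backward-shooting idea applies componentwise to the coupled system.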
Similarly, due to the complexity of the coupled ODEs given by (\ref{eta1}-\ref{phi10}), we now study the existence of the coupled system (\ref{eta1}-\ref{phi10}) in the case of $N_1\rightarrow\infty$ and $N_2\rightarrow\infty$.
\begin{prop}\label{Prop_suff}
As $N_1\rightarrow\infty$ and $N_2\rightarrow\infty$, the coupled equations \eqref{eta1} to \eqref{eta6} and \eqref{phi1} to \eqref{phi6} are rewritten as
\begin{eqnarray}
\label{eta1_N} \dot{\widehat{\eta}}^{(1)}_t&=&2 q_1\widehat\eta^{(1)}_t+ (\widehat\eta_t^{(1)})^2-(\varepsilon_1-q_1^2),\\
\nonumber {\dot{\widehat\eta}^{(2)}_t}&=&2\left(-\widehat\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \widehat\eta^{(2)}_t-(\widehat\eta_t^{(4)})^2\\
&&-2\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2(\beta_1-1)^2,\label{eta2_N}\\
\nonumber {\dot{\widehat\eta}^{(3)}_t} &=&2\left(-\widehat\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right)\widehat\eta^{(3)}_t-(\widehat\eta_t^{(5)})^2\\
&&-2\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2\beta_2^2,\label{eta3_N}\\
\nonumber \dot{\widehat\eta}^{(4)}_t&=&q_1\left(1+\lambda_1(1-\beta_1)\right) \widehat\eta^{(4)}_t-(\widehat\eta^{(4)}_t)^2\\
&&-\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\eta^{(5)}_t+(\varepsilon_1-q_1^2)\lambda_1(1- \beta_1),\label{eta4_N}\\
\dot{\widehat\eta}^{(5)}_t&=&\left(q_1-\widehat\eta^{(4)}_t-\widehat\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \widehat\eta^{(5)}_t-q_1\lambda_1\beta_2\widehat\eta^{(4)}_t-(\varepsilon_1-q_1^2)\lambda_1\beta_2,\label{eta5_N}\\
\nonumber\dot{\widehat\eta}^{(6)}_t&=&\left(-\widehat\eta^{(4)}_t-\widehat\phi^{(5)}_t+q_1\lambda_1(1-\beta_1)+q_2\lambda_2(1-\beta_2)\right)\widehat\eta^{(6)}_t-\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat\eta^{(2)}_t\\
&&-\widehat\eta_t^{(4)}\widehat\eta_t^{(5)}-\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat \eta^{(3)}_t+(\varepsilon_1-q_1^2)\lambda_1^2(1-\beta_1)\beta_2,\label{eta6_N}
\end{eqnarray}
\begin{eqnarray}
\label{phi1_N}\dot{\widehat\phi}^{(1)}_t&=&2 q_2\widehat\phi^{(1)}_t+(\widehat\phi^{(1)}_t)^2 -(\varepsilon_2-q_2^2),\\
\nonumber {\dot{\widehat\phi}^{(2)}_t}&=&2\left(-\widehat\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \widehat\phi^{(2)}_t-(\widehat\phi_t^{(4)})^2\\
&&-2\left( \widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\phi^{(6)}_t-(\varepsilon_2-q_2^2)\lambda_2^2\beta_2^2,\label{phi2_N}\\
\nonumber {\dot{\widehat\phi}^{(3)}_t} &=&2\left(-\widehat\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \widehat\phi^{(3)}_t -(\widehat\phi_t^{(5)})^2\\
&&-2\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat \phi^{(6)}_t-(\varepsilon_2-q_2^2)\lambda_2^2(\beta_2-1)^2,
\label{phi3_N}\\
\dot{\widehat\phi}^{(4)}_t&=&\left(q_2-\widehat\eta^{(4)}_t-\widehat\phi_t^{(5)}+q_1\lambda_1(1-\beta_1)\right)\widehat\phi^{(4)}_t-q_2\lambda_2\beta_1\widehat \phi^{(5)}_t-(\varepsilon_2-q_2^2)\lambda_2\beta_1,\label{phi4_N}\\
\nonumber \dot{\widehat\phi}^{(5)}_t&=&q_2\left(1+\lambda_2(1-\beta_2)\right)\widehat\phi^{(5)}_t-(\widehat\phi^{(5)}_t)^2\\
&&-\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat\phi_t^{(4)}+(\varepsilon_2-q_2^2)\lambda_2(1-\beta_2),\label{phi5_N}\\
\nonumber\dot{\widehat\phi}^{(6)}_t&=&\left(-\widehat\eta^{(4)}_t-\widehat\phi^{(5)}_t+q_1\lambda_1(1-\beta_1)+q_2\lambda_2(1-\beta_2)\right) \widehat\phi^{(6)}_t-\widehat\phi_t^{(4)}\widehat\phi_t^{(5)}\\
&&-\left(\widehat\eta^{(5)}_t+q_1\lambda_1\beta_2\right)\widehat \phi^{(2)}_t-\left(\widehat\phi^{(4)}_t+q_2\lambda_2\beta_1\right)\widehat\phi^{(3)}_t- (\varepsilon_2-q_2^2)\lambda_2^2\beta_1(\beta_2-1),\label{phi6_N}
\end{eqnarray}
with terminal conditions
\begin{eqnarray*}
&&\widehat\eta_T^{(1)}=c_1,\quad \widehat\eta_T^{(2)}=c_1\lambda_1^2(\beta_1-1)^2, \quad \widehat\eta_T^{(3)}=c_1\lambda_1^2\beta_2^2, \quad\widehat \eta_T^{(4)}=c_1\lambda_1(\beta_1-1),\\
&&\widehat\eta_T^{(5)}=c_1\lambda_1\beta_2,\quad\widehat \eta_T^{(6)}=c_1\lambda_1^2(\beta_1-1)\beta_2,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\widehat\phi_T^{(1)}=c_2,\quad \widehat \phi_T^{(2)}=c_2\lambda_2^2\beta_1^2, \quad \widehat\phi_T^{(3)}=c_2\lambda_2^2(\beta_2-1)^2, \quad \widehat\phi_T^{(4)}=c_2\lambda_2\beta_1,\\
&&\widehat\phi_T^{(5)}=c_2\lambda_2(\beta_2-1),\quad \widehat\phi_T^{(6)}=c_2\lambda_2^2\beta_1(\beta_2-1),
\end{eqnarray*}
where $0<\beta_1,\;\beta_2<1$, $\beta_1+\beta_2=1$, $0<\lambda_1,\;\lambda_2<1$, and $q_1,q_2,\varepsilon_1,\varepsilon_2>0$ with $\varepsilon_1-q_1^2>0$ and $\varepsilon_2-q_2^2>0$. The existence of solutions to the coupled equations \eqref{eta1_N} to \eqref{phi6_N} is verified.
\end{prop}
\begin{proof}
We first observe that the existence of solutions to the coupled equations (\ref{eta1_N})--(\ref{phi6_N}) relies on the existence of solutions to the coupled Riccati equations (\ref{eta4_N})--(\ref{eta5_N}) and (\ref{phi4_N})--(\ref{phi5_N}). Using (\ref{eta4_N})--(\ref{eta5_N}) and (\ref{phi4_N})--(\ref{phi5_N}), we have
\begin{eqnarray}
\nonumber \dot{\widehat\eta}_t^{(4)}+\dot{\widehat\eta}_t^{(5)}&=&q_1\widehat\eta^{(4)}_t-(\widehat\eta^{(4)}_t)^2-\widehat\phi^{(4)}_t\widehat\eta^{(5)}_t+q_1\widehat\eta^{(5)}_t-\widehat\phi^{(5)}_t\widehat\eta^{(5)}_t-\widehat\eta^{(4)}_t\widehat\eta^{(5)}_t\\
&=&\left(q_1-\widehat\eta^{(4)}_t\right)\left(\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t\right)-\widehat\eta^{(5)}_t\left(\widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t\right)\label{eqn_sum_1}
\end{eqnarray}
and similarly
\begin{eqnarray}
\dot{\widehat\phi}_t^{(4)}+\dot{\widehat\phi}_t^{(5)}&=&\left(q_2-\widehat\phi^{(5)}_t\right)\left(\widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t\right)-\widehat\phi^{(4)}_t\left(\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t\right)\label{eqn_sum_2}
\end{eqnarray}
with the terminal conditions $\widehat\eta^{(4)}_T+\widehat\eta^{(5)}_T=0$ and $\widehat\phi^{(4)}_T+\widehat\phi^{(5)}_T=0$. Observe that \eqref{eqn_sum_1} and \eqref{eqn_sum_2} are linear equations for $\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t$ with zero terminal conditions; since a linear ODE $\dot y_t=a_ty_t$ with $y_T=0$ has the unique solution $y_t=y_Te^{-\int_t^Ta_s\,ds}\equiv 0$, this implies
\begin{equation}\label{suff_cond_1}
\widehat\eta^{(4)}_t+\widehat\eta^{(5)}_t=0, \quad \widehat\phi^{(4)}_t+\widehat\phi^{(5)}_t=0,
\end{equation}
for $0 \leq t\leq T$. Namely $\widehat\eta^{(4)}_t=-\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t=-\widehat\phi^{(5)}_t$ for $0\leq t\leq T$.
Hence, it is sufficient to study the existence of $\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t$.
Inserting $\widehat\eta^{(4)}_t=-\widehat\eta^{(5)}_t$ and $\widehat\phi^{(5)}_t=-\widehat\phi^{(4)}_t$ into the equations for $\widehat\eta^{(5)}_t$ and $\widehat\phi^{(4)}_t$ gives
\begin{eqnarray*}
\dot{\widehat\eta}_t^{(5)}&=&\left(q_1+\widehat\eta_t^{(5)}+\widehat\phi_t^{(4)}+q_2\lambda_2\beta_1\right)\widehat\eta_t^{(5)}+q_1\lambda_1\beta_2\widehat\eta_t^{(5)}-(\varepsilon_1-q_1^2)\lambda_1\beta_2\\
&=&\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)\widehat\eta_t^{(5)}+(\widehat\eta_t^{(5)})^2+\widehat\phi_t^{(4)}\widehat\eta_t^{(5)}-(\varepsilon_1-q_1^2)\lambda_1\beta_2,
\end{eqnarray*}
and
\begin{eqnarray*}
\dot{\widehat\phi}_t^{(4)}&=&\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)\widehat\phi_t^{(4)}+(\widehat\phi_t^{(4)})^2+\widehat\eta_t^{(5)}\widehat\phi_t^{(4)}-(\varepsilon_2-q_2^2)\lambda_2\beta_1.
\end{eqnarray*}
Now, consider ${\check\eta}_t^{(5)}=\widehat\eta_{T-t}^{(5)}$ and $\check\phi_t^{(4)}=\widehat\phi_{T-t}^{(4)}$. Then
\begin{eqnarray}
\dot{\check\eta}_t^{(5)}&=&-\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)\check\eta_t^{(5)}-(\check\eta_t^{(5)})^2-\check\phi_t^{(4)}\check\eta_t^{(5)}+(\varepsilon_1-q_1^2)\lambda_1\beta_2,\\
\dot{\check\phi}_t^{(4)}&=&-\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)\check\phi_t^{(4)}-(\check\phi_t^{(4)})^2-\check\eta_t^{(5)}\check\phi_t^{(4)}+(\varepsilon_2-q_2^2)\lambda_2\beta_1,
\end{eqnarray}
with the initial conditions $\check\eta_0^{(5)}=c_1\lambda_1\beta_2$ and $\check\phi_0^{(4)}=c_2\lambda_2\beta_1$. A simple comparison argument implies that $\check\eta_t^{(5)}$ and $\check\phi_t^{(4)}$ remain positive for $0\leq t\leq T$. Then, we have
\begin{eqnarray*}
\dot{\check\eta}_t^{(5)}&\leq& -\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)\check\eta_t^{(5)}+(\varepsilon_1-q_1^2)\lambda_1\beta_2,\\
\dot{\check\phi}_t^{(4)}&\leq& -\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)\check\phi_t^{(4)}+(\varepsilon_2-q_2^2)\lambda_2\beta_1,
\end{eqnarray*}
leading to
\begin{eqnarray*}
e^{\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)t}\check\eta_t^{(5)}&\leq&c_1\lambda_1\beta_2+(\varepsilon_1-q_1^2)\lambda_1\beta_2\int_0^t e^{\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)s}ds\\
&\leq&c_1\lambda_1\beta_2+\frac{(\varepsilon_1-q_1^2)\lambda_1\beta_2}{q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2}e^{\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)t},
\end{eqnarray*}
such that
\begin{equation}
0\leq \check\eta_t^{(5)} \leq c_1\lambda_1\beta_2 e^{-\left(q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2\right)t}+\frac{(\varepsilon_1-q_1^2)\lambda_1\beta_2}{q_1+q_2\lambda_2\beta_1+q_1\lambda_1\beta_2}.
\end{equation}
Similarly, we also get
\begin{equation}
0\leq \check\phi_t^{(4)}\leq c_2\lambda_2\beta_1 e^{-\left(q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1\right)t}+\frac{(\varepsilon_2-q_2^2)\lambda_2\beta_1}{q_2+q_1\lambda_1\beta_2+q_2\lambda_2\beta_1}.
\end{equation}
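To illustrate the bounds just derived, a minimal numerical sketch (with hypothetical parameter values chosen only to satisfy the standing assumptions $\beta_1+\beta_2=1$ and $\varepsilon_k-q_k^2>0$, not the paper's calibration) integrates the time-reversed system for $(\check\eta^{(5)}_t,\check\phi^{(4)}_t)$ by explicit Euler and checks positivity and the two exponential bounds along the trajectory:

```python
import math

# Hypothetical parameters (NOT the paper's numerics), satisfying
# 0 < beta_1, beta_2 < 1 with beta_1 + beta_2 = 1 and eps_k - q_k^2 > 0.
q1, q2 = 0.5, 0.4
eps1, eps2 = 1.0, 0.8
lam1, lam2 = 0.3, 0.6
b1, b2 = 0.2, 0.8
c1, c2 = 1.0, 0.5
T, n = 2.0, 20_000
dt = T / n

a1 = q1 + q2 * lam2 * b1 + q1 * lam1 * b2   # decay rate for check-eta^(5)
a2 = q2 + q1 * lam1 * b2 + q2 * lam2 * b1   # decay rate for check-phi^(4)
k1 = (eps1 - q1**2) * lam1 * b2             # constant source terms
k2 = (eps2 - q2**2) * lam2 * b1

# Explicit Euler for the time-reversed system with the stated initial values.
eta = c1 * lam1 * b2
phi = c2 * lam2 * b1
for i in range(n):
    t = i * dt
    # positivity and the exponential bounds from the proof
    assert eta > 0 and phi > 0
    assert eta <= c1 * lam1 * b2 * math.exp(-a1 * t) + k1 / a1 + 1e-6
    assert phi <= c2 * lam2 * b1 * math.exp(-a2 * t) + k2 / a2 + 1e-6
    d_eta = -a1 * eta - eta**2 - phi * eta + k1
    d_phi = -a2 * phi - phi**2 - eta * phi + k2
    eta += dt * d_eta
    phi += dt * d_phi

print(round(eta, 4), round(phi, 4))
```

The constants $k_i/a_i$ in the code match the stationary parts of the two displayed upper bounds.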
The proof is complete.
\end{proof}
According to the results in Proposition \ref{Prop_suff}, when $N_1$ and $N_2$ are large enough, the existence of solutions to the coupled ODEs \eqref{eta1} to \eqref{phi10} is guaranteed.
\subsection{Open-loop Nash Equilibria}\label{FBSDE-approach}
Referring to \cite{R.Carmona2013}, we now study the open-loop Nash equilibria, where the strategies are fixed at the initial time; namely, each strategy is a function of $t$ and $X_0$, as in \cite{CarmonaSIAM2016}.
\begin{theorem}\label{Hete-open}
Assume $q_k^2\leq \varepsilon_k$ for $k=1,2$, and that $\eta^{o,(i)}_t$ and $\phi^{o,(i)}_t$ for $i=1,\cdots,4$ satisfy \eqref{eta_open-1} to \eqref{phi_open-4}. The open-loop Nash equilibria are written as
\begin{eqnarray}
\nonumber \hat\alpha^{o,(1)i}&=&\left(q_1+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(1)}\right)(\overline X_t^{(1)}-X_t^{(1)i})\\
\nonumber&&+\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\overline X_t^{(1)}\\
&&+\left(q_1\lambda_1\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_1}\right)\eta_t^{o,(4)},\\
\nonumber \hat\alpha^{o,(2)j}&=&\left(q_2+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(1)}\right)(\overline X_t^{(2)}-X_t^{(2)j})\\
\nonumber &&+\left(q_2\lambda_2\beta_1+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\overline X_t^{(1)}\\
&&+\left(q_2\lambda_2(\beta_2-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_2}\right)\phi_t^{o,(4)},
\end{eqnarray}
where $$\frac{1}{\widetilde N_k}=\frac{1-\lambda_k}{N_k}+\frac{\lambda_k}{N},$$ for $k=1,2$.
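Since $1/\widetilde N_k$ is a convex combination of $1/N_k$ and $1/N$, the effective size $\widetilde N_k$ interpolates between the group size and the total population size. A minimal sketch (the value $\lambda_2=0.5$ below is a hypothetical choice, not taken from the paper):

```python
# Effective group size N~_k from the open-loop theorem:
# 1/N~_k = (1 - lambda_k)/N_k + lambda_k/N.

def effective_size(N_k: int, N: int, lam: float) -> float:
    """Return N~_k given group size N_k, total size N, and weight lam."""
    return 1.0 / ((1.0 - lam) / N_k + lam / N)

# Example with the two-group split N = 10, N_1 = 2, N_2 = 8 used later
# in the numerics; lambda_2 = 0.5 is an illustrative assumption.
n1 = effective_size(2, 10, 0.1)   # group 1 with lambda_1 = 0.1
n2 = effective_size(8, 10, 0.5)   # group 2 with hypothetical lambda_2 = 0.5
print(round(n1, 4), round(n2, 4))  # → 2.1739 8.8889
assert 2 <= n1 <= 10 and 8 <= n2 <= 10
```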
\end{theorem}
\begin{proof}
See Appendix \ref{Appex-open}.
\end{proof}
Due to the complexity of the coupled ordinary differential equations (ODEs) (\ref{eta_open-1})--(\ref{phi_open-4}), we also verify the existence of solutions to (\ref{eta_open-1})--(\ref{phi_open-4}) in the case $N_1\rightarrow\infty$ and $N_2\rightarrow\infty$, where the equations are given by
\begin{eqnarray}
\dot{\widehat{\eta}}^{o,(1)}_t&=&2 q_1\widehat\eta^{o,(1)}_t+ (\widehat\eta_t^{o,(1)})^2-(\varepsilon_1-q_1^2),\\
\nonumber \dot{\widehat\eta}^{o,(2)}_t&=&q_1\left(1+\lambda_1(1-\beta_1)\right) \widehat\eta^{o,(2)}_t-(\widehat\eta^{o,(2)}_t)^2\\
&&-\left(\widehat\phi^{o,(2)}_t+q_2\lambda_2\beta_1\right)\widehat\eta^{o,(3)}_t+(\varepsilon_1-q_1^2)\lambda_1(1- \beta_1),\\
\nonumber \dot{\widehat\eta}^{o,(3)}_t&=&\left(q_1-\widehat\eta^{o,(2)}_t-\widehat\phi^{o,(3)}_t+q_2\lambda_2(1-\beta_2)\right) \widehat\eta^{o,(3)}_t\\
&&-q_1\lambda_1\beta_2\widehat\eta^{o,(2)}_t-(\varepsilon_1-q_1^2)\lambda_1\beta_2,\\
\dot{\widehat\phi}^{o,(1)}_t&=&2 q_2\widehat\phi^{o,(1)}_t+(\widehat\phi^{o,(1)}_t)^2 -(\varepsilon_2-q_2^2),\\
\nonumber\dot{\widehat\phi}^{o,(2)}_t&=&\left(q_2-\widehat\eta^{o,(2)}_t-\widehat\phi_t^{o,(3)}+q_1\lambda_1(1-\beta_1)\right)\widehat\phi^{o,(2)}_t\\
&& -q_2\lambda_2\beta_1\widehat \phi^{o,(3)}_t-(\varepsilon_2-q_2^2)\lambda_2\beta_1, \\
\nonumber \dot{\widehat\phi}^{o,(3)}_t&=&q_2\left(1+\lambda_2(1-\beta_2)\right)\widehat\phi^{o,(3)}_t-(\widehat\phi^{o,(3)}_t)^2\\
&&-\left(\widehat\eta^{o,(3)}_t+q_1\lambda_1\beta_2\right)\widehat\phi_t^{o,(2)}+(\varepsilon_2-q_2^2)\lambda_2(1-\beta_2),
\end{eqnarray}
with the terminal conditions
\begin{eqnarray*}
\widehat\eta_T^{o,(1)}=c_1,\;\widehat\eta_T^{o,(2)}=c_1\lambda_1(\beta_1-1),\;\widehat\eta_T^{o,(3)}=c_1\lambda_1\beta_2,\\
\widehat\phi_T^{o,(1)}=c_2,\;\widehat\phi_T^{o,(2)}=c_2\lambda_2\beta_1,\;\widehat\phi_T^{o,(3)}=c_2\lambda_2(\beta_2-1).
\end{eqnarray*}
Note that, according to the results in Section \ref{HJB-approach}, the Riccati equations for $\widehat\eta_t^{o,(2)}$ and $\widehat\phi_t^{o,(3)}$ coincide with those for $\widehat\eta_t^{(4)}$ and $\widehat\phi_t^{(5)}$, and the linear ODEs for $\widehat\eta_t^{o,(3)}$ and $\widehat\phi_t^{o,(2)}$ coincide with those for $\widehat\eta_t^{(5)}$ and $\widehat\phi_t^{(4)}$. Hence, the existence of solutions to the coupled ODEs is verified by Proposition \ref{Prop_suff}.
We then discuss the $\varepsilon$-Nash equilibria in the case of the general $d$-group mean field game; namely, $N\rightarrow\infty$ and $N_k\rightarrow\infty$ with $\frac{N_k}{N}\rightarrow\beta_k$ for $k=1,\cdots,d$.
\section{$\varepsilon$-Nash Equilibria: Mean Field Games}\label{MFG}
We return to the case of the general $d$ groups. The $i$-th bank in group $k$ minimizes the objective function given by
\begin{eqnarray}
\nonumber J^{(k)i}(\alpha)&=&\mathbb{E} \bigg\{\int_{0}^{T}
\left[\frac{(\alpha_{t}^{(k)i})^2}{2}-q_k\alpha_{t}^{(k)i}\left(\overline{X}^{\lambda_k}_{t}-X^{(k)i}_{t}\right)
+\frac{\varepsilon_k}{2}\left(\overline{X}^{\lambda_k}_{t}-X^{(k)i}_{t}\right)^2\right]dt\\
& &+ \frac{c_k}{2}\left(\overline{X}^{\lambda_k}_{T}-X^{(k)i}_{T}\right)^2\bigg\},
\end{eqnarray}
with $q_k^2<\varepsilon_k$ subject to
\begin{eqnarray}
\nonumber dX^{(k)i}_t &=& (\alpha_{t}^{(k)i}+\gamma^{(k)}_{t})dt\\
&&+\sigma_k\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)i}_t\right)\right),
\end{eqnarray}
with the initial value $X_0^{(k)i}$, which may also be a square-integrable random variable $\xi^{(k)}$. In the mean field limit, as $N\rightarrow \infty$ and $\frac{N_k}{N}\rightarrow\beta_k$ for all $k$, as in the one-group case, the problem with $d$ heterogeneous groups reduces to a $d$-player game. Following \cite{R.Carmona2013}, the scheme to obtain the $\varepsilon$-Nash equilibria is as follows:
\begin{enumerate}
\item Fix $m^{(k)}_t=\mathbb{E} [X^{(k)}_t|(W^{(0)}_s)_{s\leq t}]$, a candidate for the limit of $\overline{X}^{(k)}_t$ as $N_k\rightarrow \infty$:
\[
m^{(k)}_t=\lim_{N_k\rightarrow\infty}\overline{X}^{(k)}_t,
\]
for all $k$, and
\[
M_t=\lim_{N,N_1,\ldots,N_d\rightarrow\infty}\sum_{k=1}^d\frac{N_k}{N}\overline{X}^{(k)}_t=\sum_{k=1}^d\beta_km^{(k)}_t,
\]
where a vector of standard Brownian motions $W^{(0)}=(W^{(0),(0)},\cdots,W^{(0),(d)})$ represents the common noises.
\item Substitute $m_t^{(k)}$ for $\overline X^{(k)}_t$ and solve the $d$-player control problem by minimizing the objective function written as
\begin{eqnarray*}
&& \nonumber \inf_{ \alpha^{(k)}\in {\cal{A}}} \mathbb{E} \bigg\{\int_{0}^{T} \left[\frac{(\alpha_{t}^{(k)})^2}{2}-q_k\alpha_{t}^{(k)}\left(M^{\lambda_k}_{t}-X^{(k)}_{t}\right)+\frac{\varepsilon_k}{2}\left(M^{\lambda_k}_{t}-X^{(k)}_{t}\right)^2\right]dt\\
&&\quad\quad\quad+ \frac{c_k}{2}\left(M^{\lambda_k}_{T}-X^{(k)}_{T}\right)^2\bigg\}
\end{eqnarray*}
with $q_k^2<\varepsilon_k$ subject to the dynamics
\begin{eqnarray}
\nonumber dX^{(k)}_t &=& (\alpha_{t}^{(k)}+\gamma^{(k)}_{t})dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(0),(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)}_t\right)\right),\label{X-d-groups}
\end{eqnarray}
with $M_t^{\lambda_k}=(1-\lambda_k)m_t^{(k)}+\lambda_kM_t$ where $W_t^{(k)}$ is a Brownian motion independent of $W_t^{(0)}$.
\item Similarly, solve the fixed point problem: find $$m^{(k)}_t=\mathbb{E} [X^{(k)}_t|(W^{(0)}_s)_{s\leq t},(W_s^{(0),(k)})_{s\leq t}]$$ for all $t$.
\end{enumerate}
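The scheme above is a fixed-point problem in the conditional-mean paths $m^{(k)}$. A minimal schematic sketch of step (3), in which a HYPOTHETICAL affine map on discretized paths stands in for the true best response produced by steps (1)--(2), iterated with damping until the path stops changing:

```python
# Schematic damped fixed-point iteration over a discretized mean path.
# best_response is a HYPOTHETICAL affine stand-in Phi(m) = a*m + b for the
# map that, given a frozen mean path, solves the control problem of step (2)
# and returns the induced mean path; it is not the paper's model.

def best_response(m, a=0.5, b=1.0):
    return [a * x + b for x in m]

def fixed_point(m0, damping=0.5, tol=1e-10, max_iter=1000):
    m = list(m0)
    for _ in range(max_iter):
        proposal = best_response(m)
        # damped update: m <- (1 - damping)*m + damping*Phi(m)
        m_next = [(1 - damping) * x + damping * y for x, y in zip(m, proposal)]
        if max(abs(x - y) for x, y in zip(m, m_next)) < tol:
            return m_next
        m = m_next
    raise RuntimeError("fixed-point iteration did not converge")

m_star = fixed_point([0.0] * 50)
# For Phi(m) = 0.5*m + 1 the unique fixed point is 2 at every time step.
assert all(abs(x - 2.0) < 1e-8 for x in m_star)
print(m_star[0])
```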
\begin{theorem}\label{Hete-MFG-prop}
Assume $q_k^2\leq \varepsilon_k$ and that $\eta_t^{m,(k)}$, $\psi^{m,(k),h}_t$, and $\mu^{m,(k)}_t$ satisfy
\begin{eqnarray}
\label{Hete-eta-MFG}\dot\eta_t^{m,(k)}&=&2q_k \eta_t^{m,(k)}+(\eta_t^{m,(k)})^2-(\varepsilon_k-q_k^2),\\
\nonumber \dot\psi_t^{m,(k),h_1}&=&q_k\psi_t^{m,(k),h_1}-\sum_{h=1}^d\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h(\beta_{h_1 }-\delta_{h,h_1})\right)\\
&&-(\varepsilon_k-q_k^2)\lambda_k(\beta_{h_1}-\delta_{k,h_1}),\label{Hete-psi-MFG}\\
\dot\mu^{m,(k)}_t&=&q_k\mu^{m,(k)}_t-\sum_{h=1}^d\psi_t^{m,(k),h}(\mu_t^{m,(h)}+\gamma_t^{(h)}),
\label{Hete-mu-MFG}
\end{eqnarray}
with terminal conditions $\eta_T^{m,(k)}=c_k$, $\psi_T^{m,(k),h}=c_k\lambda_k(\beta_h-\delta_{k,h})$, and $\mu^{m,(k)}_T=0$ for $k,h=1,\cdots,d$, the $\varepsilon$-Nash equilibrium is given by
\begin{equation}\label{Hete-optimal-MFG}
\hat\alpha_t^{m,(k)}=(q_k+\eta^{m,(k)}_t)( m^{(k)}_t-x^{(k)})+\sum_{h=1}^d\widetilde\psi_t^{m,(k),h}m^{(h)}_t+\mu^{m,(k)}_t,
\end{equation}
where $m_t^{(k)}$ is given by
\begin{eqnarray*}
\nonumber dm^{(k)}_t&=&\bigg\{\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t+\gamma^{(k)}_t+q_k\lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m^{(h_1)}_t\bigg\}dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2} \rho_{k}dW_t^{(0),(k)} \right),
\end{eqnarray*}
for $k=1,\cdots,d$. Given $c_{\tilde k}\geq \max_{k,h}\left(\frac{q_k\lambda_k}{\lambda_h}-q_h\right)$ for $\tilde k=1,\cdots,d$, the existence of solutions to the coupled ODEs \eqref{Hete-eta-MFG} to \eqref{Hete-mu-MFG} can be verified.
\end{theorem}
\begin{proof}
See Appendix \ref{Appex-Hete-MFG}.
\end{proof}
In particular, in the case $d=2$, as $N\rightarrow\infty$, $N_1\rightarrow\infty$, and $N_2\rightarrow\infty$ with $\frac{N_1}{N}\rightarrow\beta_1$ and $\frac{N_2}{N}\rightarrow\beta_2$, we observe that $\widetilde\eta^{(4)}_t\rightarrow\widetilde\psi_t^{m,(1),1}$, $\widetilde\eta^{(5)}_t\rightarrow\widetilde\psi_t^{m,(1),2}$, $\widetilde\eta_t^{(7)}\rightarrow\widetilde\mu_t^{m,(1)}$, $\widetilde\phi^{(4)}_t\rightarrow\widetilde\psi_t^{m,(2),1}$, $\widetilde\phi^{(5)}_t\rightarrow\widetilde\psi_t^{m,(2),2}$, and $\widetilde\phi_t^{(7)}\rightarrow\widetilde\mu_t^{m,(2)}$ for $0\leq t \leq T$. Hence, the $\varepsilon$-Nash equilibria of the mean field game with heterogeneous groups are the limits of the closed-loop and open-loop Nash equilibria of the finite-player game with heterogeneous groups. The results are consistent with \cite{Lacker2018}. The asymptotic optimal strategy for bank $(1)i$ is therefore given by
\[
\hat\alpha^{m,(1)i}_t=(q_1+ \eta^{m,(1)}_t)( \overline x^{(1)}-x^{(1)i})+\widetilde\psi_t^{m,(1),1}\overline x^{(1)}+\widetilde\psi_t^{m,(1),2}\overline x^{(2)}+\mu^{m,(1)}_t.
\]
\begin{remark}
In the case of no common noise, the given $m_t$ for $0\leq t \leq T$ is a deterministic function. For instance, in the case $d=1$, the $\varepsilon$-Nash equilibrium in the mean field game can be obtained using the HJB equation written as
\begin{eqnarray*}
\partial_tV&+&\inf_{\alpha}\bigg\{\alpha\partial_xV+\frac{\sigma^2}{2}\partial_{xx}V +\frac{\alpha^2}{2}-q\alpha(m_t-x)+\frac{\varepsilon}{2}(m_t-x)^2\bigg\}=0,
\end{eqnarray*}
with the terminal condition $V(T,x)=\frac{c}{2}(m_T-x)^2$.
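Assuming $V$ is smooth, the infimum is attained pointwise by differentiating in $\alpha$, giving
\begin{equation*}
\hat\alpha(t,x)=q(m_t-x)-\partial_xV(t,x),
\end{equation*}
so that the HJB equation reduces to
\begin{equation*}
\partial_tV+\frac{\sigma^2}{2}\partial_{xx}V-\frac{1}{2}\left(q(m_t-x)-\partial_xV\right)^2+\frac{\varepsilon}{2}(m_t-x)^2=0.
\end{equation*}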
\end{remark}
\section{Financial Implications}\label{FI}
The aim of this section is to investigate the financial implications of this heterogeneous interbank lending and borrowing model. We mainly comment on the closed-loop Nash equilibria in the finite-player case, which are identical in form to the open-loop Nash equilibria and the $\varepsilon$-Nash equilibria. We recall the closed-loop Nash equilibria, written as
\begin{eqnarray}
\label{optimal-finite-ansatz-V1-FI}
\hat\alpha^{(1)i}(t,x)&=&(q_1+\widetilde\eta^{(1)}_t)( \overline x^{(1)}-x^{(1)i})+\widetilde\eta^{(4)}_t\overline x^{(1)}+\widetilde\eta^{(5)}_t\overline x^{(2)}+\widetilde\eta_t^{(7)},\\
\hat\alpha^{(2)j}(t,x)&=&(q_2+\widetilde\phi^{(1)}_t)(\overline x^{(2)}-x^{(2)j})+\widetilde\phi^{(4)}_t\overline x^{(1)}+\widetilde\phi^{(5)}_t\overline x^{(2)}+\widetilde\phi_t^{(7)}.\label{optimal-finite-ansatz-V2-FI}
\end{eqnarray}
The corresponding optimal trajectories are written as
\begin{eqnarray}
\nonumber dX_t^{(1)i}&=&\left\{(q_1+\widetilde\eta^{(1)}_t)( \overline X_t^{(1)}-X_t^{(1)i})+\widetilde\eta^{(4)}_t\overline X_t^{(1)}+\widetilde\eta^{(5)}_t\overline X_t^{(2)}+\widetilde\eta_t^{(7)}+\gamma^{(1)}_t\right\}dt \\
\nonumber&& +\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right)\\
\nonumber &=&\left\{\frac{q_1+\widetilde\eta^{(1)}_t}{N_1}\sum_{l=1}^{N_1}(X_t^{(1)l}-X_t^{(1)i})+\widetilde\eta^{(4)}_t\overline X_t^{(1)}+\widetilde\eta^{(5)}_t\overline X_t^{(2)}+\widetilde\eta_t^{(7)}+\gamma^{(1)}_t\right\}dt \\
\label{X-1-FI}&& +\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right),\\
\nonumber dX_t^{(2)j}&=&\left\{(q_2+\widetilde\phi^{(1)}_t)( \overline X_t^{(2)}-X_t^{(2)j})+\widetilde\phi^{(4)}_t\overline X_t^{(1)}+\widetilde\phi^{(5)}_t\overline X_t^{(2)}+\widetilde\phi_t^{(7)}+\gamma^{(2)}_t\right\}dt\\
\nonumber&& +\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right)\\
\nonumber&=&\left\{\frac{q_2+\widetilde\phi^{(1)}_t}{N_2}\sum_{l=1}^{N_2}(X_t^{(2)l}-X_t^{(2)j})+\widetilde\phi^{(4)}_t\overline X_t^{(1)}+\widetilde\phi^{(5)}_t\overline X_t^{(2)}+\widetilde\phi_t^{(7)}+\gamma^{(2)}_t\right\}dt\\
\label{X-2-FI}&& +\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right)
\end{eqnarray}
with given initial values $X_0^{(1)i}$ and $X_0^{(2)j}$ for $i=1,\cdots,N_1$ and $j=1,\cdots,N_2$.
Compared to the homogeneous-group scenario discussed in \cite{R.Carmona2013}, owing to the linear-quadratic structure, heterogeneity implies that banks do not only consider the distance between their capitalization and the average capitalization of their own group: the terms
$
\frac{1}{N_1}\sum_{l=1}^{N_1}(X_t^{(1)l}-X_t^{(1)i})
$
and $\frac{1}{N_2}\sum_{l=1}^{N_2}(X_t^{(2)l}-X_t^{(2)j})$ describe the lending and borrowing behavior within their own groups, identical to the homogeneous case, but the banks also track a linear combination of the averages of all groups.
In particular, when $N_1$ and $N_2$ are sufficiently large, based on the relations $\widetilde\eta^{(4)}_t=-\widetilde\eta^{(5)}_t$ and $\widetilde\phi^{(4)}_t=-\widetilde\phi^{(5)}_t$ from Proposition \ref{Prop_suff}, we rewrite the system (\ref{X-1-FI})--(\ref{X-2-FI}) as
\begin{eqnarray}
\nonumber dX_t^{(1)i}&=&\left\{\frac{q_1+\widetilde\eta^{(1)}_t}{N_1}\sum_{l=1}^{N_1}(X_t^{(1)l}-X_t^{(1)i})+\widetilde\eta^{(5)}_t(\overline X_t^{(2)}-\overline X_t^{(1)})+\widetilde\eta_t^{(7)}+\gamma^{(1)}_t\right\}dt \\
\label{X-1-FI-1}&& +\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right),\\
\nonumber dX_t^{(2)j}&=&\left\{\frac{q_2+\widetilde\phi^{(1)}_t}{N_2}\sum_{l=1}^{N_2}(X_t^{(2)l}-X_t^{(2)j})+\widetilde\phi^{(4)}_t(\overline X_t^{(1)}-\overline X_t^{(2)})+\widetilde\phi_t^{(7)}+\gamma^{(2)}_t\right\}dt\\
\label{X-2-FI-1}&& +\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right).
\end{eqnarray}
The terms $\widetilde\eta^{(5)}_t(\overline X_t^{(2)}-\overline X_t^{(1)})$ and $\widetilde\phi^{(4)}_t(\overline X_t^{(1)}-\overline X_t^{(2)})$, with $\widetilde\eta^{(5)}_t$ and $\widetilde\phi^{(4)}_t$ positive for $0\leq t \leq T$, give a mean-reverting interaction between the two groups, so that the ensemble averages tend to stay close to each other. The dynamics of the distance $\overline X^{D}_t= \overline X^{(1)}_t-\overline X_t^{(2)}$ are written as
\begin{eqnarray}
\nonumber d\overline X^{D}_t&=&-\left\{(\widetilde\eta^{(5)}_t+\widetilde\phi^{(4)}_t)\overline X^{D}_t+(\widetilde\eta_t^{(7)}+\gamma^{(1)}_t-\widetilde\phi_t^{(7)}-\gamma^{(2)}_t)\right\}dt\\
\nonumber &&+\rho\left(\sigma_1-\sigma_2\right)dW^{(0)}_t+\sqrt{1-\rho^2}\left(\sigma_1\rho_1dW^{(1)}_t-\sigma_2\rho_2dW_t^{(2)}\right)\\
&&+\sqrt{1-\rho^2}\left(\sqrt{1-\rho_1^2}\frac{1}{N_1}\sum_{l=1}^{N_1}dW^{(1)l}_t-\sqrt{1-\rho_2^2}\frac{1}{N_2}\sum_{l=1}^{N_2}dW^{(2)l}_t\right),
\end{eqnarray}
with $\overline X_0^{D}=\overline X^{(1)}_0-\overline X_0^{(2)}$. As $N_1, N_2\rightarrow\infty$, the distance $\overline X^{D}_t$ is driven by the common noises $W^{(0)}_t$, $W^{(1)}_t$, and $W^{(2)}_t$. Namely, stronger correlation leads to larger fluctuations between the groups.
On the contrary, in the case of no common noise with $\rho=\rho_1=\rho_2=0$ and no growth rates, $\gamma^{(1)}_t=\gamma^{(2)}_t=0$, leading to $\widetilde\eta_t^{(7)}=\widetilde\phi_t^{(7)}=0$ for all $t\geq 0$, we obtain $\overline X^{D}_t\rightarrow 0$ as $t\rightarrow\infty$, in the sense that, in the long run, all banks trace the global average $\overline X_t$, which is driven only by the scaled Brownian motion. This implies that systemic risk occurs in the same manner as studied in \cite{R.Carmona2013}, and therefore
\[
\mathbb{P} (\tau<\infty) = \lim_{T\to\infty}\mathbb{P} (\tau \leq T) =\lim_{T\to\infty}2\Phi\left(\frac{D\sqrt{N}}{\sigma\sqrt{T}}\right)= 1,
\]
with $\Phi$ being the standard normal distribution function and $\tau=\inf\{t:\overline X_t\leq D\}$ for a given default level $D$. The systemic event
\[
\left\{\left(\frac{1}{N}\sum_{i=1}^{N}X_{t}^{(i)}\right)\leq {D}\; \mathrm{for\;some}\; t\right\}
\]
defined in \cite{Fouque-Sun} is unavoidable.
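The limit above follows from the reflection principle for Brownian motion. As a sanity check, the finite-$T$ probability $\mathbb{P}(\tau\leq T)=2\Phi(D\sqrt{N}/(\sigma\sqrt{T}))$, for a default level $D<0$, can be compared against a Monte Carlo estimate; the parameter values below are illustrative, not taken from the paper's numerics:

```python
import math, random

# Monte Carlo check of P(tau <= T) = 2*Phi(D*sqrt(N)/(sigma*sqrt(T))) for
# the first passage of the scaled Brownian average below a level D < 0.
# Illustrative parameters (an assumption, not the paper's calibration).
D, sigma, N, T = -0.5, 1.0, 1, 1.0
steps, paths = 500, 8_000
dt = T / steps
random.seed(0)

hits = 0
for _ in range(paths):
    x = 0.0
    for _ in range(steps):
        x += (sigma / math.sqrt(N)) * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if x <= D:          # first passage below the default level
            hits += 1
            break
estimate = hits / paths

def Phi(x):  # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

formula = 2.0 * Phi(D * math.sqrt(N) / (sigma * math.sqrt(T)))
print(round(estimate, 3), round(formula, 3))
# Discrete-time monitoring slightly undercounts continuous-time crossings,
# so the estimate sits a little below the closed-form value.
assert abs(estimate - formula) < 0.06
```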
In the numerical analysis, suppose that the first group is composed of stronger banks and the second of smaller banks. As discussed in Section \ref{Heter}, we first assume $\beta_1=0.2$ and $\beta_2=0.8$ and further fix the relative weight $\lambda_1=0.1$ in the relative ensemble average
\[
\overline x^{\lambda_1}=(1-\lambda_1)\overline x^{(1)}+\lambda_1\overline{x},
\]
since the major players prefer tracing their group average rather than the ensemble average. Varying $\lambda_2$ and $N$, we then obtain the following implications:
\begin{enumerate}
\item We first comment on two extreme cases. When $\lambda_1=\lambda_2=0$ or $\beta_1=\beta_2=1$, the model degenerates to the two homogeneous group model without interaction between groups, as in \cite{R.Carmona2013}.
\item In the intermediate region, in Figure \ref{liquidityratelambda}, we observe that the liquidity rate
\begin{equation}\label{liquidityrate}
\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}
\end{equation}
increases in the relative proportion $\lambda_2$. Namely, when banks intend to trace the global ensemble average $\overline x$, they lend to or borrow from a central bank more frequently.
\item As the terminal time $T$ becomes large, Figure \ref{liquidityratelambda_t=10} shows that the liquidity rate tends to a constant. Identical results are obtained in \cite{R.Carmona2013}.
\item As the numbers of banks $N$, $N_1$, and $N_2$ become large, the liquidity rate \eqref{liquidityrate} also increases, and the interbank lending and borrowing behavior becomes more frequent. See Figure \ref{liquidityrate_N} for instance.
\end{enumerate}
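The long-time flattening of the liquidity rate can be checked on the scalar Riccati equation \eqref{eta1_N}, the large-$N_1$ counterpart of the finite-player equation behind \eqref{liquidityrate}: integrating $\dot{\widehat\eta}^{(1)}_t = 2q_1\widehat\eta^{(1)}_t + (\widehat\eta^{(1)}_t)^2 - (\varepsilon_1 - q_1^2)$ backward from $\widehat\eta^{(1)}_T = c_1$, the solution approaches the stable root $\sqrt{\varepsilon_1}-q_1$ of the right-hand side as $T-t$ grows. A minimal numerical sketch with the parameters of the figures, $q_1=2$, $\varepsilon_1=5$, $c_1=0$:

```python
import math

# Explicit Euler in reversed time s = T - t for the scalar Riccati equation
# deta/ds = -(2*q*eta + eta**2 - (eps - q**2)), with eta(s=0) = c.
q, eps, c = 2.0, 5.0, 0.0
T, n = 10.0, 100_000
ds = T / n

eta = c
for _ in range(n):
    eta -= ds * (2.0 * q * eta + eta**2 - (eps - q**2))

limit = math.sqrt(eps) - q   # stable equilibrium of the backward flow
print(round(eta, 4), round(limit, 4))  # → 0.2361 0.2361
assert abs(eta - limit) < 1e-3
```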
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm,height=8cm]{liquid_rate_t=1_N=10.eps}
\caption{The liquidity rate $\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}$ with varied $\lambda_2$ and fixed $\lambda_1=0.1$. The fixed parameters are $N=10$, $N_1=2$, $N_2=8$, $q_1=q_2=2$, $\varepsilon_1=5$, $\varepsilon_2=4.5$, $c_1=c_2=0$, $T=1$, and $\gamma^{(1)}_t=\gamma^{(2)}_t=0$ for $0\leq t\leq T$.}
\label{liquidityratelambda}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm,height=8cm]{liquid_rate_t=10_N=10.eps}
\caption{The liquidity rate $\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}$ with varied $\lambda_2$ and fixed $\lambda_1=0.1$. The fixed parameters are $N=10$, $N_1=2$, $N_2=8$, $q_1=q_2=2$, $\varepsilon_1=5$, $\varepsilon_2=4.5$, $c_1=c_2=0$, $T=10$, and $\gamma^{(1)}_t=\gamma^{(2)}_t=0$ for $0\leq t\leq T$.}
\label{liquidityratelambda_t=10}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm,height=8cm]{liquid_rate_t=1_N.eps}
\includegraphics[width=6cm,height=8cm]{liquid_rate_t=10_N.eps}
\caption{The liquidity rate $\tilde\eta^{(1)}=(1-\frac{1}{N_1})\eta^{(1)}-\frac{1}{N_1}\eta^{(4)}$ with varied $N$, $N_1$, and $N_2$ under the proportions $\beta_1=0.2$, $\beta_2=0.8$. The terminal times are $T=1$ (left) and $T=10$ (right). The parameters are $\lambda_1=0.5$, $\lambda_2=0.1$, $q_1=q_2=2$, $\varepsilon_1=5$, $\varepsilon_2=4.5$, $c_1=c_2=0$, and $\gamma^{(1)}_t=\gamma^{(2)}_t=0$ for $0\leq t\leq T$.}
\label{liquidityrate_N}
\end{center}
\end{figure}
\section{Conclusions}\label{conclusions}
We study a system of interbank lending and borrowing with heterogeneous groups, where the lending and borrowing depends on homogeneous parameters within groups and heterogeneous parameters between groups. The amount of lending and borrowing is based on the relative ensemble average
\[
\tilde x^{\lambda_k}=(1-\lambda_k)\overline x^{(k)}+\lambda_k\overline{x},\quad k=1,\cdots,d.
\]
Due to the heterogeneous structure, the value functions and the corresponding closed-loop and open-loop Nash equilibria are given by coupled Riccati equations. In the two-group case, the existence of solutions to the coupled Riccati equations can be proved when the number of banks is sufficiently large, so that the existence of the value functions and equilibria is guaranteed. In addition, in the case of mean field games with general $d$ groups, the existence of the $\varepsilon$-Nash equilibria is also verified.
We observe that owing to heterogeneity, the equilibria are consisted of the term of mean-reverting at their own group averages and all group ensemble averages . In the mean field case with no common noise, systemic event happens almost surely in the long time period. The numerical results illustrate that as banks intend to trace the global average as large $\lambda_k$, they prefer liquidating more frequently using a larger liquidity rate. The liquidity rate is also increasing in the number of banks.
The problem can be extended in several directions. First, it is interesting to discuss the delay obligation based on the model studied in \cite{Carmona-Fouque2016,FouqueZhang2018}. Second, it is nature to consider the stochastic growth rate in the system. Third, the CIR type processes can be applied to describe the capitalization of banks. See \cite{Fouque-Ichiba,Sun2016}. Furthermore, referring to \cite{BMMB2019}, the bubble assets is worth to study in the interbank lending and borrowing system. The admissible conditions for the equilibria of the above extensions are also interesting to investigate.
\section{Proof of Theorem \ref{Hete-Nash} and Verification Theorem} \label{Appex-1}
The corresponding coupled HJB equations for the value functions (\ref{value-function-1}) and (\ref{value-function-2}) read
\begin{eqnarray}\label{HJB1}
\nonumber \partial_{t}V^{(1)i} &+&\inf_{\alpha^{(1)i}}\bigg\{
\sum_{l\neq i,l=1}^{N_1}\bigg(\gamma^{(1)}_t+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(1)i}
+ \bigg(\gamma^{(1)}_t+{\alpha^{(1)i}}\bigg)\partial_{x^{(1)i}}V^{(1)i}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right) \partial_{x^{(2)l}x^{(2)h}}V^{(1)i}\\
&+&\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}\left(\overline x^{\lambda_1}-x^{(1)i}\right)+\frac{\varepsilon_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2\bigg\}=0,
\end{eqnarray}
with the terminal condition $V^{(1)i}(T,x)=\frac{c_1}{2}(\overline x^{\lambda_1} -x^{(1)i})^2$ and
\begin{eqnarray}\label{HJB2}
\nonumber \partial_{t}V^{(2)j}&+&\inf_{\alpha^{(2)j}}\bigg\{
\sum_{l=1}^{N_1}\bigg(\gamma_t^{(1)}+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(2)j}\\
\nonumber &+&\sum_{h\neq j,h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(2)j}+\bigg(\gamma_t^{(2)}+{\alpha^{(2)j}}\bigg)\partial_{x^{(2)j}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(2)j}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right)\partial_{x^{(2)l}x^{(2)h}}V^{(2)j}\\
&+&\frac{(\alpha^{(2)j})^2}{2}-q_2\alpha^{(2)j}\left(\overline x^{\lambda_2}-x^{(2)j}\right)+\frac{\varepsilon_2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2\bigg\} =0,
\end{eqnarray}
with the terminal condition $V^{(2)j}(T,x)=\frac{c_2}{2}(\overline x^{\lambda_2} -x^{(2)j})^2$. The first-order condition gives the candidate optimal strategy for bank $(k)i$:
\begin{equation}\label{candidate}
\hat\alpha^{(k)i}=q_k(\overline x^{\lambda_k}-x^{(k)i})-\partial_{x^{(k)i}}V^{(k)i}.
\end{equation}
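The first-order condition behind \eqref{candidate} is elementary but worth recording; the display below is ours, spelling out the routine check. The only terms of \eqref{HJB1} involving the control $\alpha^{(1)i}$ form a strictly convex quadratic:

```latex
% Control-dependent part of (HJB1), strictly convex in \alpha^{(1)i}:
\alpha^{(1)i}\,\partial_{x^{(1)i}}V^{(1)i}
  +\frac{(\alpha^{(1)i})^2}{2}
  -q_1\alpha^{(1)i}\left(\overline x^{\lambda_1}-x^{(1)i}\right)
% Stationarity in \alpha^{(1)i}:
\qquad\Longrightarrow\qquad
\partial_{x^{(1)i}}V^{(1)i}+\hat\alpha^{(1)i}
  -q_1\left(\overline x^{\lambda_1}-x^{(1)i}\right)=0.
```

Since the quadratic coefficient is $\frac12>0$, the critical point attains the infimum, and solving for $\hat\alpha^{(1)i}$ gives exactly \eqref{candidate}; the same computation with $q_2$ and $\overline x^{\lambda_2}$ yields the group-two candidate.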
Inserting \eqref{candidate} into \eqref{HJB1} and \eqref{HJB2} gives
\begin{eqnarray}
\nonumber \partial_{t}V^{(1)i}(t,x)&+& \bigg\{
\sum_{l=1}^{N_1}\bigg(\gamma^{(1)}_t+q_1(\overline x^{\lambda_1} -x^{(1)l})-\partial_{x^{(1)l}}V^{(1)l}\bigg)\partial_{x^{(1)l}}V^{(1)i}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+q_2(\overline x^{\lambda_2}-x^{(2)h})-\partial_{x^{(2)h}}V^{(2)h}\bigg)\partial_{x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right) \partial_{x^{(2)l}x^{(2)h}}V^{(1)i}\\
&+&\frac{(\partial_{x^{(1)i}}V^{(1)i})^2}{2}+\frac{\varepsilon_1-q_1^2}{2}(\overline x^{\lambda_1} -x^{(1)i})^2\bigg\}=0,\label{HJB1-1}
\end{eqnarray}
with the terminal condition $V^{(1)i}(T,x)=\frac{c_1}{2}(\overline x^{\lambda_1} -x^{(1)i})^2$ and
\begin{eqnarray}
\nonumber \partial_{t}V^{(2)j}(t,x)&+& \bigg\{
\sum_{l=1}^{N_1}\bigg(\gamma^{(1)}_t+q_1(\overline x^{\lambda_1} -x^{(1)l})-\partial_{x^{(1)l}}V^{(1)l}\bigg)\partial_{x^{(1)l}}V^{(2)j}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+q_2(\overline x^{\lambda_2}-x^{(2)h})-\partial_{x^{(2)h}}V^{(2)h}\bigg)\partial_{x^{(2)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\left(\rho^2+(1-\rho^2)\rho_{1}^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\right) \partial_{x^{(1)l}x^{(1)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\rho^2 \partial_{x^{(1)l}x^{(2)h}}V^{(2)j}\\
\nonumber&+&\frac{\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}\rho^2 \partial_{x^{(2)l}x^{(1)h}}V^{(2)j}\\
\nonumber &+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \left(\rho^2+(1-\rho^2)\rho_{2}^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\right)\partial_{x^{(2)l}x^{(2)h}}V^{(2)j}\\
&+&\frac{(\partial_{x^{(2)j}}V^{(2)j})^2}{2}+\frac{\varepsilon_2-q_2^2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2\bigg\}=0,\label{HJB2-1}
\end{eqnarray}
with the terminal condition $V^{(2)j}(T,x)=\frac{c_2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2$. We make the ansatz for $V^{(1)i}$ written as
\begin{eqnarray}
\nonumber V^{(1)i}(t,x)&=&\frac{\eta^{(1)}_t}{2}(\overline x^{(1)}-x^{(1)i})^2+\frac{\eta^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\eta^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber&&+\eta^{(4)}_t(\overline x^{(1)}-x^{(1)i})\overline x^{(1)}+\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})\overline x^{(2)}+\eta^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\eta^{(7)}_t(\overline x^{(1)}-x^{(1)i})+\eta^{(8)}_t\overline x^{(1)}+\eta^{(9)}_t\overline x^{(2)}+\eta^{(10)}_t\label{ansatz-1},
\end{eqnarray}
and the ansatz for $V^{(2)j}$ given by
\begin{eqnarray}
\nonumber V^{(2)j}(t,x)&=&\frac{\phi^{(1)}_t}{2}(\overline x^{(2)}-x^{(2)j})^2+\frac{\phi^{(2)}_t}{2}(\overline x^{(1)})^2+\frac{\phi^{(3)}_t}{2}(\overline x^{(2)})^2\\
\nonumber &&+\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(1)}+\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})\overline x^{(2)}+\phi^{(6)}_t\overline x^{(1)}\overline x^{(2)}\\
&&+\phi^{(7)}_t(\overline x^{(2)}-x^{(2)j})+\phi^{(8)}_t\overline x^{(1)}+\phi^{(9)}_t\overline x^{(2)}+\phi^{(10)}_t,\label{ansatz-2}
\end{eqnarray}
where $\eta^{(i)}$ and $\phi^{(j)}$ for $i=1,\cdots,10$ and $j=1,\cdots,10$ are deterministic functions with terminal conditions
\begin{eqnarray*}
&&\eta_T^{(1)}=c_1,\quad \eta_T^{(2)}=c_1\lambda_1^2(\beta_1-1)^2, \quad \eta_T^{(3)}=c_1\lambda_1^2\beta_2^2, \quad \eta_T^{(4)}=c_1\lambda_1(\beta_1-1),\\
&&\eta_T^{(5)}=c_1\lambda_1\beta_2,\quad \eta_T^{(6)}=c_1\lambda_1^2(\beta_1-1)\beta_2,\quad \eta_T^{(7)}=\eta_T^{(8)}=\eta_T^{(9)}=\eta_T^{(10)}=0,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\phi_T^{(1)}=c_2,\quad \phi_T^{(2)}=c_2\lambda_2^2\beta_1^2, \quad \phi_T^{(3)}=c_2\lambda_2^2(\beta_2-1)^2, \quad \phi_T^{(4)}=c_2\lambda_2\beta_1,\\
&&\phi_T^{(5)}=c_2\lambda_2(\beta_2-1),\quad \phi_T^{(6)}=c_2\lambda_2^2\beta_1(\beta_2-1),\quad \phi_T^{(7)}= \phi_T^{(8)}=\phi_T^{(9)}=\phi_T^{(10)}=0,
\end{eqnarray*}
using
\begin{eqnarray*}
(\overline x^{\lambda_1} -x^{(1)i})&=& (\overline x^{(1)}-x^{(1)i})+\lambda_1(\beta_1-1)\overline x^{(1)}+\lambda_1\beta_2\overline x^{(2)},\\
(\overline x^{\lambda_2}-x^{(2)j})&=&(\overline x^{(2)}-x^{(2)j})+\lambda_2\beta_1\overline x^{(1)}+\lambda_2(\beta_2-1)\overline x^{(2)}.
\end{eqnarray*}
Inserting the ansatz \eqref{ansatz-1} and \eqref{ansatz-2} into the HJB equations \eqref{HJB1} and \eqref{HJB2} and using
\begin{eqnarray*}
\partial_{x^{(1)l}}V^{(1)i}&=&\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(1)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_1}\eta^{(2)}_t\overline x^{(1)}+\frac{1}{N_1}\eta^{(4)}_t(\overline x^{(1)}-x^{(1)i})\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(4)}\overline x^{(1)}+ \left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(5)} \overline x^{(2)}+\frac{1}{N_1}\eta^{(6)}_t\overline x^{(2)}\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)},\\
\partial_{x^{(2)l}}V^{(1)i}&=& \frac{1}{N_2}\eta^{(3)}_t\overline x^{(2)}+\frac{1}{N_2}\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_2}\eta^{(6)}_t\overline x^{(1)}+\frac{1}{N_2}\eta_t^{(9)},
\end{eqnarray*}
\begin{eqnarray*}
\partial_{x^{(1)l}x^{(1)h}}V^{(1)i}&=&\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(1)}_t+\left(\frac{1}{N_1}\right)^2\eta_t^{(2)}\\
&&+\frac{1}{N_1}\left(\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\right)\eta^{(4)}_t,\\
\partial_{x^{(1)l}x^{(2)h}}V^{(1)i}&=& \frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t,\\
\partial_{x^{(2)l}x^{(1)h}}V^{(1)i}&=&\frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t,\\
\partial_{x^{(2)l}x^{(2)h}}V^{(1)i}&=& \left(\frac{1}{N_2}\right)^2\eta_t^{(3)},
\end{eqnarray*}
\begin{eqnarray*}
\partial_{x^{(1)l}}V^{(2)j}&=& \frac{1}{N_1}\phi^{(2)}_t\overline x^{(1)} +\frac{1}{N_1}\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_1}\phi^{(6)}_t\overline x^{(2)}+\frac{1}{N_1}\phi_t^{(8)},\\
\partial_{x^{(2)l}}V^{(2)j}&=&\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi^{(1)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_2}\phi^{(3)}_t\overline x^{(2)} + \left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)} \overline x^{(1)}\\
&&+\frac{1}{N_2}\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(5)}\overline x^{(2)}+\frac{1}{N_2}\phi^{(6)}_t\overline x^{(1)}\\
&&+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)},
\end{eqnarray*}
\begin{eqnarray*}
\partial_{x^{(1)l}x^{(1)h}}V^{(2)j}&=&\left(\frac{1}{N_1}\right)^2\phi_t^{(2)},\\
\partial_{x^{(1)l}x^{(2)h}}V^{(2)j}&=& \frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)},\\
\partial_{x^{(2)l}x^{(1)h}}V^{(2)j}&=& \frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)},\\
\partial_{x^{(2)l}x^{(2)h}}V^{(2)j}&=&\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(1)}+\left(\frac{1}{N_2}\right)^2\phi_t^{(3)}\\
&&+\frac{1}{N_2}\left(\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)+\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\right)\phi_t^{(5)},
\end{eqnarray*}
we get
\begin{eqnarray*}
\partial_tV^{(1)i}&+&\sum_{l=1}^{N_1}\bigg\{\left(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right)(\overline x^{(1)}-x^{(1)l})\\
&&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\overline x^{(1)}\\
&&- \left((\frac{1}{N_1}-1)\eta^{(5)}_t+\frac{1}{N_1}\eta^{(6)}_t-q_1\lambda_1\beta_2\right)\overline x^{(2)}-\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)+\gamma^{(1)}_t\bigg\} \\
&&\bigg\{\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(1)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_1}\eta^{(2)}_t\overline x^{(1)}+\frac{1}{N_1}\eta^{(4)}_t(\overline x^{(1)}-x^{(1)i})\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(4)}\overline x^{(1)}+ \left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(5)} \overline x^{(2)}+\frac{1}{N_1}\eta^{(6)}_t\overline x^{(2)}\\
&&+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\bigg\}\\
&+&\sum_{l=1}^{N_2}\bigg\{\bigg(q_2+(1- \frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi^{(5)}_t\bigg)(\overline x^{(2)}-x^{(2)l})\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(4)}+\frac{1}{N_2}\phi_t^{(6)}-q_2\lambda_2\beta_1\right)\overline x^{(1)}\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}+q_2\lambda_2(1-\beta_2)\right)\overline x^{(2)}-\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)+\gamma^{(2)}_t\bigg\}\\
&& \bigg\{\frac{1}{N_2}\eta^{(3)}_t\overline x^{(2)}+\frac{1}{N_2}\eta^{(5)}_t(\overline x^{(1)}-x^{(1)i})+\frac{1}{N_2}\eta^{(6)}_t\overline x^{(1)}+\frac{1}{N_2}\eta_t^{(9)} \bigg\}\\
&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\bigg\{\rho^2+(1-\rho^2)\rho_1^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_1^2)\bigg\}\\
&&\bigg\{\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(1)}_t+\left(\frac{1}{N_1}\right)^2\eta_t^{(2)}\\
&&+ \frac{1}{N_1}\left(\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)+\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\right)\eta^{(4)}_t\bigg\} \\
&+&\frac{\rho^2\sigma_1\sigma_2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}\bigg\{\frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)l}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t\bigg\}\\
&+&\frac{\rho^2\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1} \bigg\{\frac{1}{N_2}\left(\frac{1}{N_1}-\delta_{(1)i,(1)h}\right)\eta^{(5)}_t+\frac{1}{N_1N_2}\eta^{(6)}_t\bigg\}\\
&+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2}\bigg\{\rho^2+(1-\rho^2)\rho_2^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\bigg\}\bigg\{ \left(\frac{1}{N_2}\right)^2\eta_t^{(3)}\bigg\}\\
&+&\frac{1}{2}\bigg\{\left((\frac{1}{N_1}-1)\eta^{(1)}_t+\frac{1}{N_1}\eta^{(4)}_t\right)(\overline x^{(1)}-x^{(1)i})+\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t\right)\overline x^{(1)}\\
&&+\left((\frac{1}{N_1}-1)\eta^{(5)}_t+\frac{1}{N_1}\eta^{(6)}_t\right)\overline x^{(2)}+(\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\bigg\}^2\\
&+&\frac{\varepsilon_1-q_1^2}{2}\bigg\{ (\overline x^{(1)}-x^{(1)i})+\lambda_1(\beta_1-1)\overline x^{(1)}+\lambda_1\beta_2\overline x^{(2)}\bigg\}^2=0,
\end{eqnarray*}
for $i=1,\cdots,N_1$ and
\begin{eqnarray*}
\partial_tV^{(2)j}&+&\sum_{l=1}^{N_1}\bigg\{\bigg(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\bigg)(\overline x^{(1)}-x^{(1)l})\\
&&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\overline x^{(1)}\\
&&-\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\overline x^{(2)}-\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)+\gamma^{(1)}_t\bigg\}\\
&& \bigg\{ \frac{1}{N_1}\phi^{(2)}_t\overline x^{(1)} +\frac{1}{N_1}\phi^{(4)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_1}\phi^{(6)}_t\overline x^{(2)}+\frac{1}{N_1}\phi_t^{(8)}\bigg\}\\
&+&\sum_{l=1}^{N_2}\bigg\{\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi^{(5)}_t\right)( \overline x^{(2)}-x^{(2)l})\\
&&-\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2} -1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\overline x^{(1)}\\
&&-\left((\frac{1}{N_2}-1)\phi^{(5)}_t+\frac{1}{N_2}\phi^{(3)}_t+q_2\lambda_2(1-\beta_2)\right)\overline x^{(2)}\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)+\gamma^{(2)}_t\bigg\}\\
&& \bigg\{\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi^{(1)}_t(\overline x^{(2)}-x^{(2)j})+\frac{1}{N_2}\phi^{(3)}_t\overline x^{(2)} + \left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)} \overline x^{(1)}\\
&&+\frac{1}{N_2}\phi^{(5)}_t(\overline x^{(2)}-x^{(2)j})+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(5)}\overline x^{(2)}+\frac{1}{N_2}\phi^{(6)}_t\overline x^{(1)}\\
&&+\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\bigg\}\\
&+&\frac{\sigma_1^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}\bigg\{\rho^2+(1-\rho^2)\rho_1^2+\delta_{(1)l,(1)h}(1-\rho^2)(1-\rho_{1}^2)\bigg\}\bigg\{\left(\frac{1}{N_1}\right)^2\phi_t^{(2)}\bigg\}\\
&+&\frac{\rho^2\sigma_1\sigma_2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2} \bigg\{\frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)}\bigg\}\\
&+&\frac{\rho^2\sigma_2\sigma_1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1} \bigg\{\frac{1}{N_1}\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\phi_t^{(4)}+\frac{1}{N_1N_2}\phi_t^{(6)}\bigg\}\\
&+& \frac{\sigma_2^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} \bigg\{\rho^2+(1-\rho^2)\rho_2^2+\delta_{(2)l,(2)h}(1-\rho^2)(1-\rho_{2}^2)\bigg\}\\
&&\bigg\{\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{(1)}+\left(\frac{1}{N_2}\right)^2\phi_t^{(3)}\\
&&+\frac{1}{N_2}\left(\left(\frac{1}{N_2}-\delta_{(2)j,(2)l}\right)+\left(\frac{1}{N_2}-\delta_{(2)j,(2)h}\right)\right)\phi_t^{(5)}\bigg\}\\
&+& \frac{1}{2}\bigg\{\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi^{(5)}_t\right)(\overline x^{(2)} -x^{(2)j})+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t\right)\overline x^{(1)}\\
&&+\left((\frac{1}{N_2}-1)\phi^{(5)}_t+\frac{1}{N_2}\phi^{(3)}_t\right)\overline x^{(2)}+(\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\bigg\}^2\\
&+&\frac{\varepsilon_2-q_2^2}{2}\bigg\{(\overline x^{(2)}-x^{(2)j})+\lambda_2\beta_1\overline x^{(1)}+\lambda_2(\beta_2-1)\overline x^{(2)}\bigg\}^2=0,
\end{eqnarray*}
for $j=1,\cdots,N_2$.
By identifying the terms in the state $x$, we obtain that the deterministic functions $\eta^{(i)}$ and $\phi^{(i)}$ for $i=1,\cdots,10$ must satisfy
\begin{eqnarray}
\nonumber \dot\eta^{(1)}_t&=&2\left(q_1+(1-\frac{1}{N_1} )\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right)\eta^{(1)}_t- \left(\frac{1}{N_1}\eta^{(4)}_t+(\frac{1}{N_1}-1)\eta_t^{(1)}\right)^2-(\varepsilon_1-q_1^2)\\
\label{eta1}\\
\nonumber {\dot\eta^{(2)}_t}&=&2\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1} -1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\eta^{(2)}_t- \left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1} -1)\eta_t^{(4)}\right)^2\\
\label{eta2}&&+2\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2(\beta_1-1)^2\\
\nonumber {\dot\eta^{(3)}_t} &=&2\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right)\eta^{(3)}_t-\left(\frac{1}{N_1}\eta_t^{(6)}+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)^2\\
&&+2\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\eta^{(6)}_t-(\varepsilon_1-q_1^2)\lambda_1^2\beta_2^2
\label{eta3}
\end{eqnarray}
\begin{eqnarray}
\nonumber\dot\eta^{(4)}_t&=&\left(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right) \eta^{(4)}_t+\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\eta^{(4)}_t\\
\nonumber&&-\left(\frac{1}{N_1}\eta^{(4)}_t+(\frac{1}{N_1}-1)\eta_t^{(1)}\right)\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta_t^{(4)}\right)\\
\label{eta4}&&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\eta^{(5)}_t-(\varepsilon_1-q_1^2)\lambda_1(\beta_1-1)\\
\nonumber\dot\eta^{(5)}_t&=&\left(q_1+(1-\frac{1}{N_1})\eta^{(1)}_t-\frac{1}{N_1}\eta^{(4)}_t\right) \eta^{(5)}_t+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right)\eta^{(5)}_t\\
\nonumber&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\eta^{(4)}_t\\
\label{eta5}&&-\left(\frac{1}{N_1}\eta^{(4)}_t+(\frac{1}{N_1}-1)\eta_t^{(1)}\right)\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)-(\varepsilon_1-q_1^2)\lambda_1\beta_2\\
\nonumber\dot\eta^{(6)}_t&=&\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\eta^{(6)}_t\\
\nonumber&&+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \eta^{(6)}_t\\
\nonumber&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\eta^{(2)}_t\\
\nonumber &&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta_t^{(4)}\right)\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)\\
&&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right) \eta^{(3)}_t-(\varepsilon_1-q_1^2)\lambda_1^2(\beta_1-1)\beta_2\label{eta6}
\end{eqnarray}
\begin{eqnarray}
\nonumber \dot\eta^{(7)}_t&=&\left(q_1+(1-\frac{1}{N_1})\eta_t^{(1)}-\frac{1}{N_1}\eta_t^{(4)}\right)\eta_t^{(7)}\\
\nonumber &&+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\eta_t^{(4)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\eta_t^{(5)}\\
&&-\left((\frac{1}{N_1}-1)\eta_t^{(1)}+\frac{1}{N_1}\eta_t^{(4)}\right)\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)\label{eta7}\\
\nonumber \dot\eta^{(8)}_t&=&\left(\frac{1}{N_1}\eta_t^{(2)}+(\frac{1}{N_1}-1)\eta_t^{(4)}+q_1\lambda_1(1-\beta_1)\right)\eta_t^{(8)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\eta_t^{(2)}\\
\nonumber&&-\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta_t^{(4)}\right)\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)\\
\nonumber &&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\eta_t^{(9)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\eta_t^{(6)}\\
\label{eta8}\\
\nonumber \dot\eta^{(9)}_t&=&\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}+q_2\lambda_2(1-\beta_2)\right)\eta_t^{(9)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\eta_t^{(6)}\\
\nonumber &&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}-q_1\lambda_1\beta_2\right)\eta_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\eta_t^{(3)}\\
\label{eta9} &&-\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}\right)\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)\\
\nonumber \dot\eta^{(10)}_t&=&\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma^{(1)}_t\right)\eta_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma^{(2)}_t\right)\eta_t^{(9)}\\
\nonumber &&-\frac{\sigma_1^2}{2}\left((1-\frac{1}{N_1})(1-\rho^2)(1-\rho_1^2)\eta_t^{(1)}+\left(\rho^2+(1-\rho^2)\rho_1^2+\frac{1}{N_1}(1-\rho^2)(1-\rho^2_1)\right)\eta_t^{(2)}\right)\\
\nonumber &&-\rho^2\sigma_1\sigma_2\eta_t^{(6)}-\frac{\sigma_2^2}{2}\left(\rho^2+(1-\rho^2)\rho^2_2+\frac{1}{N_2}(1-\rho^2)(1-\rho_2^2)\right)\eta_t^{(3)}\\
&&-\frac{1}{2}\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}\right)^2,\label{eta10}
\end{eqnarray}
and
\begin{eqnarray}
\nonumber\dot\phi^{(1)}_t&=&2\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi_t^{(5)}\right)\phi^{(1)}_t-\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi_t^{(5)}\right)^2-(\varepsilon_2-q_2^2)\\
\label{phi1}\\
\nonumber {\dot\phi^{(2)}_t}&=&2\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \phi^{(2)}_t- \left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi_t^{(4)}\right)^2 \\
&&+2\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi^{(6)}_t-(\varepsilon_2-q_2^2)\lambda_2^2\beta_1^2 \label{phi2}\\
\nonumber {\dot\phi^{(3)}_t} &=&2\left( \frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \phi^{(3)}_t - \left(\frac{1}{N_2}\phi_t^{(3)}+(\frac{1}{N_2}-1)\phi_t^{(5)}\right)^2\\
&&+2\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right) \phi^{(6)}_t- (\varepsilon_2-q_2^2)\lambda_2^2(\beta_2-1)^2
\label{phi3}
\end{eqnarray}
\begin{eqnarray}
\nonumber\dot\phi^{(4)}_t&=&\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi_t^{(5)}\right)\phi^{(4)}_t+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi^{(5)}_t\\
\nonumber&&-\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi_t^{(5)}\right)\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi_t^{(4)}\right)\\
&&+\left(\frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right)\phi^{(4)}_t-(\varepsilon_2-q_2^2)\lambda_2\beta_1\label{phi4}\\
\nonumber\dot\phi^{(5)}_t&=&\left(q_2+(1-\frac{1}{N_2})\phi^{(1)}_t-\frac{1}{N_2}\phi_t^{(5)} \right)\phi^{(5)}_t+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi_t^{(5)}+q_2\lambda_2(1-\beta_2)\right)\phi^{(5)}_t\\
\nonumber&&-\left((\frac{1}{N_2}-1)\phi^{(1)}_t+\frac{1}{N_2}\phi_t^{(5)}\right)\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi_t^{(5)}\right)\\
\label{phi5}&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right)\phi_t^{(4)}-(\varepsilon_2-q_2^2)\lambda_2(\beta_2-1)\\
\nonumber\dot\phi^{(6)}_t&=&\left( \frac{1}{N_1}\eta^{(2)}_t+(\frac{1}{N_1}-1)\eta^{(4)}_t+q_1\lambda_1(1-\beta_1)\right) \phi^{(6)}_t \\
\nonumber&&+\left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi^{(5)}_t+q_2\lambda_2(1-\beta_2)\right) \phi^{(6)}_t \\
\nonumber &&-\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi_t^{(4)} \right) \left(\frac{1}{N_2}\phi^{(3)}_t+(\frac{1}{N_2}-1)\phi_t^{(5)}\right)\\
\nonumber&&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta^{(5)}_t-q_1\lambda_1\beta_2\right) \phi^{(2)}_t +\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi^{(3)}_t\\
\label{phi6}&&- (\varepsilon_2-q_2^2)\lambda_2^2\beta_1(\beta_2-1)\\
\nonumber\dot\phi^{(7)}_t&=&\left(q_2+(1-\frac{1}{N_2})\phi_t^{(1)}-\frac{1}{N_2}\phi_t^{(5)}\right)\phi_t^{(7)}\\
\nonumber &&+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\phi_t^{(4)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\phi_t^{(5)}\\
&&-\left((\frac{1}{N_2}-1)\phi_t^{(1)}+\frac{1}{N_2}\phi_t^{(5)}\right)\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)\label{phi7}\\
\nonumber \dot\phi^{(8)}_t&=&\left(\frac{1}{N_1}\eta_t^{(2)}+(\frac{1}{N_1}-1)\eta_t^{(4)}+q_1\lambda_1(1-\beta_1)\right)\phi_t^{(8)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\phi_t^{(2)}\\
\nonumber&&-\left((\frac{1}{N_2}-1)\phi_t^{(4)}+\frac{1}{N_2}\phi_t^{(6)}\right)\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)\\
\nonumber&&+\left(\frac{1}{N_2}\phi^{(6)}_t+(\frac{1}{N_2}-1)\phi^{(4)}_t-q_2\lambda_2\beta_1\right)\phi_t^{(9)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\phi_t^{(6)}\\
\label{phi8}\\
\nonumber \dot\phi^{(9)}_t&=&\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}+q_2\lambda_2(1-\beta_2)\right)\phi_t^{(9)}+\left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma_t^{(1)}\right)\phi_t^{(6)}\\
\nonumber &&+\left(\frac{1}{N_1}\eta^{(6)}_t+(\frac{1}{N_1}-1)\eta_t^{(5)}-q_1\lambda_1\beta_2\right)\phi_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma_t^{(2)}\right)\phi_t^{(3)}\\
\label{phi9} &&-\left((\frac{1}{N_2}-1)\phi_t^{(5)}+\frac{1}{N_2}\phi_t^{(3)}\right)\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)
\end{eqnarray}
\begin{eqnarray}
\nonumber \dot\phi^{(10)}_t&=& \left((\frac{1}{N_1}-1)\eta_t^{(7)}+\frac{1}{N_1}\eta_t^{(8)}-\gamma^{(1)}_t\right)\phi_t^{(8)}+\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}-\gamma^{(2)}_t\right)\phi_t^{(9)}\\
\nonumber &&-\frac{\sigma_1^2}{2}\left(\rho^2+(1-\rho^2)\rho_1^2+\frac{1}{N_1}(1-\rho^2)(1-\rho_1^2)\right)\phi_t^{(2)}-\rho^2\sigma_1\sigma_2\phi_t^{(6)}\\
\nonumber &&-\frac{\sigma_2^2}{2}\left((1-\frac{1}{N_2})(1-\rho^2)(1-\rho_2^2)\phi_t^{(1)}+\left(\rho^2+(1-\rho^2)\rho_2^2+\frac{1}{N_2}(1-\rho^2)(1-\rho^2_2)\right)\phi_t^{(3)}\right)\\
&&-\frac{1}{2}\left((\frac{1}{N_2}-1)\phi_t^{(7)}+\frac{1}{N_2}\phi_t^{(9)}\right)^2,\label{phi10}
\end{eqnarray}
with terminal conditions
\begin{eqnarray*}
&&\eta_T^{(1)}=c_1,\quad \eta_T^{(2)}=c_1\lambda_1^2(\beta_1-1)^2, \quad \eta_T^{(3)}=c_1\lambda_1^2\beta_2^2, \quad \eta_T^{(4)}=c_1\lambda_1(\beta_1-1),\\
&&\eta_T^{(5)}=c_1\lambda_1\beta_2,\quad \eta_T^{(6)}=c_1\lambda_1^2(\beta_1-1)\beta_2,\quad \eta_T^{(7)}=\eta_T^{(8)}=\eta_T^{(9)}=\eta_T^{(10)}=0,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\phi_T^{(1)}=c_2,\quad \phi_T^{(2)}=c_2\lambda_2^2\beta_1^2, \quad \phi_T^{(3)}=c_2\lambda_2^2(\beta_2-1)^2, \quad \phi_T^{(4)}=c_2\lambda_2\beta_1,\\
&&\phi_T^{(5)}=c_2\lambda_2(\beta_2-1),\quad \phi_T^{(6)}=c_2\lambda_2^2\beta_1(\beta_2-1),\quad \phi_T^{(7)}= \phi_T^{(8)}=\phi_T^{(9)}=\phi_T^{(10)}=0.
\end{eqnarray*}
We now discuss the existence of $\eta^{(i)}$ and $\phi^{(i)}$ for $i=1,\cdots,10$. First, observe that $\eta^{(3)}$, $\eta^{(i)}$ for $i=5,\cdots,10$, $\phi^{(2)}$, $\phi^{(4)}$, and $\phi^{(i)}$ for $i=6,\cdots,10$ satisfy coupled first-order linear equations.
The existence of $\eta^{(i)}$ and $\phi^{(i)}$ for $i=1,\cdots,10$ for sufficiently large $N_1$ and $N_2$ is verified in Proposition \ref{Prop_suff}. Hence, the closed-loop Nash equilibria are written as
\begin{eqnarray}
\label{optimal-finite-ansatz-V1-appen}
\hat\alpha^{(1)i}(t,x)&=&(q_1+\tilde\eta^{(1)}_t)(\overline x^{(1)}-x^{(1)i})+\tilde\eta^{(4)}_t\overline x^{(1)}+\tilde\eta^{(5)}_t\overline x^{(2)}+\tilde\eta_t^{(7)},\\
\hat\alpha^{(2)j}(t,x)&=&(q_2+\tilde\phi^{(1)}_t)(\overline x^{(2)}-x^{(2)j})+\tilde\phi^{(4)}_t\overline x^{(1)}+\tilde\phi^{(5)}_t\overline x^{(2)}+\tilde\phi_t^{(7)},\label{optimal-finite-ansatz-V2-appen}
\end{eqnarray}
where $\tilde\eta^{(i)}$ and $\tilde\phi^{(i)}$ for $i=1,4,5,7$ satisfy (\ref{tildeeta}-\ref{tildephi}).
\qed
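The closed-loop equilibrium strategies \eqref{optimal-finite-ansatz-V1-appen}-\eqref{optimal-finite-ansatz-V2-appen} are affine in the state. A minimal numerical sketch, with hypothetical constant values standing in for the time-dependent coefficients $\tilde\eta^{(i)}_t$ and $\tilde\phi^{(i)}_t$:

```python
# Evaluate the closed-loop feedback controls at one state. The values eta1, eta4,
# eta5, eta7 (and the phi analogues) are hypothetical constants standing in for
# the ODE solutions tilde-eta^{(i)}_t and tilde-phi^{(i)}_t at a fixed time t.

def alpha1(x1i, xbar1, xbar2, q1=1.0, eta1=0.5, eta4=0.1, eta5=0.2, eta7=0.0):
    """Feedback control of bank i in group 1: mean reversion plus affine terms."""
    return (q1 + eta1) * (xbar1 - x1i) + eta4 * xbar1 + eta5 * xbar2 + eta7

def alpha2(x2j, xbar1, xbar2, q2=1.0, phi1=0.5, phi4=0.1, phi5=0.2, phi7=0.0):
    """Feedback control of bank j in group 2."""
    return (q2 + phi1) * (xbar2 - x2j) + phi4 * xbar1 + phi5 * xbar2 + phi7

# A bank below its group mean is pushed upward (positive control):
print(alpha1(x1i=0.0, xbar1=1.0, xbar2=2.0))  # (1+0.5)*1 + 0.1*1 + 0.2*2 = 2.0
```

A bank sitting exactly at its group mean only feels the cross-group affine terms.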
We then verify that $V^{(1)i}$, $V^{(2)j}$, $\hat\alpha^{(1)i}$, and $\hat\alpha^{(2)j}$ are the solutions to the problem (\ref{value-function-1}-\ref{coupled-2}). Without loss of generality, we show the verification theorem for $V^{(1)i}$.
{\theorem\label{Ver-Thm}(Verification Theorem)\\
Given the optimal strategies $\hat\alpha^{(1)l}$ for $l\neq i$ given by \eqref{optimal-finite-ansatz-V1} and $\hat\alpha^{(2)j}$ for $j=1,\cdots,N_2$ given by \eqref{optimal-finite-ansatz-V2}, $V^{(1)i}$ given by \eqref{ansatz-1} is the value function associated to the problem \eqref{value-function-1} and \eqref{value-function-2} subject to \eqref{coupled-1} and \eqref{coupled-2}, and $\hat\alpha^{(1)i}$ is the optimal strategy for the $i$-th bank in the first group and also the closed-loop Nash equilibrium.
}
\begin{proof}
According to the notations in \cite{Sun2016}, an admissible strategy $\tilde\alpha$ and its corresponding trajectory $\tilde X$ are given by
\begin{equation}
\tilde\alpha_t=\left(\hat\alpha_t^{(1)1},\cdots, \alpha^{(1)i}_t,\cdots,\hat\alpha_t^{(1)N_1},\hat\alpha_t^{(2)1},\cdots,\hat\alpha_t^{(2)N_2}\right)
\end{equation}
and
\begin{equation}
\tilde X_t=\left(\tilde X_t^{(1)1},\cdots,\tilde X^{(1)i}_t,\cdots,\tilde X_t^{(1)N_1},\tilde X_t^{(2)1},\cdots,\tilde X_t^{(2)N_2}\right).
\end{equation}
In addition, the optimal strategy $\hat\alpha$ and its corresponding trajectory $\hat X$ are written as
\begin{equation}
\hat\alpha_t=\left(\hat\alpha_t^{(1)1},\cdots,\hat\alpha^{(1)i}_t,\cdots,\hat\alpha_t^{(1)N_1},\hat\alpha_t^{(2)1},\cdots,\hat\alpha_t^{(2)N_2}\right)
\end{equation}
and
\begin{equation}
\hat X_t=\left(\hat X_t^{(1)1},\cdots,\hat X^{(1)i}_t,\cdots,\hat X_t^{(1)N_1},\hat X_t^{(2)1},\cdots,\hat X_t^{(2)N_2}\right).
\end{equation}
We claim that, for any admissible strategy $\tilde\alpha$,
\begin{equation} \label{upper}
V^{(1)i}(t,x)\leq \mathbb{E} _{t,x}\left\{\int_t^T f^N_{(1)}(\tilde{X}_s, \alpha^{(1)i}_s)ds+g^N_{(1)}(\tilde{X}_T)\right\},
\end{equation}
and for $\hat{\alpha}$
\begin{equation} \label{optim}
V^{(1)i}(t,x)= \mathbb{E} _{t,x}\left\{\int_t^T f^N_{(1)}(\hat{X}_s, \hat{\alpha}^{(1)i}_s)ds+g^N_{(1)}(\hat{X}_T)\right\},
\end{equation}
implying that $\hat{\alpha}^{(1)i}$ is the optimal strategy for the $i$-th bank in the first group. We can assume
\begin{equation} \label{condition}
\mathbb{E} _{t,x}\left\{\int_t^T f^N_{(1)}(\tilde{X}_s, {\alpha}^{(1)i}_s)ds\right\}<\infty,
\end{equation}
otherwise (\ref{upper}) holds automatically. For some $M>0$, define the exit time
\[
\theta_M=\inf\{t;\; |\tilde{X}_t|\geq M \}.
\]
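For intuition on the non-explosion property \eqref{nonexplosion} claimed below, a minimal Monte Carlo sketch (Euler--Maruyama for a one-dimensional diffusion with bounded drift; all parameter values are hypothetical) estimates $\mathbb{P}(\theta_M\leq T)$ for increasing thresholds $M$:

```python
import math, random

def prob_exit(M, T=1.0, n_steps=200, n_paths=2000, sigma=1.0, drift=0.5, seed=0):
    """Estimate P(theta_M <= T) with theta_M = inf{t : |X_t| >= M},
    for dX = drift*dt + sigma*dW, X_0 = 0, via Euler-Maruyama."""
    rng = random.Random(seed)
    dt = T / n_steps
    exits = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if abs(x) >= M:
                exits += 1
                break
    return exits / n_paths

# The empirical exit probability decays as the threshold M grows, consistent
# with P(theta_M <= T) -> 0 as M -> infinity.
for M in (1.0, 2.0, 4.0, 8.0):
    print(M, prob_exit(M))
```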
Given the condition \eqref{condition}, in order to complete the proof, we shall claim
\begin{equation} \label{nonexplosion}
\mathbb{P} (\theta_M\leq T)\rightarrow 0,\ M\rightarrow \infty,
\end{equation}
and
\begin{equation} \label{ui}
\mathbb{E} _{t, x}[\sup_{t\leq s\leq T}|\tilde{X}_s|^2]<\infty.
\end{equation}
The proofs of these two properties are postponed to the end of the proof.
Given the optimal strategies $\hat\alpha^{(1)i}$ and $\hat\alpha^{(2)j}$, applying It\^o's formula, we get
\begin{eqnarray}
\nonumber&&V^{(1)i}(T\wedge\theta_M,\tilde{X}_{T\wedge\theta_M})\\
\nonumber&=&V^{(1)i}(t,x)\\
\nonumber &&+\int_t^{T\wedge\theta_M}\bigg\{\partial_sV^{(1)i}(s,\tilde X_s)+
\sum_{l\neq i,l=1}^{N_1}\bigg(\gamma^{(1)}_t+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(1)i}(s,\tilde X_s)
\\
\nonumber&&+ \bigg(\gamma^{(1)}_t+{\alpha^{(1)i}}\bigg)\partial_{x^{(1)i}}V^{(1)i}(s,\tilde X_s)+\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber&&+\frac{(\sigma^1)^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}((\rho^{11})^2+\delta_{(1)l,(1)h}(1-(\rho^{11})^2)) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber&&+\frac{\sigma^1\sigma^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}((\rho^{12})^2+\delta_{(1)l,(2)h}(1-(\rho^{12})^2)) \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber&&+\frac{\sigma^2\sigma^1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}((\rho^{21})^2+\delta_{(2)l,(1)h}(1-(\rho^{21})^2)) \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}(s,\tilde X_s)\\
\nonumber &&+ \frac{(\sigma^2)^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} ((\rho^{22})^2+\delta_{(2)l,(2)h}(1-(\rho^{22})^2))\partial_{x^{(2)l}x^{(2)h}}V^{(1)i}(s,\tilde X_s)\bigg\} ds\\
\nonumber&&+\int_t^{T\wedge\theta_M}\sigma^1\sum_{l=1}^{N_1}\partial_{x^{(1)l}}V^{(1)i}(s, \tilde{X}_s) dW^{(1)l}_s+\int_t^{T\wedge\theta_M}\sigma^2\sum_{h=1}^{N_2}\partial_{x^{(2)h}}V^{(1)i}(s, \tilde{X}_s) dW^{(2)h}_s.
\end{eqnarray}
Taking the expectation on both sides and using
\begin{eqnarray} \label{positive}
\nonumber \partial_{t}V^{(1)i} &+&
\sum_{l\neq i,l=1}^{N_1}\bigg(\gamma^{(1)}_t+{\hat\alpha^{(1)l}(t,x)}\bigg)\partial_{x^{(1)l}}V^{(1)i}
+ \bigg(\gamma^{(1)}_t+{\alpha^{(1)i}}\bigg)\partial_{x^{(1)i}}V^{(1)i}\\
\nonumber&+&\sum_{h=1}^{N_2}\bigg(\gamma_t^{(2)}+{\hat\alpha^{(2)h}(t,x)}\bigg)\partial_{x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{(\sigma^1)^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_1}((\rho^{11})^2+\delta_{(1)l,(1)h}(1-(\rho^{11})^2)) \partial_{x^{(1)l}x^{(1)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma^1\sigma^2}{2} \sum_{l=1}^{N_1}\sum_{h=1}^{N_2}((\rho^{12})^2+\delta_{(1)l,(2)h}(1-(\rho^{12})^2)) \partial_{x^{(1)l}x^{(2)h}}V^{(1)i}\\
\nonumber&+&\frac{\sigma^2\sigma^1}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_1}((\rho^{21})^2+\delta_{(2)l,(1)h}(1-(\rho^{21})^2)) \partial_{x^{(2)l}x^{(1)h}}V^{(1)i}\\
\nonumber &+& \frac{(\sigma^2)^2}{2} \sum_{l=1}^{N_2}\sum_{h=1}^{N_2} ((\rho^{22})^2+\delta_{(2)l,(2)h}(1-(\rho^{22})^2))\partial_{x^{(2)l}x^{(2)h}}V^{(1)i}\\
&+&\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}\left(\overline x^{\lambda_1}-x^{(1)i}\right)+\frac{\varepsilon_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2 \geq 0
\end{eqnarray}
gives
\begin{eqnarray}
\nonumber V^{(1)i}(t,x) &\leq&\mathbb{E} \bigg\{\int_t^{T\wedge\theta_M}\left(\frac{(\alpha^{(1)i}_s)^2}{2}-q_1\alpha^{(1)i}_s\left(\overline{\tilde X}^{\lambda_1}_s-\tilde X^{(1)i}_s\right)+\frac{\varepsilon_1}{2}(\overline {\tilde X}^{\lambda_1}_s-\tilde X^{(1)i}_s)^2\right)ds \\
&&+ V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})\bigg\}.\label{ver-ineq}
\end{eqnarray}
Assuming that there exists a constant $C$ such that
$$
|V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})|\leq C(1+ \sup_{t\leq s\leq T} |\tilde{X}_s|^2),
$$
condition \eqref{ui} implies that $V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})$ is uniformly integrable. Together with (\ref{nonexplosion}), we obtain, as $M\rightarrow\infty$,
$$
\mathbb{E} _{t, x}[ V^{(1)i}(T\wedge\theta_M,\tilde X_{T\wedge\theta_M})]\rightarrow \mathbb{E} _{t, x}[g^N_{(1)}(\tilde{X}_T)].
$$
In addition, since the integrand is nonnegative, we get
\begin{eqnarray*}
\nonumber && \mathbb{E} \bigg\{\int_t^{T\wedge\theta_M}\left(\frac{(\alpha^{(1)i}_s)^2}{2}-q_1\alpha^{(1)i}_s\left(\overline X^{\lambda_1}_s-\tilde{X}^{(1)i}_s\right)+\frac{\varepsilon_1}{2}(\overline X^{\lambda_1}_s-\tilde{X}^{(1)i}_s)^2\right)ds \bigg\}\\
&& \leq \mathbb{E} \bigg\{\int_t^{T}\left(\frac{(\alpha^{(1)i}_s)^2}{2}-q_1\alpha^{(1)i}_s\left(\overline X^{\lambda_1}_s-\tilde{X}^{(1)i}_s\right)+\frac{\varepsilon_1}{2}(\overline X^{\lambda_1}_s-\tilde{X}^{(1)i}_s)^2\right)ds \bigg\}.
\end{eqnarray*}
Based on the above results, we obtain
\begin{eqnarray}
V^{(1)i}(t,x) \leq\mathbb{E} \bigg\{\int_t^{T}f^N_{(1)}(\tilde{X}_s, \alpha^{(1)i}_s)ds
+g^N_{(1)}(\tilde{X}_T)\bigg\}.\label{V(t,x)}
\end{eqnarray}
This completes the proof of (\ref{upper}).
In order to prove \eqref{nonexplosion}, we first recall
\begin{equation}\label{coupled-appex}
d\tilde X_t^{(1)i}=(\alpha^{(1)i}_t+\gamma_t^{(1)})dt+ \sigma_1dW^{(1)i}_t
\end{equation}
and
\[
d(\tilde X_t^{(1)i})^2=\left(2\tilde X_t^{(1)i}(\alpha^{(1)i}_t+\gamma^{(1)}_t)+\sigma_1^2\right)dt+2\tilde X_t^{(1)i}\sigma_1dW^{(1)i}_t.
\]
Choose $\beta>0$ large enough that
\[
\beta>2\sup_{0\leq t\leq T}\left\{(\gamma^{(1)}_t)^2+\sigma_1^2+1\right\},
\]
and apply It\^o's formula to $e^{-\beta t}(\tilde X_t^{(1)i})^2$, leading to
\begin{eqnarray*}
&&e^{-\beta(t\wedge\theta_M)}(\tilde X^{(1)i})^2_{t\wedge\theta_M}\\
&\leq&(\tilde X_0^{(1)i})^2+\int_0^{t\wedge\theta_M}e^{-\beta s}\left(-\frac{\beta}{2}((\tilde X_s^{(1)i})^2-1)+|\alpha^{(1)i}_s|^2\right)ds+\int_0^{t\wedge\theta_M}e^{-\beta s}2\sigma_1\tilde X_s^{(1)i}dW^{(1)i}_s.
\end{eqnarray*}
Taking expectation on both sides gives
\begin{equation}
e^{-\beta t}M^2\mathbb{P} (\theta_M\leq t)\leq (\tilde X_0^{(1)i})^2+\frac{\beta }{2}t+\mathbb{E} \left[\int_0^{t\wedge\theta_M}|\alpha^{(1)i}_s|^2ds\right]-\frac{\beta}{2}\mathbb{E} \left[\int_0^{t\wedge\theta_M}(\tilde X_s^{(1)i})^2ds\right].
\end{equation}
By letting $t=T$ and $M\rightarrow \infty$, we have \eqref{nonexplosion} and
\begin{equation}\label{condition-X2}
\frac{\beta}{2}\mathbb{E} \left[\int_0^{T}(\tilde X_s^{(1)i})^2ds\right]\leq (\tilde X_0^{(1)i})^2+\frac{\beta}{2}T+\mathbb{E} \left[\int_0^T|\alpha^{(1)i}_s|^2ds\right].
\end{equation}
Applying Doob's martingale inequality and the Cauchy-Schwarz inequality to \eqref{coupled-appex} and using \eqref{condition-X2} imply
\begin{eqnarray}
\nonumber&&\mathbb{E} [\sup_{t\leq s\leq T}|\tilde X^{(1)i}_s|^2]\\
\nonumber&\leq& 2\mathbb{E} \left[\int_t^T|\gamma^{(1)}_s|ds\right]^2+2\mathbb{E} \left[\int_t^T|\alpha^{(1)i}_s|ds\right]^2+2\mathbb{E} \left[\sup_{t\leq u\leq T}\int_t^u2\sigma_1 \tilde X_s^{(1)i}dW^{(1)i}_s\right]^2\\
&\leq&C_1T\mathbb{E} \left[\int_t^T|\gamma^{(1)}_s|^2+|\alpha^{(1)i}_s|^2ds\right]+C_2\mathbb{E} \left[\int_t^T (\tilde X^{(1)i}_s)^2ds\right]< \infty,
\end{eqnarray}
where $C_1$ and $C_2$ are two positive constants. This proves \eqref{ui}.
\end{proof}
\section{Proof of Theorem \ref{Hete-open}} \label{Appex-open}
Applying the Pontryagin principle to the problem (\ref{objective}-\ref{diffusions}), we obtain the Hamiltonians
\begin{eqnarray}
\nonumber H^{(1)i}&=&\sum_{k=1}^{N_1}(\gamma_t^{(1)}+\alpha^{(1)k})y^{(1)i,(1)k}+\sum_{k=1}^{N_2}(\gamma^{(2)}_t+\alpha^{(2)k})y^{(1)i,(2)k}\\
&&+\frac{(\alpha^{(1)i})^2}{2}-q_1\alpha^{(1)i}(\overline x^{\lambda_1}-x^{(1)i})+\frac{\varepsilon_1}{2}(\overline x^{\lambda_1}-x^{(1)i})^2,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber H^{(2)j}&=&\sum_{k=1}^{N_1}(\gamma_t^{(1)}+\alpha^{(1)k})y^{(2)j,(1)k}+\sum_{k=1}^{N_2}(\gamma^{(2)}_t+\alpha^{(2)k})y^{(2)j,(2)k}\\
&&+\frac{(\alpha^{(2)j})^2}{2}-q_2\alpha^{(2)j}(\overline x^{\lambda_2}-x^{(2)j})+\frac{\varepsilon_2}{2}(\overline x^{\lambda_2}-x^{(2)j})^2,
\end{eqnarray}
where the adjoint diffusions $Y_t^{(1)i,(1)l}$, $Y_t^{(1)i,(2)h}$, $Y_t^{(2)j,(1)l}$, and $Y_t^{(2)j,(2)h}$ for $i,l=1,\cdots,N_1$ and $j,h=1,\cdots,N_2$ are given by
\begin{eqnarray}
\label{Y-1-1}\nonumber dY_t^{(1)i,(1)l}&=&-\frac{\partial H^{(1)i}}{\partial x^{(1)l}}(\hat\alpha_t^{(1)i})dt+\sum_{k=0}^2Z_t^{(1)i,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k=1}^{N_1}Z_t^{(1)i,(1)l,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(1)i,(1)l,(2)k}dW_t^{(2)k},\\
\nonumber dY_t^{(1)i,(2)h}&=&-\frac{\partial H^{(1)i}}{\partial x^{(2)h}}(\hat\alpha_t^{(1)i})dt+\sum_{k=0}^2Z_t^{(1)i,(2)h,k}dW_t^{(k)}\\
&&+\sum_{k=1}^{N_1}Z_t^{(1)i,(2)h,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(1)i,(2)h,(2)k}dW_t^{(2)k},
\end{eqnarray}
and
\begin{eqnarray}
\nonumber dY_t^{(2)j,(1)l}&=&-\frac{\partial H^{(2)j}}{\partial x^{(1)l}}(\hat\alpha_t^{(2)j})dt+\sum_{k=0}^2Z_t^{(2)j,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k=1}^{N_1}Z_t^{(2)j,(1)l,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(2)j,(1)l,(2)k}dW_t^{(2)k},\\
\nonumber dY_t^{(2)j,(2)h}&=&-\frac{\partial H^{(2)j}}{\partial x^{(2)h}}(\hat\alpha_t^{(2)j})dt+\sum_{k=0}^2Z_t^{(2)j,(2)h,k}dW_t^{(k)}\\
\label{Y-1-4}&&+\sum_{k=1}^{N_1}Z_t^{(2)j,(2)h,(1)k}dW_t^{(1)k}+\sum_{k=1}^{N_2}Z_t^{(2)j,(2)h,(2)k}dW_t^{(2)k}
\end{eqnarray}
with the square-integrable progressive processes
\begin{eqnarray*}
Z_t^{(1)i,(1)l,k}, Z_t^{(1)i,(1)l,(1)k_1}, Z_t^{(1)i,(1)l,(2)k_2}, Z_t^{(1)i,(2)h,k}, Z_t^{(1)i,(2)h,(1)k_1}, Z_t^{(1)i,(2)h,(2)k_2},\\
Z_t^{(2)j,(1)l,k}, Z_t^{(2)j,(1)l,(1)k}, Z_t^{(2)j,(1)l,(2)k}, Z_t^{(2)j,(2)h,k}, Z_t^{(2)j,(2)h,(1)k}, Z_t^{(2)j,(2)h,(2)k},
\end{eqnarray*}
for $i,l=1,\cdots,N_1$, $j,h=1,\cdots,N_2$, and $k=0,1,2$. The terminal conditions are written as
\[
Y_T^{(1)i,(1)l}=c_1\left( \frac{1-\lambda_1}{N_1}+\frac{\lambda_1}{N}-\delta_{(1)i,(1)l}\right)(\overline X^{\lambda_1}_T-X_T^{(1)i}),\,\;Y_T^{(1)i,(2)h}=c_1\frac{\lambda_1}{N}(\overline X^{\lambda_1}_T-X_T^{(1)i}),
\]
and
\[
Y_T^{(2)j,(1)l}=c_2\frac{\lambda_2}{N}(\overline X^{\lambda_2}_T-X_T^{(2)j}),\;Y_T^{(2)j,(2)h}=c_2\left( \frac{1-\lambda_2}{N_2}+\frac{\lambda_2}{N}-\delta_{(2)j,(2)h}\right)(\overline X^{\lambda_2}_T-X_T^{(2)j}).
\]
Minimizing the Hamiltonians with respect to $\alpha$, that is, setting
\[
\frac{\partial H^{(1)i}}{\partial \alpha^{(1)i}}(\hat\alpha^{(1)i})=0, \quad \frac{\partial H^{(2)j}}{\partial \alpha^{(2)j}}(\hat\alpha^{(2)j})=0,
\]
the Nash equilibria are given by
\begin{eqnarray}
\label{optimal_open-1-1}\hat\alpha^{o,(1)i}&=&q_1(\overline x^{\lambda_1}-x^{(1)i})-y^{(1)i, (1)i},\\
\label{optimal_open-1-2}\hat\alpha^{o,(2)j}&=&q_2(\overline x^{\lambda_2}-x^{(2)j})-y^{(2)j,(2)j},
\end{eqnarray}
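Since each Hamiltonian is a convex quadratic in its own control, the first-order condition can be checked directly. A minimal pure-Python sketch, with hypothetical numerical values for $y$, $q$, $\overline x$, and $x$:

```python
# The alpha-dependent part of H^{(1)i} is H(a) = a*y + a^2/2 - q*a*(xbar - x);
# its unique minimizer is a_hat = q*(xbar - x) - y, the open-loop equilibrium.

def H_alpha_part(a, y, q, xbar, x):
    """Only the terms of the Hamiltonian that depend on the control a."""
    return a * y + a * a / 2.0 - q * a * (xbar - x)

def a_hat(y, q, xbar, x):
    """Stationary point: dH/da = y + a - q*(xbar - x) = 0."""
    return q * (xbar - x) - y

# Hypothetical values; the minimizer beats nearby perturbations.
y, q, xbar, x = 0.3, 1.0, 1.0, 0.2
a = a_hat(y, q, xbar, x)
assert H_alpha_part(a, y, q, xbar, x) <= H_alpha_part(a + 0.1, y, q, xbar, x)
assert H_alpha_part(a, y, q, xbar, x) <= H_alpha_part(a - 0.1, y, q, xbar, x)
print(a)  # q*(xbar - x) - y = 0.5 up to floating point
```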
such that the optimal forward equations for banks are given by
\begin{eqnarray}
\nonumber dX^{(1)i}_t &=& \left(q_1(\overline X_t^{\lambda_1}-X_t^{(1)i})-Y_t^{(1)i, (1)i} +\gamma^{(1)}_{t}\right)dt\\
&&+\sigma_1\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{1}dW_t^{(1)}+\sqrt{1-\rho^2_{1}}dW^{(1)i}_t\right)\right) ,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber dX^{(2)j}_t &=& \left(q_2(\overline X_t^{\lambda_2}-X_t^{(2)j})-Y_t^{(2)j,(2)j}+\gamma^{(2)}_{t}\right)dt\\
&&+\sigma_2\left(\rho dW^{(0)}_t+\sqrt{1-\rho^2}\left(\rho_{2}dW_t^{(2)}+\sqrt{1-\rho^2_{2}}dW^{(2)j}_t\right)\right).
\end{eqnarray}
Inserting \eqref{optimal_open-1-1} and \eqref{optimal_open-1-2} into (\ref{Y-1-1}-\ref{Y-1-4}), the adjoint processes are rewritten as
\begin{eqnarray}
\nonumber dY_t^{(1)i,(1)l}&=&\left( \frac{1-\lambda_1}{N_1}+\frac{\lambda_1}{N}-\delta_{(1)i,(1)l}\right)\left\{-(\varepsilon_1-q_1^2)(\overline X^{\lambda_1}_t-X_t^{(1)i})-q_1Y_t^{(1)i,(1)i}\right\}dt\\
\nonumber&&+\sum_{k=0}^2Z_t^{(1)i,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(1)i,(1)l,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(1)i,(1)l,(2)k_2}dW_t^{(2)k_2},\\
\nonumber dY_t^{(1)i,(2)h}&=&\frac{\lambda_1}{N}\left\{-(\varepsilon_1-q_1^2)(\overline X^{\lambda_1}_t-X_t^{(1)i})-q_1Y_t^{(1)i,(1)i}\right\}dt+\sum_{k=0}^2Z_t^{(1)i,(2)h,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(1)i,(2)h,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(1)i,(2)h,(2)k_2}dW_t^{(2)k_2},
\end{eqnarray}
with the terminal conditions
\[
Y_T^{(1)i,(1)l}=c_1\left( \frac{1-\lambda_1}{N_1}+\frac{\lambda_1}{N}-\delta_{(1)i,(1)l}\right)(\overline X^{\lambda_1}_T-X_T^{(1)i}),\,\;Y_T^{(1)i,(2)h}=c_1\frac{\lambda_1}{N}(\overline X^{\lambda_1}_T-X_T^{(1)i}),
\]
and
\begin{eqnarray}
\nonumber dY_t^{(2)j,(1)l}&=&\frac{\lambda_2}{N}\left\{-(\varepsilon_2-q_2^2)(\overline X^{\lambda_2}_t-X_t^{(2)j})-q_2Y_t^{(2)j,(2)j}\right\}dt+\sum_{k=0}^2Z_t^{(2)j,(1)l,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(2)j,(1)l,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(2)j,(1)l,(2)k_2}dW_t^{(2)k_2},\\
\nonumber dY_t^{(2)j,(2)h}&=&\left( \frac{1-\lambda_2}{N_2}+\frac{\lambda_2}{N}-\delta_{(2)j,(2)h}\right)\left\{-(\varepsilon_2-q_2^2)(\overline X^{\lambda_2}_t-X^{(2)j}_t)-q_2Y_t^{(2)j,(2)j}\right\}dt\\
\nonumber&&+\sum_{k=0}^2Z_t^{(2)j,(2)h,k}dW_t^{(k)}\\
&&+\sum_{k_1=1}^{N_1}Z_t^{(2)j,(2)h,(1)k_1}dW_t^{(1)k_1}+\sum_{k_2=1}^{N_2}Z_t^{(2)j,(2)h,(2)k_2}dW_t^{(2)k_2},
\end{eqnarray}
with the terminal conditions
\[
Y_T^{(2)j,(1)l}=c_2\frac{\lambda_2}{N}(\overline X^{\lambda_2}_T-X_T^{(2)j}),\;Y_T^{(2)j,(2)h}=c_2\left( \frac{1-\lambda_2}{N_2}+\frac{\lambda_2}{N}-\delta_{(2)j,(2)h}\right)(\overline X^{\lambda_2}_T-X_T^{(2)j}).
\]
We then make the ansatz
\begin{eqnarray}
\nonumber Y_t^{(1)i,(1)l}&=&\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\left(\eta_t^{o,(1)}(\overline X_t^{(1)}-X_t^{(1)i})+\eta_t^{o,(2)}\overline X_t^{(1)}+\eta_t^{o,(3)}\overline X_t^{(2)}+\eta_t^{o,(4)}\right)\\
\label{ansatz_open_1} \\
Y_t^{(1)i,(2)h}&=&\frac{\lambda_1}{N}\left(\eta_t^{o,(1)}(\overline X_t^{(1)}-X_t^{(1)i})+\eta_t^{o,(2)}\overline X_t^{(1)}+\eta_t^{o,(3)}\overline X_t^{(2)}+\eta_t^{o,(4)}\right)
\end{eqnarray}
and
\begin{eqnarray}
Y_t^{(2)j,(1)l}&=&\frac{\lambda_2}{N}\left(\phi_t^{o,(1)}(\overline X_t^{(2)}-X_t^{(2)j})+\phi_t^{o,(2)}\overline X_t^{(1)}+\phi_t^{o,(3)}\overline X_t^{(2)}+\phi_t^{o,(4)}\right)\\
\nonumber Y_t^{(2)j,(2)h}&=&\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\left(\phi_t^{o,(1)}(\overline X_t^{(2)}-X_t^{(2)j})+\phi_t^{o,(2)}\overline X_t^{(1)}+\phi_t^{o,(3)}\overline X_t^{(2)}+\phi_t^{o,(4)}\right)\\
\label{ansatz_open_2}
\end{eqnarray}
where $$\frac{1}{\widetilde N_k}=\frac{1-\lambda_k}{N_k}+\frac{\lambda_k}{N},$$ for $k=1,2$. Differentiating (\ref{ansatz_open_1}-\ref{ansatz_open_2}) and identifying $dY_t^{(1)i,(1)l}$, $dY_t^{(1)i,(2)h}$, $dY_t^{(2)j,(1)l}$, and $dY_t^{(2)j,(2)h}$ with (\ref{Y-1-1}-\ref{Y-1-4}), we obtain that the deterministic functions $\eta_t^{o,(i)}$ and $\phi_t^{o,(i)}$ for $i=1,\cdots,4$ must satisfy
\begin{eqnarray}
\label{eta_open-1}\dot\eta_t^{o,(1)}&=&\left(2-\frac{1}{\widetilde N_1}\right)q_1\eta_t^{o,(1)}+\left(1-\frac{1}{\widetilde N_1}\right)(\eta_t^{o,(1)})^2-(\varepsilon_1-q_1^2)\\
\nonumber \dot\eta_t^{o,(2)}&=&-\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\eta_t^{o,(2)}-\left(q_2\lambda_2\beta_1+(1-\frac{1}{\widetilde N_2})\phi^{o,(2)}_t\right)\eta_t^{o,(3)}\\
&&-q_1\left(\frac{1}{\widetilde N_1}-1\right)\eta_t^{o,(2)}-(\varepsilon_1-q_1^2)\lambda_1(\beta_1-1)\\
\nonumber \dot\eta_t^{o,(3)}&=&-\left(q_1\lambda_1\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\eta_t^{o,(2)}-\left(q_2\lambda_2(\beta_2-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\eta_t^{o,(3)}\\
&&-q_1\left(\frac{1}{\widetilde N_1}-1\right)\eta_t^{o,(3)}-(\varepsilon_1-q_1^2)\lambda_1\beta_2\\
\nonumber \dot\eta_t^{o,(4)}&=&-\left((1-\frac{1}{\widetilde N_1})\eta_t^{o,(4)}+\gamma_t^{(1)}\right)\eta_t^{o,(2)}\\
&&-\left((1-\frac{1}{\widetilde N_2})\phi_t^{o,(4)}+\gamma_t^{(2)}\right)\eta_t^{o,(3)}-q_1\left(\frac{1}{\widetilde N_1}-1\right)\eta_t^{o,(4)}
\end{eqnarray}
\begin{eqnarray}
\dot\phi_t^{o,(1)}&=&\left(2-\frac{1}{\widetilde N_2}\right)q_2\phi_t^{o,(1)}+\left(1-\frac{1}{\widetilde N_2}\right)(\phi_t^{o,(1)})^2-(\varepsilon_2-q_2^2)\\
\nonumber \dot\phi_t^{o,(2)}&=&-\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\phi_t^{o,(2)}-\left(q_2\lambda_2\beta_1+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\phi_t^{o,(3)}\\
&&-q_2\left(\frac{1}{\widetilde N_2}-1\right)\phi_t^{o,(2)}-(\varepsilon_2-q_2^2)\lambda_2\beta_1\\
\nonumber\dot\phi_t^{o,(3)}&=&-\left(q_1\lambda_1\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\phi_t^{o,(2)}-\left(q_2\lambda_2(\beta_2-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\phi_t^{o,(3)}\\
&&-q_2\left(\frac{1}{\widetilde N_2}-1\right)\phi_t^{o,(3)}-(\varepsilon_2-q_2^2)\lambda_2(\beta_2-1)\\
\nonumber\dot\phi_t^{o,(4)}&=&-\left((1-\frac{1}{\widetilde N_1})\eta_t^{o,(4)}+\gamma_t^{(1)}\right)\phi_t^{o,(2)}\\
&&-\left((1-\frac{1}{\widetilde N_2})\phi_t^{o,(4)}+\gamma_t^{(2)}\right)\phi_t^{o,(3)}-q_2\left(\frac{1}{\widetilde N_2}-1\right)\phi_t^{o,(4)}\label{phi_open-4}
\end{eqnarray}
with the terminal conditions
\begin{eqnarray*}
\eta_T^{o,(1)}=c_1,\;\eta_T^{o,(2)}=c_1\lambda_1(\beta_1-1),\;\eta_T^{o,(3)}=c_1\lambda_1\beta_2,\;\eta_T^{o,(4)}=0,\\
\phi_T^{o,(1)}=c_2,\;\phi_T^{o,(2)}=c_2\lambda_2\beta_1,\;\phi_T^{o,(3)}=c_2\lambda_2(\beta_2-1),\;\phi_T^{o,(4)}=0,
\end{eqnarray*}
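The uncoupled equation \eqref{eta_open-1} is a scalar Riccati ODE with terminal condition $\eta_T^{o,(1)}=c_1$, which can be integrated backward in time. A minimal pure-Python RK4 sketch, abbreviating $A=(2-1/\widetilde N_1)q_1$, $B=1-1/\widetilde N_1$, $C=\varepsilon_1-q_1^2$ (all numerical values below are hypothetical, taken with $\varepsilon_1\geq q_1^2$):

```python
# Backward integration of the scalar Riccati ODE
#   eta' = A*eta + B*eta^2 - C,  eta(T) = c,
# via the substitution u(s) = eta(T - s), so u' = -(A*u + B*u^2 - C), u(0) = c.

def solve_riccati_backward(A, B, C, c, T, n=1000):
    """Classical RK4 on u(s) = eta(T - s); returns eta on a grid over [0, T]."""
    f = lambda u: -(A * u + B * u * u - C)
    dt = T / n
    u = c
    us = [u]
    for _ in range(n):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        us.append(u)
    # us[0] = eta(T), us[-1] = eta(0); reverse so the list is indexed by time.
    return us[::-1]

# Hypothetical parameters, e.g. q1 = 1, 1/Ntilde1 = 0.1, eps1 = 2, c1 = 1, T = 1:
eta = solve_riccati_backward(A=1.9, B=0.9, C=1.0, c=1.0, T=1.0)
print(eta[-1], eta[0])  # eta(T) = 1.0 (terminal condition); eta(0) stays bounded
```

With $A,B>0$, $C\geq 0$, and $c_1\geq 0$, the backward flow is attracted to the positive root of $B\eta^2+A\eta-C=0$, so the solution stays bounded on $[0,T]$.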
and the square-integrable progressive processes are given by
\begin{eqnarray*}
&&Z_t^{(1)i,(1)l,0}=\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\rho\left(\sigma_1\eta_t^{o,(2)}+\sigma_2\eta_t^{o,(3)}\right),\\
&&Z_t^{(1)i,(1)l,1}=\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(1)i,(1)l,2}=\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(1)i,(1)l,(1)k_1}=\frac{1}{N_1}\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(1)}\left(\frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2}-\delta_{(1)i,(1)k_1}\right),\\
&&Z_t^{(1)i,(1)l,(2)k_2}=\frac{1}{N_1}\left(\frac{1}{\widetilde N_1}-\delta_{(1)i,(1)l}\right)\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2},
\end{eqnarray*}
and
\begin{eqnarray*}
&&Z_t^{(1)i,(2)h,0}=\frac{\lambda_1}{N}\rho\left(\sigma_1\eta_t^{o,(2)}+\sigma_2\eta_t^{o,(3)}\right),\\
&&Z_t^{(1)i,(2)h,1}=\frac{\lambda_1}{N}\eta_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(1)i,(2)h,2}=\frac{\lambda_1}{N}\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(1)i,(2)h,(1)k_1}=\frac{\lambda_1}{N}\eta_t^{o,(1)}\left(\frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2}-\delta_{(1)i,(1)k_1}\right),\\
&&Z_t^{(1)i,(2)h,(2)k_2}=\frac{\lambda_1}{N}\eta_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2},
\end{eqnarray*}
and
\begin{eqnarray*}
&&Z_t^{(2)j,(1)l,0}=\frac{\lambda_2}{N}\rho\left(\sigma_1\phi_t^{o,(2)}+\sigma_2\phi_t^{o,(3)}\right),\\
&&Z_t^{(2)j,(1)l,1}=\frac{\lambda_2}{N}\phi_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(2)j,(1)l,2}=\frac{\lambda_2}{N}\phi_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(2)j,(1)l,(1)k_1}=\frac{\lambda_2}{N}\phi_t^{o,(2)} \frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2},\\
&&Z_t^{(2)j,(1)l,(2)k_2}=\frac{\lambda_2}{N}\phi_t^{o,(1)}\left( \frac{1}{N_2}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2}-\delta_{(2)j,(2)k_2}\right),
\end{eqnarray*}
and
\begin{eqnarray*}
&&Z_t^{(2)j,(2)h,0}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\rho\left(\sigma_1\phi_t^{o,(2)}+\sigma_2\phi_t^{o,(3)}\right),\\
&&Z_t^{(2)j,(2)h,1}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(2)}\sigma_1\sqrt{1-\rho^2}\rho_1,\\
&&Z_t^{(2)j,(2)h,2}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(3)}\sigma_2\sqrt{1-\rho^2}\rho_2,\\
&&Z_t^{(2)j,(2)h,(1)k_1}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(2)} \frac{1}{N_1}\sigma_1\sqrt{1-\rho^2}\sqrt{1-\rho_1^2},\\
&&Z_t^{(2)j,(2)h,(2)k_2}=\left(\frac{1}{\widetilde N_2}-\delta_{(2)j,(2)h}\right)\phi_t^{o,(1)}\left( \frac{1}{N_2}\sigma_2\sqrt{1-\rho^2}\sqrt{1-\rho_2^2}-\delta_{(2)j,(2)k_2}\right),
\end{eqnarray*}
for $i,l=1,\cdots,N_1$ and $j,h=1,\cdots,N_2$. Hence, the open-loop Nash equilibria are written as
\begin{eqnarray}
\nonumber \hat\alpha^{o,(1)i}&=&\left(q_1+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(1)}\right)(\overline X_t^{(1)}-X_t^{(1)i})+\left(q_1\lambda_1(\beta_1-1)+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(2)}\right)\overline X_t^{(1)}\\
\label{open-app-1}&&+\left(q_1\lambda_1\beta_2+(1-\frac{1}{\widetilde N_1})\eta_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_1}\right)\eta_t^{o,(4)},\\
\nonumber \hat\alpha^{o,(2)j}&=&\left(q_2+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(1)}\right)(\overline X_t^{(2)}-X_t^{(2)j})+\left(q_2\lambda_2\beta_1+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(2)}\right)\overline X_t^{(1)}\\
\label{open-app-2}&&+\left(q_2\lambda_2(\beta_2-1)+(1-\frac{1}{\widetilde N_2})\phi_t^{o,(3)}\right)\overline X_t^{(2)}+\left(1-\frac{1}{\widetilde N_2}\right)\phi_t^{o,(4)}.
\end{eqnarray}
Note that the existence of solutions to the coupled ODEs (\ref{eta_open-1}-\ref{phi_open-4}) for sufficiently large $N_1$ and $N_2$ is studied in Proposition \ref{Prop_suff}. Based on the open-loop equilibria (\ref{open-app-1}-\ref{open-app-2}), the lending and borrowing system satisfies a Lipschitz condition, so that the existence of a solution to the corresponding FBSDEs can be verified using a fixed point argument; see \cite{Carmona-Fouque2016} for instance.
\section{Proof of Theorem \ref{Hete-MFG-prop}}\label{Appex-Hete-MFG}
Due to the non-Markovian structure of the given $m_t^{(k)}$ for $k=1,\cdots,d$, in order to obtain the $\varepsilon$-Nash equilibrium for the coupled diffusions with common noises, we again apply the adjoint FBSDEs discussed in \cite{CarmonaDelarueLachapelle} and \cite{R.Carmona2013}. The corresponding Hamiltonian is given by
\begin{equation}
H^{k}(t,x,y^{(k)},\alpha)=\sum_{h=1}^d(\alpha^{(h)}+\gamma^{(h)}_t)y^{(k),h}+\frac{(\alpha^{(k)})^2}{2}-q_k\alpha^{(k)}\left(M^{\lambda_k}_t-x^{(k)}\right)+\frac{\varepsilon_k}{2}\left(M^{\lambda_k}_t-x^{(k)}\right)^2,
\end{equation}
where $x=(x^{(1)},\cdots,x^{(d)})$, $y^{(k)}=(y^{k,1},\cdots,y^{k,d})$, and $\alpha=(\alpha^{(1)},\cdots,\alpha^{(d)})$. The Hamiltonian attains its minimum at
\begin{equation}
\hat\alpha^{m,(k)}_t=q_k\left(M^{\lambda_k}_t-x^{(k)}\right)-y^{k,k}.
\end{equation}
The backward equations satisfy
\begin{eqnarray}
\nonumber dY^{k,l}_t&=&-\partial_{x^{(l)}}H^k(\hat\alpha^{(k)})dt+\sum_{h=0}^dZ^{0,k,l,h}_tdW^{(0),(h)}_t+\sum_{h=1}^dZ^{k,l,h}_tdW^{(h)}_t\\
\nonumber &=&(q_kY^{k,k}_t+(\varepsilon_k-q_k^2)(M^{\lambda_k}_t-X^{(k)}_t))\delta_{k,l}dt+\sum_{h=0}^dZ^{0,k,l,h}_tdW^{(0),(h)}_t+\sum_{h=1}^dZ^{k,l,h}_tdW^{(h)}_t,\\
\label{Y-MFG}
\end{eqnarray}
with the terminal conditions $Y^{k,l}_T=\frac{c_k}{2}(X^{(k)}_T-m^{(k)}_T)\delta_{k,l}$ for $k,l=1,\cdots,d$, where the processes $Z^{0,k,l,h}_t$ and $Z^{k,l,h}_t$ are adapted and square integrable. We make the ansatz for $Y^{k,l}_t$ written as
\begin{equation}\label{ansatz-MFG}
Y^{k,l}_t=-\left(\eta^{m,(k)}_t(m^{(k)}_t-X_t^{(k)})+\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t\right)\deltalta_{k,l},
\end{equation}
leading to
\begin{eqnarray}
\nonumber dX^{(k)}_t&=&\bigg\{(q_k+\eta^{m,(k)}_t)(m^{(k)}_t -X_t^{(k)})+\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t+\gamma^{(k)}_t\\
\nonumber&&+q_k\lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m^{(h_1)}_t\bigg\}dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2}\left(\rho_{k}dW_t^{(0),(k)}+\sqrt{1-\rho^2_{k}}dW^{(k)}_t\right)\right),\label{X-MFG-1}\\
\nonumber dm^{(k)}_t&=&\bigg\{\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t+\mu^{m,(k)}_t+\gamma^{(k)}_t+q_k\lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m^{(h_1)}_t\bigg\}dt\\
&&+\sigma_k\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2} \rho_{k}dW_t^{(0),(k)} \right). \label{m-hete}
\end{eqnarray}
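A minimal Euler--Maruyama sketch of the pair \eqref{X-MFG-1}-\eqref{m-hete} in the single-group case $d=1$ (where $\beta_1=1$ and $\delta_{1,1}=1$, so the $q_k\lambda_k$ interaction sum vanishes), with the deterministic coefficient functions frozen at hypothetical constants:

```python
import math, random

def simulate(T=1.0, n=500, q=1.0, eta=0.5, psi=-0.2, mu=0.1, gamma=0.0,
             sigma=0.2, rho=0.3, rho1=0.4, x0=1.0, m0=0.0, seed=1):
    """Euler-Maruyama for the d=1 version of (X-MFG-1)-(m-hete): the state X
    tracks the conditional mean m through the (q+eta)*(m-X) term, while m is
    driven only by the common noises. All coefficients are frozen constants."""
    rng = random.Random(seed)
    dt = T / n
    sd = math.sqrt(dt)
    c = math.sqrt(1.0 - rho * rho)
    x, m = x0, m0
    for _ in range(n):
        dW0 = sd * rng.gauss(0.0, 1.0)  # common noise W^{(0),(0)}
        dB0 = sd * rng.gauss(0.0, 1.0)  # common noise W^{(0),(1)}
        dW = sd * rng.gauss(0.0, 1.0)   # idiosyncratic noise W^{(1)}
        drift_m = psi * m + mu + gamma
        x += ((q + eta) * (m - x) + drift_m) * dt \
             + sigma * (rho * dW0 + c * (rho1 * dB0 + math.sqrt(1 - rho1**2) * dW))
        m += drift_m * dt + sigma * (rho * dW0 + c * rho1 * dB0)
    return x, m

x, m = simulate()
print(x, m)  # X is pulled toward m; the common-noise part of the gap cancels
```

The idiosyncratic noise is the only source of the residual gap $X_T-m_T$, which mean-reverts at rate $q+\eta$.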
Inserting the ansatz \eqref{ansatz-MFG} into \eqref{Y-MFG} gives
\begin{eqnarray}
\nonumber dY^{k,l}_t&=&\delta_{k,l}\bigg\{(-q_k\eta_t^{m,(k)}+\varepsilon_k-q_k^2)(m^{(k)}_t-X_t^{(k)})+(\varepsilon_k-q_k^2) \lambda_k\sum_{h_1=1}^d(\beta_{h_1}-\delta_{k,h_1})m_t^{(h_1)} \\
\nonumber&&-q_k\sum_{h_1=1}^d\psi_t^{m,(k),h_1}m^{(h_1)}_t-q_k\mu^{m,(k)}_t\bigg\}dt+\sum_{h=0}^dZ^{0,k,l,h}_tdW^{(0),(h)}_t+\sum_{h=1}^dZ^{k,l,h}_tdW^{(h)}_t,\\\label{Y-MFG-1}
\end{eqnarray}
and applying It\^o's formula to \eqref{ansatz-MFG} and using \eqref{X-MFG-1} and \eqref{m-hete} implies
\begin{eqnarray}
\nonumber dY^{k,l}_t&=&\delta_{k,l}\bigg\{\bigg(-\dot\eta^{m,(k)}_t(m^{(k)}_t-X_t^{(k)})+\eta^{m,(k)}_t (q_k+\eta_t^{m,(k)})(m^{(k)}_t-X^{(k)}_t)\\
\nonumber&&-\dot\mu_t^{m,(k)}-\sum_{h_1=1}^d\dot\psi_t^{m,(k),h_1}m^{(h_1)}_t\\
\nonumber&& -\sum_{h=1}^d\psi_t^{m,(k),h}\left(\sum_{h_1=1}^d(\psi_t^{m,(h),h_1}+q_h\lambda_h(\beta_{h_1}-\delta_{h,h_1}))m^{(h_1)}_t+\mu_t^{m,(h)}+\gamma^{(h)}_t \right)\bigg)dt \\
\nonumber&&+\sum_{h=1}^d\psi_t^{m,(k),h}\sigma_h\left(\rho dW^{(0),(0)}_t+\sqrt{1-\rho^2} \rho_{h}dW_t^{(0),(h)}\right)\\
&&+\eta_t^{m,(k)}\sigma_k\sqrt{1-\rho^2}\sqrt{1-\rho_k^2}dW^{(k)}_t \bigg\}.
\label{Y-MFG-2}
\end{eqnarray}
Similarly, by identifying \eqref{Y-MFG-1} and \eqref{Y-MFG-2}, we find that $\eta_t^{m,(k)}$, $\psi^{m,(k),h}_t$, and $\mu_t^{m,(k)}$ must satisfy (\ref{Hete-eta-MFG}-\ref{Hete-mu-MFG}), and the square-integrable processes $Z^{0,k,l,h}_t$ and $Z^{k,l,h}_t$ satisfy
\begin{equation}
Z^{0,k,l,0}_t=-\eta_t^{m,(k)}\lambda_k\rho\sum_{h_1=1}^d\sigma_{h_1}(\beta_{h_1}-\delta_{k,h_1}+\psi_t^{m,(k),h_1}),\;l=k,\quad Z^{0,k,l,0}_t=0, \; l\neq k,
\end{equation}
and
\begin{equation}
Z^{0,k,l,h}_t=-\eta_t^{m,(k)}\lambda_k\sqrt{1-\rho^2}\sigma_{h }(\beta_{h }-\delta_{k,h}+\psi_t^{m,(k),h}),\;l=k,\quad Z^{0,k,l,h}_t=0, \; l\neq k,
\end{equation}
and
\begin{equation}
Z^{k,l,h}_t=\eta^{m,(k)}_t\sigma_k\sqrt{1-\rho^2}\sqrt{1-\rho_k^2},\;l=k,\quad Z^{k,l,h}_t=0,\; l\neq k.
\end{equation}
By the fixed point argument, the $\varepsilon$-Nash equilibria are given by
\begin{equation}
\hat\alpha_t^{m,(k)}=(q_k+\eta^{m,(k)}_t)(m^{(k)}_t-x^{(k)})+\sum_{h=1}^d\widetilde\psi_t^{m,(k),h}m^{(h)}_t+\mu^{m,(k)}_t,\quad k=1,\cdots,d,
\end{equation}
where $\widetilde\psi_t^{m,(k),h}=\psi_t^{m,(k),h}+q_k\lambda_k(\beta_h-\delta_{k,h})$.
We now study the existence of solutions to the coupled ODEs (\ref{Hete-eta-MFG}-\ref{Hete-mu-MFG}). Note that we show the existence for the case of two heterogeneous groups in Proposition \ref{Prop_suff}. Observe that \eqref{Hete-eta-MFG} is a Riccati equation without coupling. Given \eqref{Hete-psi-MFG}, the system \eqref{Hete-mu-MFG} is a system of linear ODEs. Hence, it is sufficient to show the existence of \eqref{Hete-psi-MFG}. Similar to the results in Proposition \ref{Prop_suff}, in the general case of $d$ groups, we obtain
\[
\sum_{h_1=1}^d\psi_t^{m,(k),h_1}=0,
\]
implying that given $k=h_1$
\begin{equation}\label{MFG_suff_cond_1}
\psi_t^{m,(k),k}=-\sum_{h_1\neq k}\psi_t^{m,(k),h_1}.
\end{equation}
Now, by inserting \eqref{MFG_suff_cond_1} into \eqref{Hete-psi-MFG}, for $k\neq h_1$, \eqref{Hete-psi-MFG} can be rewritten as
\begin{eqnarray}
\nonumber \dot\psi_t^{m,(k),h_1}&=&q_k\psi_t^{m,(k),h_1}+\sum_{h\neq k}\psi_t^{m,(k),h}\left(\psi_t^{m,(k),h_1}+q_k\lambda_k\beta_{h_1}\right)\\
\nonumber &&-\sum_{h\neq k}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h(\beta_{h_1}-\delta_{h,h_1})\right)-(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1} \\
\nonumber &=&q_k(1+\lambda_k\beta_{h_1})\psi_t^{m,(k),h_1}+(\psi_t^{m,(k),h_1})^2\\
\nonumber &&-\sum_{h\neq k,h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}\right)\\
\nonumber &&+\psi_t^{m,(k),h_1}\left(\sum_{h\neq h_1}\psi_t^{m,(k),h}+q_{h_1}\lambda_{h_1}(1-\beta_{h_1})\right)\\
\nonumber &&+\sum_{h\neq k, h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(k),h_1}+q_k\lambda_k\beta_{h_1}\right)-(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}\\
\nonumber &=&q_k(1+\lambda_k\beta_{h_1})\psi_t^{m,(k),h_1}+(\psi_t^{m,(k),h_1})^2\\
\nonumber &&+\psi_t^{m,(k),h_1}\left(\sum_{h\neq h_1}\psi_t^{m,(k),h}+q_{h_1}\lambda_{h_1}(1-\beta_{h_1})\right)+\psi_t^{m,(k),h_1}\sum_{h\neq k, h_1}\psi_t^{m,(k),h}\\
&&-\sum_{h\neq k,h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}-q_k\lambda_k\beta_{h_1}\right)-(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}.
\label{C-15}
\end{eqnarray}
We now further assume $c_{\tilde k}\geq \max_{k,h}\left(\frac{q_k\lambda_k}{\lambda_h}-q_h\right)$ for $\tilde k=1,\cdots,d$ such that the term
\begin{equation}\label{C-15-1}
\sum_{h\neq k,h_1}\psi_t^{m,(k),h}\left(\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}-q_k\lambda_k\beta_{h_1}\right)
\end{equation}
stays negative for all $t$, in order to guarantee $\psi_t^{m,(k),h_1}\geq 0$ for $k\neq h_1$. Note that in the two-group case with $d=2$, the term in \eqref{C-15-1} vanishes. See Proposition \ref{Prop_suff} for details.
Similarly, using $\check\psi_t^{m,(k),h_1}= \psi_{T-t}^{m,(k),h_1}$, we have
\begin{eqnarray*}
\dot{\check\psi}_t^{m,(k),h_1}&=&-q_k(1+\lambda_k\beta_{h_1})\check\psi_t^{m,(k),h_1}-(\check\psi_t^{m,(k),h_1})^2-\sum_{h\neq k, h_1}\check\psi_t^{m,(k),h}\left(\check\psi_t^{m,(k),h_1}+q_k\lambda_k\beta_{h_1}\right)\\
&&-\check\psi_t^{m,(k),h_1}\left(\sum_{h\neq h_1}\check\psi_t^{m,(k),h}+q_{h_1}\lambda_{h_1}(1-\beta_{h_1})\right)\\
&&+\sum_{h\neq k,h_1}\check\psi_t^{m,(k),h}\left(\check\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}\right)+(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1} \\
&\leq &-q_k(1+\lambda_k\beta_{h_1})\check\psi_t^{m,(k),h_1}-(\check\psi_t^{m,(k),h_1})^2\\
&&+\sum_{h\neq k,h_1}\check\psi_t^{m,(k),h}\left(\check\psi_t^{m,(h),h_1}+q_h\lambda_h\beta_{h_1}\right)+(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}.
\end{eqnarray*}
Let $$\underline \zeta=\min_{k,h}q_k(1+\lambda_k\beta_h),\quad\overline \zeta=\max_{k,h}q_k\lambda_k\beta_h.$$ We have
\begin{eqnarray}\label{psi_ij}
\nonumber \dot{\check\psi}_t^{m,(k),h_1}&\leq&-\underline\zeta \check\psi_t^{m,(k),h_1}-(\check\psi_t^{m,(k),h_1})^2+\frac{1}{2}\sum_{h\neq k,h_1}\left((\check\psi_t^{m,(k),h})^2+(\check\psi_t^{m,(h),h_1})^2\right)\\
&&+\overline \zeta\sum_{h\neq k,h_1}\check\psi_t^{m,(k),h}+(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1} .
\end{eqnarray}
Now, using $$\check\psi_t=\sum_{k=1,\cdots,d,h_1=1,\cdots,N_k,k\neq h_1}\check\psi_t^{m,(k),h_1}$$ and \eqref{psi_ij} leads to
\begin{eqnarray}
\dot{\check\psi}_t&\leq&-\underline\zeta\check\psi_t+\overline \zeta\check\psi_t+\hat\zeta=(\overline \zeta-\underline \zeta)\check\psi_t+\hat\zeta,
\end{eqnarray}
implying
\begin{equation}
\check\psi_t\leq\widetilde\zeta e^{(\overline \zeta-\underline \zeta)t}+\frac{\hat\zeta}{\overline \zeta-\underline \zeta}\left(e^{(\overline \zeta-\underline \zeta)t}-1\right),
\end{equation}
where $$\widetilde\zeta=\sum_{k=1,\cdots,d,h_1=1,\cdots,N_k,k\neq h_1}c_k\lambda_k\beta_{h_1},\quad \hat\zeta=\sum_{k=1,\cdots,d,h_1=1,\cdots,N_k,k\neq h_1}(\varepsilon_k-q_k^2)\lambda_k\beta_{h_1}.$$
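For completeness, the step from the differential inequality to the exponential bound above is a standard integrating-factor (Gr\"onwall-type) argument; a sketch, assuming $\overline\zeta\neq\underline\zeta$ and the initial value $\check\psi_0=\widetilde\zeta$:

```latex
% Sketch: from \dot{\check\psi}_t \le (\overline\zeta-\underline\zeta)\check\psi_t+\hat\zeta,
% multiplying by the integrating factor e^{-(\overline\zeta-\underline\zeta)t} gives
\[
\frac{d}{dt}\left(e^{-(\overline\zeta-\underline\zeta)t}\check\psi_t\right)
\le \hat\zeta\, e^{-(\overline\zeta-\underline\zeta)t},
\]
% and integrating from 0 to t, then multiplying back, yields
\[
\check\psi_t \le \check\psi_0\, e^{(\overline\zeta-\underline\zeta)t}
+\frac{\hat\zeta}{\overline\zeta-\underline\zeta}\left(e^{(\overline\zeta-\underline\zeta)t}-1\right).
\]
```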
Using $\psi_t^{m,(k),h_1}\geq 0$ for $k\neq h_1$, the proof is complete. \qed
\end{document}
\begin{document}
\title{Fatigue Estimation Methods Comparison \\ for Wind Turbine Control}
\author{J.J.~Barradas~Berglind}
\thanks{}
\author{Rafael~Wisniewski}
\thanks{\emph{Pre-print submitted to Wind Energy.}}
\dedicatory{Automation \& Control, Department of Electronic Systems \\ Aalborg
University, Aalborg East 9220, DK. e-mail: \{jjb,raf\}@es.aau.dk}
\keywords{Fatigue Estimation, Load Estimation, Fatigue for Wind Turbine Control, Rainflow Counting, Spectral Methods, Hysteresis Operator}
\begin{abstract}
Fatigue is a critical factor in structures such as wind turbines that are exposed to harsh operating conditions, both in the design stage and during control of their operation. In the present paper, the most recognized approaches to estimating the damage caused by fatigue are discussed and compared, with special focus on their applicability to wind turbine control. The aim of this paper is to serve as a guide through the vast literature on fatigue and to shed some light on the underlying relationships between these methods.
\end{abstract}
\maketitle
\section{Introduction and Motivation}
Fatigue has been widely and exhaustively studied, and the literature is vast and approached from different perspectives; thus, incorporating fatigue or wear of wind turbine components into a control problem may seem a daunting task. Fatigue is regarded as a critical factor in structures such as wind turbines, where it is necessary to ensure a certain life span under normal operating conditions in a turbulent environment. These environmental conditions lead to irregular loadings, which is also the case for waves and uneven roads. The main focus of the present paper is on fatigue estimation methods for wind turbine control, and as such the most widely used methods are described, with special emphasis on the applicability of these techniques for control.
In general, fatigue can be understood as the weakening or breakdown of a material subject to stress, especially a repeated series of stresses. From a materials perspective, it can also be thought of as elastoplastic deformations causing damage to a certain material or structure, compromising its integrity.
Fatigue is a phenomenon that occurs on a microscopic scale, manifesting itself as deterioration or damage. Consequently, it has been of interest in different fields and has been studied extensively from different perspectives; a very detailed history of fatigue can be found in \cite{schutz96}. It could be argued that two major turning points in the history of fatigue came firstly with the contributions of W{\"o}hler, who suggested design for finite fatigue life in the 1860s \cite{Wohler_1860} and introduced the so-called W{\"o}hler curve (or S-N curve: stress versus number of cycles to failure), which still sets the basis for theoretical damage estimation; and secondly with the linear damage accumulation rule by Palmgren \cite{Palmgren_1924} and Miner \cite{Miner_1945}, still in use nowadays.
\section{Fatigue Estimation for Wind Turbine Control}
Perhaps the most recognized and used measure for fatigue damage estimation is the so-called rainflow counting (RFC) method, which is used in combination with the Palmgren-Miner rule. In the wind turbine context, the impact on fatigue from a load can be described by an equivalent damage load (EDL); basically, the EDL is calculated using the Palmgren-Miner rule to determine a single, constant-rate fatigue load that will produce equivalent damage \cite{Sutherland99}.
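As a concrete illustration of the EDL idea, the sketch below uses the commonly used form $\left(\sum_i n_i s_i^k / N_{eq}\right)^{1/k}$; the cycle data, exponent $k$, and equivalent cycle count are illustrative assumptions, not values from the cited works.

```python
# Hedged sketch of a damage equivalent load (DEL/EDL) computation in the
# commonly used form (sum_i n_i * s_i^k / N_eq)^(1/k). All numbers here
# are illustrative, not taken from the cited references.

def damage_equivalent_load(amplitudes, counts, k, n_eq):
    """Constant-amplitude load that, applied n_eq times, produces the
    same Palmgren-Miner damage as the counted cycles."""
    return (sum(c * s ** k for s, c in zip(amplitudes, counts)) / n_eq) ** (1.0 / k)

if __name__ == "__main__":
    # three cycle bins (amplitude, count) and an illustrative S-N exponent
    print(damage_equivalent_load([2.0, 4.0, 6.0], [10.0, 5.0, 1.0], 4.0, 16.0))
```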
Load or fatigue reduction techniques for wind turbines can be roughly divided into active and passive. The former make use of the controller, e.g., by changing the pitch angle or the generator torque, while the latter entail the design of the structure. In \cite{bottasso13}, both strategies are combined to reduce loads on the blades. In the wind turbine control context, the control algorithm may have substantial effects on the wind turbine components; for example, controlling the pitch angle may lead to thrust load changes, which consequently affect the loads on the tower and blades \cite{bossanyi03}. In \cite{bossanyi03}, \cite{bossanyi05}, \cite{larsen05}, reductions in loading are achieved by controlling the pitch of each blade independently; the damage of different control strategies is assessed by EDL, using S-N curves. In \cite{lescher06}, \cite{nourdine10}, load reduction control strategies are proposed, where the damage is evaluated using the RFC algorithm. Model predictive control (MPC) strategies using wind preview have been proposed in \cite{Sol_11}, \cite{madsen12} to reduce loads, evaluated via EDL. In \cite{Hamm_07}, control strategies were designed by approximating the fatigue load with an analytical function based on spectral moments. The Aeolus project \cite{aeolus} provides a simulation platform, which considers the fatigue load of a wind farm for optimization as a post-processing method.
A large share of current control methods rely on calculating the damage either by EDL or RFC, which can only be used as post-processing tools; other methods are based on minimization of some norm of the stress on different components of the wind turbine, which is hoped to reduce fatigue but is not a reliable characterization of the damage \cite{Sol_11}, \cite{Mirzaei_13}. Thus, in this paper we introduce and compare the most recognized fatigue estimation methods, and explore different alternatives with a focus on whether they can be incorporated in control loops and thus be used directly in controller synthesis.
\section{Fatigue Estimation Methods}
Some of the most recognized approaches to estimating the damage caused by fatigue will be discussed and compared in the sequel. From a materials perspective, an extensive survey for homogeneous materials was carried out in \cite{fatemi_98}. In the wind turbine context, \cite{Sutherland99} goes through the counting and spectral techniques used for wind turbine design. The perspective taken here is from a control point of view, and as such we categorize the fatigue estimation methods as follows:
\begin{enumerate}
\item{Counting methods}
\item{Frequency domain or spectral methods}
\item{Stochastic methods}
\item{Hysteresis operator}
\end{enumerate}
In all cases, we assume that the input signal is obtained from time history of the loading parameter of interest, such as force, torque, stress, strain, acceleration, or deflection \cite{fatiguelee_2005}.
\subsection{Counting Methods}
Cycle counting methods are algorithms that identify fatigue cycles by combining and extrapolating information from extrema (maxima and minima) in a time series. These algorithms are used together with damage accumulation rules, which calculate the total damage as a summation of increments. The most popular method among the counting methods is the so-called rainflow counting (RFC) method, used jointly with the Palmgren-Miner rule of linear damage accumulation to calculate the expected damage. The Palmgren-Miner rule is the most popular due to its simplicity; however, by applying it one assumes a fixed load, neglecting interaction and sequence effects that might contribute significantly to the damage; see, e.g., \cite{Agerskov} for tests with random loading.
Other cycle counting methods include: peak-valley counting (PVC), level-crossing counting (LCC), range counting (RC), and range-pairs counting (RPC); for more details see \cite{ASTMe1049} and \cite{Benasciutti_Thesis}. Here, we will focus on the RFC method, which is the most widely used and the most accurate in identifying the damaging effects caused by complex loadings \cite{Dowling71}. The rainflow counting method, first introduced by Endo \cite{Endo_1967}, has a complex sequential and nonlinear structure to decompose arbitrary sequences of loads into cycles; its name comes from an analogy with roofs collecting rainwater, used to explain the algorithm, which is sometimes also referred to as the pagoda roof. A figure depicting the described procedure is shown below in Figure \ref{fig:Int_RFC}.
\begin{figure}
\caption{Rainflow counting damage estimation procedure.}
\label{fig:Int_RFC}
\end{figure}
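To make the counting procedure concrete, a minimal sketch of the three-point rainflow algorithm (in the spirit of ASTM E1049 \cite{ASTMe1049}) is given below; it is a simplified illustration, not the toolbox implementation used later in the examples.

```python
# Hedged sketch of three-point rainflow counting (ASTM E1049 style).
# Returns (range, count) pairs with count 0.5 for half cycles and 1.0
# for closed cycles; a simplified illustration only.

def turning_points(series):
    """Reduce a load history to its sequence of local extrema."""
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the excursion
        elif x != tp[-1]:
            tp.append(x)        # direction change: new turning point
    return tp

def rainflow_cycles(series):
    """List of (range, count) pairs extracted from the load history."""
    cycles, stack = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])   # most recent range
            y = abs(stack[-2] - stack[-3])   # previous range
            if x < y:
                break
            if len(stack) == 3:
                cycles.append((y, 0.5))      # range contains the start point
                stack.pop(0)
            else:
                cycles.append((y, 1.0))      # closed (full) cycle
                del stack[-3:-1]
    # whatever remains in the residual is counted as half cycles
    cycles += [(abs(stack[i + 1] - stack[i]), 0.5)
               for i in range(len(stack) - 1)]
    return cycles
```

On the worked example from the ASTM standard, the history $(-2,1,-3,5,-1,3,-4,4,-2)$ yields one full cycle of range 4 and half cycles of ranges 3, 4, 6, 8, 8, and 9.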
For many materials there is an explicit relation between number of cycles to failure and cycle amplitude, which is known as S-N or W{\"o}hler curves, given as a line in a log-log scale as
\begin{align}
s^{k}N = K,
\label{SNcurve}
\end{align}
where $k$ and $K$ are material specific parameters and $N$ is the number of cycles to failure at a given stress amplitude $s$. Then, for a time history, the total damage under the linear accumulation damage (Palmgren-Miner) rule is given as
\begin{align}
D(T) = \sum\limits_{i=1}^{N(T)}\Delta D_{i} = \sum\limits_{i=1}^{N(T)}\frac{1}{N_{i}},
\label{DamagePM1}
\end{align}
for damage increments $\Delta D_{i}$ associated to each counted cycle, $N_{i}$ the number of cycles to failure associated to stress amplitude $s_{i}$, and the number of all counted cycles $N(T)$. Taking the S-N curve relationship in \eqref{SNcurve}, we can rewrite \eqref{DamagePM1} as
\begin{align}
D(T) = \sum\limits_{i=1}^{N(T)}\frac{s_{i}^{k}}{K}.
\end{align}
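Once the cycles are counted, the accumulation in \eqref{DamagePM1} is straightforward to compute; a minimal sketch, with illustrative material constants (not the steel values used later in the examples):

```python
# Hedged sketch: Palmgren-Miner damage from counted cycle amplitudes
# using the S-N relation s^k * N = K. The amplitudes and material
# constants in any usage are illustrative.

def cycles_to_failure(s, k, K):
    """N at stress amplitude s, from the S-N (Woehler) curve s^k N = K."""
    return K / s ** k

def palmgren_miner_damage(amplitudes, k, K):
    """D = sum_i 1/N_i = sum_i s_i^k / K over the counted amplitudes."""
    return sum(1.0 / cycles_to_failure(s, k, K) for s in amplitudes)
```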
Different RFC algorithms have been proposed, such as \cite{downing1982} and \cite{rychlik_3}, with different rules but providing the same results. A way to implement the RFC algorithm is using the Rainflow toolbox introduced in \cite{Nieslony_09}. An example is presented below, using the standard NREL 5MW wind turbine model \cite{Jonk_NREL5MW}, running in closed loop with standard pitch and torque controllers. The input used for the comparison is a time series of the tower bending moment extracted after a simulation of $600$ seconds. The results are presented in Figure \ref{fig:Ex1_RFC}: on top, the input stress is shown, and in the bottom part, the instantaneous damage and the accumulated damage are shown. For our example, we let $k=4$ and $K=6.25\times 10^{37}$ as in \cite{Hamm06_Thesis}, where the value of $k$ is adequate for steel structures. For this example, the instantaneous damage was extrapolated to its causing time, such that it can be plotted on the right time scale instead of the reduced turning-point scale.
\begin{figure}
\caption{Rainflow counting algorithm example, using the toolbox from \cite{Nieslony_09}.}
\label{fig:Ex1_RFC}
\end{figure}
Other outputs provided by the toolbox in \cite{Nieslony_09} are amplitude and cycle mean histograms, as well as the so-called rainflow matrix (RFM), from which the number of counted cycles with a given amplitude and mean value are obtained for the given stress history. Since the RFM will play a role further on in this paper, we elaborate on its construction. Load signals can be discretized to a certain number of levels, allowing efficient storage of the cycles in a so-called rainflow matrix, which is upper triangular by definition. Consequently, cycle amplitudes and mean values can be grouped in bins, such that the cycle count can be summarized as a matrix (for details see \cite{rychlik_2} and Chapter 2 in \cite{Benasciutti_Thesis}); sometimes this matrix is shown transposed. The rainflow matrix for the aforementioned example is depicted in Figure \ref{fig:Ex1_RFC2} for 10 bins, with the cycle mean on the $y$-axis, the cycle amplitude on the $x$-axis, and the number of cycles on the $z$-axis.
\begin{figure}
\caption{Rainflow matrix, using the toolbox from \cite{Nieslony_09}.}
\label{fig:Ex1_RFC2}
\end{figure}
Lastly, NREL has an estimator of fatigue life called \texttt{MLife} (currently in alpha version, an improvement on \texttt{MCrunch} \cite{MCrunch}), which runs the RFC algorithm of \cite{Nieslony_09}. \texttt{MLife} calculates fatigue life for one or several time series, incorporating the Goodman correction into the damage calculation (to account and correct for the fixed-load assumption). These calculations include short-term damage equivalent loads and damage rates, lifetime results based on time series, accumulated lifetime damage, and time until failure \cite{MLife}.
\subsection{Spectral Methods}
An alternative to counting methods are the so-called spectral or frequency domain methods \cite{Bishop_99}, which assume narrow-band processes and calculate the lifetime estimate by using an empirical formula based on the spectral moments of the input signal; the aim of these methods is to approximate the rainflow density of the RFC algorithm. This procedure is depicted in Figure \ref{fig:Int_Spectral}. It is worth mentioning that some of these methods are based on empirical formulas, being essentially black-box, and may be restricted to Gaussian histories. A comparison of different spectral methods was carried out in \cite{Benasciutti_06}.
\begin{figure}
\caption{Spectral methods damage estimation procedure.}
\label{fig:Int_Spectral}
\end{figure}
Spectral methods are based on statistical information of the signal of interest, i.e., its spectral moments. Following \cite{Bishop_99} and \cite{Hamm_07}, the $m$-th spectral moment of the process $x(t)$ is defined as
\begin{align}
\lambda^{x}_{m} = \frac{1}{\pi}\int\limits_{0}^{\infty}f^{m}\cdot S_{x}(f) df,
\end{align}
where $S_{x}(f)$ is the power spectral density (PSD) of the process, with the following properties
\begin{align}
\lambda^{x}_{0} = \sigma^{2}_{x}, \;\; \lambda^{x}_{2} = \sigma^{2}_{\dot{x}} \;\; \text{and} \;\; \lambda^{x}_{4} = \sigma^{2}_{\ddot{x}}.
\label{moments4}
\end{align}
In other words, the variance of the process is given by $\lambda^{x}_{0}$, the variance of the process' first derivative is then given by the second moment, and lastly the variance of the process' second derivative is given by the fourth moment. Consequently, following the results in \cite{Benasciutti05} and \cite{Ryc93} the damage rate for narrow-banded Gaussian stress histories is given by
\begin{align}
d_{\curlywedge} = \frac{1}{2\pi}\sqrt{\frac{\lambda_{4}}{\lambda_{2}}}\frac{1}{K}\left(2\sqrt{2\lambda_{0}}\right)^{k}\Gamma\left(1+\frac{k}{2}\right),
\end{align}
where $\Gamma(\cdot)$ is the gamma function, and $k$, $K$ are the S-N parameters used in the RFC case. In \cite{Benasciutti05}, the authors proposed an estimate of the expected fatigue damage rate, given as the narrow-band approximation augmented with a correction factor to account for the process not necessarily being narrow-band:
\begin{align}
E\left[d\right] \approx d_{\curlywedge} \cdot \left(b+(1-b)\alpha_{2}^{k+1}\right)
\end{align}
with
\begin{align}
b = \frac{\left(\alpha_{1}-\alpha_{2}\right)\left[1.112\left(1+\alpha_{1}\alpha_{2}-(\alpha_{1}+\alpha_{2})\right)e^{2.11\alpha_{2}}+\left(\alpha_{1}-\alpha_{2}\right)\right]}{\left(\alpha_{2}-1\right)^{2}}
\end{align}
and
\begin{align}
\alpha_{1} = \frac{\lambda_{1}}{\sqrt{\lambda_{0}\lambda_{2}}}, \hspace{12pt} \alpha_{2}= \frac{\lambda_{2}}{\sqrt{\lambda_{0}\lambda_{4}}}.
\end{align}
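These quantities translate directly into a short computation; the sketch below follows the text's convention for $\lambda_m$ (including the $1/\pi$ factor) and the narrow-band and Benasciutti formulas above, with a discretized PSD on a frequency grid standing in for real turbine data.

```python
# Hedged sketch: spectral moments (with the 1/pi convention from the
# text), the narrow-band damage rate, and the Benasciutti correction
# factor. Any PSD and constants used with these are illustrative.
import math

def spectral_moment(f, S, m):
    """lambda_m = (1/pi) * integral f^m S(f) df, via the trapezoidal rule."""
    g = [fi ** m * Si for fi, Si in zip(f, S)]
    area = sum(0.5 * (g[i] + g[i + 1]) * (f[i + 1] - f[i])
               for i in range(len(f) - 1))
    return area / math.pi

def narrowband_damage_rate(l0, l2, l4, k, K):
    """Narrow-band Gaussian damage rate from the text's formula."""
    return (math.sqrt(l4 / l2) / (2.0 * math.pi) / K
            * (2.0 * math.sqrt(2.0 * l0)) ** k * math.gamma(1.0 + k / 2.0))

def benasciutti_factor(l0, l1, l2, l4, k):
    """Correction factor b + (1 - b) * alpha_2^(k+1)."""
    a1 = l1 / math.sqrt(l0 * l2)
    a2 = l2 / math.sqrt(l0 * l4)
    b = ((a1 - a2) * (1.112 * (1.0 + a1 * a2 - (a1 + a2))
                      * math.exp(2.11 * a2) + (a1 - a2))) / (a2 - 1.0) ** 2
    return b + (1.0 - b) * a2 ** (k + 1)
```

The expected damage rate is then the narrow-band rate multiplied by the correction factor, as in the formula above.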
In \cite{Hamm_07} and \cite{Hamm06_Thesis}, the numerical integration of the spectral density as in \eqref{moments4} is avoided, since the spectral moments are computed by means of polynomial evaluation and differentiation, involving a logarithm and an inverse tangent function. This allowed the method to be incorporated in the control loop.
In order to compare the spectral method with the example presented in the previous section, the spectral moments $\lambda=(\lambda_{0},\lambda_{1},\lambda_{2},\lambda_{4})$ of the time series were calculated using the \texttt{WAFO toolbox} \cite{WAFO} (through integration)
\begin{align}
\lambda=\{4.4071\times 10^{14},\,-3.949\times 10^{7},\,2.2904\times 10^{11},\,2.1263\times 10^{11}\},
\end{align}
and then the damage was computed using the Benasciutti approximation, using the \texttt{Matlab} script in Appendix B.3. of \cite{Hamm06_Thesis}, such that
\begin{align}
d_{B} = 4.0024\times 10^{-12},
\end{align}
which is somewhat off compared to the RFC case; this can be explained by the fact that the damage rate needs to be scaled according to the geometry of the system, which is generally unknown. However, the obtained damage rate can be normalized for use in control purposes; for details see \cite{Hamm_07}. In \cite{Ragan_07}, the RFC method is compared with the spectral method using Dirlik's formula (which approximates the rainflow density, see \cite{Dirlik_85}) for fatigue analysis of several wind turbine components, where it is concluded that spectral methods work very well in some cases but rather poorly in others, due to the narrow-band assumption. However, spectral methods do have the advantage of conveniently relying on spectral information that is easier to estimate from limited data.
\subsection{Stochastic Methods}
In \cite{Sobczyk_87}, a thorough survey of stochastic methods for fatigue estimation in materials is presented, including reliability-inspired approaches, evolutionary probabilistic approaches and models for random fatigue crack growth. Modeling fatigue as a stochastic process makes sense due to the random nature of fatigue, which becomes more obvious under time-varying random loading.
Due to the broadness of this class of methods, we will focus on one example of the evolutionary approach. Following \cite{Sobczyk_87}, we introduce the hypothesis that the process is Markovian, such that future outcomes depend only on present information, disregarding the past. This way, we have a random process with only forward transitions,
\begin{align}
E_{0} \rightarrow E_{1} \rightarrow \cdots \rightarrow E_{k} \rightarrow E_{k+1} \cdots \rightarrow E_{n}=E^{*},
\label{Eq:fatigueMC}
\end{align}
where $E_{0}$ denotes a damage-free state and $E^{*}$ characterizes the ultimate damage or destruction. Letting $P_{k}(t)$ be the probability that the specimen at time $t$ is in state $E_{k}$ (notice that the state transitions are discrete, while the time evolution is continuous), we obtain the following system of differential equations
\begin{align}
\frac{dP_{0}(t)}{dt} &= -q_{0}P_{0}(t) \nonumber\\
\frac{dP_{k}(t)}{dt} &= -q_{k}P_{k}(t) + q_{k-1}P_{k-1}(t), \hspace{12pt} k \geq 1,
\end{align}
or, in vector notation with $P(t)=\left(P_{0}(t),P_{1}(t),\ldots\right)^{\top}$,
\begin{align}
\frac{dP(t)}{dt} &= Q P(t),
\label{Mchain}
\end{align}
which corresponds to a Markov chain (MC) with intensity or transition matrix $Q$. Markov chains are well studied and have been successfully used in control settings; however, a shortcoming of this approach is that the intensity matrix $Q$ is generally not known. It could be assumed that the intensities are obtained from physical experiments, but this would correspond to a certain load; so, if the load changes, the parameters will change as well. However, the elements of $Q$ could be identified, using for instance recursive maximum likelihood identification methods, in order to capture the shifts in the load introduced by the controller.
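As an illustration of the forward equations, the sketch below integrates a pure-birth damage chain with explicit Euler; the intensities are illustrative assumptions, not identified from data.

```python
# Hedged sketch: explicit-Euler integration of the forward equations of
# a pure-birth damage chain E_0 -> E_1 -> ... -> E_n. The intensities
# q_k are illustrative assumptions, not identified from experiments.

def damage_chain_probs(q, t_end, steps):
    """Return [P_0(t_end), ..., P_n(t_end)] for
    dP_0/dt = -q_0 P_0,  dP_k/dt = -q_k P_k + q_{k-1} P_{k-1},
    with the final state E_n (ultimate damage) absorbing."""
    n = len(q)                  # intensities out of states E_0..E_{n-1}
    P = [1.0] + [0.0] * n       # start damage-free in E_0
    dt = t_end / steps
    for _ in range(steps):
        dP = [0.0] * (n + 1)
        dP[0] = -q[0] * P[0]
        for k in range(1, n):
            dP[k] = -q[k] * P[k] + q[k - 1] * P[k - 1]
        dP[n] = q[n - 1] * P[n - 1]   # absorbing final state
        P = [p + dt * d for p, d in zip(P, dP)]
    return P
```

For a single transition with intensity $q_0$, the occupation probability of $E_0$ decays approximately as $e^{-q_0 t}$, which the Euler scheme reproduces for a small enough step.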
For the sake of comparison, we make use of the equivalence in \cite{rychlik_2}, where a method to convert a rainflow matrix into a Markov matrix is presented. As an example, we take the rainflow matrix depicted in Figure \ref{fig:Ex1_RFC2}, use the \texttt{WAFO} toolbox to convert it into a Markov matrix, and obtain its corresponding intensity matrix $Q$. Additionally, the MC is simulated for as many steps as the length of the turning-point sequence of the RFC algorithm, such that the instantaneous damage can be reconstructed at the appropriate time instances. The simulation of the MC is presented in Figure \ref{fig:MCsimulation}, where the size of the MC corresponds to the number of bins of the RFM.
\begin{figure}
\caption{Markov chain simulation, using the WAFO toolbox \cite{WAFO}.}
\label{fig:MCsimulation}
\end{figure}
Then, the damage evolution is scaled according to the RFM amplitudes, and afterwards the Palmgren-Miner rule is used. One possible realization is compared against the RFC method in Figure \ref{fig:RFCvsMC}. Note that many realizations of the damage evolution are possible, since the MC in \eqref{Mchain} is governed by probabilities.
\begin{figure}
\caption{RFC versus Markov chain method damage comparison.}
\label{fig:RFCvsMC}
\end{figure}
\subsection{Hysteresis Operator}
As mentioned in \cite{downing1982} and \cite{fatiguelee_2005}, the purpose of the RFC method is to identify the closed hysteresis loops in the stress and strain signals. In \cite{tchankov1998}, an incremental method for the calculation of dissipated energy under random loading is presented, where the dissipated hysteresis energy to failure is used as the fatigue life parameter; the physical interpretation is that as some of the energy is dissipated, certain damage is introduced to a material or structure.
In \cite{BroSpre_96}, an equivalence between symmetric RFC and a Preisach hysteresis operator is provided. This is a very useful result, since it gives the opportunity to incorporate the fatigue estimation online in the control loop. Additionally, this method is strongly related to the physical behavior of the damaging process, as explained in \cite{BroDreKre_96}. If one associates values to individual cycles or hysteresis loops, it is implicitly assumed that the underlying process is rate independent, meaning that only the loops themselves are important, not the speed with which they are traversed; in other words, what causes the damage is the cycle amplitude and not how fast it occurs. Rate independent processes are mathematically formalized as hysteresis operators; see \cite{KrasPok_89}, \cite{Mayergoyz_91}, \cite{BroSpre_96}.
The aforementioned equivalence in \cite{BroSpre_96} between symmetric rainflow counting (RFC) and a type of Preisach operator is given as
\begin{align}
D_{ac}(s)=\sum_{\mu<\tau}\frac{c(s)(\mu,\tau)}{N(\mu,\tau)}=\text{Var}(\mathcal{W}(s)),
\label{Cor2_13}
\end{align}
where the left-hand side corresponds to the damage given by the RFC with $c(s)(\mu,\tau)$ being the rainflow count associated with a fixed string $s=(v_{0},\cdots,v_{N})$, counting between the values of $\mu$ and $\tau$, and $N(\mu,\tau)$ denotes the number of times a repetition of the input cycle $(\mu,\tau)$ leads to failure.
The right-hand side of \eqref{Cor2_13} is the variation of a special hysteresis operator, namely the Preisach operator defined as,
\begin{align}
\mathcal{W}(s) = \int_{\mu<\tau}\rho(\mu,\tau) \mathcal{R}_{\mu,\tau}(s)d\mu d\tau,
\label{preisach_op}
\end{align}
with density function $\rho(\mu,\tau)$, interpreted as a gain that changes with the different values of $\mu$ and $\tau$, being a function of $N(\mu,\tau)$. To interpret the right-hand side of \eqref{Cor2_13}, we need to introduce the relay operator $\mathcal{R}_{\mu,\tau}(s) = \mathcal{R}_{\mu,\tau}(v_{0},\cdots,v_{N})=(w_{0},\cdots,w_{N})$,
where its output is given by
\begin{align}
w_{i} = \left\{
\begin{array}{l l}
1, & \quad v_{i}\geq \tau,\\
0, & \quad v_{i}\leq \mu,\\
w_{i-1}, & \quad \mu<v_{i}<\tau.
\end{array} \right.
\end{align}
with $\mu<\tau$ and $w_{-1} \in \{0,1\}$ given. The relevant threshold values for the relays $\mathcal{R}_{\mu,\tau}$ in the Preisach operator $\mathcal{W}(s)$ then lie within the triangle
\begin{align}
P=\left\{(\mu,\tau)\in\mathbb{R}^{2},-M\leq \mu\leq \tau\leq M\right\},
\label{prei_plane}
\end{align}
known as the Preisach plane. The variation operator $\text{Var}(\cdot)$ is a counting element defined as
\begin{align}
\text{Var}(s)=\sum^{N-1}_{i=0}\left|v_{i+1}-v_{i}\right|
\end{align}
for an arbitrary input sequence $s=(v_{0},\cdots,v_{N})$; essentially, $\text{Var}(\mathcal{W}(s))$ represents the counting between the thresholds $\mu$ and $\tau$, weighted by a certain gain $\rho$. Notice as well that the integration domain of the Preisach operator in \eqref{preisach_op} is congruent with the RFM being upper triangular.
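The relay and variation operators are simple to implement; the sketch below applies $\text{Var}(\cdot)$ to a weighted parallel connection of relays as a coarse stand-in for the Preisach operator $\mathcal{W}$, with illustrative thresholds and weights.

```python
# Hedged sketch: the relay operator R_{mu,tau} and the variation Var(.)
# from the text, plus Var of a weighted parallel relay bank as a coarse
# stand-in for the Preisach operator. Thresholds/weights are illustrative.

def relay(s, mu, tau, w_init=0):
    """Relay output: 1 once v >= tau, 0 once v <= mu, memory in between."""
    w, out = w_init, []
    for v in s:
        if v >= tau:
            w = 1
        elif v <= mu:
            w = 0
        out.append(w)
    return out

def variation(s):
    """Var(s) = sum_i |v_{i+1} - v_i| (the counting element)."""
    return sum(abs(s[i + 1] - s[i]) for i in range(len(s) - 1))

def damage_relay_bank(s, relays):
    """D_ac(s) = Var(H(s)) with H(s) = sum_i nu_i * R_{mu_i,tau_i}(s),
    for relays given as (mu_i, tau_i, nu_i) triples."""
    outs = [[nu * w for w in relay(s, mu, tau)] for mu, tau, nu in relays]
    return variation([sum(col) for col in zip(*outs)])
```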
In order to apply this fatigue estimation method to the previous example, the Preisach operator $\mathcal{W}(s)$ was approximated as a parallel connection of three relay operators
\begin{align}
\mathcal{H}(s)=\sum_{i}\nu(\mu_{i},\tau_{i})\mathcal{R}_{\mu_{i},\tau_{i}}(s),
\end{align}
for $i\in\{1,2,3\}$. The thresholds were set to $(\mu_{1},\tau_{1})=(-0.66M,0.66M)$, $(\mu_{2},\tau_{2})=(0.66M,0.66M)$ and $(\mu_{3},\tau_{3})=(-0.66M,-0.66M)$, corresponding to a uniform discretization, where $M$ is the bound of the Preisach plane in \eqref{prei_plane}, calculated as $M=\max\left\{\left|\min\left\{s\right\}\right|,\left|\max\left\{s\right\}\right|\right\}$. The initial conditions of the relays were given according to the following condition:
\begin{align}
w_{-1}(\mu_{i},\tau_{i}) = \left\{
\begin{array}{l l}
1, & \quad \mu_{i}+\tau_{i}<0, \\
0, & \quad \mu_{i}+\tau_{i}\geq 0.
\end{array} \right.
\end{align}
Lastly, since the Preisach density function $\rho(\mu,\tau)$, captured by the weightings $\nu(\mu_{i},\tau_{i})$ on each relay, is unknown, the individual weightings were normalized such that $\nu_{1}=\alpha$, $\nu_{2}=\alpha^2$, $\nu_{3}=\alpha^3$ with $\nu_{1}+\nu_{2}+\nu_{3}=1$. Thus, the accumulated damage can be written in closed form as
\begin{align}
D_{ac}(s) = \text{Var}\left(\mathcal{H}(s)\right),
\label{eq:EXdamage}
\end{align}
where we let the input signal $s$ be the tower bending moment from the previous examples.
\begin{figure}
\caption{RFC versus Hysteresis method damage comparison.}
\label{fig:RFCvsHyst}
\end{figure}
A comparison between the RFC, using the procedure described before, and the hysteresis method obtained via \eqref{eq:EXdamage} is shown in Figure \ref{fig:RFCvsHyst}. Even though the magnitude of the damage given by the hysteresis method is off scale, this could be resolved by identifying the Preisach density; see \cite{Kris_01} for an identification procedure and a summary of other identification methods.
It is worth mentioning that the result in \eqref{Cor2_13} applies to symmetric RFC. As mentioned in \cite{BroDreKre_96}, not all RFC methods are symmetric; however, for symmetric RFC the so-called Madelung rules apply, i.e., deletion pairs commute, meaning that the order in which the sequences are deleted does not matter. However, if the primary concern is to apply this technique online, no deletion is actually possible, since the estimation is done directly on measurements.
\subsection{Crack Growth approaches}
Another alternative for fatigue estimation is the crack growth approach, which can be addressed either from a deterministic viewpoint using Paris' law \cite{paris63}, or from a stochastic perspective using, for example, jump processes, diffusion processes, or stochastic differential equations (SDEs). However, the crack growth approach takes a microscopic-scale perspective, making it difficult to transport to the system level. We refer the interested reader to \cite{Sobczyk_FCG}, \cite{fatemi_98} and the references therein.
\section{Methods Comparison and Discussion}
The aforementioned fatigue estimation methods share certain relationships. Firstly, there is an equivalence between the rainflow matrix and the Markov matrix, or intensity matrix, of the Markov chain. Moreover, both have zeros below the diagonal, which is also the case for the Preisach plane $P$ in the hysteresis method. The spectral methods are related to RFC, since their intention is to approximate the rainflow density by spectral formulas, and they also relate to the stochastic methods in that their goal is to approximate a certain density function. The hysteresis method is strongly related to the RFC, since the RFC actually identifies the closed hysteresis loops by counting cycles. A sketch of these relationships is depicted in Figure \ref{fig:ConnectionDiagr}.
\begin{figure}
\caption{Relationship between the compared methods.}
\label{fig:ConnectionDiagr}
\end{figure}
Furthermore, a method comparison summary is shown in Table \ref{tab:AdvDisadv}, where advantages and disadvantages are presented for each of the methods previously introduced.
\begin{table}[htb]
\begin{center} \begin{tabular}{l l l}
\hspace{0pt}\textbf{Method} & \hspace{0pt}\textbf{Advantages} &\hspace{0pt}\textbf{Disadvantages} \\ \hline
Rainflow & Active Standard (ASTM E1049) & Post-processing \\
\;Counting & Widely used & Relies on linear accum. hypothesis \\
& & Algorithmic, very non-linear \\ \hline
Spectral & Can be used for control & Black-box \\
& Based on statistical measures & Narrow-band approximation \\ \hline
Stochastic & Account for random loading & Parameters generally unknown \\
\;Methods & Could be used for prediction & May involve PDEs, SDEs \\
& & Very abstract formulation \\ \hline
Hysteresis & Online estimation & Typically hard control problem \\
& Strong physical interpretation & Density generally unknown \\
& Closed mathematical form & Approximation may be needed \\
& & \\
\end{tabular}
\caption{Methods advantages and disadvantages.}
\label{tab:AdvDisadv}
\end{center}\end{table}
For the next part of the comparison we focus on the MC alone instead of the whole class of stochastic methods, which is quite broad. The accumulated damage provided by the RFC, MC and hysteresis methods is compared in Figure \ref{fig:RFCHystMC}. The damage given by the hysteresis method was normalized so that it matches the accumulated damage of the RFC. The spectral method example could not be included, since that method delivers the damage rate itself and not instantaneous measurements. For the RFC and the MC method presented here, the instantaneous damage is given every time an extremum occurs and is zero elsewhere, which is exactly what the hysteresis does, i.e., it holds the value between certain thresholds. All these techniques can be used as post-processing tools; however, not all of them can be used in the control loop. A brief summary is presented in Table \ref{tab:Control}, which reports whether each method can be implemented directly online or only indirectly, i.e., not using measurements. The spectral methods are marked as indirect, since they were included in the loop through transfer functions and not based on measurements. The Markov chain could be included online if the intensity matrix is parametrized with respect to the controls, which may not be realizable.
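The normalization used in this comparison amounts to rescaling the hysteresis damage series so that its final accumulated value coincides with that of the RFC. A minimal sketch with made-up damage increments:

```python
import numpy as np

def normalize_to_reference(damage, reference):
    """Scale an accumulated-damage series so its final value matches the reference."""
    return damage * (reference[-1] / damage[-1])

# Made-up per-extremum damage increments, accumulated over time.
rfc = np.cumsum([0.1, 0.0, 0.3, 0.2])
hyst = np.cumsum([0.4, 0.1, 0.9, 0.6])
hyst_n = normalize_to_reference(hyst, rfc)
```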
\begin{table}[htb]
\begin{center} \begin{tabular}{l c c l}
\textbf{Method} & \textbf{Online} &\textbf{Indirect} &\textbf{Comments} \\ \hline
RFC & - & - & Only Post-processing \\ \hline
Spectral & - & X & Moments obtained by transfer function \\ \hline
Hysteresis & X & - & Approximation may be needed \\
& & & \\
\end{tabular}
\caption{Methods applicability for control.}
\label{tab:Control}
\end{center}\end{table}
\begin{figure}
\caption{Normalized accumulated damage for different estimation methods.}
\label{fig:RFCHystMC}
\end{figure}
\section{Conclusions}
The literature regarding fatigue estimation methods is vast, since fatigue is an entire discipline by itself. The aim of the present paper is to provide a guide to the most recognized methods, which were assembled into four groups. These methods were presented and compared from a control perspective in a wind turbine setting, by estimating the damage from a tower bending moment time series. A chart describing their advantages and disadvantages is presented in Table \ref{tab:AdvDisadv}, and their applicability to control in Table \ref{tab:Control}. We also attempted to shed some light on the underlying relations between them.
Summarizing, the most widely used and standardized method is the RFC, but its algorithmic nature restricts its usage primarily to post-processing. The spectral methods provide an alternative by trying to emulate the rainflow density function; they are based on statistical measures that are easier to calculate, but they are black-box and restricted (mainly) to narrow-band processes. The stochastic methods can accommodate the randomness of fatigue, but their construction is abstract and complicated, often involving stochastic or partial differential equations, and their parameters may need identification. The hysteresis method can be implemented online, acting on instantaneous measurements, but its complex and non-linear nature results in hard control problems. In general, one could say that the controller will influence the loading in the wind turbine components, and thus, for implementing any of these techniques in the control loop, the variable loading should be taken into account by the estimation method in some sense.
\end{document}
\begin{document}
\title{Splitting of separatrices for the Hamiltonian-Hopf bifurcation
with the Swift-Hohenberg equation as an example}
\begin{abstract}
We study homoclinic orbits of the Swift-Hohenberg equation near a Hamiltonian-Hopf bifurcation.
It is well known that in this case the normal form of the equation is integrable at all orders.
Therefore the difference between the stable and unstable manifolds is exponentially small, and
the study requires a method capable of detecting phenomena beyond all algebraic orders provided by normal form theory.
We propose an asymptotic expansion for a homoclinic invariant which quantitatively describes the
transversality of the invariant manifolds. We perform high-precision numerical experiments to support
the validity of the asymptotic expansion and evaluate a Stokes constant numerically using two independent methods.
\end{abstract}
\section{The generalized Swift-Hohenberg equation}
The generalized Swift-Hohenberg equation (GSHE),
\begin{equation}\label{4:GSHE}
u_t = \epsilon u + \kappa u^2-u^3-(1+\Delta)^2 u
\end{equation}
is widely used to model nonlinear phenomena
in various areas of modern physics, including hydrodynamics, pattern formation
and nonlinear optics (e.g. \cite{BurkeK2006,HaragusS2007}).
This equation (with $\kappa=0$) was originally introduced
by Swift and Hohenberg~\cite{JSPH:77}
in a study of thermal fluctuations in a convective instability.
In the following we consider $u$ to be one-dimensional
and study stationary solutions of \eqref{4:GSHE}, which satisfy the ordinary differential equation
\begin{equation}\label{4:SHE}
\epsilon u + \kappa u^2-u^3-(1+\partial^2_x)^2 u = 0\,.
\end{equation}
Obviously this equation has a reversible symmetry (if $u(x)$ satisfies the equation,
then $u(-x)$ also does). It is well known that for small
negative $\epsilon$ this equation has two symmetric
homoclinic solutions~\cite{GL:95}, similar to the ones shown in Figure~\ref{4:homoclinicorbits1}.
In this paper we study the transversality of the homoclinic solutions,
which implies the existence of multi-pulse homoclinic solutions and
small-scale chaos.
\begin{figure}
\caption{\small Two primary symmetric homoclinic solutions
of the scalar stationary GSHE ($\epsilon=-0.05$ and $\kappa=2$).}
\label{4:homoclinicorbits1}
\end{figure}
In order to describe the homoclinic phenomena it is convenient to rewrite
the equation \eqref{4:SHE} in the form of an equivalent Hamiltonian system~\cite{BelyakovGL1997,LLLB:04}:
\begin{align}\label{4:HE}
\dot{q_1}&=q_2 & \dot{p_1}&=p_2-\epsilon q_1 - \kappa q_1^2 + q_1^3 \\
\notag \dot{q_2}&=p_2-q_1 & \dot{p_2}&=-p_1\,,
\end{align}
where the variables are defined by the following equalities
\begin{equation}\label{4:changeofvariables}
u=q_1,\quad u' = q_2,\quad -(u'+u''')=p_1 \quad \mathrm{and}\quad u+u''=p_2
\end{equation}
and the Hamiltonian function has the form
\begin{equation}\label{4:H}
H_\epsilon=p_1 q_2 - p_2 q_1 + \frac{p_2^2}{2}+\epsilon \frac{q_1^2}{2}+\kappa \frac{q_1^3}{3}-\frac{q_1^4}{4}.
\end{equation}
The system \eqref{4:HE} is reversible with respect to the involution,
\begin{equation*}
S: (q_1,q_2,p_1,p_2) \rightarrow (q_1,-q_2,-p_1,p_2).
\end{equation*}
The origin is an equilibrium of the system and the eigenvalues of the linearized vector field are
\begin{equation*}
\left\{\pm\sqrt{-1+\sqrt{\epsilon}},\,\pm\sqrt{-1-\sqrt{\epsilon}}\right\}\,.
\end{equation*}
If $\epsilon<0$, the eigenvalues form a quadruple $\pm \beta_\epsilon\pm i\alpha_\epsilon$ where
\begin{eqnarray*}
\beta_\epsilon&=&\frac{\sqrt{2\sqrt{1-\epsilon}-2}}{2}=\sqrt{-\frac{\epsilon}{4}}\,(1+O(\epsilon))\,,\\
\alpha_\epsilon&=&\frac{\sqrt{2\sqrt{1-\epsilon}+2}}{2}=1+O(\epsilon)\,.
\end{eqnarray*}
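As a quick numerical sanity check (not part of the analysis below), the linearization of \eqref{4:HE} at the origin can be diagonalized for a sample value of $\epsilon$ and compared with the formulas above:

```python
import numpy as np

def linearization(eps):
    """Jacobian of system (4:HE) at the origin, variables ordered (q1, q2, p1, p2)."""
    return np.array([[0.0, 1.0, 0.0, 0.0],
                     [-1.0, 0.0, 0.0, 1.0],
                     [-eps, 0.0, 0.0, 1.0],
                     [0.0, 0.0, -1.0, 0.0]])

eps = -0.05  # sample small negative value
eigs = np.linalg.eigvals(linearization(eps))
beta = np.sqrt(2.0 * np.sqrt(1.0 - eps) - 2.0) / 2.0
alpha = np.sqrt(2.0 * np.sqrt(1.0 - eps) + 2.0) / 2.0
# every eigenvalue should equal +/- beta +/- i alpha
```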
At $\epsilon=0$ the eigenvalues collide forming two purely imaginary eigenvalues~$\pm i$ of multiplicity two. Moreover,
the corresponding linearization of the vector field is not semisimple.
Thus, the equilibrium point of system \eqref{4:HE} undergoes
a Hamiltonian-Hopf bifurcation described in the book \cite{vdMeer1985} (see also \cite{Sok}).
In general position there are two possible scenarios for the bifurcation, depending on the sign of a certain
coefficient of the normal form. In the Swift-Hohenberg equation both scenarios are possible,
depending on the value of the parameter $\kappa$. In this paper we consider the case when the equilibrium is stable at
the moment of the bifurcation (see \cite{Tre,LM} for more details),
which corresponds to $|\kappa|>\sqrt{\frac{27}{38}}$, as shown in~\cite{BelyakovGL1997}. Also note that the degenerate case $|\kappa|=\sqrt{\frac{27}{38}}$ leads to some interesting phenomena, including
``homoclinic snaking'' \cite{WoodC99,KnoblochW2008,CK:09}.
When $\epsilon<0$ is small, the equilibrium is a saddle-focus and the Stable Manifold Theorem
implies the existence of two-dimensional stable $\mathbf{W}_\epsilon^{s}$ and unstable $\mathbf{W}_\epsilon^{u}$
manifolds for the equilibrium point. These manifolds are
contained inside the zero energy level of the Hamiltonian $H_\epsilon$.
The original Hamiltonian \eqref{4:H} can be seen as a perturbation of an integrable Hamiltonian
which can be derived from the normal form theory (see section \ref{Se:nf} for details). Since the normal form is integrable,
its stable and unstable manifolds coincide
(see also discussion in \cite{GIMP:93} for the reversible set up).
In \cite{GL:95}, Glebsky and Lerman used the implicit function theorem
to prove the existence of two reversible (symmetric) homoclinic orbits for the original
system \eqref{4:HE} when $\epsilon<0$ is small.
As a matter of fact, this result follows from a more general
study concerning the 1:1 resonance in four-dimensional reversible vector fields
(see \cite{GIMP:93}). The paper \cite{GL:95} also conjectures that
the stable and unstable manifolds should intersect transversely, yielding,
in particular, the existence of countably many reversible homoclinic orbits.
These orbits are known as \textit{multisolitons} for the Swift-Hohenberg equation
and they have been the subject of study in several works (see \cite{ARC:98} and the references therein).
Note that no conclusion about the transversality of stable and unstable manifolds can be made using only the normal form theory.
In this paper we study the splitting of the stable and unstable
manifolds which happens beyond all orders of the normal
form theory. Let $\mathbf{p}_\epsilon$ be a symmetric homoclinic point
belonging to one of the two primary symmetric homoclinic orbits.
In section \ref{Se:hi} we propose a natural way to select vectors $v^{u,s}_\epsilon$ tangent to
$\mathbf{W}_\epsilon^{s}$ and $\mathbf{W}_\epsilon^{u}$
at $\mathbf{p}_\epsilon$ (see equation (\ref{Eq:vuvs})). The main goal of this paper
is to establish the following asymptotic formula for the value of the standard symplectic
form on this pair of vectors:
\begin{equation}\label{4:Asymptoticformulahomoclinic}
\Omega(v^u_\epsilon,v^s_\epsilon)=e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}
\left(\omega_0(\kappa)+O(\epsilon)\right)\,.
\end{equation}
Note that $\mathbf{W}_\epsilon^{s}$ and $\mathbf{W}_\epsilon^{u}$ are two dimensional Lagrangian manifolds confined inside
the three dimensional energy level $\{H_\epsilon=0\}$.
These manifolds intersect along homoclinic orbits. Their intersection along the orbit of $\mathbf{p}_\epsilon$ is transverse
(inside the energy level) provided $\Omega(v^u_\epsilon,v^s_\epsilon)\neq 0$. If $\omega_0(\kappa)\ne0$,
the asymptotic formula (\ref{4:Asymptoticformulahomoclinic}) implies the transversality of the homoclinic orbit for small negative $\epsilon$,
and therefore $\omega_0(\kappa)$ is known as the \textit{splitting coefficient}.
We stress that the derivation of formula (\ref{4:Asymptoticformulahomoclinic}) does not
rely substantially on the specific form of the Swift-Hohenberg equation and exactly the same asymptotic
expression (only the splitting coefficient may take different values)
can be deduced {\em for a generic analytic family of reversible Hamiltonian systems undergoing a subcritical Hamiltonian-Hopf
bifurcation}. The details can be found in \cite{JP10}, where the majority of the arguments presented in this paper have been transformed into rigorous mathematical proofs.
In this paper, the derivation of formula \eqref{4:Asymptoticformulahomoclinic} is not rigorous,
as it is based on numerous estimates and assumptions which are not proved here.
Nevertheless, similar statements have been proved in similar contexts for
other problems \cite{G99,VGVL:01}.
To support the validity of formula \eqref{4:Asymptoticformulahomoclinic}, we perform a set of numerical experiments and compute the splitting coefficient using two distinct methods. This coefficient is related to a purely imaginary Stokes constant, and
Figure~\ref{4:figStokesbeta} gives an idea of its behaviour
as a function of the parameter $\kappa$. The value of the Stokes constant comes from the study of the Hamiltonian \eqref{4:H}
at the exact moment of the bifurcation (i.e. at $\epsilon=0$). We discuss the relevant definitions in section~\ref{Se:Stokes}
and some methods for its numerical evaluation in section~\ref{Se:num}.
\begin{figure}
\caption{Graph of the function $\mathrm{Im}\,\Theta_0(\kappa)$.}
\label{4:figStokesbeta}
\end{figure}
Recently, S. J. Chapman and G. Kozyreff \cite{CK:09} used multiple-scales analysis beyond all orders
to study localised patterns which emerge from a subcritical modulation instability in the Swift-Hohenberg equation.
Their analysis captured exponentially small phenomena by means of optimal truncation of certain formal expansions,
combined with a study of their analytic continuation in a vicinity of the Stokes lines.
Technically our approach is different, and we do not require higher order terms; additionally, our
approach has the advantage of being directly applicable to the study of the exponentially small splitting of invariant manifolds
near generic Hamiltonian-Hopf bifurcations, of which the Swift-Hohenberg equation is a particular example.
The rest of the paper is organised in the following way. In Section~\ref{Se:hi}
we discuss the definition of a homoclinic invariant which provides a very convenient
tool for measuring the splitting of invariant manifolds. In Section~\ref{Se:nf}
we review some facts from normal form theory which will be necessary
for the exposition of our results. Section~\ref{Se:Stokes} contains the definition of the Stokes constant. An informal derivation of the asymptotic formula \eqref{4:Asymptoticformulahomoclinic},
which describes the splitting of invariant manifolds of the
stationary Swift-Hohenberg equation near the Hamiltonian-Hopf bifurcation, is given in Section~\ref{Se:asymp}.
As the derivation of the asymptotic formula is not rigorous
we perform a set of high-precision numerical experiments
in order to confirm its validity. Moreover, as in
many other problems which involve exponentially small splitting of
invariant manifolds \cite{VGVL:01}, the asymptotic formula contains a splitting
coefficient which comes from an auxiliary problem and requires numerical evaluation.
The results of our numerical experiments are reported in Sections~\ref{Se:num} and~\ref{Se:num3}.
\subsection{Homoclinic invariant\label{Se:hi}}
In the study of homoclinic trajectories, both numerical and analytical,
it is usually important to have a convenient basis of the tangent
spaces to the stable and unstable manifolds. Below we provide a definition
adapted to our problem. This definition may be of independent interest,
as it can easily be extended to hyperbolic equilibria of higher-dimensional
systems (not necessarily Hamiltonian).
Suppose that the origin is an equilibrium of a Hamiltonian vector field $X_H$
and that $\pm\beta\pm i\alpha$ are the eigenvalues of $DX_H(0)$.
Then the origin has a two-dimensional stable manifold. According to Hartman~\cite{MR0141856}, the restriction of
the vector field to $W^s_{loc}$ can be linearised by a $C^1$ change of variables.
In polar coordinates the linearised dynamics on $W^s_{loc}$ takes the form:
$$
\dot r=-\beta r\,\qquad \dot\varphi=\alpha\,.
$$
It is convenient to introduce $z=-\ln r$ so that
$$
\dot z=\beta\,.
$$
Then the local stable manifold is the image of a function
$$
\Gamma^s:\{(\varphi,z):\varphi\in S^1,z>-\log r_0\}\to\mathbb R^4
$$
where $r_0$ is the radius of the linearisation domain and $S^1$ is the unit circle.
Since $\Gamma^s$ maps trajectories into trajectories we can propagate it uniquely
along the trajectories of the Hamiltonian system using the property
\begin{equation}\label{Eq:Gammau}
\Gamma^s(\varphi+\alpha t,z+\beta t)=\Phi^t_H\circ \Gamma^s(\varphi,z)
\end{equation}
where $\Phi^t_H$ is the flow defined by the Hamiltonian equation.
Note that
$$
\Gamma^s(\varphi+2\pi,z)=\Gamma^s(\varphi,z)
$$
since $\varphi$ is the angle component of the polar coordinates.
Moreover,
$$
\lim_{z\to+\infty}\Gamma^s(\varphi,z)=0\,.
$$
Differentiating $\Gamma^s$ along a trajectory we see that it
satisfies the non-linear PDE:
\begin{equation}\label{Eq:mainPDE}
\alpha\partial_\varphi\Gamma+\beta\partial_z\Gamma=X_H(\Gamma)\,.
\end{equation}
Each of the derivatives $\partial_z\Gamma^s$ and $\partial_\varphi\Gamma^s$ defines
a vector field on $W^s$.
The equation (\ref{Eq:Gammau}) implies that $\partial_z\Gamma^s$ and $\partial_\varphi\Gamma^s$
are invariant under the restriction of the flow $\Phi^t_H\Bigr|_{W^s}$.
We can define $\Gamma^u$ applying the same arguments to the Hamiltonian $-H$. In this case it is convenient
to set $z=\ln r$ to ensure that $\Gamma^u$ satisfies the same PDE as $\Gamma^s$.
In a reversible system with a reversing involution $S$, it is convenient to
set
\begin{equation}\label{Eq:unstablemfd}
\Gamma^u(\varphi,z)=S\circ\Gamma^s(-\varphi,-z).
\end{equation}
Now suppose that the system has a homoclinic trajectory $\gamma_h$. Let us choose a point $\mathbf{p}_h\in\gamma_h$.
The freedom in the definition allows us
to assume that $\mathbf{p}_h=\Gamma^s(0,0)=\Gamma^u(0,0)$ without loss of generality.
This condition completely eliminates the freedom from the definitions of $\Gamma^u$ and $\Gamma^s$.
In a Hamiltonian system the symplectic form provides a natural tool for studying transversality
of invariant manifolds. Thus we arrive at the following,
\begin{definition}[Homoclinic Invariant]The homoclinic invariant $\omega$ is defined by the formula,
\begin{equation}\label{Eq:hominv}
\omega=\Omega(\partial_\varphi\Gamma^u(0,0),\partial_\varphi\Gamma^s(0,0))\,.
\end{equation}
\end{definition}
This definition is a natural extension of the homoclinic invariant defined for homoclinic
orbits of area-preserving maps~\cite{VGVL:01}.
In the left hand side of the asymptotic formula (\ref{4:Asymptoticformulahomoclinic})
we use the notation
\begin{equation}\label{Eq:vuvs}
v_\epsilon^{u,s}=\partial_\varphi\Gamma^{u,s}(0,0).
\end{equation}
It is easy to see that $\omega$ takes the same
value for all points of the homoclinic trajectory
$\gamma_h=\{\Phi^t_H(\mathbf{p}_h):t\in\mathbb R\}$. Indeed it follows from \eqref{Eq:Gammau} that
$$
\partial_\varphi\Gamma^s(\alpha t,\beta t)=D\Phi^t_H(\mathbf{p}_h)\,\partial_\varphi \Gamma^s(0,0),
$$
and a similar identity is valid for the unstable manifold. Since the Hamiltonian flow $\Phi^t_H$ is symplectic,
we conclude that
$\Omega(\partial_\varphi\Gamma^u(\alpha t,\beta t),\partial_\varphi\Gamma^s(\alpha t,\beta t))
=\Omega(\partial_\varphi\Gamma^u(0,0),\partial_\varphi\Gamma^s(0,0))=\omega$.
Since $\Gamma^s$ and $\Gamma^u$ are Lagrangian and belong to the energy level $H=H(0)$, which is three-dimensional,
the inequality $\omega\ne0$ implies the transversality of the homoclinic trajectory.
Indeed, if $\omega\ne0$, the vectors $\partial_\varphi\Gamma^u(0,0)$, $\partial_\varphi\Gamma^s(0,0)$
and $X_H(\mathbf{p}_h)$ are linearly independent and therefore span the tangent space to
the energy level at $\mathbf{p}_h$.
We note that we can define two vectors tangent to $W^s$ and
another two vectors tangent to $W^u$ at $\mathbf{p}_h\in W^s\cap W^u$.
So we could use
\begin{equation}\label{Eq:xy}
\omega_{x,y}:=\Omega(\partial_x\Gamma^u(0,0),\partial_y\Gamma^s(0,0)),\quad x,y\in\left\{\varphi,z\right\}
\end{equation}
instead of $\omega$. But these invariants are not independent. Indeed,
$$
\alpha\partial_\varphi\Gamma^u(0,0)+\beta\partial_z\Gamma^u(0,0)=\alpha\partial_\varphi\Gamma^s(0,0)+\beta\partial_z\Gamma^s(0,0)
$$
as both expressions are equal to $X_H(\mathbf{p}_h)$. Then
equation (\ref{Eq:xy}) implies
$$
\alpha^2\omega-\beta^2\omega_{z,z}=0,\quad \alpha\omega+\beta\omega_{\varphi,z}=0,\quad \alpha\omega+\beta\omega_{z,\varphi}=0.
$$
In the derivation of these identities it is also necessary to take into account that
$W^{u,s}$ are Lagrangian (i.e., the symplectic form $\Omega$ vanishes on their tangent spaces).
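For instance, the identity $\alpha\omega+\beta\omega_{z,\varphi}=0$ follows in one line by pairing $X_H(\mathbf{p}_h)$ with $\partial_\varphi\Gamma^s(0,0)$:
$$
\alpha\omega+\beta\omega_{z,\varphi}
=\Omega\bigl(\alpha\partial_\varphi\Gamma^u+\beta\partial_z\Gamma^u,\partial_\varphi\Gamma^s\bigr)
=\Omega\bigl(\alpha\partial_\varphi\Gamma^s+\beta\partial_z\Gamma^s,\partial_\varphi\Gamma^s\bigr)
=0\,,
$$
where all derivatives are evaluated at $(0,0)$ and the last equality holds because both arguments of the symplectic form are tangent to the Lagrangian manifold $W^s$.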
In the case of the Swift-Hohenberg equation the system of PDE (\ref{Eq:mainPDE})
can be conveniently replaced by a single scalar PDE of higher order
obtained from (\ref{4:SHE}) by replacing $\partial_x$ with the differential operator
$$
\partial=\alpha_\epsilon\partial_\varphi + \beta_\epsilon\partial_z.
$$
Let us use $u^{\pm}_\epsilon$ to denote the first component of $\mathbf{\Gamma}^u_\epsilon$
and $\mathbf{\Gamma}^s_\epsilon$ respectively, then $u^\pm_\epsilon$ satisfies the equation
\begin{equation}\label{Eq:SHPDE}
(1+\partial^2)^2 u = \epsilon u + \kappa u^2-u^3\,.
\end{equation}
Its other components can be restored using (\ref{4:changeofvariables}).
The Swift-Hohenberg equation is reversible and following \eqref{Eq:unstablemfd} we define
$$
u^{+}_\epsilon(\varphi,z)=u^{-}_\epsilon(-\varphi,-z)\,.
$$
We also assume that $\mathbf{\Gamma}^s_\epsilon(0,0)=\mathbf{\Gamma}^u_\epsilon(0,0)$ is
the primary symmetric homoclinic point. Then the formula
for the homoclinic invariant can be rewritten in terms of $u^{-}$:
\begin{equation}
\omega=2\partial_\varphi\left((u^{-})^2 +u^{-}\partial^2u^{-}\right)
\end{equation}
where the derivatives are evaluated at $(\varphi,z)=(0,0)$.
\subsection{Normal form of the Swift-Hohenberg equation\label{Se:nf}}
The most convenient description of the bifurcation is obtained with the help of the normal form.
As a first step the quadratic part of the Hamiltonian \eqref{4:H}
is normalised with the help of a linear symplectic transformation (similar to \cite{NBRC:74}):
\begin{equation*}
T= \left( \begin{array}{cccc}
0 & -\frac{\sqrt{2}}{4} & -\frac{\sqrt{2}}{2} & 0\\[2pt]
\frac{\sqrt{2}}{4} & 0 & 0 & \frac{\sqrt{2}}{2}\\[2pt]
\sqrt{2} & 0 & 0 & 0\\[2pt]
0 & -\sqrt{2} & 0 & 0
\end{array} \right)
\end{equation*}
which transforms \eqref{4:H} into
\begin{equation}\label{4:H2}
\begin{split}
H_\epsilon=&-(q_2p_1-q_1p_2)+\frac{1}{2}(q_1^2+q_2^2)+\frac{1}{4}p_1^2\epsilon
-\frac{\sqrt{2}}{12}\kappa p_1^3+\frac{1}{4}q_2p_1\epsilon-\frac{\sqrt{2}}{8}\kappa q_2p_1^2+\\
&\frac{1}{16}q_2^2\epsilon-\frac{\sqrt{2}}{16}\kappa q_2^2p_1
-\frac{\sqrt{2}}{96}\kappa q_2^3-\frac{1}{16}p_1^4-\frac{1}{8}q_2p_1^3
-\frac{3}{32}q_2^2p_1^2-\frac{1}{32}q_2^3p_1-\frac{1}{256}q_2^4
\end{split}
\end{equation}
where we keep the same notation for the variables.
Note that the involution $S$ in the new coordinates takes the form
\begin{equation}\label{4:Involution}
\tilde{S}: (q_1,q_2,p_1,p_2) \rightarrow (-q_1,q_2,p_1,-p_2).
\end{equation}
Now, with the quadratic part in normal form, we can apply
the standard normal form procedure to normalize the Hamiltonian \eqref{4:H2}
up to any order: there is a near-identity canonical change of variables $\Psi_n$
which normalizes all terms of order less than or equal to $n$
and transforms the Hamiltonian to the following form:
\begin{equation}\label{4:HN}
H_\epsilon=H_\epsilon^n + \mathrm{higher}\ \mathrm{order}\ \mathrm{terms}
\end{equation}
where
\begin{equation*}
H_\epsilon^n=-I_1 + I_2+\sum_{\substack{3i+2j+2l\geq4\\i+j\geq1}}^n a_{i,j,l}I_1^iI_3^j\epsilon^l
\end{equation*}
with
\begin{equation*}
I_1=q_2p_1-q_1p_2,\qquad I_2=\frac{q_1^2+q_2^2}{2},\qquad I_3=\frac{p_1^2+p_2^2}{2}.
\end{equation*}
This normalization preserves the reversibility with respect to the involution \eqref{4:Involution}.
In the case of the GSHE the normal form up to the order five has the form (see Appendix~\ref{Ap:A}
for more details about the change of variables)
\begin{equation*}
H_\epsilon^5=-I_1 + \left(I_2+\frac{1}{4}\epsilon I_3 + \eta I_3^2\right)
+\left(\frac{1}{8}\epsilon I_1+\mu\, I_1{I_3}\right)\,.
\end{equation*}
The leading part of the normal form includes two parameters which can be explicitly
expressed in terms of the original parameter $\kappa$:
\begin{equation*}
\eta=4\left( {\frac {19}{576}}{\kappa}^{2}-{\frac {3}{128}} \right)\quad \mathrm{and}\quad
\mu=2\left( {\frac {65}{864}}\,{\kappa}^{2} -{\frac {3}{64}}\right)\,.
\end{equation*}
The geometry of the invariant manifolds depends on the sign of $\eta$ \cite{vdMeer1985}.
In the case of GSHE, if
\begin{equation*}
\left|\kappa\right|>\sqrt{\frac{27}{38}},
\end{equation*}
then $\eta>0$ \cite{GL:95}, and the truncated normal form has a continuum of homoclinic orbits
among which exactly two are reversible, i.e., symmetric with respect to the
involution \eqref{4:Involution}.
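The threshold can be checked directly from the expression for $\eta$ given above: a short numerical verification (a sketch, not part of the derivation) confirms that $\eta$ vanishes exactly at $|\kappa|=\sqrt{27/38}$ and is positive beyond it:

```python
import math

def eta(kappa):
    """Normal form coefficient eta = 4 * (19 kappa^2 / 576 - 3 / 128)."""
    return 4.0 * (19.0 / 576.0 * kappa ** 2 - 3.0 / 128.0)

kappa_c = math.sqrt(27.0 / 38.0)  # threshold value of |kappa|
```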
In order to describe the geometry of the invariant manifolds near the bifurcation
it is convenient to introduce the new parameter $\epsilon=-4\delta^2$ and
perform the standard scaling:
\begin{equation*}
q_1=\delta^2Q_1,\quad q_2=\delta^2Q_2,\quad p_1=\delta P_1,\quad p_2=\delta P_2\,.
\end{equation*}
This change of variables is not symplectic; nevertheless, it preserves the form of the Hamiltonian equations.
The symplectic form gains a constant factor $\delta^3$, so we have to multiply the Hamiltonian
by $\delta^{-3}$ in order to return to the standard symplectic form.
The Hamiltonian $H_\epsilon^n$ is transformed into,
\begin{equation*}
h_\delta^n=- \mathcal{I}_1 + \left(\mathcal{I}_2-\mathcal{I}_3+\eta\mathcal{I}_3^2\right)\delta
+ \left(-\frac{1}{2} \mathcal{I}_1+\mu\, \mathcal{I}_1{\mathcal{I}_3}\right)\delta^2+O(\delta^3),
\end{equation*}
where the $\mathcal{I}_i$'s are defined in the same way as the $I_i$'s but in the new variables $Q$ and $P$.
This Hamiltonian system has an equilibrium at the origin characterized by a quadruple of complex eigenvalues
$\pm \beta_{n,\epsilon}\pm i\alpha_{n,\epsilon}$, where
$\alpha_{n,\epsilon}=1+\frac{1}{2}\delta^2+O(\delta^4)$ and $\beta_{n,\epsilon}=\delta-\frac{1}{2}\delta^3+O(\delta^5)$.
The equilibrium has two-dimensional stable and unstable manifolds. Thus, following \eqref{Eq:mainPDE}, we parametrize these manifolds by solutions
of the partial differential equation:
\begin{equation}\label{4:HEN}
\left(\alpha_{n,\epsilon}\partial_\varphi + \beta_{n,\epsilon}\partial_z\right)\mathbf{\Upsilon}_n
=X_{h_\delta^n}(\mathbf{\Upsilon}_n).
\end{equation}
The function $\mathbf{\Upsilon}_n(\varphi,z)$ is real-analytic, converges to zero as $z\to\pm\infty$
and is $2\pi$-periodic in $\varphi$.
Taking into account the rotational symmetry of the normal form Hamiltonian, we can look for the
solution of this equation in the form:
\begin{equation*}
\begin{split}
\mathbf{\Upsilon}_n(\varphi,z)&=\bigl(R_n(z)\cos(\theta_n(\varphi,z)),R_n(z) \sin(\theta_n(\varphi,z)),\\
&\qquad r_n(z)\cos(\theta_n(\varphi,z)),r_n(z)\sin(\theta_n(\varphi,z))\bigr)
\end{split}
\end{equation*}
\normalsize
where $R_n(z)$, $r_n(z)$ and $\theta_n(\varphi,z)$ are real analytic functions.
In particular, for $n=5$ it is not difficult to see that the eigenvalues of $DX_{h_\delta^5}(0)$ are the quadruple $\pm\beta_{5,\epsilon}\pm i\alpha_{5,\epsilon}$, where
$$
\beta_{5,\epsilon}=\delta\,,\qquad \alpha_{5,\epsilon}=1+\frac{\delta^2}{2}\,.
$$
Thus, we get the following system of equations:
\begin{equation*}
\begin{split}
\beta_{5,\epsilon}R_5'=-\delta r_5\left(1-\eta r_5^2\right)
\,,
\qquad
\beta_{5,\epsilon}r_5'=-\delta R_5\,,\qquad
\\
\left(\alpha_{5,\epsilon}\partial_\varphi + \beta_{5,\epsilon}\partial_z\right)
\theta_5=
1+\frac{\delta^2}2(1-\mu r_5^2)\,.
\end{split}
\end{equation*}
From these equations we conclude that
\begin{equation*}
\begin{split}
r_5=\sqrt{\frac{2}{\eta}}\frac1{\cosh z}\,,\qquad
R_5=\sqrt{\frac{2}{\eta}}\frac{\sinh z}{\cosh^2 z},\\
\theta_5=\varphi-\frac{\delta\mu}{2}\int^z r_5^2\,dz
=\varphi-\frac{\delta\mu}{\eta}\frac{\sinh z}{\cosh z}
\,.
\end{split}
\end{equation*}
We see that $(r_5(z),R_5(z))$ runs over a homoclinic loop when $z$ varies
from $-\infty$ to~$+\infty$.
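These closed-form expressions can be verified numerically. After dividing by $\beta_{5,\epsilon}=\delta$, the first two equations read $R_5'=-r_5(1-\eta r_5^2)$ and $r_5'=-R_5$; the sketch below checks both residuals with central finite differences for an illustrative value $\eta=0.7$:

```python
import numpy as np

eta = 0.7           # illustrative positive value of the normal form coefficient
z = np.linspace(-3.0, 3.0, 121)
h = 1e-5            # step for central finite differences

r = lambda t: np.sqrt(2.0 / eta) / np.cosh(t)
R = lambda t: np.sqrt(2.0 / eta) * np.sinh(t) / np.cosh(t) ** 2

# Residuals of R' = -r (1 - eta r^2) and r' = -R; both should vanish.
res1 = (R(z + h) - R(z - h)) / (2 * h) + r(z) * (1 - eta * r(z) ** 2)
res2 = (r(z + h) - r(z - h)) / (2 * h) + R(z)
```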
In general the parameterization $\mathbf{\Upsilon}_n$ is the unique solution of
\eqref{4:HEN} such that $R_n(0)=0$ and $\theta_n(\varphi,0)=\varphi$.
Thus, $\mathbf{\Upsilon}_n(\varphi,z)$ belongs to the symmetry plane associated
with the involution \eqref{4:Involution} if and only if $z=0$ and $\varphi=0$
or $\varphi=\pi$. Therefore, there are exactly two symmetric homoclinic points.
Let us call these homoclinic orbits the primary reversible homoclinic orbits.
\subsection{Stokes constant}\label{Se:Stokes}
In this subsection we define the Stokes constant for the GSHE at $\epsilon=0$.
Although the equilibrium at the origin is not hyperbolic (its eigenvalues are $\pm i$ with multiplicity two), it still has
invariant manifolds \cite{JP10} which can be non-real.
More precisely, we look for complex analytic solutions of the following equation
\begin{equation}\label{Eq:inner}
(1+(\partial_\varphi + \partial_\tau)^2)^2 u = \kappa u^2-u^3\,,
\end{equation}
which decay polynomially in a sectorial neighbourhood of infinity in the $\tau$ variable
and which are $2\pi$-periodic in $\varphi$.
These solutions parametrize a certain complex stable (unstable) invariant manifold of the origin which is immersed in $\mathbb{C}^4$.
In \cite{JP10} it is shown (for similar problems see \cite{VGVL:01,Baldoma06,OliveSS03}) that equation \eqref{Eq:inner} has an analytic solution $u=u_0^-$ with the following asymptotic behaviour:
\begin{equation*}
u^{-}_0(\varphi,\tau)=\frac{P_1(\varphi)}{\tau} +\frac{P_2(\varphi)}{\tau^2}+O(\tau^{-3})
\end{equation*}
in the set
$$\tau\in\mathcal{D}^{-}_{r,\theta_0}=\left\{\tau\::\:\left|\arg(\tau+r)\right|>\theta_0\right\}
\,,$$
where $\theta_0$ is a small fixed constant, $r$ is sufficiently large and
\begin{equation}\label{Eq:p1p2}
P_1=\frac{i\cos \left( \varphi \right)}{\sqrt {\eta}}\,,
\qquad
P_2=
\frac{i}{\sqrt {\eta}} \left( \frac{\mu}{{\eta}}+\frac{1}{2}\right)\sin(\varphi)-{\frac {\kappa\,\cos \left( 2\,\varphi \right) }{18\eta}}-{
\frac {\kappa}{2\eta}}\,.
\end{equation}
The function $u^-_0$ is $2\pi$-periodic in $\varphi$. More generally it is possible to prove (see \cite{JP10}) that there exist unique trigonometric polynomials $P_k$ for $k\geq 3$ of degree $k$ satisfying $P_k(\varphi)=(-1)^k\overline{P_k(-\overline\varphi)}$ such that $\hat{u}_0(\varphi,\tau):=\sum_{k\geq1}P_k(\varphi)\tau^{-k}$ solves formally equation \eqref{Eq:inner} and moreover,
\begin{equation*}
u^{-}_0(\varphi,\tau)=\sum_{k=1}^{N}P_k(\varphi)\tau^{-k}+O(\tau^{-(N+1)}).
\end{equation*}
Taking into account \eqref{Eq:p1p2} we have that $\hat{u}_0(\varphi,\tau)=\overline{\hat{u}_0(-\overline\varphi,-\overline\tau)}$ and the unique formal solution $\hat{u}_0$ is known as the \textit{formal separatrix}.
Equation \eqref{Eq:inner} has a second solution $u=u^+_0$ with
$$
u^+_0(\varphi,\tau)=\overline{u^-_0(-\overline\varphi,-\overline\tau)}\,.
$$
It has the same asymptotic behaviour as $u^-_0$ but is defined in a different sector, more precisely,
it is defined for $\tau$ such that $-\overline\tau\in\mathcal{D}^{-}_{r,\theta_0}$. The solutions $u^\pm_0$ have a
common asymptotic expansion on the intersection of their domains, but they do not
coincide in general. The difference of these two solutions can be described in the following way.
We can restore 4-dimensional vectors $\mathbf{\Gamma}_0^{\pm}$ using equations
\eqref{4:changeofvariables} with $'$ replaced by $\partial_\varphi+\partial_\tau$.
In particular, the first component of $\mathbf{\Gamma}_0^{\pm}$ coincides with $u^\pm_0$.
The functions $\mathbf{\Gamma}^{\pm}_0$ are parameterizations of the stable and unstable manifolds
and satisfy the following non-linear partial differential equation,
\begin{equation}\label{4:VFH0}
(\partial_\varphi+\partial_\tau)\mathbf{\Gamma}^{\pm}_0=X_{H_0}(\mathbf{\Gamma}^{\pm}_0),
\end{equation}
where $H_0$ denotes the Hamiltonian \eqref{4:H} at the exact moment of bifurcation $\epsilon=0$.
Let
$$
\Delta_0(\varphi,\tau)=
\mathbf{\Gamma}_0^{+}(\varphi,\tau)-\mathbf{\Gamma}_0^{-}(\varphi,\tau)
$$
and
$$
\theta_0(\varphi,\tau)=
\Omega\bigl(\Delta_0(\varphi,\tau),\partial_\varphi\mathbf{\Gamma}_0^{+}(\varphi,\tau)\bigr)\,,
$$
where $\Omega$ is the standard symplectic form. In \cite{JP10} it is proved that there is a constant $\Theta_0(\kappa)$ such that
\begin{equation}\label{Eq:theta0}
\theta_0(\varphi,\tau)
=\Theta_0(\kappa)e^{-i(\tau-\varphi)}+O(e^{-(2-\epsilon_0)i(\tau-\varphi)})
\end{equation}
as $\mathrm{Im}\,\tau \rightarrow-\infty$, for an arbitrarily small fixed $\epsilon_0>0$.
The constant $\Theta_0(\kappa)$ is known as the\/ {\em Stokes constant}.
The Stokes constant can be defined by the following limit:
\begin{equation}\label{4:SC}
\Theta_0(\kappa):=\lim_{\mathrm{Im}(\tau)\rightarrow -\infty} \theta_0(\varphi,\tau)e^{i(\tau-\varphi)}\,.
\end{equation}
We note that the value of the Stokes constant cannot be obtained
from our arguments. Fortunately, its numerical evaluation is reasonably easy.
Figure~\ref{4:figStokesbeta} shows the values of $\mathrm{Im}\,\Theta_0(\kappa)$ plotted
against $\kappa$ for $\kappa>\kappa_0=\sqrt{\frac{27}{38}}$. The picture suggests that the Stokes constant vanishes
infinitely many times and that its zeros accumulate to $\kappa_0$.
\subsection{Asymptotic formula for the homoclinic invariant\label{Se:asymp}}
In this section we derive the asymptotic formula (\ref{4:Asymptoticformulahomoclinic}) for the homoclinic invariant
of the primary symmetric homoclinic orbit. Our method is not rigorous
and relies on a complex matching approach
similar to the one used for the standard map and the rapidly perturbed pendulum (see \cite{VGVL:01}).
We point out that in the latter two cases the method led to a complete proof of
asymptotic formulae similar to (\ref{4:Asymptoticformulahomoclinic}). Our approach has a certain
similarity to the complex matching methods used in \cite{HakimM93,CK:09} but is different in several
important technical details.
At the end of section \ref{Se:nf} we obtained an approximation of the separatrix in the normal form coordinates.
Transforming $\Upsilon_5(\varphi,z)$ back to the original coordinates we obtain
the following approximation:
\begin{eqnarray}\label{4:leadingorderGamma_epsilon}
\lefteqn{u^{-}_\epsilon(\varphi,z)=
-\frac{1}{\sqrt {\eta}}{\frac {\cos \left( \varphi \right)}{\cosh
\left( z \right) }}\delta
}
\\
\nonumber
&&\quad+\left(\frac{9\kappa+\kappa\cos(2\varphi)}{18\eta}\frac{1}{\cosh^2(z)}
-\frac{1}{\sqrt{\eta}}\left(\frac{\mu}{\eta}+\frac{1}{2}\right)\frac{\sin(\varphi)\sinh(z)}{\cosh^2(z)}\right)\delta^2+O(\delta^3)
\end{eqnarray}
where $\epsilon=-4\delta^2$. Since the right-hand side of the equation is invariant under $(\varphi,z)\mapsto(-\varphi,-z)$,
it also approximates the stable separatrix represented by $u^{+}_\epsilon(\varphi,z)=u^{-}_{\epsilon}(-\varphi,-z)$.
A more accurate approximation with an $O(\delta^n)$ error can be obtained with the help of higher-order normal form
theory, but naturally none of these approximations can distinguish between the stable and unstable separatrices,
and we come to the conclusion that $$u^{-}_\epsilon(\varphi,z)-u^{+}_\epsilon(\varphi,z)=O(\delta^n)$$
for all $n$. Of course, the constant in this upper bound may depend on the point $(\varphi,z)$.
Therefore, the difference between the stable and unstable parametrisations cannot be detected by the power series of perturbation theory,
and we say it is beyond all algebraic orders.
A rather standard approach to the problem is based on studying the analytical continuation
of the parametrisations and looking for places in the complexified variables
where the leading orders of the approximation \eqref{4:leadingorderGamma_epsilon} grow significantly.
We note that the variables $z$ and $\varphi$ play different roles,
in particular we assume that $\varphi$ is kept real or, more precisely,
in a fixed narrow strip around the real axis.
It is easy to see that the leading orders of $u^-_\epsilon$ have poles
at $z=i\frac{\pi}{2}+ki\pi$ for any integer $k$. In the following we study the behaviour
of the parametrisations near the singular point $z=i\frac{\pi}{2}$. The first step is to re-expand
the functions in Laurent series around the singularity and introduce a new variable
\begin{equation}\label{4:ztaurelation}
\tau=\frac{\alpha_\epsilon}{\beta_\epsilon}z-i\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}.
\end{equation}
Substituting this new variable into \eqref{4:leadingorderGamma_epsilon}
and expanding around $\tau=0$ we conclude that
\begin{equation}\label{4:leadingorderGamma_0}
u^{-}_\epsilon(\varphi,\tfrac{\beta_\epsilon}{\alpha_\epsilon}\tau+i\tfrac{\pi}{2})
=\left(\frac{P_1(\varphi)}{\tau}
+\frac{P_2(\varphi)}{\tau^2}+O(\tau^{-3})\right)+O(\epsilon)
\end{equation}
where $P_1$ and $P_2$ are the same as in \eqref{Eq:p1p2}
and the error terms come from the analysis of the next order corrections.
In this analysis we consider the terms in \eqref{4:leadingorderGamma_epsilon}
which are most divergent and in this way obtain the essential behaviour of $u^{-}_\epsilon$
around the singularity.
Transforming the equation \eqref{Eq:SHPDE} to the variable \eqref{4:ztaurelation}, setting $\epsilon=0$
and noting that $\alpha_0=1$, we obtain equation \eqref{Eq:inner} considered in the previous subsection.
The following method is known as ``complex matching" and is based on the observation
that $u^\pm_0$ approximate $u^\pm_\epsilon$ in a region where $|z-i\frac\pi2|$ is small
but $\tau$ is still large.
Taking into account \eqref{4:leadingorderGamma_0} we conclude that
\begin{eqnarray}\label{Eqs:upmapp}
u^-_\epsilon(\varphi,\tfrac{\beta_\epsilon}{\alpha_\epsilon}\tau+i\tfrac{\pi}{2})&=&u_0^{-}(\varphi,\tau)+O(\epsilon)\,,\\
u^+_\epsilon(\varphi,\tfrac{\beta_\epsilon}{\alpha_\epsilon}\tau+i\tfrac{\pi}{2})&=&u_0^{+}(\varphi,\tau)+O(\epsilon)
\end{eqnarray}
in a neighbourhood of a segment of the imaginary axis where $\Im\tau$ is large and negative.
In a rigorous justification of the method we use the interval $-R\log\epsilon^{-1}<\Im\tau<-R$, where $R$ is a large constant.
Now restoring the 4-dimensional vectors $\mathbf{\Gamma}^{u,s}_\epsilon$ using the relations \eqref{4:changeofvariables} we obtain the following estimate for the difference,
\begin{equation}\label{Eq:est1}
\Delta(\varphi,\tfrac{\beta_\epsilon}{\alpha_\epsilon}\tau+i\tfrac{\pi}{2})=-\Delta_0(\varphi,\tau)+O(\epsilon)
\end{equation}
valid for $-R\log\epsilon^{-1}<\Im\tau<-R$ where $\Delta(\varphi,z)=\mathbf{\Gamma}^{u}_\epsilon(\varphi,z)-\mathbf{\Gamma}^{s}_\epsilon(\varphi,z)$.
In order to derive an asymptotic formula for the homoclinic invariant, we consider an auxiliary function
defined by
\begin{equation*}
\Theta(\varphi,z)=\Omega\left(\Delta(\varphi,z),\partial_\varphi\mathbf{\Gamma}^{s}_\epsilon(\varphi,z)\right)\,,
\end{equation*}
where $\Omega$ is the standard symplectic form.
The homoclinic invariant of the primary homoclinic orbit is defined by \eqref{Eq:hominv}
which takes the form
\begin{equation}\label{4:homoclinicinvariant}
\omega=\Omega\bigl(\partial_\varphi \mathbf{\Gamma}^{u}_\epsilon(0,0),
\partial_\varphi \mathbf{\Gamma}^{s}_\epsilon(0,0)\bigr)\,.
\end{equation}
Differentiating the definition of $\Theta$ at the origin and taking into account that $\Delta(0,0)=0$
we get the relation:
$$
\omega=\partial_\varphi\Theta(0,0).
$$
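In more detail, differentiating the definition of $\Theta$ with respect to $\varphi$ gives
$$
\partial_\varphi\Theta=\Omega\bigl(\partial_\varphi\Delta,\partial_\varphi\mathbf{\Gamma}^{s}_\epsilon\bigr)
+\Omega\bigl(\Delta,\partial^2_\varphi\mathbf{\Gamma}^{s}_\epsilon\bigr)\,;
$$
at the origin the second term vanishes because $\Delta(0,0)=0$, and
$$
\partial_\varphi\Theta(0,0)=\Omega\bigl(\partial_\varphi\mathbf{\Gamma}^{u}_\epsilon(0,0),\partial_\varphi\mathbf{\Gamma}^{s}_\epsilon(0,0)\bigr)=\omega\,,
$$
since $\Omega\bigl(\partial_\varphi\mathbf{\Gamma}^{s}_\epsilon,\partial_\varphi\mathbf{\Gamma}^{s}_\epsilon\bigr)=0$.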
Thus, we only need to estimate the function $\Theta$ and its derivative. Considering higher approximations of $u^{\pm}_\epsilon$ in \eqref{Eqs:upmapp} it is possible to improve the estimate in \eqref{Eq:est1}. In \cite{JP10} it is proved that in a neighbourhood of the point $\tau=-i\log(\epsilon^{-1})$ the following estimate holds:
\begin{equation}\label{Eq:vareq}
\Delta(\varphi,\tfrac{\beta_\epsilon}{\alpha_\epsilon}\tau+i\tfrac{\pi}{2})=-\Delta_0(\varphi,\tau)+O(\epsilon^2)
\end{equation}
which leads to
\begin{equation}\label{Eq:bound}
\begin{split}
\Theta(\varphi,z)&=-\theta_0(\varphi,\tau)+O(\epsilon^2)
=-e^{-i(\tau-\varphi)}\Theta_0(\kappa)+O(\epsilon^2)\,.
\end{split}
\end{equation}
Now note that the function $\Theta$ satisfies the following equation,
\begin{equation}\label{3:ThetaDepsilon}
(\alpha_\epsilon\partial_\varphi+\beta_\epsilon\partial_z) \Theta=\Omega(F(\Delta),\partial_\varphi\mathbf{\Gamma}^u_\epsilon),
\end{equation}
where $F(\Delta)=X_{H_{\epsilon}}(\mathbf{\Gamma}^u_\epsilon+\Delta)-X_{H_{\epsilon}}(\mathbf{\Gamma}^u_\epsilon)-DX_{H_{\epsilon}}(\mathbf{\Gamma}^u_\epsilon)\Delta$. Since $F(\Delta)$ is of second order in $\Delta$, the function $\Theta$ approximately satisfies the homogeneous equation $(\alpha_\epsilon\partial_\varphi+\beta_\epsilon\partial_z) \Theta=0$ with an error of the order $O(\left|\Delta(\varphi,z)\right|^2)$.
Taking into account that the splitting of separatrices is rather small, we continue
our arguments neglecting this error. Then there is a function $f$ such that
$$
\Theta(\varphi,z)=f(\alpha_\epsilon z-\beta_\epsilon\varphi)
$$
inside the domain of $\Theta$, which implies that $f$ can be extended by periodicity
onto the strip $|\Im(z)|<\tfrac\pi2-R\delta$.
We expand the function $f$ into Fourier series, i.e.,
$$
\Theta(\varphi,z)=\sum_{k\in\mathbb{Z}}f_ke^{ik (\tfrac{\alpha_\epsilon}{\beta_\epsilon}
z-\varphi)}\,.
$$
The coefficients of the series can be expressed in terms of Fourier integrals:
\begin{eqnarray}
f_{k}&=& \frac{\alpha_\epsilon}{2\pi\beta_\epsilon}
\int_0^{\tfrac{2\pi\beta_\epsilon}{\alpha_\epsilon}} e^{-ik\tfrac{\alpha_\epsilon}{\beta_\epsilon}z}\Theta(0,z)dz\,.
\end{eqnarray}
Following the standard procedure of Fourier analysis, we
shift the contour of integration to $\Im z=\frac\pi2-\tfrac{\beta_\epsilon}{\alpha_\epsilon}\log\epsilon^{-1}$,
change the variable to \eqref{4:ztaurelation} and use the estimate \eqref{Eq:bound} to get
\begin{eqnarray}
f_{-1}&=& -e^{-\tfrac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\left(\Theta_0(\kappa)+O(\epsilon)\right)\,,
\end{eqnarray}
$f_1=\overline{f_{-1}}$ and there is a positive constant $C$ such that
$$
|f_k|\le C\epsilon^{2-|k|}e^{-|k|\tfrac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\qquad
\mbox{for $|k|\ge2$}.
$$
Substituting these estimates into the Fourier series we get that
for real values of $\varphi,z$
\begin{equation}\begin{split}
\Theta(\varphi,z)&=-2e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\left|\Theta_0\right|\cos\left(\frac{\alpha_\epsilon}{\beta_\epsilon}z
-\varphi-\arg(\Theta_0)\right)+O(e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\epsilon)\,,\\
\partial_\varphi\Theta(\varphi,z)&=-2e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}
\left|\Theta_0\right|\sin\left(\frac{\alpha_\epsilon}{\beta_\epsilon}z-\varphi-\arg(\Theta_0)\right)
+O(e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\epsilon)\,.\\
\end{split}
\end{equation}
Since $\Theta(0,0)=0$ for all $\epsilon$, we get $\arg(\Theta_0)=\pm\frac{\pi}{2}$, i.e., the Stokes constant is a purely imaginary number,
and equation \eqref{4:Asymptoticformulahomoclinic} follows directly.
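The exponential factor $e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}$ in $f_{\pm1}$ is an instance of a general fact: the Fourier coefficients of a periodic function analytic in a strip decay like $e^{-|k|d}$, where $d$ is the distance from the real axis to the nearest complex singularity (here $\Im z=\frac\pi2$). A minimal numerical illustration of this mechanism, with a toy function whose poles lie at distance $\ln 2$ from the real axis rather than with the GSHE itself:

```python
import numpy as np

# Toy 2*pi-periodic function: 1/(5/4 - cos x) has complex poles at
# x = +-i*log(2), and its Fourier coefficients are exactly (4/3)*2**(-|k|),
# i.e. they decay like e^{-|k| d} with d = log 2 the strip half-width.
N = 4096
x = 2 * np.pi * np.arange(N) / N
f = 1.0 / (1.25 - np.cos(x))

# The rectangle rule is spectrally accurate for periodic analytic integrands.
fk = np.array([np.mean(f * np.exp(-1j * k * x)) for k in range(8)]).real

decay = np.log(fk[:-1] / fk[1:])  # close to log 2 for every k
```

The same mechanism, applied to $\Theta$ analytic in the strip $|\Im z|<\frac\pi2-R\delta$, produces the factor $e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}$ above.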
We note that the integrability of the normal form allows us to
repeat the arguments with more accurate approximations of the separatrices;
this consideration leads to the conjecture that
\begin{equation}\label{4:AsymptoticExpansionHomoclinic}
\omega(\epsilon)\asymp e^{-\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\sum_{k\geq 0}\omega_k \epsilon^k
\end{equation}
where $\omega_0=2\mathrm{Im}(\Theta_0(\kappa))$.
\section{Computation of the Stokes constant\label{Se:num}}
Since the arguments involved in the derivation of the asymptotic formula are not rigorous, we have developed numerical methods to check the validity of our results.
The procedure is based on comparison of two different methods for evaluation of the
Stokes constants. The first method relies on the definition \eqref{4:SC}
and involves the GSHE with $\epsilon=0$ only. The second method evaluates
the homoclinic invariant for $\epsilon\ne0$ and
relies on the validity of the asymptotic expansion \eqref{4:AsymptoticExpansionHomoclinic}
to extrapolate the values of the (normalised) homoclinic invariant towards $\epsilon=0$
in order to obtain $\omega_0$.
\subsection{A method for the computation of the Stokes constant}
Let us describe the first method for computing the Stokes constant.
We set $\tau=-i\sigma$ for $\sigma>0$, $\varphi=0$ and rewrite equation~\eqref{Eq:theta0} in the form:
\begin{equation}\label{4:theta0}
\Theta_0=\theta_0(0,-i\sigma)e^{\sigma}+O\left(e^{-(1-\epsilon_0)\sigma}\right).
\end{equation}
Then we proceed as follows.
\begin{enumerate}
\item The first step is to construct a good approximation of the stable and unstable manifolds.
This approximation is given by a partial sum of the formal separatrix $\hat{u}_0$ defined in section \ref{Se:Stokes}. Given $N\geq1$, we use the relations \eqref{4:changeofvariables} to define
\begin{equation*}
\mathbf{\Gamma}_N(\varphi,\tau):=\sum_{k=1}^N \Gamma_k(\varphi)\tau^{-k}\,,
\end{equation*}
where
\begin{equation*}
\Gamma_k(\varphi)=\sum_{j=-k}^{k}\Gamma_{k,j}e^{ji\varphi}\text{ with } \Gamma_{k,j}\in \mathbb{C}^4,
\end{equation*}
such that $\mathbf{\Gamma}_N$ approximates the parameterizations $\mathbf{\Gamma}^{\pm}_0$ in the following sense
\begin{equation*}
\mathbf{\Gamma}^{\pm}_0(\varphi,\tau)-\mathbf{\Gamma}_N(\varphi,\tau)=O(\tau^{-N-1})\,.
\end{equation*}
The natural number $N$ can be chosen using the \textsl{astronomers' recipe}:
for fixed $\tau$ and $\varphi$, choose $N$ to minimise $\left|\Gamma_{N+1}(\varphi)\tau^{-N-1}\right|$,
that is, the least term of the formal series $\sum_{k\geq1}\Gamma_k(\varphi)\tau^{-k}$ (see Figure \ref{4:figAst2}).
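The recipe is the standard least-term truncation of a divergent asymptotic series. A toy illustration of the principle (with the model series $\sum_k k!\,x^{-k}$ rather than the actual series $\sum_k\Gamma_k(\varphi)\tau^{-k}$):

```python
import math

x = 10.0  # plays the role of |tau|
terms = [math.factorial(k) / x**k for k in range(30)]

# Astronomers' recipe: truncate the divergent series at its least term.
N = min(range(30), key=lambda k: terms[k])
```

The terms decrease while $k\lesssim x$ and grow afterwards; the least term, of order $e^{-x}$ up to a power of $x$, bounds the best accuracy achievable by truncation, which is precisely the exponentially small scale relevant here.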
\begin{figure}
\caption{Graph of $\log_{10}$ of the magnitude of the terms of the formal series $\sum_{k\geq1}\Gamma_k(\varphi)\tau^{-k}$; the least term determines the truncation order $N$.}
\label{4:figAst2}
\end{figure}
\item A point on the unstable manifold (resp. stable manifold) can be represented in the coordinates $(\varphi,\tau)$.
In order to obtain a point close to the unstable manifold we fix a positive real number $\sigma \in \mathbb{R}^{+}$
and a sufficiently large $d \in \mathbb{R}^{+}$ and define $z^{-}_0=\mathbf{\Gamma}_N(-d,-i\sigma-d)$ and
a tangent vector $v^{-}_0=\partial_\varphi\mathbf{\Gamma}_N(-d,-i\sigma-d)$. Analogously, for the stable manifold
we define $z^{+}_0=\mathbf{\Gamma}_N(d,-i\sigma+d)$ and $v^{+}_0=\partial_\varphi\mathbf{\Gamma}_N(d,-i\sigma+d)$.
\item The next step is to measure the difference of the stable and unstable manifolds at the point $(\varphi,\tau)=(0,-i\sigma)$.
Taking into account the periodicity in $\varphi$, we set $d$ equal to a multiple of $2\pi$ and integrate numerically the system
\begin{equation}\label{4:S1}\begin{array}{l}
z'=X_{H_0}(z)\\
v'=DX_{H_0}(z)v\\
\end{array}
\end{equation}
forward in time with $t \in [0,d]$ and initial conditions $z^-(0)=z_0^{-},v^-(0)=v_0^{-}$
and then backward in time with $t \in [-d,0]$ and initial conditions $z^+(0)=z_0^{+},v^+(0)=v_0^{+}$.
\item Finally we evaluate
\begin{equation}\label{4:theta}
\hat{\Theta}(\sigma)=\Omega(z^+(-d)-z^-(d),v^-(d))e^{\sigma}\,.
\end{equation}
\end{enumerate}
\begin{remark}
The stable and unstable manifolds have the same asymptotic expansion, hence the difference $z^+(-d)-z^-(d)$ is exponentially small,
i.e. comparable with $e^{-\sigma}$. Thus the system \eqref{4:S1} has to be integrated with great accuracy.
In the case of GSHE an excellent integrator can be constructed using a high order Taylor series method.
\end{remark}
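To give an idea of such an integrator, here is a minimal sketch of one high-order Taylor step for the scalar model equation $x'=x^2$; this illustrates the principle only and is not the Maple integrator used in our computations. For a polynomial vector field such as that of the GSHE, the Taylor coefficients of the solution satisfy an analogous Cauchy-product recursion.

```python
def taylor_step(x, h, order=20):
    """One Taylor-series step for the model equation x' = x**2.

    The Taylor coefficients a[n] of the solution obey the Cauchy-product
    recursion (n+1)*a[n+1] = sum_j a[j]*a[n-j].
    """
    a = [x]
    for n in range(order):
        a.append(sum(a[j] * a[n - j] for j in range(n + 1)) / (n + 1))
    # Evaluate the truncated Taylor polynomial at the step size h.
    return sum(c * h**k for k, c in enumerate(a))

# Integrate x' = x**2, x(0) = 1 up to t = 0.5; the exact solution is 1/(1-t).
x = 1.0
for _ in range(5):
    x = taylor_step(x, 0.1)
```

With order 20 the local truncation error is far below double precision, and the final value agrees with the exact $1/(1-0.5)=2$ to machine accuracy.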
\subsection{Numerical results}
In all computations we have used a Taylor series method, implemented in Maple,
to integrate the equations of motion \eqref{4:S1}. The method uses an adaptive step-size procedure controlled
by a local error tolerance, which was set to $10^{-D}$, where $D$ is the number of significant digits used
in the computations. The order of the method was chosen automatically using the formula $\max(22,\left\lfloor 1.5D\right\rfloor)$.
Having fixed $\kappa=2$ (which we recall to be one of the parameters of the original equation \eqref{4:GSHE}) we have computed the first 45 coefficients of the formal separatrix $\hat{u}_0$
with 60 digits precision. The error committed by the approximation $\mathbf{\Gamma}_N$ is approximately of the order of the first missing term.
\begin{figure}
\caption{\small The top figure represents the graph of the function $\mathrm{Im}\,\hat{\Theta}(\sigma)$ for $\sigma$ in the interval $[20,28.89]$; the expected errors are bounded by the dashed curves.}
\label{4:fig1}
\end{figure}
Using double precision (16 digits) we have integrated numerically the equations \eqref{4:S1}
to obtain $\hat{\Theta}(\sigma)$ for values of $\sigma$ uniformly distributed in the interval
$\left[20,28.89\right]$. The initial conditions were computed using $d=350\pi$ and the first
9 terms of $\mathbf{\Gamma}_N$. The results are depicted in Figure \ref{4:fig1}.
The expected errors are bounded by the dashed curves. This implies in particular that the method is numerically
stable, that is, the errors propagated by the integration do not grow drastically.
There are several sources of errors that affect the accuracy of the computation of the Stokes constant, namely:
\begin{itemize}
\item Approximation of stable and unstable manifolds given by the function $\mathbf{\Gamma}_N$;
\item Errors due to the numerical integration;
\item Rounding errors.
\end{itemize}
The first and the second sources of error can be made small compared to the rounding errors, which can be roughly estimated by
\begin{equation}\label{4:roundoff}
\frac{C}{\sigma^2}10^{-D}e^{\sigma},
\end{equation}
where $D$ is the number of digits used in the computations and $C$ is a real positive constant
which reflects the propagation of rounding errors. Using this estimate we have provided bounds
for the rounding errors which can be observed in Figure \ref{4:fig1}. The constant $C$ can
be estimated by fitting the function \eqref{4:roundoff} to the points $\left|\hat{\Theta}(\sigma)\right|$ for $\sigma\geq 25$.
Using the method of least squares we have concluded that $C$ is approximately $38.5$.
With double arithmetic precision the method described above allows the computation of 7 to 8 correct digits
of the Stokes constant $\Theta_0$. Indeed, the rounding errors in computing $\hat{\Theta}(\sigma)$ from
formula \eqref{4:theta} grow according to \eqref{4:roundoff}, whereas the neglected terms of formula
\eqref{4:theta0} decrease like $C_1e^{-\sigma}$, where $C_1$ is some positive constant. Hence the optimal accuracy
is attained when both contributions are of the same order. The constant $C_1$ can be estimated by fitting
the function $C_0+C_1e^{-\sigma}$ to the points $\left|\hat{\Theta}(\sigma)\right|$ for $\sigma\leq 24$.
Using the method of least squares we have obtained that $C_1$ is approximately $17305.75$.
Using this information we can determine the value $\sigma^*$ where both contributions are essentially of the same order.
This means that $\sigma^*$ must satisfy the equation,
\begin{equation*}
(e^{-\sigma})^2=\frac{C}{{\sigma}^2\,C_1}10^{-D}
\end{equation*}
which implies,
\begin{equation*}
\left|\Theta_0-\hat{\Theta}(\sigma^*)\right|\approx \frac{816}{\sigma^*}10^{-\frac{D}{2}}
\end{equation*}
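With the fitted values $C\approx38.5$ and $C_1\approx17305.75$ the balance equation is easily solved numerically for $\sigma^*$. A short sketch, which for $D=16$ reproduces the value $\sigma^*\approx24.68$ listed in Table \ref{4:tabStokesConstant}:

```python
import math

C, C1, D = 38.5, 17305.75, 16  # fitted constants and number of digits

# Balance equation e^{-2s} = C/(s**2 * C1) * 10**(-D), rewritten as the
# fixed-point problem  s = (log(C1/C) + D*log(10))/2 + log(s).
s = 20.0
for _ in range(50):
    s = 0.5 * (math.log(C1 / C) + D * math.log(10)) + math.log(s)
```

The iteration converges quickly because the fixed-point map has derivative $1/s\ll1$.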
\begin{table*}[tp]
\tiny
\begin{tabular}{|c|c|c|l|}
\hline
$D$&$\sigma^*$&$\mathrm{Re}(\hat{\Theta}(\sigma^*))$&$\mathrm{Im}(\hat{\Theta}(\sigma^*))$\\
\hline
16&24.68&2.7e-05&\textbf{10.472161}43901571\\
20&29.46&7.8e-07&\textbf{10.47216195}3423286113\\
24&34.21&1.6e-08&\textbf{10.4721619569}069446924024\\
28&38.95&3.1e-10&\textbf{10.472161956944}13924682820786\\
32&43.67&5.3e-12&\textbf{10.47216195694439}6725504278408504\\
36&48.37&8.5e-14&\textbf{10.4721619569443983}419527788851129556\\
40&53.07&1.2e-15&\textbf{10.472161956944398358}12989263311456886391\\
44&57.76&1.8e-17&\textbf{10.47216195694439835828}4180684468467819622191\\
48&62.45&2.6e-19&\textbf{10.4721619569443983582855}084356725900717201861670\\
52&67.12&3.5e-21&\textbf{10.472161956944398358285521}30242825730920048239485015\\
56&71.80&4.7e-23&\textbf{10.47216195694439835828552143}0879142372532568396894067732\\
60&76.46&6.2e-25&\textbf{10.4721619569443983582855214320}209319731283197852962601326570\\
64&81.13&8.0e-27&\textbf{10.472161956944398358285521432031}66495538939445255794702026972749\\
68&85.79&1.0e-28&\textbf{10.47216195694439835828552143203190}0047829633854060398152634432422925\\
\hline
\end{tabular}
\caption{Stokes constant evaluated at the optimum $\sigma^*$ for different computer precisions.
In the computations we have used $d=350\pi$ and $N=40$.}
\label{4:tabStokesConstant}
\end{table*}
In this way it is possible to obtain 8 correct digits for the Stokes constant using only double precision.
In Table \ref{4:tabStokesConstant} we have listed the values of $\hat{\Theta}(\sigma^{*})$ evaluated
at the optimum $\sigma^*$ for higher computer precisions. The digits in bold correspond
to correct digits of the Stokes constant. We also note that the numerics suggest that $\Theta_0$
is pure imaginary which agrees with our prediction.
Finally, let us mention that in the process of computing the Stokes constant we have made several
choices of parameters, namely the number of terms $N$ used to compute $\mathbf{\Gamma}_N$
and the parameter $d$, both used in computing the initial conditions of step (ii) of the numerical scheme.
The results are independent of these particular choices, and Table \ref{4:tabrobust} demonstrates
the robustness of the numerical method.
\begin{table*}[tp]
\begin{tabular}{|c|c|c|c|}
\hline
$d \backslash N$&10&20&30\\
\hline
100$\pi$&10.47216215179386&10.47216215183208&10.47216215181955\\
\hline
150$\pi$&10.47216131335742&10.47216131335746&10.47216131335772\\
\hline
200$\pi$&10.47216144775669&10.47216144775671&10.47216144775682\\
\hline
250$\pi$&10.47216149546998&10.47216149546998&10.47216149547027\\
\hline
300$\pi$&10.47216132022817&10.47216132022820&10.47216132022773\\
\hline
350$\pi$&10.47216138600882&10.47216138600883&10.47216138600868\\
\hline
\end{tabular}
\caption{Comparison of the value of $\mathrm{Im}(\hat{\Theta}(25))$ for different values of parameters $N$ and $d$.}
\label{4:tabrobust}
\end{table*}
\section{High precision computations of an asymptotic expansion for the homoclinic invariant\label{Se:num3}}
In this section we present a numerical method for the computation of the homoclinic invariant as defined
in \eqref{4:homoclinicinvariant} for the Swift-Hohenberg equation with $\kappa=2$ and $\epsilon<0$.
Moreover we investigate from a numerical point of view the validity of the asymptotic expansion
\eqref{4:AsymptoticExpansionHomoclinic} for the homoclinic invariant. This section follows the ideas of \cite{GS:08}
originally developed for the study of exponentially small phenomena for area-preserving maps.
In order to compute the homoclinic invariant \eqref{Eq:hominv}
we need to compute two tangent vectors at the symmetric homoclinic point $\mathbf{\Gamma}^{s}_\epsilon(0,0)$.
Using the fact that the system is reversible we can obtain the stable tangent vector
$\partial_\varphi \mathbf{\Gamma}^{s}_\epsilon$ by applying the reversor to the unstable
tangent vector $\partial_\varphi \mathbf{\Gamma}^{u}_\epsilon$. The unstable tangent vector
$\partial_\varphi \mathbf{\Gamma}^{u}_\epsilon$ lives in the tangent plane of the unstable
manifold at the symmetric homoclinic orbit. Thus an easy way to compute this tangent vector
is to approximate the primary homoclinic orbit near the equilibrium point by the following expansion,
\begin{equation}\label{4:FormalExpansion}
\mathbf{\Gamma}_{\epsilon,N}^{u}(\varphi,z)=\sum_{k=1}^{N}e^{k z}\left(\mathbf{c}_k(\epsilon)
+\sum_{j=1}^{k} \mathbf{a}_{k,j}(\epsilon)\cos(j\varphi) + \mathbf{b}_{k,j}(\epsilon)\sin(j\varphi)\right)
\end{equation}
and then use the variational equations,
\begin{equation}\label{4:equationsofmotions1}
\begin{split}
\mathbf{x}'&=X_{H_\epsilon}(\mathbf{x})\\
\mathbf{v}'&=DX_{H_\epsilon}(\mathbf{x})\mathbf{v}\\
\end{split}
\end{equation}
to transport the tangent vector $\partial_\varphi\mathbf{\Gamma}_{\epsilon,N}^{u}$ along
the primary homoclinic orbit until it hits the symmetric plane $\mathrm{Fix}(S)$ defined by $\left\{q_2=0,p_1=0\right\}$.
Let us present the details of the method.
\subsection{A method for the computation of the homoclinic invariant}
\begin{enumerate}
\item The first step is to determine the coefficients of \eqref{4:FormalExpansion}. To that end we take a new expansion,
\begin{equation*}
u_N(\varphi,z)=\sum_{k=1}^{N}e^{k z}\left(c_k(\epsilon)+\sum_{j=1}^{k} a_{k,j}(\epsilon)\cos(j\varphi) + b_{k,j}(\epsilon)\sin(j\varphi)\right)
\end{equation*}
and substitute into the equation,
\begin{equation}\label{4:eqSHnormal}
((\alpha_\epsilon\partial_\varphi+\beta_\epsilon\partial_z)^2+1)^2\,u=\epsilon u+2u^2-u^3
\end{equation}
and collect the terms of the same order in $e^{kz}$. In this way it is possible to determine the coefficients
$c_k$, $a_{k,j}$ and $b_{k,j}$. It is not difficult to see that the coefficients $a_{1,1}$ and $b_{1,1}$
are free and that all other coefficients depend on these two. So we define
\begin{equation*}
a_{1,1}=r_0\cos(\psi_0)\ \ \mathrm{and} \ \ b_{1,1}=r_0\sin(\psi_0)\,.
\end{equation*}
Now recall that the first component of $\mathbf{\Gamma}^{u}_\epsilon$ solves equation \eqref{4:eqSHnormal},
and due to the asymptotic behaviour \eqref{4:leadingorderGamma_epsilon} we conclude that for $z\ll0$ and $\delta\ll1$
it is approximately
\begin{equation}\label{4:leadingordersGamma}
e^{z}\left(-\frac{2\delta}{\sqrt{\eta}}\cos(\varphi)+\frac{\delta^2}{\sqrt{\eta}}\left(1+\frac{2\mu}{\eta}\right)\sin(\varphi)\right)+O(e^{2z})
\end{equation}
where $\epsilon=-4\delta^2$. Next we ``match'' the leading order of $u_N(\varphi,z)$ with the expression
\eqref{4:leadingordersGamma} and conclude that $\psi_0$ and $r_0$ must satisfy
\begin{equation}\label{4:choiceparameters}
\begin{split}
\psi_0&=\arctan\left(-\left(1+\frac{2\mu}{\eta}\right)\frac{\delta}{2}\right)\\
r_0&=\frac{2\delta}{\sqrt{\eta}}\sqrt{1+\left(1+\frac{2\mu}{\eta}\right)^2\frac{\delta^2}{4}}
\end{split}
\end{equation}
Taking into account \eqref{4:changeofvariables} we reconstruct $\mathbf{\Gamma}_{\epsilon,N}^{u}$ from $u_N$,
and due to the matching \eqref{4:choiceparameters} we have
\begin{equation*}
\mathbf{\Gamma}_{\epsilon}^{u}(t,t)\approx \mathbf{\Gamma}_{\epsilon,N}^{u}(t,t), \ \mathrm{as} \ \ t\rightarrow -\infty,\ \delta\rightarrow0.
\end{equation*}
That is, for small values of $\delta$, the expansion $\mathbf{\Gamma}_{\epsilon,N}^{u}$ provides
a good approximation of the primary homoclinic orbit near the equilibrium point.
\item The second step is to improve the accuracy of the approximation of the symmetric homoclinic point,
provided by $\mathbf{\Gamma}_{\epsilon,N}^{u}$. Given small $\delta$ and sufficiently large $T_0>0$ we want to determine $(T,\psi)$ such that,
\begin{align*}
\mathbf{x}'&=X_{H_\epsilon}(\mathbf{x}),&\mathbf{x}(0;\psi)&=\mathbf{\Gamma}_{\epsilon,N}^{u}(-\alpha_\epsilon T_0,-\beta_\epsilon T_0;\psi)
\end{align*}
subject to the reversibility condition
\begin{equation}\label{4:reversibilitycondition}
\mathbf{x}(T;\psi) \in \mathrm{Fix}(S)\,.
\end{equation}
This problem can be solved using Newton's method. Starting from $(T_0,\psi_0)$ we obtain a sequence of points $(T_i,\psi_i)$,
\begin{equation}\label{4:newtonmethod}
\begin{pmatrix}
T_{i+1}\\
\psi_{i+1}
\end{pmatrix}=\begin{pmatrix}
T_{i}\\
\psi_{i}
\end{pmatrix}-\begin{pmatrix}
\frac{\partial q_2}{\partial T}(T_i;\psi_i)&\frac{\partial q_2}{\partial \psi}(T_i;\psi_i)\\
\frac{\partial p_1}{\partial T}(T_i;\psi_i)&\frac{\partial p_1}{\partial \psi}(T_i;\psi_i)
\end{pmatrix}^{-1}\begin{pmatrix}
q_2(T_i;\psi_i)\\
p_1(T_i;\psi_i)
\end{pmatrix}
\end{equation}
that converges to a limit $(T_{*},\psi_{*})$ such that $\mathbf{x}(T_*;\psi_*) \in \mathrm{Fix}(S)$,
provided $(T_0,\psi_0)$ is sufficiently close to $(T_*,\psi_*)$ (see \cite{ARCAS:93}). The derivatives
in \eqref{4:newtonmethod} can be computed using the variational equations along the orbit $\mathbf{x}(t;\psi)$.
Later we will see that the formulae \eqref{4:choiceparameters} provide sufficiently accurate initial guesses
to ensure the convergence of Newton's method.
\item Having obtained in the previous step an accurate approximation of the symmetric homoclinic point,
the last step is to integrate numerically the system,
\begin{align*}
\mathbf{x}'&=X_{H_\epsilon}(\mathbf{x}),&\mathbf{x}(0;\psi)&=
\mathbf{\Gamma}_{\epsilon,N}^{u}(-\alpha_\epsilon T_0,-\beta_\epsilon T_0;\psi_{*})\\
\mathbf{v}'&=DX_{H_\epsilon}(\mathbf{x})\mathbf{v},& \mathbf{v}(0;\psi)
&=\alpha_\epsilon\partial_\varphi \mathbf{\Gamma}_{\epsilon,N}^{u}(-\alpha_\epsilon T_0,-\beta_\epsilon T_0;\psi_{*})\\
\end{align*}
and evaluate the homoclinic invariant
\begin{equation*}
\hat{\omega}=\Omega(\mathbf{v}(T_{*},\psi_{*}),S(\mathbf{v}(T_{*},\psi_{*})))\,.
\end{equation*}
\end{enumerate}
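Step (ii) above can be sketched in a generic form. The following toy two-dimensional Newton iteration uses a finite-difference Jacobian (in the actual computation the derivatives in \eqref{4:newtonmethod} come from the variational equations) and is applied to an artificial test function rather than to $(q_2,p_1)$:

```python
import numpy as np

def newton2(F, x0, tol=1e-12, itmax=50):
    """Two-dimensional Newton iteration, structurally the same update as
    the (T, psi) iteration in the text; for this sketch the Jacobian is
    approximated by forward finite differences."""
    x = np.array(x0, dtype=float)
    for _ in range(itmax):
        Fx = np.asarray(F(x), dtype=float)
        if np.linalg.norm(Fx) < tol:
            break
        h = 1e-7
        J = np.column_stack([(np.asarray(F(x + h * e)) - Fx) / h
                             for e in np.eye(2)])
        x = x - np.linalg.solve(J, Fx)
    return x

# Artificial test function whose root is (1, 1).
F = lambda x: (x[0]**2 + x[1]**2 - 2.0, x[0] - x[1])
root = newton2(F, (1.2, 0.7))
```

As in the text, convergence is guaranteed only for initial guesses sufficiently close to the root, which is why the accuracy of \eqref{4:choiceparameters} matters.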
\subsection{Numerical results}
We have considered a finite set $\mathcal{I}$ consisting of points in the interval
$\epsilon \in [-\frac{1}{10},-\frac{1}{1000}]$ and computed the homoclinic invariant
for those points using the method described above. For all points in $\mathcal{I}$
the magnitude of the homoclinic invariant ranges from $10^{-5}$ to $10^{-45}$.
Thus, in all numerical integrations we have used a high-order Taylor method, which
allows us to perform the numerical integration with very high precision. We have computed
the coefficients of the expansion \eqref{4:FormalExpansion} up to $N=5$ and for each
$\epsilon\in \mathcal{I}$ we have chosen $T_0$ sufficiently large so that
$\mathbf{\Gamma}_{\epsilon,N}^{u}(-\alpha_\epsilon T_0,-\beta_\epsilon T_0)$ approximates
the unstable manifold within the required precision. The initial point $(T_0,\psi_0)$
used in Newton's method proved to be very close to $(T_*,\psi_*)$, and its relative
error can be observed in Figure \ref{4:figrelativeerrornewtonmethod}.
\begin{figure}
\caption{\small Relative error of $(T_0,\psi_0)$ as a function of $\epsilon\in\mathcal{I}$.}
\label{4:figrelativeerrornewtonmethod}
\end{figure}
After computing the homoclinic invariant we have normalised it using the formula
\begin{equation*}
\bar{\omega}(\epsilon)=\frac{\omega(\epsilon)}{2}e^{\frac{\pi\alpha_\epsilon}{2\beta_\epsilon}}\,.
\end{equation*}
\begin{figure}
\caption{\small Graph of the function $\bar{\omega}(\epsilon)$ for $\epsilon\in\mathcal{I}$.}
\label{4:fighomoclinic}
\end{figure}
The behaviour of the function $\bar{\omega}(\epsilon)$ can be observed in Figure \ref{4:fighomoclinic}.
It is possible to see that it approaches the value of the Stokes constant computed in the previous section.
Moreover, it approaches this value in a linear fashion, supporting the validity
of the asymptotic formula \eqref{4:Asymptoticformulahomoclinic}.
Taking into account the asymptotic expansion for $\omega(\epsilon)$ we investigate
the validity of the following asymptotic expansion for $\bar{\omega}(\epsilon)$,
\begin{equation}\label{4:asymptoticexpansion2}
\bar{\omega}(\epsilon)\asymp\sum_{k\geq0}\bar{\omega}_k\epsilon^k
\end{equation}
\begin{table*}[tp]
\center
\scriptsize
\begin{tabular}{|c|l|l|l|}
\hline
&$\bar{\omega}_0$&$\bar{\omega}_1$&$\bar{\omega}_2$\\
\hline
5&10.47216195694& 8.979943127&-42.60110\\
6&10.472161956944& 8.979943127&-42.601100\\
7&10.4721619569443& 8.9799431275&-42.6011004\\
8&10.47216195694439& 8.97994312752&-42.60110043\\
9&10.472161956944398& 8.9799431275209&-42.601100432\\
10&10.4721619569443983& 8.9799431275210&-42.601100432\\
11&10.4721619569443983& 8.9799431275210&-42.601100432\\
12&10.4721619569443983& 8.9799431275210&-42.6011004327\\
\hline
&$\bar{\omega}_3$&$\bar{\omega}_4$&$\bar{\omega}_5$\\
\hline
5& 152.88&-774.4& 3.8$\times 10^3$\\
6& 152.888&-774.2& 3.8$\times 10^3$\\
7& 152.887&-774.40& 3.80$\times 10^3$\\
8& 152.88795&-774.39& 3.814$\times 10^3$\\
9& 152.88795&-774.394& 3.813$\times 10^3$\\
10& 152.887958&-774.3944& 3.8138$\times 10^3$\\
11& 152.887958&-774.3944& 3.813$\times 10^3$\\
12& 152.887958&-774.3944& 3.813$\times 10^3$ \\
\hline
\end{tabular}
\caption{Coefficients of the estimated polynomials for different subsets of $\mathcal{P}$ and different degrees.}
\label{4:Tableestimatedcoefficients}
\end{table*}
To that end, we have taken 14 points evenly spaced in the interval $[-2.7\times10^{-3},-1.4\times10^{-3}]$
and computed the corresponding normalized homoclinic invariant with more than 40 correct digits.
Let us denote this set of homoclinic invariants by $\mathcal{P}$.
Then, in order to get the first few coefficients of the asymptotic expansion \eqref{4:asymptoticexpansion2}
we have fitted a partial sum of the asymptotic expansion to the points of $\mathcal{P}$.
Here we have used as many points as the number of unknown coefficients.
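Using as many points as unknown coefficients turns the fit into interpolation, i.e. a square Vandermonde solve. Below is a minimal sketch with synthetic data; the coefficient values are only illustrative, and a real computation to 40 digits would need multiprecision arithmetic rather than double precision.

```python
import numpy as np

def fit_partial_sum(eps, omega, n):
    """Estimate the first n coefficients of sum_k w_k * eps**k by
    interpolation: with exactly n data points the Vandermonde system
    is square, so the partial sum interpolates the data."""
    V = np.vander(np.asarray(eps[:n]), n, increasing=True)
    return np.linalg.solve(V, np.asarray(omega[:n]))

# synthetic data generated from known coefficients (illustrative values)
true_w = np.array([10.47216, 8.97994, -42.60110])
eps = np.linspace(-2.7e-3, -1.4e-3, 3)
omega = np.array([np.polyval(true_w[::-1], e) for e in eps])
w_est = fit_partial_sum(eps, omega, 3)
```
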
Moreover, following \cite{GS:08} we have performed the following tests to evaluate the validity of the asymptotic expansion:
\begin{enumerate}
\item Interpolating different partial sums to different subsets of $\mathcal{P}$
should give essentially the same results for the coefficients.
\item The constant term of the interpolating polynomial should coincide with
the value of the Stokes constant computed in the previous section.
\item The interpolating polynomial should reasonably approximate
$\bar{\omega}(\epsilon)$ outside the interval $[-2.7\times10^{-3},-1.4\times10^{-3}]$,
in the sense that it agrees with the main property of an asymptotic expansion:
\begin{equation*}
\left|\bar{\omega}(\epsilon)-\sum_{k=0}^{n-1}\bar{\omega}_k\epsilon^k\right|\leq C |\epsilon|^n,\ \forall \epsilon \in \left[\epsilon_0,0\right)
\end{equation*}
for some $C>0$ and $\epsilon_0<0$.
\end{enumerate}
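The boundedness requirement in the third test can be checked numerically by monitoring the remainder scaled by $|\epsilon|^{n}$. A minimal sketch using a toy function with an exactly known expansion (the geometric series), purely to illustrate the test:

```python
def remainder_bounded(f, coeffs, eps_grid):
    """Max of |f(e) - partial sum| / |e|**n over the grid; for a valid
    asymptotic expansion this quantity stays bounded as e -> 0."""
    n = len(coeffs)
    ratios = []
    for e in eps_grid:
        partial = sum(c * e**k for k, c in enumerate(coeffs))
        ratios.append(abs(f(e) - partial) / abs(e)**n)
    return max(ratios)

# toy model: 1/(1-e) has the expansion 1 + e + e^2 + ... at e = 0
f = lambda e: 1.0 / (1.0 - e)
bound = remainder_bounded(f, [1.0, 1.0], [-10.0**-k for k in range(2, 5)])
```

Here the scaled remainder is $1/(1-\epsilon)$, so the computed bound stays below $1$, as the test requires.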
\begin{figure}
\caption{Relative error of the asymptotic expansion of $\bar{\omega}(\epsilon)$.}
\label{4:fig_relativeerror}
\end{figure}
For the first test we have considered all possible subsets of $\mathcal{P}$
consisting of $6$ consecutive elements and interpolated these data by polynomials
of degree 5. Then, for each coefficient, we extracted the leading digits
common to all the polynomials. We have repeated this process for polynomials of degree 6
up to degree 12. The results are summarized in Table \ref{4:Tableestimatedcoefficients},
where it is possible to see that there is a good agreement between the coefficients
of the different interpolating polynomials of different subsets of $\mathcal{P}$.
We can also infer from Table \ref{4:Tableestimatedcoefficients} that the results
are numerically stable. Thus, we have the following estimates for the first 6 coefficients of \eqref{4:asymptoticexpansion2}:
\begin{align*}
\bar{\omega}_0&=10.4721619569443983\ldots & \bar{\omega}_1&=8.9799431275210\ldots& \bar{\omega}_2&=-42.601100432\ldots\\
\bar{\omega}_3&=152.887958\ldots & \bar{\omega}_4&=-774.3944\ldots & \bar{\omega}_5&=3.813\ldots\times 10^3
\end{align*}
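The digit-extraction step ("the part of the number which is equal to all polynomials") can be sketched as a comparison of decimal strings; the sample values below are illustrative, not the actual window estimates:

```python
def common_digits(values):
    """Count the leading decimal digits on which all estimates agree,
    a crude version of the stabilized-digit extraction."""
    strs = [f"{v:.15f}" for v in values]
    n = 0
    for chars in zip(*strs):
        if len(set(chars)) != 1:
            break
        if chars[0].isdigit():
            n += 1
    return n

# two estimates of the same coefficient from different windows (illustrative)
agreement = common_digits([10.47216195, 10.47216214])
```
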
Furthermore, the coefficient $\bar{\omega}_0$ coincides (up to 18 digits) with the value of the Stokes constant, which we recall:
\begin{equation*}
\left|\Theta_0\right|=10.47216195694439835828552143203190\ldots
\end{equation*}
Moreover, in Figure \ref{4:fig_relativeerror} we see that the relative error of the asymptotic
expansion does not exceed $0.06$ in the whole interval $\left[-\frac{1}{10},0\right]$.
Thus, our numerical results provide satisfactory numerical evidence supporting the correctness
of the asymptotic expansion \eqref{4:AsymptoticExpansionHomoclinic}.
\begin{appendix}
\section{Transformation of GSHE to the normal form\label{Ap:A}}
In order to normalize $H_\epsilon$ up to order $5$, we have used the method
of Lie series to determine Hamiltonians $F_i$, $i=0,\ldots,4$, which generate the following near-identity canonical map,
$$
\Psi_5=\Phi_{F_0}^1\circ\Phi_{F_1}^1\circ\Phi_{F_2}^1\circ\Phi_{F_3}^1\circ\Phi_{F_4}^1\,,
$$
where
\begin{equation}\begin{split}
F_0&=\epsilon\, \left( -{\frac {5}{32}}\,{ q_1}\,{ p_1}+{\frac {3}{32}}
\,{ q_2}\,{ p_2}+\frac{1}{8}\,{ p_1}\,{ p_2} \right)\\
F_1&={\frac {7}{216}}\,\kappa\,\sqrt {2}{{ q_1}}^{2}{ p_2}+{\frac {95}{
216}}\,\kappa\,\sqrt {2}{ q_1}\,{ q_2}\,{ p_1}+{\frac {17}{72}}
\,\kappa\,\sqrt {2}{ q_1}\,{{ p_1}}^{2}+{\frac {5}{36}}\,\kappa\,
\sqrt {2}{ q_1}\,{{ p_2}}^{2}+\\
&\quad {\frac {175}{432}}\,\kappa\,\sqrt {2
}{{ q_2}}^{2}{ p_2}+\frac{1}{36}\,\kappa\,\sqrt {2}{ q_2}\,{ p_1}\,{
p_2}-\frac{1}{12}\,\kappa\,\sqrt {2}{{ p_1}}^{2}{ p_2}-\frac{1}{18}\,\kappa\,
\sqrt {2}{{ p_2}}^{3}\\
F_2&= \left( -{\frac {517}{20736}}\,{\kappa}^{2}+{\frac {29}{512}} \right)
{ q_1}\,{{ p_1}}^{3}+ \left( -{\frac {217}{20736}}\,{\kappa}^{2}+{
\frac {17}{512}} \right) { q_1}\,{ p_1}\,{{ p_2}}^{2}+\\
&\quad \left( {
\frac {2327}{20736}}\,{\kappa}^{2}-{\frac {31}{512}} \right) { q_2}
\,{{ p_1}}^{2}{ p_2}+ \left( -{\frac {19}{512}}+{\frac {2027}{
20736}}\,{\kappa}^{2} \right) { q_2}\,{{ p_2}}^{3}+ \\
&\quad\left( -{
\frac {5}{128}}+{\frac {7}{192}}\,{\kappa}^{2} \right) {{ p_1}}^{3}{
p_2}+ \left( {\frac {19}{576}}\,{\kappa}^{2}-{\frac {3}{128}}
\right) { p_1}\,{{ p_2}}^{3}\\
F_3&=\epsilon\, \left( -{\frac {143}{1152}}\,\kappa\,\sqrt {2}{{ p_1}}^{2
}{ p_2}-{\frac {167}{1728}}\,\kappa\,\sqrt {2}{{ p_2}}^{3}
\right)\\
F_4&=-{\frac {2}{1215}}\,\sqrt {2}\kappa\, \left( 37\,{\kappa}^{2}-27
\right) {{ p_2}}^{5}-{\frac {1}{648}}\,\sqrt {2}\kappa\, \left( -45
+52\,{\kappa}^{2} \right) {{ p_1}}^{4}{ p_2}-\\
&\quad{\frac {1}{243}}\,
\sqrt {2}\kappa\, \left( -27+34\,{\kappa}^{2} \right) {{ p_1}}^{2}{{
p_2}}^{3}
\end{split}
\end{equation}
Using an algebraic manipulator, it is not difficult to see that $\Psi_5$ transforms $H_\epsilon$ into the desired form.
\end{appendix}
\end{document}
\begin{document}
\title{The abelianization of a symmetric mapping class group}
\author{Masatoshi Sato}
\date{}
\maketitle
\begin{abstract}
Let $\Sigma_{g,r}$ be a compact oriented surface of genus $g$ with $r$ boundary components. We determine the abelianization of the symmetric mapping class group $\hat{\mathcal{M}}_{(g,r)}(p_2)$ of a double unbranched cover $p_2:\Sigma_{2g-1,2r}\to\Sigma_{g,r}$ using the Riemann theta constant, the Schottky theta constant, and the theta multiplier. We also give lower bounds for the abelianizations of some finite index subgroups of the mapping class group.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $g$ be a positive integer, $r\ge0$, and $S$ a set of $n$ points in the interior of $\Sigma_{g,r}$. We denote by $\operatorname{Diff}_+(\Sigma_{g,r},\partial\Sigma_{g,r}, S)$ the group of all orientation preserving diffeomorphisms which fix the boundary $\partial\Sigma_{g,r}$ pointwise and map $S$ onto itself, possibly permuting its points. The mapping class group $\mathcal{M}_{g,r}^n$ is the group $\pi_0\operatorname{Diff}_+(\Sigma_{g,r}, \partial\Sigma_{g,r}, S)$ of all isotopy classes of such diffeomorphisms. We write simply $\mathcal{M}_{g,r}:=\mathcal{M}_{g,r}^0$ and $\mathcal{M}_g^n:=\mathcal{M}_{g,0}^n$. The mapping class group and its finite index subgroups play an important role in low-dimensional topology, in the theory of Teichm\"{u}ller spaces, and in algebraic geometry. For example, the level $d$ mapping class group $\mathcal{M}_{g,r}[d]$ is defined to be the finite index subgroup of $\mathcal{M}_{g,r}$ which acts trivially on $H_1(\Sigma_{g,r};\mathbf{Z}/d\mathbf{Z})$ for $d>0$. It arises as the orbifold fundamental group of the moduli space of genus $g$ curves with level $d$ structure.
Computing the abelianizations, or equivalently the first integral homology groups, of finite index subgroups is one of the important problems concerning mapping class groups. The Torelli group $\mathcal{I}_{g,r}$ is the subgroup which acts trivially on $H_1(\Sigma_{g,r};\mathbf{Z})$. McCarthy\cite{mccarthy2000fcg} proved that the first rational homology group of a finite index subgroup that includes the Torelli group vanishes for $r=n=0$, and more generally, Hain\cite{hain28tga} proved it for any $r\ge0$, $n\ge0$.
\begin{theorem}[McCarthy, Hain]\label{theorem:finiteindex}
Let $\mathcal{M}$ be a finite index subgroup of $\mathcal{M}_{g,r}^n$ that includes the Torelli group, where $g\ge3$ and $r\geq0$. Then
\[
H_1(\mathcal{M};\mathbf{Q})=0.
\]
\end{theorem}
This theorem gives us little information about $H_1(\mathcal{M};\mathbf{Z})$ as a finite group. In fact, Farb raised the problem of computing the abelianizations of the subgroups $\mathcal{M}_{g,r}[d]$ in \cite{farb2006spm} (Problem 5.23, p.~43).
In this paper, we confine ourselves to the case $r=0$ or $1$ when it is not specified. For a finite regular cover $p$ on $\Sigma_{g,r}$, possibly branched, Birman-Hilden\cite{birman1973ihr} defined the symmetric mapping class group $\hat{\mathcal{M}}_{(g,r)}(p)$, which is closely related to a finite index subgroup of the mapping class group. As stated in Subsection \ref{subsection:defSMCG}, the symmetric mapping class group is a finite group extension of a certain finite index subgroup of the mapping class group. In particular, we have $H_1(\hat{\mathcal{M}}_{(g,r)}(p);\mathbf{Q})=0$ for all abelian covers $p$. But in general, the first integral homology groups of symmetric mapping class groups and of finite index subgroups of $\mathcal{M}_{g,r}^n$ are unknown.
One such finite index subgroup, the spin mapping class group, is defined as the subgroup of the mapping class group that preserves a spin structure on the surface. Lee-Miller-Weintraub\cite{lee1988rit} constructed a surjective homomorphism from the spin mapping class group to $\mathbf{Z}/4\mathbf{Z}$ using the theta multiplier. Harer\cite{harer1993rpg} proved that this homomorphism induces an isomorphism on the abelianization.
In this paper, we determine the abelianization of the symmetric mapping class group $\hat{\mathcal{M}}_{(g,r)}(p_2)$ of an unbranched double cover $p_2:\Sigma_{2g-1,2r}\to\Sigma_{g,r}$ using the Riemann theta constant, the Schottky theta constant, and the theta multiplier. We also compute the abelianization of a certain finite index subgroup $\mathcal{M}_{g,r}(p_2)$ of the mapping class group, which is included in the level 2 mapping class group $\mathcal{M}_{g,r}[2]$.
If we fix a symplectic basis of $H_1(\Sigma_{g,r};\mathbf{Z})$, the action of the mapping class group $\mathcal{M}_{g,r}$ on $H_1(\Sigma_{g,r};\mathbf{Z})$ induces the surjective homomorphism
\[
\iota:\mathcal{M}_{g,r}\to Sp(2g;\mathbf{Z}),
\]
where $Sp(2g;\mathbf{Z})$ is the symplectic group of rank $2g$. Denote the image of $\mathcal{M}_{g,r}(p_2)$ under $\iota$ by $\Gamma_g(p_2)$. We also denote the image $\iota(\mathcal{M}_{g,r}[d])$ by $\Gamma_g[d]$, which is equal to the kernel $\operatorname{Ker}(Sp(2g;\mathbf{Z})\to Sp(2g;\mathbf{Z}/d\mathbf{Z}))$ of the mod $d$ reduction. The main theorem is as follows.
\begin{theorem}\label{main-theorem}
For $r=0,1$, when genus $g\ge4$,
\begin{gather*}
H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})\cong H_1(\mathcal{M}_{g,1}(p_2);\mathbf{Z})\cong\mathbf{Z}/4\mathbf{Z},\\
H_1(\mathcal{M}_g(p_2);\mathbf{Z})\cong
\begin{cases}
\mathbf{Z}/4\mathbf{Z},\hspace{1cm} \text{ if\ \ } g:\text{odd},\\
\mathbf{Z}/2\mathbf{Z},\hspace{1cm} \text{ if\ \ } g:\text{even},
\end{cases}\\
H_1(\Gamma_g(p_2);\mathbf{Z})\cong\mathbf{Z}/2\mathbf{Z}.
\end{gather*}
\end{theorem}
After proving the theorem, we state that the first homology groups of the level $d$ mapping class group $H_1(\mathcal{M}_{g,1}[d];\mathbf{Z})$ have many elements of order 4 for any even integer $d$ (Proposition \ref{prop:leveld}).
In Section \ref{symMCG}, we define the symmetric mapping class group and describe its relation to a finite index subgroup of the mapping class group. In Section \ref{genGamma}, we prove that the first integral homology groups of $\hat{\mathcal{M}}_{(g,r)}(p_2)$ and $\mathcal{M}_{g,r}(p_2)$ are cyclic groups of order at most 4. We also have $H_1(\Gamma_g(p_2);\mathbf{Z})\cong \mathbf{Z}/2\mathbf{Z}$.
In Section \ref{surj}, we construct an isomorphism $H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})\cong\mathbf{Z}/4\mathbf{Z}$ using the Schottky theta constant and the theta multiplier to complete the proof of Theorem \ref{main-theorem}.
\section{The symmetric mapping class group}\label{symMCG}
In this section, we define the symmetric mapping class group following Birman-Hilden\cite{birman1973ihr} and prove some of its properties. In particular, in Subsection \ref{subsection:cover} we describe $\mathcal{M}_{g,r}(p)=\operatorname{Im} P$ by means of the action of the mapping class group on the equivalence classes of covers. We will see that the groups $\hat{\mathcal{M}}_{(g,r)}(p_2)$ and $\mathcal{M}_{g,r}(p_2)$ do not depend on the choice of the double cover $p_2$ up to isomorphism.
\subsection{Definition of the symmetric mapping class group}\label{subsection:defSMCG}
Birman-Hilden\cite{birman1973ihr} defined the symmetric mapping class group of a regular cover $p: \Sigma_{g',r'}\to \Sigma_{g,r}$, possibly branched, as follows. Denote the deck transformation group of the cover by $\operatorname{Deck}(p)$.
\begin{definition}
Let $C(p)$ be the centralizer of the deck transformation group $\operatorname{Deck}(p)$ in the diffeomorphism group $\operatorname{Diff}_+(\Sigma_{g',r'})$. The symmetric mapping class group of the cover $p$ is defined by
\[
\hat{\mathcal{M}}_{(g,r)}(p)=\pi_0(C(p)\cap\operatorname{Diff}_+(\Sigma_{g',r'},\partial\Sigma_{g',r'})).
\]
\end{definition}
Let $S\subset\Sigma_{g,r}$ be the branch set of the cover $p$. For $\hat{f}\in C(p)\cap \operatorname{Diff}_+(\Sigma_{g',r'},\partial\Sigma_{g',r'})$, there exists a unique diffeomorphism $f\in \operatorname{Diff}_+(\Sigma_{g,r},\partial\Sigma_{g,r}, S)$ such that the diagram
\[
\begin{CD}
\Sigma_{g',r'}@>\hat{f}>>\Sigma_{g',r'}\\
@V p VV @V p VV\\
\Sigma_{g,r}@>f>>\Sigma_{g,r}
\end{CD}
\]
commutes. Note that $f$ maps the branch set $S$ onto itself. The diffeomorphism $f\in \operatorname{Diff}_+(\Sigma_{g,r},\partial\Sigma_{g,r}, S)$ is called the projection of $\hat{f}\in C(p)\cap \operatorname{Diff}_+(\Sigma_{g',r'},\partial\Sigma_{g',r'})$. For $[\hat{f}], [\hat{g}]\in\hat{\mathcal{M}}_{(g,r)}(p)$ such that $[\hat{f}]=[\hat{g}]$, an isotopy between $\hat{f}$ and $\hat{g}$ induces an isotopy on the base space $\Sigma_{g,r}$ between the projections $f$ and $g$. Hence we can define the homomorphism
\[
\begin{array}{cccc}
P:&\hat{\mathcal{M}}_{(g,r)}(p)&\to&\mathcal{M}_{g,r}^n,\\
&[\hat{f}]&\mapsto &[f]
\end{array}
\]
where $n\ge0$ is the order of $S$.
We denote the image $\operatorname{Im} P\subset\mathcal{M}_{g,r}^n$ by $\mathcal{M}_{g,r}(p)$. The kernel of $P$ is included in the group of isotopy classes of the deck transformations in $\hat{\mathcal{M}}_{(g,r)}(p)$. Since no deck transformation other than the identity fixes the boundary pointwise, we have $\operatorname{Ker} P=\{id\}$ when $r=1$. When $r=0$, $\operatorname{Ker} P$ consists of the isotopy classes of all the deck transformations.
In particular, $\operatorname{Ker} P$ is a finite group. Apply the Lyndon-Hochschild-Serre spectral sequence to the group extension
\[1\to \operatorname{Ker} P\to \hat{\mathcal{M}}_{(g,r)}(p)\to\mathcal{M}_{g,r}(p)\to 0,\]
then, since $\operatorname{Ker} P$ is finite and hence has trivial rational homology in positive degrees, we have
\[
H_*(\hat{\mathcal{M}}_{(g,r)}(p);\mathbf{Q})\cong H_*(\mathcal{M}_{g,r}(p);\mathbf{Q}).
\]
\subsection{The action of the mapping class group on the equivalence classes of $G$-covers}\label{subsection:cover}
For a finite group $G$ and a finite set $S$, denote the set of all surjective homomorphisms $\pi_1(\Sigma_{g,r}-S, *)\to G$ by $\operatorname{Surj}(\pi_1(\Sigma_{g,r}-S, *), G)$. The group $G$ acts on this set by inner automorphisms. Denote the quotient set by
\[
m(G,*):=\operatorname{Surj}(\pi_1(\Sigma_{g,r}-S, *), G)/\operatorname{Inn} G.
\]
For paths $l,l':[0,1]\to\Sigma_{g,r}-S$ such that $l(0)=l'(1)$, we define $l\cdot l'$ to be the path obtained by traversing first $l'$ and then $l$. For a path $l:[0,1]\to\Sigma_{g,r}-S$, we define an isomorphism $l_*$ by
\[
\begin{array}{cccc}
l_*:&\pi_1(\Sigma_{g,r}-S,l(0))&\to&\pi_1(\Sigma_{g,r}-S,l(1)).\\
&\gamma&\mapsto&l\cdot\gamma\cdot l^{-1}
\end{array}
\]
If we pick a path $l$ from $*$ to $*'$, we obtain the isomorphism $l_*:\pi_1(\Sigma_{g,r}-S, *)\cong \pi_1(\Sigma_{g,r}-S, *')$.
Hence we also have the bijection
\[m(G,*)=m(G,*').\]
It is easy to see that this bijection does not depend on the choice of $l$; hence we write $m(G):=m(G,*)$.
The mapping class group $\mathcal{M}_{g,r}$ acts on the set $m(G)$. In fact, a diffeomorphism $f\in \operatorname{Diff}_+(\Sigma_{g,r}, \partial\Sigma_{g,r}, S)$ induces the map
\[
\begin{array}{ccc}
m(G)&\to&m(G)\\
\empty[c]&\mapsto&[cf_*].
\end{array}
\]
\begin{proposition}
Let $c:\pi_1(\Sigma_{g,r}-S,*)\to G$ denote the monodromy homomorphism of a branched or unbranched $G$-cover $p:\Sigma_{g',r'}\to\Sigma_{g,r}$, where $S$ is the branch set. The stabilizer of $[c]\in m(G)$ is equal to $\mathcal{M}_{g,r}(p)$.
\end{proposition}
\begin{proof}
Suppose $[f]\in\mathcal{M}_{g,r}^n$ is in the stabilizer of $[c]$. Since $[cf_*]=[c]$, there exists a path $l$ from $*$ to $f(*)$ such that
\[
c(l_*^{-1}f_*(\gamma))=c(\gamma), \text{ for } \gamma\in\pi_1(\Sigma_{g,r}-S,*).
\]
In particular, we have
\[
\operatorname{Ker}(c)=l_*^{-1}f_*(\operatorname{Ker} c).
\]
Hence the covers $p$ and $fp$ are equivalent. Choose a lift $\hat{l}$ of $l$, then there exists $\hat{f}\in\operatorname{Diff}(\Sigma_{g',r'})$ such that
\[
p\hat{f}=fp:\Sigma_{g',r'}\to\Sigma_{g,r},\text{ and } \hat{f}(\hat{l}(0))=\hat{l}(1).
\]
Then we have
\[
\hat{f}c(\gamma)\hat{f}^{-1}=c(l_*^{-1}f_*(\gamma))=c(\gamma)\in\operatorname{Diff}_+\Sigma_{g',r'}.
\]
Hence $\hat{f}$ is in the centralizer $C(p)$ of the deck transformation group $\operatorname{Deck}(p)$. When $r=1$, $\operatorname{Deck}(p)$ acts transitively on $\pi_0(\partial\Sigma_{g',r'})$. It is easy to see that for any $\hat{f}\in C(p)$, there exists $t\in \operatorname{Deck}(p)$ such that $\hat{f}t$ acts trivially
on $\pi_0(\partial\Sigma_{g',r'})$. Therefore, there exists $t\in \operatorname{Deck}(p)$ such that $\hat{f}t\in C(p)\cap\operatorname{Diff}_+(\Sigma_{g',r'},\partial\Sigma_{g',r'})$ and $f=P([\hat{f}t])$.
Conversely, suppose $f=P(\hat{f})\in\mathcal{M}_{g,r}(p)$. Choose a path $\hat{l}$ such that $\hat{f}(\hat{l}(0))=\hat{l}(1)$. Denote the projection $l=p\hat{l}$, then we have
\[
c(l_*^{-1}f_*(\gamma))=\hat{f}c(\gamma)\hat{f}^{-1}=c(\gamma)\in\operatorname{Diff}_+\Sigma_{g',r'}.
\]
Hence we have $[c]=[cf_*]$.
\end{proof}
Hence, $\mathcal{M}_{g,r}(p)$ is a finite index subgroup of the mapping class group. In particular, if $p$ is an abelian cover, $\mathcal{M}_{g,r}(p)$ includes the Torelli group. By Theorem \ref{theorem:finiteindex}, we have $H_1(\mathcal{M}_{g,r}(p);\mathbf{Q})=0$. Consider the double covers of $\Sigma_{g,r}$. The number of equivalence classes of unbranched double covers of $\Sigma_{g,r}$ is $2^{2g}-1$. Since the action of the mapping class group $\mathcal{M}_{g,r}$ on $m(\mathbf{Z}/2\mathbf{Z})$ is transitive, the subgroup $\mathcal{M}_{g,r}(p_2)$ does not depend on the choice of the double cover $p_2$ up to conjugation. It is easy to see that $\hat{\mathcal{M}}_{(g,r)}(p_2)$ is also unique up to isomorphism.
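The count $2^{2g}-1$ can be seen directly: for $r=0,1$ the group $H_1(\Sigma_{g,r};\mathbf{Z}/2\mathbf{Z})$ has rank $2g$, an unbranched double cover corresponds to a surjection onto $\mathbf{Z}/2\mathbf{Z}$, and $\operatorname{Inn}(\mathbf{Z}/2\mathbf{Z})$ is trivial, so no further quotient is taken. A brute-force enumeration sketch (illustrative):

```python
from itertools import product

def count_double_covers(g):
    """Count nonzero homomorphisms H_1(Sigma_g; Z/2) -> Z/2.
    A homomorphism is determined by its values on the 2g basis
    classes A_i, B_i, and it is surjective iff it is nonzero."""
    return sum(1 for h in product((0, 1), repeat=2 * g) if any(h))
```
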
\section{A lower bound of the order of the cyclic group $H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})$}\label{genGamma}
In this section we prove that the first integral homology groups of $\hat{\mathcal{M}}_{(g,r)}(p_2)$ and $\mathcal{M}_{g,r}(p_2)$ are cyclic groups of order at most $4$. We compute $H_1(\Gamma_g(p_2);\mathbf{Z})$ in Subsection \ref{subsection:Gamma} and $H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}$ in Subsection \ref{subsection:torelli} to obtain the lower bound.
In Subsection \ref{subsection:cover}, we proved that the symmetric mapping class group $\hat{\mathcal{M}}_{(g,r)}(p_2)$ and the subgroup $\mathcal{M}_{g,r}(p_2)$ do not depend on the choice of the unbranched double cover $p_2$ up to isomorphism. Hence we fix the unbranched double cover $p_2$ whose monodromy $c\in\operatorname{Hom}(\pi_1(\Sigma_{g,r}),\mathbf{Z}/2\mathbf{Z})\cong H^1(\Sigma_{g,r};\mathbf{Z}/2\mathbf{Z})$ is equal to the Poincar\'e dual of $B_g$ in Figure \ref{symplectic}.
\subsection{The first homology group $H_1(\Gamma_g(p_2);\mathbf{Z})$}\label{subsection:Gamma}
In this subsection, using the generators of $\Gamma_g[2]$ given in Igusa\cite{igusa1964grt}, we prove that $H_1(\Gamma_g(p_2);\mathbf{Z})$ is a cyclic group of order 2. We also prove that $H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})$ and $H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z})$ are cyclic of order at most 4 when the genus $g\ge4$, using the $\mathcal{M}_{g,r}$-module structure of the abelianization of the Torelli group determined by Johnson\cite{johnson1980aqm}. In particular, we obtain $H_1(\mathcal{M}_g(p_2);\mathbf{Z})\cong \mathbf{Z}/2\mathbf{Z}$ if the genus $g\ge 4$ is even. In the next section, we complete the proof of Theorem \ref{main-theorem}.
We consider $\Sigma_{g,1}=\Sigma_g-D^2\subset\Sigma_g$.
Pick simple closed curves $\{A_i, B_i\}_{i=1}^g\subset \Sigma_{g,r}$ as shown in Figure \ref{symplectic}. They give a symplectic basis of $H:=H_1(\Sigma_{g,r};\mathbf{Z})$, which we denote by the same symbols $\{A_i, B_i\}_{i=1}^g$. The action of the mapping class group on $H_1(\Sigma_{g,r};\mathbf{Z})$ induces
\[
\iota:\mathcal{M}_{g,r}\to\operatorname{Sp}(2g,\mathbf{Z}).
\]
We denote the Dehn twist along the simple closed curve $A_g$ by $a\in\mathcal{M}_{g,r}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics{symplectic.eps}
\end{center}
\caption{}
\label{symplectic}
\end{figure}
Let $S$ be a subsurface of $\Sigma_{g,r}$ as shown in Figure \ref{subsurface}, and denote by $\mathcal{M}_S$ its mapping class group fixing the boundary pointwise.
\begin{figure}[htbp]
\begin{center}
\includegraphics{subsurface.eps}
\end{center}
\caption{}
\label{subsurface}
\end{figure}
The inclusion $S\to \Sigma_g$ induces a homomorphism
\[
i_S:\mathcal{M}_S\to \mathcal{M}_g.
\]
As in the Introduction, we denote by $\iota: \mathcal{M}_g\to\operatorname{Sp}(2g;\mathbf{Z})$ the homomorphism defined by the action of $\mathcal{M}_g$ on the homology group $H$, and denote by $M(n;\mathbf{Z})$ the ring of integral $n\times n$ matrices for a positive integer $n$. It is easy to see that the image of $i_S(\mathcal{M}_S)$ under $\iota$ is
\[
\iota(i_S(\mathcal{M}_S))=
\left\{
\sigma=\left.
\begin{pmatrix}
\alpha'&\leftidx{^t}{v_1}{}&\beta'&0\\
0&1&0&0\\
\gamma'&\leftidx{^t}{v_2}{}&\delta'&0\\
v_3&k&v_4&1
\end{pmatrix}
\in \operatorname{Sp}(2g;\mathbf{Z})
\ \right|\
\begin{array}{c}
\alpha', \beta', \gamma', \delta'\in M(g-1;\mathbf{Z}),\\
v_1,v_2,v_3,v_4\in \mathbf{Z}^{g-1}, k\in\mathbf{Z}
\end{array}
\right\}.
\]
\begin{proposition}\label{generator}
When $g\ge1$, $\Gamma_g(p_2)$ is generated by $\iota(i_S(\mathcal{M}_S))$ and $ \iota(a^2)$.
\end{proposition}
\begin{proof}
First, we show that $\Gamma_g(p_2)$ is generated by $\iota(i_S(\mathcal{M}_S))$ and $\Gamma_g[2]$. Since an element $\sigma\in\Gamma_g(p_2)$ preserves the homology class $B_g\in H_1(\Sigma_{g,r};\mathbf{Z}/2\mathbf{Z})$, it can be written in the form
\[
\sigma\equiv
\begin{pmatrix}
\alpha'&\leftidx{^t}{v_1}{}&\beta'&0\\
0&1&0&0\\
\gamma'&\leftidx{^t}{v_2}{}&\delta'&0\\
v_3&k&v_4&1
\end{pmatrix}
\operatorname{mod} 2.\]
Hence there exists $\sigma_0\in\iota(i_S(\mathcal{M}_S))$ such that
\[\sigma_0\equiv \sigma \ \operatorname{mod} 2, \]
so that $\Gamma_g(p_2)$ is generated by $\iota(i_S(\mathcal{M}_S))$ and $\Gamma_g[2]$.
Next, we describe the generators of $\Gamma_g[2]$ given in Igusa\cite{igusa1964grt}. We denote by $I_n$ the unit matrix of order $n$, and by $e_{ij}$ the $2g$-square matrix with 1 at the $(i,j)$-th entry and 0 elsewhere. As was shown in Igusa\cite{igusa1964grt}, $\Gamma_g[2]$ is generated by
\begin{align*}
\alpha_{ij}=&I_{2g}+2e_{ij}-2e_{g+j,g+i}&& 1\le i,j\le g,\ i\ne j, \\
\alpha_{ii}=&I_{2g}-2e_{ii}-2e_{i+g,i+g}&& 1\le i\le g, \\
\beta_{ij}=&I_{2g}+2e_{i,j+g}+2e_{j,i+g}&& 1\le i<j\le g, \\
\beta_{ii}=&I_{2g}+2e_{i,i+g}&& 1\le i\le g, \\
\gamma_{ij}=&\leftidx{^t}{\beta_{ij}}{}&&1\le i\le j\le g.
\end{align*}
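As a quick numerical sanity check (not part of the proof), one can verify that generators of this form are symplectic and congruent to the identity mod 2. A minimal sketch for a sample genus and sample indices, assuming the standard symplectic form in the basis ordering $(A_1,\ldots,A_g,B_1,\ldots,B_g)$:

```python
import numpy as np

g = 3  # illustrative genus

def J():
    """Matrix of the standard symplectic form."""
    Z, I = np.zeros((g, g), dtype=int), np.eye(g, dtype=int)
    return np.block([[Z, I], [-I, Z]])

def e(i, j):
    """2g-square matrix unit e_{ij}, 1-indexed as in the text."""
    m = np.zeros((2 * g, 2 * g), dtype=int)
    m[i - 1, j - 1] = 1
    return m

def in_level_two(M):
    """Check that M is symplectic and congruent to I mod 2."""
    return (np.array_equal(M.T @ J() @ M, J())
            and not np.any((M - np.eye(2 * g, dtype=int)) % 2))

I2g = np.eye(2 * g, dtype=int)
alpha_12 = I2g + 2 * e(1, 2) - 2 * e(g + 2, g + 1)
beta_12 = I2g + 2 * e(1, 2 + g) + 2 * e(2, 1 + g)
gamma_12 = beta_12.T
```
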
To prove the proposition, it suffices to show that these matrices are in the subgroup of $\Gamma_g(p_2)$ generated by $\iota(i_S(\mathcal{M}_S))$ and $\iota(a^2)$. The matrices
\[
\alpha_{ij} (1\le i\le g-1, 1\le j\le g),\
\beta_{ij} (1\le i\le j\le g-1)\text{, and}\
\gamma_{ij} (1\le i\le j\le g)
\]
are clearly in $\iota(i_S(\mathcal{M}_S))$. Choose oriented simple closed curves $C_i, C'_i, C_{ij}, C'_{ij}, C''_{ij}\subset \Sigma_{g,r}$ such that $[C_i]=A_i$, $[C'_i]=B_i$, $[C_{ij}]=A_i+A_j$, $[C'_{ij}]=B_i+B_j$, $[C''_{ij}]=A_i+B_j$. Denote the Dehn twist along a simple closed curve $C$ by $T_C$. Then the matrices
\[
\alpha_{gj} (1\le j\le g-1) \ \text{ and }\ \beta_{ig} (1\le i\le g-1)
\]
are written as $\iota(T_{C''_{gi}}^2T_{C'_i}^{-2}T_{C_g}^{-2})$ and $\iota(T_{C_{ig}}^2T_{C_i}^{-2}T_{C_g}^{-2})$ respectively. Clearly $\iota(T_{C'_i}^2)$ and $\iota(T_{C_i}^2)$ are in $\iota(i_S(\mathcal{M}_S))$, and we have $\iota(T_{C_g}^2)=\iota(a^2)$. Denote the two boundary components of $S$ by $S_1$ and $S_2$. For any two arcs $l_1,l_2:[0,1]\to S$ that satisfy $l_1(0)=l_2(0)\in S_1$ and $l_1(1)=l_2(1)\in S_2$, there exists $\varphi\in \mathcal{M}_S$ such that
\[\varphi l_1=l_2.\]
Choosing $C''_{gi}$ and $C_{ig}$ such that $\sharp(C''_{gi}\cap S_1)=\sharp(C_{ig}\cap S_1)=1$ and such that they intersect $S_1$ transversely, there exist $\psi, \psi'\in i_S(\mathcal{M}_S)$ that satisfy $[\psi(C''_{gi})]=[\psi'({C_{ig}})]=A_g$. Thus we have
\[\psi T_{C''_{gi}}^2\psi^{-1}=\psi'T_{C_{ig}}^2{\psi'}^{-1}=a^2.\]
This proves that the matrices $\alpha_{gj}$ and $\beta_{ig}$ are in the subgroup of $\Gamma_g(p_2)$ generated by $\iota(i_S(\mathcal{M}_S))$ and $\iota(a^2)$. Finally, the matrices $\alpha_{gg}$ and $\beta_{gg}$ satisfy $\alpha_{gg}=\iota(T_{C''_{gg}}^2)\beta_{gg}\gamma_{gg}^{-1}$ and $\beta_{gg}=\iota(a^2)$. Hence $\alpha_{gg}$ and $\beta_{gg}$ are also in the subgroup, as was to be shown.
\end{proof}
Using Proposition \ref{generator}, we now calculate the first homology group $H_1(\Gamma_g(p_2);\mathbf{Z})$.
\begin{proposition}\label{Gamma(p_2)}
When $g\ge4$,
\[
H_1(\Gamma_g(p_2);\mathbf{Z})\cong \mathbf{Z}/2\mathbf{Z}.
\]
\end{proposition}
\begin{proof}
Powell \cite{powell1978ttm} proved that $H_1(\mathcal{M}_g;\mathbf{Z})=0$ when $g\ge3$. More generally, Harer \cite{harer1983shg} proved that $H_1(\mathcal{M}_{g,r};\mathbf{Z})=0$ when $g\ge3$ for any $r$. Hence the first homology $H_1(\mathcal{M}_S;\mathbf{Z})$ vanishes, since the genus of $S$ is at least 3. We have
\[i_S(\mathcal{M}_S)=\{0\}\subset H_1(\mathcal{M}_g(p_2);\mathbf{Z}).\]
Since we proved that the group $\Gamma_g(p_2)$ is generated by $\iota(i_S(\mathcal{M}_S))$ and $\iota(a^2)$ in Proposition \ref{generator}, the homology group $H_1(\Gamma_g(p_2);\mathbf{Z})$ is generated by $[\iota(a^2)]$.
Next, we construct a surjective homomorphism $\Gamma_g(p_2)\to\mathbf{Z}/2\mathbf{Z}$. Since any $\sigma=(\sigma_{ij})\in\Gamma_g(p_2)$ preserves the homology class $B_g\in H_1(\Sigma_{g,r};\mathbf{Z}/2\mathbf{Z})$, we have
\[\sigma_{gi}\equiv \delta_{ig}\text{, and \ } \sigma_{i\,2g}\equiv \delta_{i\,2g}\ \operatorname{mod} 2,\]
where $\delta$ is the Kronecker delta. Then for $\sigma, \sigma'\in\Gamma_g(p_2)$, the $(g,2g)$-th entry of $\sigma\sigma'$ satisfies
\[(\sigma\sigma')_{g\,2g}=\sum_{i=1}^{2g}\sigma_{gi}\sigma'_{i\,2g}\equiv \sigma_{g\,2g}+\sigma'_{g\,2g}\ \operatorname{mod} 4.\]
Hence we have the homomorphism
\[
\begin{array}{cccc}
\Psi:&\Gamma_g(p_2)&\to&\mathbf{Z}/2\mathbf{Z}\\
&\sigma&\mapsto& \displaystyle\frac{\sigma_{g\,2g}}{2}.
\end{array}
\]
Since $\Psi(\iota(a^2))=1$, we have $[\iota(a^2)]\ne0\in H_1(\Gamma_g(p_2);\mathbf{Z})$.
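The mod 4 congruence behind $\Psi$ depends only on the mod 2 constraints on the $g$-th row and the last column, which can be checked numerically with random integer matrices satisfying those parities (an illustrative sanity check, not part of the proof):

```python
import random

def random_constrained(g):
    """Random integer matrix whose g-th row is congruent to the g-th
    standard row vector and whose last column is congruent to the last
    standard column vector, mod 2 (as for elements of Gamma_g(p_2))."""
    n = 2 * g
    M = [[random.randint(-9, 9) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        # add 1 whenever the parity disagrees with the Kronecker delta
        M[g - 1][i] += (M[g - 1][i] - (1 if i == g - 1 else 0)) % 2
        M[i][n - 1] += (M[i][n - 1] - (1 if i == n - 1 else 0)) % 2
    return M

def entry(A, B, row, col):
    """(row, col) entry of the matrix product A @ B (0-indexed)."""
    return sum(A[row][k] * B[k][col] for k in range(len(A)))

g = 3
random.seed(0)
ok = all(
    (entry(A, B, g - 1, 2*g - 1) - A[g - 1][2*g - 1] - B[g - 1][2*g - 1]) % 4 == 0
    for A, B in ((random_constrained(g), random_constrained(g))
                 for _ in range(100))
)
```
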
Finally, to complete the proof it suffices to show that $2[\iota(a^2)]=0$. Apply the Lyndon-Hochschild-Serre spectral sequence to the group extension
\[1\to \mathcal{I}_{g,r}\to \mathcal{M}_{g,r}(p_2)\to\Gamma_g(p_2)\to 0,\]
then we have
\[
H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}\to H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z})\to H_1(\Gamma_g(p_2);\mathbf{Z})\to 0.
\]
Denote by $D$ and $D'$ the simple closed curves as shown in Figure \ref{t_dt_d'}.
\begin{figure}[htbp]
\begin{center}
\includegraphics{t_d.eps}
\end{center}
\caption{}
\label{t_dt_d'}
\end{figure}
Denote by $c_1$, $c_2$, and $c_3$ the Dehn twists along the simple closed curves $C_1$, $C_2$, and $C_3$ shown in Figure \ref{abc}, respectively. Since $c_1$ and $c_2$ are in $i_S(\mathcal{M}_S)$, we have $[c_1]=[c_2]=0\in H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z})$. By the chain relation, we have $T_DT_{D'}=(c_1c_2c_3)^4$.
\begin{figure}[htbp]
\begin{center}
\includegraphics{abc.eps}
\end{center}
\caption{}
\label{abc}
\end{figure}
Using the braid relations $c_1c_3=c_3c_1$, $c_1c_2c_1=c_2c_1c_2$, and $c_2c_3c_2=c_3c_2c_3$, we have
\[
[T_DT_{D'}]=[(c_1c_2c_3)^4]=[c_3c_2c_1^2c_2c_3]=[c_3c_2c_1^2c_2^{-1}c_3^{-1}]+[c_3c_2^2c_3^{-1}]+[c_3^2]
\in H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z}).\]
Since $c_3c_2c_1^2c_2^{-1}c_3^{-1}$ and $c_3c_2^2c_3^{-1}$ are the squares of the Dehn twists along the simple closed curves $c_3c_2(C_1)$ and $c_3(C_2)$, we have
\[
[c_3c_2c_1^2c_2^{-1}c_3^{-1}]=[c_3c_2^2c_3^{-1}]=[c_3^2]=[a^2].
\]
Hence $[T_DT_{D'}^{-1}]=[T_DT_{D'}]+[T_{D'}^{-2}]=2[a^2]$. Since $T_DT_{D'}^{-1}\in\mathcal{I}_{g,r}$, it follows that $2[\iota(a^2)]=[\iota(T_DT_{D'}^{-1})]=0\in H_1(\Gamma_g(p_2);\mathbf{Z})$.
This proves the proposition.
\end{proof}
\subsection{The coinvariant $H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}$}\label{subsection:torelli}
To calculate the first homology group of the symmetric mapping class groups, we compute $H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}$.
\begin{lemma}\label{torelli}
When $g\ge4$,
\[
H_1(\mathcal{I}_g;\mathbf{Z})_{\mathcal{M}_g(p_2)}\cong
\begin{cases}
\mathbf{Z}/2\mathbf{Z},\hspace{0.3cm}&\text{if }g:\text{odd},\\
0,\hspace{0.3cm}&\text{if }g:\text{even},
\end{cases}
\]
\[
H_1(\mathcal{I}_{g,1};\mathbf{Z})_{\mathcal{M}_{g,1}(p_2)}\cong\mathbf{Z}/2\mathbf{Z}.
\]
Moreover, $H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}$ is generated by the class of $T_DT_{D'}^{-1}\in\mathcal{I}_{g,r}$ for $r=0,1$.
\end{lemma}
Before proving the lemma, we review the space of boolean polynomials. Let $H$ denote the first homology group $H_1(\Sigma_{g,r};\mathbf{Z})$ of the surface, as before. Consider the polynomial ring with coefficients in $\mathbf{Z}/2\mathbf{Z}$ generated by symbols $\bar{x}$ for $x\in H\otimes \mathbf{Z}/2\mathbf{Z}$. Denote by $J$ the ideal in this polynomial ring generated by
\[\overline{x+y}-(\bar{x}+\bar{y}+x\cdot y),\hspace{0.4cm}\bar{x}^2-\bar{x},\hspace{0.4cm}\text{ for } x,y\in H\otimes \mathbf{Z}/2\mathbf{Z}.\]
The space of boolean polynomials of degree at most $n$ is defined by
\[B^n=\frac{M_n}{J\cap M_n},\]
where $M_n$ is the module of all polynomials of degree at most $n$. Note that $B^n$ is isomorphic to the $\mathbf{Z}/2\mathbf{Z}$ module of all square-free polynomials of degree at most $n$ generated by $\{\bar{A}_i, \bar{B}_i\}_{i=1}^g$.
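Under this identification, $B^3$ has a basis of square-free monomials of degree at most $3$ in the $2g$ generators, so $\dim B^3 = 1 + 2g + \binom{2g}{2} + \binom{2g}{3}$. A brute-force enumeration sketch (illustrative):

```python
from itertools import combinations
from math import comb

def dim_B(g, n):
    """Dimension of B^n over Z/2: count the square-free monomials of
    degree at most n in the 2g generators A_i, B_i."""
    gens = ([f"A{i}" for i in range(1, g + 1)]
            + [f"B{i}" for i in range(1, g + 1)])
    return sum(1 for d in range(n + 1) for _ in combinations(gens, d))

dims = {g: dim_B(g, 3) for g in (2, 3, 4)}
```
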
Denote $B^3$ by $B_{g,1}^3$, and for $\alpha=\sum_{i=1}^g\bar{A}_i\bar{B}_i\in B^2$, the cokernel of
\[
\begin{array}{ccc}
B^1&\to&B^3\\
x&\mapsto&\alpha x
\end{array}
\]
by $B_{g,0}^3$. The action of $\mathcal{M}_{g,r}$ on $H$ induces an action on $B_{g,r}^3$. Birman-Craggs\cite{birman1978muim} defined a family of homomorphisms $\mathcal{I}_g\to \mathbf{Z}/2\mathbf{Z}$. Johnson\cite{johnson1980qfa} showed that these homomorphisms give a surjective homomorphism of $\mathcal{M}_{g,r}$ modules
\[
\mu: \mathcal{I}_{g,r}\to B_{g,r}^3.
\]
For $r=0,1$, Johnson~\cite{johnson1985stg} showed that the induced homomorphism $H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}[2]}\to B_{g,r}^3$ is an isomorphism.
\begin{proof}[Proof of Lemma \ref{torelli}]
Since $\mu$ induces an isomorphism of $\mathcal{M}_{g,r}$-modules, we have
\[
H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}\cong (B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}.
\]
Hence it suffices to compute $(B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}$ to prove the lemma. Let $S'\subset S$ be the subsurface of genus $g-1$ shown in Figure \ref{subsurface}, and let $\mathcal{I}_{S'}$ be the Torelli group of $S'$, that is, the subgroup of $\mathcal{M}_{S'}$ which acts trivially on $H_1(S';\mathbf{Z})$.
Consider the homomorphism
\[
(\mathcal{I}_{S'})_{\mathcal{M}_{S'}}\to (\mathcal{I}_{g,r})_{\mathcal{M}_{g,r}(p_2)}\cong (B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}
\]
induced by the inclusion $S'\to\Sigma_{g,r}$.
Since $(\mathcal{I}_{S'})_{\mathcal{M}_{S'}}=0$ (Johnson\cite{johnson1979hsa}), the image of the homomorphism is trivial. Thus we have
\[\bar{1}=\bar{X}=\bar{X}\bar{Y}=\bar{X}\bar{Y}\bar{Z}=0, \ \text{ for } \{X,Y,Z\}\subset\{A_1,A_2,\cdots,A_{g-1},B_1,B_2\cdots,B_{g-1}\}.\]
For $X=A_g, B_g$, we have
\begin{gather*}
(I_{2g}+e_{1,g+1})(\bar{B}_1\bar{X})=(\bar{B}_1+\bar{A}_1+1)\bar{X},\
(I_{2g}+e_{g+1,1})(\bar{A}_1\bar{X})=(\bar{A}_1+\bar{B}_1+1)\bar{X},\\
\text{ and }\ (I_{2g}+e_{1,2}-e_{g+2,g+1})(\bar{A}_2\bar{X})=(\bar{A}_2+\bar{A}_1)\bar{X}.
\end{gather*}
Hence $\bar{X}=\bar{A}_1\bar{X}=\bar{B}_1\bar{X}=0\in(B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}$. For $1< i< g$, we have
\[(I_{2g}+e_{g+i,1}+e_{g+1,i})(\bar{A}_1\bar{X})=(\bar{A}_1+\bar{B}_i)\bar{X}
,\text{ and }\ (I_{2g}+e_{i,g+1}+e_{1,g+i})(\bar{B}_1\bar{X})=(\bar{B}_1+\bar{A}_i)\bar{X}.\]
Hence $\bar{B}_i\bar{X}=\bar{A}_i\bar{X}=0\in(B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}$. If we put $\bar{X}=\bar{A}_g\bar{B}_g$, we have $\bar{Y}\bar{A}_g\bar{B}_g=0$ in the same way for $\bar{Y}\in\{1,\bar{A}_1,\bar{A}_2,\cdots,\bar{A}_{g-1},\bar{B}_1,\bar{B}_2,\cdots,\bar{B}_{g-1}\}$. For $X=A_g, B_g$, and any $i,j$ such that $1\le i,j<g$, $i\ne j$, we have
\begin{gather*}
(I_{2g}+e_{g+j,j})(\bar{A}_i\bar{A}_j\bar{X})=\bar{A}_i\bar{A}_j\bar{X}+\bar{A}_i\bar{B}_j\bar{X}+\bar{A}_i\bar{X}, \ (I_{2g}+e_{j,g+j})(\bar{A}_i\bar{B}_j\bar{X})=\bar{A}_i\bar{B}_j\bar{X}+\bar{A_i}\bar{A}_j\bar{X}+\bar{A}_i\bar{X},\\
(I_{2g}+e_{g+j,j})(\bar{B}_i\bar{A}_j\bar{X})=\bar{B}_i\bar{A}_j\bar{X}+\bar{B}_i\bar{B}_j\bar{X}+\bar{B}_i\bar{X},\
(I_{2g}+e_{g+i,g}+e_{2g,g+i})(\bar{A}_i\bar{A}_g\bar{B}_g)=\bar{A}_i\bar{A}_g\bar{B}_g+\bar{A}_i\Bar{B}_i\bar{B}_g,\\
\text{and }\ (I_{2g}-e_{1,1}-e_{g+1,g+1}+e_{i,1}+e_{1,i}+e_{g+i,g+1}+e_{g+1,g+i})(\bar{A}_1\bar{B}_1\bar{A}_g)=\bar{A}_i\bar{B}_i\bar{A}_g.
\end{gather*}
Hence $\bar{A}_i\bar{B}_j\bar{X}=\bar{A}_i\bar{A}_j\bar{X}=\bar{B}_i\bar{B}_j\bar{X}=\bar{A}_i\bar{B}_i\bar{B}_g=0$, and $\bar{A}_1\bar{B}_1\bar{A}_g=\bar{A}_i\bar{B}_i\bar{A}_g\in (B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}$.
Therefore $(B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}$ is either a cyclic group of order 2 generated by $\bar{A}_1\bar{B}_1\bar{A}_g$ or trivial. For $r=0$, $B_{g,0}^3$ has the relation
\[
\alpha\bar{A}_g=(\sum_{i=1}^g\bar{A}_i\bar{B}_i)\bar{A}_g=0,
\]
so that $(g-1)\bar{A}_1\bar{B}_1\bar{A}_g=0\in(B_{g,0}^3)_{\mathcal{M}_g(p_2)}$. Since $g-1$ is odd when $g$ is even, this shows that $(B_{g,0}^3)_{\mathcal{M}_{g}(p_2)}$ is trivial when $g$ is even.
Next we consider the case where $g$ is odd or $r=1$. Let $S_n$ be the symmetric group of degree $n$ and $\operatorname{sign}(s)$ the sign of $s\in S_n$. Denote by $\Lambda^nH$ the image of the homomorphism
\[
\begin{array}{cccc}
\lambda:&H^{\otimes n}&\to&H^{\otimes n}\\
&x_1\otimes x_2\otimes\cdots\otimes x_n&\mapsto&\displaystyle\sum_{s\in S_n}\operatorname{sign}(s) x_{s(1)}\otimes x_{s(2)}\otimes\cdots\otimes x_{s(n)}.
\end{array}
\]
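As a quick sanity check of the antisymmetrizer $\lambda$ on elementary tensors, the signed sum over permutations can be computed directly (a sketch; tensors are encoded as tuples of basis indices, and the helper name is hypothetical):

```python
from itertools import permutations

def antisymmetrize(indices):
    """Image of the elementary tensor x_1 ⊗ ... ⊗ x_n under λ,
    returned as a dict {permuted index tuple: integer coefficient}."""
    result = {}
    for perm in permutations(range(len(indices))):
        # sign of the permutation, computed from its inversion count
        inversions = sum(1 for i in range(len(perm))
                         for j in range(i + 1, len(perm)) if perm[i] > perm[j])
        key = tuple(indices[p] for p in perm)
        result[key] = result.get(key, 0) + (-1) ** inversions
    return {k: v for k, v in result.items() if v != 0}

print(antisymmetrize((0, 0, 1)))       # {} : λ kills tensors with a repeated factor
print(len(antisymmetrize((0, 1, 2))))  # 6 signed terms for distinct factors
```

In particular $\lambda(x\otimes x\otimes y)=0$, which is why the image $\Lambda^3 H$ behaves like an exterior power.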
Denote by $V_1$ and $V_0$ the module $\Lambda^3H$ and the cokernel of
\[
\begin{array}{ccc}
H&\to&\Lambda^3H\\
X&\mapsto&\sum_{i=1}^g A_i\wedge B_i\wedge X,
\end{array}
\]
respectively. Johnson~\cite{johnson1980aqm} showed that
\[
\begin{array}{ccc}
\displaystyle\frac{B_{g,r}^3}{B^2}&\to &V_r\otimes\mathbf{Z}/2\mathbf{Z}\\[3pt]
\bar{X}\bar{Y}\bar{Z}&\mapsto &X\wedge Y\wedge Z,
\end{array}
\]
is a well-defined isomorphism of $\mathcal{M}_{g,r}$-modules. Now we have a homomorphism
\[
\begin{array}{cccc}
(B_g\cdot)C:&(B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}&\to &\mathbf{Z}/2\mathbf{Z}\\[3pt]
&\bar{X}\bar{Y}\bar{Z}&\mapsto &(X\cdot Y)B_g\cdot Z+(Y\cdot Z)B_g\cdot X+(Z\cdot X)B_g\cdot Y.
\end{array}
\]
Here it should be remarked that the intersection number with $B_g$, $(B_g\cdot):H\otimes\mathbf{Z}/2\mathbf{Z}\to\mathbf{Z}/2\mathbf{Z}$, is $\mathcal{M}_{g,r}(p_2)$-invariant, so that $(B_g\cdot)C$ is well defined. Since $(B_g\cdot)C (\bar{A}_1\bar{B}_1\bar{A}_g)=1$, it is surjective. Hence $(B_{g,r}^3)_{\mathcal{M}_{g,r}(p_2)}$ is a cyclic group of order 2 with generator $\bar{A}_1\bar{B}_1\bar{A}_g$. Johnson~\cite{johnson1980qfa} computed $\mu(T_DT_{D'}^{-1})=\bar{A}_1\bar{B}_1(\bar{A}_g+1)$, so that $T_DT_{D'}^{-1}$ is a generator of $H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}$.
\end{proof}
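The evaluation $(B_g\cdot)C(\bar{A}_1\bar{B}_1\bar{A}_g)=1$ used in the proof above can be sanity-checked with a few lines of code (a sketch over $\mathbf{Z}/2\mathbf{Z}$, with $A_i, B_i$ as standard basis vectors and the mod-2 symplectic intersection form; all helper names are hypothetical):

```python
g = 3  # any genus works for this check

def basis(i):
    """One-hot basis vector of (Z/2Z)^{2g}: indices 0..g-1 are A_1..A_g,
    indices g..2g-1 are B_1..B_g."""
    v = [0] * (2 * g)
    v[i] = 1
    return v

A = [basis(i) for i in range(g)]
B = [basis(g + i) for i in range(g)]

def dot(x, y):
    """Mod-2 symplectic intersection number x . y."""
    return sum(x[i] * y[g + i] + x[g + i] * y[i] for i in range(g)) % 2

def C(X, Y, Z):
    """(X.Y) B_g.Z + (Y.Z) B_g.X + (Z.X) B_g.Y  mod 2."""
    Bg = B[g - 1]
    return (dot(X, Y) * dot(Bg, Z) + dot(Y, Z) * dot(Bg, X)
            + dot(Z, X) * dot(Bg, Y)) % 2

print(C(A[0], B[0], A[g - 1]))  # 1: \bar{A}_1\bar{B}_1\bar{A}_g hits the generator
print(C(A[0], B[0], B[g - 1]))  # 0
```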
Now, we prove that $H_1(\hat{\mathcal{M}}_{(g,r)};\mathbf{Z})$ and $H_1(\mathcal{M}_{g,r};\mathbf{Z})$ are cyclic groups of order at most 4.
We need the following lemma.
\begin{lemma}
Let $\hat{b}:\hat{\mathcal{M}}_{(g,1)}(p_2)\to\hat{\mathcal{M}}_{(g)}(p_2)$ be a homomorphism induced by an obvious embedding $\Sigma_{2g-1,2}\to\Sigma_{2g-1}$. Then $\hat{b}$ is surjective.
\end{lemma}
\begin{proof}
By the obvious embedding $\Sigma_{g,1}\to\Sigma_{g}$, we have a surjective homomorphism $b:\mathcal{M}_{g,1}(p_2)\to\mathcal{M}_g(p_2)$. Since the diagram
\[
\begin{CD}
\hat{\mathcal{M}}_{(g,1)}(p_2)@>\hat{b}>>\hat{\mathcal{M}}_{(g)}(p_2)\\
@V P VV @V P VV\\
\mathcal{M}_{g,1}(p_2)@>b>>\mathcal{M}_{g}(p_2)
\end{CD}
\]
commutes and $b$ is surjective, it suffices to show that $\operatorname{Ker} P\subset\hat{\mathcal{M}}_{(g)}(p_2)$ is included in $\operatorname{Im}\hat{b}$. Recall that $\operatorname{Ker} P$ consists of the isotopy classes of the deck transformations.
\begin{figure}[htbp]
\begin{center}
\includegraphics{a_g.eps}
\end{center}
\caption{The simple closed curves $A_g$ and $A'_g$.}
\label{fig:a_g}
\end{figure}
Cut the surface $\Sigma_{g,r}$ along the two simple closed curves $A_g$, $A'_g$ in Figure \ref{fig:a_g}. Then we have the subsurface $S_0$ of genus $g-1$ and the other subsurface $S'_0$ of genus $0$. We can construct a diffeomorphism $\hat{f}_0\in C(p)\cap\operatorname{Diff}_+(\Sigma_{2g-1,2},\partial\Sigma_{2g-1,2})$ which has the following properties:
\begin{enumerate}
\item $\hat{f}_0|_{p^{-1}(S_0)}$ is the restriction of the deck transformation $t\ne id$,
\item $\hat{f}_0|_{p^{-1}(S'_0)}={T'}_{A_g}{T'}_{A'_g}^{-1}$, where ${T'}_{A_g}$ and ${T'}_{A'_g}$ are the half Dehn twists along $A_g$ and $A'_g$, respectively.
\end{enumerate}
Then the image of $[\hat{f}_0]$ under $\hat{\mathcal{M}}_{(g,1)}(p_2)\to\hat{\mathcal{M}}_{(g)}(p_2)$ equals the deck transformation $t$. This proves the lemma.
\end{proof}
In the proof of Proposition \ref{Gamma(p_2)}, we have the exact sequence
\[
H_1(\mathcal{I}_{g,r};\mathbf{Z})_{\mathcal{M}_{g,r}(p_2)}\to H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z})\to H_1(\Gamma_g(p_2);\mathbf{Z})\to 0.
\]
By Proposition \ref{Gamma(p_2)} and Lemma \ref{torelli}, we obtain
\[
H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z})=\mathbf{Z}/2\mathbf{Z}\text{\ or \,}\mathbf{Z}/4\mathbf{Z}.
\]
In particular if genus $g$ is even,
\[
H_1(\mathcal{M}_g(p_2);\mathbf{Z})=\mathbf{Z}/2\mathbf{Z}.
\]
From the isomorphism $\hat{\mathcal{M}}_{(g,1)}(p_2)\cong\mathcal{M}_{g,1}(p_2)$ and the surjective homomorphism $\hat{b}:\hat{\mathcal{M}}_{(g,1)}(p_2)\to\hat{\mathcal{M}}_{(g)}(p_2)$, we have
\[
H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})=\mathbf{Z}/2\mathbf{Z}\text{\ or \,}\mathbf{Z}/4\mathbf{Z}
\]
for $r=0,1$.
\begin{remark}\label{rem:generator}
For $r=0,1$, pick a simple closed curve $c\subset \Sigma_{g,r}$.
If the intersection number $c\cdot B_g$ is odd, then $[T_c^2]\in H_1(\Gamma_g(p_2);\mathbf{Z})$ is a generator. Hence $[T_c^2]\in H_1(\mathcal{M}_{g,r}(p_2);\mathbf{Z})$ is also a generator, and the lift of $T_c^2$ is a generator of $H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})$.
If $c$ is included in the subsurface $S$, we have $[T_c^2]=0\in H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})$, by Proposition \ref{generator}.
\end{remark}
\section{A surjective homomorphism $\hat{\mathcal{M}}_{(g)}(p_2)\to\mathbf{Z}/4\mathbf{Z}$}\label{surj}
For a root of unity $\zeta$, we denote by $<\!\zeta\!>$ the cyclic group generated by $\zeta$.
In this section, we construct a surjective homomorphism
\[
e:\hat{\mathcal{M}}_{(g)}(p_2)\to<\!\sqrt{-1}\!>
\]
using the Schottky theta constant associated with the cover $p_2:\Sigma_{2g-1}\to\Sigma_g$ when $g\ge2$, to complete the proof of Theorem \ref{main-theorem}. In the following, suppose the genus satisfies $g\ge2$.
\subsection{The Jacobi variety and the Prym variety}
Endow the surface $\Sigma_g$ with the structure of a Riemann surface $R$. Then the covering map $p_2:\Sigma_{2g-1}\to\Sigma_g$ induces the structure of a Riemann surface $\hat{R}$ on the surface $\Sigma_{2g-1}$. In this subsection, we review the Jacobi variety of the Riemann surface $R$ and the Prym variety of the double unbranched cover $p_2:\hat{R}\to R$.
\begin{definition}
A $g$-characteristic is a row vector $m\in\mathbf{Z}^{2g}$. We denote $m=(m'|m'')$ where $m'=(m'_1,m'_2,\cdots,m'_g)$, $m''=(m''_1,m''_2,\cdots,m''_g)\in \mathbf{Z}^g$. We say that the $g$-characteristic $m$ is even (resp.\ odd) if $\sum_{i=1}^g m'_im''_i$ is even (resp.\ odd).
\end{definition}
We denote the Siegel upper half space of degree $g$ by $\mathfrak{S}_g$. For a $g$-characteristic $m=(m'|m'')\in \mathbf{Z}^{2g}$, $\tau\in \mathfrak{S}_g$, and $z\in \mathbf{C}^g$, the theta function $\theta_m$ is defined by
\[\theta_m(\tau, z):=\sum_{p\in\mathbf{Z}^{g}}\exp(\pi i\{(p+m'/2)\tau\leftidx{^t}{(p+m'/2)}{}+2(p+m'/2)\leftidx{^t}{(z+m''/2)}{}\}).\]
We denote $\theta_m(\tau, 0)$ simply by $\theta_m(\tau)$. Let $\Omega$ be the sheaf of holomorphic 1-forms on $R$. Choose a symplectic basis $\{A_i, B_i\}_{i=1}^g$ of $H_1(R;\mathbf{Z})$. It is known that under the homomorphism
\[
\begin{array}{cccc}
H_1(R;\mathbf{Z})&\to &H^0(R;\Omega)^*&:=\operatorname{Hom}(H^0(R;\Omega),\mathbf{C}),\\
c&\mapsto&(\omega\mapsto\int_c\omega)
\end{array}
\]
$H_1(R;\mathbf{Z})$ maps onto a lattice in $H^0(R;\Omega)^*$. The Jacobi variety of $R$ is defined by
\[J(R)=\frac{H^0(R;\Omega)^*}{H_1(R;\mathbf{Z})}.\]
A basis $\{\omega_i\}_{i=1}^g$ of $H^0(R;\Omega)$ that satisfies
\[\int_{A_j}\omega_i=
\begin{cases}
1,&\text{ if }i=j,\\
0,&\text{ if }i\ne j,
\end{cases}
\]
is called the normalized basis with respect to the symplectic basis $\{A_i, B_i\}_{i=1}^g$. For the normalized basis $\{\omega_i\}_{i=1}^g$, the $g$-square matrix
\[\tau=(\tau_{ij}), \hspace{0.5cm} \tau_{ij}=\int_{B_j}\omega_i\]
is known to be an element of the Siegel upper half space $\mathfrak{S}_g$, and is called the period matrix. For an even $g$-characteristic $m=(m'|m'')$ and the period matrix $\tau$, $\theta_{m}(\tau)$ is called the Riemann theta constant with $m$ associated with the compact Riemann surface $R$ and $\{A_i, B_i\}_{i=1}^g$.
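As a numerical sanity check of the theta constants (a $g=1$ sketch with the classical normalization $\theta_m(\tau,z)=\sum_p\exp(\pi i\{(p+m'/2)^2\tau+2(p+m'/2)(z+m''/2)\})$; the series is truncated, and the closed-form value $\pi^{1/4}/\Gamma(3/4)\approx1.0864$ at $\tau=\sqrt{-1}$ is classical):

```python
import cmath
import math

def theta(mp, mpp, tau, N=30):
    """Truncated g = 1 theta constant theta_{(m'|m'')}(tau), summing over |p| <= N."""
    return sum(cmath.exp(math.pi * 1j * ((p + mp / 2) ** 2 * tau
                                         + 2 * (p + mp / 2) * (mpp / 2)))
               for p in range(-N, N + 1))

tau = 1j  # a point of the Siegel upper half space S_1
print(abs(theta(0, 0, tau)))  # ≈ 1.0864
print(abs(theta(1, 1, tau)))  # ≈ 0: odd characteristics vanish at z = 0
```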
Denote the generator of the deck transformation group of the cover $\hat{R}\to R$ by $t: \hat{R}\to \hat{R}$, the $(-1)$-eigenspace of $t_*: H_1(\hat{R};\mathbf{Z})\to H_1(\hat{R};\mathbf{Z})$ by
\[
H_1(\hat{R};\mathbf{Z})^{-}=\{c\in H_1(\hat{R};\mathbf{Z})\ |\ t_*(c)=-c \},
\]
and the $(-1)$-eigenspace of $t^*: H^0(\hat{R};\Omega)\to H^0(\hat{R};\Omega)$ by
\[
H^0(\hat{R};\Omega)^{-}=\{\omega\in H^0(\hat{R};\Omega)\ |\ t^*(\omega)=-\omega \}.
\]
Under the homomorphism
\[
\begin{array}{cccc}
H_1(\hat{R};\mathbf{Z})&\to &H^0(\hat{R};\Omega)^*&:=\operatorname{Hom}(H^0(\hat{R};\Omega),\mathbf{C}),\\
c&\mapsto&(\omega\mapsto\int_c\omega)
\end{array}
\]
$H_1(\hat{R};\mathbf{Z})^-$ maps onto a lattice in $(H^0(\hat{R};\Omega)^{-})^*$.
\begin{definition}
The Prym variety $\operatorname{Prym}(\hat{R}, p_2)$ of the cover $p_2$ is defined by
\[\operatorname{Prym}(\hat{R}, p_2)=\frac{(H^0(\hat{R};\Omega)^{-})^*}{H_1(\hat{R};\mathbf{Z})^{-}}\subset J(\hat{R}).\]
\end{definition}
For a symplectic basis $\{A_i, B_i\} _{i=1}^g$ of $H_1(R;\mathbf{Z})$, we choose a basis of $H_1(\hat{R};\mathbf{Z})$ as follows. For $i=1,2,\cdots,g-1$, denote the two lifts of $A_i$ by $\hat{A}_i$ and $\hat{A}_{i+g}$, and the two lifts of $B_i$ by $\hat{B}_i$ and $\hat{B}_{i+g}$ such that
\[
\hat{A}_i\cdot \hat{B}_i=1.
\]
The lifts of $2A_g$ and $B_g$ are uniquely determined, and we denote them by $\hat{A}_g$ and $\hat{B}_g$, respectively. Then $\{\hat{A}_i-\hat{A}_{g+i}, \hat{B}_i-\hat{B}_{g+i}\}_{i=1}^{g-1}$ forms a basis of $H_1(\hat{R};\mathbf{Z})^{-}$. Moreover, this basis satisfies
\begin{gather*}
(\hat{A}_i-\hat{A}_{g+i})\cdot(\hat{A}_j-\hat{A}_{g+j})=0,\ (\hat{B}_i-\hat{B}_{g+i})\cdot(\hat{B}_j-\hat{B}_{g+j})=0,\text{ and}\\
(\hat{A}_i-\hat{A}_{g+i})\cdot(\hat{B}_j-\hat{B}_{g+j})=2\delta_{i\,j}.
\end{gather*}
Therefore, the action of $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$ on the basis $\{\hat{A}_i-\hat{A}_{g+i}, \hat{B}_i-\hat{B}_{g+i}\}_{i=1}^{g-1}$ induces the homomorphism
\[\tilde{\iota}:\hat{\mathcal{M}}_{(g)}(p_2)\to\operatorname{Sp}(2g-2;\mathbf{Z}).\]
For the above symplectic basis $\{\hat{A}_i, \hat{B}_i\}_{i=1}^{2g-1}$, choose the normalized basis $\{\hat{\omega}_i\}_{i=1}^{2g-1}$ of $H^0(\hat{R};\Omega)$, then $\{(\hat{\omega}_i-\hat{\omega}_{g+i})/2\}_{i=1}^{g-1}$ is a basis of $H^0(\hat{R};\Omega)^{-}$. It is known that the $(g-1)$-square matrix\[\tilde{\tau}=(\tilde{\tau}_{ij}), \hspace{0.5cm} \tilde{\tau}_{ij}=\int_{\hat{B}_j-\hat{B}_{g+j}}\frac{\hat{\omega}_i-\hat{\omega}_{g+i}}{2}\]
is an element of the Siegel upper half space $\mathfrak{S}_{g-1}$. We call $\tilde{\tau}$ the period matrix of the Prym variety.
\begin{definition}
For an even $(g-1)$-characteristic $\tilde{m}=(\tilde{m}'|\tilde{m}'')$ and the period matrix $\tilde{\tau}$ of $\operatorname{Prym}(\hat{R}, p_2)$, $\theta_{\tilde{m}}(\tilde{\tau})$ is called the Schottky theta constant with $\tilde{m}$ associated with the cover $p_2:\hat{R}\to R$ and $\{\hat{A}_i, \hat{B}_i\}_{i=1}^{2g-1}$.
\end{definition}
\subsection{Definition of the homomorphism $e:\hat{\mathcal{M}}_{(g)}(p_2)\to<\!\sqrt{-1}\!>$}
In this subsection, we give the definition of the homomorphism $e:\hat{\mathcal{M}}_{(g)}(p_2)\to<\!\sqrt{-1}\!>$.
Let $\tau$ be the period matrix of $R$, and $\tilde{\tau}$ the period matrix of the cover $p_2$. Consider the function
\[
\Phi_{m, n}^{\tilde{m}}(\tilde{\tau}, \tau)=\frac{\theta_{\tilde{m}}^2(\tilde{\tau})}{\theta_m(\tau)\theta_n(\tau)}
\]
for even $g$-characteristics $m,n$ and an even $(g-1)$-characteristic $\tilde{m}$. For a generic Riemann surface and a double unbranched covering space, $\Phi_{m,n}^{\tilde{m}}(\tilde{\tau}, \tau)$ is known to be a nonzero complex number (Fay~\cite{fay1973tfr}). For a $g$-square matrix $M=(m_{ij})$, denote the row vector obtained by taking the diagonal entries of $M$ by $M_0:=(m_{11}, m_{22}, \cdots, m_{gg})\in\mathbf{Z}^g$. For
$\sigma=
\begin{pmatrix}
\alpha&\beta\\
\gamma&\delta
\end{pmatrix}
\in\operatorname{Sp}(2g;\mathbf{Z})$ and a $g$-characteristic $m$,
we define
\[
\sigma\cdot m=m
\begin{pmatrix}
\leftidx{^t}{\alpha}{}&-\leftidx{^t}{\gamma}{}\\
-\leftidx{^t}{\beta}{}&\leftidx{^t}{\delta}{}
\end{pmatrix}
+((\leftidx{^t}{\beta}{}\alpha)_0|(\leftidx{^t}{\delta}{}\gamma)_0)\in \mathbf{Z}^{2g}.
\]
Note that this is not an action of $\operatorname{Sp}(2g;\mathbf{Z})$ on $\mathbf{Z}^{2g}$, and that this definition is different from that of Igusa\cite{igusa1964grt}.
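A $g=1$ computation makes the failure of the action property concrete (a sketch; for $g=1$, $\operatorname{Sp}(2;\mathbf{Z})=\operatorname{SL}(2;\mathbf{Z})$ and the blocks $\alpha,\beta,\gamma,\delta$ are scalars, so $\sigma\cdot m=(m'\alpha-m''\beta+\beta\alpha\ |\ -m'\gamma+m''\delta+\delta\gamma)$):

```python
def act(sigma, m):
    """sigma . m for g = 1, with sigma = (alpha, beta, gamma, delta) in SL(2; Z)
    and m = (m', m'') a 1-characteristic, following the definition above."""
    a, b, c, d = sigma
    mp, mpp = m
    return (mp * a - mpp * b + b * a, -mp * c + mpp * d + d * c)

S = (0, -1, 1, 0)   # [[0, -1], [1, 0]]
T = (1, 1, 0, 1)    # [[1, 1], [0, 1]]
ST = (0, -1, 1, 1)  # the matrix product S.T

m = (0, 0)
print(act(ST, m))         # (0, 1)
print(act(S, act(T, m)))  # (0, -1): (ST).m != S.(T.m), so this is not an action,
                          # although the two results agree mod 2
```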
For $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$, denote $P_2(\hat{\varphi})$ by $\varphi\in\mathcal{M}_g(p_2)$. For an even $(g-1)$-characteristic $\tilde{m}$, choose the $g$-characteristics $m=(\tilde{m}',0|\tilde{m}'',1)$ and $n=(\tilde{m}',0 |\tilde{m}'',0)$. Define the map $d_{\tilde{m},(\tilde{\tau}, \tau)}:\hat{\mathcal{M}}_{(g)}(p_2)\to\mathbf{C}$ by
\[
d_{\tilde{m}, (\tilde{\tau}, \tau)}(\hat{\varphi}):=
\frac{\Phi_{m,n}^{\tilde{m}}(\tilde{\tau}, \tau)}{\Phi_{\iota(\varphi)\cdot m,\iota(\varphi)\cdot n}^{\tilde{\iota}(\hat{\varphi})\cdot \tilde{m}}(\tilde{\tau}, \tau)}.
\]
In the next subsection, we will prove that $d_{\tilde{m}}=d_{\tilde{m}, (\tilde{\tau}, \tau)}$ is independent of the period matrices $\tilde{\tau}$ and $\tau$, and that the image of $d_{\tilde{m}}$ equals $<\!-1\!>$. For
$\sigma=
\begin{pmatrix}
\alpha&\beta\\
\gamma&\delta
\end{pmatrix}\in\operatorname{Sp}(2g;\mathbf{Z})$ and $\tau\in\mathfrak{S}_g$, we define the action of $\operatorname{Sp}(2g;\mathbf{Z})$ on $\mathfrak{S}_g$ by
\[
\sigma\cdot\tau:=(\delta\tau+\gamma)(\beta\tau+\alpha)^{-1}.
\]
For $\sigma=\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}\in\operatorname{Sp}(2g;\mathbf{Z})$, it is known that the theta function has the transformation law (see Igusa~\cite{igusa1972tf})
\[
\theta_{\sigma\cdot m}(\sigma\cdot\tau)=\gamma_m(\sigma)\det(\beta\tau+\alpha)^{-\frac{1}{2}}\theta_m(\tau),
\]
where $\gamma_m(\sigma)\in<\!\exp(\pi i/4)\!>$ is called the theta multiplier.
Now we can construct a homomorphism $e_{\tilde{m}}: \hat{\mathcal{M}}_{(g)}(p_2)\to <\!\sqrt{-1}\!>$ using $d_{\tilde{m}}$ and $\gamma_m$. For $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$ and an even $(g-1)$-characteristic $\tilde{m}$, define the map $e_{\tilde{m}}$ by
\[
e_{\tilde{m}}(\hat{\varphi}):=d_{\tilde{m}}(\hat{\varphi})\frac{\gamma_{\tilde{m}}^2(\tilde{\iota}(\hat{\varphi}))}{\gamma_m(\iota(\varphi))\gamma_n(\iota(\varphi))}.
\]
Note that $\gamma_{\tilde{m}}^2(\tilde{\iota}(\hat{\varphi}))$ and $\gamma_m(\iota(\varphi))\gamma_n(\iota(\varphi))$ are uniquely determined, although each theta multiplier is defined only up to sign. We will prove that $e=e_{\tilde{m}}$ is a homomorphism independent of the choice of $\tilde{m}$, and that the image of $e_{\tilde{m}}$ equals $<\!\sqrt{-1}\!>$ in the next subsection.
\subsection{Proof of the main theorem}
In this subsection, we will prove that $e_{\tilde{m}}:\hat{\mathcal{M}}_{(g)}(p_2)\to <\!\sqrt{-1}\!>$ is a surjective homomorphism. We also prove that $d_{\tilde{m}}=d_{\tilde{m},(\tilde{\tau},\tau)}$ does not depend on the choice of $(\tilde{\tau},\tau)$, and that the image of $d_{\tilde{m}}$ equals the cyclic group $<\!-1\!>$. For $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$, we simply denote $\tilde{\iota}(\hat{\varphi})\in\operatorname{Sp}(2g-2;\mathbf{Z})$ and $\iota(\varphi)=\iota(P(\hat{\varphi}))\in\Gamma_g(p_2)$ by $\tilde{\sigma}$ and $\sigma$, respectively.
To prove that $d_{\tilde{m}}$ only depends on $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$ and $\tilde{m}\in \mathbf{Z}^{2(g-1)}$, we need the following theorem.
\begin{theorem}[Farkas, Rauch\cite{farkas1970prs}]\label{Farkas}\ \\
For an even $(g-1)$-characteristic $\tilde{m}$, define the $g$-characteristics $m=(\tilde{m}',0|\tilde{m}'',1)$ and $n=(\tilde{m}',0 |\tilde{m}'',0)$. Then $\Phi_{m,n}^{\tilde{m}}(\tilde{\tau}, \tau)$ does not depend on the choice of $\tilde{m}$.
\end{theorem}
Define $\pi: \mathbf{Z}^{2g}\to\mathbf{Z}^{2g-2}$ by $\pi(m'|m'')=(m'_1,m'_2,\cdots, m'_{g-1}\ |\ m''_1, m''_2,\cdots, m''_{g-1})$.
\begin{lemma}\label{lemma:mod2char}
For an even $(g-1)$-characteristic $\tilde{m}$ and $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$,
\[\tilde{\sigma}\cdot \tilde{m}\equiv \pi(\sigma\cdot m)\ \operatorname{mod} 2,\]
where $m=(\tilde{m}',0|\tilde{m}'',1)$.
\end{lemma}
\begin{proof}
Denote the $1$-eigenspace of $t_*$ on $H_1(\hat{R};\mathbf{Q})$ by $H_1(\hat{R};\mathbf{Q})^+$. Then
\[
\{\hat{A}_i+\hat{A}_{g+i},\ \hat{B}_i+\hat{B}_{g+i}\}_{i=1}^{g-1}\cup \{\hat{A}_g,\ 2\hat{B}_g\}
\]
is a basis of $H_1(\hat{R};\mathbf{Q})^+$. The restriction of $p_2$
\[
H_1(\hat{R};\mathbf{Q})^+\to H_1(R;\mathbf{Q})
\]
maps the basis $\{\hat{A}_i+\hat{A}_{g+i},\ \hat{B}_i+\hat{B}_{g+i}\}_{i=1}^{g-1}\cup \{\hat{A}_g,\ 2\hat{B}_g\}$ of $H_1(\hat{R};\mathbf{Q})^+$ to the basis $\{2A_i,\ 2B_i\}_{i=1}^{g}$ of $H_1(R;\mathbf{Q})$. Since for $i=1,\cdots,g-1$ we have
\begin{gather*}
\varphi_*(2A_i)=\varphi_*(p_2)_*(\hat{A}_i+\hat{A}_{g+i})= (p_2)_*\hat{\varphi}_*(\hat{A}_i+\hat{A}_{g+i}),\ \varphi_*(2A_g)= (p_2)_*\hat{\varphi}_*(\hat{A}_g),\\
\varphi_*(2B_i)= (p_2)_*\hat{\varphi}_*(\hat{B}_i+\hat{B}_{g+i}),\text{ and }\varphi_*(2B_g)= (p_2)_*\hat{\varphi}_*(2\hat{B}_g).
\end{gather*}
Hence, the homomorphism $\hat{\mathcal{M}}_{(g)}(p_2)\to\operatorname{Sp}(2g;\mathbf{Z})$ induced by the action of $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$ on the basis $\{\hat{A}_i+\hat{A}_{g+i},\ \hat{B}_i+\hat{B}_{g+i}\}_{i=1}^{g-1}\cup \{\hat{A}_g,\ 2\hat{B}_g\}$ is equal to $\iota P_2:\hat{\mathcal{M}}_{(g)}(p_2)\to\Gamma_g(p_2)$. Denote $\tilde{\sigma}\in\operatorname{Sp}(2g-2;\mathbf{Z})$ by
\[
\tilde{\sigma}=
\begin{pmatrix}
\alpha'&\beta'\\
\gamma'&\delta'
\end{pmatrix},
\]
where $\alpha', \beta', \gamma', \delta'\in M(g-1;\mathbf{Z})$. Since we have
\[
\hat{\varphi}_*(\hat{A}_i+\hat{A}_{g+i})\equiv\hat{\varphi}_*(\hat{A}_i-\hat{A}_{g+i}),\text{ and }\hat{\varphi}_*(\hat{B}_i+\hat{B}_{g+i})\equiv\hat{\varphi}_*(\hat{B}_i-\hat{B}_{g+i}) \ \operatorname{mod} 2,
\]
and $\sigma=\iota P_2(\hat{\varphi})$ preserves the homology class $B_g \operatorname{mod}2$, $\sigma$ is written in the form
\[
\sigma=
\begin{pmatrix}
\alpha'&\leftidx{^t}{v_1}{}&\beta'&0\\
0&1&0&0\\
\gamma'&\leftidx{^t}{v_2}{}&\delta'&0\\
v_3&k&v_4&1
\end{pmatrix}
\operatorname{mod}2,
\]
where $v_1,v_2,v_3,v_4\in \mathbf{Z}^{g-1}, k\in\mathbf{Z}$. Then it is easy to see that $\pi(\sigma\cdot m)\equiv\tilde{\sigma}\cdot \tilde{m}\ \operatorname{mod}2$.
\end{proof}
\begin{lemma}
For $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$, the value $d_{\tilde{m}}(\hat{\varphi})=d_{\tilde{m},(\tilde{\tau},\tau)}(\hat{\varphi})$ does not depend on the choice of $(\tilde{\tau},\tau)$, and the image of $d_{\tilde{m}}$ equals the cyclic group $<\!-1\!>$. In particular, it does not depend on a complex structure of the cover $p_2:\hat{R}\to R$.
\end{lemma}
\begin{proof}
Note that, for any $g$-characteristics $u=(u'|u''), v=(v'|v'')\in\mathbf{Z}^{2g}$, we have
\[\theta_{u+2v}=(-1)^{u'\leftidx{^t}{v''}{}}\theta_u,\]
by the definition of the theta function. Consider the $g$-characteristic $v_0=(0,\cdots,0,1|0,\cdots,0,0)\in\mathbf{Z}^{2g}$. Since $\sigma$ preserves the homology class $B_g\ \operatorname{mod}2$, we have
\begin{gather*}
\sigma\cdot(m-n)=(m-n)
\begin{pmatrix}
\leftidx{^t}{\alpha}{}&-\leftidx{^t}{\gamma}{}\\
-\leftidx{^t}{\beta}{}&\leftidx{^t}{\delta}{}
\end{pmatrix}
\equiv v_0
\begin{pmatrix}
\leftidx{^t}{\alpha}{}&-\leftidx{^t}{\gamma}{}\\
-\leftidx{^t}{\beta}{}&\leftidx{^t}{\delta}{}
\end{pmatrix}
\equiv v_0\ \operatorname{mod} 2,\text{ and}\\
(\sigma\cdot m)'_g\equiv(\sigma\cdot n)'_g\equiv(\beta\leftidx{^t}{\alpha}{})_{gg}\equiv0\ \operatorname{mod} 2.
\end{gather*}
By Lemma \ref{lemma:mod2char}, there exist $v_1, v_2\in\mathbf{Z}^{2g}$ such that
\[
\sigma\cdot m+2v_1=((\tilde{\sigma}\cdot \tilde{m})',0|(\tilde{\sigma}\cdot \tilde{m})'',k_1),\text{ and } \sigma\cdot n+2v_2=((\tilde{\sigma}\cdot \tilde{m})',0|(\tilde{\sigma}\cdot \tilde{m})'',k_2),
\]
where
\[
k_1=0\text{ or }1,\ k_2=0\text{ or }1,\text{ and } k_1+k_2=1.
\]
Then there exists $p(\tilde{m},\hat{\varphi})\in <\!-1\!>$ such that
\[
\Phi_{\sigma\cdot m+2v_1, \sigma\cdot n+2v_2}^{\tilde{\sigma}\cdot\tilde{m}}(\tilde{\tau}, \tau)=p(\tilde{m}, \hat{\varphi})\Phi_{\sigma\cdot m, \sigma\cdot n}^{\tilde{\sigma}\cdot\tilde{m}}(\tilde{\tau}, \tau).
\]
Note that $p(\tilde{m},\hat{\varphi})$ does not depend on the choice of $(\tilde{\tau},\tau)$. By Theorem \ref{Farkas}, we have
\[
\Phi_{\sigma\cdot m+2v_1, \sigma\cdot n+2v_2}^{\tilde{\sigma}\cdot\tilde{m}}(\tilde{\tau}, \tau)=\Phi_{m,n}^{\tilde{m}}(\tilde{\tau}, \tau).
\]
Hence we have
\[
p(\tilde{m},\hat{\varphi})=d_{\tilde{m}}(\hat{\varphi}).
\]
This proves the lemma.
\end{proof}
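The sign relation $\theta_{u+2v}=(-1)^{u'\leftidx{^t}{v''}{}}\theta_u$ invoked at the start of the proof can also be checked numerically for $g=1$ (a sketch with the classical theta series, truncated; $\tau$ is an arbitrary point of the upper half plane):

```python
import cmath
import math

def theta(mp, mpp, tau, N=30):
    """Truncated g = 1 theta constant theta_{(m'|m'')}(tau), summing over |p| <= N."""
    return sum(cmath.exp(math.pi * 1j * ((p + mp / 2) ** 2 * tau
                                         + 2 * (p + mp / 2) * (mpp / 2)))
               for p in range(-N, N + 1))

tau = 0.3 + 1.0j        # any point of the upper half plane
u = (1, 0)              # an even 1-characteristic
v = (0, 1)              # shift: u + 2v = (1 | 2)
lhs = theta(u[0] + 2 * v[0], u[1] + 2 * v[1], tau)
rhs = (-1) ** (u[0] * v[1]) * theta(u[0], u[1], tau)
print(abs(lhs - rhs))   # ≈ 0, confirming theta_{u+2v} = -theta_u here
```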
Consider the action of $\varphi\in\mathcal{M}_g(p_2)$ on the symplectic basis $\{A_i, B_i\}_{i=1}^g$. The basis $\{\varphi_*A_i, \varphi_*B_i\} _{i=1}^g$ is also a symplectic basis of $H_1(R;\mathbf{Z})$. The corresponding period matrix is
\[
\tau'=(\tau'_{ij}), \hspace{0.5cm} \tau'_{ij}=\int_{\varphi_*B_j}\omega'_i,
\]
where $\{\omega'_i\}_{i=1}^{g}$ is the normalized basis with respect to $\{\varphi_*A_i, \varphi_*B_i\}_{i=1}^g$. This is equal to $\leftidx{^t}{\iota(\varphi)}{}\cdot\tau$. Next, consider the action of $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$ on the basis $\{\hat{A}_i, \hat{B}_i\} _{i=1}^{2g-1}$ of $H_1(\hat{R};\mathbf{Z})$. Note that the basis $\{\hat{\varphi}_*\hat{A}_i,\hat{\varphi}_*\hat{B}_i\}_{i=1}^{2g-1}$ is again a lift of $\{\varphi_*A_i, \varphi_*B_i\} _{i=1}^g$. The period matrix of $\operatorname{Prym}(\hat{R}, p_2)$ with respect to the basis $\{\hat{\varphi}_*(\hat{A}_i-\hat{A}_{g+i}), \hat{\varphi}_*(\hat{B}_i-\hat{B}_{g+i})\} _{i=1}^{g-1}$ of $H_1(\hat{R};\mathbf{Z})^{-}$ is
\[
\tilde{\tau}':=(\tilde{\tau}'_{ij}), \hspace{0.5cm} \tilde{\tau}'_{ij}=\int_{\hat{\varphi}_*(\hat{B}_j-\hat{B}_{g+j})}\frac{\hat{\omega'}_i-\hat{\omega'}_{g+i}}{2},
\]
where $\{\hat{\omega'}_i\}_{i=1}^{2g-1}$ is the normalized basis. This is equal to $\leftidx{^t}{\tilde{\iota}}{}(\hat{\varphi})\cdot\tilde{\tau}$. Hence, $\leftidx{^t}{\iota(\varphi)}{}\cdot\tau$ is also a period matrix of $R$, and $\leftidx{^t}{\tilde{\iota}}{}(\hat{\varphi})\cdot\tilde{\tau}$ is also a period matrix of the cover $p_2$. This shows that the pair $(\tilde{\sigma}\cdot\tilde{\tau}, \sigma\cdot\tau)$ satisfies the condition of Theorem \ref{Farkas} for any $\hat{\varphi}\in\hat{\mathcal{M}}_{(g)}(p_2)$.
\begin{theorem}
The map $e_{\tilde{m}}$ is a homomorphism, and the image of $e_{\tilde{m}}(\hat{\varphi})$ equals $<\!\sqrt{-1}\!>$. Moreover $e({\hat{\varphi}}):=e_{\tilde{m}}(\hat{\varphi})$ does not depend on the choice of $\tilde{m}$.
\end{theorem}
\begin{proof}
For $\hat{\varphi}, \hat{\varphi}'\in\hat{\mathcal{M}}_{(g)}(p_2)$, denote $\sigma_1:=\sigma=\iota P_2(\hat{\varphi})$, and $\tilde{\sigma}_1:=\tilde{\sigma}=\tilde{\iota}(\hat{\varphi})$. Similarly, denote $\sigma_2:=\iota P_2(\hat{\varphi}')$, $\tilde{\sigma}_2:=\tilde{\iota}(\hat{\varphi}')$, and $\sigma_3:=\iota P_2(\hat{\varphi}\hat{\varphi}')$, $\tilde{\sigma}_3:=\tilde{\iota}(\hat{\varphi}\hat{\varphi}')$. Write $\sigma_i$ as
\[
\sigma_i=
\begin{pmatrix}
\alpha_i&\beta_i\\
\gamma_i&\delta_i
\end{pmatrix}
\hspace{0.5cm}
\text{for } i=1,2,3.
\]
We also denote simply $\tilde{\tau}':=\tilde{\sigma}_2\cdot\tilde{\tau}$, and
$\tau':=\sigma_2\cdot\tau$.
Since the pairs $(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)$ and $(\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_2\cdot\tau)$ satisfy the condition of Theorem \ref{Farkas}, we have
\begin{gather*}
\frac{1}{d_{\tilde{m}}(\hat{\varphi}\hat{\varphi'})}
=\frac{\Phi_{(\sigma_1\sigma_2)\cdot m, (\sigma_1\sigma_2)\cdot n}^{(\tilde{\sigma}_1\tilde{\sigma}_2)\cdot\tilde{m}}(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)}{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)}
=\frac{\gamma_{\tilde{m}}^2(\tilde{\sigma}_1\tilde{\sigma}_2)}{\gamma_m(\sigma_1\sigma_2)\gamma_n(\sigma_1\sigma_2)}
\frac{\det(\tilde{\beta}_3\tilde{\tau}+\tilde{\alpha}_3)^{-1}}{\det(\beta_3\tau+\alpha_3)^{-1}}
\frac{\Phi_{m, n}^{\tilde{m}}(\tilde{\tau}, \tau)}{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)},\\
\frac{1}{d_{\tilde{m}}(\hat{\varphi})}
=\frac{\Phi_{\sigma_1\cdot m, \sigma_1\cdot n}^{\tilde{\sigma}_1\cdot\tilde{m}}(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)}{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)}
=\frac{\gamma_{\tilde{m}}^2(\tilde{\sigma}_1)}{\gamma_m(\sigma_1)\gamma_n(\sigma_1)}
\frac{\det(\tilde{\beta}_1\tilde{\tau}'+\tilde{\alpha}_1)^{-1}}{\det(\beta_1\tau'+\alpha_1)^{-1}}
\frac{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_2\cdot\tau)}{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_1\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_1\sigma_2\cdot\tau)},\\
\frac{1}{d_{\tilde{m}}(\hat{\varphi'})}
=\frac{\Phi_{\sigma_2\cdot m, \sigma_2\cdot n}^{\tilde{\sigma}_2\cdot\tilde{m}}(\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_2\cdot\tau)}{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_2\cdot\tau)}
=\frac{\gamma_{\tilde{m}}^2(\tilde{\sigma}_2)}{\gamma_m(\sigma_2)\gamma_n(\sigma_2)}
\frac{\det(\tilde{\beta}_2\tilde{\tau}+\tilde{\alpha}_2)^{-1}}{\det(\beta_2\tau+\alpha_2)^{-1}}
\frac{\Phi_{m, n}^{\tilde{m}}(\tilde{\tau}, \tau)}{\Phi_{m, n}^{\tilde{m}}(\tilde{\sigma}_2\cdot\tilde{\tau}, \sigma_2\cdot\tau)},
\end{gather*}
by the definition of $d_{\tilde{m}}(\hat{\varphi})$.
It is easy to see that
\begin{gather*}
\det(\tilde{\beta}_2\tilde{\tau}+\tilde{\alpha}_2)\det(\tilde{\beta}_1\tilde{\tau}'+\tilde{\alpha}_1)=\det(\tilde{\beta}_3\tilde{\tau}+\tilde{\alpha}_3)\text{, and}\\
\det(\beta_2\tau+\alpha_2)\det(\beta_1\tau'+\alpha_1)=\det(\beta_3\tau+\alpha_3).
\end{gather*}
This shows that $e_{\tilde{m}}$ is a homomorphism.
Next, we determine the image of $e_{\tilde{m}}$. There are two lifts in $\hat{\mathcal{M}}_{(g)}(p_2)$ of $a^2\in\mathcal{M}_g(p_2)$. We denote the lift which fixes the homology class $\hat{A}_1$ by $\hat{a}\in\hat{\mathcal{M}}_{(g)}(p_2)$. As we stated in Remark \ref{rem:generator}, $H_1(\hat{\mathcal{M}}_{(g)}(p_2);\mathbf{Z})$ is generated by the class of $\hat{a}$. For $\hat{\varphi}=\hat{a}$, we have $\tilde{\sigma}=\tilde{\iota}(\hat{a})=I_{2g-2}\in \operatorname{Sp}(2g-2;\mathbf{Z})$ and $\sigma=\iota P_2(\hat{a})=\gamma_{gg}\in\Gamma_g(p_2)$. From Theorem 3 in Igusa~\cite{igusa1964grt}, for any $\tilde{m}\in\mathbf{Z}^{2(g-1)}$, we have
\[
\gamma_m(\sigma)\gamma_n(\sigma)=-\sqrt{-1}, \text{ and \ } \gamma_{\tilde{m}}^2(\tilde{\sigma})=1,
\]
so that
\[
\frac{\gamma_{\tilde{m}}^2(\tilde{\sigma})}{\gamma_m(\sigma)\gamma_n(\sigma)}=\sqrt{-1}.
\]
It is easy to see that $d_{\tilde{m}}(\hat{a})=1$. Hence $e_{\tilde{m}}(\hat{a})$ is a generator of the cyclic group $<\!\sqrt{-1}\!>$ and is independent of the choice of $\tilde{m}$.
\end{proof}
For $r=0,1$, we proved $H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})\cong\mathbf{Z}/2\mathbf{Z}\text{ or }\mathbf{Z}/4\mathbf{Z}$ in Section \ref{genGamma}. From the above theorem, we have
\[H_1(\hat{\mathcal{M}}_{(g,r)}(p_2);\mathbf{Z})\cong\mathbf{Z}/4\mathbf{Z}.\]
Since $\mathcal{M}_{g,1}(p_2)$ is isomorphic to $\hat{\mathcal{M}}_{(g,1)}(p_2)$, we have $H_1(\mathcal{M}_{g,1}(p_2);\mathbf{Z})\cong \mathbf{Z}/4\mathbf{Z}$. Next, consider $H_1(\mathcal{M}_g(p_2);\mathbf{Z})$ when the genus $g$ is odd. For the deck transformation $t$, we obtain
\[
e(t)=(-1)^{g-1},
\]
from Theorem 3 in Igusa~\cite{igusa1964grt}. By the Lyndon--Hochschild--Serre spectral sequence, we have the exact sequence
\[
\mathbf{Z}/2\mathbf{Z}\to H_1(\hat{\mathcal{M}}_{(g)}(p_2);\mathbf{Z})\to H_1(\mathcal{M}_g(p_2);\mathbf{Z})\to 0.
\]
This shows that $H_1(\mathcal{M}_g(p_2);\mathbf{Z})\cong\mathbf{Z}/4\mathbf{Z}$ when $g$ is odd. This completes the proof of Theorem \ref{main-theorem}.
From Theorem \ref{main-theorem}, we obtain many homomorphisms $\mathcal{M}_{g,1}[d]\to\mathbf{Z}/4\mathbf{Z}$ for an even integer $d$.
\begin{proposition}\label{prop:leveld}
For a positive even integer $d$, there exists an injection
\[
(\mathbf{Z}/4\mathbf{Z})^{2g} \hookrightarrow \operatorname{Hom}(\mathcal{M}_{g,1}[d];\mathbf{Z}/4\mathbf{Z}).
\]
When $d=2$ and $g$ is odd, we have
\[
(\mathbf{Z}/4\mathbf{Z})^{2g} \hookrightarrow \operatorname{Hom}(\mathcal{M}_{g}[d];\mathbf{Z}/4\mathbf{Z}).
\]
\end{proposition}
\begin{proof}
To prove the proposition, we will construct a homomorphism from $\mathcal{M}_{g,1}[d]$ into $\mathcal{M}_{dg/2-1,1}(p'_X)$ for a certain double cover $p'_X$.
Let $X$ be one of the homology classes $A_1,\cdots,A_g,B_1,\cdots,B_g\in H_1(\Sigma_g;\mathbf{Z})$. Consider the $d$-fold cover $q_X:\Sigma_{dg-1}\to\Sigma_{g}$ such that the monodromy homomorphism $\pi_1(\Sigma_{g})\to\mathbf{Z}/d\mathbf{Z}$ is equal to the Poincar\'e dual of $X\in H^1(\Sigma_{g};\mathbf{Z}/d\mathbf{Z})$. Denote a generator of the deck transformation group by $t_X$. Consider
\[
\Sigma_{g,1}=\Sigma_g-D^2\subset \Sigma_g\text{, and }\Sigma_{dg-1,d}=\Sigma_{dg-1}-{q}_X^{-1}(D^2).
\]
We denote the restriction of the cover $q_X|_{\Sigma_{g,1}}:\Sigma_{dg-1,d}\to\Sigma_{g,1}$ by $p_X$. Choose two connected components $D_1$ and $D_2$ of ${q}_X^{-1}(D^2)$ such that $t_X^{d/2}D_1=D_2$. Consider $\Sigma_{dg-1,2}=\Sigma_{dg-1}-\amalg_{i=1}^{2}D_i$. Then we have the double cover
\[p'_X:\Sigma_{dg-1,2}\to\Sigma_{dg-1,2}/<\!t_X^{d/2}\!>=\Sigma_{dg/2-1,1}.\]
We have the projection $P_X:\hat{\mathcal{M}}_{(g,1)}(p_X)\to\mathcal{M}_{g,1}(p_X)$ and $P'_X:\hat{\mathcal{M}}_{(dg/2-1,1)}(p'_X)\to\mathcal{M}_{dg/2-1,1}(p'_X)$. Since the centralizer of $<\!t_X\!>$ is included in the centralizer of $<\!t_X^{d/2}\!>$, we have the homomorphism
\[
\begin{array}{cccc}
Q_X:&\hat{\mathcal{M}}_{(g,1)}(p_X)&\to&\hat{\mathcal{M}}_{(dg/2-1,1)}(p'_X).\\
&[\hat{f}]&\mapsto&[\hat{f}\cup id_{\cup_{i=1}^{d-2} D^2}]
\end{array}
\]
Note that we have the inclusion map $i_X:\mathcal{M}_{g,1}[d]\to\mathcal{M}_{g,1}(p_X)$. Hence we have the homomorphism
\[
P'_XQ_XP_X^{-1}i_X:\mathcal{M}_{g,1}[d]\to\mathcal{M}_{dg/2-1,1}(p'_X).
\]
Consider the induced homomorphism $(P'_XQ_XP_X^{-1}i_X)_*:H_1(\mathcal{M}_{g,1}[d];\mathbf{Z})\to H_1(\mathcal{M}_{dg/2-1,1}(p'_X);\mathbf{Z})$. For the simple closed curves $Y=A_1,\cdots,A_g,B_1,\cdots,B_g$, denote the Dehn twists along $Y$ by $T_Y$. Then we have
\[
(P'_XQ_XP_X^{-1}i_X)_*(T_Y^d)=
\begin{cases}
1,&\text{ if } Y=X,\\
0,&\text{ otherwise},
\end{cases}
\]
by Remark \ref{rem:generator}. Hence the induced map
\[(\mathbf{Z}/4\mathbf{Z})^{2g} \to \operatorname{Hom}(\mathcal{M}_{g,1}[d];\mathbf{Z}/4\mathbf{Z})\]
is injective.
Next, consider the case where $d=2$ and $g$ is odd. Then $H_1(\mathcal{M}_{g}(p_X);\mathbf{Z})$ is isomorphic to $\mathbf{Z}/4\mathbf{Z}$. The inclusion $\mathcal{M}_{g}[2]\to\mathcal{M}_g(p_X)$ induces a homomorphism $H_1(\mathcal{M}_{g}[2];\mathbf{Z})\to H_1(\mathcal{M}_{g}(p_X);\mathbf{Z})\cong\mathbf{Z}/4\mathbf{Z}$. Similarly, we have the injective homomorphism $(\mathbf{Z}/4\mathbf{Z})^{2g} \to \operatorname{Hom}(\mathcal{M}_{g}[2];\mathbf{Z}/4\mathbf{Z})$. This completes the proof.
\end{proof}
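As a hedged linear-algebra aside (with illustrative conventions for the symplectic basis that are assumed, not taken from the text), the monodromy homomorphism used in the proof above factors through $H_1(\Sigma_g;\mathbf{Z}/d\mathbf{Z})$ and is the Poincar\'e dual of $X$, i.e., the intersection pairing $\langle X,\cdot\rangle$ reduced mod $d$:

```python
import numpy as np

# Sketch of the Poincare-dual monodromy homomorphism for X = A_1 on
# H_1(Sigma_g; Z/dZ), using the standard symplectic intersection form J
# in the basis A_1,...,A_g, B_1,...,B_g (illustrative conventions).
g, d = 3, 4
J = np.block([[np.zeros((g, g), dtype=int), np.eye(g, dtype=int)],
              [-np.eye(g, dtype=int), np.zeros((g, g), dtype=int)]])
X = np.eye(2 * g, dtype=int)[0]              # the class A_1
monodromy = lambda c: int((X @ J @ c) % d)   # <X, c> mod d
basis = np.eye(2 * g, dtype=int)
values = [monodromy(basis[k]) for k in range(2 * g)]
print(values)                                # 1 only on B_1
```

The pairing vanishes on every basis class except $B_1$, matching the fact that a loop crosses the cut along $A_1$ once exactly when it represents $B_1$.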
\end{document}
\begin{document}
\title{A Remark on Orbital Free Entropy}
\author[Y.~Ueda]{Yoshimichi Ueda}
\address{
Graduate School of Mathematics,
Kyushu University,
Fukuoka, 810-8560, Japan
}
\email{[email protected]}
\thanks{Supported by Grant-in-Aid for Challenging Exploratory Research 16K13762.}
\subjclass[2010]{46L54, 52C17, 28A78, 94A17.}
\keywords{Free independence; Free entropy; Free mutual information; Orbital free entropy}
\date{Feb. 20th, 2017}
\begin{abstract} A lower estimate of the orbital free entropy $\chi_\mathrm{orb}$ under unitary conjugation is proved; together with an observation of Voiculescu, it shows that the conjectural exact formula relating $\chi_\mathrm{orb}$ to the free entropy $\chi$ fails in general, in contrast to the case when the given random multi-variables are all hyperfinite.
\end{abstract}
\maketitle
\allowdisplaybreaks{
\section{Introduction}
Voiculescu's theory of free entropy (see \cite{Voiculescu:Survey}) has two alternative approaches: the microstate free entropy $\chi$ and the microstate-free free entropy $\chi^*$, both of which are believed to define the \emph{same} free entropy (at least under the $R^\omega$-embeddability assumption). Similarly to the microstate free entropy $\chi$, the orbital free entropy $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n)$ of given random self-adjoint multi-variables $\mathbf{X}_i$ was also constructed based on an appropriate notion of microstates (called `orbital microstates', i.e., the `unitary orbit part' of the usual matricial microstates appearing in the definition of $\chi$) in \cite{HiaiMiyamotoUeda:IJM09},\cite{Ueda:IUMJ14}. The free entropy should be understood, in some sense, as the `size' of a given set of non-commutative random variables, while $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n)$ \emph{precisely} measures how far the positional relation among the $W^*(\mathbf{X}_i)$ is from the freely independent positional relation in the ambient tracial $W^*$-probability space. In fact, it is known that $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n)$ is non-positive and equals zero if and only if the $W^*(\mathbf{X}_i)$ are freely independent (modulo the $R^\omega$-embeddability assumption). This fact and other general properties of $\chi_\mathrm{orb}$ suggest that the negative orbital free entropy $-\chi_\mathrm{orb}$ is a microstate variant of Voiculescu's free mutual information $i^*$, whose definition is indeed `microstate-free'. Hence it is natural to expect that those two quantities have the same properties.
In \cite{Voiculescu:AdvMath99} Voiculescu (implicitly) proved that
$$
i^*(v_1 A_1 v_1^*;\dots;v_n A_n v_n^*: B) \leq -(\Sigma(v_1)+\cdots+\Sigma(v_n))
$$
holds for unital $*$-subalgebras $A_1,\dots,A_n,B$ of a tracial $W^*$-probability space and a freely independent family of unitaries $v_1,\dots,v_n$ in the same $W^*$-probability space such that the family is freely independent of $A_1\vee\cdots\vee A_n \vee B$. Here, we set $\Sigma(v_i) := \int_\mathbb{T}\int_\mathbb{T}\log|\zeta_1 - \zeta_2|\,\mu_{v_i}(d\zeta_1)\mu_{v_i}(d\zeta_2)$ with the spectral distribution measure $\mu_{v_i}$ of $v_i$ with respect to $\tau$. In fact, this inequality immediately follows from \cite[Proposition 9.4]{Voiculescu:AdvMath99} (see Proposition 10.11 in the same paper). Its natural `orbital counterpart' should be
$$
\chi_\mathrm{orb}(v_1\mathbf{X}_1 v_1^*,\dots,v_n\mathbf{X}_n v_n^*) \geq \Sigma(v_1)+\cdots+\Sigma(v_n)
$$
where $A_i = W^*(\mathbf{X}_i)$ ($1 \leq i \leq n$) and $B = \mathbb{C}$. We will prove a slightly improved inequality (Theorem \ref{T1}). This inequality is nothing but further evidence for the unification conjecture between $i^*$ and $\chi_\mathrm{orb}$. However, more importantly, the inequality together with Voiculescu's discussion \cite[\S\S14.1]{Voiculescu:AdvMath99} answers, in the negative, the question on the expected relation between $\chi_\mathrm{orb}$ and $\chi$. Namely, the main formula in \cite{HiaiMiyamotoUeda:IJM09} (see \eqref{Eq7} below), which we call the exact formula relating $\chi_\mathrm{orb}$ to $\chi$, does not hold without additional assumptions.
In the final part of this short note we also give an observation about the question of whether or not there is a variant of $\chi_\mathrm{orb}$ satisfying both the `$W^*$-invariance' for each given random self-adjoint multi-variable and the exact formula relating $\chi_\mathrm{orb}$ to $\chi$ in general. Here it is fair to mention two other attempts due to Biane--Dabrowski \cite{BianeDabrowski:AdvMath13} and Dabrowski \cite{Dabrowki:arXiv:1604.06420}, but this question is not yet resolved at the moment of this writing.
\section{Preliminaries}
Throughout this note, $(\mathcal{M},\tau)$ denotes a tracial $W^*$-probability space, that is, $\mathcal{M}$ is a finite von Neumann algebra and $\tau$ a faithful normal tracial state on $\mathcal{M}$. We denote the $N\times N$ self-adjoint matrices by $M_N(\mathbb{C})^\mathrm{sa}$ and the Haar probability measure on the $N\times N$ unitary group $\mathrm{U}(N)$ by $\gamma_{\mathrm{U}(N)}$.
\subsection{Orbital free entropy} (\cite{HiaiMiyamotoUeda:IJM09},\cite{Ueda:IUMJ14}.) Let $\mathbf{X}_i = (X_{i1},\dots,X_{ir(i)})$, $1 \leq i \leq n$, be arbitrary random self-adjoint multi-variables in $(\mathcal{M},\tau)$. We recall an expression of $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n)$ that we will use in this note. Let $R > 0$ be given, possibly with $R=\infty$, and $m \in \mathbb{N}$ and $\delta > 0$ be arbitrarily given. For given multi-matrices $\mathbf{A}_i = (A_{ij})_{j=1}^{r(i)} \in (M_N(\mathbb{C})^\mathrm{sa})^{r(i)}$, $1 \leq i \leq n$, the set of orbital microstates $\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)$ is defined to be the set of all $(U_i)_{i=1}^n \in \mathrm{U}(N)^n$ such that
$$
\big|\mathrm{tr}_N(h((U_i \mathbf{A}_i U_i^*)_{i=1}^n)) - \tau(h((\mathbf{X}_i)_{i=1}^n))\big| < \delta
$$
holds whenever $h$ is a $*$-monomial in $(r(1)+\cdots+r(n))$ indeterminates of degree not greater than $m$. Similarly, $\Gamma_R(\mathbf{X}_i;N,m,\delta)$ denotes the set of all $\mathbf{A} \in ((M_N(\mathbb{C})^\mathrm{sa})_R)^{r(i)}$ such that
$$
\big|\mathrm{tr}_N(h(\mathbf{A})) - \tau(h(\mathbf{X}_i))\big| < \delta
$$
holds whenever $h$ is a $*$-monomial in $r(i)$ indeterminates of degree not greater than $m$. It is rather trivial that if some $\mathbf{A}_i$ sits in $((M_N(\mathbb{C})^\mathrm{sa})_R)^{r(i)}\setminus\Gamma_R(\mathbf{X}_i\,;\,N,m,\delta)$, then $\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)$ must be the empty set. Hence we define
\begin{align}\label{Eq1}
&\bar{\chi}_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n\,;\,N,m,\delta) \notag\\
&\quad\quad:=
\sup_{\mathbf{A}_i \in (M_N(\mathbb{C})^\mathrm{sa})_R^{r(i)}}\log\Big(\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)\big)\Big) \\
&\quad\quad\,=
\sup_{\mathbf{A}_i \in \Gamma_R(\mathbf{X}_i\,;\,N,m,\delta)}\log\Big(\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)\big)\Big) \notag
\end{align}
(defined to be $-\infty$ if some $\Gamma_R(\mathbf{X}_i\,;\,N,m,\delta) = \emptyset$), and we define
\begin{equation}\label{Eq2}
\chi_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n)
:=
\lim_{\substack{m\to\infty \\ \delta\searrow0}} \varlimsup_{N\to\infty}
\frac{1}{N^2}\bar{\chi}_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n\,;\,N,m,\delta).
\end{equation}
It is known, see \cite[Corollary 2.7]{Ueda:IUMJ14}, that $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n) := \sup_{R > 0}\chi_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n) = \chi_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n)$ holds whenever $R \geq \max\{\Vert X_{ij}\Vert_\infty\,|\,1 \leq i \leq n, 1 \leq j \leq r(i)\}$.
Let $\mathbf{v} = (v_1,\dots,v_s)$ be an $s$-tuple of unitaries in $(\mathcal{M},\tau)$. For given multi-matrices $\mathbf{A}_i = (A_{ij})_{j=1}^{r(i)} \in (M_N(\mathbb{C})^\mathrm{sa})^{r(i)}$, $1 \leq i \leq n$, $\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n:\mathbf{v}\,;\,N,m,\delta)$ in the presence of $\mathbf{v}$ is defined to be the set of all $(U_i)_{i=1}^n \in \mathrm{U}(N)^n$ such that there exists $\mathbf{V} = (V_1,\dots,V_s) \in \mathrm{U}(N)^s$ so that
$$
\big|\mathrm{tr}_N(h((U_i \mathbf{A}_i U_i^*)_{i=1}^n,\mathbf{V})) - \tau(h((\mathbf{X}_i)_{i=1}^n,\mathbf{v}))\big| < \delta
$$
holds whenever $h$ is a $*$-monomial in $(r(1)+\cdots+r(n)+s)$ indeterminates of degree not greater than $m$. Then $\chi_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n:\mathbf{v})$ can be obtained in the same way as above with $\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n:\mathbf{v}\,;\,N,m,\delta)$ in place of $\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)$. Remark that $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:\mathbf{v}) := \sup_{R>0}\chi_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n:\mathbf{v}) = \chi_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n:\mathbf{v})$ also holds if $R \geq \max\{\Vert X_{ij}\Vert_\infty\,|\,1 \leq i \leq n, 1 \leq j \leq r(i)\}$. Moreover, $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:\mathbf{v}) \leq \chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n)$ trivially holds.
\subsection{Microstate free entropy for unitaries} (See \cite[\S6.5]{HiaiPetz:Book}.) Let $\mathbf{v} = (v_1,\dots,v_n)$ be an $n$-tuple of unitaries in $\mathcal{M}$. We recall the microstate free entropy $\chi_u(\mathbf{v})$. Let $m \in \mathbb{N}$ and $\delta > 0$ be arbitrarily given. For every $N \in \mathbb{N}$ we define $\Gamma_u(\mathbf{v};N,m,\delta)$ to be the set of all $\mathbf{V} = (V_1,\dots,V_n) \in \mathrm{U}(N)^n$ such that $\big|\mathrm{tr}_N(h(\mathbf{V})) - \tau(h(\mathbf{v}))\big| < \delta$ holds whenever $h$ is a $*$-monomial in $n$ indeterminates of degree not greater than $m$. Then
\begin{equation}\label{Eq3}
\chi_u(\mathbf{v}) := \lim_{\substack{m\to\infty \\ \delta\searrow0}} \varlimsup_{N\to\infty} \frac{1}{N^2}\log\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_u(\mathbf{v};N,m,\delta)\big).
\end{equation}
Note that $\chi_u(\mathbf{v}) = \sum_{i=1}^n\chi_u(v_i)$ holds when $v_1,\dots,v_n$ are freely independent and that $\chi_u(\mathbf{v}) = 0$ if $\mathbf{v}$ is a freely independent family of Haar unitaries. Moreover, when $n=1$, $\chi_u(v_1) = \Sigma(v_1)$ holds.
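The identity $\chi_u(v_1) = \Sigma(v_1)$ can be checked numerically in the basic case of a Haar unitary, where $\mu_{v_1}$ is the uniform measure on the circle and $\Sigma(v_1) = 0$ (consistent with $\chi_u = 0$ for a free family of Haar unitaries). The following Monte Carlo sketch is purely illustrative; the sample size and seed are arbitrary choices:

```python
import numpy as np

# Monte Carlo estimate of Sigma(v) = int int log|z1 - z2| dmu(z1) dmu(z2)
# for mu the uniform (Haar-unitary) measure on the unit circle; the exact
# value is 0.  The log singularity on the diagonal is integrable, and the
# tiny floor guards against an (astronomically unlikely) exact collision.
rng = np.random.default_rng(0)
n = 200_000
t1, t2 = rng.uniform(0.0, 2.0 * np.pi, size=(2, n))
gaps = np.abs(np.exp(1j * t1) - np.exp(1j * t2))
sigma_haar = float(np.mean(np.log(np.maximum(gaps, 1e-300))))
print(f"{sigma_haar:.3f}")
```

The estimate lands close to $0$, as expected, since $\log|1-e^{i\theta}| = -\sum_{k\geq 1}\cos(k\theta)/k$ integrates to zero against the uniform measure.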
\subsection{Voiculescu's measure concentration result} (\cite{Voiculescu:IMRN98}) Let $(\mathfrak{A},\phi)$ be a non-commutative probability space, and $(\Omega_i)_{i\in I}$ be a
family of subsets of $\mathfrak{A}$. Denote by $(\mathfrak{A}^{\star I},\phi^{\star I})$ the reduced free product of copies of $(\mathfrak{A},\phi)$ indexed by $I$, and by $\lambda_i$ the canonical map of $\mathfrak{A}$ onto the $i$-th copy of $\mathfrak{A}$ in $\mathfrak{A}^{\star I}$. For each $\varepsilon > 0$ and $m \in \mathbb{N}$ we say that $(\Omega_i)_{i\in I}$ are $(m,\varepsilon)$-free (in $(\mathfrak{A},\phi)$) if
\begin{equation*}
\left|\phi(a_1\cdots a_k)
- \phi^{\star I}(\lambda_{i_1}(a_1)\cdots \lambda_{i_k}(a_k))\right| < \varepsilon
\end{equation*}
for all $a_j \in \Omega_{i_j}$, $i_j \in I$ with $1 \leq j \leq k$ and
$1 \leq k \leq m$.
\begin{lemma}\label{L1}{\rm(Voiculescu \cite[Corollary 2.13]{Voiculescu:IMRN98})} Let $R>0$, $\varepsilon > 0$, $\theta > 0$ and $m \in \mathbb{N}$ be given. Then there exists $N_0 \in \mathbb{N}$ such that
\begin{align*}
\gamma_{\mathrm{U}(N)}^{\otimes p}\big(\big\{ (U_1,\dots,U_p) \in \mathrm{U}(N)^p:
&\ \{T_1^{(0)},\dots,T_{q_0}^{(0)}\},
\{U_1 T_1^{(1)}U_1^*,\dots,U_1 T_{q_1}^{(1)}U_1^*\}, \\
&\dots,\{U_p T_1^{(p)}U_p^*,\dots,U_p T_{q_p}^{(p)}U_p^*\}
\ \text{are $(m,\varepsilon)$-free} \big\}\big) > 1-\theta
\end{align*}
whenever $N \geq N_0$ and $T_j^{(i)} \in M_N(\mathbb{C})$ with
$\Vert T_j^{(i)}\Vert_\infty \leq R$, $1 \leq p \leq m$, $1 \leq q_i \leq m$,
$0 \leq i \leq p$, $1 \leq j \leq q_i$.
\end{lemma}
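The spirit of Lemma \ref{L1} can be seen in a rough Monte Carlo experiment (an assumption-laden sketch with arbitrary matrix size and seed, not the lemma itself): conjugating by a Haar-random unitary $U$ makes $\{A\}$ and $\{UBU^*\}$ approximately free, so for trace-zero $A$, $B$ the mixed moment $\mathrm{tr}_N(A\,UBU^*)$ concentrates near $\mathrm{tr}_N(A)\,\mathrm{tr}_N(B) = 0$:

```python
import numpy as np

# Haar-random unitary via QR of a complex Ginibre matrix, with the
# standard phase correction on the diagonal of R to get Haar measure.
rng = np.random.default_rng(1)
N = 400
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
phases = np.diag(R) / np.abs(np.diag(R))
U = Q * phases                                  # multiplies column j by phases[j]

A = np.diag(np.repeat([1.0, -1.0], N // 2))     # tr_N(A) = 0
B = np.diag(np.tile([1.0, -1.0], N // 2))       # tr_N(B) = 0
mixed = np.trace(A @ U @ B @ U.conj().T).real / N
print(f"{abs(mixed):.3f}")
```

By concentration of measure the fluctuation of the mixed moment is of order $1/N$, so the printed value is small; the lemma upgrades this to uniform control over all $(m,\varepsilon)$-moment conditions simultaneously.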
\section{Lower estimate of $\chi_\mathrm{orb}$ under unitary conjugation}
This section is devoted to proving the following:
\begin{theorem}\label{T1} Let $\mathbf{X}_1,\dots,\mathbf{X}_{n+1}$ be random self-adjoint multi-variables in $(\mathcal{M},\tau)$ and $\mathbf{v} = (v_1,\dots,v_n)$ be an $n$-tuple of unitaries in $\mathcal{M}$. Assume that $\mathbf{X} := \mathbf{X}_1\sqcup\cdots\sqcup\mathbf{X}_{n+1}$ has f.d.a.~in the sense of Voiculescu \cite[Definition 3.1]{Voiculescu:IMRN98} (or equivalently, $W^*(\mathbf{X})$ is $R^\omega$-embeddable) and that $\mathbf{X}$ and $\mathbf{v}$ are freely independent. Then
\begin{equation}\label{Eq4}
\begin{aligned}
\chi_\mathrm{orb}(v_1\mathbf{X}_1 v_1^*,\dots,v_n\mathbf{X}_n v_n^*,\mathbf{X}_{n+1})
&\geq
\chi_\mathrm{orb}(v_1\mathbf{X}_1 v_1^*,\dots,v_n\mathbf{X}_n v_n^*,\mathbf{X}_{n+1}:\mathbf{v}) \\
&\geq
\chi_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:\mathbf{v}) \\
&\geq
\chi_u(\mathbf{v}).
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
The first inequality in \eqref{Eq4} is trivial, and the second follows from (the conditional variant of) \cite[Theorem 2.6(6)]{Ueda:IUMJ14}. Hence it suffices to prove the third inequality in \eqref{Eq4}. We may and do assume that $\chi_u(\mathbf{v}) > -\infty$; otherwise the desired inequality trivially holds.
Write $\mathbf{X} = (X_1,\dots,X_r)$ for simplicity. Set $R := \max\{\Vert X_j\Vert_\infty\,|\,1\leq j \leq r\}$, and let $m \in \mathbb{N}$ and $\delta>0$ be arbitrarily given. We can choose $\delta'>0$ in such a way that $\delta' \leq \delta$ and that, for every $N \in \mathbb{N}$, if $\mathbf{A} \in \Gamma_R(\mathbf{X};N,m,\delta')$ and $\mathbf{V}=(V_1,\dots,V_n) \in \Gamma_u(\mathbf{v};N,2m,\delta')$ are $(3m,\delta')$-free, then
$$
\big|\mathrm{tr}_N(h(V_1\mathbf{A}V_1^*\sqcup\cdots\sqcup V_n\mathbf{A}V_n^*\sqcup\mathbf{A}\sqcup\mathbf{V})) - \tau(h(v_1\mathbf{X}v_1^*\sqcup\cdots\sqcup v_n\mathbf{X}v_n^*\sqcup\mathbf{X}\sqcup\mathbf{v}))\big| < \delta
$$
whenever $h$ is a $*$-monomial in $(n+1)r + n$ indeterminates of degree not greater than $m$. For such a $\delta'>0$ the assumptions here ensure that there exists $N_0 \in \mathbb{N}$ so that $\Gamma_R(\mathbf{X}\,;\,N,m,\delta') \neq \emptyset$ and the probability measure
$$
\nu_N := \frac{1}{\gamma_{\mathrm{U}(N)}^{\otimes n}(\Gamma_u(\mathbf{v};N,2m,\delta'))}\gamma_{\mathrm{U}(N)}^{\otimes n}\!\upharpoonright_{\Gamma_u(\mathbf{v};N,2m,\delta')}
$$
is well-defined whenever $N \geq N_0$. Let $\Xi(N) \in \Gamma_R(\mathbf{X};N,m,\delta')$ be arbitrarily chosen for each $N \geq N_0$. Note that $\Xi(N)$ also falls in $\Gamma_R(\mathbf{X};N,m,\delta) = \Gamma_R(v_i\mathbf{X}v_i^*;N,m,\delta)$ since $\delta' \leq \delta$. Then we define
$$
\Theta(N,3m,\delta') := \{ (V_1,\dots,V_n,U) \in \mathrm{U}(N)^{n+1}\,|\,\text{$\{V_1,\dots,V_n\}$ and $U\Xi(N)U^*$ are $(3m,\delta')$-free}\}.
$$
By what we have remarked at the beginning of this paragraph, we see that
\begin{equation}\label{Eq5}
\begin{aligned}
&(V_1,\dots,V_n,U) \in \Theta(N,3m,\delta')\cap(\Gamma_u(\mathbf{v};N,3m,\delta')\times\mathrm{U}(N)) \\
&\Longrightarrow (V_1 U,\dots, V_n U, U) \in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta).
\end{aligned}
\end{equation}
By Lemma \ref{L1} there exists $N_1 \geq N_0$ so that
$$
\gamma_{\mathrm{U}(N)}\big(\{U \in \mathrm{U}(N)\,|\,(V_1,\dots,V_n,U) \in \Theta(N,3m,\delta')\}\big) > \frac{1}{2}
$$
for every $N \geq N_1$ and every $(V_1,\dots,V_n) \in \mathrm{U}(N)^n$. Consequently, we have
\begin{equation*}
(\nu_N\otimes\gamma_{\mathrm{U}(N)})\big(\Theta(N,3m,\delta')\big) > \frac{1}{2}
\end{equation*}
whenever $N \geq N_1$. Therefore, for every $N \geq N_1$ we have
\begin{equation}\label{Eq6}
\begin{aligned}
&\frac{1}{2}\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_u(\mathbf{v};N,2m,\delta')\big) \\
&<
\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_u(\mathbf{v};N,2m,\delta')\big)\times
(\nu_N\otimes\gamma_{\mathrm{U}(N)})\big(\Theta(N,3m,\delta')\big) \\
&=
\gamma_{\mathrm{U}(N)}^{\otimes (n+1)}\big(\Theta(N,3m,\delta')\cap(\Gamma_u(\mathbf{v};N,3m,\delta')\times\mathrm{U}(N))\big) \\
&\leq
\gamma_{\mathrm{U}(N)}^{\otimes (n+1)}\big(\big\{(V_1,\dots,V_n,U) \in \mathrm{U}(N)^{n+1}\,|\,\\
&\qquad(V_1 U,\dots,V_n U,U) \in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\int_{\mathrm{U}(N)}\gamma_{\mathrm{U}(N)}(dU)\,\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\big\{(V_1,\dots,V_n) \in \mathrm{U}(N)^n\,|\,\\
&\qquad(V_1 U,\dots,V_n U,U) \in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\int_{\mathrm{U}(N)}\gamma_{\mathrm{U}(N)}(dU)\,\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\big\{(U_1,\dots,U_n) \in \mathrm{U}(N)^n\,|\,\\
&\qquad(U_1,\dots,U_n,U) \in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\gamma_{\mathrm{U}(N)}^{\otimes(n+1)}\big(\Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta)\big),
\end{aligned}
\end{equation}
where the fourth line is obtained by \eqref{Eq5} and the sixth is due to the right-invariance of the Haar probability measure $\gamma_{\mathrm{U}(N)}$. Hence
\begin{align*}
\chi_u(\mathbf{v})
&\leq
\varlimsup_{N\to\infty}\frac{1}{N^2}\log\Big(\frac{1}{2}\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_u(\mathbf{v};N,2m,\delta')\big)\Big) \\
&\leq
\varlimsup_{N\to\infty}\frac{1}{N^2}\log\gamma_{\mathrm{U}(N)}^{\otimes(n+1)}\big(\Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta)\big) \\
&\leq
\varlimsup_{N\to\infty}\frac{1}{N^2}\bar{\chi}_{\mathrm{orb},R}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:\mathbf{v};N,m,\delta),
\end{align*}
implying the desired inequality since $m, \delta$ are arbitrary.
\end{proof}
\begin{remark}\label{R1} {\rm Inequality \eqref{Eq4} is not optimal, as the following example shows. Assume that $(\mathcal{M},\tau) = (L(\mathbb{F}_r),\tau_{\mathbb{F}_r})\star(L(\mathbb{Z}_m),\tau_{\mathbb{Z}_m})$ and that $\mathbf{X}$ consists of the canonical free semicircular generators of $L(\mathbb{F}_r)$ and $v$ is a canonical generator of $L(\mathbb{Z}_m)$. Since $\tau(v) = 0$, one easily confirms that $v\mathbf{X}v^*$ and $\mathbf{X}$ are freely independent, so that $\chi_\mathrm{orb}(v\mathbf{X}v^*,\mathbf{X}) = 0$. On the other hand, we know that $\chi_u(v) = -\infty$, since the spectral measure of $v$ has an atom.
}
\end{remark}
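The mechanism behind $\chi_u(v) = -\infty$ in Remark \ref{R1} is quantitative: $\mu_v$ is the uniform measure on the $m$-th roots of unity, so the diagonal pairs $\zeta_1 = \zeta_2$ contribute $\log 0 = -\infty$ to $\Sigma(v)$, while the off-diagonal part is finite and computable from the classical identity $\prod_{k=1}^{m-1}|1-\omega^k| = m$, giving $(1/m^2)\cdot m\log m = (\log m)/m$. A hedged numerical check (with an arbitrary choice of $m$):

```python
import numpy as np

# Uniform measure on the m-th roots of unity: the atoms force
# Sigma(v) = -infinity, but the off-diagonal double sum equals (log m)/m
# because prod_{k=1}^{m-1} |1 - w^k| = m (value of (x^m-1)/(x-1) at x=1).
m = 6
roots = np.exp(2j * np.pi * np.arange(m) / m)
prod = float(np.prod(np.abs(1 - roots[1:])))     # classical identity: equals m
off_diag = sum(
    np.log(abs(roots[j] - roots[k]))
    for j in range(m) for k in range(m) if j != k
) / m**2
print(f"{prod:.6f} {off_diag:.6f}")
```

The product evaluates to $m$ and the off-diagonal sum to $(\log m)/m$, so it is precisely the atoms that push $\Sigma(v)$ down to $-\infty$.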
\begin{remark} \label{R2} {\rm The proof of Theorem \ref{T1} (actually, the idea of obtaining the second equality in \eqref{Eq6}) gives an alternative representation of $\bar{\chi}_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n\,;\,N,m,\delta)$:
\begin{align*}
&\bar{\chi}_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n\,;\,N,m,\delta)\\
&\quad\quad=
\sup_{\mathbf{A}_i \in (M_N(\mathbb{C})^\mathrm{sa})_R^{r(i)}}\log\Big(\gamma_{\mathrm{U}(N)}^{\otimes n-1}\big(\big\{ (U_i)_{i=1}^{n-1} \in \mathrm{U}(N)^{n-1}\,\big| \\
&\qquad\qquad\qquad\qquad (U_1,\dots,U_{n-1},I_N) \in \Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)\big\}\big)\Big) \\
&\quad\quad\,=
\sup_{\mathbf{A}_i \in \Gamma_R(\mathbf{X}_i\,;\,N,m,\delta)}\log\Big(\gamma_{\mathrm{U}(N)}^{\otimes n-1}\big(\big\{ (U_i)_{i=1}^{n-1} \in \mathrm{U}(N)^{n-1}\,\big| \\
&\qquad\qquad\qquad\qquad (U_1,\dots,U_{n-1},I_N) \in \Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\mathbf{A}_i)_{i=1}^n\,;\,N,m,\delta)\big\}\big)\Big),
\end{align*}
when $n \geq 2$.
This corresponds to \cite[Remarks 10.2(c)]{Voiculescu:AdvMath99}.
}
\end{remark}
\section{Discussions}
\subsection{Negative observation} In \cite[Theorem 2.6]{HiaiMiyamotoUeda:IJM09} the following formula was shown when all $\mathbf{X}_i$ are singletons:
\begin{equation}\label{Eq7}
\chi(\mathbf{X}_1\sqcup\dots\sqcup\mathbf{X}_n) = \chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n) + \sum_{i=1}^n \chi(\mathbf{X}_i).
\end{equation}
Note that the same formula trivially holds true (as $-\infty = -\infty$) even when one replaces each singleton $\mathbf{X}_i$ with a hyperfinite non-singleton $\mathbf{X}_i$, that is, $W^*(\mathbf{X}_i)$ is hyperfinite and $\mathbf{X}_i$ consists of at least two elements. Beyond the hyperfiniteness situation, inequality ($\leq$) in \eqref{Eq7} still holds (see \cite[Proposition 2.8]{Ueda:IUMJ14}), but equality unfortunately does not hold in general, as follows. The following argument is attributed to Voiculescu \cite[\S\S14.1]{Voiculescu:AdvMath99}. Let $\mathbf{X}=(X_1,X_2)$ be a semicircular system in $\mathcal{M}$ and $v \in \mathcal{M}$ be a unitary such that $\tau(v) \neq 0$, $\chi_u(v) > -\infty$, and that $\mathbf{X}$ and $v$ are $*$-freely independent. Set $Y_i := vX_i v^*$, $i=1,2$, and $\mathbf{Y} := (Y_1,Y_2)$. By \cite[Proposition 2.5]{Voiculescu:AdvMath99} $W^*(X_1,X_2,Y_1,Y_2) = W^*(X_1,X_2,Y_1) = W^*(X_1,X_2,v)$, and hence by \cite[Proposition 3.8]{Voiculescu:InventMath94}
$$
\chi(X_1,X_2,Y_1,Y_2) = \chi(X_1,X_2,Y_1,I) \leq \chi(X_1,X_2,Y_1) + \chi(I) = -\infty,
$$
where $I$ denotes the unit of $\mathcal{M}$. On the other hand, by Theorem \ref{T1} $\chi_\mathrm{orb}(\mathbf{X},\mathbf{Y}) \geq \chi_u(v) > -\infty$, implying that
$$\chi(\mathbf{X}\sqcup\mathbf{Y}) = -\infty < \chi_\mathrm{orb}(\mathbf{X},\mathbf{Y}) + \chi(\mathbf{X}) + \chi(\mathbf{Y}).
$$
In particular, the quantity ``$C^\omega$'' (or probably ``$C$'' too) in \cite[Remark 2.9]{Ueda:IUMJ14} does not coincide with $\chi_\mathrm{orb}$ in general. An interesting question is whether or not $\chi(\mathbf{X}_1\sqcup\cdots\sqcup\mathbf{X}_n) > -\infty$ is enough to make the exact formula relating $\chi_\mathrm{orb}$ to $\chi$ hold. Note that $\chi_\mathrm{orb} = \widetilde{\chi}_\mathrm{orb}$ (Biane--Dabrowski's variant \cite{BianeDabrowski:AdvMath13}) holds under this assumption. Moreover, the orbital free entropy dimension $\delta_{0,\mathrm{orb}}(\mathbf{X},\mathbf{Y})$ must be zero in this case thanks to \cite[Proposition 4.3(5)]{Ueda:IUMJ14}, since $\chi_\mathrm{orb}(\mathbf{X},\mathbf{Y}) > -\infty$. Also $\delta_0(\mathbf{X}) = \delta_0(\mathbf{Y}) = 2$ is trivial. Note that $\chi_u(v) > -\infty$ forces the probability distribution of $v$ to have no atoms. Thus, it is likely (if one believes that $\delta_0$ gives a $W^*$-invariant) that
$$
\delta_0(\mathbf{X}\sqcup\mathbf{Y}) \overset{?}{=} 3 < 4 = \delta_{0,\mathrm{orb}}(\mathbf{X},\mathbf{Y}) + \delta_0(\mathbf{X}) + \delta_0(\mathbf{Y})
$$
is expected. This means that if $\delta_0(\mathbf{X}_1\sqcup\cdots\sqcup\mathbf{X}_n) = \delta_{0,\mathrm{orb}}(\mathbf{X}_1,\dots,\mathbf{X}_n) + \sum_{i=1}^n \delta_0(\mathbf{X}_i)$ held in general, then the $W^*$-invariance problem of $\delta_0$ would be resolved negatively. Hence it still seems interesting to ask whether $\delta_0(\mathbf{X}\sqcup\mathbf{Y}) \lneqq 4$ or not.
\subsection{Other possible variants of $\chi_\mathrm{orb}$} The above discussion tells us that if a variant of $\chi_\mathrm{orb}$ satisfies Theorem \ref{T1}, then the variant does not satisfy the exact formula relating $\chi_\mathrm{orb}$ to $\chi$ in general. Following our previous work \cite{HiaiMiyamotoUeda:IJM09} with Hiai and Miyamoto one may consider the following variant of $\chi_\mathrm{orb}$: For each $1 \leq i \leq n$, we select an (operator norm-)bounded sequence $\{\Xi_i(N)\}_{N\in\mathbb{N}}$ with $\Xi_i(N) \in (M_N(\mathbb{C})^\mathrm{sa})^{r(i)}$ such that the joint distribution of $\Xi_i(N)$ under $\mathrm{tr}_N$ converges to that of $\mathbf{X}_i$ under $\tau$ as $N\to\infty$. Then we replace $\bar{\chi}_{\mathrm{orb},R}(\mathbf{X}_1,\dots,\mathbf{X}_n\,;\,N,m,\delta)$ in the definition of $\chi_\mathrm{orb}$ with
\begin{equation*}
\begin{aligned}
&\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\Xi_i(N))_{i=1}^n\,;\,N,m,\delta) \\
&\quad\quad:=
\log\Big(\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\Xi_i(N))_{i=1}^n\,;\,N,m,\delta)\big)\Big),
\end{aligned}
\end{equation*}
and define
\begin{equation*}
\begin{aligned}
&\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\Xi_i(N))_{i=1}^n) \\
&\quad\quad:=
\lim_{\substack{m\to\infty \\ \delta\searrow0}} \varlimsup_{N\to\infty}\frac{1}{N^2}\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\Xi_i(N))_{i=1}^n\,;\,N,m,\delta).
\end{aligned}
\end{equation*}
The conditional variant $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:(\Xi_i(N))_{i=1}^n:\mathbf{v})$ is defined exactly in the same fashion as $\chi_\mathrm{orb}(\mathbf{X}_1,\dots,\mathbf{X}_n:\mathbf{v})$. Then we may consider their supremum all over the possible choices of $(\Xi_i(N))_{i=1}^n$ under some suitable constraint as a variant of $\chi_\mathrm{orb}$.
Even if the constraint on the selected sequences of multi-matrices is chosen so that they approximate freely independent copies of the given random self-adjoint multi-variables, the resulting variant of $\chi_\mathrm{orb}$ still satisfies Theorem \ref{T1}, and in turn fails to satisfy the exact formula relating $\chi_\mathrm{orb}$ to $\chi$ in general. More precisely, we can prove the following:
\begin{proposition}\label{P1} Let $\mathbf{X} = (X_j)_{j=1}^r$ be a random self-adjoint multi-variable in $(\mathcal{M},\tau)$ and $\mathbf{v} = (v_1,\dots,v_n)$ be an $n$-tuple of unitaries in $\mathcal{M}$. Assume that $\mathbf{X}$ has f.d.a.~(see Theorem \ref{T1}) and that $\mathbf{X}$ and $\mathbf{v}$ are freely independent. Then there exists a bounded sequence $\{(\Xi_i(N))_{i=1}^{n+1}\}_{N\in\mathbb{N}}$ with $\Xi_i(N) \in (M_N(\mathbb{C})^\mathrm{sa})^r$ such that the joint distribution of $\Xi_1(N)\sqcup\cdots\sqcup\Xi_{n+1}(N)$ under $\mathrm{tr}_N$ converges to that of the $n+1$ freely independent copies $\mathbf{X}_1^f\sqcup\cdots\sqcup\mathbf{X}_{n+1}^f$ of $\mathbf{X}$ {\rm(}n.b., the joint distribution of $\mathbf{X}$ is identical to that of every $v_i\mathbf{X}v_i^*${\rm)} under $\tau$ as $N\to\infty$, and moreover that
\begin{equation*}
\begin{aligned}
&\chi_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}) \\
&\qquad \geq
\chi_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}) \geq
\chi_u(\mathbf{v}).
\chi_u(\mathbf{v}).
\end{aligned}
\end{equation*}
\end{proposition}
\begin{proof}
Let $R > 0$ be sufficiently large. Since $\mathbf{X}$ has f.d.a., Lemma \ref{L1} shows that for each $m \in \mathbb{N}$ and $\delta > 0$ one has $\{((U_i)_{i=1}^{n+1},\mathbf{A}) \in \mathrm{U}(N)^{n+1}\times ((M_N(\mathbb{C})^\mathrm{sa})_R)^r \mid (U_i \mathbf{A}U_i^*)_{i=1}^{n+1} \in \Gamma_R(\mathbf{X}_1^f\sqcup\cdots\sqcup\mathbf{X}_{n+1}^f\,;\,N,m,\delta)\} \neq \emptyset$ for all sufficiently large $N \in \mathbb{N}$. By using this fact, it is easy to choose a bounded sequence $\Xi(N) \in ((M_N(\mathbb{C})^\mathrm{sa})_R)^r$ and a sequence $(W_i(N))_{i=1}^{n+1} \in \mathrm{U}(N)^{n+1}$ in such a way that both the joint distributions of $\Xi(N)$ and of $W_1(N)\Xi(N)W_1(N)^*\sqcup\cdots\sqcup W_{n+1}(N)\Xi(N)W_{n+1}(N)^*$ under $\mathrm{tr}_N$ converge to those of $\mathbf{X}$ and of $\mathbf{X}_1^f\sqcup\cdots\sqcup\mathbf{X}_{n+1}^f$, respectively, under $\tau$ as $N\to\infty$. Set $\Xi_i(N) := W_i(N)\Xi(N)W_i(N)^*$, $1 \leq i \leq n+1$, and we will prove that $(\Xi_i(N))_{i=1}^{n+1}$ is a desired sequence.
For given $m \in \mathbb{N}$ and $\delta >0$, we choose $0 < \delta' < \delta$ as in the proof of Theorem \ref{T1}. Let $\nu_N$ and $\Theta(N,3m,\delta')$ be also chosen exactly in the same way as in the proof of Theorem \ref{T1}. We can choose $N_0 \in \mathbb{N}$ in such a way that $\Xi(N) \in \Gamma_R(\mathbf{X}\,;\,N,m,\delta')$ and $\nu_N$ is well-defined as long as $N \geq N_0$. By the same reasoning as in the proof of Theorem \ref{T1} we have
\begin{equation}\label{Eq8}
\begin{aligned}
&(V_1,\dots,V_n,U) \in \Theta(N,3m,\delta')\cap(\Gamma_u(\mathbf{v};N,3m,\delta')\times\mathrm{U}(N)) \\
&\Longrightarrow (V_1 U,\dots, V_n U, U) \in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi(N),\dots,\Xi(N)):\mathbf{v}\,;\,N,m,\delta) \\
& \Longleftrightarrow (V_1 U W_1(N)^*,\dots, V_n U W_n(N)^*, U W_{n+1}(N)^*) \\
&\qquad\qquad\qquad\qquad\qquad\in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}\,;\,N,m,\delta).
\end{aligned}
\end{equation}
As in the proof of Theorem \ref{T1} again, Lemma \ref{L1} shows that there exists $N_1 \geq N_0$ so that
\begin{equation*}
(\nu_N\otimes\gamma_{\mathrm{U}(N)})\big(\Theta(N,3m,\delta')\big) > \frac{1}{2}
\end{equation*}
whenever $N \geq N_1$. Therefore, for every $N \geq N_1$ we have
\begin{align*}
&\frac{1}{2}\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_u(\mathbf{v};N,2m,\delta')\big) \\
&<
\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\Gamma_u(\mathbf{v};N,2m,\delta')\big)\times
(\nu_N\otimes\gamma_{\mathrm{U}(N)})\big(\Theta(N,3m,\delta')\big) \\
&=
\gamma_{\mathrm{U}(N)}^{\otimes (n+1)}\big(\Theta(N,3m,\delta')\cap(\Gamma_u(\mathbf{v};N,3m,\delta')\times\mathrm{U}(N))\big) \\
&\leq
\gamma_{\mathrm{U}(N)}^{\otimes (n+1)}\big(\big\{(V_1,\dots,V_n,U) \in \mathrm{U}(N)^{n+1}\,|\,\\
&\qquad\qquad\qquad(V_1 U W_1(N)^*,\dots,V_n U W_n(N)^*,UW_{n+1}(N)^*) \\
&\qquad\qquad\qquad\qquad\in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\int_{\mathrm{U}(N)}\gamma_{\mathrm{U}(N)}(dU)\,\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\big\{(V_1,\dots,V_n) \in \mathrm{U}(N)^n\,|\,\\
&\qquad\qquad(V_1 UW_1(N)^*,\dots,V_n UW_n(N)^*,UW_{n+1}(N)^*) \\
&\qquad\qquad\qquad\qquad\in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\int_{\mathrm{U}(N)}\gamma_{\mathrm{U}(N)}(dU)\,\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\big\{(V_1,\dots,V_n) \in \mathrm{U}(N)^n\,|\,\\
&\qquad\qquad(V_1 UW_{n+1}(N)W_1(N)^*,\dots,V_n UW_{n+1}(N)W_n(N)^*,U) \\
&\qquad\qquad\qquad\qquad\in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\int_{\mathrm{U}(N)}\gamma_{\mathrm{U}(N)}(dU)\,\gamma_{\mathrm{U}(N)}^{\otimes n}\big(\big\{(U_1,\dots,U_n) \in \mathrm{U}(N)^n\,|\,\\
&\qquad(U_1,\dots,U_n,U) \in \Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}\,;\,N,m,\delta) \big\}\big) \\
&=
\gamma_{\mathrm{U}(N)}^{\otimes(n+1)}\big(\Gamma_\mathrm{orb}(v_1\mathbf{X}v_1^*,\dots,v_n\mathbf{X}v_n^*,\mathbf{X}:(\Xi_i(N))_{i=1}^{n+1}:\mathbf{v}\,;\,N,m,\delta)\big),
\end{align*}
where the fourth line follows from \eqref{Eq8}, and both the sixth and the seventh from the right-invariance of the Haar probability measure $\gamma_{\mathrm{U}(N)}$. Hence the desired inequality follows as in the proof of Theorem \ref{T1}.
\end{proof}
In view of our work \cite{Ueda:Preprint2016} and Voiculescu's liberation theory \cite{Voiculescu:AdvMath99}, a natural candidate constraint for selecting sequences of multi-matrices is the way in which they approximate $\mathbf{X}_1\sqcup\cdots\sqcup\mathbf{X}_n$ globally, though this constraint probably does not satisfy the exact formula relating $\chi_\mathrm{orb}$ to $\chi$ in general.
\section*{Acknowledgment}
The author would like to thank the referee for his or her careful reading and for pointing out several typos.
\end{document}
\begin{document}
\begin{abstract}
We classify surjective lattice homomorphisms $W\to W'$ between the weak orders on finite Coxeter groups.
Equivalently, we classify lattice congruences $\Theta$ on $W$ such that the quotient $W/\Theta$ is isomorphic to $W'$.
Surprisingly, surjective homomorphisms exist quite generally:
They exist if and only if the diagram of $W'$ is obtained from the diagram of $W$ by deleting vertices, deleting edges, and/or decreasing edge labels.
A surjective homomorphism $W\to W'$ is determined by its restrictions to rank-two standard parabolic subgroups of~$W$.
Despite seeming natural in the setting of Coxeter groups, this determination in rank two is nontrivial.
Indeed, from the combinatorial lattice theory point of view, all of these classification results should appear unlikely \textit{a priori}.
As an application of the classification of surjective homomorphisms between weak orders, we also obtain a classification of surjective homomorphisms between Cambrian lattices and a general construction of refinement relations between Cambrian fans.
\end{abstract}
\title{Lattice homomorphisms between weak orders}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}\label{intro}
The weak order on a finite Coxeter group $W$ is a partial order (in fact, lattice~\cite{orderings}) structure on $W$ that encodes both the geometric structure of the reflection representation of $W$ and the combinatorial group theory of the defining presentation of~$W$.
Recent papers have elucidated the structure of lattice congruences on the weak order \cite{congruence} and applied this understanding to construct fans coarsening the normal fan of the $W$-permutohedron~\cite{con_app},
combinatorial models of cluster algebras of finite type \cite{cambrian,sort_camb,camb_fan}, polytopal realizations of generalized associahedra \cite{HL,HLT}, and sub-Hopf algebras of the Malvenuto--Reutenauer Hopf algebra of permutations \cite{sash,rectangle,Meehan,con_app}.
A~thorough discussion of lattice congruences of the weak order (and more generally of certain posets of regions) is available in \cite{regions9,regions10}.
The purpose of this paper is to classify surjective lattice homomorphisms between the weak orders on two finite Coxeter groups $W$ and $W'$.
Equivalently, we classify the lattice congruences $\Theta$ on a finite Coxeter group $W$ such that the quotient lattice $W/\Theta$ is isomorphic to the weak order on a finite Coxeter group~$W'$.
From the point of view of combinatorial lattice theory, the classification results are quite surprising \textit{a priori}.
As an illustration of the almost miraculous nature of the situation, we begin this introduction with a representative example (Example~\ref{miraculous}), after giving just enough lattice-theoretic details to make the example understandable.
(More lattice-theoretic details are in Section~\ref{shard sec}.)
A \newword{homomorphism} from a lattice $L$ to a lattice $L'$ is a map $\eta:L\to L'$ such that $\eta(x\wedge y)=\eta(x)\wedge\eta(y)$ and $\eta(x\vee y)=\eta(x)\vee\eta(y)$.
A \newword{congruence} on a lattice $L$ is an equivalence relation such that
\[(x_1\equiv y_1\text{ and }x_2\equiv y_2)\text{ implies }\left[(x_1\wedge x_2)\equiv (y_1\wedge y_2)\text{ and }(x_1\vee x_2)\equiv (y_1\vee y_2)\right].\]
Given a congruence $\Theta$ on $L$, the \newword{quotient lattice} $L/\Theta$ is a lattice structure on the set of equivalence classes where the meet $C_1\wedge C_2$ of two classes is the equivalence class containing $x\wedge y$ for any $x\in C_1$ and $y\in C_2$ and the join is described similarly.
When $L$ is a finite lattice, the congruence classes of any congruence $\Theta$ on $L$ are intervals.
The quotient $L/\Theta$ is isomorphic to the subposet of $L$ induced by the set of elements $x$ such that $x$ is the bottom element of its congruence class.
We use the symbol $\lessdot$ for cover relations in $L$ and often call a pair $x\lessdot y$ an \newword{edge} (because it forms an edge in the Hasse diagram of $L$).
If $x\lessdot y$ and $x\equiv y$, then we say that the congruence \newword{contracts} the edge $x\lessdot y$.
Since congruence classes on a finite lattice are intervals, to specify a congruence it is enough to specify which edges the congruence contracts.
Edges cannot be contracted independently; rather, contracting some edge typically forces the contraction of other edges to ensure that the result is a congruence.
Forcing among edge contractions on the weak order is governed entirely\footnote{In a general lattice, forcing might be less local. (See \cite{GratzerPolygon}.)
The weak order is special because it is a \newword{polygonal lattice}.
See \cite[Definition~9-6.1]{regions9}, \cite[Theorem~9-6.5]{regions9}, and \cite[Theorem~10-3.7]{regions10}.} by a local forcing rule in \newword{polygons}.
A polygon is an interval such that the underlying graph of the Hasse diagram of the interval is a cycle.
There are two \newword{top edges} in a polygon, the two that are incident to the maximum, and two \newword{bottom edges}, incident to the minimum.
The remaining edges in the interval, if there are any, are \newword{side edges}.
The forcing rule for polygons is the following: if a top (respectively bottom) edge is contracted, then the opposite bottom (respectively top) edge must also be contracted, and all side edges (if there are any) must be contracted.
One case of the rule is illustrated in Figure~\ref{poly force}, where shading indicates contracted edges.
(The other case of the rule is dual to the illustration.)
\begin{figure}
\caption{The forcing rule for edge contractions in the weak order}
\label{poly force}
\end{figure}
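The polygon forcing rule can be checked by brute force on the smallest nontrivial weak order. The following computational sketch (an illustration, not part of the paper; permutations are encoded as tuples in one-line notation) verifies that contracting a single side edge of the hexagonal weak order on $S_3$ already yields a congruence, while contracting a bottom edge alone does not, since the forcing rule demands further contractions.

```python
from itertools import permutations

# Weak order on S_3 (a single hexagon): u <= w iff inv(u) is contained in
# inv(w), where inv(w) is the set of value pairs (a, b) with a < b that
# appear out of order in the one-line notation of w.
def inv(w):
    return {(w[j], w[i]) for i in range(3) for j in range(i + 1, 3)
            if w[i] > w[j]}

elems = list(permutations((1, 2, 3)))
leq = lambda u, w: inv(u) <= inv(w)

def meet(u, w):  # greatest lower bound: the lower bound of maximal length
    return max((x for x in elems if leq(x, u) and leq(x, w)),
               key=lambda x: len(inv(x)))

def join(u, w):  # least upper bound: the upper bound of minimal length
    return min((x for x in elems if leq(u, x) and leq(w, x)),
               key=lambda x: len(inv(x)))

def is_congruence(classes):
    cls = {x: i for i, c in enumerate(classes) for x in c}
    return all(cls[meet(x1, x2)] == cls[meet(y1, y2)] and
               cls[join(x1, x2)] == cls[join(y1, y2)]
               for c1 in classes for x1 in c1 for y1 in c1
               for c2 in classes for x2 in c2 for y2 in c2)

def with_singletons(nontrivial):  # pad a partial partition with singletons
    return nontrivial + [{x} for x in elems
                         if all(x not in c for c in nontrivial)]

side = [{(2, 1, 3), (2, 3, 1)}]    # contract a side edge of the hexagon
bottom = [{(1, 2, 3), (2, 1, 3)}]  # contract a bottom edge only
print(is_congruence(with_singletons(side)))    # True: sides force nothing
print(is_congruence(with_singletons(bottom)))  # False: forcing demands more
```

The failure in the second case is exactly the forcing rule at work: contracting the bottom edge $123\lessdot 213$ forces the opposite top edge and the side edges of the hexagon.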
\begin{example}\label{miraculous}
Consider a Coxeter group $W$ of type $B_3$.
Figure~\ref{weakB3diagram}.a is a close-up of a certain order ideal in the weak order on $W$.
\begin{figure}
\caption{a: The defining presentation of $W$, encoded in an order ideal in the weak order. b: Contracting two edges in the order ideal.}
\label{weakB3diagram}
\end{figure}
This ideal contains all of the information about the Coxeter diagram of $W$.
Namely, the presence of an octagon indicates an edge with label 4, the hexagon indicates an edge with label 3, and the square indicates a pair of vertices not connected by an edge.
The Coxeter diagram of a Coxeter group of type $A_3$ has the same diagram, except that the label 4 is replaced by 3.
Informally, we can turn the picture in Figure~\ref{weakB3diagram}.a into the analogous picture for $A_3$, by contracting two side edges of the octagon to form a hexagon, as indicated by shading in Figure~\ref{weakB3diagram}.b.
If we take the same two edges in the whole weak order on $W$, we can use the polygonal forcing rules to find the finest lattice congruence that contracts the two edges.
This congruence is illustrated in Figure~\ref{B3toA3}.a.
\begin{figure}
\caption{a: The smallest congruence on $W$ contracting the edges shaded in Figure~\ref{weakB3diagram}.b. b: The quotient of the weak order modulo this congruence.}
\label{B3toA3}
\end{figure}
\textit{A priori}, we shouldn't expect this congruence to have any significance, but, surprisingly, the quotient, shown in Figure~\ref{B3toA3}.b, of the weak order modulo this congruence is isomorphic to the weak order on a Coxeter group of type $A_3$.
(Recall from above that the lattice quotient is isomorphic to the subposet consisting of elements that are at the bottom of their congruence class.)
To recap:
We start with the weak order on $B_3$, look at some polygons at the bottom of the weak order that encode the Coxeter diagram for $B_3$, and na\"{i}vely contract edges of these polygons to make one of the polygons smaller so that the polygons instead encode the Coxeter diagram for $A_3$.
Miraculously, the contracted edges generate a congruence such that the quotient is the weak order on $A_3$.
\end{example}
In general, a \newword{diagram homomorphism} starts with the Coxeter diagram of a Coxeter system $(W,S)$, deletes vertices, decreases labels on edges, and/or erases edges, and relabels the vertices to obtain the Coxeter diagram of some Coxeter system $(W',S')$.
(When no vertices are deleted, no labels are decreased, and no edges are erased, this is a \newword{diagram isomorphism} and when $(W',S')=(W,S)$ it is a \newword{diagram automorphism}.)
For brevity in what follows, we will say ``a diagram homomorphism from $(W,S)$ to $(W',S')$'' to mean ``a diagram homomorphism from the Coxeter diagram of $(W,S)$ to the Coxeter diagram of $(W',S')$.''
The first main results of the paper are the following theorem and several more detailed versions of it.
\begin{theorem}\label{main}
Given finite Coxeter systems $(W,S)$ and $(W',S')$, there exists a surjective lattice homomorphism from the weak order on $W$ to the weak order on $W'$ if and only if there exists a diagram homomorphism from $(W,S)$ to $(W',S')$.
\end{theorem}
\begin{remark}\label{auto}
A restriction of Theorem~\ref{main} to \emph{isomorphisms} is well-known and extends to a characterization of meet-semilattice isomorphisms of the weak order on many infinite Coxeter groups.
See \cite[Corollary~3.2.6]{Bj-Br}.
\end{remark}
\begin{remark}\label{what's hard}
The existence of surjective homomorphisms between weak orders is not difficult to prove, \textit{a posteriori}:
We give explicit maps.
However, the other results in the classification need more machinery, specifically the machinery of shards, as explained in Section~\ref{shard sec}.
Furthermore, without the machinery of shards, we would not be able to find the explicit homomorphisms that prove existence.
\end{remark}
In order to make more detailed classification statements, we first give a factorization result for any surjective lattice homomorphism between weak orders.
Given a finite Coxeter group $W$ and a standard parabolic subgroup $W_J$, the \newword{parabolic homomorphism} $\eta_J$ is the map taking $w\in W$ to $w_J\in W_J$, where $w_J$ is the parabolic factor in the usual factorization of $w$ as an element of the parabolic subgroup times an element of the quotient.
A parabolic homomorphism corresponds to a diagram homomorphism that only deletes vertices from the diagram of~$W$.
An \newword{atom} in a finite lattice is an element that covers the minimal element $\hat0$.
We will call a homomorphism of finite lattices \newword{compressive} if it is surjective and restricts to a bijection between the sets of atoms of the two lattices.
(The term is an analogy to the physical process of compression where atoms are not created or destroyed but are brought closer together.
If $\eta:L\to L'$ is compressive, then $\eta$ moves two atoms $a_1,a_2$ of $L$ weakly closer in the sense that the interval below $\eta(a_1)\vee\eta(a_2)$ has weakly fewer elements than the interval below $a_1\vee a_2$.)
In particular, a compressive homomorphism between weak orders on Coxeter systems $(W,S)$ and $(W',S')$ is a surjective homomorphism $W\to W'$ that restricts to a bijection between $S$ and $S'$.
In Section~\ref{delete vert sec}, we prove the following theorem.
\begin{theorem}\label{para factor}
Let $\eta:W\to W'$ be a surjective lattice homomorphism and let $J=\set{s\in S:\eta(s)\neq 1'}$.
Then $\eta$ factors as $\eta|_{W_J}\circ\eta_J$.
The map $\eta|_{W_J}$ (the restriction of $\eta$ to $W_J$) is a compressive homomorphism.
\end{theorem}
Parabolic homomorphisms and their associated congruences are well understood.
(See \cite[Section~6]{congruence}.)
The task, therefore, becomes to understand compressive homomorphisms between Coxeter groups.
To study compressive homomorphisms from $(W,S)$ to $(W',S')$, we may as well take $S'=S$ and require $\eta$ to restrict to the identity on $S$.
For each $r,s\in S$, let $m(r,s)$ be the order of $rs$ in $W$, and let $m'(r,s)$ be the order of $rs$ in $W'$.
Elementary considerations show that if $\eta:W\to W'$ is compressive, then $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$.
(See Proposition~\ref{diagram facts}.)
Thus a compressive homomorphism corresponds to a diagram homomorphism that only erases edges from and/or reduces edge labels on the diagram of $W$.
More surprising, this property is sufficient to guarantee the existence of a compressive homomorphism.
The following theorem shows that Example~\ref{miraculous} is typical, rather than unusual.
Together with Theorem~\ref{para factor}, it implies Theorem~\ref{main} and adds additional detail.
\begin{theorem}\label{existence}
Suppose $(W,S)$ and $(W',S)$ are finite Coxeter systems.
Then there exists a compressive homomorphism from $W$ to $W'$, fixing $S$, if and only if $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$.
If so, then the homomorphism can be chosen so that the associated congruence on $W$ is homogeneous of degree~$2$.
\end{theorem}
We will review the definition of homogeneous congruences in Section~\ref{shard sec}.
Informally, a homogeneous congruence of degree~$2$ is a congruence that is determined by contracting edges located in the order ideal of the weak order that describes the diagram of $W$, as in Figure~\ref{weakB3diagram}.
The situation is perhaps best appreciated by analogy:
Showing that $W'$ is isomorphic to the quotient of $W$ modulo a homogeneous congruence of degree~$2$ is analogous to finding that a graded ring $R$ is isomorphic to another graded ring $R'$ modulo an ideal generated by homogeneous elements of degree~$2$.
It should be noted, however, that lattice congruences are in general more complicated than ring congruences because the classes of a lattice congruence are not in general defined by an ideal.
See \cite[Sections~II.3--4]{Birkhoff}.
Given $(W,S)$ and $(W',S)$ with $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$, there may be several homomorphisms from $W$ to $W'$ whose associated congruence is homogeneous of degree $2$.
There may also be several homomorphisms whose congruence is not homogeneous.
(In these non-homogeneous cases, the degree of the congruence is always $3$.)
These several possibilities are well-characterized, as we now explain.
Elementary considerations show that the restriction of a compressive homomorphism to any standard parabolic subgroup is still compressive.
(See Proposition~\ref{diagram facts}.)
It turns out that compressive homomorphisms of Coxeter groups are determined by their restrictions to rank-two standard parabolic subgroups.
\begin{theorem}\label{diagram uniqueness}
Let $(W,S)$ and $(W',S)$ be finite Coxeter systems with $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$.
For each $\set{r,s}\subseteq S$, fix a surjective homomorphism $\eta_\set{r,s}$ from $W_{\set{r,s}}$ to $W'_{\set{r,s}}$ with $\eta_\set{r,s}(r)=r$ and $\eta_\set{r,s}(s)=s$.
Then
there is at most one homomorphism $\eta:W\to W'$ such that the restriction of $\eta$ to $W_{\set{r,s}}$ equals $\eta_\set{r,s}$ for each pair $r,s\in S$.
\end{theorem}
As will be apparent in Section~\ref{dihedral sec}, for each pair $\set{r,s}\subseteq S$ with $r\neq s$, there are exactly $\binom{a}{b}^2$ choices of $\eta_\set{r,s}$, where $a=m(r,s)-2$ and $b=m(r,s)-m'(r,s)$.
In Example~\ref{miraculous}, there are four ways to choose all of the maps $\eta_\set{r,s}$:
For both pairs $r,s$ with $m(r,s)\le 3$, we must choose the identity map.
For the pair $r,s$ with $m(r,s)=4$ and $m'(r,s)=3$, there are four choices, corresponding to the four ways to contract one ``left'' side edge and one ``right'' side edge in the octagonal interval of Figure~\ref{weakB3diagram}.a.
Theorem~\ref{diagram uniqueness} says, in particular, that in the example there are at most four homomorphisms that fix $S$ pointwise.
Combining Theorem~\ref{diagram uniqueness} with Theorem~\ref{para factor} leads immediately to the following more general statement.
\begin{cor}\label{surjective uniqueness}
Let $W$ and $W'$ be finite Coxeter groups.
A surjective homomorphism from the weak order on $W$ to the weak order on $W'$ is determined by its restrictions to rank-two standard parabolic subgroups.
\end{cor}
The statement of Theorem~\ref{diagram uniqueness} on the uniqueness of compressive homomorphisms, given their restrictions to rank-two standard parabolic subgroups, is remarkably close to being an existence and uniqueness theorem, in the sense that the phrase ``at most one'' can almost be replaced with ``exactly one.''
The only exceptions arise when $W$ has a standard parabolic subgroup of type $H_3$ such that the corresponding standard parabolic subgroup of $W'$ is of type $B_3$.
In particular, adding the hypothesis that $W$ and $W'$ are crystallographic turns Theorem~\ref{diagram uniqueness} into an existence and uniqueness theorem (stated as Theorem~\ref{existence uniqueness crys}).
Less generally, when $W$ and $W'$ are simply laced (meaning that all edges in their diagrams are unlabeled), the existence and uniqueness theorem holds, and in fact this simply laced version of the theorem (Corollary~\ref{existence uniqueness simply}) has a uniform proof, given in Sections~\ref{erase edge sec}--\ref{shard sec}.
The remainder of the classification is proved, type-by-type in the classification of finite Coxeter groups, in Sections~\ref{dihedral sec}, \ref{Bn Sn+1 sec}, and \ref{exceptional sec}.
\begin{remark}\label{type-by-type}
It is disappointing that some of these proofs are not uniform.
However, as described above, the classification of surjective homomorphisms between weak orders on finite Coxeter groups is itself not uniform.
While nice things happen quite generally, the exceptions in some types suggest that uniform arguments probably don't exist outside of the simply-laced case (Corollary~\ref{existence uniqueness simply}).
In particular, although Theorem~\ref{existence uniqueness crys} is a uniform statement about the crystallographic case, there is no indication that the combinatorial lattice theory of the weak order detects the crystallographic case, so a uniform proof would be surprising indeed.
\end{remark}
In Section~\ref{camb sec}, we use the classification of surjective lattice homomorphisms between weak orders to classify surjective lattice homomorphisms between Cambrian lattices.
Cambrian lattices are quotients of the weak order modulo certain congruences called Cambrian congruences.
A Cambrian lattice can also be realized as a sublattice of the weak order consisting of sortable elements \cite{sortable,sort_camb,camb_fan}.
The significance of the Cambrian lattices begins with a collection of results, conjectured in~\cite{cambrian} and proved in~\cite{HLT,sortable,sort_camb,camb_fan}, which say that the Cambrian lattices and the related Cambrian fans encode the combinatorics and geometry of generalized associahedra of~\cite{ga}, which in turn provide a combinatorial model \cite{ga,ca2,camb_fan,framework} for cluster algebras of finite type.
The classification of surjective lattice homomorphisms between Cambrian lattices, given in Theorems~\ref{camb para factor}, \ref{camb exist unique}, and~\ref{camb diagram}, parallels the classification of surjective lattice homomorphisms between weak orders.
The main difference is that the Cambrian lattice results have uniform statements in terms of \newword{oriented diagram homomorphisms}.
(However, our proofs rely on the non-uniform proofs given earlier for the weak order.)
An example of a compressive homomorphism between Cambrian lattices appears as Example~\ref{camb hom ex}, which continues Example~\ref{miraculous}.
Interesting geometric consequences are obtained by combining the results of this paper with \cite[Theorem~1.1]{con_app}.
The latter theorem states that every lattice congruence $\Theta$ on the weak order on $W$ defines a polyhedral fan $\mathcal{F}_\Theta$ that coarsens the fan $\mathcal{F}(W)$ defined by the reflecting hyperplanes of $W$.
The theorem also describes the interaction between the combinatorics/geometry of the fan $\mathcal{F}_\Theta$ and the combinatorics of the quotient lattice.
(In type A, the fan $\mathcal{F}_\Theta$ is known to be polytopal~\cite{PS} for any $\Theta$, but no general polytopality result is known in other types.)
The fact that a surjective lattice homomorphism $\eta:W\to W'$ exists whenever $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$ leads to explicit constructions of a fan $\mathcal{F}_\Theta$ coarsening $\mathcal{F}(W)$ such that $\mathcal{F}_\Theta$ is combinatorially isomorphic to the fan $\mathcal{F}(W')$ defined by the reflecting hyperplanes of $W'$.
An example of this geometric point of view (corresponding to Example~\ref{miraculous}) appears as Example~\ref{B3 to A3 shard}.
Working along the same lines for surjective congruences between Cambrian lattices, we obtain refinement relationships between Cambrian fans (fans associated to Cambrian congruences).
We conclude the introduction by describing these refinement relationships in terms of dominance relationships between Cartan matrices.
A Cartan matrix $A=[a_{ij}]$ \newword{dominates} a Cartan matrix $A'=[a'_{ij}]$ if $|a_{ij}|\ge |a'_{ij}|$ for all $i$ and $j$.
The dominance relation on Cartan matrices implies that $m(r,s)\ge m'(r,s)$ for all $r,s\in S$ in the corresponding Weyl groups.
In the following proposition, $\Phi(A)$ is the full root system for $A$, including any imaginary roots.
In this paper, we only use the proposition in finite type, where there are no imaginary roots.
The proposition follows from known facts about Kac-Moody Lie algebras (see Section~\ref{crys case}), and has also been pointed out as \cite[Lemma~3.5]{Marquis}.
\begin{prop}\label{dom subroot}
Suppose $A$ and $A'$ are symmetrizable Cartan matrices such that $A$ dominates~$A'$.
If $\Phi(A)$ and $\Phi(A')$ are both defined with respect to the same simple roots $\alpha_i$, then $\Phi(A)\supseteq\Phi(A')$ and $\Phi_+(A)\supseteq\Phi_+(A')$.
\end{prop}
Proposition~\ref{dom subroot} may appear to be obviously false to someone who is familiar with root systems.
To clarify, we emphasize that defining both $\Phi(A)$ and $\Phi(A')$ with respect to the same simple roots means identifying the root space of $\Phi(A)$ with the root space of $\Phi(A')$ \emph{by identifying the bases $\set{\alpha_i}$}.
Thus we may restate the proposition as follows: The set of simple root coordinate vectors of roots in $\Phi(A')$ is a subset of the set of simple root coordinate vectors of roots in $\Phi(A)$.
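In rank two, the restated proposition is easy to sanity-check by machine. The sketch below (an illustration, not part of the paper) generates all roots of a finite-type Cartan matrix in simple-root coordinates by closing the simple roots under the simple reflections, which change only the $i$-th coordinate via $\beta_i \mapsto \beta_i-\sum_j a_{ij}\beta_j$, and confirms that the coordinate vectors of the roots of $A_2$ form a subset of those of $B_2$, whose Cartan matrix dominates that of $A_2$.

```python
def roots(cartan):
    """All roots of a finite-type Cartan matrix, as integer coordinate
    vectors in the basis of simple roots: close the simple roots under
    the simple reflections s_i, where s_i changes only coordinate i,
    sending b_i to b_i - sum_j cartan[i][j] * b_j."""
    n = len(cartan)
    simple = [tuple(int(i == j) for j in range(n)) for i in range(n)]
    found, frontier = set(simple), list(simple)
    while frontier:
        b = frontier.pop()
        for i in range(n):
            c = list(b)
            c[i] = b[i] - sum(cartan[i][j] * b[j] for j in range(n))
            c = tuple(c)
            if c not in found:
                found.add(c)
                frontier.append(c)
    return found

A2 = [[2, -1], [-1, 2]]  # type A_2
B2 = [[2, -1], [-2, 2]]  # type B_2; dominates A2 entry by entry
assert roots(A2) <= roots(B2)  # the containment of the proposition
print(sorted(roots(A2)))  # 6 roots of A_2
print(sorted(roots(B2)))  # 8 roots of B_2
```

Here $A_2$ has the six roots $\pm(1,0)$, $\pm(0,1)$, $\pm(1,1)$ and $B_2$ additionally has $\pm(1,2)$, so the containment holds coordinate-wise, exactly as the restated proposition asserts.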
The refinement result on Cambrian lattices is best expressed in terms of co-roots, so we rephrase Proposition~\ref{dom subroot} as Proposition~\ref{dom subcoroot}, which asserts a containment relation among dual root systems $\Phi\spcheck(A)$ and $\Phi\spcheck(A')$ when we identify the simple co-roots instead of the simple roots.
The following result is proved by constructing (in Theorem~\ref{weak hom subroot}) the appropriate homomorphism from $W$ to $W'$ and using it to construct a homomorphism of Cambrian lattices.
\begin{theorem}\label{camb fan coarsen}
Suppose $A$ and $A'$ are Cartan matrices such that $A$ dominates $A'$ and suppose $W$ and $W'$ are the associated groups, both generated by the same set $S$.
Suppose $c$ and $c'$ are Coxeter elements of $W$ and $W'$ respectively that can be written as a product of the elements of $S$ in the same order.
Choose a root system $\Phi(A)$ and a root system $\Phi(A')$ so that the simple \emph{co}-roots are the same for the two root systems.
Construct the Cambrian fan for $(A,c)$ by coarsening the fan determined by the Coxeter arrangement for $\Phi(A)$ and construct the Cambrian fan for $(A',c')$ by coarsening the fan determined by the Coxeter arrangement for $\Phi(A')$.
Then the Cambrian fan for $(A,c)$ refines the Cambrian fan for $(A',c')$.
Whereas the codimension-$1$ faces of the Cambrian fan for $(A,c)$ are orthogonal to co-roots (i.e.\ elements of $\Phi\spcheck(A)$), the Cambrian fan for $(A',c')$ is obtained by removing all codimension-$1$ faces orthogonal to elements of $\Phi\spcheck(A)\setminus\Phi\spcheck(A')$.
\end{theorem}
As mentioned above and as explained in \cite[Section~5]{framework}, Cambrian fans provide a combinatorial model for cluster algebras of finite type.
The cluster-algebraic consequences of Theorem~\ref{camb fan coarsen} are considered in \cite{dominance}.
Indeed, inspired by Theorem~\ref{camb fan coarsen}, the paper \cite{dominance} studies much more general cluster-algebraic phenomena related to dominance relations among matrices.
\begin{remark}\label{simion rem}
To the author's knowledge, the first appearance in the literature of a nontrivial surjective lattice homomorphism between finite Coxeter groups is a map found in Rodica Simion's paper \cite{Simion}.
(See Section~\ref{simion sec}.)
Simion's motivations were not lattice-theoretic, so she did not show that the map is a surjective lattice homomorphism.
However, she did prove several results that hint at lattice theory, including the fact that fibers of the map are intervals and that the \emph{order-theoretic} quotient of $B_n$ modulo the fibers of the map is isomorphic to $S_{n+1}$.
It was Simion's map that first alerted the author to the fact that interesting homomorphisms exist.
\end{remark}
\section{Deleting vertices}\label{delete vert sec}
In this section, we develop the most basic theory of surjective lattice homomorphisms between weak orders, leading to the proof of Theorem~\ref{para factor}, which factors a surjective homomorphism into a parabolic homomorphism and a compressive homomorphism.
We assume the standard background about Coxeter groups and the weak order, which is found, for example, in~\cite{Bj-Br}.
(For an exposition tailored to the point of view of this paper, see~\cite{regions10}.)
As we go, we introduce background on the combinatorics of homomorphisms and congruences of finite lattices.
Proofs of assertions not proved here are found in \cite[Section~9-5]{regions9}.
Let $(W,S)$ be a Coxeter system with identity element $1$.
The usual length function on $W$ is written $\ell$.
The pairwise orders of elements $r,s\in S$ are written $m(r,s)$.
The symbol $W$ will denote not only the group $W$ but also a partial order, the (right) weak order on the Coxeter group $W$.
This is the partial order on $W$ whose cover relations are of the form $w\lessdot ws$ for all $w\in W$ and $s\in S$ with $\ell(w)<\ell(ws)$.
The set $T$ of reflections of $W$ is $\set{wsw^{-1}:w\in W,s\in S}$.
The inversion set of an element $w\in W$ is $\operatorname{inv}(w)=\set{t\in T:\ell(tw)<\ell(w)}$.
An element is uniquely determined by its inversion set.
The weak order on $W$ corresponds to containment order on inversion sets.
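As an independent check (not in the paper), one can verify by brute force in the symmetric group $S_4$ that the containment order on inversion sets coincides with the order generated by the cover relations $w\lessdot ws$, and that each element is determined by its inversion set. Permutations are encoded as tuples in one-line notation.

```python
from itertools import permutations

n = 4
elems = list(permutations(range(1, n + 1)))

def inv(w):  # inversion set as value pairs (a, b), a < b, out of order in w
    return frozenset((w[j], w[i]) for i in range(n)
                     for j in range(i + 1, n) if w[i] > w[j])

# Cover relations w <. ws: right multiplication by a simple generator
# swaps positions i, i+1 and must increase length (= number of inversions).
cover_up = {w: [w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                for i in range(n - 1) if w[i] < w[i + 1]] for w in elems}

def up_set(u):  # all w with u <= w in the order generated by the covers
    seen, stack = {u}, [u]
    while stack:
        for y in cover_up[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

# The two descriptions of the weak order agree ...
assert all((w in up_set(u)) == (inv(u) <= inv(w))
           for u in elems for w in elems)
# ... and each element is determined by its inversion set.
assert len({inv(w) for w in elems}) == len(elems)
print("checked", len(elems) ** 2, "pairs")
```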
The minimal element of $W$ is $1$ and the maximal element is $w_0$.
We have $w_0=\bigvee S$.
Given $J\subseteq S$, the standard parabolic subgroup generated by $J$ is written $W_J$.
This is, in particular, a lower interval in $W$.
The maximal element of $W_J$ is $w_0(J)$, which equals $\bigvee J$.
We need a second Coxeter system $(W',S')$, and we use the same notation for $W'$ as for $W$, with primes added to distinguish the groups.
As a first step, we prove the following basic facts:
\begin{prop}\label{basic facts}
Let $\eta:W\to W'$ be a surjective lattice homomorphism.
Then
\begin{enumerate}
\item $\eta(1)=1'$ and $\eta(w_0)=w'_0$.
\item $S'\subseteq\eta(S)\subseteq (S'\cup\set{1'})$.
\item If $r$ and $s$ are distinct elements of $S$ with $\eta(r)=\eta(s)$, then $\eta(r)=\eta(s)=1'$.
\item If $J\subseteq S$, then $\eta$ restricts to a surjective homomorphism $W_J\to W'_{\eta(J)\setminus\set{1'}}$.
\item $m'(\eta(r),\eta(s))\le m(r,s)$ for each pair $r,s\in S$ with $\eta(r)\neq1'$ and $\eta(s)\neq1'$.
\end{enumerate}
\end{prop}
\begin{proof}
A surjective homomorphism of finite lattices takes the minimal element to the minimal element and the maximal element to the maximal element, so (1) holds.
Suppose $s\in S$ has $\eta(s)\not\in S'\cup\set{1'}$.
Then there exists $s'\in S'$ such that $\eta(s)>s'$.
Since $\eta$ is surjective, there exists $w\in W$ such that $\eta(w)=s'$.
Since $s$ is an atom, $s\wedge w$ is either $1$ or $s$, while $\eta(s\wedge w)=\eta(s)\wedge\eta(w)=\eta(s)\wedge s'=s'$; thus either $s'=\eta(1)=1'$ or $s'=\eta(s)$, and either way we contradict the choice of $s'$. This contradiction shows that $\eta(S)\subseteq S'\cup\set{1'}$.
We have $\bigvee\eta(S)=\eta(w_0)=w'_0$.
But $\bigvee J'<w'_0$ for any proper subset $J'$ of $S'$, so $S'\subseteq\eta(S)$, and we have proved (2).
To prove (3), let $r$ and $s$ be distinct elements of $S$ with $\eta(r)=\eta(s)$.
Then $1'=\eta(1)=\eta(r\wedge s)=\eta(r)\wedge\eta(s)=\eta(r)$.
Applying $\eta$ to $w_0(J)$ yields $\eta(\bigvee J)$, which equals $\bigvee_{s\in J}\eta(s)=w'_0(\eta(J)\setminus\set{1'})$.
Since $W_J$ is the interval $[1,w_0(J)]$ and $\eta$ is order-preserving, $\eta(W_J)$ is contained in $[1,w'_0(\eta(J)\setminus\set{1'})]=W'_{\eta(J)\setminus\set{1'}}$.
If $w'\in W'_{\eta(J)\setminus\set{1'}}$, then since $\eta$ is surjective, there exists $w\in W$ such that $\eta(w)=w'$.
Then $w_0(J)\wedge w$ is in $W_J$, and $\eta(w_0(J)\wedge w)=w'_0(\eta(J)\setminus\set{1'})\wedge w'=w'$.
We have proved (4).
If $\eta(r)\neq1'$ and $\eta(s)\neq1'$, then (3) says that $\eta(r)\neq\eta(s)$ and (4) says that $\eta$ restricts to a surjective homomorphism from the rank-two standard parabolic subgroup $W_{\set{r,s}}$ to the rank-two standard parabolic subgroup $W'_{\set{\eta(r),\eta(s)}}$.
Thus $|W_{\set{r,s}}|\ge|W'_{\set{\eta(r),\eta(s)}}|$.
This is equivalent to (5).
\end{proof}
Given $J\subseteq S$, for any $w\in W$, there is a unique factorization $w=w_J\cdot{}^J\!w$ that maximizes $\ell(w_J)$ subject to the constraints $\ell(w_J)+\ell({}^J\!w)=\ell(w)$ and $w_J\in W_J$.
The element $w_J$ is also the unique element of $W$ (and the unique element of $W_J$) whose inversion set is $\operatorname{inv}(w)\cap W_J$.
Let $\eta_J:W\to W_J$ be the map sending $w$ to $w_J$.
We call $\eta_J$ a \newword{parabolic homomorphism}.
We now illustrate parabolic homomorphisms in terms of the usual combinatorial representations for types $A_n$ and $B_n$ and in terms of the usual geometric representation for type $H_3$.
\begin{example}\label{An para}
We realize a Coxeter group of type $A_n$ in the usual way as the symmetric group $S_{n+1}$ of permutations of $\set{1,\ldots,n+1}$, with simple generators $S=\set{s_i:i=1,\ldots,n}$, where each $s_i$ is the adjacent transposition $(i\,\,\,\,i\!+\!1)$.
We write permutations $\pi$ in one-line notation $\pi=\pi_1\pi_2\cdots\pi_{n+1}$ with each $\pi_i$ standing for $\pi(i)$.
Choose some $k\in\set{1,\ldots,n}$ and let $J=\set{s_1,\ldots,s_{k-1}}\cup\set{s_{k+1},\ldots,s_n}$.
The map $\eta_J$ corresponds to deleting the vertex $s_k$ from the Coxeter diagram for $A_n$, thus splitting it into components, one of type $A_{k-1}$ and one of type $A_{n-k}$.
The map $\eta_J$ takes $\pi$ to $(\sigma,\tau)\in S_k\times S_{n+1-k}$, where $\sigma$ is the permutation of $\set{1,2,\ldots,k}$ given by deleting from the sequence $\pi_1\pi_2\cdots\pi_{n+1}$ all values greater than $k$ and $\tau$ is the permutation of $\set{1,2,\ldots,n+1-k}$ given by deleting from $\pi_1\pi_2\cdots\pi_{n+1}$ all values less than $k+1$ and subtracting $k$ from each value.
For example, if $n=7$ and $k=3$, then $\eta_J(58371426)=(312,25413)$.
The map $\eta_J$ is similarly described for more general~$J$.
\end{example}
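The combinatorial description of $\eta_J$ in Example~\ref{An para} can be checked by machine. The following is a minimal Python sketch of $\eta_J$ for $J=S\setminus\set{s_k}$ in type $A_n$; the function name \texttt{eta\_parabolic\_A} is our own, chosen for illustration.

```python
def eta_parabolic_A(pi, k):
    """Parabolic homomorphism eta_J for type A_n, J = S - {s_k}.

    pi: a permutation of {1, ..., n+1} in one-line notation.
    Returns (sigma, tau): sigma keeps the values at most k, tau keeps the
    values greater than k and shifts them down by k, as in the example.
    """
    sigma = tuple(v for v in pi if v <= k)
    tau = tuple(v - k for v in pi if v > k)
    return sigma, tau

# The example from the text: n = 7, k = 3, pi = 58371426.
print(eta_parabolic_A((5, 8, 3, 7, 1, 4, 2, 6), 3))
# -> ((3, 1, 2), (2, 5, 4, 1, 3))
```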
\begin{example}\label{Bn para}
As usual, we realize a Coxeter group of type $B_n$ as the group of \newword{signed permutations}.
These are permutations $\pi$ of $\set{\pm1,\pm2,\ldots,\pm n}$ with $\pi(-i)=-\pi(i)$ for all $i$.
The simple generators are the permutations $s_0=(1\,\,\,-\!1)$ and $s_i=(-i\!-\!1\,\,\,\,-i)(i\,\,\,\,i\!+\!1)$ for $i=1,\ldots,n-1$.
A signed permutation $\pi$ is determined by its one-line notation $\pi=\pi_1\pi_2\cdots\pi_n$, where each $\pi_i$ again stands for $\pi(i)$.
For $J=\set{s_0,\ldots,s_{k-1}}\cup\set{s_{k+1},\ldots,s_{n-1}}$, the map $\eta_J$ corresponds to deleting the vertex $s_k$, splitting the diagram of $B_n$ into components of types $B_k$ and $A_{n-k-1}$.
The map $\eta_J$ takes $\pi$ to $(\sigma,\tau)$, where $\sigma$ is the signed permutation whose one-line notation is the restriction of the sequence $\pi_1\pi_2\cdots\pi_n$ to values with absolute value less than $k+1$ and $\tau$ is the permutation given by restricting the sequence $(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\pi_1\cdots\pi_{n-1}\pi_n$ to positive values greater than $k$, and then subtracting $k$ from each value.
For example, if $n=8$, $k=4$, and $\pi=(-4)(-2)71(-8)(-6)5(-3)$, then $\eta_J(\pi)=((-4)(-2)1(-3),2431)$.
\end{example}
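The type-$B_n$ description of $\eta_J$ can likewise be sketched in Python (the function name \texttt{eta\_parabolic\_B} is ours):

```python
def eta_parabolic_B(pi, k):
    """Parabolic homomorphism eta_J for type B_n, J = S - {s_k}.

    pi: one-line notation (pi_1, ..., pi_n) of a signed permutation.
    Returns (sigma, tau) following the description in the example.
    """
    # sigma: entries of absolute value at most k (an element of B_k)
    sigma = tuple(v for v in pi if abs(v) <= k)
    # tau: read the doubled word (-pi_n)...(-pi_1) pi_1...pi_n, keep the
    # values greater than k, and shift them down by k
    doubled = tuple(-v for v in reversed(pi)) + tuple(pi)
    tau = tuple(v - k for v in doubled if v > k)
    return sigma, tau

# The example from the text: n = 8, k = 4.
print(eta_parabolic_B((-4, -2, 7, 1, -8, -6, 5, -3), 4))
# -> ((-4, -2, 1, -3), (2, 4, 3, 1))
```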
\begin{example}\label{H3 para}
For $W$ of type $H_3$, we give a geometric, rather than combinatorial, description of parabolic homomorphisms.
Figure~\ref{H3 para fig}.a shows the reflecting planes of a reflection representation of $W$.
\begin{figure}
\caption{
a: Reflecting planes for $W$ of type $H_3$.
b: Reflecting planes for $W_{\set{s,t}}$.
c: Reflecting planes for $W_{\set{r,t}}$.
d: Reflecting planes for $W_{\set{r,s}}$.}
\label{H3 para fig}
\end{figure}
Each plane is represented by its intersection with a unit sphere about the origin.
The sphere is considered to be opaque, so that we only see the side of the sphere that is closest to us.
The spherical triangles traced out on the sphere (including those on the back of the sphere that we can't see) are in bijection with the elements of $W$.
Specifically, the triangle corresponding to $1$ is marked, and each $w\in W$ corresponds to the image of the triangle marked $1$ under the action of $w$.
Taking $S=\set{r,s,t}$ with $m(r,s)=5$, $m(s,t)=3$ and $m(r,t)=2$, the reflecting planes of the reflections $S$ are the three planes that bound the triangle marked $1$.
The plane corresponding to $s$ is nearly horizontal in the picture and the plane for $r$ intersects the plane for $s$ at the left of the triangle marked $1$.
The parabolic congruence that deletes the vertex $r$ can be seen geometrically in Figure~\ref{H3 para fig}.b, which shows the reflecting planes for the standard parabolic subgroup $W_{\set{s,t}}$.
The sectors cut out by these planes correspond to the elements of $W_{\set{s,t}}$, as shown in the picture.
The parabolic homomorphism maps an element $w\in W$, corresponding to a triangle $\mathcal{T}$, to the element of $W_{\set{s,t}}$ labeling the sector containing $\mathcal{T}$.
Similar pictures, Figures~\ref{H3 para fig}.c--d, describe the parabolic congruences deleting the vertices $s$ and $t$ respectively.
(Some labels are left out of the representation of $W_{\set{r,s}}$ in Figure~\ref{H3 para fig}.d.)
\end{example}
The following theorem is a concatenation of \cite[Proposition~6.3]{congruence} and \cite[Corollary~6.10]{congruence}.
The fact that $\eta_J$ is a lattice homomorphism was also established in~\cite{Jed}.
\begin{theorem}\label{para cong}
If $J\subseteq S$, then $\eta_J$ is a surjective homomorphism.
Its fibers constitute the finest lattice congruence on $W$ with $1\equiv s$ for all $s\in S\setminus J$.
\end{theorem}
Proposition~\ref{basic facts} and Theorem~\ref{para cong} lead to the proof of Theorem~\ref{para factor}.
To give the proof, we need the following basic observation about a congruence $\Theta$ on a finite lattice $L$:
Congruence classes are intervals, and the quotient $L/\Theta$ is isomorphic to the subposet of $L$ induced by the elements of $L$ that are the bottom elements of congruence classes.
Let $\Theta_J$ be the congruence whose classes are the fibers of $\eta_J$.
An element $w\in W$ is at the bottom of its $\Theta_J$-class if and only if $w\in W_J$.
Thus $W/\Theta_J$ is isomorphic to $W_J$.
\begin{proof}[Proof of Theorem~\ref{para factor}]
If $\Theta$ is the lattice congruence on $W$ whose congruence classes are the fibers of $\eta$, then Proposition~\ref{basic facts}(1) says that $1\equiv s$ for all $s\in S\setminus J$.
Theorem~\ref{para cong} says that the congruence $\Theta_J$, determined by the fibers of the homomorphism $\eta_J$, is a refinement of the congruence $\Theta$.
Thus $\eta$ factors as $\eta=\eta'\circ\nu$, where $\nu:W\to W/\Theta_J$ is the natural map.
The map $\eta'$ maps a $\Theta_J$-class to $\eta(w)$ where $w$ is any element of the $\Theta_J$-class.
However, each $\Theta_J$-class contains a unique element of $W_J$, so we can replace $\nu$ with the map $\eta_J$ and replace $\eta'$ with the restriction of $\eta$ to $W_J$.
By Proposition~\ref{basic facts}(2--3), $\eta|_{W_J}$ restricts to a bijection from $J$ to~$S'$.
\end{proof}
Theorem~\ref{para factor} reduces the problem of classifying surjective homomorphisms $\eta$ between weak orders to the special case of classifying compressive homomorphisms between weak orders.
As in the introduction, we may as well restrict to the case where $S=S'$ and $\eta$ restricts to the identity on $S$.
For convenience, we rewrite some of the assertions of Proposition~\ref{basic facts} in the case where $\eta$ is compressive:
\begin{prop}\label{diagram facts}
Let $(W,S)$ and $(W',S)$ be finite Coxeter systems.
If $\eta:W\to W'$ is a compressive homomorphism fixing $S$ pointwise, then
\begin{enumerate}
\item If $J\subseteq S$, then $\eta$ restricts to a surjective homomorphism $W_J\to W'_J$.
\item $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$.
\end{enumerate}
\end{prop}
Theorem~\ref{para factor} shows that surjective homomorphisms correspond to deleting vertices of the diagram and then applying a compressive homomorphism, and the second assertion of Proposition~\ref{diagram facts} is an additional step towards Theorem~\ref{main}:
It shows that the compressive homomorphism decreases edge labels and/or erases edges.
The remainder of the paper is devoted to classifying compressive homomorphisms between finite Coxeter groups.
\section{Erasing edges}\label{erase edge sec}
In this section, we begin the classification of compressive homomorphisms by considering the simplest case, the case of compressive homomorphisms that erase edges but otherwise do not decrease edge labels.
That is, we consider the case where, for each $r,s\in S$, either $m'(r,s)=m(r,s)$ or $m'(r,s)=2$.
Recall that the diagram of a finite Coxeter group is a forest (a graph without cycles), so removing any edge breaks a connected component of the diagram into two pieces.
Given a set $E$ of edges of the diagram for $W$, write $S$ as a disjoint union of sets $J_1,J_2,\ldots,J_k$ such that each set $J_i$ is the vertex set of a connected component of the graph obtained by deleting the edges $E$ from the diagram.
Define $\eta_E$ to be the map from $W$ to $W_{J_1}\times W_{J_2}\times\cdots\times W_{J_k}$ that sends $w\in W$ to $(w_{J_1},w_{J_2},\ldots,w_{J_k})$.
We call $\eta_E$ an \newword{edge-erasing homomorphism}.
Beginning in this section and finishing in Section~\ref{shard sec}, we prove the following theorem.
\begin{theorem}\label{edge factor}
Let $\eta:W\to W'$ be a compressive homomorphism fixing $S$ pointwise.
Let $E$ be any set of edges in the diagram of $W$ such that each edge $r$---$s$ in $E$ has $m'(r,s)=2$.
Let $J_1,J_2,\ldots,J_k$ be the vertex sets of the connected components of the graph obtained by deleting the edges $E$ from the diagram for $W$.
In particular, $W'\cong W'_{J_1}\times W'_{J_2}\times\cdots\times W'_{J_k}$.
Then $\eta$ factors as $\eta'\circ\eta_E$, where ${\eta':W_{J_1}\times W_{J_2}\times\cdots\times W_{J_k}}\to W'$ is the compressive homomorphism with $\eta'(w_1,\ldots, w_k)=(\eta(w_1),\ldots,\eta(w_k))$.
\end{theorem}
The proof of Theorem~\ref{edge factor} is similar to the proof of Theorem~\ref{para factor}.
We characterize the congruence associated to $\eta_E$ as the finest congruence containing certain equivalences and conclude that any homomorphism that erases the edges $E$ factors through~$\eta_E$.
Let $r$ and $s$ be distinct elements of $S$ and let $m=m(r,s)$.
Suppose $r$ and $s$ form an edge in the diagram of $W$; in other words, suppose $m\ge 3$.
Then the standard parabolic subgroup $W_{\set{r,s}}$ is the lower interval $[1,w_0(\set{r,s})]$, consisting of two chains: $1\lessdot r\lessdot rs\lessdot rsr\lessdot\cdots\lessdot w_0(\set{r,s})$ and $1\lessdot s\lessdot sr\lessdot srs\lessdot\cdots\lessdot w_0(\set{r,s})$.
We define $\operatorname{alt}_k(r,s)$ to be the word with $k$ letters, starting with $r$ and then alternating $s$, $r$, $s$, etc.
Thus the two elements covered by $w_0(\set{r,s})$ are $\operatorname{alt}_{m-1}(r,s)$ and $\operatorname{alt}_{m-1}(s,r)$.
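For concreteness, here is a two-line Python sketch of the words $\operatorname{alt}_k(r,s)$ (the helper name \texttt{alt} is ours):

```python
def alt(k, r, s):
    """The word alt_k(r, s): k letters, alternating r, s, r, s, ..."""
    return ''.join(r if i % 2 == 0 else s for i in range(k))

# For m(r, s) = 5, the two elements covered by w_0({r, s}) are
# alt_4(r, s) and alt_4(s, r):
print(alt(4, 'r', 's'), alt(4, 's', 'r'))
# -> rsrs srsr
```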
The key to the proof of Theorem~\ref{edge factor} is the following theorem.
\begin{theorem}\label{edge cong}
If $E$ is a set of edges of the diagram of $W$, then $\eta_E$ is a compressive homomorphism.
Its fibers constitute the finest congruence with $r\equiv\operatorname{alt}_{m(r,s)-1}(r,s)$ and $s\equiv\operatorname{alt}_{m(r,s)-1}(s,r)$ for all edges $r$---$s$ in $E$.
\end{theorem}
Each map $\eta_{J_i}$ is a lattice homomorphism by Theorem~\ref{para cong}, and we easily conclude that $\eta_E$ is a homomorphism as well.
To see that $\eta_E$ is surjective, consider $(w_1,\ldots,w_k)\in W_{J_1}\times W_{J_2}\times\cdots\times W_{J_k}$.
Then each $w_i$ is also an element of $W$.
Furthermore, $\eta_{J_i}(w_j)=1$ if $j\neq i$.
Thus the fact that $\eta_E$ is a lattice homomorphism implies that $\eta_E(\bigvee_{i=1}^kw_i)=(w_1,\ldots,w_k)$.
We have established the first assertion of Theorem~\ref{edge cong}.
The second assertion requires more background on lattice congruences of the weak order.
This background and the proof of the second assertion is given in Section~\ref{shard sec}.
For now, we show how Theorem~\ref{edge cong} is used to prove Theorem~\ref{edge factor}.
\begin{proof}[Proof of Theorem~\ref{edge factor}, given Theorem~\ref{edge cong}]
Let $r$---$s$ be an edge in $E$.
In the Coxeter group $W'$, $r\vee s=rs$.
By Proposition~\ref{diagram facts}, the restriction of $\eta$ to the interval $[1,w_0(\set{r,s})]$ in $W$ is a surjective lattice homomorphism to the interval $[1',rs]$ in~$W'$.
By hypothesis, $\eta$ fixes $r$ and $s$.
Since $\eta$ is order-preserving, $\eta(\operatorname{alt}_{m(r,s)-1}(r,s))$ is either $r$ or $rs$.
But if $\eta(\operatorname{alt}_{m(r,s)-1}(r,s))=rs$, then $\eta(s\wedge\operatorname{alt}_{m(r,s)-1}(r,s))=rs\wedge s=s$.
But $s\wedge\operatorname{alt}_{m(r,s)-1}(r,s)=1$, and $\eta(1)=1'$, so we conclude that $\eta(\operatorname{alt}_{m(r,s)-1}(r,s))=r$.
Similarly, $\eta(\operatorname{alt}_{m(r,s)-1}(s,r))=s$.
Thus if $\Theta$ is the lattice congruence on $W$ whose congruence classes are the fibers of $\eta$, then $r\equiv\operatorname{alt}_{m(r,s)-1}(r,s)$ and $s\equiv\operatorname{alt}_{m(r,s)-1}(s,r)$ modulo $\Theta$.
Let $\Theta_E$ be the lattice congruence whose classes are the fibers of $\eta_E$.
Theorem~\ref{edge cong} says that the congruence $\Theta_E$ is a refinement of the congruence $\Theta$.
Thus $\eta$ factors through the natural map $\nu:W\to W/\Theta_E$.
Equivalently, we can factor $\eta$ as $\eta'\circ\eta_E$, where $\eta'$ maps $(w_1,\ldots,w_k)\in W_{J_1}\times\cdots\times W_{J_k}$ to $\eta(w)$, where $w$ is any element in the $\eta_E$-fiber of $(w_1,\ldots,w_k)$.
Specifically, we can take $\eta'(w_1,\ldots,w_k)=\eta(\bigvee_{i=1}^kw_i)$.
Since $\eta$ is a homomorphism, the latter is $\bigvee_{i=1}^k\eta(w_i)$, which equals $(\eta(w_1),\ldots,\eta(w_k))$ because $W'\cong W'_{J_1}\times W'_{J_2}\times\cdots\times W'_{J_k}$.
It now follows from Proposition~\ref{diagram facts} that $\eta'$ is a surjective homomorphism.
\end{proof}
Theorem~\ref{edge factor} has the following immediate corollary.
\begin{cor}\label{existence uniqueness simply}
Let $(W,S)$ and $(W',S)$ be finite, simply laced Coxeter systems such that the diagram of $W'$ is obtained from the diagram of $W$ by erasing a set $E$ of edges.
Then $\eta_E$ is the unique compressive homomorphism from $W$ to $W'$ fixing $S$ pointwise.
\end{cor}
We conclude this section with some examples of edge-erasing homomorphisms.
\begin{example}\label{An erase edge}
We describe edge-erasing homomorphisms from $W$ of type $A_n$ in terms of the combinatorial realization described in Example~\ref{An para}.
If $E=\set{s_{k}\text{---}s_{k+1}}$, the edge-erasing homomorphism $\eta_E$ maps $\pi\in S_{n+1}$ to $(\sigma,\tau)\in S_{k+1}\times S_{n-k+1}$, where $\sigma$ is the restriction of the sequence $\pi_1\pi_2\cdots\pi_{n+1}$ to values $\le k+1$ and $\tau$ is obtained by restricting $\pi_1\pi_2\cdots\pi_{n+1}$ to values $\ge k+1$ and then subtracting $k$ from each value.
For example, if $n=7$ and $k=3$, then $\eta_E(58371426)=(3142,25413)$.
\end{example}
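The edge-erasing map of Example~\ref{An erase edge} can be sketched in Python as follows (the function name \texttt{eta\_edge\_A} is ours); note how, unlike the parabolic map, the value $k+1$ is used by both factors.

```python
def eta_edge_A(pi, k):
    """Edge-erasing homomorphism eta_E for type A_n, E = {s_k --- s_{k+1}}.

    pi: a permutation of {1, ..., n+1} in one-line notation.
    Returns (sigma, tau) in S_{k+1} x S_{n-k+1}.
    """
    sigma = tuple(v for v in pi if v <= k + 1)
    tau = tuple(v - k for v in pi if v >= k + 1)
    return sigma, tau

# The example from the text: n = 7, k = 3.
print(eta_edge_A((5, 8, 3, 7, 1, 4, 2, 6), 3))
# -> ((3, 1, 4, 2), (2, 5, 4, 1, 3))
```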
\begin{example}\label{Bn erase edge}
The description for type $B_n$ is similar.
If $E=\set{s_{k-1}\text{---}s_{k}}$, the edge-erasing homomorphism $\eta_E$ maps a signed permutation $\pi$ to $(\sigma,\tau)\in B_k\times S_{n-k+1}$, where $\sigma$ is the restriction of the sequence $\pi_1\pi_2\cdots\pi_n$ to entries $\pi_i$ with ${|\pi_i|\le k}$ and $\tau$ is obtained by restricting $(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\pi_1\cdots\pi_{n-1}\pi_n$ to values $\ge k$ and then subtracting $k-1$ from each value.
For example, if $n=8$, $k=4$, and $\pi=(-4)(-2)71(-8)(-6)5(-3)$, then $\eta_E(\pi)=((-4)(-2)1(-3),35142)$.
\end{example}
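A Python sketch of the type-$B_n$ edge-erasing map (the function name \texttt{eta\_edge\_B} is ours):

```python
def eta_edge_B(pi, k):
    """Edge-erasing homomorphism eta_E for type B_n, E = {s_{k-1} --- s_k}.

    pi: one-line notation of a signed permutation.
    Returns (sigma, tau) in B_k x S_{n-k+1}.
    """
    sigma = tuple(v for v in pi if abs(v) <= k)
    # Restrict the doubled word to values >= k, then shift down by k - 1.
    doubled = tuple(-v for v in reversed(pi)) + tuple(pi)
    tau = tuple(v - (k - 1) for v in doubled if v >= k)
    return sigma, tau

# The example from the text: n = 8, k = 4.
print(eta_edge_B((-4, -2, 7, 1, -8, -6, 5, -3), 4))
# -> ((-4, -2, 1, -3), (3, 5, 1, 4, 2))
```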
\begin{example}\label{H3 erase edge}
We describe edge-erasing homomorphisms from $W$ of type $H_3$ in the geometric context introduced in Example~\ref{H3 para}.
Figure~\ref{H3 edge fig}.a--b represent the two edge-erasing homomorphisms.
\begin{figure}
\caption{
a: The homomorphism that erases the edge $r$---$s$ in type $H_3$.
b: The homomorphism that erases the edge $s$---$t$ in type~$H_3$.
}
\label{H3 edge fig}
\end{figure}
Thus the homomorphism erasing $r$---$s$ maps each element $w\in W$, corresponding to a triangle $\mathcal{T}$, to the element of $W_{\set{r}}\times W_{\set{s,t}}$ labeling the region in Figure~\ref{H3 edge fig}.a containing $\mathcal{T}$.
Figure~\ref{H3 edge fig}.b represents the homomorphism erasing $s$---$t$ similarly.
In each picture, some labels are omitted or belong on the invisible side of the sphere.
\end{example}
\section{Lattice congruences of the weak order}\label{shard sec}
In this section, we quote results that give us the tools to prove Theorem~\ref{edge cong} and to complete the classification of compressive homomorphisms between finite Coxeter groups.
We prove Theorem~\ref{edge cong} at the end of this section and complete the classification in later sections.
For any results that are stated here without proof or citation, proofs can be found in \cite{congruence} and/or \cite[Section~9-5]{regions9}.
We begin with more details on congruences on a finite lattice $L$.
Recall from Section~\ref{intro} that a congruence $\Theta$ on $L$ is uniquely determined by the set of edges \newword{contracted} by $\Theta$ (the set of edges $x\lessdot y$ such that $x\equiv y$ modulo $\Theta$).
In fact, $\Theta$ is uniquely determined by a smaller amount of information.
An element $j$ of $L$ is \newword{join-irreducible} if it covers exactly one element $j_*$.
We say that $\Theta$ \newword{contracts} the join-irreducible element $j$ if $j\equiv j_*$ modulo $\Theta$.
The congruence $\Theta$ is determined by the set of join-irreducible elements that $\Theta$ contracts.
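To make the definition concrete, the following Python sketch computes the join-irreducible elements of the weak order on $S_3$. It assumes the standard characterization of the right weak order on permutations by containment of inversion sets; all function names are our own.

```python
from itertools import permutations

def inversions(pi):
    """Inversion set of a permutation, as a set of position pairs."""
    n = len(pi)
    return frozenset((i, j) for i in range(n)
                     for j in range(i + 1, n) if pi[i] > pi[j])

# Right weak order on S_3: u <= w iff inv(u) is a subset of inv(w).
elements = list(permutations((1, 2, 3)))
inv = {pi: inversions(pi) for pi in elements}

def covered_by(w):
    """The elements covered by w: maximal elements strictly below w."""
    below = [u for u in elements if inv[u] < inv[w]]
    return [u for u in below
            if not any(u != v and inv[u] < inv[v] for v in below)]

# Join-irreducible elements cover exactly one element.
print(sorted(w for w in elements if len(covered_by(w)) == 1))
# -> [(1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2)]
```

Every element of $S_3$ other than the identity and $w_0$ is join-irreducible, matching the fact that the weak order on $S_3$ consists of two maximal chains.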
The set $\operatorname{Con}(L)$ of all congruences on $L$ is a sublattice of the lattice of set partitions of $L$.
In fact, $\operatorname{Con}(L)$ is a distributive lattice.
We write $\operatorname{Irr}(\operatorname{Con}(L))$ for the set of join-irreducible congruences on $L$ (the join-irreducible elements of $\operatorname{Con}(L)$).
By the Fundamental Theorem of Finite Distributive Lattices (see e.g.\ \cite[Theorem~3.4.1]{EC1}), $\operatorname{Con}(L)$ is isomorphic to the inclusion order on the set of order ideals in $\operatorname{Irr}(\operatorname{Con}(L))$.
The weak order on a finite Coxeter group $W$ has a special property called \newword{congruence uniformity}, or sometimes called \newword{boundedness}.
(This was first proved in \cite[Theorem~6]{bounded}. See also \cite[Theorem~27]{hyperplane}.)
The definition of congruence uniformity is not necessary here, but congruence uniformity means in particular that the join-irreducible congruences on $W$ (i.e.\ the join-irreducible elements of $\operatorname{Con}(W)$) are in bijection with the join-irreducible elements of $W$ itself.
The join-irreducible elements of $W$ are the elements $j$ such that there exists a unique $s\in S$ with $\ell(js)<\ell(j)$.
For each join-irreducible element $j$ of $W$, the corresponding join-irreducible element $\operatorname{Cg}(j)$ of $\operatorname{Con}(W)$ is the unique finest congruence that contracts~$j$.
Thus $\operatorname{Irr}(\operatorname{Con}(W))$ can be thought of as a partial order on the join-irreducible elements of~$W$.
Congruences on $W$ correspond to order ideals in $\operatorname{Irr}(\operatorname{Con}(W))$.
Given a congruence $\Theta$ with corresponding order ideal $I$, the poset $\operatorname{Irr}(\operatorname{Con}(W/\Theta))$ is isomorphic to the induced subposet of $\operatorname{Irr}(\operatorname{Con}(W))$ obtained by deleting the elements of $I$.
The \newword{support} of an element $w$ of $W$ is the unique smallest subset $J$ of $S$ such that $w$ is in the standard parabolic subgroup $W_J$.
Given a join-irreducible element $j\in W$, the \newword{degree} of $j$ is the size of the support of $j$.
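In the type-$A_n$ realization of Example~\ref{An para}, the support is easy to compute: $s_i$ lies outside the support of $\pi$ exactly when $\pi$ permutes $\set{1,\ldots,i}$ among themselves. A Python sketch (the function name is ours):

```python
def support_A(pi):
    """Support of a permutation pi in S_{n+1}, returned as the set of
    indices i such that s_i lies in the support.

    s_i is absent from every reduced word for pi exactly when pi maps
    the set {1, ..., i} to itself.
    """
    n = len(pi)
    return {i for i in range(1, n)
            if set(pi[:i]) != set(range(1, i + 1))}

# alt_2(s_1, s_2) = s_1 s_2 has one-line notation 231; its support is
# {s_1, s_2}, so this join-irreducible element has degree 2.
print(sorted(support_A((2, 3, 1))))
# -> [1, 2]
```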
Given a set $\set{j_1,\ldots,j_k}$ of join-irreducible elements of $W$, there exists a unique finest congruence contracting all join-irreducible elements in the set.
This congruence is the join $\operatorname{Cg}(j_1)\vee\cdots\vee\operatorname{Cg}(j_k)$ in the congruence lattice $\operatorname{Con}(W)$, or equivalently, the congruence that contracts a join-irreducible element $j$ if and only if $j$ is in the ideal in $\operatorname{Irr}(\operatorname{Con}(W))$ generated by $\set{j_1,\ldots,j_k}$.
We call this the \newword{congruence generated by} $\set{j_1,\ldots,j_k}$.
A congruence on $W$ is \newword{homogeneous of degree $d$} if it is generated by a set of join-irreducible elements of degree $d$.
Abusing terminology slightly, we call a surjective homomorphism \newword{homogeneous of degree $d$} if its corresponding congruence is.
We restate Theorems~\ref{para cong} and~\ref{edge cong} with this new terminology.
(Recall that Theorem~\ref{para cong} was proven in \cite{congruence}.
We will prove Theorem~\ref{edge cong} below.)
\begin{theorem}\label{para cong ji}
If $J\subseteq S$, then $\eta_J$ is a surjective homogeneous homomorphism of degree $1$.
Its fibers constitute the congruence generated by $\set{s:s\in S\setminus J}$.
\end{theorem}
\begin{theorem}\label{edge cong ji}
If $E$ is a set of edges of the diagram of $W$, then $\eta_E$ is a compressive homogeneous homomorphism of degree $2$.
Its fibers constitute the congruence generated by $\set{\operatorname{alt}_k(r,s):\set{r,s}\in E\text{ and }k=2,\ldots,m(r,s)-1}$.
\end{theorem}
We emphasize that for each set $\set{r,s}$ in $E$ and each $k\in\set{2,\ldots,m(r,s)-1}$, both $\operatorname{alt}_k(r,s)$ and $\operatorname{alt}_k(s,r)$ are in the generating set described in Theorem~\ref{edge cong ji}.
Lattice congruences on the weak order on a finite Coxeter group $W$ are closely tied to the geometry of the reflection representation of $W$.
Let $\Phi$ be a root system associated to $W$ with simple reflections $\Pi$.
For each simple reflection $s\in S$, let $\alpha_s$ be the associated simple root.
For each reflection $t\in T$, let $\beta_t$ be the positive root associated to $t$ and let $H_t$ be the reflecting hyperplane for $t$.
A point $x$ is \newword{below} $H_t$ if the inner product of $x$ with $\beta_t$ is nonnegative.
A set of points is below $H_t$ if each of the points is.
Points and sets are \newword{above} $H_t$ if the inner products are nonpositive.
The set $\mathcal{A}=\set{H_t:t\in T}$ is the Coxeter arrangement associated to $W$.
The arrangement $\mathcal{A}$ cuts space into \newword{regions}, which are in bijection with the elements of~$W$.
Specifically, the identity element of $W$ corresponds to the region $D$ that is below every hyperplane of $\mathcal{A}$, and an element $w$ corresponds to the region $wD$.
A subset $\mathcal{A}'$ of $\mathcal{A}$ is called a \newword{rank-two subarrangement} if $|\mathcal{A}'|>1$ and if there is some codimension-2 subspace $U$ such that $\mathcal{A}'=\set{H\in\mathcal{A}:H\supset U}$.
The subarrangement $\mathcal{A}'$ cuts space into $2|\mathcal{A}'|$ regions.
Exactly one of these regions, $D'$, is below all of the hyperplanes in $\mathcal{A}'$, and two hyperplanes in $\mathcal{A}'$ are facet-defining hyperplanes of $D'$.
These two hyperplanes are called the \newword{basic hyperplanes} of~$\mathcal{A}'$.
We define a \newword{cutting relation} on the hyperplanes of $\mathcal{A}$ as follows:
Given distinct hyperplanes $H, H'\in\mathcal{A}$, let $\mathcal{A}'$ be the rank-two subarrangement containing $H$ and~$H'$.
Then $H$ \newword{cuts} $H'$ if~$H$ \textbf{is} a basic hyperplane of $\mathcal{A}'$ and $H'$ is \textbf{not} a basic hyperplane of $\mathcal{A}'$.
For each $H\in\mathcal{A}$, remove from~$H$ all points contained in hyperplanes of~$\mathcal{A}$ that cut~$H$.
The remaining set of points may be disconnected; the closures of the connected components are called the \newword{shards} in~$H$.
The set of shards of $\mathcal{A}$ is the union, over hyperplanes $H\in\mathcal{A}$, of the set of shards in $H$.
For each shard $\Sigma$, we write $H(\Sigma)$ for the hyperplane containing $\Sigma$.
\begin{example}\label{i25 shards}
When $W$ is a dihedral Coxeter group, the only rank-two subarrangement of $\mathcal{A}$ is $\mathcal{A}$ itself.
Figure~\ref{i25}.a shows the reflection representation of a Coxeter group of type $I_2(5)$ with $S=\set{r,s}$.
Figure~\ref{i25}.b shows the associated shards.
\begin{figure}
\caption{(a): A Coxeter group of type $I_2(5)$.
(b): The associated shards}
\label{i25}
\end{figure}
Each of the shards contains the origin, but to make the picture legible, those shards that don't continue through the origin are drawn with an offset from the origin.
\end{example}
\begin{example}\label{B3 shard ex}
Figure~\ref{B3shards} depicts the shards in the Coxeter arrangement of type $B_3$.
\begin{figure}
\caption{The shards in a Coxeter group of type $B_3$}
\label{B3shards}
\end{figure}
Here, the arrangement $\mathcal{A}$ is a collection of nine planes in $\mathbb R^3$.
The shards are two-dimensional cones contained in these planes.
To capture the picture in the plane, we consider the intersection of $\mathcal{A}$ with a sphere about the origin.
This intersection is an arrangement on nine great circles of the sphere.
Each shard, intersected with the sphere, is either an entire great circle or an arc of a great circle.
The figure shows these intersections under a stereographic projection from the sphere to the plane.
The region $D$ is the small triangle that is inside all of the circles.
As in Figure~\ref{i25}, where shards intersect, those shards that do not continue through the intersection are shown with an offset from the intersection.
Two of the shards are distinguished by arrows.
The significance of these two shards will be explained in Example~\ref{B3 to A3 shard}.
\end{example}
The shards of $\mathcal{A}$ are in one-to-one correspondence with the join-irreducible elements of $W$.
For each shard $\Sigma$, an \newword{upper element} of $\Sigma$ is an element $w\in W$ such that the region $wD$ is above $H(\Sigma)$ and intersects $\Sigma$ in codimension 1.
The set $U(\Sigma)$ of upper elements of $\Sigma$ contains exactly one element $j(\Sigma)$ that is join-irreducible in~$W$.
The element $j(\Sigma)$ is the unique minimal element of $U(\Sigma)$ in the weak order.
This is a bijection from shards to join-irreducible elements.
The inverse map sends a join-irreducible element $j$ to $\Sigma(j)$, the shard that contains $jD\cap(j_*D)$.
Shards in $\mathcal{A}$ correspond to certain collections of edges in the Hasse diagram of the weak order on $W$.
Specifically, a shard $\Sigma$ corresponds to the set of all edges $x\lessdot y$ such that $(xD\cap yD)\subseteq\Sigma$.
For each shard $\Sigma$, a congruence $\Theta$ either contracts none of the edges associated to $\Sigma$ or contracts all of the edges associated to $\Sigma$.
(See \cite[Proposition~6.6]{shardint}.)
If $\Theta$ contracts all of the edges associated to $\Sigma$, then we say that $\Theta$ \newword{removes} $\Sigma$.
In particular, $\Theta$ removes a shard $\Sigma$ if and only if it contracts the join-irreducible element $j(\Sigma)$.
For any congruence $\Theta$, the set of shards not removed by $\Theta$ decomposes space into a fan \cite[Theorem~5.1]{con_app} that is a coarsening of the fan defined by the hyperplanes $\mathcal{A}$.
We now define a directed graph on shards, called the \newword{shard digraph}:
Given two shards~$\Sigma$ and $\Sigma'$, say $\Sigma\to\Sigma'$ if $H(\Sigma)$ cuts $H(\Sigma')$ and $\Sigma\cap\Sigma'$ has codimension~2.
This digraph is acyclic\footnote{Shards are often considered in a more general context of simplicial hyperplane arrangements. In this broader setting, the shard digraph need not be acyclic. See \cite[Figure~5]{hyperplane}.}
and we call its transitive closure the \newword{shard poset}.
The bijection $\Sigma\mapsto j(\Sigma)$ from shards to join-irreducible elements is an isomorphism from the shard poset to the poset $\operatorname{Irr}(\operatorname{Con}(W))$, thought of as a partial order on join-irreducible elements of $W$.
Thus, given any congruence $\Theta$, the set of shards removed by $\Theta$ is an order ideal in the shard poset.
Conversely, for any set of shards forming an order ideal in the shard poset, there is a congruence removing exactly that set of shards.
We use this correspondence to reuse shard terminology for join-irreducible elements and vice versa.
So, for example, we talk about the degree of a shard (meaning the degree of the corresponding join-irreducible element), etc.
\begin{example}\label{B3 to A3 shard}
\begin{figure}
\caption{Removing shards from a Coxeter group of type $B_3$}
\label{B3shardsA3}
\end{figure}
Recall that Example~\ref{miraculous} started by choosing a pair of edges in the weak order on a Coxeter group of type $B_3$.
The finest congruence contracting the two edges was calculated, and the quotient modulo this congruence was found to be isomorphic to the weak order on a Coxeter group of type $A_3$.
The characterization of congruences in terms of shards allows us to revisit this example from a geometric point of view.
Contracting the two chosen edges corresponds to removing the two shards indicated with arrows in Figure~\ref{B3shards}.
Removing these two shards forces the removal of all shards below them in the shard poset.
Figure~\ref{B3shardsA3} depicts the shards whose removal is \emph{not} forced.
(Gaps between intersecting shards have been closed in this illustration.)
The resulting fan is piecewise-linearly (but not linearly) equivalent to the fan defined by a Coxeter arrangement of type $A_3$.
\end{example}
Let $\alpha$ denote the involution $w\mapsto ww_0$ on~$W$.
This is an anti-automorphism of the weak order.
For each congruence $\Theta$ on~$W$, let $\alpha(\Theta)$ be the \newword{antipodal congruence} to $\Theta$, defined by
$x\equiv y\mod\alpha(\Theta)$ if and only if $\alpha(x)\equiv \alpha(y)\mod\Theta$.
The involution $\alpha$ induces an anti-isomorphism from $W/\Theta$ to $W/(\alpha(\Theta))$.
The following is a restatement of part of~\cite[Proposition~6.13]{congruence}.
\begin{proposition}
\label{dual cong}
Let $\Theta$ be a lattice congruence on $W$, let $r,s\in S$, and let ${k\in\set{2,3,\ldots,m(r,s)-1}}$.
Then $\operatorname{alt}_k(r,s)$ is contracted by $\Theta$ if and only if $\operatorname{alt}_{k'}(s,r)$ is contracted by $\alpha(\Theta)$, where $k'=m(r,s)-k+1$.
\end{proposition}
The following lemma is part of \cite[Lemma~3.11]{shardint}, rephrased in the special case of the weak order.
(Cf. \cite[Lemma~3.9]{congruence}.)
\begin{lemma} \label{whole}
A shard $\Sigma$ is an entire hyperplane if and only if it is $H_s$ for some $s\in S$.
\end{lemma}
The following lemma is a rephrasing of \cite[Lemma~3.12]{shardint} in the special case of the weak order.
(Cf.\ \cite[Lemma~4.6]{sort_camb}.)
It implies in particular that a shard of degree 2 is only arrowed by shards of degree~1.
\begin{lemma}\label{half}
Let $\Sigma$ be a shard contained in a reflecting hyperplane $H_t$.
Then $\Sigma$ has exactly one facet if and only if $t\not\in S$ but $t\in W_{\set{r,s}}$ for some distinct $r,s\in S$.
In that case, the unique facet of $\Sigma$ is $H_r\cap H_s$.
\end{lemma}
We now use the machinery of shards to prove Theorem~\ref{edge cong}.
\begin{proof}[Proof of Theorem~\ref{edge cong}]
Recall that the first assertion of the theorem has already been established.
As before, let $\Theta_E$ be the lattice congruence whose classes are the fibers of $\eta_E$.
It remains only to prove that $\Theta_E$ is generated by the join-irreducible elements $\operatorname{alt}_k(r,s)$ and $\operatorname{alt}_k(s,r)$ for all $\set{r,s}\in E$ and $k=2,3,\ldots,m(r,s)-1$.
Let $s\in S$ and $w\in W$ have $ws\lessdot w$ and let $t\in T$ be the reflection $wsw^{-1}$, so that $ws=tw$.
The inversion set of $w_{J_i}$ is $\operatorname{inv}(w)\cap W_{J_i}$ and the inversion set of an element determines the element, so $(ws)_{J_i}=w_{J_i}$ if and only if $t\not\in W_{J_i}$.
Thus the edge $ws\lessdot w$ is contracted by $\Theta_E$ if and only if $wsw^{-1}\not\in W_{J_i}$ for all $i\in\set{1,\ldots,k}$.
Equivalently, a shard is removed by $\Theta_E$ if and only if the reflecting hyperplane containing it is $H_t$ for some reflection $t$ with $t\not\in W_{J_i}$ for all $i\in\set{1,\ldots,k}$.
Let $\Sigma_k(r,s)$ be the shard associated to the join-irreducible element $\operatorname{alt}_k(r,s)$ for each $k\in\set{2,\ldots,m(r,s)-1}$.
To prove the theorem, we will show that every shard removed by $\Theta_E$ is forced by the removal of all shards of the form $\Sigma_k(r,s)$ and $\Sigma_k(s,r)$ for each edge $r$---$s$ in $E$.
Specifically, let $\Sigma$ be a shard removed by $\Theta_E$.
We complete the proof by establishing the following claim:
If $\Sigma$ is not $\Sigma_k(r,s)$ or $\Sigma_k(s,r)$ for some edge $r$---$s$ in $E$ and some $k\in\set{2,\ldots,m(r,s)-1}$, then there exists another shard $\Sigma'$, also removed by $\Theta_E$, such that $\Sigma'\to\Sigma$ in the shard digraph.
Since the shard digraph is acyclic, the claim will imply that, for every shard $\Sigma$ removed by $\Theta_E$, there is a directed path from a shard $\Sigma_k(r,s)$ or $\Sigma_k(s,r)$ to $\Sigma$.
We now prove the claim.
Since $\Sigma$ is removed by $\Theta_E$, it is contained in a reflecting hyperplane $H_t$ with $t\not\in W_{J_i}$ for all $i\in\set{1,\ldots,k}$.
Consider a reflecting hyperplane $H_{t'}$ that cuts $H_t$ to define a facet of $\Sigma$.
Then $H_{t'}$ is basic in the rank-two subarrangement $\mathcal{A}'$ containing $H_{t'}$ and $H_t$, while $H_t$ is not basic in $\mathcal{A}'$.
The other basic hyperplane, $H_{t''}$, of $\mathcal{A}'$ also cuts $H_t$ to define the same facet of $\Sigma$.
Thus there exists a shard $\Sigma'$ contained in $H_{t'}$ such that $\Sigma'\to\Sigma$, and there exists a shard $\Sigma''$ contained in $H_{t''}$ such that $\Sigma''\to\Sigma$.
If $t'\not\in W_{J_i}$ for all $i\in\set{1,\ldots,k}$ then $\Sigma'$ is removed by $\Theta_E$.
Similarly, if $t''\not\in W_{J_i}$ for all $i\in\set{1,\ldots,k}$ then $\Sigma''$ is removed by $\Theta_E$.
Otherwise, there exists $i\in\set{1,\ldots,k}$ with $t'\in W_{J_i}$ and $j\in\set{1,\ldots,k}$ with $t''\in W_{J_j}$.
A reflection is in a standard parabolic subgroup $W_J$ if and only if its reflecting hyperplane contains the intersection $\bigcap_{s\in J}H_s$.
In particular, $t$ is in $W_{J_i\cup J_j}$.
Furthermore $i\neq j$, because if $i=j$, then $t$ is in $W_{J_i}$, contradicting the fact that $\Sigma$ is removed by $\Theta_E$.
The positive root $\beta_t$ is in the positive linear span of $\beta_{t'}$ and $\beta_{t''}$.
But the simple root coordinates (coordinates with respect to the basis $\Pi$) of $\beta_{t'}$ and $\beta_{t''}$ are supported on the disjoint subsets $\set{\alpha_s:s\in J_i}$ and $\set{\alpha_s:s\in J_j}$.
Thus the simple root coordinates of $\beta_{t'}$ are determined, up to scaling, by the simple root coordinates of $\beta_t$, by restricting the coordinates of $\beta_t$ to simple roots in $\set{\alpha_s:s\in J_i}$.
The simple root coordinates of $\beta_{t''}$ are determined similarly, up to scaling, by the simple root coordinates of $\beta_t$.
We conclude that there is at most one facet of $\Sigma$ defined by hyperplanes $H_{t'}$ and $H_{t''}$ such that there exists $i\in\set{1,\ldots,k}$ with $t'\in W_{J_i}$ and $j\in\set{1,\ldots,k}$ with $t''\in W_{J_j}$.
Thus if $\Sigma$ has more than one facet, then we can use one of these facets to find, as above, a shard $\Sigma'$, removed by $\Theta_E$, with $\Sigma'\to\Sigma$.
If $\Sigma$ has only one facet, then Lemma~\ref{half} says that the facet of $\Sigma$ is defined by $H_r$ and $H_s$ for some $r,s\in S$.
In this case, the join-irreducible element $j(\Sigma)$ associated to $\Sigma$ is in $W_{\set{r,s}}$ but not in $\set{r,s}$.
Thus $j(\Sigma)$ is $\operatorname{alt}_k(r,s)$ or $\operatorname{alt}_k(s,r)$ for some $k\in\set{2,3,\ldots,m(r,s)-1}$.
Equivalently $\Sigma=\Sigma_k(r,s)$ or $\Sigma=\Sigma_k(s,r)$ for some $k\in\set{2,3,\ldots,m(r,s)-1}$.
Since $\Sigma$ is removed by $\Theta_E$, the edge $r$---$s$ is in $E$.
Since $j(\Sigma)\not\in\set{r,s}$, in particular $t\not\in S$, so Lemma~\ref{whole} rules out the possibility that $\Sigma$ has no facets.
We have proved the claim, and thus completed the proof of the theorem.
\end{proof}
We conclude the section by describing the shard digraph for types $A_n$ and $B_n$, quoting results of \cite[Sections~7--8]{congruence}.
The descriptions use the combinatorial realizations explained in Examples~\ref{An para} and~\ref{Bn para}.
For any subset $A$ of $\set{1,\ldots,n+1}$, let $A^c=[n+1]\setminus A$, let $m$ be the minimum element of $A$ and let $M$ be the maximum element of $A^c$.
Join-irreducible elements of $S_{n+1}$ correspond to nonempty subsets of $[n+1]$ such that $M>m$.
Given such a subset, the corresponding join-irreducible permutation $\gamma$ is constructed by listing the elements of $A^c$ in ascending order followed by the elements of $A$ in ascending order.
The poset $\operatorname{Irr}(\operatorname{Con}(S_{n+1}))$ can be described combinatorially, as a partial order on join-irreducible elements, in terms of these subsets.
(See \cite[Theorem~8.1]{congruence}.)
In this paper, we do not need the details, except for the case $n=3$, which is shown in Figure~\ref{IrrConA3 fig}.
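As an illustration, the correspondence between subsets and join-irreducible permutations can be coded directly. The following is a Python sketch with our own function names, not part of the formal development:

```python
def jir_perm(A, n):
    """One-line notation of the permutation of S_{n+1} corresponding to a
    subset A of {1,...,n+1}: list A^c in ascending order, then A in
    ascending order."""
    Ac = sorted(set(range(1, n + 2)) - set(A))
    return Ac + sorted(A)

def is_jir_subset(A, n):
    """Check the condition M > m, where m = min(A) and M = max(A^c)."""
    Ac = set(range(1, n + 2)) - set(A)
    return bool(A) and bool(Ac) and max(Ac) > min(A)

# For n = 3 and A = {2,4}: A^c = {1,3}, so the one-line notation is 1324.
assert jir_perm({2, 4}, 3) == [1, 3, 2, 4]
```

When $M\le m$ the construction produces the identity-like permutations that are not join-irreducible, which is why the condition $M>m$ is imposed.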
\begin{figure}
\caption{$\operatorname{Irr}(\operatorname{Con}(S_4))$}
\label{IrrConA3 fig}
\end{figure}
We do, however, need details of the description of $\operatorname{Irr}(\operatorname{Con}(B_n))$.
A \newword{signed subset} $A$ is a subset of $\set{\pm1,\pm2,\ldots,\pm n}$ such that $A$ contains no pairs $\set{-i,i}$.
Given a nonempty signed subset $A$, let $m$ be the minimum element of $A$.
If $|A|=n$, let $M$ be $-m$ and otherwise, let $M$ be the maximum element of $\set{1,\ldots,n}\setminus \set{|a|:a\in A}$.
The join-irreducible elements of a Coxeter group of type $B_n$ are in bijection with the nonempty signed subsets of $\set{1,\ldots,n}$ with $M>m$.
Given such a signed subset $A$, the corresponding join-irreducible permutation $\gamma$ has one-line notation given by the elements of $\set{1,\ldots,n}\setminus \set{|a|:a\in A}$ in ascending order, followed by the elements of $A$ in ascending order.
The reflection $t$ associated to the unique cover relation down from $\gamma$ is $(M\,\,\,m)(-m\,\,\,-M)$ if $m\neq-M$ or $(m\,\,\,M)$ if $m=-M$.
The associated shard is in the hyperplane $H_t$.
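The statistics $m$ and $M$ and the one-line construction for type $B_n$ can likewise be sketched in Python (function names are ours; this is an illustration only):

```python
def bn_stats(A, n):
    """The statistics m and M of a nonempty signed subset A, as in the text:
    m = min(A); M = -m if |A| = n, else the maximum element of
    {1,...,n} minus the absolute values of A."""
    m = min(A)
    M = -m if len(A) == n else max(set(range(1, n + 1)) - {abs(a) for a in A})
    return m, M

def bn_jir_perm(A, n):
    """One-line notation: {1,...,n} minus the absolute values of A in
    ascending order, followed by the elements of A in ascending order."""
    rest = sorted(set(range(1, n + 1)) - {abs(a) for a in A})
    return rest + sorted(A)

# For n = 4 and A = {-1,3,4}: m = -1, M = 2, one-line notation 2(-1)34.
assert bn_stats({-1, 3, 4}, 4) == (-1, 2)
assert bn_jir_perm({-1, 3, 4}, 4) == [2, -1, 3, 4]
```

The example is the signed subset $\set{-1,3,4,\ldots,n}$ for $n=4$, whose join-irreducible signed permutation has one-line notation $2(-1)34$.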
Figure~\ref{IrrConB3 fig} shows the poset $\operatorname{Irr}(\operatorname{Con}(B_3))$.
To describe this poset for general $n$, we quote the combinatorial description of the shard digraph in type $B_n$.
Arrows between shards depend on certain combinations of conditions.
The meaning of the letters q, f and r in the labels for the conditions is explained in \cite[Section~7]{congruence}.
We begin with conditions (q1) through~(q6).
\begin{figure}
\caption{$\operatorname{Irr}(\operatorname{Con}(B_3))$}
\label{IrrConB3 fig}
\end{figure}
\begin{tabular}{ll}
(q1)&$-m_1=M_1<M_2=-m_2$.\\
(q2)&$-m_2=M_2=M_1>m_1>0$.\\
(q3)&$M_2=M_1>m_1>m_2\neq -M_2$.\\
(q4)&$M_2>M_1>m_1=m_2\neq -M_2$.\\
(q5)&$-m_2=M_1>m_1>-M_2\neq m_2$.\\
(q6)&$-m_2>M_1>m_1=-M_2\neq m_2$.
\end{tabular}
Next we define condition (f), which depends on a parameter in $\set{\pm1,\ldots,\pm n}$.
In the following conditions, the superscript ``$c$'' means complementation in the set $\set{\pm1,\pm2,\ldots,\pm n}$ and the notation $(x,y)$ means the open interval between $x$ and $y$.
For $a\in\set{\pm1,\ldots,\pm n}$, say $A_2$ satisfies condition (f\,:\,$a$) if one of the following holds:
\begin{tabular}{ll}
(f1\,:\,$a$)&$a\in A_2$.\\
(f2\,:\,$a$)&$a\in A_2^c\setminus\set{-M_2,-m_2}$ and $-a\not\in A_2\cap(m_2,M_2)$.\\
(f3\,:\,$a$)&$a\in\set{-M_2,-m_2}$ and $(A_2\cup-A_2)^c\cap(m_2,M_2)\cap(-M_2,-m_2)=\emptyset$.
\end{tabular}
Every $a\in\set{\pm1,\ldots,\pm n}$ satisfies exactly one of the conditions $a\in A_2$, $a\in A_2^c\setminus\set{-M_2,-m_2}$ and $a\in\set{-M_2,-m_2}$ appearing in condition (f).
Finally, conditions (r1) and (r2):
\begin{tabular}{ll}
(r1)&$A_1\cap(m_1,M_1)=A_2\cap(m_1,M_1)$.\\
(r2)&$A_1\cap(m_1,M_1)=-A_2^c\cap(m_1,M_1)$.
\end{tabular}
\begin{theorem}
\label{B shard}
$\Sigma_1\to\Sigma_2$ if and only if one of the following combinations of conditions holds:
\begin{enumerate}
\item[1. ] \textup{(q1)} and \textup{(r1)}.
\item[2. ] \textup{(q2)} and \textup{(r1)}.
\item[3. ] \textup{(q3)}, \textup{(f\,:\,$m_1$)} and \textup{(r1)}.
\item[4. ] \textup{(q4)}, \textup{(f\,:\,$M_1$)} and \textup{(r1)}.
\item[5. ] \textup{(q5)}, \textup{(f\,:\,$-m_1$)} and \textup{(r2)}.
\item[6. ] \textup{(q6)}, \textup{(f\,:\,$-M_1$)} and \textup{(r2)}.
\end{enumerate}
\end{theorem}
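For experimentation, conditions (q1)--(q6), (f), (r1), (r2) and Theorem~\ref{B shard} can be transcribed directly into Python. This is an illustrative sketch under our own naming, not part of any proof:

```python
def bn_stats(A, n):
    m = min(A)
    M = -m if len(A) == n else max(set(range(1, n + 1)) - {abs(a) for a in A})
    return m, M

def interval(x, y):
    # the open interval (x, y) between x and y, as a set of integers
    lo, hi = min(x, y), max(x, y)
    return set(range(lo + 1, hi))

def cond_f(A2, n, a):
    """Condition (f : a): exactly one of (f1), (f2), (f3) can apply."""
    m2, M2 = bn_stats(A2, n)
    universe = set(range(-n, n + 1)) - {0}
    A2, A2c = set(A2), universe - set(A2)
    if a in A2:                                   # (f1 : a)
        return True
    if a in A2c - {-M2, -m2}:                     # (f2 : a)
        return -a not in (A2 & interval(m2, M2))
    if a in {-M2, -m2}:                           # (f3 : a)
        comp = universe - A2 - {-x for x in A2}
        return not (comp & interval(m2, M2) & interval(-M2, -m2))
    return False

def arrows(A1, A2, n):
    """Sigma_1 -> Sigma_2 per Theorem (B shard), for shards encoded by
    signed subsets A1 and A2."""
    m1, M1 = bn_stats(A1, n)
    m2, M2 = bn_stats(A2, n)
    q1 = (-m1 == M1 < M2 == -m2)
    q2 = (-m2 == M2 == M1 > m1 > 0)
    q3 = (M2 == M1 > m1 > m2 != -M2)
    q4 = (M2 > M1 > m1 == m2 != -M2)
    q5 = (-m2 == M1 > m1 > -M2 != m2)
    q6 = (-m2 > M1 > m1 == -M2 != m2)
    universe = set(range(-n, n + 1)) - {0}
    I = interval(m1, M1)
    r1 = (set(A1) & I) == (set(A2) & I)
    r2 = (set(A1) & I) == ({-x for x in universe - set(A2)} & I)
    return ((q1 or q2) and r1
            or q3 and cond_f(A2, n, m1) and r1
            or q4 and cond_f(A2, n, M1) and r1
            or q5 and cond_f(A2, n, -m1) and r2
            or q6 and cond_f(A2, n, -M1) and r2)
```

For example, with $n=4$, the signed subset $\set{-3,4}$ arrows $\set{-3}$ via the combination (q4), (f2\,:\,$M_1$), (r1), while no arrow points the other way.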
\section{Decreasing edge labels}\label{dihedral sec}
Theorem~\ref{edge factor} reduces the problem of classifying compressive homomorphisms $\eta$ between weak orders to the special case of compressive homomorphisms that do not erase any edges.
Equivalently, these are the compressive homomorphisms between finite Coxeter groups that restrict to (unlabeled) graph isomorphisms of diagrams.
As a byproduct, Theorem~\ref{edge factor} allows us to further reduce the problem to the case where the diagrams are connected (or equivalently, where the Coxeter groups are irreducible).
We continue to take $S$ to be the set of simple generators of $W$ and of $W'$ and assume that $\eta:W\to W'$ is such a homomorphism that fixes $S$ pointwise, so that the diagrams of $W$ and $W'$ are identical as unlabeled graphs.
Recall that Proposition~\ref{diagram facts} says that for each edge in the diagram of $W$, the corresponding edge in the diagram of $W'$ has weakly smaller label.
Up to now, the classification of surjective lattice homomorphisms between weak orders has proceeded by uniform arguments, rather than arguments that are specific to particular families in the classification of finite Coxeter groups.
To complete the classification of homomorphisms, we now turn to the classification of finite Coxeter groups, which tells us in particular that there are very few cases remaining to consider.
We need to study the cases where $(W,W')$ are $(I_2(m),I_2(m'))$ for $m'<m$ or $(B_n,A_n)$, $(F_4,A_4)$, $(H_3,A_3)$, $(H_3,B_3)$, $(H_4,A_4)$, or $(H_4,B_4)$.
The cases where $(W,W')$ are $(I_2(m),I_2(m'))$ for $m'<m$ are easily understood by considering Example~\ref{i25 shards}, where $W$ is $I_2(5)$.
It is apparent from Figure~\ref{i25} that a congruence $\Theta$ has the property that $W/\Theta$ is isomorphic to the weak order on $I_2(4)$ if and only if $\Theta$ contracts exactly one of the join-irreducible elements $rs$, $rsr$, and $rsrs$ and exactly one of the join-irreducible elements $sr$, $srs$, and $srsr$.
More generally, let $W$ be a Coxeter group of type $I_2(m)$ with $S=\set{r,s}$.
A congruence $\Theta$ on $W$ has the property that $W/\Theta$ is isomorphic to the weak order on $I_2(m')$ if and only if $\Theta$ contracts exactly $m-m'$ join-irreducible elements of the form $\operatorname{alt}_k(r,s)$ for $k\in\set{2,\ldots,m-1}$ and exactly $m-m'$ join-irreducible elements of the form $\operatorname{alt}_k(s,r)$ for $k\in\set{2,\ldots,m-1}$.
This result for dihedral groups is simple, but its importance for the general case is underscored by Theorem~\ref{diagram uniqueness}.
In Sections~\ref{Bn Sn+1 sec} and~\ref{exceptional sec}, we consider the remaining possibilities for the pair $(W,W')$ and, in particular, complete the proofs of Theorems~\ref{existence} and~\ref{diagram uniqueness}.
As mentioned earlier, Theorem~\ref{main} follows from Theorems~\ref{para factor} and~\ref{existence}.
\section{Homomorphisms from $B_n$ to $S_{n+1}$}\label{Bn Sn+1 sec}
We realize the groups $S_{n+1}$ and $B_n$ as in Examples~\ref{An para} and~\ref{Bn para}.
A surjective homomorphism $\eta$ from $B_n$ to $S_{n+1}$ restricts, by Proposition~\ref{diagram facts}, to a surjective homomorphism from $(B_n)_{\set{s_0,s_1}}\cong B_2$ to $(S_{n+1})_{\set{s_1,s_2}}\cong S_3$.
Thus, as discussed in Section~\ref{dihedral sec}, the congruence associated to $\eta$ contracts exactly one of the join-irreducible elements $s_0s_1,s_0s_1s_0$, and exactly one of the join-irreducible elements $s_1s_0,s_1s_0s_1$.
We will see that each of the four choices leads to a unique surjective homomorphism and that three of the four choices are associated to homogeneous congruences of degree $2$.
\subsection{Simion's homomorphism}\label{simion sec}
We consider first the case where $s_0s_1$ and $s_1s_0s_1$ are contracted, but we start not by contracting join-irreducibles, but by giving a map from $B_n$ to $S_{n+1}$.
This map was first defined by Simion~\cite{Simion} in connection with a construction of the type-B associahedron (also known as the cyclohedron).
Let $\eta_\sigma:B_n\to S_{n+1}$ map $\pi\in B_n$ to the permutation constructed as follows:
Construct the sequence $(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\,0\,\pi_1\cdots\pi_{n-1}\pi_n$, extract the subsequence consisting of nonnegative entries and add $1$ to each entry.
Thus for example, for $\pi=3(-4)65(-7)(-1)2\in B_7$, we construct the sequence
\[(-2)17(-5)(-6)4(-3)03(-4)65(-7)(-1)2\]
and extract the subsequence $17403652$, so that $\eta_\sigma(\pi)$ is $28514763\in S_8$.
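Simion's map can be sketched in a few lines of Python (our own naming; the assertion reproduces the worked example above):

```python
def eta_sigma(pi):
    """Simion's map B_n -> S_{n+1}: build the sequence
    (-pi_n)...(-pi_1) 0 pi_1...pi_n, keep the nonnegative entries,
    and add 1 to each."""
    seq = [-x for x in reversed(pi)] + [0] + list(pi)
    return [x + 1 for x in seq if x >= 0]

# The worked example: pi = 3(-4)65(-7)(-1)2 maps to 28514763.
assert eta_sigma([3, -4, 6, 5, -7, -1, 2]) == [2, 8, 5, 1, 4, 7, 6, 3]
```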
We will prove the following theorem:
\begin{theorem}\label{simion thm}
The map $\eta_\sigma$ is a surjective lattice homomorphism from $B_n$ to $S_{n+1}$.
Its fibers constitute the congruence generated by $s_0s_1$ and $s_1s_0s_1$.
Furthermore, $\eta_\sigma$ is the unique surjective lattice homomorphism from $B_n$ to $S_{n+1}$ whose restriction to $(B_n)_{\set{s_0,s_1}}$ agrees with $\eta_\sigma$.
\end{theorem}
Suppose $\pi\lessdot \tau$ in the weak order on $B_n$.
Then the one-line notations for $\pi$ and $\tau$ differ in one of two ways:
Either they agree except in the sign of the first entry or they agree except that two adjacent entries of $\pi$ are transposed in $\tau$.
If they agree except in the sign of the first entry, then $\eta_\sigma(\pi)\neq\eta_\sigma(\tau)$.
If they agree except that two adjacent entries of $\pi$ are transposed in $\tau$, then $\eta_\sigma(\pi)=\eta_\sigma(\tau)$ if and only if the two adjacent entries have opposite signs.
We have proved the following:
\begin{prop}\label{simion cover}
If $\pi\lessdot \tau$ in the weak order on $B_n$, then $\eta_\sigma(\pi)=\eta_\sigma(\tau)$ if and only if the reflection $t$ associated to the cover $\pi\lessdot\tau$ is $(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $1\le i<j\le n$.
\end{prop}
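As a quick sanity check of the transposition criterion (a Python sketch with our own names, repeating the map for self-containment; we only test the effect of transposing adjacent entries, without verifying which direction is the cover relation):

```python
def eta_sigma(pi):
    seq = [-x for x in reversed(pi)] + [0] + list(pi)
    return [x + 1 for x in seq if x >= 0]

def swap(pi, i):
    """Transpose the adjacent entries in positions i and i+1 (0-indexed)."""
    pi = list(pi)
    pi[i], pi[i + 1] = pi[i + 1], pi[i]
    return pi

pi = [3, -4, 6, 5, -7, -1, 2]
# positions 0,1 hold entries of opposite sign: the image is unchanged
assert eta_sigma(swap(pi, 0)) == eta_sigma(pi)
# positions 2,3 hold entries of the same sign: the image changes
assert eta_sigma(swap(pi, 2)) != eta_sigma(pi)
```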
We give a similar characterization of the congruence generated by $s_0s_1$ and~$s_1s_0s_1$:
\begin{prop}\label{simion finest}
The congruence generated by $s_0s_1$ and $s_1s_0s_1$ removes a shard $\Sigma$ if and only if $\Sigma$ is contained in a hyperplane $H_t$ such that $t=(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $1\le i<j\le n$.
\end{prop}
\begin{proof}
Let $C$ be the set of signed subsets corresponding to shards contained in hyperplanes $H_t$ such that $t=(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $1\le i<j\le n$.
Thus $C$ is the set of all nonempty signed subsets $A$ with $m<0$ and $m\neq-M$.
(Equivalently, $A\not\subseteq \set{1,\ldots,n}$ and $|A|<n$.)
The assertion of the proposition is that $C$ is the set of signed subsets representing shards removed by the congruence generated by $s_0s_1$ and $s_1s_0s_1$.
The join-irreducible element $s_0s_1$ has one-line notation $2(-1)34\cdots n$, and the join-irreducible element $s_1s_0s_1$ is $1(-2)34\cdots n$.
The corresponding signed subsets are $\set{-1,3,4,\ldots, n}$ and $\set{-2,3,4,\ldots, n}$, both of which are in $C$.
To establish the ``only if'' assertion of the proposition, we show that no set $A_1\in C$ arrows a set $A_2\not\in C$.
Indeed, if $A_1\in C$ and $A_2\not\in C$, then $m_1<0$ and $m_1\neq-M_1$ and either $m_2>0$ or $m_2=-M_2$.
The fact that $m_1<0$ and $m_1\neq-M_1$ rules out conditions (q1) and (q2).
Whether $m_2>0$ or $m_2=-M_2$, conditions (q3)--(q6) are easily ruled out as well.
To establish the ``if'' assertion, we show that every set $A_2\in C$, except the sets $\set{-1,3,4,\ldots, n}$ and $\set{-2,3,4,\ldots, n}$, is arrowed to by another set ${A_1\in C}$.
Since the shard digraph is acyclic, this implies that $\set{-1,3,4,\ldots, n}$ and $\set{-2,3,4,\ldots, n}$ are the unique sources in the restriction of the digraph to $C$.
Let $A_2\in C$, so that $m_2<0$ and $m_2\neq-M_2$, with $A_2$ not equal to $\set{-1,3,4,\ldots, n}$ or $\set{-2,3,4,\ldots, n}$.
We will produce a signed subset $A_1\in C$ with $A_1\to A_2$ by considering several cases.
\noindent
\textbf{Case 1:} $|A_2|<n-1$.
Then let $A_1=A_2\cup\set{M_2}$.
Then $M_2>M_1>0>m_1=m_2\neq-M_2$, so (q4) holds.
Furthermore, $|A_1|<n$, so $m_1\neq -M_1$, and thus $A_1\in C$.
We have $M_1\in A_2^c\setminus\set{-M_2,-m_2}$.
By definition of $M_1$, the element $-M_1$ is not in $A_1$, and since $A_2\subset A_1$, we conclude that $-M_1\not\in A_2$.
Thus (f2\,:\,$M_1$) holds.
Also, (r1) holds, so $A_1\to A_2$ in the shard digraph.
\noindent
\textbf{Case 2:} $|A_2|=n-1$.
Since $A_2\in C$, this is the only alternative to Case 1.
By hypothesis, we have ruled out the possibilities $(m_2,M_2)=(-1,2)$ and $(m_2,M_2)=(-2,1)$.
\noindent
\textbf{Subcase 2a:} $A_2\cap[1,M_2-1]\neq\emptyset$.
(The notation $[a,b]$ stands for the closed interval between $a$ and $b$.)
Define $M_1$ to be the maximum element of $A_2\cap[1,M_2-1]$ and define $A_1$ to be $(A_2\cup\set{M_2})\setminus\set{M_1}$.
Thus $M_1$ is indeed the maximum element of $\set{1,\ldots,n}\setminus \set{|a|:a\in A_1}$.
Conditions (q4), (f1\,:\,$M_1$), and (r1) hold.
We have $m_1=m_2<0$ and $|A_1|=|A_2|<n$, so $A_1$ is in $C$.
\noindent
\textbf{Subcase 2b:} $A_2\cap[1,M_2-1]=\emptyset$ and $m_2>-M_2$.
By definition of $M_2$, neither $M_2$ nor $-M_2$ is in $A_2$.
Thus since $|A_2|=n-1$, every other element $i$ of $\set{1,\ldots,n}$ has either $i$ or $-i$ in $A_2$.
Since $A_2\cap[1,M_2-1]=\emptyset$, we conclude that the interval $[-M_2+1,-1]$ is contained in $A_2$.
Now the fact that $m_2>-M_2$ implies that $m_2=-M_2+1$.
If $m_2=-1$ then $A_2=\set{-1,3,4,\ldots, n}$, a possibility that is ruled out by hypothesis.
Define $A_1$ to be $A_2\setminus\set{m_2}$.
Then $m_1=m_2+1<0$ and $M_1=M_2$.
Also $|A_1|=n-2$, so $A_1$ is in $C$.
Conditions (q3), (f1\,:\,$m_1$) and (r1) hold.
\noindent
\textbf{Subcase 2c:} $A_2\cap[1,M_2-1]=\emptyset$, $m_2<-M_2$, and $A_2\cap[m_2+1,-1]\neq\emptyset$.
Let $A_1=(A_2\cup\set{-m_2})\setminus\set{m_2}$.
Since $A_2\cap[m_2+1,-1]\neq\emptyset$, the minimum element $m_1$ of $A_1$ is negative, and since in addition $|A_1|=|A_2|<n$, the set $A_1$ is in $C$.
Conditions (q3), (f1\,:\,$m_1$), and (r1) hold, with condition (r1) using the fact that $-m_2>M_1$.
\noindent
\textbf{Subcase 2d:} $A_2\cap[1,M_2-1]=\emptyset$, $m_2<-M_2$, and $A_2\cap[m_2+1,-1]=\emptyset$.
As in Subcase 2b, besides $M_2$, every element $i$ of $\set{1,\ldots,n}$ has either $i$ or $-i$ in $A_2$.
Thus the fact that $A_2\cap[1,M_2-1]=\emptyset$ and $A_2\cap[m_2+1,-1]=\emptyset$ implies that $[1,-m_2-1]\cap[1,M_2-1]=\emptyset$.
In other words, either $m_2=-1$ or $M_2=1$, but the fact that $m_2<-M_2$ rules out the possibility that $m_2=-1$.
Thus $M_2=1$.
Since $A_2\cap[m_2+1,-1]=\emptyset$, the set $A_2$ equals $\set{m_2}\cup(\set{1,\ldots,n}\setminus\set{1,-m_2})$.
If $m_2=-2$, then $A_2=\set{-2,3,4,\ldots, n}$, which was also ruled out by hypothesis.
The set $A_1=(A_2\setminus\set{m_2,-m_2-1})\cup\set{-m_2,m_2+1}$ is in $C$.
We have $m_1=m_2+1<-1$ and $M_1=M_2=1$, so conditions (q3), (f2\,:\,$m_1$), and (r1) hold.
\end{proof}
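The two descriptions of the set $C$ used in the preceding proof ($m<0$ and $m\neq-M$, versus $A\not\subseteq\set{1,\ldots,n}$ and $|A|<n$) can be cross-checked by brute force over all signed subsets for a small $n$. A Python sketch, with our own names:

```python
from itertools import product

def bn_stats(A, n):
    m = min(A)
    M = -m if len(A) == n else max(set(range(1, n + 1)) - {abs(a) for a in A})
    return m, M

def signed_subsets(n):
    # each index i in 1..n contributes i, -i, or nothing; drop the empty set
    for choice in product((1, -1, 0), repeat=n):
        A = frozenset(s * i for i, s in enumerate(choice, start=1) if s)
        if A:
            yield A

n = 4
for A in signed_subsets(n):
    m, M = bn_stats(A, n)
    assert (m < 0 and m != -M) == (not A <= set(range(1, n + 1)) and len(A) < n)
```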
So far, we know nothing about how $\eta_\sigma$ relates to the weak order on $S_{n+1}$.
But Propositions~\ref{simion cover} and~\ref{simion finest} constitute a proof of the second assertion of Theorem~\ref{simion thm}: that the congruence $\Theta_\sigma$ defined by the fibers of $\eta_\sigma$ is generated by $s_0s_1$ and $s_1s_0s_1$.
Proposition~\ref{simion cover} implies that the bottom elements of $\Theta_\sigma$ are exactly those signed permutations whose one-line notation consists of a (possibly empty) sequence of negative entries followed by a (possibly empty) sequence of positive entries.
The restriction of $\eta_\sigma$ to bottom elements is a bijection to $S_{n+1}$, with inverse map described as follows:
If $\tau\in S_{n+1}$ and $\tau_i=1$, then the inverse map takes $\tau$ to the signed permutation whose one-line notation is
\[(-\tau_{i-1}+1)(-\tau_{i-2}+1)\cdots(-\tau_1+1)(\tau_{i+1}-1)(\tau_{i+2}-1)\cdots(\tau_{n+1}-1).\]
This inverse map is easily seen to be order-preserving.
The map $\eta_\sigma$ is also easily seen to be order-preserving, so its restriction is as well.
We have shown that the restriction of $\eta_\sigma$ to bottom elements of $\Theta_\sigma$ is an isomorphism to the weak order on~$S_{n+1}$.
The natural map from $B_n$ to $B_n/\Theta_\sigma$ is a surjective homomorphism, and $B_n/\Theta_\sigma$ is isomorphic to the subposet of $B_n$ induced by bottom elements of $\Theta_\sigma$.
Since $\eta_\sigma$ equals the composition of the natural map, followed by the isomorphism to the poset of bottom elements, followed by the isomorphism to $S_{n+1}$, it is a surjective homomorphism.
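A Python sketch of this inverse (our own names, with the map repeated for self-containment), round-tripped through the worked example of $\eta_\sigma$ above:

```python
def eta_sigma(pi):
    seq = [-x for x in reversed(pi)] + [0] + list(pi)
    return [x + 1 for x in seq if x >= 0]

def bottom_of_fiber(tau):
    """Inverse of the restriction of eta_sigma to bottom elements:
    for tau in S_{n+1} with the entry 1 in position i, the bottom element
    is (-tau_{i-1}+1)...(-tau_1+1)(tau_{i+1}-1)...(tau_{n+1}-1)."""
    i = tau.index(1)                    # 0-indexed position of the entry 1
    left = [-t + 1 for t in reversed(tau[:i])]
    right = [t - 1 for t in tau[i + 1:]]
    return left + right

tau = [2, 8, 5, 1, 4, 7, 6, 3]
pi0 = bottom_of_fiber(tau)
# negatives first, then positives: a bottom element of its congruence class
assert pi0 == [-4, -7, -1, 3, 6, 5, 2]
assert eta_sigma(pi0) == tau
```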
Now, if $\eta$ is any surjective homomorphism agreeing with $\eta_\sigma$ on $(B_n)_{\set{s_0,s_1}}$, the associated congruence $\Theta$ contracts $s_0s_1$ and $s_1s_0s_1$.
Thus $\Theta$ is a coarsening of the congruence $\Theta_\sigma$ associated to $\eta_\sigma$.
But then, since both $B_n/\Theta$ and $B_n/\Theta_\sigma$ are isomorphic to $S_{n+1}$, the congruences $\Theta$ and $\Theta_\sigma$ have the same number of classes, so they must coincide.
We have completed the proof of Theorem~\ref{simion thm}.
\subsection{A non-homogeneous homomorphism}\label{nonhom sec}
In this section, we consider surjective homomorphisms whose congruence contracts $s_0s_1s_0$ and $s_1s_0$.
We begin by defining a homomorphism $\eta_\nu$ that is combinatorially similar to Simion's homomorphism.
We will see that, whereas $\eta_\sigma$ is homogeneous of degree $2$, $\eta_\nu$ is non-homogeneous.
However, $\eta_\nu$ is still of low degree: it is generated by contracting join-irreducible elements of degrees $2$ and $3$.
Let $\eta_\nu:B_n\to S_{n+1}$ map $\pi\in B_n$ to the permutation obtained by extracting the subsequence of $(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\pi_1\cdots\pi_{n-1}\pi_n$ consisting of values greater than or equal to $-1$, changing $-1$ to $0$, and then adding $1$ to each entry.
Thus for example, for $\pi=3(-4)65(-7)(-1)2\in B_7$, we extract the subsequence $174365(-1)2$, so that $\eta_\nu(\pi)$ is $28547613$.
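Like $\eta_\sigma$, the map $\eta_\nu$ is short to sketch in Python (our own naming; the assertion applies the map to the same $\pi$ as above):

```python
def eta_nu(pi):
    """Extract the entries >= -1 of (-pi_n)...(-pi_1) pi_1...pi_n,
    change -1 to 0, then add 1 to each entry."""
    seq = [-x for x in reversed(pi)] + list(pi)
    return [(0 if x == -1 else x) + 1 for x in seq if x >= -1]

assert eta_nu([3, -4, 6, 5, -7, -1, 2]) == [2, 8, 5, 4, 7, 6, 1, 3]
```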
We will prove the following theorem:
\begin{theorem}\label{nonhom thm}
The map $\eta_\nu$ is a surjective lattice homomorphism from $B_n$ to $S_{n+1}$.
Its fibers constitute the congruence generated by $s_0s_1s_0$, $s_1s_0$, $s_1s_0s_1s_2$, and $s_2s_1s_0s_1s_2$.
Furthermore, $\eta_\nu$ is the unique surjective lattice homomorphism from $B_n$ to $S_{n+1}$ whose restriction to $(B_n)_{\set{s_0,s_1}}$ agrees with $\eta_\nu$.
\end{theorem}
The outline of the proof of Theorem~\ref{nonhom thm} is the same as the proof of Theorem~\ref{simion thm}.
We first determine when signed permutations $\pi\lessdot \tau$ in $B_n$ map to the same element of $S_{n+1}$.
If $\pi$ and $\tau$ agree except in the sign of the first entry (with $\pi_1$ necessarily positive), then $\eta_\nu(\pi)=\eta_\nu(\tau)$ if and only if $\pi_1>1$.
If they agree except that two adjacent entries of $\pi$ are transposed in $\tau$, then $\eta_\nu(\pi)=\eta_\nu(\tau)$ if and only if the two transposed entries have opposite signs and neither is $\pm1$.
Thus:
\begin{prop}\label{nonhom cover}
If $\pi\lessdot \tau$ in the weak order on $B_n$, then $\eta_\nu(\pi)=\eta_\nu(\tau)$ if and only if the reflection $t$ associated to the cover $\pi\lessdot\tau$ is either $(i\,\,-i)$ for some $i>1$ or $(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $2\le i<j\le n$.
\end{prop}
We now show that the congruence generated by $s_0s_1s_0$, $s_1s_0$, $s_1s_0s_1s_2$, and $s_2s_1s_0s_1s_2$ agrees with the fibers of $\eta_\nu$, as described in Proposition~\ref{nonhom cover}.
\begin{prop}\label{nonhom finest}
In the congruence generated by $s_0s_1s_0$, $s_1s_0$, $s_1s_0s_1s_2$, and $s_2s_1s_0s_1s_2$ a shard is removed if and only if it is contained in a hyperplane $H_t$ with $t=(i\,\,-i)$ and $i>1$ or $t=(i\,\,-j)(j\,\,-i)$ and $2\le i<j\le n$.
\end{prop}
\begin{proof}
Let $C$ be the set of signed subsets corresponding to shards contained in hyperplanes $H_t$ such that $t$ is either $(i\,\,-i)$ for some $i>1$ or $(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $2\le i<j\le n$.
Thus $C$ is the set of all nonempty signed subsets $A$ with $m<-1$ and $M>1$.
The elements $s_0s_1s_0$, $s_1s_0$, $s_1s_0s_1s_2$, and $s_2s_1s_0s_1s_2$ correspond respectively to the signed sets $\set{-2,-1,3,4,\ldots,n}$, $\set{-2,1,3,4,\ldots,n}$, $\set{-2,4,5,\ldots,n}$, and $\set{-3,4,5,\ldots,n}$.
We first show that no set $A_1\in C$ arrows a set $A_2\not\in C$.
Let $A_1\in C$ and $A_2\not\in C$.
Then $m_1<-1$ and $M_1>1$, while either $m_2\ge-1$ or $M_2=1$.
The fact that $m_1<-1$ rules out condition (q2).
If $M_2=1$, then the fact that $M_1>1$ rules out conditions (q1), (q3) and (q4), and the fact that $m_1<-1$ rules out (q5) and (q6).
Suppose $m_2\ge -1$.
Then the equality $M_2=-m_2$ in (q1) fails unless $m_2=-1$, but in that case $M_2=1$, so (q1) is ruled out.
The fact that $m_1<-1$ rules out (q3) and (q4).
Since $-m_2\le 1$ and $M_1>1$, conditions (q5) and (q6) fail as well.
Next, let $A_2\in C$, so that $m_2<-1$ and $M_2>1$, with $A_2$ not equal to $\set{-2,-1,3,4,\ldots,n}$, $\set{-2,1,3,4,\ldots,n}$, $\set{-2,4,5,\ldots,n}$, or $\set{-3,4,5,\ldots,n}$.
We will produce a signed subset $A_1\in C$ with $A_1\to A_2$.
\noindent
\textbf{Case 1:} $m_2=-M_2$.
Then $|A_2|=n$.
If $m_2=-2$, then $A_2=\set{-2,-1,3,4,\ldots,n}$ or $A_2=\set{-2,1,3,4,\ldots,n}$, but these are ruled out, so $m_2<-2$.
Let $A_1=(A_2\cup\set{-m_2,m_2+1})\setminus\set{m_2,-m_2-1}$.
Then $m_1=m_2+1$ and $M_1=M_2-1=-m_1$.
Conditions (q1) and (r1) hold, so $A_1\to A_2$.
Also $-m_1=M_1>1$, so $A_1\in C$.
\noindent
\textbf{Case 2:} $m_2\neq-M_2$.
Then $|A_2|<n$.
\noindent
\textbf{Subcase 2a:} $(A_2\cup -A_2)^c\cap[2,M_2-1]\neq\emptyset$.
Let $A_1=A_2\cup\set{M_2}$.
Then $m_1=m_2$ and $M_1$ is the maximal element of $(A_2\cup -A_2)^c\cap[2,M_2-1]$, so $A_1\in C$.
Furthermore, $M_2>M_1>m_1=m_2\neq-M_2$, so (q4) holds.
Also (f2\,:\,$M_1$) and (r1) hold.
\noindent
\textbf{Subcase 2b:} $(A_2\cup -A_2)^c\cap[2,M_2-1]=\emptyset$ and $A_2\cap[2,M_2-1]\neq\emptyset$.
Let $b$ be the largest element of $A_2\cap[2,M_2-1]$ and define $A_1=(A_2\cup\set{M_2})\setminus\set{b}$.
Then $m_1=m_2$ and $M_1=b$, so $A_1\in C$.
Also (q4), (f1\,:\,$M_1$) and (r1) hold.
\noindent
\textbf{Subcase 2c:} $(A_2\cup -A_2)^c\cap[2,M_2-1]=\emptyset$ and $A_2\cap[2,M_2-1]=\emptyset$.
In this case, $A_2$ contains $\set{-M_2+1,-M_2+2,\ldots,-2}$.
If $m_2<-2$ and $M_2>2$ then let $A_1=A_2\setminus\set{m_2}$.
This is in $C$, since $M_1=M_2>1$ and $m_1\le-2$.
Also, (q3), (f1\,:\,$m_1$) and (r1) hold.
If $M_2=2$ then $m_2<-2$.
If also $m_2=-3$ then there are two possibilities, because $A_2=\set{-3,4,5,\ldots,n}$ is ruled out.
If $A_2=\set{-3,1,4,5,\ldots,n}$ then let $A_1=\set{-2,1,3,4,\ldots,n}\in C$.
If $A_2=\set{-3,-1,4,5,\ldots,n}$ then let $A_1=\set{-2,-1,3,4,\ldots,n}\in C$.
In either case, (q6), (f3\,:\,$-M_1$) and (r2) hold.
If $m_2<-3$ then let $A_1=(A_2\cup\set{m_2+1,-m_2})\setminus\set{-m_2-1,m_2}$.
Then $m_1=m_2+1<-2$ and $M_1=2$, so $A_1\in C$.
Conditions (q3) and (r1) hold, along with either (f1\,:\,$m_1$) or (f2\,:\,$m_1$).
If $m_2=-2$ then $M_2>2$.
Since $A_2$ contains $\set{-M_2+1,-M_2+2,\ldots,-2}$, we conclude that $M_2=3$.
Since $A_2=\set{-2,4,5,\ldots,n}$ is ruled out, there are two possibilities.
If $A_2=\set{-2,1,4,5,\ldots,n}$ then let $A_1=\set{-2,1,3,4,5,\ldots,n}$.
If $A_2=\set{-2,-1,4,5,\ldots,n}$ then let $A_1=\set{-2,-1,3,4,5,\ldots,n}$.
In either case, (q4), (f3\,:\,$M_1$), and (r1) hold.
\end{proof}
We have shown that the fibers of $\eta_\nu$ are a lattice congruence on $B_n$ satisfying the second assertion of Theorem~\ref{nonhom thm}.
Let $\Theta_\nu$ be this congruence.
The map $\eta_\nu$ is obviously order-preserving, so just as in the proof of Theorem~\ref{simion thm}, we will show that $\eta_\nu$ restricts to a bijection from bottom elements of $\Theta_\nu$-classes to permutations in $S_{n+1}$ and that the inverse of the restriction is order-preserving.
Proposition~\ref{nonhom cover} implies that the bottom elements of $\Theta_\nu$ are exactly those signed permutations whose one-line notation consists of a sequence of positive elements, followed by $\pm1$, then a sequence of negative elements, and finally a sequence of positive elements.
(Any of these three sequences may be empty.)
Such a signed permutation $\pi_1\cdots\pi_n$, with $\pi_i=1$ and negative entries $\pi_{i+1}\cdots\pi_j$, maps to the permutation
\[(-\pi_j)(-\pi_{j-1})\cdots(-\pi_{i+1})\,1\,\pi_1\pi_2\cdots\pi_{i-1}\,2\,\pi_{j+1}\pi_{j+2}\cdots\pi_n.\]
If $\pi_i=-1$ instead, then the bottom element maps to the same permutation, except with the entries $1$ and $2$ swapped.
This restriction of $\eta_\nu$ is a bijection whose inverse takes a permutation $\tau_1\cdots\tau_{n+1}$ with $\tau_i=1$ and $\tau_j=2$, with $i<j$, to the signed permutation
\[(\tau_{i+1}-1)\cdots(\tau_{j-1}-1)\,1\,(-\tau_{i-1}+1)\cdots(-\tau_1+1)(\tau_{j+1}-1)\cdots(\tau_{n+1}-1).\]
If $\tau_i=2$ and $\tau_j=1$, with $i<j$, then the inverse map takes $\tau$ to the same signed permutation, except with $-1$ in place of $1$.
This inverse is order-preserving, and we have proved the first two assertions of Theorem~\ref{nonhom thm}.
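A Python sketch of this inverse (our own names, with the map repeated for self-containment), together with a small round-trip check in $B_3$:

```python
def eta_nu(pi):
    seq = [-x for x in reversed(pi)] + list(pi)
    return [(0 if x == -1 else x) + 1 for x in seq if x >= -1]

def bottom_of_fiber(tau):
    """Inverse of the restriction of eta_nu to bottom elements, for the
    case where 1 precedes 2 in tau (the other case flips the sign of 1)."""
    i, j = tau.index(1), tau.index(2)
    assert i < j
    mid = [t - 1 for t in tau[i + 1:j]]
    left = [-t + 1 for t in reversed(tau[:i])]
    right = [t - 1 for t in tau[j + 1:]]
    return mid + [1] + left + right

tau = [3, 1, 4, 2]                    # a permutation in S_4 with 1 before 2
pi0 = bottom_of_fiber(tau)
# positives, then 1, then negatives, then positives: a bottom element
assert pi0 == [3, 1, -2]
assert eta_nu(pi0) == tau
```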
To prove the third assertion, we temporarily introduce the bulky notation $\eta_\nu^{(n)}:B_n\to S_{n+1}$.
(Until now, we had suppressed the explicit dependence of the map $\eta_\nu$ on $n$, and we will continue to do so after this explanation.)
Arguing as in the proof of Theorem~\ref{simion thm}, we easily see that, for each $n\ge 3$, the map $\eta_\nu^{(n)}$ is the unique surjective homomorphism from $B_n$ to $S_{n+1}$ whose restriction to $(B_n)_{\set{s_0,s_1,s_2}}$ is $\eta_\nu^{(3)}$.
Now let $\eta$ be any surjective homomorphism from $B_n$ to $S_{n+1}$ whose restriction to $(B_n)_{\set{s_0,s_1}}$ is $\eta_\nu^{(2)}$.
Let $\Theta$ be the associated congruence on $B_n$.
Then in particular, the congruence defined by the restriction of $\eta$ to $(B_n)_{\set{s_0,s_1,s_2}}\cong B_3$ contracts $s_0s_1s_0$ and $s_1s_0$, corresponding to signed subsets $\set{-2,-1,3}$ and $\set{-2,1,3}$.
Figure~\ref{IrrConB3 squashed fig} shows the poset of signed subsets corresponding to join-irreducible elements in $(B_n)_{\set{s_0,s_1,s_2}}\cong B_3$ not forced to be contracted by the contraction of $s_0s_1s_0$ and $s_1s_0$.
\begin{figure}
\caption{The complement of an order ideal in $\operatorname{Irr}(\operatorname{Con}(B_3))$}
\label{IrrConB3 squashed fig}
\end{figure}
(This is the complement of an order ideal in the poset of Figure~\ref{IrrConB3 fig}.)
By Proposition~\ref{diagram facts}, the restriction of $\eta$ to $(B_n)_{\set{s_0,s_1,s_2}}$ is a surjective homomorphism to $S_4$.
In particular, the restriction of $\operatorname{Irr}(\operatorname{Con}((B_n)_{\set{s_0,s_1,s_2}}))$ to join-irreducibles not contracted by $\Theta$ is isomorphic to $\operatorname{Irr}(\operatorname{Con}(S_4))$.
Comparing Figure~\ref{IrrConB3 squashed fig} to Figure~\ref{IrrConA3 fig}, we see that $\Theta$ must contract two additional join-irreducible elements, beyond those forced by $s_0s_1s_0$ and $s_1s_0$.
We see, furthermore, that the only two join-irreducible elements that can be contracted, to leave a poset isomorphic to $\operatorname{Irr}(\operatorname{Con}(S_4))$, are those whose signed subsets are $-2$ and $-3$.
We conclude that $\Theta$ contracts $s_1s_0s_1s_2$ and $s_2s_1s_0s_1s_2$.
The second assertion of Theorem~\ref{nonhom thm}, proved above, implies that $\Theta$ is weakly coarser than $\Theta_\nu$.
But since $\eta$ and $\eta_\nu$ are both surjective lattice homomorphisms to $S_{n+1}$, the congruences $\Theta$ and $\Theta_\nu$ have the same number of congruence classes, so $\Theta=\Theta_\nu$.
Thus $\eta$ and $\eta_\nu$ agree up to automorphisms of $S_{n+1}$, but the only nontrivial automorphism of $S_{n+1}$ is the diagram automorphism.
Since both maps take $(B_n)_{\set{s_0,s_1}}$ to $(S_{n+1})_{\set{s_1,s_2}}$, we rule out the diagram automorphism and conclude that $\eta=\eta_\nu$.
This completes the proof of Theorem~\ref{nonhom thm}.
\subsection{Two more homogeneous homomorphisms}\label{two more sec}
In this section, we consider the case where $s_0s_1$ and $s_1s_0$ are contracted and the case where $s_0s_1s_0$ and $s_1s_0s_1$ are contracted.
The congruences associated to these cases are dual to each other by Proposition~\ref{dual cong}.
Let $\eta_\delta:B_n\to S_{n+1}$ send $\pi\in B_n$ to the permutation obtained as follows:
If the one-line notation for $\pi$ contains the entry $1$, then construct a sequence
\[(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\,0\,\pi_1\cdots\pi_{n-1}\pi_n,\]
extract the subsequence consisting of nonnegative entries, and add $1$ to each entry.
If the one-line notation for $\pi$ contains the entry $-1$, then extract the subsequence of $(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\pi_1\cdots\pi_{n-1}\pi_n$ consisting of values greater than or equal to $-1$, change $-1$ to $0$, then add $1$ to each entry.
Notice that $\eta_\delta$ is a hybrid of $\eta_\sigma$ and $\eta_\nu$, in the sense that $\eta_\delta(\pi)=\eta_\sigma(\pi)$ if the one-line notation of $\pi$ contains $1$ and $\eta_\delta(\pi)=\eta_\nu(\pi)$ if the one-line notation of $\pi$ contains $-1$.
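The two-case recipe for $\eta_\delta$ can be sketched concretely in Python (the function name is ours; a signed permutation is encoded by its one-line notation as a list of signed integers):

```python
def eta_delta(pi):
    """Sketch of eta_delta: B_n -> S_{n+1}, following the two cases above.
    pi is the one-line notation of a signed permutation, e.g. [2, -3, 1]."""
    rev_neg = [-x for x in reversed(pi)]  # (-pi_n)(-pi_{n-1})...(-pi_1)
    if 1 in pi:
        # keep the nonnegative entries of (-pi_n)...(-pi_1) 0 pi_1...pi_n, add 1
        seq = rev_neg + [0] + list(pi)
        return [x + 1 for x in seq if x >= 0]
    else:  # -1 is an entry of pi
        # keep entries >= -1 of (-pi_n)...(-pi_1) pi_1...pi_n; -1 becomes 0; add 1
        seq = rev_neg + list(pi)
        return [x + 1 if x != -1 else 1 for x in seq if x >= -1]
```

For example, `eta_delta([2, -3, 1])` returns `[4, 1, 3, 2]`. Running the map over all $48$ elements of $B_3$ yields all $24$ permutations of $S_4$, consistent with the surjectivity asserted in Theorem~\ref{delta thm} below.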
We will prove the following theorem:
\begin{theorem}\label{delta thm}
The map $\eta_\delta$ is a surjective lattice homomorphism from $B_n$ to $S_{n+1}$.
Its fibers constitute the congruence generated by $s_0s_1s_0$ and $s_1s_0s_1$.
Furthermore, $\eta_\delta$ is the unique surjective lattice homomorphism from $B_n$ to $S_{n+1}$ whose restriction to $(B_n)_{\set{s_0,s_1}}$ agrees with $\eta_\delta$.
\end{theorem}
Theorem~\ref{delta thm} implies in particular that the lattice homomorphism associated to the congruence of Example~\ref{miraculous} is the $n=3$ case of $\eta_\delta$.
Suppose $\pi\lessdot \tau$ in $B_n$.
First, suppose that $1$ is an entry in the one-line notation of both $\pi$ and $\tau$.
If $\pi$ and $\tau$ agree except in the sign of the first entry, then $\eta_\delta(\pi)\neq\eta_\delta(\tau)$.
If they agree except that two adjacent entries of $\pi$ are transposed in $\tau$, then $\eta_\delta(\pi)=\eta_\delta(\tau)$ if and only if the two adjacent entries have opposite signs.
Next, suppose that $-1$ is an entry in the one-line notation of both $\pi$ and $\tau$.
If $\pi$ and $\tau$ agree except in the sign of the first entry, then since $-1$ is an entry in the one-line notation of both $\pi$ and $\tau$, we must have $\pi_1>1$, and therefore $\eta_\delta(\pi)=\eta_\delta(\tau)$.
If they agree except that two adjacent entries of $\pi$ are transposed in $\tau$, then $\eta_\delta(\pi)=\eta_\delta(\tau)$ if and only if the two transposed entries have opposite signs and neither is $-1$.
Finally, suppose $\pi$ has the entry $1$ in its one-line notation, but $\tau$ has $-1$.
Then $\pi_1=1$ and $\tau_1=-1$, and $\eta_\delta(\pi)\neq\eta_\delta(\tau)$.
Thus:
\begin{prop}\label{delta cover}
Suppose $\pi\lessdot \tau$ in the weak order on $B_n$, and let $t$ be the reflection associated to the cover $\pi\lessdot\tau$.
Then $\eta_\delta(\pi)=\eta_\delta(\tau)$ if and only if one of the following conditions holds:
\begin{enumerate}
\item[(i)] $t$ is $(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $2\le i<j\le n$.
\item[(ii)] $\pi$ has the entry $1$ in its one-line notation and $t$ is $(1\,\,-j)(j\,\,-1)$ for some $j$ with $1<j\le n$.
\item[(iii)] $\pi$ has the entry $-1$ in its one-line notation and $t$ is $(i\,\,-i)$ for some $i>1$.
\end{enumerate}
\end{prop}
As in the previous cases, we now show that the congruence generated by $s_0s_1s_0$ and $s_1s_0s_1$ has a description compatible with Proposition~\ref{delta cover}.
\begin{prop}\label{delta finest}
The congruence generated by $s_0s_1s_0$ and $s_1s_0s_1$ removes a shard $\Sigma$ if and only if one of the following conditions holds:
\begin{enumerate}
\item[(i)] $\Sigma$ is contained in a hyperplane $H_t$ such that $t$ is $(i\,\,-j)(j\,\,-i)$ for some $i$ and $j$ with $2\le i<j\le n$.
\item[(ii)] $\Sigma$ is below the hyperplane $H_{(1\,\,-1)}$ and $\Sigma$ is contained in a hyperplane $H_t$ such that $t$ is $(1\,\,-j)(j\,\,-1)$ for some $j$ with $1<j\le n$.
\item[(iii)] $\Sigma$ is above the hyperplane $H_{(1\,\,-1)}$ and $\Sigma$ is contained in a hyperplane $H_{(i\,\,-i)}$ for some $i>1$.
\end{enumerate}
\end{prop}
Before proving Proposition~\ref{delta finest}, we verify that conditions (ii) and (iii) make sense.
Specifically, in both conditions, we rule out the possibility that $\Sigma$ is neither above nor below $H_{(1\,\,-1)}$.
Note that if $1<j\le n$ and $t=(1\,\,-j)(j\,\,-1)$, then the rank-two subarrangement containing $H_{(1\,\,-1)}$ and $H_t$ has basic hyperplanes $H_{(1\,\,-1)}$ and $H_{t'}$, where $t'=(1\,\,j)(-j\,\,-1)$.
Thus every hyperplane $H_t$, for $t$ as in condition (ii), is cut at $H_{(1\,\,-1)}$, and thus every shard in $H_t$ is either above or below $H_{(1\,\,-1)}$.
Similarly, every shard in $H_t$, for $t$ as in condition (iii), is either above or below $H_{(1\,\,-1)}$.
\begin{proof}
Suppose $\Sigma$ is a shard in a hyperplane that is cut by $H_{(1\,\,-1)}$, so that $\Sigma$ is either above or below $H_{(1\,\,-1)}$.
Then $\Sigma$ is above $H_{(1\,\,-1)}$ if and only if its associated join-irreducible element $\gamma$ has $-1$ in its one-line notation.
This occurs if and only if $-1$ is contained in the signed subset representing $\gamma$.
The shards specified by conditions (i)--(iii) in Proposition~\ref{delta finest} correspond, via join-irreducible elements, to signed subsets $A$ described respectively by the following conditions:
\begin{enumerate}
\item[(i)] $A$ has $m<-1$, $M>1$ and $m\neq-M$.
\item[(ii)] $M=1$ and $m<-1$.
\item[(iii)] $-1\in A$ and $-M=m<-1$.
\end{enumerate}
A priori, condition (ii) says that $(m,M)\in\set{(-1,j),(-j,1)}$ for some $1<j\le n$ and that $-1\not\in A$.
However, the requirement that $-1\not\in A$ implies that $m\neq -1$, so $(m,M)=(-j,1)$, as indicated above.
Let $C$ be the set of signed subsets satisfying (i), (ii) or (iii).
We can describe $C$ more succinctly as the set of signed subsets satisfying both of the following conditions:
\begin{enumerate}
\item[(a)] $m<-1$, and
\item[(b)] If $m=-M$ then $-1\in A$.
\end{enumerate}
We now show that no set $A_1\in C$ arrows a set $A_2\not\in C$.
Let $A_1\in C$ and $A_2\not\in C$.
Suppose $m_2\ge-1$.
Then (q1) and (q6) fail, because each would include the impossible assertion that $M_1<-m_2$.
Also, (q2)--(q4) fail because $m_1<-1$.
Since $M_1>0$, if (q5) holds, then $m_2=-1$ and $M_1=1$.
If in addition (r2) holds, then the fact that $m_2=-1\in A_2\cap(m_1,M_1)$ implies that $-1\in -A_2^c\cap(m_1,M_1)=A_1\cap(m_1,M_1)$.
But having $-1\in A_1$ contradicts the fact that $M_1=1$, and this contradiction rules out the possibility that (q5) and (r2) both hold.
We have ruled out all six possibilities in Theorem~\ref{B shard} in the case where $m_2\ge-1$.
If $m_2<-1$, then since $A_2\not\in C$, we must have $m_2=-M_2$ and $-1\not\in A_2$.
In particular, (q3)--(q6) fail.
As above, (q2) fails because $m_1<-1$.
If (q1) holds, then $m_1=-M_1$, so since $A_1\in C$, we have $-1\in A_1$.
In particular, $M_1>1$, so (r1) fails because $-1\in A_1$ but $-1\not\in A_2$.
Next, we show that any set in $C$, except the sets $\set{-2,-1,3,4,\ldots,n}$ and $\set{-2,3,4,\ldots,n}$, is arrowed to by another set in $C$.
Let $A_2\in C$.
\noindent
\textbf{Case 1:} $m_2=-M_2$.
Then $-1\in A_2$ because $A_2\in C$.
We can rule out the possibility that $m_2=-2$, because in this case, $A_2=\set{-2,-1,3,4,\ldots,n}$.
Let $A_1$ be $\set{-2,-1,3,4,\ldots,n}$, which is in $C$.
Then (q1) and (r1) hold, so $A_1\to A_2$.
\noindent
\textbf{Case 2:} $m_2<-M_2$.
Since $A_2\in C$, $m_2<-1$.
If $m_2=-2$, then $M_2=-1$ and $A_2=\set{-2,3,4,\ldots,n}$, which is ruled out by hypothesis.
Thus $m_2<-2$.
\noindent
\textbf{Subcase 2a:} $[m_2+1,-2]\cap A_2\neq\emptyset$.
Let $A_1=(A_2\cup\set{-m_2})\setminus\set{m_2}$.
Then $A_1\in C$ and (q3), (f1\,:\,$m_1$), and (r1) hold, so $A_1\to A_2$.
\noindent
\textbf{Subcase 2b:} $[m_2+1,-2]\cap A_2=\emptyset$ and $m_2<-M_2-1$.
Let $A_1=(A_2\cup\set{m_2+1,-m_2})\setminus\set{m_2,-m_2-1}$.
Then $A_1\in C$ and (q3), (f2\,:\,$m_1$), and (r1) hold.
\noindent
\textbf{Subcase 2c:} $[m_2+1,-2]\cap A_2=\emptyset$ and $m_2=-M_2-1$.
Since $m_2<-2$, $M_2>1$.
If $|A_2|<n-1$, then let $A_1=A_2\cup\set{M_2}$.
Then $A_1\in C$ and (q4), (f2\,:\,$M_1$), and (r1) hold.
If $|A_2|=n-1$ and $-1\in A_2$, then let $A_1=(A_2\cup\set{-M_2,-m_2})\setminus\set{m_2}$.
Then $m_1=-M_2<-1$, and $M_1=M_2$.
Since $-1\in A_1$, $A_1\in C$.
Also, (q3), (f3\,:\,$m_1$), and (r1) hold, so $A_1\to A_2$.
If $|A_2|=n-1$ and $-1\not\in A_2$, then $A_2=(\set{1,2,\ldots,n}\cup\set{m_2})\setminus\set{-m_2-1,-m_2}$, recalling that $M_2=-m_2-1$.
Let $A_1=(A_2\cup\set{M_2})\setminus\set{M_2-1}$.
Then $A_1\in C$ and (q4), (f1\,:\,$M_1$), and (r1) hold.
\noindent
\textbf{Case 3:} $m_2>-M_2$.
\noindent
\textbf{Subcase 3a:} $m_2>-M_2+1$.
Let $A_1=(A_2\cup\set{M_2})\setminus\set{M_2-1}$.
The element $M_2-1$ may or may not be an element of $A_2$, but since $-M_2+1<m_2$, we know that $-M_2+1\not\in A_2$.
Thus $M_1=M_2-1$, and $m_1=m_2$.
In particular, $A_1\in C$ and, furthermore, (q4) and (r1) hold.
Also, either (f1\,:\,$M_1$) or (f2\,:\,$M_1$) holds.
\noindent
\textbf{Subcase 3b:} $m_2=-M_2+1$.
If either $|A_2|<n-1$ or $-1\in A_2$, then let $A_1=A_2\cup\set{M_2}$.
Then $A_1\in C$ and (q4) and (r1) hold.
If $|A_2|<n-1$, then (f2\,:\,$M_1$) holds, and otherwise (f3\,:\,$M_1$) holds.
Finally, if $|A_2|=n-1$ and $-1\not\in A_2$, then let $A_1=(-A_2^c\cap(m_2,-m_2))\cup\set{m_2}\cup\set{M_2,M_2+1,\ldots,n}$.
Then $|A_1|=n$, $m_1=m_2$, and $M_1=-m_2$.
Conditions (q5), (f3\,:\,$-m_1$) and (r2) hold and $A_1\in C$.
\end{proof}
Combining Propositions~\ref{delta cover} and~\ref{delta finest}, we see that the fibers of $\eta_\delta$ are a lattice congruence $\Theta_\delta$ on $B_n$ satisfying the second assertion of Theorem~\ref{delta thm}.
We now show that $\eta_\delta$ is order-preserving, that $\eta_\delta$ restricts to a bijection from bottom elements of $\Theta_\delta$-classes to permutations in $S_{n+1}$ and that the inverse of the restriction is order-preserving.
To see that $\eta_\delta$ is order-preserving, suppose $\pi\lessdot\tau$ in $B_n$.
If $1$ is an entry in the one-line notation of both $\pi$ and $\tau$, then $\eta_\delta$ coincides with $\eta_\sigma$ on $\pi$ and $\tau$, so $\eta_\delta$ preserves the order relation $\pi\le\tau$.
Similarly, if $-1$ is an entry in the one-line notation of both $\pi$ and $\tau$, then $\eta_\delta$ preserves the order relation $\pi\le\tau$ because $\eta_\nu$ is order-preserving.
Finally, if $1$ is an entry in the one-line notation of $\pi$ and $-1$ is an entry in the one-line notation of $\tau$, then $\eta_\delta(\pi)$ and $\eta_\delta(\tau)$ both have $1$ and $2$ adjacent, but in different orders.
In all other positions, the two permutations agree, and we see that $\eta_\delta(\pi)\lessdot\eta_\delta(\tau)$.
We have verified that $\eta_\delta$ is order-preserving.
Proposition~\ref{delta cover} leads to a characterization of the bottom elements of $\Theta_\delta$-classes.
A signed permutation $\pi$ whose one-line notation contains $1$ is a bottom element if and only if its one-line notation consists of a (possibly empty) sequence of negative entries followed by a sequence of positive entries (including $1$).
A signed permutation with $-1$ in its one-line notation is a bottom element if and only if it consists of a (possibly empty) sequence of positive entries, followed by a sequence of negative entries beginning with $-1$, and finally a (possibly empty) sequence of positive entries.
Notice that the bottom elements of $\Theta_\delta$ that have $1$ in their one-line notation map to permutations with $1$ preceding $2$, and the bottom elements of $\Theta_\delta$ that have $-1$ in their one-line notation map to permutations with $2$ preceding~$1$.
The inverse of the restriction of $\eta_\delta$ to bottom elements sends a permutation $\tau$ with $\tau_i=1$ and $\tau_j=2$, for $i<j$, to the signed permutation
\[(-\tau_{i-1}+1)(-\tau_{i-2}+1)\cdots(-\tau_1+1)(\tau_{i+1}-1)(\tau_{i+2}-1)\cdots(\tau_{n+1}-1).\]
The inverse of the restriction sends a permutation $\tau_1\cdots\tau_{n+1}$ with $\tau_i=2$ and $\tau_j=1$, for $i<j$, to the signed permutation
\[(\tau_{i+1}-1)\cdots(\tau_{j-1}-1)\,(-1)\,(-\tau_{i-1}+1)\cdots(-\tau_1+1)(\tau_{j+1}-1)\cdots(\tau_{n+1}-1).\]
It is now easily verified that the inverse is order-preserving.
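The two displayed formulas can also be sketched directly in Python ($0$-based indexing; the function name is ours):

```python
def bottom_of_fiber(tau):
    """Sketch of the inverse of eta_delta restricted to bottom elements:
    sends a permutation tau of 1..n+1 to the bottom element of its
    Theta_delta-class, following the two displayed formulas."""
    i1, i2 = tau.index(1), tau.index(2)  # 0-based positions of 1 and 2
    if i1 < i2:  # 1 precedes 2
        return ([-t + 1 for t in reversed(tau[:i1])]
                + [t - 1 for t in tau[i1 + 1:]])
    else:        # 2 precedes 1
        i, j = i2, i1  # 2 at position i, 1 at position j, with i < j
        return ([t - 1 for t in tau[i + 1:j]] + [-1]
                + [-t + 1 for t in reversed(tau[:i])]
                + [t - 1 for t in tau[j + 1:]])
```

For example, `bottom_of_fiber([3, 2, 4, 1])` returns `[3, -1, -2]`, and one checks that $\eta_\delta$ sends $3(-1)(-2)$ back to $3241$.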
The third assertion of Theorem~\ref{delta thm} follows as in the case of Theorem~\ref{simion thm}, and we have completed the proof of Theorem~\ref{delta thm}.
Now let $\eta_\epsilon:B_n\to S_{n+1}$ send $\pi\in B_n$ to the permutation obtained as follows:
If the one-line notation for $\pi$ contains the entry $1$, then extract the subsequence of $(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\pi_1\cdots\pi_{n-1}\pi_n$ consisting of values greater than or equal to $-1$, change $-1$ to $0$, then add $1$ to each entry.
If the one-line notation for $\pi$ contains the entry $-1$, then construct a sequence
\[(-\pi_n)(-\pi_{n-1})\cdots(-\pi_1)\,0\,\pi_1\cdots\pi_{n-1}\pi_n,\]
extract the subsequence consisting of nonnegative entries, and add $1$ to each entry.
Thus $\eta_\epsilon$ is a hybrid of $\eta_\sigma$ and $\eta_\nu$ in exactly the opposite way that $\eta_\delta$ is a hybrid: $\eta_\epsilon(\pi)=\eta_\nu(\pi)$ if the one-line notation of $\pi$ contains $1$ and $\eta_\epsilon(\pi)=\eta_\sigma(\pi)$ if the one-line notation of $\pi$ contains $-1$.
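In the same spirit as the earlier sketch of $\eta_\delta$, the map $\eta_\epsilon$ reads as follows in Python (the function name is ours):

```python
def eta_epsilon(pi):
    """Sketch of eta_epsilon: B_n -> S_{n+1}: the eta_nu-style rule when 1
    occurs in pi, and the eta_sigma-style rule when -1 occurs in pi."""
    rev_neg = [-x for x in reversed(pi)]  # (-pi_n)(-pi_{n-1})...(-pi_1)
    if -1 in pi:
        # eta_sigma-style: nonnegative entries of (-pi_n)...(-pi_1) 0 pi_1...pi_n, add 1
        seq = rev_neg + [0] + list(pi)
        return [x + 1 for x in seq if x >= 0]
    else:  # 1 is an entry of pi
        # eta_nu-style: entries >= -1 of (-pi_n)...(-pi_1) pi_1...pi_n; -1 -> 0; add 1
        seq = rev_neg + list(pi)
        return [x + 1 if x != -1 else 1 for x in seq if x >= -1]
```

For example, `eta_epsilon([2, -3, 1])` returns `[1, 4, 3, 2]`, whereas $\eta_\delta$ sends the same signed permutation to $4132$.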
\begin{theorem}\label{ep thm}
The map $\eta_\epsilon$ is a surjective lattice homomorphism from $B_n$ to $S_{n+1}$.
Its fibers constitute the congruence generated by $s_0s_1$ and $s_1s_0$.
Furthermore, $\eta_\epsilon$ is the unique surjective lattice homomorphism from $B_n$ to $S_{n+1}$ whose restriction to $(B_n)_{\set{s_0,s_1}}$ agrees with $\eta_\epsilon$.
\end{theorem}
Fortunately, to prove Theorem~\ref{ep thm}, we can appeal to Theorem~\ref{delta thm} and Proposition~\ref{dual cong}, to avoid any more tedious arguments about shard arrows for type~$B_n$.
Let $\operatorname{neg}:B_n\to B_n$ be the map that sends a signed permutation $\pi_1\cdots\pi_n$ to $(-\pi_1)\cdots(-\pi_n)$.
Let $\operatorname{rev}:S_{n+1}\to S_{n+1}$ be the map sending a permutation $\tau_1\cdots\tau_{n+1}$ to $\tau_{n+1}\cdots\tau_1$.
The following proposition is easily verified.
\begin{prop}\label{delta ep dual}
The map $\eta_\epsilon$ sends a signed permutation $\pi$ to $\operatorname{rev}(\eta_\delta(\operatorname{neg}(\pi)))$.
\end{prop}
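For instance, the proposition can be checked exhaustively over $B_3$ with a short Python script (the helper `_eta` packages the common two-case recipe, with a flag selecting which case uses the $\eta_\sigma$-style rule; all names are ours):

```python
from itertools import permutations, product

def _eta(pi, use_sigma):
    """eta_sigma-style rule if use_sigma, else eta_nu-style rule."""
    rev_neg = [-x for x in reversed(pi)]
    if use_sigma:
        seq = rev_neg + [0] + list(pi)
        return [x + 1 for x in seq if x >= 0]
    seq = rev_neg + list(pi)
    return [x + 1 if x != -1 else 1 for x in seq if x >= -1]

def eta_delta(pi):   return _eta(pi, 1 in pi)
def eta_epsilon(pi): return _eta(pi, -1 in pi)

def neg(pi):  return [-x for x in pi]
def rev(tau): return list(reversed(tau))

# check eta_epsilon = rev . eta_delta . neg on all 48 elements of B_3
for p in permutations([1, 2, 3]):
    for signs in product([1, -1], repeat=3):
        pi = [s * x for s, x in zip(signs, p)]
        assert eta_epsilon(pi) == rev(eta_delta(neg(pi)))
```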
The maps $\operatorname{rev}$ and $\operatorname{neg}$ both send a group element $w$ to $ww_0$, where $w_0$ is the longest element of the corresponding Coxeter group (the element $(n+1)n\cdots1$ in $S_{n+1}$ or the element $(-1)\cdots(-n)$ in $B_n$).
The map $w\mapsto ww_0$ is an anti-automorphism of the weak order on any finite Coxeter group.
Thus the first assertion of Theorem~\ref{ep thm} follows from the first assertion of Theorem~\ref{delta thm}.
Furthermore, by Proposition~\ref{dual cong}, the second assertion of Theorem~\ref{ep thm} follows from the second assertion of Theorem~\ref{delta thm}, and the third assertion follows as usual.
\section{Homomorphisms from exceptional types}\label{exceptional sec}
In this section, we treat the remaining cases, where $(W,W')$ is $(F_4,A_4)$, $(H_3,A_3)$, $(H_3,B_3)$, $(H_4,A_4)$, or $(H_4,B_4)$.
In each case, the stated theorem proves
the first assertion of Theorem~\ref{existence}, while inspection of the proof establishes the second assertion of Theorem~\ref{existence} and completes the proof of Theorem~\ref{diagram uniqueness}.
\subsection*{Homomorphisms from type $F_4$}
Let $W$ be a Coxeter group of type $F_4$, with $S=\set{p,q,r,s}$, $m(p,q)=3$, $m(q,r)=4$ and $m(r,s)=3$.
Figure~\ref{weakF4diagram} shows the order ideal in the weak order on $W$ that encodes these values of $m$.
This is comparable to Figure~\ref{weakB3diagram}.a, except that the square intervals that indicate where $m$ is $2$ (e.g. $m(p,r)=2$) are omitted.
\begin{figure}
\caption{The Coxeter diagram of $F_4$ encoded as an order ideal}
\label{weakF4diagram}
\end{figure}
For $W'$ of type $A_4$ (i.e. $W'=S_5$), we identify $p,q,r,s$ with $s_1,s_2,s_3,s_4$ (in that order) to discuss homomorphisms fixing $S$ pointwise.
\begin{theorem}\label{F4 thm}
There are exactly four surjective lattice homomorphisms from $F_4$ to $A_4$ that fix $S$ pointwise:
For each choice of $\gamma_1\in\set{qr,qrq}$ and $\gamma_2\in\set{rq,rqr}$, there exists a unique such homomorphism whose associated congruence contracts $\gamma_1$ and $\gamma_2$.
\end{theorem}
\begin{proof}
Let $\eta:F_4\to A_4$ be a surjective homomorphism fixing $S$ pointwise.
By Proposition~\ref{diagram facts} and the discussion in Section~\ref{dihedral sec}, the congruence $\Theta$ on $F_4$ defined by the fibers of $\eta$ contracts exactly one of $qr$ and $qrq$ and exactly one of $rq$ and $rqr$.
In each of the four cases, we verify that there is a unique choice of $\eta$.
\noindent
\textbf{Case 1:}
\textit{$\Theta$ contracts $qr$ and $rq$.}
Computer calculations show that the quotient of $W$ modulo the congruence generated by $qr$ and $rq$ is isomorphic to the weak order on $A_4$.
Thus a unique $\eta$ exists in this case.
\noindent
\textbf{Case 2:}
\textit{$\Theta$ contracts $qrq$ and $rqr$.}
Computer calculations as in Case 1 (or combining Case 1 with Proposition~\ref{dual cong}) show that a unique $\eta$ exists.
\noindent
\textbf{Case 3:}
\textit{$\Theta$ contracts $qrq$ and $rq$.}
Proposition~\ref{diagram facts} implies that the restriction to $W_{\set{q,r,s}}$ is a surjective lattice homomorphism.
Thus Theorem~\ref{nonhom thm} implies that the restriction of $\eta$ to $W_{\set{q,r,s}}$ agrees with $\eta_\nu$.
In particular, in addition to the join-irreducible elements of $W_{\set{q,r,s}}$ contracted by the congruence generated by $qrq$ and $rq$, the congruence $\Theta$ contracts $rqrs$ and $srqrs$.
Computer calculations show that the quotient of $W$ modulo the congruence generated by $qrq$, $rq$, $rqrs$, and $srqrs$ is isomorphic to the weak order on $A_4$.
Thus a unique $\eta$ exists.
\noindent
\textbf{Case 4:}
\textit{$\Theta$ contracts $qr$ and $rqr$.}
By an argument/calculation analogous to Case~3 (or by Case 3 and the diagram automorphism of $F_4$), a unique $\eta$ exists.
The congruence $\Theta$ is generated by $qr$, $rqr$, $qrqp$, and $pqrqp$.
\end{proof}
We have dealt with the final crystallographic case, and thus proved the following existence and uniqueness theorem.
\begin{theorem}\label{existence uniqueness crys}
Let $(W,S)$ and $(W',S)$ be finite crystallographic Coxeter systems with $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$.
For each $r,s\in S$, fix a surjective homomorphism $\eta_\set{r,s}$ from $W_{\set{r,s}}$ to $W'_{\set{r,s}}$ with $\eta_\set{r,s}(r)=r$ and $\eta_\set{r,s}(s)=s$.
Then there is exactly one compressive homomorphism $\eta:W\to W'$ such that the restriction of $\eta$ to $W_{\set{r,s}}$ equals $\eta_\set{r,s}$ for each pair $r,s\in S$.
\end{theorem}
\subsection*{Homomorphisms from type $H_3$}
Now, let $W$ be a Coxeter group of type $H_3$, with $S=\set{q,r,s}$, $m(q,r)=5$ and $m(r,s)=3$.
Identify $q,r,s$ with the generators $s_1,s_2,s_3$ of $A_3=S_4$.
\begin{theorem}\label{H3 A3 thm}
There are exactly nine surjective lattice homomorphisms from $H_3$ to $A_3$ that fix $S$ pointwise:
For each choice of distinct $\gamma_1,\gamma_2\in\set{qr,qrq,qrqr}$ and distinct $\gamma_3,\gamma_4\in\set{rq,rqr,rqrq}$, there exists a unique such homomorphism whose associated congruence contracts $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$.
\end{theorem}
\begin{proof}
Suppose $\eta:H_3\to A_3$ is a surjective homomorphism, fixing $S$ pointwise, whose fibers define a congruence $\Theta$.
As in the proof of Theorem~\ref{F4 thm}, we consider the nine cases separately.
In each of Cases 1--6 below, computer calculations show that the quotient of $H_3$ modulo the congruence generated by the given join-irreducible elements is isomorphic to the weak order on $A_3$.
Thus in each of these cases, a unique $\eta$ exists.
\noindent
\textbf{Case 1:}
\textit{$\Theta$ contracts $qr$, $qrq$, $rq$, and $rqr$.}
\noindent
\textbf{Case 2:}
\textit{$\Theta$ contracts $qr$, $qrq$, $rq$, and $rqrq$.}
\noindent
\textbf{Case 3:}
\textit{$\Theta$ contracts $qr$, $qrq$, $rqr$, and $rqrq$.}
\noindent
\textbf{Case 4:}
\textit{$\Theta$ contracts $qr$, $qrqr$, $rq$, and $rqrq$.}
\noindent
\textbf{Case 5:}
\textit{$\Theta$ contracts $qr$, $qrqr$, $rqr$, and $rqrq$.}
\noindent
\textbf{Case 6:}
\textit{$\Theta$ contracts $qrq$, $qrqr$, $rqr$, and $rqrq$.}
\noindent
\textbf{Case 7:}
\textit{$\Theta$ contracts $qrq$, $qrqr$, $rq$, and $rqr$.}
In this case, the quotient of $H_3$ modulo the congruence $\Theta'$ generated by the given join-irreducible elements is a poset with 28 elements.
Inspection of $\operatorname{Irr}(\operatorname{Con}(H_3/\Theta'))$ reveals a unique way to contract additional join-irreducible elements so as to obtain $\operatorname{Irr}(\operatorname{Con}(A_3))$.
(Cf. Figure~\ref{IrrConB3 squashed fig} and the surrounding discussion in Section~\ref{nonhom sec}.)
The additional join-irreducible elements to be contracted are $rqrqsrq$ and $srqrqsrq$.
Computer calculations confirm that the quotient of $H_3$ modulo the congruence generated by $qrq$, $qrqr$, $rq$, $rqr$, $rqrqsrq$, and $srqrqsrq$ is indeed isomorphic to $A_3$, so there exists a unique $\eta$ in this case.
\noindent
\textbf{Case 8:}
\textit{$\Theta$ contracts $qr$, $qrqr$, $rq$, and $rqr$.}
Computer calculations show that the quotient of $H_3$ modulo the congruence generated by $qr$, $qrqr$, $rq$, $rqr$,
$rqrqsrqr$, and $srqrqsrqr$ is isomorphic to $A_3$.
As in Case 7, inspection of the poset of irreducibles of the congruence lattice of the quotient reveals that no other possibilities exist.
Thus $\eta$ exists and is unique in this case.
\noindent
\textbf{Case 9:}
\textit{$\Theta$ contracts $qrq$, $qrqr$, $rq$, and $rqrq$.}
The existence and uniqueness of $\eta$ follows from Case 7 and Proposition~\ref{dual cong}.
Alternately, the calculation can be made directly as in Cases 7--8, realizing $\Theta$ as the congruence generated by $qrq$, $qrqr$, $rq$, $rqrq$, $rqrs$, and $srqrs$.
\end{proof}
\begin{theorem}\label{H3 B3 thm}
There are exactly eight surjective lattice homomorphisms from $H_3$ to $B_3$:
There is no surjective lattice homomorphism $\eta:H_3\to B_3$ whose associated congruence contracts $qr$ and $rqrq$.
For every other choice of $\gamma_1\in\set{qr,qrq,qrqr}$ and $\gamma_2\in\set{rq,rqr,rqrq}$, there exists a unique such homomorphism whose associated congruence contracts $\gamma_1$ and $\gamma_2$.
\end{theorem}
\begin{proof}
Suppose $\eta:H_3\to B_3$ is a surjective homomorphism whose fibers define a congruence $\Theta$.
We again proceed by cases.
\noindent
\textbf{Case 1:}
\textit{$\Theta$ contracts $qrq$ and $rqr$.}
Computer calculations show that the quotient of $H_3$ modulo the congruence generated by $qrq$ and $rqr$ is isomorphic to the weak order on $B_3$.
Thus a unique $\eta$ exists.
\noindent
Case 1 is the only case in which $\Theta$ is homogeneous of degree $2$.
Cases 2--8 proceed just like Cases 7--9 in the proof of Theorem~\ref{H3 A3 thm}, except that ``inspection of the poset of irreducibles of the congruence lattice of the quotient'' is automated.
In each case, there exists a unique set of additional join-irreducibles required to generate $\Theta$, and these generators are given in parentheses.
Some of these cases can also be obtained from each other by Proposition~\ref{dual cong}.
\noindent
\textbf{Case 2:}
\textit{$\Theta$ contracts $qr$ and $rq$}
($qr$, $rq$, and $qrqsrqrs$).
\noindent
\textbf{Case 3:}
\textit{$\Theta$ contracts $qr$ and $rqr$}
($qr$, $rqr$, $qrqsrqrs$, $rqrqsrqrs$, and $srqrqsrqrs$).
\noindent
\textbf{Case 4:}
\textit{$\Theta$ contracts $qrq$ and $rq$}
($qrq$, $rq$, $qsrqrs$, $rqrqsr$, $srqrqsr$, $rqrs$, $qrqrs$, and $srqrs$).
\noindent
\textbf{Case 5:}
\textit{$\Theta$ contracts $qrq$ and $rqrq$}
($qrq$, $rqrq$, $qsrqrs$, $qrqrs$ and $rqsrqrs$).
\noindent
\textbf{Case 6:}
\textit{$\Theta$ contracts $qrqr$ and $rq$}
($qrqr$, $rq$, $qsrqrqsrqr$, $rqrqsrqr$, $qrqrqsrqr$, $srqrqsrqr$, $rqrqsr$, $srqrqsr$, $rqrs$, and $srqrs$).
\noindent
\textbf{Case 7:}
\textit{$\Theta$ contracts $qrqr$ and $rqr$}
($qrqr$, $rqr$, $srqrqsrqrs$, $qsrqrqsrqr$, $rqrqsrqr$, $qrqrqsrqr$, $srqrqsrqr$, and $rqrqsrqrs$).
\noindent
\textbf{Case 8:}
\textit{$\Theta$ contracts $qrqr$ and $rqrq$}
($qrqr$, $rqrq$, and $rqsrqrs$).
\noindent
\textbf{Case 9:}
\textit{$\Theta$ contracts $qr$ and $rqrq$.}
A computation shows that the quotient of $H_3$ modulo the congruence generated by $qr$ and $rqrq$ is not isomorphic to $B_3$.
Furthermore, this quotient has 48 elements, the same number as $B_3$.
Thus it is impossible to obtain $B_3$ by contracting additional join-irreducible elements.
\end{proof}
\subsection*{Homomorphisms from type $H_4$}
Finally, let $W$ be a Coxeter group of type $H_4$, with $S=\set{q,r,s,t}$, $m(q,r)=5$, $m(r,s)=3$ and $m(s,t)=3$.
The classification of surjective homomorphisms from $H_4$ to $A_4$ and from $H_4$ to $B_4$ exactly follows the classification of surjective homomorphisms from $H_3$ to $A_3$ and from $H_3$ to $B_3$, as we now explain.
Let $\eta$ be any surjective homomorphism from $H_4$ to $A_4$ or $B_4$ with associated congruence $\Theta$.
By Proposition~\ref{diagram facts}, the restriction $\eta'$ of $\eta$ to the standard parabolic subgroup $W_{\set{q,r,s}}$ (of type $H_3$) is a homomorphism from $H_3$ to $A_3$ or $B_3$.
Thus $\eta'$ is described by Theorem~\ref{H3 A3 thm} or~\ref{H3 B3 thm}.
The congruence $\Theta'$ associated to $\eta'$ is the restriction of $\Theta$ to $W_{\set{q,r,s}}$.
The proofs of Theorems~\ref{H3 A3 thm} and~\ref{H3 B3 thm} determine, for each surjective homomorphism, a set $\Gamma$ of join-irreducibles that generate the associated congruence.
Since $\Theta'$ agrees with $\Theta$ on $W_{\set{q,r,s}}$, the congruence associated to $\eta$ must also contract the join-irreducibles in $\Gamma$.
In each case, computer calculations show that the quotient of $H_4$ modulo the congruence generated by $\Gamma$ is isomorphic to $A_4$ or $B_4$.
This shows that for each surjective homomorphism from $H_3$, there is a unique surjective homomorphism from $H_4$.
Furthermore, Theorem~\ref{H3 B3 thm} and Proposition~\ref{diagram facts} imply that there is no surjective lattice homomorphism from $H_4$ to $B_4$ whose associated congruence contracts $qr$ and $rqrq$.
Thus we have the following theorems:
\begin{theorem}\label{H4 A4 thm}
There are exactly nine surjective lattice homomorphisms from $H_4$ to $A_4$ that fix $S$ pointwise:
For each choice of distinct $\gamma_1,\gamma_2\in\set{qr,qrq,qrqr}$ and distinct $\gamma_3,\gamma_4\in\set{rq,rqr,rqrq}$, there exists a unique such homomorphism whose associated congruence contracts $\gamma_1$, $\gamma_2$, $\gamma_3$ and $\gamma_4$.
\end{theorem}
\begin{theorem}\label{H4 B4 thm}
There are exactly eight surjective lattice homomorphisms from $H_4$ to $B_4$:
There is no surjective lattice homomorphism $\eta:H_4\to B_4$ whose associated congruence contracts $qr$ and $rqrq$.
For every other choice of $\gamma_1\in\set{qr,qrq,qrqr}$ and $\gamma_2\in\set{rq,rqr,rqrq}$, there exists a unique such homomorphism whose associated congruence contracts $\gamma_1$ and $\gamma_2$.
\end{theorem}
\section{Lattice homomorphisms between Cambrian lattices}\label{camb sec}
In this section, we show how the results of this paper on lattice homomorphisms between weak orders imply similar results on lattice homomorphisms between Cambrian lattices.
A \newword{Coxeter element} of a Coxeter group $W$ is an element $c$ that can be written in the form $s_1s_2\cdots s_n$, where $s_1,s_2,\ldots,s_n$ are the elements of $S$.
There may be several total orders $s_1,s_2,\ldots,s_n$ on $S$ whose product is $c$.
These differ only by commutations of commuting elements of $S$.
In particular, for every edge $r$---$s$ in the Coxeter diagram for $W$ (i.e.\ for every pair $r,s$ with $m(r,s)>2$), either $r$ precedes $s$ in every reduced word for $c$ or $s$ precedes $r$ in every reduced word for $c$.
We use the shorthand ``$r$ precedes $s$ in $c$'' or ``$s$ precedes $r$ in $c$'' for these possibilities.
The initial data defining a Cambrian lattice are a finite Coxeter group $W$ and a Coxeter element $c$.
The \newword{Cambrian congruence} is the homogeneous degree-$2$ congruence $\Theta_c$ on $W$ generated by the join-irreducible elements $\operatorname{alt}_k(s,r)$ for every pair $r,s\in S$ such that $r$ precedes $s$ in $c$ and every $k$ from $2$ to $m(r,s)-1$.
Here, as in Section~\ref{erase edge sec}, the notation $\operatorname{alt}_k(s,r)$ stands for the word with $k$ letters, starting with~$s$ and then alternating $r$, $s$, $r$, etc.
Equivalently, $\Theta_c$ is the finest congruence with $s\equiv\operatorname{alt}_{m(r,s)-1}(s,r)$ for every pair $r,s\in S$ such that $r$ precedes $s$ in $c$.
The quotient $W/\Theta_c$ is the \newword{Cambrian lattice}.
The Cambrian lattice is isomorphic to the subposet of $W$ induced by the bottom elements of $\Theta_c$-classes.
To distinguish between the two objects, we write $W/\Theta_c$ for the Cambrian lattice constructed as a quotient of $W$ and write $\operatorname{Camb}(W,c)$ for the Cambrian lattice constructed as the subposet of $W$ induced by bottom elements of $\Theta_c$-classes.
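In small cases, the generating set of $\Theta_c$ is easy to list mechanically. The following Python sketch (function names and the dictionary encoding of $m$ are ours) produces the generators $\operatorname{alt}_k(s,r)$ described above:

```python
def alt(k, s, r):
    """The word alt_k(s, r): k letters, starting with s and then
    alternating r, s, r, ..."""
    return [s if j % 2 == 0 else r for j in range(k)]

def cambrian_generators(m, c_order):
    """Join-irreducible generators alt_k(s, r), for 2 <= k <= m(r,s) - 1,
    of the Cambrian congruence Theta_c.  Here c_order lists S in an order
    whose product is the Coxeter element c, so that r precedes s in c
    whenever r comes before s in c_order and m(r,s) > 2."""
    gens = []
    for a, r in enumerate(c_order):
        for s in c_order[a + 1:]:
            gens += [alt(k, s, r) for k in range(2, m[frozenset((r, s))])]
    return gens
```

For $W=B_3$ with $c=s_0s_1s_2$ (so $m(s_0,s_1)=4$, $m(s_1,s_2)=3$, $m(s_0,s_2)=2$), this returns the three generators $s_1s_0$, $s_1s_0s_1$ and $s_2s_1$ appearing in Example~\ref{camb hom ex}.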
The key result of this section is the following theorem, which will allow us to completely classify surjective homomorphisms between Cambrian lattices, using the classification results on surjective homomorphisms between weak orders.
(These classification results are Theorems~\ref{camb para factor}, \ref{camb exist unique}, and~\ref{camb diagram}.)
\begin{theorem}\label{key camb}
Let $\eta:W\to W'$ be a surjective lattice homomorphism whose associated congruence $\Psi$ is generated by a set $\Gamma$ of join-irreducible elements.
Let $c=s_1s_2\cdots s_n$ be a Coxeter element of $W$ and let $c'=\eta(s_1)\eta(s_2)\cdots\eta(s_n)\in W'$.
Then the restriction of $\eta$ is a surjective lattice homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$.
The associated congruence is generated by $\Gamma\cap\operatorname{Camb}(W,c)$.
\end{theorem}
To understand Theorem~\ref{key camb} correctly, one should keep in mind that $\eta$ is a lattice homomorphism, not a group homomorphism, so that $c'$ need not be equal to $\eta(c)$.
(For example, in the case $n=2$ of Theorem~\ref{simion thm}, the lattice homomorphism $\eta_\sigma$ sends the Coxeter element $s_0s_1$ to $s_1$.)
The element $c'$ is a Coxeter element of $W'$ in light of Proposition~\ref{basic facts} and Theorem~\ref{para factor}.
Before assembling the tools necessary to prove Theorem~\ref{key camb}, we illustrate the theorem by extending Example~\ref{miraculous}.
\begin{example}\label{camb hom ex}
Recall that Figure~\ref{B3toA3} indicates a congruence $\Psi$ on $B_3$ such that the quotient $B_3/\Psi$ is isomorphic to $S_4$.
As mentioned just after Theorem~\ref{delta thm}, the corresponding surjective homomorphism is $\eta_\delta:B_3\to S_4$.
The congruence $\Psi$ is generated by $\Gamma=\set{s_0s_1s_0,s_1s_0s_1}$.
Let $c$ be the Coxeter element $s_0s_1s_2$ of $B_3$.
The congruence $\Theta_c$ on $B_3$ is generated by $\set{s_1s_0,s_1s_0s_1,s_2s_1}$, as illustrated in Figure~\ref{cambriandiagram}.a.
\begin{figure}
\caption{a: Edge contractions that generate the $s_0s_1s_2$-Cambrian congruence on $B_3$.
b: Edge contractions that generate the $s_1s_2s_3$-Cambrian congruence on $S_4$.}
\label{cambriandiagram}
\end{figure}
Let $c'$ be the Coxeter element $s_1s_2s_3$ of $S_4$.
The Cambrian congruence $\Theta_{c'}$ on $S_4$ is generated by the set $\set{s_2s_1,s_3s_2}$, as illustrated in Figure~\ref{cambriandiagram}.b.
The set $\set{s_2s_1,s_3s_2}$ is obtained by applying $\eta_\delta$ to each element of $\set{s_1s_0,s_1s_0s_1,s_2s_1}$ that is not contracted by $\Psi$.
Figure~\ref{camb B3toA3 cong} shows the Cambrian congruence $\Theta_c$ on $B_3$ and the Cambrian congruence $\Theta_{c'}$ on $S_4$.
\begin{figure}
\caption{a: The Cambrian congruence $\Theta_c$ on $B_3$. b: The Cambrian congruence $\Theta_{c'}$ on $S_4$.}
\label{camb B3toA3 cong}
\end{figure}
Here, $S_4$ is represented, as in Figure~\ref{B3toA3}.b, as the subposet of $B_3$ induced by bottom elements of the congruence $\Psi$.
Figure~\ref{camb B3toA3} shows the Cambrian lattice $\operatorname{Camb}(B_3,c)$ and the Cambrian lattice $\operatorname{Camb}(S_4,c')$.
\begin{figure}
\caption{a: The Cambrian lattice $\operatorname{Camb}(B_3,c)$. b: The Cambrian lattice $\operatorname{Camb}(S_4,c')$.}
\label{camb B3toA3}
\end{figure}
The shaded edges of $\operatorname{Camb}(B_3,c)$ indicate the congruence on $\operatorname{Camb}(B_3,c)$ whose quotient is $\operatorname{Camb}(S_4,c')$.
This congruence is generated by the join-irreducible element $s_0s_1s_0$, because $\Gamma\cap\operatorname{Camb}(B_3,c)=\set{s_0s_1s_0}$.
\end{example}
The proof of Theorem~\ref{key camb} depends on general lattice-theoretic results, but also on special properties of the subposet $\operatorname{Camb}(W,c)$ of $W$.
To explain these properties, we recall from~\cite{sort_camb} the characterization of bottom elements of $\Theta_c$-classes as the $c$-sortable elements of $W$.
To define $c$-sortable elements, we fix a reduced word $s_1\cdots s_n$ for $c$ and construct, for any element $w\in W$, a canonical reduced word for $w$.
Write a half-infinite word $c^\infty=s_1\cdots s_n|s_1\cdots s_n|s_1\cdots s_n|\ldots$,
where the symbols ``$\,|\,$'' are dividers that mark the locations where the sequence $s_1\cdots s_n$ begins again.
Out of all subwords of $c^\infty$ that are reduced words for $w$, the \newword{$(s_1\cdots s_n)$-sorting word} for $w$ is the one that is lexicographically leftmost, as a sequence of positions in $c^\infty$.
For convenience, when we write $(s_1\cdots s_n)$-sorting words, we include the dividers that occur between letters in the lexicographically leftmost reduced subword.
Thus for example, if $W=S_4$, the $(s_1s_2s_3)$-sorting word for the longest element is $s_1s_2s_3|s_1s_2|s_1$, and the $(s_2s_1s_3)$-sorting word for the longest element is $s_2s_1s_3|s_2s_1s_3$.
Each $(s_1\cdots s_n)$-sorting word defines a sequence of subsets of $S$, by taking the elements between successive dividers, and this sequence depends only on $c$, not on $s_1\cdots s_n$.
An element $w$ is \newword{$c$-sortable} if this sequence of subsets is weakly decreasing.
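As a concrete illustration (ours, not part of the text), these definitions are easy to implement in the symmetric group $S_n$, where left multiplication by $s_i$ swaps the values $i$ and $i+1$ in one-line notation: scanning $c^\infty$ and taking each letter that is a left descent of what remains of $w$ produces the sorting word. The function names below are our own.

```python
# Sketch: (s_1...s_n)-sorting words and c-sortability in the symmetric group.
# Permutations are tuples in one-line notation; generator i is the adjacent
# transposition s_i, acting on the left by swapping the values i and i+1.

def left_descents(w):
    """Indices i with l(s_i w) < l(w), i.e. i+1 occurs before i in w."""
    pos = {v: p for p, v in enumerate(w)}
    return {i for i in range(1, len(w)) if pos[i] > pos[i + 1]}

def apply_s(w, i):
    """Left-multiply w by s_i (swap the values i and i+1)."""
    swap = {i: i + 1, i + 1: i}
    return tuple(swap.get(v, v) for v in w)

def sorting_word(c_word, w):
    """Blocks of the (c_word)-sorting word of w, one block per divider.

    Greedy rule: scan c^infinity and take a letter exactly when it is a
    left descent of the part of w not yet accounted for."""
    blocks = []
    while any(v != p + 1 for p, v in enumerate(w)):  # while w != identity
        block = []
        for i in c_word:
            if i in left_descents(w):
                block.append(i)
                w = apply_s(w, i)
        blocks.append(block)
    return blocks

def is_c_sortable(c_word, w):
    """w is c-sortable iff the blocks are weakly decreasing as sets."""
    sets = [set(b) for b in sorting_word(c_word, w)]
    return all(sets[k] >= sets[k + 1] for k in range(len(sets) - 1))

w0 = (4, 3, 2, 1)                    # longest element of S_4
print(sorting_word((1, 2, 3), w0))   # [[1, 2, 3], [1, 2], [1]]
print(sorting_word((2, 1, 3), w0))   # [[2, 1, 3], [2, 1, 3]]
```

The two printed words match the examples above; the longest element is $c$-sortable for both choices of $c$, while, for instance, the permutation $2413$ is not $s_1s_2s_3$-sortable.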
The following is a combination of \cite[Theorems~1.1, 1.2, 1.4]{sort_camb}.
\begin{theorem}\label{sort_camb thm}
An element of $W$ is the bottom element of its $\Theta_c$-class if and only if it is $c$-sortable.
The $c$-sortable elements constitute a sublattice of the weak order.
\end{theorem}
In general, the set of bottom elements of a congruence is a join-sublattice but need not be a sublattice.
As a special case of a general lattice-theoretic result (found, for example, in \cite[Proposition~9-5.11]{regions9}), a $c$-sortable element $v$ represents a join-irreducible element of the Cambrian lattice $W/\Theta_c$ if and only if $v$ is join-irreducible as an element of $W$. The following simple lemma will be used in the proof of Theorem~\ref{key camb}.
\begin{lemma}\label{sort j*}
If $j$ is a $c$-sortable join-irreducible element and $j_*$ is the unique element covered by $j$, then $j_*$ is $c$-sortable.
\end{lemma}
\begin{proof}
If $c=s_1\cdots s_n$ and $a_1a_2\cdots a_k$ is the $(s_1\cdots s_n)$-sorting word for $j$, then $a_1a_2\cdots a_{k-1}$ is the $(s_1\cdots s_n)$-sorting word for a $c$-sortable element $x$ covered by~$j$.
But $j_*$ is the unique element covered by $j$, so $j_*=x$, which is $c$-sortable.
\end{proof}
We need one of the standard Isomorphism Theorems for lattices.
(See, for example, \cite[Theorem~9-5.22]{regions9}.)
This is easily proved directly, or follows as a special case of the same Isomorphism Theorem in universal algebra.
The notation $[x]_\Theta$ stands for the $\Theta$-class of $x$.
\begin{theorem}\label{3 isom}
Let $L$ be a finite lattice and let $\Theta$ and $\Psi$ be congruences on $L$ such that $\Psi$ refines $\Theta$.
Define a relation $\Theta/\Psi$ on $L/\Psi$ by setting $[x]_\Psi\equiv[y]_\Psi$ modulo $\Theta/\Psi$ if and only if $x\equiv y$ modulo $\Theta$.
Then $\Theta/\Psi$ is a congruence and the map $\beta:L/\Theta\to(L/\Psi)/(\Theta/\Psi)$ sending $[x]_\Theta$ to the set of $\Psi$-classes contained in $[x]_\Theta$ is an isomorphism.
The inverse isomorphism sends a $\Theta/\Psi$-class $C$ in $(L/\Psi)/(\Theta/\Psi)$ to the union of the $\Psi$-classes in $C$.
\end{theorem}
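To make Theorem~\ref{3 isom} concrete, here is a toy verification (ours, not from the text) on the $4$-element chain $0<1<2<3$, where join is $\max$ and meet is $\min$; on a chain, a partition is a lattice congruence exactly when its classes are intervals.

```python
# Exhaustive check of the Isomorphism Theorem on the chain 0 < 1 < 2 < 3.
from itertools import product

L = [0, 1, 2, 3]
join, meet = max, min

Psi = [{0, 1}, {2}, {3}]      # finer congruence
Theta = [{0, 1, 2}, {3}]      # coarser congruence; Psi refines Theta

def cls(partition, x):
    """The class of x, as a frozenset."""
    return next(frozenset(C) for C in partition if x in C)

def is_congruence(partition):
    """x ~ x', y ~ y' must give x v y ~ x' v y' and x ^ y ~ x' ^ y'."""
    for x, x2, y, y2 in product(L, repeat=4):
        if cls(partition, x) == cls(partition, x2) and \
           cls(partition, y) == cls(partition, y2):
            if cls(partition, join(x, y)) != cls(partition, join(x2, y2)):
                return False
            if cls(partition, meet(x, y)) != cls(partition, meet(x2, y2)):
                return False
    return True

assert is_congruence(Psi) and is_congruence(Theta)

# beta sends a Theta-class to the set of Psi-classes it contains.
L_mod_Psi = {cls(Psi, x) for x in L}
L_mod_Theta = {cls(Theta, x) for x in L}
beta = {T: frozenset(C for C in L_mod_Psi if C <= T) for T in L_mod_Theta}

# Each image is a single Theta/Psi-class, and beta is injective.
for T, Cs in beta.items():
    assert all(cls(Theta, min(C)) == cls(Theta, min(D)) for C in Cs for D in Cs)
print(len(L_mod_Theta), len(set(beta.values())))  # 2 2
```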
The following lemma will also be useful.
\begin{lemma}\label{Psi and tilde gen}
Let $L$ be a finite lattice and let $\Gamma$ and $\tilde\Gamma$ be sets of join-irreducible elements in $L$.
Let $\Psi$ be the congruence on $L$ generated by $\Gamma$ and let $I$ be the set of join-irreducible elements contracted by $\Psi$.
Define $\tilde\Psi$ and $\tilde{I}$ similarly.
Then the congruence $(\Psi\vee\tilde\Psi)/\Psi$ on $L/\Psi$ is generated by the set $\set{[j]_{\Psi}:j\in \tilde\Gamma\setminus I}$ of join-irreducible elements of $L/\Psi$.
\end{lemma}
\begin{proof}
Identifying join-irreducible elements of $L$ with join-irreducible congruences as before, the sets $I$ and $\tilde{I}$ are the ideals that $\Gamma$ and $\tilde\Gamma$ generate in $\operatorname{Irr}(\operatorname{Con}(L))$.
The join-irreducible elements in the quotient $L/\Psi$ are exactly the elements $[j]_{\Psi}$ where $j\in(\operatorname{Irr}(L)\setminus I)$.
Since the set of join-irreducible elements contracted by $\Psi\vee\tilde\Psi$ is $I\cup \tilde{I}$, each $[j]_{\Psi}$ is contracted by the congruence $(\Psi\vee\tilde\Psi)/\Psi$ if and only if $j$ is in~$\tilde{I}$.
Furthermore, a join-irreducible element $j\in(\operatorname{Irr}(L)\setminus I)$ is in $\tilde{I}$ if and only if it is below some element of $\tilde\Gamma\setminus I$.
We see that $(\Psi\vee\tilde\Psi)/\Psi$ is the congruence on $L/\Psi$ generated by the set $\set{[j]_{\Psi}:j\in \tilde\Gamma\setminus I}$.
\end{proof}
We now prove our key theorem.
\begin{proof}[Proof of Theorem~\ref{key camb}]
The quotient $W/\Psi$ is isomorphic to $W'$.
Let $I$ be the set of join-irreducible elements contracted by $\Psi$.
Let $\tilde\Gamma$ be the generating set that was used to define the Cambrian congruence $\Theta_c$.
By Lemma~\ref{Psi and tilde gen}, the set $\set{[j]_{\Psi}:j\in \tilde\Gamma\setminus I}$ generates the congruence $(\Psi\vee\Theta_c)/\Psi$ on $W/\Psi\cong W'$.
This congruence corresponds to the Cambrian congruence $\Theta_{c'}$ on $W'$, so $\operatorname{Camb}(W',c')$ is isomorphic to $(W/\Psi)/[(\Psi\vee\Theta_c)/\Psi]$, which, by Theorem~\ref{3 isom}, is isomorphic to $W/(\Psi\vee\Theta_c)$.
Also by Theorem~\ref{3 isom}, $W/(\Psi\vee\Theta_c)$ is isomorphic to $(W/\Theta_c)/[(\Psi\vee\Theta_c)/\Theta_c]$, and Lemma~\ref{Psi and tilde gen} says that $(\Psi\vee\Theta_c)/\Theta_c$ is generated by $\set{[j]_{\Theta_c}:j\in(\Gamma\cap\operatorname{Camb}(W,c))}$.
In particular, there is a surjective homomorphism from $W/\Theta_c$ to $W/(\Psi\vee\Theta_c)$ whose associated congruence is generated by $\set{[j]_{\Theta_c}:j\in(\Gamma\cap\operatorname{Camb}(W,c))}$.
Since $x\mapsto[x]_{\Theta_c}$ is an isomorphism from $\operatorname{Camb}(W,c)$ to $W/\Theta_c$, and since $W/(\Psi\vee\Theta_c)$ is isomorphic to $\operatorname{Camb}(W',c')$, we conclude that there is a surjective homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$ whose associated congruence is generated by $\Gamma\cap\operatorname{Camb}(W,c)$.
We will show that this homomorphism is the restriction of $\eta$.
If $x\in\operatorname{Camb}(W',c')$, then $x$ is the bottom element of $[x]_{\Theta_{c'}}$.
Thus $\eta^{-1}(x)$ is a $\Psi$-class in $W$ containing the bottom element $y$ of a $(\Psi\vee\Theta_c)$-class.
In particular, $y$ is the bottom element of $[y]_{\Theta_c}$, or in other words $y\in\operatorname{Camb}(W,c)$.
Since $\eta(y)=x$, we have shown that $\operatorname{Camb}(W',c')\subseteq\eta(\operatorname{Camb}(W,c))$.
Since $\operatorname{Camb}(W,c)$ is a sublattice of $W$ and $\eta$ is a homomorphism, $\eta(\operatorname{Camb}(W,c))$ is a sublattice of $W'$, and the restriction of $\eta$ to $\operatorname{Camb}(W,c)$ is a lattice homomorphism from $\operatorname{Camb}(W,c)$ to $\eta(\operatorname{Camb}(W,c))$.
Let $j\in\Gamma\cap\operatorname{Camb}(W,c)$.
Then because $j\in\Gamma$, $\eta$ contracts $j$, or in other words $\eta(j)=\eta(j_*)$, where $j_*$ is the unique element of $W$ covered by $j$.
By Lemma~\ref{sort j*}, the element $j_*$ is also in $\operatorname{Camb}(W,c)$, so $j_*$ is also the unique element of $\operatorname{Camb}(W,c)$ covered by $j$.
Thus the restriction of $\eta$ to $\operatorname{Camb}(W,c)$ also contracts $j$ in $\operatorname{Camb}(W,c)$.
Since there exists a surjective lattice homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$ whose associated congruence is generated by $\Gamma\cap\operatorname{Camb}(W,c)$, the image of the restriction of $\eta$ can be no larger than $\operatorname{Camb}(W',c')$, and we conclude that $\eta(\operatorname{Camb}(W,c))=\operatorname{Camb}(W',c')$.
We have shown that the restriction of $\eta$ is a surjective homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$ whose associated congruence is generated by $\Gamma\cap\operatorname{Camb}(W,c)$.
\end{proof}
With Theorem~\ref{key camb} in hand, we prove several facts that together constitute a detailed classification of surjective homomorphisms between Cambrian lattices.
An \newword{oriented Coxeter diagram} is a directed graph (with labels on some edges) defined by choosing an orientation of each edge in a Coxeter diagram.
Orientations of the Coxeter diagram of a finite Coxeter group $W$ are in bijection with Coxeter elements of $W$.
There is disagreement in the literature about the convention for this bijection.
The definition of Cambrian lattices given here agrees with the definition in~\cite{cambrian} if we take the following convention, which also agrees with the convention of \cite{sortable,sort_camb}:
To obtain a Coxeter element from an oriented diagram, we require, for each directed edge $s\to t$, that $s$ precedes $t$ in every expression for $c$.
However, the opposite convention is also common.
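With this convention, the passage from a reduced word for $c$ to the orientation is mechanical, as in the following sketch (ours, not from the text). Because adjacent vertices of the diagram correspond to non-commuting generators, their relative order is the same in every reduced word for $c$, so any one reduced word determines the orientation.

```python
# Sketch: orient each Coxeter diagram edge {s, t} as s -> t when s precedes t
# in a reduced word for the Coxeter element c.

def orient(diagram_edges, c_word):
    """diagram_edges: pairs of generators; c_word: a reduced word for c,
    listing each generator exactly once."""
    position = {s: k for k, s in enumerate(c_word)}
    return [(s, t) if position[s] < position[t] else (t, s)
            for s, t in diagram_edges]

# The Coxeter diagram of S_4 is the path s1 -- s2 -- s3.
edges = [("s1", "s2"), ("s2", "s3")]
print(orient(edges, ["s1", "s2", "s3"]))  # [('s1', 's2'), ('s2', 's3')]
print(orient(edges, ["s2", "s1", "s3"]))  # [('s2', 's1'), ('s2', 's3')]
```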
An \newword{oriented diagram homomorphism} starts with the oriented Coxeter diagram encoding a Coxeter system $(W,S)$ and a choice of Coxeter element $c$, then deletes vertices, decreases labels on directed edges, and/or erases directed edges, and relabels the vertices to obtain the oriented Coxeter diagram of some Coxeter system $(W',S')$ and choice $c'$ of Coxeter element.
We prove the following result and several more detailed results.
\begin{theorem}\label{main camb}
Given a finite Coxeter system $(W,S)$ with a choice $c$ of Coxeter element and another Coxeter system $(W',S')$ with a choice $c'$ of Coxeter element, there exists a surjective lattice homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$ if and only if there exists an oriented diagram homomorphism from the oriented diagram for $(W,S)$ and $c$ to the oriented diagram for $(W',S')$ and $c'$.
\end{theorem}
The proof of Theorem~\ref{main camb} begins with a factorization result analogous to Theorem~\ref{para factor}.
Given $J\subseteq S$ and a Coxeter element $c$ of $W$, the \newword{restriction of $c$ to $W_J$} is the Coxeter element $\tilde c$ of $W_J$ obtained by deleting the letters in $S\setminus J$ from a reduced word for $c$.
(Typically, the restriction is not equal to $c_J$.)
Recall from Section~\ref{delete vert sec} the definition of the parabolic homomorphism $\eta_J$.
Recall from Section~\ref{intro} that a compressive homomorphism is a surjective homomorphism that restricts to a bijection between sets of atoms.
For Cambrian lattices, this is a surjective lattice homomorphism $\operatorname{Camb}(W,c)\to\operatorname{Camb}(W',c')$ that restricts to a bijection between $S$ and $S'$.
\begin{theorem}\label{camb para factor}
Suppose $(W,S)$ and $(W',S')$ are finite Coxeter systems and $c$ and $c'$ are Coxeter elements of $W$ and $W'$.
Suppose $\eta:\operatorname{Camb}(W,c)\to\operatorname{Camb}(W',c')$ is a surjective lattice homomorphism and let $J\subseteq S$ be $\set{s\in S:\eta(s)\neq 1'}$.
Then $\eta$ factors as $\eta|_{\operatorname{Camb}(W_J,\tilde{c})}\circ(\eta_J)|_{\operatorname{Camb}(W,c)}$, where $\tilde{c}$ is the restriction of $c$ to $W_J$.
The map $\eta|_{\operatorname{Camb}(W_J,\tilde{c})}$ is a compressive homomorphism.
\end{theorem}
As was the case for the weak order, the task is to understand compressive homomorphisms between Cambrian lattices.
The following proposition is proved below as a special case of part of Proposition~\ref{camb basic facts}.
Together with Theorem~\ref{camb para factor}, it implies that every surjective lattice homomorphism between Cambrian lattices determines an oriented diagram homomorphism.
\begin{prop}\label{oriented}
Suppose $\eta:\operatorname{Camb}(W,c)\to\operatorname{Camb}(W',c')$ is a compressive homomorphism.
Then $m'(\eta(r),\eta(s))\le m(r,s)$ for each pair $r,s\in S$.
Also, if $s_1\cdots s_n$ is a reduced word for $c$, then $c'=\eta(s_1)\cdots\eta(s_n)$.
\end{prop}
As we did for the weak order, in studying compressive homomorphisms between Cambrian lattices, we may as well take $S'=S$ and let $\eta$ fix each element of $S$.
We will prove the following characterization of compressive homomorphisms.
\begin{theorem}\label{camb exist unique}
Suppose $(W,S)$ and $(W',S)$ are finite Coxeter systems and suppose that $m'(r,s)\le m(r,s)$ for each pair $r,s\in S$.
Given a Coxeter element $c$ of $W$ with reduced word $s_1\cdots s_n$, let $c'$ be the element of $W'$ with reduced word $s_1\cdots s_n$.
Let $\tilde\Gamma$ be a set of join-irreducible elements obtained by choosing, for each $r,s\in S$ with $m'(r,s)<m(r,s)$ such that $r$ precedes $s$ in $c$, exactly $m(r,s)-m'(r,s)$ join-irreducible elements in $\set{\operatorname{alt}_k(r,s):k=2,3,\ldots,m(r,s)-1}$.
\begin{enumerate}
\item \label{exist unique tildeGamma}
There exists a unique homomorphism $\eta:\operatorname{Camb}(W,c)\to\operatorname{Camb}(W',c')$ that is compressive and fixes $S$ pointwise and whose associated congruence $\Theta$ contracts all of the elements of~$\tilde\Gamma$.
\item \label{generated}
$\Theta$ is generated by $\tilde\Gamma$.
\item \label{every arises}
Every compressive homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$ that fixes $S$ pointwise arises in this manner, for some choice of $\tilde\Gamma$.
\end{enumerate}
\end{theorem}
Theorem~\ref{key camb} implies that, given any compressive homomorphism $\tilde\eta:W\to W'$, there is a choice of $\tilde\Gamma$ in Theorem~\ref{camb exist unique} such that every element of $\tilde\Gamma$ is contracted by the congruence associated to $\tilde\eta$.
The homomorphism $\eta$ thus arising from Theorem~\ref{camb exist unique} is the restriction of $\tilde\eta$.
The following theorem is a form of converse to these statements.
\begin{theorem}\label{camb diagram}
Suppose $(W,S)$ and $(W',S)$ are finite Coxeter systems and suppose that $\eta:\operatorname{Camb}(W,c)\to\operatorname{Camb}(W',c')$ is a compressive homomorphism.
\begin{enumerate}
\item \label{exists diagram}
There exists a compressive homomorphism from $W$ to $W'$ whose restriction to $\operatorname{Camb}(W,c)$ is $\eta$.
\item \label{any diagram}
Let $\tilde\Gamma$ be the set of join-irreducible elements that generates the congruence associated to $\eta$.
Given a compressive homomorphism $\tilde\eta:W\to W'$ such that all elements of $\tilde\Gamma$ are contracted by the congruence associated to $\tilde\eta$, the restriction of $\tilde\eta$ to $\operatorname{Camb}(W,c)$ is $\eta$.
\end{enumerate}
\end{theorem}
We begin our proof of these classification results by pointing out some basic facts on surjective homomorphisms between Cambrian lattices, in analogy with Proposition~\ref{basic facts}.
Proposition~\ref{oriented} is a special case of (5) and (6) in the following proposition.
\begin{prop}\label{camb basic facts}
Let $\eta:\operatorname{Camb}(W,c)\to \operatorname{Camb}(W',c')$ be a surjective lattice homomorphism.
Then
\begin{enumerate}
\item $\eta(1)=1'$.
\item $S'\subseteq\eta(S)\subseteq (S'\cup\set{1'})$.
\item If $r$ and $s$ are distinct elements of $S$ with $\eta(r)=\eta(s)$, then $\eta(r)=\eta(s)=1'$.
\item \label{restrict camb}
If $J\subseteq S$, then $\eta$ restricts to a surjective homomorphism $\operatorname{Camb}(W_J,\tilde{c})\to\operatorname{Camb}(W'_{\eta(J)\setminus\set{1'}},\tilde{c}')$, where $\tilde{c}$ is the restriction of $c$ to $W_J$ and $\tilde{c}'$ is the restriction of $c'$ to $W'_{\eta(J)\setminus\set{1'}}$.
\item $m'(\eta(r),\eta(s))\le m(r,s)$ for each pair $r,s\in S$ with $\eta(r)\neq1'$ and $\eta(s)\neq1'$.
\item \label{c' eta}
If $s_1s_2\cdots s_n$ is a reduced word for $c$, then $c'=\eta(s_1)\eta(s_2)\cdots\eta(s_n)$.
\end{enumerate}
\end{prop}
\begin{proof}
The proof of Proposition~\ref{basic facts} can be repeated verbatim to prove all of the assertions except~\eqref{c' eta}.
The latter is equivalent to the statement that $\eta(r)$ precedes $\eta(s)$ if and only if $r$ precedes $s$, whenever $r,s\in S$ have $\eta(r)\neq1'$ and $\eta(s)\neq1'$.
This statement follows immediately from~\eqref{restrict camb} with $J=\set{r,s}$.
\end{proof}
Taking $\eta=\eta_J$ and $\Gamma=S\setminus J$ in Theorem~\ref{key camb}, we have the following analog of Theorem~\ref{para cong} for Cambrian lattices:
\begin{theorem}\label{camb para cong}
Let~$c$ be a Coxeter element of~$W,$ let $J\subseteq S$ and let~$\tilde c$ be the Coxeter element of~$W_J$ obtained by restriction.
Then $\eta_J$ restricts to a surjective lattice homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W_J,\tilde c)$.
The associated lattice congruence on $\operatorname{Camb}(W,c)$ is generated by the set $S\setminus J$ of join-irreducible elements.
\end{theorem}
To prove Theorem~\ref{camb para factor}, the proof of Theorem~\ref{para factor} can be repeated verbatim, with Proposition~\ref{camb basic facts} replacing Proposition~\ref{basic facts} and Theorem~\ref{camb para cong} replacing Theorem~\ref{para cong}.
We prove Theorems~\ref{camb exist unique} and~\ref{camb diagram} together.
\begin{proof}[Proof of Theorems~\ref{camb exist unique} and~\ref{camb diagram}]
We first claim that there is a compressive homomorphism $\tilde\eta:W\to W'$ fixing $S$ and a generating set $\Gamma$ for the associated congruence such that $\tilde\Gamma=\Gamma\cap\operatorname{Camb}(W,c)$.
Arguing as in previous sections, Theorems~\ref{edge factor} and~\ref{edge cong ji} reduce the proof of the claim to the case where $W$ and $W'$ are irreducible and their diagrams coincide except for edge labels.
Looking through the type-by-type results of Sections~\ref{dihedral sec}--\ref{exceptional sec}, we see that in almost every case, the claim is true for a simple reason:
For each choice of $\tilde\Gamma$, there is a surjective homomorphism from $W$ to $W'$ whose associated congruence is homogeneous of degree $2$, with a generating set that includes $\tilde\Gamma$.
The only exceptions come when $(W,W')$ is $(H_3,B_3)$ or $(H_4,B_4)$ and one of the following cases applies:
\noindent
\textbf{Case 1:} $q$ precedes $r$ and $\tilde\Gamma=\set{qr}$.
In this case, in light of Case 2 of the proof of Theorem~\ref{H3 B3 thm}, we need to verify that $qrqsrqrs\in H_3$ is not $qrs$-sortable and not $qsr$-sortable.
But $qrqsrqrs$ has only one other reduced word, $qrsqrqrs$, so its $qrs$-sorting word is $qrs|qr|qrs$, and thus it is not $qrs$-sortable.
Similarly, the $qsr$-sorting word for $qrqsrqrs$ is $qr|qsr|qr|s$, so it is not $qsr$-sortable.
The claim is proved in this case for $(W,W')=(H_3,B_3)$.
We easily conclude that $qrqsrqrs$ is not $qrst$-sortable, $qrts$-sortable, $qstr$-sortable or $qtsr$-sortable as an element of $H_4$ (cf. \cite[Lemma~2.3]{sort_camb}), so the claim is proved in this case for $(W,W')=(H_4,B_4)$ as well.
\noindent
\textbf{Case 2:} $r$ precedes $q$ and $\tilde\Gamma=\set{rq}$.
Arguing similarly to Case 1, it is enough to show that $qrqsrqrs$ is not $rqs$-sortable and not $srq$-sortable as an element of $H_3$.
This is true because its $rqs$-sorting word is $q|rqs|rq|rs$ and its $srq$-sorting word is $q|rq|srq|r|s$.
\noindent
\textbf{Case 3:} $q$ precedes $r$ and $\tilde\Gamma=\set{qrqr}$.
In light of Case 8 of the proof of Theorem~\ref{H3 B3 thm}, we need to verify that $rqsrqrs\in H_3$ is not $qrs$-sortable and not $qsr$-sortable.
This is true because its $qrs$-sorting word is $rs|qr|qrs$ and its $qsr$-sorting word is $r|qsr|qr|s$.
\noindent
\textbf{Case 4:} $r$ precedes $q$ and $\tilde\Gamma=\set{rqrq}$.
The $rqs$-sorting word for $rqsrqrs$ is $rqs|rq|rs$, and the $srq$-sorting word for $rqsrqrs$ is $rq|srq|r|s$, so $rqsrqrs$ is neither $rqs$-sortable nor $srq$-sortable.
As in Case 3, this is enough.
This completes the proof of the claim.
Now Theorem~\ref{key camb} says that the restriction $\eta$ of $\tilde\eta$ is a compressive homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$, whose associated congruence $\Theta$ is generated by $\tilde\Gamma$.
But then $\eta$ is the unique homomorphism from $\operatorname{Camb}(W,c)$ to $\operatorname{Camb}(W',c')$ whose congruence contracts $\tilde\Gamma$:
Any congruence contracting $\tilde\Gamma$ other than $\Theta$ is strictly coarser than $\Theta$, and thus has strictly fewer congruence classes than $|\operatorname{Camb}(W',c')|$.
We have proved Theorem~\ref{camb exist unique}\eqref{exist unique tildeGamma}, Theorem~\ref{camb exist unique}\eqref{generated}, and Theorem~\ref{camb diagram}\eqref{exists diagram}.
Now let $\eta:\operatorname{Camb}(W,c)\to\operatorname{Camb}(W',c')$ be any compressive homomorphism that fixes $S$ pointwise.
Proposition~\ref{camb basic facts}\eqref{restrict camb} and Proposition~\ref{camb basic facts}\eqref{c' eta} imply that, for each $r,s\in S$ with $m'(r,s)<m(r,s)$ such that $r$ precedes $s$ in $c$, the congruence associated to $\eta$ contracts exactly $m(r,s)-m'(r,s)$ join-irreducible elements of the form $\operatorname{alt}_k(r,s)$ with $k=2,\ldots,m(r,s)-1$.
Theorem~\ref{camb exist unique}\eqref{every arises} follows by the uniqueness in Theorem~\ref{camb exist unique}\eqref{exist unique tildeGamma}.
Finally, we prove Theorem~\ref{camb diagram}\eqref{any diagram}.
Given any $\tilde\eta$, let $\Gamma$ be a minimal generating set for the associated congruence.
Theorem~\ref{key camb} says that the restriction of $\tilde\eta$ to $\operatorname{Camb}(W,c)$ is a surjective homomorphism to $\operatorname{Camb}(W',c')$, and the associated congruence is generated by $\Gamma\cap\operatorname{Camb}(W,c)$.
But by the uniqueness in Theorem~\ref{camb exist unique}\eqref{exist unique tildeGamma} and by Theorem~\ref{camb exist unique}\eqref{generated}, the restriction of $\tilde\eta$ to $\operatorname{Camb}(W,c)$ is~$\eta$.
\end{proof}
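The sorting-word computations in Cases 1--4 can be checked mechanically: among the (finitely many) reduced words of the element in question, one selects the lexicographically leftmost subword of $c^\infty$, then reads off the blocks between dividers. The following sketch (ours) does this, using the two reduced words of $qrqsrqrs$ recorded in Case 1.

```python
# Sketch: the leftmost subword of c^infinity matching one of the given
# reduced words, returned as blocks separated by the dividers.

def leftmost_positions(c_word, word):
    """Positions in c^infinity of the leftmost occurrence of word as a subword."""
    n, positions, p = len(c_word), [], 0
    for letter in word:
        while c_word[p % n] != letter:
            p += 1
        positions.append(p)
        p += 1
    return positions

def sorting_blocks(c_word, reduced_words):
    """Blocks of the sorting word, given all reduced words of the element."""
    n = len(c_word)
    best = min(reduced_words, key=lambda w: leftmost_positions(c_word, w))
    pos = leftmost_positions(c_word, best)
    blocks = [[] for _ in range(pos[-1] // n + 1)]
    for letter, p in zip(best, pos):
        blocks[p // n].append(letter)
    return ["".join(b) for b in blocks]

# qrqsrqrs in H3 has exactly the reduced words qrqsrqrs and qrsqrqrs (Case 1).
words = ["qrqsrqrs", "qrsqrqrs"]
print(sorting_blocks("qrs", words))  # ['qrs', 'qr', 'qrs']
print(sorting_blocks("qsr", words))  # ['qr', 'qsr', 'qr', 's']
```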
\section{Refinement relations among Cambrian fans}\label{crys case}
Given a finite crystallographic Coxeter group $W$ with Coxeter arrangement $\mathcal{A}$, an associated Cartan matrix $A$, and a Coxeter element $c$ of $W$, the \newword{Cambrian fan} for $(A,c)$ is the fan defined by the shards of $\mathcal{A}$ not removed by the Cambrian congruence $\Theta_c$.
In this section, we prove Theorem~\ref{camb fan coarsen}, which gives explicit refinement relations among Cambrian fans.
We assume the most basic background about Cartan matrices of finite type and the associated finite root systems.
Recall from the introduction that a Cartan matrix $A=[a_{ij}]$ \newword{dominates} a Cartan matrix $A'=[a'_{ij}]$ if $|a_{ij}|\ge |a'_{ij}|$ for all $i$ and~$j$.
Recall also that Proposition~\ref{dom subroot} says that when $A$ dominates $A'$, $\Phi(A)\supseteq\Phi(A')$ and $\Phi_+(A)\supseteq\Phi_+(A')$, assuming that $\Phi(A)$ and $\Phi(A')$ are both defined with respect to the same simple roots $\alpha_i$.
We again emphasize that $\Phi(A)$ includes any imaginary roots.
The proposition appears as \cite[Lemma~3.5]{Marquis}, but for completeness, we give a proof here.
\begin{proof}[Proof of Proposition~\ref{dom subroot}]
To construct a Kac-Moody Lie algebra from a symmetrizable Cartan matrix $A$, as explained in \cite[Chapter~1]{Kac}, one first defines an auxiliary Lie algebra $\tilde{\mathfrak{g}}(A)$ using generators and relations.
The Lie algebra $\tilde{\mathfrak{g}}(A)$ decomposes as a direct sum $\mathfrak{n}_-\oplus\mathfrak{h}\oplus\mathfrak{n}_+$ \emph{of complex vector spaces}.
We won't need details on $\mathfrak{h}$ here, but its dual contains linearly independent vectors $\alpha_1,\ldots,\alpha_n$ called the \newword{simple roots}.
The summand $\mathfrak{n}_+$ decomposes further as a (\emph{vector space}) direct sum with infinitely many summands $\tilde{\mathfrak{g}}_\alpha$, indexed by nonzero vectors $\alpha$ in the nonnegative integer span $Q_+$ of the simple roots.
Similarly, $\mathfrak{n}_-$ decomposes into summands indexed by nonzero vectors in the nonpositive integer span of the simple roots.
Furthermore, $\mathfrak{n}_+$ is freely generated by elements $e_1,\ldots,e_n$ and thus is independent of the choice of $A$ (as long as $A$ is $n\times n$).
Similarly, $\mathfrak{n}_-$ is freely generated by elements $f_1,\ldots,f_n$ and is independent of the choice of $A$.
There is a unique largest ideal $\mathfrak{r}$ of $\tilde{\mathfrak{g}}(A)$ whose intersection with $\mathfrak{h}$ is trivial, and this is a direct sum $\mathfrak{r}_-\oplus\mathfrak{r}_+$ \emph{of ideals} with $\mathfrak{r}_\pm=\mathfrak{r}\cap\mathfrak{n}_\pm$.
The \newword{Kac-Moody Lie algebra} $\mathfrak{g}(A)$ is defined to be $\tilde{\mathfrak{g}}(A)/\mathfrak{r}$.
The Lie algebra $\mathfrak{g}(A)$ inherits a direct sum decomposition $\mathfrak{n}_-\oplus\mathfrak{h}\oplus\mathfrak{n}_+$, and the summands $\mathfrak{n}_\pm$ decompose further as
\[\mathfrak{n}_-=\bigoplus_{0\neq\alpha\in Q_+}\mathfrak{g}_{-\alpha}\qquad\text{and}\qquad \mathfrak{n}_+=\bigoplus_{0\neq\alpha\in Q_+}\mathfrak{g}_\alpha.\]
A \newword{root} is a nonzero vector $\alpha$ in $Q_+\cup(-Q_+)$ such that $\mathfrak{g}_\alpha\neq0$.
The \newword{(Kac-Moody) root system} associated to $A$ is the set of all roots.
While $\mathfrak{n}_-$ and $\mathfrak{n}_+$ are independent of the choice of $A$, the ideal $\mathfrak{r}$ depends on $A$ (in a way that is not apparent here because we have not given the presentation of $\tilde{\mathfrak{g}}(A)$).
Thus, writing $\mathfrak{r}(A)$ for the ideal associated to $A$, Proposition~\ref{dom subroot} follows immediately from this claim:
If $A$ and $A'$ are Cartan matrices such that $A$ dominates~$A'$, then $\mathfrak{r}(A)\subseteq\mathfrak{r}(A')$.
This claim follows immediately from a description of the ideal $\mathfrak{r}$ in terms of the \newword{Serre relations}.
Recall that for $x$ in a Lie algebra $\mathfrak{g}$, the linear map $\operatorname{ad}_x:\mathfrak{g}\to\mathfrak{g}$ is $y\mapsto[x,y]$.
In \cite[Theorem~9.11]{Kac}, it is proved that the ideal $\mathfrak{r}_+$ is generated by $\set{(\operatorname{ad}_{e_i})^{1-a_{ij}}(e_j):i\neq j}$ and that $\mathfrak{r}_-$ is generated by $\set{(\operatorname{ad}_{f_i})^{1-a_{ij}}(f_j):i\neq j}$.
\end{proof}
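In finite type, the containment of Proposition~\ref{dom subroot} can also be seen computationally: every root is in the orbit of a simple root under the simple reflections, which act in simple-root coordinates by $s_i(v)=v-\bigl(\sum_j a_{ij}v_j\bigr)\alpha_i$. The sketch below (ours, finite type only, so no imaginary roots arise) checks the rank-$2$ cases where $B_2$ dominates $A_2$ and $A_1\times A_1$.

```python
# Sketch: generate a finite root system in simple-root coordinates by closing
# the simple roots under the simple reflections, then check dominance.
# Valid in finite type only, where all roots are real.

def dominates(A, A2):
    n = len(A)
    return all(abs(A[i][j]) >= abs(A2[i][j]) for i in range(n) for j in range(n))

def roots(A):
    """All roots of the finite root system of A (finite type only)."""
    n = len(A)
    frontier = {tuple(int(i == j) for j in range(n)) for i in range(n)}
    found = set(frontier)
    while frontier:
        new = set()
        for v in frontier:
            for i in range(n):
                pairing = sum(A[i][j] * v[j] for j in range(n))  # <v, alpha_i^vee>
                image = tuple(v[j] - (pairing if j == i else 0) for j in range(n))
                if image not in found:
                    new.add(image)
        found |= new
        frontier = new
    return found

B2 = [[2, -1], [-2, 2]]
A2 = [[2, -1], [-1, 2]]
A1A1 = [[2, 0], [0, 2]]

assert dominates(B2, A2) and dominates(B2, A1A1)
print(sorted(r for r in roots(B2) if min(r) >= 0))  # [(0, 1), (1, 0), (1, 1), (1, 2)]
print(roots(A2) <= roots(B2), roots(A1A1) <= roots(B2))  # True True
```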
Although we have proved Proposition~\ref{dom subroot} in full generality, we pause to record the following proof due to Hugh Thomas (personal communication, 2018) in the case where $A$ is symmetric.
\begin{proof}[Quiver-theoretic proof of Proposition~\ref{dom subroot} for $A$ symmetric]
Suppose $Q$ is a quiver without loops (i.e.\ $1$-cycles) with $n$ vertices, suppose $v\in\mathbb Z^n$, and fix an algebraically closed field.
We associate a symmetric Cartan matrix $A$ to $Q$ by ignoring the direction of the edges, setting $a_{ii}=2$, and taking $-a_{ij}$, for $i\neq j$, to be the total number of edges connecting vertex $i$ to vertex $j$.
Kac \cite[Theorem~3]{Kac1980} (generalizing Gabriel's Theorem) proved that $v$ is the dimension vector of an indecomposable representation of $Q$ if and only if $v$ is the simple-root coordinate of a positive root in the root system $\Phi(A)$.
Given symmetric Cartan matrices $A$ and $A'$ with $A$ dominating $A'$, we can construct a quiver $Q'$ associated to $A'$ and add arrows to it to obtain a quiver $Q$ associated to $A$.
If $v$ is the vector of simple-root coordinates of a root in $\Phi(A')$, then starting with an indecomposable representation of $Q'$ with dimension vector $v$ and assigning arbitrary maps to the arrows of $Q$ that are not in $Q'$ yields a representation of $Q$ that is still indecomposable, so $v$ is the vector of simple-root coordinates of a root in $\Phi(A)$.
\end{proof}
The dual root system $\Phi\spcheck(A)$ consists of all co-roots associated to the roots in $\Phi(A)$.
This is a root system in its own right, associated to $A^T$.
Since $A$ dominates $A'$ if and only if $A^T$ dominates $(A')^T$, the following proposition is immediate from Proposition~\ref{dom subroot}.
\begin{prop}\label{dom subcoroot}
Suppose $A$ and $A'$ are symmetrizable Cartan matrices such that $A$ dominates $A'$.
If $\Phi\spcheck(A)$ and $\Phi\spcheck(A')$ are both defined with respect to the same simple co-roots $\alpha\spcheck_i$, then $\Phi\spcheck(A)\supseteq\Phi\spcheck(A')$ and $\Phi\spcheck_+(A)\supseteq\Phi\spcheck_+(A')$.
\end{prop}
Suppose $A$ dominates $A'$ and suppose $(W,S)$ and $(W',S)$ are the associated Coxeter systems.
Let $\mathcal{A}$ and $\mathcal{A}'$ be the associated Coxeter arrangements, realized so that the simple \emph{co}-roots are the same for the two arrangements.
(This requires that two different Euclidean inner products be imposed on $\mathbb R^n$.)
Proposition~\ref{dom subcoroot} implies that $\mathcal{A}'\subseteq\mathcal{A}$, so that in particular, each region of $\mathcal{A}$ is contained in some region of $\mathcal{A}'$.
Since the regions of $\mathcal{A}$ are in bijection with the elements of $W$ and the regions of $\mathcal{A}'$ are in bijection with the elements of $W'$, this containment relation defines a surjective map $\eta:W\to W'$.
\begin{theorem}\label{weak hom subroot}
The map $\eta$, defined above, is a surjective lattice homomorphism from the weak order on $W$ to the weak order on $W'$.
\end{theorem}
\begin{proof}
As in the proof of Proposition~\ref{dom subroot}, we may restrict our attention to the case where $A$ is irreducible and barely dominates $A'$.
We first consider the case where $A'$ is obtained by erasing a single edge $e$ in the Dynkin diagram of $A$.
Let $E=\set{e}$.
Recall from Section~\ref{erase edge sec} that the map $\eta_E$ maps $w\in W$ to $(w_I,w_J)\in W_I\times W_J=W'$, where $I$ and $J$ are the vertex sets of the two components of the diagram of $A'$.
The set of reflections in $W'$ is the union of the set of reflections in $W_I$ and the set of reflections in $W_J$.
Since $\operatorname{inv}(w_I)=\operatorname{inv}(w)\cap W_I$ and $\operatorname{inv}(w_J)=\operatorname{inv}(w)\cap W_J$, two elements of $W$ map to the same element of $W'$ if and only if the symmetric difference of their inversion sets does not intersect $W_I\cup W_J$.
Comparing with the edge-erasing case in the proof of Proposition~\ref{dom subroot}, and noting that both maps fix $W'$, we see that $\eta=\eta_E$.
Theorem~\ref{edge cong} now says that $\eta$ is a surjective homomorphism.
The case where $A$ is of type $G_2$ is easy and we omit the details.
Now suppose $A$ is of type $C_n$, so that the dual root system $\Phi\spcheck(A)$ is of type $B_n$.
Using a standard realization of the type-B root system, reflections in $W$ correspond to positive co-roots as follows:
A co-root $e_i$ corresponds to $(-i\,\,\,i)$, a co-root $e_j-e_i$ corresponds to $(-j\,\,\,-i)(i\,\,\,j)$, and a co-root $e_j+e_i$ corresponds to $(i\,\,\,-j)(j\,\,\,-i)$.
The positive co-roots for $A'$ are the roots $\alpha_1+\alpha_2+\cdots+\alpha_j=e_j$ for $1\le j\le n$ and $\alpha_i+\alpha_{i+1}+\cdots+\alpha_j=e_j-e_{i-1}$ for $2\le i\le j\le n$.
Two adjacent $\mathcal{A}$-regions are contained in the same $\mathcal{A}'$-region if and only if the hyperplane separating them is in $\mathcal{A}\setminus\mathcal{A}'$.
Thus, in light of Proposition~\ref{simion cover}, we see that $\eta$ has the same fibers as the map $\eta_\sigma$ of Section~\ref{simion sec}, which is a surjective lattice homomorphism by Theorem~\ref{simion thm}.
Writing, as before, $\tau$ for the inverse map to $\eta_\sigma$, the inversions of a permutation $\pi$ correspond to the inversions $t$ of $\tau(\pi)$ such that $H_t\in\mathcal{A}'$.
It follows that $\eta=\eta_\sigma$.
When $A$ is of type $B_n$, so that $\Phi\spcheck(A)$ is of type $C_n$, the argument is the same, using Proposition~\ref{nonhom cover} instead of Proposition~\ref{simion cover}.
When $A$ is of type $F_4$, the dual root system $\Phi\spcheck(A)$ is also of type $F_4$, and we choose an explicit realization of $\Phi\spcheck(A)$ with
\[\alpha\spcheck_p=\frac12(-e_1-e_2-e_3+e_4)\,,\quad\alpha\spcheck_q=e_1\,,\quad\alpha\spcheck_r=e_2-e_1\,,\quad\alpha\spcheck_s=e_3-e_2.\]
Here $p$, $q$, $r$, and $s$ are as defined in connection with Theorem~\ref{F4 thm}.
The positive co-roots are $\set{\frac12(\pm e_1\pm e_2\pm e_3+e_4)}\cup\set{e_i:i=1,2,3,4}\cup\set{e_j\pm e_i:1\le i<j\le 4}$.
The positive co-roots for $A'$, regarded as a subset of these co-roots, are
\[\set{e_1,e_2,e_3}\cup\set{e_2-e_1,e_3-e_2,e_3-e_1}\cup\SEt{\sum_{i=1}^4b_ie_i:b_i\in\Set{\pm\frac12},\ b_4=\frac12,\ \sum b_i\le0}.\]
Computer calculations show that the surjective homomorphism $\eta$ found in Case~4 of the proof of Theorem~\ref{F4 thm} has the property that two elements of $W$, related by a cover in the weak order, map to the same element of $W'$ if and only if the reflection that relates them corresponds to a co-root not contained in the subset $\Phi\spcheck(A')$.
The two maps coincide by the same argument given in previous cases.
\end{proof}
Theorems~\ref{camb exist unique} and~\ref{weak hom subroot} immediately imply Theorem~\ref{camb fan coarsen}.
\begin{remark}\label{alt proof}
We mention two alternative proofs of Theorem~\ref{weak hom subroot} that are almost uniform, but not quite.
Both use the following fact:
For $A$ and $A'$ as in Proposition~\ref{dom subroot}, if $\Phi(A)$ is finite, then $\Phi(A')$ is not only a subset of $\Phi(A)$, but also an order ideal in the root poset of $\Phi(A)$ (the positive roots ordered with $\alpha\le\beta$ if and only if $\beta-\alpha$ is in the nonnegative span of the simple roots).
This fact is easily proved type-by-type, but we are unaware of a uniform proof.
Indeed, the fact fails for infinite root systems, so perhaps a uniform proof shouldn't be expected.
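As a small illustration of the fact (our example, in rank two, for the edge-erasing case): let $A$ be of type $B_2$, with $\alpha_1$ the long simple root and $\alpha_2$ the short one, so that the positive roots are $\alpha_1$, $\alpha_2$, $\alpha_1+\alpha_2$, and $\alpha_1+2\alpha_2$, ordered in the root poset by $\alpha_1<\alpha_1+\alpha_2<\alpha_1+2\alpha_2$ and $\alpha_2<\alpha_1+\alpha_2$. Erasing the edge of the Dynkin diagram yields $A'$ of type $A_1\times A_1$, whose positive roots $\set{\alpha_1,\alpha_2}$ indeed form an order ideal in this poset.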
Using the fact, it is easy to prove Theorem~\ref{weak hom subroot} using the polygonality of the weak order (\cite[Theorem~10-3.7]{regions10} and \cite[Theorem~9-6.5]{regions9}) or using the characterization of inversion sets as rank-two biconvex (AKA biclosed) sets of positive roots (\cite[Lemma~4.1]{Dyer} or \cite[Theorem~10-3.24]{regions10}).
\end{remark}
\section*{Acknowledgments}
Thanks are due to Vic Reiner, Hugh Thomas, and John Stembridge for helpful conversations.
Thanks in particular to Hugh Thomas for providing the quiver-theoretic proof of the symmetric case of Proposition~\ref{dom subroot} (see Section~\ref{crys case}) and encouraging the author to improve upon an earlier non-uniform finite-type proof.
Thanks to Nicolas Perrin for bibliographic help concerning Kac-Moody Lie algebras and for his helpful notes~\cite{Perrin}.
The computations described in this paper were carried out in \texttt{maple} using Stembridge's \texttt{coxeter/weyl} and \texttt{posets} packages~\cite{StembridgePackages}.
An anonymous referee gave many helpful suggestions and pointed out most of what appears in Remark~\ref{alt proof}.
\end{document}
\begin{document}
\newcounter{remark}
\newcounter{theor}
\setcounter{remark}{0} \setcounter{theor}{1}
\newtheorem{claim}{Claim}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{definition}{Definition}[section]
\newtheorem{corollary}{Corollary}[section]
\newenvironment{proof}[1][Proof]{\begin{trivlist}
\item[\hskip \labelsep {\bfseries #1}]}{\end{trivlist}}
\newenvironment{remark}[1][Remark]{\addtocounter{remark}{1} \begin{trivlist}
\item[\hskip \labelsep {\bfseries #1
\thesection.\theremark}]}{\end{trivlist}}
\begin{center}
{\Large \bf The K\"ahler-Ricci flow on Hirzebruch surfaces
\footnote{Research supported in part by National Science Foundation
grants DMS-06-04805 and DMS-08-48193. The second-named author is also supported in part by a Sloan Research Fellowship.
}}
\\
{\large Jian Song$^{*}$ and
Ben Weinkove$^\dagger$} \\
\end{center}
\noindent
{\bf Abstract} \ We investigate the metric behavior of the K\"ahler-Ricci flow on the Hirzebruch surfaces, assuming the initial metric is invariant under a maximal compact subgroup of the automorphism group. We show that, in the sense of Gromov-Hausdorff, the flow either shrinks to a point, collapses to $\mathbb{P}^1$ or contracts an exceptional divisor, confirming a conjecture of Feldman-Ilmanen-Knopf. We also show that similar behavior holds on higher-dimensional analogues of the Hirzebruch surfaces.
\section{Introduction}
The behavior of the K\"ahler-Ricci flow on a compact manifold $M$ is expected to reveal the metric and algebraic structures on $M$. If $M$ is a K\"ahler manifold with $c_1(M)=0$ then the K\"ahler-Ricci flow $\partial \omega/\partial t = - \textrm{Ric}(\omega)$ starting at a metric $\omega_0$ in any K\"ahler class $\alpha$ converges to the unique Ricci-flat metric in $\alpha$ \cite{Cao1}, \cite{Y1}. If $c_1(M) <0$, the normalized K\"ahler-Ricci flow
\begin{eqnarray} \label{KRF1}
\ddt{} \omega = \mbox{} \lambda \omega - \textrm{Ric}(\omega), \quad \omega(0) = \omega_0,
\end{eqnarray}
with $\lambda=-1$ and $\omega_0 \in \lambda c_1(M)$ converges to the unique K\"ahler-Einstein metric \cite{Cao1}, \cite{Y1}, \cite{A}.
If $c_1(M)>0$ then K\"ahler-Einstein metrics do not exist in general. If one assumes the existence of a K\"ahler-Einstein metric then, according to unpublished work of Perelman \cite{P2} (see \cite{TZhu}), the normalized K\"ahler-Ricci flow (\ref{KRF1}) with $\lambda=1$ and $\omega_0 \in \lambda c_1(M)$ converges to a K\"ahler-Einstein metric (this is due to \cite{H}, \cite{Ch} in the case of one complex dimension).
By a conjecture of Yau \cite{Y2}, a necessary and sufficient condition for $M$ to admit a K\"ahler-Einstein metric is that $M$ be `stable in the sense of geometric invariant theory'. Tian \cite{T1} later proposed the condition of \emph{K-stability} and this concept has been refined and extended by Donaldson \cite{D}.
One might expect that the sufficiency part of the Yau-Tian-Donaldson conjecture can be proved via the flow (\ref{KRF1}). Indeed, the problem of using stability conditions to prove convergence properties of the K\"ahler-Ricci flow is
an area of considerable current interest and we refer the reader to \cite{PS2}, \cite{PSS}, \cite{PSSW1}, \cite{PSSW2}, \cite{R}, \cite{Sz2}, \cite{PSSW3}, \cite{To} and \cite{CW} for some recent advances (however, this list of references is far from complete). We also remark that if $M$ is toric, it turns out that the stability condition
can be replaced by a simpler criterion involving the Futaki invariant,
and the behavior of (\ref{KRF1}) is then well-understood \cite{WZ}, \cite{Zhu}.
There has also been much interest in understanding the behavior of the flow (\ref{KRF1}) with $\lambda=-1$ on manifolds with $c_1(M) \le 0$ (and not strictly definite). In this case, smooth K\"ahler-Einstein metrics cannot exist and the K\"ahler class of $\omega(t)$ must degenerate in the limit. If $M$ is a minimal model of general type, it is shown in \cite{Ts} and later generalized in \cite{TZha} that the K\"ahler-Ricci flow converges to the unique singular K\"ahler-Einstein metric. If $M$ is not of general type, the K\"ahler-Ricci flow collapses and converges weakly to a generalized K\"ahler-Einstein metric if the canonical line bundle $K_M$ is semi-ample \cite{SoT1, SoT2}.
In contrast, the case when $c_1(M)$ is nonnegative or indefinite has been little studied. In complex dimension two, it is natural to consider the rational ruled surfaces, known as the Hirzebruch surfaces and denoted $M_0, M_1, M_2, \ldots$. Indeed, all rational surfaces can be obtained from $\mathbb{P}^2$ and Hirzebruch surfaces via consecutive blow-ups.
In this paper we describe several distinct behaviors of the K\"ahler-Ricci flow on the manifolds $M_k$. We consider the unnormalized K\"ahler-Ricci flow
\begin{eqnarray} \label{KRF0}
\ddt{} \omega = \mbox{} - \textrm{Ric}(\omega), \quad \omega(0) = \omega_0,
\end{eqnarray}
for $\omega_0$ in any given K\"ahler class. Write $\omega(t) = \frac{\sqrt{-1}}{2\pi} g_{i\ov{j}}dz^i \wedge d\overline{z^j}$, for $g=g(t)$ the K\"ahler metric associated to $\omega(t)$.
We find that, in the Gromov-Hausdorff sense, the flow $g(t)$ may: shrink the manifold to a point, collapse to a lower dimensional manifold or contract a divisor on $M_k$. As we will see, the particular outcome depends on $k$ and the initial K\"ahler class of $\omega_0$. Much of this behavior was conjectured by Feldman, Ilmanen and Knopf in their detailed analysis \cite{FIK} of self-similar solutions of the Ricci flow.
We confirm the Feldman-Ilmanen-Knopf conjectures, under the assumption that the initial metric is invariant under a maximal compact subgroup of the automorphism group.
Our results in this paper give some evidence that the K\"ahler-Ricci flow may indeed provide an analytic approach to the classification theory of algebraic varieties as suggested in \cite{SoT2}.
In general, if the canonical line bundle $K_M$ is not nef, the unnormalized K\"ahler-Ricci flow (\ref{KRF0}) must become singular at some finite time, say $T$.
If the limiting K\"ahler class is big as $t\rightarrow T$, a number of conjectures about the behavior of the flow have been made in \cite{SoT1, SoT2, T2}.
It is conjectured that the limiting K\"ahler metric has a metric completion $(M', d_T)$, where $M'$ is an algebraic variety obtained from $M$ by an algebraic procedure such as a divisorial contraction or flip. It is further proposed in \cite{T2, SoT3} that $(M', d_T)$ have mild singularities and $(M, \omega(t))$ should converge to $( M', d_T)$ in the sense of Gromov-Hausdorff. Our main result in this paper confirms this speculation in the case of $\mathbb{P}^2$ blown up at one point (with any initial K\"ahler class) and a family of higher-dimensional analogues of Hirzebruch surfaces if the initial K\"ahler metric is invariant under a maximal compact subgroup of the automorphism group. In addition, our results may be relevant to a recent conjecture of Tian \cite{T2} that an algebraic manifold is birational to a Fano manifold if and only if the unnormalized K\"ahler-Ricci flow (suitably interpreted) becomes extinct in finite time.
The Hirzebruch surfaces $M_0, M_1, \ldots$ are projective bundles over $\mathbb{P}^1$ which can be described as follows. Write $H$ and $\mathbb{C}_{\mathbb{P}^1}$ for the hyperplane line bundle and trivial line bundle respectively over $\mathbb{P}^1$. Then we define the Hirzebruch surface $M_k$ to be
\begin{equation}
M_k = \mathbb{P} (H^k \oplus \mathbb{C}_{\mathbb{P}^1}).
\end{equation}
One can check that $M_0$ and $M_1$ are the only Hirzebruch surfaces with positive first Chern class. $M_0$ is the manifold $\mathbb{P}^1 \times \mathbb{P}^1$ and will not be dealt with in this paper (see instead \cite{PS1}, \cite{P2}, \cite{TZhu}, \cite{PSSW2}, \cite{Zhu}). $M_1$ can be identified with $\mathbb{P}^2$ blown up at one point. It is already known that the normalized K\"ahler-Ricci flow (\ref{KRF1}) with $\lambda=1$ on $M_1$ starting at a toric metric $\omega_0$ in $c_1(M_1)$ converges to a K\"ahler-Ricci soliton after modification by automorphisms (see \cite{Zhu} and also \cite{PSSW3}). However, the manifold $M_1$ is still of interest to us since we are considering the more general case of the initial metric $\omega_0$ lying in \emph{any} K\"ahler class.
Assume now that $k\ge 1$ and denote by $D_{\infty}$ the divisor in $M_k$ given by the image of the section $(0,1)$ of $H^{k} \oplus \mathbb{C}_{\mathbb{P}^1}$. Since the complex manifold $M_k$ can also be described by $\mathbb{P}( \mathbb{C}_{\mathbb{P}^1} \oplus H^{-k})$ we can define another divisor $D_0$ on $M_k$ to be that given by the image of the section $(1,0)$ of $\mathbb{C}_{\mathbb{P}^1} \oplus H^{-k}$.
All of the Hirzebruch surfaces $M_k$ admit K\"ahler metrics. Indeed, the cohomology classes of the line bundles $[D_0]$ and $[D_{\infty}]$ span $H^{1,1}(M; \mathbb{R})$ and every K\"ahler class $\alphapha$ can be written uniquely as
\begin{equation} \label{alpha}
\alpha = \frac{b}{k} [D_{\infty}] - \frac{a}{k} [D_0]
\end{equation}
for constants $a$, $b$ with $0< a < b$. If $\alphapha_t$ denotes the K\"ahler class of a solution $\omega(t)$ of the flow (\ref{KRF0})
then a short calculation shows that the associated constants $a_t, b_t$ satisfy
\begin{equation} \label{atbtn2}
b_t = b_0 - t(k+2) \quad \textrm{and} \quad a_t = a_0 + t(k-2).
\end{equation}
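The short calculation can be spelled out as follows (our reconstruction, using the $n=2$ case of the formula for $K^{-1}_M$ derived in Section \ref{background}). Under (\ref{KRF0}) the K\"ahler class evolves by $\frac{d}{dt}\alpha_t = -c_1(M_k)$, so
\[
\frac{d}{dt}\left( \frac{b_t}{k}[D_{\infty}] - \frac{a_t}{k}[D_0] \right) = - \frac{(k+2)}{k}[D_{\infty}] - \frac{(k-2)}{k}[D_0].
\]
Since $[D_0]$ and $[D_{\infty}]$ are linearly independent in $H^{1,1}(M_k;\mathbb{R})$, comparing coefficients gives $\dot{b}_t = -(k+2)$ and $\dot{a}_t = k-2$, which integrates to (\ref{atbtn2}).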
Our goal is to understand the behavior of the K\"ahler-Ricci flow with initial metric $\omega_0$ in any given K\"ahler class $\alphapha$.
We focus on the case when $\omega_0$ is invariant under the action of a maximal compact subgroup $G_k \cong U(2)/\mathbb{Z}_k$ of the automorphism group of $M_k$. We will say that $\omega_0$ satisfies the \emph{Calabi symmetry condition}.
This symmetry is explained in detail in Section \ref{calabi} and was used by Calabi \cite{Cal} to construct extremal K\"ahler metrics on $M_k$ (see also \cite{Sz1}). Our first result shows that under this symmetry condition we can describe the convergence of the flow $(M, g(t))$ in the sense of Gromov-Hausdorff.
\pagebreak[3]
\begin{theorem} \label{thm1} On the Hirzebruch surface $M_k$,
let $\omega = \omega(t)$ be a solution of the K\"ahler-Ricci flow (\ref{KRF0}) with initial K\"ahler metric $\omega_0$ satisfying the Calabi symmetry condition. Assume that $\omega_0$ lies in the K\"ahler class $\alpha_0$ given by $a_0, b_0$ satisfying $0< a_0< b_0$.
Then we have the following:
\begin{enumerate}
\item[(a)] If $k \ge 2$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T= (b_0-a_0)/2k$ and $(M_k, g(t))$ converges to $(\mathbb{P}^1, a_T g_{\emph{FS}})$ in the Gromov-Hausdorff sense as $t \rightarrow T$, where $g_{\emph{FS}}$ is the Fubini-Study metric and $a_T$ is the constant given by (\ref{atbtn2}).
\item[(b)] If $k =1$ there are three subcases.
\begin{enumerate}
\item[(i)] If $b_0=3a_0$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T=a_0$ and $(M_1, g(t))$ converges to a point in the Gromov-Hausdorff sense as $t \rightarrow T$.
\item[(ii)] If $b_0< 3a_0$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T=(b_0-a_0)/2k$ and, as in (a) above, $(M_1, g(t))$ converges to $(\mathbb{P}^1, a_T g_{\emph{FS}})$ in the Gromov-Hausdorff sense as $t \rightarrow T$.
\item[(iii)] If $b_0>3a_0$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T=a_0$. On compact subsets of $M_1\setminus D_0$, $g(t)$ converges smoothly to a K\"ahler metric $g_T$. If
$(\overline{M}, d_T)$ denotes the metric completion of $(M_1 \setminus D_0, g_T)$, then
$(M_1,g(t))$ converges to $(\overline{M}, d_T)$ in the Gromov-Hausdorff sense as $t \rightarrow T$. $(\overline{M}, d_T)$ has finite diameter and is homeomorphic to the manifold $\mathbb{P}^2$. \end{enumerate}
\end{enumerate}
\end{theorem}
We now make some remarks about this theorem. As mentioned above, the manifold $M_1$ can be identified with $\mathbb{P}^2$ blown up at one point.
The case (b).(i) occurs precisely when the initial K\"ahler form $\omega_0$ lies in the first Chern class $c_1(M_1)$. This situation has been well-studied and the convergence result of (b).(i) is an immediate consequence of the diameter bound of Perelman for the normalized K\"ahler-Ricci flow \cite{P2}, \cite{SeT}.
In the case (b).(iii), we use the work of \cite{Ts}, \cite{TZha}, \cite{Zha} to obtain the smooth convergence of the metric outside $D_0$. Our result shows that
the K\"ahler-Ricci flow `blows down' the exceptional curve on $M_1$.
For more details about this see Section \ref{sectionGH}.
We see from the above that, assuming the Calabi symmetry, there are three distinct behaviors of the K\"ahler-Ricci flow on a Hirzebruch surface, depending on $k$ and the initial K\"ahler class:
\begin{itemize}
\item The $\mathbb{P}^1$ fiber collapses (cases (a) and (b).(ii)),
\item The manifold shrinks to a point (case (b).(i)),
\item The exceptional divisor is contracted (case (b).(iii)).
\end{itemize}
We can say some more about the cases (a) and (b).(ii) when the fiber collapses. If $D_H$ denotes any fiber of the map $\pi: M_k \rightarrow \mathbb{P}^1$ then the line bundle associated to $D_H$ is given by $[D_H]= \pi^*H$. The cohomology class of $D_H$ is represented by the smooth (1,1) form $\chi = \pi^* \omega_{\textrm{FS}}$ where $\omega_{\textrm{FS}}$ is the Fubini-Study metric on $\mathbb{P}^1$. We can show that in the case $k \ge 2$ or $k=1$ with $b_0<3a_0$, the K\"ahler form $\omega(t)$ along the flow converges to $a_T \chi$ in a certain weak sense which we now explain. Define for $0\le t < T = (b_0-a_0)/2k$ a reference K\"ahler metric
\begin{equation}
\hat{\omega}_t = a_t \chi + \frac{(b_t-a_t)}{2k} \theta,
\end{equation}
in $\alphapha_t$, where $\theta$ is a certain closed nonnegative (1,1) form in $2[D_{\infty}]$ (see Lemma \ref{theta} below). Observe that $\hat{\omega}_t$ converges to $a_T \chi$ as $t\rightarrow T$. Now define a potential function $\tilde{\varphi}=\tilde{\varphi}(t)$ by
\begin{equation}
\omega(t) = \hat{\omega}_t + \frac{\sqrt{-1}}{2\pi} \partial \ov{\partial} \tilde{\varphi}(t),
\end{equation}
where $\tilde{\varphi}$ is subject to a normalization condition $\tilde{\varphi}|_{\rho=0}=0$ (see Section \ref{sectioncalabi}). Then we have:
\begin{theorem} \label{thm2} Assume that $k\ge 2$ or $k=1$ and $b_0< 3a_0$.
Let $\omega(t)$ be a solution of the flow (\ref{KRF0}) on $M_k$ with $\omega_0$ satisfying the Calabi symmetry condition. Then for all $\beta$ with $0< \beta<1$,
\begin{enumerate}
\item[(i)] $\tilde{\varphi}(t)$ tends to zero in $C^{1,\beta}_{\hat{g}_0}(M_k)$ as $t \rightarrow T$.
\item[(ii)] For any compact set $K \subset M_k \setminus (D_{\infty} \cup D_0)$, $\tilde{\varphi}(t)$ tends to zero in $C^{2, \beta}_{\hat{g}_0}(K)$ as $t \rightarrow T$. In particular, on such a compact set $K$, $\omega(t)$ converges to $a_T \chi$ in $C^{\beta}_{\hat{g}_0}(K)$ as $t \rightarrow T$.
\end{enumerate}
\end{theorem}
In addition, we can extend our results to higher dimensions by considering $\mathbb{P}^1$ bundles over $\mathbb{P}^{n-1}$ for $n \ge 2$.
Write $H$ and $\mathbb{C}_{\mathbb{P}^{n-1}}$ for the hyperplane line bundle and trivial line bundle respectively over $\mathbb{P}^{n-1}$.
Then we define the $n$-dimensional complex manifold
$M_{n,k}$ by
\begin{equation}
M_{n,k} = \mathbb{P} (H^k \oplus \mathbb{C}_{\mathbb{P}^{n-1}}). \label{Mnk}
\end{equation}
We can define divisors $D_0$ and $D_{\infty}$ in the same manner as for $M_k$ above. The cohomology classes $[D_0]$ and $[D_{\infty}]$ again span $H^{1,1}(M; \mathbb{R})$ (see for example \cite{GH} or \cite{IS}) and the K\"ahler classes are described as in (\ref{alpha}). Similarly, we have a Calabi symmetry condition, where the maximal compact subgroup of the automorphism group is now $G_k \cong U(n)/\mathbb{Z}_k$ (see Section \ref{calabi}).
We have the following generalization of Theorem \ref{thm1}.
\begin{theorem} \label{thm3} On $M_{n,k}$,
let $\omega = \omega(t)$ be a solution of the K\"ahler-Ricci flow (\ref{KRF0}) with initial K\"ahler metric $\omega_0$ satisfying the Calabi symmetry condition. Assume that $\omega_0$ lies in the K\"ahler class $\alpha_0$ given by $a_0, b_0$ satisfying $0< a_0< b_0$.
Then we have the following:
\begin{enumerate}
\item[(a)] If $k \ge n$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T= (b_0-a_0)/2k$ and $(M_{n,k}, g(t))$ converges to $(\mathbb{P}^{n-1}, a_T g_{\emph{FS}})$ in the Gromov-Hausdorff sense as $t \rightarrow T$.
\item[(b)] If $1 \le k \le n-1$ there are three subcases.
\begin{enumerate}
\item[(i)] If $a_0(n+k) = b_0(n-k)$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T=a_0/(n-k)$ and $(M_{n,k}, g(t))$ converges to a point in the Gromov-Hausdorff sense as $t \rightarrow T$.
\item[(ii)] If $a_0(n+k) > b_0(n-k)$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T=(b_0-a_0)/2k$ and, as in (a) above, $(M_{n,k}, g(t))$ converges to $(\mathbb{P}^{n-1}, a_T g_{\emph{FS}})$ in the Gromov-Hausdorff sense as $t \rightarrow T$.
\item[(iii)] If $a_0(n+k) < b_0(n-k)$ then the flow (\ref{KRF0}) exists on $[0,T)$ with $T=a_0/(n-k)$. On compact subsets of $M_{n,k} \setminus D_0$, $g(t)$ converges smoothly to a K\"ahler metric $g_T$. If
$(\overline{M}, d_T)$ denotes the metric completion of $(M_{n,k} \setminus D_0, g_T)$, then
$(M_{n,k},g(t))$ converges to $(\overline{M}, d_T)$ in the Gromov-Hausdorff sense as $t \rightarrow T$. $(\overline{M}, d_T)$ has finite diameter and is homeomorphic to the orbifold $\mathbb{P}^{n}/\mathbb{Z}_k$ (see Section \ref{orbifold}).
\end{enumerate}
\end{enumerate}
\end{theorem}
We also prove an analog of Theorem \ref{thm2} in higher dimensions (see Theorem \ref{thmconv} below).
Since Theorem \ref{thm3} includes Theorem \ref{thm1} as a special case, we will prove all of our results in this paper in the general setting of complex dimension $n$. We will often write $M$ for $M_{n,k}$.
Finally, we mention some known results about K\"ahler-Ricci solitons on these manifolds. For $ 1\leq k \leq n-1$, the manifold $M_{n,k}$ has positive first Chern class and admits a K\"ahler-Ricci soliton \cite{Koi, Cao2}. In addition, K\"ahler-Ricci solitons have been constructed on the orbifolds $\mathbb{P}^{n}/\mathbb{Z}_k$ for $2\leq k\leq n-1$ \cite{FIK} and, in the noncompact case, on line bundles over $\mathbb{CP}^{n-1}$ \cite{Cao2, FIK}. The limiting behavior of such solitons is studied and used in \cite{FIK} to construct examples of extending the Ricci flow through singularities.
Theorem \ref{thm3} shows in particular that if the initial K\"ahler class is not proportional to $c_1(M_{n,k})$, the K\"ahler-Ricci flow on $M_{n,k}$ ($1\leq k \leq n$) will not converge to a K\"ahler-Ricci soliton on the same manifold after normalization.
The outline of the paper is as follows. In Section \ref{background}, we describe some background material including the details of the Calabi ansatz.
In Section \ref{genestimates} we prove some estimates for the K\"ahler-Ricci flow on the manifolds $M_k$ which hold without any symmetry condition. We then impose the Calabi symmetry assumption in Section \ref{sectioncalabi} to give stronger estimates, and in particular, we prove
Theorem \ref{thm2} (cf. Theorem \ref{thmconv}). In Section \ref{sectionGH} we give proofs of the Gromov-Hausdorff convergence of the flow, thus establishing Theorems \ref{thm1} and \ref{thm3}.
\section{Background} \label{background}
\subsection{The anti-canonical bundle}
Let $M=M_{n,k}$ be the manifold given by (\ref{Mnk}). The anti-canonical bundle $K_M^{-1}$ of $M$ can be described as follows. If $\pi : M \rightarrow \mathbb{P}^{n-1}$ is the bundle map then write $D_H = \pi^{-1} (H_{n-1})$ where $H_{n-1}$ is a fixed hyperplane in $\mathbb{P}^{n-1}$.
Then the anti-canonical line bundle $K^{-1}_{M}$ is given by
\begin{equation} \label{canonical}
K^{-1}_{M} = 2[D_{\infty}] - (k-n) [D_H] = \frac{(k+n)}{k} [D_{\infty}] + \frac{(k-n)}{k} [D_0],
\end{equation}
and we have
\begin{equation}
k [D_H] = [D_{\infty}] - [D_0].
\end{equation}
\subsection{The Calabi ansatz} \label{calabi}
We now briefly describe the ansatz of \cite{Cal} following, for the most part, Calabi's exposition. We use coordinates $(x_1, \ldots, x_n)$ on $\mathbb{C}^n\setminus \{ 0 \}$. Then the manifold $\mathbb{P}^{n-1}= (\mathbb{C}^n\setminus \{0\})/\mathbb{C}^*$ can be described by $n$ coordinate charts $U_{1}, \ldots, U_n$, where $U_{i}$ is characterized by $x_{i} \neq 0$. For a fixed $i$, the holomorphic coordinates $z^{j}_{(i)}$ on $U_{i}$, for $1 \le j \le n$, $j \neq i$ are given by $z^{j}_{(i)} = x_{j}/x_{i}$. Then $M=M_{n,k}$ can be defined as the $\mathbb{P}^1$ bundle over $\mathbb{P}^{n-1}$ with a projective fiber coordinate $y_{(i)}$ on $\pi^{-1}(U_{i})$ which transforms by
\begin{equation}
y_{(\ell)} = \left( \frac{x_{\ell}}{x_{i}} \right)^k y_{(i)}, \quad \textrm{on } \ \pi^{-1}(U_{i} \cap U_{\ell}),
\end{equation}
where $\pi: M \rightarrow \mathbb{P}^{n-1}$ denotes the bundle map.
Then the divisors $D_0$ and $D_{\infty}$ are given by $y_{(i)}=0$ and $y_{(i)}= \infty$ respectively. We parametrize $M \setminus (D_0 \cup D_{\infty})$ by a $k$-to-one map $\mathbb{C}^n \setminus \{ 0 \} \rightarrow M \setminus (D_0 \cup D_{\infty})$ described as follows. The point $(x_1, \ldots, x_n)$, with $x_{i}\neq 0$ say, maps to the point in $(M \setminus (D_0 \cup D_{\infty})) \cap \pi^{-1}(U_{i})$ with coordinates $z^{j}_{(i)}= x_{j}/x_{i}$, $y_{(i)} = x_{i}^k$, for $j \neq i$.
It is shown in \cite{Cal} that the group $G_k \cong U(n)/\mathbb{Z}_k$ is a maximal compact subgroup of the automorphisms of $M$ via the natural action on $\mathbb{C}^n\setminus \{0\}$. Moreover, any K\"ahler metric $g_{i \ov{j}}$ on $M$ which is invariant under $G_k$ is described on $\mathbb{C}^n \setminus \{ 0 \}$ as $g_{i \ov{j}} = \partial_i \partial_{\ov{j}} u$ for a potential function $u= u (\rho)$, where
\begin{equation} \label{rho}
\rho = \log \left( \sum_{i=1}^n |x_i|^2 \right).
\end{equation}
The potential function $u$ has to satisfy certain properties in order to define a K\"ahler metric. Namely,
a K\"ahler metric $g_{i \ov{j}}$ with K\"ahler form $\omega = \frac{\sqrt{-1}}{2\pi} g_{i \ov{j}}dz^i \wedge d\ov{z^j}$ on $M$ in the class (see (\ref{alpha}))
\begin{equation} \label{alpha2}
\alpha = \frac{b}{k} [D_{\infty}] - \frac{a}{k} [D_0]
\end{equation}
is given by the potential function $u: \mathbb{R} \rightarrow \mathbb{R}$ with $u'>0$, $u''>0$ together with the following asymptotic condition. There exist smooth functions $u_0, u_{\infty}: [0,\infty) \rightarrow \mathbb{R}$ with $u'_0(0)>0$, $u_{\infty}'(0)>0$ such that
\begin{equation} \label{u0}
u_0(e^{k\rho}) = u(\rho) -a\rho, \quad u_{\infty}(e^{-k\rho}) = u(\rho)-b\rho,
\end{equation}
for all $\rho \in \mathbb{R}$.
It follows that
$$ \lim_{\rho \rightarrow - \infty} u'(\rho) = a < b= \lim_{\rho \rightarrow \infty} u'(\rho).$$
Note that the divisor $D_0$ corresponds to $\rho = - \infty$ while $D_{\infty}$ corresponds to $\rho=\infty$.
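These limits follow from a one-line check (recorded here for the reader's convenience): differentiating (\ref{u0}) gives
\[
u'(\rho) = a + k e^{k\rho}\, u_0'(e^{k\rho}) \to a \quad (\rho \to -\infty), \qquad
u'(\rho) = b - k e^{-k\rho}\, u_{\infty}'(e^{-k\rho}) \to b \quad (\rho \to \infty),
\]
using that $u_0'$ and $u_{\infty}'$ are continuous at $0$.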
The K\"ahler metric $g_{i \ov{j}}$ associated to $u$ is given in the $x_i$ coordinates by
\begin{equation} \label{metric}
g_{i \ov{j}} = \partial_i \partial_{\ov{j}} u = e^{-\rho} u'(\rho) \delta_{i j} + e^{-2\rho} \ov{x}_i x_j (u''(\rho) - u'(\rho)).
\end{equation}
Conversely, a K\"ahler metric $g$ determines the function $u$ up to the addition of a constant.
The metric $g$ has determinant
\begin{equation} \label{det}
\det g = e^{-n \rho} (u'(\rho))^{n-1} u''(\rho).
\end{equation}
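One way to verify (\ref{det}) is the following standard linear-algebra computation (a sketch). Since $\sum_j |x_j|^2 = e^{\rho}$, the vector $\ov{x} = (\ov{x}_1, \ldots, \ov{x}_n)$ is an eigenvector of the Hermitian matrix (\ref{metric}):
\[
\sum_j g_{i \ov{j}}\, \ov{x}_j = e^{-\rho} u'(\rho)\, \ov{x}_i + e^{-2\rho}\, \ov{x}_i\, e^{\rho} \big( u''(\rho) - u'(\rho) \big) = e^{-\rho} u''(\rho)\, \ov{x}_i,
\]
while every vector orthogonal to $\ov{x}$ is an eigenvector with eigenvalue $e^{-\rho} u'(\rho)$. Multiplying the $n$ eigenvalues gives $\det g = \big(e^{-\rho} u'(\rho)\big)^{n-1}\, e^{-\rho} u''(\rho) = e^{-n\rho} (u'(\rho))^{n-1} u''(\rho)$, as claimed.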
Thus, if we define
\begin{equation} \label{Riccipotential}
v = - \log \det g = n \rho - (n-1) \log u'(\rho) - \log u''(\rho)
\end{equation}
then the Ricci curvature tensor $R_{i \ov{j}} = \partial_i \partial_{\ov{j}} v$ is given by
\begin{equation} \label{Ricci}
R_{i \ov{j}} = e^{-\rho} v'(\rho) \delta_{ij} + e^{-2\rho} \ov{x}_i x_{j} (v''(\rho) - v'(\rho)).
\end{equation}
Finally, we construct a reference metric $\hat{\omega}$ in the class $\alpha$ given by (\ref{alpha2}). Define a potential function $\hat{u}$ by
\begin{equation} \label{hatu}
\hat{u}(\rho) = a \rho + \frac{(b-a)}{k} \log (e^{k\rho}+1).
\end{equation}
Then one can check by the above definition that the associated K\"ahler form $\hat{\omega}$ lies in the class $\alpha$. We observe in addition that $\hat{\omega}$ can be decomposed into a sum of nonnegative (1,1) forms.
The smooth (1,1) form $\chi = \pi^*\omega_{\textrm{FS}} \in [D_H]$
is represented by the straight line function $u_{\chi}$:
\begin{equation} \label{eqnchi}
u_{\chi}(\rho) = \rho, \quad \chi = \frac{\sqrt{-1}}{2\pi} \partial \ov{\partial} u_{\chi}.
\end{equation}
In addition,
let $u_{\theta}$ and $\theta$ be respectively the potential and associated closed (1,1)-form defined by
\begin{equation} \label{eqntheta}
u_{\theta} = 2 \log(e^{k\rho}+1), \quad \theta = \frac{\sqrt{-1}}{2\pi} \partial \ov{\partial} u_{\theta}.
\end{equation}
The form $\theta$ lies in the cohomology class $2[D_{\infty}]$. Moreover,
\begin{equation}
\hat{u} = a u_{\chi} + \frac{(b-a)}{2k} u_{\theta}, \quad \textrm{and} \quad
\hat{\omega}= a \chi + \frac{(b-a)}{2k} \theta \in \alpha.
\end{equation}
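For completeness, we record the straightforward check that $\hat{u}$ satisfies the conditions above:
\[
\hat{u}'(\rho) = a + (b-a)\frac{e^{k\rho}}{e^{k\rho}+1} \in (a,b), \qquad
\hat{u}''(\rho) = k(b-a)\frac{e^{k\rho}}{(e^{k\rho}+1)^2} > 0,
\]
and the asymptotic condition (\ref{u0}) holds with $\hat{u}_0(s) = \hat{u}_{\infty}(s) = \frac{(b-a)}{k}\log(1+s)$, since $\hat{u}(\rho) - a\rho = \frac{(b-a)}{k}\log(1+e^{k\rho})$ and $\hat{u}(\rho) - b\rho = \frac{(b-a)}{k}\log(1+e^{-k\rho})$. In particular, the associated form $\hat{\omega}$ is indeed a K\"ahler metric in the class $\alpha$.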
The following lemma will be useful later.
\begin{lemma} \label{theta}
The smooth nonnegative closed (1,1) form $\theta$ in $2[D_{\infty}]$ given by (\ref{eqntheta}) satisfies
$$ \chi^{n-1} \wedge \theta >0 \quad \textrm{and} \quad \int_M \theta^n >0.$$
\end{lemma}
\begin{proof}
Note that, since $\chi^n = \pi^*(\omega_{\textrm{FS}}^n) = 0$, $$\chi^{n-1} \wedge \theta = \frac{2k}{b-a} \chi^{n-1} \wedge \hat{\omega} >0.$$ Also, from the construction of $\theta$ and the formula (\ref{det}), we have $\int_M \theta^n >0$.
\ensuremath{\Box}
\end{proof}
\subsection{The orbifold $\mathbb{P}^n/\mathbb{Z}_k$} \label{orbifold}
Using homogeneous coordinates $Z_1, \ldots, Z_{n+1}$, we let $\mathbb{P}^n/\mathbb{Z}_k$ be the weighted projective space invariant under
$$j \cdot [Z_1, \ldots, Z_{n+1}] = [e^{2\pi j \sqrt{-1}/k} Z_1, \ldots, e^{2\pi j \sqrt{-1}/k}Z_n, Z_{n+1}],$$
for $j=0,1,2, \ldots, k-1$.
The space $\mathbb{P}^n/\mathbb{Z}_k$ has a natural orbifold structure, branched over the point $[0,\ldots, 0,1]$. In addition, there is a holomorphic map $f: M_{n,k} \rightarrow \mathbb{P}^n/\mathbb{Z}_k$ given as follows. A point in $\pi^{-1}(U_i)$ with coordinates $z_{(i)}^j$ (for $j \neq i$) and fiber coordinate $y_{(i)}$ maps to the point in $\mathbb{P}^n/\mathbb{Z}_k$ with homogeneous coordinates $$[z_{(i)}^1, \ldots, z_{(i)}^{i-1}, 1, z_{(i)}^{i+1}, \ldots, z_{(i)}^n, y_{(i)}^{-1} ],$$
or $[0, \ldots, 0,1]$ if $y_{(i)}=0$.
The map $f$ is well-defined because of the group action. Note that the inverse image $f^{-1}([0,\ldots, 0,1])$ is the divisor $D_0$ and
\begin{equation}
f|_{M_{n,k}\setminus D_0} : M_{n,k}\setminus D_0 \rightarrow (\mathbb{P}^n/\mathbb{Z}_k) \setminus \{ [0,\ldots, 0,1] \}
\end{equation}
is an isomorphism. In the case $n=2$, $k=1$, the divisor $D_0$ is the exceptional curve on $\mathbb{P}^2$ blown up at one point and $f: M_{2,1} \rightarrow \mathbb{P}^2$ is the blow-down map.
Finally we note that there is a map $\mathbb{C}^n \rightarrow (\mathbb{P}^n/\mathbb{Z}_k)\setminus \{ Z_{n+1}=0\}$ given by
$$(x_1, \ldots, x_n) \mapsto [x_1, \ldots, x_n, 1].$$
This map is $k$-to-one on $\mathbb{C}^n \setminus \{ 0 \}$.
With respect to these coordinates (and the corresponding coordinates $(x_1, \ldots, x_n)$ on $M_{n,k}$ as described above) one can check that $f$ is the identity map on $\mathbb{C}^{n}\setminus \{0\}$.
\section{Estimates for the K\"ahler-Ricci flow} \label{genestimates}
In this section we prove some estimates for a solution $\omega(t)$ to the K\"ahler-Ricci flow (\ref{KRF0}) on $M=M_{n,k}$ that hold \emph{without} the assumption of Calabi symmetry. If the initial metric $\omega_0$ lies in the class $\alpha_0$ given by constants $0<a_0 <b_0$ then
the K\"ahler class $\alpha_t$ of $\omega(t)$ evolves by
\begin{equation}
\alpha_t = \frac{b_t}{k} [D_{\infty}] - \frac{a_t}{k} [D_0]= \frac{(b_t-a_t)}{k} [D_{\infty}] + a_t [D_H],
\end{equation}
where
\begin{equation} \label{btat}
b_t = b_0 - (k+n)t \quad \textrm{and} \quad a_t = a_0 + (k-n)t.
\end{equation}
Define
\begin{equation}
T = \sup\{ t\geq 0 \ | \ \alpha_t \textrm{ is K\"ahler} \}.
\end{equation}
Note that the K\"ahler metric $\hat{\omega}_t$ given by \betagin{equation} \label{ref1}
\hat{\omega}_t=a_t \chi + \frac{(b_t-a_t)}{2k} \theta
\end{equation}
for $t \in [0,T)$ and $\theta$ from Lemma \ref{theta} lies in $\alpha_t$.
We observe that the K\"ahler-Ricci flow exists on $[0,T)$.
\begin{theorem}
There exists a unique smooth solution of the K\"ahler-Ricci flow (\ref{KRF0}) on $M$ starting with $\omega_0 \in \alpha_0$ for $t$ in $[0, T)$.
\end{theorem}
\begin{proof}
This follows from a general and well-known result in the K\"ahler-Ricci flow. Indeed, let $X$ be any K\"ahler manifold and $\alpha_0$ be a K\"ahler class on $X$ with $\omega_0 \in \alpha_0$. If
$$T = \sup \{ t \ge 0 \ | \ \alpha_0 + t [K_X] >0 \},$$
then it is shown in \cite{Cao1}, \cite{Ts}, \cite{TZha} that there is a unique smooth solution $\omega=\omega(t)$ of the K\"ahler-Ricci flow (\ref{KRF0}) on $X$ starting at $\omega_0$, for $t$ in $[0,T)$.
\ensuremath{\Box}
\end{proof}
We now deal with the behavior of the flow as $t \rightarrow T$ in various different cases.
\subsection{The case $k \ge n$} \label{kgen}
In this case, the class $\alpha_t$ remains K\"ahler for $0< t < T$ where
$$T = \frac{b_0-a_0}{2k}.$$ As $t \rightarrow T$, the difference $b_t - a_t$ tends to zero, while the constant $a_t$ remains bounded below away from zero.
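To spell out this value of $T$, assume, as in the description of the K\"ahler cone of $M_{n,k}$ above, that $\alpha_t$ is K\"ahler precisely when $0<a_t<b_t$. By (\ref{btat}),
$$b_t - a_t = (b_0 - a_0) - 2kt, \qquad a_t = a_0 + (k-n)t \ge a_0 >0 \ \textrm{ for } k \ge n,$$
so the first inequality to fail is $a_t < b_t$, at time $t = (b_0-a_0)/2k$.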
The reference metric $\hat{\omega}_t$ in $\alpha_t$ is given by
\begin{equation} \label{ref2}
\hat{\omega}_t=a_t \chi + \frac{(b_t-a_t)}{2k} \theta =a_t \chi + (T-t) \theta \in \alpha_t.
\end{equation}
From (\ref{canonical}) we see that the closed (1,1) form $\theta - (k-n) \chi$ lies in the first Chern class $c_1(M)$. Hence
there is a smooth volume form $\Omega$ on $M$ such that $$ \ddbar\log \Omega = -\theta + (k-n)\chi.$$
We consider the parabolic Monge-Amp\`ere equation:
\begin{equation}\label{flow2}
\ddt{\varphi} = \log \frac{ (\hat{\omega}_t +\ddbar \varphi)^n}{(T-t) \Omega}, \qquad \varphi|_{t=0}= \varphi_0,
\end{equation}
where $\hat{\omega}_0 + \frac{\sqrt{-1}}{2\pi} \partial \ov{\partial} \varphi_0= \omega_0 \in \alpha_0$. If $\varphi=\varphi(t)$ solves (\ref{flow2}) then
$$\omega(t) = \hat{\omega}_t + \frac{\sqrt{-1}}{2\pi} \partial \ov{\partial} \varphi$$
solves the K\"ahler-Ricci flow (\ref{KRF0}).
Note that in the following two lemmas, we only use the assumption $k \ge n$ to obtain a uniform lower bound of the constant $a_t$ away from zero, for $t \in [0,T)$.
\begin{lemma} \label{lemmavolbound}
There exists a constant $C$ depending only on the initial data such that $$|\varphi(t)|\leq C, \quad \omega^n(t) \le C \Omega.$$
\end{lemma}
\begin{proof}
By Lemma \ref{theta}, since $$\hat{\omega}_t^n= (a_t \chi + (T-t)\theta )^n \ge n(T-t) a_t^{n-1} \chi^{n-1} \wedge \theta,$$
there exist constants $C_1, C_2>0$ independent of $t$ such that
\begin{equation} \label{eqnvolref}
C_1(T-t) \Omega \le \hat{\omega}_t^n \le C_2(T-t) \Omega.
\end{equation}
Note that we are making use of the fact that $a_t$ is bounded from below away from zero.
To obtain the upper bound of $\varphi$, consider the evolution of $\psi= \varphi - (1+\log C_2)t$. We claim that $\sup_{M \times [0,T)} \psi = \sup_M \psi|_{t=0}$. Otherwise there exists a point $(x,t) \in M \times (0,T)$ at which $\partial \psi/\partial t \ge 0$ and $\ddbar \psi \le 0$. Thus, at that point,
$$0\le \frac{\partial \psi}{\partial t} \le \log \frac{\hat{\omega}_t^n}{(T-t)\Omega} - 1 - \log C_2 \le -1,$$ a contradiction.
Hence $\sup_{M \times [0,T)} \psi = \sup_M \psi|_{t=0}$ and thus $\varphi$ is uniformly bounded from above.
A lower bound on $\varphi$ is obtained similarly.
For the upper bound of $\omega^n$, we will bound $H= \log \frac{\omega^n}{\Omega} - A \varphi$, where $A$ is a constant to be determined later. Writing $\textnormal{tr}_{\omega}{\omega'} = n\frac{ \omega^{n-1} \wedge \omega'}{\omega^n}$ where $\omega'$ is any $(1,1)$-form, we compute
\begin{eqnarray*}
\frac{\partial}{\partial t} H & = & \Delta \dot{\varphi} + \textnormal{tr}_{\omega} \left( {\ddt{} \hat{\omega}_t} \right) - A \dot{\varphi},
\end{eqnarray*}
where $\Delta$ denotes the Laplace operator associated to $g(t)$.
Since $\dot{\varphi} = H + A\varphi - \log (T-t)$ and $\theta \ge 0$, we have
\begin{eqnarray*}
\frac{\partial}{\partial t} H & = & \Delta H + A \Delta \varphi + (k-n) \textnormal{tr}_{\omega} \chi - \textnormal{tr}_{\omega} \theta - A(H+A \varphi - \log (T-t)) \\
& \le & \Delta H + An - A a_t \textnormal{tr}_{\omega} \chi + (k-n) \textnormal{tr}_{\omega} \chi - AH - A^2 \varphi + A \log T.
\end{eqnarray*}
Choosing $A$ sufficiently large so that $A a_t \ge (k-n)$ and using the fact that $\varphi$ is uniformly bounded, we see that $H$ is bounded from above by the maximum principle.
\ensuremath{\Box}
\end{proof}
We have, in addition, the following estimate.
\begin{lemma} \label{trchi} There exists a uniform constant $C>0$ such that
$$\textnormal{tr}_{\omega} \chi = n\frac{\omega^{n-1} \wedge \chi}{\omega^n} \leq C.$$
\end{lemma}
\begin{proof} This is a `parabolic Schwarz lemma' similar to the one given in \cite{SoT1}. We use the
maximum principle.
Let $\omega_{\textrm{FS}} = \frac{\sqrt{-1}}{2\pi} h_{\alpha \ov{\beta}} dz^{\alpha} \wedge d\ov{z}^{\beta}$ be the Fubini-Study metric on $\mathbb{P}^{n-1}$ and let $\pi: M \rightarrow \mathbb{P}^{n-1}$ be the bundle map.
We will calculate the evolution of
\begin{equation}
w=\textnormal{tr}_{g}(\pi^*h)=g^{i\overline{j}}\pi^{\alpha}_i
\pi^{\overline{\beta}}_{\overline{j}}h_{\alpha\overline{\beta}} = n \frac{\omega^{n-1} \wedge \chi}{\omega^n}.
\end{equation}
A standard computation shows that
\begin{eqnarray}\label{sch} \nonumber
\Delta w &=& g^{k\overline{l}}\partial_k
\partial_{\overline{l}} \left( g^{i\overline{j}}\pi^{\alpha}_i
\pi^{\overline{\beta}}_{\overline{j}}h_{\alpha\overline{\beta}} \right)\\
&=&g^{i\overline{l}}g^{k\overline{j}}R_{k\overline{l}}
\pi^{\alpha}_{i}
\pi^{\overline{\beta}}_{\overline{j}}h_{\alpha\overline{\beta}}+
g^{i\overline{j}}g^{k\overline{l}}\pi^{\alpha}_{i,k}\pi^{\overline{\beta}}_{\overline{j},\overline{l}}h_{\alpha\overline{\beta}}
-g^{i\overline{j}}g^{k\overline{l}}S_{\alpha\overline{\beta}
\gamma\overline{\delta}} \pi^{\alpha}_i
\pi^{\overline{\beta}}_{\overline{j}}\pi^{\gamma}_k
\pi^{\overline{\delta}}_{\overline{l}},
\end{eqnarray}
where $S_{\alpha\overline{\beta} \gamma\overline{\delta}}$ is the
curvature tensor of $h_{\alpha\bar{\beta}}$.
By the definition of $w$ we have
\begin{eqnarray} \label{deltaw}
\Delta w \ge g^{i\overline{l}}g^{k\overline{j}}R_{k\overline{l}}
\pi^{\alpha}_i
\pi^{\overline{\beta}}_{\overline{j}}h_{\alpha\overline{\beta}} +
g^{i\overline{j}}g^{k\overline{l}}\pi^{\alpha}_{i,k}\pi^{\overline{\beta}}_{\overline{j},\overline{l}}h_{\alpha\overline{\beta}}
- 2w^2.
\end{eqnarray}
Now
\begin{eqnarray} \nonumber
\ddt{w}&=&-g^{i\overline{l}}g^{k\overline{j}}\ddt{g_{k\overline{l}}}
\pi^{\alpha}_i
\pi^{\overline{\beta}}_{\overline{j}} h_{\alpha\overline{\beta}}\\ \label{dwdt}
&=&g^{i\overline{l}}g^{k\overline{j}}R_{k\overline{l}}
\pi^{\alpha}_i
\pi^{\overline{\beta}}_{\overline{j}}h_{\alpha\overline{\beta}}.
\end{eqnarray}
Combining (\ref{deltaw}) and (\ref{dwdt}),
we have
\begin{equation}
(\ddt{} - \Delta) w \le - g^{i\overline{j}}g^{k\overline{l}}\pi^{\alpha}_{i,k}\pi^{\overline{\beta}}_{\overline{j},\overline{l}}h_{\alpha\overline{\beta}}+ 2w^2.
\end{equation}
On the other hand, by a standard argument (see \cite{Y1} for example)
\begin{equation}
\frac{|\nabla w|^2}{w} \le g^{i\overline{j}}g^{k\overline{l}}\pi^{\alpha}_{i,k}\pi^{\overline{\beta}}_{\overline{j},\overline{l}}h_{\alpha\overline{\beta}}
\end{equation}
and thus
\begin{equation} \label{logw}
(\ddt{}- \Delta) \log w \leq 2w.
\end{equation}
We now compute the evolution of the quantity $ L = \log w - A \varphi$, where $A$ is a constant to be determined later. From the arithmetic-geometric means inequality $$\frac{ \lambda_1 + \cdots + \lambda_n}{n} \ge ( \lambda_1 \cdots \lambda_n)^{1/n} , \quad \textrm{for } \ \lambda_1, \ldots, \lambda_n \ge 0,$$
and (\ref{eqnvolref})
we have,
\begin{eqnarray} \label{gam}
\textnormal{tr}_{\omega} \hat{\omega}_t \ge n
\left( \frac{\hat{\omega}_t^n}{\omega^n} \right)^{1/n}
\ge c \left( \frac{(T-t) \Omega}{\omega^n}\right)^{1/n},
\end{eqnarray}
for a uniform constant $c>0$. Then, using the inequality $\textnormal{tr}_{\omega} \hat{\omega}_t \ge a_t w$ together with (\ref{logw}) and (\ref{gam}),
\begin{eqnarray*}
\left( \ddt{}-\Delta\right) L
&\leq& 2w -A \log \left( \frac{\omega^n}{(T-t)\Omega} \right) + An - A \textnormal{tr}_{\omega} \hat{\omega}_t \\
&\leq& -w + A \log \left( \frac{(T-t)\Omega}{\omega^n} \right) + An - c \left( \frac{(T-t) \Omega}{\omega^n}\right)^{1/n},
\end{eqnarray*}
where we have chosen $A$ sufficiently large so that $(A-1)a_t \ge 3$.
Note that the function $\mu \mapsto A\log \mu - c\mu^{1/n}$ for $\mu >0$ is uniformly bounded from above.
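Indeed, elementary calculus shows that the function $f(\mu) = A \log \mu - c \mu^{1/n}$ satisfies $f'(\mu_*) = 0$ exactly at $\mu_* = (nA/c)^n$, tends to $-\infty$ as $\mu \rightarrow 0^+$ and as $\mu \rightarrow \infty$, and hence
$$\sup_{\mu>0} \left( A \log \mu - c \mu^{1/n} \right) = nA \left( \log \frac{nA}{c} - 1 \right) < \infty.$$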
Hence if the maximum of $L$ is achieved at a point $(x_0, t_0) \in M \times (0,T)$ then, at that point, $w$ is uniformly bounded from above.
Since we have already shown in Lemma \ref{lemmavolbound} that $\varphi$ is uniformly bounded along the flow, the required upper bound of $w$ follows by the maximum principle.
\ensuremath{\Box}
\end{proof}
\subsection{The case $1 \le k \le n-1$} \label{sectionKRk1}
There are three distinct types of behavior here which depend on the choice of initial K\"ahler class $\alpha_0$. We deal with each in turn.
\subsubsection{The subcase $a_0(n+k) = b_0(n-k)$.} \label{subcaseperelman}
In this case $\alpha_0 = (a_0/(n-k))c_1(M)$ and the class $\alpha_t = (a_0/(n-k)-t) c_1(M)$ is proportional to the first Chern class. After renormalizing, this is the K\"ahler-Ricci flow (\ref{KRF1}) with $\lambda=1$ on a manifold with $c_1(M)>0$. It is shown in \cite{FIK} that $M$ admits a K\"ahler-Ricci soliton and we refer the reader to the results of \cite{TZhu} and also \cite{PSSW3}. For our purposes, we only need the fact that the diameter of the metric is bounded along the normalized K\"ahler-Ricci flow \cite{P1}, \cite{SeT}.
\subsubsection{The subcase $a_0(n+k) > b_0(n-k)$.}
In this case the K\"ahler class $\alphapha_t$ remains K\"ahler until time $T= (b_0-a_0)/2k$. We have $\lim_{t \rightarrow T} b_t = \lim_{t \rightarrow T} a_t>0$ as in the case $k\ge n$. Lemmas \ref{lemmavolbound} and \ref{trchi} hold with the same proofs in this case since, as pointed out in Section \ref{kgen}, we only used there the fact that $a_t$ is uniformly bounded from below away from zero.
\subsubsection{The subcase $a_0(n+k) < b_0(n-k)$.}
In this case the K\"ahler class $\alphapha_t$ remains K\"ahler until time $T=a_0/(n-k)$. As $t \rightarrow T$, $a_t$ tends to zero while $b_t$ remains bounded below away from zero.
The metrics $\hat{\omega}_t$ and classes $\alphapha_t$ satisfy the following properties for all $t \in [0,T)$:
\begin{enumerate}
\item[(1)]
The limit $\displaystyle{\hat{\omega}_T = \lim_{t \rightarrow T} \hat{\omega}_t}$ is a smooth closed nonnegative (1,1) form satisfying
$\int_M \hat{\omega}_T^n >0.$
\item[(2)] For all $\varepsilon>0$ sufficiently small (independent of $t$), the class $\alpha_t - \varepsilon [D_0]$ is K\"ahler.
\end{enumerate}
Indeed, (1) follows from Lemma \ref{theta} and (2) is immediate from the definition of $\alphapha_t$.
We will use these properties to prove the following (cf. \cite{Ts}, \cite{TZha}, \cite{Zha}).
\begin{theorem} \label{theoremTZ} Let $\omega(t)$ be a solution of the K\"ahler-Ricci flow on $M$ starting at $\omega_0$ in $\alpha_0$ with $a_0(n+k) < b_0(n-k)$. Then:
\begin{enumerate}
\item[(i)] If $\Omega$ is a fixed volume form on $M$ then there exists a uniform constant $C$ such that
$$\omega^n(t) \le C\Omega, \quad \emph{for all } t \in [0,T).$$
\item[(ii)]
There exists a closed semi-positive (1,1) current $\omega_T$ on $M$ which is smooth outside the exceptional divisor $D_0$ and has an $L^\infty$-bounded local potential such that the following holds. The metric $\omega(t)$ along the K\"ahler-Ricci flow converges in $C^{\infty}$ on compact subsets of $M\setminus D_0$ to $\omega_T$ as $t \rightarrow T$.
\end{enumerate}
\end{theorem}
\begin{proof} This result is essentially contained in \cite{TZha}, \cite{Zha}, but we will include an outline of the proof for the reader's convenience. First, define a closed (1,1) form $\eta$ by
$$\eta = \ddt{} \hat{\omega}_t = (k-n) \chi - \theta,$$
so that the reference metric $\hat{\omega}_t$ is given by
$\hat{\omega}_t = \hat{\omega}_0 + t \eta$. Since $-\eta$ is in $c_1(M)$ there exists a volume form $\Omega$ on $M$ with $\ddbar\log \Omega = \eta$.
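To check the identity $\hat{\omega}_t = \hat{\omega}_0 + t\eta$, differentiate (\ref{ref1}) using (\ref{btat}):
$$\ddt{} \hat{\omega}_t = (k-n)\chi + \frac{-(k+n)-(k-n)}{2k}\, \theta = (k-n)\chi - \theta = \eta,$$
and $\eta$ is independent of $t$, so integrating in $t$ gives the claim.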
Let $\varphi=\varphi(t)$ be a solution of the parabolic Monge-Amp\`ere equation
\begin{equation} \label{pma}
\ddt{\varphi} = \log \frac{ (\hat{\omega}_t + \frac{\sqrt{-1}}{2\pi} \partial\ov{\partial} \varphi)^n}{\Omega},
\end{equation}
with $\varphi|_{t=0} = 0$, for $t \in [0,T)$. Then $\omega(t) = \hat{\omega}_t +\frac{\sqrt{-1}}{2\pi} \partial\ov{\partial} \varphi$ solves the K\"ahler-Ricci flow (\ref{KRF0}). We will first bound $\varphi$.
Notice that $\varphi$ is uniformly bounded from above by a simple maximum principle argument. To obtain a uniform $L^{\infty}$ bound on $\varphi$ we will show that $\dot{\varphi}$ is uniformly bounded from above for $t \in [T/2, T)$.
Compute
\begin{equation}
(\ddt{} - \Delta) \dot{\varphi} = \textrm{tr}_{\omega} \eta.
\end{equation}
Then
\begin{equation}
(\ddt{} - \Delta)(t \dot{\varphi}- \varphi -nt) = - \textrm{tr}_{\omega} \hat{\omega}_0 \le 0,
\end{equation}
using the fact that $\Delta \varphi = n- \textrm{tr}_{\omega}(\hat{\omega}_0 + t \eta)$.
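In detail: combining $(\ddt{} - \Delta) \dot{\varphi} = \textrm{tr}_{\omega} \eta$ with this expression for $\Delta \varphi$,
\begin{eqnarray*}
(\ddt{} - \Delta)(t \dot{\varphi}- \varphi -nt) & = & \dot{\varphi} + t \, \textrm{tr}_{\omega} \eta - \dot{\varphi} + \Delta \varphi - n \\
& = & t \, \textrm{tr}_{\omega} \eta + n - \textrm{tr}_{\omega} \hat{\omega}_0 - t \, \textrm{tr}_{\omega} \eta - n = - \textrm{tr}_{\omega} \hat{\omega}_0.
\end{eqnarray*}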
It follows from the maximum principle that $t \dot{\varphi}$ is uniformly bounded from above. Hence $\dot{\varphi}$ is uniformly bounded from above for $t$ in $[T/2,T)$.
Rewrite (\ref{pma}) as
\begin{equation} \label{ma}
(\hat{\omega}_t + \ddbar \varphi)^n = e^{\dot{\varphi}} \Omega.
\end{equation}
By property (1) above, the K\"ahler metric $\hat{\omega}_t$ satisfies $\int_M \hat{\omega}_t^n >c>0$ for a uniform constant $c$ for all $t \in [T/2, T)$ and the limit $\hat{\omega}_T$ is a smooth nonnegative (1,1) form.
Hence we can apply the results of \cite{Kol}, \cite{Zha}, \cite{EGZ} on the complex Monge-Amp\`ere equation to obtain an $L^{\infty}$ bound on $\varphi$ which is uniform in $t$. Note that part (i) of the Theorem follows from the upper bound of $\dot{\varphi}$. For (ii), we use property (2) of $\alpha_t$ as listed above. The argument of Tsuji \cite{Ts} (cf. \cite{Y1}) gives second order estimates for $\varphi$, depending on the $L^{\infty}$ estimate, on all compact sets of $M \setminus D_0$. The rest of the theorem follows by standard theory.
\ensuremath{\Box}
\end{proof}
\section{Estimates under the Calabi symmetry condition} \label{sectioncalabi}
In this section we assume that the initial metric $\omega_0$ satisfies the symmetry condition of Calabi. We prove estimates for the solution of the K\"ahler-Ricci flow and in particular we give a proof of Theorem \ref{thm2} (see Theorem \ref{gest} below).
Let $\omega(t)$ be a solution of the K\"ahler-Ricci flow (\ref{KRF0}) on a time interval $[0,T)$. The K\"ahler-Ricci flow can be described in terms of the potential $u = u(\rho,t)$. Noting that $\omega$ determines $u$ only up to the addition of a constant, we consider $u=u(\rho, t)$ solving
\begin{equation}\label{uKRF}
\ddt{} u(\rho, t) = \log u''(\rho,t) + (n-1) \log u'(\rho,t) -n\rho + c_t,
\end{equation}
where
\begin{equation} \label{ct}
c_t = - \log u''(0,t) - (n-1) \log u'(0,t).
\end{equation}
The notation $u'(\rho,t)$ denotes the partial derivative $(\partial u/\partial \rho) (\rho,t)$ and similarly for $u''(\rho,t)$.
We assume that $\rho \mapsto u(\rho,0)$ represents the initial K\"ahler metric $g_0$ and we impose the further normalization condition that
\begin{equation} \label{ic}
u(0,0)=0.
\end{equation}
Then by (\ref{Riccipotential}), the solution $g=g(t)$ of (\ref{KRF0}) can be written as $g_{i \ov{j}} = \partial_i \partial_{\ov{j}} u$.
The constant $c_t$ is chosen so that $\ddt{} u(0,t)=0$ and
thus $u(0,t)=0$ on $[0,T)$. The existence of a unique solution of the K\"ahler-Ricci flow on $[0,T)$ and the parabolicity of (\ref{uKRF}) ensure the existence of a unique smooth $u(\rho,t)$ solving (\ref{uKRF}).
The K\"ahler metric at time $t$ lies in the cohomology class $$\alphapha_t = \frac{b_t}{k}[D_{\infty}] - \frac{a_t}{k} [D_0]$$ along the flow, where $a_t$ and $b_t$ are given by (\ref{btat}). We have
$$\lim_{\rho \rightarrow -\infty} u'(\rho,t) = a_t, \quad \lim_{\rho \rightarrow \infty} u'(\rho, t) = b_t$$
and by convexity
$$a_t < u'(\rho, t) < b_t, \quad \textrm{for all } \rho \in \mathbb{R}.$$
Next, the evolution equations for $u'$, $u''$ and $u'''$ are given by
\begin{eqnarray} \label{upevolution}
\ddt{} u' & = & \frac{u'''}{u''} + \frac{(n-1) u''}{u'} -n \\ \label{udpevolution}
\ddt{} u'' & = & \frac{u^{(4)}}{u''} - \frac{(u''')^2}{(u'')^2} + \frac{(n-1)u'''}{u'} - \frac{(n-1) (u'')^2}{(u')^2} \\ \nonumber
\ddt{} u''' & = & \frac{u^{(5)}}{u''} - \frac{3u''' u^{(4)}}{(u'')^2} + \frac{2(u''')^3}{(u'')^3}+ \frac{(n-1) u^{(4)}}{u'} \\ \label{utpevolution}
&& \mbox{} - \frac{3(n-1) u'' u'''}{(u')^2} + \frac{2(n-1) (u'')^3}{(u')^3},
\end{eqnarray}
as can be seen from differentiating (\ref{uKRF}).
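To spell out the first of these, differentiate (\ref{uKRF}) in $\rho$, noting that $c_t$ is independent of $\rho$:
$$\ddt{} u' = \partial_{\rho} \left( \log u'' + (n-1) \log u' - n \rho + c_t \right) = \frac{u'''}{u''} + \frac{(n-1)u''}{u'} - n,$$
which is (\ref{upevolution}); differentiating once and twice more in $\rho$ yields (\ref{udpevolution}) and (\ref{utpevolution}).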
For the rest of this section we assume that $u=u(\rho,t)$ solves the K\"ahler-Ricci flow (\ref{uKRF}) with (\ref{ct}) and (\ref{ic}).
\subsection{The case $k \ge n$} \label{calabisubsectionk2}
The following elementary lemma shows that as $t \rightarrow T=(b_0-a_0)/2k$, the potential $u$ converges pointwise to the function $a_T u_{\chi}$.
\begin{lemma} \label{lemmapointwise}
The function $u=u(\rho,t)$ satisfies, for all $\rho$ in $\mathbb{R}$,
\begin{enumerate}
\item[(i)] $\displaystyle{0 < u'(\rho,t) - a_t < 2k(T-t)},$ for all $t \in [0,T)$;
\item[(ii)] $\displaystyle{\lim_{t \rightarrow T} (u(\rho,t)- a_T \rho) =0}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) follows immediately from the convexity of $u$ and the definition of $a_t$ and $b_t$. For (ii), recall that $u(0,t)=0$ for all $t \in [0,T)$ and so
$$u(\rho,t)-a_t \rho = \int_0^{\rho} (u'(s,t) - a_t)\, ds.$$
Applying (i),
$$\left| u(\rho,t)-a_t \rho \right| \le 2k(T-t) | \rho | \longrightarrow 0,$$
as $t \rightarrow T$, while $a_t \rightarrow a_T$.
\ensuremath{\Box}
\end{proof}
As a simple application of this we prove a lower bound for the K\"ahler metric along the flow.
\begin{lemma} \label{lowerbd} Along the K\"ahler-Ricci flow, we have
$$\omega(t) \ge a_t \chi$$
for all $t$ in $[0,T)$.
\end{lemma}
\begin{proof}
With a slight abuse of notation, we write $\chi = \frac{\sqrt{-1}}{2\pi} \chi_{i \ov{j}} dz^i \wedge d\ov{z}^j$. Then by (\ref{metric}),
\begin{equation} \label{chi}
g_{i\bar{j}}(t) = e^{-\rho} u' \delta_{ij} + e^{-2\rho} \ov{x}_i x_j (u''-u'), \quad \chi_{i \ov{j}} = e^{-\rho} \delta_{ij} - e^{-2\rho} \ov{x}_i x_j.
\end{equation}
Since $u''>0$ and $u' >a_t$,
$$g_{i \ov{j}}(t) \ge u' e^{-\rho} \left( \delta_{ij} - \frac{\ov{x}_i x_j}{\sum_k |x_k|^2} \right) \ge a_t e^{-\rho} \left( \delta_{ij} - \frac{\ov{x}_i x_j}{\sum_k |x_k|^2} \right) = a_t \chi_{i \ov{j}},$$
as required.
\ensuremath{\Box}
\end{proof}
We have the following further estimates for $u$ which will give an upper bound for the metric $\omega(t)$.
\begin{lemma} \label{lemmaest1} There exists a constant $C$ depending only on the initial data such that
\begin{equation} \label{udp}
0<u''(\rho, t) \le C \min \left( \frac{e^{k \rho}}{(1+e^{k \rho})^2}, (T-t) \right)
\end{equation}
and
\begin{equation} \label{utp}
|u'''(\rho,t) | \le Cu''(\rho,t)
\end{equation}
for all $(\rho,t) \in \mathbb{R} \times [0,T)$.
\end{lemma}
\begin{proof}
We begin by establishing the bound
\begin{equation}\label{bd1}
u''(\rho, t) \le C \frac{e^{k \rho}}{(1+e^{k \rho})^2}.
\end{equation}
Let $\hat{g}_0$ be the reference metric with associated potential $\hat{u}_0$ (see (\ref{ref1})). Then from (\ref{det}) we see that
\begin{eqnarray} \nonumber
\deltat{\hat{g}_0} & = & e^{-n \rho} ( \hat{u}_0'(\rho))^{n-1} \hat{u}_0''(\rho) \\ \nonumber
& = & k (b_0-a_0) e^{- n \rho} \left( a_0 + (b_0-a_0) \frac{e^{k\rho}}{(1+ e^{k \rho})} \right)^{n-1} \frac{e^{k\rho}}{(1+e^{k\rho})^2} \\
& \le & C e^{-n \rho} \frac{e^{k\rho}}{(1+e^{k\rho})^2}. \label{bd2}
\end{eqnarray}
By Lemma \ref{lemmavolbound}
the volume form $\omega^n$ is uniformly bounded from above by a fixed volume form along the flow, and hence $\deltat g(t) \le C \deltat \hat{g}_0$ for some uniform constant $C$. Combining this fact with (\ref{det}) and (\ref{bd2}), we have
\begin{equation} \label{eqnvolformbd}
( u'(\rho,t) ) ^{n-1} u''(\rho,t) \le C\frac{e^{k\rho}}{(1+e^{k\rho})^2}.
\end{equation}
Then (\ref{bd1}) follows, since $u'(\rho)$ is uniformly bounded from below away from zero.
Next, we give a proof of the bound (\ref{utp}). We will apply the maximum principle to the quantity $u'''/u''$. Before we do this, observe that for each fixed $t \in [0,T)$,
\begin{equation}
\label{limits1}
\lim_{\rho \rightarrow - \infty} \frac{u'''(\rho,t)}{u''(\rho,t)} = k, \quad \lim_{\rho \rightarrow \infty} \frac{u'''(\rho,t)}{u''(\rho,t)} = -k.
\end{equation}
Indeed, for the first limit, we can compute $u'''/u''$ in terms of the function $s \mapsto u_0(s, t)$ associated to $u$. From (\ref{u0}),
\begin{eqnarray} \nonumber
u(\rho,t) & =& a_t \rho + u_0(e^{k\rho}, t) \\ \nonumber
u'(\rho,t) & = & a_t + k e^{k\rho} u'_0(e^{k\rho}, t) \\ \nonumber
u''(\rho,t) & = & k^2 e^{k\rho} u'_0(e^{k\rho}, t) + k^2e^{2k\rho} u''_0(e^{k\rho}, t) \\ \label{uppp}
u'''(\rho,t) & = & k^3 e^{k\rho} u'_0(e^{k\rho},t) + 3k^3 e^{2k \rho} u''_0(e^{k\rho},t) + k^3 e^{3k \rho} u_0'''(e^{k\rho},t).
\end{eqnarray}
Hence
\begin{eqnarray*}
\frac{u'''(\rho,t)}{u''(\rho,t)} & = & \frac{k u'_0(e^{k\rho},t) + 3k e^{k \rho} u''_0(e^{k\rho},t) + k e^{2k \rho} u_0'''(e^{k\rho},t)}{u'_0(e^{k\rho}, t) + e^{k\rho} u''_0(e^{k\rho},t)} \longrightarrow k, \ \textrm{as } \ \rho \rightarrow -\infty,
\end{eqnarray*}
since $u'_0(0, t) >0$. The second limit can be proved in a similar way using the function $u_{\infty}$.
Using (\ref{udpevolution}) and (\ref{utpevolution}) we find that $u'''/u''$ evolves by
\begin{eqnarray*}
\ddt{} \left( \frac{u'''}{u''} \right) & = & \frac{1}{u''} \left( \frac{u^{(5)}}{u''} - \frac{3u''' u^{(4)}}{(u'')^2} + \frac{2(u''')^3}{(u'')^3} + \frac{(n-1)u^{(4)}}{u'} - \frac{3(n-1)u''u'''}{(u')^2} + \frac{2(n-1)(u'')^3}{(u')^3} \right) \\
&& \mbox{} - \frac{u'''}{(u'')^2} \left( \frac{u^{(4)}}{u''} - \frac{(u''')^2}{(u'')^2} + \frac{(n-1)u'''}{u'} - \frac{(n-1)(u'')^2}{(u')^2} \right).
\end{eqnarray*}
To obtain an upper bound of $u'''/u''$ we argue as follows. From (\ref{limits1}) we may assume without loss of generality that the maximum of $u'''/u''$ occurs at an interior point $(\rho_1, t_1) \in \mathbb{R} \times (0,T)$. At that point,
$$\left( \frac{u'''}{u''} \right)' = \frac{u^{(4)}}{u''} - \frac{(u''')^2}{(u'')^2} = 0$$
and
$$\left( \frac{u'''}{u''} \right)'' = \frac{u^{(5)}}{u''} - \frac{3u'''u^{(4)}}{(u'')^2} + 2 \frac{(u''')^3}{(u'')^3} \le 0.$$
Then at $(\rho_1, t_1)$,
$$\ddt{} \left( \frac{u'''}{u''} \right) \le - \frac{2(n-1) u'''}{(u')^2} + \frac{2(n-1)(u'')^2}{(u')^3}$$
and hence
$$\left( \frac{u'''}{u''} \right) (\rho_1, t_1) \le \left(\frac{u''}{u'}\right)(\rho_1, t_1) \le C,$$
using (\ref{bd1}). This gives the upper bound for $u'''/u''$. The argument for the lower bound is similar. In fact, since $u''>0$ we obtain $u'''/u'' \ge -k$. This establishes (\ref{utp}).
It remains to prove the estimate $u'' \le C(T-t)$. Fix $t$ in $[0,T)$. Since the bound (\ref{bd1}) implies that $u''(\rho)$ tends to zero as $\rho$ tends to $\pm \infty$, there exists $\tilde{\rho} \in \mathbb{R}$ such that
$$u''(\tilde{\rho}) = \sup_{\rho \in \mathbb{R}} u''(\rho).$$
By the Mean Value Theorem and (\ref{utp}) we have, for all $\rho \in \mathbb{R}$,
$$u''(\tilde{\rho})- u''(\rho) \le C u''(\tilde{\rho}) |\rho - \tilde{\rho} |.$$
Then for $|\rho - \tilde{\rho}| \le 1/2C$,
$$u''(\rho) \ge \frac{u''(\tilde{\rho})}{2} ,$$
and hence
$$\frac{1}{2C} u''(\tilde{\rho})= \int_{ |\rho - \tilde{\rho}| \le 1/2C} \frac{u''(\tilde{\rho}) }{2}d\rho < \int_{-\infty}^{\infty} u''(\rho) d\rho = b_t - a_t =2k(T-t).$$
The bound $u'' \le C(T-t)$ then follows.
\ensuremath{\Box}
\end{proof}
We can now use these estimates on $u$ to obtain bounds on the K\"ahler metric $\omega(t)$ along the K\"ahler-Ricci flow. In the following, $\omega(t)$ will always denote a solution of the K\"ahler-Ricci flow (\ref{KRF0}) with initial metric $\omega_0$ satisfying the Calabi symmetry.
\begin{theorem} \label{gest}
We have
\begin{enumerate}
\item[(i)] $\displaystyle{\sup_M \textnormal{tr}_{\hat{g}_0}{g} \le C}.$
\item[(ii)] For any compact set $K \subset M \setminus (D_{\infty} \cup D_0)$,
$$\sup_K | \nabla_{\hat{g}_0} g |_{\hat{g}_0} \le C_K.$$
\end{enumerate}
\end{theorem}
\begin{proof}
An elementary computation shows that
$$\textnormal{tr}_{\hat{g}_0}{g} = \frac{u''}{\hat{u}_0''} + (n-1) \frac{u'}{\hat{u}_0'}.$$
But we have $u'/\hat{u}_0' \le b_0/a_0$ and, making use of
Lemma \ref{lemmaest1},
$$\frac{u''}{\hat{u}_0''} = \frac{(1+e^{k\rho})^2}{k(b_0-a_0) e^{k\rho}} u'' \le C ,$$
and this gives (i).
We now prove (ii). By (i) it suffices to bound
$$\frac{\partial}{\partial x_{k}} g_{i\ov{j}} = e^{-2\rho} (u''-u') (\ov{x}_k \delta_{ij} + \ov{x}_i \delta_{jk}) + e^{-3\rho} \ov{x}_i x_j \ov{x}_k (u'''-3u''+2u'),$$
in a given compact set $K \subset M \setminus (D_{\infty} \cup D_0)$.
But $u''$ and $u'''$ are uniformly bounded from above by Lemma \ref{lemmaest1}, and
in $K$, the functions $\rho$ and $x_i$ are uniformly bounded. This gives (ii).
\ensuremath{\Box}
\end{proof}
From Lemma \ref{lowerbd} and Theorem \ref{gest} we obtain the following immediate corollary.
\begin{corollary}
We have
$$\frac{1}{C} \le \emph{diam}_{g(t)} M \le C.$$
\end{corollary}
Moreover we prove:
\begin{theorem} \label{thmconv}
Define $\tilde{\varphi}= \tilde{\varphi}(t)$ by
\begin{equation} \label{phitilde}
\omega(t) = \hat{\omega}_t + \ddbar \tilde{\varphi}, \quad \tilde{\varphi}|_{\rho=0} = 0.
\end{equation}
Then for all $\beta$ with $0< \beta<1$,
\begin{enumerate}
\item[(i)] $\tilde{\varphi}$ tends to zero in $C^{1,\beta}_{\hat{g}_0}(M)$ as $t \rightarrow T$.
\item[(ii)] For any compact set $K \subset M \setminus (D_{\infty} \cup D_0)$, $\tilde{\varphi}$ tends to zero in $C^{2, \beta}_{\hat{g}_0}(K)$ as $t \rightarrow T$. In particular, on $K$, $\omega(t)$ converges to $a_T \chi$ in $C^{\beta}_{\hat{g}_0}(K)$ as $t \rightarrow T$.
\end{enumerate}
\end{theorem}
\begin{proof}
By the normalization of $\tilde{\varphi}$, we have $\tilde{\varphi}(t) = u(t) - \hat{u}_t$. As $t \rightarrow T$, $\hat{u}_t$ tends to $a_T u_{\chi}$. Then by Lemma \ref{lemmapointwise}, $\tilde{\varphi}$ converges pointwise to zero. Taking the trace of (\ref{phitilde}) with respect to $\hat{g}_0$, applying the first part of Theorem \ref{gest} and using the fact that $\textnormal{tr}_{\hat{g}_0}\hat{g}_t$ is bounded, we see that $\Delta_{\hat{g}_0} \tilde{\varphi}$ is uniformly bounded on $M$, giving (i). The second part of Theorem \ref{gest} gives (ii).
\ensuremath{\Box}
\end{proof}
Finally, we use the estimates of Lemma \ref{lemmaest1} together with the bound of Lemma \ref{trchi} to show that, away from the divisors $D_0$ and $D_{\infty}$, the fibers are collapsing.
\begin{theorem} \label{thmfiber} Let $\pi^{-1}(z)$ be the fiber of $\pi: M \rightarrow \mathbb{P}^{n-1}$ over the point $z\in \mathbb{P}^{n-1}$. Define $\omega_z(t) = \omega(t)|_{\pi^{-1}(z)}$. Then for any compact set $K \subset M \setminus (D_0 \cup D_{\infty})$, there exists a constant $C_K$ such that
\begin{equation} \label{fiberbd}
\sup_{z \in \mathbb{P}^{n-1}} \| \omega_z(t) \|_{C^0(\pi^{-1}(z) \cap K)} \le C_K(T-t).
\end{equation}
\end{theorem}
\begin{proof}
Fix a compact set $K \subset M \setminus (D_0 \cup D_{\infty})$. From (\ref{det}) we see that on $K$, the quantity $\omega^n/\Omega$ is uniformly equivalent to $u''$. Then
by Lemma \ref{lemmaest1}, there exists a constant $C_K$ such that
\begin{equation} \label{CK}
\omega^n(x) \le C_K(T-t) \Omega(x), \quad \textrm{for } x \in K.
\end{equation}
Now at a point $x \in K$, choose complex coordinates $z^1, \ldots, z^n$ so that $$\chi = \frac{\sqrt{-1}}{2\pi} \sum_{i=1}^{n-1} dz^i \wedge d\ov{z^i} \quad \textrm{and} \quad \omega = \frac{\sqrt{-1}}{2\pi} \sum_{i=1}^{n} \lambda_i dz^i \wedge d\ov{z^i},$$
for $\lambda_1, \ldots, \lambda_n>0$. The coordinate $z^n$ is in the fiber direction and we wish to obtain an upper bound for $\lambda_n$.
From Lemma \ref{trchi} we have
$$\sum_{i=1}^{n-1} \frac{1}{\lambda_i} \le C.$$
Hence $\lambda_1, \ldots, \lambda_{n-1}$ are uniformly bounded from below away from zero and we have, for uniform constants $C, C'$,
$$\lambda_n \le C \frac{1}{\lambda_1 \cdots \lambda_{n-1}} \frac{\omega^n(x)}{\Omega} \le C' \cdot C_K (T-t)$$
from (\ref{CK}). This proves (\ref{fiberbd}).
\ensuremath{\Box}
\end{proof}
As a corollary of this and Theorem \ref{gest}, the diameter of the fibers goes to zero as $t$ tends to $T$.
\begin{corollary} \label{corollaryfiber}
As above, let $\pi^{-1}(z)$ be the fiber of $\pi: M \rightarrow \mathbb{P}^{n-1}$ over the point $z\in \mathbb{P}^{n-1}$. Then
$$\lim_{t \rightarrow T} \left( \sup_{z \in \mathbb{P}^{n-1}} \emph{diam}_{g(t)} \,\pi^{-1}(z) \right)=0.$$
\end{corollary}
\begin{proof}
Fix $\varepsilon >0$. By Theorem \ref{gest} (i), there exists a tubular neighborhood $N_{\varepsilon}$ of $D_0 \cup D_{\infty}$ such that for all $z \in \mathbb{P}^{n-1}$ and all $t \in [0,T)$,
\begin{equation} \label{epsilon1}
\textrm{diam}_{g(t)} ( \pi^{-1}(z) \cap N_{\varepsilon} ) < \frac{\varepsilon}{2}.
\end{equation}
On the other hand, applying Theorem \ref{thmfiber} with $K = M \setminus N_{\varepsilon}$ we see that for $t$ sufficiently close to $T$,
\begin{equation} \label{epsilon2}
\textrm{diam}_{g(t)} ( \pi^{-1}(z) \cap K ) < \frac{\varepsilon}{2},
\end{equation}
for all $z \in \mathbb{P}^{n-1}$. Combining (\ref{epsilon1}) and (\ref{epsilon2}) completes the proof.
\ensuremath{\Box}
\end{proof}
\subsection{The case $1 \le k \le n-1$}
As in Section \ref{sectionKRk1}, there are three subcases. We do not require any further estimates when
$a_0(n+k) = b_0(n-k)$ and so we move on to the other two cases.
\subsubsection{The subcase $a_0(n+k) > b_0(n-k)$.}
We obtain all the results of subsection \ref{calabisubsectionk2} by identical proofs. The key point is that, in this case, $a_t$ is uniformly bounded from below away from zero.
\subsubsection{The subcase $a_0(n+k) < b_0(n-k)$.}
Recall that in this case, $a_t = a_0+ (k-n)t$ tends to zero as $t$ tends to the blow-up time $T$. Note that by part (ii) of Theorem \ref{theoremTZ} we already have $C^{\infty}$ estimates for the metric $g(t)$ on $M \setminus D_0$. In this subsection we will obtain estimates for the metric in a neighborhood of $D_0$. First we have the following estimate on $u'$.
\begin{lemma} \label{lemmak1up}
There exists a uniform constant $C$ such that for all $t \in [0,T)$ and all $\rho \in \mathbb{R}$,
$$ 0 < u'(\rho, t) - a_t \leq C e^{k\rho/n}.$$
\end{lemma}
\begin{proof} The first inequality follows from the definition of $a_t$ and the convexity of $u$. For the upper bound of $u'(\rho,t)-a_t$ we argue as follows.
By part (i) of Theorem \ref{theoremTZ} the volume form of $\omega(t)$ is uniformly bounded along the flow. Then by the same argument as in the proof of (\ref{eqnvolformbd}), we have
$$ ( u'(\rho,t) )^{n-1} u''(\rho,t) \leq C \frac{e^{k\rho}}{(1+ e^{k\rho})^2} \le C e^{k\rho}.$$
Hence
$$ ((u'(\rho,t))^n)'\leq Ce^{k\rho}.$$
Integrating in $\rho$ we obtain
$$ (u'(\rho,t))^n - a_t^n \leq C e^{k\rho} $$
and thus
$$ u'(\rho, t) \leq a_t + C e^{ \frac{k}{n} \rho},$$
as required.
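For clarity, the last step uses the elementary subadditivity of the $n$-th root:
```latex
% Elementary step: subadditivity of x -> x^{1/n} on [0, infinity).
$$ u'(\rho,t) \le \left( a_t^n + C e^{k\rho} \right)^{1/n}
   \le a_t + C^{1/n} e^{k\rho/n}, $$
using $(a+b)^{1/n} \le a^{1/n} + b^{1/n}$ for $a, b \ge 0$, and then
relabeling $C^{1/n}$ as $C$.
```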
\ensuremath{\Box}
\end{proof}
Note that the conclusion of Lemma \ref{lemmak1up} could be strengthened for $\rho>0$. However, in this subsection we need only concern ourselves with the case of negative $\rho$ since the metric is bounded away from $D_0$.
\begin{lemma} \label{lemmaudpk1} There exists a uniform constant $C$ such that
$$ u'' \leq C (u'-a_t) (b_t - u').$$
In particular,
$$u'' \leq C e^{k\rho/n }.$$
\end{lemma}
\begin{proof}
We evolve the quantity $H = \log u'' - \log (u'-a_t) - \log (b_t- u').$ Using (\ref{upevolution}) and (\ref{udpevolution}) we compute
\begin{eqnarray} \nonumber
\frac{\partial H}{\partial t} & = & \frac{1}{u''} \left( \frac{u^{(4)}}{u''} - \frac{(u''')^2}{(u'')^2} + \frac{(n-1)u'''}{u'} - \frac{(n-1)(u'')^2}{(u')^2} \right) \\ \nonumber
&& \mbox{} - \frac{1}{u'-a_t} \left( \frac{u'''}{u''} + \frac{(n-1)u''}{u'} -k \right) \\ \label{evolveH}
&& \mbox{} - \frac{1}{b_t-u'} \left( - \frac{u'''}{u''} - \frac{(n-1)u''}{u'} -k \right).
\end{eqnarray}
Before applying the maximum principle we check that $H$ remains bounded from above as $\rho$ tends to $\pm \infty$. For the case of $\rho$ negative we use (\ref{uppp}) to obtain
\begin{eqnarray*}
\frac{u''(\rho,t)}{(u'(\rho,t)-a_t)(b_t -u'(\rho,t))} & = & \frac{k e^{k\rho} u'_0(e^{k\rho}, t) + ke^{2k\rho} u''_0(e^{k\rho}, t)}{e^{k\rho} u'_0(e^{k\rho},t) (b_t-a_t - ke^{k\rho}u'_0(e^{k\rho},t))} \\
& \le & \frac{k}{b_t-a_t - ke^{k\rho} u'_0(e^{k\rho},t)} + \frac{ke^{k\rho}u''_0(e^{k\rho},t)}{u'_0(e^{k\rho}, t) (b_t-a_t)} \\
& \le & \frac{k}{b_t-a_t} +1,
\end{eqnarray*}
as $\rho$ tends to $- \infty$, where we are using the fact that $u'_0(0,t)>0$. Note that $b_t-a_t$ remains bounded from below away from zero. Similarly, we can show that $H$ is bounded from above as $\rho$ tends to positive infinity.
Suppose then that $H$ has a maximum at a point $(\rho_0,t_0) \in \mathbb{R} \times (0, T)$. Then at this point, we have
\begin{equation} \label{dhz}
\frac{u'''}{u''} - \frac{u''}{u'-a_t} + \frac{u''}{b_t-u'} =0
\end{equation}
and
\begin{equation} \label{ddhz}
\frac{u^{(4)}}{u''} - \frac{(u''')^2}{(u'')^2} - \frac{u'''}{u'-a_t} + \frac{(u'')^2}{(u'-a_t)^2} + \frac{u'''}{b_t-u'} + \frac{(u'')^2}{(b_t-u')^2} \le 0.
\end{equation}
Combining (\ref{evolveH}), (\ref{dhz}) and (\ref{ddhz}) we see that at $(\rho_0, t_0)$,
$$0 \leq -u''\left( \frac{1}{(u'-a_t)^2} + \frac{1}{ (b_t - u')^2} \right) - \frac{(n-1)u''}{(u')^2} + \frac{k}{u'-a_t} + \frac{k}{b_t - u'} $$ and hence
$$ \frac{u''}{(u'-a_t)(b_t -u')} \leq \frac{k(b_t - a_t) }{(u'-a_t)^2 + (b_t - u')^2}\leq C,$$
and the result follows by the maximum principle.
\ensuremath{\Box}
\end{proof}
\section{Gromov-Hausdorff convergence} \label{sectionGH}
In this section we prove Theorem \ref{thm3} (and hence also Theorem \ref{thm1}) on the Gromov-Hausdorff convergence of the K\"ahler-Ricci flow. We assume in this section that $g(t)$ is a solution of the K\"ahler-Ricci flow (\ref{KRF0}) on $[0,T)$ with initial metric $g_0$ satisfying the Calabi symmetry condition.
We begin by recalling the definition of Gromov-Hausdorff convergence. It will be convenient to use the characterization given in, for example, \cite{F} or \cite{GW}. Let $(X, d_X)$ and $(Y,d_Y)$ be two compact metric spaces. We define the Gromov-Hausdorff distance $d_{\textrm{GH}}(X,Y)$ to be the infimum of all $\varepsilon>0$ such that the following holds. There exist maps $F: X \rightarrow Y$ and $G: Y \rightarrow X$ such that
\begin{equation} \label{GH1}
|d_X(x_1, x_2) - d_Y (F(x_1), F(x_2))|\leq \varepsilon, \quad \textrm{for all } x_1, x_2 \in X
\end{equation} and
\begin{equation} \label{GH2}
d_X(x, G\circ F(x)) < \varepsilon, \quad \textrm{for all } x \in X
\end{equation}
and the two symmetric properties for $Y$ also hold. Note that we do not require the maps $F$ and $G$ to be continuous.
We say that a sequence of compact metric spaces $(X_i, d_{X_i})$ converges to $(Y, d_Y)$ in the sense of Gromov-Hausdorff if $d_{\textrm{GH}}(X_i, Y)$ tends to zero as $i$ tends to infinity.
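As a simple illustration (our own, not taken from this paper) of how collapsing fibers produce Gromov-Hausdorff convergence, consider a flat cylinder with shrinking circumference direction playing the role of the fiber:
```latex
% Toy example: a flat cylinder with shrinking fibers collapses to the circle.
Let $X_\varepsilon = S^1 \times [0,\varepsilon]$ with the product metric and
$Y = S^1$. Take $F(\theta, s) = \theta$ and $G(\theta) = (\theta, 0)$. Then
for $x_i = (\theta_i, s_i)$,
$$ d_Y(\theta_1, \theta_2) \;\le\; d_{X_\varepsilon}(x_1, x_2)
   \;\le\; d_Y(\theta_1, \theta_2) + |s_1 - s_2|
   \;\le\; d_Y(\theta_1, \theta_2) + \varepsilon, $$
and $d_{X_\varepsilon}(x, G \circ F(x)) = s \le \varepsilon$, while the
symmetric conditions for $Y$ hold with constant $0$. Hence
$d_{\textrm{GH}}(X_\varepsilon, S^1) \le \varepsilon \rightarrow 0$.
```
This is exactly the structure of the proofs below, with $F = \pi$ and $G = \sigma$ a section avoiding the divisors.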
\subsection{The case $k\ge n$}
Again we consider first the case when $k \ge n$. We prove the following.
\begin{theorem}
$(M, g(t)) $ converges to $(\mathbb{P}^{n-1}, a_T g_{\textrm{FS}})$ in the Gromov-Hausdorff sense as $t\rightarrow T$.
\end{theorem}
\begin{proof} We write $d_{g(t)}$ and $d_{T}$ for the distance functions on $M$ and $\mathbb{P}^{n-1}$ associated to $g(t)$ and $a_T \omega_{\textrm{FS}}$ respectively. Let $\varepsilon>0$ be given. Let $\sigma: \mathbb{P}^{n-1} \rightarrow M$ be any smooth map satisfying $\pi \circ \sigma = \textrm{id}_{\mathbb{P}^{n-1}}$ such that $\sigma(\mathbb{P}^{n-1})$ does not intersect $D_0 \cup D_{\infty}$. We first verify that (\ref{GH2}) (and its analog for $Y$) hold whenever $t$ is sufficiently close to $T$. We are taking here $(X, d_X) = (M, d_{g(t)})$, $(Y, d_Y) = (\mathbb{P}^{n-1}, d_{T})$, $F= \pi$ and $G=\sigma$ in the definition of Gromov-Hausdorff convergence.
For $x \in M$, by Corollary \ref{corollaryfiber},
$$d_{g(t)} (x, \sigma\circ \pi(x)) \leq \textrm{diam}_{g(t)} (\pi^{-1}(\pi(x))) \longrightarrow 0$$ uniformly in $x$ as $t\rightarrow T$.
Moreover, for any $y$ in $\mathbb{P}^{n-1}$, $ d_{T} ( y, \pi \circ\sigma(y)) =0$ holds trivially.
Now we verify (\ref{GH1}). First, again by Corollary \ref{corollaryfiber}, we choose $t$ close enough to $T$ so that for all $z$ in $\mathbb{P}^{n-1}$,
\begin{equation} \label{diam}
\textrm{diam}_{g(t)} ( \pi^{-1}(z)) < \varepsilon/4.
\end{equation}
For any $x_1, x_2 \in M$, let $y_i = \pi(x_i) \in \mathbb{P}^{n-1}$. Let $\gamma$ be a geodesic in $\mathbb{P}^{n-1}$ such that $d_{T}(y_1, y_2) = L_{a_T g_{\textrm{FS}}}(\gamma),$
where $L_{ a_T g_{\textrm{FS} }}( \gamma ) $ is the arc length of $\gamma$ with respect to $a_T g_{\textrm{FS}}$.
Choose a small tubular neighborhood $N$ of $D_0 \cup D_\infty$ so that $\sigma(\mathbb{P}^{n-1})$ does not intersect $N$. Then $\tilde{\gamma}= \sigma \circ \gamma$ is a smooth path in $M \setminus N$ joining the points $x'_1= \sigma(y_1) $ and $x_2'= \sigma(y_2)$.
By Theorem \ref{thmconv}, $g(t)$ converges to $a_T \pi^*g_{\textrm{FS}}$ uniformly on $M \setminus N$, and hence for $t$ sufficiently close to $T$,
\begin{equation} \label{L}
L_{g(t)}(\tilde{\gamma}) < L_{a_T g_{\textrm{FS}}}(\gamma) + \varepsilon/2.
\end{equation}
Then from (\ref{diam}) and (\ref{L}),
\begin{equation} \label{GH11}
d_{g(t)}(x_1, x_2) \leq d_{g(t)}(x_1', x_2')+ \frac{\varepsilon}{2} \leq L_{g(t)}(\tilde{\gamma}) +\frac{\varepsilon}{2}
\leq L_{a_T g_{\textrm{FS}}}(\gamma) +\varepsilon = d_{T}(y_1, y_2) + \varepsilon.
\end{equation}
On the other hand, by Lemma \ref{lowerbd},
\begin{equation} \label{GH12}
d_{g(t)}(x_1, x_2) \geq \left(\frac{a_t}{a_T}\right)^{1/2} d_{T}( y_1, y_2),
\end{equation}
and $a_t \rightarrow a_T$ as $t \rightarrow T$.
Combining (\ref{GH11}) and (\ref{GH12}) gives
$$|d_{g(t)} (x_1, x_2) - d_{T}(\pi (x_1), \pi(x_2)) |\leq \varepsilon,$$
and hence the first case of (\ref{GH1}).
The second case of (\ref{GH1}) is simpler. Let $y_1, y_2$ be in $\mathbb{P}^{n-1}$ and write $x_i = \sigma (y_i)$. Since $\sigma(\mathbb{P}^{n-1})$ does not intersect $D_0 \cup D_{\infty}$, we can apply Theorem \ref{thmconv} to obtain the following convergence uniformly in $t$ and the choice of $y_1, y_2$, $x_1$, $x_2$:
$$\lim_{t \rightarrow T} d_{g(t)} (x_1, x_2) = d_{T} (y_1, y_2),$$
as required.
\ensuremath{\Box}\end{proof}
\subsection{The case $1 \le k \le n-1$.}
If $a_0(n+k) > b_0(n-k)$ then one can apply verbatim the argument as in the case $k \geq n$ and the K\"ahler-Ricci flow collapses to $\mathbb{P}^{n-1}$.
If $a_0(n+k) = b_0(n-k)$, then $\alpha_0 $ is proportional to the first Chern class $c_1(M)$. As discussed in subsection \ref{subcaseperelman},
the diameter is uniformly bounded along the normalized K\"ahler-Ricci flow with initial K\"ahler metric in $c_1(M)$. Then after scaling, the diameter tends to $0$ along the unnormalized K\"ahler-Ricci flow.
We consider then just the subcase $a_0(n+k) < b_0(n-k)$. We will show that as $t \rightarrow T$ the divisor $D_0$ in $M$ contracts. First, we have the following lemma.
\begin{lemma} \label{lemmagup} There exists a uniform constant $C$ such that the metric $g_{i\ov{j}}=g_{i\ov{j}}(t)$ on $\mathbb{C}^n\setminus \{0\}$ satisfies the estimate
\begin{equation} \label{gub}
g_{i\ov{j}}(t) \le a_t \chi_{i\ov{j}} + Ce^{(k-n)\rho/n} \delta_{ij},
\end{equation}
where $\chi_{i\ov{j}} = e^{-\rho} \delta_{ij} - e^{-2\rho} \ov{x}_i x_j $.
\end{lemma}
\begin{proof} This follows from Lemmas \ref{lemmak1up} and \ref{lemmaudpk1}. Indeed,
\begin{eqnarray*}
g_{i\ov{j}} & = & e^{-\rho} u' \delta_{ij} + e^{-2\rho} \ov{x}_i x_j (u'' - u') \\
& \le & C e^{(k-n)\rho/n} \delta_{ij} + e^{-\rho} a_t \delta_{ij} + Ce^{(k-n)\rho/n} e^{-\rho} \ov{x}_i x_j- e^{-2\rho} \ov{x}_i x_j a_t \\
& \le & a_t \chi_{i\ov{j}} + Ce^{(k-n)\rho/n} \delta_{ij} ,
\end{eqnarray*}
since $u'(\rho, t) > a_t$.
\ensuremath{\Box}
\end{proof}
Recall that by Theorem \ref{theoremTZ}, the metric $g_t$ along the K\"ahler-Ricci flow converges in $C^{\infty}$ on compact subsets of $M \setminus D_0$ to a singular metric $g_T$ which is smooth on $M \setminus D_0$.
We will apply this and Lemma \ref{lemmagup} to prove the following result on the metric completion of the manifold $(M \setminus D_0, g_T)$.
\begin{theorem} \label{thmcompletion} Let $g_T$ be the smooth metric on $M \setminus D_0$ obtained by $$ g_T = \lim_{t\rightarrow T} g(t),$$
and let $(\overline{M}, d)$ be the completion of the Riemannian manifold $(M \setminus D_0, g_T)$ as a metric space. Then $(\overline{M},d)$ is a metric space with finite diameter and is homeomorphic to the orbifold $\mathbb{P}^n/\mathbb{Z}_k$ (see Section \ref{orbifold}).
\end{theorem}
\begin{proof}
Let $f: M \rightarrow \mathbb{P}^n/\mathbb{Z}_k$ be the holomorphic map described in Section \ref{orbifold}. Recall that $f$ restricted to $M \setminus D_0$ is represented in the $(x_1, \ldots, x_n)$ coordinates by the identity map $\textrm{id}: \mathbb{C}^n\setminus \{0\} \rightarrow \mathbb{C}^n \setminus \{ 0 \}$. Now $f$ is an isomorphism on $M \setminus D_0$ and Theorem \ref{theoremTZ} implies that $g(t)$ converges locally in $C^\infty(M \setminus
D_0).$ Thus
it only remains to check the limiting behavior of $g(t)$ near $D_0$ as $t\rightarrow T$ (that is, in a neighborhood of the origin in $\mathbb{C}^n$).
We make a simple observation. Suppose $g_{ij}$ is a continuous Riemannian metric on $B \setminus \{0\}$, where $B = \{ (x_1, \ldots, x_n) \in \mathbb{C}^n \ | \ |x_1|^2 + \cdots + |x_n|^2 \le 1\}$. Suppose in addition that $g_{ij}$ satisfies the inequality
$$g_{ij} \le \frac{C}{r^{\beta}} \delta_{ij},$$
for some $\beta <2$, where $r = ( |x_1|^2 + \cdots + |x_n|^2)^{1/2}$. Then the completion of $(B \setminus \{ 0 \}, g)$ as a metric space has finite diameter and is homeomorphic to $B$ with topology induced from $\mathbb{C}^n$.
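The finiteness of the diameter in this observation comes down to a one-line length estimate, which we record for the reader's convenience:
```latex
% Why beta < 2 gives finite distance to the origin: integrate along a ray.
The $g$-length of the radial segment from a point at radius $r_0 \le 1$
to the origin is at most
$$ \int_0^{r_0} \sqrt{\frac{C}{r^{\beta}}} \, dr
   = \frac{\sqrt{C}}{1 - \beta/2} \, r_0^{1 - \beta/2} < \infty, $$
and the integral converges precisely because $\beta < 2$. In particular,
all sequences approaching the origin are identified to a single point in
the metric completion.
```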
Now from Lemma \ref{lemmagup}, since $a_t \rightarrow 0$ as $t \rightarrow T$, we see that on $\mathbb{C}^n\setminus \{ 0 \}$,
$$(g_T)_{i\ov{j}} \le \frac{C}{r^{2(n-k)/n}} \delta_{ij}$$
and hence the theorem follows from the observation above with $\beta = 2(n-k)/n$.
\ensuremath{\Box}
\end{proof}
In addition, the proof of Theorem \ref{thmcompletion} gives:
\begin{lemma} Let $N_\varepsilon$ be an $\varepsilon$-tubular neighborhood of $D_0$ in $M$ with respect to the fixed metric $\hat{g}_0$. Then
$$ \lim_{\varepsilon\rightarrow 0} \limsup_{t \rightarrow T} \emph{diam}_{g(t)} N_{\varepsilon} = 0.$$
\end{lemma}
We can then prove:
\begin{theorem} $(M, g(t))$ converges to $(\overline{M}, d)$ in the Gromov-Hausdorff sense as $t \rightarrow T$, where $(\overline{M}, d)$ is the metric space as described in Theorem \ref{thmcompletion}.
\end{theorem}
\begin{proof} This is a simple consequence of the results described above. Identifying $\overline{M}$ with $\mathbb{P}^n/\mathbb{Z}_k$ as in the proof of Theorem \ref{thmcompletion},
let $F: M \rightarrow \overline{M}$ be the map corresponding to $f:M \rightarrow \mathbb{P}^n/\mathbb{Z}_k$.
Let $G: \overline{M} \rightarrow M$ be any map satisfying $F \circ G = \textrm{id}_{\overline{M}}$. Then it is left to the reader to check that, using these functions $F$ and $G$, the Gromov-Hausdorff distance between $(M, g(t))$ and $(\overline{M},d)$ tends to zero as $t \rightarrow T$.
\ensuremath{\Box}
\end{proof}
This completes the proof of Theorem \ref{thm3}.
\noindent
{\bf Acknowledgements.} \ The authors are grateful to Professor D.H. Phong for his advice, encouragement and support. In addition, the first-named author thanks Professor G. Tian for some helpful discussions. The second-named author thanks Professor S. Casalaina-Martin for some useful conversations. The authors are also grateful to Valentino Tosatti for some helpful comments on a previous draft of this paper.
\begin{thebibliography}{99}
\bibitem[A]{A} Aubin, T. {\em \'Equations du type Monge-Amp\`ere sur les vari\'et\'es k\"ahl\'eriennes compactes}, Bull. Sci. Math. (2) {\bf 102} (1978), no. 1, 63--95
\bibitem[Cal]{Cal} Calabi, E. {\em Extremal K\"ahler metrics}, in Seminar on Differential Geometry, pp. 259--290, Ann. of Math. Stud., {\bf 102}, Princeton Univ. Press, Princeton, N.J., 1982
\bibitem[Cao1]{Cao1} Cao, H.-D. {\em Deformation of K\"ahler metrics to K\"ahler-Einstein metrics on compact
K\"ahler manifolds}, Invent. Math. {\bf 81} (1985), no. 2, 359--372
\bibitem[Cao2]{Cao2} Cao, H.-D. {\em Existence of gradient K\"ahler-Ricci solitons}, Elliptic and parabolic methods in geometry (Minneapolis, MN, 1994), 1--16, A K Peters, Wellesley, MA, 1996
\bibitem[CW]{CW} Chen, X.X. and Wang, B. {\em Space of Ricci flows (I)}, preprint, arXiv: 0902.1545
\bibitem[Ch] {Ch} Chow, B. {\em The Ricci flow on the 2-sphere},
J. Differential Geom. {\bf 33} (1991) 325--334
\bibitem[D]{D} Donaldson, S.K. {\em Scalar curvature and stability of toric
varieties}, J. Differential Geom. {\bf 62} (2002), no. 2, 289--349
\bibitem[EGZ]{EGZ} Eyssidieux, P., Guedj, V. and Zeriahi, A., {\em Singular K\"ahler-Einstein metrics}, J. Amer. Math. Soc. {\bf 22} (2009), no. 3, 607--639
\bibitem[FIK]{FIK} Feldman, M., Ilmanen, T. and Knopf, D. {\em Rotationally symmetric shrinking and expanding gradient K\"ahler-Ricci solitons}, J. Differential Geometry {\bf 65} (2003), no. 2, 169--209
\bibitem[F]{F} Fukaya, K. {\em Theory of convergence for Riemannian orbifolds},
Japan. J. Math. (N.S.) {\bf 12} (1986), no. 1, 121--160
\bibitem[GH]{GH} Griffiths, P. and Harris, J. {\em Principles of algebraic geometry}, Pure and Applied Mathematics. Wiley-Interscience, New York, 1978.
\bibitem[GW]{GW} Gross, M, and Wilson, P. M.H. {\em Large complex structure limits of $K3$ surfaces}, J. Differential Geometry {\bf 55} (2000), no. 3, 475--546
\bibitem[H] {H} Hamilton, R.S. {\em The Ricci flow on surfaces},
Contemp. Math. {\bf 71} (1988), 237--261
\bibitem[IS]{IS} Ishikawa, K. and Sakane, Y. {\em On complex projective bundles over a K\"ahler $C$-space},
Osaka J. Math. {\bf 16} (1979), no. 1, 121--132
\bibitem[Koi]{Koi} Koiso, N. {\em
On rotationally symmetric Hamilton's equation for K\"ahler-Einstein metrics}, Recent topics in differential and analytic geometry, 327--337,
Adv. Stud. Pure Math., 18-{\rm I}, Academic Press, Boston, MA, 1990
\bibitem[Kol]{Kol} Kolodziej, S. {\em The complex Monge-Amp\`ere
equation}, Acta Math. {\bf 180} (1998), no. 1, 69--117
\bibitem[P1]{P1} Perelman, G. {\em The entropy formula for the Ricci flow and its geometric applications},
preprint, arXiv:math.DG/0211159
\bibitem[P2]{P2} Perelman, G. unpublished work on the K\"ahler-Ricci flow
\bibitem[PSS] {PSS} Phong, D.H., Sesum, N. and Sturm, J.
{\em Multiplier ideal sheaves and the K\"ahler-Ricci flow}, Comm. Anal. Geom. {\bf 15} (2007), no. 3, 613--632
\bibitem[PS1]{PS1} Phong, D. H. and Sturm, J. {\em On the K\"ahler-Ricci flow on complex surfaces}, Pure Appl. Math. Q. {\bf 1} (2005), no. 2, part 1, 405--413
\bibitem[PS2]{PS2} Phong, D.H. and Sturm, J. {\em On stability and the convergence of the K\"ahler-Ricci flow}, J. Differential Geometry {\bf 72} (2006), no. 1, 149--168
\bibitem[PSSW1]{PSSW1} Phong, D.H., Song, J., Sturm, J. and Weinkove, B. {\em The K\"ahler-Ricci flow and the $\bar\partial$ operator on vector fields}, J. Differential Geometry {\bf 81} (2009), no. 3, 631--647
\bibitem[PSSW2]{PSSW2} Phong, D.H., Song, J., Sturm, J. and Weinkove, B. {\em The K\"ahler-Ricci flow with positive bisectional curvature}, Invent. Math. {\bf 173} (2008), no. 3, 651--665
\bibitem[PSSW3]{PSSW3} Phong, D.H., Song, J., Sturm, J. and Weinkove, B. {\em The modified K\"ahler-Ricci flow and solitons}, arXiv: 0809.094, to appear in Comment. Math. Helvetici
\bibitem[R]{R} Rubinstein, Y. {\em On the construction of Nadel multiplier ideal sheaves and the limiting behavior of the Ricci flow}, Trans. Amer. Math. Soc. {\bf 361} (2009), no. 11, 5839--5850
\bibitem[SeT]{SeT} Sesum, N. and Tian, G. {\em Bounding scalar curvature and diameter along the K\"ahler Ricci flow (after Perelman)}, J. Inst. Math. Jussieu {\bf 7} (2008), no. 3, 575--587
\bibitem[SoT1]{SoT1} Song, J, and Tian, G. {\em The K\"ahler-Ricci flow on surfaces of positive Kodaira dimension},
Invent. Math. {\bf 170} (2007), no. 3, 609--653
\bibitem[SoT2]{SoT2} Song, J, and Tian, G. {\em Canonical measures and K\"ahler-Ricci flow}, preprint, arXiv: 0802.2570
\bibitem[SoT3]{SoT3} Song, J, and Tian, G. {\em Canonical measures and K\"ahler-Ricci flow II}, preprint
\bibitem[Sz1]{Sz1} Szekelyhidi, G. {\em The Calabi functional on a ruled surface}, Ann. Sci. \'Ec. Norm. Sup\'er. (4) {\bf 42} (2009), no. 5, 837--856
\bibitem[Sz2]{Sz2} Szekelyhidi, G. {\em The K\"ahler-Ricci flow and K-stability}, preprint, arXiv: 0803.1613
\bibitem[T1]{T1} Tian, G. {\em K\"ahler-Einstein metrics with positive scalar curvature}, Invent. Math. {\bf 130} (1997), no. 1, 1--37
\bibitem[T2]{T2} Tian, G. {\em New results and problems on K\"ahler-Ricci flow}, G\'eom\'etrie diff\'erentielle, physique math\'ematique, math\'ematiques et soci\'et\'e. II. Ast\'erisque {\bf 322} (2008), 71--92
\bibitem[TZha]{TZha} Tian, G. and Zhang, Z. {\em On the K\"ahler-Ricci flow on projective manifolds of general type}, Chinese Ann. Math. Ser. B {\bf 27} (2006), no. 2, 179--192
\bibitem[TZhu]{TZhu} Tian, G. and Zhu, X. {\em Convergence of K\"ahler-Ricci flow}, J. Amer. Math. Soc. {\bf 20} (2007), no. 3, 675--699
\bibitem[To]{To} Tosatti, V. {\em K\"ahler-Ricci flow on stable Fano manifolds}, preprint, arXiv: 0810.1895
\bibitem[Ts]{Ts} Tsuji, H. {\em Existence and degeneration of K\"ahler-Einstein metrics on minimal algebraic varieties of general type}, Math.
Ann. {\bf 281} (1988), 123--133
\bibitem[WZ] {WZ} Wang, X.J. and Zhu, X.
{\em K\"ahler-Ricci solitons on toric manifolds with positive first Chern class},
Adv. Math. {\bf 188} (2004), 87--103
\bibitem[Y1]{Y1} Yau, S.-T. {\em On the Ricci curvature of a compact K\"ahler
manifold and the complex Monge-Amp\`ere equation, I}, Comm. Pure
Appl. Math. {\bf 31} (1978), 339--411
\bibitem[Y2]{Y2} Yau, S.-T. {\em Open problems in geometry}, Proc. Symposia Pure
Math. {\bf 54} (1993), 1--28 (problem 65)
\bibitem[Zha]{Zha} Zhang, Z. {\em On degenerate Monge-Amp\`ere equations over closed K\"ahler manifolds}, Int. Math. Res. Not. {\bf 2006}, Art. ID 63640, 18 pp
\bibitem[Zhu]{Zhu} Zhu, X. {\em K\"ahler-Ricci flow on a toric manifold with positive first Chern class},
preprint, arXiv:math.DG/0703486
\end{thebibliography}
$^{*}$ Department of Mathematics \\
Rutgers University, Piscataway, NJ 08854\\
Email: [email protected] \\
$^{\dagger}$ Department of Mathematics \\
University of California San Diego, La Jolla, CA 92093\\
Email: [email protected]
\end{document}
\begin{document}
\title{A global algorithm for the computation of traveling dissipative solitons}
\begin{abstract}
An algorithm is proposed to calculate traveling dissipative solitons for the
FitzHugh-Nagumo equations. It is based on the application of the steepest descent method
to a certain functional. This approach can be used to find solitons whenever the
problem has a variational structure. Since the method seeks the lowest energy
configuration, it has robust performance qualities. It is global in nature, so that
initial guesses for both the pulse profile and the wave speed can be quite different from
the correct solution. Also,
bifurcations have a minimal effect on the performance.
With an appropriate set of physical parameters in two dimensional domains,
we observe the co-existence of single-soliton and 2-soliton solutions together with
additional unstable traveling pulses. The algorithm automatically calculates these
various pulses as the energy minimizers at different wave speeds. In addition to
finding individual solutions, this approach could be used to augment or initiate
continuation algorithms.
\end{abstract}
\date{}
\keywords{~{FitzHugh-Nagumo}\xspace \and traveling wave \and traveling pulse \and dissipative solitons \and minimizer \and steepest descent}
\section{Introduction}\label{sec:introduction}
Patterns occurring in nature fascinate. They are ubiquitous in all kinds of physical, chemical and biological systems.
{Very often, localized structures are observed, like pulses, fronts and spirals.}
These
fundamental building blocks then self-organize into patterns as a result of their mutual interaction. This may involve a pattern front moving
across a homogeneous ambient state after some kind of destabilization.
{In many cases, the ambient state is}
uniform in all directions. Traveling pulses are localized structures
which relax to the same ambient state; they are therefore especially important in the dynamic transition phase
seen in many experiments and model simulations.
Stable pulses (which are sometimes called {{\em spots}} in multidimensional domains), whether stationary or moving, can be particle-like in many
circumstances and are {often} referred to
as dissipative solitons in {the} physics literature. {For example, see} the books by \cite{Nish2002, AA2005, L2013} and the references therein. In a three-component
activator-inhibitor system studied by both physicists and
mathematicians, it was observed that
fast moving solitons collide and annihilate one another while slow ones bounce off one another, and an unstable standing pulse can split into two
solitons; see \cite{PBL2005, EMN2006, KM2008, NTYU2007, HDKP2010}.
Understanding the mechanisms behind such pattern formation has
been an on-going struggle since Turing's landmark paper on morphogenesis, \cite{Tur1952};
investigation has intensified in the last three decades in recognition of the importance {observed for various applications}.
Advances in mathematical studies have led to a deeper understanding:
such interactions involve a delicate balance between gain and loss, and the subsequent redistribution of energy and ``mass'' in the system; the ``mass'' can be chemical concentration,
light intensity or current density. Many dissipative soliton models, like the Ginzburg-Landau and nonlinear Schr\"odinger equations, possess variational structures.
Restricting our attention to reaction-diffusion systems,
activator-inhibitor type equations are the natural choices in modeling these phenomena, as they involve gain and loss;
see \cite{PBL2005, L2013}. The two component
FitzHugh-Nagumo equations and the three component activator-inhibitor systems serve as primary
models in such investigations; under suitable parametric restrictions the solutions are minimizers of some variational functionals.
{There are many theoretical studies on these activator-inhibitor
systems that employ various methods of analysis for one or higher dimensional
domains in different parametric regimes, for example}
\cite{CC2012, CC2015, CCH2016, HS2014, Mur2004, DRY2007, RW2003}.
We are particularly interested {in the case} when the activator
diffusivity is small (compared to that of the inhibitor), leading to an
activator profile with a steep slope. By using the tool of $\Gamma$-convergence {on the FitzHugh-Nagumo equations} to study its limiting geometric variational problems with a nonlocal term,
existence and local stability of radially symmetric standing spots in ${\mathbb R}^n$ have been completely classified for all parameters \cite{CCR2018, CCHR2018};
they are the first results on the exact multiplicity of solutions to these limiting equations ({from} $0$ up to $3$ standing pulses).
With similar restrictions on parameters in order to employ $\Gamma$-convergence,
a unique traveling pulse solution in the 1D case has also been shown recently \cite{CCF}. However, when we relax the conditions on the parameters
(so that one cannot employ $\Gamma$-convergence analysis), there can be co-existence of a traveling pulse and two distinct fronts moving in
opposite directions \cite{CC_multi}.
Computational studies are also important, but there can be difficulties.
Continuation algorithms are a common means to compute traveling and stationary waves. While there is no difficulty in finding 1D traveling spots of the
FitzHugh-Nagumo equations, the 2D case is different.
We quote \cite[p.8]{L2013} exactly: ``Only in extreme parameter regimes, where numerical solutions are difficult to obtain, it can be analytically shown that propagating dissipative solitons exist as stable solutions of two-component, two-dimensional, reaction-diffusion systems''; in other words, the algorithms run into difficulty. At the same time, from \cite[bottom of p.32]{EMN2006}:
``it is generally believed that
{traveling} spots for a two component system in the
whole $\mathbb{R}^2$ do not exist''. The difficulty comes as a result of multiple bifurcations in the 2D domains.
Sometimes multiple bifurcations occur within a small range of parameters
(the cusp in Figure~\ref{fig:fig2d_3} is related to a bifurcation), resulting in many nearby solutions; in such situations it is not easy to find a good enough
initial guess for the continuation algorithm. Even with a good initial guess, convergence is not guaranteed.
{Another computational approach is to
feed reasonable initial profiles into the time dependent problem and perform numerical
time stepping; one then hopes that
it results in a more or less steady profile moving at a uniform speed after a long run.
Unstable waves can never be found by such a method.
Even if a stable traveling wave exists, it is easy to miss, since traveling waves
are usually not global attractors for reaction-diffusion systems, as is the case for the~{FitzHugh-Nagumo}\xspace equations.
This method sometimes provides a good initial guess for a Newton-type algorithm or a numerical
shooting method if we are interested in
very accurate solutions.
As we are interested in cases when $d$ is small, the resulting multiple temporal and spatial scales in
\eqref{eqn:fhn1} will induce
excessive computational effort, unless the initial approximation is extremely good so that the convergence
to the traveling wave takes place in a short time.}
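To make the time-stepping approach (and its cost) concrete, here is a deliberately naive sketch of one forward-Euler, centered-difference step for system \eqref{eqn:fhn1} in 1D. This is an illustration only, not the steepest-descent algorithm developed in this paper; the grid, time step, and periodic truncation of the real line are our own choices.

```python
import numpy as np

# One explicit step for u_t = u_xx + (f(u) - v)/d, v_t = v_xx + u - gamma*v,
# with the cubic nonlinearity f(u) = u(u - beta)(1 - u).  The real line is
# truncated to a periodic grid (np.roll), an illustrative simplification.
def fhn_step(u, v, dx, dt, d=0.0005, gamma=1.0 / 16.0, beta=0.25):
    f = u * (u - beta) * (1.0 - u)
    lap_u = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap_v = (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx**2
    return u + dt * (lap_u + (f - v) / d), v + dt * (lap_v + u - gamma * v)
```

Because $1/d = 2000$ makes the reaction term stiff, stability of this explicit scheme requires a very small time step, roughly $dt \lesssim \min(dx^2/2, d)$; this is precisely the excessive computational effort alluded to above.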
{If we} restrict ourselves to dissipative solitons for which variational formulations are common, we {may} exploit the fact that
they are minimizers.
{In this paper, we develop a robust, global steepest descent
algorithm to find traveling pulse solutions of the FitzHugh-Nagumo equations
both in 1D and on an infinite strip domain in 2D with zero Dirichlet boundary
conditions. It is based on some recent theoretical understanding; see \cite{CC2012, CC2015}.}
The algorithm works even without a good initial guess. If a
bifurcation occurs, it simply tracks the lowest energy configuration and filters out other high energy solutions generated as a result of bifurcation.
Multiple bifurcations in a vicinity will therefore not affect the algorithmic
performance. That explains why we are able
to find quite a few stable and unstable spots with relative ease in our computations. Future robustness tests on this algorithm will be conducted with other boundary conditions, widening the width of the strip domain, and for traveling fronts.
Another study \cite{CCD_new} also shows that the algorithm can easily compute all three traveling waves proved in \cite{CC_multi};
in fact, in addition to these 3 stable minimizers there are 2 unstable waves with the same physical parameters.
The algorithm can also be {applied toward}
the 3-component activator-inhibitor systems, which also possess variational
structures.
We consider the~{FitzHugh-Nagumo}\xspace equations with domain $\Omega\subset \mathbb{R}^n$ in space, for
time $0\leq t <\infty$. Specifically, $\Omega=\mathbb{R}$ if $n=1$ and
$\Omega=\mathbb{R}\times (-L,L)$ if $n=2$, with $0<L<\infty$. The equations are:
\begin{equation}
\begin{array}{rl}
\displaystyle u_t &= \displaystyle \Delta u +\frac{1}{d}\left( f(u)-v\right) , \\
\displaystyle v_t & = \displaystyle \Delta v +u-\gamma \, v ,
\end{array}
\label{eqn:fhn1}
\end{equation}
plus initial conditions.
Boundary conditions are needed for $n=2$; we consider $u=v=0$ on the boundary.
Here $d$ and $\gamma$ are positive constants, and
$f(u)\equiv u(u-\beta )(1-u)$ with $0<\beta <1/2$ being a fixed constant.
Our algorithm is global in the sense that it does not require
a good initial guess of the solution, only some guess in the set ${\mathcal{A}}\xspace$ that we show later
is very easy to construct. The wave speed is found as a root of a certain
functional. Multiple roots correspond to distinct waves, {hence multiple solutions for the same values of $d$, $\gamma$ and $\beta$.}
We develop this approach here for the specific domains $\Omega$ described
above, but it can be easily extended to more general boundary conditions and
domains via standard adaptations of the variational techniques.
After deriving our method, we will
computationally illustrate how it is used to determine the existence and
multiplicity of traveling pulse solutions to (\ref{eqn:fhn1}), and
simultaneously compute the corresponding wave speeds and pulse profiles.
In 2D, we observe a bifurcation and point out how the energy minimization
property allows the algorithm to adapt automatically and find the dissipative soliton.
\subsection{An illustration of robustness}\label{sec:robustExample}
As an example of the global convergence with $n=1$, choosing $\beta =1/4$, $d=0.0005$ and
$\gamma=1/16$, our proposed method
is able to identify one stable and one unstable wave profile, corresponding
to different wave speeds $c_0$ and $c_1$, respectively.
To check the stability of
our computed traveling pulses, we input them as initial data into a parabolic
solver for the time-dependent equation~\eqref{eqn:fhn1}. Some break up
quickly, indicating an unstable traveling wave; others simply translate at
the speed computed by our descent algorithm, verifying that they are stable.
The situation is illustrated in~\figref{fig:fig0};
on the {top}, we show the result of inserting the unstable traveling
wave profile into a parabolic solver: the wave just decays rapidly in time. This demonstrates the need for a
global algorithm; if the initial guess for a stable wave profile is not close
enough, the parabolic solver may not find it. In contrast, our
proposed method can start with the same unstable profile and
be used to
find the stable profile with speed $c_0$ as well.
In the center
of~\figref{fig:fig0}, this process is shown at various iteration counts, $n$,
of the algorithm. On the {bottom} of~\figref{fig:fig0}, we show the
result of inserting the computed, stable wave profile into the parabolic
solver.
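The "translates at the computed speed" check can be quantified with a small, self-contained helper (our own illustration, not part of the algorithm): compare the peak location of the $u$-profile in two snapshots separated by a known time.

```python
import numpy as np

# Estimate the translation speed of a single well-separated pulse from two
# snapshots u0, u1 of its profile on a uniform grid with spacing dx, taken a
# time dt_elapsed apart.  Peak tracking is crude but adequate here.
def estimate_speed(u0, u1, dx, dt_elapsed):
    shift = (np.argmax(u1) - np.argmax(u0)) * dx
    return shift / dt_elapsed
```

Agreement of this estimate with the wave speed returned by the descent algorithm, up to discretization error, is the consistency check described above.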
\begin{figure}
\caption{{Top}: the unstable traveling wave profile fed into a parabolic solver decays rapidly in time. {Center}: iterates of the proposed descent algorithm, starting from the same unstable profile, at various iteration counts $n$. {Bottom}: the computed stable wave profile translating at speed $c_0$ under the parabolic solver.}
\label{fig:fig0}
\end{figure}
The remainder of this paper is organized as follows.
In~\secref{sec:vf} we give a variational formulation of the~{FitzHugh-Nagumo}\xspace equations so that a critical point corresponds to a traveling pulse,
provided its wave speed also satisfies an auxiliary scalar algebraic equation.
The admissible set ${\mathcal{A}}\xspace$ in which we look for the critical point has to be described carefully.
In~\secref{sec:steepestDescent} we present a steepest descent algorithm which, together with the
auxiliary scalar algebraic equation, computes a traveling pulse profile as well as its wave speed.
The details in~\secref{sec:vf}-\secref{sec:steepestDescent} are presented for
one dimension of space. In~\secref{sec:alg2D}, we explain the extension to two
dimensions, which requires fairly minor modifications that are standard for
variational analyses.
In~\secref{sec:results_1d} we present, compare and perform some checks on numerical results from our algorithm in one dimension of space.
The fastest traveling pulse speed has an asymptotic limit as $d \to 0$, see \cite{CC2015}; we check our numerical wave speed against this
theoretical result and find excellent agreement.
In~\secref{sec:results_2d} we present numerical results for two-dimensional
domains and find as many as four traveling pulse solutions (with different wave speeds)
for a single choice of
parameters.
A bifurcation separates these pulses qualitatively into two distinct groups, but the
algorithm automatically computes them all without any special or prior knowledge of the pulse profiles.
In~\secref{sec:summary} we give a short summary of our results
and discuss some future work. In the remainder of this paper, the
terms `pulse', `spot' and `dissipative soliton' essentially mean
the same thing. We use `pulse' in 1D, following the mathematical literature. In 2D, if
the pulse is stable, we call it a (dissipative) soliton to emphasize its localized
structure with a particle-like property.
\section{A variational formulation for traveling pulse} \label{sec:vf}
In this section we take $n=1$ for~\eqref{eqn:fhn1}; the case $n=2$ is handled
in Section~\ref{sec:alg2D}. Following \cite[Theorem 1.1]{CC2015}
we impose the restriction $0<\gamma <4/(1-\beta)^2$.
This is a necessary and sufficient condition for the straight line $u=\gamma v$ to cut the curve $v=f(u)$ only at the origin in the
$(u,v)$ plane.
This guarantees that $(u,v)=(0,0)$ is the only constant-state equilibrium
solution and hence eliminates the possibility of a traveling front.
Under such a condition the cited Theorem 1.1 ensures that a traveling pulse solution exists when $d$ is sufficiently small with fixed
$\gamma$ and $\beta$.
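This nullcline condition can be checked numerically. The sketch below uses the illustrative parameter choices $\beta=1/4$ and $\gamma=1/16$ from~\secref{sec:robustExample}, together with the cubic $f(u)=u(u-\beta)(1-u)$ (the factored form of $-{\mathcal{F}}\xspace'$ from~\eqref{eqn:F}):

```python
import numpy as np

# Check (for sample parameters) that u = gamma*v intersects v = f(u) only at
# the origin when gamma < 4/(1-beta)^2, with the cubic f(u) = u(u-beta)(1-u).
beta, gamma = 0.25, 1.0 / 16.0
print(gamma < 4.0 / (1.0 - beta) ** 2)   # the condition holds
# Nonzero intersections would solve gamma*(u-beta)*(1-u) = 1, i.e. the
# quadratic -gamma*u^2 + gamma*(1+beta)*u - (gamma*beta + 1) = 0.
roots = np.roots([-gamma, gamma * (1.0 + beta), -(gamma * beta + 1.0)])
print(np.all(np.abs(roots.imag) > 0))    # only complex roots: origin is unique
```

The quadratic's discriminant is $\gamma[\gamma(1-\beta)^2-4]$, which is negative exactly when $\gamma<4/(1-\beta)^2$.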
We look for traveling pulse solutions $(u(x,t),v(x,t))\in \mathbb{R}^2$ of~\eqref{eqn:fhn1}
with $-\infty <x<\infty$ and $t\geq 0$. For a wave speed $c$, which is not yet known, we require that
\begin{equation}
u(x,t) = \tilde{u} (c(x-ct)) \ \ \text{and} \ \ v(x,t) = \tilde{v} (c(x-ct))
\label{eqn:fhn2}
\end{equation}
for some smooth functions $\tilde{u}: {\mathbb R} \to {\mathbb R}$ and $\tilde{v}: {\mathbb R} \to {\mathbb R}$.
Dropping the tilde in the notation and using $x$ to denote $\xi= c(x-ct)$,
the traveling pulse problem is to find $(u,v,c)$ satisfying
\begin{equation}
\begin{array}{rl}
\displaystyle dc^2 \partialxx{u} +dc^2 \partialx{u} +f(u)-v &=0 , \\
\displaystyle c^2 \partialxx{v} +c^2 \partialx{v}+u-\gamma \, v &=0
\end{array}
\label{eqn:fhn3}
\end{equation}
with $(u,v)\to (0,0)$ as $|x|\to \infty$. Without loss of generality, we take $c>0$.
We introduce Hilbert spaces ${L^2_{ex}}\xspace({\mathbb R})$ and ${H^1_{ex}}\xspace({\mathbb R})$, corresponding to the
inner products
\begin{equation}
\begin{aligned}
\langle v,w\rangle_{{L^2_{ex}}\xspace} &\equiv \int_{\mathbb{R}} e^x vw \, dx \\
\text{and} \ \ \langle v,w\rangle_{{H^1_{ex}}\xspace} &\equiv \int_{\mathbb{R}} e^x \left\{ \partialx{v}\partialx{w} +vw \right\}\, dx ,
\end{aligned}
\label{eqn:innerProd}
\end{equation}
respectively. The induced norms are denoted by $\| \cdot \|_{{L^2_{ex}}\xspace}$ and
$\| \cdot \|_{{H^1_{ex}}\xspace}$.
A variational approach will be used to find weak solutions
$(u,v)\in ({H^1_{ex}}\xspace)^2$ to ~\eqref{eqn:fhn3}.
By solving (\ref{eqn:fhn3}b)
we write $v={\mathcal{L}_c}\xspace u$, where
${\mathcal{L}_c}\xspace : {L^2_{ex}}\xspace\to{L^2_{ex}}\xspace$ is a linear operator.
It can be verified
that ${\mathcal{L}_c}\xspace$ is a self-adjoint operator
on ${L^2_{ex}}\xspace({\mathbb R})$, i.e. $\langle u_1, {\mathcal{L}_c}\xspace u_2 \rangle_{{L^2_{ex}}\xspace} = \langle {\mathcal{L}_c}\xspace u_1, u_2 \rangle_{{L^2_{ex}}\xspace}$ for any
$u_1, u_2 \in {L^2_{ex}}\xspace({\mathbb R})$. A way to see this is to write (\ref{eqn:fhn3}b) as $c^2 (e^x v_i')' - \gamma e^x v_i= - e^x u_i$
for $i=1,2$; then it is easy to check $\langle u_1, v_2 \rangle_{{L^2_{ex}}\xspace}=\langle v_1, u_2 \rangle_{{L^2_{ex}}\xspace}$, provided
there is sufficient control on $u_i$ and $v_i$ so that the boundary terms at infinity arising from integration by parts can be discarded.
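This weighted self-adjointness survives a flux-form discretization exactly, which gives a quick numerical check; in the sketch below the grid, parameters, and test functions are illustrative choices, and zero Dirichlet data on a truncated interval stand in for the decay at infinity.

```python
import numpy as np

# Discretize c^2 (e^x v')' - gamma e^x v = -e^x u on a truncated interval with
# zero Dirichlet data, using a flux-form difference so the stiffness matrix A
# is symmetric; then L_c u = solve(A, -e^x u) is self-adjoint in the
# discrete weighted inner product. All parameter choices are illustrative.
c, gamma = 1.0, 1.0 / 16.0
a, b, N = -6.0, 6.0, 300
x = np.linspace(a, b, N + 2)[1:-1]          # interior nodes
h = x[1] - x[0]
ep, em = np.exp(x + h / 2.0), np.exp(x - h / 2.0)
A = np.diag(-(ep + em)) + np.diag(ep[:-1], 1) + np.diag(em[1:], -1)
A = c**2 * A / h**2 - gamma * np.diag(np.exp(x))

E = np.exp(x)                                # weight of the L^2_ex inner product
Lc = lambda u: np.linalg.solve(A, -E * u)
ip = lambda p, q: h * np.sum(E * p * q)      # discrete <.,.> in L^2_ex

u1 = np.exp(-x**2)
u2 = x * np.exp(-(x - 1.0) ** 2)
print(abs(ip(u1, Lc(u2)) - ip(Lc(u1), u2)))  # near machine precision
```

The key point mirrors the continuous argument: the flux-form matrix $A$ is symmetric, so $E A^{-1} E$ is symmetric and the weighted inner products agree.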
For any given $c>0$,
consider the functional $J_c : {H^1_{ex}}\xspace \to \mathbb{R}$ defined by
\begin{equation}
J_c (w) \equiv
\int_{\mathbb{R}} e^x \left\{ \frac{dc^2}{2} w'^{\,2}
+ \frac{1}{2} w\, {\mathcal{L}_c}\xspace w +{\mathcal{F}}\xspace (w) \right\}\, dx,
\label{eqn:Jc}
\end{equation}
where
\begin{equation} \label{eqn:F}
{\mathcal{F}}\xspace (\xi) = -\int_0^\xi f(\tau) \, d\tau = \frac{\xi^4}{4} -\frac{(1+\beta) \xi^3}{3}
+\beta \frac{\xi^2}{2} .
\end{equation}
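As a consistency check of~\eqref{eqn:F}, note that $-{\mathcal{F}}\xspace'$ must recover the nonlinearity $f$; the following sketch compares the expanded derivative with its factored cubic form for an illustrative $\beta$:

```python
import numpy as np

# Consistency check of (eqn:F): -F'(xi) = -xi^3 + (1+beta)*xi^2 - beta*xi,
# which factors as the cubic f(xi) = xi*(xi - beta)*(1 - xi).
beta = 0.25
xi = np.linspace(-2.0, 2.0, 401)
minus_Fprime = -(xi**3) + (1.0 + beta) * xi**2 - beta * xi
f = xi * (xi - beta) * (1.0 - xi)
print(np.max(np.abs(f - minus_Fprime)))  # zero up to rounding
```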
Making use of the self-adjointness of ${\mathcal{L}_c}\xspace$ on ${L^2_{ex}}\xspace({\mathbb R})$, the Fr\'echet derivative of $J_c$ is given by
\begin{equation}
J_c' (w)\phi \equiv \int_{\mathbb{R}} e^x \left\{
dc^2 w' \,\phi' +{\mathcal{L}_c}\xspace w\, \phi -f(w)\phi \right\} \, dx
\ \ \ \text{for all} \ w, \phi \in {H^1_{ex}}\xspace({\mathbb R}) .
\label{eqn:JcPrime}
\end{equation}
A critical point $u$ of $J_c$ in ${H^1_{ex}}\xspace({\mathbb R})$ will satisfy the Euler-Lagrange equation associated with $J_c$, namely
\begin{equation} \label{eqn:intDiff}
- dc^2 (e^x u')' - e^x f (u) + e^x {\mathcal{L}_c}\xspace u=0 \;.
\end{equation}
This integro-differential equation is equivalent to the FitzHugh-Nagumo equations~\eqref{eqn:fhn3}.
Recalling that ${H^1_{ex}}\xspace({\mathbb R}) \subset C({\mathbb R})$, we are now ready for the following definition.
\begin{defn} A function $w\in C(\mathbb{R})$ is in the class ${-/+/-}\xspace$ if
there exist $-\infty\leq x_1 \leq x_2 \leq \infty$ such that (a)
$w(x)\leq 0$ for all $x\in (-\infty,x_1] \cup [x_2,\infty)$ and (b)
$w(x)\geq 0$ for all $x\in [x_1,x_2]$.
\label{def:MPM}
\end{defn}
\begin{remark} \label{remark1}
(a) In the above definition, the choice of $x_1,x_2$ is not necessarily unique.
If $x_1=-\infty$ and $x_2=\infty$, then $w \geq 0$ on the real line.
If $x_1=x_2=\infty$, then $w \leq 0$ on the real line. Both examples
are included in the
class $-/+/-$. \\
(b) A function $w$ is said to change sign twice, if
$w \leq 0$ on $(-\infty,x_1] \cup [x_2, \infty)$, $w \geq 0$ on $(x_1,x_2)$, and $w\not \equiv 0$ on each of these
three intervals.
\end{remark}
As $\beta<1/2$, there is a unique $\beta_1$ such that
$\beta<\beta_1<1$ with
${\mathcal{F}}\xspace (\beta_1)=0$.
In addition, we take a constant $M_1 = M_1 (\gamma) \geq 1$ such that
$f(\xi ) \geq 1/\gamma$ for all $\xi \leq -M_1$. A class ${\mathcal{A}}\xspace$ of
admissible functions to be employed in the variational argument is defined as
follows.
\begin{defn}
\begin{equation} \label{eqn:A}
{\mathcal{A}}\xspace \equiv \left\{ w\in {H^1_{ex}}\xspace : \| w\|_{{H^1_{ex}}\xspace}^2 = 2, \,
-M_1 \leq w \leq 1, w \; \mbox{is in the class}\; {-/+/-}\xspace\right\} .
\end{equation}
\end{defn}
We restrict attention to $J_c: {\mathcal{A}}\xspace \to {\mathbb R}$.
A global minimizer is known to exist in ${\mathcal{A}}\xspace$ for any fixed $c>0$. We can therefore let
\begin{equation} \label{eqn:jc1}
{\mathcal{J}(c)}\xspace \equiv \min_{w\in {\mathcal{A}}\xspace} J_c (w) , \ \ \text{for all} \ c>0.
\end{equation}
When $d \leq d_0$ for some sufficiently small $d_0$, it can be shown that
${\mathcal{J}(c)}\xspace<0$ when $c$ is small, ${\mathcal{J}(c)}\xspace>0$ when $c$ is large, and $\cal J$ is a continuous function.
By the intermediate value theorem there is at least one $c_0>0$ such that
$\mathcal{J} (c_0) =0$. Suppose $u_0$ is a
global minimizer in ${\mathcal{A}}\xspace$ when $c=c_0$, and let $v_0={\cal L}_{c_0} u_0$; then $(u_0,v_0)$
can be shown to be smooth, $(u_0,v_0,c_0)$ is
a traveling pulse solution, and $u_0$ is an unconstrained critical point of $J_{c_0}$.
We will give a heuristic argument
in subsection~\ref{sec:zeroJc}
on why ${\mathcal{J}(c)}\xspace=0$ determines the wave speed. The rigorous proof is in \cite{CC2015}.
For any fixed $c$
we will construct a steepest descent algorithm in the next section to find ${\mathcal{J}(c)}\xspace$ and the corresponding minimizer $u$ of $J_c$.
A spatial translation of any traveling pulse solution
remains a traveling pulse. This continuum of solutions will induce both theoretical and numerical difficulties.
The condition $\|u_0 \|_{{H^1_{ex}}\xspace}^2=2$ in ${\mathcal{A}}\xspace$ ensures that $u_0(\cdot+a)$ is not in ${\mathcal{A}}\xspace$
for any non-zero $a \in {\mathbb R}$
and hence eliminates translations of the solution.
\begin{remark} \label{remark2}
The admissible set ${\mathcal{A}}\xspace$ defined above differs slightly from that in (2.9) of \cite{CC2015}, because
\begin{enumerate}
\item the weighted $H^1$ norm $\| w \|_{{H^1_{ex}}\xspace}$ employed here is equivalent to
the norm $\sqrt{\int_{\mathbb R} e^x w_x^2 \,dx}$ used in \cite{CC2015}. The new norm may be better
when we perform numerical computations after truncating the real line to a finite domain.
\item Subsequent analysis in \cite{CC2015} leads to tighter bounds on a local minimizer of $J_c$;
it allows us to use the simpler
admissible set \eqref{eqn:A} for computational purposes.
\end{enumerate}
\end{remark}
Suppose there are multiple traveling pulse solutions in ${\cal A}$ for the same physical parameters. If $c_0$ is
the fastest wave speed among these solutions,
it follows from Theorem 1.3 in \cite{CC2015} that
\begin{equation} \label{eqn:fastSpeed}
d c_0^2 \to \frac{(1-2\beta)^2}{2} \ \ \text{for the fastest wave as} \ d \to 0.
\end{equation}
Our proposed numerical algorithm will compute all the traveling pulses irrespective of whether they are fast
or slow waves; indeed we do find multiple waves in some physical parameter regime.
We will employ~\eqref{eqn:fastSpeed} to check the accuracy for our algorithm.
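For the parameters of~\secref{sec:robustExample}, the asymptotic relation~\eqref{eqn:fastSpeed} gives a concrete reference value for the fastest speed; this small computation (with the parameter choices of that example) illustrates the kind of check we perform.

```python
import math

# Reference value from (eqn:fastSpeed): d*c0^2 -> (1-2*beta)^2/2 as d -> 0,
# so c0 is approximately (1-2*beta)/sqrt(2*d) for small d. Parameters are
# those of the robustness example (beta = 1/4, d = 0.0005).
beta, d = 0.25, 0.0005
c0_asym = (1.0 - 2.0 * beta) / math.sqrt(2.0 * d)
print(round(c0_asym, 3))  # 15.811
```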
To investigate the stability of our computed traveling pulses, we feed them (after rescaling to the original variables)
as initial conditions
into the time-dependent equations. Both stable and unstable traveling pulses are found
in the admissible set ${\mathcal{A}}\xspace$. The parabolic
solver serves as an independent check for our algorithm.
\subsection{Why the auxiliary equation ${\cal J}(c_0)=0$ determines the wave speed}\label{sec:zeroJc}
It is not immediately clear why ${\cal J}(c_0)=0$ should imply that $c_0$ is the traveling pulse speed.
We give a heuristic argument in this subsection to aid the understanding of our algorithm.
Let $c>0$ and $u \in {\mathcal{A}}\xspace$ be a minimizer of $J_c$.
Suppose
\begin{enumerate}
\item the inequality constraints on $u$ are inactive, i.e. $-M_1<u<1$ on the real line;
\item the oscillation requirement $u \in {-/+/-}\xspace$ is inactive, i.e. there is no interval on which
$u=0$. This leads to a smooth $u$;
\item $u$ and its derivative decay rapidly as $x \to \infty$ and remain bounded
as $x \to -\infty$.
\end{enumerate}
We introduce a Lagrange multiplier $\lambda$ to remove the last remaining equality constraint $\| u \|_{{H^1_{ex}}\xspace}^2=2$
in ${\mathcal{A}}\xspace$, therefore
$u$ is an unconstrained critical point of ${\cal I}_c$ with
\begin{equation} \label{eqn:Ic}
{\cal I}_c(w)=J_c(w) + \lambda \left( \int_{\mathbb R} e^x \frac{1}{2} \left( w'^{\,2} + w^{2}\right) \,dx -1 \right) \;.
\end{equation}
Hence for all $\phi \in {H^1_{ex}}\xspace$
\begin{equation} \label{eqn:icPrime}
0= {\cal I}'_c(u)\phi=J'_c(u) \phi + \lambda \int_{\mathbb R} e^x \left( u' \, \phi' + u\phi \right) \, dx\;.
\end{equation}
Set $\phi=u'$. Using \eqref{eqn:JcPrime} the above equation can be reduced to
\begin{equation*}
\int_{\mathbb R} e^x \, \left\{ \frac{\partial}{\partial x} \left( \frac{dc^2}{2} u'^{\,2} + {\mathcal{F}}\xspace(u)+ \frac{1}{2} u \, {\mathcal{L}_c}\xspace u
\right) + \frac{\lambda}{2} \frac{\partial}{\partial x} (u'^2 +u^2) \right\} \, dx =0
\end{equation*}
by using the self-adjointness of ${\mathcal{L}_c}\xspace$ on ${L^2_{ex}}\xspace({\mathbb R})$.
An integration by parts leads to $J_c(u)+ \lambda=0$ due to the assumed asymptotic behavior of $u$ and its derivative
for large $|x|$.
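In more detail, write
\begin{equation*}
G \equiv \frac{dc^2}{2} u'^{\,2} + {\mathcal{F}}\xspace (u) + \frac{1}{2} u \, {\mathcal{L}_c}\xspace u + \frac{\lambda}{2} \left( u'^{\,2} + u^2 \right) ,
\end{equation*}
so that the displayed identity reads $\int_{\mathbb R} e^x \, \partial_x G \, dx =0$. Integration by parts gives
\begin{equation*}
0 = \int_{\mathbb R} e^x \, \partial_x G \, dx = \left[ e^x G \right]_{-\infty}^{\infty} - \int_{\mathbb R} e^x G \, dx = - \int_{\mathbb R} e^x G \, dx ,
\end{equation*}
since $e^x \to 0$ with $G$ bounded as $x \to -\infty$, and $G$ decays rapidly as $x \to \infty$. Because $\int_{\mathbb R} e^x \, \frac{\lambda}{2} ( u'^{\,2} + u^2 ) \, dx = \frac{\lambda}{2} \| u \|^2_{{H^1_{ex}}\xspace} = \lambda$, the vanishing of $\int_{\mathbb R} e^x G \, dx$ is exactly $J_c(u)+\lambda=0$.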
Suppose $c_0$ satisfies $J_{c_0}(u)=0$, then $\lambda=0$. Write this $u$ as $u_0$.
We now have $u_0$ being a minimizer of ${\cal I}_{c_0}$ and $v_0={\cal L}_{c_0} u_0$.
As $u_0$ is an unconstrained critical point of ${\cal I}_{c_0}$,
it will satisfy the Euler-Lagrange equation associated with ${\cal I}_{c_0}$. With $\lambda=0$ in \eqref{eqn:icPrime}, this
Euler-Lagrange equation is the same as $J'_{c_0}(u_0)=0$, which simplifies to~\eqref{eqn:intDiff}.
In other words $(u_0,v_0,c_0)$ satisfies the FitzHugh-Nagumo
equations~\eqref{eqn:fhn3}.
\subsection{Minimizer $u$ is positive somewhere} \label{sec:signs}
\begin{lem} \label{lem:positive}
Let $c>0$ and $u$ be a global minimizer of $J_c: {\mathcal{A}}\xspace \to {\mathbb R}$. Then $\max u>0$.
\end{lem}
\begin{proof}
Suppose $u \leq 0$ for all $x$. As $u \not \equiv 0$, we have $\min u <0$. Define $\tilde{u} \equiv -u$.
Then $\tilde{u} \in {\mathcal{A}}\xspace$ is non-negative and
\begin{eqnarray*}
J_c(\tilde{u})&=&
\int_{\mathbb{R}} e^x \left\{ \frac{dc^2}{2} \tilde{u}'^{\,2}
+ \frac{1}{2} \tilde{u}\, {\mathcal{L}_c}\xspace \tilde{u} +{\mathcal{F}}\xspace (\tilde{u}) \right\}\, dx \\
&<& \int_{\mathbb{R}} e^x \left\{ \frac{dc^2}{2} {u}'^{\,2}
+ \frac{1}{2} {u}\, {\mathcal{L}_c}\xspace {u} +{\mathcal{F}}\xspace ({u}) \right\}\, dx \\
&=& J_c(u) ,
\end{eqnarray*}
because from \eqref{eqn:F} we have ${\mathcal{F}}\xspace(\xi)> {\mathcal{F}}\xspace(-\xi)$ if {$\xi < 0$},
while the gradient energy {and nonlocal energy terms remain the same.} This contradicts $u$ being a global minimizer in ${\mathcal{A}}\xspace$.
$\Box$
\end{proof}
Starting from an initial guess $w^0$, as the successive iterates $w^{(n)}$ of the steepest descent algorithm
proposed in the next section
approach the minimizer $u$, the above lemma ensures that
$\max w^{(n)}>0$ for large enough $n$.
If $c=c_0$ with ${\cal J}(c_0)=0$, we have a stronger result: $\max u_0 \to 1$ as $d \to 0$
\cite[Theorem 8.6]{CC2015}, and $u_0$ changes sign exactly twice.
\subsection{Monotonicity of ${\mathcal{J}(c)}\xspace$ with respect to $d$} \label{sec:monotone}
This is a simple observation, but will serve as a useful guide to choose a proper range of $d$ for traveling pulse
phenomena in our numerics. It is clear that $J_c$ depends on the parameter $d$
besides $c$. Fix any $c$, $\gamma$, $\beta$ and $w \in {\mathcal{A}}\xspace$. Suppose $d_2 \geq d_1$;
then $J_c(w;d_2)-J_c(w;d_1)= \frac{(d_2-d_1)c^2}{2} \int_{\mathbb R} e^x w'^2 \, dx \geq 0$, so that
${\cal J}(c;d_2) \geq {\cal J}(c;d_1)$.
Now let $w$ be a minimizer of $J_c(\cdot \,; d_1)$ when $d=d_1$. It follows that
\begin{eqnarray*}
J_c(w;d_2) & \leq &{\cal J}(c;d_1)+ \frac{(d_2-d_1)c^2}{2} \int_{\mathbb R} e^x (w'^2+w^2) \, dx \\
& = & {\cal J}(c;d_1)+ (d_2-d_1)c^2.
\end{eqnarray*}
Hence
\begin{equation} \label{eq:orderJ}
{\cal J}(c;d_1) \leq {\cal J}(c;d_2) \leq {\cal J}(c;d_1) + (d_2-d_1) c^2 \;.
\end{equation}
\section{A steepest-descent method for computing ${\mathcal{J}(c)}\xspace$}\label{sec:steepestDescent}
We continue in this section with $n=1$; one dimension in space
for~\eqref{eqn:fhn1}, with the case $n=2$ discussed in Section~\ref{sec:alg2D}.
Given a $c>0$,
we would like to find a global minimizer $u$ of $J_c $ in the admissible set ${\mathcal{A}}\xspace$ so that we can compute
${\mathcal{J}(c)}\xspace=J_c(u)$.
Qualitative features of $u$ have been given in
\cite[Theorem 1.2]{CC2015} when $d$ is small; however quantitatively this
is only a rough guess of what the minimizer profile would be like and a global
algorithm is warranted.
At any $w\in {\mathcal{A}}\xspace$,
since we seek a global minimizer,
it is natural to use the steepest descent direction in the tangent space (with the ${H^1_{ex}}\xspace$ norm as the metric)
at $w$ on the manifold
${\cal M} \equiv \{p \in {H^1_{ex}}\xspace: \|p \|^2_{{H^1_{ex}}\xspace}=2 \}$.
Following the steepest descent direction will eventually lead us to the minimizer $u$.
If we did not need to stay on the manifold, then for any small change $\epsilon \phi$ we would have
$J_c(w+\epsilon \phi)-J_c(w) = \epsilon J_c'(w) \phi + O(\epsilon^2)$, so the steepest descent direction would be obtained by minimizing
$J'_c(w) \phi$ over $\phi$ of unit norm. In our case
a modification is necessary to stay on the manifold ${\cal M}$.
Let the steepest descent direction in our case at any given $c>0$ and $w \in {\mathcal{A}}\xspace$ be denoted
by $q=q(w,c)$, which is normalized so that $\| q\|_{{H^1_{ex}}\xspace}^2 =2$.
If $\epsilon$ is small, we want $\tilde{w}=w+\epsilon q$ to satisfy
$\| \tilde{w} \|_{{H^1_{ex}}\xspace}^2=2$ to leading order of $\epsilon$. This amounts to
enforcing the orthogonality condition $\langle w,q \rangle_{{H^1_{ex}}\xspace} =0$ so that $q$ is a tangent vector
on the manifold ${\cal M}$.
A small (second order) correction on $\tilde{w}$, to be described later, will give
a new $w_{new}$ on the manifold ${\cal M}$ and result in $J_c(w_{new}) < J_c(w)$.
Following the idea advocated in \cite{CM1993},
we introduce Lagrange multipliers $\lambda$ and $\mu$
to remove the equality constraints $\| q \|^2_{{H^1_{ex}}\xspace}=2$ and $\langle w, q \rangle_{{H^1_{ex}}\xspace}=0$. Therefore
$q$ can be found as an unconstrained critical point of
\begin{equation}
\begin{aligned}
K_c \left(\phi \right) &\equiv J_c' (w)\phi
+\lambda \left(\frac{1}{2} \| \phi \|_{{H^1_{ex}}\xspace}^2 -1\right)
+\mu \langle w,\phi \rangle_{{H^1_{ex}}\xspace}
\ \ \text{for all} \ \phi \in {H^1_{ex}}\xspace .
\end{aligned}
\label{eqn:K}
\end{equation}
Hence we have
\begin{equation} \label{eqn:kcPrime}
K'_c(q) =0
\end{equation}
with
\begin{equation}
K'_c(\phi )p = J_c' (w)p
+\lambda \langle \phi,p \rangle_{{H^1_{ex}}\xspace} +\mu \langle w,p \rangle_{{H^1_{ex}}\xspace} \ \text{for all} \ p\in {H^1_{ex}}\xspace.
\label{eqn:qWeak1}
\end{equation}
Combining \eqref{eqn:kcPrime} and \eqref{eqn:qWeak1}, we arrive at
\begin{equation} \label{eqn:qWeak}
J_c' (w)p
+\lambda \langle q,p \rangle_{{H^1_{ex}}\xspace} +\mu \langle w,p \rangle_{{H^1_{ex}}\xspace}=0 \ \text{for all} \ p\in {H^1_{ex}}\xspace.
\end{equation}
Upon inserting $p=w$ in~\eqref{eqn:qWeak},
\begin{equation*}
J_c' (w)w
+\lambda \langle q,w\rangle_{{H^1_{ex}}\xspace} +\mu \|w\|^2_{{H^1_{ex}}\xspace} = 0.
\end{equation*}
As $\|w\|^2_{{H^1_{ex}}\xspace}=2$ and $\langle q,w \rangle_{{H^1_{ex}}\xspace}=0$, it is immediate that
\begin{equation}
\mu =-\frac{1}{2} J_c' (w)w ,
\label{eqn:muValue}
\end{equation}
which can be calculated, as $w \in {\mathcal{A}}\xspace$ is given.
Now we are ready to solve the linear equation~\eqref{eqn:qWeak} for $\lambda q$, which is parallel to the search direction.
It is not necessary to calculate $\lambda$. Indeed by choosing $p=\lambda q$ in~\eqref{eqn:qWeak},
$J_c'(w)(\lambda q) = -\lambda^2 \| q\|_{{H^1_{ex}}\xspace}^2 <0$. Thus we have
\begin{equation} \label{eqn:sign}
J_c (w+\epsilon \lambda q)< J_c (w) \ \ \text{for small enough values} \ \epsilon>0 \;.
\end{equation}
We rewrite~\eqref{eqn:qWeak} in strong form as follows:
\begin{equation*}
- \frac{\partial}{\partial x}\left( e^x \frac{\partial}{\partial x}(\lambda q + \mu w) \right)
+ e^x (\lambda q + \mu w) - \frac{\partial}{\partial x} \left( dc^2 e^x \partialx{w} \right) - e^x f(w) + e^x {\mathcal{L}_c}\xspace w=0
\end{equation*}
with $\lambda q$ being the only unknown in this equation and
$|q(x)|\to 0$ as $|x|\to \infty$.
In order to reduce numerical errors in computing
$\lambda q$ later on, we
introduce the auxiliary function $w^* = \lambda q +(\mu+dc^2) w$, which then
solves
\begin{equation}
-\Delta {w^*}-\partialx{w^*} +w^* = dc^2 w-{\mathcal{L}_c}\xspace (w)+f(w)
\label{eqn:wStar}
\end{equation}
with $|w^* (x)|\to 0$ as $|x|\to \infty$. One can write down its Green's function and a unique solution $w^*$ exists.
The map ${\mathcal{Q}}\xspace :{H^1_{ex}}\xspace \to {H^1_{ex}}\xspace$ such that $w\to {\mathcal{Q}}\xspace (w)\equiv w^*$ is therefore
well-defined.
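As a sketch of how~\eqref{eqn:wStar} can be solved on a truncated domain, the finite-difference example below uses a manufactured right-hand side whose exact solution is known; the grid, domain, and zero Dirichlet data (standing in for decay at infinity) are illustrative choices rather than the asymptotic boundary conditions derived in~\secref{sec:v_asym}.

```python
import numpy as np

# Finite-difference sketch for (eqn:wStar) in 1D: -w*'' - w*' + w* = g with
# decay at infinity, tested on a manufactured solution. Domain and grid are
# illustrative; g is chosen so that the exact solution is exp(-x^2).
a, b, N = -8.0, 8.0, 800
x = np.linspace(a, b, N + 1)
h = x[1] - x[0]
exact = np.exp(-x**2)                              # decays fast at x = a, b
g = (3.0 + 2.0 * x - 4.0 * x**2) * np.exp(-x**2)   # = -exact'' - exact' + exact

n = N - 1                                          # interior unknowns
main = np.full(n, 2.0 / h**2 + 1.0)
upper = np.full(n - 1, -1.0 / h**2 - 1.0 / (2.0 * h))
lower = np.full(n - 1, -1.0 / h**2 + 1.0 / (2.0 * h))
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
w = np.zeros(N + 1)                                # zero Dirichlet data
w[1:-1] = np.linalg.solve(A, g[1:-1])
print(np.max(np.abs(w - exact)))                   # second-order accurate in h
```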
Nonlinear equations such as this can only be solved numerically by an iterative scheme. Instead of writing $w^{(n)}$ to denote
the $n^{th}$ iterate, we will henceforth employ $w^n$ for notational simplicity. It should be clear from the context that
we do not mean the $n^{th}$ power of $w$. Similarly, $\alpha^n$ will be used for the descent step size instead of $\alpha^{(n)}$.
Let $c>0$ be fixed. Given an approximation $w^n\in {H^1_{ex}}\xspace$ for the minimizer
of $J_c$, we solve ${\mathcal{Q}}\xspace (w^n)$ numerically and update by means of
\begin{equation}
w^{n+1} =w^n+\alpha^n \left( {\mathcal{Q}}\xspace(w^n)-(\mu+dc^2) w^n \right).
\label{eqn:update1}
\end{equation}
Observe that ${\mathcal{Q}}\xspace(w^n)-(\mu+dc^2) w^n =\lambda^n q^n$, and $0<\alpha^n<1$ is some descent step
size, to be discussed later. The positivity of $\alpha^n$ is a consequence of~\eqref{eqn:sign}.
Two problems need to be addressed. One is that $w^{n+1}$ need not be in the oscillation class ${-/+/-}\xspace$.
The other problem is that even if $\| w^n\|_{{H^1_{ex}}\xspace}^2 =2$, the
constraint $\| w^{n+1}\|_{{H^1_{ex}}\xspace}^2 =2$ need not be satisfied. In either case,
$w^{n+1}$ need not be in the class ${\mathcal{A}}\xspace$. We introduce two additional operators
to address these issues.
The first operator $r$ clips any portions of the wave profile that may become
positive outside of the region $[x_1,x_2]$ in~\defref{def:MPM}. The
clipped profile will then develop kinks and become non-smooth.
\begin{defn}\label{defn:clipDomain}
\[
{C}_0^+ \equiv \left\{ w\in {C}(\mathbb{R} ): \
w(x)>0 \ \text{for some} \ x\in \mathbb{R} \ \text{and} \ \lim_{|x|\to \infty} w(x) =0 \right\} .
\]
\end{defn}
\begin{defn}\label{defn:clip}
Let $w\in {C}_0^+$ be given and define
\[
\overline{x} \equiv \max \left\{ x\in\mathbb{R}: \
w(x) =\max_{y\in\mathbb{R}} w(y) \right\} .
\]
Let $(x_1,x_2)$ be the largest open interval containing $\overline{x}$ such
that $w(x)>0$ for all $x\in (x_1,x_2)$. We define the clipping
operator $r: {C}_0^+ \to {-/+/-}\xspace$ such that for any $w\in {C}_0^+$,
\begin{equation}
r(w)(x) = \left\{\begin{array}{cc}
w(x), & \ \text{if} \ x\in (x_1,x_2) \ \text{or} \ w(x)\leq 0, \\
0,& \ \text{otherwise} .
\end{array}\right.
\end{equation}
In case $w \leq 0$ everywhere, we define $r(w)=w$.
\end{defn}
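On a spatial grid, the clipping operator admits a direct implementation; the sketch below (test profile and grid are illustrative) keeps the positivity interval around the rightmost global maximizer, zeroes other positive excursions, and leaves negative parts untouched.

```python
import numpy as np

# Grid-based sketch of the clipping operator r: keep the positivity interval
# around the rightmost global maximizer, zero out any other positive
# excursions, and leave negative parts of the profile unchanged.
def clip(w):
    w = w.copy()
    i_bar = np.flatnonzero(w == w.max())[-1]   # rightmost global maximizer
    if w[i_bar] <= 0:                          # w <= 0 everywhere: r(w) = w
        return w
    lo = i_bar                                 # grow the interval (x1, x2)
    while lo > 0 and w[lo - 1] > 0:
        lo -= 1
    hi = i_bar
    while hi < len(w) - 1 and w[hi + 1] > 0:
        hi += 1
    pos = w > 0
    pos[lo:hi + 1] = False                     # positives outside (x1, x2)
    w[pos] = 0.0
    return w

x = np.linspace(-10, 10, 2001)
w = np.sin(x) * np.exp(-0.1 * np.abs(x))       # several positive bumps
rw = clip(w)
print(rw.min() < 0, (rw > 0).sum() < (w > 0).sum())
```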
The second operator is a pure translation of a given profile along the $x$-axis. Such
a shift operator is all that is required to enforce the constraint
$\| w^{n+1} \|_{{H^1_{ex}}\xspace}^2 =2$.
\begin{defn}\label{defn:shift}
The shift operator $s:{H^1_{ex}}\xspace \to {H^1_{ex}}\xspace$ is defined such that for any $w\in {H^1_{ex}}\xspace$,
\begin{equation}
\begin{aligned}
s(w) &= w(\cdot-\log \frac{1}{\omega} ),
\end{aligned}
\end{equation}
where $ \omega = \frac{1}{2} \|w\|^2_{{H^1_{ex}}\xspace}$.
\end{defn}
\begin{lem} \label{lem:shift}
Given any $w\in {H^1_{ex}}\xspace$, $\| s(w)\|_{{H^1_{ex}}\xspace}^2 =2$.
\end{lem}
\begin{proof}
Let $\omega=\frac{1}{2} \|w\|^2_{{H^1_{ex}}\xspace}$ and
$a=\log (1/\omega)$. It follows from~\defref{defn:shift} that
\begin{equation*}
\begin{aligned}
\| s(w)\|^2_{{H^1_{ex}}\xspace}
&=\int_{\mathbb{R}} e^x \left\{ \left(w'(x-a) \right)^2 +\left(w(x-a)\right)^2 \right\}\, dx .
\end{aligned}
\end{equation*}
Via the change of variables $y=x-a$,
\begin{equation*}
\begin{aligned}
\| s(w)\|^2_{{H^1_{ex}}\xspace}
& =e^{a} \| w\|_{{H^1_{ex}}\xspace}^2 = 2.
\end{aligned}
\end{equation*}
\end{proof}
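Lemma~\ref{lem:shift} can also be verified by direct quadrature; in the sketch below the Gaussian profile and the quadrature grid are illustrative choices.

```python
import numpy as np

# Numerical check of the shift lemma: after translating by a = log(1/omega),
# the weighted H^1 norm squared equals 2. Profile and grid are illustrative.
w  = lambda x: np.exp(-x**2)
wp = lambda x: -2.0 * x * np.exp(-x**2)

def h1ex_sq(shift, half_width=9.0, n=36001):
    # integral of e^x ( w'(x-shift)^2 + w(x-shift)^2 ) over the support
    x = np.linspace(shift - half_width, shift + half_width, n)
    vals = np.exp(x) * (wp(x - shift) ** 2 + w(x - shift) ** 2)
    return float(np.sum(vals) * (x[1] - x[0]))

omega = 0.5 * h1ex_sq(0.0)
a = np.log(1.0 / omega)          # s(w) = w(. - a)
print(h1ex_sq(a))                # close to 2
```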
Given $c>0$,
our steepest descent algorithm to compute ${\mathcal{J}(c)}\xspace$ is as follows.
\begin{alg}\label{alg:alg1}
Choose fixed parameters $0<\theta <1$ (see below), a relative error
tolerance $0<\delta_1 <1$ and absolute error tolerances $0<\delta_2 <1$
{and
$0<\delta_3 <1$}.
Given an initial guess $w^0\in {\mathcal{A}}\xspace$ with $\max w^0>0$
and initial descent step size
$0<\alpha^0 <1$, we iterate as follows to generate updates $w^n$ for
$n=1,2,\ldots$.
\begin{enumerate}
\item Compute $v^n= {\mathcal{L}_c}\xspace w^n$.
\item Set $\mu^n =-\frac{1}{2} J_c'(w^n)w^n$.
\item Set $Q^n ={\mathcal{Q}}\xspace(w^n)$.
\item Set $\tilde{w}^{n+1} = w^n +\alpha^n \left( Q^n-(dc^2+\mu^n) w^n \right)$.
\item Set $w^{n+1}(x) = s\left(r\left(\tilde{w}^{n+1}\right)\right)$.
\item Check descent; if $J_c (w^{n+1})>J_c (w^{n})$ then replace
$\alpha^n \leftarrow \theta \alpha^n$ and go to (3).
\item Update the step size, $\alpha^{n+1}$.
\item Repeat (1)-(7) until
\begin{equation}
\left.
\begin{array}{c}
J_c (w^n)\leq J_c (w^{n+1})+\max \left\{ \delta_1 |J_c (w^{n+1})|, \delta_2 \right\} \\
\text{and} \qquad
\sup_{x\in \Omega} \left| w^{n+1} (x) - w^{n} (x) \right| \leq \delta_3
\end{array}\right\} .
\label{eqn:stop1}
\end{equation}
\end{enumerate}
\end{alg}
Heuristic methods for step (7) are discussed later.
The first part of the stopping criterion~\eqref{eqn:stop1} is implemented with
$\delta_2 \ll \delta_1$. This ensures that the relative error is small
when $|J_c (w^{n+1})|> \delta_2/\delta_1$. In practice, when
$|J_c (w^{n+1})|< \delta_2/\delta_1 \ll 1$ is very small we cannot enforce
the relative accuracy constraint due to round-off error effects. In such an event,
the criterion~\eqref{eqn:stop1} still requires the absolute error to be
small. We note that this latter case is near the regime ${\mathcal{J}(c)}\xspace=0$ that we are most
interested in. {The role of $\delta_3$ is to ensure that the profile
is converged completely in the tail region of a pulse (decaying to zero in the
direction opposite of the wave propagation). This is necessary because the
exponential weight in the functional $J_c$ greatly diminishes the effect of tail
perturbations on the energy. In other words, an accurate functional value is achieved
numerically before the tail is fully resolved. Numerically, the supremum of
$\left| w^{n+1} (x) - w^{n} (x) \right|$ is interpreted as the maximum over
all grid points.}
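The structure of Algorithm~\ref{alg:alg1} (tangent-space descent, backtracking, renormalization back onto the constraint manifold) can be illustrated on a finite-dimensional toy problem. Everything below (the quadratic energy, matrix, step sizes, tolerances) is an illustrative stand-in, not the FitzHugh-Nagumo functional; on the sphere $\|w\|^2=2$, the minimum of $\frac{1}{2}w^{T}Aw$ equals the smallest eigenvalue of $A$.

```python
import numpy as np

# Toy analogue of Algorithm 1: steepest descent for E(w) = 0.5 w^T A w on the
# manifold ||w||^2 = 2, with the gradient projected onto the tangent space,
# backtracking (step 6) and renormalization back onto the manifold (the role
# played by the shift operator s). The matrix A has spectrum 1, ..., 40.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = Q @ np.diag(np.arange(1.0, 41.0)) @ Q.T
E = lambda w: 0.5 * w @ A @ w

w = rng.standard_normal(40)
w *= np.sqrt(2.0) / np.linalg.norm(w)           # ||w||^2 = 2
alpha, theta = 0.1, 0.5
for _ in range(5000):
    grad = A @ w
    q = grad - 0.5 * (w @ grad) * w             # tangent part: <w, q> = 0
    if np.linalg.norm(q) < 1e-12:
        break
    while E(w - alpha * q) >= E(w) and alpha > 1e-14:
        alpha *= theta                          # backtracking, as in step (6)
    w_new = w - alpha * q
    w = np.sqrt(2.0) / np.linalg.norm(w_new) * w_new
print(abs(E(w) - 1.0))                          # gap to the smallest eigenvalue
```

Since $q\perp w$ implies $\|w-\alpha q\|\geq\sqrt{2}$, the renormalization only scales the iterate down and thus never increases the quadratic energy, mirroring how the shift $s$ restores the constraint without spoiling descent.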
\section{Extension of the method for two dimensions in space}\label{sec:alg2D}
Here we discuss the adaptation of Algorithm~\ref{alg:alg1} for the case of two
dimensions in space. There are still many gaps in the study of traveling waves
for the~{FitzHugh-Nagumo}\xspace equations in multiple dimensions. For algorithmic purposes, the key open issue is
the definition of the admissible set ${\mathcal{A}}\xspace$. It is not clear how to define a
class of functions like~${-/+/-}\xspace$ in multiple dimensions, so the clipping operator
is not defined in this case, presently. However, we have found in our
experiments that the clipping operation was not necessary to compute traveling
pulse solutions in two dimensions. It seems to be sufficient to take the step
size $\alpha^n$ in the algorithm to be small enough.
{The advantage of the clipping operator in 1D is to allow for larger step sizes.}
We look for traveling wave solutions $(u(x,y,t),v(x,y,t))\in\mathbb{R}^2$ of~\eqref{eqn:fhn1}
with $(x,y)\in\Omega=(-\infty,\infty)\times(-L ,L)$ and $t>0$.
Let
\begin{equation}
u(x,y,t) = \tilde{u} (c(x-ct),cy) \ \ \text{and} \ \ v(x,y,t) = \tilde{v} (c(x-ct),cy)
\label{eqn:fhn2_2d}
\end{equation}
for some smooth functions $\tilde{u}: \Omega \to {\mathbb R}$ and $\tilde{v}: \Omega \to {\mathbb R}$.
After dropping the tilde in the notation and using $(x,y)$ to denote $(c(x-ct),cy)$,
the traveling pulse problem is to find $(u,v,c)$ on the rescaled domain $\Omega^* = (-\infty,\infty)\times (-cL,cL)$ satisfying
\begin{equation}
\begin{array}{rl}
\displaystyle dc^2 \Delta u +dc^2 \partialx{u} +f(u)-v &=0 , \\
\displaystyle c^2 \Delta {v} +c^2 \partialx{v}+u-\gamma \, v &=0
\end{array}
\label{eqn:fhn3_2d}
\end{equation}
with $(u,v)\to (0,0)$ as $|x|\to \infty$ and $(u,v)=(0,0)$ on
$\partial\Omega^*$.
We introduce Hilbert spaces ${L^2_{ex}}\xspace({\Omega^*})$ and ${H^1_{ex}}\xspace({\Omega^*})$, corresponding to
the inner products
\begin{equation}
\begin{aligned}
\langle v,w\rangle_{{L^2_{ex}}\xspace} &\equiv \int_{\Omega^*} e^x vw \, dx \, dy \\
\text{and} \ \ \langle v,w\rangle_{{H^1_{ex}}\xspace} &\equiv \int_{\Omega^*} e^x \left\{ \nabla{v}\cdot \nabla{w} +vw \right\}\, dx \, dy ,
\end{aligned}
\label{eqn:innerProd_2d}
\end{equation}
respectively. The induced norms are again denoted by $\| \cdot \|_{{L^2_{ex}}\xspace}$ and
$\| \cdot \|_{{H^1_{ex}}\xspace}$. Let $W$ denote the subspace
\begin{equation}
W\equiv \left\{ v\in {H^1_{ex}}\xspace\, |\, T_0 (v)=0 \right\},
\label{eqn:W}
\end{equation}
where $T_0$ is the trace operator on ${H^1_{ex}}\xspace (\Omega^*)$.
The variational approach of Section~\ref{sec:vf} can be extended now to
find weak solutions $(u,v)\in W\times W$ to ~\eqref{eqn:fhn3_2d}.
As in the 1D case, we write $v={\mathcal{L}_c}\xspace u$ and the
functional $J_c:{H^1_{ex}}\xspace\to\mathbb{R}$ is defined as
\begin{equation}
J_c (w) \equiv
\int_{\Omega^*} e^x \left\{ \frac{dc^2}{2} \left| \nabla w\right|^{\,2}
+ \frac{1}{2} w\, {\mathcal{L}_c}\xspace w +{\mathcal{F}}\xspace (w) \right\}\, dx \, dy .
\label{eqn:Jc_2d}
\end{equation}
We restrict our attention to the following admissible set
in order to avoid a continuum of solutions due to translation:
\begin{equation}
{\mathcal{A}}\xspace \equiv \left\{ v\in W\, : \, \| v\|_{{H^1_{ex}}\xspace}^2 =2 \right\}.
\label{eqn:A_2d}
\end{equation}
Suppose $u_c$ is a minimizer of $J_c$ in the admissible set ${\cal A}$.
The traveling wave speed $c_0$ is then determined by $J_{c_0}(u_{c_0})=0$, and with $v_{c_0}={\cal L}_{c_0} u_{c_0}$,
$(u_{c_0}, v_{c_0}, c_0)$ is a traveling wave solution.
We follow along in Sections~\ref{sec:vf}-\ref{sec:steepestDescent} and find that
the equation~\eqref{eqn:wStar} for the update $w^*=\lambda q +(\mu +dc^2)w$
still holds in two dimensions, provided one interprets $w^*\in W$ and the Laplacian
operator appropriately. The operators ${\mathcal{Q}}\xspace:{H^1_{ex}}\xspace\to{H^1_{ex}}\xspace$ and $s:{H^1_{ex}}\xspace\to{H^1_{ex}}\xspace$ extend
in a straightforward way, so that Algorithm~\ref{alg:alg1} carries over to two
dimensions, so long as we do not use the clipping operator.
Equivalently, for our computations in two dimensions we define $r(w)=w$ for all $w\in {H^1_{ex}}\xspace$,
i.e. $r$ is the identity.
\begin{remark} \label{remark_scale}
In theoretical studies, it may be more convenient to employ the scaling $(X,Y)=(c\xi,y)$ so that the transformed domain $\Omega^*$ is the same
as $\Omega$. For numerical computation, so long as we have the same number of mesh points in the vertical direction, there is essentially very little
difference between the two scalings.
\end{remark}
The traveling pulses decay to zero as $|x| \to \infty$ in both one- and
two-dimensional domains.
Instead of imposing the zero Dirichlet boundary condition on the bounded
computational domain in~\algref{alg:alg1}, we
{derive asymptotic boundary conditions that provide}
better information on solution behaviors as $|x| \to \infty$ than just knowing
that they go to zero.
If the governing equation is linear, eliminating the blow-up mode will yield the asymptotic information.
Similar conclusions can be drawn by linearizing the nonlinear equations about the zero equilibrium point.
Such an idea has been given in, {for example,} \cite{LK1980}.
Specifically, we derive
asymptotic boundary conditions to solve for $v={\mathcal{L}_c}\xspace w$ and $w^*={\mathcal{Q}}\xspace (w)$ in Steps 1
and 3 of~\algref{alg:alg1}. In practice, we have found that the minimum
computational domain lengths are restricted by these calculations, whereas the
integrations in Steps 2 and 6 exhibit faster convergence. This is because as $x\to \infty$,
the pulse profiles vanish very quickly, while when $x\to-\infty$ the term $e^x$
in the integrands forces the fast convergence of the integrals even though the
profiles do not vanish as quickly in this direction. For these
reasons, we neglect further discussion of errors in integrated quantities due to
the truncation of the domain.
\subsection{Computing ${{\mathcal{L}_c}\xspace w}$ and $w^*$ with a given $w$ in one dimension} \label{sec:v_asym}
Suppose a function $w$ is defined on
the real line $(-\infty,\infty)$;
however it is known only on the interval $[a,b]$.
First, we will construct asymptotic boundary conditions for Step 1 in~\algref{alg:alg1}.
As $w$ serves as a guess of the minimizer $u$ which decays to zero at infinity,
we assume $w$ and $w'$ are $o(1)$ outside the interval $[a,b]$.
Let $v={\mathcal{L}_c}\xspace w$. Then $v''+v'- \frac{\gamma}{c^2}v= -\frac{w}{c^2}$
on $(-\infty,\infty)$,
which is equivalent to the system
\begin{equation} \label{eqn:v_sys}
\vectwo{v}{z}'=B \vectwo{v}{z} - \vectwo{0}{\frac{w}{c^2}}
\end{equation}
where $B=\mattwo{0}{1}{\frac{\gamma}{c^2}}{-1}$. The eigenvalues of $B$ are given by
\begin{equation} \label{eqn:nu}
\left\{ \nu_1\, ,\, \nu_2 \right\}= \left\{\frac{1}{2} \left(-1 - \sqrt{1+ \frac{4 \gamma}{c^2}}\right)\, , \, \frac{1}{2} \left(-1 + \sqrt{1+ \frac{4 \gamma}{c^2}}\right)\right\},
\end{equation}
with $\nu_1<-1<0<\nu_2$. Correspondingly, ${\bf L}_1= \vectwo{-\nu_2}{1}$ and ${\bf L}_2 =\vectwo{-\nu_1}{1}$
are the left eigenvectors of $B$ for $\nu_1$ and $\nu_2$, respectively.
By taking the scalar product of ${\bf L}_1$ with \eqref{eqn:v_sys}, we obtain
\begin{equation} \label{eqn:Phi1}
\Phi_1'= \nu_1 \Phi_1 - \frac{w}{c^2}
\end{equation}
where $\Phi_1 \equiv {\bf L}_1 \cdot \vectwo{v}{z}=- \nu_2 v+z$. This first order equation can be integrated to give
\[
\Phi_1(x) = - \int_{-\infty}^x e^{\nu_1(x-t)} \frac{w(t)}{c^2} dt ;
\]
the arbitrary constant associated with the complementary solution has to be set to zero for $\Phi_1$
to stay bounded as $x \to -\infty$.
It follows that
\begin{eqnarray*}
\nu_1 \Phi_1(a)-\frac{w(a)}{c^2} &=& \frac{\nu_1}{c^2} \int_{-\infty}^a e^{\nu_1(a-t)} (w(a)-w(t)) \, dt \\
&=& - \frac{1}{c^2} \int_{-\infty}^a e^{\nu_1(a-t)} w'(t) \, dt.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\left|\nu_1 \Phi_1(a)-\frac{w(a)}{c^2} \right| &\leq & \frac{|o(1)|}{c^2} \int_{-\infty}^a e^{\nu_1(a-t)} \, dt. \\
& = & \frac{|o(1)|}{|\nu_1|\, c^2} \;.
\end{eqnarray*}
It is therefore natural to impose the boundary condition $\nu_1 \Phi_1=\frac{w}{c^2}$ at $x=a$; which
amounts to
\begin{equation} \label{eqn:left_bc}
v'- \nu_2 v= \frac{w}{\nu_1 c^2} \quad \mbox{at} \;\; x=a.
\end{equation}
This is like setting the right hand side of \eqref{eqn:Phi1} to zero at $x=a$.
A similar analysis for large positive $x$ using $\Phi_2= {\bf L}_2 \cdot \vectwo{v}{z}$ leads to
\begin{equation} \label{eqn:right_bc}
v'- \nu_1 v= \frac{w}{\nu_2 c^2} \quad \mbox{at} \;\; x=b.
\end{equation}
\eqref{eqn:left_bc} and \eqref{eqn:right_bc} are the asymptotic boundary conditions
used when solving for $v={\mathcal{L}_c}\xspace w$.
\begin{remark} \label{remark4}
If $w$ goes to different constants as $x \to \pm \infty$ but with $w'=o(1)$ beyond $[a,b]$, the above argument
can be modified to derive some different asymptotic boundary conditions.
This observation will have implications in case one studies a traveling front problem numerically.
\end{remark}
We will now compute $w^*$ from~\eqref{eqn:wStar} in Step 3 of~\algref{alg:alg1}
using asymptotic boundary conditions. Let $\hat{w}=dc^2 w -{\mathcal{L}_c}\xspace w +f(w)$ denote
the known right hand side of~\eqref{eqn:wStar}.
Compare this problem with the equation on $v$ in~\secref{sec:v_asym}. By substituting $\gamma/c^2$ by $1$
and $w/c^2$ by $\hat{w}$, the new eigenvalues now are $\nu_1^*=-\frac{1}{2}(1+\sqrt{5})$ and
$\nu_2^*=\frac{1}{2}(\sqrt{5}-1)$, and the asymptotic boundary conditions are given by
\begin{eqnarray}
{w^*}{\,'}- \nu_2^* w^* &= \frac{\hat{w}}{\nu_1^*} & \mbox{at} \;\; x=a, \label{eqn:*left_bc} \\
{w^*}{\,'}- \nu_1^* w^* & = \frac{\hat{w}}{\nu_2^*} & \mbox{at} \;\; x=b. \label{eqn:*right_bc}
\end{eqnarray}
\subsection{Computing ${{\mathcal{L}_c}\xspace w}$ and $w^*$ with a given $w$ in two dimensions} \label{sec:v_asym2D}
We take $w=w(x,y)$ on the infinite strip $(-\infty,\infty)\times [-L,L]$
and derive asymptotic boundary conditions to apply on the truncated domain
$\Omega=[a,b]\times [-L,L]$, first for $v={\mathcal{L}_c}\xspace w$. At $y=-L$ and $y=L$
the boundary values are $w=v=0$, for all $x\in\mathbb{R}$. Fourier
expansions for $v={\mathcal{L}_c}\xspace w$ and $w$ are
\begin{equation}
\begin{aligned}
w(x,y) &= \sum_{j=1}^\infty \hat{w}_j (x) \sin \left(\frac{j\pi (y+L)}{2L}\right) , \\
v(x,y) &= \sum_{j=1}^\infty \hat{v}_j (x) \sin \left(\frac{j\pi (y+L)}{2L}\right) .
\end{aligned}
\label{eqn:fourier}
\end{equation}
Insert the relations~\eqref{eqn:fourier} into the equation
$\Delta v +v_x -\frac{\gamma}{c^2} v =-\frac{1}{c^2} w$:
\begin{multline*}
\sum_{j=1}^\infty \left( \hat{v}_j'' (x)+\hat{v}_j' (x)-\left(\frac{j^2\pi^2}{4L^2}+\frac{\gamma}{c^2}\right) \hat{v}_j (x) \right) \sin \left(\frac{j\pi (y+L)}{2L}\right)
= \\
\sum_{j=1}^\infty -\frac{1}{c^2}\hat{w}_j (x) \sin \left(\frac{j\pi (y+L)}{2L}\right) .
\end{multline*}
Then the Fourier coeffcients satisfy
\begin{equation}
\hat{v}_j'' (x)+\hat{v}_j' (x)-\left(\frac{j^2\pi^2}{4L^2}+\frac{\gamma}{c^2}\right) \hat{v}_j (x)
= -\frac{1}{c^2}\hat{w}_j (x) .
\label{eqn:bc2d1}
\end{equation}
In case $x<0$ with $|x|\gg 1$, we assume it holds that
$|\hat{w}_j (x)|\ll |\hat{w}_1 (x)|$ and
$|\hat{v}_j (x)|\ll |\hat{v}_1 (x)|$ for all $j>1$, thus
\begin{equation}
\begin{aligned}
w(x,y) &\approx \hat{w}_1 (x) \sin \left(\frac{\pi (y+L)}{2L}\right) , \\
v(x,y) &\approx \hat{v}_1 (x) \sin \left(\frac{\pi (y+L)}{2L}\right) .
\end{aligned}
\label{eqn:fourier2}
\end{equation}
Then $\hat{w}_1 (x)$ and $\hat{v}_1 (x)$ have the same behavior in $x$ as
$w$ and $v$, respectively, as $x\to-\infty$. Furthermore, these Fourier
coefficients satisfy~\eqref{eqn:bc2d1}. By analogy with the derivation
in~\secref{sec:v_asym}, if $a<0$ with $|a|\gg 1$ then we apply the
boundary condition
\begin{equation} \label{eqn:l_bc2d}
\hat{v}_1'- \nu_2 \hat{v}_1= \frac{\hat{w}_1}{\nu_1 c^2} \quad \mbox{at} \;\; x=a
\end{equation}
with the eigenvalues
\begin{equation}
\begin{aligned}
\nu_1 &= \frac{1}{2} \left(-1 - \sqrt{1+\frac{\pi^2}{L^2}+ \frac{4 \gamma}{c^2}}\right) \\
\mbox{and} \;\;
\nu_2 &= \frac{1}{2} \left(-1 + \sqrt{1+\frac{\pi^2}{L^2}+ \frac{4 \gamma}{c^2}}\right) .
\end{aligned}
\label{eqn:nu_2d}
\end{equation}
We combine~\eqref{eqn:fourier2} and~\eqref{eqn:l_bc2d} to derive the
approximate boundary condition
\begin{equation} \label{eqn:left_bc2d}
\frac{\partial v}{\partial x}- \nu_2 v= \frac{w}{\nu_1 c^2} \quad \mbox{at} \;\; x=a , \quad
\mbox{for} \;\; -L < y < L.
\end{equation}
It is equivalent to applying~\eqref{eqn:left_bc} at $x=a$ for each fixed
value of $y$, with the adjustment~\eqref{eqn:nu_2d} for the
eigenvalues~\eqref{eqn:nu}.
The corresponding boundary condition on the right is
\begin{equation} \label{eqn:right_bc2d}
\frac{\partial v}{\partial x} - \nu_1 v= \frac{w}{\nu_2 c^2} \quad \mbox{at} \;\; x=b , \quad
\mbox{for} \;\; -L < y < L,
\end{equation}
by analogy with~\eqref{eqn:right_bc} and the derivation
of~\eqref{eqn:left_bc2d}.
Here, we assume $b\gg 1$. Taken together with $v(x,-L)=v(x,L)=0$ for all
$x\in\mathbb{R}$,~\eqref{eqn:left_bc2d}-\eqref{eqn:right_bc2d} are the
boundary conditions used to compute $v={\mathcal{L}_c}\xspace w$ on the truncated domain.
Note that the asymptotic
conditions~\eqref{eqn:left_bc2d}-\eqref{eqn:right_bc2d} are compatible with
the homogeneous Dirichlet boundary conditions for $v$ and $w$ at $y=\pm L$.
Asymptotic boundary conditions for $w^*$ in Step 3 of~\algref{alg:alg1}
may be derived quickly by first comparing~\eqref{eqn:wStar} to the
equation~\eqref{eqn:fhn3_2d} for $v={\mathcal{L}_c}\xspace w$. Let
$\tilde{w}=dc^2 w -{\mathcal{L}_c}\xspace w +f(w)$ denote the known right hand side
of~\eqref{eqn:wStar}. By substituting $\gamma/c^2$ with $1$ and $w/c^2$
with $\tilde{w}$, the new eigenvalues now are
\begin{equation}
\left\{ \nu^*_1 \, ,\, \nu^*_2 \right\} = \left\{ \frac{1}{2} \left(-1 - \sqrt{5+\frac{\pi^2}{L^2}}\right) \, , \,
\frac{1}{2} \left(-1 + \sqrt{5+\frac{\pi^2}{L^2}}\right) \right\},
\label{eqn:nuStar_2d}
\end{equation}
and the asymptotic boundary conditions are given by
\begin{eqnarray}
\frac{\partial w^*}{\partial x}- \nu_2^* w^* &= \frac{1}{\nu_1^*}\tilde{w} & \mbox{at} \;\; x=a, \;\; -L<y<L, \label{eqn:*left_bc2d} \\
\frac{\partial w^*}{\partial x}- \nu_1^* w^* & = \frac{1}{\nu_2^*}\tilde{w} & \mbox{at} \;\; x=b, \;\; -L<y<L. \label{eqn:*right_bc2d}
\end{eqnarray}
\section{Computation of pulses in one dimension}\label{sec:results_1d}
We demonstrate how our method can be used to calculate traveling pulse
solutions in the class~{-/+/-}\xspace in one space dimension, noting that
the subclasses $+$, $-$, $-/+$ and $+/-$ are all subsets of~{-/+/-}\xspace.
The steepest descent algorithm will continue to work even if the iterates
degenerate into functions in these subclasses; however, in our experiments
we did not observe this to happen.
In~\secref{sec:varyCD} we will investigate two aspects of the
theory regarding traveling waves: the possibility of multiple traveling
pulse solutions and the validation of the asymptotic
relation~\eqref{eqn:fastSpeed}. In both cases, the value of the parameter
$d$ is important. In~\cite[Theorem 1.1]{CC2015}, it is shown that when $d$
is small a traveling pulse solution must exist, but it is not known what
happens for larger $d$ or if multiple pulse solutions may exist for a
particular $d$. The dependence~\eqref{eqn:fastSpeed} holds only for the
fastest traveling pulse solution.
In~\secref{sec:parabolicTest}, the computed traveling pulses are tested
using a parabolic solver.
It will be helpful to distinguish between the meaning of the space
variable $x$ in~\eqref{eqn:fhn1} and the variable that represents space
for~\eqref{eqn:fhn3} (up to a shift), say $z=cx$. Hereafter, $z$ shall
denote the space variable used in~\algref{alg:alg1}. When we study the
results using the parabolic solver, or when we consider our traveling pulses
as solutions of~\eqref{eqn:fhn1}, we instead use the variable $x$ for space.
\subsection{Numerical methods for~\algref{alg:alg1}}\label{sec:numerics_1d}
Specific numerical methods must be adopted for the computations
in~\algref{alg:alg1}. We do not seek to compare different implementations;
the goal is to investigate our algorithm assuming that each step is performed
with reasonable accuracy, for which purpose there are myriad acceptable
numerical methods. Spatial discretizations are performed
with standard, centered finite difference methods that are formally
second-order accurate with respect to the uniform grid size, $h>0$.
Numerical integrations were computed using the composite midpoint rule.
Shifting operations were handled by shifting the grid points themselves,
rather than interpolating the shifted data onto a fixed grid. This is easy
to implement and avoids introducing interpolation errors at each step of
the algorithm.
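As an illustration of the quadrature choice (a minimal sketch, not the code used for the paper's computations), the composite midpoint rule may be written as:

```python
import numpy as np

def composite_midpoint(f, a, b, M):
    """Composite midpoint rule on [a, b] with M uniform subintervals.

    A sketch of the quadrature described in the text; second-order
    accurate in the subinterval width h = (b - a) / M.
    """
    h = (b - a) / M
    midpoints = a + h * (np.arange(M) + 0.5)  # centers of the M subintervals
    return h * np.sum(f(midpoints))
```

On a smooth integrand the rule is second-order accurate in $h$, consistent with the finite difference discretization.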
Let $\Omega = (a,b)$ for $-\infty <a\ll 0 \ll b <\infty$ denote the
domain (which may be shifted each iteration) and denote the computational
grid points by $x_j = a+hj$, for $j=0,1,\ldots, N+1$. Here, $(N+1)h=b-a$.
To make precise statements regarding our computations below,
any function, say $\psi (x)$, defined at the grid points will have
approximate values $\psi_j\approx \psi (x_j)$, $0\leq j\leq N+1$.
Due to the computational truncation of the domain, asymptotic boundary
conditions are implemented; see~\appref{sec:asymbc} for details.
For the parabolic solver (used to test stability), the same discrete
methods are applied in space as described above, with
Crank--Nicolson for the time evolution. Newton's method is used for the
nonlinearity. However, the $u$ and $v$ computations are not performed at the
same time levels; rather, they are staggered by half a time step in order
to numerically decouple their calculations, for efficiency. The resulting
method is formally of second-order accuracy in both space and time. The
idea of staggered
space and time methods has been used often since the seminal work
of~\cite{VNR1950}. Unlike this early work with
hyperbolic conservation laws, our parabolic solver does not require
a staggered grid in space for stability. Also, we move the grid points
each time step in accordance with the calculated wave speed, in order to
avoid using a very large domain to test the wave propagation over long
times. For this scheme, the wave profile would ideally appear to be static.
If the computed profile or speed is not correct, it will appear as a
deviation from the initial profile used in the solver.
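The moving-window bookkeeping can be checked against an exact traveling profile: with $c\,\Delta t = h$, shifting the window one grid length per time step makes the wave appear static. A minimal sketch, where the $\mathrm{sech}$ pulse is an illustrative stand-in rather than a computed FitzHugh--Nagumo profile:

```python
import numpy as np

c, h = 14.04, 1e-2               # example speed (Table 2, d = 5e-4) and grid size
dt = h / c                       # time step chosen so that c * dt = h
x = np.arange(-5.0, 5.0 + h, h)  # computational window at t = 0

profile = lambda s: 1.0 / np.cosh(s)  # stand-in traveling shape u(x - c t)

n = 1000
x_moved = x + n * h                      # window after n one-cell shifts
u_final = profile(x_moved - c * n * dt)  # exact wave sampled on the moved window

# Since c * n * dt = n * h, the profile looks static in the moving window.
assert np.allclose(u_final, profile(x))
```

Any drift of a computed profile relative to the window therefore signals an error in the profile or its speed, which is exactly what the test exploits.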
\subsection{Traveling pulse behavior for various values of $d$.}\label{sec:varyCD}
Since a traveling pulse solution of~\eqref{eqn:fhn1} corresponds to a root of
${\mathcal{J}(c)}\xspace$, we first investigate the dependence of ${\mathcal{J}(c)}\xspace$ on $c$ and $d$.
We know from~\secref{sec:monotone} that ${\mathcal{J}(c)}\xspace$ increases with $d$; this
qualitative result, in particular \eqref{eq:orderJ}, helps us to
quantitatively locate the right ranges of $d$ and $c$.
With a fixed $d$,
the values of ${\mathcal{J}(c)}\xspace$ are calculated for various values of $c$
using~\algref{alg:alg1} until we find where ${\mathcal{J}(c)}\xspace$ changes sign, thus signifying
a root. A good approximation is then calculated for the wave speed $c$, where
${\mathcal{J}(c)}\xspace=0$, by the method of {\em regula falsi} (see {\em e.g.}\xspace~\cite{SB2010}). This is
a root-finding algorithm that approximates the wave speed by using linear
interpolation across the interval where ${\mathcal{J}(c)}\xspace$ changes sign. Upon completion,
we also obtain the traveling pulse profile from~\algref{alg:alg1}.
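A generic sketch of the regula falsi iteration described above (a standard textbook variant, not necessarily the exact implementation used; the quadratic mock function stands in for ${\mathcal{J}(c)}\xspace$, whose evaluation requires running~\algref{alg:alg1}):

```python
def regula_falsi(f, c_lo, c_hi, tol=1e-10, max_iter=200):
    """Approximate a root of f bracketed on [c_lo, c_hi] by the
    false-position method: linear interpolation across the sign change."""
    f_lo, f_hi = f(c_lo), f(c_hi)
    assert f_lo * f_hi < 0, "the root must be bracketed by a sign change"
    c_new = c_lo
    for _ in range(max_iter):
        # Zero of the secant line through (c_lo, f_lo) and (c_hi, f_hi).
        c_new = c_hi - f_hi * (c_hi - c_lo) / (f_hi - f_lo)
        f_new = f(c_new)
        if abs(f_new) < tol:
            break
        if f_lo * f_new < 0:       # root lies in [c_lo, c_new]
            c_hi, f_hi = c_new, f_new
        else:                      # root lies in [c_new, c_hi]
            c_lo, f_lo = c_new, f_new
    return c_new

# Mock stand-in for J(c): the root of c**2 - 2 on [1, 2] is sqrt(2).
speed = regula_falsi(lambda c: c * c - 2.0, 1.0, 2.0)
```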
We begin by providing examples of the curves ${\mathcal{J}(c)}\xspace$ for
$d=2e-3$, $d=5e-4$, $d=3e-4$ and $d=1e-4$.
The results are shown in~\figref{fig:results1} at a resolution of $100$
evenly-spaced samples of the wave speed per unit of $c$. The computational
grid spacing is $h=10^{-2}$. Other parameter
values in~\algref{alg:alg1} are provided
in~\tabref{tab:tab1}. For the relative accuracy tolerance $\delta_1$ and
domain width $|\Omega|$ (for the spatial variable $z$), the values were
adjusted experimentally within the ranges shown, depending on $c$.
\begin{table}[h]
\centering
\begin{tabular}[c]{|c|c|c|c|c|c|c|c|}\hline
$\beta$ & $\gamma$ & $\theta$ & $\alpha^0$ & $\delta_1$ & $\delta_2$ & $\delta_3$ & $|\Omega|$ \\ \hline
$1/4$ & $1/16$ & $1/2$ & $1/1000$ & $10^{-9}$--$10^{-7}$ & $10^{-14}$ & $10^{-3}$ & $160$--$480$ \\
\hline
\end{tabular}
\caption{Parameter values for the tests corresponding to~\figref{fig:results1}.\label{tab:tab1}}
\end{table}
From Step 4 of~\algref{alg:alg1}, the numbers $Q^n-(dc^2+\mu^n) w^n$
represent the update to the wave profile at iteration $n$. The sizes of these
values were found to vary significantly with both $c$ and $n$. In order to
define a rule for the step sizes $\alpha^n$, we introduced a normalization,
first by writing
$\alpha^n=\alpha_1^n/\alpha_2^n$ and then choosing
\[
\alpha_2^n \equiv \max_{0\leq j \leq N+1} \left| Q^n_j-(dc^2+\mu^n) w^n_j \right| .
\]
The value of $\alpha_1^n$ was allowed to either increase or decrease for
purposes of Step 6 in~\algref{alg:alg1}. We applied the rule
$\alpha_1^{n+1} =\min \{ 1.1 \alpha_1^n , \alpha_1^0 \}$.
It was observed that this approach sped up convergence compared to
taking a constant $\alpha_1^n = \alpha_1^0$. Finally, the initial profile used
at $c=5$ was
\begin{equation*}
w^0 (x) = \left\{ \begin{array}{cc}
1,& \ \text{if} \ -1 \leq x \leq 1, \\
0, & \ \text{otherwise} .
\end{array}\right.
\end{equation*}
This square wave is independent of $c_0$ in the $x$-coordinate, but
before it is used in~\algref{alg:alg1} we must rescale by $z=c x$, so that
\begin{equation}
w^0 (z) = \left\{ \begin{array}{cc}
1,& \ \text{if} \ -5 \leq z \leq 5, \\
0, & \ \text{otherwise} .
\end{array}\right.
\label{eqn:initialGuess}
\end{equation}
A step size $\Delta c$ with $|\Delta c|=0.01$ was used to change the wave speed to
$c+\Delta c$. The value of $\mathcal{J} (c+\Delta c)$ was then calculated using
the computed minimizer corresponding to ${\mathcal{J}(c)}\xspace$ as the initial guess
{$w^0 (z)$}. This process was repeated to generate our results.
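The normalized step-size rule described above may be sketched as follows (a minimal illustration; the sign convention of the update and the acceptance logic of Step 6 are simplifying assumptions):

```python
import numpy as np

def normalized_update(w, Q, d, c, mu, alpha1):
    """One profile update with step alpha = alpha1 / alpha2, where alpha2
    normalizes the raw direction Q - (d c^2 + mu) w by its max-norm."""
    direction = Q - (d * c**2 + mu) * w   # raw update Q^n - (dc^2 + mu^n) w^n
    alpha2 = np.max(np.abs(direction))    # normalization alpha_2^n
    return w + (alpha1 / alpha2) * direction  # update sign is an assumption here

def grow_alpha1(alpha1, alpha1_0):
    # Between accepted iterations, alpha_1 may grow by 10%, capped at alpha_1^0.
    return min(1.1 * alpha1, alpha1_0)
```

By construction, the max-norm of each applied update equals $\alpha_1^n$, which is what makes the step sizes comparable across $c$ and $n$.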
\begin{figure}
\caption{The computed values of ${\mathcal{J}(c)}\xspace$ versus $c$.}
\label{fig:results1}
\end{figure}
In~\figref{fig:results1}, note that ${\mathcal{J}(c)}\xspace$ crosses the horizontal axis twice
when $d=5e-4$ and $d=3e-4$, but only once when $d=1e-4$. Also, we see
that for large enough values of $d$ there is no traveling pulse solution in the
class ${-/+/-}\xspace$. In the case of
multiple roots of ${\mathcal{J}(c)}\xspace$, let $c_1=c_1 (d)$ and $c_0=c_0 (d)$ denote the smaller
and larger of the wave speeds, respectively, such that ${\mathcal{J}(c)}\xspace=0$. Our
computations suggest that there exists some critical value $d_{crit}$ such that
\[
\lim_{d\to d_{crit}^+} c_1(d) =0.
\]
Furthermore, for $d<d_{crit}$ the traveling wave solution might be unique in
the class~{-/+/-}\xspace. Stability of the traveling pulses is discussed below
in~\secref{sec:parabolicTest}.
In~\tabref{tab:tab2} we have provided the computed values of the ratio
\[
\eta \equiv \frac{2dc_0^2}{(1-2\beta)^2}.
\]
In accordance with~\eqref{eqn:fastSpeed}, as the value of $d$ decreases we see
the ratio $\eta$ becomes closer to the limiting value of $\eta=1$.
\begin{table}[h]
\centering
\begin{tabular}[h]{|c|c|c|c|}\hline
$d$ & $c_1$ & $c_0$ & $\eta$ \\ \hline
$5e-4$ & $4.58$ & $14.04$ & $0.79$ \\ \hline
$3e-4$ & $3.14$ & $19.18$ & $0.88$ \\ \hline
$1e-4$ & --- & $34.70$ & $0.96$ \\
\hline
\end{tabular}
\caption{Computed values of the wave speeds $c_0$ and $c_1$ for traveling pulse solutions.\label{tab:tab2}}
\end{table}
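As a sanity check, the reported ratios follow directly from the tabulated speeds with $\beta=1/4$ from Table 1; the small discrepancies are due to rounding:

```python
# eta = 2 d c_0^2 / (1 - 2 beta)^2, with beta = 1/4 as in Table 1.
beta = 0.25
rows = [(5e-4, 14.04, 0.79),
        (3e-4, 19.18, 0.88),
        (1e-4, 34.70, 0.96)]   # (d, c_0, reported eta) from Table 2
for d, c0, eta_reported in rows:
    eta = 2.0 * d * c0**2 / (1.0 - 2.0 * beta)**2
    assert abs(eta - eta_reported) < 5e-3   # agrees to the rounding shown
```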
\subsection{Testing of candidate traveling pulse profiles}\label{sec:parabolicTest}
Our computed traveling pulses are tested here via the method
in~\secref{sec:numerics_1d}. That is, the computed wave profiles,
after scaling back from the spatial variable $z$ to $x$, are
used to initiate a parabolic solver and the computational window is moved at a
rate of one grid length per time step. The grid length, $h$, and time step size, $\Delta t$, are
related by $c\Delta t=h$ with $c$ the computed speeds in~\tabref{tab:tab2}, so
that the wave profile should move at the same rate as the computational
window. A stable profile should then retain its shape and speed for a long
time.
The calculations were run for each case until either the profile was observed
to break up or until the pulse propagated the distance of one computational domain
length.
For the slower wave profiles with speed $c_1$, our tests showed that the
profiles were not maintained by the parabolic solver, nor did they evolve into
a new traveling pulse profile. Indeed, the profiles broke up completely after
a relatively short time. An example is shown for $d=5e-4$
in~\figref{fig:fig0} (top). The slower traveling pulse
solutions that exist for larger values of $d$ are unstable.
For the faster wave speeds denoted by $c_0$, we observed that the
wave profiles remained stable. In~\figref{fig:results2}
the initial data is plotted together with the final data, upon completion of the
parabolic solver run. The final data
is shifted left in space by one domain length for a
direct, visual comparison with the initial data. The profiles have
traveled the length of their computational domain and retained their shape
and speed very well in all cases.
\begin{figure}
\caption{Computed pulse profiles, shifted back
in space by one domain length for comparison with the initial profiles, for
cases $d=5e-4$ (left), $d=3e-4$ (middle) and $d=1e-4$ (right). Circle
markers are shown $200$ grid points apart for the final profiles. The initial
and final profiles are visually identical, indicating stability.}
\label{fig:results2}
\end{figure}
\section{Computation of pulses in two dimensions}\label{sec:results_2d}
In this section we demonstrate the calculation of traveling (dissipative) solitons
in a multiple-dimensional domain. Partial curves ${\mathcal{J}(c)}\xspace$ are shown for four
values of $d$, illustrating cases where there are multiple, one or no
solutions found, including solitons and unstable pulses.
A fundamental difference between the one- and two-dimensional cases
relates to an occurrence of bifurcation in the latter case, wherein a soliton is observed to split
into two co-evolving spots {in the direction perpendicular to their motion}.
We take $\Omega = (-\infty,\infty)\times (-L,L)$, with $L=1$ in all cases.
As discussed in~\secref{sec:alg2D}, we rescale the domain to
$\Omega^* = (-\infty,\infty)\times (-cL,cL)$; the truncated, computational
domain for~\algref{alg:alg1} is then chosen to be $[a,b]\times[-c,c]$.
Recall that the domain is shifted during iterations of~\algref{alg:alg1}, so we only specify the domain length $b-a$ below. Let $(x,y)$ and
$(x^*,y^*)$ denote elements of $\Omega$ and $\Omega^*$, respectively.
Numerical methods to compute pulse profiles and for numerical integration
are simple extensions of the methods described in~\secref{sec:numerics_1d}
into two dimensions. That is, standard second-order, centered
finite differences were used on a uniform rectangular grid
to implement~\algref{alg:alg1}.
Numerical integration was implemented using midpoint approximations on
each grid rectangle, also with centered, second-order rules to evaluate the
terms of the integrands. The parabolic solver for stability tests is also
analogous to that of~\secref{sec:numerics_1d}, using centered finite
differences in space and time stepping via Crank-Nicolson. Other relevant
parameter values are given in~\tabref{tab:tab4}. We apply asymptotic
boundary conditions, as per~\appref{sec:asymbc}.
\begin{table}[h]
\centering
\begin{tabular}[c]{|c|c|c|c|c|c|c|}\hline
$\beta$ & $\gamma$ & $\theta$ & $\alpha^0$ & $\delta_1$ & $\delta_2$ & $\delta_3$ \\ \hline
$1/4$ & $1/16$ & $1/2$ & $10^{-5}$--$10^{-3}$ & $10^{-6}$ & $10^{-12}$ & $10^{-3}$--$10^{-1}$ \\
\hline
\end{tabular}
\caption{Parameter values for the tests corresponding to~\figref{fig:fig2d_1}.\label{tab:tab4}}
\end{table}
The functional values ${\mathcal{J}(c)}\xspace$ are shown for three values of $d$
in~\figref{fig:fig2d_1}. For these computations we fixed $b-a=280$,
with $5600$ grid intervals from $x^*=a$ to $x^*=b$ and $80$ grid intervals from
$y^*=-c$ to $y^*=c$. We observe that for $d=5e-4$ and $d=7e-4$ a single
traveling soliton is found; that is, there is a single root of ${\mathcal{J}(c)}\xspace$ at
$c=c_0$. We calculate $c_0\approx 13.74$ for $d=5e-4$ and $c_0\approx 10.54$
for $d=7e-4$. When $d=9e-4$, the root is lost and no solution is found.
\begin{figure}
\caption{The computed values of ${\mathcal{J}(c)}\xspace$ versus $c$ for three values of $d$.}
\label{fig:fig2d_1}
\end{figure}
The computed solitons for both cases $d=5e-4$ and $d=7e-4$ were
confirmed to be stable by inserting them as initial conditions into a
parabolic solver and allowing the spots to propagate for the length of one
computational domain. Upon completion, we shift the solitons back to
the left by the same distance, for a direct comparison with the
initial profile.
In~\figref{fig:fig2d_2}, we show that the initial and final profiles are
visually identical.
\begin{figure}
\caption{Contour plots of $u$: the computed solitons before and after the parabolic-solver run; the initial and final profiles are visually identical.}
\label{fig:fig2d_2}
\end{figure}
A close inspection of~\figref{fig:fig2d_1} reveals that ${\mathcal{J}(c)}\xspace$ appears to have
a local maximum near $c=6$. This is related to a bifurcation that occurs there.
As we track only the minimum energy curve, there may be additional secondary
bifurcations associated with other bifurcation branches, resulting in many
nearby solutions.
To illustrate this effect clearly, we consider the case of
$d=8.8e-4$, for which the values of ${\mathcal{J}(c)}\xspace$ are plotted in~\figref{fig:fig2d_3}.
The curve ${\mathcal{J}(c)}\xspace$ versus $c$ has two smooth branches, separated near $c\approx 6.1$.
For larger $c$, the minimizer profile has one contiguous positive region; that is,
there is a single soliton. As $c$ decreases, a separation
into two parallel solitons serves to reduce the energy.
This qualitative state persists for the minimizer profile as $c$ decreases
toward zero.
As a result, we found four roots of ${\mathcal{J}(c)}\xspace$, corresponding to two
solitons and two unstable traveling pulses.
\begin{figure}
\caption{The computed values of ${\mathcal{J}(c)}\xspace$ versus $c$ for $d=8.8e-4$.}
\label{fig:fig2d_3}
\end{figure}
We denote the four wave speeds for the traveling pulses by $c_0\approx 7.4067$,
$c_1\approx 6.6864$, $c_2\approx 5.1752$ and $c_3\approx 2.4181$. The unstable
pulses correspond to $c_1$ and $c_3$. In~\figref{fig:fig2d_4} we show the
unstable computed profiles $u$, rescaled to variables $(x,y)$, as computed
by~\algref{alg:alg1}.
The stable solitons correspond to $c_0$ and $c_2$. In~\figref{fig:fig2d_5} we show
these, rescaled to variables $(x,y)$, as computed
by~\algref{alg:alg1}, and also after running the parabolic solver to demonstrate
stability.
For visualization purposes, we do not show the entire domain. We note
that the computational grid used for these computations with $c> 6.1$, corresponding
to the single-soliton solution, was the same as for other computations in this
section. However, for $c<6.1$ the computational domain was shortened so that $b-a=50$,
with $4000$ intervals between $x^*=a$ and $x^*=b$ and $160$ computational intervals
between $y^*=-c$ and $y^*=c$. This finer computational grid was needed to approximate
the two-soliton solution. In~\figref{fig:fig2d_5} we note that the computed solitons
retain their shapes well, but they do not travel with precisely the computed
wave speeds $c_0$ and $c_2$. We believe this is simply due to numerical error in
computing the functional value $J_c (u)$, which could be reduced using a finer
computational grid or a more accurate finite difference method.
For $d=8.8e-4$, the values of ${\mathcal{J}(c)}\xspace$ remain very small over a wide range of $c$ values.
As a result, it is more difficult to compute the location of the roots of ${\mathcal{J}(c)}\xspace$ as
compared to other examples in this paper.
\begin{figure}
\caption{Contour plots of the unstable traveling pulse profiles $u$ for $d=8.8e-4$.
Top: the profile with wave speed $c_1$. Bottom: the double profile with wave speed $c_3$.}
\label{fig:fig2d_4}
\end{figure}
\begin{figure}
\caption{Contour plots of the traveling solitons with wave speeds $c_0$ and $c_2$ for $d=8.8e-4$, before and after the parabolic-solver run.}
\label{fig:fig2d_5}
\end{figure}
\section{Summary and future work}\label{sec:summary}
We have provided an iterative method to calculate traveling solitons and
unstable traveling pulse solutions for the~{FitzHugh-Nagumo}\xspace equations. It is a steepest descent
method based on the minimization
of a functional within a certain admissible set. The infimum of the functional
over the admissible set, denoted by ${\mathcal{J}(c)}\xspace$, depends on a parameter $c$ that
represents wave speed. Traveling pulses are identified as roots of this functional:
${\mathcal{J}(c)}\xspace=0$. We have demonstrated that the method is robust. For example, some tests
revealed that initial guesses employed for our method would not suffice as initial
conditions in a parabolic solver when attempting to compute a soliton. The computations
also support the asymptotic relationship~\eqref{eqn:fastSpeed} that applies to the
solitons (observed as the fastest pulses), given a set of physical parameters.
This provides mutual validation. We computed both stable
and unstable traveling pulses for
moderate values of the parameter $d$, no traveling pulses for large values of
$d$, and a unique, stable traveling pulse for small $d$.
Solitons were tested using a parabolic solver and observed to be
stable.
We also observe that as $d$ becomes small, the fastest wave speed for the
soliton becomes large, the pulse width (measured in an
appropriate sense) becomes wide, and the tail decay rate becomes slow.
Due to the steep
wave front but otherwise smooth and slowly-decaying tail, our use of uniform
grids with finite difference methods is not optimal.
This could be addressed through the
use of adaptive methods.
For example, the class of $hp$-adaptive finite element methods has previously
enjoyed success for problems with a wide range of scales
(see {\em e.g.}~\cite{DOP1988}).
In two space dimensions, we observed a bifurcation
that qualitatively separates single and double traveling soliton solutions.
The splitting of the solitons from one to two spots serves to lower
the functional energy ${\mathcal{J}(c)}\xspace$ as $c$ decreases (below around $c\approx 6$ in our
examples). For a narrow range of parameter values, this enables ${\mathcal{J}(c)}\xspace$ to drop below
zero multiple times as $c$ changes, resulting in four traveling pulse solutions
for a single set of parameters, with distinct speeds.
Two of these solutions are unstable. The two-soliton solutions have smaller wave speeds than
the single-soliton solutions.
In some ongoing work~\cite{CCD_new}, we will demonstrate how to
use our algorithm to find traveling fronts as well in 2D.
In fact, for the same physical parameters, fronts and pulses can co-exist.
Our steepest descent method can find many traveling waves
independently for systems with a variational structure, but it could also serve as
a robust tool to augment the use of
continuation methods, which may sometimes have difficulty in multiple dimensions
(see~\secref{sec:introduction}). Conceivably, one might also use the global
property to create an {\em ad hoc} continuation--steepest descent method that can take
larger steps along a bifurcation curve, saving total computational expense for detailed
explorations of parameter space.
\appendix
\section{\algref{alg:alg1} with asymptotic boundary conditions}\label{sec:asymbc}
The traveling pulses decay to zero as $|x| \to \infty$ in both one- and
two-dimensional domains.
Instead of imposing the zero Dirichlet boundary condition on the bounded
computational domain in~\algref{alg:alg1}, we
derive asymptotic boundary conditions that provide
better information on the solution behavior as $|x| \to \infty$ than
merely that it tends to zero.
If the governing equation is linear, eliminating the blow-up mode yields the asymptotic information;
similar conclusions can be drawn by linearizing the nonlinear equations about the zero equilibrium point.
Such an idea appears in, for example, \cite{LK1980}.
Specifically, we derive
asymptotic boundary conditions to solve for $v={\mathcal{L}_c}\xspace w$ and $w^*={\mathcal{Q}}\xspace (w)$ in Steps 1
and 3 of~\algref{alg:alg1}. In practice, we have found that the minimum
computational domain lengths are restricted by these calculations, whereas the
integrations in Steps 2 and 6 exhibit faster convergence. This is because as $x\to \infty$,
the pulse profiles vanish very quickly, while when $x\to-\infty$ the term $e^x$
in the integrands forces the fast convergence of the integrals even though the
profiles do not vanish as quickly in this direction. For these
reasons, we neglect further discussion of errors in integrated quantities due to
the truncation of the domain.
\subsection{Computing ${{\mathcal{L}_c}\xspace w}$ and $w^*$ with a given $w$ in one dimension} \label{sec:v_asym}
Suppose a function $w$ is defined on
the real line $(-\infty,\infty)$
but is known only on the interval $[a,b]$.
First, we construct asymptotic boundary conditions for Step 1 in~\algref{alg:alg1}.
Since $w$ serves as a guess for the minimizer $u$, which decays to zero at infinity,
we assume $w$ and $w'$ are $o(1)$ outside the interval $[a,b]$.
Let $v={\mathcal{L}_c}\xspace w$. Then $v''+v'- \frac{\gamma}{c^2}v= -\frac{w}{c^2}$
on $(-\infty,\infty)$,
which is equivalent to the system
\begin{equation} \label{eqn:v_sys}
\vectwo{v}{z}'=B \vectwo{v}{z} - \vectwo{0}{\frac{w}{c^2}}
\end{equation}
where $B=\mattwo{0}{1}{\frac{\gamma}{c^2}}{-1}$. The eigenvalues of $B$ are given by
\begin{equation} \label{eqn:nu}
\left\{ \nu_1\, ,\, \nu_2 \right\}= \left\{\frac{1}{2} \left(-1 - \sqrt{1+ \frac{4 \gamma}{c^2}}\right)\, , \, \frac{1}{2} \left(-1 + \sqrt{1+ \frac{4 \gamma}{c^2}}\right)\right\},
\end{equation}
with $\nu_1<-1<0<\nu_2$. Correspondingly, ${\bf L}_1= \vectwo{-\nu_2}{1}$ and ${\bf L}_2 =\vectwo{-\nu_1}{1}$
are the left eigenvectors of $B$ for $\nu_1$ and $\nu_2$, respectively.
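The algebra above is easy to sanity-check numerically. The following sketch (our own illustration, not part of the method itself; the values of $\gamma$ and $c$ are arbitrary) confirms with NumPy that \eqref{eqn:nu} gives the spectrum of $B$ and that ${\bf L}_1$, ${\bf L}_2$ are the corresponding left eigenvectors:

```python
import numpy as np

gamma, c = 2.0, 1.5  # arbitrary illustrative parameter values
B = np.array([[0.0, 1.0],
              [gamma / c**2, -1.0]])

# Eigenvalues from eqn:nu
s = np.sqrt(1.0 + 4.0 * gamma / c**2)
nu1, nu2 = (-1.0 - s) / 2.0, (-1.0 + s) / 2.0

# They match the numerically computed spectrum of B
eigs = np.sort(np.linalg.eigvals(B).real)
assert np.allclose(eigs, [nu1, nu2])

# L1 = (-nu2, 1) and L2 = (-nu1, 1) are left eigenvectors: L @ B = nu * L
L1 = np.array([-nu2, 1.0])
L2 = np.array([-nu1, 1.0])
assert np.allclose(L1 @ B, nu1 * L1)
assert np.allclose(L2 @ B, nu2 * L2)
```

The checks rest on the identities $\nu_1\nu_2 = -\gamma/c^2$ and $\nu_1+\nu_2 = -1$ for the roots of the characteristic polynomial $\nu^2+\nu-\gamma/c^2$.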
By taking the scalar product of ${\bf L}_1$ with \eqref{eqn:v_sys}, we obtain
\begin{equation} \label{eqn:Phi1}
\Phi_1'= \nu_1 \Phi_1 - \frac{w}{c^2}
\end{equation}
where $\Phi_1 \equiv {\bf L}_1 \cdot \vectwo{v}{z}=- \nu_2 v+z$. This first order equation can be integrated to give
\[
\Phi_1(x) = - \int_{-\infty}^x e^{\nu_1(x-t)} \frac{w(t)}{c^2} dt ;
\]
the arbitrary constant associated with the complementary solution has to be set to zero for $\Phi_1$
to stay bounded as $x \to -\infty$.
It follows that
\begin{eqnarray*}
\nu_1 \Phi_1(a)-\frac{w(a)}{c^2} &=& \frac{\nu_1}{c^2} \int_{-\infty}^a e^{\nu_1(a-t)} (w(a)-w(t)) \, dt \\
&=& - \frac{1}{c^2} \int_{-\infty}^a e^{\nu_1(a-t)} w'(t) \, dt.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\left|\nu_1 \Phi_1(a)-\frac{w(a)}{c^2} \right| &\leq & \frac{|o(1)|}{c^2} \int_{-\infty}^a e^{\nu_1(a-t)} \, dt \\
& = & \frac{|o(1)|}{|\nu_1|\, c^2} \;.
\end{eqnarray*}
It is therefore natural to impose the boundary condition $\nu_1 \Phi_1=\frac{w}{c^2}$ at $x=a$, which
amounts to
\begin{equation} \label{eqn:left_bc}
v'- \nu_2 v= \frac{w}{\nu_1 c^2} \quad \mbox{at} \;\; x=a.
\end{equation}
This amounts to setting the right-hand side of \eqref{eqn:Phi1} to zero at $x=a$.
A similar analysis for large positive $x$ using $\Phi_2= {\bf L}_2 \cdot \vectwo{v}{z}$ leads to
\begin{equation} \label{eqn:right_bc}
v'- \nu_1 v= \frac{w}{\nu_2 c^2} \quad \mbox{at} \;\; x=b.
\end{equation}
Equations \eqref{eqn:left_bc} and \eqref{eqn:right_bc} are the asymptotic boundary conditions
used when solving for $v={\mathcal{L}_c}\xspace w$.
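To illustrate how these Robin-type conditions behave in practice, here is a small self-contained finite-difference sketch (our own illustration, not the production code behind the computations in this paper; the forcing $w(x)=\operatorname{sech} x$ and the values $\gamma=c=1$ are arbitrary). Solving on two nested truncated domains gives interior values that agree closely, as the derivation predicts:

```python
import numpy as np

def solve_v(a, b, h, gamma, c, w):
    """Finite-difference solve of v'' + v' - (gamma/c^2) v = -w/c^2 on [a, b]
    with the asymptotic Robin conditions (eqn:left_bc)/(eqn:right_bc)."""
    x = np.arange(a, b + h / 2, h)
    n = x.size
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    s = np.sqrt(1.0 + 4.0 * gamma / c**2)
    nu1, nu2 = (-1.0 - s) / 2.0, (-1.0 + s) / 2.0
    for i in range(1, n - 1):  # standard second-order interior stencil
        A[i, i - 1] = 1.0 / h**2 - 1.0 / (2.0 * h)
        A[i, i] = -2.0 / h**2 - gamma / c**2
        A[i, i + 1] = 1.0 / h**2 + 1.0 / (2.0 * h)
        rhs[i] = -w(x[i]) / c**2
    # v' - nu2 v = w / (nu1 c^2) at x = a (one-sided difference)
    A[0, 0] = -1.0 / h - nu2
    A[0, 1] = 1.0 / h
    rhs[0] = w(a) / (nu1 * c**2)
    # v' - nu1 v = w / (nu2 c^2) at x = b
    A[-1, -2] = -1.0 / h
    A[-1, -1] = 1.0 / h - nu1
    rhs[-1] = w(b) / (nu2 * c**2)
    return x, np.linalg.solve(A, rhs)

# Truncating at [-15, 15] versus [-30, 30] barely changes the interior values.
sech = lambda t: 1.0 / np.cosh(t)
x1, v1 = solve_v(-15.0, 15.0, 0.05, 1.0, 1.0, sech)
x2, v2 = solve_v(-30.0, 30.0, 0.05, 1.0, 1.0, sech)
i1, i2 = int(np.argmin(np.abs(x1))), int(np.argmin(np.abs(x2)))
assert v1[i1] > 0 and abs(v1[i1] - v2[i2]) < 1e-4
```

The two solves share the same interior stencil, so the discrepancy at matching points comes only from the boundary treatment, and it decays exponentially into the interior at the rates $\nu_1$, $\nu_2$.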
\begin{remark} \label{remark4}
If $w$ tends to different constants as $x \to \pm \infty$ but with $w'=o(1)$ outside $[a,b]$, the above argument
can be modified to derive suitably modified asymptotic boundary conditions.
This observation is relevant to the numerical study of traveling front problems.
\end{remark}
We will now compute $w^*$ from~\eqref{eqn:wStar} in Step 3 of~\algref{alg:alg1}
using asymptotic boundary conditions. Let $\hat{w}=dc^2 w -{\mathcal{L}_c}\xspace w +f(w)$ denote
the known right hand side of~\eqref{eqn:wStar}.
Comparing this problem with the equation for $v$ in~\secref{sec:v_asym}, and replacing $\gamma/c^2$ with $1$
and $w/c^2$ with $\hat{w}$, the eigenvalues become $\nu_1^*=-\frac{1}{2}(1+\sqrt{5})$ and
$\nu_2^*=\frac{1}{2}(\sqrt{5}-1)$, and the asymptotic boundary conditions are given by
\begin{eqnarray}
{w^*}{\,'}- \nu_2^* w^* &= \frac{\hat{w}}{\nu_1^*} & \mbox{at} \;\; x=a, \label{eqn:*left_bc} \\
{w^*}{\,'}- \nu_1^* w^* & = \frac{\hat{w}}{\nu_2^*} & \mbox{at} \;\; x=b. \label{eqn:*right_bc}
\end{eqnarray}
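The starred eigenvalues are golden-ratio conjugates, i.e., the two roots of $\nu^2+\nu-1=0$, the characteristic polynomial of \eqref{eqn:v_sys} with $\gamma/c^2$ replaced by $1$. A one-line numerical check of this (our own sketch) is:

```python
import math

s5 = math.sqrt(5.0)
nu1_star, nu2_star = -(1.0 + s5) / 2.0, (s5 - 1.0) / 2.0

# Both are roots of nu^2 + nu - 1 = 0 (characteristic polynomial of
# eqn:v_sys with gamma/c^2 -> 1)
for nu in (nu1_star, nu2_star):
    assert abs(nu**2 + nu - 1.0) < 1e-12
```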
\subsection{Computing ${{\mathcal{L}_c}\xspace w}$ and $w^*$ with a given $w$ in two dimensions} \label{sec:v_asym2D}
We take $w=w(x,y)$ on the infinite strip $(-\infty,\infty)\times [-L,L]$
and derive asymptotic boundary conditions to apply on the truncated domain
$\Omega=[a,b]\times [-L,L]$, first for $v={\mathcal{L}_c}\xspace w$. At $y=-L$ and $y=L$
the boundary values are $w=v=0$, for all $x\in\mathbb{R}$. Fourier
expansions for $v={\mathcal{L}_c}\xspace w$ and $w$ are
\begin{equation}
\begin{aligned}
w(x,y) &= \sum_{j=1}^\infty \hat{w}_j (x) \sin \left(\frac{j\pi (y+L)}{2L}\right) , \\
v(x,y) &= \sum_{j=1}^\infty \hat{v}_j (x) \sin \left(\frac{j\pi (y+L)}{2L}\right) .
\end{aligned}
\label{eqn:fourier}
\end{equation}
Insert the relations~\eqref{eqn:fourier} into the equation
$\Delta v +v_x -\frac{\gamma}{c^2} v =-\frac{1}{c^2} w$:
\begin{multline*}
\sum_{j=1}^\infty \left( \hat{v}_j'' (x)+\hat{v}_j' (x)-\left(\frac{j^2\pi^2}{4L^2}+\frac{\gamma}{c^2}\right) \hat{v}_j (x) \right) \sin \left(\frac{j\pi (y+L)}{2L}\right)
= \\
\sum_{j=1}^\infty -\frac{1}{c^2}\hat{w}_j (x) \sin \left(\frac{j\pi (y+L)}{2L}\right) .
\end{multline*}
Then the Fourier coefficients satisfy
\begin{equation}
\hat{v}_j'' (x)+\hat{v}_j' (x)-\left(\frac{j^2\pi^2}{4L^2}+\frac{\gamma}{c^2}\right) \hat{v}_j (x)
= -\frac{1}{c^2}\hat{w}_j (x) .
\label{eqn:bc2d1}
\end{equation}
For $x<0$ with $|x|\gg 1$, we assume that
$|\hat{w}_j (x)|\ll |\hat{w}_1 (x)|$ and
$|\hat{v}_j (x)|\ll |\hat{v}_1 (x)|$ for all $j>1$, so that
\begin{equation}
\begin{aligned}
w(x,y) &\approx \hat{w}_1 (x) \sin \left(\frac{\pi (y+L)}{2L}\right) , \\
v(x,y) &\approx \hat{v}_1 (x) \sin \left(\frac{\pi (y+L)}{2L}\right) .
\end{aligned}
\label{eqn:fourier2}
\end{equation}
Then $\hat{w}_1 (x)$ and $\hat{v}_1 (x)$ have the same behavior in $x$ as
$w$ and $v$, respectively, as $x\to-\infty$. Furthermore, these Fourier
coefficients satisfy~\eqref{eqn:bc2d1}. By analogy with the derivation
in~\secref{sec:v_asym}, if $a<0$ with $|a|\gg 1$ then we apply the
boundary condition
\begin{equation} \label{eqn:l_bc2d}
\hat{v}_1'- \nu_2 \hat{v}_1= \frac{\hat{w}_1}{\nu_1 c^2} \quad \mbox{at} \;\; x=a
\end{equation}
with the eigenvalues
\begin{equation}
\begin{aligned}
\nu_1 &= \frac{1}{2} \left(-1 - \sqrt{1+\frac{\pi^2}{L^2}+ \frac{4 \gamma}{c^2}}\right) \\
\mbox{and} \;\;
\nu_2 &= \frac{1}{2} \left(-1 + \sqrt{1+\frac{\pi^2}{L^2}+ \frac{4 \gamma}{c^2}}\right) .
\end{aligned}
\label{eqn:nu_2d}
\end{equation}
We combine~\eqref{eqn:fourier2} and~\eqref{eqn:l_bc2d} to derive the
approximate boundary condition
\begin{equation} \label{eqn:left_bc2d}
\frac{\partial v}{\partial x}- \nu_2 v= \frac{w}{\nu_1 c^2} \quad \mbox{at} \;\; x=a , \quad
\mbox{for} \;\; -L < y < L.
\end{equation}
This is equivalent to applying~\eqref{eqn:left_bc} at $x=a$ for each fixed
value of $y$, with the eigenvalues~\eqref{eqn:nu} replaced
by~\eqref{eqn:nu_2d}.
The corresponding boundary condition on the right is
\begin{equation} \label{eqn:right_bc2d}
\frac{\partial v}{\partial x} - \nu_1 v= \frac{w}{\nu_2 c^2} \quad \mbox{at} \;\; x=b , \quad
\mbox{for} \;\; -L < y < L,
\end{equation}
by analogy with~\eqref{eqn:right_bc} and the derivation
of~\eqref{eqn:left_bc2d}.
Here, we assume $b\gg 1$. Taken together with $v(x,-L)=v(x,L)=0$ for all
$x\in\mathbb{R}$,~\eqref{eqn:left_bc2d}-\eqref{eqn:right_bc2d} are the
boundary conditions used to compute $v={\mathcal{L}_c}\xspace w$ on the truncated domain.
Note that the asymptotic
conditions~\eqref{eqn:left_bc2d}-\eqref{eqn:right_bc2d} are compatible with
the homogeneous Dirichlet boundary conditions for $v$ and $w$ at $y=\pm L$.
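One can spot-check that the eigenvalues \eqref{eqn:nu_2d} are precisely the roots of the characteristic polynomial of \eqref{eqn:bc2d1} with $j=1$, namely $\nu^2+\nu-\bigl(\frac{\pi^2}{4L^2}+\frac{\gamma}{c^2}\bigr)=0$. The following sketch (our own illustration; the randomized parameter ranges are arbitrary) verifies this at many parameter values:

```python
import math
import random

random.seed(0)
for _ in range(100):
    Lv = random.uniform(0.5, 10.0)   # strip half-width L
    gv = random.uniform(0.1, 5.0)    # gamma
    cv = random.uniform(0.1, 5.0)    # wave speed c
    # j = 1 coefficient in eqn:bc2d1
    q = math.pi**2 / (4.0 * Lv**2) + gv / cv**2
    # discriminant term of eqn:nu_2d; note 1 + pi^2/L^2 + 4 gamma/c^2 = 1 + 4q
    d = math.sqrt(1.0 + math.pi**2 / Lv**2 + 4.0 * gv / cv**2)
    for nu in ((-1.0 - d) / 2.0, (-1.0 + d) / 2.0):
        assert abs(nu**2 + nu - q) < 1e-8
```

The key point encoded in the check is that $\frac{\pi^2}{L^2}+\frac{4\gamma}{c^2}=4\bigl(\frac{\pi^2}{4L^2}+\frac{\gamma}{c^2}\bigr)$, so \eqref{eqn:nu_2d} is \eqref{eqn:nu} with $\gamma/c^2$ shifted by $\pi^2/(4L^2)$.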
Asymptotic boundary conditions for $w^*$ in Step 3 of~\algref{alg:alg1}
may be derived quickly by first comparing~\eqref{eqn:wStar} to the
equation~\eqref{eqn:fhn3_2d} for $v={\mathcal{L}_c}\xspace w$. Let
$\tilde{w}=dc^2 w -{\mathcal{L}_c}\xspace w +f(w)$ denote the known right hand side
of~\eqref{eqn:wStar}. Replacing $\gamma/c^2$ with $1$ and $w/c^2$
with $\tilde{w}$, the eigenvalues become
\begin{equation}
\left\{ \nu^*_1 \, ,\, \nu^*_2 \right\} = \left\{ \frac{1}{2} \left(-1 - \sqrt{5+\frac{\pi^2}{L^2}}\right) \, , \,
\frac{1}{2} \left(-1 + \sqrt{5+\frac{\pi^2}{L^2}}\right) \right\},
\label{eqn:nuStar_2d}
\end{equation}
and the asymptotic boundary conditions are given by
\begin{eqnarray}
\frac{\partial w^*}{\partial x}- \nu_2^* w^* &= \frac{1}{\nu_1^*}\tilde{w} & \mbox{at} \;\; x=a, \;\; -L<y<L, \label{eqn:*left_bc2d} \\
\frac{\partial w^*}{\partial x}- \nu_1^* w^* & = \frac{1}{\nu_2^*}\tilde{w} & \mbox{at} \;\; x=b, \;\; -L<y<L. \label{eqn:*right_bc2d}
\end{eqnarray}
\end{document}
\begin{document}
\date{\today.}
\keywords{Isogeny graphs, $(\ell,\ell)$-isogenies, principally polarised abelian varieties, Jacobians of hyperelliptic curves, lattices in symplectic spaces, orders in CM-fields}
\begin{abstract}
Fix a prime number $\ell$. Graphs of isogenies of degree a power of $\ell$ are well-understood for elliptic curves, but not for higher-dimensional abelian varieties. We study the case of absolutely simple ordinary abelian varieties over a finite field. We analyse graphs of so-called $\mathfrak l$-isogenies, showing that they are (almost) volcanoes in any dimension. Specializing to the case of principally polarizable abelian surfaces, we then exploit this structure to describe graphs of a particular class of isogenies known as \emph{$(\ell, \ell)$}-isogenies: those whose kernels are maximal isotropic subgroups of the $\ell$-torsion for the Weil pairing.
We use these two results to write an algorithm giving a path of computable isogenies from an arbitrary absolutely simple ordinary abelian surface towards one with maximal endomorphism ring, which has immediate consequences for the CM-method in genus 2, for computing explicit isogenies, and for the random self-reducibility of the discrete logarithm problem in genus 2 cryptography.
\end{abstract}
\title{Isogeny graphs of ordinary abelian varieties}
\section{Introduction}
\subsection{Background}
Graphs of isogenies of principally polarized abelian varieties of dimension $g$ have been an extensive object of study in both number theory and mathematical cryptology. When $g = 1$, Kohel \cite{kohel:thesis} gave a description of the structure of such graphs and used it to compute the endomorphism ring of an elliptic curve over a finite field. This description has subsequently been utilized in a variety of cryptographic applications such as point counting on elliptic curves \cite{fouquet-morain}, random self-reducibility of the elliptic curve discrete logarithm problem in isogeny classes \cite{jmv:asiacrypt, jmv:jnt}, generating elliptic curves with a prescribed number of points via the CM method based on the Chinese Remainder Theorem~\cite{sutherland}, as well as computing modular polynomials \cite{broker-lauter-sutherland}.
When $g > 1$, the problem of describing the structure of these graphs becomes harder. The literature has seen a number of attempts to generalize Kohel's thesis, yet the structure of these isogeny graphs has not been studied systematically. For $g = 2$, Br\"oker, Gruenewald and Lauter~\cite{broker-gruenewald-lauter} proved that graphs of ${(\ell,\ell)}$-isogenies of abelian surfaces are not volcanoes.
In~\cite{lauter-robert}, Lauter and Robert observed that from a random abelian surface, it might not always be possible to reach an isogenous one with maximal endomorphism ring (locally at $\ell$) using only ${(\ell,\ell)}$-isogenies.
Following in the footsteps of Kohel, Bisson~\cite[Ch.5]{bisson} sketched the relation between isogeny graphs and the lattice of orders in the endomorphism algebra for abelian varieties of higher dimension. This provides a first approximation of the global structure of the graphs, but allows no fine-grained analysis.
It was also unclear whether the notion of ${(\ell,\ell)}$-isogenies was the right one for generalizing the structure of isogeny graphs.
Ionica and Thom\'e \cite{ionica-thome} observed that the subgraph of ${(\ell,\ell)}$-isogenies restricted to surfaces with maximal real order in $K_0$ (globally) could be studied through what they called $\mathfrak l$-isogenies, where $\mathfrak l$ is a prime ideal in $K_0$ above $\ell$. They suggest that the $\mathfrak l$-isogeny graphs should be volcanoes, under certain assumptions\footnote{The proof of \cite[Prop.15]{ionica-thome} gives a count of the number of points at each level of the graph, but does not allow a conclusive statement on the edge structure, and thus does not appear to prove that the graph is a volcano.}.
When $\mathfrak l$ is principal of prime norm, generated by a totally positive real endomorphism $\beta$, the $\mathfrak l$-isogenies coincide with the cyclic $\beta$-isogenies of \cite{dudeanu-jetchev-robert} --- an important notion, since these are the cyclic isogenies preserving principal polarizability.
Our main contributions include a full description of graphs of $\mathfrak l$-isogenies for any $g \geq 1$. This proves the claims of~\cite{ionica-thome} and extends them to a much more general setting. For $g = 2$, we exploit this $\mathfrak l$-structure to provide a complete description of graphs of ${(\ell,\ell)}$-isogenies preserving the maximal real multiplication locally at~$\ell$. We also explore the structure of ${(\ell,\ell)}$-isogenies when the real multiplication is not necessarily locally maximal. As an application of these results, we build an algorithm that, given as input a principally polarized abelian surface, finds a path of computable isogenies leading to a surface with maximal endomorphism ring. This was a missing --- yet crucial --- building block for the CRT-based CM-method in dimension 2, for computing explicit isogenies between two given surfaces, and for the random self-reducibility of the discrete logarithm problem in genus 2 cryptography. Applications are discussed more thoroughly in Section~\ref{subsec:introApplications}.
This structure of $\mathfrak l$-isogenies, when one assumes that $\mathfrak{l}$ is of prime norm and trivial in the narrow class group of $K_0$, implies in particular that graphs of cyclic $\beta$-isogenies are volcanoes.
In parallel to the present work, Chloe Martindale has recently announced a similar result on cyclic $\beta$-isogenies.
It will be found in her forthcoming Ph.D. thesis, as part of a larger project aimed at computing Hilbert class polynomials and modular polynomials in genus 2. Her results, which are proven in a complex-analytic setting different from our $\ell$-adic methods, yield the same description of the graph in this particular case.
\subsection{Setting}\label{subsec:setting}
For a given ordinary, absolutely simple abelian variety $\mathscr A$ over a finite field $k = \mathbb{F}_q$, the associated endomorphism algebra $\End (\mathscr A) \otimes_\mathbb{Z} \mathbb{Q}$ is isomorphic to a CM-field $K$, i.e., a totally imaginary quadratic extension of a totally real number field $K_0$. Moreover, the dimension $g$ of $\mathscr A$ equals the degree $[K_0 : \mathbb{Q}]$. The endomorphism ring $\End(\mathscr A)$ is identified with an order $\mathcal{O}$ in $K$. The Frobenius endomorphism $\pi$ of $\mathscr A$ generates the endomorphism algebra $K = \mathbb{Q}(\pi)$, and its characteristic polynomial determines its $k$-isogeny class, by Tate's isogeny theorem~\cite{Tate1966}. In particular, since $\End_k(\mathscr A) = \End_{\overline k}(\mathscr A)$ (see~\cite[Thm.7.2.]{Waterhouse1969}), all isogenous varieties (over $\overline k$) share the same CM-field $K$, and their endomorphism rings all correspond to orders in $K$. Thus, the structure of isogeny graphs is related to the structure of the lattice of orders of the field $K$.
The choice of an isomorphism $\End(\mathscr A) \otimes_\mathbb{Z} \mathbb{Q} \cong K$ naturally induces an embedding $\imath_\mathscr B : \End(\mathscr B) \rightarrow K$ for any variety $\mathscr B$ that is isogenous to $\mathscr A$, and it does not depend on the choice of an isogeny. We can then unambiguously denote by $\mathcal{O}(\mathscr B)$ the order in $K$ corresponding to the endomorphism ring of any $\mathscr B$.
Define the suborder $\mathcal{O}_0(\mathscr A) = \mathcal{O}(\mathscr A) \cap K_0$; the variety $\mathscr A$ is said to have \emph{real multiplication} (RM) by $\mathcal{O}_0(\mathscr A)$. Recall that the conductor $\mathfrak{f}$ of an order $\mathcal{O}$ in a number field $L$ is defined as
$$\mathfrak{f} = \{x \in L\ |\ x\mathcal{O}_L \subseteq \mathcal{O}\}.$$
Equivalently, it is the largest subset of $L$ which is an ideal in both $\mathcal{O}_L$ and $\mathcal{O}$.
Fix once and for all a prime number $\ell$ different from the characteristic of the finite field $k$, and write $\mathfrak{o}(\mathscr A) = \mathcal{O}(\mathscr A) \otimes_{\mathbb{Z}} \mathbb{Z}_\ell$, the \emph{local order} of $\mathscr A$. It is an order in the algebra $K_\ell = K \otimes_{\mathbb{Q}} \mathbb{Q}_\ell$. Also, $\mathfrak{o}_K = \mathcal{O}_K \otimes_{\mathbb{Z}} \mathbb{Z}_\ell$ is the maximal order in $K_\ell$.
Finally, write $\mathfrak{o}_0(\mathscr A)$ for the \emph{local real order} $\mathcal{O}_0(\mathscr A) \otimes_{\mathbb{Z}} \mathbb{Z}_\ell$, which is an order in the algebra $K_{0,\ell} = K_0 \otimes_{\mathbb{Q}} \mathbb{Q}_\ell$, and let $\mathfrak{o}_0 = \mathcal{O}_{K_0} \otimes_{\mathbb{Z}} \mathbb{Z}_\ell$.
\subsection{Main results}
When $\mathscr A$ is an elliptic curve, the lattice of orders is simple: $K$ being a quadratic number field (i.e., $K_0 = \mathbb{Q}$), all the orders in $K$ are of the form $\mathcal{O}_c = \mathbb{Z} + c\mathcal{O}_K$, with $c \in \mathbb{Z}$ generating the conductor of $\mathcal{O}_c$. Locally at a prime number $\ell$, the lattice of orders in $K_\ell$ is simply the chain $\mathfrak{o}_{K} \supset \mathbb{Z}_\ell + \ell \mathfrak{o}_{K} \supset \mathbb{Z}_{\ell} + \ell^2 \mathfrak{o}_{K} \supset \cdots$.
The (local) structure of the lattice of orders of a CM-field $K$ is in general not as simple as the linear structure arising in the case of an imaginary quadratic field. This constitutes the main difficulty in generalizing the structural results to $g > 1$.
For the rest of the paper, we let $g > 1$, and fix an isogeny class whose endomorphism algebra is the CM-field $K$.
\subsubsection{Isogeny graphs preserving the real multiplication} \label{subsubsec:theorem1}
In the case of quadratic number fields, the inclusion of orders corresponds to the divisibility relation of conductors. Neither the one-to-one correspondence between orders and conductors, nor the relationship between inclusion and divisibility holds in higher degree.
We can, however, prove that such a correspondence between orders and conductors, and inclusion and divisibility still holds if we restrict to orders with maximal real multiplication, i.e., $\mathcal{O}_{K_0} \subset \mathcal{O}$. More than that, it even holds locally, i.e., for the orders of $K_\ell$ containing $\mathfrak{o}_0$. More precisely, we show in Section~\ref{sec:orders}, Theorem~\ref{thm:classificationOrdersMaxRM}, that any order in $K$ (respectively $K_\ell$) with maximal real multiplication is of the form $\mathcal{O}_{K_0} + \mathfrak f \mathcal{O}_K$ (respectively $\mathfrak{o}_{0} + \mathfrak f \mathfrak{o}_K$) for some ideal $\mathfrak f$ in $\mathcal{O}_{K_0}$. Our first results use this classification to provide a complete description of graphs of isogenies preserving the maximal real multiplication locally at $\ell$.
The main building block for isogenies preserving the real multiplication is the notion of $\mathfrak l$-isogeny.
\begin{definition}[$\mathfrak l$-isogeny]\label{def:frakLIso}
Let $\mathfrak l$ be a prime above $\ell$ in $K_0$, and $\mathscr A$ a variety in the fixed isogeny class. Suppose $\mathfrak l$ is coprime to the conductor of $\mathcal{O}_0(\mathscr A)$.
An $\mathfrak l$-isogeny from $\mathscr A$ is an isogeny whose kernel is a proper, $\mathcal{O}_0(\mathscr A)$-stable subgroup of\footnote{By abuse of notation, we write $\mathscr A[\mathfrak l]$ in place of $\mathscr A[\mathfrak l \cap \mathcal{O}(\mathscr A)]$.} $\mathscr A[\mathfrak l]$.
\end{definition}
\begin{remark}
The degree of an $\mathfrak l$-isogeny is $N\mathfrak l$.
\end{remark}
We therefore study the structure of the graph $\mathscr W_\mathfrak l$ whose vertices are the isomorphism classes of abelian varieties $\mathscr A$ in the fixed isogeny class with maximal real multiplication locally at $\ell$ (i.e., $\mathfrak{o}_0 \subset \mathfrak{o}(\mathscr A)$), and in which there is an edge of multiplicity $m$ from a vertex with representative $\mathscr A$ to a vertex $\mathscr B$ if there are $m$ distinct subgroups $\kappa\subset \mathscr A$ that are kernels of $\mathfrak l$-isogenies with $\mathscr A/\kappa \cong \mathscr B$ (of course, the multiplicity $m$ does not depend on the choice of the representative $\mathscr A$).
\begin{remark}
When $\mathfrak l$ is trivial in the narrow class group of $K_0$, then $\mathfrak l$-isogenies preserve principal polarizability.
The graph $\mathscr W_\mathfrak l$ does not account for polarizations, but it is actually easy to add polarizations back to graphs of unpolarized varieties, as will be discussed in Section~\ref{sec:Polarizations}.
\end{remark}
Each vertex $\mathscr A$ of this graph $\mathscr W_\mathfrak l$ has a level, given by the valuation $v_\mathfrak l(\mathscr A)$ at~$\mathfrak l$ of the conductor of $\mathcal{O}(\mathscr A)$.
Our first result, Theorem~\ref{thm:lisogenyvolcanoes}, completely describes the structure of the connected components of $\mathscr W_\mathfrak l$, which turns out to be closely related to the volcanoes observed for cyclic isogenies of elliptic curves. It is proven in Subsection~\ref{subsec:frakLIsogenyGraphs}.
\begin{theorem}\label{thm:lisogenyvolcanoes}
Let $\mathscr V$ be any connected component of the leveled $\mathfrak l$-isogeny graph $(\mathscr W_\mathfrak l, v_\mathfrak l)$.
For each $i \geq 0$, let $\mathscr V_i$ be the subgraph of $\mathscr V$ at level $i$.
We have:
\begin{enumerate}[label=(\roman*)]
\item For each $i \geq 0$, the varieties in $\mathscr V_i$ share a common endomorphism ring $\mathcal{O}_i$. The order $\mathcal{O}_0$ can be any order with locally maximal real multiplication at $\ell$, whose conductor is not divisible by $\mathfrak l$;
\item The level $\mathscr V_0$ is isomorphic to the Cayley graph of the subgroup of $\Pic(\mathcal{O}_0)$ with generators the prime ideals above $\mathfrak l$; fixing $\mathscr A \in \mathscr V_0$, an isomorphism is given by sending any ideal class $[\mathfrak a]$ to the isomorphism class of $\mathscr A/\mathscr A[\mathfrak a]$;
\item For any $\mathscr A \in \mathscr V_0$, there are $\left(N(\mathfrak l)-\left(\frac{K}{\mathfrak l}\right)\right)/[\mathcal{O}_{0}^\times : \mathcal{O}_{1}^\times]$ edges of multiplicity $[\mathcal{O}_{0}^\times : \mathcal{O}_{1}^\times]$ from $\mathscr A$ to distinct vertices of~$\mathscr V_{1}$ (where $\left(\frac{K}{\mathfrak l}\right)$ is $-1$, $0$ or $1$ if $\mathfrak l$ is inert, ramified, or split in $K$);
\item For each $i > 0$, and any $\mathscr A \in \mathscr V_i$, there is one simple edge from $\mathscr A$ to a vertex of $\mathscr V_{i-1}$, and $N(\mathfrak l)/[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]$ edges of multiplicity $[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]$ to distinct vertices of $\mathscr V_{i+1}$, and there is no other edge from $\mathscr A$;
\item For each path $\mathscr A \rightarrow \mathscr B \rightarrow \mathscr C$ where the first edge is descending, and the second ascending, we have $\mathscr C \cong \mathscr A / \mathscr A[\mathfrak l]$;
\item\label{ascendingImpliesDescending} For each ascending edge $\mathscr B \rightarrow \mathscr C$, there is a descending edge $\mathscr C \rightarrow \mathscr B / \mathscr B[\mathfrak l]$.
\end{enumerate}
In particular, the graph $\mathscr V$ is an $N(\mathfrak l)$-volcano if and only if $\mathcal{O}_0^\times \subset K_0$ and $\mathfrak l$ is principal in ${\mathcal{O}_0 \cap K_0}$.
Also, if $\mathscr V$ contains a variety defined over the finite field $k$, the subgraph containing only the varieties defined over $k$ consists of the subgraph of the first $v$ levels, where $v$ is the valuation at $\mathfrak l$ of the conductor of $\mathcal{O}_{K_0}[\pi] = \mathcal{O}_{K_0}[\pi, \pi^\dagger]$.
\end{theorem}
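To make the volcano case concrete, the following toy sketch (our own illustration, not an algorithm from this paper; the function name \texttt{volcano\_level\_sizes} and the sample parameters are ours) tabulates the level sizes of an $N(\mathfrak l)$-volcano whose crater is a cycle --- the split case $\left(\frac{K}{\mathfrak l}\right)=1$ with trivial unit indices $[\mathcal{O}_i^\times : \mathcal{O}_{i+1}^\times]=1$ in the theorem:

```python
def volcano_level_sizes(N, crater, depth):
    """Vertex count per level of an N-volcano with a crater cycle of length
    `crater` and `depth` levels below the crater (split case, trivial unit
    indices)."""
    sizes = [crater]
    for i in range(1, depth + 1):
        if i == 1:
            # Theorem (iii): each crater vertex has N - (K/l) = N - 1 children
            sizes.append(crater * (N - 1))
        else:
            # Theorem (iv): each lower vertex has N children and one parent
            sizes.append(sizes[-1] * N)
    return sizes

sizes = volcano_level_sizes(3, 4, 2)
assert sizes == [4, 8, 24]

# Every non-leaf vertex has degree N + 1: crater vertices carry 2 crater
# edges plus (N - 1) descending ones; lower vertices carry 1 ascending
# edge plus N descending ones.
N = 3
assert 2 + (N - 1) == N + 1
assert 1 + N == N + 1
```

This is pure bookkeeping on the counts in items (iii) and (iv); it does not model the ideal-class action on the crater.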
\subsubsection{Graphs of $(\ell, \ell)$-isogenies}
The following results focus on the case $g = 2$. In contrast to the case of elliptic curves, where a principal polarization always exists, the property of being principally polarizable is not even invariant under cyclic isogeny in genus $2$. In addition, basic algorithms for computing isogenies of elliptic curves from a given kernel (such as V\'elu's formulae \cite{velu}) are difficult to generalize, and the only known methods \cite{robert,cosset-robert,lubicz-robert,dudeanu-jetchev-robert} assume certain hypotheses and thus do not apply to general isogenies with cyclic kernels. On the other hand, ${(\ell,\ell)}$-isogenies always preserve principal polarizability, and are computable with the most efficient of these algorithms~\cite{cosset-robert}. These ${(\ell,\ell)}$-isogenies are therefore an important notion, and we are interested in understanding the structure of the underlying graphs.
\begin{definition}[${(\ell,\ell)}$-isogeny]\label{def:ll}
Let $(\mathscr A, \xi_{\mathscr{A}})$ be a principally polarized abelian surface. We call an isogeny $\varphi \colon \mathscr A \rightarrow \mathscr B$ an ${(\ell,\ell)}$-isogeny (with respect to $\xi_{\mathscr{A}}$) if $\ker (\varphi)$ is a maximal isotropic subgroup of $\mathscr A[\ell]$ with respect to the Weil pairing on $\mathscr A[\ell]$ induced by the polarization isomorphism corresponding to $\xi_\mathscr A$.
\end{definition}
One knows that if $\varphi \colon \mathscr A \rightarrow \mathscr B$ is an $(\ell, \ell)$-isogeny, then there is a unique principal polarization $\xi_{\mathscr{B}}$ on $\mathscr B$ such that $\varphi^* \xi_{\mathscr{B}} = \xi_{\mathscr{A}}^{\ell}$ (this is a consequence of Grothendieck descent \cite[pp.290--291]{mumford:eq1}; see also \cite[Prop. 2.4.7]{drobert:thesis}). This allows us to view an isogeny of \emph{a priori} non-polarized abelian varieties $\varphi$ as an isogeny of polarized abelian varieties $\varphi \colon (\mathscr A, \mathcal{L}^\ell) \rightarrow (\mathscr B, \mathcal{M})$.\\
First, we restrict our attention to abelian surfaces with maximal real multiplication at $\ell$. The description of $\mathfrak l$-isogeny graphs provided by Theorem~\ref{thm:lisogenyvolcanoes} leads to a complete understanding of graphs of ${(\ell,\ell)}$-isogenies preserving the maximal real order locally at $\ell$, via the next theorem.
More precisely, we study the structure of the graph $\mathscr G_{\ell,\ell}$ whose vertices are the isomorphism classes of principally polarizable surfaces $\mathscr A$ in the fixed isogeny class, which have maximal real multiplication locally at $\ell$ (i.e., $\mathfrak{o}_0 \subset \mathfrak{o}(\mathscr A)$), with an edge of multiplicity $m$ from such a vertex $\mathscr A$ to a vertex $\mathscr B$ if there are $m$ distinct subgroups $\kappa\subset \mathscr A$ that are kernels of ${(\ell,\ell)}$-isogenies such that $\mathscr A/\kappa \cong \mathscr B$. This definition will be justified by the fact that the kernels of ${(\ell,\ell)}$-isogenies preserving the maximal real multiplication locally at $\ell$ do not depend on the choice of a principal polarization on the source (see Remark~\ref{rem:RMPreservingEllEllDoNotDependOnPol}).
The following theorem is proven in Subsection~\ref{subsec:ellellMaxRM}, where its consequences are discussed in detail.
\begin{theorem}\label{thm:ellellLCombinations}
Suppose that $\mathscr A$ has maximal real multiplication locally at $\ell$.
Let $\xi$ be any principal polarization on $\mathscr A$.
There is a total of $\ell^3 + \ell^2 + \ell + 1$ kernels of ${(\ell,\ell)}$-isogenies from~$\mathscr A$ with respect to $\xi$.
Among these, the kernels whose target also has maximal local real order do not depend on $\xi$, and are:
\begin{enumerate}[label=(\roman*)]
\item the $\ell^2+1$ kernels of $\ell\mathcal{O}_{K_0}$-isogenies if $\ell$ is inert in $K_0$,
\item the $\ell^2 + 2\ell + 1$ kernels of compositions of an $\mathfrak l_1$-isogeny with an $\mathfrak l_2$-isogeny if $\ell$ splits as $\mathfrak l_1\mathfrak l_2$ in $K_0$,
\item the $\ell^2 + \ell + 1$ kernels of compositions of two $\mathfrak l$-isogenies if $\ell$ ramifies as $\mathfrak l^2$ in $K_0$.
\end{enumerate}
The other ${(\ell,\ell)}$-isogenies have targets with real multiplication by $\mathfrak{o}_1 = \mathbb{Z}_\ell + \ell \mathfrak{o}_0$.
\end{theorem}
Second, we look at ${(\ell,\ell)}$-isogenies when the real multiplication is not maximal at $\ell$.
Note that since $g = 2$, even though the lattice of orders in $K$ is much more intricate than in the quadratic case, there still is some linearity when looking at the suborders $\mathcal{O}_0(\mathscr A) = \mathcal{O}(\mathscr A) \cap K_0$, since $K_0$ is a quadratic number field.
For any variety $\mathscr A$ in the fixed isogeny class, there is an integer $f$, the conductor of $\mathcal{O}_0(\mathscr A)$, such that
$\mathcal{O}_0(\mathscr A) = \mathbb{Z} + f\mathcal{O}_{K_0}$.
The local order $\mathfrak{o}_0(\mathscr A)$
is exactly the order $\mathfrak{o}_n = \mathbb{Z}_\ell + \ell^n\mathfrak{o}_0$ in $K_{0,\ell}$, where $n = v_\ell(f)$ is the valuation of $f$ at the prime $\ell$.
The next result describes how $(\ell, \ell)$-isogenies can navigate between these ``levels'' of real multiplication.
Let $\varphi \colon \mathscr A \rightarrow \mathscr B$ be an ${(\ell,\ell)}$-isogeny with respect to a polarization $\xi$ on $\mathscr A$. If $\mathfrak{o}_0(\mathscr A) \subset \mathfrak{o}_0(\mathscr B)$, we refer to $\varphi$ as an \emph{RM-ascending} isogeny; if $\mathfrak{o}_0(\mathscr B) \subset \mathfrak{o}_0(\mathscr A)$, we call $\varphi$ \emph{RM-descending}; otherwise, if $\mathfrak{o}_0(\mathscr A) = \mathfrak{o}_0(\mathscr B)$, $\varphi$ is called \emph{RM-horizontal}.
Note that we start by considering ${(\ell,\ell)}$-isogenies defined over the algebraic closure of the finite field $k$; by virtue of Remark~\ref{rem:fieldOfDefIsogenies}, it is then easy to deduce the results on isogenies defined over $k$. The following assumes $n > 0$, since the case $n = 0$ is covered by Theorem~\ref{thm:ellellLCombinations}.
\begin{theorem}\label{RMupIso}
Suppose $\mathfrak{o}_0(\mathscr A) = \mathfrak{o}_n$ with $n > 0$. For any principal polarization $\xi$ on $\mathscr A$, the kernels of ${(\ell,\ell)}$-isogenies from $(\mathscr A, \xi)$ are:
\begin{enumerate}[label=(\roman*)]
\item A unique RM-ascending one,
whose target has local order $\mathfrak{o}_{n-1}\cdot\mathfrak{o}(\mathscr A)$ (in particular, its local real order is $\mathfrak{o}_{n-1}$, and the kernel is defined over the same field as $\mathscr A$),
\item $\ell^2 + \ell$ RM-horizontal ones, and
\item $\ell^3$ RM-descending isogenies, whose targets have local real order $\mathfrak{o}_{n+1}$.
\end{enumerate}
\end{theorem}
The proof of this theorem is the subject of Section~\ref{sec:levelsRM}.
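As a consistency check on the counts in Theorems~\ref{thm:ellellLCombinations} and~\ref{RMupIso} (a small arithmetic sketch of our own), the kernels in each case sum to $\ell^3+\ell^2+\ell+1 = (\ell+1)(\ell^2+1)$; since these are polynomial identities of degree at most $3$, verifying them at several integers establishes them identically:

```python
def total(l):
    # total number of (l, l)-kernels from a principally polarized surface
    return l**3 + l**2 + l + 1

for l in range(2, 50):
    # factorization of the total count
    assert total(l) == (l + 1) * (l**2 + 1)
    # Theorem RMupIso: 1 RM-ascending + (l^2 + l) RM-horizontal
    # + l^3 RM-descending kernels
    assert total(l) == 1 + (l**2 + l) + l**3
    # Theorem ellellLCombinations, split case: (l + 1)^2 kernels preserve
    # maximal real multiplication, so l^3 - l targets drop to o_1
    assert total(l) - (l**2 + 2 * l + 1) == l**3 - l
```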
\subsection{Lattices in $\ell$-adic symplectic spaces}
The theorems stated above are proven using a different approach from the currently available analyses of the structure of $\ell$-power isogeny graphs. Rather than working with complex tori, we attach to an $\ell$-isogeny of abelian varieties a pair of lattices in an $\ell$-adic symplectic space, whose relative position is determined by the kernel of the isogeny, following the proof of Tate's isogeny theorem \cite{Tate1966}.
Inspired by~\cite[\textsection 6]{Cornut04}, where the theory of Hecke operators on $\mathrm{GL}_2$ is used to understand the CM elliptic curves isogenous to a fixed curve, we analyse the possible local endomorphism rings (in $K_\ell = K \otimes_{\mathbb{Q}} \mathbb{Q}_\ell$) for an analogous notion of ``neighboring'' lattices. This method gives a precise count of the horizontal isogenies as well as of the vertical isogenies that increase or decrease the local endomorphism ring at $\ell$.
\subsection{``Going up'' and applications}\label{subsec:introApplications}
One of the applications of the above structural results is an algorithm that, given as input a principally polarized abelian surface, finds a path of computable isogenies leading to a surface with maximal endomorphism ring, whenever this is possible (and we characterize when it is possible). This algorithm is built and analysed in Section~\ref{sec:goingUp}.
Such an algorithm has various applications. One of them is in generating (hyperelliptic) curves of genus 2 over finite fields with
suitable security parameters via the CM method. The method is based on first computing invariants for the curve
(Igusa invariants) and then using a method of Mestre \cite{mestre:genus2} (see also \cite{cardona-quer}) to reconstruct the equation of the curve.
The computation of the invariants is expensive and there are three different ways to compute their minimal polynomials (the Igusa class polynomials): 1) complex analytic techniques \cite{vanwamelen:genus2, weng:genus2, Streng10}; 2) $p$-adic lifting techniques \cite{carls-kohel-lubicz, carls-lubicz, ghkr:2adic}; 3) a technique based on the Chinese Remainder Theorem \cite{eisentraeger-lauter, freeman-lauter, broker-gruenewald-lauter} (the \emph{CRT method}).
Although 3) is currently the least efficient method, it is also the least understood and deserves more attention: its analog for elliptic curves holds the records for time and space complexity and for the size of the computed examples \cite{enge-sutherland, sutherland:crt}.
The CRT method of \cite{broker-gruenewald-lauter} requires one to find an ordinary abelian surface $\mathscr A / \mathbb{F}_q$ whose endomorphism ring is the maximal order $\mathcal{O}_K$ of the quartic CM field $K$ isomorphic to the
endomorphism algebra $\End(\mathscr A) \otimes_{\mathbb{Z}} \mathbb{Q}$. This is obtained by trying random hyperelliptic curves $\mathscr C$ in the isogeny class and using the maximal endomorphism test of \cite{freeman-lauter}, thus making the algorithm quite inefficient.
In \cite{lauter-robert}, the authors propose a different method based on $(\ell, \ell)$-isogenies that does not require the endomorphism ring to be maximal (generalizing the method of Sutherland \cite{sutherland:crt} for elliptic curves). Starting from an arbitrary abelian surface in the isogeny class, the method relies on a probabilistic algorithm for ``going up'' to an abelian surface with maximal endomorphism ring. Although the authors cannot prove that the going-up algorithm succeeds with any fixed probability, the improvement is practical, and heuristically it reduces the running time of the CRT method in genus 2 from $\mathcal{O}(q^3)$ to $\mathcal{O}(q^{3/2})$. Our algorithm for going up takes inspiration from~\cite{lauter-robert}, but exploits our new structural results on isogeny graphs.
A second application is in the computation of an explicit isogeny between any two given principally polarized abelian surfaces in the same isogeny class. An algorithm is given in~\cite{JW15} to find an isogeny between two such surfaces with maximal endomorphism ring.
This can be extended to other pairs of isogenous principally polarized abelian surfaces, by first computing paths of isogenies to reach the maximal endomorphism ring, then applying the method of~\cite{JW15}.
Similarly, this ``going up'' algorithm can also extend results about the random self-reducibility of the discrete logarithm problem in genus 2 cryptography. The results of~\cite{JW15} imply that if the discrete logarithm problem is efficiently solvable on a non-negligible proportion of the Jacobians with maximal endomorphism ring within an isogeny class, then it is efficiently solvable for all isogenous Jacobians with maximal endomorphism ring.
For this to hold on any other Jacobian in the isogeny class, it only remains to compute a path of isogenies reaching the level of the maximal endomorphism ring.
Finally, we note that the ``going-up'' algorithm can also be applied to the computation of endomorphism rings of abelian surfaces over finite fields, thus extending the work of Bisson \cite{bisson}. This will be the subject of a forthcoming paper.
\section{Orders} \label{sec:orders}
\subsection{Global and local orders}\label{subsec:localglobalorders}
An order in a number field is a full rank $\mathbb{Z}$-lattice which is also a subring. If $\ell$ is a prime, and $L$ is a finite extension of $\mathbb{Q}_\ell$ or a finite product of finite extensions of $\mathbb{Q}_\ell$, an order in $L$ is a full rank $\mathbb{Z}_\ell$-lattice which is also a subring. If $K$ is a number field, write $K_\ell = K \otimes_\mathbb{Q} \mathbb{Q}_\ell$. In this section, if $\mathcal{O}$ is an order in $K$, write $\mathcal{O}_\ell = \mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_\ell$; then $\mathcal{O}_\ell$ is an order in $K_\ell$.
\begin{lemma}
Given a number field $K$ and a sequence $R(\ell)$ of orders in $K_\ell$, such that $R(\ell)$ is the maximal order in $K_\ell$ for almost all $\ell$, there exists a unique order $\mathcal{O}$ in $K$ such that $\mathcal{O}_\ell = R(\ell)$ for all $\ell$. This order $\mathcal{O}$ is the intersection $\bigcap_{\ell} (R(\ell)\cap K)$.
\end{lemma}
\begin{proof}
This is well-known, but we include a proof for completeness. Let $n = [K:\mathbb{Q}]$ and pick a $\mathbb{Z}$-basis for the maximal order $\mathcal{O}_K$ of $K$. With this choice, a lattice $\Lambda$ in $\mathcal{O}_K$ may be described by a matrix in $M_n(\mathbb{Z}) \cap \mathrm{GL}_n(\mathbb{Q})$ whose column vectors are a basis; this matrix is well-defined up to the left action of $\mathrm{GL}_n(\mathbb{Z})$. Similarly, a local lattice $\Lambda_\ell$ in $K_\ell$ may be described by a matrix in $M_n(\mathbb{Z}_\ell) \cap \mathrm{GL}_n(\mathbb{Q}_\ell)$, well-defined up to the left action of $\mathrm{GL}_n(\mathbb{Z}_\ell)$. It thus suffices to prove that, given matrices $M_\ell \in M_n(\mathbb{Z}_\ell) \cap \mathrm{GL}_n(\mathbb{Q}_\ell)$, almost all of which are in $\mathrm{GL}_n(\mathbb{Z}_\ell)$, there exists an $N \in M_n(\mathbb{Z}) \cap \mathrm{GL}_n(\mathbb{Q})$ such that $NM_\ell^{-1} \in \mathrm{GL}_n(\mathbb{Z}_\ell)$ for all $\ell$. This follows from
$$
\mathrm{GL}_n(\mathbb{A}_\mathrm{fin}) = \mathrm{GL}_n(\mathbb{Q}) \cdot \prod_\ell \mathrm{GL}_n(\mathbb{Z}_\ell),
$$
a consequence of strong approximation for $\mathrm{SL}_n$ and the surjectivity of the determinant map $\mathrm{GL}_n(\mathbb{Z}_\ell) \to \mathbb{Z}_\ell^\times$ (see the argument in \cite[p. 52]{Gelbart}, which generalizes in an obvious way to $n > 2$). Finally, the identity $\mathcal{O} = \bigcap_{\ell} (\mathcal{O}_\ell\cap K)$ follows from the fact that $\tilde{\mathcal{O}} = \bigcap_{\ell} (\mathcal{O}_\ell\cap K)$ is an order in $K$ such that $\tilde{\mathcal{O}}_\ell = \mathcal{O}_\ell$ for all $\ell$.
\end{proof}
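As a quick sanity check of the lemma (a standard example, included only for illustration), take $K = \mathbb{Q}(i)$, $R(2) = \mathbb{Z}_2 + 2i\mathbb{Z}_2$, and $R(\ell) = \mathbb{Z}_\ell[i]$ for every odd prime $\ell$. The lemma produces
$$\mathcal{O} = \bigcap_{\ell} (R(\ell)\cap K) = \mathbb{Z} + 2i\mathbb{Z} = \mathbb{Z}[2i],$$
and indeed $\mathcal{O}_2 = \mathbb{Z}_2 + 2i\mathbb{Z}_2$, while $\mathcal{O}_\ell = \mathbb{Z}_\ell[i]$ for odd $\ell$, since the index $[\mathbb{Z}[i] : \mathbb{Z}[2i]] = 2$ is invertible in $\mathbb{Z}_\ell$.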
\subsection{Orders with maximal real multiplication}
Suppose that $K_0$ is a number field or a finite product of extensions of $\mathbb{Q}_p$, and let $K$ be a quadratic extension of $K_0$ (i.e., an algebra of the form $K_0[x]/f(x)$, where $f$ is a separable quadratic polynomial). The non-trivial element of $\mathrm{Aut}(K/K_0)$ will be denoted $\dagger$. In the case that $K$ is a CM-field and $K_0$ its maximal real subfield, Goren and Lauter~\cite{GorenLauter09} proved that if $K_0$ has trivial class group, the orders with maximal real multiplication, i.e., the orders containing $\mathcal{O}_{K_0}$, are characterized by their conductor. Their proof assumes that every ideal of $\mathcal{O}_K$ fixed by $\mathrm{Gal}(K/K_0)$ is an ideal of $\mathcal{O}_{K_0}$ extended to $\mathcal{O}_K$; this is rather restrictive, since it implies that no finite prime of $K_0$ ramifies in $K$. In that case, these orders are exactly the orders $\mathcal{O}_{K_0} + \mathfrak{f}_0 \mathcal{O}_K$, for ideals $\mathfrak{f}_0$ in $\mathcal{O}_{K_0}$. We generalize this result to arbitrary quadratic extensions; abusing language, we will continue to say that an order of $K$ has ``maximal real multiplication'' if it contains $\mathcal{O}_{K_0}$.
\begin{theorem}\label{thm:classificationOrdersMaxRM}
The map $\mathfrak{f}_0 \mapsto \mathcal{O}_{K_0} + \mathfrak{f}_0\mathcal{O}_K$ is a bijection between the set of ideals in $\mathcal{O}_{K_0}$ and the set of orders in $K$ containing $\mathcal{O}_{K_0}$. More precisely,
\begin{enumerate}[label=(\roman*)]
\item \label{thmClassificationMaxRM1} for any ideal $\mathfrak{f}_0$ in $\mathcal{O}_{K_0}$, the conductor of $\mathcal{O}_{K_0} + \mathfrak{f}_0\mathcal{O}_K$ is $\mathfrak{f}_0\mathcal{O}_K$, and
\item \label{thmClassificationMaxRM2} for any order $\mathcal{O}$ in $K$ with maximal real multiplication and conductor $\mathfrak{f}$, one has $\mathcal{O} = \mathcal{O}_{K_0} + (\mathfrak{f} \cap \mathcal{O}_{K_0}) \mathcal{O}_K$.
\end{enumerate}
\end{theorem}
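In the simplest case the theorem recovers a classical fact: take $K_0 = \mathbb{Q}$ (so that $\mathcal{O}_{K_0} = \mathbb{Z}$ and every order trivially has maximal real multiplication) and $K$ an imaginary quadratic field. The theorem then says that the orders in $K$ are exactly the rings $\mathbb{Z} + f\mathcal{O}_K$ for integers $f \geq 1$, and that the conductor of $\mathbb{Z} + f\mathcal{O}_K$ is $f\mathcal{O}_K$. The content of the theorem is that this picture persists for an arbitrary quadratic extension $K/K_0$, with $\mathcal{O}_{K_0}$-coefficients in place of $\mathbb{Z}$-coefficients and with no ramification hypothesis on $K/K_0$.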
\begin{lemma}\label{lemma_rosatiStable}
An order $\mathcal{O}$ in $K$ is stable under $\dagger$ if and only if $\mathcal{O} \cap K_0 = (\mathcal{O} + \mathcal{O}^\dagger) \cap K_0$.
\end{lemma}
\begin{proof}
The direct implication is obvious. For the other direction, suppose $\mathcal{O} \cap K_0 = (\mathcal{O} + \mathcal{O}^\dagger) \cap K_0$ and let $x \in \mathcal{O}$. Then, $x + x^\dagger \in (\mathcal{O} + \mathcal{O}^\dagger) \cap K_0 = \mathcal{O} \cap K_0 \subset \mathcal{O}$, which proves that $x^\dagger \in \mathcal{O}$.
\end{proof}
\begin{lemma}\label{bijfrak}
Let $\mathfrak{f}$ and $\mathfrak g$ be two ideals in $\mathcal{O}_{K}$, such that $\mathfrak g$ divides $\mathfrak{f}$. Let $\pi : \mathcal{O}_K \rightarrow \mathcal{O}_K/\mathfrak{f}$ be the natural projection. The canonical isomorphism between $(\mathcal{O}_{K_0} + \mathfrak{f}) / \mathfrak{f}$ and $\mathcal{O}_{K_0}/(\mathcal{O}_{K_0} \cap \mathfrak{f})$ induces a bijection between $\pi(\mathcal{O}_{K_0}) \cap \pi(\mathfrak g)$ and $(\mathcal{O}_{K_0} \cap \mathfrak g)/(\mathcal{O}_{K_0} \cap \mathfrak{f})$.
\end{lemma}
\begin{proof}
Any element in $ \pi(\mathcal{O}_{K_0}) \cap \pi(\mathfrak g)$ can be written as $\pi(x) = \pi(y)$ for some $x \in \mathcal{O}_{K_0}$ and $y \in \mathfrak g$. Then, $x-y \in \mathfrak{f} \subset \mathfrak g$, so $x = (x-y) + y \in \mathfrak g$. So
$$\pi(\mathfrak g) \cap \pi(\mathcal{O}_{K_0}) = \pi(\mathfrak g \cap \mathcal{O}_{K_0}) \cong (\mathfrak g \cap \mathcal{O}_{K_0})/(\mathfrak{f} \cap \mathcal{O}_{K_0}),$$
where the last relation comes from the canonical isomorphism between the rings $(\mathcal{O}_{K_0} + \mathfrak{f}) / \mathfrak{f}$ and $\mathcal{O}_{K_0}/(\mathcal{O}_{K_0} \cap \mathfrak{f})$.
\end{proof}
\begin{lemma}\label{maxRM_rosatiStable}
Let $\mathcal{O}$ be an order in $K$ of conductor $\mathfrak{f}$ with maximal real multiplication. Then, $\mathcal{O}$ is stable under $\dagger$ and $\mathfrak{f}$ comes from an ideal of $\mathcal{O}_{K_0}$, i.e., $\mathfrak{f} = \mathfrak{f}_0\mathcal{O}_{K}$, where $\mathfrak{f}_0$ is the $\mathcal{O}_{K_0}$-ideal $\mathfrak{f} \cap \mathcal{O}_{K_0}$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lemma_rosatiStable}, it is clear that any order with maximal real multiplication is stable under $\dagger$. Its conductor $\mathfrak{f}$ is therefore a $\dagger$-stable ideal of $\mathcal{O}_{K}$. For any prime ideal $\mathfrak p_0$ in $\mathcal{O}_{K_0}$, let $\mathfrak{f}_{\mathfrak p_0}$ be the part of the factorization of $\mathfrak{f}$ that consists of prime ideals above $\mathfrak p_0$. Then, $\mathfrak{f} = \prod_{\mathfrak p_0} \mathfrak{f}_{\mathfrak p_0}$, and each $\mathfrak{f}_{\mathfrak p_0}$ is $\dagger$-stable.
It is easy to see that each $\mathfrak{f}_{\mathfrak p_0}$ comes from an ideal of $\mathcal{O}_{K_0}$ when $\mathfrak p_0$ is inert or splits in $\mathcal{O}_K$.
Now suppose it ramifies as $\mathfrak p_0\mathcal{O}_K = \mathfrak p^2$. Then $\mathfrak{f}_{\mathfrak p_0}$ is of the form $\mathfrak p^\alpha$. If $\alpha$ is even, $\mathfrak{f}_{\mathfrak p_0} = \mathfrak p_0^{\alpha/2}\mathcal{O}_K$. We now need to prove that $\alpha$ cannot be odd.
Suppose for contradiction that $\alpha = 2\beta + 1$ for some integer $\beta$. Let $\pi : \mathcal{O}_K \rightarrow \mathcal{O}_K / \mathfrak{f}$ be the canonical projection. The ring $\pi(\mathcal{O})$ contains $\pi(\mathcal{O}_{K_0}) = (\mathcal{O}_{K_0} + \mathfrak{f})/\mathfrak{f}$. Write $\mathfrak{f} = \mathfrak p^\alpha \mathfrak g$ with $\mathfrak g$ coprime to $\mathfrak p$, and set $\mathfrak g_0 = \mathfrak g \cap \mathcal{O}_{K_0}$, so that $\mathcal{O}_{K_0} \cap \mathfrak p^{\alpha-1}\mathfrak g = \mathfrak p_0^{\beta}\mathfrak g_0$ and $\mathcal{O}_{K_0} \cap \mathfrak{f} = \mathfrak p_0^{\beta+1}\mathfrak g_0$. Let us prove that $\pi(\mathfrak p^{\alpha - 1}\mathfrak g) \subset \pi(\mathcal{O}_{K_0})$.
From Lemma~\ref{bijfrak},
$$\left| \pi(\mathcal{O}_{K_0}) \cap \pi(\mathfrak p^{\alpha - 1}\mathfrak g) \right| = |\mathfrak p_0^{\beta}\mathfrak g_0/\mathfrak p_0^{\beta+1}\mathfrak g_0| = N(\mathfrak p_0) = N(\mathfrak p) = |\pi(\mathfrak p^{\alpha - 1}\mathfrak g)|,$$
where $N$ denotes the absolute norm, so $\pi(\mathfrak p^{\alpha - 1}\mathfrak g) \subset \pi(\mathcal{O}_{K_0}) \subset \pi(\mathcal{O})$. Finally,
$$\mathfrak p^{\alpha - 1}\mathfrak g = \pi^{-1}(\pi(\mathfrak p^{\alpha - 1}\mathfrak g)) \subset \pi^{-1}(\pi(\mathcal{O})) = \mathcal{O},$$
which contradicts the fact that $\mathfrak{f}$ is the largest ideal of $\mathcal{O}_K$ contained in $\mathcal{O}$.
\end{proof}
\begin{lemma}\label{lemmaModuleQuotient}
Let $\mathfrak{f}_0$ be an ideal in $\mathcal{O}_{K_0}$, and $R = \mathcal{O}_{K_0}/\mathfrak{f}_0$. There is an element $\alpha \in \mathcal{O}_K$ such that $\mathcal{O}_K/\mathfrak{f}_0\mathcal{O}_{K} = R \oplus R \alpha$.
\end{lemma}
\begin{proof}
The order $\mathcal{O}_K$ is a module over $\mathcal{O}_{K_0}$. It is locally free, and finitely generated, thus it is projective. Since $\mathcal{O}_{K_0}$ is a regular ring, the submodule $\mathcal{O}_{K_0}$ in $\mathcal{O}_{K}$ is a direct summand, i.e., there is an $\mathcal{O}_{K_0}$-submodule $M$ of $\mathcal{O}_K$ such that $\mathcal{O}_{K}~=~\mathcal{O}_{K_0}~\oplus~M$. Then,
$\mathcal{O}_{K}/\mathfrak{f}_0\mathcal{O}_{K} = R \oplus M/\mathfrak{f}_0M.$
Let $A$ be $\mathbb{Z}$ if $K$ is a number field and $\mathbb{Z}_p$ if it is a finite product of extensions of $\mathbb{Q}_p$. In the former case write $n$ for $[K_0:\mathbb{Q}]$, and in the latter for the dimension of $K_0$ as a $\mathbb{Q}_p$-vector space. As modules over $A$, $\mathcal{O}_K$ is of rank $2n$ and $\mathcal{O}_{K_0}$ of rank $n$, hence $M$ must be of rank $n$. Therefore, as an $\mathcal{O}_{K_0}$-module, $M$ is isomorphic to an ideal $\mathfrak a$ in $\mathcal{O}_{K_0}$, so $M/\mathfrak{f}_0M \cong \mathfrak a/\mathfrak{f}_0\mathfrak a \cong R$. So there is an element $\alpha \in M$ such that $M/\mathfrak{f}_0M = R\alpha$.
\end{proof}
\subsection*{Proof of Theorem~\ref{thm:classificationOrdersMaxRM}}
For~\ref{thmClassificationMaxRM1}, let $\mathfrak{f}_0$ be an ideal in $\mathcal{O}_{K_0}$, and write $\mathfrak{f} = \mathfrak{f}_0\mathcal{O}_K$. Let $\mathfrak c$ be the conductor of $\mathcal{O}_{K_0} + \mathfrak{f}$. From Lemma~\ref{maxRM_rosatiStable}, $\mathfrak c$ is of the form $\mathfrak c_0 \mathcal{O}_K$ where $\mathfrak c_0 = \mathcal{O}_{K_0} \cap \mathfrak c$. Clearly $\mathfrak{f} \subset \mathfrak c$, so $\mathfrak c_0 \mid \mathfrak{f}_0$ and we can write $\mathfrak{f}_0 = \mathfrak c_0 \mathfrak g_0$. Let $\pi : \mathcal{O}_K \rightarrow~\mathcal{O}_K/\mathfrak{f}$ be the canonical projection.
Since $\mathfrak c \subset \mathcal{O}_{K_0} + \mathfrak{f}$, we have $\pi(\mathfrak c) \subset \pi(\mathcal{O}_{K_0})$. From Lemma~\ref{bijfrak},
$$\left|\pi(\mathfrak c) \right| = \left| \pi(\mathcal{O}_{K_0}) \cap \pi(\mathfrak c) \right| = |\mathfrak c_0/\mathfrak{f}_0| = N(\mathfrak g_0).$$
On the other hand,
$\left| \pi(\mathfrak c) \right| = |\mathfrak c/\mathfrak{f}| = N(\mathfrak g_0\mathcal{O}_K) = N(\mathfrak g_0)^2,$
so $N(\mathfrak g_0)=1$, hence $\mathfrak c = \mathfrak{f}$.
To prove~\ref{thmClassificationMaxRM2}, let $\mathcal{O}$ be an order in $K$ with maximal real multiplication and conductor $\mathfrak{f}$. From Lemma~\ref{maxRM_rosatiStable}, $\mathcal{O}$ is $\dagger$-stable and $\mathfrak{f} = \mathfrak{f}_0\mathcal{O}_K$, where $\mathfrak{f}_0 = \mathfrak{f}~\cap~\mathcal{O}_{K_0}$.
We claim that if $x \in \mathcal{O}$ then $x \in \mathcal{O}_{K_0} + \mathfrak{f}$. Let $R = \mathcal{O}_{K_0}/\mathfrak{f}_0$. By Lemma~\ref{lemmaModuleQuotient}, $\mathcal{O}_K/\mathfrak{f} = R \oplus R \alpha$. The quotient $\mathcal{O}/\mathfrak{f}$ is an $R$-submodule of $\mathcal{O}_K/\mathfrak{f}$.
There are two elements $y,z \in R$ such that $x + \mathfrak{f} = y + z\alpha$. Then, $z \alpha \in \mathcal{O}/\mathfrak{f}$, and
we obtain that $(zR)\alpha \subset \mathcal{O}/\mathfrak{f}$. There exists an ideal $\mathfrak g_0$ dividing $\mathfrak{f}_0$ such that $zR = \mathfrak g_0 / \mathfrak{f}_0$. Therefore $(\mathfrak g_0 / \mathfrak{f}_0)\alpha \subset \mathcal{O}/\mathfrak{f}$. Then,
$$ \mathfrak g / \mathfrak{f} \subset R + (\mathfrak g_0/ \mathfrak{f}_0) \alpha \subset \mathcal{O}/\mathfrak{f},$$
where $\mathfrak g = \mathfrak g_0 \mathcal{O}_K$, which implies that $\mathfrak g \subset \mathcal{O}$. But $\mathfrak g$ divides $\mathfrak{f}$, and $\mathfrak{f}$ is the largest $\mathcal{O}_K$-ideal in $\mathcal{O}$, so $\mathfrak g = \mathfrak{f}$. Hence $\mathfrak g_0 = \mathfrak{f}_0$, so $z = 0$ in $R$, and $x \in \mathcal{O}_{K_0} + \mathfrak{f}$.\qed
\section{From abelian surfaces to lattices, and vice-versa}\label{sec:correspondence}
\subsection{Tate modules and isogenies}
Consider again the setting introduced in Subsection~\ref{subsec:setting}, with $\mathscr A$ an abelian variety over the finite field $k$ in the fixed isogeny class --- ordinary, absolutely simple, and of dimension $g$.
Write $T = T_\ell \mathscr A$ for the $\ell$-adic Tate module of $\mathscr A$, and $V$ for $T \otimes_{\mathbb{Z}_\ell} \mathbb{Q}_\ell$. Then $V$ is a $2g$-dimensional $\mathbb{Q}_\ell$-vector space with an action of the algebra $K_\ell$, over which it has rank one, and $T$ is similarly of rank one over the ring $\mathfrak{o}(\mathscr A) = \mathcal{O}(\mathscr A) \otimes_{\mathbb{Z}} \mathbb{Z}_\ell$. Write $\pi$ for the Frobenius endomorphism of $\mathscr A$, viewed as an element of $\mathcal{O}(\mathscr A)$.
The elements of $T$ are the sequences $(Q_n)_{n \geq 0}$ with $Q_n \in \mathscr A[\ell^n]$ and $\ell Q_n = Q_{n-1}$ for all $n \geq 1$.
An element of $V$ identifies with a sequence $(P_n)_{n \geq 0}$ with $P_n \in \mathscr A[\ell^\infty]$ and $\ell P_n = P_{n-1}$ for $n \geq 1$ as follows:
$$
(Q_n)_{n \geq 0} \otimes \ell^{-m} \longmapsto (Q_{n+m})_{n \geq 0},
$$
and under this identification, $T$ is the subgroup of $V$ where $P_0 = 0 \in \mathscr A[\ell^\infty]$. The projection to the zeroth coordinate then yields a canonical identification
\begin{equation}\label{eq:identification}
V/T \stackrel{\sim}{\longrightarrow} \mathscr A[\ell^\infty](\overline{k}),
\end{equation}
under which the action of $\pi$ on the left-hand side corresponds to the action of the arithmetic Frobenius element in $\mathrm{Gal}(\overline{k}/k)$ on the right-hand side.
We are now ready to state the main correspondence between lattices in $V$ containing the Tate module $T$ and $\ell$-power isogenies from $\mathscr A$.
\begin{proposition}\label{prop:correspondence}
There is a one-to-one correspondence
$$
\left\{\text{Lattices in } V \text{ containing } T \right\} \cong
\left\{\text{finite subgroups of } \mathscr A[\ell^\infty]\right\},
$$
where a lattice $\Gamma$ is sent to the subgroup $\Gamma/T$, through the identification \eqref{eq:identification}. Under this correspondence,
\begin{enumerate}[label=(\roman*)]
\item A lattice is stable under $\pi^n$ if and only if the corresponding subgroup is defined over the degree $n$ extension $\mathbb{F}_{q^n}$ of $k$.
\item If a subgroup $\kappa \subset \mathscr A[\ell^\infty]$ corresponds to a lattice $\Gamma$, then the order $\{x \in K_\ell \mid x\Gamma \subset \Gamma\}$ is $\mathfrak{o}(\mathscr A/\kappa)$.
\end{enumerate}
\end{proposition}
\begin{proof}
A lattice $\Gamma$ in $V$ is sent to the subgroup $\kappa$ of $\mathscr A[\ell^\infty]$ corresponding to $\Gamma$ under~\eqref{eq:identification}. Conversely, given a subgroup $\kappa \subset \mathscr A[\ell^\infty]$, let $\Gamma$ be the set of sequences in $V$ whose zeroth coordinate is in $\kappa$. It follows that the subgroup of $\mathscr A[\ell^\infty]$ corresponding to this lattice under \eqref{eq:identification} is $\kappa$, so that this process is indeed bijective.
The claim about fields of definition follows from the previously-discussed Frobenius equivariance of \eqref{eq:identification}. The claim about endomorphism rings is Tate's isogeny theorem applied to $\mathrm{Hom}(\mathscr A/\kappa, \mathscr A/\kappa)$.
\end{proof}
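Claim (i) can be made concrete in a toy case (a sketch for illustration only, with $g = 1$ and subgroups of order $\ell$, so that lattices $T \subset \Gamma \subset \ell^{-1}T$ correspond to lines in $\mathscr A[\ell] \cong (\mathbb{Z}/\ell)^2$): a kernel is defined over $k$ exactly when the corresponding line is stable under the matrix of Frobenius. The matrices below are hypothetical stand-ins for $\pi$, chosen to realize the possible eigenvalue patterns:

```python
from itertools import product

def stable_lines(pi, l):
    """All F_l-lines of (Z/l)^2 stable under the 2x2 matrix pi (mod l).
    In the toy case g = 1, these model the order-l kernels whose lattice
    T <= Gamma <= (1/l)T is Frobenius-stable, i.e., the kernels defined
    over the base field k."""
    pts = [p for p in product(range(l), repeat=2) if any(p)]
    lines = {frozenset(((t * x) % l, (t * y) % l) for t in range(1, l))
             for x, y in pts}
    def act(p):
        x, y = p
        return ((pi[0][0] * x + pi[0][1] * y) % l,
                (pi[1][0] * x + pi[1][1] * y) % l)
    return [L for L in lines if {act(p) for p in L} == L]

l = 5
assert len(stable_lines([[1, 2], [0, 3]], l)) == 2  # two distinct eigenvalues in F_5
assert len(stable_lines([[1, 1], [0, 1]], l)) == 1  # a single repeated eigenvalue
assert len(stable_lines([[0, 3], [1, 1]], l)) == 0  # irreducible characteristic polynomial
```

The stable lines are exactly the eigenlines of $\pi$ modulo $\ell$, matching the factorization pattern of the characteristic polynomial over $\mathbb{F}_\ell$.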
\begin{remark}\label{rem:kernelVsIso}
Observe that given a subgroup $\kappa \subset \mathscr A[\ell^\infty]$, any two isogenies of kernel $\kappa$ differ only by an isomorphism between the targets.
Therefore if $\varphi : \mathscr A \rightarrow \mathscr B$ is any isogeny of kernel $\kappa$, then $\mathfrak{o}(\mathscr A/\kappa) = \mathfrak{o}(\mathscr B)$.
\end{remark}
\begin{remark}\label{rem:fieldOfDefIsogenies}
Recall that all varieties and morphisms are considered over $\overline k$. We are however also interested in the structures arising when restricting to varieties and morphisms defined over $k$, in the sense of Subsection~\ref{subsec:setting}.
To this end, the most important fact (which is special to the case of simple, ordinary abelian varieties) is that if a variety $\mathscr B$ is $k$-isogenous to $\mathscr A$, then any isogeny $\mathscr A \rightarrow \mathscr B$ is defined over $k$
(this is an easy consequence of~\cite[Thm.~7.2]{Waterhouse1969}: if $\mathscr B$ is defined over $k$, and $\varphi, \psi : \mathscr A \rightarrow \mathscr B$ are two isogenies, then $\varphi \circ \psi^{-1}$ is an element of $\End(\mathscr B)\otimes_\mathbb{Z}\mathbb{Q}$, hence defined over $k$, so $\varphi$ is defined over $k$ if and only if $\psi$ is).
Similarly to Remark~\ref{rem:kernelVsIso}, if $\kappa$ is defined over $k$, any two $k$-isogenies of kernel $\kappa$ differ by a $k$-isomorphism between the targets. From Proposition~\ref{prop:correspondence}(ii), if $\pi \in \mathfrak{o}(\mathscr A/\kappa)$, then $\kappa$ is defined over $k$, and is thereby the kernel of a $k$-isogeny\footnote{Note that in general, if $\mathscr B$ is $\overline k$-isogenous to $\mathscr A$ and $\pi \in \mathcal{O}(\mathscr B)$, then $\pi$ does not necessarily correspond to the $k$-Frobenius of $\mathscr B$ unless $\mathscr B$ is actually $k$-isogenous to $\mathscr A$.}. We obtain a correspondence between subgroups $\kappa$ defined over $k$ and lattices $\Gamma$ stabilized by $\pi$.
\end{remark}
The following proposition justifies the strategy of working locally at $\ell$, as it guarantees that $\ell$-power isogenies do not affect endomorphism rings at primes $\ell' \neq \ell$.
\begin{proposition}\label{prop:EllDoesNotChangeP}
Let $\varphi: \mathscr A \to \mathscr B$ be an isogeny of abelian varieties of $\ell$-power degree. Then for any prime $\ell' \neq \ell$, one has $\mathcal{O}(\mathscr A) \otimes_\mathbb{Z} \mathbb{Z}_{\ell'} = \mathcal{O}(\mathscr B) \otimes_\mathbb{Z} \mathbb{Z}_{\ell'}$.
\end{proposition}
\begin{proof}
Let $\mathcal{C}_{\ell'}$ be the category whose objects are abelian varieties over $\overline k$ and whose morphisms are $\mathrm{Hom}_{\mathcal{C}_{\ell'}}(\mathscr A_1, \mathscr A_2) = \mathrm{Hom}(\mathscr A_1, \mathscr A_2) \otimes_\mathbb{Z} \mathbb{Z}_{\ell'}$. There exists an isogeny $\hat\varphi: \mathscr B \to \mathscr A$ such that $\hat \varphi \circ \varphi = [\ell^n]$, where $\ell^n = \deg\varphi$; since $\ell^n$ is invertible in $\mathbb{Z}_{\ell'}$, $\varphi$ induces an isomorphism in $\mathcal{C}_{\ell'}$, and it follows that the endomorphism rings of $\mathscr A$ and $\mathscr B$ in this category are identified.
\end{proof}
\subsection{Polarizations and symplectic structures}
Fix a polarization $\xi$ of $\mathscr A$. It induces a polarization isogeny $\lambda:\mathscr A \to \mathscr A^\vee$, which in turn gives a map $T \to T_\ell(\mathscr A^\vee)$. Therefore the Weil pairing equips $T$ with a natural $\mathbb{Z}_\ell$-linear pairing $\langle-,-\rangle$, which extends to a pairing on $V$. We gather standard facts about this pairing in the following lemma.
\begin{lemma}
One has:
\begin{enumerate}[label=(\roman*)]
\item The pairing $\langle-,-\rangle$ is symplectic.
\item For any $\alpha \in K$, one has
$$
\langle \alpha x, y \rangle = \langle x, \alpha^\dagger y \rangle,
$$
where $\dagger$ denotes complex conjugation.
\item For $\Gamma$ a lattice in $V$, write
$$
\Gamma^* = \{ \alpha \in V \mid \langle \alpha, \Gamma \rangle \subset \mathbb{Z}_\ell \}
$$
for the dual lattice of $\Gamma$. Then $T \subset T^*$, and the quotient is isomorphic to $(\ker \lambda)[\ell^\infty]$. In particular, $T$ is self-dual if and only if the degree of $\lambda$ is coprime to $\ell$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first two claims are standard --- see \cite[Lemma 16.2e, and \textsection 167]{MilneAV}. For the third,
note that $T^*$ identifies with $\lambda_*^{-1}(T_\ell \mathscr A^\vee)$, and $\lambda_*$ induces an isomorphism
$$\lambda_*^{-1}(T_\ell \mathscr A^\vee)/{T} \stackrel{\sim}{\longrightarrow} \ker (\lambda)[\ell^\infty].$$
\end{proof}
\section{Graphs of $\mathfrak l$-isogenies}
In this section we study $\mathfrak l$-isogenies through the lens of lattices in an $\ell$-adic vector space, endowed with an action of the algebra $K_\ell$.
\subsection{Lattices with locally maximal real multiplication}
Throughout this subsection, $V$ is a $\mathbb{Q}_\ell$-vector space of dimension $2g$, $\ell$ is a prime number, $K$ is a quartic CM-field, with $K_{0}$ its maximal real subfield. The algebra $K_\ell$ is a $\mathbb{Q}_\ell$-algebra of dimension~$2g$. Suppose that it acts ($\mathbb{Q}_\ell$-linearly) on~$V$.
Define the \emph{order} of a full-rank $\mathbb{Z}_\ell$-lattice $\Lambda \subset V$ as
$$\mathfrak{o}(\Lambda) = \{x \in K_\ell \mid x\Lambda \subset \Lambda \}.$$
For any order $\mathfrak{o}$ in $K_\ell$, say that $\Lambda$ is an $\mathfrak{o}$-lattice if $\mathfrak{o}(\Lambda) = \mathfrak{o}$.
Let $\mathfrak{o} = \mathfrak{o}(\Lambda)$ be the order of $\Lambda$, and suppose that it has maximal real multiplication, i.e., that $\mathfrak{o}$ contains the maximal order $\mathfrak{o}_0$ of $K_{0,\ell} = K_{0} \otimes_\mathbb{Q} \mathbb{Q}_\ell$.
We now need some commutative algebra:
\begin{lemma}\label{lemma:quadraticImpliesGorenstein}
Let $A$ be a Dedekind domain with field of fractions $F$, and let $L$ be a quadratic extension of $F$. If $\mathcal{O}$ is any $A$-subalgebra of the integral closure of $A$ in $L$ with $\mathcal{O} \otimes_A F = L$, then $\mathcal{O}$ is Gorenstein.
\end{lemma}
\begin{proof}
The hypotheses and the result are local on $\text{Spec}\, A$, so we may assume that $A$ is a principal ideal domain. Then $\mathcal{O}$ is a free $A$-module, necessarily of rank $2$. Since $\mathcal{O} \cap F = A$ (as $A$ is integrally closed in $F$), the quotient $\mathcal{O}/A$ is torsion-free, so $A \cdot 1$ is a direct summand, and there is a basis $\{1, \alpha\}$ for $\mathcal{O}$ as an $A$-module; clearly $\mathcal{O} = A[\alpha]$ as $A$-algebras. The result then follows from \cite[Ex.2.8]{BuchmannLenstra}.
\end{proof}
By Lemma \ref{lemma:quadraticImpliesGorenstein}, the order $\mathfrak{o}$, which has maximal real multiplication, is a Gorenstein ring and $\Lambda$ is a free $\mathfrak{o}$-module of rank 1.
Recall the notations $\mathfrak{o}_K = \mathcal{O}_K\otimes_\mathbb{Z} \mathbb{Z}_\ell$ and $\mathfrak{o}_0 = \mathcal{O}_{K_0}\otimes_\mathbb{Z} \mathbb{Z}_\ell$ from Section~\ref{subsec:setting}.
For any ideal $\mathfrak f$ in $\mathfrak{o}_0$, let $\mathfrak{o}_\mathfrak f = \mathfrak{o}_0 + \mathfrak f\mathfrak{o}_{K}$. From Theorem~\ref{thm:classificationOrdersMaxRM}, all the orders containing $\mathfrak{o}_0$ are of this form.
\begin{definition}[$\mathfrak l$-neighbors]
Let $\Lambda$ be a lattice with maximal real multiplication, and let $\mathfrak l$ be a prime ideal in $\mathfrak{o}_{0}$. The set $\mathscr L_{\mathfrak l}(\Lambda)$ of \emph{$\mathfrak l$-neighbors} of $\Lambda$ consists
of all the lattices $\Gamma$ such that $\mathfrak l \Lambda \subset \Gamma \subset \Lambda$ and $\Gamma/\mathfrak l\Lambda \cong \mathfrak{o}_{0}/\mathfrak l$, i.e., $\Gamma/\mathfrak l\Lambda \in \mathbb P^1(\Lambda/\mathfrak l\Lambda)$.
\end{definition}
\begin{remark}\label{rem:correspFrakLAndFrakL}
Consider the lattice $T = T_\ell\mathscr A$.
Then, $\mathfrak l$-isogenies $\mathscr A \to \mathscr B$ (see Definition~\ref{def:frakLIso}) correspond under Proposition \ref{prop:correspondence} to lattices $\Gamma$ with $T \subset \Gamma \subset \mathfrak l^{-1} T$ such that $\Gamma/T$ is a one-dimensional $\mathfrak{o}_{0}/\mathfrak l$-subspace of $(\mathfrak l^{-1} T) / T$.
\end{remark}
The following lemma is key to understanding $\mathfrak l$-neighbors. It arises from the technique employed by Cornut and Vatsal~\cite[\textsection 6]{Cornut04} to study the action of a certain Hecke algebra on quadratic CM-lattices.
\begin{lemma}\label{lemma:fixedPointsMaxRM}
Let $K$ be a CM-field, and $K_0$ its maximal real subfield. Let $\mathfrak l$ be a prime ideal in $\mathfrak{o}_{0}$, and $\mathbb F = \mathfrak{o}_{0}/\mathfrak l$. Let $\mathfrak f$ be an ideal in $\mathfrak{o}_{0}$ and $\mathfrak{o}_\mathfrak f = \mathfrak{o}_{0} + \mathfrak f \mathfrak{o}_{K}$.
The action of $\mathfrak{o}_{\mathfrak f}^\times$ on the set of $\mathbb{F}$-lines $\mathbb P^1(\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f})$ factors through $\mathfrak{o}_{\mathfrak f}^\times/ \mathfrak{o}_{\mathfrak l\mathfrak f}^\times$. Let $\mathfrak L$ be a prime in $\mathfrak{o}_{\mathfrak f}$ above $\mathfrak l$. The fixed points are
\[
\mathbb P^1(\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f})^{\mathfrak{o}_{\mathfrak f}^\times} =
\left\{ \begin{array}{ll}
\emptyset & \mbox{if $\mathfrak l \nmid \mathfrak f$ and $\mathfrak l\mathfrak{o}_\mathfrak f = \mathfrak L$,}\\
\{\mathfrak L/\mathfrak l \mathfrak{o}_{\mathfrak f}, \mathfrak L^\dagger/\mathfrak l \mathfrak{o}_{\mathfrak f}\} & \mbox{if $\mathfrak l \nmid \mathfrak f$ and $\mathfrak l\mathfrak{o}_\mathfrak f = \mathfrak L \mathfrak L^\dagger$,}\\
\{(\mathfrak l \mathfrak{o}_{\mathfrak l^{-1}\mathfrak f})/\mathfrak l \mathfrak{o}_{\mathfrak f}\} & \mbox{if $\mathfrak l \mid \mathfrak f$.}\end{array} \right.
\]
The remaining points are permuted simply transitively by $\mathfrak{o}_{\mathfrak f}^\times/ \mathfrak{o}_{\mathfrak l\mathfrak f}^\times$.
\end{lemma}
\begin{proof}
The group $\mathfrak{o}_{\mathfrak l\mathfrak f}^\times$ acts trivially on $\mathbb P^1(\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f})$, which proves the first statement.
Observe that the projection $\mathfrak{o}_{\mathfrak f} \rightarrow \mathfrak{o}_{\mathfrak f}/\mathfrak{o}_{\mathfrak l\mathfrak f}$ induces a canonical isomorphism between $\mathfrak{o}_{\mathfrak f}^\times/ \mathfrak{o}_{\mathfrak l\mathfrak f}^\times$ and $(\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f})^\times/\mathbb{F}^{\times}$.
Suppose that $\mathfrak l$ divides $\mathfrak f$. Then, there exists an element $\epsilon \in \mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f}$ such that $\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f} = \mathbb{F}[\epsilon]$ and $\epsilon^2 = 0$. But the only $\mathbb{F}$-line in $\mathbb{F}[\epsilon]$ fixed by the action of $\mathbb{F}[\epsilon]^\times$ is $\epsilon \mathbb{F} = (\mathfrak l \mathfrak{o}_{\mathfrak l^{-1}\mathfrak f})/\mathfrak l \mathfrak{o}_{\mathfrak f}$, and this action is transitive on the $\ell$ other lines. Therefore the action of $\mathbb{F}[\epsilon]^\times/\mathbb{F}^\times = (\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f})^\times/\mathbb{F}^{\times}$ on these $\ell$ lines is simply transitive.
Now, suppose that $\mathfrak l$ does not divide $\mathfrak f$. If $\mathfrak l$ is inert in $\mathfrak{o}_\mathfrak f$, then $\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f} = \mathbb K$ is a quadratic field extension of $\mathbb{F}$, and $\mathbb K^\times/\mathbb{F}^\times$ acts simply transitively on the $\mathbb{F}$-lines $\mathbb P^1(\mathbb K)$. The statement follows from the isomorphism between $\mathbb K^\times/\mathbb{F}^\times$ and $\mathfrak{o}_{\mathfrak f}^\times/ \mathfrak{o}_{\mathfrak l\mathfrak f}^\times$.
The cases where $\mathfrak l$ splits or ramifies in $K$ are treated similarly, with $\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f} \cong \mathbb{F}^2$ in the first case, and $\mathfrak{o}_{\mathfrak f}/\mathfrak l \mathfrak{o}_{\mathfrak f} \cong \mathbb{F}[X]/(X^2)$ in the second case.
\end{proof}
\begin{proposition}[Structure of $\mathscr L_\mathfrak l(\Lambda)$]\label{prop:genericNeighbors}
Suppose $\Lambda$ is an $\mathfrak{o}_{\mathfrak f}$-lattice, for some $\mathfrak{o}_{0}$-ideal $\mathfrak f$, and let $\mathfrak l$ be a prime ideal in $\mathfrak{o}_{0}$. The lattice $\Lambda$ has $N(\mathfrak l) + 1$ $\mathfrak l$-neighbors. The $\mathfrak l$-neighbors that have order $\mathfrak{o}_{\mathfrak l\mathfrak f}$ are permuted simply transitively by $(\mathfrak{o}_{\mathfrak f} / \mathfrak{o}_{\mathfrak l\mathfrak f})^\times$. The other $\mathfrak l$-neighbors have order $\mathfrak{o}_{\mathfrak l^{-1}\mathfrak f}$ if $\mathfrak l $ divides $ \mathfrak f$, or $\mathfrak{o}_{K}$ otherwise.
More explicitly, if $\mathfrak l $ divides $ \mathfrak f$, there is one $\mathfrak l$-neighbor of order $\mathfrak{o}_{\mathfrak l^{-1}\mathfrak f}$, namely $\mathfrak l \mathfrak{o}_{\mathfrak l^{-1}\mathfrak f} \Lambda$, and $N(\mathfrak l)$ $\mathfrak l$-neighbors of order $\mathfrak{o}_{\mathfrak l\mathfrak f}$. If $\mathfrak l $ does not divide $ \mathfrak f$, we have:
\begin{enumerate}[label=(\roman*)]
\item If $\mathfrak l$ is inert in $K$, all $N(\mathfrak l) + 1$ lattices of $\mathscr L_\mathfrak l(\Lambda)$ have order $\mathfrak{o}_{\mathfrak l}$,
\item If $\mathfrak l$ splits in $K$ into prime ideals $\mathfrak L_1$ and $\mathfrak L_2$, $\mathscr L_\mathfrak l(\Lambda)$ consists of two lattices of order $\mathfrak{o}_{K}$, namely $\mathfrak L_1\Lambda$ and $\mathfrak L_2\Lambda$, and $N(\mathfrak l)-1$ lattices of order $\mathfrak{o}_{\mathfrak l}$,
\item If $\mathfrak l$ ramifies in $K$ as $\mathfrak L^2$, $\mathscr L_\mathfrak l(\Lambda)$ consists of one lattice of order $\mathfrak{o}_{K}$, namely $\mathfrak L\Lambda$, and $N(\mathfrak l)$ lattices of order $\mathfrak{o}_{\mathfrak l}$.
\end{enumerate}
\end{proposition}
\begin{proof}
This is a direct consequence of Lemma~\ref{lemma:fixedPointsMaxRM}, together with the fact that $\Lambda$ is a free $\mathfrak{o}_{\mathfrak f}$-module of rank 1.
\end{proof}
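As a sanity check (not used in the sequel), consider the special case $g = 1$, so that $K$ is imaginary quadratic and $\mathfrak l$ is generated by $\ell$. Proposition~\ref{prop:genericNeighbors} then recovers the classical local structure of $\ell$-isogeny graphs of elliptic curves~\cite{fouquet-morain}: a lattice whose order has conductor divisible by $\ell$ has one $\mathfrak l$-neighbor for the next larger order and $\ell$ neighbors for the next smaller one, while a lattice for an order of conductor prime to $\ell$ has $0$, $2$ or $1$ neighbors for its own order according as $\ell$ is inert, split or ramified in $K$, all remaining neighbors having the next smaller order.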
\subsection{Graphs of $\mathfrak l$-isogenies}\label{subsec:frakLIsogenyGraphs}
Fix again a principally polarizable absolutely simple ordinary abelian variety $\mathscr A$ of dimension $g$ over $k$, with endomorphism algebra~$K$.
Suppose that $\mathscr A$ has locally maximal real multiplication at $\ell$ (i.e., $\mathfrak{o}_0 \subset \mathfrak{o}(\mathscr A)$). The $\mathfrak l$-neighbors correspond in the world of varieties to $\mathfrak l$-isogenies (see Remark~\ref{rem:correspFrakLAndFrakL}).
\begin{definition}
Suppose $\mathscr A$ has local order $\mathfrak{o}_{\mathfrak f}$, for some $\mathfrak{o}_{0}$-ideal $\mathfrak f$ and let $\mathfrak l$ be a prime ideal in $\mathfrak{o}_{0}$. An $\mathfrak l$-isogeny $\varphi : \mathscr A \rightarrow \mathscr B$ is \emph{$\mathfrak l$-ascending} if $\mathfrak{o}(\mathscr B) = \mathfrak{o}_{\mathfrak l^{-1}\mathfrak f}$, it is \emph{$\mathfrak l$-descending} if $\mathfrak{o}(\mathscr B) = \mathfrak{o}_{\mathfrak l\mathfrak f}$, and it is \emph{$\mathfrak l$-horizontal} if $\mathfrak{o}(\mathscr B) = \mathfrak{o}_{\mathfrak f}$.
\end{definition}
\begin{proposition}\label{prop:frakLStructure}
Suppose $\mathscr A$ has local order $\mathfrak{o}_{\mathfrak f}$ for some $\mathfrak{o}_{0}$-ideal $\mathfrak f$ and let $\mathfrak l$ be a prime ideal in $\mathfrak{o}_{0}$. There are $N(\mathfrak l) + 1$ kernels of $\mathfrak l$-isogenies from $\mathscr A$. The kernels of $\mathfrak l$-descending $\mathfrak l$-isogenies are permuted simply transitively by the action of $(\mathfrak{o}_{\mathfrak f} / \mathfrak{o}_{\mathfrak l\mathfrak f})^\times$. The other $\mathfrak l$-isogenies are $\mathfrak l$-ascending if $\mathfrak l $ divides $ \mathfrak f$, and $\mathfrak l$-horizontal otherwise.
More explicitly, if $\mathfrak l$ divides $\mathfrak f$, there is a unique $\mathfrak l$-ascending $\mathfrak l$-kernel from $\mathscr A$,
and $N(\mathfrak l)$ $\mathfrak l$-descending $\mathfrak l$-kernels. If $\mathfrak l$ does not divide $ \mathfrak f$, we have:
\begin{enumerate}[label=(\roman*)]
\item If $\mathfrak l$ is inert in $K$, all $N(\mathfrak l) + 1$ $\mathfrak l$-kernels are $\mathfrak l$-descending;
\item If $\mathfrak l$ splits in $K$ into two prime ideals $\mathfrak L_1$ and $\mathfrak L_2$, there are two $\mathfrak l$-horizontal $\mathfrak l$-kernels, namely $\mathscr A[\mathfrak L_1]$ and $\mathscr A[\mathfrak L_2]$, and $N(\mathfrak l)-1$ $\mathfrak l$-descending ones;
\item If $\mathfrak l$ ramifies in $K$ as $\mathfrak L^2$, there is one $\mathfrak l$-horizontal $\mathfrak l$-kernel, namely $\mathscr A[\mathfrak L]$, and $N(\mathfrak l)$ $\mathfrak l$-descending ones.
\end{enumerate}
\end{proposition}
\begin{proof}
This proposition follows from Proposition~\ref{prop:genericNeighbors} together with Remark~\ref{rem:correspFrakLAndFrakL}.
\end{proof}
\begin{definition}[$\mathfrak l$-predecessor]
When it exists, let $\kappa$ be the unique $\mathfrak l$-ascending kernel of Proposition~\ref{prop:frakLStructure}. We call
$\mathrm{pr}_\mathfrak l(\mathscr A) = \mathscr A / \kappa$ the \emph{$\mathfrak l$-predecessor} of $\mathscr A$, and denote by
$\mathrm{up}_{\mathscr A}^{\mathfrak l} : \mathscr A \rightarrow \mathrm{pr}_\mathfrak l(\mathscr A)$ the canonical projection.
\end{definition}
The following notion of volcano was introduced in~\cite{fouquet-morain} to describe the structure of graphs of $\ell$-isogenies between elliptic curves.
\begin{definition}[volcano]
Let $n$ be a positive integer. An (infinite) \emph{$n$-volcano} $\mathscr V$ is an $(n+1)$-regular, connected, undirected graph whose vertices are partitioned into \emph{levels} $\{\mathscr V_i\}_{i \in \mathbb{Z}_{\geq 0}}$ such that:
\begin{enumerate}[label=(\roman*)]
\item The subgraph $\mathscr V_0$, the \emph{surface}, is a finite regular graph of degree at most 2,
\item For each $i > 0$, each vertex in $\mathscr V_i$ has exactly one neighbor in $\mathscr V_{i-1}$, and these are exactly the edges of the graph that are not on the surface.
\end{enumerate}
For any positive integer $h$, the corresponding (finite) volcano of height $h$ is the restriction of $\mathscr V$ to its first $h$ levels.
\end{definition}
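This definition determines the size of each level: if the surface of an $n$-volcano is regular of degree $d \in \{0, 1, 2\}$, then every surface vertex has exactly $n + 1 - d$ neighbors in $\mathscr V_1$, and every vertex at level $i > 0$ has one neighbor in $\mathscr V_{i-1}$ and $n$ neighbors in $\mathscr V_{i+1}$, so that
\[
|\mathscr V_1| = (n + 1 - d)\,|\mathscr V_0|, \qquad |\mathscr V_{i+1}| = n\,|\mathscr V_i| \quad \text{for } i \geq 1.
\]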
Let $\mathfrak l$ be a prime of $K_0$ above $\ell$.
Consider the $\mathfrak l$-isogeny graph $\mathscr W_\mathfrak l$ as defined in Section~\ref{subsubsec:theorem1}.
Note that it is a directed multigraph; we say that such a graph is \emph{undirected} if for any vertices $u$ and $v$, the multiplicity of the edge from $u$ to $v$ is the same as the multiplicity from $v$ to $u$.
The remainder of this section is a proof of Theorem~\ref{thm:lisogenyvolcanoes}, which provides a complete description of the structure of the leveled $\mathfrak l$-isogeny graph $(\mathscr W_\mathfrak l, v_\mathfrak l)$, closely related to volcanoes.
\begin{lemma}\label{lemma:distinctFrakLKernels}
Suppose that $\mathcal{O}(\mathscr B)\subset \mathcal{O}(\mathscr A)$. If there exists an $\mathfrak l$-isogeny $\varphi: \mathscr A \to \mathscr B$, then there are at least $[\mathcal{O}(\mathscr A)^\times : {\mathcal{O}}(\mathscr B)^\times]$ pairwise distinct kernels of $\mathfrak l$-isogenies from $\mathscr A$ to $\mathscr B$.
\end{lemma}
\begin{proof}
The elements $\alpha \in \mathcal{O}(\mathscr A)$ act on the subgroups of $\mathscr A$ via the isomorphism $\mathcal{O}(\mathscr A) \cong \End(\mathscr A)$, and we denote this action by $\kappa \mapsto \kappa^\alpha$. Let $\kappa = \ker \varphi$. If $u \in \mathcal{O}(\mathscr A)^\times$ is a unit, then $\kappa^u$ is also the kernel of an $\mathfrak l$-isogeny. Furthermore, $u$ canonically induces an isomorphism $\mathscr A/\kappa \rightarrow \mathscr A/\kappa^u$, so $\kappa^u$ is the kernel of an $\mathfrak l$-isogeny with target $\mathscr B$.
It only remains to prove that the orbit of $\kappa$ for the action of $\mathcal{O}(\mathscr A)^\times$ contains at least $[\mathcal{O}(\mathscr A)^\times : {\mathcal{O}}(\mathscr B)^\times]$ distinct kernels. It suffices to show that if $\kappa^u = \kappa$, then $u \in \mathcal{O}(\mathscr B)^\times$. Let $u \in \mathcal{O}(\mathscr A)^\times$ such that $\kappa^u = \kappa$.
Recall that for any variety $\mathscr C$ in our isogeny class, we have fixed an isomorphism $\imath_\mathscr C : \End(\mathscr C) \rightarrow \mathcal{O}(\mathscr C)$, and that these isomorphisms are all compatible in the sense that for any isogeny $\psi : \mathscr C \rightarrow \mathscr D$, and $\gamma \in \End(\mathscr C)$, we have $\imath_\mathscr C(\gamma) = \imath_\mathscr D(\psi \circ \gamma \circ \hat\psi)/\deg \psi$.
Let $u_\mathscr A \in \End(\mathscr A)$ be the endomorphism of $\mathscr A$ corresponding to~$u$. It induces an isomorphism $\tilde u_\mathscr A : \mathscr A/\kappa \rightarrow \mathscr A/\kappa^u$, which is actually an automorphism of $\mathscr A/\kappa$ since $\kappa^u = \kappa$. Let $\varphi : \mathscr A \rightarrow \mathscr A/\kappa$ be the natural projection. We obtain the following commutative diagram:
\begin{equation*}
\xymatrix{
\mathscr A/\kappa \ar[r]^{\tilde u_\mathscr A} & \mathscr A/\kappa \ar[rd]^{\hat \varphi} & \\
\mathscr A \ar[u]^{\varphi} \ar[r]^{u_\mathscr A} & \mathscr A \ar[u]^{\varphi} \ar[r]_{[\deg \varphi]} & \mathscr A.}
\label{eq:ppav}
\end{equation*}
Finally, we obtain
\[u = \imath_\mathscr A([\deg \varphi] \circ u_\mathscr A)/\deg \varphi = \imath_\mathscr A( \hat\varphi \circ \tilde u_\mathscr A \circ \varphi)/\deg \varphi = \imath_\mathscr B(\tilde u_\mathscr A)\in \mathcal{O}(\mathscr B).\]
\end{proof}
\begin{lemma}\label{lemma:classNumberRelation}
Let $K$ be a CM-field and $K_0$ its maximal real subfield.
Let $\mathcal{O}$ be an order in $K$ of conductor $\mathfrak f$ such that $\mathfrak{o}_{0} \subset \mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_{\ell}$. Let $\mathcal{O}'$ be the order such that $\mathcal{O}' \otimes_\mathbb{Z} \mathbb{Z}_{\ell'} = \mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_{\ell'}$ for all primes $\ell' \neq \ell$, and $\mathcal{O}' \otimes_\mathbb{Z} \mathbb{Z}_{\ell} = \mathfrak{o}_{0} + \mathfrak l\mathfrak f\mathfrak{o}_{K}$. Then, \[|\Pic(\mathcal{O}')| = \frac{\left[(\mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_{\ell})^\times : ({\mathcal{O}'} \otimes_\mathbb{Z} \mathbb{Z}_\ell)^\times\right]}{[\mathcal{O}^\times : {\mathcal{O}'}^\times]}|\Pic(\mathcal{O})|.\]
\end{lemma}
\begin{proof}
First, for any order $\mathcal{O}$ in $K$ of conductor $\mathfrak f$ we have the classical formula (see~\cite[Th.12.12 and Prop.12.11]{Neukirch99})
\begin{align*}
|\Pic(\mathcal{O})| &= \frac{h_K}{[\mathcal{O}_K^\times : \mathcal{O}^\times]}\frac{|(\mathcal{O}_K/\mathfrak f)^\times|}{|(\mathcal{O}/\mathfrak f)^\times|}\\
& = \frac{h_K}{[\mathcal{O}_K^\times : \mathcal{O}^\times]} \prod_{\ell' \text{ prime}} [(\mathcal{O}_{K} \otimes_\mathbb{Z} \mathbb{Z}_{\ell'})^\times : (\mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_{\ell'})^\times].
\end{align*}
Now, consider $\mathcal{O}$ and $\mathcal{O}'$ as in the statement of the lemma. We obtain
\begin{align*}
\frac{|\Pic(\mathcal{O}')|}{|\Pic(\mathcal{O})|} & = \frac{[\mathcal{O}_K^\times : \mathcal{O}^\times]}{[\mathcal{O}_K^\times : {\mathcal{O}'}^\times]} [(\mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_{\ell})^\times : (\mathcal{O}' \otimes_\mathbb{Z} \mathbb{Z}_{\ell})^\times] \\
& = \frac{\left[(\mathcal{O} \otimes_\mathbb{Z} \mathbb{Z}_{\ell})^\times : (\mathcal{O}' \otimes_\mathbb{Z} \mathbb{Z}_\ell)^\times\right]}{[\mathcal{O}^\times : {\mathcal{O}'}^\times]}.
\end{align*}
\end{proof}
\begin{remark}
If one supposes that $\mathcal{O}_K^\times = \mathcal{O}_{K_0}^\times$, then $[\mathcal{O}^\times : {\mathcal{O}'}^\times]$ is always $1$ in the above lemma. Indeed, one has $\mathcal{O}^\times \subset \mathcal{O}_{K_0}^\times \subset \mathfrak{o}_{0}^\times \subset (\mathcal{O}' \otimes_\mathbb{Z} \mathbb{Z}_\ell)^\times,$ and
therefore, since $\mathcal{O}$ and $\mathcal{O}'$ coincide at every other prime, we obtain
$\mathcal{O}^\times \subset {\mathcal{O}'}^\times,$ hence $\mathcal{O}^\times = {\mathcal{O}'}^\times$.
\end{remark}
\begin{remark}\label{rem:Streng}
For $g = 2$, the field $K$ is a primitive quartic CM-field. Then, the condition $\mathcal{O}_K^\times = \mathcal{O}_{K_0}^\times$ is simply equivalent to $K \neq \mathbb{Q}(\zeta_5)$ by \cite[Lem.3.3]{Streng10}. So in dimension 2, if $K \neq \mathbb{Q}(\zeta_5)$, one always has $[\mathcal{O}^\times : {\mathcal{O}'}^\times] = 1$ in the above lemma.
\end{remark}
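To illustrate Lemma~\ref{lemma:classNumberRelation} in a familiar case (for illustration only), take $g = 1$ with $K$ imaginary quadratic, $K \neq \mathbb{Q}(i), \mathbb{Q}(\zeta_3)$, and let $\mathcal{O} = \mathcal{O}_K$ and $\mathcal{O}' = \mathbb{Z} + \ell\mathcal{O}_K$. Then $[\mathcal{O}^\times : {\mathcal{O}'}^\times] = 1$, and the local index is
\[
\left[(\mathcal{O}_K \otimes_\mathbb{Z} \mathbb{Z}_{\ell})^\times : ({\mathcal{O}'} \otimes_\mathbb{Z} \mathbb{Z}_\ell)^\times\right]
= \frac{|(\mathcal{O}_K/\ell\mathcal{O}_K)^\times|}{|(\mathbb{Z}/\ell\mathbb{Z})^\times|}
= \ell - \left(\tfrac{d_K}{\ell}\right),
\]
where $\left(\tfrac{d_K}{\ell}\right)$ is the Kronecker symbol of the discriminant of $K$; the lemma thus recovers the classical formula $|\Pic(\mathbb{Z} + \ell\mathcal{O}_K)| = \left(\ell - \left(\tfrac{d_K}{\ell}\right)\right)|\Pic(\mathcal{O}_K)|$.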
\subsection*{Proof of Theorem~\ref{thm:lisogenyvolcanoes}}
Let $\mathscr V$ be any connected component of $\mathscr W_\mathfrak l$.
First, it follows from Proposition~\ref{prop:EllDoesNotChangeP} that locally at any prime other than $\ell$, the endomorphism rings occurring in $\mathscr V$ all coincide. Also, locally at $\ell$, Proposition~\ref{prop:frakLStructure} implies that an $\mathfrak l$-isogeny can only change the valuation at $\mathfrak l$ of the conductor. Therefore, within $\mathscr V$, the endomorphism ring of a variety $\mathscr A$ is uniquely determined by its level $v_{\mathfrak l}(\mathscr A)$. Let $\mathcal{O}_i$ be the endomorphism ring of any (and therefore every) variety $\mathscr A$ in $\mathscr V$ at level $v_{\mathfrak l}(\mathscr A) = i$. Write $\mathscr V_i$ for the corresponding subset of $\mathscr V$.
Proposition~\ref{prop:frakLStructure} implies that, except at the surface, all the edges connect consecutive levels of the graph, and each vertex at level $i$ has exactly one edge to the level $i-1$.
The structure of the connected components of the level $\mathscr V_0$ is already a consequence of the well-known free CM-action of $\Pic(\mathcal{O}_0)$ on ordinary abelian varieties with endomorphism ring $\mathcal{O}_0$.
Note that if $\varphi: \mathscr A \rightarrow \mathscr B$ is a descending $\mathfrak l$-isogeny within $\mathscr V$, then the unique ascending $\mathfrak l$-isogeny from $\mathscr B$ is $\mathrm{up}_{\mathscr B}^\mathfrak l : \mathscr B \rightarrow \mathrm{pr}_\mathfrak l(\mathscr B)$, and we
have $\mathrm{pr}_\mathfrak l(\mathscr B) \cong \mathscr A/\mathscr A[\mathfrak l]$; also, we have
$\mathrm{pr}_\mathfrak l(\mathscr B/\mathscr B[\mathfrak l]) \cong \mathrm{pr}_\mathfrak l(\mathscr B)/\mathrm{pr}_\mathfrak l(\mathscr B)[\mathfrak l]$.
These facts easily follow from the lattice point of view (see Proposition~\ref{prop:genericNeighbors}, and observe that if
$\Gamma \in \mathscr L_\mathfrak l (\Lambda)$, then $\mathfrak l\Gamma \in \mathscr L_\mathfrak l (\mathfrak l\Lambda)$).
We can deduce in particular that $\mathscr V_0$ is connected: a path from $\mathscr A \in \mathscr V_0$ to another vertex of $\mathscr V_0$ containing only vertical isogenies can only end at a vertex $\mathscr A/\mathscr A[\mathfrak l^i]$, which can also be reached within $\mathscr V_0$.
We now need to look at a bigger graph. For each $i \geq 0$, let $\mathscr U_{i}$ be the orbit of the level $\mathscr V_i$ for the CM-action of $\Pic(\mathcal{O}_i)$. The action is transitive on $\mathscr U_0$ since the connected graph $\mathscr V_0$ is in a single orbit of the action of $\Pic(\mathcal{O}_0)$. Let us show by
induction that each $\mathscr U_{i+1}$ consists of a single orbit, and that each vertex of $\mathscr U_{i+1}$ is reachable by an edge from $\mathscr U_{i}$. First, $\mathscr U_{i+1}$ is non-empty because,
by induction, $\mathscr U_{i}$ is non-empty, and each vertex in $\mathscr U_{i}$ has neighbors in $\mathscr U_{i+1}$. Choose
any isogeny $\varphi : {\mathscr A}'\rightarrow \mathscr A$ from $\mathscr U_{i}$ to $\mathscr U_{i+1}$. For any vertex $\mathscr B$ in the
orbit of $\mathscr A$, there is an isogeny $\psi : \mathscr A \rightarrow \mathscr B$ of degree coprime to $\ell$. The isogeny $\psi\circ\varphi$ factors
through a variety ${\mathscr B}'$ via an isogeny $ \psi' : {\mathscr A}' \rightarrow {\mathscr B}'$ of the same degree as $\psi$, and an
isogeny $\nu : {\mathscr B}' \rightarrow \mathscr B$ of kernel $\psi'(\ker \varphi)$. In particular, $\nu$ is an $\mathfrak l$-isogeny, and $\mathscr B'$ is in the orbit of $\mathscr A'$ for the CM-action, so it is in $\mathscr U_{i}$. This proves that any vertex in the orbit of $\mathscr A$ is reachable by an isogeny down from $\mathscr U_{i}$.
Let $\mathscr E_i$ be the set of all edges (counted with multiplicities) from $\mathscr U_i$ to $\mathscr U_{i+1}$. From Proposition~\ref{prop:frakLStructure}, we have
\begin{equation}\label{eq:Ulysse}
|\mathscr E_i| = \left[(\mathcal{O}_{i} \otimes_{\mathbb{Z}}\mathbb{Z}_\ell)^\times : ({\mathcal{O}_{i+1} \otimes_{\mathbb{Z}}\mathbb{Z}_\ell)}^\times\right]\cdot|\mathscr U_i|.
\end{equation}
For any $\mathscr B \in \mathscr U_{i+1}$, let $d(\mathscr B)$ be the number of edges in $\mathscr E_i$ targeting $\mathscr B$ (with multiplicities). We have seen that any $\mathscr B$ is reachable from $\mathscr U_{i}$, therefore $d(\mathscr B) \geq 1$, and we deduce from Lemma~\ref{lemma:distinctFrakLKernels} that $d(\mathscr B) \geq \left[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times\right]$. Hence
\[|\mathscr E_i| = \sum_{\mathscr B \in \mathscr U_{i+1}} d(\mathscr B) \geq \left[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times\right]\cdot|\mathscr U_{i+1}|.\]
Together with Equation~\eqref{eq:Ulysse}, we obtain the inequality
\begin{equation}\label{eq:Penelope}
|\mathscr U_{i+1}| \leq \frac{\left[(\mathcal{O}_{i} \otimes_{\mathbb{Z}}\mathbb{Z}_\ell)^\times : ({\mathcal{O}_{i+1} \otimes_{\mathbb{Z}}\mathbb{Z}_\ell)}^\times\right]}{\left[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times\right]}\cdot|\mathscr U_i|.
\end{equation}
Since the CM-action of the Picard group of $\mathcal{O}_i$ is free, we obtain from Lemma~\ref{lemma:classNumberRelation} that the right-hand side of Equation~\eqref{eq:Penelope} is exactly the size of the orbit of any vertex in $\mathscr U_{i+1}$. So $\mathscr U_{i+1}$ contains at most one orbit, hence exactly one, turning Equation~\eqref{eq:Penelope} into an equality.
In particular, all the edges in $\mathscr E_i$ must have multiplicity precisely $[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]$.
This concludes the induction.
Note that with all these properties, the graph is a volcano if and only if it is undirected and all the vertical multiplicities are $1$. The latter holds if and only if $[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times] = 1$ for every $i$, i.e., if $\mathcal{O}_0^\times \subset K_0$. Suppose from now on that this is the case; it remains to decide when the graph is undirected.
If $\mathfrak l$ is principal in $\mathcal{O}_0 \cap K_0$, the surface $\mathscr V_0$ is undirected because the primes above $\mathfrak l$ in $\mathcal{O}_0$ are inverses of each other.
If $\varphi: \mathscr A \rightarrow \mathscr B$ is a descending $\mathfrak l$-isogeny within $\mathscr V$, then the unique ascending $\mathfrak l$-isogeny from $\mathscr B$ points to $\mathscr A/\mathscr A[\mathfrak l]$, which is isomorphic to $\mathscr A$ if and only if $\mathfrak l$ is principal in $\mathcal{O}(\mathscr A)$. So for each descending edge $\mathscr A \rightarrow \mathscr B$ there is an ascending edge $\mathscr B \rightarrow \mathscr A$, and since we have proven above that each vertical edge has multiplicity 1, we conclude that the graph is undirected (hence a volcano) if and only if $\mathfrak l$ is principal in $\mathcal{O}_0 \cap K_0$ (if $\mathfrak l$ is not principal in $\mathcal{O}_0 \cap K_0$, there is a level $i$ at which $\mathfrak l$ is not principal in $\mathcal{O}_i$).
For Point~\ref{ascendingImpliesDescending}, choose a descending edge $\mathscr A \rightarrow \mathscr B$. We get that $\mathscr C \cong \mathscr A/\mathscr A[\mathfrak l]$. It is then easy to see that the isogeny $\mathscr A \rightarrow \mathscr B$ induces an isogeny $\mathscr C \rightarrow \mathscr B/\mathscr B[\mathfrak l]$.
\qed\\
Theorem~\ref{thm:lisogenyvolcanoes} gives a complete description of the graph: it allows one to construct an abstract model of any connected component corresponding to an order $\mathcal{O}_0$ from the knowledge of the norm of $\mathfrak l$, of the (labeled) Cayley graph of the subgroup of $\Pic(\mathcal{O}_0)$ with generators the prime ideals in $\mathcal{O}_0$ above $\mathfrak l$, of the order of $\mathfrak l$ in each Picard group $\Pic(\mathcal{O}_i)$, and of the indices $[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]$.
\begin{example}
For instance, suppose that $\ell = 2$ ramifies in $K_0$ as $\mathfrak l^2$, and $\mathfrak l$ is principal in $\mathcal{O}_K$, but is of order 2 in both $\Pic(\mathcal{O}_{K_0} + \mathfrak l\mathcal{O}_K)$ and $\Pic(\mathcal{O}_{K_0} + \mathfrak l^2\mathcal{O}_K)$, and that $\mathcal{O}_K^\times \subset K_0$. Then, the first four levels of any connected component of the $\mathfrak l$-isogeny graph for which the largest order is $\mathcal{O}_K$ are isomorphic to the graph of Figure~\ref{fig:exampleOfNonVolcano}. It is not a volcano, since $\mathfrak l$ fails to be principal in some of the orders $\mathcal{O}_{K_0} + \mathfrak l^i\mathcal{O}_K$.
\end{example}
\begin{figure}
\caption{The first four levels of a connected component of the $\mathfrak l$-isogeny graph described in the example; it is not a volcano.}
\label{fig:exampleOfNonVolcano}
\end{figure}
\begin{example}
When $K$ is a primitive quartic CM-field, we have seen in Remark~\ref{rem:Streng} that
the multiplicities $[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]$ are always one, except maybe if
$K = \mathbb{Q}(\zeta_5)$. Actually, even for $K = \mathbb{Q}(\zeta_5)$, only the maximal order $\mathcal{O}_K$ has
units that are not in $K_0$. We give in Figure~\ref{fig:exampleZeta5} examples of
$\mathfrak l$-isogeny graphs when the order at the surface is $\mathcal{O}_K = \mathbb{Z}[\zeta_5]$ (which is a principal ideal domain). The primes $2$ and
$3$ are inert in $K$, so we consider $\mathfrak l = 2\mathcal{O}_{K_0}$ and $\mathfrak l = 3\mathcal{O}_{K_0}$, and the prime number
$5$ is ramified in $K_0$ so $\mathfrak l^2 = 5\mathcal{O}_{K_0}$ (and $\mathfrak l$ is also ramified
in $K$, explaining the self-loop at the surface of the last graph).
\end{example}
\begin{figure}
\caption{Examples of $\mathfrak l$-isogeny graphs for $\mathfrak l$ above $2$, $3$ and $5$, with surface order $\mathcal{O}_K = \mathbb{Z}[\zeta_5]$.}
\label{fig:exampleZeta5}
\end{figure}
\begin{notation}\label{notation:lVolcano}
Let $\mathcal{O}$ be any order in $K$ with locally maximal real multiplication at $\ell$, whose conductor is not divisible by $\mathfrak l$.
We denote by $\mathscr V_\mathfrak l(\mathcal{O})$ the connected graph $\mathscr V$ described in Theorem~\ref{thm:lisogenyvolcanoes}.
If $\mathfrak l$ does divide the conductor of $\mathcal{O}$, let $\mathcal{O}'$ be the smallest order containing $\mathcal{O}$ whose conductor is not divisible by $\mathfrak l$. Then, we also write $\mathscr V_\mathfrak l(\mathcal{O})$ for the graph $\mathscr V_\mathfrak l(\mathcal{O}')$.
\end{notation}
\section{Graphs of $\mathfrak{l}$-isogenies with polarization} \label{sec:Polarizations}
When $\mathfrak l$ is trivial in the narrow class group of $K_0$, then $\mathfrak l$-isogenies preserve principal polarizability. The graphs of $\mathfrak l$-isogenies studied in Section~\ref{subsec:frakLIsogenyGraphs} do not account for polarizations. The present section fills this gap, by describing polarized graphs of $\beta$-isogenies, where $\beta \in K_0$ is a totally positive generator of $\mathfrak l$.
The main result of this section is Theorem \ref{thm:polarizedbetaisogenyvolcanoes} according to which the connected components of polarized isogeny graphs are either isomorphic to the corresponding components of the non-polarized isogeny graphs, or non-trivial double-covers thereof. Yet, this description is not quite exact due to problems arising when the various abelian varieties occurring in a connected component have different automorphism groups.
\subsection{Graphs with polarization}
Before defining the graph, we record the following proposition, which implies that one vertex of a fixed connected component of $(\mathscr W_\beta, v_\beta)$ is principally polarizable if and only if all of them are. Note that since $\beta$ is a generator of $\mathfrak l$, we will write $\beta$-isogeny to mean $\mathfrak l$-isogeny.
\begin{proposition}\label{prop:UniquePolarization}
If $\varphi: \mathscr A \to \mathscr B$ is a $\beta$-isogeny, then there is a unique principal polarization $\xi_{\mathscr B}$ on $\mathscr B$ satisfying
$$
\varphi^* \xi_{\mathscr B} = \xi_{\mathscr A}^\beta.
$$
\end{proposition}
\begin{proof}
Write $\varphi_{\xi_{\mathscr A}}$ for the polarization isogeny. Then $\ker(\varphi) \subset \ker (\varphi_{\xi_{\mathscr A}^\beta})$ is a maximal isotropic subgroup for the commutator pairing, hence by Grothendieck descent (see \cite[Lem.~2.4.7]{drobert:thesis}; the proof there is in characteristic $0$, but it extends to ordinary abelian varieties in characteristic $p$ via canonical lifts), it follows that $\xi_{\mathscr A}^{\beta}$ is the pullback of a principal polarization $\xi_{\mathscr B}$ on $\mathscr B$. For uniqueness, note that the homomorphism $\varphi^* \colon \mathrm{NS}(\mathscr B) \to \mathrm{NS}(\mathscr A)$ of free abelian groups of the same rank becomes an isomorphism after tensoring with $\mathbb{Q}$, hence is injective.
\end{proof}
We define the principally polarized, leveled, $\beta$-isogeny graph $(\mathscr W_\beta^\mathrm{pp}, v_\beta)$ as follows. A point is an isomorphism class\footnote{Recall that two polarizations $\xi_\mathscr A$ and $\xi'_\mathscr A$ on $\mathscr A$ are isomorphic if and only if there is a unit $u \in \mathcal{O}(\mathscr A)^{\times}$ such that $\xi'_\mathscr A = u^*\xi_\mathscr A$.} of a pair $(\mathscr A, \xi_{\mathscr A})$, where $\mathscr A$ is a principally polarizable abelian variety occurring in $(\mathscr W_\beta, v_\beta)$, and $\xi_{\mathscr A}$ is a principal polarization on $\mathscr A$.
There is an edge of multiplicity $m$ from the isomorphism class of $(\mathscr A, \xi_{\mathscr A})$ to the isomorphism class of $(\mathscr B, \xi_{\mathscr B})$ if there are $m$ distinct subgroups of $\mathscr A$ that are kernels of $\beta$-isogenies $\varphi : \mathscr A \rightarrow \mathscr B$ such that $\varphi^*\xi'_{\mathscr B}$ is isomorphic to $\xi_{\mathscr A}^\beta$, for some polarization $\xi'_{\mathscr B}$ isomorphic to $\xi_{\mathscr B}$.
The graph $\mathscr W_\beta^\mathrm{pp}$ admits a forgetful map to $\mathscr W_\beta$, and in particular inherits the structure of a leveled graph $(\mathscr W_\beta^\mathrm{pp}, v_\beta)$.
\begin{remark}
It can be the case that there is no $\beta$-isogeny $\varphi : \mathscr A \to \mathscr B$ such that $\varphi^* \xi_{\mathscr{B}} \cong \xi_{\mathscr A}^\beta$, but that there is nonetheless an edge (because there is a map with this property for some other polarization $\xi'_{\mathscr B}$, isomorphic to $\xi_{\mathscr B}$). This can happen because pullbacks of isomorphic polarizations are not necessarily isomorphic, when $\mathscr A$ and $\mathscr B$ have different automorphism groups.
\end{remark}
We note that this graph is undirected:
\begin{proposition}\label{prop:BetaDual}
If $\varphi: \mathscr A \to \mathscr B$ is a $\beta$-isogeny, then there is a unique $\beta$-isogeny $\tilde\varphi: \mathscr B \to \mathscr A$ satisfying $\tilde\varphi \varphi= \beta$, called the $\beta$-dual of $\varphi$.
\end{proposition}
\begin{proof}
Let $\kappa$ be the kernel of $\varphi$.
The group $\mathscr A[\beta]$ is an $\mathcal{O}_0(\mathscr A)/(\beta)$-vector space of dimension 2, of which the kernel $\kappa$ is a vector subspace of dimension 1. Therefore there is another vector subspace $\kappa'$ such that $\mathscr A[\beta] = \kappa \oplus \kappa'$, and
$\varphi(\kappa')$ is the kernel of a $\beta$-isogeny
$\psi : \mathscr B \rightarrow \mathscr C$.
Then, the kernel of the composition $\psi \circ \varphi$ is $\mathscr A[\beta]$ so there is an isomorphism $u : \mathscr C \rightarrow \mathscr A$ such that $u\circ \psi \circ \varphi = \beta$.
The isogeny $u\circ \psi$ is the $\beta$-dual of $\varphi$ (which is trivially unique).
\end{proof}
\subsection{Counting polarizations}
To describe $(\mathscr W_\beta^\mathrm{pp}, v_\beta)$, we need to count principal polarizations on any fixed variety. If $\mathcal{O}$ is an order in $K$, write $\mathcal{O}^{+\times}$ for the group of totally positive units in $\mathcal{O} \cap K_0$.
\begin{proposition}\label{allPolarizations}
Let $\mathscr A$ be a simple ordinary abelian variety over $\mathbb{F}_q$ with endomorphism ring $\mathcal{O}$. Then the set of isomorphism classes of principal polarizations (when non-empty) on $\mathscr A$ is a torsor for the group
$$
U(\mathcal{O}) := \mathcal{O}^{+\times}/\mathbf{N}(\mathcal{O}^\times),
$$
where $\mathbf{N} \colon \mathcal{O}^\times \to (\mathcal{O} \cap K_0)^\times$ is the norm map.
\end{proposition}
\begin{proof}
See \cite[Cor.5.2.7]{BL04} for a proof in characteristic $0$. That the result remains true for ordinary abelian varieties in characteristic $p$ follows from the theory of canonical lifts.
\end{proof}
The following lemma recalls some well-known facts about $U(\mathcal{O})$.
\begin{lemma}\label{lem:units}
The group $U(\mathcal{O})$ is an $\mathbb{F}_2$-vector space of dimension $d$, where $0 \leq d \leq g-1$. If $\mathcal{O} \subset \mathcal{O'}$ and $\mathcal{O} \cap K_0 = \mathcal{O}' \cap K_0$, then the natural map $U(\mathcal{O}) \to U(\mathcal{O'})$ is surjective.
\end{lemma}
\begin{proof}
Writing $\mathbf{N}$ for the norm from $K$ to $K_0$, we have the following hierarchy, the last containment following because for $\beta \in \mathcal{O} \cap K_0$ one has $\mathbf N \beta = \beta^2$:
\begin{equation}\label{hierarchy}
(\mathcal{O} \cap K_0)^\times \supseteq \mathcal{O}^{+\times} \supseteq \mathbf{N}(\mathcal{O}^\times) \supseteq (\mathcal{O} \cap K_0)^{\times 2}
\end{equation}
By Dirichlet's unit theorem (and its extension to non-maximal orders), the group $(\mathcal{O} \cap K_0)^\times$ is of the form $\{\pm 1\} \times A$, where $A$ is a free abelian group of rank $g-1$, so the quotient
$(\mathcal{O} \cap K_0)^\times/(\mathcal{O} \cap K_0)^{\times 2}$ is an $\mathbb{F}_2$-vector space of dimension at most $g$. Since $-1$ is never a totally positive unit, the first claim follows. The second statement of the lemma is clear.
\end{proof}
\begin{remark}
We remark that, beyond the simple calculations above, there is little one can say in full generality about the indices of the containments in (\ref{hierarchy}), which vary depending on the specific field $K$ and order $\mathcal{O}$ chosen. For example, if $g = 2$, the total index in (\ref{hierarchy}) is $4$, and one has examples with the ``missing'' factor of $2$ (i.e., the one unaccounted for by the totally negative unit $-1$) occurring in any of the three containments.
\end{remark}
\subsection{Structure of $(\mathscr W_\beta^\mathrm{pp}, v_\beta)$}
We may now state the main theorem.
\begin{theorem}\label{thm:polarizedbetaisogenyvolcanoes}
Let $\mathscr V^\mathrm{pp}$ be any connected component of the leveled $\beta$-isogeny graph $(\mathscr W_\beta^\mathrm{pp}, v_\beta)$.
For each $i \geq 0$, let $\mathscr V^\mathrm{pp}_i$ be the subgraph of $\mathscr V^\mathrm{pp}$ at level $i$.
We have:
\begin{enumerate}[label=(\roman*)]
\item \label{thmitem:polLevels} For each $i \geq 0$, the varieties in $\mathscr V^\mathrm{pp}_i$ share a common endomorphism ring $\mathcal{O}_i$. The order $\mathcal{O}_0$ can be any order with locally maximal real multiplication at $\ell$, whose conductor is not divisible by $\beta$;
\item \label{thmitem:polLevel0}The level $\mathscr V^\mathrm{pp}_0$ is isomorphic to the Cayley graph of the subgroup of $\mathfrak C(\mathcal{O}_0)$ with generators $(\mathfrak L_i, \beta)$ where $\mathfrak L_i$ are the prime ideals in $\mathcal{O}_0$ above $\beta$;
\item For any $\mathscr A \in \mathscr V^\mathrm{pp}_0$, there are
$$\frac {N(\mathfrak l)-\left(\frac{K}{\beta}\right)}{[\mathcal{O}_{0}^\times : \mathcal{O}_{1}^\times]}\cdot\frac{|U(\mathcal{O}_{1})|}{|U(\mathcal{O}_{0})|}$$
edges of multiplicity $[\mathcal{O}_{0}^\times : \mathcal{O}_{1}^\times]$ from $\mathscr A$ to distinct vertices of~$\mathscr V^\mathrm{pp}_{1}$ (where $\left(\frac{K}{\beta}\right)$ is $-1$, $0$ or $1$ if $\beta$ is inert, ramified, or split in $K$, respectively);
\item For each $i > 0$, and any $x \in \mathscr V^\mathrm{pp}_i$, there is one simple edge from $x$ to a vertex of $\mathscr V^\mathrm{pp}_{i-1}$, and
$$\frac {N(\mathfrak l)}{[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]}\cdot\frac{|U(\mathcal{O}_{i+1})|}{|U(\mathcal{O}_{i})|}$$
edges of multiplicity $[\mathcal{O}_{i}^\times : \mathcal{O}_{i+1}^\times]$ to distinct vertices of $\mathscr V^\mathrm{pp}_{i+1}$;
\item\label{thmitem:almostundirected} For each edge $x \rightarrow y$, there is an edge $y \rightarrow x$.
\end{enumerate}
In particular, the graph $\mathscr V^\mathrm{pp}$ is an $N(\beta)$-volcano if and only if $\mathcal{O}_0^\times \subset K_0$.
Also, if $\mathscr V^\mathrm{pp}$ contains a variety defined over the finite field $k$, the subgraph containing only the varieties defined over $k$ consists of the subgraph of the first $v$ levels, where $v$ is the valuation at $\beta$ of the conductor of $\mathcal{O}_{K_0}[\pi] = \mathcal{O}_{K_0}[\pi, \pi^\dagger]$.
\end{theorem}
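As a sanity check (under a simplification not assumed by the theorem), suppose that all the unit indices $[\mathcal{O}_i^\times : \mathcal{O}_{i+1}^\times]$ and all the quotients $|U(\mathcal{O}_{i+1})|/|U(\mathcal{O}_i)|$ are trivial, as happens for instance when $g = 2$ and $\mathcal{O}_0^\times \subset K_0$. Then every vertex at a level $i > 0$ has
\[
\underbrace{1}_{\text{ascending}} \;+\; \underbrace{N(\mathfrak l)}_{\text{descending}} \;=\; N(\mathfrak l) + 1
\]
neighbors, which is exactly the degree count of an $N(\beta)$-volcano below the surface.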
Before proving this theorem, we need some preliminary results.
First, we recall the action of the Shimura class group. For $\mathcal{O}$ an order, write $\mathscr{I}(\mathcal{O})$ for the group of invertible $\mathcal{O}$-ideals, and define the Shimura class group as
$$
\mathfrak{C}(\mathcal{O}) = \{ (\mathfrak{a}, \alpha) \mid \mathfrak{a} \in \mathscr{I}(\mathcal{O}),\ \alpha \in K_0 \text{ totally positive},\ \mathbf N\mathfrak{a} = \alpha\mathcal{O} \} /\sim
$$
where two pairs $(\mathfrak{a}, \alpha), (\mathfrak{a}', \alpha')$ are equivalent if there exists $u \in K^\times$ with $\mathfrak{a}' = u\mathfrak{a}$ and $\alpha' = uu^\dagger \alpha$.
The Shimura class group acts freely on the set of isomorphism classes of principally polarized abelian varieties whose endomorphism ring is $\mathcal{O}$ (see~\cite[\S 17]{taniyama-shimura} for the result in characteristic 0, which extends via canonical lifts to the ordinary characteristic $p$ case). If $\beta$ is coprime to the conductor of $\mathcal{O}$, then an element of $\mathfrak C(\mathcal{O})$ acts by a $\beta$-isogeny if and only if it is of the form $(\mathfrak{L}, \beta)$, for some prime ideal $\mathfrak{L}$ of $\mathcal{O}$ dividing $(\beta)$.
\begin{lemma}\label{lemma:uniquePolUp}
Let $\varphi : \mathscr A \rightarrow \mathscr B$ be a $\beta$-isogeny, and let $\xi_\mathscr A$ be a principal polarization on $\mathscr A$. We have:
\begin{enumerate}[label=(\roman*)]
\item \label{lemmaitem:ascendingpol} If $\varphi$ is $\beta$-ascending, there is, up to isomorphism, a unique polarization $\xi_\mathscr B$ on $\mathscr B$ such that $\varphi^*\xi_\mathscr B$ is isomorphic to $\xi_\mathscr A^\beta$;
\item \label{lemmaitem:descendingpol} It $\varphi$ is $\beta$-descending, there are, up to isomorphism, exactly
\[\frac {|U(\mathcal{O}(\mathscr B))|}{|U(\mathcal{O}(\mathscr A))|}\]
distinct polarizations $\xi_\mathscr B$ on $\mathscr B$ such that $\varphi^*\xi_\mathscr B$ is isomorphic to $\xi_\mathscr A^\beta$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us first prove \ref{lemmaitem:ascendingpol}.
From Proposition~\ref{prop:UniquePolarization}, there exists a polarization $\xi_\mathscr B$ on $\mathscr B$ such that $\varphi^*\xi_\mathscr B = \xi_\mathscr A^\beta$.
Suppose $\xi'_\mathscr B$ is a polarization such that $\varphi^*\xi'_\mathscr B \cong \xi_\mathscr A^\beta$. Then, there is a unit $u \in \mathcal{O}(\mathscr A)^{\times}$ such that $\varphi^*\xi'_\mathscr B = u^*\xi_\mathscr A^{\beta}$. But $\varphi$ is ascending, so $u \in \mathcal{O}(\mathscr B)^{\times}$ and therefore $$\varphi^*\xi'_\mathscr B = u^*\xi_\mathscr A^\beta = u^*(\varphi^*\xi_\mathscr B) = \varphi^*(u^*\xi_\mathscr B).$$
From the uniqueness in Proposition~\ref{prop:UniquePolarization}, we obtain $\xi'_\mathscr B = u^*\xi_\mathscr B$, so $\xi_\mathscr B$ and $\xi'_\mathscr B$ are two isomorphic polarizations.
For \ref{lemmaitem:descendingpol}, again apply Proposition~\ref{prop:UniquePolarization}, and observe that the kernel of the surjection
$U(\mathcal{O}(\mathscr B)) \rightarrow U(\mathcal{O}(\mathscr A))$
of Lemma~\ref{lem:units} acts simply transitively on the set of isomorphism classes of polarizations $\xi_\mathscr B$ on $\mathscr B$ satisfying $\varphi^*\xi_\mathscr B \cong\xi_\mathscr A^\beta$.
\end{proof}
\subsection*{Proof of Theorem \ref{thm:polarizedbetaisogenyvolcanoes}}
First observe that \ref{thmitem:polLevels} is immediate from Theorem~\ref{thm:lisogenyvolcanoes}(i), since the leveling on $\mathscr{V}^{\mathrm{pp}}$ is induced from that of $\mathscr{V}$. Also, \ref{thmitem:almostundirected} is a direct consequence of the existence of $\beta$-duals, established in Proposition~\ref{prop:BetaDual}.
Now, let us prove that for any class $(\mathscr A, \xi_\mathscr A)$ at a level $i > 0$, there is a unique edge to the level $i-1$.
From Theorem~\ref{thm:lisogenyvolcanoes}, there exists an ascending isogeny $\varphi : \mathscr A \rightarrow \mathscr B$ (unique up to isomorphism of $\mathscr B$), and from
Lemma~\ref{lemma:uniquePolUp}\ref{lemmaitem:ascendingpol}, there is a unique polarization $\xi_\mathscr B$ on $\mathscr B$ (up to isomorphism) such that $(\mathscr A, \xi_\mathscr A) \rightarrow (\mathscr B, \xi_\mathscr B)$ is an edge in $\mathscr{V}^{\mathrm{pp}}$.
These results, and the fact that $\mathscr V_0$ is connected, imply that $\mathscr V_0^\mathrm{pp}$ is connected. We can then deduce \ref{thmitem:polLevel0} from the action of the Shimura class group $\mathfrak C(\mathcal{O}_0)$.
Now, (iii) (respectively, (iv)) is a consequence of Theorem~\ref{thm:lisogenyvolcanoes}(iii) (respectively, Theorem~\ref{thm:lisogenyvolcanoes}(iv)) together with Lemma~\ref{lemma:uniquePolUp}. The statement on multiplicities of the edges also uses the fact that if $\varphi,\psi : \mathscr A\rightarrow \mathscr B$ are two $\beta$-isogenies with same kernel, and $\xi_\mathscr A$ is a principal polarization on $\mathscr A$, then the two principal polarizations on $\mathscr B$ induced via $\varphi$ and $\psi$ are isomorphic.
The volcano property follows from the corresponding statement in Theorem \ref{thm:lisogenyvolcanoes}, and the statement on fields of definition follows from Remark \ref{rem:fieldOfDefIsogenies}, which shows that the isomorphism from a principally polarized absolutely simple ordinary abelian variety to its dual, and hence the polarization, is defined over the field of definition of the variety.
\qed
\subsection{Principally polarizable surfaces}\label{subsec:ppas}
The result of Theorem~\ref{thm:polarizedbetaisogenyvolcanoes} for abelian surfaces is a bit simpler than the general case, thanks to the following lemma.
\begin{lemma}
Suppose $g = 2$.
With all notations as in Theorem~\ref{thm:polarizedbetaisogenyvolcanoes}, we have $U(\mathcal{O}_i) = U(\mathcal{O}_0)$ for any non-negative integer $i$.
\end{lemma}
\begin{proof}
In this case, one has $\mathcal{O}_K^\times = \mathcal{O}_{K_0}^\times$ except when $K = \mathbb{Q}(\zeta_5)$ (see Remark \ref{rem:Streng}); but even when $K = \mathbb{Q}(\zeta_5)$, the equality holds up to units of norm~$1$. Therefore, for any order $\mathcal{O}$ in $K$, one has $\mathbf N \mathcal{O}^\times = \mathbf N(\mathcal{O} \cap {K_0})^\times$. Thus, none of the groups $U(\mathcal{O}_i)$ actually depend on $i$.
\end{proof}
Therefore, the factors $|U(\mathcal{O}_{i+1})|/|U(\mathcal{O}_i)|$ disappear when $g = 2$.
It follows that each connected component $\mathscr{V}^\mathrm{pp}$ of $(\mathscr W_\beta^\mathrm{pp}, v_\beta)$ is either isomorphic to its image in $(\mathscr W_\beta, v_\beta)$, or is isomorphic to the natural double cover of this image constructed by doubling the length of the cycle $\mathscr V_0$ (as illustrated in Figure~\ref{fig:examplePlusMinusVolcano}).
The first case occurs when $(\beta)$ is inert in $K/K_0$, or when the order of $(\mathfrak L, \beta)$ in $\mathfrak C(\mathcal{O}_0)$ equals the order of $\mathfrak L$ in $\mathrm{Cl}(\mathcal{O}_0)$ (where $\mathfrak L$ is a prime ideal of $\mathcal{O}_0$ above $(\beta)$). The second case occurs when the order of $(\mathfrak L, \beta)$ is twice that of $\mathfrak L$.
\begin{figure}
\caption{Example of a connected component of a polarized $\beta$-isogeny graph that is a double cover of its image in the non-polarized graph.}
\label{fig:examplePlusMinusVolcano}
\end{figure}
\section{Levels for the real multiplication in dimension 2}\label{sec:levelsRM}
We now specialize to the case $g = 2$. Then, $\mathscr A$ is of dimension 2, and $K$ is a primitive quartic CM-field. The subfield $K_0$ is a real quadratic number field. The orders in $K_{0,\ell}$ are linearly ordered since they are all of the form $\mathbb{Z}_\ell + \ell^n\mathfrak{o}_0$. These $n$'s can be seen as ``levels'' of real multiplication. Taking advantage of this simple structure, the goal of this section is to prove Theorem~\ref{RMupIso}.
\subsection{Preliminaries on symplectic lattices} Let $\mathbb{F}_\ell$ be the finite field with $\ell$ elements.
\begin{lemma}\label{lemma:countingMaxIso}
Let $W$ be a symplectic $\mathbb{F}_\ell$-vector space of dimension $4$.
It contains exactly $\ell^{3} + \ell^{2} + \ell + 1$ maximal isotropic subspaces.
\end{lemma}
\begin{proof}
In the following, a \emph{line} or a \emph{plane} means a dimension 1 or 2 subspace of a vector space (i.e., they contain the origin of the vector space).
Fix any line $L$ in $W$. We count the maximal isotropic subspaces of $W$ containing $L$. The line $L$ is itself isotropic (though not maximal), so $L \subset L^\perp$. Also, $\dim L + \dim L^\perp = 4$, so $\dim L^\perp = 3$. Since any maximal isotropic subspace of $W$ is of dimension 2, those containing $L$ are exactly the planes in $L^\perp$ containing $L$. There are $\ell + 1$ such planes, because they are in natural correspondence with the lines in the dimension 2 vector space $L^\perp / L$. It follows that each of the $\ell^{3} + \ell^{2} + \ell + 1$ lines of $W$ is contained in exactly $\ell + 1$ maximal isotropic subspaces. Since each maximal isotropic subspace contains exactly $\ell + 1$ lines, a double count of incident line--subspace pairs shows that there are $(\ell^{3} + \ell^{2} + \ell + 1)(\ell+1)/(\ell+1) = \ell^{3} + \ell^{2} + \ell + 1$ maximal isotropic subspaces.
\end{proof}
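The count in Lemma~\ref{lemma:countingMaxIso} is easy to verify by brute force for a small prime. The following standalone Python sketch (an illustration only, not part of any toolchain used in the text) enumerates the dimension-2 subspaces of $\mathbb{F}_2^4$ and keeps the isotropic ones for the standard symplectic form:

```python
from itertools import product

l = 2  # a small prime, chosen so brute force is feasible

# Standard symplectic form on F_l^4: <x, y> = x0*y2 - x2*y0 + x1*y3 - x3*y1
def pairing(x, y):
    return (x[0]*y[2] - x[2]*y[0] + x[1]*y[3] - x[3]*y[1]) % l

# All nonzero vectors of F_l^4
vectors = [v for v in product(range(l), repeat=4) if any(v)]

# The F_l-span of u and v, as a frozenset of vectors
def span(u, v):
    return frozenset(tuple((a*u[i] + b*v[i]) % l for i in range(4))
                     for a in range(l) for b in range(l))

# All dimension-2 subspaces (spans containing l**2 elements)
planes = set()
for u in vectors:
    for v in vectors:
        s = span(u, v)
        if len(s) == l**2:  # u and v are independent
            planes.add(s)

# Keep the (maximal) isotropic ones: every pairing of elements vanishes
isotropic = [P for P in planes
             if all(pairing(x, y) == 0 for x in P for y in P)]

print(len(isotropic))  # l**3 + l**2 + l + 1 = 15
```

Running it prints $15 = \ell^3 + \ell^2 + \ell + 1$, as the lemma predicts (out of the $35$ planes of $\mathbb{F}_2^4$).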
\begin{lemma}
Let $V$ be a symplectic $\mathbb{Q}_\ell$-vector space of dimension 4. Let $\Lambda \subset V$ be a lattice in $V$ such that $\Lambda^* = \Lambda$. Then $\Lambda/\ell \Lambda$ is a symplectic $\mathbb{F}_\ell$-vector space of dimension $4$ for the symplectic form
$$\langle \lambda + \ell\Lambda, \mu + \ell\Lambda \rangle_\ell = \langle \lambda , \mu \rangle \bmod \ell.$$
\end{lemma}
\begin{proof}
The fact that the form $\langle -, - \rangle_\ell$ is bilinear and alternating easily follows from the fact that the form $\langle -, - \rangle$ is symplectic. It only remains to prove that it is non-degenerate. Let $\lambda \in \Lambda$, and suppose that $\langle \lambda + \ell\Lambda, \mu + \ell\Lambda \rangle_\ell = 0$ for any $\mu \in \Lambda$. We now prove that $\lambda \in \ell\Lambda$. For any $\mu \in \Lambda$, we have $\langle \lambda , \mu \rangle \in \ell \mathbb{Z}_\ell$, and therefore
$\langle \ell^{-1}\lambda , \mu \rangle \in \mathbb{Z}_\ell.$
So $\ell^{-1}\lambda \in \Lambda^* = \Lambda$, whence $\lambda \in \ell \Lambda$, concluding the proof.
\end{proof}
\begin{lemma}
Let $V$ be a symplectic $\mathbb{Q}_\ell$-vector space of dimension 4, and $\Lambda$ a self-dual lattice in $V$.
Let $\ell\Lambda \subset \Gamma \subset \Lambda$ be an intermediate lattice. Then $\Gamma/\ell\Lambda$ is maximal isotropic in $\Lambda/\ell \Lambda$ if and only if $\Gamma^* = \ell^{-1}\Gamma$.
\end{lemma}
\begin{proof}
First, suppose that $\Gamma/\ell\Lambda$ is maximal isotropic. Fix $\gamma \in \Gamma$. For any $\delta \in \Gamma$, since $\Gamma/\ell\Lambda$ is isotropic, we have $\langle \gamma, \delta \rangle \in \ell \mathbb{Z}_\ell$, so $\langle \ell^{-1}\gamma, \delta \rangle \in \mathbb{Z}_\ell$ and therefore $\ell^{-1}\gamma \in \Gamma^*$. This proves that $\ell^{-1}\Gamma \subset \Gamma^*$. Now, let $\alpha \in \Gamma^*$.
Since $\Gamma^* \subset (\ell\Lambda)^* = \ell^{-1}\Lambda$, we have $\ell\alpha \in \Lambda$, and $\langle \ell\alpha, \gamma \rangle = \ell\langle \alpha, \gamma \rangle \in \ell\mathbb{Z}_\ell$ for any $\gamma \in \Gamma$. As $\Gamma/\ell\Lambda$ is maximal isotropic, $(\Gamma/\ell\Lambda)^\perp = \Gamma/\ell\Lambda$, so $\ell\alpha$ must be in $\Gamma$. This proves that $\Gamma^* \subset \ell^{-1}\Gamma$.
Now, suppose that $\Gamma^* = \ell^{-1}\Gamma$.
Then, $\langle \ell^{-1} \Gamma, \Gamma \rangle \subset \mathbb{Z}_\ell$, so $\langle \Gamma, \Gamma \rangle \subset \ell\mathbb{Z}_\ell$, and $\Gamma/\ell \Lambda$ is isotropic. Let $\lambda \in \Lambda$ be such that $\langle \lambda + \ell \Lambda, \Gamma/\ell \Lambda \rangle_\ell = \{0\}$. Then, $\langle \ell^{-1}\lambda , \Gamma \rangle \subset \mathbb{Z}_\ell$, so $\ell^{-1}\lambda \in \Gamma^* = \ell^{-1}\Gamma$, which implies that $\lambda \in \Gamma$. So $\Gamma/\ell \Lambda$ is maximal isotropic.
\end{proof}
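As a concrete illustration (with a hypothetical choice of symplectic basis $e_1, e_2, e_3, e_4$ of $\Lambda$ satisfying $\langle e_1, e_3 \rangle = \langle e_2, e_4 \rangle = 1$ and all other pairings of basis vectors zero), take $\Gamma = \mathbb{Z}_\ell e_1 \oplus \mathbb{Z}_\ell e_2 \oplus \ell\mathbb{Z}_\ell e_3 \oplus \ell\mathbb{Z}_\ell e_4$. Then $\Gamma/\ell\Lambda$ is the span of the images of $e_1$ and $e_2$, which is maximal isotropic, and a direct computation of the dual lattice gives
\[
\Gamma^* = \ell^{-1}\mathbb{Z}_\ell e_1 \oplus \ell^{-1}\mathbb{Z}_\ell e_2 \oplus \mathbb{Z}_\ell e_3 \oplus \mathbb{Z}_\ell e_4 = \ell^{-1}\Gamma,
\]
as the lemma predicts.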
\begin{definition}[($\ell,\ell$)-neighbors]
The set $\mathscr L(\Lambda)$ of \emph{$(\ell,\ell)$-neighbors} of $\Lambda$ is the set of lattices $\Gamma$ such that $\ell \Lambda \subset \Gamma \subset \Lambda$, and $\Gamma/\ell\Lambda$ is maximal isotropic in $\Lambda/\ell\Lambda$.
\end{definition}
\begin{remark}\label{rem:correspEllEllAndEllEll}
Consider the lattice $T = T_\ell\mathscr A$.
Note that $(\ell, \ell)$-isogenies $\mathscr A \to \mathscr B$ correspond under Proposition \ref{prop:correspondence} to lattices $\Gamma$ with $T \subset \Gamma \subset \frac{1}{\ell} T$ and $\Gamma/T$ a maximal isotropic subspace of $\frac{1}{\ell} T / T$, i.e., to ${(\ell,\ell)}$-neighbors of $T$ rescaled by a factor $\ell^{-1}$.
\end{remark}
\subsection{${(\ell,\ell)}$-neighboring lattices}
Throughout this section, $V$ is a symplectic $\mathbb{Q}_\ell$-vector space of dimension 4. Again, we consider a prime number $\ell$ and a quartic CM-field $K$, with $K_{0}$ its real quadratic subfield. The algebra $K_\ell = K\otimes_{\mathbb{Q}}\mathbb{Q}_\ell$ is a $\mathbb{Q}_\ell$-algebra of dimension~4, with an involution $x \mapsto x^\dagger$ fixing $K_{0,\ell}$, induced by the generator of $\mathrm{Gal}(K/K_0)$. Suppose that $K_\ell$ acts ($\mathbb{Q}_\ell$-linearly) on~$V$, and that for any $x \in K_\ell$ and $u,v \in V$, we have $\langle xu,v \rangle = \langle u,x^\dagger v \rangle$.
For any lattice $\Lambda$ in $V$, the \emph{real order} of $\Lambda$ is the order in $K_{0,\ell} = K_0\otimes_{\mathbb{Q}}\mathbb{Q}_\ell$ defined as
$$\mathfrak{o}_0(\Lambda) = \{x \in K_{0,\ell} \mid x\Lambda \subset \Lambda \}.$$
Any order in $K_{0,\ell}$ is of the form $\mathfrak{o}_n = \mathbb{Z}_\ell + \ell^{n}\mathfrak{o}_0$, for some non-negative integer $n$, with $\mathfrak{o}_0$ the maximal order of $K_{0,\ell}$.
We say that $\Lambda$ is an $\mathfrak{o}_n$-lattice if $\mathfrak{o}_0(\Lambda) = \mathfrak{o}_n$. The goal of this section is to prove Theorem~\ref{RMupIso} by first proving its lattice counterpart, in the form of the following proposition.
\begin{proposition}\label{prop:neighborsLevelsRM}
Let $\Lambda$ be a self-dual $\mathfrak{o}_n$-lattice, with $n > 0$. The set $\mathscr L(\Lambda)$ of its $(\ell,\ell)$-neighbors contains exactly one $\mathfrak{o}_{n-1}$-lattice, namely $\ell \mathfrak{o}_{n-1}\Lambda$, $\ell^2 + \ell$ lattices of real order $\mathfrak{o}_{n}$, and $\ell^3$ lattices of real order $\mathfrak{o}_{n+1}$.
\end{proposition}
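Note that the three counts in Proposition~\ref{prop:neighborsLevelsRM} exhaust the full set of $(\ell,\ell)$-neighbors: by Lemma~\ref{lemma:countingMaxIso}, $\mathscr L(\Lambda)$ has cardinality
\[
\ell^{3} + \ell^{2} + \ell + 1 = 1 + (\ell^2 + \ell) + \ell^3.
\]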
\begin{lemma}\label{lemma:splittingOnLattice}
Let $\Lambda$ be a self-dual $\mathfrak{o}_n$-lattice in $V$, for some non-negative integer $n$. Then, $\Lambda$ is a free $\mathfrak{o}_n$-module of rank 2.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:quadraticImpliesGorenstein}, the order $\mathfrak{o}_n$ is a Gorenstein ring of dimension 1, and it follows from~\cite[Thm. 6.2]{Bass63} that $\Lambda$ is a reflexive $\mathfrak{o}_n$-module. From~\cite[Prop. 7.2]{Bass63}, $\Lambda$ has a projective direct summand, so $\Lambda = \mathfrak{o}_n e_1 \oplus M$ for some $e_1 \in \Lambda$, and $M$ an $\mathfrak{o}_n$-submodule. This $M$ is still reflexive (any direct summand of a reflexive module is reflexive). So applying \cite[Prop. 7.2]{Bass63} again to $M$, together with the fact that $M$ has $\mathbb{Z}_\ell$-rank 2, there is a non-negative integer $m \leq n$ and an element $e_2 \in \Lambda$ such that $M = \mathfrak{o}_me_2$. We shall prove that $m = n$. By contradiction, assume $m < n$. We have $\Lambda/\ell\Lambda = (\mathfrak{o}_n e_1/\ell\mathfrak{o}_n e_1) \oplus (\mathfrak{o}_m e_2 / \ell \mathfrak{o}_m e_2)$. Observe that $\mathfrak{o}_m e_2/ \ell \mathfrak{o}_m e_2$ is maximal isotropic. Indeed, it is of dimension 2, and for any $x,y \in \mathfrak{o}_m$, $\langle x e_2, y e_2 \rangle = - \langle y e_2, x e_2 \rangle$ because the form is alternating, and $\langle x e_2, y e_2 \rangle = \langle y e_2, x e_2 \rangle$ because it is $K_0$-bilinear, so $\langle x e_2, y e_2 \rangle = 0$.
Also, we have $\mathfrak{o}_{n-1} \subset \mathfrak{o}_m$, so
$$\langle \ell\mathfrak{o}_{n-1} e_1, \mathfrak{o}_m e_2 \rangle = \langle \ell e_1, \mathfrak{o}_{n-1}\mathfrak{o}_m e_2 \rangle = \ell \langle e_1, \mathfrak{o}_m e_2 \rangle \subset \ell \mathbb{Z}_\ell.$$
This proves that $\ell\mathfrak{o}_{n-1}e_1/\ell\mathfrak{o}_n e_1 \subset (\mathfrak{o}_m e_2/\ell\mathfrak{o}_m e_2)^\perp = \mathfrak{o}_m e_2/\ell\mathfrak{o}_m e_2$, a contradiction.
\end{proof}
Using a standard abuse of notation, write $\mathbb{F}_\ell[\epsilon]$ for the ring of dual numbers, i.e. an $\mathbb{F}_\ell$-algebra isomorphic to $\mathbb{F}_\ell[X]/X^2$ via an isomorphism sending $\epsilon$ to $X$.
\begin{lemma}\label{lemma:subspacesEpsilon}
Let $R = \mathbb{F}_\ell[\epsilon]f_1 \oplus \mathbb{F}_\ell[\epsilon]f_2$ be a free $\mathbb{F}_\ell[\epsilon]$-module of rank 2.
The $\mathbb{F}_\ell[\epsilon]$-submodules of $R$ of $\mathbb{F}_{\ell}$-dimension 2 are exactly the $\ell^2 + \ell + 1$ modules $\epsilon R$, and $\mathbb{F}_\ell[\epsilon] \cdot g$ for any $g \not \in \epsilon R$. A complete list of these orbits $\mathbb{F}_\ell[\epsilon] \cdot g$ is given by $\mathbb{F}_\ell[\epsilon]\cdot(b\epsilon f_1 + f_2)$ for any $b \in \mathbb{F}_\ell$, and $\mathbb{F}_\ell[\epsilon]\cdot(f_1 + \alpha f_2 + \beta \epsilon f_2)$, for any $\alpha,\beta \in \mathbb{F}_\ell$.
\end{lemma}
\begin{proof}
Let $H \subset R$ be a subspace of dimension 2, stable under the action of $\mathbb{F}_\ell[\epsilon]$.
For any $g \in H$, write $g = a_gf_1+b_g\epsilon f_1 + c_gf_2 + d_g \epsilon f_2 \in H$ for $a_g,b_g,c_g,d_g \in \mathbb{F}_\ell$.
Since $H$ is $\mathbb{F}_\ell[\epsilon]$-stable, for any $g \in H$, the element $g\epsilon = a_g\epsilon f_1 + c_g\epsilon f_2$ is also in $H$.
First suppose that $a_g = 0$ and $c_g = 0$ for every $g \in H$. Then $H \subset \epsilon R$, and since both have $\mathbb{F}_\ell$-dimension 2, $H = \epsilon R$, which is indeed an $\mathbb{F}_\ell[\epsilon]$-submodule.
Now, suppose $a_g = 0$ for every $g \in H$, but $H$ contains an element $g$ such that $c_g \neq 0$. Then, $H$ contains both $b_g\epsilon f_1 + c_gf_2 + d_g \epsilon f_2$ and $c_g\epsilon f_2$, so $H$ is the $\mathbb{F}_\ell$-vector space spanned by $\epsilon f_2$ and $b_g\epsilon f_1 + c_gf_2$. Normalizing $c_g = 1$, there are $\ell$ such subspaces $H$ (one for each possible $b_g \in \mathbb{F}_\ell$), all of dimension 2 and stable under $\mathbb{F}_\ell[\epsilon]$.
Finally, suppose there exists $g \in H$ such that $a_g \neq 0$. Then $H$ is spanned as an $\mathbb{F}_\ell$-vector space by a pair $\{f_1 + \alpha f_2 + \beta \epsilon f_2, \epsilon f_1 + \alpha \epsilon f_2\}$, with $\alpha,\beta \in \mathbb{F}_\ell$, and each of the $\ell^2$ subspaces of this form is an $\mathbb{F}_\ell[\epsilon]$-submodule.
\end{proof}
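The count of Lemma~\ref{lemma:subspacesEpsilon} can also be checked by brute force for a small prime. The following standalone Python sketch (an illustration only; it encodes $af_1 + b\epsilon f_1 + cf_2 + d\epsilon f_2$ as the tuple $(a,b,c,d)$) enumerates the $\epsilon$-stable planes of $R$ for $\ell = 3$:

```python
from itertools import product

l = 3  # a small prime, chosen so brute force is feasible

# a*f1 + b*eps*f1 + c*f2 + d*eps*f2 is encoded as (a, b, c, d)
def eps(v):
    a, b, c, d = v
    return (0, a, 0, c)  # multiplication by eps, using eps**2 = 0

# All nonzero elements of R
vectors = [v for v in product(range(l), repeat=4) if any(v)]

# The F_l-span of u and v, as a frozenset of elements
def span(u, v):
    return frozenset(tuple((x*u[i] + y*v[i]) % l for i in range(4))
                     for x in range(l) for y in range(l))

submodules = set()
for u in vectors:
    for v in vectors:
        s = span(u, v)
        # dimension-2 subspaces stable under eps are the F_l[eps]-submodules
        if len(s) == l**2 and all(eps(w) in s for w in s):
            submodules.add(s)

print(len(submodules))  # l**2 + l + 1 = 13
```

Running it prints $13 = \ell^2 + \ell + 1$ for $\ell = 3$, matching the classification in the lemma.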
\begin{lemma}\label{lemma:realOrbitIsotropic}
Let $\Lambda$ be an $\mathfrak{o}_n$-lattice, for some non-negative integer $n$. For any element $g \in \Lambda/\ell\Lambda$, the orbit $\mathfrak{o}_n \cdot g$ is an isotropic subspace of $\Lambda/\ell\Lambda$.
\end{lemma}
\begin{proof}
Let $\lambda \in \Lambda$ such that $g = \lambda + \ell\Lambda$.
For any $\alpha, \beta \in \mathfrak{o}_n$, we have
$\langle \alpha \lambda, \beta \lambda \rangle = -\langle \beta \lambda, \alpha \lambda \rangle$
because the symplectic form on $V$ is alternating, and
$\langle \alpha \lambda, \beta \lambda \rangle = \langle \beta \lambda, \alpha \lambda \rangle$
because it is $K_0$-bilinear. So $\langle \alpha g, \beta g \rangle_\ell = 0$, and the orbit of $g$ is isotropic.
\end{proof}
\subsection*{Proof of Proposition~\ref{prop:neighborsLevelsRM}}
From Lemma~\ref{lemma:splittingOnLattice}, $\Lambda$ splits as $e_1 \mathfrak{o}_n \oplus e_2 \mathfrak{o}_n$, for some $e_1,e_2 \in \Lambda$.
Observe that there is an element $\epsilon \in \mathfrak{o}_n$ such that $\mathfrak{o}_n / \ell \mathfrak{o}_n = \mathbb{F}_\ell[\epsilon] \cong \mathbb{F}_\ell[X]/(X^2)$, via the isomorphism sending $\epsilon$ to $X$. The quotient
$R = \Lambda / \ell \Lambda$ is a free $\mathbb{F}_\ell[\epsilon]$-module of rank 2. Let $\pi : \Lambda \rightarrow R$ be the
canonical projection. The set $\{f_1,\epsilon f_1, f_2, \epsilon f_2\}$ forms an $\mathbb{F}_\ell$-basis of $R$, where $f_i = \pi(e_i)$.
From Lemma~\ref{lemma:subspacesEpsilon}, $R$ contains $\ell^2 + \ell + 1$ subspaces of dimension 2 that are $\mathbb{F}_\ell[\epsilon]$-stable.
The subspace $\epsilon R = \mathbb{F}_\ell\epsilon f_1 \oplus \mathbb{F}_\ell\epsilon f_2$ is isotropic because
$$\langle \epsilon f_1, \epsilon f_2 \rangle_\ell = \langle f_1, \epsilon^2 f_2 \rangle_\ell = 0.$$
Together with Lemma~\ref{lemma:realOrbitIsotropic}, we conclude that all $\ell^2 + \ell + 1$ of these $\mathbb{F}_\ell[\epsilon]$-stable subspaces are maximal isotropic.
From Lemma~\ref{lemma:countingMaxIso}, $R$ contains a total of $\ell^{3} + \ell^{2} + \ell + 1$ maximal isotropic subspaces.
Thus, the $(\ell,\ell)$-neighbors corresponding to the remaining $\ell^3$ subspaces are not stable under the action of $\mathfrak{o}_n$. They are however stable under the action of $\mathfrak{o}_{n+1} = \mathbb{Z}_\ell + \ell\mathfrak{o}_n$, since $\ell\mathfrak{o}_n\Lambda \subset \ell\Lambda$ is contained in every neighbor; so those are $\mathfrak{o}_{n+1}$-lattices.
It remains to prove that among the $\ell^{2} + \ell + 1$ neighbors that are $\mathfrak{o}_n$-stable, only the lattice $\ell \mathfrak{o}_{n-1}\Lambda$ (which
corresponds to the subspace $\epsilon R$) is $\mathfrak{o}_{n-1}$-stable, and that it is not $\mathfrak{o}_{n-2}$-stable. This would prove that
$\ell \mathfrak{o}_{n-1}\Lambda$ is an $\mathfrak{o}_{n-1}$-lattice, and the $\ell^2 + \ell$ other lattices have order $\mathfrak{o}_{n}$.
Write $\Gamma = \ell \mathfrak{o}_{n-1}\Lambda$. Then $\pi(\Gamma) = \epsilon R$ is maximal isotropic and $\mathbb{F}_\ell[\epsilon]$-stable.
Suppose by contradiction that we have
$\mathfrak{o}_{n-2} \Gamma \subset \Gamma$. Then,
$\ell \mathfrak{o}_{n-2}\Lambda \subset \mathfrak{o}_{n-2}\Gamma \subset \Gamma \subset \Lambda,$
so $\ell \mathfrak{o}_{n-2}\Lambda \subset \Lambda$. But $\ell \mathfrak{o}_{n-2} \not \subset \mathfrak{o}_{n}$, which contradicts the fact that $\Lambda$ is an $\mathfrak{o}_n$-lattice. Therefore $\Gamma$ is an $\mathfrak{o}_{n-1}$-lattice.
Let $H \subset R$ be another maximal isotropic subspace, and suppose that $\pi^{-1}(H)$ is $\mathfrak{o}_{n-1}$-stable.
Let $\lambda = e_1(a+\ell^nx) + e_2(b+\ell^ny) \in \pi^{-1}(H)$, with $a,b \in \mathbb{Z}_\ell$ and $x,y \in \mathfrak{o}_0$, and let $z \in \mathfrak{o}_{n-1}$. A simple computation yields
$$z \lambda + \Lambda = z ae_1 + z be_2 + \Lambda.$$
Since $z\lambda \in \pi^{-1}(H) \subset \Lambda$, both $z a$ and $z b$ must be in $\mathfrak{o}_n$ for any $z \in \mathfrak{o}_{n-1}$. It follows that $a$ and $b$ must be in $\ell\mathbb{Z}_\ell$, whence
$\lambda \in \Gamma$. So $\pi^{-1}(H) \subset \Gamma$, and we conclude that $H = \epsilon R$ from the fact that both are maximal
isotropic. This proves that no $(\ell,\ell)$-neighbor other than $\Gamma$ is $\mathfrak{o}_{n-1}$-stable.
\qed
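The subspace counts used in this proof ($\ell^3+\ell^2+\ell+1$ maximal isotropic planes in $R$, of which $\ell^2+\ell+1$ are $\mathbb{F}_\ell[\epsilon]$-stable) are small enough to verify by brute force. The following sketch is our own illustration, not part of the text: it models $R$ for $\ell = 3$ in the basis $(f_1, \epsilon f_1, f_2, \epsilon f_2)$, with one particular choice of balanced symplectic form.

```python
from itertools import product

l = 3  # a small odd prime, for illustration only

# Gram matrix of a symplectic, F_l[eps]-balanced form on R = Lambda/ell*Lambda,
# in the F_l-basis (f1, eps*f1, f2, eps*f2): <f1, eps*f2> = <eps*f1, f2> = 1.
G = [[0, 0, 0, 1],
     [0, 0, 1, 0],
     [0, -1, 0, 0],
     [-1, 0, 0, 0]]

def form(u, v):
    return sum(u[i] * G[i][j] * v[j] for i in range(4) for j in range(4)) % l

def eps(v):  # multiplication by eps: f_i -> eps*f_i -> 0
    return (0, v[0], 0, v[2])

vecs = list(product(range(l), repeat=4))
nonzero = [v for v in vecs if any(v)]
# one representative per line: first nonzero coordinate normalized to 1
monic = [v for v in nonzero if next(c for c in v if c) == 1]

def span(u, v):  # the set of all F_l-linear combinations of u and v
    return frozenset(tuple((a * u[i] + b * v[i]) % l for i in range(4))
                     for a in range(l) for b in range(l))

planes = set()
for u in monic:
    for v in nonzero:
        P = span(u, v)
        if len(P) == l * l:  # keep only 2-dimensional spans
            planes.add(P)

iso = {P for P in planes if all(form(u, v) == 0 for u in P for v in P)}
stable = {P for P in planes if all(eps(v) in P for v in P)}

print(len(iso), len(stable), stable <= iso)
```

Running it prints `40 13 True`, matching $\ell^3+\ell^2+\ell+1 = 40$ and $\ell^2+\ell+1 = 13$, and confirming that every $\mathbb{F}_\ell[\epsilon]$-stable plane is maximal isotropic (Lemma~\ref{lemma:realOrbitIsotropic}).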
\subsection{Changing the real multiplication with ${(\ell,\ell)}$-isogenies}\label{subsec:ellellLevelsRM}
We now apply these lattice results to analyze how ${(\ell,\ell)}$-isogenies can change the real multiplication.
Fix a principally polarizable absolutely simple ordinary abelian surface $\mathscr A$ over $\mathbb{F}_q$. As usual, $K$ is its endomorphism algebra, and $K_0$ the maximal real subfield of $K$.
The local real order $\mathfrak{o}_0(\mathscr A)$ of $\mathscr A$ is of the form $\mathfrak{o}_n = \mathbb{Z}_\ell + \ell^n\mathfrak{o}_0$ for some non-negative integer $n$.
\begin{definition}
Let $\varphi: \mathscr A \rightarrow \mathscr B$ be an isogeny.
If $\mathfrak{o}_0(\mathscr A) \subsetneq \mathfrak{o}_0(\mathscr B)$, we say that $\varphi$ is an \emph{RM-ascending} isogeny; if $\mathfrak{o}_0(\mathscr B) \subsetneq \mathfrak{o}_0(\mathscr A)$, we say that it is \emph{RM-descending}; otherwise $\mathfrak{o}_0(\mathscr A) = \mathfrak{o}_0(\mathscr B)$ and we say that it is \emph{RM-horizontal}.
\end{definition}
\subsection*{Proof of Theorem~\ref{RMupIso}}
Theorem~\ref{RMupIso} follows from Proposition~\ref{prop:neighborsLevelsRM} together with Remark~\ref{rem:correspEllEllAndEllEll}, and the observation that the $\mathfrak{o}_{n-1}$-lattice $\ell \mathfrak{o}_{n-1}\Lambda$ has order $\mathfrak{o}_{n-1}\cdot\mathfrak{o}(\Lambda)$.
\qed\\
In the following, we show that some structure of the graphs of horizontal isogenies at any level can be inferred from the structure at the maximal level: indeed, there is a graph homomorphism from any non-maximal level to the level above.
\begin{definition}[RM-predecessor]
Suppose $\mathfrak{o}_0(\mathscr A) = \mathfrak{o}_n$ with $n > 0$. Note that the kernel $\kappa$ of the unique RM-ascending isogeny of Theorem~\ref{RMupIso} is given by $(\mathfrak{o}_{n-1}T_\ell\mathscr A)/T_\ell\mathscr A$ (via Proposition~\ref{prop:correspondence}) and does not depend on the polarization.
The \emph{RM-predecessor} of $\mathscr A$ is the variety $\mathrm{pr}(\mathscr A) = \mathscr A / \kappa$, and we denote by $\mathrm{up}_\mathscr A : \mathscr A \rightarrow \mathscr A / \kappa$ the canonical projection.
If $\xi$ is a principal polarization on $\mathscr A$, let $\mathrm{pr}(\xi)$ be the unique principal polarization induced by $\xi$ via $\mathrm{up}_\mathscr A$.
\end{definition}
\begin{proposition}\label{prop:liftingIsogeniesUp}
Suppose $n > 0$. For any principal polarization $\xi$ on $\mathscr A$, and any RM-horizontal
${(\ell,\ell)}$-isogeny $\varphi : \mathscr A \rightarrow \mathscr B$ with respect to $\xi$, there is an ${(\ell,\ell)}$-isogeny $\tilde{\varphi} : \mathrm{pr}(\mathscr A) \rightarrow \mathrm{pr}(\mathscr B)$ with respect to $\mathrm{pr}(\xi)$ such that the following diagram commutes:
\begin{equation*}
\xymatrix{
\mathrm{pr}(\mathscr A) \ar[r]^{\tilde\varphi} & \mathrm{pr}(\mathscr B) \\
\mathscr A \ar[u]^{\mathrm{up}_\mathscr A} \ar[r]^{\varphi} & \mathscr B \ar[u]_{\mathrm{up}_\mathscr B}.
}
\label{eq:ppav}
\end{equation*}
\end{proposition}
\begin{proof}
This follows from the fact that if $\Lambda$ is an $\mathfrak{o}_n$-lattice, and $\Gamma\in\mathscr L(\Lambda)$ is an ${(\ell,\ell)}$-neighbor of $\Lambda$, then $\ell\mathfrak{o}_{n-1}\Gamma \in \mathscr L(\ell\mathfrak{o}_{n-1}\Lambda)$.
\end{proof}
\section{${(\ell,\ell)}$-isogenies preserving the real multiplication}
\subsection{${(\ell,\ell)}$-neighbors and $\mathfrak l$-neighbors}
Let $\mathscr L_0(\Lambda)$ be the set of $(\ell,\ell)$-neighbors of the lattice $\Lambda$ with maximal real multiplication. These neighbors will be analysed through $\mathfrak l$-neighbors, for $\mathfrak l$ a prime ideal in $\mathfrak{o}_0$.
This will allow us to account for the possible splitting behaviors of $\ell$.
The relation between the set $\mathscr L_0(\Lambda)$ and the sets $\mathscr L_\mathfrak l(\Lambda)$ is given by the following proposition, proved case by case in the next three subsections as Propositions~\ref{prop:L0LambdaInert}, \ref{prop:L0LambdaSplit} and \ref{prop:L0LambdaRamif}:
\begin{proposition}\label{prop:classificationRMPreservingNeighbors}
Let $\Lambda$ be a lattice with maximal real multiplication. The set of $(\ell,\ell)$-neighbors with maximal real multiplication is
\[
\mathscr L_0(\Lambda) = \left\{ \begin{array}{ll}
\mathscr L_{\ell\mathfrak{o}_0}(\Lambda) & \mbox{if $\ell$ is inert in $K_0$},\\
\mathscr L_{\mathfrak l_1}[\mathscr L_{\mathfrak l_2}(\Lambda)] = \mathscr L_{\mathfrak l_2}[\mathscr L_{\mathfrak l_1}(\Lambda)] & \mbox{if $\ell$ splits as $\mathfrak l_1\mathfrak l_2$ in $K_0$},\\
\mathscr L_{\mathfrak l}[\mathscr L_{\mathfrak l}(\Lambda)] & \mbox{if $\ell$ ramifies as $\mathfrak l^2$ in $K_0$}.\end{array} \right.
\]
\end{proposition}
\subsubsection{The inert case} Suppose that $\ell$ is inert in $K_0$. Then, $\ell\mathcal{O}_{K_0}$ is the unique prime ideal of $K_0$ above $\ell$. The orders in $K_\ell$ with maximal real multiplication are exactly the orders $\mathfrak{o}_{\ell^n\mathfrak{o}_0} = \mathfrak{o}_0 + \ell^n \mathfrak{o}_{K}$.
\begin{proposition}\label{prop:L0LambdaInert}
Let $\Lambda$ be a lattice with maximal real multiplication. If $\ell$ is inert in $K_0$, the set of $(\ell,\ell)$-neighbors with maximal real multiplication is $$\mathscr L_0(\Lambda) = \mathscr L_{\ell\mathfrak{o}_0}(\Lambda).$$
\end{proposition}
\begin{proof}
Since $\mathfrak{o}_0/\ell\mathfrak{o}_0 \cong \mathbb{F}_{\ell^2}$, $\Lambda/\ell\Lambda$ is a free $\mathfrak{o}(\Lambda)/\ell \mathfrak{o}(\Lambda)$-module of rank 1. In particular, it is a vector space over $\mathbb{F}_{\ell^2}$ of dimension 2, so the $\mathfrak{o}_{0}$-stable maximal isotropic subspaces of $\Lambda/\ell\Lambda$ are $\mathbb{F}_{\ell^2}$-lines. Since any $\mathbb{F}_{\ell^2}$-line is isotropic,
$\mathscr L_{\ell\mathfrak{o}_0}(\Lambda)$ is precisely the set of $(\ell,\ell)$-neighbors preserving the maximal real multiplication.
\end{proof}
\begin{remark}
The structure of $\mathscr L_0(\Lambda)$ is then fully described by Proposition~\ref{prop:genericNeighbors}, with $\mathfrak l = \ell \mathfrak{o}_0$, and $N\mathfrak l = \ell^2$. In particular, $\mathscr L(\Lambda)$ consists
of $\ell^2 + 1$ neighbors with maximal real multiplication, and $\ell^3 + \ell$ with real multiplication by $\mathfrak{o}_1$.
\end{remark}
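The counts in this remark can likewise be checked numerically. A minimal sketch of ours for $\ell = 2$, with our own choice of model: $\mathbb{F}_4 = \mathbb{F}_2[x]/(x^2+x+1)$, $\Lambda/\ell\Lambda \cong \mathbb{F}_4^2$, and the balanced alternating form $t(u_1v_2 + u_2v_1)$, where $t$ picks out the coefficient of $x$.

```python
from itertools import product

l = 2  # smallest example; F_4 = F_2[x]/(x^2 + x + 1)

def fmul(a, b):
    # multiply a = (a0, a1) ~ a0 + a1*x and b in F_4, using x^2 = x + 1
    c0 = (a[0] * b[0] + a[1] * b[1]) % 2
    c1 = (a[0] * b[1] + a[1] * b[0] + a[1] * b[1]) % 2
    return (c0, c1)

X = (0, 1)  # the generator x of F_4 over F_2

# V = F_4^2, flattened to F_2^4 as (u0, u1, v0, v1) for (u0+u1*x, v0+v1*x).
def form(u, v):
    w = fmul((u[0], u[1]), (v[2], v[3]))
    z = fmul((u[2], u[3]), (v[0], v[1]))
    return (w[1] + z[1]) % 2   # minus is plus in characteristic 2

def xact(v):  # multiplication by x in F_4, coordinate-wise on F_4^2
    a, b = fmul(X, (v[0], v[1])), fmul(X, (v[2], v[3]))
    return (a[0], a[1], b[0], b[1])

vecs = list(product(range(2), repeat=4))
nonzero = [v for v in vecs if any(v)]

def span(u, v):
    return frozenset(tuple((a * u[i] + b * v[i]) % 2 for i in range(4))
                     for a in range(2) for b in range(2))

planes = set()
for u in nonzero:
    for v in nonzero:
        P = span(u, v)
        if len(P) == 4:
            planes.add(P)

iso = {P for P in planes if all(form(u, v) == 0 for u in P for v in P)}
stable = {P for P in planes if all(xact(v) in P for v in P)}  # the F_4-lines

print(len(iso), len(stable), stable <= iso)
```

It prints `15 5 True`: of the $\ell^3+\ell^2+\ell+1 = 15$ maximal isotropic planes, exactly $\ell^2+1 = 5$ are $\mathbb{F}_4$-lines, leaving $\ell^3+\ell = 10$ neighbors with real multiplication by $\mathfrak{o}_1$.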
\subsubsection{The split case} Suppose that $\ell$ splits in $K_0$ as $\ell\mathcal{O}_{K_0} = \mathfrak l_1 \mathfrak l_2$.
The orders in $K_\ell$ with maximal real multiplication are exactly the orders $\mathfrak{o}_{\mathfrak f} = \mathfrak{o}_0 + \mathfrak f \mathfrak{o}_{K}$, where $\mathfrak f = \mathfrak l_1^m \mathfrak l_2^n$ for any non-negative integers $m$ and $n$.
\begin{lemma}\label{lemma:decompositionLambda}
Suppose $\Lambda$ has maximal real multiplication. Then, we have the orthogonal decomposition
$\Lambda/\ell\Lambda = (\mathfrak l_1\Lambda/\ell\Lambda) \perp (\mathfrak l_2\Lambda/\ell\Lambda).$
\end{lemma}
\begin{proof}
Let $\mathfrak{o} = \mathfrak{o}(\Lambda)$. Since $\mathfrak l_1$ and $\mathfrak l_2$ are coprime and $\mathfrak l_1\mathfrak l_2 = \ell\mathfrak{o}_0$, the quotient $\mathfrak{o}/\ell\mathfrak{o}$ splits as $\mathfrak l_1\mathfrak{o}/\ell\mathfrak{o} \oplus \mathfrak l_2\mathfrak{o}/\ell\mathfrak{o}.$
It follows that $\Lambda/\ell\Lambda = (\mathfrak l_1\Lambda/\ell\Lambda) \oplus (\mathfrak l_2\Lambda/\ell\Lambda)$. Furthermore,
$\langle \mathfrak l_1\Lambda, \mathfrak l_2\Lambda\rangle = \langle \Lambda, \mathfrak l_1\mathfrak l_2\Lambda\rangle = \langle \Lambda, \ell\Lambda\rangle \subset \ell\mathbb{Z}_\ell,$
so $\mathfrak l_1\Lambda/\ell\Lambda \subset (\mathfrak l_2\Lambda/\ell\Lambda)^\perp$. The last inclusion is also an equality because both $\mathfrak l_1\Lambda/\ell\Lambda$ and $\mathfrak l_2\Lambda/\ell\Lambda$ have dimension 2.
\end{proof}
\begin{lemma}\label{lemma:caracSplitIso}
Suppose $\Lambda$ has maximal real multiplication. An $(\ell,\ell)$-neighbor $\Gamma \in \mathscr L(\Lambda)$ has maximal real multiplication if and only if there exist $\Gamma_1 \in \mathscr L_{\mathfrak l_1}(\Lambda)$ and $\Gamma_2 \in \mathscr L_{\mathfrak l_2}(\Lambda)$ such that $\Gamma = \mathfrak l_2\Gamma_1 + \mathfrak l_1\Gamma_2$.
\end{lemma}
\begin{proof}
First, let $\Gamma \in \mathscr L(\Lambda)$ be an $(\ell,\ell)$-neighbor with maximal real multiplication. Defining $\Gamma_i = \Gamma + \mathfrak l_i\Lambda$, we then have
$$\mathfrak l_2 \Gamma_1 + \mathfrak l_1 \Gamma_2 = (\mathfrak l_1 + \mathfrak l_2)\Gamma + \ell\Lambda = \mathfrak{o}_0\Gamma + \ell\Lambda = \Gamma.$$
By contradiction, suppose $\Gamma_i \not \in \mathscr L_{\mathfrak l_i}(\Lambda)$. Then, $\Gamma_i$ is either $\Lambda$ or $\mathfrak l_i\Lambda$. Suppose first that $\Gamma_i = \mathfrak l_i\Lambda$. Then $\Gamma \subset \mathfrak l_i\Lambda$, and even $\Gamma = \mathfrak l_i\Lambda$ since $[\Lambda : \Gamma] = [\Lambda : \mathfrak l_i\Lambda] = \ell^2$. But the orthogonal decomposition of Lemma~\ref{lemma:decompositionLambda} implies that $\mathfrak l_i\Lambda/\ell\Lambda$ is not isotropic, contradicting the fact that $\Gamma \in \mathscr L(\Lambda)$. Now suppose $\Gamma_i = \Lambda$. Since $\Gamma$ is $\mathfrak{o}_0$-stable, $\Gamma/\ell\Lambda$ decomposes as the direct sum of its intersections with $\mathfrak l_1\Lambda/\ell\Lambda$ and $\mathfrak l_2\Lambda/\ell\Lambda$; each summand, being isotropic in a nondegenerate symplectic plane, has dimension at most 1, hence exactly 1. Then $\Gamma_i = \Gamma + \mathfrak l_i\Lambda$ has index $\ell$ in $\Lambda$, contradicting $\Gamma_i = \Lambda$.
For the converse, suppose $\Gamma = \mathfrak l_2\Gamma_1 + \mathfrak l_1\Gamma_2$ for some $\Gamma_1 \in \mathscr L_{\mathfrak l_1}(\Lambda)$ and $\Gamma_2 \in \mathscr L_{\mathfrak l_2}(\Lambda)$. Then $\Gamma/\ell\Lambda$ is of dimension 2, so it suffices to prove that it is isotropic. Each summand $\mathfrak l_i\Gamma_j$ is isotropic, because it is of dimension 1, and Lemma~\ref{lemma:decompositionLambda} implies that $\mathfrak l_2\Gamma_1$ and $\mathfrak l_1\Gamma_2$ are orthogonal, so their sum $\Gamma$ is isotropic.
\end{proof}
\begin{proposition}\label{prop:L0LambdaSplit}
Suppose $\Lambda$ has maximal real multiplication. If $\ell$ splits in $K_0$ as $\ell\mathfrak{o}_0 = \mathfrak l_1 \mathfrak l_2$, the set of $(\ell,\ell)$-neighbors of $\Lambda$ with maximal real multiplication is $$\mathscr L_0(\Lambda) = \mathscr L_{\mathfrak l_1}[\mathscr L_{\mathfrak l_2}(\Lambda)] = \mathscr L_{\mathfrak l_2}[\mathscr L_{\mathfrak l_1}(\Lambda)].$$
\end{proposition}
\begin{proof}
For any $\Gamma_1 \in \mathscr L_{\mathfrak l_1}(\Lambda)$ and $\Gamma_2 \in \mathscr L_{\mathfrak l_2}(\Lambda)$, we have that $\mathfrak l_2\Gamma_1 + \mathfrak l_1\Gamma_2 \in \mathscr L_{\mathfrak l_2}(\Gamma_1)$ and $\mathfrak l_2\Gamma_1 + \mathfrak l_1\Gamma_2 \in \mathscr L_{\mathfrak l_1}(\Gamma_2)$. This proposition is thus a consequence of Lemma~\ref{lemma:caracSplitIso}.
\end{proof}
\begin{remark}
When $\ell$ splits in $K_0$, $\mathscr L_0(\Lambda)$ is then of size $\ell^2 + 2\ell + 1$, and the $\ell^3 - \ell$ other $(\ell,\ell)$-neighbors have real order $\mathfrak{o}_1$.
\end{remark}
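This count can also be checked in the linear model of Lemma~\ref{lemma:decompositionLambda}. The sketch below is an illustration of ours for $\ell = 3$: it takes $\Lambda/\ell\Lambda = V_1 \perp V_2$ with standard symplectic planes $V_i$, and lets the idempotent $(1,0)$ of $\mathbb{F}_\ell\times\mathbb{F}_\ell$ act as the projection onto $V_1$.

```python
from itertools import product

l = 3  # small example; o_0/ell*o_0 = F_l x F_l in the split case

# Lambda/ell*Lambda = V1 ⊥ V2 in the basis (a1, b1, a2, b2),
# with <a1, b1> = <a2, b2> = 1 and V1 orthogonal to V2.
G = [[0, 1, 0, 0],
     [-1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, -1, 0]]

def form(u, v):
    return sum(u[i] * G[i][j] * v[j] for i in range(4) for j in range(4)) % l

def pi1(v):  # the idempotent (1, 0) of F_l x F_l: projection onto V1
    return (v[0], v[1], 0, 0)

vecs = list(product(range(l), repeat=4))
nonzero = [v for v in vecs if any(v)]
monic = [v for v in nonzero if next(c for c in v if c) == 1]

def span(u, v):
    return frozenset(tuple((a * u[i] + b * v[i]) % l for i in range(4))
                     for a in range(l) for b in range(l))

planes = set()
for u in monic:
    for v in nonzero:
        P = span(u, v)
        if len(P) == l * l:
            planes.add(P)

iso = {P for P in planes if all(form(u, v) == 0 for u in P for v in P)}
# stability under pi1 implies stability under all of F_l x F_l
stable_iso = {P for P in iso if all(pi1(v) in P for v in P)}

print(len(iso), len(stable_iso))
```

It prints `40 16`: of the $40$ maximal isotropic planes, $(\ell+1)^2 = \ell^2+2\ell+1 = 16$ are stable (a line from each factor, as in Lemma~\ref{lemma:caracSplitIso}), leaving $\ell^3-\ell = 24$ neighbors with real order $\mathfrak{o}_1$.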
\subsubsection{The ramified case} Suppose that $\ell$ ramifies in $K_0$ as $\ell\mathcal{O}_{K_0} = \mathfrak l^2$. Then, $\mathfrak{o}_0/\ell\mathfrak{o}_0$ is isomorphic to $\mathbb{F}_\ell[\epsilon]$ with $\epsilon^2 = 0$. The orders in $K_\ell$ with maximal real multiplication are exactly the orders $\mathfrak{o}_{\mathfrak l^n} = \mathfrak{o}_0 + \mathfrak l^n \mathfrak{o}_{K}$.
\begin{proposition}\label{prop:L0LambdaRamif}
Suppose $\Lambda$ has maximal real multiplication. If $\ell$ ramifies in $K_0$ as $\ell\mathfrak{o}_0 = \mathfrak l^2$, the set of $(\ell,\ell)$-neighbors of $\Lambda$ with maximal real multiplication is $$\mathscr L_0(\Lambda) = \mathscr L_{\mathfrak l}[\mathscr L_{\mathfrak l}(\Lambda)].$$
\end{proposition}
\begin{proof}
Let $\Gamma \in \mathscr L_0(\Lambda)$. First, if $\Gamma = \mathfrak l \Lambda$, observe that for any $\Pi \in \mathscr L_\mathfrak l(\Lambda)$, we have $\mathfrak l \Lambda \in \mathscr L_\mathfrak l(\Pi)$, and therefore $\Gamma \in \mathscr L_{\mathfrak l}[\mathscr L_{\mathfrak l}(\Lambda)]$. We can now safely suppose $\Gamma \neq \mathfrak l \Lambda$. Let $\Pi = \Gamma + \mathfrak l \Lambda$. We have the sequence of inclusions
$$\ell\Lambda \subset \mathfrak l \Pi \subset \Gamma \subsetneq \Pi \subset \Lambda.$$
By contradiction, suppose $\Pi = \Lambda$. Then, $\Gamma \cap \mathfrak l \Lambda = \ell\Lambda$. Since $\mathfrak l \Gamma \subset \Gamma \cap \mathfrak l \Lambda = \ell\Lambda$, it follows that
$\mathfrak l \Lambda = \mathfrak l \Pi = \mathfrak l \Gamma + \ell \Lambda \subset \ell \Lambda,$
a contradiction. Therefore $\Gamma \subsetneq \Pi \subsetneq \Lambda$, and each inclusion must be of index $\ell$. Then, $\Gamma \in \mathscr L_\mathfrak l (\Pi) \subset \mathscr L_\mathfrak l [\mathscr L_\mathfrak l (\Lambda)]$.
Let us now prove that $\mathscr L_{\mathfrak l}[\mathscr L_{\mathfrak l}(\Lambda)] \subset \mathscr L_0(\Lambda)$. Let $\Pi \in \mathscr L_{\mathfrak l}(\Lambda)$ and $\Gamma \in \mathscr L_{\mathfrak l}(\Pi)$. We have the sequence of inclusions
$$\ell\Lambda = \mathfrak l (\mathfrak l \Lambda) \subset_\ell \mathfrak l \Pi \subset_\ell \Gamma \subset_\ell \Pi \subset_\ell \Lambda,$$
where $\subset_\ell$ means that the first lattice is of index $\ell$ in the second. Therefore $\ell\Lambda \subset \Gamma \subset \Lambda$, and $\Gamma / \ell \Lambda$ is of dimension 2 over $\mathbb{F}_\ell$.
Since $\Pi / \mathfrak l \Lambda$ is a line, there is an element $\pi \in \Pi$ such that $\Pi = \mathbb{Z}_\ell \pi + \mathfrak l \Lambda$. Similarly, $\Gamma / \mathfrak l \Pi$ is a line, so there is an element
$\gamma \in \Gamma$ such that $\Gamma = \mathbb{Z}_\ell\gamma + \mathfrak l \pi + \ell \Lambda$. Therefore, writing $x = \gamma + \ell\Lambda$ and $y = \pi + \ell\Lambda$, $\Gamma / \ell \Lambda$ is generated as an $\mathbb{F}_\ell$-vector space by $x$ and
$\epsilon y$. Since $\gamma \in \Gamma \subset \Pi = \mathbb{Z}_\ell \pi + \mathfrak l \Lambda$, there exist $a\in\mathbb{Z}_\ell$ and $z \in \Lambda/\ell\Lambda$ such that $x = ay + \epsilon z$. Then,
$$\langle x, \epsilon y \rangle_\ell = \langle a y, \epsilon y \rangle_\ell + \langle \epsilon z, \epsilon y \rangle_\ell = a \langle y, \epsilon y \rangle_\ell + \langle z, \epsilon^2 y \rangle_\ell = 0,$$
where the last equality uses Lemma~\ref{lemma:realOrbitIsotropic} and the fact that $\epsilon^2 = 0$. So $\Gamma/\ell\Lambda$ is maximal isotropic, and $\Gamma \in \mathscr L(\Lambda)$. Furthermore, $\epsilon x = a\epsilon y$ and $\epsilon \cdot \epsilon y = 0$ both lie in $\Gamma/\ell\Lambda$, so the latter is $\mathbb{F}_\ell[\epsilon]$-stable, and $\Gamma$ is $\mathfrak{o}_0$-stable. This proves that $\Gamma \in \mathscr L_0(\Lambda)$.
\end{proof}
\begin{remark}\label{rem:numberofEllEllinRamifCase}
We can deduce from Lemma~\ref{lemma:subspacesEpsilon} that $|\mathscr L_0(\Lambda)| = \ell^2 + \ell + 1$. In fact, for any two distinct lattices $\Pi_1, \Pi_2 \in \mathscr L_{\mathfrak l}(\Lambda)$, we have $\mathscr L_{\mathfrak l}(\Pi_1) \cap \mathscr L_{\mathfrak l}(\Pi_2) = \{\mathfrak l \Lambda\}$.
\end{remark}
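Both the proposition and this remark can be illustrated in the linear $\mathbb{F}_\ell[\epsilon]$-model of $R = \Lambda/\ell\Lambda$ (basis $(f_1,\epsilon f_1,f_2,\epsilon f_2)$). Under our modeling assumptions, which are not spelled out in the text, $\mathfrak l$-neighbors of $\Lambda$ correspond to $\epsilon$-stable hyperplanes $W$ of $R$, and $\mathscr L_\mathfrak l(\Pi)$ to the planes between $\epsilon(W)$ and $W$; the sketch below checks, for $\ell = 3$, that the $\ell+1$ hyperplanes each carry $\ell+1$ planes, that their union has $\ell^2+\ell+1$ elements, and that any two distinct families meet exactly in $\epsilon R$ (the image of $\mathfrak l\Lambda$).

```python
from itertools import product

l = 3  # small illustration; all names are ours

vecs = list(product(range(l), repeat=4))

def eps(v):  # F_l[eps]-action on R in the basis (f1, eps*f1, f2, eps*f2)
    return (0, v[0], 0, v[2])

def span(gens):  # the F_l-span of a list of vectors
    S = {(0, 0, 0, 0)}
    for g in gens:
        S = {tuple((s[i] + c * g[i]) % l for i in range(4))
             for s in S for c in range(l)}
    return frozenset(S)

eps_R = span([(0, 1, 0, 0), (0, 0, 0, 1)])  # the plane eps*R, image of l*Lambda

nonzero = [v for v in vecs if any(v)]
monic = [v for v in nonzero if next(c for c in v if c) == 1]

# l-neighbors of Lambda <-> eps-stable hyperplanes W of R
walls = [frozenset(v for v in vecs
                   if sum(p * c for p, c in zip(phi, v)) % l == 0)
         for phi in monic]
stable_walls = [W for W in walls if all(eps(v) in W for v in W)]

# l-neighbors of Pi <-> planes P with eps(W) <= P <= W
neighbors = {}
for W in stable_walls:
    eW = span([eps(v) for v in W])  # the line l*Pi / ell*Lambda
    neighbors[W] = {span(list(eW) + [x]) for x in W} - {eW}

union = set().union(*neighbors.values())
pairs_ok = all(neighbors[A] & neighbors[B] == {eps_R}
               for A in stable_walls for B in stable_walls if A != B)

print(len(stable_walls), len(union), pairs_ok, eps_R in union)
```

It prints `4 13 True True`, in agreement with $|\mathscr L_\mathfrak l(\Lambda)| = \ell+1 = 4$, $|\mathscr L_0(\Lambda)| = \ell^2+\ell+1 = 13$, and the pairwise-intersection claim of the remark.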
\subsection{Locally maximal real multiplication and ${(\ell,\ell)}$-isogenies}\label{subsec:ellellMaxRM}
Fix again a principally polarizable absolutely simple ordinary abelian surface $\mathscr A$ over $\mathbb{F}_q$, with endomorphism algebra $K$, and $K_0$ the maximal real subfield of $K$.
Now suppose that $\mathscr A$ has locally maximal real multiplication at $\ell$. Recall from Theorem~\ref{thm:classificationOrdersMaxRM} that any such locally maximal real order is of the form $\mathfrak{o}_\mathfrak f = \mathfrak{o}_0 + \mathfrak f \mathfrak{o}_K$, for some $\mathfrak{o}_0$-ideal $\mathfrak f$.
The structure of $\mathfrak l$-isogeny graphs as described by Theorem~\ref{thm:lisogenyvolcanoes} can be used to describe graphs of ${(\ell,\ell)}$-isogenies preserving the real multiplication, via Theorem~\ref{thm:ellellLCombinations}.
\subsection*{Proof of Theorem~\ref{thm:ellellLCombinations}}
This theorem is a direct consequence of Proposition~\ref{prop:classificationRMPreservingNeighbors} translated to the world of isogenies via Remark~\ref{rem:correspEllEllAndEllEll}.
\qed
\begin{remark}\label{rem:RMPreservingEllEllDoNotDependOnPol}
Note that in particular, Theorem~\ref{thm:ellellLCombinations} implies that the kernels of the ${(\ell,\ell)}$-isogenies $\mathscr A \rightarrow \mathscr B$ preserving the real multiplication do not depend on the choice of a polarization $\xi$ on $\mathscr A$.
\end{remark}
\subsubsection{The inert and ramified cases}
Combining Theorem~\ref{thm:lisogenyvolcanoes} and Theorem~\ref{thm:ellellLCombinations} allows us to describe the graph of ${(\ell,\ell)}$-isogenies with maximal local real multiplication at $\ell$.
For ease of exposition, we assume from now on that the primitive quartic CM-field $K$ is different from $\mathbb{Q}(\zeta_5)$, but the structure for $\mathbb{Q}(\zeta_5)$ can be deduced in the same way (bearing in mind that in that case, $\mathcal{O}_{K_0}^\times$ is of index 5 in $\mathcal{O}_{K}^\times$).
Let $\mathscr A$ be any principally polarizable abelian variety with order $\mathcal{O}$, with maximal real multiplication locally at $\ell$.
When $\ell$ is inert in $K_0$, the connected component of $\mathscr A$ in the ${(\ell,\ell)}$-isogeny graph (again, for maximal local real multiplication) is exactly the volcano $\mathscr V_\mathfrak l(\mathcal{O})$ (see Notation~\ref{notation:lVolcano}).
When $\ell$ ramifies as $\mathfrak l^2$ in $K_0$,
the connected component of $\mathscr A$ in the graph of $\mathfrak l$-isogenies is isomorphic to the graph $\mathscr V_\mathfrak l(\mathcal{O})$, and the graph of ${(\ell,\ell)}$-isogenies
can be constructed from it as follows: on the same set of vertices, add an edge in the ${(\ell,\ell)}$-graph between $\mathscr B$ and $\mathscr C$ for each path of length 2 between $\mathscr B$ and $\mathscr C$ in the $\mathfrak l$-volcano. Each vertex $\mathscr B$ now has $\ell^2 + 2\ell + 1$ outgoing edges, while there are only $\ell^2 + \ell + 1$ possible kernels of RM-preserving ${(\ell,\ell)}$-isogenies (see Remark~\ref{rem:numberofEllEllinRamifCase}). This is because the edge corresponding to the canonical projection $\mathscr B \rightarrow \mathscr B / \mathscr B[\mathfrak l]$ has been accounted for $\ell + 1$ times. Remove $\ell$ of these copies, and the result is exactly the graph of ${(\ell,\ell)}$-isogenies.
\begin{example}
Suppose $\ell = 2$ ramifies in $K_0$ as $\mathfrak l^2$, and $\mathfrak l$ is principal in $\mathcal{O}_{K_0}$. Suppose further that $\mathfrak l$ splits in $K$ into two prime ideals of order $4$ in $\mathrm{Cl}(\mathcal{O}_K)$.
Then, the first four levels of any connected component of the ${(\ell,\ell)}$-isogeny graph for which the largest order is $\mathcal{O}_K$ are isomorphic to the graph of Figure~\ref{fig:exampleOfEllEllRamif}. The underlying $\mathfrak l$-isogeny volcano is represented with dotted nodes and edges. Since $\mathfrak l$ is principal in $\mathcal{O}_{K_0}$, it is an undirected graph, and we represent it as such. The level 0, i.e., the surface of the volcano, is the dotted cycle of length 4 at the center. The circles have order $\mathcal{O}_{K}$, the squares have order $\mathcal{O}_{K_0} + \mathfrak l \mathcal{O}_{K}$, the diamonds $\mathcal{O}_{K_0} + \ell \mathcal{O}_{K}$, and the triangles $\mathcal{O}_{K_0} + \mathfrak l^3 \mathcal{O}_{K}$.
\end{example}
\begin{figure}
\caption{\label{fig:exampleOfEllEllRamif}The first four levels of the ${(\ell,\ell)}$-isogeny graph of the example, with the underlying $\mathfrak l$-isogeny volcano represented by dotted nodes and edges.}
\end{figure}
\subsubsection{The split case}
For simplicity, suppose again that the primitive quartic CM-field $K$ is different from $\mathbb{Q}(\zeta_5)$.
Let $\mathscr A$ be any principally polarizable abelian variety with order $\mathcal{O}$, with maximal real multiplication locally at $\ell$.
The situation when $\ell$ splits as $\mathfrak l_1\mathfrak l_2$ in $K_0$ (with $\mathfrak l_1$ and $\mathfrak l_2$ principal in $\mathcal{O} \cap K_0$) is a bit more delicate, because the $\mathfrak l_1$- and $\mathfrak l_2$-isogeny graphs need to be carefully pasted together.
Let $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathscr A)$ be the connected component of $\mathscr A$ in the labelled isogeny graphs whose edges are $\mathfrak l_1$-isogenies (labelled $\mathfrak l_1$) and $\mathfrak l_2$-isogenies (labelled~$\mathfrak l_2$).
The graph of ${(\ell,\ell)}$-isogenies is the graph on the same set of vertices, such that the number of edges between two vertices $\mathscr B$ and $\mathscr C$ is exactly the number of paths of length 2 from $\mathscr B$ to $\mathscr C$, whose first edge is labelled $\mathfrak l_1$ and second edge is labelled $\mathfrak l_2$. It remains to fully understand the structure of the graph $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathscr A)$.
As in the cases where $\ell$ is inert or ramified in $K_0$, we would like a complete characterization of the structure of the isogeny graph, i.e., a description that is sufficient to construct an explicit model of the abstract graph.
Without loss of generality, suppose $\mathcal{O}$ is locally maximal at $\ell$. Then, the endomorphism ring of any variety in $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathscr A)$ is characterized by the conductor $\mathfrak l_1^m\mathfrak l_2^n$ at $\ell$, and we denote by $\mathcal{O}_{m,n}$ the corresponding order. The graph $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathscr A)$ only depends on the order, so we also denote it $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathcal{O})$. For simplicity of exposition, let us assume that $\mathfrak l_1$ and $\mathfrak l_2$ are principal in $\mathcal{O}\cap K_0$, so that the $\mathfrak l_i$-isogeny graphs are volcanoes.
\begin{definition}[cyclic homomorphism]
Let $\mathscr X$ and $\mathscr Y$ be two graphs. A graph homomorphism $\psi : \mathscr X \rightarrow \mathscr Y$ is a \emph{cyclic homomorphism} if each edge of $\mathscr X$ and $\mathscr Y$ can be directed in such a way that $\psi$ becomes a homomorphism of directed graphs, and each undirected cycle in $\mathscr X$ becomes a directed cycle.
\end{definition}
\begin{lemma}\label{lemma:easyLemmaConnectedComp}
Let $\mathscr X$, $\mathscr Y$ and $\mathscr Y'$ be connected, $d$-regular graphs, with $d \leq 2$, such that $\mathscr Y$ and $\mathscr Y'$ are isomorphic. If $\varphi : \mathscr Y \rightarrow \mathscr X$ and $\varphi' : \mathscr Y' \rightarrow \mathscr X$ are two cyclic homomorphisms, there is an isomorphism $\psi : \mathscr Y \rightarrow \mathscr Y'$ such that $\varphi = \varphi' \circ \psi$.
\end{lemma}
\begin{proof}
The statement is trivial if $d$ is 0 or 1. Suppose $d = 2$, i.e., $\mathscr X$, $\mathscr Y$ and $\mathscr Y'$ are cycles.
Let $\mathscr X$ be
the cycle $x_0 - x_1 - \dots - x_m$, with $x_m = x_0$. Similarly, $\mathscr Y$ is the cycle $y_0 - y_1 - \dots - y_n$, with $y_n = y_0$. Without loss of generality, $\varphi(y_0) = x_0$ and $\varphi(y_1) = x_1$.
There is a direction on the edges of $\mathscr X$ and $\mathscr Y$ such that $\varphi$ becomes a homomorphism of directed graphs, and $\mathscr Y$ becomes a directed cycle.
Without loss of generality, the direction of $\mathscr Y$ is given by $y_i \rightarrow y_{i+1}$.
Since $y_0 \rightarrow y_{1}$, we have $\varphi(y_0) \rightarrow \varphi(y_{1})$, hence $x_0 \rightarrow x_{1}$. Since $y_1 \rightarrow y_{2}$, we must also have $x_1 \rightarrow \varphi(y_{2})$, so $\varphi(y_{2}) \neq x_0$ and therefore $\varphi(y_{2}) = x_2$, and as a consequence $x_1 \rightarrow x_{2}$.
Repeating inductively, we obtain $x_i \rightarrow x_{i+1}$ for all $i \leq m$, and $\varphi(y_{i}) = x_{i \text{ mod } m}$ for all $i \leq n$.
Similarly, any direction on $\mathscr X$ and $\mathscr Y'$ such that $\mathscr Y'$ is a directed cycle and $\varphi'$ becomes a homomorphism of directed graphs turns $\mathscr X$ into a directed cycle. Without loss of generality, it is exactly the directed cycle $x_0 \rightarrow x_1 \rightarrow \dots \rightarrow x_m$ (if it is the other direction, simply invert the directions of $\mathscr Y'$). There is then an enumeration $\{y_i'\}_{i=0}^n$ of $\mathscr Y'$ such that $\varphi'(y_i') = x_i$, and $y_i' \rightarrow y'_{i+1}$ for each $i$. The isomorphism $\psi$ is then simply given by $\psi(y_i) = y_i'$.
\end{proof}
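A toy instance of the lemma in the only nontrivial case $d = 2$ (all names are ours): two cyclic coverings of an $m$-cycle by an $n$-cycle differ by a rotation $\psi$, which is exactly the isomorphism produced by the proof.

```python
# Toy check of the lemma for d = 2: X is an m-cycle, Y and Y' are n-cycles,
# and phi, phi2 are cyclic homomorphisms (coverings) Y -> X and Y' -> X.
m, n = 4, 12            # n must be a multiple of m for a covering to exist

def phi(i):             # Y -> X
    return i % m

shift = 5               # Y' is the same n-cycle with a rotated labelling
def phi2(i):            # Y' -> X
    return (i + shift) % m

def psi(i):             # the isomorphism Y -> Y' produced by the lemma
    return (i - shift) % n

# phi = phi2 o psi, and psi sends edges of Y to edges of Y'
ok = all(phi(i) == phi2(psi(i)) for i in range(n))
edges_preserved = all(abs(psi((i + 1) % n) - psi(i)) % n in (1, n - 1)
                      for i in range(n))
print(ok, edges_preserved)
```

It prints `True True`: the rotation $\psi$ is a graph isomorphism satisfying $\varphi = \varphi' \circ \psi$.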
\iffalse
\begin{notation}
Let $\mathfrak l$ be a prime ideal in $\mathcal{O}_{K_0}$. Let $\epsilon$ be $-1$ if $\mathfrak l$ is inert in $\mathcal{O}_K$, $0$ if it ramifies and $1$ if it splits.
Denote by $\mathscr T_{\mathfrak l}$ the rooted tree whose root is of degree $N\mathfrak l - \epsilon$, and every other vertex is of degree $N\mathfrak l+1$. It comes with a predecessor map $\mathrm{pr}_\mathfrak l$ mapping each vertex except the root to its parent.
\end{notation}
\begin{proposition}\label{prop:structureVolcamicMess}
Let $\mathscr T_1 = \mathscr T_{\mathfrak l_1}$ and $\mathscr T_2 = \mathscr T_{\mathfrak l_2}$.
A labelled graph $\mathscr G$ on the set of vertices $\Pic(\mathcal{O}) \times V(\mathscr T_{1}) \times V(\mathscr T_{2})$ is uniquely determined up to isomorphism by the following properties:
\begin{enumerate}[label=(\roman*)]
\item all the edges are from one of the following subgraphs:
\begin{enumerate}[label=(\alph*)]
\item for each $\alpha \in \Pic(\mathcal{O}), t_2 \in V(\mathscr T_{2})$, the subgraph on $\{\alpha\} \times V(\mathscr T_{1}) \times \{t_2\}$, with edges labelled by $\mathfrak l_1$, is canonically isomorphic to $\mathscr T_{1}$,
\item for each $\alpha \in \Pic(\mathcal{O}), t_1 \in V(\mathscr T_{1})$, the subgraph on $\{\alpha\} \times \{t_1\} \times V(\mathscr T_{2})$, with edges labelled by $\mathfrak l_2$, is canonically isomorphic to $\mathscr T_{2}$,
\item for each non-negative integers $m$ and $n$, the subgraph $\mathscr G_{m,n}$ on the set of vertices $\Pic(\mathcal{O}) \times V(\mathscr T_{1}{(m)}) \times V(\mathscr T_{2}{(n)})$ is isomorphic to the Cayley graph $\mathscr C_{m,n}$ of $\Pic(\mathcal{O}_{m,n})$ with set of generators the invertible ideals of the order $\mathcal{O}_{m,n}$ above $\ell$, naturally labelled by $\mathfrak l_1$ and $\mathfrak l_2$.
\end{enumerate}
\item for any $m > 0$ and $n \geq 0$, the predecessor map $\mathrm{pr}_1$ of $\mathscr T_{1}$ induces a cyclic homomorphism
$\mathrm{pr}_1 : \mathscr G_{m,n} \rightarrow \mathscr G_{m-1,n} : (\alpha, t_1, t_2) \mapsto (\alpha, \mathrm{pr}_1(t_1), t_2),$
\item for any $m \geq 0$ and $n > 0$, the predecessor map $\mathrm{pr}_2$ of $\mathscr T_{2}$ induces a cyclic homomorphism
$\mathrm{pr}_2 : \mathscr G_{m,n} \rightarrow \mathscr G_{m,n-1} : (\alpha, t_1, t_2) \mapsto (\alpha, t_1, \mathrm{pr}_2(t_2)).$
\end{enumerate}
In particular, the (non-canonical) choices of isomorphisms between $\mathscr G_{m,n}$ and $\mathscr C_{m,n}$ do not affect the isomorphism class of $\mathscr G$.
\end{proposition}
\begin{remark}
Note that the Cayley graphs on $\Pic(\mathcal{O}_{m,n})$ are not necessarily connected. The set of generators is even empty if $m$ and $n$ are both strictly positive, in which case the graph is simply $|\Pic(\mathcal{O}_{m,n})|$ isolated vertices. A more algebraic way to see this graph is the following: the set of vertices is $\bigcup_{m,n}\Pic(\mathcal{O}_{m,n})$, and each subgraph $\Pic(\mathcal{O}_{m,n})$ is exactly the (labelled) Cayley graph of $\Pic(\mathcal{O}_{m,n})$ with set of generators the invertible ideals of the order $\mathcal{O}_{m,n}$ above $\ell$, and in addition, there is an $\mathfrak l_1$-edge between $[\mathfrak a] \in \Pic(\mathcal{O}_{m,n})$ and $[\mathfrak a\mathcal{O}_{m-1,n}]$ whenever $m>0$, and an $\mathfrak l_2$-edge between $[\mathfrak a]$ and $[\mathfrak a\mathcal{O}_{m,{n-1}}]$ whenever $n > 0$.
\end{remark}
\begin{proof}
Let $\mathscr G$ and $\mathscr G'$ be the graphs induced by two different sets of choices for the isomorphisms.
We will construct an isomorphism $\Psi : \mathscr G \rightarrow \mathscr G'$ by starting with the isomorphism between $\mathscr G_{0,0}$ and $\mathscr G'_{0,0}$ and extending it on the blocks $\mathscr G_{m,n}$ and $\mathscr G'_{m,n}$ one at a time. Let $n>0$, and suppose, by induction, that $\Psi$ has been defined exactly on the blocks $\mathscr G_{i,j}$ for $i + j < n$. Let us extend $\Psi$ to the blocks $\mathscr G_{m,n-m}$ for $m = 0,\dots,n$ in order.
Both $\mathscr G_{0,n}$ and $\mathscr G'_{0,n}$ are isomorphic to $\mathscr C_{0,n}$, so their connected components are all isomorphic, and they are $d$-regular with $d \leq 2$.
We have the graph homomorphism $\mathrm{pr}_2 : \mathscr G_{0,n} \rightarrow \mathscr G_{0,n-1}$. Let $S$ be the set of connected components of $\mathscr G_{0,n}$. Define the equivalence relation on $S$
$$A \sim B \Longleftrightarrow \mathrm{pr}_2(A) = \mathrm{pr}_2(B).$$
Similarly define the equivalence relation $\sim'$ on the set $S'$ of connected components of $\mathscr G'_{0,n}$.
Observe that corresponding equivalence classes for $\sim$ and $\sim'$ have the same cardinality, so one can choose a bijection $\Theta : S \rightarrow S'$ such that for any $A \in S$, we have $\Psi(\mathrm{pr}_2(A)) = \mathrm{pr}_2(\Theta(A))$. From Lemma~\ref{lemma:easyLemmaConnectedComp}, for each $A \in S$, there is a graph isomorphism $\psi_A : A \rightarrow \Theta(A)$ such that for any $x \in A$, $\mathrm{pr}_2( \psi_A(x)) = \Psi( \mathrm{pr}_2(x))$.
Let $\hat\Psi$ be the map extending $\Psi$ by sending any $x \in \mathscr G_{0,n}$ to $\psi_A(x)$, where $A$ is the connected component of $x$ in $\mathscr G_{0,n}$.
We need to show that it is a graph isomorphism. Write $\mathscr D$ and $\mathscr D'$ for the domain and codomain of $\Psi$. The map
$\hat\Psi$, restricted and corestricted to $\mathscr D$ and $\mathscr D'$, is exactly $\Psi$, so it is an isomorphism. Also, the restriction and corestriction
to $\mathscr G_{0,n}$ and $\mathscr G'_{0,n}$ is an isomorphism, by construction. Only the edges between $\mathscr G_{0,n}$ and $\mathscr D$ (respectively $\mathscr G'_{0,n}$ and $\mathscr D'$) might cause trouble. The only edges between $\mathscr G_{0,n}$ and $\mathscr D$ are actually between $\mathscr G_{0,n}$ and $\mathscr G_{0,n-1}$, and are of the form $(x,\mathrm{pr}_2(x))$. But $\hat\Psi$ was precisely constructed so that $\Psi(\mathrm{pr}_2(x)) = \mathrm{pr}_2(\hat\Psi(x))$, so $\hat \Psi$ is indeed an isomorphism.
Now, let $0<m<n$ and suppose that $\Psi$ has been extended to the components $\mathscr G_{i,n-i}$ for each $i < m$. Let us extend it to $\mathscr G_{m,n-m}$. Since $m>0$ and $n-m >0$, the graph $\mathscr C_{m,n-m}$ is a set of isolated points, with no edge. Define an equivalence relation on $\mathscr G_{m,n-m}$ by
$$x \sim y \Longleftrightarrow (\mathrm{pr}_1(x) = \mathrm{pr}_1(y) \wedge \mathrm{pr}_2(x) = \mathrm{pr}_2(y)).$$
Similarly define the equivalence relation on $\mathscr G'_{m,n-m}$. Each equivalence class for either
$\sim$ or $\sim'$ has same cardinality, so one can choose a bijection $\psi: \mathscr G_{m,n-m} \rightarrow \mathscr G'_{m,n-m}$
such that for any $x \in \mathscr G_{m,n-m}$, we have $\mathrm{pr}_i(\psi(x)) = \Psi(\mathrm{pr}_i(x))$ for $i = 1,2$. It is then easy to check that the extension of $\Psi$ induced by $\psi$ is an isomorphism.
The final step, extending on $\mathscr G_{n,0}$, is similar to the case $\mathscr G_{0,n}$. This concludes the induction, and therefore $\mathscr G$ and $\mathscr G'$ are isomorphic.
\end{proof}
\begin{proposition}\label{prop:structl1l2isograph}
The graph $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathcal{O})$ is isomorphic to the graph $\mathscr G$ of Proposition~\ref{prop:structureVolcamicMess}.
\end{proposition}
\begin{proof}
Let $\mathscr G = \mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathcal{O})$, and for any non-negative integers $m,n$, let $\mathscr G_{m,n}$ be the subgraph containing the varieties of endomorphism ring $\mathcal{O}_{m,n}$.
Let $\mathscr A$ be a variety in $\mathscr G_{m,n}$. From Proposition~\ref{prop:frakLStructure}, there is a unique variety $S_{\mathfrak l_1}(\mathscr A)$ in $\mathscr G_{0,n}$, the \emph{$\mathfrak l_1$-source} of $\mathscr A$, reachable via a sequence of $m$ $\mathfrak l_1$-isogenies. Similarly define the $\mathfrak l_2$-source $S_{\mathfrak l_2}(\mathscr A)$. The \emph{source} of $\mathscr A$ is the isomorphism class $S(\mathscr A)$ of $S_{\mathfrak l_2}(S_{\mathfrak l_1}(\mathscr A)) \cong S_{\mathfrak l_1}(S_{\mathfrak l_2}(\mathscr A))$, in $\mathscr G_{0,0}$.
For any $i = 1,2$ and $\mathscr B \in \mathscr G_{0,0}$, let $\mathscr T_i(\mathscr B)$ be the subgraph containing the varieties whose source is $\mathscr B$, and the edges labelled by $\mathfrak l_i$.
From Theorem~\ref{thm:lisogenyvolcanoes}, there is a graph isomorphism $\psi_{i,\mathscr B}: \mathscr T_i(\mathscr B) \rightarrow \mathscr T_{\mathfrak l_i}$. Furthermore, there is a graph isomorphism $\psi_0 : \mathscr G_{0,0} \rightarrow \mathscr C_{0,0}$, where $\mathscr C_{0,0}$ is as above the Cayley graph of $\Pic(\mathcal{O})$ with set of generators the invertible ideals of the order $\mathcal{O}$ above $\ell$, naturally labelled by $\mathfrak l_1$ and $\mathfrak l_2$. We can now define the bijection
\begin{alignat*}{1}
\Psi : \mathscr G &\longrightarrow \Pic(\mathcal{O}) \times V(\mathscr T_{1}) \times V(\mathscr T_{2})\\
\mathscr A &\longmapsto (\psi_0(S(\mathscr A)), \psi_{1,S(\mathscr A)}(S_{\mathfrak l_2}(\mathscr A)), \psi_{2,S(\mathscr A)}(S_{\mathfrak l_1}(\mathscr A))).
\end{alignat*}
This bijection induces a graph structure on the vertices $\Pic(\mathcal{O}) \times V(\mathscr T_{1}) \times V(\mathscr T_{2})$. It is then easy to verify using Proposition~\ref{prop:frakLStructure} and Theorem~\ref{thm:lisogenyvolcanoes} that this graph satisfies the properties of Proposition~\ref{prop:structureVolcamicMess}.
\end{proof}
\fi
\begin{proposition}\label{prop:structureVolcamicMess}
The graph $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathcal{O})$, with edges labelled by $\mathfrak l_1$ and $\mathfrak l_2$, and bi-levelled by $(v_{\mathfrak l_1}, v_{\mathfrak l_2})$, is isomorphic to the unique (up to isomorphism) graph $\mathscr G$ with edges labelled by $\mathfrak l_1$ and $\mathfrak l_2$, and bi-levelled by a pair $(v_1, v_2)$, satisfying:
\begin{enumerate}[label=(\roman*)]
\item\label{property:l1l2volcanoes} For $i= 1,2$, the subgraph of $\mathscr G$ containing only the edges labelled by $\mathfrak l_i$ is a disjoint union of $\ell$-volcanoes, levelled by $v_{i}$,
\item\label{property:l1l2compatibility} For $i \neq j$, if $u$ and $v$ are connected by an $\mathfrak l_i$-edge, then $v_{j}(u) = v_j(v)$,
\item\label{property:l1l2CayleySubgraphs} For any non-negative integers $m$, and $n$, let $\mathscr G_{m,n}$ be the subgraph containing the vertices $v$ such that $(v_1(v),v_2(v)) = (m,n)$. Then,
\begin{enumerate}[label=(\roman*)]
\item $\mathscr G_{0,0}$ is isomorphic to the Cayley graph $\mathscr C_{0,0}$ of the subgroup of $\Pic(\mathcal{O})$ with generators the invertible ideals of the order $\mathcal{O}$ above $\ell$, naturally labelled by $\mathfrak l_1$ and $\mathfrak l_2$,
\item each connected component of $\mathscr G_{m,n}$ is isomorphic to the Cayley graph $\mathscr C_{m,n}$ of the subgroup of $\Pic(\mathcal{O}_{m,n})$ with generators the invertible ideals of the order $\mathcal{O}_{m,n}$ above $\ell$, naturally labelled by $\mathfrak l_1$ and $\mathfrak l_2$,
\end{enumerate}
\item\label{property:l1l2commute} For any two vertices $u$ and $v$ in $\mathscr G$, there is a path of the form $u -_{\mathfrak l_1} w -_{\mathfrak l_2} v$ if and only if there is a path of the form $u -_{\mathfrak l_2} w' -_{\mathfrak l_1} v$ (where $-_{\mathfrak l_i}$ denotes an edge labelled by $\mathfrak l_i$).
\end{enumerate}
\end{proposition}
\begin{proof}
First, it is not hard to see that $\mathscr G_{\mathfrak l_1,\mathfrak l_2}(\mathcal{O})$ satisfies all these properties. Properties~\ref{property:l1l2volcanoes} and~\ref{property:l1l2compatibility} follow from Proposition~\ref{prop:frakLStructure} and Theorem~\ref{thm:lisogenyvolcanoes}. Property~\ref{property:l1l2CayleySubgraphs} follows from the free CM-action of $\Pic(\mathcal{O}_{m,n})$ on the corresponding isomorphism classes. Property~\ref{property:l1l2commute} follows from the decomposition $\mathscr A[\mathfrak l_1\mathfrak l_2] = \mathscr A[\mathfrak l_1] \oplus \mathscr A[\mathfrak l_2]$.
Let $\mathscr G$ and $\mathscr G'$ be two graphs with these properties.
For $i = 1,2$, let $\mathrm{pr}_i$ (respectively, $\mathrm{pr}'_i$) be the predecessor map induced by the volcano structure of the $\mathfrak l_i$-edges on $\mathscr G$ (respectively, on $\mathscr G'$).
We will construct an isomorphism $\Psi : \mathscr G \rightarrow \mathscr G'$ by starting with the isomorphism between $\mathscr G_{0,0}$ and $\mathscr G'_{0,0}$ and extending it on the blocks $\mathscr G_{m,n}$ and $\mathscr G'_{m,n}$ one at a time.
Let $n>0$, and suppose, by induction, that $\Psi$ has been defined exactly on the blocks $\mathscr G_{i,j}$ for $i + j < n$. Let us extend $\Psi$ to the blocks $\mathscr G_{m,n-m}$ for $m = 0,\dots,n$ in order.
Both $\mathscr G_{0,n}$ and $\mathscr G'_{0,n}$ have the same number of vertices, and their connected components are all isomorphic to $\mathscr C_{0,n}$, which is of degree at most $2$.
We have the graph homomorphism $\mathrm{pr}_2 : \mathscr G_{0,n} \rightarrow \mathscr G_{0,n-1}$. Let $S$ be the set of connected components of $\mathscr G_{0,n}$. Define the equivalence relation on $S$
$$A \sim B \Longleftrightarrow \mathrm{pr}_2(A) = \mathrm{pr}_2(B).$$
Similarly define the equivalence relation $\sim'$ on the set $S'$ of connected components of $\mathscr G'_{0,n}$.
Observe that the equivalence classes of $\sim$ and $\sim'$ all have the same cardinality, so one can choose a bijection $\Theta : S \rightarrow S'$ such that for any $A \in S$, we have $\Psi(\mathrm{pr}_2(A)) = \mathrm{pr}'_2(\Theta(A))$. It is not hard to check that $\mathrm{pr}_2$ and $\mathrm{pr}'_2$ are cyclic homomorphisms, using Property~\ref{property:l1l2commute}. From Lemma~\ref{lemma:easyLemmaConnectedComp}, for each $A \in S$, there is a graph isomorphism $\psi_A : A \rightarrow \Theta(A)$ such that for any $x \in A$, $\mathrm{pr}'_2( \psi_A(x)) = \Psi( \mathrm{pr}_2(x))$.
Let $\hat\Psi$ be the map extending $\Psi$ by sending any $x \in \mathscr G_{0,n}$ to $\psi_A(x)$, where $A$ is the connected component of $x$ in $\mathscr G_{0,n}$.
We need to show that it is a graph isomorphism. Write $\mathscr D$ and $\mathscr D'$ for the domain and codomain of $\Psi$. The map
$\hat\Psi$, restricted and corestricted to $\mathscr D$ and $\mathscr D'$, is exactly $\Psi$, so it is an isomorphism there. Also, its restriction and corestriction
to $\mathscr G_{0,n}$ and $\mathscr G'_{0,n}$ is an isomorphism, by construction. Only the edges between $\mathscr G_{0,n}$ and $\mathscr D$ (respectively $\mathscr G'_{0,n}$ and $\mathscr D'$) might cause trouble. The only edges between $\mathscr G_{0,n}$ and $\mathscr D$ are actually between $\mathscr G_{0,n}$ and $\mathscr G_{0,n-1}$, and are of the form $(x,\mathrm{pr}_2(x))$. But $\hat\Psi$ was precisely constructed so that $\Psi(\mathrm{pr}_2(x)) = \mathrm{pr}'_2(\hat\Psi(x))$, so $\hat \Psi$ is indeed an isomorphism.
Now, let $0<m<n$ and suppose that $\Psi$ has been extended to the components $\mathscr G_{i,n-i}$ for each $i < m$. Let us extend it to $\mathscr G_{m,n-m}$. Since $m>0$ and $n-m >0$, the graph $\mathscr C_{m,n-m}$ is a single point, with no edge.
Let us now prove that for any pair $(x_1,x_2)$, where $x_1$ is a vertex in $\mathscr G_{m-1,n-m}$ and $x_2$ in $\mathscr G_{m,n-m-1}$ such that $\mathrm{pr}_2(x_1) = \mathrm{pr}_1(x_2)$, there is a unique vertex $x$ in $\mathscr G_{m,n-m}$ such that $(x_1,x_2) = (\mathrm{pr}_1(x), \mathrm{pr}_2(x))$.
First, we show that for any vertex $x \in \mathscr G_{m,n-m}$, we have
$$\mathrm{pr}_1^{-1}(\mathrm{pr}_1(x)) \cap \mathrm{pr}_2^{-1}(\mathrm{pr}_2(x)) = \{x\}.$$
Let $z = \mathrm{pr}_1(\mathrm{pr}_2(x))$. Let $X = \mathrm{pr}_{1}^{-1}(\mathrm{pr}_1(x))$ and $Y = \mathrm{pr}_{1}^{-1}(z)$.
From Property~\ref{property:l1l2compatibility}, $z$ and $\mathrm{pr}_1(x)$ are at the same $v_1$-level, so
from Property~\ref{property:l1l2volcanoes}, we have $|X| = |Y|$. For any $y \in Y$, we have $\mathrm{pr}_1(x) -_{\mathfrak l_2} z -_{\mathfrak l_1} y$, so there is a vertex $x'$ such that $\mathrm{pr}_1(x) -_{\mathfrak l_1} x' -_{\mathfrak l_2} y$. Then, $v_1(x') = v_1(y) = v_1(\mathrm{pr}_1(x)) - 1$, and therefore $x' \in X$. This implies that $\mathrm{pr}_2$ induces a surjection $\tilde{\mathrm{pr}}_2 : X \rightarrow Y$, which is a bijection since $|X| = |Y|$. So
$$X \cap \mathrm{pr}_2^{-1}(\mathrm{pr}_2(x)) = X \cap \tilde{\mathrm{pr}}_2^{-1}(\mathrm{pr}_2(x)) = \{x\}.$$
Now, an elementary counting argument shows that $x \mapsto (\mathrm{pr}_1(x), \mathrm{pr}_2(x))$ is a bijection between the vertices of $\mathscr G_{m,n-m}$ and the pairs $(x_1,x_2)$, where $x_1$ is a vertex in $\mathscr G_{m-1,n-m}$ and $x_2$ in $\mathscr G_{m,n-m-1}$ such that $\mathrm{pr}_2(x_1) = \mathrm{pr}_1(x_2)$. This property also holds in $\mathscr G'$, and we can thereby define $\psi: \mathscr G_{m,n-m} \rightarrow \mathscr G'_{m,n-m}$ as the bijection sending any vertex $x$ in $\mathscr G_{m,n-m}$ to the unique vertex $x'$ in $\mathscr G'_{m,n-m}$ such that $$(\mathrm{pr}'_1(x'), \mathrm{pr}'_2(x')) = (\Psi(\mathrm{pr}_1(x)), \Psi(\mathrm{pr}_2(x))).$$
It is then easy to check that the extension of $\Psi$ induced by $\psi$ is an isomorphism.
The final step, extending on $\mathscr G_{n,0}$, is similar to the case of $\mathscr G_{0,n}$. This concludes the induction, and proves that $\mathscr G$ and $\mathscr G'$ are isomorphic.
\end{proof}
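The bi-levelled block structure established in this proof can be checked mechanically on a toy model. The following Python sketch (all parameters hypothetical) simplifies each $\mathfrak l_i$-volcano below the crater to a rooted tree with a predecessor map, and the crater component to a plain cycle, then verifies that the two predecessor maps commute and that an $\mathfrak l_1$-edge preserves the $\mathfrak l_2$-level, in the spirit of Properties~(ii) and~(iv):

```python
# Toy check of the commuting-predecessor structure.  All parameters are
# hypothetical small choices; this is a sketch, not the paper's construction.
from itertools import product

# A "volcano tree": vertex 0 sits on the crater, parent[] is the predecessor.
def make_tree(depth, arity):
    parent, level, nodes = {0: None}, {0: 0}, [0]
    frontier = [0]
    for d in range(1, depth + 1):
        new = []
        for p in frontier:
            for _ in range(arity):
                v = len(nodes)
                nodes.append(v); parent[v] = p; level[v] = d
                new.append(v)
        frontier = new
    return nodes, parent, level

h = 5                                # size of the crater cycle (hypothetical)
T1, pr_t1, lev1 = make_tree(2, 2)    # l1-tree: depth 2, 2 children per node
T2, pr_t2, lev2 = make_tree(2, 3)    # l2-tree: depth 2, 3 children per node

V = list(product(range(h), T1, T2))  # vertices (c, x1, x2)

def pr1(v):                          # l1-predecessor (undefined on the crater)
    c, x1, x2 = v
    return None if x1 == 0 else (c, pr_t1[x1], x2)

def pr2(v):                          # l2-predecessor
    c, x1, x2 = v
    return None if x2 == 0 else (c, x1, pr_t2[x2])

for v in V:
    # pr1 and pr2 commute wherever both are defined (cf. Property (iv)):
    if v[1] != 0 and v[2] != 0:
        assert pr1(pr2(v)) == pr2(pr1(v))
    # an l1-edge preserves the l2-level (cf. Property (ii)):
    if v[1] != 0:
        assert lev2[v[2]] == lev2[pr1(v)[2]]
```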
\iffalse
\begin{theorem}[Structure of ${(\ell,\ell)}$-isogeny graphs with maximal local RM]
Suppose that $K$ is a primitive quartic CM-field different from $\mathbb{Q}(\zeta_5)$. The graph of ${(\ell,\ell)}$-isogenies with maximal local real multiplication is given by:
\begin{enumerate}[label=(\roman*)]
\item if $\ell$ is inert in $K_0$, it is exactly the graph $\mathscr W_{\ell\mathcal{O}_{K_0}}$,
\item if $\ell$ ramifies as $\mathfrak l^2$ in $K_0$, it is the graph on the same set of vertices as the graph $\mathscr W_{\mathfrak l}$, with an edge in the ${(\ell,\ell)}$-graph between $\mathscr A\neq\mathscr B$ for each path of length 2 in $\mathscr W_{\mathfrak l}$ between $\mathscr A$ and $\mathscr B$, and with a single self-loop on each vertex,
\item if $\ell$ splits as $\mathfrak l_1\mathfrak l_2$ in $K_0$, it is the graph on the same set of vertices as the labelled graph $\mathscr G_{\mathfrak l_1,\mathfrak l_2}$ (whose structure is described in Proposition~\ref{prop:structureVolcamicMess}), such that the number of edges between two vertices $\mathscr A$ and $\mathscr B$ is exactly the number of paths of length 2 from $\mathscr A$ to $\mathscr B$, whose first edge is labelled $\mathfrak l_1$ and second edge is labelled $\mathfrak l_2$.
\end{enumerate}
\end{theorem}
\fi
\section{Applications to ``going up" algorithms}\label{sec:goingUp}
\subsection{Largest reachable orders}
The results from Section~\ref{subsec:ellellLevelsRM} and Section~\ref{subsec:ellellMaxRM} on the structure of the graph of ${(\ell,\ell)}$-isogenies allow us to determine exactly when there exists a sequence of ${(\ell,\ell)}$-isogenies leading to a surface with maximal local order at $\ell$. When there is no such path, one can still determine the largest reachable orders.
\begin{proposition}\label{prop:ellellIncreasingLocalOrder}
Suppose $\mathscr A$ has maximal local real order, and $\mathfrak{o}(\mathscr A) = \mathfrak{o}_\mathfrak f$.
\begin{enumerate}[label=(\roman*)]
\item If $\ell$ divides $\mathfrak f$, there is a unique ${(\ell,\ell)}$-isogeny to a surface with order $\mathfrak{o}_{\ell^{-1}\mathfrak f}$.
\item If $\ell$ ramifies in $K_0$ as $\mathfrak l^2$ and $\mathfrak f = \mathfrak l$, then there exists an ${(\ell,\ell)}$-isogeny to a surface with maximal local order if and only if $\mathfrak l$ is not inert in $K$. It is unique if $\mathfrak l$ is ramified, and there are two if it splits.
\item If $\ell$ splits in $K_0$ as $\mathfrak l_1\mathfrak l_2$, and $\mathfrak f = \mathfrak l_1^i$ for some $i > 0$, then there exists an ${(\ell,\ell)}$-isogeny to a surface with local order $\mathfrak{o}_{\mathfrak l_1^{i-1}}$ if and only if $\mathfrak l_2$ is not inert in $K$. It is unique if $\mathfrak l_2$ is ramified, and there are two if it splits. Also, there always exists an ${(\ell,\ell)}$-isogeny to a surface with local order $\mathfrak{o}_{\mathfrak l_{1}^{i-1}\mathfrak l_2}$.
\end{enumerate}
\end{proposition}
\begin{proof}
This is a straightforward case-by-case analysis using Proposition~\ref{prop:frakLStructure} and Theorem~\ref{thm:ellellLCombinations}.
\end{proof}
\begin{definition}[parity of $\mathscr A$]
Suppose $\mathscr A$ has real order $\mathfrak{o}_n = \mathbb{Z}_\ell + \ell^n \mathfrak{o}_0$. Construct $\mathscr B$ by taking the RM-predecessor of $\mathscr A$ $n$ times, i.e.,
$\mathscr B = \mathrm{pr}(\mathrm{pr}(\dots \mathrm{pr}(\mathscr A)\dots))$ is the (iterated) RM-predecessor of $\mathscr A$ that has maximal real local order.
Let $\mathfrak f$ be the conductor of $\mathfrak{o}(\mathscr B)$. The \emph{parity} of $\mathscr A$ is 0 if $N(\mathfrak f\cap \mathfrak{o}_0)$ is a square, and 1 otherwise.
\end{definition}
\begin{remark}
The parity is always 0 if $\ell$ is inert in $K_0$.
\end{remark}
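Since $N(\mathfrak f\cap \mathfrak{o}_0)$ is a power of $\ell$ determined by the splitting behaviour of $\ell$ in $K_0$, the parity reduces to a computation on the exponents of the primes above $\ell$ in the conductor. A minimal Python sketch (the helper \texttt{parity} is purely illustrative, not an algorithm from the text):

```python
# Toy parity computation from the factorization of the conductor f of o(B).
# The splitting type and exponents are assumed given; names are illustrative.

def parity(ell_splitting, exponents):
    """Return 0 if N(f) is a square, 1 otherwise.
    ell_splitting: 'inert', 'ramified' or 'split';
    exponents: exponent(s) of the prime(s) above ell in the conductor f."""
    if ell_splitting == 'inert':        # N(l) = ell^2, so N(f) = ell^(2i)
        val = 2 * exponents[0]
    elif ell_splitting == 'ramified':   # N(l) = ell, f = l^i
        val = exponents[0]
    else:                               # split: f = l1^i1 * l2^i2, N(f) = ell^(i1+i2)
        val = exponents[0] + exponents[1]
    return val % 2

# Hypothetical conductors:
assert parity('inert', [3]) == 0        # always 0 when ell is inert (Remark)
assert parity('ramified', [1]) == 1
assert parity('split', [2, 1]) == 1
assert parity('split', [1, 1]) == 0
```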
\begin{theorem}\label{thm:ellellsurface}
For any $\mathscr A$, there exists a sequence of ${(\ell,\ell)}$-isogenies starting from $\mathscr A$ and ending at a variety with maximal local order, except in the following two cases:
\begin{enumerate}[label=(\roman*)]
\item $\mathscr A$ has parity 1, $\ell$ splits in $K_0$ as $\mathfrak l_1\mathfrak l_2$, and both $\mathfrak l_1$ and $\mathfrak l_2$ are inert in $K$, in which case the largest reachable local orders are $\mathfrak{o}_0 + \mathfrak l_1\mathfrak{o}_K$ and $\mathfrak{o}_0 + \mathfrak l_2\mathfrak{o}_K$;
\item $\mathscr A$ has parity 1, $\ell$ ramifies in $K_0$ as $\mathfrak l^2$, and $\mathfrak l$ is inert in $K$, in which case the largest reachable local order is $\mathfrak{o}_0 + \mathfrak l\mathfrak{o}_K$.
\end{enumerate}
\end{theorem}
\begin{proof}
First, from Proposition~\ref{prop:liftingIsogeniesUp}, there is a sequence of ${(\ell,\ell)}$-isogenies starting from $\mathscr A$ and ending at a variety with maximal local order if and only if there is such a path that starts with a sequence of isogenies up to $\mathscr B = \mathrm{pr}(\mathrm{pr}(\dots \mathrm{pr}(\mathscr A)\dots))$, and then consists only of ${(\ell,\ell)}$-isogenies preserving the maximality of the local real order. It is therefore sufficient to look at sequences of RM-preserving ${(\ell,\ell)}$-isogenies from $\mathscr B$, which by construction has the same parity $s$ as $\mathscr A$.
From Proposition~\ref{prop:ellellIncreasingLocalOrder}, there is a path from $\mathscr B$ to a surface $\mathscr C$ with local order $
\mathfrak{o}(\mathscr C) = \mathfrak{o}_{\mathfrak l^s}$ where $\mathfrak l$ is a prime ideal of $\mathfrak{o}_0$ above $\ell$, and $s$ is the parity of $\mathscr A$. We are done if the parity is 0.
Suppose the parity is 1. From Proposition~\ref{prop:frakLStructure} and Theorem~\ref{thm:ellellLCombinations}, one can see that there exists a sequence of RM-preserving ${(\ell,\ell)}$-isogenies from $\mathscr C$ that changes the parity to 0 if and only if either $\ell$ ramifies in $K_0$ as $\mathfrak l^2$ and $\mathfrak l$ is not inert in $K$, or $\ell$ splits in $K_0$ as $\mathfrak l_1\mathfrak l_2$ and at least one of $\mathfrak l_1$ and $\mathfrak l_2$ is not inert in $K$. This concludes the proof.
\end{proof}
\subsection{A ``going up" algorithm}
In many applications (in particular, the CM method in genus 2 based on the CRT) it is useful to find a chain of isogenies to a principally polarized abelian surface with maximal endomorphism ring starting from any curve whose Jacobian is in the given isogeny class.
Lauter and Robert \cite[\S 5]{lauter-robert} propose a probabilistic algorithm to construct a principally polarized abelian variety whose endomorphism ring is maximal. That algorithm is heuristic, and the probability of failure is difficult to analyze. We now apply our structural results from Subsection~\ref{subsec:ellellMaxRM} to some of their ideas to give a provable algorithm.
\subsubsection{Prior work of Lauter--Robert}
Given a prime $\ell$ for which we would like to find an isogenous abelian surface over $\mathbb{F}_q$ with maximal local endomorphism ring at $\ell$, suppose that $\alpha = \ell^e \alpha'$ for some $\alpha' \in \mathcal{O}_K$ and some $e > 0$.
To find a surface $\mathscr A'/\mathbb{F}_q$ for which $\alpha / \ell^e \in \End(\mathscr A')$, Lauter and Robert \cite[\S 5]{lauter-robert} use $(\ell, \ell)$-isogenies and a test for
whether $\alpha / \ell^e \in \End(\mathscr A')$. In fact, $\alpha / \ell^e \in \End(\mathscr A')$ is equivalent to $\alpha(\mathscr A'[\ell^e]) = 0$, i.e., to $\alpha$ being trivial on the $\ell^e$-torsion of $\mathscr A'$.
One thus defines an ``obstruction" $N_e = \# \alpha(\mathscr A[\ell^e])$
that measures the failure of $\alpha/\ell^e$ to be an endomorphism of $\mathscr A$. To construct an abelian surface that admits $\alpha / \ell^e$ as an endomorphism, one uses $(\ell, \ell)$-isogenies iteratively to decrease the associated obstruction $N_e$ (this is in essence the idea of \cite[Alg.21]{lauter-robert}).
To reach an abelian surface with maximal local endomorphism ring at $\ell$, Lauter and Robert look at the structure of $\End(\mathscr A) \otimes_{\mathbb{Z}}\mathbb{Z}_\ell$ as a $\mathbb{Z}_\ell$-module and define an obstruction via a particular choice of a $\mathbb{Z}_\ell$-basis \cite[Alg.23]{lauter-robert}.
\subsubsection{Refined obstructions and provable algorithm}
Theorem~\ref{thm:ellellsurface} above gives a provable ``going up" algorithm that runs in three main steps: 1) it uses $(\ell, \ell)$-isogenies to reach a surface with maximal local real endomorphism ring at $\ell$; 2) it reaches the largest possible order via $(\ell, \ell)$-isogenies as in Theorem~\ref{thm:ellellsurface}; 3) if needed, it makes a last step to reach maximal local endomorphism ring via a cyclic isogeny. To implement 1) and 2), one uses refined obstructions, which we now describe in detail.
\subsubsection{``Going up" to maximal real multiplication.}\label{subsec:surfacingToMaxRM}
Consider the local orders $\mathfrak{o}_0 = \mathcal{O}_{K_0} \otimes_\mathbb{Z} \mathbb{Z}_\ell$ and $\mathbb{Z}_\ell[\pi + \pi^{\dagger}]$, and choose a $\mathbb{Z}_\ell$-basis $\{1, \beta / \ell^e\}$ of $\mathfrak{o}_0$ with $\beta \in \mathbb{Z}[\pi, \pi^{\dagger}]$; we then apply a ``real-multiplication" modification of \cite[Alg.21]{lauter-robert} to $\beta$.
Thus, given an abelian surface $\mathscr A$ with endomorphism algebra isomorphic to $K$, define the obstruction for $\mathscr A$ to have maximal real multiplication at $\ell$ as
$$
N_0(\mathscr A) = e - \max \{\epsilon \colon \beta(\mathscr A[\ell^{\epsilon}]) = 0 \}.
$$
Clearly, $\mathscr A$ will have maximal real endomorphism ring at $\ell$ if and only if $N_0(\mathscr A) = 0$. The following simple lemma characterizes the obstruction:
\begin{lemma}\label{lem:obstr-cond-real}
The obstruction $N_0(\mathscr A)$ is equal to the valuation at $\ell$ of the conductor of the real multiplication $\mathcal{O}_0(\mathscr A) \subset \mathcal{O}_{K_0}$.
\end{lemma}
\begin{proof}
Using the definition of $N_0(\mathscr A)$ and the fact that $\beta / \ell^\epsilon \in \mathcal{O}(\mathscr A)$ if and only if $\beta(\mathscr A[\ell^\epsilon]) = 0$, it follows that
$$
\displaystyle \mathbb{Z}_\ell + \beta / \ell^{e - N_0(\mathscr A)} \mathbb{Z}_\ell \subseteq \mathfrak{o}_0(\mathscr A) \subsetneq \mathbb{Z}_\ell + \beta / \ell^{e - N_0(\mathscr A) + 1} \mathbb{Z}_\ell.
$$
Since every order of $K_0$ is of the form $\mathbb{Z} + c \mathcal{O}_{K_0}$ for some $c \in \mathbb{Z}_{>0}$, by localization at $\ell$ one sees that
$$
\displaystyle \mathfrak{o}_0(\mathscr A) = \mathbb{Z}_\ell + \beta / \ell^{e - N_0(\mathscr A)} \mathbb{Z}_\ell = \mathbb{Z}_\ell + \ell^{N_0(\mathscr A)} \mathfrak{o}_0,
$$
i.e., the valuation at $\ell$ of the conductor of $\mathcal{O}_0(\mathscr A)$ is $N_0(\mathscr A)$.
\end{proof}
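By the lemma, the obstruction can be read off from the action of $\beta$ on an $\ell^e$-torsion basis. In the toy Python sketch below, a hypothetical integer matrix $M$ stands in for that action on $\mathscr A[\ell^e] \simeq (\mathbb{Z}/\ell^e\mathbb{Z})^4$; the point is simply that $\beta$ kills $\mathscr A[\ell^\epsilon]$ exactly when every entry of $M$ is divisible by $\ell^\epsilon$:

```python
# Toy model of the obstruction N_0: beta acts on the l^e-torsion through an
# integer matrix M (hypothetical data; on an actual surface M would come from
# evaluating beta on a basis of the l^e-torsion).

def val(n, ell, cap):
    """l-adic valuation of n, capped at `cap`; by convention val(0) = cap."""
    if n % (ell ** cap) == 0:
        return cap
    v = 0
    while n % ell == 0:
        n //= ell; v += 1
    return v

def obstruction(M, ell, e):
    """N_0 = e - max{eps : beta kills A[l^eps]}; beta kills A[l^eps]
    iff every entry of M is divisible by l^eps."""
    eps = min(val(m, ell, e) for row in M for m in row)
    return e - eps

ell, e = 3, 2
M = [[9, 3, 0, 9],
     [3, 27, 9, 3],
     [0, 9, 3, 18],
     [9, 3, 9, 3]]                   # every entry divisible by 3, not all by 9
assert obstruction(M, ell, e) == 1   # beta/3 is an endomorphism, beta/9 is not
```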
The lemma proves the following algorithm works (i.e., that there always exists a neighbor decreasing the obstruction $N_0$):
\begin{algorithm}
\caption{Surfacing to maximal real endomorphism ring}
\begin{algorithmic}[1]
\REQUIRE An abelian surface $\mathscr A / \mathbb{F}_q$ with endomorphism algebra $K = \End(\mathscr A) \otimes \mathbb{Q}$, and a prime number $\ell$.
\ENSURE An isogenous abelian surface $\mathscr A' / \mathbb{F}_q$ with $\mathfrak{o}_0(\mathscr A') = \mathfrak{o}_0$.
\STATE $\beta \gets$ an element $\beta \in \mathbb{Z}[\pi, \pi^{\dagger}]$ such that $\{1, \beta / \ell^e\}$ is a $\mathbb{Z}_\ell$-basis for $\mathfrak{o}_0$.
\STATE Compute $N_0(\mathscr A) := e - \max\{\epsilon \colon \beta(\mathscr A[\ell^{\epsilon}]) = 0\}$
\IF{$N_0(\mathscr A) = 0$}\label{nplus}
\RETURN $\mathscr A$
\ENDIF
\STATE $\mathcal{L} \leftarrow$ list of maximal isotropic $\kappa \subset \mathscr A[\ell]$ with $\kappa \cap \beta(\mathscr A[\ell^{e - N_0(\mathscr A) +1}]) \ne 0$
\FOR{$\kappa \in \mathcal{L}$}
\STATE Compute $N_0(\mathscr A/\kappa) := e - \max\{\epsilon \colon \beta((\mathscr A/\kappa) [\ell^{\epsilon}]) = 0\}$
\IF{$N_0(\mathscr A/\kappa) < N_0(\mathscr A)$}
\STATE $\mathscr A \leftarrow \mathscr A/\kappa$ and \textbf{go to} Step~\ref{nplus}
\ENDIF
\ENDFOR
\end{algorithmic}
\label{alg:surf-real}
\end{algorithm}
\subsubsection{Almost maximal order with ${(\ell,\ell)}$-isogenies.}
For each prime $\ell$, use the going-up algorithm (Algorithm~\ref{alg:surf-real}), until
$\mathcal{O}_0(\mathscr A) = \mathcal{O}_{K_0}$. Let $\ell$ be any prime and let $\mathfrak l \subset \mathcal{O}_{K_0}$ be a prime
ideal above $\ell$. Let $\mathfrak{o}_{0, \mathfrak l} = \mathcal{O}_{K_0, \mathfrak l}$ be the completion at $\mathfrak l$ of $\mathcal{O}_{K_0}$. Let $\mathfrak{o}_{\mathfrak l}(\mathscr A) = \mathcal{O}(\mathscr A) \otimes_{\mathcal{O}_{K_0}} \mathfrak{o}_{0, \mathfrak l}$.
Consider the suborder $\mathfrak{o}_{0, \mathfrak l}[\pi, \pi^{\dagger}]$ of the maximal local (at $\mathfrak l$) order
$\mathfrak{o}_{\mathfrak l} = \mathcal{O}_{K} \otimes_{\mathcal{O}_{K_0}} \mathfrak{o}_{0, \mathfrak l}$. Now write
$$
\mathfrak{o}_{0, \mathfrak l}[\pi, \pi^{\dagger}] = \mathfrak{o}_{0, \mathfrak l} + \gamma_{\mathfrak l} \mathfrak{o}_{0, \mathfrak l}, \qquad \text{and} \qquad \mathfrak{o}_{\mathfrak l} = \mathfrak{o}_{0, \mathfrak l} +
{\gamma_{\mathfrak l} / \varpi^{f_{\mathfrak l}}} \mathfrak{o}_{0, \mathfrak l},
$$
for some endomorphism $\gamma_{\mathfrak l}$. Here, $\varpi$ is a uniformizer for the local order $\mathfrak{o}_{0, \mathfrak l}$ and $f_\mathfrak l \geq 0$ is some integer. To define an obstruction at $\mathfrak l$ analogous to $N_0(\mathscr A)$, let
$$
N_\mathfrak l(\mathscr A) = f_{\mathfrak l} - \max \{\delta \colon \gamma_{\mathfrak l} (\mathscr A[\mathfrak l^\delta]) = 0\}.
$$
To compute these obstructions, we evaluate $\gamma_{\mathfrak l}$ on the $\mathfrak l$-power torsion of $\mathscr A$.
The idea is similar to Algorithm~\ref{alg:surf-real}, except that in the split case, one must test the obstructions $N_{\mathfrak l}(\mathscr A)$ for both prime ideals $\mathfrak l \subset \mathcal{O}_{K_0}$ above $\ell$ at the same time.
We now show that one can reach the maximal possible ``reachable" (in the sense of Theorem~\ref{thm:ellellsurface}) local order at $\ell$ starting from $\mathscr A$ and using only $(\ell, \ell)$-isogenies. When $\ell$ is either inert or ramified in $K_0$, there is only one obstruction $N_{\mathfrak l}(\mathscr A)$, and one can ensure that it decreases at each step via the obvious modification of Algorithm~\ref{alg:surf-real}.
Suppose now that $\ell \mathcal{O}_{K_0} = \mathfrak l_1 \mathfrak l_2$ is split.
Let $\mathfrak f = \mathfrak l_1^{i_1} \mathfrak l_2^{i_2}$ be the conductor of
$\mathfrak{o}(\mathscr A)$, and suppose, without loss of generality, that $i_1 \geq i_2$. To ensure first that one can reach an abelian surface $\mathscr A$ for which $0 \leq i_1 - i_2 \leq 1$, we relate the conductor $\mathfrak f$ to the two obstructions at $\mathfrak l_1$ and $\mathfrak l_2$.
\begin{lemma}
Let $\mathscr A$ be an abelian surface with maximal local real endomorphism ring at $\ell$ and let
$\mathfrak{o}(\mathscr A) = \mathfrak{o}_0 + \mathfrak f \mathfrak{o}_K$ where $\mathfrak f$ is the conductor. Then
$$
v_{\mathfrak l_1}(\mathfrak f) = N_{\mathfrak l_1}(\mathscr A) \qquad \text{and} \qquad
v_{\mathfrak l_2}(\mathfrak f) = N_{\mathfrak l_2}(\mathscr A).
$$
\end{lemma}
\begin{proof}
The proof is the same as that of Lemma~\ref{lem:obstr-cond-real}.
\end{proof}
Using the lemma, and assuming $N_{\mathfrak l_1}(\mathscr A) - N_{\mathfrak l_2}(\mathscr A) > 1$, one looks at each step for an $(\ell, \ell)$-isogeny that decreases $N_{\mathfrak l_1}(\mathscr A)$ by 1 and increases $N_{\mathfrak l_2}(\mathscr A)$ by 1. Such an isogeny exists by Proposition~\ref{prop:ellellIncreasingLocalOrder}(iii). One repeats this process until
$$
0 \leq N_{\mathfrak l_1}(\mathscr A) - N_{\mathfrak l_2}(\mathscr A) \leq 1.
$$
If at this stage $N_{\mathfrak l_2}(\mathscr A) > 0$, this means that $\ell \mid {\mathfrak f}$ and hence, by Proposition~\ref{prop:ellellIncreasingLocalOrder}(i), there exists
a unique $(\ell, \ell)$-isogeny decreasing both obstructions. One searches for that $(\ell, \ell)$-isogeny by testing whether the two obstructions decrease simultaneously, and repeats until $N_{\mathfrak l_2}(\mathscr A) = 0$.
If $N_{\mathfrak l_1}(\mathscr A) = 0$, then the maximal local order at $\ell$ has been reached. If $N_{\mathfrak l_1}(\mathscr A) = 1$ then Proposition~\ref{prop:ellellIncreasingLocalOrder}(iii) implies that, if $\mathfrak l_2$ is not inert in $K$, then
there exists an $(\ell, \ell)$-isogeny that decreases $N_{\mathfrak l_1}(\mathscr A)$ to $0$ and keeps $N_{\mathfrak l_2}(\mathscr A)$ at zero.
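The split-case strategy above amounts to simple bookkeeping on the pair of obstructions: first balance them one swap at a time, then decrease both simultaneously while $\ell$ divides the conductor. A Python trace of the step counts (purely illustrative; the existence of each isogeny used is what Proposition~\ref{prop:ellellIncreasingLocalOrder} guarantees):

```python
# Toy trace of the split-case going-up strategy on the pair of obstructions
# (N_{l1}, N_{l2}).  Step counts only; no actual isogeny is computed.

def going_up_split(n1, n2):
    assert n1 >= n2 >= 0
    trace = [(n1, n2)]
    while n1 - n2 > 1:          # swap a factor l1 of the conductor for l2
        n1, n2 = n1 - 1, n2 + 1
        trace.append((n1, n2))
    while n2 > 0:               # l | f: a unique isogeny decreases both
        n1, n2 = n1 - 1, n2 - 1
        trace.append((n1, n2))
    return trace                # ends at (0,0) or (1,0): parity 0 or 1

assert going_up_split(4, 0)[-1] == (0, 0)   # parity 0: maximal order reached
assert going_up_split(3, 0)[-1] == (1, 0)   # parity 1: exceptional case
```

Note that the final state only depends on the parity of $i_1 + i_2$, matching the definition of parity above.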
\subsubsection{Final step via a cyclic isogeny}
In the exceptional cases of Theorem~\ref{thm:ellellsurface}, it may happen that one needs to do an extra step via a cyclic isogeny to reach maximal local endomorphism ring at $\ell$. Whenever this cyclic $\mathfrak l$-isogeny is computable via the algorithm of \cite{dudeanu-jetchev-robert}, one can
always reach maximal local endomorphism ring at $\ell$. But $\mathfrak l$-isogenies are computable if and only if $\mathfrak l$ is trivial in the narrow class group of $K_0$. We thus distinguish the following two cases:
\begin{enumerate}
\item If $\mathfrak l$-isogenies are computable by \cite{dudeanu-jetchev-robert} then one can always reach maximal local endomorphism ring at $\ell$.
\item If $\mathfrak l$-isogenies are not computable by \cite{dudeanu-jetchev-robert}, one can only use $(\ell,\ell)$-isogenies, and Theorem~\ref{thm:ellellsurface} tells us the largest order we can reach.
\end{enumerate}
\end{document}
\begin{document}
\title{A simple scheme for expanding photonic cluster states for quantum information}
\author{P. Kalasuwan$^1$, G. Mendoza$^{1,2}$, A. Laing$^1$, T. Nagata$^{3,4}$, J. Coggins$^1$, M. Callaway$^1$, S. Takeuchi$^{3,4}$, A. Stefanov$^5$, and J. L. O'Brien$^{*}$}
\address{$^1$H.H. Wills Physics Laboratory \& Department of Electrical and Electronic Engineering, University of Bristol, Merchant Venturers Building, Woodland Road,
Bristol, BS8 1UB, UK.\\$^2$California Institute of Technology, Pasadena, CA
91125, USA.\\
$^3$Research Institute for Electronic Science, Hokkaido
University, Sapporo 060-0812, Japan. \\$^4$The Institute of
Scientific and Industrial Research, Osaka University, Mihogaoka
8-1, Ibaraki, Osaka 567-0047, Japan\\$^5$Federal Office of
Metrology, METAS, Laboratory Time and Frequency,
Switzerland. }
\date{\today}
\begin{abstract}
We show how an entangled cluster state encoded in the polarization
of single photons can be straightforwardly expanded by
deterministically entangling additional qubits encoded in the path
degree of freedom of the constituent photons. This can be achieved
using a polarization--path controlled-phase gate. We
experimentally demonstrate a practical and stable realization of
this approach by using a Sagnac interferometer to entangle a path
qubit and polarization qubit on a single photon. We demonstrate
precise control over the phase of the path qubit to change the
measurement basis, and experimentally demonstrate properties of
measurement-based quantum computing using a two-photon, three-qubit
cluster state.
\end{abstract}
\maketitle
Quantum information science\cite{nielsen} promises both profound
insights into the fundamental workings of nature as well as new
technologies that harness uniquely quantum mechanical behavior
such as superposition and entanglement. Perhaps the most profound
aspect of both of these avenues is the prospect of a quantum
computer---a device which harnesses massive parallelism to gain
exponentially greater computational power for particular tasks. In
analogy with a conventional computer, quantum computing was
originally formulated in terms of quantum circuits consisting of
one- and two-qubit gates operating on a register of qubits which
are thereby transformed into the output state of a quantum
algorithm\cite{nielsen}. In 2001 a remarkable alternative was
proposed in which the computation starts with a particular
entangled state of many qubits---a cluster state---and the
computation proceeds via a sequence of single qubit measurements
from left to right that ultimately leave the rightmost column of
qubits in the answer state \cite{ra-prl-86-5188}.
Of the various physical systems being considered for quantum information science, photons are particularly attractive for their low noise properties, high speed transmission, and straightforward single qubit operations \cite{ob-sci-318-1567}; and a scheme for non-deterministic but scalable implementation of two-qubit logic gates ignited the field of all-optical quantum computing\cite{kn-nat-409-46}. In 2004 it was recognized that cluster states offered tremendous advantages for this optical approach \cite{ni-prl-93-040503,yo-prl-91-037903}: Because preparation of the cluster state can be probabilistic, non-deterministic logic gates are suitable for making it, removing much of the massive overhead associated with near-deterministic logic gates.
Soon after these theoretical developments there were
groundbreaking demonstrations of small-scale algorithms operating
on four photon cluster states \cite{wa-nat-434-169,pr-nat-445-65};
cluster states of up to six photons were produced
\cite{ki-prl-95-210502,lu-nphys-3-91}; and the importance of high
fidelity was quantified \cite{to-prl-100-210501}. It has been
recognized that encoding cluster states in multiple degrees of
freedom of photons may provide advantages to computation
\cite{jo-pra-76-052326} and has been demonstrated as a promising
route to high count rates and larger cluster states
\cite{ch-prl-99-120503,va-prl-100-160502}. However, these
demonstrations have relied on a sandwich source or double pass
crystal to create the cluster state, making their production
unwieldy, and scalability an issue. Here, we propose and
demonstrate a simple scheme which enables a path encoded qubit to
be added to any photon in a polarization encoded cluster state.
This is achieved using a deterministic controlled-phase (CZ) gate
between a photon's polarization and path. We use a Sagnac
interferometer architecture that provides a stable and practical
realization of this scheme and demonstrate simple
measurement-based operations on a 2 photon, 3 qubit cluster state
with high fidelity.
A standard way to define a cluster state is via a graph where the
nodes represent qubits, initially prepared in the
$\ket{+}\equiv(\ket{0}+\ket{1})/\sqrt{2}$ state, and connecting
bonds indicate that an entangling controlled-phase (CZ) gate has
been implemented between the pair of qubits that they connect, as
in Fig. \ref{schematic}(a) (because these CZ gates commute, the
order in which they are performed is not important). Adding a path
encoded qubit on a photon in a polarization encoded cluster state
therefore requires a CZ gate to be implemented between the
polarization of the photon and its path, which must have
previously been prepared in the $\ket{+}$ state (Fig.
\ref{schematic}(c)).
A polarizing beam splitter (PBS), that transmits horizontal and
reflects vertical polarizations of light, implements a
controlled-NOT (CNOT) gate on the polarization (control qubit) and
path (target qubit) of a single photon passing through it (Fig.
\ref{schematic}(c)). A CZ gate can be realized by implementing a
Hadamard ($\hat{H}$) gate ($\ket{0},
\ket{1}\leftrightarrow(\ket{0}\pm\ket{1})/\sqrt{2}$) on the target qubit
before and after a CNOT gate. For a path qubit a $\hat{H}$ can be
implemented with a non-polarizing 1/2 beamsplitter (BS). However,
preparation of the $\ket{+}$ state of the path (target) qubit
requires an additional $\hat{H}$, and $\hat{H}\hat{H}$ is the
identity operation $\hat{I}$; the $\hat{H}$ after the CNOT simply
implements a one qubit rotation, and is not included in our
demonstration. A PBS is therefore all that is required to add a
path qubit to a polarization cluster state. Measuring the path
qubit in an arbitrary basis, however, requires a phase shift
followed by BS, and so interferometric stability is required.
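As a sanity check on this gate identity (a sketch, not part of the original text), a small numpy calculation confirms that sandwiching the PBS's CNOT between Hadamards on the path (target) qubit yields a CZ gate:

```python
import numpy as np

# Single-qubit Hadamard and two-qubit gates; qubit order is
# (polarization = control, path = target).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# A PBS acts as a CNOT on (polarization, path); sandwiching the target
# between Hadamards (beam splitters for a path qubit) yields a CZ.
H_target = np.kron(I2, H)
constructed = H_target @ CNOT @ H_target
assert np.allclose(constructed, CZ)
```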
\begin{figure}
\caption{A simple scheme for adding photon path qubits to a polarization cluster state. (a)
The linear three-qubit cluster state can be created by preparing three qubits in the $\ket{+}
\label{schematic}
\end{figure}
As a simple demonstration of this approach, we constructed the 3
qubit cluster state
\begin{equation}
\left|\Phi_{3}^{lin}\right\rangle
=\frac{1}{\sqrt{2}}\left(\left|+\right\rangle
_{1}\left|0\right\rangle _{2}\left|0\right\rangle
_{3}-\left|-\right\rangle _{1}\left|1\right\rangle
_{2}\left|1\right\rangle _{3}\right), \label{3cluster}
\end{equation}
where the first two qubits were encoded in the polarization of two photons and the third qubit was the path of the second photon.
(Eq. \ref{3cluster} is locally equivalent to the usual 3 qubit
linear cluster state; simply with an $\hat{H}$ rotation applied to
qubit 3.) Our experimental scheme is shown schematically in Fig.
\ref{schematic}(e): Two photons prepared in the state
$\ket{1H}_1\ket{1V}_2$ converge onto a 1/2 beamsplitter,
non-deterministically creating the entangled state
$\left|\phi^+\right\rangle \equiv(\left|1H\right\rangle_{1}\left|1H\right\rangle_{2}
+\left|1V\right\rangle_{1}\left|1V\right\rangle_{2} )/{\sqrt{2}}$, where the number $1$ inside the kets indicates the photon number and the subscripts
$1$ and $2$ denote spatial paths. Photon 2 then travels through a
half-wave plate set at $22.5^{\circ}$, which implements a
$\hat{H}$ on polarization to create the two qubit cluster state. A
third qubit is added to the cluster by adding a path degree of
freedom on photon 2: Photon 2 enters the Sagnac interferometer via
a PBS cube, and forms a superposition of clockwise ($C$) and
counterclockwise ($D$) paths. The state then becomes
\begin{eqnarray}
\left|\psi\right\rangle =(\left|1H\right\rangle_{1}\left|1H\right\rangle_{C}
-\left|1H\right\rangle_{1}\left|1V\right\rangle_{D} \nonumber \\-\left|1V\right\rangle_{1}\left|1H\right\rangle_{C}
-\left|1V\right\rangle_{1}\left|1V\right\rangle_{D} )/{2}\label{eq:}
\end{eqnarray}
The relabeling $\ket{1H}_1\rightarrow\ket{1}_1$,
$\ket{1V}_1\rightarrow\ket{0}_1$,
$\ket{1H}_C\rightarrow\ket{1}_2\ket{0}_3$,
$\ket{1V}_D\rightarrow\ket{0}_2\ket{1}_3$ gives the state of Eq.
\ref{3cluster}.
The phase of the path qubit, qubit 3, can be controlled by the quarter and half waveplates (HWPs) inside the Sagnac interferometer; while the stability of this phase is provided by the Sagnac architecture (the visibility of the Sagnac interferometer was 99.5\%).
The angle $\alpha$ of the HWP in the interferometer sets the
relative phase between $\ket{0}_3$ and $\ket{1}_3$ to
$e^{i4\alpha}$. The measurement basis of qubit 3 is therefore
determined by $\alpha$.
Following the principles of cluster state quantum computation, an
arbitrary qubit rotation can be performed on qubit 3 (path qubit{,
$j=3$}) by measuring qubits 1 and 2 (polarization qubits) in the
basis $B_{j}(\varphi)\equiv\{
\left|\varphi_{+}\right\rangle_{j},\left|\varphi_{-}\right\rangle
_{j}\}$ where
$\left|\varphi_{\pm}\right\rangle_{j}\equiv\frac{1}{\sqrt{2}}(\left|0\right\rangle
_{j}\pm e^{-i\varphi}\left|1\right\rangle _{j})$. The outcome is recorded as
$m_{j}=0$ or $m_{j}=1$ if the measurement result on qubit $j$ is
$\left|\varphi_{+}\right\rangle _{j}$ or
$\left|\varphi_{-}\right\rangle _{j}$, respectively. {The
feed-forward information of $m_1$ selects the projection of the second
qubit: for $m_1=0$ ($m_1=1$) qubit 2 will be projected on
$\left|\varphi_{+}\right\rangle_2$($\left|\varphi_{-}\right\rangle_2)$.}
After these measurements, qubit 3 is in the state
{$\left|\psi\right\rangle_{3}=\sigma_{x}^{m_{2}}\sigma_{z}^{m_{1}}R_{x}\left(\varphi_{2}\right)R_{z}\left(\varphi_{1}\right)\left|+\right\rangle$.}
Hence, the path qubit can be projected into any state (up to a
known $\sigma_x$ operation).
The waveplate settings in front of the PBSs determine
$\varphi_1$ and $\varphi_2$; simultaneous detection of the two
photons at detectors $D_1$ and $D_2$ ideally results in a
sinusoidal interference fringe, as a function of $\alpha$, with a
phase and amplitude that depends on $\varphi_1$ and $\varphi_2$.
Figure~\ref{dm} shows the density matrix $\rho_{exp}$, obtained via quantum state tomography, of the polarization state of the two photons after the ordinary BS in Fig.~\ref{schematic}(e), before the path qubit is added. (Here the phase correction waveplates were set to produce the singlet state $\ket{\psi^-}\equiv(\ket{HV}-\ket{VH})/\sqrt{2}$, rather than $\ket{\phi^+}$.) It has a fidelity with the singlet state of $F=0.895$. A major source of this non-unit fidelity is that the BS had a reflectivity of $R=0.59$; the fidelity of $\rho_{exp}$ with the expected output state $\ket{\psi'}\equiv 0.57\ket{HV}+0.82\ket{VH}$ is $F=0.929$. The remaining imperfections predominantly arise from the non-unit visibility of quantum interference at the ordinary BS: the measured visibility for two photons of the same polarization was $V_{meas.}=0.91$, which is $V_{rel.}=0.97$ relative to the ideal visibility $V_{ideal}=0.937$ for an $R=0.59$ BS. This visibility results in reduced coherences in the measured density matrix shown in Fig.~\ref{dm}.
These imperfections in $\rho_{exp}$ will limit the performance of cluster state operations described below.
\begin{figure}
\caption{Real (left) and imaginary (right) parts
of the experimentally measured density matrix $\rho_{exp}
\label{dm}
\end{figure}
Figure \ref{fringes} shows experimentally measured coincidence counts as a function of $\alpha$ for several different projective measurements on (polarization) qubits 1 and 2: $B_{1}(\pi/2)\otimes B_{2}(\pi/2)$ (red), $B_{1}(\pi/2)\otimes B_{2}(\pi/4)$ (green), $B_{1}(\pi/2)\otimes B_{2}(0)$ (blue) and $B_{1}(\pi/2)\otimes B_{2}(-\pi/4)$ (black).
{The solid lines are the theoretical prediction of the fringe,
expressed as
$Y(\alpha)=Y_0(1+(1-2a^2)\cos(4\alpha+\varphi_{2})\\+2a\sqrt{1-a^2}\sin(4\alpha+\varphi_{2})\sin(\varphi_{1}))$,
where $Y_0$ is the peak coincidence count of each experiment
and $a(=0.567)$ is a constant depending on the
reflectivity ($R=0.59$) of the BS; the relation between $R$ and $a$
is $a^2=(1-R)^2/\left((1-R)^2+R^2\right)$.} The expected high-visibility fringes are observed in each case (the non-unit visibility is a result of the reduced coherences in $\rho_{exp}$); however, the phase of each fringe is {offset (by tens of degrees) compared to the case of an $R=0.5$ BS, in good agreement with $Y(\alpha)$}. Taking the $R=0.59$ BS into account explains these offsets well.
Similar fringes were measured for other projective measurements on
qubits 1 and 2: $\bigl\{ B_1(-\pi/4)$, $B_1(0), B_1(\pi/4)$,
$B_1(\pi/2)\bigl\}\otimes\bigl\{ B_2(-\pi/4)$, $B_2(0),
B_2(\pi/4)$, $B_2(\pi/2)\bigl\}$ (not shown), and again the
observed phases and visibilities were in good agreement with
predictions based on {an $R=0.59$ BS}. Observation of these
fringes confirms the correct one-qubit rotations are realized via
the measurements on the two-photon, three-qubit cluster state.
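The size of the fringe phase offset caused by the unbalanced BS can be estimated directly from the quoted fringe model. The sketch below (a rough illustration; the setting $\varphi_1=\pi/2$ is an assumption) combines the cosine and sine terms into a single shifted sinusoid and extracts the offset:

```python
import numpy as np

# Fringe model from the text:
# Y(alpha) = Y0*(1 + (1-2a^2)*cos(4a+phi2) + 2a*sqrt(1-a^2)*sin(4a+phi2)*sin(phi1)),
# with a^2 = (1-R)^2 / ((1-R)^2 + R^2).
R = 0.59
a = np.sqrt((1 - R)**2 / ((1 - R)**2 + R**2))

phi1 = np.pi / 2  # assumed measurement setting B1(pi/2), for illustration
A = 1 - 2 * a**2                              # cosine amplitude (vanishes at R = 0.5)
B = 2 * a * np.sqrt(1 - a**2) * np.sin(phi1)  # sine amplitude

# Writing A*cos(th) + B*sin(th) = C*sin(th + delta), delta is the fringe offset.
offset = np.degrees(np.arctan2(A, B))
assert 10 < offset < 40  # tens of degrees, consistent with the observed shift
```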
We have experimentally demonstrated a simple scheme for adding
path-encoded qubits to a polarization-encoded cluster state and
demonstrated simple one-qubit rotations on such a hybrid
path-polarization cluster state. Similar approaches have used less
stable Mach-Zehnder interferometers \cite{pa-oe-15-17960}; while
10 qubits on 5 photons have been entangled in a similar way
\cite{gao-2008}. Photonic approaches to exploring cluster states
and measurement-based quantum computation are currently the most
advanced. Further progress is limited by the number of photons,
making schemes for encoding more than one qubit per photon
appealing. The advent of high performance waveguide integrated
quantum circuits \cite{po-sci-320-646,marshall-2008} that include
ultra-stable interferometers \cite{po-sci-320-646,matthews-2008}
and precise optical phase control \cite{matthews-2008}, is a
promising architecture for this approach. Our scheme uses entanglement of the polarization and path degrees of freedom of a single photon.
This enables the addition of a path qubit to any photon in a polarization cluster
state. The path qubit is not fully connected in the
cluster, because a path qubit can only be connected to the polarization qubit of the photon it shares. This is most useful at the edges of the cluster state.
With current approaches using up to six photons, adding path
qubits in this way has the potential to significantly increase the
size of cluster states, and thereby the complexity of algorithms
that can be implemented. However, it may also be possible to entangle path qubits from different photons \cite{ra-pra-65-062324} to develop more sophisticated cluster states.
\begin{figure}
\caption{{Path qubit rotation via polarization qubit measurements,
superimposed on the theoretical curves. Fringes in the coincidence
counts appear as a function of $\alpha$, as described in
the text. The solid lines represent the theoretical prediction
given the reflectivity of our BS. The experimental points $\circ$ with error bars are fitted by the dashed lines.}
\label{fringes}
\end{figure}
\noindent We thank T. Rudolph, N. Yoran and X.-Q.
Zhou for helpful discussions. This work was supported by IARPA,
EPSRC, QIP IRC and the Leverhulme Trust. G.M. acknowledges
support from Caltech's Summer Undergraduate Research Fellowship
(SURF).
\end{document}
\begin{document}
\begin{frontmatter}
\title{Reciprocal first-order second-moment method}
\author{Benedikt Kriegesmann}
\author{Julian K. L\"udeker}
\address{Hamburg University of Technology}
\address{[email protected]}
\begin{abstract}
This paper shows a simple parameter substitution, which makes use of the reciprocal relation of typical objective functions with typical random parameters.
Thereby, the accuracy of first-order probabilistic analysis improves significantly at almost no additional computational cost.
The parameter substitution requires a transformation of the stochastic distribution of the substituted parameter, which is explained for different cases.
\end{abstract}
\begin{keyword}
Probabilistic analysis \sep first-order\sep reciprocal approximation
\end{keyword}
\end{frontmatter}
\section{Introduction}
\textsc{Taylor} series approximations of an objective function allow for a very fast determination of the mean and variance of this objective function and are therefore frequently used in robust design optimization.
While most researchers use second-order approximations (see e.g. \cite{doltsinis_robust_2004,asadpoure_robust_2011}), the authors showed that using a first-order approximation is also suitable for certain cases at much less computational cost \cite{kriegesmann_robust_2019}.
For second-order approaches, the computational cost at least scales with the number of random parameters, while the first-order approximation presented in \cite{kriegesmann_robust_2019} required solving only two systems of equations.
Typical objective functions are compliance of a structure, the displacement at a certain point, or the maximum stress.
Typical random parameters are material properties (e.g. \textsc{Young}'s modulus or yield stress), geometric measures (e.g. shape or thickness), and loads (direction and/or magnitude).
In linear mechanics, displacement and stress are indeed linearly dependent on the load, and the compliance is quadratically dependent on the load magnitude.
However, for the other random parameters mentioned, this relationship is reciprocal.
The benefits of using reciprocal approximations have already been identified by Schmit and Farshi \cite{schmit_approximation_1974}, and such approximations are still widely used in the context of structural optimization \cite{fleury_claude_structural_1986,svanberg_method_1987}.
For probabilistic analyses, however, the reciprocal relation has only been utilized in the work of Fuchs and Shabtay \cite{fuchs_reciprocal_2000} (to the best of the authors' knowledge).
They used the reciprocal relation for the perturbation method. Hence, they expanded the equilibrium equation for random stiffness to get the stochastic moments of the displacement vector.
In this paper, a much simpler formulation is presented to consider the reciprocal relationship in a \textsc{Taylor} expansion of the objective function, which is formulated for arbitrary objective functions and arbitrary random parameters.
The major implication of using this approach (which also holds for the work of Fuchs and Shabtay \cite{fuchs_reciprocal_2000}) is the need for stochastic moments of the reciprocal parameters.
Therefore, the paper shows how to determine the required stochastic moments of the reciprocal parameters for different cases.
\section{First-order second-moment method for mean and variance approximation}
\label{sec:FOSM}
Firstly, the well-known approach to determine mean and variance of an objective function using a \textsc{Taylor} series is recalled.
Consider the objective function $g(\boldsymbol{x})$, which is a function of the random vector $\boldsymbol{X}$.
The \textsc{Taylor} series approximation of $g$ is expanded at the mean vector ${\boldsymbol{\mu}}_X$ of $\boldsymbol{X}$.
\begin{equation}
\begin{aligned}
g\left( {\boldsymbol{x}} \right) & = g\left( {{{\boldsymbol{\mu }}_X}} \right) + \sum\limits_{i = 1}^n {\frac{{\partial g\left( {{{\boldsymbol{\mu }}_X}} \right)}}{{\partial {x_i}}}\left( {{x_i} - {\mu _{{X_i}}}} \right)} \\
& + \frac{1}{2}\sum\limits_{i = 1}^n {\sum\limits_{j = 1}^n {\frac{{{\partial ^2}g\left( {{{\boldsymbol{\mu }}_X}} \right)}}{{\partial {x_i}\,\partial {x_j}}}\left( {{x_i} - {\mu _{{X_i}}}} \right)\left( {{x_j} - {\mu _{{X_j}}}} \right)} } + \ldots
\end{aligned}
\label{eq:Taylor_gx}
\end{equation}
Inserting the first-order terms of (\ref{eq:Taylor_gx}) in the integrals that determine the mean $\mu_g$ and the variance $\sigma_g^2$ yields
\begin{equation}
\begin{aligned}
{\mu _g} & = \int\limits_{ - \infty }^\infty {g\left( {\boldsymbol{x}} \right){f_{\boldsymbol{X}}}\left( {\boldsymbol{x}} \right)d{\boldsymbol{x}}} \approx g\left( {{{\boldsymbol{\mu }}_X}} \right) \\
\sigma _g^2 & = \int\limits_{ - \infty }^\infty {{{\left[ {g\left( {\boldsymbol{x}} \right) - {\mu _g}} \right]}^2}{f_{\boldsymbol{X}}}\left( {\boldsymbol{x}} \right)d{\boldsymbol{x}}} \approx \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^n {\frac{{\partial g\left( {{{\boldsymbol{\mu }}_X}} \right)}}{{\partial {x_i}}}\frac{{\partial g\left( {{{\boldsymbol{\mu }}_X}} \right)}}{{\partial {x_j}}}{\mathop{\rm cov}} \left( {{X_i},{X_j}} \right)} }
\end{aligned}
\label{eq:FOSM_x}
\end{equation}
For a second-order approach, e.g. the second-order fourth-moment (SOFM) method, refer for instance to \cite{kriegesmann_fast_2011}.
From eqs. (\ref{eq:FOSM_x}) it becomes obvious that the \textsc{Taylor} series approximation of $g$ must be acceptable in regions where $f_{\boldsymbol{X}}({\boldsymbol{x}})$ is significantly different from zero, which is typically the region close to ${\boldsymbol{\mu}}_X$.
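The first-order estimates above take only a few lines to implement. The following sketch (the objective function and its gradient are assumed toy examples, not from the paper) evaluates the FOSM mean and variance; for a linear objective the approximation is exact:

```python
import numpy as np

def fosm(g, grad_g, mu_X, cov_X):
    """First-order second-moment estimates of mean and variance of g(X)."""
    mu_g = g(mu_X)                       # mean approximated by g at the mean vector
    J = np.atleast_1d(grad_g(mu_X))      # gradient of g at mu_X
    var_g = J @ np.atleast_2d(cov_X) @ J # double sum over cov(X_i, X_j)
    return mu_g, var_g

# Assumed toy objective: g(x) = x1 + 2*x2 with independent inputs.
g = lambda x: x[0] + 2 * x[1]
grad = lambda x: np.array([1.0, 2.0])
mu, var = fosm(g, grad, np.array([1.0, 1.0]), np.diag([0.04, 0.01]))
assert np.isclose(mu, 3.0) and np.isclose(var, 0.04 + 4 * 0.01)
```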
\section{Reciprocal first-order approximation by parameter substitution}
\label{sec:reciFOSM}
The basic idea of the reciprocal approach is to substitute the original random vector $\boldsymbol{X}$ by $\boldsymbol{Z}$, where for each $i$-th entry
\begin{equation}
{z_i} = \frac{1}{{{x_i}}} \Leftrightarrow {x_i} = \frac{1}{{{z_i}}}
\label{eq:x_to_z}
\end{equation}
Now, the objective function $g\left( {{\boldsymbol{x}}\left( {\boldsymbol{z}} \right)} \right)$ is expanded in ${\boldsymbol{z}}$ at ${\boldsymbol{\mu}}_Z$.
\begin{equation}
g\left( {{\boldsymbol{x}}\left( {\boldsymbol{z}} \right)} \right) = g\left( {\boldsymbol{x}}({{{\boldsymbol{\mu }}_Z}}) \right) + \sum\limits_{i = 1}^n {\frac{{\partial g\left( {\boldsymbol{x}}({{{\boldsymbol{\mu }}_Z}}) \right)}}{{\partial {z_i}}}\left( {{z_i} - {\mu _{{Z_i}}}} \right)} + \ldots
\label{eq:Taylor_gz}
\end{equation}
Here, ${\boldsymbol{\mu}}_Z$ is the mean vector of ${\boldsymbol{Z}}$. Inserting (\ref{eq:Taylor_gz}) into eqs. (\ref{eq:FOSM_x}) yields
\begin{equation}
\begin{aligned}
{\mu _g} & \approx g\left( {\boldsymbol{x}}({{{\boldsymbol{\mu }}_Z}}) \right) \\
\sigma _g^2 & \approx \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^n {\frac{{\partial g\left( {\boldsymbol{x}}({{{\boldsymbol{\mu }}_Z}}) \right)}}{{\partial {z_i}}}\frac{{\partial g\left( {\boldsymbol{x}}({{{\boldsymbol{\mu }}_Z}}) \right)}}{{\partial {z_j}}}{\mathop{\rm cov}} \left( {{Z_i},{Z_j}} \right)} }
\end{aligned}
\label{eq:FOSM_z}
\end{equation}
The derivative of (\ref{eq:x_to_z}) equals
\begin{equation}
\frac{{\partial {x_i}}}{{\partial {z_j}}} = \left\{ \begin{array}{cc}
 - \frac{1}{{z_i^2}}\quad & \forall i = j\\
0\quad & \forall i \ne j
\end{array} \right.
\label{eq:dxdz}
\end{equation}
The derivative of $g$ with respect to $z_i$ therefore is
\begin{equation}
\frac{{\partial g\left( {\boldsymbol{z}} \right)}}{{\partial {z_i}}} = \frac{{\partial g}}{{\partial {x_i}}}\frac{{\partial {x_i}}}{{\partial {z_i}}}
\label{eq:dgdz}
\end{equation}
What still needs to be determined for evaluating (\ref{eq:FOSM_z}) is the mean vector ${\boldsymbol{\mu}}_Z$ at which $g\left( {{\boldsymbol{x}}\left( {\boldsymbol{z}} \right)} \right)$ and its derivatives are evaluated, and the covariance ${\mathop{\rm cov}} \left( {{Z_i},{Z_j}} \right)$. Determining these stochastic moments based on (\ref{eq:x_to_z}) is less obvious than it may seem, and therefore it is discussed in the next section.
The authors initially tried to express $\mu_g$ and $\sigma_g^2$ purely in terms of stochastic moments of $\boldsymbol{X}$ by a reciprocal expansion. However, this turned out not to be possible, but also not necessary.
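The advantage of expanding in $z$ can be seen on an assumed toy example (not from the paper): for a purely reciprocal objective $g(x)=c/x$, $g$ is linear in $z=1/x$, so the first-order expansion in $z$ reproduces the Monte Carlo moments exactly, while standard FOSM in $x$ does not:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 4.0
x = rng.lognormal(mean=0.0, sigma=0.4, size=200_000)  # assumed input distribution

# g(x) = c/x is linear in z = 1/x, so the first-order expansion in z is exact.
z = 1.0 / x
mu_Z, var_Z = z.mean(), z.var()

mu_rec = c * mu_Z       # recFOSM mean: g evaluated at x(mu_Z)
var_rec = c**2 * var_Z  # recFOSM variance: (dg/dz)^2 * var(Z), with dg/dz = c

g = c / x               # Monte Carlo reference on the same samples
assert np.isclose(mu_rec, g.mean())
assert np.isclose(var_rec, g.var())

# Standard FOSM in x, by contrast, underestimates the mean for this g
# (Jensen's inequality: E[1/X] > 1/E[X]).
mu_fosm = c / x.mean()
assert mu_fosm < g.mean()
```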
\section{Transformation to reciprocal random parameter}
This section shows how ${\boldsymbol{\mu}}_Z$ and ${\mathop{\rm cov}} \left( {{Z_i},{Z_j}} \right)$ can be determined firstly, in case the distribution of $\boldsymbol{X}$ is given explicitly and secondly, in case measurement data of $\boldsymbol{X}$ are available.
\subsection{Case 1 – distribution given}
\label{sec:reciRand_distGiven}
Given the probability density function $f_{X}$ of a single random parameter $X$, the probability density function of $Z$ (which is related to $X$ by (\ref{eq:x_to_z})) is given by
\begin{equation}
{f_Z}\left( z \right) = \frac{1}{{{z^2}}}{f_X}\left( {\frac{1}{z}} \right)
\label{eq:pdf_trans}
\end{equation}
Based on (\ref{eq:pdf_trans}), the mean and the variance of $Z$, which are required for applying the FOSM approach, are given by
\begin{equation}
{\mu _Z} = \int\limits_0^\infty {\frac{1}{z}{f_X}\left( {\frac{1}{z}} \right)\;dz}
\label{eq:mu_Z}
\end{equation}
and
\begin{equation}
\sigma _Z^2 = \int\limits_0^\infty {{f_X}\left( {\frac{1}{z}} \right)\;dz} - \mu _Z^2
\label{eq:sigma2_Z}
\end{equation}
The Appendix provides the few lines of Matlab code which are required to solve (\ref{eq:mu_Z}) and (\ref{eq:sigma2_Z}).
However, solving these integrals is not always easy and sometimes even impossible (e.g. for the standard \textsc{Gauss} distribution).
On the other hand, for several distributions $f_X$ the distribution $f_Z$ of the reciprocal variable is known, e.g. for the
uniform distribution, the \textsc{Cauchy} distribution, the $F$-distribution, and the gamma distribution \cite{forbes_statistical_2011}.
This is demonstrated for the very simple case of the $F$-distribution, which has the two parameters $m$ and $n$. The mean and variance of the $F$-distribution are given by
\begin{equation}
{\mu _{F{\rm{ - dist}}}} = \frac{n}{{n - 2}} \quad \quad \quad \quad \sigma _{F{\rm{ - dist}}}^2 = \frac{{2{n^2}\left( {m + n - 2} \right)}}{{m{{\left( {n - 2} \right)}^2}\left( {n - 4} \right)}}
\label{eq:mean_var_Fdist}
\end{equation}
If $X$ follows $F$-distribution, i.e. $X \sim F(m,n)$, then $Z \sim F(n,m)$.
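The quoted moment formulas and the reciprocal property of the $F$-distribution can be checked numerically. The following sketch assumes scipy is available; the parameter values match those used in the Case 1 example below:

```python
import numpy as np
from scipy import stats

m, n = 25, 100  # parameters used in the Case 1 example

# Closed-form mean and variance of the F-distribution, as quoted in the text.
mu_F = n / (n - 2)
var_F = 2 * n**2 * (m + n - 2) / (m * (n - 2)**2 * (n - 4))
assert np.isclose(mu_F, stats.f(m, n).mean())
assert np.isclose(var_F, stats.f(m, n).var())

# If X ~ F(m, n) then Z = 1/X ~ F(n, m): check the mean by sampling.
rng = np.random.default_rng(1)
z = 1.0 / stats.f(m, n).rvs(size=200_000, random_state=rng)
assert abs(z.mean() - stats.f(n, m).mean()) < 0.01
```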
This section only looked at single random parameters. If the entries of the random vector $\boldsymbol{X}$ are independent, the transformation of the distribution can be done component-wise. However, if the entries of $\boldsymbol{X}$ are dependent, their covariance has to be considered. In case $\boldsymbol{X}$ is \textsc{Gaussian}, it can be transformed easily to independent parameters. In general however, this may not be possible and makes it difficult to determine $f_{\boldsymbol{Z}}$ analytically. A workaround in such case is to generate random realizations $\boldsymbol{x}$ of $\boldsymbol{X}$ and use the approach given in the following section.
\subsection{Case 2 – data given}
Given $n$ measurement data (or realizations) ${\boldsymbol{x}^{(k)}}$ of the random vector $\boldsymbol{X}$, each $i$-th entry of each $k$-th realization can be transformed to ${z_i^{(k)}} = \frac{1}{{{x_i^{(k)}}}}$.
Then, mean vector and covariance matrix of ${\boldsymbol{Z}}$ are determined by the well-known empirical estimators
\begin{equation}
\begin{aligned}
{\boldsymbol{\mu}}_Z & \approx \frac{1}{n} \sum_{k=1}^{n} {\boldsymbol{z}^{(k)}} \\
{\mathop{\rm cov}} \left( {{Z_i},{Z_j}} \right) & \approx \frac{1}{n-1} \sum_{k=1}^{n}
\left( {{z_i^{(k)}} - {\mu _{{Z_i}}}} \right)\left( {{z_j^{(k)}} - {\mu _{{Z_j}}}} \right)
\end{aligned}
\end{aligned}
\end{equation}
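In code, these estimators reduce to a mean and a covariance of the transformed samples. The sketch below uses assumed synthetic "measurement" data (shifted away from zero so that the reciprocal is well behaved):

```python
import numpy as np

# Assumed synthetic measurement data x^(k) of a two-entry random vector X.
rng = np.random.default_rng(2)
x = rng.weibull(5.0, size=(10_000, 2)) + 0.5

z = 1.0 / x                        # transform each entry of each realization
mu_Z = z.mean(axis=0)              # empirical mean vector of Z
cov_Z = np.cov(z, rowvar=False)    # empirical 1/(n-1) covariance estimator

assert mu_Z.shape == (2,) and cov_Z.shape == (2, 2)
assert np.allclose(cov_Z, cov_Z.T)  # covariance matrices are symmetric
```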
\section{Examples}
\begin{figure}
\caption{Cantilever beam example}
\label{fig:cantilever}
\end{figure}
For demonstrating the effect of using the reciprocal FOSM method given in the previous section, consider the cantilever beam example shown in Figure~\ref{fig:cantilever} with the load $F = 0.1kN$, the length $L = 1000mm$, the \textsc{Young}'s modulus $E = 70kN/mm^2$ and a rectangular cross section with the height $h=30mm$ and width $b=30mm$.
The objective function is the displacement at the tip $w = 4FL^3/(Eh^3b)$.
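As a quick check, evaluating this objective function at the nominal parameter values gives a tip displacement of roughly $7.05\,$mm, matching the small-CoV mean values reported in the tables below:

```python
# Tip displacement of the cantilever beam, w = 4 F L^3 / (E h^3 b),
# at the nominal values given in the text (kN, mm, kN/mm^2, mm, mm).
F, L, E, h, b = 0.1, 1000.0, 70.0, 30.0, 30.0
w = 4 * F * L**3 / (E * h**3 * b)
assert abs(w - 7.055) < 0.01  # mm
```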
\begin{figure}
\caption{Displacement at tip $w$ as a function of \textsc{Young}
\label{fig:w_over_Eh}
\end{figure}
In the following, the \textsc{Young}'s modulus $E$ and the height $h$ will be considered as random.
As depicted in Figure~\ref{fig:w_over_Eh}, for this simple example the objective function $w$ is indeed inversely proportional to $E$, whereas the dependency of $w$ on $h$ is of higher-order nonlinearity.
\subsection{Case 1 – distribution given}
The \textsc{Young}'s modulus is now expressed as $E = \alpha \cdot E_0$, where $E_0$ is the nominal value and $\alpha$ is a random parameter that follows an $F$-distribution with $m=25$ and $n=100$. Using eqs. (\ref{eq:mean_var_Fdist}), the mean value and standard deviation of $E$ are determined to equal $\mu_E = 71.4kN/mm^2$ and $\sigma_E = 22.9kN/mm^2$. In this example, the coefficient of variation equals $CoV = 0.32$, which is unrealistically large for most materials and has been chosen for demonstration purposes.
As stated in section~\ref{sec:reciRand_distGiven}, the reciprocal random parameter $Z=1/\alpha$ follows an $F$-distribution with $m=100$ and $n=25$.
Table~\ref{tab:resCase1} summarizes the results of applying the FOSM method, the SOFM method, the reciprocal FOSM (recFOSM) method introduced in section~\ref{sec:reciFOSM} and the Monte Carlo method with $10^5$ samples.
Since the deflection $w$ is indeed reciprocal with respect to the \textsc{Young}'s modulus $E$, the reciprocal FOSM approach provides almost exactly the same results as the Monte Carlo simulation at the same computational cost as the standard FOSM method.
\begin{table}[h]
\centering
\begin{tabular}{lcccc}
\hline \hline
Approach & FOSM & SOFM & recFOSM & Monte Carlo \\
\hline
$\mu_w$ & 6.91 & 7.62 & 7.67 & 7.68 \\
$\sigma_w$ & 2.21 & 2.51 & 2.62 & 2.63 \\
\hline \hline
\end{tabular}
\caption{Mean and standard deviation of the beam displacement $w$ due to a $F$-distributed random \textsc{Young}'s modulus determined by different approaches}
\label{tab:resCase1}
\end{table}
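The FOSM and recFOSM columns of Table~\ref{tab:resCase1} can be reproduced directly from the closed-form $F$-distribution moments (a sketch; the SOFM and Monte Carlo columns are taken from the text):

```python
# Nominal tip displacement of the beam, in mm.
w0 = 4 * 0.1 * 1000.0**3 / (70.0 * 30.0**3 * 30.0)

m, n = 25, 100
mu_a = n / (n - 2)                                     # mean of alpha ~ F(25, 100)
var_a = 2 * n**2 * (m + n - 2) / (m * (n - 2)**2 * (n - 4))

# Standard FOSM in alpha, with w = w0/alpha.
mu_fosm = w0 / mu_a
sig_fosm = w0 / mu_a**2 * var_a**0.5

# recFOSM in z = 1/alpha, with Z ~ F(100, 25): w = w0*z is linear in z.
mz, nz = n, m
mu_z = nz / (nz - 2)
var_z = 2 * nz**2 * (mz + nz - 2) / (mz * (nz - 2)**2 * (nz - 4))
mu_rec = w0 * mu_z
sig_rec = w0 * var_z**0.5

assert abs(mu_fosm - 6.91) < 0.01 and abs(sig_fosm - 2.21) < 0.01
assert abs(mu_rec - 7.67) < 0.01 and abs(sig_rec - 2.62) < 0.01
```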
\subsection{Case 2 – data given}
In order to simulate the case that measurement data are given, random realizations are generated using the \textsc{Weibull} distribution. Here, the mean value always equals the nominal value and the variance is determined from a prescribed coefficient of variation, which varies throughout the section.
Firstly, consider the \textsc{Young}'s modulus as random parameter. The coefficient of variation takes the values of $\text{CoV} \in [0.025, 0.05, 0.1, 0.25, 0.4]$.
Figure~\ref{fig:randE_over_CoV} shows the results of applying the four approaches considered for different coefficients of variation of the \textsc{Young}'s modulus. Table~\ref{tab:resCase2_E} gives numerical values for the most extreme cases. The results show that for small CoVs, the FOSM and SOFM approaches are accurate, while for large CoVs they deviate strongly from the Monte Carlo solution (using $10^5$ samples). The reciprocal FOSM result, however, is always accurate.
\begin{table}[h]
\centering
\begin{tabular}{lcccc}
\hline \hline
Approach & FOSM & SOFM & recFOSM & Monte Carlo \\
\hline
$\mu_w$ for CoV$=0.025$ & 16.46 & 16.47 & 16.47 & 16.47 \\
$\sigma_w$ for CoV$=0.025$& 0.407 & 0.407 & 0.418 & 0.418 \\
\hline
$\mu_w$ for CoV$=0.4$ & 16.46 & 17.49 & 17.85 & 17.85 \\
$\sigma_w$ for CoV$=0.4$ & 4.11 & 4.34 & 6.37 & 6.37 \\
\hline \hline
\end{tabular}
\caption{Mean and standard deviation of the beam displacement $w$ due to a random \textsc{Young}'s modulus determined by different approaches and different coefficients of variation}
\label{tab:resCase2_E}
\end{table}
\begin{figure}
\caption{Mean $\mu_w$ (left) and standard deviation $\sigma_w$ (right) of the beam displacement due to a random \textsc{Young}
\label{fig:randE_over_CoV}
\end{figure}
Next, the cross section height $h$ is considered as random parameter. Its coefficient of variation is chosen to take values of $\text{CoV} \in [0.01, 0.05, 0.1, 0.15]$.
The results are given in Table~\ref{tab:example_results} and shown in Figure~\ref{fig:randH_over_CoV}.
Due to the strongly nonlinear influence of $h$ on the deflection $w$, the results of FOSM, SOFM and reciprocal FOSM deviate from the Monte Carlo solution already for small CoVs. While SOFM provides better results for the mean $\mu_w$, the reciprocal FOSM is more accurate for the standard deviation $\sigma_w$.
\begin{table}[h]
\centering
\begin{tabular}{lcccc}
\hline \hline
Approach & FOSM & SOFM & recFOSM & Monte Carlo \\
\hline
$\mu_w$ for CoV$=0.01$ & 7.056 & 7.061 & 7.059 & 7.061 \\
$\sigma_w$ for CoV$=0.01$ & 0.213 & 0.212 & 0.215 & 0.218 \\
\hline
$\mu_w$ for CoV$=0.15$ & 7.06 & 8.02 & 7.64 & 8.48 \\
$\sigma_w$ for CoV$=0.15$ & 3.18 & 3.49 & 4.13 & 6.61 \\
\hline \hline
\end{tabular}
\caption{Mean and standard deviation of the beam displacement $w$ due to a random cross section height determined by different approaches and different coefficients of variation}
\label{tab:example_results}
\end{table}
\begin{figure}
\caption{Mean $\mu_w$ (left) and standard deviation $\sigma_w$ (right) of the beam displacement due to a random cross section height for increasing coefficient of variation CoV}
\label{fig:randH_over_CoV}
\end{figure}
\section{Conclusion}
The presented reciprocal FOSM method requires the same number of function evaluations and the same derivatives as the standard FOSM method.
Hence, the computational cost is the same and therefore much less than for a second-order approach or Monte Carlo simulations.
For parameters which have a reciprocal relationship to the objective function, the reciprocal FOSM method provides exact results.
If the objective function is non-linear with respect to the considered parameter, the reciprocal FOSM method still provides much more accurate results than the FOSM approach and can even be more accurate than a second-order approach.
Especially for the use in robust design optimization, the reciprocal FOSM method is a promising approach as it accurately determines the variance of an objective at very low computational costs.
\section*{Appendix}
This Appendix provides the Matlab code for determining the mean and variance of the reciprocal random variable $Z$ from a given probability density function of $X$. The example is given for a Weibull distribution. The inverse cumulative distribution function (CDF) is used for the validation via Monte Carlo sampling. The most challenging part of applying the example to other distributions is the generation of samples for the validation. Since for many distributions the inverse CDF is unknown, it has to be determined numerically or an acceptance-rejection method has to be used. On request, the author can provide further examples, such as for the log-normal, normal or $\beta$-distribution.
\begin{lstlisting}[style=mlab]
a=3; b=5;                                    % Weibull-type parameters
pdfX = @(x) a*b* x.^(b-1) .* exp(-a* x.^b ); % pdf of X
invCDFx = @(u) ( -1/a * log(1-u) ).^(1/b);   % inverse CDF of X
% mean of Z = 1/X via the transformed density f_Z(z) = f_X(1/z)/z^2
fun = @(z) 1./z.*pdfX(1./z);
mu_Z = integral(fun,0,Inf)
% second moment: E[Z^2] = int f_X(1/z) dz
fun = @(z) pdfX(1./z);
sigma2_Z = integral(fun,0,Inf) - mu_Z^2
% Monte Carlo validation via inverse-CDF sampling
nos = 1000000;                               % number of samples
xr = invCDFx(rand(nos,1));
zr = 1./xr;
mu_Z_MC = mean(zr)
sigma2_Z_MC = var(zr)
\end{lstlisting}
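For readers working outside Matlab, the same computation can be sketched in Python (our translation, not the author's code; NumPy only, with a simple trapezoid rule standing in for Matlab's `integral`):

```python
import math
import numpy as np

a, b = 3.0, 5.0  # Weibull-type parameters, as in the Matlab listing

def pdf_x(x):
    """pdf of X: f(x) = a*b*x^(b-1)*exp(-a*x^b)."""
    return a * b * x**(b - 1) * np.exp(-a * x**b)

def inv_cdf_x(u):
    """Inverse CDF: F^{-1}(u) = (-(1/a)*log(1-u))^(1/b)."""
    return (-np.log(1.0 - u) / a) ** (1.0 / b)

def trapezoid(f, lo, hi, n):
    """Elementary trapezoid rule on a uniform grid."""
    z = np.linspace(lo, hi, n)
    y = f(z)
    return float(np.sum((y[:-1] + y[1:]) * (z[1] - z[0]) / 2.0))

# Mean and variance of Z = 1/X via the transformed density f_Z(z) = f_X(1/z)/z^2:
#   E[Z]   = \int (1/z) f_X(1/z) dz,      E[Z^2] = \int f_X(1/z) dz.
mu_Z = trapezoid(lambda z: pdf_x(1.0 / z) / z, 1e-9, 50.0, 400_001)
ez2 = trapezoid(lambda z: pdf_x(1.0 / z), 1e-9, 50.0, 400_001)
sigma2_Z = ez2 - mu_Z**2

# Monte Carlo validation via inverse-CDF sampling
rng = np.random.default_rng(0)
zr = 1.0 / inv_cdf_x(rng.random(1_000_000))
mu_Z_MC, sigma2_Z_MC = float(np.mean(zr)), float(np.var(zr))
```

For this particular density, $X^b$ is exponentially distributed with rate $a$, so the moments of $Z$ have closed forms, $\mathrm{E}[Z]=a^{1/b}\,\Gamma(1-1/b)$ and $\mathrm{E}[Z^2]=a^{2/b}\,\Gamma(1-2/b)$, which both the quadrature and the Monte Carlo estimates reproduce.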
\end{document}
\begin{document}
\title{A product construction for hyperbolic metric spaces}
\maketitle
\begin{center}
{\large Thomas Foertsch *\footnote{* Supported by the Deutsche Forschungsgemeinschaft (FO 353/1-1)} \hspace{1cm} Viktor Schroeder$^{\sharp}$\footnote{$\sharp$
Partially supported by the Suisse National Science Foundation}}
\footnote{2000 Mathematics Subject Classification. Primary 53C21} \\
\end{center}
\begin{abstract}
Given two pointed Gromov hyperbolic metric spaces $(X_i,d_i,z_i)$, $i=1,2$, and $\Delta \in \mathbb{R}^+_0$, we present a
construction method, which yields another Gromov hyperbolic metric space $Y_{\Delta}=Y_{\Delta}((X_1,d_1,z_1),(X_2,d_2,z_2))$.
Moreover, it is shown that if both $(X_i,d_i)$, $i=1,2$, are roughly geodesic, then there exists a ${\Delta}'\ge 0$ such that
$Y_{\Delta}$ is also roughly geodesic for all $\Delta \ge {\Delta}'$.
\end{abstract}
\section{Introduction}
\label{sec-intro}
A metric space $(X,d)$ is called $\delta$-hyperbolic, $\delta \ge 0$, if for all $x,y,z,w\in X$
\begin{equation} \label{eqn-def-hyperbolicity}
d(x,y)+d(z,w) \; \le \; \max \{ d(x,z)+d(y,w),d(x,w)+d(y,z)\} \; + \; 2\delta
\end{equation}
and is said to be Gromov hyperbolic if it is $\delta$-hyperbolic for some $\delta \ge 0$. \\
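Condition (\ref{eqn-def-hyperbolicity}) is easy to test mechanically on finite metric spaces. The following Python sketch (ours, purely illustrative) computes the smallest admissible $\delta$ over all quadruples of distinct points; a star-shaped tree metric yields $\delta=0$, while the unit $4$-cycle does not:

```python
from itertools import combinations

def four_point_delta(d, points):
    """Smallest delta for which inequality (1) holds on `points`:
    for each quadruple, sort the three pairwise sums L >= M >= S;
    the required 2*delta is the maximum of L - M."""
    worst = 0.0
    for x, y, z, w in combinations(points, 4):
        s = sorted([d(x, y) + d(z, w), d(x, z) + d(y, w), d(x, w) + d(y, z)],
                   reverse=True)
        worst = max(worst, s[0] - s[1])
    return worst / 2.0

# A star-shaped tree with four leaves: every pair of leaves is at distance 2.
star = lambda p, q: 0.0 if p == q else 2.0
delta_star = four_point_delta(star, [0, 1, 2, 3])  # trees are 0-hyperbolic

# The 4-cycle with unit edges requires delta = 1 on the quadruple 0,1,2,3.
cyc = lambda p, q: min((p - q) % 4, (q - p) % 4)
delta_cycle = four_point_delta(cyc, [0, 1, 2, 3])
```

Since the deficiency vanishes automatically whenever two of the four points coincide, it suffices to scan quadruples of distinct points.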
Let $(X_i,d_i)$ be Gromov hyperbolic metric spaces and fix $z_i\in X_i$, $i=1,2$. For $\Delta \ge 0$ consider the set
$Y_{\Delta}=Y_{\Delta}(X_1,d_1,z_1,X_2,d_2,z_2)$ defined via
\begin{displaymath}
Y_{\Delta} \; := \; \{ (x_1,x_2)\in X_1\times X_2 \; | \; |d_1(x_1,z_1)-d_2(x_2,z_2)|\le \Delta\} \subset X \; := \; X_1\times X_2.
\end{displaymath}
On $Y_{\Delta}$ we consider the metric $d_m|_{Y_{\Delta}\times Y_{\Delta}}$ which is the restriction of the $l_{\infty}$-product metric
$d_m:X\times X\longrightarrow \mathbb{R}^+_0$
\begin{displaymath}
d_m\Big( (x_1,x_2),(x_1',x_2')\Big) \; := \; \max \{d_1(x_1,x_1'),d_2(x_2,x_2')\},
\end{displaymath}
for all $x_1,x_1'\in X_1, x_2,x_2'\in X_2$, to $Y_{\Delta}\times Y_{\Delta}\subset X\times X$. \\
Our paper is based on the following elementary observation which we refer to as
\begin{theorem} \label{theo-gen}
Let $(X_i,d_i)$ be Gromov hyperbolic metric spaces and $z_i\in X_i$, $i=1,2$. Then $(Y_{\Delta},d_m|_{Y_{\Delta}\times Y_{\Delta}})$,
as introduced above, is Gromov hyperbolic.
\end{theorem}
With Theorem \ref{theo-gen} at hand we further prove the
\begin{theorem} \label{theo-r-geod}
Let $(X_i,d_i)$ be roughly geodesic, Gromov hyperbolic metric spaces and $z_i\in X_i$, $i=1,2$. Then there exists $\tilde{\Delta}\ge 0$ such that
$(Y_{\Delta},d_m|_{Y_{\Delta}\times Y_{\Delta}})$ is roughly geodesic for all $\Delta \ge \tilde{\Delta}$ (and hyperbolic due
to Theorem \ref{theo-gen}). Moreover, its boundary at infinity is naturally homeomorphic to
${\partial}_{\infty}(X_1,d_1)\times {\partial}_{\infty}(X_2,d_2)$.
\end{theorem}
For precise definitions of the boundary at infinity and rough geodesics see Section \ref{sec-basics}.
\begin{remark}
\begin{description}
\item[(i)] Both theorems can be formulated for any finite number of factors.
\item[(ii)] For metric spaces $(X_i,d_i)$ with nonempty boundaries at infinity, the Theorems
\ref{theo-gen} and \ref{theo-r-geod} have analogues in the limit case, that the fixed points $z_i\in X_i$ converge at infinity. Those will
precisely be stated in Section \ref{sec-limit}.
\item[(iii)] An analogue of Theorem \ref{theo-r-geod} in the setting of geodesic metric spaces has been studied in \cite{fs2}.
\end{description}
\end{remark}
{\bf Outline of the paper:} In Section \ref{sec-theo-gen} we prove Theorem \ref{theo-gen}. In Section \ref{sec-basics} we recall
some basic definitions and facts on Gromov hyperbolic metric spaces, which will be used in Section \ref{sec-theo-r-geod} when
proving Theorem \ref{theo-r-geod} and Section \ref{sec-limit} when we state the above mentioned ``limit case analogues'' of the
Theorems \ref{theo-gen} and \ref{theo-r-geod}. \\
{\bf Acknowledgment:} It is a pleasure to thank Mario Bonk for useful discussions and valuable comments on an earlier version of this paper.
\section{The Proof of Theorem \ref{theo-gen}}
\label{sec-theo-gen}
{\bf Proof of Theorem \ref{theo-gen}:} First of all note that if for a metric space $(X,d)$ there exist $z\in X$ and $\tilde{\delta} \ge 0$
such that for all $x,y,w\in X$ it holds
\begin{equation} \label{eqn-4point->3point}
d(x,y) \; + \; d(w,z) \; \le \;
\max \{ d(x,w)+d(y,z),d(x,z)+d(y,w)\} \; + \; 2\tilde{\delta}
\end{equation}
then $(X,d)$ is $2\tilde{\delta}$-hyperbolic (see e.g. \cite{g}). \\
Let now $(X_i,d_i)$ be ${\delta}_i$-hyperbolic metric spaces and set $\delta := \max \{{\delta}_1,{\delta}_2\}$. In order to
show that $(Y_{\Delta},d_m)$ is hyperbolic, we show that inequality (\ref{eqn-4point->3point}) holds for
$d=d_m$, $z=(z_1,z_2)\in Y_{\Delta}$, $\tilde{\delta}:=\delta +\frac{\Delta}{2}$ and $x,y,w\in Y_{\Delta}$ arbitrary: \\
Without loss of generality we assume $d_1(x_1,y_1)\ge d_2(x_2,y_2)$. Now, due to the definition of $Y_{\Delta}$ we find
\begin{displaymath}
d_m(w,z) \; = \; \max \{ d_1(w_1,z_1),d_2(w_2,z_2)\} \; \le \; d_1(w_1,z_1) \; + \; \Delta .
\end{displaymath}
Thus with the $\delta$-hyperbolicity of the first factor we get
\begin{eqnarray*}
& & d_m(x,y)+d_m(w,z) \\
& \le & d_1(x_1,y_1) \; + \; d_1(w_1,z_1) \; + \; \Delta \\
& \le & \max \{ d_m(x,w)+d_m(y,z),d_m(x,z)+d_m(y,w)\} \; + \; 2\delta \; + \; \Delta .
\end{eqnarray*}
$\Box$ \\
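The bound $2\delta + \Delta$ appearing in the proof can be illustrated numerically (our experiment, not part of the argument): take two copies of the path $\{0,\dots,6\}$ with the standard metric, so that $\delta =0$, basepoints $z_i=0$, and $\Delta =2$; a brute-force search over quadruples then determines the four-point constant of $(Y_{\Delta},d_m)$:

```python
from itertools import combinations

def four_point_delta(d, points):
    """Smallest delta for which the four-point inequality (1) holds on `points`."""
    worst = 0.0
    for x, y, z, w in combinations(points, 4):
        s = sorted([d(x, y) + d(z, w), d(x, z) + d(y, w), d(x, w) + d(y, z)],
                   reverse=True)
        worst = max(worst, s[0] - s[1])
    return worst / 2.0

# Two copies of the path {0,...,6} with d_i = |.|; paths are trees, so delta = 0.
n, z1, z2, Delta = 6, 0, 0, 2
Y = [(x1, x2) for x1 in range(n + 1) for x2 in range(n + 1)
     if abs(abs(x1 - z1) - abs(x2 - z2)) <= Delta]

# restriction of the l_infinity product metric
d_m = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))

delta_Y = four_point_delta(d_m, Y)
```

Here the computed constant equals $2\delta +\Delta =2$ (attained, e.g., by the quadruple $(0,2)$, $(2,0)$, $(6,4)$, $(4,6)$), so the estimate obtained in the proof is sharp for this example.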
\section{Some basic definitions}
\label{sec-basics}
\subsection{Hyperbolicity and rough geodesics}
Let $(X_1,d_1),(X_2,d_2)$ be metric spaces. A map $f:X_1\longrightarrow X_2$ is called a quasi-isometric embedding, if
there exist $\lambda \ge 1$, $k\ge 0$ such that for all $x,x'\in X_1$ it holds
\begin{displaymath}
\frac{1}{\lambda}d_1(x,x') \; - \; k \; \le \; d_2\Big( f(x),f(x')\Big) \; \le \; \lambda d_1(x,x') \; + \; k.
\end{displaymath}
If $k=0$, $f$ is called bilipschitz, while, for $\lambda =1$, $f$ is said to be a $k$-rough-isometric embedding. For $k=0$ and
$\lambda =1$ the embedding is called isometric. \\
Let $f:X_1\longrightarrow X_2$ be a rough isometric embedding. Then, if $(X_2,d_2)$ is hyperbolic, so is $(X_1,d_1)$. If
$d_1$ and $d_2$ are length metrics, the same holds with the rough-isometric embedding replaced by a quasi-isometric
embedding (this non-trivial but by now standard result may be found in, for instance, \cite{brih} or \cite{bubui}). \\
A ($k$-rough) geodesic $\gamma$ in $(X,d)$ connecting $x\in X$ to $x'\in X$ is a ($k$-rough) isometric embedding
$\gamma :[\alpha ,\omega ]\longrightarrow X$ such that $\gamma(\alpha )=x$ and $\gamma (\omega )=x'$. The metric space
$(X,d)$ is called ($k$-rough) geodesic if for all $x,x'\in X$ there exists a ($k$-rough) geodesic connecting $x$ to $x'$.
If there exists a $k\ge 0$ such that $(X,d)$ is $k$-roughly geodesic, then $(X,d)$ is said to be roughly geodesic. \\
According to \cite{bos} a metric space $(X,d)$ is called $k$-almost geodesic, if for every $x,y\in X$ and every $t\in [0,d(x,y)]$,
there exists $w\in X$ such that $|d(x,w)-t|\le k$ and $|d(y,w)-(d(x,y)-t)|\le k$. $(X,d)$ is said to be almost geodesic, if
there exists $k\ge 0$ such that it is $k$-almost geodesic. Note that a hyperbolic metric space $(X,d)$ is almost geodesic if and
only if it is rough geodesic (compare Proposition 5.2 in \cite{bos}). \\
For geodesic metric spaces there are a number of equivalent characterizations of hyperbolicity using the
geometry of geodesic triangles (see e.g. \cite{brih} and \cite{fs2}). All of those characterizations have analogues in the
rough geodesic setting. Here we state the corresponding results that we will make use of in the following:
\begin{definition}
A $k$-roughly geodesic metric
space $(X,d)$ is said to be $(\delta ,k)$-hyperbolic if each side of any $k$-roughly geodesic triangle in $(X,d)$ is contained in
the $\delta$-neighborhood of the union of the other two sides.
\end{definition}
Just along the lines of the proof of the corresponding statement for geodesic spaces (see e.g. \cite{brih}) one proves the
\begin{proposition} \label{prop-delta-k-hyp}
Let $(X,d)$ be a $k$-roughly geodesic space. Then the following are equivalent:
\begin{description}
\item[(1)] $(X,d)$ is Gromov-hyperbolic.
\item[(2)] there exists a $\delta \in \mathbb{R}_0^+$ such that $(X,d)$ is $(\delta ,k)$-hyperbolic.
\end{description}
\end{proposition}
Let $X$ be a metric space and $x,y,z\in X$. Then there exist unique $a,b,c\in \mathbb{R}^+_0$ such that
\begin{displaymath}
d(x,y)=a+b, \hspace{0.5cm} d(x,z)=a+c \hspace{0.5cm} \mbox{and} \hspace{0.5cm} d(y,z)=b+c .
\end{displaymath}
In fact those numbers are given through
\begin{displaymath}
a \; = \; (y\cdot z)_x \; , \hspace{0.5cm} b \; = \; (x\cdot z)_y \hspace{0.5cm}
\mbox{and} \hspace{0.5cm} c \; = \; (x\cdot y)_z \; ,
\end{displaymath}
where, for instance,
\begin{displaymath}
(y\cdot z)_x \; = \; \frac{1}{2} \Big[ d(y,x) \; + \; d(z,x) \; - \; d(y,z) \Big] .
\end{displaymath}
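As a concrete check of these formulas (our example), take $d(x,y)=7$, $d(x,z)=5$ and $d(y,z)=6$; solving the three equations gives $a=3$, $b=4$, $c=2$:

```python
def gromov_products(dxy, dxz, dyz):
    """Solve d(x,y)=a+b, d(x,z)=a+c, d(y,z)=b+c for the unique decomposition:
    a = (y.z)_x, b = (x.z)_y, c = (x.y)_z."""
    a = (dxy + dxz - dyz) / 2.0
    b = (dxy + dyz - dxz) / 2.0
    c = (dxz + dyz - dxy) / 2.0
    return a, b, c

a, b, c = gromov_products(7.0, 5.0, 6.0)
```

Note that $a,b,c \ge 0$ is exactly the triangle inequality for the three points.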
In the case that $X$ is $k$-roughly geodesic we may consider a $k$-roughly geodesic triangle
$\overline{xy} \cup \overline{xz} \cup \overline{yz} \subset X$,
where for example $\overline{xy}$ denotes a $k$-roughly geodesic segment connecting $x$ to $y$. Given such a triangle we
write $\tilde{x}:={\gamma}_{yz}(b)$, $\tilde{y}:={\gamma}_{xz}(a)$ and $\tilde{z}:={\gamma}_{xy}(a)$. Note that
for geodesic triangles it holds ${\gamma}_{xz}(a)={\gamma}_{xz}^{-1}(c)$. In the case that $(X,d)$ is only
$k$-roughly geodesic we still have e.g. $d({\gamma}_{xz}(a),{\gamma}_{xz}^{-1}(c))\le 2k$. \\
Similar to Lemma 1 i) in \cite{fs2} one proves the
\begin{lemma} \label{lemma-delta-k-hyperbolic}
If $(X,d)$ is $(\delta ,k)$-hyperbolic, then
\begin{displaymath}
d(z,\tilde{z}) \; \le c \; + \; 2\delta \; + \; 4k, \hspace{1cm}
d\Big( {\gamma}_{xy}(t),{\gamma}_{xz}(t)\Big) \; \le \; 4\delta \; + \; 15k \;\;\; \forall \; t\in [0,a]
\end{displaymath}
and the points $\tilde{x}$, $\tilde{y}$ and $\tilde{z}$ have pairwise distance $\le 4\delta +15k$.
\end{lemma}
\subsection{The boundary at infinity and Busemann functions}
Given a hyperbolic space $(X,d)$ there are various ways to attach a boundary at infinity ${\partial}_{\infty}X$ to $X$.
In this paper we define ${\partial}_{\infty}X$ in the following way: \\
We choose a basepoint $z\in X$ and say that a sequence $\{ x^i{\}}_{i\in \mathbb{N}}$ of points in $X$
{\it converges to infinity}, if
\begin{displaymath}
\liminf\limits_{i,j\longrightarrow \infty} \; (x^i\cdot x^j)_z \; = \; \infty .
\end{displaymath}
Two sequences $\{ x^i{\}}_{i\in \mathbb{N}}$ and $\{ y^i{\}}_{i\in \mathbb{N}}$ converging to infinity
are equivalent, \linebreak $\{ x^i{\}}_{i\in \mathbb{N}} \sim \{ y^i{\}}_{i\in \mathbb{N}}$, if
\begin{displaymath}
\liminf\limits_{i,j\longrightarrow \infty} \; (x^i\cdot y^j)_z \; = \; \infty .
\end{displaymath}
One shows that $\sim$ is an equivalence relation and defines ${\partial}_{\infty}X$ as the set of equivalence classes.
We write $[\{ x^i\} ]\in {\partial}_{\infty}X$ for the corresponding class. \\
For $v\in {\partial}_{\infty}X$ and $r>0$ one defines
\begin{displaymath}
U(v,r) \; := \;
\Big\{ {\textstyle
w\in {\partial}_{\infty}X \; \Big| \; \exists \{ x^k\} , \{ y^k\} \; \mbox{s.t.} \;
[\{ x^k\} ]=v, \; [\{ y^k\} ]=w, \; \liminf\limits_{k,l\longrightarrow \infty} \; (x^k\cdot y^l)_z \; > \; r }
\Big\} .
\end{displaymath}
On ${\partial}_{\infty}X$ we consider the topology generated by the sets $U(v,r)$, $v\in {\partial}_{\infty}X$, $r>0$. \\
Let now $(X,d)$ be $k$-roughly geodesic. Then there exists a $k'=k'(k,\delta )$, with $\delta$ as in
equation (\ref{eqn-def-hyperbolicity}), such that for every $x\in X$ and every $u\in {\partial}_{\infty}X$ there exists a $k'$-rough geodesic
${\gamma}_{xu}: [0,\infty )\longrightarrow X$ with ${\gamma}_{xu}(0)=x$ and $[\{ {\gamma}_{xu}(i)\} ]=u$
(see \cite{bos}). Such rays are said to connect $x$ to $u$. \\
We now fix such a $k'$-roughly geodesic ray ${\gamma}_{zu}$ connecting $z\in X$ to $u\in {\partial}_{\infty}X$ and define
the Busemann function $B_{{\gamma}_{zu}} : X\longrightarrow \mathbb{R}$ associated to the ray ${\gamma}_{zu}$ via
\begin{displaymath}
B_{{\gamma}_{zu}} (x) \; := \; \liminf\limits_{t\longrightarrow \infty}
\Big[ d\Big( x,{\gamma}_{zu}(t) \Big) \; - \; t \Big] .
\end{displaymath}
Note that the limit inferior always exists, whereas the limit itself is only guaranteed to exist when ${\gamma}_{zu}$
is a geodesic.
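As a simple illustration (ours), take $X=\mathbb{R}$ with the geodesic ray $\gamma (t)=t$; then $d(x,\gamma (t))-t=(t-x)-t=-x$ for all $t\ge x$, so

```latex
B_{\gamma} (x) \; = \; \liminf\limits_{t\longrightarrow \infty}
\Big[ d\Big( x,\gamma (t) \Big) \; - \; t \Big] \; = \; -x ,
```

and here the limit inferior is in fact a limit, as $\gamma$ is a genuine geodesic.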
\section{The proof of Theorem \ref{theo-r-geod}}
\label{sec-theo-r-geod}
In this section we provide the
{\bf Proof of Theorem \ref{theo-r-geod}:}
Let $(X_i,d_i)$ be $k_i$-roughly geodesic, ${\delta}_i$-hyperbolic, $i=1,2$, and set $k:=\max \{ k_1,k_2\}$ as well as
$\delta := \max \{ {\delta}_1,{\delta}_2\}$. We show that for
$\Delta \ge 4k$ the space $(Y_{\Delta},d_m)$ is $K(\Delta ,k,\delta )$-almost geodesic and therefore roughly geodesic due to
Theorem \ref{theo-gen} and Proposition 5.2 in \cite{bos}. \\
Thus we have to show that for $\Delta \ge 4k$ there exists $K\ge 0$ such that for all $x,y\in Y_{\Delta}$, $t\in [0,d_m(x,y)]$
there exists $w\in Y_{\Delta}$ such that
\begin{displaymath}
\begin{array}{rcccl}
t \; - \; K & \le & d_m(x,w) & \le & t \; + \; K \hspace{0.5cm} \mbox{and} \hspace{1cm} (*)\\
& & & & \\
d_m(x,y)-t-K & \le & d_m(y,w) & \le & d_m(x,y)-t+K .
\end{array}
\end{displaymath}
(1) W.l.o.g. we assume $d_m(x,y)=d_1(x_1,y_1)\ge d_2(x_2,y_2)$. \\
(2) W.l.o.g. we assume $\Delta < t < d_m(x,y)-\Delta$. This can be done, since for $t\le \Delta$ we may set $w:=x$, while for
$t\ge d_m(x,y) - \Delta $ we may set $w:=y$, and the inequalities above trivially hold once $K\ge 2\Delta$. \\
(3) Now set
\begin{eqnarray*}
a_i & := & \frac{1}{2} \Big( d_i(x_i,y_i) \; + \; d_i(x_i,z_i) \; - \; d_i(y_i,z_i)\Big) , \\
b_i & := & \frac{1}{2} \Big( d_i(y_i,x_i) \; + \; d_i(y_i,z_i) \; - \; d_i(x_i,z_i)\Big) , \\
c_i & := & \frac{1}{2} \Big( d_i(z_i,x_i) \; + \; d_i(z_i,y_i) \; - \; d_i(x_i,y_i)\Big) ,
\end{eqnarray*}
$i=1,2$. W.l.o.g. we assume $\Delta < t \le a_1$,
let ${\gamma}_{z_ix_i}:[0,d_i(z_i,x_i)]\longrightarrow X_i$ be $2k$-rough geodesics connecting $x_i$ to $z_i$, $i=1,2$ and set
\begin{displaymath}
w \; := \; \Big( {\gamma}_{z_1x_1}(d_1(x_1,z_1)-t), {\gamma}_{z_2x_2}(d_1(x_1,z_1)-t)\Big) .
\end{displaymath}
Note that since $\Delta \ge 4k$ it follows that $w\in Y_{\Delta}$. \\
Moreover, from the definition of $w$ it is clear that we have
\begin{equation} \label{eqn-w2-w1}
d_2(w_2,x_2) \; \le \; d_1(w_1,x_1) \; + \; \Delta \; + \; 4k.
\end{equation}
$(X_i,d_i)$ is hyperbolic, $i=1,2$. Thus, due to Proposition \ref{prop-delta-k-hyp}, there exists $\tilde{\delta}_i$ such that
$(X_i,d_i)$ is $(\tilde{\delta}_i,2k)$-hyperbolic. Setting $\tilde{\delta} := \max \{ \tilde{\delta}_1,\tilde{\delta}_2\}$ and
${\delta}':=4\tilde{\delta}+30k$ yields, due to Lemma \ref{lemma-delta-k-hyperbolic},
\begin{displaymath}
\begin{array}{rcccl}
t-2k-{\delta}' & \le & d_1(w_1,x_1) & \le & t+2k+{\delta}' \hspace{0.5cm} \mbox{and} \hspace{1cm} (**)\\
d_1(x_1,y_1)-t-2k-{\delta}' & \le & d_1(w_1,y_1) & \le & d_1(x_1,y_1)-t+2k+{\delta}' .
\end{array}
\end{displaymath}
Now we consider the following two cases: \\
(i) $d_2(x_2,z_2)-[d_1(x_1,z_1)-t]\le a_2$: In this case Lemma \ref{lemma-delta-k-hyperbolic} yields
\begin{displaymath}
\begin{array}{rcccl}
t-2k-{\delta}' & \le & d_2(w_2,x_2) & \le & t+2k+{\delta}' \hspace{0.5cm} \mbox{and} \\
d_2(x_2,y_2)-t-2k-{\delta}' & \le & d_2(w_2,y_2) & \le & d_2(x_2,y_2)-t+2k+{\delta}' .
\end{array}
\end{displaymath}
(ii) $d_2(x_2,z_2)-[d_1(x_1,z_1)-t]> a_2$: Of course we have $d_1(x_1,z_1)-t\ge d_1(x_1,z_1)-a_1=c_1$, hence
\begin{eqnarray*}
d_2(y_2,z_2) \; - \; [d_1(x_1,z_1)-t] & \le & d_2(y_2,z_2) -c_1 \\
& \le & d_1(y_1,z_1)+\Delta -c_1 \; = \; b_1+\Delta .
\end{eqnarray*}
Thus, due to $d_2(x_2,z_2)-[d_1(x_1,z_1)-t]> a_2$ and Lemma \ref{lemma-delta-k-hyperbolic} we conclude
\begin{displaymath}
d_2(w_2,y_2) \; \le \; b_1 \; + \Delta \; + \; {\delta}' .
\end{displaymath}
From (i) and (ii) and the inequalities $(**)$ as well as (\ref{eqn-w2-w1}) it follows that the inequalities $(*)$ hold for
a $K\ge 0$ sufficiently large. \\
For the part of the proof concerning the boundary at infinity, we refer the reader to \cite{fs2}, where the geodesic case is treated.
$\Box$
\section{The limit case and final remarks}
\label{sec-limit}
In this section we state the Theorems \ref{theo-gen-lim} and \ref{theo-r-geod-lim} corresponding to the Theorems
\ref{theo-gen} and \ref{theo-r-geod} when fixing points in the boundary at infinity of the factors rather than points in the
interior. \\
Let therefore $(X_1,d_1)$ and $(X_2,d_2)$ be roughly geodesic hyperbolic metric spaces with non-empty boundaries at infinity.
Fix $u_i\in {\partial}_{\infty}X_i$ as well as Busemann functions $B_i:X_i\longrightarrow \mathbb{R}$, associated to
roughly geodesic rays ${\gamma}_i$ converging to $u_i$, $i=1,2$. This time we consider the sets
\begin{displaymath}
Y_{\Delta} \; := \; \{ (x_1,x_2)\in X_1\times X_2 \; | \; |B_1(x_1)-B_2(x_2)|\le \Delta \}, \hspace{0.5cm} \Delta \ge 0.
\end{displaymath}
With this notation the following theorems hold:
\begin{theorem} \label{theo-gen-lim}
Let $(X_i,d_i)$ be Gromov hyperbolic metric spaces with nonempty boundaries at infinity such that two Busemann functions
$B_i:X_i\longrightarrow \mathbb{R}$ associated to rough geodesic rays are defined. Then, for all $\Delta \ge 0$, $(Y_{\Delta},d_m|_{Y_{\Delta}\times Y_{\Delta}})$
also is hyperbolic.
\end{theorem}
\begin{theorem} \label{theo-r-geod-lim}
Let $(X_i,d_i)$ be roughly geodesic, Gromov hyperbolic metric spaces and $B_i:X_i\longrightarrow \mathbb{R}$ Busemann functions on
$X_i$, $i=1,2$. Then there exists $\tilde{\Delta} \ge 0$ such that $(Y_{\Delta},d_m|_{Y_{\Delta}\times Y_{\Delta}})$ also
is roughly geodesic and hyperbolic for all $\Delta \ge \tilde{\Delta}$. Moreover, ${\partial}_{\infty}(Y_{\Delta},d_m|_{Y_{\Delta}\times Y_{\Delta}})$ is
naturally homeomorphic to the smash product ${\partial}_{\infty}(X_1,d_1)\wedge {\partial}_{\infty}(X_2,d_2)$.
\end{theorem}
\begin{remark}
The smash product $\wedge$ is a standard construction
for pointed topological spaces (see e.g. \cite{m}). Let $(U_1,u_1)$, $(U_2,u_2)$ be two pointed spaces; then the smash product
$U_1\wedge U_2$ is defined as $U_1\times U_2/U_1\vee U_2$, where $U_1\times U_2$
is the usual product and
\begin{displaymath}
U_1\vee U_2 \; = \; \Big( \{ u_1\} \times U_2\Big) \; \cup \Big( U_1 \times \{ u_2\} \Big) \; \subset \; U_1\times U_2
\end{displaymath}
is the wedge product canonically embedded in $U_1\times U_2$. Thus $U_1\wedge U_2$ is obtained from $U_1\times U_2$
by collapsing $U_1\vee U_2$ to a point. For example $S^m\wedge S^n=S^{m+n}$.
\end{remark}
The proofs of these theorems go just along the lines of the proofs of the corresponding Theorems \ref{theo-gen} and \ref{theo-r-geod}
when fixing points in the interior rather than the boundary. For the part of Theorem \ref{theo-r-geod} concerning the boundary at infinity
we refer the reader to \cite{fs2}, where the analogue in the geodesic setting is proved. \\
We finally point out that when starting off with two proper geodesic metric spaces one has to consider the length metric $d$ induced by $d_m$
on $Y_0$, in order to obtain a proper geodesic space again. In this case, we might as well endow $Y_0$ with the length metric induced by the Euclidean
product metric $d_e$ instead of the maximum metric $d_m$. Since both are length spaces which are bilipschitz related, one of them is
Gromov hyperbolic if and only if the other one is. \\
In fact, when starting off with two Riemannian manifolds and fixing points at infinity, the construction using the Euclidean product metric
has the advantage that it once again yields a Riemannian manifold (compare e.g. \cite{fs1}). However, we emphasize that in none of
the Theorems \ref{theo-gen}, \ref{theo-r-geod}, \ref{theo-gen-lim} and \ref{theo-r-geod-lim} may the maximum metric be replaced by the
Euclidean metric. This is, for instance, seen in the
\begin{example} \label{example}
Consider two copies of the real hyperbolic space $\mathbb{H}^2$. Fix points $u_i\in {\partial}_{\infty}\mathbb{H}^2$, Busemann functions
$B_i$ associated to geodesic rays ${\gamma}_i$ converging to $u_i$, $i=1,2$, and
consider sequences of points $\{x^n=(x^n_1,x^n_2)\}$, $\{y^n=(y^n_1,y^n_2)\}$, $\{z^n=(z^n_1,z^n_2)\}$ and $\{w^n=(w^n_1,w^n_2)\}$
such that $x^n_1=x^n_2$, $y^n_1=y^n_2$, $z^n_1=z^n_2$, $B_i(z^n_i)=B_i(y^n_i)$, $d_i(x^n_i,y^n_i)=d_i(x^n_i,z^n_i)=\frac{1}{2}d_i(y_i^n,z_i^n)$, $w^n_1=y^n_1$ and
$w^n_2=z^n_2$ for all $n\in \mathbb{N}$, $i=1,2$ as well as $d_i(y^n_i,z^n_i) \stackrel{n\rightarrow \infty}{\longrightarrow} \infty$, $i=1,2$. \\
We claim that $(Y_0(\mathbb{H}^2,u_i,B_i),d_e)$ is not hyperbolic. Suppose the contrary, then there exists a $\delta \ge 0$ such that for all
$n\in \mathbb{N}$
\begin{displaymath}
\begin{array}{crcl}
& & & d_e(y^n,z^n) \; + \; d_e(x^n,w^n) \\
& & & \\
& & \le & \max \{ d_e(x^n,y^n)+d_e(z^n,w^n),d_e(y^n,w^n)+d_e(x^n,z^n)\} \; + \; 2\delta \\
& & & \\
\Longleftrightarrow & d_e(y^n,z^n) & \le & \max \{ d_e(z^n,w^n),d_e(y^n,w^n)\} \; + \; 2\delta \\
& & & \\
\Longleftrightarrow & \sqrt{2} \, d_1(y^n_1,z^n_1) & \le & d_1(y^n_1,z^n_1) \; + \; 2\delta ,
\end{array}
\end{displaymath}
which, since $d_1(y^n_1,z^n_1) \stackrel{n\rightarrow \infty}{\longrightarrow} \infty$, contradicts our choice of sequences.
\end{example}
\end{document}
\begin{document}
\title{Controlling spin relaxation with a cavity}
\author{A. Bienfait$^{1}$, J.J. Pla$^{2}$, Y. Kubo$^{1}$, X. Zhou$^{1,3}$, M. Stern$^{1,4}$, C.C. Lo$^{2}$, C.D. Weis$^{5}$, T. Schenkel$^{5}$, D. Vion$^{1}$, D. Esteve$^{1}$, J.J.L. Morton$^{2}$, and P. Bertet$^{1}$}
\affiliation{$^{1}$Quantronics group, Service de Physique de l'Etat Condens\'e, DSM/IRAMIS/SPEC, CNRS UMR 3680, CEA-Saclay,
91191 Gif-sur-Yvette cedex, France }
\affiliation{$^{2}$ London Centre for Nanotechnology, University College London, London WC1H 0AH, United Kingdom}
\affiliation{$^{3}$Institute of Electronics Microelectronics and Nanotechnology, CNRS UMR 8520, ISEN Department, Avenue Poincar\'e, CS 60069, 59652 Villeneuve d'Ascq Cedex, France}
\affiliation{$^{4}$ Quantum Nanoelectronics Laboratory, BINA, Bar Ilan University, Ramat Gan, Israel}
\affiliation{$^{5}$Accelerator Technology and Applied Physics Division, Lawrence Berkeley National Laboratory, Berkeley,
California 94720, USA}
\date{\today}
\pacs{03.67.Lx, 71.55.-i, 85.35.Gv, 71.70.Gm, 31.30.Gs}
\keywords{hybrid quantum systems, superconductor, spins, Purcell, cavity QED, spontaneous emission}
\maketitle
Spontaneous emission of radiation is one of the fundamental mechanisms by which an excited quantum system returns to equilibrium. For spins, however, spontaneous emission is generally negligible compared to other non-radiative relaxation processes because of the weak coupling between the magnetic dipole and the electromagnetic field. In 1946, Purcell realised~\cite{Purcell.PhysRev.69.681(1946)} that the spontaneous emission rate can be strongly enhanced by placing the quantum system in a resonant cavity --- an effect which has since been used extensively to control the lifetime of atoms and semiconducting heterostructures coupled to microwave~\cite{Goy1983} or optical~\cite{Heinzen.PhysRevLett.58.1320(1987),Yamamoto1991337} cavities, underpinning single-photon sources~\cite{Gerard.PhysRevLett.81.1110(1998)}. Here we report the first application of these ideas to spins in solids. By coupling donor spins in silicon to a superconducting microwave cavity of high quality factor and small mode volume, we reach for the first time the regime where spontaneous emission constitutes the dominant spin relaxation mechanism. The relaxation rate is increased by three orders of magnitude when the spins are tuned to the cavity resonance, showing that energy relaxation can be engineered and controlled on-demand. Our results provide a novel and general way to initialise spin systems into their ground state, with applications in magnetic resonance and quantum information processing~\cite{Butler.PhysRevA.84.063407(2011)}.
They also demonstrate that, contrary to popular belief, the coupling between the magnetic dipole of a spin and the electromagnetic field can be enhanced up to the point where quantum fluctuations have a dramatic effect on the spin dynamics; as such our work represents an important step towards the coherent magnetic coupling of individual spins to microwave photons.
Spin relaxation is the process by which a spin reaches thermal equilibrium by exchanging an energy quantum $\hbar \omega_{\rm s}$ with its environment ($\omega_{\rm s}$ being its resonance frequency) for example in the form of a photon or a phonon, as shown in Fig.~\ref{fig:figure1}a. Understanding and controlling spin relaxation is of essential importance in applications such as spintronics~\cite{Sinova.NatureMat.11.368(2012)} and quantum information processing~\cite{Ladd.Nature.464.45(2010)} as well as magnetic resonance spectroscopy and imaging~\cite{Levitt.SpinDynamics}. For such applications, the spin relaxation time $T_1$ must be sufficiently long to permit coherent spin manipulation; however, if $T_1$ is too long it becomes a major bottleneck which limits the repetition rate of an experiment, and in turn impacts factors such as the achievable sensitivity.
Certain types of spins can be actively reset in their ground state by optical~\cite{Robledo.Nature.477.574(2011)} or electrical~\cite{Pla2012} means due to their specific energy level scheme, while methods such as chemical doping have been employed to influence spin relaxation times ex-situ~\cite{Shapiro.NatureBiotech.28.264(2010)}. Nevertheless, an efficient, general and tuneable initialization method for spin systems is still currently lacking.
\begin{figure}
\caption{\label{fig:figure1}}
\end{figure}
At first inspection, spontaneous emission would appear an unlikely candidate to influence spin relaxation: for example, an electron spin in free space at a typical frequency of $\omega_s / 2\pi \simeq 8$\,GHz spontaneously emits a photon at a rate of only $\sim 10^{-12}\,\mathrm{s}^{-1}$.
However, the Purcell effect provides a means to dramatically enhance spontaneous emission, and thus gain precise and versatile control over spin relaxation~\cite{Purcell.PhysRev.69.681(1946)}.
Consider a spin embedded in a microwave cavity of quality factor $Q$ and frequency $\omega_0$.
If the cavity damping rate $\kappa = \omega_0 / Q$ is much greater than the spin-cavity coupling $g$, the cavity provides an additional channel for spontaneous emission of microwave photons, governed by the so-called Purcell rate~\cite{Butler.PhysRevA.84.063407(2011),Wood.PhysRevLett.112.050501(2014)}
\begin{equation}
\label{eq:Purcell}
\Gamma_{\rm P} = \kappa \frac{g^2} {\kappa^2 / 4 + \delta ^2},
\end{equation}
where $\delta = \omega_0-\omega_{\rm s}$ is the spin-cavity detuning~(see Fig.~\ref{fig:figure1}a and Suppl. Info.).
This cavity-enhanced spontaneous emission can be much larger than in free space, and is strongest when the spins and cavity are on-resonance ($\delta =0$), where $\Gamma_{\rm P} = 4 g^2 / \kappa $. Furthermore, the Purcell rate can be modulated by changing the coupling constant or the detuning, allowing spin relaxation to be tuned on-demand. The Purcell effect was used to detect spontaneous emission of radiofrequency radiation from nuclear spins coupled to a resonant circuit~\cite{Sleator.PhysRevLett.55.1742(1985)}, but even then the corresponding Purcell rate $\Gamma_{\rm P} \simeq 10^{-16}~\mathrm{s}^{-1}$ (or 1 photon emitted every 300 million years) was negligible compared to the intrinsic spin-lattice relaxation processes.
In order for photon emission to become the dominant spin relaxation mechanism, both a large spin-cavity coupling and a low cavity damping rate are needed: in our experiment, this is achieved by combining the microwave confinement provided by a micron-scale resonator with the high quality factors enabled by the use of superconducting circuits.
The device consists of two planar aluminium lumped-element superconducting resonators patterned onto a silicon chip which was enriched in nuclear-spin-free $^{28}\mathrm{Si}$ and implanted with bismuth atoms (see Fig.~\ref{fig:figure1}b) at a sufficiently low concentration for collective radiation effects to be absent. A static magnetic field $\mathbf{B_0}$ is applied in the plane of the aluminium resonators, at an angle $\theta$ from the resonator inductive wire, tunable in-situ. The device is mounted inside a copper box and cooled to 20~mK. Each resonator can be used to perform inductive detection of the electron-spin resonance (ESR) signal of the bismuth donors: microwave pulses at $\omega_0$ are applied at the resonator input, generating an oscillating magnetic field $B_1$ around the inductive wire which drives the surrounding spins; the quantum fluctuations of this field, present even when no microwave is applied, are responsible for the Purcell spontaneous emission. Hahn echo pulse sequences~\cite{Hahn1950} are used, resulting in the emission of a spin-echo in the detection waveguide, which is amplified with a sensitivity reaching the quantum limit thanks to the use of a Josephson Parametric Amplifier~\cite{Zhou.PhysRevB.89.214517(2014)} and demodulated, yielding the integrated echo signal quadrature $A_{\rm Q}$. A more detailed setup description can be found in~\cite{QuantumESR}.
Bismuth is a donor in silicon~\cite{Feher.PhysRev.114.1219(1959)} with a nuclear spin $I = 9/2$. At cryogenic temperatures it can bind an electron (with spin $S = 1/2$) in addition to those shared with the surrounding Si lattice. The large hyperfine interaction $A\overrightarrow{S}\cdot\overrightarrow{I}$ between the electron and nuclear spin, where $\overrightarrow{S}$ and $\overrightarrow{I}$ are the electron and nuclear spin operators and $A/h = 1475$~MHz, produces a splitting of 7.375~GHz between the ground and excited multiplets at zero magnetic field (see Fig.~\ref{fig:figure1}d for the complete energy diagram~\cite{Wolfowicz.PhysRevB.86.245301(2012)}). This makes the system ideal for coupling to superconducting circuits~\cite{Morley.NatureMat.9.725(2010),George.PhysRevLett.105.067601(2010)}. At low fields ($B_0 < 10$\,mT, compatible with the critical field of aluminum) all $\Delta m_{\rm F} = \pm 1$ transitions are allowed, $m_{\rm F}$ being the projection of the total spin ($\overrightarrow{F} = \overrightarrow{I} + \overrightarrow{S}$) along $B_0$. Considering only the transitions with largest matrix element, resonator A ($\omega_{0A}/2\pi = 7.245$~GHz, $Q_A=3.2 \times 10^5$) crosses the $\ket{F,m_F}=\ket{4,-4} \leftrightarrow \ket{5,-5}$ transition, whilst resonator B ($\omega_{0B}/2\pi = 7.305$~GHz, $Q_B=1.1 \times 10^5$) crosses the transitions $\ket{4,-4} \leftrightarrow \ket{5,-5}$, $\ket{4,-3} \leftrightarrow \ket{5,-4}$, and $\ket{4,-2} \leftrightarrow \ket{5,-3}$ (see Figs.~\ref{fig:figure2}a and b).
\begin{figure}
\caption{\label{fig:figure2}}
\end{figure}
The echo signal $A_{\rm Q}$ from each resonator as a function of $B_0$ shows resonances at the expected magnetic fields, split into two peaks of full-width-half-maximum $\Delta \omega / 2\pi \sim 2$~MHz (see Fig.~\ref{fig:figure2}a). As explained in~\cite{QuantumESR}, this splitting is believed to be the result of strain induced in the silicon by the aluminium surface structure, which is non-negligible at the donor implant depth of $\sim 100$~nm. In the following we focus on the lower-frequency peak of the $\ket{4,-4} \leftrightarrow \ket{5,-5}$ line which corresponds to spins lying under the wire (see Suppl. Info). Over the region occupied by these spins, the $B_1$ field amplitude varies by less than $\pm 2\%$, as evidenced by the well-defined Rabi oscillations observed when we sweep the power of the refocusing pulse $P_{\rm in}$ at the cavity input (see Fig.~\ref{fig:figure2}c), allowing us to determine the input power of a $\pi$ pulse for a given pulse duration.
We measure the relaxation time $T_1$ by performing an ``inversion-recovery'' experiment~\cite{SchweigerEPR(2001)} (see schematic, top of Fig.~\ref{fig:figure2}a), with the static field $B_0$ aligned along $x$ ($\theta = 0$). A $\pi$ pulse first inverts the spins whose frequency lies within the resonator bandwidth $\kappa_A/ 2\pi = 23$\,kHz (or $\kappa_B/ 2\pi = 68$\,kHz); note that this constitutes a small subset of the total number of spins since $\kappa_{A,B} \ll \Delta \omega$. After a varying delay $T$, a Hahn echo sequence provides a measure of the longitudinal spin polarization. Fitting the data with decaying exponentials, we extract $T_1 = 0.35$\,s for resonator A and $T_1 = 1.0$\,s for resonator B.
For a quantitative comparison with the expected Purcell rate, it is necessary to evaluate the spin-resonator coupling constant $g = \gamma_e \langle F,m_F | S_x | F+1,m_{F}-1 \rangle \left\| \mathbf{\delta B_\bot} \right\|$, where $\gamma_{\rm e} / 2\pi \simeq 28$\,GHz/T is the electronic gyromagnetic ratio and $\mathbf{\delta B_\bot} $ is the component of the resonator field vacuum fluctuations orthogonal to $\mathbf{B_0}$ (see Suppl. Info of \cite{QuantumESR}). A numerical estimate yields $g_0 / 2\pi = 56 \pm 1$\,Hz for the spins located below the resonator inductive wire that are probed in our measurements and for $\theta = 0$. An independent estimate is obtained by measuring Rabi oscillations. Their frequency $\Omega_R = 2 g_0 \sqrt{\bar{n}}$ directly yields $g_0$ upon knowledge of the average intra-cavity photon number $\bar{n}$, which can be determined with a $\sim 30 \%$ imprecision from $P_{\rm in}$ and the measured resonator coupling to the input and output antennae (see Suppl. Info). We obtain $g_0 / 2\pi = 50 \pm 7$\,Hz for resonator A and $58 \pm 7$\,Hz for resonator B, compatible with the numerical estimate. The corresponding resonant Purcell spontaneous emission time is $\Gamma_P^{-1} = 0.36 \pm 0.09$\,s for resonator A and $0.81 \pm 0.17$\,s for resonator B, in agreement with the experimental values.
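As a consistency check, the resonant Purcell time $\Gamma_P^{-1}=\kappa/(4g^2)$ quoted above can be reproduced from the measured coupling constants and resonator bandwidths. The following sketch (illustrative only; the numerical values are copied from the text) performs the arithmetic:

```python
import math

def purcell_time(g_hz, kappa_hz):
    """Resonant Purcell emission time T1 = kappa / (4 g^2); g and kappa are
    given as ordinary frequencies in Hz and converted to angular frequencies."""
    g = 2 * math.pi * g_hz
    kappa = 2 * math.pi * kappa_hz
    return kappa / (4 * g ** 2)

t1_A = purcell_time(50, 23e3)  # resonator A: ~0.37 s (measured 0.35 s)
t1_B = purcell_time(58, 68e3)  # resonator B: ~0.80 s (measured 1.0 s)
```

Note that the $2\pi$ conversion factors cancel except for one, so the result is set by the ratio of the Hz-valued quantities.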
\begin{figure}
\caption{}
\label{fig:figure4}
\end{figure}
According to Eq.~\ref{eq:Purcell}, a Purcell-limited $T_1$ should depend strongly on the spin-cavity detuning. We introduce a magnetic field pulse of duration $T$ between the spin excitation and the spin-echo sequence (see Fig.~\ref{fig:figure4}a), which results in a temporary detuning $\delta$ of the spins. The echo signal amplitude $A_{\rm Q}$ as a function of $T$ yields their energy relaxation time while they are detuned by $\delta$. To minimize the influence of spin diffusion~\cite{SchweigerEPR(2001)}, the spin excitation is performed here by a high-power, long-duration saturating pulse (see Fig.~\ref{fig:figure4}a and Suppl. Info) instead of an inversion pulse as in Fig.~\ref{fig:figure2}d. As evident in Fig.~\ref{fig:figure4}b, the decay of the echo signal is well fit by a single exponential with a decay time that increases with $|\delta|$. The extracted $T_1(\delta)$ curve (see Fig.~\ref{fig:figure4}c) shows a remarkable increase of $T_1$ by up to $3$ orders of magnitude when the spins are detuned away from resonance, until it becomes limited by a non-radiative energy decay mechanism with characteristic time $\Gamma_{\rm NR}^{-1} = 1600 \pm 300$\,s. Given the doping concentration in our sample, this rate is consistent with earlier measurements of donor spin relaxation times~\cite{Feher.PhysRev.114.1245(1959)}, which have been attributed to charge hopping, but it could also arise here from spatial diffusion of the spin magnetisation away from the resonator mode volume. Figure~\ref{fig:figure4}c shows that the $T_1(\delta)$ measurements agree with the expected dependence $(\Gamma_{\rm P}(\delta) + \Gamma_{\rm NR})^{-1}$, the only free parameter in this fit being $\Gamma_{\rm NR}$.
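The detuning dependence can be illustrated with a short numerical sketch (values are indicative, taken from the resonator-B measurements discussed in the text and Suppl. Info):

```python
def t1_observed(t1_res, delta_hz, kappa_hz, t_nr):
    """Observed relaxation time 1/(Gamma_P(delta) + Gamma_NR), with
    Gamma_P(delta) = Gamma_P(0) / (1 + 4 delta^2 / kappa^2)."""
    gamma_p = (1.0 / t1_res) / (1.0 + 4.0 * (delta_hz / kappa_hz) ** 2)
    return 1.0 / (gamma_p + 1.0 / t_nr)

on_resonance = t1_observed(1.0, 0.0, 68e3, 1600.0)    # ~1 s, Purcell-limited
far_detuned  = t1_observed(1.0, 3.8e6, 68e3, 1600.0)  # ~10^3 s, approaching the NR limit
```

A detuning of a few MHz, i.e. tens of linewidths, already suppresses the Purcell rate by roughly four orders of magnitude, so the observed $T_1$ saturates at the non-radiative limit $\Gamma_{\rm NR}^{-1}$.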
\begin{figure}
\caption{}
\label{fig:figure3}
\end{figure}
Having demonstrated the effect of cavity linewidth and detuning on the Purcell rate, we finally explore the effect of modulating the spin-cavity coupling constant $g$. This can be achieved by varying the orientation $\theta$ of the static magnetic field $B_0$ in the $x$-$y$ plane (Fig.~\ref{fig:figure1}b), adjusting the component of the microwave magnetic field (mostly along $y$ under the inductive wire) which is orthogonal to $B_0$. More precisely, $g(\theta) = \gamma_{\rm e} \langle F,m_F | S_x | F+1,m_{F}-1 \rangle \sqrt{\delta B_{1y}^2 \cos(\theta)^2 + \delta B_{1z}^2}$ (noting that $\delta B_{1x}=0$), and we expect $\delta B_{1z}\ll\delta B_{1y}$ for the spins lying under the wire that are probed in these measurements.
This is verified experimentally by measuring the Rabi frequency as a function of $\theta$, as shown in Figs.~\ref{fig:figure3}a and b, allowing us to extract $g(0)/2\pi = 58$\,Hz and $g(\pi/2)/2\pi = 17$\,Hz. As expected, we measure longer spin relaxation times for increasing values of $\theta$, as shown in Fig.~\ref{fig:figure3}c, with the relaxation rate $T_1^{-1}$ scaling as $g^2(\theta)$, in agreement with Eq.~\ref{eq:Purcell}.
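The measured angular dependence can be checked against the $g(\theta)$ expression above. The sketch below assumes, for illustration, that the two measured couplings fix the $y$ and $z$ vacuum-field contributions, and shows the resulting $T_1$ enhancement:

```python
import math

def g_theta(g_y, g_z, theta):
    """g(theta) = sqrt((g_y cos theta)^2 + g_z^2), following the expression in
    the text; g_y and g_z are the couplings to the y and z components of the
    resonator vacuum field, in Hz."""
    return math.sqrt((g_y * math.cos(theta)) ** 2 + g_z ** 2)

# The measured g(0)/2pi = 58 Hz and g(pi/2)/2pi = 17 Hz fix the two contributions:
g_z = 17.0
g_y = math.sqrt(58.0 ** 2 - g_z ** 2)

# T1 scales as 1/g^2, so rotating B0 to theta = pi/2 slows relaxation by (58/17)^2 ~ 12
slowdown = (g_theta(g_y, g_z, 0.0) / g_theta(g_y, g_z, math.pi / 2)) ** 2
```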
Overall, the data of Figs.~\ref{fig:figure4} and \ref{fig:figure3} demonstrate unambiguously that cavity-enhanced spontaneous emission is by far the dominant spin relaxation channel when the spins are resonant with the cavity, since the probability for a spin-flip to occur due to emission of a microwave photon in the cavity is $1/[1+\Gamma_{\rm NR}/\Gamma_{\rm P}(\delta = 0)] = 0.999$, very close to unity.
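The quoted spin-flip probability follows from the branching ratio between the Purcell and non-radiative rates; a one-line check, using $\Gamma_{\rm P}^{-1}(0) \approx 1$\,s and $\Gamma_{\rm NR}^{-1} = 1600$\,s from the text:

```python
def radiative_fraction(t1_purcell, t_nr):
    """Probability that a resonant spin flip emits a microwave photon into the
    cavity: Gamma_P / (Gamma_P + Gamma_NR) = 1 / (1 + t1_purcell / t_nr)."""
    return 1.0 / (1.0 + t1_purcell / t_nr)

p = radiative_fraction(1.0, 1600.0)  # ~0.999
```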
At this point it is interesting to reflect on the important fact that the spontaneous emission evidenced here is an energy relaxation mechanism which does not require the presence of a macroscopic magnetization to be effective. Under the Purcell effect, each spin independently relaxes towards thermal equilibrium by microwave photon emission, so that the sample ends up in a fully polarized state after a time longer than $\Gamma_{\rm P}^{-1}$, regardless of its initial state. This is in stark contrast with the well-known phenomenon of radiative damping~\cite{Bloembergen1954} of a transverse magnetization generated by earlier microwave pulses, which is a coherent collective effect under which the degree of polarization of a sample cannot increase. We also note that had our device possessed a larger spin concentration, spontaneous relaxation would have occurred collectively, manifesting itself as a non-exponential decay of the echo signal on a time scale faster than $\Gamma_P^{-1}$~\cite{Wood.PhysRevLett.112.050501(2014)} and leading to an incomplete thermalization~\cite{Butler.PhysRevA.84.063407(2011),Wood.Arxiv.1506.03007}. Such superradiant or maser emission~\cite{Feher.PhysRev.109.221(1958)} requires the dimensionless parameter $C = N g^2 / (\kappa \Delta \omega)$ called cooperativity ($N$ being the total number of spins) to satisfy $C \gg 1$~\cite{Butler.PhysRevA.84.063407(2011),Temnov.PhysRevLett.95.243602(2005),Wood.Arxiv.1506.03007}, which is not the case here because of the large inhomogeneous broadening of the spin resonance caused by strain.
Our demonstrated ability to modulate the spin relaxation rate by 3 orders of magnitude while changing the applied field by less than 0.1~mT opens up new perspectives for spin-based quantum information processing: the long intrinsic relaxation times desirable for maximising the spin coherence time can be combined with fast, on-demand initialisation of the spin state.
We also anticipate Purcell relaxation will offer a powerful approach to dynamical nuclear polarisation~\cite{Carver.PhysRev.92.212.2(1953),Abragam.RepProgrPhys.41.395(1978)}, for example by tuning the cavity to match an electron-nuclear spin flip-flop transition, enhancing the rate of cross-relaxation to pump polarisation into the desired nuclear spin state~\cite{Bloembergen.PhysRev.114.445(1959)}.
The Purcell rate we obtain could be increased by reducing the transverse dimensions of the inductor wire to yield larger coupling constants, up to $5-10$\,kHz, bringing the spontaneous emission time below $1$\,ms (enabling faster repetition rates), as well as a higher sensitivity~\cite{QuantumESR}, up to the single-spin detection limit.
Finally, our measurements constitute the first evidence that vacuum fluctuations of the microwave field can affect the dynamics of spins, and are thus a step towards the application of circuit quantum electrodynamics concepts to individual spins in solids.
\begin{acknowledgments}
We acknowledge technical support from P. S{\'e}nat, D. Duet, J.-C. Tack, P. Pari, P. Forget, as well as useful discussions within the Quantronics group. We acknowledge support of the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) through grant agreements No. 615767 (CIRQUSS), 279781 (ASCENT), and 630070 (quRAM), and of the C'Nano IdF project QUANTROCRYO. J.J.L.M. is supported by the Royal Society. C.C.L. is supported by the Royal Commission for the Exhibition of 1851. T.S. and C.D.W. were supported by the U. S. Department of Energy under contract DE-AC02-05CH11231.
\end{acknowledgments}
\begin{thebibliography}{10}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem{Purcell.PhysRev.69.681(1946)}
\bibinfo{author}{Purcell, E.~M.}
\newblock \bibinfo{title}{Spontaneous emission probabilities at radio
frequencies}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{69}},
\bibinfo{pages}{681} (\bibinfo{year}{1946}).
\bibitem{Goy1983}
\bibinfo{author}{Goy, P.}, \bibinfo{author}{Raimond, J.~M.},
\bibinfo{author}{Gross, M.} \& \bibinfo{author}{Haroche, S.}
\newblock \bibinfo{title}{{Observation of cavity-enhanced single-atom
spontaneous emission}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{50}}, \bibinfo{pages}{1903--1906}
(\bibinfo{year}{1983}).
\bibitem{Heinzen.PhysRevLett.58.1320(1987)}
\bibinfo{author}{Heinzen, D.~J.}, \bibinfo{author}{Childs, J.~J.},
\bibinfo{author}{Thomas, J.~E.} \& \bibinfo{author}{Feld, M.~S.}
\newblock \bibinfo{title}{Enhanced and inhibited visible spontaneous emission
by atoms in a confocal resonator}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{58}}, \bibinfo{pages}{1320--1323}
(\bibinfo{year}{1987}).
\bibitem{Yamamoto1991337}
\bibinfo{author}{Yamamoto, Y.}, \bibinfo{author}{Machida, S.},
\bibinfo{author}{Horikoshi, Y.}, \bibinfo{author}{Igeta, K.} \&
\bibinfo{author}{Bjork, G.}
\newblock \bibinfo{title}{Enhanced and inhibited spontaneous emission of free
excitons in {G}a{A}s quantum wells in a microcavity}.
\newblock \emph{\bibinfo{journal}{Optics Communications}}
\textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{337 -- 342}
(\bibinfo{year}{1991}).
\bibitem{Gerard.PhysRevLett.81.1110(1998)}
\bibinfo{author}{G\'erard, J.~M.} \emph{et~al.}
\newblock \bibinfo{title}{Enhanced spontaneous emission by quantum boxes in a
monolithic optical microcavity}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{1110--1113}
(\bibinfo{year}{1998}).
\bibitem{Butler.PhysRevA.84.063407(2011)}
\bibinfo{author}{Butler, M.~C.} \& \bibinfo{author}{Weitekamp, D.~P.}
\newblock \bibinfo{title}{Polarization of nuclear spins by a cold nanoscale
resonator}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. A}} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{063407} (\bibinfo{year}{2011}).
\bibitem{Sinova.NatureMat.11.368(2012)}
\bibinfo{author}{Sinova, J.} \& \bibinfo{author}{{\v{Z}}uti{\'c}, I.}
\newblock \bibinfo{title}{New moves of the spintronics tango}.
\newblock \emph{\bibinfo{journal}{Nature Materials}}
\textbf{\bibinfo{volume}{11}}, \bibinfo{pages}{368--371}
(\bibinfo{year}{2012}).
\bibitem{Ladd.Nature.464.45(2010)}
\bibinfo{author}{Ladd, T.~D.} \emph{et~al.}
\newblock \bibinfo{title}{Quantum computers}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{464}},
\bibinfo{pages}{45--53} (\bibinfo{year}{2010}).
\bibitem{Levitt.SpinDynamics}
\bibinfo{author}{Levitt, M.~H.}
\newblock \emph{\bibinfo{title}{Spin dynamics: basics of nuclear magnetic
resonance}} (\bibinfo{publisher}{John Wiley and Sons}, \bibinfo{year}{2001}).
\bibitem{Robledo.Nature.477.574(2011)}
\bibinfo{author}{Robledo, L.} \emph{et~al.}
\newblock \bibinfo{title}{High-fidelity projective read-out of a solid-state
spin quantum register}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{477}},
\bibinfo{pages}{574} (\bibinfo{year}{2011}).
\bibitem{Pla2012}
\bibinfo{author}{Pla, J.~J.} \emph{et~al.}
\newblock \bibinfo{title}{A single-atom electron spin qubit in silicon}.
\newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{489}},
\bibinfo{pages}{541--545} (\bibinfo{year}{2012}).
\bibitem{Shapiro.NatureBiotech.28.264(2010)}
\bibinfo{author}{Shapiro, M.~G.} \emph{et~al.}
\newblock \bibinfo{title}{Directed evolution of a magnetic resonance imaging
contrast agent for noninvasive imaging of dopamine}.
\newblock \emph{\bibinfo{journal}{Nature Biotechnology}}
\textbf{\bibinfo{volume}{28}}, \bibinfo{pages}{264--270}
(\bibinfo{year}{2010}).
\bibitem{Wood.PhysRevLett.112.050501(2014)}
\bibinfo{author}{Wood, C.~J.}, \bibinfo{author}{Borneman, T.~W.} \&
\bibinfo{author}{Cory, D.~G.}
\newblock \bibinfo{title}{Cavity cooling of an ensemble spin system}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{112}}, \bibinfo{pages}{050501}
(\bibinfo{year}{2014}).
\bibitem{Sleator.PhysRevLett.55.1742(1985)}
\bibinfo{author}{Sleator, T.}, \bibinfo{author}{Hahn, E.~L.},
\bibinfo{author}{Hilbert, C.} \& \bibinfo{author}{Clarke, J.}
\newblock \bibinfo{title}{Nuclear-spin noise}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{55}}, \bibinfo{pages}{1742--1745}
(\bibinfo{year}{1985}).
\bibitem{Hahn1950}
\bibinfo{author}{Hahn, E.}
\newblock \bibinfo{title}{{Spin echoes}}.
\newblock \emph{\bibinfo{journal}{Physical Review}}
\textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{580--594}
(\bibinfo{year}{1950}).
\bibitem{Zhou.PhysRevB.89.214517(2014)}
\bibinfo{author}{Zhou, X.} \emph{et~al.}
\newblock \bibinfo{title}{{High-gain weakly nonlinear flux-modulated Josephson
parametric amplifier using a SQUID array}}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. B}} \textbf{\bibinfo{volume}{89}},
\bibinfo{pages}{214517} (\bibinfo{year}{2014}).
\bibitem{QuantumESR}
\bibinfo{author}{Bienfait, A.} \emph{et~al.}
\newblock \bibinfo{title}{Reaching the quantum limit of sensitivity in electron
spin resonance}.
\newblock \emph{\bibinfo{journal}{arXiv:1507.06831}} (\bibinfo{year}{2015}).
\bibitem{Feher.PhysRev.114.1219(1959)}
\bibinfo{author}{Feher, G.}
\newblock \bibinfo{title}{Electron spin resonance experiments on donors in
silicon. i. {E}lectronic structure of donors by the electron nuclear double
resonance technique}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{114}},
\bibinfo{pages}{1219--1244} (\bibinfo{year}{1959}).
\bibitem{Wolfowicz.PhysRevB.86.245301(2012)}
\bibinfo{author}{Wolfowicz, G.} \emph{et~al.}
\newblock \bibinfo{title}{Decoherence mechanisms of ${}^{209}$ {B}i donor
electron spins in isotopically pure ${}^{28}${S}i}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. B}} \textbf{\bibinfo{volume}{86}},
\bibinfo{pages}{245301} (\bibinfo{year}{2012}).
\bibitem{Morley.NatureMat.9.725(2010)}
\bibinfo{author}{Morley, G.~W.} \emph{et~al.}
\newblock \bibinfo{title}{The initialization and manipulation of quantum
information stored in silicon by bismuth dopants}.
\newblock \emph{\bibinfo{journal}{Nature materials}}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{725--729}
(\bibinfo{year}{2010}).
\bibitem{George.PhysRevLett.105.067601(2010)}
\bibinfo{author}{George, R.~E.} \emph{et~al.}
\newblock \bibinfo{title}{Electron spin coherence and electron nuclear double
resonance of {B}i donors in natural {S}i}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{105}}, \bibinfo{pages}{067601}
(\bibinfo{year}{2010}).
\bibitem{SchweigerEPR(2001)}
\bibinfo{author}{Schweiger, A.} \& \bibinfo{author}{Jeschke, G.}
\newblock \emph{\bibinfo{title}{Principles of pulse electron paramagnetic
resonance}} (\bibinfo{publisher}{Oxford University Press},
\bibinfo{year}{2001}).
\bibitem{Feher.PhysRev.114.1245(1959)}
\bibinfo{author}{Feher, G.} \& \bibinfo{author}{Gere, E.~A.}
\newblock \bibinfo{title}{Electron spin resonance experiments on donors in
silicon. ii. {E}lectron spin relaxation effects}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{114}},
\bibinfo{pages}{1245--1256} (\bibinfo{year}{1959}).
\bibitem{Bloembergen1954}
\bibinfo{author}{Bloembergen, N.} \& \bibinfo{author}{Pound, R.~V.}
\newblock \bibinfo{title}{Radiation damping in magnetic resonance experiments}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{95}},
\bibinfo{pages}{8--12} (\bibinfo{year}{1954}).
\bibitem{Wood.Arxiv.1506.03007}
\bibinfo{author}{Wood, C.~J.} \& \bibinfo{author}{Cory, D.~G.}
\newblock \bibinfo{title}{Cavity cooling to the ground state of an ensemble
quantum system}.
\newblock \bibinfo{howpublished}{arXiv:1506.03007} (\bibinfo{year}{2015}).
\bibitem{Feher.PhysRev.109.221(1958)}
\bibinfo{author}{Feher, G.}, \bibinfo{author}{Gordon, J.~P.},
\bibinfo{author}{Buehler, E.}, \bibinfo{author}{Gere, E.~A.} \&
\bibinfo{author}{Thurmond, C.~D.}
\newblock \bibinfo{title}{Spontaneous emission of radiation from an electron
spin system}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{109}},
\bibinfo{pages}{221--222} (\bibinfo{year}{1958}).
\bibitem{Temnov.PhysRevLett.95.243602(2005)}
\bibinfo{author}{Temnov, V.~V.} \& \bibinfo{author}{Woggon, U.}
\newblock \bibinfo{title}{Superradiance and subradiance in an inhomogeneously
broadened ensemble of two-level systems coupled to a low-{Q} cavity}.
\newblock \emph{\bibinfo{journal}{Phys. Rev. Lett.}}
\textbf{\bibinfo{volume}{95}}, \bibinfo{pages}{243602}
(\bibinfo{year}{2005}).
\bibitem{Carver.PhysRev.92.212.2(1953)}
\bibinfo{author}{Carver, T.~R.} \& \bibinfo{author}{Slichter, C.~P.}
\newblock \bibinfo{title}{Polarization of nuclear spins in metals}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{212--213} (\bibinfo{year}{1953}).
\bibitem{Abragam.RepProgrPhys.41.395(1978)}
\bibinfo{author}{Abragam, A.} \& \bibinfo{author}{Goldman, M.}
\newblock \bibinfo{title}{Principles of dynamic nuclear polarisation}.
\newblock \emph{\bibinfo{journal}{Reports on Progress in Physics}}
\textbf{\bibinfo{volume}{41}}, \bibinfo{pages}{395--467}
(\bibinfo{year}{1978}).
\bibitem{Bloembergen.PhysRev.114.445(1959)}
\bibinfo{author}{Bloembergen, N.}, \bibinfo{author}{Shapiro, S.},
\bibinfo{author}{Pershan, P.~S.} \& \bibinfo{author}{Artman, J.~O.}
\newblock \bibinfo{title}{Cross-relaxation in spin systems}.
\newblock \emph{\bibinfo{journal}{Phys. Rev.}} \textbf{\bibinfo{volume}{114}},
\bibinfo{pages}{445--459} (\bibinfo{year}{1959}).
\end{thebibliography}
\hrulefill
\widetext
\begin{center}
\textbf{\large Supplementary Material: Controlling spin relaxation with a cavity}
\end{center}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\makeatletter
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\newcommand{\ac}[0]{\ensuremath{\hat{a}}}
\newcommand{\adagc}[0]{\ensuremath{\hat{a}^{\dagger}}}
\newcommand{\ain}[0]{\ensuremath{\hat{a}_{\mathrm{in}}}}
\newcommand{\aout}[0]{\ensuremath{\hat{a}_{\mathrm{out}}}}
\newcommand{\aR}[0]{\ensuremath{\hat{a}_{\mathrm{R}}}}
\newcommand{\aT}[0]{\ensuremath{\hat{a}_{\mathrm{T}}}}
\renewcommand{\b}[0]{\ensuremath{\hat{b}}}
\newcommand{\bdag}[0]{\ensuremath{\hat{b}^{\dagger}}}
\newcommand{\betaI}[0]{\ensuremath{\beta_\mathrm{I}}}
\newcommand{\betaR}[0]{\ensuremath{\beta_\mathrm{R}}}
\newcommand{\bid}[0]{\ensuremath{\hat{b}_{\mathrm{id}}}}
\renewcommand{\c}[0]{\ensuremath{\hat{c}}}
\newcommand{\cdag}[0]{\ensuremath{\hat{c}^{\dagger}}}
\newcommand{\CorrMat}[0]{\ensuremath{\boldsymbol\gamma}}
\newcommand{\Deltacs}[0]{\ensuremath{\Delta_{\mathrm{cs}}}}
\newcommand{\Deltacsmax}[0]{\ensuremath{\Delta_{\mathrm{cs}}^{\mathrm{max}}}}
\newcommand{\Deltacsparked}[0]{\ensuremath{\Delta_{\mathrm{cs}}^{\mathrm{p}}}}
\newcommand{\Deltacstarget}[0]{\ensuremath{\Delta_{\mathrm{cs}}^{\mathrm{t}}}}
\newcommand{\Deltae}[0]{\ensuremath{\Delta_{\mathrm{e}}}}
\newcommand{\Deltahfs}[0]{\ensuremath{\Delta_{\mathrm{hfs}}}}
\newcommand{\dens}[0]{\ensuremath{\hat{\rho}}}
\newcommand{\e}[1]{\ensuremath{\times 10^{#1}}}
\newcommand{\erfc}[0]{\ensuremath{\mathrm{erfc}}}
\newcommand{\Fq}[0]{\ensuremath{F_{\mathrm{q}}}}
\newcommand{\gammapar}[0]{\ensuremath{\gamma_{\parallel}}}
\newcommand{\gammaperp}[0]{\ensuremath{\gamma_{\perp}}}
\newcommand{\gavg}[0]{\ensuremath{\mathcal{G}_{\mathrm{avg}}}}
\newcommand{\gbar}[0]{\ensuremath{\bar{g}}}
\newcommand{\gens}[0]{\ensuremath{g_{\mathrm{ens}}}}
\renewcommand{\H}[0]{\ensuremath{\hat{H}}}
\renewcommand{\Im}[0]{\ensuremath{\mathrm{Im}}}
\newcommand{\kappac}[0]{\ensuremath{\kappa_{\mathrm{c}}}}
\newcommand{\kappaL}[0]{\ensuremath{\kappa_{\mathrm{L}}}}
\newcommand{\kappamin}[0]{\ensuremath{\kappa_{\mathrm{min}}}}
\newcommand{\kappamax}[0]{\ensuremath{\kappa_{\mathrm{max}}}}
\newcommand{\kB}[0]{\ensuremath{k_{\mathrm{B}}}}
\newcommand{\mat}[1]{\ensuremath{\mathbf{#1}}}
\newcommand{\mean}[1]{\ensuremath{\langle#1\rangle}}
\newcommand{\namp}[0]{\ensuremath{n_{\mathrm{amp}}}}
\renewcommand{\neq}[0]{\ensuremath{n_{\mathrm{eq}}}}
\newcommand{\Nmin}[0]{\ensuremath{N_{\mathrm{min}}}}
\newcommand{\nsp}[0]{\ensuremath{n_{\mathrm{sp}}}}
\newcommand{\omegac}[0]{\ensuremath{\omega_{\mathrm{c}}}}
\newcommand{\omegas}[0]{\ensuremath{\omega_{\mathrm{s}}}}
\newcommand{\pauli}[0]{\ensuremath{\hat{\sigma}}}
\newcommand{\pexc}[0]{\ensuremath{p_{\mathrm{exc}}}}
\newcommand{\pexceff}[0]{\ensuremath{p_{\mathrm{exc}}^{\mathrm{eff}}}}
\newcommand{\Pa}[0]{\ensuremath{\hat{P}_{\mathrm{c}}}}
\newcommand{\Qmin}[0]{\ensuremath{Q_{\mathrm{min}}}}
\newcommand{\Qmax}[0]{\ensuremath{Q_{\mathrm{max}}}}
\renewcommand{\Re}[0]{\ensuremath{\mathrm{Re}}}
\renewcommand{\S}[0]{\ensuremath{\hat{S}}}
\newcommand{\Sminuseff}[0]{\ensuremath{\hat{S}_-^{\mathrm{eff}}}}
\newcommand{\Sxeff}[0]{\ensuremath{\hat{S}_x^{\mathrm{eff}}}}
\newcommand{\Syeff}[0]{\ensuremath{\hat{S}_y^{\mathrm{eff}}}}
\newcommand{\tildeac}[0]{\ensuremath{\tilde{a}_{\mathrm{c}}}}
\newcommand{\tildepauli}[0]{\ensuremath{\tilde{\sigma}}}
\newcommand{\Tcaveff}[0]{\ensuremath{T_{\mathrm{cav}}^{\mathrm{eff}}}}
\newcommand{\Techo}[0]{\ensuremath{T_{\mathrm{echo}}}}
\newcommand{\Tmem}[0]{\ensuremath{T_{\mathrm{mem}}}}
\newcommand{\Tswap}[0]{\ensuremath{T_{\mathrm{swap}}}}
\newcommand{\Var}[0]{\ensuremath{\mathrm{Var}}}
\renewcommand{\vec}[1]{\ensuremath{\mathbf{#1}}}
\newcommand{\Xa}[0]{\ensuremath{\hat{X}_{\mathrm{c}}}}
\newcommand{\Xid}[0]{\ensuremath{\hat{X}_{\mathrm{id}}}}
\newcommand{\Xin}[0]{\ensuremath{\hat{X}_{\mathrm{in}}}}
\newcommand{\Xout}[0]{\ensuremath{\hat{X}_{\mathrm{out}}}}
\newcommand{\Yin}[0]{\ensuremath{\hat{Y}_{\mathrm{in}}}}
\newcommand{\Yout}[0]{\ensuremath{\hat{Y}_{\mathrm{out}}}}
\section{Bismuth donor spin}
The spins used in this experiment, neutral bismuth donors in silicon, have a nuclear spin $I=9/2$ and an electron spin $S=1/2$. The Hamiltonian describing the system~\cite{Wolfowicz.NatureNano.8.561(2013)} is
\begin{equation}
\H /\hbar = \vec{B} \cdot (\gamma_e \vec{S} \otimes \mathbb{1} - \gamma_n \mathbb{1} \otimes \vec{I}) + A\, \vec{S} \cdot \vec{I},
\label{eq:Hamiltonian_BiSI}
\end{equation}
where $A/h= 1.475\,$GHz is the hyperfine coupling between the electron and nuclear spins, and $\gamma_e/ 2\pi=27.997$\,GHz/T and $\gamma_n/ 2\pi=6.9$\,MHz/T are the electronic and nuclear gyromagnetic ratios. In the limit of a small static magnetic field ($B_0 \lesssim 50\,$mT), the 20 electro-nuclear energy states are well approximated by eigenstates of the total angular momentum $\vec{F}=\vec{S}+\vec{I}$ and its projection $m_F$ along the axis of the applied field. These eigenstates group into an $F=4$ ground multiplet and an $F=5$ excited multiplet, separated by a frequency of $(I+1/2)A/h=7.375\,$GHz at zero field, as shown in Figure 1 of the main text.
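The zero-field structure can be verified by diagonalizing Eq.~\ref{eq:Hamiltonian_BiSI} numerically at $B=0$. The following sketch (a standard construction of spin operators, not part of the original analysis) recovers the $(I+1/2)A/h$ splitting and the multiplet degeneracies:

```python
import numpy as np

def spin_ops(s):
    """Return (Sx, Sy, Sz) for spin quantum number s (in units of hbar),
    in the basis m = s, s-1, ..., -s."""
    m = np.arange(s, -s - 1, -1)
    sz = np.diag(m)
    # raising operator: <m+1|S+|m> = sqrt(s(s+1) - m(m+1))
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)
    sx = (sp + sp.T) / 2
    sy = (sp - sp.T) / 2j
    return sx, sy, sz

A = 1.475e9                 # hyperfine constant A/h in Hz
Sx, Sy, Sz = spin_ops(0.5)  # donor electron, S = 1/2
Ix, Iy, Iz = spin_ops(4.5)  # 209Bi nucleus, I = 9/2

# Zero-field Hamiltonian H/h = A S.I on the 20-dimensional product space
H = A * (np.kron(Sx, Ix) + np.kron(Sy, Iy) + np.kron(Sz, Iz))
E = np.linalg.eigvalsh(H)
splitting = E.max() - E.min()  # should equal (I + 1/2) A = 7.375 GHz
```

The spectrum consists of an 11-fold degenerate $F=5$ level at $+2.25A$ and a 9-fold degenerate $F=4$ level at $-2.75A$, separated by $5A$.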
For a given weak static field $B_0$ oriented along $\vec{z}$, transitions satisfying $\Delta F\,\Delta m_F=\pm 1$ may be probed with an excitation field oriented along $\vec{x}$ (or $\vec{y}$), since their associated matrix element $\bra{F,m_F} S_x \ket{F+1,m_F\pm 1}=\bra{F,m_F} S_y \ket{F+1,m_F\pm 1}$ has a magnitude comparable to that of an ideal electronic spin-1/2 transition, $\bra{m_s} S_y \ket{m_{s'}}=0.5$. Characteristics of the nine $\Delta F\,\Delta m_F=+1$ transitions and the nine $\Delta F\,\Delta m_F=-1$ transitions are given in Table~\ref{tbl:Transitions} for $B_0=3\,$mT. Only the ten transitions with a matrix element greater than 0.25 are shown in Figure 2a of the main text; the eight transitions with smaller matrix elements are not visible because each is degenerate with a transition of larger matrix element. The transitions probed by our resonators are highlighted in red in Table~\ref{tbl:Transitions}.
\newcommand\NameEntry[1]{
\multirow{3}*{
\begin{minipage}{5em}
#1
\end{minipage}}}
\newcolumntype{.}{D{.}{.}{0}}
\begin{table}[!h]
\setlength{\tabcolsep}{5pt}
\centering
\begin{tabular}{|p{4cm} .|p{4cm} .|r|.|}
\hline
\multicolumn{2}{|c|}{Transitions $\Delta F\Delta m_F=-1$} & \multicolumn{2}{c|}{Transitions $\Delta F\Delta m_F=+1$} & \NameEntry{Frequency \\ (GHz)} & \NameEntry{$df/dB$ \\ (GHz/T)} \\
\multicolumn{2}{|l|}{{\scriptsize $\ket{F,m_F} \leftrightarrow \ket{F+1,m_F-1}$}} & \multicolumn{2}{l|}{{\scriptsize$\ket{F,m_F} \leftrightarrow \ket{F+1,m_F+1}$}} & & \\
\multicolumn{2}{|r|}{{\scriptsize$\bra{F,m_F} S_x \ket{F+1,m_F-1}$}} & \multicolumn{2}{r|}{{\scriptsize$\bra{F,m_F} S_x \ket{F+1,m_F+1}$}} & & \\
\hline
\textcolor{red}{$\ket{4,-4} \leftrightarrow \ket{5,-5}$} & $0.474$ & & & 7.300 & $-25.1$ \\
\textcolor{red}{$\ket{4,-3} \leftrightarrow \ket{5,-4}$} & $0.423$ & $\ket{4,-4} \leftrightarrow \ket{5,-3}$ & $0.072$ & 7.317 & $-19.2$ \\
\textcolor{red}{$\ket{4,-2} \leftrightarrow \ket{5,-3}$} & $0.372$ & $\ket{4,-3} \leftrightarrow \ket{5,-2}$ & $0.125$ & 7.334 & $-13.8$ \\
$\ket{4,-1} \leftrightarrow \ket{5,-2}$ & $0.321$ & $\ket{4,-2} \leftrightarrow \ket{5,-1}$ & $0.176$ & 7.351 & $-8.1$ \\
$\ket{4, 0} \leftrightarrow \ket{5,-1}$ & $0.271$ & $\ket{4,-1} \leftrightarrow \ket{5, 0}$ & $0.226$ & 7.368 & $-2.5$ \\
$\ket{4, 1} \leftrightarrow \ket{5, 0}$ & $0.221$ & $\ket{4, 0} \leftrightarrow \ket{5, 1}$ & $0.277$ & 7.385 & $3.1$ \\
$\ket{4, 2} \leftrightarrow \ket{5, 1}$ & $0.171$ & $\ket{4, 1} \leftrightarrow \ket{5, 2}$ & $0.327$ & 7.401 & $8.7$ \\
$\ket{4, 3} \leftrightarrow \ket{5, 2}$ & $0.120$ & $\ket{4, 2} \leftrightarrow \ket{5, 3}$ & $0.376$ & 7.418 & $14.2$ \\
$\ket{4, 4} \leftrightarrow \ket{5, 3}$ & $0.069$ & $\ket{4, 3} \leftrightarrow \ket{5, 4}$ & $0.426$ & 7.435 & $19.6$ \\
& & $\ket{4,4} \leftrightarrow \ket{5,5}$ & $0.475$ & 7.452 & $25.3$ \\
\hline
\end{tabular}
\caption{Relevant Si:Bi transitions and their characteristics for $B_0=3\,$mT. The transitions accessible to our resonators are highlighted in red.}
\label{tbl:Transitions}
\end{table}
\section{Experimental details}
Details on the bismuth-implanted sample and an extensive description of the setup are given in \cite{QuantumESR}. In the following, we present the exact protocols used to acquire the data shown in Figures 2 and 3 of the main text.
\subsection*{Experimental determination of $T_1$ at resonance}
\begin{figure}
\caption{Excitation pulse bandwidth effect on $T_1$ measurements. (a) Computed pulse bandwidth for a $5(100)$-$\mu$s $\pi$ pulse, in red (blue), incident on a cavity with $\kappa/2\pi=23\,$kHz (green dashes). (b) Inversion-recovery curves measured with $5$-$\mu$s and $100$-$\mu$s readout pulses, illustrating the averaging effect of the pulse bandwidth on the measured $T_{1}$.}
\label{figSuppT1BW}
\end{figure}
This section explains the inversion-recovery sequence presented in Figure 2 of the main text. Rewriting Eq.~1 of the main text, we can express
\begin{equation}
T_1(\delta) = T_{1}(0) \left(1+ 4 \delta^2/\kappa^2\right),
\label{eq:T1delta}
\end{equation}
with $T_{1}(0)=\kappa/(4g^2)$. Since the probed spin ensemble has a linewidth $\Delta\omega/2\pi = 2\,$MHz, larger than our resonator bandwidths, the signal emitted during the spin echo comes from a subset of the ensemble with a frequency spectrum at least as large as the resonator bandwidth. Spins probed at the edges of the resonator bandwidth have longer Purcell relaxation times: for instance, those detuned by $\delta=\kappa$ have an expected Purcell relaxation time five times longer than at exact resonance (see Figure~\ref{figSuppT1BW}a). The contribution of these slower-decaying spins to the signal results in an averaging effect, so that the measured $T_1$ is erroneously longer than predicted.
In order to suppress this effect, we reduce the bandwidth of the readout sequence so as to collect signal only from spins very close to resonance. The response function of a pulse of length $t_p$ incident on a cavity of bandwidth $\kappa$ at frequency $\omega_0$ is:
$$ \mathcal{R}(\omega)=2 \frac{ \sin(t_p(\omega-\omega_0)/2)}{t_p(\omega-\omega_0)} \times \mathcal{R}_{cav}(\omega) = 2 \frac{ \sin(t_p(\omega-\omega_0)/2)}{t_p(\omega-\omega_0)} \times \frac{1}{1+4 \left(\frac{\omega-\omega_0}{\kappa}\right)^2}$$
As shown in Figure~\ref{figSuppT1BW}a, for the narrowest bandwidth $\kappa/2\pi=23\,$kHz of resonator A, $5$-$\upmu$s pulses are heavily filtered by the resonator, so their bandwidth is set by the cavity, whereas $100$-$\upmu$s pulses have a reduced bandwidth of $\approx 10\,$kHz. For $100$-$\upmu$s excitation pulses, the Rabi frequency is such that only spins with $\lvert \delta\rvert/2\pi \leq 5\,$kHz contribute to the signal. This corresponds to a dispersion of only 5\% in the expected Purcell relaxation times, which is negligible. To illustrate the averaging effect, two inversion-recovery curves are shown in Figure~\ref{figSuppT1BW}b, with readout pulses of $5\,\upmu$s and $100\,\upmu$s. The former yields $T_1=0.65\,$s, a factor of $2$ higher than predicted by the Purcell effect, whereas the latter yields the expected value $T_1=0.35\,$s.
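The filtering argument can be made quantitative; the sketch below (grid spacing and frequency cutoff are arbitrary illustrative choices) evaluates the combined pulse-cavity response $\mathcal{R}(\omega)$ given above and extracts its full width at half maximum:

```python
import numpy as np

def response(f, t_p, kappa):
    """|R| for a rectangular pulse of duration t_p filtered by a cavity of
    bandwidth kappa; f is the detuning (omega - omega_0)/2pi in Hz.
    np.sinc(x) = sin(pi x)/(pi x), so np.sinc(f * t_p) equals the pulse term."""
    return np.abs(np.sinc(f * t_p)) / (1.0 + 4.0 * (f / kappa) ** 2)

def fwhm(t_p, kappa=23e3):
    """Full width at half maximum of |R|, found on a 1-Hz grid of positive detunings."""
    f = np.linspace(0.0, 200e3, 200001)
    r = response(f, t_p, kappa)
    return 2.0 * f[np.argmax(r < 0.5 * r[0])]
```

For a $5$-$\upmu$s pulse the result is close to the bare cavity bandwidth of $23$\,kHz, while a $100$-$\upmu$s pulse gives roughly $10$\,kHz, consistent with the numbers quoted above.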
Thus, Figure~2d of the main text shows an inversion-recovery sequence whose readout echo sequence has a narrow bandwidth ($t_{\pi}=100\,\upmu$s, $t_{\pi/2}=t_{\pi}/2$), to suppress the contribution from spins with a lower decay rate, and whose inversion pulse has a large bandwidth ($t_{\pi}=5\,\upmu$s), to maximize the inversion efficiency.
\subsection*{Protocol used to measure spin-cavity detuning dependent relaxation rate}
The goal of this section is to detail the protocol used in Figure 3 of the main text to study the Purcell rate dependence on the spin-cavity detuning, $\delta$.
In order to study this dependence, we detune the spins from the cavity by applying a magnetic field pulse. This is done in our setup by adding a pulse generator with $50\,\Omega$ output impedance in parallel with the DC supply of one of the Helmholtz coils, which have a $1\,$Hz bandwidth. To minimize the effect of transients, buffer times of $1\,$s are added after ramping the coil up and down. To limit the loss of signal during these buffer times, we use an angle $\theta=45\degree$ and work with resonator B, in order to have a longer $T_{1}(0)=1.68\,$s, measured by inversion recovery. All the data presented in this section, as well as in Figure 3 of the main text, were acquired in a separate run. The quality factor of resonator B dropped from $Q_B=1.07 \times 10^5$ to $Q_B=8.9 \times 10^4$ due to slightly higher losses, yielding a resonator bandwidth $\kappa/2\pi=82\,$kHz.
To observe long relaxation times, such as those measured in Figure~3 of the main text, inversion recovery is not an ideal method. Indeed, when the spin linewidth is much broader ($\sim 20$ times) than the excitation bandwidth and the thermalization time is very long, one can observe polarization mixing mechanisms~\cite{Bloembergen1949,Abragam.NuclearMagneticResonance}; spectral and spatial spin diffusion are the most relevant to our case, since the system contains a single spin species. If one tries to measure the relaxation of spins that have been detuned by $\delta/2\pi=(\omega_s-\omega_0)/2\pi= 3.8\,$MHz for a time $T$ with an inversion-recovery sequence (Figure~\ref{figSuppT1sat}a), one observes a double-exponential relaxation (Figure~\ref{figSuppT1sat}d, green), pointing towards the existence of a spin diffusion mechanism.
Spin diffusion is prevented by suppressing any polarization gradient along the spin line, which leads us to use a saturation recovery scheme instead of inversion recovery. The simplest saturation recovery scheme (Figure~\ref{figSuppT1sat}b) consists of sending a strong microwave tone that saturates the line, producing an incoherent mixed state with the population shared evenly between the excited and ground states. Nevertheless, a relaxation time measured with this scheme still yields a double-exponential decay (Figure~\ref{figSuppT1sat}d, orange), with time constants similar to the inversion recovery case. This implies that the saturation of the line was insufficient.
\begin{figure}
\caption{Spectral spin diffusion. (a,b,c) $T_1$ measurement sequence when spins are detuned from the cavity by applying a magnetic field $B_{\delta}$.}
\label{figSuppT1sat}
\end{figure}
To improve the saturation, one can sweep the magnetic field during the saturation pulse so as to bring different subsets of the spin line into resonance and realize a full saturation. The adopted sweep scheme is shown in Figure~\ref{figSuppT1sat}c. The relaxation curve acquired with this scheme is a simple exponential (Figure~\ref{figSuppT1sat}d, blue), indicating the suppression of the spin diffusion effect.
One can further check the quality of the saturation by measuring the polarization across the full spin linewidth immediately after saturation. To realize such scans (Figure~\ref{figSuppT1sat}e), we apply the relevant saturation pulse at $\omega_0$, then apply a magnetic field pulse $B_{\delta}=(\omega_s-\omega_0)/\gamma_e$ and measure the echo signal $A_Q(\omega_s)$ with a Hahn echo sequence. When no saturation pulse is applied, the measured echo signal $A_{Q0}(\omega_s)$ is a measure of the full polarization $-\langle S_z(\omega_s) \rangle = +1$ (black curve) and shows the natural spin linewidth. When studying an excitation pulse, the polarization of the spins is given by $-\langle S_z(\omega_s) \rangle = A_{Q}(\omega_s)/A_{Q0}(\omega_s)$, where $A_{Q}(\omega_s)$ is the measured echo signal. Thus $-\langle S_z(\omega_s) \rangle = -1$ indicates full inversion, $-\langle S_z(\omega_s) \rangle = 0$ saturation, and $-\langle S_z(\omega_s) \rangle = +1$ return to thermal equilibrium. The green, orange and blue curves are taken after, respectively, a $\pi$ pulse (\textbf{a}), a saturation without field sweeps (\textbf{b}), and a saturation with field sweeps (\textbf{c}). At resonance, one expects a change of $\langle S_z \rangle$ from $-1$ to $+1$ for a $\pi$ pulse and from $-1$ to $0$ for a saturation pulse. Due to the coil transient time, all three curves show a partial relaxation. If the saturation were optimal and no partial relaxation occurred, one would observe $\langle S_z \rangle=0$ for any detuning $\delta$. Of the two saturation schemes (\textbf{b}) and (\textbf{c}) studied here, only the last scheme (\textbf{c}) saturates the line uniformly. The basic saturation has a bandwidth of $\approx 250\,$kHz, and the $\pi$ pulse bandwidth is similar to the cavity bandwidth $\kappa/2\pi=82\,$kHz. This confirms that only scheme (\textbf{c}) fully suppresses spin diffusion and yields a simple exponential relaxation.
This last scheme is the one used to measure the $22$ relaxation rates at different detunings $\delta$ in Figure~3 of the main text.
The global fit shown in Figure~3c of the main text is obtained using the equation $T_1(\delta)^{-1}= \Gamma_{\rm P}+\Gamma_{\rm NR}$, which may be expressed as $T_1(\delta)^{-1}=T_{1}(0)^{-1} \left( 1 + 4 \left( \frac{\delta}{\kappa}\right)^2\right)^{-1} +\Gamma_{\rm NR}$ so as to involve only experimentally determined parameters. Indeed, $\kappa$ is precisely determined by measuring the quality factor of the resonator at very low power, while $T_{1}(0)$ is determined by an inversion recovery sequence as mentioned above. $\delta$ has been determined via precise calibration of the coil pulse, so the only remaining free parameter in the fit is $\Gamma_{\rm NR}$, yielding $\Gamma_{\rm NR}^{-1}=1600\,$s. The error bars come from the uncertainty of the relaxation-rate fits.
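The fit function above is easy to sketch numerically. The snippet below uses the parameter values quoted in this section ($\kappa/2\pi=82\,$kHz, $T_1(0)=1.68\,$s, $\Gamma_{\rm NR}^{-1}=1600\,$s), but the function itself is only an illustrative sketch, not the analysis code used for Figure 3.

```python
import numpy as np

# Parameters quoted in this section; Gamma_NR is the single fitted parameter.
kappa = 2 * np.pi * 82e3      # resonator bandwidth (rad/s)
T1_0 = 1.68                   # on-resonance Purcell relaxation time (s)
Gamma_NR = 1.0 / 1600.0       # non-radiative relaxation rate (1/s)

def T1(delta):
    """T1(delta)^-1 = Gamma_P + Gamma_NR, with the Purcell rate
    suppressed by the Lorentzian factor (1 + 4 (delta/kappa)^2)."""
    gamma_p = (1.0 / T1_0) / (1.0 + 4.0 * (delta / kappa) ** 2)
    return 1.0 / (gamma_p + Gamma_NR)

# On resonance the decay is Purcell-dominated; far detuned it
# saturates towards Gamma_NR^-1 = 1600 s.
print(T1(0.0))          # ~1.68 s
print(T1(100 * kappa))  # ~1.6e3 s
```

The crossover between the two regimes happens when the suppressed Purcell rate drops below $\Gamma_{\rm NR}$, i.e. around $\delta/\kappa \approx \tfrac{1}{2}\sqrt{\Gamma_{\rm NR}^{-1}/T_1(0)}$.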
\end{document}
\begin{document}
\title{Strong monogamy of quantum entanglement for multi-qubit W-class states}
\author{Jeong San Kim}
\email{[email protected]} \affiliation{
Department of Mathematics, University of Suwon, Kyungki-do 445-743, Korea
}
\date{\today}
\begin{abstract}
We provide strong evidence for the strong monogamy inequality of multi-qubit entanglement recently proposed in
[B. Regula {\em et al.}, Phys. Rev. Lett. {\bf 113}, 110501 (2014)].
We consider a large class of multi-qubit generalized W-class states, and analytically show that the strong monogamy inequality
of multi-qubit entanglement is saturated by this class of states.
\end{abstract}
\pacs{
03.67.Mn,
03.65.Ud
}
\maketitle
\section{Introduction}
\label{Intro}
Whereas classical correlation can be freely shared among the parties of a
multi-party system, quantum entanglement is restricted in its
shareability; if a pair of parties is maximally entangled in a
multipartite system, they can share neither entanglement~\cite{CKW, OV}
nor classical correlations~\cite{KW} with the rest of the system.
This restriction of entanglement shareability among multi-party systems is known as
the {\em monogamy of entanglement}~(MoE)~\cite{T04}.
MoE is at the heart of many quantum information and communication
protocols. For example, MoE is a key ingredient to make quantum
cryptography secure because it quantifies how much information
an eavesdropper could potentially obtain about the secret key to be
extracted~\cite{rg}. MoE also plays an important role
in condensed-matter physics such as
the frustration effects observed in Heisenberg antiferromagnets and
the $N$-representability problem for
fermions~\cite{anti}.
The first mathematical characterization of MoE was established by
Coffman, Kundu, and Wootters (CKW) for three-qubit systems~\cite{CKW} as an inequality; for a three-qubit
pure state $\ket{\psi}_{ABC}$
with its one-qubit and two-qubit reduced density matrices
$\rho_A=\mbox{$\mathrm{tr}$}_{BC}\ket{\psi}_{ABC}\bra{\psi}$,
$\mbox{$\mathrm{tr}$}_C\ket{\psi}_{ABC}\bra{\psi}=\rho_{AB}$ and
$\mbox{$\mathrm{tr}$}_B\ket{\psi}_{ABC}\bra{\psi}=\rho_{AC}$ respectively,
\begin{equation}
\tau\left(\ket{\psi}_{A|BC}\right) \geq \tau\left(\rho_{A|B}\right)+\tau\left(\rho_{A|C}\right),
\label{eq: CKW}
\end{equation}
where $\tau\left(\ket{\psi}_{A|BC}\right)$
is the {\em tangle} of the pure state $\ket{\psi}_{ABC}$ quantifying the bipartite entanglement between $A$ and $BC$,
and $\tau\left(\rho_{A|B}\right)$ and $\tau\left(\rho_{A|C}\right)$ are the tangles
of the two-qubit reduced states $\rho_{AB}$ and $\rho_{AC}$, respectively.
The CKW inequality in~(\ref{eq: CKW}) shows the mutually exclusive nature of
three-qubit quantum entanglement in a quantitative way; more
entanglement shared between two qubits ($\tau\left(\rho_{A|B}\right)$) necessarily
implies less entanglement between the other two qubits ($\tau\left(\rho_{A|C}\right)$)
so that the summation does not exceed the total entanglement ($\tau\left(\ket{\psi}_{A|BC}\right)$).
Moreover, the residual entanglement from the difference between left and right-hand sides of Inequality~(\ref{eq: CKW})
is also interpreted as the genuine three-party entanglement, {\em three tangle} of $\ket{\psi}_{ABC}$
\begin{equation}
\tau\left(\ket{\psi}_{A|B|C}\right)=\tau\left(\ket{\psi}_{A|BC}\right)-\tau\left(\rho_{A|B}\right)-\tau\left(\rho_{A|C}\right),
\label{3tangle}
\end{equation}
which is invariant under the permutation of subsystems $A$, $B$ and $C$.
In this sense, $\tau\left(\ket{\psi}_{A|BC}\right)$ and $\tau\left(\rho_{A|B}\right)$ are also referred to
as the one tangle and two tangle, respectively~\cite{12tangle}.
Later, the CKW inequality was generalized to multi-qubit systems~\cite{OV} and to some cases of higher-dimensional
quantum systems in terms of various entanglement measures~\cite{KDS, KSRenyi, KT, KSU}. For a general monogamy
inequality of multi-party entanglement, it was shown that
squashed entanglement~\cite{CW04} is a faithful entanglement measure
exhibiting MoE for arbitrary quantum systems~\cite{BCY10}.
Recently, the three tangle in Eq.~(\ref{3tangle})
was systematically generalized to arbitrary $n$-qubit quantum states, namely the residual {\em $n$-tangle}~\cite{LA}.
Based on this generalization, the concept of a {\em strong monogamy}~(SM) inequality of multi-qubit entanglement
was proposed by conjecturing the nonnegativity of the $n$-tangle~\cite{LA}. In support of the SM inequality, extensive numerical evidence was presented
for four-qubit systems, together with analytical proofs for some cases of multi-qubit systems.
However, proving the SM conjecture analytically for arbitrary multi-qubit states seems to be a formidable challenge due to the numerous optimization
processes arising in the definition of the $n$-tangle.
Here we provide strong evidence for the SM inequality of multi-qubit entanglement; we consider a
large class of multi-qubit states, the {\em generalized W-class states}, and analytically show that the SM inequality proposed in~\cite{LA}
is saturated by this class of states. Because the multi-qubit CKW inequality is known to be saturated by the generalized W-class states~\cite{Kim08},
this class of states is a good candidate for possible counterexamples
to stronger versions of monogamy inequalities.
The paper is organized as follows. In Sec.~\ref{Sec: Wstate}, we review the definition of generalized W-class states
for multi-qubit systems and provide some useful properties of this class in accordance with CKW inequality.
In Sec.~\ref{Subsec: strong}, we recall the concept of $n$-tangle as well as SM inequality of multi-qubit entanglement,
and show that multi-qubit SM inequality is saturated by generalized W-class states in Sec.~\ref{Subsec: SM W}.
In Sec.~\ref{Sec: Conclusion}, we summarize our results.
\section{Multi-qubit CKW inequality and the generalized W-class states}
\label{Sec: Wstate}
For $n$-qubit systems $\mathcal H_1 \otimes \cdots \otimes \mathcal H_n$ where $\mathcal H_j \cong \mathbb{C}^2$ for $j=1,\ldots,n$
and any $n$-qubit state $\ket{\psi}_{A_1 A_2 ... A_n} \in \mathcal H_1 \otimes \cdots \otimes \mathcal H_n$,
the three-qubit CKW inequality in (\ref{eq: CKW}) can be generalized as~\cite{OV}
\begin{equation}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right) \geq \sum_{j=2}^{n}\tau\left(\rho_{A_1|A_j}\right),
\label{eq: OV}
\end{equation}
where $\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)$ is the tangle (or one tangle) of the pure state $\ket{\psi}_{A_1A_2\cdots A_n}$
with respect to the bipartition between $A_1$ and the other qubits $A_2\cdots A_n$
\begin{equation}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)=4\det \rho_{A_1},
\label{1tangle}
\end{equation}
and $\tau\left(\rho_{A_1|A_j}\right)$ is the tangle (or two tangle) of the two-qubit reduced density matrix $\rho_{A_1A_j}$
defined by convex-roof extension
\begin{equation}
\tau\left(\rho_{A_1|A_j}\right)=\bigg[\min\sum_h p_h \sqrt{\tau(\ket{\psi_h}_{A_1A_j})}\bigg]^2,
\label{2tangle}
\end{equation}
with the minimization taken over all possible pure state decompositions
\begin{equation}
\rho_{A_1A_j}=\sum_{h}p_{h}\ket{\psi_h}_{A_1A_j}\bra{\psi_h},
\label{decomp}
\end{equation}
for each $j=2,\cdots ,n$.
The $n$-qubit generalized W-class state is defined as
\begin{align}
\ket{\psi}_{A_1 A_2 ... A_n}=&a\ket{00\cdots0}+b_1 \ket{10\cdots0}+b_2 \ket{01\cdots0}\nonumber\\
&+...+b_n \ket{00\cdots1}
\label{supWV}
\end{align}
with $|a|^2+\sum_{j=1}^{n}|b_j|^2 =1$~\cite{Kim08, GW}.
The term ``{\em generalized}'' naturally arises because Eq.~(\ref{supWV}) includes $n$-qubit W states as a special case
when $a=0$ and $b_j=1/\sqrt{n}$ for all $j$.
Before we further investigate the strongly monogamous property of entanglement for the generalized W-class state in Eq.~(\ref{supWV}),
we recall a very useful property of quantum states proved by Hughston, Jozsa, and Wootters (HJW), which shows the unitary freedom
in the ensemble representation of density matrices~\cite{HJW}.
\begin{Prop} (HJW theorem)
The sets $\{|\tilde{\phi_i}\rangle\}$ and $\{|\tilde{\psi_j}\rangle\}$ of (possibly unnormalized) states generate the same density matrix
if and only if
\begin{equation}
|\tilde{\phi_i}\rangle=\sum_j u_{ij}|\tilde{\psi_j}\rangle,
\label{HJWeq}
\end{equation}
where $(u_{ij})$ is a unitary matrix of complex numbers, with indices $i$ and $j$, and we
{\em pad} whichever set of states $\{|\tilde{\phi_i}\rangle\}$ or $\{|\tilde{\psi_j}\rangle\}$ is smaller with additional zero vectors
so that the two sets have the same number of elements.
\label{HJWthm}
\end{Prop}
A direct consequence of Proposition~\ref{HJWthm} is the following: two pure-state decompositions
$\sum_{i}p_{i}\ket{\phi_i}\bra{\phi_i}$ and $\sum_{j}q_{j}\ket{\psi_j}\bra{\psi_j}$
represent the same density matrix, that is, $\rho=\sum_{i}p_{i}\ket{\phi_i}\bra{\phi_i}=\sum_{j}q_{j}\ket{\psi_j}\bra{\psi_j}$,
if and only if $\sqrt{p_{i}}\ket{\phi_i}=\sum_{j}u_{ij}\sqrt{q_{j}}\ket{\psi_j}$ for some unitary matrix $(u_{ij})$.
Now we have the following lemma, which shows that the multi-qubit monogamy inequality in terms of one and two tangles in (\ref{eq: OV}) is
saturated by the generalized W-class states in (\ref{supWV}).
\begin{Lem}
For any multi-qubit system, the multi-qubit CKW inequality is saturated by generalized W-class states,
that is,
\begin{align}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right) = \sum_{j=2}^{n}\tau\left(\rho_{A_1|A_j}\right),
\label{satWV}
\end{align}
for any $n$-qubit generalized W-class state $\ket{\psi}_{A_1 A_2 \cdots A_n}$ in Eq.~(\ref{supWV}).
\label{Lem: satWV}
\end{Lem}
\begin{proof}
Let us first consider the one tangle of $\ket{\psi}_{A_1 A_2 ... A_n}$ with respect to the bipartition between $A_1$ and the other qubits.
The reduced density matrix $\rho_{A_1}$ of subsystem $A_1$ is
\begin{align}
\rho_{A_1}=&\mbox{$\mathrm{tr}$}_{A_2\cdots A_n}\ket{\psi}_{A_1 A_2 ... A_n}\bra{\psi}\nonumber\\
=&\left(a\ket{0}+b_1\ket{1}\right)_{A_1}\left(a^*\bra{0}+{b_1}^*\bra{1}\right)
+\sum_{j=2}^{n}|b_j|^2\ket{0}_{A_1}\bra{0},
\label{rho_A_1}
\end{align}
thus
\begin{align}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)=4\det\rho_{A_1}=4|b_1|^2\sum_{j=2}^{n}|b_j|^2.
\label{onet}
\end{align}
For each $j=2,3,\cdots , n$, the reduced density matrix $\rho_{A_1A_j}$ of two-qubit subsystem $A_1A_j$ is
\begin{widetext}
\begin{align}
\rho_{A_1A_j}=&\mbox{$\mathrm{tr}$}_{A_2\cdots \widehat{A_j} \cdots A_n}\ket{\psi}_{A_1 A_2 ... A_n}\bra{\psi}\nonumber\\
=&\left(a\ket{00}+b_1\ket{10}+b_j\ket{01}\right)_{A_1A_j}\left(a^*\bra{00}+b_1^*\bra{10}+b_j^*\bra{01}\right)
+\sum_{k\neq j}|b_k|^2\ket{00}_{A_1A_j}\bra{00},
\label{rho1i}
\end{align}
\end{widetext}
where $A_2\cdots \widehat{A}_j\cdots A_n = A_2\cdots A_{j-1}A_{j+1}\cdots A_n$ for each $j=2,3,\cdots, n$.
Here, we consider the two-qubit (possibly unnormalized) states
\begin{align}
\ket{\tilde{x}}_{A_1A_j}=&a\ket{00}_{A_1A_j}+b_1\ket{10}_{A_1A_j}+b_j\ket{01}_{A_1A_j}\nonumber\\
\ket{\tilde{y}}_{A_1A_j}=&\sqrt{\sum_{k\neq j}|b_k|^2}\ket{00}_{A_1A_j},
\label{HJW}
\end{align}
which represent $\rho_{A_1A_j}$ as
\begin{equation}
\rho_{A_1A_j}=\ket{\tilde{x}}_{A_1A_j}\bra{\tilde{x}}+\ket{\tilde{y}}_{A_1A_j}\bra{\tilde{y}}.
\label{rho1irep}
\end{equation}
From the HJW theorem in Proposition~\ref{HJWthm}, we note that for any pure state decomposition of
\begin{equation}
\rho_{A_1 A_j}=\sum_{h=1}^{r}|\tilde{\phi_h}\rangle_{A_1 A_j} \langle\tilde{\phi_h}|,
\label{decomp2}
\end{equation}
where
$|\tilde{\phi_h}\rangle_{A_1 A_j}$ is an unnormalized state in two-qubit subsystem $A_1A_j$,
there exists an $r\times r$ unitary matrix $(u_{hl})$ such that
\begin{equation}
|\tilde{\phi_h}\rangle_{A_1A_j}=u_{h1}\ket{\tilde{x}}_{A_1 A_j}+u_{h2}\ket{\tilde{y}}_{A_1 A_j},
\label{HJWrelation}
\end{equation}
for each $h$.
By considering the normalization $\ket{\phi_h}_{A_1 A_j}=|\tilde{\phi}_h\rangle_{A_1 A_j}/\sqrt{p_h}$
with $ p_h =|\langle\tilde{\phi}_h|\tilde{\phi}_h\rangle|$, we have the tangle of each two-qubit pure state
$\ket{\phi_h}_{A_1 A_j}$ as
\begin{align}
\tau\left(\ket{\phi_h}_{A_1|A_j}\right)=4\det\rho_{A_1}^{h}=\frac{4}{p_h^2}|u_{h1}|^4|b_1|^2|b_j|^2,
\label{tauphi_h}
\end{align}
where $\rho_{A_1}^{h}=\mbox{$\mathrm{tr}$}_{A_j}\ket{\phi_h}_{A_1A_j}\bra{\phi_h}$ is the reduced density matrix of $\ket{\phi_h}_{A_1A_j}$ on subsystem $A_1$ for each $h$.
Moreover, the definition of the two tangle in Eq.~(\ref{2tangle}) together with Eq.~(\ref{tauphi_h}) leads us to
\begin{align}
\tau\left(\rho_{A_1|A_j}\right)=&\bigg[\min_{\{p_h, \ket{\phi_h}\}}\sum_h p_h \sqrt{\tau\left(\ket{\phi_h}_{A_1|A_j}\right)}\bigg]^2\nonumber\\
=&\bigg[\min_{\{p_h, \ket{\phi_h}\}}\sum_h 2|u_{h1}|^2|b_1||b_j|\bigg]^2\nonumber\\
=&4|b_1|^2|b_j|^2,
\label{2ti}
\end{align}
for each $j=2,\cdots ,n$, where the last equality follows from the unitarity of $(u_{hl})$, which guarantees $\sum_h |u_{h1}|^2=1$ for any decomposition.
Now Eqs.~(\ref{onet}) and (\ref{2ti}) imply Eq.~(\ref{satWV}), which completes the proof.
\end{proof}
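Lemma~\ref{Lem: satWV} can be checked numerically. The sketch below builds a generalized W-class state for arbitrary amplitudes, computes the one tangle as $4\det\rho_{A_1}$, and evaluates each two tangle with the closed-form Wootters concurrence (for two qubits the tangle equals the concurrence squared); all function names are illustrative, and the chosen amplitudes are an arbitrary example satisfying the normalization constraint.

```python
import numpy as np

def w_class_state(a, b):
    """n-qubit a|0...0> + sum_j b_j |0..1..0> (excitation on qubit j)."""
    n = len(b)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = a
    for j, bj in enumerate(b):
        psi[2 ** (n - 1 - j)] = bj
    return psi

def reduced(psi, n, keep):
    """Partial trace of |psi><psi| keeping the qubits listed in `keep`."""
    rho = np.outer(psi, psi.conj()).reshape((2,) * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def one_tangle(psi, n):
    return 4 * np.linalg.det(reduced(psi, n, [0])).real

def two_tangle(rho2):
    """Wootters formula: tangle = C(rho)^2 for a two-qubit mixed state."""
    yy = np.kron([[0, -1j], [1j, 0]], [[0, -1j], [1j, 0]])
    R = rho2 @ yy @ rho2.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3]) ** 2

n, a = 4, 0.4
b = [0.5, 0.4, 0.3, np.sqrt(1 - 0.4**2 - 0.5**2 - 0.4**2 - 0.3**2)]
psi = w_class_state(a, b)
lhs = one_tangle(psi, n)                      # 4|b_1|^2 sum_{j>=2}|b_j|^2
rhs = sum(two_tangle(reduced(psi, n, [0, j])) for j in range(1, n))
assert np.isclose(lhs, rhs)                   # CKW saturated (Lemma 1)
```

The analytic values here are $\tau(\ket{\psi}_{A_1|A_2A_3A_4})=4|b_1|^2\sum_{j\ge 2}|b_j|^2=0.59$ and two tangles $4|b_1|^2|b_j|^2$, matching Eqs.~(\ref{onet}) and (\ref{2ti}).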
To evaluate the two tangle of the two-qubit mixed state $\rho_{A_1A_j}$ in Eq.~(\ref{rho1i}), we need to deal with the minimization arising in the definition, Eq.~(\ref{2tangle}).
In fact, any two-qubit mixed state admits an analytic evaluation of the entanglement measure called {\em concurrence}~\cite{WW}, and this analytic
evaluation can also be adapted to the two tangle. However, the proof of Lemma~\ref{Lem: satWV} efficiently resolves this optimization problem by
considering all possible pure-state decompositions of $\rho_{A_1 A_j}$, which also reveals a nice property of generalized W-class states:
the tangle of a two-qubit reduced density matrix obtained from a generalized W-class state does not depend on
the choice of pure-state decomposition $\rho_{A_1 A_j}=\sum_{h}{ p_h \ket{\phi_h}_{A_1 A_j}\bra{\phi_h}}$.
The following simple lemma shows another useful property of the structure of the generalized W-class states.
\begin{Lem}
Let $\ket{\psi}_{A_1\cdots A_n}$ be a generalized W-class state in Eq.~(\ref{supWV}).
For any $m$-qubit subsystem $A_1A_{j_1}\cdots A_{j_{m-1}}$ of $A_1\cdots A_n$ with $2 \leq m \leq n-1$,
the reduced density matrix $\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ of $\ket{\psi}_{A_1\cdots A_n}$ is a mixture of an $m$-qubit generalized W-class state
and the vacuum.
\label{reduced}
\end{Lem}
\begin{proof}
By a straightforward calculation, we obtain
\begin{align}
\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}=&\ket{\tilde{x}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{\tilde{x}}\nonumber\\
&+\ket{\tilde{y}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{\tilde{y}},
\label{mrho}
\end{align}
where
\begin{widetext}
\begin{align}
\ket{\tilde{x}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}=
&\left(a\ket{00\cdots0}+b_1\ket{10\cdots0}+b_{j_1}\ket{01\cdots0}+
\cdots+b_{j_{m-1}}\ket{00\cdots1}\right)_{A_1A_{j_1}\cdots A_{j_{m-1}}},\nonumber\\
\ket{\tilde{y}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}=&\sqrt{\sum_{k\neq j_1, j_2, \cdots,j_{m-1}}|b_k|^2}\ket{00\cdots 0}_{A_1A_{j_1}\cdots A_{j_{m-1}}}
\label{xym}
\end{align}
\end{widetext}
are the unnormalized states in $m$-qubit subsystems $A_1A_{j_1}\cdots A_{j_{m-1}}$.
By considering the normalized states $\ket{x}_{A_1A_{j_1}\cdots A_{j_{m-1}}}=\ket{\tilde{x}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}/\sqrt{p}$ with
$p=\langle\tilde{x}|\tilde{x}\rangle$ and $\ket{y}_{A_1A_{j_1}\cdots A_{j_{m-1}}}=\ket{\tilde{y}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}/\sqrt{q}$ with
$q=\langle\tilde{y}|\tilde{y}\rangle$, we note that
\begin{align}
\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}=&p\ket{x}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{x}\nonumber\\
&+q\ket{y}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{y},
\end{align}
where $\ket{x}$ is a generalized W-class state and $\ket{y}$ is the vacuum, which completes the proof.
\end{proof}
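Lemma~\ref{reduced} is also easy to verify numerically: tracing a generalized W-class state down to any subset of qubits containing $A_1$ yields a rank-two state supported on the span of the vacuum and the single-excitation subspace of the kept qubits. A minimal Python sketch (the five-qubit state and equal amplitudes are an arbitrary illustrative choice):

```python
import numpy as np

n = 5
a = 0.6
b = np.full(n, np.sqrt((1 - a**2) / n))   # |a|^2 + sum_j |b_j|^2 = 1
psi = np.zeros(2 ** n)
psi[0] = a
for j in range(n):
    psi[2 ** (n - 1 - j)] = b[j]          # single excitation on qubit j

# Reduce to qubits {A1, A2, A3} (indices 0, 1, 2) by tracing out the rest.
rho = np.outer(psi, psi).reshape((2,) * (2 * n))
for q in (4, 3):
    rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
rho = rho.reshape(8, 8)

# Support lies in span{|000>, |001>, |010>, |100>} (W-class + vacuum) ...
support = {0, 1, 2, 4}
assert all(abs(rho[i, j]) < 1e-12
           for i in range(8) for j in range(8)
           if i not in support or j not in support)
# ... and the state is a rank-2 mixture, as in Eq. (mrho).
assert np.sum(np.linalg.eigvalsh(rho) > 1e-12) == 2
```

The two nonzero eigenvectors are not $\ket{\tilde x}$ and $\ket{\tilde y}$ themselves (those are not orthogonal), but the rank and support checks above are exactly the content of the lemma.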
\section{Strong monogamy inequality for multi-qubit generalized W-class states}
\label{Sec: SM W}
\subsection{Strong monogamy of multi-qubit entanglement}
\label{Subsec: strong}
The definition of three tangle in Eq.~(\ref{3tangle}) was generalized for arbitrary $n$-qubit quantum states~\cite{LA};
for an $n$-qubit pure state $\ket{\psi}_{A_1A_2\cdots A_n}$,
its {\em $n$-tangle} is defined as
\begin{align}
\tau\left(\ket{\psi}_{A_1|A_2|\cdots |A_n}\right)
=&\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)\nonumber\\
&-\sum_{m=2}^{n-1} \sum_{\vec{j}^m}\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)^{m/2},
\label{eq:ntanglepure}
\end{align}
where the index vector $\vec{j}^m=(j^m_1,\ldots,j^m_{m-1})$ spans all the ordered subsets of the index set $\{2,\ldots,n\}$ with $(m-1)$ distinct elements.
For each $m=2,\cdots, n-1$, the $m$-tangle for multi-qubit mixed state is defined by convex-roof extension,
\begin{widetext}
\begin{equation}
\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)=\bigg[\min_{\{p_h, \ket{\psi_h}\}}\sum_h p_h
\sqrt{\tau\left(\ket{\psi_h}_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)}\bigg]^2,
\label{ntanglemix}
\end{equation}
\end{widetext}
where the minimization is taken over all possible pure state decompositions
\begin{equation}
\rho_{A_1A_{j^m_1}\cdots A_{j^m_{m-1}}}=\sum_{h}p_{h}\ket{\psi_h}_{A_1A_{j^m_1}\cdots A_{j^m_{m-1}}}\bra{\psi_h}.
\label{decompm}
\end{equation}
Eq.~(\ref{eq:ntanglepure}) is a recursive definition; that is, all the $m$ tangles $\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)$
for $2 \leq m \leq n-1$ are needed to define the $n$ tangle $\tau\left(\ket{\psi}_{A_1|A_2|\cdots |A_n}\right)$.
We further note that Eq.~(\ref{eq:ntanglepure}) reduces to the two and three tangles when $n=2$ and $n=3$ respectively.
Based on this generalization, the strong monogamy of multi-qubit entanglement was proposed
by conjecturing the nonnegativity of the $n$-tangle in Eq.~(\ref{eq:ntanglepure}),
\begin{align}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)\geq\sum_{m=2}^{n-1} \sum_{\vec{j}^m}\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)^{m/2}.
\label{eq:SM}
\end{align}
The term {\em strong} naturally arises because
Inequality~(\ref{eq:SM}) is in fact {\em finer} than the $n$-qubit CKW inequality in (\ref{eq: OV})
\begin{align}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)\geq&\sum_{j=2}^{n}\tau\left(\rho_{A_1|A_j}\right)\nonumber\\
&+\sum_{m=3}^{n-1} \sum_{\vec{j}^m}\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)^{m/2}\nonumber\\
\geq &\sum_{j=2}^{n}\tau\left(\rho_{A_1|A_j}\right).
\label{compar}
\end{align}
Moreover, Inequality~(\ref{eq:SM}) reduces to the three-qubit CKW inequality in (\ref{eq: CKW}) for $n=3$;
thus Inequality~(\ref{eq:SM}) can be considered another generalization of the three-qubit CKW inequality, in a stronger form.
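For $n=3$ the residual tangle can be computed in closed form from the hyperdeterminant expression of the three tangle~\cite{CKW}, which makes the contrast between W-class and GHZ states easy to check numerically. A sketch (the formula below is the standard CKW three tangle, not anything specific to this paper):

```python
import numpy as np

def three_tangle(psi):
    """CKW three tangle tau = 4|d1 - 2 d2 + 4 d3| of a pure 3-qubit
    state with amplitudes a[i, j, k]."""
    a = psi.reshape(2, 2, 2)
    d1 = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
          + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    d2 = (a[0,0,0]*a[1,1,1] * (a[0,1,1]*a[1,0,0] + a[1,0,1]*a[0,1,0]
                               + a[1,1,0]*a[0,0,1])
          + a[0,1,1]*a[1,0,0]*a[1,0,1]*a[0,1,0]
          + a[0,1,1]*a[1,0,0]*a[1,1,0]*a[0,0,1]
          + a[1,0,1]*a[0,1,0]*a[1,1,0]*a[0,0,1])
    d3 = (a[0,0,0]*a[1,1,0]*a[1,0,1]*a[0,1,1]
          + a[1,1,1]*a[0,0,1]*a[0,1,0]*a[1,0,0])
    return 4 * abs(d1 - 2 * d2 + 4 * d3)

# A W-class state (here a = b_j = 1/2) has zero three tangle,
# so its entire one tangle is carried by the two tangles;
# for GHZ the situation is reversed.
w = np.zeros(8); w[0] = 0.5; w[1] = w[2] = w[4] = 0.5
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(three_tangle(w))    # -> 0.0
print(three_tangle(ghz))  # -> ~1.0
```

Every monomial in $d_1$, $d_2$, $d_3$ contains either $a_{111}$ or a doubly-excited amplitude, all of which vanish for W-class states; this is the $n=3$ shadow of the general vanishing of residual tangles shown in the next subsection.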
\subsection{SM inequality for W-class states}
\label{Subsec: SM W}
In support of the SM inequality in (\ref{eq:SM}), extensive numerical evidence was presented
for four-qubit systems, together with analytical proofs for some cases of multi-qubit systems.
However, providing an analytical proof of Inequality~(\ref{eq:SM}) for arbitrary multi-qubit states seems to be a formidable challenge
because of the numerous optimization processes arising in the recursive definition of the $n$-tangle~(\ref{eq:ntanglepure}).
Here we show that the SM inequality holds for generalized W-class states in arbitrary multi-qubit systems.
Because Lemma~\ref{Lem: satWV} shows that the multi-qubit CKW inequality is saturated by
generalized W-class states~\cite{Kim08}, this class of states is a good candidate for a possible violation of
the stronger inequality, that is, the SM inequality.
To establish the SM inequality for generalized W-class states,
we first note that Inequality (\ref{eq:SM}) must be saturated by this class of states, because of
Lemma~\ref{Lem: satWV} together with Inequalities (\ref{compar}). Thus we will show that the residual term
\begin{align}
\sum_{m=3}^{n-1} \sum_{\vec{j}^m}\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)^{m/2}
\label{ktangleres}
\end{align}
in (\ref{compar}) is zero for any $n$-qubit generalized W-class state $\ket{\psi}_{A_1 A_2 ... A_n}$.
Using mathematical induction on $m$, we further show that all the
$m$ tangles for $3\leq m \leq n-1$ are zero for generalized W-class states, that is,
\begin{align}
\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)=0,
\label{ktanglezero}
\end{align}
for all the index vectors $\vec{j}^m=(j^m_1,\ldots,j^m_{m-1})$ with $3\leq m \leq n-1$.
For $m=3$ and any index vector $\vec{j}=(j_1, j_2)$ with $j_1,~j_2 \in \{2,3,\cdots,n\}$, the left-hand side of Eq.~(\ref{ktanglezero}) becomes the three tangle of the three-qubit
subsystem ${A_1A_{j_1}A_{j_2}}$~\cite{omit}, where Lemma~\ref{reduced} gives the three-qubit reduced density matrix as
\begin{equation}
\rho_{A_1A_{j_1}A_{j_2}}=\ket{\tilde{x}}_{A_1A_{j_1}A_{j_2}}\bra{\tilde{x}}+\ket{\tilde{y}}_{A_1A_{j_1}A_{j_2}}\bra{\tilde{y}},
\label{rho123rep}
\end{equation}
with the three-qubit unnormalized states
\begin{align}
\ket{\tilde{x}}_{A_1A_{j_1}A_{j_2}}=a&\ket{000}_{A_1A_{j_1}A_{j_2}}+b_1\ket{100}_{A_1A_{j_1}A_{j_2}}\nonumber\\
&+b_{j_1}\ket{010}_{A_1A_{j_1}A_{j_2}}+b_{j_2}\ket{001}_{A_1A_{j_1}A_{j_2}}\nonumber\\
\ket{\tilde{y}}_{A_1A_{j_1}A_{j_2}}&=\sqrt{\sum_{k\neq j_1, j_2}|b_k|^2}\ket{000}_{A_1A_{j_1}A_{j_2}}.
\label{xy2}
\end{align}
The HJW theorem in Proposition~\ref{HJWthm} assures that for any pure state decomposition of $\rho_{A_1A_{j_1}A_{j_2}}$,
\begin{equation}
\rho_{A_1A_{j_1}A_{j_2}}=\sum_{h=1}^{r}|\tilde{\phi_h}\rangle_{A_1A_{j_1}A_{j_2}} \langle\tilde{\phi_h}|,
\label{decomp123}
\end{equation}
where
$|\tilde{\phi_h}\rangle_{A_1A_{j_1}A_{j_2}}$ is an unnormalized state in three-qubit subsystem ${A_1A_{j_1}A_{j_2}}$,
there exists an $r\times r$ unitary matrix $(u_{hl})$ that makes a relation between pure state ensembles of $\rho_{A_1A_{j_1}A_{j_2}}$ as
\begin{equation}
|\tilde{\phi_h}\rangle_{A_1A_{j_1}A_{j_2}}=u_{h1}\ket{\tilde{x}}_{A_1A_{j_1}A_{j_2}}+u_{h2}\ket{\tilde{y}}_{A_1A_{j_1}A_{j_2}}.
\label{phih123}
\end{equation}
Here we note that, for each $h$, $|\tilde{\phi_h}\rangle_{A_1A_{j_1}A_{j_2}}$ in Eq.~(\ref{phih123}) is an (unnormalized) superposition of a three-qubit W-class state
and the vacuum. Thus Lemma~\ref{Lem: satWV} assures that the normalized state $\ket{\phi_h}_{A_1A_{j_1}A_{j_2}}=|\tilde{\phi}_h\rangle_{A_1A_{j_1}A_{j_2}}/\sqrt{p_h}$
with $ p_h =|\langle\tilde{\phi}_h|\tilde{\phi}_h\rangle|$ satisfies Eq.~(\ref{satWV}); that is, the three tangle of
$\ket{\phi_h}_{A_1A_{j_1}A_{j_2}}$ in Eq.~(\ref{eq:ntanglepure}) is zero,
\begin{align}
\tau\left(\ket{\phi_h}_{A_1|A_{j_1}|A_{j_2}}\right)=&\tau\left(\ket{\phi_h}_{A_1|A_{j_1}A_{j_2}}\right)\nonumber\\
&-\tau\left(\rho^h_{A_1|A_{j_1}}\right)
-\tau\left(\rho^h_{A_1|A_{j_2}}\right)\nonumber\\
=&0,
\label{phi123zero}
\end{align}
for each $h$.
Eq.~(\ref{phi123zero}) implies that every three-qubit pure state arising in any pure state ensemble of $\rho_{A_1A_{j_1}A_{j_2}}$ in Eq.~(\ref{decomp123})
has zero three tangle. Thus, from the definition of the tangle for multi-qubit mixed states in Eq.~(\ref{ntanglemix}), we have
\begin{align}
\tau\left(\rho_{A_1|A_{j_1}|A_{j_2}}\right)=&\bigg[\min_{\{p_h, \ket{\phi_h}\}}\sum_h p_h \sqrt{\tau\left(\ket{\phi_h}_{A_1|A_{j_1}|A_{j_2}}\right)}\bigg]^2\nonumber\\
=&0,
\label{taurho123}
\end{align}
for any three-qubit reduced density matrix $\rho_{A_1A_{j_1}A_{j_2}}$ of $\ket{\psi}_{A_1 A_2 ... A_n}$.
We now assume the induction hypothesis for Eq.~(\ref{ktanglezero}): for any $(m-1)$-qubit reduced density matrix
$\rho_{A_1A_{j_1}A_{j_2}\cdots A_{j_{m-2}}}$ of the generalized W-class state in Eq.~(\ref{supWV}), we assume that its $(m-1)$ tangle is zero,
\begin{align}
\tau\left(\rho_{A_1|A_{j_1}|A_{j_2}|\cdots|A_{j_{m-2}}}\right)=0,
\label{induct}
\end{align}
and show that Eq.~(\ref{ktanglezero}) then holds for $m$, for any $m\leq n-1$.
For any index vector $\vec{j}=(j_1, j_2, \ldots, j_{m-1})$ with $\{j_1,~j_2, \ldots, j_{m-1}\}\subseteq\{2,3,\cdots,n\}$,
Lemma~\ref{reduced} assures that the $m$-qubit reduced density matrix of $\ket{\psi}_{A_1 A_2 ... A_n}$ on subsystems $A_1A_{j_1}\cdots A_{j_{m-1}}$ is
\begin{align}
\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}=&\ket{\tilde{x}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{\tilde{x}}\nonumber\\
&+\ket{\tilde{y}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{\tilde{y}},
\label{mrhom}
\end{align}
where
$\ket{\tilde{x}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ and $\ket{\tilde{y}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ are the $m$-qubit unnormalized states in Eq.~(\ref{xym}).
By HJW theorem in Proposition~\ref{HJWthm}, we note that any pure state decomposition
\begin{equation}
\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}=\sum_{h=1}^{r}|\tilde{\phi_h}\rangle_{A_1A_{j_1}\cdots A_{j_{m-1}}} \langle\tilde{\phi_h}|,
\label{decompn}
\end{equation}
is related to the decomposition in Eq.~(\ref{mrhom}) by an
$r\times r$ unitary matrix $(u_{hl})$ such that
\begin{align}
|\tilde{\phi_h}\rangle_{A_1A_{j_1}\cdots A_{j_{m-1}}}=&u_{h1}\ket{\tilde{x}}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\nonumber\\
&+u_{h2}\ket{\tilde{y}}_{A_1A_{j_1}\cdots A_{j_{m-1}}},
\label{phihm}
\end{align}
for each $h$. Furthermore, the normalized state $\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}=|\tilde{\phi}_h\rangle_{A_1A_{j_1}\cdots A_{j_{m-1}}}/\sqrt{p_h}$
with $ p_h =|\langle\tilde{\phi}_h|\tilde{\phi}_h\rangle|$ is a superposition of an $m$-qubit generalized W-class state and the vacuum, which is again a generalized
W-class state.
From the definition of pure state tangle in Eq.~(\ref{eq:ntanglepure}), the $m$ tangle of each $m$-qubit pure state
$\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ is
\begin{widetext}
\begin{align}
\tau\left(\ket{\phi_h}_{A_1|A_{j_1}|\cdots|A_{j_{m-1}}}\right)
=&\tau\left(\ket{\phi_h}_{A_1|A_{j_1}\cdots A_{j_{m-1}}}\right)-\sum_{k=2}^{m-1} \sum_{\vec{i}^k}\tau\left(\rho^h_{A_1|A_{i_1}|\cdots |A_{i_{k-1}}}\right)^{k/2},
\label{mtanglepure1}
\end{align}
\end{widetext}
where $\rho^h_{A_1A_{i_1}\cdots A_{i_{k-1}}}$ is the reduced density matrix of $\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ on $k$-qubit subsystems
${A_1A_{i_1}\cdots A_{i_{k-1}}}$ with the index vector $\vec{i}^k=(i_1, i_2, \cdots, i_{k-1})$ for $\{i_1,~i_2, \cdots, i_{k-1}\} \subseteq \{j_1, j_2,\cdots,j_{m-1}\}$.
Let us further divide the last sum in Eq.~(\ref{mtanglepure1}) into the two-tangle terms and the rest:
\begin{widetext}
\begin{align}
\tau\left(\ket{\phi_h}_{A_1|A_{j_1}|\cdots|A_{j_{m-1}}}\right)
=&\tau\left(\ket{\phi_h}_{A_1|A_{j_1}\cdots A_{j_{m-1}}}\right)-\sum_{l=1}^{m-1} \tau\left(\rho^h_{A_1|A_{j_l}} \right) -\sum_{k=3}^{m-1} \sum_{\vec{i}^k}\tau\left(\rho^h_{A_1|A_{i_1}|\cdots |A_{i_{k-1}}}\right)^{k/2}.
\label{mtanglepure2}
\end{align}
\end{widetext}
For each $k=3, \cdots ,m-1$, $\rho^h_{A_1A_{i_1}\cdots A_{i_{k-1}}}$ in the last summation of Eq.~(\ref{mtanglepure2}) is a $k$-qubit reduced density matrix of
the generalized W-class state $\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$; therefore the induction hypothesis assures that its $k$ tangle is zero:
\begin{equation}
\tau\left(\rho^h_{A_1|A_{i_1}|\cdots |A_{i_{k-1}}}\right)=0,
\label{ktau0}
\end{equation}
for each $k=3,\cdots ,m-1$ and index vector $\vec{i}^k=(i_1, i_2, \cdots, i_{k-1})$.
Furthermore, Lemma~\ref{Lem: satWV} implies that the usual monogamy inequality in terms of one and two tangles is saturated by
$\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$;
\begin{align}
\tau\left(\ket{\phi_h}_{A_1|A_{j_1}\cdots A_{j_{m-1}}}\right)=\sum_{l=1}^{m-1}\tau\left(\rho^h_{A_1|A_{j_l}} \right),
\label{satphi}
\end{align}
for each $h$.
Eq.~(\ref{ktau0}) together with Eq.~(\ref{satphi}) imply that
\begin{align}
\tau\left(\ket{\phi_h}_{A_1|A_{j_1}|\cdots|A_{j_{m-1}}}\right)=0
\label{tphi0}
\end{align}
for each $\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ that arises in the decomposition of $\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}$,
\begin{align}
\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}=&\sum_{h=1}^{r}|\tilde{\phi_h}\rangle_{A_1A_{j_1}\cdots A_{j_{m-1}}} \langle\tilde{\phi_h}|\nonumber\\
=&\sum_{h=1}^{r}p_h\ket{\phi_h}_{A_1A_{j_1}\cdots A_{j_{m-1}}}\bra{\phi_h}.
\label{decompn2}
\end{align}
Thus, from the definition of the tangle for multi-qubit mixed states in Eq.~(\ref{ntanglemix}), we have
\begin{widetext}
\begin{align}
\tau\left(\rho_{A_1|A_{j_1}|\cdots|A_{j_{m-1}}} \right)=&\bigg[\min_{\{p_h, \ket{\phi_h}\}}\sum_h p_h \sqrt{\tau\left(\ket{\phi_h}_{A_1|A_{j_1}|\cdots|A_{j_{m-1}}}\right)}\bigg]^2
=0,
\label{taurhom}
\end{align}
\end{widetext}
for any $m$-qubit reduced density matrix $\rho_{A_1A_{j_1}\cdots A_{j_{m-1}}}$ of $\ket{\psi}_{A_1 A_2 \cdots A_n}$ with $3\leq m \leq n-1$.
Combining Eq.~(\ref{taurhom}) with Lemma~\ref{Lem: satWV}, we obtain the following theorem, which shows that the multi-qubit SM inequality is saturated
by generalized W-class states.
\begin{Thm}
The strong monogamy inequality of multi-qubit entanglement is saturated by the generalized W-class states;
\begin{align}
\tau\left(\ket{\psi}_{A_1|A_2\cdots A_n}\right)=\sum_{m=2}^{n-1} \sum_{\vec{j}^m}\tau\left(\rho_{A_1|A_{j^m_1}|\cdots |A_{j^m_{m-1}}}\right)^{m/2},
\label{eq:SMsat}
\end{align}
for any multi-qubit generalized W-class state in Eq.~(\ref{supWV}).
\label{thm: smono}
\end{Thm}
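As an illustrative numerical check (outside the formal proof), the saturation in Eq.~(\ref{eq:SMsat}) can be verified for the simplest member of the class, the three-qubit W state $\ket{W}=(\ket{100}+\ket{010}+\ket{001})/\sqrt{3}$: the residual three-tangle vanishes, so the one-tangle must equal the sum of the squared pairwise concurrences. A minimal sketch using the Wootters concurrence:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |W> = (|100> + |010> + |001>)/sqrt(3); basis index 4*i1 + 2*i2 + i3
psi = np.zeros(8)
psi[[4, 2, 1]] = 1/np.sqrt(3)
R = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)

rho_A  = np.einsum('abcdbc->ad', R)                    # trace out qubits 2 and 3
tau_1  = 4*np.linalg.det(rho_A).real                   # one-tangle tau(A1|A2A3)
rho_AB = np.einsum('abcdec->abde', R).reshape(4, 4)    # trace out qubit 3
rho_AC = np.einsum('abcdbe->acde', R).reshape(4, 4)    # trace out qubit 2
tau_2  = concurrence(rho_AB)**2 + concurrence(rho_AC)**2
print(tau_1, tau_2)   # both equal 8/9
```

Both quantities evaluate to $8/9$, so the monogamy relation is saturated with the $m=3$ term contributing zero.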
\section{Conclusions}\label{Sec: Conclusion}
We have considered a large class of multi-qubit generalized W-class states,
and provided strong evidence for the SM inequality of multi-qubit entanglement.
Although an analytical proof of the SM inequality for arbitrary multi-qubit states seems to be a formidable challenge,
because of the numerous optimization processes arising in the recursive definition of the $n$-tangle, we have resolved
this problem for W-class states by exploiting their structural properties, and analytically shown that the strong monogamy inequality is saturated
by this class of states.
Our result characterizes the strongly monogamous nature of arbitrary multi-qubit W-class states.
Given the importance of the study of multipartite
entanglement, our result can provide a useful reference for future
work on entanglement in complex quantum systems.
\section*{Acknowledgments}
The author would like to thank G. Adesso, S. Lee, S. D. Martino and B. Regula for helpful discussions.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF),
funded by the Ministry of Education, Science and Technology (2012R1A1A1012246).
\end{document}
\begin{document}
\begin{abstract}
We prove that non-negative solutions to the fully anisotropic equation
\begin{equation*}
\partial_t u= \sum_{i=1}^N \partial_i (|\partial_i u|^{p_i-2} \partial_i u), \quad \qquad \text{in} \, \, {\mathbb R}^N\times (-\infty, T),
\end{equation*} \noindent are constant if they satisfy a condition of finite speed of propagation, are one-sided bounded, and are bounded in ${\mathbb R}^N$ at a single time level. A similar statement is valid when the bound is given at a single space point. As a general paradigm, local H\"older estimates provide the basis for rigidity. Finally, we show that recent intrinsic Harnack estimates can be improved to a Harnack inequality valid for non-intrinsic times. Locally, the two are equivalent.
\noindent
{\bf{MSC 2020:}} 35B53, 35K65, 35K92, 35B65.
\noindent
{\bf{Key Words}}: Anisotropic $p$-Laplacian, Liouville Theorem, Harnack estimates, H\"older continuity.\newline
\end{abstract}
\maketitle
\begin{center}
\begin{minipage}{9cm}
\small
\tableofcontents
\end{minipage}
\end{center}
\section{Introduction to the problem}
\noindent
Consider $u(x,t)$ as a function describing the temperature at time $t$ of a point $x$ in an infinite isolated rod, so that it is a solution of the heat equation. As usual, it is assumed that heat has spread from hotter zones to colder ones. Now, if one considers a non-negative solution in ${\mathbb R}^N \times (-\infty,0)$, the diffusive process has already gone on for an infinite amount of time, and it is reasonable to ask whether $u(x,t)$ has become constant. This fact, stated in this way, is generally false, as shown by the following examples:
\begin{equation}\label{examples}
u_1(x,t)= e^{x_N+t}, \quad u_2(x,t)=e^{-t} \sin(x_1), \quad x \in {\mathbb R}^N.
\end{equation}
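Both counterexamples can be checked symbolically; a minimal sketch (with $N=3$ chosen for illustration) verifying that $u_1$ and $u_2$ solve the heat equation:

```python
import sympy as sp

x = sp.symbols('x1:4')            # x1, x2, x3: N = 3 chosen for illustration
t = sp.Symbol('t')

def heat_residual(u):
    """Residual of the heat equation: du/dt - Laplacian(u)."""
    return sp.simplify(sp.diff(u, t) - sum(sp.diff(u, xi, 2) for xi in x))

u1 = sp.exp(x[2] + t)             # e^{x_N + t}
u2 = sp.exp(-t)*sp.sin(x[0])      # e^{-t} sin(x_1)
print(heat_residual(u1), heat_residual(u2))   # 0 0
```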
The two functions above are {\it eternal} solutions of the heat equation, i.e. solutions in ${\mathbb R}^N \times {\mathbb R}$. We call {\it ancient} solutions those solutions that solve the parabolic equation in $ {\mathbb R}^N \times (-\infty, T)$ for some time $T \in {\mathbb R}$. In line with the literature, we call {\it Liouville property} any rigidity condition that ensures the triviality of solutions. It is clear from $u_1$ that a sign condition is not enough to confirm our suspicion, while the sign-changing solution $u_2$ shows that boundedness at a fixed time is not enough. Although Appel \cite{Appel} already proved in 1892 that an ancient solution to the heat equation which is two-sided bounded (as for instance $0 \leqslant u(x,t) \leqslant M$) is constant, the first optimal parabolic Liouville theorem for ancient solutions was found in 1952 by Hirschman (see \cite{Hirschman}, Bear \cite{Bear} and Widder \cite{Widder}, \cite{WidderBook} for the case $N=1$), stating that a non-negative ancient solution to the heat equation is constant if one adds the assumption that, for a time $t_o<T$,
\begin{equation}\label{log}
\lim_{r \uparrow + \infty} \frac{\log(\sup_{|x|<r}u(x,t_o))}{r} \leqslant 0, \qquad \text{that is,} \qquad u(x,t_o) \leqslant e^{o(|x|)} \quad \text{as} \quad |x|\rightarrow +\infty.
\end{equation} This result was sharp in the sense that any function of the kind
\begin{equation*}\label{count-Hirsch}
u(x,t)= e^{a^2t} \cosh(ax)
\end{equation*} shows that if \eqref{log} above is violated then $u(x,t)$ is not necessarily constant; but \eqref{log} is just a condition on the space variables for a fixed time. Sub-exponential optimal growth conditions have been generalized to different metric contexts; see for instance \cite{Sunra} and references therein. Not much later, in 1958, Friedman gave a condition on the behavior of non-negative ancient solutions to more general second-order parabolic equations as
\begin{equation}\label{seond-order}
\partial_t u(x,t) = \sum_{i,j=1}^N a_{i,j}\frac{\partial^2 u(x,t)}{\partial x_i \partial x_j} + \sum_{i=1}^N b_i \frac{\partial u(x,t)}{\partial x_i} +c u(x,t), \quad \text{in} \quad {\mathbb R}^N \times {\mathbb R}_+,
\end{equation} where $b_i,c$ are real numbers and $\{a_{i,j}\}_{i,j}$ is a positive matrix. Now the assumption concerns infinite past times as
\begin{equation} \label{fried}
\lim_{t\downarrow - \infty} \frac{\log(u(0,t))}{t}= c+ \gamma, \qquad \gamma>0,\quad c+\gamma \geqslant 0. \end{equation} \noindent See \cite{Friedman} for the result and \cite{Eidelman} for the earlier case of systems. Furthermore, conditions guaranteeing the stabilization of the solution to a constant were studied for a fixed space variable (see \cite{Eidelman-Kamin-Tedeev} and its references for an account). This short preamble is just to highlight that different assumptions, mainly on the second bound, may be required of solutions of these parabolic equations in order to ensure the Liouville property; it is therefore an incomplete list. The literature on these rigidity results is wide, so we refer the reader to the book \cite{QS-libro} and the survey \cite{Kogoj} for a more complete account.\vskip0.1cm
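The sharpness of Hirschman's condition can also be checked directly. A short sketch verifying that $u(x,t)=e^{a^2t}\cosh(ax)$ is an eternal solution of the one-dimensional heat equation whose growth rate $\log(\sup_{|x|<r}u(x,t_o))/r$ tends to $a>0$ (evaluated numerically for the illustrative choice $a=2$), so that \eqref{log} fails:

```python
import math
import sympy as sp

x, t = sp.symbols('x t', real=True)
a = sp.Symbol('a', positive=True)

u = sp.exp(a**2*t)*sp.cosh(a*x)
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))   # 0: eternal solution

# growth rate at t_o = 0 for a = 2: log(sup_{|x|<r} u)/r = log(cosh(2r))/r -> 2
rates = [math.log(math.cosh(2*R))/R for R in (10, 50, 200)]
print(residual, rates)
```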
\noindent
The heat equation can be regarded as a special case of the anisotropic $p$-Laplacian equation
\noindent \begin{equation} \label{EQ}
\partial_t u= \sum_{i=1}^N \partial_i (|\partial_i u|^{p_i-2} \partial_i u),
\end{equation} \noindent
when $p_i\equiv 2$ for all $i=1,\dots,N$. When $2<p_i< \bar{p}(1+1/N)$, this equation describes the effect of competing diffusions along the coordinate axes with finite speed of propagation (see \cite{Ant-Sh}, \cite{Mosconi}, \cite{Ruzicka} for an introduction to the parabolic problem). In one spatial dimension, equation \eqref{EQ} reduces to the one-dimensional $p$-Laplacian, which exhibits a very interesting change of behavior from the degenerate case ($p>2$) to the singular one ($1<p<2$). Roughly speaking, the solutions to singular $p$-Laplacian equations behave more like solutions to elliptic equations, and only one bound is enough to infer that they are constant (see \cite{DGV-Liouville} for more details). On the other hand, in the degenerate case two bounds are required to infer a Liouville property; see Section \ref{LiouvilleSection} below for a counterexample. In this paper we show some Liouville properties for non-negative solutions to \eqref{EQ} for a range of $p_i$s which is degenerate and allows a finite speed of propagation. In many physical circumstances, this is a more reasonable assumption than the sudden infinite expansion of the support of solutions to the heat equation.\vskip0.2cm
\noindent The theory of regularity for solutions to \eqref{EQ}, even if much investigated, is still incomplete and fragmented (see, e.g., \cite{MingRadu} and \cite{Mar-survey} for an account of the elliptic case). The Liouville properties that we are about to describe are entailed by recent Harnack estimates, obtained with an approach of expansion of positivity. This has been shown relying on the behavior of abstract fundamental solutions in \cite{Ciani-Mosconi-Vespri}. Here we start from the aforementioned Harnack inequality (see Section \ref{Preliminaries}), which is formulated in an intrinsic geometry (see section Notations below) reflecting the natural scaling of the equation, and study some rigidity connections between the local and global behavior of solutions.\newline
Similarly to the Liouville property inferred by Hirschman, we will prove that it is sufficient to have a one-sided bound (say, from below) and an estimate from the other side (say, from above), just for a fixed time. If these conditions are met, solutions are forced to be constant (Theorem \ref{Liouville1}). This clearly implies that a solution that is bounded both from above and below is constant; on the other hand, it is unreasonable to expect that just a one-sided bound suffices (see the example in Section \ref{LiouvilleSection}) for our range of $p_i$s. As a known fact, we comment that a precise decay of the oscillation given by H\"older continuity estimates is enough as a Liouville property; see Theorem \ref{Cutilisci}. This decay is usually easier to show than a complete Harnack inequality, and as such deserves its own attention: already in the range $\bar{p}(1+1/N)\leqslant p_i<\bar{p}(1+2/N)$, although continuity is expected by the regularizing properties of diffusion, no Harnack estimate may be available, because the competition among the diffusions is too strong (see for instance \cite{AS}). It would be of interest to compare this behavior to the one in porous materials by the sole control on the oscillation, for example as in \cite{Eurica}.\newline
On a similar track to Friedman's result, we prove that, for any fixed spatial point, a solution that is bounded at infinite past times is again forced to be constant (Theorem \ref{Liouville2}). Finally we state a Harnack inequality that frees the time variable from being intrinsic (Theorem \ref{WeakHarnack}, see \cite{DB} for the isotropic counterpart): since the Harnack estimate is no longer intrinsic in time, it is more suitable for an application to rigidity. Moreover, this turns out to be useful to determine the optimal growth of the initial data as $|x|\rightarrow \infty $ for the solvability of the Cauchy problem for \eqref{EQ} (see for instance \cite{DB-He}). Clearly, this implies that the domain where the equation is solved must be, in turn,
`compatible' with the anisotropy of the diffusion: this is certainly the case for ancient solutions.
\subsection*{Structure of the paper}
Section \ref{Preliminaries} is devoted to set up the functional framework and to recall some known properties of the solutions, as the existence of fundamental solutions, comparison principles, and the Harnack inequality.
Section \ref{Appendix} is concerned with the study of H\"older continuity of solutions. In Section \ref{LiouvilleSection} we prove the Liouville-type results, while Section \ref{WHSection} concerns an alternative formulation of the Harnack inequality, which turns out to be locally equivalent to the known one.
\section*{Notations}
\begin{itemize}
\item[-] We define the following function of the $p_i$s, called their {\it harmonic mean}: $\,
\bar{p}=N(\sum_{i=1}^N1/p_i)^{-1}$.\newline We suppose that the exponents are ordered, $ 1<p_1\leqslant p_2 \leqslant \ldots \leqslant p_N$, and that $\bar{p}<N$.
\vskip0.1cm \noindent
\item[-] For any $\rho, \theta>0$ and $x\in{\mathbb R}^N$, we denote by $K_{\rho}(x) \subset {\mathbb R}^N$ the cube of side $2\rho$ centered at $x$.
\noindent Let $x_o+\mathcal{K}_{\rho}(\theta)$ stand for the anisotropic cube of radius $\rho$, ``magnitude'' $\theta$, and center $x_o$, i.e.,
\begin{equation}\label{anisocubi}
x_o+\mathcal{K}_{\rho}(\theta)= \prod_{i=1}^N\bigg{\{}|x-x_{o,i}|<\theta^{{(p_i-\bar{p})}/{p_i}}\rho^{{\bar{p}}/{p_i}}\bigg{\}}.
\end{equation}
If either $\theta=\rho$ or $p_i=p$ for all $i=1,\ldots,N$, then $x_o+\mathcal{K}_{\rho}(\theta)=K_{\rho}(x_o)$.
\item[-] For any $\rho, \theta,C >0$ and $(x_o,t_o) \in {\mathbb R}^{N+1}$, we consider the following anisotropic cylinders:
\begin{equation*}\label{cylinders}
\begin{cases}
\text{centered: }(x_o,t_o)+\mathcal{Q}_{\rho}(\theta,C)=
(x_o+\mathcal{K}_{\rho}(\theta) )\times (t_o-\theta^{2-\bar{p}}(C\rho)^{\bar{p}},t_o+\theta^{2-\bar{p}}(C\rho)^{\bar{p}});\\
\text{forward: }(x_o,t_o)+\mathcal{Q}^+_{\rho}(\theta,C)= (x_o+\mathcal{K}_{\rho}(\theta) )\times [t_o,t_o+\theta^{2-\bar{p}}(C\rho)^{\bar{p}});\\
\text{backward: }(x_o,t_o)+\mathcal{Q}^-_{\rho}(\theta,C)=
(x_o+\mathcal{K}_{\rho}(\theta) )\times (t_o-\theta^{2-\bar{p}}(C\rho)^{\bar{p}},t_o].
\end{cases}
\end{equation*} \noindent We omit the index $C$ when the constant is clear from the context. \vskip0.1cm \noindent
\item[-] For $\Omega \subset \subset {\mathbb R}^N$, i.e., $\Omega$ an open and bounded set in ${\mathbb R}^N$, we denote by $\Omega_T= \Omega \times [-T,T]$, $T>0$, the parabolic domain, and by $S_s= {\mathbb R}^N \times (-\infty, s)$, $s\in {\mathbb R}$, the space strip.\vskip0.1cm \noindent
\item[-] We adopt the convention that the constant $\gamma>0$ may change from line to line, when depending only on fixed quantities $\{N,p_i\}$.
\end{itemize}
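The geometry of the anisotropic cubes in \eqref{anisocubi} can be illustrated numerically; a minimal sketch (with sample exponents chosen for illustration) checking that the cube reduces to the ordinary one when $\theta=\rho$ or when all the $p_i$ coincide:

```python
import numpy as np

def pbar(p):
    """Harmonic mean of the exponents p_i."""
    p = np.asarray(p, dtype=float)
    return len(p)/np.sum(1.0/p)

def half_sides(rho, theta, p):
    """Half side-lengths of the anisotropic cube x_o + K_rho(theta)."""
    p = np.asarray(p, dtype=float)
    pb = pbar(p)
    return theta**((p - pb)/p)*rho**(pb/p)

p = [2.2, 2.5, 3.0]                 # sample ordered exponents (illustrative)
s1 = half_sides(1.5, 1.5, p)        # theta = rho: ordinary cube of side 2*rho
s2 = half_sides(1.5, 4.0, [2.7]*3)  # all p_i equal: ordinary cube again
print(s1, s2)                       # every half side equals rho = 1.5
```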
\section{Preliminaries and Tools of the Trade} \label{Preliminaries}
\noindent
We begin with the definition of solution. For $\Omega \subseteq {\mathbb R}^N$ open rectangular domain and $T>0$, we set $\Omega_T= \Omega \times [-T,T]$ and define the Banach spaces
\[W^{1,{\bf{p}}}_{loc}(\Omega):= \{ u \in W^{1,1}_{loc}(\Omega) |\, \partial_i u \in L^{p_i}_{loc}(\Omega) \}, \]
\[ L^{{\bf{p}}}_{loc}(0,T;W^{1,{\bf{p}}}_{loc}(\Omega)):= \{u \in W^{1,1}_{loc}(0,T;L^1_{loc}(\Omega))|\, \partial_i u \in L^{p_i}_{loc}(0,T;L^{p_i}_{loc}(\Omega)) \}. \]
These are usually called anisotropic spaces (see for instance \cite{Ant-Sh}). When $\bar{p}>N$ and $\partial \Omega$ is regular enough, the space $W^{1,{\bf{p}}}(\Omega)$ is embedded in the space of H\"older continuous functions \cite{VenTuan}. A function \[ u \in C_{loc}(0,T; L^2_{loc}({\mathbb R}^N)) \cap L^{\bf{p}}_{loc}(0,T;W^{1,{\bf{p}}}_{loc}({\mathbb R}^N))\] is called a {\it local weak solution} of \eqref{EQ} in $S_T$ if, for all $0<t_1<t_2<T$ and any $\varphi \in C^{\infty}_{loc}(0,T;C_o^{\infty}({\mathbb R}^N))$,
\begin{equation} \label{anisotropic-localweaksolution}
\int_{{\mathbb R}^N} u \varphi \, dx \bigg|_{t_1}^{t_2}+ \int_{t_1}^{t_2} \int_{{\mathbb R}^N} (-u \, \varphi_t + \sum_{i=1}^N \, |\partial_i u|^{p_i-2} \partial_i u \, \partial_i \varphi) \, dx dt=0.
\end{equation} \noindent Similarly, when considering $\Omega\subset {\mathbb R}^N$ bounded set, by a {\it local weak solution} to \eqref{EQ} in $\Omega_T$ we mean a function $u \in C_{loc}(0,T; L^2_{loc}(\Omega)) \cap L^{\bf{p}}_{loc}(0,T;W^{1,{\bf{p}}}_{loc}(\Omega))$ satisfying for all compact sets $K \subset \Omega$ and for all $\varphi \in C^{\infty}_{loc}(0,T;C_o^{\infty}(K))$ the integral equality
\begin{equation} \label{localweaksolution}
\int_{K} u \varphi \, dx \bigg|_{t_1}^{t_2}+ \int_{t_1}^{t_2} \int_{K} (-u \, \varphi_t + \sum_{i=1}^N \, |\partial_i u|^{p_i-2} \partial_i u \, \partial_i \varphi) \, dx dt=0,\quad \text{for all} \quad 0<t_1<t_2<T.
\end{equation}
Now we briefly introduce the main tools for our proofs: the intrinsic Harnack inequality, the existence of an abstract Barenblatt-type solution, and a local comparison principle.\newline
Hereafter, with the only exception of Theorem \ref{Cutilisci}, we restrict our attention to the range
\begin{equation}\label{pi}
2<p_1\leqslant p_N<\bar{p}(1+1/N),\qquad \bar{p}<N,
\end{equation}
and we will refer to the constants $C_i$, $i=1,2,3$, appearing in the following theorem.
\begin{theorem} \label{Harnack-Inequality}
Let $u\geqslant 0$ be a local weak solution to \eqref{EQ} in $\Omega_T$ and let \eqref{pi} be valid. Suppose that $u(x_o,t_o)>0$ for a Lebesgue point $(x_o,t_o) \in \Omega_T$ for $u$. Then there exist $C_{1}\geqslant 0, C_3\geqslant C_2\geqslant 1$, depending only on $N$ and the $p_{i}$s, such that, letting $\theta=u(x_o,t_o)/C_1$, it holds
\begin{equation}\label{Harnack}
\frac{1}{C_{3}}\sup_{x_o+\mathcal{K}_{\rho}(\theta)}u(\,\cdot\, , t_o - \theta^{2-\bar p}\, (C_{2}\, \rho)^{\bar p} )\leqslant u(x_o,t_o) \leqslant C_{3} \inf_{x_o+\mathcal{K}_{\rho}(\theta)} u(\,\cdot\, , t_o + \theta^{2-\bar{p}}\, (C_{2}\, \rho)^{\bar{p}})
\end{equation}
with $\mathcal{K}_{\rho}(\theta)$ defined as in \eqref{anisocubi}, whenever $\rho, \theta>0$ satisfy
\begin{equation} \label{side-condition}
\theta^{2-\bar p}\, (C_{3}\, \rho)^{\bar p}<T-|t_o| \qquad \text{and} \qquad x_o+\mathcal{K}_{C_{3}\, \rho}(\theta)\subseteq \Omega.\end{equation}
\end{theorem}
\noindent The assumption $u(x_o,t_o)>0$ is understood by a suitable limit process, as customary. Semi-continuity clarifies this definition, as long as a theoretical maximum principle is in force (see \cite{CianiGuarnotta}, \cite{Mosconi}, \cite{Liao} for an account).
Theorem \ref{Harnack-Inequality} has been proved in \cite{Ciani-Mosconi-Vespri} without the assumption of H\"older continuity of solutions, which can be shown (see Section \ref{Appendix}) to be a sole consequence of \eqref{Harnack}. This important property has been addressed several times in the past, with imprecise proofs or an unclear geometric setting. For this reason, and in order to explain the main adversities that anisotropic diffusion obliges us to face, we include in Section \ref{Appendix} a proof of local H\"older continuity of solutions to \eqref{EQ}, which follows Moser's ideas \cite{Moser} through an appropriate anisotropic intrinsic geometry. Taking their continuity for granted, in what follows we will refer directly to the point-wise values of solutions.\vskip0.1cm \noindent Let us comment on Theorem \ref{Harnack-Inequality} from a global point of view: if we pick a point $(x_o,t_o) \in \Omega_T$ where $u$ is positive, it is possible to `detect' the sets where the pointwise controls \eqref{Harnack} hold true. This is the core of the next proposition.
\begin{proposition}\label{paraboloids}
Suppose the assumptions of Theorem \ref{Harnack-Inequality} are satisfied for $(x_o,t_o) \in \Omega_T$. Then
\begin{equation} \label{estimate-paraboloid}
\inf_{\mathcal{P}_\theta^+(x_o,t_o)} u \geqslant u(x_o,t_o)/C_3 \qquad \text{and}\qquad \sup_{\mathcal{P}_\theta^-(x_o,t_o)}u \leqslant C_3 u(x_o,t_o),\end{equation}
where, setting $\theta= u(x_o,t_o)/C_1$, the paraboloids $\mathcal{P}^+_{\theta}(x_o,t_o)$ and $\mathcal{P}^-_{\theta}(x_o,t_o)$ are defined by
\[
\mathcal{P}_\theta^+(x_o,t_o)= \bigg{\{}(x,t) \in \Omega_T:\, \, C_2^{\bar{p}} |x_i-x_{o,i}|^{p_i}\theta^{2-p_i}\leqslant (t-t_o)\leqslant C_2^{\bar{p}}\varrho^{\bar{p}}\theta^{2-\bar{p}}, \, \, \forall i=1,\dots,N \bigg{\}},
\]
\[
\mathcal{P}_\theta^-(x_o,t_o)= \bigg{\{}(x,t) \in \Omega_T:\,\, -C_2^{\bar{p}}\varrho^{\bar{p}}\theta^{2-\bar{p}}\leqslant (t-t_o) \leqslant -C_2^{\bar{p}}|x_i-x_{o,i}|^{p_i} {\theta}^{2-p_i}, \, \, \forall i=1,\dots,N \bigg{\}},
\]
\noindent with $\varrho$ depending on $u$, $\Omega_T$, and $(x_o,t_o)$ according to the following expression:
\begin{equation}\label{rho+}
\varrho^{\bar{p}}= C_3^{-\bar{p}} \bigg( \frac{u(x_o,t_o)}{C_1} \bigg)^{\bar{p}-2} \min_{i=1,\dots,N} \bigg{\{}(T-|t_o|), \, \bigg( \frac{\mathrm{dist}(x_o, \partial \Omega)}{2}\bigg)^{p_i} \bigg( \frac{u(x_o,t_o)}{C_1} \bigg)^{2-p_i} \bigg{\}}.
\end{equation}
\end{proposition}
\noindent It is remarkable that estimate \eqref{Harnack} is prescribed on a {\it{space}} configuration depending on the solution, in contrast to what happens with $p$-Laplacian type equations. This is due to the natural scaling of the equation (see \cite{CianiGuarnotta}), because the expansion of positivity of solutions is readily checked via comparison with the following family of Barenblatt-type solutions.
\begin{theorem}
\label{Barenblatt}
Set $\lambda=N(\bar{p}-2)+\bar{p}$ and let \eqref{pi} be satisfied. For each $\sigma >0$ there exists $\tilde{\eta}>0$ and a local weak solution $\mathcal{B}_{\sigma}(x, t)$ to \eqref{EQ}
with the following properties, valid for any $t\in(0,T)$:
\begin{enumerate}
\item $\displaystyle{\|\mathcal{B}_{\sigma}(\cdot, t)\|_{\infty}=\sigma \, t^{-\alpha}}$,
\item
$\displaystyle{{\rm supp}(\mathcal{B}_{\sigma}(\cdot, t))\subseteq \prod_{i=1}^N \big{\{} |x_i|\leqslant \sigma^{(p_i-2)/p_i}\, t^{\alpha_i} \big{\}}}$, $\qquad \qquad$ $\alpha=N/\lambda$, $\alpha_i=(1+2\alpha)/p_i-\alpha$,
\item
$\displaystyle{\{\mathcal{B}_{\sigma}(\cdot, t)\geqslant \eta\, \sigma \, t^{-\alpha}\}\supseteq \prod_{i=1}^N \big{\{} |x_i|\leqslant \eta\, \sigma^{(p_{i}-2)/p_{i}}\, t^{\alpha_i} \big{\}}=:\mathcal{P}_t}$.
\end{enumerate}
\end{theorem}
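The exponents in Theorem \ref{Barenblatt} encode conservation of mass: the amplitude decays like $t^{-\alpha}$ while the support volume grows like $\prod_i t^{\alpha_i}$, and indeed $\sum_i \alpha_i=\alpha$. A quick numerical check (with sample exponents satisfying \eqref{pi}, chosen for illustration):

```python
import numpy as np

p = np.array([2.1, 2.2, 2.3, 2.4])   # sample exponents satisfying (pi) for N = 4
N = len(p)
pbar = N/np.sum(1.0/p)               # harmonic mean of the p_i
lam = N*(pbar - 2) + pbar            # lambda
alpha = N/lam
alpha_i = (1 + 2*alpha)/p - alpha

# amplitude ~ t^{-alpha}, support volume ~ prod_i t^{alpha_i}:
# the mass ~ t^{sum(alpha_i) - alpha} is constant in time
print(np.sum(alpha_i), alpha)
```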
\noindent The existence of a Barenblatt fundamental solution $\mathcal{B}$ is a consequence of the finite speed of propagation of solutions to \eqref{EQ} combined with a particular correspondence between the Cauchy problems associated to \eqref{EQ} and to an anisotropic Fokker-Planck equation. On the other hand, the properties of $\mathcal{B}$ stated above stem from comparison techniques and the invariance of the equation \eqref{EQ} under scaling, which entitles $\mathcal{B}$ to be a self-similar solution. We refer to \cite{Ciani-Mosconi-Vespri} for the proofs of these facts and the following proposition; see also \cite{CSV}, \cite{Vazquez} for the singular case.
\begin{proposition} \label{local-comparison}
Let $\Omega \subset {\mathbb R}^N$ be a bounded open set and $u,v$ be local weak solutions to the equation \eqref{EQ} in $\Omega_T$. Let $\tilde{\Omega} \subset \Omega$ and $0<\tilde{T}<T$. If $u,v$ satisfy $u(x,t) \geqslant v(x,t)$ on the parabolic boundary of $\tilde{\Omega}_{\tilde{T}}$, then $u \geqslant v$ in $\tilde{\Omega}_{\tilde{T}}$.
\end{proposition} \noindent The point-wise boundary inequality assumed in Proposition \ref{local-comparison} will be used in the proof of Theorem \ref{WeakHarnack} locally, and as such, it has a well-defined meaning thanks to the results of the next section.
\section{H\"older Continuity of solutions}
\label{Appendix}
\begin{theorem}\label{HC}
Under condition \eqref{pi}, any local weak solution $u$ to \eqref{EQ} is locally H\"older continuous. More precisely, there exist $\gamma>1$ and $\chi \in (0,1)$, depending only upon $p_i,N$, with the following property: for each compact set $K \subset \subset \Omega_T$ there exist a set $\Lambda$ and $\omega_o=\omega_o(K, \|u\|_{\infty,K})$ such that $K \subset \Lambda \subseteq \Omega_T$ and, for every $(x,t)$, $(y,s) \in K$,
\begin{equation} \label{HContinuity}
|u(x,t)-u(y,s)| \leqslant \gamma \omega_o \bigg(\frac{\sum_{i=1}^N |x_i-y_i|^{{p_i}/{\bar{p}}}\omega_o^{{(\bar{p}-p_i)}/{\bar{p}}}+ |t-s|^{1/{\bar{p}}}\omega_o^{{(\bar{p}-2)}/{\bar{p}}}}{{\bf{p}}\text{-dist}(K,\partial \Lambda) } \bigg)^{\chi},
\end{equation}\noindent with
\begin{equation} \label{pi-dist}
\begin{aligned}
&{\bf{p}}\text{-dist}(K,\partial \Lambda):=\inf \{ {\bf{p}}_x, {\bf{p}}_t \}, \quad \text{being}\\
& {\bf{p}}_x=\inf \bigg{\{}
|x_i-y_i|^{{p_i}/{\bar{p}}}(\omega_o/C_1)^{{(\bar{p}-p_i)}/{\bar{p}}}\, : \, (x,t) \in K, (y,s) \in \partial \Lambda,\, i=1,\dots,N\bigg{\}},\\
& {\bf{p}}_t=\inf \bigg{\{}
|t-s|^{{1}/{\bar{p}}}(\omega_o/C_1)^{{(\bar{p}-2)}/{\bar{p}}}\, : \, (x,t) \in K, (y,s) \in \partial \Lambda\bigg{\}}.
\end{aligned}
\end{equation} \noindent Furthermore, if $u$ is bounded in $\Omega_T$ then \eqref{HContinuity} holds with $\Lambda= \Omega_T$.
\end{theorem} \noindent We prove Theorem \ref{HC} in four steps, without assuming that $u$ is globally bounded.
\begin{proof}
Let us fix a compact set $K \subset \subset \Omega_T$ and two points $(y,s), (x,t) \in K$. \vskip0.2cm
\noindent {\small{STEP 1-{\it A global bound for the solution in $K$.}}}
\vskip0.2cm \noindent Let $\bar{p}_2= \bar{p}(1+2/N)$ and for $k>0$ we define the increasing functions $g(k)=\sum_{i=1}^{N}k^{ p_{i}-2}$ and $h(k)=\left(\sum_{i=1}^{N}k^{p_{i}-\bar p_{2}}\right)^{-1}$. We use the estimates in \cite[Lemma 4.2]{Mosconi}: under condition \eqref{pi}, there exists $\tilde{\gamma}>0$ such that solutions to \eqref{EQ} satisfy
\begin{equation}
\label{supest}
\|u_{+}\|_{L^{\infty}(Q_{\lambda/2, M})}\leqslant g^{-1}(1/M)+ h^{-1}\left(\tilde{\gamma}\Big(M\, \dashiint_{Q_{\lambda, M}} u_{+}^{\bar p_{2}}\, dxdt\Big)^{{\bar p}/{(N+\bar p)}}\right),
\end{equation} in the (non-intrinsic) anisotropic cylinders
\begin{equation} \label{anisocylinder}
Q_{\lambda, M}= \prod_{i=1}^{N}\left[-\lambda^{{1}/{p_{i}}}, \lambda^{{1}/{p_{i}}}\right]\times [-M\, \lambda, 0],\quad \quad M, \lambda>0.
\end{equation}
\vskip0.2cm \noindent By compactness of $K$, we find $(x_i,t_i) \in K$ and $\lambda_i, M_i \in \mathbb{R}_+$, $i=1,\dots,m$, for $m\in{\mathbb N}$, such that
\begin{equation*}
K \subset \Lambda:=\bigcup_{j=1}^m \{(x_j,t_j)+Q_{\lambda_j,M_j}\}\subseteq \bigcup_{j=1}^m \{(x_j,t_j)+Q_{2\lambda_j,M_j} \}\subseteq \Omega_T,
\end{equation*} \noindent being $Q_{\lambda,M}$ as in \eqref{anisocylinder}.
\noindent According to \eqref{supest}, for each anisotropic cylinder $\hat{Q}_{\lambda_j,M_j}=(x_j,t_j)+Q_{\lambda_j,M_j}$, $j=1,\dots,m$, we deduce the estimate
\begin{equation*} \begin{aligned}\label{A}
\| u\|_{L^{\infty}(\hat{Q}_{\lambda_j, M_j})} &
\leqslant g^{-1} (1/\min_{j} M_j)+ h^{-1} \bigg( \gamma \max_{j=1,\dots,m} \bigg( M_j \dashint \dashint_{\hat{Q}_{2\lambda_j,M_j}} |u|^{\bar{p}_2}\, dxdt\bigg)^{{\bar{p}}/{(N+\bar{p})}} \bigg)=: \mathcal{I},
\end{aligned}
\end{equation*} \noindent because $h$ and $g$ are monotone increasing.
\noindent Finally, we define $\omega_o=\omega_o(K)$ as \begin{equation} \label{0}
\omega_o:= 2\mathcal{I}. \end{equation} \noindent Accordingly,
\[
K \subset \bigcup_{j=1}^m \hat{Q}_{\lambda_j, M_j}= \Lambda \qquad \mbox{and} \qquad 2 \|u\|_{L^{\infty}(\Lambda)} \leqslant \omega_o.
\]
\vskip0.2cm \noindent {\small{STEP 2-{\it Accommodation of degeneracy and alternatives.}}}
\vskip0.2cm \noindent
Recalling \eqref{pi-dist}
we define $R:= [{\bf{p}}\text{-dist}(K,\partial \Lambda)]/(2C_3)$. Now, by definition of $R$, the intrinsic cylinder centered at $(y,s)\in K$ and constructed with $R$ and $\omega_o$ is contained inside $\Lambda$, that is,
\[
(y,s)+ \mathcal{Q}_{R}(\omega_o/C_1,C_2) \subseteq \Lambda.
\]
\noindent Now consider any other point $(x,t) \in K$. We may restrict the study of the oscillation to $(y,s) + \mathcal{Q}_R^-(\omega_o/C_1,C_2)$, since elsewhere the H\"older continuity of $u$ follows directly. Indeed, if $|s-t| \geqslant (\omega_o/C_1)^{2-\bar{p}}(C_2 R)^{\bar{p}}$, we have
\[
|u(y,s)-u(x,t)|\leqslant |u(y,s)|+|u(x,t)|\leqslant \omega_o \leqslant 2C_3 \omega_o \bigg( \frac{(\omega_o/C_1)^{{(\bar{p}-2)}/{\bar{p}}}|s-t|^{{1}/{\bar{p}}}}{{\bf{p}}\text{-dist}(K,\partial \Lambda)} \bigg) \] by definition of $R$.
Similarly, if $|y_i-x_i| \geqslant (\omega_o/C_1)^{{(p_i-\bar{p})}/{p_i}} R^{{\bar{p}}/{p_i}}$ for some $i \in \{1,\dots,N\}$, the same conclusion follows from
\[
|u(y,s)-u(x,t)|\leqslant |u(y,s)|+|u(x,t)|\leqslant \omega_o \leqslant 2C_3 \omega_o \bigg( \frac{(\omega_o/C_1)^{{(\bar{p}-p_i)}/{\bar{p}}}|y_i-x_i|^{{p_i}/{\bar{p}}}}{{\bf{p}}\text{-dist}(K,\partial \Lambda)} \bigg).
\]
This technical stratagem justifies the definition \eqref{pi-dist}. Hence we can assume that
\begin{equation}\label{exclusion}
|s-t|<(\omega_o/C_1)^{2-\bar{p}} (C_2 R)^{\bar{p}} \quad \text{and} \quad |y_i-x_i|< (\omega_o/C_1)^{{(p_i-\bar{p})}/{p_i}} R^{{\bar{p}}/{p_i}} \quad \forall i=1,\dots, N,
\end{equation}
that is,
\[(x,t) \in (y,s)+ \mathcal{Q}_R^-(\omega_o/C_1,C_2). \]
We take the cylinder $\mathcal{Q}_0:=(y,s)+\mathcal{Q}_R^-(\omega_o/C_1,C_2)$ as the first element of a net $\{\mathcal{Q}_n\}_n$ of cylinders shrinking to the center $(y,s)$. This net will be constructed so as to control the oscillation uniformly.
\vskip0.2cm \noindent
\noindent {\small{STEP 3-{\it Controlled reduction of oscillation.}}}
\begin{proposition}\label{birra} Let the hypotheses of Theorem \ref{HC} be valid and assume also \eqref{exclusion}. Then, setting
\begin{equation*}
\begin{cases}
\omega_0= \omega_o(K),\\
\omega_n=\delta \omega_{n-1}, \, n\geqslant 1,
\end{cases}
\begin{cases}
\theta_n= \omega_n/C_1, \, n\geqslant 0,\\
\rho_0=R,\\
\rho_n= \varepsilon \rho_{n-1}, \, n\geqslant 1,\\
\end{cases}
\begin{cases}
\delta=4C_3/(1+4C_3),\\
\varepsilon=\delta^{{(\bar{p}-2)}/{\bar{p}}}/A, \\
A=4^{p_N}, \end{cases}
\end{equation*} \noindent we have both the inclusions
\[
\mathcal{Q}_{n}\subset \mathcal{Q}_{n-1}, \quad \text{with} \quad \mathcal{Q}_n= (y,s)+ \mathcal{Q}_{\rho_n}^-(\theta_n)= \prod_{i=1}^N \bigg{\{}|y_i-x_i|<\theta_n^{{(p_i-\bar{p})}/{p_i}}\rho_n^{{\bar{p}}/{p_i}} \bigg{\}} \times \bigg(s-\theta_n^{2-\bar{p}} (C_2\rho_n)^{\bar{p}} ,\, s\bigg],
\] and the inequalities
\begin{equation}\label{control}
\osc_{\mathcal{Q}_n} u \leqslant \omega_n = \delta^n \omega_o.
\end{equation}
\end{proposition}
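The recursion in Proposition \ref{birra} is built so that the time-height of the shrunken cylinder matches that of $\mathcal{Q}_{\rho_{n-1}/A}^-(\theta_{n-1})$; a small numerical sketch (with sample values of $\bar{p}$, $p_N$ and of the constants, chosen purely for illustration) checking the identity $\theta_n^{2-\bar{p}}(C_2\rho_n)^{\bar{p}}=\theta_{n-1}^{2-\bar{p}}(C_2\rho_{n-1}/A)^{\bar{p}}$ at every step:

```python
# Sample values (illustrative): any pbar > 2, pN >= pbar, C3 >= C2 >= 1 work
pbar, pN = 2.3, 2.5
C2, C3 = 2.0, 4.0
delta = 4*C3/(1 + 4*C3)
A = 4**pN
eps = delta**((pbar - 2)/pbar)/A

theta, rho = 1.0, 1.0                  # theta_0, rho_0 (normalized)
for n in range(1, 6):
    theta_new, rho_new = delta*theta, eps*rho
    lhs = theta_new**(2 - pbar)*(C2*rho_new)**pbar   # height of Q_n
    rhs = theta**(2 - pbar)*(C2*rho/A)**pbar         # height of Q^-_{rho/A}
    assert abs(lhs - rhs) <= 1e-12*rhs
    theta, rho = theta_new, rho_new
print("height identity holds at every step")
```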
\begin{proof}[Proof of Proposition \ref{birra}]\noindent First of all, we prove that $\mathcal{Q}_{n} \subset \mathcal{Q}_{n-1}$ for all $n\in{\mathbb N}$. By direct computation,
\begin{equation*} \begin{aligned}
\theta_{n}^{2-\bar{p}} (C_2\rho_{n})^{\bar{p}} = \bigg(\frac{\delta \omega_{n-1}}{C_1}\bigg)^{2-\bar{p}}\bigg((C_2\rho_{n-1}/A)^{\bar{p}}\delta^{\bar{p}-2}\bigg)= \theta_{n-1}^{2-\bar{p}} (C_2\rho_{n-1}/A)^{\bar{p}}.
\end{aligned} \end{equation*} For each $i\in \{1,\dots,N\}$, since $p_i>2$ and $\delta \in (0,1)$, it holds
\[
\theta_{n}^{p_i-\bar{p}} \rho_{n}^{{\bar{p}}} = \delta^{p_i-2} \theta_{n-1}^{{p_i-\bar{p}}} ( {\rho_{n-1}}/{A} )^{{\bar{p}}} \leqslant \theta_{n-1}^{{p_i-\bar{p}}} ( {\rho_{n-1}}/{A} )^{{\bar{p}}}.
\] This computation shows a little more, giving indeed $\mathcal{Q}_{n}\subset (y,s)+\mathcal{Q}_{\rho_{n-1}/A}^-(\theta_{n-1})\subset \mathcal{Q}_{n-1}$.
\noindent Now we prove \eqref{control} by induction. The base step holds true: indeed, the accommodation of degeneracy (see Step 2 above) entails $\mathcal{Q}_0\subset\Lambda$, so that the bound produced in Step 1 yields
\[
\osc_{\mathcal{Q}_0} u \leqslant \osc_{\Lambda} u \leqslant 2\|u\|_{L^\infty(\Lambda)} \leqslant \omega_o.
\]
We assume now that the statement \eqref{control} is true until step $n$ and we show it for $n+1$. This will determine the number $A$. More precisely, we assume that $\osc_{\mathcal{Q}_n}u \leqslant \omega_n$ and, by contradiction, that $\osc_{\mathcal{Q}_{n+1}} u > \omega_{n+1}$. We set
\[
M_{n}= \sup_{\mathcal{Q}_{n}} u, \qquad m_{n}= \inf_{\mathcal{Q}_{n}}u, \qquad P_{n}=(y,\, s-\theta_{n}^{2-\bar{p}}(C_2\rho_{n})^{\bar{p}}).
\] Now we observe that one of the following two inequalities must be valid:
\[
M_{n}-u(P_{n}) > \omega_{n+1}/4 \qquad \text{or} \qquad u(P_{n})-m_{n} > \omega_{n+1}/4.
\] Indeed, if both alternatives are violated, then by adding the two opposite inequalities we obtain $\osc_{\mathcal{Q}_{n}} u \leqslant \omega_{n+1}/2< \osc_{\mathcal{Q}_{n+1}} u$, contradicting $\mathcal{Q}_{n+1} \subseteq \mathcal{Q}_n$. Let us suppose $M_{n}-u(P_{n}) \geqslant \omega_{n+1}/4$, the other case being similar. In particular we have the double bound
\begin{equation}
\label{doublebound}
\omega_{n+1}/4\leqslant M_n - u(P_n) \leqslant \omega_n.
\end{equation}
Let us set $\hat{\theta}_n=(M_n-u(P_n))/C_1$. We work in the half-paraboloid $\mathcal{P}^+_n=\mathcal{P}^+_{\hat{\theta}_n}(P_n)$, with times restricted to those of $\mathcal{Q}_n$.
\noindent
The starting time of $\mathcal{P}^+_n$ is the same as that of $\mathcal{Q}_n$ (see Figure \ref{FigA}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.25]
\draw[thick,->] (25,0) -- (25,6) node[anchor=north west] {\small{$x \in {\mathbb R}^N$}};
\draw[thick,->] (25,0) -- (31,0) node[anchor=south west] {\small{$t \in {\mathbb R}$}};
\draw (20,0) rectangle (-6,6);
\draw (20,0) rectangle (6,4);
\draw (20,0) rectangle (-6,-6);
\draw (20,0) rectangle (6,-4);
\draw (18, 2) node{$\mathcal{Q}_{n+1}$};
\draw (-4, 4) node{$\mathcal{Q}_{n}$};
\draw (4, 4.6) node{\textcolor{red}{$\mathcal{P}^+_n$}};
\draw (-7, 0) node{$P_n$};
\draw (21, 0) node{$s$};
\draw (5.6,-0.65) node{$\bar{t}$};
\draw[red] (20,5.6) parabola (-6,0);
\draw[red] (20,-5.6) parabola (-6,0);
\end{tikzpicture}
\caption{{\small Scheme of the proof of \eqref{bound}. The anisotropic paraboloid $\mathcal{P}^+_n$ (in red), centered at $P_n=(\,y,\, s-(\omega_n/C_1)^{2-\bar{p}}(C_2\rho_n)^{\bar{p}})$, evolves in a time $(\omega_n/C_1)^{2-\bar{p}}(C_2\rho_n)^{\bar{p}}$ to cover $\mathcal{Q}_{n+1}$.}}
\label{FigA}
\end{figure}
\noindent To show that $\mathcal{P}_n^+\subset\mathcal{Q}_n\subset\Omega_T$, we control the space variables. From the upper bound in \eqref{doublebound} and the definition of the paraboloid, we infer
\begin{equation*}
|x_i-y_i|^{p_i}< \bigg(\frac{M_n-u(P_n)}{C_1}\bigg)^{p_i-2}\rho_n^{\bar{p}}\bigg(\frac{\omega_n}{C_1}\bigg)^{2-\bar{p}}\leqslant \bigg( \frac{\omega_n}{C_1} \bigg)^{p_i-\bar{p}} \rho_n^{\bar{p}}
\end{equation*}
for all $x\in\pi_x(\mathcal{P}_n^+)$, where $\pi_x$ denotes the projection on the space variables. This furnishes the desired inclusion. \\
\noindent
Now we show that, after a certain time $\bar{t}$, the whole cylinder $\mathcal{Q}_{n+1}$ is contained in the paraboloid $ \mathcal{P}^+_n$; see Figure \ref{FigA} for a representation. For times $t >s-(\omega_n/C_1)^{2-\bar{p}}(C_2\rho_n)^{\bar{p}}$, we denote by $\mathcal{P}^+_n(t)$ the time-section of $\mathcal{P}^+_n$ at time $t$:
\[ \mathcal{P}^+_n(t)= \bigg{\{} x \in {\mathbb R}^N: \, \, |x_i-y_i|^{p_i}< C_2^{-\bar{p}} [(M_n-u(P_n))/C_1]^{p_i-2}(t-s+ (\omega_n/C_1)^{2-\bar{p}}(C_2\rho_n)^{\bar{p}}) \bigg{\}}.\]
\noindent Let us set \begin{equation*} \bar{t}=s-(\omega_{n+1}/C_1)^{2-\bar{p}}(C_2 \rho_{n+1})^{\bar{p}},\end{equation*}
and let us prove that at time $\bar{t}$ we have the inclusion $\pi_{x}(\mathcal{Q}_{n+1})\subset \mathcal{P}^+_n(\bar{t})$. This reduces to showing that
\[\rho_{n+1}^{\bar{p}} (\omega_{n+1}/C_1)^{p_i-\bar{p}} \leqslant (A^{\bar{p}}-1) [(M_n-u(P_n))/C_1]^{p_i-2} \rho_{n+1}^{\bar{p}}
(\omega_{n+1}/C_1)^{2-\bar{p}},\]
that is,
\[\omega_{n+1}^{p_i-2} \leqslant (A^{\bar{p}}-1) (M_n-u(P_n))^{p_i-2}.\]
According to \eqref{doublebound}, this inequality is verified when $4^{p_N-2}<A^{\bar{p}}-1$, for instance by setting $A= 4^{p_N}$. \vskip0.1cm \noindent Hence, by the Harnack inequality \eqref{estimate-paraboloid} and \eqref{doublebound}, we can estimate the infimum of $M_{n}-u$ in $\mathcal{Q}_{n+1}$ as
\begin{equation} \label{bound}
\inf_{\mathcal{Q}_{n+1}} (M_{n}-u) \geqslant \inf_{\mathcal{P}^+_n(\bar{t})} (M_n-u) \geqslant \frac{M_{n}-u(P_{n})}{C_3} \geqslant \omega_{n+1}/(4C_3),
\end{equation}\noindent again referring to Figure \ref{FigA}. Thus
\[
M_{n} \geqslant \sup_{\mathcal{Q}_{n+1}} u+ \omega_{n+1}/(4C_3).
\]
Adding $-\inf_{\mathcal{Q}_n} u \geqslant - \inf_{\mathcal{Q}_{n+1}}u$ to both sides of the latter inequality, and using $\osc_{\mathcal{Q}_{n+1}}u >\omega_{n+1}$, we get
\[\omega_n \geqslant M_n - \inf_{\mathcal{Q}_n} u \geqslant \sup_{\mathcal{Q}_{n+1}} u+ \omega_{n+1}/(4C_3) - \inf_{\mathcal{Q}_{n+1}} u= \osc_{\mathcal{Q}_{n+1}}u+\omega_{n+1}/(4C_3)> \bigg(1+\frac{1}{4C_3} \bigg) \omega_{n+1}\, .
\]This leads to a contradiction by definition of $\delta$, since
\[
\omega_n > \bigg(1+\frac{1}{4C_3} \bigg) \delta \omega_{n}= \bigg(\frac{4C_3}{1+4C_3}\bigg) \bigg(1+\frac{1}{4C_3} \bigg) \omega_n = \omega_n.
\]
\end{proof}
\noindent
{\small STEP 4-{\it Conclusion of the proof of Theorem \ref{HC}.}}
\vskip0.2cm
\noindent If we consider a point $(x,t) \in (y,s) +\mathcal{Q}_R^-(\omega_o/C_1,C_2)$, let $n \in \mathbb{N}$ be the largest number such that $(x,t)\in \mathcal{Q}_n$, so that $(x,t) \not\in \mathcal{Q}_{n+1}$. From the first condition and \eqref{control} we have
\[|u(x,t)-u(y,s)| \leqslant \osc_{\mathcal{Q}_n} u\leqslant \delta^n \omega_o.\]
The rest of the argument is standard and consists in deriving, from the condition $(x,t) \not\in \mathcal{Q}_{n+1}$, an upper bound for $\delta^n$. For the sake of simplicity, we only detail the case $x \not\in y+\mathcal{K}_{\rho_{n+1}}$.
\noindent
Let $\beta>0$ be such that $\delta^{{(\bar{p}-2)}/{\bar{p}}}/A=\delta^{\beta}$. By assumption, there is an index $i \in \{1,\dots, N\}$ such that
\begin{equation*}
|x_i-y_i|^{p_i}> \rho_{n+1}^{{\bar{p}}} (\omega_{n+1}/C_1)^{{p_i-\bar{p}}} \geqslant \gamma(A)(\delta^n)^{[{\bar{p}(\beta-1)+p_i}]}R^{\bar{p}}(\omega_o/C_1)^{p_i-\bar{p}},
\end{equation*}
\noindent which gives us, for $\chi_i=\bar{p}/(\bar{p}(\beta-1)+p_i)$, the following estimate of $\delta^n$:
\begin{equation*} \begin{aligned}
\delta^n \leqslant&
\gamma \bigg( \frac{ |x_i-y_i|^{{p_i}/{\bar{p}}} (\omega_o/C_1)^{{(\bar{p}-p_i)}/{\bar{p}}}}{R} \bigg)^{{\bar{p}}/[{{\bar{p}(\beta-1)+p_i}}]} \\
&\leqslant \gamma \bigg(\frac{\sum_{i=1}^N |x_i-y_i|^{{p_i}/{\bar{p}}}\omega_o^{{(\bar{p}-p_i)}/{\bar{p}}}+ |t-s|^{{1}/{\bar{p}}}\omega_o^{{(\bar{p}-2)}/{\bar{p}}}}{{\bf{p}}\text{-dist}(K,\partial \Lambda) } \bigg)^{\chi_i}.\end{aligned} \end{equation*}
\noindent
From $A>4>\delta^{-1-2/\bar{p}}$ we infer $\beta>2$, whence $\chi_i\in(0,1)$. A similar estimate follows in the case where the times are not contained, with $\chi_t= \bar{p}/(\bar{p}(\beta-1)+2)$. Therefore, recalling that $p_N>2$, we choose the H\"older exponent
\begin{equation} \label{alfa}
\chi = \min \{\chi_i, \chi_t \,:\, i=1,\dots,N \}=\frac{\bar{p}}{\bar{p}(\beta-1)+p_N}.
\end{equation}
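\noindent For the reader's convenience, we spell out the H\"older estimate that results from this standard step (a routine combination of \eqref{control} with the bound on $\delta^n$ above): for all $(x,t) \in (y,s)+\mathcal{Q}_R^-(\omega_o/C_1,C_2)$,
\[
|u(x,t)-u(y,s)| \leqslant \gamma\, \omega_o \bigg(\frac{\sum_{i=1}^N |x_i-y_i|^{{p_i}/{\bar{p}}}\,\omega_o^{{(\bar{p}-p_i)}/{\bar{p}}}+ |t-s|^{{1}/{\bar{p}}}\,\omega_o^{{(\bar{p}-2)}/{\bar{p}}}}{{\bf{p}}\text{-dist}(K,\partial \Lambda)}\bigg)^{\chi},
\]
with $\chi$ as in \eqref{alfa}.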
\end{proof}
\noindent
\section{Liouville-type results}\label{LiouvilleSection}
\noindent Liouville properties were originally discovered for harmonic functions: for solutions to $\Delta u =0$ in ${\mathbb R}^N$, either a one-sided bound on $u$ or sublinear growth at infinity is a suitable rigidity condition.
These two classical examples follow, respectively, from an application of Harnack's inequality and from gradient estimates. Here we observe that gradient bounds of logarithmic type are unknown for solutions to the stationary counterpart of \eqref{EQ} and seem hard to obtain, chiefly because of the lack of homogeneity of the operator. On the other hand, for parabolic equations a one-sided bound is not sufficient to imply that solutions are constant, as we remarked. This is still the case for non-negative solutions to degenerate $p$-Laplacian equations (i.e., for $p>2$). Indeed, the one-parameter family of non-negative functions
\[
{\mathbb R} \times {\mathbb R} \ni (x,t)\rightarrow u(x,t;c)= c^{{1}/({p-2})} \bigg(\frac{p-2}{p-1} \bigg)^{{(p-1)}/{(p-2)}} (1-x+ct)_+^{{(p-1)}/{(p-2)}}
\] consists of non-negative, non-constant weak solutions to $u_t=\Delta_{p}u$ in ${\mathbb R}^2$. This naturally provides a counterexample also in the case of equation \eqref{EQ} in one spatial dimension. Similarly, the anisotropic driving example we have in mind is
\[{\mathbb R}^N \times {\mathbb R} \ni (x,t) \rightarrow u(x,t;c)= \bigg(1-ct + \sum_{i=1}^N {(\alpha_i/p_i') |x_i|^{p_i'}}\bigg)_+,\]
for $\alpha_i >0$ such that $\sum_{i=1}^N |\alpha_i|^{p_i-1}\alpha_i=c$, where $p_i'$ denotes the H\"older conjugate of $p_i$ for each $i=1,\dots, N$. On the other hand, a full lower bound coupled with a specific upper bound at some time level ensures a Liouville property, as the following result shows.
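\noindent The one-dimensional example can be checked directly (an elementary verification, not detailed in the text): writing $u=K\,w^{q}$ with $q=(p-1)/(p-2)$, $K=c^{{1}/({p-2})}\big((p-2)/(p-1)\big)^{q}$ and $w=(1-x+ct)_+$, on the set $\{w>0\}$ one computes
\[
u_t = Kqc\, w^{q-1}, \qquad |u_x|^{p-2}u_x = -(Kq)^{p-1} w^{(q-1)(p-1)} = -(Kq)^{p-1} w^{q},
\]
so that $(|u_x|^{p-2}u_x)_x = (Kq)^{p-1} q\, w^{q-1}$, and the identity $u_t=(|u_x|^{p-2}u_x)_x$ reduces to $Kc=(Kq)^{p-1}$, which holds by the choice of $K$.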
\begin{theorem}\label{Liouville1}
Let $T\in {\mathbb R}$, $S_T={\mathbb R}^N \times (-\infty, T)$, and $u$ be a solution to \eqref{EQ}-\eqref{pi} which is bounded below in $S_T$. Assume moreover that, for some $s<T$, one has
\begin{equation}\label{rs-bound}
\sup_{{\mathbb R}^N} u(\cdot,s)=M_s <\infty.
\end{equation} Then $u$ is constant in $S_s= {\mathbb R}^N \times (-\infty, s)$.
\end{theorem}
\begin{corollary}
\label{LiouvilleCor}
Let $T\in {\mathbb R}$, $S_T={\mathbb R}^N \times (-\infty, T)$, and $u$ be a solution to \eqref{EQ}-\eqref{pi}. If $u$ is bounded from above and below in $S_T$, then it is constant.
\end{corollary}
\begin{proof}[Proof of Theorem \ref{Liouville1}]
Let $u$ be a solution to \eqref{EQ} bounded from below in $S_T$. We define
\[
m:=\inf_{S_T} u.
\]
We prove the following fact, which is interesting in its own right:
\begin{equation}\label{perse}
\lim_{t\rightarrow -\infty} u(x,t)= \inf_{S_T} u \quad \mbox{for any} \quad x \in {\mathbb R}^N.
\end{equation}\noindent To this aim, fix any $x\in{\mathbb R}^N$ and $\varepsilon>0$. Notice that there exists a point $(y_\varepsilon, s_\varepsilon) \in S_T$ such that $u(y_{\varepsilon},s_{\varepsilon})-m\leqslant \varepsilon/C_3$. Set $\theta_{\varepsilon}= (u(y_{\varepsilon},s_{\varepsilon})-m)/C_1$. Exploiting \eqref{estimate-paraboloid} for the solution $u-m$, we have
\begin{equation}\label{infimum}
m \leqslant u(y,s) \leqslant m+\varepsilon \quad \text{for all} \quad (y,s) \in \mathcal{P}_{\theta_\varepsilon}^-(y_{\varepsilon},s_{\varepsilon}).
\end{equation}
Consider the half line $R:=\{x\}\times(-\infty,T)$. Observe that
\[
R \cap \mathcal{P}_{\theta_\varepsilon}^-(y_{\varepsilon},s_{\varepsilon}) = \{x\} \times (-\infty,t_{\varepsilon,x}), \quad \mbox{where} \quad t_{\varepsilon,x} := s_\varepsilon-C_2^{\bar{p}}(2-|x_i-y_{\varepsilon,i}|)^{p_i}\theta_\varepsilon^{2-p_i}.
\]
According to \epsilonqref{infimum}, this shows that
\begin{equation*}
m \leqslant u(x,s) \leqslant m+\varepsilon \quad \text{for all} \quad s < t_{\varepsilon,x}.
\end{equation*}
Accordingly, \eqref{perse} is proved, by arbitrariness of $x$ and $\varepsilon$.
A similar argument shows that
\begin{equation}\label{perse2}
\sup_{S_T} u <\infty \quad \Rightarrow \quad \lim_{t\rightarrow -\infty} u(x,t) = \sup_{S_T} u \quad \forall x \in {\mathbb R}^N.
\end{equation}
Eventually this implies that any solution $u$ to \eqref{EQ} which is bounded from both above and below in the whole $S_T$ is necessarily constant. Indeed, by \eqref{perse} and \eqref{perse2} we have $\sup_{S_T} u= \inf_{S_T} u$. This argument proves Corollary \ref{LiouvilleCor}.
\vskip0.1cm \noindent
In order to conclude the proof of Theorem \ref{Liouville1}, we use the assumption that there exists $\bar{s} \in (-\infty,\, T)$ such that $u(\cdot, \bar{s})$ is bounded from above in the whole ${\mathbb R}^N$ by a suitable $M_{\bar{s}}\in {\mathbb R}$. Indeed, letting $\theta_x=(u(x,\bar{s})-m)/C_1$ for any $x\in{\mathbb R}^N$ and using the intrinsic backward Harnack inequality for $u-m$ again, we get the uniform bound
\[
u(y,s) \leqslant C_3 u(x,\bar{s}) \leqslant C_3 M_{\bar{s}}, \quad \text{for all} \quad x \in {\mathbb R}^N \quad \mbox{and} \quad (y,s) \in \mathcal{P}_{\theta_{x}}^-(x,\bar{s}).
\] Reasoning as above, with $\mathcal{P}_{\theta_x}^-(x,\bar{s})$ in place of $\mathcal{P}_{\theta_{\varepsilon}}^- (y_{\varepsilon},\, s_{\varepsilon})$, and recalling that $u$ is bounded from both above and below in $\mathcal{P}_{\theta_x}^-(x,\bar{s})$ uniformly in $x \in {\mathbb R}^N$, we conclude that $u$ is constant in $S_{\bar{s}}$.
\end{proof}
\noindent As a general principle, the bigger the set where the equation is solved, the stronger the rigidity: for solutions of \eqref{EQ} in ${\mathbb R}^N \times {\mathbb R}$, it suffices to check their asymptotic (in time) two-sided boundedness at a single point $y \in {\mathbb R}^N$ to infer that they are constant, as shown by the next theorem.
\begin{theorem}\label{Liouville2}
Let $u$ be a local weak solution to \eqref{EQ}-\eqref{pi} in ${\mathbb R}^N \times {\mathbb R}$ which is bounded from below. If, in addition, there exist $y \in {\mathbb R}^N$ and a sequence $\{ s_n\} \subset {\mathbb R}$, $s_n\to+\infty$, such that $\{u(y, s_n)\}$ is bounded, then $u$ is constant.
\end{theorem}\noindent
\begin{remark}
We explicitly point out the following straightforward consequence of Theorem \ref{Liouville2}. Let $u$ be a local weak solution to \eqref{EQ} in ${\mathbb R}^N \times {\mathbb R}$ which is bounded from below. Suppose that, for some $y \in {\mathbb R}^N$, one has
\begin{equation}\label{Liouville2HP}
\liminf_{t\rightarrow +\infty} u(y,t)=\alpha \in {\mathbb R}.
\end{equation}\noindent Then $u$ is constant.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Liouville2}]
Let $m:=\inf u$ and consider $\tilde{u}:=u-m+C_1$, which is a solution to \eqref{EQ}. By assumption, there exist $M \in {\mathbb R}$ and $\{s_n\}\subset{\mathbb R}$ such that $s_n \to +\infty$ and
\[
\tilde{u}(y,s_n)<M \quad \quad \forall n \in {\mathbb N}.
\]
Let us fix arbitrarily $\bar{s}\in {\mathbb R}$ and let $\bar{n} \in {\mathbb N}$ be big enough such that $s_n >\bar{s}$ for all $n \geqslant \bar{n}$. Then, for all $n\geqslant\bar{n}$, we set $\theta_n:=\tilde{u}(y,s_n)/C_1$ and define a sequence of radii $\{\rho_n\}$ through
\[
s_n-\theta_n^{2-\bar{p}} (C_2 \rho_n)^{\bar{p}} =\bar{s}, \quad \quad \mbox{that is,} \quad \quad \rho_n= [\theta_n^{\bar{p}-2} (s_n-\bar{s})]^{1/\bar{p}}/C_2.
\] We want to apply the Harnack inequality to deduce an upper bound for $\tilde{u}(\cdot,\bar{s})$ in the whole ${\mathbb R}^N$; so we need to check that the intrinsic anisotropic cubes $\mathcal{K}_{\rho_n}(\theta_n)$ expand as $s_n\to+\infty$. An explicit computation yields
\[
\mathcal{K}_{\rho_n}(\theta_n)= \prod_{i=1}^N \bigg{\{}|x_i|<\theta_n^{{(p_i-2)}/{p_i}} \left(\frac{s_n-\bar{s}}{C_2^{\bar{p}}}\right)^{{1}/{p_i}} \bigg{\}}\quad \xrightarrow[n \to \infty]{} \quad {\mathbb R}^N,
\] since $1\leqslant\theta_n\leqslant M/C_1$ and $\{s_n\}$ diverges. By the intrinsic Harnack inequality \eqref{Harnack} we have
\[
\sup_{y+\mathcal{K}_{\rho_n}(\theta_n)} \tilde{u} \bigg(\, \, \cdot\, \, ,\, s_n-\theta_n^{2-\bar{p}} (C_2\rho_n)^{\bar{p}} \bigg)\leqslant C_3 \,\tilde{u}(y,s_n) \leqslant C_3 M \quad \forall n\geqslant \bar{n}.
\] Thus, recalling the definition of $\{\rho_n\}$, we get the uniform estimate
\[
\sup_{y+\mathcal{K}_{\rho_n}(\theta_n)} \tilde{u}(\cdot, \bar{s} ) \leqslant C_3 M, \qquad \forall n\geqslant \bar{n},
\]
whence, letting $n\to\infty$,
\[
\sup_{{\mathbb R}^N} \tilde{u}(\cdot, \bar{s}) \leqslant C_3 M.
\]
Now we can apply Theorem \ref{Liouville1} in $(-\infty, \bar{s})$ and conclude by the arbitrariness of $\bar{s}\in{\mathbb R}$.
\end{proof}
\noindent Finally, we show that the oscillation estimates \eqref{control} constitute a Liouville property for ancient solutions. This allows us to drop the restriction \eqref{pi} on the $p_i$s ensuring finite speed of propagation, at the price of assuming a suitable decay of the local oscillation.
\begin{theorem} \label{Cutilisci}
Let $u$ be a bounded function in $S_T$. Let $\omega_o>0$, $\delta \in (0,1)$, $2<p_1 \leqslant \ldots \leqslant p_N< \infty$ and $c_1,c_2,c_4>1$ be fixed parameters. For any $(\bar{x}, \bar{t}) \in S_T$ and $R_o>0$, define a sequence of backward shrinking cylinders $\mathcal{Q}_{n+1} \subset \mathcal{Q}_{n}$ as
\begin{equation}\label{cilindrotti}
\mathcal{Q}_n = \mathcal{Q}_{n}(R_o) = (\bar{x}, \bar{t})+ \mathcal{Q}_{\rho_n}^-(\theta_n, c_2) = \prod_{i=1}^N \bigg{\{}|x_i-\bar{x}_i|<\theta_n^{{(p_i-\bar{p})}/{p_i}}\rho_n^{{\bar{p}}/{p_i}} \bigg{\}} \times \bigg(\bar{t}-\theta_n^{2-\bar{p}} (c_2\rho_n)^{\bar{p}} ,\, \bar{t}\, \bigg],
\end{equation} where
\[
\theta_n= \delta^n \omega_o/c_1, \qquad \rho_n=\varepsilon^n R_o, \qquad \varepsilon=\delta^{\frac{\bar{p}-2}{\bar{p}}}/c_4.
\]
\noindent If $u$ satisfies, for all $(\bar{x},\bar{t}) \in S_T$ and $R_o>0$, the decay
\begin{equation}\label{oscilla}
\osc_{\mathcal{Q}_{n+1}} u \leqslant \delta \osc_{\mathcal{Q}_{n}} u, \qquad n\in {\mathbb N} \cup\{0\},
\end{equation}
\noindent then $u$ is constant in $S_T$.
\end{theorem}
\begin{proof} The proof is an adaptation of an early idea already present in \cite{Glagoleva1} (see also \cite{Landis}). Arguing by contradiction, assume that $A,B \in S_T$ are two points such that $u(A) \ne u(B)$.
Suppose, without loss of generality, $T=0$ and define
\[d= \max \{\mathrm{dist}(A,0), \, \mathrm{dist}(B,0) \}.\]
Choose a radius $\tilde{R}_o>0$ big enough to enclose $A$ and $B$ inside an intrinsic backward cylinder $\tilde{\mathcal{Q}}_0:=\mathcal{Q}_0(\tilde{R}_o)$,
so that $\tilde{R}_o$ satisfies
\begin{equation*}
\begin{cases}
\theta_0^{p_i-\bar{p}} \tilde{R}_o^{{\bar{p}}} >d^{p_i},\quad i =1,\dots,N,\\
\theta_0^{2-\bar{p}} (c_2 \tilde{R}_o)^{\bar{p}} >d.
\end{cases}
\end{equation*}
\noindent Now set $R_o:=c_4 \tilde{R}_o\delta^{\frac{2-p_N}{\bar{p}}}$, observe that $\tilde{\mathcal{Q}}_0\subset\mathcal{Q}_1(R_o)$, and fix $\tilde{\mathcal{Q}}_1:=\mathcal{Q}_0(R_o)$.
Then the decay \epsilonqref{oscilla} implies
\[\osc_{\tilde{\mathcal{Q}}_0} u \leqslant \osc_{\mathcal{Q}_1(R_o)} u \leqslant \delta \osc_{\mathcal{Q}_0(R_o)} u = \delta \osc_{\tilde{\mathcal{Q}}_1} u.\]
Proceeding inductively, we construct $\tilde{\mathcal{Q}}_{n+1}$ by choosing a new $R_o$ such that $\tilde{\mathcal{Q}}_n \subset \mathcal{Q}_1(R_o) \subset \mathcal{Q}_0(R_o) =: \tilde{\mathcal{Q}}_{n+1}$. By construction,
\[ |u(A)-u(B)| \leqslant \osc_{\tilde{\mathcal{Q}}_0} u \leqslant \delta^n \osc_{\tilde{\mathcal{Q}}_{n}} u \quad \quad \forall n \in {\mathbb N}.\]
Finally, the boundedness of $u$ leads to a contradiction: indeed, for all $n \in {\mathbb N} \cup \{0\}$ we have
\[|u(A)-u(B)| \leqslant \delta^n \osc_{\tilde{\mathcal{Q}}_{n}} u \leqslant 2 \delta^n \|u\|_{\infty, S_T},\]
forcing $u(A)=u(B)$.
\end{proof}
\noindent As a consequence of Proposition \ref{birra}, we obtain again Corollary \ref{LiouvilleCor}. Indeed, when equation \eqref{EQ} is solved in $S_T$, the length $R$ in Proposition \ref{birra} can be taken arbitrarily large. Nevertheless, we decided to formulate Theorem \ref{Cutilisci} without the assumption that $u$ solves any equation. Indeed, Theorem \ref{Cutilisci} is finer: its general principle goes far beyond equation \eqref{EQ} and is a key argument in proving rigidity results for a very general class of equations (see, e.g., \cite[Prop. 18.4]{DBGV-mono} or \cite{Liao2} for an application to systems). Its importance shows up when a Harnack inequality ceases to hold true.
\section{Time-extrinsic Harnack inequality} \label{WHSection}
\noindent In this section we show how it is possible to free the Harnack inequality from its intrinsic geometry in time. More specifically, we give a formulation of the Harnack inequality allowing the solution to be evaluated at any time level, independently of the anisotropic geometry, provided there is enough room for the anisotropic evolution inside $\Omega_T$. Unlike the isotropic case, here it looks harder to get rid of the intrinsic geometry along the space variables. The proof of the next theorem exploits a comparison with the abstract Barenblatt solution $\mathcal{B}$ of Theorem \ref{Barenblatt} to control the positivity.
\begin{theorem}
\label{WeakHarnack} Let $u \geqslant 0$ be a local weak solution to \eqref{EQ} in $\Omega_T$, and assume \eqref{pi}. Then there exist $ \tilde{\eta} >0$ and $\gamma>1$, depending only on $N$ and $p_i$s, such that for all $(x_o,t_o)\in \Omega_T$ and $\rho, \tilde{\theta}>0$ fulfilling the condition
\begin{equation}\label{domain}
(x_o, t_o+ \tilde{\theta})+\mathcal{Q}_{C_3 \rho}(u(x_o,t_o)/C_1,C_2) \subset \Omega_T
\end{equation} we have
\begin{equation}\label{WH}
u(x_o,t_o)\leqslant \gamma \bigg{\{} \bigg(\frac{\rho^{\bar{p}}}{\tilde{\theta}} \bigg)^{{1}/{(\bar{p}-2)}}+ \bigg( \frac{\tilde{\theta}}{\rho^{\bar{p}}} \bigg)^{N/\bar{p}} \bigg[\inf_{x_o+\mathcal{K}_{\tilde{\eta}\rho}(\tilde{\eta} u(x_o,t_o)/C_1)} u( \cdot,\, t_o+\tilde{\theta}) \bigg]^{\lambda/\bar{p}}\bigg{\}},
\end{equation} \noindent where $C_1,C_3>1$ come from Theorem \ref{Harnack-Inequality} while $\lambda,\tilde{\eta}>0$ stem from Theorem \ref{Barenblatt}. \end{theorem}
\begin{proof}
Let $\rho, \tilde{\theta} >0$ be such that \epsilonqref{domain} holds true. Set
\begin{equation} \label{t*}
t^*:= \bigg( \frac{C_1}{u(x_o,t_o)} \bigg)^{\bar{p}-2} (C_2 {\rho})^{\bar{p}}.
\end{equation} We can suppose $t^*<\tilde{\theta}/2$; otherwise we get $u(x_o,t_o) \leqslant \gamma ({\rho}^{\bar{p}}/\tilde{\theta})^{1/(\bar{p}-2)}$ for a suitable $\gamma= \gamma (C_1, C_2,\bar{p})$, and \eqref{WH} is valid. Observe that $t^*<\tilde{\theta}/2$ and \eqref{domain} imply
\[
t_o+\bigg( \frac{C_1}{u(x_o,t_o)}\bigg)^{\bar{p}-2} (C_2 {\rho})^{\bar{p}} <t_o+ \tilde{\theta}/2 < T \quad \mbox{and} \quad x_o + \mathcal{K}_{C_3 {\rho}}(u(x_o,t_o)/C_1) \subset \Omega.
\] Hence the forward Harnack inequality \eqref{Harnack} furnishes
\[
u(x_o,t_o) \leqslant C_3 u(x,\, t_o+t^*) \quad \quad \forall \, x \in x_o + \mathcal{K}_{{\rho}}(u(x_o,t_o)/C_1).
\] This initial value can be used for a comparison with the Barenblatt solution $\mathcal{B}_{\sigma}(x-x_o,t-s)$ centered at $(x_o,s)$, where $s,\sigma>0$ are to be chosen so that $\mathcal{B}_{\sigma}(x-x_o,t_o+t^*-s)$ lies below $u$ in $x_o+\mathcal{K}_{\rho}(u(x_o,t_o)/C_1)$. These requirements can be written as
\begin{equation}\label{supportami}
\begin{cases}
\supp{\mathcal{B}_{\sigma}}(\cdot-x_o, t_o+t^*-s) \subseteq x_o+ \mathcal{K}_{\rho}(u(x_o,t_o)/C_1),\\
\|\mathcal{B}_{\sigma}(\cdot-x_o, t_o+t^*-s)\|_\infty \leqslant u(x_o,t_o)/C_3.
\end{cases}
\end{equation}
According to Theorem \ref{Barenblatt}, conditions in \eqref{supportami} are fulfilled as long as
\begin{equation}\label{supportami2}
\begin{cases}
\sigma^{(p_i-2)/p_i} (t_o+t^*-s)^{\alpha_i} \leqslant \rho^{\bar{p}/p_i} (u(x_o,t_o)/C_1)^{(p_i-\bar{p})/p_i},\\
\sigma (t_o+t^*-s)^{-\alpha} \leqslant u(x_o,t_o)/C_3.
\end{cases}
\end{equation}
\noindent Inequalities in \eqref{supportami2} are in turn ensured by choosing
\[
\sigma= (t_o+t^*-s)^{N/\lambda} u(x_o,t_o)/C_3 \quad \mbox{and} \quad s= t_o+t^*- \bigg( \frac{\rho^{\bar{p}}}{u(x_o,t_o)^{\bar{p}-2}} \bigg) \gamma_1,\]
where $\gamma_1=\min \{(C_3^{p_i-2})/(C_1^{p_i-\bar{p}})\, |\, i=1,\dots,N\}$. Therefore the comparison principle, applied at the time $t_o+\tilde{\theta}>t_o+t^*$, gives
\begin{equation}\label{comparison} \begin{aligned}
u(x, t_o+\tilde{\theta}) &\geqslant \tilde{\eta} \sigma |t_o+t^*-(t_o+\tilde{\theta})|^{-\alpha} = \tilde{\eta} \bigg( \frac{u(x_o,t_o)}{C_3} \bigg) (t_o+t^*-s)^{N/\lambda} (\tilde{\theta}-t^*)^{-N/\lambda}\\
&\geqslant \tilde{\eta} \bigg( \frac{u(x_o,t_o)}{C_3} \bigg) \bigg( \frac{\gamma_1 \rho^{\bar{p}}}{u(x_o,t_o)^{\bar{p}-2}} \bigg)^{N/\lambda} \tilde{\theta}^{-N/\lambda} \geqslant \gamma u(x_o,t_o)^{\bar{p}/\lambda} \bigg( \frac{\rho^{\bar{p}}}{\tilde{\theta}} \bigg)^{N/\lambda},
\end{aligned} \end{equation} with $\gamma= \gamma(\gamma_1,\tilde{\eta})$, for every $x$ in the set of positivity
\begin{equation*}
\begin{aligned}
\mathcal{P}_{t_o+\tilde{\theta}-s}(x_o)\supseteq \mathcal{P}_{t_o+t^*-s}(x_o) &= \prod_{i=1}^N\{|x_i-x_{o,i}|\leqslant\tilde{\eta} \rho^{\bar{p}/p_i} (u(x_o,t_o)/C_1)^{(p_i-\bar{p})/p_i} \}\\
&=x_o+ \mathcal{K}_{\tilde{\eta}\rho}(\tilde{\eta}u(x_o,t_o)/C_1),
\end{aligned}
\end{equation*}
with a constant $\tilde{\eta}$ depending only on the data $N, p_i$. Taking the infimum in the estimate \eqref{comparison} on the set $x_o+ \mathcal{K}_{\tilde{\eta}\rho}(\tilde{\eta}u(x_o,t_o)/C_1)$ concludes the proof.
\end{proof}
\begin{remark}
In Theorem \ref{WeakHarnack} the lower bound $u(x_o,t_o)>0$ is not required; moreover, $\tilde{\theta}>0$ can be chosen arbitrarily among those numbers that preserve the inclusion \eqref{domain}. When the equation is solved in ${\mathbb R}^{N+1}$, the proof furnishes inequality \eqref{WH} without the first term on the right.
\vskip0.1cm \noindent Actually, Theorems \ref{Harnack-Inequality} and \ref{WeakHarnack} are equivalent for small radii. We can easily show that Theorem \ref{WeakHarnack} implies Theorem \ref{Harnack-Inequality} by a simple choice of $\tilde{\theta}$. For instance, let us pick
\[ \tilde{\theta}= (2\gamma)^{\bar{p}-2}\rho^{\bar{p}}u(x_o,t_o)^{2-\bar{p}},
\] and suppose that $(x_o,t_o+\tilde{\theta}) + \mathcal{Q}_{C_3\rho} (u(x_o,t_o)/C_1)\subset \Omega_T$. Then inequality \eqref{WH} leads to
\[
u(x_o,t_o) \leqslant \gamma \bigg{\{} \frac{u(x_o,t_o)}{2\gamma}+ \bigg( \frac{2\gamma}{u(x_o,t_o)} \bigg)^{{N(\bar{p}-2)}/{\bar{p}}}\bigg[ \inf_{x_o+ \mathcal{K}_{\tilde{\eta} \rho}(\tilde{\eta} u(x_o,t_o)/C_1)} u(\, \cdot\, , \, t_o+ \bigg( \frac{u(x_o,t_o)}{2\gamma} \bigg)^{2-\bar{p}} \rho^{\bar{p}}) \bigg]^{{\lambda}/{\bar{p}}} \bigg{\}},
\]
whence
\[
u(x_o,t_o) \leqslant \tilde{C_3} \inf_{x_o+ \mathcal{K}_{\tilde{\rho}}(M)} u(\cdot, \, t_o+ \tilde{C_2} M^{2-\bar{p}} \tilde{\rho}^{\bar{p}}), \quad M= u(x_o,t_o)/\tilde{C}_1, \] for all $\tilde{\rho} \leqslant \tilde{\eta} \rho$ and with positive constants
\[\tilde{C_1}= C_1/\tilde{\eta}, \quad \tilde{C_2}=\tilde{\eta}^{-2}(2\gamma/\tilde{C}_1)^{\bar{p}-2}, \quad \tilde{C_3}= 2 \gamma.\] \end{remark}
\begin{thebibliography}{99}
\bibitem{Ant-Sh} S. Antontsev, S. Shmarev, {\it Evolution PDEs with nonstandard growth conditions. Existence, uniqueness, localization, blow-up}, Atlantis Studies in Differential Equations {\bf 4}, Atlantis Press, Paris, 2015.
\bibitem{AS} S. Antontsev, S. Shmarev, {\it Localization of solutions of anisotropic parabolic equations}, Nonlinear Anal. {\bf 71} (2009), no. 12, 725–737.
\bibitem{Appel} P. Appell, {\it Sur l'équation $\frac {\partial^{2} z}{\partial x^{2}}-\frac {\partial z}{\partial y}= 0 $ et la Théorie de la chaleur}, J. Math. Pures Appl. {\bf 8} (1892), 187–216.
\bibitem{Bear} H.S. Bear, {\it Liouville theorems for heat functions}, Comm. Partial Differential Equations {\bf 11} (1986), no. 14, 1605–1625.
\bibitem{CianiGuarnotta} S. Ciani, U. Guarnotta, V. Vespri, {\it On a particular scaling for the prototype anisotropic $p$-Laplacian}, Recent Advances in Mathematical Analysis, Trends in Mathematics Series, Springer Special Issue, 2022.
\bibitem{Ciani-Mosconi-Vespri} S. Ciani, S. Mosconi, V. Vespri, {\it Parabolic Harnack estimates for anisotropic slow diffusion}, Journal d'Analyse Math\'ematique, (2023), 1-32.
\bibitem{CSV} S. Ciani, I.I. Skrypnik, V. Vespri, {\it On the local behavior of local weak solutions to some singular anisotropic elliptic equations}, Adv. Nonlinear Anal. {\bf 12} (2023), no. 1, 237–265.
\bibitem{DB} E. DiBenedetto, {\it Degenerate Parabolic Equations}, Universitext, Springer-Verlag, New York, 1993.
\bibitem{DGV-Liouville} E. DiBenedetto, U. Gianazza, V. Vespri. {\it Liouville-type theorems for certain degenerate and singular parabolic equations}, C. R. Math. Acad. Sci. Paris {\bf 348} (2010), no. 15-16, 873–877.
\bibitem{DBGV-mono} E. DiBenedetto, U. Gianazza, V. Vespri, {\it Harnack's inequality for degenerate and singular parabolic equations}, Springer Monographs in Mathematics. Springer, New York, 2012.
\bibitem{DB-He} E. DiBenedetto, M.A. Herrero, {\it On the Cauchy problem and initial traces for a degenerate parabolic equation}, Trans. Amer. Math. Soc. {\bf 314} (1989), no. 1, 187–224.
\bibitem{Mosconi} F.G. D\"uzg\"un, S. Mosconi, V. Vespri, {\it Anisotropic Sobolev embeddings and the speed of propagation for parabolic equations}, J. Evol. Equ. {\bf 19} (2019), no. 3, 845–882.
\bibitem{Eidelman} S.D. Eidelman, {\it Estimates of solutions of parabolic systems and some of their applications}, Mat. Sbornik N.S. {\bf 33 (75)} (1953), 359–382 (in Russian).
\bibitem{Eidelman-Kamin-Tedeev} S.D. Eidelman, S. Kamin, A.F. Tedeev, {\it On stabilization of solutions of the Cauchy problem for linear degenerate parabolic equations}, Adv. Differential Equations {\bf 14} (2009), no. 7-8, 621–641.
\bibitem{Vazquez} F. Feo, J.L. Vázquez, B. Volzone, {\it Anisotropic $p$-Laplacian Evolution of Fast Diffusion Type}, Adv. Nonlinear Stud. {\bf 21} (2021), no. 3, 523–555.
\bibitem{Friedman} A. Friedman, {\it Liouville's theorem for parabolic equations of the second order with constant coefficients}, Proc. Amer. Math. Soc. {\bf 9} (1958), 272–277.
\bibitem{Glagoleva1} R.Y. Glagoleva, {\it Liouville theorems for the solution of a second-order linear parabolic equation with discontinuous coefficients}, Mat. Zametki {\bf 5} (1969), 599–606.
\bibitem{Eurica} E. Henriques, {\it Concerning the regularity of the anisotropic porous medium equation}, J. Math. Anal. Appl. {\bf 377} (2011), no. 2, 710–731.
\bibitem{Hirschman} I.I. Hirschman Jr., {\it A note on the heat equation}, Duke Math. J. {\bf 19} (1952), 487–492.
\bibitem{Kogoj} A. Kogoj, E. Lanconelli, {\it Liouville theorems for a class of linear second-order operators with nonnegative characteristic form}, Bound. Value Probl. 2007, Paper No. 48232, 16 pp.
\bibitem{Landis} E.M. Landis, {\it Second order equations of elliptic and parabolic type}, Translations of Mathematical Monographs {\bf 171}, American Mathematical Society, Providence, RI, 1998.
\bibitem{Liao} N. Liao, {\it Regularity of weak supersolutions to elliptic and parabolic equations: lower semicontinuity and pointwise behavior}, J. Math. Pures Appl. (9) {\bf 147} (2021), 179–204.
\bibitem{Liao2} N. Liao, {\it Hölder regularity for porous medium systems}, Calc. Var. Partial Differential Equations {\bf 60} (2021), 1–28.
\bibitem{Mar-survey} P. Marcellini, {\it Regularity under general and $p,q$-growth conditions}, Discrete Contin. Dyn. Syst. Ser. S {\bf 13} (2020), no. 7, 2009–2031.
\bibitem{MingRadu} G. Mingione, V. R\v{a}dulescu, {\it Recent developments in problems with nonstandard growth and nonuniform ellipticity}, J. Math. Anal. Appl. {\bf 501} (2021), no. 1, Paper No. 125197, 41 pp.
\bibitem{Sunra} S. Mosconi, {\it Liouville theorems for ancient caloric functions via optimal growth conditions}, Proc. Amer. Math. Soc. {\bf 149} (2021), no. 2, 897–906.
\bibitem{Moser} J. Moser, {\it A Harnack inequality for parabolic differential equations}, Comm. Pure Appl. Math. {\bf 17} (1964), 101–134.
\bibitem{QS-libro} P. Quittner, P. Souplet, {\it Superlinear parabolic problems. Blow-up, global existence and steady states}, Birkh\"auser Advanced Texts: Basler Lehrb\"ucher, Birkh\"auser/Springer, Cham, 2019.
\bibitem{Ruzicka} M. Ruzicka, {\it Electrorheological fluids: modeling and mathematical theory}, Lecture Notes in Mathematics {\bf 1748}, Springer-Verlag, Berlin, 2000.
\bibitem{VenTuan} L. Ven'-Tuan, {\it Embedding theorems for spaces of functions whose partial derivatives have varying degrees of summability}, Vestnik Leningrad. Gos. Univ. {\bf 16} (1961), no. 7, 23–27 (in Russian).
\bibitem{Widder} D.V. Widder, {\it The role of the Appell transformation in the theory of heat conduction}, Trans. Amer. Math. Soc. {\bf 109} (1963), 121–134.
\bibitem{WidderBook} D.V. Widder, {\it The heat equation}, Pure and Applied Mathematics {\bf 67}, Academic Press, New York-London, 1975.
\end{thebibliography}
\end{document}
\begin{document}
\author{Peter Jonsson\\Department of Computer and Information Science\\Linköping University \and Marco Kuhlmann\\Department of Computer and Information Science\\Linköping University}
\title{Maximum Pagenumber-$k$ Subgraph is NP-Complete}
\begin{abstract}
Given a graph~$G$ with a total order defined on its vertices, the
\PROBLEM{Maximum Pagenumber-$k$ Subgraph Problem} asks for a maximum
subgraph $G'$ of $G$ such that $G'$ can be embedded into a $k$"-book
when the vertices are placed on the spine according to the specified
total order.
We show that this problem is NP-complete for $k \geq 2$.
\end{abstract}
\section{Introduction}
A \emph{$k$-book} is a collection of $k$ half-planes, all of which have
the same line as their boundary.
The half"-planes are called the \emph{pages} of the book and the
common line is called the \emph{spine}.
A \emph{book embedding} is an embedding of a graph into a $k$-book
such that the vertices are placed on the spine, every edge is drawn on
a single page, and no two edges cross each other.
The \emph{pagenumber} of a graph~$G$ is the smallest number of pages
for which~$G$ has a book embedding.
Computing the pagenumber of a graph is an NP-complete problem
\cite{DBLP:conf/stacs/Unger88}, and it is even NP-complete to decide
whether a graph has pagenumber~$k$, for fixed $k \geq 2$.
Verifying that a graph has pagenumber~$1$, however, can be done in
polynomial time: a graph~$G$ has pagenumber~$1$ if and only if it is
outerplanar\footnote{An undirected graph is \emph{outerplanar} if and
only if it has a crossing"-free embedding in the plane such that all
vertices are on the same face.} \citep{Chung:etal:sijadm87}, and
outerplanarity can be checked in linear time
\citep{DBLP:journals/ipl/Mitchell79}.
In certain applications, the order of the vertices along the spine is
not arbitrary but specified in the input.
In this case we can still check whether a graph has a $1$-book
embedding that respects~${\prec}$ in linear time.
Assume we have a graph $G = (\SET{v_1, \dots, v_m}, E)$ and spine
order $v_1 \prec \dots \prec v_m$. Extend~$E$ with the edges
$\SETC{\SET{v_i, v_{i+1}}}{1 \leq i \leq m-1} \cup \SET{\SET{v_m,
v_1}}$
and note that $G = (V, E)$ can be embedded (in a way that respects
$\prec$) into a $1$-book if and only if the extended graph is
outerplanar.
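The crossing condition underlying this construction can also be checked directly: with the spine order fixed, a $1$-book embedding respecting the order exists if and only if no two edges interleave. A minimal sketch (vertices are assumed to be labelled by their spine positions):

```python
from itertools import combinations

def crosses(e, f):
    # Edges {a, b} and {c, d} on one page cross iff their endpoints interleave.
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def one_page_embeddable(edges):
    # A fixed-order 1-book embedding exists iff no pair of edges crosses.
    return all(not crosses(e, f) for e, f in combinations(edges, 2))
```

Nested or endpoint-sharing edges are allowed; only interleaving pairs are forbidden, so `one_page_embeddable([(1, 2), (2, 3), (1, 4)])` holds while `[(1, 3), (2, 4)]` fails.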
It can also be checked in linear time whether a graph has
pagenumber~$2$ \citep{Haslinger:Stadler:bmb99}.
We are interested in the complexity of the following problem:
\begin{DECISIONPROBLEM}{(Fixed-Order) Maximum Pagenumber-$k$ Subgraph}
\INSTANCE An undirected graph $G = (V, E)$, a total
ordering~${\prec}$ on~$V$, and an integer $m \geq 0$.
\QUESTION Is there a subset $E' \subseteq E$ such that $|E'| \geq m$
and $G' = (V, E')$ can be embedded into a $k$-book such that the
vertices in~$V$ are placed on the spine according to the total
order~${\prec}$?
\end{DECISIONPROBLEM}
\noindent For $k = 1$, this problem can be solved in time $O(|V|^3)$
using dynamic programming \citep{DBLP:journals/corr/abs-1504-04993}.
Here we show that, for $k \geq 2$, the problem is NP"-complete, and
remains so even if we restrict solutions to acyclic subgraphs.
That is, the following problem is NP"-complete for $k \geq 2$:
\begin{DECISIONPROBLEM}{Maximum Acyclic Pagenumber-$k$ Subgraph}
\INSTANCE A directed graph $G = (V, A)$, a total ordering~${\prec}$
on~$V$, and an integer $m \geq 0$.
\QUESTION Is there a subset $A' \subseteq A$ such that
\begin{enumerate}
\item $|A'| \geq m$,
\item $(V, A')$ is acyclic, and
\item $(V, A')$ can be embedded into a $k$-book such that the
vertices in~$V$ are placed on the spine according to the total
order~${\prec}$?
\end{enumerate}
\end{DECISIONPROBLEM}
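For very small instances, the problem statement translates directly into a brute-force search over edge subsets and page assignments. The following sketch (exponential time, for illustration only; vertices are assumed to be labelled by their spine positions) decides the undirected problem:

```python
from itertools import combinations, product

def crosses(e, f):
    # Two spine edges conflict iff their endpoints interleave.
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def embeddable(edges, k):
    # Try every assignment of edges to the k pages; an assignment works
    # iff no two crossing edges share a page.
    return any(all(not (pages[i] == pages[j] and crosses(edges[i], edges[j]))
                   for i, j in combinations(range(len(edges)), 2))
               for pages in product(range(k), repeat=len(edges)))

def has_pagenumber_k_subgraph(edges, k, m):
    # Decision version: is there an edge subset of size >= m that embeds?
    return any(embeddable(list(sub), k)
               for r in range(len(edges), m - 1, -1)
               for sub in combinations(edges, r))
```

The NP-completeness result below says that, for $k \geq 2$, no essentially better algorithm is expected unless P${}={}$NP.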
\section{Circle graphs}
The following is largely based on \citet{DBLP:conf/stacs/Unger92}.
A \emph{circle graph} is the intersection graph of a set of chords of
a circle.
That is, its vertices can be put in one"-to"-one correspondence with a
set of chords in such a way that two vertices are adjacent if and only
if the corresponding chords cross each other.
An example is shown in Figure~\ref{fig:CircleGraphs}.
\begin{figure}
  \caption{A circle graph (left) together with a corresponding chord
    drawing (middle) and overlap model (right). The endpoints of a
    chord $c_i$ are named $c_{iL}$ and $c_{iR}$.}
\label{fig:CircleGraphs}
\end{figure}
Circle graphs are often represented in a different way.
We assume henceforth, without loss of generality, that no two chords
have a common endpoint on the circle.
To obtain a so"-called \emph{overlap model} for a circle graph, we
start with a chord drawing and do the following:
\begin{enumerate}
\item we break up the circle and straighten it out into a line and
\item we turn the chords into arcs above the line that represents the
circle.
\end{enumerate}
It is easy to see that the standard representation and the overlap
representation are equivalent.
For simplicity, we will still call the arcs in the overlap
representation chords.
Given a chord~$x$ in an overlap representation, we will denote its
left endpoint with $\LEP{x}$ and its right endpoint with $\REP{x}$. We
assume without loss of generality that the endpoints are represented
by positive integers.
Formally, we can now define circle graphs in the overlap
representation as follows.
An undirected graph $G = (V, E)$ with $V = \SET{v_1, \dots, v_n}$ is a
circle graph if and only if there exists a set of chords
\begin{displaymath}
    C = \SETC{(\LEP{c_i}, \REP{c_i})}{\text{$1 \leq i \leq n$ and $\LEP{c_i} < \REP{c_i}$}}
\end{displaymath}
such that
\begin{displaymath}
\SET{v_i, v_j} \in E
\quad\text{if and only if}\quad
c_i \otimes c_j
\end{displaymath}
where the Boolean predicate ${\otimes}$ denotes the intersection of
chords, i.e.\ $c \otimes d$ if $\LEP{c} < \LEP{d} < \REP{c} < \REP{d}$
or $\LEP{d} < \LEP{c} < \REP{d} < \REP{c}$.
Given a circle graph $G = (V, E)$ with an overlap representation~$C$,
we let $C(v)$ (where $v \in V$) denote the chord corresponding to~$v$.
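This definition is easy to operationalize. A small sketch that builds the edge set of a circle graph from an overlap model, with chords given as pairs of integer endpoints:

```python
from itertools import combinations

def chords_cross(c, d):
    # The crossing predicate: the two intervals overlap partially.
    (cl, cr), (dl, dr) = c, d
    return cl < dl < cr < dr or dl < cl < dr < cr

def circle_graph_edges(chords):
    # Vertices are chord indices; i and j are adjacent iff the chords cross.
    return {(i, j) for i, j in combinations(range(len(chords)), 2)
            if chords_cross(chords[i], chords[j])}
```

Note that nested or disjoint chords do not cross, matching the definition of ${\otimes}$ above.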
\section{Proof}
Given a graph $G = (V, E)$ and a subset $V' \subseteq V$, we let
$G | V'$ denote the subgraph of~$G$ induced by~$V'$, i.e.\ $G | V'$ has
vertex set~$V'$ and its edge set contains exactly those edges in~$E$
that have both their endpoints in~$V'$.
Our starting point is the following problem.
\begin{DECISIONPROBLEM}{$k$-Colourable Induced Subgraph Problem for
Circle Graphs ($k$-CIG)}
\INSTANCE A circle graph $G = (V, E)$ and an integer $m \geq 0$.
\QUESTION Is there a subset $V' \subseteq V$ such that $|V'| \geq m$
and the graph $G|V'$ is $k$-colourable?
\end{DECISIONPROBLEM}
\citet{DBLP:journals/tcad/CongL91} have shown that $k$-CIG is
NP-complete when $k \geq 2$.
In the cases when $k = 2$ and $k \geq 4$, this result is based on
earlier results by \citet{DBLP:journals/tcad/SarrafzadehL89} and
\citet{DBLP:conf/stacs/Unger88}, respectively.
We now show that \PROBLEM{Maximum Pagenumber-$k$ Subgraph} is
NP-complete by a reduction from $k$-CIG.
\begin{proposition}
\PROBLEM{Maximum Pagenumber-$k$ Subgraph} is NP-complete.
\end{proposition}
\begin{proof}
We first show that \PROBLEM{Maximum Pagenumber-$k$ Subgraph} is in
NP.
Given an instance $((V, E), {\prec},m)$ of this problem,
  non"-deterministically guess a subset $E' \subseteq E$ and a
  partitioning $E'_1, \dots, E'_k$ of~$E'$.
The instance has a solution if and only if
$(V, E'_1), \dots, (V, E'_k)$ have pagenumber~$1$ under the spine
order~${\prec}$.
This property can be checked in polynomial time as was pointed out
earlier.
We next prove NP-hardness via a polynomial"-time reduction from
$k$-CIG.
Arbitrarily choose a circle graph $G = (V, E)$ and an integer
$m \geq 0$.
Construct (in polynomial time) an overlap model~$C$ of $G$ using the
algorithm of \citet{DBLP:journals/jal/Spinrad94}.
This overlap model defines a graph $H = (W, F)$ as follows:
\begin{align*}
W &= \SETC{\LEP{v}, \REP{v}}{\text{$v \in V$ and $C(v) = (\LEP{v}, \REP{v})$}}\\
F &= \SETC{\SET{\LEP{v}, \REP{v}}}{\text{$v \in V$ and $C(v) = (\LEP{v}, \REP{v})$}}
\end{align*}
Finally, let~${\prec}$ be the natural linear ordering on~$W$.
We claim that $(H, \prec, m)$ has a solution if and only if $(G, m)$
has a solution.
Assume that $(H, {\prec}, m)$ has a solution set of edges~$X$.
Assume that $X = X_1 \cup \dots \cup X_k$ where the edges in~$X_1$
are assigned to page~1, the edges in~$X_2$ to page~2, and so on.
Construct a solution set of vertices $Y$ for $(G, m)$ as follows:
$Y = \SETC{v \in V}{\SET{\LEP{v},\REP{v}} \in X}$.
Obviously, $|Y| \geq m$.
We show that $G|Y$ is $k$-colourable.
  Colour the edges in~$X_i$, $1 \leq i \leq k$, with colour~$i$.
Each edge corresponds to a vertex in~$Y$; let this vertex inherit
its colour.
  Consider an arbitrary edge $\SET{v, w}$ appearing in $G | Y$.
  Since $\SET{v, w} \in E$, the chords $C(v) = (\LEP{v}, \REP{v})$ and
$C(w) = (\LEP{w}, \REP{w})$ intersect.
Thus, the edges $\SET{\LEP{v}, \REP{v}}$ and
$\SET{\LEP{w}, \REP{w}}$ in~$F$ cannot be placed on the same book
page which implies that~$v$ and~$w$ are assigned different colours.
Assume that $(G, m)$ has a solution set of vertices~$V'$.
Let $f\mathpunct{:}\ V' \to \SET{1,\dots,k}$ be a $k$-colouring of
$G|V'$.
Construct a solution set of edges $T$ for $(H, {\prec}, m)$ as
follows:
\begin{displaymath}
T = \SETC{\SET{\LEP{v}, \REP{v}}}{\text{$v \in V'$ and
$(\LEP{v}, \REP{v}) = C(v)$}}
\end{displaymath}
Obviously, $|T| \geq m$.
We show that $(W, T)$ can be embedded into a $k$-book with spine
order~${\prec}$.
Pick an arbitrary edge $e = \SET{\LEP{v}, \REP{v}} \in T$ and put
$e$ on page $f(v)$.
Assume now that edges $e = \SET{\LEP{v}, \REP{v}}$ and
$e' = \SET{\LEP{v'}, \REP{v'}}$ in~$T$ cross each other, i.e.\ that
$(\LEP{v}, \REP{v}) \otimes (\LEP{v'}, \REP{v'})$ and $e, e'$ appear
on the same page.
This implies that $f(v) = f(v')$ and that there is an edge
between~$v$ and~$v'$ in~$G$. Since $v, v' \in V'$, we see that this
edge appears in $G|V'$. This contradicts the fact that~$f$ is a
$k$-colouring of $G|V'$.
\end{proof}
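The correspondence established in the proof can be sanity-checked by brute force on small instances: the maximum number of vertices inducing a $k$-colourable subgraph of the circle graph equals the maximum number of chord edges embeddable into $k$ pages under the natural spine order. A sketch (exponential time, assuming chords with pairwise distinct integer endpoints):

```python
from itertools import combinations, product

def cross(c, d):
    (a, b), (x, y) = sorted(c), sorted(d)
    return a < x < b < y or x < a < y < b

def max_colourable_induced(chords, k):
    # k-CIG side: largest vertex set inducing a k-colourable circle graph.
    def colourable(vs):
        return any(all(not (col[i] == col[j]
                            and cross(chords[vs[i]], chords[vs[j]]))
                       for i, j in combinations(range(len(vs)), 2))
                   for col in product(range(k), repeat=len(vs)))
    return max(r for r in range(len(chords) + 1)
               for vs in combinations(range(len(chords)), r)
               if colourable(vs))

def max_embeddable_edges(chords, k):
    # Book-embedding side: largest set of spine edges fitting into k pages.
    def fits(es):
        return any(all(not (page[i] == page[j] and cross(es[i], es[j]))
                       for i, j in combinations(range(len(es)), 2))
                   for page in product(range(k), repeat=len(es)))
    return max(r for r in range(len(chords) + 1)
               for es in combinations(chords, r)
               if fits(es))
```

Both quantities coincide because a page assignment of the edges is exactly a colouring of the crossing-conflict graph, which is the circle graph of the chords.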
\begin{corollary}
  \PROBLEM{Maximum Acyclic Pagenumber-$k$ Subgraph} is NP-complete when
  $k \geq 2$.
\end{corollary}
\begin{proof}
  We first show that \PROBLEM{Maximum Acyclic Pagenumber-$k$ Subgraph}
  is in~NP.
  Given an instance $((V, A), {\prec}, m)$ of this problem,
  non"-deterministically guess a subset $A' \subseteq A$ and a
  partitioning $A'_1, \dots, A'_k$ of~$A'$, and check that $(V, A')$
  is acyclic.
  Let $E'_i$ denote the corresponding set of undirected edges, i.e.\
  $E'_i = \SETC{\SET{v,w}}{(v,w) \in A'_i}$, and note that the
  instance has a solution if and only if $(V, E'_1), \dots, (V, E'_k)$
  have pagenumber~1 under the spine order~${\prec}$.
  We continue by proving NP-hardness via a polynomial"-time reduction
  from \PROBLEM{Maximum Pagenumber-$k$ Subgraph}.
  Let $((V, E), {\prec}, m)$ be an arbitrary instance of that problem
  and assume without loss of generality that $V = \SET{1, \dots, n}$.
Construct a directed graph $(V, A)$ as follows: the arc $(i, j)$ is
in~$A$ if and only if $i < j$ and the edge $\SET{i, j}$ is in~$E$.
Note that $(V, A)$ (and consequently every subgraph) is acyclic.
It is now easy to verify that $((V, A), {\prec}, m)$ has a solution
if and only if $((V, E), {\prec}, m)$ has a solution.
\end{proof}
\section*{Acknowledgments}
We thank the participants of the Dagstuhl Seminar 15122 `Formal Models
of Graph Transformation for Natural Language Processing' for
interesting discussions on the problem studied in this draft.
\end{document}
\begin{document}
\title{Processing multi-photon state through operation on single photon: methods and applications}
\author{Qing Lin}
\email{[email protected]}
\affiliation{College of Information Science and Engineering, Huaqiao University (Xiamen),
Xiamen 361021, China}
\author{Bing He}
\email{[email protected] }
\affiliation{Institute for Quantum Information Science, University of Calgary, Alberta T2N
1N4, Canada}
\author{J\'{a}nos A. Bergou}
\affiliation{Department of Physics and Astronomy, Hunter College of
the City University of New York, 695 Park Avenue, New York, New York
10065, USA}
\author{Yuhang Ren}
\affiliation{Department of Physics and Astronomy, Hunter College of
the City University of New York, 695 Park Avenue, New York, New York
10065, USA}
\pacs{03.67.Lx, 42.50.Ex}
\begin{abstract}
Multi-photon states are widely applied in quantum information technology. With the methods presented in this paper, the structure of a multi-photon state in the form of a product of single-photon qubits can be mapped to a single-photon qudit, which can also be in a separable product with
the other photons. This makes it possible to manipulate such multi-photon states by processing single-photon states. The optical realization of unknown qubit discrimination [B. He, J. A. Bergou, and Y.-H. Ren, Phys. Rev. A 76, 032301 (2007)] is simplified with these transformation methods. Another application is the construction of quantum logic gates, for which the inverse transformations back to the input state spaces are also necessary. We show, in particular, that the modified setups implementing the transformations can realize deterministic multi-control gates (including the Toffoli gate) operating directly on products of single-photon qubits.
\end{abstract}
\maketitle
\section{Introduction}
Photonic states are suitable for carrying quantum information because of their flexibility and robustness against decoherence
effects. The flying qubits in various quantum computing schemes are encoded in photonic states \cite{P}, and many tasks in quantum cryptography
and quantum communications are performed with the aid of photonic states as well \cite{cryp, repeater}. In most protocols, an optical system
must process a multi-photon input state, which is often a tensor product of the signal plus the ancilla. In the scheme of unknown quantum state discrimination \cite{Bergou}, for example, the inputs should be prepared as the tensor product of the program and
the data state, $|\psi_1\rangle|\psi_2\rangle|\psi_3\rangle$, where the qubit $|\psi_i\rangle$ ($i=1,2$ for the program and $i=3$ for the data)
could be an unknown linear combination of the polarization modes $|H\rangle$ and $|V\rangle$ ($H$ represents
horizontal and $V$ vertical polarization) of a single photon. When a multi-photon input state is processed by a linear optical device together with post-selection through measurement on part of the output, there is an upper bound on the success probability of obtaining the desired target state \cite{efficiency}. Non-deterministic operations are inevitable in such systems, so it demands more resources to realize a processing task deterministically.
The situation is different when we deal with single photon states. Quantum information can be encoded in the spatial degrees of freedom
(different spatial paths) and the internal degrees of freedom (polarization, etc.) of a single photon.
A significant advantage in using single photon states of multiple spatial modes is that arbitrary unitary operation on such states can be deterministically realized with a linear optical device \cite{Reck}. This result can be generalized to realize arbitrary positive-operator-valued-measurement (POVM) on multiple path-mode single photon states \cite{h-b-w}. For instance, it will be possible to
perform the above-mentioned unknown qubit discrimination if one could realize a map
\begin{align}
|\psi_1\rangle|\psi_2\rangle|\psi_3\rangle&=c_1|HHH\rangle+c_2|HHV\rangle+\cdots+c_8|VVV\rangle\nonumber\\
&\rightarrow c_1|1\rangle+c_2|2\rangle+\cdots+c_8|8\rangle
\label{1}
\end{align}
to a corresponding multi-mode single photon inheriting the coefficients $c_i$ of the input multi-photon state ($|1\rangle$, $\cdots$, $|8\rangle$ are the spatial modes here). In particular, the unambiguous discrimination of any pair of multi-mode single photon states and of even more such states can be simply realized by linear optics \cite{usd}.
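The coefficient bookkeeping in Eq. (\ref{1}) is simply a tensor (Kronecker) product of the three qubit amplitude vectors. A small numerical illustration (the qubit amplitudes below are arbitrary choices, not fixed by the text):

```python
# Three arbitrary (assumed) polarization qubits |psi_i> = a_i|H> + b_i|V>
psi1 = [0.6, 0.8]
psi2 = [1 / 2 ** 0.5, 1j / 2 ** 0.5]
psi3 = [0.8, -0.6]

def kron(u, v):
    # Tensor product of amplitude vectors; the first factor varies slowest.
    return [a * b for a in u for b in v]

# Coefficients c_1, ..., c_8 of |psi1>|psi2>|psi3> in the basis
# |HHH>, |HHV>, ..., |VVV>; the map of Eq. (1) carries exactly these
# amplitudes onto the eight spatial modes |1>, ..., |8> of one photon.
c = kron(kron(psi1, psi2), psi3)

assert len(c) == 8
assert abs(sum(abs(x) ** 2 for x in c) - 1) < 1e-12   # normalization preserved
```

Since the map only relocates the amplitudes, the normalization and all interference properties of the coefficients survive on the single-photon side.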
In this paper we discuss the methods to realize such transformations and some of their useful applications. It has been proposed that the transformation in Eq. (\ref{1}) could be realized by teleportation \cite{parity}. However, the method can be simplified much further if we improve
on the necessary circuits. Here we also present a purely circuit-based approach without teleportation to realize the transformations in the form
\begin{align}
&~~~~c_1|HHH\rangle+c_2|HHV\rangle+\cdots+c_8|VVV\rangle\nonumber\\
&\rightarrow |+\rangle_1|+\rangle_2(c_1|1\rangle+c_2|2\rangle+\cdots+c_8|8\rangle)_3,
\label{20}
\end{align}
in which the third photon, inheriting the coefficients $c_i$ of the input multi-photon state, will be in a tensor product with the two other photons in the state $|+\rangle=\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$. This kind of transformation can be realized without ancilla and with an amount of circuit resources scaling linearly with the number of photons in the input state.
An important application of these transformations is the construction of quantum networks or quantum logic gates.
As there is no direct interaction between photonic states, the nonlinear media of strong intensity seem necessary to realize photonic logic
gates. However, the best cross-Kerr nonlinearity created by electromagnetically induced transparency (EIT) techniques \cite{EIT} thus far is still
in the range of weak nonlinearity. The development of photonic logic gates has therefore mainly followed the line of the Knill-Laflamme-Milburn
(KLM) protocol \cite{KLM}, which applies a linear optical circuit, an entangled ancilla state, and photon number resolving detection to realize a two-qubit gate probabilistically. The KLM protocol and its improvements within linear optics are all probabilistic, given the realistically available resources \cite{P}. Since the input states for many key logic gates or quantum networks can be simply encoded as the initial states in Eqs. (\ref{1}) and (\ref{20}), it is reasonable to first convert them to the corresponding single photon states and then use a linear optical circuit to deterministically implement the unitary operations for the gates. The remaining work is to convert the processed states in multiple path modes back to polarization space. The transformations to single photon states and their inverse processes are realizable using weak nonlinearity, linear optics, and photon number resolving detection, which can be performed by various methods. This idea of realizing a quantum network differs from the conventional method of decomposing it into a product of two-qubit and one-qubit gates \cite{Nielsen}.
Although the realization of all kinds of deterministic two-qubit, three-qubit, etc., gates is feasible in this approach, it is impractical to implement
a quantum network involving a large number of input qubits this way, as the number of spatial modes carried by the transformed single photon is exponentially large ($2^n$ for $n$ input qubits). For a special type of multi-qubit gate, however, the necessary resources can be much fewer than those required by the conventional decomposition into two-qubit gates, if we slightly modify an element gate used for the previously mentioned transformations to single photon states. This type of multi-qubit gate includes the Toffoli gate \cite{Toffoli}, a gate that is key to many quantum information processing tasks. We will discuss the construction of these gates in detail.
The rest of the paper is organized as follows. In Sec. \ref{sec2}, we discuss the designs of the gates related to the teleportation of multi-photon states to the corresponding single photon states. The improvement on the teleportation strategy in \cite{parity} is given in Sec. \ref{tele}.
The purely circuit-based approach to realizing the map in Eq. (\ref{20}) is presented in Sec. \ref{sec3}, where the steps to transform input two-photon and triple-photon states are illustrated in detail. The application of the transformation methods to constructing quantum networks is discussed in Sec. \ref{sec4}; in this section we discuss the realization of the general two-qubit gate, the general multi-qubit gate, and especially multi-control gates including the Toffoli gate. Finally, a few remarks conclude the paper in the last section.
\section{Teleportation approach}
\label{sec2}
First, we discuss the approach of transforming multi-photon states to the corresponding single photon states through teleportation.
The designs of the necessary gates for the purpose are given in detail as follows. The target is to map the state of many photons to a genuine
single photon state as in Eq. (\ref{1}).
\subsection{Parity gate}
In Ref. \cite{parity} some of us proposed an optical realization of the unknown quantum state discrimination scheme in \cite{Bergou}. A crucial element there is a parity gate that sends the bi-photon components of different parity to different paths. However, that gate actually selects out either the even parity component (the linear combinations of $|HH\rangle$ and $|VV\rangle$) or the odd parity component (the linear combinations of $|HV\rangle$ and $|VH\rangle$) through homodyne detection. We here present a correct design for such a deterministic parity gate. On the other hand, the parity gate is an important tool in quantum information processing. The wide usage of the gate with abstract quantum systems is discussed in \cite{Io}, and various optical realizations can be found in, for example, \cite{P-J-C, B-K, B-R, Nemoto, Munro, He}. We enrich the methods to realize the parity gate with
a special feature that the different parity components will be obtained simultaneously on different paths.
\begin{figure}
\caption{(Color online) Parity gate. Here we use two qubus beams in coherent state $|\alpha\rangle$ to interact with the indicated photonic modes. Each coupling gives rise to a phase shift $\theta$. Two phase shifters $-\theta$ are applied to the two qubus beams, respectively. The setup of the QND module is shown in the dash-dotted line, where one of the beams $|\gamma\rangle$ is coupled to the first qubus beam after a 50:50 BS, between the two phase shifters $-\theta$ and the QND module. One output beam in the QND module is measured by a photon number non-resolving detector. The phase shift $\pi$ on $|V\rangle_1$ and the switch of path modes 2 and 3 are conditional on the classically feed-forwarded measurement results.}
\label{parity-p}
\end{figure}
The gate deals with an input state
\begin{align}
\left\vert \psi\right\rangle _{in}=a\left\vert HH\right\rangle
_{12}+b\left\vert HV\right\rangle _{12}+c\left\vert VH\right\rangle
_{12}+d\left\vert VV\right\rangle _{12}
\label{in}
\end{align}
of two photons running on track 1 and 2.
We apply a 50:50 beam splitter (BS) to divide the second photon
into two spatial modes 2 and 3:
\begin{align}
\left\vert \psi\right\rangle _{in}&\rightarrow \frac{1}{\sqrt{2}}a\left\vert H\right\rangle _{1}\left( \left\vert
H\right\rangle _{2}+\left\vert H\right\rangle _{3}\right) +\frac{1}{\sqrt{2}
}b\left\vert H\right\rangle _{1}\left( \left\vert V\right\rangle
_{2}+\left\vert V\right\rangle _{3}\right) \nonumber\\
& +\frac{1}{\sqrt{2}}c\left\vert V\right\rangle _{1}\left( \left\vert
H\right\rangle _{2}+\left\vert H\right\rangle _{3}\right) +\frac{1}{\sqrt{2}
}d\left\vert V\right\rangle _{1}\left( \left\vert V\right\rangle
_{2}+\left\vert V\right\rangle _{3}\right).
\end{align}
Two qubus or communication beams $\left\vert \alpha\right\rangle \left\vert
\alpha\right\rangle $ are then introduced, and are coupled to the corresponding photonic modes through the cross-phase modulation (XPM) processes described by the Hamiltonian ${\cal H}=-\hbar\chi \hat{n}_i\hat{n}_j$, where $\chi$ is the nonlinear intensity and $\hat{n}_i$ ($\hat{n}_j$)
the number operators of the coupled modes.
The interaction pattern in Fig. \ref{parity-p} is summarized in Tab. \ref{tb1}.
\begin{table}
\begin{tabular}{|c|c|c|c|c|}\hline
& $|H\rangle_2$ & $|V\rangle_2$ & $|H\rangle_3$ & $|V\rangle_3$\\ \hline
$|H\rangle_1$ & $\bigcirc$ & & & $\bigcirc$ \\ \hline
$|V\rangle_1$ & &$\bigtriangleup$ & $\bigtriangleup$ & \\ \hline
\end{tabular}
\caption{$\bigcirc$ and $\bigtriangleup$ represent the coupling to the first and the second qubus beam, respectively. The XPM pattern in Fig. 1 is read as the first beam being coupled to $|H\rangle_1$ of the first photon and $\{|H\rangle_2, |V\rangle_3\}$ of the second photon, while the second beam to $|V\rangle_1$ of the first photon and $\{|V\rangle_2, |H\rangle_3\}$ of the second. }
\label{tb1}
\end{table}
Suppose the XPM phase shifts induced by the couplings are all $\theta$. A qubus beam will undergo a conditional XPM phase shift $|\alpha\rangle\rightarrow |\alpha e^{i\theta}\rangle$ if coupled to one photonic mode, and $|\alpha\rangle\rightarrow |\alpha e^{2i\theta}\rangle$ if coupled to two modes. As a result, we will realize the following transformation of the total system:
\begin{align}
&~~~~~|\psi\rangle_{in}|\alpha\rangle|\alpha\rangle \nonumber\\
&\rightarrow \frac{1}{\sqrt{2}}\left\vert H\right\rangle _{1}\left( a\left\vert
H\right\rangle _{2}+b\left\vert V\right\rangle _{3}\right) \left\vert \alpha
e^{2i\theta}\right\rangle \left\vert \alpha \right\rangle \nonumber\\
&+\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{1}\left( c\left\vert
H\right\rangle _{3}+d\left\vert V\right\rangle _{2}\right) \left\vert \alpha
\right\rangle \left\vert \alpha e^{2i\theta}\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert H\right\rangle _{1}\left( a\left\vert
H\right\rangle _{3}+b\left\vert V\right\rangle _{2}\right) \left\vert \alpha e^{i\theta}\right\rangle \left\vert \alpha e^{i\theta}\right\rangle
\nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{1}\left( c\left\vert
H\right\rangle _{2}+d\left\vert V\right\rangle _{3}\right) \left\vert \alpha
e^{i\theta}\right\rangle \left\vert \alpha e^{i\theta}\right\rangle .
\end{align}
After that, a phase shifter of $-\theta$ is applied to each of the two qubus beams, and then one more 50:50 BS implements
the transformation $\left\vert
\alpha_{1}\right\rangle \left\vert \alpha_{2}\right\rangle \rightarrow
\left\vert \frac{\alpha_{1}-\alpha_{2}}{\sqrt{2}}\right\rangle \left\vert
\frac{\alpha_{1}+\alpha_{2}}{\sqrt{2}}\right\rangle $ of the coherent-state components.
The state of the total system will therefore be transformed to
\begin{align}
&~~~\frac{1}{\sqrt{2}}\left\vert H\right\rangle _{1}\left( a\left\vert
H\right\rangle _{3}+b\left\vert V\right\rangle _{2}\right) \left\vert 0\right\rangle \left\vert \sqrt{2}\alpha \right\rangle \nonumber\\
&+\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{1}\left( c\left\vert
H\right\rangle _{2}+d\left\vert V\right\rangle _{3}\right) \left\vert 0\right\rangle \left\vert \sqrt{2}\alpha \right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert H\right\rangle _{1}\left( a\left\vert
H\right\rangle _{2}+b\left\vert V\right\rangle _{3}\right) \left\vert -\beta \right\rangle \left\vert \sqrt{2}\alpha \cos\theta\right\rangle
\nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{1}\left( c\left\vert
H\right\rangle _{3}+d\left\vert V\right\rangle _{2}\right) \left\vert \beta\right\rangle \left\vert \sqrt{2}\alpha \cos\theta \right\rangle,
\label{output}
\end{align}
where $|\beta\rangle=|i\sqrt{2}\alpha\sin
\theta\rangle$.
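The coherent-amplitude bookkeeping behind Eq. (\ref{output}) is easy to verify numerically. A minimal sketch with assumed parameter values, following the branch whose first qubus beam was coupled twice (phase $2\theta$) through the $-\theta$ shifters and the final 50:50 BS:

```python
import cmath, math

# Illustrative values (assumptions): qubus amplitude and XPM phase shift
alpha, theta = 4.0, 0.05

# After the two -theta shifters, this branch carries the qubus amplitudes
# alpha e^{i theta} and alpha e^{-i theta}
a1 = alpha * cmath.exp(1j * theta)
a2 = alpha * cmath.exp(-1j * theta)

# 50:50 BS on coherent amplitudes: (a1 - a2)/sqrt(2) and (a1 + a2)/sqrt(2)
diff = (a1 - a2) / math.sqrt(2)
total = (a1 + a2) / math.sqrt(2)

assert abs(abs(diff) - math.sqrt(2) * alpha * math.sin(theta)) < 1e-12   # |beta|
assert abs(abs(total) - math.sqrt(2) * alpha * math.cos(theta)) < 1e-12
```

The difference port carries the small amplitude $\pm\beta$ while the sum port carries $\sqrt{2}\alpha\cos\theta$, exactly the components appearing in Eq. (\ref{output}).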
The first coherent-state component in Eq. (\ref{output}) is either vacuum or a cat state (the superposition of $|\pm\beta\rangle$ in the second piece). The target output can therefore be obtained by the projection $\left\vert
n\right\rangle \left\langle n\right\vert $ on the first qubus beam.
If $n=0$, we will obtain
\begin{align}
\left\vert \psi\right\rangle _{out}=a\left\vert HH\right\rangle
_{13}+b\left\vert HV\right\rangle _{12}+c\left\vert VH\right\rangle
_{12}+d\left\vert VV\right\rangle _{13},
\label{parity}
\end{align}
with the even parity component on paths 1 and 3 and the odd parity component on paths 1 and 2, respectively.
If $n\neq 0$, on the other hand, there will be the output
\begin{align}
\left\vert \psi\right\rangle _{out}&=e^{-in\frac{\pi}{2}
}\{\left\vert H\right\rangle _{1}\left( a\left\vert H\right\rangle
_{2}+b\left\vert V\right\rangle _{3}\right)\nonumber\\
&+e^{in\pi}\left\vert
V\right\rangle _{1}\left( c\left\vert H\right\rangle _{3}+d\left\vert
V\right\rangle _{2}\right)\},
\label{parity1}
\end{align}
which can be transformed to the form in Eq. (\ref{parity}) by a phase shift $\pi$ on $|V\rangle_1$ following the classically feed-forwarded measurement
result $n$ and a switch of the path modes 2 and 3.
Photon number resolving detection on one of the output qubus beams is crucial in realizing the states in Eqs. (\ref{parity}) and (\ref{parity1}).
The general aspects and development of photon number resolving detection can be found in \cite{P}. The best device thus far to simulate the projector $\left\vert n\right\rangle \left\langle n\right\vert $ is the transition edge sensor (TES), a superconducting microbolometer that has demonstrated very high detection efficiency (95\% at $\lambda=1550$ nm) and high photon number resolution~\cite{detector, d2,d3,d4}. To realize the projection $\left\vert n\right\rangle \left\langle n\right\vert $ deterministically, we could apply the indirect measurement method in \cite{He}. It uses a quantum non-demolition detection (QND) module shown inside the dash-dotted line in Fig. \ref{parity-p}.
In the module, one of the coherent-state beams $|\gamma\rangle$ is coupled to the first coherent-state component in Eq. (\ref{output}), producing the
XPM phase shifts
\begin{equation}
\left\vert \pm\beta\right\rangle \left\vert \gamma\right\rangle \left\vert
\gamma\right\rangle \rightarrow e^{-\left\vert \beta\right\vert ^{2}
/2}\overset{\infty}{\underset{n=0}{{\sum}}}\frac{\left( \pm\beta\right)
^{n}}{\sqrt{n!}}\left\vert n\right\rangle \left\vert \gamma e^{in\theta
}\right\rangle \left\vert \gamma\right\rangle
\label{pro}
\end{equation}
conditioned on the Fock states $|n\rangle$ in the first qubus beam of Eq. (\ref{output}).
A 50:50 BS then maps the coherent-state components on the right hand side of Eq. (\ref{pro}) to $\left\vert
\frac{\gamma e^{in\theta}-\gamma}{\sqrt{2}}\right\rangle \left\vert
\frac{\gamma e^{in\theta}+\gamma}{\sqrt{2}}\right\rangle $ ($n=0,1,\cdots
,\infty$). We will detect $n$ by using a photon number non-resolving detector described by the POVM
elements \cite{n}
\begin{align}
\Pi_{0} & =\overset{\infty}{\underset{n=0}{{\sum}}}\left( 1-\eta\right)
^{n}\left\vert n\right\rangle \left\langle n\right\vert ,\nonumber\\
\Pi_{1} & =I-\Pi_{0},
\label{povm}
\end{align}
where $\eta<1$ is the quantum efficiency of the detector, and $\Pi_{0}$ and $\Pi_{1}$ correspond to detecting no photons and any number of photons, respectively.
Since each of the states $\left\vert \frac{\gamma
e^{in\theta}-\gamma}{\sqrt{2}}\right\rangle $ has a certain distribution of photon numbers (Poisson peak), the action of $\Pi_1$ on it will
be actually the operator, $\Pi_{1,k}=\sum_{m=n_k}^{n_k'}(1-(1-\eta)^m)|m\rangle\langle m|$, if the dominant distribution for the peak $n=k$ is from
$n_k$ to $n_k'$. We let the detector respond to a $\left\vert \frac{\gamma e^{ik\theta}-\gamma}{\sqrt{2}}\right\rangle $ according to the total photon detection probability $\langle \frac{\gamma e^{ik\theta}-\gamma}{\sqrt{2}}|\Pi_{1,k}|\frac{\gamma e^{ik\theta}-\gamma}{\sqrt{2}}\rangle$. This response, as a function of the photon detection probability, can be a measured quantity (for example, voltage or current converted from the measured light) by the detector. Such a photon number non-resolving detector can read a sufficiently large difference in the intensities of the measured coherent light, though it is unable to distinguish between pulses of close photon numbers (for example, two Fock states $|n\rangle$ and $|n+m\rangle$ if $m$ is not large enough). We can use a sufficiently large $|\gamma|$ such that the photon number distributions of $\left\vert \frac{\gamma e^{in\theta}-\gamma}{\sqrt{2}}\right\rangle $ will be fairly separated and the reactions of the detector to them will be mutually distinct. As a result, the operator $\Pi_{1,k}$ will indirectly project out the Fock state $|k\rangle$ in the first qubus beam by a particular response of the detector to $\left\vert \frac{\gamma
e^{ik\theta}-\gamma}{\sqrt{2}}\right\rangle $.
In Fig. \ref{distribution} we show an example of the well-separated photon number distributions induced by different $|k\rangle$ ($k=1,2,\cdots$) in the first qubus beam when $\left\vert\gamma\right\vert =100$ and $\theta=0.05$. As the figure shows, the number of possible measurement outcomes is finite, because the photon number Poisson distribution of $|\pm \beta\rangle$ is confined to a finite range. Once the possible measured quantities are fixed in the design,
deviations from them can be used to check whether the gate works correctly.
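To see why a sufficiently large $|\gamma|$ separates the peaks, note that the mean photon number of $\left\vert \frac{\gamma e^{in\theta}-\gamma}{\sqrt{2}}\right\rangle$ is $|\gamma|^2(1-\cos n\theta)$, while the Poisson width is only the square root of that. A short numeric sketch (my own check, reusing the quoted values $|\gamma|=100$, $\theta=0.05$):

```python
# Sketch (not from the paper's text): check that the Poisson photon-number
# distributions of the measured components |(gamma e^{i n theta} - gamma)/sqrt(2)>
# stay well separated for |gamma| = 100, theta = 0.05.
import math

gamma, theta = 100.0, 0.05

def mean_photons(n):
    # |(gamma e^{i n theta} - gamma)/sqrt(2)|^2 = |gamma|^2 (1 - cos(n theta))
    return gamma**2 * (1.0 - math.cos(n * theta))

for k in range(1, 5):
    mu = mean_photons(k)
    sigma = math.sqrt(mu)  # Poisson standard deviation
    print(f"k={k}: mean={mu:.1f}, width={sigma:.1f}")
```

Adjacent peaks (about 12.5, 50, 112, ...) are separated by far more than their Poisson widths, so a number non-resolving detector can tell the responses apart.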
The error probability of this parity gate is
\begin{align}
P_E&=||\sum_{n=0}^{\infty}e^{-\left\vert \beta\right\vert ^{2}
/2}\frac{\left( \pm\beta\right)
^{n}}{\sqrt{n!}}\left\vert n\right\rangle \Pi_{0}^{\frac{1}{2}} \left\vert \frac{\gamma
e^{in\theta}-\gamma}{\sqrt{2}}\right\rangle||^2 \nonumber\\
&\sim \exp\{-2(1-e^{-\frac{1}{2}\eta\gamma^2 \theta^2})\alpha^2\sin^2\theta\},
\end{align}
considering $\theta \ll 1$ from a weak cross-Kerr nonlinearity. The gate can therefore deterministically send the different parity components of a bi-photon state to different paths, provided that $2\alpha^2\sin^2\theta \gg 1$ and $\frac{1}{2}\eta \gamma^2\theta^2\gg 1$.
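The two working conditions can be checked numerically. In the sketch below (my own check), the values of $\eta$ and $\alpha$ are illustrative assumptions, while $\gamma$ and $\theta$ reuse the figure's values:

```python
# Evaluate the error-probability estimate
# P_E ~ exp{-2 (1 - e^{-eta gamma^2 theta^2 / 2}) alpha^2 sin^2(theta)}
# for one illustrative (assumed) parameter set.
import math

eta, gamma, theta, alpha = 0.5, 100.0, 0.05, 100.0  # eta, alpha are assumptions

suppression = 1.0 - math.exp(-0.5 * eta * gamma**2 * theta**2)
P_E = math.exp(-2.0 * suppression * alpha**2 * math.sin(theta)**2)

print(f"2 alpha^2 sin^2(theta)  = {2*alpha**2*math.sin(theta)**2:.1f}")  # >> 1
print(f"eta gamma^2 theta^2 / 2 = {0.5*eta*gamma**2*theta**2:.2f}")      # >> 1
print(f"P_E ~ {P_E:.2e}")
```

With both dimensionless combinations well above unity, the error probability is suppressed to a negligible level.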
Other methods to perform the projection $|n\rangle\langle n|$ indirectly with the QND module include a quadrature $\hat{x}$ ($\hat{p}$) measurement on $\left\vert\frac{\gamma e^{in\theta}-\gamma}{\sqrt{2}}\right\rangle$ or a phase measurement on $\left\vert \gamma e^{in\theta
}\right\rangle$ via homodyne/heterodyne techniques \cite{Nemoto}. For a deterministic operation, the condition $\beta \theta^2\gg 1$ in \cite{Nemoto}, which comes from an $\hat{x}$ measurement on the first qubus beam directly, can be relaxed if beams of sufficiently large $|\gamma|$ are used in the QND module.
\begin{figure}
\caption{(Color online) Example of the photon number resolving measurement on an output qubus beam with the average photon number $|\beta|^2=20$. (a) Poisson distribution of the measured output qubus beam photon numbers. Its dominant photon numbers are between $n=8$ and $n=35$. These photon numbers, occurring with the shown probabilities, give rise to the possible quantities measured by a number non-resolving detector described by Eq. (\ref{povm}).}
\label{distribution}
\end{figure}
If we project out the even or odd parity component of the output bi-photon state (on only one pair of tracks 1, 2 or 1, 3) by a QND device, the gate will also work as an entangler for separable input bi-photon states. The main advantage of using two qubus beams, as in \cite{He}, is that an XPM phase shift of $-\theta$ (which is only possible with a coupling constant $\chi$ of the opposite sign) or the displacement operation on a qubus beam in some other qubus-mediated parity gates \cite{Nemoto, Munro} can be avoided. With EIT techniques \cite{EIT}, it is possible to use a realistic qubus beam intensity and maintain good coherence of the total quantum state against the losses incurred in generating a small $\theta$ through such XPM processes \cite{loss}. Another advantage of the design is that the qubus beams can be recycled. Since the $\theta$ induced by a weak cross-Kerr nonlinearity is small, i.e., $\left\vert \sqrt{2}\alpha\cos
\theta\right\vert \simeq\left\vert \sqrt{2}\alpha\right\vert $, the unmeasured qubus beam can be used again until, after many detections, it has been consumed too much.
\subsection{Variation of parity gate---controlled-path gate}
We now slightly modify the parity gate to obtain a deterministic realization of the controlled-path gate (C-path gate) introduced in \cite{Lin}.
This gate implements the transformation ($C$ stands for the control and $T$ for the target in Fig. \ref{c-path-p}):
\begin{align}
& \left\vert \psi\right\rangle _{CT}=a\left\vert HH\right\rangle
_{CT}+b\left\vert HV\right\rangle _{CT}+c\left\vert VH\right\rangle
_{CT}+d\left\vert VV\right\rangle _{CT}\nonumber\\
& \rightarrow a\left\vert HH\right\rangle _{C1}+b\left\vert HV\right\rangle
_{C1}+c\left\vert VH\right\rangle _{C2}+d\left\vert VV\right\rangle
_{C2}=\left\vert \phi\right\rangle ,
\label{c-path}
\end{align}
where the indexes $1$ and $2$ denote two different paths; the gate thus controls the paths of the target single-photon qubit $T$ by the polarizations of the control photon $C$.
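At the amplitude level, the C-path transformation of Eq. (\ref{c-path}) is pure bookkeeping: each control polarization tags the target with a path index. A minimal sketch of this relabeling (my own illustration, not the paper's formalism):

```python
# Represent the C-path gate as a relabeling of amplitudes:
# (C_pol, T_pol) -> (C_pol, T_pol, path), with H-control -> path 1, V-control -> path 2.
a, b, c, d = 0.5, 0.5, 0.5, 0.5  # illustrative normalized amplitudes

state_in = {('H', 'H'): a, ('H', 'V'): b, ('V', 'H'): c, ('V', 'V'): d}

def c_path(state):
    out = {}
    for (cpol, tpol), amp in state.items():
        path = 1 if cpol == 'H' else 2  # control polarization selects the target path
        out[(cpol, tpol, path)] = amp
    return out

state_out = c_path(state_in)
print(state_out)  # {('H','H',1): a, ('H','V',1): b, ('V','H',2): c, ('V','V',2): d}
```

Since the gate only relabels basis states, the amplitudes (and hence the norm) are untouched.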
\begin{figure}
\caption{(Color online) Controlled-path gate. The two qubus beams interact with the indicated photonic modes through the cross-Kerr nonlinearity, creating the phase shift $\theta$. A phase shifter of $-\theta$ is then applied to each qubus beam. The QND module performs the photon number resolving detection, the results of which are classically fed forward to control the switch between paths 1 and 2 and a phase shift $\pi$ on path 1. The gate thereby realizes control of the paths of the target photon $T$ by the polarizations of the control photon $C$. }
\label{c-path-p}
\end{figure}
The process is similar to that in the parity gate. We first use a 50:50 beam splitter
(BS) to divide the target photon into two spatial modes 1 and 2. Then two
qubus beams $\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle $
are used to interact with the corresponding photonic modes in the pattern summarized in Tab. \ref{tb2}.
\begin{table}
\begin{tabular}{|c|c|c|}\hline
& $|H/V\rangle_1$ & $|H/V\rangle_2$ \\ \hline
$|H\rangle_C$ & & $\bigtriangleup$ \\ \hline
$|V\rangle_C$ & $\bigcirc$ & \\ \hline
\end{tabular}
\caption{$\bigcirc$ and $\bigtriangleup$ represent the coupling to the first and the second qubus beam, respectively. The XPM pattern in C-path gate is that the first qubus beam is coupled to $|V\rangle_C$ of the control photon and all modes of the target photon on path 1 after BS, while the second beam to $|H\rangle_C$ of the control photon and those on path 2 from the target photon. }
\label{tb2}
\end{table}
Phase shifters of $-\theta$ then change the phases of the coherent-state components to the following:
\begin{align}
& \frac{1}{\sqrt{2}}\left\vert \phi\right\rangle \left\vert \alpha
\right\rangle \left\vert \alpha\right\rangle
+\frac{1}{\sqrt{2}}\left\vert H\right\rangle _{C}\left( a\left\vert
H\right\rangle _{2}+b\left\vert V\right\rangle _{2}\right) \left\vert \alpha
e^{-i\theta}\right\rangle \left\vert \alpha e^{i\theta}\right\rangle
\nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{C}\left( c\left\vert
H\right\rangle _{1}+d\left\vert V\right\rangle _{1}\right) \left\vert \alpha
e^{i\theta}\right\rangle \left\vert \alpha e^{-i\theta}\right\rangle .
\end{align}
Finally, one more 50:50 BS converts the coherent-state components to
\begin{align}
& \frac{1}{\sqrt{2}}\{\vert H \rangle _{C}(a \vert
H \rangle _{2}+b \vert V \rangle _{2}) \vert-\beta\rangle \vert \sqrt{2}\alpha\cos\theta\rangle \nonumber\\
& +\vert V \rangle _{C}(c\vert
H\rangle _{1}+d \vert V\rangle _{1})
\vert\beta \rangle \vert \sqrt{2}\alpha\cos\theta\rangle\} \nonumber\\
&+\frac{1}{\sqrt{2}} \vert \phi\rangle \vert 0\rangle \vert \sqrt
{2}\alpha\rangle ,
\label{2}
\end{align}
where $\left\vert \beta\right\rangle =\left\vert i\sqrt{2}\alpha\sin
\theta\right\rangle $. Then, we can use the QND module in Fig. \ref{parity-p} to perform the projection $|n\rangle\langle n|$,
achieving the desired output state $\left\vert \phi\right\rangle $ deterministically.
As we will discuss in what follows, this C-path gate works as a building block for constructing various logic gates.
Next, we will apply it in a teleportation scheme to transform a multi-photon state to the corresponding single photon state.
\subsection{Teleportation scheme for unknown state discrimination}
\label{tele}
Now, we present a teleportation scheme that improves on the one in \cite{parity}, transforming a multi-photon state to the corresponding single photon state. We illustrate the procedure first with the simplest example of a two-photon input state, $|\psi_1\rangle|\psi_2\rangle$, where $|\psi_i\rangle=c_i|H\rangle+d_i|V\rangle$, and the coefficients $c_i$ and $d_i$ may be unknown. Using the same ancilla resources,
a single photon state $\frac{1}{\sqrt{2}}(|H\rangle_C+|V\rangle_C)$ and a photon pair in Bell state $|\Phi^{+}\rangle_{12}=\frac{1}{\sqrt{2}}(|HH\rangle_{12}+|VV\rangle_{12})$, as in \cite{parity}, we process them with a first C-path gate as follows:
\begin{align}
&\frac{1}{\sqrt{2}}(|H\rangle_C+|V\rangle_C)\frac{1}{\sqrt{2}}(|HH\rangle_{12}+|VV\rangle_{12})\nonumber\\
&\rightarrow \frac{1}{2}( |H, H, H\rangle_{C32}+|H, V, V\rangle_{C32}
+|V, H, H\rangle_{C42}\nonumber\\
&+|V, V, V\rangle_{C42})=|\Sigma\rangle,
\end{align}
where $|H\rangle_C$ and $|V\rangle_C$ of the control single photon route the first photon of the Bell pair to the different paths 3 and 4, respectively.
The state $|\Sigma\rangle$ can be rewritten as
\begin{align}
&\frac{1}{\sqrt{2}}(|H\rangle_C |0\rangle_S+|V\rangle_C |1\rangle_S)\frac{1}{\sqrt{2}}(|HH\rangle_{12}+|VV\rangle_{12})\nonumber\\
&=\frac{1}{2}( |H, H, H\rangle_{C32}+|H, V, V\rangle_{C32}
+|V, H, H\rangle_{C42}\nonumber\\
&+|V, V, V\rangle_{C42})
\label{bell},
\end{align}
a product of two Bell states $|\Phi^{+}\rangle_{CS}|\Phi^{+}\rangle_{12}$. For the first effective Bell state we adopt the notation of \cite{parity}, defining the switch state $|0\rangle_S$,
which sends the modes on path 1 to path 3, and $|1\rangle_S$, which sends the modes on path 1 to path 4.
The effect of the C-path gate is to create an effective Bell state $|\Phi^{+}\rangle_{CS}$.
Suppose two single photons in the states $|\psi_i\rangle$ are placed on tracks A and B, respectively. Then we have the following total state \cite{notation}:
\begin{align}
&|\psi_1\rangle_A|\psi_2\rangle_B |\Sigma\rangle \nonumber\\
&= \frac{1}{4}(|\Phi^+\rangle_{AC}|\psi_1\rangle_S+|\Psi^+\rangle_{AC}~\sigma_x|\psi_1\rangle_S+|\Psi^-\rangle_{AC}~(-i\sigma_y)|\psi_1\rangle_S\nonumber\\
&+|\Phi^-\rangle_{AC}~\sigma_z|\psi_1\rangle_S )(|\Phi^+\rangle_{B2}|\psi_2\rangle_1+|\Psi^+\rangle_{B2}~\sigma_x|\psi_2\rangle_1\nonumber\\
&+|\Psi^-\rangle_{B2}~(-i\sigma_y)|\psi_2\rangle_1
+|\Phi^-\rangle_{B2}~\sigma_z|\psi_2\rangle_1 ).
\end{align}
Performing Bell-state measurements on the modes $\{A,C\}$ and $\{B,2\}$ and classically feeding forward the results for the post operations,
we finally obtain the state
\begin{align}
&(c_1|0\rangle_S+d_1|1\rangle_S)(c_2|H\rangle_1+d_2|V\rangle_1)\nonumber\\
&=c_1c_2|H\rangle_3+c_1d_2|V\rangle_3+d_1c_2|H\rangle_4+d_1d_2|V\rangle_4,
\end{align}
which is a single photon qudit inheriting the coefficients of the input bi-photon state $|\psi_1\rangle|\psi_2\rangle$.
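As a quick consistency check (with illustrative coefficient values of my own choosing), the output qudit amplitudes are just the products $c_1c_2$, $c_1d_2$, $d_1c_2$, $d_1d_2$ of the input qubit coefficients, so normalized inputs give a normalized qudit:

```python
# Verify that the teleported single-photon qudit inherits the tensor product
# of the input qubit coefficients and stays normalized.
import math

# illustrative input coefficients with |c_i|^2 + |d_i|^2 = 1
c1, d1 = 0.6, 0.8
c2, d2 = 1/math.sqrt(2), 1/math.sqrt(2)

# amplitudes of |H>_3, |V>_3, |H>_4, |V>_4 on the single photon qudit
qudit = [c1*c2, c1*d2, d1*c2, d1*d2]
norm_sq = sum(x*x for x in qudit)
print(qudit, norm_sq)  # norm_sq = (c1^2 + d1^2)(c2^2 + d2^2) = 1
```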
To teleport a triple-photon state $|\psi_1\rangle|\psi_2\rangle|\psi_3\rangle$ to the corresponding single photon qudit, we use an extra ancilla photon in the state $\frac{1}{\sqrt{2}}(|H\rangle_D+|V\rangle_D)$. We let it control the paths of the first photon of the Bell pair, which runs on paths 3 and 4 after the first C-path gate.
This control should be performed on path 3 and 4 together, so we modify the standard C-path gate to a C-path-2 gate shown in
Fig. \ref{c-path-2}.
In this gate a 50:50 BS splits the modes on path 3 to paths 5 and 6, while it divides the modes on path 4 to paths 7 and 8 (the global coefficient is neglected):
\begin{align}
&~~~~~(|H\rangle_D+|V\rangle_D)|\Sigma\rangle \nonumber\\
&\rightarrow (|H\rangle_D+|V\rangle_D)(|HH\rangle_{C2}\frac{1}{\sqrt{2}}(|H\rangle_5+|H\rangle_6)\nonumber\\
&+|HV\rangle_{C2}\frac{1}{\sqrt{2}}(|V\rangle_5+|V\rangle_6)
+|VH\rangle_{C2}\frac{1}{\sqrt{2}}(|H\rangle_7+|H\rangle_8)\nonumber\\
&+|VV\rangle_{C2}\frac{1}{\sqrt{2}}(|V\rangle_7+|V\rangle_8)).
\end{align}
The interaction pattern for the qubus beams with the above state is given in Tab. \ref{tb3}.
\begin{table}[ptb]
\begin{tabular}
[c]{|c|c|c|c|c|}\hline
& $|H/V\rangle_{5}$ & $|H/V\rangle_{6}$& $|H/V\rangle_{7}$ & $|H/V\rangle_{8}$\\\hline
$|H\rangle_{D}$ & & $\bigtriangleup$ & & $\bigtriangleup$\\\hline
$|V\rangle_{D}$ & $\bigcirc$ & & $\bigcirc$ &\\\hline
\end{tabular}
\caption{$\bigcirc$ and $\bigtriangleup$ represent the coupling to the first
and the second qubus beam, respectively. The XPM pattern in C-path-2 gate is
that the first qubus beam is coupled to $|V\rangle_{D}$ of the first photon
and all modes of the third photon on path $5$ and $7$ after
BS, while the second beam to $|H\rangle_{D}$ of the first photon and those on
path $6, 8$ from the third photon. }
\label{tb3}
\end{table}
Through post-selection similar to that in a standard C-path gate, the ancilla photon on track D controls, with its polarization mode $|H/V\rangle_D$, the paths of the first photon of the Bell pair among paths 5, 6, 7 and 8. This process is equivalent to adding another effective Bell state, as in the first C-path gate.
After the corresponding operations conditioned on the Bell state measurements, any POVM can be performed by linear optical circuits on the single photon running on paths 5, 6, 7 and 8, which inherits the structure of the input $|\psi_1\rangle|\psi_2\rangle|\psi_3\rangle$ \cite{h-b-w, parity}. The C-path gate teleportation method thus greatly simplifies the optical realization of unknown qubit discrimination in \cite{parity}.
\begin{figure}
\caption{(Color online) A modified C-path gate based on that in Fig. \ref{c-path-p}.}
\label{c-path-2}
\end{figure}
\section{Purely circuit-based approach}
\label{sec3}
In teleporting an $n$-photon state to the corresponding single photon state, $n+1$ ancilla photons are necessary in the above-discussed scheme for unknown state discrimination. These ancilla resources can be saved by the approach discussed in this section. Here we illustrate the method with the examples of transforming two-photon and triple-photon states to the corresponding single photon states, which can then be processed by linear optics.
\subsection{Two-photon state}
The schematic setup for two-photon transformation is shown in Fig. \ref{two-photon}. After a C-path gate, a two-photon input
state $\left\vert\psi\right\rangle _{12}$ in the form of that in Eq. (\ref{in}) is transformed to
\begin{align}
\left\vert \phi\right\rangle &= \frac{1}{\sqrt{2}}\left\vert +\right\rangle _{1}\left( a\left\vert
H\right\rangle _{1^{^{\prime}}}+b\left\vert V\right\rangle _{1^{^{\prime}}
}+c\left\vert H\right\rangle _{2^{^{\prime}}}+d\left\vert V\right\rangle
_{2^{^{\prime}}}\right) \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert -\right\rangle _{1}\left( a\left\vert
H\right\rangle _{1^{^{\prime}}}+b\left\vert V\right\rangle _{1^{^{\prime}}
}-c\left\vert H\right\rangle _{2^{^{\prime}}}-d\left\vert V\right\rangle
_{2^{^{\prime}}}\right) ,
\end{align}
with the first photon expressed in the basis $|\pm\rangle=\frac{1}{\sqrt{2}}(|H\rangle\pm |V\rangle)$.
Then we process it with the circuit part called the Disentangler, shown in the dotted line of Fig. \ref{two-photon}.
The Disentangler includes a balanced Mach-Zehnder (MZ)
interferometer formed by two polarizing beam splitters (PBS$_{\pm}$), which transmit
$\left\vert +\right\rangle $ and reflect
$\left\vert -\right\rangle $, and a QND module in each arm.
If the component $|+\rangle$ is detected by a QND module,
the following state,
\begin{equation}
\left\vert +\right\rangle _{1}\left( a\left\vert H\right\rangle
_{1^{^{\prime}}}+b\left\vert V\right\rangle _{1^{^{\prime}}}+c\left\vert
H\right\rangle _{2^{^{\prime}}}+d\left\vert V\right\rangle _{2^{^{\prime}}
}\right),
\label{2-trans}
\end{equation}
will be projected out.
On the other hand, if $|-\rangle$ is detected, a phase shift $\pi$ will be performed on path $2^{\prime}$
of the second photon to get the same final state. Now, the state of the second photon inherits the coefficients of the initial two-photon state $\left\vert \psi\right\rangle _{12}$. Since we apply the detections with QND modules, the other photon is also preserved as $|+\rangle_1$ in Eq. (\ref{2-trans}).
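The conditional $\pi$ correction can be checked with a toy amplitude calculation (my own sketch; $a$, $b$, $c$, $d$ are arbitrary illustrative values):

```python
# Writing the C-path output in the |+/-> basis of photon 1, a |-> detection leaves
# amplitudes (a, b, -c, -d) on paths 1', 1', 2', 2'; a pi phase shift on path 2'
# restores the same state as the |+> branch.
a, b, c, d = 0.1, 0.2, 0.3, 0.4

plus_branch  = [a, b,  c,  d]   # amplitudes when photon 1 is found in |+>
minus_branch = [a, b, -c, -d]   # amplitudes when photon 1 is found in |->

def pi_shift_on_path2(amps):
    # flip the sign of the path-2' components (indices 2 and 3)
    return [amps[0], amps[1], -amps[2], -amps[3]]

print(pi_shift_on_path2(minus_branch) == plus_branch)  # True
```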
\begin{figure}
\caption{(Color online) Setup to transform a double photon state in the form of Eq. (\ref{in}).}
\label{two-photon}
\end{figure}
\subsection{Triple-photon and multi-photon state}
\label{m1}
This method can be generalized to multi-photon states as shown in Fig. \ref{multi-photon}.
For simplicity, we only give the details of the triple-photon transformation.
The input triple-photon state is given as
\begin{equation}
\left\vert \psi\right\rangle _{123}=\left\vert HH\right\rangle \left\vert
\phi_{1}\right\rangle +\left\vert HV\right\rangle \left\vert \phi
_{2}\right\rangle +\left\vert VH\right\rangle \left\vert \phi_{3}\right\rangle
+\left\vert VV\right\rangle \left\vert \phi_{4}\right\rangle ,
\end{equation}
where $\left\vert \phi_{j}\right\rangle =\alpha_{j}\left\vert H\right\rangle
+\beta_{j}\left\vert V\right\rangle $ $\left( j=1,2,3,4\right)$, and $\sum_{j=1}^4|\alpha_j|^2+\sum_{j=1}^4|\beta_j|^2=1$.
First, similarly to the two-photon case, we apply a C-path gate
and a Disentangler to the second and third photons, to project out the following
state,
\begin{equation}
\left\vert +\right\rangle _{2}\left[ \left\vert H\right\rangle _{1}\left(
\left\vert \phi_{1}\right\rangle _{1^{^{\prime}}}+\left\vert \phi
_{2}\right\rangle _{2^{^{\prime}}}\right) +\left\vert V\right\rangle
_{1}\left( \left\vert \phi_{3}\right\rangle _{1^{^{\prime}}}+\left\vert
\phi_{4}\right\rangle _{2^{^{\prime}}}\right) \right] .
\end{equation}
Second, we use a C-path-2 gate in Fig. \ref{multi-photon} for further processing. The XPM pattern in this C-path-2 gate is
that the first qubus beam is coupled to $|V\rangle_{1}$ of the first photon
and all modes of the third photon on path $1^{^{\prime}},2^{^{\prime}}$ after
BS, while the second beam to $|H\rangle_{1}$ of the first photon and those on
path $3^{^{\prime}},4^{^{\prime}}$ from the third photon.
In the C-path-2 gate, one 50:50 BS implements the maps $1^{^{\prime}}\rightarrow1^{^{\prime}},3^{^{\prime}}$ and
$2^{^{\prime}}\rightarrow2^{^{\prime}},4^{^{\prime}}$ of the two spatial modes
of the third photon. Then two qubus beams $\left\vert \alpha\right\rangle
\left\vert \alpha\right\rangle $ are used to interact with the corresponding
photonic modes as indicated in Fig. \ref{multi-photon}.
Phase shifters of $-\theta$ then shift the phases of
the coherent-state components as follows:
\begin{align}
& \frac{1}{\sqrt{2}}\left\vert +\right\rangle _{2}\left[ \left\vert
H\right\rangle _{1}\left( \left\vert \phi_{1}\right\rangle _{1^{^{\prime}}
}+\left\vert \phi_{2}\right\rangle _{2^{^{\prime}}}\right) \left\vert
\alpha\right\rangle \left\vert \alpha\right\rangle \right. \nonumber\\
& +\left\vert V\right\rangle _{1}\left( \left\vert \phi_{3}\right\rangle
_{3^{^{\prime}}}+\left\vert \phi_{4}\right\rangle _{4^{^{\prime}}}\right)
\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle \nonumber\\
& +\left\vert H\right\rangle _{1}\left( \left\vert \phi_{1}\right\rangle
_{3^{^{\prime}}}+\left\vert \phi_{2}\right\rangle _{4^{^{\prime}}}\right)
\left\vert \alpha e^{-i\theta}\right\rangle \left\vert \alpha e^{i\theta
}\right\rangle \nonumber\\
& \left. +\left\vert V\right\rangle _{1}\left( \left\vert \phi
_{3}\right\rangle _{1^{^{\prime}}}+\left\vert \phi_{4}\right\rangle
_{2^{^{\prime}}}\right) \left\vert \alpha e^{i\theta}\right\rangle \left\vert
\alpha e^{-i\theta}\right\rangle \right] .
\end{align}
Next, a 50:50 BS transforms the two qubus beams, and a QND module then performs $|n\rangle\langle n|$ on one of the beams.
The following state,
\begin{equation}
\left\vert +\right\rangle _{2}\left[ \left\vert H\right\rangle _{1}\left(
\left\vert \phi_{1}\right\rangle _{1^{^{\prime}}}+\left\vert \phi
_{2}\right\rangle _{2^{^{\prime}}}\right) +\left\vert V\right\rangle
_{1}\left( \left\vert \phi_{3}\right\rangle _{3^{^{\prime}}}+\left\vert
\phi_{4}\right\rangle _{4^{^{\prime}}}\right) \right],
\end{equation}
will thus be obtained with a conditioned switch and a phase shift $\pi$ applied to the modes $1^{\prime}$ and $2^{\prime}$ together, according to the $n$ measured by the QND module. As in the two-photon transformation, we then use a Disentangler on the first and third photons to realize the following state
\begin{equation}
\left\vert +\right\rangle _{1}\left\vert +\right\rangle _{2}\left( \left\vert
\phi_{1}\right\rangle _{1^{^{\prime}}}+\left\vert \phi_{2}\right\rangle
_{2^{^{\prime}}}+\left\vert \phi_{3}\right\rangle _{3^{^{\prime}}}+\left\vert
\phi_{4}\right\rangle _{4^{^{\prime}}}\right) .
\label{triple}
\end{equation}
Now, the third photon is in the desired single photon state carrying multiple spatial
modes, and it inherits the information from the polarizations of two other photons.
\begin{figure}
\caption{(Color online) Schematic setup for transforming a product of $n$ single photon qubits to the state in Eq. (\ref{m-out}).}
\label{multi-photon}
\end{figure}
It is straightforward to generalize to the transformation
for a general multi-photon state
\begin{equation}
\left\vert \psi\right\rangle _{12\cdots n}=\left( \left\vert H\cdots
H\right\rangle \left\vert \phi_{1}\right\rangle +\cdots+\left\vert V\cdots
V\right\rangle \left\vert \phi_{2^{n-1}}\right\rangle \right) _{12\cdots n},
\end{equation}
in which the $n$-th photon carries $2^{n-1}$ components $\left\vert \phi_{j}\right\rangle$.
The process starts with a triple-photon transformation
applied to the $(n-2)$-th, $(n-1)$-th and $n$-th photons, and then repeatedly applies
C-path-2 gates to the remaining photons and the $n$-th photon, with all spatial
modes of the $n$-th photon split by a 50:50 BS in each C-path-2
gate. The target state,
\begin{equation}
\left\vert +\right\rangle _{1}\otimes\cdots\otimes\left\vert +\right\rangle
_{n-1}\left( \left\vert \phi_{1}\right\rangle _{1^{\prime}}+\cdots+\left\vert
\phi_{2^{n-1}}\right\rangle _{2^{n-1\prime}}\right) ,
\label{m-out}
\end{equation}
will be obtained, with the path modes of the $n$-th photon inheriting the coefficients from the other photons' modes.
In this transformation of an $n$-photon state, $n-1$ C-path gates (one standard C-path gate and $n-2$
C-path-2 gates) and $n-1$ Disentanglers are used, so the circuit resources scale linearly with the photon number of the input state.
Another significant advantage of the transformation is that no ancilla is necessary. The transformation can be applied to realize unambiguous discrimination of multi-photon states, including the unknown state discrimination schemes
in \cite{Bergou, h-b,h-b07}.
\section{Application in quantum network construction}
\label{sec4}
One important application of the transformation methods discussed in the previous sections is the realization of quantum logic gates. In what follows, we will illustrate how to construct various logic gates that are key to quantum computation.
\subsection{Merging gate}
In the realization of logic gates, we need another building block---the Merging gate.
The qubits we use are encoded as superpositions of $|H\rangle$ and $|V\rangle$, and they occupy extra spatial modes after a C-path gate.
A Merging gate can be viewed as performing the inverse process of a C-path gate, merging a single photon in multiple spatial modes back to a single spatial mode without changing anything else. Specifically, the schematic setup that deterministically realizes the gate in Fig. \ref{merge} implements the transformation
\begin{align}
\left\vert \psi\right\rangle_{in} & =a\left\vert HH\right\rangle _{12}
+b\left\vert HV\right\rangle _{12}+c\left\vert VH\right\rangle _{13}
+d\left\vert VV\right\rangle _{13}\nonumber\\
& \rightarrow a\left\vert HH\right\rangle _{14}+b\left\vert HV\right\rangle
_{14}+c\left\vert VH\right\rangle _{14}+d\left\vert VV\right\rangle _{14},
\end{align}
i.e., the merging of the second photon's modes on paths 2 and 3 to path 4. Here an extra single
photon in the state $\left\vert \pm\right\rangle =\frac{1}{\sqrt{2}}\left(
\left\vert H\right\rangle \pm\left\vert V\right\rangle \right) $ is used.
\begin{figure}
\caption{(Color online) Merging gate. Two qubus beams interact with the modes of a single photon on paths 2, 3 and those of the ancilla photon on path 4, as indicated in the Entangler part. After the Entangler, a 50:50 BS implements the interference of the single photon modes on paths 2, 3. Then the detection results of the QND modules revise the possible output states in Eq. (\ref{merge-out}).}
\label{merge}
\end{figure}
Suppose the initial state is $\left\vert \psi\right\rangle_{in} \left\vert +\right\rangle_4 $.
It is first processed by the part called the Entangler in Fig. \ref{merge}.
There the two input photons interact with two qubus beams $\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle
$, as shown in the dashed line of Fig. \ref{merge}. The interaction pattern is summarized in Tab. \ref{tb4}. \begin{table}[ptb]
\begin{tabular}
[c]{|c|c|c|c|c|}\hline
& $|H\rangle_{2}$ & $|V\rangle_{2}$& $|H\rangle_{3}$ &$|V\rangle_{3}$\\\hline
$|H\rangle_{4}$ & & $\bigcirc$ & & $\bigcirc$\\\hline
$|V\rangle_{4}$ & $\bigtriangleup$ & & $\bigtriangleup$ & \\\hline
\end{tabular}
\caption{$\bigcirc$ and $\bigtriangleup$ represent the coupling to the first
and the second qubus beam, respectively. The XPM pattern in Merging gate is
that the first qubus beam is coupled to $|H\rangle_{4}$ of the ancilla photon
and $|V\rangle_{2,3}$ of the second photon after PBS, while the second beam to
$|V\rangle_{4}$ of the ancilla photon and $|H\rangle_{2,3}$ of the second
photon. }
\label{tb4}
\end{table}
The total state, $\left\vert \psi\right\rangle_{in} \left\vert
+\right\rangle_4 \left\vert \alpha\right\rangle \left\vert \alpha\right\rangle $, will be transformed to
\begin{align}
& \frac{1}{\sqrt{2}}\left( a\left\vert HHH\right\rangle _{124}+b\left\vert
HVV\right\rangle _{124}\right) \left\vert \alpha\right\rangle \left\vert
\alpha\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left( c\left\vert VHH\right\rangle _{134}+d\left\vert
VVV\right\rangle _{134}\right) \left\vert \alpha\right\rangle \left\vert
\alpha\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left( a\left\vert HHV\right\rangle _{124}+c\left\vert
VHV\right\rangle _{134}\right) \left\vert \alpha e^{-i\theta}\right\rangle
\left\vert \alpha e^{i\theta}\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left( b\left\vert HVH\right\rangle _{124}+d\left\vert
VVH\right\rangle _{134}\right) \left\vert \alpha e^{i\theta}\right\rangle
\left\vert \alpha e^{-i\theta}\right\rangle
\end{align}
by the Entangler.
A bit flip $\sigma_{x}$ and a phase shift of
$\pi$, conditioned on the number-resolving detection results of a QND module, yield the following state
\begin{equation}
a\left\vert HHH\right\rangle _{124}+b\left\vert HVV\right\rangle
_{124}+c\left\vert VHH\right\rangle _{134}+d\left\vert VVV\right\rangle
_{134}.
\end{equation}
This entangled state of three photons can be rewritten as
\begin{align}
& \frac{1}{\sqrt{2}}a|H\rangle_{1}(|+\rangle_{2}+|-\rangle_{2})|H\rangle
_{4}+\frac{1}{\sqrt{2}}b|H\rangle_{1}(|+\rangle_{2}-|-\rangle_{2}
)|V\rangle_{4}\nonumber\\
& +\frac{1}{\sqrt{2}}c|V\rangle_{1}(|+\rangle_{3}+|-\rangle_{3})|H\rangle
_{4}+\frac{1}{\sqrt{2}}d|V\rangle_{1}(|+\rangle_{3}-|-\rangle_{3}
)|V\rangle_{4}.
\end{align}
After the interference of the single photon modes on path 2 and 3
\begin{align}
|\pm\rangle
_{2} & \rightarrow\frac{1}{\sqrt{2}}(|\pm\rangle_{2}+|\pm\rangle_{3}),\nonumber\\
|\pm\rangle_{3} &\rightarrow\frac{1}{\sqrt{2}}(|\pm\rangle_{2}-|\pm\rangle
_{3})
\end{align}
through a 50:50 BS, two PBS$_{\pm}$ send the single photon to four different paths numbered from $5$ to $8$,
giving the state
\begin{align}
& \frac{1}{2}\left( a\left\vert HH\right\rangle +b\left\vert HV\right\rangle
+c\left\vert VH\right\rangle +d\left\vert VV\right\rangle \right)
_{14}\left\vert +\right\rangle _{5}\nonumber\\
& +\frac{1}{2}\left( a\left\vert HH\right\rangle -b\left\vert
HV\right\rangle +c\left\vert VH\right\rangle -d\left\vert VV\right\rangle
\right) _{14}\left\vert -\right\rangle _{6}\nonumber\\
& +\frac{1}{2}\left( a\left\vert HH\right\rangle +b\left\vert
HV\right\rangle -c\left\vert VH\right\rangle -d\left\vert VV\right\rangle
\right) _{14}\left\vert +\right\rangle _{7}\nonumber\\
& +\frac{1}{2}\left( a\left\vert HH\right\rangle -b\left\vert
HV\right\rangle -c\left\vert VH\right\rangle +d\left\vert VV\right\rangle
\right) _{14}\left\vert -\right\rangle _{8}.
\label{merge-out}
\end{align}
At this step, we can use a QND module on each path to project out the target state. In these QND modules the photon number non-resolving detectors can be simple APDs, which output the same signal without distinguishing between the input states.
Finally, the classically fed-forward QND detection results control the operation $\sigma_{z}$ on paths 1 and 4 to tailor the phases of the projected-out double photon state. The detected photon (in the state $|\pm\rangle$) on one of the paths can be used again in the next Merging gate.
\subsection{Two-qubit gate}
\label{2-qubit}
With a pair of C-path and Merging gates, it is very convenient to realize two-qubit gate operations of the form $|H\rangle\langle H|\otimes U_1+|V\rangle\langle V|\otimes U_2$. If $U_1=I$ and $U_2=\sigma_x$, for instance, this gate is a CNOT gate. Here we directly apply $U_1$ and $U_2$ on paths 1 and 2 of the state $|\phi\rangle$ in Eq. (\ref{c-path}), and then merging the modes on paths 1 and 2 finishes the transformation.
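This block-diagonal structure is easy to verify numerically. The following sketch (my own check, using NumPy) confirms that $U_1=I$, $U_2=\sigma_x$ reproduces the CNOT matrix:

```python
# Build |H><H| (x) U1 + |V><V| (x) U2 with U1 = I, U2 = sigma_x
# and compare it against the standard CNOT matrix.
import numpy as np

I  = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
P_H = np.array([[1, 0], [0, 0]])   # |H><H|
P_V = np.array([[0, 0], [0, 1]])   # |V><V|

gate = np.kron(P_H, I) + np.kron(P_V, sx)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
print(np.array_equal(gate, cnot))  # True
```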
The main point of this subsection is the realization of a general two-qubit gate $U\in U(4)$.
As was proved earlier, three CNOT gates (together with the proper one-qubit operations) are necessary to construct a general two-qubit gate \cite{cnot, TD}. It would appear that three pairs of C-path and Merging gates, involving six number-resolving detections, should be used to realize such a two-qubit gate. However, fewer resources are necessary if we work with the design in Fig. \ref{two-qubit-gate}.
\begin{figure}
\caption{(Color online) Schematic setup for the general two-qubit gate, using a
C-path gate as in Fig. \ref{c-path-p}.}
\label{two-qubit-gate}
\end{figure}
In Fig. \ref{two-qubit-gate}, we first use a two-photon transformation to
convert an initial state $\left\vert \psi\right\rangle _{in}$ in Eq. (\ref{in}) to the form in Eq. (\ref{2-trans}).
Then, two PBS on paths $1^{\prime}$ and $2^{\prime}$, followed by two $\sigma_{x}$ operations on paths 2 and 4, achieve the state
\begin{align}
\left\vert +\right\rangle _{1}|\varphi\rangle&=\left\vert +\right\rangle _{1}\left( a\left\vert H\right\rangle
_{1}+b\left\vert H\right\rangle _{2}+c\left\vert H\right\rangle _{3}
+d\left\vert H\right\rangle _{4}\right) \nonumber\\
& \equiv\left\vert +\right\rangle _{1}\left( a\left\vert 1\right\rangle
+b\left\vert 2\right\rangle +c\left\vert 3\right\rangle +d\left\vert
4\right\rangle \right) ,
\end{align}
where $\left\vert 1\right\rangle ,\left\vert 2\right\rangle ,\left\vert
3\right\rangle ,\left\vert 4\right\rangle $ denote the spatial modes for the single photon.
Now the second photon is a qudit with 4 spatial modes.
Any operation $U\in U(4)$ can be implemented on this single photon qudit by a linear optical multi-port interferometer (LOMI)
in the dashed line of Fig. \ref{two-qubit-gate} \cite{Reck}. The LOMI transforms the qudit to
\begin{equation}
U|\varphi\rangle= a^{^{\prime}}\left\vert 1\right\rangle
+b^{^{\prime}}\left\vert 2\right\rangle +c^{^{\prime}}\left\vert
3\right\rangle +d^{^{\prime}}\left\vert 4\right\rangle .
\end{equation}
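As a sketch of what the LOMI does at the amplitude level (my own illustration; the random unitary stands in for an arbitrary $U\in U(4)$ decomposed in the Reck style), the transformation simply rotates the amplitude vector $(a,b,c,d)$ while preserving its norm:

```python
# Apply a random U in U(4) to the 4 spatial-mode amplitudes of the qudit
# and check that the norm (total single-photon probability) is preserved.
import numpy as np

rng = np.random.default_rng(0)
# random unitary from the QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, R = np.linalg.qr(A)
U = Q * (np.diag(R) / np.abs(np.diag(R)))   # fix column phases; U is unitary

v = np.array([0.5, 0.5, 0.5, 0.5])          # amplitudes (a, b, c, d)
w = U @ v                                    # (a', b', c', d') after the LOMI
print(np.linalg.norm(v), np.linalg.norm(w))  # both equal 1
```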
Next, we transform the single photon qudit back to the polarization space.
Applying the inverses of the $\sigma_{x}$ operations and the two PBS, we have the following state
\begin{equation}
\left\vert +\right\rangle _{1}\left( a^{^{\prime}}\left\vert H\right\rangle
_{1^{\prime}}+b^{^{\prime}}\left\vert V\right\rangle _{1^{\prime}}
+c^{^{\prime}}\left\vert H\right\rangle _{2^{\prime}}+d^{^{\prime}}\left\vert
V\right\rangle _{2^{\prime}}\right) .
\end{equation}
The conversion continues with a setup called Entangler-2, in the dash-dotted line of Fig. \ref{two-qubit-gate}.
Entangler-2 differs slightly from the Entangler in Fig. \ref{merge}: the first (second)
qubus beam is coupled to $\left\vert H\right\rangle _{1}$ and $\left\vert
H/V\right\rangle _{2^{\prime}}$ ($\left\vert V\right\rangle _{2}$ and
$\left\vert H/V\right\rangle _{1^{\prime}}$).
Through such XPM processes in the Entangler-2, the following state can be obtained:
\begin{align}
& \frac{1}{\sqrt{2}}\left( a^{^{\prime}}\left\vert HH\right\rangle
_{1,1^{\prime}}+b^{^{\prime}}\left\vert HV\right\rangle _{1,1^{\prime}
}\right) \left\vert \alpha\right\rangle \left\vert \alpha\right\rangle
\nonumber\\
& +\frac{1}{\sqrt{2}}\left( c^{^{\prime}}\left\vert VH\right\rangle
_{1,2^{\prime}}+d^{^{\prime}}\left\vert VV\right\rangle _{1,2^{\prime}
}\right) \left\vert \alpha\right\rangle \left\vert \alpha\right\rangle
\nonumber\\
& +\frac{1}{\sqrt{2}}\left( a^{^{\prime}}\left\vert VH\right\rangle
_{1,1^{\prime}}+b^{^{\prime}}\left\vert VV\right\rangle _{1,1^{\prime}
}\right) \left\vert \alpha e^{-i\theta}\right\rangle \left\vert \alpha
e^{i\theta}\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left( c^{^{\prime}}\left\vert HH\right\rangle
_{1,2^{\prime}}+d^{^{\prime}}\left\vert HV\right\rangle _{1,2^{\prime}
}\right) \left\vert \alpha e^{i\theta}\right\rangle \left\vert \alpha
e^{-i\theta}\right\rangle .
\end{align}
Photon-number-resolving detection by a QND module then controls the indicated operations
on the photonic modes, yielding the state
\begin{equation}
a^{^{\prime}}\left\vert HH\right\rangle _{1,1^{\prime}}+b^{^{\prime}
}\left\vert HV\right\rangle _{1,1^{\prime}}+c^{^{\prime}}\left\vert
VH\right\rangle _{1,2^{\prime}}+d^{^{\prime}}\left\vert VV\right\rangle
_{1,2^{\prime}}.
\end{equation}
It is then easy to obtain the final state,
\begin{equation}
|\psi\rangle_{out}=a^{^{\prime}}\left\vert HH\right\rangle +b^{^{\prime}}\left\vert
HV\right\rangle +c^{^{\prime}}\left\vert VH\right\rangle +d^{^{\prime}
}\left\vert VV\right\rangle ,
\end{equation}
by merging the modes on paths $1^{\prime}$ and $2^{\prime}$ with a Merging gate, thus realizing a general two-qubit operation $U\in U(4)$.
In this proposal, we first transform the two-photon state to a single-photon qudit that inherits its coefficients,
and then perform the general operation $U\in U(4)$ on this qudit with a linear optical circuit. Finally, the tensor product of the qudit and the other single-photon qubit is transformed back to the target two-photon state by a modified Entangler and a Merging gate.
Compared with all other approaches so far, this design uses fewer resources to realize a general two-qubit gate deterministically.
\subsection{General multi-qubit gate}
\label{m2}
In what follows, we generalize the two-qubit gate design to a multi-qubit gate $U\in U(2^{n})$.
First, we use a multi-photon transformation to obtain the corresponding
single-photon qudit for the $n$-th photon as in Eq. (\ref{m-out}), and then perform the operation $U$ on
this qudit by an LOMI with $2^{n}$ input and output ports.
Because the last single photon and the other $n-1$ photons are in a tensor product state after Eq. (\ref{m-out}), the remaining work is to
entangle them into the proper output state of $U$. However, the last single photon occupies $2^{n-1}$ different spatial modes, so the Entangler in Fig. \ref{merge} must be modified to handle this case.
For triple-qubit gates, for example, the four spatial modes obtained by
transforming the third photon are sent to
a modified Entangler called Entangler-3 (Fig. \ref{entangler-3}), where they
are divided into two parts ($1^{\prime},2^{\prime}$) and ($3^{\prime
},4^{\prime}$), with each part treated as one spatial mode as in an
Entangler-2. In this gate, the first (second) qubus beam is coupled to
both $\left\vert H\right\rangle _{1}$ and $\left\vert H/V\right\rangle _{3^{\prime
},4^{\prime}}$ ($\left\vert V\right\rangle _{2}$ and $\left\vert
H/V\right\rangle _{1^{\prime},2^{\prime}}$). After that, the
triple-photon state in Eq. (\ref{triple}) will first be transformed to
\begin{align}
& \frac{1}{\sqrt{2}}\left\vert +\right\rangle _{2}\left\vert H\right\rangle
_{1}\left( \left\vert \phi_{1}\right\rangle _{1^{\prime}}+\left\vert \phi
_{2}\right\rangle _{2^{\prime}}\right) \left\vert \alpha\right\rangle
\left\vert \alpha\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert +\right\rangle _{2}\left\vert V\right\rangle
_{1}\left( \left\vert \phi_{3}\right\rangle _{3^{\prime}}+\left\vert \phi
_{4}\right\rangle _{4^{\prime}}\right) \left\vert \alpha\right\rangle
\left\vert \alpha\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert +\right\rangle _{2}\left\vert V\right\rangle
_{1}\left( \left\vert \phi_{1}\right\rangle _{1^{\prime}}+\left\vert \phi
_{2}\right\rangle _{2^{\prime}}\right) \left\vert \alpha e^{-i\theta
}\right\rangle \left\vert \alpha e^{i\theta}\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert +\right\rangle _{2}\left\vert H\right\rangle
_{1}\left( \left\vert \phi_{3}\right\rangle _{3^{\prime}}+\left\vert \phi
_{4}\right\rangle _{4^{\prime}}\right) \left\vert \alpha e^{i\theta
}\right\rangle \left\vert \alpha e^{-i\theta}\right\rangle ,
\label{e1}
\end{align}
together with two qubus beams, and then to
\begin{equation}
\left\vert +\right\rangle _{2}\left( \left\vert H\right\rangle _{1}\left\vert
\phi_{1}\right\rangle _{1^{\prime}}+\left\vert H\right\rangle _{1}\left\vert
\phi_{2}\right\rangle _{2^{\prime}}+\left\vert V\right\rangle _{1}\left\vert
\phi_{3}\right\rangle _{3^{\prime}}+\left\vert V\right\rangle _{1}\left\vert
\phi_{4}\right\rangle _{4^{\prime}}\right)
\label{e2}
\end{equation}
by post-selection with photon-number-resolving detection, as in the parity, C-path, and Merging gates.
Next, the spatial modes of the third photon are regrouped as in the lower
part of Fig. \ref{entangler-3}: the four spatial modes are divided into
two parts ($1^{\prime},3^{\prime}$) and ($2^{\prime},4^{\prime}$), and the
above state is sent to a second Entangler-3. The following process in the second Entangler-3,
\begin{align}
& \frac{1}{\sqrt{2}}\left\vert H\right\rangle _{2}\left( \left\vert
H\right\rangle _{1}\left\vert \phi_{1}\right\rangle _{1^{\prime}}+\left\vert
V\right\rangle _{1}\left\vert \phi_{3}\right\rangle _{3^{\prime}}\right)
\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{2}\left( \left\vert
H\right\rangle _{1}\left\vert \phi_{2}\right\rangle _{2^{\prime}}+\left\vert
V\right\rangle _{1}\left\vert \phi_{4}\right\rangle _{4^{\prime}}\right)
\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert V\right\rangle _{2}\left( \left\vert
H\right\rangle _{1}\left\vert \phi_{1}\right\rangle _{1^{\prime}}+\left\vert
V\right\rangle _{1}\left\vert \phi_{3}\right\rangle _{3^{\prime}}\right)
\left\vert \alpha e^{-i\theta}\right\rangle \left\vert \alpha e^{i\theta
}\right\rangle \nonumber\\
& +\frac{1}{\sqrt{2}}\left\vert H\right\rangle _{2}\left( \left\vert
H\right\rangle _{1}\left\vert \phi_{2}\right\rangle _{2^{\prime}}+\left\vert
V\right\rangle _{1}\left\vert \phi_{4}\right\rangle _{4^{\prime}}\right)
\left\vert \alpha e^{i\theta}\right\rangle \left\vert \alpha e^{-i\theta
}\right\rangle \nonumber\\
& \rightarrow\left( \left\vert HH\right\rangle \left\vert \phi_{1}
\right\rangle _{1^{\prime}}+\left\vert HV\right\rangle \left\vert \phi
_{2}\right\rangle _{2^{\prime}}\right) _{123}\nonumber\\
& +\left( \left\vert VH\right\rangle \left\vert \phi_{3}\right\rangle
_{3^{\prime}}+\left\vert VV\right\rangle \left\vert \phi_{4}\right\rangle
_{4^{\prime}}\right) _{123},
\label{e3}
\end{align}
can then be realized. For a general multi-photon state, we simply apply this procedure to
the spatial modes of the $n$-th photon iteratively. The multi-photon state in Eq. (\ref{m-out}) is then correspondingly transformed to
\begin{equation}
\left( \left\vert H\cdots H\right\rangle \left\vert \phi_{1}\right\rangle
_{1^{\prime}}+\cdots+\left\vert V\cdots V\right\rangle \left\vert
\phi_{2^{n-1}}\right\rangle _{2^{n-1\prime}}\right) _{1\cdots n},
\end{equation}
an entangled state of $n$ photons.
\begin{figure}
\caption{(Color online) Schematic setup to entangle three photons, as shown from Eq. (\ref{e1}) to Eq. (\ref{e3}).}
\label{entangler-3}
\end{figure}
Then, as for the two-qubit gate, we must merge the multiple spatial modes of the $n$-th photon into a single spatial mode without changing anything else. However, the standard Merging gate cannot do this when there are more than two spatial modes, so we generalize it to the
Merging-n gate shown in Fig. \ref{merge-n}. In this gate, an ancilla single photon in the state $\left\vert +\right\rangle
_{a}$ is used; the last photon and the ancilla are processed by a setup called
Entangler-4, where the first (second) qubus beam is coupled to
$\left\vert
H\right\rangle _{a}$ and $\left\vert V\right\rangle _{1^{\prime}
\cdots2^{n-1\prime}}$ ($\left\vert V\right\rangle _{a}$ and $\left\vert
H\right\rangle _{1^{\prime}\cdots2^{n-1\prime}}$) simultaneously.
The Entangler-4 outputs the following state ($a$ denotes the ancilla):
\begin{equation}
\left( \left\vert H\cdots H\right\rangle \left\vert \phi_{1}^{^{\prime}
}\right\rangle _{1,a}+\cdots+\left\vert V\cdots V\right\rangle \left\vert
\phi_{2^{n-1}}^{^{\prime}}\right\rangle _{2^{n-1},a}\right) _{1\cdots n},
\label{out-m}
\end{equation}
where $\left\vert \phi_{j}^{^{\prime}}\right\rangle _{j,a}=\left( \alpha
_{j}\left\vert HH\right\rangle +\beta_{j}\left\vert VV\right\rangle \right)
_{j,a}$, for $j=1,\cdots,2^{n-1}$.
The next step differs from that in a standard Merging gate: a circuit performing the quantum Fourier transform (QFT) \cite{Nielsen} ($\left\vert j\right\rangle ,\left\vert k\right\rangle $ denote the
spatial modes, and $N=2^{n-1}$),
\begin{equation}
\left\vert j\right\rangle \rightarrow\frac{1}{\sqrt{N}}\overset{N-1}{\underset
{k=0}{{\sum}}}e^{2\pi ijk/N}\left\vert k\right\rangle ,
\label{qft}
\end{equation}
should be used for the interference of the spatial modes of the $n$-th photon, instead of the single 50:50 BS of a standard Merging gate.
The action of the QFT on each of the $2^{n-1}$ modes of the $n$-th photon
in Eq. (\ref{out-m}) is
\begin{align}
\left\vert \phi_{j}^{^{\prime}}\right\rangle _{j,a} & =\frac{1}{\sqrt{2}
}\left( \left\vert \phi_{j}\right\rangle _{a}\left\vert +\right\rangle
_{j}+\sigma_{z}\left\vert \phi_{j}\right\rangle _{a}\left\vert -\right\rangle
_{j}\right) \nonumber\\
& \rightarrow\frac{1}{\sqrt{2N}}\left( \left\vert \phi_{j}\right\rangle
_{a}\overset{N-1}{\underset{k=0}{{\sum}}}e^{2\pi ijk/N}\left\vert
+\right\rangle _{k}\right. \nonumber\\
& \left. +\sigma_{z}\left\vert \phi_{j}\right\rangle _{a}\overset
{N-1}{\underset{k=0}{{\sum}}}e^{2\pi ijk/N}\left\vert -\right\rangle
_{k}\right) .
\end{align}
By a PBS$_{\pm}$ on each output spatial mode and the corresponding conditional
operations involving $\sigma_{z}$ and the phase shifts $e^{2\pi
ijk/N}$, the desired output state is projected out from Eq. (\ref{out-m}).
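The QFT matrix of Eq. (\ref{qft}) is easy to tabulate numerically; the illustrative sketch below checks that it is unitary for several mode numbers, and that the $N=2$ case reduces to the 50:50 BS (Hadamard) interference used in the standard Merging gate:

```python
import numpy as np

def qft_matrix(n_modes):
    """Discrete-Fourier-transform matrix acting on the spatial modes,
    F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N)."""
    j, k = np.meshgrid(np.arange(n_modes), np.arange(n_modes), indexing="ij")
    return np.exp(2j * np.pi * j * k / n_modes) / np.sqrt(n_modes)

for n in (2, 4, 8):  # N = 2^{n-1} spatial modes of the n-th photon
    f = qft_matrix(n)
    assert np.allclose(f.conj().T @ f, np.eye(n))  # the LOMI must be unitary

# N = 2 reduces to the 50:50 beam splitter (Hadamard) transformation
assert np.allclose(qft_matrix(2), np.array([[1, 1], [1, -1]]) / np.sqrt(2))
```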
Of course, simpler circuits could be found to replace the QFT circuits.
For example, in the case of three-photon gates, the matrix
\begin{equation}
U=\frac{1}{2}\left(
\begin{array}
[c]{cccc}
1 & 1 & 1 & 1\\
1 & 1 & -1 & -1\\
1 & -1 & -1 & 1\\
1 & -1 & 1 & -1
\end{array}
\right) ,
\end{equation}
performs the interference of the spatial modes of the last photon just as a QFT circuit would.
The remaining conditional operations are then just $\sigma_{z}$, without any phase shifts.
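One can verify numerically that this real matrix is indeed unitary, which is why the conditional corrections need no phase shifts; the sketch below (illustrative only) also checks the 4-mode QFT for comparison:

```python
import numpy as np

# The real 4x4 interference matrix from the text.
u = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1, -1, -1],
                    [1, -1, -1, 1],
                    [1, -1, 1, -1]], dtype=float)

# u is real and unitary, so the conditional corrections reduce to sigma_z alone.
assert np.allclose(u @ u.T, np.eye(4))

# The 4-mode QFT matrix is unitary as well, but complex-valued,
# which would require extra conditional phase shifts.
f4 = np.exp(2j * np.pi * np.outer(range(4), range(4)) / 4) / 2
assert np.allclose(f4.conj().T @ f4, np.eye(4))
```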
\begin{figure}
\caption{(Color online) General Merging-n gate.
In Entangler-4, within the dashed line, the two qubus beams are coupled to all spatial modes of the $n$-th photon as indicated.
Entangler-4 entangles the $n$-th photon, which occupies multiple spatial modes, with the ancilla
photon. An LOMI then performs the QFT in Eq. (\ref{qft}).}
\label{merge-n}
\end{figure}
The Merging-n gate reduces to a standard Merging gate for $n=2$. With this gate, an arbitrary multi-qubit gate $U\in U(2^{n})$ can be realized deterministically. The resources required are $n-1$ C-path gates (one C-path and $n-2$ modified C-path gates), $n$ Entanglers ($n-1$ Entangler-3 and one
Entangler-4), one ancilla single photon, and two LOMIs
(a $2^{n}$-dimensional one for the gate unitary operation and a $2^{n-1}$-dimensional one for the QFT).
A significant advantage of this proposal is that it works in principle for any circuit, even when its decomposition into two-qubit gates is unknown.
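To make the resource scaling concrete, the following sketch compares element counts; the function names are illustrative, using the known fact that a Reck-type LOMI on $N$ modes requires $N(N-1)/2$ beam splitters, and the count of $n$ C-path/Merging gate pairs for an $n$-control gate stated later in the text:

```python
# Illustrative resource-count comparison (function names are hypothetical).

def lomi_beam_splitters(n_qubits):
    # A Reck-style LOMI on N = 2^n modes needs N(N-1)/2 beam splitters,
    # so a general U(2^n) gate costs exponentially many optical elements.
    n_modes = 2 ** n_qubits
    return n_modes * (n_modes - 1) // 2

def multi_control_gate_pairs(n_controls):
    # An n-control Toffoli gate needs only n pairs of C-path
    # (or C-path-3) and Merging gates: linear growth in n.
    return n_controls

for n in (3, 5, 10):
    print(n, lomi_beam_splitters(n), multi_control_gate_pairs(n))
# e.g. n = 10: 523776 beam splitters for the LOMI vs. 10 gate pairs
```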
\subsection{Multi-control gate}
Although it is possible to realize an $n$-qubit gate $U\in U(2^{n})$ with current technology by applying the transformation approaches discussed above, the necessary resources (BSs and phase shifters) for the LOMIs in the setups grow exponentially with $n$. The number of modes to be coupled to the qubus beams in a C-path-2 and an Entangler-4 is also exponentially large in $n$. It is therefore not realistic to apply the method to build a gate for large $n$. However, the resources for constructing a special class of gates---multi-control gates---can be greatly reduced if we apply C-path and Merging gates. The operation of this type of gate is given as
\begin{align}
C^n(U_1)&=(I\otimes\cdots I-|V\cdots V\rangle\langle V\cdots V|)\otimes I\nonumber\\
&+|V\cdots V\rangle\langle V\cdots V|\otimes U_1.
\label{control}
\end{align}
It effects the action of $U_1$ on the last photon, conditioned on the $|V\rangle$ components of the remaining $n-1$ photons.
For $U_1=\sigma_x$, the gate is the key multi-qubit Toffoli gate \cite{Toffoli, B}.
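The matrix form of $C^n(U_1)$ can be sketched directly (illustrative code; $|H\rangle$ and $|V\rangle$ are encoded as the computational basis states 0 and 1):

```python
import numpy as np

def multi_control(u1, n_controls):
    """C^n(U_1): apply u1 to the target only when all n control
    qubits are |V>, i.e. on the last 2x2 block of the identity."""
    dim = 2 ** (n_controls + 1)
    gate = np.eye(dim, dtype=complex)
    gate[dim - 2:, dim - 2:] = u1  # block where all controls are |V>
    return gate

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
toffoli = multi_control(sigma_x, 2)  # n = 2 gives the triple-qubit Toffoli

assert np.allclose(toffoli.conj().T @ toffoli, np.eye(8))  # unitary
# |VVH> (index 6) and |VVV> (index 7) are swapped; other basis states are fixed
assert np.allclose(toffoli @ np.eye(8)[:, 6], np.eye(8)[:, 7])
assert np.allclose(toffoli @ np.eye(8)[:, 0], np.eye(8)[:, 0])
```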
\begin{table}
\begin{tabular}
[c]{|c|c|c|}\hline
& $|H/V\rangle_{4}$ & $|H/V\rangle_{5}$\\\hline
$|H\rangle_{2}$ & & $\bigtriangleup$\\\hline
$|V\rangle_{2'}$ & & $\bigtriangleup$\\\hline
$|H\rangle_{3}$ & & $\bigtriangleup$\\\hline
$|V\rangle_{3}$ & $\bigcirc$ & \\\hline
\end{tabular}
\caption{$\bigcirc$ and $\bigtriangleup$ represent the coupling to the first
and the second qubus beam, respectively. The XPM pattern in such a C-path-3 gate
is that the first qubus beam is coupled to $|V\rangle_{3}$ of the control
photon $C_{2}$ and to all modes of the target photon $T$ on path 4 after the BS,
while the second beam is coupled to $|H\rangle_{3}$, $|H\rangle_{2}$, and $|V\rangle_{2'}$ (which becomes $|H\rangle_{2'}$ after the bit flip) of the second control photon
$C_{2}$ in Fig. \ref{Toffoli}, and
to the modes on path 5 from the target photon $T$. }
\label{tb5}
\end{table}
\begin{figure}
\caption{(Color online) Triple-photon Toffoli gate. Here one C-path gate and one C-path-3 gate, the latter illustrated in the dashed line, transform every photon to two spatial modes. The qubus interaction pattern in the C-path-3 gate is summarized in Tab. \ref{tb5}.}
\label{Toffoli}
\end{figure}
A Toffoli gate is a universal element in quantum computation, i.e., any quantum network can be constructed from this multi-qubit gate and proper single-qubit gates. The gate is useful in implementing Shor's algorithm
\cite{Shor} and quantum error correction \cite{G}. Several recent works have attempted optical realizations of this gate \cite{F, R, L, compact, Lin}. In most previous approaches, the Toffoli gate is constructed by decomposing it into two-qubit gates. At least five two-qubit gates are needed to construct a triple-qubit Toffoli gate \cite{SD}, and a general Toffoli gate of $n$ qubits requires $\mathcal{O}(n^2)$ two-qubit gates \cite{B}. It was proved that the resources could be reduced if part of the input state is encoded as a qudit \cite{R}. In what follows we show that a deterministic optical Toffoli gate whose resources increase only linearly with the input size is possible, even if the input is simply encoded as a multi-qubit product state as in Eqs. (\ref{1}) and (\ref{20}).
In Figs. \ref{Toffoli} and \ref{Toffoli-2}, we outline designs realizing such a gate with weak nonlinearity and linear optics. Like the general multi-qubit gate in \ref{m2}, these designs do not rely on decomposition into two-qubit gates. They differ from a general multi-qubit gate
in that no input photon is transformed into $2^n$ spatial modes. The idea is that the first control photon sends the second photon to two different paths conditioned on its polarization, the modes on the second path of the second photon in turn control those of the third photon, and so forth up to the last photon. This is how an $n$-control Toffoli gate realizes Eq. (\ref{control}). After these operations each photon, including the target photon, occupies two spatial modes, so $U_1$ can be performed on one spatial mode of the target to realize the gate.
We illustrate the realization of the Toffoli gate with the design in Fig. \ref{Toffoli}. The gate processes any input of the following form, with each qubit
encoded in its respective polarization space:
\begin{align}
|\Psi\rangle_{C_1C_2T} & =A_{1}\left\vert HHH\right\rangle
+A_{2}\left\vert HHV\right\rangle +A_{3}\left\vert HVH\right\rangle
\nonumber\\
& +A_{4}\left\vert HVV\right\rangle +A_{5}\left\vert VHH\right\rangle
+A_{6}\left\vert VHV\right\rangle \nonumber\\
& +A_{7}\left\vert VVH\right\rangle +A_{8}\left\vert VVV\right\rangle.
\end{align}
First, the polarization of photon $C_1$ routes the second photon to two different paths $2$ and $3$ through a C-path gate, giving the state
\begin{align}
|\Phi\rangle &=(A_{1}\left\vert HHH\right\rangle +A_{2}\left\vert HHV\right\rangle+A_{3}\left\vert HVH\right\rangle \nonumber\\
&+ A_{4}\left\vert HVV\right\rangle)_{12T}+(A_{5}\left\vert VHH\right\rangle +A_{6}\left\vert VHV\right\rangle \nonumber\\
&+A_{7}\left\vert VVH\right\rangle +A_{8}\left\vert VVV\right\rangle )
_{13T} \label{Tf-1}.
\end{align}
At this point, if the modes of the second photon on path 3 are to control the paths of the target photon through a standard C-path gate, a problem arises in arranging the coupling of the qubus beams with all terms in Eq. (\ref{Tf-1}). The qubus beams are in a tensor product state with all single-photon modes, but the modes $|H/V\rangle_3$ on path 3 are only part of the superposition, so the coupling of the two qubus beams with the other modes in the superposition of Eq. (\ref{Tf-1}) must also be considered. We therefore modify the standard C-path gate to that in the dashed line of Fig. \ref{Toffoli}, which we call the C-path-3 gate to distinguish it from the previously discussed C-path and C-path-2 gates.
A 50:50 BS divides the target photon into two paths $4$ and $5$ in Fig. \ref{Toffoli}, and the two qubus beams then interact with the corresponding photonic modes as summarized in Tab. \ref{tb5}, realizing the transformation
\begin{widetext}
\begin{align}
|\Phi\rangle |\alpha\rangle|\alpha\rangle &\rightarrow \frac{1}{\sqrt{2}}|HH\rangle_{12}\{ (A_1|H\rangle_4+A_2|V\rangle_4)|\alpha\rangle|\alpha\rangle+(A_1|H\rangle_5+A_2|V\rangle_5)|\alpha e^{-i\theta}\rangle|\alpha e^{i\theta}\rangle\}
\nonumber\\
&+\frac{1}{\sqrt{2}}|HV\rangle_{12'}\{ (A_3|H\rangle_4+A_4|V\rangle_4)|\alpha\rangle|\alpha\rangle+(A_3|H\rangle_5+A_4|V\rangle_5)|\alpha e^{-i\theta}\rangle|\alpha e^{i\theta}\rangle\}\nonumber\\
&+\frac{1}{\sqrt{2}}|VH\rangle_{13}\{ (A_5|H\rangle_4+A_6|V\rangle_4)|\alpha\rangle|\alpha\rangle+(A_5|H\rangle_5+A_6|V\rangle_5)|\alpha e^{-i\theta}\rangle|\alpha e^{i\theta}\rangle\}\nonumber\\
&+\frac{1}{\sqrt{2}}|VV\rangle_{13}\{ (A_7|H\rangle_4+A_8|V\rangle_4)|\alpha e^{i\theta}\rangle|\alpha e^{-i\theta}\rangle+(A_7|H\rangle_5+A_8|V\rangle_5)|\alpha \rangle|\alpha \rangle\},
\end{align}
\end{widetext}
together with the phase shift $-\theta$ on the two qubus beams. Here the modes of the second photon on path 2 are divided into $|H\rangle_2$ and $|V\rangle_{2'}$ by a PBS. The triple-photon state
\begin{align}
& \left\vert H\right\rangle _{1}\left( A_{1}\left\vert HH\right\rangle
+A_{2}\left\vert HV\right\rangle +A_{3}\left\vert VH\right\rangle
+A_{4}\left\vert VV\right\rangle \right) _{24}\nonumber\\
& +\left\vert VH\right\rangle _{13}\left( A_{5}\left\vert H\right\rangle
+A_{6}\left\vert V\right\rangle \right) _{4}+\left\vert VV\right\rangle
_{13}\left( A_{7}\left\vert H\right\rangle +A_{8}\left\vert V\right\rangle
\right) _{5}
\end{align}
will be post-selected out by a procedure similar to that in a standard C-path gate.
Here the target photon passes through path 5 only if both control
photons are in the state $\left\vert V\right\rangle $. A bit flip
$\sigma_{x}$ on path $5$ alone then completes the target operation.
The remaining steps are the merging of the modes on paths 4, 5 and on paths 2, 3, respectively,
which is easily performed with two standard Merging gates. A triple-photon Toffoli gate implementing the map
\begin{align}
|\Psi\rangle_{C_1C_2T} &\rightarrow A_{1}\left\vert HHH\right\rangle
+A_{2}\left\vert HHV\right\rangle +A_{3}\left\vert HVH\right\rangle
\nonumber\\
& +A_{4}\left\vert HVV\right\rangle +A_{5}\left\vert VHH\right\rangle
+A_{6}\left\vert VHV\right\rangle \nonumber\\
& +A_{7}\left\vert VVV\right\rangle +A_{8}\left\vert VVH\right\rangle,
\end{align}
is thus realized.
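As a sanity check of this map, applying the Toffoli matrix to an arbitrary coefficient vector should leave $A_1,\dots,A_6$ untouched and swap $A_7$ and $A_8$ (illustrative sketch; $|H\rangle$, $|V\rangle$ are encoded as 0, 1):

```python
import numpy as np

# The Toffoli gate only swaps the coefficients of |VVH> and |VVV>
# (basis indices 6 and 7 with H = 0, V = 1).
toffoli = np.eye(8)
toffoli[6:, 6:] = [[0, 1], [1, 0]]

amps_in = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
amps_in /= np.linalg.norm(amps_in)  # normalize A_1 .. A_8
amps_out = toffoli @ amps_in

assert np.allclose(amps_out[:6], amps_in[:6])  # A_1 .. A_6 untouched
assert np.isclose(amps_out[6], amps_in[7])     # A_7 <-> A_8 swapped
assert np.isclose(amps_out[7], amps_in[6])
```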
\begin{figure}
\caption{(Color online) Alternative design for the triple-photon Toffoli gate.
The difference from the design in Fig. \ref{Toffoli} lies in the qubus coupling pattern of the C-path-3 gate (see text).}
\label{Toffoli-2}
\end{figure}
The design for the Toffoli gate in this approach is not unique. The layout in Fig. \ref{Toffoli-2} realizes an optical Toffoli gate as well, but the interaction pattern for the qubus beams in the C-path-3 gate is different from that in Fig. \ref{Toffoli}: the second beam interacts with the modes of both the first and second photon, saving one mode for the coupling.
To construct a general Toffoli gate with $n$ control photons, we apply $n-1$ C-path-3 gates together with one standard C-path gate.
The interaction pattern in Tab. \ref{tb5} for the $(k-1)$-th C-path-3 gate (where the modes of the $k$-th photon on the second spatial path control the paths of the $(k+1)$-th photon) generalizes simply: the second qubus beam is coupled to the second spatial mode of the $(k+1)$-th photon, to the $|H\rangle$ mode of the $k$-th photon on the second path, and to both the $|H\rangle$ and $|V\rangle$ modes of the $k$-th photon on the first path. Meanwhile, the first qubus beam is coupled to the first spatial mode of the $(k+1)$-th photon and to the $|V\rangle$ mode
of the $k$-th photon on the second spatial path. As an example, Fig. \ref{toffoli-4} shows the schematic setup of a triple-control Toffoli gate.
The circuit resources for an $n$-control Toffoli gate are only $n$ pairs of C-path (C-path-3) and Merging gates. The design of an $n$-qubit Toffoli gate can be regarded as a simple generalization of that for the CNOT gate (single-qubit control) mentioned at the beginning of
\ref{2-qubit}.
\begin{figure}
\caption{(Color online) Schematic setup for the triple-control (four-photon) Toffoli
gate. It is a simple generalization of the design in Fig. \ref{Toffoli}.}
\label{toffoli-4}
\end{figure}
In addition, a more general multi-control gate called the $C^{n}(U_{k})$
gate \cite{Nielsen}, which performs a multi-qubit $U_{k}$ operation on $k$ target
photons controlled by $n$ control photons, i.e.,
\begin{align}
C^{n}(U_{k}) & =(\underbrace{I\otimes\cdots I}\limits_{n}-|V\cdots V\rangle\langle V\cdots
V|)\otimes (\underbrace{I\otimes \cdots I}\limits_{k}) \nonumber\\
& +|V\cdots V\rangle\langle V\cdots V|\otimes U_{k},
\end{align}
can be realized in a similar way.
We can combine the multi-control gate idea discussed here with the multi-photon
transformation and its inverse in \ref{m1} and \ref{m2}. The circuit resources are $n+k$ C-path gates
(one C-path gate for the first two control photons, $n-1$
C-path-3 gates for the next $n-1$ control photons, and $k$ C-path-3 gates for the
final control photon and the $k$ target photons), the gates to transform the $k$-photon state to a corresponding single photon and their inverses, the LOMI performing $U_{k}$ on a single photon, and $n+k$ Merging gates. Together, these element gates realize a $C^{n}(U_{k})$ gate deterministically.
\section{Conclusion}
We have presented methods for transforming a class of multi-photon states to corresponding single-photon states that inherit the structures of the initial multi-photon states. These transformations make possible deterministic unitary operations and POVMs on multi-photon states given in the form
of tensor products of multiple single-photon qubits. Since the input states of various quantum information processing tasks can be encoded in this form, these transformations and their inverses, which are implemented with a finite number of element gates, could find applications in the related fields. In this paper we have discussed the realization of multi-photon state discrimination and the construction of quantum logic gates in this transformation approach. In particular, we have discussed the realization of the parity gate and the Toffoli gate, which have wide applications in quantum information technology. The technical ingredients for this circuit- or network-based approach are linear optics and weak nonlinearity,
which are within reach of current technology. The circuits used for the transformations may add to the toolkit
for processing photonic states.
\begin{acknowledgments}
B. H. thanks C. F. Wildfeuer for helpful discussions. This work is partially supported by a PSC-CUNY grant.
\end{acknowledgments}
\end{document}
\begin{document}
\maketitle
\begin{abstract}
This article is devoted to the study of spectral optimisation for inhomogeneous plates. In particular, we optimise the first eigenvalue of a vibrating plate with respect to its thickness and/or density. Our result is threefold. First, we prove existence of an optimal thickness, using fine tools hinging on topological properties of rearrangement classes. Second, in the case of a circular plate, we provide a characterisation of this optimal thickness by means of Talenti inequalities. Finally, we prove a stability result when assuming that the thickness and the density of the plate are linearly related. This proof relies on $H$-convergence tools applied to biharmonic operators.
\end{abstract}
\section{Introduction}
The study of eigenmodes optimisation is central to the theory of inhomogeneous elastic plates and is of great applicative relevance. A vast literature has been devoted to the analysis of spectral optimisation problems for biharmonic operators, modelling plates of varying density and thickness under different settings \cite{Anedda2010,Anedda,Berchio2020,Bucur2011,BuosoFreitas,Buoso2013,Buoso2015,Colasuonno2019,Kang2017,Kao2021}. In addition, several contributions are devoted to inverse problems arising in the study of such inhomogeneous plates \cite{Jadamba2015,Manservisi2000,Marinov2013}. In the latter context, the main objective is to identify some structural descriptors of the plate under consideration, such as its thickness or its bending stiffness, and the outlook on the problem is mostly computational.
The goal of this article is to provide answers to several theoretical questions that, to the best of our knowledge, have not received a mathematical treatment so far. We focus on the optimisation of thickness and/or density with respect to the first eigenvalue. For fixed density, we prove the existence of an optimal thickness. This calls for the implementation of a delicate argument, based on rearrangements. We then investigate the symmetry of the optimal solution in specific geometries, showing analogies with previously studied cases \cite{Anedda2010,Anedda}. Finally, we prove a stability result for the case in which the thickness and the density of the plate are linearly related.
In order to make the discussion more precise, let $\Omega$ be a bounded domain in $\mathbb{R}^2$ with $\mathscr C^2$ boundary, representing the reference mid-surface configuration of a thin plate at rest, and let $D,g \in L^\infty({\Omega})$. The function $D$ describes the varying {\it thickness} of the plate. Its lower bound is normalized by assuming that $D\geq 1$, where the inequality is meant to hold almost everywhere in ${\Omega}$. The function $g$ accounts for the {\it heterogeneity} of the plate. We are hence led to consider the first eigenvalue associated with the natural vibration of the plate. In variational terms, this eigenvalue admits the following Rayleigh-quotient representation
\begin{equation}
\tilde\Lambda(D,g)=\inf_{u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega}),\, u\neq 0}\frac{\int_{\Omega} D \left(\Delta u\right)^2}{\int_{\Omega} g u^2}.\label{eq:la0}\end{equation}
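Although not part of this paper's analysis, the Rayleigh quotient \eqref{eq:la0} can be explored numerically in a one-dimensional analogue: under hinged (Navier) conditions the operator factorises through the Dirichlet Laplacian, so a finite-difference sketch (all discretisation choices below are illustrative) recovers the exact value $\pi^4$ for $D\equiv g\equiv 1$ on $(0,1)$:

```python
import numpy as np

def first_eigenvalue(D, g, h):
    """Smallest eigenvalue of (D u'')'' = lambda g u on (0,1) with
    u = u'' = 0 at the ends (1D Navier analogue of eq. (la0)),
    by second-order finite differences on the interior grid."""
    m = len(D)
    K = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h ** 2   # Dirichlet Laplacian
    A = K @ np.diag(D) @ K                          # biharmonic with thickness D
    # generalized problem A u = lambda diag(g) u, symmetrised via g^{-1/2}
    s = 1.0 / np.sqrt(g)
    return np.linalg.eigvalsh(s[:, None] * A * s[None, :]).min()

m = 200
h = 1.0 / (m + 1)
lam = first_eigenvalue(np.ones(m), np.ones(m), h)
assert abs(lam - np.pi ** 4) < 1.0  # exact first eigenvalue is pi^4
```

Increasing $D$ on a subinterval raises the computed eigenvalue, consistent with the monotonicity of the numerator in \eqref{eq:la0}.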
The associated eigenfunction $v_{D,g}$ satisfies the following elliptic problem
\begin{equation}\label{eq:eig-D}\begin{cases}\Delta \left(D \Delta v_{D,g}\right)=\tilde{\Lambda}(D,g) g v_{D,g}\text{ in }{\Omega},
\\ v_{D,g}=\Delta v_{D,g}=0\text{ on }\partial {\Omega}.\end{cases}\end{equation}
The most general formulation of the optimisation problem under consideration, covering questions from \cite{Anedda2010,Anedda,Berchio2020,Colasuonno2019,Kang2017,Kao2021}, is the study of the qualitative properties of solutions to the minimisation problem
\begin{equation}\inf_{D,g}\tilde{\Lambda}(D,g).\label{eq:ref}\end{equation}
From the modeling viewpoint, the reference application consists in reinforcing the plate locally by adding a layer of material, hence increasing the thickness, or by combining two materials, hence increasing the density. These cases translate into the choice
\begin{equation}D=1+\beta_0 \mathds 1_\omega,\, g=1+\delta_0\mathds 1_{\omega'}\end{equation} for two measurable subsets $\omega,\, \omega'$ on which we can act, where $\mathds 1$ is the corresponding characteristic function. This in turn leads to considering $L^\infty$ and $L^1$ constraints on $D$ and $g$.
We hence introduce the following admissible classes for thickness and heterogeneity, where $\beta_0,\, \delta_0,\, D_0,\, g_0$ are fixed positive parameters:
\begin{align}\mathcal N({\Omega}):=&\left\{D\in L^\infty({\Omega}) \ : \ 1\leq D\leq 1+\beta_0,\, \int_{\Omega} D=D_0\right\},\label{eq:N}\\
\mathcal N'({\Omega}):=&\left\{g\in L^\infty({\Omega}) \ : \ 1\leq g\leq 1+\delta_0,\, \int_{\Omega} g=g_0\right\}.\end{align} The main minimisation problem \eqref{eq:ref} is then specified as follows
\begin{equation}\label{Eq:PvIntro}
\inf_{D\in \mathcal N({\Omega}),\, g\in \mathcal N'({\Omega})}\tilde\Lambda(D,g).
\end{equation}
Let us start by removing a
difficulty related to the definition of the eigenvalue, namely that the potential term $\tilde{\Lambda}(D,g)gv_{D,g}$ in \eqref{eq:eig-D} appears in a multiplicative form. As is customary in eigenvalue optimisation, arguing as in \cite[Theorem 13]{Chanillo2000} we reformulate the problem by referring to the {\it density (excess)} $\rho$ of the plate instead of its heterogeneity. In particular, we introduce the class of admissible densities
\begin{equation}\mathcal M({\Omega}):=\left\{\rho \in L^\infty({\Omega}) \ : \ 0\leq \rho\leq 1,\, \int_{\Omega} \rho=\rho_0\right\}\label{eq:M}\end{equation} and, for $D\in \mathcal N({\Omega})$, we define the first eigenvalue
\begin{equation}\Lambda(D,\rho):=\inf_{u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega}),\, u \neq 0}\frac{\int_{\Omega} D \left(\Delta u\right)^2-\int_{\Omega} \rho u^2}{\int_{\Omega} u^2}.\label{eq:la}\end{equation}
Up to a scaling factor, proceeding along the lines of \cite[Theorem 13]{Chanillo2000}, solving \eqref{Eq:PvIntro} is equivalent to finding solutions to \begin{equation}\label{Eq:PvIntro2}
\inf_{D\in \mathcal N({\Omega}),\,\rho \in \mathcal M ({\Omega})}\Lambda(D,\rho).
\end{equation}
We prefer to work with formulation \eqref{Eq:PvIntro2}, for the normalisation term $\int_{\Omega} u^2$ in the denominator of \eqref{eq:la} is independent of $\rho$ (compare with \eqref{eq:la0}).
Most contributions on the minimization problem \eqref{Eq:PvIntro2} focus on the case of fixed thickness $D\equiv 1$, and the optimisation is carried out with respect to $\rho$ only, either under Navier boundary conditions or under clamped boundary conditions, see for instance \cite{Anedda2010,Anedda,Kang2017}. In these contributions, rearrangement arguments and Talenti inequalities are used in order to derive Faber-Krahn-like inequalities, delivering information on the geometry of minimizers. On the other hand, the optimisation of the thickness $D$ is mostly treated numerically \cite{Berchio2020,Jadamba2015,Manservisi2000,Marinov2013} and, to the best of our knowledge, the existence of a minimizer $D^*$ is usually not ascertained. Let us stress that existence in this setting can be quite delicate to obtain. As a matter of comparison, let us recall that in the somewhat related case of
optimisation of the first eigenvalue of two-phases operators $-\nabla \cdot(D\nabla)$ under the constraint $D\in \mathcal N({\Omega})$, it is well-known \cite{MuratTartar,CasadoDiaz3} that no solution exists if ${\Omega}$ is not a ball.
The first main result of the paper is hence an existence proof for an optimal thickness $D^*$ for \eqref{Eq:PvIntro2}, under fixed $\rho\equiv 0$. In particular,
setting $\mu(D):=\Lambda(D,0)$, we investigate the minimisation problem $\underset{D\in \mathcal N({\Omega})}\inf \,\mu(D)$. We prove in Theorem \ref{Th:ExistMu} that a minimiser exists in any domain ${\Omega}$. Note that this is in sharp contrast with several other models involving heterogeneity in the leading term of the underlying elliptic operator (such as classical two-phases operators), where existence strongly depends on the choice of the ambient space ${\Omega}$. The proof of Theorem \ref{Th:ExistMu} relies on delicate topological properties of constraint classes defined through rearrangements, and we will make use of some related results from \cite{Alvino1989,ConcaMahadevanSanz}.
Our second main result, Theorem \ref{Th:PV0}, focuses on the case when ${\Omega}$ is a ball. In this case, we are able to characterise the optimal thickness $D^*$ as being piecewise constant and radially symmetric. The argument is in the spirit of \cite{Anedda,Anedda2010}. In particular, we use Talenti inequalities in combination with rearrangement arguments.
In our last main result, Theorem \ref{Th:Stability}, we investigate the case of coupled thickness and density. For simplicity, we focus on the case of a linear relation between these two quantities, namely, $D=1+\alpha \rho$ for a small parameter $\alpha>0$. Albeit linear, this case
already proves very challenging. By defining $\lambda_\alpha(\rho):=\Lambda(1+\alpha \rho,\rho)$, we give a fine stability analysis in the case where ${\Omega}$ is a ball, $\alpha$ is small, and all the functions involved are assumed to be radial. In particular, we obtain a stability result: the minimisers $\rho^*$ in the case $\alpha=0$, which were already studied in \cite{Anedda2010,Anedda,Kang2017}, remain optimal for $\alpha>0$ small enough. This proof relies on $H$-convergence-like tools, generalising to biharmonic operators a strategy developed in \cite{MazariNadinPrivat}.
The paper is organised as follows. In Section \ref{sec:setting} we specify the precise assumptions for our analysis and state our three main results. In Section \ref{Se:Preliminary} we collect some preliminary technical results. Sections \ref{sec:thm1}--\ref{sec:thm3} are devoted to the proofs of Theorems \ref{Th:ExistMu}--\ref{Th:Stability}. Finally, Section \ref{sec:con} contains a summary of our findings.
\section{Mathematical setting and results}
Throughout the paper, inequalities between functions will always be meant in the $L^1$ sense, namely almost everywhere in the corresponding set where the different quantities are defined.
\label{sec:setting}
\subsection{Optimisation with respect to the thickness}
We first investigate optimisation with respect to the thickness of the plate. Given two positive parameters $\beta_0,\, D_0$, the admissible class of thicknesses $\mathcal N({\Omega})$ is defined in \eqref{eq:N},
where nonetheless we assume that $D_0>\operatorname{Vol}({\Omega})$ in order to ensure that this class is not empty or reduced to a single element. For any $D\in \mathcal N({\Omega})$ we define the first eigenvalue $\mu(D)$ given by the Rayleigh quotient
\begin{equation}\label{Eq:DefMu}\mu(D)=\inf_{u\in W^{1,2}_0({\Omega})\cap W^{2,2}({\Omega}),\, u\neq 0}\frac{\int_{\Omega} D\left(\Delta u\right)^2}{\int_{\Omega} u^2},\end{equation} which is associated with the following eigenequation (where we have chosen an $L^2$ normalisation):
\begin{equation}\label{Eq:MuD}\begin{cases}
\Delta(D\Delta u_D)=\mu(D) u_D\text{ in }{\Omega},
\\ u_D=\Delta u_D=0\text{ on }\partial {\Omega},
\\ \int_{\Omega} u_D^2=1.
\end{cases}\end{equation}
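Let us briefly recall, at a purely formal level, how \eqref{Eq:MuD} arises from \eqref{Eq:DefMu}: if $u_D$ attains the infimum in \eqref{Eq:DefMu}, then stationarity of the Rayleigh quotient at $u_D$ yields, for every $v\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega})$,
\begin{equation*}
\int_{\Omega} D\,\Delta u_D\,\Delta v=\mu(D)\int_{\Omega} u_D\, v,
\end{equation*}
and two integrations by parts, using the boundary conditions $u_D=\Delta u_D=0$ on $\partial{\Omega}$, turn this weak formulation into the strong form $\Delta(D\Delta u_D)=\mu(D)u_D$ in ${\Omega}$.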
We emphasise once again that this corresponds to problem \eqref{Eq:PvIntro} with $g\equiv 1$. The first optimisation problem we consider is
\begin{equation}\label{Eq:PvMu}\inf_{D\in \mathcal N({\Omega})}\mu(D).\end{equation}
Our first result establishes the existence of a solution:
\begin{theorem}[Existence of minimisers]\label{Th:ExistMu}
For any bounded domain ${\Omega}\subset \mathbb{R}^2$ with $\mathscr C^2$ boundary there exists $D^*\in \mathcal N({\Omega})$ such that
\begin{equation}\inf_{D \in \mathcal N({\Omega})} \mu(D)=\mu(D^*).\end{equation}Furthermore, there exists a measurable set $\omega^*\subset {\Omega}$ such that $D^*=1+\beta_0 \mathds 1_{\omega^*}$. \end{theorem}
The proof of this theorem relies on rather fine topological arguments which yield compactness of sequences of minimisers. Let us note that, as is classical in this class of problems, one cannot use the direct method in the calculus of variations: indeed, the best convergence one could get on a minimising sequence $\{D_k\}_{k\in {\mathbb N}}$ (and on the associated sequence of eigenfunctions $\{u_k\}_{k\in {\mathbb N}}$) is the weak-$\ast$ convergence in $L^\infty $ of $\{D_k\}_{k\in {\mathbb N}}$ and weak convergence of $\{u_k\}_{k\in {\mathbb N}}$ in $W^{2,2}({\Omega})$, thus preventing passage to the limit in the Rayleigh-quotient formulation \eqref{Eq:DefMu} of $\{\mu(D_k)\}_{k\in {\mathbb N}}$. This is a known obstruction in the study of two-phases operators \cite{MuratTartar}, impairing the proof of the existence of optimisers for general ${\Omega}$. We present here a way to circumvent this difficulty in the case of biharmonic operators.
In general domains, it is hopeless to give an explicit characterisation of the optimal thickness $D^*$. In the case of a ball, however, using Talenti inequalities, we obtain an inequality of Faber-Krahn type. Consider the case in which our plate coincides with the ball of radius $R>0$ centered at the origin, i.e. ${\Omega}=\mathbb B(0;R)$. Define the function ${\overline D}_\#$ as follows:
\begin{equation}\label{Eq:DefA}
\overline D_\#=(1+\beta_0)\mathds 1_{\mathbb A}+\mathds 1_{\mathbb A^c}
\end{equation}
where, in radial coordinates, ${\mathbb A}=\{r_0<r<R\}$ and $ \operatorname{Vol}(\mathbb A)=({D_0-\operatorname{Vol}({\Omega})})/{\beta_0}$. Note that the set
$\mathbb A$ is uniquely defined and the volume constraint ensures that $\overline D_\#\in \mathcal N({\Omega})$. Our second result reads as follows.
\begin{theorem}[The case of the ball]\label{Th:PV0}
Let ${\Omega}=\mathbb B(0;R)$ for some $R>0$. Then, $\overline D_\#$ minimises $\mu$ in $\mathcal N({\Omega})$, namely,
\begin{equation} \mu({\overline D}_\#) \leq \mu(D) \quad \forall D \in \mathcal N({\Omega}).\end{equation}\end{theorem}
It should be noted that this is the exact opposite of the result obtained by optimising the density $\rho$ (i.e. keeping $D\equiv 1$). In fact, by minimizing w.r.t. $\rho$ it is shown in \cite{Anedda2010,Anedda} that the unique optimal material density $\rho^*$ when ${\Omega}=\mathbb B(0;R)$ corresponds to having a maximal density in the center, and a minimal density close to the boundary: $\rho^*=\mathds 1_{\mathbb B(0;r^*)}$ with $r^*$ chosen so as to satisfy the volume constraint. This observation motivates our interest in investigating optimality with respect to density {\it and} thickness. We tackle this topic in the next subsection, by assuming a linear relation between $\rho$ and $D$.
\subsection{Density-dependent thickness}
In this subsection, we consider another version of \eqref{Eq:PvIntro}-\eqref{Eq:PvIntro2}, by assuming a linear dependence of the thickness $D$ of the plate on the density of the material. In other words, we consider a real parameter $\alpha\geq 0$, and assume that the thickness $D$ depends on the density of the material via the relation
\begin{equation}
D=1+\alpha \rho.\end{equation}
Keeping in mind that $\rho$ corresponds to the distribution of some material inside the elastic plate ${\Omega}$, we recall the admissible class $\mathcal M({\Omega})$ of densities from \eqref{eq:M}
and, for any $\rho \in \mathcal M({\Omega})$, we consider the first eigenvalue $\lambda_\alpha(\rho)$ of $u \mapsto \Delta\Big((1+\alpha \rho)\Delta u\Big)-\rho u$. In its Rayleigh-quotient formulation, this is given by
\begin{equation}\label{Eq:DefLambda}\lambda_\alpha(\rho):=\inf_{u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega}),\, u\neq 0}\frac{\int_{\Omega} (1+\alpha \rho)(\Delta u)^2-\int_{\Omega} \rho u^2}{\int_{\Omega} u^2}.\end{equation} Up to an $L^2$ normalisation, the associated eigenfunction ${u_{\alpha,\rho}}$ satisfies
\begin{equation}\label{Eq:LambdaRho}
\begin{cases}
\Delta \left((1+\alpha \rho)\Delta {u_{\alpha,\rho}}\right)=\lambda_\alpha(\rho){u_{\alpha,\rho}}+\rho{u_{\alpha,\rho}}\text{ in }{\Omega},
\\ {u_{\alpha,\rho}}=\Delta {u_{\alpha,\rho}}=0\text{ on }\partial {\Omega},
\\ \int_{\Omega} {u_{\alpha,\rho}}^2=1.
\end{cases}
\end{equation}
We prove in Lemma \ref{Cl:Misc} that $\lambda_\alpha(\rho)$ is a simple eigenvalue and that the associated first eigenfunction has a constant sign.
For a fixed parameter $\alpha\geq 0$, we consider the optimisation problem
\begin{equation}\inf_{\rho \in \mathcal M({\Omega})}\lambda_\alpha(\rho).\end{equation} We assume that ${\Omega}=\mathbb B(0;R)$ for some $R>0$ and focus on the geometry of minimizers for $\alpha>0$ small. Indeed, an explicit characterisation of the minimisers for $\alpha=0$ was given in \cite{Anedda}: if ${\mathbb B}^*$ is the unique ball centered at the origin, contained in ${\Omega} = {\mathbb B}(0;R)$, with $\operatorname{Vol}({\mathbb B}^*)=\rho_0$, then the unique minimiser of $\lambda_0$ in $\mathcal M({\Omega})$ is
\begin{equation}\rho^*=\mathds 1_{{\mathbb B}^*}.\end{equation} On the other hand, Theorem \ref{Th:PV0} seems to indicate that, for $\alpha\to \infty$, the optimal $\rho$ should behave as $\mathds 1_{\mathbb A}$, where $\mathbb A=\{r_0<r<R\}$ is the only annulus of volume $\rho_0$.
\begin{theorem}[Stability for small $\alpha$ in the ball for radially symmetric distributions]\label{Th:Stability} Let ${\Omega}=\mathbb B(0;R)$ for some $R>0$, and define $\rho^*:=\mathds 1_{{\mathbb B}^*}$. Then, there exists $\overline \alpha>0$ such that, for any $0\leq \alpha\leq \overline \alpha$,
\begin{equation}
\lambda_\alpha(\rho^*)\leq \lambda_\alpha(\rho ) \quad \forall \rho \in \mathcal M({\mathbb B}), \ \rho \text{ radially symmetric}.
\end{equation}
\end{theorem}
The proof of this theorem relies on fine arguments inspired by $H$-convergence theory \cite{Allaire,MuratTartar}, and can be linked to some stationarity results in two-phases problems \cite{Laurain,MazariNadinPrivat}. In the proof, the radial symmetry assumption on competitors is crucial.
\section{Preliminary technical results}\label{Se:Preliminary}
We first gather in this section several preliminary results that are used throughout the rest of the paper.
Let us begin by presenting a straightforward application of the maximum principle.
\begin{lemma}[Positivity principle]\label{Cl:Positivity}
Let $\rho\in \mathcal M({\Omega})$.
Assume that $u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega})$ satisfies, for some $f\in L^2({\Omega})$,
\begin{equation}\begin{cases}
\Delta \left((1+\alpha \rho)\Delta u\right)=f\geq 0\text{ in }{\Omega},
\\ u=\Delta u=0\text{ on }\partial {\Omega}.\end{cases}\end{equation}
Then
\begin{equation}
u\geq 0 \ \text{and} \ \ (1+\alpha \rho)\Delta u\leq 0\text{ in }{\Omega}.\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Cl:Positivity}]
Let $\rho,u$ be as in the statement of the lemma. First of all, by elliptic regularity,
$$(1+\alpha \rho)\Delta u\in W^{2,2}({\Omega}).$$ Let us introduce $z=-(1+\alpha \rho)\Delta u$.
Then $z\in W^{1,2}_0({\Omega})$ satisfies
\begin{equation*}
\begin{cases}
-\Delta z=f\geq 0\text{ in }{\Omega},
\\z=0\text{ on }\partial {\Omega}.
\end{cases}
\end{equation*}
As a consequence of the maximum principle for the Laplacian we obtain
$z\geq 0\text{ in }{\Omega}.$ Hence,
$
\Delta u\leq 0.$
Since $u\in W^{2,2}({\Omega})$, $\Delta u \in L^2({\Omega})$. We can then apply the maximum principle to the inequality
$-\Delta u\geq 0\text{ in }{\Omega}$ to conclude that $u\geq 0$ in ${\Omega}$.
\end{proof}
In the next lemma we collect some basic facts about the underlying spectral and optimisation problems.
\begin{lemma}\label{Cl:Misc}
\begin{enumerate}
\item For any $D\in \mathcal N({\Omega})$, $ \alpha\geq 0$, and $ \rho\in \mathcal M({\Omega})$, the eigenfunctions $u_D$ and ${u_{\alpha,\rho}}$ can be assumed to have constant sign. Hence, the first eigenvalue is the only one whose eigenfunction is constant in sign.
\item There exists $M>0$ such that, for any $D\in \mathcal N({\Omega})$,
\begin{equation}\left|\mu(D)\right|,\, \Vert u_D\Vert_{W^{2,2}({\Omega})}\leq M.\end{equation}
\item For any $\overline \alpha>0$, there exists $M(\overline \alpha)$ such that, for any $\rho \in \mathcal M({\Omega})$ and any $\alpha \in [0;\overline \alpha]$,
\begin{equation}\left|\lambda_\alpha(\rho)\right|,\, \Vert {u_{\alpha,\rho}}\Vert_{W^{2,2}({\Omega})}\leq M(\overline \alpha).\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Cl:Misc}]
To prove point 1 of the lemma, we adapt \cite[Lemma 16]{Berchio2006}. We detail this argument for $\lambda_\alpha(\rho)$ only, as an analogous proof yields the conclusion for $\mu(D)$ as well. In order to prove point 1, it suffices to establish the following fact: for any $u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega})$ and any $\rho \in \mathcal M({\Omega})$ there exists $w\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega})$ such that
\begin{equation}w\geq 0, \ \int_{\Omega} (1+\alpha \rho)(\Delta w)^2-\rho w^2\leq \int_{\Omega} (1+\alpha \rho)(\Delta u)^2-\rho u^2, \ \text{ and} \ \int_{\Omega} w^2\geq \int_{\Omega} u^2. \end{equation} Indeed, since $u\in W^{2,2}({\Omega})$ does not imply $|u|\in W^{2,2}({\Omega})$, it is not possible to simply replace $u$ by its absolute value. Let us hence consider $\rho\in \mathcal M({\Omega}),\, \alpha\geq 0,\, u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega})$ and define $w$ as the unique solution of
\begin{equation}\begin{cases}
-\Delta w=\left|\Delta u\right|\text{ in }{\Omega},
\\ w=0\text{ on }\partial {\Omega}.\end{cases}
\end{equation}
We first observe that
$\int_{\Omega} (1+\alpha \rho)(\Delta w)^2=\int_{\Omega} (1+\alpha \rho)(\Delta u)^2.$
Besides, by the maximum principle,
$w\geq 0\text{ in }{\Omega}.$ Furthermore, from the definition of $w$, we get that
$-\Delta w\geq -\Delta u,\, -\Delta w\geq \Delta u$, whence $w\geq u$, and $w\geq -u$ in ${\Omega}$. As a consequence, $w\geq |u|$ in ${\Omega}$. Thus, $\int_{\Omega} w^2\geq \int_{\Omega} u^2.$
Since $\rho \geq 0$, we have that
$$\int_{\Omega} \rho u^2\leq \int_{\Omega} \rho w^2,$$ which yields the conclusion. It should be noted that this construction proves that any eigenfunction associated with the first eigenvalue has a constant sign, whence the simplicity of the first eigenvalues $\mu(D)$ and $\lambda_\alpha(\rho)$.
We now proceed with the proof of point 2; point 3 follows from the exact same arguments. Let us consider $D\in \mathcal N({\Omega})$. From the Rayleigh-quotient formulation \eqref{Eq:DefMu} of $\mu(D)$, we get that $\mu(D)\geq 0$ (for $\lambda_\alpha(\rho)$, we would get $\lambda_\alpha(\rho)\geq -1$). Let us consider the first eigenvalue $\eta_1({\Omega})$ of the biharmonic operator in ${\Omega}$ defined as
\begin{equation}\eta_1({\Omega}):=\inf_{u\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega}),\, \int_{\Omega} u^2=1}\int_{\Omega} \left(\Delta u\right)^2.\end{equation} Let $w_1$ be an associated eigenfunction. Then
\begin{equation}
\mu(D)\leq \int_{\Omega} D\left(\Delta w_1\right)^2\leq (1+\beta_0)\int_{\Omega} \left(\Delta w_1\right)^2=(1+\beta_0)\eta_1({\Omega}),
\end{equation}
which yields the required uniform bound on the eigenvalue. Next, by multiplying the eigenequation \eqref{Eq:MuD} by $u_D$ and integrating by parts we obtain
\begin{equation*}
\int_{\Omega} \left(\Delta u_D\right)^2\leq \int_{\Omega} D\left(\Delta u_D\right)^2=\mu(D)\leq (1+\beta_0)\eta_1({\Omega}).
\end{equation*}
Since, by elliptic regularity, there exists $C>0$ such that
\begin{equation}
\Vert u_D\Vert_{W^{2,2}({\Omega})}\leq C\Vert \Delta u_D\Vert_{L^2({\Omega})},\end{equation} we obtain the required bound.
\end{proof}
Henceforth, with no loss of generality we assume $u_D$ and $u_{\alpha,\rho}$ to be nonnegative,
up to multiplying them by $-1$.
Our next step is to establish the concavity of
the eigenvalue maps.
\begin{lemma}\label{Le:Concav}
Let $\alpha\geq 0$ be fixed. The two maps
\begin{equation}
\mathcal N({\Omega})\ni D\mapsto \mu(D),\ \mathcal M({\Omega})\ni \rho\mapsto \lambda_\alpha(\rho)
\end{equation}
are concave.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Le:Concav}]
Each of these two maps is defined as an infimum of functionals that are affine in their respective variables; an infimum of affine functionals is concave.
\end{proof}
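For the reader's convenience, let us spell out the computation behind this observation in the case of $\mu$ (the case of $\lambda_\alpha$ is identical): for each fixed admissible $u$, the map $D\mapsto \frac{\int_{\Omega} D(\Delta u)^2}{\int_{\Omega} u^2}$ is affine in $D$, so that, for any $D',D''\in \mathcal N({\Omega})$ and $t\in[0,1]$,
\begin{equation*}
\mu\left(tD'+(1-t)D''\right)=\inf_{u}\left[t\,\frac{\int_{\Omega} D'(\Delta u)^2}{\int_{\Omega} u^2}+(1-t)\,\frac{\int_{\Omega} D''(\Delta u)^2}{\int_{\Omega} u^2}\right]\geq t\,\mu(D')+(1-t)\,\mu(D''),
\end{equation*}
since the infimum of a sum is bounded from below by the sum of the infima.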
This concavity property enables one to write the seemingly naive but in fact crucial reformulation of the eigenvalue problems in terms of bang-bang functions, which we now define.
\begin{definition}
A function $D\in \mathcal N({\Omega})$ is called bang-bang if $D=1+\beta_0\mathds 1_\omega$ for some measurable subset $\omega$ of ${\Omega}$. Such functions are the extremal points of $\mathcal N({\Omega})$ and are denoted $\operatorname{Ext}(\mathcal N({\Omega}))$.
A function $\rho \in \mathcal M({\Omega})$ is called bang-bang if $\rho=\mathds 1_{\omega'}$ for some measurable subset $\omega'$ of ${\Omega}$. Such functions are the extremal points of $\mathcal M({\Omega})$ and are denoted $\operatorname{Ext}(\mathcal M({\Omega}))$.
\end{definition}
The definition of bang-bang functions in terms of extremal points is classical \cite[Proposition 7.2.17]{HenrotPierre}. As an immediate consequence of Lemma \ref{Le:Concav} and of the convexity of the admissible sets $\mathcal M({\Omega})$ and $\mathcal N({\Omega})$, we obtain the following lemma:
\begin{lemma}\label{Cl:Reformulation} We have that
\begin{align*}
&\inf_{D\in \mathcal N({\Omega})}\mu(D)=\inf_{D\in \operatorname{Ext}(\mathcal N({\Omega}))}\mu(D),\\
&\inf_{\rho\in \mathcal M({\Omega})}\lambda_\alpha(\rho)=\inf_{\rho\in \operatorname{Ext}(\mathcal M({\Omega}))}\lambda_\alpha(\rho).
\end{align*}
\end{lemma}
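Let us sketch the mechanism behind Lemma \ref{Cl:Reformulation}, focusing on $\lambda_\alpha$: by concavity, for any finite convex combination $\rho=\sum_i t_i \rho_i$ with $\rho_i\in \operatorname{Ext}(\mathcal M({\Omega}))$ we have
\begin{equation*}
\lambda_\alpha\Big(\sum_i t_i\rho_i\Big)\geq \sum_i t_i\,\lambda_\alpha(\rho_i)\geq \min_i \lambda_\alpha(\rho_i),
\end{equation*}
so the infimum over such combinations is not smaller than the infimum over extreme points. Moreover, being an infimum of weak-$\ast$ continuous affine maps, $\lambda_\alpha$ is weak-$\ast$ upper semicontinuous, and the inequality passes to weak-$\ast$ limits of convex combinations, which exhaust $\mathcal M({\Omega})$.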
\section{Proof of Theorem \ref{Th:ExistMu}}
\label{sec:thm1}
The proof relies on several preliminary results, which we recall in Subsection \ref{sub:re}; the proof itself is presented in Subsection \ref{sec:add}.
\subsection{Preliminary material about rearrangements}
\label{sub:re}
Let us briefly recall the key concepts of the Schwarz rearrangement. For a comprehensive introduction to rearrangements, we refer to \cite{Bandle,Kawohl,Kesavan}. For a $\mathscr C^2$ domain ${\Omega}$ of ${\mathbb R}^2$, let ${\Omega}^\#=\mathbb B(0; R^\#)$ be the centered ball with the same volume as ${\Omega}$. For any function ${\varphi}\in L^2({\Omega}),\, {\varphi}\geq 0$, the Schwarz rearrangement of ${\varphi}$ is the unique radially symmetric non-increasing function ${\varphi}^\#:{\Omega}^\#\to {\mathbb R}_+$ such that, for any $t\geq 0,$
\begin{equation}\operatorname{Vol}\left(\{{\varphi}>t\}\right)=\operatorname{Vol}\left(\{{\varphi}^\#>t\}\right).\end{equation}
Of particular importance are the following properties of this rearrangement:
\begin{enumerate}
\item Equimeasurability: for any ${\varphi} \in L^2({\Omega})$, $ {\varphi}\geq 0$, $$\Vert {\varphi}\Vert_{L^2({\Omega})}=\Vert{\varphi}^\#\Vert_{L^2({\Omega}^\#)}.$$
\item Hardy-Littlewood inequality: for any non-negative functions ${\varphi}_0,{\varphi}_1\in L^2({\Omega})$,
$$\int_{\Omega} {\varphi}_0{\varphi}_1\leq \int_{{\Omega}^\#}{\varphi}_0^\#{\varphi}_1^\#.$$
\end{enumerate}
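As a simple example, which is used implicitly below when rearranging bang-bang functions: if ${\varphi}=\mathds 1_\omega$ for a measurable set $\omega\subset {\Omega}$, then
\begin{equation*}
{\varphi}^\#=\mathds 1_{\mathbb B(0;r)}, \qquad \text{with } r \text{ such that } \operatorname{Vol}(\mathbb B(0;r))=\operatorname{Vol}(\omega),
\end{equation*}
since $\{{\varphi}>t\}=\omega$ for $0\leq t<1$ and $\{{\varphi}>t\}=\emptyset$ for $t\geq 1$.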
Another key tool is the Talenti inequality \cite{Talenti}, which reads as follows.
\begin{proposition}[Talenti inequality, {\cite[Theorem 1]{Talenti}}]\label{Pr:Talenti} Let ${\Omega}$ be a Lipschitz bounded domain, and let ${\mathbb B}$ be the ball centered at the origin such that $\operatorname{Vol}({\Omega})=\operatorname{Vol}({\mathbb B})$. Let $\psi\in L^2({\Omega}),\, \psi\geq 0$. Let $\phi\in W^{1,2}({\Omega})$ be the solution of
\begin{equation}\begin{cases}
-\Delta \phi=\psi\text{ in }{\Omega},
\\ \phi=0\text{ on }\partial {\Omega},\end{cases}
\end{equation}
and $\tilde\phi$ be the solution of
\begin{equation}\begin{cases}
-\Delta \tilde\phi=\psi^\#\text{ in }{\mathbb B},
\\ \tilde\phi=0\text{ on }\partial {\mathbb B}.\end{cases}
\end{equation}
Then the inequality
\begin{equation}
\phi^\#\leq \tilde\phi\end{equation} holds pointwise in ${\mathbb B}$.
\end{proposition}
The proof of Theorem \ref{Th:ExistMu} relies on some results of Alvino, Lions, and Trombetti \cite{Alvino1989}. These results have been used to show existence properties for two-phases optimisation problems in the case of balls by Conca, Mahadevan, and Sanz \cite{ConcaMahadevanSanz}. The strategy from \cite{Alvino1989} reads as follows: using a suitable rearrangement one checks that, when ${\Omega}=\mathbb B(0;R)$ for a suitable $R>0$, one can restrict to minimising sequences of radially symmetric functions. Such symmetry then enables us to use a powerful compactness result to obtain existence of a minimiser. What is notable in our approach is that the structure of the biharmonic operator is such that we do not require any symmetry property of the domain, nor of the elements of the minimising sequence.
Let us introduce a comparison relation: for any two functions $f,g\in L^2({\Omega}),\, f,g\geq 0$, we write
\begin{equation}
f\prec g\end{equation} if, for any $r\in [0; R^\#]$,
\begin{equation}
\int_{\mathbb B(0;r)} f^\#\leq \int_{\mathbb B(0;r)} g^\#\end{equation} and if
\begin{equation}
\int_{ {\mathbb B}(0,R^\#)} f^\#=\int_{ {\mathbb B}(0,R^\#)} g^\#.\end{equation}
\begin{remark}\textit{ It should be noted that if $g$ is in $L^\infty$, and if $f\prec g$, then $f$ is in $L^\infty$ as well and $ \Vert f\Vert_{L^\infty}\leq \Vert g\Vert_{L^\infty}$}.
\end{remark}
Let ${\Omega}^\#=\mathbb B(0; R^\#)$, and let ${\mathbb B}^*:={\mathbb B}(0;r^*)$ be the only ball centered at the origin of volume $({D_0-{\operatorname{Vol}}({\Omega})})/{\beta_0}$.
We define $\overline D^\#$ as
\begin{equation}\overline D^\#=1+\beta_0\mathds 1_{{\mathbb B}^*}.\end{equation} First of all let us notice that for any $D\in \mathcal N({\Omega})$ we have
\begin{equation}D^\#\prec \overline D^{\#} \ \ \text{and} \ \ \int_{\Omega} D=\int_{{\Omega}^\#}\overline D^\#.\end{equation}
We define the class
\begin{equation}
\mathscr C\left(\overline D^\#\right):=\left\{f\in L^2({\Omega})\ : \ f\geq 0,\, f^\#=\overline D^\#\right\},\end{equation} which exactly corresponds to the set of bang-bang functions:
\begin{equation}\operatorname{Ext}(\mathcal N({\Omega}))=\mathscr C\left(\overline D^\#\right).\end{equation}
The class $\mathscr C\left(\overline D^\#\right)$ is not closed under weak-$\ast$ $L^\infty $ convergence. Its weak-$\ast$ $L^\infty $ closure has been proved in \cite{Alvino1989} to be
\begin{equation}
\mathscr K\left(\overline D^\#\right):=\left\{f\in L^2({\Omega})\ : \ f\geq 0,\, f\prec \overline D^\#\right\}.\end{equation}
From \cite[Theorem 2.2]{Alvino1989}, $\mathscr K\left(\overline D^\#\right)$ is closed and weakly-$\ast$ compact for the $L^\infty $-topology (this result is a generalisation of a result by Migliaccio \cite{Migliaccio}). Furthermore, from \cite[Theorem 2.2]{Alvino1989} we have
\begin{equation}
\operatorname{Ext}\left(\mathscr K\left(\overline D^\#\right)\right)=\mathscr C\left(\overline D^\#\right).
\end{equation}
As a consequence of the general result \cite[Proposition 2.1]{HenrotPierre}, or directly from the weak-$\ast$ convergence to extreme points of convex sets, if a sequence $\{f_k\}_{k\in {\mathbb N}}\in \mathscr K\left(\overline D^\#\right)^{\mathbb N}$ weakly-$\ast$ converges to $f\in \mathscr C\left(\overline D^\#\right)$, then the convergence is strong in $L^p$ for every $p\in [1;+\infty)$; see \cite{Visintin}.
\subsection{Proof of Theorem \ref{Th:ExistMu}}\label{sec:add}
What should be noted is that, here, the weak-$\ast$ $L^\infty $ convergence of a sequence $\{D_k\}_{k\in {\mathbb N}}\in \mathcal N({\Omega})^{\mathbb N}$ does not imply the convergence of the associated sequence of eigenvalues $\{\mu(D_k)\}_{k\in {\mathbb N}}$. As is clear from the eigenequation
\begin{equation}\label{Eq:Mu}
\begin{cases}
\Delta \left(D\Delta u_D\right)=\mu(D)u_D\text{ in }{\Omega},
\\ u_D=\Delta u_D=0\text{ on }\partial {\Omega},\end{cases} \end{equation}
the correct notion, implying the convergence of the eigenvalues, is the convergence of the sequence $\left\{\frac1{D_k}\right\}_{k\in {\mathbb N}}$.
\begin{lemma}\label{Cl:CvInv}
Let $\delta$ and $M_1$ be two positive constants. Let $\{D_k\}_{k\in {\mathbb N}}\in L^\infty({\Omega})^{\mathbb N}$ be such that $\inf_{k,{\Omega}}D_k\geq \delta>0$ and $\sup_k \Vert D_k\Vert_{L^\infty({\Omega})}\leq M_1$. Assume there exists $C_\infty\in L^\infty({\Omega})$ such that
\begin{equation}
\frac1{D_k}\underset{k\rightarrow +\infty}\rightharpoonup C_\infty \text{ weakly-$\ast$ in $L^\infty$. }
\end{equation}
Then, up to a subsequence,
\begin{equation}
\mu (D_k)\underset{k\to \infty}\rightarrow \mu\left(\frac1{C_\infty}\right).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Cl:CvInv}]
To lighten notations, for any $k\in {\mathbb N}$ we denote by
$u_k$ the eigenfunction associated with $\mu(D_k)$ and we define \begin{equation}\label{Eq:uk}z_k = -D_k \Delta u_k.\end{equation}
From Lemma \ref{Cl:Positivity}, we have $z_k=0$ on ${\varphi}artial {\Omega}$ and $z_k\geq 0$ in ${\Omega}$.
Furthermore from Lemma \ref{Cl:Misc} the sequence $\left\{\mu(D_k)\right\}_{k\in{\mathbb N}}$ is bounded. We can thus choose $\mu_\infty\in {\mathbb R}$ such that
$\mu(D_k)\underset{k\to \infty}\rightarrow \mu_\infty$ for some not relabelled subsequence.
By assumption, we know that
$\frac1{D_k}\underset{k\to \infty}\rightharpoonup C_\infty \text{ weakly-$\ast$ in $L^\infty$}.$ Since
$\frac1{1+\beta_0}\leq \frac1{D_k}\leq 1$, the same $L^\infty$ bounds hold for $\frac1{C_\infty}.$
By Lemma \ref{Cl:Misc}, we have a uniform $W^{2,2}({\Omega})$ bound on the family $\{u_k\}_{k\in {\mathbb N}}$. Since $z_k$ solves the equation
\begin{equation}\label{Eq:zk}
\begin{cases}
-\Delta z_k=\mu(D_k)u_k\text{ in }{\Omega},
\\ z_k=0\text{ on }\partial {\Omega}\end{cases}\end{equation}
we obtain a uniform $W^{1,2}_0({\Omega})$ bound on $\{z_k\}_{k\in {\mathbb N}}$, namely, there exists $M$ such that
\begin{equation}
\forall k\in {\mathbb N},\, \Vert z_k\Vert_{W^{1,2}_0({\Omega})}\leq M.\end{equation}
As a consequence, there exists $z_\infty \in L^2({\Omega})$ such that
\begin{equation}z_k\underset{k\to \infty}\rightarrow z_\infty \text{ weakly in $W^{1,2}_0({\Omega})$ and strongly in $L^2({\Omega})$}\end{equation}
for some not relabelled subsequence.
There also exists $u_\infty\in W^{2,2}({\Omega})\cap W^{1,2}_0({\Omega})$ such that
\begin{equation}u_k\underset{k\to \infty}\rightarrow u_\infty \text{ weakly in $W^{2,2}({\Omega})$ and strongly in $W^{1,2}_0({\Omega})$}\end{equation}
for some not relabelled subsequence.
Passing to the limit in the weak formulation of \eqref{Eq:uk}, the triple $(u_\infty,C_\infty,z_\infty)$ solves
\begin{equation}
\begin{cases}
-\Delta u_\infty=C_\infty z_\infty \text{ in }{\Omega},
\\u_\infty=0\text{ on }\partial {\Omega}, \end{cases}\end{equation} and since, for any $k$, $u_k\geq 0$ and $\int_{\Omega} u_k^2=1$, we have
$$u_\infty\geq 0 \ \ \text{and} \ \ \int_{\Omega} u_\infty^2=1.$$
Passing to the limit in the weak formulation \eqref{Eq:zk} we obtain that $(z_\infty,\mu_\infty,u_\infty)$ solves
\begin{equation}
\begin{cases}
-\Delta z_\infty=\mu_\infty u_\infty\text{ in }{\Omega},
\\ z_\infty=0\text{ on }\partial {\Omega}.
\end{cases}
\end{equation}
As a consequence, $(C_\infty,u_\infty)$ solves
\begin{equation}
\begin{cases}
\Delta\left(\frac1{C_\infty}\Delta u_\infty\right)=\mu_\infty u_\infty\text{ in }{\Omega},
\\ u_\infty=\Delta u_\infty=0\text{ on }\partial {\Omega},
\\ u_\infty\geq 0,\, \int_{\Omega} u_\infty^2=1.
\end{cases}
\end{equation}
However, the first eigenvalue being the only one with a constant-sign eigenfunction, we conclude that $(u_\infty,\mu_\infty)$ is the first eigenpair associated with $\frac1{C_\infty}$ or, in other words, that
$\mu_\infty=\mu\left(\frac1{C_\infty}\right).$ Thus, the sequence $\{\mu(D_k)\}_{k\in {\mathbb N}}$ has a unique accumulation point, and hence the entire sequence converges, so that
\begin{equation}\underset{k\to \infty}\lim \mu(D_k)=\mu\left(\frac1{C_\infty}\right).\end{equation}
\end{proof}
We now treat the optimisation problem \eqref{Eq:PvMu} in a slightly different way. For any $D\in L^\infty({\Omega})$ with $ \inf D>0$ we set
\begin{equation}\eta(D):=\mu\left(\frac1D\right).\end{equation} We recall that from Lemma \ref{Cl:Reformulation} and Subsection \ref{sub:re} we have
$$\inf_{D\in \mathcal N({\Omega})}\mu(D)=\inf_{\{D\in \mathcal N({\Omega}):\, D^\#=\overline D^\#\}}\mu(D).$$ Since $ \mathscr C\left(\overline D^\#\right)=\{D\in \mathcal N({\Omega})\ : \ D^\#=\overline D^\#\}$ this can be equivalently rewritten as
\begin{equation}
\inf_{D\in \mathcal N({\Omega})}\mu(D)=\inf_{D\in \mathscr C\left(\overline D^\#\right)}\mu(D).
{\varepsilon}nd{equation}
Finally, since $\overline D^\#$ is bang-bang, $D\in \mathscr C\left(\overline D^\#\right)$ if and only if $\frac1D\in \mathscr C\left(\left(\frac1{\overline D^\#}\right)^\#\right)$.
Given the definition of $\eta$, problem \eqref{Eq:PvMu} is equivalent to
\begin{equation}\label{Eq:La}
\inf_{ \left\{E\in \mathscr C\left(\left(\frac1{\overline D^\#}\right)^\#\right)\right\}}\eta\left(E\right),\end{equation} in the sense that, if $E$ solves \eqref{Eq:La} then $\frac1E$ solves \eqref{Eq:PvMu}.
The key lemma is thus the following:
\begin{lemma}\label{Cl:Eta}
\begin{enumerate}
\item The variational problem
\begin{equation}\label{Eq:PvEta}
\inf_{E\in \mathscr K \left(\left(\frac1{\overline D^\#}\right)^\#\right)}\eta\left(E\right)\end{equation}
has a solution $E^*$.
\item The solutions of the variational problem \eqref{Eq:PvEta} belong to $ \mathscr C\left(\left(\frac1{\overline D^\#}\right)^\#\right)$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Cl:Eta}] Point 1. The existence of a minimiser for problem \eqref{Eq:PvEta} follows from the weak-$\ast$ $L^\infty$ compactness of the set $ \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right)$. Let $\{E_k\}_{k\in {\mathbb N}}\in \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right)^{\mathbb N}$ be a minimising sequence, and let $E_\infty\in \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right)$ be one of its weak-$\ast$ closure points. From Lemma \ref{Cl:CvInv},
\begin{equation}\eta(E_k)=\mu\left(\frac1{E_k}\right)\underset{k\to \infty}\rightarrow \mu\left(\frac1{E_\infty}\right)=\eta\left(E_\infty\right).\end{equation} Hence, $E_\infty$ is a solution of \eqref{Eq:PvEta}.
Point 2. To prove the second point of the lemma, it suffices to prove that no interior point $E\in \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right)$ satisfies the local first-order optimality conditions. By standard theorems \cite{Kato}, the simplicity of $\eta\left(E\right)$, obtained as in Lemma \ref{Cl:Misc}, enables us to differentiate it with respect to $E$. Let $E\in \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right)$ and let $h$ be an admissible perturbation at $E$ (i.e., $E+th\in \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right)$ for $t>0$ small enough). For the sake of notational simplicity, let $u_E$ be the eigenfunction associated with $\eta(E)$. Let $\dot \eta$ and $\dot u$ be the derivatives with respect to $t$, evaluated at the origin, of $\eta(E+th)$ and of its associated eigenfunction, respectively. Then, $(\dot u,\, \dot \eta)$ solves
\begin{equation}
\label{eq:system}
\begin{cases}
\Delta \left(\frac1E\Delta \dot u\right)-\Delta \left(\frac{h}{E^2} \Delta u_E\right)=\dot \eta u_E+\eta\left(E\right)\dot u\text{ in }{\Omega},
\\ \dot u=\Delta \dot u=0\text{ on }\partial {\Omega},
\\ \int_{\Omega} u_E\dot u=0.
\end{cases}
\end{equation}
As a consequence, testing \eqref{eq:system} against $u_E$, using the eigenequation for $u_E$, integrating by parts, and using the fact that $\int_{{\Omega}} u_E^2=1$, we find
\begin{equation}\dot \eta=\int_{\Omega} \frac{h}{E^2}\left(\Delta u_E\right)^2.\end{equation}
Thus, if $E$ is not a bang-bang function, that is, if $\omega_0:=\left\{\frac1{1+\beta_0}<E<1\right\}$ is a set of positive measure, there exists a constant $C$ such that $\frac{(\Delta u_E)^2}{E^2}=C\text{ in }\omega_0,$
see for instance \cite[Theorem 1, Remark 1]{Privat2015}.
Plugging this into the eigenequation
$\Delta\left(\frac1E\Delta u_E\right)=\eta(E)u_E$ we obtain
\begin{equation*}
u_E=0\text{ in }\omega_0.
\end{equation*} This contradicts the positivity of $u_E$ inside ${\Omega}$, which is a consequence of the strong maximum principle and of Lemma \ref{Cl:Misc}.
\end{proof}
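For the reader's convenience, the final implication in the proof above can be expanded in one line. This is only a sketch: it assumes, consistently with Lemma \ref{Cl:Positivity}, that $\Delta u_E\leq 0$ in ${\Omega}$, and uses the fact that the eigenvalue $\eta(E)$ does not vanish:
$$\frac{(\Delta u_E)^2}{E^2}=C\ \text{ in }\omega_0
\quad\Longrightarrow\quad
\frac{\Delta u_E}{E}\equiv-\sqrt{C}\ \text{ in }\omega_0
\quad\Longrightarrow\quad
\eta(E)\,u_E=\Delta\!\left(\frac1E\,\Delta u_E\right)=0\ \text{ in }\omega_0,$$
whence $u_E=0$ in $\omega_0$.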
Relying on Lemma \ref{Cl:Eta}, we can eventually prove Theorem \ref{Th:ExistMu} by computing
\begin{align*}\inf_{D \in \mathscr C\left(\overline D^\#\right)}\mu(D)&=\inf_{D\in \mathscr C\left(\overline D^\#\right)}\eta\left(\frac1D\right)=\min_{E\in \mathscr K\left(\left(\frac1{\overline D^\#}\right)^\#\right) }\eta(E)\\
&\quad =\min_{E\in \mathscr C\left(\left(\frac1{\overline D^\#}\right)^\#\right) }\eta(E)=\eta(E^*)=\mu\left(\frac1{E^*}\right).\end{align*}
Since $ E^* \in \mathscr C\left(\left(\frac1{\overline D^\#}\right)^\#\right)$, we have that $\frac{1}{ E^*}\in \mathscr C\left(\overline D^\#\right)$. This entails the existence of a minimiser, hence Theorem \ref{Th:ExistMu} holds.
\section{Proof of Theorem \ref{Th:PV0}}
\label{sec:thm2}
Recall that here ${\Omega}={\mathbb B}(0,R)$ for some $R>0$.
The core idea of the proof is to use the Talenti inequality, as was done in \cite{Anedda} to solve \eqref{Eq:PvMu}. Let us briefly recall this inequality.
Let $D\in \mathcal N({\Omega})$ and let $u_D$ be the associated eigenfunction solving \eqref{Eq:MuD}. Let $z_D$ be defined by
\begin{equation}\label{Eq:Mainz}
-\Delta u_D=z_D\text{ in }{\Omega}.
\end{equation}
Since $u_D\in H^2({\Omega})$ we have that $z_D\in L^2({\Omega})$.
From $\Delta u_D=0$ on $\partial {\Omega}$ we obtain $z_D=0$ on $\partial {\Omega}$. Furthermore, from Lemma \ref{Cl:Positivity}, there holds $z_D \geq 0$ in ${\Omega}$. Let us consider its Schwarz rearrangement $z_D^\#$. Since ${\Omega}$ is a ball centered at the origin, clearly ${\Omega}^\#={\Omega}$. Let $\tilde u_D$ be the solution of
\begin{equation}\label{Eq:Maintz}
\begin{cases}
-\Delta \tilde u_D=z_D^\#\text{ in }{\Omega},
\\ \tilde u_D=0\text{ on }\partial {\Omega}.
\end{cases}
\end{equation}
From the Talenti inequality, Proposition \ref{Pr:Talenti}, we have
\begin{equation}
0\leq u_D^\#\leq \tilde u_D\text{ in }{\Omega}.\end{equation} This inequality holds pointwise and hence guarantees
\begin{equation}\label{Eq:Norm}
1=\int_{\Omega} u_D^2=\int_{{\Omega}}\left(u_D^\#\right)^2\leq \int_{{\Omega}} \tilde u_D^2.
\end{equation} Furthermore, since $z_D^\#$ is a rearrangement of $z_D$, for any $V \in [0;{\operatorname{Vol}}({\Omega})]$ we have that
\begin{equation}\label{Eq:Bathtub}\inf_{F\subset {\Omega},\, {\operatorname{Vol}}(F)=V}\int_{F} z_D^2= \inf_{G\subset {\mathbb B},\, {\operatorname{Vol}}(G)=V}\int_{G}(z_D^\#)^2.\end{equation}
Take now $V=({D_0-{\operatorname{Vol}}({\Omega})})/{\beta_0}$. The function $z_D^\#$ being non-increasing, the so-called {\it bathtub} principle \cite[Theorem 1.14]{LiebLoss} ensures that
\begin{equation}\label{Eq:Bathtub2}\inf_{G\subset {\mathbb B},\, {\operatorname{Vol}}(G)=V}\int_{{\Omega}}(1+\beta_0\mathds 1_G)(z_D^\#)^2=\int_{\mathbb B}\overline D_\# (z_D^\#)^2.
\end{equation}
On the one hand, the Schwarz rearrangement is measure preserving, hence
\begin{equation}
\int_{\Omega} (\Delta u_D)^2=\int_{\Omega} z_D^2=\int_{\mathbb B} (z_D^\#)^2=\int_{\mathbb B} \left(\Delta \tilde u_D\right)^2.\end{equation}
On the other hand, from \eqref{Eq:Bathtub}--\eqref{Eq:Bathtub2} we get
\begin{equation}\label{Eq:Dr}
\int_{\Omega} D(\Delta u_D)^2\geq \int_{\mathbb B} \overline D_\# \left(\Delta \tilde u_D\right)^2.
\end{equation}
Combining \eqref{Eq:Dr} with \eqref{Eq:Norm} and plugging these estimates into the Rayleigh-quotient formulation of the eigenvalues, we obtain
\begin{equation}
\mu(D)=\frac{\int_{\Omega} D(\Delta u_D)^2}{\int_{\Omega} u_D^2}\geq \frac{\int_{\mathbb B} \overline D_\#(\Delta \tilde u_D)^2}{\int_{\Omega} \tilde u_D^2} \geq \mu(\overline D_\#),\end{equation}
and the assertion follows.
\section{Proof of Theorem \ref{Th:Stability}}
\label{sec:thm3}
We are now working under the assumption that ${\Omega}={\mathbb B}(0,R)$ for some $R>0$. Recall that $\rho^*=\mathds 1_{\mathbb B^*}$ is the characteristic function of a ball centered at the origin and of volume $V_0$. By the same arguments as in \cite[Theorem 3.3]{Anedda}, $\rho^*$ is the unique minimiser of $\lambda_0$ in $\mathcal M({\Omega})$ and $u_{0,\rho^*}$ is radially symmetric and non-increasing.
Furthermore, $u_{0,\rho^*}$ is strictly decreasing and there holds \begin{equation}\label{Eq:De}\forall \varepsilon>0,\, \exists \delta(\varepsilon)>0,\, \forall r \in (\varepsilon;R]: \quad \left|\frac{\partial u_{0,\rho^*}}{\partial r}\right|\geq \delta (\varepsilon).\end{equation}
Indeed, this follows from the following fact: replacing $u_{0,\rho^*}$ with the solution $w$ to
$$\begin{cases}-\Delta w=|\Delta u_{0,\rho^*}|^*&\text{ in }{\mathbb B}(0,R),
\\ w\in W^{1,2}_0({\Omega}),\end{cases}$$ we obtain, combining the arguments of Lemma \ref{Cl:Misc} and the Talenti inequality, that
$$\frac{\int_{\Omega}\left(\Delta w\right)^2-\int_{\Omega} \rho^* w^2}{\int_{\Omega} w^2}\leq \frac{\int_{\Omega}\left(\Delta u_{0,\rho^*}\right)^2-\int_{\Omega} \rho^* u_{0,\rho^*}^2}{\int_{\Omega} u_{0,\rho^*}^2}.$$ Thus, $w$ is also an eigenfunction. By simplicity of $\lambda_0(\rho^*)$, $u_{0,\rho^*}$ and $w$ are linearly dependent. As a consequence, $u_{0,\rho^*}=cw$ for some constant $c>0$; this sign condition comes from the fact that both $u_{0,\rho^*}$ and $w$ are non-negative. Thus, it follows that $-\Delta u_{0,\rho^*}=\left| \Delta u_{0,\rho^*}\right|^*$. Since $\Delta u_{0,\rho^*}\neq 0$, $\left| \Delta u_{0,\rho^*}\right|^*(0)>0$. Setting $z=\Delta u_{0,\rho^*}=-|\Delta u_{0,\rho^*}|^*$ we have, in radial coordinates,
$$r\frac{\partial u_{0,\rho^*}}{\partial r}(r)=\int_0^r\tau z(\tau)\,d\tau<0,$$ which concludes the proof.
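The last identity can be recovered directly from the radial form of the Laplacian in dimension $2$ (the relevant case for plates); this is a sketch, assuming enough smoothness and writing $u=u_{0,\rho^*}$:
$$\Delta u=\frac1r\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)=z
\quad\Longrightarrow\quad
r\frac{\partial u}{\partial r}(r)=\int_0^r \tau\,z(\tau)\,d\tau .$$
Since $z=-|\Delta u_{0,\rho^*}|^*\leq 0$ with $z(0)<0$, the integral on the right-hand side is strictly negative for every $r\in(0,R]$, which yields the strict monotonicity \eqref{Eq:De} on every interval $(\varepsilon;R]$.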
We prove Theorem \ref{Th:Stability} by contradiction: assume that, for any $\alpha>0$, there exists a radially symmetric $\rho_\alpha \in \mathcal M({\Omega})$ such that
\begin{equation}
\label{eq:contr}
\lambda_\alpha(\rho_\alpha)\leq \lambda_\alpha(\rho^*), \ \rho_\alpha \neq \rho^*.\end{equation}
We begin with a preliminary lemma.
\begin{lemma}\label{Cl:Convergence}
We can assume that $\rho_\alpha$ is a bang-bang function. Furthermore, we have $\rho_\alpha\underset{\alpha \to 0}\rightarrow \rho^*$ strongly in $L^1$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Cl:Convergence}]
The first point follows from the concavity of the functional. For the second point, we first observe that, for any weak-$\ast$ $L^\infty$ closure point $\rho_0$ of $\{\rho_\alpha\}_{\alpha\to 0}$, there holds $\lambda_0(\rho_0)\leq \underset{\alpha\to 0}{\lim\inf}\,\lambda_\alpha(\rho_\alpha)$: setting, for notational convenience, $u_\alpha:=u_{\alpha,\rho_\alpha}$, we have, from Lemma \ref{Cl:Misc}, a uniform $W^{2,2}({\Omega})$ bound on this sequence. This allows us to pick a weak $W^{2,2}({\Omega})$ and strong $L^2({\Omega})$ closure point $u_0$ of a not relabelled subsequence.
By weak lower semicontinuity of convex functions,
$$\int_{\Omega} (\Delta u_0)^2\leq \underset{\alpha \to 0}{\lim \inf}\int_{\Omega} (1+\alpha \rho_\alpha)(\Delta u_\alpha)^2,$$ while
$$\int_{\Omega} \rho_0u_0^2=\underset{\alpha \to 0}\lim \int_{\Omega} \rho_\alpha u_\alpha^2,\, \int_{\Omega} u_0^2=1.$$ From the variational formulation \eqref{Eq:DefLambda} of $\lambda_0(\rho_0)$, we obtain the conclusion. Let us then observe that for any fixed $\rho\in \mathcal M({\Omega})$ (in particular for $\rho=\rho^*$), there holds $\lambda_0(\rho)=\underset{\alpha \to 0}\lim \lambda_\alpha(\rho)$. Passing to the limit in the inequality $\lambda_\alpha(\rho_\alpha)\leq \lambda_\alpha(\rho^*)$ we obtain $\lambda_0(\rho_0)\leq \lambda_0(\rho^*)$. Since $\rho^*$ is the unique minimiser of $\lambda_0$ we have $\rho_0=\rho^*$. As $\rho^*$ is an extreme point of $\mathcal M({\Omega})$, from \cite[Proposition 2.1]{HenrotPierre} this convergence is strong in $L^1$.
\end{proof}
Henceforth, we assume that the sequence $\{\rho_\alpha\}_{\alpha \to 0}$ fulfilling \eqref{eq:contr} consists of bang-bang functions.
We use this information to proceed with the proof, which rests upon fine properties of the switch function. We need one of the core ideas of $H$-convergence to make sure this function is regular enough. Let us explain why some concepts from homogenisation are needed: if we consider the map $D\mapsto \lambda_\alpha(D)$ and if we define $u_{\alpha,\rho}$ as the eigenfunction associated with $\lambda_\alpha(\rho)$, the simplicity of the eigenvalue (Lemma \ref{Cl:Misc}) ensures that $\rho\mapsto (\lambda_\alpha(\rho),\, u_{\alpha,\rho})$ is G\^ateaux-differentiable. Furthermore, for any $\rho \in \mathcal M({\mathbb B}(0,R))$ and any admissible perturbation $h$ at $\rho$ (i.e., a function $h$ such that, for every $\varepsilon>0$ small enough, $\rho +\varepsilon h\in \mathcal M({\mathbb B}(0,R))$), the G\^ateaux-derivatives $\dot u_{\alpha,\rho}$ and $\dot\lambda_\alpha(\rho)$ (we omit the dependency on $h$ for notational convenience) solve
\begin{equation}\label{Eq:GateauxDerivative}
\begin{cases}
\Delta \left((1+\alpha \rho)\Delta \dot u_{\alpha,\rho}\right)+\alpha\Delta\left(h\Delta u_{\alpha,\rho}\right)=\left(\lambda_{\alpha,\rho}{+}\rho\right)\dot u_{\alpha,\rho}+\left(\dot \lambda_{\alpha,\rho}{+}h\right){u_{\alpha,\rho}} \text{ in }{\mathbb B}(0,R),
\\ {\dot u_{\alpha,\rho}}=\Delta {\dot u_{\alpha,\rho}}=0\text{ on }\partial {\mathbb B}(0,R),
\\ \int_{{\mathbb B}(0,R)} {u_{\alpha,\rho}} {\dot u_{\alpha,\rho}}=0.
\end{cases}
\end{equation}
Multiplying the equation by ${u_{\alpha,\rho}}$, integrating by parts, and using the equation \eqref{Eq:LambdaRho} for ${u_{\alpha,\rho}}$, we obtain the following expression for $\dot\lambda_\alpha(\rho)$:
\begin{equation}\label{Eq:LambdaDot}
\dot \lambda_\alpha(\rho)=\int_{{\mathbb B}(0,R)} h\left\{\alpha\left(\Delta {u_{\alpha,\rho}}\right)^2-{u_{\alpha,\rho}}^2\right\}.
\end{equation}
This leads us to define the switch function associated with the problem as
\begin{equation}\label{Eq:SwitchDeb}U_{\alpha,\rho}:= \alpha\left(\Delta {u_{\alpha,\rho}}\right)^2-{u_{\alpha,\rho}}^2 .\end{equation} In other words, with this approach, we have
$\dot \lambda_\alpha(\rho)=\int_{{\mathbb B}(0,R)} U_{\alpha,\rho}\, h.$ Ideally, we would use Lemma \ref{Cl:Convergence} to approximate $U_{\alpha,\rho}$ by $U_{0,\rho^*}$ in the $\mathscr C^1$ norm. However, since $\Delta {u_{\alpha,\rho}}$ is merely $L^\infty$, $U_{\alpha,\rho}$ is not regular enough.
To overcome this problem, we rely on some general ideas borrowed from $H$-convergence and homogenisation theory \cite{Allaire,MuratTartar}. We introduce, for any $\rho \in \mathcal M({\mathbb B}(0,R))$, the harmonic mean $\mathscr J_-(\rho)$ of $1+\alpha \rho$, defined as
\begin{equation}\mathscr J_-(\rho):=\frac{1+\alpha}{1+\alpha(1-\rho)}.\end{equation}
We define an auxiliary eigenvalue $\Lambda_\alpha(\rho)$ as follows:
\begin{equation}
\Lambda_\alpha(\rho):=\min_{u\in W^{2,2}({\mathbb B}(0,R))\cap W^{1,2}_0({\mathbb B}(0,R)),\ u\neq 0}\frac{\int_{{\mathbb B}(0,R)} \mathscr J_-(\rho)(\Delta u)^2-\int_{{\mathbb B}(0,R)} \rho u^2}{\int_{{\mathbb B}(0,R)} u^2}.
\end{equation} From this variational formulation, since $\rho\mapsto \mathscr J_-(\rho)$ is concave, we have that $\rho\mapsto \Lambda_\alpha(\rho)$ is concave too. If $\rho$ is a bang-bang function, that is, if $\rho=\mathds 1_E$ for some measurable subset $E$, then
$\mathscr J_-(\rho)=1+\alpha \rho$, so that
\begin{equation}\lambda_\alpha(\rho)=\Lambda_\alpha(\rho)\ \text{ for any bang-bang }\rho.
\end{equation}
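Both properties of $\mathscr J_-$ used here can be checked directly. Reading $\mathscr J_-(\rho)$ as the weighted harmonic mean of the two phase values $1$ and $1+\alpha$ with weights $1-\rho$ and $\rho$ (our interpretation, consistent with the displayed formula), we get
$$\mathscr J_-(\rho)=\left(\frac{1-\rho}{1}+\frac{\rho}{1+\alpha}\right)^{-1}=\frac{1+\alpha}{(1-\rho)(1+\alpha)+\rho}=\frac{1+\alpha}{1+\alpha(1-\rho)},$$
and, for bang-bang densities, $\mathscr J_-(0)=1$ while $\mathscr J_-(1)=1+\alpha$, so that indeed $\mathscr J_-(\mathds 1_E)=1+\alpha\mathds 1_E$.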
Hence, for all $\alpha>0$, we have that
$$\lambda_\alpha(\rho_\alpha)=\Lambda_\alpha(\rho_\alpha) \ \text{and} \ \lambda_\alpha(\rho^*)=\Lambda_\alpha(\rho^*).
$$
To see why this allows us to overcome the aforementioned regularity issues, let us compute the G\^ateaux-derivative of the map $\rho \mapsto \Lambda_\alpha(\rho)$. Let $v_{\alpha,\rho}$ be the eigenfunction associated with $\Lambda_\alpha(\rho)$; it can be chosen positive and normalised in $L^2$. In particular, $v_{\alpha,\rho}$ solves
\begin{equation}\label{Eq:Valpha}\begin{cases}
\Delta\left(\mathscr J_-(\rho)\Delta v_{\alpha,\rho}\right)=\Lambda_\alpha(\rho)v_{\alpha,\rho}+\rho v_{\alpha,\rho}\text{ in }{\mathbb B}(0,R),
\\ v_{\alpha,\rho}=\Delta v_{\alpha,\rho}=0\text{ on }\partial {\mathbb B}(0,R),
\\ \int_{{\mathbb B}(0,R)} v_{\alpha,\rho}^2=1,\, v_{\alpha,\rho}\geq 0.\end{cases}\end{equation}
From the same arguments as in Lemma \ref{Cl:Misc}, $\Lambda_\alpha(\rho)$ is a simple eigenvalue, and so the map $\rho \mapsto \left(\Lambda_\alpha(\rho),\, v_{\alpha,\rho}\right)$ is G\^ateaux-differentiable and, for $\rho \in \mathcal M({\mathbb B}(0,R))$ and an admissible perturbation $h$ at $\rho$, if we denote with a dot the G\^ateaux-differentiated quantities, the couple $\left(\dot \Lambda_\alpha(\rho),\, \dot v_{\alpha,\rho}\right)$ solves
\begin{equation}\label{Eq:ValphaDot}
\begin{cases}
\Delta \left(\mathscr J_-(\rho) \Delta \dot v_{\alpha,\rho}\right)+\frac\alpha{1+\alpha} \Delta\left(h \mathscr J_-(\rho)^2 \Delta v_{\alpha,\rho} \right)=&\left(\Lambda_\alpha(\rho)+\rho\right)\dot v_{\alpha,\rho}\\&+\dot \Lambda_\alpha(\rho) v_{\alpha,\rho}+h v_{\alpha,\rho}\text{ in }{\mathbb B}(0,R),
\\ \dot v_{\alpha,\rho}=\Delta \dot v_{\alpha,\rho}=0\text{ on }\partial {\mathbb B}(0,R),
\\ \int_{{\mathbb B}(0,R)} \dot v_{\alpha,\rho} v_{\alpha,\rho}=0.
\end{cases}
\end{equation} This equation has a unique solution by the Fredholm alternative.
Multiplying the first equation in \eqref{Eq:ValphaDot} by $ v_{\alpha,\rho}$, integrating by parts, and using \eqref{Eq:Valpha} yields
\begin{equation}
\dot \Lambda_\alpha(\rho)=\int_{{\mathbb B}(0,R)} h\left\{\frac\alpha{1+\alpha}\mathscr J_-(\rho)^2 (\Delta v_{\alpha,\rho})^2- v_{\alpha,\rho}^2\right\}.\end{equation}
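For completeness, here is a sketch of the computation behind this identity; all boundary terms vanish thanks to the hinged conditions, and we abbreviate $v=v_{\alpha,\rho}$, $\Lambda=\Lambda_\alpha(\rho)$. Testing the first equation of \eqref{Eq:ValphaDot} against $v$ gives, on the left-hand side,
$$\int_{{\mathbb B}(0,R)} \Delta\left(\mathscr J_-(\rho)\Delta \dot v\right)v=\int_{{\mathbb B}(0,R)} \dot v\,\Delta\left(\mathscr J_-(\rho)\Delta v\right)=\int_{{\mathbb B}(0,R)} (\Lambda+\rho)v\,\dot v=\int_{{\mathbb B}(0,R)} \rho\, v\,\dot v$$
by \eqref{Eq:Valpha} and the orthogonality $\int v\,\dot v=0$, together with
$$\frac{\alpha}{1+\alpha}\int_{{\mathbb B}(0,R)} \Delta\left(h\,\mathscr J_-(\rho)^2\Delta v\right)v=\frac{\alpha}{1+\alpha}\int_{{\mathbb B}(0,R)} h\,\mathscr J_-(\rho)^2(\Delta v)^2.$$
On the right-hand side we obtain $\int \rho\,v\,\dot v+\dot\Lambda_\alpha(\rho)+\int h\,v^2$, since $\int v^2=1$; comparing the two sides yields the displayed formula.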
The new switch function
\begin{equation}
\psi_{\alpha,\rho}:= \frac\alpha{1+\alpha}\mathscr J_-(\rho)^2 (\Delta v_{\alpha,\rho})^2- v_{\alpha,\rho}^2 \end{equation} is now more regular, since the function $\mathscr J_-(\rho)\Delta v_{\alpha,\rho}$ is itself the solution of an elliptic problem. Let us now consider the two bang-bang densities $\rho_\alpha,\, \rho^* \in \mathcal M({\mathbb B}(0,R))$. Instead of considering the path $t\mapsto \lambda_\alpha(\rho_\alpha+t(\rho^*-\rho_\alpha))$, which would lead to the irregular switch function \eqref{Eq:SwitchDeb}, we set $\rho_t:=\rho_\alpha+t(\rho^*-\rho_\alpha)$ and we consider the path
\begin{equation}
f_\alpha:t\mapsto \Lambda_\alpha( \rho_t).\end{equation}
For $t\in [0;1]$, let us define
$v_t$ to be the eigenfunction associated with $\Lambda_\alpha(\rho_t)$ and
\begin{equation}\Psi_t:=\frac\alpha{1+\alpha}\mathscr J_-(\rho_t)^2 (\Delta v_{t})^2- v_{t}^2.\end{equation} By Lemma \ref{Cl:Convergence} and the mean value theorem, we may write
\begin{align}
\lambda_\alpha(\rho^*)-\lambda_\alpha(\rho_\alpha)&=\Lambda_\alpha(\rho^*)-\Lambda_\alpha(\rho_\alpha)=\int_{{\mathbb B}(0,R)} \Psi_t (\rho^*-\rho_\alpha)\end{align}
for some $t=t(\alpha)\in [0;1]$.
From Lemma \ref{Cl:Convergence}, we know that $\rho_{t(\alpha)}\underset{\alpha \to 0}\rightarrow \rho^*$ strongly in $L^1({\mathbb B}(0,R))$. From standard elliptic regularity, there exists a constant $M>0$ such that $\Vert\mathscr J_-(\rho_{t(\alpha)})\Delta v_{t(\alpha)}\Vert_{\mathscr C^1({\mathbb B}(0,R))}\leq M$. Again from elliptic regularity, we also have that $v_{t(\alpha)}\underset{\alpha\to 0}\rightarrow u_{0,\rho^*}$ in $\mathscr C^1$. Hence, $\Psi_{t(\alpha)}\underset{\alpha \to 0}\rightarrow -u_{0,\rho^*}^2$ in $\mathscr C^1$. Since $\Psi_{t(\alpha)}$ is radial, the strict monotonicity \eqref{Eq:De} implies that $\rho^*$ is the unique solution of
\begin{equation}\label{Eq:TT}\inf_{\rho\in \mathcal M({\mathbb B}(0,R))}\int_{{\mathbb B}(0,R)} \Psi_{t(\alpha)}\rho\end{equation} for $\alpha>0$ small enough. Indeed, from \eqref{Eq:De} and the $\mathscr C^1$ convergence of $\left\{\Psi_{t(\alpha)}\right\}_{\alpha \to 0}$ to $-u_{0,\rho^*}^2$, for $\alpha>0$ small enough, $\mathbb B^*$ is the unique level set of $\Psi_{t(\alpha)}$ of volume $V_0$.
Hence, $\int_{{\mathbb B}(0,R)} \Psi_{t(\alpha)}(\rho_\alpha-\rho^*)\geq 0$, which in turn implies that $\Lambda_\alpha(\rho^*)-\Lambda_\alpha(\rho_\alpha)\leq0$. By Lemma \ref{Cl:Convergence}, this contradicts \eqref{eq:contr} and concludes the proof of the theorem.
\section{Conclusion}
\label{sec:con}
In this article, we have studied several theoretical aspects related to the spectral optimisation of inhomogeneous plates. It is worth underlining that the existence result, Theorem \ref{Th:ExistMu}, is in sharp contrast with other results in the context of the optimisation of two-phase problems.
Note, moreover, that the stationarity of minimisers of $\lambda_\alpha$, as $\alpha\to 0^+$, is proved with respect to radial competitors only. We believe that the case of non-radially-symmetric competitors is presently out of reach, given the available rearrangement tools. In fact, Theorem \ref{Th:PV0} indicates that the correct rearrangement when handling thickness optimisation is expected to be the {\it increasing} rearrangement, whereas previous results \cite{Anedda} point to the fact that optimisation with respect to the density should rather involve the {\it decreasing} rearrangement.
\section*{Acknowledgments}
E.~Davoli has been partially supported by the Austrian Science
Funds (FWF) grants V662, I4052, Y1292, and F65, as well as by the
OeAD-WTZ project CZ04/2019. I.~Mazari acknowledges support of the
FWF grants I4052 and F65. U. Stefanelli
acknowledges support of the FWF grants
I4354, F65, I5149, and P\,32788, and by the OeAD-WTZ
project CZ 01/2021.
\begin{thebibliography}{10}
\bibitem{Allaire}
{\sc G.~Allaire}, {\em Shape Optimization by the Homogenization Method},
Springer New York, 2002, \url{https://doi.org/10.1007/978-1-4684-9286-6}.
\bibitem{Alvino1989}
{\sc A.~Alvino, G.~Trombetti, and P.-L.~Lions}, {\em On optimization problems with
prescribed rearrangements}, Nonlinear Anal.,
\url{https://doi.org/10.1016/0362-546x(89)90043-6}.
\bibitem{Anedda2010}
{\sc C.~Anedda}, {\em Maximization and minimization in problems involving the
bi-Laplacian}, Ann. di Mat. Pura ed Appl.,
190 (2010),
pp.~145--156, \url{https://doi.org/10.1007/s10231-010-0142-5}.
\bibitem{Anedda}
{\sc C.~Anedda, F.~Cuccu, and G.~Porru}, {\em Minimization of the first
eigenvalue in problems involving the bi-Laplacian}, Revista de
Matem{\'a}tica: Teor{\'\i}a y Aplicaciones, 16 (2009), pp.~127--136.
\bibitem{Bandle}
{\sc C.~Bandle}, {\em Isoperimetric Inequalities and Applications}, Monographs
and Studies in Mathematics, Pitman, 1980,
\url{https://books.google.at/books?id=I0vvAAAAMAAJ}.
\bibitem{Berchio2020}
{\sc E.~Berchio and A.~Falocchi}, {\em About symmetry in partially hinged
composite plates}, Appl. Math. Optim., (2020), to appear,
\url{https://doi.org/10.1007/s00245-020-09722-y}.
\bibitem{Berchio2006}
{\sc E.~Berchio, F.~Gazzola, and E.~Mitidieri}, {\em Positivity preserving
property for a class of biharmonic elliptic problems}, J.
Differential Equations, 229 (2006), pp.~1--23,
\url{https://doi.org/10.1016/j.jde.2006.04.003}.
\bibitem{Bucur2011}
{\sc D.~Bucur and F.~Gazzola}, {\em The first biharmonic Steklov eigenvalue:
positivity preserving and shape optimization}, Milan J. Math.,
79 (2011), pp.~247--258,
\url{https://doi.org/10.1007/s00032-011-0143-x}.
\bibitem{BuosoFreitas}
{\sc D.~Buoso and P.~Freitas}, {\em Extremal eigenvalues of the Dirichlet
biharmonic operator on rectangles}, Proc. Amer. Math. Soc., 148 (2020), pp.~1109--1120,
\url{https://doi.org/10.1090/proc/14792}.
\bibitem{Buoso2013}
{\sc D.~Buoso and P.~D. Lamberti}, {\em Shape deformation for vibrating hinged
plates}, Math. Methods Appl. Sci., 37 (2013),
pp.~237--244,
\url{https://doi.org/10.1002/mma.2858}.
\bibitem{Buoso2015}
{\sc D.~Buoso and L.~Provenzano}, {\em A few shape optimization results for a
biharmonic Steklov problem}, J. Differential Equations, 259 (2015),
pp.~1778--1818,
\url{https://doi.org/10.1016/j.jde.2015.03.013}.
\bibitem{CasadoDiaz3}
{\sc J.~Casado-D{\'\i}az}, {\em A characterization result for the existence of
a two-phase material minimizing the first eigenvalue}, Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire, 34 (2017), pp.~1215--1226,
\url{https://doi.org/10.1016/j.anihpc.2016.09.006}.
\bibitem{Chanillo2000}
{\sc S.~Chanillo, D.~Grieser, M.~Imai, K.~Kurata, and I.~Ohnishi}, {\em
Symmetry breaking and other phenomena in the optimization of eigenvalues for
composite membranes}, Comm. Math. Phys., 214 (2000),
pp.~315--337,
\url{https://doi.org/10.1007/pl00005534}.
\bibitem{Colasuonno2019}
{\sc F.~Colasuonno and E.~Vecchi}, {\em Symmetry in the composite plate
problem}, Commun. Contemp. Math., 21 (2019), p.~1850019,
\url{https://doi.org/10.1142/s0219199718500190}.
\bibitem{ConcaMahadevanSanz}
{\sc C.~Conca, R.~Mahadevan, and L.~Sanz}, {\em An extremal eigenvalue problem
for a two-phase conductor in a ball}, Appl. Math. Optim.,
60 (2008), pp.~173--184,
\url{https://doi.org/10.1007/s00245-008-9061-x}.
\bibitem{HenrotPierre}
{\sc A.~Henrot and M.~Pierre}, {\em Shape Variation and Optimization}, European
Mathematical Society Publishing House, 2018, \url{https://doi.org/10.4171/178}.
\bibitem{Jadamba2015}
{\sc B.~Jadamba, R.~Kahler, A.~A. Khan, F.~Raciti, and B.~Winkler}, {\em
Identification of flexural rigidity in a Kirchhoff plates model using a
convex objective and continuous Newton method}, Math. Probl. Eng., 2015 (2015), pp.~1--11,
\url{https://doi.org/10.1155/2015/290301}.
\bibitem{Kang2017}
{\sc D.~Kang and C.-Y. Kao}, {\em Minimization of inhomogeneous biharmonic
eigenvalue problems}, Appl. Math. Model., 51 (2017),
pp.~587--604,
\url{https://doi.org/10.1016/j.apm.2017.07.015}.
\bibitem{Kao2021}
{\sc C.-Y. Kao and S.~A. Mohammadi}, {\em Tuning the total displacement of
membranes}, Commun. Nonlinear Sci. Numer. Simul., 96
(2021), p.~105706,
\url{https://doi.org/10.1016/j.cnsns.2021.105706}.
\bibitem{Kato}
{\sc T.~Kato}, {\em Perturbation Theory for Linear Operators}, Springer Berlin Heidelberg, 1995,
\url{https://doi.org/10.1007/978-3-642-66282-9}.
\bibitem{Kawohl}
{\sc B.~Kawohl}, {\em Rearrangements and Convexity of Level Sets in {PDE}},
Springer Berlin Heidelberg, 1985,
\url{https://doi.org/10.1007/bfb0075060}.
\bibitem{Kesavan}
{\sc S.~Kesavan}, {\em Symmetrization and Applications}, World Scientific, 2006,
\url{https://doi.org/10.1142/6071}.
\bibitem{Laurain}
{\sc A.~Laurain}, {\em Global minimizer of the ground state for two phase
conductors in low contrast regime}, ESAIM Control Optim. Calc. Var., 20 (2014), pp.~362--388,
\url{https://doi.org/10.1051/cocv/2013067}.
\bibitem{LiebLoss}
{\sc E.~H. Lieb and M.~Loss}, {\em Analysis}, 2nd ed.,
Graduate Studies in Mathematics, Vol. 14, American Mathematical Society, 2001,
ISBN 978-0-8218-2783-3.
\bibitem{Manservisi2000}
{\sc S.~Manservisi and M.~Gunzburger}, {\em A variational inequality
formulation of an inverse elasticity problem}, Appl. Numer. Math.,
34 (2000), pp.~99--126,
\url{https://doi.org/10.1016/s0168-9274(99)00042-2}.
\bibitem{Marinov2013}
{\sc T.~T. Marinov and R.~S. Marinova}, {\em An inverse problem for estimation
of bending stiffness in Kirchhoff--Love plates}, Comput. Math. Appl., 65 (2013), pp.~512--519,
\url{https://doi.org/10.1016/j.camwa.2012.07.008}.
\bibitem{MazariNadinPrivat}
{\sc I.~Mazari, G.~Nadin, and Y.~Privat}, {\em Optimization of a two-phase,
weighted eigenvalue with Dirichlet boundary conditions},
\newblock submitted, 2020.
\bibitem{Migliaccio}
{\sc L.~Migliaccio}, {\em Sur une condition de Hardy, Littlewood, Polya}, C.R. Hebd. S\'eanc. Acad. Sci. Paris, 297 (1983), pp.~25--28.
\bibitem{MuratTartar}
{\sc F.~Murat and L.~Tartar}, {\em Calculus of Variations and Homogenization},
Birkh{\"a}user Boston, Boston, MA, 1997, pp.~139--173,
\url{https://doi.org/10.1007/978-1-4612-2032-9_6}.
\bibitem{Privat2015}
{\sc Y.~Privat, E.~Tr\'elat, and E.~Zuazua}, {\em Complexity and regularity of maximal energy domains for the wave equation with fixed initial data},
Discrete Contin. Dyn. Syst. Ser. B, 35 (2015), pp.~6133--6153,
\url{https://doi.org/10.3934/dcds.2015.35.6133}.
\bibitem{Talenti}
{\sc G.~Talenti}, {\em Elliptic equations and rearrangements}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 3 (1976),
pp.~697--718, \url{http://www.numdam.org/item/ASNSP_1976_4_3_4_697_0}.
\bibitem{Visintin}
{\sc A.~Visintin}, {\em Strong convergence results related to strict convexity}, Comm. Partial Differential Equations, 9 (1984), pp.~439--466, \url{https://doi.org/10.1080/03605308408820337}.
\end{thebibliography}
\end{document}
\begin{document}
\newtheorem{fact}[lemma]{Fact}
\title{An $\tilde{O}(\log^2 n)$-approximation algorithm for $2$-edge-connected dominating set}
\author{Amir Belgi\thanks{Part of this work was done as a part of author's M.Sc. Thesis at the Open University of Israel.}
\and Zeev Nutov}
\institute{The Open University of Israel. \email{[email protected], [email protected]}}
\maketitle
\def\c{{\sc Connected}}
\def\ec{{\sc Edge-Connected}}
\def\ds{{\sc Dominating Set}}
\def\dsg{{\sc Dominating Subgraph}}
\def\dst{{\sc Dominating Subtree}}
\def\aug{{\sc Augmentation}}
\def\st{{\sc Steiner}}
\def\sub{{\sc Subset}}
\def\nw{{\sc Node Weighted}}
\def\si{\sigma}
\def\eps{\epsilon}
\def\al{\alpha}
\def\es{\emptyset}
\def\sem{\setminus}
\def\subs{\subseteq}
\def\t{\tilde}
\def\h{\hat}
\def\f{\frac}
\def\opt{\mathsf{opt}}
\begin{abstract}
In the {\c} {\ds} problem we are given a graph $G=(V,E)$ and seek a minimum size dominating set $S \subs V$
such that the subgraph $G[S]$ of $G$ induced by $S$ is connected.
In the $2$-{\ec} {\ds} problem $G[S]$ should be $2$-edge-connected.
We give the first non-trivial approximation algorithm for this problem, with expected approximation ratio $\t{O}(\log^2 n)$.
\end{abstract}
\section{Introduction} \label{s:intro}
Let $G=(V,E)$ be a graph.
A subset $S \subs V$ of nodes of $G$ is a {\bf dominating set} in $G$ if every $v\in V\sem S$ has a neighbor in $S$.
In the {\ds} problem the goal is to find a min-size dominating set $S$.
In the {\c} {\ds} problem the subgraph $G[S]$ of $G$ induced by $S$ should be connected.
This problem admits a tight approximation ratio $O(\log n)$, even in the node weighted case \cite{GK98,GK99},
based on \cite{KR}.
A graph is {\bf $2$-edge-connected} if it contains $2$ edge disjoint paths between every pair of nodes.
We consider the following problem.
\begin{center} \fbox{\begin{minipage}{0.97\textwidth}
\underline{$2$-{\ec} {\ds}} \\
{\em Input:} \ \ A graph $G=(V,E)$. \\
{\em Output:} A min-size dominating set $S \subs V$ such that $G[S]$ is $2$-edge-connected.
\end{minipage}} \end{center}
Given a distribution ${\cal T}$ over spanning trees of a graph $G$, the {\bf stretch} of ${\cal T}$ is
${\displaystyle \max_{uv \in E}} \mathbb{E}_{T \sim {\cal T}} \left[\f{d_T (u,v)}{d_G(u,v)}\right]$, where $d_H(u,v)$ denotes the
distance between $u$ and $v$ in a graph $H$.
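To make the definition concrete: for a single spanning tree $T$ (a point-mass distribution ${\cal T}$) on an unweighted graph, the stretch reduces to $\max_{uv \in E} d_T(u,v)$, since $d_G(u,v)=1$ for every edge $uv$. A minimal illustrative sketch (the helper names are ours, not from the literature):

```python
from collections import deque

def tree_path_length(adj_T, u, v):
    """Length of the unique u-v path in a tree, via BFS from u."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return dist[x]
        for y in adj_T[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    raise ValueError("v is not reachable from u in the tree")

def tree_stretch(edges_G, edges_T):
    """max over uv in E of d_T(u,v)/d_G(u,v); for an unweighted graph
    d_G(u,v) = 1 on every edge uv, so this is max over uv of d_T(u,v)."""
    adj_T = {}
    for u, v in edges_T:
        adj_T.setdefault(u, []).append(v)
        adj_T.setdefault(v, []).append(u)
    return max(tree_path_length(adj_T, u, v) for u, v in edges_G)
```

For example, a path tree inside a $4$-cycle has stretch $3$, attained by the cycle edge joining the two path ends.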
Let $\si=\si(n)$ denote the lowest known upper bound on the stretch that can be achieved
by a polynomial time construction of such ${\cal T}$ for a graph on $n$ nodes.
By the work of Abraham, Bartal, and Neiman \cite{ABN},
that is in turn based on the work of Elkin, Emek, Spielman, and Teng \cite{EEST}, we have:
$$
\si(n)=O(\log n \cdot\log\log n\cdot(\log\log\log n)^{3})=\t{O}(\log n) \ .
$$
Our main result is:
\begin{theorem} \label{t:main}
$2$-{\ec} {\ds} admits an approximation algorithm with expected approximation ratio $O(\si \log n)=\t{O}(\log^2 n)$.
\end{theorem}
In the rest of the Introduction we discuss motivation, related problems, and give a
road-map of the proof of Theorem~\ref{t:main}.
It is a common problem in network design to route messages through the network.
Many routing protocols use a flooding strategy, in which every node broadcasts the message to all of its neighbors.
However, such protocols suffer from a large amount of redundancy.
Ephremides, Wieselthier, and Baker \cite{E87} introduced the idea of constructing a {\bf virtual backbone} of a network.
A virtual backbone is often chosen to be a {\bf connected dominating set} -- a connected subgraph (a tree) on a dominating node set.
Then only the nodes of the tree are involved in the routing,
which may significantly reduce the number of messages the routing protocol generates.
Moreover, we only need to maintain the nodes in the tree to keep the message flow.
This raises the natural problem of constructing a ``cheap'' connected virtual backbone $H$.
Usually ``cheap'' means that $H$ should have a minimum number of edges or nodes,
or, more generally, that we are given edge costs/node weights, and $H$ should have a minimum cost/weight.
In many cases we also require the virtual backbone to be robust to edge or node failures.
A graph $G$ is {\bf $k$-edge-connected} if it contains $k$ edge disjoint paths between every pair of nodes;
if the paths are required to be internally node disjoint then $G$ is {\bf $k$-connected}.
A subset $S$ of nodes in a graph $G=(V,E)$ is an {\bf $m$-dominating set} if every $v \in V \sem S$ has at least $m$ neighbors in $S$.
In the {\sc Min-Weight $k$-Connected $m$-Dominating Set} problem we seek a minimum node weight $m$-dominating set $S$
such that the subgraph $G[S]$ of $G$ induced by $S$ is $k$-connected.
This problem was studied in many papers, both in general graphs and in unit disk graphs,
for arbitrary weights and also for unit weights; the unit weights case is the {\sc $k$-Connected $m$-Dominating Set} problem.
We refer the reader to recent papers \cite{F,ZZMD,N-CDS}.
In the {\sc Min-Cost $k$-Connected $m$-Dominating Subgraph} problem, we seek to minimize
the cost of the edges of the subgraph rather than the weight of the nodes.
We observe that for unit weights/costs, the approximability of the {\sc $k$-Connected $m$-Dominating Set} problem
is equivalent to the one of the {\sc $k$-Connected $m$-Dominating Subgraph} problem, up to a factor of $2$;
this is so since the number of edges in a minimally $k$-connected graph is between $kn/2$ and $kn$.
The same holds also for the $k$-edge-connectivity variant of these problems.
Most of the work on the {\sc Min-Weight $k$-Connected $m$-Dominating Set} problem focused
on the easier case $m \geq k$, when the union of a partial solution and a feasible solution is
always feasible. This makes it possible to construct a solution by first computing an $\al$-approximate $m$-dominating set
and then a $\beta$-approximate augmenting set to satisfy the connectivity requirements;
the approximation ratio is then bounded by the sum $\al+\beta$ of the ratios of the two sub-problems.
The currently best ratios when $m\geq k$ are \cite{N-CDS}:
$O(k \ln n)$ for general graphs, $\min\left\{\f{m}{m-k},k^{2/3}\right\} \cdot O(\ln^2 k)$ for unit disk graphs,
and $\min\left\{\f{m}{m-k},\sqrt{k}\right\} \cdot O(\ln^2 k)$ for unit disk graphs with unit node weights.
However, when $m<k$ this approach does not work, and the only non-trivial ratio known is for (unweighted) unit disk graphs,
due to Wang et al. \cite{WKAG}, where they obtained a constant ratio for $k\leq 3$ and $m=1,2$.
It is an open question to obtain a non-trivial ratio for the (unweighted)
{\sc $2$-Connected Dominating Set} problem in general graphs.
The $2$-{\ec} {\ds} problem that we consider is the edge-connectivity version of the above problem,
when the virtual backbone should be robust to single edge failures.
As was mentioned, the approximability of this problem is the same, up to a factor of $2$,
as that of the $2$-{\ec} {\dsu} problem that seeks to minimize
the number of edges of the subgraph rather than the number of nodes.
We prove Theorem~\ref{t:main} for the latter problem using a two stage reduction.
Our overall approximation ratio $O(\si \log n)$ is a product of the first reduction fee $\si$ and
the approximation ratio $O(\log n)$ for the problem obtained from the second reduction.
In the first stage (see Section~\ref{s:red})
we use the probabilistic embedding into a spanning tree of \cite{ABN} with stretch $\si=\tilde{O}(\log n)$
to reduce the problem to a ``domination version'' of the so-called {\sc Tree Augmentation} problem (cf. \cite{EFKN});
in our problem, which we call {\sc Dominating Subtree},
we are given a spanning tree $T$ in $G$ and seek a min-size edge set $F \subs E \sem T$
and a subtree $T'$ of $T$, such that $T'$ dominates all nodes in $G$ and $T' \cup F$ is $2$-edge-connected.
This reduction invokes a factor of $\si=\tilde{O}(\log n)$ in the approximation ratio.
Gupta, Krishnaswamy, and Ravi \cite{GKR2} used such tree embedding to give a generic framework
for approximating various restricted $2$-edge-connected network design problems,
among them the {\sc $2$-Edge-Connected Group Steiner Tree} problem.
However, all their algorithms are based on rounding appropriate LP relaxations, while our algorithm is purely combinatorial
and uses different methods.
In the second stage (see Section~\ref{s:red'}) we reduce the {\sc Dominating Subtree} problem to
the {\sc Subset Steiner Connected Dominating Set} problem \cite{GK98}.
While we show in Section~\ref{s:hard} that in general this problem is as hard as the {\sc Group Steiner Tree} problem,
the instances that are derived from the reduction have special properties that will enable us to obtain ratio $O(\log n)$.
We note that the reduction we use is related to the one of Basavaraju et al. \cite{BFGM},
that showed a relation between the {\sc Tree Augmentation} and the {\sc Steiner Tree} problems.
\section{Reduction to the dominating subtree problem} \label{s:red}
To prove Theorem~\ref{t:main} we will consider the following variant of our problem:
\begin{center} \fbox{\begin{minipage}{0.97\textwidth}
\underline{$2$-{\ec} {\dsu}} \\
{\em Input:} \ \ A graph $G=(V,E)$. \\
{\em Output:} A $2$-edge-connected subgraph $(S,J)$
of $G$ with $|J|$ minimum such that $S$ is a dominating set in $G$.
\end{minipage}} \end{center}
Since $|S| \leq |J| \leq 2(|S|-1)$ holds for any edge-minimal $2$-edge-connected graph $(S,J)$,
if $2$-{\ec} {\dsu} admits ratio $\rho$ then $2$-{\ec} {\ds} admits ratio $2\rho$.
Thus it is sufficient to prove Theorem~\ref{t:main} for the $2$-{\ec} {\dsu} problem.
For simplicity of exposition we will assume that we are given a single spanning tree $T=(V,E_T)$ with stretch $\si$, namely that
$$
|T_f| \leq \si \ \ \ \ \forall f \in E \sem E_T
$$
where $T_f$ denotes the path in the tree $T$ between the endnodes of $f$.
We say that {\bf $f \in E \sem E_T$ covers $e \in E_T$} if $e \in T_f$.
For an edge set $F$ let $T_F=\cup_{f \in F} T_f$ denote the forest formed by the tree edges of $T$
that are covered by the edges of $F$.
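Computing $T_F$ amounts to walking the tree path $T_f$ for each $f \in F$; a short illustrative sketch (the rooted-tree representation and helper names are our own choices):

```python
def root_tree(adj_T, root=0):
    """Compute parent and depth of every node by a DFS from the root."""
    parent = {root: None}
    depth = {root: 0}
    stack = [root]
    while stack:
        x = stack.pop()
        for y in adj_T[x]:
            if y not in parent:
                parent[y] = x
                depth[y] = depth[x] + 1
                stack.append(y)
    return parent, depth

def covered_edges(adj_T, F, root=0):
    """T_F: the set of tree edges lying on some path T_f, f in F."""
    parent, depth = root_tree(adj_T, root)
    T_F = set()
    for u, v in F:
        # climb from the deeper endpoint until the two meet at the LCA,
        # collecting every tree edge passed on the way
        while u != v:
            if depth[u] < depth[v]:
                u, v = v, u
            T_F.add(frozenset((u, parent[u])))
            u = parent[u]
    return T_F
```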
The following two lemmas give some cases when $T_F \cup F$ is a $2$-edge-connected graph.
\begin{lemma} \label{l:feasible}
If $F \subs E \sem E_T$ then $T_F \cup F$ is $2$-edge-connected if and only if $T_F$ is a tree.
\end{lemma}
\begin{proof}
Note that every $f \in F$ has both ends in $T_F$.
It is known (cf. \cite{EFKN}) that if $T'$ is a tree and $F$ is an additional edge set on the node set of $T'$,
then $T' \cup F$ is $2$-edge-connected if and only if $T'_F=T'$.
This implies that if $T_F$ is a tree then $T_F \cup F$ is $2$-edge-connected.
Now suppose that $T_F$ is not a tree. Let $C$ be a connected component of $T_F$.
Then no edge in $F \cup T_F$ has exactly one end in $C$. Thus $C$ is also a connected component of $T_F \cup F$,
so $T_F \cup F$ is not connected.
\qed
\end{proof}
\begin{lemma} \label{l:si-loss}
If $(S,J)$ is a $2$-edge-connected subgraph of $G$ then $T_{J \sem E_T}$ is a tree.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:feasible} the statement is equivalent to claiming that $T_{J \sem E_T} \cup (J \sem E_T)$ is $2$-edge-connected.
To see this, note that $T_{J \sem E_T} \cup (J \sem E_T)$ is obtained from the $2$-edge-connected graph $(S,J)$
by sequentially adding for each $f \in J \sem E_T$ the path $T_f$.
It is known that adding a simple path $P$ between two nodes of a $2$-edge-connected graph
results in a $2$-edge-connected graph; this also holds if $P$ contains some edges of the graph.
The statement now follows by induction.
\qed
\end{proof}
Let us consider the following problem.
\begin{center} \fbox{\begin{minipage}{0.98\textwidth}
\underline{{\dt}} \\
{\em Input:} \ \ A graph $G=(V,E)$ and a spanning tree $T=(V,E_T)$ in $G$. \\
{\em Output:} A min-size edge set $F \subs E \sem E_T$ such that $T_F$ is a dominating tree.
\end{minipage}} \end{center}
From Lemmas \ref{l:feasible} and \ref{l:si-loss} we have the following.
\begin{corollary}
Let $(S,J)$ be an optimal solution of a $2$-{\ec} {\dsu} instance $G$.
Let $T$ be a spanning tree in $G$ with stretch $\si$ and $F$ a $\rho$-approximate solution to the {\dt} instance $G,T$.
Then $T_F \cup F$ is a feasible solution to the $2$-{\ec} {\dsu} instance and $|F \cup E(T_F)| \leq \rho(\si+1)|J|$.
\end{corollary}
\begin{proof}
$T_F \cup F$ is a feasible solution to the $2$-{\ec} {\dsu} instance by
the definition of the {\dt} problem and Lemma~\ref{l:feasible}.
By Lemma~\ref{l:si-loss}, $J \sem E_T$ is a feasible solution to the {\dt} instance,
thus $|F| \leq \rho |J \sem E_T| \leq \rho|J|$.
Since $T$ has stretch $\si$ we get $|E(T_F)| \leq \si |F| \leq \si \rho |J|$.
\qed
\end{proof}
Hence to finish the proof of Theorem~\ref{t:main} it is sufficient to prove the following theorem,
that may be of independent interest.
\begin{theorem} \label{t:main'}
The {\dt} problem admits approximation ratio $O(\log n)$.
\end{theorem}
\section{Reduction to subset connected dominating set} \label{s:red'}
In this section we reduce the {\dt} problem to the {\su} {\st} {\c} {\ds},
and show that the special instances that arise from the reduction admit ratio $O(\log n)$.
The justification of the reduction is given in the following lemma.
\begin{lemma} \label{l:sv}
Let $T=(V,E_T)$ be a tree and $F$ an edge set on $V$, and let $s,t \in V$.
Let $H=(F \cup V,I)$ be a bipartite graph where $I=\{fv: f \in F, v \in V \cap T_f\}$.
Then $T \cup F$ has $2$ edge disjoint $st$-paths if and only if $H$ has an $st$-path.
\end{lemma}
\begin{proof}
Let $P=T_{st}$ be the $st$-path in $T$.
By Menger's Theorem, $T \cup F$ has $2$ edge disjoint $st$-paths if and only if for every $e \in P$ there is $f \in F$ that covers $e$.
Let $S=\{v \in P:H \mbox{ has an } sv\mbox{-path}\}$. Let $\hat{P}$ be the set of edges in $P$ uncovered by $F$.
We need to show that $\hat{P} \neq \empt$ if and only if $t \notin S$.
Suppose that $t \notin S$.
Among the nodes in $S$, let $u$ be the furthest from $s$ along $P$.
Let $v$ be the node in $P$ after $u$. Then $uv \in P$, since $u \neq t$.
We claim that $uv \in \hat{P}$. Otherwise, there is $f \in F$ with $u,v \in T_f$ and we get that $v \in S$ (since $u \in S$),
contradicting the choice of $u$.
Suppose that there is $e \in \hat{P}$. Let $T_s,T_t$ be the two trees of $T \sem \{e\}$, where $s \in T_s$ and $t \in T_t$.
Since no link in $F$ covers $e$, every link in $F$ has both ends either in $T_s$ or in $T_t$.
This implies that no node in $T_t$ belongs to $S$.
\qed
\end{proof}
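Lemma~\ref{l:sv} suggests a direct computational check: build the bipartite graph $H$ and test $st$-reachability in it. An illustrative sketch (function names are hypothetical):

```python
from collections import deque

def bfs_path_nodes(adj, u, v):
    """Nodes on the unique u-v path in a tree (BFS + parent backtrack)."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    path, x = [], v
    while x is not None:
        path.append(x)
        x = parent[x]
    return path

def reachable(adj, s, t):
    """Plain BFS reachability test."""
    seen, q = {s}, deque([s])
    while q:
        x = q.popleft()
        if x == t:
            return True
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                q.append(y)
    return False

def two_paths_via_H(adj_T, F, s, t):
    """Decide whether T + F has 2 edge-disjoint s-t paths by building the
    bipartite graph H of the lemma and testing s-t reachability in H."""
    H = {}
    for i, (a, b) in enumerate(F):
        f = ('f', i)           # one H-node per edge f in F
        for v in bfs_path_nodes(adj_T, a, b):
            H.setdefault(f, []).append(v)
            H.setdefault(v, []).append(f)
    return reachable(H, s, t)
```

On the path tree $0$--$1$--$2$--$3$ with $F=\{02,13\}$ every path edge is covered and $H$ contains the path $0,f_{02},1,f_{13},3$; dropping $13$ leaves the edge $23$ uncovered and disconnects $t$ in $H$.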
Let us assume w.l.o.g. that our {\dt} instance consists of a tree $T$ on $V$ and
an edge set $E$ on $V$ that contains no edge from $T$.
\begin{definition} \label{d:con-dom}
Given a {\dt} instance $T,E$ the {\bf connectivity-domination graph} $\hat{G}=(\hat{V},\hat{E})$
has node set $\hat{V}=E \cup V$ and edge set $\hat{E}=I \cup D$ where (see Fig.~\ref{f:red}):
\begin{eqnarray*}
I & = & \{fg: f,g \in E, V(T_f) \cap V(T_g) \neq \empt\} \\
D & =& \{ev: e \in E, v \in V, v \in T_e \mbox{ or } v \mbox{ is a neighbor of } T_e \mbox{ in } G\}
\end{eqnarray*}
\end{definition}
\begin{figure}
\caption{Illustration of Definition~\ref{d:con-dom}.}
\label{f:red}
\end{figure}
Note that an edge $fg \in I$ encodes that $T_f$ and $T_g$ have a node in common,
while an edge $ev \in D$ encodes that $v$ is dominated by $T_e$
(belongs to $T_e$ or is connected by an edge of $G$ to some node in $T_e$).
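Definition~\ref{d:con-dom} translates directly into code; a sketch of the construction of $I$ and $D$, assuming adjacency-dict representations of $T$ and $G$ (helper names are ours):

```python
from collections import deque

def tree_path_nodes(adj_T, u, v):
    """Nodes on the unique u-v path in the tree T (BFS + parent backtrack)."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj_T[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    path, x = [], v
    while x is not None:
        path.append(x)
        x = parent[x]
    return path

def build_con_dom_graph(adj_T, adj_G, E):
    """Edge sets I (overlapping tree paths) and D (domination) of hat{G}."""
    paths = {f: set(tree_path_nodes(adj_T, *f)) for f in E}
    I = {(f, g) for f in E for g in E if f < g and paths[f] & paths[g]}
    D = set()
    for e in E:
        dominated = set(paths[e])
        for v in paths[e]:            # v in T_e, or a neighbor of T_e in G
            dominated.update(adj_G[v])
        D.update((e, v) for v in dominated)
    return I, D
```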
From Lemma~\ref{l:sv} we have:
\begin{corollary} \label{c:H}
$F \subs E$ is a feasible solution to a {\dt} instance if and only if
in the connectivity-domination graph $\hat{G}$ the following holds: \\
{\em (i)} $\hat{G}[F]$ is connected; {\em (ii)} $F$ dominates $V$.
\end{corollary}
Our goal is to give an $O(\log n)$ approximation algorithm for the problem of finding min-size $F \subs E$ as in Corollary~\ref{c:H}.
Note that in this problem $V, E$ are both subsets of nodes of $\hat{G}$.
This is a particular case of the following problem.
\begin{center} \fbox{\begin{minipage}{0.97\textwidth}
\underline{{\su} {\st} {\c} {\ds}} \\
{\em Input:} \ \ A graph $\h{G}=(\h{V},\h{E})$ and a partition $Q,R$ of $\h{V}$. \\
{\em Output:} A min-size $S \subs Q$ such that $\h{G}[S]$ is connected and $S$ dominates $R$.
\end{minipage}} \end{center}
In Section~\ref{s:hard} we observe that up to constants, the approximability of this problem is the same as
that of the {\sc Group Steiner Tree} problem, that admits ratio $O(\log^3 n)$ \cite{GKR}.
However, in some cases better ratios are possible.
In the case $Q=\h{V}$ we get the (unweighted) {\st} {\c} {\ds} problem, that admits ratio $O(\log n)$ \cite{GK98}.
We show ratio $O(\log n)$ when $\h{G}$ is the connectivity-domination graph, with $Q=E$ and $R=V$.
In what follows, given a {\su} {\st} {\c} {\ds} instance, let $q$ be the least integer such that
for every $v \in R$, any two neighbors of $v$ in $\h{G}$ are connected by a path in $\h{G}[Q]$ that has at most $q$ internal nodes.
\begin{lemma}
{\su} {\st} {\c} {\ds} admits approximation ratio $O(q \log n)$ if $Q,R$ partition $\h{V}$ and $R$ is an independent set in $\h{G}$.
\end{lemma}
\begin{proof}
We find an $O(\log n)$ approximate solution $S$ for the {\st} {\c} {\ds} instance (with $Q=\h{V}$) using the algorithm of \cite{GK98}.
If $S \subs Q$ then we are done. Else, let $T=(V_T,E_T)$ be a subtree of $\h{G}$ with node set $S$.
We may assume that $T$ has no leaf in $R$, otherwise such leaf can be removed from $S$ and from $T$.
Let $R_T=S \cap R$ and $Q_T=S \cap Q$.
Since $R$ is an independent set in $\h{G}$, $Q_T$ dominates $R$,
and the nodes in $R_T$ are used in $S$ just to connect between the nodes in $Q_T$.
Moreover, since $R_T$ is an independent set in $T$,
$$
\sum_{r \in R_T} (\deg_T(r)-1) \leq |E_T|-|R_T|=|S|-1-|R_T| =|Q_T|-1 \ .
$$
Let $r \in R_T$. Add a set of $\deg_T(r)-1$ dummy edges that form a tree on the neighbors of $r$ in $T$,
and then replace every dummy edge $uv$ by a path $P_{uv}$ in $\h{G}[Q]$ that has at most $q$ internal nodes.
Applying this on every $r \in R_T$ gives a connected graph in $\h{G}[Q]$ that contains the set $Q_T$ that dominates $R$,
and the number of nodes in this graph is at most
$$
|Q_T|+q \sum_{r \in R_T} (\deg_T(r)-1) \leq (q+1)|Q_T| \leq (q+1)|S| \ .
$$
Since $|S|$ is $O(\log n)$ times the optimum, the lemma follows.
\qed
\end{proof}
Note that in our case, when $\h{G}$ is the connectivity-domination graph, we have $Q=E$ and $R=V$.
Then $Q,R$ partition $\h{V}$ and $R$ is an independent set in $\h{G}$, by the construction.
The next lemma shows that $q$ is a small constant in our case.
\begin{lemma}
$q\leq 2$ if $\h{G}$ is the connectivity-domination graph and $Q=E$, $R=V$.
\end{lemma}
\begin{proof}
Let $v \in R$ and let $Q_v$ be the set of neighbors of $v$ in $\h{G}$.
For $e \in Q_v$ let $f_e$ be defined as follows:
\begin{itemize}
\item
If there exists some $f \in E$ (possibly $f=e$) such that $v \in T_f$ and $T_e,T_f$ have a node in common,
then we say that {\bf $e$ is of type 1} and let $f_e=f$.
\item
If $f$ as above does not exist then we say that {\bf $e$ is of type 2} and let $f_e=e$.
\end{itemize}
For illustration, consider the example in Fig.~\ref{f:red}.
\begin{itemize}
\item
Let $v=v_6$. Then $Q_v=\{f_1,f_3,f_4\}$.
If $e=f_3$ or if $e=f_4$ then $v \in T_f$, hence $e$ is of type~1 and we may have $f_e=e$.
If $e=f_1$ then $v \notin T_e$, but $e$ is still of type~1 since for $f=f_3$ we have $v \in T_f$ and $T_e,T_f$ have a node in common.
\item
Let $v=v_7$. Then $Q_v=\{f_1,f_3\}$. There is no $f \in E$ such that $v \in T_f$, hence both $f_1,f_3$ are of type~2.
\end{itemize}
Suppose that every $e \in Q_v$ is of type~1.
Then for every $e \in Q_v$ we have $f_e=e$ or $ef_e \in I$, and note that $v \in T_{f_e}$.
Thus for any $e_1,e_2 \in Q_v$, the sequence $e_1,f_{e_1},f_{e_2},e_2$ forms a path in $\h{G}[Q]$
with at most two internal nodes.
Suppose that there is $e \in Q_v$ of type~2.
Then $v \notin T_e$, and $v$ is dominated by $T_e$ via some edge $uv$ of $T$; otherwise $e$ is of type~1.
Let $T^u$ and $T^v$ be the two subtrees of $T \sem uv$, where $u \in T^u$ and $v \in T^v$.
Note that no edge in $E$ connects $T^u$ and $T^v$; otherwise $e$ is of type~1.
Hence $uv$ is a bridge of $G$.
This implies $T^v$ consists of a single node $v$, as otherwise the instance has no feasible solution.
Consequently, every $e \in Q_v$ is of type~2 and $u \in T_e$ holds, hence $\h{G}[Q_v]$ is a clique, and the lemma follows.
\qed
\end{proof}
This concludes the proof of Theorem~\ref{t:main'}, and thus also the proof of Theorem~\ref{t:main}.
\section{{\sc Connected Dominating Set} variants} \label{s:hard}
Here we make some observations about the approximability of several variants of the
{\sc Connected Dominating Set} ({\sc CDS}) problem.
In all these variants we are given a graph $G=(V,E)$ and possibly edge-costs/node-weights,
and seek a minimum cost/weight/size subtree $H=(V_H,E_H)$ of $G$ that satisfies a certain domination property.
Recall that in the {\sc CDS} problem $V_H$ should dominate $V$.
The additional variants we consider are as follows.
\noindent \underline{\sc Steiner CDS}:
$V_H$ dominates a given set of terminals $R \subs V$. \\
\noindent \underline{\sc Subset Steiner CDS}:
$V_H$ dominates $R$ and $V_H \subs Q$ for a partition $Q,R$ of $V$. \\
\noindent \underline{\sc Partial CDS}:
$V_H$ dominates at least $k$ nodes. \\
We relate these problems to the {\sc Group Steiner Tree} ({\sc GST}) problem:
given a graph $G=(V,E)$ and a collection ${\cal S}$ of groups (subsets) of $V$,
find a minimum edge-cost/node-weight/size subtree $H$ of $G$ that contains at least one node from every group.
When the input graph is a tree and there are $k$ groups, edge-costs {\sc GST} admits ratio $O(\log n \log k)$ \cite{GKR},
and this is essentially tight \cite{HK}.
For general graphs the edge-costs version admits ratio $O(\log^2 n \log k)$,
using the result of \cite{GKR} for tree inputs and the \cite{FRT} probabilistic tree embedding.
However, the best ratio known for the node-weighted {\sc GST} is the one that is derived from the more general
{\sc Directed Steiner Tree} problem \cite{CCCD,Z-DST,HRZ,KP-DST} with $k$ terminals --
for any integer $1 \leq \ell \leq k$, ratio $O\left(\ell^3 k^{2/\ell}\right)$ in time
$O\left(k^{2\ell}n^\ell\right)$.
As was observed in \cite{KPS}, several {\sc CDS} variants are particular cases of the corresponding {\sc GST} variants,
where for every relevant node $r$ we have a group $S_r$ of nodes that dominate $r$ in the input graph.
Specifically, we have the following.
\begin{lemma} \label{l:SS<GST}
For edge-costs/node-weights, ratio $\al(n,k)$ for {\sc GST} with $n$ nodes and $k$ groups
implies ratio $\al(|Q|,|R|)$ for {\sc Subset Steiner CDS},
and this is so also for the unit node weights versions of the problems.
\end{lemma}
\begin{proof}
Given a {\sc Subset Steiner CDS} instance $G=(V,E)$ with edge costs/node weights and $Q,R \subs V$,
construct a {\sc GST} instance by introducing for every $r \in R$ a group $S_r$ of nodes in $Q$ that dominate $r$.
In all cases, we return an $\al$-approximation for the {\sc GST} instance on $G[Q]$, that has $|Q|$ nodes and $|R|$ groups.
\qed
\end{proof}
Earlier, Guha and Khuller \cite{GK98} showed that the converse is also true for edge-costs {\sc CDS} and node-weighted
{\sc Steiner CDS}; in \cite{GK98} the reduction was to the {\sc Set TSP} problem, which can be shown to have
the same approximability as {\sc GST}, up to a factor of $2$.
Note that already the edge-costs {\sc CDS} problem is {\sc GST}-hard,
hence our ratio $\tilde{O}(\log^2 n)$ for unit edge costs {\sc $2$-Edge-Connected Dominating Set}
is unlikely to be extended to arbitrary costs.
We now show that {\sc Subset Steiner CDS} with unit edge costs/node weights
is as hard to approximate as {\sc GST} with arbitrary edge costs.
In what follows, we will assume that ratio $\al(n)$ for a given problem is an increasing function of $n=|V|$.
\begin{theorem} \label{t:S>GST}
For any constant $\eps>0$, ratio $\al(n)$ for {\sc Subset Steiner CDS} with unit edge costs/node weights
implies ratio $\al(|E|(n+k)/\eps)+\eps$ for {\sc GST} with arbitrary edge costs.
\end{theorem}
Theorem~\ref{t:S>GST} is proved in the next two lemmas.
Note that combined with Lemma~\ref{l:SS<GST}, Theorem~\ref{t:S>GST} implies
that the approximability of
{\sc Subset Steiner CDS} with unit edge costs/node weights
is essentially the same as that of {\sc GST} with arbitrary edge costs, up to a constant factor.
Recall that we showed that particular instances of {\sc Subset Steiner CDS} with unit node weights admit ratio $O(\log n)$.
Theorem~\ref{t:S>GST} implies that this cannot be achieved for general unit node weight instances.
The next lemma shows that for unit edge costs/node weights,
{\sc Subset Steiner CDS} is not much easier than {\sc GST}.
\begin{lemma}
For unit edge costs/node weights, ratio $\al(n)$ for {\sc Subset Steiner CDS}
implies ratio $\al(n+k)$ for {\sc GST} with $k$ groups.
\end{lemma}
\begin{proof}
For each one of the problems in the lemma, any inclusion minimal solution is a tree,
hence the unit edge costs case is equivalent to the unit node weights case;
this is so up to an additive $\pm 1$ term, which can be avoided by guessing an edge/node that belongs to some optimal solution.
So we will consider just the unit node weights case.
Given a unit weight {\sc GST} instance $G=(V,E),{\cal S}$ construct a unit weight
{\sc Subset Steiner CDS} instance $G'=(V',E'),Q,R$ as follows.
For each group $S \in {\cal S}$ add a new node $r_S$ connected to all nodes in $S$.
The set of nodes that should be dominated is $R=\{r_S: S \in {\cal S}\}$, and $Q=V$.
Any subtree $H$ of $G'[Q]$ is also a subtree of $G$,
and for any group $S \in {\cal S}$, $H$ contains a node from $S$ if and only if
$H$ dominates $r_S$. Thus $H$ is a feasible {\sc GST} solution if and only if $H$ is a feasible {\sc Subset Steiner CDS} solution.
\qed
\end{proof}
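The construction in the proof is straightforward to implement; an illustrative sketch (names hypothetical):

```python
def gst_to_subset_steiner_cds(V, E, groups):
    """For each group S add a new node r_S adjacent to all nodes of S.
    The nodes to dominate are R = {r_S}; the allowed nodes are Q = V."""
    V2, E2, R = list(V), list(E), []
    for i, S in enumerate(groups):
        r = ('r', i)              # the new node r_S for this group
        V2.append(r)
        R.append(r)
        E2.extend((r, v) for v in S)
    return V2, E2, list(V), R
```

A subtree of $G'[Q]$ then hits group $S$ exactly when it dominates $r_S$, as the proof states.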
The next lemma shows that {\sc GST} with unit edge costs is not much easier than
{\sc GST} with arbitrary edge costs.
\begin{lemma} \label{l:GST}
If {\sc GST} with unit edge costs admits ratio $\al(n)$
then for any constant $\eps>0$ {\sc GST} (with arbitrary edge costs) admits ratio $\al(n|E|/\eps)+\eps$.
\end{lemma}
\begin{proof}
Let $G=(V,E),c$ be a {\sc GST} instance (with arbitrary edge costs).
Fix some optimal solution $H^*$. Let $M=\max_{e\in H^*} c(e)$ be the maximum cost of an edge in $H^*$.
Note that $c(H^*) \geq M$.
Since there are $O(n^2)$ edges, we can guess $M$, and remove from $G$ all edges of cost greater than $M$.
Let $\mu=\frac{\eps M}{n}$. Define new costs by $c'(e)=\lfloor\frac{c(e)}{\mu}\rfloor$.
Note that $\mu c'(e)\leq c(e)\leq\mu(c'(e)+1)$ for all $e$, thus for any edge set $J$ with $|J| \leq n$
$$
c(J) = \sum_{e \in J} c(e) \leq \sum_{e \in J} \mu(c'(e)+1)=\mu c'(J)+\mu|J| \leq \mu c'(J)+\eps M \ .
$$
This implies that if $H$ is an $\al$-approximate solution w.r.t. the new costs $c'$ then
$$
c(H) \leq \mu c'(H)+\eps M \leq \mu \al c'(H^*)+\eps M \leq \al c(H^*) +\eps c(H^*)=(\al+\eps)c(H^*) \ .
$$
So ratio $\al$ w.r.t. costs $c'$ implies ratio $(\al+\eps)$ w.r.t. the original costs $c$.
The instance with costs $c'$ can be transformed into an equivalent {\sc GST} instance with unit edge costs
by a folklore reduction: we contract every zero cost edge (updating the groups accordingly),
and replace every remaining edge $e=uv$ by a $uv$-path of length $c'(e)$.
Note that $c'$ is integer valued and that
$$
c'(e) \leq \frac{c(e)}{\mu} = \frac{c(e)\cdot n}{\eps M} \leq \frac{n}{\eps} \ .
$$
Thus the number of nodes in the obtained unit edge costs instance is at most $n+|E|(n/\eps-1) \leq |E|n/\eps$.
\qed
\end{proof}
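The scaling step in the proof of Lemma~\ref{l:GST} is easy to sanity-check numerically; a small sketch, assuming the guessed value of $M$ is given (parameter names are ours):

```python
import math

def scale_costs(costs, M, n, eps):
    """Round each cost down to a multiple of mu = eps*M/n and express it
    as the integer c'(e) = floor(c(e)/mu), as in the proof above."""
    mu = eps * M / n
    c_prime = {e: math.floor(c / mu) for e, c in costs.items()}
    return c_prime, mu
```

For any edge set $J$ with $|J| \leq n$ the rounded costs then satisfy $c(J) \leq \mu c'(J) + \eps M$, which is the inequality used in the proof.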
Theorem~\ref{t:S>GST} follows from the last two lemmas.
Finally, we consider the {\sc Partial CDS} problem.
For unit node weights the problem was shown to admit a logarithmic ratio in \cite{KPS}.
We show that in the case of arbitrary weights, the problem
is not much easier than {\sc Subset Steiner CDS}, and thus also is not easier than {\sc GST}.
\begin{lemma} \label{l:partial}
For edge costs/node weights, ratio $\al(n)$ for {\sc Partial CDS}
implies ratio $\al(n^2)$ for {\sc Subset Steiner CDS}.
\end{lemma}
\begin{proof}
Let us consider the case of node-weights.
Given a {\sc Subset Steiner CDS} instance $G,w,Q,R$
construct a {\sc Partial CDS} instance $G',w',k$ as follows.
The graph $G'$ is obtained from $G$ by adding $|Q|$ copies $R_1,\ldots,R_{|Q|}$ of $R$,
and for each $r \in R$ connecting each copy $r_i \in R_i$ of $r$ to all nodes in $Q$ that dominate $r$.
We let $w'(v)=w(v)$ if $v \in Q$ and $w'(v)=\infty$ otherwise, and we let $k=(|Q|+1)|R|$.
In the obtained {\sc Partial CDS} instance, a subset of $Q$ that does not dominate $R$
dominates at most $(|R|-1)(|Q|+1)+|Q|=k-1$ nodes;
hence any feasible solution of finite weight must dominate $R$.
The {\sc Partial CDS} instance has $|Q||R|+ |R|+|Q| \leq n^2$ nodes, and the node weights case follows.
In the case of edge costs $G',k$ are as in the case of node weights,
and the cost of an edge $uv$ of $G'$ is $c'(uv)=c(uv)$ if $u,v \in Q$ and $c'(uv)=\infty$ otherwise.
The rest of the proof is the same as in the case of node-weights.
\qed
\end{proof}
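The node-weights construction of Lemma~\ref{l:partial} can be sketched as follows (assuming each $r \in R$ comes with its list of dominators in $Q$; names are ours):

```python
def sscds_to_partial_cds(V, w, Q, R, dominators):
    """Partial CDS instance from a Subset Steiner CDS one: add |Q| copies
    of R, connect each copy of r to the nodes of Q dominating r, give
    infinite weight outside Q, and set the target k = (|Q|+1)|R|."""
    INF = float('inf')
    V2, E2 = list(V), []
    for i in range(len(Q)):
        for r in R:
            ri = ('copy', i, r)        # the i-th copy of r
            V2.append(ri)
            E2.extend((ri, q) for q in dominators[r])
    w2 = {v: (w.get(v, INF) if v in Q else INF) for v in V2}
    k = (len(Q) + 1) * len(R)
    return V2, E2, w2, k
```

Any finite-weight solution lies in $Q$, and missing even one $r \in R$ forfeits its $|Q|+1$ copies plus itself, dropping the count below $k$.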
\end{document}
\begin{document}
\title[Crooked tilings]{Affine Schottky Groups and Crooked Tilings}
\author[Charette and Goldman]
{Virginie Charette and William M. Goldman
}
\address{
Department of Mathematics, University of Maryland,
College Park, MD 20742
}
\email{
[email protected], [email protected]}
\date{\today}
\renewcommand{\P}{{\mathbb P}}
\newcommand{\E}{{\mathbb E}}
\newcommand{\cC}{\mathcal C}
\newcommand{\cH}{\mathcal H}
\newcommand{\fH}{\mathfrak H}
\newcommand{\cZ}{\mathcal Z}
\newcommand{\R}{{\mathbb R}}
\newcommand{\Z}{{\mathbb Z}}
\newcommand{\dNp}{{\partial{\mathcal N}_+}}
\renewcommand{\L}{{\mathbb L}}
\newcommand{\la}{\langle}
\newcommand{\ra}{\rangle}
\newcommand{\bX}{\Bar{X}}
\newcommand{\bDelta}{\Bar{\Delta}}
\renewcommand{\o}{\operatorname}
\newcommand{\hyp}{\operatorname{hyp}}
\newcommand{\Rtwoone}{\R^{2,1}}
\newcommand{\Rnone}{\R^{n,1}}
\newcommand{\Rn}{\R^n}
\newcommand{\Rtwo}{\R^2}
\newcommand{\HtwoR}{\operatorname{H}^2_\R}
\newcommand{\g}{{\mathfrak g}}
\newcommand{\gl}{{\mathfrak{gl}}}
\newcommand{\N}{{\mathcal N}}
\newcommand{\SOtwoone}{\operatorname{SO}(2,1)}
\newcommand{\SOtwo}{\operatorname{SO}(2)}
\newcommand{\Otwoone}{\operatorname{O}(2,1)}
\newcommand{\SOo}{\operatorname{SO}^0(2,1)}
\newcommand{\SOoo}{\operatorname{SO}^0(1,1)}
\newcommand{\rank}{\operatorname{rank}}
\newcommand{\Map}{\operatorname{Map}}
\newcommand{\sfx}{{\mathsf{x}}}
\newcommand{\sfy}{{\mathsf{y}}}
\newcommand{\sfz}{{\mathsf{z}}}
\newcommand{\sfu}{{\mathsf{u}}}
\newcommand{\sfv}{{\mathsf{v}}}
\newcommand{\sfw}{{\mathsf{w}}}
\newcommand{\sfa}{{\mathsf{a}}}
\newcommand{\sfzero}{{\mathsf{0}}}
\newcommand{\B}{\mathbb B}
\newcommand{\I}{\mathbb I}
\newcommand{\xo}[1]{{\sfx}^0(#1)}
\newcommand{\xp}[1]{{\sfx}^+(#1)}
\newcommand{\xm}[1]{{\sfx}^-(#1)}
\newcommand{\xpm}[1]{{\sfx}^{\pm}(#1)}
\newcommand{\xmp}[1]{{\sfx}^{\mp}(#1)}
\newcommand{\SSS}{{\mathcal{S}}}
\newcommand{\fS}{{\mathfrak{S}}}
\newcommand{\W}{{\mathbf{Wing}}}
\newcommand{\M}{{\mathbf{nWing}}}
\newcommand{\Wp}{{\mathcal W}^+}
\newcommand{\Wm}{{\mathcal W}^-}
\newcommand{\Wpm}{{\mathcal W}^\pm}
\newcommand{\Mp}{{\mathcal M}^+}
\newcommand{\Mm}{{\mathcal M}^-}
\newcommand{\K}{{\mathcal K}}
\newcommand{\csch}{\operatorname{csch}}
\newcommand{\sech}{\operatorname{sech}}
\newcommand{\Isom}{\operatorname{Isom}}
\newcommand{\Isomo}{\operatorname{Isom}^0}
\newcommand{\IsomE}{\Isom(\E)}
\newcommand{\IsomEo}{\Isomo(\E)}
\thanks{The authors gratefully acknowledge partial support from NSF grant
DMS-9803518. Goldman gratefully acknowledges a Semester Research Award from
the General Research Board of the University of Maryland.}
\maketitle
\tableofcontents
In his doctoral thesis~\cite{Drumm0} and subsequent
papers~\cite{Drumm1,Drumm2}, Todd Drumm developed a theory of
fundamental domains for discrete groups of isometries of Minkowski
$(2+1)$-space ${\mathbb E}$, using polyhedra called {\em crooked planes\/}
and {\em crooked half-spaces.\/} This paper expounds these results.
\begin{main}
Let $\mathcal C_1^-,\mathcal C_1^+,\dots,\mathcal C_m^-,\mathcal C_m^+\subset{\mathbb E}$ be a family of
crooked planes bounding crooked half-spaces $\mathcal H_1^-,\mathcal H_1^+,
\dots,\mathcal H_m^-,\mathcal H_m^+\subset{\mathbb E}$ and
$h_1,\dots, h_m\in\operatorname{Isom}^0({\mathbb E})$ such that:
\begin{itemize}
\item the half-spaces $\mathcal H_i^\pm$ are pairwise disjoint;
\item $h_i(\mathcal H_i^-)= {\mathbb E} - \Bar{\mathcal H}_i^+$.
\end{itemize}
Then:
\begin{itemize}
\item the group $\Gamma$ generated by $h_1,\dots, h_m$ acts properly
discontinuously on ${\mathbb E}$;
\item the polyhedron
\begin{equation*}
X = {\mathbb E} - \bigcup_{i=1}^m (\Bar{\mathcal H}_i^+ \cup \Bar{\mathcal H}_i^-)
\end{equation*}
is a fundamental domain for the $\Gamma$-action on ${\mathbb E}$.
\end{itemize}
\end{main}
The first examples of properly discontinuous affine actions of
nonabelian free groups were constructed by
Margulis~\cite{Margulis1,Margulis2}, following a suggestion of
Milnor~\cite{Milnor}. (Compact quotients of affine space were
classified by Fried and Goldman~\cite{FriedGoldman} about the same time.)
For background on this problem, we refer the reader to these articles
as well as the survey articles \cite{CDGM,Drumm4}.
Here is the outline of this paper. In \S\ref{sec:Minkowski}, we
collect basic facts about the geometry of Minkowski space and $\R^{2,1}$.
In \S\ref{sec:compression}, we prove a basic lemma on how
isometries compress Euclidean balls in special directions. A key idea
is the {\em hyperbolicity\/} of a hyperbolic isometry, motivated by
ideas of Margulis~\cite{Margulis1,Margulis2}, and discussed in
\S\ref{sec:hyperbolicity}, using the {\em null frames\/} associated
with spacelike vectors and hyperbolic isometries. The hyperbolicity
of a hyperbolic element $g$ is defined as the distance between the attracting
and repelling directions, and $g$ is $\epsilon$-hyperbolic if its hyperbolicity
is $\ge \epsilon$.
\S\ref{sec:Schottky} reviews Schottky subgroups of $\operatorname{SO}^0(2,1)$ acting on
$\operatorname{H}^2_\R$. This both serves as the prototype for the subsequent
discussion of affine Schottky groups and as the starting point for the
construction of affine Schottky groups. For Schottky groups acting on
$\operatorname{H}^2_\R$, a completeness argument in Poincar\'e's polygon theorem shows
that the images of the fundamental polygon tile all of $\operatorname{H}^2_\R$; the
analogous result for affine Schottky groups is the main topic of this
paper.
\S\ref{sec:hypcrit} gives a criterion for
$\epsilon$-hyperbolicity of elements of Schottky groups.
\S\ref{sec:crooked} introduces {\em crooked planes\/} as the analogs
of geodesics in $\operatorname{H}^2_\R$ bounding the fundamental polygon.
\S\ref{sec:extSchottky1}
extends Schottky groups to linear actions on
$\R^{2,1}$. In \S\ref{sec:affineSchottky},
affine deformations of these linear actions are
proved to be properly discontinuous on open subsets of Minkowski
space, using the standard argument for Schottky groups.
The fundamental polyhedron $X$ is bounded by crooked planes, in exactly
the same configuration as the geodesics bounding the fundamental polygon
for Schottky groups. The generators of the affine Schottky group pair
the faces of $X$ in exactly the same pattern as for the original Schottky
group.
The difficult part of the proof is to show that the images $\gamma\bar{X}$ tile
{\em all\/} of Minkowski space. Assuming that a point $p$ lies in ${\mathbb E}-\Gamma\bar{X}$, Drumm
intersects the tiling with a fixed definite plane $P\ni p$.
In \S\ref{sec:zigzags}, we abstract this idea by introducing
{\em zigzags} and {\em zigzag regions,\/}
which are the intersections of crooked planes and half-spaces with $P$.
The proof that $\Gamma\bar{X}={\mathbb E}$ (that is, {\em completeness\/} of the
affine structure on the quotient $(\Gamma\bar{X})/\Gamma$) involves a
nested sequence of crooked half-spaces $\mathfrak H_k$ containing $p$.
This sequence is constructed in \S\ref{sec:sequence} and
is indexed by a sequence $\gamma_k\in\Gamma$.
\S\ref{sec:uniform} gives a uniform lower bound to the Euclidean width
of $\bar{X}$. Bounding the uniform width is a key ingredient in proving
completeness for Schottky groups in hyperbolic space.
\S\ref{sec:approx} approximates the nested sequence of zigzag regions
by a nested sequence of half-planes $\Pi_k$ in $P$.
The uniform width bounds are used to prove Lemma~\ref{lem:tubnbhd},
on the existence of tubular neighborhoods $T_k$ of $\partial\Pi_k$
which are disjoint. The Compression Lemma~\ref{lem:affinecompression},
combined with the observation (Lemma~\ref{lem:wunu}) that the zigzag regions
do not point too far away from the direction of expansion of the $\gamma_k$,
implies that the Euclidean widths of the $T_k$ are bounded below in terms of
the hyperbolicity of $\gamma_k$ (Lemma~\ref{lem:separation}).
Thus it suffices to find $\epsilon>0$ such that infinitely many $\gamma_k$ are
$\epsilon$-hyperbolic.
\S\ref{sec:alternative} applies the criterion
for $\epsilon$-hyperbolicity derived in \S\ref{sec:hypcrit} (Lemma~\ref{lem:fxpts}) to
find $\epsilon_0>0$ guaranteeing $\epsilon_0$-hyperbolicity in many cases.
In the other cases, the sequence $\gamma_k$ has a special form, the analysis
of which gives a smaller $\epsilon$ such that every $\gamma_k$ is
$\epsilon$-hyperbolic. The details of this constitute \S\ref{sec:change}.
We follow Drumm's proof, with several modifications. We wish to
thank Todd Drumm for the inspiration for this work, and Maria Morrill
and Todd Drumm for several helpful conversations.
\section{Minkowski space}\label{sec:Minkowski}
We begin with background on $\R^{2,1}$ and Minkowski (2+1)-space ${\mathbb E}$.
$\R^{2,1}$ is defined as a real inner product space of dimension 3
with a nondegenerate inner product of index 1.
Minkowski space is an affine space ${\mathbb E}$ whose underlying
vector space is $\R^{2,1}$; equivalently ${\mathbb E}$ is a simply-connected
geodesically complete flat Lorentzian manifold.
If $p,q\in{\mathbb E}$, then a unique vector ${\mathsf{v}}\in\R^{2,1}$ represents the
{\em displacement\/} from $p$ to $q$, that is, translation by the vector ${\mathsf{v}}$
is the unique translation taking $p$ to $q$; we write
\begin{equation*}
{\mathsf{v}} = q - p \text{~and~} q = p + {\mathsf{v}}.
\end{equation*}
Lines and planes in $\R^{2,1}$ are classified in terms
of the inner product. The identity component $\operatorname{SO}^0(2,1)$ of the group of
linear isometries of $\R^{2,1}$ comprises linear
transformations preserving the {\em future\/} component ${\mathcal N}_+$
of the set ${\mathcal N}$ of timelike vectors, as well as orientation. The set of rays
in ${\mathcal N}_+$ is
a model for {\em hyperbolic 2-space\/} $\operatorname{H}^2_\R$, and the geometry of
$\operatorname{H}^2_\R$ serves as a model for the geometry of $\R^{2,1}$ and ${\mathbb E}$.
A key role is played by the {\em ideal boundary\/} of $\operatorname{H}^2_\R$. Its
intrinsic description is as the projectivization of the lightcone in
$\R^{2,1}$, although following Margulis~\cite{Margulis1,Margulis2} we
identify it with a section on a Euclidean
sphere. However, to simplify the formulas, we find it more convenient
to take for this section the intersection of the future-pointing
lightcone with the sphere $S^2(\sqrt{2})$ of radius $\sqrt{2}$, rather
than the Euclidean unit sphere, which is used in the earlier
literature. We hope this departure from tradition justifies itself by
the resulting
simplification of notation.
\subsection{Minkowski space and its projectivization}
Let $\R^{2,1}$ be the three-dimensional real vector space ${\mathbb R}^3$ with the inner
product
\begin{equation*}
\mathbb B ({\mathsf{u}},{\mathsf{v}}) = {\mathsf{u}}_1{\mathsf{v}}_1 + {\mathsf{u}}_2{\mathsf{v}}_2 - {\mathsf{u}}_3{\mathsf{v}}_3
\end{equation*}
where
\begin{equation*}
{\mathsf{u}} = \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix,
{\mathsf{v}} = \bmatrix {\mathsf{v}}_1 \\ {\mathsf{v}}_2 \\ {\mathsf{v}}_3 \endbmatrix \in\R^{2,1}.
\end{equation*}
\begin{definition}
A vector ${\mathsf{u}}$ is said to be
\begin{itemize}
\item {\em spacelike} if $\mathbb B ({\mathsf{u}},{\mathsf{u}}) >0$,
\item {\em lightlike} (or {\em null}) if $\mathbb B ({\mathsf{u}},{\mathsf{u}})=0$,
\item {\em timelike} if $\mathbb B ({\mathsf{u}},{\mathsf{u}})<0$.
\end{itemize}
The union of all null lines is called the {\em lightcone}.
\end{definition}
Here is the dual trichotomy for planes in $\R^{2,1}$:
\begin{definition}
A 2-plane $P\subset\R^{2,1}$ is:
\begin{itemize}
\item
{\em indefinite\/} if the restriction of
$\mathbb B $ to $P$ is indefinite, or equivalently if $P\cap\mathcal N\neq\emptyset$;
\item
{\em null\/} if
$P\cap\bar{{\mathcal N}}$ is a null line;
\item {\em definite\/} if the
restriction $\mathbb B |_P$ is positive definite, or equivalently if
$P\cap\bar{{\mathcal N}}=\{{{\mathsf{0}}}\}$.
\end{itemize}
\end{definition}
Let $\operatorname{O}(2,1)$ denote the group of linear isometries of $\R^{2,1}$ and $\operatorname{SO}(2,1)
=\operatorname{O}(2,1) \cap \operatorname{SL}(3,{\mathbb R})$ as usual. Let $\operatorname{SO}^0(2,1)$ denote the identity
component of $\operatorname{O}(2,1)$ (or $\operatorname{SO}(2,1)$). The affine space with underlying
vector space $\R^{2,1}$ has a complete flat Lorentzian metric arising from
the inner product on $\R^{2,1}$; we call it {\em Minkowski space\/} and
denote it by ${\mathbb E}$. The isometry group $\operatorname{Isom}({\mathbb E})$ splits as a
semidirect product $\operatorname{O}(2,1)\ltimes V$ (where $V$ denotes the vector group
of translations of ${\mathbb E}$), and its identity component is
$\operatorname{Isom}^0({\mathbb E}) = \operatorname{SO}^0(2,1)\ltimes V$. The elements of $\operatorname{Isom}({\mathbb E})$ are called {\em affine
isometries,\/} to distinguish them from the linear isometries in
$\operatorname{O}(2,1)$.
The linear isometries can be characterized as the affine
isometries fixing a chosen ``origin'', which is used to
identify the {\em points\/} of ${\mathbb E}$ with the {\em vectors\/} in $\R^{2,1}$.
The {\em origin\/} is the point which corresponds to the zero vector
${\mathsf{0}}\in\R^{2,1}$.
Let ${\mathbb L}:\operatorname{Isom}^0({\mathbb E})\longrightarrow\operatorname{SO}^0(2,1)$ denote the
homomorphism associating to an affine isometry its linear part.
Let ${\mathbb P}(\R^{2,1})$ denote the {\em projective space\/} associated to $\R^{2,1}$,
that is the space of 1-dimensional linear subspaces of $\R^{2,1}$. Let
\begin{equation*}
{\mathbb P}:\R^{2,1}-\{{{\mathsf{0}}}\}\longrightarrow{\mathbb P}(\R^{2,1})
\end{equation*}
denote the quotient mapping which associates to a nonzero vector the
line it spans.
Let ${\mathcal N}$ denote the set of all timelike vectors.
Its two connected components are the convex cones
\begin{equation*}
{\mathcal N}_+ = \{{\mathsf{v}}\in{\mathcal N} \mid {\mathsf{v}}_3 > 0\}
\end{equation*}
and
\begin{equation*}
{\mathcal N}_- = \{{\mathsf{v}}\in{\mathcal N} \mid {\mathsf{v}}_3 < 0\}.
\end{equation*}
We call ${\mathcal N}_+$ the {\em future\/} or {\em positive time-orientation\/}
of ${{\mathsf{0}}}$.
{\em Hyperbolic 2-space\/} $\operatorname{H}^2_\R={\mathbb P}({\mathcal N})$ consists of timelike lines
in $\R^{2,1}$. The distance $d(u,v)$ between two points
$u = {\mathbb P}({\mathsf{u}}), v = {\mathbb P}({\mathsf{v}})$ represented
by timelike vectors ${\mathsf{u}},{\mathsf{v}}\in{\mathcal N}$ is given by:
\begin{equation*}
\cosh(d(u,v)) = \frac{\vert\mathbb B ({\mathsf{u}},{\mathsf{v}})\vert}{\sqrt{\mathbb B ({\mathsf{u}},{\mathsf{u}})\mathbb B ({\mathsf{v}},{\mathsf{v}})}}.
\end{equation*}
The hyperbolic plane can be identified with either one of the
components of the two-sheeted hyperboloid defined by $\mathbb B ({\mathsf{v}},{\mathsf{v}}) = -1$.
This hyperboloid is a section of
${\mathbb P}:{\mathcal N}\longrightarrow \operatorname{H}^2_\R$, and inherits a complete Riemannian metric
of constant curvature $-1$ from $\R^{2,1}$.
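As a quick illustration of the distance formula (not needed in the sequel),
take ${\mathsf{u}} = (0,0,1)^T$ and ${\mathsf{v}} = (\sinh(t),0,\cosh(t))^T$ with $t\ge 0$.
Both are timelike with $\mathbb B ({\mathsf{u}},{\mathsf{u}}) = \mathbb B ({\mathsf{v}},{\mathsf{v}}) = -1$ and
$\mathbb B ({\mathsf{u}},{\mathsf{v}}) = -\cosh(t)$, so
\begin{equation*}
\cosh(d(u,v)) = \frac{\vert -\cosh(t)\vert}{\sqrt{(-1)(-1)}} = \cosh(t),
\end{equation*}
that is, $t$ measures hyperbolic distance along this branch of the hyperboloid.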
The group $\operatorname{SO}^0(2,1)$ acts linearly on ${\mathcal N}$, and thus by projective
transformations on $\operatorname{H}^2_\R$. It preserves the subsets ${\mathcal N}_\pm$ and acts
isometrically with respect to the induced Riemannian structures.
The boundary $\partial{\mathcal N}$ is the union of all null lines, that is
the lightcone. The projectivization of $\partial{\mathcal N} -\{ {{\mathsf{0}}}\}$
is the {\em ideal boundary\/} $\partial\operatorname{H}^2_\R$ of hyperbolic 2-space.
Let ${\mathfrak{S}}$ denote the set of unit-spacelike vectors:
\begin{equation*}
{\mathfrak{S}} = \{{\mathsf{v}}\in\R^{2,1} \mid \mathbb B ({\mathsf{v}},{\mathsf{v}}) = 1\} .
\end{equation*}
It is a one-sheeted hyperboloid which is homeomorphic to an annulus.
Points in ${\mathfrak{S}}$ correspond to oriented geodesics in $\operatorname{H}^2_\R$, or geodesic
half-planes in $\operatorname{H}^2_\R$ as follows. Let ${\mathsf{v}}\in{\mathfrak{S}}$. Then
define the {\sl half-plane\/} $H_{\mathsf{v}} = {\mathbb P}(\tilde{H}_{\mathsf{v}})$ where
\begin{equation*}
\tilde{H}_{\mathsf{v}} = \{{\mathsf{u}}\in{\mathcal N}_+ \mid \mathbb B ({\mathsf{u}},{\mathsf{v}}) > 0\}.
\end{equation*}
$H_{\mathsf{v}}$ is one of the two components of the complement
$\operatorname{H}^2_\R - l_{\mathsf{v}}$ where
\begin{equation}\label{eq:spaceline}
l_{\mathsf{v}} = {\mathbb P}(\{{\mathsf{u}}\in{\mathcal N}_+ \mid \mathbb B ({\mathsf{u}},{\mathsf{v}}) = 0\}) = \partial H_{\mathsf{v}}
\end{equation}
is the geodesic in $\operatorname{H}^2_\R$ corresponding to the line ${\mathbb R}{\mathsf{v}}$ spanned by ${\mathsf{v}}$.
\subsection{A little Euclidean geometry}\label{sec:Euclid}
Denote the Euclidean norm by
\begin{equation*}
\Vert{\mathsf{v}}\Vert = \sqrt{({\mathsf{v}}_1)^2 + ({\mathsf{v}}_2)^2 + ({\mathsf{v}}_3)^2}
\end{equation*}
and let $\rho$ denote the Euclidean distance
on ${\mathbb E}$ defined by
\begin{equation*}
\rho(p,q) = \Vert p - q\Vert .
\end{equation*}
If $S\subset{\mathbb E}$ and $\delta>0$, the
{\em Euclidean $\delta$-neighborhood of $S$\/} is
$B(S,\delta) = \{y\in {\mathbb E} \mid \rho(S,y) < \delta\}$.
Note that $B(S,\delta) = \bigcup_{x\in S}B(x,\delta)$.
Let
\begin{equation*}
S^2(\sqrt{2}) = \{{\mathsf{v}}\in\R^{2,1} \mid \Vert{\mathsf{v}}\Vert = \sqrt{2} \}
\end{equation*}
be the Euclidean sphere of radius $\sqrt{2}$.
Let $S^1$ denote the intersection $S^2(\sqrt{2})\cap\partial{\mathcal N}_+$,
consisting of points
\begin{equation*}
{\mathsf{u}}_\phi = \bmatrix \cos(\phi) \\ \sin(\phi) \\ 1 \endbmatrix
\end{equation*}
where $\phi\in{\mathbb R}$.
The subgroup of $\operatorname{SO}^0(2,1)$ preserving $S^1$ and $S^2(\sqrt{2})$ is the
subgroup $\operatorname{SO}(2)$ consisting of rotations
\begin{equation*}
R_\phi = \bmatrix
\cos(\phi) & -\sin(\phi) & 0 \\
\sin(\phi) & \cos(\phi) & 0 \\
0 & 0 & 1 \endbmatrix.
\end{equation*}
While the linear action of $\operatorname{SO}^0(2,1)$ does not preserve
$S^1$, we may use the identification of $S^1$ with
${\mathbb P}({\partial{\mathcal N}_+})$ to define an action
of $\operatorname{SO}^0(2,1)$ on $S^1$.
If $g\in\operatorname{SO}^0(2,1)$, we denote this action
by ${\mathbb P}(g)$, that is, if ${\mathsf{u}}\in S^1$, then
${\mathbb P}(g)({\mathsf{u}})$ is the image of ${\mathbb P}(g({\mathsf{u}}))$ under the identification
${\mathbb P}({\partial{\mathcal N}_+})\longrightarrow S^1$.
Throughout this paper (and especially in \S 2), we
shall consider this action of $\operatorname{SO}^0(2,1)$ on $S^1$.
The restriction of either the Euclidean metric or the Lorentzian
metric to $S^1$ is the rotationally invariant metric $d\phi^2$ on
$S^1$ for which the total circumference is $2\pi$.
\begin{definition}
An {\em interval\/} on $S^1$ is an open subset $A$ of the form
$\{{\mathsf{u}}_\phi\mid \phi_1<\phi<\phi_2\}$ where $\phi_1 < \phi_2$ and
$\phi_2 - \phi_1 < 2\pi$. Its {\em length\/} $\Phi(A)$ is $\phi_2-\phi_1$.
\end{definition}
Note that if $\phi_1<\phi_2$, the points ${\mathsf{a}}_1={\mathsf{u}}_{\phi_1}$ and ${\mathsf{a}}_2={\mathsf{u}}_{\phi_2}\in S^1$ bound two
different intervals: we can either take $A=\{{\mathsf{u}}_\phi\mid
\phi_1<\phi<\phi_2\}$ or
$A=\{{\mathsf{u}}_\phi\mid \phi_2<\phi<\phi_1+2\pi\}$. The length of one of
these intervals is less than or equal to $\pi$, in which case
\begin{equation*}
\rho({\mathsf{a}}_1,{\mathsf{a}}_2) = 2\sin(\Phi(A)/2).
\end{equation*}
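This chord-length formula is a one-line verification from the parametrization
of $S^1$:
\begin{equation*}
\rho({\mathsf{a}}_1,{\mathsf{a}}_2)^2 = (\cos(\phi_1)-\cos(\phi_2))^2 + (\sin(\phi_1)-\sin(\phi_2))^2
= 2 - 2\cos(\phi_2-\phi_1) = 4\sin^2(\Phi(A)/2).
\end{equation*}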
Intervals correspond to unit-spacelike vectors as follows. Let
$A\subset S^1$ be an interval bounded by ${\mathsf{a}}_1={\mathsf{u}}_{\phi_1}$ and
${\mathsf{a}}_2={\mathsf{u}}_{\phi_2}$, where $\phi_1 <\phi_2$. Then the Lorentzian cross-product ${\mathsf{a}}_2\boxtimes{\mathsf{a}}_1$
(see, for example, Drumm-Goldman~\cite{DrummGoldman1,DrummGoldman2})
is a positive scalar multiple of the corresponding unit-spacelike vector.
In \cite{Drumm0,Drumm1,Drumm2}, Drumm considers conical neighborhoods
in ${\partial{\mathcal N}_+}$ rather than intervals in $S^1$.
A {\em conical neighborhood\/} is a connected open subset $U$
of the future lightcone ${\partial{\mathcal N}_+}$ which is
invariant under the group ${\mathbb R}_+$ of positive scalar multiplications.
The projectivization ${\mathbb P}(U)$ of a conical neighborhood
is a connected open interval in ${\mathbb P}({\partial{\mathcal N}_+})\approx S^1$, which we may
identify with the interval $U\cap S^1$.
Thus every conical neighborhood equals ${\mathbb R}_+\cdot A$, where $A\subset S^1$
is the interval $A = U\cap S^1$.
\subsection{Null frames}\label{sec:nullframes}
Let ${\mathsf{v}}\in{\mathfrak{S}}$ be a unit-spacelike vector. We associate to ${\mathsf{v}}$
two null vectors $\xpm{{\mathsf{v}}}$ in the future which are
$\mathbb B $-orthogonal to ${\mathsf{v}}$ and
lie on the unit circle $S^1=S^2(\sqrt{2})\cap{\partial{\mathcal N}_+}$.
These vectors
correspond to the endpoints of the geodesic $l_{\mathsf{v}}$. To define $\xpm{{\mathsf{v}}}$,
first observe that the orthogonal complement
\begin{equation*}
{\mathsf{v}}^\perp = \{{\mathsf{u}}\in\R^{2,1}\mid \mathbb B ({\mathsf{u}},{\mathsf{v}}) = 0\}
\end{equation*}
meets the positive lightcone ${\partial{\mathcal N}_+}$ in two rays.
Then $S^1 = {\partial{\mathcal N}_+}\cap S^2(\sqrt{2})$
meets ${\mathsf{v}}^\perp$ in a pair of vectors $\xpm{{\mathsf{v}}}$. We determine which one
of this pair is $\xp{{\mathsf{v}}}$ and which one is $\xm{{\mathsf{v}}}$ by requiring that the
triple
\begin{equation*}
(\xm{{\mathsf{v}}},\xp{{\mathsf{v}}},{\mathsf{v}})
\end{equation*}
be a positively oriented basis of $\R^{2,1}$.
We call such a basis a {\em null frame\/} of $\R^{2,1}$.
(Compare Figure~\ref{fig:frame}.) The pair
$\{\xm{{\mathsf{v}}},\xp{{\mathsf{v}}}\}$ is a basis for the indefinite plane
${\mathsf{v}}^\perp$. In fact, ${\mathsf{v}}$ is the
unit-spacelike vector corresponding to the interval bounded by the
ordered pair $(\xp{{\mathsf{v}}},\xm{{\mathsf{v}}})$.
Hyperbolic elements of $\operatorname{SO}^0(2,1)$ also determine null frames.
Recall that $g\in\operatorname{SO}^0(2,1)$ is {\em hyperbolic\/} if
it has real distinct eigenvalues, which are necessarily positive.
Then the eigenvalues are $\lambda,1,\lambda^{-1}$ and we may assume that
\begin{equation*}
\lambda < 1 < \lambda^{-1}.
\end{equation*}
Let $\xm{g}$ denote the unique eigenvector with eigenvalue $\lambda$ lying on
$S^1$ and $\xp{g}$ denote the unique eigenvector with eigenvalue
$\lambda^{-1}$ lying on $S^1$. Then $\xo{g}\in{\mathfrak{S}}$ is the uniquely
determined eigenvector of $g$ such that $(\xm{g},\xp{g},\xo{g})$ is positively
oriented. Observe that this is a null frame, since
$\xpm{g}=\xpm{{\mathsf{v}}}$, where ${\mathsf{v}} =\xo{g}$.
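For example (this illustration is not used later), for $t>0$ the boost
\begin{equation*}
g = \bmatrix 1 & 0 & 0 \\ 0 & \cosh(t) & \sinh(t) \\ 0 & \sinh(t) & \cosh(t) \endbmatrix
\end{equation*}
lies in $\operatorname{SO}^0(2,1)$ and is hyperbolic with eigenvalues $e^{-t} < 1 < e^{t}$.
Its attracting and repelling fixed vectors on $S^1$ are
$\xp{g} = {\mathsf{u}}_{\pi/2} = (0,1,1)^T$ and $\xm{g} = {\mathsf{u}}_{-\pi/2} = (0,-1,1)^T$,
and the neutral eigenvector completing the null frame is $\xo{g} = (-1,0,0)^T$,
since $((0,-1,1)^T,(0,1,1)^T,(-1,0,0)^T)$ is positively oriented.
Here $\rho(\xp{g},\xm{g}) = 2$.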
\subsection{$\epsilon$-Hyperbolicity}\label{sec:hyperbolicity}
We may define the {\em hyperbolicity\/} of a unit-spacelike vector ${\mathsf{v}}$
as the Euclidean distance
$\rho(\xp{{\mathsf{v}}},\xm{{\mathsf{v}}})$.
The following definition (Drumm-Goldman~\cite{DrummGoldman1}) is
based on Margulis~\cite{Margulis1,Margulis2}.
\begin{definition}
A unit-spacelike vector ${\mathsf{v}}$ is {\em $\epsilon$-spacelike} if
$ \rho(\xp{{\mathsf{v}}},\xm{{\mathsf{v}}}) \ge \epsilon$.
A hyperbolic element $g\in\operatorname{SO}^0(2,1)$ is {\em $\epsilon$-hyperbolic} if
$\xo{g}$ is $\epsilon$-spacelike.
An affine isometry is {\em $\epsilon$-hyperbolic} if its linear part
is an $\epsilon$-hyperbolic linear isometry.
\end{definition}
The spacelike vector ${\mathsf{v}}$ corresponds to a geodesic
$l_{\mathsf{v}}$ (defined in \eqref{eq:spaceline})
in the hyperbolic plane $\operatorname{H}^2_\R$.
Let
\begin{equation}\label{eq:rhtorigin}
O = {\mathbb P}\left(\bmatrix 0 \\ 0 \\ 1 \endbmatrix\right)
\end{equation}
be the {\em origin\/} in $\operatorname{H}^2_\R$.
Although we will not need this, the hyperbolicity relates to
other more familiar quantities. For example, the hyperbolicity of
a vector ${\mathsf{v}}$ relates to the distance from $l_{\mathsf{v}}$ to the origin
$O$ in $\operatorname{H}^2_\R$ and to the Euclidean length of ${\mathsf{v}}$ by:
\begin{align*}
\rho(\xp{{\mathsf{v}}},\xm{{\mathsf{v}}}) & = 2 \operatorname{sech} (d(O,l_{\mathsf{v}})) \\
& = 2 \sqrt{\frac2{1 + \Vert{\mathsf{v}}\Vert^2}}.
\end{align*}
The set of all $\epsilon$-spacelike vectors is the compact set
\begin{equation*}
{\mathfrak{S}}_\epsilon = {\mathfrak{S}} \cap B({{\mathsf{0}}},\sqrt{8/\epsilon^2 - 1})
\end{equation*}
and ${\mathfrak{S}} = \bigcup_{\epsilon>0}{\mathfrak{S}}_\epsilon$.
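For the reader's convenience, here is a sketch of these computations
(they are not needed in the sequel). Applying a rotation $R_\phi$, we may
assume ${\mathsf{v}} = (r,0,s)^T$ with $r^2-s^2 = 1$ and $r>0$. The condition
$\mathbb B ({\mathsf{u}}_\phi,{\mathsf{v}}) = 0$ gives $\cos(\phi) = s/r$, so $\xpm{{\mathsf{v}}}$ are the
points $(s/r,\pm\sqrt{1-s^2/r^2},1)^T$ and
\begin{equation*}
\rho(\xp{{\mathsf{v}}},\xm{{\mathsf{v}}}) = 2\sqrt{1-\frac{s^2}{r^2}} = \frac{2}{r}.
\end{equation*}
Since $\Vert{\mathsf{v}}\Vert^2 = r^2+s^2 = 2r^2-1$, this equals
$2\sqrt{2/(1+\Vert{\mathsf{v}}\Vert^2)}$; moreover $2/r\ge\epsilon$ holds if and only if
$\Vert{\mathsf{v}}\Vert^2\le 8/\epsilon^2-1$, which yields the description of
${\mathfrak{S}}_\epsilon$ above.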
\begin{figure}
\caption{The Compression Lemma}
\label{fig:compression}
\end{figure}
\subsection{Compression}\label{sec:compression}
We now prove the basic technical lemma bounding the compression of
Euclidean balls by isometries of ${\mathbb E}$.
A similar bound was found in Lemma 3 (pp.\ 686--687) of
Drumm~\cite{Drumm2} (see also (1) of \S3.7 of Drumm~\cite{Drumm1}).
\begin{definition}\label{def:weakunstable}
For any unit-spacelike vector ${\mathsf{v}}$, the {\em weak-unstable
plane\/} $E^{wu}({\mathsf{v}})\subset\R^{2,1}$ is the plane spanned by the vectors
${\mathsf{v}}$ and $\xp{{\mathsf{v}}}$. If $g\in\operatorname{SO}(2,1)$ is hyperbolic, then $E^{wu}(g)$
is defined as $E^{wu}(\xo{g})$.
If $x\in{\mathbb E}$, then
$E^{wu}_x(g)$ is defined as the image of $E^{wu}(g)$ under translation
by $x$.
\end{definition}
\noindent (Drumm~\cite{Drumm0,Drumm1,Drumm2} denotes $E^{wu}$ by $S_+$.)
\begin{lem}\label{lem:inscribedball}
Let ${\mathsf{v}}$ be an $\epsilon$-spacelike vector.
Let $\delta>0$ and $Q({\mathsf{v}},\delta)$ be the convex hull of the four intersection
points
of $\partial B({{\mathsf{0}}},\delta)$ with the two lines
${\mathbb R}{\mathsf{v}}$ and ${\mathbb R}\xp{{\mathsf{v}}}$. Then $Q({\mathsf{v}},\delta)$ is a rectangle
in the plane $E^{wu}({\mathsf{v}})$ containing
\begin{equation*}
B\left({\mathsf{0}},\frac{\delta\epsilon}4\right)
\cap E^{wu}({\mathsf{v}}).
\end{equation*}
\end{lem}
\begin{proof}
The lines ${\mathbb R}{\mathsf{v}}$ and ${\mathbb R}\xp{{\mathsf{v}}}$ meet
$\partial B({{\mathsf{0}}},\delta)$ in the four points
$\pm \delta \xp{{\mathsf{v}}}, \pm \delta {\mathsf{y}}$
where ${\mathsf{y}}= {\mathsf{v}}/\Vert {\mathsf{v}}\Vert$.
(Compare Figure~\ref{fig:compression}.)
Since $\triangle(\xp{{\mathsf{v}}},{\mathsf{y}},\xm{{\mathsf{v}}})$ is isosceles,
\begin{equation*}
\rho(\xp{{\mathsf{v}}},{\mathsf{y}}) = \rho(\xm{{\mathsf{v}}},{\mathsf{y}}),
\end{equation*}
and the triangle inequality implies
\begin{align*}
\epsilon & \le \rho(\xp{{\mathsf{v}}},\xm{{\mathsf{v}}})\\ & \le
\rho(\xp{{\mathsf{v}}},{\mathsf{y}}) + \rho({\mathsf{y}},\xm{{\mathsf{v}}}) \\ & \le 2 \rho(\xp{{\mathsf{v}}},{\mathsf{y}}).
\end{align*}
Therefore $\rho(\xp{{\mathsf{v}}},{\mathsf{y}}) \ge \epsilon/2$.
Similarly $\rho(\xp{{\mathsf{v}}},-{\mathsf{y}}) \ge \epsilon/2.$
Thus the sides
of $Q({\mathsf{v}},\delta)$ have length at least $\delta\epsilon/2$,
and $B({{\mathsf{0}}},\delta\epsilon/4)\subset Q({\mathsf{v}},\delta)$ as claimed.
\end{proof}
\begin{lem}\label{lem:CompressionLemma}
Suppose that $g\in\operatorname{SO}^0(2,1)$ is $\epsilon$-hyperbolic.
Then for all $\delta>0$,
\begin{equation*}
B\left({\mathsf{0}},\frac{\delta\epsilon}4\right) \cap E^{wu}(g) \subset
gB({{\mathsf{0}}},\delta).
\end{equation*}
\end{lem}
\begin{proof}
Write ${\mathsf{y}} = \xo{g}/\Vert\xo{g}\Vert$ and $Q = Q(\xo{g},\delta)$.
Then $g$ fixes $\pm\delta{\mathsf{y}}$ and multiplies $\pm\delta\xp{g}$
by $\lambda(g)^{-1}>1$, so $Q \subset g(Q)$.
(Compare Figure~\ref{fig:compression}.)
By Lemma~\ref{lem:inscribedball},
$B({\mathsf{0}},\delta\epsilon/4) \cap E^{wu}(g) \subset Q
\subset gB({{\mathsf{0}}},\delta).$
\end{proof}
The following lemma directly follows from Lemma~\ref{lem:CompressionLemma}
by applying translations.
\begin{lem}[The Compression Lemma]\label{lem:affinecompression}
Suppose that $h\in\operatorname{Isom}^0({\mathbb E})$ is an $\epsilon$-hyperbolic affine
isometry.
Then for all $\delta>0$ and $x\in{\mathbb E}$,
\begin{equation*}
B\left(h(x),\frac{\delta\epsilon}4\right) \cap E^{wu}_{h(x)}({\mathbb L}(h)) \subset
h(B(x,\delta)).
\end{equation*}
\end{lem}
\section{Schottky groups}\label{sec:Schottky}
In this section we recall the construction of Schottky subgroups in
$\operatorname{SO}^0(2,1)$ and their action on $\operatorname{H}^2_\R$. We also use the projective action
${\mathbb P}(g)$ of $g\in\operatorname{SO}^0(2,1)$ on $S^1$ (see \S\ref{sec:Euclid}) and
abbreviate ${\mathbb P}(g)$ by $g$ to ease notation.
This classical construction is the template for the construction of
affine Schottky groups later in \S\ref{sec:affineSchottky}. Then we
prove several elementary technical facts to be used later in the proof
of the Main Theorem.
\subsection{Schottky's configuration}
Let $G\subset\operatorname{SO}^0(2,1)$ be a group generated by
$g_1,\dots,g_m$, for which there exist intervals
$A_i^-,A_i^+\subset S^1$, $i=1,\dots,m$ such that:
\begin{itemize}
\item $g_i(A_i^-) = S^1 - \bar{A}_i^+$;
\item $g_i^{-1}(A_i^+) = S^1 - \bar{A}_i^-$.
\end{itemize}
We call $G$ a {\em Schottky group}. Write $J$ for the set $\{+1,-1\}$ or its abbreviated version
$\{+,-\}$. Denote by $I$
the set $\{1,\dots,m\}$. We index many of the objects associated with Schottky
groups (for example the intervals $A_i^j$ and the Schottky
generators $g_i^j$) by the Cartesian product
\begin{equation*}
I\times J = \{1,\dots,m\} \times \{+,-\}.
\end{equation*}
Let $H_i^+$ and $H_i^-$ be the two half-spaces in $\operatorname{H}^2_\R$ bounded by
$A_i^+$ and $A_i^-$ respectively (that is, their convex hulls in $\operatorname{H}^2_\R$).
Their complement
\begin{equation*}
\Delta_i = \operatorname{H}^2_\R-(\bar{H}_i^+\cup \bar{H}_i^-)
\end{equation*}
is the convex hull in $\operatorname{H}^2_\R$ of
$\partial\operatorname{H}^2_\R-(\bar{A}_i^+\cup \bar{A}_i^-). $
These half-spaces satisfy conditions analogous to those above:
\begin{itemize}
\item $g_i(H_i^-) = \operatorname{H}^2_\R - \bar{H}_i^+$;
\item $g_i^{-1}(H_i^+) = \operatorname{H}^2_\R - \bar{H}_i^-$.
\end{itemize}
\end{itemize}
(Compare Fig.~\ref{fig:delta1}.) $\Delta_i$ is a fundamental domain
for the cyclic group $\langle g_i\rangle$. As all of the $A_i^j$ are disjoint,
all of the complements $\operatorname{H}^2_\R -\Delta_i$ are disjoint.
\begin{lem}\label{lem:Brouwer}
For each $i=1,\dots,m$, $\xp{g_i}\in A_i^+$ and
$\xm{g_i}\in A_i^-$.
\end{lem}
\begin{proof}
Since $g_i$ is hyperbolic, it has three invariant lines corresponding
to its eigenvalues. The two eigenvectors corresponding to $\lambda$ and
$\lambda^{-1}$ are null, determining exactly two fixed points of $g_i$
on $S^1$. The fixed point corresponding to $\xp{g_i}$ is attracting
and the fixed point corresponding to $\xm{g_i}$ is repelling.
Since $A_i^+\subset S^1 - \bar{A}_i^-$,
\begin{equation*}
g_i(A_i^+) \subset g_i(S^1 - \bar{A}_i^-) = A_i^+
\end{equation*}
so Brouwer's fixed-point theorem implies that
so Brouwer's fixed-point theorem implies that
either $\xp{g_i}$ or $\xm{g_i}$ lies in $A_i^+$.
The same argument applied to $g_i^{-1}$ implies that
either $\xp{g_i}$ or $\xm{g_i}$ lies in $A_i^-$.
Since
$g_i$ has
only two fixed points and $A_i^-\cap A_i^+=\emptyset$,
either $\xp{g_i}\in A_i^+, \xm{g_i}\in A_i^-$ or
$\xp{g_i}\in A_i^-, \xm{g_i}\in A_i^+$. The latter case cannot happen
since $\xp{g_i}$ is attracting and
$\xm{g_i}$ is repelling.
\end{proof}
The following theorem is the basic result on Schottky groups. It is
one of the simplest cases of ``Poincar\'e's
theorem on fundamental polygons'' or ``Klein's combination theorem.''
Compare Beardon~\cite{Beardon}, Ford~\cite{Ford},
Ratcliffe~\cite{Ratcliffe}, pp. 584--587, and
Epstein-Petronio~\cite{EpsteinPetronio}.
\begin{thm}\label{thm:Schottky}
The set $\{g_1,\dots,g_m\}$ freely generates $G$ and $G$ is discrete.
The intersection $\Delta = \Delta_1\cap \dots \cap \Delta_m$
is a fundamental domain for $G$ acting on $\operatorname{H}^2_\R$.
\end{thm}
\noindent Figure~\ref{fig:delta2} depicts the pattern of identifications.
We break the proof into three separate lemmas: first, that the
$g\Delta$ form a set of disjoint tiles, second, that $G$ is
discrete and third, that these tiles cover all of $\operatorname{H}^2_\R$. The first
lemma extends immediately to affine Schottky groups. The second lemma
is automatic since the linear part of $\Gamma$ equals $G$, which we
already know is discrete. However, a much different argument is
needed to prove that the tiles cover ${\mathbb E}$ in the affine case.
\begin{figure}
\caption{The tiling associated to a Schottky group, in the Poincar\'e disk model}
\label{fig:delta2}
\end{figure}
\begin{lem}\label{lem:ldisjoint}
If $g\in G$ is nontrivial, then $g\Delta\cap\Delta = \emptyset$.
\end{lem}
\begin{lem}\label{lem:ldiscrete} $G$ is discrete.\end{lem}
\begin{lem}\label{lem:lcomplete} The union
\begin{equation*}
G\bar{\Delta} = \bigcup_{g\in G} g(\bar{\Delta})
\end{equation*}
equals all of $\operatorname{H}^2_\R$.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lem:ldisjoint}]
We show that if $g\in G$ is a reduced word
\begin{equation*}
g = g_{i_k}^{j_k} \dots g_{i_1}^{j_1}
\end{equation*}
(where $j_i = \pm 1$) then either $k=0$ (that is, $g = 1$) or
$g \Delta\cap \Delta = \emptyset$.
In the latter case $g\bar{\Delta}\cap\bar{\Delta}=\emptyset$
unless $k=1$. This implies that the $g_i$ freely generate $G$
and that $G$ acts properly and freely on the union
\begin{equation*}
G\bar{\Delta} = \bigcup_{g\in G} g\bar{\Delta}.
\end{equation*}
Then we show that $G\bar{\Delta} = \operatorname{H}^2_\R$.
Suppose that $k>0$. We claim inductively that
$g(\Delta) \subset H_{i_k}^{j_k}.$
Let
$g' = g_{i_{k-1}}^{j_{k-1}} \dots g_{i_1}^{j_1} $
so that $g = g_{i_k}^{j_k} g'$.
If $k=1$, then $g'=1$ and
\begin{equation*}
g \Delta =
g_{i_1}^{j_1} \Delta \subset
g_{i_1}^{j_1} (\operatorname{H}^2_\R - \bar{H}_{i_1}^{-j_1})\subset
H_{i_1}^{j_1}\subset \operatorname{H}^2_\R -\Delta.
\end{equation*}
If $k>1$, then $g_{i_k}^{-j_k}\neq
g_{i_{k-1}}^{j_{k-1}}$ (since $g$ is a
reduced word) so
$H_{i_k}^{-j_k}\neq H_{i_{k-1}}^{j_{k-1}}$. The
induction hypothesis
\begin{equation*}
g'(\Delta)\subset H_{i_{k-1}}^{j_{k-1}}
\end{equation*}
implies
\begin{equation*}
g \Delta =
g_{i_k}^{j_k}g' \Delta \subset
g_{i_k}^{j_k} H_{i_{k-1}}^{j_{k-1}} \subset
g_{i_k}^{j_k} (\operatorname{H}^2_\R - \bar{H}_{i_k}^{-j_k}) \subset
H_{i_k}^{j_k}
\end{equation*}
as desired. Thus all of the $g\Delta$, for $g\in G$,
are disjoint and their closures $g\bar{\Delta}$ tile $G\bar{\Delta}$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:ldiscrete}]
Let $x_0\in\Delta$. Since $\Delta$ is open, there exists $\delta_0>0$
such that the $\delta_0$-ball (in the hyperbolic metric $d$ on $\operatorname{H}^2_\R$)
about $x_0$ lies in $\Delta$. We have proved that if $g$ is a reduced
word in $g_1,\dots,g_m$ then $g(x_0)\notin \Delta$ so
$d(x_0,g(x_0))>\delta_0$. In particular no sequence of reduced words in $G$
can accumulate on the identity, proving that $G$ is discrete.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:lcomplete}]
We must prove $G\bar{\Delta}=\operatorname{H}^2_\R$. We use a completeness argument to show that
the quotient $M = (G\bar{\Delta})/G$ is actually $\operatorname{H}^2_\R/G$.
We begin by making an abstract model for the universal covering space
$\tilde M$ as the quotient space of the Cartesian
product $\bar{\Delta}\times G$ by the equivalence relation generated by the identifications
\begin{equation*}
(x,g) \sim (g_i^j x, g g^{-j}_i)
\end{equation*}
where $x\in\partial H^{-j}_i$, $g\in G$ and $(i,j)\in I\times J$.
Then $M = (G\bar{\Delta})/G$ inherits a hyperbolic structure from $\operatorname{H}^2_\R$.
We show that this hyperbolic structure is complete to prove that
$M=\operatorname{H}^2_\R/G$ and thus $G\bar{\Delta}=\operatorname{H}^2_\R$.
The identifications define the structure of a smooth manifold on $\tilde M$.
If $x\in\Delta$, then the equivalence class of $(x,g)$ contains only
$(x,g)$ itself, and a smooth chart at $(x,g)$ arises from the smooth
structure on $\Delta$. If $x\in\partial\Delta$, then the equivalence class
of $(x,g)$ equals
\begin{equation*}
\{ (x,g), (g_i^j x, g g^{-j}_i)\}
\end{equation*}
where $x\in\partial H^{-j}_i$.
Let $U$ be an open neighborhood of $x$ in $\operatorname{H}^2_\R$ which intersects the orbit
$Gx$ in $\{x\}$.
Since $x$ is a boundary point of the smooth
surface-with-boundary $\bar{\Delta}$, the intersection $U\cap\bar{\Delta}$ is a coordinate
patch for $x$ in $\bar{\Delta}$. The image $g_i^j(U-\Delta)$ is a smooth coordinate
patch for $g_i^j x$ in $\bar{\Delta}$ and the image of the union
\begin{equation*}
(U\cap\bar{\Delta})\times \{g\}\quad \bigcup \quad
(g_i^j(U-\Delta)) \times \{gg_i^{-j}\}
\end{equation*}
is a smooth coordinate patch for the equivalence class of $(x,g)$.
The $G$-action defined by
\begin{equation*}
\gamma: (x,g) \longmapsto (x,\gamma g)
\end{equation*}
preserves this equivalence relation and thus defines a $G$-action
on the quotient $\tilde M$. The map
\begin{align*}
D: \bar{\Delta}\times G & \longrightarrow \operatorname{H}^2_\R \\
(x,g) & \longmapsto g(x)
\end{align*}
preserves the equivalence relation and defines a $G$-equivariant
map, the {\em developing map\/} $D:\tilde M\longrightarrow\operatorname{H}^2_\R$.
The developing map $D$ is a local diffeomorphism onto the open set
$G\bar{\Delta}$.
Pull back the hyperbolic metric from $\operatorname{H}^2_\R$ by $D$ to obtain a Riemannian
metric on $\tilde M$ for which $D$ is a local isometry. Since $G$ acts
isometrically on $\operatorname{H}^2_\R$ and $D$ is $G$-equivariant, this metric is $G$-invariant.
We claim $\tilde M$ is geodesically complete. To this end, consider a
maximal unit-speed geodesic ray $\mu(t)$ defined for $0 < t < t_0$.
Its preimage in $\bar{\Delta}\times G$ consists of geodesic segments $\mu_k$ in
various components $\bar{\Delta}\times \{g_{i_k}\}$.
We claim there are only finitely many segments $\mu_k$. Since $\bar{\Delta}$
is convex, a segment $\mu_k$ cannot enter and exit $\bar{\Delta}$
through the same side. Since the defining arcs $A_i^j$ are pairwise
disjoint, the corresponding geodesics $\partial H_i^j$ are pairwise
ultraparallel and the distance between different sides of $\bar{\Delta}$ is
bounded below by some $\delta >0$. Since the length of $\mu$ equals $t_0$,
there can be at most $t_0/\delta$ segments $\mu_k$.
Let $\mu_k:[t_1,t_0)\longrightarrow \bar{\Delta}\times \{g_{i_k}\}$ be the last
geodesic segment. Since $\bar{\Delta}$ is closed, $\mu_k(t)$ converges as
$t\longrightarrow t_0$, contradicting maximality.
Thus $\tilde M$ is geodesically complete. Since a local isometry
from a complete Riemannian manifold is a covering map,
$D$ is a covering map. The van Kampen
theorems imply that $\tilde M$ is simply connected, so that $D$ is
a diffeomorphism and hence surjective. Thus $G\bar{\Delta}=\operatorname{H}^2_\R$ as desired.
\end{proof}
\subsection{Existence of a small interval}
The following lemma is an elementary fact which is used in the proof
of completeness. We assume that the number $m$ of generators in the
Schottky group is at least $2$.
\begin{lem} \label{lem:small}
Let $\{A_i^j\mid (i,j)\in I\times J\}$
be a collection of disjoint intervals on $S^2\cap{\mathcal N}_+$.
Then there exists an $(i_0,j_0)\in I\times J$ such
that the length $\Phi(A_{i_0}^{j_0}) < \pi/2$.
\end{lem}
\begin{proof}
Since the $A_i^j$ are disjoint, their total length is bounded by $2\pi$.
Since there are $2m \ge 4$ of them,
at least one has length $< 2\pi/4 = \pi/2$ as claimed.
\end{proof}
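For instance (the case $m=2$, assuming the intervals are closed arcs): if all four intervals $A_1^+, A_1^-, A_2^+, A_2^-$ had length at least $\pi/2$, then
\begin{equation*}
\Phi(A_1^+)+\Phi(A_1^-)+\Phi(A_2^+)+\Phi(A_2^-) \geq 4\cdot\frac{\pi}{2} = 2\pi,
\end{equation*}
while disjoint closed arcs leave gaps of positive length between consecutive arcs, so their total length is strictly less than $2\pi$.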
\subsection{A criterion for $\epsilon$-hyperbolicity}\label{sec:hypcrit}
To determine proper discontinuity of affine Schottky groups, we
examine sequences of group elements.
An important case is when for some $\epsilon >0$, every
element of the sequence is $\epsilon$-hyperbolic. Here is a useful criterion
for such $\epsilon$-hyperbolicity of an entire sequence. As before,
$A_i^j, (i,j)\in I\times J$,
denote the disjoint intervals associated to the
generators $g_1,\dots ,g_m$ of a Schottky group.
\begin{lem}\label{lem:fxpts}
Let $\theta_0$ be the minimum angular separation between the
intervals $A_i^j\subset S^1$ and let
$\epsilon_0 = 2\sin(\theta_0/2)$.
Suppose that
\begin{equation}\label{eq:reducedword}
g = g_{i_0}^{j_0}\dots g_{i_l}^{j_l}
\end{equation}
is a reduced word.
If $(i_l,j_l)\neq (i_0,-j_0)$ then $g$ is $\epsilon_0$-hyperbolic.
\end{lem}
The condition $(i_l,j_l)\neq (i_0,-j_0)$ means that
\eqref{eq:reducedword} describes a {\em cyclically reduced\/} word.
\begin{proof}
By Lemma~\ref{lem:Brouwer},
\begin{equation*}
\xp{g}\in A_{i_0}^{j_0}
\end{equation*}
and
\begin{equation*}
\xm{g} = \xp{g^{-1}} \in A_{i_l}^{-j_l}.
\end{equation*}
Since $(i_l,j_l)\neq (i_0,-j_0)$, the vectors
$\xp{g}$ and $\xm{g}$ lie in the attracting interval $A_{i_0}^{j_0}$ and
the repelling interval $A_{i_l}^{-j_l}$, respectively. Since these
intervals are disjoint, the two vectors are separated by an angle of at
least $\theta_0$, so $\rho(\xm{g},\xp{g}) \ge 2\sin(\theta_0/2) = \epsilon_0$.
\end{proof}
This lemma is crucial in the analysis of the sequences of elements of
$\Gamma$ arising from incompleteness: these sequences will all
be $\epsilon$-hyperbolic for some $\epsilon>0$.
A typical sequence which is not $\epsilon$-hyperbolic for any $\epsilon>0$
is the following (compare \cite{DrummGoldman1}):
\begin{equation*}
\gamma_n = g_1^n g_2 g_1^{-n}.
\end{equation*}
Since all the elements are conjugate, the eigenvalues are constant
(in particular they are bounded).
However both sequences of eigenvectors $\xm{\gamma_n}, \xp{\gamma_n}$
converge to $\xp{g_1}$ so
$\rho(\xm{\gamma_n}, \xp{\gamma_n}) \longrightarrow 0.$
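To see the collapse of fixed points explicitly (a short verification using only the equivariance of fixed points under conjugation), note that
\begin{equation*}
\xpm{\gamma_n} = \xpm{g_1^n g_2 g_1^{-n}} = g_1^n\left(\xpm{g_2}\right).
\end{equation*}
Neither fixed point of $g_2$ equals $\xm{g_1}$, and the iterates $g_1^n$ attract every point of $S^1$ other than $\xm{g_1}$ toward $\xp{g_1}$; hence both sequences of eigenvectors converge to $\xp{g_1}$.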
\section{Crooked planes and zigzags}\label{sec:crooked}
Now consider the action of a Schottky group $G\subset\operatorname{SO}^0(2,1)$ on Minkowski
$(2+1)$-space ${\mathbb E}$.
The inverse projectivization ${\mathbb P}^{-1}(\Delta)$ of a fundamental
domain $\Delta$ is a fundamental domain for the linear action of $G$
on the subspace ${\mathcal N}$ of timelike vectors.
We extend this fundamental domain to a
larger open subset of ${\mathbb E}$. The extended fundamental domains are
bounded by polyhedral surfaces called {\em crooked planes.\/} For the
groups of linear transformations, the crooked planes all pass through
the origin --- indeed the origin is a special point of each crooked
plane, its {\em vertex.\/}
The Schottky group $G = \langle g_1,\dots,g_m\rangle$ acts properly
discontinuously and freely on the open subset ${\mathcal N}$ consisting of
timelike vectors in $\R^{2,1}$. However, since $G$ fixes the origin, the
$G$-action on all of ${\mathbb E}$ is quite far from being properly discontinuous.
We then deform $G$ inside the group of affine isometries of ${\mathbb E}$ to
obtain a group $\Gamma$ which in certain cases acts freely and
properly discontinuously. This {\em affine deformation\/} $\Gamma$ of
$G$ is defined by geometric identifications of a family of {\em disjoint\/}
crooked planes. The crooked planes bound {\em crooked half-spaces\/}
whose intersection is a {\em crooked polyhedron\/} $X$.
In \cite{Drumm1}, Drumm proved the remarkable fact that as long as the
crooked planes are disjoint,
$\Gamma$ acts freely and properly discontinuously on ${\mathbb E}$ with fundamental
domain $X$.
\subsection{Extending Schottky groups to Minkowski space}
\label{sec:extSchottky1}
When $\Delta$ is a fundamental domain for $G$
acting on $\operatorname{H}^2_\R$, its inverse image ${\mathbb P}^{-1}(\Delta)$ is a fundamental domain for the
action of $G$ on ${\mathcal N}$. The faces of ${\mathbb P}^{-1}(\Delta)$ are the
intersections of ${\mathcal N}$ with indefinite planes corresponding to the
geodesics in $\operatorname{H}^2_\R$ forming the sides of $\Delta$.
Each face $S$ of ${\mathbb P}^{-1}(\Delta)$ extends to a polyhedral surface $\mathcal C$
in $\R^{2,1}$ called a {\em crooked plane.\/} The face
$S\subset{\mathbb P}^{-1}(\Delta)$ is the {\em stem\/} of the crooked plane.
To extend the stem one adds two null half-planes, called the {\em wings,\/}
along the null lines bounding $S$.
Crooked planes are more flexible than Euclidean planes
since one can build fundamental polyhedra for free, properly discontinuous
groups from them.
Crooked planes were introduced by Drumm in his doctoral
dissertation~\cite{Drumm0} (see also \cite{Drumm1,Drumm2}).
Their geometry is extensively discussed in Drumm-Goldman~\cite{DrummGoldman2}
(see also \cite{ERA}) where their intersections are classified.
\subsection{Construction of a crooked plane}\label{defcp}
Here is an example from which we derive the general definition of a crooked
plane. The geodesic $l_{\mathsf{v}}$ determined by the spacelike vector
\begin{equation*}
{\mathsf{v}} = \bmatrix 1 \\ 0 \\ 0 \endbmatrix
\end{equation*}
corresponds to the set
\begin{equation*}
S_0 = \overline{{\mathbb P}^{-1}(l_{\mathsf{v}})}
= \left\{ \bmatrix 0 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \Bigg|
\vert {\mathsf{u}}_3\vert \geq \vert {\mathsf{u}}_2\vert \right\} \subset \bar{{\mathcal N}}
\end{equation*}
which is the {\em stem} of the crooked plane. The two lines bounding $S_0$ are
\begin{align*}
\partial^-S_0 & =
\left\{ \bmatrix 0 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \Bigg| {\mathsf{u}}_2 = {\mathsf{u}}_3 \right\}
\\
\partial^+S_0 & =
\left\{ \bmatrix 0 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \Bigg|
{\mathsf{u}}_2 = -{\mathsf{u}}_3 \right\}
\end{align*}
and the {\em wings} are the half-planes
\begin{align*}
{\mathcal W}^-_0 & =
\left\{ \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix
\Bigg|
{\mathsf{u}}_1 \le 0, {\mathsf{u}}_2 = {\mathsf{u}}_3 \right\} \\
{\mathcal W}^+_0 & =
\left\{ \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \Bigg|
{\mathsf{u}}_1 \ge 0, {\mathsf{u}}_2 = -{\mathsf{u}}_3 \right\}.
\end{align*}
The crooked plane
$\mathcal C_0$ is defined as the union
\begin{equation*}
\mathcal C_0 = {\mathcal W}^-_0 \cup S_0 \cup {\mathcal W}^+_0.
\end{equation*}
(Compare Figure~\ref{fig:cp1}.)
Corresponding to the half-plane $H_{\mathsf{v}}\subset\operatorname{H}^2_\R$ is the region
\begin{equation*}
\tilde{H}_{\mathsf{v}} =
\left\{ \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \in{\mathcal N} \ \Bigg|
{\mathsf{u}}_1 > 0, {\mathsf{u}}_3 > 0 \right\}
\end{equation*}
and the component of the complement ${\mathbb E} - \mathcal C_0$ containing
$\tilde{H}_{\mathsf{v}}$ is the {\em crooked half-space\/}
\begin{align*}
\mathcal H({\mathsf{v}}) & =
\left\{ \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \ \Bigg|
{\mathsf{u}}_1 > 0, {\mathsf{u}}_2 + {\mathsf{u}}_3 > 0 \right\} \\
& \quad \bigcup
\left\{ \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \ \Bigg|
{\mathsf{u}}_1 = 0, {\mathsf{u}}_2 + {\mathsf{u}}_3 > 0, {\mathsf{u}}_2 - {\mathsf{u}}_3 > 0 \right\} \\
& \qquad \bigcup
\left\{ \bmatrix {\mathsf{u}}_1 \\ {\mathsf{u}}_2 \\ {\mathsf{u}}_3 \endbmatrix \ \Bigg|
{\mathsf{u}}_1 < 0, {\mathsf{u}}_2 - {\mathsf{u}}_3 > 0 \right\} .
\end{align*}
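As a consistency check, the region $\tilde{H}_{\mathsf{v}}$ does lie in the first of these three sets: a timelike vector satisfies ${\mathsf{u}}_1^2 + {\mathsf{u}}_2^2 < {\mathsf{u}}_3^2$, so if ${\mathsf{u}}_3 > 0$ then
\begin{equation*}
{\mathsf{u}}_3 > \sqrt{{\mathsf{u}}_1^2+{\mathsf{u}}_2^2} \geq \vert {\mathsf{u}}_2 \vert ,
\end{equation*}
whence ${\mathsf{u}}_2+{\mathsf{u}}_3>0$.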
Now let ${\mathsf{u}}\in{\mathfrak{S}}$ be any unit-spacelike vector, determining the half-plane
$H_{\mathsf{u}}\subset\operatorname{H}^2_\R$. The crooked plane directed by ${\mathsf{u}}$ can be defined as
follows, using the previous example. Let $g\in\operatorname{SO}^0(2,1)$ such that
$g({\mathsf{v}})={\mathsf{u}}$. The {\em crooked plane directed by ${\mathsf{u}}$\/} is
$\mathcal C({\mathsf{u}}) = g(\mathcal C_0)$.
Since $g$ preserves the spacelike, lightlike or timelike nature of a
vector, we see that $\mathcal C({\mathsf{u}})$ is composed of a stem flanked by two
tangent wings, just like $\mathcal C_0$.
The crooked plane is singular at the origin,
which we call the {\em vertex\/} of $\mathcal C({\mathsf{u}})$. In general, if $p\in{\mathbb E}$
is an arbitrary point, the {\em crooked plane directed by ${\mathsf{u}}$ and
with vertex $p$\/} is defined as:
\begin{equation*}
\mathcal C({\mathsf{u}},p) = p +\mathcal C({\mathsf{u}}).
\end{equation*}
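As an illustration (a worked instance with one particular choice of $g$), take
\begin{equation*}
{\mathsf{u}} = \bmatrix 0 \\ 1 \\ 0 \endbmatrix, \qquad
g = \bmatrix 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \endbmatrix \in \operatorname{SO}^0(2,1),
\end{equation*}
so that $g({\mathsf{v}})={\mathsf{u}}$. Since $g$ sends $({\mathsf{u}}_1,{\mathsf{u}}_2,{\mathsf{u}}_3)$ to $(-{\mathsf{u}}_2,{\mathsf{u}}_1,{\mathsf{u}}_3)$, the crooked plane $\mathcal C({\mathsf{u}})=g(\mathcal C_0)$ has stem $\{{\mathsf{u}}_2=0,\ \vert{\mathsf{u}}_3\vert\geq\vert{\mathsf{u}}_1\vert\}$ and wings $\{{\mathsf{u}}_2\le 0,\ {\mathsf{u}}_1=-{\mathsf{u}}_3\}$ and $\{{\mathsf{u}}_2\ge 0,\ {\mathsf{u}}_1={\mathsf{u}}_3\}$, with vertex at the origin.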
Let $\mathcal C({\mathsf{u}},p)\subset{\mathbb E}$ be a crooked plane.
We define the crooked half-space $\mathcal H({\mathsf{u}},p)$ to be the component
of the complement ${\mathbb E}-\mathcal C({\mathsf{u}},p)$ which is bounded by $\mathcal C({\mathsf{u}},p)$
and contains $p+H_{{\mathsf{u}}}$. Note that ${\mathbb E}$ decomposes as a disjoint union
\begin{equation*}
{\mathbb E} = \mathcal H({\mathsf{u}},p) \cup \mathcal C({\mathsf{u}},p) \cup \mathcal H(-{\mathsf{u}},p).
\end{equation*}
(Crooked half-spaces are called {\em wedges\/} in
Drumm~\cite{Drumm0,Drumm1,Drumm2}.)
The {\em angle} $\Phi(\mathcal H({\mathsf{u}},p))$ of the crooked half-space
$\mathcal H({\mathsf{u}},p)$ is taken to be the angle of $A$, the interval determined
by the half-plane $H_{\mathsf{u}}$.
\begin{figure}
\caption{A crooked plane}\label{fig:cp1}
\end{figure}
\subsection{Zigzags}\label{sec:zigzags}
To understand the tiling of ${\mathbb E}$ by crooked polyhedra, we intersect
the tiling with a fixed definite plane $P$, which is always transverse
to the stem and wings of any crooked plane. Since the tiling only
contains countably many crooked planes, we may assume that $P$ misses
the vertices of the crooked planes in the tiling.
A {\em zigzag\/} in a definite plane $P$ is a union $\zeta$ of two disjoint
rays $r_0$ and $r_1$ and the segment $s$ (called the {\em stem\/})
joining the endpoint $v_0$ of $r_0$ to the endpoint $v_1$ of $r_1$,
such that the two angles $\theta_0$ and $\theta_1$ formed by the rays at the
respective endpoints of $s$ differ by $\pi$ radians.
The intersection of a crooked plane with a definite plane not
containing its vertex is a zigzag. (Conversely, every zigzag extends to
a unique crooked plane, although we do not need this fact.) Compare Figure~\ref{fig:crooked}.
\begin{figure}
\caption{Angles in a zigzag}\label{fig:angles}
\end{figure}
A {\em zigzag region\/} is a component $\mathcal Z$ of $P-\zeta$ where
$\zeta\subset P$ is a zigzag. Equivalently $\mathcal Z$ is the
intersection of a crooked half-space with $P$.
The corresponding half-space in $\operatorname{H}^2_\R$ is bounded by an interval $A\subset S^1$
whose length $\Phi(A)$ is defined as the
{\em angle\/} $\Phi(\mathcal Z)$ of the zigzag region. One of the angles of
$\mathcal Z$
equals $\Phi(A)/2$ and the other, $\Phi(A)/2+\pi$.
Compare Figure~\ref{fig:angles}.
If $r\subset P$ is an open ray contained in the zigzag region $\mathcal Z$,
then $r$ lies in a unique maximal open ray inside $\mathcal Z$ (just move the
endpoint of $r$ back until it meets $\zeta$). Every maximal open ray
then lies in one of two angular sectors (possibly both). One angular
sector has vertex $v_0$ and sides $r_0$ and $s$, and subtends the
angle $\theta_0$. The other angular sector has vertex $v_1$ and sides
$r_1$ and $s$, and subtends the angle $\theta_1$. Any two rays in $\mathcal Z$
make an angle of at most $\Phi(\mathcal Z)$ with each other.
\subsection{Affine deformations}\label{sec:affineSchottky}
Consider a group $\Gamma$ generated by isometries
\begin{equation*}
h_1,\dots, h_m\in\operatorname{Isom}({\mathbb E})^0
\end{equation*}
for which there exist crooked half-spaces $\mathcal H_i^j$,
where $(i,j)\in I\times J$
(using the indexing convention defined in \S\ref{sec:Schottky})
such that:
\begin{itemize}
\item all the $\mathcal H_i^j$ are pairwise disjoint;
\item $h_i(\mathcal H_i^-)= {\mathbb E} - \bar{\mathcal H}_i^+$. (Thus
$h_i^{-1}(\mathcal H_i^+)= {\mathbb E} - \bar{\mathcal H}_i^-$ as well.)
\end{itemize}
We call $\Gamma$ an {\em affine Schottky group.\/} The set
\begin{equation*}
X = {\mathbb E} - \bigcup_{(i,j)\in I\times J} \bar{\mathcal H}_i^j
\end{equation*}
is an open subset of ${\mathbb E}$ whose closure $\bar{X}$ is a finite-sided polyhedron
in ${\mathbb E}$ bounded by the crooked planes $\mathcal C_i^j=\partial\mathcal H_i^j$.
Drumm-Goldman~\cite{DrummGoldman2} provides criteria for
disjointness of crooked half-spaces.
In particular for every configuration of disjoint half-planes
\begin{equation*}
H_1^+,H_1^-,\dots,H_m^+,H_m^-\subset\operatorname{H}^2_\R
\end{equation*}
paired by Schottky generators
$g_1,\dots,g_m\in\operatorname{SO}^0(2,1)$, there exists a configuration
of disjoint crooked half-spaces
\begin{equation*}
\mathcal H_1^+,\mathcal H_1^-,\dots,\mathcal H_m^+,\mathcal H_m^-\subset{\mathbb E}
\end{equation*}
whose directions correspond to the $H_i^j$, and which are paired by affine transformations
$h_i$ with linear part $g_i$, satisfying the above conditions.
We show that $\bar{X}$ is a fundamental polyhedron for $\Gamma$ acting
on ${\mathbb E}$. As with standard Schottky groups, one first shows that
the images $h\bar{X}$ form a set of {\em disjoint\/} tiles of $\Gamma\bar{X}$.
\begin{lem} $\Gamma\bar{X}$ is open.\end{lem}
\begin{proof}
(Compare the proof of Lemma~\ref{lem:lcomplete}.)
If $x\in X$, then $\gamma x$ is an interior point of $\Gamma X \subset
\Gamma\bar{X}$ for every $\gamma\in\Gamma$.
Otherwise suppose $x\in\partial X$. Then
$x\in \mathcal C_i^j$ for some $(i,j)\in I\times J$.
Let $B$ be an open ball about $x$ such
that $B\cap\partial X\subset \mathcal C_i^j$. Then
\begin{equation*}
(B\cap\bar{X}) \cup (h_i^{-j}B\cap\bar{X}) \subset \bar{X}
\end{equation*}
is an open subset of $\bar{X}$ whose orbit is an open neighborhood of $x$
in $\Gamma\bar{X}$.
\end{proof}
The analogue of Lemma~\ref{lem:ldisjoint} is:
\begin{lem}\label{lem:adisjoint}
The affine transformations $h_1,\dots,h_m$ freely generate $\Gamma$. The crooked polyhedron
$\bar{X}$ is a fundamental domain for $\Gamma$ acting on $\Gamma\bar{X}$.
\end{lem}
\begin{proof}
The proof is completely identical to that of
Lemma~\ref{lem:ldisjoint}. Replace the hyperbolic half-spaces $H_i^j$
by crooked half-spaces $\mathcal H_i^j$, the hyperbolic polygon $\Delta$
by the crooked polyhedron $X$ and the hyperbolic isometries $g_i$ by
Lorentzian affine isometries $h_i$.
\end{proof}
The most difficult part of the proof of the Main Theorem is that
$\Gamma\bar{X}={\mathbb E}$,
that is, the images $\gamma\bar{X}$ tile {\em all\/} of ${\mathbb E}$.
Due to the absence of an invariant {\em Riemannian\/} metric,
the completeness proof of Lemma~\ref{lem:lcomplete} fails.
\section{Completeness}
We
prove that the images of the crooked polyhedron $X$ tile
${\mathbb E}$. We suppose there exists a point $p$ not in
$\Gamma\bar{X}$ and derive a contradiction.
The first step is to describe a sequence of nested crooked half-spaces
$\mathfrak H_k$ containing $p$. This sequence corresponds to a sequence of
indices
\begin{equation*}
(i_0,j_0), (i_1,j_1),\dots,(i_k,j_k),\dots
\end{equation*}
such that $\mathfrak H_k
= \gamma_k \mathcal H_{i_k}^{j_k}$ where $\gamma_k = h_{i_0}^{j_0}\dots
h_{i_{k-1}}^{j_{k-1}}$.
Since crooked polyhedra are somewhat complicated and the elements of
$\Gamma$ exhibit different dynamical behavior in different directions,
bounding the separation of the crooked polyhedra requires some care.
To simplify the discussion we intersect this sequence with a fixed
definite plane $P$ so that the crooked half-spaces $\mathfrak H_k$ intersect
$P$ in a sequence of nested zigzag regions containing $p$.
We then approximate the zigzag regions by half-planes $\Pi_k\subset P$
(compare Figures~\ref{fig:cluster} and \ref{fig:seqzz})
and show that for infinitely many $k$, the distance between the successive
lines $L_k=\partial\Pi_k$ bounding $\Pi_k$ is bounded below, to reach
the contradiction. The Compression Lemma~\ref{lem:CompressionLemma}
gives a lower bound for $\rho(L_k,L_{k+1})$ whenever $\gamma_k$ is
$\epsilon$-hyperbolic. Using the special form of the sequence
$\gamma_k$ and
the Hyperbolicity Criterion (Lemma~\ref{lem:fxpts}), we find infinitely
many $\epsilon$-hyperbolic $\gamma_k$ for some $\epsilon>0$ and achieve
a contradiction.
\subsection{Construction of the nested sequence}\label{sec:sequence}
The complement ${\mathbb E} - \bar{X}$ consists of the $2m$ crooked half-spaces
$\mathcal H_i^j$, which are
bounded by crooked planes $\mathcal C_i^j$,
indexed by $I\times J$.
\begin{lem}\label{lem:nestedsequence}
Let $p\in{\mathbb E} - \Gamma\bar{X}$. There exists a sequence
$\{\mathfrak H_k\}$ of crooked half-spaces such that
\begin{itemize}
\item $\mathfrak H_k\supset\mathfrak H_{k+1}$ and $\mathfrak H_k\neq\mathfrak H_{k+1}$;
\item $p\in\mathfrak H_k$;
\item there exists a sequence $(i_0,j_0),(i_1,j_1),\dots,(i_n,j_n),\dots$
in $I\times J$ such that
$(i_k,j_k)\neq (i_{k+1},-j_{k+1})$ for all $k\ge 0$ and
\begin{equation*}
\mathfrak H_k = \gamma_k \mathcal H_{i_k}^{j_k}
\end{equation*}
where
\begin{equation*}
\gamma_k = h_{i_0}^{j_0} h_{i_1}^{j_1} \dots h_{i_{k-1}}^{j_{k-1}}.
\end{equation*}
\end{itemize}
\end{lem}
\begin{proof}
We first adjust $p$ so that the first crooked half-space $\mathfrak H_0$ satisfies
$\Phi(\mathfrak H_0)<\pi/2$.
By Lemma~\ref{lem:small},
\begin{equation}\label{eq:firstonesmall}
\Phi(A_{i_0}^{j_0})<\pi/2
\end{equation}
for some $(i_0,j_0)\in I\times J$.
Since $p\notin\bar{X}$, there exists $(i,j)$ such that $p\in\mathcal H_i^j$. If
$(i,j)\neq(i_0,j_0)$, then we replace $p$ by $\gamma p$, for some
$\gamma\in\Gamma$ such that $\gamma p\in\mathcal H_{i_0}^{j_0}$. Here is how
we do this. If
$(i,j)\neq(i_0,-j_0)$, then $\gamma=h_{i_0}^{j_0}$ moves $p$ into
$\mathcal H_{i_0}^{j_0}$.
Otherwise first move $p$ into
a crooked half-space other than $\mathcal H_{i_0}^{-j_0}$, then into
$\mathcal H_{i_0}^{j_0}$. For example, $h_{i_1}^{j_1}$ moves $p$ into
$\mathcal H_{i_1}^{j_1}$ and then $\gamma = h_{i_0}^{j_0}h_{i_1}^{j_1}$ moves $p$
into $\mathcal H_{i_0}^{j_0}$. Thus we may assume that
\begin{equation*}
p \in \mathfrak H_0 = \mathcal H_{i_0}^{j_0}
\end{equation*}
where $\Phi(\mathfrak H_0)<\pi/2$.
Suppose inductively that
\begin{equation*}
\mathfrak H_0 \supset \dots \supset \mathfrak H_k \ni p
\end{equation*}
is a nested sequence of crooked half-spaces containing $p$ satisfying
the conclusions of Lemma~\ref{lem:nestedsequence}. Then
$\mathfrak H_k = \gamma_k \mathcal H_{i_k}^{j_k}$ and $\gamma_k^{-1}(p)\in
\mathcal H_{i_k}^{j_k}$. Let $\gamma_{k+1}=\gamma_k h_{i_k}^{j_k}$.
Thus
\begin{equation*}
\gamma_{k+1}^{-1}(p)\in h_{i_k}^{-j_k}\mathcal H_{i_k}^{j_k} =
{\mathbb E} - \bar{\mathcal H}_{i_k}^{-j_k}.
\end{equation*}
Since $p\notin\Gamma\bar{X}$,
\begin{equation*}
\gamma_{k+1}^{-1}(p)\in
{\mathbb E} - \bar{X} - \mathcal H_{i_k}^{-j_k} =
\bigcup_{(i,j)\neq(i_k,-j_k)} \mathcal H_i^j.
\end{equation*}
Let $(i_{k+1},j_{k+1})$ index the component
of ${\mathbb E} - \bar{X} - \mathcal H_{i_k}^{-j_k}$ containing
$\gamma_{k+1}^{-1}(p)$. This gives the desired sequence.
\end{proof}
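To illustrate the initial adjustment with hypothetical indices: suppose $(i_0,j_0)=(1,+)$ while $p\in\mathcal H_1^-$, the excluded case. Since the closed crooked half-spaces are pairwise disjoint and $h_i({\mathbb E}-\bar{\mathcal H}_i^-)=\mathcal H_i^+$, the generator $h_2$ maps $p\in\mathcal H_1^-\subset{\mathbb E}-\bar{\mathcal H}_2^-$ into $\mathcal H_2^+$, and $h_1$ then maps $\mathcal H_2^+\subset{\mathbb E}-\bar{\mathcal H}_1^-$ into $\mathcal H_1^+$, so
\begin{equation*}
\gamma p = h_1 h_2\, p \in \mathcal H_1^+ = \mathcal H_{i_0}^{j_0}.
\end{equation*}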
\subsection{Uniform Euclidean width of crooked polyhedra}\label{sec:uniform}
If $S\subset\Gamma\bar{X}$, define the {\em star-neighborhood\/} of $S$
as the interior of the union of all tiles $\gamma\bar{X}$ intersecting
$\bar{S}$.
\begin{lem}\label{lem:delta0}
There exists $\delta_0>0$ such that
the $\delta_0$-neighborhood $B(X,\delta_0)$
lies in the star-neighborhood of $\bar{X}$.
In particular whenever $(i,j),(i',j')\in I\times J$ satisfy
$(i,j)\neq (i',-j')$,
\begin{equation}\label{eq:uniformdistance}
B({\mathbb E}-\mathcal H_i^j,\delta_0) \subset {\mathbb E}-h_i^j\bar{\mathcal H}_{i'}^{j'}.
\end{equation}
\end{lem}
\begin{proof}
The fundamental polyhedron $\bar{X}$ is bounded by crooked planes
$\mathcal C_i^j=\partial\mathcal H_i^j$.
The star-neighborhood of $\bar{X}$ equals
\begin{equation*}
\bar{X} \cup \bigcup_{(i,j)\in I\times J} h_i^j\bar{X}.
\end{equation*}
Its complement consists of the $2m(2m-1)$ crooked half-spaces
$h_i^j \mathcal H_{i'}^{j'}$ where $(i',j')\neq(i,-j)$.
Unlike in hyperbolic space, two disjoint closed convex polyhedral subsets
of Euclidean space lie a positive distance apart.
A crooked plane comprises four such planar regions: its two wings and
the two components of its stem.
Thus the distance between two disjoint crooked
planes is strictly positive. Choose $\delta_0>0$ to be smaller than
the distance between any of the $\mathcal C_i^j$ and $h_i^j\mathcal C_{i'}^{j'}$.
The second assertion follows since the $\delta_0$-neighborhood of a crooked
half-space $\mathfrak H$ equals $\mathfrak H\cup B(\partial\mathfrak H,\delta_0)$.
\end{proof}
\subsection{Approximating zigzag regions by half-planes}\label{sec:approx}
Now intersect with $P$.
We
approximate each zigzag region $\mathfrak H_k\cap P$ by a Euclidean
half-plane $\Pi_k\subset P$ containing $p$. These half-planes form
a nested sequence
\begin{equation*}
\Pi_1 \supset \Pi_2 \supset \dots \supset \Pi_k \supset \dots
\end{equation*}
with $\mathfrak H_k\cap P\subset \Pi_k$ and we take the lines $L_k=\partial\Pi_k$
to be parallel. Since $p\in\mathfrak H_k\cap P$ for all $k$,
and $p\in\partial(\Gamma\bar{X})$, the parallel lines $L_k$ approach
$p$ from one side.
We obtain a contradiction by bounding the Euclidean distance
$\rho(L_k,L_{k+1})$ from below, for infinitely many $k$.
Here is the detailed construction.
Let $\nu$ be the line in $P$ parallel to the intersection of $P$ with the
stem of $\partial\mathfrak H_0$. Then $\nu$ makes an angle of at most $\pi/4$ with
every ray contained in $\mathfrak H_0\cap P$.
Let $L_k\subset P$ be the line perpendicular to $\nu$ bounding a half-plane
$\Pi_k$ containing $\mathfrak H_k\cap P$ and intersecting the zigzag
$\zeta_k=\partial\mathfrak H_k\cap P$ at a vertex
of $\zeta_k$.
Choose $\delta_0>0$ as in Lemma~\ref{lem:delta0}.
\begin{lem}\label{lem:tubnbhd}
For any $\delta\le\delta_0$, the tubular neighborhood
\begin{equation}\label{eq:tubnbhddef}
T_k(\delta) = \gamma_k\left(B(\gamma_k^{-1} L_k,\delta)\right)
\end{equation}
of $L_k$ is disjoint from $L_{k+1}$.
\end{lem}
\begin{proof}
$L_k\subset P-\mathfrak H_k$ implies
\begin{equation}\label{eq:gammaL}
\gamma_k^{-1}L_k\subset {\mathbb E} -\mathcal H_{i_k}^{j_k}.
\end{equation}
Now
\begin{align*}
{\mathbb E} -\mathcal H_{i_k}^{j_k} & = {\mathbb E} - \gamma_k^{-1}\mathfrak H_k \\
& =
\gamma_k^{-1}({\mathbb E} - \mathfrak H_k) \\
& \subset
\gamma_k^{-1}({\mathbb E} - \mathfrak H_{k+1}) \\
& =
{\mathbb E} - \gamma_k^{-1}\gamma_{k+1} \mathcal H_{i_{k+1}}^{j_{k+1}} \\
& =
{\mathbb E} - h_{i_k}^{j_k} \mathcal H_{i_{k+1}}^{j_{k+1}}.
\end{align*}
Apply \eqref{eq:gammaL} and \eqref{eq:uniformdistance} to conclude
$B(\gamma_k^{-1}L_k,\delta) \subset {\mathbb E} - h_{i_k}^{j_k}
\bar{\mathcal H}_{i_{k+1}}^{j_{k+1}}$, so
\begin{align*}
T_k(\delta) & = \gamma_k B(\gamma_k^{-1}L_k,\delta) \\
& \subset
{\mathbb E} - h_{i_0}^{j_0}h_{i_1}^{j_1}\dots h_{i_{k-1}}^{j_{k-1}}
(h_{i_k}^{j_k} \bar{\mathcal H}_{i_{k+1}}^{j_{k+1}}) \\
& = {\mathbb E} - \bar{\mathfrak H}_{k+1},
\end{align*}
so $T_k(\delta)$ is disjoint from $\bar{\mathfrak H}_{k+1}$.
Intersecting with $P$, we conclude that $T_k(\delta)$ is disjoint from $L_{k+1}$.
\end{proof}
\subsection{Bounding the separation of half-planes}
Write $E^k(x)$ for the weak-unstable plane
$E^{wu}_x(g_k)$, where $x\in{\mathbb E}$ and $g_k$ is the linear part of $\gamma_k$
(see Definition~\ref{def:weakunstable}). Foliate $T_k (\delta)$ by
leaves $E^k(x)\cap T_k (\delta)$. We first bound the diameter of the
leaves of $T_k(\delta)$.
\begin{lem}\label{lem:wunu}
The angle between $\nu$ and any
$E^k(x)\cap P$ is bounded by $\pi/4$.
\end{lem}
\begin{proof}
By Lemma~\ref{lem:Brouwer}, the vector $\xp{\gamma_k}$ lies in
the attracting interval $A_{i_0}^{j_0}$. The null plane corresponding to
$\xp{\gamma_k}$ is the weak-unstable plane $E^k(x)$.
Since the stem of the crooked
plane $\mathcal C_{i_0}^{j_0}$ corresponds to $A_{i_0}^{j_0}$,
the corresponding null plane $E^k(x)$ intersects $\mathfrak H_0 = \mathcal H_{i_0}^{j_0}$ in
a half-plane. (Compare Figure~\ref{fig:ray1}.)
Hence the line $E^k(x)\cap P$ meets $\mathfrak H_0\cap P$ in a ray.
Since any ray in $\mathfrak H_0\cap P$ subtends an angle of at most
$\pi/4$ with $\nu$, Lemma~\ref{lem:wunu} follows.
\end{proof}
\begin{figure}
\caption{Weak-unstable ray in a crooked half-space}\label{fig:ray1}
\end{figure}
\begin{lem}\label{lem:separation}
Let $\epsilon>0$.
If $\gamma_k$ is $\epsilon$-hyperbolic and $\delta < \delta_0$,
then
\begin{equation*}
\rho(L_k,L_{k+1}) \ge \frac{\delta\epsilon}{4\sqrt{2}}.
\end{equation*}
\end{lem}
\begin{proof}
Apply the Compression Lemma~\ref{lem:CompressionLemma} with
$x\in\gamma_k^{-1}(L_k)$ and $h=\gamma_k$ to obtain
\begin{equation*}
B \Big(
L_k,\frac{\delta\epsilon}4
\Big)
\cap E^k(x) \subset
\gamma_kB(\gamma_k^{-1}L_k,\delta) = T_k(\delta).
\end{equation*}
Lemma~\ref{lem:tubnbhd} implies that the tubular neighborhood
$T_k (\delta)$ is disjoint from $L_{k+1}$.
Therefore
$B(L_k,\delta\epsilon/4) \cap E^k(x)$ is disjoint from $L_{k+1}$ and
\begin{equation}\label{eq:1}
\rho\left(x,\partial T_k (\delta)\right)
\le \rho(x,L_{k+1}) \le \rho(L_k,L_{k+1}).
\end{equation}
Lemma~\ref{lem:wunu} implies $\angle\left(\nu,E^k(x)\cap P\right) < \pi/4$ so
\begin{equation*}
\cos\angle\left(\nu,E^k(x)\cap P\right) > \frac{1}{\sqrt{2}}.
\end{equation*}
Thus (compare Figure~\ref{fig:tubnbhd})
\begin{equation}\label{eq:2}
\rho \left(x,\partial T_k (\delta)\right) = \rho\left(x,\partial B\Big(L_k,\frac{\delta\epsilon}4\Big)\cap E^k(x)\right)\cos\angle\left(\nu,E^k(x)\cap P\right) > \frac{\delta\epsilon}{4\sqrt{2}}.
\end{equation}
Lemma~\ref{lem:separation} follows from \eqref{eq:1} and \eqref{eq:2}.
\end{proof}
\subsection{The alternative to $\epsilon$-hyperbolicity}
\label{sec:alternative}
By Lemma~\ref{lem:separation}, it suffices to find $\epsilon>0$ such that
for infinitely many $k$, the element $\gamma_k$ is $\epsilon$-hyperbolic.
Lemma~\ref{lem:fxpts} gives a criterion for $\epsilon$-hyperbolicity
in terms of the expression of $\gamma_k$ as a reduced word.
Choose $\epsilon_0$ as in Lemma~\ref{lem:fxpts} and
the sequence $(i_0,j_0),\dots, (i_k,j_k),\dots$ as in
Lemma~\ref{lem:nestedsequence}.
Recall that $\gamma_k$ has the expression
\begin{equation*}
\gamma_k = h_{i_0}^{j_0}\dots h_{i_{k-1}}^{j_{k-1}}
\end{equation*}
where
$(i_{k+1},j_{k+1})\neq (i_k,-j_k)$ for all $k\ge 0$.
Lemma~\ref{lem:fxpts} implies:
\begin{lem}\label{lem:notehyp}
Either $g_k$ is $\epsilon_0$-hyperbolic for infinitely many $k$ or
there exists $k_2>0$ such that $(i_k,j_k)= (i_0,-j_0)$ for all $k>k_2$.
We may assume that $k_2$ is minimal, that is,
$(i_{k_2},j_{k_2})\neq (i_0,-j_0)$.
\end{lem}
Thus $g_k$ has the special form
\begin{equation*}
g_k = (g_{i_0}^{j_0})^{k_1}\, g'\, (g_{i_0}^{-j_0})^{k - k_2 - 1}
\end{equation*}
where $k_1>0$ is the smallest $k$ such that $(i_k,j_k)\neq(i_0,j_0)$ and
$g'$ is the subword
\begin{equation*}
g_{i_{k_1}}^{j_{k_1}} \dots g_{i_{k_2}}^{j_{k_2}}.
\end{equation*}
$g'$ is the maximal subword of $g_k$ which
neither begins
with $g_{i_0}^{j_0}$
nor ends with $g_{i_0}^{-j_0}$. In particular the conjugate of $g_k$
by $\psi= (g_{i_0}^{-j_0})^{k_1}$,
\begin{equation*}
(g_{i_0}^{-j_0})^{k_1}\, g_k\, (g_{i_0}^{j_0})^{k_1} = g'\, (g_{i_0}^{-j_0})^{k - k_1 - k_2 - 1},
\end{equation*}
is $\epsilon_0$-hyperbolic, by Lemma~\ref{lem:fxpts}.
\subsection{Changing the hyperbolicity}\label{sec:change}
The proof concludes by showing that there is a $K>1$, depending
on $g_{i_0}^{-j_0k_1}$, such that, for $\epsilon$ smaller than
$\epsilon_0/K$, infinitely many $\gamma_k$ are $\epsilon$-hyperbolic for
this new choice of $\epsilon$.
This contradiction concludes the proof of the theorem.
\begin{lem}\label{lem:conjhyp}
Let $\psi\in\operatorname{SO}^0(2,1)$.
Then there exists $K$ such that, for any $\epsilon>0$,
an element $g\in\operatorname{SO}^0(2,1)$
is $\epsilon/K$-hyperbolic whenever $\psi g\psi^{-1}$ is
$\epsilon$-hyperbolic.
\end{lem}
\begin{proof}
Let $s$ denote the distance $d(O,\psi(O))$ that $\psi$ moves the origin
$O\in\operatorname{H}^2_{\mathbb R}$ (see \eqref{eq:rhtorigin}) and let
\begin{equation*}
K = e^s \pi/2.
\end{equation*}
Since $\xpm{\psi g\psi^{-1}} = \psi\left(\xpm{g}\right)$,
it suffices to prove that if
${\mathsf{a}}_1,{\mathsf{a}}_2\in S^1$, then
\begin{equation}\label{eq:distortion}
K^{-1} \le \frac{\rho(\psi({\mathsf{a}}_1),\psi({\mathsf{a}}_2))}{\rho({\mathsf{a}}_1,{\mathsf{a}}_2)} \le K.
\end{equation}
Let $\operatorname{SO}(2)$ be the group of rotations and $\mathsf{A}$ the group of
{\em transvections\/}
\begin{equation*}
\tau_s =
\begin{bmatrix}
1 & 0 & 0 \\
0 & \cosh(s) & \sinh(s) \\
0 & \sinh(s) & \cosh(s)
\end{bmatrix}.
\end{equation*}
Since $\operatorname{SO}(2)\subset\operatorname{SO}^0(2,1)$ is a maximal compact subgroup and $\mathsf{A}\subset\operatorname{SO}^0(2,1)$
is an ${\mathbb R}$-split Cartan subgroup, the Cartan decomposition of $\operatorname{SO}^0(2,1)$
is
\begin{equation*}
\operatorname{SO}^0(2,1) = \operatorname{SO}(2) \cdot \mathsf{A} \cdot \operatorname{SO}(2)
\end{equation*}
and we write $\psi = R_\theta \tau_s R_{\theta'}$
where $s = d(O,\psi(O))$ as above. Since
where $s = d(O,\psi(O))$ as above. Since
\begin{equation*}
\rho(\psi({\mathsf{a}}_1),\psi({\mathsf{a}}_2)) =
\rho(
\tau_s(R_{\theta'}({\mathsf{a}}_1)),
\tau_s(R_{\theta'}({\mathsf{a}}_2)))
\end{equation*}
and
\begin{equation*}
\rho({\mathsf{a}}_1,{\mathsf{a}}_2) =
\rho(
R_{\theta'}({\mathsf{a}}_1),
R_{\theta'}({\mathsf{a}}_2)),
\end{equation*}
it suffices to prove \eqref{eq:distortion} for $\psi = \tau_s$.
In this case
\begin{equation*}
\frac{d\phi}{\psi^*d\phi} = \frac{1+\cos{\phi}}{2}e^{s} +
\frac{1-\cos{\phi}}{2}e^{-s}
\end{equation*}
so that
\begin{equation*}
e^{-s} \le \frac{d\phi}{\psi^*d\phi} \le e^{s}.
\end{equation*}
Let $A$ be the interval
on $S^1$ joining ${\mathsf{a}}_1$ to ${\mathsf{a}}_2$, such that $\Phi(A)\le\pi$. Its length and the length
of its image $\psi(A)$ are given by:
\begin{equation*}
\Phi(A) = \int_A \vert d\phi\vert, \qquad \Phi(\psi(A)) = \int_{\psi(A)} \vert
d\phi\vert
= \int_A \psi^*(\vert d\phi\vert).
\end{equation*}
Therefore
\begin{equation}\label{eq:Riemdistortion}
e^{-s} \le \frac{\Phi(A)}{\Phi(\psi(A))} \le e^{s}.
\end{equation}
Finally the distance $\rho({\mathsf{a}}_1,{\mathsf{a}}_2)$ on $S^1$ (the length of the chord
joining ${\mathsf{a}}_1$ to ${\mathsf{a}}_2$)
relates to the Riemannian distance by
$\rho({\mathsf{a}}_1,{\mathsf{a}}_2) = 2 \sin (\Phi(A)/2). $
Now (since $-\pi \le \phi \le \pi$)
\begin{equation*}
\frac2\pi \le \frac{2\sin(\phi/2)}\phi \le 1,
\end{equation*}
implies
\begin{equation}\label{eq:RiemChord1}
\frac2\pi \le\frac{\rho({\mathsf{a}}_1,{\mathsf{a}}_2)}{\Phi(A)}
\le 1
\end{equation}
and
\begin{equation}\label{eq:RiemChord2}
1 \le \frac{\Phi(\psi(A))}{\rho(\psi({\mathsf{a}}_1),\psi({\mathsf{a}}_2))} \le \frac{\pi}2.
\end{equation}
Combining inequalities
\eqref{eq:Riemdistortion} with \eqref{eq:RiemChord1} and
\eqref{eq:RiemChord2} implies \eqref{eq:distortion}.
\end{proof}
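As a quick numerical sanity check of the two elementary estimates used in the proof above (the bound on $d\phi/\psi^*d\phi$ for a transvection, and the chord--arc comparison $2/\pi \le 2\sin(\phi/2)/\phi \le 1$), the following Python sketch verifies both on a grid of sample points. It is purely illustrative and plays no role in the argument.

```python
import math

def chord_over_arc(phi):
    """Ratio of chord length 2*sin(phi/2) to arc length phi on the unit circle."""
    return 2.0 * math.sin(phi / 2.0) / phi

# Chord-arc bounds: 2/pi <= 2 sin(phi/2)/phi <= 1 for 0 < phi <= pi.
ratios = [chord_over_arc(k * math.pi / 1000.0) for k in range(1, 1001)]
assert all(2.0 / math.pi - 1e-12 <= r <= 1.0 for r in ratios)

# Derivative bound: e^{-s} <= (1+cos phi)/2 e^s + (1-cos phi)/2 e^{-s} <= e^s,
# since the middle expression is a convex combination of e^s and e^{-s}.
s = 1.7
vals = [(1 + math.cos(p)) / 2 * math.exp(s) + (1 - math.cos(p)) / 2 * math.exp(-s)
        for p in [k * math.pi / 100 for k in range(-100, 101)]]
assert all(math.exp(-s) - 1e-12 <= v <= math.exp(s) + 1e-12 for v in vals)
```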
\makeatletter \renewcommand{\@biblabel}[1]{
#1.}\makeatother
\end{document}
\begin{document}
\begin{abstract}
Jim Propp recently introduced a variant of chip-firing on a line where the chips are given distinct integer labels. Hopkins, McConville, and Propp showed that this process is confluent from some (but not all) initial configurations of chips. We recast their set-up in terms of root systems: labeled chip-firing can be seen as a \emph{root-firing} process which allows the moves $\lambda \to \lambda + \alpha$ for $\alpha\in \Phi^{+}$ whenever $\langle\lambda,\alpha^\vee\rangle = 0$, where~$\Phi^{+}$ is the set of positive roots of a root system of Type~A and $\lambda$ is a weight of this root system. We are thus motivated to study the exact same root-firing process for an arbitrary root system. Actually, this \emph{central root-firing} process is the subject of a sequel to this paper. In the present paper, we instead study the \emph{interval root-firing} processes determined by $\lambda \to \lambda + \alpha$ for $\alpha\in \Phi^{+}$ whenever~$\langle\lambda,\alpha^\vee\rangle \in [-k-1,k-1]$ or~$\langle\lambda,\alpha^\vee\rangle \in [-k,k-1]$, for any $k \geq 0$. We prove that these interval-firing processes are always confluent, from any initial weight. We also show that there is a natural way to consistently label the stable points of these interval-firing processes across all values of $k$ so that the number of weights with given stabilization is a polynomial in~$k$. We conjecture that these \emph{Ehrhart-like polynomials} have nonnegative integer coefficients.
\end{abstract}
\date{\today}
\keywords{Chip-firing; Abelian Sandpile Model; root systems; confluence; permutohedra; Ehrhart polynomials; zonotopes}
\subjclass[2010]{17B22; 52B20; 05C57}
\title{Root system chip-firing {I}}
\tableofcontents
\section{Introduction}
The Abelian Sandpile Model (ASM) is a discrete dynamical system that takes place on a graph. The states of this system are configurations of grains of sand on the vertices of the graph. A vertex with at least as many grains of sand as it has neighbors is said to be \emph{unstable}. Any unstable vertex may \emph{topple}, sending one grain of sand to each of its neighbors. The sequence of topplings may continue forever, or it may terminate at a \emph{stable} configuration, where every vertex is stable. The ASM was introduced (in the special case of the two-dimensional square lattice) by the physicists Bak, Tang, and Wiesenfeld~\cite{bak1987self} as a simple model of self-organized criticality; much of the general, graphical theory was subsequently developed by Dhar~\cite{dhar1990self,dhar1999abelian}. The ASM is by now studied in many parts of both physics and pure mathematics: for instance, following the seminal work of Baker and Norine~\cite{baker2007riemann}, it is known that this model is intimately related to tropical algebraic geometry (specifically, divisor theory for tropical curves~\cite{gathmann2008riemann,mikhalkin2008tropical}); meanwhile, the ASM is studied by probabilists because of its remarkable scaling-limit behavior~\cite{pegden2013convergence,levine2016apollonian}; and there are also interesting complexity-theoretic questions related to the ASM, such as, what is the complexity of determining whether a given configuration stabilizes~\cite{kiss2015chip,farrell2016coeulerian}. For more on sandpiles, consult the short survey article~\cite{levine2010sandpile} or the recent textbook~\cite{corry2017divisors}.
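The toppling rule just described is easy to simulate. The following Python sketch (our own illustration; the graph, chip counts, and function names are not from the literature) stabilizes a configuration on a $4$-cycle with a distinguished sink vertex, firing unstable vertices in random order, and checks that the resulting stable configuration does not depend on the order chosen.

```python
import random

def stabilize(graph, chips, sink, seed=0):
    """Fire unstable non-sink vertices (>= degree many chips) in a random
    order until the configuration is stable; return the stable configuration."""
    rng = random.Random(seed)
    chips = dict(chips)
    while True:
        unstable = [v for v in graph if v != sink and chips[v] >= len(graph[v])]
        if not unstable:
            return chips
        v = rng.choice(unstable)
        chips[v] -= len(graph[v])       # v fires...
        for w in graph[v]:
            chips[w] += 1               # ...sending one chip to each neighbor

# A 4-cycle with vertex 0 as sink; start with 5 chips on vertex 2.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
start = {0: 0, 1: 0, 2: 5, 3: 0}
results = {tuple(sorted(stabilize(cycle, start, 0, seed=s).items())) for s in range(20)}
assert len(results) == 1  # the stable configuration is independent of firing order
```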
Independently of its introduction in the statistical mechanics community, the same model was defined and studied from a combinatorial perspective by Bj\"{o}rner, Lov\'{a}sz, and Shor~\cite{bjorner1991chip} under the name of \emph{chip-firing}.\footnote{It is also worth mentioning that essentially the same model was studied even earlier, in the context of math pedagogy, by Engel~\cite{engel1975probabilistic,engel1976why} under the name of the \emph{probabilistic abacus}.} Instead of grains of sand, we imagine that chips are placed on the vertices of a graph; the operation of an unstable vertex sending one chip to each of its neighbors is now called \emph{firing} that vertex. One fundamental result of Bj\"{o}rner-Lov\'{a}sz-Shor is that, from any initial chip configuration, either the chip-firing process always goes on forever, or it terminates at a stable configuration that does not depend on the choice of which vertices were fired. This is a \emph{confluence} result: it says that (in the case of termination) the divergent paths in the chip-firing process must come together eventually. This confluence property is the essential property which serves as the basis of all further study of the chip-firing process; it explains the adjective ``Abelian'' in ``Abelian Sandpile Model.''
A closely related chip-firing process to the one studied by Bj\"{o}rner-Lov\'{a}sz-Shor is where a distinguished vertex is chosen to be the \emph{sink}. The sink will never become unstable and is allowed to accumulate any number of chips; hence, any initial chip configuration will eventually stabilize to a unique stable configuration. This model was studied for instance by Biggs~\cite{biggs1999chip} and by Dhar~\cite{dhar1990self,dhar1999abelian}. Chip-firing with a sink has been generalized to several other contexts beyond graphs. One of the most straightforward but also nicest such generalizations is what is called \emph{M-matrix chip-firing} (see e.g.~\cite{gabrielov1993asymmetric,guzman2015chip}, or~\cite[\S13]{postnikov2004trees}). Rather than a graph, we take as input an integer matrix $\mathbf{C} = (\mathbf{C}_{ij})\in \mathbb{Z}^{n\times n}$. The states are vectors $c=(c_1,c_2,\ldots,c_n) \in \mathbb{Z}^n$, and a firing move replaces a state $c$ with $c-\mathbf{C}^te_i$ whenever~$c_i \geq \mathbf{C}_{ii}$ for~$i=1,\ldots,n$. (Here~$e_1,\dots,e_n$ are the standard basis vectors of $\mathbb{Z}^n$; i.e.,~$c-\mathbf{C}^te_i$ is $c$ minus the $i$th row of $\mathbf{C}$.) This firing move is denoted $c \to c-\mathbf{C}^te_i$. Setting $\mathbf{C}$ to be the reduced Laplacian of a graph (including possibly a directed graph, as in~\cite{bjorner1992chip}) recovers chip-firing with a sink. But in fact $\mathbf{C}$ does not need to be a reduced Laplacian of any graph for confluence to hold in this setting: the condition required to guarantee confluence (and termination), as first established by Gabrielov~\cite{gabrielov1993asymmetric}, is that $\mathbf{C}$ be an M-matrix.
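The firing move $c \to c-\mathbf{C}^te_i$ can likewise be sketched directly. In the following illustrative Python fragment (not code from the cited works), $\mathbf{C}$ is taken to be the reduced Laplacian of a $4$-cycle with the sink deleted, so that M-matrix chip-firing specializes to graphical chip-firing with a sink.

```python
# M-matrix chip-firing sketch: states are vectors c in Z^n, and index i may
# fire (c -> c - C^t e_i, i.e. subtract the i-th row of C) when c_i >= C_ii.
# Here C is the reduced Laplacian of a 4-cycle with the sink vertex removed.
C = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]

def mmatrix_stabilize(c, C):
    c = list(c)
    n = len(c)
    while True:
        fireable = [i for i in range(n) if c[i] >= C[i][i]]
        if not fireable:
            return c
        i = fireable[0]  # by confluence, any choice yields the same result
        for j in range(n):
            c[j] -= C[i][j]

stable = mmatrix_stabilize([0, 5, 0], C)
assert all(stable[i] < C[i][i] for i in range(3))
```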
We will discuss M-matrix chip-firing, and its relation to our present research, in more detail later (see~\S\ref{sec:cartanmatrix}). But now let us explain the direct motivation for our work, namely, \emph{labeled} chip-firing.
Bj\"{o}rner, Lov\'{a}sz, and Shor were motivated to introduce the chip-firing process for arbitrary graphs by papers of Spencer~\cite{spencer1986balancing} and Anderson et al.~\cite{anderson1989disks} which studied the special case of chip-firing on a line. Jim Propp recently introduced a version of labeled chip-firing on a line that generalizes this original case. In ordinary chip-firing, the chips are all indistinguishable. But the states of the labeled chip-firing process are configurations of $N$ \emph{distinguishable} chips with integer labels $1,2,\ldots,N$ on the infinite path graph $\mathbb{Z}$. The firing moves consist of choosing two chips that occupy the same vertex and moving the chip with the lesser label one vertex to the right and the chip with the greater label one vertex to the left. Propp conjectured that if one starts with an even number of chips at the origin, this labeled chip-firing process is confluent and in particular the chips always end up in sorted order. Propp's conjecture was recently proved by Hopkins, McConville, and Propp~\cite{hopkins2017sorting}. Note crucially that confluence does not hold for labeled chip-firing if the initial number of chips at the origin is odd (e.g., three). Hence, compared to all the other models of chip-firing discussed above (for which confluence holds locally and follows from Newman's \emph{diamond lemma}~\cite{newman1942theories}), confluence is a much subtler property for labeled chip-firing.
The crucial observation that motivated our present research is that we can generalize Propp's labeled chip-firing to ``other types,'' as follows. For any configuration of~$N$ labeled chips on the line, if we define the vector $c\coloneqq (c_1,c_2,\ldots,c_N) \in \mathbb{Z}^{N}$ by setting $c_i$ to be the position of the chip with label $i$, then for $i < j$ we are allowed to fire the chips with labels $i$ and $j$ in this configuration as long as $c$ is orthogonal to $e_i-e_j$; and doing so replaces the vector $c$ by $c+(e_i-e_j)$. Note that the vectors $e_i-e_j$ for~$1 \leq i < j \leq N$ are exactly the positive roots $\Phi^+$ of the root system~$\Phi$ of Type~$A_{N-1}$.
So there is a natural candidate for a generalization of Propp's labeled chip-firing to arbitrary (crystallographic) root systems: let $\Phi$ be any root system living in some Euclidean vector space~$V$; then for a vector $v\in V$ and a positive root $\alpha \in \Phi^{+}$, we allow the firing move~$v \to v+\alpha$ whenever $v$ is orthogonal to $\alpha$. We call this process \emph{central root-firing} (or just \emph{central-firing} for short) because we allow a firing move whenever our point $v$ lies on a certain central hyperplane arrangement (namely, the Coxeter arrangement of $\Phi$).
Central-firing is actually the subject of our sequel paper~\cite{galashin2017rootfiring2}.
\begin{figure}
\caption{The $k=1$ symmetric interval-firing process for $\Phi=A_2$.}
\label{fig:syma2k1}
\end{figure}
In the present paper we instead study two ``affine'' deformations of central-firing. Let us explain what these deformations look like. First of all, it turns out to be best to interpret the condition ``whenever $v$ is orthogonal to~$\alpha$'' as ``whenever~$\langle v,\alpha^\vee\rangle=0$,'' where $\langle\cdot,\cdot\rangle$ is the standard inner product on~$V$ and $\alpha^\vee$ is the \emph{coroot} associated to $\alpha$. Also, rather than consider all vectors $v \in V$ to be the states of our system, it is better to restrict to a discrete setting where the states are \emph{weights} $\lambda \in P$, where~$P$ is the \emph{weight lattice} of~$\Phi$ (this is akin to only allowing vectors $c\in \mathbb{Z}^N$ above). The central-firing moves thus become
\[\lambda \to \lambda+\alpha \textrm{ whenever $\langle\lambda,\alpha^\vee\rangle = 0$ for $\lambda \in P$, $\alpha \in \Phi^{+}$}.\]
The deformations of central-firing we consider involve changing the values of~$\langle\lambda,\alpha^\vee\rangle$ at which we allow the firing move $\lambda \to \lambda +\alpha$ to be some wider interval. In fact, we study two very particular families of intervals. For $k\in\mathbb{Z}_{\geq 0}$, the \emph{symmetric interval root-firing process} is the binary relation $\to_{\mathrm{sym},k}$ on $P$ defined by
\[\lambda \to_{\mathrm{sym},k} \lambda + \alpha, \; \textrm{ for $\lambda \in P$ and $\alpha\in \Phi^+$ with $\langle\lambda,\alpha^\vee\rangle + 1 \in \{-k,-k+1,\ldots,k\}$}\]
and the \emph{truncated interval root-firing process} is the relation $\to_{\mathrm{tr},k}$ on $P$ defined by
\[\lambda \to_{\mathrm{tr},k} \lambda + \alpha, \; \textrm{ for $\lambda \in P$ and $\alpha\in \Phi^+$ with $\langle\lambda,\alpha^\vee\rangle + 1 \in \{-k+1,-k+2,\ldots,k\}$}.\]
We refer to these as \emph{interval-firing} processes for short.
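To make the symmetric process concrete in the smallest nontrivial case, the following Python sketch (an illustration, not part of any proof) exhaustively explores the $k=1$ symmetric interval-firing process for $\Phi = A_2$, realized in $\mathbb{Z}^3$ with positive roots $e_i - e_j$ for $i<j$, so that $\langle\lambda,\alpha^\vee\rangle = \lambda_i - \lambda_j$. It checks that every firing sequence from a given starting vector reaches the same stable point.

```python
from itertools import permutations

# Positive roots alpha = e_i - e_j of A_2, encoded by the index pairs (i, j).
POS_ROOTS = [(0, 1), (0, 2), (1, 2)]

def moves(lam, k):
    """All weights reachable from lam by one symmetric interval-firing move."""
    out = []
    for i, j in POS_ROOTS:
        if -k <= lam[i] - lam[j] + 1 <= k:  # <lam, alpha^vee> + 1 in [-k, k]
            mu = list(lam)
            mu[i] += 1
            mu[j] -= 1
            out.append(tuple(mu))
    return out

def terminals(start, k):
    """Explore all firing sequences from `start`; return the set of stable points."""
    seen, stack, stable = set(), [tuple(start)], set()
    while stack:
        lam = stack.pop()
        if lam in seen:
            continue
        seen.add(lam)
        nxt = moves(lam, k)
        if not nxt:
            stable.add(lam)
        stack.extend(nxt)
    return stable

# Confluence: each initial vector reaches a unique stable point.
for start in permutations((0, 0, 1)):
    assert len(terminals(start, 1)) == 1
assert len(terminals((0, 0, 0), 1)) == 1
```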
As mentioned, the central-firing process may or may not be confluent, depending on the initial weight we start at (e.g., our comment about three labeled chips above says that the central-firing process is not confluent from the origin for the root system of Type $A_2$). The first main result of the present paper is the following, which we prove in \S\ref{sec:sym_conf} and~\S\ref{sec:tr_conf}.
\begin{thm} \label{thm:confluence_intro}
For any $k\geq 0$, both the symmetric and truncated interval-firing processes are confluent from all initial weights.
\end{thm}
For example, Figure~\ref{fig:syma2k1} depicts the $k=1$ symmetric interval-firing process for~$\Phi=A_2$: the edges of this graph correspond to firing moves; that this process is confluent means that all paths starting from a given vertex must terminate at the same final vertex. For more such pictures, see Example~\ref{ex:rank2graphs}.
We call these processes \emph{interval-firing} processes because they allow firing a root from a weight when the inner product of that weight with the corresponding coroot is in some fixed interval. Alternately, we could say that the firing moves are allowed when our weight belongs to a certain affine hyperplane arrangement whose hyperplanes are orthogonal translates of the Coxeter arrangement hyperplanes; this is precisely the sense in which these processes are ``affine.'' The \emph{symmetric} process is so called because the symmetric closure of the relation~$\to_{\mathrm{sym},k}$ is invariant under the action of the Weyl group. The \emph{truncated} process is so called because the interval defining it is truncated by one element on the left compared to the symmetric process.
Note that these processes are not truly ``deformations'' of central-firing in the sense that we cannot recover central-firing by specializing~$k$. But observe that the~$k=0$ case of symmetric interval-firing has the firing moves
\[\lambda \to_{\mathrm{sym},0} \lambda+\alpha \textrm{ whenever $\langle\lambda,\alpha^\vee\rangle = -1$ for $\lambda \in P$, $\alpha \in \Phi^{+}$}\]
and the $k=1$ case of truncated interval-firing has the firing moves
\[\lambda \to_{\mathrm{tr},1} \lambda+\alpha \textrm{ whenever $\langle\lambda,\alpha^\vee\rangle \in \{-1,0\}$ for $\lambda \in P$, $\alpha \in \Phi^{+}$}.\]
So these two interval-firing processes are actually very ``close'' to central-firing, and suggest that central-firing (in particular, labeled chip-firing) is somehow right on the ``cusp'' of confluence. Hence, it is not surprising that some of the tools we develop in the present paper are applied to the study of central-firing in the sequel paper~\cite{galashin2017rootfiring2}. We also note that these interval-firing processes themselves have a direct chip-firing interpretation in Type A; see Remark~\ref{rem:chipfiringinterpretation} for more details.
Moreover, we contend that these interval-firing processes are interesting not just because of their connection to central-firing (and hence labeled chip-firing), but also because of their remarkable geometric structure. To get a sense of this geometric structure, the reader is encouraged to look at the depictions of these interval-firing processes for the irreducible rank~$2$ root systems in Example~\ref{ex:rank2graphs}. As we will show, the symmetric and truncated interval-firing processes are closely related to \emph{permutohedra}, and indeed we will mostly investigate these processes from the perspective of convex, polytopal geometry. For example, a key ingredient in our proof of confluence is an exact formula for \emph{traverse lengths of root strings} in permutohedra.
The most striking geometric objects that come out of our investigation of interval-firing are certain ``Ehrhart-like'' polynomials that count the number of weights with given stabilization as we vary our parameter $k$. To make sense of ``with given stabilization,'' first we show that there is a consistent way to label the stable points of the symmetric and truncated interval-firing processes across all values of $k$: these stable points are (a subset of) $\eta^k(\lambda)$ for $\lambda \in P$, where $\eta\colon P\to P$ is a certain piecewise-linear ``dilation'' map depicted in Figure~\ref{fig:eta}. Then we ask: for $\lambda \in P$, how many weights stabilize to $\eta^k(\lambda)$, as a function of~$k$? Let us denote by $L^{\mathrm{sym}}_{\lambda}(k)$ (resp., $L^{\mathrm{tr}}_{\lambda}(k)$) the number of weights $\mu\in P$ that $\to_{\mathrm{sym},k}$-stabilize (resp., $\to_{\mathrm{tr},k}$-stabilize) to $\eta^k(\lambda)$. The following is our second main result, which we prove in \S\ref{sec:sym_Ehrhart} and~\S\ref{sec:tr_Ehrhart}.
\begin{thm}\label{thm:Ehrhart_intro}
\leavevmode
\begin{itemize}
\item For any root system $\Phi$ and any $\lambda \in P$, $L^{\mathrm{sym}}_{\lambda}(k)$ is a polynomial in $k$ with integer coefficients.
\item For any simply laced root system $\Phi$ and any $\lambda \in P$, $L^{\mathrm{tr}}_{\lambda}(k)$ is a polynomial in $k$ with integer coefficients.
\end{itemize}
\end{thm}
We conjecture for all root systems~$\Phi$ that these functions are polynomials in~$k$ \emph{with nonnegative integer coefficients}. We call these polynomials \emph{Ehrhart-like} because they count the number of points in some discrete region as it is dilated, but we note that in general the set of weights with given stabilization is not the set of lattice points of any convex polytope, or indeed any convex set (although these Ehrhart-like polynomials do include the usual Ehrhart polynomials of regular permutohedra).
That these Ehrhart-like polynomials apparently have nonnegative integer coefficients suggests that our interval-firing processes may have a deeper connection to the representation theory or algebraic geometry associated to the root system~$\Phi$, although we have no precise idea of what such a connection would be. There is some similarity between our interval-firing processes and the space of \emph{quasi-invariants} of the Weyl group (see~\cite{etingof2003lectures}). We thank Pavel Etingof for pointing this out to us.
As for possible connections to algebraic geometry: one can see in the above definitions of the interval-firing processes that rather than record the intervals corresponding to the values of~$\langle\lambda,\alpha^\vee\rangle$ at which we allow firing, we recorded the intervals corresponding to the values of~$\langle\lambda,\alpha^\vee\rangle + 1 =\langle\lambda+\frac{\alpha}{2},\alpha^\vee\rangle$ at which we allow firing. This turns out to be more natural in many respects. And with this convention, the intervals defining the symmetric and truncated interval-firing processes are exactly the same as the intervals defining the \emph{(extended) $\Phi^\vee$-Catalan} and \emph{(extended) $\Phi^\vee$-Shi} hyperplane arrangements~\cite{postnikov2000deformations, athanasiadis2000deformations}. The Catalan and Shi arrangements are known to have many remarkable combinatorial and algebraic properties, such as \emph{freeness}~\cite{edelman1996free, terao2002multiderivations, yoshinaga2004characterization}. Although we have no precise statement to this effect, empirically it seems that many of the remarkable properties of these families of hyperplane arrangements are reflected in the interval-firing processes. See Remark~\ref{rem:hyperplanes} for more discussion of connections with hyperplane arrangements.
Finally, we remark that a kind of ``chip-firing for root systems'' was recently studied by Benkart, Klivans, and Reiner~\cite{benkart2016chip}. However, what Benkart-Klivans-Reiner studied was in fact M-matrix chip-firing with respect to the Cartan matrix $\mathbf{C}$ of the root system~$\Phi$. As we discuss later (see~\S\ref{sec:cartanmatrix}), this Cartan matrix chip-firing is analogous to root-firing \emph{where we only allow firing of the simple roots of $\Phi$}. The root-firing processes we study in this paper allow firing of all the positive roots of~$\Phi$. Hence, our set-up is quite different than the set-up of Benkart-Klivans-Reiner: for instance, the simple roots are always linearly independent, but there are many linear dependencies among the positive roots. Establishing confluence for Cartan matrix chip-firing is easy since the fact that the simple roots are pairwise non-acute implies confluence holds locally; whereas two positive roots may form an acute angle and hence confluence for interval-firing processes is a much more delicate question. Nevertheless, we do explain in Remark~\ref{rem:bkr} how Cartan matrix chip-firing can be obtained from our interval-firing processes by taking a $k\to\infty$ limit.
Now let us outline the rest of the paper. In Part~\ref{part:symtrconfluence} we prove that the symmetric and truncated interval-firing processes are confluent. To do this, we first identify some Weyl group symmetries for both of the interval-firing processes (Theorem~\ref{thm:symmetry}); in particular, we demonstrate that symmetric interval-firing is invariant under the action of the whole Weyl group (explaining its name). We then introduce the map $\eta$ and explain how it labels the stable points for symmetric interval-firing~(Lemma~\ref{lem:symsinks}). We proceed to prove some polytopal results: we establish the aforementioned formula for traverse lengths of permutohedra (Theorem~\ref{thm:traverseformula}); this traverse length formula leads directly to a ``permutohedron non-escaping lemma'' (Lemma~\ref{lem:permtrap}) which says that interval-firing processes get ``trapped'' inside of certain permutohedra. The confluence of symmetric interval-firing (Corollary~\ref{cor:symconfluence}) follows easily from the permutohedron non-escaping lemma. Finally, we establish the confluence of truncated interval-firing (Corollary~\ref{cor:trconfluence}) by first explaining how the map $\eta$ also labels the stable points in the truncated case (Lemma~\ref{lem:trsinks}), and then combining the permutohedron non-escaping lemma with a careful analysis of truncated interval-firing in rank~$2$.
In Part~\ref{part:ehrhart} we study the Ehrhart-like polynomials. We establish the existence of the symmetric Ehrhart-like polynomials (Theorem~\ref{thm:symehrhart}) via some basic Ehrhart theory for zonotopes (see, e.g., Theorem~\ref{thm:polypluszoneehrhart}). Then, to establish the existence of the truncated Ehrhart-like polynomials in the simply laced case (Theorem~\ref{thm:trehrhart}), we study in detail the relationship between symmetric and truncated interval-firing and in particular how the connected components of the graphs of these processes ``decompose'' into smaller connected components in a way consistent with the labeling map $\eta$ (see~\S\ref{sec:decompose}). In the final section, \S\ref{sec:iterate}, we explain how these Ehrhart-like polynomials also count the sizes of fibers of iterates of a certain operator on the weight lattice, another surprising property of these polynomials that would be worth investigating further.
\noindent {\bf Acknowledgements:} We thank Jim Propp, both for several useful conversations and because his introduction of labeled chip-firing and his infectious enthusiasm for exploring its properties launched this project. We also thank the anonymous referee for paying close attention to our article and providing several useful comments. The second author was supported by NSF grant~\#1122374.
\part{Confluence of symmetric and truncated interval-firing} \label{part:symtrconfluence}
\section{Background on root systems} \label{sec:rootsystemdefs}
Here we review the basic facts about root systems we will need in the study of certain vector-firing processes we define in terms of a fixed root system $\Phi$. For details, consult~\cite{humphreys1972lie},~\cite{bourbaki2002lie}, or~\cite{bjorner2005coxeter}.
Fix $V$, an $n$-dimensional real vector space with inner product $\langle\cdot,\cdot\rangle$. For a nonzero vector $\alpha \in V\setminus \{0\}$ we define its \emph{covector} to be $\alpha^\vee \coloneqq \frac{2\alpha}{\langle\alpha,\alpha\rangle}$. Then we define the \emph{reflection} across the hyperplane orthogonal to $\alpha$ to be the linear map $s_{\alpha}\colon V \to V$ given by $s_{\alpha}(v) \coloneqq v - \langle v,\alpha^\vee\rangle\alpha$.
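As a small numerical illustration of the reflection formula (the specific vectors below are our own choices), one can check that $s_\alpha$ is an involution and sends $\alpha$ to $-\alpha$:

```python
# Sketch of s_alpha(v) = v - <v, alpha^vee> alpha, with
# alpha^vee = 2 alpha / <alpha, alpha>, in coordinates on R^n.
def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(v, alpha):
    coeff = 2 * inner(v, alpha) / inner(alpha, alpha)  # <v, alpha^vee>
    return tuple(x - coeff * a for x, a in zip(v, alpha))

alpha = (1, -1, 0)  # a root of A_2 realized in Z^3
v = (3, 1, 2)
assert reflect(reflect(v, alpha), alpha) == v      # s_alpha is an involution
assert reflect(alpha, alpha) == (-1.0, 1.0, 0.0)   # s_alpha(alpha) = -alpha
```

For this $\alpha$ the reflection simply swaps the first two coordinates, as one sees from $s_\alpha(v) = (1.0, 3.0, 2.0)$ above.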
\begin{definition}
A \emph{root system} is a finite collection~$\Phi \subseteq V \setminus \{0\}$ of nonzero vectors such that:
\begin{enumerate}
\item $\mathrm{Span}_{\mathbb{R}}(\Phi) = V$;
\item $s_{\alpha}(\Phi) = \Phi$ for all $\alpha \in \Phi$;
\item $\mathrm{Span}_{\mathbb{R}}(\{\alpha\}) \cap \Phi = \{\pm \alpha\}$ for all $\alpha \in \Phi$;
\item $\langle\beta,\alpha^\vee\rangle \in \mathbb{Z}$ for all $\alpha,\beta \in \Phi$.
\end{enumerate}
We remark that sometimes the third condition is omitted and those root systems satisfying the third condition are called \emph{reduced}. On the other hand, sometimes the fourth condition is omitted and those root systems satisfying the fourth condition are called \emph{crystallographic}. We will assume that all root systems under consideration are reduced and crystallographic and from now on will drop these adjectives.
\end{definition}
From now on in the paper we will fix a root system $\Phi$ in $V$. The vectors $\alpha\in \Phi$ are called \emph{roots}. The dimension of $V$ (which is $n$) is called the \emph{rank} of the root system. The vectors $\alpha^\vee$ for $\alpha\in \Phi$ are called \emph{coroots} and the set of coroots forms another root system, denoted $\Phi^\vee$, in $V$.
We use $W$ to denote the \emph{Weyl group} of $\Phi$, which is the subgroup of $GL(V)$ generated by the reflections $s_{\alpha}$ for $\alpha \in \Phi$. By the first and second conditions of the definition of a root system, $W$ is isomorphic as an abstract group to a subgroup of the symmetric group on $\Phi$, and hence is finite. Observe that the Weyl group of $\Phi^\vee$ is equal to the Weyl group of $\Phi$. Also note that all transformations in $W$ are orthogonal.
It is well-known that we can choose a set $\Delta \subseteq \Phi$ of \emph{simple roots} which form a basis of~$V$, and which divide the root system $\Phi = \Phi^{+} \cup \Phi^{-}$ into \emph{positive} roots $\Phi^{+}$ and \emph{negative} roots $\Phi^{-} \coloneqq -\Phi^{+}$ so that any positive root $\alpha \in \Phi^{+}$ is a nonnegative integer combination of simple roots. The choice of $\Delta$ is equivalent to the choice of $\Phi^{+}$; one way to choose $\Phi^{+}$ is to choose a generic linear form and let $\Phi^{+}$ be the set of roots which are positive according to this form. There are many choices for $\Delta$ but they are all conjugate under $W$. From now on we will fix a set of simple roots $\Delta$, and thus also a set of positive roots $\Phi^{+}$. It is known that any $\alpha\in \Phi$ appears in some choice of simple roots (in fact, every $\alpha \in \Phi$ is $W$-conjugate to a simple root appearing with nonzero coefficient in its expansion in terms of simple roots) and hence $W(\Delta) = \Phi$. We use~$\Delta =\{\alpha_1,\ldots,\alpha_n\}$ to denote the simple roots with an arbitrary but fixed order. The coroots $\alpha^\vee_i$ for $i=1,\ldots,n$ are called the \emph{simple coroots} and they of course form a set of simple roots for $\Phi^\vee$. We will always make this choice of simple roots for the dual root system, unless stated otherwise. With this choice of simple roots for the dual root system, we have $(\Phi^\vee)^{+}= (\Phi^{+})^\vee$.
We use $\mathbf{C} \coloneqq (\langle\alpha_i,\alpha_j^\vee\rangle) \in \mathbb{Z}^{n\times n}$ to denote the \emph{Cartan matrix} of $\Phi$. Clearly one can recover the root system $\Phi$ from the Cartan matrix $\mathbf{C}$, which is encoded by its Dynkin diagram. The \emph{Dynkin diagram} of $\Phi$ is the graph with vertex set $[n]\coloneqq \{1,2,\dots,n\}$ obtained as follows: first for all $1 \leq i < j \leq n$ we draw $\langle\alpha_i,\alpha_j^\vee\rangle\langle\alpha_j,\alpha_i^\vee\rangle$ edges between~$i$ and $j$; then, if $\langle\alpha_i,\alpha_j^\vee\rangle\langle\alpha_j,\alpha_i^\vee\rangle\notin \{0,1\}$ for some $i$ and $j$, we draw an arrow on top of the edges between them, from $i$ to $j$ if $|\alpha_i| > |\alpha_j|$. If there are no arrows in the Dynkin diagram of $\Phi$ then we say that $\Phi$ is \emph{simply laced}.
There are two important lattices related to $\Phi$, the \emph{root lattice} $Q \coloneqq \mathrm{Span}_{\mathbb{Z}}(\Phi)$ and the \emph{weight lattice} $P \coloneqq \{v\in V\colon \langle v,\alpha^\vee\rangle \in \mathbb{Z} \textrm{ for all $\alpha \in \Phi$}\}$. The elements of $P$ are called the \emph{weights} of $\Phi$. By the assumption that~$\Phi$ is crystallographic, we have $Q \subseteq P$. We use $\Omega \coloneqq \{\omega_1,\ldots,\omega_n\}$ to denote the dual basis to the basis of simple coroots $\{\alpha_1^\vee,\ldots,\alpha_n^\vee\}$ (in other words, the $\omega_i$ are defined by $\langle\omega_i,\alpha^\vee_j\rangle = \delta_{i,j}$); the elements of $\Omega$ are called \emph{fundamental weights}. Observe that $Q = \mathrm{Span}_{\mathbb{Z}}(\Delta)$ and $P = \mathrm{Span}_{\mathbb{Z}}(\Omega)$.
We use $P^{\mathbb{R}}_{\geq 0} \coloneqq \mathrm{Span}_{\mathbb{R}_{\geq 0}}(\Omega)$, $P_{\geq 0} \coloneqq \mathrm{Span}_{\mathbb{Z}_{\geq 0}}(\Omega)$ and similarly $Q^{\mathbb{R}}_{\geq 0} \coloneqq \mathrm{Span}_{\mathbb{R}_{\geq 0}}(\Delta)$, $Q_{\geq 0} \coloneqq \mathrm{Span}_{\mathbb{Z}_{\geq 0}}(\Delta)$. Note that $P^{\mathbb{R}}_{\geq 0}$ and $Q^{\mathbb{R}}_{\geq 0}$ are dual cones; moreover, because the simple roots are pairwise non-acute, we have $P^{\mathbb{R}}_{\geq 0} \subseteq Q^{\mathbb{R}}_{\geq 0}$. The elements of~$P_{\geq 0}$ are called \emph{dominant weights}. For every $\lambda \in P$ there exists a unique element in $W(\lambda)\cap P_{\geq 0}$ and we use $\lambda_{\mathrm{dom}}$ to denote this element. A dominant weight of great importance is the \emph{Weyl vector} $\rho \coloneqq \sum_{i=1}^{n}\omega_i$. It is well-known (and easy to check) that $\rho = \frac{1}{2}\sum_{\alpha\in \Phi^{+}}\alpha$.
The connected components of $\{v \in V\colon \langle v,\alpha^\vee\rangle\neq 0 \textrm{ for all $\alpha \in \Phi$}\}$ are called the \emph{chambers} of $\Phi$. The \emph{fundamental chamber} is $C_0 \coloneqq \{v \in V\colon \langle v,\alpha^\vee\rangle> 0 \textrm{ for all $\alpha \in \Phi^{+}$}\}$. The Weyl group acts freely and transitively on the chambers and hence every chamber is equal to $wC_0$ for some unique $w\in W$. Observe that $P^{\mathbb{R}}_{\geq 0}$ is the closure of $C_0$.
If $U \subseteq V$ is any subspace spanned by roots, then $\Phi \cap U$ is a root system in $U$, which we call a \emph{sub-root system} of $\Phi$. The root lattice of $\Phi\cap U$ is of course $\mathrm{Span}_{\mathbb{Z}}(\Phi\cap U)$ while the weight lattice is the orthogonal (with respect to~$\langle \cdot,\cdot \rangle$) projection of $P$ onto~$U$. Moreover, $\Phi^{+}\cap U$ is a set of positive roots for $\Phi\cap U$, although $\Delta\cap U$ may not be a set of simple roots for $\Phi\cap U$. We will always consider the positive roots of $\Phi\cap U$ to be $\Phi^{+}\cap U$ unless explicitly stated otherwise. The case of \emph{parabolic sub-root systems} (where in fact $\Delta \cap U$ is a set of simple roots for $\Phi\cap U$) is of special significance: for $I\subseteq [n]$ we set $\Phi_I \coloneqq \Phi\cap \mathrm{Span}_{\mathbb{R}}(\{\alpha_i\colon i \in I\})$.
\begin{figure}
\caption{Dynkin diagrams of all irreducible root systems. The nodes corresponding to minuscule weights are filled in.}
\label{fig:dynkinclassification}
\end{figure}
If there exists an orthogonal decomposition $V = V_1 \oplus V_2$ with $0 \subsetneq V_1,V_2 \subsetneq V$ such that $\Phi = \Phi_1 \cup \Phi_2$ with $\Phi_i \subseteq V_i$ for $i=1,2$, then we write $\Phi = \Phi_1\oplus\Phi_2$ and we say the root system $\Phi$ is \emph{reducible}. Otherwise we say that it is \emph{irreducible}. (Let us also declare by fiat that the empty set, although it is a root system, is not irreducible.) In other words, a root system is irreducible if and only if its Dynkin diagram is connected. The famous \emph{Cartan--Killing classification} classifies all irreducible root systems up to isomorphism, where an \emph{isomorphism} of root systems is a bijection between roots induced from an orthogonal linear map, possibly composed with a global rescaling of the inner product. Figure~\ref{fig:dynkinclassification} shows the Dynkin diagrams of all the irreducible root systems: these are the classical infinite series $A_n$ for $n\geq 1$, $B_n$ for $n \geq 2$, $C_n$ for $n \geq 3$, $D_n$ for $n \geq 4$, together with the exceptional root systems $G_2$, $F_4$, $E_6$, $E_7$, and $E_8$. Our numbering of the simple roots is consistent with Bourbaki~\cite{bourbaki2002lie}. In every case the subscript in the name of the root system denotes the number of nodes of the Dynkin diagram, which is also the number of simple roots, that is, the rank of~$\Phi$. These labels $A_n$, $B_n$, etc.\ give the \emph{type} of the root system; we may also talk about, e.g., ``Type A'' root systems.
All constructions that depend on the root system $\Phi$ decompose in a simple way as a direct product of irreducible factors. Hence without loss of generality we will from now on {\bf assume that $\Phi$ is irreducible.}
In an irreducible root system, the lengths $|\alpha|$ of the roots $\alpha \in \Phi$ take at most two distinct values. Those roots whose lengths achieve the maximum value are called \emph{long}, and those which do not are called \emph{short}. The Weyl group $W$ acts transitively on the long roots, and it also acts transitively on the short roots.
There is a natural partial order on $P$ called the \emph{root order} whereby $\mu\leq \lambda$ for $\mu,\lambda \in P$ if $\lambda-\mu \in Q_{\geq 0}$. When restricted to $\Phi^{+}$, this partial order is graded by \emph{height}; the height of $\alpha = \sum_{i=1}^{n}c_i\alpha_i \in \Phi$ is $\sum_{i=1}^{n}c_i$. Because we have assumed that~$\Phi$ is irreducible, there is a unique maximal element of $\Phi^{+}$ according to root order, denoted $\theta$ and called the \emph{highest root}. The highest root is always long. We use $\widehat{\theta}$ to denote the unique (positive) root such that $\widehat{\theta}^\vee$ is the highest root of the dual root system $\Phi^\vee$ (with respect to the choice of $\{\alpha_1^\vee,\ldots,\alpha_n^\vee\}$ as simple roots). If $\Phi$ is simply laced then $\theta=\widehat{\theta}$ and $\theta$ is the unique root which is a dominant weight; if~$\Phi$ is not simply laced then $\theta$ and $\widehat{\theta}$ are the two roots which are dominant weights. In the non-simply laced case we call $\widehat{\theta}$ the \emph{highest short root}: it is the maximal short root with respect to the root ordering.
The root lattice $Q$ is a full rank sublattice of $P$; hence the quotient $P/Q$ is some finite abelian group. Note that $P/Q \simeq \mathrm{coker}(\mathbf{C}^t)$ where we view the transposed matrix as a map $\mathbf{C}^t\colon \mathbb{Z}^n\to \mathbb{Z}^n$. The order of this group is called the \emph{index of connection} of~$\Phi$ and is denoted $f\coloneqq |P/Q|$. There is a nice choice of coset representatives of $P/Q$, which we now describe. A dominant, nonzero weight $\lambda \in P_{\geq 0}\setminus \{0\}$ is called \emph{minuscule} if $\langle\lambda, \alpha^\vee\rangle\in \{-1,0,1\}$ for all $\alpha \in \Phi$. Let us use $\Omega_m$ to denote the set of minuscule weights. Note that $\Omega_m\subseteq \Omega$, i.e., a minuscule weight must be a fundamental weight. In Figure~\ref{fig:dynkinclassification}, the vertices corresponding to minuscule weights are filled in. In fact, there are $f-1$ minuscule weights and the minuscule weights together with zero form a collection of coset representatives of $P/Q$. We use $\Omega^{0}_{m} \coloneqq \Omega_m \cup\{0\}$ to denote the set of these representatives.
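Since $P/Q \simeq \mathrm{coker}(\mathbf{C}^t)$, the index of connection is $f = \det \mathbf{C}$. A minimal numerical check in Python (matrix and helper names are ours), using exact rational arithmetic:

```python
# A sketch: f = |P/Q| equals det(C) of the Cartan matrix, checked exactly
# for A_3 (where f = n + 1 = 4) and G_2 (where P = Q, so f = 1).

from fractions import Fraction

def det(M):
    """Exact determinant by fraction-valued Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        prod *= M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return sign * prod

A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
G2 = [[2, -1], [-3, 2]]
print(det(A3), det(G2))  # 4 1
```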
There is another characterization of minuscule weights that we will find useful. Namely, for a dominant weight $\lambda \in P_{\geq 0}$ we have that $\lambda \in \Omega^{0}_m$ if and only if $\lambda$ is the minimal element according to root order in $(Q+\lambda) \cap P_{\geq 0}$.
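In a small case one can verify minusculeness directly from the definition. A sketch in Python for $A_2$ (simply laced, so $\alpha^\vee$ has the same coefficients on simple coroots as $\alpha$ has on simple roots; coordinates and names are our own):

```python
# A sketch for A_2: checking the defining condition of a minuscule weight,
# <lambda, alpha^vee> in {-1, 0, 1} for all roots alpha.

pos_roots_A2 = [(1, 0), (0, 1), (1, 1)]  # coefficients on alpha_1, alpha_2

def pairing(weight, pos_root):
    """<lambda, alpha^vee>: lambda in the omega-basis, alpha in the alpha-basis."""
    return sum(l * c for l, c in zip(weight, pos_root))

def minuscule(weight):
    # checking positive roots suffices: the pairing is odd under alpha -> -alpha
    return all(abs(pairing(weight, r)) <= 1 for r in pos_roots_A2)

# omega_1 and omega_2 are minuscule; rho = omega_1 + omega_2 is not
print([minuscule(w) for w in [(1, 0), (0, 1), (1, 1)]])  # [True, True, False]
```

This is consistent with $f = 3$ for $A_2$: there are $f - 1 = 2$ minuscule weights.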
This last characterization of minuscule weight can also be described in terms of certain polytopes called \emph{($W$)-permutohedra}. Permutohedra will play a key role for us in our understanding of interval-firing processes, so let us review these now. For~$v \in V$, we define the \emph{permutohedron} associated to~$v$ to be~$\Pi(v) \coloneqq \mathrm{ConvexHull}\, W(v)$, a convex polytope in~$V$. And for a weight~$\lambda \in P$, we define $\Pi^Q(\lambda) \coloneqq \Pi(\lambda)\cap(Q+\lambda)$, which we call the \emph{discrete permutohedron} associated to~$\lambda$.
The following simple proposition describes the containment of permutohedra (see also~\cite[1.2]{stembridge1998partial}):
\begin{prop} \label{prop:perm_containment}
For $u,v\in P_{\geq 0}^{\mathbb{R}}$ we have $\Pi(u)\subseteq \Pi(v)$ if and only if $v-u\in Q_{\geq 0}^{\mathbb{R}}$. Hence for $\mu,\lambda \in P_{\geq 0}$ we have $\Pi^Q(\mu)\subseteq \Pi^Q(\lambda)$ if and only if $\mu \leq \lambda$ (in root order).
\end{prop}
\begin{proof}
First suppose that $u$ and $v$ are strictly inside the fundamental chamber~$C_0$, i.e., that we have $\langle u,\alpha_i^\vee\rangle > 0$ and $\langle v,\alpha_i^\vee\rangle > 0$ for all $i\in[n]$. By the \emph{inner cone} of a polytope at a vertex, we mean the affine convex cone spanned by the edges of the polytope incident to that vertex in the direction ``outward'' from that vertex. Note that a point belongs to a polytope if and only if it belongs to the inner cone of that polytope at every vertex. Since the walls of the fundamental chamber are orthogonal to the simple roots, it is easy to see that if $u$ and $v$ are strictly inside the fundamental chamber then the inner cone of $\Pi(u)$ at $u$ is spanned by the negatives of the simple roots, and ditto for the inner cone of $\Pi(v)$ and $v$. So if we do not have $v-u\in Q_{\geq 0}^{\mathbb{R}}$, then clearly $u$ does not belong to $\Pi(v)$. Hence suppose that $v-u\in Q_{\geq 0}^{\mathbb{R}}$. Every vertex of $\Pi(u)$ belongs to the inner cone of $\Pi(u)$ at $u$; i.e., $u-u' \in Q_{\geq 0}^{\mathbb{R}}$ for all $u'\in W(u)$. Thus for all $u'\in W(u)$ we have $v-u' \in Q_{\geq 0}^{\mathbb{R}}$; i.e., every point in $\Pi(u)$ is in the inner cone of $\Pi(v)$ at $v$. But then by the $W$-invariance of permutohedra, we conclude that every point in $\Pi(u)$ is in the inner cone of $\Pi(v)$ at every vertex of $\Pi(v)$, and hence that~$\Pi(u)\subseteq \Pi(v)$, as claimed.
For arbitrary $u,v\in P_{\geq 0}^{\mathbb{R}}$, note $\Pi(u) = \bigcap_{\varepsilon > 0} \Pi(u+\varepsilon \rho)$ and $\Pi(v) = \bigcap_{\varepsilon > 0} \Pi(v+\varepsilon \rho)$, and $u+\varepsilon \rho$ and $v+\varepsilon \rho$ will be strictly inside the fundamental chamber for all $\varepsilon > 0$. Thus the result for arbitrary $u,v\in P_{\geq 0}^{\mathbb{R}}$ follows from the preceding paragraph.
\end{proof}
So in light of Proposition~\ref{prop:perm_containment}, we see that minuscule weights can also be characterized as follows: for~$\lambda \in P_{\geq 0}$ we have~$\lambda \in \Omega^{0}_m$ if and only if $\Pi^Q(\lambda)=W(\lambda)$. For references for all these various characterizations of and facts about minuscule weights, see~\cite[Proposition 3.10]{benkart2016chip} (who in particular credit Stembridge~\cite{stembridge1998partial} for some of these facts).
\section{Background on binary relations and confluence} \label{sec:relations}
Interval-firing will formally be defined to be a binary relation on the weight lattice of~$\Phi$. Before giving the precise definition, we review some general notation and results concerning binary relations. Let $X$ be a set and ${\rightarrow}$ a binary relation on $X$. We use $\Gamma_{\rightarrow}$ to denote the directed graph (from now on, ``digraph'') with vertex set $X$ and with a directed edge~$(x,y)$ whenever $x \rightarrow y$. Clearly~$\Gamma_{\rightarrow}$ contains exactly the same information as~${\rightarrow}$ and we will often implicitly identify binary relations and digraphs (specifically, digraphs without multiple edges in the same direction) in this way. We use ${\rightarrow}^{*}$ to denote the reflexive, transitive closure of~${\rightarrow}$: that is, we write~$x \rightarrow^{*} y$ to mean that~$x = x_0 \rightarrow x_1 \rightarrow \cdots \rightarrow x_k = y$ for some~$k \in \mathbb{Z}_{\geq 0}$. In other words, $x \rightarrow^{*} y$ means there is a path from $x$ to $y$ in~$\Gamma_{\rightarrow}$. We use~${\leftrightarrow}$ to denote the symmetric closure of~${\rightarrow}$: $x \leftrightarrow y$ means that~$x \rightarrow y$ or~$y \rightarrow x$. For any digraph $\Gamma$, we use $\Gamma^{\mathrm{un}}$ to denote the underlying undirected graph of $\Gamma$; in fact, we view~$\Gamma^{\mathrm{un}}$ as a digraph: it has edges $(x,y)$ and $(y,x)$ whenever $(x,y)$ is an edge of~$\Gamma$. Hence $\Gamma_{\leftrightarrow} = \Gamma^{\mathrm{un}}_{\rightarrow}$. Finally, we use ${\leftrightarrow}^{*}$ to denote the reflexive, transitive, symmetric closure of~${\rightarrow}$: $x \leftrightarrow^{*} y$ means that~$x = x_0 \leftrightarrow x_1 \leftrightarrow \cdots \leftrightarrow x_k = y$ for some~$k \in \mathbb{Z}_{\geq 0}$. In other words, $x \leftrightarrow^{*} y$ means there is a path from $x$ to $y$ in~$\Gamma^{\mathrm{un}}_{\rightarrow}$.
Now let us review some notions of confluence for binary relations. Here we generally follow standard terminology in the theory of abstract rewriting systems, as laid out for instance in~\cite{huet1980confluent}; however, following chip-firing terminology, we instead use ``stable'' in place of what would normally be called ``irreducible,'' and rather than ``normal forms'' we refer to ``stabilizations.'' We say that~${\rightarrow}$ is \emph{terminating} (also sometimes called \emph{noetherian}) if there is no infinite sequence of relations~$x_0 \rightarrow x_1 \rightarrow x_2 \rightarrow \cdots$; i.e., ${\rightarrow}$ is terminating means that~$\Gamma_{\rightarrow}$ has no infinite paths (which implies in particular that this digraph has no directed cycles). Generally speaking, the relations we are most interested in will all be terminating and it will be easy for us to establish that they are terminating. For~$x \in X$, we say that~${\rightarrow}$ is \emph{confluent from $x$} if whenever $x \rightarrow^{*} y_1$ and~$x \rightarrow^{*} y_2$, there is $y_3$ such that~$y_1 \rightarrow^{*} y_3$ and~$y_2 \rightarrow^{*} y_3$. We say~$x \in X$ is \emph{${\rightarrow}$-stable} (or just \emph{stable} if the context is clear) if there is no $y \in X$ with $x \rightarrow y$. In graph-theoretic language, $x$ is~${\rightarrow}$-stable means that~$x$ is a sink (vertex of outdegree zero) of $\Gamma_{\rightarrow}$. If ${\rightarrow}$ is terminating, then for every~$x \in X$ there must be at least one stable $y \in X$ with $x \rightarrow^{*} y$. On the other hand, if ${\rightarrow}$ is confluent from~$x \in X$, then there can be at most one stable $y \in X$ with $x \rightarrow^{*} y$. Hence if ${\rightarrow}$ is terminating and is confluent from $x$, then there exists a unique stable~$y$ with~$x \rightarrow^{*} y$; we call this~$y$ the~\emph{${\rightarrow}$-stabilization} (or just \emph{stabilization} if the context is clear) of~$x$. We say that~${\rightarrow}$ is \emph{confluent} if it is confluent from every~$x \in X$. 
As we just explained, if ${\rightarrow}$ is confluent and terminating then a unique stabilization of~$x$ exists for all $x \in X$. A weaker notion than confluence is that of local confluence: we say that ${\rightarrow}$ is \emph{locally confluent} if for any~$x \in X$, if $x \rightarrow y_1$ and $x \rightarrow y_2$, then there is some $y_3$ with~$y_1 \rightarrow^{*} y_3$ and~$y_2 \rightarrow^{*} y_3$. Figure~\ref{fig:relationexs} gives some examples of relations comparing these various notions of confluence and termination. Observe that there is no example in this figure of a relation that is locally confluent and terminating but not confluent. That is no coincidence: Newman's lemma, a.k.a.~the diamond lemma, says that local confluence plus termination implies confluence.
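For finite terminating relations these notions can be checked by brute force. A toy sketch in Python (the four-element relation below is a hypothetical example, not one of our interval-firing processes):

```python
# Brute-force confluence checks for a finite terminating relation,
# illustrating the setting of the diamond lemma.

# edges of a toy digraph on {a, b, c, d}: a -> b, a -> c, b -> d, c -> d
step = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}

def reachable(x):
    """All y with x ->* y (reflexive, transitive closure of ->)."""
    seen, stack = {x}, [x]
    while stack:
        for y in step[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def confluent_from(x):
    """True iff every two ->*-successors of x have a common ->*-successor."""
    return all(reachable(y1) & reachable(y2)
               for y1 in reachable(x) for y2 in reachable(x))

assert all(confluent_from(x) for x in step)           # confluent from every element
stables = [y for y in reachable('a') if not step[y]]  # sinks reachable from a
print(stables)  # ['d']: the unique ->-stabilization of a
```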
\begin{figure}
\caption{Examples of various relations: (I) is confluent from $x$ but not from $y$; (II) and (III) are confluent but not terminating; (IV) and~(VI) are locally confluent but not confluent; (V) is confluent and terminating.}
\label{fig:relationexs}
\end{figure}
\begin{lemma}[Diamond lemma, see~{\cite[Theorem 3]{newman1942theories} or~\cite[Lemma 2.4]{huet1980confluent}}]\label{lem:diamond}
Suppose ${\raU{}}$ is terminating. Then ${\raU{}}$ is confluent if and only if it is locally confluent.
\end{lemma}
\section{Definition of interval-firing}
In this section we formally define the interval-firing processes in their most general form. We use the notation $\mathbf{k} \in \mathbb{Z}[\Phi]^{W}$ to mean that $\mathbf{k}$ is an integer-valued function on the roots of~$\Phi$ that is invariant under the action of the Weyl group. We write $\mathbf{a} \leq \mathbf{b}$ to mean that~$\mathbf{a}(\alpha) \leq \mathbf{b}(\alpha)$ for all $\alpha \in \Phi$. We use the notation~$\mathbf{k} = k$ to mean that $\mathbf{k}$ is constantly equal to~$k$. We also use the obvious notation $a\mathbf{a}+b\mathbf{b}$ for linear combinations of these functions. We use $\mathbb{N}[\Phi]^W$ to denote the set of~$\mathbf{k} \in \mathbb{Z}[\Phi]^W$ with $\mathbf{k} \geq 0$. We write $\rho_{\mathbf{k}} \coloneqq \sum_{i=1}^{n}\mathbf{k}(\alpha_i)\omega_i$. Since we have assumed that~$\Phi$ is irreducible, there are at most two $W$-orbits of $\Phi$: the short roots and the long roots. If $\Phi$ is simply laced then it has a single Weyl group orbit and~$\mathbf{k} = k$ for some constant~$k \in \mathbb{Z}$; otherwise, we have two constants $k_{s}, k_{l} \in \mathbb{Z}$ so that $\mathbf{k}(\alpha)=k_s$ if $\alpha$ is short and~$\mathbf{k}(\alpha) = k_l$ if $\alpha$ is long.
For $\mathbf{k}\in\mathbb{N}[\Phi]^W$, the \emph{symmetric interval-firing process} is the binary relation ${\rightarrow}_{\mathrm{sym},\mathbf{k}}$ on $P$ defined by
\[\lambda \rightarrow_{\mathrm{sym},\mathbf{k}} \lambda + \alpha, \; \textrm{ for $\lambda \in P$ and $\alpha\in \Phi^+$ with $\langle\lambda+\frac{\alpha}{2},\alpha^\vee\rangle\in [-\mathbf{k}(\alpha),\mathbf{k}(\alpha)]$}\]
and the \emph{truncated interval-firing process} is the binary relation ${\rightarrow}_{\mathrm{tr},\mathbf{k}}$ on $P$ defined by
\[\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda + \alpha, \; \textrm{ for $\lambda \in P$ and $\alpha\in \Phi^+$ with $\langle\lambda+\frac{\alpha}{2},\alpha^\vee\rangle\in [-\mathbf{k}(\alpha)+1,\mathbf{k}(\alpha)]$}.\]
From now on we will often think about a relation ${\rightarrow}$ as $\Gamma_{\rightarrow}$. So we use the shorthand notations~$\Gamma_{\mathrm{sym},\mathbf{k}}\coloneqq \Gamma_{\rightarrow_{\mathrm{sym},\mathbf{k}}}$ and $\Gamma_{\mathrm{tr},\mathbf{k}}\coloneqq \Gamma_{\rightarrow_{\mathrm{tr},\mathbf{k}}}$.
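To make the definitions concrete, consider the rank-one case $\Phi = A_1$ in coordinates where $P = \mathbb{Z}$, so $\langle\lambda,\alpha^\vee\rangle = \lambda$ and the positive root is $\alpha = 2$. A minimal sketch in Python (the coordinates and names are our own choices):

```python
# Rank-1 (type A_1) interval-firing: one firing step and the stabilization map.

def fires(lam, k, symmetric=True):
    """May lambda -> lambda + alpha fire?  Here <lambda + alpha/2, alpha^vee> = lam + 1."""
    lo = -k if symmetric else -k + 1
    return lo <= lam + 1 <= k

def stabilize(lam, k, symmetric=True):
    while fires(lam, k, symmetric):
        lam += 2  # add the positive root alpha
    return lam

k = 2
print([stabilize(l, k, True) for l in range(-5, 5)])   # symmetric
print([stabilize(l, k, False) for l in range(-5, 5)])  # truncated
```

Already in this tiny case one sees the qualitative difference: the symmetric process fires from $\lambda = -\mathbf{k}-1$ (sending it all the way to $\mathbf{k}+1$), while the truncated process leaves $\lambda = -\mathbf{k}-1$ stable.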
\begin{figure}
\caption{The positive roots of the rank~$2$ root systems $A_2$, $B_2$, and~$G_2$. The elements of $\Omega\cup\{0\}$ are also depicted.}
\label{fig:rank2posroots}
\end{figure}
\begin{example} \label{ex:rank2graphs}
The irreducible rank~$2$ root systems are $A_2$, $B_2$ and $G_2$. The positive roots and fundamental weights for these root systems are depicted in Figure~\ref{fig:rank2posroots}. In Figures~\ref{fig:a2symtr},~\ref{fig:b2symtr}, and~\ref{fig:g2symtr} we depict the truncated and symmetric interval-firing processes~$\Gamma_{\mathrm{tr},\mathbf{k}}$ and $\Gamma_{\mathrm{sym},\mathbf{k}}$ for $\mathbf{k}=0,1,2$ for these three root systems. Of course these graphs are infinite, so we depict the ``interesting part'' of the graphs near the origin (which is circled in black). The colors in these drawings correspond to classes of weights modulo the root lattice (hence there are three colors in the $A_2$ graphs, two in the $B_2$ graphs, and one in the $G_2$ graphs). Note that as $\mathbf{k}$ increases, the scale of the drawing is not maintained. Most, if not all, of the features of truncated and symmetric interval-firing that we care about are visible already in rank~$2$. Thus the reader is encouraged, while reading the rest of this paper, to return to these figures and understand how each of the results applies to these two-dimensional examples.
\end{example}
\begin{figure}\label{fig:a2symtr}
\end{figure}
\begin{figure}\label{fig:b2symtr}
\end{figure}
\begin{figure}\label{fig:g2symtr}
\end{figure}
\begin{remark} \label{rem:chipfiringinterpretation}
Let us recall Propp's labeled chip-firing process (studied in~\cite{hopkins2017sorting}), which motivated our study of interval-firing processes. The states of labeled chip-firing are configurations of labeled chips on the infinite path graph $\mathbb{Z}$, such as:
\begin{center}
\begin{tikzpicture}[scale=0.7,block/.style={draw,circle, minimum width={width("11")+12pt},
font=\small,scale=0.6}]
\foreach \x in {-2,-1,...,2} {
\node[anchor=north] (A\x) at (\x,0) {$\x$};
}
\draw (-2.2,0) -- (2.2,0);
\draw[dashed] (-5,0) -- (-2.2,0);
\draw[dashed] (5,0) -- (2.2,0);
\foreach[count=\i] \a/\b in {0/1,0/2,0/3} {
\node[block] at (\a,{\b*0.6*1.2-0.5*0.6*1.2}) {$\i$};
}
\end{tikzpicture}
\end{center}
If two chips occupy the same position, we may \emph{fire} them, which sends the lesser-labeled chip one vertex to the right and the greater-labeled chip one vertex to the left. For instance, firing the chips~\chip{1} and~\chip{2} above leads to
\begin{center}
\begin{tikzpicture}[scale=0.7,block/.style={draw,circle, minimum width={width("11")+12pt},
font=\small,scale=0.6}]
\foreach \x in {-2,-1,...,2} {
\node[anchor=north] (A\x) at (\x,0) {$\x$};
}
\draw (-2.2,0) -- (2.2,0);
\draw[dashed] (-5,0) -- (-2.2,0);
\draw[dashed] (5,0) -- (2.2,0);
\foreach[count=\i] \a/\b in {1/1,-1/1,0/1} {
\node[block] at (\a,{\b*0.6*1.2-0.5*0.6*1.2}) {$\i$};
}
\end{tikzpicture}
\end{center}
Firing the chips \chip{i} and \chip{j} with $i < j$ corresponds to $c \rightarrow c + (e_i-e_j)$, where the integer vector $c \coloneqq (c_1,\ldots,c_N) \in \mathbb{Z}^N$ is given by $c_i \coloneqq \textrm{the position of the chip \chip{i}}$. In this way central-firing (the subject of our sequel paper~\cite{galashin2017rootfiring2}) is the same as the labeled chip-firing process for $\Phi$ of Type A. Via this same correspondence between lattice vectors and configurations of chips, symmetric and truncated interval-firing in Type A can also be seen as ``labeled chip-firing processes'' that consist of the same chip-firing moves, which send chip~\chip{i} one vertex to the right and chip~\chip{j} one vertex to the left for any $i < j$, but where we allow these moves to be applied under different conditions: namely, when the position of chip~\chip{i} minus the position of chip~\chip{j} is either in the interval $[-k-1,k-1]$ (in the symmetric case) or in the interval $[-k,k-1]$ (in the truncated case). For example, consider the smallest non-trivial case of these interval-firing processes, which is symmetric interval-firing with $k=0$. This corresponds to the labeled chip-firing process that allows the transposition of the chips~\chip{i} and~\chip{j} with~$i < j$ when \chip{i} is one position to the left of \chip{j}. It is immediately apparent that this process is confluent; for instance, the configuration
\begin{center}
\begin{tikzpicture}[scale=0.7,block/.style={draw,circle, minimum width={width("11")+12pt},
font=\small,scale=0.6}]
\foreach \x in {-3,-2,...,3} {
\node[anchor=north] (A\x) at (\x,0) {$\x$};
}
\draw (-3.2,0) -- (3.2,0);
\draw[dashed] (-7,0) -- (-3.2,0);
\draw[dashed] (7,0) -- (3.2,0);
\foreach[count=\i] \a/\b in {-2/1,-1/1,1/1,-1/2,2/1,3/1,2/2} {
\node[block] at (\a,{\b*0.6*1.2-0.5*0.6*1.2}) {$\i$};
}
\end{tikzpicture}
\end{center}
$\rightarrow_{\mathrm{sym},0}$-stabilizes to
\begin{center}
\begin{tikzpicture}[scale=0.7,block/.style={draw,circle, minimum width={width("11")+12pt},
font=\small,scale=0.6}]
\foreach \x in {-3,-2,...,3} {
\node[anchor=north] (A\x) at (\x,0) {$\x$};
}
\draw (-3.2,0) -- (3.2,0);
\draw[dashed] (-7,0) -- (-3.2,0);
\draw[dashed] (7,0) -- (3.2,0);
\foreach[count=\i] \a/\b in {-1/1,-1/2,3/1,-2/1,2/1,2/2,1/1} {
\node[block] at (\a,{\b*0.6*1.2-0.5*0.6*1.2}) {$\i$};
}
\end{tikzpicture}
\end{center}
In general the stabilization will weakly sort each collection of contiguous chips, while leaving the underlying unlabeled configuration of chips the same. The next smallest case to consider is truncated interval-firing with $k=1$. This corresponds to the labeled chip-firing process that allows both the transposition moves from the symmetric $k=0$ case, and the usual labeled chip-firing moves from the central-firing case. The reader can verify that for instance the configuration
\begin{center}
\begin{tikzpicture}[scale=0.7,block/.style={draw,circle, minimum width={width("11")+12pt},
font=\small,scale=0.6}]
\foreach \x in {-2,-1,...,2} {
\node[anchor=north] (A\x) at (\x,0) {$\x$};
}
\draw (-2.2,0) -- (2.2,0);
\draw[dashed] (-5,0) -- (-2.2,0);
\draw[dashed] (5,0) -- (2.2,0);
\foreach[count=\i] \a/\b in {0/1,0/2,-1/1,-1/2} {
\node[block] at (\a,{\b*0.6*1.2-0.5*0.6*1.2}) {$\i$};
}
\end{tikzpicture}
\end{center}
$\rightarrow_{\mathrm{tr},1}$-stabilizes to
\begin{center}
\begin{tikzpicture}[scale=0.7,block/.style={draw,circle, minimum width={width("11")+12pt},
font=\small,scale=0.6}]
\foreach \x in {-2,-1,...,2} {
\node[anchor=north] (A\x) at (\x,0) {$\x$};
}
\draw (-2.2,0) -- (2.2,0);
\draw[dashed] (-5,0) -- (-2.2,0);
\draw[dashed] (5,0) -- (2.2,0);
\foreach[count=\i] \a/\b in {1/1,0/1,-1/1,-2/1} {
\node[block] at (\a,{\b*0.6*1.2-0.5*0.6*1.2}) {$\i$};
}
\end{tikzpicture}
\end{center}
Here it is less obvious that confluence holds (although it is not too hard to prove this fact directly via a diamond lemma argument). The reader is now encouraged to experiment with this labeled chip-firing interpretation of symmetric and truncated interval-firing for higher values of $k$. Note that increasing $k$ allows for the firing of chips~\chip{i} and~\chip{j} when they are further apart.
\end{remark}
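The Type A correspondence in the remark above can be simulated directly. A sketch in Python (our own encoding; the example reproduces the truncated $k=1$ stabilization shown in the remark):

```python
# "Labeled chip-firing" reading of interval-firing in Type A: c[i] is the
# position of chip i+1; firing chips i < j sends chip i one step right and
# chip j one step left, allowed exactly when c[i] - c[j] lies in
# [-k-1, k-1] (symmetric) or [-k, k-1] (truncated).

def moves(c, k, symmetric):
    lo, hi = (-k - 1, k - 1) if symmetric else (-k, k - 1)
    for i in range(len(c)):
        for j in range(i + 1, len(c)):
            if lo <= c[i] - c[j] <= hi:
                yield i, j

def stabilize(c, k, symmetric=True):
    c = list(c)
    while True:
        m = next(moves(c, k, symmetric), None)
        if m is None:
            return c
        i, j = m
        c[i] += 1
        c[j] -= 1

# the truncated k = 1 example above: chips 1, 2 at 0 and chips 3, 4 at -1
print(stabilize([0, 0, -1, -1], 1, symmetric=False))  # [1, 0, -1, -2]
```

Note that the code always applies the first available move; by confluence the stabilization does not depend on this choice.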
In our further treatment of the interval-firing processes we will focus on the geometric picture (on display in Example~\ref{ex:rank2graphs}) and not the chip-firing picture (discussed in Remark~\ref{rem:chipfiringinterpretation}).
To close out this section, let us demonstrate that the interval-firing processes are always terminating. This is straightforward because the collection~$\Phi^+$ of vectors we are adding is acyclic.
\begin{prop} \label{prop:interval_firing_terminating}
For~$\mathbf{k}\in\mathbb{N}[\Phi]^W$, the relations ${\rightarrow}_{\mathrm{sym},\mathbf{k}}$ and ${\rightarrow}_{\mathrm{tr},\mathbf{k}}$ are terminating.
\end{prop}
\begin{proof}
It is enough to show this for ${\rightarrow}_{\mathrm{sym},\mathbf{k}}$, which has more firing moves than~${\rightarrow}_{\mathrm{tr},\mathbf{k}}$. For~$\lambda \in P$ define $\varphi(\lambda) \coloneqq \langle\rho_{\mathbf{k}+1}-\lambda, \rho_{\mathbf{k}+1}-\lambda\rangle$; in other words, $\varphi(\lambda)$ is the squared length of the vector $\rho_{\mathbf{k}+1}-\lambda$. Suppose $\lambda \rightarrow_{\mathrm{sym},\mathbf{k}} \lambda + \alpha$ for $\alpha \in \Phi^+$. Then,
\begin{align*}
\varphi(\lambda) - \varphi(\lambda+\alpha) &= \langle \rho_{\mathbf{k}+1}-\lambda, \rho_{\mathbf{k}+1}-\lambda\rangle - \langle \rho_{\mathbf{k}+1}-(\lambda+\alpha), \rho_{\mathbf{k}+1}-(\lambda+\alpha)\rangle\\
&=2\langle \rho_{\mathbf{k}+1},\alpha\rangle-2\langle\lambda,\alpha\rangle-\langle\alpha,\alpha\rangle\\
&\geq \langle\alpha,\alpha\rangle(\mathbf{k}(\alpha)+1-\mathbf{k}(\alpha)+1-1) =\langle\alpha,\alpha\rangle,
\end{align*}
where we use the facts that $\langle\lambda,\alpha\rangle\leq \frac{\langle\alpha,\alpha\rangle}{2}(\mathbf{k}(\alpha)-1)$ since $\lambda \rightarrow_{\mathrm{sym},\mathbf{k}} \lambda + \alpha$, and that $\langle\rho_{\mathbf{k}+1},\alpha\rangle\geq\frac{\langle\alpha,\alpha\rangle}{2}(\mathbf{k}(\alpha)+1)$ because $\alpha$ is $W$-conjugate to at least one simple root appearing with nonzero coefficient in its expansion in terms of simple roots. So each firing move causes the quantity $\varphi(\lambda)$ to decrease by at least some fixed positive amount. But $\varphi(\lambda)\geq 0$ because it is the squared length of a vector. Thus indeed ${\rightarrow}_{\mathrm{sym},\mathbf{k}}$ is terminating.
\end{proof}
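The displacement bound in the proof can be sanity-checked numerically in rank one. A sketch in Python (our own normalization for $A_1$: $\lambda\in\mathbb{Z}$, $\alpha = 2$, $\langle x,y\rangle = xy/2$, so $\langle\alpha,\alpha\rangle = 2$ and $\rho_{\mathbf{k}+1} = k+1$):

```python
# Numerical check, rank 1: every symmetric firing move lambda -> lambda + 2
# (allowed when lam + 1 lies in [-k, k]) drops phi by at least <alpha, alpha> = 2.

def phi(lam, k):
    # <rho_{k+1} - lambda, rho_{k+1} - lambda>, the squared length
    return ((k + 1) - lam) ** 2 / 2

k = 3
for lam in range(-k - 1, k):  # exactly the lambda with lam + 1 in [-k, k]
    drop = phi(lam, k) - phi(lam + 2, k)
    assert drop >= 2  # i.e., drop >= <alpha, alpha>
print("phi drops by at least <alpha, alpha> on every firing move")
```

The minimum drop of exactly $\langle\alpha,\alpha\rangle$ is attained at the largest fireable weight $\lambda = k-1$, matching the inequality in the proof being tight there.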
\section{Symmetries of interval-firing processes} \label{sec:symmetry}
In this section we study the symmetries of the two interval-firing processes. Since the set of positive roots $\Phi^+$ is an ``oriented'' set of vectors, we do not expect the directed graphs $\Gamma_{\mathrm{sym},\mathbf{k}}$ and~$\Gamma_{\mathrm{tr},\mathbf{k}}$ to have many symmetries, and certainly none coming from the Weyl group. But if we consider instead the undirected graphs $\Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$ and~$\Gamma^{\mathrm{un}}_{\mathrm{tr},\mathbf{k}}$ (corresponding to the symmetric relations~${\leftrightarrow}_{\mathrm{sym},\mathbf{k}}$ and~${\leftrightarrow}_{\mathrm{tr},\mathbf{k}}$), we will see that both of these do in fact have symmetries coming from the Weyl group.
For the symmetric interval-firing process, the graph $\Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$ is invariant under the action of the whole Weyl group~$W$. This explains the name ``symmetric'' for the process: it has the biggest possible group of symmetries. As for the truncated process, in order to understand its symmetries we need to introduce a certain subgroup of the Weyl group~$C\subseteq W$. In fact this $C$ is an abelian group and satisfies $C\simeq P/Q$. In our definition of $C$ we follow Lam and Postnikov~\cite{lam2012alcoved}\footnote{Lam and Postnikov worked in a completely dual setting to ours: that is, they described a copy of the coweight lattice modulo the coroot lattice inside of $W$; hence, they used $\theta$ instead of $\widehat{\theta}$, etc.}. The \emph{Coxeter number} of $\Phi$, another fundamental invariant of the root system, is $h\coloneqq \langle\rho,\widehat{\theta}^\vee\rangle+1$. (The Coxeter number is also equal to $h = 1+\sum_{i=1}^{n}a_i$ where $\theta=\sum_{i=1}^{n}a_i\alpha_i$.) Lam and Postnikov~\cite[\S5]{lam2012alcoved} defined the subgroup $C\coloneqq \{w\in W\colon \rho-w(\rho) \in hP\}$ of the Weyl group and explained (using the affine Weyl group, which we will not discuss here) that $C$ is naturally isomorphic to $P/Q$: the isomorphism is explicitly given by~$w \mapsto \omega \in \Omega^0_m$ if and only if~$\rho-w(\rho) = h\omega$. (Since $\rho-w(\rho) \in Q$ for any $w\in W$, a consequence of this description of the isomorphism is that $h\cdot(P/Q) =\{0\}$.) As they mention, this subgroup was also studied before by Verma~\cite{verma1975affine}, but in spite of its significance it does not seem to have any name other than $C$ in the root system literature. 
Lam and Postnikov gave another characterization~\cite[Proposition 6.4]{lam2012alcoved} of~$C$ that will be useful for us: $C=\{w\in W\colon w(\{\alpha^\vee_0,\alpha^\vee_1,\ldots,\alpha^\vee_n\}) = \{\alpha^\vee_0,\alpha^\vee_1,\ldots,\alpha^\vee_n\}\}$, where we use the suggestive notation $\alpha^\vee_0 \coloneqq -\widehat{\theta}^\vee$.
\begin{thm} \label{thm:symmetry}
Let $\mathbf{k}\in\mathbb{N}[\Phi]^W$. Set $\Gamma \coloneqq \Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$ or $\Gamma \coloneqq \Gamma^{\mathrm{un}}_{\mathrm{tr},\mathbf{k}}$. Then,
\begin{itemize}
\item if $\Gamma = \Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$, the linear map $v\mapsto w(v)$ is an automorphism of $\Gamma$ for all $w \in W$;
\item if $\Gamma = \Gamma^{\mathrm{un}}_{\mathrm{tr},\mathbf{k}}$, the affine map $v \mapsto w(v-\frac{1}{h}\rho)+\frac{1}{h}\rho$ is an automorphism of $\Gamma$ for all~$w \in C\subseteq W$.
\end{itemize}
\end{thm}
\begin{proof}
If $\Gamma = \Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$ set $c\coloneqq 0$, and if $\Gamma = \Gamma^{\mathrm{un}}_{\mathrm{tr},\mathbf{k}}$ set $c\coloneqq 1$. Consider the hyperplane arrangement $\mathcal{H} \coloneqq \left\{H_{\alpha^\vee,\frac{c}{2}}\colon \alpha \in \Phi^{+}\right\}$ with hyperplanes $H_{\alpha^\vee,\frac{c}{2}} \coloneqq \left\{v\in V\colon \langle v,\alpha^\vee\rangle = \frac{c}{2}\right\}$.
First we claim that if for $w\in W$ and $u\in V$ the affine map $\varphi\colon v\mapsto w(v-u)+u$ is an automorphism of $\mathcal{H}$ which maps $P$ to $P$, then it is an automorphism of $\Gamma$ (by an automorphism of the hyperplane arrangement, we mean an invertible affine map $\varphi$ such that $\varphi$ permutes the hyperplanes in $\mathcal{H}$). Indeed, observe that there is an edge in $\Gamma$ between $\lambda$ and $\mu$ if and only if there is some $\alpha \in \Phi^{+}$ such that $\mu = \lambda +\alpha$ and $\mathrm{max}(\{|\langle\mu,\alpha^\vee\rangle-\frac{c}{2}|,|\langle\lambda,\alpha^\vee\rangle-\frac{c}{2}|\}) \leq \mathbf{k}(\alpha)+1-\frac{c}{2}$. So suppose there is an edge between~$\lambda$ and~$\mu$ in the $\alpha$ direction. Any $\varphi$ of this form will satisfy~$\varphi(\mu)-\varphi(\lambda)=w(\alpha)$ and $\varphi(H_{\alpha^\vee,\frac{c}{2}}) = H_{\pm w(\alpha)^\vee,\frac{c}{2}}$ (where the sign~$\pm$ is chosen so that $\pm w(\alpha) \in \Phi^{+}$). Moreover, since all Weyl group elements are orthogonal, and, in particular, preserve distances, the distance from $\mu$ to~$H_{\alpha^\vee,\frac{c}{2}}$ will be the same as the distance from $\varphi(\mu)$ to $H_{\pm w(\alpha)^\vee,\frac{c}{2}}$, and ditto for $\lambda$. But $|\langle\mu,\alpha^\vee\rangle-\frac{c}{2}|$ is precisely the distance from $\mu$ to~$H_{\alpha^\vee,\frac{c}{2}}$, and ditto for $\lambda$. Hence indeed we will get that $\varphi(\mu) = \varphi(\lambda)+w(\alpha)$ and that
\begin{align*}
\mathrm{max}\left(\left\{\left|\langle\varphi(\mu),(\pm w(\alpha))^\vee\rangle-\frac{c}{2}\right|,\left|\langle\varphi(\lambda),(\pm w(\alpha))^\vee\rangle-\frac{c}{2}\right|\right\}\right) &\leq \mathbf{k}(\alpha)+1-\frac{c}{2}\\
&= \mathbf{k}(\pm w(\alpha))+1-\frac{c}{2},
\end{align*}
which means there is an edge in $\Gamma$ between $\varphi(\lambda)$ and $\varphi(\mu)$ in the $\pm w(\alpha)$ direction. To see that conversely if there is an edge between $\varphi(\lambda)$ and $\varphi(\mu)$ in $\Gamma$, there is one between~$\lambda$ and $\mu$, use that $\varphi$ is invertible and $\varphi^{-1}$ is of the same form.
In the case $c=0$, the hyperplane arrangement $\mathcal{H}$ is just the Coxeter arrangement of~$\Phi$ and it is easy to see that every $w\in W$ is an automorphism of~$\mathcal{H}$.
Now consider the case $c=1$, in which case $\mathcal{H}$ is (a scaled version of) the \emph{$\Phi^\vee$-Linial arrangement}; see for instance~\cite{postnikov2000deformations} and~\cite{athanasiadis2000deformations}. We claim that $\varphi\colon v \mapsto w(v-\frac{c}{h}\rho)+\frac{c}{h}\rho$ is an automorphism of $\mathcal{H}$ for all $w \in C$. So suppose $x \in H_{\alpha^\vee,\frac{c}{2}}$; we want to show that $\varphi(x) \in H_{\pm w(\alpha)^\vee,\frac{c}{2}}$ where the sign $\pm$ is chosen so that $\pm w(\alpha)$ is positive. (The reverse implication will then follow from consideration of $\varphi^{-1}(v) = w^{-1}(v-\frac{c}{h}\rho)+\frac{c}{h}\rho$.) We have
\begin{equation} \label{eqn:onhyperplane}
\langle\varphi(x),w(\alpha)^\vee\rangle = \frac{c}{2}-\left\langle\frac{c}{h}\rho,\alpha^\vee\right\rangle+\left\langle\frac{c}{h}\rho,w(\alpha)^\vee\right\rangle.
\end{equation}
Write $\alpha^\vee = \sum_{i=1}^{n}a_i\alpha^\vee_i$, with the convention $a_0\coloneqq 0$. By a result of Lam-Postnikov mentioned above, there is a permutation $\pi\colon \{0,1,\ldots,n\}\to \{0,1,\ldots,n\}$ such that $w(\alpha^\vee_i) = \alpha^\vee_{\pi(i)}$ (with the aforementioned convention $\alpha^\vee_0 \coloneqq -\widehat{\theta}^\vee$ where $\widehat{\theta}^\vee$ is the highest root of $\Phi^\vee$). Thus, $w(\alpha)^\vee=\sum_{i=1}^{n} a_i\alpha^\vee_{\pi(i)}$.
We will consider two cases. First suppose that $a_{\pi^{-1}(0)} = 0$. Then $w(\alpha)^\vee$ is clearly a positive root, so $\pm = +$; moreover, we have $\langle\frac{c}{h}\rho,\alpha^\vee\rangle=\langle\frac{c}{h}\rho,w(\alpha)^\vee\rangle=\frac{c}{h}\cdot\sum_{i=1}^{n}a_i$. So from~\eqref{eqn:onhyperplane} we get that $\langle\varphi(x),w(\alpha)^\vee\rangle=\frac{c}{2}$, that is, $\varphi(x) \in H_{\pm w(\alpha)^\vee,\frac{c}{2}}$, as desired.
Now suppose that $a_{\pi^{-1}(0)} \neq 0$. We claim that this forces $a_{\pi^{-1}(0)} = 1$: indeed, otherwise the height of $w(\alpha)^\vee$ would be strictly less than $-(h-1)$, which is impossible because $-\widehat{\theta}^{\vee}$ has height $-(h-1)$ and is the root in $\Phi^\vee$ of smallest height. So indeed we have $a_{\pi^{-1}(0)} = 1$. Note also that in this case the height of $w(\alpha)^\vee$ is negative and hence $w(\alpha)^\vee$ is a negative root, so $\pm= -$. Then we compute
\[-\left\langle\frac{c}{h}\rho,\alpha^\vee\right\rangle+\left\langle\frac{c}{h}\rho,w(\alpha)^\vee\right\rangle=
-\left\langle\frac{c}{h}\rho,\alpha_{\pi^{-1}(0)}^\vee\right\rangle+\left\langle\frac{c}{h}\rho,\alpha^\vee_0\right\rangle = -\frac{c}{h}-\left(\frac{c}{h}(h-1)\right) = -c.\]
Thus from~\eqref{eqn:onhyperplane} we get that $\langle\varphi(x),-w(\alpha)^\vee\rangle = -\frac{c}{2}+c = \frac{c}{2}$, that is, $\varphi(x) \in H_{\pm w(\alpha)^\vee,\frac{c}{2}}$, as desired.
Finally, the description of $C$ given above says that $\varphi(0) = w(0-\frac{c}{h}\rho)+\frac{c}{h}\rho = c\omega$ for some $\omega \in \Omega^0_m$. Hence indeed $\varphi$ maps $P$ to $P$.
\end{proof}
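As an independent sanity check on the $c=1$ case (not used anywhere in the proofs), the following Python sketch verifies in type $A_2$, where $h=3$, that the affine map $\varphi\colon v \mapsto w(v-\frac{1}{3}\rho)+\frac{1}{3}\rho$ permutes the three Linial hyperplanes $\langle v,\alpha^\vee\rangle = \frac{1}{2}$. We take $w = s_1s_2$, which cyclically permutes $\{\alpha_1,\alpha_2,-\theta\}$ and so lies in the subgroup $C$; coordinates are in the fundamental-weight basis, where $\langle v,\alpha_i^\vee\rangle$ is the $i$-th coordinate.

```python
from fractions import Fraction as F

def w(v):
    # w = s1 s2 in type A2, fundamental-weight coordinates:
    # s1(x, y) = (-x, x + y) and s2(x, y) = (x + y, -y), so (s1 s2)(x, y) = (-x - y, x).
    x, y = v
    return (-x - y, x)

def phi(v, h=3):
    # the affine map v -> w(v - rho/h) + rho/h, with rho = (1, 1)
    x, y = v[0] - F(1, h), v[1] - F(1, h)
    wx, wy = w((x, y))
    return (wx + F(1, h), wy + F(1, h))

# pairings of v with the three positive coroots alpha1^vee, alpha2^vee, theta^vee
PAIRINGS = [lambda v: v[0], lambda v: v[1], lambda v: v[0] + v[1]]

def on_linial(v):
    # flags recording which Linial hyperplanes <v, alpha^vee> = 1/2 contain v
    return [p(v) == F(1, 2) for p in PAIRINGS]
```

Running over sample points, one finds that $\varphi$ sends $H_{\alpha_1^\vee,\frac{1}{2}}$ to $H_{\alpha_2^\vee,\frac{1}{2}}$, sends $H_{\alpha_2^\vee,\frac{1}{2}}$ to $H_{\theta^\vee,\frac{1}{2}}$, and sends $H_{\theta^\vee,\frac{1}{2}}$ to $H_{\alpha_1^\vee,\frac{1}{2}}$, as the theorem predicts.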
\section{Sinks of symmetric interval-firing and the map~\texorpdfstring{$\eta$}{eta}}
Recall that our overall strategy for proving confluence of the interval-firing processes is to show that they get ``trapped'' inside certain permutohedra, and then to analyze where these processes must terminate. In order to carry out this strategy, we need to understand what are the possible final points we terminate at, i.e., what are the stable points of these processes.
In this section we describe the stable points of the symmetric interval-firing process, i.e., the sinks of $\Gamma_{\mathrm{sym},\mathbf{k}}$. We will show in particular that there is a way to consistently label the sinks of $\Gamma_{\mathrm{sym},\mathbf{k}}$ across all values of~$\mathbf{k}$.
In order to define this labeling we need to review some basic facts about parabolic subgroups and parabolic cosets. Recall that the Weyl group $W$ is generated by the \emph{simple reflections}~$s_i \coloneqq s_{\alpha_i}$ for $i=1,\ldots,n$. For any $w \in W$ we use $\ell(w)$ to denote the \emph{length} of~$w$, which is the length of the shortest representation of $w$ as a product of simple reflections. An \emph{inversion} of $w$ is a positive root $\alpha \in \Phi^{+}$ for which $w(\alpha)$ is negative. The length $\ell(w)$ is equal to the number of inversions of $w$. The identity is the only Weyl group element of length zero. The simple reflections are the only Weyl group elements of length one: $s_i$ sends $\alpha_i$ to~$-\alpha_i$ and permutes $\Phi^{+}\setminus\{\alpha_i\}$. A \emph{(right) descent} of $w \in W$ is a simple reflection~$s_i$ such that $\ell(ws_i) <\ell(w)$. The reflection $s_i$ is a descent of $w$ if and only if $\alpha_i$ is an inversion of $w$.
Recall that for $I\subseteq[n]$ we use $W_I$ to denote the corresponding \emph{parabolic subgroup} of $W$, that is, the subgroup of~$GL(V)$ generated by simple reflections $s_i$ for $i \in I$. Note that $W_I$ is (isomorphic to) the Weyl group of $\Phi_I$. For $\lambda \in P$ we define the parabolic permutohedron $\Pi_I(\lambda) \coloneqq \mathrm{ConvexHull}\, W_I(\lambda)$ and $\Pi^Q_I(\lambda)\coloneqq \Pi_I(\lambda)\cap(Q+\lambda)$. An important property of parabolic subgroups is the existence of distinguished coset representatives: each (left) coset $wW_I$ in $W$ contains a unique element of minimal length. We use $W^I$ for the set of minimal length coset representatives of~$W_I$. There is even an explicit description: $W^I \coloneqq \{w\in W\colon \textrm{$s_i$ is not a descent of $w$ for all $i \in I$}\}$ (see for instance~\cite[\S2.4]{bjorner2005coxeter}).
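To make the descent description of $W^I$ concrete, here is a small Python sketch (ours, purely illustrative) in type $A_2$, where the Weyl group is $S_3$ acting in one-line notation and the simple reflections are the adjacent transpositions; it computes the minimal length coset representatives for $I = \{1\}$ directly from the descent criterion.

```python
from itertools import permutations

# Weyl group of type A2 is the symmetric group S3; the simple reflections
# s_1, s_2 are the adjacent transpositions (0-indexed one-line notation).
def compose(u, w):                      # (u w)(i) = u(w(i))
    return tuple(u[w[i]] for i in range(len(w)))

def length(w):                          # Coxeter length = number of inversions
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

def s(i, n=3):                          # simple reflection s_i, i = 1, ..., n-1
    p = list(range(n)); p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

W = list(permutations(range(3)))
I = [1]                                 # the parabolic subgroup W_I = <s_1>
W_I = [tuple(range(3)), s(1)]

# W^I = minimal-length coset representatives: no s_i with i in I is a descent
W_upper_I = [w for w in W if all(length(compose(w, s(i))) > length(w) for i in I)]
```

Here `W_upper_I` has $|W|/|W_I| = 3$ elements, one per left coset of $W_I$, and each is the unique minimal length element of its coset.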
Recall that for any $\lambda \in P$ we use $\lambda_{\mathrm{dom}}$ to denote the dominant element of $W(\lambda)$. For a dominant weight $\lambda = \sum_{i=1}^n c_i \omega_i \in P_{\geq 0}$, we define $I^{0}_{\lambda} \coloneqq \{i \in [n] \colon c_i = 0\}$. And then for any weight $\lambda \in P$ we define $I^{0}_{\lambda} \coloneqq I^{0}_{\lambda_{\mathrm{dom}}}$.
\begin{prop} \label{prop:stabilizer}
For $\lambda \in P_{\geq 0}$, the stabilizer of $\lambda$ in $W$ is $W_{I^{0}_{\lambda}}$.
\end{prop}
\begin{proof}
This (straightforward proposition) is~\cite[Lemma 10.2B]{humphreys1972lie}.
\end{proof}
\begin{cor} \label{cor:cosets}
For any $\lambda \in P$, $\{w \in W\colon w^{-1}(\lambda) \in P_{\geq 0}\}$ is a coset of~$W_{I^{0}_{\lambda}}$.
\end{cor}
\begin{proof}
First let us show that if $w^{-1}(\lambda)$ is dominant then $(ww')^{-1}(\lambda)$ is dominant for any $w' \in W_{I^{0}_{\lambda}}$. This is clear: $(ww')^{-1}(\lambda) = (w')^{-1}(w^{-1}(\lambda)) = (w')^{-1}(\lambda_{\mathrm{dom}}) = \lambda_{\mathrm{dom}}$ since $w'$ is in the stabilizer of $\lambda_{\mathrm{dom}}$ by Proposition~\ref{prop:stabilizer}. Next let us show that if~$w^{-1}(\lambda)$ is dominant and $(w')^{-1}(\lambda)$ is dominant then $w' = ww''$ for some $w'' \in W_{I^{0}_{\lambda}}$. This is also clear: $w^{-1}(w'(\lambda_{\mathrm{dom}})) = w^{-1}(\lambda) = \lambda_{\mathrm{dom}}$, so $w^{-1}w'$ is in the stabilizer of~$\lambda_{\mathrm{dom}}$, that is, $w^{-1}w'= w''$ for some $w'' \in W_{I^{0}_{\lambda}}$ thanks to Proposition~\ref{prop:stabilizer}, as claimed.
\end{proof}
In light of Corollary~\ref{cor:cosets}, for $\lambda \in P$ we define $w_{\lambda}$ to be the minimal length element of~$\{w \in W\colon w^{-1}(\lambda) \in P_{\geq 0}\}$. Hence, for $\lambda \in P_{\geq 0}$ we have (by the Orbit-Stabilizer Theorem) that $W^{I^{0}_{\lambda}} = \{w_\mu\colon \mu \in W(\lambda)\}$ and $w_{\mu}\neq w_{\mu'}$ for $\mu\neq \mu' \in W(\lambda)$. Another way to think about $w_{\lambda}$: $\lambda$ may belong to the closure of many chambers, but there will be a unique chamber $wC_0$ with $w$ of minimal length such that $\lambda$ belongs to the closure of~$wC_0$ and this is when $w=w_{\lambda}$. Then for $\mathbf{k} \in \mathbb{N}[\Phi]^W$, we define the map $\eta_{\mathbf{k}}\colon P \to P$ by setting~$\eta_{\mathbf{k}}(\lambda) \coloneqq \lambda + w_{\lambda}(\rho_{\mathbf{k}})$ for all $\lambda \in P$ (where, as above, we have~$\rho_{\mathbf{k}} \coloneqq \sum_{i=1}^{n} \mathbf{k}(\alpha_i) \omega_i$).
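To make the definition concrete, the following Python sketch (ours, purely illustrative) computes $w_{\lambda}$ and $\eta_{\mathbf{k}}$ in type $A_2$ for a constant $\mathbf{k}=k$, in fundamental-weight coordinates where $\langle\lambda,\alpha_i^\vee\rangle$ is the $i$-th coordinate. It computes $w_\lambda$ by the greedy dominance algorithm: repeatedly reflect through a simple root whose coordinate is negative; each step fixes one of the $\#\{\alpha\in\Phi^+\colon \langle\lambda,\alpha^\vee\rangle<0\}$ relevant hyperplanes, so the resulting element is the minimal length representative.

```python
# Type A2 in fundamental-weight coordinates: <lam, alpha_i^vee> is the i-th
# coordinate, and the simple roots are the rows of the Cartan matrix.
ALPHA = [(2, -1), (-1, 2)]

def reflect(i, lam):
    """s_i(lam) = lam - <lam, alpha_i^vee> alpha_i."""
    c = lam[i]
    return tuple(x - c * a for x, a in zip(lam, ALPHA[i]))

def w_lambda_word(lam):
    """Greedy dominance algorithm: repeatedly reflect a negative coordinate.
    Returns (word, lam_dom) with w_lambda = s_{word[0]} s_{word[1]} ... ."""
    word, mu = [], lam
    while any(c < 0 for c in mu):
        i = next(i for i, c in enumerate(mu) if c < 0)
        mu = reflect(i, mu)
        word.append(i)
    return word, mu

def eta(lam, k=1):
    """eta_k(lam) = lam + w_lambda(rho_k), with rho_k = k(omega_1 + omega_2)."""
    word, _ = w_lambda_word(lam)
    v = (k, k)
    for i in reversed(word):  # apply w_lambda = s_{word[0]} ... s_{word[-1]} to rho_k
        v = reflect(i, v)
    return tuple(x + y for x, y in zip(lam, v))
```

For example $\eta_1(-1,2) = (-2,4)$: here $w_\lambda = s_1$ and $s_1(\rho) = (-1,2)$. On any finite window one can also check Proposition~\ref{prop:etafacts} directly: $\eta_2 = \eta_1 \circ \eta_1$ and $\eta_1$ is injective.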
\begin{figure}
\caption{The map $\eta_{\mathbf{k}}$ ``dilates'' space by translating the chambers radially outwards.} \label{fig:eta}
\end{figure}
This map $\eta_{\mathbf{k}}$ will be of crucial importance for us in our investigation of both the symmetric and truncated interval-firing processes and the relationship between these two processes. Figure~\ref{fig:eta} gives a graphical depiction of $\eta_{\mathbf{k}}$: as we can see, this map ``dilates'' space by translating the chambers radially outwards; a point not inside any chamber travels in the same direction as the chamber closest to the fundamental chamber among those chambers whose closure the point lies in. The following proposition lists some very basic properties of $\eta_{\mathbf{k}}$.
\begin{prop} \label{prop:etafacts} $ $
\begin{itemize}
\item For any $\mathbf{k},\mathbf{m} \in \mathbb{N}[\Phi]^W$, we have $\eta_{\mathbf{k}+\mathbf{m}} = \eta_{\mathbf{m}}\circ\eta_{\mathbf{k}}$.
\item For any $\mathbf{k} \in \mathbb{N}[\Phi]^W$, the map $\eta_{\mathbf{k}}\colon P \to P$ is injective.
\end{itemize}
\end{prop}
\begin{proof}
For the first bullet point: let $\lambda \in P$. Set $\lambda' \coloneqq \eta_{\mathbf{k}}(\lambda) = \lambda + w_{\lambda}(\rho_{\mathbf{k}})$. Observe that $\lambda'_{\mathrm{dom}} = w_{\lambda}^{-1}(\eta_{\mathbf{k}}(\lambda)) = \lambda_{\mathrm{dom}} + \rho_{\mathbf{k}}$. Hence, $I^{0}_{\lambda'} \subseteq I^{0}_{\lambda}$. This means the cosets of~$W_{I^{0}_{\lambda}}$ are unions of cosets of~$W_{I^{0}_{\lambda'}}$. But we just saw that $w_{\lambda} \in w_{\lambda'}W_{I^{0}_{\lambda'}}$, because~$w_{\lambda}^{-1}(\lambda')$ is dominant. So $w_{\lambda}$ must be the minimal length element of $w_{\lambda'}W_{I^{0}_{\lambda'}}$ (since it is the minimal length element of a superset of $w_{\lambda'}W_{I^{0}_{\lambda'}}$). Hence $w_{\lambda'} = w_{\lambda}$. This means that $\eta_{\mathbf{m}}(\eta_{\mathbf{k}}(\lambda)) = \lambda + w_{\lambda}(\rho_{\mathbf{k}}) + w_{\lambda}(\rho_{\mathbf{m}}) = \lambda + w_{\lambda}(\rho_{\mathbf{k+m}}) = \eta_{\mathbf{k}+\mathbf{m}}(\lambda)$ and thus the claim is proved.
For the second bullet point: suppose $\lambda,\mu \in P$ with $\eta_{\mathbf{k}}(\lambda) = \eta_{\mathbf{k}}(\mu)$. First of all, since~$\eta_{\mathbf{k}}(\lambda)_{\mathrm{dom}} = \lambda_{\mathrm{dom}} + \rho_{\mathbf{k}}$ and similarly for $\mu$, we have $\lambda_{\mathrm{dom}}=\mu_{\mathrm{dom}}$. Let~$m \in \mathbb{Z}$ be some very large positive constant. From the first bullet point we know~$\eta_{\mathbf{k}+m}(\lambda) = \eta_{\mathbf{k}+m}(\mu)$ and hence $\lambda + w_{\lambda}(\rho_{\mathbf{k}+m}) = \mu + w_{\mu}(\rho_{\mathbf{k}+m})$. But $\rho_{\mathbf{k}+m}$ is inside the fundamental chamber~$C_0$, and hence $w(\rho_{\mathbf{k}+m}) = w'(\rho_{\mathbf{k}+m})$ if and only if $w=w'$. Moreover, by taking $m$ large enough we can guarantee that $w(\rho_{\mathbf{k}+m})$ and $w'(\rho_{\mathbf{k}+m})$ are very far away from one another for $w \neq w'$. Hence $\lambda + w_{\lambda}(\rho_{\mathbf{k}+m}) = \mu + w_{\mu}(\rho_{\mathbf{k}+m})$ in fact forces $w_{\lambda} = w_{\mu}$. But~$w_{\lambda} = w_{\mu}$ together with~$\lambda_{\mathrm{dom}} = \mu_{\mathrm{dom}}$ means $\lambda = \mu$ and thus the claim is proved.
\end{proof}
In light of Proposition~\ref{prop:etafacts} it makes sense to set~$\eta \coloneqq \eta_1$ so that $\eta_k = \eta^k$. Now we proceed to explain how $\eta_{\mathbf{k}}$ labels the sinks of $\Gamma_{\mathrm{sym},\mathbf{k}}$.
For a dominant weight~$\lambda = \sum_{i=1}^n c_i \omega_i \in P_{\geq 0}$, define $I^{0,1}_{\lambda} \coloneqq \{i \in [n] \colon c_i \in \{0,1\}\}$. And for any weight $\lambda \in P$ define $I^{0,1}_{\lambda} \coloneqq I^{0,1}_{\lambda_{\mathrm{dom}}}$.
\begin{prop} \label{prop:posparaboliccosets}
Let $\lambda \in P$ with $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all~$\alpha \in \Phi^{+}$. Then $w_{\lambda} (\Phi^{+}_{I^{0,1}_{\lambda}})$ is a subset of positive roots.
\end{prop}
\begin{proof}
It suffices to show that $w_{\lambda}(\alpha_i)$ is positive for all $i \in I^{0,1}_{\lambda}$. Suppose that $w_{\lambda}(\alpha_i)$ is negative for some $i \in I^{0,1}_{\lambda}$, i.e., $s_i$ is a descent of $w_{\lambda}$. Note~$\langle\lambda_{\mathrm{dom}},\alpha_i^\vee\rangle \in \{0,1\}$. If $\langle\lambda_{\mathrm{dom}},\alpha_i^\vee\rangle = 1$, then $\langle\lambda_{\mathrm{dom}},-\alpha_i^\vee\rangle = -1$ so~$\langle\lambda,-w_{\lambda}(\alpha_i)^\vee\rangle = -1$, which contradicts that $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. But since $w_{\lambda}$ is the minimal length representative of $w_{\lambda}W_{I^{0}_{\lambda}}$, it cannot have any descents $s_j$ with $j\in I^{0}_{\lambda}$. Hence we cannot have that~$\langle\lambda_{\mathrm{dom}},\alpha_i^\vee\rangle = 0$ either. Thus it must be that $w_{\lambda}(\alpha_i)$ is positive for all $i \in I^{0,1}_{\lambda}$.
\end{proof}
\begin{prop} \label{prop:sinksminlen}
For a dominant weight $\mu \in P_{\geq 0}$, we have that
\[ W^{I^{0,1}_{\mu}} = \{w_{\lambda}\colon \lambda \in P, \lambda_{\mathrm{dom}}=\mu, \langle\lambda,\alpha^\vee\rangle \neq -1 \textrm{ for all $\alpha \in \Phi^{+}$}\}.\]
\end{prop}
\begin{proof}
Let $\lambda \in P$ with $\lambda_{\mathrm{dom}}=\mu$ and first suppose that $\langle\lambda,\alpha^\vee\rangle=-1$ for some $\alpha \in \Phi^{+}$. Then we have~$\langle w^{-1}_{\lambda}(\lambda),w^{-1}_{\lambda}(\alpha)^\vee\rangle = -1$. But since $w^{-1}_{\lambda}(\lambda) = \lambda_{\mathrm{dom}}$ is dominant, this means~$w^{-1}_{\lambda}(\alpha)$ is a negative root; moreover, the only way $\langle\lambda_{\mathrm{dom}},w^{-1}_{\lambda}(\alpha)^\vee\rangle = -1$ is possible is if all the simple coroots $\alpha_i^\vee$ appearing in the expansion of $-w^{-1}_{\lambda}(\alpha)^\vee$ have $i \in I^{0,1}_{\lambda}$. This implies that~$w_{\lambda}(\alpha_i)$ is negative for some $i \in I^{0,1}_{\lambda}$. But then $s_i$ would be a descent of $w_{\lambda}$, and hence $w_{\lambda}$ cannot be the minimal length element of $w_{\lambda}W_{I^{0,1}_{\lambda}}$.
If $\lambda \in P$ with $\lambda_{\mathrm{dom}}=\mu$ satisfies $\langle\lambda,\alpha^\vee\rangle\neq-1$ for all $\alpha \in \Phi^{+}$, then we have seen in Proposition~\ref{prop:posparaboliccosets} that $w_{\lambda}$ has no descents $s_i$ with $i \in I^{0,1}_{\mu}$ and hence indeed~$w_{\lambda} \in W^{I^{0,1}_{\mu}}$. On the other hand, since $W_{I^{0}_{\mu}} \subseteq W_{I^{0,1}_{\mu}}$, the cosets of $W_{I^{0,1}_{\mu}}$ are unions of cosets of $W_{I^{0}_{\mu}}$ and hence the minimal length element of any coset of~$W_{I^{0,1}_{\mu}}$ must be of the form $w_{\lambda}$ for some $\lambda \in P$ with $\lambda_{\mathrm{dom}} = \mu$.
\end{proof}
\begin{lemma} \label{lem:symsinks}
For any $\mathbf{k} \in \mathbb{N}[\Phi]^W$, the sinks of $\Gamma_{\mathrm{sym},\mathbf{k}}$ are
\[\{\eta_{\mathbf{k}}(\lambda)\colon \lambda \in P, \langle\lambda,\alpha^\vee\rangle \neq -1 \textrm{ for all $\alpha \in \Phi^{+}$}\}.\]
\end{lemma}
\begin{proof}
First suppose that $\lambda \in P$ satisfies $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. Let $\alpha \in \Phi^{+}$. If~$\alpha \in w_{\lambda}(\Phi_{I^{0,1}_{\lambda}})$, then $\langle\eta_{\mathbf{k}}(\lambda),\alpha^\vee\rangle = \langle\lambda_{\mathrm{dom}} + \rho_{\mathbf{k}},w_{\lambda}^{-1}(\alpha)^\vee\rangle \geq \mathbf{k}(\alpha)$ since $w_{\lambda}^{-1}(\alpha) \in \Phi^{+}$ by Proposition~\ref{prop:posparaboliccosets}. So now consider $\alpha \notin w_{\lambda}(\Phi_{I^{0,1}_{\lambda}})$. Then $w_{\lambda}^{-1}(\alpha)$ may be positive or negative, but $|\langle\lambda_{\mathrm{dom}},w_{\lambda}^{-1}(\alpha)^\vee\rangle| \geq 2$ (because $\lambda_{\mathrm{dom}}$ has an $\omega_i$ coefficient of at least~$2$ for some $i \notin I^{0,1}_{\lambda}$ such that $\alpha_i^\vee$ appears in the expansion of $\pm w_{\lambda}^{-1}(\alpha)^\vee$). Hence
\[|\langle\eta_{\mathbf{k}}(\lambda),\alpha^\vee\rangle| = |\langle\lambda_{\mathrm{dom}} + \rho_{\mathbf{k}},w_{\lambda}^{-1}(\alpha)^\vee\rangle| \geq \mathbf{k}(\alpha)+2,\]
which means that $\langle\eta_{\mathbf{k}}(\lambda),\alpha^\vee\rangle \notin [-\mathbf{k}(\alpha)-1,\mathbf{k}(\alpha)-1]$. Thus $\eta_{\mathbf{k}}(\lambda)$ is a sink of~$\Gamma_{\mathrm{sym},\mathbf{k}}$.
Now suppose $\mu$ is a sink of $\Gamma_{\mathrm{sym},\mathbf{k}}$. Since $\langle\mu,\alpha^\vee\rangle \notin [-\mathbf{k}(\alpha)-1,\mathbf{k}(\alpha)-1]$ for~$\alpha \in \Phi^{+}$, in particular $|\langle\mu,\alpha^\vee\rangle| \geq \mathbf{k}(\alpha)$ for all $\alpha\in \Phi^{+}$. This means~$\langle\mu_{\mathrm{dom}},\alpha^\vee\rangle \geq \mathbf{k}(\alpha)$ for all~$\alpha \in \Phi^{+}$. Hence $\mu_{\mathrm{dom}} = \mu' + \rho_{\mathbf{k}}$ for some dominant $\mu' \in P_{\geq 0}$. Suppose to the contrary that $w_{\mu}$ is not the minimal length element of $w_{\mu}W_{I^{0,1}_{\mu'}}$. Then there exists a descent $s_i$ of~$w_{\mu}$ with~$i \in I^{0,1}_{\mu'}$. But then
\[\langle\mu,-w_{\mu}(\alpha_i)^\vee\rangle = \langle\mu_{\mathrm{dom}},-\alpha_i^\vee\rangle = -\langle\mu',\alpha_i^\vee\rangle -\langle\rho_{\mathbf{k}},\alpha_i^\vee\rangle \geq -\mathbf{k}(\alpha_i)-1,\]
and also $\langle\mu,-w_{\mu}(\alpha_i)^\vee\rangle =-\langle \mu_{\mathrm{dom}},\alpha_i^\vee\rangle \leq 0$. This would imply that $\mu$ is not a sink of~$\Gamma_{\mathrm{sym},\mathbf{k}}$, since $-w_{\mu}(\alpha_i) \in \Phi^{+}$. So $w_{\mu}$ must be the minimal length element of~$w_{\mu}W_{I^{0,1}_{\mu'}}$. Thanks to Proposition~\ref{prop:sinksminlen}, this means $w_{\mu} = w_{\lambda}$ for some $\lambda \in P$ with $\lambda_{\mathrm{dom}} = \mu'$ and~$\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. Moreover, $\mu = w_{\mu}(\mu_{\mathrm{dom}}) = \lambda + w_{\lambda}(\rho_{\mathbf{k}}) = \eta_{\mathbf{k}}(\lambda)$, as claimed.
\end{proof}
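Lemma~\ref{lem:symsinks} can also be verified by brute force in small rank. The following self-contained Python sketch (ours, for illustration only; it re-implements $\eta_{\mathbf{k}}$ in type $A_2$ in fundamental-weight coordinates) checks, on a finite window of the weight lattice with $\mathbf{k}=1$, that the sinks are exactly the $\eta_{\mathbf{k}}$-images of the weights $\lambda$ with $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha\in\Phi^+$.

```python
# Type A2 in fundamental-weight coordinates.
ALPHA = [(2, -1), (-1, 2)]            # simple roots (rows of the Cartan matrix)
COROOTS = [(1, 0), (0, 1), (1, 1)]    # pairings with alpha_1^vee, alpha_2^vee, theta^vee

def reflect(i, lam):
    c = lam[i]
    return tuple(x - c * a for x, a in zip(lam, ALPHA[i]))

def eta(lam, k=1):
    word, mu = [], lam
    while any(c < 0 for c in mu):     # greedy dominance algorithm computing w_lambda
        i = next(i for i, c in enumerate(mu) if c < 0)
        mu = reflect(i, mu)
        word.append(i)
    v = (k, k)                        # rho_k
    for i in reversed(word):
        v = reflect(i, v)
    return tuple(x + y for x, y in zip(lam, v))

def pair(mu, cv):
    return cv[0] * mu[0] + cv[1] * mu[1]

def is_sink(mu, k=1):                 # no alpha with <mu, alpha^vee> in [-k-1, k-1]
    return all(pair(mu, cv) not in range(-k - 1, k) for cv in COROOTS)

def admissible(lam):                  # <lam, alpha^vee> != -1 for all positive alpha
    return all(pair(lam, cv) != -1 for cv in COROOTS)
```

Since $\eta_1$ moves each coordinate by at most $2$, any $\lambda$ whose image lands in the window $[-5,5]^2$ lies in the larger box $[-8,8]^2$, so the comparison below is exact on the window.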
\section{Traverse lengths of permutohedra}
Our goal will now be to describe the connected components of $\Gamma_{\mathrm{sym},\mathbf{k}}$, with the eventual aim of establishing confluence of the symmetric interval-firing process. (By \emph{connected component} of a directed graph, we mean a connected component of its underlying undirected graph.) We will show over the course of the next several sections that the connected components are contained in certain permutohedra; from this confluence will follow easily. First we need to discuss traverse lengths.
\begin{definition}
For a root $\alpha\in\Phi$, an \emph{$\alpha$-string} of length $\ell$ is a subset of $P$ of the form $\{\mu,\mu-\alpha,\mu-2\alpha,\dots,\mu-\ell\alpha\}$ for some weight $\mu \in P$. For a dominant weight $\lambda\in P_{\geq 0}$, an \emph{$\alpha$-traverse} in the discrete permutohedron $\Pi^Q(\lambda)$ is a maximal (as a set) $\alpha$-string that belongs to $\Pi^Q(\lambda)$. Concretely, it is an $\alpha$-string~$\{\mu,\mu-\alpha,\mu-2\alpha,\dots,\mu-\ell\alpha\}\subseteq \Pi^Q(\lambda)$ such that $\mu+\alpha,\, \mu-(\ell+1)\alpha\not\in \Pi^Q(\lambda)$. Finally, for a dominant weight $\lambda \in P_{\geq 0}$, the \emph{traverse length} $\mathbf{l}_\lambda \in \mathbb{Z}[\Phi]^W$ is given by
\[ \mathbf{l}_{\lambda}(\alpha) \coloneqq \textrm{the minimal length $\ell$ of an $\alpha$-traverse in } \Pi^Q(\lambda).\]
Clearly, by the $W$-symmetry of permutohedra, the traverse length is $W$-invariant and hence really does belong to $\mathbb{Z}[\Phi]^W$.
\end{definition}
\begin{lemma} \label{lem:traversesym}
For $\lambda\in P$ and $\alpha\in \Phi$, any $\alpha$-traverse $\{\mu,\mu-\alpha,\dots,\mu-\ell\alpha\} \subseteq \Pi^Q(\lambda)$ is symmetric with respect to the reflection $s_\alpha$, i.e., $s_\alpha(\mu - i\alpha) = \mu-(\ell-i)\alpha$ for all~$i=0,\dots,\ell$. Its length is $\ell=\langle\mu,\alpha^\vee\rangle$. In particular, $\langle\mu,\alpha^\vee\rangle\geq 0$.
\end{lemma}
\begin{proof}
By the $W$-symmetry of discrete permutohedra, we have $s_\alpha(\Pi^Q(\lambda)) = \Pi^Q(\lambda)$, which implies the first sentence.
The second sentence then follows from
\[\mu-\ell\alpha = s_\alpha(\mu) = \mu - \langle\mu,\alpha^\vee\rangle\,\alpha.\]
The last sentence is clear because the length $\ell$ must be nonnegative.
\end{proof}
Lemma~\ref{lem:traversesym} implies the following reformulation of the definition of~$\mathbf{l}_\lambda $.
\begin{cor} \label{cor:traverselenreform}
For $\lambda \in P$, the traverse length $\mathbf{l}_\lambda$ is given by
\[\mathbf{l}_{\lambda}(\alpha) = \mathrm{min}( \{\langle\mu,\alpha^\vee\rangle\colon \mu\in \Pi^Q(\lambda),\, \mu + \alpha\not\in\Pi^Q(\lambda)\}).\]
\end{cor}
Corollary~\ref{cor:traverselenreform} explains the connection of traverse length to interval-firing: we are going to prove that interval-firing processes get ``trapped'' inside of permutohedra because the traverse lengths of these permutohedra are large (and hence if $\mu$ is inside such a permutohedron but $\mu +\alpha$ is not, $\langle\mu,\alpha^\vee\rangle$ must be so large that it is outside the fireability interval of our process). To do this we need a formula for traverse length. In most cases, the traverse length of a permutohedron in a given direction~$\alpha$ is realized on some edge of the permutohedron in direction~$\alpha$. However, there are some strange exceptions to this general rule, for which we need the concept of ``funny'' weights.
\begin{definition} \label{def:funny}
If $\Phi$ is simply laced, then there are no funny weights. So suppose~$\Phi$ is not simply laced. Then there is a unique long simple root $\alpha_l$ and short simple root~$\alpha_s$ with $\langle\alpha_l,\alpha^\vee_s\rangle \neq 0$. We say the dominant weight $\lambda = \sum_{i=1}^{n}c_i\omega_i \in P_{\geq 0}$ is \emph{funny} if~$c_s = 0$ and $c_l \geq 1$ and $c_i \geq c_l$ for all $i$ such that $\alpha_i$ is long.
\end{definition}
\begin{example}
With the numbering of simple roots as in Figure~\ref{fig:dynkinclassification}, if~$\Phi = B_n$ then $\lambda= \sum_{i=1}^{n}c_i\omega_i \in P_{\geq 0}$ is funny if $c_1,\ldots,c_{n-2} \geq c_{n-1} \geq 1$ and $c_n = 0$. If~$\Phi=C_n$, then $\lambda$ is funny if $c_{n-1}=0$ and $c_n \geq 1$.
\end{example}
For a dominant weight $\lambda = \sum_{i=1}^{n}c_i\omega_i \in P_{\geq 0}$, define $\mathbf{m}_{\lambda}\in\mathbb{Z}[\Phi]^W$ by setting \[\mathbf{m}_{\lambda}(\alpha) \coloneqq \mathrm{min}(\{c_i\colon \alpha \in W(\alpha_i)\}).\]
\begin{thm} \label{thm:traverseformula}
For a dominant weight $\lambda \in P_{\geq 0}$, we have
\[\mathbf{l}_{\lambda}(\alpha) = \begin{cases}\mathbf{m}_{\lambda}(\alpha)-1 &\textrm{if $\alpha$ is long and $\lambda$ is funny},\\ \mathbf{m}_{\lambda}(\alpha) &\textrm{otherwise}. \end{cases}\]
\end{thm}
\begin{proof}
Let $\lambda = \sum_{i=1}^{n}c_i\omega_i \in P_{\geq 0}$. The $\alpha_i$-traverse $\{\lambda,\lambda-\alpha_i,\ldots,\lambda-\ell\alpha_i=s_i(\lambda)\}$, which is contained in the edge $[\lambda,s_i(\lambda)]$ of the permutohedron $\Pi(\lambda)$, has length equal to $\ell = \langle\lambda,\alpha_i^\vee\rangle = c_i$. By the $W$-symmetry of the traverse length (and because any root is $W$-conjugate to some simple root), it follows that $\mathbf{l}_{\lambda} \leq \mathbf{m}_{\lambda}$.
We will show that in most of the cases (except the case with long roots and funny weights) we actually have $\mathbf{l}_{\lambda} = \mathbf{m}_{\lambda}$. We need to show that the length of any $\alpha$-traverse in $\Pi^Q(\lambda)$ is greater than or equal to $\mathbf{m}_{\lambda}(\alpha)$, i.e., for $\mu \in \Pi^{Q}(\lambda)$ such that $\mu +\alpha \notin \Pi^Q(\lambda)$, we have $\langle\mu,\alpha^\vee\rangle \geq \mathbf{m}_{\lambda}(\alpha)$.
If $\mathbf{m}_{\lambda}(\alpha) = 0$, then we automatically get $\mathbf{l}_{\lambda}(\alpha) = \mathbf{m}_{\lambda}(\alpha)=0$, because $\mathbf{l}_{\lambda}(\alpha) \geq 0$. So let us assume that $\mathbf{m}_{\lambda}(\alpha) \geq 1$.
Let $\mu \in \Pi^Q(\lambda)$ be such that $\mu + \alpha \notin \Pi^Q(\lambda)$. Since $\mu +\alpha \in Q+\lambda$, we deduce that $\mu + \alpha \notin \Pi(\lambda)$. This means that the line segment $[\mu,\mu+\alpha]$ must ``exit'' the permutohedron $\Pi(\lambda)$ at some point $v \in V$, i.e., there exists a unique point $v = \mu+t\alpha$, where $t \in \mathbb{R}$, with $v \in \Pi(\lambda)$ but $\mu + q\alpha \notin \Pi(\lambda)$ for any $q > t$. We have $0 \leq t < 1$.
Let $F$ be the minimal (by inclusion) face of $\Pi(\lambda)$ that contains the point $v$. The minimal value of the linear form $\langle\cdot,\alpha^\vee\rangle$ on the face $F$ is attained at a vertex $\nu$ of $F$. By the $W$-symmetry of $\Pi(\lambda)$, we assume without loss of generality that this minimum is achieved at $\nu=\lambda$. So we have $\langle\lambda,\alpha^\vee\rangle \leq \langle v,\alpha^\vee\rangle$.
If $\lambda$ is strictly in the fundamental chamber, then any edge of $\Pi(\lambda)$ coming out of~$\lambda$ must be in the direction of a negative simple root. This is not true for general~$\lambda \in P_{\geq 0}$, but the edges of $\Pi(\lambda)$ coming out of $\lambda$ that are not in the direction of a negative simple root must immediately leave the dominant chamber. Hence if we let~$x \in V$ be some generic point in the interior of the face $F$ very close to $\lambda$, by acting by $W_{I^{0}_{\lambda}}$ we can transport $x$ to the dominant chamber while fixing $\lambda$. Thus, we may assume that the affine span of~$F$ is spanned by simple roots. So let $I \subseteq [n]$ be the minimal set of indices such that the face $F$ belongs to the affine subspace $\lambda + \mathrm{Span}_{\mathbb{R}}(\{\alpha_i\colon i \in I\})$.
Let $\alpha = \sum_{i=1}^{n}a_i\alpha_i$, where the $a_i$ are either all nonnegative or all nonpositive. Then we have~$\alpha^\vee = \sum_{i=1}^{n} \widetilde{a}_i\alpha^\vee_i$ where $\widetilde{a}_i = \frac{\langle\alpha_i,\alpha_i\rangle}{\langle\alpha,\alpha\rangle}a_i$. Note that these $\widetilde{a}_i$ are also integers.
Any root $\alpha$ is $W$-conjugate to at least one simple root that appears with nonzero coefficient in its expansion in terms of the simple roots. So there exists $j \in [n]$ such that $\alpha_j \in W(\alpha)$ and~$a_j = \widetilde{a}_j \neq 0$. We have $c_j \geq \mathbf{m}_{\lambda}(\alpha) \geq 1$.
We have $\lambda \in \Pi(\lambda)$ and $\lambda + \alpha \notin \Pi(\lambda)$. So $\langle\lambda,\alpha^\vee\rangle \geq 0$, because $\langle\lambda,\alpha^\vee\rangle$ is the length of the $\alpha$-traverse that starts at $\lambda$, which is always nonnegative. Therefore we have~$\langle\lambda,\alpha^\vee\rangle = \sum_{i=1}^{n} \widetilde{a}_i c_i \geq 0$; moreover, all nonzero terms in this expression have the same sign and at least one term $\widetilde{a}_jc_j$ is nonzero. It follows that $a_1,\ldots,a_n \geq 0$, i.e., $\alpha$ is a positive root.
We have $\mu = v - t\alpha = (\lambda - \sum_{i \in I}b_i\alpha_i) - t\alpha$ for real numbers $0 \leq t < 1$ and $b_i \geq 0$, $i \in I$. Thus $\langle\mu,\alpha^\vee\rangle = \langle v,\alpha^\vee\rangle - t\langle\alpha,\alpha^\vee\rangle = \langle v,\alpha^\vee\rangle - 2t \geq \langle\lambda,\alpha^\vee\rangle - 2t > \langle\lambda,\alpha^\vee\rangle - 2$. Moreover, since both $\langle\mu,\alpha^\vee\rangle$ and $\langle\lambda,\alpha^\vee\rangle-2$ are integers, and the first is strictly greater than the second, we get
\[ \langle\mu,\alpha^\vee\rangle \geq \langle\lambda,\alpha^\vee\rangle- 1 = \left(\sum_{i=1}^{n}\widetilde{a}_ic_i\right) - 1.\]
We already noted that the last expression involves at least one nonzero term $\widetilde{a}_jc_j$ such that $\alpha_j \in W(\alpha)$. So $\widetilde{a}_jc_j\geq c_j \geq \mathbf{m}_{\lambda}(\alpha)$ and thus $\langle\mu,\alpha^\vee\rangle \geq \mathbf{m}_{\lambda}(\alpha) - 1$.
We need to prove the slightly stronger inequality $\langle\mu,\alpha^\vee\rangle \geq \mathbf{m}_{\lambda}(\alpha)$.
If $\sum_{\alpha_i \in W(\alpha)} \widetilde{a}_i \geq 2$, we get
\[ \langle\mu,\alpha^\vee\rangle \geq \sum_{i=1}^{n}\widetilde{a}_ic_i - 1 \geq \sum_{\alpha_i \in W(\alpha)} \widetilde{a}_ic_i - 1 \geq 2\mathbf{m}_{\lambda}(\alpha)-1 \geq \mathbf{m}_{\lambda}(\alpha),\]
as needed. So we now assume that $\sum_{\alpha_i \in W(\alpha)} \widetilde{a}_i =1$. Note that this means $a_j = \widetilde{a}_j=1$.
If we had $c_j > \mathbf{m}_{\lambda}(\alpha)$, then we would get
\[\langle\mu,\alpha^\vee\rangle \geq \widetilde{a}_jc_j -1 \geq c_j-1 \geq \mathbf{m}_{\lambda}(\alpha)\]
and we would also be done. So we now assume that $c_j = \mathbf{m}_{\lambda}(\alpha)$.
Since $\alpha$ does not belong to the subspace spanned by the $\alpha_i$ for $i \in I$, there is $r\in[n]$ with $r\notin I$ such that $a_r \geq 1$.
If $a_r=1$, then, from the fact that $\lambda-\mu = (\sum_{i\in I}b_i\alpha_i) + t\alpha$ belongs to the root lattice $Q$ and thus is an \emph{integer} linear combination of the simple roots, we deduce that in fact~$t \in \mathbb{Z}$ and thus $t = 0$. In this case we get $\langle\mu,\alpha^\vee\rangle \geq \langle\lambda,\alpha^\vee\rangle \geq a_jc_j \geq \mathbf{m}_{\lambda}(\alpha)$, as needed. So we now assume that $a_r \geq 2$.
Then note that $\alpha_r \notin W(\alpha)$, because we assumed~$\sum_{\alpha_i \in W(\alpha)} \widetilde{a}_i = \sum_{\alpha_i \in W(\alpha)} a_i = 1$.
If there is $q \in [n]$ such that $\alpha_q \notin W(\alpha)$, $\widetilde{a}_q \geq 1$ and $c_q \geq 1$, we have
\[ \langle\mu,\alpha^\vee\rangle \geq \left( \sum_{i=1}^{n} \widetilde{a}_ic_i \right) - 1 \geq \widetilde{a}_jc_j + \widetilde{a}_qc_q - 1 \geq \widetilde{a}_jc_j \geq \mathbf{m}_{\lambda}(\alpha),\]
as needed.
Thus, the only possibility which is not covered by the above discussion is when:
\begin{enumerate}[(1)]
\item \label{cond:funny1} There is exactly one nonzero term $a_j\alpha_j$ in the expansion $\alpha = \sum_{i=1}^{n}a_i\alpha_i$ such that $\alpha_j \in W(\alpha)$. For this term, $a_j=1$ and $c_j = \mathbf{m}_{\lambda}(\alpha) \geq 1$.
\item \label{cond:funny2} There is at least one more nonzero term $a_i\alpha_i$ in that expansion. For all such terms, $\alpha_i \notin W(\alpha)$, $a_i \geq 2$, and $c_i = 0$.
\end{enumerate}
We claim that these conditions imply that $\alpha$ is a long root. This is easy to check by hand for $\Phi=B_n$, $C_n$, or $G_2$. One does not need to check type $F_4$ separately, because in this case there are two long simple roots and two short simple roots, but the expansion of $\alpha$ involves either only one short simple root or only one long simple root. We leave it as an exercise for the reader to find a uniform root-theoretic argument for the fact that conditions~\eqref{cond:funny1} and~\eqref{cond:funny2} above imply that $\alpha$ is long.
Also, we claim that conditions~\eqref{cond:funny1} and~\eqref{cond:funny2} above imply that $\lambda$ is a funny weight. Indeed, it is a well-known fact that for any root $\alpha = \sum_{i=1}^{n}a_i\alpha_i$, the set of~$i \in [n]$ for which $a_i \neq 0$ must be a connected subset of the Dynkin diagram (see for instance~\cite[Chapter~VI, \S1.6, Corollary~3]{bourbaki2002lie}). Hence the $\alpha_j$ in condition~\eqref{cond:funny1} must be the long simple root $\alpha_l$, and one of the $\alpha_i$ in condition~\eqref{cond:funny2} must be the short simple root $\alpha_s$ (with notation as in Definition~\ref{def:funny}). Note also that $\mathbf{m}_{\lambda}(\alpha)=c_l$ forces $c_i \geq c_l$ for all $i$ such that $\alpha_i$ is long.
In this ``long and funny'' case we can only get the (slightly) weaker inequality:
\[\langle\mu,\alpha^\vee\rangle \geq \mathbf{m}_{\lambda}(\alpha)-1.\]
It remains to show that this last inequality is tight in this ``long and funny'' case. Let us concentrate on the $2$-dimensional face of the permutohedron $\Pi(\lambda)$ contained in the affine subspace $\lambda + \mathrm{Span}_{\mathbb{R}}(\{\alpha_l,\alpha_s\})$ (with notation as in Definition~\ref{def:funny}).
This face is equivalent to the $2$-dimensional $W'$-permutohedron $\Pi_{W'}(\lambda')$ corresponding to the sub-root system $\Phi'$ of rank $2$ with simple roots $\alpha_l$ and $\alpha_s$, and fundamental weights $\omega'_1$ (corresponding to $\alpha_l$) and $\omega'_2$ (corresponding to $\alpha_s$), where $W'$ is the Weyl group of $\Phi'$, and $\lambda'=\mathbf{m}_{\lambda}(\alpha)\omega'_1 + 0\cdot\omega'_2$.
The $2$-dimensional root system $\Phi'$ must be equal to either $B_2$ or $G_2$. In this situation there in fact is a $\mu \in \Pi_{W'}^Q(\lambda')$ with $\mu+\alpha \notin \Pi_{W'}^Q(\lambda')$ for some long $\alpha \in \Phi'$ such that $\langle\mu,\alpha^\vee\rangle = \mathbf{m}_{\lambda}(\alpha)-1$: indeed, we can take $\alpha\coloneqq \alpha_l$ and $\mu \coloneqq (\mathbf{m}_{\lambda}(\alpha)-1)\omega'_1$ for $B_2$ or $\alpha\coloneqq \alpha_l$ and $\mu \coloneqq (\mathbf{m}_{\lambda}(\alpha)-1)\omega'_1 + \omega'_2$ for $G_2$.
This finishes the proof of the theorem.
\end{proof}
\section{The permutohedron non-escaping lemma}
We need to place some restrictions on our parameter $\mathbf{k}$ so that funny weights do not occur in our analysis of the relevant permutohedra traverse lengths. For this we have the notion of ``goodness.''
\begin{definition}
If $\Phi$ is simply laced, then every $\mathbf{k} \in \mathbb{N}[\Phi]^W$ is good. So suppose~$\Phi$ is not simply laced and let $\mathbf{k} \in \mathbb{N}[\Phi]^W$. Then there exist $k_s, k_l \in \mathbb{Z}$ with~$\mathbf{k}(\alpha) = k_s$ if~$\alpha$ is short and~$\mathbf{k}(\alpha) = k_l$ if~$\alpha$ is long. We say $\mathbf{k}$ is \emph{good} if $k_s = 0 \Rightarrow k_l=0$. Note in particular that if $\mathbf{k}=k\geq 0$ is constant, then it is good.
\end{definition}
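The goodness condition is a pure case distinction on the pair $(k_s,k_l)$, so it can be recorded as a one-line predicate. The following sketch (our own encoding, not from the text) does so and checks the cases used below:

```python
def is_good(k_s, k_l, simply_laced=False):
    """Goodness of k in N[Phi]^W: automatic when Phi is simply laced;
    otherwise k is good unless k_s = 0 while k_l > 0."""
    return simply_laced or not (k_s == 0 and k_l != 0)

# Any constant k >= 0 is good, as noted in the definition:
assert all(is_good(k, k) for k in range(5))
# The parameter with k_s = 0, k_l = 1 (used as a counterexample below) is not good:
assert not is_good(0, 1)
assert is_good(1, 0) and is_good(0, 0)
```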
Now we can prove the following permutohedron non-escaping lemma, which says that certain discrete permutohedra ``trap'' the symmetric interval-firing process inside of them.
\begin{lemma} \label{lem:permtrap}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good and let $\Gamma \coloneqq \Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$. Let $\lambda \in P_{\geq 0}$. Then there is no directed edge $(\mu,\mu')$ in $\Gamma$ with $\mu \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ and $\mu' \notin \Pi^Q(\eta_{\mathbf{k}}(\lambda))$.
\end{lemma}
\begin{proof}
First suppose $\Phi$ is not simply laced and $k_s=0$. Then also $k_l =0$, i.e., $\mathbf{k} = 0$, since $\mathbf{k}$ is good. Hence $\rho_{\mathbf{k}} = 0$, so $\eta_{\mathbf{k}}(\lambda) = \lambda$. If $\mu \in \Pi^Q(\lambda)$ but $\mu + \alpha \notin \Pi^Q(\lambda)$, then by Corollary~\ref{cor:traverselenreform} we have~$\langle\mu,\alpha^\vee\rangle \geq \mathbf{l}_{\lambda}(\alpha)$. Note that by definition $\mathbf{l}_{\lambda}(\alpha) \geq 0$. But this means $\langle\mu,\alpha^\vee\rangle \geq \mathbf{k}(\alpha)$, so indeed $(\mu,\mu+\alpha)$ cannot be a directed edge of $\Gamma$.
Now suppose either $\Phi$ is simply laced or $\Phi$ is not simply laced but $k_s \geq 1$. Then note that $\rho_{\mathbf{k}}$ is not funny. Hence by Theorem~\ref{thm:traverseformula} we conclude that~$\mathbf{l}_{\eta_{\mathbf{k}}(\lambda)}(\alpha) \geq \mathbf{k}(\alpha)$. If~$\mu \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ but $\mu + \alpha \notin \Pi^Q(\eta_{\mathbf{k}}(\lambda))$, then $\langle\mu,\alpha^\vee\rangle \geq \mathbf{l}_{\eta_{\mathbf{k}}(\lambda)}(\alpha)$ by Corollary~\ref{cor:traverselenreform}. This means $\langle\mu,\alpha^\vee\rangle \geq \mathbf{k}(\alpha)$, so indeed $(\mu,\mu+\alpha)$ cannot be a directed edge of $\Gamma$.
\end{proof}
\begin{figure}
\caption{A portion of the graph $\Gamma_{\mathrm{sym},\mathbf{k}}$ for $\Phi=B_2$ with $k_s=0$ and $k_l=1$; only elements of the root lattice $Q$ are shown, and the permutohedron $\Pi(\rho_{\mathbf{k}})=\Pi(\omega_1)$ is shown in red.} \label{fig:notgood}
\end{figure}
\begin{example} \label{ex:notgood}
Lemma~\ref{lem:permtrap} is false in general without the goodness assumption. Indeed, suppose $\Phi=B_2$ and $\mathbf{k} \in \mathbb{N}[\Phi]^W$ is given by $k_s\coloneqq 0$ and $k_l\coloneqq 1$. Then Figure~\ref{fig:notgood} depicts (a portion of) the graph $\Gamma_{\mathrm{sym},\mathbf{k}}$. In this picture we only show elements of the root lattice~$Q$. The permutohedron~$\Pi(\rho_{\mathbf{k}}) = \Pi(\omega_1)$ is shown in red. Observe that although $0 \in \Pi^Q(\rho_{\mathbf{k}})$ and~$\alpha_1 \notin \Pi^Q(\rho_{\mathbf{k}})$, we have an edge $(0,\alpha_1)$ in $\Gamma_{\mathrm{sym},\mathbf{k}}$.
\end{example}
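This counterexample is small enough to verify numerically. The sketch below (our own coordinates: the standard realization of $B_2$ with long simple root $\alpha_1 = e_1 - e_2$ and short simple root $\alpha_2 = e_2$, so that $\omega_1 = e_1$ and $\Pi(\omega_1)$ is the cross-polytope $|x|+|y| \leq 1$) confirms that $0 \in \Pi^Q(\rho_{\mathbf{k}})$, that $\alpha_1 \notin \Pi^Q(\rho_{\mathbf{k}})$, and that the symmetric interval-firing condition $\langle\mu,\alpha^\vee\rangle \in [-\mathbf{k}(\alpha)-1,\mathbf{k}(\alpha)-1]$ nevertheless yields the escaping edge $(0,\alpha_1)$:

```python
# Numerical check of the counterexample: B2 realized in R^2 with
# alpha_1 = e1 - e2 (long) and alpha_2 = e2 (short), so omega_1 = e1.

def pairing(mu, alpha):
    # <mu, alpha^vee> = 2 (mu, alpha) / (alpha, alpha); integral on weights
    dot = mu[0] * alpha[0] + mu[1] * alpha[1]
    return 2 * dot // (alpha[0] ** 2 + alpha[1] ** 2)

def sym_edge(mu, alpha, k_alpha):
    # symmetric interval-firing edge (mu, mu + alpha) exists iff
    # <mu, alpha^vee> lies in [-k(alpha) - 1, k(alpha) - 1]
    return -k_alpha - 1 <= pairing(mu, alpha) <= k_alpha - 1

def in_perm_omega1(mu):
    # membership in Pi(omega_1) = conv{+-e1, +-e2}, i.e. |x| + |y| <= 1
    return abs(mu[0]) + abs(mu[1]) <= 1

alpha_1 = (1, -1)                    # long, so k(alpha_1) = k_l = 1
assert in_perm_omega1((0, 0))        # 0 lies in Pi^Q(rho_k) = Pi^Q(omega_1)
assert not in_perm_omega1(alpha_1)   # alpha_1 escapes it
assert sym_edge((0, 0), alpha_1, 1)  # yet (0, alpha_1) is an edge
```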
We also need a ``lower-dimensional'' version of the permutohedron non-escaping lemma that says that these interval-firing processes get trapped inside of permutohedra of parabolic subgroups of $W$. This is established in the following lemma and corollary.
\begin{lemma} \label{lem:permtrapsmall}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ and $\Gamma \coloneqq \Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$. Let~$\lambda \in P_{\geq 0}$. Then if $(\mu,\mu+\alpha)$ is a directed edge in $\Gamma$ with~$\mu \in \Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$, we have~$\alpha \in \Phi_{I^{0,1}_\lambda}$.
\end{lemma}
\begin{proof}
Write $\eta_{\mathbf{k}}(\lambda) = \sum_{i=1}^{n}c_i\omega_i$, and set $I \coloneqq I^{0,1}_\lambda$. Assume to the contrary that there exists an edge $(\mu,\mu+\alpha)$ in $\Gamma$ such that $\mu \in \Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$ but $\alpha$ does not belong to $\mathrm{Span}_{\mathbb{R}}(\{\alpha_i\colon i \in I\})$.
Note that $\alpha$ is a root (positive or negative) with $-\mathbf{k}(\alpha)-1\leq \langle\mu,\alpha^\vee\rangle \leq \mathbf{k}(\alpha)+1$. Let $\beta = \pm\alpha \in \Phi^{+}$ be the positive root. Then~$\langle\mu,\beta^\vee\rangle \leq \mathbf{k}(\alpha)+1 \leq \mathbf{k}(\beta) + 1$.
Since the point $\mu$ belongs to $\Pi_{I^{0,1}_{\lambda}}(\eta_{\mathbf{k}}(\lambda))$, we deduce that the same inequality $\langle\nu,\beta^\vee\rangle \leq \mathbf{k}(\beta)+1$ holds for some vertex $\nu$ of $\Pi_{I^{0,1}_{\lambda}}(\eta_{\mathbf{k}}(\lambda))$. We have $\nu= w(\eta_{\mathbf{k}}(\lambda))$ where $w \in W_{I^{0,1}_{\lambda}}$. Hence we have that~$\langle w(\eta_{\mathbf{k}}(\lambda)),\beta^\vee\rangle = \langle\eta_{\mathbf{k}}(\lambda),w^{-1}(\beta)^\vee\rangle \leq \mathbf{k}(\beta)+1$ for some~$w \in W_{I^{0,1}_{\lambda}}$.
The action of the parabolic subgroup $W_{I^{0,1}_{\lambda}}$ on $\beta^\vee$ does not change the coefficients~$b_j$ of the expansion $\beta^\vee = \sum_{i=1}^{n}b_i\alpha_i^\vee$ for all $j \notin I$, and at least one of these coefficients~$b_j$ should be strictly positive (because $\beta^\vee$ is a positive coroot that does not belong to~$\mathrm{Span}_{\mathbb{R}}(\{\alpha_i\colon i \in I\})$). So the expansion $w^{-1}(\beta)^\vee = \sum_{i=1}^{n}b'_i\alpha_i^\vee$ contains some strictly positive coefficient, which means that $w^{-1}(\beta)^\vee$ is a positive coroot and thus we have~$b'_i \geq 0$ for all $i$.
Moreover, any coroot is $W$-conjugate to some simple coroot that appears in its expansion with nonzero coefficient. These observations mean that we can find $j\notin I$ such that $b'_j=b_j \geq 1$, and also (possibly the same) $i$ such that $b'_i\geq 1$ and $\alpha_i \in W(\alpha)$. Note that for this $i$ we have $\mathbf{k}(\alpha_i) = \mathbf{k}(w^{-1}(\beta)) = \mathbf{k}(\beta) = \mathbf{k}(\alpha)$.
If $i=j$, we get $\langle\eta_{\mathbf{k}}(\lambda),w^{-1}(\beta)^\vee\rangle \geq \langle\eta_{\mathbf{k}}(\lambda),b'_j\alpha_j^\vee\rangle \geq \langle\eta_{\mathbf{k}}(\lambda),\alpha_j^\vee\rangle = c_j \geq \mathbf{k}(\alpha_j)+2 = \mathbf{k}(\alpha)+2$ (because for $j \notin I$, $c_j\geq \mathbf{k}(\alpha_j)+2$). But this contradicts~$\langle\eta_{\mathbf{k}}(\lambda),w^{-1}(\beta)^\vee\rangle \leq \mathbf{k}(\alpha)+1$.
On the other hand, if $i \neq j$, we get
\begin{align*}
\langle\eta_{\mathbf{k}}(\lambda),w^{-1}(\beta)^\vee\rangle \geq \langle\eta_{\mathbf{k}}(\lambda),b'_i\alpha_i^\vee+b'_j\alpha_j^\vee\rangle &\geq \langle\eta_{\mathbf{k}}(\lambda),\alpha_i^\vee\rangle + \langle\eta_{\mathbf{k}}(\lambda),\alpha_j^\vee\rangle = c_i + c_j \\
&\geq \mathbf{k}(\alpha_i) + (\mathbf{k}(\alpha_j) + 2) \geq \mathbf{k}(\alpha_i) + 2 = \mathbf{k}(\alpha)+2.
\end{align*}
Again, we get a contradiction.
\end{proof}
\begin{cor} \label{cor:permtrap}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good and $\Gamma \coloneqq \Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$. Let~$\lambda \in P_{\geq 0}$. Then there is no directed edge $(\mu,\mu')$ in $\Gamma$ with $\mu \in \Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$ and~$\mu' \notin \Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$.
\end{cor}
\begin{proof}
This follows by combining Lemmas~\ref{lem:permtrap} and~\ref{lem:permtrapsmall}: if we have a directed edge $(\mu,\mu+\alpha)$ with $\mu \in \Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$, then $\alpha \in \Phi_{I^{0,1}_\lambda}$ by Lemma~\ref{lem:permtrapsmall}; hence this firing move is equivalent (via projection) to the same move for the sub-root system $ \Phi_{I^{0,1}_\lambda}$; so by Lemma~\ref{lem:permtrap} applied to that sub-root system, we have $\mu+\alpha \in \Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$.
\end{proof}
\section{Confluence of symmetric interval-firing} \label{sec:sym_conf}
Now, as promised, we are ready to show that connected components of $\Gamma_{\mathrm{sym},\mathbf{k}}$ are contained inside permutohedra.
\begin{thm} \label{thm:permcc}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good. Let $\lambda \in P$ with $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. Let $Y_{\lambda}\coloneqq \{\mu\in P\colon \mu \rightarrow^{*}_{\mathrm{sym},\mathbf{k}} \eta_{\mathbf{k}}(\lambda) \}$ be the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$. Then $Y_{\lambda}$ is contained in $w_{\lambda}\Pi_{I^{0,1}_{\lambda}}^Q(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$.
\end{thm}
\begin{proof}
First suppose that $\lambda$ is dominant. By Corollary~\ref{cor:permtrap} there is no edge $(\mu,\mu')$ in~$\Gamma_{\mathrm{sym},\mathbf{k}}$ where one of $\mu,\mu'$ is in $\Pi_{I^{0,1}_\lambda}^Q(\eta_{\mathbf{k}}(\lambda))$ and the other is not, which implies that~$Y_{\lambda}$ is contained in $\Pi_{I^{0,1}_{\lambda}}^Q(\eta_{\mathbf{k}}(\lambda))$. Now suppose $\lambda$ is not dominant. By the preceding argument, the result is true for~$\lambda_{\mathrm{dom}}$. But then we have $Y_{\lambda} = w_{\lambda} Y_{\lambda_{\mathrm{dom}}}$ by the $W$-symmetry of~$\Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$, i.e., by Theorem~\ref{thm:symmetry}.
\end{proof}
And now we can prove half of Theorem~\ref{thm:confluence_intro}.
\begin{cor} \label{cor:symconfluence}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good. Then $\rightarrow_{\mathrm{sym},\mathbf{k}}$ is confluent (and terminating).
\end{cor}
\begin{proof}
We already saw in Proposition~\ref{prop:interval_firing_terminating} that $\rightarrow_{\mathrm{sym},\mathbf{k}}$ is terminating. Thus, every connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ contains at least one sink, and $\rightarrow_{\mathrm{sym},\mathbf{k}}$ is confluent as long as every connected component contains a unique sink.
So suppose that two sinks belong to the same connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$. By Lemma~\ref{lem:symsinks}, we know that these sinks must be of the form $\eta_{\mathbf{k}}(\lambda)$ and $\eta_{\mathbf{k}}(\lambda')$ for~$\lambda,\lambda'\in P$ with $\langle\lambda,\alpha^\vee\rangle\neq -1$ and~$\langle\lambda',\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$.
By Theorem~\ref{thm:permcc}, $\eta_{\mathbf{k}}(\lambda) \in w_{\lambda'}\Pi_{I^{0,1}_{\lambda'}}^Q(\eta_{\mathbf{k}}(\lambda'_{\mathrm{dom}}))$ and vice-versa. In particular we have that $\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda'_{\mathrm{dom}}))$ and $\eta_{\mathbf{k}}(\lambda'_{\mathrm{dom}}) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$. Proposition~\ref{prop:perm_containment} then says that~$\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}) - \eta_{\mathbf{k}}(\lambda'_{\mathrm{dom}})$ and $\eta_{\mathbf{k}}(\lambda'_{\mathrm{dom}}) - \eta_{\mathbf{k}}(\lambda_{\mathrm{dom}})$ are both in $Q_{\geq 0}$, which is possible only if $\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}) = \eta_{\mathbf{k}}(\lambda'_{\mathrm{dom}})$. That is, thanks to the injectivity of~$\eta_{\mathbf{k}}$ established in Proposition~\ref{prop:etafacts}, we must have $\lambda_{\mathrm{dom}} = \lambda'_{\mathrm{dom}}$.
But then the fact that $\eta_{\mathbf{k}}(\lambda) \in w_{\lambda'}\Pi_{I^{0,1}_{\lambda}}^Q(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$ means that $\eta_{\mathbf{k}}(\lambda)$ is a vertex of $w_{\lambda'}\Pi_{I^{0,1}_{\lambda}}(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$, i.e., $\eta_{\mathbf{k}}(\lambda) = w_{\lambda'}w(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$ for some $w \in W_{I^{0,1}_{\lambda}}$. Note that~$(w_{\lambda'}w)^{-1}(\eta_{\mathbf{k}}(\lambda))$ is dominant. We have seen in the proof of Proposition~\ref{prop:etafacts} that this means $(w_{\lambda'}w)^{-1}(\lambda)$ is dominant as well, or in other words, that $w_{\lambda'}w = w_{\lambda}w'$ for some $w' \in W_{I^{0}_{\lambda}}$. This shows that~$w_{\lambda} \in w_{\lambda'}W_{I^{0,1}_{\lambda}}$. By Proposition~\ref{prop:sinksminlen}, $w_{\lambda}$ and~$w_{\lambda'}$ must both be the minimal length elements of the cosets of $W_{I^{0,1}_{\lambda}}$ they belong to. So~$w_{\lambda} = w_{\lambda'}$. That~$\lambda_{\mathrm{dom}} = \lambda'_{\mathrm{dom}}$ and $w_{\lambda} = w_{\lambda'}$ implies that~$\lambda = \lambda'$, and consequently that~$\eta_{\mathbf{k}}(\lambda)=\eta_{\mathbf{k}}(\lambda')$, as required.
\end{proof}
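For small root systems this corollary can also be checked exhaustively by machine. The sketch below (our own encoding: $\Phi=A_2$ with constant $\mathbf{k}=k=1$, weights stored by their coefficients in the fundamental-weight basis, and the symmetric firing rule $\lambda \rightarrow \lambda+\alpha$ for $\alpha \in \Phi^{+}$ with $\langle\lambda,\alpha^\vee\rangle \in [-k-1,k-1]$) verifies that every starting weight in a box reaches a unique sink:

```python
# Exhaustive confluence check for symmetric interval-firing in A2, k = 1.
# A weight lambda = c1*omega_1 + c2*omega_2 is stored as (c1, c2); adding a
# positive root adds its vector of omega-coordinates, and <lambda, alpha^vee>
# is the corresponding linear functional in (c1, c2).
K = 1
ROOTS = [((2, -1), (1, 0)),   # alpha_1: omega-coords, pairing functional
         ((-1, 2), (0, 1)),   # alpha_2
         ((1, 1), (1, 1))]    # alpha_1 + alpha_2 (coroot a1^vee + a2^vee)

def successors(c):
    out = []
    for (r, f) in ROOTS:
        if -K - 1 <= f[0] * c[0] + f[1] * c[1] <= K - 1:
            out.append((c[0] + r[0], c[1] + r[1]))
    return out

def sinks_reachable(c):
    # The process is terminating, so depth-first search over the (finite,
    # acyclic) set of reachable weights collects all reachable sinks.
    seen, stack, sinks = set(), [c], set()
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        nxt = successors(x)
        if nxt:
            stack.extend(nxt)
        else:
            sinks.add(x)
    return sinks

# Confluence: every start point reaches exactly one sink.
assert all(len(sinks_reachable((a, b))) == 1
           for a in range(-3, 4) for b in range(-3, 4))
# The origin stabilizes at eta_k(0) = rho_k = omega_1 + omega_2:
assert sinks_reachable((0, 0)) == {(1, 1)}
```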
\begin{remark}
As far as we know, Theorem~\ref{thm:permcc} and Corollary~\ref{cor:symconfluence} may be true even in the case where $\mathbf{k}$ is not good. Indeed, it appears that $\rightarrow_{\mathrm{sym},\mathbf{k}}$ is confluent for all~$\mathbf{k} \in \mathbb{N}[\Phi]^W$ and to prove this it would be sufficient, thanks to the diamond lemma (Lemma~\ref{lem:diamond}), to prove it for root systems of rank~$2$, of which there are only four: $A_1 \oplus A_1$, $A_2$, $B_2$, $G_2$. All~$\mathbf{k}$ are good for simply laced root systems, so in fact one would need only check $B_2$ and $G_2$.
\end{remark}
\section{Full-dimensional components, saturated components, and Cartan matrix chip-firing as a limit} \label{sec:cartanmatrix}
Let $\mathbf{k}\in\mathbb{N}[\Phi]^W$ be good. For $\lambda \in P$, recall the notation $Y_{\lambda}\coloneqq \{\mu\in P\colon \mu \rightarrow^{*}_{\mathrm{sym},\mathbf{k}} \eta_{\mathbf{k}}(\lambda) \}$ for the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$ from the last section. By the results of the last section, all these components are distinct. In this section, we take a moment to highlight certain special components~$Y_{\lambda}$, namely:
\begin{itemize}
\item those which are \emph{full-dimensional} in the sense that their affine hulls are the whole vector space: $\mathrm{AffineHull}\, Y_{\lambda}=V$;
\item those which are full-dimensional and \emph{saturated} in the sense that they contain all lattice points in their convex hulls: $Y_{\lambda} = (\mathrm{ConvexHull}\, Y_{\lambda})\cap(Q+\eta_{\mathbf{k}}(\lambda))$.
\end{itemize}
For the full-dimensional components: by a result we will prove later (Corollary~\ref{cor:symccsweylorbit}), we have that $Y_{\lambda}$ always contains $W(\eta_{\mathbf{k}}(\lambda))$ for $\lambda \in P_{\geq 0}$ with $I^{0,1}_{\lambda} = [n]$. Hence by Theorem~\ref{thm:permcc} we see that the full-dimensional connected components of~$\Gamma_{\mathrm{sym},\mathbf{k}}$ are exactly $Y_{\lambda}$ for $\lambda \in P_{\geq 0}$ with $I^{0,1}_{\lambda}=[n]$, i.e., for those~$\lambda = \sum_{i=1}^{n}c_i\omega_i \in P$ with~$c_i\in \{0,1\}$ for all $i\in [n]$. Clearly there are $2^n$ such full-dimensional components. (Strictly speaking we do not have $\mathrm{AffineHull}\, Y_{\lambda}=V$ when $\lambda =0$ and $\mathbf{k}=0$, but to make our description of full-dimensional components consistent across all values of $\mathbf{k}$ it is best to nevertheless consider this component full-dimensional.)
For the full-dimensional and saturated components: by that same Corollary~\ref{cor:symccsweylorbit}, we see that $Y_{\lambda}$ being full-dimensional and saturated is equivalent to having this component satisfy~$Y_{\lambda}=\Pi^Q(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$. Recall that $\Omega_m^0$ denotes the set of minuscule weights together with zero; then we have the following:
\begin{prop} \label{prop:saturatedccs}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good. Let $\lambda \in P$ be a weight with $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. Let $Y_{\lambda}\coloneqq \{\mu\in P\colon \mu \rightarrow^{*}_{\mathrm{sym},\mathbf{k}} \eta_{\mathbf{k}}(\lambda) \}$ be the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$. Then $Y_{\lambda}$ is equal to $\Pi^{Q}(\eta_{\mathbf{k}}(\lambda))$ if and only if $\lambda \in \Omega_m^0$.
\end{prop}
\begin{proof}
First note that if $\lambda$ is a sink of $\Gamma_{\mathrm{sym},\mathbf{k}}$ then so is $\lambda_{\mathrm{dom}}$ and by the confluence of~$\rightarrow_{\mathrm{sym},\mathbf{k}}$ there cannot be two sinks in a single connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$, so it suffices to prove this proposition for dominant $\lambda \in P_{\geq 0}$ with $I^{0,1}_{\lambda} = [n]$. (Observe that if~$\lambda \in \Omega_m^0$ then certainly it is of this form.)
By the polytopal characterization of minuscule weights, there exists a dominant weight~$\mu \in P_{\geq 0}$ with $\mu \in \Pi^Q(\lambda)$ but $\mu \neq \lambda$ if and only if $\lambda\notin \Omega_m^0$. Hence by Proposition~\ref{prop:perm_containment} there exists~$\mu \in P_{\geq 0}$ with $\eta_{\mathbf{k}}(\mu) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ but $\eta_{\mathbf{k}}(\mu) \neq \eta_{\mathbf{k}}(\lambda)$ if and only if $\lambda \notin \Omega_m^0$. By applying $W$, we see that there is a sink $\eta_{\mathbf{k}}(\mu)$ of $\Gamma_{\mathrm{sym},\mathbf{k}}$ with~$\eta_{\mathbf{k}}(\mu) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ but $\eta_{\mathbf{k}}(\mu) \notin W(\eta_{\mathbf{k}}(\lambda))$ if and only if $\lambda \notin \Omega_m^0$. Finally, by the permutohedron non-escaping lemma, Lemma~\ref{lem:permtrap}, this means precisely that $ \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ is its own connected component if and only if~$\lambda \in \Omega_m^0$.
\end{proof}
\begin{remark}
Proposition~\ref{prop:saturatedccs} fails when $\mathbf{k}$ is not good, as can be seen in Example~\ref{ex:notgood} above: in this example, $0 \in \Pi^Q(\rho_{\mathbf{k}})$ but $0$ does not belong to the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing $\rho_{\mathbf{k}}=\eta_{\mathbf{k}}(0)=\omega_1$.
\end{remark}
So we see that the full-dimensional and saturated components of $\Gamma_{\mathrm{sym},\mathbf{k}}$ are exactly the~$Y_{\omega}$ for $\omega \in \Omega_m^0$. There are $f$ of these, where we recall that $f \coloneqq \#P/Q$ is the index of connection of $\Phi$. In some sense $P/Q$ is the ``sandpile group'' in our setting, and in fact we have that $P/Q\simeq \mathrm{coker}(\mathbf{C}^t)$, where~$\mathbf{C}$ is the Cartan matrix of~$\Phi$. Hence, these full-dimensional and saturated components suggest that interval-firing may be connected to Cartan matrix chip-firing. The next remark explains that indeed there is some connection.
\begin{remark} \label{rem:bkr}
Let us explain how Cartan matrix chip-firing (which, as mentioned, has been investigated by Benkart-Klivans-Reiner~\cite{benkart2016chip}) can be realized as a certain ``limit'' of symmetric interval-firing. Note that a Cartan matrix is always an M-matrix (see~\cite[Proposition 4.1]{benkart2016chip}). By associating to each vector $c=(c_1,\ldots,c_n)\in\mathbb{Z}^n$ the weight $\lambda = \sum_{i=1}^{n}c_i\omega_i\in P$, we can view Cartan matrix chip-firing as the relation~$\rightarrow_{\mathbf{C}}$ on $P$ defined by
\[ \lambda \rightarrow_{\mathbf{C}} \lambda - \alpha_i\; \textrm{ for $\lambda \in P$ and simple root $\alpha_i$, $i\in [n]$ with $\langle\lambda,\alpha_i^\vee\rangle\geq 2$}.\]
For $\lambda=\sum_{i=1}^{n}c_i\omega_i \in P$ and $k\in\mathbb{Z}_{\geq 0}$ set $B_k(\lambda) \coloneqq \{\sum_{i=1}^{n}c'_i\omega_i \in P\colon \sum_{i=1}^{n} |c_i-c'_i| \leq k\}$. In other words, $B_k(\lambda)$ consists of those $\mu$ which are within weight lattice distance $k$ of~$\lambda$. Note that for all $\lambda \in B_k(\rho_k)$, we have that $\langle\lambda,\alpha^\vee\rangle\geq k$ if~$\alpha \in \Phi^+$ is not a simple root. In other words, for $\lambda \in B_k(\rho_k)$, if~$\lambda \rightarrow_{\mathrm{sym},k} \lambda + \alpha$, then~$\alpha = \alpha_i$ is some simple root. Moreover, for $\lambda \in B_k(\rho_k)$ we have $\langle\lambda,\alpha_i^\vee\rangle\geq 0$ for any simple root~$\alpha_i$. Hence, for~$\lambda \in B_k(\rho_k)$ the symmetric interval-firing relation reduces to
\[ \lambda \rightarrow_{\mathrm{sym},k} \lambda + \alpha_i\; \textrm{ for a simple root $\alpha_i$, $i\in [n]$ with $\langle\lambda,\alpha_i^\vee\rangle\leq k-1$}.\]
Define $\Psi_k\colon P\to P$ by $\Psi_k(\lambda) \coloneqq -\lambda+\rho_{k+1}$ (so $\Psi_k$ is just a ``reflection plus translation''). Then for $\lambda \in \Psi_k^{-1}(B_k(\rho_k)) = B_k(\rho)$ we have
\[ \Psi_k(\lambda) \rightarrow_{\mathrm{sym},k} \Psi_k(\lambda - \alpha_i)\; \textrm{ for a simple root $\alpha_i$, $i\in [n]$ with $\langle\lambda,\alpha_i^\vee\rangle\geq 2$}.\]
Thus the restriction of $\Psi_{k}^{-1}(\Gamma_{\mathrm{sym},k})$ to $B_k(\rho)$ is exactly the same as the restriction of $\Gamma_{\rightarrow_{\mathbf{C}}}$ to $B_k(\rho)$. But every~$\lambda \in P$ belongs to~$B_k(\rho)$ for $k$ sufficiently large. In this way, we can recover Cartan matrix chip-firing as a certain~$k\to \infty$ limit of symmetric interval-firing.
Benkart-Klivans-Reiner~\cite[Theorem~1.1]{benkart2016chip} showed that the \emph{recurrent configurations} for Cartan matrix chip-firing are $\rho-\omega$ for $\omega \in \Omega_m^0$. Observe~$\Psi_k(\rho-\omega) = \eta_k(\omega)$, so these recurrent configurations correspond exactly to the sinks of our full-dimensional and saturated components. In the same way, the $2^n$ stable configurations in~$\mathbb{Z}_{\geq 0}^n$ for Cartan matrix chip-firing correspond to the sinks of our full-dimensional components.
We should stress, however, that confluence is much easier to establish for Cartan matrix chip-firing than for our interval-firing processes: for Cartan matrix chip-firing, confluence holds locally, which ultimately has to do with the fact that simple roots are pairwise non-acute. On the other hand, when firing arbitrary positive roots confluence need not hold locally because two positive roots may form an acute angle. Hence while Cartan matrix chip-firing describes the limiting behavior of our interval-firing process, it does not explain why the system is confluent from every initial point. Indeed, we could have also obtained Cartan matrix chip-firing by taking the same $k\to\infty$ limit of the root-firing process which has $\lambda \to \lambda +\alpha$ for $\lambda\in P$, $\alpha\in\Phi^+$ when $\langle\lambda,\alpha^\vee\rangle+ 1 \in [-k+2,k]$, but that process is not confluent.
\end{remark}
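For $\Phi = A_2$ the relation $\rightarrow_{\mathbf{C}}$ above is ordinary chip-firing with the Cartan matrix playing the role of the reduced Laplacian. The following sketch (our own encoding; a configuration $c \in \mathbb{Z}^2$ stands for $\lambda = c_1\omega_1 + c_2\omega_2$, and subtracting $\alpha_i$ subtracts the $i$-th column of $\mathbf{C}$) stabilizes a few configurations greedily; the stable points reached have all entries in $\{0,1\}$, matching the $2^n$ stable configurations mentioned above:

```python
# Cartan matrix chip-firing for A2: fire lambda -> lambda - alpha_i
# whenever <lambda, alpha_i^vee> = c_i >= 2.
C = [[2, -1],
     [-1, 2]]  # Cartan matrix of A2

def stabilize(c):
    """Greedily fire until every coordinate is <= 1; the firing order
    does not matter since chip-firing with an M-matrix is confluent."""
    c = list(c)
    while True:
        for i in range(len(c)):
            if c[i] >= 2:
                for j in range(len(c)):
                    c[j] -= C[j][i]   # subtract alpha_i in the omega-basis
                break
        else:
            return tuple(c)

assert stabilize((3, 0)) == (1, 1)   # = rho - 0, a recurrent configuration
assert stabilize((2, 2)) == (1, 1)
assert stabilize((4, 0)) == (1, 0)
```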
\section{Confluence of truncated interval-firing} \label{sec:tr_conf}
So far in this paper we have mostly focused on symmetric interval-firing. We now finally turn to truncated interval-firing. In this section we prove the confluence of~$\rightarrow_{\mathrm{tr},\mathbf{k}}$. Let us start by describing the sinks of $\Gamma_{\mathrm{tr},\mathbf{k}}$.
\begin{lemma} \label{lem:trsinks}
For any $\mathbf{k} \in \mathbb{N}[\Phi]^W$, the sinks of $\Gamma_{\mathrm{tr},\mathbf{k}}$ are~$\{\eta_{\mathbf{k}}(\lambda)\colon \lambda \in P\}$.
\end{lemma}
\begin{proof}
Let $\lambda \in P$. Let $\alpha \in \Phi^{+}$. Note that since $w_{\lambda} \in W^{I^{0}_{\lambda}}$, $w_{\lambda}$ does not have a descent $s_i$ with $i \in I^{0}_{\lambda}$ and thus $w_{\lambda}$ has no inversions in~$\Phi_{I^{0}_{\lambda}}$. Thus if~$\alpha \in w_{\lambda}(\Phi_{I^{0}_{\lambda}})$, then $\langle\eta_{\mathbf{k}}(\lambda),\alpha^\vee\rangle = \langle\lambda_{\mathrm{dom}} + \rho_{\mathbf{k}},w_{\lambda}^{-1}(\alpha)^\vee\rangle \geq \mathbf{k}(\alpha)$, since $w_{\lambda}^{-1}(\alpha) \in \Phi^{+}$. So now consider $\alpha \notin w_{\lambda}(\Phi_{I^{0}_{\lambda}})$. Then $w_{\lambda}^{-1}(\alpha)$ may be positive or negative, but $|\langle\lambda_{\mathrm{dom}},w_{\lambda}^{-1}(\alpha)^\vee\rangle| \geq 1$ (because $\lambda_{\mathrm{dom}}$ has an $\omega_i$ coefficient of at least~$1$ for some $i \notin I^{0}_{\lambda}$ such that $\alpha_i^\vee$ appears in the expansion of $\pm w_{\lambda}^{-1}(\alpha)^\vee$). Hence
\[|\langle\eta_{\mathbf{k}}(\lambda),\alpha^\vee\rangle| = |\langle\lambda_{\mathrm{dom}} + \rho_{\mathbf{k}},w_{\lambda}^{-1}(\alpha)^\vee\rangle| \geq \mathbf{k}(\alpha)+1,\]
which means that $\langle\eta_{\mathbf{k}}(\lambda),\alpha^\vee\rangle \notin [-\mathbf{k}(\alpha),\mathbf{k}(\alpha)-1]$. So indeed $\eta_{\mathbf{k}}(\lambda)$ is a sink of $\Gamma_{\mathrm{tr},\mathbf{k}}$.
Now suppose $\mu$ is a sink of $\Gamma_{\mathrm{tr},\mathbf{k}}$. Since $\langle\mu,\alpha^\vee\rangle \notin [-\mathbf{k}(\alpha),\mathbf{k}(\alpha)-1]$ for all $\alpha \in \Phi^{+}$, in particular $|\langle\mu,\alpha^\vee\rangle| \geq \mathbf{k}(\alpha)$ for all $\alpha\in \Phi^{+}$. This means that $\langle\mu_{\mathrm{dom}},\alpha^\vee\rangle \geq \mathbf{k}(\alpha)$ for all~$\alpha \in \Phi^{+}$. Hence $\mu_{\mathrm{dom}} = \mu' + \rho_{\mathbf{k}}$ for some dominant $\mu' \in P_{\geq 0}$. Suppose to the contrary that $w_{\mu}$ is not the minimal length element of $w_{\mu}W_{I^{0}_{\mu'}}$. Then there exists a descent $s_i$ of~$w_{\mu}$ with $i \in I^{0}_{\mu'}$. But then
\[\langle\mu,-w_{\mu}(\alpha_i)^\vee\rangle = \langle\mu_{\mathrm{dom}},-\alpha_i^\vee\rangle = -\langle\mu',\alpha_i^\vee\rangle -\langle\rho_{\mathbf{k}},\alpha_i^\vee\rangle \geq -\mathbf{k}(\alpha_i),\]
and also $\langle\mu,-w_{\mu}(\alpha_i)^\vee\rangle =-\langle \mu_{\mathrm{dom}},\alpha_i^\vee\rangle \leq 0$. This would mean $\mu$ is not a sink of $\Gamma_{\mathrm{tr},\mathbf{k}}$, since $-w_{\mu}(\alpha_i) \in \Phi^{+}$. So $w_{\mu}$ must be the minimal length element of~$w_{\mu}W_{I^{0}_{\mu'}}$. This means $w_{\mu} = w_{\lambda}$ for some~$\lambda \in P$ with $\lambda_{\mathrm{dom}} = \mu'$. And $\mu = w_{\mu}(\mu_{\mathrm{dom}}) = \lambda + w_{\lambda}(\rho_{\mathbf{k}}) = \eta_{\mathbf{k}}(\lambda)$, as claimed.
\end{proof}
We now proceed to prove the confluence of truncated interval-firing. In some sense our proof of confluence here is less satisfactory than the one for symmetric interval-firing, because we rely heavily on the diamond lemma and on reduction to rank~$2$, which is a kind of ``trick'' that obscures the underlying polytopal geometry (and requires us at one point to use the classification of rank~$2$ root systems). But we also do crucially use the permutohedron non-escaping lemma in the following lemma, which says that ``small'' permutohedra close to the origin are connected components of truncated interval-firing.
\begin{lemma} \label{lem:trccs}
Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good. Then for all $\omega \in \Omega_m^0$, the (translated) discrete permutohedron $\Pi^Q(\rho_{\mathbf{k}})+\omega$ is a connected component of $\Gamma_{\mathrm{tr},\mathbf{k}}$ and the unique sink of this connected component is $\rho_{\mathbf{k}}+\omega$.
\end{lemma}
\begin{proof}
First let us prove a preliminary result: for any $\lambda \in P$ and $\omega \in \Omega_m^0$, we have that~$(\lambda-\omega)_{\mathrm{dom}} = \lambda_{\mathrm{dom}} - w(w^{-1}_{\lambda}(\omega))$ for some $w \in W_{I^{0}_{\lambda}}$. Indeed, since~$\omega$ is minuscule or zero, we have that $\langle-w'(\omega),\alpha^\vee\rangle \in \{-1,0,1\}$ for any $\alpha\in \Phi$ and any~$w'\in W$. Therefore~$w^{-1}_{\lambda}(\lambda-\omega) = \lambda_{\mathrm{dom}} - w^{-1}_{\lambda}(\omega)$ may not be dominant, but the only $\alpha_i$ for which we have~$\langle \lambda_{\mathrm{dom}} - w^{-1}_{\lambda}(\omega),\alpha_i^\vee\rangle < 0$ must have $i \in I^{0}_{\lambda}$. Hence, if we let~$w \in W_{I^{0}_{\lambda}}$ be such that $\langle w(w^{-1}_{\lambda}(\omega)),\alpha_i^\vee\rangle \leq 0$ for all $i \in I^{0}_{\lambda}$, then~$(\lambda-\omega)_{\mathrm{dom}} = \lambda_{\mathrm{dom}} - w(w^{-1}_{\lambda}(\omega))$ as claimed.
Now let us show that for any $\omega \in \Omega_m^0$, the only sink of $\Gamma_{\mathrm{tr},\mathbf{k}}$ in $\Pi^Q(\rho_{\mathbf{k}})+\omega$ is $\rho_{\mathbf{k}}+\omega$. Suppose $\eta_{\mathbf{k}}(\lambda) \in \Pi^Q(\rho_{\mathbf{k}})+\omega$ for some $\lambda \in P$. This means $\eta_{\mathbf{k}}(\lambda) -\omega \in \Pi^Q(\rho_{\mathbf{k}})$, which means that $(\eta_{\mathbf{k}}(\lambda)-\omega)_{\mathrm{dom}} = \lambda_{\mathrm{dom}}+\rho_{\mathbf{k}} - w(w^{-1}_{\lambda}(\omega)) \in \Pi^Q(\rho_{\mathbf{k}})$ for some $w \in W_{I^{0}_{\lambda}}$ (we are using that $w_{\eta_{\mathbf{k}}(\lambda)}=w_{\lambda}$, which we have seen before, and that $W_{I^{0}_{\eta_{\mathbf{k}}(\lambda)}} \subseteq W_{I^{0}_{\lambda}}$). Hence Proposition~\ref{prop:perm_containment} tells us that
\[\rho_{\mathbf{k}}-( \lambda_{\mathrm{dom}}+\rho_{\mathbf{k}} - w(w^{-1}_{\lambda}(\omega))) = -(\lambda_{\mathrm{dom}}-\omega) + ( w(w^{-1}_{\lambda}(\omega)) - \omega) \in Q_{\geq 0}.\]
Now, since $\lambda_{\mathrm{dom}} \in (Q+\omega)\cap P_{\geq 0}$, we know that $\lambda_{\mathrm{dom}}-\omega \in Q_{\geq 0}$ (by one characterization of minuscule weights mentioned in~\S\ref{sec:rootsystemdefs}). Also, $\omega-w(w^{-1}_{\lambda}(\omega)) \in Q_{\geq 0}$ by Proposition~\ref{prop:perm_containment}. Hence we conclude that $\lambda_{\mathrm{dom}}=\omega$ and $w(w^{-1}_{\lambda}(\omega)) = \omega$. But since~$w \in W_{I^{0}_{\lambda}}$ fixes $\omega = \lambda_{\mathrm{dom}}$, we conclude that~$w^{-1}_{\lambda}(\omega) = w^{-1}(\omega) = \omega$, and thus $w_{\lambda}(\omega)=\omega$, which forces $w_{\lambda}$ to be the identity, i.e., we have $\lambda =\omega$. So indeed the only sink of $\Gamma_{\mathrm{tr},\mathbf{k}}$ in~$\Pi^Q(\rho_{\mathbf{k}})+\omega$ is~$\rho_{\mathbf{k}}+\omega$.
Let us prove the lemma first for $\omega = 0$. Since $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is terminating by Proposition~\ref{prop:interval_firing_terminating}, any $\rightarrow_{\mathrm{tr},\mathbf{k}}$-firing sequence starting at some $\mu \in \Pi^Q(\rho_{\mathbf{k}})$ has to terminate somewhere. By the permutohedron non-escaping lemma, Lemma~\ref{lem:permtrap}, such a sequence must terminate somewhere inside $\Pi^Q(\rho_{\mathbf{k}})$; and since $\rho_{\mathbf{k}}$ is the only sink in~$\Pi^Q(\rho_{\mathbf{k}})$, it must terminate at $\rho_{\mathbf{k}}$. So indeed $\Pi^Q(\rho_{\mathbf{k}})$ is a connected component of~$\Gamma_{\mathrm{tr},\mathbf{k}}$.
Finally, let $\omega \in \Omega_m$ be arbitrary, and let $w \in C$ be the element corresponding to~$\omega$ under the isomorphism $C \simeq P/Q$. Then by the description of this isomorphism in~\S\ref{sec:symmetry} we get~$w(0-\rho/h) +\rho/h= \omega$, and hence $w(\Pi^Q(\rho_{\mathbf{k}})-\rho/h)+\rho/h = \Pi^Q(\rho_{\mathbf{k}})+\omega$. So from the symmetry of $\Gamma^{\mathrm{un}}_{\mathrm{tr},\mathbf{k}}$ described in Theorem~\ref{thm:symmetry}, we get that $\Pi^Q(\rho_{\mathbf{k}})+\omega$ is also a connected component of $\Gamma_{\mathrm{tr},\mathbf{k}}$.
\end{proof}
Now we consider truncated interval-firing for rank~$2$ root systems.
\begin{prop} \label{prop:rank2tr}
Suppose $\Phi$ is of rank~$2$. Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$. Let $\lambda \in P$ be such that $\langle\lambda,\alpha^\vee\rangle \in [-\mathbf{k}(\alpha),\mathbf{k}(\alpha)]$ and $\langle\lambda,\beta^\vee\rangle \in [-\mathbf{k}(\beta),\mathbf{k}(\beta)]$ for two linearly independent roots $\alpha,\beta \in \Phi$. Suppose that either $\Phi$ is simply laced or one of $\alpha$ and $\beta$ is short and the other is long. Let $\omega \in \Omega_m^0$ be such that $\rho_{\mathbf{k}} - \lambda \in Q+\omega$. Then $\lambda \in \Pi^Q(\rho_\mathbf{k}) + \omega$.
\end{prop}
\begin{proof}
First let us show $\lambda_{\mathrm{dom}} = c_1\omega_1 + c_2\omega_2$ with $c_1 \in [0,\mathbf{k}(\alpha_1)]$ and $c_2 \in [0,\mathbf{k}(\alpha_2)]$. Observe that $\langle\lambda_{\mathrm{dom}},w_{\lambda}(\alpha)^\vee\rangle \in [-\mathbf{k}(\alpha),\mathbf{k}(\alpha)]$ and similarly for $\beta$. By replacing $\alpha$ with~$-\alpha$ and $\beta$ with $-\beta$ if necessary, we can assume $\langle\lambda_{\mathrm{dom}},w_{\lambda}(\alpha)^\vee\rangle \in [0,\mathbf{k}(\alpha)]$ and similarly for $\beta$, and since $\lambda_{\mathrm{dom}}$ is dominant, we are free to assume that $w_{\lambda}(\alpha)^\vee$ is positive and similarly for $\beta$. Note that $w_{\lambda}(\alpha)^\vee$ and $w_{\lambda}(\beta)^\vee$ are both nonnegative integer combinations of the simple coroots $\alpha_1^\vee$ and $\alpha_2^\vee$. Then, since $\alpha$ and $\beta$ are linearly independent, and since either $\Phi$ is simply laced, in which case $\mathbf{k}(\alpha)=\mathbf{k}(\beta)=k$, or one of $\alpha,\beta$ is short (say e.g. $\mathbf{k}(\alpha)=k_s$) and the other is long (say e.g. $\mathbf{k}(\beta)=k_l$), we can conclude in fact that $\langle\lambda_{\mathrm{dom}},\alpha_1^\vee\rangle \in [0,\mathbf{k}(\alpha_1)]$ and $\langle\lambda_{\mathrm{dom}},\alpha_2^\vee\rangle \in [0,\mathbf{k}(\alpha_2)]$.
So indeed, $\lambda_{\mathrm{dom}} = c_1\omega_1 + c_2\omega_2$ with $c_1 \in [0,\mathbf{k}(\alpha_1)]$ and $c_2 \in [0,\mathbf{k}(\alpha_2)]$. If $c_1 =\mathbf{k}(\alpha_1)$ and $c_2 = \mathbf{k}(\alpha_2)$, then $\lambda_{\mathrm{dom}} = \rho_{\mathbf{k}}$ and the proposition is obvious in this case (note that we will have~$\omega=0$). So assume without loss of generality that $c_2 \leq \mathbf{k}(\alpha_2)-1$.
Let $\lambda' \coloneqq \lambda - \omega$. We want to show $\lambda' \in \Pi^Q(\rho_\mathbf{k})$. As we have seen in the proof of Lemma~\ref{lem:trccs}, we have $\lambda'_{\mathrm{dom}} = \lambda_{\mathrm{dom}} - w(\omega)$ for some $w \in W$. So let $w\in W$ be such that $\lambda'_{\mathrm{dom}} = \lambda_{\mathrm{dom}} - w(\omega)$ and write $\lambda'_{\mathrm{dom}} = c'_1\omega_1 + c'_2\omega_2$. Since~$\langle-w(\omega),\alpha^\vee\rangle \in \{-1,0,1\}$ for any $\alpha\in \Phi$, we have $c'_1 \leq \mathbf{k}(\alpha_1)+1$ and $c'_2 \leq \mathbf{k}(\alpha_2)$. First suppose $c'_1\leq \mathbf{k}(\alpha_1)$. Together with $c'_2\leq \mathbf{k}(\alpha_2)$, this implies that $\rho_{\mathbf{k}}-\lambda'_{\mathrm{dom}} \in P_{\geq 0}$, and hence $\rho_{\mathbf{k}}-\lambda'_{\mathrm{dom}} \in Q_{\geq 0}$. Thus we conclude $\lambda' \in \Pi^Q(\rho_\mathbf{k})$ by Proposition~\ref{prop:perm_containment}.
So suppose that $c'_1 = \mathbf{k}(\alpha_1)+1$. This means $\langle-w(\omega),\alpha_1^\vee\rangle =1$. Note that this implies $\omega \neq 0$, and hence $\omega$ must be a minuscule weight. But $G_2$ has no minuscule weights, so we may from now on assume that $\Phi \neq G_2$. Since $\omega$ is the only dominant element of $W(\omega)$, we also have that~$\langle-w(\omega),\alpha_2^\vee\rangle \leq 0$ and hence~$c'_2 \leq \mathbf{k}(\alpha_2)-1$. Write $\rho_{\mathbf{k}}-\lambda'_{\mathrm{dom}} = a_1\alpha_1 + a_2\alpha_2$ for some integers $a_1,a_2\in \mathbb{Z}$. Then~$c'_1 = \mathbf{k}(\alpha_1)+1$ and $c'_2 \leq \mathbf{k}(\alpha_2)-1$ translate to
\begin{align*}
2a_1+\langle\alpha_2,\alpha^\vee_1\rangle a_2 &= -1; \\
\langle\alpha_1,\alpha^\vee_2\rangle a_1 + 2a_2 &\geq 1.
\end{align*}
By the classification of rank~$2$ root systems we have $\langle\alpha_2,\alpha^\vee_1\rangle,\langle\alpha_1,\alpha^\vee_2\rangle \in \{-1,-2\}$ with at least one of them equal to $-1$. It is then not hard to check that all integer solutions $a_1,a_2\in \mathbb{Z}$ to the above system of inequalities must have $a_1,a_2\geq 0$. Hence we conclude~$\rho_{\mathbf{k}}-\lambda'_{\mathrm{dom}} \in Q_{\geq 0}$, and thus $\lambda' \in \Pi^Q(\rho_\mathbf{k})$ by Proposition~\ref{prop:perm_containment}.
\end{proof}
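The final step in the proof above, that every integer solution of the displayed system is componentwise nonnegative, is easy to confirm by brute force. The sketch below (not part of the proof; the finite search window is an assumption) enumerates small integer pairs for each of the three allowed combinations of Cartan integers:

```python
# Check: every integer solution (a1, a2) of
#     2*a1 + c21*a2 == -1   and   c12*a1 + 2*a2 >= 1,
# where c21 = <alpha2, alpha1^vee> and c12 = <alpha1, alpha2^vee> lie in
# {-1, -2} with at least one equal to -1, satisfies a1 >= 0 and a2 >= 0.
def bad_solutions(c21, c12, bound=50):
    """Solutions inside the search window with a negative component."""
    return [(a1, a2)
            for a1 in range(-bound, bound + 1)
            for a2 in range(-bound, bound + 1)
            if 2 * a1 + c21 * a2 == -1
            and c12 * a1 + 2 * a2 >= 1
            and (a1 < 0 or a2 < 0)]

# The three cases allowed by the classification of rank-2 root systems.
for c21, c12 in [(-1, -1), (-1, -2), (-2, -1)]:
    assert bad_solutions(c21, c12) == []
```

For instance, in the case $c_{21}=c_{12}=-1$ the solutions are exactly $a_2 = 2a_1+1$ with $a_1 \geq 0$, while in the case $(c_{21},c_{12})=(-2,-1)$ the equation $2a_1-2a_2=-1$ has no integer solutions at all.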
\begin{cor} \label{cor:rank2trconfluence}
Suppose $\Phi$ is of rank~$2$. Let $\mathbf{k}\in\mathbb{N}[\Phi]^W$ be good. Then $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is confluent (and terminating).
\end{cor}
\begin{proof}
We know $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is terminating thanks to Proposition~\ref{prop:interval_firing_terminating}. Hence by the diamond lemma, Lemma~\ref{lem:diamond}, it is enough to prove that $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is locally confluent.
First let us prove this when $\Phi$ is simply laced. Suppose $\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda+\alpha$ and $\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda+\beta$ for $\alpha,\beta \in \Phi^{+}$. Then by Proposition~\ref{prop:rank2tr} we have that $\lambda \in \Pi^Q(\rho_\mathbf{k}) + \omega$ where $\omega\in \Omega_m^0$ is such that $\rho_{\mathbf{k}} - \lambda \in Q+\omega$. But by Lemma~\ref{lem:trsinks}, $\Pi^Q(\rho_\mathbf{k}) + \omega$ is a connected component of~$\Gamma_{\mathrm{tr},\mathbf{k}}$ with unique sink $\rho_{\mathbf{k}}+\omega$; since $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is terminating this means that any $\rightarrow_{\mathrm{tr},\mathbf{k}}$-firing sequence starting at $\lambda$ eventually terminates at $\rho_{\mathbf{k}}+\omega$. Hence we can bring~$\lambda+\alpha$ and~$\lambda+\beta$ back together again via $\rightarrow_{\mathrm{tr},\mathbf{k}}$-firings.
Note that confluence for $\Phi=A_1\oplus A_1$ (for any $\mathbf{k} \in\mathbb{N}[\Phi]^W$) reduces to confluence for~$\Phi=A_1$, which is trivial. Thus in fact we have proved confluence for all simply laced root systems of rank~$2$, including those which are not irreducible.
So assume~$\Phi$ is not simply laced. Suppose $\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda+\alpha$ and $\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda+\beta$ for~$\alpha,\beta \in \Phi^{+}$. If one of $\alpha$ and $\beta$ is short and the other is long, then we can apply Proposition~\ref{prop:rank2tr} and Lemma~\ref{lem:trsinks} as above to conclude that we can bring $\lambda+\alpha$ and $\lambda+\beta$ back together again via $\rightarrow_{\mathrm{tr},\mathbf{k}}$-firings. So suppose $\alpha$ and $\beta$ have the same length. Then let~$\widetilde{\Phi}$ be the set of all roots in $\Phi$ with the same length as $\alpha$ and $\beta$. This $\widetilde{\Phi}$ will again be a rank~$2$ root system, and by construction a simply laced one. Hence by the result for simply laced root systems, we know that truncated interval-firing is confluent for $\widetilde{\Phi}$; so in particular we can bring $\lambda+\alpha$ and $\lambda+\beta$ back together again via $\rightarrow_{\mathrm{tr},\mathbf{k}}$-firings.
\end{proof}
The confluence of truncated interval-firing for all root systems follows easily from confluence for rank~$2$ root systems. The following finishes the proof of Theorem~\ref{thm:confluence_intro}.
\begin{cor} \label{cor:trconfluence}
Let $\mathbf{k}\in\mathbb{N}[\Phi]^W$ be good. Then $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is confluent (and terminating).
\end{cor}
\begin{proof}
We know $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is terminating thanks to Proposition~\ref{prop:interval_firing_terminating}. Hence by the diamond lemma, Lemma~\ref{lem:diamond}, it is enough to prove that $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is locally confluent. Suppose that~$\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda+\alpha$ and $\lambda \rightarrow_{\mathrm{tr},\mathbf{k}} \lambda+\beta$ for $\alpha,\beta \in \Phi^{+}$. Restricting $\Phi$ to the span of $\alpha$ and $\beta$ gives a rank~$2$ sub-root system, for which we have proved confluence in Corollary~\ref{cor:rank2trconfluence} (as remarked in the proof of that corollary, we in fact proved confluence for \emph{all} rank~$2$ root systems, including those which are not irreducible). Hence we can bring $\lambda+\alpha$ and $\lambda+\beta$ back together just with truncated interval-firing moves inside that rank~$2$ sub-root system.
\end{proof}
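The confluence just established can be observed directly on a small example. The following sketch simulates truncated interval-firing for $\Phi = A_2$ in fundamental-weight coordinates; the firing window $\langle\lambda,\alpha^\vee\rangle + 1 \in [-\mathbf{k}(\alpha)+1,\mathbf{k}(\alpha)]$ and the identification of the roots with weight vectors are our working assumptions. It exhaustively explores all firing sequences from each starting weight and confirms that the reachable stable point is unique:

```python
from functools import lru_cache

K = 2  # firing parameter k (assumed good, i.e. K >= 1)

# Phi = A2 in fundamental-weight coordinates.  Positive roots as weight
# vectors, each paired with the functional mu -> <mu, alpha^vee>:
#   alpha1 = (2,-1), alpha2 = (-1,2), theta = alpha1 + alpha2 = (1,1).
ROOTS = [((2, -1), lambda x, y: x),
         ((-1, 2), lambda x, y: y),
         ((1, 1), lambda x, y: x + y)]

def moves(lam):
    """Truncated firing moves lam -> lam + alpha, allowed when
    <lam, alpha^vee> + 1 lies in the window [-K+1, K]."""
    x, y = lam
    return [(x + a, y + b)
            for (a, b), pairing in ROOTS
            if -K + 1 <= pairing(x, y) + 1 <= K]

@lru_cache(maxsize=None)
def sinks(lam):
    """All stable points reachable from lam; a singleton iff firing from
    lam is confluent.  The process is terminating, so the recursion ends."""
    nxt = moves(lam)
    if not nxt:
        return frozenset([lam])
    return frozenset().union(*(sinks(m) for m in nxt))

# Confluence on a grid of starting weights.
assert all(len(sinks((x, y))) == 1
           for x in range(-4, 5) for y in range(-4, 5))
# The component of the origin stabilizes at rho_k = (K, K).
assert sinks((0, 0)) == frozenset([(K, K)])
```

The second assertion matches Lemma~\ref{lem:trsinks} together with the permutohedron non-escaping lemma: every firing sequence starting at $0 \in \Pi^Q(\rho_{\mathbf{k}})$ terminates at the unique sink $\rho_{\mathbf{k}}$.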
\begin{remark}
Our method of proof of confluence for $\rightarrow_{\mathrm{tr},\mathbf{k}}$ fails when $\mathbf{k}$ is not good; for instance, Lemma~\ref{lem:trccs} is not true for general $\mathbf{k}$, as can be seen in Example~\ref{ex:notgood}: here $0 \in \Pi^Q(\rho_{\mathbf{k}})$ but $0$ does not belong to the connected component of $\Gamma_{\mathrm{tr},\mathbf{k}}$ containing~$\rho_{\mathbf{k}}$. However, we can actually deduce that $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is confluent for all $\mathbf{k}\in\mathbb{N}[\Phi]^W$ from Corollary~\ref{cor:trconfluence}. Indeed, if $\mathbf{k}\in\mathbb{N}[\Phi]^W$ is not good, then $k_s=0$. But if~$k_s=0$ then we will never be able to fire any short root. In other words, if $k_s=0$ then truncated interval-firing reduces to truncated interval-firing with respect to the long roots only; and the long roots form a simply laced root system, for which $\rightarrow_{\mathrm{tr},\mathbf{k}}$ is known to be confluent from Corollary~\ref{cor:trconfluence}.
\end{remark}
\begin{remark}
It appears that when $\Phi = A_2$ there are no intervals $[a,b]$ for which the relation $\lambda \to \lambda+\alpha$ for $\lambda \in P$, $\alpha\in\Phi^+$ with $\langle\lambda,\alpha^\vee\rangle + 1 \in [a,b]$ is confluent besides the symmetric and truncated intervals (and this probably would not be too hard to prove). If so, then the same would be true for all irreducible simply laced root systems (except for~$A_1$) because any irreducible root system of rank $3$ or greater contains an~$A_2$ sub-root system. This observation also severely restricts the possible intervals defining confluent processes for all root systems, including the non-simply laced ones (although note that central-firing is confluent for $\Phi = B_2$).
\end{remark}
\begin{remark} \label{rem:hyperplanes}
To any root-firing process ${\rightarrow}$ on $P$ let us associate the hyperplane arrangement which contains the hyperplane $H = \{v\in V\colon \langle v,\alpha^\vee\rangle = c\}$ whenever we have a firing move $\lambda \to \lambda + \alpha$ with $\langle\lambda+\frac{\alpha}{2}, \alpha^\vee\rangle = c$; i.e., we include a hyperplane orthogonal to $\alpha$ at the \emph{midpoint} between $\lambda$ and $\lambda +\alpha$. As mentioned in the introduction, under this correspondence the symmetric and truncated interval-firing processes correspond to the (extended) Catalan and Shi hyperplane arrangements~\cite{postnikov2000deformations, athanasiadis2000deformations}. The confluence of symmetric and truncated interval-firing seems like it might have something to do with the freeness of the Catalan and Shi arrangements. \emph{Freeness} is a certain deep algebraic property of hyperplane arrangements introduced by Terao~\cite{terao1980arrangements}. Freeness of the (extended) Catalan and Shi hyperplane arrangements of a root system was conjectured by Edelman and Reiner~\cite{edelman1996free} and proven by Yoshinaga~\cite{yoshinaga2004characterization} building on work of Terao~\cite{terao2002multiderivations}. Vic Reiner suggested that we look at other free deformations of Coxeter arrangements as a possible source of other confluent root-firing processes. We found one such process which, experimentally, appears confluent: for $\mathbf{k}\in\mathbb{N}[\Phi]$ consider the relation $\lambda \rightarrow \lambda+\alpha$ for $\lambda \in P$, $\alpha\in\Phi^+$ with $\langle\lambda,\alpha^\vee\rangle + 1 \in [-\mathbf{k}(\alpha)+1,\mathbf{k}(\alpha)]$ if $\alpha$ is long and $\langle\lambda,\alpha^\vee\rangle + 1 \in [-\mathbf{k}(\alpha),\mathbf{k}(\alpha)]$ if $\alpha$ is short. In other words, we use either the truncated or symmetric intervals depending on which Weyl group orbit our root lies in.
This process corresponds to a \emph{Shi-Catalan} hyperplane arrangement, as studied by Abe and Terao~\cite{abe2011freeness}. Other free variants of Coxeter arrangements include the \emph{ideal subarrangements} of Coxeter arrangements~\cite{abe2016freeness, abe2016free}, but we have not been able to obtain confluent root-firing processes from these. Note that the freeness of the corresponding hyperplane arrangement certainly does not imply confluence of the root-firing process: for instance, reversing the direction of all the arrows for the truncated interval-firing process yields a process which is not confluent but which corresponds to the same Shi hyperplane arrangement. Nevertheless, it would be very interesting to understand the connection between freeness and confluence further.
\end{remark}
\begin{remark}
Under the correspondence between root-firing processes and hyperplane arrangements discussed in Remark~\ref{rem:hyperplanes}, the central-firing process corresponds not to the central Coxeter arrangement, but rather to the affine \emph{Linial arrangement}. The Linial arrangement has many interesting combinatorial properties (see e.g.~\cite{postnikov2000deformations} and~\cite{athanasiadis2000deformations}), but is not free.
\end{remark}
\part{Ehrhart-like polynomials} \label{part:ehrhart}
\section{Ehrhart-like polynomials: introduction}
Continue to fix a root system $\Phi$ in vector space $V$ as in the previous part (and retain all the notation from that part). In this part, we investigate the set of weights with given symmetric or truncated interval-firing stabilization. Thus, for good $\mathbf{k}\in \mathbb{N}[\Phi]^W$, we define the \emph{stabilization maps} $s^{\mathrm{sym}}_{\mathbf{k}}\colon P \to P$ and $s^{\mathrm{tr}}_{\mathbf{k}}\colon P \to P$ by
\begin{align*}
s^{\mathrm{sym}}_{\mathbf{k}}(\mu) = \lambda &\Leftrightarrow \textrm{ the $\rightarrow_{\mathrm{sym},\mathbf{k}}$-stabilization of $\mu$ is $\eta_{\mathbf{k}}(\lambda)$}; \\
s^{\mathrm{tr}}_{\mathbf{k}}(\mu) = \lambda &\Leftrightarrow \textrm{ the $\rightarrow_{\mathrm{tr},\mathbf{k}}$-stabilization of $\mu$ is $\eta_{\mathbf{k}}(\lambda)$}.
\end{align*}
These functions are well-defined since the symmetric and truncated interval-firing processes are confluent and terminating (Corollaries~\ref{cor:symconfluence} and~\ref{cor:trconfluence}), the stable points of these processes must have the form $\eta_{\mathbf{k}}(\lambda)$ for some $\lambda \in P$ (Lemmas~\ref{lem:symsinks} and~\ref{lem:trsinks}), and the map $\eta_{\mathbf{k}}$ is injective (Proposition~\ref{prop:etafacts}).
Looking at Example~\ref{ex:rank2graphs}, one can see that the set $(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda)$ (or $(s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\lambda)$) of weights with interval-firing stabilization $\eta_{\mathbf{k}}(\lambda)$ looks ``the same'' across all values of~$\mathbf{k}$ except that it gets ``dilated'' as $\mathbf{k}$ is scaled. In analogy with the \emph{Ehrhart polynomial}~\cite{ehrhart1977polynomes} of a convex lattice polytope, which counts the number of lattice points in dilations of the polytope, let us define for all $\lambda \in P$ and all good $\mathbf{k}\in\mathbb{N}[\Phi]^W$ the quantities:
\begin{align*}
L^{\mathrm{sym}}_{\lambda}(\mathbf{k}) &\coloneqq \#(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda); \\
L^{\mathrm{tr}}_{\lambda}(\mathbf{k}) &\coloneqq \#(s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\lambda).
\end{align*}
Our aim is to show that $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ are polynomials in~$\mathbf{k}$. By ``polynomial in~$\mathbf{k}$'' we mean that, if $\Phi$ is simply laced, then these $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ are single-variable polynomials in $k$, where $\mathbf{k}(\alpha)=k$ for all $\alpha \in \Phi$; and if $\Phi$ is non-simply laced, then they are two-variable polynomials in $k_s$ and $k_l$, where $\mathbf{k}(\alpha)=k_s$ if $\alpha$ is short and $\mathbf{k}(\alpha)=k_l$ if $\alpha$ is long.
We are able to show that the $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ are polynomials for all root systems~$\Phi$ (Theorem~\ref{thm:symehrhart}), and we are able to show that the $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ are polynomials assuming that~$\Phi$ is simply laced (Theorem~\ref{thm:trehrhart}). In fact, we show that all these polynomials have integer coefficients. Moreover, we conjecture that, for all $\Phi$, these $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ are polynomials with \emph{nonnegative} integer coefficients.
We refer to these $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ as the \emph{symmetric} and \emph{truncated Ehrhart-like polynomials} because they count the size of some discrete subset of lattice points as that set is somehow ``dilated.'' But it is important to note that the sets $(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda)$ and $(s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\lambda)$ are in general not the set of lattice points of any convex polytope, or indeed, any convex set. This can already be seen in rank~$2$ (see Example~\ref{ex:rank2graphs}). Nevertheless, for some special~$\lambda$ (namely, $\lambda \in \Omega_m^0$) the polynomials $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ are (essentially) genuine Ehrhart polynomials; and so we do use Ehrhart theory to prove the polynomiality of~$L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and~$L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$. Note that, because they apparently have nonnegative integer coefficients, these polynomials are (as we explain below) most similar to the Ehrhart polynomials \emph{of zonotopes}.
\section{Symmetric Ehrhart-like polynomials}\label{sec:sym_Ehrhart}
The \emph{Ehrhart polynomial} $L_{\mathcal{P}}(k)$ of a convex lattice polytope $\mathcal{P}$ is a single-variable polynomial in $k$ which satisfies
\[L_{\mathcal{P}}(k) = \textrm{the number of lattice points in $k\mathcal{P}$ (the $k$th dilate of $\mathcal{P}$)}\]
for all $k \geq 1$. Such polynomials were first investigated by Ehrhart~\cite{ehrhart1977polynomes}, who proved that they exist for all lattice polytopes. A famous result of Stanley~\cite[Example~3.1]{stanley1980decompositions} says that the Ehrhart polynomial of a lattice \emph{zonotope} (i.e., a Minkowski sum of line segments) has nonnegative integer coefficients. A standard way to prove this result is to inductively \emph{pave} the zonotope (see~\cite[\S9.2]{beck2015computing}); this decomposition of a zonotope goes back to Shephard~\cite{shephard1974combinatorial}. In the following theorem we apply this same paving technique to a slightly more general setting: namely, we show that if~$\mathcal{P}$ is any fixed convex lattice polytope, and $\mathcal{Z}$ is a lattice zonotope, then for $k \geq 1$ the number of lattice points in~$\mathcal{P}+k\mathcal{Z}$ is a polynomial with nonnegative integer coefficients in~$k$. Stanley's result corresponds to taking $\mathcal{P}$ to be a point. Although the proof is, as mentioned, standard, we have not found this theorem in the Ehrhart theory literature; and it turns out that this result is just what we need to prove that the symmetric Ehrhart-like polynomials~$L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ exist.
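Stanley's nonnegativity result can be checked concretely on a small example. The sketch below counts lattice points in dilates of the hexagonal zonotope $\mathcal{Z} = [0,(1,0)] + [0,(0,1)] + [0,(1,1)]$ (our choice of example; in the basis of simple roots these three segments are translates of the positive roots of $A_2$). Paving by half-open parallelepipeds predicts $L_{\mathcal{Z}}(k) = \sum_X |\det X| \, k^{|X|} = 3k^2+3k+1$, summed over linearly independent subsets $X$ of the three segment vectors:

```python
# Ehrhart polynomial of Z = [0,(1,0)] + [0,(0,1)] + [0,(1,1)] in Z^2.
# The paving formula predicts L_Z(k) = 3k^2 + 3k + 1 (all three pairs of
# segment vectors have |det| = 1, giving the coefficient 3 of k^2).
def points_in_dilate(k):
    # (x,y) lies in kZ iff there is t in [0,k] with x-t, y-t both in [0,k],
    # i.e. max(0, x-k, y-k) <= min(k, x, y).
    return sum(1
               for x in range(0, 2 * k + 1)
               for y in range(0, 2 * k + 1)
               if max(0, x - k, y - k) <= min(k, x, y))

for k in range(0, 6):
    assert points_in_dilate(k) == 3 * k * k + 3 * k + 1
```

For $k=1$ the seven points are the six boundary lattice points of the hexagon together with one interior point, matching $3+3+1=7$.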
\begin{figure}\label{fig:paving}
\end{figure}
\begin{thm} \label{thm:polypluszoneehrhart}
Let $\Lambda$ be a lattice in $V$. Let $\mathcal{P}$ be any convex lattice polytope in~$V$. Let~$v_1,\ldots,v_m \in \Lambda$ be lattice elements. Then for any~$\mathbf{k}=(k_1,\ldots,k_m) \in \mathbb{Z}_{\geq 0}^m$ the quantity
\[\#(\mathcal{P}+k_1[0,v_1] + \cdots + k_m[0,v_m]) \cap \Lambda\]
is given by a polynomial in the $k_1,\ldots,k_m$ with nonnegative integer coefficients.
\end{thm}
\begin{proof}
For $X = \{u_1,\ldots,u_\ell\}\subseteq V$ linearly independent, a \emph{half-open parallelepiped} with edge set~$X$ is a convex set $\mathcal{Z}^{h.o.}_{X}$ of the form
\[\mathcal{Z}^{h.o.}_{X} = \sum_{i=1}^{\ell} \begin{cases}[0,u_i) &\textrm{if $\varepsilon_i = 1$}; \\ (0,u_i] &\textrm{if $\varepsilon_i = -1$}, \end{cases}\]
for some choice of sign vector $(\varepsilon_1,\ldots,\varepsilon_\ell)\in \{-1,1\}^{\ell}$. For $X\subseteq \{v_1,\ldots,v_m\}$ let us use~$\mathbf{k}X \coloneqq \{k_iv_i\colon v_i \in X\}$.
The key idea for this theorem: $\mathcal{P}+k_1[0,v_1] + \cdots + k_m[0,v_m]$ can be inductively decomposed (or ``paved'') into disjoint pieces that are (up to translation) of the form
\[ F +\mathcal{Z}^{h.o.}_{\mathbf{k}X},\]
where $X\subseteq \{v_1,\ldots,v_m\}$ is linearly independent and $F$ is an open face of the polytope~$\mathcal{P}$ which is affinely independent from $\mathrm{Span}_{\mathbb{R}}(X)$. Figure~\ref{fig:paving} shows how this is done. Here by ``open face'' of $\mathcal{P}$ we mean a face minus its relative boundary. Note that vertices have empty relative boundary and hence vertices are open faces. (But observe that~Figure~\ref{fig:paving} is slightly misleading in that we should technically show the whole polytope $\mathcal{P}$ decomposed into its open faces as well; instead the figure shows these pieces grouped into a single bigger piece.) The proof, by induction on $m$, that this is possible works in exactly the same way as for paving a zonotope (see~\cite[Lemma 9.1]{beck2015computing}), so we do not go into the details. Then note that
\[ \#\left( \left(F + \mathcal{Z}^{h.o.}_{\mathbf{k}X} \right) \cap \Lambda \right) = \#\left( \left(F + \mathcal{Z}^{h.o.}_{X} \right) \cap \Lambda \right) \cdot \prod_{v_i\in X} k_i\]
precisely because $F$ is affinely independent from $\mathcal{Z}^{h.o.}_{\mathbf{k}X}$. Hence the desired polynomial in $k_1,\ldots,k_m$ indeed exists: it is a sum over the pieces of this decomposition of~$\#\left( \left(F + \mathcal{Z}^{h.o.}_{X} \right) \cap \Lambda \right) \cdot \prod_{v_i\in X} k_i$. (We are implicitly using the fact that this decomposition can be realized in a uniform way across all values of~$\mathbf{k}$).
\end{proof}
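As a sanity check on Theorem~\ref{thm:polypluszoneehrhart}, take $\Lambda = \mathbb{Z}^2$, $\mathcal{P} = \mathrm{conv}\{(0,0),(1,0),(0,1)\}$, and $v_1 = (1,0)$, $v_2 = (0,1)$ (this specific instance is our choice). Paving as in the proof predicts the polynomial $(k_1+1)(k_2+1) + (k_1+1) + (k_2+1) = k_1k_2 + 2k_1 + 2k_2 + 3$, with nonnegative integer coefficients, and a brute-force lattice-point count agrees:

```python
# #((P + k1[0,(1,0)] + k2[0,(0,1)]) intersect Z^2) for the unit triangle
# P = conv{(0,0),(1,0),(0,1)}.  A lattice point (x,y) lies in the Minkowski
# sum iff x,y >= 0 and, after sliding back optimally along the box
# [0,k1] x [0,k2],
#     max(x - k1, 0) + max(y - k2, 0) <= 1.
def count(k1, k2):
    return sum(1
               for x in range(0, k1 + 2)
               for y in range(0, k2 + 2)
               if max(x - k1, 0) + max(y - k2, 0) <= 1)

# Compare with the polynomial predicted by the paving argument.
for k1 in range(0, 5):
    for k2 in range(0, 5):
        assert count(k1, k2) == k1 * k2 + 2 * k1 + 2 * k2 + 3
```

The three monomials track the pieces of the paving: $(k_1+1)(k_2+1)$ points from the vertex $(0,0)$ plus the full box, and $(k_1+1)$, $(k_2+1)$ points from the two edges of $\mathcal{P}$ translated along one segment each.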
\begin{cor} \label{cor:permehrhart}
For any~$\lambda \in P_{\geq 0}$, for all $\mathbf{k} \in \mathbb{N}[\Phi]^W$ the quantity $\#\Pi^Q(\lambda+\rho_{\mathbf{k}})$ is given by a polynomial with nonnegative integer coefficients in~$\mathbf{k}$.
\end{cor}
\begin{proof}
We are free to translate $\Pi^Q(\lambda+\rho_{\mathbf{k}})$ so that it contains the origin; i.e., clearly $\#\Pi^Q(\lambda+\rho_{\mathbf{k}})$ is the number of $Q$-points in $\Pi(\lambda+\rho_{\mathbf{k}})-\lambda-\rho_{\mathbf{k}}$. One easy consequence of Proposition~\ref{prop:perm_containment} is that $\Pi(\lambda+\mu)=\Pi(\lambda)+\Pi(\mu)$ for dominant weights $\lambda,\mu\in P_{\geq 0}$. Hence, because~$\lambda$ is dominant, we have
\[\Pi(\lambda+\rho_{\mathbf{k}}) -\lambda - \rho_{\mathbf{k}} = (\Pi(\lambda) -\lambda) + (\Pi(\rho_{\mathbf{k}}) -\rho_{\mathbf{k}}).\]
It is well known that the \emph{regular permutohedron} $\Pi(\rho)$ is a zonotope. In Type~A, a standard way to prove this fact is to compute the Newton polytope of the Vandermonde determinant in two ways (see~\cite[Theorem 9.4]{beck2015computing}). The same argument, but with \emph{Weyl's denominator formula} (see~\cite[\S24.3]{humphreys1972lie}) in place of the Vandermonde determinant, establishes that $\Pi(\rho)=\sum_{\alpha\in\Phi^+}[-\alpha/2,\alpha/2]$. It is then a simple exercise to show that $\Pi(\rho_{\mathbf{k}})=\sum_{\alpha\in\Phi^+}\mathbf{k}(\alpha)[-\alpha/2,\alpha/2]$. Hence,
\[\Pi(\lambda+\rho_{\mathbf{k}}) -\lambda - \rho_{\mathbf{k}} = (\Pi(\lambda) -\lambda) + \sum_{\alpha \in \Phi^+} \mathbf{k}(\alpha)[0,-\alpha],\]
and so the desired polynomial indeed exists thanks to Theorem~\ref{thm:polypluszoneehrhart}.
\end{proof}
We are now ready to prove the first part of Theorem~\ref{thm:Ehrhart_intro}.
\begin{thm} \label{thm:symehrhart}
For any $\lambda \in P$, for good~$\mathbf{k} \in\mathbb{N}[\Phi]^W$ the quantity $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ is given by a polynomial with integer coefficients in~$\mathbf{k}$.
\end{thm}
\begin{proof}
First of all, if $\lambda$ has $\langle\lambda,\alpha^\vee\rangle = -1$ for some $\alpha \in \Phi^{+}$ then clearly we can take $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})\coloneqq 0$ because, thanks to Lemma~\ref{lem:symsinks}, $\eta_{\mathbf{k}}(\lambda)$ cannot be a sink of~$\Gamma_{\mathrm{sym},\mathbf{k}}$ in this case. So now assume that $\lambda$ satisfies $\langle\lambda,\alpha^\vee\rangle\neq -1$ for all $\alpha \in \Phi^{+}$. If~$I^{0,1}_{\lambda} \neq [n]$, then, by Theorem~\ref{thm:permcc}, the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink $\eta_{\mathbf{k}}(\lambda)$ is contained in $w_\lambda \Pi^{Q}_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$, which is contained in an affine translate of the strict subspace $\mathrm{Span}_{\mathbb{R}}(w_{\lambda} \Phi_{I^{0,1}_{\lambda}})$. By induction on rank we know the theorem is true for the sub-root system $w_{\lambda} \Phi_{I^{0,1}_{\lambda}}$. Hence, the desired polynomial $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ is just the corresponding polynomial for the orthogonal projection of $\lambda$ onto $\mathrm{Span}_{\mathbb{R}}(w_{\lambda} \Phi_{I^{0,1}_{\lambda}})$. (Here we use the fact that the map $\eta_{\mathbf k}$ respects this projection: but this is clear because the projection of $\lambda$ and the projection of $w_{\lambda}(\rho_{\mathbf{k}})$ onto the weight lattice of~$w_{\lambda} \Phi_{I^{0,1}_{\lambda}}$ are both dominant with respect to the choice of $w_{\lambda} \Phi^{+}_{I^{0,1}_{\lambda}}$ as positive roots, which is a subset of~$\Phi^{+}$ by Proposition~\ref{prop:posparaboliccosets}.)
So now assume~$I^{0,1}_{\lambda} = [n]$. This means that $\lambda$ is dominant. Let $\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good. Set~$S \coloneqq \{\mu\in P\colon \textrm{$\langle\mu,\alpha^\vee\rangle\neq -1$ for all $\alpha \in \Phi^{+}$}, \, \eta_{\mathbf{k}}(\mu) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))\}$; i.e.,~$S$ is the set of all labels of sinks of $\Gamma_{\mathrm{sym},\mathbf{k}}$ that are inside of $\Pi^Q(\eta_{\mathbf{k}}(\lambda))$.
We claim that in fact~$S = \{\mu\in P\colon \textrm{$\langle\mu,\alpha^\vee\rangle\neq -1$ for all $\alpha \in \Phi^{+}$}, \, \mu \in \Pi^Q(\lambda)\}$. Indeed, for~$\mu\in P$ with $\langle\mu,\alpha^\vee\rangle\neq -1$ for all $\alpha \in \Phi^{+}$, we have~$ \eta_{\mathbf{k}}(\mu) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ if and only if~$ \eta_{\mathbf{k}}(\mu)_{\mathrm{dom}} = \eta_{\mathbf{k}}(\mu_{\mathrm{dom}}) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$. By Proposition~\ref{prop:perm_containment}, we have that~$\eta_{\mathbf{k}}(\mu_{\mathrm{dom}}) \in \Pi^Q(\eta_{\mathbf{k}}(\lambda))$ if and only if $(\lambda+\rho_{\mathbf{k}}) - (\mu_{\mathrm{dom}}+\rho_{\mathbf{k}}) = \lambda-\mu_{\mathrm{dom}} \in Q_{\geq 0}$, which, again by Proposition~\ref{prop:perm_containment}, is if and only if $\mu_{\mathrm{dom}} \in \Pi^Q(\lambda)$, that is, if and only if~$\mu \in \Pi^Q(\lambda)$. Note that this second description of $S$ is independent of $\mathbf{k}$. Also note that for all $\mu\neq \lambda \in S$, either $I^{0,1}_{\mu}\neq [n]$ or $\mu=\mu_{\mathrm{dom}}$, and in the latter case we have that~$\mu$ is strictly less than $\lambda$ in root order. Now, the permutohedron non-escaping lemma, Lemma~\ref{lem:permtrap}, says that
\[\Pi^Q(\eta_{\mathbf{k}}(\lambda)) = \bigcup_{\mu \in S} (s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\mu).\]
Hence, rewriting, and taking cardinalities, we get
\[ \#(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda) = \#\Pi^Q(\eta_{\mathbf{k}}(\lambda)) - \sum_{\mu \neq \lambda \in S} \#(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\mu).\]
The quantity $\#\Pi^Q(\eta_{\mathbf{k}}(\lambda))$ is a polynomial in~$\mathbf{k}$ with integer coefficients thanks to Corollary~\ref{cor:permehrhart}. The quantity $\sum_{\mu \neq \lambda \in S} \#(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\mu)$ is a polynomial in~$\mathbf{k}$ with integer coefficients by induction on rank and on root order. Since the above equality holds for all good $\mathbf{k} \in \mathbb{N}[\Phi]^W$, we conclude that $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})= \#(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda)$ is indeed a polynomial in~$\mathbf{k}$ with integer coefficients.
\end{proof}
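The inclusion-exclusion in this proof can be checked concretely for $\Phi = A_2$ and $\lambda = \omega_1+\omega_2$ (a sketch; we work in fundamental-weight coordinates and use the identification $\eta_{\mathbf{k}}(\lambda) = \lambda + \rho_{\mathbf{k}}$ for dominant $\lambda$, both of which are our working assumptions). Here $S = \{0, \omega_1+\omega_2\}$, so the recursion reads $L^{\mathrm{sym}}_{\omega_1+\omega_2}(k) = \#\Pi^Q((k{+}1,k{+}1)) - \#\Pi^Q((k,k))$, and the counts below match the entries $3k^2+3k+1$ and $6k+6$ of Table~\ref{tab:sympolys}:

```python
# Count the discrete permutohedron Pi^Q((m,m)) for Phi = A2, in
# fundamental-weight coordinates, via Proposition prop:perm_containment:
# mu lies in Pi^Q(lam) iff mu is in lam + Q and lam - mu_dom is a
# nonnegative integer combination of alpha1 = (2,-1) and alpha2 = (-1,2).
def dominant(x, y):
    """Dominant representative of the W-orbit of (x, y)."""
    while x < 0 or y < 0:
        if x < 0:
            x, y = -x, x + y      # reflect by s1
        else:
            x, y = x + y, -y      # reflect by s2
    return x, y

def in_Q_nonneg(x, y):
    """Is (x, y) = a*alpha1 + b*alpha2 with integers a, b >= 0?"""
    a3, b3 = 2 * x + y, x + 2 * y  # 3a and 3b
    return a3 % 3 == 0 and b3 % 3 == 0 and a3 >= 0 and b3 >= 0

def count_perm(m):
    n = 0
    for x in range(-3 * m - 3, 3 * m + 4):
        for y in range(-3 * m - 3, 3 * m + 4):
            if (x - y) % 3 == 0:          # mu in Q = Q + (m,m)
                dx, dy = dominant(x, y)
                if in_Q_nonneg(m - dx, m - dy):
                    n += 1
    return n

for k in range(0, 5):
    assert count_perm(k) == 3 * k * k + 3 * k + 1            # L^sym_0(k)
    assert count_perm(k + 1) - count_perm(k) == 6 * k + 6    # L^sym_rho(k)
```

The first assertion is the $\lambda = 0$ row of the table (where, as discussed below, $(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(0) = \Pi^Q(\rho_{\mathbf{k}})$), and the second is the $\lambda = \omega_1+\omega_2$ row obtained by subtraction.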
\begin{table}
\begin{center}
\begin{tabular}{c | c | r }
$\Phi$ & $\lambda$ & $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ \\ \specialrule{2.5pt}{1pt}{1pt}
$A_2$ & $0$ & $3k^2 + 3k+1$ \\ \hline
$A_2$ & $\omega_1$ & $3k^2 + 6k + 3$ \\ \hline
$A_2$ & $\omega_2$ & $3k^2 + 6k + 3$ \\ \hline
$A_2$ & $\omega_1+\omega_2$ & $6k+6$ \\ \specialrule{2.5pt}{1pt}{1pt}
$B_2$ & $0$ & $2k_{l}^2+4k_lk_s + k_s^2 + 2k_l + 2k_s + 1$ \\ \hline
$B_2$ & $\omega_1$ & $4k_l+4k_s +4$ \\ \hline
$B_2$ & $\omega_2$ & $2k_l^2 + 4k_lk_s + k_s^2 + 6k_l + 4k_s + 4$ \\ \hline
$B_2$ & $\omega_1+\omega_2$ & $4k_l+4k_s+8$ \\ \specialrule{2.5pt}{1pt}{1pt}
$G_2$ & $0$ & $9k_l^2+12k_lk_s+3k_s^2+3k_l+3k_s+1$ \\ \hline
$G_2$ & $\omega_1$ & $12k_l+6k_s+6$ \\ \hline
$G_2$ & $\omega_2$ & $6k_l+6k_s+6$ \\ \hline
$G_2$ & $\omega_1+\omega_2$ & $6k_l+6k_s+12$ \\ \specialrule{2.5pt}{1pt}{1pt}
\end{tabular}
\end{center}
\caption[Symmetric Ehrhart-like polynomials for rank~$2$ root systems]{The polynomials $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ for the irreducible rank $2$ root systems.} \label{tab:sympolys}
\end{table}
Table~\ref{tab:sympolys} records the polynomials $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ for the irreducible rank $2$ root systems, for all $\lambda \in P_{\geq 0}$ with $I^{0,1}_{\lambda}=[n]$. Compare these polynomials to the graphs of the corresponding symmetric interval-firing processes in Example~\ref{ex:rank2graphs}.
\begin{remark}
The evaluation of the polynomial $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ for $\mathbf{k} \in \mathbb{N}[\Phi]^W$ not good may not count the number of weights in the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing~$\eta_{\mathbf{k}}(\lambda)$. For example, take $\Phi = B_2$ and $\mathbf{k}$ defined by $k_s \coloneqq 0$ and $k_l \coloneqq 1$, as in Example~\ref{ex:notgood}. Then, with $\lambda\coloneqq 0$, looking at Table~\ref{tab:sympolys} we see
\[L^{\mathrm{sym}}_{\lambda}(\mathbf{k})=2k_{l}^2+4k_lk_s + k_s^2 + 2k_l + 2k_s + 1=5,\]
while there are only four weights in the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$. (Here the ``missing'' weight is of course the origin.)
\end{remark}
\begin{conj} \label{conj:symehrhart}
The polynomials $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ have nonnegative integer coefficients.
\end{conj}
When $\lambda \in \Omega_m^{0}$, we know thanks to Proposition~\ref{prop:saturatedccs} that $(s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda) = \Pi^Q(\lambda+\rho_{\mathbf{k}})$, so Corollary~\ref{cor:permehrhart} implies that Conjecture~\ref{conj:symehrhart} is true in this case. Very recently, the second and fourth authors have proved Conjecture~\ref{conj:symehrhart} in general~\cite{hopkins2018positive}. The first step in their proof of positivity is to give a more refined version of Theorem~\ref{thm:polypluszoneehrhart} that gives an explicit formula for the number of lattice points in a polytope plus dilating zonotope.
\section{Cubical subcomplexes}
In order to proceed further in our investigation of the stabilization maps $s^{\mathrm{sym}}_{\mathbf{k}}$ and~$s^{\mathrm{tr}}_{\mathbf{k}}$, and the relation between them, we need to understand a bit more about the connected components of~$\Gamma_{\mathrm{sym},\mathbf{k}}$. We know that the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$ is contained in the discrete permutohedron $w_{\lambda}\Pi^Q_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$ (Theorem~\ref{thm:permcc}); but it can sometimes contain all of this permutohedron (see Proposition~\ref{prop:saturatedccs}) and can sometimes contain relatively little of it. In this section we will show that there is a small amount of $w_{\lambda}\Pi^Q_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$ that this connected component must always contain.
The permutohedron $\Pi_I(\lambda)$ has the structure of a polyhedral complex. The \emph{cubical subcomplex} of $\Pi_I(\lambda)$ is the union of all faces of $\Pi_I(\lambda)$ that are cubes; here a \emph{cube} means a product of pairwise orthogonal intervals. We denote the cubical subcomplex by~$^\square\Pi_I(\lambda)$. Note that every edge is a cube, and hence $^\square\Pi_I(\lambda)$ contains at least the $1$-skeleton of $\Pi_I(\lambda)$, but it may contain more. We use $^\square\Pi^Q_I(\lambda) \coloneqq \, ^\square\Pi_I(\lambda) \cap (Q+\lambda)$.
\begin{prop} \label{prop:symccscubes}
Let $\lambda \in P$ with $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$ and let $\mathbf{k} \in \mathbb{N}[\Phi]^W$. Let $Y_{\lambda}\coloneqq \{\mu\in P\colon \mu \xrightarrow[\mathrm{sym},\mathbf{k}]{*} \eta_{\mathbf{k}}(\lambda) \}$ be the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$. Then $Y_{\lambda}$ contains the discrete cubical subcomplex~$w_{\lambda} \, ^\square\Pi^Q_{I^{0,1}_{\lambda}}(\eta_\mathbf{k}(\lambda_{\mathrm{dom}}))$.
\end{prop}
\begin{proof}
By the usual projection argument that we have by now carried out many times, we can assume that $I^{0,1}_{\lambda}=[n]$ and consequently that $\lambda$ is dominant.
For any simple root~$\alpha_i$ we have that $\langle\eta_{\mathbf{k}}(\lambda),\alpha_i^\vee\rangle \in \{\mathbf{k}(\alpha_i),\mathbf{k}(\alpha_i)+1\}$. This means that we can ``unfire'' $\alpha_i$ from $\eta_{\mathbf{k}}(\lambda)$; that is, $\langle\eta_{\mathbf{k}}(\lambda)-\alpha_i,\alpha_i^\vee\rangle \leq \mathbf{k}(\alpha_i)-1$, so that there will be an edge $\eta_{\mathbf{k}}(\lambda)-\alpha_i \xrightarrow[\mathrm{sym},\mathbf{k}]{} \eta_{\mathbf{k}}(\lambda)$ of $\Gamma_{\mathrm{sym},\mathbf{k}}$. In fact, we can keep ``unfiring'' the simple root~$\alpha_i$ until we reach~$s_{\alpha_i}(\eta_{\mathbf{k}}(\lambda))$; i.e., in $\Gamma_{\mathrm{sym},\mathbf{k}}$ there is a sequence of edges
\[s_{\alpha_i}(\eta_{\mathbf{k}}(\lambda)) \xrightarrow[\mathrm{sym},\mathbf{k}]{} s_{\alpha_i}(\eta_{\mathbf{k}}(\lambda)) + \alpha_i \xrightarrow[\mathrm{sym},\mathbf{k}]{} \cdots \xrightarrow[\mathrm{sym},\mathbf{k}]{} \eta_{\mathbf{k}}(\lambda)-\alpha_i \xrightarrow[\mathrm{sym},\mathbf{k}]{} \eta_{\mathbf{k}}(\lambda).\]
(Note that it is possible that $s_{\alpha_i}(\eta_{\mathbf{k}}(\lambda))=\eta_{\mathbf{k}}(\lambda)$, in which case we would not actually be able to unfire $\alpha_i$ at all). This means that all the $(Q+\eta_{\mathbf{k}}(\lambda))$-points of the entire edge of $\Pi(\eta_{\mathbf{k}}(\lambda))$ between $\eta_{\mathbf{k}}(\lambda)$ and $s_{\alpha_i}(\eta_{\mathbf{k}}(\lambda))$ are reachable via unfirings from $\eta_{\mathbf{k}}(\lambda)$. Moreover, if $\alpha_i$ and $\alpha_j$ are orthogonal, then unfiring one of these does not affect our ability to unfire the other, and hence in this way we can reach any $(Q+\eta_{\mathbf{k}}(\lambda))$-point on a face of $\Pi(\eta_{\mathbf{k}}(\lambda))$ that is the orthogonal product of edges coming out of the vertex~$\eta_{\mathbf{k}}(\lambda)$ in the direction of a negative simple root. Since in particular $s_{\alpha_i}(\eta_{\mathbf{k}}(\lambda))$ is reachable via firings and unfirings from $\eta_{\mathbf{k}}(\lambda)$, by applying the $W$-symmetry of~$\Gamma^{\mathrm{un}}_{\mathrm{sym},\mathbf{k}}$ (Theorem~\ref{thm:symmetry}) we see that all vertices of $\Pi(\eta_{\mathbf{k}}(\lambda))$ are so reachable. But note that any face of $\Pi(\eta_{\mathbf{k}}(\lambda))$ can be transported via $W$ to a face containing $\eta_{\mathbf{k}}(\lambda)$, such that the edges of this face which contain $\eta_{\mathbf{k}}(\lambda)$ are in the direction of a negative simple root (see the proof of Theorem~\ref{thm:traverseformula}). We thus conclude that we can reach any $(Q+\eta_{\mathbf{k}}(\lambda))$-point on any cubical face of $\Pi(\eta_{\mathbf{k}}(\lambda))$ via firings and unfirings from~$\eta_{\mathbf{k}}(\lambda)$.
\end{proof}
\begin{cor} \label{cor:symccsweylorbit}
Let $\lambda \in P$ with $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$ and let $\mathbf{k} \in \mathbb{N}[\Phi]^W$. Let $Y_{\lambda}\coloneqq \{\mu\in P\colon \mu \xrightarrow[\mathrm{sym},\mathbf{k}]{*} \eta_{\mathbf{k}}(\lambda) \}$ be the connected component of $\Gamma_{\mathrm{sym},\mathbf{k}}$ containing the sink~$\eta_{\mathbf{k}}(\lambda)$. Then $Y_{\lambda}$ contains $w_{\lambda}W_{I^{0,1}_{\lambda}}(\eta_\mathbf{k}(\lambda_{\mathrm{dom}}))$. In the special case $\mathbf{k}=0$, $Y_{\lambda}$ is in fact equal to $w_{\lambda}W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$.
\end{cor}
\begin{proof}
Note that $^\square\Pi_I(\mu)$ contains at least the $1$-skeleton of $\Pi_I(\mu)$. Thus $Y_{\lambda}$ contains $w_{\lambda}W_{I^{0,1}_{\lambda}} (\eta_\mathbf{k}(\lambda_{\mathrm{dom}}))$ by Proposition~\ref{prop:symccscubes}. Now suppose $\mathbf{k}=0$. If $\mu \xrightarrow[\mathrm{sym},0]{} \mu'$ then $\mu' = \mu + \alpha$ for some $\alpha \in \Phi^{+}$ with~$\langle\mu,\alpha^\vee\rangle = -1$, which means that $\mu' = s_{\alpha}(\mu)$. Hence any two elements in a connected component of $\Gamma_{\mathrm{sym},0}$ must be related by a Weyl group element. By Corollary~\ref{cor:symconfluence}, each connected component of $\Gamma_{\mathrm{sym},0}$ contains only a single sink, and thus the component $Y_{\lambda}$ must be exactly $w_{\lambda}W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$.
\end{proof}
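The $\mathbf{k}=0$ statement can be checked concretely in rank~$2$. For $\Phi = A_2$ in fundamental-weight coordinates, the $\mathbf{k}=0$ moves are $\mu \to \mu+\alpha$ exactly when $\langle\mu,\alpha^\vee\rangle = -1$, as in the proof above. The Python sketch below (all names and the coordinate conventions are ours) verifies, for weights in a small box, that each undirected connected component of $\Gamma_{\mathrm{sym},0}$ lies in a single $W$-orbit and contains exactly one sink.

```python
from collections import deque

# A2 positive roots in fundamental-weight coordinates (a, b), each paired
# with the coefficients (c1, c2) giving <mu, alpha^vee> = c1*a + c2*b:
POS_ROOTS = [((2, -1), (1, 0)),   # alpha_1
             ((-1, 2), (0, 1)),   # alpha_2
             ((1, 1), (1, 1))]    # alpha_1 + alpha_2

def refl(i, v):  # simple reflections s_1, s_2
    a, b = v
    return (-a, a + b) if i == 1 else (a + b, -b)

def dom(v):  # dominant representative of the W-orbit of v
    while v[0] < 0 or v[1] < 0:
        v = refl(1, v) if v[0] < 0 else refl(2, v)
    return v

def is_sink(mu):  # no positive root alpha with <mu, alpha^vee> = -1
    return all(c1 * mu[0] + c2 * mu[1] != -1 for _, (c1, c2) in POS_ROOTS)

def neighbours(v):  # undirected neighbours in Gamma_{sym,0}
    out = []
    for root, (c1, c2) in POS_ROOTS:
        if c1 * v[0] + c2 * v[1] == -1:            # edge v -> v + alpha
            out.append((v[0] + root[0], v[1] + root[1]))
        u = (v[0] - root[0], v[1] - root[1])
        if c1 * u[0] + c2 * u[1] == -1:            # edge u -> v
            out.append(u)
    return out

def component(mu):  # breadth-first search for the connected component
    comp, queue = {mu}, deque([mu])
    while queue:
        v = queue.popleft()
        for w in neighbours(v):
            if w not in comp:
                comp.add(w)
                queue.append(w)
    return comp

for a in range(-4, 5):
    for b in range(-4, 5):
        comp = component((a, b))
        assert len({dom(v) for v in comp}) == 1    # one W-orbit per component
        assert sum(is_sink(v) for v in comp) == 1  # exactly one sink
```

For instance, the component of $(2,1)$ is $\{(2,1),(3,-1)\}$, a proper subset of the (six-element) $W$-orbit, matching the parabolic coset orbit in the corollary.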
\section{How interval-firing components decompose} \label{sec:decompose}
In this section, we study how symmetric and truncated interval-firing components ``decompose'' into smaller components. Let us explain what we mean by ``decompose'' more precisely. For any $\mathbf{k} \in \mathbb{N}[\Phi]^W$, $\Gamma_{\mathrm{tr},{\mathbf{k}}}$ is a subgraph of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$, so the connected components of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ are unions of connected components of $\Gamma_{\mathrm{tr},{\mathbf{k}}}$. Similarly, $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ is a subgraph of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$ and so the connected components of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$ are unions of connected components of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$. What we want to show, in both cases, is that the way these components decompose into smaller components is consistent with the way we label the components by their sinks $\eta_{\mathbf{k}}(\lambda)$.
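In coordinates the two subgraph claims amount to a nesting of firing windows. Recalling (our paraphrase of the definitions from earlier in the paper) that $\mu \to \mu+\alpha$ is an edge of $\Gamma_{\mathrm{tr},\mathbf{k}}$ iff $\langle\mu,\alpha^\vee\rangle \in \{-\mathbf{k}(\alpha),\dots,\mathbf{k}(\alpha)-1\}$ and an edge of $\Gamma_{\mathrm{sym},\mathbf{k}}$ iff $\langle\mu,\alpha^\vee\rangle \in \{-\mathbf{k}(\alpha)-1,\dots,\mathbf{k}(\alpha)-1\}$, the short sketch below just confirms the interval containments:

```python
def window(lo, hi):
    # set of pairing values <mu, alpha^vee> at which alpha may be fired
    return set(range(lo, hi + 1))

for k in range(6):
    tr_k  = window(-k, k - 1)        # truncated, parameter k
    sym_k = window(-k - 1, k - 1)    # symmetric, parameter k
    tr_k1 = window(-k - 1, k)        # truncated, parameter k + 1
    # nested windows give nested edge sets, hence the component refinements:
    assert tr_k <= sym_k <= tr_k1
```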
That the connected components of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ break into connected components of $\Gamma_{\mathrm{tr},{\mathbf{k}}}$ in a way consistent with the map $\eta_{\mathbf{k}}$ turns out to be a simple consequence of the fact that these connected components contain parabolic coset orbits (i.e., a consequence of Corollary~\ref{cor:symccsweylorbit} from the previous section). This is established in the next lemma and corollary.
\begin{lemma} \label{lem:symdecompose}
For $\lambda,\mu \in P$, if $\lambda$ and $\mu$ belong to the same connected component of~$\Gamma_{\mathrm{sym},0}$, then $\eta_{\mathbf{k}}(\lambda)$ and $\eta_{\mathbf{k}}(\mu)$ belong to the same connected component of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ for all~$\mathbf{k} \in \mathbb{N}[\Phi]^W$.
\end{lemma}
\begin{proof}
Let $\lambda,\mu \in P$ belong to the same connected component of $\Gamma_{\mathrm{sym},0}$. From Corollary~\ref{cor:symccsweylorbit}, we get that $\mu_{\mathrm{dom}} = \lambda_{\mathrm{dom}}$ and also that there is some $w\in w_{\mu}W_{I^{0,1}_{\lambda}}$ such that $w^{-1}(\lambda)$ is dominant. But by Corollary~\ref{cor:cosets} this means $w \in w_{\lambda}W_{I^{0}_{\lambda_{\mathrm{dom}}}}$, and since the cosets of $W_{I^{0,1}_{\lambda}}$ are unions of cosets of $W_{I^{0}_{\lambda}}$, this means $w_{\mu}W_{I^{0,1}_{\lambda}} = w_{\lambda}W_{I^{0,1}_{\lambda}}$. Thus, Corollary~\ref{cor:symccsweylorbit} tells us that indeed $\eta_{\mathbf{k}}(\lambda)$ and $\eta_{\mathbf{k}}(\mu)$ belong to the same connected component of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ for all $\mathbf{k} \in \mathbb{N}[\Phi]^W$.
\end{proof}
\begin{cor} \label{cor:symdecompose}
For all $\mu \in P$ and all good $\mathbf{k} \in \mathbb{N}[\Phi]^W$, we have
\[ s^{\mathrm{sym}}_{\mathbf{k}}(\mu) = s^{\mathrm{sym}}_{0}(s^{\mathrm{tr}}_{\mathbf{k}}(\mu)).\]
\end{cor}
\begin{proof}
Since $\Gamma_{\mathrm{tr},{\mathbf{k}}}$ is a subgraph of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$, the $\xrightarrow[\mathrm{sym},\mathbf{k}]{}$-stabilization of $\mu$ is the same as the $\xrightarrow[\mathrm{sym},\mathbf{k}]{}$-stabilization of the $\xrightarrow[\mathrm{tr},\mathbf{k}]{}$-stabilization of $\mu$. But the $\xrightarrow[\mathrm{tr},\mathbf{k}]{}$-stabilization of $\mu$ is by definition $\eta_{\mathbf{k}}(\lambda)$ where $\lambda \coloneqq s^{\mathrm{tr}}_{\mathbf{k}}(\mu)$. Let $\lambda'$ be the sink of the connected component of $\Gamma_{\mathrm{sym},0}$ containing $\lambda$; hence, $\lambda' = s^{\mathrm{sym}}_{0}(\lambda)$. Then Lemma~\ref{lem:symdecompose} says that $\eta_{\mathbf{k}}(\lambda')$ is the sink of the connected component of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ containing $\eta_{\mathbf{k}}(\lambda)$. In other words, the $\xrightarrow[\mathrm{sym},\mathbf{k}]{}$-stabilization of $\lambda$ is $\eta_{\mathbf{k}}(\lambda')$, i.e., $s^{\mathrm{sym}}_{\mathbf{k}}(\mu)=\lambda'= s^{\mathrm{sym}}_{0}(s^{\mathrm{tr}}_{\mathbf{k}}(\mu))$.
\end{proof}
We want an analog of Lemma~\ref{lem:symdecompose} and Corollary~\ref{cor:symdecompose} for truncated interval-firing. But to show that the connected components of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$ break into connected components of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ in a way consistent with the map $\eta_{\mathbf{k}}$ turns out to be much more involved. In fact, for technical reasons, we are able to achieve this only assuming that~$\Phi$ is simply laced. Nevertheless, the first few steps towards giving truncated analogs of Lemma~\ref{lem:symdecompose} and Corollary~\ref{cor:symdecompose} do not require the assumption that $\Phi$ be simply laced, so we state them for general~$\Phi$.
\begin{prop} \label{prop:trmove0}
Let $\lambda \in P$ be such that $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. Suppose that $\lambda \xrightarrow[\mathrm{tr},1]{} \lambda + \beta$ for some $\beta \in \Phi^{+}$. Then $\lambda \xrightarrow[\mathrm{tr},1]{} \lambda + w_{\lambda}(\alpha_i)$ for some simple root $\alpha_i$. Moreover, in this case we have $\eta_{\mathbf{k}}(\lambda) \xrightarrow[\mathrm{tr},\mathbf{k}+1]{} \eta_{\mathbf{k}}(\lambda) + w_{\lambda}(\alpha_i)$ for all $\mathbf{k} \in \mathbb{N}[\Phi]^W$.
\end{prop}
\begin{proof}
If $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$, but $\lambda \xrightarrow[\mathrm{tr},1]{} \lambda + \beta$ for some $\beta \in \Phi^{+}$, this must mean that $\langle\lambda,\beta^\vee\rangle = 0$. Applying $w^{-1}_{\lambda}$, we get $\langle w_{\lambda}^{-1}(\lambda),w_{\lambda}^{-1}(\beta)^\vee\rangle = 0$. Since $w_{\lambda}^{-1}(\beta)^\vee$ is either a positive sum or a negative sum of simple coroots, and because $w_{\lambda}^{-1}(\lambda)=\lambda_{\mathrm{dom}}$ is dominant, this means there is some simple root $\alpha_i$ such that $\langle w_{\lambda}^{-1}(\lambda),\alpha_i^\vee\rangle=0$. But then~$\langle\lambda,w_{\lambda}(\alpha_i)^\vee\rangle=0$. And note by Proposition~\ref{prop:posparaboliccosets} that indeed $w_{\lambda}(\alpha_i)$ is positive.
To prove the last sentence of the proposition: note that
\[\langle\eta_{\mathbf{k}}(\lambda),w_{\lambda}(\alpha_i)^\vee\rangle = \langle\lambda+w_{\lambda}(\rho_{\mathbf{k}}),w_{\lambda}(\alpha_i)^\vee\rangle = \langle w_{\lambda}^{-1}(\lambda),\alpha_i^\vee\rangle + \langle\rho_{\mathbf{k}},\alpha_i^\vee\rangle = 0+\mathbf{k}(\alpha_i)=\mathbf{k}(\alpha_i);\]
so indeed, $\eta_{\mathbf{k}}(\lambda) \xrightarrow[\mathrm{tr},\mathbf{k}+1]{} \eta_{\mathbf{k}}(\lambda) + w_{\lambda}(\alpha_i)$.
\end{proof}
\begin{prop} \label{prop:trtraps}
Let $\lambda \in P$ be a weight such that $\langle\lambda,\alpha^\vee\rangle \neq -1$ for all $\alpha \in \Phi^{+}$. Let~$\mathbf{k} \in \mathbb{N}[\Phi]^W$ be good, with $\mathbf{k} \geq 1$. Let $\mu \in w_{\lambda} \Pi^Q_{I^{0,1}_{\lambda}}(\eta_{\mathbf{k}}(\lambda_{\mathrm{dom}}))$. Then $\mu$ and $\eta_{\mathbf{k}}(\lambda)$ belong to the same connected component of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$.
\end{prop}
\begin{proof}
First let us prove this proposition when $\lambda$ is dominant and $I^{0}_{\lambda} = [n]$. In this case, $\eta_{\mathbf{k}}(\lambda) \in \Pi(\rho_{\mathbf{k}+1})$. Let $\omega\in \Omega_m^0$ be such that $\lambda\in Q+\rho+\omega$. Note that, since $\mathbf{k}\geq 1$, $\eta_{\mathbf{k}}(\lambda) - \omega$ is still dominant; hence, because $P^{\mathbb{R}}_{\geq 0}\subseteq Q^{\mathbb{R}}_{\geq 0}$, we get that $\eta_{\mathbf{k}}(\lambda) - \omega \in \Pi(\rho_{\mathbf{k}+1})$ by Proposition~\ref{prop:perm_containment}. But then by definition of $\omega$ we have that~$\eta_{\mathbf{k}}(\lambda) \in \Pi^Q(\rho_{\mathbf{k}+1})+\omega$. Thus by Lemma~\ref{lem:trccs} the connected component of~$\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$ that $\eta_{\mathbf{k}}(\lambda)$ belongs to is $\Pi^Q(\rho_{\mathbf{k}+1})+\omega$. By Corollary~\ref{cor:symccsweylorbit}, the connected component of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$ that $\eta_{\mathbf{k}}(\lambda)$ belongs to contains the Weyl orbit~$W(\eta_{\mathbf{k}}(\lambda))$. Hence also the connected component of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$ that~$\eta_{\mathbf{k}}(\lambda)$ belongs to contains $W(\eta_{\mathbf{k}}(\lambda))$. But this connected component is, as mentioned,~$\Pi^Q(\rho_{\mathbf{k}+1})+\omega$; in particular, it is a convex set intersected with $Q+\eta_{\mathbf{k}}(\lambda)$. Since~$\mu$ belongs to the convex hull of $W(\eta_{\mathbf{k}}(\lambda))$ and belongs to the coset~$Q+\eta_{\mathbf{k}}(\lambda)$, this means that~$\mu \in \Pi^Q(\rho_{\mathbf{k}+1})+\omega$. So indeed $\mu$ and~$\eta_{\mathbf{k}}(\lambda)$ belong to the same connected component of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$ in this case.
Now let us address general $\lambda$. Note that $w_{\lambda}\Phi^{+}_{I^{0,1}_{\lambda}}$ is a choice of positive roots for the sub-root system $w_{\lambda}\Phi_{I^{0,1}_{\lambda}}$. Moreover, by Proposition~\ref{prop:posparaboliccosets}, $w_{\lambda}\Phi^{+}_{I^{0,1}_{\lambda}}$ is a subset of positive roots. Hence any truncated interval-firing move (with parameter $\mathbf{k}+1$) we can carry out in $w_{\lambda}\Phi_{I^{0,1}_{\lambda}}$ with choice of positive roots $w_{\lambda}\Phi^{+}_{I^{0,1}_{\lambda}}$, we can actually carry out in the original root system $\Phi$. But then note that $\langle\lambda,w_{\lambda}(\alpha_i)^\vee\rangle \in \{0,1\}$ for all~$i \in I^{0,1}_{\lambda}$; hence the result follows from the previous paragraph by orthogonally projecting $\lambda$ and~$\mu$ onto~$\mathrm{Span}_{\mathbb{R}}(w_{\lambda}\Phi_{I^{0,1}_{\lambda}})$.
\end{proof}
The strategy will be to use Proposition~\ref{prop:trmove0} to say that whenever we have a $\xrightarrow[\mathrm{tr},1]{}$-move from a sink of $\Gamma_{\mathrm{sym},{0}}$, we have a corresponding $\xrightarrow[\mathrm{tr},\mathbf{k}+1]{}$-move from the corresponding sink of $\Gamma_{\mathrm{sym},{\mathbf{k}}}$; then we will apply Proposition~\ref{prop:trtraps} to say that this move actually gets us ``trapped'' in the correct connected component of $\Gamma_{\mathrm{tr},{\mathbf{k}+1}}$. But we have reached the point where to carry out this strategy we must assume that~$\Phi$ is simply laced.
\begin{prop} \label{prop:trmove1}
Suppose that $\Phi$ is simply laced. Let $\mu \in P_{\geq 0}$ be dominant. Suppose~$\mu \xrightarrow[\mathrm{tr},1]{} \lambda$ where $\lambda = \mu+\alpha_i$ for a simple root $\alpha_i$. Then $\lambda \in W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$.
\end{prop}
\begin{proof}
If $\mu$ is dominant but $\mu \xrightarrow[\mathrm{tr},1]{} \lambda$, this must mean that $\langle\mu,\alpha_i^\vee\rangle = 0$. Let $\Phi'$ be the irreducible sub-root system of~$\Phi_{I^{0}_{\mu}}$ that contains $\alpha_i$. Let $\theta'$ be the highest root of $\Phi'$. We claim that $\lambda_{\mathrm{dom}} = \mu +\theta'$. First of all, because $\Phi'$ is also simply laced, the Weyl group~$W'$ of $\Phi'$ acts transitively on $\Phi'$, so that there is some $w \in W'$ with~$w(\theta') = \alpha_i$. But~$W'\subseteq W_{I^{0}_{\mu}}$, the stabilizer of $\mu$, so we indeed have $w(\mu +\theta') = \mu+\alpha_i=\lambda$. Why is $\mu +\theta'$ dominant? Let $D$ be the Dynkin diagram of $\Phi$ (which is just an undirected graph since $\Phi$ is simply laced). For $I\subseteq[n]$ use $D[I]$ to denote the restriction of the Dynkin diagram to the vertices in~$I$. Note that $\Phi'= \Phi_{I}$ where $I$ is (the set of vertices of) the connected component of $D[I^{0}_{\mu}]$ containing $\alpha_i$. Hence $\theta' = \sum_{j\in I} c_j\alpha_j$ for some coefficients $c_j$. First of all, $\theta'$ is dominant in $\Phi'$, so if $j \in I$ then $\langle\theta',\alpha_j^\vee\rangle \geq 0$ and hence certainly $\langle\mu+\theta',\alpha_j^\vee\rangle \geq 0$. Now suppose $j \notin I$ and $j$ is not adjacent in $D$ to any vertex in $I$; then clearly $\langle\theta',\alpha_j^\vee\rangle = 0$ and so again $\langle\mu+\theta',\alpha_j^\vee\rangle \geq 0$. Finally, suppose $j \notin I$ but $j$ is adjacent in $D$ to some vertex in $I$; then, since $\Phi$ is simply laced and $\theta'$ is a positive root of $\Phi$, we certainly have $\langle\theta',\alpha_j^\vee\rangle \geq -1$; but $\langle\mu,\alpha_j^\vee\rangle \geq 1$ since $j\notin I^{0}_{\mu}$, and thus $\langle\mu+\theta',\alpha_j^\vee\rangle \geq 0$. So indeed $\mu+\theta'$ is dominant and so~$\lambda_{\mathrm{dom}} = \mu +\theta'$, as claimed.
Suppose for a moment that $\Phi'\neq A_1$. Then, writing $\theta'=\sum_{j=1}^{n}c_j\omega_j$, we will have that $c_j \in \{0,1\}$ for all $j \in I$; this can be seen for instance by noting that these coefficients $c_j$ are precisely the number of edges between $j$ and the ``affine node'' in the affine Dynkin diagram extending~$D[I]$ (see~\cite[VI,\S3]{bourbaki2002lie}). This means that we have~$W' \subseteq W_{I^{0,1}_{\lambda}}$, and so $w(\lambda_{\mathrm{dom}}) = \lambda$ for some $w \in W_{I^{0,1}_{\lambda}}$; or in other words, we have~$\lambda \in W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$. On the other hand, if $\Phi'=A_1$, then actually $\theta'=\alpha_i$ and so~$\lambda = \lambda_{\mathrm{dom}}$ and the claim is clear.
\end{proof}
\begin{remark} \label{rem:decomposeproblem}
Note that Proposition~\ref{prop:trmove1} is in general false when $\Phi$ is not simply laced. For example, take $\Phi=B_2$. Then, with $\mu \coloneqq 0$ and $\lambda \coloneqq \alpha_1$ (the long simple root, with numbering as in Figure~\ref{fig:dynkinclassification}), we have $\mu \xrightarrow[\mathrm{tr},1]{} \lambda$ but $\lambda \notin W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$.
\end{remark}
\begin{prop} \label{prop:trmove2}
Suppose that $\Phi$ is simply laced. Let $\mu \in P_{\geq 0}$ be dominant. Suppose that~$\mu \xrightarrow[\mathrm{tr},1]{} \lambda$ where $\lambda = \mu+\alpha_i$ for a simple root $\alpha_i$. Then $\eta_k(\mu) + \alpha_i \in \Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$ for all $k \geq 0$.
\end{prop}
\begin{proof}
The statement in the case $k=0$ follows immediately from Proposition~\ref{prop:trmove1}; so assume $k \geq 1$. By Proposition~\ref{prop:trmove1} we have that $\lambda \in W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$, which means, by Proposition~\ref{prop:perm_containment}, that $\lambda_{\mathrm{dom}}-\lambda$ is a nonnegative sum of simple roots in $I^{0,1}_{\lambda}$. Since $\mu$ is dominant we have $\eta_k(\mu) = \mu + k\rho$. Then note that $\eta_k(\mu)+\alpha_i = \lambda +k\rho = \mu+k\rho+\alpha_i$ is actually dominant as well, because $\mu$ is dominant, and $k\rho+\alpha_i$ is dominant since $\Phi$ is simply laced. Further, observe that $\eta_k(\lambda_{\mathrm{dom}})-(\eta_k(\mu)+\alpha_i)=\lambda_{\mathrm{dom}}-\lambda$. But then the fact that $\eta_k(\lambda_{\mathrm{dom}})-(\eta_k(\mu)+\alpha_i)$ is a nonnegative sum of simple roots in $I^{0,1}_{\lambda}$, together with the fact that $\eta_k(\mu)+\alpha_i$ is dominant, implies, via Proposition~\ref{prop:perm_containment}, that we have~$\eta_k(\mu) + \alpha_i \in \Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$.
\end{proof}
\begin{prop} \label{prop:trmove3}
Suppose that $\Phi$ is simply laced. Let $\mu \in P$ satisfy $\langle\mu,\alpha^\vee\rangle \neq -1$ for all $\alpha\in \Phi^{+}$. Suppose that $\mu \xrightarrow[\mathrm{tr},1]{} \lambda$ where $\lambda = \mu + w_{\mu}(\alpha_i)$ for some simple root~$\alpha_i$. Then for all $k \geq 0$, $\eta_k(\mu)$ and $\eta_k(\lambda)$ belong to the same connected component of~$\Gamma_{\mathrm{tr},{k+1}}$.
\end{prop}
\begin{proof}
If $k=0$ the claim is obvious. So assume $k \geq 1$.
Let $\lambda'$ be the sink of the connected component of $\Gamma_{\mathrm{sym},0}$ containing $\lambda$; hence by Corollary~\ref{cor:symccsweylorbit}, we have that $\lambda' \in w_{\lambda} W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$, so in particular $\lambda'_{\mathrm{dom}}= \lambda_{\mathrm{dom}}$. Now, if $\langle\mu,\alpha^\vee\rangle \neq -1$ for all $\alpha\in \Phi^{+}$ and $\mu \xrightarrow[\mathrm{tr},1]{} \lambda$ this means that $\langle\mu,w_{\mu}(\alpha_i)^\vee\rangle = 0$. Hence we also have $\mu_{\mathrm{dom}} \xrightarrow[\mathrm{tr},1]{} \mu_{\mathrm{dom}} +\alpha_i$. Then $\mu_{\mathrm{dom}} + \alpha_i \in W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$ by Proposition~\ref{prop:trmove1}; and so by applying $w_{\mu}$ we get $\lambda \in w_{\mu}W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$. This implies $\lambda' \in w_{\mu}W_{I^{0,1}_{\lambda}}(\lambda_{\mathrm{dom}})$, so that $(w_{\mu}w)^{-1}(\lambda')$ is dominant for some $w \in W_{I^{0,1}_{\lambda}}$. But because of Corollary~\ref{cor:cosets} that means that $w_{\mu}w=w_{\lambda'}w'$ for some $w' \in W_{I^{0}_{\lambda}}$.
By Proposition~\ref{prop:trmove2} we get that $\eta_k(\mu_{\mathrm{dom}}) + \alpha_i \in \Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$. By applying~$w_{\mu}$ we get $\eta_k(\mu)+w_{\mu}(\alpha_i) \in w_{\mu} \Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$. Note that since $w \in W_{I^{0,1}_{\lambda}}$, we have that~$w_{\mu} \Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}})) = w_{\mu}w \Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$. Similarly,~$w' \in W_{I^{0}_{\lambda}} \subseteq W_{I^{0,1}_{\lambda}}$ implies that~$w_{\lambda'}w'\Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))=w_{\lambda'}\Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$. Hence, we can conclude that~$\eta_k(\mu)+w_{\mu}(\alpha_i) \in w_{\lambda'}\Pi^{Q}_{I^{0,1}_{\lambda}}(\eta_k(\lambda_{\mathrm{dom}}))$. Since $\lambda'$ is a sink of $\Gamma_{\mathrm{sym},0}$ (and thus, by Lemma~\ref{lem:symsinks}, satisfies $\langle\lambda',\alpha^\vee\rangle \neq -1$ for all~$\alpha \in \Phi^{+}$), we can apply Proposition~\ref{prop:trtraps} to conclude that~$\eta_k(\lambda')$ and~$\eta_k(\mu)+w_{\mu}(\alpha_i)$ belong to the same connected component of~$\Gamma_{\mathrm{tr},{k+1}}$.
But since $\lambda$ and $\lambda'$ belong to the same connected component of $\Gamma_{\mathrm{sym},0}$, Lemma~\ref{lem:symdecompose} tells us that $\eta_k(\lambda)$ and $\eta_k(\lambda')$ belong to the same connected component of $\Gamma_{\mathrm{sym},k}$, and hence also belong to the same connected component of $\Gamma_{\mathrm{tr},{k+1}}$. Then note by Proposition~\ref{prop:trmove0} that we have $\eta_k(\mu) \xrightarrow[\mathrm{tr},k+1]{} \eta_k(\mu)+w_{\mu}(\alpha_i)$, so $\eta_k(\mu)$ and $\eta_k(\mu)+w_{\mu}(\alpha_i)$ belong to the same connected component of $\Gamma_{\mathrm{tr},{k+1}}$. Putting it all together, $\eta_k(\mu)$ and~$\eta_k(\lambda)$ belong to the same connected component of $\Gamma_{\mathrm{tr},{k+1}}$, as claimed.
\end{proof}
Finally, we are able to prove the desired analogs of Lemma~\ref{lem:symdecompose} and Corollary~\ref{cor:symdecompose} in the simply laced case.
\begin{lemma} \label{lem:trdecompose}
Suppose that $\Phi$ is simply laced. For $\lambda,\mu \in P$, if $\lambda$ and $\mu$ belong to the same connected component of~$\Gamma_{\mathrm{tr},1}$, then $\eta_k(\lambda)$ and $\eta_k(\mu)$ belong to the same connected component of $\Gamma_{\mathrm{tr},{k+1}}$ for all~$k \geq 0$.
\end{lemma}
\begin{proof}
Clearly it suffices to prove this when $\lambda$ is a sink of $\Gamma_{\mathrm{tr},1}$. So let us describe one way to compute the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\eta_k(\mu)$. If $\mu$ is not a sink of $\Gamma_{\mathrm{sym},0}$, then by Lemma~\ref{lem:symdecompose} we know that $\eta_k(\mu)$ is in the same connected component of~$\Gamma_{\mathrm{sym},k}$ as $\eta_k(\mu')$, where $\mu'$ is the sink of the component of $\Gamma_{\mathrm{sym},{0}}$ containing $\mu$; so then to compute the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\eta_k(\mu)$ we instead compute the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of~$\eta_k(\mu')$. So now assume that $\mu$ is a sink of $\Gamma_{\mathrm{sym},0}$. Then, if $\mu$ is not a sink of~$\Gamma_{\mathrm{tr},1}$, by Proposition~\ref{prop:trmove0} there is a simple root~$\alpha_i$ with $\mu \xrightarrow[\mathrm{tr},1]{} \mu'$ where $\mu' = \mu+w_{\mu}(\alpha_i)$. By Proposition~\ref{prop:trmove3} we get that $\eta_k(\mu)$ and $\eta_k(\mu')$ are in the same connected component of $\Gamma_{\mathrm{tr},{k+1}}$; so again to compute the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\eta_k(\mu)$ we instead compute the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\eta_k(\mu')$. Because $\xrightarrow[\mathrm{tr},1]{}$ is terminating, this procedure will eventually terminate; in fact, it must terminate at computing the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\eta_k(\mu)$ where~$\mu$ is a sink of $\Gamma_{\mathrm{tr},1}$. But there is only one sink of the connected component of~$\Gamma_{\mathrm{tr},1}$ containing $\mu$, namely,~$\lambda$; so the lemma is proved.
\end{proof}
\begin{cor} \label{cor:trdecompose}
Suppose that $\Phi$ is simply laced. Then for all $\mu \in P$ and all $k \geq 0$, we have
\[ s^{\mathrm{tr}}_{k+1}(\mu) = s^{\mathrm{tr}}_{1}(s^{\mathrm{sym}}_{k}(\mu)).\]
\end{cor}
\begin{proof}
This follows from Lemma~\ref{lem:trdecompose} in the same way that Corollary~\ref{cor:symdecompose} follows from Lemma~\ref{lem:symdecompose}. Since $\Gamma_{\mathrm{sym},{k}}$ is a subgraph of $\Gamma_{\mathrm{tr},{k+1}}$, the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\mu$ is the same as the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of the $\xrightarrow[\mathrm{sym},k]{}$-stabilization of $\mu$. But the $\xrightarrow[\mathrm{sym},k]{}$-stabilization of $\mu$ is by definition $\eta_k(\lambda)$ where $\lambda \coloneqq s^{\mathrm{sym}}_{k}(\mu)$. Let $\eta_1(\lambda')$ be the sink of the connected component of $\Gamma_{\mathrm{tr},1}$ containing $\lambda$; hence, $\lambda' = s^{\mathrm{tr}}_{1}(\lambda)$. Then Lemma~\ref{lem:trdecompose} says that $\eta_{k}(\eta_{1}(\lambda')) = \eta_{k+1}(\lambda')$ (this equality follows from Proposition~\ref{prop:etafacts}) is the sink of the connected component of $\Gamma_{\mathrm{tr},{k+1}}$ containing $\eta_{k}(\lambda)$. In other words, the $\xrightarrow[\mathrm{tr},k+1]{}$-stabilization of $\lambda$ is $\eta_{k+1}(\lambda')$, i.e., $s^{\mathrm{tr}}_{k+1}(\mu)=\lambda'= s^{\mathrm{tr}}_{1}(s^{\mathrm{sym}}_{k}(\mu))$.
\end{proof}
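Both composition identities can be machine-checked in rank~$2$. The Python sketch below (all names and the coordinate conventions are ours) implements $A_2$ interval-firing in fundamental-weight coordinates, using the firing windows $\langle\mu,\alpha^\vee\rangle \in \{-k,\dots,k-1\}$ (truncated) and $\{-k-1,\dots,k-1\}$ (symmetric), our paraphrase of the definitions from earlier in the paper; it relies on confluence and termination, so the greedy firing order is immaterial. It then verifies $s^{\mathrm{sym}}_k = s^{\mathrm{sym}}_0 \circ s^{\mathrm{tr}}_k$ (Corollary~\ref{cor:symdecompose}) and $s^{\mathrm{tr}}_{k+1} = s^{\mathrm{tr}}_1 \circ s^{\mathrm{sym}}_k$ (Corollary~\ref{cor:trdecompose}; $A_2$ is simply laced) on a box of weights.

```python
# A2 positive roots in fundamental-weight coordinates (a, b), paired with
# the coefficients (c1, c2) giving <mu, alpha^vee> = c1*a + c2*b:
POS_ROOTS = [((2, -1), (1, 0)),   # alpha_1
             ((-1, 2), (0, 1)),   # alpha_2
             ((1, 1), (1, 1))]    # alpha_1 + alpha_2

def refl(i, v):  # simple reflections s_1, s_2
    a, b = v
    return (-a, a + b) if i == 1 else (a + b, -b)

def eta(lam, k):  # eta_k(lambda) = lambda + k * w_lambda(rho)
    v, word = lam, []
    while v[0] < 0 or v[1] < 0:   # greedy reduced word for w_lambda
        i = 1 if v[0] < 0 else 2
        word.append(i)
        v = refl(i, v)
    r = (1, 1)                    # rho = omega_1 + omega_2
    for i in reversed(word):
        r = refl(i, r)
    return (lam[0] + k * r[0], lam[1] + k * r[1])

def stabilize(mu, k, kind):
    # fire alpha while <mu, alpha^vee> lies in [-k, k-1] ("tr") or
    # [-k-1, k-1] ("sym"); confluence makes the sink order-independent
    lo = -k if kind == "tr" else -k - 1
    while True:
        for root, (c1, c2) in POS_ROOTS:
            if lo <= c1 * mu[0] + c2 * mu[1] <= k - 1:
                mu = (mu[0] + root[0], mu[1] + root[1])
                break
        else:
            return mu

# invert the (injective) map eta_k on a large box, so sinks can be
# relabelled by the weights indexing them:
ETA_INV = {k: {} for k in range(4)}
for a in range(-40, 41):
    for b in range(-40, 41):
        for k in range(4):
            ETA_INV[k][eta((a, b), k)] = (a, b)

def s_map(mu, k, kind):  # the stabilization maps s^tr_k and s^sym_k
    return ETA_INV[k][stabilize(mu, k, kind)]

for a in range(-4, 5):
    for b in range(-4, 5):
        mu = (a, b)
        for k in range(3):
            # s^sym_k = s^sym_0 o s^tr_k:
            assert s_map(mu, k, "sym") == s_map(s_map(mu, k, "tr"), 0, "sym")
            # s^tr_{k+1} = s^tr_1 o s^sym_k (simply laced):
            assert s_map(mu, k + 1, "tr") == s_map(s_map(mu, k, "sym"), 1, "tr")
```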
We expect that (with the appropriate care regarding the goodness of $\mathbf{k}\in \mathbb{N}[\Phi]^W$) Lemma~\ref{lem:trdecompose} and Corollary~\ref{cor:trdecompose} should hold in the non-simply-laced case as well, but, as we mentioned in Remark~\ref{rem:decomposeproblem}, our method of proof does not work there.
\section{Truncated Ehrhart-like polynomials}\label{sec:tr_Ehrhart}
The existence of the truncated Ehrhart-like polynomials, in the simply laced case, follows easily from the fact that truncated components decompose into symmetric ones in a consistent way (together with the existence of the symmetric Ehrhart-like polynomials).
\begin{thm} \label{thm:trehrhart}
Suppose that $\Phi$ is simply laced. Then, for any $\lambda \in P$, for all~$k \geq 1$ the quantity $L^{\mathrm{tr}}_{\lambda}(k)$ is given by a polynomial in~$k$ with integer coefficients.
\end{thm}
\begin{proof}
By Corollary~\ref{cor:trdecompose}, for any $k\geq 1$ and any $\lambda \in P$ we have
\begin{align*}
\#(s^{\mathrm{tr}}_{k})^{-1}(\lambda) &= \#(s^{\mathrm{sym}}_{k-1})^{-1}((s^{\mathrm{tr}}_1)^{-1}(\lambda))\\
&= \sum_{\mu \in (s^{\mathrm{tr}}_1)^{-1}(\lambda)} L^{\mathrm{sym}}_{\mu}(k-1).
\end{align*}
The right-hand side of this expression is an evaluation of a polynomial (with integer coefficients) because of Theorem~\ref{thm:symehrhart}. Since this identity holds for all $k\geq 1$, we conclude that the desired polynomial $L^{\mathrm{tr}}_{\lambda}(k)$ does exist.
\end{proof}
This finishes the proof of Theorem~\ref{thm:Ehrhart_intro}.
\begin{table}
\begin{center}
\definecolor{Gray}{gray}{0.9}
\begin{tabular}{c | r}
$\lambda$ & $L^{\mathrm{tr}}_{\lambda}(k)$ \\ \specialrule{2.5pt}{1pt}{1pt}
\rowcolor{Gray} $0$ & $3k^2+3k+1$ \\ \hline
\rowcolor{Gray} $\omega_1$ & $3k^2+3k+1$ \\ \hline
$-\omega_1+\omega_2$ & $2k+1$ \\ \hline
$-\omega_2$ & $k+1$ \\ \hline
\rowcolor{Gray} $\omega_2$ & $3k^2+3k+1$ \\ \hline
$\omega_1-\omega_2$ & $2k+1$ \\ \hline
$-\omega_1$ & $k+1$ \\ \hline
\rowcolor{Gray} $\omega_1 + \omega_2$ & $2k+1$ \\ \hline
$-\omega_1+2\omega_2$ & $k+1$ \\ \hline
$2\omega_1 - \omega_2$ & $k+1$ \\ \hline
$-2\omega_1 + \omega_2$ & $k+1$ \\ \hline
$\omega_1-2\omega_2$ & $k+1$ \\ \hline
$-\omega_1-\omega_2$ & $1$ \\ \specialrule{2.5pt}{1pt}{1pt}
\end{tabular}
\end{center}
\caption[Truncated Ehrhart-like polynomials for~$A_2$]{The polynomials $L^{\mathrm{tr}}_{\lambda}(k)$ for $\Phi=A_2$.} \label{tab:trpolys}
\end{table}
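The entries of Table~\ref{tab:trpolys} can be checked by brute force. The sketch below (names and coordinate conventions are ours) enumerates truncated stabilizations for $\Phi = A_2$ in fundamental-weight coordinates, assuming the truncated firing window $\langle\mu,\alpha^\vee\rangle \in \{-k,\dots,k-1\}$, and confirms the rows $\lambda = 0$ (where the preimage count is the Ehrhart polynomial $3k^2+3k+1$ of the hexagon $\Pi(\rho_k)$) and $\lambda = -\omega_1-\omega_2$ for $k = 1, 2, 3$.

```python
# A2 positive roots in fundamental-weight coordinates (a, b), paired with
# the coefficients (c1, c2) giving <mu, alpha^vee> = c1*a + c2*b:
POS_ROOTS = [((2, -1), (1, 0)),   # alpha_1
             ((-1, 2), (0, 1)),   # alpha_2
             ((1, 1), (1, 1))]    # alpha_1 + alpha_2

def stabilize_tr(mu, k):
    # truncated interval-firing: fire alpha while <mu, alpha^vee> in [-k, k-1]
    while True:
        for root, (c1, c2) in POS_ROOTS:
            if -k <= c1 * mu[0] + c2 * mu[1] <= k - 1:
                mu = (mu[0] + root[0], mu[1] + root[1])
                break
        else:
            return mu

def count_preimage(sink, k, box=12):
    # number of mu in a box stabilizing to the given sink eta_k(lambda)
    return sum(stabilize_tr((a, b), k) == sink
               for a in range(-box, box + 1) for b in range(-box, box + 1))

for k in (1, 2, 3):
    # lambda = 0: sink eta_k(0) = k*rho = (k, k); row "0" of the table
    assert count_preimage((k, k), k) == 3 * k**2 + 3 * k + 1
    # lambda = -omega_1 - omega_2: sink (-1 - k, -1 - k); last row
    assert count_preimage((-1 - k, -1 - k), k) == 1
```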
\begin{conj} \label{conj:trehrhart}
For any~$\Phi$ and~$\lambda \in P$, for all good $\mathbf{k} \in \mathbb{N}[\Phi]^W$ the quantity $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ is given by a polynomial in $\mathbf{k}$ with nonnegative integer coefficients.
\end{conj}
Note that the fact that we can take $\mathbf{k}=0$ in Conjecture~\ref{conj:trehrhart} means that the constant term of the $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$ polynomials should be~$1$ (which, compared to the symmetric polynomials, makes them even more like Ehrhart polynomials of zonotopes). Strictly speaking, our Theorem~\ref{thm:trehrhart} does not establish that these polynomials have constant term~$1$ even in the simply laced case.
\begin{remark}
Table~\ref{tab:trpolys} records the polynomials $L^{\mathrm{tr}}_{\lambda}(k)$ for $\Phi=A_2$, for all $\lambda \in P$ with $I_{\lambda_{\mathrm{dom}}}^{0,1}=[n]$. Compare these polynomials to the graphs of the $A_2$ truncated interval-firing processes in Example~\ref{ex:rank2graphs}. In agreement with Conjecture~\ref{conj:trehrhart}, all these polynomials have constant term $1$. Note that, for $\lambda \in P$ with $L^{\mathrm{sym}}_{\lambda}(k) \neq 0$, the constant term of $L^{\mathrm{sym}}_{\lambda}(k)$ is by definition equal to the number of vertices in the connected component of $\Gamma_{\mathrm{sym},{0}}$ containing $\lambda$, which by Lemma~\ref{lem:symdecompose} is also equal to the number of connected components of~$\Gamma_{\mathrm{tr},{k}}$ contained in the connected component of~$\Gamma_{\mathrm{sym},{k}}$ with sink $\eta_{k}(\lambda)$ for all $k\geq 0$.
\end{remark}
We know that Conjecture~\ref{conj:trehrhart} holds for $\lambda \in \Omega_m^0$. That is because, for $\lambda \in \Omega_m^0$, Lemma~\ref{lem:trccs} tells us that $(s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\lambda)=\Pi(\rho_{\mathbf{k}})+\lambda$, and hence $\#(s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\lambda)$ is literally the Ehrhart polynomial of a zonotope.
Polynomials with nonnegative integer coefficients occupy a special place in algebraic combinatorics. Of course it would be great, in the course of positively resolving Conjectures~\ref{conj:symehrhart} and~\ref{conj:trehrhart}, to also give a combinatorial interpretation of the coefficients of these polynomials. (In fact, for the symmetric polynomials, this is precisely what is done in~\cite{hopkins2018positive}.) It would also be extremely interesting to relate these polynomials to the representation theory or algebraic geometry attached to the root system~$\Phi$, and establish positivity in that way. These polynomials arose for us in the course of a purely combinatorial investigation, but it is hard to imagine that they do not have some deeper significance if they indeed have nonnegative integer coefficients.
\begin{remark} \label{rem:ehrhart_symmetry}
It is also worth considering how the stabilization maps $s^{\mathrm{sym}}_{\mathbf{k}}$ and $s^{\mathrm{tr}}_{\mathbf{k}}$ interact with the symmetries of $\Gamma^{\mathrm{un}}_{\mathrm{sym},{\mathbf{k}}}$ and $\Gamma^{\mathrm{un}}_{\mathrm{tr},{\mathbf{k}}}$ coming from Theorem~\ref{thm:symmetry}. For the symmetric stabilization maps: if $\lambda \in P$ and $w \in W^{I^{0,1}_{\lambda}}$, then it is not hard to deduce from Lemma~\ref{lem:symdecompose} that
\[ (s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(w(\lambda)) = w((s^{\mathrm{sym}}_{\mathbf{k}})^{-1}(\lambda))\]
for all good $\mathbf{k} \in \mathbb{N}[\Phi]^W$. Of course this implies that
\[ L^{\mathrm{sym}}_{w(\lambda)}(\mathbf{k}) = L^{\mathrm{sym}}_{\lambda}(\mathbf{k}),\]
in this case. Meanwhile, it appears that if $w \in C\subseteq W$ and $\varphi\colon P\to P$ is the affine map $\varphi\colon v\mapsto w(v-\rho/h)+\rho/h$, then
\[ (s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\varphi(\lambda)) = \varphi((s^{\mathrm{tr}}_{\mathbf{k}})^{-1}(\lambda))\]
for all $\lambda \in P$ and all good $\mathbf{k} \in \mathbb{N}[\Phi]^W$. But even in the simply laced case, where we have Lemma~\ref{lem:trdecompose} at our disposal, in order to conclude that $s^{\mathrm{tr}}_{k}$ indeed respects the symmetry $\varphi$ in this way, we would need to know that this is the case for $\mathbf{k}=1$; and, as we mention in the next section, we do not currently have a great understanding of $\Gamma_{\mathrm{tr},{1}}$. So to show that the truncated stabilization maps and polynomials have the expected symmetries coming from the subgroup $C$ would require some more work.
\end{remark}
\section{Iterative descriptions of the stabilization} \label{sec:iterate}
Finally, let us focus a little more on what our decomposition results tell us about the relationship between the polynomials $L^{\mathrm{sym}}_{\lambda}(\mathbf{k})$ and $L^{\mathrm{tr}}_{\lambda}(\mathbf{k})$, and between the stabilization maps $s^{\mathrm{sym}}_{\mathbf{k}}$ and $s^{\mathrm{tr}}_{\mathbf{k}}$. So, let us assume that $\Phi$ is simply laced for the remainder of this section. It is clear that Corollaries~\ref{cor:symdecompose} and~\ref{cor:trdecompose} imply the following identities relating these polynomials for all $\lambda \in P$ and all $k\geq 1$:
\begin{align*}
L^{\mathrm{sym}}_{\lambda}(k) &= \sum_{\mu \in (s^{\mathrm{sym}}_0)^{-1}(\lambda)} L^{\mathrm{tr}}_{\mu}(k); \\
L^{\mathrm{tr}}_{\lambda}(k) &= \sum_{\mu \in (s^{\mathrm{tr}}_1)^{-1}(\lambda)} L^{\mathrm{sym}}_{\mu}(k-1).
\end{align*}
What is more, these corollaries also immediately imply some striking, iterative descriptions of the stabilization functions:
\begin{cor} \label{cor:iterative}
Suppose that $\Phi$ is simply laced. Then for all $\mu \in P$ and all $k \geq 1$:
\begin{itemize}
\item $s^{\mathrm{sym}}_1(\mu) = s^{\mathrm{sym}}_0(s^{\mathrm{tr}}_1(\mu))$;
\item $s^{\mathrm{sym}}_k(\mu) = (s^{\mathrm{sym}}_1)^{k}(\mu)$;
\item $s^{\mathrm{tr}}_k(\mu) = s^{\mathrm{tr}}_1((s^{\mathrm{sym}}_1)^{k-1}(\mu))$.
\end{itemize}
\end{cor}
Corollary~\ref{cor:iterative} says that the information of all of the stabilization maps is contained just in $s^{\mathrm{sym}}_0$ and $s^{\mathrm{tr}}_1$. Now, $s^{\mathrm{sym}}_0$ is pretty simple to understand: for example, its fibers are just parabolic Weyl coset orbits (see Corollary~\ref{cor:symccsweylorbit}). So somehow all of the complexity of all truncated and symmetric interval-firing processes (or, at least all the complexity related to \emph{stabilization} for these interval-firing processes) is contained just in~$\Gamma_{\mathrm{tr},{1}}$. Admittedly, we do not understand $\Gamma_{\mathrm{tr},{1}}$ very well. It would be very interesting, for example, to try to find an explicit description of the connected components of $\Gamma_{\mathrm{tr},{1}}$.
Finally, we end the paper by discussing another surprising consequence of Corollary~\ref{cor:iterative}: for all $\lambda \in P$ and all~$k\geq 1$,
\[ \#((s^{\mathrm{sym}}_1)^{k})^{-1}(\lambda) = L^{\mathrm{sym}}_{\lambda}(k).\]
In other words, we have a map $f\colon X \to X$ from some discrete set to itself, such that the sizes $\#(f^k)^{-1}(x)$ of fibers of iterates of this map are given by polynomials (in $k$) for every point $x \in X$. In fact, we have many such maps, one for each simply laced root system. This is a very special property for a self-map of a discrete set to have. In the next two examples we show what this looks like in the simplest cases.
\begin{figure}
\caption{The map $s^{\mathrm{sym}}_1$ for $\Phi=A_1$.}
\label{fig:symtabmapa1}
\end{figure}
\begin{example}
Although we have so far been eschewing one-dimensional examples, in fact $s^{\mathrm{sym}}_1$ is interesting even for $A_1$. Figure~\ref{fig:symtabmapa1} depicts $s^{\mathrm{sym}}_1$ for $\Phi=A_1$. Of course in this picture we draw an arrow from $\mu$ to $\lambda$ to mean that $s^{\mathrm{sym}}_1(\mu)=\lambda$. The colors of the vertices correspond to classes of weights modulo the root lattice. We write the polynomials $L^{\mathrm{sym}}_{\lambda}(k)$ above the weights in this figure. One can verify by hand that in this case~$\#((s^{\mathrm{sym}}_1)^{k})^{-1}(\lambda) = L^{\mathrm{sym}}_{\lambda}(k)$ for all $\lambda \in P$ and all~$k \geq 0$.
\end{example}
\begin{example}
Note that when $\Phi=A_2$, we have $\rho \in Q$ and hence $s^{\mathrm{sym}}_1$ preserves the root lattice and so descends to a map $s^{\mathrm{sym}}_1\colon Q\to Q$. Figure~\ref{fig:symtabmapa2} depicts $s^{\mathrm{sym}}_1\colon Q \to Q$ for $\Phi=A_2$. (As with our previous drawings for rank~$2$ interval-firing processes, we of course only depict the ``interesting,'' finite portion of this function near the origin.) Compare this figure to the symmetric interval-firing graphs for $A_2$ in Example~\ref{ex:rank2graphs} and the polynomials $L^{\mathrm{sym}}_{\lambda}(k)$ for $A_2$ recorded in Table~\ref{tab:sympolys}. Observe that indeed $((s^{\mathrm{sym}}_1)^k)^{-1}(0) = \Pi^Q(k\rho)$ for all $k\geq 1$. Also observe that $((s^{\mathrm{sym}}_1)^k)^{-1}(\alpha_1+\alpha_2)$ is the set of $Q$-lattice points on the boundary of $\Pi((k+1)\rho)$.
\end{example}
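As a quick independent check on the observation that $((s^{\mathrm{sym}}_1)^k)^{-1}(0) = \Pi^Q(k\rho)$, one can count the $Q$-lattice points of $\Pi(k\rho)$ for $\Phi=A_2$ directly. The sketch below is our illustration, not part of the paper: it uses the standard embedding of $A_2$ in the sum-zero plane of $\mathbb{R}^3$ (so $Q$ is the set of integer vectors summing to $0$) and the standard fact that the permutohedron of $(k,0,-k)$ consists exactly of the points of that plane majorized by $(k,0,-k)$. The counts come out as the centered hexagonal numbers $3k^2+3k+1$.

```python
from itertools import product

def hexagon_points(k):
    """Q-lattice points of A2 inside Pi(k*rho): Q is the integer vectors of
    R^3 summing to 0, and Pi(k*rho) is the permutohedron of (k, 0, -k)."""
    pts = []
    for a, b in product(range(-k, k + 1), repeat=2):
        c = -a - b
        s = sorted((a, b, c), reverse=True)
        # x lies in the permutohedron of (k, 0, -k) iff x is majorized by it,
        # i.e. the sorted partial sums are dominated by (k, k, 0)
        if s[0] <= k and s[0] + s[1] <= k:
            pts.append((a, b, c))
    return pts

counts = [len(hexagon_points(k)) for k in range(5)]  # centered hexagonal numbers
```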
\begin{figure}
\caption{The map $s^{\mathrm{sym}}_1\colon Q\to Q$ for $\Phi=A_2$.}
\label{fig:symtabmapa2}
\end{figure}
\end{document}
\begin{document}
\title{Weak Ricci curvature bounds for Ricci shrinkers}
\author{Bennett Chow}
\author{Peng Lu}
\author{Bo Yang$^{1}$}
\begin{abstract}
We show that for a complete Ricci shrinker there exists a sequence of
points tending to infinity at which the norm of the Ricci tensor grows
at most linearly.
\end{abstract}
All objects are $C^{\infty }$. Let $\left( \mathcal{M}^{n}\!,g\right) $ be a
Riemannian manifold and $\phi ,f:\mathcal{M}\rightarrow \mathbb{R}$.\ For $
\gamma :\left[ 0,\bar{s}\right] \rightarrow \mathcal{M}$, $\bar{s}>0$,
define $S=\gamma ^{\prime }$ and $\mathcal{J}\left( \gamma \right)
=\int_{0}^{\bar{s}}\left( \left\vert S\left( s\right) \right\vert ^{2}+2\phi
\left( \gamma \left( s\right) \right) \right) ds$. A critical point $\gamma $
of $\mathcal{J}$ on paths with fixed endpoints, called a $\phi $-geodesic,
satisfies $\nabla _{S}S=\nabla \phi $ and $\left\vert S\right\vert
^{2}-2\phi =C$. Let $\operatorname{Rc}_{f}=\operatorname{Rc}+\nabla \nabla f$. For a minimal
$\phi $-geodesic,
\begin{equation}
-\int_{0}^{\bar{s}}\zeta ^{2}\Delta _{f}\phi ds+\int_{0}^{\bar{s}}\zeta ^{2}
\operatorname{Rc}_{f}\left( S,S\right) ds\leq \int_{0}^{\bar{s}}\left( n\left( \zeta
^{\prime }\right) ^{2}-2\zeta \zeta ^{\prime }\left\langle \nabla
f,S\right\rangle \right) ds,
\label{second variation}
\end{equation}
where $\Delta _{f}=\Delta -\nabla f\cdot \nabla $ and $\zeta :\left[ 0,\bar{s
}\right] \rightarrow \mathbb{R}$ is piecewise $C^{\infty }$, vanishing at $0$
and $\bar{s}$.
Let $\left( g,f\right) $ be a complete shrinker and satisfy $\operatorname{Rc}_{f}=
\frac{1}{2}g$ and $f-\left\vert \nabla f\right\vert ^{2}=R>0$. Let $c>0$ and
$2\phi =c\frac{R}{f}$. From $\Delta _{f}R=-2\left\vert \operatorname{Rc}\right\vert
^{2}+R$ and $\Delta _{f}f=\frac{n}{2}-f$ we compute
\begin{equation*}
\Delta _{f}\frac{R}{f}=\frac{R}{f^{2}}(2f-\frac{n}{2})-2\frac{\left\vert
\operatorname{Rc}\right\vert ^{2}}{f}-4\frac{\operatorname{Rc}(\nabla f,\nabla f)}{f^{2}}+2
\frac{R|\nabla f|^{2}}{f^{3}}\leq -\frac{|\operatorname{Rc}|^{2}}{f}+4\frac{(1+\sqrt{
n})^{2}}{f}.
\end{equation*}
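For the reader's convenience, the equality preceding this inequality follows from the quotient rule for the drift Laplacian together with the shrinker identity $\nabla R=2\operatorname{Rc}\left( \nabla f\right) $, which gives $\left\langle \nabla R,\nabla f\right\rangle =2\operatorname{Rc}\left( \nabla f,\nabla f\right) $:
\begin{equation*}
\Delta _{f}\frac{R}{f}=\frac{\Delta _{f}R}{f}-\frac{2\left\langle \nabla
R,\nabla f\right\rangle }{f^{2}}-\frac{R\,\Delta _{f}f}{f^{2}}+\frac{
2R\left\vert \nabla f\right\vert ^{2}}{f^{3}};
\end{equation*}
substituting $\Delta _{f}R=-2\left\vert \operatorname{Rc}\right\vert ^{2}+R$ and
$\Delta _{f}f=\frac{n}{2}-f$ then yields the displayed equality.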
If $\zeta \!\left( s\right) \!=\!s$ for $s\in \left[ 0,1\right] $, $\zeta
\!\left( s\right) \!=\!1$ for $s\in \left[ 1,\bar{s}-1\right] $, $\zeta
\!\left( s\right) \!=\!\bar{s}-s$ for $s\in \left[ \bar{s}-1,\bar{s}\right] $
, then
\begin{equation*}
\frac{c}{2}\int_{0}^{\bar{s}}\zeta ^{2}\left( \frac{|\operatorname{Rc}|^{2}}{f}-4
\frac{(1+\sqrt{n})^{2}}{f}\right) ds+\frac{1}{2}\int_{0}^{\bar{s}}\zeta
^{2}\left\vert S\right\vert ^{2}ds\leq 2n-\int_{0}^{\bar{s}}2\zeta \zeta
^{\prime }\left\langle \nabla f,S\right\rangle ds.
\end{equation*}
Let $\gamma \left( 0\right) =x$, $\gamma \left( \bar{s}\right) =y$, and
$\bar{s}=d\left( x,y\right) $. Then $1-c\leq C\leq 1+c$; the lower bound
follows from $\frac{R}{f}\leq 1$ and the upper holds since, for a minimal
geodesic $\bar{\gamma}(s)$, $s\in \lbrack 0,\bar{s}]$, from $x$ to $y$,
\begin{equation*}
C\bar{s}\leq \int_{0}^{\bar{s}}\left( \left\vert \gamma ^{\prime }\left(
s\right) \right\vert ^{2}+c\frac{R(\gamma (s))}{f(\gamma (s))}\right) ds\leq
\int_{0}^{\bar{s}}\left( \left\vert \bar{\gamma}^{\prime }\left( s\right)
\right\vert ^{2}+c\frac{R(\bar{\gamma}(s))}{f(\bar{\gamma}(s))}\right)
ds\leq (1+c)\bar{s}.
\end{equation*}
Let $f\left( O\right) =\min_{\mathcal{M}}f\leq \frac{n}{2}$ and $r=d\left(
\cdot ,O\right) $. Then $\left\vert \nabla f\right\vert \left( z\right) \leq
\sqrt{f\left( z\right) }\leq \sqrt{\frac{n}{2}}+r\left( z\right) $. Since $
\left\vert S\right\vert \leq \sqrt{C+c}$ and $r\left( \gamma \left( s\right)
\right) \leq \min \{r\left( x\right) +s\sqrt{C+c},r\left( y\right) +\left(
\bar{s}-s\right) \sqrt{C+c}\}$,
\begin{eqnarray*}
-\int_{0}^{\bar{s}}\!\zeta \zeta ^{\prime }\left\langle \nabla
f,S\right\rangle ds\!\! &\leq &\!\!\int_{0}^{1}\!s\sqrt{f\left( \gamma
\left( s\right) \right) }\left\vert S\left( s\right) \right\vert ds+\int_{
\bar{s}-1}^{\bar{s}}\!\left( \bar{s}-s\right) \sqrt{f\left( \gamma \left(
s\right) \right) }\left\vert S\left( s\right) \right\vert ds \\
&\leq &\!\!\tfrac{1}{2}\sqrt{C+c}\left( \sqrt{2n}+r\left( x\right) +r\left(
y\right) +2\sqrt{C+c}\right) .
\end{eqnarray*}
Let $A=\sqrt{C+c}$. Since $f\left( \gamma \left( s\right) \right) \geq
f\left( O\right) $ and $\bar{s}=d\left( x,y\right) $, we have
\begin{equation*}
\int_{0}^{\bar{s}}\frac{\zeta ^{2}|\operatorname{Rc}|^{2}}{f}ds\leq \frac{4(1+\sqrt{n
})^{2}d\left( x,y\right) }{f\left( O\right) }+\frac{4(\sqrt{n}+A)^{2}}{c}+
\frac{2A\left( r\left( x\right) +r\left( y\right) \right) }{c}.\vspace{
-0.04in}
\end{equation*}
Take $x=O$ and $\bar{s}=r\left( y\right) \geq 2\sqrt{\frac{n}{2}}$. Then $
d\left( \gamma \left( s\right) ,y\right) \leq \frac{r\left( y\right) }{2}$
for $s\in \lbrack \frac{2A-1}{2A}\bar{s},\bar{s}]$ and
\begin{equation*}
\frac{(\frac{r\left( y\right) }{2A}-1)\min\limits_{s\in \lbrack (1-\frac{1}{
2A})\bar{s},\bar{s}]}|\operatorname{Rc}|^{2}\left( \gamma \left( s\right) \right) }{(
\sqrt{\frac{n}{2}}+\frac{3r\left( y\right) }{2})^{2}}\leq \int_{(1-\frac{1}{
2A})\bar{s}}^{\bar{s}-1}\frac{|\operatorname{Rc}|^{2}\left( \gamma \left( s\right)
\right) }{f\left( \gamma \left( s\right) \right) }ds\leq \operatorname{Const}\left(
r\left( y\right) +1\right) .
\end{equation*}
Thus there exists $C_{0}<\infty $ such that for any $y\in \mathcal{M}$ with $
r\left( y\right) \geq \max \{\sqrt{2n},3A\}$, there exists a point $z\in
\mathcal{M}$ with $d\left( z,y\right) \leq \frac{r\left( y\right) }{2}$ and $
|\operatorname{Rc}|\left( z\right) \leq C_{0}\left( r\left( y\right) +1\right) $.
\footnotetext[1]{
Address. Bennett Chow, Bo Yang: Math. Dept., UCSD; Peng Lu: Math. Dept., U
of Oregon.}
\end{document}
\begin{document}
\title{Convolution and Concurrency}
\author{James Cranch}
\author{Simon Doherty}
\author{Georg Struth}
\affil{University of Sheffield\\ United Kingdom}
\date{}
\maketitle
\begin{abstract}
We show how concurrent quantales and concurrent Kleene algebras
arise as convolution algebras $Q^X$ of functions from structures $X$
with two ternary relations that satisfy relational interchange laws
into concurrent quantales or Kleene algebras $Q$. The elements of
$Q$ can be understood as weights; the case $Q=\mathbb{B}$ corresponds to
a powerset lifting. We develop a correspondence theory between
relational properties in $X$ and algebraic properties in $Q$ and
$Q^X$ in the sense of modal and substructural logics, and boolean
algebras with operators. As examples, we construct the concurrent
quantales and Kleene algebras of $Q$-weighted words, digraphs,
posets, isomorphism classes of finite digraphs and pomsets.
\end{abstract}
\pagestyle{plain}
\section{Introduction}\label{S:introduction}
Our initial motivation has been the provision of recipes for
constructing graph models for concurrent quantales and concurrent
Kleene algebras~\cite{HMSW11}. These axiomatise the sequential and
parallel compositions $\cdot$ and $\|$ of concurrent and distributed
systems as well as their finite sequential and parallel iterations and
impose that these compositions interact via a lax interchange law
$(w\| x) \cdot (y\| z)\le (w\cdot y)\| (x\cdot z)$. Two classical
models are languages over finite words with respect to sequential and
shuffle composition in interleaving concurrency, and languages over
partial orders or partial words (pomsets) with respect to serial and
parallel composition in true concurrency. The relation $\le$ is
interpreted as set inclusion in these models.
In both models, the language-level algebras are constructed by lifting
structural properties of compositions from single objects---single
words, single posets---to power sets. In fact, both liftings are
instances of the classical Stone-type duality between $n+1$-ary
relations and $n$-ary operators (or modalities) on power set boolean
algebras~\cite{JonssonT51,Goldblatt}; here for ternary relations and
binary operators. In the word model, the ternary relations on words
are $u=v\cdot w$ and $u\in v\| w$; the binary operators on power sets
are $X\cdot Y=\{u\cdot v\mid u\in X\land v\in Y\}$ and
$X\| Y = \bigcup\{u\|v\mid u\in X\land v\in Y\}$. In the poset model,
the ternary relations on posets are $P=P_1\cdot P_2$ provided
$P_1\cdot P_2$ is defined ($P_1\cdot P_2$ being disjoint union with
additional arrows from each element of $P_1$ to each element of $P_2$)
and $P\preceq P_1\| P_2$ provided $P_1\| P_2$ is defined ($P_1\| P_2$
being disjoint union and the relation holds if there is a bijective
order morphism from the right-hand poset to the left-hand one). The
binary operators on power sets are
$X\cdot Y=\{P_1\cdot P_2\mid P_1\in X, P_2\in Y \text{ and } P_1\cdot
P_2 \text{ is defined}\}$
and
$X\| Y=\{P \mid \exists P_1 \in X,P_2\in Y.\ P\preceq P_1\| P_2\text{
and } P_1\| P_2 \text{ is defined}\}$.
Both constructions generalise further to weighted words and weighted
po(m)sets~\cite{Handbook} and beyond that---yet ignoring
interchange---to arbitrary functions $X\to Q$ from partial monoids or
ternary relations over $X$ into quantales
$Q$~\cite{DongolHS16,DongolHS17}. The binary operations on function
spaces $Q^X$ then generalise to convolutions of the form
$(f\ast g)\, x = \bigvee\{f\, y \bullet g\, z\mid R^x_{yz}\}$, and the
algebra on the function space $Q^X$ is called \emph{convolution
algebra}. This raises the more specific question of how concurrent
quantales and similar structures on $Q^X$ can be constructed from
ternary relations on $X$ and weight quantales $Q$---in particular, how the
above lax interchange law and its variants arise on $Q^X$. This question is
not only of structural interest. Operationally, checking relational
properties on $X$ tends to be much simpler than checking the corresponding
properties on $Q^X$, and the former suffices if the construction of $Q^X$
from $X$ and $Q$ is uniform. The rest of this article investigates this question.
First, in Section~\ref{S:summary}, we summarise the previous approach
to relational convolution in $Q^X$~\cite{DongolHS17}, where $X$ is a
set equipped with a ternary relation and $Q$ a quantale, and introduce
the basic lifting construction, namely that $Q^X$ forms a quantale if
$X$ satisfies a relational associativity law and $Q$ is a quantale,
and if a suitable set of relational units is present.
In Section~\ref{S:correspondence-interchange} we prove novel
correspondence results between relational interchange laws on $X$ and
algebraic interchange laws on $Q$ and
$Q^X$. Proposition~\ref{P:correspondence1} shows that interchange laws on
$X$ and $Q$ give rise to those on $Q^X$. In addition, under mild
non-degeneracy conditions on $Q$, interchange laws on $Q$ and $Q^X$
give rise to those on $X$ (Proposition~\ref{P:correspondence2}) and,
under mild non-degeneracy conditions on $X$, interchange laws on $X$
and $Q^X$ give rise to those on $Q$
(Proposition~\ref{P:correspondence3}). In combination, these results
show that interchange laws on $X$ and $Q$ are precisely what is needed
to obtain such laws on $Q^X$.
Additional correspondences are then presented in
Section~\ref{S:further-correspondences}. First, we prove such results
for sets of relational units in $X$ and quantalic units on $Q$ and
$Q^X$ and show how the above degeneracy conditions simplify in the
presence of units. Secondly, we show how correspondences for
(semi-)associativity and commutativity laws arise from those for
interchange.
Equipped with these correspondences we then introduce relational
interchange monoids and interchange quantales in
Section~\ref{S:interchange-quantales} and package the individual
correspondences for these structures in the main theorem of this
article (Theorem~\ref{P:interchange-quantale-correspondence}): a
correspondence result between relational monoids $X$, interchange
quantales $Q$ and interchange quantales $Q^X$. Interchange quantales
are essentially concurrent quantales without commutativity assumptions
on the ``parallel'' composition. In addition we prove a weak
Eckmann-Hilton argument that shows that certain small interchange laws
are subsumed by the one presented above.
In light of the duality between $n+1$-ary relations and boolean
algebras with $n$-ary operators, the natural question arises of how a
more general duality between $X$, $Q$ and $Q^X$ can be
obtained. Partial results are already known~\cite{HardingWW18}. We
explain the special case of the power set lifting ($Q=\mathbb{B}$) in
Section~\ref{S:duality} and relate these results to the constructions
from Section~\ref{S:correspondence-interchange}, but leave the general
case for future work.
In Section~\ref{S:interchange-kas} we specialise
Theorem~\ref{P:interchange-quantale-correspondence} to interchange
Kleene algebras and concurrent Kleene algebras, which requires
finiteness and grading assumptions on ternary relations
(Theorem~\ref{P:interchange-ka-correspondence}).
Finally, Sections~\ref{S:shuffle}-\ref{S:graph-type-languages} apply
the constructions from
Theorem~\ref{P:interchange-quantale-correspondence} and
\ref{P:interchange-ka-correspondence} to the examples mentioned above.
In Section~\ref{S:shuffle} we construct the concurrent quantale and
Kleene algebra of $Q$-weighted shuffle languages using an isomorphism
between ternary relations and certain
multimonoids~\cite{GalmicheL06}. In Section~\ref{S:graph-languages}
and \ref{S:graph-type-languages} we construct the concurrent quantale
and Kleene algebra of $Q$-weighted digraph languages and those of
isomorphism classes of finite digraphs. To prepare for these
constructions, Section~\ref{S:pims} introduces partial interchange
monoids, which form relational interchange monoids under certain
restrictions. It then suffices to show that graphs under the
operations $\cdot$ and $\|$ outlined above form such monoids. The
specialisation to (weighted) partial orders and partial words or
pomsets, which are isomorphism classes of labelled partial orders, is
then straightforward.
Ultimately these results yield uniform construction principles for
(weighted) concurrent quantales and Kleene algebras from simpler
structures such as ternary relations, multimonoids and similar ordered
monoidal structures: to construct such structures it suffices to know
the underlying relational structure, the rest is then
automatic. Beyond that, our results provide valuable structural
insights that might be stepping stones to future duality results.
\section{Relational Convolution: a Summary} \label{S:summary}
This section summarises the general approach.
Relational convolution~\cite{DongolHS17} has its origins in J\'onsson
and Tarski's boolean algebras with operators~\cite{JonssonT51}, Rota's
foundations of combinatorics~\cite{Rota64}, Sch\"utzenberger's
approach to language theory~\cite{BerstelReutenauer,Handbook} and
Goguen's L-fuzzy maps and relations~\cite{Goguen67}. It is an
operation in the algebra of functions $X\to Q$ from a set $X$ into a
complete lattice $Q$ equipped with an additional operation $\bullet$
of composition and constrained by a ternary relation on $X$, which we
identify with a predicate of type $X\to X\to X\to \mathbb{B}$. It is defined as
\begin{equation*}
(f\ast g)\, x = \bigvee_{y,z:R^x_{yz}} f\, y\bullet g\, z,
\end{equation*}
where the right-hand side abbreviates
$\bigvee \{f\, y \bullet g\, z\mid R^x_{yz}\}$ and $\bigvee$ denotes the
supremum in $Q$. It is well known that the function space $Q^X$ forms
a complete lattice when the order and sups in $Q$ are extended
pointwise~\cite{AbramskyJung}. Yet the convolution $\ast$ need not satisfy any algebraic
laws on $Q^X$, unless conditions on $R$ and $Q$ are imposed.
This is reminiscent of modal correspondence theory, where conditions
on relational Kripke frames force algebraic properties of modal
operators and vice versa, or more generally to dualities between
categories of $n+1$-ary relational structures and those of boolean
algebras with $n$-ary operators~\cite{JonssonT51,Goldblatt}. In
fact, $R$ is a ternary relational structure and $\ast$ a binary
modality similar to the product of the Lambek calculus~\cite{Lambek58},
the chop operation of interval temporal logics~\cite{MM83} or the
separating conjunction of separation logic~\cite{OHRY01}.
\begin{example}\label{ex:incidence}
Let $X$ be an incidence algebra of closed intervals (over
$\mathbb{R}$, say)~\cite{Rota64}, with interval fusion $[p,q][r,s]$
equal to $[p,s]$ if $q=r$, and undefined otherwise. Let $R^x_{yz}$
hold if the fusion of intervals $y$ and $z$ is defined and equal to
$x$. Let $Q=\mathbb{B}$ be the (complete) lattice of booleans with
$\bullet$ as meet. Functions $X\to \mathbb{B}$ are then predicates
ranging over intervals in $X$. The predicate $f\ast g$ holds of an
interval $x$ whenever it can be decomposed into a prefix interval
$y$ and a suffix interval $z$ by fusion such that $f\, y$ and
$g\, z$ both hold. This captures the semantics of the binary chop
modality of interval temporal logics~\cite{MM83}.\qed
\end{example}
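The Boolean instance of this example is small enough to execute directly. The following sketch is our illustration (not from the paper): it restricts to intervals with endpoints in $\{0,\dots,3\}$ and computes convolution in the quantale $\mathbb{B}$, where $\bigvee$ is existential quantification and $\bullet$ is conjunction.

```python
from itertools import combinations_with_replacement

# A finite fragment: closed intervals [p, q] with 0 <= p <= q <= 3
X = [(p, q) for p, q in combinations_with_replacement(range(4), 2)]

def R(x, y, z):
    # R^x_{yz}: the fusion of y and z is defined (y ends where z starts) and equals x
    return y[1] == z[0] and x == (y[0], z[1])

def convolve(f, g):
    # (f * g) x = sup{ f y . g z | R^x_{yz} }: in the Boolean quantale,
    # sup is "any" and composition is conjunction
    return {x: any(f[y] and g[z] for y in X for z in X if R(x, y, z)) for x in X}

f = {x: x[1] - x[0] == 1 for x in X}  # predicate: interval has length 1
g = {x: x[1] - x[0] == 2 for x in X}  # predicate: interval has length 2
chop = convolve(f, g)  # holds of x iff x splits as a length-1 prefix fused with a length-2 suffix
```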
It is well known that chop is associative in the convolution algebra
$\mathbb{B}^X$ due to associativity of meet in $\mathbb{B}$ and
associativity of interval fusion in $X$---up to definedness.
\begin{definition}[\cite{Rosenthal90}]\label{D:quantale}
A \emph{quantale} $Q$ is a complete lattice equipped
with an associative composition $\bullet$ that preserves sups in both
arguments: for all $a,b\in Q$ and
$A,B\subseteq Q$,
\begin{equation*}
a\bullet \left(\bigvee B\right) = \bigvee \{a\bullet b\mid b\in
B\}\qquad\text{ and }\qquad
\left(\bigvee A\right) \bullet b = \bigvee \{a\bullet b\mid a\in A\}.
\end{equation*}
A quantale is \emph{unital} if $\bullet$ has a unit $1$.
\end{definition}
Convolution is then associative in $Q^X$ if $R$ is \emph{relationally
associative}~\cite{DongolHS17}: for all $x,u,v,w\in X$,
\begin{equation*}
\exists y.\ R^y_{uv} \wedge R^x_{yw} \Leftrightarrow \exists
y.\ R^x_{uy} \wedge R^y_{vw}.
\end{equation*}
This yields one direction of a correspondence between the ternary
relation $R$ and convolution $\ast$ viewed as a binary modality.
Similarly, convolution is commutative in $Q^X$ if $R$ is
\emph{relationally commutative}: for all $x,u,v\in X$,
\begin{equation*}
R^x_{uv} \Rightarrow R^x_{vu}.
\end{equation*}
Finally, if the unary relation $E^x$ is a set of relational units for
$R$ and the quantale $Q$ unital with unit of composition $1$, then
convolution has the indicator function $\mathit{id}_E$ as a left and right
unit, where
\begin{equation*}
\mathit{id}_E\, x =
\begin{cases}
1,& \text{if } E^x,\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
\begin{definition}[\cite{DongolHS17}]\label{D:rel-units}
The set $E\subseteq X$ is a \emph{set of relational units} for the
ternary relation $R$ over $X$ if it satisfies, for all $x,y\in X$,
\begin{gather*}
\exists e.\ R^x_{ex} \wedge E^e, \qquad
R^x_{ey} \wedge E^e \Rightarrow x=y,\qquad
\exists e.\ R^x_{xe}\wedge E^e, \qquad
R^x_{ye}\wedge E^e\Rightarrow x = y.
\end{gather*}
\end{definition}
\noindent This guarantees that each $x\in X$ has a unique left unit as
well as a unique right one. With the Kronecker delta function
$\delta_x: X\to \mathbb{B}$ defined as $\delta_x\, y$ equal to $1$ if $x=y$
and to $0$ otherwise, therefore,
\begin{equation*}
\mathit{id}_E = \bigvee_{e:E^e} \delta_e.
\end{equation*}
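As a small executable check (ours, not the paper's), the empty word forms a set of relational units in the sense of Definition~\ref{D:rel-units} for the concatenation relation $R^x_{yz}\Leftrightarrow x=yz$, here verified on words of bounded length:

```python
from itertools import product

# Words of length at most 2 over {a, b}, with R^x_{yz} iff x = y + z
X = ["".join(w) for n in range(3) for w in product("ab", repeat=n)]
E = {""}  # candidate set of relational units: just the empty word

def R(x, y, z):
    return x == y + z

# The four unit axioms of Definition D:rel-units, checked by brute force
left_exists  = all(any(R(x, e, x) for e in E) for x in X)
left_unique  = all(x == y for x in X for y in X for e in E if R(x, e, y))
right_exists = all(any(R(x, x, e) for e in E) for x in X)
right_unique = all(x == y for x in X for y in X for e in E if R(x, y, e))
```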
The convolution algebras on $Q^X$ can now be described as follows.
\begin{theorem}[\cite{DongolHS17}]\label{P:conv-algebras} ~
\begin{enumerate}
\item If $R$ is relationally associative and $Q$ a quantale, then
$Q^X$ is a quantale.
\item If $R$ is relationally associative and commutative and $Q$
an abelian quantale, then $Q^X$ is an abelian quantale.
\item If $R$ is relationally associative and has relational
units, and $Q$ is a unital quantale, then $Q^X$ is a unital
quantale.
\end{enumerate}
\end{theorem}
\begin{example}\label{ex:languages}
Let $(X^\ast, \cdot,\varepsilon)$ be the free monoid over $X$. For
$u,v,w\in X^\ast$ define $R^u_{vw} \Leftrightarrow u = v\cdot w$.
Then $R$ is relationally associative with relational unit
$\varepsilon$. For any quantale $Q$, the convolution algebra is the
quantale $Q^{X^\ast}$ of $Q$-weighted languages over $X$ whereas
$\mathbb{B}^{X^\ast}$ is the usual language quantale over
$X$. Convolution is (weighted) language product (the boolean case is
also known as complex or Minkowski product). The construction
generalises to arbitrary monoids.
Words are \emph{finitely decomposable} in that each word can only be
split into finitely many prefix/suffix pairs. All sups in
convolutions therefore remain finite and $Q$ can be replaced by an
arbitrary semiring. This yields the usual weighted languages
formalised as rational power series in the sense of Sch\"utzenberger
and Eilenberg~\cite{BerstelReutenauer,Handbook}. \qed
\end{example}
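To make the boolean case of the word model fully concrete, here is a small sketch (ours): convolution with respect to $R^u_{vw}\Leftrightarrow u=v\cdot w$ is exactly the complex product of languages, with $\{\varepsilon\}$ as its unit.

```python
def language_convolution(A, B):
    # Boolean-quantale convolution over the free monoid:
    # (A * B) u holds iff u = v + w for some v in A and w in B,
    # i.e. the complex (Minkowski) product of languages
    return {v + w for v in A for w in B}

A = {"a", "ab"}
B = {"", "b"}
AB = language_convolution(A, B)
```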
\begin{example}\label{ex:relations}
Define a composition $\cdot :X\times X\to X$ on a set $X$ such that
$(a,b)\cdot (c,d)$ is equal to $(a,d)$ if $b=c$ and undefined
otherwise. For $x,y,z\in X\times X$, let $R^x_{yz}$ hold if and
only if $y\cdot z$ is defined and equal to $x$, and let
$E=\{(a,a)\mid a \in X\}$. Then $R$ is relationally associative and
the elements of $E$ are the relational units. For any quantale $Q$,
the convolution algebra $Q^{X\times X}$ is the quantale of
$Q$-weighted binary relations over $X$, while $\mathbb{B}^{X\times X}$ is
simply the quantale of binary relations over $X$. Convolution is
(weighted) relational composition. This specialises to quantales of
weighted closed intervals in linear orders, as in
Example~\ref{ex:incidence}, which can be represented by ordered
pairs $(a,b)$ in which $a\le b$.\qed
\end{example}
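The boolean case of this example is likewise directly executable. In the sketch below (our illustration), sets of pairs play the role of characteristic functions in $\mathbb{B}^{X\times X}$, and convolution computes ordinary relational composition, with $E$ as its unit.

```python
points = range(3)
pairs = [(a, b) for a in points for b in points]
E = {(a, a) for a in points}  # relational units: the identity pairs

def rel_convolution(F, G):
    # (F * G) x holds iff x = y . z for some y in F and z in G,
    # where (a, b) . (c, d) = (a, d) is defined exactly when b = c
    return {x for x in pairs
            if any(y[1] == z[0] and x == (y[0], z[1]) for y in F for z in G)}

F = {(0, 1), (1, 1)}
G = {(1, 2), (2, 0)}
FG = rel_convolution(F, G)
```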
\begin{example}\label{ex:sep-logic}
Define a composition $\oplus$ on the set of partial functions of
type $X\rightharpoonup Y$ such that $f \oplus g$ is $f\cup g$ if
$\mathit{dom}\, f$ and $\mathit{dom}\, g$ are disjoint and undefined
otherwise. The set $Y^X$ can model the heap memory area with
addresses in $X$, values in $Y$ and $\oplus$ as heaplet
addition. For $f,g,h:X\rightharpoonup Y$ let $R^f_{gh}$ hold if and
only if $g\oplus h$ is defined and equal to $f$. Then $R$ is
relationally associative and commutative; the empty partial function
(which is undefined everywhere) is its relational unit. For any
abelian quantale $Q$, the convolution algebra is the abelian
quantale $Q^{(Y^X)}$ of $Q$-weighted assertions of separation logic
over the set $Y^X$ of heaps. Convolution is separating
conjunction~\cite{DongolHS16}. The standard assertion algebra of separation
logic is formed by $\mathbb{B}^{(Y^X)}$.\qed
\end{example}
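The boolean assertion algebra of this example can also be executed on a tiny heap model. The following sketch is our illustration (not from the paper): heaps are finite partial functions represented as frozensets of address--value pairs, $\oplus$ is union of disjoint-domain heaps, and convolution is separating conjunction.

```python
from itertools import chain, combinations

# All heaps over two cells: frozensets of (address, value) pairs
cells = [(0, "x"), (1, "y")]
heaps = [frozenset(c) for c in
         chain.from_iterable(combinations(cells, r) for r in range(len(cells) + 1))]

def oplus(g1, g2):
    # g1 (+) g2 is defined iff the domains are disjoint; then it is the union
    if {a for a, _ in g1} & {a for a, _ in g2}:
        return None
    return g1 | g2

def sep_conj(F, G):
    # (F * G) h holds iff h = g1 (+) g2 for some g1 with F g1 and g2 with G g2
    return {h for h in heaps
            if any(F(g1) and G(g2) and oplus(g1, g2) == h
                   for g1 in heaps for g2 in heaps)}

points_to_0 = lambda h: any(a == 0 for a, _ in h)  # heap allocates address 0
emp = lambda h: h == frozenset()                   # empty-heap assertion, the unit
```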
\section{Correspondences for Interchange
Laws}\label{S:correspondence-interchange}
Theorem~\ref{P:conv-algebras} generalises to correspondences between
quantales with two compositions $\textcolor{orange}{\bullet}$ and
$\textcolor{teal}{\bullet}$ related by seven interchange laws and
relational structures with suitable relational constraints. The choice
of the interchange laws is explained in
Section~\ref{S:interchange-quantales}; the six small interchange laws
are precisely those that can be derived from the seventh in the
presence of suitable units. We start with the relational structures.
\begin{definition}\label{D:rel-bi-magma}~
\begin{enumerate}
\item A \emph{relational magma} $(X,R)$ is a set $X$ equipped with a
ternary relation $R:X\to X\to X\to \mathbb{B}$. It is \emph{unital} if
there is a set $E$ of relational units satisfying the axioms in
Definition~\ref{D:rel-units}.
\item A \emph{relational bi-magma} $(X,\srel,\prel)$ is a set $X$
equipped with two ternary relations $\srel$ and $\prel$. It is
\emph{unital} if $\srel$ has a set of relational units $\sE$ and
$\prel$ a set of relational units $\pE$.
\end{enumerate}
\end{definition}
\noindent The constraints considered on a bi-magma $X$ are, for
$t,u,v,w,x,y,z\in X$, the \emph{relational interchange laws}
\begin{align}
\srel^x_{uv} &\Rightarrow \prel^x_{uv}, \tag{\ri1} \label{eq:ri1}\\
\srel^x_{uv} &\Rightarrow \prel^x_{vu}, \tag{\ri2} \label{eq:ri2}\\
\exists y.\ \srel^x_{uy} \wedge \prel^y_{vw} &\Rightarrow
\exists y.\ \srel^y_{uv} \wedge \prel^x_{yw},
\tag{\ri3}\label{eq:ri3}\\
\exists y.\ \prel^y_{uv} \wedge \srel^x_{yw} &\Rightarrow \exists y.\ \prel^x_{uy}\wedge \srel^y_{vw},
\tag{\ri4}\label{eq:ri4}\\
\exists y.\ \srel^x_{uy} \land \prel^y_{vw} &\Rightarrow \exists
y. \prel^x_{vy} \land
\srel^y_{uw},\tag{\ri5}\label{eq:ri5}\\
\exists y.\ \prel^y_{uv}\land \srel^x_{yw} & \Rightarrow \exists
y. \srel^y_{uw}\land \prel^x_{yv},\tag{\ri6}\label{eq:ri6}\\
\exists y,z.\ \prel^y_{tu} \wedge \srel^x_{yz} \wedge
\prel^z_{vw} &\Rightarrow \exists y,z.\ \srel^y_{tv} \wedge \prel^x_{yz} \wedge \srel^z_{uw}.
\tag{\ri7}\label{eq:ri7}
\end{align}
They memoise the relationships between the trees in $X$
shown in
Figure~\ref{fig:memotrees}.
\begin{figure}
\caption{Trees memoised by the relational interchange laws.}
\label{fig:memotrees}
\end{figure}
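The relational interchange laws lend themselves to exhaustive checking on small finite carriers. The following Python sketch (the triple encoding and all names are ours, not part of the development) tests (\ref{eq:ri7}) for relations given as sets of triples $(x,y,z)$ encoding $R^x_{yz}$:

```python
from itertools import product

def holds_ri7(X, srel, prel):
    """Exhaustively check the relational interchange law (ri7).
    Relations are sets of triples (x, y, z) encoding R^x_{yz}."""
    def lhs(x, t, u, v, w):
        return any((y, t, u) in prel and (x, y, z) in srel and (z, v, w) in prel
                   for y in X for z in X)
    def rhs(x, t, u, v, w):
        return any((y, t, v) in srel and (x, y, z) in prel and (z, u, w) in srel
                   for y in X for z in X)
    return all(not lhs(*args) or rhs(*args) for args in product(X, repeat=5))

# Additive relational monoid on {0,...,6}: R^x_{yz} iff x = y + z.
X = list(range(7))
R = {(y + z, y, z) for y in X for z in X if y + z < 7}
print(holds_ri7(X, R, R))                                      # -> True
print(holds_ri7([0, 1], {(0, 0, 0)}, {(0, 0, 0), (0, 1, 1)}))  # -> False
```

The truncated additive relation satisfies the law with both colours equal, while the second, two-element bi-magma violates it.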
Next we turn to quantales. As their monoidal structure only emerges in
the constructions below, we work with a more general notion.
\begin{definition}\label{D:pre-bi-quantale}~
\begin{enumerate}
\item A \emph{prequantale}~\cite{Rosenthal90} is a structure
$(Q,\le,\bullet)$ such that $(Q,\le)$ is a complete lattice and the
binary operation $\bullet$ on $Q$ preserves sups in both
arguments. It is \emph{unital} if $\bullet$ has unit $1$.
\item A \emph{bi-prequantale} is a structure $(Q,\le,\scomp,\pcomp)$ such
that $(Q,\le,\scomp)$ and $(Q,\le,\pcomp)$ are both prequantales. It
is unital if $\scomp$ has unit $\se$ and $\pcomp$ unit $\pe$.
\end{enumerate}
\end{definition}
\noindent A quantale is thus a prequantale with associative composition.
For
$a,b,c,d\in Q$ we define the \emph{algebraic interchange laws}
\begin{align}
a\scomp b &\le a\pcomp b,\tag{\ai1}\label{eq:i1}\\
a\scomp b &\le b\pcomp a,\tag{\ai2}\label{eq:i2}\\
a\scomp (b\pcomp c) &\le (a\scomp b)\pcomp c,\tag{\ai3}\label{eq:i3}\\
(a\pcomp b)\scomp c &\le a\pcomp (b\scomp
c),\tag{\ai4}\label{eq:i4}\\
a\scomp (b\pcomp c) &\le b\pcomp (a\scomp c),\tag{\ai5}\label{eq:i5}\\
(a\pcomp b)\scomp c &\le (a\scomp c)\pcomp b,\tag{\ai6}\label{eq:i6}\\
(a\pcomp b)\scomp (c\pcomp d) &\le (a\scomp c)\pcomp (b\scomp
d).\tag{\ai7}\label{eq:i7}
\end{align}
Interestingly, the syntax trees of these laws in $Q$, as shown in
Figure~\ref{fig:syntaxtrees}, have the structure of the trees in $X$ in
Figure~\ref{fig:memotrees}. The following example provides some
intuition.
\begin{figure}
\caption{Syntax trees of the algebraic interchange laws.}
\label{fig:syntaxtrees}
\end{figure}
\begin{example}\label{ex:memotrees}
Let $X=Q$, $\srel^x_{ab} \Leftrightarrow x\le a\scomp b$ and
$\prel^x_{ab} \Leftrightarrow x\le a\pcomp b$. Then, for instance,
\begin{equation*}
\exists y,z.\ \prel^y_{ab} \wedge \srel^x_{yz} \wedge
\prel^z_{cd} \Leftrightarrow x \le (a\pcomp b)\scomp (c\pcomp d),
\end{equation*}
and likewise for the other terms in interchange laws. The relational
and algebraic interchange laws then translate into each other. For
instance, for (\ref{eq:ri7}) and (\ref{eq:i7}),
\begin{align*}
\left(\exists y,z.\ \prel^y_{ab} \wedge \srel^x_{yz} \wedge
\prel^z_{cd} \Rightarrow \exists y,z.\ \srel^y_{ac} \wedge
\prel^x_{yz} \wedge \srel^z_{bd}\right)
&\Leftrightarrow
\left(x \le (a\pcomp b)\scomp (c\pcomp d) \Rightarrow x\le (a\scomp
c)\pcomp (b\scomp d)\right)\\
&\Leftrightarrow
(a\pcomp b)\scomp (c\pcomp d) \le (a\scomp c)\pcomp (b\scomp d),
\end{align*}
that is,
\begin{equation*}
\left(
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&x \ar[dash,line width=.8pt,orange,dll]\ar[dash, line width=.8pt,orange,drr]&&&\\
&\circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&&&& \circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&\\
t&& u && v && w
\end{tikzcd}
\Rightarrow
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&x \ar[dash,line width=.8pt,teal,dll]\ar[dash, line width=.8pt,teal,drr]&&&\\
&\circ \ar[dash,line width=.8pt,orange,dl]\ar[dash,line width=.8pt,orange,dr]&&&& \circ \ar[dash,line width=.8pt,orange,dl]\ar[dash,line width=.8pt,orange,dr]&\\
t&& v && u && w
\end{tikzcd}
\right)
\ \Leftrightarrow \
\left(
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\scomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\pcomp \ar[dash,dl]\ar[dash,dr]&&&& \pcomp \ar[dash,dl]\ar[dash,dr]&\\
t&& u && v && w
\end{tikzcd}
\le
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\pcomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\scomp \ar[dash,dl]\ar[dash,dr]&&&& \scomp \ar[dash,dl]\ar[dash,dr]&\\
t&& v && u && w
\end{tikzcd}
\right).
\end{equation*}
With this particular encoding, the relational interchange laws
simply memoise the algebraic ones. \qed
\end{example}
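For additional intuition, the algebraic laws can be instantiated in a small model. In the powerset lattice of $\mathbb{Z}_2$, elementwise addition and intersection both preserve arbitrary unions, so they form a bi-prequantale in which (\ref{eq:i7}) can be verified exhaustively; the Python sketch below uses our own encoding:

```python
from itertools import combinations, product

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

Q = subsets({0, 1})          # powerset lattice of Z_2, ordered by inclusion

def scomp(a, b):             # elementwise addition mod 2; preserves unions in both arguments
    return frozenset((x + y) % 2 for x in a for y in b)

def pcomp(a, b):             # intersection; also preserves unions in both arguments
    return a & b

# (i7): (a pcomp b) scomp (c pcomp d)  <=  (a scomp c) pcomp (b scomp d)
ok = all(scomp(pcomp(a, b), pcomp(c, d)) <= pcomp(scomp(a, c), scomp(b, d))
         for a, b, c, d in product(Q, repeat=4))
print(ok)  # -> True
```

Here the units $\{0\}$ of addition and $\{0,1\}$ of intersection differ, so only (\ref{eq:i7}) is claimed; the small laws need not hold in this model.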
In general, however, the relationship between relational and algebraic
structure under convolution is more complex, and the left-to-right
translation in Example~\ref{ex:memotrees} may fail.
For functions $f,g:X\to Q$ we define the convolutions
\begin{equation*}
(f\sconv g)\, x = \bigvee_{y,z:\srel^x_{yz}} f\, y\scomp g\,
z\quad\text{ and}\quad (f\pconv g)\, x = \bigvee_{y,z:\prel^x_{yz}} f\, y\pcomp g\, z.
\end{equation*}
They can be represented using trees in $X$, $Q$ and $Q^X$ as
\begin{equation*}
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&\sconv \ar[dash,dl]\ar[dash,dr]&\\
f && g
\end{tikzcd}
=
\lambda x.\ \bigvee
\left\{
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&\scomp \ar[dash,dl]\ar[dash,dr]&\\
f\,y && g\, z
\end{tikzcd}
\, \middle|\,
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&x \ar[dash,line width=.8pt,orange,dl]\ar[dash, line width=.8pt,orange,dr]&\\
y && z
\end{tikzcd}
\right\}
\qquad\text{ and }\qquad
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&\pconv \ar[dash,dl]\ar[dash,dr]&\\
f && g
\end{tikzcd}
=
\lambda x.\ \bigvee
\left\{
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&\pcomp \ar[dash,dl]\ar[dash,dr]&\\
f\,y && g\, z
\end{tikzcd}
\, \middle|\,
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&x \ar[dash,line width=.8pt,teal,dl]\ar[dash, line width=.8pt,teal,dr]&\\
y && z
\end{tikzcd}
\right\}.
\end{equation*}
Convolution thus translates trees with the same structure in $X$ and
$Q$ into trees in $Q^X$.
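For the Boolean quantale $Q=\mathbb{B}$, with join given by disjunction and both compositions by conjunction, convolution can be written down in a few lines; the sketch below (our own encoding, with functions $X\to\mathbb{B}$ as dictionaries) is only illustrative:

```python
def conv(f, g, rel, X):
    """(f * g) x = OR over {(y, z) : (x, y, z) in rel} of f(y) AND g(z),
    i.e. convolution over the Boolean quantale."""
    return {x: any(f[y] and g[z] for (x2, y, z) in rel if x2 == x) for x in X}

# Additive relational monoid on {0,...,4}.
X = list(range(5))
rel = {(y + z, y, z) for y in X for z in X if y + z < 5}

f = {x: x == 1 for x in X}     # indicator function of {1}
g = {x: x == 2 for x in X}     # indicator function of {2}
h = conv(f, g, rel, X)
print([x for x in X if h[x]])  # -> [3]
```

With indicator functions, Boolean convolution over the additive relation computes the (truncated) sumset of the supports.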
One can then prove correspondences between relational and
algebraic interchange laws. First we show that relational interchange
laws in $X$ and algebraic interchange laws in $Q$ force algebraic
interchange laws in the convolution algebra on $Q^X$.
\begin{proposition}\label{P:correspondence1}
Let $X$ be a relational bi-magma and $Q$ a bi-prequantale. Then
(\ri{k}) in $X$ and (\ai{k}) in $Q$ imply (\ai{k}) in $Q^X$, for
each $1\leq k\leq 7$.
\end{proposition}
\begin{proof} Let
$\exists y,z.\ \prel^y_{tu} \wedge \srel^x_{yz} \wedge
\prel^z_{vw}\Rightarrow \exists y,z.\ \srel^y_{tv} \wedge
\prel^x_{yz} \wedge \srel^z_{uw}$ in $X$
and
$(w\pcomp x)\scomp (y\pcomp z) \le (w\scomp y)\pcomp (x\scomp z)$ in
$Q$. Then
\begin{align*}
\left((f\pconv g) \sconv (h\pconv k)\right)\, x
&= \bigvee \left\{ \bigvee \left\{f\, t\pcomp g\, u \mid \prel^y_{tu}\right\}
\scomp \bigvee \left\{h\, v\pcomp k\, w\mid \prel^z_{vw}\right\} \,
\middle|\, \srel^x_{yz}\right\}\\
&= \bigvee\left\{ (f\,
t \pcomp g\, u) \scomp (h\, v \pcomp k\, w)\,\middle|\, \exists y,z.\
\prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}\right\}\\
&\le \bigvee\left\{ (f\,
t \scomp h\, v) \pcomp (g\, u \scomp k\, w)\, \middle|\, \exists y,z.\
\srel^y_{tv}\wedge \prel^x_{yz}\wedge \srel^z_{uw}\right\}\\
&= \bigvee \left\{ \bigvee \left\{f\, t\scomp h\, v \mid \srel^y_{tv}\right\}
\pcomp \bigvee \left\{g\, u\scomp k\, w\mid \srel^z_{uw}\right\} \,
\middle|\, \prel^x_{yz}\right\}\\
&= \left((f\sconv h)\pconv (g\sconv k)\right)\, x
\end{align*}
proves (\ref{eq:i7}) in $Q^X$. Alternatively, using trees,
\begin{eqnarray*}
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\sconv \ar[dash,dll]\ar[dash,drr]&&&\\
&\pconv \ar[dash,dl]\ar[dash,dr]&&&& \pconv \ar[dash,dl]\ar[dash,dr]&\\
f&& g && h && k\end{tikzcd}
&=&
\lambda x.\ \bigvee
\left\{
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\scomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\pcomp \ar[dash,dl]\ar[dash,dr]&&&& \pcomp \ar[dash,dl]\ar[dash,dr]&\\
f\, t&& g\, u && h\, v && k\, w
\end{tikzcd}
\, \middle|\,
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&x \ar[dash,line width=.8pt,orange,dll]\ar[dash, line width=.8pt,orange,drr]&&&\\
&\circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&&&& \circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&\\
t&& u && v && w
\end{tikzcd}
\right\} \\
&\le&
\lambda x.\ \bigvee
\left\{
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\pcomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\scomp \ar[dash,dl]\ar[dash,dr]&&&& \scomp \ar[dash,dl]\ar[dash,dr]&\\
f\, t&& h\, v && g\, u && k\, w
\end{tikzcd}
\, \middle|\,
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&x \ar[dash,line width=.8pt,teal,dll]\ar[dash, line width=.8pt,teal,drr]&&&\\
&\circ \ar[dash,line width=.8pt,orange,dl]\ar[dash,line width=.8pt,orange,dr]&&&& \circ \ar[dash,line width=.8pt,orange,dl]\ar[dash,line width=.8pt,orange,dr]&\\
t&& v && u && w
\end{tikzcd}
\right\}\\
&=&
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\pconv \ar[dash,dll]\ar[dash,drr]&&&\\
&\sconv \ar[dash,dl]\ar[dash,dr]&&&& \sconv \ar[dash,dl]\ar[dash,dr]&\\
f&& h && g && k\end{tikzcd}.
\end{eqnarray*}
The proofs for the small interchange laws are similar, and left to the
reader. In particular, the proof of (\ref{eq:i3}) from (\ref{eq:ri3})
and that of (\ref{eq:i4}) from (\ref{eq:ri4}) are related by
\emph{opposition}: one can be obtained from the other by swapping the
operands of $\sconv$, $\pconv$, $\scomp$ and $\pcomp$ and the lower
indices of $\srel$ and $\prel$, that is, by reversing the algebraic
syntax trees in $Q$ and the trees in $X$ memoised in the relational
interchange laws. The same holds for the proof of (\ref{eq:i5}) from
(\ref{eq:ri5}) and that of (\ref{eq:i6}) from (\ref{eq:ri6}).
\end{proof}
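Proposition~\ref{P:correspondence1} can be confirmed by brute force in a small instance. In the sketch below (our own encoding), $Q=\mathbb{B}$ with both compositions given by conjunction, so that (\ref{eq:i7}) holds in $Q$, and $X=\{0,1,2\}$ carries truncated addition as both relations, for which (\ref{eq:ri7}) holds; the proposition then predicts (\ref{eq:i7}) in $Q^X$:

```python
from itertools import product

X = range(3)
rel = {(y + z, y, z) for y in X for z in X if y + z < 3}  # both colours

def conv(f, g):                # Boolean-quantale convolution; f, g are tuples over X
    return tuple(any(f[y] and g[z] for (x2, y, z) in rel if x2 == x) for x in X)

def leq(f, g):                 # pointwise order on Q^X
    return all(b or not a for a, b in zip(f, g))

funcs = list(product([False, True], repeat=3))
ok = all(leq(conv(conv(f, g), conv(h, k)), conv(conv(f, h), conv(g, k)))
         for f, g, h, k in product(funcs, repeat=4))
print(ok)  # -> True
```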
Next we show that, under mild nondegeneracy conditions on
$Q$, algebraic interchange laws in $Q^X$ force relational
interchange laws in $X$, and that under mild nondegeneracy
conditions on $X$, algebraic interchange laws in $Q^X$ force algebraic
interchange laws in $Q$. Yet we begin with a definition and some
lemmas.
For all $x,y\in X$ and $a\in Q$, we define the function
$\delta^a_x:X\to Q$ by
\begin{equation*}
\delta^a_x\, y =
\begin{cases}
a, & \text{ if } x= y,\\
0, & \text{ otherwise}
\end{cases}
\end{equation*}
and the function $(-\mid -):Q\to\mathbb{B}\to Q$, for all $a\in Q$ and
$P:\mathbb{B}$, by
\begin{equation*}
(a\mid P) =
\begin{cases}
a, &\text{ if } P,\\
0, & \text{ otherwise}.
\end{cases}
\end{equation*}
Obviously, $\delta^a_x\, y = (a\mid x = y)$.
\begin{lemma}\label{P:delta-conv-props}
Let $X$ be a relational bi-magma and $Q$ a bi-prequantale. For all
$a,b,c,d\in Q$ and $x,t,u,v,w\in X$,
\begin{enumerate}[label=(\arabic*)]
\item $\left(\delta^a_u \sconv \delta^b_v\right) x = \left(a \scomp b\mid
\srel^x_{uv}\right)$,
\item $\left(\delta^a_u \sconv \left(\delta^b_v \pconv
\delta^c_w\right)\right) x = \left(a
\scomp (b \pcomp c) \mid
\exists y.\ \srel^x_{uy}\wedge \prel^y_{vw}\right) $,
\item $\left(\left(\delta^a_u
\pconv \delta^b_v\right)\sconv \delta^c_w\right) x = \left((a \pcomp b)\scomp c\mid
\exists y.\ \prel^y_{uv}\wedge \srel^x_{yw}\right) $,
\item $\left(\left(\delta^a_t\pconv \delta^b_u\right)\sconv \left(\delta^c_v \pconv
\delta^d_w\right)\right) x = \left((a\pcomp b)\scomp (c\pcomp d)\mid \exists y,z.\
\prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}\right) $,
\item properties (1)--(4) hold with colours interchanged.
\end{enumerate}
\end{lemma}
\begin{proof}
For (4), we use the proof of Proposition~\ref{P:correspondence1} to
calculate
\begin{align*}
((\delta^a_t\pconv \delta^b_u)\sconv (\delta^c_v \pconv
\delta^d_w))\, x
&= \bigvee\{(\delta^a_t\, x_1 \pcomp \delta^b_u\,
x_2)\scomp (\delta^c_v\, x_3 \pcomp \delta^d_w\, x_4)\mid \exists
y,z. \ \prel^y_{x_1x_2} \wedge \srel^x_{yz}\wedge
\prel^z_{x_3x_4}\}\\
&= ((a\pcomp b)\scomp (c\pcomp d)\mid \exists y,z.\
\prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw})\\
&\le (a\pcomp b)\scomp (c\pcomp d).
\end{align*}
Alternatively, using trees,
\begin{eqnarray*}
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\sconv \ar[dash,dll]\ar[dash,drr]&&&\\
&\pconv \ar[dash,dl]\ar[dash,dr]&&&& \pconv \ar[dash,dl]\ar[dash,dr]&\\
\delta^a_t&& \delta^b_u && \delta^c_v&& \delta^d_w\end{tikzcd}
&=&
\lambda x.\ \bigvee
\left\{
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\scomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\pcomp \ar[dash,dl]\ar[dash,dr]&&&& \pcomp \ar[dash,dl]\ar[dash,dr]&\\
\delta^a_t\, x_1 && \delta^b_u\, x_2 && \delta^c_v\, x_3 &&
\delta^d_w\, x_4
\end{tikzcd}
\, \middle|\,
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&x \ar[dash,line width=.8pt,orange,dll]\ar[dash, line width=.8pt,orange,drr]&&&\\
&\circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&&&& \circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&\\
x_1&& x_2 && x_3 && x_4
\end{tikzcd}
\right\}\\
&=&
\lambda x.\
\left(
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\scomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\pcomp \ar[dash,dl]\ar[dash,dr]&&&& \pcomp \ar[dash,dl]\ar[dash,dr]&\\
a && b && c && d
\end{tikzcd}
\, \middle|\,
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&x \ar[dash,line width=.8pt,orange,dll]\ar[dash, line width=.8pt,orange,drr]&&&\\
&\circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&&&& \circ \ar[dash,line width=.8pt,teal,dl]\ar[dash,line width=.8pt,teal,dr]&\\
t&& u && v && w
\end{tikzcd}
\right)\\
&\le &
\begin{tikzcd}[column sep= -.2cm, row sep=.2cm]
&&&\scomp \ar[dash,dll]\ar[dash,drr]&&&\\
&\pcomp \ar[dash,dl]\ar[dash,dr]&&&& \pcomp \ar[dash,dl]\ar[dash,dr]&\\
a && b && c && d
\end{tikzcd}.
\end{eqnarray*}
All other proofs are similar, and left to the reader. In particular,
(3) follows from (2) by opposition.
\end{proof}
The next lemma shows that the trees memoised by the relational
interchange laws can be expressed in terms of deltas and convolution
in the presence of the following mild nondegeneracy conditions on $Q$:
\begin{gather}
\exists a, b \in Q.\ a \scomp b \neq 0, \tag{$D_1$}\label{eq:d1}\\
\exists a,b,c \in Q.\ a\scomp (b\pcomp c) \neq 0,
\tag{$D_2$}\label{eq:d2}\\
\exists a,b,c \in Q.\ (a\pcomp b)\scomp c \neq 0,
\tag{$D_3$}\label{eq:d3}\\
\exists a,b,c,d \in Q.\ (a\pcomp b)\scomp (c\pcomp d) \neq 0. \tag{$D_4$}\label{eq:d4}
\end{gather}
\begin{lemma}\label{P:delta-rel}
Let $X$ be a relational bi-magma and $Q$ a
bi-prequantale. Then
\begin{enumerate}[label=(\arabic*)]
\item $\srel^x_{yz} \Rightarrow \left(\delta_y^a \sconv
\delta_z^b\right) x = a\scomp b$, and the converse implication follows from (\ref{eq:d1}),
\item
$\exists y.\ \srel^x_{uy}\wedge \prel^y_{vw} \Rightarrow
\left(\delta^a_u \sconv \left(\delta^b_v \pconv
\delta^c_w\right)\right) x = a\scomp (b\pcomp c)$,
and the converse implication follows from (\ref{eq:d2}),
\item
$\exists y.\ \prel^y_{uv}\wedge \srel^x_{yw} \Rightarrow \left(\left(\delta^a_u \pconv\delta^b_v\right) \sconv
\delta^c_w\right) x = (a\pcomp
b)\scomp c$,
and the converse implication follows from (\ref{eq:d3}),
\item
$\exists y,z.\ \prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}
\Rightarrow
\left(\left(\delta^a_t\pconv \delta^b_u\right)\sconv
\left(\delta^c_v \pconv \delta^d_w\right)\right) x = (a\pcomp b)\scomp(c\pcomp d)$,
and the converse implication follows from (\ref{eq:d4}),
\item properties (1)--(4) hold with colours interchanged, including in the
nondegeneracy conditions.
\end{enumerate}
\end{lemma}
\begin{proof}
For (4), suppose
$\exists y,z.\ \prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}$.
Then
\begin{equation*}
\left(\left(\delta^a_t\pconv \delta^b_u\right)\sconv
\left(\delta^c_v \pconv \delta^d_w\right)\right) x =
\left((a\pcomp b)\scomp (c\pcomp d)\mid \exists y,z.\
\prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}\right) = (a\pcomp b)\scomp(c\pcomp d)
\end{equation*}
by Lemma~\ref{P:delta-conv-props}(4).
For the converse implication,
\begin{equation*}
0\neq (a\pcomp b)\scomp(c\pcomp d) = \left((\delta^a_t\pconv
\delta^b_u)\sconv (\delta^c_v \pconv \delta^d_w)\right)\, x = \left((a\pcomp
b)\scomp(c\pcomp d) \mid \exists y,z.\
\prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}\right)
\end{equation*}
by Lemma~\ref{P:delta-conv-props}(4) and therefore $\exists y,z.\
\prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}$.
All other proofs are similar, and left to the reader. Property (3)
follows from (2) by opposition.
\end{proof}
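The shadow intuition is particularly transparent over the Boolean quantale, where Lemma~\ref{P:delta-rel}(1) with $a=b=1$ states that $R^x_{yz}$ holds exactly when $(\delta_y\ast\delta_z)\, x = 1$; nondegeneracy amounts to $1\neq 0$. A Python sketch with our own encoding:

```python
X = list(range(5))
rel = {(y + z, y, z) for y in X for z in X if y + z < 5}

def delta(y):                            # delta_y with a = True in the Boolean quantale
    return {x: x == y for x in X}

def conv(f, g):                          # Boolean-quantale convolution
    return {x: any(f[y] and g[z] for (x2, y, z) in rel if x2 == x) for x in X}

# R^x_{yz} holds exactly when (delta_y * delta_z) x is True.
ok = all(((x, y, z) in rel) == conv(delta(y), delta(z))[x]
         for x in X for y in X for z in X)
print(ok)  # -> True
```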
Intuitively, convolutions of delta functions represent trees in $X$ in
the function space $Q^X$ by
creating their ``shadows'' in $Q$---which requires nondegeneracy. The
case of Lemma~\ref{P:delta-rel}(4) and its dual are shown in
Figure~\ref{fig:delta-diagram}.
\begin{figure}
\caption{$\left(\delta^a_t\pconv \delta^b_u\right)\sconv
\left(\delta^c_v \pconv \delta^d_w\right)\, x$
representing
$\exists y,z.\ \prel^y_{tu}\wedge \srel^x_{yz}\wedge \prel^z_{vw}$.}
\label{fig:delta-diagram}
\end{figure}
We are now prepared to prove our second correspondence result, namely
that algebraic interchange laws in $Q^X$ force relational
interchange laws in $X$ subject to mild nondegeneracy conditions on $Q$.
\begin{proposition}\label{P:correspondence2}
Let $X$ be a relational bi-magma and $Q$ a bi-prequantale. Then
$(D_{\lceil\frac{k}{2}\rceil})$ in $Q$ and $(I_k)$ in $Q^X$ imply $(RI_k)$ in $X$, for
each $1\le k\le 7$.
\end{proposition}
\begin{proof}
Suppose $(a\pcomp b)\scomp (c\pcomp d) \neq 0$ for some
$a,b,c,d\in Q$ and
$\left(\delta^a_t\pconv \delta^b_u\right)\sconv \left(\delta^c_v\pconv \delta^d_w\right)
\le \left(\delta^a_t\sconv \delta^c_v\right)\pconv \left(\delta^b_u\sconv
\delta^d_w\right)$.
Then, using Lemma~\ref{P:delta-rel}(4),
\begin{align*}
\exists y,z.\ \prel^y_{tu} \wedge \srel^x_{yz} \wedge \prel^z_{vw}
&\Leftrightarrow 0 \neq \left(\left(\delta^a_t\pconv \delta^b_u\right)\sconv
\left(\delta^c_v\pconv \delta^d_w\right)\right) x\\
&\Rightarrow 0 \neq \left(\left(\delta^a_t\sconv \delta^c_v\right)\pconv \left(\delta^b_u\sconv
\delta^d_w\right)\right) x\\
&\Leftrightarrow \exists y,z.\ \srel^y_{tv} \wedge \prel^x_{yz}
\wedge \srel^z_{uw}
\end{align*}
proves $(RI_7)$. The remaining proofs are similar. Those for $(RI_3)$
and $(RI_4)$ and those for $(RI_5)$ and $(RI_6)$ are related by
opposition.\qedhere
\end{proof}
Finally, we prove the third correspondence result for interchange
laws, namely that algebraic interchange laws on $Q^X$ force those on
$Q$, subject to the following mild nondegeneracy conditions on $X$:
\begin{gather}
\exists x,u,v.\ \srel^x_{uv}, \tag{$RD_1$} \label{eq:rd1}\\
\exists x,y,u,v,w.\ \srel^x_{uy} \wedge \prel^y_{vw},
\tag{$RD_2$}\label{eq:rd2}\\
\exists x,y,u,v,w.\ \prel^y_{uv} \wedge \srel^x_{yw},
\tag{$RD_3$}\label{eq:rd3}\\
\exists x,y,z,t,u,v,w.\ \prel^y_{tu} \wedge \srel^x_{yz} \wedge
\prel^z_{vw}.
\tag{$RD_4$}\label{eq:rd4}
\end{gather}
\begin{proposition}\label{P:correspondence3}
Let $X$ be a relational bi-magma and $Q$ a bi-prequantale. Then
$(RD_{\lceil\frac{k}{2}\rceil})$ in $X$ and $(I_k)$ in $Q^X$ imply
$(I_k)$ in $Q$, for each $1\le k\le 7$.
\end{proposition}
\begin{proof}
Suppose
$(\delta^a_t\pconv \delta^b_u)\sconv (\delta^c_v\pconv \delta^d_w)
\le (\delta^a_t\sconv \delta^c_v)\pconv (\delta^b_u\sconv
\delta^d_w)$
for some $a,b,c,d\in Q$ and let
$\exists y,z.\ \prel^y_{tu} \wedge \srel^x_{yz} \wedge \prel^z_{vw}$
for some $t,u,v,w\in X$. Then, using Lemma~\ref{P:delta-rel}(4) and
\ref{P:delta-conv-props}(4),
\begin{equation*}
(a\pcomp b)\scomp (c\pcomp d)
= ((\delta^a_t \pconv\delta^b_u) \sconv (\delta^c_v\pconv
\delta^d_w))\, x
\le ((\delta^a_t\sconv \delta^c_v)\pconv (\delta^b_u\sconv
\delta^d_w))\, x
\le (a\scomp c)\pcomp (b\scomp d)
\end{equation*}
proves $(I_7)$ in $Q$. The remaining
proofs are similar. Those for $(I_3)$ and $(I_4)$ and those for
$(I_5)$ and $(I_6)$ are related by opposition.
\end{proof}
It may be helpful to check the proofs of
Propositions~\ref{P:correspondence2} and \ref{P:correspondence3} with
the diagrams in Figure~\ref{fig:delta-diagram}. The nondegeneracy
conditions are necessary. Indeed, if $Q$ is a singleton set, then so
is $Q^X$, which then obeys all axioms independently of
$X$. Similarly, if all products on $Q$ vanish, then $Q^X$
automatically satisfies many axioms, as all convolutions become trivial.
\section{Further Correspondences}\label{S:further-correspondences}
When the relational bi-magma $X$ and the bi-prequantale $Q$ are both
unital, units can be defined in $Q^X$ as in
Section~\ref{S:summary}:
\begin{align*}
\sid\, x =
\begin{cases}
\se, & \text{if $\sE^x$},\\
0, & \text{otherwise}
\end{cases}
\qquad\text{ and}\qquad
\pid\, x=
\begin{cases}
\pe, & \text{if $\pE^x$},\\
0, & \text{otherwise}.
\end{cases}
\end{align*}
Theorem~\ref{P:conv-algebras} then shows that $Q^X$ is a unital quantale if
$Q$ is a unital quantale and both compositions are associative and
unital in $X$. We
restate the three kinds of correspondences for units in the weaker
setting of relational magmas and prequantales.
\begin{proposition}\label{P:correspondence-units}
Let $X$ be a relational magma and $Q$ a prequantale.
\begin{enumerate}
\item If $X$ and $Q$ are unital, then so is $Q^X$.
\item If $Q^X$ is unital and $1\neq 0$ in $Q$, then so is $X$.
\item If $Q^X$ is unital and $E\neq \emptyset$ in $X$, then so is
$Q$.
\end{enumerate}
\end{proposition}
\begin{proof}~
\begin{enumerate}
\item If $X$ and $Q$ are unital, then
\begin{align*}
(f\ast \mathit{id}_E)\, x
=\bigvee \{f\, y \bullet \delta_e\, z \mid R^x_{yz} \land E^e\}
=\bigvee \{f\, y \bullet 1 \mid \exists e.\ R^x_{ye} \land E^e\}
=(f\, x \mid \exists e.\ R^x_{xe} \land E^e)
= f\, x,
\end{align*}
where the last two steps use the relational unit axioms from
Definition~\ref{D:rel-units}. The proof for left units follows by
opposition.
\item If $\mathit{id}_E$ is the right unit in $Q^X$, then
\begin{equation*}
(1\mid (y=x))
= \delta_y\, x
= (\delta_y\ast \mathit{id}_E)\, x
= (1\mid \exists e.\ R^x_{ye} \land E^e).
\end{equation*}
Suppose $1\neq 0$. Then $x=y$ implies
$\exists e.\ R^x_{xe} \land E^e$, the existence axiom for right
relational units, and $\exists e.\ R^x_{ye} \land E^e$
implies that $x=y$, the uniqueness axiom. The proofs for left units
follow by opposition.
\item If $\mathit{id}_E$ is the right unit in $Q^X$, then
\begin{equation*}
a\bullet 1
= (a\bullet 1\mid \exists e.\ R^x_{xe} \land E^e)
=\bigvee\{\delta^a_x\, y \bullet 1 \mid \exists e.\ R^x_{ye} \land E^e\}
= (\delta^a_x \ast \mathit{id}_E)\, x
= \delta^a_x\, x
= a
\end{equation*}
proves that $1$ is a right unit in $Q$. The left unit law follows
by opposition.\qedhere
\end{enumerate}
\end{proof}
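Part (1) of Proposition~\ref{P:correspondence-units} can likewise be checked on the additive relational monoid with unit set $E=\{0\}$ and $Q=\mathbb{B}$; the sketch below uses our own encoding:

```python
from itertools import product

X = list(range(4))
rel = {(y + z, y, z) for y in X for z in X if y + z < 4}
E = {0}                                  # relational units: x + 0 = x = 0 + x

def conv(f, g):                          # Boolean-quantale convolution on tuples over X
    return tuple(any(f[y] and g[z] for (x2, y, z) in rel if x2 == x) for x in X)

id_E = tuple(x in E for x in X)          # the candidate unit of the convolution algebra
funcs = list(product([False, True], repeat=4))
ok = all(conv(f, id_E) == f and conv(id_E, f) == f for f in funcs)
print(ok)  # -> True
```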
In the presence of non-trivial units in $X$ and $Q$, the nondegeneracy
conditions for interchange laws in Propositions~\ref{P:correspondence2}
and \ref{P:correspondence3} simplify. Condition (\ref{eq:d1}) becomes
trivial with $\se\scomp\se = \se\neq 0$, condition (\ref{eq:d2}) with
$\se\scomp (\pe\pcomp\pe) = \pe\neq 0$ and condition (\ref{eq:d3}) by
opposition. Condition (\ref{eq:d4}) reduces to $(\pe\pcomp \pe)\scomp
(\pe \pcomp \pe) = \pe \scomp \pe \neq 0$, but remains non-trivial. It
becomes trivial when $\se=\pe$. It is easy to check that the
nondegeneracy conditions (\ref{eq:rd1})--(\ref{eq:rd3}) become trivial
in a similar way, using the fact that $\srel^e_{ee}$ holds for all
$e\in \sE$ and $\prel^e_{ee}$ for all $e\in \pE$. Once again,
(\ref{eq:rd4}) becomes trivial when $\sE=\pE$.
\begin{figure}
\caption{Degeneracy condition (\ref{eq:d4}) in the presence of units.}
\label{fig:delta-diagram2}
\end{figure}
\begin{corollary}\label{P:correspondence23-cor}
Let $X$ be a unital relational bi-magma satisfying
$\sE=\pE\neq \emptyset$ and $Q$ a unital bi-prequantale satisfying
$\se=\pe\neq 0$. Then $(I_k)$ holds in $Q^X$ if and only if $(I_k)$
holds in $Q$ and $(RI_k)$ holds in $X$, for each $1\le k\le 7$.
\end{corollary}
In the only-if directions, the functions $\delta_x$ can now be used. This
leads to a simpler relationship between deltas and ternary relations
than in Lemma~\ref{P:delta-rel}.
\begin{corollary}\label{P:delta-rel-unital-cor}
Let $X$ be a relational magma and $Q$ a unital prequantale with
$1\neq 0$. Then
\begin{equation*}
R^x_{yz} \Leftrightarrow (\delta_y \ast \delta_z)\, x = 1.
\end{equation*}
\end{corollary}
\noindent It is therefore tempting to view $\mathbb{B}$ as the sublattice
$\{0,1\}$ of $Q$ and simply write
$R^x_{yz} = (\delta_y\ast\delta_z)\, x$ or even
$(f\ast g)\, x = \bigvee_{y,z} f\, y \bullet g\, z \bullet R^x_{yz}$.
Figure~\ref{fig:delta-diagram2} shows how the presence of units
affects the right-hand term in (\ref{eq:ri7}).
Next we present a correspondence result for relational units that is
useful in Section~\ref{S:interchange-quantales}.
\begin{lemma}\label{P:unit-rel-conv}
Let $X$ be a unital bi-magma and $Q$ a unital bi-prequantale.
\begin{enumerate}
\item If $\pE \subseteq \sE$ in $X$ and $\pe\le \se$ in $Q$, then $\pid
\le \sid$ in $Q^X$.
\item If $\pid \le \sid$ in $Q^X$ and $\pe\neq 0$ in $Q$, then $\pE \subseteq \sE$ in
$X$.
\item If $\pid \le \sid$ in $Q^X$ and $\pE\neq \emptyset$ in $X$, then $\pe\le \se$ in $Q$.
\end{enumerate}
\end{lemma}
\begin{proof}~
\begin{enumerate}
\item Let $\pE\subseteq \sE$ and $\pe\le \se$. Then $\sid\, x = 0 \Leftrightarrow \neg \sE^x \Rightarrow \neg \pE^x
\Leftrightarrow \pid\, x = 0$, and otherwise $\pid\, x \le \pe \le \se = \sid\, x$; therefore $\pid\le \sid$.
\item Let $\pid\le \sid$. If $\pE^x$, then $0\neq \pe =\pid\, x \le
\sid\, x$ and therefore $\sE^x$.
\item Let $\pid\le \sid$ and $\pE^x$. Then $\pe= \pid\, x \le \sid\, x \le \se$.\qedhere
\end{enumerate}
\end{proof}
\begin{corollary}\label{P:unit-rel-conv-cor}
Let $X$ be a unital bi-magma with $\pE\neq \emptyset$ and $Q$ a
unital bi-prequantale with $\pe\neq 0$. Then $\pid
\le \sid$ in $Q^X$ if and only if $\pE \subseteq \sE$ in $X$ and $\pe\le \se$ in $Q$.
\end{corollary}
Because of the symmetry in the definitions of unital bi-magmas and
bi-prequantales, Lemma~\ref{P:unit-rel-conv} and
Corollary~\ref{P:unit-rel-conv-cor} hold with colours swapped. We do
not spell them out explicitly.
The correspondences between interchange laws can be specialised to
obtain the associativity laws for a quantale. The relational
interchange law (\ref{eq:ri3}),
$\exists y.\ \srel^x_{uy} \wedge \prel^y_{vw} \Rightarrow \exists y.\
\srel^y_{uv} \wedge \prel^x_{yw}$,
becomes the relational semi-associativity law
$\exists y.\ R^x_{uy} \wedge R^y_{vw}\Rightarrow \exists y.\ R^y_{uv} \wedge R^x_{yw}$
when colours are switched off; (\ref{eq:ri4}) translates into the
opposite implication. Similarly, the interchange laws (\ref{eq:i3})
and (\ref{eq:i4}),
$a\scomp (b\pcomp c)\le (a\scomp b)\pcomp c$ and
$(a\pcomp b)\scomp c\le a \pcomp (b\scomp c)$, become the
semi-associativity laws
$a\bullet (b\bullet c)\le (a\bullet b)\bullet c$ and
$(a\bullet b)\bullet c\le a \bullet (b\bullet c)$. This yields the
following corollary to Propositions~\ref{P:correspondence1},
\ref{P:correspondence2} and \ref{P:correspondence3}.
\begin{corollary}\label{P:assoc-correspondence1}
Let $X$ be a relational magma and $Q$ a prequantale.
\begin{enumerate}
\item If $X$ is relationally associative and $Q$ associative, then
$Q^X$ is associative.
\item If $Q^X$ is associative and some $a,b,c\in Q$ satisfy $a\bullet
(b\bullet c)\neq 0\neq (a\bullet b) \bullet c$, then $X$ is
relationally associative.
\item If $Q^X$ is associative and some $x,y,z,u,v,w\in X$ satisfy
$R^x_{uz}$, $R^x_{yw}$, $R^z_{vw}$ and $R^y_{uv}$, then $Q$ is associative.
\end{enumerate}
\end{corollary}
\noindent Similar correspondences between semi-associativity laws are
straightforward, but not as interesting for our purposes. In the
presence of units, Corollary~\ref{P:assoc-correspondence1}
simplifies further.
\begin{corollary}\label{P:assoc-correspondence2}
Let $X$ be a unital relational magma satisfying $E\neq \emptyset$
and $Q$ a unital prequantale satisfying $1\neq 0$. Then $Q^X$ is
associative if and only if $X$ is relationally associative and $Q$
is associative.
\end{corollary}
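The relational associativity appearing in these corollaries is also amenable to exhaustive checking; the truncated additive relation is a simple instance. A Python sketch with our own encoding:

```python
from itertools import product

X = list(range(6))
R = {(y + z, y, z) for y in X for z in X if y + z < 6}

def left(x, u, v, w):    # exists y. R^x_{uy} and R^y_{vw}
    return any((x, u, y) in R and (y, v, w) in R for y in X)

def right(x, u, v, w):   # exists y. R^y_{uv} and R^x_{yw}
    return any((y, u, v) in R and (x, y, w) in R for y in X)

ok = all(left(*args) == right(*args) for args in product(X, repeat=4))
print(ok)  # -> True: the relation is relationally associative
```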
The correspondences between interchange laws can also be specialised
to the commutativity law for a quantale. The relational interchange
law (\ref{eq:ri2}), $\srel^x_{uv} \Rightarrow \prel^x_{vu}$,
specialises to $R^x_{uv} \Rightarrow R^x_{vu}$ when colours are
switched off while the interchange law (\ref{eq:i2}), $a\scomp b \le
b\pcomp a$, becomes $a\bullet b\le b\bullet a$. This yields another
corollary to Propositions~\ref{P:correspondence1},
\ref{P:correspondence2} and \ref{P:correspondence3}.
\begin{corollary}\label{P:comm-correspondence}
Let $X$ be a relational magma and $Q$ a prequantale.
\begin{enumerate}
\item If $X$ is relationally commutative and $Q$ abelian, then $Q^X$
is abelian.
\item If $Q^X$ is abelian and there exist $a,b\in Q$ with $a\bullet b\neq
0$, then $X$ is relationally commutative.
\item If $Q^X$ is abelian and there exist $x,y,z\in X$ with
$R^x_{yz}$, then $Q$ is abelian.
\end{enumerate}
\end{corollary}
\noindent In the presence of units, this corollary simplifies further.
\begin{corollary}\label{P:comm-correspondence-unit}
Let $X$ be a unital relational magma satisfying $E\neq \emptyset$
and $Q$ a unital quantale satisfying $1\neq 0$. Then $Q^X$ is
abelian if and only if $X$ is relationally commutative and $Q$ abelian.
\end{corollary}
\section{Relational Interchange Monoids and Interchange
Quantales}\label{S:interchange-quantales}
We now start shifting the focus from correspondence theory to
construction recipes for quantales with interchange laws. To
avoid nondegeneracy conditions, we assume that relational magmas and
quantales are unital and impose an order between units:
$\emptyset\neq \pE\subseteq \sE$ and $0\neq \pe\le \se$.
But first we prove a weak variant of the classical Eckmann-Hilton
argument~\cite{EckmannH62}. It shows that if a unital bi-magma (a set
equipped with a composition $\scomp$ with unit $\se$ and a composition
$\pcomp$ with unit $\pe$) satisfies the strong interchange law
$(a\pcomp b)\scomp (c\pcomp d) = (a\scomp c)\pcomp (b\scomp d)$, then
$\se=\pe$, the compositions $\scomp$ and $\pcomp$ coincide, and they
are associative and commutative. We show how these properties change
if strong interchange is weakened to (\ref{eq:i7}). This of course
requires ordered bi-magmas, in which the underlying set is partially
ordered and the two compositions preserve the order in both arguments.
\begin{lemma}[weak Eckmann-Hilton]\label{P:weak-eh}
Let $(S,\le,\scomp,\pcomp,\se,\pe)$ be an ordered bi-magma in which
(\ref{eq:i7}) holds. Then $\se \le \pe$, and
(\ref{eq:i1})--(\ref{eq:i6}) hold whenever $\pe \le \se$.
\end{lemma}
The proofs, like the classical Eckmann-Hilton ones, substitute $\se$ and
$\pe$ in (\ref{eq:i7}) and are straightforward. Analogous results hold for relational
bi-magmas because of the various correspondence results in the
previous section and Lemma~\ref{P:weak-eh}.
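As a concrete illustration of the lemma, consider subsets of $\mathbb{Z}_2$ with elementwise addition as $\scomp$ and intersection as $\pcomp$. This ordered bi-magma satisfies (\ref{eq:i7}), the conclusion $\se\le\pe$ holds, and since $\pe\le\se$ fails, the small law (\ref{eq:i1}) indeed breaks. A Python sketch with our own encoding:

```python
def scomp(a, b):              # elementwise addition mod 2 on subsets of Z_2
    return frozenset((x + y) % 2 for x in a for y in b)

def pcomp(a, b):              # intersection
    return frozenset(a) & frozenset(b)

se = frozenset({0})           # unit of scomp: a scomp {0} = a
pe = frozenset({0, 1})        # unit of pcomp: a pcomp {0,1} = a

print(se <= pe)               # -> True, as the weak Eckmann-Hilton lemma predicts

a, b = frozenset({0}), frozenset({1})
print(scomp(a, b) <= pcomp(a, b))  # -> False: (i1) fails, since pe <= se does not hold
```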
\begin{lemma}\label{P:rel-interchange-redundant}
Let $(X,\srel,\prel,\sE,\pE)$ be a unital relational bi-magma in which (\ref{eq:ri7})
holds. Then $\sE\subseteq \pE$, and (\ref{eq:ri1})--(\ref{eq:ri6}) hold
whenever $\pE\subseteq \sE$.
\end{lemma}
\begin{proof}
First, for all $e\in X$, and with (\ref{eq:ri7}) in the fourth step,
\begin{align*}
\sE^e
&\Leftrightarrow \sE^e \land \srel^e_{ee}\\
& \Leftrightarrow \sE^e \land \exists x,y,e',e''.\ \pE^{e'}\land \prel^x_{e'e} \land
\srel^e_{xy}\land \prel^y_{ee''}\\
& \Rightarrow \exists e', e''.\ \sE^e \land \pE^{e'}\land \prel^e_{e'e} \land
\srel^e_{ee}\land \prel^e_{ee''}\\
& \Rightarrow \exists e',e''.\ \sE^e \land \pE^{e'}\land \srel^e_{e'e} \land
\prel^e_{ee}\land \srel^e_{ee''}\\
& \Rightarrow \sE^e \land \pE^e\land \srel^e_{ee} \land
\prel^e_{ee}\land \srel^e_{ee}\\
&\Rightarrow \pE^e.
\end{align*}
Second, let $\pE\subseteq \sE$ and assume (\ref{eq:ri7}). Then
\begin{align*}
\exists y.\ \prel^y_{uv} \wedge \srel^x_{yw}
&\Leftrightarrow \exists e,y.\ \prel^y_{uv} \wedge \srel^x_{yw}\wedge
\prel^w_{ew} \wedge \pE^e\\
&\Leftrightarrow \exists e,y,z.\ \prel^y_{uv} \wedge \srel^x_{yz}\wedge
\prel^z_{ew} \wedge \pE^e\\
&\Rightarrow \exists e,y, z.\ \srel^y_{ue} \wedge \prel^x_{yz}\wedge
\srel^z_{vw}\wedge \sE^e\\
&\Leftrightarrow \exists e,z.\ \srel^u_{ue} \wedge \prel^x_{uz}\wedge
\srel^z_{vw}\wedge \sE^e\\
&\Leftrightarrow \exists z.\ \prel^x_{uz}\wedge \srel^z_{vw}
\end{align*}
proves (\ref{eq:ri4}). The proofs of (\ref{eq:ri1}), (\ref{eq:ri2}),
(\ref{eq:ri3}), (\ref{eq:ri5}) and (\ref{eq:ri6}) from (\ref{eq:ri7})
are similar, and left to the reader.
\end{proof}
\begin{definition}\label{D:relational-monoids}~
\begin{enumerate}
\item A \emph{relational semigroup} $(X,R)$ is a set $X$ equipped with
a relationally associative ternary relation $R$.
\item A \emph{relational monoid} is a relational semigroup $(X,R)$
with a set $E\subseteq X$ of relational units for $R$.
\item A \emph{relational interchange monoid} is a structure
$(X,\srel,\prel,E)$ such that $(X,\srel,E)$ and $(X,\prel,E)$
are relational monoids and the relational interchange
law (\ref{eq:ri7}) holds.
\end{enumerate}
\end{definition}
\begin{definition}
A (unital) \emph{interchange quantale} is a structure
$(Q,\le,\scomp,\pcomp,1)$ such that $(Q,\le,\scomp,1)$ and
$(Q,\le,\pcomp,1)$ are (unital) quantales, and the
interchange law (\ref{eq:i7}) holds.
\end{definition}
In light of Lemmas~\ref{P:weak-eh}
and~\ref{P:rel-interchange-redundant}, we always assume that relational
interchange monoids and interchange quantales have one single unit
that is shared between the relations and compositions, respectively.
The following result then summarises these two lemmas.
\begin{corollary}\label{P:relinterchange-redundant}~
\begin{enumerate}
\item In every relational interchange monoid,
(\ref{eq:ri1})--(\ref{eq:ri6}) are derivable.
\item In every unital interchange quantale, (\ref{eq:i1})--(\ref{eq:i6})
are derivable.
\end{enumerate}
\end{corollary}
The correspondence results from
Sections~\ref{S:correspondence-interchange}
and~\ref{S:further-correspondences} can now be summarised in terms of
interchange monoids and interchange quantales as well.
\begin{theorem}\label{P:interchange-quantale-correspondence}~
\begin{enumerate}
\item If $X$ is a relational interchange monoid and $Q$ an interchange
quantale, then $Q^X$ is an interchange quantale.
\item If $Q^X$ is an interchange quantale and $1\neq 0$, then $X$ is a
relational interchange monoid.
\item If $Q^X$ is an interchange quantale and $E\neq \emptyset$, then
$Q$ is an interchange quantale.
\end{enumerate}
\end{theorem}
\begin{proof}
The correspondence for associativity and units in the subquantales is
given by Corollary~\ref{P:assoc-correspondence1} and
Proposition~\ref{P:correspondence-units}; that for interchange between
the subquantales by Propositions~\ref{P:correspondence1},
\ref{P:correspondence2} and \ref{P:correspondence3}.
\end{proof}
Theorem~\ref{P:interchange-quantale-correspondence} shows that, up to
mild nondegeneracy assumptions, all interchange quantales of type
$X\to Q$ are obtained from a relational interchange monoid on $X$ and
an interchange quantale $Q$. To build such quantales, one should
therefore look for relational interchange monoids, and the
advantage is that fewer properties need to be checked.
Interchange quantales generalise concurrent quantales and are strongly
related to concurrent Kleene algebras~\cite{HMSW11}. The difference is
that here we do not assume ``parallel composition'' $\pcomp$ to be
commutative. Yet Theorem~\ref{P:interchange-quantale-correspondence}
adapts easily to the commutative case. For a concurrent quantale in
$Q^X$, an interchange monoid $X$ with relationally commutative $\prel$
and an interchange quantale $Q$ with commutative $\pcomp$ is needed. A
variant of Theorem~\ref{P:interchange-quantale-correspondence} then
follows from Corollaries~\ref{P:comm-correspondence-unit}
and~\ref{P:correspondence23-cor}. In particular, the nondegeneracy
assumptions simplify to non-triviality assumptions for units and unit
sets.
\section{Duality for Powerset Quantales}\label{S:duality}
An interesting specialisation of
Theorem~\ref{P:interchange-quantale-correspondence} is the case of
$Q=\mathbb{B}$, which forms an interchange quantale in which both
compositions are meet and both units of composition equal $1$. In
particular, in the booleans, $0\neq 1$. The interchange law
(\ref{eq:i7}) holds trivially because
$(w\wedge x)\wedge (y\wedge z) = (w \wedge y)\wedge (x\wedge z)$ in
any semilattice by associativity and commutativity of meet.
\begin{corollary}\label{P:complex_cor}
$\mathbb{B}^X\cong \mathcal{P}\, X$ is an interchange quantale if
and only if $X$ is a relational interchange monoid.
\end{corollary}
In this case, by Corollary~\ref{P:delta-rel-unital-cor} and Lemma~\ref{P:delta-rel},
$\srel^x_{yz} \Leftrightarrow (\delta_y \sconv \delta_z)\, x = 1$,
$\prel^x_{yz} \Leftrightarrow (\delta_y \pconv \delta_z)\, x = 1$ and
likewise for the other relational nondegeneracy conditions.
More interestingly, as a powerset boolean algebra, $\mathbb{B}^X$ is
complete and atomic---a CABA---and a well known duality holds. The
work of the Tarski school~\cite{JonssonT51} and Goldblatt~\cite{Goldblatt} shows
that categories of CABAs with $n$-ary operators and relational
structures with $(n+1)$-ary relations are dually equivalent. Atomic
boolean prequantales are CABAs with a binary operator; relational
magmas are relational structures with a ternary relation. Morphisms
in the category $\mathsf{ABP}$ of atomic boolean (pre)quantales
preserve sups, complementation and composition. A morphism $\rho$
between relational magmas $(X,R)$ and $(X',S)$ must satisfy
\begin{equation*}
R^x_{yz} \Rightarrow S^{(\rho\, x)}_{(\rho\, y)(\rho\, z)}
\end{equation*}
for all $x,y,z\in X$. A morphism is \emph{bounded} if, for all
$x\in X$ and $y,z\in X'$,
\begin{equation*}
S^{(\rho\, x)}_{yz} \Rightarrow \exists u,v\in X.\ \rho\, u =
y\wedge \rho\, v= z \wedge R^x_{uv}.
\end{equation*}
The morphisms in the category $\mathsf{RM}$ of relational
magmas are assumed to be bounded.
Next we summarise this duality between categories. With every atomic
boolean prequantale $Q$ one associates its dual relational
structure---its atom structure---$Q_+= \mathit{At}\,Q$ by defining the ternary
relation $R$ on $Q_+$, as in Example~\ref{ex:memotrees}, by
\begin{equation*}
R^\alpha_{\beta\gamma} \Leftrightarrow \alpha\le \beta\bullet \gamma
\end{equation*}
for all $\alpha,\beta,\gamma\in Q_+$. With every morphism
$\varphi:Q\to Q'$ in $\mathsf{ABP}$ one associates the map
$\varphi_+:\mathit{At}\, Q'\to Q$ defined by
\begin{equation*}
\varphi_+\, \beta = \bigwedge\{a\in Q\mid \beta \le \varphi\,
a\}.
\end{equation*}
It is easy to check that $\varphi_+$ maps atoms in $Q'$ to atoms in
$Q$.
Conversely, with every relational magma $(X, R)$ one associates its
dual convolution prequantale---its complex
algebra---$X^+= (\mathcal{P}\, X,\subseteq,\ast)$. With every bounded
morphism $\rho:X\to X'$ one associates the contravariant powerset (or
preimage) functor $\rho^+: \mathcal{P}\, X'\to \mathcal{P}\, X$. It is defined, for
all $B\subseteq X'$, by
\begin{equation*}
\rho^+\, B = \{x\in X\mid \rho\, x \in B\}.
\end{equation*}
In this context, our function $\delta:X\to X\to\mathbb{B}$ is isomorphic to
the function $\eta:X\to \mathcal{P}\, X$ defined by $\eta=\{-\}$.
Then $(\delta_y \ast \delta_z)\, x = 1 \Leftrightarrow x\in \{y\}\ast
\{z\}$ and hence $R^x_{y\, z} \Leftrightarrow x\in \{y\}\ast \{z\}$ by
Corollary~\ref{P:delta-rel-unital-cor}.
\begin{proposition}[\cite{JonssonT51,HenkinMonkTarski71}]\label{P:JT-dual}
Let $Q$ be an atomic boolean prequantale and $X$ a relational magma.
\begin{enumerate}
\item $Q\cong (Q_+)^+$ with isomorphism $\sigma:Q\to \mathcal{P}\, (\mathit{At}\, Q)$
given by $\sigma\, a = \{\alpha\mid \alpha\le a\}$.
\item $X\cong
(X^+)_+$ with isomorphism $\eta:X\to \mathit{At}\, (\mathcal{P}\, X)$ given by
$\eta\, x = \{x\}$.
\end{enumerate}
\end{proposition}
To prove (1) one can use that any bijection $\varphi$ between the
atoms of two atomic boolean prequantales $Q$ and $Q'$ extends to an
isomorphism if and only if
$\alpha\le \beta\bullet \gamma\Leftrightarrow \varphi\, \alpha \le
\varphi\, \beta\bullet \varphi\, \gamma$
for all $\alpha,\beta,\gamma\in \mathit{At}\, Q$. The bijection
$\eta$ between atoms in $Q$ and atoms in $\mathcal{P}\, (\mathit{At}\, Q)$ satisfies this
condition, and it turns out that $\sigma$ is its extension. For (2)
it is easy to check that the bijection $\eta$ is a relational magma
morphism.
\begin{proposition}[\cite{Goldblatt}]\label{P:Goldblatt-dual1}
The maps $(-)^+:\mathsf{RM}\to \mathsf{ABP}$ and
$(-)_+:\mathsf{ABP}\to \mathsf{RM}$ are contravariant functors.
\end{proposition}
For $(-)^+$, one must show that $\rho^+$ preserves sups,
complementation and composition, for any bounded morphism $\rho$. The
first two properties follow from Stone's theorem for CABAs. Proving
$\rho^+\, B_1 \ast \rho^+\, B_2 \subseteq \rho^+\, (B_1 \ast B_2)$ for
$B_1,B_2\in X'$ requires that $\rho$ is a morphism, the converse
inclusion that it is bounded. Proving $(-)_+$ requires checking functoriality.
\begin{theorem}[\cite{Goldblatt}]\label{P:Goldblatt-dual2}
The composites $(-)_+\circ (-)^+$ and $(-)^+\circ (-)_+$ are
naturally isomorphic to the identity functors on the categories
$\mathsf{ABP}$ and
$\mathsf{RM}$, respectively. The two categories are dually
equivalent.
\end{theorem}
The isomorphisms are $\sigma_Q: Q\mapsto (Q_+)^+$ and
$\eta_X:X\mapsto (X^+)_+$. Showing that these are components of
natural transformations requires checking that the following diagrams commute.
\begin{equation*}
\begin{tikzcd}[column sep= 1cm, row sep=1cm]
Q \arrow{r}{\varphi}\arrow{d}{\sigma_Q}& Q'\arrow{d}{\sigma_{Q'}}\\
(Q_+)^+\arrow{r}{(\varphi_+)^+} & (Q'_+)^+
\end{tikzcd}
\qquad\qquad
\begin{tikzcd}[column sep= 1cm, row sep=1cm]
X \arrow{r}{\rho}\arrow{d}{\eta_X}& X'\arrow{d}{\eta_{X'}}\\
(X^+)_+\arrow{r}{(\rho^+)_+} & (X'^+)_+
\end{tikzcd}
\end{equation*}
\begin{question}
Is there a Stone-type duality for non-atomic (boolean) quantales and
arbitrary convolution algebras?
\end{question}
\section{Interchange Kleene Algebras}\label{S:interchange-kas}
We mentioned in Section~\ref{S:summary} that in many classical
convolution algebras, including Rota's incidence algebras and the
formal power series of Sch\"utzenberger and Eilenberg's approach to
formal languages, the underlying set $X$ has a finite decomposition
property (Rota calls partial orders with this property \emph{locally
finite}~\cite{Rota64}). As infinite sups are then not needed to
express convolutions, one can specialise quantales to semirings and
Kleene algebras, and in particular concurrent Kleene algebras. This
is the purpose of this section.
\begin{definition}
A \emph{dioid} is a semiring $(S,+,\bullet,0,1)$ in which
addition is idempotent.
\end{definition}
Hence $(S,+,0)$ is a sup-semilattice ordered by
$a\le b\Leftrightarrow a+b=b$ and least element $0$. Moreover $\bullet$
preserves $\le$ in both arguments. A quantale can thus be seen as a
complete dioid.
\begin{definition}
An \emph{interchange semiring} is a structure
$(S,+,\scomp,\pcomp, 0,1)$ such that $(S,+,\scomp,0,1)$ and
$(S,+,\pcomp,0,1)$ are dioids, and the
interchange law (\ref{eq:i7}) holds.
\end{definition}
The six small interchange laws are of course derivable in this
setting.
\begin{definition}~
\begin{enumerate}
\item A \emph{Kleene algebra} is a dioid with a unary star operation
$^\star$ that satisfies the unfold and induction axioms
\begin{equation*}
1+a\bullet a^\star \le a^\star,\qquad c+a\bullet b\le b\Rightarrow a^\star
\bullet c\le b,\qquad 1+a^\star \bullet a\le a^\star, \qquad c+b\bullet a
\le b\Rightarrow c\bullet a^\star \le b.
\end{equation*}
\item An \emph{interchange Kleene algebra} is a structure
$(K,+,\scomp,\pcomp, 0,1,^{\sstar},^{\pstar})$ such that
$(K,+,\scomp,0,1,^{\sstar})$ and $(K,+,\pcomp,0,1,^{\pstar})$ are
Kleene algebras and (\ref{eq:i7}) holds.
\end{enumerate}
\end{definition}
We write $(-)^\star$ instead of the usual $(-)^\ast$ to distinguish
the Kleene star from the convolution operation.
To translate Theorem~\ref{P:interchange-quantale-correspondence} into
the Kleene algebra setting all sups must be guaranteed to be finite.
This can be achieved by imposing that all functions have finite
support, or that the relations $\srel^x_{y\, z}$ and $\prel^x_{yz}$
are \emph{finitely decomposable}, that is, for each $x$ the sets
$\{(y,z)\mid \srel^x_{yz}\}$ and $\{(y,z)\mid \prel^x_{yz}\}$ are
finite. If this is the case we call the relational interchange monoid
\emph{finitely decomposable} as well.
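To make finite decomposability concrete, the following Python sketch computes a convolution over the relational monoid of words under concatenation, which is finitely decomposable because a word of length $n$ has exactly $n+1$ decompositions. The weights live in the semiring $(\mathbb{N},+,\cdot,0,1)$; all names are illustrative and not part of the formal development.

```python
# Convolution over words under concatenation, with weights in (N, +, *).
# A word x of length n has exactly n + 1 splits x = y . z, so the sum
# below is always finite, as required for the semiring setting.

def conv(f, g, x):
    """(f * g)(x) = sum of f(y) * g(z) over all splits x = y . z."""
    return sum(f(x[:i]) * g(x[i:]) for i in range(len(x) + 1))

# Weight functions with finite support.
f = lambda w: 2 if w == "a" else (1 if w == "ab" else 0)
g = lambda w: 3 if w == "b" else 0

# Splits of "ab": ("", "ab"), ("a", "b"), ("ab", "").
# Only ("a", "b") contributes: f("a") * g("b") = 2 * 3 = 6.
print(conv(f, g, "ab"))  # 6
```

The same scheme works for any finitely decomposable ternary relation: one simply replaces the enumeration of splits by an enumeration of the pairs $(y,z)$ with $R^x_{yz}$.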
\begin{theorem}\label{P:interchange-semiring-correspondence}
If $X$ is a finitely decomposable relational interchange monoid
and $S$ an interchange semiring, then $S^X$ is an interchange
semiring.
\end{theorem}
\begin{proof}~
In the construction of the convolution algebra on $S^X$ it can
be checked that all sups remain finite.
\end{proof}
It is easy to generalise these results from dioids to proper semirings
that are ordered. We do not spell out the details. Beyond that it
seems interesting to extend
Theorem~\ref{P:interchange-semiring-correspondence} to interchange
Kleene algebras. First of all, every interchange quantale is an
interchange Kleene algebra, because $^{\sstar}$ and $^{\pstar}$ can be
defined explicitly in this setting using Kleene's fixpoint theorem:
$x^{\sstar} = \bigvee_{k\in\mathbb{N}} x^{\textcolor{orange}{k}}$ and
$x^{\pstar} = \bigvee_{k\in\mathbb{N}} x^{\textcolor{teal}{k}}$ satisfy
the star axioms, with powers defined recursively in the standard way
as $x^{\textcolor{orange}{0}} = 1$ and
$x^{\textcolor{orange}{i+1}} = x \scomp x^{\textcolor{orange}{i}}$ and
likewise for $x^{\textcolor{teal}{i}}$.
When infinite sups and the sup-preservation properties required for
Kleene's fixpoint theorem are not available, a different approach is
needed. We have already shown~\cite{ArmstrongSW14} that
formal power series---functions of type $\Sigma^\ast\to K$, where
$\Sigma^\ast$ is the free monoid over the finite alphabet $\Sigma$ and
$K$ a Kleene algebra---form Kleene algebras. In this setting, the star
of a power series can be defined recursively~\cite{KuichS86} as
\begin{equation*}
f^\star\, \varepsilon = (f\, \varepsilon)^\star, \qquad f^\star\, x = (f\, \varepsilon)^\star\bullet
\sum_{y,z:x=y\cdot z,y\neq \varepsilon} f\, y \bullet f^\star\, z,
\end{equation*}
where $\sum$ indicates a finite sup. The verification of the star
axioms for power series uses structural induction over finite
words. Yet this is not applicable for general ternary relations.
Instead we use a notion of grading that has been used
for arbitrary monoids by Sakarovitch~\cite{Sakarovitch03}.
The function $|-|:X\to\mathbb{N}$ is a \emph{grading} on the
relational monoid $(X,R,E)$ if
\begin{itemize}
\item $|x|>0$ for all $x\in X$ such that $x\notin E$,
\item $|x|=|y|+|z|$ whenever $R^x_{yz}$.
\end{itemize}
Then $(X,R,E)$ is \emph{graded} if there is a grading on $X$.
Thus, in a graded monoid $|e|=0$ if and only if $e\in E$.
\begin{proposition}\label{P:ka-correspondence-prop}
If $(X,R,\{e\})$ is a graded, finitely decomposable, relational monoid and $K$ a Kleene
algebra, then $K^X$ is a Kleene algebra with
\begin{equation*}
f^\star\, e = (f\, e)^\star, \qquad f^\star\, x = (f\, e)^\star \bullet
\sum_{y,z:R^x_{y
z},y\neq e} f\, y \bullet f^\star\, z.
\end{equation*}
\end{proposition}
\begin{proof}
We need to check the unfold and induction axioms of Kleene
algebra. First, it is well known that the unfold axiom
$1+a^\star\bullet a \le a^\star$ is implied by the other axioms in
any Kleene algebra, and can therefore be ignored. Second, the
axiom $c+a\bullet b\le b\Rightarrow a^\star \bullet c\le b$ follows
from the simpler formula
$a\bullet b\le b\Rightarrow a^\star \bullet b \le b$ and, by
opposition, $c+b\bullet a\le b\Rightarrow c \bullet a^\star \le b$
follows from $b\bullet a\le b\Rightarrow b \bullet a^\star \le b$, in
any Kleene algebra~\cite{Kozen94}. It
thus remains to check that
\begin{equation*}
\mathit{id}_e+f\ast f^\star\le f^\star,
\qquad
f \ast g \le g\Rightarrow f^\star \ast g \le g,
\qquad
g \ast f \le g\Rightarrow g \ast f^\star \le g
\end{equation*}
hold in the convolution algebra $K^X$.
\begin{enumerate}
\item $\mathit{id}_e+f\ast f^\star = f^\star$. If $x=e$, then
$(\mathit{id}_e + f \ast f^\star)\, e = 1 + (f\, e) \bullet (f\, e)^\star = (f\,
e)^\star = f^\star \, e$.
Otherwise, if $x\neq e$,
\begin{align*}
(\mathit{id}_e + f \ast f^\star)\, x &= \sum \left\{f\, y\bullet f^\star\, z \mid
R^x_{yz}\right\}\\
&= f\, e \bullet f^\star \, x + \sum \left\{f\, y\bullet
f^\star\, z \mid R^x_{yz}\land y \neq e\right\}\\
&= f\, e \bullet f^\star \, e \bullet \sum \left\{f\, y\bullet
f^\star\, z \mid R^x_{yz}\land y \neq e\right\} + \sum \left\{f\, y\bullet
f^\star\, z \mid R^x_{yz} \land y\neq e \right\}\\
&= \left(f\, e \bullet f^\star \, e + \mathit{id}_e\, e\right) \bullet \sum \left\{f\, y\bullet
f^\star\, z \mid R^x_{yz} \land y \neq e\right\}\\
&= \left(f\, e\right)^\star \bullet \sum \left\{f\, y\bullet f^\star\, z \mid
R^x_{yz}\land y\neq e\right\}\\
&=f^\star \, x.
\end{align*}
\item
$\left(\forall x.\ (f \ast g)\, x \le g\, x\right)\Rightarrow \left(\forall x.\
(f^\star \ast g)\, x \le g\, x\right)$. We proceed by induction on $|x|$.
\begin{enumerate}
\item Let $|x|=0$ and hence $x=e$. Then
$(f^\star \ast g)\, e = (f\, e)^\star \bullet g\, e \le g\, e$
follows from the assumption $f\, e\bullet g\, e\le g\, e$ and the
first induction axiom of Kleene algebra.
\item Let $|x|>0$ and therefore $x\neq e$. Then, by the induction
hypothesis, $\left(f\ast g\right)\, y\le g\, y$ holds for all $y$ with
$|y|< |x|$. In addition, the assumption implies that
$\forall x,y,z.\ R^x_{yz} \Rightarrow f\, y\bullet g\, z\le g\, x$,
from which
$(f\, e)^\star\bullet g\, x = f^\star\, e \bullet g\, x \le g\, x$
follows by star induction in $K$. With this property,
\begin{align*}
(f^\star \ast g)\, x
&= f^\star\, e \bullet g\, x +\sum \left\{f^\star\, e\bullet \sum\left\{f\,
u \bullet f^\star\, v\mid R^y_{uv}\land u\neq e\right\} \bullet g\, z
\mid R^x_{yz}\land y \neq e\right\}\\
&= f^\star\, e \bullet \left(g\, x + \sum\left\{(f\,
u \bullet f^\star\, v) \bullet g\, z \mid \exists y.\ R^y_{uv}\land R^x_{yz}
\land u\neq e \wedge y\neq e\right\}\right)\\
&= f^\star\, e \bullet \left(g\, x + \sum\left\{ f\,
u \bullet (f^\star\, v \bullet g\, z) \mid \exists y.\ R^x_{uy}\land
R^y_{vz} \land u\neq e \land y\neq e\right\}\right)\\
&\le f^\star\, e \bullet \left(g\, x + \sum\left\{ f\, u \bullet (f^\star \ast
g)\, y \mid R^x_{uy}\land u\neq e \right\}\right)\\
&\le f^\star\, e \bullet \left(g\, x + \sum\left\{ f\, u \bullet g\, y\mid
R^x_{uy} \right\}\right)\\
&\le f^\star\, e \bullet \left(g\, x + (f\ast g)\, x\right)\\
&\le f^\star\, e \bullet \left(g\, x + g\, x\right)\\
&\le g\, x.
\end{align*}
The first step unfolds the definition of convolution and the Kleene
star in $K^X$. The second step applies distributivity laws in $K$; the
third one associativity in $X$ and $K$. The fourth step introduces a
convolution as an upper bound, thus dropping the constraint $y\neq
e$. The fifth step applies the induction hypothesis to $y$. The
condition $u\neq e$ guarantees that $|y|<|x|$. The sixth step applies
the assumption; the last step the derived property.
\end{enumerate}
\item $g \ast f \le g\Rightarrow g \ast f^\star \le g$
follows by opposition from (2).\qedhere
\end{enumerate}
\end{proof}
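The recursion in Proposition~\ref{P:ka-correspondence-prop} can be illustrated computationally. The sketch below works over the boolean Kleene algebra $\mathbb{B}$, with $X=\Sigma^\ast$ graded by word length; the star of the indicator function of a language $L$ then recognises $L^\star$. The code is an illustrative sketch only, not part of the formal development.

```python
# Graded star recursion f*(e) = (f e)*,
# f*(x) = (f e)* . sum over splits x = y.z with y != e of f(y) . f*(z),
# instantiated in the boolean Kleene algebra B = {0, 1}, where sum is
# max, composition is *, and a* = 1 for every a.

def star_b(b):
    # In B, a* = 1 for every a.
    return 1

def f_star(f, x):
    if x == "":
        return star_b(f(""))
    head = star_b(f(""))
    body = max((f(x[:i]) * f_star(f, x[i:]) for i in range(1, len(x) + 1)),
               default=0)
    return head * body

# f is the indicator of the language {"ab"}; f* recognises ("ab")*.
f = lambda w: 1 if w == "ab" else 0

print(f_star(f, ""))      # 1: the empty word is in L*
print(f_star(f, "abab"))  # 1
print(f_star(f, "aba"))   # 0
```

The recursion terminates because every split with $y\neq\varepsilon$ strictly decreases the grade (here, the length) of the second argument, exactly as in the inductive proof above.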
\noindent The following theorem is then immediate.
\begin{theorem}\label{P:interchange-ka-correspondence}
If $X$ is a graded relational interchange monoid with unit $e$
and $K$ an interchange Kleene algebra, then $K^X$ is an interchange
Kleene algebra.
\end{theorem}
We have already discussed the relationship between interchange
quantales and concurrent quantales in
Section~\ref{S:interchange-quantales}, namely that concurrent
quantales are interchange quantales in which $\pcomp$ is commutative
and $\se=\pe$. Similarly, concurrent semirings and concurrent Kleene
algebras are interchange semirings and interchange Kleene algebras
satisfying these two conditions. It is then a trivial consequence of
Theorem~\ref{P:interchange-semiring-correspondence},
Corollaries~\ref{P:comm-correspondence-unit}
and~\ref{P:correspondence23-cor} that $S^X$ is a concurrent semiring if
$S$ is a concurrent semiring and $X$ a finitely decomposable
relational monoid. Similarly, by
Theorem~\ref{P:interchange-ka-correspondence} and these corollaries,
$K^X$ is a concurrent Kleene algebra if $K$ is a concurrent Kleene
algebra and $X$ a graded relational monoid.
\section{Weighted Shuffle Languages}\label{S:shuffle}
This extended example shows how weighted shuffle
languages~\cite{Handbook} can be constructed with our approach. Yet an
alternative view on relational interchange monoids is helpful.
Obviously, the sets $\mathcal{P}\, (X\times X\times X)$ and $X\times X\to \mathcal{P}\, X$
are isomorphic.
A ternary relation $R$ can thus be
seen as a \emph{multioperation} $\odot:X\times X\to \mathcal{P}\, X$ defined
by
\begin{equation*}
x \in y\odot z \Leftrightarrow R^x_{yz}.
\end{equation*}
It can be extended Kleisli-style to the operation
$\odot: \mathcal{P}\, X\to\mathcal{P}\, X\to \mathcal{P}\, X$ defined,
for all $A,B\subseteq X$, by
\begin{equation*}
A\odot B = \bigcup \{x\odot y\mid x\in A\land y\in B\}.
\end{equation*}
It follows that
$(A\odot B)\, x = \bigvee\{A\, y \wedge B\, z\mid R^x_{yz}\}$ if the
sets $A$ and $B$ are identified with their indicator functions, which
turns $\odot$ into a convolution.
The overloading of the multioperation $\odot$ and its extension allows
rewriting the relational interchange laws more compactly in algebraic
form. Relational associativity becomes
$\{x\}\odot (y\odot z)= (x\odot y)\odot \{z\}$; the relational
interchange law (\ref{eq:ri7}) becomes
$(w\textcolor{teal}{\odot} x)\textcolor{orange}{\odot}
(y\textcolor{teal}{\odot} z) \subseteq (w\textcolor{orange}{\odot} y)
\textcolor{teal}{\odot} (x\textcolor{orange}{\odot} z)$.
Multisemigroups, multimonoids and other multialgebras have been
studied in mathematics for many
decades~\cite{Marty34,Krasner83,ConnesC10,KudryavtsevaM15}. In
computer science they appear in the semantics of separation
logic~\cite{GalmicheL06}.
The shuffle of two words from an alphabet $\Sigma$ is obviously a
multioperation
$\|: \Sigma^\ast \to \Sigma^\ast \to \mathcal{P}\,\Sigma^\ast$. It
can be defined recursively as
\begin{equation*}
v\| \varepsilon = \{v\} = \varepsilon\| v,\qquad (av)
\| (bw)= \{a\}\cdot (v\| (bw)) \cup \{b\} \cdot ((av)\|w),
\end{equation*}
where $a$ and $b$ are letters, $v$ and $w$ words, and
$\{a\}\cdot A = \{au\mid u\in A\}$ denotes prepending the letter $a$ to
each word in $A$. It yields the shuffle or Hurwitz product
\begin{equation*}
A\| B=\bigcup\{x\| y\mid x\in A \land y\in B\}
\end{equation*}
for $A,B\subseteq \Sigma^\ast$ at language level.
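The recursion can be checked on small examples. The following Python sketch, with illustrative names and prepending written out explicitly, computes word-level shuffles and the induced language-level product.

```python
# Recursive word shuffle: peel the first letter of one argument and
# prepend it to the shuffles of the remainders.

def shuffle(v, w):
    if v == "" or w == "":
        return {v + w}
    return ({v[0] + s for s in shuffle(v[1:], w)} |
            {w[0] + s for s in shuffle(v, w[1:])})

def shuffle_lang(A, B):
    """Hurwitz product at language level: union of word-level shuffles."""
    return set().union(*(shuffle(x, y) for x in A for y in B))

print(sorted(shuffle("ab", "c")))   # ['abc', 'acb', 'cab']
print(len(shuffle("ab", "cd")))     # 6 interleavings of two 2-letter words
```

For words of lengths $m$ and $n$ with pairwise distinct letters, the shuffle has exactly $\binom{m+n}{m}$ elements, which the second call confirms for $m=n=2$.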
To construct the quantale of $Q$-weighted shuffle languages using
Theorem~\ref{P:interchange-quantale-correspondence}(1) it remains to
check that the structure $M=(\Sigma^\ast,\srel,\prel,\{\varepsilon\})$
is a relational interchange monoid with shared unit
$\sE=\pE=\{\varepsilon\}$, where
$\srel^x_{yz} \Leftrightarrow x=y\cdot z$, for word concatenation
$\cdot$ and $\prel^x_{yz}\Leftrightarrow x \in y\|z$ for
shuffle.
It is of course straightforward to check that
$(\Sigma^\ast,\srel,\{\varepsilon\})$ is a relational monoid: it is in
fact isomorphic to the free monoid $(\Sigma^\ast,\cdot,\varepsilon)$
and checking the relational associativity and relational unit laws in
the first monoid amounts to checking their algebraic counterparts in
the second one. Checking that $(\Sigma^\ast,\prel,\{\varepsilon\})$ is a relational
monoid---or $(\Sigma^\ast,\|,\varepsilon)$ a multimonoid---and that
the relational interchange law (\ref{eq:ri7}) holds---or the
interchange law
$(w\|x)\cdot (y\| z) \subseteq (w\cdot y) \| (x\cdot z)$ with language
product $\cdot$ in the left-hand term and word concatenation $\cdot$
in the right-hand one---is a surprisingly tedious exercise and
requires nested
inductions.
The result of this verification is summarised as follows.
\begin{lemma}\label{P:shuffle-quantale}
$M$ is a relational interchange monoid with unit $\varepsilon$ and
relationally commutative $\prel$.
\end{lemma}
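While the full verification requires nested inductions, finite instances of the interchange law $(w\|x)\cdot (y\| z) \subseteq (w\cdot y) \| (x\cdot z)$ can be checked mechanically. The following Python sketch, illustrative only, tests one instance and shows that the inclusion can be strict.

```python
# Finite sanity check of (w || x) . (y || z)  subset of  (w.y) || (x.z)
# on concrete words, with shuffle and language product defined locally.

def shuffle(v, w):
    if v == "" or w == "":
        return {v + w}
    return ({v[0] + s for s in shuffle(v[1:], w)} |
            {w[0] + s for s in shuffle(v, w[1:])})

def lang_prod(A, B):
    return {x + y for x in A for y in B}

w, x, y, z = "a", "b", "c", "d"
lhs = lang_prod(shuffle(w, x), shuffle(y, z))  # {ab, ba} . {cd, dc}
rhs = shuffle(w + y, x + z)                    # "ac" || "bd"

print(lhs <= rhs)   # True: the interchange inclusion holds
print(lhs == rhs)   # False: the inclusion is strict in this instance
```

Here the left-hand side has four words while the right-hand side has the six interleavings of $ac$ and $bd$, so the inclusion is proper, as expected for a lax interchange law.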
\noindent The following corollary to
Theorem~\ref{P:interchange-quantale-correspondence}(1) is then automatic.
\begin{corollary}
If $Q$ is an interchange quantale with unit $\se=\pe= 1$ and
$\pcomp$ commutative, then $Q^M$ is an interchange quantale with
$\pconv$ commutative and
\begin{equation*}
(f\sconv g)\, x = \bigvee_{y,z:x= y\cdot z} f\, y \scomp g\, z,\qquad
(f\pconv g)\, x = \bigvee_{y,z:x\in y\parallel z} f\, y \pcomp g\, z,\qquad
\mathit{id}\, x = \delta_\varepsilon \, x.
\end{equation*}
\end{corollary}
\noindent The operation $\sconv$ is similar to the standard
convolution of formal power series, a $Q$-weighted generalisation of
the standard language product. The operation $\pconv$ generalises the
standard shuffle product $\|$ of languages to the $Q$-weighted
setting. Yet semirings or at least Kleene algebras are normally used
as weight-algebras. A grading on words is needed, and in this
particular case the length of words can be used. It is then obvious
that $\Sigma^\ast_n$---the set of words of length $n$---is finite
whenever $\Sigma$ is finite. This yields the following corollary to
Theorem~\ref{P:interchange-ka-correspondence}.
\begin{corollary}\label{P:shuffle-ka}
If $K$ is an interchange Kleene algebra with unit $1$ and $\pcomp$
commutative, then $K^M$ is an interchange Kleene algebra with unit
$\delta_\varepsilon$ and $\pconv$ commutative.
\end{corollary}
As we have shared units and a commutative shuffle operation, the
convolution algebras of weighted shuffle form in fact concurrent Kleene algebras.
Weighted languages are usually defined over semirings instead of
dioids. Instead of Kleene algebras one can then use star
semirings~\cite{Handbook}. The Kleene star can then be defined on
$Q^M$ as before. We conjecture that Corollary~\ref{P:shuffle-ka}
still holds for ordered star semirings, though we have not checked the
details.
Shuffle languages are widely used in the interleaving semantics of
concurrent programs. The finite transition and Aczel traces of the
rely-guarantee calculus~\cite{RoeverBH2001}, in particular, form concurrent
quantales, which suffices at least for the analysis of safety and
invariant properties.
\section{Partial Interchange Monoids}\label{S:pims}
Next we prepare for our second example, namely of digraphs under
serial and parallel composition. It is then natural to consider these
compositions not as ternary relations, but as partial operations on
graphs. This leads to more general notions of partial semigroups and
monoids. An approach to convolution with partial semigroups and
monoids has already been developed in~\cite{DongolHS16}.
\begin{definition}[\cite{DongolHS16}]
A \emph{partial monoid} is a structure
$(S,\otimes, D, E)$ where $S$ is a set, $D\subseteq S\times S$ the
domain of definition of the composition $\otimes :D\to S$, which is
associative in the sense that
\begin{equation*}
D\, x\, y \wedge D\, (x\otimes y)\, z \Leftrightarrow D\, y\, z\wedge
D\, x\, (y\otimes z),\qquad D\, x\, y \wedge D\, (x\otimes y)\, z
\Rightarrow x\otimes (y \otimes z) = (x\otimes y)\otimes z,
\end{equation*}
and $E\subseteq S$ is a set of units, which satisfy
\begin{equation*}
\exists e\in E.\ D\, e\, x\wedge e\otimes x= x,\qquad \exists e\in E.\
D\, x\, e\wedge x\otimes e= x,\qquad \forall e_1,e_2\in E.\ D\, e_1\, e_2
\Rightarrow e_1= e_2.
\end{equation*}
\end{definition}
This definition captures the intuition of partiality that the
left-hand side of $x\otimes (y \otimes z) = (x\otimes y)\otimes z$ is
defined if and only if the right-hand side is, and, if either side is
defined, then the two sides are equal. This notion of equality is
sometimes called \emph{Kleene equality}. Categories, monoids and the
interval algebras in Example~\ref{ex:incidence}, ordered pair algebras
in Example~\ref{ex:relations} and heaplet algebras in
Example~\ref{ex:sep-logic} all form partial monoids. Instead of the
unit axioms presented here we could equally use those of object-free
categories~\cite{MacLane98}. The precise relationship between partial
monoids and object-free categories is discussed in~\cite{CranchDS20}.
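The heaplet example can be made concrete: finite partial maps, modelled below as Python dictionaries, form a partial monoid under disjoint union. The sketch is illustrative only and checks the Kleene-equality reading of associativity on one instance.

```python
# A partial monoid of heaplets: finite partial maps under disjoint
# union. D holds when the domains are disjoint; the unit set E is the
# singleton containing the empty heaplet {}.

def D(h1, h2):
    # The composition is defined exactly when the domains are disjoint.
    return not (h1.keys() & h2.keys())

def otimes(h1, h2):
    assert D(h1, h2), "composition undefined on overlapping heaplets"
    return {**h1, **h2}

a, b, c = {1: "x"}, {2: "y"}, {3: "z"}

# Kleene-equality associativity: both bracketings are defined and equal.
assert D(a, b) and D(otimes(a, b), c)
assert otimes(otimes(a, b), c) == otimes(a, otimes(b, c))

# Undefinedness propagates: a overlaps itself, so a (x) a is undefined.
print(D(a, a))  # False
```

The empty heaplet is a unit for every element, and distinct units never compose, matching the unit axioms above in the degenerate case of a single unit.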
The relationship between partial and relational monoids is
straightforward. A relational monoid $(X,R,E)$ is \emph{functional} if
$R^x_{yz} \land R^{x'}_{yz} \Rightarrow x = x'$ holds for all
$x,x',y,z\in X$. With every functional relational monoid $(X,R,E)$ one
can then associate a partial monoid $(X,\otimes,D,E)$ with
$D\, y \,z \Leftrightarrow \exists x.\ R^x_{yz}$ and $y\otimes z$
being the unique $x\in X$ that satisfies $R^x_{yz}$---whenever $D\, y\, z$
holds. We are particularly interested in the converse
construction.
\begin{lemma}[\cite{DongolHS17}]\label{P:pmonoid-relmonoid}
If $(S,\otimes,D,E)$ is a partial monoid, then $(S,R,E)$ is a
(functional) relational monoid with
\begin{equation*}
R^x_{yz}\Leftrightarrow x=y\otimes z\land D\, y\, z.
\end{equation*}
\end{lemma}
Next we relate partial monoids with relational interchange monoids.
Expressing a variant of the interchange law (\ref{eq:i7}) in the
context of partial monoids requires an ordering on $S$. This motivates
the following definition.
\begin{definition}
A \emph{preordered partial monoid} is a structure
$(S,\preceq,\otimes,D,E)$ such that $(S,\preceq)$ is a preorder,
$(S,\otimes,D,E)$ a partial monoid,
and $\otimes$ is order preserving in the
sense that
\begin{equation*}
x\preceq y \wedge D\, z\, x \Rightarrow z \otimes x\preceq z\otimes y
\wedge D\, z\, y,\qquad x\preceq y \wedge D\, x\, z \Rightarrow x
\otimes z\preceq y\otimes z
\wedge D\, y\, z.
\end{equation*}
\end{definition}
Lemma~\ref{P:pmonoid-relmonoid} can then be generalised.
\begin{lemma}\label{P:ord-pmonoid-relsemigroup}
Let $(S,\preceq,\otimes,D)$ be a preordered partial monoid. Then
$(S,R)$ is a relational semigroup with
\begin{equation*}
R^x_{yz}\Leftrightarrow x\preceq y\otimes z\land D\, y\, z.
\end{equation*}
\end{lemma}
\begin{proof}
For relational associativity,
\begin{align*}
\exists y.\ R^x_{uy} \land R^y_{vw}
& \Leftrightarrow \exists y.\ x\preceq u\otimes y \land D\, u\, y
\land y\preceq v\otimes w\land D\, v\, w\\
& \Leftrightarrow x\preceq u\otimes (v\otimes w)
\land D\, v\, w \land D\, u\, (v\otimes w)\\
& \Leftrightarrow x\preceq (u\otimes v)\otimes w
\land D\, u\, v \land D\, (u\otimes v)\, w\\
& \Leftrightarrow \exists y.\ D\, y \, w
\land y \preceq u\otimes v \land D\, u\, v \land x\preceq y \otimes w\\
&\Leftrightarrow \exists y.\ R^y_{uv}\land R^x_{yw}.
\end{align*}
\end{proof}
However, the unit laws of preordered partial monoids need not translate
to relational semigroups.
\begin{lemma}\label{P:ord-pmonoid-relsemigroup-units}
Let $(S,\preceq,\otimes,D,E)$ be a preordered partial monoid and
$R^x_{yz}\Leftrightarrow x\preceq y\otimes z\land D\, y\, z$. Then
\begin{enumerate}
\item $\exists e\in E.\ R^x_{ex}$ and $\exists e\in E.\ R^x_{xe}$,
\item $\exists e\in E.\ R^x_{ey} \Rightarrow x\preceq y$ and $\exists e\in E.\ R^x_{ye} \Rightarrow x\preceq y$.
\end{enumerate}
\end{lemma}
In (2), it cannot generally be expected that $x=y$. The relationship
$x\preceq y$ cannot be captured directly within relational semigroups
or monoids.
\begin{definition}
A \emph{partial interchange monoid} is a structure
$(S,\preceq,\sdot,\pdot,\sD,\pD,\sE,\pE)$ such that
$(S,\preceq,\sdot,\sD,\sE)$ and $(S,\preceq,\pdot,\pD,\pE)$ are
preordered partial monoids, $\pE\subseteq \sE$ and the following
interchange law holds:
\begin{equation}
\label{eq:pi7}
\pD\, w\, x \land \sD\, (w\pdot x)\, (y\pdot z)\land \pD\, y\,
z\Rightarrow \sD\, w\, y \land \pD\, (w\sdot y)\, (x \sdot z) \land \sD\, x\, z
\wedge (w \pdot x)\sdot (y\pdot z)\preceq (w\sdot y)\pdot (x\sdot z). \tag{pi7}
\end{equation}
\end{definition}
In light of Lemmas~\ref{P:ord-pmonoid-relsemigroup}
and~\ref{P:ord-pmonoid-relsemigroup-units} we cannot expect to
relate partial interchange monoids directly with relational
interchange monoids. But the relationship is straightforward if we
forget relational units and restrict to a single monoidal unit.
\begin{lemma}\label{P:pim-small-interchange}
Let $(S,\preceq,\sdot,\pdot,\sD,\pD,\sE,\pE)$ be a partial
interchange monoid in which $\sE=\{e\}=\pE$. Then the following
small interchange laws hold.
\begin{enumerate}
\item $\sD\, x \, y \Rightarrow \pD\, x\, y \land x \sdot y \preceq x \pdot y$,
\item $\sD\, x \, y \Rightarrow \pD\, y\, x \land x \sdot y \preceq y \pdot x$,
\item
$\sD\, x\, (y\pdot z) \land \pD\, y\, z \Rightarrow \sD\, x\, y
\land \pD\, (x \sdot y)\, z \land x\sdot (y \pdot z) \preceq (x\sdot
y)\pdot z$,
\item $\pD\, x\, y \land \sD\, (x\pdot y)\, z \Rightarrow \pD\, x\, (y
\sdot z) \land \sD\, y \, z\land (x\pdot y) \sdot z \preceq x \pdot (y\sdot z)$,
\item $\sD\, x \, (y\pdot z) \land \pD\, y\, z \Rightarrow \pD\, y\,
(x\sdot z) \land \sD\, x\, z \land x\sdot (y \pdot z) \preceq y \pdot (x\sdot z)$,
\item $\pD\, x\, y\land \sD\, (x\pdot y)\, z \Rightarrow \sD\, x\,
z\land \pD\, (x\sdot z)\, y \land (x\pdot y) \sdot z \preceq (x\sdot z) \pdot y$.
\end{enumerate}
\end{lemma}
\begin{proof}
We show (3) as an example. Suppose $\sD\, x\, (y\pdot z)$ and
$\pD\, y\, z$. Then $\pD\, x\, e$ and
$\sD\, (x\pdot e)\, (y\pdot z)$ and therefore $\sD\, x\, y$,
$\pD\, (x\sdot y)\, (e\sdot z)$, $\sD\, e\, z$ and
$(x\pdot e)\sdot (y \pdot z) \preceq (x\sdot y)\pdot (e\sdot z)$ by
(\ref{eq:pi7}). Hence
$x\sdot (y \pdot z) \preceq (x\sdot y)\pdot z$ by the unit laws of
partial monoids. The other proofs are similar and left to the reader.
\end{proof}
With multiple units it seems necessary to require that parallel units
are sequential units for all elements, which is artificial.
From now on, a \emph{relational interchange semigroup} is a
relational interchange monoid in which units may be absent, and in
which the small interchange laws (\ref{eq:ri1})--(\ref{eq:ri6}) hold
in addition to (\ref{eq:ri7}).
\begin{lemma}\label{P:pinterchangemonoid-rinterchangesemigroup}
If $(S,\preceq,\sdot,\pdot,\sD,\pD,\{e\})$ is a partial interchange
monoid, then $(S,\srel,\prel)$ is a relational interchange semigroup with
$\srel^x_{yz}\Leftrightarrow x= y\sdot z\land \sD\, y\, z$
and $\prel^x_{yz}\Leftrightarrow x\preceq y\pdot z\land \pD\, y\, z$.
\end{lemma}
\begin{proof}
We need to check that (\ref{eq:pi7}) implies (\ref{eq:ri7}).
\begin{align*}
\exists y,z.\ \prel^y_{tu} \land \srel^x_{yz}\land \prel^z_{vw}
& \Leftrightarrow \exists y,z.\ y\preceq t\pdot u \land \pD\, t\,
u\land x=y\sdot z\land \sD\, y\, z \land z\preceq v\pdot w \land
\pD\, v\, w\\
&\Leftrightarrow x\preceq (t\pdot u)\sdot (v\pdot w) \land \pD\, t\,
u\land \sD\, (t\pdot u)\, (v\pdot w) \land \pD\, v\, w\\
&\Rightarrow x\preceq (t\sdot v)\pdot (u\sdot w) \land \sD\, t\,
v\land \pD\, (t\sdot v)\, (u\sdot w) \land \sD\, u\, w\\
&\Leftrightarrow \exists y,z.\ y = t\sdot v \land \sD\, t\, v\land
x\preceq y\pdot z \land \pD\, y\, z
\land z = u\sdot w \land \sD\, u\, w\\
& \Leftrightarrow \exists y,z.\ \srel^y_{tv} \land \prel^x_{yz} \land
\srel^z_{uw}.
\end{align*}
This calculation does not depend on the presence of units. Small
interchange laws hold in $S$ by
Lemma~\ref{P:pim-small-interchange}. These allow deriving the
small relational interchange laws (\ref{eq:ri1})-(\ref{eq:ri6}) as
in the proof of (\ref{eq:ri7}). Hence $S$ is a relational
interchange semigroup.
\end{proof}
It is easy to check that we
could have used
$\srel^x_{yz}\Leftrightarrow x\preceq y\sdot z\land \sD\, y\, z$
instead of $\srel^x_{yz}\Leftrightarrow x = y\sdot z\land \sD\, y\, z$
in the proof of
Lemma~\ref{P:pinterchangemonoid-rinterchangesemigroup}. Using
an equational encoding for $\prel$, however, would have broken the
proof. The following example shows that even (\ref{eq:ri1}) would
break if two equational encodings were used.
\begin{example}
Consider the partial monoid over $\{a,b\}$ with $\sD= \{(a,a)\}$,
$\pD =S\times S$, compositions defined by
$b\pdot b=a\pdot b=b\pdot a= a\sdot a = b$ and $a\pdot a = a$, the
preorder generated by $b\prec a$, and a suitable unit adjoined. The small interchange law
$\sD\, x\, y \Rightarrow \pD\, x\, y \land x\sdot y\preceq x\pdot y$
holds for all $x$ and $y$. Let
$\srel^x_{yz}\Leftrightarrow x=y\sdot z\land \sD\, y\, z$ and
$\prel^x_{yz}\Leftrightarrow x=y\pdot z\land \pD\, y\, z$. Then
$\srel\not\subseteq \prel$, that is, $\srel^b_{aa}$ and
$\neg \prel^b_{aa}$, because $\sD\, a\, a$, $b=a\sdot a$ and
$b\neq a= a\pdot a$.\qed
\end{example}
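The finite counterexample above can be checked mechanically. The following Python sketch is our own encoding (element names and the way the unit is adjoined are implementation choices): it verifies that the small interchange law holds on all of $\sD$, while $\srel^b_{aa}$ holds and $\prel^b_{aa}$ fails under the equational encodings.

```python
# Brute-force check of the counterexample: with equational encodings
# of BOTH compositions, even the inclusion srel ⊆ prel fails.
E = 'e'
S = ['e', 'a', 'b']

# serial composition: defined only on sD = {(a,a)} plus unit pairs
sdot = {('a', 'a'): 'b'}
for x in S:
    sdot[(E, x)] = x
    sdot[(x, E)] = x

# parallel composition: total (pD = S x S), with e as unit
pdot = {('a', 'a'): 'a', ('a', 'b'): 'b', ('b', 'a'): 'b', ('b', 'b'): 'b'}
for x in S:
    pdot[(E, x)] = x
    pdot[(x, E)] = x

def preceq(x, y):
    # reflexive preorder generated by b < a
    return x == y or (x, y) == ('b', 'a')

# the small interchange law sD x y => pD x y and x.y <= x||y holds
for (x, y), v in sdot.items():
    assert (x, y) in pdot and preceq(v, pdot[(x, y)])

# equational encodings of both ternary relations
def srel(x, y, z): return (y, z) in sdot and x == sdot[(y, z)]
def prel(x, y, z): return (y, z) in pdot and x == pdot[(y, z)]

assert srel('b', 'a', 'a')      # b = a . a
assert not prel('b', 'a', 'a')  # but b != a = a || a
```

The check confirms that replacing the order-based encoding of $\prel$ by an equational one destroys the inclusion $\srel\subseteq\prel$.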
Lemma~\ref{P:pinterchangemonoid-rinterchangesemigroup} yields the
following corollary to
Theorem~\ref{P:interchange-quantale-correspondence}(1).
\begin{corollary}\label{P:rel-interchange-monoid-quantale}
If $S$ is a partial interchange monoid with unit $e$ and $Q$ an
interchange quantale, then $Q^S$ is a non-unital interchange
quantale with convolutions
\begin{equation*}
(f\sconv g)\, x = \bigvee_{y,z:x=y\sdot z} f\, y\scomp g\, z,\qquad
(f\pconv g)\, x = \bigvee_{y,z:x\preceq y\pdot z} f\, y\pcomp g\, z
\end{equation*}
that satisfies the small interchange laws
(\ref{eq:i1})--(\ref{eq:i6}) in addition to (\ref{eq:i7}).
\end{corollary}
Unitality fails in general because the unit $\mathit{id}$ of $\sconv$ need
not be the unit of $\pconv$:
\begin{equation*}
(f\pconv \mathit{id})\, x = \bigvee\left\{f\, y \pcomp 1\mid \prel^x_{ye}\right\}=
\bigvee\left\{f\, y \mid x\preceq y\right\} \ge f\, x,
\end{equation*}
but not necessarily $f\pconv \mathit{id} = f$, and similarly for
$\mathit{id}\pconv f = f$. The retract $(Q^S,\le,\sconv)$ has unit $\mathit{id}$;
only the retract $(Q^S,\le,\pconv)$ may fail to have $\mathit{id}$ as a
unit. To obtain equality, and hence unital interchange quantales,
conditions on $f$ are needed.
A partial interchange monoid $(S,\sdot,\pdot,\{e\})$ is
\emph{positive} if $e$ is a minimal element of $S$ with respect to
$\preceq$. It is \emph{serially-decomposable} if
$x\preceq y_1\sdot y_2$ implies that there exist $x_1$, $x_2$ such
that $x=x_1\sdot x_2$, $x_1\preceq y_1$ and $x_2\preceq y_2$.
\begin{lemma}\label{P:antitone-unital}
Let $f$ be antitone, that is,
$x\preceq y\Rightarrow f\, y\le f\, x$. Then
$f\pconv \mathit{id} = f = \mathit{id}\pconv f$.
\end{lemma}
\begin{proof}
$(f\pconv \mathit{id})\, x = \bigvee\left\{f\, y \pcomp 1\mid \prel^x_{ye}\right\}=
\bigvee\{f\, y \mid x\preceq y\} = f\, x$.
The $\le$-direction holds by antitonicity, the $\ge$-direction by
the above calculation. The proof of $\mathit{id}\pconv f = f$ is similar.
\end{proof}
To make $\mathit{id}$ antitone it seems appropriate to require that $e$ is
minimal with respect to $\preceq$ and hence that the partial
interchange monoid is positive. We also need to check that $\sconv$
and $\pconv$ preserve antitonicity.
\begin{proposition}\label{P:pim-conv-quantale}
Let $(S,\sdot,\pdot,\{e\})$ be a positive serially-decomposable
partial interchange monoid and $Q$ an interchange quantale. Then
the antitone functions in $Q^S$ form a (unital)
interchange sub-quantale.
\end{proposition}
\begin{proof}
Unitality follows from Lemma~\ref{P:antitone-unital}. It remains to
show that $\mathit{id}$ is antitone and that $\sconv$ and $\pconv$ preserve
antitonicity. The first fact follows from positivity. For
preservation of $\pconv$, suppose $x\preceq y$. Then
\begin{equation*}
(f\pconv g)\, y = \bigvee \{f\, y_1 \pcomp g\, y_2\mid y \preceq y_1 \pdot
y_2\} \le \bigvee \{f\, x_1 \pcomp g\, x_2\mid x \preceq x_1 \pdot
x_2\} = (f\pconv g)\, x.
\end{equation*}
For preservation of $\sconv$, suppose once again $x\preceq y$. Then
\begin{equation*}
(f\sconv g)\, y = \bigvee \{f\, y_1 \scomp g\, y_2\mid y = y_1 \sdot
y_2\} \le \bigvee \{f\, x_1 \scomp g\, x_2\mid x = x_1 \sdot
x_2\} = (f\sconv g)\, x
\end{equation*}
by serial decomposability and antitonicity of $f$ and $g$.
\end{proof}
\section{Weighted Graph Languages}\label{S:graph-languages}
Our second extended example shows how weighted graph languages can be
constructed with our approach. A partial interchange monoid structure
can be imposed on graphs in various ways. Partiality arises because,
typically, the vertices of the graph operands are supposed to be
disjoint. Henceforth, we mean digraph when we say graph. Graphs with
undirected edges can be obtained from these in the obvious way.
Formally, we view graphs as binary relations on some set $X$. Let
graphs $G_1$ and $G_2$ be disjoint, that is, they have disjoint vertex
sets: $V_{G_1}\cap V_{G_2}=\emptyset$. Their \emph{serial composition}
(complete join) and \emph{parallel composition} (disjoint union) are
defined as
\begin{equation*}
G_1\cdot G_2=G_1\sqcup G_2\sqcup \, (V_{G_1}\times
V_{G_2}),\qquad
G_1\| G_2 = G_1\sqcup G_2,
\end{equation*}
where $\sqcup$ denotes disjoint union. Both operations are
standard~\cite{CourcelleEngelfriet}. This turns graphs under
serial composition into partial monoids, and graphs under parallel
composition into partial abelian monoids.
A \emph{graph morphism} $\varphi:G_1\to G_2$ between graphs $G_1$ and
$G_2$ satisfies $(x,y)\in G_1\Rightarrow (\varphi\, x,\varphi\, y)\in G_2$.
A morphism $\varphi$ is \emph{faithful}, or a \emph{graph embedding}, if
$(\varphi\, x,\varphi\, y)\in G_2$ implies $(x,y)\in G_1$. A \emph{graph
isomorphism} is a bijective (on vertices) graph embedding. We write
$G_1\cong G_2$ if there exists a graph isomorphism between $G_1$ and
$G_2$. We say that $G_1$ and $G_2$ are \emph{isomorphic} or have the
same \emph{graph type} if $G_1\cong G_2$ and call $G/{\cong}$ the
\emph{isomorphism class} or \emph{graph type} of $G$.
The \emph{subsumption relation} $\preceq$ between graphs, which is
defined by $G_1\preceq G_2$ if and only if there exists a bijective
(on vertices) graph morphism $\varphi:G_2\to G_1$, is a preorder. The
associated subsumption equivalence $\simeq$ need not coincide with
$\cong$, as will be explained in Section~\ref{S:graph-type-languages}.
We now fix any set $\mathcal{G}$ of (di)graphs that contains the empty
graph $\varepsilon$ and is closed under serial and parallel
composition.
\begin{proposition}\label{P:graph-cmonoid}
The structure $(\mathcal{G},\cdot,\|,\{\varepsilon\})$ forms a
partial interchange monoid with commutative parallel composition and
shared unit $\varepsilon$.
\end{proposition}
\begin{proof}
First of all, the partial associativity and unit laws, partial
commutativity of disjoint union as well as partial isotonicity of
the two compositions must be shown. This is routine. In the presence
of a shared unit $\varepsilon$ it then remains to verify
(\ref{eq:pi7}). For this we need the following isotonicity property
of cartesian products: $A\subseteq B$ implies
$A\times C\subseteq B\times C$ and $C\times A\subseteq C\times B$.
We only show that the interchange law
$(G_1\| G_2)\cdot (G_3\| G_4)\preceq (G_1\cdot G_3)\|(G_2\cdot
G_4)$
holds and leave the remaining laws to the reader. We use the
identity function on the $G_i$ to construct the bijective
morphism. We need to show that
$V_{(G_1\| G_2)\cdot (G_3\| G_4)}= V_{(G_1\cdot G_3)\|(G_2\cdot
G_4)}$
and
$(G_1\cdot G_3)\|(G_2\cdot G_4) \subseteq (G_1\| G_2)\cdot (G_3\|
G_4)$
as a relation. First, $V_{G_i\cdot G_j} = V_{G_i}\cup V_{G_j} = V_{G_i\| G_j}$ and
therefore
\begin{equation*}
V_{(G_1\| G_2)\cdot (G_3\| G_4) }
= V_{G_1} \cup V_{G_2}\cup V_{G_3}\cup V_{G_4}
=V_{(G_1\cdot G_3)\|(G_2\cdot G_4)}.
\end{equation*}
Second,
\begin{align*}
(G_1\cdot G_3)\|(G_2\cdot G_4) &= (G_1\cup G_3\cup V_{G_1}\times
V_{G_3})\|(G_2\cup G_4\cup V_{G_2}\times V_{G_4})\\
&= G_1\cup G_3\cup V_{G_1}\times
V_{G_3}\cup G_2\cup G_4\cup V_{G_2}\times V_{G_4}\\
&\subseteq G_1\cup G_2\cup G_3\cup G_4 \cup (V_{G_1}\cup V_{G_2})\times
(V_{G_3}\cup V_{G_4})\\
&= (G_1\| G_2) \cup (G_3\| G_4) \cup V_{G_1\| G_2}\times
V_{G_3\| G_4}\\
& = (G_1\| G_2)\cdot (G_3\| G_4).
\end{align*}
\end{proof}
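The edge-set inclusion underlying this proof can be sanity-checked by brute force. The following Python sketch (our own encoding of graphs as vertex-set/edge-set pairs) tests, on random digraphs with pairwise disjoint vertex sets, that $(G_1\cdot G_3)\|(G_2\cdot G_4)\subseteq (G_1\| G_2)\cdot (G_3\| G_4)$ holds as a relation on the same vertex set, which is exactly the subsumption $(G_1\| G_2)\cdot (G_3\| G_4)\preceq (G_1\cdot G_3)\|(G_2\cdot G_4)$ witnessed by the identity on vertices.

```python
# Randomised check of the graph interchange inclusion.
import itertools, random

def serial(g1, g2):
    # complete join: all arrows from g1 to g2 are added
    (v1, e1), (v2, e2) = g1, g2
    return (v1 | v2, e1 | e2 | set(itertools.product(v1, v2)))

def parallel(g1, g2):
    # disjoint union (vertex sets are disjoint by construction below)
    (v1, e1), (v2, e2) = g1, g2
    return (v1 | v2, e1 | e2)

def random_graph(vertices, rng):
    edges = {(x, y) for x in vertices for y in vertices
             if rng.random() < 0.5}
    return (set(vertices), edges)

rng = random.Random(0)
for _ in range(100):
    # four graphs on pairwise disjoint vertex sets
    g1, g2, g3, g4 = (random_graph(vs, rng)
                      for vs in ({0, 1}, {2, 3}, {4, 5}, {6, 7}))
    lhs = parallel(serial(g1, g3), serial(g2, g4))  # (G1.G3) || (G2.G4)
    rhs = serial(parallel(g1, g2), parallel(g3, g4))  # (G1||G2).(G3||G4)
    assert lhs[0] == rhs[0]   # same vertex set
    assert lhs[1] <= rhs[1]   # edge inclusion, i.e. rhs subsumes-larger
```

The inclusion holds in every trial because $V_{G_1}\times V_{G_3}$ and $V_{G_2}\times V_{G_4}$ are both contained in $(V_{G_1}\cup V_{G_2})\times(V_{G_3}\cup V_{G_4})$, mirroring the displayed calculation.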
Lemma~\ref{P:pinterchangemonoid-rinterchangesemigroup} and
Corollary~\ref{P:rel-interchange-monoid-quantale} then imply that
weighted graph languages form interchange quantales up to unitality of
the parallel quantale retract. But one can do better.
\begin{lemma}
The partial interchange monoid $(\mathcal{G},\cdot,\|,\{\varepsilon\})$ is positive and serially decomposable.
\end{lemma}
\begin{proof}
It is clear that $\varepsilon$ is an isolated point with respect to
$\preceq$ and hence minimal. This proves positivity. The proof of
serial decomposability is intuitive, but somewhat tedious to spell
out formally. Suppose $G\preceq G_1\cdot G_2$. Then the vertices
of $G_1$ and $G_2$ are disjoint and in addition to the arrows of
$G_1$ and $G_2$ we have $V_{G_1}\times V_{G_2}$. Hence if
$G\preceq G_1\cdot G_2$, then the arrows added by the bijective
graph morphism $\varphi:G_1\cdot G_2 \to G$ must either be added to
$G_1$ or to $G_2$, while $V_{G_1}\times V_{G_2}$ stays the
same. There must thus be $G_1'\preceq G_1$ and $G_2'\preceq G_2$ such
that $G=G_1'\cdot G_2'$.
\end{proof}
Proposition~\ref{P:pim-conv-quantale} then specialises as follows.
\begin{corollary}\label{P:graph-conv-algebra}
If $Q$ is an interchange quantale with unit $1$ and $\pcomp$
commutative, then $Q^\mathcal{G}$ is a (generally non-unital) interchange
quantale with $\pconv$ commutative and
\begin{equation*}
(f\sconv g)\, x = \bigvee_{y,z:x=y\cdot z} f\, y\scomp g\, z,\qquad
(f\pconv g)\, x = \bigvee_{y,z:x\preceq y\| z} f\, y\pcomp g\, z.
\end{equation*}
The subquantale of antitone functions in $Q^\mathcal{G}$ is unital.
\end{corollary}
Labels can be added to vertices ad libitum, which yields proper
weighted graph languages. Both the serial and the parallel composition
preserve order properties. Corollary~\ref{P:graph-conv-algebra} thus
specialises immediately to weighted partial orders.
Next we consider convolution algebras that are powerset liftings, that
is, $Q= \mathbb{B}$. Then $f:\mathcal{G}\to \mathbb{B}$ is a set indicator
function and we may write $x\in f$ instead of $f\, x$, identifying the
indicator function with the set it represents. Then
$(f\sconv g)\, x = \bigvee\{f\, y\scomp g\, z\mid x = y\cdot z\}$
rewrites as
$x\in f\sconv g \Leftrightarrow \exists y,z.\ x=y\cdot z\land y\in
f\land z\in g$ and hence
\begin{equation*}
f\sconv g= \{y\cdot
z\mid y\in f \land z\in g\}.
\end{equation*}
Similarly,
\begin{equation*}
f\pconv g=\{x\mid x\preceq
y\|z \land y\in f \land z\in g\}=\{y\|
z\mid y\in f \land z\in g\}{\downarrow},
\end{equation*}
where $\downarrow$ denotes the down-closure with respect to
$\preceq$. Moreover, the antitonicity condition rewrites as $x\preceq y
\land f\, y \Rightarrow f\, x$, which is precisely $f=f{\downarrow}$,
that is, $f$ is a down-set with respect to $\preceq$.
\begin{corollary}
The down-sets in $\mathcal{P}\, \mathcal{G}$ form a unital
interchange quantale.
\end{corollary}
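For intuition, the powerset lifting can be computed by hand on the two-element partial monoid used in the example of the previous section. The following Python sketch (our own encoding) shows that $f\sconv g$ is the elementwise serial composition, $f\pconv g$ the down-closure of the elementwise parallel composition, and that the lifted small interchange inclusion $f\sconv g\subseteq f\pconv g$ holds there.

```python
# Powerset lifting over the partial monoid with sD = {(a,a)},
# a.a = b, a||a = a, b||b = a||b = b||a = b, and preorder b < a.
sdot = {('a', 'a'): 'b'}
pdot = {('a', 'a'): 'a', ('a', 'b'): 'b', ('b', 'a'): 'b', ('b', 'b'): 'b'}

def down(s):
    # down-closure w.r.t. the preorder generated by b < a
    return s | ({'b'} if 'a' in s else set())

def sconv(f, g):
    # f * g = { y . z | y in f, z in g }, only where defined
    return {sdot[(y, z)] for y in f for z in g if (y, z) in sdot}

def pconv(f, g):
    # f || g = { y || z | y in f, z in g } down-closed
    return down({pdot[(y, z)] for y in f for z in g})

f = {'a'}
assert sconv(f, f) == {'b'}        # {a . a}
assert pconv(f, f) == {'a', 'b'}   # {a || a} down-closed
assert sconv(f, f) <= pconv(f, f)  # lifted small interchange law
```

Note that `pconv` always returns a down-set, matching the characterisation of antitone Boolean-valued functions above.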
Finally we consider the finite case, and obtain the following
corollary of Theorem~\ref{P:interchange-ka-correspondence}.
\begin{corollary}
If $K$ is an interchange Kleene algebra with unit $1$ and
$\mathcal{G}$ a partial interchange monoid of finite graphs, then
the antitone functions in
$K^\mathcal{G}$ form an interchange Kleene algebra.
\end{corollary}
This holds because any finite graph can be decomposed serially or in
parallel into subgraphs in only finitely many ways. Once again, all results
specialise to partial orders, and in particular to labelled partial
orders, where vertices are labelled with letters from some alphabet.
Sets of partial orders in general, and labelled partial orders in
particular, are widely used in concurrency
theory~\cite{Grabowski,Vogler92} and the theory of distributed
systems~\cite{Lamport78}.
\section{Weighted Languages of Types of Finite
Graphs}\label{S:graph-type-languages}
Many applications, including those in concurrency and distributed
systems, require isomorphism classes and hence types of graphs or
(labelled) partial orders. Lifting the results from
Section~\ref{S:graph-languages} to these is not entirely
straightforward. This is well known~\cite{Esik02}, but we spell out
details for the sake of completeness.
\begin{example}[\cite{Esik02}]
Consider the infinite poset $(P,\le_P)$ with
$P=\{p_{i,j}\mid i,j\in \mathbb{N}\land (i=0\lor j=0)\}$ and
$p_{i,j}\le_P p_{k,l}$ if and only if $i = k = 0$ and $j \le l$, and
the infinite poset $(Q,\le_Q)$ with
$Q=\{q_{i,j}\mid i,j\in \mathbb{N}\land (i=0\lor i=1 \lor j=0)\}$
and $q_{i,j}\le_Q q_{k,l}$ if and only if $i = k = 0$ or $i=k=1$,
and $j\le l$.
Intuitively, $P$ consists of the disjoint union of the infinite
chain formed by the $p_{0,j}$ and the isolated elements $p_{i,0}$ with
$i> 0$, whereas $Q$ consists of the disjoint union of the infinite
chain formed by the $q_{0,j}$, the infinite chain formed by the
$q_{1,j}$ and the isolated elements $q_{i,0}$ with $i> 1$.
Define the functions $\varphi:P\to Q$ and $\psi:Q\to P$ by
\begin{equation*}
\varphi\, p_{i,j} =
\begin{cases}
q_{0,j} & \text{ if } i=0,\\
q_{1,k} & \text{ if } i = 2k+1,\\
q_{k+1,0} & \text{ if } i = 2k > 0,
\end{cases}
\qquad
\psi\, q_{i,j} =
\begin{cases}
p_{0,2j} & \text{ if } i=0,\\
p_{0,2j+1} &\text{ if } i=1,\\
p_{i-1,0} & \text{ if } i > 1.
\end{cases}
\end{equation*}
Intuitively, $\varphi$ maps the chain in $P$ onto the first chain in
$Q$ and the isolated elements in $P$ alternatingly onto the second
chain and the isolated elements in $Q$, whereas $\psi$ maps the
elements of the two chains in $Q$ alternatingly onto the chain in $P$,
and isolated points in $Q$ onto isolated points in $P$. The morphisms
are shown in Figure~\ref{fig:graph-morphisms}.
\begin{figure}
\caption{Posets $P$ and $Q$ with bijective morphisms $\varphi$ in left
diagram and $\psi$ in right diagram.}
\label{fig:graph-morphisms}
\end{figure}
By construction, $\varphi$ and $\psi$ are both bijective and
order-preserving. Hence $P\preceq Q$ and $Q\preceq P$, but of course
neither $P=Q$ nor $P\cong Q$.\qed
\end{example}
At least in the finite case the situation is simpler. An explanation
requires two simple facts about groups.
\begin{lemma}\label{P:group-aux1}
Let $G$ be the cyclic group generated by $x$ and let $x^i = x^j$ for
some integers $i<j$. Then $G=\{1,x,x^2,\dots x^{k-1}\}$, where
$k=j-i$.
\end{lemma}
\begin{proof}
The assumption implies that $x^i = x^i x^k$ with $k=j-i$, and thus
$x^k= 1$ by cancellation. By cyclicity, every $g\in G$ is of the
form $g=x^n$ for some $n\in\mathbb{N}$ that can be written $n=pk+q$
for some $p,q\in\mathbb{N}$ with $q\le k-1$. Hence
$g=(x^k)^p x^q=1^p x^q = x^q$ for some $0\le q\le k-1$ or,
equivalently, $g\in \{1,x,x^2,\dots x^{k-1}\}$. Since every
$x^n\in G$, this shows that $G=\{1,x,x^2,\dots x^{k-1}\}$.
\end{proof}
\begin{lemma}\label{P:group-aux2}
Let $G$ be a finite cyclic group of order $n$ generated by $x$.
Then $G=\{1,x,x^2,\dots, x^{n-1}\}$ and $x^n=1$.
\end{lemma}
\begin{proof}
By the pigeonhole principle, there must be a minimal
$j\in\mathbb{N}$, $j\le n$, such that $x^j=x^i$ for some $i\in\mathbb{N}$ with
$i<j$. Hence the elements $1,x,x^2,\dots, x^{j-1}$ are pairwise
distinct. Then, by Lemma~\ref{P:group-aux1}, $j=n$, $i=0$ and $x^n= x^0=1$.
\end{proof}
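The situation used in the next proof, a bijection on a finite vertex set generating a finite cyclic group, can be illustrated concretely. The following Python sketch (the particular permutation is our own choice) iterates a permutation $\chi$ of a five-element set until the identity is reached, confirming that $\chi^k=\mathit{id}$ for $k$ the order of the generated cyclic group.

```python
# Small illustration of Lemma group-aux2 for a permutation chi of a
# finite set: iterating chi returns to the identity.
def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations encoded as tuples
    return tuple(p[i] for i in q)

chi = (1, 2, 0, 4, 3)     # a 3-cycle on {0,1,2} and a 2-cycle on {3,4}
identity = tuple(range(5))

power, order = chi, 1
while power != identity:
    power = compose(chi, power)
    order += 1

assert order == 6          # lcm of the cycle lengths 3 and 2
assert power == identity   # chi^order = 1, as the lemma asserts
```

The order 6 is the least common multiple of the cycle lengths, so the cyclic group generated by $\chi$ is $\{\mathit{id},\chi,\dots,\chi^5\}$.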
\begin{lemma}\label{P:refinement-fin-equiv}
Let $G_1$ and $G_2$ be finite graphs such that $G_1\preceq G_2$ and
$G_2\preceq G_1$. Then $G_1\cong G_2$.
\end{lemma}
\begin{proof}
By assumption there exist bijective graph morphisms
$\varphi:G_2\to G_1$ and $\psi:G_1\to G_2$, hence
$\chi=\psi\circ \varphi$ is a bijective graph morphism on $G_2$.
As $\chi$ is a permutation of the finite vertex set $V_{G_2}$, it
generates a finite cyclic group. Hence there is some
$k\in\mathbb{N}$ such that $\chi^k=\mathit{id}_{V_{G_2}}$ by
Lemma~\ref{P:group-aux2}. It then follows that $\varphi$ is
faithful: Suppose $(\varphi\, x,\varphi\, y)\in G_1$. Then
$(\chi\, x,\chi\, y)\in G_2$ because $\psi$ is a morphism, and
applying the morphism $\chi^{k-1}$ yields
$(x,y)=(\chi^{k}\, x,\chi^{k}\, y)\in G_2$.
It follows that $\varphi$ is a graph isomorphism and $G_1\cong G_2$.
\end{proof}
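The lemma can also be verified exhaustively in a small case. The following Python sketch (our own encoding of digraphs as frozen edge sets over a fixed vertex set) checks, for all $2^9$ digraphs on $\{0,1,2\}$, that mutual subsumption implies isomorphism.

```python
# Exhaustive check: for digraphs on {0,1,2}, G1 <= G2 and G2 <= G1
# (mutual subsumption via bijective morphisms) implies G1 iso G2.
from itertools import permutations, product

V = (0, 1, 2)
edge_universe = list(product(V, V))
perms = list(permutations(V))

# all 2^9 digraphs on V, as frozen edge sets
graphs = [frozenset(e for e, bit in zip(edge_universe, mask) if bit)
          for mask in product((0, 1), repeat=len(edge_universe))]

# permutation-images of each graph's edge set, precomputed
imgs = {g: [frozenset((p[x], p[y]) for (x, y) in g) for p in perms]
        for g in graphs}

def subsumes(g1, g2):
    # g1 is subsumed by g2: some bijective morphism g2 -> g1,
    # i.e. some permutation-image of g2 lies inside g1
    return any(im <= g1 for im in imgs[g2])

for g1 in graphs:
    for g2 in graphs:
        if subsumes(g1, g2) and subsumes(g2, g1):
            assert any(im == g1 for im in imgs[g2])  # g1 iso g2
```

On a fixed finite vertex set, mutual subsumption forces equal edge counts, so every witnessing morphism is already an isomorphism; the check makes this concrete.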
A similar fact has been proved by \'Esik~\cite{Esik02}. We henceforth
restrict our attention to finite graphs.
Let $[G]=\{G'\mid G'\cong G\}$ denote the type of $G$. We
extend the subsumption preorder $\preceq$ to equivalence classes by
$[G_1]\preceq [G_2]\Leftrightarrow G_1\preceq G_2$, overloading
notation. This relation is well defined.
\begin{lemma}\label{P:type-preorder-defined}
Let $G_1'\cong G_1$, $G_1\preceq G_2$ and $G_2\cong G_2'$. Then
$G_1'\preceq G_2'$.
\end{lemma}
\begin{proof}
Let $\varphi_1$ be the graph isomorphism of type $G_1\to G_1'$,
$\varphi_2$ the graph isomorphism of type $G_2'\to G_2$ and $\psi$
the bijective graph morphism of type $G_2\to G_1$. Then
$\varphi_1\circ \psi\circ \varphi_2:G_2'\to G_1'$ is a bijective
graph morphism as well. Hence $G_1' \preceq G_2'$.
\end{proof}
\begin{lemma}\label{P:type-refinement-fin-po}
The relation $\preceq$ is a partial order on $\mathcal{G}/{\cong}$ if all graphs
in $\mathcal{G}$ are finite.
\end{lemma}
\begin{proof}
Reflexivity and transitivity for $\preceq$ on $\mathcal{G}/{\cong}$
follows from reflexivity and transitivity of $\preceq$ on
$\mathcal{G}$. For antisymmetry, $[G_1]\preceq [G_2]$ and
$[G_2]\preceq [G_1]$ imply $[G_1]=[G_2]$ for all
$G_1,G_2\in\mathcal{G}$ by Lemma~\ref{P:refinement-fin-equiv}.
\end{proof}
Extending serial and parallel composition of graphs is standard:
$[G_1]\cdot [G_2]=\{G_1'\cdot G_2'\mid G_1'\cong G_1\wedge G_2'\cong
G_2\}$
and likewise for $[G_1]\| [G_2]$. It is also well known that both
compositions are well defined: if $G_1\cong G_1'$ and $G_2\cong G_2'$,
then $[G_1]\cdot[G_2]=[G_1']\cdot [G_2']$ and
$[G_1]\|[G_2]=[G_1']\| [G_2']$. By contrast to serial and parallel
compositions of graphs, those of graph types are total. Finally,
equivalence classes are closed with respect to serial and parallel
composition.
\begin{lemma}\label{P:type-compositions-closed}
For all $G_1,G_2\in\mathcal{G}$,
\begin{enumerate}
\item $[G_1\cdot G_2] = [G_1]\cdot [G_2]$,
\item $[G_1\| G_2] = [G_1]\|[G_2]$.
\end{enumerate}
\end{lemma}
\begin{proof}
$H\in [G_1\cdot G_2]$ if and only if $H\cong G_1\cdot G_2$. This is
the case if and only if there are graphs
$G_1'$ and $G_2'$ such that $H=G_1'\cdot G_2'$ and $G_1'\cong G_1$ and
$G_2'\cong G_2$, which holds if and only if $H\in [G_1]\cdot [G_2]$. The proof for $\|$ is similar.
\end{proof}
\begin{proposition}\label{P:graph-type-interchange-monoid}
The structure $(\mathcal{G}/{\cong},\cdot,\|,[\varepsilon])$ is an
interchange monoid in which $\|$ is commutative, if all graphs in
$\mathcal{G}$ are finite.
\end{proposition}
\begin{proof}
The associativity, commutativity and unit laws are easy to check,
noting that $[\varepsilon]=\{\varepsilon\}$. For the interchange
law
$[(G_1\| G_2)\cdot (G_3\| G_4)]\preceq [(G_1\cdot G_3)\|(G_2\cdot
G_4)]$,
by definition of $\preceq$ on equivalence classes, it suffices to
show that $(G_1\| G_2)\cdot (G_3\| G_4)\preceq (G_1\cdot G_3)\|(G_2\cdot
G_4)$, which holds by Proposition~\ref{P:graph-cmonoid}.
\end{proof}
Proposition~\ref{P:graph-type-interchange-monoid} specialises
immediately to types of finite partial orders with serial and parallel
composition, which are known as partial words or pomsets in
concurrency theory---when vertex labels are
added~\cite{Grabowski,Gischer}. The instance of
Proposition~\ref{P:graph-type-interchange-monoid} for pomsets is due
to Gischer~\cite{Gischer}.
Because some compositions in $\mathcal{G}/{\cong}$ may result in the
empty set, the interchange monoid can have an annihilator $0$, that
is, an element for which $x\cdot 0=0= 0\cdot x$ and $x\| 0 = 0$ hold
for any element $x$.
The lifting to convolution algebras---interchange quantales, unital
interchange quantales, interchange Kleene algebras---then follows the
results of the previous section. The result that the powerset lifting
of finite pomsets yields concurrent semirings, that is, interchange
semirings in which $\pcomp$ is commutative and $Q=\mathbb{B}$, is due to
Gischer~\cite{Gischer}. Extensions to concurrent Kleene algebras and
concurrent quantales have been proved more recently~\cite{HMSW11}.
\section{Conclusion}\label{S:conclusion}
The results in this article support the construction of concurrent
quantales and Kleene algebras from relational structures, multimonoids
and partial monoids. They can be formalised easily in proof assistants
and applied in concurrency verification. In fact, the lifting from
ternary relations and partial monoids to quantalic convolution
algebras---without interchange laws---has already been formalised with
Isabelle/HOL~\cite{DongolGHS17}. Extending this to concurrency is left
for future work.
Another interesting avenue for research is the extension of Stone-type
duality to our constructions, building on work of Harding, Walker and
Walker for lattice-valued functions~\cite{HardingWW18}. Moreover, a
categorification of our approach will be published in a successor
paper.
Finally, we hope that our results will benefit the construction of
real-world graph-based models for concurrent and distributed systems,
and ultimately the design of programming languages and verification
tools for such systems.
\textbf{Acknowledgement:} The authors would like to thank Tony Hoare
for discussions on models of concurrent Kleene algebras, and
Brijesh Dongol and Ian Hayes for their collaboration on convolution
algebras and comments on early versions of this work. The second and
third author have been partially supported by EPSRC grant EP/R032351/1.
\end{document}
\begin{document}
\title{Large time behavior of semilinear stochastic partial differential
equations perturbed by a mixture of Brownian and fractional Brownian motions
}
\author{Marco Dozzi\footnote{corresponding author, [email protected], UMR-CNRS 7502, Institut Elie Cartan de Lorraine, Nancy, France} \and Ekaterina T.
Kolkovska\thanks{
Centro de Investigaci\'{o}n en Matem\'{a}ticas, Guanajuato, Mexico.} \and
Jos\'{e} A. L\'{o}pez-Mimbela$^{\dag}$ \and Rim Touibi\thanks{UMR-CNRS 7502, Institut Elie Cartan de Lorraine, Nancy, France.}}\date{ }
\maketitle
\begin{abstract}
We study the trajectorywise blowup behavior of a semilinear partial
differential equation that is driven by a mixture of multiplicative Brownian
and fractional Brownian motion, modeling different types of random
perturbations. The linear operator is supposed to have an eigenfunction of
constant sign, and we show its influence, as well as the influence of its
eigenvalue and of the other parameters of the equation, on the occurrence of
a blowup in finite time of the solution. We give estimates for the
probability of finite time blowup and of blowup before a given fixed time.
Essential tools are the mild and weak form of an associated random partial
differential equation.
\textbf{Keywords} Stochastic reaction-diffusion equation; mixed fractional noise; finite-time blowup of trajectories
\textbf{AMS Mathematics Subject Classification} 60H15 60G22 35R60 35B40 35B44 35K58
\end{abstract}
\section{Introduction}
In this paper we study existence, uniqueness and the blowup behavior of
solutions to the fractional stochastic partial differential equation of the
form
\begin{eqnarray} \label{2.1}
du(x,t) &=&\left[\frac{1}{2}k^{2}(t)Lu(x,t)+g(u(x,t))\right]
dt+u(x,t)\,dN_{t},\quad x\in D, \quad t>0, \notag \\
u(x,0) &=&\varphi (x)\geq 0, \notag \\
u(x,t) &=&0, \quad x\in \partial D, \quad t \geq 0,
\end{eqnarray}
where $D\subset
\mathbb{R}
^{d}$ is a bounded Lipschitz domain, $L$ is the infinitesimal generator of a
strongly continuous semigroup {\color{black} of contractions} which satisfies conditions (\ref{P}), (\ref{L}) below, and $\varphi \in L^{\infty}(D),$ where $L^{\infty
}(D)$ is the space of real-valued essentially bounded functions on $D.$
Additionally, $g$ is a nonnegative locally Lipschitz function and $N$ is a
process given by
\begin{equation}\label{Def-of-N}
N_t= \int_0^t a(s) \,dB(s)+ \int_0^t b(s) \,dB^H(s), \quad t\geq0,
\end{equation}
where $B$ is a Brownian motion and $B^{H}$ is a fractional Brownian motion with
Hurst parameter $H>1/2$, $a$ is continuous and $b$ is H\"{o}lder continuous
of order $\alpha >1-H.$ Both $B$ and $B^{H}$ are assumed to be defined on
a filtered probability space $(\Omega ,\mathcal{F},(\mathcal{F}_{t},t\geqq
0),\mathbb{P})$ and adapted to the filtration $(\mathcal{F}_{t},t\geqq 0).$
Such models have recently been studied under the name of `mixed models' in
the context of stochastic differential equations, see \cite{MS} and \cite
{MRS}. When $N=0,$ $L=\Delta ,$ $k=1$ and $g(u)=u^{1+\beta }$, we obtain the
classical Fujita equation, which was studied in \cite{Fuj}. The cases
where $N$ is a Brownian motion were considered in \cite{DL} and
\cite{ALP}, the case where $N$ is a fractional Brownian motion with
Hurst parameter $H>1/2$ and $D\subset\mathbb{R}^{d}$ was investigated
in \cite{DKL}, and the case of $H\ge 1/2$ and $D=\mathbb{R}^{d}$ in \cite{D-K-LM-SAA}.
The fractional Brownian motion (fBm) appears in many stochastic phenomena
where rough external forces are present. The principal difference, compared
to Brownian motion, is that fBm is neither a semimartingale nor a Markov
process, hence the classical theory of stochastic integration cannot be applied. Since $H>1/2$, the stochastic integral with respect to $
B^{H}$ in \eqref{2.1} can be understood as a fractional integral. Moreover, owing to their
different analytic and probabilistic properties, the presence of both
Brownian and fractional Brownian motion in \eqref{2.1} models different aspects
of the random evolution in time of the solution. The factor
$k^2/2$ in front of $L$ affects dissipativity, which in several cases
retards or even prevents blowup.
We consider both weak and mild solutions of \eqref{2.1}, which we prove to be
equivalent and unique. Beyond existence and uniqueness of weak and mild
solutions we are interested in their qualitative behaviour. In Theorem \ref
{THM1} below we obtain a random time $\tau^*$ which is an upper bound of the
explosion time $\tau.$
In Theorem \ref{THM7} we obtain a lower bound $
\tau_* $ of $\tau$ so that a.s.
\begin{equation*}
\tau_*\le \tau\le \tau^*.
\end{equation*}
The random times $\tau_*$ and $\tau^*$ are given by exponential functionals
of the mixture of a Brownian and a fractional Brownian motion. The laws of
such functionals are presently unknown.
In order to study the distribution of $\tau^*$
we use the well-known representation of $B^H$
in the form $$B_{t}^{H}=\int_{0}^{t}{K^{H}(t,s)\,dW_{s}}, $$ where
the kernel $K^H$ is given in \eqref{Kernel-K} and
$W$ is a Brownian motion defined on the same filtered probability space as $B$.
In general, $W$ can be different from the Brownian motion $B $ appearing in the first integral of \eqref{Def-of-N}. We obtain
estimates of the probability $\mathbb{P}(\tau<\infty) $, and of the tail
distribution of $\tau^*$.
To achieve this we make use of recent results of N.T. Dung \cite{D,DII} from
the Malliavin theory for continuous isonormal Gaussian processes.
In Theorem \ref{THM2} we obtain upper bounds for $\mathbb{P}(\tau^*\le T)$ in the case when $B=W,$ and in Theorem \ref{THM3} when $B$ is independent of $W$, and when $B$ and $W$ are general Brownian motions. In Theorem \ref{THM4} we obtain lower bounds for $\mathbb{P}(\tau< \infty) $ when $B=W$.
As a result in the case when $W=B$ we get specific configurations of the
coefficients $a, b$ and $k$ under which the weak solution (hence also the
mild solution) of equation \eqref{2.1} exhibits finite time blow-up. To
be concrete, suppose that $g(z) \geq Cz^{1+\beta}$ for some constants $C>0$, $
\beta>0$, $B_{t}^{H}=\int_{0}^{t}{K^{H}(t,s)\,dB_{s}}, $
and
\begin{equation*}
\int_0^t a^2(r)\,dr \sim t^{2l}, \quad \int_0^tb^2(r)\,dr\sim t^{2m}, \quad \int_0^t k^2(r)\,dr \sim t^{2p}
\quad
\mbox{ as }\quad t \to \infty
\end{equation*}
for some nonnegative constants $l,\, m$ and $p$. If {\color{black} $\beta \in (0,1/2)$ and $\max\{p,l\}>
H+m-1/2$, or if $\beta=1/2$ and $p>H+m-1/2$, or if $\beta > 1/2$ } and $p>\max\{l, H+m-1/2\},$ then
all nontrivial positive solutions of (\ref{2.1}) suffer finite-time blowup
with positive probability.
Our approach here is to transform the equation \eqref{2.1} into a random
partial differential equation (RPDE) (\ref{2.2}), whose solution blows up at
the same random time $\tau$ as the solution of \eqref{2.1}, and to work with
this equation. The blowup behavior of (\ref{2.2}) is easier to determine
because $N$ appears as a coefficient, and not as stochastic integrator as in
(\ref{2.1}). Such transformations are indeed known for more general SPDEs
than (\ref{2.1}), including equations whose stochastic term does not depend
linearly on $u$, see \cite{LR}. But for the RPDE's associated to more
general SPDE's it seems difficult to find explicit expressions for upper and
lower bounds for the blowup time, and this is an essential point in our
study. Another reason for having chosen the relatively simple form of
\eqref{2.1} and \eqref{2.2} is that we consider the blowup trajectorywise,
which is a relatively strong notion compared, e.g., to blowup of the moments
of the solution (see, e.g. \cite{Cho}). The crucial ingredient in the proofs
is the existence of a positive eigenvalue and an eigenfunction with constant
sign of the adjoint operator of $L.$ Special attention is given to the case $
H\in (\frac{3}{4},1)$ because then the process $N$ is equivalent to a
Brownian motion \cite{Ch}. This allows us to apply a result by Dufresne and
Yor \cite{Y} on the law of exponential functionals of the Brownian motion to
get in Theorem \ref{THM5} an explicit lower bound for the probability of
blowup in finite time.
We finish this section by introducing some notation and definitions we will
need in the sequel. A stopping time $\tau :\Omega \rightarrow (0,\infty )$
with respect to the filtration $(\mathcal{F}_{t},t\geqq 0)$ is a blowup time
of a solution $u$ of\ \eqref{2.1} if
\begin{equation*}
\limsup_{t\nearrow \tau }\sup_{x\in D} |u(x,t)| =+\infty \quad \mbox{$\mathbb{P}$-a.s.}
\end{equation*}
\noindent
{\color{black}
Let $(P_{t}^{D},t\geqq 0)$ and $((P^{D})_{t}^{\ast
},t\geqq 0)$
} be the strongly continuous semigroups corresponding to the
operator $L$ and its adjoint $L^*:$
\begin{equation}\label{D}
\int_{D}f(x)P_{t}^{D}g(x)dx=\int_{D}g(x)(P^{D})_{t}^{\ast }f(x)dx,\quad
f,g\in {\color{black}L^2(D).}
\end{equation}
{\color{black}
As usual, $Lf:=\underset{t\rightarrow 0}{\lim }\frac{1}{t}(P_{t}^{D}f-f)$
for all $f\in {\color{black} L^2(D)}$ in the domain of $L,$ denoted by $\mathrm{Dom}(L).$
Due to the Hille-Yosida theorem, $\mathrm{Dom}(L)$ and $\mathrm{Dom}(L^{\ast })$ are dense
in $L^2(D).$
}
Let $P_{t}^{D}(x,\Gamma )$ and $(P^{D})_{t}^{\ast }(x,\Gamma )$ denote the
associated transition functions, where $t> 0,$ $x\in D,$ and $\Gamma \in
\mathcal{B}(D),$ the Borel sets on $D$. In the sequel we will assume that they admit
densities, i.e. there exist families of continuous functions $(p^{D}(t,\cdot
,\cdot ),t>0)$ and $((p^{D})^{\ast }(t,\cdot ,\cdot ),t>0)$ on $D\times D$
such that
\begin{eqnarray*}
P_{t}^{D}g(x) &=&\int_{D}g(y)P_{t}^{D}(x,dy)=\int_{D}g(y)p^{D}(t,x,y)dy, \\
(P^{D})_{t}^{\ast }f(x) &=&\int_{D}f(y)(P^{D})_{t}^{\ast
}(x,dy)=\int_{D}f(y)(p^{D})^{\ast }(t,x,y)dy.
\end{eqnarray*}
Due to \eqref{D},
\begin{equation}\label{PDt1}
(p^{D})^{\ast }(t,x,y)=p^{D}(t,y,x)\quad\mbox{for all $t > 0$ and $
x,y\in D.$}
\end{equation}
\section{The weak solution of the associated random partial differential
equation, equivalence with the mild solution}
\label{Section2}
Let us consider the random partial differential equation
\begin{eqnarray} \label{2.2}
\frac{\partial v}{\partial t}(x,t)&=&\frac{1}{2}k^{2}(t)Lv(x,t)-\frac{1}{2}
a^{2}(t)v(x,t)+\exp (-N_{t})g(\exp (N_{t})v(x,t)), \\
v(x,0) &=&\varphi (x),\:x\in D, \notag \\
v(x,t)&=&0, \ t \geq 0,\ x\in \partial{D}. \notag
\end{eqnarray}
In this section we transform the weak form of \eqref{2.1} into the weak
form of \eqref{2.2}
using the transformation $v(x,t)=\exp (-N_{t})u(x,t),\, x\in D,\, t\ge 0.$ Hence, if blowup takes place in finite time, it occurs at the same
time and at the same points $x\in D$ for the solutions of both equations.
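The effect of the transformation is easiest to see in a zero-dimensional caricature (an illustration only: $D$ shrunk to a point and the term containing $L$ dropped), where It\^{o}'s formula applied to $v_{t}=e^{-N_{t}}u_{t}$ with $du_{t}=g(u_{t})\,dt+u_{t}\,dN_{t}$ gives, using that $B^{H}$ has zero quadratic variation for $H>1/2$,
\begin{eqnarray*}
dv_{t} &=&u_{t}\,d(e^{-N_{t}})+e^{-N_{t}}\,du_{t}+d\langle e^{-N},u\rangle _{t} \\
&=&v_{t}\left( -dN_{t}+\tfrac{1}{2}a^{2}(t)\,dt\right) +e^{-N_{t}}g(u_{t})\,dt+v_{t}\,dN_{t}-a^{2}(t)v_{t}\,dt \\
&=&-\tfrac{1}{2}a^{2}(t)v_{t}\,dt+e^{-N_{t}}g(e^{N_{t}}v_{t})\,dt,
\end{eqnarray*}
which is \eqref{2.2} without the diffusion term; the rigorous infinite-dimensional version of this computation is carried out in the proof of Proposition \ref{PROP1} below.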
In the following we write $\left\langle \cdot ,\cdot \right\rangle _{D}$ for
the scalar product in $L^{2}(D).$
\begin{defi}
An $(\mathcal{F}_{t},t\geqq 0)$-adapted random field $v=(v(x,t),$ $t\in
\lbrack 0,T],\ x\in D)$ with values in $L^{2}(D)$ is a weak solution of
\eqref{2.2} if, for all $t\in $ $[0,T]$ and all $f\in \mathrm{Dom}(L^{\ast })
$, $\mathbb{P}$-a.s.
\begin{eqnarray*}
\left\langle v(\cdot ,t),f\right\rangle _{D} &=&\left\langle \varphi
,f\right\rangle _{D}+\int_{0}^{t}\left( \frac{1}{2}k^{2}(s)\left\langle
v(\cdot ,s),L^{\ast }f\right\rangle _{D}-\frac{1}{2}a^{2}(s)\left\langle
v(\cdot ,s),f\right\rangle _{D}\right) ds
\end{eqnarray*}
\begin{equation}
+\int_{0}^{t}\exp (-N_{s})\left\langle g(\exp (N_{s})v(\cdot
,s)),f\right\rangle _{D}ds. \label{weak rpde}
\end{equation}
\end{defi}
Since $g$ is supposed to be locally Lipschitz, a blowup in finite time of $v$
may occur, and the blowup time $\tau $ depends in general on $\omega \in
\Omega .$ A \emph{weak solution of \eqref{2.2} up to $\tau $} is defined as
an $(\mathcal{F}_{t},t\geqq 0)$-adapted random field $v$ that satisfies
\eqref{weak rpde} for all $t\in (0,T \wedge \tau )$ $\mathbb{P}$-a.s. If $
\omega $ is such that $v(\omega ,\cdot ,\cdot )$ does not blow up in finite
time, we set $\tau (\omega )=\infty .$
\begin{defi}
An $(\mathcal{F}_{t},t\geqq 0)$-adapted random field $u=(u(x,t),$ $t\in
\lbrack 0,T],x\in D)$ with values in $L^{2}(D)$ is a weak solution of
\eqref{2.1} up to $\tau $ if, for all $t\in $ $(0,T\wedge \tau )$ and all $
f\in \mathrm{Dom}(L^{\ast }),$ $\mathbb{P}$-a.s.
\begin{equation*}
(i)\text{ }\int_{0}^{t}a^{2}(s)\left[1+\left\langle u(\cdot
,s),f\right\rangle _{D}^{2}\right]\,ds<\infty ,\quad b(\bullet )\left\langle
u(\cdot ,\bullet ),f\right\rangle _{D}\in \mathcal{C}^{\beta }[0,t] \mbox{
for some $\beta >1-H,$}
\end{equation*}
\begin{equation*}
(ii)\text{ \ }\int_{0}^{t}\left( k^{2}(s)\left\vert \left\langle u(\cdot
,s),L^{\ast }f\right\rangle _{D}\right\vert +\left\vert \left\langle
g(u(\cdot ,s)),f\right\rangle _{D}\right\vert \right)\, ds<\infty ,
\end{equation*}
and
\begin{equation*}
\left\langle u(\cdot ,t),f\right\rangle _{D}=\left\langle \varphi
,f\right\rangle _{D}+\int_{0}^{t}\left( \frac{1}{2}k^{2}(s)\left\langle
u(\cdot ,s),L^{\ast }f\right\rangle _{D}+\left\langle g(u(\cdot
,s)),f\right\rangle _{D}\right) ds
\end{equation*}
\begin{equation}
+\int_{0}^{t}\left\langle u(\cdot ,s),f\right\rangle _{D}dN_{s}.
\label{weak spde}
\end{equation}
\end{defi}
\noindent Conditions (i) and (ii) in the above definition are sufficient for
the It\^{o}, the fractional and the Lebesgue integrals in \eqref{weak spde}
to be well defined $\mathbb{P}$-a.s.
We proceed now to the relation between \eqref{weak spde} and \eqref{weak
rpde}.
\begin{prop}
\label{PROP1} If $u$ is a weak solution of \eqref{2.1} up to a random time $
\tau $, then $v(x,t)=\exp (-N_{t})u(x,t)$ is a weak solution of \eqref{2.2}
up to $\tau $, and vice versa.
\end{prop}
\begin{rem}
We notice that $\left\langle v(\cdot ,s),f\right\rangle _{D}$ is
absolutely continuous in $s$ if $v$ is a weak solution of \eqref{2.2}. With
the choice $u(x,t):=\exp(N_{t})v(x,t)$ condition (i) is satisfied. In fact,
for $t<T\wedge \tau (\omega ),$
\begin{equation*}
\int_{0}^{t}\left\langle u(\cdot ,s),f\right\rangle
_{D}a(s)\,dB_{s}=\int_{0}^{t}\left\langle v(\cdot ,s),f\right\rangle
_{D}\exp (N_{s})a(s)\,dB_{s}
\end{equation*}
is well defined since $\int_{0}^{t}(\int_{D}v(x,s)f(x)\,dx)^{2}\exp
(2N_{s})a^{2}(s)\,ds<\infty$ \ $\mathbb{P}$-a.s.
Recall that the fractional integral $\int_0^Tf(x)dg(x)$ is defined (in the sense of Zähle \cite{Z}) in \cite[Def. 2.1.1]{M} for $f,g$ belonging to fractional Sobolev spaces. If $0<\varepsilon<H$, $f$ and $g$ are Hölder continuous of exponents $\alpha$ and $H-\varepsilon$ respectively, and $\alpha+H-\varepsilon>1$, this fractional
integral
coincides with the corresponding generalized Riemann-Stieltjes integral; see \cite[Thm. 2.1.7]{M}.
Hence, the fractional integral
\begin{equation}
\int_{0}^{t}\left\langle u(\cdot ,s),f\right\rangle
_{D}b(s)\,dB_{s}^{H}=\int_{0}^{t}\left\langle v(\cdot ,s),f\right\rangle
_{D}\exp (N_{s})b(s)\,dB_{s}^{H} \label{frac int}
\end{equation}
is well defined for $t<T\wedge \tau (\omega )$
because, on the one hand, $N_{\cdot }=\int_{0}^{\cdot }(a(s)\,dB_{s}+b(s)\,dB_{s}^{H})$ is $
\mathbb{P}$-a.s. H\"{o}lder continuous of order $1/2-\varepsilon $ for all $
\varepsilon >0$ by the theorem of Kolmogorov and \cite[Proposition 4.1]{NR}. On the other hand
$b(\cdot)$ is $\alpha$-Hölder continuous (with $\alpha>1-H$) and $B^H$ is Hölder continuous with exponent $H-\varepsilon$ for any $\varepsilon>0$. Hence, choosing $\varepsilon <\min\{H/2-1/4, \alpha +H-1\}$ we get that the
integrand on the right side of \eqref{frac int} is H\"{o}lder
continuous of order $\min\{\alpha,1/2-\varepsilon\}$, and therefore $H-\varepsilon +\min\{\alpha, 1/2 - \varepsilon\}>1$ and the integral is well defined as
a generalized Riemann-Stieltjes integral.
\end{rem}
\begin{proof}
Let $T>0.$ It suffices to prove the assertion for $t\in (0,T\wedge \tau ).$
We apply (a slight generalization of) the It\^{o} formula in \cite[page 184]{M}.
Let
\begin{eqnarray*}
Y_{t}^{1} &=&\int_{0}^{t}a(s)\,dB_{s}\text{ },\text{ \ }Y_{t}^{2}=\int_{0}^{t}
\left\langle u(\cdot ,s),f\right\rangle _{D}a(s)\,dB_{s}, \\
Y_{t}^{3} &=&\int_{0}^{t}b(s)\,dB_{s}^{H},\text{\ \ }Y_{t}^{4}=\int_{0}^{t}
\left\langle u(\cdot ,s),f\right\rangle _{D}b(s)\,dB_{s}^{H},\text{ } \\
Y_{t}^{5} &=&\left\langle \varphi ,f\right\rangle _{D}+\int_{0}^{t}\left(
\frac{1}{2}k^{2}(s)\left\langle u(\cdot ,s),L^{\ast }f\right\rangle
_{D}+\left\langle g(u(\cdot ,s)),f\right\rangle _{D}\right) ds,
\end{eqnarray*}
and let $F(y_{1},y_{2},y_{3},y_{4},y_{5})=\exp
(-y_{1}-y_{3})(y_{5}+y_{2}+y_{4}). $ Then
\[
F(Y_{t}^{1},\ldots,Y_{t}^{5})=\exp (-N_{t})\left\langle u(\cdot
,t),f\right\rangle _{D}=\left\langle v(\cdot ,t),f\right\rangle _{D}.
\]
The above mentioned It\^{o} formula then reads
\begin{eqnarray*}\lefteqn{
F(Y_{t}^{1},\ldots,Y_{t}^{5})}\\
&=&F(Y_{0}^{1},\ldots,Y_{0}^{5})+\sum_{i=1}^{5}\int_{0}^{t}\frac{\partial F}{
\partial y_{i}}(Y_{s}^{1},\ldots,Y_{s}^{5})\,dY_{s}^{i}
+\frac{1}{2}\sum_{i,j=1}^{2}\int_{0}^{t}\frac{\partial ^{2}F}{\partial
y_{i}\partial y_{j}}(Y_{s}^{1},\ldots,Y_{s}^{5})\,d\left\langle
Y_{s}^{i},Y_{s}^{j}\right\rangle .
\end{eqnarray*}
Since $u$ is a weak solution of \eqref{2.1},
\begin{eqnarray*}
\left\langle v(\cdot ,t),f\right\rangle _{D} &=&\left\langle \varphi
,f\right\rangle _{D}-\int_{0}^{t}\exp (-N_{s})\left\langle u(\cdot
,s),f\right\rangle _{D}\left(a(s)\,dB_{s}+b(s)\,dB_{s}^{H}\right) \\
&&+\int_{0}^{t}\exp (-N_{s})\left\langle u(\cdot ,s),f\right\rangle
_{D}\left(a(s)dB_{s}+b(s)dB_{s}^{H}\right) \\
&&+\int_{0}^{t}\exp (-N_{s})\left( \frac{1}{2}k^{2}(s)\left\langle u(\cdot
,s),L^{\ast }f\right\rangle _{D}+\left\langle g(u(\cdot ,s)),f\right\rangle
_{D}\right) ds \\
&&-\frac{1}{2}\int_{0}^{t}\exp (-N_{s})\left\langle u(\cdot
,s),f\right\rangle _{D}a^{2}(s)ds \\
&=&\left\langle \varphi ,f\right\rangle _{D}+\int_{0}^{t}\left( \frac{1}{2}
k^{2}(s)\left\langle v(\cdot ,s),L^{\ast }f\right\rangle _{D}-\frac{1}{2}
a^{2}(s)\left\langle v(\cdot ,s),f\right\rangle _{D}\right) \,ds \\
&&+\int_{0}^{t}\exp (-N_{s})\left\langle g(\exp (N_{s})v(\cdot
,s)),f\right\rangle _{D}\,ds.
\end{eqnarray*}
Therefore $v$ is a weak solution of \eqref{2.2}. The converse implication is obtained similarly. \end{proof}
In order to define the mild solutions of equations \eqref{2.1} and
\eqref{2.2} we define first the evolution families of contractions
corresponding to the generator $\frac{1}{2}k^{2}(t)L.$ For $0\le s<t$ let
\begin{equation} \label{AK}
K(t,s)=\frac{1}{2}\int_{s}^{t}k^{2}(r)\,dr,\quad A(t,s)=\frac{1}{2}
\int_{s}^{t}a^{2}(r)\,dr,\quad K(t)=K(t,0),\quad A(t)=A(t,0),
\end{equation}
and set $p^{D}(s,x;t,y)=p^{D}(K(t,s),x,y),$ $x,y\in D,$ $0\leqq s<t.$
For $f\in L^{2}(D)$ the corresponding evolution families of contractions on $
L^{2}(D)$ are given by
\begin{eqnarray*}
U^{D}(t,s)f(x) &=&\int_{D}p^{D}(s,x;t,y)f(y)dy=P_{K(t,s)}^{D}f(x), \\
\text{\ }(U^{D})^{\ast }(t,s)f(x)
&=&\int_{D}p^{D}(s,y;t,x)f(y)dy=(P^{D})_{K(t,s)}^{\ast }f(x).\text{ }
\end{eqnarray*}
\begin{defi}
An $(\mathcal{F}_{t},t\geqq 0)$-adapted random field $v=(v(x,t),$ $t\geqq
0,\ x\in D)$ with values in $L^{2}(D)$ is a mild solution of~\eqref{2.2} on $
[0,T]$ if, for all $t\in $ $[0,T],$ $\mathbb{P}$-a.s.
\begin{eqnarray*}
v(x,t) &=&U^{D}(t,0)\varphi (x)-\frac{1}{2}
\int_{0}^{t}a^{2}(s)U^{D}(t,s)v(x,s)\,ds \\
&&+\int_{0}^{t}\exp (-N_{s})U^{D}(t,s)\left(g(\exp (N_{s})\, v(x,s))\right)\,ds.
\end{eqnarray*}
\end{defi}
\begin{prop}
\label{Proposition2} The mild form of~\eqref{2.2} can be written as
\begin{eqnarray}
v(x,t)&=&\exp (-A(t))U^{D}(t,0)\varphi (x) \notag \\
&+&\int_{0}^{t} \exp (-N_{s}-A(t,s))U^{D}(t,s)g(\exp (N_{s})v(\cdot
,s))(x)ds,\qquad \label{ms}
\end{eqnarray}
where $A(t,s)$ and $A(t)$ are given in \eqref{AK}.
\end{prop}
\begin{rem}
Since $g$ and $\varphi$ are supposed to be nonnegative,
\begin{equation*}
v(x,t)\ge \exp (-A(t))U^{D}(t,0)\varphi (x)\ge0 \mbox{ for all $x\in D$ and
$t\ge 0.$}
\end{equation*}
\end{rem}
\begin{proof}
Let $w(x,t)=\exp (A(t))v(x,t).$ For $f\in L^2(D),$ we get from
the definition of the mild solution
\begin{eqnarray*}
\frac{d}{dt}\langle w(\cdot ,t),f\rangle _{D} &=&\frac{1}{2}a^{2}(t)\exp
(A(t))\langle v(\cdot ,t),f\rangle _{D}+\exp (A(t))\frac{d}{dt}\langle
v(\cdot ,t),f\rangle _{D} \\
&=&\frac{1}{2}a^{2}(t)\exp (A(t))\langle v(\cdot ,t),f\rangle _{D} \\
&&+\exp (A(t))\left( \frac{1}{2}k^{2}(t)\left\langle v(\cdot ,t),L^{\ast
}f\right\rangle _{D}-\frac{1}{2}a^{2}(t)\left\langle v(\cdot
,t),f\right\rangle _{D}\right) \\
&&+\exp (A(t))\exp (-N_{t})\left\langle g(\exp (N_{t})v(\cdot
,t)),f\right\rangle _{D} \\
&=&\frac{1}{2}\exp (A(t))k^{2}(t)\left\langle v(\cdot ,t),L^{\ast
}f\right\rangle _{D}+\exp (A(t)-N_{t})\left\langle g(\exp (N_{t})v(\cdot
,t)),f\right\rangle _{D} \\
&=&\frac{1}{2}k^{2}(t)\left\langle w(\cdot ,t),L^{\ast }f\right\rangle
_{D}+\exp (A(t)-N_{t})\left\langle g(\exp (N_{t}-A(t))w(\cdot
,t)),f\right\rangle _{D},
\end{eqnarray*}
with boundary conditions $w(x,0)=\varphi (x)$ for $x\in D$ and $w(x,t)=0$
for $x\in \partial D.$ Therefore $w$ is a weak solution of the RPDE formally
given by
\begin{equation*}
\frac{d}{dt}w(x,t)=\frac{1}{2}k^{2}(t)Lw(x,t)+\exp (A(t)-N_{t})g(\exp
(N_{t}-A(t))w(x,t)).
\end{equation*}
By the definition of the mild solution
\begin{equation*}
w(x,t)=U^{D}(t,0)\varphi (x)+\int_{0}^{t}\exp (A(s)-N_{s})U^{D}(t,s)g(\exp
(N_{s}-A(s))w(\cdot ,s))(x)\,ds.
\end{equation*}
Consequently,
\begin{eqnarray*}
v(x,t) &=&\exp (-A(t))w(x,t) \\
&=&\exp (-A(t))U^{D}(t,0)\varphi (x)+\int_{0}^{t}\exp
(-A(t,s)-N_{s})U^{D}(t,s)g(\exp (N_{s})v(\cdot ,s))(x)\,ds.
\end{eqnarray*}
\end{proof}
{\color{black}
\begin{theorem}
\label{PROP3} The equation \eqref{ms} has a unique nonnegative local mild
solution, i.e. there exists $t>0$ such that \eqref{ms} has a mild solution
in $L^{\infty }([0,t)\times D).$
\end{theorem}
\begin{proof}
Let $T>0$ and denote
$
E_T=\left\{ v:[0,T] \to L^{\infty }(D) : \norm{v} <\infty \right \},
$ where $$\norm{v}:=\sup_{0\leq t\leq T}\| v(t, \cdot)\| _{\infty}. $$ Let $P_T=\{ v\in E_T: v\geq0\} $ and for $R>0$ let $C_R=\{v\in E_T : \norm{v} \leq R \}. $
Then $E_T$ is a Banach space and $P_T$ and $C_R$ are closed subsets of $E_T$. Let us now define
$$\psi(v)(t,x)=
e^{-A(t)}U^{D}(t,0)\varphi (x)+\int_{0}^{t}e^{
-A(t,s)-N_{s}}U^{D}(t,s)g\left(e^{N_{s}}v(\cdot ,s)\right)(x)\,ds. $$
We will prove that for sufficiently large $R$ and sufficiently small $T$, $\psi$ is a contraction on $P_T \cap C_R$.
Let $v_1, v_2\in P_T \cap C_R. $ Then
$$
\norm{
\psi(v_1)-\psi(v_2)
}
\
\le
\
\sup_{0\le t\le T}
\int_0^t
\left\|
e^{-N_s}\left[g\left(e^{N_s}v_1\right)-g\left(e^{N_s}v_2\right)\right]\right\|_{\infty}\,ds.
$$
Let $A_T=\sup_{0\le s\le T}e^{|N_s|}$ and $G_R=\sup_{|x|<R}|g(x)|$. Recall that $g$ is locally Lipschitz, and denote by $K_R$ its Lipschitz constant
in the ball of radius $R>0$ centered at $0$. Then,
$$
\sup_{0\le s\le T}\left\|e^{N_s}v_i(s,\cdot)\right\|_{\infty}\ \le \ A_TR,\quad i=1,2,
$$
and
$$
\left\|
e^{-N_s}g\left(
e^{N_s}v_1(s,\cdot)\right)
-
e^{-N_s}g\left(
e^{N_s}v_2(s,\cdot)\right)
\right\|_{\infty}\ \le \ A^2_T K_{A_TR}\left\|v_1(s)-v_2(s)\right\|_{\infty}.
$$
Therefore,
$$
\norm{\psi(v_1)-\psi(v_2)}\ \le \ \sup_{0\le t\le T}\int_0^t
A_{T}^2K_{A_TR}\,\norm{v_1-v_2}\,ds\ = \ TA^2_TK_{A_TR}\,\norm{v_1-v_2}.
$$
We need
\begin{equation}\label{LL1}
TA^2_TK_{A_TR}<1.
\end{equation}
In addition, we require that $C_R\cap P_T$ be mapped by $\psi$ into itself. Let $v\in C_R\cap P_T$. Using that for $0\le s\le T$ the operator $U^D(t,s)$ is a contraction, and that
$\|e ^{N_s}v(\cdot,s)\|_{\infty}\le A_T R$, we get
$\|g(e^{N_s}v(\cdot,s))\|_{\infty}\le G_{A_TR}.$ It follows that
$$
\norm{\psi(v)} \ \le \|\varphi\|_{\infty}
+
\sup_{0\le t\le T}\int_0^te^{-N_{s}}\,ds\,G_{A_TR}\
\le\ \|\varphi\|_{\infty} +TA_TG_{A_TR}.
$$
Hence, we need that
\begin{equation}\label{LL2}
\|\varphi\|_{\infty}+TA_TG_{A_TR}\ <\ R.
\end{equation}
Let $R$ be such that $R\ge 2\|\varphi\|_{\infty}$. Since $\lim_{T\to0}A_T=1$, we choose $\varepsilon_1>0$ so that $A_T<2$ if $T<\varepsilon_1$, and let $\varepsilon>0$ satisfy
$$
\varepsilon \ < \ \frac{R}{4G_{2R}}\wedge\frac{1}{4K_{2R}}\wedge \varepsilon_1.
$$
Using that $G_A\le G_B$ and $ K_A\le K_B$ if $A\le B$, we get for
$R>2\|\varphi\|_{\infty}$ and $T<\varepsilon$,
$$
\|\varphi\|_{\infty} + TA_TG_{A_TR} \ \le\ \|\varphi\|_{\infty} +2\varepsilon G_{2R} < \frac{R}{2} +\frac{R}{2} =R
$$
and
$$
TA^2_TK_{A_TR}< 4\varepsilon K_{2R}<1.
$$
\end{proof}
}
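The contraction argument above can be exercised numerically in a zero-dimensional caricature (assumptions ours, purely for illustration: $D$ a point, $U^{D}(t,s)=\mathrm{id}$, $N\equiv 0$, $a\equiv 1$, $g(z)=z^{2}$, $\varphi =0.1$), where $\psi$ reduces to a scalar Volterra map and Picard iteration converges on a short horizon:

```python
import math

# Scalar caricature of the map psi from the proof:
#   (psi v)(t) = e^{-t/2} * phi + int_0^t e^{-(t-s)/2} v(s)^2 ds,
# discretized on [0, T] with the trapezoidal rule.
phi, T, n = 0.1, 0.2, 201
h = T / (n - 1)
ts = [i * h for i in range(n)]

def psi(v):
    out = []
    for i, t in enumerate(ts):
        vals = [math.exp(-(t - ts[j]) / 2.0) * v[j] ** 2 for j in range(i + 1)]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
        out.append(math.exp(-t / 2.0) * phi + integral)
    return out

v, residual = [phi] * n, float("inf")
for _ in range(40):
    w = psi(v)
    residual = max(abs(x - y) for x, y in zip(w, v))
    v = w

print(residual)  # essentially machine zero: a fixed point of psi
```

Here $T=0.2$ is deliberately small, mirroring the smallness condition \eqref{LL1} of the proof.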
We proceed to prove equivalence of weak and mild solutions of \eqref{2.2}.
The proof of this theorem follows the method in \cite[Theorem 9.15]{PZ},
where this equivalence is shown for SPDEs with autonomous differential
operators and driven by L\'{e}vy noise. For a comparison of weak and mild
solutions of SPDEs driven by fractional Brownian motion we refer to \cite{333}.
We state first the Kolmogorov backward and forward equations for $U^{D}.$ By
the Kolmogorov backward equation for $P^{D}$, the transition density $
p^{D}(u,x,y)$ satisfies, for any $y$ fixed, $\frac{\partial }{\partial u}
p^{D}(u,x,y)=Lp^{D}(u,x,y).$ Then $(s,x)\rightarrow $ $p^{D}(s,x;t,y)$
satisfies, for $(t,y)$ fixed, the equation
\begin{equation*}
-\frac{\partial }{\partial s}p^{D}(s,x;t,y)=-\frac{\partial }{\partial s}
p^{D}(K(t,s),x,y)=-\frac{\partial }{\partial u}p^{D}(u,x,y)\mid _{u=K(t,s)}
\frac{\partial }{\partial s}K(t,s)
\end{equation*}
\begin{equation}
=\frac{1}{2}k^{2}(s)Lp^{D}(K(t,s),x,y)=\frac{1}{2}k^{2}(s)Lp^{D}(s,x;t,y).
\label{backward}
\end{equation}
Similarly, by the Kolmogorov forward equation for $P^{D},$ for any $x$
fixed, $p^{D}(u,x,y)$ satisfies
$$\frac{\partial }{\partial u}
p^{D}(u,x,y)=L^{\ast }p^{D}(u,x,y).$$ Then $(t,y)\rightarrow $ $
p^{D}(s,x;t,y) $ satisfies, for $(s,x)$ fixed, the equation
\begin{equation*}
\frac{\partial }{\partial t}p^{D}(s,x;t,y)=\frac{\partial }{\partial t}
p^{D}(K(t,s),x,y)=\frac{\partial }{\partial u}p^{D}(u,x,y)\mid _{u=K(t,s)}
\frac{\partial }{\partial t}K(t,s)
\end{equation*}
\begin{equation}
=\frac{1}{2}k^{2}(t)L^{\ast }p^{D}(K(t,s),x,y)=\frac{1}{2}k^{2}(t)L^{\ast
}p^{D}(s,x;t,y). \label{forward}
\end{equation}
\begin{theorem}
{\color{black}\label{THM8} Consider the random partial differential equation \eqref{2.2}.
Then $v$ is a weak solution of \eqref{2.2} on $
[0,T]$ if and only if $v$ is a mild solution of \eqref{2.2} on $[0,T]$.
}
\end{theorem}
\begin{proof}
Assume that $v$ is a weak solution of
\eqref{2.2}. Let {\color{black}$h\in {C}^{1}([0,\infty),\mathbb{R})$}, $f\in {{\rm Dom}(L}^{\ast }),$ and $
G(x,t):=-\frac{1}{2}a^{2}(t)v(x,t)+\exp (-N_{t})g(\exp (N_{t})v(x,t)).$
The integration by parts formula is applicable since $h\in
{\color{black} {C}^{1}([0,\infty),\mathbb{R})}
$ (see \cite{PZ} Proposition 9.16) and yields
\begin{eqnarray*}
\langle v(\cdot ,t),h(t)f(\cdot )\rangle _{D}
&=&\langle v(\cdot ,0),h(0)f(\cdot )\rangle _{D} +\int_{0}^{t}\langle v(\cdot
,s),h^{\prime }(s)f(\cdot )\rangle _{D}\, ds\\ \nonumber
&+&\int_{0}^{t}\langle v(\cdot ,s),\frac{1}{2}h(s)k^{2}(s)L^{\ast }f(\cdot
)\rangle _{D}\, ds+\int_{0}^{t}\langle G(\cdot ,s),h(s)f(\cdot )\rangle _{D} \,ds.
\end{eqnarray*}
{\color{black}
Since linear combinations of the functions $h\cdot f$ are dense in $C^1([0,\infty),{\rm Dom}(L^*))$, for each $z\in C^1([0,\infty),{\rm Dom}(L^*))$ we have
\begin{eqnarray}\label{5}
\langle v(\cdot ,t),z(\cdot,t )\rangle _{D}
&=&\langle v(\cdot ,0),z(\cdot,0)\rangle _{D} +\int_{0}^{t}\langle v(\cdot
,s),\frac{\partial}{\partial s}z(\cdot,s)\rangle _{D}\, ds\\ \nonumber
&+&\int_{0}^{t}\langle v(\cdot ,s),\frac{1}{2}k^{2}(s)L^{\ast }z(\cdot,s
)\rangle _{D}\, ds+\int_{0}^{t}\langle G(\cdot ,s),z(\cdot,s )\rangle _{D} \,ds.
\end{eqnarray}
For each $f\in{\rm Dom}(L^*) $ we define
\begin{equation*}
\psi (x,s):=(U^{D})^{\ast }(t,s)f(x)=\left\{
\begin{tabular}{ll}
$\langle p^{D^{\ast }}(s,x;t,\cdot ),f(\cdot )\rangle _{D}$ & if $s<t$, \\
& \\
$f(x)$ & if $s=t$,
\end{tabular}
\right.
\end{equation*}
hence $\psi\in C^1([0,\infty),{\rm Dom}(L^*))$.
Taking $z=\psi $ in \eqref{5} we get, for any $t\in \lbrack 0,T]$ fixed,}
\begin{eqnarray}\nonumber
\langle v(\cdot ,t),\psi (\cdot ,t)\rangle _{D} &=&\langle v(\cdot ,0),\psi
(\cdot ,0)\rangle _{D}+\int_{0}^{t}\left\langle v(\cdot ,s),\frac{d}{ds}\psi
(\cdot ,s)+\frac{1}{2}k^{2}(s)L^{\ast }\psi (\cdot ,s)\right\rangle _{D}ds \\ \label{sept-1}
&&+\int_{0}^{t}\langle G(\cdot ,s),\psi (\cdot ,s)\rangle _{D}\,ds.
\end{eqnarray}
Now we evaluate the terms above:
\begin{eqnarray*}
\langle v(\cdot ,0),\psi (\cdot ,0)\rangle _{D}
&=&\int_{D}v(x,0)\int_{D}p^{D^{\ast }}(0,x;t,y)f(y)\,dy\,dx \\
&=&\int_{D}f(y)\int_{D}p^{D^{\ast }}(0,x;t,y)v(x,0)\,dx\,dy
\ = \ \left\langle U^{D}(t,0)v(\cdot ,0),f(\cdot )\right\rangle _{D}.
\end{eqnarray*}
By applying the Kolmogorov backward equation to $(x,s)\rightarrow (U^{D})^{\ast }(t,s)f(x)$ we get
\begin{eqnarray*}
-\frac{d}{ds}\psi (x,s) &=&-\frac{\partial }{\partial s}\left\langle
(p^{D})^{\ast }(s,x;t,\cdot ),f(\cdot )\right\rangle _{D} \\
&=&\frac{1}{2}k^{2}(s)L^{\ast }\left\langle (p^{D})^{\ast }(s,x;t,\cdot
),f(\cdot )\right\rangle _{D}=\frac{1}{2}k^{2}(s)L^{\ast }\psi (x,s).
\end{eqnarray*}
Moreover, from Fubini's theorem and \eqref{PDt1}
\begin{eqnarray*}
\langle G(\cdot ,s),\psi (\cdot ,s)\rangle _{D}
&=&\int_{D}G(x,s)\int_{D}p^{D^{\ast }}(s,x;t,y)f(y)\,dy\,dx \\
&=&\int_{D}f(y)\int_{D}p^{D}(s,y;t,x)G(x,s)\,dx\,dy
\ = \ \left\langle U^{D}(t,s)G(\cdot ,s),f(\cdot )\right\rangle _{D}.
\end{eqnarray*}
Therefore, from \eqref{sept-1},
$\left\langle v(\cdot ,t),f(\cdot )\right\rangle _{D}=\left\langle
U^{D}(t,0)v(\cdot ,0),f(\cdot )\right\rangle _{D}+\int_{0}^{t}\langle
U^{D}(t,s)G(\cdot ,s),f(\cdot )\rangle _{D}\,ds
$
{\color{black} for all $f\in{\rm Dom}(L^*)$.
Since ${\rm Dom}(L^{\ast })$ is dense in $L^{2}(D)$ }
we obtain that $v$ is a
mild solution of \eqref{2.2} on $[0,T].$
To prove the converse let $v$ be a mild solution of \eqref{2.2} on $[0,T].$ For $f \in {\rm Dom}(L^{\ast }),$
\begin{eqnarray}\nonumber
\lefteqn{\int_{0}^{t}\left\langle v(\cdot ,s),\frac{1}{2}k^{2}(s)L^{\ast }f(\cdot
)\right\rangle _{D}\,ds} \\ \nonumber
&=&\int_{0}^{t}\left\langle U^{D}(s,0)v(\cdot ,0),\frac{1}{2}k^{2}(s)L^{\ast
}f(\cdot )\right\rangle _{D}\,ds \\ \nonumber
&&+\int_{0}^{t}\left\langle \int_{0}^{s}\chi _{\lbrack
0,s]}(r)U^{D}(s,r)G(\cdot ,r)\,dr,\frac{1}{2}k^{2}(s)L^{\ast }f(\cdot
)\right\rangle _{D}\,ds \\ \nonumber
&=&\int_{0}^{t}\left\langle v(\cdot ,0),(U^{D})^{\ast }(s,0)\frac{1}{2}
k^{2}(s)L^{\ast }f(\cdot )\right\rangle_{D}\,ds \\
&&
+\int_{0}^{t}\int_{r}^{t}\left\langle U^{D}(s,r)G(\cdot ,r),\frac{1}{2}
k^{2}(s)L^{\ast }f(\cdot )\right\rangle _{D}\,ds\,dr. \label{6}
\end{eqnarray}
By applying the Kolmogorov forward equation to $(U^{D})^{\ast }$ we get for the
first integral on the right side of \eqref{6}:
\begin{eqnarray*}
&&
(U^{D})^{\ast }(s,0)(\frac{1}{2}k^{2}(s)L^{\ast }f)(x)=\int_{D}p^{D^{\ast }}(0,x;s,y)
\frac{1}{2}k^{2}(s)L^{\ast }f(y)\,dy\\
&&
=\int_{D}(\frac{1}{2}k^{2}(s)L)p^{D^{\ast }}(0,x;s,y)f(y)\,dy=\int_{D}\frac{\partial }{
\partial s}p^{D^{\ast }}(0,x;s,y)f(y)\,dy,
\end{eqnarray*}
and therefore
\begin{eqnarray*}\lefteqn{
\int_{0}^{t}\left\langle v(\cdot ,0),(U^{D})^{\ast }(s,0)(\frac{1}{2}k^{2}(s)L^{\ast })f(\cdot
)\right\rangle_{D}\,ds}\\
&=&\int_{0}^{t}\left\langle v(\cdot ,0),\int_{D}\frac{\partial }{\partial s}
p^{D^{\ast }}(0,\cdot ;s,y)f(y)\,dy\right\rangle_{D}\,ds \
= \ \left\langle v(\cdot
,0),\int_{D}p^{D^{\ast }}(0,\cdot ;t,y)f(y)dy-f(\cdot )\right\rangle _{D} \\
&=&\left\langle v(\cdot ,0),(U^{D})^{\ast }(t,0)f(\cdot )\right\rangle_{D}-\langle
v(\cdot ,0),f(\cdot )\rangle_{D}.
\end{eqnarray*}
In the same way we get for the second integral on the right side of \eqref{6}
\begin{eqnarray*}
\left\langle U^{D}(s,r)G(\cdot ,r),\frac{1}{2}k^{2}(s)L^{\ast }f(\cdot
)\right\rangle _{D}
&=&\left\langle G(\cdot ,r),(U^{D})^{\ast }(s,r)(\frac{1}{2}k^{2}(s)L^{\ast
}f)(\cdot )\right\rangle _{D} \\
&=&\left\langle G(\cdot ,r),\int_{D}\frac{\partial }{\partial s}
p^{D^{\ast }}(r,\cdot ;s,y)f(y)dy\right\rangle _{D},
\end{eqnarray*}
and therefore
\begin{eqnarray*}\lefteqn{
\int_{r}^{t}\left\langle U^{D}(s,r)G(\cdot ,r),\frac{1}{2}k^{2}(s)L^{\ast
}f(\cdot )\right\rangle _{D}ds=\int_{r}^{t}\left\langle G(\cdot ,r),\int_{D}
\frac{\partial }{\partial s}p^{D^{\ast }}(r,\cdot ;s,y)f(y)dy\right\rangle _{D}ds}\\
&=&\left\langle G(\cdot ,r),\int_{D}p^{D^{\ast }}(r,\cdot ;t,y)f(y)dy-f(\cdot
)\right\rangle _{D}=\left\langle G(\cdot ,r),(U^{D})^{\ast }(t,r)f(\cdot
)-f(\cdot )\right\rangle _{D} \\
&=&\left\langle U^{D}(t,r)G(\cdot ,r),f(\cdot )\right\rangle_{D}-\left\langle
G(\cdot ,r),f(\cdot )\right\rangle _{D}.
\end{eqnarray*}
In this way we obtain
\begin{eqnarray*}\lefteqn{
\int_{0}^{t}\langle v(\cdot ,s),\frac{1}{2}k^{2}(s)L^{\ast }f(\cdot
)\rangle _{D}\,ds }\\
&=&\left\langle U^{D}(t,0)v(\cdot ,0)+\int_{0}^{t}U^{D}(t,r)G(\cdot ,r)dr,f(\cdot
)\right\rangle _{D}-\left\langle v(\cdot ,0),f(\cdot )\right\rangle_{D}
-\int_{0}^{t}\langle G(\cdot ,r),f(\cdot )\rangle _{D}\,dr \\
&=&\left\langle v(\cdot ,t),f(\cdot )\right\rangle _{D}-\langle v(\cdot
,0),f(\cdot )\rangle _{D}-\int_{0}^{t}\langle G(\cdot ,r),f(\cdot )\rangle
_{D}\,dr,
\end{eqnarray*}
since $v$ is a mild solution on $[0,T]$. It follows that $v$ is a weak
solution on $[0,T].$
\end{proof}
\begin{coro}
\label{CORO1}The equations \eqref{2.1} and \eqref{2.2} possess unique weak
solutions.
\end{coro}
\begin{proof} Theorems \ref{THM8} and \ref{PROP3} show the existence and uniqueness of a local
weak and mild solution of \eqref{2.2}, and Proposition \ref{PROP1} shows the uniqueness of a weak solution of \eqref{2.1}.
\end{proof}
\begin{rem}
We refer to \cite{NV} for an existence and uniqueness theorem of the
variational solution of an SPDE with a nonautonomous second order differential operator and driven by fractional Brownian motion, and to \cite{RalSche} for the existence and uniqueness of the mild solution. In \cite{MRS} the existence and uniqueness of the mild solution is shown for equations with the same differential operator and driven by mixed noise.
\end{rem}
\section{An upper bound for the blowup time and probability estimates}\label{Section3}
\subsection {An upper bound for the blowup time}\label{Subsection3.1}
In the remaining part of the paper we will assume that $L$ and $L^*$ admit strictly positive eigenfunctions: there
exists a positive eigenvalue $\lambda _{0}$ and strictly positive
eigenfunctions $\psi _{0}\in $ $\mathrm{Dom}(L)$ for $P_{t}^{D}$ and $
\varphi _{0}\in \mathrm{Dom}(L^{\ast })$ for $(P^{D})_{t}^{\ast }$ with $
\int_{D}\psi _{0}(x)dx=\int_{D}\varphi _{0}(x)dx=1$ such that
\begin{equation} \label{P}
(P_{t}^{D}-e^{-\lambda _{0}t})\psi _{0}=((P^{D})_{t}^{\ast }-e^{-\lambda
_{0}t})\varphi _{0}=0,
\end{equation}
{\color{black}hence}
\begin{equation} \label{L}
(L+\lambda _{0})\psi _{0}=(L^{\ast }+\lambda _{0})\varphi _{0}=0.
\end{equation}
For generators of a
general class of L\'{e}vy processes, properties (\ref{P}) and (\ref{L})
follow from \cite{KyS,CyW}.
Another example are the diffusion processes: for $f\in
\mathcal{C}_{0}^{2}(D),$ the set of twice continously differentiable
functions with compact support in $D,$ let us define the differential
operator
\begin{equation*}
Lf=\sum_{j,k=1}^{d}\frac{\partial }{\partial x_{j}}\left(a_{jk}\frac{
\partial }{\partial x_{k}}f\right)+\sum_{j=1}^{d}b_{j} \frac{\partial }{
\partial x_{j}}f-cf,
\end{equation*}
where $a_{j,k},$ $b_{j},$ $j,k=1,\ldots,d$ are bounded smooth functions on $D$
and $c$ is bounded and continuous. We assume that the matrix $(a_{j,k},$ $
j,k=1,\ldots,d)$ is symmetric and uniformly elliptic.
In this case properties (\ref{P}) and (\ref{L}) follow from \cite[
Theorem 11, Chapter 2]{Friedman}.
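For concreteness, in the self-adjoint case $L=L^{\ast }=\Delta $ on $D=(0,\pi )$ the normalized principal Dirichlet eigenpair is
\begin{equation*}
\lambda _{0}=1,\qquad \psi _{0}(x)=\varphi _{0}(x)=\tfrac{1}{2}\sin x,
\end{equation*}
since $\Delta \sin x=-\sin x$, $e^{t\Delta }\sin x=e^{-t}\sin x$ and $\int_{0}^{\pi }\sin x\,dx=2$, so that (\ref{P}) and (\ref{L}) hold.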
\begin{theorem}
\label{THM1}
Assume (\ref{L}) and let $g(z) \geq Cz^{1+\beta }$ for all $z>0$, where $C>0$, $\beta >0$, are given constants.
Let us define
\begin{equation}
\tau ^{\ast }=\inf \left\{t>0:\int_{0}^{t}\exp \left[-\beta (\lambda
_{0}K(r)+A(r))+\beta N_{r}\right]\,dr\ \geq \ \frac{1}{C\beta }\langle {
\varphi ,\varphi _{0}\rangle}_{D}^{-\beta }\right\}, \label{tau*}
\end{equation}
where the functions $K$ and $A$ are defined in (\ref{AK}). Then, on the
event $\{\tau ^{\ast }<\infty \}$ the solution $v$ of~\eqref{2.2} and the
solution $u$ of \eqref{2.1} blow up in finite time $\tau$, and $\tau \leq
\tau ^{\ast}$ $\mathbb{P}$-a.s.
\end{theorem}
\begin{proof}
Using the hypothesis on $g$ and Jensen's inequality we get for the terms in \eqref{weak rpde}:
\begin{eqnarray*}
\langle v(\cdot ,s),L^{\ast }\varphi _{0}\rangle _{D}&=&-\lambda _{0}\langle
v(\cdot ,s),\varphi _{0}\rangle _{D},\\
\exp (-N_{s})\left\langle g(\exp (N_{s})v(\cdot ,s)),\varphi _{0}\right\rangle
_{D} &\geqq &C\exp (\beta N_{s})\left\langle v^{1+\beta }(\cdot ,s),\varphi
_{0}\right\rangle _{D}\\
&\geqq &C\exp (\beta N_{s})\langle {v(\cdot ,s),\varphi _{0}\rangle }
_{D}^{1+\beta }.
\end{eqnarray*}
Applying these lower bounds to $(\langle {v(\cdot ,t+\varepsilon ),\varphi
_{0}\rangle }_{D}-\langle {v(\cdot ,t),\varphi _{0}\rangle }_{D})/\varepsilon $
and letting $\varepsilon \rightarrow 0$ we get
\begin{equation}
\frac{d}{dt}\langle {v(\cdot ,t),\varphi _{0}\rangle }_{D}\geqq -\frac{1}{2}
(\lambda _{0}k^{2}(t)+a^{2}(t))\langle v(\cdot ,t),\varphi _{0}\rangle
_{D}+C\exp (\beta N_{t})\langle {v(\cdot ,t),\varphi _{0}\rangle }_{D}^{1+\beta
}. \label{ineq}
\end{equation}
The corresponding differential equality reads
\begin{equation*}
\frac{d}{dt}I(t)=-\frac{1}{2}(\lambda _{0}k^{2}(t)+a^{2}(t))I(t)+C\exp
(\beta N_{t})I(t)^{1+\beta },
\end{equation*}
and $I(t)$ is a subsolution of
\eqref{ineq}, i.e. $\langle {v(\cdot ,t),\varphi _{0}\rangle }_{D}\geqq I(t)$.
Then
\begin{equation*}
I(t)=\exp [-(\lambda _{0}K(t)+A(t))]\left( \langle \varphi {,\varphi
_{0}\rangle }_{D}^{-\beta }-\beta C\int_{0}^{t}\exp\left [-\beta (\lambda
_{0}K(s)+A(s))+\beta N_{s}\right]ds\right) ^{-1/\beta }
\end{equation*}
for all $t\in \lbrack 0,\tau ^{\ast }),$ where $\tau ^{\ast }$ is given by~
\eqref{tau*}. Therefore $\tau ^{\ast }$ is an upper bound for the blowup
time of $\langle {v(\cdot ,t),\varphi _{0}\rangle }_{D}$, and the function
$
t\mapsto \|{v(\cdot ,t)\| }_{\infty }=\exp
(-N_{t})\| {u(\cdot ,t)\|}_{\infty }
$
cannot stay finite on $[0,\tau ^{\ast }]$ if $\tau ^{\ast }<\infty $.
Therefore $u$ and $v$ blow up before $\tau ^{\ast }$ if $\tau ^{\ast
}<\infty $.
\end{proof}
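In a deterministic toy case (our choice, for illustration only: $N\equiv 0$, $k=a=\lambda _{0}=1$, $C=2$, $\beta =1$ and $\langle \varphi ,\varphi _{0}\rangle _{D}=1$) the bound \eqref{tau*} is explicit, $\tau ^{\ast }=\ln 2$, and an Euler scheme for the comparison equation $I^{\prime }=-\gamma I+CI^{1+\beta }$ with $\gamma =\frac{1}{2}(\lambda _{0}k^{2}+a^{2})$ reproduces it:

```python
import math

# Toy data: gamma = (lambda0*k^2 + a^2)/2 = 1, C = 2, beta = 1, I(0) = 1.
gamma, C, beta, I0 = 1.0, 2.0, 1.0, 1.0

# Closed form of (tau*) with N = 0 and constant coefficients:
#   tau* = -ln(1 - gamma * I0^{-beta} / C) / (beta * gamma)
tau_star = -math.log(1.0 - gamma * I0 ** (-beta) / C) / (beta * gamma)

# Explicit Euler for I' = -gamma*I + C*I^{1+beta} until the numerical
# solution exceeds a large threshold (a proxy for blowup).
I, t, dt = I0, 0.0, 1e-5
while I < 1e6:
    I += dt * (-gamma * I + C * I ** (1.0 + beta))
    t += dt

print(round(tau_star, 4), round(t, 4))  # both close to ln 2
```

For these data the equation $I^{\prime }=-I+2I^{2}$, $I(0)=1$, is solved by $1/I(t)=2-e^{t}$, so the blowup time is exactly $\ln 2$.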
\begin{rem}
Notice that $\tau ^{\ast }$ depends on $L$ only through the positive
eigenvalue $\lambda _{0}$ and the associated eigenfunction $\varphi _{0}.$
Moreover, $\tau ^{\ast }$ is a decreasing function of $\varphi ,$ $\varphi _{0}$
and $C$, and an increasing function of $\lambda _{0}K.$ Therefore small
functions $\varphi $, $\varphi _{0}$ and a small constant $C,$ as well as large
values of $\lambda _{0}K,$ postpone the blowup of $I$ and have, in this
sense, the tendency to postpone the blowup of $v$ and $u.$
\end{rem}
\subsection {A tail probability estimate for the upper bound of the blowup time} \label{section.3.1}
In the following theorem we apply a tail probability estimate for exponential
functionals of fBm studied by N.T.\ Dung \cite{D} to estimate the
probability that $\tau ^{\ast }$ occurs before a fixed time $T$. Here we assume that the process $B^H$ is given by the formula
\begin{equation}\label{FBM}
B_{t}^{H}=\int_{0}^{t}{K^{H}(t,s)\,dB_{s}},
\end{equation}
where the kernel $K^{H}$ is given for $H>1/2$ by
\begin{equation}\label{Kernel-K}
K^{H}(t,s)=\left\{
\begin{array}{ll}
C_{H}s^{1/2-H}\int_{s}^{t}{(\sigma -s)^{H-3/2}\sigma ^{H-1/2}d\sigma } &
\text{ if }t>s, \\
& \\
0 & \text{ if }t\leqq s,
\end{array}
\right.
\end{equation}
{\color{black}
where $C_{H}=[\frac{H(2H-1)}{\mathcal{B}(2-2H,H-1/2)}]^{\frac{1}{2}}$ and $\mathcal{B}$ is the usual beta function (see Section 5.1.3 in \cite {N} for a general representation formula of fBm with $H>1/2$). Notice that $B^H$ and $B$ are dependent in this case.}
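Differentiating under the integral sign in \eqref{Kernel-K} gives, for $t>s$,
\begin{equation*}
\frac{\partial }{\partial t}K^{H}(t,s)=C_{H}s^{1/2-H}(t-s)^{H-3/2}t^{H-1/2},
\end{equation*}
which is the form of the kernel derivative used repeatedly in the proofs below.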
\begin{theorem} \label{THM2} Under assumptions \eqref{L} and \eqref{FBM},
let
$g(z) \geq Cz^{1+\beta }$
for all $z>0$, where $C>0$ and $\beta >0$ are given constants,
and let
$\mu(T) =\int_{0}^{T}\exp [-\beta (\lambda _{0}K(t)+A(t))]
\mathbb{E}\left[ \exp (\beta N_{t})\right] dt $. Then, for any $T>0$ such that $\frac{1}{C\beta }\langle \varphi ,\phi
_{0}\rangle _{D} ^{-\beta } >\mu(T) ,$
\begin{equation*}
\mathbb{P}\left\{ \tau ^{\ast }\leq T\right\} \leq 2\exp \left( -\frac{\ln^2
\left[ C\beta\langle \varphi ,\phi _{0}\rangle_{D}^{\beta} \, \mu(T) \right]
}{2M(T)}\right) ,
\end{equation*}
where
\begin{equation*}
M(T) =2\beta ^{2}\int_{0}^{T}a^{2}(r)\,dr +4\beta ^{2}HT^{2H-1}\int_{0}^{T}b^{2}(u)\,du.
\end{equation*}
\end{theorem}
\begin{proof} For $t\ge0, $ using \eqref{FBM}, we have the following representation:
\begin{eqnarray}
X_{t}&:=&-\beta(\lambda_0 K(t)+A(t)) +\beta N_{t} \nonumber \\
&=&-\beta(\lambda_0 K(t)+A(t)) +\beta \left(
\int_{0}^{t}a(s)\,dB_{s}+\int_{0}^{t}\int_{s}^{t}b(r)\frac{\partial }{\partial r}
K^H(r,s)\,dr\,dB_{s}\right). \label{Rep}
\end{eqnarray}
From \cite[Theorem 3.1]{D}
it follows that for any $T\geq 0$ and any $x>\mu(T),$ there holds
\begin{equation}
\mathbb{P}\left( \int_{0}^{T}e^{X_{t}}dt\geq x\right) \leq 2\exp \left[-\frac{(\ln x-\ln
\mu(T) )^{2}}{2M(T)}\right], \label{Dung}
\end{equation}
where $\mu (T)=\int_{0}^{T}\mathbb{E}\left[
e^{X_{t}}\right] dt$ and $M(T)$ is such that
\begin{equation}\label{M1}
\sup_{t\in \lbrack 0,T]}\int_{0}^{T}|D_{r}X_{t}|^{2}\,dr\leq M(T)\quad
\mbox{$\mathbb{P}$-a.s.}
\end{equation}
Here $D_rX_t$ denotes the Malliavin derivative of $X_t$.
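Recall that the Malliavin derivative of a Wiener integral with deterministic integrand is the integrand itself:
\begin{equation*}
D_{r}\int_{0}^{t}f(s)\,dB_{s}=f(r)\mathbf{1}_{[0,t]}(r);
\end{equation*}
this is what makes $D_{r}X_{t}$ explicit in the computation below.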
In the following we will find an upper bound $M(T)$ such that (\ref{M1}) holds. For $r<t$ we have, using the representation \eqref{Rep},
\begin{eqnarray*}
D_{r}X_{t}=\beta \left( a(r)+\int_{r}^{t}b(s)\frac{\partial }{\partial s}
K(s,r)\,ds \right) .\\
\end{eqnarray*}
Hence $\int_0^t |D_{r}X_{t}|^2 \,dr \leq 2 \beta^2 \int_0^t a^2(r) \,dr + 2 \beta^2 \int_0^t(\int_r^t b(s) \frac{\partial}{\partial s}K(s,r) \,ds)^2 \,dr $ and
\begin{eqnarray*}
\lefteqn{
\int_0^t\left(\int_r^t b(s) \frac{\partial}{\partial s}K(s,r) \,ds\right)^2 \,dr}\\
&=&\int_0^t \left(\int_r^t b(s) \frac{\partial}{\partial s}K(s,r) \,ds\right)\left(\int_r^t b(s') \frac{\partial}{\partial s'}K(s',r) \,ds'\right)\,dr \\ \
&=&\int_0^t b(s)\,ds \int_0^s \frac{\partial}{\partial s}K(s,r) \,dr \int_r^t b(s') \frac{\partial}{\partial s'}K(s',r) \,ds' \\ \
&=&\int_0^t ds\, b(s) \int_0^t \,dr 1_{[0,s]} (r) \frac{\partial}{\partial s}K(s,r) \int_r^t b(s') \frac{\partial}{\partial s'}K(s',r) \,ds' \\ \
&=&\int_0^t ds\, b(s) \int_0^t \,ds' b(s') \int_0^{s'}1_{[0,s]} (r) \frac{\partial}{\partial s}K(s,r) \frac{\partial}{\partial s'}K(s',r) \,dr \\ \
&=&\int_0^t \,ds \int_0^t ds' \, b(s) b(s') \int_0^{s \wedge s'} \frac{\partial}{\partial s}K(s,r) \frac{\partial}{\partial s'}K(s',r) \,dr\\ \
&=&\int_0^t \,ds \int_0^t ds' \, b(s) b(s') \phi(s,s')\\
&=&\int_0^t \,ds \int_0^s ds' \, b(s) b(s') \phi(s,s')+\int_0^t \,ds \int_s^t ds' \, b(s) b(s') \phi(s,s') \\
&=&2 \int_0^t \,ds \int_0^s ds' \, b(s) b(s') \phi(s,s'),
\end{eqnarray*}
where $\phi(s,s')=\int_0^{s \wedge s'} \frac{\partial}{\partial s}K(s,r) \frac{\partial}{\partial s'}K(s',r) \,dr.$
Since $ \frac{\partial}{\partial s} K(s,r)=C_{H}r^{1/2-H} (s-r)^{H-3/2}s^{H-1/2},$ using (5.7) in \cite{N} we obtain
\begin{eqnarray*}
\phi(s,s')= C_{H}^{2}(ss')^{H-1/2} \int_0^{s \wedge s'} r^{1-2H}(s-r)^{H-3/2}(s'-r)^{H-3/2} \,dr
= H(2H-1)(s-s')^{2H-2}
\end{eqnarray*}
for $s'<s,$ hence
\begin{align}\nonumber
\lefteqn{\int_0^t\left(\int_r^t b(s) \frac{\partial}{\partial s}K(s,r) \,ds\right)^2 \,dr} \\ \nonumber
&\leq \ 2H(2H-1)\int_0^t \,ds\int_0^s
|b(s)b(s')|(s-s')^{2H-2} \,ds' \\ \nonumber
&\leq\ H(2H-1)\left[
\int_0^t b(s)^2 \int_0^s (s-s')^{2H-2} \,ds'
\,ds
+ \int_0^t \int_0^s b(s')^2 (s-s')^{2H-2}\,ds'
\,ds
\right] \\ \nonumber
&=\ H \int_0^t b(s)^2 s^{2H-1}\,ds + H(2H-1) \int_0^t b(s')^2 \int_{s'}^t (s-s')^{2H-2} \,ds\, ds' \\ \nonumber
&=\ H \int_0^t b(s)^2 (s^{2H-1}+(t-s)^{2H-1}) \,ds\\ \label{R}
& \leq \ 2H t^{2H-1} \int_0^t b(s)^2 \,ds.
\end{align}
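The last step uses that, for $0\leq s\leq t$ and $H>1/2$,
\begin{equation*}
s^{2H-1}+(t-s)^{2H-1}\leq 2t^{2H-1},
\end{equation*}
since both $s\leq t$ and $t-s\leq t$, and the map $z\mapsto z^{2H-1}$ is increasing.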
From the above inequalities we obtain
\begin{equation}
\sup_{t\in \lbrack 0,T]}\int_{0}^{T}|D_{r}X_{t}|^{2}dr
\leq 2\beta ^{2}\int_{0}^{T}a^{2}(r)dr
+4\beta ^{2}HT^{2H-1}\int_{0}^{T}b^{2}(u)du:=M(T).
\label{M2}
\end{equation}
Now, from (\ref{tau*})
\begin{eqnarray}\nonumber
\mathbb{P}(\tau ^{\ast }\leqq T)&=&\mathbb{P}\left\{ \int_{0}^{T}\exp
[-\beta (\lambda _{0}K(t)+A(t))+\beta N_{t}]\,dt \geqq \frac{1}{C\beta}\langle \varphi ,\phi _{0}\rangle _{D}^{-\beta} \right\} \\ \label{X}
&=&\mathbb{P}\left\{ \int_{0}^{T}\exp[X_{t}] \,dt \geq x \right\},\end{eqnarray} where $x= \frac{1}{C\beta}\langle \varphi ,\phi _{0}\rangle _{D}^{-\beta} .$ The result follows from \eqref{Dung} and \eqref{M2}.\end{proof}
{\color{black}
In the following theorem we obtain upper bounds for the tail of $\tau^*$ in the case {\color{black} when the Brownian motion $B$} and the fractional Brownian motion $B^H$ have a general dependence structure.
\begin{theorem}\label{THM3}
Assume \eqref{L} and let $g(z) \geq Cz^{1+\beta }$ for all $z>0$, where $C>0$ and $\beta >0$ are given constants.
\begin{enumerate}
\item Assume that $B_{t}^{H}=\int_{0}^{t}{K^{H}(t,s)\,dW_{s}},$ where $W$ is a Brownian motion defined on the same probability space, and adapted to the same filtration, as the Brownian motion $B$.
Then
\begin{eqnarray*}\lefteqn{
\mathbb{P}(\tau^*\le T)} \\
&\le & C\beta \langle {
\varphi ,\phi _{0}\rangle}_{D}^{\beta }
\int_0^T\left[
e^{-\beta \lambda_0 \int_0^t k^2(s)\,ds + 2\beta^2\int_0^ta^2(s)\,ds}
+
e^{ - \beta \int_0^t a^2(s)\,ds
+4\beta^2Ht^{2H-1}\int_0^tb^2(s)\,ds}
\right]\,dt.
\end{eqnarray*}
\item If $B$ and $B^H$ are independent, then {\color{black}
$$\mathbb{P}(\tau^*\le T) \ \le \ { C\beta \langle {
\varphi ,\phi _{0}\rangle}_{D}^{\beta }}
\int_0^T
e^{-\beta\lambda_0K(t)
+\frac{\beta^2-\beta}{2}\int_0^ta^2(s)\,ds
+ \beta^2H t^{2H-1}\int_0^tb^2(s)\,ds}\,dt.
$$
}
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item Using Hölder's and Chebyshev's inequalities we obtain
{\color{black}
\begin{eqnarray}\nonumber
\mathbb{P}(\tau^*\le T) &=& \mathbb{P}\left[
\int_0^T
e^{-\beta\lambda_0K(t)
+\beta\int_0^ta(s)\,dB_s -\beta A(t)+\beta \int_0^tb(s)\,dB^H_s}
\,dt\ge \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }
\right]
\\ \nonumber
&\le &
\mathbb{P}\left[\left(
\int_0^T
e^{-2\beta\lambda_0K(t)
+2\beta\int_0^ta(s)\,dB_s }
\,dt\right)^{\frac{1}{2}} \right. \\ \nonumber &&\phantom{MMMM}\times\left.
\left(
\int_0^T
e^{ -2\beta A(t)+2 \beta \int_0^tb(s)\,dB^H_s}
\,dt\right)^{\frac{1}{2}}
\ge \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }
\right]
\\ \nonumber
&\le &
\mathbb{P}\left[
\int_0^T
e^{-2\beta\lambda_0K(t)
+2\beta\int_0^ta(s)\,dB_s }
\,dt\ge \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }\right] \\ \nonumber &&
+
\mathbb{P}\left[
\int_0^T
e^{-2\beta A(t)+ 2\beta \int_0^tb(s)\,dB^H_s}
\,dt\ge \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }
\right]
\\ \nonumber
&\le & \frac{
\mathbb{E}\left[
\int_0^T
e^{-2\beta\lambda_0K(t)
+2\beta\int_0^ta(s)\,dB_s }
\,dt\right]
+
\mathbb{E}\left[
\int_0^T
e^{-2\beta A(t)+ 2\beta\int_0^tb(s)\,dB^H_s}
\,dt
\right]}{ \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }}\\
&\le &\frac{\int_0^T
\left[e^{-2\beta\lambda_0K(t)
+2\beta^2\int_0^ta^2(s)\,ds}\right]\,dt
+
\int_0^T
e^{-2\beta A(t)}
\mathbb{E}\left[e^{ 2\beta\int_0^t b(s)\,dB^H_s}\right] \,dt
}{ \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }}, \label{B1B2B3}
\end{eqnarray}
}
where we have used the fact that $\mathbb{E}\left(\exp\left\{\int_0^tf(s)\,dB(s)\right\}\right)=
\exp\left\{\frac{1}{2}\int_0^tf^2(s)\,ds\right\}$ to obtain the last inequality.
{\color{black} In addition,
$$
\mathbb{E}\left[e^{ 2\beta\int_0^t b(s)\,dB^H_s}\right]
=
\mathbb{E}\left[e^{ 2\beta\int_0^t\int_s^t b(r)\frac{\partial}{\partial r}K^H(r,s)\,dr\,dW_s}\right] =
e^{ 2\beta^2\int_0^t \left[\int_s^t b(r)\frac{\partial}{\partial r}K^H(r,s)\,dr\right]^2\,ds},
$$
where the last equality follows from \cite[Theorem 4.12]{Klebaner}. Therefore, using \eqref{R} we get
\begin{equation}\label{B3B3B3}
\mathbb{E}\left[e^{ 2\beta\int_0^t b(s)\,dB^H_s}\right]\le \exp\left\{
4\beta^2Ht^{2H-1}\int_0^tb^2(s)\,ds
\right\}.
\end{equation}
Substituting \eqref{B3B3B3} into \eqref{B1B2B3} we obtain the desired bound.
}
\item Using Chebyshev's inequality, the independence of $B$ and $B^H$,
{\color{black}
and the proof of \eqref{B3B3B3},
\begin{eqnarray*}\lefteqn{
\mathbb{P}(\tau^*\le T)} \\ &=& \mathbb{P}\left[
\int_0^T
e^{-\beta\lambda_0K(t)
+\beta\int_0^ta(s)\,dB_s -\beta A(t)+\beta \int_0^t b(s)\, dB^H_s}
\,dt\ge \frac{1}{C\beta }\langle {
\varphi ,\phi _{0}\rangle}_{D}^{-\beta }
\right]
\\
&\le & C\beta \langle {
\varphi ,\phi _{0}\rangle}_{D}^{\beta }
\int_0^T
\mathbb{E}\left[e^{-\beta\lambda_0K(t)
+\beta\int_0^t a(s)\,dB_s}\right]
\mathbb{E}\left[e^{ -\beta A(t)+ \beta \int_0^t b(s)\, dB^H_s }\right]
\,dt \\
& \le & C\beta \langle {
\varphi ,\phi _{0}\rangle}_{D}^{\beta }
\int_0^T
\exp\left\{
-\beta\lambda_0K(t)
+\frac{\beta^2-\beta}{2}\int_0^ta^2(s)\,ds
+
\beta^2Ht^{2H-1}\int_0^tb^2(s)\,ds
\right\}
\,dt.
\end{eqnarray*}
}
\end{enumerate}
\end{proof}
}
\section{Lower bounds for the blowup time and for the probability of finite
time blowup}
\subsection{A lower bound for the probability of finite
time blowup}
In the following theorem we give a lower bound for the probability of finite
time blowup of the {\color{black} weak} solution of (\ref{2.1}). If $f,g$ are nonnegative
functions and $c$ is a constant, we write $f(t)\sim cg(t)$ as $t\to\infty$
if $\lim_{t\to\infty}f(t)/g(t)=c$.
{\color{black}
\begin{theorem}\label{THM4} {\color{black} Assume \eqref{L} and \eqref{FBM}.
Let
$g(z)\geq Cz^{1+\beta }$ for all $z>0$,
}
and
\begin{equation*}
\int_{0}^{t}a^{2}(r)\,dr\sim C_{1}t^{2l},\quad \int_{0}^{t}b^{2}(r)\,dr\sim C_{2}t^{2m},\quad
\int_{0}^{t}k^{2}(r)\,dr\sim C_{3}t^{2p}
\end{equation*}
as $t\rightarrow \infty $ for some nonnegative constants $l,\,m,\, p$ and positive
constants $C,$ $\beta$, $C_{1}, C_2$ and $C_{3}.$ Suppose additionally that
\begin{enumerate}{\color{black}
\item if $\beta \in (0,1/2),$ then $\max \{p,l\}>H+m-\frac{1}{2},
$
\item if $\beta =1/2,$ then $H+m-\frac{1}{2}<p,$
\item if $\beta >1/2,$
then $p>\max \{l,H+m-\frac{1}{2}\}.$
}
\end{enumerate}
Under these assumptions the solution
of (\ref{2.1}) blows up in finite time with positive probability. Moreover,
\begin{equation}
\mathbb{P}(\tau <\infty )\ \geqq \ \mathbb{P}(\tau ^{\ast }<\infty )\ \geqq
\ 1-\exp \left( -\frac{(m_{\xi }-1)^{2}}{2L_{\xi }}\right) , \label{DV}
\end{equation}
where
\begin{equation}
\xi =\frac{1}{C\beta }\langle \varphi ,\phi _{0}\rangle _{D}^{-\beta },\quad
L_{\xi }=\underset{t\geqq 0}{\sup }\frac{M(t)}{(\ln (\xi +1)+f(t))^{2}},
\end{equation}
with $f(t)=t^{\max \{H+m-1/2,\,l\}}$ and
\begin{equation}
m_{\xi }=\mathbb{E}\left[ \underset{t\geqq 0}{\sup }\frac{\ln \left( \int_{0}^{t}\exp
\left( -\beta (\lambda _{0}K(s)+A(s))+\beta N_{s}\right) \,ds+1\right) +f(t)
}{\ln (\xi +1)+f(t)}\right] . \label{DIII}
\end{equation}
\end{theorem}
\begin{proof} From (\ref{X}) it follows that $\mathbb{P}(\tau^* <\infty) =\mathbb{P}( \int_0^{\infty} e^{X_t} \,dt \geq \xi).$
In order to estimate $\mathbb{P}( \int_0^{\infty} e^{X_t} \,dt \geq \xi)$ we use \cite[Theorem 3.1]{DII}, with $a=0$ and $ \sigma=1:$
\begin{prop}[\cite{DII}] \label{DII}Assume that the stochastic process $X$ is adapted and satisfies
a) $\int_0^{\infty}Ee^{X_s}\,ds <\infty,$
b) For each $t\geq 0, \, X_t \in D^{1,2},$
c) There exists a function $f:\mathbb{R}_+\to \mathbb{R}_+$ such that $\lim_{t \to \infty}f(t) = \infty$ and for each $x>0,$
\begin{eqnarray}
\text{ \ \ }\underset{t\geqq 0}{\sup }\frac{\sup_{s\in \lbrack
0,t]}\int_{0}^{t}|D_{r}X_{s}|^{2}dr}{(\ln (x+1)+f(t))^{2}}
\leq
L_{x}<\infty \quad {a.s.}
\end{eqnarray}
Then
$$\mathbb{P}\left( \int_0^{\infty} e^{X_t} \,dt <x\right) \leq \exp\left\{-\frac{(m_x-1)^2}{2L_x}\right\},$$
where $$m_x= E\left[\sup_{t\geq 0} \frac{\ln(\int_0^t e^{X_s}\,ds +1)+f(t)}{\ln(x+1)+f(t)}\right].$$
\end{prop}
We now verify that conditions a)--c) of the above proposition hold.
For condition a) we have from \eqref{Rep},
{\color{black}
\begin{eqnarray*}\nonumber
\lefteqn{
\int_{0}^{\infty }\mathbb{E} \exp [X_{t}]\,dt}\\
&=& \int_{0}^{\infty }\mathbb{E} \exp \left[-\frac{\beta \lambda _{0}}{2}\int_0^tk^2(s)\,ds-\frac{\beta}{2}\int_0^ta^2(s)\,ds \right. \\ && \phantom{MMMMM} + \left. \beta\left( \int_0^t a(s)\,dB_s +\int_0^t\int_s^t b(r)\frac{\partial}{\partial r}K^H(r,s)\,dr\,dB_s\right)\right] dt\\ \nonumber
&=&
\int_{0}^{\infty }\mathbb{E} \exp \left[-\frac{\beta \lambda _{0}}{2}\int_0^tk^2(s)\,ds-\frac{\beta}{2}\int_0^ta^2(s)\,ds + \beta \int_0^t\left( a(s) +\int_s^t b(r)\frac{\partial}{\partial r}K^H(r,s)\,dr\right)\,dB_s\right] dt\\
&=&
\int_{0}^{\infty } \exp \left[-\frac{\beta \lambda _{0}}{2}\int_0^tk^2(s)\,ds-\frac{\beta}{2}\int_0^ta^2(s)\,ds + \frac{\beta^2}{2} \int_0^t
\left( a(s) +\int_s^t b(r)\frac{\partial}{\partial r}K^H(r,s)\,dr\right)^2\,ds\right] dt,
\end{eqnarray*}
where, again, we have used \cite[Theorem 4.12]{Klebaner} to obtain the last equality. Therefore, using \eqref{R},
\begin{eqnarray}
\nonumber
\int_{0}^{\infty }\mathbb{E} \exp [X_{t}]\,dt
&\le&
\int_{0}^{\infty } \exp \left[-\frac{\beta \lambda _{0}}{2}\int_0^tk^2(s)\,ds-\frac{\beta}{2}\int_0^ta^2(s)\,ds + \frac{\beta^2}{2} \int_0^t
2a^2(s)\,ds\right. \\ \label{KlKl} & & \phantom{MMMMM} + \left.
2\beta^2Ht^{2H-1}\int_0^tb^2(s)\,ds
\right] dt.
\end{eqnarray}
}
The integral \eqref{KlKl} is finite if and only if the leading power of $t$ in the term
$$
-\frac{\beta \lambda_{0}}{2} \int _0^t k^2(s)\, ds +\frac{2\beta^2 -\beta}{2} \int_0^t a^2(s)\,ds + 2\beta^2Ht^{2H-1}\int_0^tb^2(s)\,ds
$$
has negative coefficient, which follows from our assumptions.
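Explicitly, under the stated asymptotics the exponent in \eqref{KlKl} behaves like
\begin{equation*}
-\frac{\beta \lambda _{0}C_{3}}{2}\,t^{2p}+\frac{(2\beta ^{2}-\beta )C_{1}}{2}
\,t^{2l}+2\beta ^{2}HC_{2}\,t^{2H-1+2m}
\end{equation*}
as $t\rightarrow \infty $. For $\beta <1/2$ the coefficient of $t^{2l}$ is negative, so it suffices that $\max \{p,l\}>H+m-\frac{1}{2}$; for $\beta =1/2$ this coefficient vanishes, and for $\beta >1/2$ it is positive, which explains conditions 2 and 3 of the theorem.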
Condition b) is a consequence of \eqref{M2}.
For condition c) we use the inequality (\ref {M2}), which implies that for any $x>0$ and any fixed function $f$,
\begin{eqnarray}\label{sup}
\text{ \ \ }\underset{t\geqq 0}{\sup }\frac{\sup_{s\in \lbrack
0,t]}\int_{0}^{t}|D_{r}X_{s}|^{2}dr}{(\ln (x+1)+f(t))^{2}}
\leq \sup_{t\geq 0} \frac{M(t)}{(\ln (x+1)+f(t))^{2}}.
\end{eqnarray}
Due to our assumptions, for large $t$ the leading power of $t$ in the numerator is $ \max \{2l,2H +2m -1\}.$ It follows that
$$\lim_{t\to \infty}\frac{M(t)}{\left(\ln (x+1)+t^{\max\{l,H+m-1/2 \}}\right)^{2}} <\infty, $$
and therefore the supremum in (\ref{sup}) is finite. The result follows from Proposition \ref{DII}.
\end{proof}
The cases when $a=0$ (only fractional Brownian motion is present) or $b=0$
(only Brownian motion is present) are simpler:
\begin{coro} {\color{black} Under the assumptions in Theorem \ref{THM4},}
\begin{enumerate}
\item When $a(t)\equiv 0$ and {\color{black} $p>H+m-1/2$} the solution of (\ref{2.1})
explodes in finite time with positive probability for all $\beta >0$.
\item If $a(t)\equiv 0$ and {\color{black}$p=H+m-1/2$}, the solution of (\ref{2.1})
explodes in finite time with positive probability for all $\beta >0$
satisfying {\color{black}$
\,\beta < \frac{C_3\lambda_0}{4 C_2H}
.$
}
\item When $b(t)\equiv0$ and {\color{black}$0<\beta \leq\frac{1}{2}$} the solution of (\ref{2.1}) exhibits
explosion in finite time with positive probability for all values of $p$ and
$l$.
\item If $b(t)\equiv0$ and {\color{black}$\beta >1/2,$} the solution of (\ref{2.1}) exhibits
explosion in finite time with positive probability if $p>l$ or if $p=l$ and {\color{black}$
C_{3}\lambda _{0}>C_{1}(2\beta -1)$.}
\end{enumerate}
\end{coro}
}
Notice that $m_{\xi}$ given in (\ref{DIII}) satisfies $m_{\xi }>1$ due to
Theorem 3.1 in \cite{DII}. The formula for $m_{\xi }$ shows interactions
between $\varphi $ and $K$ that have an influence on the lower bound in
\eqref{DV}.
Increasing values of $K$ decrease the lower bound in \eqref{DV}. In this
sense, high values of $K$ work against finite time blowup.
{\color{black}
\subsection{The case $H>3/4$ and independent $B$ and $B^H$}
}
In order to find more explicit lower bounds for $\mathbb{P}(\tau <+\infty )$,
we consider in this subsection the case $H\in (3/4,1)$ and
suppose that $B$ and $B^{H}$ are independent and $b(s)=ca(s)$ for all $
s\geqq 0$, where $c$ is a constant. Then $N_{t}=\int_{0}^{t}a(s)\,dM_{s}$ with
$M_{s}=B_{s}+cB_{s}^{H}$. By \cite{Ch}, $M$ is equivalent to a Brownian
motion $\widetilde{B}$, and therefore $N_{t}$ is equivalent to $\tilde{N}
_{t}:=\int_{0}^{t}a(s)\,d\widetilde{B}_{s}$. Here equivalence means equality
of the laws of the processes on $(\mathcal{C}[0,T],\mathcal{B}),$ the space
of continuous functions defined on $[0,T]$ endowed with the $\sigma $-algebra
generated by the cylinder sets. Furthermore, $(\tilde{N}_{t})_{t\geqq 0}$ is
a continuous martingale and therefore a time-changed Brownian motion: $\tilde{
N}_{t}=\widetilde{B}_{2A(t)}$.
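The time change can be read off from the quadratic variation: since $\tilde{N}_{t}=\int_{0}^{t}a(s)\,d\widetilde{B}_{s}$,
\begin{equation*}
\langle \tilde{N}\rangle _{t}=\int_{0}^{t}a^{2}(s)\,ds=2A(t),
\end{equation*}
so the Dambis--Dubins--Schwarz theorem yields $\tilde{N}_{t}=\widetilde{B}_{2A(t)}$ for a suitable Brownian motion $\widetilde{B}$.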
\begin{theorem}
\label{THM5}
{\color{black}Assume \eqref{L}}.
Let $H \in (3/4,1)$, $B$ and $B^{H}$ be independent and $
b(s)=ca(s)$ for all $s\geqq 0$, where $c$ is a constant. We assume also that $
g(z)\geq Cz^{1+\beta}$ for all $z>0$, that the functions $k$ and $a$ are positive and continuous on
$\mathbb{R}_{+}$ and that there exist constants $\eta \in (0,+\infty ]$ and $
c_{1}>0$ such that
\begin{equation}
\frac{1}{a^{2}(t)}\exp(-\beta \lambda _{0}K(t)) \geq c_{1}\exp \left(-2\beta
\frac{A(t)}{\eta }\right),\quad t\in \mathbb{R}_{+}. \label{PB}
\end{equation}
Then
\begin{equation}
\mathbb{P}(\tau <+\infty ) \ \geq \ \mathbb{P}(Z_{\mu }\leq \theta ),
\end{equation}
where $\tau $ is the blowup time of \eqref{2.1}, $Z_{\mu }$ is a
gamma-distributed random variable with parameter $\mu :=\frac{2}{\beta }(
\frac{1}{\eta }+\frac{1}{2}),$ $\theta :=\frac{2c_{1}}{\beta ^{2}\xi }$ and $
\xi :=\frac{1}{C\beta }\langle \varphi ,\phi _{0}\rangle_{D} ^{-\beta }$.
\end{theorem}
\begin{proof}
{\color{black} From Theorem \ref{THM1},}
\begin{eqnarray*}
\mathbb{P}(\tau^{*}=+\infty)&=&\mathbb{P}\left( \int_{0}^{t}dr\,\exp \left[-\beta(\lambda _{0}K(r)+A(r))+\beta \tilde{N}_{r}\right]<\xi\mbox{ for all } t>0 \right)\\
&=&\mathbb{P}\left( \int_{0}^{\infty}dr\exp \left[-\beta(\lambda _{0}K(r) + A(r))+\beta \tilde{N}_{r}\right] \leq \xi \right).
\end{eqnarray*}
By the change of variable $q=2A(r)$ we get
\begin{equation*}
\begin{aligned}
\mathbb{P}(\tau^{*}=+\infty)=\mathbb{P}\left(\int_{0}^{\infty}dr\,\exp \left[-\beta(\lambda _{0}K(r)+ A(r))+\beta \tilde{B}_{2A(r)}\right] \leq \xi \right)\\
=\mathbb{P}\left( \int_{0}^{\infty}\frac{dq}{a^{2}(A^{-1}(q/2))}\exp \left[-\beta(\lambda _{0}K(A^{-1}(q/2))+\frac{1}{2}q)+\beta \tilde{B}_{q}\right] \leq \xi\right).
\end{aligned}
\end{equation*}
Applying~\eqref{PB} to $t=A^{-1}(q/2)$ yields
\begin{equation*}
\frac{1}{a^{2}(A^{-1}(q/2))} \exp\left[-\beta\lambda _{0}K(A^{-1}(q/2))\right] \geq c_{1} \exp\left(-\frac{\beta}{\eta}q\right),\quad q\in\mathbb{R}_{+}.
\end{equation*}
Therefore
\begin{eqnarray*}
\mathbb{P}(\tau^{*}=+\infty) &\leq &\mathbb{P}\left(c_{1}\int_{0}^{\infty} dq \exp\left [-\beta q\left(\frac{1}{\eta}+\frac{1}{2}\right)+\beta \tilde{B}_{q}\right] \leq \xi\right)\\
&=&\mathbb{P}\left(\int_{0}^{\infty}dq\, \exp\left[\beta(\tilde{B}_{q}-\tilde{\mu} q)\right] \leq\frac{\xi}{c_{1}}\right),
\end{eqnarray*}
where $\tilde{\mu}:=\frac{1}{\eta}+\frac{1}{2}$. A second change of variable $q=\frac{4s}{\beta^{2}}$ yields
\begin{equation*}
\begin{aligned}
\mathbb{P}(\tau^{*}=+\infty)
\le \mathbb{P}\left(\int_{0}^{\infty} ds \,\exp\left[2(\tilde{B}_{s}-\mu s)\right] \leq \frac{\beta^{2}\xi}{4 c_{1}} \right),
\end{aligned}
\end{equation*}
where $\mu:=\tilde{\mu}\frac{2}{\beta}$.
Due to \cite[Corollary 1.2, page 95]{Y},
\begin{equation*}
\int_{0}^{\infty}e^{2(\tilde{B}_{s}-\mu s)}\,ds\overset{\mathcal{L}}{=}\frac{1}{2Z_{\mu}},
\end{equation*}
where $Z_{\mu}$ is a gamma-distributed random variable with parameter $\mu$. Therefore
\begin{equation*}
\begin{aligned}
\mathbb{P}(\tau=+\infty) \leq\ \mathbb{P}(\tau^{*}=+\infty) \leq \mathbb{P}\left(\frac{1}{2Z_{\mu}} \leq \frac{\beta^{2}\xi}{4 c_{1}} \right)=\mathbb{P}\left(Z_{\mu} \geq \frac{2c_{1}}{\beta^{2}\xi}\right).
\end{aligned}
\end{equation*}
This implies the statement of the theorem.
\end{proof}
\begin{rem}
If $k,a$ and $b$ are constants, a more explicit lower bound for $\mathbb{P}
(\tau <+\infty )$ is available without the assumption \eqref{PB}. Indeed,
starting with \eqref{tau*}, a straightforward calculation gives a lower
bound in terms of a gamma-distributed random variable $Z$ again, but this
time with parameter $\widehat{\mu }:=(\lambda _{0}k^{2}+a^{2})/(a^{2}\beta ).
$ More precisely,
\begin{equation*}
\mathbb{P}(\tau <\infty )\ \geqq \ \mathbb{P}(\tau ^{\ast }<\infty )\ = \
\mathbb{P}\left(Z_{\widehat{\mu }}\leqq \frac{2C}{a^{2}\beta }\langle
\varphi ,\phi _{0}\rangle_{D} ^{\beta }\right).
\end{equation*}
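The calculation can be sketched as follows (using that $2A(t)=a^{2}t$ for constant $a$): the successive changes of variable $q=a^{2}t$ and $q=4s/\beta ^{2}$ give
\begin{equation*}
\int_{0}^{\infty }\exp \Big[ -\frac{\beta (\lambda _{0}k^{2}+a^{2})}{2}\,t+\beta
\widetilde{B}_{a^{2}t}\Big] \,dt=\frac{4}{a^{2}\beta ^{2}}\int_{0}^{\infty
}\exp \big[ 2(\widetilde{B}_{s}-\widehat{\mu }\,s)\big] \,ds\ \overset{\mathcal{L}}{=}\
\frac{2}{a^{2}\beta ^{2}Z_{\widehat{\mu }}}
\end{equation*}
by \cite[Corollary 1.2, page 95]{Y}; comparing $2/(a^{2}\beta ^{2}Z_{\widehat{\mu }})$ with the threshold $\frac{1}{C\beta }\langle \varphi ,\phi _{0}\rangle _{D}^{-\beta }$ from \eqref{tau*} gives the displayed bound.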
\end{rem}
\subsection{A lower bound for the blowup time}
Our next goal is to obtain a lower bound for the blowup time $\tau.$ Since
the proofs of the following results are close to those in \cite{ALP} (where $
b=0$), we omit them here.
\begin{theorem}
\label{THM7} Let the function $g$ be such that $g(0)=0$, $z\mapsto g(z)/z
$ is increasing, and $g(z)\leq \Lambda z^{1+\beta}$ for some positive
constant $\Lambda.$ Then $\tau \geq \tau_*$, where
\begin{equation}
\tau _{\ast }=\inf \left\{t>0:\text{ }\int_{0}^{t}\exp (\beta
(N_{r}-A(r)))\left\| U^{D}(r,0)\varphi\right\| _{\infty }^{\beta }dr\geqq
\frac{1}{\Lambda \beta }\right\}. \label{tau*2}
\end{equation}
Let us define for $0\leqq t<\tau _{\ast },$
\begin{equation*}
J(t)=\left( 1-\Lambda \beta \int_{0}^{t}\exp (\beta (N_{r}-A(r)))\left\|
U^{D}(r,0)\varphi \right\|_{\infty }^{\beta }dr\right) ^{-1/\beta }.
\end{equation*}
Then the solution $u$ of \eqref{2.1} satisfies, for $x\in D,$ $0\leqq t<\tau
_{\ast },$ $\mathbb{P}$-a.s.
\begin{equation}
0\leqq u(x,t)\leqq J(t)\exp (N_{t}-A(t))U^{D}(t,0)\varphi (x). \label{ubu}
\end{equation}
\end{theorem}
\begin{rem}
More precisely, the proof of this theorem shows that the mild solution $v$
of \eqref{2.2} satisfies \eqref{ubu} without the factor $\exp(N_{t}).$ By
Theorem \ref{THM8}, $v$ is also the weak solution of \eqref{2.2}, hence the
weak solution $u(\cdot ,t)=\exp (N_{t})v(\cdot ,t)$ of \eqref{2.1} satisfies
\eqref{ubu}.
\end{rem}
\begin{coro}
\label{CC4} Assume that
\begin{equation*}
\Lambda \beta \int_{0}^{\infty }\exp [\beta (N_{r}-A(r))]\left\|
U^{D}(r,0)\varphi \right\|_{\infty }^{\beta }dr<1.
\end{equation*}
Then the solution $u$ of \eqref{2.1} satisfies \eqref{ubu} $\mathbb{P}$-a.s.
for all $t.$
\end{coro}
\begin{rem}
For the special choice of $\varphi =p\psi _{0},$ $p>0,$ the integrals
appearing in \eqref{tau*} and \eqref{tau*2} are the same exponential
functionals of $N.$ In fact, $U^{D}(r,0)\psi _{0}=\exp (-\lambda
_{0}K(r))\psi _{0},$ and $\tau _{\ast }$ becomes
\begin{equation}
\tau _{\ast }=\inf \left\{t>0:\text{ }\int_{0}^{t}\exp \left[
\beta (N_{r}-\lambda _{0}K(r)-A(r))\right]
\,dr\geqq \frac{p^{-\beta }}{\Lambda \beta } \left\| \psi _{0}\right\|
_{\infty }^{-\beta }\right\}, \label{taupsy2}
\end{equation}
whereas
\begin{equation}
\tau ^{\ast }=\inf \left\{t>0:\int_{0}^{t}\exp \left[
\beta (N_{r}-\lambda _{0}K(r)-A(r))\right]\,dr\geq \frac{p^{-\beta }
}{C\beta } \langle \psi_{0} ,\phi _{0}\rangle_{D} ^{-\beta }\right\}.
\label{taupsy1}
\end{equation}
In fact $\tau _{\ast }\leqq \tau ^{\ast }$ if $C\leqq \Lambda ,$ since $
\langle \psi_{0} ,\phi _{0}\rangle_{D} \leqq \| \psi _{0}\|_{\infty
}\int_{D}\phi _{0}(x)dx=\|\psi_{0}\|_{\infty }. $ In order to apply both
bounds simultaneously, we have to suppose $Cz^{1+\beta }\leqq g(z)\leqq
\Lambda z^{1+\beta },$ $z>0.$ It is therefore of interest to know the law of
the integral appearing in \eqref{taupsy2} and \eqref{taupsy1}$.$ This seems
possible only for $b=0, $ since, to the best of our knowledge, the law of
exponential functionals of fractional Brownian motion is still unknown. For
the moment it seems that only estimates of the type of those in Section \ref
{section.3.1} are available. See also Theorem \ref{THM5} for $H>3/4.$
\end{rem}
\section{A sufficient condition for finite time blowup}
We consider now the mild form of \eqref{2.2} obtained in Proposition \ref
{Proposition2}, and obtain a sufficient condition for finite time blowup.
\begin{theorem}
\label{THM6} Suppose that $g(z)\geq Cz^{1+\beta}$ and that there exists $
w^*>0$ such that
\begin{equation}
\exp (\beta A(w^*))\left\| U^{D}(w^*,0)\varphi \right\| _{\infty
}^{-\beta }<\beta C\int_{0}^{w^*}\exp (\beta N_{s})\,ds. \label{cond2}
\end{equation}
Then for the explosion time $\tau $ of~\eqref{2.1} there holds $\tau \leq
w^*.$
\end{theorem}
\begin{rem}
Inequality \eqref{cond2} is understood trajectorywise. Therefore $w^*$ is
random. \eqref{cond2} is harder to satisfy with a small initial condition $
\varphi $ and with a small value of $C.$ Due to the different
interpretations of the integrals in $N$, the effects on blowup of $B$ and $
B^{H}$ are different. If $N=0,$ \eqref{cond2} reads $\left\|
U^{D}(w^*,0)\varphi \right\| _{\infty }^{-\beta }<\beta Cw^*$ and in this
case $w^*$ {\color{black}
is deterministic; if in addition $\varphi
=\psi _{0},$ \eqref{cond2} reads
}
$\exp(\lambda _{0}\beta K(w^*))\parallel
\psi _{0}\parallel _{\infty }^{-\beta }<\beta Cw^*.$
\end{rem}
\begin{proof}We use the approach in \cite[Lemma 15.6]{Q-S}; see also \cite{LDP}. Suppose that $v(x,t),$ $x\in D,$ $t\geq 0,$ is a global
solution of \eqref{2.2}, and let $0<t<t^{\prime }.$ Using the semigroup
property of the evolution system $(U^{D}(t,r))_{0\leqq r<t}$ we obtain
\begin{eqnarray*}
&&\exp \left(-A(t^{\prime },t)\right)U^{D}(t^{\prime },t)v(\cdot ,t)(x) \\
&=&\exp\left(-A(t^{\prime },t)\right)U^{D}(t^{\prime },t)\left[ \exp
\left(-A(t)\right)U^{D}(t,0)\varphi (\cdot )\right] (x) \\
&&+\exp \left(-A(t^{\prime },t)\right)U^{D}(t^{\prime },t)\left[ \int_{0}^{t}\exp
(-N_{r})\exp \left(-A(t,r)\right)U^{D}(t,r)g(\exp (N_{r})v(\cdot ,r))\,dr\right] (x) \\
&=&\exp \left(-A(t^{\prime })\right)U^{D}(t^{\prime },0)\varphi (\cdot )(x) \\
&&+\int_{0}^{t}\exp (-N_{r})\exp \left(-A(t^{\prime },r)\right)U^{D}(t^{\prime
},r)g(\exp (N_{r})v(\cdot ,r))(x)\,dr \\
&\geqq &\exp \left(-A(t^{\prime })\right)U^{D}(t^{\prime },0)\varphi (\cdot )(x) \\
&&+C\int_{0}^{t}\exp (\beta N_{r})\exp \left(-A(t^{\prime },r)\right)U^{D}(t^{\prime
},r)v(\cdot ,r)^{1+\beta }(x)\,dr.
\end{eqnarray*}
By Jensen's inequality
\begin{eqnarray*}
U^{D}(t^{\prime },r)v(\cdot ,r)^{1+\beta }(x) &=&\int_{D}p^{D}(r,x;t^{\prime
},y)v(y,r)^{1+\beta }\,dy \\
&\geqq &\left( \int_{D}p^{D}(r,x;t^{\prime },y)v(y,r)\,dy\right) ^{1+\beta
}=\left( U^{D}(t^{\prime },r)v(\cdot ,r)(x)\right) ^{1+\beta }.
\end{eqnarray*}
Therefore
\begin{equation*}
\exp \left(-A(t^{\prime },t)\right)U^{D}(t^{\prime },t)v(\cdot ,t)(x)\geqq \exp
\left(-A(t^{\prime })\right)U^{D}(t^{\prime },0)\varphi (x)
\end{equation*}
\begin{equation}
+C\int_{0}^{t}\exp (\beta N_{r})\left( \exp \left(-A(t^{\prime
},r)\right)U^{D}(t^{\prime },r)v(\cdot ,r)(x)\right) ^{1+\beta }dr. \label{psy}
\end{equation}
Let $\psi (t)$ be the whole right-hand side of the last inequality, i.e., the sum of the first term and the integral term in \eqref{psy}; note that $\psi (0)=\exp (-A(t^{\prime }))U^{D}(t^{\prime },0)\varphi (x)>0$. Then, from the above inequality,
$$
\psi ^{\prime }(t) =C\exp (\beta N_{t})\left( \exp (-A(t^{\prime
},t))U^{D}(t^{\prime },t)v(\cdot ,t)(x)\right) ^{1+\beta }
\geqq C\exp (\beta N_{t})(\psi (t))^{1+\beta }.
$$
Let now $\Psi (t):=\int_{t}^{\infty }dz/z^{1+\beta }=\frac{1}{\beta }
t^{-\beta },$ $t>0.$ Then
\begin{equation*}
\frac{d}{dt}\Psi (\psi (t))=-\frac{\psi ^{\prime }(t)}{(\psi (t))^{1+\beta }}
\leqq -C\exp (\beta N_{t}).
\end{equation*}
Hence
$$
C\int_{0}^{t^{\prime }}\exp (\beta N_{s})\,ds \leqq \Psi (\psi (0))-\Psi
(\psi (t^{\prime }))
=\int_{\psi (0)}^{\psi (t^{\prime })}dz/z^{1+\beta }<\int_{\exp
(-A(t^{\prime }))U^{D}(t^{\prime },0)\varphi (\cdot )(x)}^{\infty
}dz/z^{1+\beta }
$$
for all $x\in D$ and all $t^{\prime }>0.$ Therefore
$
\beta C\int_{0}^{t^{\prime }}\exp (\beta N_{s})\,ds\leqq \exp (\beta
A(t^{\prime }))\|U^{D}(t^{\prime },0)\varphi \|_{\infty
}^{-\beta }
$
for all $t^{\prime }>0.$ This contradicts \eqref{cond2}.
\end{proof}
{\color{black}
{\noindent\bf Acknowledgement} \ {\color{black}The authors are grateful to two anonymous referees for their valuable comments, which greatly improved our paper.}
The second- and third-named authors acknowledge the hospitality of Institut \'{E}lie Cartan de Lorraine, where part of this
work was done. The research of the second-named author was partially supported by CONACyT (Mexico), Grant No. 652255. The fourth-named author would like to express her gratitude to the entire staff of the IECL for their hospitality and strong support during the completion of her Ph.D. dissertation there.
}
\end{document}
\begin{document}
\draft
\title{Periodic Hamiltonian and Berry's phase in harmonic oscillators}
\author{Dae-Yup Song}
\address{ Department of Physics,\\ Sunchon National University, Sunchon
540-742, Korea}
\date{\today}
\maketitle
\begin{abstract}
For a time-dependent $\tau$-periodic harmonic oscillator whose two linearly
independent homogeneous solutions of the classical equation of motion are
bounded for all time (stable), it is shown that
there is a representation of states which are cyclic
up to multiplicative constants under $\tau$-evolution or $2\tau$-evolution,
depending on the model. The set of the wave functions is complete.
Berry's phase, which could depend on the choice of representation, can be
defined under the $\tau$- or $2\tau$-evolution in this representation.
If a homogeneous solution diverges as time goes to infinity, it is
shown that Berry's phase cannot be defined in any representation considered.
Berry's phase for the driven harmonic oscillator is also considered.
For the cases where Berry's phase can be defined, the phase is given
in terms of solutions of the classical equation of motion.
\end{abstract}
\pacs{03.65.Ca, 03.65.Bz, 03.65.Ge}
\begin{multicols}{2}
\section{Introduction}
Berry \cite{Berry,SW} showed that the cyclic adiabatic change of Hamiltonian
induces, in the phase of wave function, a change which separates into the
obvious dynamical part and an additional geometric part.
Aharonov and Anandan \cite{AA} generalized Berry's result to
nonadiabatic cases, by giving up the assumption of adiabaticity.
The price we pay for this generalization is that quantum
states after one cycle of the Hamiltonian are not necessarily
equivalent to the original states up to a phase, so that Berry's phase can
in general not be defined. A specific example of this failure in (quasi)cyclic
evolution of quantum states is given in a driven harmonic oscillator
system where the particular solution diverges as time goes
to infinity \cite{Moore}.
The (driven) harmonic oscillator system of time-dependent mass and/or
frequency has long been known to be a system whose quantum states
can be given in terms of the solutions of the classical equation of motion
\cite{Lewis,Song1,KL,Ji}.
The time-dependent $\tau$-periodic (driven) harmonic oscillator system
is considered in Refs.\cite{Moore,Ji} based on the
Floquet theory: if the solutions of the classical equation of motion are
stable (finite for all time), it has been shown,
{\it with additional assumptions} \cite{Ji}, that there may exist a positive
integer $N$ such that the wave functions are quasiperiodic under
$N\tau$-evolution. In those cases, the Berry's phases were given in terms of
auxiliary functions related to the solutions of the classical equation of motion.
The driven harmonic oscillator is described by the Hamiltonian:
\begin{equation}
H^F(x,p,t)= {p^2 \over 2 M(t)} +{1\over 2} M(t)w^2(t)x^2-xF(t),
\end{equation}
with positive mass $M(t)$, real frequency $w(t)$ and external force $F(t)$.
For these parameters, we require a periodicity so that
\begin{equation}
H^F(x,p,t+\tau)=H^F(x,p,t).
\end{equation}
The classical equation of motion of the system is given as
\begin{equation}
{d \over {dt}} (M \dot{\bar{x}}) + M(t) w^2(t) \bar{x}=F(t).
\end{equation}
The general solution of Eq.(3) is described by two linearly independent
homogeneous solutions $\{u(t),v(t)\}$ and a particular solution $x_p(t)$.
For convenience, we define $\rho(t)$ and time-constant $\Omega$ as
\begin{eqnarray}
\rho(t)&=&\sqrt{u^2(t) +v^2 (t)},\\
\Omega&=&M(u \dot{v} -v\dot{u}).
\end{eqnarray}
It is known that \cite{Song1,Song2} the wave functions satisfying the
corresponding Schr\"{o}dinger equation are given as
\begin{eqnarray}
\psi_n^F &=&\cr
&& {1\over \sqrt{2^n n!}}({\Omega \over \pi\hbar})^{1\over 4}
{1\over \sqrt{\rho(t)}}[{u(t)-iv(t) \over \rho(t)}]^{n+{1\over 2}}\cr
&&\times \exp[{i\over \hbar}(M\dot{x}_px+\delta(t))]
\cr&&
\times \exp[{(x-x_p)^2\over 2\hbar}(-{\Omega \over \rho^2(t)}
+i M(t){\dot{\rho}(t) \over \rho(t)})]
\cr&&
\times H_n(\sqrt{\Omega \over \hbar} {x -x_p \over \rho(t)}),
\end{eqnarray}
where $\delta(t)$ is defined as
\begin{equation}
\delta(t)=\int_{0}^t[{M(z)w^2(z)\over 2} x_p^2(z)
-{M(z) \over 2}\dot{x}_p^2(z) ] dz.
\end{equation}
A different choice of $\{u,v\}$ gives a different set of wave functions, while
any set with two linearly independent $\{u,v\}$ is complete in the sense that
\begin{eqnarray}
K(x_b,t_b;x_a,t_a)&=&\sum_n \psi_n^F(x_b,t_b){\psi_n^F}^*(x_a,t_a)\cr
&\rightarrow& \delta(x_b-x_a)
~~~{\rm as}~t_b\downarrow t_a.
\end{eqnarray}
From the wave functions in Eq.(6), one can easily see that the wave functions
become (quasi)periodic when $\rho(t)$ and $x_p(t)$ are periodic with a
common period.
In this paper, we will show that, for the oscillator without driving force,
if there exist stable $u(t),v(t)$,
there is a representation of (quasi)periodic wave functions
under $\tau$-evolution or $2\tau$-evolution, depending on the model.
If one of the homogeneous solutions diverges as time goes to infinity,
it will be shown that Berry's phase cannot be defined in any representation
and that the uncertainty in position diverges in this limit.
For the oscillator with driving force, it will be shown that, if
there exist two stable homogeneous solutions and a ${p \over N}\tau$-periodic
particular solution, with two positive integers $p$ and $N$
having no common divisor except 1, then the wave functions are (quasi)periodic
under $N\tau$- or $2N\tau$-evolution, so that Berry's phase can be defined.
For the cases where Berry's phase can be defined, the phase will be given
in terms of solutions of the classical equation of motion.
In the next section, the harmonic oscillators without driving force
will be considered. In section III, the driven harmonic oscillators will
be considered. In section IV, it will be shown that if one of the
homogeneous solutions diverges, Berry's phase cannot be defined
in any of the representations. A summary will be given in the last section.
\section{The Hill's equation and quasiperiodic quantum states}
In this section we will consider the harmonic oscillator without
driving force. The wave functions $\psi_n(x,t)$ for this case are obtained
from $\psi_n^F(x,t)$ in Eq.(6) by letting $x_p=\delta=0$.
\subsection{The Hill's equation}
We start with the case of unit mass. For this case, the classical equation
of motion of Eq.(3) reduces to the Hill's equation:
\begin{equation}
\ddot{\bar{x}} + w_0^2(t) \bar{x} =0,
\end{equation}
which has long been studied in mathematics \cite{Magnus}.
The subscript 0 will be used to denote that the quantity
is defined for the system of unit mass, and thus the two linearly independent
solutions of Eq.(9) will be denoted as $\{u_0,v_0\}$.
In considering Eq.(9) as an equation of motion of a classical system,
one may naively expect that the force varying periodically with
time acts on the (unit) mass in such a manner that it tends to move the
mass back to the position of equilibrium ($\bar{x}=0$) in proportion to
the displacement of the mass, so that the mass is confined to a neighborhood of
$\bar{x}=0$. The analysis of Hill's equation shows, however,
that this expectation need {\it not} always be the case.
In fact, an increase of the restoring force may cause the mass to oscillate
with wider and wider amplitude, as can be seen from Liapounoff's theorem
\cite{Lia,Magnus}.
Detailed information on whether Hill's equation has an unstable
solution can be obtained from Floquet's theorem \cite{Magnus}.
If the constant $\alpha$, called the {\em characteristic exponent},
is not one of the $m\pi/\tau$
$(m=0,\pm 1,\pm 2,\cdots)$, the theorem states that the two linearly
independent solutions of Hill's equation can be written as
\begin{equation}
\bar{x}_1(t)= e^{i\alpha t} p_1(t),~~~~\bar{x}_2(t)= e^{-i\alpha t} p_2(t)
\end{equation}
where $p_1(t)$, $p_2(t)$ are periodic functions with period $\tau$.
In the case that $\alpha$ is one of the $m\pi/\tau$, the theorem states that
both of the two linearly independent solutions of Hill's equation are
stable if and only if both of them can be written as periodic functions
of period $\tau$ or $2\tau$, depending on $w^2_0(t)$.
If the two linearly independent solutions are written as in
Eq.(10) and neither of them is $\tau$-periodic nor $2\tau$-periodic,
they are stable if $\alpha$ is real \cite{Magnus}. In this stable case,
by linearly combining $\bar{x}_1(t)$ and $\bar{x}_2^*(t)$ with complex
coefficients and taking the real and imaginary parts of the new solution
as another set of solutions, one can always find two linearly
independent real solutions $u_0(t),v_0(t)$ of Hill's equation as
\begin{equation}
u_0(t)=Ap(t)\cos(\alpha t + \tilde{\theta} (t)),~
v_0(t)=Bp(t)\sin(\alpha t + \tilde{\theta} (t)),
\end{equation}
with $\tau$-periodic real functions
$p(t),\tilde{\theta}(t)$ and nonzero constants
$A,B,\Omega_0~ (\equiv ABp^2(\alpha+\dot{\tilde{\theta}}))$.
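Stability of Hill's equation, and the characteristic exponent $\alpha$, can be checked numerically from the monodromy matrix over one period: the solutions are stable when $|\mathrm{tr}\,M|<2$, in which case $\cos(\alpha\tau)=\mathrm{tr}\,M/2$. The following sketch (the coefficient function and all parameter values are illustrative choices of ours, not taken from the text) integrates Eq.(9) with a fourth-order Runge--Kutta scheme:

```python
import math

def monodromy(w0_sq, tau, n_steps=4000):
    """Monodromy matrix of x'' + w0_sq(t) x = 0 over one period tau (RK4)."""
    def rhs(t, y):
        x, v = y
        return (v, -w0_sq(t) * x)
    def integrate(y):
        h = tau / n_steps
        t = 0.0
        for _ in range(n_steps):
            k1 = rhs(t, y)
            k2 = rhs(t + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
            k3 = rhs(t + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
            k4 = rhs(t + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
            y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
            t += h
        return y
    c = integrate((1.0, 0.0))  # solution with x(0) = 1, x'(0) = 0
    s = integrate((0.0, 1.0))  # solution with x(0) = 0, x'(0) = 1
    return ((c[0], s[0]), (c[1], s[1]))

tau = 2 * math.pi
M = monodromy(lambda t: 0.9 + 0.1 * math.cos(t), tau)
trace = M[0][0] + M[1][1]
print(abs(trace) < 2)  # True: this w0^2(t) lies in a stable region
alpha = math.acos(trace / 2) / tau  # characteristic exponent (mod pi/tau)
```

The chosen $w_0^2(t)=0.9+0.1\cos t$ sits between the resonance tongues of the corresponding Mathieu equation, so both solutions are stable.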
\subsection{Quasiperiodic quantum states}
It is known that \cite{Magnus,Song2}, if
\begin{equation}
w_0^2(t)=w^2 -{1\over \sqrt{M}}{d^2\sqrt{M}\over dt^2},
\end{equation}
$\{u(t),v(t)\}$ is given from $\{u_0(t),v_0(t)\}$ as
\begin{equation}
u(t)={u_0(t) \over \sqrt{M}},~v(t)={v_0(t) \over \sqrt{M}}.
\end{equation}
If $\alpha$ is one of the $m\pi/\tau$ $(m=0,\pm 1,\pm 2,\cdots)$ and the
two linearly independent solutions are stable, then $\{u,v\}$ are $\tau$-
or $2\tau$-periodic. In order to find the overall phase change, we need to
consider the complex $z$-plane of $z=u+iv$. In the plane, the trajectory
of $z(t)$ makes a closed curve $C$, since $z(t+\tau')=z(t)$ where $\tau'$
is $\tau$ or $2\tau$ depending on the periodicity of $\{u,v\}$. Making use
of the residue theorem, the number of times the curve winds around the origin
can be shown to equal ${1\over 2\pi i}\oint_C {dz \over z}$. From these
considerations one finds that
\begin{equation}
\psi_n(x,t+\tau')
=e^{-i(n+{1\over 2})\int_0^{\tau'}{\Omega \over M\rho^2}dt}\psi_n(x,t).
\end{equation}
Therefore the overall phase change can be written as
\begin{equation}
\chi_n=-(n+{1\over 2})\int_0^{\tau'}{\Omega \over M\rho^2}dt.
\end{equation}
If $\alpha$ is not one of the $m\pi/\tau$ $(m=0,\pm 1,\pm 2,\cdots)$
and is real, by taking $A=B$, one can find the
quasiperiodic wave functions under $\tau$-evolution.
If $A=B$, the wave functions do not depend on the magnitude of $A$ (or $B$).
In the representation of $A=B$, the overall phase change under the $\tau$-evolution
is written as
\begin{equation}
\chi_n= -(n+{1\over 2})\alpha \tau
~~(=-(n+{1\over 2})\int_0^{\tau}{\Omega \over M\rho^2}dt).
\end{equation}
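The equality in Eq.(16) can be verified directly: with unit mass and $u_0=Ap\cos(\alpha t+\tilde{\theta})$, $v_0=Ap\sin(\alpha t+\tilde{\theta})$ from Eq.(11) with $A=B$ and $p=1$, one has $\Omega/(M\rho^2)=\alpha+\dot{\tilde{\theta}}(t)$, whose integral over one period is $\alpha\tau$ since $\tilde{\theta}$ is $\tau$-periodic. A numerical sketch (the $\tau$-periodic $\tilde{\theta}$ below is an arbitrary illustrative choice of ours):

```python
import math

alpha, tau = 0.37, 2 * math.pi
# derivative of a tau-periodic tilde-theta(t) = 0.3 sin(2 pi t / tau)
theta_dot = lambda t: 0.3 * (2 * math.pi / tau) * math.cos(2 * math.pi * t / tau)
N = 10000
# midpoint rule for the integral of Omega/(M rho^2) = alpha + theta_dot(t)
integral = sum((alpha + theta_dot((k + 0.5) * tau / N)) * tau / N
               for k in range(N))
print(math.isclose(integral, alpha * tau, rel_tol=1e-9))  # True
```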
\subsection{Berry's phase}
The dynamical phase change $\delta_n$ during $\tau'$-evolution is given as
\begin{eqnarray}
&\delta_n(\tau')&\cr
&=&-{1\over \hbar}\int_0^{\tau'}\int_{-\infty}^\infty
\psi_n^*(x,t) H(x,t) \psi_n(x,t) dx dt \cr
&=&-(n+{1\over 2})\int_0^{\tau'}
[ {\Omega \over 2M(t)\rho^2(t)}
(1+{M^2(t) \over \Omega^2}\rho^2(t)\dot{\rho}^2 )\cr
&&~~~~~~~~ +{\rho^2(t) \over 2\Omega}M(t)w^2(t)] dt.
\end{eqnarray}
Berry's phase $\gamma_n$ is obtained from the overall phase change by
subtracting the dynamical phase:
\begin{equation}
\gamma_n=\chi_n - \delta_n.
\end{equation}
If $\alpha$ is one of $m\pi/\tau$ $(m=0,\pm 1,\pm 2,\cdots)$ and both of the
linearly independent classical solutions are stable, the wave functions are
$\tau$- or $2\tau$-periodic. In this case, the Berry's phase is given as
\begin{equation}
\gamma_n={1 \over 2} (n+{1\over 2})\int_0^{\tau'}
({M \dot{\rho}^2 \over \Omega} - {\Omega\over M \rho^2}
+{Mw^2 \over \Omega}\rho^2) dt,
\end{equation}
where $\tau'$ can be $\tau$ or $2\tau$ depending on the model.
If $\alpha$ is not one of $m\pi/\tau$ $(m=0,\pm 1,\pm 2,\cdots)$ and is real,
the Berry's phase in the representation of $A=B$ is written as
\begin{eqnarray}
\gamma_n &=&-{1\over 2}(n+{1\over 2})\alpha\tau \cr
&&~~~~+{1\over 2}(n+{1\over 2})\int_0^\tau [{w^2(t)p^2(t) \over \tilde{\Omega}}
+{M(t) \over \tilde{\Omega}} \dot{q}^2]dt,
\end{eqnarray}
where
\begin{equation}
\tilde{\Omega}={\Omega_0 \over A^2}~~{\rm and}~~q(t)={p(t)\over \sqrt{M(t)}}.
\end{equation}
For the $n=0$ cases, these results exactly agree with those of Ref.\cite{GC}.
A point that should be mentioned is that every phase is
defined up to an additive constant $2\pi$.
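As a sanity check of Eq.(19) (our own check, not part of the original text): for constant $M$ and $w$ one may take $u=\cos wt$, $v=\sin wt$, so that $\rho=1$, $\dot{\rho}=0$ and $\Omega=Mw$; the integrand of Eq.(19) is then $-w+w=0$, and Berry's phase vanishes, as it must for a time-independent Hamiltonian:

```python
import math

M, w, n = 1.3, 0.7, 0          # illustrative constant mass and frequency
tau = 2 * math.pi / w
N = 1000
gamma_integral = 0.0
for k in range(N):
    t = (k + 0.5) * tau / N
    u, v = math.cos(w * t), math.sin(w * t)
    rho = math.hypot(u, v)                                   # = 1
    rho_dot = (-u * w * math.sin(w * t) + v * w * math.cos(w * t)) / rho  # = 0
    Omega = M * w                                            # M (u v' - v u')
    integrand = (M * rho_dot**2 / Omega - Omega / (M * rho**2)
                 + M * w**2 * rho**2 / Omega)                # integrand of Eq.(19)
    gamma_integral += integrand * tau / N
gamma_n = 0.5 * (n + 0.5) * gamma_integral
print(abs(gamma_n) < 1e-9)  # True: Berry's phase vanishes for constant M, w
```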
\section{Driven harmonic oscillator}
In this section, we will consider the driven harmonic oscillator.
Due to the lack of a general understanding of the periodicity of the particular
solution, our attention will be limited to the cases in which one can
construct a particular solution periodic with period $r\tau$, where
$r$ is written as $p/N$ with two positive integers $p$ and $N$
having no common divisor except 1. We will also restrict our attention to the
cases with two linearly independent stable homogeneous solutions.
For a $\tau'$-evolution, the dynamical phase is written as
\begin{eqnarray}
&\delta_n^F(\tau')& = -{1\over \hbar}\int_0^{\tau'} <n|H^F| n> dt \cr
&=& -{1 \over 2} (n+{1\over 2})\int_0^{\tau'}
({M \dot{\rho}^2 \over \Omega} + {\Omega\over M \rho^2}
+{Mw^2 \over \Omega}\rho^2) dt \cr
&& -{1\over \hbar}\int_0^{\tau'}
({3 \over 2} M\dot{x}_p^2 - {1 \over 2} Mw^2x_p^2) dt.
\end{eqnarray}
In the case that the two linearly independent homogeneous solutions are
periodic under $\tau$- or $2\tau$-evolution, by defining $\tau'$ as
$N\tau$ or $2N\tau$, depending on the periodicity of the
classical solutions, one finds the relation
\begin{equation}
\psi_n^F(t+\tau')= e^{-i(n+{1\over 2})\int_0^{\tau'}{\Omega \over M\rho^2}dt
+i{\delta(\tau')\over \hbar}}\psi_n^F(t),
\end{equation}
which gives the overall phase change:
\begin{equation}
\chi_n^F=-(n+{1\over 2})\int_0^{\tau'}{\Omega \over M\rho^2}dt
+{\delta(\tau')\over\hbar}.
\end{equation}
Berry's phase is thus given as
\begin{eqnarray}
\gamma_n^F(\tau') &=& {1 \over 2} (n+{1\over 2})\int_0^{\tau'}
({M \dot{\rho}^2 \over \Omega} - {\Omega\over M \rho^2}
+{Mw^2 \over \Omega}\rho^2)dt \cr
&& + {1\over \hbar}\int_0^{\tau'} M\dot{x}_p^2 dt.
\end{eqnarray}
In the case that two linearly independent homogeneous stable solutions
are not periodic under $\tau$- or $2\tau$-evolution, by letting $A=B$
we can find quasiperiodic wave functions under the $N\tau$-evolution with
the overall phase change:
\begin{equation}
\chi_n^F= -(n+{1\over 2}) \alpha N\tau
+{1\over \hbar}\delta(N\tau).
\end{equation}
In this representation, Berry's phase is given as
\begin{eqnarray}
\gamma_n^F &=&
-(n+{1\over 2}) \alpha N\tau \cr
&& +{1 \over 2}(n+{1\over 2})N
\int_0^{\tau}[{w^2 \over \tilde{\Omega}}p^2
+{M\dot{q}^2 \over\tilde{\Omega}}]dt \cr
&& +{1\over \hbar}\int_0^{N\tau} M\dot{x}_p^2 dt
\end{eqnarray}
The last terms on the right-hand sides of Eqs.(25,27) are of order
$1/\hbar$; if they vanish, then $\gamma_n^F=N\gamma_n$.
\section{ Unstable classical solutions }
If the wave function is quasiperiodic, then $<n|x|n>$ and
$<n|(\Delta x)^2|n>$ $(\equiv <n|x^2|n> - <n|x|n>^2)$ must be periodic.
For the driven case, if $x_p$ diverges, then, since $<n|x|n>=x_p$,
the wave functions cannot be quasiperiodic.
For the driven or undriven case, $(\Delta x)^2$ is given as
\begin{equation}
(\Delta x)^2 = (n + {1\over 2}) {\hbar \rho^2 \over \Omega}.
\end{equation}
If a homogeneous solution is unstable, then any set of two linearly
independent homogeneous solutions must contain an unstable solution.
If one of the homogeneous solutions is unstable, Eqs.(4,28) show
that $<n|(\Delta x)^2|n>$ cannot be periodic and thus the wave functions
cannot be quasiperiodic.
For the quasiperiodic wave functions, $<n|(\Delta p)^2|n>$
which is given as
\begin{equation}
<n|(\Delta p)^2|n>= (n + {1\over 2})\hbar
[{\Omega\over \rho^2} + {M^2\dot{\rho}^2 \over \Omega}]
\end{equation}
should also be periodic. In the case that the imaginary part of $\alpha$
is nonzero, one can easily show that $<n|(\Delta p)^2|n>$ diverges
in the limit $t\rightarrow \pm \infty$. In the case of an unstable solution
with $\alpha$ being one of the $m\pi/\tau$ $(m=0,\pm 1,\pm 2,\cdots)$,
$<n|(\Delta p)^2|n>$ remains finite in the limit, but does not converge
to 0 in general.
\section{Conclusions}
We have considered the (driven) harmonic oscillator system.
For the $\tau$-periodic oscillator without driving force having two linearly
independent stable homogeneous solutions,
one can always construct a set of wave functions which are quasiperiodic
under $\tau$- or $2\tau$-evolution.
The set of wave functions is complete.
If one of the homogeneous solutions is unstable, or the particular
solution of the driven system diverges, we prove that the wave functions
can {\it not} be quasiperiodic.
For the driven case, we illustrate the possible existence of
quasiperiodic wave functions under $N\tau$- or $2N\tau$-evolution, which
depends mainly on the behavior of the particular solution.
If Berry's phase can be defined for a wave function of the driven harmonic
oscillator system, it must either be an integral multiple of Berry's phase for
the corresponding wave function without driving force or contain a term
of $O(1/\hbar)$.
Recently, Berry's phase of the simple harmonic oscillator was studied in
\cite{PS}, where it was shown that the model provides explicit
examples of the various cases considered here.
\acknowledgments
It is the author's pleasure to acknowledge helpful discussions with Prof.
J.H. Park on mathematical aspects of the subject. This work is supported
in part by Non-Directed Research Fund, Sunchon National University.
\begin{references}
\bibitem{Berry} M.V. Berry, Proc. R. Soc. London Ser. A {\bf 392}, 45 (1984).
\bibitem{SW} A. Shapere and F. Wilczek (eds.), {\it Geometric Phases in
Physics} (World Scientific, Singapore, 1989).
\bibitem{AA} Y. Aharonov and J. Anandan, Phys. Rev. Lett. {\bf 58}, 1593 (1987).
\bibitem{Moore} D.J. Moore, J. Phys. A: Math. Gen. {\bf 23}, 5523 (1990);
Phys. Rep. {\bf 210}, 1 (1991); M.-H. Lee, H-C. Kim and J.-Y. Ji,
J. Korean Phys. Soc. {\bf 31}, 560 (1997).
\bibitem{Lewis} H.R. Lewis, J. Math. Phys. {\bf 9}, 1976 (1968);
Phys. Rev. Lett. {\bf 18}, 510 (1967).
\bibitem{Song1}D.-Y. Song, Phys. Rev. A {\bf 59}, 2616 (1999).
\bibitem{KL} D.C. Khandekar and S.V. Lawande, J. Math. Phys. {\bf 16}, 384
(1975); K.H. Yeon, K.K. Lee, C.I. Um, T.F. George and L.N. Pandey,
Phys. Rev. A {\bf 48}, 2716 (1993).
\bibitem{Ji} J.-Y. Ji, J.K. Kim, S.P. Kim, and K.-S. Soh, Phys. Rev. A
{\bf 52}, 3352 (1995).
\bibitem{Song2}D.-Y. Song, J. Phys. A: Math. Gen. {\bf 32}, 3449 (1999).
\bibitem{GC} Y.C. Ge and M.S. Child, Phys. Rev. Lett. {\bf 78} 2507 (1997).
\bibitem{Magnus} W. Magnus and S. Winkler, {\it Hill's equation}
(John Wiley \& Sons, New York, 1966).
\bibitem{Lia} A. Liapounoff, Ann. Fac. Sci. Univ. Toulouse (2) {\bf 9}, 203;
See also, E.A. Coddington and R. Carlson, {\it Linear Ordinary
Differential Equations} ( Siam, Philadelphia, 1997) p117.
\bibitem{PS} J.H. Park and D.-Y. Song, quant-ph/9908005.
\end{references}
\end{multicols}
\end{document}
\begin{document}
\title{On Free Knots and Links}
\abstract{Both classical and virtual knots arise as formal Gauss
diagrams modulo some abstract moves corresponding to Reidemeister
moves. If we forget both the over/under crossing structure and the
writhe numbers of knots modulo the same Reidemeister moves, we get a
dramatic simplification of virtual knots, which kills all classical
knots. However, many virtual knots survive this
simplification.
We construct invariants of these objects and present their
applications to minimality problems of virtual knots as well as some
questions related to graph-links.
One can easily generalize these results for the orientable case and
apply them for solving non-invertibility problems.
The main idea behind these invariants is a geometrical
construction which reduces the general equivalence to the
equivalence modulo the second Reidemeister move only.
This paper is a sequel of the paper \cite{FreeKnots}. }
\section{Introduction}
In \cite{FreeKnots}, for a certain class of objects (a drastic
simplification of virtual knots) we proved a theorem stating that
equivalence questions for some diagrams can often be reduced to the
question of a very simple equivalence (using only Reidemeister 2
moves). To do that, we distinguished between two types of
crossings, the ``odd'' ones and the ``even'' ones, and created a
diagram-valued invariant of our objects (free knots). For some
diagrams, which are ``irreducibly odd'', the value of the invariant
is the diagram itself, which then needs to be considered only
up to the second Reidemeister move.
This has a lot of corollaries (already described here or still to
come in forthcoming papers): for flat virtual knots (for virtual
knots and their generalizations see \cite{KaV}), their
non-triviality, non-equivalence, non-invertibility etc.
However, the main nerve of the construction was the notion of odd
crossing. Roughly speaking, we are taking a Gauss diagram and
forgetting all over/under information and all writhe numbers modulo
formal Reidemeister moves. Odd crossings are precisely those
corresponding to {\em odd chords of the Gauss diagram}, i.e., those
chords which intersect an odd number of other chords.
The notion of odd chord (odd crossing) is closely connected to the
notion of a non-orientable atom: starting from such a four-valent
(framed) graph, we may construct a checkerboard surface (see ahead)
which can be either orientable or non-orientable. The presence of
odd chords precisely means non-orientability of the atoms.
So, the examples we have constructed and all non-trivial results we
have proved in \cite{FreeKnots} deal with only non-orientable atoms
and (virtual) knots corresponding to them.
By definition, the invariant constructed in \cite{FreeKnots}
``smoothes'' all even crossings, and for a diagram with orientable
atom we get the same value of the invariant as that of the unknot.
So, the simple non-triviality, non-invertibility, non-equivalence,
and minimality results hold for a class of objects with
non-orientable atoms. In particular, it has nothing to do with
classical knots.
The aim of the present paper is to find another ``oddness''
condition for the crossings, allowing one to use techniques similar
to those introduced in \cite{FreeKnots}.
We give new non-trivial examples.
\section{Basic Notions}
By a {\em $4$-graph} we mean a topological space consisting of
finitely many components, each of which is either a circle or a
finite graph with all vertices having valency four.
A $4$-graph is {\em framed} if for each vertex of it, the four
emanating half-edges are split into two sets of edges called {\em
(formally) opposite}.
A {\em unicursal component} of a $4$-graph is either a free loop
component of it or an equivalence class of edges where two edges
$a$,$b$ are called equivalent if there is a sequence of edges
$a=a_{0},\dots, a_{n}=b$ and vertices $v_{1},\dots, v_{n}$ so that
$a_{i}$ and $a_{i+1}$ are opposite at $v_{i+1}$.
As an example of a framed $4$-graph one may take the graph of a singular
link.
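The unicursal components of a framed $4$-graph can be computed mechanically: walking straight through a vertex identifies the two edges meeting it at opposite half-edges, and the components are the resulting equivalence classes of edges. A sketch in Python (the encoding of vertices and half-edges below is our own convention, not taken from the text):

```python
# A vertex v carries half-edges (v, 0)..(v, 3); the framing pairs
# (v, 0)-(v, 2) and (v, 1)-(v, 3) as opposite.  Edges are pairs of half-edges.

def unicursal_components(vertices, edges):
    """Count unicursal components via union-find over edges."""
    parent = list(range(len(edges)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    # map each half-edge to the edge containing it
    edge_of = {}
    for idx, (h1, h2) in enumerate(edges):
        edge_of[h1] = idx
        edge_of[h2] = idx
    # passing straight through a vertex joins the edges at opposite half-edges
    for v in vertices:
        union(edge_of[(v, 0)], edge_of[(v, 2)])
        union(edge_of[(v, 1)], edge_of[(v, 3)])
    return len({find(i) for i in range(len(edges))})

# figure-eight curve: one vertex, both edges turn at the vertex -> 1 component
print(unicursal_components([0], [((0, 0), (0, 1)), ((0, 2), (0, 3))]))  # 1
# two loops, each through a pair of opposite half-edges -> 2 components
print(unicursal_components([0], [((0, 0), (0, 2)), ((0, 1), (0, 3))]))  # 2
```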
By a {\em free link} we mean an equivalence class of framed
$4$-valent graphs modulo the following transformations. For each
transformation we assume that only one fixed fragment of the graph
is being operated on (this fragment is to be depicted) or some
corresponding fragments of the chord diagram. The remaining part of
the graph or chord diagram are not shown in the picture; the pieces
of the chord diagram not containing chords participating in this
transformation, are depicted by punctured arcs. The parts of the
graph are always shown in a way such that the formal framing
(opposite edge relation) in each vertex coincides with the natural
opposite edge relation taken from ${\bf R}^{2}$.
The first Reidemeister move is an addition/removal of a loop, see
Fig.\ref{1r}
\begin{figure}
\caption{Addition/removal of a loop on a graph and on a chord
diagram}
\label{1r}
\end{figure}
The second Reidemeister move adds/removes a bigon formed by a pair
of edges which are adjacent at two vertices, see Fig. \ref{2r}.
\begin{figure}
\caption{The second
Reidemeister move and two chord diagram versions of it}
\label{2r}
\end{figure}
Note that the second Reidemeister move adding two vertices does not
impose any conditions on the edges it is applied to: we may take any
two edges of the graph and connect them together as shown in Fig.
\ref{2r} to get two new crossings.
The third Reidemeister move is shown in Fig.\ref{3r}.
\begin{figure}
\caption{The third
Reidemeister move and its chord diagram versions}
\label{3r}
\end{figure}
Note that each of these three moves, applied to a framed graph,
preserves the number of unicursal components of the graph. Thus,
applying these moves to graphs with a unique unicursal cycle, we again
get graphs with a unique unicursal cycle.
A {\em free knot} is a free link with one unicursal component
(obviously, the number of unicursal component of a framed $4$-graph
is preserved under Reidemeister moves).
Free links are closely connected to {\em flat virtual knots},
i.\,e., equivalence classes of virtual knots modulo the
transformation changing the over/undercrossing structure. The latter are
equivalence classes of immersed curves in orientable $2$-surfaces
modulo homotopy and stabilization.
\subsection{Smoothings}
Here we introduce the notion of smoothing, which we shall often use in the
sequel.
Let $G$ be a framed graph, let $v$ be a vertex of $G$ with four
incident half-edges $a,b,c,d$, s.t. $a$ is opposite to $c$ and $b$
is opposite to $d$ at $v$.
By a {\em smoothing} of $G$ at $v$ we mean either of the two framed
$4$-graphs obtained by removing $v$ and repasting the edges as
$(a,b)$, $(c,d)$ or as $(a,d)$, $(b,c)$, see Fig. \ref{smooth}.
\begin{figure}
\caption{Two
smoothings of a vertex of a framed graph}
\label{smooth}
\end{figure}
Herewith, the rest of the graph (together with all framings at
vertices except $v$) remains unchanged.
We may then consider further smoothings of $G$ at {\em several}
vertices.
\section{Atoms and orientability. \\ The source-sink condition}
Our further strategy is as follows: in many situations, it is easier
to find {\em links} rather than {\em knots} with desired
non-triviality properties. So, we shall first define a map from free
$1$-component links to ${\bf Z}_{2}$-linear combinations of
$2$-component links, and then we shall study the latter by an
invariant similar to that constructed in \cite{FreeKnots}.
Ideologically, the first map is a simplified version of Turaev's
cobracket \cite{Turaev} which establishes a structure of Lie
coalgera on the set of curves immersed in $2$-surfaces (up to some
equivalence, the Lie {\em algebra} structure was introduced by
Goldman in a similar way).
We shall need Turaev's construction (Turaev's $\Delta$) to get a
$2$-component free link from a $1$-component one.
The second map takes a certain state sum for a $2$-component free
link, where we distinguish between two types of crossings, and
smooth only crossings of the first type. What should these ``two
types'' mean, will be discussed later.
In some sense, the invariant $[\cdot]$ of free knots constructed in
\cite{FreeKnots} is a diagrammatic extension of a drastically
simplified Alexander polynomial (we forget about the variable and
signs, taking ${\bf Z}_{2}$-coefficients). The invariant $\{\cdot\}$
suggested in the present paper is in the same sense an extension of
the drastically simplified Kauffman bracket, but again we use
diagrams as coefficients.
Altogether, these two constructions provide an example of
non-trivial and minimal diagrams of free knots with orientable
atoms.
\begin{dfn}
An {\em atom} (originally introduced by Fomenko, \cite{Fom}) is a
pair $(M,\Gamma)$ consisting of a $2$-manifold $M$ and a graph
$\Gamma$ embedded in $M$ together with a colouring of $M\backslash
\Gamma$ in a checkerboard manner. An atom is called {\em orientable}
if the surface $M$ is orientable. Here $\Gamma$ is called the {\em
frame} of the atom, whence by {\em genus} (atoms and their genera
were also studied by Turaev~\cite{Turg}, and atom genus is also
called the Turaev genus~\cite {Turg}) ({\em Euler characteristic,
orientation}) of the atom we mean that of the surface $M$.
\end{dfn}
Having an atom $V$, one can construct a virtual link diagram out of
it as follows. Take a generic immersion of atom's frame into ${\mathbb R}^2$,
for which the formally opposite structure of edges coincides with
the opposite structure induced from the plane.
Put virtual crossings at the intersection points of images of
different edges and restore classical crossings at images of
vertices `as above'. Obviously, since we disregard virtual
crossings, the most we can expect is well-definedness up to
detours. However, this allows us to get different virtual link types
from the same atom, since for every vertex of the atom with four
emanating half-edges $a,b,c,d$ (ordered cyclically on the atom) we
may get two different clockwise-orderings on the plane of embedding,
$(a,b,c,d)$ and $(a,d,c,b)$. This leads to a move called {\em
virtualisation}.
\begin{dfn}
By a {\em virtualisation} of a classical crossing of a virtual
diagram we mean a local transformation shown in
Fig.~\ref{virtualisation}.
\end{dfn}
\begin{figure}
\caption{Virtualisation}
\label{virtualisation}
\end{figure}
The above statements summarise as
\begin{prop}(see, e.g., {\em \cite{MyBook}}).
Let $L_1$ and $L_2$ be two virtual links obtained from the same atom
by using different immersions of its frame. Then $L_1$ differs from
$L_2$ by a sequence of (detours and) virtualisations.
\end{prop}
At the level of Gauss diagrams, virtualisation is the move that does
not change the writhe numbers of crossings, but inverts the arrow
directions. So, atoms just keep the information about signs of Gauss
diagrams, but not of their arrows.
A further simplification comes when we want to forget about the
signs and pass to flat virtual links (see also \cite{Vstrings}): in
this case we don't want to know which branch forms an overpass at a
classical crossing, and which one forms an underpass. So, the only
thing we should remember is its frame with opposite edge structure
of vertices (the {\em $A$-structure}). Having that, we take any atom
with this frame and restore a virtual knot up to virtualisation and
crossing change.
The $A$-structure of an atom's frame is exactly a $4$-valent framed
graph.
This perfectly agrees with the fact that {\bf free links are virtual
links modulo virtualization and crossing changes.}
Having a framed $4$-graph, one can consider {\em all atoms} which
can be obtained from it by attaching black and white cells to it. In
fact, it turns out that for a given framed $4$-graph either all such
surfaces are orientable or they are all non-orientable.
To see this, one should introduce the {\em source-sink} orientation.
By a {\em source-sink} orientation of a $4$-valent framed graph we
mean an orientation of all edges of this graph in such a way that
for each vertex some two opposite edges are outgoing, whence the
remaining two edges are incoming.
The following statement is left to the reader as an exercise
\begin{ex}
Let $G$ be a $4$-valent framed graph. Then the following conditions
are equivalent:
1. $G$ admits a source-sink orientation
2. At least one atom obtained from $G$ by attaching black and white
cells is orientable.
3. All atoms obtained from $G$ by attaching black and white cells
are orientable.
Moreover, if $G$ has one unicursal component, then each of the above
conditions is equivalent to the following:
Every chord of the corresponding Gauss diagram $C(D)$ is even.
\end{ex}
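The last condition is easy to test directly on a Gauss code: a chord is even when it interleaves an even number of other chords. A short sketch (our own illustration; the codes below are the shadows of the classical trefoil and of the virtual trefoil):

```python
def chord_parities(gauss_code):
    """Parity of each chord: the number of chords it interleaves, mod 2."""
    pos = {}
    for i, c in enumerate(gauss_code):
        pos.setdefault(c, []).append(i)
    def linked(a, b):
        # chords interleave when their endpoints alternate around the circle
        (a1, a2), (b1, b2) = sorted(pos[a]), sorted(pos[b])
        return (a1 < b1 < a2 < b2) or (b1 < a1 < b2 < a2)
    return {c: sum(linked(c, d) for d in pos if d != c) % 2 for c in pos}

# classical trefoil shadow: every chord is even (orientable atoms)
print(chord_parities([1, 2, 3, 1, 2, 3]))  # {1: 0, 2: 0, 3: 0}
# virtual trefoil shadow: both chords are odd (non-orientable atoms)
print(chord_parities([1, 2, 1, 2]))        # {1: 1, 2: 1}
```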
We give two examples: for a planar $4$-valent framed graph we
present a source-sink orientation (left picture, Fig. \ref{lfr}),
and for a non-orientable $4$-valent framed graph (right picture,
Fig.\ref{lfr}, the artefact of immersion is depicted by a virtual
crossing) we see that the source-sink orientation taken from the
left crossing leads to a contradiction for the right crossing.
\begin{figure}
\caption{The
source-sink condition}
\label{lfr}
\end{figure}
\subsection{The sets ${{\mathbb Z}G}$ and ${{\mathbb Z}GG}$}
Let ${\mathfrak{G}}$ be the set of all equivalence classes of framed
graphs with one unicursal component modulo second Reidemeister moves.
Consider the linear space ${\mathbb Z}G$. By ${{\mathbb Z}GG}$ we denote the space of
all equivalence classes of framed graphs (with arbitrarily many
components) by the second Reidemeister move with all graphs with
free loops taken to be zero.
Having a framed $4$-graph, one can consider it as an element of
${\mathbb Z}G$ or of ${\mathbb Z}GG$. It is natural to try simplifying it: we call a
graph in ${\mathbb Z}G$ {\em irreducible} if no decreasing second
Reidemeister move can be applied to it. We call a graph in ${\mathbb Z}GG$
{\em irreducible} if it has no free loops and no decreasing second
Reidemeister move can be applied to it.
The following theorem is trivial.
\begin{thm}
Every $4$-valent framed graph $G$ with one unicursal component
considered as an element of ${\mathbb Z}G$ has a unique irreducible
representative, which can be obtained from $G$ by a consecutive
application of decreasing second Reidemeister moves.

Every $4$-valent framed graph $G$ considered as an element of ${\mathbb Z}GG$
is either equal to $0$ or has a unique irreducible representative.

In both cases, the reduction can be performed by monotonically decreasing
the diagram via second Reidemeister moves; if at some point
one obtains a free loop, the diagram is equal to zero.
\end{thm}
This allows one to recognize elements of ${\mathbb Z}G$ and ${\mathbb Z}GG$ easily, which
makes the invariants constructed in the previous subsection
computable.

In particular, the minimality of a framed $4$-graph in ${\mathbb Z}G$ or
${\mathbb Z}GG$ is easily detectable: one should just check all pairs of
vertices and see whether any of them can be cancelled by a second
Reidemeister move (and in ${\mathbb Z}GG$ one should also look for free loops).
Denote by ${\mathbb Z}GG_{k}$ the subspace of ${\mathbb Z}GG$ generated by
$k$-component free links.
\section{The Turaev cobracket}
There is a simple and fertile idea due to Goldman \cite{Goldman} and
Turaev \cite{Turaev} of transforming two-component curves into
one-component curves and vice versa.
Here we simplify Turaev's idea for our purposes and call it
``Turaev's $\Delta$''.
We shall construct a map from ${\mathbb Z}G$ to ${\mathbb Z}GG_{2}$ as follows.
In fact, to define the map $\Delta$, one may require the free knot
to be oriented; however, we can do without this.

Let $G$ be a framed $4$-graph. We shall construct an element
$\Delta(G)$ of ${\mathbb Z}GG_{2}$ as follows. For each crossing $c$ of
$G$, there are two ways of smoothing it. One smoothing gives a knot, and
the other gives a $2$-component link $G_c$. We take the
one giving a $2$-component link and write
\begin{equation}
\Delta(G)=\sum_{c}G_{c}\in {\mathbb Z}GG_{2}
\end{equation}
\begin{thm}
$\Delta$ is a well-defined mapping from ${\mathbb Z}G$ to ${\mathbb Z}GG_{2}$.
\end{thm}
The proof is standard and follows Turaev's original idea. One should
consider the three Reidemeister moves. The first move adds a new
summand which has a free loop (the latter is assumed to be trivial in
${\mathbb Z}GG_{2}$); for the second Reidemeister move we get two new
identical summands, which cancel each other because we are dealing
with ${\bf Z}_{2}$ coefficients. For the third Reidemeister move,
the LHS and the RHS lead to summands identical up to second
Reidemeister moves.
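Turaev's $\Delta$ is easy to make concrete on Gauss codes. The sketch below is our illustration (coefficients and the second-Reidemeister reduction in ${\mathbb Z}GG_{2}$ are not modelled): for each chord it performs the smoothing that splits the one-component code into two arcs, which are exactly the Gauss codes of the two resulting components.

```python
def delta(gauss_code):
    """For each chord, perform the smoothing that splits the one-component
    diagram into a 2-component one; return the list of summands, each a
    pair of Gauss-code arcs.  A chord with one endpoint in each arc is a
    crossing formed by both components of the resulting link."""
    pos = {}
    for k, label in enumerate(gauss_code):
        pos.setdefault(label, []).append(k)
    summands = []
    for label, (i, j) in pos.items():
        comp1 = tuple(gauss_code[i + 1:j])                   # arc between the two passages
        comp2 = tuple(gauss_code[j + 1:] + gauss_code[:i])   # complementary arc
        summands.append((comp1, comp2))
    return summands
```

For example, for the code $1\,2\,1\,2$ each of the two summands is a two-component diagram with a single crossing formed by both components.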
{\em We call the conditions described above {\bf the parity
conditions.}}
\section{Two Types of Crossings: \\ Reducing all Reidemeister Moves to
the Second Reidemeister move}
Assume we have a certain class of knot-like objects which are
equivalence classes of {\bf diagrams} modulo the {\em three Reidemeister
moves}. Assume that for this class of diagrams (e.g., $4$-valent framed
graphs) there is a fixed rule distinguishing between two types of
crossings (called even and odd) such that:
1) Each crossing taking part in the first Reidemeister move is even,
and after adding/deleting this crossing the parity of the remaining
crossings remains the same.
2) Each two crossings taking part in the second Reidemeister move
are either both odd or both even, and after performing these moves,
the parity of the remaining crossings remains the same.
3) For the third Reidemeister move, the parities of the crossings
which do not take part in the move remain the same.
Moreover, the parities of the three pairs of crossings are the same
in the following sense: there is a natural one-to-one correspondence
between pairs of crossings $A-A',B-B',C-C'$ taking part in the third
Reidemeister move, see Fig. \ref{abc}.
\begin{figure}
\caption{The pairs of crossings $A-A'$, $B-B'$, $C-C'$ in the third
Reidemeister move}
\label{abc}
\end{figure}
We require that the {\em parity} of $A$ coincides with that of $A'$,
the {\em parity} of $B$ coincides with that of $B'$ and the parity
of $C$ coincides with that of $C'$.
We also require that the number of odd crossings among the three
crossings in question ($A,B,C$) is even (that is, is equal to $2$ or
$0$).
Given such objects with a prescribed rule satisfying the above
properties, one can define invariant polynomials of our knot-like
objects (more precisely, this leads to an invariant {\em
mapping} from knot-like objects to a drastically simpler class of
objects).
In particular, this will lead us to one invariant of graph-knots and
one invariant of graph-links. The first invariant is introduced in
\cite{FreeKnots}.
It counts all {\em rotating circuits}, i.e., circuits passing along every
edge of the graph exactly once and switching from each edge to a
non-opposite edge. Let us be more specific.
If one counted these circuits for a classical knot with appropriate
signs and weights (powers of $t$), one would get the Alexander
polynomial.
Here, we introduce two new ingredients: instead of circuits rotating at
all crossings, we let circuits rotate only at {\em even} crossings and
leave the odd crossings intact. Besides, instead of signs and
powers of $t$, we use framed graphs modulo relations as
coefficients.
The scheme for the Kauffman bracket polynomial is the same, with the
only difference that we count {\em all states} with weights being
either polynomials or chord diagrams: by a state we mean a way of
smoothing all vertices (or, respectively, all even vertices). The only
difference between a smoothing and a rotating circuit is that for a
smoothing the number of resulting circles may be arbitrary.
\subsection{The ``Alexander-like'' bracket}
Let us be more specific. We concentrate on free knots and call a
vertex of a $4$-valent graph corresponding to a free knot {\em odd}
if and only if the corresponding chord of the chord diagram
corresponding to the framed graph is odd.
It is left for the reader as an exercise to check the {\bf parity
conditions}. We shall construct a map from free knots to ${\mathbb Z}G$.
Consider the following sum
\begin{equation}
[G]=\sum_{s\;\mathrm{even},\;1\;\mathrm{comp}} G_{s},
\end{equation}
which is taken over all smoothings in all {\em even} vertices, and
only those summands are taken into account where $G_{s}$ has one
unicursal component.
Thus, if $G$ has $k$ even vertices, then $[G]$ will consist of at
most $2^{k}$ summands, and if all vertices of $G$ are odd, then we
shall have exactly one summand, the graph $G$ itself.
The ``Alexander-like'' {\em bracket} (to be denoted by $[\cdot ]$)
is thus defined as
\begin{equation}
[G]=\sum_{s\;\mathrm{even},\;1\;\mathrm{comp}} G_{s},
\end{equation}
where $G_{s}$ is considered as an element in ${\mathbb Z}G$.
\begin{thm}(\cite{FreeKnots})
The mapping $G\mapsto [G]$ is well defined, i.\,e., $[G]$ does not
depend on the representative of the free knot corresponding to $G$.
\label{mainthm}
\end{thm}
This theorem is proved in \cite{FreeKnots}. The idea behind the
proof relies on the comparison of diagrams obtained from each other
by Reidemeister moves with parity conditions taken into account.
We call a four-valent framed graph having one unicursal component
{\em odd} if all vertices of this graph are odd. We call an odd
graph {\em irreducibly odd} if for every two distinct vertices $a,b$
there exists a vertex $c\notin\{a,b\}$ such that $\langle
a,c\rangle\neq \langle b,c\rangle$.
Theorem \ref{mainthm} yields the following
\begin{crl}
Let $G$ be an irreducibly odd framed $4$-graph with one unicursal
component. Then any representative $G'$ of the free knot
$K_{G}$ generated by $G$ has a smoothing $\tilde G$ with the
same number of vertices as $G$. In particular, $G$ is a minimal
representative of the free knot $K_{G}$ with respect to the number
of vertices.\label{sld}
\end{crl}
The simplest example of an irreducibly odd graph is depicted in Fig.
\ref{irred}.
\begin{figure}
\caption{An
irreducibly odd graph and its chord diagram}
\label{irred}
\end{figure}
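The two conditions in the definition of irreducible oddness can likewise be tested on Gauss codes. The following sketch is our illustration ($\langle a,c\rangle$ is the linking indicator of two chords); the exact code of the diagram in Fig.~\ref{irred} is not reproduced here, so the tests below only exhibit diagrams failing one of the two conditions.

```python
def is_irreducibly_odd(gauss_code):
    """Check the two conditions of the definition: every chord is odd
    (linked with an odd number of other chords), and every two distinct
    chords a, b are distinguished by a third chord c with <a,c> != <b,c>."""
    pos = {}
    for i, label in enumerate(gauss_code):
        pos.setdefault(label, []).append(i)

    def linked(a, b):
        (p, q), (r, s) = sorted(pos[a]), sorted(pos[b])
        return p < r < q < s or r < p < s < q

    labels = list(pos)
    # oddness: every chord is linked with an odd number of other chords
    if any(sum(linked(a, c) for c in labels if c != a) % 2 == 0 for a in labels):
        return False
    # irreducibility: every pair of chords is distinguished by a third chord
    for a in labels:
        for b in labels:
            if a != b and all(linked(a, c) == linked(b, c)
                              for c in labels if c not in (a, b)):
                return False
    return True
```

For the ``complete'' diagram $1\,2\,3\,4\,5\,6\,1\,2\,3\,4\,5\,6$ every chord is odd (linked with five others), but no two chords are distinguished by a third one, so the diagram is odd yet not irreducibly odd.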
\section{The Kauffman-like bracket}
Thus, we have proved that the free knot $K$ whose Gauss diagram is
shown in Fig.~\ref{irred} is minimal: every diagram of this knot has
at least $6$ odd vertices.

The reason is that $[K]$ consists of one diagram representing $K$
itself, and $K$ cannot be simplified in the category ${\mathbb Z}G$.

However, this argument is not applicable to free knots with no odd
crossings. Thus, we give two more examples using ``Kauffman bracket
like'' techniques for other free knots.
Let $K$ be a two-component free link considered as an element of
${\mathbb Z}GG_{2}$. We shall construct a map $\{\cdot\}: K\mapsto \{K\}$
valued in ${\mathbb Z}GG$ as follows.
Take a framed four-valent graph $G$ representing $K$. By definition,
it has two components. Now, a vertex of $G$ is called {\em odd} if
it is formed by two different components, and {\em even} otherwise.
{\bf The parity conditions can be checked straightforwardly}.
Now, we define
\begin{equation}
\{G\}=\sum_{s} G_{s},
\label{kbrck}
\end{equation}
where we take the sum over all smoothings of all even vertices, and
consider the smoothed diagrams $G_{s}$ as elements of ${\mathbb Z}GG$. In
particular, we set all elements $G_{s}$ with free loops to
zero.
\begin{thm}
The bracket $\{K\}$ is an invariant of two-component free links,
that is, for two graphs $G$ and $G'$ representing the same
two-component free link $K$ we have $\{G\}=\{G'\}$ in ${\mathbb Z}GG$.
\label{mainthm2}
\end{thm}
\begin{proof}
The proof is very similar to that of Theorem \ref{mainthm}. Indeed,
we have to consider two diagrams that differ by a Reidemeister move
and show that the corresponding brackets $\{\cdot\}$ are equal in
${\mathbb Z}GG$.
Let us check the invariance of $\{G\}\in {\mathbb Z}GG$ under the three
Reidemeister moves.

Let $G'$ differ from $G$ by a first Reidemeister move, so that
$G'$ has one vertex more than $G$. By definition this vertex is even
(it is formed by one component), so when calculating $\{G'\}$ this
vertex has to be smoothed; one of its two smoothings produces a free
loop, and the corresponding summands vanish in ${\mathbb Z}GG$.

Thus, we have to take only one of the two smoothings of the given
vertex, see Fig.~\ref{razved}.
\begin{figure}
\caption{The two
smoothings --- good and bad --- of a loop}
\label{razved}
\end{figure}
Thus there is a natural bijection between the smoothings of $G$
and the contributing smoothings of $G'$. Moreover, this bijection
yields a termwise identity between $\{G\}$ and $\{G'\}$.
Now, let $G'$ be obtained from $G$ by a second Reidemeister move
adding two vertices.
These two vertices are either both even or both odd (that is, two
branches belong to the same component of the free link or to
different components).
If both added vertices are odd, then the set of smoothings of $G$ is
in one-to-one correspondence with that of $G'$, and the corresponding
summands of $\{G\}$ and of $\{G'\}$ differ from each other by a second
Reidemeister move.

If both vertices are even, then one has to consider the different
smoothings of these vertices shown in Fig.~\ref{razved2}.
\begin{figure}
\caption{Smoothings of two even vertices}
\label{razved2}
\end{figure}
The smoothings shown in the upper-left picture of Fig.~\ref{razved2} yield
free loops, so they do not contribute to $\{G'\}\in {\mathbb Z}GG$.
The second-type and third-type smoothings (the second and the third
pictures in the top row of Fig.~\ref{razved2}) give identical contributions,
so they cancel in $\{G'\}$. Finally, the smoothings corresponding to the
upper-right picture of Fig.~\ref{razved2} are in one-to-one correspondence
with the smoothings of $G$; thus we have a term-wise equality between the
terms of $\{G\}$ and those terms of $\{G'\}$ which are not cancelled by
comparing the two middle pictures.
If $G$ and $G'$ differ by a third Reidemeister move, then the
following two cases are possible: either all vertices taking part in
the third Reidemeister move are even, or two of them are odd and one
is even.
If all three vertices are even, there are seven types of
smoothings contributing to $\{G\}$ (and seven types of smoothings
contributing to $\{G'\}$): at each of the three vertices we have two
possible smoothings, and one of the eight cases is ruled out because of a
free loop. When considering $G$, three of these seven cases coincide
(this triple is denoted by $1$), so in ${\mathbb Z}GG$ exactly one of these
three summands survives (we work with ${\bf Z}_{2}$ coefficients). Amongst
the smoothings of the diagram $G'$, another three cases coincide (they are
marked by $2$). Thus, both in $\{G\}$ and in $\{G'\}$ there are five types
of summands, marked by $1,2,3,4,5$.

These five cases are in one-to-one correspondence (see Fig.
\ref{razved31}), and they yield the equality $\{G\}=\{G'\}$.
\begin{figure}
\caption{Correspondences of smoothings with respect to $\Omega_3$
with three even vertices}
\label{razved31}
\end{figure}
If amongst the three vertices taking part in $\Omega_3$ there is
exactly one even vertex (say $a\to a'$), we get the situation
depicted in Fig.~\ref{razved32}.
\begin{figure}
\caption{Correspondence between smoothings for $\Omega_3$ with one
even vertex}
\label{razved32}
\end{figure}
From this figure we see that those smoothings where $a$ (resp.,
$a'$) is smoothed {\em vertically} give identical summands in $\{G\}$
and in $\{G'\}$, while those smoothings where $a$ and $a'$ are smoothed
{\em horizontally} are in one-to-one correspondence for $G$ and
$G'$, the corresponding summands being obtained from each other by applying
two second Reidemeister moves. This proves that $\{G\}=\{G'\}$ in ${\mathbb Z}GG$.
\end{proof}
We extend $\{\cdot\}$ to the whole of ${\mathbb Z}GG_{2}$ by linearity.
\section{New Examples}
\begin{st}
The free link $L_1$ shown in Fig. \ref{frlink1} is minimal and the
corresponding atom is orientable.
\end{st}
\begin{figure}
\caption{Minimal free two-component link}
\label{frlink1}
\end{figure}
\begin{proof}
The orientability of any of the corresponding atoms can be checked
straightforwardly: one can easily verify the source-sink condition.
To prove minimality, let us consider the bracket $\{L_1\}$. By
construction, $\{L_1\}$ consists of only one diagram, the diagram
$L_1$ itself. Since $L_1$ is irreducible in ${\mathbb Z}GG$, we see that for every
link $L'_1$ equivalent to $L_1$ there is a smoothing of $L'_1$ at
some vertices equivalent to $L_1$. So, $L'_1$ has at least eight
crossings.
\end{proof}
\begin{st}
The free knot $K_1$ shown in Fig. \ref{frknot1} is minimal.
\end{st}
\begin{figure}
\caption{A
minimal free knot Gauss diagram}
\label{frknot1}
\end{figure}
\begin{proof}
Consider $\Delta(K_1)$. By construction, it consists of nine
summands, each of which represents a two-component free link.
These summands are constructed by smoothing exactly one crossing
(one chord). If we smooth along the chord $x$, we get the link $L_1$
depicted above. Thus, $\Delta(K_1)=L_{1}+\sum L_{i}$, where all the
$L_{i}$'s are two-component links.
We claim that none of the links $L_{i}$ is equivalent to $L_1$ as a
free link.
Indeed, the initial Gauss diagram of $K_1$ has $9$ chords, and there
is only one chord, $x$, which is linked with all the other chords. Thus,
if we smooth the diagram along a chord distinct from $x$, we
get a $2$-component link, say $L_i$, with at least one crossing
formed by one and the same component. Thus, by the definition of
$\{\cdot\}$, we see that $\{L_{i}\}$ is a sum of diagrams having
strictly fewer than $8$ crossings. Consequently, $L_{i}\neq L_{1}$.
Thus we have proved that $\{\Delta(K_1)\}$ contains exactly one
diagram with minimal crossing number $8$. Taking into account the
invariance of $\{\cdot\}$, we see that $K_1$ has at least $9$
crossings.

Moreover, we have in fact proved that every diagram of $K_1$ has
at least one smoothing equivalent to $L_1$.
\end{proof}
\section{Post Scriptum. Odds and Ends.}
In \cite{FreeKnots} we gave an example of a looped graph (or
graph-knot; for the definition see \cite{TZ,IM}) which has no
realisable representative, i.\,e., which is not equivalent to any
virtual knot. However, that graph consisted only of {\em odd
vertices}, i.\,e., each vertex had odd valency.
Below we present an example of a looped graph with all vertices of
even valency, which has no realizable representative.
To do that, let us analyse the example shown in Fig.~\ref{frknot1}.
This Gauss diagram has all chords even (this guarantees the
orientability of the corresponding atom). There is exactly one chord,
$x$, which is linked with all of the remaining chords.
This guarantees that the smoothing at $x$ gives a $2$-component link
where every crossing is formed by the $2$ components, whereas the
smoothing at any other crossing gives a $2$-component link having
at least one crossing formed by a single component. This means that
if we then apply the bracket $\{\cdot\}$, these remaining diagrams
will lead to diagrams with a strictly smaller number of crossings
(fewer than eight).
Finally, there is no room to perform a decreasing second
Reidemeister move on the graph obtained by smoothing along $x$. This
is guaranteed by the fact that in the initial chord diagram there is
no pair of chords $a,b$, both distinct from $x$, to which such a move
could be applied.

For the reader's convenience, the intersection graph of this chord
diagram is shown in Fig.~\ref{xx}.
\begin{figure}
\caption{The
intersection graph of the minimal chord diagram}
\label{xx}
\end{figure}
Quite analogously, one considers the looped graph in the sense of
\cite{TZ} whose ``Gauss diagram intersection graph'' is the
non-realizable graph shown in Fig.~\ref{xxx}.
\begin{figure}
\caption{The
intersection graph of a non-realizable knot}
\label{xxx}
\end{figure}
All vertices of this graph have even valency, and there is exactly
one vertex connected to all the remaining vertices. Thus, applying
Turaev's $\Delta$ to it, one gets seven $2$-component diagrams,
$A+\sum_{i}B_{i}$, exactly one of which, the diagram $A$, has every
crossing formed by both components of the two-component link. For any
other diagram $B_{i}$, $i=1,\dots, 6$, there is at least one crossing
formed by branches of the same component.
So, the diagram $A$ is not cancelled by any of $B_{i}$'s. One easily
checks that $A$ is non-realizable, thus, the looped graph with Gauss
diagram shown in Fig. \ref{xxx} has no realizable representative.
\end{document}
\begin{document}
\begin{center}{\Large \bf
On convergence of generators of equilibrium dynamics of hopping particles to generator of a birth-and-death process in continuum
}
{\large \bf E. Lytvynov and P.T. Polara}\\
Department of Mathematics, Swansea University, Singleton
Park, Swansea SA2 8PP, U.K.\end{center}
\begin{abstract}
We deal with the following two classes of equilibrium stochastic dynamics of infinite particle systems in
continuum: hopping particles (also called Kawasaki dynamics), i.e., a dynamics where each particle randomly hops over the space,
and birth-and-death processes in continuum (or Glauber dynamics), i.e., a dynamics where there is no motion of particles, but
rather particles die, or are born at random. We prove that a wide class of Glauber dynamics can be derived as a scaling limit
of Kawasaki dynamics. More precisely, we prove the convergence of the respective generators on a set of cylinder functions, in
the $L^2$-norm with respect to the invariant measure of the processes. The latter measure is supposed to be a Gibbs measure corresponding to a potential of pair interaction, in the low activity--high temperature regime. Our result
generalizes that of [Finkelshtein~D.L. et al., to appear in Random Oper.\ Stochastic Equations], which was proved for a special Glauber (Kawasaki, respectively) dynamics.\end{abstract}
\noindent
{\it MSC:} 60K35, 60J75, 60J80, 82C21, 82C22
\noindent{\it Keywords:} Birth-and-death process; Continuous system; Gibbs measure; Hopping particles; Scaling limit
\section{Introduction}
This paper deals with two classes of equilibrium stochastic dynamics of infinite particle systems in
continuum. Let $\Gamma$ denote the space of all locally finite subsets of $\mathbb R^d$. Such a space is called the
configuration space (of an infinite particle system in continuum). Elements of $\Gamma$ are called configurations, and each
point of a configuration represents the position of a particle.
One can naturally define a $\sigma$-algebra on $\Gamma$, and then a probability measure on $\Gamma$ represents a random
system of particles. A probability measure on $\Gamma$ is often called a point process (see e.g.\ \cite{Kal}). Configuration
spaces and point processes are important tools of classical statistical mechanics of continuous systems. A central class
of point processes which is studied there is the class of Gibbs measures. Typically one deals with Gibbs measures which
correspond to a potential of pair interaction.
An equilibrium stochastic dynamics in continuum is a Markov process on $\Gamma$ which has a point process (typically a Gibbs measure) $\mu$ as
its invariant measure.
One can distinguish three main classes of stochastic dynamics:
\begin{itemize}
\item
diffusion processes, i.e., dynamics where each particle
continuously moves in the space, see e.g.\ \cite{AKR4,Fritz,KLRDiffusions,MR98,Osa96,RS98,Yos96};
\item birth-and-death processes in continuum (Glauber dynamics), i.e., dynamics where
there is no motion of particles, but rather particles disappear (die) or appear (are born) at random, see e.g.\ \cite{BCC,G1,HS,KL,KLR,KMZ,P,Wu};
\item hopping particles (Kawasaki dynamics), i.e., dynamics where each
particle randomly hops over the space \cite{KLR}.
\end{itemize}
For a deep understanding of these dynamics, it is important to see how they are related to each other. For example, in the recent paper
\cite{KKL}, it was shown that a typical diffusion dynamics can be derived through a diffusive scaling limit of a
corresponding Kawasaki dynamics.
In \cite{FKL}, it was proved that a special Glauber dynamics can be derived
through a scaling limit of Kawasaki dynamics. Furthermore, \cite{FKL} conjectured that such a result holds, in fact, for a wide class of birth-and-death dynamics (dynamics of hopping particles, respectively), which are indexed by a parameter $s\in[0,1]$.
(Note that the result of \cite{FKL} corresponds to the choice of parameter $s=0$.)
The aim of this work is to show that the conjecture of \cite{FKL} is indeed true, at least for parameters $s\in[0,1/2]$. (In the case where $s\in (1/2,1]$, one needs to put additional, quite restrictive assumptions on the potential of pair interaction, and we will not treat this case in the present paper.)
Thus, we show that the result of \cite{FKL} is not a property of just one special Kawasaki (Glauber, respectively)
dynamics, but rather represents a property which is common for many dynamics.
More specifically, we fix a class of cylinder functions on $\Gamma$, and prove that on this class of functions, the
corresponding generators converge in the $L^2(\Gamma,\mu)$-space. Here, $\mu$ is a Gibbs measure in the low activity--high temperature regime, $\mu$ being the invariant measure for all the processes under consideration. If one additionally knows that the class of cylinder
functions is a core for the limiting generator, then our result implies weak convergence of the finite-dimensional
distributions of the corresponding processes. Unfortunately, apart from a very special case \cite{KL}, no result about a core for
these generators is available yet.
The paper is organized as follows. In Section 2, we briefly discuss Gibbs measures in the low activity--high temperature regime,
and the corresponding correlation and Ursell functions. In Section 3, we describe classes of birth-and-death processes and of dynamics of hopping particles. In Section~4, we formulate and prove the result about convergence of the generators.
The authors acknowledge numerous useful discussions with Dmitri Finkelshtein and Yuri Kondratiev.
\section{Gibbs measures in the low activity--high temperature regime}
The configuration space over $\mathbb R^d$, $d\in\mathbb N$,
is defined by
\[
\Gamma:=\{\gamma\subset\mathbb R^d :\, |\gamma\cap\Lambda|<\infty \text{
for each compact } \Lambda\subset\mathbb R^d \},
\]
where $|\cdot|$ denotes the cardinality of a set. One can identify any $\gamma\in\Gamma$
with the positive Radon measure $\sum_{x\in\gamma}\varepsilon_x \in
\mathcal M(\mathbb R^d)$, where $\varepsilon_x$ is the Dirac measure with mass
at $x$, $\sum_{x\in\varnothing}\varepsilon_x:=$zero measure, and
$\mathcal M(\mathbb R^d)$ stands for the set of all positive Radon measures
on the Borel $\sigma$-algebra $\mathcal B(\mathbb R^d)$. The space $\Gamma$
can be endowed with the relative topology as a subset of
the space $\mathcal M(\mathbb R^d)$ with the vague topology, i.e., the
weakest topology on $\Gamma$ with respect to which all maps
\[
\Gamma\ni\gamma\mapsto\langle f,\gamma\rangle:=\int_{\mathbb R^d} f(x)\,\gamma(dx)=\sum_{x\in\gamma}f(x),\quad f\in C_0(\mathbb R^d),
\]
are continuous. Here, $C_0(\mathbb R^d)$ is
the space of all continuous real-valued functions on $\mathbb R^d$
with compact support. We will denote by $\mathcal B(\Gamma)$ the
Borel $\sigma$-algebra on $\Gamma$.
Let $\mu$ be a probability measure on $(\Gamma,\mathcal B(\Gamma))$. Assume that, for each $n\in\mathbb N$, there
exists a non-negative, measurable symmetric function
$k_\mu^{(n)}$ on $(\mathbb R^d)^n$ such that, for any measurable
symmetric function $f^{(n)} :(\mathbb R^d)^n \to
[0,+\infty]$,
\[
\int_\Gamma \sum_{\{x_1,\dots,x_n\}\subset\gamma}
f^{(n)}
(x_1,\dots,x_n)\,\mu(d\gamma)
=\frac{1}{n!}\int_{(\mathbb R^d)^n}f^{(n)}
(x_1,\dots,x_n)k_\mu^{(n)}
(x_1,\dots,x_n)\,d x_1 \dotsm d x_n.
\]
The functions $k_\mu^{(n)}$ are called correlation functions
of the measure $\mu$. If there exists a constant $\xi>0$
such that
\begin{equation}\label{RB}
\forall(x_1,\dots,x_n)\in(\mathbb R^d)^n
:\quad k_\mu^{(n)} (x_1,\dots,x_n)\le\xi^n,
\end{equation}
then we say that the correlation functions $k_\mu^{(n)}$
satisfy the Ruelle bound.
The following lemma gives a characterization of the correlation functions in terms of the Laplace transform of a given point process, see e.g.\ \cite{KKL}.
\begin{lemma}\label{n} Let $\mu$ be a probability measure on $(\Gamma,\mathcal B(\Gamma))$
which satisfies the Ruelle bound \eqref{RB}. Let $f:\mathbb{R}^d\to\mathbb{R}$ be a measurable function which is bounded outside a compact set
$\Lambda \subset \mathbb R^d$ and such that $e^{f}-1 \in L^1(\mathbb R^d,dx)$. Then for $\mu$-a.a.\ $\gamma \in \Gamma$,
$\langle |f|,\gamma \rangle < \infty$ and
\[\int_\Gamma e^{\langle f,\gamma\rangle} \mu(d\gamma)=1+\sum_{n=1}^\infty \frac{1}{n!}\int_{(\mathbb{R}^d)^n}
(e^{f(x_1)}-1)\dotsm (e^{f(x_n)}-1)k_\mu^{(n)}(x_1,\dots,x_n)\, dx_1\dotsm dx_n.\]
\end{lemma}
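As a consistency check (ours, not stated in the text): for the Poisson point process $\pi_z$ with intensity measure $z\,dx$ one has $k_{\pi_z}^{(n)}\equiv z^n$, so the Ruelle bound holds with $\xi=z$, and the series in Lemma~\ref{n} sums to the familiar Laplace functional:

```latex
\int_\Gamma e^{\langle f,\gamma\rangle}\,\pi_z(d\gamma)
=1+\sum_{n=1}^\infty \frac{z^n}{n!}
\Bigl(\int_{\mathbb R^d}\bigl(e^{f(x)}-1\bigr)\,dx\Bigr)^{n}
=\exp\Bigl(z\int_{\mathbb R^d}\bigl(e^{f(x)}-1\bigr)\,dx\Bigr).
```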
\begin{remark}\label{fff}\rom{Note that if $f:\mathbb{R}^d\to\mathbb{R}$ is bounded outside a compact set $\Lambda\subset\mathbb R^d$ and if, furthermore, $f$ is bounded from above on the whole $\mathbb R^d$, then the condition $e^{f}-1 \in L^1(\mathbb R^d,dx)$
is equivalent to $f\in L^1(\Lambda^c,dx)$.
}\end{remark}
Via a recursion formula, one can transform the correlation functions
$k_\mu^{(n)}$ into the Ursell functions $u_\mu^{(n)}$ and vice
versa, see e.g.\ \cite{Ru69}. Their relation is given by
\begin{equation}\label{urs_def}
k_\mu(\eta)=\sum u_\mu(\eta_1)\dotsm u_\mu(\eta_j),\quad
\eta\in\Gamma_0,\ \eta\ne\varnothing,
\end{equation}
where
\[
\Gamma_0:=\{\gamma\in\Gamma: |\gamma|<\infty \},
\]
for any $\eta=\{x_1,\dots,x_n\}\in\Gamma_0$
\[
k_\mu(\eta):=k_\mu^{(n)}(x_1,\dots,x_n), \quad
u_\mu(\eta):=u_\mu^{(n)}(x_1,\dots,x_n) ,
\]
and the summation in (\ref{urs_def}) is over all partitions
of the set $\eta$ into nonempty mutually disjoint subsets
$\eta_1,\dots,\eta_j\subset\eta$ such that
$\eta_1\cup\dotsm\cup\eta_j=\eta$, $j\in\mathbb N$.
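For instance, in the two lowest orders the relation \eqref{urs_def} reads

```latex
k_\mu^{(1)}(x_1)=u_\mu^{(1)}(x_1),\qquad
k_\mu^{(2)}(x_1,x_2)=u_\mu^{(1)}(x_1)\,u_\mu^{(1)}(x_2)+u_\mu^{(2)}(x_1,x_2),
```

so the Ursell functions measure correlations; in particular, for a Poisson point process $u_\mu^{(n)}=0$ for all $n\ge 2$.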
Note that if the correlation functions $(k_\mu^{(n)})_{n=1}^\infty$ are translation invariant, i.e., for each $a\in\mathbb R^d$
\[ k_\mu^{(n)} (x_1,\dots,x_n)=k_\mu^{(n)}(x_1+a,\dots,x_n+a),\quad (x_1,\dots,x_n)\in(\mathbb R^d)^n,\]
then so are the Ursell functions $(u_\mu^{(n)})_{n=1}^\infty$.
A pair potential is a Borel-measurable function $\phi:\mathbb R^d \to\mathbb R\cup\{+\infty\}$
such that $\phi(-x)=\phi(x)\in\mathbb R$ for all $x\in\mathbb R^d\setminus\{0\}$.
For $\gamma\in\Gamma$ and $x\in\mathbb R^d\setminus\gamma$, we define a relative
energy of interaction between a particle at $x$ and the
configuration $\gamma$ as follows:
\[
E(x,\gamma):=\left\{
\begin{aligned}
&\sum_{y\in\gamma}\phi(x-y),&& \text{if } \sum_{y\in\gamma} |\phi(x-y)|
< +\infty ,\\
& +\infty, &&\text{otherwise.}
\end{aligned}
\right.
\]
A probability measure $\mu$ on $(\Gamma,\mathcal B(\Gamma))$ is
called a (grand canonical) Gibbs measure corresponding to
the pair potential $\phi$ and activity $z>0$ if it satisfies
the Georgii--Nguyen--Zessin identity (\cite[Theorem 2]{NZ}):
\begin{equation}\label{GNZ}
\int_\Gamma\mu(d\gamma)\int_{\mathbb R^d}\gamma(dx) F(\gamma,x)
=\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d} z\,dx \exp[-E(x,\gamma)]
F(\gamma\cup x,x)
\end{equation}
for any measurable function $F:\Gamma\times\mathbb R^d\to[0,+\infty]$. Here and below, for simplicity of notation, we just write $x$ instead of $\{x\}$.
We denote the set of all such measures $\mu$ by $\mathcal G (z,\phi)$.
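As an illustration (a standard special case, not spelled out in the text): in the free case $\phi\equiv 0$ we have $E(x,\gamma)=0$, and the identity \eqref{GNZ} becomes the classical Mecke identity, which characterizes the Poisson point process $\pi_z$ with intensity measure $z\,dx$:

```latex
\int_\Gamma\pi_z(d\gamma)\int_{\mathbb R^d}\gamma(dx)\,F(\gamma,x)
=\int_\Gamma\pi_z(d\gamma)\int_{\mathbb R^d}z\,dx\,F(\gamma\cup x,x).
```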
As a straightforward corollary of the Georgii--Nguyen--Zessin
identity (\ref{GNZ}), we get the following equality:
\begin{align}\notag
&\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}\gamma(dx_1)\int_{\mathbb R^d}\gamma(dx_2)
F(\gamma,x_1,x_2)\\
&=\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}z\, dx_1\int_{\mathbb R^d} z\, dx_2\exp\left[
-E(x_1,\gamma)-E(x_2,\gamma)-\phi(x_1-x_2)
\right]\notag\\
&\qquad \times F(\gamma\cup \{x_1, x_2\},x_1,x_2)\notag\\
&\quad + \int_\Gamma \mu(d\gamma)\int_{\mathbb R^d} z\,dx \exp\left[
-E(x,\gamma)\right] F(\gamma\cup x,x,x)\label{double_GNZ}
\end{align}
for any measurable function
$F:\Gamma\times\mathbb R^d\times\mathbb R^d\to[0,+\infty]$.
Let us formulate conditions on the pair potential $\phi$.
{\bf (S) (Stability)}
There exists $B\ge0$ such that, for any $\gamma\in\Gamma_0$,
\[
\sum_{\{x,y\}\subset\gamma}\phi(x-y)\ge -B|\gamma|.
\]
In particular, condition (S) applied to two-point configurations implies that $\phi(x)\geq -2B$,
$x\in\mathbb R^d$.
{\bf (LAHT) (Low activity--high temperature regime)} We have:
\[
\int_{\mathbb R^d}|e^{-\phi(x)}-1|z\,dx<(2e^{1+2B})^{-1},
\]
where $B$ is as in (S).
The following classical theorem is due to Ruelle
\cite{wewewe,Ru69}.
\begin{theorem}\label{cfdrer} Assume that \rom{(S)} and \rom{(LAHT)} are satisfied. Then there exists $\mu\in\mathcal G(z,\phi)$ which has the following properties\rom:
\rom{a)} $\mu$ has correlation functions $(k_\mu^{(n)})_{n=1}^\infty$\rom, which are translation invariant
and satisfy the Ruelle bound \eqref{RB}\rom;
\rom{b)} For each $n \ge 2$, we have $u_\mu^{(n)}(0,\cdot,\cdot,\dots,\cdot)\in
L^1(\mathbb R^{d(n-1)},dx_1 \dotsm dx_{n-1})$, where \linebreak $u_\mu^{(n)}(0,\cdot,\cdot,\dots,\cdot)$
is considered as a function of $n-1$ variables.
\end{theorem}
In what follows, we will assume that (S) and (LAHT) are satisfied, and we will
keep the measure $\mu$ from Theorem~\ref{cfdrer} fixed.
\section{Equilibrium birth-and-death (Glauber) dynamics and hopping particles' \newline
(Kawasaki) dynamics}
In what follows, we will additionally assume that $\phi$ is bounded outside some ball in $\mathbb R^d$. Note that then (see e.g.\ \cite{KLR})
$$E(x,\gamma)=\sum_{y\in\gamma}\phi(x-y),$$ for $dx\,\mu(d\gamma)$-a.a.\ $x\in\mathbb R^d$ and $\gamma\in\Gamma$, and
$$E(x,\gamma\setminus x)=\sum_{y\in\gamma\setminus x}\phi(x-y),$$ for $\mu$-a.a.\ $\gamma\in\Gamma$ and
all $x\in\gamma$.
We fix a parameter $s \in [0,1/2]$. We introduce the set $\mathcal FC_b(C_0(\mathbb R^d),\Gamma)$ of all functions of the form
$$\Gamma \ni \gamma \mapsto F(\gamma)=g(\langle f_1,\gamma \rangle,\dots,\langle f_N,\gamma \rangle),$$
where $N\in\mathbb N$, $f_1,\dots,f_N \in C_0(\mathbb R^d)$, and $g\in C_b(\mathbb R^N)$, where $C_b(\mathbb R^N)$
denotes the set of all continuous bounded functions on $\mathbb R^N$. For each function $F:\Gamma \to \mathbb R$,
$\gamma\in\Gamma$, and $x,y\in\mathbb R^d$, we denote
$$(D_{x}^{-}F)(\gamma):=F(\gamma \setminus x)-F(\gamma),\qquad (D_{x}^{+}F)(\gamma):=F(\gamma \cup x)-F(\gamma),$$
$$(D_{xy}^{-+}F)(\gamma):=F(\gamma \setminus x \cup y)-F(\gamma).$$
We fix a bounded function $a:\mathbb R^d \to [0,+\infty)$ such that $a(-x)=a(x)$, $x\in\mathbb R^d$, and $a\in L^1(\mathbb R^d,dx)$.
We define bilinear forms
\begin{align*}
\mathcal E_G(F,G)&=\int_\Gamma\mu (d\gamma)\int_{\mathbb R^d} \gamma(dx)
\exp[s E(x,\gamma\setminus x)](D_{x}^{-}F)(\gamma)(D_{x}^{-}G)(\gamma),
\end{align*}
\begin{align*}
\mathcal E_K(F,G)&=\frac{1}{2}\int_\Gamma\mu (d\gamma)\int_{\mathbb R^d} \gamma(dx)\int_{\mathbb R^d}dy \ a(x-y)\\
&\quad \times\exp[s E(x,\gamma\setminus x)-(1-s)E(y,\gamma \setminus x)](D_{xy}^{-+}F)(\gamma)(D_{xy}^{-+}G)(\gamma),\end{align*}
where $F,G \in\mathcal FC_b(C_0(\mathbb R^d),\Gamma)$. As we will see below, $\mathcal E_G$
corresponds to a Glauber dynamics and $\mathcal E_K$ corresponds to a Kawasaki dynamics.
The next theorem follows from \cite{KLR}.
\begin{theorem}\label{Hunt_thm}
\rom{i)} The bilinear forms $(\mathcal E_G,\mathcal FC_b(C_0(\mathbb R^d),\Gamma))$ and $(\mathcal E_K,\mathcal FC_b(C_0(\mathbb R^d),\Gamma))$ are closable on $L^2(\Gamma,\mu)$
and their closures are denoted by $(\mathcal E_G,D(\mathcal E_G))$ and $(\mathcal E_K,D(\mathcal E_K))$, respectively.
\rom{ii)} Denote by $(H_G,D(H_G))$ and $(H_K,D(H_K))$ the generators of $(\mathcal E_G,D(\mathcal E_G))$ and \linebreak $(\mathcal E_K,D(\mathcal E_K))$, respectively. Then $\mathcal FC_b(C_0(\mathbb R^d),\Gamma)\subset D(H_G)\cap D(H_K)$, and for any $F\in \mathcal FC_b(C_0(\mathbb R^d),\Gamma)$,
\begin{align}
(H_GF)(\gamma)&=-\int_{\mathbb R^d}\gamma(dx) \exp[sE(x,\gamma\setminus x)](D_{x}^{-}F)(\gamma)\notag\\&\qquad\text{}
-\int_{\mathbb R^d}z\, dx \exp[(s-1)E(x,\gamma)](D_{x}^{+}F)(\gamma),\label{fguftftf}\\
(H_KF)(\gamma)&= -\int_{\mathbb R^d} \gamma(dx)\int_{\mathbb R^d}dy \, a(x-y)
\exp[sE(x,\gamma \setminus x)+(s-1) E(y,\gamma\setminus x)](D_{xy}^{-+}F)(\gamma).\label{C}
\end{align}
\rom{iii)} Let $\sharp:=G,K$. There exists a conservative
Hunt process
\[
\mathbf {M}^\sharp=(\mathbf{\Omega}^\sharp, \, \mathbf{F}^\sharp, \
(\mathbf {F}^\sharp_t)_{t\geq 0}, \, (\mathbf{\Theta}^\sharp_t)_{t\geq 0}, \, (\mathbf {X}^\sharp(t))_{t\geq 0}, \, (\mathbf{P}^\sharp_\gamma)_{\gamma\in\Gamma})
\]
on $\Gamma$ \rom(see e.g. \rom{\cite[p.~92]{MaRo})} which is properly associated
with $(\mathcal E_\sharp,D(\mathcal E_\sharp))$, i.e., for all \rom($\mu$-versions
of\,\rom) $F\in L^2(\Gamma,\mu)$ and all $t>0$ the function
\[
\Gamma\ni\gamma\mapsto(p_t^\sharp F)(\gamma)
:=\int_{\mathbf\Omega^\sharp} F(\mathbf {X}^\sharp(t))d\mathbf {P}^\sharp_\gamma
\]
is an $\mathcal E_\sharp$-quasi-continuous version of
$\exp[-tH_\sharp]F$. $\mathbf{M}^\sharp$ is up to
$\mu$-equivalence unique \rom(cf.\ \rom{\cite[Chap.~IV, Sect.~6]{MaRo})}.
In particular, $\mathbf {M}^\sharp$ has $\mu$ as invariant
measure.
\end{theorem}
\begin{remark}\rom{
In Theorem~\ref{Hunt_thm}, $\mathbf{M}^\sharp$ can be taken canonical,
i.e., $\mathbf{\Omega}^\sharp$ is the set
$D([0,+\infty),\Gamma)$ of all {\it c\`adl\`ag\/} functions
$\omega:\left[0,+\infty\right)\to\Gamma$ (i.e., $\omega$
is right continuous on $\left[0,+\infty\right)$ and has
left limits on $(0,+\infty)$);
$\mathbf{X}^\sharp(t)(\omega)=\omega(t)$, $t\geq0$,
$\omega\in\mathbf {\Omega}^\sharp$; $(\mathbf {F}^\sharp_t)_{t\geq0}$,
together with $\mathbf {F}^\sharp$, is the corresponding minimum
completed admissible family (cf.\ \cite[Section~4.1]{Fu80}); $\mathbf {\Theta}^\sharp_t$, $t\geq0$, are the corresponding
natural time shifts.
}\end{remark}
It follows from \eqref{fguftftf} that $H_G$ is (at least heuristically) the generator of a birth-and-death process, in which the factor $\exp[sE(x,\gamma\setminus x)]$ describes the rate at which particle $x$ of the configuration $\gamma$ dies, whereas the factor $\exp[(s-1)E(x,\gamma)]$ describes the rate at which, given a configuration $\gamma$, a new particle is born at $x$. We see that particles tend to die in high energy regions, i.e., where $E(x,\gamma\setminus x)$
is high, and tend to be born in low energy regions, i.e., where $E(x,\gamma)$ is low.
Next, by \eqref{C}, $H_K$ is (again at least heuristically) the generator of a hopping particle dynamics, in which the factor
\[\exp[sE(x,\gamma\setminus x)+(s-1)E(y,\gamma\setminus x)]\] describes the rate at which a particle $x$ of configuration $\gamma$ hops to $y$.
We see that this rate is high if the relative energy of interaction between $x$ and the rest of the configuration, $\gamma\setminus x$, is high, whereas the relative energy of interaction between $y$ and $\gamma\setminus x$ is low, i.e., particles tend to hop from high energy regions to low energy regions.
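As a heuristic illustration (not part of the proofs), this energetic bias of the hop rates can be checked numerically in $d=1$; the pair potential below is a hypothetical example chosen only for the sketch:

```python
import math

phi = lambda r: 4.0 * math.exp(-r)                    # toy repulsive pair potential (illustrative only)
E = lambda x, pts: sum(phi(abs(x - p)) for p in pts)  # relative energy E(x, .) in d = 1

s = 0.5
rest = [0.0, 0.2]                                     # gamma \ x: a crowded region near the origin
rate = lambda x, y: math.exp(s * E(x, rest) + (s - 1) * E(y, rest))

# a hop away from the crowded (high energy) region is faster than the reverse hop
assert rate(0.1, 5.0) > 1.0 > rate(5.0, 0.1)
```

With $s=1/2$ the rate weighs the departure and arrival energies symmetrically, so hops out of the crowded region near the origin are exponentially favoured over hops into it.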
\section{Scaling limit}
In this section, we will show that the birth-and-death dynamics considered in Section~3 can be treated as a limiting dynamics of hopping particles.
In other words, we will perform a scaling of Kawasaki dynamics which will lead to the Glauber dynamics.
We will
only discuss this convergence at the level of convergence of the generators on an appropriate set of cylinder
functions. In fact, such a convergence implies weak convergence of finite-dimensional distributions of corresponding
equilibrium processes if, additionally, the set of test functions forms a core for the limiting generator. However, in the general case,
no core of this generator is yet known; this is an important open problem, which we hope to return to in future
research. Our results will hold for all $s\in[0,1/2]$ (see Section~3). They will generalize Theorem 4.1 in
\cite{FKL}, which was proved in the special case $s=0$, and confirm the conjecture formulated in Section~6 of that paper.
So, let us consider the following scaling of the Kawasaki dynamics (for a fixed $s\in[0,1/2]$). Recall that, for each bounded function
$a:\mathbb R^d \to \mathbb R$ such that $a(x)\ge0$, $a\in L^1(\mathbb R^d,dx)$, and
$a(-x)=a(x)$ for all $x\in\mathbb R^d$, we have constructed the corresponding generator of the Kawasaki dynamics. We now fix
an arbitrary $\varepsilon >0$ and define a function $a_\varepsilon:\mathbb R^d \to \mathbb R$ by
$$a_\varepsilon(x)=\varepsilon^d a(\varepsilon x),\quad x\in \mathbb R^d.$$ Note that
$$\int_{\mathbb R^d} a_\varepsilon(x)\,dx=\int_{\mathbb R^d} a(x)\,dx.$$
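The displayed identity is a simple change of variables; a quick numerical sanity check (with an assumed Gaussian kernel $a$ in dimension $d=1$, chosen only for illustration) confirms it:

```python
import numpy as np

d = 1
a = lambda x: np.exp(-x**2)                  # example kernel: bounded, even, integrable

def total_mass(eps, grid):
    # Riemann sum of a_eps(x) = eps^d * a(eps * x) over the grid
    dx = grid[1] - grid[0]
    return float(np.sum(eps**d * a(eps * grid)) * dx)

grid = np.linspace(-300.0, 300.0, 600_001)
m_1 = total_mass(1.0, grid)                  # integral of a
m_eps = total_mass(0.1, grid)                # integral of a_eps with eps = 0.1
assert abs(m_1 - m_eps) < 1e-6               # the total mass is invariant under the scaling
```

Both sums agree with $\int_{\mathbb R} e^{-x^2}\,dx=\sqrt{\pi}$ to high accuracy.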
By the properties of the function $a$, we evidently have that the function $a_\varepsilon$ is also bounded, satisfies $a_\varepsilon(x)\ge0$,
for all $x\in\mathbb R^d$, $a_\varepsilon\in L^1(\mathbb R^d,dx)$, and
$a_\varepsilon(-x)=a_\varepsilon(x)$ for all $x\in\mathbb R^d$. Hence, we can construct the Kawasaki
generator which corresponds to the function $a_\varepsilon$. It is convenient for us to denote this generator by
$(H_\varepsilon, D(H_\varepsilon))$. We will also denote the generator of the Glauber dynamics by
$(H_0, D(H_0))$. We first need the following lemma, whose proof is completely analogous to the proof of Lemma 4.1
in \cite{FKL}.
\begin{lemma}
For any $\varepsilon \ge 0$ and any $\varphi \in C_0(\mathbb R^d)$, the function $F(\gamma):=e^{\langle\varphi,\gamma\rangle}
$ belongs to $ D(H_\varepsilon)$ and the action of $H_\varepsilon$ on $F$ is given by the right hand side of
formula \eqref{C} for $\varepsilon>0$ \rom(with $a$ replaced by $a_\varepsilon$\rom), respectively by the right hand side of \eqref{fguftftf} for $\varepsilon=0$.
\end{lemma}
\begin{remark}\rom{For each $\varepsilon\ge0$, denote by $(\mathcal E_\varepsilon,D(\mathcal E_\varepsilon))$ the Dirichlet form with the generator $(H_\varepsilon,D(H_\varepsilon))$.
It can be easily proved that the set $\big\{\exp[\langle\varphi,\cdot\rangle]:\varphi\in C_0(\mathbb R^d)\big\}$
is dense in the Hilbert space $D(\mathcal E_\varepsilon)$ equipped with the inner product $(F,G)_{D(\mathcal E_\varepsilon)}:=\mathcal E_\varepsilon(F,G)+(F,G)_{L^2(\Gamma,\mu)}$.
}\end{remark}
We have
\begin{align*}
(D_{xy}^{-+}F)(\gamma)&=F(\gamma \setminus x \cup y)-F(\gamma)\\
&=-F(\gamma)+F(\gamma \setminus x)-F(\gamma \setminus x)+F(\gamma \setminus x \cup y)\\
&=(D_{x}^{-}F)(\gamma)+(D_{y}^{+}F)(\gamma \setminus x).
\end{align*}
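This telescoping identity can be verified on a toy finite configuration (the function $F$ below is a hypothetical example of the form $e^{\langle\varphi,\gamma\rangle}$):

```python
import math

# Configurations as frozensets of points; F(gamma) = exp(<f, gamma>) for a toy f
f = {1.0: 0.3, 2.5: -0.7, 4.0: 1.1}       # f evaluated at the points we use
F = lambda gamma: math.exp(sum(f[p] for p in gamma))

def D_minus(x, F, gamma):                  # (D_x^- F)(gamma) = F(gamma \ x) - F(gamma)
    return F(gamma - {x}) - F(gamma)

def D_plus(y, F, gamma):                   # (D_y^+ F)(gamma) = F(gamma u y) - F(gamma)
    return F(gamma | {y}) - F(gamma)

def D_mp(x, y, F, gamma):                  # (D_xy^{-+} F)(gamma) = F(gamma \ x u y) - F(gamma)
    return F((gamma - {x}) | {y}) - F(gamma)

gamma = frozenset({1.0, 2.5})
x, y = 1.0, 4.0
lhs = D_mp(x, y, F, gamma)
rhs = D_minus(x, F, gamma) + D_plus(y, F, gamma - {x})
assert abs(lhs - rhs) < 1e-12
```

The check simply inserts and removes the intermediate term $F(\gamma\setminus x)$, exactly as in the display above.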
So, we may rewrite the action of $H_\varepsilon$ for $\varepsilon > 0$ as follows:
$$H_\varepsilon :=H_\varepsilon^{+} + H_\varepsilon^{-},$$ where
\begin{align*}
(H_\varepsilon^-F)(\gamma)&= -\int_{\mathbb R^d} \gamma(dx)(D_{x}^{-}F)(\gamma)\int_{\mathbb R^d}dy \ a_\varepsilon(x-y)
\exp[sE(x,\gamma \setminus x)+(s-1) E(y,\gamma\setminus x)]
\end{align*}
and
\begin{align*}
(H_\varepsilon^+F)(\gamma)&= -\int_{\mathbb R^d} \gamma(dx)\int_{\mathbb R^d}dy \ a_\varepsilon(x-y)
\exp[sE(x,\gamma \setminus x)+(s-1) E(y,\gamma\setminus x)](D_{y}^{+}F)(\gamma \setminus x).
\end{align*}
We can also rewrite $$H_0:=H_0^{+} + H_0^{-},$$ where
\begin{align*}
(H_0^-F)(\gamma)&=-\int_{\mathbb R^d}\gamma(dx) \exp[sE(x,\gamma\setminus x)](D_{x}^{-}F)(\gamma)
\end{align*}
and
$$(H_0^+F)(\gamma)=-\int_{\mathbb R^d}z\, dx \exp[-(1-s)E(x,\gamma)](D_{x}^{+}F)(\gamma).$$
\begin{theorem}
Let $s\in[0,1/2]$ be fixed. Assume that the pair potential $\phi$ and activity $z>0$ satisfy conditions
\rom{(S)} and \rom{(LA-HT)}. Assume that $\phi$ is bounded outside some compact set in $\mathbb R^d$. Assume also that \begin{equation}\label{12345}
\phi(x) \to 0 \text{ as }|x| \to \infty.
\end{equation}
Let $\mu$ be the Gibbs measure from
$\mathcal G(z,\phi)$ as in Theorem \rom{\ref{cfdrer}}. Assume that the function $a$ is chosen so that
\begin{align}
\int_{\mathbb R^d}a(x)dx=\bigg(\int_\Gamma \exp\bigg[(s-1) \sum_{u\in\gamma}\phi(u)\bigg]\mu(d\gamma)\bigg)^{-1}.\label{2}
\end{align}
Then, for any $\varphi \in C_0(\mathbb R^d)$,
$$H_\varepsilon^{\pm}e^{\langle \varphi,\cdot\rangle} \to H_0^{\pm}e^{\langle \varphi,\cdot\rangle} \text{ in }
L^2(\Gamma,\mu) \text{ as }
\varepsilon \to 0,$$ so that
$$H_\varepsilon e^{\langle \varphi,\cdot\rangle} \to H_0 e^{\langle \varphi,\cdot\rangle} \text{ in }
L^2(\Gamma,\mu) \text{ as } \varepsilon \to 0.$$
\end{theorem}
\begin{remark}\rom{
In fact, condition \eqref{12345} can be omitted, and instead one can use the fact that $\phi$ is an integrable function outside a compact set in $\mathbb R^d$ (compare with \cite{FKL}). However, in any reasonable application, the potential $\phi$ does satisfy condition \eqref{12345}.
}\end{remark}
\begin{remark}\rom{
Note that the integral on the right hand side of \eqref{2} is well defined and finite due to Lemma~\ref{n}; see also Remark~\ref{fff}.
}\end{remark}
\noindent {\it{Proof.}}
We first need the following lemma, which generalizes Lemma 4.2 in \cite{FKL}.
\begin{lemma}\label{T}
Let a function $\psi: \mathbb R^d \to \mathbb R$ be such that $e^\psi-1$ is bounded and integrable. Suppose that $A\ge0$,
$B\ge0$, $x_1,x_2,y_1,y_2\in \mathbb R^d$ and $x_1 \neq y_1$. Then
\begin{multline*}\int_\Gamma \exp\bigg[-A E\bigg(\frac{x_1}{\varepsilon}+x_2,\gamma\bigg)-B E\bigg(\frac{y_1}{\varepsilon}+y_2,\gamma\bigg)+
\langle \psi,\gamma\rangle\bigg]\mu(d\gamma)\\
\to \int_\Gamma \exp\bigg[-A\sum_{u\in\gamma}\phi(u)\bigg]\mu(d\gamma) \int_\Gamma \exp\bigg[-B\sum_{u\in\gamma}\phi(u)\bigg]\mu(d\gamma)
\int_\Gamma \exp[\langle \psi,\gamma \rangle]\mu(d\gamma)\end{multline*}
as $\varepsilon \to 0$.
\end{lemma}
\noindent{\it Proof.} By Lemma \ref{n},
\begin{align}
&\int_\Gamma \exp\bigg[-A E((x_1/\varepsilon)
+x_2,\gamma)-B E((y_1/\varepsilon)+y_2,\gamma)+
\langle \psi,\gamma\rangle\bigg]\mu(d\gamma)\notag \\
&\qquad=1+ \sum_{n=1}^\infty \frac{1}{n!}\int_{(\mathbb R^d)^n}(\exp[-A\phi(\cdot -x(\varepsilon))-B\phi(\cdot -y(\varepsilon))
+ \psi(\cdot)]-1)^{\otimes n}(u_1,\dots,u_n)\notag \\
&\qquad\quad\times k_\mu^{(n)}(u_1,\dots,u_n)du_1\dotsm du_n,\label{M}
\end{align}
where $x(\varepsilon):=(x_1/\varepsilon)+x_2$, $y(\varepsilon):=(y_1/\varepsilon)+y_2$.
Using the Ruelle bound, semi-boundedness of $\phi$ from below and the integrability of $\phi$ outside a compact set,
we conclude from the dominated convergence theorem that, in order to find the limit of the
right hand side of \eqref{M} as $\varepsilon \to 0$, it suffices to find the limit of each term
\begin{align}
C_\varepsilon^{(n)}:&=\int_{(\mathbb R^d)^n}(\exp[-A\phi(\cdot -x(\varepsilon))-B\phi(\cdot -y(\varepsilon))
+ \psi(\cdot)]-1)^{\otimes n}(u_1,\dots,u_n)
\notag \\
&\quad\times
k_\mu^{(n)}(u_1,\dots,u_n)\,du_1\dotsm du_n \notag \\
&=\sum_{n_1+n_2+n_3=n} {n \choose n_1\, n_2\, n_3}\int_{(\mathbb R^d)^n}(f_{1,\varepsilon}^{\otimes n_1}
\otimes f_{2,\varepsilon}^{\otimes n_2}\otimes f_{3,\varepsilon}^{\otimes n_3})(u_1,\dots,u_n) \notag\\
&\quad\times k_\mu^{(n)}(u_1,\dots,u_n)\,du_1\dotsm du_n,\label{N}
\end{align}
where
\begin{align*}
&f_{1,\varepsilon}(u):=(\exp[\psi(u)]-1)\exp[-A\phi(u -x(\varepsilon))-B\phi(u -y(\varepsilon))],\\
&f_{2,\varepsilon}(u):=(\exp[-A\phi(u -x(\varepsilon))]-1) \exp[-B\phi(u -y(\varepsilon))],\\
&f_{3,\varepsilon}(u):=\exp[-B\phi(u -y(\varepsilon))]-1,\quad u\in\mathbb R^d.
\end{align*}
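The splitting into $f_{1,\varepsilon}+f_{2,\varepsilon}+f_{3,\varepsilon}$ rests on the elementary telescoping identity $(e^{p}-1)e^{q+r}+(e^{q}-1)e^{r}+(e^{r}-1)=e^{p+q+r}-1$, which a quick numerical check confirms (here $p$, $q$, $r$ stand in for $\psi(u)$, $-A\phi(u-x(\varepsilon))$, $-B\phi(u-y(\varepsilon))$ at a fixed point $u$):

```python
import math
import random

random.seed(0)
for _ in range(100):
    # random stand-ins for the three exponents at a fixed point u
    p, q, r = (random.uniform(-1.0, 1.0) for _ in range(3))
    f1 = (math.exp(p) - 1) * math.exp(q + r)
    f2 = (math.exp(q) - 1) * math.exp(r)
    f3 = math.exp(r) - 1
    # telescoping: f1 + f2 + f3 = e^{p+q+r} - 1
    assert abs((f1 + f2 + f3) - (math.exp(p + q + r) - 1)) < 1e-12
ok = True
```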
Using the definition of the Ursell functions,
we see that
\begin{align*}
&\int_{(\mathbb R^d)^n}(f_{1,\varepsilon}^{\otimes n_1}
\otimes f_{2,\varepsilon}^{\otimes n_2}\otimes f_{3,\varepsilon}^{\otimes n_3})(u_1,\dots,u_n)
k_\mu^{(n)}(u_1,\dots,u_n)\,du_1\dotsm du_n \\
&=\sum\int_{(\mathbb R^d)^n}(f_{1,\varepsilon}^{\otimes n_1}
\otimes f_{2,\varepsilon}^{\otimes n_2}\otimes f_{3,\varepsilon}^{\otimes n_3})(u_1,\dots,u_n)
u_\mu(\eta_1)\dotsm u_\mu(\eta_j)\,du_1\dotsm du_n,
\end{align*}
where the summation is over all partitions $\{\eta_1,\dots,\eta_j\}$ of $\eta=\{u_1,\dots,u_n\}$. We now distinguish the following three cases.
Case 1: Each element $\eta_i$ of the partition is either a subset of $\{u_1,\dots,u_{n_1}\}$, or a subset of
$\{u_{n_1+1},\dots,u_{n_1+n_2}\}$, or a subset of $\{u_{n_1+n_2+1},\dots,u_n\}$.
Set $$u_i'=u_i-x(\varepsilon),\quad i=n_1+1,\dots,n_1+n_2,$$
$$u_i'=u_i-y(\varepsilon),\quad i=n_1+n_2+1,\dots,n.$$
Then using the translation invariance of the Ursell functions, we get that the corresponding term is equal to
\begin{align}
\int_{(\mathbb R^d)^n}(f_{1,\varepsilon}^{\otimes n_1}
\otimes g_{2,\varepsilon}^{\otimes n_2}\otimes g_{3,\varepsilon}^{\otimes n_3})(u_1,\dots,u_n)
u_\mu(\eta_1)\dotsm u_\mu(\eta_j)\,du_1\dotsm du_n,\label{O}
\end{align}
where
\begin{align*}
&g_{2,\varepsilon}(u):=(\exp[-A\phi(u)]-1)\exp[-B\phi(u +((x_1-y_1)/\varepsilon)+x_2-y_2)],\\
&g_{3,\varepsilon}(u):=\exp[-B\phi(u)]-1,\quad u\in\mathbb R^d.
\end{align*}
Note that $x_1-y_1\not=0$ and so for any fixed $u$ (and $x_2,y_2$), we have
$$|u+((x_1-y_1)/\varepsilon)+x_2-y_2| \to +\infty \quad\text{as } \varepsilon \to 0.$$
By \eqref{12345} and the dominated convergence theorem, we therefore have that \eqref{O}
converges to
\begin{align*}
&\int_{(\mathbb R^d)^n}(\exp[\psi(\cdot)]-1)^{\otimes n_1}\otimes (\exp[-A\phi(\cdot)]-1)^{\otimes n_2}\\
&\qquad \otimes (\exp[-B\phi(\cdot)]-1)^{\otimes n_3}(u_1,\dots,u_n)
u_\mu(\eta_1)\dotsm u_\mu(\eta_j)\,du_1\dotsm du_n.
\end{align*}
Case 2: There is an element of the partition which has non-empty intersections with both sets $\{u_1,\dots,u_{n_1}\}$
and $\{u_{n_1+1},\dots,u_n\}$.
Using Theorem \ref{cfdrer}, we have that, for each $n\in\mathbb N$,
$$U_\mu^{(n+1)}\in L^1((\mathbb R^d)^n,dx_1 \dotsm dx_n),$$
where
$$U_\mu^{(n+1)}(x_1,\dots,x_n):=u_\mu^{(n+1)}(x_1,\dots,x_n,0),\quad (x_1,\dots,x_n)\in(\mathbb R^d)^n.$$
Consider the integral
\begin{align*}
&\int_{(\mathbb R^d)^k}(\exp[\psi(u_1)]-1)u_\mu^{(k)}(u_1,\dots,u_k)\,du_1 \dotsm du_k \\
&\qquad=\int_{(\mathbb R^d)^k}(\exp[\psi(u_1)]-1)u_\mu^{(k)}(0,u_2-u_1,u_3-u_1,\dots,u_k-u_1)\,du_1 \dotsm du_k,
\end{align*}
where we used the translation invariance of the Ursell functions.
By changing variables $u_1'=u_1,\,u_2'=u_2-u_1,\,\dots,\,u_k'=u_k-u_1$, we continue as follows:
\begin{align*}
&=\int_{(\mathbb R^d)^k}(\exp[\psi(u_1')]-1)u_\mu^{(k)}(0,u_2',u_3',\dots,u_k')\,du_1' \dotsm du_k'\\
&=\int_{\mathbb R^d}(e^{\psi(u_1)}-1)du_1 \times
\int_{(\mathbb R^d)^{k-1}}U_\mu^{(k)}(u_2,u_3,\dots,u_k)du_2 \dotsm du_k.
\end{align*}
Note also that $$|x(\varepsilon)| \to +\infty \text{ and }|y(\varepsilon)| \to +\infty \text{ as }\varepsilon \to 0,$$
and hence, for each fixed $u\in\mathbb R^d$,
$$\exp[-A\phi(u-x(\varepsilon))]-1 \to 0,\quad \exp[-B\phi(u-y(\varepsilon))]-1 \to 0$$
as $\varepsilon \to 0$. From here, using the dominated convergence theorem, we conclude that
\begin{align*}
&\int_{(\mathbb R^d)^n}(f_{1,\varepsilon}^{\otimes n_1}
\otimes f_{2,\varepsilon}^{\otimes n_2}\otimes f_{3,\varepsilon}^{\otimes n_3})(u_1,\dots,u_n) \\
&\quad\times u_\mu(\eta_1)\dotsm u_\mu(\eta_j)\,du_1 \dotsm du_n \to 0
\end{align*}
as $\varepsilon \to 0$.
Case 3: Case 2 is not satisfied, but there is an element $\eta_l$ of the partition which has non-empty
intersections with both sets
$\{u_{n_1+1},\dots,u_{n_1+n_2}\}$, and $\{u_{n_1+n_2+1},\dots,u_n\}$.
Shift all the variables entering $\eta_l$ by $x(\varepsilon)$. Now, since $\exp[-A\phi]-1 \in L^1(\mathbb R^d ,dx)$,
analogously to case 2, the term converges to zero
as $\varepsilon \to 0$.
Thus, again using the definition of the Ursell functions, we get, for each $n\in\mathbb N$,
\begin{align*}
C_\varepsilon^{(n)} & \to \sum_{n_1+n_2+n_3=n} {n \choose n_1\, n_2\, n_3}\\
&\quad\times
\int_{(\mathbb R^d)^{n_1}}
(\exp[\psi( \cdot )]-1)^{\otimes n_1}(u_1,\dots,u_{n_1}) k_{\mu}^{(n_1)}
(u_1,\dots,u_{n_1})
\,du_1 \dotsm du_{n_1}\\
&\quad\times\int_{(\mathbb R^d)^{n_2}} (\exp[-A\phi(\cdot)]-1)^{\otimes n_2}(u_{n_1+1},\dots,u_{n_1+n_2})k_\mu^{(n_2)}
(u_{n_1+1},\dots,u_{n_1+n_2})\\
&\qquad\qquad\quad\times du_{n_1+1} \dotsm du_{n_1+n_2}\\
&\quad\times\int_{(\mathbb R^d)^{n_3}} (\exp[-B\phi(\cdot)]-1)^{\otimes n_3}(u_{n_1+n_2+1},\dots,u_n )k_\mu^{(n_3)}
(u_{n_1+n_2+1},\dots,u_n)\\
&\qquad\qquad\quad\times
du_{n_1+n_2+1} \dotsm du_n.
\end{align*}
Therefore, the right hand side of \eqref{M} converges to
\begin{align*}
&\bigg(1+ \sum_{n=1}^\infty \frac{1}{n!}\int_{(\mathbb R^d)^n}(\exp[-A\phi(\cdot)]-1)^{\otimes n}(u_1,\dots,u_n)
k_\mu^{(n)}(u_1,\dots,u_n)\,du_1\dotsm du_n \bigg)\\
&\times\bigg( 1+ \sum_{n=1}^\infty \frac{1}{n!}\int_{(\mathbb R^d)^n}(\exp[-B\phi(\cdot)]-1)^{\otimes n}(u_1,\dots,u_n)
k_\mu^{(n)}(u_1,\dots,u_n)\,du_1\dotsm du_n \bigg)\\
&\times \bigg(1+ \sum_{n=1}^\infty \frac{1}{n!}\int_{(\mathbb R^d)^n}(\exp[\psi(\cdot)]-1)^{\otimes n}(u_1,\dots,u_n)
k_\mu^{(n)}(u_1,\dots,u_n)\,du_1\dotsm du_n\bigg)\\
&\qquad=\int_\Gamma \exp\bigg[-A\sum_{u\in\gamma}\phi(u)\bigg]\mu(d\gamma) \int_\Gamma \exp\bigg[-B\sum_{u\in\gamma}\phi(u)\bigg]\mu(d\gamma)
\int_\Gamma \exp[\langle \psi,\gamma \rangle]\mu(d\gamma)
\end{align*}
as $\varepsilon \to 0$, which proves the lemma.\quad $\square$
Now we are in a position to prove the theorem. We fix any $\varphi \in C_0(\mathbb R^d)$ and denote
$F(\gamma):=e^{\langle \varphi,\gamma\rangle}$. It suffices to prove that
\begin{equation}\label{D}
\int_\Gamma (H_\varepsilon^{\pm}F)^2(\gamma)\mu(d\gamma) \to \int_\Gamma (H_0^{\pm}F)^2(\gamma)\mu(d\gamma)
\text{ as } \varepsilon \to 0,
\end{equation}
\begin{equation}\label{S}
\int_\Gamma(H_\varepsilon^{\pm}F)(\gamma)(H_0^{\pm}F)(\gamma)\mu(d\gamma) \to \int_\Gamma (H_0^{\pm}F)^2(\gamma)\mu(d\gamma)
\text{ as } \varepsilon \to 0.
\end{equation}
Now,
\begin{align}
&\int_\Gamma (H_0^-F)^2(\gamma)\mu(d\gamma) \notag\\
&\quad=\int_\Gamma \bigg(-\int_{\mathbb R^d}\gamma(dx) \exp[sE(x,\gamma\setminus x)](D_{x}^{-}F)(\gamma)\bigg)^2 \mu(d\gamma) \notag\\
&\quad=\int_\Gamma \mu(d\gamma)\bigg(-\int_{\mathbb R^d}\gamma(dx) \exp[sE(x,\gamma\setminus x)]
(e^{\langle \varphi,\gamma\rangle -\varphi(x)}-e^{\langle \varphi,\gamma\rangle})\bigg)^2 \notag\\
&\quad=\int_\Gamma \mu(d\gamma)\bigg(-\int_{\mathbb R^d}\gamma(dx) \exp[sE(x,\gamma\setminus x)]
e^{\langle \varphi,\gamma\rangle}(e^{-\varphi(x)}-1)\bigg)^2 \notag\\
&\quad=\int_\Gamma \mu(d\gamma)e^{\langle 2\varphi,\gamma\rangle}\int_{\mathbb R^d}\gamma(dx_1)\int_{\mathbb R^d}\gamma(dx_2)
\exp[sE(x_1,\gamma\setminus x_1)]\exp[sE(x_2,\gamma\setminus x_2)] \notag\\
&\qquad\times
(e^{-\varphi(x_1)}-1)(e^{-\varphi(x_2)}-1) \notag\\
&\quad=\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}z\,dx\,e^{-E(x,\gamma)}e^{\langle 2\varphi,\gamma\cup x \rangle}
\exp[2sE(x,\gamma)](e^{-\varphi(x)}-1)^2
\notag\\
&\qquad\text{}+\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2 \exp[-E(x_1,\gamma)-E(x_2,\gamma)
-\phi(x_1-x_2)]e^{\langle 2\varphi,\gamma\cup x_1 \cup x_2 \rangle} \notag\\
&\qquad\times \exp[sE(x_1,\gamma \cup x_2)]\exp[sE(x_2,\gamma \cup x_1)](e^{-\varphi(x_1)}-1)(e^{-\varphi(x_2)}-1) \notag\\
&\quad=\int_{\mathbb R^d}z\,dx(1-e^{\varphi(x)})^2\int_\Gamma \mu(d\gamma)\exp[(2s-1)E(x,\gamma)+
\langle 2\varphi,\gamma \rangle] \notag\\
&\qquad+\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\,e^{\varphi(x_1)}(1-e^{\varphi(x_1)})e^{\varphi(x_2)}
(1-e^{\varphi(x_2)}) \notag\\
&\qquad\times\exp[(2s-1)\phi(x_1-x_2)]\int_\Gamma \mu(d\gamma)\exp[(s-1)E(x_1,\gamma)
+(s-1)E(x_2,\gamma)+\langle 2\varphi,\gamma \rangle]\label{E}
\end{align}
and
\begin{align}
&\int_\Gamma (H_0^+F)^2(\gamma)\mu(d\gamma) \notag\\
&\quad=\int_{\mathbb R^d}z\ dx_1\,\int_{\mathbb R^d}z\,dx_2\,(e^{\varphi(x_1)}-1)
(e^{\varphi(x_2)}-1) \int_\Gamma \mu(d\gamma)\exp[(s-1)E(x_1,\gamma)\notag\\
&\qquad+(s-1)E(x_2,\gamma)+\langle 2\varphi,\gamma \rangle].\label{F}
\end{align}
Completely analogously to \eqref{E} we have
\begin{align}
&\int_\Gamma (H_\varepsilon^-F)^2(\gamma)\mu(d\gamma) \notag\\
&\quad=\int_{\mathbb R^d}z\,dx\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2(e^{\varphi(x)}-1)^2 a_\varepsilon (x-y_1)
a_\varepsilon(x-y_2)\int_\Gamma \mu(d\gamma) \notag\\
&\qquad\exp[(2s-1)E(x,\gamma)+(s-1)E(y_1,\gamma)+(s-1)
E(y_2,\gamma)+\langle 2\varphi,\gamma \rangle] \notag\\
&\qquad+\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \
e^{\varphi(x_1)}(e^{\varphi(x_1)}-1)e^{\varphi(x_2)}(e^{\varphi(x_2)}-1) \notag\\
&\qquad\times a_\varepsilon (x_1-y_1)a_\varepsilon (x_2-y_2)\
\exp[(2s-1)\phi(x_1-x_2)+(s-1)\phi(y_1-x_2)
\notag \\
&\qquad+(s-1)\phi(x_1-y_2)]\int_\Gamma \mu(d\gamma)\exp[(s-1)E(x_1,\gamma)+(s-1)E(x_2,\gamma) \notag\\
&\qquad+(s-1)E(y_1,\gamma)+(s-1)
E(y_2,\gamma)+\langle 2\varphi,\gamma \rangle].\label{H}
\end{align}
Let us make the change of variables $$y_1'=\varepsilon (y_1-x),\quad \quad y_2'=\varepsilon (y_2-x)$$ in the first integral, and
$$y_1'=\varepsilon (y_1-x_1),\quad \quad y_2'=\varepsilon (y_2-x_2)$$ in the second integral. Then omitting the primes in the
notations of variables, we continue \eqref{H} as follows:
\begin{align}
&=\int_{\mathbb R^d}z\,dx\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2(e^{\varphi(x)}-1)^2 a(y_1)a(y_2)
\int_\Gamma \mu(d\gamma)\exp[(2s-1)E(x,\gamma) \notag\\
&\quad+(s-1)E((y_1/\varepsilon)+x,\gamma)+(s-1)
E((y_2/\varepsilon)+x,\gamma)+\langle 2\varphi,\gamma \rangle] \notag\\
&\quad+\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \
e^{\varphi(x_1)}(e^{\varphi(x_1)}-1)e^{\varphi(x_2)}(e^{\varphi(x_2)}-1) \notag\\
&\quad\times a(y_1)a(y_2)\ \exp[(2s-1)\phi(x_1-x_2)+(s-1)\phi((y_1/\varepsilon)+x_1-x_2)
\notag \\
&\quad+(s-1)\phi((y_2/\varepsilon)+x_2-x_1)]\int_\Gamma \mu(d\gamma)\exp[(s-1)E(x_1,\gamma)+(s-1)E(x_2,\gamma) \notag\\
&\quad+(s-1)E((y_1/\varepsilon)+x_1,\gamma)+(s-1)
E((y_2/\varepsilon)+x_2,\gamma)+\langle 2\varphi,\gamma \rangle].\label{G}
\end{align}
Next,
\begin{align}
&\int_\Gamma (H_\varepsilon^+F)^2(\gamma)\mu(d\gamma) \notag\\
&\quad=\int_\Gamma \mu(d\gamma)\bigg( -\int_{\mathbb R^d} \gamma(dx)\int_{\mathbb R^d}dy \ a_\varepsilon(x-y) \notag\\
&\qquad \times \exp[sE(x,\gamma \setminus x)-(1-s) E(y,\gamma\setminus x)](F(\gamma\setminus x \cup y)
-F(\gamma \setminus x))\bigg)^2 \notag\\
&\quad=\int_\Gamma \mu(d\gamma)\bigg( \int_{\mathbb R^d} \gamma(dx)\int_{\mathbb R^d}dy \ a_\varepsilon(x-y) \notag\\
&\qquad \times \exp[sE(x,\gamma \setminus x)-(1-s) E(y,\gamma\setminus x)]
(e^{\langle \varphi,\gamma\setminus x \rangle +\varphi(y)}-e^{\langle \varphi,\gamma\setminus x \rangle})\bigg)^2 \notag\\
&\quad=\int_\Gamma \mu(d\gamma)
\bigg( \int_{\mathbb R^d} \gamma(dx_1)\int_{\mathbb R^d} \gamma(dx_2)
e^{\langle \varphi,\gamma\setminus x_1 \rangle}e^{\langle \varphi,\gamma\setminus x_2 \rangle}
\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \notag\\
&\qquad\times a_\varepsilon(x_1-y_1)a_\varepsilon(x_2-y_2)
\exp[sE(x_1,\gamma \setminus x_1)-(1-s) E(y_1,\gamma\setminus x_1)]\notag\\
&\qquad\times\exp[sE(x_2,\gamma \setminus x_2)-(1-s) E(y_2,\gamma\setminus x_2)](e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\quad=\int_\Gamma \mu(d\gamma)
\int_{\mathbb R^d} \gamma(dx)e^{\langle 2\varphi,\gamma\setminus x \rangle}
\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \notag\\
&\qquad\times a_\varepsilon(x-y_1)a_\varepsilon(x-y_2)
\exp[sE(x,\gamma \setminus x)-(1-s) E(y_1,\gamma\setminus x)]\notag\\
&\qquad\times\exp[sE(x,\gamma \setminus x)-(1-s) E(y_2,\gamma\setminus x)](e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\qquad +\int_\Gamma \mu(d\gamma)
\int_{\mathbb R^d} \gamma(dx_1)\int_{\mathbb R^d} (\gamma \setminus x_1)(dx_2)
e^{\langle \varphi,\gamma\setminus x_1 \rangle}e^{\langle \varphi,\gamma\setminus x_2 \rangle}
\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \notag\\
&\qquad\times a_\varepsilon(x_1-y_1)a_\varepsilon(x_2-y_2)
\exp[sE(x_1,\gamma \setminus x_1)-(1-s) E(y_1,\gamma\setminus x_1)]\notag\\
&\qquad\times\exp[sE(x_2,\gamma \setminus x_2)-(1-s) E(y_2,\gamma\setminus x_2)](e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\quad=\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}z\,dx \exp[-E(x,\gamma)]e^{\langle 2\varphi,\gamma \rangle}
\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \notag\\
&\qquad\times\varepsilon^{2d}a(\varepsilon(x-y_1))a(\varepsilon(x-y_2))
\exp[sE(x,\gamma)-(1-s) E(y_1,\gamma)]\notag\\
&\qquad\times\exp[sE(x,\gamma)-(1-s) E(y_2,\gamma)](e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\qquad+\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2 \notag\\
&\qquad\times\exp[-E(x_1,\gamma)-E(x_2,\gamma)-\phi(x_1-x_2)]e^{\langle \varphi,\gamma \cup x_2 \rangle}
e^{\langle \varphi,\gamma \cup x_1 \rangle}\notag \\
&\qquad\times a_\varepsilon(x_1-y_1)a_\varepsilon(x_2-y_2)
\exp[sE(x_1,\gamma \cup x_2)-(1-s) E(y_1,\gamma\cup x_2)]\notag\\
&\qquad\times \exp[sE(x_2,\gamma \cup x_1)-(1-s) E(y_2,\gamma \cup x_1)](e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\quad=:{\rm I}+{\rm II}. \label{I}
\end{align}
In the first integral in \eqref{I} let us make the change of variables
$$y_1'=\varepsilon (y_1-x),\quad y_2'=\varepsilon (y_2-x).$$
Then omitting the primes in the
notations of variables, we continue {\rm I} as follows:
\begin{align}
{\rm I}&=\int_{\mathbb R^d}dy_1 \int_{\mathbb R^d}dy_2\, a(y_1)a(y_2)
(e^{\varphi((y_1/\varepsilon)+x)}-1)
(e^{\varphi((y_2/\varepsilon)+x)}-1) \notag\\
&\quad\times\int_\Gamma \mu(d\gamma)\int_{\mathbb R^d}z\,dx \,
\exp[(2s-1)E(x,\gamma)+(s-1)E((y_1/\varepsilon)+x,\gamma) \notag\\
&\quad
+(s-1)E((y_2/\varepsilon)+x,\gamma)
+\langle 2\varphi,\gamma \rangle].\notag
\end{align}
Let us take $$x'=x+(y_1/\varepsilon),$$ then omitting the primes in the notations of variables, we get:
\begin{align}
{\rm I}&=\int_{\mathbb R^d}dy_1 \int_{\mathbb R^d}dy_2\, a(y_1)a(y_2)\int_{\mathbb R^d}z\,dx (e^{\varphi(x)}-1)
(e^{\varphi(x+((y_2-y_1)/\varepsilon))}-1)\notag\\
&\quad\times\int_\Gamma \mu(d\gamma)\exp[(2s-1)E(x-(y_1/\varepsilon),\gamma) \notag\\
&\quad
+(s-1)E(x,\gamma)+(s-1)
E(x+((y_2-y_1)/\varepsilon),\gamma)+\langle 2\varphi,\gamma \rangle] .\label{J}
\end{align}
In the second integral in \eqref{I}, let us make the change of variables
$$x_1'=\varepsilon (x_1-y_1),\quad \quad x_2'=\varepsilon (x_2-y_2).$$
Then omitting the primes, we have:
\begin{align}
{\rm II}&=\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2\,
e^{\varphi((x_1/\varepsilon)+y_1)}
e^{\varphi((x_2/\varepsilon)+y_2)}(e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\quad\times a(x_1)a(x_2)\ \exp[(2s-1)\phi(((x_1-x_2)/\varepsilon)+y_1-y_2)+(s-1)\phi((x_1/\varepsilon)+y_1-y_2)
\notag \\
&\quad+(s-1)\phi(y_1-y_2-(x_2/\varepsilon))]
\int_\Gamma \mu(d\gamma)\exp[(s-1)E(y_1,\gamma)+(s-1)E(y_2,\gamma) \notag\\
&\quad+(s-1)E((x_1/\varepsilon)+y_1,\gamma)+(s-1)
E((x_2/\varepsilon)+y_2,\gamma)+\langle 2\varphi,\gamma \rangle].\label{K}
\end{align}
Using \eqref{J} and \eqref{K}, we get
\begin{align}
&\int_\Gamma (H_\varepsilon^+F)^2(\gamma)\mu(d\gamma) \notag\\
&=\int_{\mathbb R^d}z\,dx\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2\,(e^{\varphi(x)}-1)
(e^{\varphi(x+((y_2-y_1)/\varepsilon))}-1) a(y_1)a(y_2)\notag\\
&\quad\times
\int_\Gamma \mu(d\gamma)\exp[(2s-1)E(x-(y_1/\varepsilon),\gamma) \notag\\
&\quad+(s-1)E(x,\gamma)+(s-1)
E(x+((y_2-y_1)/\varepsilon),\gamma)+\langle 2\varphi,\gamma \rangle] \notag\\
&\quad+\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\int_{\mathbb R^d}dy_1\int_{\mathbb R^d}dy_2\,
e^{\varphi((x_1/\varepsilon)+y_1)}
e^{\varphi((x_2/\varepsilon)+y_2)}(e^{\varphi(y_1)}-1)(e^{\varphi(y_2)}-1) \notag\\
&\quad\times a(x_1)a(x_2)\, \exp[(2s-1)\phi(((x_1-x_2)/\varepsilon)+y_1-y_2)+(s-1)\phi((x_1/\varepsilon)+y_1-y_2)
\notag \\
&\quad+(s-1)\phi(y_1-y_2-(x_2/\varepsilon))]
\int_\Gamma \mu(d\gamma)\exp[(s-1)E(y_1,\gamma)+(s-1)E(y_2,\gamma) \notag\\
&\quad+(s-1)E((x_1/\varepsilon)+y_1,\gamma)+(s-1)
E((x_2/\varepsilon)+y_2,\gamma)+\langle 2\varphi,\gamma \rangle].\label{L}
\end{align}
Completely analogously, we get
\begin{align}
&\int_\Gamma (H_0^-F)(\gamma)(H_\varepsilon^-F)(\gamma)\mu (d\gamma)\notag\\
&=\int_{\mathbb R^d}z\,dx\int_{\mathbb R^d}dy \,(e^{\varphi(x)}-1)^2 a(y)
\int_\Gamma \mu(d\gamma) \exp[(2s-1)E(x,\gamma)+(s-1)E((y/\varepsilon)+x,\gamma)
+\langle 2\varphi,\gamma \rangle]\notag\\
&\quad+\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2\int_{\mathbb R^d}dy\,(e^{\varphi(x_1)}-1)(e^{\varphi(x_2)}-1)
e^{\varphi(x_1)}e^{\varphi(x_2)}a(y) \notag\\
&\quad\times \exp[(2s-1)\phi(x_1-x_2)+(s-1)\phi((y/\varepsilon)+x_1-x_2)]\notag\\
&\quad\times\int_\Gamma \mu(d\gamma)\exp[(s-1)E(x_1,\gamma)+(s-1)E(x_2,\gamma)
+(s-1)E((y/\varepsilon)+x_1,\gamma)+\langle 2\varphi,\gamma \rangle],\label{Q}
\end{align}
and
\begin{align}
&\int_\Gamma (H_0^+F)(\gamma)(H_\varepsilon^+F)(\gamma)\mu (d\gamma)\notag\\
&=\int_{\mathbb R^d}z\,dx_1\int_{\mathbb R^d}z\,dx_2 \int_{\mathbb R^d} dy \, a(x_2)(e^{\varphi(x_1)}-1)\notag\\
&\quad\times (e^{\varphi(y)}-1)\int_\Gamma \mu(d\gamma)e^{\langle 2\varphi,\gamma \rangle}
\exp[(s-1)E((x_2/\varepsilon)+y,\gamma)+(s-1)E(y,\gamma)\notag\\
&\quad+
(s-1)E(x_1,\gamma)+(s-1)\phi((x_2/\varepsilon)+y-x_1)].\label{R}
\end{align}
Using the Ruelle bound and Lemma \ref{n}, we conclude that the integrals over $\Gamma$ on the right hand side of equalities
\eqref{G} and \eqref{L}--\eqref{R} are bounded by a constant which is independent of $\varepsilon$. Therefore, by the
dominated convergence theorem, to find the limit of \eqref{G} and \eqref{L}--\eqref{R} as $\varepsilon \to 0$,
it suffices to find the pointwise limit of the functions appearing before the integral over $\Gamma$, as well as the
limit of the integrals over $\Gamma$ for fixed $x$
($x_1$ and $x_2$, respectively), $y_1$ and $y_2$.
To find the latter limits, we use Lemma \ref{T}. Then, using \eqref{2}, we see that \eqref{G} and \eqref{Q}
converge to \eqref{E}, whereas \eqref{L} and \eqref{R} converge to \eqref{F}. Therefore, \eqref{D} and \eqref{S} hold.\quad $\square$
\end{document}
\begin{document}
\title{On the non-measurability of $\omega$-categorical Hrushovski constructions}
\begin{abstract}
We study $\omega$-categorical $MS$-measurable structures. Our main result is that a class of $\omega$-categorical Hrushovski constructions, supersimple of finite $SU$-rank, is not $MS$-measurable. These results complement the work of Evans on a conjecture of Macpherson and Elwes. In contrast to Evans' work, our structures may satisfy independent $n$-amalgamation for all $n$. We also prove some general results in the context of $\omega$-categorical $MS$-measurable structures. Firstly, in these structures, the dimension in the $MS$-dimension-measure can be chosen to be $SU$-rank. Secondly, non-forking independence implies a form of probabilistic independence in the measure. The latter follows from more general unpublished results of Hrushovski, but we provide a self-contained proof.
\end{abstract}
\section{Introduction}
This work focuses on $\omega$-categorical $MS$-measurable structures and on the role of supersimple $\omega$-categorical Hrushovski constructions in settling some conjectures about them. In particular, we prove that a certain class of $\omega$-categorical supersimple Hrushovski constructions of finite $SU$-rank is not $MS$-measurable. The question of whether, in general, such structures are not $MS$-measurable is still open.\\
Let $\mathcal{L}$ be a first order language. A complete $\mathcal{L}$-theory is \textbf{$\pmb{\omega}$-categorical} if it has a unique countable model up to isomorphism. We say that an $\mathcal{L}$-structure is $\omega$-categorical if its theory is. An $MS$-measurable structure has an associated dimension-measure function on its definable sets, where for the sets of a given dimension there is an invariant, additive measure satisfying Fubini's theorem with respect to the dimension and giving such sets positive measure. We give a formal definition in Definition \ref{MScond}. Standard examples of $MS$-measurable structures are pseudofinite fields, the random graph, and $\omega$-categorical $\omega$-stable structures \cite{EM}. In this article, we focus on whether $\omega$-categorical Hrushovski constructions are $MS$-measurable. These are a class of relational structures, which generalise Fra\"{i}ss\'{e} limits, and whose construction depends on a choice of a parameter $\alpha\in\mathbb{R}^{>0}$ and of a non-decreasing function $f:\mathbb{R}^{>0}\to\mathbb{R}^{>0}$. We discuss their construction in section \ref{Hrushconstr}.\\
In their review article on $MS$-measurable structures \cite{EM}, Elwes and Macpherson ask the following questions which require the study of $\omega$-categorical Hrushovski constructions:
\begin{question}\label{q1}
Is there any $\omega$-categorical supersimple theory of finite $SU$-rank which is not $MS$-measurable?
\end{question}
\begin{question}\label{q2}
Is every $\omega$-categorical $MS$-measurable structure one-based?
\end{question}
Being $MS$-measurable implies being supersimple of finite $SU$-rank \cite{EM}. Furthermore, in such structures $D$-rank, $S1$-rank and $SU$-rank are the same. Hence, the first question is simply asking whether supersimplicity and finite rank imply $MS$-measurability in an $\omega$-categorical context. \\
Supersimple $\omega$-categorical Hrushovski constructions of finite $SU$-rank are essential structures for the study of these questions: they are the only known examples of $\omega$-categorical supersimple structures which are not one-based. Hence, showing that any of these structures is not $MS$-measurable would answer the first question positively, while showing that any of them is $MS$-measurable would answer the second question negatively.
We already know that some $\omega$-categorical Hrushovski constructions are not $MS$-measurable from the work of Evans \cite{Measam}. There, Evans first proves that any $MS$-measurable structure must satisfy a weak form of independent $n$-amalgamation using a version by Towsner of the Hypergraph Removal Lemma \cite{Town2,Gowers,Rodl}. Then, he proves that for a class of Hrushovski constructions (with a ternary relation and $\alpha=1$), any dimension is a scaled version of the natural Hrushovski dimension. With these tools, Evans shows that some such structures do not satisfy the weak $n$-amalgamation property.\\
While Evans' dimension theorem can be generalised to other classes of Hrushovski constructions, it is possible for Hrushovski constructions to satisfy the weak amalgamation condition (and even independent $n$-amalgamation for all $n$) as long as $f$ is slow-growing enough \cite{Udiamalg}. Indeed, a natural question arising from Evans' paper is whether $MS$-measurability for an $\omega$-categorical finite rank structure is implied by satisfying some strong enough form of independent $n$-amalgamation.\\
In this paper, we develop new tools to study $\omega$-categorical $MS$-measurable structures. These will allow us to study how an $MS$-dimension-measure would behave in an $\omega$-categorical Hrushovski construction if it were $MS$-measurable. In particular, in Section \ref{finally}, we introduce the $\omega$-categorical supersimple finite rank Hrushovski constructions $\mathcal{M}_f$ for which we prove the main theorem of this paper:
\begin{customthm}{\ref{eqnstheorem}} The structures $\mathcal{M}_f$ satisfying the conditions of Construction \ref{actualf} are not $MS$\-/measurable. Hence, there are $\omega$-categorical supersimple finite $SU$-rank structures which are not $MS$-measurable. Indeed, there are supersimple $\omega$-categorical structures of finite $SU$-rank and with independent $n$-amalgamation over finite algebraically closed sets for all $n$ which are not $MS$-measurable.
\end{customthm}
The proof of this result is contained in subsection \ref{proof!}. \\
The structure of this paper is as follows. In Section \ref{MSmeasures}, we introduce some basic notions concerning $MS$-measurable structures. We also prove the folklore theorem of Ben Yaacov that the natural notion of independence in $MS$-measurable structures corresponds to non-forking independence. In Corollary \ref{amacor}, we find a set of equations that the measure of an $MS$-measurable structure must satisfy. We also prove that, in an $\omega$-categorical context, we may take the dimension of an $MS$-measurable structure to be $SU$-rank (Corollary \ref{SUok}). This allows us to circumvent the need to prove a dimension theorem for the class of Hrushovski constructions we will consider.
Section \ref{indepmeas} yields another set of equations for the measure in $\omega$-categorical $MS$-measurable structures which show how non-forking independence yields probabilistic independence in the measure (Theorem \ref{indeptheorem}). These results are a special case of the probabilistic independence theorem in the unpublished \cite{AER}.
In Section \ref{Hrushconstr}, we introduce Hrushovski constructions and in subsection \ref{Hrusheqns} we specify the form of the equations from section \ref{MSmeasures} in the context of such structures.
In Section \ref{finally}, we introduce in Remark \ref{actualf} various supersimple $\omega$-categorical Hrushovski constructions of finite $SU$-rank, some of which satisfy independent $n$-amalgamation for all $n$. We will prove these are not $MS$-measurable in subsection \ref{proof!}. Various properties of these structures (especially supersimplicity) are proven in the Appendix.
The most relevant results for understanding the proof of our main theorem are Corollaries \ref{amacor} and \ref{SUok}, Theorems \ref{indeptheorem} and \ref{Hrush}, and subsection \ref{proof!}.\\
We assume some knowledge of model theory, especially regarding $\omega$-categorical structures. Most of the relevant material is covered in Chapters 1 to 4 of Tent and Ziegler's book \cite{TZ}. Section \ref{indepmeas} uses some local stability theory for which the first chapter of \cite{Pillay} is sufficient. We work in countable languages and only with complete theories. We shall make frequent use of the equivalent conditions to $\omega$-categoricity from the Ryll-Nardzewski theorem. We also assume some basic knowledge of simple theories and $SU$-rank. Chapter 2 of \cite{Kimsimp} covers the relevant material.\\
Regarding notation, for a language $\mathcal{L}$, we work on an $\mathcal{L}$-structure $\mathcal{M}$ and write $M$ to denote the underlying set. The monster model for the $\mathcal{L}$-theory of $\mathcal{M}$ is denoted by $\mathbb{M}$. We use overlined lowercase letters at the beginning of the alphabet $\overline{a},\overline{b}, \dots$ to denote finite tuples from a model, and use $\overline{x},\overline{y}, \cdots$ to denote tuples of variables. We use the non-overlined versions when speaking of $1$-tuples is sufficient. We use letters $A, B, C, \dots$ to denote (usually finite) subsets of our model. We use the greek letters $\phi, \psi, \chi\dots$ to denote $\mathcal{L}$-formulas, which we often write in the form $\phi(\overline{x}, \overline{y})$ to specify their free variables.\\
\textbf{Acknowledgements:} I would like to thank David Evans and Charlotte Kestner for their supervision and support on this project. I would also like to thank Ehud Hrushovski for sharing his notes \cite{AER} on a stronger version of the probabilistic independence theorem than the one proved in this article. The theorem greatly simplified the equations needed to prove Theorem \ref{eqnstheorem} and provided an excellent lens to study the measures arising in $MS$-measurable structures. This paper is part of my PhD project at Imperial College London, which is supported by an Admin-Roth Scholarship.
\section{Measurable \texorpdfstring{$\omega$}{omega}-categorical structures}\label{MSmeasures}
In this section, we introduce $MS$-measurable structures and some basic facts about them. We also prove some original results on $\omega$-categorical $MS$-measurable structures.
We begin with Subsection \ref{basicdef}, where we introduce the notion of $MS$-measurability following \cite{MS} and \cite{EM}. Then, in Subsection \ref{dimind}, we prove the folklore result that dimension independence in $MS$-measurable structures corresponds to non-forking independence. Finally, in Subsection \ref{omegameas}, we find a set of equations which hold in $\omega$-categorical $MS$-measurable structures. We also show that if an $\omega$-categorical structure is $MS$-measurable, then it is $MS$-measurable with dimension given by $SU$-rank. This will allow us to avoid proving Evans' dimension theorem for the class of $\omega$-categorical Hrushovski constructions we will consider.
\subsection{Basic definitions}\label{basicdef}
Let $\mathcal{M}$ be a structure. By $\mathrm{Def}(M)$ we mean the set of non-empty definable subsets of $M$ defined by formulas with parameters from $M$. Meanwhile, $\mathrm{Def}_{\overline{x}}(M)$ denotes the definable subsets of $M$ in the variable $\overline{x}$. For $\overline{a}$, $\overline{a}'\in M^{\vert\overline{y}\vert}$ and $A\subset M$, we write $\overline{a}\equiv_A \overline{a}'$ to say that $\overline{a}$ and $\overline{a}'$ have the same type over $A$.
If $A=\emptyset$, we simply write $\overline{a}\equiv \overline{a}'$.
\begin{definition} Let $X$ be any set and consider a function $g:\mathrm{Def(M)}\to X$.
For $A\subseteq M$, we say that $g$ is $A$-\textbf{invariant} if $g(\phi(M^{\vert\overline{x}\vert}, \overline{a}))=g(\phi(M^{\vert\overline{x}\vert}, \overline{a}'))$ whenever $\overline{a}\equiv_A \overline{a}'$. When $A=\emptyset$, we say $g$ is \textbf{invariant}. The function $g$ is \textbf{definable} if for any formula $\phi(\overline{x}, \overline{y})$ and $k\in X$, the set of $\overline{a}\in M^{\vert\overline{y}\vert}$ such that $g(\phi(M^{\vert\overline{x}\vert}, \overline{a}))=k$ is definable over the empty set. We say that $g$ is \textbf{finite} if the set of values of $g(\phi(M^{\vert\overline{x}\vert}, \overline{a}))$ for $\overline{a}\in M^{\vert\overline{y}\vert}$ is finite.\\
To avoid cumbersome notation, when the model $\mathcal{M}$ is clear, we sometimes write $g(\phi(\overline{x}, \overline{a}))$ instead of $g(\phi(M^{\vert\overline{x}\vert}, \overline{a}))$.
\end{definition}
Note that definability always implies invariance. In an $\omega$-categorical context, by Ryll\-/Nardzewski, invariance implies definability and finiteness.
Before introducing the notion of an $MS$-measurable structure, we follow the notation of \cite{WagDim} to speak of a dimension function:
\begin{definition}
We call $\mathrm{dim}:\mathrm{Def}(M)\to \mathbb{N}$ a \textbf{dimension} if it satisfies the following conditions:
\begin{itemize}
\item (Algebraicity) for $X$ finite and non-empty, $\mathrm{dim}(X)=0$;
\item (Union) for $X, Y\in\mathrm{Def}(M)$, $\mathrm{dim}(X\cup Y)=\mathrm{max}\{\mathrm{dim}(X), \mathrm{dim}(Y)\}$; and
\item (Additivity) for finite tuples $\overline{a},\overline{b}, \overline{c}$ from $M$, $\mathrm{dim}(\overline{a}\overline{b}/\overline{c})=\mathrm{dim}(\overline{a}/\overline{b}\overline{c})+\mathrm{dim}(\overline{b}/\overline{c})$.
\end{itemize}
\end{definition}
\begin{remark} In the additivity condition, by $\mathrm{dim}(\overline{a}/B)$ we mean $\mathrm{dim}(\mathrm{tp}(\overline{a}/B))$. For a partial type $\pi(\overline{x})$ over $B\subseteq M$, we define
\[\mathrm{dim}(\pi(\overline{x}))=\mathrm{min}\{\mathrm{dim}(\phi(M^{\vert\overline{x}\vert}, \overline{b})) \ \vert \ \pi(\overline{x})\vdash \phi(\overline{x}, \overline{b}) \}.\]
\end{remark}
\begin{remark} In the course of this paper, we work with definable dimensions in an $\omega$-categorical context. Hence, for $\overline{a}, \overline{b}$ finite tuples from $M$ we have that
\[\mathrm{dim}(\overline{a}/\overline{b})=\mathrm{dim}(\phi(M^{\vert\overline{x}\vert}, \overline{b})),\]
where $\phi(\overline{x}, \overline{b})$ is a formula isolating $\mathrm{tp}(\overline{a}/\overline{b})$.
\end{remark}
\begin{definition} We say that the function $\mu:\mathrm{Def}_{\overline{x}}(M)\to \mathbb{R}^{\geq 0}\cup \{\infty\}$ is a \textbf{measure} if it is a finitely additive function, i.e. for $X$ and $Y$ definable and disjoint in the same variable,
\[\mu(X\cup Y)=\mu(X)+\mu(Y).\]
We say $\mu$ is a \textbf{Keisler} measure if it takes values in $[0,1]$.
\end{definition}
We give here a slightly simplified version of the original definition of $MS$-measurability, following \cite[Proposition 5.7]{MS}:
\begin{definition}\label{MScond} The $\mathcal{L}$-structure $\mathcal{M}$ is $\pmb{MS}$\textbf{-measurable} if there is a dimension-measure function $h:\mathrm{Def(M)}\to\mathbb{N}\times\mathbb{R}^{>0}$, with notation $h(X)=\dimeas{X}$, satisfying the following conditions:
\begin{enumerate}
\item The function $h$ is finite and definable.
\item For $\overline{a}\in M^n$, $h(\{\overline{a}\})=(0,1)$.
\item (Additivity) For $X, Y\subseteq M^n$ definable and disjoint,
\begin{equation*} \mu(X\cup Y)=
\left\{ \begin{array}{ll}
\mu(X)+\mu(Y), & \text{for } \mathrm{d}(X)=\mathrm{d}(Y);\\
\mu(Y), & \text{for } \mathrm{d}(X)<\mathrm{d}(Y).
\end{array}\right.
\end{equation*}
\item (Fubini) For $n\geq 2$, let $X\subseteq M^n$ be definable and for $1\leq m<n$, let $\pi:M^n\to M^m$ be a projection of $M^n$ to $m$-many coordinates. Suppose there is $(k, \nu)\in\mathbb{N}\times\mathbb{R}^{>0}$ such that for any $\overline{a}\in\pi(X)$, we have that $h(\pi^{-1}(\overline{a})\cap X)=(k, \nu)$. Then, $d(X)=d(\pi(X))+k$ and $\mu(X)=\mu(\pi(X))\nu$.
\end{enumerate}
\end{definition}
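To make Definition \ref{MScond} concrete, here is a sketch of a dimension-measure on the random graph, one of the standard examples mentioned above; the normalisation $2^{-\binom{k}{2}}$ is the one suggested by the finite random graphs $G(n,1/2)$, and we only check Fubini for one coordinate projection. For a complete type $p$ over $\emptyset$ of $k$ distinct vertices with a specified induced graph, set
\[h(p(M^{k}))=\left(k,\ 2^{-\binom{k}{2}}\right),\]
the limit of the normalised counting measures in $G(n,1/2)$. For the projection $\pi$ forgetting the last coordinate, each fibre fixes the edges between one new vertex and $k-1$ named vertices, so $h(\pi^{-1}(\overline{a})\cap p(M^{k}))=(1, 2^{-(k-1)})$, and indeed
\[k=(k-1)+1 \quad\text{and}\quad 2^{-\binom{k}{2}}=2^{-\binom{k-1}{2}}\cdot 2^{-(k-1)},\]
as Fubini requires.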
\begin{remark}\label{eqMS}
From the definition it follows that being $MS$-measurable is a property of a theory, i.e. if $\mathcal{M}$ is $MS$-measurable, then so is any elementarily equivalent structure. We say that a many sorted structure $\mathcal{M}^*$ is $MS$-measurable if every restriction of $\mathcal{M}^*$ to finitely many sorts is $MS$-measurable. We have that if $\mathcal{M}$ is $MS$-measurable, then so is $\mathcal{M}^{eq}$ \cite[Proposition 5.10]{MS}.
\end{remark}
\begin{remark} An important, and somewhat hidden, assumption of the definition of $MS$\-/measurability is that of \textbf{positivity}: for $h(X)=(k, \nu)$, we require $\nu>0$. This will be vital to our proof of non-measurability, since we will show that a given definable set must be assigned measure zero.
\end{remark}
\begin{remark}
As noted in \cite[Proposition 5.3]{MS}, for $A$-definable $X$, we have an induced $A$-invariant probability measure on the definable subsets of $X$.
\end{remark}
\begin{remark} The dimension part of an $MS$-function is a definable dimension function, as defined above. Additivity follows from \cite[p.919]{WagDim}.
\end{remark}
In an $MS$-measurable context, for $\pi(\overline{x})$ a partial type over $B\subseteq M$, we define
\[\mu(\pi(\overline{x}))=\mathrm{inf}\{\mu(\phi(\overline{x}, \overline{b}))\vert \pi(\overline{x})\vdash \phi(\overline{x}, \overline{b}), \mathrm{d}(\phi(\overline{x}, \overline{b}))=\mathrm{d}(\pi(\overline{x}))\},\]
and we write $\mu(\overline{a}/B)$ for $\mu(\mathrm{tp}(\overline{a}/B))$.
For an $\omega$-categorical $MS$-measurable structure, $\mu(\overline{a}/\overline{b})\allowbreak=\mu(\phi(M^{\vert\overline{x}\vert}, \overline{b}))$, for any formula $\phi(\overline{x}, \overline{b})$ isolating $\mathrm{tp}(\overline{a}/\overline{b})$.
\begin{remark}\label{Fub}
When speaking of the dimension and measure of types we shall also set the convention that $\mathrm{d}(\emptyset)=0$ and $\mu(\emptyset)=1$. This is helpful to treat the case of types over $\emptyset$ analogously to that of types over sets of parameters when dealing with Fubini. In fact, we shall make frequent use of the fact that, in an $\omega$-categorical $MS$-measurable structure, Fubini implies that \[h(\overline{a}\overline{b})=(\mathrm{dim}(\overline{a}\overline{b}), \mu(\overline{a}\overline{b}))=(\mathrm{dim}(\overline{a}/\overline{b})+\mathrm{dim}(\overline{b}),\mu(\overline{a}/\overline{b})\mu(\overline{b})).\]
Hence, we adopt the conventions $\mathrm{d}(\emptyset)=0$ and $\mu(\emptyset)=1$ so that we may write
\[\mathrm{dim}(\overline{a})=\mathrm{dim}(\overline{a}/\emptyset)+\mathrm{dim}(\emptyset)=\mathrm{dim}(\overline{a}/\emptyset), \text{ and}\]
\[\mu(\overline{a})=\mu(\overline{a}/\emptyset)\mu(\emptyset)=\mu(\overline{a}/\emptyset).\]
\end{remark}
\subsection{Dimension-independence}\label{dimind}
It is easy to obtain a notion of independence from an invariant dimension. In particular, in an $MS$-measurable structure, dimension-independence corresponds to non-forking independence. In \cite{EM}, this result is attributed to unpublished work of Ben-Yaacov. In this section we prove it briefly. In the next section, this will help us show that in $\omega$-categorical $MS$-measurable structures we may take the dimension to be $SU$-rank without harm.
\begin{definition} Let $\mathrm{d}:\mathrm{Def}(\mathbb{M})\to \mathbb{N}$ be an invariant dimension, where $\mathbb{M}$ is a monster model. Let $\overline{a}$ be a tuple and $B, C$ be small subsets of $\mathbb{M}$. We say that $\overline{a}$ is $\mathrm{d}$-independent from $B$ over $C$, writing $\overline{a}\indep{C}{d} B$, if
\[\mathrm{d}(\overline{a}/BC)=\mathrm{d}(\overline{a}/C).\]
\end{definition}
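As a toy illustration: in the random graph, where one may take $\mathrm{d}(\overline{a}/B)$ to be the number of entries of $\overline{a}$ not lying in $B$, a single vertex $a$ satisfies
\[a\indep{C}{d} B \quad\text{if and only if}\quad a\in B\cup C \text{ implies } a\in C,\]
which agrees with non-forking independence in that structure, in accordance with Theorem \ref{dimtheom} below.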
The following is easy to prove from the basic properties of an invariant dimension:
\begin{prop} Let $\mathrm{d}:\mathrm{Def}(\mathbb{M})\to \mathbb{N}$ be an invariant dimension. The relation of $\mathrm{d}$\-/independence is a notion of independence in the sense of \cite[Definition 4.1]{KPsimp}.
\end{prop}
Hence, in an $MS$-measurable context, the dimension part of an $MS$-dimension-measure yields a notion of independence. We shall prove that this notion coincides with non-forking independence. To do so, first recall Lemma 3.5 from \cite{EM}:
\begin{lemma}\label{divlemma} Let $\mathcal {M}$ be $MS$-measurable. Let $X\subseteq M^{\vert\overline{x}\vert}$ be definable and $\phi(\overline{x}, \overline{y})$ be such that there is an indiscernible sequence $(\overline{b}_i)_{i<\omega}$ such that $\{\phi(\overline{x}, \overline{b}_i)\vert i<\omega\}$ is inconsistent and $\phi(M^{\vert\overline{x}\vert}, \overline{b}_i)\subseteq X$ for each $i<\omega$. Then, $\mathrm{d}(X)>\mathrm{d}(\phi(\overline{x}, \overline{b_i}))$.
\end{lemma}
From this result, it is easy to prove that dimension independence implies non-forking independence.
\begin{prop}\label{indepimpl} Suppose that $\overline{a}\indep{C}{d} B$. Then, $\mathrm{tp}(\overline{a}/BC)$ does not divide over $C$. Hence, by simplicity, $\overline{a}\indep{C}{d} B$ implies $\overline{a}\indep{C}{f}B$, where $\indep{C}{f}$ is non-forking independence.
\end{prop}
\begin{proof} We argue by contrapositive. Suppose that $\mathrm{tp}(\overline{a}/BC)$ divides over $C$. So there is $\phi(\overline{x}, \overline{d})\in\mathrm{tp}(\overline{a}/BC)$ such that $\{\phi(\overline{x}, \overline{d}_i)\vert i<\omega\}$ is $k$-inconsistent and $(\overline{d}_i)_{i<\omega}$ is $C$-indiscernible with $\overline{d}_0=\overline{d}$. Let $\chi(\overline{x})$ be a formula defined over $C$ witnessing $\mathrm{d}(\overline{a}/C)$. Consider $\phi(\overline{x}, \overline{d}_i)\wedge\chi(\overline{x})$. Since $\phi(\overline{x}, \overline{d})\in\mathrm{tp}(\overline{a}/BC)$, we have that $\vDash\exists \overline{x}\,\phi(\overline{x}, \overline{d})\wedge \chi(\overline{x})$, and since $\overline{d}_i\equiv_C\overline{d}$, $\vDash\exists \overline{x}\,\phi(\overline{x}, \overline{d}_i)\wedge \chi(\overline{x})$. Thus, $\phi(\mathbb{M}^{\vert\overline{x}\vert}, \overline{d}_i)\wedge\chi(\mathbb{M}^{\vert\overline{x}\vert})\subseteq \chi(\mathbb{M}^{\vert\overline{x}\vert})$, and $\{\phi(\overline{x}, \overline{d}_i)\wedge\chi(\overline{x}) \vert i<\omega\}$ is inconsistent. The conditions of Lemma \ref{divlemma} are met and
\[\mathrm{d}(\chi(\mathbb{M}^{\vert\overline{x}\vert}))>\mathrm{d}(\phi(\mathbb{M}^{\vert\overline{x}\vert}, \overline{d}_i)\wedge\chi(\mathbb{M}^{\vert\overline{x}\vert}))=\mathrm{d}(\phi(\mathbb{M}^{\vert\overline{x}\vert}, \overline{d})\wedge\chi(\mathbb{M}^{\vert\overline{x}\vert}))=\mathrm{d}(\phi(\mathbb{M}^{\vert\overline{x}\vert}, \overline{d})),\]
where the second equality holds by invariance of the dimension, and the last since $\phi(\mathbb{M}^{\vert\overline{x}\vert}, \overline{d})\subseteq \chi(\mathbb{M}^{\vert\overline{x}\vert})$. By our choice of $\chi(\overline{x})$, we have that $\mathrm{d}(\overline{a}/BC)<\mathrm{d}(\overline{a}/C)$, implying that $\overline{a}$ is not $\mathrm{d}$-independent from $B$ over $C$, as required.
\end{proof}
The other implication holds by the following result of Kim and Pillay \cite[Theorem 4.2, Claim I]{KPsimp}:
\begin{lemma} Let $T$ be an arbitrary theory and $\indep{}{*}$ be a notion of independence. Suppose that $a\indep{C}{f}B$. Then, $a\indep{C}{*}B$.
\end{lemma}
This yields the desired result:
\begin{theorem}\label{dimtheom} Let $\mathcal{M}$ be an $MS$-measurable structure. Then, dimension independence is the same as non-forking independence.
\end{theorem}
\begin{remark} An alternative way to prove that dimension independence coincides with non-forking independence is to note that the independence theorem is essentially proven in Theorem 2.18 of \cite{approxsubg}.
\end{remark}
\subsection{\texorpdfstring{$MS$}{MS}-measurability in an \texorpdfstring{$\omega$}{omega}-categorical context}\label{omegameas}
In this subsection, we introduce, in Corollary \ref{amacor}, some basic equations which must be satisfied in $\omega$\-/categorical $MS$-measurable structures. Observations regarding these equations allow us to prove that in an $\omega$-categorical context we may always take the dimension part of an $MS$-dimension-measure to be $SU$-rank, as shown in Corollary \ref{SUok}.\\
The following result is a standard fact about $\omega$-categorical $MS$-measurable structures. As far as I can tell, a version of this is first proven in Elwes' PhD thesis \cite[Lemma 5.2.1]{ElwesPhD}:
\begin{lemma} Let $\mathcal{M}$ be $\omega$-categorical and MS-measurable. Let $B\supseteq A$ be finite subsets of $M$. Suppose $p$ is a partial type over $A$. By $\omega$-categoricity, $p$ has finitely many complete extensions to $B$. Let $p_1, \dots, p_n$ be those which do not fork over $A$. Then,
\begin{equation}
\mu(p)=\sum_{i=1}^n \mu(p_i).
\end{equation}
\end{lemma}
\begin{proof}
Suppose the complete extensions of $p$ to $B$ are $p_1, \dots, p_m$ with $m\geq n$ where $p_1, \dots, p_n$ have maximal dimension. By Theorem \ref{dimtheom}, these are the non-forking extensions of $p$ to $B$. Consider the sets
\[p_i(M^{\vert\overline{x}\vert}):=\left\{\overline{c}\in M^{\vert\overline{x}\vert} \ \big\vert \ \overline{c}\vDash p_i(\overline{x})\right\}.\]
By $\omega$-categoricity, these are disjoint definable sets such that $\bigcup_{i=1}^m p_i(M^{\vert\overline{x}\vert})=p(M^{\vert\overline{x}\vert})$. Hence, by additivity:
\[\mu(p)=\mu\left(\bigcup_{i=1}^n p_i(M^{\vert\overline{x}\vert})\cup \bigcup_{i>n}^m p_i(M^{\vert\overline{x}\vert})\right)=\mu\left(\bigcup_{i=1}^n p_i(M^{\vert\overline{x}\vert})\right)=\sum_{i=1}^n \mu(p_i),\]
where the second-to-last equality holds by additivity in the case $\mathrm{d}(X)<\mathrm{d}(Y)$, and the last by additivity in the case $\mathrm{d}(X)=\mathrm{d}(Y)$.
\end{proof}
By an easy application of Fubini (see Remark \ref{Fub}), we get equations in terms of types over the empty set and obtain:
\begin{corollary}\label{amacor} Let $\mathcal{M}$ be $\omega$-categorical and MS-measurable. Take $\overline{a}, \overline{b}, \overline{c}$ to be tuples from $M$. Let $\mathrm{tp}(\overline{c}_1/\overline{a}\overline{b}), \dots, \mathrm{tp}(\overline{c}_n/\overline{a}\overline{b})$ be the finitely many complete non-forking extensions of $\mathrm{tp}(\overline{c}/\overline{a})$ to $\overline{a}\overline{b}$. Then,
\begin{equation}\label{amacoreq}
\frac{\mu(\overline{ac}) \mu( \overline{ab})}{\mu( \overline{a})}=\sum_{i=1}^n \mu(\overline{ab c_i}).
\end{equation}
\end{corollary}
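For completeness, the short derivation: by the previous lemma, $\mu(\overline{c}/\overline{a})=\sum_{i=1}^n \mu(\overline{c}_i/\overline{a}\overline{b})$, while Fubini (see Remark \ref{Fub}) gives
\[\mu(\overline{c}/\overline{a})=\frac{\mu(\overline{a}\overline{c})}{\mu(\overline{a})} \quad\text{and}\quad \mu(\overline{c}_i/\overline{a}\overline{b})=\frac{\mu(\overline{a}\overline{b}\,\overline{c}_i)}{\mu(\overline{a}\overline{b})};\]
substituting these identities into the first equation and multiplying through by $\mu(\overline{a}\overline{b})$ yields (\ref{amacoreq}).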
We shall use the above corollary in our proof of non-measurability. There is a partial converse to this corollary. In fact, if an $\omega$-categorical structure has a function $\mu: S(\emptyset)\to\mathbb{R}^{>0}$ satisfying equations of the form of (\ref{amacoreq}) and an algebraicity condition, then it is $MS$-measurable. This can be extracted from the proof of Theorem \ref{changedim}. In this theorem, we show that in an $\omega$-categorical $MS$-measurable structure we may choose our dimension to be any definable dimension yielding non-forking independence as its notion of independence.
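As a purely numerical sanity check on equation (\ref{amacoreq}), the following short script instantiates it in the random graph. The measure $2^{-\binom{k}{2}}$ for a complete type of $k$ distinct vertices and the count $2^{\vert\overline{c}\vert\vert\overline{b}\vert}$ of non-forking extensions for pairwise disjoint tuples are assumptions of this sketch, not part of the paper's formal development:

```python
from fractions import Fraction

def mu(k: int) -> Fraction:
    # Measure of a complete type of k distinct vertices in the random
    # graph: the type fixes each of the k(k-1)/2 pairs as edge or
    # non-edge, and the fraction of k-tuples realising that pattern in
    # the finite random graph G(n, 1/2) tends to 2^{-k(k-1)/2}.
    # (Assumed normalisation for this sketch.)
    return Fraction(1, 2 ** (k * (k - 1) // 2))

def amacor_holds(na: int, nb: int, nc: int) -> bool:
    # Check equation (amacoreq) for pairwise disjoint tuples a, b, c of
    # the given lengths.  A non-forking extension of tp(c/a) to ab keeps
    # c disjoint from b and freely decides each of the |c||b| cross
    # pairs, giving 2^{|c||b|} extensions, each a complete type on
    # na + nb + nc vertices.
    lhs = mu(na + nc) * mu(na + nb) / mu(na)
    rhs = 2 ** (nc * nb) * mu(na + nb + nc)
    return lhs == rhs

# The identity holds for all small tuple lengths:
assert all(amacor_holds(na, nb, nc)
           for na in range(4) for nb in range(1, 4) for nc in range(1, 4))
print("equation (amacoreq) verified for the random graph")
```

Of course, such a check only exercises one known $MS$-measurable structure; the point of the corollary is that the same equations constrain any candidate measure on the Hrushovski constructions of Section \ref{finally}.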
\begin{theorem}\label{changedim} Suppose that $\mathcal{M}$ is an $\omega$-categorical $MS$-measurable structure with dimension-measure $h=(\mathrm{d}, \mu)$. Let $D:\mathrm{Def}(M)\to \mathbb{N}$ be any definable dimension for $\mathcal{M}$ whose induced notion of independence is non-forking independence. Then, $\mathcal{M}$ has a dimension-measure $h'=(D, \mu')$, where $\mu'$ agrees with $\mu$ on complete types over finite sets of parameters.
\end{theorem}
\begin{proof} By Ryll-Nardzewski, $D$ is invariant and finite. The dimension-measure $h=(\mathrm{d}, \mu)$ induces a function on types over finite sets of parameters $\mu^*:S_{\mathrm{fin}}(M)\to\mathbb{R}^{>0}$ given by $\mu^*(\overline{a}/\overline{b})=\mu(\overline{a}/\overline{b})$. We can write any set $X$ definable over $\overline{a}$ as a finite disjoint union of the sets of realisations of complete types $p_1, \dots, p_m$ over $\overline{a}$. Say that $p_1, \dots, p_n$ have maximal $D$-dimension. Then, we define
\[\mu'(X)=\sum_{i=1}^n \mu^*(p_i).\]
We need to show that our definition does not depend on the choice of parameters over which $X$ is defined. Suppose that $X$ is defined both over $\overline{a}$ and $\overline{b}$. We may assume without loss of generality that $\overline{a}\subseteq\overline{b}$. Each $p_i$ over $\overline{a}$ has finitely many complete extensions of maximal $D$-dimension to $\overline{b}$, say $p'_{i,1}, \dots, p'_{i, m_i}$, which, by assumption, are the non-forking extensions of $p_i$ to $\overline{b}$. By the dimension theorem \ref{dimtheom}, these are the extensions of $p_i$ of maximal $d$-dimension, and so, by $MS$-measurability, we know that
\[\mu(p_i)=\sum_{j=1}^{m_i}\mu(p'_{i,j}).\]
But then,
\[\sum_{i=1}^n\mu(p_i)=\sum_{i=1}^n\sum_{j=1}^{m_i}\mu(p'_{i,j}),\]
where the left-hand side is the definition of $\mu'(X)$ computed over $\overline{a}$ and the right-hand side is the definition of $\mu'(X)$ computed over $\overline{b}$. This yields that $\mu'$ is well defined. Positivity, algebraicity, additivity and Fubini are easy to prove. Hence, $(D, \mu')$ yields an $MS$-dimension-measure for $\mathcal{M}$.
\end{proof}
An important consequence of this theorem is that for $MS$-measurable $\omega$-categorical structures we may take the dimension to be $SU$-rank. In a structure of finite $SU$-rank, $SU$-rank is a dimension function: it is additive by the Lascar inequalities \cite[Prop. 2.5.19]{Kimsimp}. In an $\omega$-categorical structure, it is invariant and hence definable. Since $MS$-measurable structures have finite $SU$-rank, in $\omega$-categorical $MS$-measurable structures, $SU$-rank yields a definable dimension. By definition it induces non-forking independence as its notion of independence. Hence, it satisfies all of the conditions of Theorem \ref{changedim}, and we get the following corollary.
\begin{corollary}\label{SUok} Suppose that the $\omega$-categorical structure $\mathcal{M}$ is $MS$-measurable with dimension-measure $h=(d, \mu)$. Then, $\mathcal{M}$ is also $MS$-measurable via the dimension-measure $h'=(SU, \mu')$, where $\mu'$ is as in Theorem \ref{changedim}. Hence, if an $\omega$-categorical structure $\mathcal{M}$ is $MS$-measurable, there is a dimension-measure on $\mathcal{M}$ where the dimension is given by $SU$-rank.
\end{corollary}
This statement is known to be false outside of an $\omega$-categorical context. In particular, in \cite[Remark 3.8]{EM}, Elwes and Macpherson show that there are $MS$-measurable structures where $SU$-rank is not definable. They also give an example of an $\omega$-categorical structure for which we can artificially choose the dimension in the $MS$-measure and $SU$-rank to differ. However, our theorem does prove that if there is an $MS$-measure in the $\omega$-categorical case, there is no harm in taking the dimension to be $SU$-rank.
Corollary \ref{SUok} is a powerful tool in the context of $MS$-measurable $\omega$-categorical structures. For example, Evans' proof \cite{Measam} employs a highly non-trivial theorem showing that in the class of $\omega$-categorical structures he is considering, any dimension function corresponds to a scaled $SU$-rank. Our result allows us to skip proving such a theorem. Indeed, for other $\omega$-categorical structures such a theorem is false (e.g. the example in \cite[Rem. 3.8]{EM}).
\section{Independence in measure}\label{indepmeas}
In this section, we obtain another set of equations that hold for $MS$-measurable $\omega$-categorical structures. The main idea is that in an $MS$-measurable structure, non-forking independence induces probabilistic independence in the measure. This is shown explicitly in Theorem \ref{indeptheorem} and Corollary \ref{triang}, which yields equations that we will use later in our proof of non-$MS$-measurability. We thank Ehud Hrushovski for sharing his notes for \cite{AER}, in which he proves a more general version of Theorem \ref{indeptheorem}, implying the results in this section. Working in an $\omega$-categorical $MS$-measurable context greatly simplifies the tools and the amount of theory required to obtain the probabilistic independence theorem. Hence, we give proofs for these results in this section.\\
We work in an $\omega$-saturated $\mathcal{L}$-structure $\mathcal{M}$. Let $\mu:\mathrm{Def}_{\overline{y}}(M)\to [0,1]$ be an invariant measure and $\phi(\overline{x}_1, \overline{y}), \psi(\overline{x}_2, \overline{y})$ be $\mathcal{L}$-formulas. We define the relation $R_\alpha(\overline{x}_1, \overline{x}_2)$ on $M^{\vert\overline{x}_1\vert}\times M^{\vert\overline{x}_2\vert}$ by
\[R_\alpha(\overline{a}, \overline{b}) \text{ if and only if } \mu(\phi(\overline{a}, \overline{y})\wedge \psi(\overline{b}, \overline{y}))=\alpha.\]
From \cite[Prop.2.25]{approxsubg} we know that $R_\alpha(\overline{x}_1, \overline{x}_2)$ is stable.\\
We shall begin by showing that, for an independent pair of tuples, whether $R_\alpha(\overline{a}, \overline{b})$ holds only depends on the individual types of $\overline{a}$ and $\overline{b}$. This can be seen as a consequence of a commonly used corollary to the finite equivalence relation theorem \cite[Lemma 2.11]{Pillay} (see \cite{psfH} or \cite{KPsimp} for similar uses). We need to introduce some notation to express the result.
\begin{definition} Let $\delta(\overline{x};\overline{z})$ be an $\mathcal{L}$-formula and $A$ a set of parameters. An instance of $\delta$ over $A$ is a formula $\delta(\overline{x};\overline{a})$ in $\mathcal{L}_A$, i.e. the language obtained by adding to $\mathcal{L}$ constants for the elements of $A$. A complete $\delta$-type over $A$ is a maximally consistent set of instances of $\delta$ over $A$. By $S_\delta^{\overline{x}} (A)$ we mean the set of complete $\delta$-types over $A$. By $\mathrm{FER}_\delta(A)$ we denote the set of $A$-definable equivalence relations $E(\overline{x}, \overline{y})$ with finitely many classes such that for any $\overline{b}$, $E(\overline{x}, \overline{b})$ is equivalent to a Boolean combination of instances of $\delta$ over $A$.
\end{definition}
With the above notation, we can prove the following well-known application of the finite equivalence relation theorem:
\begin{prop} Let $\mathcal{M}$ be an $\omega$-saturated $\mathcal{L}$-structure with $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$. Let $\delta(\overline{x},\overline{y})$ be a stable $\mathcal{L}$-formula. Suppose that $\overline{a}$ and $\overline{b}$ are tuples from $\mathcal{M}$ with $\overline{a}\indep{}{}\overline{b}$. Then, whether $\vDash \delta(\overline{a},\overline{b})$ only depends on $\mathrm{tp}(\overline{a})$ and $\mathrm{tp}(\overline{b})$, but not on $\mathrm{tp}(\overline{ab})$.
\end{prop}
\begin{proof} Let $\overline{a}$ and $\overline{a}'$ be such that $\overline{a}\indep{}{}\overline{b}, \overline{a}'\indep{}{}\overline{b}$ and $\overline{a}\equiv\overline{a}'$. In particular, $\overline{a}$ and $\overline{a}'$ have the same $\delta$-type over $\emptyset$.\\
Let $B\subset M$ and suppose $q_1,q_2\in S^{\overline{x}}_\delta(B)$ are non-forking extensions of $\mathrm{tp}_{\delta}(\overline{a})$, the $\delta$-type of $\overline{a}$ over the empty set. By stability of $\delta(\overline{x}, \overline{y})$ and the finite equivalence relation theorem \cite[Lemma 2.11]{Pillay}, $q_1\neq q_2$ if and only if there is a finite equivalence relation $E(\overline{x}, \overline{y})\in\mathrm{FER}_\delta(\emptyset)$ such that $q_1(\overline{x})\cup q_2(\overline{y})\vdash \neg E(\overline{x}, \overline{y})$. By $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$,
any such $E(\overline{x}, \overline{y})$ is trivial, i.e. tuples with the same type over the empty set lie in the same equivalence class. In particular, since $E(\overline{x}, \overline{a})$ is a Boolean combination of instances of $\delta$ over $\emptyset$, the equivalence class of $\overline{a}$ is entirely determined by $\mathrm{tp}_{\delta}(\overline{a})$, and so $q_1=q_2$. Hence, for any $B$ there is a unique $\delta$-type over $B$ which is a non-forking extension of $\mathrm{tp}_{\delta}(\overline{a})$. In particular, $\mathrm{tp}_{\delta}(\overline{a}/\overline{b})=\mathrm{tp}_{\delta}(\overline{a}'/\overline{b})$, which yields
\[\vDash \delta(\overline{a}, \overline{b})\leftrightarrow \delta(\overline{a}', \overline{b}).\]
We can apply the same reasoning in the variable $\overline{y}$ to prove that for $\overline{a}\indep{}{}\overline{b}$ and $\overline{a}'\indep{}{}\overline{b}'$ with $\overline{a}\equiv\overline{a}'$ and $\overline{b}\equiv\overline{b}'$ we have
\[\vDash \delta(\overline{a}, \overline{b})\leftrightarrow \delta(\overline{a}', \overline{b}').\]
\end{proof}
From this and stability of $R_\alpha(\overline{x}, \overline{y})$ we immediately get the following result for an invariant Keisler measure:
\begin{corollary}\label{constant} Let $\mu:\mathrm{Def}_{\overline{y}}(M)\to [0,1]$ be an invariant measure on an $\omega$-saturated $\mathcal{L}$-structure $\mathcal{M}$ with $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$. Let $\phi_i(\overline{x}_i, \overline{y})$ for $i\in\{1,2\}$ be $\mathcal{L}$-formulas. Suppose that $\overline{a}_1, \overline{a}_2$ are tuples from $M$ such that $\overline{a}_1\indep{}{}\overline{a}_2$. Then, the value of $\mu(\phi_1(\overline{a}_1, \overline{y})\wedge \phi_2(\overline{a}_2, \overline{y}))$ only depends on $\mathrm{tp}(\overline{a}_1)$ and $\mathrm{tp}(\overline{a}_2)$, but not on $\mathrm{tp}(\overline{a}_1\overline{a}_2)$.
\end{corollary}
Let $\mathcal{M}$ be an $\mathcal{L}$-structure. Let $\overline{a}$ be a tuple from $M$ and $B\subseteq M$. We write $\mathrm{loc}(\overline{a}/B)$ for the set of realisations of $\mathrm{tp}(\overline{a}/B)$ in $\mathcal{M}$.
As noted earlier, by Proposition 5.10 of \cite{MS}, if $\mathcal{M}$ is $MS$-measurable, then so is $\mathcal{M}^{eq}$. Let $\chi(x)$ be an $\mathcal{L}^{eq}(M^{eq})$-formula and let $\mathcal{M}^\star$ and $\mathcal{M}^\bullet$ be restrictions of $\mathcal{M}^{eq}$ to finitely many sorts containing the sorts of the variables and parameters in $\chi(x)$. Inspecting the proof of Proposition 5.10, we can see that the dimension-measures $h^\star$ on $\mathcal{M}^\star$ and $h^\bullet$ on $\mathcal{M}^\bullet$ induced by the dimension-measure $h$ on $\mathcal{M}$ will agree on the value of the set defined by $\chi(x)$. Hence, we may write unambiguously
\[h(\chi(\mathcal{M}^{eq}))\]
for this value.
\begin{lemma}\label{a1a2form} Let $\mathcal{M}$ be $\omega$-categorical and $a_1, a_2$ be tuples from $M^{eq}$. Then, there is an $\mathcal{L}^{eq}$ formula $\psi(x_1, x_2)$ such that
\begin{equation}\label{a1a2indep}
\vDash \psi(a_1', a_2') \text{ if and only if } a_1'\equiv a_1 \text{ and } a_2'\equiv a_2 \text{ and } a_1'\indep{}{}a_2'.
\end{equation}
Moreover, if $\mathcal{M}$ is $MS$-measurable,
\[\mu\left(\psi(M^{eq}, M^{eq})\right)=\mu(a_1)\mu(a_2).\]
\end{lemma}
\begin{proof} The first part follows simply by the Ryll-Nardzewski theorem and the invariance of non-forking independence. For the second part, we use Fubini. Firstly, since $\psi(x_1, x_2)$ isolates the types of its variables,
\[\vDash \exists x_1 \psi(x_1, b_2) \text{ if and only if } b_2\in\mathrm{loc}(a_2).\]
So, $\mu(\exists x_1 \psi(x_1, M^{eq}))=\mu(a_2)$. Secondly, consider $\mu(\psi(M^{eq}, b_2))$ for $b_2$ such that $\vDash \exists x_1 \psi(x_1, b_2)$. By invariance, $\mu(\psi(M^{eq}, b_2))=\mu(\psi(M^{eq}, a_2))$. But now, for $b_1\equiv a_1$, by the dimension theorem \ref{dimtheom},
\[\vDash \psi(b_1, a_2) \text{ if and only if } \mathrm{d}(b_1/a_2)=\mathrm{d}(b_1).\]
Hence, by additivity, $\mu(a_1)=\mu(\psi(M^{eq}, a_2))$. Applying Fubini to $\psi(x_1, x_2)$ with respect to the projection onto the second coordinate then yields $\mu\left(\psi(M^{eq}, M^{eq})\right)=\mu(a_1)\mu(a_2)$.
\end{proof}
\begin{fact}\label{indfact} If $\mathcal{M}$ is supersimple and $\omega$-categorical with $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$, then the independence theorem holds over the empty set.
\end{fact}
\begin{proof}
By $\omega$-categoricity, over finite sets, Lascar types and strong types coincide. By \cite[Proposition 3.2.12]{Kimsimp}, the independence theorem holds over $\mathrm{acl}^{eq}(\emptyset)$. Since $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$, the independence theorem holds over the empty set.
\end{proof}
At this point, we have the tools to show that, in an $\omega$-categorical context, for a triple of pairwise independent tuples, non-forking independence yields probabilistic independence.
We state the following result in terms of $\mathcal{M}^{eq}$.
\begin{theorem}\label{indeptheorem} Let $\mathcal{M}^{eq}$ be $MS$-measurable, $\omega$-categorical with $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$.
Suppose that $a_1, {a}_2, {a}_3$ are tuples such that ${a}_i\indep{}{}{a}_j$ for $1\leq i<j\leq 3$. Then, $\mathrm{d}(\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2))=\mathrm{d}(a_3)$ and
\begin{equation}\label{measeq0}
\mu(\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2))=\frac{\mu({a}_3/{a}_1)\mu({a}_3/{a}_2)}{\mu({a}_3)}.
\end{equation}
\end{theorem}
In the theorem, the set defined by $\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2)$ is $\mathrm{loc}({a}_3/{a}_1)\cap \mathrm{loc}({a}_3/{a}_2)$. The statement about the dimension corresponds to the independence theorem over $\emptyset$. But we get a much stronger result: the events ``$x\in \mathrm{loc}(a_3/a_1)$'' and ``$x\in \mathrm{loc}(a_3/a_2)$'' are probabilistically independent in the measure $\mu'$ induced by $\mu$ on definable subsets of $\mathrm{loc}(a_3)$.
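Explicitly, writing $\mu'(S):=\mu(S)/\mu(a_3)$ for the normalised measure on definable subsets of $\mathrm{loc}(a_3)$ (a sketch of the rearrangement; the sets involved have dimension $\mathrm{d}(a_3)$ by the theorem, so $\mu'$ makes sense here), equation \ref{measeq0} becomes
\[\mu'\left(\mathrm{loc}(a_3/a_1)\cap \mathrm{loc}(a_3/a_2)\right)=\frac{\mu(a_3/a_1)\mu(a_3/a_2)}{\mu(a_3)^2}=\mu'\left(\mathrm{loc}(a_3/a_1)\right)\mu'\left(\mathrm{loc}(a_3/a_2)\right).\]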
\begin{proof}
We write $M$ instead of $M^{eq}$ in the proof to simplify the notation. For $i\in\{1, 2\}$, let $\phi_{i3}({x}_i, {x}_3)$ isolate $\mathrm{tp}({a}_i{a}_3)$. Let $\psi({x}_1, {x}_2)$ be the formula satisfying the condition \ref{a1a2indep} shown to exist in Lemma \ref{a1a2form}. Our proof consists of calculating in different ways the measure of the set $S$ defined by the formula
\[S(x_1, x_2, x_3):=\phi_{13}(x_1, x_3)\wedge \phi_{23}(x_2, x_3)\wedge \psi(x_1, x_2).\]
We apply Fubini, projecting onto the first two coordinates $\pi_{x_1x_2}:M^3\to M^2$. The formula $\exists x_3 S(x_1, x_2, x_3)$ is satisfied by any $b_1b_2\vDash \psi(x_1, x_2)$ by Fact \ref{indfact}. Hence,
\[\exists x_3 S(x_1, x_2, x_3)\equiv \psi(x_1, x_2),\]
and so by Lemma \ref{a1a2form},
\begin{equation*}
\mu(\pi_{x_1x_2}(S))=\mu(a_1)\mu(a_2).
\end{equation*}
For $b_1b_2\in M^2$ we consider $\pi_{x_1x_2}^{-1}(b_1b_2)\cap S$, i.e. the set defined by $S(b_1, b_2, x_3)$. This definable set is non-empty if and only if $b_1 b_2\vDash \psi(x_1, x_2)$. Also, by Fact \ref{indfact}, for any such pair, $S(b_1, b_2, x_3)$ will have maximal dimension. Finally, by Corollary \ref{constant},
\[\mu(S(b_1, b_2, M))=\mu(S(a_1, a_2, M))= \mu(\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2)).\]
Hence, by Fubini with respect to $\pi_{x_1x_2}$ we obtain:
\begin{equation}\label{measeq1}
\mu(S)=\mu(\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2))\mu(a_1)\mu(a_2).
\end{equation}
Let us compute the measure with respect to $\pi_{x_3}:M^3\to M$. The formula $\exists x_1 x_2 S(x_1, x_2, x_3)$ isolates $\mathrm{tp}(a_3)$ and so $\mu(\exists x_1 x_2 S(x_1, x_2, M))=\mu(a_3)$. Meanwhile, for $b_3\in M$, the formula $S(x_1, x_2, b_3)$ is consistent only if $b_3\equiv a_3$. By invariance we may consider $S(x_1, x_2, a_3)$. To compute the measure of the set defined by this formula we apply Fubini again. Projecting onto $x_2$, the formula $\exists x_1 S(x_1, x_2, a_3)$ isolates $\mathrm{tp}(a_2/a_3)$, and so $\mu(\exists x_1 S(x_1, M, a_3))=\mu(a_2/a_3)$. Meanwhile, for $b_2\equiv_{a_3} a_2$, consider $S(x_1, b_2, a_3)$. Again, by invariance this defines a set with the same measure as $S(x_1, a_2, a_3)$. We claim that $\mu(S(M, a_2, a_3))=\mu(a_1/a_3)$. By Lemma \ref{a1a2form} applied over $a_3$, there is a formula $\chi(x_1, x_2)$ over $a_3$ isolating the pairs $b_1\equiv_{a_3} a_1$ and $b_2\equiv_{a_3} a_2$ which are independent over $a_3$. So, we may consider $S(M, a_2, a_3)$ as the disjoint union
\[(S(M, a_2, a_3)\wedge \chi(M, a_2)) \sqcup (S(M, a_2, a_3)\wedge \neg\chi(M, a_2)).\]
But the definable set on the right of the disjoint union has lower dimension by the dimension theorem, and so, by additivity,
\[\mu(S(M, a_2, a_3))=\mu(S(M, a_2, a_3)\wedge \chi(M, a_2)).\]
Consider the extension of $\mathrm{tp}(a_1/a_3)$ to $a_2$. By the same reasoning,
\[\mu(a_1/a_3)=\mu(\phi_{13}(M, a_3)\wedge \chi(M, a_2)).\]
Noting that $\chi(x_1, x_2)$ implies $\psi(x_1, x_2)$, we get that
\[\mu(S(M, a_2, a_3))=\mu(a_1/a_3).\]
Hence, our calculations with respect to $\pi_{x_3}$ yield:
\begin{equation}\label{measeq2}
\mu(S)=\mu(a_3)\mu(a_2/a_3)\mu(a_1/a_3).
\end{equation}
Recall that by Fubini $\mu(a_i a_j)=\mu(a_i/a_j)\mu(a_j)$. Using this fact and comparing the equations \ref{measeq1} and \ref{measeq2}, we obtain
\begin{equation}\label{measeq5}
\mu(\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2))= \frac{\mu({a}_2{a}_3)\mu({a}_1{a}_3)}{\mu({a}_1)\mu({a}_2)\mu({a}_3)}.
\end{equation}
Applying Fubini again, in the form $\mu(a_i a_3)=\mu(a_3/a_i)\mu(a_i)$, to the numerator of equation \ref{measeq5} yields equation \ref{measeq0}, the desired result.
\end{proof}
\begin{corollary}\label{triang}
Let $\mathcal{M}^{eq}$ be $MS$-measurable, $\omega$-categorical with $\mathrm{acl}^{eq}(\emptyset)=\mathrm{dcl}^{eq}(\emptyset)$. Suppose that ${a}_1, a_2, {a}_3$ are tuples such that ${a}_i\indep{}{}{a}_j$ for $1\leq i<j\leq 3$. Let $p_{ij}({x}_i, {x}_j)=\mathrm{tp}({a}_i, {a}_j)$. Then,
\[\mathrm{d}(p_{12}({x}_1, {x}_2)\cup p_{23}({x}_2, {x}_3)\cup p_{13}({x}_1, {x}_3))=\sum_{i\leq 3} \mathrm{d}(a_i), \text{ and }\]
\[\mu(p_{12}({x}_1, {x}_2)\cup p_{23}({x}_2, {x}_3)\cup p_{13}({x}_1, {x}_3))=\frac{\mu({a}_1 {a}_2 )\mu({a}_1 {a}_3 )\mu({a}_2 {a}_3 )}{\mu({a}_1)\mu({a}_2)\mu({a}_3)}.\]
\end{corollary}
\begin{proof}
Considering the set defined by $p_{12}({x}_1, {x}_2)\cup p_{23}({x}_2, {x}_3)\cup p_{13}({x}_1, {x}_3)$, we apply Fubini by projecting onto the first two coordinates. The projection isolates
$\mathrm{tp}(a_1a_2)$, and the fiber of $a_1a_2$ isolates $\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2)$. By Fubini
\[\mu(p_{12}({x}_1, {x}_2)\cup p_{23}({x}_2, {x}_3)\cup p_{13}({x}_1, {x}_3))= \mu(\mathrm{tp}({a}_3/{a}_1)\cup \mathrm{tp}({a}_3/{a}_2))\mu({a}_1{a}_2).\]
Substituting with equation \ref{measeq5}, the result follows.
\end{proof}
\section{Hrushovski constructions}\label{Hrushconstr}
In this section, we focus on $\omega$-categorical Hrushovski constructions. We begin with Subsection \ref{introH} which gives a brief introduction to these structures following \cite{Evans} and \cite{Wagner:ST}. Then, in Subsection \ref{Hrusheqns}, we find the form that the equations of Corollary \ref{amacor} would take in the context of $\omega$-categorical Hrushovski constructions if these were $MS$-measurable.
\subsection{Building \texorpdfstring{$\omega$}{omega}-categorical Hrushovski constructions}\label{introH}
We focus on graphs for simplicity of notation, but Hrushovski constructions can be built in general relational languages \cite{Wagner:ST}. Let $\overline{\mathcal{K}}$ be the class of graphs and $\mathcal{K}$ be the class of finite graphs. Now, for $\alpha\in\mathbb{R}^{>0}$ and $A$ a finite graph, we define the \textbf{predimension} of $A$, $\delta(A)$, to be:
\[\delta(A)=\alpha\vert A\vert -\vert E(A)\vert ,\]
where $\vert E(A)\vert $ is the number of edges of $A$.
For $A\subseteq B,C\in\mathcal{K}$, suppose $g:A\to B$ and $f:A\to C$ are embeddings of $A$ in $B$ and $C$ respectively. The \textbf{free amalgamation} of $B$ and $C$ over $A$ is the (unique) graph $B\amalg_A C$ such that there are embeddings $h:B\to B\amalg_A C$ and $k:C\to B\amalg_A C$ with $h(g(A))=k(f(A))$ and $E(B\amalg_A C)=E(B)\cup E(C)$.
The predimension we introduced satisfies \textbf{submodularity}. That is, for $D\in\overline{\mathcal{K}}$, and $B,C\subset D$ finite, we have that
\[\delta(B\cup C)\leq \delta(B)+\delta(C)-\delta(B\cap C),\]
with equality holding if and only if $B$ and $C$ are freely amalgamated over $B\cap C$.
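As an illustration, the predimension and the submodularity equality can be checked mechanically on small graphs. The following is a minimal Python sketch (the example graphs are hypothetical, and we take $\alpha=2$):

```python
def induced_edges(vertices, edges):
    """Edges of the subgraph induced by a vertex set."""
    return {e for e in edges if set(e) <= set(vertices)}

def predim(vertices, edges, alpha=2):
    """Predimension: delta(A) = alpha*|A| - |E(A)|."""
    return alpha * len(vertices) - len(induced_edges(vertices, edges))

# D is the path 1-2-3-4; B = {1,2,3} and C = {2,3,4} are freely
# amalgamated over A = {2,3}, so submodularity holds with equality.
E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4)]}
B, C = {1, 2, 3}, {2, 3, 4}
d = lambda S: predim(S, E)
assert d(B | C) == d(B) + d(C) - d(B & C)

# Adding the edge {1,4} destroys freeness: the inequality becomes strict.
E2 = E | {frozenset((1, 4))}
d2 = lambda S: predim(S, E2)
assert d2(B | C) < d2(B) + d2(C) - d2(B & C)
```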
\begin{definition} Let $B\in\overline{\mathcal{K}}$, $A\subseteq B$ finite. We say that $A$ is \textbf{d-closed} in $B$, and write $A\leq B$, if $\delta(A)<\delta(B')$ for any finite $B'$ such that $A\subsetneq B'\subseteq B$. We say that $A$ is \textbf{self-sufficient} in $B$, and write $A\leq^*B$, if $\delta(A)\leq\delta(B')$ for any finite $B'$ such that $A\subseteq B'\subseteq B$.
\end{definition}
Let $f:\mathbb{R}^{\geq0}\to\mathbb{R}^{\geq0}$ be continuous, increasing and unbounded with $f(0)=0$. We let
\[\mathcal{K}_f:=\{A\in\mathcal{K}: \delta(A')\geq f(\vert A'\vert ) \text{ for any }A'\subseteq A\}.\]
\begin{definition}
We say that $\mathcal{K}_f$ has the \textbf{free amalgamation property} if given $A_0\leq A_1, A_2\in\mathcal{K}_f$ we have that $A_1\amalg_{A_0} A_2\in\mathcal{K}_f$.
\end{definition}
When $\mathcal{K}_f$ has the free amalgamation property we can build an $\omega$-categorical Hrushovski construction from it \cite[Theorems 3.2 \& 3.19]{Evans}:
\begin{theorem}\label{Hrushconstrtheorem} Suppose that $\mathcal{K}_f$ has the free amalgamation property. Then, there is a unique countable structure $\mathcal{M}_f$ such that:
\begin{itemize}
\item \textbf{Union of Chains:} $\mathcal{M}_f$ is given by the union of a chain $M_1\leq M_2\leq \dots$ where $M_i\in\mathcal{K}_f$.
\item \textbf{Substructures in} $\pmb{\mathcal{K}_f}$\textbf{:} every $A\in\mathcal{K}_f$ is isomorphic to some $A'\leq M_f$.
\item \textbf{Extension Property:} given $A\leq M_f$ finite and $A\leq B\in \mathcal{K}_f$, there is an embedding of $B$ over $A$ in $\mathcal{M}_f$ such that $B\leq M_f$.
\end{itemize}
Furthermore, $\mathcal{M}_f$ is $\omega$-categorical.
\end{theorem}
We say that $f$ is a \textbf{good function} if it is piecewise smooth, $f'(x)\leq 1/x$ and $f'(x)$ is non-increasing.
\begin{lemma}\label{showfreeam} Suppose that $f(x)$ is a good function for $x\geq t$, and that for $A_0\leq A_1,A_2$ with $\vert A_i\vert \leq t$, we have $\delta(A_1\amalg_{A_0}A_2)\geq f(\vert A_1\amalg_{A_0}A_2\vert )$. Then $\mathcal{K}_f$ has the free amalgamation property.
\end{lemma}
\begin{proof}
This is substantially the proof of Example 3.20 in \cite{Evans}, which goes back to Hrushovski's original unpublished note \cite{omps}.
\end{proof}
Furthermore, when $\alpha\in\mathbb{Q}$, choosing $f$ with slow enough growth ensures that $\mathcal{M}_f$ is supersimple of finite $SU$-rank by \cite[Theorem 3.6]{supersimple} (originally proved in \cite{Udiamalg}). This is explained further in the Appendix.
For $B\in\mathcal{K}_f$ or $B=\mathcal{M}_f$ and finite $A\subseteq B$, there is a unique smallest d-closed subset of $B$ containing $A$, which we call $\mathrm{cl}_B(A)$, i.e., the \textbf{closure} of $A$ in $B$ \cite[Lemma 2.1]{Wagner:ST}.
In the context of $\mathcal{M}_f$, $\mathrm{cl}_{M_f}(A)$ corresponds to algebraic closure of $A$ in $\mathcal{M}_f$ and is bounded above by $f^{-1}(\alpha\vert A\vert )$ \cite[Theorem 3.19]{Evans}. In general, we shall avoid the subscript when it is clear we are speaking of $\mathrm{cl}_{M_f}(A)$.
The predimension of a closed set induces a natural notion of \textbf{dimension}. For $D$ finite or $D=\mathcal{M}_f$, given $A\subseteq D$, we write $\mathrm{d}_D(A)$ for $\delta(\mathrm{cl}_D(A))$. If $\alpha\in\mathbb{N}$, then this dimension is also a natural number, and it may be re-scaled to be a natural number if $\alpha\in\mathbb{Q}$. As above, we omit the subscript and write $d(A)$ for $d_{\mathcal{M}_f}(A)$.
We write $d_D(A/B)$ for $d_D(AB)-d_D(B)$, where we write $AB$ for $A\cup B$. We say that $A$ and $B$ are \textbf{independent} over $C$ in $D$ if $d_D(A/BC)=d_D(A/C)$. This implies that $\mathrm{cl}_D(AC)\cap \mathrm{cl}_D(BC)=\mathrm{cl}_D(C)$. When $\mathcal{M}_f$ is supersimple (of finite $SU$-rank), this notion of independence induced by the Hrushovski dimension in $\mathcal{M}_f$ is precisely non-forking independence, and the Hrushovski dimension on $\mathcal{M}_f$ is $SU$-rank. \\
We have a clear picture of what independence looks like in $\mathcal{M}_f$ \cite[Claim 6.2.9]{Kimsimp}:
\begin{lemma}\label{indepfree} Let $D\in\mathcal{K}_f$ or $D=\mathcal{M}_f$. Suppose $A=B\cap C\leq B, C\leq D$. Then, $B$ and $C$ are independent over $A$ in $D$ if and only if $B$ and $C$ are freely amalgamated over $A$ and $\delta(B\cup C)=\delta(\mathrm{cl}_D(B\cup C))$.
\end{lemma}
\begin{remark}\label{moreind}
With the notation above, if $B$ and $C$ are independent over $A$ in $D$, then $BC\leq^* \mathrm{cl}_D(B\cup C)$. To see this, suppose for contradiction that there is some $F$ with $BC\subseteq F\subsetneq \mathrm{cl}_D(B\cup C)$ and $\delta(F)<\delta(BC)$. We may choose $F$ to be maximal, so that for any $H$ with $F\subsetneq H\subseteq \mathrm{cl}_D(B\cup C)$ we have $\delta(H)>\delta(F)$. Then,
\[BC\subseteq F\leq \mathrm{cl}_D(B\cup C)\leq D,\] and so, by transitivity of d-closure, $F\leq D$. But this contradicts the definition of closure in $D$ as the minimal d-closed subset of $D$ containing $BC$.
\end{remark}
\subsection{MS-measurable \texorpdfstring{$\omega$}{omega}-categorical Hrushovski constructions}\label{Hrusheqns}
In this section, we consider some equations which would hold in an $MS$-measurable $\omega$-categorical Hrushovski construction. Suppose $A\subseteq M_f$, and $B=\mathrm{cl}_{\mathcal{M}_f}(A)$. Let $\overline{a}\overline{b}$ be an enumeration of $B$ where $\overline{a}$ is an enumeration of $A$. By the extension property in Theorem \ref{Hrushconstrtheorem}, the quantifier-free type of $\overline{a}\overline{b}$ determines the type of $\overline{a}$ in $\mathcal{M}_f$ \cite[Corollary 2.4]{Wagner:ST}. For $A\leq M_f$, we write $\theta_{\overline{a}}(\overline{x})$ for the formula isolating $\mathrm{tp}(\overline{a})$.
\begin{notation}\label{notameas} Note that by Fubini, any $MS$-dimension-measure $h$ is invariant under permutation of variables. That is, for $\overline{x}_\sigma$ a permutation of the variables of $\overline{x}$, and $\phi_\sigma(\overline{x})=\phi(\overline{x}_\sigma)$,
\[h(\phi_\sigma(M^{\vert\overline{x}\vert}))=h(\phi(M^{\vert\overline{x}\vert})).\]
Hence, if $\mathcal{M}_f$ is $MS$-measurable, for $A\leq M_f$, we may write $\mu(A):=\mu(\theta_{\overline{a}}(\overline{x}))$ without ambiguity. For $A\in\mathcal{K}_f$ we write $\mu(A)$ for $\mu(\theta_{\overline{a}}(\overline{x}))$, where $\overline{a}$ is an enumeration of a copy $A'$ of $A$ in $M_f$ such that $A'\leq M_f$.
\end{notation}
By the notation $\mathrm{tp}(A/B)$ we mean $\mathrm{tp}(\overline{a}/B)$ where $\overline{a}$ is an enumeration of $A$.
\begin{theorem}\label{Hrush}
Let $\mathcal{M}_f$ be the $\omega$-categorical Hrushovski construction for the class $\mathcal{K}_f$. Suppose that $\mathcal{M}_f$ is MS-measurable. Let $A\leq B, C\leq M_f$ be finite. Let $\mathrm{tp}(C_1/B), \dots, \mathrm{tp}(C_n/B)$ be the finitely many non-forking extensions of $\mathrm{tp}(C/A)$ to $B$. Let $D_i$ be $\mathrm{cl}(BC_i)$. Then, we have that:
\begin{equation}\label{Heq}
\frac{\mu(B)\mu(C)}{\mu(A)}=\sum_{i=1}^n \frac{\mu(D_i)}{\vert\mathrm{Aut}(D_i/BC_i)\vert},
\end{equation}
where $\vert\mathrm{Aut}(D_i/BC_i)\vert$ is the number of automorphisms of $D_i$ fixing $BC_i$ pointwise.
\end{theorem}
\begin{proof} Let $\overline{a}\overline{b}$ be an enumeration of $B$ where $\overline{a}$ is an enumeration of $A$. Similarly, let $\overline{a}\overline{c}$ be an enumeration of $C$. Let $\overline{a}\overline{b}\overline{c}_i\overline{d}_i$ be an enumeration of $D_i$ where $\overline{a}\overline{b}\overline{c}_i$ is the corresponding enumeration of $ABC_i$. By Corollary \ref{amacor},
\begin{equation}
\frac{\mu(B)\mu(C)}{\mu(A)}=\sum_{i=1}^n \mu(\mathrm{tp}(\overline{a}\overline{b}\overline{c}_i)).
\end{equation}
We know that $\mathrm{tp}(\overline{a}\overline{b}\overline{c}_i)$ is isolated by $\exists\overline{w}\,\theta_{\overline{a}\overline{b}\overline{c}_i\overline{d}_i}(\overline{x}\overline{y}\overline{z}\overline{w})$. Note that $\vert\{\overline{e}\in M_f^{\vert\overline{w}\vert} \mid \theta_{\overline{a}\overline{b}\overline{c}_i\overline{d}_i}(\overline{a}\overline{b}\overline{c}_i\overline{e})\}\vert$ is finite and counts the number of automorphisms of $D_i$ fixing $BC_i$ pointwise. Hence, by Fubini,
\[\mu(\mathrm{tp}(\overline{a}\overline{b}\overline{c}_i)) \vert\mathrm{Aut}(D_i/BC_i)\vert=\mu(D_i).\]
This yields the desired equation.
\end{proof}
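The correction factor $\vert\mathrm{Aut}(D_i/BC_i)\vert$ can be computed by brute force on small graphs. Below is a minimal Python sketch (the path graph is a hypothetical example, not one of the $D_i$ above):

```python
from itertools import permutations

def autos_fixing(vertices, edges, fixed):
    """Count graph automorphisms fixing `fixed` pointwise (brute force)."""
    moving = sorted(set(vertices) - set(fixed))
    count = 0
    for perm in permutations(moving):
        m = {v: v for v in fixed}
        m.update(dict(zip(moving, perm)))
        # The permutation is an automorphism iff it maps the edge set to itself.
        if {frozenset((m[u], m[v])) for u, v in map(tuple, edges)} == edges:
            count += 1
    return count

# The path 1-2-3 with the centre fixed: the identity and the swap of the
# endpoints are the only automorphisms over {2}.
E = {frozenset((1, 2)), frozenset((2, 3))}
assert autos_fixing({1, 2, 3}, E, {2}) == 2
assert autos_fixing({1, 2, 3}, E, {1, 2, 3}) == 1
```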
Following Lemma \ref{indepfree}, we can express the conditions for a graph $D_i$ to appear as $\mathrm{cl}(BC_i)$ in the theorem above in terms of $\mathcal{K}_f$.
\begin{definition} Let $A_0\leq A_1, A_2$, $A_1\amalg_{A_0} A_2\subseteq D\in \mathcal{K}_f$. We say $D$ is an \textbf{eventual closure} of $A_1\amalg_{A_0} A_2$ if $A_1$ and $A_2$ are freely amalgamated over $A_0$ in $D$, $A_1\cup A_2\leq^*D$, $\delta(A_1\cup A_2)=\delta(D)$, and $A_i\leq D$ for each $i\in\{0,1,2\}$.
\end{definition}
\begin{remark} Consider the closures $D_i$ in Theorem \ref{Hrush}. We have that there is $C_i\leq M_f$ independent from $B$ over $A$ with $\mathrm{tp}(C_i/A)=\mathrm{tp}(C/A)$ and closure $D_i$ if and only if there is $D_i'\in \mathcal{K}_f$, isomorphic to $D_i$ as a graph, which is an eventual closure of $B\amalg_A C$. The left-to-right implication follows by Lemma \ref{indepfree} and Remark \ref{moreind}. The right-to-left implication follows by the extension property, noting that $B\leq D_i'\in\mathcal{K}_f$.
\end{remark}
Hence, we may rephrase Theorem \ref{Hrush} as follows:
\begin{corollary}\label{Hrushcor} Let $\mathcal{M}_f$ be an $\omega$-categorical Hrushovski construction with amalgamation class $\mathcal{K}_f$. Suppose $\mathcal{M}_f$ is $MS$-measurable. Let $A\leq B, C\in\mathcal{K}_f$. Let $D_1, \dots, D_n$ be the eventual closures of $B\amalg_A C$. Then,
\begin{equation*}
\frac{\mu(B)\mu(C)}{\mu(A)}=\sum_{i=1}^n \frac{\mu(D_i)}{\vert\mathrm{Aut}(D_i/BC)\vert}.
\end{equation*}
\end{corollary}
\section{A non-measurability proof}\label{finally}
In this section, we introduce a class of $\omega$-categorical Hrushovski constructions. These are supersimple of finite $SU$-rank, as proven in Appendix \ref{appendix}, but, as we prove in Subsection \ref{proof!}, they are not $MS$-measurable. In Subsection \ref{evcl}, we introduce these structures and show how to control small enough eventual closures in them.
\subsection{The structure \texorpdfstring{$\mathcal{M}_f$}{Mf} and small eventual closures}\label{evcl}
We begin by giving the class of structures that we work with.
\begin{construction}\label{actualf} We set $\alpha=2$. Let $f:\mathbb{R}^{>0}\to \mathbb{R}^{>0}$ be such that for $t\geq 6$, $f$ is a good function with $f(3t)\leq f(t)+1$, and for $t\leq 6$, $f$ is piecewise linear with $f(1)=2, f(4)=5, f(6)=6$. An illustration of the initial values of $f$ is given in Figure \ref{fig:newf}. As proven in Appendix \ref{appendix}, the class $\mathcal{K}_f$ has the free amalgamation property, and so by Theorem \ref{Hrushconstrtheorem} we can build an $\omega$-categorical Hrushovski construction $\mathcal{M}_f$.
\end{construction}
An example of a function satisfying the conditions of Construction \ref{actualf} is the function taking the values specified for $t\leq 6$ and given by $f(t)=\log_3(t)-\log_3(6)+6$ for $t>6$. Of course, there are uncountably many functions satisfying the conditions of our construction.
From now on, by $\mathcal{M}_f$ we mean an $\omega$-categorical Hrushovski construction obtained following Construction \ref{actualf}.
\begin{figure}
\caption{The initial values of $f$ are represented by the line in the figure. The dots represent the positions $(\vert X\vert , \delta(X))$ of the graphs $X$ in $\mathcal{K}_f$.}
\label{fig:newf}
\end{figure}
\begin{lemma}[Basic properties of $\mathcal{M}_f$] The $\omega$-categorical Hrushovski construction $\mathcal{M}_f$ is supersimple of $SU$-rank $2$. Furthermore, it has weak elimination of imaginaries. Finally, we may choose $f$ with growth slow enough that $\mathcal{M}_f$ satisfies independent $n$-amalgamation over finite algebraically closed sets for any given $n\geq 3$, or even for all $n\in\mathbb{N}$.
\end{lemma}
\begin{proof} See Appendix \ref{appendix}.\end{proof}
As we shall see, the choice of values of $f(t)$ for $t\leq6$ ensures that the eventual closures of graphs of small enough dimension are easy to enumerate.
\begin{definition} Let $A\in\mathcal{K}_f$. Suppose $A\leq^* A'\in\mathcal{K}_f, \delta(A')=\delta(A)$, and $\vert A'\setminus A\vert =1$. Then, we call $A'$ a \textbf{one point extension} of $A$.
\end{definition}
Since $\alpha=2$, a one-point extension $A'$ of $A$ will be obtained by adding to $A$ a single vertex $v$ joined by two edges to $A$.
\begin{lemma}\label{iterate1ext} Let $A, B\in\mathcal{K}_f$, $A\leq^*B, \delta(A)=\delta(B),$ and $\vert B\setminus A\vert <6$. Then, $B$ is obtained from $A$ by iterating one-point extensions.
\end{lemma}
\begin{proof}
Note that with our choice of $f$, for $X\in \mathcal{K}_f$ such that $\vert X\vert <6$, we have $\vert E(X)\vert <\vert X\vert $. Indeed, for $1\leq\vert X\vert <6$, $f(\vert X\vert )>\vert X\vert $. Since $X\in\mathcal{K}_f$,
\begin{align}
& 2\vert X\vert -\vert E(X)\vert =\delta(X)\geq f(\vert X\vert )>\vert X\vert \\
\Rightarrow & \vert E(X)\vert <\vert X\vert . \label{e<x}
\end{align}
Now, consider the graph $X$ induced by the vertices in $B\setminus A$. Since $\vert X\vert <6$, $\vert E(X)\vert <\vert X\vert $. Let $E(A;X):=\{\{u, v\}\in E(B) \vert u\in A, v\in X\}$. By $\delta(B)=\delta(A)$, we have that:
\[\delta(B)=2\vert B\vert -\vert E(B)\vert =2\vert A\vert +2\vert X\vert -\vert E(A)\vert -\vert E(X)\vert -\vert E(A;X)\vert =2\vert A\vert -\vert E(A)\vert =\delta(A),\]
and so since $\vert E(X)\vert <\vert X\vert $,
\[0=2\vert X\vert -\vert E(X)\vert -\vert E(A;X)\vert >\vert X\vert -\vert E(A;X)\vert ,\]
which yields $\vert X\vert <\vert E(A;X)\vert $. By the pigeonhole principle there must be a vertex $v$ of $X$ connected to $A$ by two or more edges. Note that if $v$ were connected by three or more edges, $\delta(A\cup\{v\})<\delta(A)$, and so $A\not\leq^*B$. Hence, $A\cup\{v\}$ must be a one-point extension of $A$.
The same argument can be iterated for $A\cup\{v\}$ so that we see $B$ is constructed by iterating one-point extensions.
\end{proof}
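The pigeonhole step of the proof can also be run mechanically: since $\alpha=2$, at each stage some vertex outside the current set is joined to it by exactly two edges. A minimal Python sketch follows (with a hypothetical example, a 6-cycle over a path):

```python
def predim(vertices, edges, alpha=2):
    """Predimension: alpha*|vertices| minus the edges inside the set."""
    return alpha * len(vertices) - sum(1 for e in edges if set(e) <= set(vertices))

def one_point_tower(A, B, edges):
    """Build B up from A by one-point extensions: at each step pick a
    vertex outside the current set joined to it by exactly two edges."""
    cur, tower = set(A), [frozenset(A)]
    while cur != set(B):
        v = next(v for v in set(B) - cur
                 if sum(1 for e in edges if v in e and set(e) - {v} <= cur) == 2)
        cur |= {v}
        tower.append(frozenset(cur))
    return tower

# A is the path 1-2-3-4-5; B adds a vertex 6 joined to the endpoints 1
# and 5, closing a 6-cycle, so that delta(A) = delta(B) = 6 and A <=* B.
E = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 5), (1, 6), (5, 6)]}
A, B = {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5, 6}
assert predim(A, E) == predim(B, E) == 6
assert len(one_point_tower(A, B, E)) == 2  # B is a single one-point extension of A
```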
\begin{remark}\label{1ptevc}
Let $D$ be a proper eventual closure of $A_1\amalg_{A_0}A_2$ with $\vert D\setminus (A_1\cup A_2)\vert <6$. Then, as noted in Lemma \ref{iterate1ext}, $D$ is obtained by iterating one-point extensions $D_1, \dots, D_k$ for $k\leq 5$. One can see that each $D_i$ is also an eventual closure.
We shall call an eventual closure of $A_1\amalg_{A_0}A_2$ which is also a one-point extension a \textbf{one-point closure}.
\end{remark}
Given Lemma \ref{iterate1ext} and Corollary \ref{Hrushcor}, we get:
\begin{corollary}\label{noextcor} Let $A\leq\mathcal{M}_f$, and $A\leq B, C\in\mathcal{K}_f$. Let $N=\lfloor f^{-1}(\delta(B\amalg_A C))\rfloor$, so that $N$ is the largest possible size of a graph in $\mathcal{K}_f$ of predimension $\delta(B\amalg_A C)$. Suppose that $N-\vert B\amalg_A C\vert <6$ and that the maximal distance in $B\amalg_A C$ between a point in $B\setminus A$ and a point in $C\setminus A$ is $\leq 3$. Then, there is no proper eventual closure for $B\amalg_A C$, and so,
\[\mu(B\amalg_A C)=\frac{\mu(B)\mu(C)}{\mu(A)}.\]
\end{corollary}
\begin{proof} Since $N-\vert B\amalg_A C\vert <6$, by Lemma \ref{iterate1ext} any proper eventual closure of $B\amalg_A C$ would be obtained by iterating one-point closures. In a one-point closure of $B\amalg_A C$ by a point $v$, the point $v$ must be attached to a vertex in $B\setminus A$ and a vertex in $C\setminus A$, since $B$ and $C$ are closed in the eventual closure. As these two vertices are at distance $\leq 3$, the one-point closure would contain a $k$-cycle for $k<6$. But no graph in $\mathcal{K}_f$ contains cycles of length $<6$, since such a cycle would be a subgraph $X$ with $\vert E(X)\vert =\vert X\vert <6$. Hence, $B\amalg_A C$ has no proper eventual closures, and the equation follows from Corollary \ref{Hrushcor}.
\end{proof}
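The claim that $\mathcal{K}_f$ contains no short cycles is a direct computation with the initial values of $f$. The following minimal Python sketch implements only the piecewise-linear initial segment of $f$ from Construction \ref{actualf} and checks the predimension of $k$-cycles:

```python
def f(t):
    """Piecewise-linear initial segment of f: f(1) = 2, f(4) = 5, f(6) = 6."""
    if t <= 4:
        return 2 + (t - 1) * (5 - 2) / (4 - 1)
    return 5 + (t - 4) * (6 - 5) / (6 - 4)

# A k-cycle has k vertices and k edges, so its predimension is 2k - k = k.
for k in range(3, 6):
    assert 2 * k - k < f(k)  # C_k violates delta(X) >= f(|X|): not in K_f

# The 6-cycle sits exactly on the boundary: delta = 6 = f(6).
assert 2 * 6 - 6 == f(6)
```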
\subsection{Proving non-measurability}\label{proof!}
We are ready to prove that the structures built in Construction \ref{actualf} are not $MS$-measurable. We first find some equations that an $MS$-measure should satisfy and then note that these equations imply that some definable set has measure zero, breaking the positivity condition of $MS$-measures.
\begin{notation}
We still follow Notation \ref{notameas}. However, for clarity, we represent graphs pictorially. For example, for $A\in\mathcal{K}_f$ a path of length two, instead of writing $\mu(A)$, we may write $\mu\left(\ptwo\right)$, so that it is clear from our notation which graph we are talking about. Note that when we speak of ``the measure of $A$'', we actually mean ``the measure of the definable subset of $M_f^{\vert A\vert }$ consisting of copies of $A$ which are such that $A\leq M_f$''. By our choice of $f$, vertices and edges isolate complete types in $\mathcal{M}_f$.
\end{notation}
\begin{prop}\label{manyeqns} Suppose that $\mathcal{M}_f$ is $MS$-measurable with dimension given by $SU$-rank. Assume without loss of generality that the measure of a single vertex is $1$. Let $\lambda$ be the measure of an edge. Then, $\mu$ must satisfy the following equations:
\begin{align}
\mu\left(\ptwo\right) & = \lambda^2 \label{ptwo}\\
\mu\left(\dtwo\right)+ \mu\left(\ptwo\right)& = 1 \label{dtwo}\\
\mu\left(\pthree\right) &= \lambda^3 \label{pthree}\\
\mu \left(\elle\right)+ \mu\left(\pthree\right) & = \lambda(1-\lambda^2) \label{elle}\\
\mu\left( \ptwodot\right) &=\frac{ \mu\left(\elle\right)^2}{(1-\lambda^2)} \label{ptwodot}\\
\mu\left(\mahT\right) & = \lambda^4 \label{T}\\
\mu\left(\ptwodot\right)+\mu\left(\mahT\right) & =(1-\lambda^2)^2\lambda^2.\label{contreq}
\end{align}
\end{prop}
\begin{proof}
Equation \ref{contreq} is obtained with a different technique from the rest. The proof for equations \ref{ptwo}-\ref{T} consists of identifying various free amalgamations in the graphs involved and of repeatedly using either Corollary \ref{noextcor}, or the fact that all eventual closures we will consider are obtained by iterating one-point extensions, together with Theorem \ref{Hrushcor}. Again, to obtain these equations, we consider the eventual closures $D_i$ of a free amalgamation $B\amalg_A C$. For clarity, we shall illustrate this through pictures colouring differently the components of $B\amalg_A C$ and the one-point extensions. We colour the copy of $A$ in purple, the copy of $B\setminus A$ in blue and the copy of $C\setminus A$ in orange. When dealing with one-point extensions we shall colour the additional points in green. If you are reading the paper in black and white, the copy of $B\setminus A$ appears black, the copy of $A$ appears grey, the copy of $C\setminus A$ appears light grey and any one-point extension will be very light grey. We also label the vertices for extra clarity.\\
Let us begin with equation \ref{ptwo}. Consider a path of length two as a free amalgamation of two paths of length one over a point as in Figure \ref{fig:ptwo}. This free amalgamation has no further eventual closures. Since the measure of an edge is $\lambda$ and the measure of a point is 1, equation \ref{ptwo} follows from Corollary \ref{noextcor}.
\begin{figure}
\caption{We consider a path of length two as the free amalgamation of the two edges $B:=v_1v_2$ and $C:=v_2 v_3$ over the intermediate vertex $A:=v_2$. Note that this path already has maximal size for its dimension, as $f(3)=4$, and so there are no further eventual closures.}
\label{fig:ptwo}
\end{figure}
Now, for equation \ref{dtwo}, consider the free amalgamation of two points over the empty set. This does have a one-point closure, namely a copy of a path of length two (by joining the vertex of the extension to the two distinct vertices in the amalgamation). There cannot be further extensions since $f(3)=4$, and so the resulting graph has maximal size for its dimension. Since vertices have measure one in $\mathcal{M}_f$, equation \ref{dtwo} follows. Note that as a consequence of equations \ref{dtwo} and \ref{ptwo}, we obtain that
\[\mu\left(\dtwo\right)=(1-\lambda^2),\]
and we shall use this fact in future equations.
Equations (\ref{pthree})-(\ref{T}) are explained in Figures \ref{fig:pthree}-\ref{fig:T}.
\begin{figure}
\caption{We consider a path of length three $v_1v_2v_3v_4$ as a free amalgamation of two paths of length two, $B:=v_1 v_2 v_3$ and $C:=v_2 v_3 v_4$, over the path of length one $A:=v_2 v_3$. There are no further eventual closures since $f(4)=5$ and so we have maximal size for this dimension. This amalgamation yields equation \ref{pthree}.}
\label{fig:pthree}
\end{figure}
\begin{figure}
\caption{The disjoint union of an edge $v_1 v_2$ and a vertex $v_3$ corresponds to the free amalgamation of the disjoint union of two vertices $B:=v_1 v_3$ and of the edge $C:=v_1 v_2$ over the vertex $A:=v_1$. We can obtain a one point closure by joining a vertex $w$ to the points $v_2$ and $v_3$. It is the unique one point closure since $\vert B\setminus A\vert =\vert C\setminus A\vert =1$. There are no larger eventual closures since $f(4)=5$. Thus, we get equation \ref{elle}.}
\label{fig:elle}
\end{figure}
\begin{figure}
\caption{We think of the disjoint union of a vertex $v_4$ and a path of length two $v_1 v_2 v_3$ as a free amalgamation of two copies of the disjoint union of an edge and a vertex, $B:=v_1 v_2 v_4$ and $C:=v_2v_3 v_4$, over the disjoint union of two vertices $A:=v_2v_4$. The free amalgamation has dimension $6$. Since $f(6)=6$ and the vertices in $B\setminus A$ and $C\setminus A$ are at distance two from each other, from Corollary \ref{noextcor} we obtain equation \ref{ptwodot}.}
\label{fig:ptwodot}
\end{figure}
\begin{figure}
\caption{We consider the free amalgamation of two paths of length three, $B:=v_1 v_2 v_4 v_5$ and $C:=v_3 v_2 v_4 v_5$ over a path of length two $A:=v_2 v_4 v_5$. Equation \ref{T} then follows from Corollary \ref{noextcor}, since joining a new vertex to $v_1$ and $v_3$ would create a $4$-cycle.}
\label{fig:T}
\end{figure}
We are now ready to obtain equation \ref{contreq}. For it we can use Corollary \ref{triang} since the structures we are working with have weak elimination of imaginaries, as proven in Appendix \ref{appendix}. Since in $\mathcal{M}_f$ types of finite tuples are determined by the quantifier-free type of their closures, there are only two $2$-types over the empty set, $p_1(x,y)$ and $p_2(x,y)$, such that $p_i(a,b)$ implies $a\indep{}{}b$. These are isolated by the formulas $\phi(x, y)$ and $\psi(x, y)$, saying respectively that $x$ and $y$ are at distance two from each other and that they are at distance $>2$. We consider the measure of $\psi(x_1, x_2)\wedge\psi(x_2, x_3)\wedge \phi(x_1, x_3)$. By Corollary \ref{triang}, we have
\[\mu(\psi(x_1, x_2)\wedge\psi(x_2, x_3)\wedge \phi(x_1, x_3))=\mu(\psi(x_1, x_2))^2\mu(\phi(x_1, x_3))=(1-\lambda^2)^2\lambda^2.\]
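Here, the two factors on the right-hand side are computed from the earlier equations: since the middle vertex of a path of length two is algebraic over its endpoints, a short clarifying computation gives
\[\mu(\psi(x_1, x_2))=\mu\left(\dtwo\right)=1-\lambda^2\qquad\text{and}\qquad \mu(\phi(x_1, x_3))=\mu\left(\ptwo\right)=\lambda^2.\]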
We wish to express $\psi(x_1, x_2)\wedge\psi(x_2, x_3)\wedge \phi(x_1, x_3)$ as a disjoint union of $3$-types of maximal dimension in order to expand the left hand side of this equation. Let us look at the complete $3$-types $\mathrm{tp}(a_1a_2a_3)$ such that $a_1a_2a_3\vDash \psi(x_1, x_2)\wedge\psi(x_2, x_3)\wedge \phi(x_1, x_3)$ and $a_i\indep{}{} a_j a_k$ for $\{i,j,k\}=\{1,2,3\}$ distinct. Let $A$ be the graph in $\mathcal{K}_f$ given by the following picture:
\begin{figure}
\caption{The graph $A$ consists of the disjoint union of a path of length two $v_2 u v_3$ and a vertex $v_1$. Note that if $A\leq M_f$, then $\mathcal{M}_f\vDash \psi(v_1, v_2)\wedge\psi(v_1, v_3)\wedge \phi(v_2, v_3)$.}
\label{fig:A}
\end{figure}
Again, since the types of these triples are determined by the quantifier-free types of their closures, we may focus on the graphs $B$ in $\mathcal{K}_f$ such that $A\leq^* B$, $\delta(B)=\delta(A)=6$, and $v_1v_2, v_1v_3, v_2uv_3\leq B$. Note that since $\delta(B)=6$, $B$ may have at most $6$ points and must be obtained by iterating one-point extensions on the graph $A$.
\begin{figure}
\caption{Note that since $v_1v_2, v_1v_3, v_2uv_3\leq B$, any one-point extension of $A$ by a vertex $w$ must be such that $w$ has an edge with $u$. Since we avoid triangles, the only possible one-point extension is given by joining $w$ to $u$ and $v_1$. Call the graph so obtained $B'$. Note that $B'$ does not have any one-point extension since any two points of $B'$ have distance $\leq 3$ from each other and $\mathcal{M}_f$ contains no cycles of length $<6$.}
\label{fig:contreq}
\end{figure}
Let $A(v_1, v_2, v_3, u)$ be the formula expressing $v_1 v_2 v_3 u \cong A\leq \mathcal{M}_f$ and $B'(v_1, v_2, v_3, u, w)$ be the formula expressing $v_1 v_2 v_3 u w\cong B'\leq \mathcal{M}_f$, where $B'$ is described in Figure \ref{fig:contreq}. From the argument in Figure \ref{fig:contreq}, we can see that there are only two $3$-types $p(x_1, x_2, x_3)$ which are completions of $\psi(x_1, x_2)\wedge\psi(x_2, x_3)\wedge \phi(x_1, x_3)$ and which satisfy our independence requirements, and they are isolated by $\exists u A(x_1, x_2, x_3, u)$ and $\exists u, w B'(x_1, x_2, x_3, u, w)$. Hence, by additivity,
\[\mu(\exists u A(x_1, x_2, x_3, u))+\mu(\exists u, w B'(x_1, x_2, x_3, u, w))=\mu(\psi(x_1, x_2)\wedge\psi(x_2, x_3)\wedge \phi(x_1, x_3)).\]
Consider the projection $\pi:M_f^4\to M_f^3$ onto the first three coordinates. The restriction of this map to $A(M_f^4)$ is injective, and so by Fubini and algebraicity,
\[\mu(\exists u A(x_1, x_2, x_3, u))=\mu(A(x_1, x_2, x_3, u))=\mu\left(\ptwodot\right).\]
Similarly, considering the projection $\pi':M_f^5\to M_f^3$ onto the first three coordinates and $B'(M_f^5)$,
\[\mu(\exists u, w B'(x_1, x_2, x_3, u, w))=\mu(B'(x_1, x_2, x_3, u, w))=\mu\left(\mahT\right).\]
This yields the desired equation.
\end{proof}
\begin{theorem}\label{eqnstheorem} The structures of the form $\mathcal{M}_f$ built following Construction \ref{actualf} are not $MS$-measurable. Hence, there are $\omega$-categorical supersimple structures of finite $SU$-rank which are not $MS$-measurable. Indeed, there are supersimple $\omega$-categorical structures of finite $SU$-rank with independent $n$-amalgamation for all $n$ which are not $MS$-measurable.
\end{theorem}
\begin{proof}
By Corollary \ref{SUok}, we know that if $\mathcal{M}_f$ is $MS$-measurable, it also has an $MS$-dimension-measure where the dimension is given by $SU$-rank. However, the equations from Proposition \ref{manyeqns} imply that $\lambda=0$. This contradicts the positivity assumption for the measure in $MS$-measurable structures. To see this, note that equations \ref{pthree} and \ref{elle} imply that
\[\mu\left(\elle\right)=\lambda(1-\lambda^2)-\mu\left(\pthree\right)=\lambda-2\lambda^3.\]
From equations \ref{ptwodot}, \ref{T} and \ref{contreq} we then get that
\[\frac{\lambda^2(1-2\lambda^2)^2}{(1-\lambda^2)}+\lambda^4=(1-\lambda^2)^2\lambda^2.\]
Simplifying, we obtain that $\lambda=0$.
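In more detail, multiplying both sides by $(1-\lambda^2)$ and supposing, towards a contradiction, that $\lambda\neq 0$, we may divide by $\lambda^2$ to obtain
\[(1-2\lambda^2)^2+\lambda^2(1-\lambda^2)=(1-\lambda^2)^3.\]
Expanding, the left-hand side is $1-3\lambda^2+3\lambda^4$, while the right-hand side is $1-3\lambda^2+3\lambda^4-\lambda^6$. Hence $\lambda^6=0$, contradicting $\lambda\neq 0$, and so $\lambda=0$.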
\end{proof}
Whilst we had a particular choice for $\alpha$ and for the initial values of $f$, it is plausible that similar results should hold for different choices of both. This raises the question of whether, in general, non-trivial $\omega$-categorical Hrushovski constructions fail to be $MS$-measurable. This would provide further evidence for the conjecture that $MS$-measurable $\omega$-categorical structures are one-based.\\
In subsequent work \cite{Ergome}, we find more general reasons for which various classes of $\omega$-categorical Hrushovski constructions are not $MS$-measurable by studying invariant Keisler measures in these structures. In particular, we prove that $\omega$-categorical $MS$-measurable structures must satisfy a stronger version of the independence theorem and that the equation in Corollary \ref{triang} holds also when one of the pairs is weakly algebraically independent. We also prove that in the structures $\mathcal{M}_f$ we introduced, the formula asserting that ``$x$ has distance two from $a$'' does not fork over the empty set but is universally measure zero. While our results make it implausible, it remains an open question whether any non-one-based $\omega$-categorical Hrushovski construction is $MS$-measurable.
\section{Appendix: Proving the main properties of our Hrushovski constructions}\label{appendix}
In Section \ref{finally}, we introduced a class of $\omega$-categorical supersimple finite $SU$-rank Hrushovski constructions and proved they are not $MS$-measurable. However, we omitted from the body of this article the proofs that our choice of $f$ yields such structures. Indeed, the proof of simplicity requires some effort. We include these technical results in this appendix.\\
We begin with an abstract discussion of how to prove simplicity in Subsection \ref{simpH}. Then, in Subsection \ref{example} we move to proving that our choice of $f$ in Construction \ref{actualf} yields $\omega$-categorical Hrushovski constructions and that these are supersimple of finite $SU$-rank, with weak elimination of imaginaries. We further note that these can be built so that they may satisfy independent $n$-amalgamation for all $n$.
\subsection{Simplicity of \texorpdfstring{$\omega$}{omega}-categorical Hrushovski constructions}\label{simpH}
In this subsection, we discuss under which conditions on $f$ $\omega$-categorical Hrushovski constructions are supersimple of finite $SU$-rank. This was first explored in \cite{Udiamalg}. We simplify some of the conditions for supersimplicity discussed in \cite{Wong} and \cite{supersimple}. This will shorten the proofs of supersimplicity of the structures considered in the next section. Much of this material is implicit in the proof of Theorem 3.6 in \cite{supersimple}. Here we make the arguments explicit and correct a mistake in Remark 3.8 of the same article.\\
Let us begin by recalling some basic properties of the d-closed and self-sufficient relations \cite[Lemma 3.10]{Evans}:
\begin{lemma}\label{basicleq} Let $C\in\overline{\mathcal{K}}$ and let $\leq'$ stand for either $\leq$ or $\leq^*$. Then, the following hold:
\begin{enumerate}
\item Let $A\leq' C$ be finite, and $B\subseteq C$. Then, $A\cap B\leq'B$.
\item Let $A$ and $B$ be finite such that $A\leq'B\leq'C$. Then, $A\leq'C$.
\item Let $A$ and $B$ be finite with $A, B\leq'C$. Then, $A\cap B\leq'C$.
\end{enumerate}
\end{lemma}
\begin{definition}[Independence Theorem Diagram]\label{ITDdef} Let $D$ be a graph. Suppose that $D$ has subgraphs $D_i$ for $0\leq i\leq 3$ and $D_{ij}$ for $0\leq i<j\leq 3$, all contained in $\mathcal{K}_f$. To simplify our notation, we allow inverting indices, e.g. $D_{12}=D_{21}$.
We say that $(D; D_i; D_{ij})$ is an \textbf{independence theorem diagram} (ITD) with respect to $\mathcal{K}_f$ if the following hold:
\begin{itemize}
\item $D_{0j}=D_j$ for $1\leq j\leq 3$;
\item $D_i\cap D_j=D_0$ for $1\leq i<j\leq 3$;
\item $D_i, D_j\leq D_{ij}$ for $1\leq i<j\leq 3$;
\item $D_{ij}\cap D_{jk}=D_j$ for $\{i,j,k\}=\{1,2,3\}$;
\item $D_i$ and $D_j$ are independent over $D_0$ in $D_{ij}$ for $1\leq i<j\leq 3$;
\item Any edge in $D$ is entirely contained within some $D_{ij}$ for $1\leq i<j\leq 3$.
\end{itemize}
Figure \ref{fig:2} gives a visual representation of an independence theorem diagram.
We say that $(D; D_i; D_{ij})$ is a \textbf{proper ITD} when it satisfies the following additional conditions:
\begin{itemize}
\item $D_0 \subsetneq D_i$ for $1\leq i\leq 3$;
\item $D_i\cup D_j\subsetneq D_{ij}=\mathrm{cl}_D(D_i\cup D_j)$ for $1\leq i<j\leq 3$.
\end{itemize}
We say that $\mathcal{K}_f$ is \textbf{closed under ITDs} if, whenever $(D; D_i; D_{ij})$ is an ITD, we have that $D\in\mathcal{K}_f$.
\end{definition}
\begin{figure}
\caption{A visual representation of an independence theorem diagram. The first picture represents an ITD with its various parts labelled. To distinguish the parts more clearly, the last three pictures show as a shaded area $D_0$, $D_1$, and $D_{12}
\label{fig:2}
\end{figure}
We note some basic properties of independence theorem diagrams and proper ITDs.
\begin{lemma} Suppose $(D;D_i;D_{ij})$ is an ITD for $\mathcal{K}_f$. Then,
\begin{enumerate}
\item $D_0\leq D_{ij}$ for $0\leq i<j\leq 3$;
\item $D_{ij}\leq D$;
\item $D_{ij}\leq D_{ij}\cup D_{jk}$;
\item $D_{ij}\cup D_{jk}\leq^* D$.
\end{enumerate}
\end{lemma}
\begin{proof} Part 1 is a direct consequence of Lemma \ref{basicleq}.1. Parts 2, 3, and 4 are from \cite[Lemma 1.6]{Wong}.
\end{proof}
From \cite[Corollary 2.24 \& Theorem 3.6]{supersimple} we know that:
\begin{theorem}\label{supersimpth} Suppose that $\alpha\in\mathbb{N}$ and $\mathcal{K}_f$ is closed under ITDs. Then, $\mathcal{M}_f$ is supersimple of finite $SU$-rank.
\end{theorem}
Hence, it is sufficient to include all ITDs in $\mathcal{K}_f$ in order to have supersimplicity and finite $SU$-rank for $\mathcal{M}_f$. However, this is not easy to check because in order to see whether $D$ is in $\mathcal{K}_f$ we need to verify that every subgraph $B\subset D$ is also in $\mathcal{K}_f$. The following lemmas help us simplify this process.
\begin{lemma} Let $C\leq D\in\mathcal{K}_f$, where $D$ is an ITD. Then, $C$ is an ITD.
\end{lemma}
\begin{proof}
Call $C_{ij}$ the intersection of $C$ and $D_{ij}$, and $C_i$ the intersection of $C$ and $D_i$. The only conditions that we need to check are that $C_i\leq C_{ij}$ and that $C_i$ and $C_j$ are independent over $C_0$ in $C_{ij}$.
Now, the first condition follows by Lemma \ref{basicleq}.1 since $D_i\leq D_{ij}$ and $C_{ij}\subseteq D_{ij}$ implies $C_i=D_i\cap C_{ij}\leq C_{ij}$.
For the second condition, first note that since $D_i$ and $D_j$ are freely amalgamated over $D_0$ in $D_{ij}$, so must be $C_i$ and $C_j$ over $C_0$ in $C_{ij}$, being just intersections of these sets. We know that $D_i\cup D_j\leq^*\mathrm{cl}_{D_{ij}}(D_i\cup D_j)$, $\mathrm{cl}_{C_{ij}}(C_i\cup C_j)\cap (D_i\cup D_j)= C_i\cup C_j$, and $\mathrm{cl}_{C_{ij}}(C_i\cup C_j)\cap\mathrm{cl}_{D_{ij}}(D_i\cup D_j)=\mathrm{cl}_{C_{ij}}(C_i\cup C_j)$. Hence, by Lemma \ref{basicleq}.1, $C_i\cup C_j\leq^*\mathrm{cl}_{C_{ij}}(C_i\cup C_j)$. But then, by Lemma \ref{indepfree}, we know that $C_i$ and $C_j$ are independent over $C_0$ in $C_{ij}$.
\end{proof}
Hence, in proving supersimplicity, we can reduce the number of graphs for which we check $f(\vert D\vert )\leq \delta(D)$ by only focusing on proper ITDs.
\begin{lemma}\label{strongITD} The following are equivalent:
\begin{enumerate}
\item $\mathcal{K}_f$ is closed under ITDs;
\item $\mathcal{K}_f$ is closed under proper ITDs;
\item For each proper ITD $D$ we have that $f(\vert D\vert )\leq\delta(D)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1)$\Rightarrow$(2)$\Rightarrow$(3) follows from the definitions of proper ITD and $\mathcal{K}_f$. Let us focus then on (2)$\Rightarrow$(1). Let $D$ be an ITD. For it to be in $\mathcal{K}_f$ we need to show that for any $C\subseteq D$, $f(\vert C\vert )\leq\delta(C)$. Note that if $f(\vert C\vert )>\delta(C)$, then $\mathrm{cl}_D(C)$ also satisfies this condition since
\[f(\vert \mathrm{cl}_D(C)\vert )\geq f(\vert C\vert )>\delta(C)\geq \delta(\mathrm{cl}_D(C)),\]
where the first inequality holds since $f$ is increasing and the last by definition of closure. Hence, we may assume that $C\leq D$.
We shall show that if $C$ is not already a proper ITD, it must already be in $\mathcal{K}_f$, or that there is a proper ITD $C'$, such that if $C'$ is in $\mathcal{K}_f$, then so is $C$.
We label the intersections of $C$ with $D$ so that $C_i:=D_i\cap C$ and $C_{ij}:=D_{ij}\cap C$. By the previous lemma $C$ is an independence theorem diagram.
Suppose that $C_1=C_0$. Then, $C$ is in $\mathcal{K}_f$, being obtained by freely amalgamating $C_{12}$ and $C_{23}$ over $C_2$, and then freely amalgamating this structure with $C_{13}$ over $C_3$. Hence, if $C_i=C_0$ for any $1\leq i\leq 3$, $C\in\mathcal{K}_f$.
Suppose that $C_{12}=C_1\cup C_2$. Then, $C$ is the free amalgamation of $C_{23}$ and $C_{13}$ over $C_3$. Thus, if $C_{ij}=C_i\cup C_j$, then $C\in\mathcal{K}_f$.
Finally, consider the subgraph $C'$ of $C$ which constitutes the proper independence theorem diagram obtained by taking $C'_{ij}:=\mathrm{cl}^C(C_i\cup C_j)$. Now, by construction and definition of closure, we must have $C'_{ij}\leq C_{ij}$. Furthermore, $C'_{ij}\leq C'$.
Hence, $C$ may be obtained as a free amalgamation of $C'$ and $C_{ij}$ over $C'_{ij}$ (possibly repeating this operation for the different $C_{ij}$'s).
So, we have seen that if all proper ITDs are in $\mathcal{K}_f$, then so are all ITDs, and so (2)$\Rightarrow$(1). We prove that (3)$\Rightarrow$(2) holds by considering a minimal counterexample. Suppose that $D$ is a minimal proper ITD not in $\mathcal{K}_f$. Since $f(\vert D\vert )\leq \delta(D)$, there must be some $C\subset D$ such that $f(\vert C\vert )>\delta(C)$.
Again, we may assume that $C\leq D$. By the previous lemma, $C$ is an ITD. From the steps above, we know that it will be either a free amalgamation of graphs in $\mathcal{K}_f$, or the free amalgamation of graphs in $\mathcal{K}_f$ and some proper ITD, or a proper ITD again. But, by minimality, any of these cases implies that $C\in\mathcal{K}_f$. So, it is sufficient to check the condition $f(\vert D\vert )\leq \delta(D)$ in proper ITDs.
\end{proof}
\begin{lemma}\label{SITDcond} Let $\alpha\in\mathbb{N}$ and let $\mathcal{K}_f$ be a free amalgamation class. Let $n_1\in\mathbb{N}$. Suppose that for $n_1\leq t$, $f(3t)\leq f(t)+k$ for fixed $k\in \mathbb{N}$. Let $D$ be an ITD. Without loss of generality, say that $D_{12}$ is the $D_{ij}$ of largest predimension, and call its predimension $d_{12}$. Suppose that $f(n_1) \leq d_{12}$. We have that if $\delta(D)\geq d_{12}+k$, then $f(\vert D\vert )\leq \delta(D)$.
\end{lemma}
\begin{proof}
This proof is substantially identical to the final part of the proof of Theorem 3.6 in \cite{supersimple}. However, we repeat the argument for completeness and in order to avoid confusion with Remark 3.8 of \cite{supersimple}, which follows the theorem and contains a mistake.
Let $g$ be the inverse of $f$. Substituting $t=g(x)$ into $f(3t)\leq f(t)+k$ and applying the increasing function $g$ to both sides, we get that $3g(x)\leq g(x+k)$ for $f(n_1)\leq x$. Then, for $f(n_1)\leq d_{12}$:
\[ \vert D\vert \leq \sum_{1\leq i<j\leq3} \vert D_{ij}\vert \leq \sum_{1\leq i<j\leq3} g(\delta(D_{ij})) \leq 3 g(d_{12})\leq g(d_{12}+k)\leq g(\delta(D)) \]
where the second inequality holds since $D_{ij}\in\mathcal{K}_f$, the fourth by the inequality $3g(x)\leq g(x+k)$, and the last since $\delta(D)\geq d_{12}+k$ and $g$ is increasing. Note that the resulting inequality is equivalent to $f(\vert D\vert )\leq \delta(D)$.
\end{proof}
Hence, we have a method to obtain supersimple $\omega$-categorical Hrushovski constructions of finite $SU$-rank, together with an easier way of verifying the required conditions.
\subsection{The structures \texorpdfstring{$\mathcal{M}_f$}{Mf} and their supersimplicity}\label{example}
In this subsection, we focus on our choice of structures $\mathcal{M}_f$ built as specified in Construction \ref{actualf}. We prove that these structures have the various properties mentioned in the main article. \\
We begin by noting that $\mathcal{K}_f$ is a free amalgamation class and so the structures $\mathcal{M}_f$ may be built according to Theorem \ref{Hrushconstrtheorem}.
\begin{prop}\label{freeam} The class $\mathcal{K}_f$ is a free amalgamation class.
\end{prop}
\begin{proof}
Since $f$ is a good function for $t\geq 6$, by Lemma \ref{showfreeam} we just need to show that $A_1\amalg_{A_0}A_2\in\mathcal{K}_f$ whenever $A_0\leq A_1, A_2\in\mathcal{K}_f$ and $\vert A_i\vert \leq 6$. Note that $\vert A_1\amalg_{A_0}A_2\vert =\vert A_1\vert +\vert A_2\vert -\vert A_0\vert $, and $\delta(A_1\amalg_{A_0}A_2)=\delta(A_1)+\delta(A_2)-\delta(A_0)$. So, in the context of Figure \ref{fig:newf}, it is sufficient to check that given any three dots $p,q$, and $r$ (possibly with $q=r$) lying above $f$, with $p=(p_1,p_2), q=(q_1,q_2), r=(r_1,r_2)$ such that $p_1<q_1, r_1$ and $p_2<q_2, r_2$, the fourth vertex of the parallelogram with edges $\overline{pq}$ and $\overline{pr}$ is still above the function $f$. Since $f(18)\leq 7$ and our function is increasing, this can be easily verified.
\end{proof}
Now we prove that $\mathcal{M}_f$ is supersimple of $SU$-rank 2. We proceed step by step by proving that for any proper ITD $D$ for $\mathcal{K}_f$, $f(\vert D\vert )\leq \delta(D)$. Hence, the proof will follow by Lemma \ref{strongITD} and Theorem \ref{supersimpth}. We shall adopt the notation we already set for proper ITDs. To avoid confusion when speaking of the various elements of an ITD, we always write $D$ for the proper ITD, we write $D_{ij}$ only for $i,j\in\{1,2,3\}$, and consistently write $D_j$ for $D_{0j}$. Furthermore, given $D$, we shall assume without loss of generality that $D_{12}$ has maximal predimension among the $D_{ij}$. We shall also write $d_i$ for $\delta(D_i)$, $d_{ij}$ for $\delta(D_{ij})$ and $d$ for $\delta(D)$ to simplify our notation.
\begin{lemma} For all proper ITDs $D$ for $\mathcal{K}_f$ with $d_{12}\leq 4$, $f(\vert D\vert )\leq \delta(D)$. For this proof we only need to assume that $f(1)=2, f(2)=3, f(3)=4$ and $f(6)\leq 6$.
\end{lemma}
\begin{proof}
For $d_{12}\leq 4$, $\vert D_{12}\vert \leq 3$, and so $\vert D_{ij}\vert \leq 3$. Since we require $D_i, D_j\neq \emptyset$ and $D_i\cup D_j\subsetneq D_{ij}$ (for $i\neq j\in\{1,2,3\}$), we must have that $\vert D_{ij}\vert =3$ for each $i\neq j\in\{1,2,3\}$, $\vert D_i\vert =1$ for each $i\in\{1,2,3\}$, and $\vert D_0\vert =0$. There is only one graph satisfying these requirements, i.e. a $6$-cycle as shown in Figure \ref{fig:hexagon}.
\begin{figure}
\caption{Each $D_i$ for $1\leq i\leq 3$ is a vertex. We can see that each $D_{ij}$ is a path of length two, so that $D$ is a $6$-cycle. Then $\vert D\vert =6$ and $\delta(D)=6$, so $f(\vert D\vert )\leq\delta(D)$ since $f(6)\leq 6$.}
\label{fig:hexagon}
\end{figure}
\end{proof}
\begin{lemma} For all proper ITDs $D$ for $\mathcal{K}_f$ with $d_{12}\leq 5$, $f(\vert D\vert )\leq\delta(D)$. We only need to assume $f(1)=2, f(2)=3, f(3)=4, f(4)=5$, $f(6)\leq 6$, and $f(12)\leq 7$.
\end{lemma}
\begin{proof} Given the previous lemma, we only need to prove the condition for $d_{12}=5$.
Note that since $f(3\cdot 4)=f(12)\leq 7=f(4)+2$, by Lemma \ref{SITDcond} we only need to check the case of $d=6$.
We have that $d_0>0$ since otherwise the condition that $d-d_{12}=1$ forces $d_3=1$, which is impossible since no graph in $\mathcal{K}_f$ has predimension $1$. Hence, we need to check the cases of $d_0=2$ and $d_0=3$ (note that for $d_0>3$ we cannot have $d=6$ since $d\geq d_0+3$). By definition of an ITD, the following inclusion-exclusion formula holds:
\begin{equation}
\vert D\vert =\sum_{1\leq i<j\leq3} \vert D_{ij}\vert -\sum_{1\leq i\leq3} \vert D_i\vert +\vert D_0\vert .
\end{equation}
Knowing the upper bounds for the sizes of the $D_{ij}$ and the lower bounds for the sizes of the $D_i$, we can obtain an upper bound for $\vert D\vert $:
\begin{equation}
\vert D\vert \leq \sum_{1\leq i<j\leq3} \lfloor f^{-1}(d_i+d_j-d_0) \rfloor -\sum_{1\leq i\leq3} \min\{\vert B\vert \text{ s.t. } \delta(B)=d_i\}+\vert D_0\vert :=\beta
\end{equation}
Furthermore, note that $(d_1-d_0)+(d_2-d_0)+d_0=d_{12}$ and that $d_3-d_0=d-d_{12}=1$.
For $d_0=2$, without loss of generality we have $d_1=4, d_2=3, d_3=3$ (since we must have $d_1-d_0=2, d_2-d_0=1, d_3-d_0=1$). Hence, we get that $\beta=4\cdot 2+3-2\cdot 3+1=6$, and so $\vert D\vert \leq 6$. Since $f(6)\leq 6$, $f(\vert D\vert )\leq 6=d$.
For $d_0=3$, we obtain that $d_1-d_0=1, d_2-d_0=1, d_3-d_0=1$. Since $d_0=3$ implies that $\vert D_0\vert =2$, we have $\vert D_i\vert \geq 3$, so $\beta=3\cdot 4-3\cdot 3+2=5$, and so $f(\vert D\vert )\leq f(5)<6=d$, as desired.
\end{proof}
\begin{theorem}\label{sups} Let $\mathcal{K}_f$ and $\mathcal{M}_f$ be as in Construction \ref{actualf}. Then, $\mathcal{M}_f$ is supersimple of finite $SU$-rank. In particular, it has $SU$-rank $2$.
\end{theorem}
\begin{proof} From Theorem \ref{supersimpth} and Lemma \ref{strongITD} we need to check that for any proper ITD $D$, $f(\vert D\vert )\leq \delta(D)$. We know from the lemmas above that for any proper ITD $D$ for $\mathcal{K}_f$ with $d_{12}\leq 5$, $f(\vert D\vert )\leq \delta(D)$. Since for $t\geq 6$, $f(3\cdot t)\leq f(t)+1$, by Lemma \ref{SITDcond} we have that for $d_{12}\geq 6$, if $\delta(D)\geq d_{12}+1$, then $f(\vert D\vert )\leq \delta(D)$. But $\delta(D)> d_{12}$ in any proper ITD. Hence, any proper ITD with $d_{12}\geq 6$ is such that $f(\vert D\vert )\leq \delta(D)$ and so the theorem follows. Finally, the $SU$-rank coincides with the Hrushovski dimension, and so $SU(\mathcal{M}_f)=2$. This follows from the characterisation of non-forking independence in terms of the dimension \cite[Corollary 2.21]{supersimple}.
\end{proof}
In the process of our proof of Theorem \ref{sups} we have proven that the smallest proper ITD in $\mathcal{K}_f$ has predimension 6. Our conditions in Construction \ref{actualf} put no constraints on how slowly $f(t)$ grows for $t> 6$. Hence, we can see that we may make $f$ slow-growing enough to satisfy independent $n$-amalgamation for any $n\in\mathbb{N}$ of our choice. Indeed, by choosing $f(t)$ for $t\geq 6$ to grow like the inverse of $(n+1)!$, we can ensure that $\mathcal{M}_f$ has independent $n$-amalgamation for all $n$ \cite{Udiamalg}.\\
Finally, we note that structures of the form of $\mathcal{M}_f$ have weak elimination of imaginaries.
\begin{prop}\label{HWEI}
Let $\mathcal{M}_f$ be as in Remark $5.2$, then it has weak elimination of imaginaries.
\end{prop}
\begin{proof}
From Theorem 5.12, we know that $\mathcal{M}_f$ satisfies conditions (P1)-(P5) from \cite{supersimple}. And these conditions are sufficient for weak elimination of imaginaries by Lemma 2.9 and Corollary 2.7 from \cite{supersimple}.
\end{proof}
\printbibliography
\end{document}
\begin{document}
\title{Automorphisms of Calabi-Yau threefolds with Picard number three}
\author{Vladimir Lazi\'c}
\address{Mathematisches Institut, Universit\"at Bonn, Endenicher Allee 60, 53115 Bonn, Germany}
\email{[email protected]}
\author{Keiji Oguiso}
\address{Department of Mathematics, Osaka University, Toyonaka 560-0043 Osaka,
Japan and Korea Institute for Advanced Study, Hoegiro 87, Seoul, 130-722, Korea}
\email{[email protected]}
\author{Thomas Peternell}
\address{Mathematisches Institut, Universit\"at Bayreuth, 95440 Bayreuth, Germany}
\email{[email protected]}
\dedicatory{Dedicated to Professor Yujiro Kawamata on the occasion of his 60th birthday}
\thanks{All authors were partially supported by the DFG-Forschergruppe 790 ``Classification of Algebraic Surfaces and Compact Complex Manifolds''. The first author was partially supported by the DFG-Emmy-Noether-Nachwuchsgruppe ``Gute Strukturen in der h\"oherdimensionalen birationalen Geometrie''. The second author is supported by JSPS Grant-in-Aid (S) No 25220701, JSPS Grant-in-Aid (S) No 22224001, JSPS Grant-in-Aid (B) No 22340009, and by KIAS Scholar Program.}
\begin{abstract}
We prove that the automorphism group of a Calabi-Yau threefold with Picard number three is either finite, or isomorphic to the infinite cyclic group up to finite kernel and cokernel.
\end{abstract}
\maketitle
\section{Introduction}
In this paper we are interested in the automorphism group of a Calabi-Yau threefold with small Picard number. Here, a Calabi-Yau threefold is a smooth complex projective threefold $X$ with trivial canonical bundle $K_X$ such that $h^1(X,\mathcal{O}_X)=0$.
It is a classical fact that the group of birational automorphisms $\Bir(X)$ and the automorphism group $\Aut(X)$ are finite groups and coincide when $X$ is a Calabi-Yau threefold with $\rho(X)=1$. It is, however, unknown which finite groups really occur as automorphism groups, even for smooth quintic threefolds. When $\rho(X)=2$, the automorphism group is also finite by \cite[Theorem 1.2]{Og12} (see also \cite{LP12}), while there is an example of a Calabi-Yau threefold with $\rho(X)=2$ and with infinite $\Bir(X)$ \cite[Proposition 1.4]{Og12}.
In contrast, Borcea \cite{Bor91b} gave an example of a Calabi-Yau threefold with $\rho(X)=4$ having infinite automorphism group, and the same phenomenon is expected for any Picard number $\rho(X)\geq4$; for examples with large Picard numbers, see \cite{GM93,OT13}.
Thus far, the case of Picard number $3$ remained unexplored. Perhaps surprisingly, we show that the automorphism groups of such threefolds are relatively small:
\begin{thm} \label{thm:main}
Let $X$ be a Calabi-Yau threefold with $\rho(X) = 3$.
Then the automorphism group $\Aut(X)$ is either finite, or it is an almost abelian group of rank $1$, i.e.\ it is isomorphic to $\mathbb{Z}$ up to finite kernel and cokernel.
\end{thm}
We investigate automorphisms $g$ of infinite order and distinguish the cases when $g$ has an eigenvalue different from $1$, and when $g$ only has the eigenvalue $1$. Theorem \ref{thm:main} then follows from Corollary \ref{cor:1} and Proposition \ref{pro:cubic2} below.
\vskip 2mm
At the moment, we do not have an example where $\Aut(X)$ is an infinite group. Existence of such an example would show that $3$ is the smallest possible Picard number of a Calabi-Yau threefold with infinite automorphism group. However, finiteness of the automorphism group is known when the fundamental group of $X$ is infinite: when $X$ is a Calabi-Yau threefold of Type A, i.e.\ $X$ is an \'etale quotient of a torus, then $\Aut(X)$ is finite by \cite[Theorem (0.1)(IV)]{OS01}. The case when $X$ is of Type K, i.e.\ $X$ is an \'etale quotient of a product of an elliptic curve and a K3 surface, of Picard number $\rho(X) \leq 3$, is studied in the forthcoming work \cite{HK13}.
\vskip 2mm
It is our honour to dedicate this paper to Professor Yujiro Kawamata on the occasion of his sixtieth birthday. This article and our previous papers \cite{Og12,LP12} are inspired by his beautiful paper \cite{Kaw97}.
\section{Preliminaries}
We first fix some notation. Let $X$ be a Calabi-Yau threefold with Picard number $\rho(X) = 3$. The automorphism group of $X$ is denoted by $\Aut(X)$ and $N^1(X)$ is the N\'eron-Severi group of $X$ generated by the numerical classes of line bundles on $X$. Note that $N^1(X)$ is a free $\mathbb Z$-module of rank $3$. There is a natural homomorphism
$$ r\colon {\mathbb{A}ut}(X) \to {\GL}(N^1(X)),$$
and we set $\mathcal A(X) = r({\mathbb{A}ut}(X))$. Note that the kernel of $r$ is finite \cite[Proposition 2.4]{Og12}, hence ${\mathbb{A}ut}(X)$ is finite if and only if $\mathcal A(X)$ is finite. We furthermore let $N^1(X)_{\mathbb{R}} := N^1(X) \otimes \mathbb R$
be the vector space generated by $N^1(X)$.
\begin{pro}\label{pro:plane}
Let $\ell_1$ and $\ell_2$ be two distinct lines in $\mathbb{R}^2$ through the origin, and let $G$ be a subgroup of $\GL(2,\mathbb{Z})$ which acts on $\ell_1\cup\ell_2$.
If $G$ is infinite, then it is an almost abelian group of rank $1$, i.e. $G$ contains an abelian subgroup of finite index.
\end{pro}
\begin{proof}
The proof follows from that of \cite[Theorem 3.9]{LP12}, and we recall the argument for the convenience of the reader. Fix nonzero points $x_1\in\ell_1$ and $x_2\in\ell_2$. Then for any $g\in G$ there exist a permutation $(i_1,i_2)$ of the set $\{1,2\}$ and real numbers $\alpha_1$ and $\alpha_2$ such that $gx_1=\alpha_1 x_{i_1}$ and $gx_2=\alpha_2 x_{i_2}$. It follows that there are positive numbers $\beta_1$ and $\beta_2$ such that $g^4x_i=\beta_i x_i$. Hence, taking a quotient of $G$ by a finite group, we may assume that $G$ acts on $\mathbb{R}_+x_1$ and $\mathbb{R}_+x_2$.
For every $g\in G$, let $\alpha_g$ be the positive number such that $gx_1=\alpha_g x_1$, and set $\mathcal S=\{\alpha_g\mid g\in G\}$. Then $\mathcal S$ is a multiplicative subgroup of $\mathbb{R}^*$ and the map
$$ G \to \mathcal S, \quad g \mapsto \alpha_g$$
is an isomorphism of groups. It therefore suffices to show that $\mathcal S$ is an infinite cyclic group. By \cite[21.1]{Fo81}, it is enough to prove that $\mathcal S$ is discrete. Otherwise, we can pick a sequence $(g_i)$ in $G$ such that $(\alpha_{g_i})$ converges to $1$. Fix two linearly independent points $h_1,h_2\in\mathbb{Z}^2$. Then $g_ih_1\to h_1$ and $g_ih_2\to h_2$ when $i\to\infty$. Since $g_ih_1,g_ih_2\in\mathbb{Z}^2$, this implies that $g_ih_1=h_1$ and $g_ih_2=h_2$ for $i\gg0$, and hence $g_i=\mathrm{id}$ for $i\gg0$.
\end{proof}
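The discreteness argument above can be made concrete with an assumed hyperbolic element of $\GL(2,\mathbb{Z})$, say $\begin{psmallmatrix}2&1\\1&1\end{psmallmatrix}$, whose two irrational eigenlines play the role of $\ell_1$ and $\ell_2$ in Proposition \ref{pro:plane}; a minimal numerical sketch (illustrative only, not part of the proof):

```python
import math

# A concrete hyperbolic element of GL(2, Z); its two irrational
# eigenlines play the role of l1 and l2.
g = ((2, 1), (1, 1))
phi = (1 + math.sqrt(5)) / 2      # slope of one eigenline
alpha = phi ** 2                  # Perron eigenvalue (3 + sqrt(5))/2
v = (phi, 1.0)                    # spans the eigenline l1

def apply(m, x):
    return (m[0][0] * x[0] + m[0][1] * x[1],
            m[1][0] * x[0] + m[1][1] * x[1])

# g^n scales v by alpha^n, so the scaling factors of powers of g run
# through the (discrete, hence cyclic) multiplicative group {alpha^n}.
w = v
for n in range(1, 6):
    w = apply(g, w)
    assert abs(w[0] - alpha ** n * v[0]) < 1e-6
    assert abs(w[1] - alpha ** n * v[1]) < 1e-6
```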
In order to prove our main result, Theorem \ref{thm:main}, we first show that the cubic form on our Calabi-Yau threefold $X$ always splits in a special way, and this almost immediately has strong consequences on the structure of the automorphism group.
In this paper, when $L$ is a linear, quadratic or cubic form on $N^1(X)_\mathbb{R}$, we do not distinguish between $L$ and the corresponding locus $(L=0)\subseteq N^1(X)_\mathbb{R}$.
We start with the following lemma.
\begin{lem}\label{inf}
Let $X$ be a Calabi-Yau threefold with Picard number $3$. Assume that $\Aut(X)$ is infinite.
Then there exists $g\in\mathcal A(X)$ with $\det g=1$ such that $\langle g\rangle\simeq\mathbb{Z}$.
\end{lem}
\begin{proof}
By possibly replacing $\mathcal A(X)$ by the subgroup $\mathcal A(X) \cap \SL(N^1(X))$ of index at most $2$, we may assume that all elements of $\mathcal A(X)$ have determinant $1$. Assume that all elements of $\mathcal A(X)$ have finite order, and fix an element
$$h\in\mathcal A(X)\subseteq\GL(N^1(X))$$
of order $n_h$. Since $\rho(X)=3$, the characteristic polynomial $\Phi_h(t)\in\mathbb{Z}[t]$ of $h$ is of degree $3$. If $\xi$ is an eigenvalue of $h$, then $\xi^{n_h}=1$, and hence $\varphi(n_h)\leq3$, where $\varphi$ is Euler's function. An easy calculation shows that then $n_h\leq6$, and therefore $\mathcal A(X)$ is a finite group by Burnside's theorem, a contradiction.
\end{proof}
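The elementary bound in the proof, that $\varphi(n_h)\leq3$ forces $n_h\leq6$, can be checked by direct enumeration; a minimal sketch using only the Python standard library:

```python
from math import gcd

def euler_phi(n):
    """Euler's totient function."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# A primitive n-th root of unity has minimal polynomial of degree phi(n),
# which must divide the degree-3 characteristic polynomial, so phi(n) <= 3.
admissible = [n for n in range(1, 200) if euler_phi(n) <= 3]
print(admissible)  # [1, 2, 3, 4, 6]
```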
If $c_2(X) = 0$ in $H^4(X,\mathbb R)$, then ${\mathbb{A}ut}(X)$ is finite by \cite[Theorem (0.1)(IV)]{OS01}. Combining this with Lemma \ref{inf}, we may assume the following:
\begin{assumption} \label{assumption}
Let $X$ be a Calabi-Yau threefold with Picard number $3$. We assume that $c_2(X) \ne 0$ and that $\Aut(X)$ is infinite, and we fix an element $g \in \mathcal A(X)$ of infinite order as given in Lemma \ref{inf}. We denote by $C$ the cubic form on $N^1(X)_\mathbb{R}$ given by the intersection product.
\end{assumption}
\begin{pro}\label{pro:uvw}
Let $h \in \mathcal A(X)$.
\begin{enumerate}
\item[(i)] If $h$ is of infinite order, then there exist a real number $\alpha \geq 1$ and (when $\alpha=1$ not necessarily distinct) nonzero elements $u,v,w\in N^1(X)_{\mathbb{R}}$ such that $w$ is integral, $v$ is nef, and
$$hu=\frac1\alpha u,\quad hv=\alpha v,\quad hw=w.$$
Moreover, if $\alpha =1$, then $\alpha$ is the unique eigenvalue of (the complexified) $h$.
\item[(ii)] If $h \neq \mathrm{id}$ has finite order, then (the complexified) $h$ has eigenvalues $1$, $\lambda$, $\bar\lambda$, where $\lambda\in\{\pm i, \pm (\frac12 \pm i\frac{\sqrt 3}2)\}$.
\end{enumerate}
\end{pro}
\begin{proof}
Let $h^*$ denote the dual action of $h$ on $H^4(X,\mathbb{Z})$. Since $h^*$ preserves the second Chern class $c_2(X)\in H^4(X,\mathbb{Z})$, one of its eigenvalues is $1$,
and therefore $h$ also has an eigenvector $w$ with eigenvalue $1$. Since $h$ acts on the nef cone $\Nef(X)$, by the Birkhoff-Frobenius-Perron theorem \cite{Bir67} there exist $\alpha\geq1$ and
$v\in\Nef(X)\setminus\{0\}$ such that $hv=\alpha v$. As $\det h=1$, if $\alpha>1$, then the remaining eigenvalue of $h$ is $1/\alpha$.
Assume that $\alpha=1$. Then by the Birkhoff-Frobenius-Perron theorem, all eigenvalues of $h$ have absolute value 1. Thus the characteristic polynomial of $h$ reads
$$ \Phi_h(t) = (t-1)(t-\lambda)(t-\bar\lambda)$$
with $\vert \lambda \vert = 1$. Since $\Phi_h$ has integer coefficients, a direct calculation gives $\lambda\in\{1,\pm i,\pm (\frac12 \pm i\frac{\sqrt 3}2)\}$. When $\lambda\neq1$, it is easily checked that $h$ has finite order.
Finally, if $\lambda=1$, then the Jordan form of $h$ is
$$\text{either}\quad
\left(
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
0 & 0 & 1
\end{array}
\right)
\quad\text{or}\quad
\left(
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}
\right).$$
In both cases it is clear that $h$ has infinite order.
\end{proof}
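The final claim of the proof, that both unipotent Jordan forms have infinite order, is visible from their powers; a quick illustrative check:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, p):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = mat_mul(R, A)
    return R

J3 = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]   # one Jordan block of size 3
J2 = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]   # Jordan blocks of sizes 2 and 1

# The (0, 1) entry of the n-th power equals n, so no positive power is
# the identity: both matrices have infinite order.
for n in (1, 5, 50):
    assert mat_pow(J3, n)[0][1] == n
    assert mat_pow(J2, n)[0][1] == n
# The size-3 block also accumulates the binomial coefficient n(n-1)/2.
assert mat_pow(J3, 5)[0][2] == 10
```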
In the following two sections, we fix an element of infinite order as in Lemma \ref{inf} and analyse separately the cases $\alpha>1$ and $\alpha=1$ as in Proposition \ref{pro:uvw}(i).
\section{The case $\alpha>1$}
\begin{pro}\label{pro:uvwBigger}
Under Assumption \ref{assumption} and in the notation from Proposition \ref{pro:uvw} for $h=g$, assume that $\alpha>1$. Then $u$ and $v$ are nef and irrational, we have
\begin{equation}\label{eq:relations}
u^3=v^3=u^2v=uv^2=u^2w=uw^2=v^2w=vw^2=0,
\end{equation}
and the plane $\mathbb{R} u+\mathbb{R} v$ is in the kernel of the linear form given by $c_2(X)\in H^4(X,\mathbb{Z})$.
\end{pro}
\begin{proof}
We first need to show that the eigenspace of $1/\alpha$ intersects $\Nef(X)$. Pick $u\neq0$ such that $gu=\frac1\alpha u$,
and note that $u,v$ and $w$ form a basis of $N^1(X)_\mathbb{R}$. Take a general ample class
$$H=xv+yu+zw,$$
and observe that $y\neq0$ by the general choice of $H.$ Then $g^{-n}H$ is ample for every positive integer $n$, hence the divisor
$$\lim_{n\to\infty}\frac1{\alpha^n|y|}g^{-n}H=\lim_{n\to\infty}\Big(\frac x{\alpha^{2n}|y|}v+\frac y{|y|}u+\frac z{\alpha^n |y|}w\Big)=\frac y{|y|}u$$
is nef. Now replace $u$ by $yu/|y|$ if necessary to achieve the nefness of $u.$
Furthermore, since $v^3=(gv)^3=\alpha^3 v^3$, we obtain $v^3=0$; other relations in
\eqref{eq:relations} are proved similarly. Also,
$$v\cdot c_2(X)=gv\cdot gc_2(X)=\alpha v\cdot c_2(X),$$
hence $v\cdot c_2(X)=0$, and analogously $u\cdot c_2(X)=0$.
Finally, assume that $v$ is rational. By replacing $v$ by a rational multiple, we may assume that $v$ is a primitive element of $N^1(X)$. But the eigenspace associated to $\alpha$ is $1$-dimensional, and since $gv$ is also primitive, we must have $gv=v$, a contradiction. Irrationality of $u$ is proved in the same way.
\end{proof}
\begin{pro}\label{pro:cubic}
Under Assumption \ref{assumption} and in the notation of Proposition \ref{pro:uvw} for $h=g$, assume that $\alpha>1$. Let $L$ be the linear form on $N^1(X)_\mathbb{R}$ given by $c_2(X)$.
Then one of the following holds:
\begin{enumerate}
\item[(i)] $C=L_1L_2L$, where $L_1$ and $L_2$ are irrational linear forms such that
$$L_1\cap L_2=\mathbb{R} w,\quad L_1\cap L=\mathbb{R} u,\quad L_2\cap L=\mathbb{R} v;$$
\item[(ii)] $C=QL$, where $Q$ is an irreducible quadratic form. Then
$$Q\cap L=\mathbb{R} u\cup\mathbb{R} v,$$
and the planes $\mathbb{R} u+\mathbb{R} w$ and $\mathbb{R} v+\mathbb{R} w$ are tangent to $Q$ at $u$ and $v$ respectively.
\end{enumerate}
\end{pro}
\begin{proof}
Denote $A=w^3$ and $B=uvw$. We first claim that $B\neq0$. In fact, suppose that $B = 0$ and let $H$ be any ample class. Then the relations \eqref{eq:relations} imply $uv=0$, hence $0=(Huv)^2=(H^2u)\cdot(v^2u)$,
and the Hodge index theorem \cite[Corollary 2.5.4]{BS95} yields that $H$ and $v$ are proportional, which is a contradiction since $v^3=0$. This proves the claim.
Therefore, for any real variables $x,y,z$ we have
$$(xu+yv+zw)^3=z(Az^2+6Bxy),$$
and thus in the basis $(u,v,w)$ we have $C=QL$, where $Q=Az^2+6Bxy$. We consider two cases.
Assume first that $A=0$. Then $C=6Bxyz$, and we set $L_2=x$ and $L_1=6By$. This gives (i).
If $A\neq0$, then
$$Q=Az^2+6Bxy=
\left(
\begin{array}{c}
x \\
y \\
z
\end{array}
\right)^t
\left(
\begin{array}{ccc}
0 & 3B & 0 \\
3B & 0 & 0 \\
0 & 0 & A
\end{array}
\right)
\left(
\begin{array}{c}
x \\
y \\
z
\end{array}
\right),$$
and the signature of $Q$ is $(2,1)$. Therefore, $Q$ is a non-empty smooth quadric. It is now easy to see that the tangent plane to $Q$ at $u$ is $(y=0)$, and the tangent plane to $Q$ at $v$ is $(x=0)$. This proves the proposition.
\end{proof}
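The key expansion in the proof, $(xu+yv+zw)^3=z(Az^2+6Bxy)$, can be verified mechanically by modelling the intersection product as a symmetric trilinear form whose only nonzero values on the basis $(u,v,w)$ are $A=w^3$ and $B=uvw$, as forced by \eqref{eq:relations}. A sketch with arbitrary stand-in values for $A$ and $B$:

```python
from fractions import Fraction as Fr
from itertools import product

A, B = Fr(3), Fr(5)   # stand-ins for w^3 and u.v.w (any nonzero values)
# The only intersection numbers on the basis (u, v, w) = indices (0, 1, 2)
# that survive the relations (eq:relations).
T = {(2, 2, 2): A, (0, 1, 2): B}

def cube(x, y, z):
    """Self-intersection (x*u + y*v + z*w)^3 via the symmetric trilinear form."""
    vec = (x, y, z)
    return sum(vec[i] * vec[j] * vec[k] * T.get(tuple(sorted((i, j, k))), Fr(0))
               for i, j, k in product(range(3), repeat=3))

# (xu + yv + zw)^3 = z(Az^2 + 6Bxy) on a grid of rational points
for x, y, z in product(range(-2, 3), repeat=3):
    assert cube(Fr(x), Fr(y), Fr(z)) == z * (A * z ** 2 + 6 * B * x * y)
```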
\begin{cor}\label{cor:1}
Under Assumption \ref{assumption} and in the notation of Proposition \ref{pro:uvw} for $h=g$, assume that $\alpha>1$. Then $\mathcal A(X)$ is an almost abelian group of rank $1$.
\end{cor}
\begin{proof}
First note that every element $h\in\mathcal A(X)$ fixes the cubic $C$ and the plane $L=c_2(X)^\perp$. Further, the singular locus $\Sing(C)$ of $C$ is $h$-invariant. In the case (i) of Proposition \ref{pro:cubic}, $\Sing(C)=\mathbb{R} u\cup\mathbb{R} v\cup\mathbb{R} w$. This implies that the set
$$\mathbb{R} u\cup \mathbb{R} v\subseteq L$$
is $h$-invariant, and hence so is $\mathbb{R} w$. In particular, the sets $\mathbb{R} u$, $\mathbb{R} v$, $\mathbb{R} w$ are each $h^2$-invariant. Then Proposition \ref{pro:uvw} immediately shows that $hw=w$, and hence the map
$$ \mathcal A(X) \to \GL(2,\mathbb Z), \quad h \mapsto h|_L$$
is injective. Now the claim follows from Proposition \ref{pro:plane}.
In the case (ii) of Proposition \ref{pro:cubic}, we have $ \Sing(C) = \mathbb{R} u\cup\mathbb{R} v\subseteq L$, and $\mathbb{R} w$ is $h$-invariant as it is the intersection of tangent planes to $Q$ at $u$ and $v$. Now we conclude similarly as above.
\end{proof}
\section{The case $\alpha=1$}
\begin{lem}\label{lem:jordanform}
Under Assumption \ref{assumption} and in the notation of Proposition \ref{pro:uvw} for $h=g$, assume that $\alpha=1$. Then the Jordan form of $g$ is
\begin{equation}\label{eq:jordan}
\left(
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
0 & 0 & 1
\end{array}
\right).
\end{equation}
In particular, the eigenspace of $g$ associated to the eigenvalue $1$ has dimension $1$.
\end{lem}
\begin{proof} By Proposition \ref{pro:uvw}, $\alpha = 1$ is the unique eigenvalue of $g$.
Therefore the Jordan form of $g$ is either of the form \eqref{eq:jordan} or of the form
\begin{equation}\label{eq:matrix}
\left(
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}
\right).
\end{equation}
Assume that the Jordan form is of the form \eqref{eq:matrix}; in other words, there is a basis $(u_1,u_2,u_3)$ of $N^1(X)_\mathbb{R}$ such that
$$gu_1=u_1,\quad gu_2=u_1+u_2,\quad gu_3=u_3.$$
Clearly,
$$g^nu_2=u_2+nu_1$$
for every integer $n$, and furthermore,
$$u_2^3=(g^nu_2)^3=u_2^3+3nu_2^2u_1+3n^2u_2u_1^2+n^3u_1^3.$$
This gives
\begin{equation}\label{eq:1}
u_2^2u_1=u_2u_1^2=u_1^3=0.
\end{equation}
Similarly, from the equations
$$u_2^2u_3=(g^nu_2)^2(g^nu_3)\quad\text{and}\quad u_2u_3^2=(g^nu_2)(g^nu_3)^2$$
we get
\begin{equation}\label{eq:2}
u_1^2u_3=u_1u_3^2=u_1u_2u_3=0.
\end{equation}
For any smooth very ample divisor $H$ on $X$, \eqref{eq:1} and \eqref{eq:2} give $u_1^2 \cdot H = u_1 \cdot H^2 = 0,$ thus
$(u_1|_H)^2=0$ and $u_1|_H\cdot H|_H=0$, and hence $u_1|_H=0$, applying the Hodge index theorem on $H.$
This implies $u_1=0$ by the Lefschetz hyperplane section theorem, a contradiction. Thus the Jordan form cannot be of type \eqref{eq:matrix}, and the assertion is proved.
\end{proof}
\begin{pro}\label{pro:jordan2}
Under Assumption \ref{assumption} and in the notation of Proposition \ref{pro:uvw} for $h=g$, assume that $\alpha=1$. Then, possibly after rescaling $w$, there exist $w_1,w_2\in N^1(X)$ such that $(w,w_1,w_2)$ is a basis of $N^1(X)_\mathbb{R}$ in which $g$ takes the Jordan form \eqref{eq:jordan}, and we have
\begin{equation}\label{eq:3}
w\cdot c_2(X)=w_1\cdot c_2(X)=w^2=w_1^3=ww_1^2=ww_1w_2=0
\end{equation}
and
\begin{equation}\label{eq:4}
ww_2^2=2w_1w_2^2=-2w_1^2w_2 \ne 0.
\end{equation}
\end{pro}
\begin{proof}
Pick any $w_2\in N^1(X)$ such that $w_1 :=(g-\mathrm{id})w_2\neq0$ and $u:=(g-\mathrm{id})^2w_2\neq0$, which is possible by Lemma \ref{lem:jordanform}.
Then
$$gu=u,\quad gw_1=u+w_1,\quad gw_2=w_1+w_2,$$
and it is easy to check that $(u,w_1,w_2)$ is a basis of $N^1(X)_\mathbb{R}$. Since the eigenspace associated to the eigenvalue $1$ of $g$ is $1$-dimensional by Lemma \ref{lem:jordanform}, by Proposition \ref{pro:uvw} we may assume that $u=w$. We first observe that
$$g^nw_1=w_1+nw\quad\text{and}\quad g^nw_2=w_2+nw_1+\frac{n(n-1)}{2}w$$
for any integer $n$. Then the equations
$$w_1\cdot c_2(X)=g^nw_1\cdot c_2(X)\quad\text{and}\quad w_2\cdot c_2(X)=g^nw_2\cdot c_2(X)$$
give
$$w\cdot c_2(X)=w_1\cdot c_2(X)=0.$$
Similarly, from $w_1^3=(g^nw_1)^3$ and $ww_2^2=(g^nw)(g^nw_2)^2$ we get
$$ww_1^2=ww_1w_2= w^2=0,$$
and $w_1^2w_2=(g^nw_1)^2(g^nw_2)$ yields
$$w_1^3=0.$$
Finally, from $w_2^3=(g^nw_2)^3$ we obtain \eqref{eq:4}, up to the non-vanishing statement. Assume that $ww_2^2 = 0$. Since $w,w_1,w_2$ generate $N^1(X)_\mathbb{R}$, this implies that for any two smooth very ample line bundles $H_1$ and $H_2$ on $X$ we have $w\cdot H_1\cdot H_2 = 0$, and in particular $w|_{H_1}=0$. But then $w=0$ by the Lefschetz hyperplane section theorem, a contradiction.
\end{proof}
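The relations \eqref{eq:3} and \eqref{eq:4} do make $w_2^3$ invariant under every power of $g$, as invariance requires; this can be checked numerically with stand-in intersection numbers satisfying \eqref{eq:3} and \eqref{eq:4} (the values below are illustrative, not computed from any actual threefold):

```python
from fractions import Fraction as Fr
from itertools import product

# Stand-in intersection numbers on the basis (w, w1, w2) = indices (0, 1, 2),
# chosen to satisfy (eq:3) and (eq:4):
# w.w2^2 = 2, w1.w2^2 = 1, w1^2.w2 = -1, w2^3 = 7; all other triples vanish.
T = {(0, 2, 2): Fr(2), (1, 2, 2): Fr(1), (1, 1, 2): Fr(-1), (2, 2, 2): Fr(7)}

def cube(vec):
    """Self-intersection of vec = x*w + y*w1 + z*w2, with vec = (x, y, z)."""
    return sum(vec[i] * vec[j] * vec[k] * T.get(tuple(sorted((i, j, k))), Fr(0))
               for i, j, k in product(range(3), repeat=3))

def g_pow_n_w2(n):
    """Coordinates of g^n(w2) = w2 + n*w1 + n(n-1)/2 * w."""
    return (Fr(n * (n - 1), 2), Fr(n), Fr(1))

# w2^3 is preserved by every power of g.
assert all(cube(g_pow_n_w2(n)) == cube(g_pow_n_w2(0)) for n in range(-6, 7))
```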
\begin{pro}\label{pro:cubic2}
Under Assumption \ref{assumption} and in the notation of Proposition \ref{pro:uvw} and Proposition \ref{pro:jordan2} for $h=g$, assume that $\alpha=1$.
\begin{enumerate}
\item[(i)] Let $L$ be the linear form on $N^1(X)_\mathbb{R}$ given by $c_2(X)$. Then $C=QL$, where $Q$ is an irreducible quadratic form, and $L$ is tangent to $Q$ at $w$.
\item[(ii)] The automorphism group $\Aut(X)$ is an almost abelian group of rank $1$.
\end{enumerate}
\end{pro}
\begin{proof}
Set $E=3ww_2^2/2$ and $F=w_2^3$. Then, using \eqref{eq:3} and \eqref{eq:4}, for all real variables $x,y,z$ we obtain the equation
$$(xw+yw_1+zw_2)^3=z(Fz^2+2Exz-Ey^2+Eyz).$$
Since $L=z$ by \eqref{eq:3}, we have $C=QL$, where $Q=Fz^2+2Exz-Ey^2+Eyz$. Since $E \ne 0$ by Proposition \ref{pro:jordan2}, the tangent plane to $Q$ at $w$ is $(z=0)$. This shows (i).
\vskip 2mm
For (ii), consider any $h \in \mathcal A(X)$. We may assume $\det h=1$, possibly replacing $\mathcal A(X)$ by $\mathcal A(X) \cap \SL(N^1(X)).$
The singular locus of $C$ is $\mathbb{R} w$, hence $\mathbb{R} w$ is $h$-invariant and therefore defined over $\mathbb Q$. By the shape of the cubic and by Proposition \ref{pro:cubic}, and since the element $g$ in Assumption \ref{assumption} is chosen arbitrarily, $h$ has a unique real eigenvalue $\alpha=1$. By Proposition \ref{pro:uvw} and by Lemma \ref{lem:jordanform}, $\mathbb{R} w$ is the only eigenspace of $h$, thus $hw = w$.
The plane $L = c_2(X)^{\perp}$ is $h$-invariant, and note that $L$ is spanned by $w$ and $w_1$ by \eqref{eq:3}. In the basis $(w,w_1)$, the restriction $h|_L$ has the form
$$
\left(
\begin{array}{cc}
1 & a_h\\
0 & b_h
\end{array}
\right),
$$
and $\det (h|_L) = \pm 1$.
By possibly replacing $\mathcal A(X)$ by the preimage of $ \mathcal A(X) \vert_L \cap \SL(L)$
under the restriction map $ \mathcal A(X) \to \mathcal A(X) \vert_L$, which has index at most $2$, we may assume that $\det (h|_L) = 1$, and thus $b_h = 1$. Hence, the matrix of $h$ in the basis $(w,w_1,w_2)$ is
\begin{equation}\label{eq:jA}
\mathcal H=\left(
\begin{array}{ccc}
1 & a_h & d_h \\
0 & 1 & c_h \\
0 & 0 & 1
\end{array}
\right).
\end{equation}
This implies, in particular, that $h$ cannot be of finite order. The quadric $Q$ is given in this basis by the matrix
$$ \mathcal Q = \left(
\begin{array}{ccc}
0 & 0 & E \\
0 & -E & \frac12 E \\
E & \frac12 E & F
\end{array}
\right).$$
We now view $Q$ as a quadric over $\mathbb{C}$. Since $Q$ is $h$-invariant, by the Nullstellensatz there exists $\lambda\in\mathbb{Q}$ such that $hQ= \lambda Q$, i.e.
$\mathcal H^t\mathcal Q \mathcal H=\lambda \mathcal Q$. By taking determinants, we conclude that $\lambda^3 = 1$, hence $\lambda = 1$. Putting the explicit matrices into the formula, we obtain
\begin{equation}\label{equation}
a_h = c_h\quad\text{and}\quad d_h = \frac{a_h (a_h - 1)}{2}.
\end{equation}
Since $w\in N^1(X)$, there is a primitive element $\overline w\in N^1(X)$ and a positive integer $p$ such that $w=p\overline w$. We have $a_hp\overline w=a_h w=hw_1 - w_1\in N^1(X)$, hence the number $a_hp$ must be an integer.
Consider the group homomorphism
$$ \tau\colon \mathcal A(X) \to \mathbb{Z}, \qquad h\mapsto pa_h.$$
By \eqref{equation}, $\tau$ is injective, and therefore $\mathcal A(X)\simeq\mathbb{Z}$. Thus $\mathcal A(X)$ is abelian of rank $1$.
\end{proof}
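The constraint \eqref{equation} extracted from $\mathcal H^t\mathcal Q\mathcal H=\mathcal Q$ can be double-checked by direct matrix computation, with illustrative stand-in rational values for $E$ and $F$:

```python
from fractions import Fraction as Fr

E, F = Fr(4), Fr(9)   # stand-in rational values, E != 0

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# The matrix of Q = Fz^2 + 2Exz - Ey^2 + Eyz in the basis (w, w1, w2)
Q = [[Fr(0), Fr(0), E], [Fr(0), -E, E / 2], [E, E / 2, F]]

def preserves_Q(a, c, d):
    """Does the unipotent matrix H with entries a, c, d satisfy H^t Q H = Q?"""
    H = [[Fr(1), a, d], [Fr(0), Fr(1), c], [Fr(0), Fr(0), Fr(1)]]
    return mat_mul(transpose(H), mat_mul(Q, H)) == Q

# H^t Q H = Q holds exactly when c = a and d = a(a-1)/2
for a in map(Fr, range(-5, 6)):
    assert preserves_Q(a, a, a * (a - 1) / 2)
    assert not preserves_Q(a, a + 1, a * (a - 1) / 2)
```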
\end{document}
\begin{document}
\title{Orthosymmetric spaces over an Archimedean vector lattice}
\author{M. A. Ben Amor, K. Boulabiar, and J. Jaber\\{\small Research Laboratory of Algebra, Topology, Arithmetic, and Order}\\{\small Department of Mathematics}\\{\small Faculty of Mathematical, Physical and Natural Sciences of Tunis}\\{\small Tunis-El Manar University, 2092-El Manar, Tunisia}}
\date{}
\maketitle
\begin{abstract}
We introduce and study the notion of orthosymmetric spaces over an Archimedean
vector lattice as a generalization of finite-dimensional Euclidean inner
product spaces. Special attention is paid to linear operators on these spaces.
\end{abstract}
\section{Introduction and first properties}
We take it for granted that the reader is familiar with the elementary theory
of vector lattices (i.e., Riesz spaces) and positive operators. For
terminology, notation, and properties not explained or proved we refer to the
standard texts \cite{LZ71,Z83}.
Let $\mathbb{V}$ be an Archimedean vector lattice. Following Buskes and van
Rooij in \cite{BR00}, we call a $\mathbb{V}$-\textsl{valued orthosymmetric
product} on a vector lattice $L$ any bilinear map that takes each ordered pair
$\left( f,g\right) $ of elements of $L$ to a vector $\left\langle
f,g\right\rangle $ of $\mathbb{V}$ and has the two following properties.
\begin{enumerate}
\item[(1)] (\textsl{Positivity}) $\left\langle f,g\right\rangle \in
\mathbb{V}^{+}$ for all $f,g\in L^{+}$.
\item[(2)] (\textsl{Orthosymmetry}) $\left\langle f,g\right\rangle =0$ in
$\mathbb{V}$ for all $f,g\in L$ with $f\wedge g=0$.
\end{enumerate}
\noindent By an \textsl{orthosymmetric space over }$\mathbb{V}$ (or, just an
\textsl{orthosymmetric space }if no confusion can arise) we mean a vector
lattice $L$ along with a $\mathbb{V}$-valued orthosymmetric product on $L$. As
the next example shows, the classical Euclidean spaces fit within the
framework of orthosymmetric spaces.
\begin{example}
\label{Euclidean}As usual, $\mathbb{R}$ denotes the Archimedean vector lattice
of all real numbers. Pick $n\in\mathbb{N}=\left\{ 1,2,...\right\} $ and
suppose that the vector space $\mathbb{R}^{n}$ is equipped with its usual
structure of Euclidean space. In particular,
\[
\left\langle f,g\right\rangle =\sum_{k=1}^{n}\left\langle f,e_{k}\right\rangle
\left\langle g,e_{k}\right\rangle \text{ for all }f,g\in\mathbb{R}^{n},
\]
where $\left( e_{1},...,e_{n}\right) $ is the canonical \emph{(}
orthogonal\emph{) }basis of $\mathbb{R}^{n}$. Simultaneously, $\mathbb{R}^{n}$
is a vector lattice with respect to the coordinatewise ordering. That is,
\[
f\geq0\text{ in }\mathbb{R}^{n}\text{ if and only if }\left\langle
f,e_{k}\right\rangle \in\mathbb{R}^{+}\text{ for all }k\in\left\{
1,...,n\right\} .
\]
Consequently, if $f,g\geq0$ in $\mathbb{R}^{n}$ then $\left\langle
f,g\right\rangle \in\mathbb{R}^{+}$. Furthermore, let $f,g\in\mathbb{R}^{n}$
such that $f\wedge g=0$. Whence,
\[
\min\left\{ \left\langle f,e_{k}\right\rangle ,\left\langle g,e_{k}
\right\rangle \right\} =0\text{ for all }k\in\left\{ 1,2,...,n\right\} .
\]
Therefore, $\left\langle f,g\right\rangle =0$ meaning that the inner product
on $\mathbb{R}^{n}$ is an $\mathbb{R}$-valued orthosymmetric product. Thus,
the Euclidean space $\mathbb{R}^{n}$ is an orthosymmetric space over
$\mathbb{R}$.
\end{example}
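The verification in Example \ref{Euclidean} amounts to two one-line checks; the following sketch (illustrative only, with the coordinatewise meet written out) carries them out for a concrete pair of vectors:

```python
def inner(f, g):
    """The Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(f, g))

def meet(f, g):
    """The lattice meet f ^ g for the coordinatewise ordering."""
    return tuple(min(a, b) for a, b in zip(f, g))

f, g = (2.0, 0.0, 3.0), (0.0, 5.0, 0.0)

# positivity: <f, g> >= 0 whenever f, g >= 0 coordinatewise
assert inner(f, g) >= 0
# orthosymmetry: f ^ g = 0 implies <f, g> = 0
assert meet(f, g) == (0.0, 0.0, 0.0) and inner(f, g) == 0.0
# the identity <f, f> = <|f|, |f|>, seen coordinatewise
h = (1.0, -2.0, 3.0)
assert inner(h, h) == inner(tuple(map(abs, h)), tuple(map(abs, h)))
```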
\begin{quote}
\textit{Beginning with the next lines, we shall impose the blanket assumption
that all orthosymmetric spaces under consideration are taken over the fixed
Archimedean vector lattice }$\mathbb{V}$ (\textit{unless otherwise stated
explicitly}).
\end{quote}
\noindent The following property is useful for later purposes.
\begin{lemma}
\label{vp}Let $L$ be an orthosymmetric space. Then,
\[
\left\langle f,f\right\rangle =\left\langle \left\vert f\right\vert
,\left\vert f\right\vert \right\rangle \in\mathbb{V}^{+}\text{ for all }f\in
L.
\]
\end{lemma}
\begin{proof}
If $f\in L$ then $f^{+}\wedge f^{-}=0$. It follows that
\[
\left\langle f^{+},f^{-}\right\rangle =\left\langle f^{-},f^{+}\right\rangle
=0.
\]
Hence,
\begin{align*}
\left\langle f,f\right\rangle & =\left\langle f^{+}-f^{-},f^{+}
-f^{-}\right\rangle =\left\langle f^{+},f^{+}\right\rangle +\left\langle
f^{-},f^{-}\right\rangle \\
& =\left\langle f^{+}+f^{-},f^{+}+f^{-}\right\rangle =\left\langle \left\vert
f\right\vert ,\left\vert f\right\vert \right\rangle \in\mathbb{V}^{+}.
\end{align*}
This is the desired result.
\end{proof}
At first sight, it might seem that it is easy to establish the following
remarkable property of orthosymmetric spaces. However, all proofs that can be
found in the literature are quite involved and far from being trivial (see,
e.g., Corollary $2$ in \cite{BR00} and Theorem $3.8.14$ in \cite{S10}). By the
way, it should be pointed out that this property is based on the fact that
$\mathbb{V}$ is Archimedean.
\begin{lemma}
\label{Steinberg}Let $L$ be an orthosymmetric space. Then,
\[
\left\langle f,g\right\rangle =\left\langle g,f\right\rangle \text{ for all
}f,g\in L.
\]
\end{lemma}
Roughly speaking, any $\mathbb{V}$-valued orthosymmetric product on a vector
lattice is symmetric (a multidimensional version of Lemma \ref{Steinberg} can
be found in \cite{B02}). We emphasize that results in Lemmas \ref{vp} and
\ref{Steinberg} could be used below without further mention.
Before proceeding with our investigation, we note that part of our terminology
comes from the theory of inner product spaces (see \cite{B74}).
Let $L$ be an orthosymmetric space. An element $f$ in $L$ is said to
be\textsl{ neutral} if $\left\langle f,f\right\rangle =0$. Obviously, the zero
vector is neutral. The set of all neutral elements in $L$ is called the
\textsl{neutral part} of $L$ and is denoted by $L^{0}$. Namely,
\[
L^{0}=\left\{ f\in L:\left\langle f,f\right\rangle =0\right\} .
\]
The neutral part of $L$ has a nice characterization.
\begin{lemma}
\label{key}Let $L$ be an orthosymmetric space. Then,
\[
L^{0}=\left\{ f\in L:\left\langle f,g\right\rangle =0\text{ for all }g\in
L\right\} .
\]
\end{lemma}
\begin{proof}
Obviously, if $f\in L$ with $\left\langle f,g\right\rangle =0$ for all $g\in
L$, then $\left\langle f,f\right\rangle =0$ so $f\in L^{0}$. Conversely, let
$f\in L^{0}$ and pick $g\in L$. Choose $n\in\mathbb{N}$ and observe that
\[
0\leq\left\langle g-nf,g-nf\right\rangle =\left\langle g,g\right\rangle
-2n\left\langle f,g\right\rangle +n^{2}\left\langle f,f\right\rangle
=\left\langle g,g\right\rangle -2n\left\langle f,g\right\rangle .
\]
Therefore,
\[
2n\left\langle f,g\right\rangle \leq\left\langle g,g\right\rangle \text{ for
all }n\in\mathbb{N}.
\]
Replacing $f$ by $-f$ in the above inequality, we obtain
\[
-2n\left\langle f,g\right\rangle \leq\left\langle g,g\right\rangle \text{ for
all }n\in\mathbb{N}.
\]
But then $\left\langle f,g\right\rangle =0$ because $\mathbb{V}$ is
Archimedean. The proof is complete.
\end{proof}
An interesting lattice-ordered property of the neutral part of an
orthosymmetric space is obtained as a consequence of the previous lemma.
\begin{theorem}
\label{neutral}The neutral part $L^{0}$ of an orthosymmetric space $L$ is an
ideal in $L$.
\end{theorem}
\begin{proof}
Let $r$ be a real number and $f,g\in L^{0}$. Then,
\[
\left\langle f+rg,f+rg\right\rangle =\left\langle f,f\right\rangle
+2r\left\langle f,g\right\rangle +r^{2}\left\langle g,g\right\rangle =0
\]
(where we use Lemma \ref{key}). It follows that $f+rg\in L^{0}$ and so $L^{0}$
is a vector subspace of $L$.
Secondly, let $f\in L^{0}$ and observe that
\[
\left\langle \left\vert f\right\vert ,\left\vert f\right\vert \right\rangle
=\left\langle f,f\right\rangle =0.
\]
Hence, $\left\vert f\right\vert \in L^{0}$ and thus $L^{0}$ is a vector
sublattice of $L$.
Finally, let $f,g\in L$ such that $0\leq f\leq g$ and $g\in L^{0}$. Whence,
\[
0\leq\left\langle f,f\right\rangle \leq\left\langle f,g\right\rangle
\leq\left\langle g,g\right\rangle =0.
\]
This shows that $L^{0}$ is solid in $L$ and finishes the proof.
\end{proof}
An orthosymmetric space need not be Archimedean, as shown in the next example.
\begin{example}
\label{lexi}Assume that the Euclidean plane $\mathbb{R}^{2}$ is endowed with
its lexicographic ordering. Hence, $\mathbb{R}^{2}$ is a non-Archimedean
vector lattice. Put
\[
\left\langle f,g\right\rangle =x_{1}y_{1}\text{ for all }f=\left( x_{1}
,x_{2}\right) ,g=\left( y_{1},y_{2}\right) \text{ in }\mathbb{R}^{2}.
\]
Since $\mathbb{R}^{2}$ is totally ordered \emph{(}i.e., linearly
ordered\emph{)}, this formula defines an $\mathbb{R}$-valued orthosymmetric
product on $\mathbb{R}^{2}$. This means that $\mathbb{R}^{2}$ is a
non-Archimedean orthosymmetric space over $\mathbb{R}$.
\end{example}
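Example \ref{lexi} can likewise be checked directly: in the lexicographic ordering the meet is the minimum, so $f\wedge g=0$ forces the smaller vector to vanish, and the product $x_1y_1$ is then $0$. A sketch, which also exhibits a nonzero neutral element and the failure of the Archimedean property:

```python
def lex_leq(f, g):
    """f <= g in the lexicographic ordering on R^2."""
    return f[0] < g[0] or (f[0] == g[0] and f[1] <= g[1])

def lex_meet(f, g):
    # in a totally ordered vector lattice the meet is just the minimum
    return f if lex_leq(f, g) else g

def prod(f, g):
    """The orthosymmetric product <f, g> = x1 * y1."""
    return f[0] * g[0]

zero = (0.0, 0.0)
g = (0.0, 7.0)   # nonzero vector whose first coordinate vanishes

# orthosymmetry: zero ^ g = 0 and the product vanishes
assert lex_meet(zero, g) == zero and prod(zero, g) == 0.0
# g is a nonzero neutral element: <g, g> = 0
assert prod(g, g) == 0.0
# failure of the Archimedean property: 0 <= n*(0, 1) <= (1, 0) for every n
n = 10 ** 6
assert lex_leq(zero, (0.0, float(n))) and lex_leq((0.0, float(n)), (1.0, 0.0))
```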
The orthosymmetric space $L$ is said to be \textsl{definite}\textit{ }if its
neutral part is trivial. That is to say, $L$ is definite if and only if
\[
\left\langle f,f\right\rangle =0\text{ in }\mathbb{V}\text{ implies }f=0\text{
in }L.
\]
Definite orthosymmetric spaces have a better behavior as explained in the following.
\begin{proposition}
\label{arch}Any definite orthosymmetric space is Archimedean.
\end{proposition}
\begin{proof}
Let $L$ be a definite orthosymmetric space and choose $f,g\in L^{+}$ with
\[
0\leq nf\leq g\text{ for all }n\in\mathbb{N}.
\]
Pick $n\in\mathbb{N}$ and observe that $g-nf\in L^{+}$. So,
\[
0\leq\left\langle g-nf,f\right\rangle =\left\langle g,f\right\rangle
-n\left\langle f,f\right\rangle .
\]
Hence,
\[
0\leq n\left\langle f,f\right\rangle \leq\left\langle g,f\right\rangle \text{
for all }n\in\mathbb{N}.
\]
Since $\mathbb{V}$ is Archimedean, we get $\left\langle f,f\right\rangle =0$.
But then $f=0$ because $L$ is definite.
\end{proof}
By Theorem \ref{neutral}, the neutral part $L^{0}$ is an ideal in $L$. Hence,
we may consider the quotient vector lattice $L/L^{0}$ (see Chapter $9$ in
\cite{LZ71}). The equivalence class (i.e., the residue class)
\[
f+L^{0}=\left\{ f+g:g\in L^{0}\right\}
\]
of a vector $f\in L$ is denoted by $\left[ f\right] $. It turns out that
$L/L^{0}$ is automatically equipped with a structure of an orthosymmetric space.
\begin{theorem}
\label{quo}Let $L$ be an orthosymmetric space. Then $L/L^{0}$ is a definite
orthosymmetric space with respect to the $\mathbb{V}$-valued orthosymmetric
product given by
\[
\left\langle \left[ f\right] ,\left[ g\right] \right\rangle =\left\langle
f,g\right\rangle \text{ for all }f,g\in L.
\]
\end{theorem}
\begin{proof}
First of all, let us prove that the function that takes each ordered pair
$\left( \left[ f\right] ,\left[ g\right] \right) $ of elements of
$L/L^{0}$ to a vector $\left\langle \left[ f\right] ,\left[ g\right]
\right\rangle $ of $\mathbb{V}$, given by
\begin{equation}
\left\langle \left[ f\right] ,\left[ g\right] \right\rangle =\left\langle
f,g\right\rangle \text{ for all }f,g\in L \tag{$\ast$}
\end{equation}
is well-defined. Indeed, choose $f,g\in L$ and $h,k\in L^{0}$. By Lemma
\ref{key}, we have
\[
\left\langle f,k\right\rangle =\left\langle h,g\right\rangle =\left\langle
h,k\right\rangle =0.
\]
So,
\[
\left\langle f+h,g+k\right\rangle =\left\langle f,g\right\rangle +\left\langle
f,k\right\rangle +\left\langle h,g\right\rangle +\left\langle h,k\right\rangle
=\left\langle f,g\right\rangle .
\]
We derive that the function given by $\left( \ast\right) $ is well-defined
(its bilinearity is obvious). Now, let $f,g\in L$ such that $\left[ f\right]
,\left[ g\right] $ are positive in $L/L^{0}$. Hence, there exist $h,k\in
L^{0}$ such that $h\leq f$ and $k\leq g$. Whence, $0\leq f-h$ and $0\leq g-k$
from which it follows that
\[
0\leq\left\langle f-h,g-k\right\rangle =\left\langle f,g\right\rangle
-\left\langle f,k\right\rangle -\left\langle h,g\right\rangle +\left\langle
h,k\right\rangle =\left\langle f,g\right\rangle =\left\langle \left[
f\right] ,\left[ g\right] \right\rangle .
\]
Moreover, if $\left[ f\right] \wedge\left[ g\right] =\left[ 0\right] $
in $L/L^{0}$ then
\[
\left[ f\wedge g\right] =\left[ f\right] \wedge\left[ g\right] =\left[
0\right] .
\]
This means that $f\wedge g\in L^{0}$. This together with Lemma \ref{key}
yields quickly that
\[
\left\langle \left[ f\right] ,\left[ g\right] \right\rangle =\left\langle
f,g\right\rangle =\left\langle f-f\wedge g,g-f\wedge g\right\rangle .
\]
But then $\left\langle \left[ f\right] ,\left[ g\right] \right\rangle =0$
because
\[
\left( f-f\wedge g\right) \wedge\left( g-f\wedge g\right) =0.
\]
Accordingly, $L/L^{0}$ is an orthosymmetric space. It remains to show that
$L/L^{0}$ is definite. To see this, let $f\in L$ such that $\left\langle
\left[ f\right] ,\left[ f\right] \right\rangle =0$. Hence, $\left\langle
f,f\right\rangle =0$ from which it follows that $f\in L^{0}$, so $\left[
f\right] =\left[ 0\right] $. This completes the proof.
\end{proof}
Taking into account Proposition \ref{arch} and Theorem \ref{quo}, we infer
directly that the quotient vector lattice $L/L^{0}$ is Archimedean and so
$L^{0}$ is a uniformly closed ideal in the vector lattice $L$ (see Theorem
$60.2$ in \cite{LZ71}).
\section{Multiplication operators in orthosymmetric product spaces}
Let $M$ be an ordered vector space. A vector subspace $V$ of $M$ is called a
\textit{lattice-subspace} of $M$ if $V$ is a vector lattice with respect to
the ordering inherited from $M$ (see Definition $5.58$ in \cite{AA02}). On the
other hand, in general, we cannot talk about $V$ being a vector sublattice of
$M$ as the latter space need not be a vector lattice. In order to cope with
this terminology problem, following Abramovich and Wickstead in
\cite{AW94} we call the lattice-subspace $V$ of $M$ a \textsl{generalized vector
sublattice} of $M$ if the supremum in $M$ of each $v,w\in V$ exists and
coincides with their supremum in $V$. Hence, if $M$ turns out to be a vector
lattice, then the word `generalized' becomes superfluous. Moreover, it is
trivial that every generalized vector sublattice of $M$ is a lattice-subspace.
Nevertheless, the converse is not true as we can see in the example provided
in \cite[Page $229$]{AA02}.
We start this section with the following general lemma which is presumably
well-known, though we have not been able to locate a precise reference for it.
As usual, the kernel and the range of any linear operator $T$ are denoted by
$\ker T$ and $\operatorname{Im}T$, respectively.
\begin{lemma}
\label{gene}Let $L$ be a vector lattice and $M$ be an ordered vector space.
Suppose that $T:L\rightarrow M$ is a positive operator such that
\begin{enumerate}
\item[\emph{(i)}] $\ker T$ is a vector sublattice of $L$, and
\item[\emph{(ii)}] $f^{-}\in\ker T$ for all $f\in L$ with $Tf\in M^{+}$.
\end{enumerate}
\noindent Then $\operatorname{Im}T$ is a lattice-subspace of $M$ and $T$ is a
lattice homomorphism from $L$ onto $\operatorname{Im}T$.
\end{lemma}
\begin{proof}
Let $f,g\in L$. Since $T$ is positive,
\[
T\left( f\vee g\right) \geq Tf\text{ and }T\left( f\vee g\right) \geq Tg.
\]
This means that $T\left( f\vee g\right) $ is an upper bound of $\left\{
Tf,Tg\right\} $ in $\operatorname{Im}T$. Let $v\in\operatorname{Im}T$ be another
upper bound of $\left\{ Tf,Tg\right\} $ in $\operatorname{Im}T$. There
exists $h\in L$ such that $v=Th$. Since
\[
Th=v\geq Tf\text{ and }Th=v\geq Tg,
\]
we get
\[
T\left( h-f\right) \geq0\text{ and }T\left( h-g\right) \geq0.
\]
Therefore,
\[
\left( h-f\right) ^{-}\in\ker T\text{ and }\left( h-g\right) ^{-}\in\ker
T.
\]
But then
\[
\left( h-\left( f\vee g\right) \right) ^{-}=\left( h-f\right) ^{-}
\vee\left( h-g\right) ^{-}\in\ker T
\]
because $\ker T$ is a vector sublattice of $L$. It follows that
\[
T\left( h-\left( f\vee g\right) \right) =T\left( \left( h-\left( f\vee
g\right) \right) ^{+}\right) \geq0
\]
and so
\[
v=Th\geq T\left( f\vee g\right) .
\]
We derive that $T\left( f\vee g\right) $ is the supremum of $\left\{
Tf,Tg\right\} $ in $\operatorname{Im}T$. That is,
\[
T\left( f\vee g\right) =Tf\vee Tg\text{ in }\operatorname{Im}T
\]
and the proof is complete.
\end{proof}
Lemma \ref{gene} does not hold without the condition $\mathrm{(ii)}$. An
example in this direction is given next.
\begin{example}
\label{integ}Let $L=M=C\left[ 0,\pi\right] $, where $C\left[ 0,\pi\right]
$ is the Archimedean vector lattice of all real-valued continuous functions on
the real interval $\left[ 0,\pi\right] $. Define $T:L\rightarrow M$ by
putting
\[
\left( Tf\right) \left( x\right) =\int_{0}^{x}f\left( y\right)
\mathrm{d}y\text{ for all }f\in L\text{ and }x\in\left[ 0,\pi\right] .
\]
Obviously, $T$ is a positive operator. Moreover, $T$ is one-to-one and so
$\ker T=\left\{ 0\right\} $ is an ideal in $L$. However, if $f\in L$ is
defined by
\[
f\left( x\right) =2\sin x\cos x\text{ for all }x\in\left[ 0,\pi\right] ,
\]
then
\[
\left( Tf\right) \left( x\right) =\left( \sin x\right) ^{2}\text{ for
all }x\in\left[ 0,\pi\right] .
\]
Furthermore, using the convention $f^{-}=\left( -f\right) \vee0$,
\[
f^{-}\left( x\right) =\left\{
\begin{array}
[c]{l}
0\text{ if }x\in\left[ 0,\pi/2\right] \\
\\
-2\sin x\cos x\text{ if }x\in\left[ \pi/2,\pi\right] .
\end{array}
\right.
\]
Thus,
\[
\left( T\left( f^{-}\right) \right) \left( \pi\right) =-\int_{\pi/2}
^{\pi}2\sin x\cos x\,\mathrm{d}x=1\neq0.
\]
Hence, $f^{-}\notin\ker T$. Observe now that
\[
\operatorname{Im}T=\left\{ f\in L:f\left( 0\right) =0\text{ and }f\text{ is
continuously differentiable}\right\}
\]
is not a lattice-subspace of $M$.
\end{example}
Hence, Example \ref{integ} proves that the condition $\mathrm{(ii)}$ is not
redundant in Lemma \ref{gene}. In spite of that, the following observation
deserves particular attention.
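The computations in Example \ref{integ} are easy to check numerically. The following sketch is an informal illustration of ours, not part of the formal development; the operator name `T` and the midpoint-rule quadrature are our own choices. It approximates the integral operator and confirms that $Tf$ is positive while $T\left( f^{-}\right) $ does not vanish (with the usual convention $f^{-}=\left( -f\right) \vee0$).

```python
import math

def T(f, x, n=20000):
    # (Tf)(x) = integral of f over [0, x], approximated by the midpoint rule
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

f = lambda y: 2.0 * math.sin(y) * math.cos(y)   # f(y) = sin 2y
f_neg = lambda y: max(-f(y), 0.0)               # negative part f^- = (-f) v 0

print(abs(T(f, math.pi / 2) - 1.0) < 1e-6)   # (Tf)(pi/2) = sin^2(pi/2) = 1
print(abs(T(f_neg, math.pi) - 1.0) < 1e-4)   # (T f^-)(pi) = 1, so f^- is not in ker T
```

Both checks print `True`, matching the example: $Tf=\sin^{2}$ is positive even though $f^{-}\notin\ker T=\left\{ 0\right\} $.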
\begin{remark}
Let $L$ be a vector lattice, $M$ be an ordered vector space, and
$T:L\rightarrow M$ be a positive operator such that $\ker T$ is a vector
sublattice of $L$. We derive quickly that $\ker T$ is an ideal in $L$. Hence,
we may speak about the vector lattice $L/\ker T$ \emph{(}see
\emph{\cite[Chapter $9$]{LZ71})}. In this situation, it is not hard to see
that an ordering can be defined on $\operatorname{Im}T$ by putting
\[
Tf\preccurlyeq Tg\text{ in }\operatorname{Im}T\text{ if and only if }g-f\geq
h\text{ for some }h\in\ker T.
\]
This ordering makes $\operatorname{Im}T$ into a vector lattice which is
lattice isomorphic with $L/\ker T$. The lattice operations in
$\operatorname{Im}T$ are given pointwise as follows
\[
Tf\curlyvee Tg=T\left( f\vee g\right) \text{ and }Tf\curlywedge Tg=T\left(
f\wedge g\right) \text{ for all }f,g\in L.
\]
However, $\operatorname{Im}T$ need not be, in general, a lattice-subspace of
$M$. As a matter of fact, \emph{Example \ref{integ}} illustrates this
situation. Indeed, we have observed already that
\[
\operatorname{Im}T=\left\{ f\in C\left[ 0,\pi\right] :f\left( 0\right)
=0\text{ and }f\text{ is continuously differentiable}\right\}
\]
is not a lattice-subspace of $C\left[ 0,\pi\right] $. Nevertheless,
$\operatorname{Im}T$ is a vector lattice with respect to the `new' ordering
defined by
\[
f\preccurlyeq g\text{ if and only if }f^{\prime}\leq g^{\prime},
\]
where $f^{\prime}$ and $g^{\prime}$ denote the derivative of $f$ and $g$,
respectively. Moreover, the lattice operations in this vector lattice are
given by
\[
\left( f\curlyvee g\right) \left( x\right) =\int_{0}^{x}\left( f^{\prime
}\vee g^{\prime}\right) \left( y\right) \mathrm{d}y\text{ and }\left(
f\curlywedge g\right) \left( x\right) =\int_{0}^{x}\left( f^{\prime}\wedge
g^{\prime}\right) \left( y\right) \mathrm{d}y
\]
for all $f,g\in\operatorname{Im}T$ and $x\in\left[ 0,\pi\right] $.
\end{remark}
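The `new' ordering in the Remark above can be illustrated by a small discrete computation. This is our own sketch, not part of the formal development: elements of $\operatorname{Im}T$ are represented by their derivatives on a grid, and $f\curlyvee g$ is obtained by integrating the pointwise supremum of the derivatives.

```python
import math

N = 4000
H = math.pi / N

def integrate(deriv):
    # rebuild F in Im T from its derivative: F(x) = integral of deriv over [0, x]
    vals, acc = [0.0], 0.0
    for i in range(N):
        acc += deriv((i + 0.5) * H) * H
        vals.append(acc)
    return vals

f = integrate(math.sin)   # f(x) = 1 - cos x
g = integrate(math.cos)   # g(x) = sin x
f_vee_g = integrate(lambda x: max(math.sin(x), math.cos(x)))  # f curlyvee g

# f curlyvee g dominates the pointwise maximum of f and g everywhere
print(all(s >= max(a, b) - 1e-9 for s, a, b in zip(f_vee_g, f, g)))
```

The check prints `True`: since $\left( f\curlyvee g\right) ^{\prime}=f^{\prime}\vee g^{\prime}$ dominates both derivatives, $f\curlyvee g$ dominates $f$ and $g$ both in the `new' ordering and pointwise.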
Henceforth, $L$ stands for an orthosymmetric space over the Archimedean vector
lattice $\mathbb{V}$ and $\mathcal{L}^{+}\left( L,\mathbb{V}\right) $
denotes the set of all positive operators from $L$ into $\mathbb{V}$. It could
be helpful to recall that an operator $T:L\rightarrow\mathbb{V}$ is said to be
\textsl{regular} if $T=R-S$ for some $R,S\in\mathcal{L}^{+}\left(
L,\mathbb{V}\right) $. The set $\mathcal{L}_{r}\left( L,\mathbb{V}\right) $
of all regular operators from $L$ into $\mathbb{V}$ is an Archimedean ordered
vector space with respect to the pointwise addition, scalar multiplication,
and ordering. Incidentally, $\mathcal{L}^{+}\left( L,\mathbb{V}\right) $ is
the positive cone of $\mathcal{L}_{r}\left( L,\mathbb{V}\right) $.
At this point, let $f\in L$ and define a map $\Phi_{f}:L\rightarrow\mathbb{V}$
by putting
\[
\Phi_{f}g=\left\langle f,g\right\rangle \text{ for all }g\in L.
\]
Obviously, $\Phi_{f}$ is a linear operator, which is referred to as a
\textsl{multiplication operator} on the orthosymmetric space $L$. Further
elementary (but very useful) properties of such operators are gathered next.
\begin{lemma}
\label{tech}Let $L$ be an orthosymmetric space and $f\in L$. Then the
following hold.
\begin{enumerate}
\item[\emph{(i)}] $\Phi_{f}\in\mathcal{L}^{+}\left( L,\mathbb{V}\right) $
whenever $f\in L^{+}$.
\item[\emph{(ii)}] $\Phi_{f}=\Phi_{f^{+}}-\Phi_{f^{-}}\in\mathcal{L}
_{r}\left( L,\mathbb{V}\right) $.
\item[\emph{(iii)}] $\Phi_{f}=0$ if and only if $f\in L^{0}$.
\item[\emph{(iv)}] $\Phi_{f}\in\mathcal{L}^{+}\left( L,\mathbb{V}\right) $
if and only if $f^{-}\in L^{0}$.
\end{enumerate}
\end{lemma}
\begin{proof}
$\mathrm{(i)}$ This follows immediately from the positivity of the
$\mathbb{V}$-valued orthosymmetric product on $L$.
$\mathrm{(ii)}$ If $f,g\in L$ then
\begin{align*}
\Phi_{f}g & =\left\langle f,g\right\rangle =\left\langle f^{+}
-f^{-},g\right\rangle =\left\langle f^{+},g\right\rangle -\left\langle
f^{-},g\right\rangle \\
& =\Phi_{f^{+}}g-\Phi_{f^{-}}g=\left( \Phi_{f^{+}}-\Phi_{f^{-}}\right) g.
\end{align*}
Thus, $\Phi_{f}=\Phi_{f^{+}}-\Phi_{f^{-}}$. Moreover, $\Phi_{f^{+}}
,\Phi_{f^{-}}\in\mathcal{L}^{+}\left( L,\mathbb{V}\right) $ as $f^{+}
,f^{-}\in L^{+}$ (where we use $\mathrm{(i)}$). It follows that $\Phi_{f}$ is regular.
$\mathrm{(iii)}$ This is a direct consequence of Lemma \ref{key}.
$\mathrm{(iv)}$ If $f^{-}\in L^{0}$ then, by $\mathrm{(iii)}$, $\Phi_{f^{-}
}=0$. Using $\mathrm{(ii)}$ then $\mathrm{(i)}$, we get $\Phi_{f}=\Phi_{f^{+}
}\in\mathcal{L}^{+}\left( L,\mathbb{V}\right) $. Conversely, suppose that
$\Phi_{f}\in\mathcal{L}^{+}\left( L,\mathbb{V}\right) $. Then,
\begin{align*}
0 & \leq\Phi_{f}\left( f^{-}\right) =\Phi_{f^{+}}\left( f^{-}\right)
-\Phi_{f^{-}}\left( f^{-}\right) \\
& =\left\langle f^{+},f^{-}\right\rangle -\left\langle f^{-},f^{-}
\right\rangle =-\left\langle f^{-},f^{-}\right\rangle \leq0.
\end{align*}
We derive that $\left\langle f^{-},f^{-}\right\rangle =0$ so $f^{-}\in L^{0}$,
as required.
\end{proof}
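In the Euclidean orthosymmetric space of Example \ref{Euclidean} (assumed here to be $\mathbb{R}^{n}$ with $\mathbb{V}=\mathbb{R}$ and $\left\langle f,g\right\rangle =\sum_{i}f_{i}g_{i}$), the assertions of Lemma \ref{tech} reduce to elementary facts about the dot product, as the following informal check of ours shows.

```python
def ip(f, g):   # <f,g> = dot product on R^n
    return sum(a * b for a, b in zip(f, g))

def pos(f):     # positive part f^+, coordinatewise
    return [max(a, 0.0) for a in f]

def neg(f):     # negative part f^-, coordinatewise
    return [max(-a, 0.0) for a in f]

f = [3.0, -2.0, 0.0]
g = [1.0, 4.0, 5.0]

# (ii): Phi_f = Phi_{f^+} - Phi_{f^-}
print(ip(f, g) == ip(pos(f), g) - ip(neg(f), g))
# (iv): here f^- != 0, and indeed Phi_f fails to be positive:
print(ip(f, [0.0, 1.0, 0.0]))   # a negative value at a positive vector
```

The first line prints `True`; the second prints `-2.0`, showing that $\Phi_{f}$ takes a negative value at a positive vector precisely because $f^{-}\neq0$.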
As we shall see in what follows, it turns out that the set
\[
\mathcal{M}\left( L,\mathbb{V}\right) =\left\{ \Phi_{f}:f\in L\right\}
\]
of all multiplication operators on the orthosymmetric space $L$ enjoys a very
interesting lattice-ordered structure.
\begin{theorem}
\label{main}Let $L$ be an orthosymmetric space. Then $\mathcal{M}\left(
L,\mathbb{V}\right) $ is a generalized vector sublattice of $\mathcal{L}
_{r}\left( L,\mathbb{V}\right) $ and the map $\Phi:L\rightarrow
\mathcal{M}\left( L,\mathbb{V}\right) $ defined by
\[
\Phi f=\Phi_{f}\text{ for all }f\in L
\]
is a surjective lattice homomorphism with $\ker\Phi=L^{0}$. In particular, the
vector lattice $L/L^{0}$ and $\mathcal{M}\left( L,\mathbb{V}\right) $ are
lattice isomorphic.
\end{theorem}
\begin{proof}
We have seen in Lemma \ref{tech} $\mathrm{(ii)}$ that $\mathcal{M}\left(
L,\mathbb{V}\right) $ is contained in $\mathcal{L}_{r}\left( L,\mathbb{V}
\right) $. Moreover, it is a simple exercise to check that $\mathcal{M}
\left( L,\mathbb{V}\right) $ is a vector subspace of $\mathcal{L}_{r}\left(
L,\mathbb{V}\right) $. Also, we may check directly that $\Phi$ is a linear
operator and, by Lemma \ref{tech} $\mathrm{(i)}$, $\Phi$ is positive.
Furthermore, Lemma \ref{tech} $\mathrm{(iii)}$ yields that $\ker\Phi=L^{0}$.
In particular, $\ker\Phi$ is a vector sublattice of $L$ (where we use Theorem
\ref{neutral}). Moreover, Lemma \ref{tech} $\mathrm{(iv)}$ shows that
$f^{-}\in\ker\Phi$ whenever $f\in L$ and $0\leq\Phi f\in\mathcal{M}\left(
L,\mathbb{V}\right) $. Consequently, since $\operatorname{Im}\Phi
=\mathcal{M}\left( L,\mathbb{V}\right) $, all conditions of Lemma \ref{gene}
are fulfilled. So, $\mathcal{M}\left( L,\mathbb{V}\right) $ is a
lattice-subspace of $\mathcal{L}_{r}\left( L,\mathbb{V}\right) $ and $\Phi$
is a lattice homomorphism from $L$ onto $\mathcal{M}\left( L,\mathbb{V}
\right) $. In particular, if $f\in L$ then the absolute value of $\Phi_{f}$
in $\mathcal{M}\left( L,\mathbb{V}\right) $ equals $\Phi_{\left\vert
f\right\vert }$. At this point, we claim that $\mathcal{M}\left(
L,\mathbb{V}\right) $ is a generalized vector sublattice of $\mathcal{L}
_{r}\left( L,\mathbb{V}\right) $. To this end, we shall prove that for each
$f\in L$ and $g\in L^{+}$ the set
\[
A\left( f,g\right) =\left\{ \left\vert \Phi_{f}h\right\vert :h\in L\text{
and }\left\vert h\right\vert \leq g\right\}
\]
has $\Phi_{\left\vert f\right\vert }g$ as a supremum in $\mathbb{V}$. The
Riesz--Kantorovich formula (see, e.g., Theorem $1.14$ in \cite{AB06}) will thus
give the desired result.
So, let $f\in L$ and $g\in L^{+}$. If $h\in L$ with $\left\vert h\right\vert
\leq g$, then
\[
\left\vert \Phi_{f}h\right\vert =\left\vert \left\langle f,h\right\rangle
\right\vert \leq\left\langle \left\vert f\right\vert ,\left\vert h\right\vert
\right\rangle \leq\left\langle \left\vert f\right\vert ,g\right\rangle
=\Phi_{\left\vert f\right\vert }g.
\]
That is, $\Phi_{\left\vert f\right\vert }g$ is an upper bound of $A\left(
f,g\right) $ in $\mathbb{V}$. We now proceed in three steps.
\underline{\textsl{Step 1}}\quad Assume that $L$ is Dedekind-complete. In
particular, any band is a projection band. Let $P_{f^{+}}$ and $P_{f^{-}}$
denote the order projections on the principal bands in $L$ generated by
$f^{+}$ and $f^{-}$, respectively, and put $h=\left( P_{f^{+}}-P_{f^{-}
}\right) g$. Since $f^{+}\wedge f^{-}=0$, these two bands are disjoint and so
\[
\left\vert h\right\vert =P_{f^{+}}g+P_{f^{-}}g\leq g.
\]
We derive, via an elementary calculation (the cross terms vanish because the
orthosymmetric product of disjoint positive elements is zero), that
\[
\Phi_{\left\vert f\right\vert }g=\left\langle \left\vert f\right\vert
,g\right\rangle =\left\langle f^{+},P_{f^{+}}g\right\rangle +\left\langle
f^{-},P_{f^{-}}g\right\rangle =\left\langle f,h\right\rangle =\left\vert
\left\langle f,h\right\rangle \right\vert =\left\vert \Phi_{f}h\right\vert
\in A\left( f,g\right) .
\]
It follows that the set $A\left( f,g\right) $ has a supremum in $\mathbb{V}$
and
\[
\Phi_{\left\vert f\right\vert }g=\sup A\left( f,g\right) =\sup\left\{
\left\vert \Phi_{f}h\right\vert :h\in L\text{ and }\left\vert h\right\vert
\leq g\right\} .
\]
This completes the first step.
\underline{\textsl{Step 2}}\quad Suppose that $L$ is Archimedean. Let
$L^{\delta}$ and $\mathbb{V}^{\delta}$ denote the Dedekind-completions of $L$
and $\mathbb{V}$, respectively. There exists a $\mathbb{V}^{\delta}$-valued
orthosymmetric product on $L^{\delta}$ which extends that of $L$ (see Theorem
$4.1$ in \cite{BK07}). The image under this product of each ordered pair
$\left( f,g\right) $ of elements in $L^{\delta}$ is denoted by $\left\langle
f,g\right\rangle ^{\delta}$. Furthermore, we define $\Phi_{f}^{\delta
}:L^{\delta}\rightarrow\mathbb{V}^{\delta}$ by putting
\[
\Phi_{f}^{\delta}g=\left\langle f,g\right\rangle ^{\delta}\text{ for all }g\in
L^{\delta}.
\]
Let $T\in\mathcal{L}_{r}\left( L,\mathbb{V}\right) $ such that
\[
T\geq\pm\Phi_{f}\text{ in }\mathcal{L}_{r}\left( L,\mathbb{V}\right) .
\]
In particular, $T$ is positive. Choose a positive extension $T^{\delta}
\in\mathcal{L}_{r}\left( L^{\delta},\mathbb{V}^{\delta}\right) $ of $T$
(see, e.g., Corollary $1.5.9$ in \cite{M91}). By Step $1$ (as
$L^{\delta}$ is Dedekind-complete), we get
\[
T^{\delta}\geq\Phi_{\left\vert f\right\vert }^{\delta}\text{ in }\mathcal{L}_{r}\left(
L^{\delta},\mathbb{V}^{\delta}\right) .
\]
Thus, if $g\in L^{+}$ then
\[
Tg=T^{\delta}g\geq\Phi_{\left\vert f\right\vert }^{\delta}g=\left\langle
\left\vert f\right\vert ,g\right\rangle ^{\delta}=\left\langle \left\vert
f\right\vert ,g\right\rangle =\Phi_{\left\vert f\right\vert }g.
\]
This yields that
\[
T\geq\Phi_{\left\vert f\right\vert }\text{ in }\mathcal{L}_{r}\left(
L,\mathbb{V}\right)
\]
and so
\[
\left\vert \Phi_{f}\right\vert =\Phi_{\left\vert f\right\vert }\text{ in
}\mathcal{L}_{r}\left( L,\mathbb{V}\right) .
\]
We focus next on the general case.
\underline{\textsl{Step 3}}\quad Here we do not assume any extra condition on
$L$. Set
\[
A\left( \left[ f\right] ,\left[ g\right] \right) =\left\{ \left\vert
\Phi_{f}h\right\vert :h\in L\text{ and }\left\vert \left[ h\right]
\right\vert \leq\left[ g\right] \text{ in }L/L^{0}\right\} .
\]
We claim that
\[
A\left( f,g\right) =A\left( \left[ f\right] ,\left[ g\right] \right)
.
\]
The inclusion
\[
A\left( f,g\right) \subset A\left( \left[ f\right] ,\left[ g\right]
\right)
\]
being obvious, we prove the converse inclusion. Hence, let $h\in L$ be such
that $\left\vert \left[ h\right] \right\vert \leq\left[ g\right] $ in
$L/L^{0}$. From Theorem $59.1$ in \cite{LZ71}, it follows that there exists
$k\in L^{0}$ such
that $\left\vert k\right\vert \leq\left\vert h\right\vert $ and $\left\vert
h-k\right\vert \leq\left\vert g\right\vert $. But then
\[
\left\vert \Phi_{f}h\right\vert =\left\vert \Phi_{f}\left( h-k\right)
\right\vert \in A\left( f,g\right)
\]
and the required inclusion follows. Theorem \ref{quo} together with the
Archimedean case leads to
\begin{align*}
\Phi_{\left\vert f\right\vert }g & =\left\langle \left\vert f\right\vert
,g\right\rangle =\left\langle \left[ \left\vert f\right\vert \right]
,\left[ g\right] \right\rangle =\left\langle \left\vert \left[ f\right]
\right\vert ,\left[ g\right] \right\rangle \\
& =\Phi_{\left\vert \left[ f\right] \right\vert }\left[ g\right] =\sup
A\left( \left[ f\right] ,\left[ g\right] \right) =\sup A\left(
f,g\right) .
\end{align*}
This completes the proof of the theorem.
\end{proof}
In what follows we shall discuss an example which illustrates Theorem
\ref{main}.
\begin{example}
\label{kaplan}Let $C\left( \mathbb{N}\right) $ be the Archimedean vector
lattice of all sequences of real numbers. The vector sublattice of $C\left(
\mathbb{N}\right) $ of all bounded sequences is denoted by $C^{\ast}\left(
\mathbb{N}\right) $ \emph{(}here, we follow notations from \emph{\cite{GJ76}
)}. Define $T\in\mathcal{L}_{r}\left( C^{\ast}\left( \mathbb{N}\right)
,C\left( \mathbb{N}\right) \right) $ by
\[
\left( Tf\right) \left( n\right) =\left\{
\begin{array}
[c]{l}
f\left( n\right) -f\left( n+1\right) \text{ if }n\in\mathbb{N}\text{ is
odd}\\
\\
0\text{ if }n\in\mathbb{N}\text{ is even}
\end{array}
\right. \text{ for all }f\in C^{\ast}\left( \mathbb{N}\right) .
\]
Since $C\left( \mathbb{N}\right) $ is Dedekind-complete, $\mathcal{L}
_{r}\left( C^{\ast}\left( \mathbb{N}\right) ,C\left( \mathbb{N}\right)
\right) $ is a vector lattice and thus $T$ has an absolute value $\left\vert
T\right\vert $. We intend to calculate $\left\vert T\right\vert $. Consider
$u,v\in C^{\ast}\left( \mathbb{N}\right) $ with
\[
u\left( n\right) =\left\{
\begin{array}
[c]{l}
1\text{ if }n\text{ is odd}\\
\\
0\text{ if }n\text{ is even}
\end{array}
\right. \text{ and }v\left( n\right) =\left\{
\begin{array}
[c]{l}
0\text{ if }n\text{ is odd}\\
\\
1\text{ if }n\text{ is even}
\end{array}
\right. \text{ for all }n\in\mathbb{N}.
\]
Moreover, define the shift operator $S\in\mathcal{L}^{+}\left( C^{\ast
}\left( \mathbb{N}\right) ,C\left( \mathbb{N}\right) \right) $ by putting
\[
\left( Sf\right) \left( n\right) =f\left( n+1\right) \text{ for all
}f\in C^{\ast}\left( \mathbb{N}\right) \text{ and }n\in\mathbb{N}.
\]
Also, if $f,g\in C^{\ast}\left( \mathbb{N}\right) $ then $fg$ is defined by
\[
\left( fg\right) \left( n\right) =f\left( n\right) g\left( n\right)
\text{ for all }n\in\mathbb{N}.
\]
Then, it is readily checked that the formula
\[
\left\langle f,g\right\rangle =ufg+S\left( vfg\right) \text{ for all }
f,g\in C^{\ast}\left( \mathbb{N}\right)
\]
makes $C^{\ast}\left( \mathbb{N}\right) $ into an orthosymmetric space over
$C\left( \mathbb{N}\right) $. An easy calculation yields that
\[
Tf=\left\langle u-v,f\right\rangle =\Phi_{u-v}f\text{ for all }f\in C^{\ast
}\left( \mathbb{N}\right) .
\]
By \emph{Theorem \ref{main}}, we derive that $T$ has an absolute value in
$\mathcal{L}_{r}\left( C^{\ast}\left( \mathbb{N}\right) ,C\left(
\mathbb{N}\right) \right) $ which is given by
\[
\left\vert T\right\vert =\left\vert \Phi_{u-v}\right\vert =\Phi_{\left\vert
u-v\right\vert }=\Phi_{u+v}.
\]
That is,
\[
\left( \left\vert T\right\vert f\right) \left( n\right) =\left\{
\begin{array}
[c]{l}
f\left( n\right) +f\left( n+1\right) \text{ if }n\in\mathbb{N}\text{ is
odd}\\
\\
0\text{ if }n\in\mathbb{N}\text{ is even}
\end{array}
\right. \text{ for all }f\in C^{\ast}\left( \mathbb{N}\right) .
\]
Incidentally, let $C\left( \mathbb{N}^{\ast}\right) $ be the vector sublattice
of $C\left( \mathbb{N}\right) $ of all convergent sequences \emph{(}where
$\mathbb{N}^{\ast}$ denotes the one-point compactification of $\mathbb{N}
$\emph{)}. Note that $T$ leaves $C\left( \mathbb{N}^{\ast}\right) $ invariant
and thus $T$ can be considered as an element of $\mathcal{L}_{r}\left(
C\left( \mathbb{N}^{\ast}\right) ,C\left( \mathbb{N}^{\ast}\right)
\right) $. Using an example by Kaplan in \emph{\cite{K73}} \emph{(}see also
\emph{Example} $1.17$ in \emph{\cite{AB06})}, it turns out that $T$ has no
absolute value in $\mathcal{L}_{r}\left( C\left( \mathbb{N}^{\ast}\right)
,C\left( \mathbb{N}^{\ast}\right) \right) $.
\end{example}
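Example \ref{kaplan} admits a quick sanity check on finite truncations of sequences. The following is an informal computation of ours (index $0$ below plays the role of the odd integer $n=1$, and the truncation length $N=8$ is arbitrary).

```python
N = 8  # length-N truncations of bounded sequences (index 0 <-> n = 1, odd)

def u(n): return 1.0 if n % 2 == 0 else 0.0   # the paper's u: 1 at odd n
def v(n): return 1.0 if n % 2 == 1 else 0.0   # the paper's v: 1 at even n

def product(f, g):
    # <f,g> = u f g + S(v f g), i.e. <f,g>(n) = u(n)f(n)g(n) + v(n+1)f(n+1)g(n+1)
    return [u(n) * f[n] * g[n] + v(n + 1) * f[n + 1] * g[n + 1]
            for n in range(N - 1)]

def T(f):
    return [f[n] - f[n + 1] if u(n) else 0.0 for n in range(N - 1)]

f = [1.0, -2.0, 3.0, 0.5, -1.0, 4.0, 2.0, -3.0]
uv_minus = [u(n) - v(n) for n in range(N)]
uv_plus = [u(n) + v(n) for n in range(N)]

print(T(f) == product(uv_minus, f))                       # Tf = <u - v, f>
g = [abs(x) for x in f]                                   # a positive sequence
absT_g = product(uv_plus, g)                              # claimed |T|g = <u + v, g>
print(all(a >= abs(t) for a, t in zip(absT_g, T(g))))     # |T|g >= |Tg|
```

Both lines print `True`: the operator $T$ is indeed the multiplication operator $\Phi_{u-v}$, and $\Phi_{u+v}$ dominates $\pm T$ on positive sequences, consistent with $\left\vert T\right\vert =\Phi_{u+v}$.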
The following consequence of Theorem \ref{main} is straightforward but worth
talking about. First recall that an orthosymmetric space $L$ is definite if
$L^{0}$ is trivial.
\begin{corollary}
Let $L$ be a definite orthosymmetric space. The map $\Phi:L\rightarrow
\mathcal{M}\left( L,\mathbb{V}\right) $ defined by
\[
\Phi f=\Phi_{f}\text{ for all }f\in L
\]
is a lattice isomorphism.
\end{corollary}
Roughly speaking, any definite orthosymmetric space $L$ can be embedded in
$\mathcal{L}_{r}\left( L,\mathbb{V}\right) $ as a generalized vector
sublattice. In particular, if $\mathbb{V}$ is in addition Dedekind-complete,
then $L$ has a vector sublattice copy in the Dedekind-complete $\mathcal{L}
_{r}\left( L,\mathbb{V}\right) $. For instance, any definite orthosymmetric
space over $\mathbb{R}$ can be considered as a vector sublattice of its order dual.
\section{Adjoint operators on orthosymmetric spaces}
Also in this section, \textsl{all given orthosymmetric spaces are over the
Archimedean vector lattice} $\mathbb{V}$. Let $L,M$ be two orthosymmetric
spaces. The ordered vector space of all regular operators from $L$ into $M$
is denoted by $\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ and by
$\mathcal{L}_{\mathrm{r}}\left( L\right) $ if $L=M$. Recall that we call a
\textsl{positive orthomorphism} on $L$ any positive operator $T\in
\mathcal{L}_{\mathrm{r}}\left( L\right) $ for which
\[
f\wedge Tg=0\text{ for all }f,g\in L\text{ with }f\wedge g=0.
\]
An operator $T\in\mathcal{L}_{\mathrm{r}}\left( L\right) $ is called an
\textsl{orthomorphism}\textit{ }if $T=R-S$ for some positive orthomorphisms
$R,S$ on $L$. The set of all orthomorphisms on $L$ is denoted by
$\mathrm{Orth}\left( L\right) $. For orthomorphisms, the reader can consult
the Thesis \cite{P81} or Chapter $20$ in \cite{Z83} (further results can be
found in \cite{HP86,HP82,HP84,P84}).
It turns out that orthomorphisms on orthosymmetric spaces over the same
Archimedean vector lattice have an interesting property.
\begin{theorem}
\label{orth}Let $L$ be an orthosymmetric space. Then
\[
\left\langle f,Tg\right\rangle =\left\langle Tf,g\right\rangle \text{ for all
}T\in\mathrm{Orth}\left( L\right) \text{ and }f,g\in L.
\]
\end{theorem}
\begin{proof}
Clearly, it suffices to prove the formula for positive orthomorphisms. Hence,
let $T$ be a positive orthomorphism on $L$ and define a bilinear map which
assigns the vector
\[
\left\langle f,g\right\rangle _{T}=\left\langle f,Tg\right\rangle
\in\mathbb{V}
\]
to each ordered pair $\left( f,g\right) \in L\times L$. Clearly, the above
formula defines a new $\mathbb{V}$-valued orthosymmetric product on $L$. In
view of Lemma \ref{Steinberg}, we conclude that if $f,g\in L$ then
\[
\left\langle f,Tg\right\rangle =\left\langle f,g\right\rangle _{T}
=\left\langle g,f\right\rangle _{T}=\left\langle g,Tf\right\rangle
=\left\langle Tf,g\right\rangle
\]
and the theorem follows.
\end{proof}
Theorem \ref{orth} motivates us to introduce the following concept. We say
that $T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ has an
\textsl{adjoint}\textit{ }in $\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ if
there is $S\in\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ for which
\[
\left\langle Tf,g\right\rangle =\left\langle f,Sg\right\rangle \text{ for all
}f\in L\text{ and }g\in M.
\]
The subset of $\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ of all adjoints
of $T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ is denoted by
$\mathrm{adj}\left( T\right) $. The operator $T\in\mathcal{L}_{\mathrm{r}
}\left( L\right) $ is said to be \textsl{selfadjoint} if $T\in
\mathrm{adj}\left( T\right) $. It follows from Theorem \ref{orth} that any
orthomorphism on $L$ is selfadjoint. Of course, the converse is not valid,
i.e., a selfadjoint operator in $\mathcal{L}_{\mathrm{r}}\left( L\right) $
need not be an orthomorphism. Consider the orthosymmetric space $\mathbb{R}
^{2}$ as defined in Example \ref{Euclidean}. The operator $T\in\mathcal{L}
_{\mathrm{r}}\left( \mathbb{R}^{2}\right) $ given by the $2\times2$ matrix
\[
T=\left(
\begin{array}
[c]{cc}
1 & 2\\
2 & 0
\end{array}
\right)
\]
is selfadjoint but fails to be an orthomorphism. Recall, incidentally, that any
orthomorphism on the vector lattice $\mathbb{R}^{n}$ ($n\in\mathbb{N}$) is
given by a diagonal matrix, as shown in \cite[Exercise $141.7$]{Z83}. Next, we
shall show \textit{via }an example that $\mathrm{adj}\left( T\right) $ can
be empty.
\begin{example}
Put $L=\mathbb{V}=C\left[ -1,1\right] $, where $C\left[ -1,1\right] $ is
the Archimedean vector lattice of all real-valued continuous functions on the
real interval $\left[ -1,1\right] $. It is not hard to see that $L$ is an
orthosymmetric space over $\mathbb{V}$ under the $\mathbb{V}$-valued
orthosymmetric product given by
\[
\left\langle f,g\right\rangle =fg\text{ for all }f,g\in L.
\]
Now, define $T\in\mathcal{L}_{\mathrm{r}}\left( L\right) $ by
\[
\left( Tf\right) \left( x\right) =x\int_{-1}^{1}f\left( s\right)
\mathrm{d}s\text{ for all }f\in L\text{ and }x\in\left[ -1,1\right] .
\]
Suppose that $\mathrm{adj}\left( T\right) $ contains $R$. Let $f\in L$ and
put
\[
g\left( x\right) =x\text{ for all }x\in\left[ -1,1\right] .
\]
Hence, $Tg=0$ and so
\[
x\left( Rf\right) \left( x\right) =\left( \left\langle Rf,g\right\rangle
\right) \left( x\right) =\left( \left\langle f,Tg\right\rangle \right)
\left( x\right) =0\text{ for all }x\in\left[ -1,1\right] .
\]
It follows that $R=0$ and thus
\[
Tf=\left\langle Tf,\mathbf{1}\right\rangle =\left\langle f,R\mathbf{1}
\right\rangle =0\text{ for all }f\in L
\]
\emph{(}here, $\mathbf{1}$ is the constant function with value $1$\emph{)}.
This is an obvious absurdity and thus $\mathrm{adj}\left(
T\right) $ is empty.
\end{example}
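The two facts driving the contradiction in the example above, namely $Tg=0$ for $g\left( x\right) =x$ while $T\mathbf{1}\neq0$, are easily confirmed numerically. This is an informal check of ours using midpoint-rule quadrature.

```python
def integral(f, a=-1.0, b=1.0, n=20000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def T(f):
    c = integral(f)
    return lambda x: x * c      # (Tf)(x) = x * integral of f over [-1, 1]

g = lambda x: x
one = lambda x: 1.0

print(abs(T(g)(0.7)) < 1e-9)            # Tg = 0, which forces any adjoint R to vanish
print(abs(T(one)(0.5) - 1.0) < 1e-9)    # yet (T1)(x) = 2x is nonzero
```

Both lines print `True`, exhibiting the absurdity: an adjoint of $T$ would have to be zero, yet $T\neq0$.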
Next, we provide an example in which we shall see that $\mathrm{adj}\left(
T\right) $ may contain more than one element.
\begin{example}
Again, we put $L=\mathbb{V}=C\left[ -1,1\right] $. For each $f,g\in L$ we
set
\[
\left\langle f,g\right\rangle \left( x\right) =\left\{
\begin{array}
[c]{l}
x
{\displaystyle\int_{-1}^{0}}
\left( fg\right) \left( s\right) \mathrm{d}s\text{\quad if }x\in\left[
0,1\right] \\
\\
0\text{\quad if }x\in\left[ -1,0\right] .
\end{array}
\right.
\]
It is an easy task to check that this formula makes $L$ into an orthosymmetric
space over $\mathbb{V}$. Define $T\in\mathcal{L}_{\mathrm{r}}\left( L\right)
$ by
\[
\left( Tf\right) \left( x\right) =x\int_{-1}^{0}f\left( s\right)
\mathrm{d}s\text{ for all }f\in L\text{ and }x\in\left[ -1,1\right] .
\]
Moreover, consider $R,S\in\mathcal{L}_{\mathrm{r}}\left( L\right) $ such
that
\[
\left( Rf\right) \left( x\right) =\int_{-1}^{0}sf\left( s\right)
\mathrm{d}s\text{ and }Sf=fw+Rf\text{ for all }f\in L\text{ and }x\in\left[
-1,1\right] ,
\]
where
\[
w\left( x\right) =0\text{ if }x\in\left[ -1,0\right] \text{ and }w\left(
x\right) =x\text{ if }x\in\left[ 0,1\right] .
\]
A direct calculation reveals that
\[
\left\langle f,Tg\right\rangle =\left\langle Rf,g\right\rangle =\left\langle
Sf,g\right\rangle \text{ for all }f,g\in L.
\]
Hence, $R,S\in\mathrm{adj}\left( T\right) $ and $R\neq S$.
\end{example}
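The identities $\left\langle f,Tg\right\rangle =\left\langle Rf,g\right\rangle =\left\langle Sf,g\right\rangle $ and the fact that $R\neq S$ can also be confirmed numerically. This is an informal check of ours; the test functions $f\left( s\right) =s^{2}$ and $g\left( s\right) =e^{s}$ are arbitrary choices.

```python
import math

def integral(f, a, b, n=20000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def prod(f, g, x):
    # <f,g>(x) = x * integral of fg over [-1,0] if x >= 0, and 0 otherwise
    return x * integral(lambda s: f(s) * g(s), -1.0, 0.0) if x >= 0 else 0.0

def T(f):
    c = integral(f, -1.0, 0.0)
    return lambda x: x * c

def R(f):
    c = integral(lambda s: s * f(s), -1.0, 0.0)
    return lambda x: c                      # Rf is a constant function

def S(f):
    rf, w = R(f), lambda x: max(x, 0.0)     # w = 0 on [-1,0] and w(x) = x on [0,1]
    return lambda x: f(x) * w(x) + rf(x)

f = lambda s: s * s
g = lambda s: math.exp(s)
x = 0.6
lhs = prod(f, T(g), x)
print(abs(lhs - prod(R(f), g, x)) < 1e-6)   # <f,Tg> = <Rf,g>
print(abs(lhs - prod(S(f), g, x)) < 1e-6)   # <f,Tg> = <Sf,g>
print(abs(S(f)(0.5) - R(f)(0.5)) > 1e-3)    # but S != R on (0,1]
```

All three lines print `True`: $R$ and $S$ agree on $\left[ -1,0\right] $ (which is all the product sees) but differ on $\left( 0,1\right] $.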
The fact that an operator $T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $
could have more than one adjoint is rather inconvenient. In order to get
around this problem and thus make our theory more or less reasonable,
\textit{we shall assume from now on that the given orthosymmetric spaces
}$L$\textit{ and }$M$\textit{ are definite}. Recall here that the
orthosymmetric space $M$ is definite if $0$ is the only neutral element in
$M$, that is,
\[
M^{0}=\left\{ g\in M:\left\langle g,g\right\rangle =0\right\} =\left\{
0\right\} .
\]
As we shall prove next, an operator between definite orthosymmetric spaces has
at most one adjoint.
\begin{proposition}
Let $L,M$ be orthosymmetric spaces with $L$ and $M$ definite. If
$T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ then $\mathrm{adj}\left(
T\right) $ has at most one element.
\end{proposition}
\begin{proof}
Suppose that $R,S\in\mathrm{adj}\left( T\right) $. Let $f\in L$ and $g\in
M$. Observe that
\begin{align*}
\left\langle f,Rg-Sg\right\rangle & =\left\langle f,Rg\right\rangle
-\left\langle f,Sg\right\rangle \\
& =\left\langle Tf,g\right\rangle -\left\langle Tf,g\right\rangle =0.
\end{align*}
Since $f$ is arbitrary in $L$, we may choose $f=Rg-Sg$ to get
\[
\left\langle Rg-Sg,Rg-Sg\right\rangle =0.
\]
As $L$ is definite, it follows that
\[
Rg-Sg=0\text{ for all }g\in M.
\]
This means that $R=S$ and we are done.
\end{proof}
As would be expected, if $\mathrm{adj}\left( T\right) $ is non-empty,
then its unique element is called the \textsl{adjoint}\textit{ }of $T$ and
denoted by $T^{\ast}$. In this situation, we have
\[
T^{\ast}\in\mathcal{L}_{\mathrm{r}}\left( M,L\right) \text{ and
}\left\langle Tf,g\right\rangle =\left\langle f,T^{\ast}g\right\rangle \text{
for all }f\in L\text{ and }g\in M.
\]
A first property of adjoint operators is given below.
\begin{proposition}
\label{pospos}Let $L,M$ be orthosymmetric spaces with $L$ and $M$ definite and
$T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ such that $T^{\ast}
\in\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ exists. If $T$ is positive
then so is $T^{\ast}$.
\end{proposition}
\begin{proof}
Let $f\in L$ and $g\in M$. Keeping the same notation as previously used in
Section $2$, we can write
\[
\Phi_{T^{\ast}g}f=\left\langle f,T^{\ast}g\right\rangle =\left\langle
Tf,g\right\rangle =\Phi_{g}\left( Tf\right) =\left( \Phi_{g}\circ T\right)
f.
\]
Hence,
\[
\Phi_{T^{\ast}g}=\Phi_{g}\circ T\text{ for all }g\in M.
\]
Now, assume that $T$ is positive and let $g\in M^{+}$. We claim that $T^{\ast
}g\in L^{+}$. Since $g\in M^{+}$, the operator $\Phi_{g}$ is positive (see
Lemma \ref{tech} $\mathrm{(i)}$). Hence, $\Phi_{g}\circ T$ is positive and so
is $\Phi_{T^{\ast}g}$. But then
\[
\left( T^{\ast}g\right) ^{-}\in L^{0}=\left\{ 0\right\}
\]
as $L$ is definite (where we use Lemma \ref{tech} $\mathrm{(iv)}$). It follows
that $T^{\ast}g\in L^{+}$ which leads to the desired result.
\end{proof}
Now, we shall focus on lattice homomorphisms from $L$ into $M$. Recall that
$T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ is a \textsl{lattice
homomorphism}\textit{ }if
\[
\left\vert Tf\right\vert =T\left\vert f\right\vert \quad\text{for all }f\in
L.
\]
Next, we obtain a (quite striking) characterization of lattice homomorphisms
in terms of adjoints.
\begin{theorem}
\label{riesz}Let $L,M$ be orthosymmetric spaces with $L$ and $M$ definite and
$T\in\mathcal{L}_{\mathrm{r}}\left( L,M\right) $ such that $T$ is positive
and $T^{\ast}\in\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ exists. Then $T$
is a lattice homomorphism if and only if $T^{\ast}T\in\mathrm{Orth}\left(
L\right) $.
\end{theorem}
\begin{proof}
Assume that $T$ is a lattice homomorphism. Since $T$ and $T^{\ast}$ are
positive (see Proposition \ref{pospos}), so is $T^{\ast}T$. Let $f,g\in L$
with $f\wedge g=0$. From $Tf\wedge Tg=0$ it follows that $\left\langle
Tf,Tg\right\rangle =0$. We get
\begin{align*}
0 & \leq\left\langle \left( T^{\ast}T\right) f\wedge g,\left( T^{\ast
}T\right) f\wedge g\right\rangle \leq\left\langle \left( T^{\ast}T\right)
f,g\right\rangle \\
& =\left\langle T^{\ast}Tf,g\right\rangle =\left\langle Tf,Tg\right\rangle
=0.
\end{align*}
Since $L$ is definite, we derive that $\left( T^{\ast}T\right) f\wedge g=0$
so $T^{\ast}T\in\mathrm{Orth}\left( L\right) $.
Conversely, suppose that $T^{\ast}T\in\mathrm{Orth}\left( L\right) $ and
pick $f,g\in L$ with $f\wedge g=0$. Hence, $\left( T^{\ast}T\right) f\wedge
g=0$ because $T^{\ast}T$ is a positive orthomorphism. Therefore,
\[
\left\langle Tf,Tg\right\rangle =\left\langle \left( T^{\ast}T\right)
f,g\right\rangle =0.
\]
Consequently,
\[
0\leq\left\langle Tf\wedge Tg,Tf\wedge Tg\right\rangle \leq\left\langle
Tf,Tg\right\rangle =0.
\]
We derive that $Tf\wedge Tg=0$ as $M$ is definite. This yields that $T$ is a
lattice homomorphism and completes the proof.
\end{proof}
A quite curious consequence of Theorem \ref{riesz} is discussed next. Let $T$
be a positive $n\times n$ matrix $(n\in\mathbb{N})$ such that $T^{\ast}T$ is
diagonal. Then each row of $T$ contains at most one nonzero (positive) entry.
Indeed, since $T^{\ast}T$ is diagonal, $T^{\ast}T$ is an orthomorphism on
$\mathbb{R}^{n}$. But then $T$ is a lattice homomorphism (where we use Theorem
\ref{riesz}) and the result follows (see \cite{AA02} or \cite{S74}).
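For positive matrices, the observation above can be brute-forced on a small scale. The following informal check of ours runs over all $3\times3$ matrices with entries in $\left\{ 0,1\right\} $ and verifies that whenever $T^{\ast}T$ (here, $T^{\mathrm{t}}T$) is diagonal, every row of $T$ has at most one nonzero entry.

```python
from itertools import product as iproduct

def ttt_diagonal(T):
    # is T^t T diagonal? equivalently: are the columns of T pairwise orthogonal?
    n = len(T)
    return all(sum(T[i][j] * T[i][k] for i in range(n)) == 0
               for j in range(n) for k in range(n) if j != k)

def rows_ok(T):
    # each row has at most one nonzero entry
    return all(sum(1 for x in row if x != 0) <= 1 for row in T)

count = 0
for bits in iproduct([0, 1], repeat=9):
    T = [list(bits[0:3]), list(bits[3:6]), list(bits[6:9])]
    if ttt_diagonal(T):
        assert rows_ok(T)
        count += 1
print(count)   # prints 64: the matrices with at most one nonzero per row
```

The count is $64=4^{3}$, since each of the three rows is either zero or a standard basis vector, which matches the equivalence for $0/1$ matrices.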
We proceed to a question which arises naturally: is the adjoint (when it
exists) of a lattice homomorphism between orthosymmetric spaces again a
lattice homomorphism? The following simple example shows that this is not true
in general.
\begin{example}
Let $L=\mathbb{R}^{3}$ be equipped with its structure of definite
orthosymmetric space over $\mathbb{R}$ as explained in \emph{Example
\ref{Euclidean}}. Consider
\[
T=\left(
\begin{array}
[c]{ccc}
0 & 0 & 0\\
0 & 1 & 0\\
0 & 1 & 0
\end{array}
\right) \in\mathcal{L}_{\mathrm{r}}\left( L\right)
\]
and observe that $T$ is a lattice homomorphism on $L$. However,
\[
T^{\ast}=\left(
\begin{array}
[c]{ccc}
0 & 0 & 0\\
0 & 1 & 1\\
0 & 0 & 0
\end{array}
\right)
\]
is not a lattice homomorphism.
\end{example}
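The example can be verified by direct computation. The following sketch (Python with NumPy; it assumes the standard Euclidean inner product on $\mathbb{R}^3$, so that $T^{\ast}$ is the transpose, and componentwise lattice operations) exhibits a pair $f,g$ witnessing that $T^{\ast}$ fails to preserve suprema:

```python
import numpy as np

# The 3x3 matrix of the example; assuming the standard Euclidean inner
# product on R^3, its adjoint is the transpose.
T  = np.array([[0., 0., 0.],
               [0., 1., 0.],
               [0., 1., 0.]])
Ts = T.T

# Lattice operations on R^3 are componentwise; a lattice homomorphism A
# satisfies A(f v g) = Af v Ag for the componentwise supremum v.
f = np.array([0., 1., -1.])
g = np.array([0., -1., 1.])

# T commutes with the supremum ...
assert np.allclose(T @ np.maximum(f, g), np.maximum(T @ f, T @ g))
# ... but T* does not: Ts(f v g) = (0,2,0) while (Ts f) v (Ts g) = (0,0,0).
assert not np.allclose(Ts @ np.maximum(f, g), np.maximum(Ts @ f, Ts @ g))
```

One counterexample pair suffices, since a lattice homomorphism must preserve suprema for all pairs.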
The situation improves if $T$ is onto as we prove in the following.
\begin{corollary}
Let $L,M$ be orthosymmetric spaces with $M$ definite and $T\in\mathcal{L}
_{\mathrm{r}}\left( L,M\right) $ be a surjective lattice homomorphism such
that $T^{\ast}\in\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ exists. Then
$T^{\ast}$ is again a lattice homomorphism.
\end{corollary}
\begin{proof}
Let $g\in M$ and $f\in L$ such that $g=Tf$. By Theorem \ref{riesz}, the
operator $T^{\ast}T$ is an orthomorphism. Thus,
\[
T^{\ast}\left\vert g\right\vert =T^{\ast}\left\vert Tf\right\vert =T^{\ast
}T\left\vert f\right\vert =\left\vert T^{\ast}Tf\right\vert =\left\vert
T^{\ast}g\right\vert .
\]
This means that $T^{\ast}$ is a lattice homomorphism, as desired.
\end{proof}
The following shows that only normal lattice homomorphisms have adjoints.
First, recall that a lattice homomorphism $T\in\mathcal{L}_{\mathrm{r}}\left(
L,M\right) $ is said to be \textsl{normal} if its kernel $\ker\left(
T\right) $ is a band of $L$.
\begin{corollary}
Let $L,M$ be orthosymmetric spaces with $M$ definite and $T\in\mathcal{L}
_{\mathrm{r}}\left( L,M\right) $ be a lattice homomorphism such that
$T^{\ast}\in\mathcal{L}_{\mathrm{r}}\left( M,L\right) $ exists. Then $T$ is normal.
\end{corollary}
\begin{proof}
We prove that $\ker\left( T\right) $ is a band. We claim that $\ker\left(
T^{\ast}T\right) =\ker\left( T\right) $. Obviously, $\ker\left( T^{\ast
}T\right) $ contains $\ker\left( T\right) $. Conversely, if $f\in
\ker\left( T^{\ast}T\right) $ then
\[
0\leq\left\langle Tf,Tf\right\rangle =\left\langle T^{\ast}Tf,f\right\rangle
=0.
\]
It follows that $Tf=0$ because $M$ is definite. We derive that $\ker\left(
T^{\ast}T\right) =\ker\left( T\right) $, as required. On the other hand,
Theorem \ref{riesz} guarantees that $T^{\ast}T$ is an orthomorphism on $L$.
Accordingly, $\ker\left( T^{\ast}T\right) $ is a band of $L$ and so is
$\ker\left( T\right) $. This yields that $T$ is normal and completes the proof.
\end{proof}
The last part of the paper deals with interval preserving operators. For any
positive element $f$ in a vector lattice $L$, we set
\[
\left[ 0,f\right] =\left\{ g\in L:0\leq g\leq f\right\} .
\]
A positive operator $T:M\rightarrow L$ is said to be \textsl{interval
preserving} if
\[
T\left[ 0,f\right] =\left[ 0,Tf\right] \text{ for all }f\in M^{+}\text{.}
\]
A sufficient condition for a positive operator acting on orthosymmetric spaces
to be a lattice homomorphism is to have an interval preserving adjoint.
\begin{theorem}
Let $L,M$ be orthosymmetric spaces with $M$ definite and $T\in\mathcal{L}
_{\mathrm{r}}\left( L,M\right) $ be positive such that $T^{\ast}$ exists. If
$T^{\ast}$ is interval preserving then $T$ is a lattice homomorphism.
\end{theorem}
\begin{proof}
We choose $f\in L$ and we claim that $T\left( f^{+}\right) =\left(
Tf\right) ^{+}$. To this end, observe that if $0\leq h\in M^{+}$ then
\begin{align*}
\Phi_{\left( Tf\right) ^{+}}h & =\Phi_{Tf}^{+}h=\sup\left\{ \left\langle
Tf,g\right\rangle :0\leq g\leq h\text{ in }M\right\} \\
& =\sup\left\{ \left\langle f,T^{\ast}g\right\rangle :0\leq g\leq h\text{ in
}M\right\} \\
& \leq\sup\left\{ \left\langle f,u\right\rangle :0\leq u\leq T^{\ast}h\text{
in }L\right\} .
\end{align*}
On the other hand, if $0\leq u\leq T^{\ast}h$ in $L$, then there exists
$g\in\left[ 0,h\right] $ such that $u=T^{\ast}g$. Accordingly,
\begin{align*}
\Phi_{\left( Tf\right) ^{+}}h & =\sup\left\{ \left\langle
f,u\right\rangle :0\leq u\leq T^{\ast}h\text{ in }L\right\} \\
& =\Phi_{f}^{+}T^{\ast}h=\Phi_{f^{+}}T^{\ast}h=\Phi_{T\left( f^{+}\right)
}h.
\end{align*}
It follows that $\Phi_{\left( Tf\right) ^{+}}=\Phi_{T\left( f^{+}\right)
}$ and so $\left( Tf\right) ^{+}=T\left( f^{+}\right) $ because $M$ is
definite. This ends the proof of the theorem.
\end{proof}
The converse of the previous theorem fails, as can be seen in the following
example, which is the last item of this paper.
\begin{example}
Let $L=M=C\left[ 0,1\right] $ be equipped with any structure of definite
orthosymmetric space and define $T\in\mathrm{Orth}\left( L\right) $ by
\[
\left( Tf\right) \left( x\right) =xf\left( x\right) \text{ for all }f\in
L\text{ and }x\in\left[ 0,1\right] .
\]
Since $T\in\mathrm{Orth}\left( L\right) $, we derive that $T^{\ast}$ exists
and $T=T^{\ast}$ \emph{(}see \emph{Theorem \ref{orth}).} Of course, $T$ is a
lattice homomorphism. However, $T$ is not interval preserving. Indeed, if
$g\in L$ is defined by
\[
g\left( x\right) =x\left\vert \sin\frac{1}{x}\right\vert \text{ if }
x\in\left] 0,1\right] \text{ and }g\left( 0\right) =0,
\]
then $0\leq g\leq T\mathbf{1}$ but there is no $f\in L$ such that $g=Tf$.
\end{example}
\end{document}
\begin{document}
\title{Nondegeneracy for Quotient Varieties under Finite Group Actions}
\begin{center}
\large {Chennai Mathematical Institute\\
H1, SIPCOT IT Park \\
Padur PO, Siruseri 603103\\
India\\
email:\quad \tt [email protected], [email protected]\\ }
\end{center}
\begin{abstract}
\noindent We prove that for an abelian group $G$ of order $n$
the morphism
$ \varphi\colon \ensuremath{\mathbf{P}}(V^*)\longrightarrow
\ensuremath{\mathbf{P}} ( ({\mathrm{Sym} }^n V^*)^G )$
defined by $\varphi( [f] ) = [\prod_{\sigma\in G} \sigma \cdot f ] $
is nondegenerate for every finite-dimensional representation
$V$ of $G$
if and only if either $n $ is a prime number or $n=4$.
\end{abstract}
{\bf Keywords:}\quad finite group quotients, nondegeneracy
\textbf{AMS Classification No: 14L30}
\section*{Introduction}
Let $V$ be a finite-dimensional representation of a finite group
$G$ of order $n$ over a field $k$. Then by a classical theorem of
Hermann Weyl \cite{weyl} on polarizations one obtains a nice generating set for
the invariant ring $({\mathrm{Sym} }\, V^*)^G$ consisting of
$ e_r(f), r=1,2,\ldots,n,\ f\in V^*$ where $e_r(f)$
denotes the $r$-th elementary symmetric polynomial in
$\sigma.f,\ \sigma\in G$.
Also, for any point $x\in\ensuremath{\mathbf{P}}(V)$ there is a $G$-invariant section
of the line bundle $\mathcal{O} (1)^{\otimes n}$ of the form
$s = e_n(f) = \prod_{\sigma\in G}\sigma . f ,\ f\in V^*$
such that $s(x)\neq0$. So, the line bundle $\mathcal{O}(1)^{\otimes n}$
descends to the quotient $G\backslash \ensuremath{\mathbf{P}}(V)$. (See \cite{mum}, \cite{news}).
This leads one to ask the natural question whether
the set $\{\prod_{\sigma\in G} \sigma.f : f\in V^*\}$ generates
the $k$-algebra $\bigoplus_{d\in \ensuremath{\mathbf{Z}}_{\geq0} }
\big({\mathrm{Sym} }^{dn} V^*\big)^G$ of $G$-invariants.
In particular in view of Theorem 3.1 of \cite{kannan}
the following question arises: Is the morphism
$$\ensuremath{\mathbf{P}}(V^*)\longrightarrow \ensuremath{\mathbf{P}} \big( ({\mathrm{Sym} }^{n} V^*)^G\big)$$ given
by $ [f] \mapsto [\prod_{\sigma\in G} \sigma \cdot f ] $
nondegenerate?
In an attempt to answer this question we prove the following
result:
\noindent{\bf Theorem}:\quad{\it
Let $G$ be a finite abelian group of order $n$. Then
the map $$ \varphi\colon \ensuremath{\mathbf{P}}(V^*)\longrightarrow
\ensuremath{\mathbf{P}} ( ({\mathrm{Sym} }^n V^*)^G )$$
defined by $\varphi( [f] ) = [\prod_{\sigma\in G} \sigma \cdot f ] $
is nondegenerate
for every finite-dimensional complex representation $V$ of $G$
if and only if $n $ is either a prime number or $n=4$.}
The paper is organized as follows:
In Section 1 some preliminaries are established.
In Section 2 we give a criterion linking nondegeneracy to
nonsingularity of an associated matrix.
In Section 3 the main theorem is proved.
\section{Preliminaries}
Recall that a map into a vector space $V$ or $\ensuremath{\mathbf{P}}(V)$ is said to be
degenerate, if the image is contained in a hyperplane.
\begin{lem}
Let $V$ be a finite-dimensional vector space
over an infinite field $k$, and $f_1,f_2,\ldots,f_N$ be a
finite number of linearly independent elements of
${\mathrm{Sym} }(V^*) = \bigoplus
_{d=0}^\infty {\mathrm{Sym} }^d V^*$. Then the morphism $\psi\colon V\to k^N$
given by $\psi(v) = (f_1(v), f_2(v),\ldots, f_N(v) )$ is
nondegenerate.
\end{lem}
Proof:\quad If $\psi$ were degenerate, then there would exist a nonzero
linear form $F =\sum_i \alpha_iY_i$ such that $F(\psi(V))=0$.
Then $\sum_i\alpha_i f_i(v) = 0$ for all $v\in V$.
This implies, as the base field is infinite,
that $\sum_i\alpha_if_i=0$
contradicting the linear independence of $f_i$'s.
\begin{lem} Let $v_1$ be a nonzero vector
in a vector space $V$ over $\ensuremath{\mathbf{Z}}/2\ensuremath{\mathbf{Z}}$ of finite-dimension.
Then the two sets,
$\{ A\subset V \mid |A| \hbox{ is even and }
\sum_{w\in A} w = 0\}$ and
$\{ A\subset V \mid |A| \hbox{ is odd and } \sum_{w\in A} w = v_1\}$
have the same cardinality.
\end{lem}
Proof:\quad The correspondence $f(A) = A\setminus\{v_1\} \hbox { or }
A\cup \{v_1\}$ according as whether $v_1$ is in $A$ or not, sets up
the bijection between these two collections of subsets of $V$.
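The bijection can also be checked by brute-force enumeration. The following sketch (Python; $V=(\mathbb{Z}/2\mathbb{Z})^3$ and the fixed nonzero vector $v_1$ are my own illustrative choices) counts both families of subsets:

```python
from itertools import combinations, product

# V = (Z/2Z)^3 as a list of 0/1-triples; v1 is a fixed nonzero vector.
V = list(product((0, 1), repeat=3))
v1 = (1, 0, 0)

def vsum(vecs):
    # componentwise sum mod 2; the empty sum is the zero vector
    return tuple(sum(c) % 2 for c in zip(*vecs)) if vecs else (0, 0, 0)

# subsets A with |A| even and sum(A) = 0
even_zero = sum(1 for r in range(0, len(V) + 1, 2)
                for A in combinations(V, r) if vsum(A) == (0, 0, 0))
# subsets A with |A| odd and sum(A) = v1
odd_v1 = sum(1 for r in range(1, len(V) + 1, 2)
             for A in combinations(V, r) if vsum(A) == v1)

assert even_zero == odd_v1 and even_zero > 0   # the two families are equinumerous
```

The symmetric-difference map $A\mapsto A\,\triangle\,\{v_1\}$ described in the proof is exactly what makes the two counts agree.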
\section{ Nondegeneracy and Nonsingularity}
First we fix some notations.
Let $G$ be a finite abelian group of order $n$, and $V$ a finite-dimensional permutation representation of $G$. If $\{X_i\}_{i=1}^m$ is a basis
of $V^*$ permuted by the action of $G$, then the monomials in the
$X_i$'s of degree $d$ provide a basis for
${\mathrm{Sym} }^d V^*$ again permuted by $G$ action.
Let $O_1,O_2,\ldots, O_N$ be all the distinct $G$-orbits
of monomials of degree $n$ $({}=|G|)$ in these $X_i$'s.
For an $r\in \{1,2,\ldots, N\}$ we
denote by $O_r(X)$ the element in $({\mathrm{Sym} }^n V^{*})^G$ that is
the sum of the monomials in the orbit $O_r$.
Working with the $G$-module $V^*\oplus V^*$, we use
the notation $Y_i$ for the basis of the other copy of
$V^*$ and define $O_r(Y)$ similarly.
Now we express the $G$-invariant
$\prod_{\sigma\in G} \sigma (\sum_{i=1}^m X_iY_i)$ as a
linear combination of these orbit sums:
$\sum_{r,s=1}^N a_{rs} O_r(X) O_s(Y)$.
\begin{lem}
A necessary and sufficient
condition for the morphism $\varphi\colon V^*\to
({\mathrm{Sym} } ^n V^*)^G$ given by
$\varphi(f) = \prod_{\sigma\in G} \sigma.f$ to be
nondegenerate is that the $N\times N$-matrix $((a_{rs}))$
be nonsingular.
\end{lem}
Proof:\quad Assume that the matrix $((a_{rs}))$ is nonsingular.
By the previous lemma the map $V\to k^N$ sending $v$ to
$(O_1(Y)(v), \ldots, O_N(Y)(v) )$ is nondegenerate.
Hence, there exist $v_1,v_2,\ldots, v_N\in V$ such
that the matrix $((O_r(Y)(v_s)))$ is nonsingular. Hence the product
of the two matrices $ ((a_{rs}))\cdot ((O_r(Y)(v_s)))$ is also
nonsingular. Hence the image of the map
$V^*\to ({\mathrm{Sym} }^n V^*)^G$ defined by
$f\mapsto \prod_{\sigma\in G} \sigma.f$
contains a basis of $({\mathrm{Sym} }^n V^*)^G$.
The converse is proved similarly.
\noindent{\bf Example} For any finite group $G$ of order ${}\leq5$, and
$V$ its regular representation, the matrix $((a_{rs}))$ has determinant
$\pm1$.
For the group of order 5, the regular representation
has 26 orbits on the set of monomials of degree 5
in 5 variables, and the $26\times 26$ matrix is
as given below.
{\footnotesize
$$\arraycolsep=4pt
\left[
\begin{array}{cccccccccccccccccccccccccc}
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 1& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 1& 0& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 1& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 2& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 2& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 1& 0& 1& 0& 0& 0& 0& 0& 0& 5 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 1& 0& 1& 0& 0& 0& 0& 0& 0& 5 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 1& 0& 0& 0& 2& 0& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 1& 0& 0& 2& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 3& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 1& 0& 0& 0& 2& 0& 1& 0& 0& 0& 0& 0& 0& 5 \\
0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 0& 1& 0& 0& 0& 1& 0& 0& 0& 3& 0& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 1& 0& 0& 0& 1& 0& 2& 0& 0& 0& 0& 0& 0& 5 \\
0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 1& 0& 0& 3& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 1& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 3& 0& 0 \\
0& 1& 0& 0& 0& 0& 0& 1& 0& 0& 0& 0& 0& 2& 0& 0& 0& 3& 0& 0& 0& 5& 0& 0& 0& 0 \\
0& 0& 1& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 0& 3& 0& 0& 5& 0& 0& 0 \\
0& 0& 0& 1& 0& 0& 0& 0& 1& 2& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 3& 0& 0& 5& 0& 0 \\
0& 0& 0& 0& 1& 0& 1& 0& 0& 0& 2& 0& 0& 0& 0& 3& 0& 0& 0& 0& 0& 0& 0& 0& 5& 0 \\
1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 5& 5& 0& 0& 0& 5& 0& 5& 0& 0& 0& 0& 0& 0& 15 \\
\end{array}\right]$$
}
The determinant being a unit in $\ensuremath{\mathbf{Z}}$, the
nondegeneracy of the map $\varphi$ holds
in all characteristics.
\noindent{\bf Remark}\quad In the case of $V$ having a basis
of common eigenvectors for the action of $G$, a variation of
this lemma can be proved easily. In such cases the matrix
$((a_{rs}))$ would be diagonal. In the next section this variation
will be applied.
\section{Criterion for Nondegeneracy of Finite Abelian Group Quotients}
In this section we prove the following theorem.
\noindent{\bf Theorem}:\quad{\it
Let $G$ be a finite abelian group of order $n$. Then
the map $$ \varphi\colon \ensuremath{\mathbf{P}}(V^*)\longrightarrow
\ensuremath{\mathbf{P}} ( ({\mathrm{Sym} }^n V^*)^G )$$
defined by $\varphi( [f] ) = [\prod_{\sigma\in G} \sigma \cdot f ] $
is nondegenerate
for every finite-dimensional complex representation $V$ of $G$
if and only if $n $ is either a prime number or $n=4$.}
We first prove the sufficiency of the condition.
Let $n=p$, a prime number and
$\zeta$ be a primitive $p$th root of unity. Let $\{X_{ij}\mid
i=1,\ldots, m_j, j=0, \ldots, p-1\}$ be a basis of $V^*$ such that
$\sigma X_{ij}=\zeta^jX_{ij}\hbox{ for all } i,j$.
We prove that for any $p$-tuple $\mathbf{a} =(a_0, \ldots, a_{p-1})$ of
nonnegative integers
such that $\sum_{i=0}^{p-1}a_i=p$ and $\sum_{i=0}^{p-1} a_i i
\equiv 0 \pmod p$, and for any subset $A_j$ of $\{1,\ldots, m_j\}$
of cardinality $a_j$, the monomial $\prod_{j=0}^{p-1} \prod_{i\in A_j} X_{ij} \prod_{j=0}^{p-1}\prod_{i\in A_j} Y_{ij}$ occurs with
nonzero coefficient in
$$ \prod_{t=0}^{p-1} \sigma^t\left( \sum_{j=0}^{p-1}\sum_{i=1}^{m_j} X_{ij}Y_{ij} \right). $$
The argument depends only on a weight computation. So, we may assume
that $X_{ij}=X_{kj}\ \hbox{ for all } i,j,k$.
Let $\pi:{\ensuremath{\mathbf{Z}}}\to {\ensuremath{\mathbf{Z}}}/p\ensuremath{\mathbf{Z}}$ be the natural map.
Let $x\in {\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}}$.
Define $b_r =\sum_{i=0}^r a_i$ and set $b_{-1}=0$.
Define
$S_{x,\mathbf{a}}=\{\sigma\in S_p\mid \pi \left(\sum_{r=0}^{p-1}
\left(\sum_{i=1+b_{r-1}}^{b_r} \sigma(i)\right)r\right)=x\}$.
Then, we have
\begin{lem}
For any $x,y\in {\ensuremath{\mathbf{Z}}}/p{\ensuremath{\mathbf{Z}}}$
such that $y=u\cdot x$ for some nonzero $u$ in ${\mathbf Z}/p{\mathbf Z}$,
$(1)\ \left| S_{x,\mathbf{a}}\right| =\left| S_{y,\mathbf{a}}\right|$, and
$ (2)\ \left| S_{\pi(0),\mathbf{a}}\right|\neq \left| S_{\pi(1),\mathbf{a} }\right|$.
\end{lem}
\noindent{\bf Proof of }(1): The map
$\tilde{u}:S_{x,\mathbf{a}}\to S_{y,\mathbf{a}}$ given by $ \tilde{u}(\sigma)(i)=u\pi(\sigma(i))$, $i=0, \ldots, p-1$, gives a bijection.
\noindent{\bf Proof of} (2). We first observe that $$p
\hbox{ divides }
\left| S_{x,\mathbf{a}} \right|\ \hbox{for all }x\in {\mathbf Z}/p{\mathbf Z}
\hbox{ and } \forall \mathbf{a}\eqno(*)$$
For a proof, let $z\in {\mathbf Z}/p{\mathbf Z}$ and let
$\sigma\in S_{x,\mathbf{a}}$. Then the permutation given by
$(\tilde{z}\circ \sigma)(\pi(i))=\sigma(\pi(i))+z$
belongs to $S_{x,\mathbf{a}}$ as $\sum_{r=0}^{n-1}a_rr \equiv 0 \pmod p. $
Thus $p \hbox{ divides }\left| S_{x,\mathbf{a}} \right|\ \forall x \mbox{ and } \mathbf{a}$.
On the other hand, $$ \bigcup_{x\in {\mathbf Z}/p{\mathbf Z}}S_{x,\mathbf{a}}
=S_p \eqno(**)$$
From $(*)$ and $(**)$, we see that if $ \left| S_{\pi(0),\mathbf{a}}\right|
=\left| S_{\pi(1),\mathbf{a}} \right|$, then $p\cdot \left| S_{\pi(0),\mathbf{a}}\right|
=p!$.
Hence $p$ divides $(p-1)!$ by $(*)$, which is a contradiction for a prime $p$.
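The three facts used here, the divisibility $(*)$, the equality in (1), and the inequality in (2), can be checked by direct enumeration for small primes. The following sketch (Python; the admissible tuples $\mathbf{a}$ are my own illustrative choices) does so for $p=3$ and $p=5$:

```python
from itertools import permutations

def counts(p, a):
    # a = (a_0, ..., a_{p-1}) with sum(a) = p and sum(r * a_r) = 0 mod p.
    # sigma runs over the permutations of Z/p, written as tuples on the
    # positions grouped into consecutive blocks of sizes a_0, ..., a_{p-1}.
    b = [0]
    for ai in a:
        b.append(b[-1] + ai)            # partial sums b_r, with b_{-1} = 0
    c = [0] * p
    for sigma in permutations(range(p)):
        w = sum(r * sum(sigma[b[r]:b[r + 1]]) for r in range(p))
        c[w % p] += 1                   # c[x] = |S_{x,a}|
    return c

for p, a in [(3, (1, 1, 1)), (3, (3, 0, 0)), (5, (3, 1, 0, 0, 1))]:
    assert sum(a) == p and sum(r * ar for r, ar in enumerate(a)) % p == 0
    c = counts(p, a)
    assert all(ci % p == 0 for ci in c)   # (*): p divides every |S_{x,a}|
    assert c[0] != c[1]                   # (2): |S_0| differs from |S_1|
    assert len(set(c[1:])) == 1           # (1): |S_x| equal for all x != 0
```

For instance, for $p=3$ and $\mathbf{a}=(1,1,1)$ the counts are $(0,3,3)$, consistent with all three claims.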
\begin{cor}
Let $p$ be prime. For any $p$-tuple of
nonnegative integers
$\mathbf{a}=(a_0,\ldots, a_{p-1})$ such that $\sum_{r=0}^{p-1}a_r
=p$ and $\sum_{r=0}^{p-1}a_rr\equiv 0 \pmod p$, the polynomial
\break
$\sum_{\sigma\in S_p} T^{\sum_{r=0}^{p-1}\left( \sum_{i=1+ b_{r-1}}^{b_r} \sigma(i)\right)r}$ is not divisible by the $p$th
cyclotomic polynomial.
\end{cor}
Thus, we have shown that the coefficient of $ X^\mathbf{a} Y^\mathbf{a}$ in
$\prod_{i=0}^{p-1}\sigma^i \left( \sum_{r=0}^{p-1}X_{r}Y_r\right)$
is non-zero for all $\mathbf{a}$.
The nondegeneracy for both groups of order 4 can be checked
explicitly.
The necessity of the condition in the Theorem is a consequence of
the following:
\begin{prop}
Let $G$ be a finite abelian group
whose order $n$ is a composite number different from $4$. Let $V$ be the regular representation of $G$ over \ensuremath{\mathbf{C}}. Then the morphism
$$ \varphi\colon \ensuremath{\mathbf{P}}(V^*)\longrightarrow \ensuremath{\mathbf{P}} ( ({\mathrm{Sym} }^n V^*)^G )$$
defined by $\varphi( [f] ) = [\prod_{\sigma\in G} \sigma \cdot f ] $ is
degenerate.
\end{prop}
{\bf Proof:}\quad
This proposition will be proved by considering three cases:
cyclic groups, groups which are direct products of at least
3 copies of the group of order 2, and the other finite abelian groups.
In all the cases we exhibit a monomial $M$ which
is not in the span of image of $\varphi$, proving degeneracy
of the map.
\noindent{\bf Case I}: $G$ is a cyclic group of order $n$.
There are two subcases to be considered here.
Let $p$ denote the least prime divisor of $n$, and let
$q = n/p$.
For the construction of $M$, we use a basis $\{X_\lambda \mid
\lambda \in G \}$ of common eigenvectors of $G$. Note that we abuse
notation by using elements of $G$, instead of the dual $\hat G$,
for indexing this basis.
\textbf{Subcase Ia.}\quad $p|q$. Take
$M= X_0X_1^{q-2}X_{n-q+2}X_q^{(p-1)q}$.
\textbf{Subcase Ib.}\quad $\gcd(p,q) =1$. Take
$M= X_0^{q-2}X_{q-p}X_{n-q+p} X_p^{(p-1)q}$.
For the other cases we decompose the group using the structure theorem
of finite abelian groups:
$G = \bigoplus_{i=1}^r \ensuremath{\mathbf{Z}}/q_i\ensuremath{\mathbf{Z}}$, with
$q_i|q_{i+1},\ i=1,2,\ldots, r-1$.
\noindent{\bf Case II}. $r\geq2$ and $q_r\geq3$.
Let $H = \bigoplus_{i=2}^r \ensuremath{\mathbf{Z}}/q_i\ensuremath{\mathbf{Z}}$, a subgroup of order $q=n/q_1$ in $G$;
that is, $n = q q_1$.
Take $M = X^{q-2}_{0,0,\ldots,0}X_{0,1,1,\ldots,1}X_{0,q_2-1,\ldots,q_r-1}X^{q(q_1-1)}_{1,1,\ldots,1}$.
\noindent{\bf Case III}. $G$ is a product of 3 or more copies of $\ensuremath{\mathbf{Z}}/2\ensuremath{\mathbf{Z}}$.
Now we express $G$ as $\ensuremath{\mathbf{Z}}/2\ensuremath{\mathbf{Z}} \oplus U$, with $U$ a subgroup
of order $q = n/2$.
Take $M= (\prod_{v\in U} X_{0,v})X_{1,1,\ldots,1}^q$.
Now we give the arguments for each case.
\textbf{Case Ia}.
The monomial $M$ is a $G$-invariant of degree $n$, and it can be expressed in only
the following two ways as a product of $p$ monomials, each of which is an
$H$-invariant monomial of degree $q$, where $H$ is the unique
subgroup of order $q$ in $G$.
\begin{itemize}
\item[(i)] $M =\prod_{i=1}^p M_i$ where
$M_1=X_0X_1^{q-2}X_{n-q+2};\ M_i=X_q^q,\ i=2,\ldots, p$.
\item[(ii)] $M=\prod_{i=1}^p M_i'$ where $M_1'=X_qX_1^{q-2}X_{n-q+2};\ M_2'=X_0X_q^{q-1};$
$M_i'=X_q^q, \ i=3,4,\ldots, p$.
\end{itemize}
It is clear that $X_1^{q-2}$ and $X_{n-q+2}$ must be factors of
a common $H$-invariant monomial of degree $q$. So such
a monomial must be either $X_0X_1^{q-2}X_{n-q+2}$ or $X_qX_1^{q-2}X_{n-q+2}$.
Now we prove that the coefficient of $M_i(X)M_i(Y)$ in $\prod_{j=0}^{q-1}\sigma^{pj}\left(\sum_{r=0}^{n-1} X_r Y_r\right)$ is the same as the
coefficient of ${M_i'(X) M_i'(Y)}$ in $\prod_{j=0}^{q-1} \sigma^{pj} \left(\sum_{r=0}^{n-1} X_r Y_r\right)$.
We prove that the monomial
$X_0X_1^{q-2}X_{n-q+2}X_q^{(p-1)q} Y_0Y_1^{q-2}Y_{n-q+2}Y_q^{(p-1)q}$
does not appear in the product
$\prod_{i=0}^{p-1}\sigma^i \left(\prod_{j=0}^{q-1}\sigma^{pj}\left(\sum_{r=0}^{n-1}X_rY_r\right)\right)$.
Let $\zeta$ be a primitive $n$th
root of 1. Then, the coefficient of ${M_1(X) M_1(Y)}$ in
$\displaystyle {\prod_{j=0}^{q-1} \sigma ^{pj}\left( \sum_{r=0}^{n-1} X_r Y_r\right)}$ is
\begin{eqnarray*}
&&=\sum_{\sigma\in S_q}\zeta^{p\left(\sigma(0)\cdot 0+\left(\sum_{t=1}^{q-2}\sigma(t)\right)\cdot 1+\sigma(q-1)(n-q+2)\right) }\\
&&=\sum_{\sigma\in S_q}\zeta^{p\left(\sigma(0)\cdot q+\left(\sum_{t=1}^{q-2}\sigma(t)\right)\cdot 1+\sigma(q-1)(n-q+2)\right) }\\
&&\left( \mbox{because } \zeta^{jpq}=1\ \forall\, j\in \{0,1,2,\ldots,q-1\}\right)\\
&&=\mbox{Coefficient of } M_1'(X)M_1'(Y)\mbox{ in } \prod_{j=0}^{q-1}\sigma^{pj}\left(\sum_{r=0}^{n-1}X_r Y_r\right)
\end{eqnarray*}
Similarly, the coefficient of $M_2(X)M_2(Y)$ equals the coefficient of $M_2'(X)M_2'(Y)$.
Since $M_i=M_i'$ for $i=3,4,\ldots, p$, the coefficients of
$M_i(X)M_i(Y)$ and $M_i'(X)M_i'(Y)$ in
$\prod_{j=0}^{q-1}\sigma^{pj} \left(\sum_{r=0}^{n-1}X_rY_r\right)$
are equal.
Let $c_i$ be the coefficient of $M_i(X)M_i(Y)$ in $\prod_{j=0}^{q-1}\sigma^{ pj}\left(\sum_{r=0}^{n-1} X_r Y_r\right)$, and let $c_i'$ be the
coefficient of $M_i'(X)M_i'(Y)$. Then $c_i=c_i'$, as shown above. Hence, the
coefficient of $M(X)M(Y)$ in
$\prod_{i=0}^{p-1}\sigma^i\left(\prod_{j=0}^{q-1}\sigma^{pj}
\left(\sum_{r=0}^{n-1}X_rY_r\right)\right)$
is
\noindent $\Big(\sum_{i=0}^{p-1}\zeta^{in+q^2\sum_{k\neq i}k}
+\sum_{i,j=0\atop i\neq j}^{p-1} \zeta^{i(n+q)+j(q(q-1))
+ q^2 \sum_{k=0\atop k\neq i,j }^{p-1} k}\Big)c$
where $c=\prod_{i=1}^p c_i$.
But \begin{eqnarray*}&&\sum_{i=0}^{p-1}
\zeta^{in+q^2\sum_{k\neq i}k}
+\sum_{i,j=0\atop i\neq j}^{p-1} \zeta^{i(n+q)+jq(q-1)
+ q^2 \sum_{k=0\atop k\neq i,j }^{p-1} k}\\
&&{}=p+\sum_{\textstyle {i\neq j\atop i,j=0} }^{p-1}\zeta^{(i-j)q}
\quad(\hbox{since } \zeta^n=\zeta^{q^2}=1)\\
&&{}=p+p\left(\sum_{t=1}^{p-1}\zeta ^{tq}\right)\\
&&{}=p\left( \sum_{t=0}^{p-1}\zeta^{tq} \right)=0.
\end{eqnarray*}
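The final cancellation rests on the identity $p+\sum_{i\neq j}\zeta^{(i-j)q}=p\sum_{t=0}^{p-1}\zeta^{tq}=0$, where $\zeta^{q}=e^{2\pi i/p}$ is a primitive $p$-th root of unity. A quick numerical check (Python; the pairs $(p,q)$ with $p\mid q$, as in Subcase Ia, are my own illustrative choices):

```python
import cmath

# Check:  p + sum_{i != j, 0 <= i,j < p} zeta^{(i-j) q} = 0,
# where zeta = exp(2*pi*i / n) with n = p*q and p | q (Subcase Ia),
# so that zeta^q is a primitive p-th root of unity.
for p, q in [(2, 4), (3, 9), (2, 8), (5, 25)]:
    n = p * q
    zeta = cmath.exp(2j * cmath.pi / n)
    s = p + sum(zeta ** ((i - j) * q)
                for i in range(p) for j in range(p) if i != j)
    assert abs(s) < 1e-9   # the sum vanishes up to rounding error
```

Collecting the off-diagonal terms by the residue of $i-j$ modulo $p$ gives $p\sum_{t=1}^{p-1}\zeta^{tq}=-p$, which is how the displayed computation concludes.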
\noindent {\bf Case Ib}.
By a similar argument, one can show that if $\gcd(p,q)=1$,
the coefficient of
$X^{q-2}_0X_{q-p}X_{n-q+p}X_p^{(p-1)q }
Y^{q-2}_0Y_{q-p}Y_{n-q+p}Y_p^{(p-1)q }$ in
$\prod_{i=0}^{n-1}\sigma^i \left(\sum_{r=0}^{n-1}X_rY_r\right)$
is zero.
\noindent{\bf Case II.}
Let $(M_1,\ldots, M_{q_1})$ be a $q_1$-tuple of monomials each
of which is $G$-invariant such that their product $\prod _{i=1}^{q_1} M_i$ is $M$. There is a unique $i_0$ and a unique $j_0$ such that
the variables $X_{0,q_2-1,\ldots,q_r-1}$ and
$X_{0,1,1,\ldots,1}$ occur respectively in the monomials
$M_{i_0}$ and $M_{j_0}$. As we are in the case
$q_r \geq3$, these two variables are distinct.
So for each $t$ the monomial $M_t$ would be $G$-invariant
if and only if $i_0 =j_0$.
For a fixed $i$ consider the set of $q_1$-tuples of monomials defined by
$$S_i = \{ (M_1,\ldots, M_{q_1}) \mid X_{0,q_2-1,\ldots,q_r-1}
\hbox{ occurs in } M_i\}$$
and fixing a $j$ define the subset
$$S_{i,j} = \{ (M_1,\ldots, M_{q_1})\in S_i \mid X_{0,1,1,\ldots,1}
\hbox{ occurs in } M_j\}$$
Let $(M_1,\ldots, M_{q_1})\in S_{i,i}$. Let $j\neq i$.
Define $$ \begin{array}{rlll}M_t' &=& M_t &
\hbox{for all } t\neq i,j\\[2pt]
M_i'&=&\displaystyle
\frac{M_i}{X_{0,1,1,\ldots,1}} X_{1,1,\ldots,1}\\[11pt]
M_j'&=& \displaystyle
\frac{M_j}{X_{1,1,\ldots,1}} X_{0,1,\ldots,1}.\\
\end{array}$$
Then the map $S_{i,i}\to S_{i,j}$
sending $(M_1,\ldots,M_{q_1})$ to $(M_1',\ldots,M_{q_1}')$ is
bijective.
If $a_i$ is the sum of the coefficients of $M(X)M(Y)$ in
$\prod_{t=0}^{q_1-1} \sigma^t \prod _{\tau\in H}
\tau (\sum _{\lambda\in \hat G} X_\lambda Y_\lambda)$
contributed by the elements
$ (N_1,N_2,\ldots,N_{q_1} )\in S_{i,i} $ such that $N_i = M_i$,
then the sum of coefficients of $M(X)M(Y)$ contributed by the larger
set
$$\{(N_1,N_2,\ldots,N_{q_1} )\in S_i \mid N_i = M_i \}$$ is
$a_i (\sum _{j=0}^{q_1-1} \zeta ^{i-j})$, with $\zeta$ being a primitive $q_1$-th root of unity. But this sum is zero as it is the sum of \textit{all} the $q_1$-th roots of unity.
This proves that
in the product
$\prod_{i=0}^{q_1-1} \sigma^i \prod _{\tau\in H}
\tau (\sum _{\lambda\in \hat G} X_\lambda Y_\lambda)$
the coefficient of $M(X)M(Y) $
is zero.
\noindent{\bf Case III}.
The proof proceeds along the same lines as in Case II, making use
of Lemma 2.
\end{document}
\begin{document}
\setcounter{secnumdepth}{6}
\setcounter{tocdepth}{6}
\noindent
{{\Large\bf Analytic theory of It\^o-stochastic differential equations with non-smooth coefficients}\footnote{The research of Haesung Lee was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A3A01096151), and by the DFG through the IRTG 2235 \lq\lq Searching for the regular in the irregular: Analysis of singular and random systems.\rq\rq\ The research of Wilhelm Stannat was partially supported by the DFG through the Research Unit FOR 2402 \lq\lq Rough paths, stochastic partial differential equations and related topics.\rq\rq\ The research of Gerald Trutnau was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03035632).}}
\noindent
{\bf Haesung Lee}, {\bf Wilhelm Stannat},
{\bf Gerald Trutnau}
\\
\noindent
{\small{\bf Abstract.} We present a detailed analysis of non-degenerate time-homogeneous It\^o-stochastic differential equations with low local regularity assumptions on the coefficients. In particular the drift coefficient may only satisfy a local integrability condition. We discuss non-explosion, irreducibility, Krylov-type estimates, regularity of the transition function and resolvent, moment inequalities, recurrence, transience, long time behavior of the transition function, existence and uniqueness of invariant measures, as well as pathwise uniqueness, strong solutions and uniqueness in law. This analysis shows in particular that sharp explicit conditions for the various mentioned properties can be derived similarly to the case of classical stochastic differential equations with local Lipschitz coefficients.\\
\noindent
{2020 {\it Mathematics Subject Classification}: Primary 60H20, 47D07, 60J35; Secondary 31C25, 35J15, 35B65, 60J60.}\\
\noindent
{Key words: It\^o-stochastic differential equation, abstract Cauchy problem, $L^1$-uniqueness, generalized Dirichlet form, infinitesimally invariant measure, strong Feller property, irreducibility, Hunt process, Krylov-type estimate, non-explosion, recurrence, invariant measure, ergodicity, strong well-posedness.
}
\tableofcontents
\section{Introduction}
\label{chapter_1}
This monograph is devoted to the systematic analytic and probabilistic study of weak solutions to the
stochastic differential equation (hereafter SDE)
\begin{eqnarray}
\label{intro:eq1}
X_t = x+ \int_0^t \sigma (X_s) \, dW_s + \int^{t}_{0} \mathbf{G}(X_s) \, ds,
\quad 0\le t <\zeta, x\in \mathbb R^d,
\end{eqnarray}
where $(W_{t})_{t\ge 0}$ is a $d$-dimensional Brownian motion, $A=(a_{ij})_{1\le i,j\le d}=\sigma\sigma^T$
with $\sigma=(\sigma_{ij})_{1\le i,j\le d}$ is locally uniformly strictly elliptic (see \eqref{eq:2.1.2}),
$\mathbf{G}=(g_1, \dots, g_d)$, and $\zeta$ is the lifetime of $X$, under low regularity assumptions
on the coefficients. \\
The classical approach to \eqref{intro:eq1} is pathwise, and global existence and uniqueness of solutions can be obtained
under local Lipschitz assumptions combined with a linear growth condition.
However, typical finite-dimensional approximations of stochastic partial differential equations do not satisfy these
assumptions, nor do several applications in the natural and engineering sciences; see for example \cite{Car}, \cite{Naga}. Therefore the need for a substantial generalization arises. \\
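For orientation, in the classical locally Lipschitz regime a solution of \eqref{intro:eq1} can be approximated by the standard Euler--Maruyama scheme. The following sketch (Python with NumPy; a one-dimensional example with coefficients of my own choosing, not taken from this monograph) simulates one path:

```python
import numpy as np

# Euler-Maruyama discretization of  dX_t = G(X_t) dt + sigma(X_t) dW_t
# in the classical regime: globally Lipschitz coefficients of linear growth.
def euler_maruyama(x0, G, sigma, T=1.0, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt))   # Brownian increment over [t_k, t_{k+1}]
        x[k + 1] = x[k] + G(x[k]) * dt + sigma(x[k]) * dW
    return x

# Illustrative Ornstein-Uhlenbeck-type example: G(x) = -x, sigma(x) = 0.5.
path = euler_maruyama(x0=1.0, G=lambda x: -x, sigma=lambda x: 0.5)
assert np.all(np.isfinite(path))   # no explosion under these assumptions
```

Under merely locally integrable drifts, the subject of this monograph, such naive discretizations are no longer justified, which is one motivation for the analytic approach developed below.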
Another essential drawback of the pathwise approach is the, to a large extent, still open problem of a
mathematically rigorous characterization of the generator
\begin{eqnarray}
\label{intro:eq2}
Lf=\frac 12\sum_{i,j=1}^d a_{ij}\partial_{ij} f
+ \sum_{i=1}^d g_i\partial_i f, \quad f\in C_0^{\infty}(\mathbb R^d)
\end{eqnarray}
of \eqref{intro:eq1}. More specifically, in order to investigate
properties of the solution
of \eqref{intro:eq1} with analytic tools on the state space, especially from PDE theory and functional analysis,
it is necessary to uniquely characterize $w(x,t) = \mathbb{E} (f(X_t) \mid X_0 = x)$, $t \ge 0$,
$x\in\mathbb R^d$, as a solution of a Cauchy problem
\begin{equation}
\label{intro:eq3}
\partial_t w (x,t) = L w (x,t)\, , \quad t \ge 0 \, , \ x\in\mathbb R^d,
\end{equation}
with initial condition $w(x,0) = f(x)$, for some proper extension $L$, whose full domain will depend
on the underlying function space and may in general not explicitly be characterized. \\
In this monograph we will investigate a converse approach to the solution and further investigation of
\eqref{intro:eq1}, by starting with the analysis of the Cauchy problem \eqref{intro:eq3} on $L^1$-spaces
with weights and subsequently constructing a strong solution to \eqref{intro:eq1} via the
Kolmogorov-type construction of an associated Markov process. The essential advantage of this approach,
which we will describe in more detail in Section \ref{MethodsandResults}
below, is that at each stage of the
construction we keep a rigorous analytic description of the associated Cauchy problem \eqref{intro:eq3}
including its full generator $\overline{L}$. This allows us to establish a rigorous mathematical
connection between SDEs and related stochastic calculus on the one hand, and regularity theory of partial differential equations (PDEs), potential and semigroup theory, and generalized
Dirichlet form theory on the other. \\
As another advantage we can relax the local regularity assumptions on the coefficients of \eqref{intro:eq1}
considerably. If for instance, for some $p\in (d,\infty)$, the components of
$\sigma=(\sigma_{ij})_{1\leq i,j \leq d}$ have $H^{1,p}_{loc}$-regularity and $\mathbf{G}$ has
$L^{p}_{loc}$-regularity, strong existence and pathwise uniqueness of a solution $X$
to \eqref{intro:eq1} holds for all times under any global condition that implies non-explosion. Various non-explosion conditions are given in Section
\ref{subsec:3.2.1}.
Our main result then,
Theorem \ref{theo:3.3.1.8}, provides a detailed analysis of the properties of the solution $X$, like
strong Feller properties of the transition semigroup and resolvent, Krylov-type estimates, moment
inequalities, transience, recurrence, ergodicity, and existence and uniqueness of invariant measures with
sharp explicit conditions, similarly to the classical case of locally Lipschitz continuous coefficients.
\\
In recent years, striking and important new results about pathwise uniqueness and existence of a strong
solution to \eqref{intro:eq1}, when $\mathbf{G}$ merely fulfills some local integrability condition, have been
presented (\cite{GyMa}, \cite{KR}, \cite{Zh11}). All these works also cover the time-dependent case with
some trade-off between the integrability assumptions in time and space, but they struggle to provide a complete stochastic analysis as in Theorem \ref{theo:3.3.1.8} without a drastic strengthening of the local regularity assumptions (cf. Remark \ref{rem:3.3.1}). \\
Instead, the crucial idea here is to construct weak solutions to \eqref{intro:eq1} by PDE techniques
(\cite{ArSe} and \cite{St65}) and generalized Dirichlet form theory (\cite{WS99}, \cite{WSGDF},
\cite{Tr2}, \cite{Tr5}), and thus separately and independently from local pathwise uniqueness and
probabilistic techniques.
Following this approach, initiated in \cite{RoShTr} in the frame of sectorial Dirichlet forms, we
ultimately rely only on a local pathwise uniqueness result (\cite[Theorem 1.1]{Zh11}), since it enables us
to construct a Hunt process whose transition semigroup is so regular that
all presumably optimal classical conditions for the properties of a solution to \eqref{intro:eq1} above
carry over to our situation of non-smooth and/or locally unbounded coefficients.
\subsection{Methods and results}\label{MethodsandResults}
Let us describe in more detail the respective stages in our approach to the analysis of
\eqref{intro:eq1} and the main results obtained.
\subsection*{I. The abstract Cauchy problem}
The starting point of our approach is, in Section \ref{sec2.1}, the analysis of the Cauchy problem
\eqref{intro:eq3} on a space $L^1 (\mathbb R^d , \mu)$, where $\mu$ is a measure with a regular density $\rho$
that satisfies
\begin{eqnarray}
\label{intro:eq4}
\int_{\mathbb R^d} Lf \, d\mu = 0 \quad \forall\, f\in C_0^{\infty} (\mathbb R^d).
\end{eqnarray}
We call such a measure an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$. Although this property
of $\mu$ is loosely linked with the concept of invariance for stochastic processes, our approach is by no
means limited to SDEs that admit an invariant measure, let alone ergodic ones.
We emphasize that the existence of such a measure $\mu$ is much less restrictive, if at all,
than it might seem at first sight; in fact, $\mu$ will not be a finite measure in general, let
alone a probability measure.\\ \\
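As a simple illustration (not needed in the sequel): if $A=\mathrm{Id}$ and the drift $\mathbf{G}$ is smooth with $\mathrm{div}\,\mathbf{G}=0$, then Lebesgue measure $\mu = dx$ satisfies \eqref{intro:eq4}, since for every $f\in C_0^\infty(\mathbb R^d)$
$$
\int_{\mathbb R^d} Lf\, dx = \int_{\mathbb R^d} \tfrac 12 \Delta f\, dx + \int_{\mathbb R^d} \langle \mathbf{G}, \nabla f\rangle\, dx
= 0 - \int_{\mathbb R^d} f\, \mathrm{div}\,\mathbf{G}\, dx = 0\, .
$$
Here $\mu$ is infinitesimally invariant, but infinite.\\ \\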
\textit{Semigroup approach to the Cauchy problem}\\ \\
In the first step we realize in Theorem \ref{theorem2.1.5} an extension of $(L,C_0^{\infty}(\mathbb R^d))$ as
the infinitesimal generator of a sub-Markovian $C_0$-semigroup of contractions $(T_t)_{t>0}$, which then gives rise to solutions of the Cauchy problem \eqref{intro:eq3}. A crucial
step in our analysis is the decomposition of the drift coefficient $\mathbf{G}$ as
$$
\mathbf{G} = \beta^{\rho , A}+\mathbf{B},
$$
where $\beta^{\rho , A}=(\beta^{\rho, A}_1 , \ldots,\beta^{\rho, A}_d)$ is the logarithmic derivative of $\rho$ associated with $A$ (see \eqref{eq:2.1.5}), and $\mathbf{B}$ is
a $\mu$-divergence zero vector field. This allows us to decompose the operator $L$ as
$$
Lf = L^0 f + \langle \mathbf{B}, \nabla f \rangle,
$$
where
$$
L^0 f = \frac 12\sum_{i,j=1}^d a_{ij}\partial_{ij} f
+ \sum_{i=1}^d \beta^{\rho, A}_i \partial_i f, \quad f\in C_0^{\infty}(\mathbb R^d)
$$
is symmetric on $L^2 (\mathbb R^d , \mu )$ and can be extended in a unique way to a self-adjoint generator of
a symmetric Dirichlet form, which plays a crucial role in our analysis.\\ \\
\textit{Uniqueness, invariance and conservativeness} \\ \\
We then discuss in the abstract functional analytic setting uniqueness of such infinitesimal generators, which is
linked to the uniqueness of solutions of the Cauchy problem, and its interrelations with invariance and
conservativeness as global properties of the semigroup. The corresponding results can be found in
Proposition \ref{prop2.1.9}, Proposition \ref{prop:2.1.10}, Remark \ref{rem:2.1.10a}, Corollaries
\ref{cor:2.1.4.1} and \ref{cor2.1.1}.
\subsection*{II. Infinitesimally invariant measures $\mu$}
The existence and further regularity properties of a measure $\mu$ satisfying \eqref{intro:eq4}, as needed
for our approach, are investigated in Section \ref{sec2.2}. It is shown in particular that if, for some
$p\in (d,\infty)$, the components of $A$ have $H^{1,p}_{loc}$-regularity and $\mathbf{G}$ has
$L^{p}_{loc}$-regularity, then such a $\mu$ exists and has a strictly positive density $\rho\in H^{1,p}_{loc}(\mathbb R^d)$. Rewriting
$L$ in divergence form, we can considerably relax the assumptions on $A$; see assumption {\bf (a)} of
Section \ref{subsec:2.2.1} for precise conditions. From there onwards, assumption {\bf (a)} will always be in force unless otherwise stated. The main result on existence and regularity of $\mu$ is
given in Theorem \ref{theo:2.2.7}.
\subsection*{III. Regular solutions of the Cauchy problem}
In order to enable a Kolmogorov-type construction of a Markov process whose transition semigroup is
given by the semigroup $(T_t)_{t>0}$, it is necessary to pass from $(T_t)_{t> 0}$ to kernels of
sub-probability measures. To this end we further analyze the regularity properties of $(T_t)_{t>0}$ in
Section \ref{sec2.3}, in particular the existence of a H\"older-continuous version $P_tf$ of $T_tf$ that
gives rise to a transition function $(P_t)_{t > 0}$. The corresponding results are given in
Theorem \ref{theo:2.6}
using
\eqref{semidef}. We also discuss precise
interrelations of our regularity results with the strong Feller property.\\ \\
\textit{Irreducibility} \\ \\
In Section \ref{sec2.4}, irreducibility of $(T_t)_{t>0}$ and of the associated transition function
$(P_t)_{t>0}$, called irreducibility in the probabilistic sense, are obtained. See Proposition
\ref{prop:2.4.2} for the corresponding result. This closes the analytic part of our approach.
\subsection*{IV. Associated Markov processes}
\textit{Construction and identification}\\ \\
Our first step on the probabilistic side is to construct in Section \ref{sec:3.1} a Hunt process $\mathbb M$
with transition semigroup $(P_t)_{t>0}$. The corresponding result is contained in Theorem \ref{th: 3.1.2}.
The existence of $\mathbb M$ does not follow immediately from the general theory of Markov processes, since $(P_t)_{t>0}$ may fail to be Feller. Instead, we use a refinement of a construction method
from \cite{AKR} that involves elements of generalized Dirichlet form theory. For this purpose a higher
regularity of the resolvent is needed, which requires another assumption {\bf (b)} (see Section
\ref{subsec:3.1.1}) in addition to assumption {\bf (a)}. From there onwards both assumptions {\bf (a)} and
{\bf (b)} will be in force unless otherwise stated. Given the
regularity properties of the resolvent and the transition semigroup, the identification of $\mathbb M$ as a
weak solution to \eqref{intro:eq1} (cf. Definition \ref{def:3.48}(iv)) then follows standard lines. See
Proposition \ref{prop:3.1.6} and Theorem \ref{theo:3.1.4} for the corresponding results.\\ \\
\textit{Krylov-type estimates} \\ \\
As a by-product of the improved resolvent regularity, we obtain in Theorem \ref{theo:3.3} a Krylov-type
estimate that is of interest in its own right (see Remark \ref{rem:ApplicationKrylovEstimates}). Its importance stems from the fact that probabilistic quantities like $\int_0^t g(X_s)ds$, which are related to the drift or to the quadratic variation of the local martingale part of $X$ in \eqref{intro:eq1}, can be controlled in terms of the $L^q$-norm of $g$, thereby making solutions to \eqref{intro:eq1} more tractable.\\ \\
\textit{Non-explosion and conservativeness} \\ \\
Throughout Section \ref{sec:3.2}, we investigate global properties of the Hunt process $\mathbb M$ constructed in
Theorem \ref{th: 3.1.2}, by analytic and by probabilistic methods. Since we already know that $\mathbb M$ has continuous
sample paths on $[0,\zeta)$, where $\zeta$ denotes the lifetime,
the first important global property of $\mathbb M$ is non-explosion, i.e. $\zeta=\infty$. It guarantees that
the weak solution of Theorem \ref{theo:3.1.4} exists for all times and is continuous on
$[0,\infty)$. Due to the strong Feller property, conservativeness of $(T_t)_{t>0}$ is equivalent to non-explosion of $\mathbb M$. In Section \ref{subsec:3.2.1}, various qualitatively different sufficient non-explosion
criteria for $\mathbb M$ are presented. See Proposition \ref{prop:3.2.8}, Lemma \ref{lem3.2.6}, Corollaries
\ref{cor:3.2.2} and \ref{cor:3.1.3} and Proposition \ref{theo:3.2.8} for Lyapunov-type conditions for
non-explosion. Since the drift coefficient does not need to be locally bounded, we also provide in
Proposition \ref{theo:3.2.8} non-explosion criteria of a different nature than in the case
of locally bounded coefficients (\cite{Pi}), which we further illustrate with examples
(Example \ref{exam:3.2.1.4}). We also present in Proposition \ref{prop:3.2.9}
volume growth conditions for non-explosion, which follow from generalized Dirichlet form techniques and
are again of a different nature than classical non-explosion conditions.\\ \\
\textit{Transience and recurrence} \\ \\
In Section \ref{subsec:3.2.2}, we study transience and recurrence. Recurrence is an important concept, as it implies stationarity of solutions w.r.t. $\mathbb{P}_{\mu}$ and that $\mu$ is the unique (though possibly infinite) invariant measure for the solution of \eqref{intro:eq1} (see \cite{LT21in}).
We establish in Theorem \ref{theo:3.3.6}
a well-known dichotomy between recurrence and transience (cf. for instance \cite[Theorem 3.2]{Bha} and
\cite[Theorem 7.4]{Pi} for the case of locally bounded coefficients) and develop several sufficient analytic
criteria for recurrence. The corresponding results are given in Proposition \ref{theo:3.2.6}, Corollary \ref{cor:3.2.2.5}, and Proposition \ref{cor:3.3.2.6}.\\ \\
\textit{Uniqueness of invariant measures} \\ \\
Section \ref{subsec:3.2.3} deals with uniqueness of invariant measures and the long time behavior of
$(P_t)_{t>0}$. Again, due to the regularity properties of $(P_t)_{t>0}$, Doob's ergodic
theorem is applicable. Based on this, we develop several classical-like explicit criteria for ergodicity
(see Proposition \ref{prop:3.3.12} and Corollary \ref{cor:3.2.3.7}). Example \ref{ex:3.8} provides a
counterexample to uniqueness of invariant measures.
\subsection*{V. The stochastic differential equation}
In the final stage of our approach we consider the stochastic differential equation \eqref{intro:eq1}
and investigate in Section \ref{sec:3.3} two types of uniqueness of a solution.\\ \\
\textit{Pathwise uniqueness and strong solutions} \\ \\
The first type of uniqueness is pathwise uniqueness (cf. Definition \ref{def:3.48}(v)) and we explore
the existence of a strong solution to \eqref{intro:eq1} (cf. Definition \ref{def:3.48}(ii)). Using the
classical Yamada--Watanabe Theorem (\cite{YW71}) and a local uniqueness result from \cite{Zh11}, we
obtain Theorem \ref{theo:3.3.1.8} both under the mere assumption of {\bf (c)} of Section \ref{sec:3.3}
(which implies the two
assumptions {\bf (a)} and {\bf (b)}) and the assumption that the constructed Hunt process $\mathbb M$ in
Theorem \ref{th: 3.1.2} is non-explosive.
This is one of the main achievements of our approach.
It shows that SDEs with non-smooth coefficients, for instance those with locally unbounded drift,
can be treated with classical-like methods, and it presents a genuine extension of the It\^{o} theory of locally
Lipschitz coefficients and non-degenerate dispersion coefficients. In particular, our new approach allows us to
close a partial gap in the existing literature and we refer the reader to the introduction of
Section \ref{sec:3.3} and Remark \ref{rem:3.3.1} for more details.\\ \\
\textit{Uniqueness in law} \\ \\
The second type of uniqueness is related to uniqueness in law under the conditions {\bf (a)} and {\bf (b)}.
Since uniqueness in law in the classical sense as in Definition \ref{def:3.48}(vi) may not hold in the
general class of coefficients (cf. for instance the introduction of \cite{LT19de}), here we consider a
weaker form of uniqueness in law which is related to $L^1$-uniqueness (cf. Definition \ref{def:3.3.2.1}).
The corresponding uniqueness result is contained in Proposition \ref{prop:3.3.1.16}.\\
\subsection{Organization of the book}
The text is structured and divided into an analytic part (Chapter \ref{chapter_2}), a probabilistic part (Chapter \ref{chapter_3}), and a conclusion and outlook part (Chapter \ref{conclusionoutlook}). For a better orientation of the reader we start each section with a summary of its main contents and the assumptions that are in force. We also provide historical remarks concerning specific aspects of our work, where we cite relevant related work and compare existing literature with our results in a detailed way (Remark \ref{rem:2.30new}, Remark \ref{rem:2.4.1}, Remark \ref{rem:2.3.3}, Remark \ref{rem:ApplicationKrylovEstimates}, Remark \ref{rem:3.3.1}). Additional information to existing theories and results that are used for our analysis is provided throughout the text. In particular, Sections \ref{Comments2}, resp. \ref{Comments3}, provide a summary of techniques and results that we rely on in Chapters \ref{chapter_2}, resp. \ref{chapter_3}.
\section{The abstract Cauchy problem in $L^r$-spaces with weights}
\label{chapter_2}
\subsection{The abstract setting, existence and uniqueness}
\label{sec2.1}
We consider the Cauchy problem\index{Cauchy problem}
\begin{equation}
\label{eq:2.1.0}
\partial_t \varw (x,t) = L \varw (x,t) \, ,t \ge 0 \, , x\in\mathbb R^d,
\end{equation}
where $L= \frac 12\sum_{i,j=1}^d a_{ij}\partial_{ij} + \sum_{i=1}^d g_i\partial_i $ is some locally uniformly strictly elliptic partial differential operator of second order on $\mathbb R^d$ with domain $C_0^\infty (\mathbb R^d )$ and suitable initial condition
$\varw(x,0) = f(x)$ on the space $L^1 (\mathbb R^d , \mu )$. Here, $\mu$ is a locally finite nonnegative
measure that is infinitesimally invariant for $(L,C_0^{\infty}(\mathbb R^d))$ (see \eqref{eq:2.1.0d}). We explicitly construct in
Section \ref{subsec:2.1.2a}, under minimal assumptions on the coefficients $(a_{ij})_{1\le i,j\le d}$ and
$(g_i)_{1\le i\le d}$, extensions of $(L,C_0^{\infty}(\mathbb R^d))$ generating sub-Markovian $C_0$-semigroups on $L^1 (\mathbb R^d, \mu )$ (see
Theorem \ref{theorem2.1.5} for the main result) and discuss in Section \ref{subsec:2.1.4} uniqueness of
such extensions. The main result, contained in Corollary \ref{cor2.1.1}, establishes a link
between uniqueness of maximal extensions and invariance of the infinitesimally invariant measure $\mu$
under the associated semigroup $(\overline{T}_t)_{t\ge 0}$. We discuss in Section \ref{subsec:2.1.4} the
interrelations of invariance with conservativeness of $(\overline{T}_t)_{t\ge 0}$, resp. its dual semigroup,
and provide in Proposition \ref{prop:2.1.10}, resp. Corollary \ref{cor:2.1.4.1} explicit sufficient
conditions on the coefficients, including Lyapunov-type conditions, implying invariance resp.
conservativeness. We also illustrate the scope of the results with some counterexamples.\\[3pt]
In view of the envisaged application to the analysis of weak solutions of stochastic differential
equations, we will be in particular interested in the existence of solutions $\varw(x,t)$ to the Cauchy problem
\eqref{eq:2.1.0} that can be represented as an expectation w.r.t. some associated Markov process:
\begin{equation}
\label{eq:2.1.0a}
\varw(x,t) = \mathbb{E} (f(X_t) \mid X_0 = x) \, , t\ge 0\, , x\in\mathbb R^d.
\end{equation}
The classical linear semigroup theory (see \cite[Chapter 4]{Pa}) provides a solution to the abstract Cauchy problem
\eqref{eq:2.1.0}
in terms of $\varw(x,t)= \overline{T}_t f (x)$, where $\overline{T}_0=id$ and $(\overline{T}_t)_{t> 0}$ is a
strongly continuous semigroup ($C_0$-semigroup) on a suitable function space $\mathcal{B}$, whose infinitesimal
generator
$$
\overline{L} f := \frac{d\overline{T}_t f}{dt}\Big|_{t = 0} \quad \text{ on $\mathcal{B}$}
$$
with domain $D(\overline{L}) := \{f\in \mathcal{B} : \frac{d\overline{T}_t f}{dt}\Big|_{t = 0} \text{ exists in } \mathcal{B} \}$
is a closed extension of $(L,C_0^{\infty}(\mathbb R^d))$, i.e. $C_0^\infty (\mathbb R^d) \subset D(\overline{L})$ and
$\overline{L}|_{C_0^\infty (\mathbb R^d)} = L$. Such extensions of the operator $L$ are called
\textbf{maximal extensions}\index{operator ! maximal extension} of $(L,C_0^{\infty}(\mathbb R^d))$ on $\mathcal{B}$. \\
However, in order to be able to represent $\overline{T}_t f(x) = \varw(x,t)
= \mathbb{E} (f(X_t) \mid X_0 = x)$ as the expectation of some Markov process, the semigroup has
to be in addition \textbf{sub-Markovian}\index{semigroup ! sub-Markovian}, i.e.
\begin{equation}
\label{eq:2.1.0b}
0\le f\le 1 \quad \Rightarrow \quad 0\le \overline{T}_t f \le 1 \, , t > 0 .
\end{equation}
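The classical prototype, included here only for orientation, is $L = \frac 12 \Delta$ with $\mu = dx$: in this case
$$
\overline{T}_t f(x) = \int_{\mathbb R^d} (2\pi t)^{-d/2}\, e^{-\frac{|x-y|^2}{2t}}\, f(y)\, dy = \mathbb{E}\big( f(x+W_t) \big)\, , t>0\, ,
$$
is the Gauss--Weierstrass (heat) semigroup, which is sub-Markovian since the Gaussian kernel is a probability density, and the associated Markov process $X_t = x + W_t$ is Brownian motion.\\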
Using the maximum principle, the construction of such sub-Markovian semigroups associated with $L$ can be achieved within classical PDE theory under appropriate regularity assumptions on the coefficients $a_{ij}$ and $g_i$. On the other hand, the theory of stochastic processes provides the existence of
Markov processes under much weaker regularity assumptions on the coefficients, for example with the help of SDEs and the precise mathematical characterization of their transition semigroups and (infinitesimal) generators. This has been intensively investigated in the past, but still leaves many challenging questions open. \\
A very successful approach towards such a rigorous mathematical theory, connecting solutions of the abstract Cauchy problem \eqref{eq:2.1.0} with transition semigroups of Markov processes under minimal regularity assumptions, has been developed within the theory of symmetric Dirichlet forms (\cite{FOT}) in the particular case where the differential operator $L$ becomes symmetric,
\begin{equation}
\label{eq:2.1.0c}
\int_{\mathbb R^d} Lu \varv\, d\mu = \int_{\mathbb R^d} u L\varv\, d\mu \quad \forall\, u,\varv\in C_0^{\infty} (\mathbb R^d),
\end{equation}
w.r.t. the inner product on the Hilbert space $L^2 (\mathbb R^d , \mu )$ induced by some locally finite nonnegative measure
$\mu$. The measure $\mu$ is called a \textbf{symmetrizing measure} \index{measure ! symmetrizing} for
the operator $L$ in this case. Using linear perturbation theory of symmetric operators, the scope of Dirichlet form theory was
subsequently extended, in a first generalization, to the case where $L$ can be realized as
a sectorial operator on some $L^2$-space (see \cite{MR}), and later to the fully non-symmetric case in \cite{WS99} (see also \cite{WSGDF}). \\
The general theory developed in \cite{WS99} combines semigroup theory with Dirichlet form techniques
in order to solve the abstract Cauchy problem \eqref{eq:2.1.0} in terms of a sub-Markovian semigroup
$(\overline{T}_t)_{t > 0}$ on the Banach space $L^1 (\mathbb R^d , \mu )$, where $\mu$ is an \textbf{infinitesimally invariant measure} \index{measure ! infinitesimally invariant}
for $(L, C_0^{\infty}(\mathbb R^d))$, i.e. a locally finite nonnegative measure satisfying $Lu\in L^1(\mathbb R^d,\mu)$ for all $u\in C_0^{\infty} (\mathbb R^d)$ and
\begin{equation}
\label{eq:2.1.0d}
\int_{\mathbb R^d} Lu \, d\mu = 0 \quad \forall\, u\in C_0^{\infty} (\mathbb R^d)\, .
\end{equation}
Note that \textbf{symmetry \eqref{eq:2.1.0c} implies invariance \eqref{eq:2.1.0d}}: choosing a
function $\chi\in C_0^\infty (\mathbb R^d)$ with $\chi\equiv 1$ on a neighborhood of the support of $u$, we obtain
\begin{equation*}
\label{eq:2.1.0dd}
\int_{\mathbb R^d} Lu \, d\mu = \int_{\mathbb R^d} L u \,\chi\, d\mu = \int_{\mathbb R^d} u \,L\chi\, d\mu = 0\, ,
\end{equation*}
since $Lu$ vanishes outside the support of $u$, where $\chi\equiv 1$, and $L\chi \equiv 0$ on a neighborhood of the support of $u$, because $L$ is a local operator.\\
The existence (and uniqueness), as well as the analytic and probabilistic interpretation of (infinitesimally) invariant measures $\mu$, will be further analyzed thoroughly in subsequent
sections (see in particular Sections \ref{sec2.2} and \ref{subsec:3.2.3}). \\
Before stating the precise assumptions on the coefficients and the infinitesimally invariant measure in the next
section let us discuss the most relevant functional analytic implications of assumption
\eqref{eq:2.1.0d}.
\begin{itemize}
\item \textbf{(Beurling--Deny property)}\index{Beurling--Deny property} Let $\psi\in L^1_{loc} (\mathbb R )$ be monotone increasing. Then
\begin{equation}
\label{eq:2.1.0e}
\int_{\mathbb R^d} \psi (u)L u \, d\mu \le 0\quad \forall u\in C_0^\infty (\mathbb R^d)\, .
\end{equation}
Indeed, assume first that $\psi\in C^\infty (\mathbb R)$ is monotone increasing, hence $\psi' \ge 0$. Let $\Psi (t) := \int_0^t \psi (s)\, ds$. Then $\Psi (0) = 0$, hence
$\Psi (u)\in C_0^\infty (\mathbb R^d)$, and by ellipticity
$$
L \Psi (u) = \psi (u) Lu + \psi' (u) \frac 12 \sum_{i,j=1}^d a_{ij}\partial_i u \partial_j u \ge \psi (u) Lu ,
$$
so that integrating w.r.t. the infinitesimally invariant measure $\mu$ yields \eqref{eq:2.1.0e}. The general case then follows by straightforward approximation.
\item As a consequence of the Beurling--Deny property we obtain that $(L , C_0^\infty (\mathbb R^d))$ is
dissipative on $L^r (\mathbb R^d , \mu)$ for all $r\in [1, \infty)$ (see \cite[Lemma 1.8, p. 36]{Eberle}),
which is a necessary condition for the existence of maximal extensions of $L$ generating a
$C_0$-semigroup of contractions on $L^r (\mathbb R^d , \mu)$.
\item Since $L$ is dissipative, it is in particular closable. Its closure in $L^r (\mathbb R^d, \mu )$ generates
a $C_0$-semigroup $(T_t )_{t> 0}$ if and only if the following \textbf{range condition}
\index{operator ! range condition} holds (\cite[Theorem 3.1]{LP61}):
\begin{equation}
\label{eq:2.1.0f}
\exists\, \lambda > 0 \text{ such that } (\lambda - L)(C_0^\infty (\mathbb R^d )) \text{ is dense in } L^r (\mathbb R^d , \mu ).
\end{equation}
In this case, the semigroup $(T_t )_{t> 0}$ is sub-Markovian (see \cite[pp. 36--37]{Eberle}).
\end{itemize}
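\noindent
To indicate how the Beurling--Deny property yields dissipativity in the case $r=1$ (a minimal sketch; the general case $r\in [1,\infty)$ is treated in \cite[Lemma 1.8, p. 36]{Eberle}): applying \eqref{eq:2.1.0e} with the monotone increasing function $\psi = \text{sign}$ (where $\text{sign}(0):=0$) yields for $u\in C_0^\infty (\mathbb R^d)$ and $\lambda > 0$
$$
\lambda \| u \|_{L^1 (\mathbb R^d ,\mu)} = \int_{\mathbb R^d} \lambda u\, \text{sign}(u)\, d\mu
\le \int_{\mathbb R^d} (\lambda u - Lu)\, \text{sign}(u)\, d\mu
\le \| (\lambda - L) u \|_{L^1 (\mathbb R^d ,\mu)}\, ,
$$
which is the defining estimate of dissipativity on $L^1 (\mathbb R^d , \mu )$.\\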
\noindent
We will apply the range condition, in Section \ref{subsec:2.1.2} below, to some suitable, but still
explicit, extension of $L$, to obtain, for any relatively compact open subset $V\subset \mathbb R^d$, the
existence of a sub-Markovian semigroup $( \overline{T}^V_t )_{t> 0}$ on $L^1 (V, \mu )$ whose
generator $(\overline{L}^V , D(\overline{L}^V))$ extends $(L, C_0^\infty (V))$
(Proposition \ref{prop:2.1}). The associated Markov process (also constructed in \cite{WS99}) is a
stochastic process killed at the instant it reaches the boundary of $V$. It is therefore only natural to
conjecture the following \textbf{domain monotonicity}\index{semigroup ! domain monotonicity}:
\begin{equation}
\label{eq:2.1.0g}
\overline{T}^{V_1}_t \le \overline{T}^{V_2}_t \text{ for any relatively compact open subsets }
V_1 \subset V_2.
\end{equation}
Here $\overline{T}^{V_1}_t \le\overline{T}^{V_2}_t$ means that $\overline{T}^{V_1}_t f
\le \overline{T}^{V_2}_t f$ for all $f\in L^1 (V_1, \mu )$, $f\ge 0$.
We give a rigorous purely analytic proof for this monotonicity in terms of the corresponding resolvents
in Lemma \ref{lemma2.1.6} below. \\
Having established \eqref{eq:2.1.0g}, we can consider in the next step the monotone limit
\begin{equation}
\label{eq:2.1.0h}
\overline{T}_t f = \lim_{n\to\infty} \overline{T}^{V_n}_t f\, , t\ge 0\, ,
\end{equation}
for an increasing sequence $(V_n)_{n\ge 1}$ of relatively compact open subsets satisfying
$\overline{V}_n\subset V_{n+1}$, $n\ge 1$. It is quite easy to see that the monotone limit
$( \overline{T}_t )_{t > 0}$ defines a sub-Markovian $C_0$-semigroup of contractions on $L^1 (\mathbb R^d , \mu )$.
A remarkable fact of this construction is its \textbf{independence of the chosen exhausting sequence
$(V_n)_{n\ge 1}$} (Theorem \ref{theorem2.1.5}).
\subsubsection{Framework and basic notations}
\label{subsec:2.1.1}
Let us next introduce our precise mathematical framework and fix the basic notations and assumptions in
force up to the end of Section \ref{sec2.1}.
We suppose that $\mu$ is a $\sigma$-finite (positive) measure on $\mathcal B (\mathbb R^d)$ as follows:
\begin{eqnarray}\label{condition on mu}
\mu=\rho\,dx, \ \text{ where }\rho = \varphi^2,\ \varphi\in H_{loc}^{1,2}(\mathbb R^d), \ d\ge 1, \ \text{supp}(\mu )\equiv \mathbb R^d.
\end{eqnarray}
\noindent
Let $V$ be an open subset of $\mathbb R^d$. If $\mathcal{A}\subset L^s (V, \mu )$, $s\in [1, \infty]$, is
an arbitrary subset, denote by $\mathcal{A}_0$ the subspace of all elements $u\in \mathcal{A}$ such that
$\text{supp} (|u|\mu )$ is a compact subset of $V$, and by $\mathcal{A}_b$ the subspace of
all bounded elements in $\mathcal{A}$. Finally, let $\mathcal{A}_{0,b} := \mathcal{A}_0\cap \mathcal{A}_b$.\\
Let us next introduce weighted Sobolev spaces that we are going to use in our analysis.
Let $H_0^{1,2} (V, \mu )$ be the closure of $C_0^\infty (V)$ in $L^2 (V,\mu )$ w.r.t. the norm
$$
\|u\|_{H_0^{1,2} (V,\mu )}
:= \left( \int_V u^2\, d\mu + \int_V \|\nabla u\|^2\, d\mu \right)^{\frac 12}.
$$
Finally, let $H_{loc}^{1,2} (V,\mu )$ be the space of all elements $u$ such that
$u\chi\in H_0^{1,2}(V,\mu)$ for all $\chi\in C_0^\infty (V)$.
\noindent
The precise assumptions on the coefficients of our differential operator
that we want to analyze are as follows: let $A = (a_{ij})_{1\le i,j\le d}$ with
\begin{equation}
\label{eq:2.1.1}
a_{ji}= a_{ij}\in H_{loc}^{1,2}(\mathbb R^d ,\mu )\, , 1\le i,j\le d,
\end{equation}
be locally strictly elliptic, i.e., for all $V$ relatively compact
there exists a constant $\nu_V > 0$ such that
\begin{equation}
\label{eq:2.1.2}
\nu^{-1}_V\| \xi \|^2\le\langle A (x) \xi,\xi\rangle\le\nu_V \|\xi\|^2 \text{ for all } \xi \in\mathbb R^d , x\in V.
\end{equation}
Let
\begin{equation}
\label{eq:2.1.3}
\mathbf{G}=(g_1,\ldots,g_d) \in L^2_{loc}(\mathbb R^d, \mathbb R^d, \mu),
\end{equation}
i.e., $\int_V\|\mathbf{G}\|^2\, d\mu < \infty$ for all $V$ relatively compact in $\mathbb R^d$,
and suppose that the measure $\mu$ is an {\bf infinitesimally invariant measure}
\index{measure ! infinitesimally invariant} for $(L^A+\langle \mathbf{G},
\nabla \rangle, C_0^{\infty}(\mathbb R^d))$, i.e.
\begin{equation}
\label{eq:2.1.4}
\int_{\mathbb R^d}\big ( L^{A} u + \langle\mathbf{G} , \nabla u\rangle\big )\, d\mu = 0 \qquad \forall u\in C_0^\infty (\mathbb R^d),
\end{equation}
where for $f\in C^2(\mathbb R^d)$
\begin{equation} \label{eq:2.1.3bis}
L^A f := \frac 12 \sum_{i,j =1}^d a_{ij}\partial_{ij} f.
\end{equation}
Moreover, throughout this monograph, we shall let for $f\in C^2(\mathbb R^d)$
\begin{equation} \label{eq:2.1.3bis2}
L f := L^A f +\langle \mathbf{G},\nabla f\rangle=\frac 12 \sum_{i,j =1}^d a_{ij}\partial_{ij} f +\sum_{i=1}^dg_i\partial_i f.
\end{equation}
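As a standard illustration of \eqref{eq:2.1.4}, not needed in the sequel, consider the Ornstein--Uhlenbeck case $A = \mathrm{Id}$, $\mathbf{G}(x) = -x$ and $\mu = e^{-|x|^2}\, dx$. For $u \in C_0^\infty(\mathbb R^d)$, integration by parts gives
$$
\int_{\mathbb R^d} \tfrac 12 \Delta u \; e^{-|x|^2}\, dx = \int_{\mathbb R^d} u\, \big( 2|x|^2 - d \big)\, e^{-|x|^2}\, dx
$$
and
$$
- \int_{\mathbb R^d} \langle x , \nabla u \rangle\, e^{-|x|^2}\, dx
= \int_{\mathbb R^d} u\, \mathrm{div}\big( x\, e^{-|x|^2} \big)\, dx
= \int_{\mathbb R^d} u\, \big( d - 2|x|^2 \big)\, e^{-|x|^2}\, dx\, ,
$$
so the two contributions cancel and $\mu$ is infinitesimally invariant for $(L, C_0^{\infty}(\mathbb R^d))$.\\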
We will provide in Theorem \ref{theo:2.2.7} of Section \ref{sec2.2}
explicit sufficient conditions on $A$ and $\mathbf{G}$ such that an
infinitesimally invariant measure $\mu$ with the required regularity \eqref{condition on mu} exists, and for which the
assumptions \eqref{eq:2.1.1}--\eqref{eq:2.1.4} are satisfied (see in particular
Theorem \ref{theo:2.2.7} and Remark \ref{rem:2.2.7}).\\
As mentioned in the previous section, \eqref{eq:2.1.4} implies that the operator $(L^A + \langle \mathbf{G}, \nabla \rangle, C_0^\infty (\mathbb R^d))$ is
dissipative on the Banach space $L^1 (\mathbb R^d , \mu )$, which is necessary for the
existence of a closed extension $(\overline{L}, D(\overline{L}))$ of
$(L^A + \langle \mathbf{G}, \nabla \rangle, C_0^\infty (\mathbb R^d))$ generating a $C_0$-semigroup of contractions
on $L^1 (\mathbb R^d , \mu )$. In general, we cannot expect the closure of
$(L^A + \langle \mathbf{G}, \nabla\rangle, C_0^\infty (\mathbb R^d))$ to already generate such a semigroup; in fact, there may exist many maximal extensions, and not all of them generate
sub-Markovian semigroups. Here we recall that a closed extension $(\overline{L}, D(\overline{L}))$ of $(L,
C_0^\infty (\mathbb R^d))$ is called a {\bf maximal extension}, if it is the generator of a $C_0$-semigroup
in $L^1 (\mathbb R^d , \mu )$.\\
To find the right maximal extension that meets our requirements for the analysis of
associated Markov processes, we first need to extend the domain $C_0^\infty (\mathbb R^d )$ in a
nontrivial, but nevertheless explicit way. To this end, observe that we can decompose
\begin{equation}
\label{eq:2.1.3b}
L^A + \langle \mathbf{G}, \nabla \rangle = L^0 + \langle \mathbf{B}, \nabla \rangle \quad \text{ on }\ C^{\infty}_0(\mathbb R^d)
\end{equation}
into the sum of a $\mu$-symmetric operator $L^0$ and a first-order perturbation given by a vector
field $\mathbf{B}$ of $\mu$-divergence zero. \\
Indeed, note that for $u,\varv\in C_0^\infty (\mathbb R^d)$, an integration by parts yields that
\begin{equation}
\label{eq:2.1.3c}
\int_{\mathbb R^d} \big( L^A u + \langle\mathbf{G}, \nabla u\rangle\big ) \varv\, d\mu
= - \frac 12 \int_{\mathbb R^d} \langle A\nabla u, \nabla \varv\rangle\, d\mu
+ \int_{\mathbb R^d} \langle \mathbf{G} - \beta^{\rho, A} , \nabla u\rangle \varv\, d\mu
\end{equation}
with $\beta^{\rho , A} = (\beta_1^{\rho , A} , \ldots , \beta_d^{\rho , A} )\in L^2_{loc}
(\mathbb R^d, \mathbb R^d, \mu )$ defined as
\begin{equation}
\label{eq:2.1.5}
\beta^{\rho , A}_i
= \frac 12 \sum_{j=1}^d \Big ( \partial_j a_{ij} + a_{ij} \frac{\partial_j \rho}{\rho}\Big ),
1\le i\le d.
\end{equation}
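For instance, in the illustrative special case $A = \mathrm{Id}$, \eqref{eq:2.1.5} reduces to the classical logarithmic derivative
$$
\beta^{\rho , \mathrm{Id}} = \frac{\nabla \rho}{2 \rho} = \frac{\nabla \varphi}{\varphi} \qquad (\rho = \varphi^2 \text{ as in \eqref{condition on mu}}),
$$
so that for the Gaussian density $\rho (x) = e^{-|x|^2}$ one obtains $\beta^{\rho , \mathrm{Id}}(x) = -x$, and the operator $L^0$ from \eqref{eq:2.1.3b} becomes the Ornstein--Uhlenbeck operator $\frac 12 \Delta - \langle x , \nabla \rangle$.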
The symmetric positive definite bilinear form
$$
\mathcal E^0 (u,\varv) := \frac 12 \int_{\mathbb R^d}\langle A\nabla u, \nabla \varv\rangle\,d\mu\, , \quad u,\varv\in C_0^\infty (\mathbb R^d )
$$
can be shown to be closable on $L^2 (\mathbb R^d , \mu )$ by using results of \cite[Subsection II.2b)]{MR}.
Denote its closure by $(\mathcal E^0,D(\mathcal E^0))$, the associated self-adjoint generator
by $(L^0, D(L^0))$ and the corresponding sub-Markovian semigroup by $(T_t^0)_{t>0}$. We let
$$
\mathcal E^0_{\alpha}(\cdot,\cdot):=\mathcal E^0(\cdot,\cdot)+\alpha(\cdot,\cdot)_{L^2(\mathbb R^d,\mu)}\, , \ \alpha>0.
$$
Recall that
the domain of the generator is defined as
$$
D(L^{0}) := \left\{ u\in D(\mathcal E^0 )
: \varv\mapsto \mathcal E^0 (u,\varv) \text{ is continuous w.r.t. } \|\cdot\|_{L^2 (\mathbb R^d ,\mu)} \; \text{on } D(\mathcal{E}^0) \right\}
$$
and for $u\in D(L^{0})$, $L^{0} u$ is defined via the Riesz representation theorem (see \cite[Theorem 5.5]{BRE}) as the unique
element in $L^2 (\mathbb R^d , \mu)$ satisfying
\begin{equation}
\label{DefGeneratorDF}
\mathcal E^0 (u,\varv) = - \int_{\mathbb R^d} L^{0} u \varv\, d\mu , \;\; \forall \varv\in D(\mathcal E^0 ).
\end{equation}
\noindent
It is easy to see that our assumptions imply that $C_0^\infty (\mathbb R^d )\subset D(L^0)$ and
\begin{equation}
\label{eq:2.1.3d}
L^0 u = L^A u + \langle \beta^{\rho , A},\nabla u\rangle, u\in C_0^\infty (\mathbb R^d ),
\end{equation}
so that
\begin{equation}
\label{eq:2.1.5a}
\mathbf{B} = \mathbf{G} - \beta^{\rho , A},
\end{equation}
which is also contained in $L^2_{loc} (\mathbb R^d, \mathbb R^d, \mu )$. The vector field $\mathbf{B}$ is then indeed of
$\mu$-divergence zero, i.e.,
\begin{equation}
\label{eq:2.1.6}
\int_{\mathbb R^d} \langle\mathbf{B}, \nabla u \rangle\, d\mu = 0 \quad \forall u\in C_0^\infty (\mathbb R^d),
\end{equation}
since $\int_{\mathbb R^d} \langle\mathbf{B}, \nabla u \rangle\, d\mu = \int_{\mathbb R^d} \big( L^A u + \langle \mathbf{G} ,
\nabla u\rangle \big)\, d\mu - \int_{\mathbb R^d} L^0 u\,d\mu = 0$. \\
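A simple illustrative example of a nontrivial vector field of $\mu$-divergence zero (with $d = 2$, $A = \mathrm{Id}$ and a radial density $\rho (x) = \phi (|x|^2)$, $\phi$ smooth and strictly positive) is the rotational field $\mathbf{B}(x) = (-x_2 , x_1)$: since $\mathrm{div}\, \mathbf{B} = 0$ and $\langle \mathbf{B} , \nabla \rho \rangle = 2 \phi' (|x|^2) ( -x_2 x_1 + x_1 x_2 ) = 0$, we obtain for all $u \in C_0^\infty (\mathbb R^2)$
$$
\int_{\mathbb R^2} \langle \mathbf{B} , \nabla u \rangle\, d\mu
= - \int_{\mathbb R^2} u \,\mathrm{div}\big( \rho\, \mathbf{B} \big)\, dx
= - \int_{\mathbb R^2} u \, \langle \mathbf{B} , \nabla \rho \rangle\, dx = 0\, ,
$$
so that the corresponding operator $L = L^0 + \langle \mathbf{B} , \nabla \rangle$ is genuinely non-symmetric whenever $\mathbf{B}$ does not vanish $\mu$-a.e.\\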
The decomposition \eqref{eq:2.1.3b} is crucial for our construction of a closed extension of
$L^A + \langle \mathbf{G}, \nabla \rangle$ on $L^1 (\mathbb R^d , \mu )$ generating a $C_0$-semigroup that
is {\it sub-Markovian}.
\begin{remark}\label{cnulltwocoincide}
{\it By the same line of argument as above one can show that \eqref{eq:2.1.3b} holds in fact also on $C_0^2(\mathbb R^d)$.}
\end{remark}
\subsubsection{Existence of maximal extensions on $\mathbb R^d$}
\label{subsec:2.1.2a}
\paragraph{Existence of maximal extensions on relatively compact subsets $V\subset\mathbb R^d$}
\label{subsec:2.1.2}
Throughout this section we fix a relatively compact open subset $V$ in $\mathbb R^d$.
Then all assumptions on the coefficients become global. In particular, the restriction of
$A(x)$ is uniformly strictly elliptic, the restriction of $\mu$ is a finite measure and the vector fields $\mathbf{G}$, $\beta^{\rho , A}$ and $\mathbf{B}$ are in $L^2 (V, \mathbb R^d ,\mu )$. Our aim in this section is to construct a {\bf maximal extension} \index{operator ! maximal extension} $(\overline{L}^V, D(\overline{L}^V))$ of
\begin{equation}
\label{OperatorBoundedDomain}
L^A u+ \langle\mathbf{G}, \nabla u\rangle = L^0 u + \langle\mathbf{B}, \nabla u\rangle\, ,
u\in C_0^\infty (V) ,
\end{equation}
on $L^1 (V, \mu )$, i.e.
$(\overline{L}^V, D(\overline{L}^V))$ is a closed extension of $(L^A + \langle \mathbf{G}, \nabla \rangle, C_0^{\infty}(V))$ on $L^1(V, \mu)$ that generates a $C_0$-semigroup of contractions on $L^1(V, \mu)$. \\
It is clear that we cannot achieve this by simply taking its closure on
$C_0^\infty (V)$, since no boundary conditions are specified. However, we can impose Dirichlet
boundary conditions as follows: let $(L^{0,V}, D(L^{0,V}))$ be the self-adjoint generator of the
symmetric Dirichlet form $\mathcal E^0 (u,\varv)$, $u,\varv\in H_0^{1,2}(V,\mu )$, which is characterized similarly to
the full domain case \eqref{DefGeneratorDF} as
\begin{equation}
\label{DefGeneratorDFBoundedDomain}
\mathcal E^0 (u,\varv) = - \int_V L^{0, V} u \varv\, d\mu, \quad \forall u \in D(L^{0,V}), \ \varv \in H^{1,2}_0(V, \mu).
\end{equation}
Note that $C_0^2(V)\subset D(L^{0,V})$ and that for $u\in D(L^{0,V}) \subset H_0^{1,2} (V, \mu )$, $\langle\mathbf{B}, \nabla u\rangle
\in L^1 (V,\mu )$, so that in particular its restriction to bounded functions,
$$
L^{0,V} u + \langle\mathbf{B}, \nabla u\rangle\, , u\in D(L^{0,V})_b ,
$$
is a well-defined extension of \eqref{OperatorBoundedDomain} on $L^1 (V, \mu )$. Note that the zero
$\mu$-divergence of the vector field $\mathbf{B}$ (see \eqref{eq:2.1.6}) extends to all of
$H_0^{1,2} (V, \mu )$ by simple approximation. \\
The following proposition now states that this operator is closable and that its closure generates a sub-Markovian $C_0$-semigroup of contractions. In addition, the integration by parts \eqref{eq:2.1.3c} extends to all bounded functions in the domain of the closure.
\begin{proposition}
\label{prop:2.1}
Let \eqref{condition on mu}--\eqref{eq:2.1.4}
be satisfied and $V$ be a relatively compact open subset in $\mathbb R^d$.
Let $(L^{0,V}, D(L^{0,V}))$ be the generator of $(\mathcal E^0 , H_0^{1,2} (V,\mu ))$ (see \eqref{DefGeneratorDFBoundedDomain}). Then:
\item{(i)} The operator
$$
L^V u := L^{0,V} u + \langle \mathbf{B} , \nabla u\rangle, \quad u\in D(L^{0,V})_b,
$$
is dissipative, hence in particular closable, on
$L^1(V, \mu )$. The closure $(\overline{L}^V, D(\overline{L}^V))$ generates a
sub-Markovian $C_0$-semigroup of contractions $(\overline{T}_t^V)_{t >0}$ on $L^1(V, \mu)$. In particular
$(\overline{L}^V, D(\overline{L}^V))$ is a {\bf maximal extension}\index{operator ! maximal extension} of
$$
(\frac 12 \sum_{i,j =1}^d a_{ij}\partial_{ij} + \sum_{i=1}^d g_{i}\partial_{i},
C_0^\infty (V))
$$
(cf. \eqref{eq:2.1.3bis} and \eqref{OperatorBoundedDomain}).
\item{(ii)} $D(\overline{L}^V)_b\subset H_0^{1,2} (V,\mu )$ and
\begin{equation}
\label{eq:2.1.7}
\mathcal E^0 (u,\varv) - \int_V\langle\mathbf{B}, \nabla u\rangle \varv\, d\mu = - \int_V
\overline{L}^V u\, \varv\, d\mu \, ,u\in D(\overline{L}^V)_b , \varv\in H_0^{1,2}
(V, \mu )_b.
\end{equation}
In particular,
\begin{equation}
\label{eq:2.1.8}
\mathcal E^0 (u,u) = -\int_{V}\overline{L}^V u\, u\, d\mu, u\in D(\overline{L}^V)_b.
\end{equation}
\end{proposition}
\begin{proof}
The complete proof of Proposition \ref{prop:2.1} is given in \cite{WS99}. Let us only state its essential steps in the following. \\
(i) {\bf Step 1:} To show that $(L^V, D(L^{0,V})_b)$ is dissipative, it suffices to show that
$$
\int_V L^Vu \psi (u)\, d\mu \le 0 \, , u\in D(L^{0,V})_b ,
$$
with $\psi = 1_{(0, \infty)} - 1_{(-\infty , 0)}$, since $\|u\|_1 \psi (u) \in L^\infty (V,\mu )
= (L^1 (V, \mu ))'$ is a normalized tangent functional to $u$. Since $\psi = 1_{(0, \infty)}
- 1_{(-\infty , 0)}$ is monotone increasing, it therefore suffices to extend the Beurling--Deny property \eqref{eq:2.1.0e} to this setting. But this follows from the well-known fact that it holds for the generator $L^{0,V}$ of the symmetric Dirichlet form (\cite{BH}) and since
$u\in H_0^{1,2} (V, \mu )$ implies $|u|\in H_0^{1,2} (V,\mu)$,
$$
\int_V \langle \mathbf{B} , \nabla u\rangle \psi (u) \, d\mu =
\int_V \langle \mathbf{B} , \nabla |u|\rangle \, d\mu = 0 ,
$$
where the first equality follows from the chain rule $\nabla |u| = \psi (u)\nabla u$ $\mu$-a.e.
\noindent
{\bf Step 2:} In the next step one shows that the closure $(\overline{L}^V, D(\overline{L}^V))$ generates
a $C_0$-semigroup of contractions $(\overline{T}^V_t)_{t> 0}$ on $L^1(V, \mu)$. To this end one
verifies, by \cite[Theorem 3.1]{LP61}, the range condition: $(1-L^V)(D(L^{0,V})_b)\subset L^1 (V , \mu )$ dense.
Indeed, let $h\in L^\infty (V, \mu )$ be such that $\int_V (1-L^V)u\, h\, d\mu = 0$ for all $u\in D(L^{0,V})_b$. Then $u\mapsto\int_V (1- L^{0,V})u\, h \, d\mu = \int_V \langle\mathbf{B} , \nabla u\rangle h\, d\mu$, $u\in D(L^{0,V})_b$, is continuous w.r.t. the norm on $H_0^{1,2}(V,\mu )$ which implies the existence of some element $v\in H_0^{1,2} (V, \mu )$ such that $\mathcal E^0_1(u,v) = \int_V (1- L^{0,V})u\, h \, d\mu $ for all $u\in D(L^{0,V})_b$. It follows that
$\int_V (1- L^{0,V})u (h-v) \, d\mu = 0$ for all $u\in D(L^{0,V})_b$. Since the semigroup generated by $(L^{0,V} , D(L^{0,V}))$ is in particular $L^\infty$-contractive, we obtain that $(1-L^{0,V})(D(L^{0,V})_b) \subset L^1 (V,\mu )$ dense and consequently, $h = v$. In particular, $h\in
H_0^{1,2} (V, \mu )$ and
$$
\begin{aligned}
\mathcal E^0_1(h,h)
& = \lim_{t\to 0+}\mathcal E^0_1(T_t^{0,V}h,h) = \lim_{t\to 0+}\int_V (1-L^{0,V})T^{0,V}_th\, h \, d\mu \\
& = \lim_{t\to 0+}\int_V \langle \mathbf{B} , \nabla T^{0,V}_t h \rangle h \, d\mu
= \int_V \langle \mathbf{B} , \nabla h \rangle h \, d\mu = 0
\end{aligned}
$$
by \eqref{eq:2.1.6} and \cite[Lemma 1.3.3(iii)]{FOT} and therefore $h = 0$. \\
\noindent
{\bf Step 3:} $(\overline{T}^V_t)_{t > 0}$ is sub-Markovian.
\noindent
This follows from the fact that the Beurling--Deny property \eqref{eq:2.1.0e} for $(L^V,D(L^{0,V})_b)$ extends to its closure. In particular,
$$
\int_V\overline{L}^Vu\, 1_{\{u > 1\}}\, d\mu\le 0.
$$
It is well-known that this property now implies that the semigroup $(\overline{T}^V_t)_{t> 0}$
is sub-Markovian. \\
(ii) In order to verify the integration by parts \eqref{eq:2.1.7} first note that it holds for
$u\in D(L^{0,V})_b$ by the construction of $L^V$. It remains to extend it to bounded elements of the closure,
i.e., to $u\in D(\overline{L}^{V})_b$. This is not straightforward, since convergence of $(u_n)_{n\ge 1} \subset
D(L^{0,V})_b$ to $u\in D (\overline{L}^V)$ w.r.t. the graph norm does not immediately imply convergence in
$H_0^{1,2} (V, \mu )$. One therefore needs to apply a suitable cutoff function $\psi\in C_b^2 (\mathbb R)$
such that $\psi (t) = t$ if $|t|\le\|u\|_{L^\infty (V , \mu )} + 1$ and $\psi (t) = 0$ if $|t|\ge
\|u\|_{L^\infty (V , \mu )} + 2$, to pass to the uniformly bounded sequence $(\psi (u_n))_{n\ge 1}
\subset D(L^{0,V})_b$. Clearly,
$$
\overline{L}^V\psi (u_n) = \psi '(u_n) L^V u_n + \frac 12 \psi '' (u_n)\langle A\nabla u_n , \nabla u_n\rangle
$$
and the essential step now is to verify that
$$
\lim_{n\to\infty} \psi '' (u_n)\langle A\nabla u_n , \nabla u_n\rangle = 0 \text{ in } L^1 (V,\mu)\, ,
$$
since this then implies $\lim_{n\to\infty}\overline{L}^V\psi (u_n) = \overline{L}^Vu$,
$(\psi (u_n))_{n\ge 1} \subset H_0^{1,2} (V,\mu )$ bounded, hence $u\in H_0^{1,2} (V,\mu )$, and
\eqref{eq:2.1.7} holds for the limit $u\in D(\overline{L}^V)_b$.
\end{proof}
\begin{remark}
\label{rem2.1.3}
{\it Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied and $V$ be a relatively compact open subset in $\mathbb R^d$.
Since $- \left(\mathbf{G}- \beta^{\rho, A}\right)$ satisfies the same assumptions as
$\mathbf{G} - \beta^{\rho, A}$, the closure $(\overline{L}^{V,\prime},
D(\overline{L}^{V,\prime}))$ of $L^{0,V}u -\langle \mathbf{G} - \beta^{\rho, A},
\nabla u\rangle$, $u\in D(L^{0,V})_b$, on $L^1 (V, \mu)$ generates a sub-Markovian $C_0$-semigroup
of contractions $(\overline{T}^{V,\prime}_t)_{t> 0}$, $D(\overline{L}^{V,\prime})_b
\subset H_0^{1,2}(V,\mu )$ and
$$
\mathcal E^0 (u,\varv) + \int_V\langle\mathbf{G} - \beta^{\rho, A} , \nabla u\rangle\varv\, d\mu
= - \int_V\overline{L}^{V,\prime}u\,\varv\, d\mu\, ,
u\in D(\overline{L}^{V,\prime})_b, \varv\in H_0^{1,2}(V, \mu )_b.
$$
If $(L^{V, \prime}, D(L^{V, \prime}))$ is the part of $(\overline{L}^{V,\prime},
D(\overline{L}^{V,\prime}))$ on $L^2 (V, \mu )$ and $(L^V, D(L^V))$ is the part of
$(\overline{L}^V, D(\overline{L}^V))$ on $L^2 (V, \mu )$, then
\begin{equation}
\label{eq:2.1.11}
\begin{aligned}
(L^Vu, & \varv )_{L^2 (V, \mu )} = -\mathcal E^0 (u,\varv) + \int_V\langle\mathbf{G}
- \beta^{\rho, A} ,\nabla u \rangle\varv\, d\mu \\
& = -\mathcal E^0 (\varv ,u) - \int_V\langle\mathbf{G} - \beta^{\rho, A},
\nabla\varv \rangle u\, d\mu
= (L^{V, \prime}\varv ,u )_{L^2 (V, \mu )}
\end{aligned}
\end{equation}
for all $u\in D(L^V)_b$, $\varv\in D(L^{V,\prime})_b$. Since $(L^V, D(L^V))$
(resp. $(L^{V,\prime}, D(L^{V,\prime}))$) is the generator of a sub-Markovian $C_0$-semigroup,
it follows that $D(L^V)_b\subset D(L^V)$ (resp. $D(L^{V, \prime})_b\subset D(L^{V,\prime})$) is dense
w.r.t. the graph norm. Hence \eqref{eq:2.1.11} extends to all $u\in D(L^V)$, $\varv\in D(L^{V, \prime})$,
which implies that the parts of $\overline{L}^V$ and $\overline{L}^{V, \prime}$ on $L^2 (V, \mu )$
are adjoint operators. }
\end{remark}
\noindent
Note that the sub-Markovian $C_0$-semigroup of contractions
$(\overline{T}^V_t)_{t> 0}$ on $L^1 (V, \mu )$ can be restricted to a
semigroup of contractions on $L^r (V, \mu )$ for all $r\in [1, \infty)$ by
the Riesz--Thorin interpolation theorem (cf. \cite[Theorem IX.17]{ReSi}) and that
the restricted semigroup is strongly continuous on $L^r (V, \mu )$. The
corresponding generator $(\overline{L}^V_r, D(\overline{L}^V_r))$ is the
{\bf part of} $(\overline{L}^V, D(\overline{L}^V))$ on $L^r (V, \mu )$, i.e.,
$D(\overline{L}^V_r) = \{u\in D(\overline{L}^V)\cap L^r(V,\mu ):\overline
{L}^Vu\in L^r (V, \mu )\}$ and $\overline{L}^V_r u = \overline{L}^Vu$,
$u\in D(\overline{L}^V_r)$.
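As a finite-dimensional sanity check of this interpolation argument (not part of the construction above), one may take the counting measure on $\{1,\dots ,n\}$ and let a symmetric substochastic matrix play the role of $\overline{T}^V_t$: it contracts $\ell^1$ and $\ell^\infty$, hence, by Riesz--Thorin, every $\ell^r$. A minimal numerical sketch (all parameters illustrative):

```python
import numpy as np

# A symmetric substochastic matrix: entries >= 0, row sums <= 1.
# By symmetry its column sums are <= 1 as well, so it contracts l^1 and
# l^infinity; Riesz--Thorin interpolation then gives contractivity on every l^r.
rng = np.random.default_rng(0)
n = 8
M = rng.random((n, n))
M = (M + M.T) / 2.0
P = M / M.sum(axis=1).max()          # normalize so that the max row sum is 1

f = rng.standard_normal(n)
for r in (1.0, 1.5, 2.0, 4.0):
    assert np.linalg.norm(P @ f, r) <= np.linalg.norm(f, r) + 1e-12

# Sub-Markov property: 0 <= g <= 1 implies 0 <= P g <= 1.
g = rng.random(n)
assert np.all(P @ g >= 0.0) and np.all(P @ g <= 1.0 + 1e-12)
```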
\paragraph{Existence of maximal extensions on the full domain $\mathbb R^d$}
\label{subsec:2.1.3}
We are now going to extend the previous existence result to the full domain.
For any relatively compact open subset $V$ in $\mathbb R^d$ let
$(\overline{L}^V, D(\overline{L}^V))$ be the maximal extension of $(L, C_0^\infty (V))$
on $L^1 (V, \mu )$ constructed in Proposition \ref{prop:2.1} and $(\overline{T}_t^V)_{t> 0}$
be the associated sub-Markovian $C_0$-semigroup of contractions. Recall from linear semigroup theory that
for $\alpha > 0$, the operator $(\alpha - \overline{L}^V , D(\overline{L}^V))$ is invertible
with bounded inverse $\overline{G}^V_\alpha = (\alpha - \overline{L}^V)^{-1}$.
$(\overline{G}^V_\alpha )_{\alpha > 0}$ is called the resolvent generated by $\overline{L}^V$ and it is given as the Laplace transform
$$
\overline{G}^V_\alpha = \int_0^\infty e^{-\alpha t} \overline{T}_t^V \, dt, \quad \alpha > 0,
$$
of the semigroup. The strong continuity of $(\overline{T}_t^V)_{t> 0}$ implies the strong continuity
$\lim_{\alpha\to\infty} \alpha \overline{G}_\alpha^V f = f$ in $L^1 (V, \mu )$ of the resolvent, and the
sub-Markovianity of the semigroup implies that each $\alpha\overline{G}_\alpha^V$, $\alpha > 0$, is sub-Markovian as well. \\
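As a sanity check (not part of the construction), the Laplace-transform formula for the resolvent can be verified numerically in a finite-dimensional caricature: the discrete Dirichlet Laplacian on $V = (0,1)$, i.e. $A = \mathrm{Id}$, $\mathbf{B} = 0$ and $\mu$ the Lebesgue measure; all grid parameters below are illustrative.

```python
import numpy as np

# L^V = (1/2) d^2/dx^2 with Dirichlet boundary conditions on V = (0,1),
# discretized at n interior nodes.
n = 15
h = 1.0 / (n + 1)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (0.5 / h**2)

alpha = 2.0
G = np.linalg.inv(alpha * np.eye(n) - L)          # resolvent (alpha - L)^{-1}

# Laplace transform of T_t = exp(tL): diagonalize the symmetric matrix L and
# integrate each spectral mode e^{-(alpha - w_i) t} with the trapezoid rule.
w, V = np.linalg.eigh(L)                           # w < 0 (Dirichlet spectrum)
ts = np.linspace(0.0, 10.0, 10001)
dt = ts[1] - ts[0]
vals = np.exp(-(alpha - w[:, None]) * ts[None, :])
lam = (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1])) * dt
G_quad = (V * lam) @ V.T                           # = V diag(lam) V^T
assert np.max(np.abs(G_quad - G)) < 1e-3
```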
If we define
$$
\overline{G}^V_\alpha f := \overline{G}^V_\alpha (f1_V), f\in L^1
(\mathbb R^d , \mu ), \alpha > 0,
$$
then $\alpha \overline{G}^V_\alpha$, $\alpha > 0$, can be extended to a
sub-Markovian contraction on $L^1 (\mathbb R^d, \mu )$, which is, however, no longer
strongly continuous in the usual sense, but still satisfies $\lim_{\alpha\to\infty}
\alpha\overline{G}^V_\alpha f = f1_V$ in $L^1 (\mathbb R^d , \mu )$.\\
The crucial observation for the existence of an extension now is the following domain
monotonicity:
\begin{lemma}
\label{lemma2.1.6}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied. Let $V_1$, $V_2$ be relatively compact open subsets in
$\mathbb R^d$ and $V_1\subset V_2$. Let $u\in L^1 (\mathbb R^d , \mu )$, $u\ge 0$, and
$\alpha > 0$. Then $\overline{G}^{V_1}_\alpha u\le\overline{G}^{V_2}_\alpha u$.
\end{lemma}
\begin{proof} Clearly, we may assume that $u$ is bounded. Let
$w_\alpha := \overline{G}^{V_1}_\alpha u - \overline{G}^{V_2}_\alpha u $.
Then $w_\alpha\in H_0^{1,2}(V_2, \mu )$ but also
$w_\alpha^+\in H_0^{1,2}(V_1, \mu )$ since $w_\alpha^+\le\overline
{G}^{V_1}_\alpha u$ and $\overline{G}^{V_1}_\alpha u\in H_0^{1,2}
(V_1, \mu )$. Note that $\int_{\mathbb R^d} \langle\mathbf{B},
\nabla w_\alpha\rangle w_\alpha^+\, d\mu = \int_{\mathbb R^d} \langle\mathbf{B}, \nabla
w_\alpha^+\rangle w_\alpha^+\, d\mu = 0$ and that $\mathcal E^0 (w_\alpha^+, w_\alpha^-)
\le 0$, since $(\mathcal E^0 , H_0^{1,2}(V_2, \mu ))$ is a Dirichlet form. Hence
by \eqref{eq:2.1.7}
$$
\begin{aligned}
\mathcal E_\alpha^0 (w_\alpha^+ , w_\alpha^+) & \le\mathcal E^0_\alpha (w_\alpha ,
w_\alpha^+ ) - \int_{\mathbb R^d} \langle\mathbf{B},\nabla w_\alpha\rangle w_\alpha^+ \, d\mu \\
& = \int_{\mathbb R^d} (\alpha - \overline{L}^{V_1})\overline{G}^{V_1}_\alpha u\,
w_\alpha ^+ \, d\mu - \int_{\mathbb R^d} (\alpha - \overline{L}^{V_2})\overline
{G}^{V_2}_\alpha u \, w_\alpha ^+ \, d\mu = 0.
\end{aligned}
$$
Consequently, $w_\alpha^+ = 0$, i.e., $\overline{G}^{V_1}_\alpha u
\le\overline{G}^{V_2}_\alpha u$.
\end{proof}
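The domain monotonicity of Lemma \ref{lemma2.1.6} can be observed numerically in the same caricature ($A = \mathrm{Id}$, $\mathbf{B} = 0$, $\mu$ Lebesgue); a finite-difference sketch with nested intervals $V_1 = (-1,1)\subset V_2 = (-2,2)$, where the discrete maximum principle yields the inequality exactly (grid parameters illustrative):

```python
import numpy as np

# Global grid on [-2, 2]; V_1 = (-1, 1) and V_2 = (-2, 2) are nested intervals.
N = 401
x = np.linspace(-2.0, 2.0, N)
h = x[1] - x[0]
alpha = 1.0

def dirichlet_resolvent(mask):
    """Solve (alpha - (1/2) d^2/dx^2) u = 1 at the grid nodes where mask is
    True, with u = 0 outside (homogeneous Dirichlet boundary condition)."""
    idx = np.where(mask)[0]          # consecutive interior nodes of V
    m = len(idx)
    A = np.zeros((m, m))
    for k in range(m):
        A[k, k] = alpha + 1.0 / h**2          # alpha + (1/2) * 2 / h^2
        if k > 0:
            A[k, k - 1] = -0.5 / h**2
        if k < m - 1:
            A[k, k + 1] = -0.5 / h**2
    u = np.zeros(N)
    u[idx] = np.linalg.solve(A, np.ones(m))
    return u

u1 = dirichlet_resolvent(np.abs(x) < 0.999)   # G_alpha^{V_1} 1
u2 = dirichlet_resolvent(np.abs(x) < 1.999)   # G_alpha^{V_2} 1
assert np.all(u1 <= u2 + 1e-10)               # domain monotonicity
assert np.all(alpha * u2 <= 1.0 + 1e-10)      # sub-Markov property
```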
\begin{theorem}
\label{theorem2.1.5}
Let \eqref{condition on mu}--\eqref{eq:2.1.4}
be satisfied and let $(L^0 , D(L^0 ))$ be the generator of
$(\mathcal E^0 , D(\mathcal E^0))$ (see \eqref{DefGeneratorDF}). Then
there exists a closed extension $(\overline{L}, D(\overline{L}))$ of
\begin{eqnarray}\label{defLfirst}
Lu := L^0 u + \langle \mathbf{B}, \nabla u\rangle, \quad u\in D(L^0)_{0,b},
\end{eqnarray}
on $L^1 (\mathbb R^d , \mu )$ satisfying the following properties:
\item{(i)} $(\overline{L}, D(\overline{L}))$ generates a sub-Markovian
$C_0$-semigroup of contractions $(\overline{T}_t)_{t> 0}$. In particular
$(\overline{L}, D(\overline{L}))$ is a {\bf maximal extension}\index{operator ! maximal extension}
of
$$
(\frac 12 \sum_{i,j =1}^d a_{ij}\partial_{ij} + \sum_{i=1}^d g_{i}\partial_{i},
C_0^\infty (\mathbb R^d))
$$
(cf. \eqref{eq:2.1.3bis} and \eqref{eq:2.1.3b}).
\item{(ii)} If $(V_n)_{n\ge 1}$ is an increasing sequence of
relatively compact open subsets in $\mathbb R^d$ such that $\mathbb R^d = \bigcup_{n\ge 1} V_n $ then
$$
\overline{G}_\alpha f:=\lim_{n\to\infty}\overline{G}^{V_n}_\alpha f = (\alpha - \overline
{L})^{-1}f
$$
in $L^1 (\mathbb R^d , \mu )$ for all $f\in L^1 (\mathbb R^d , \mu )$ and $\alpha > 0$. In particular,
$(\overline{G}_\alpha)_{\alpha>0}$ is a sub-Markovian $C_0$-resolvent of contractions on $L^1(\mathbb R^d,\mu)$ and has
$(\overline{L}, D(\overline{L}))$ as generator.
\item{(iii)} $D(\overline{L})_b\subset D(\mathcal E^0)$ and
$$
\mathcal E^0 (u,\varv ) - \int_{\mathbb R^d}\langle\mathbf{B}, \nabla u\rangle\varv\, d\mu = - \int_{\mathbb R^d}
\overline{L}u\, \varv\, d\mu \, ,u\in D(\overline{L})_b , \varv\in H^{1,2}_0 (\mathbb R^d , \mu )_{0,b}.
$$
Moreover,
$$
\mathcal E^0 (u,u) \le -\int_{\mathbb R^d}\overline{L}u\, u\, d\mu \, , u\in D(\overline{L})_b.
$$
\end{theorem}
\begin{proof}
The complete proof of Theorem \ref{theorem2.1.5} is given in \cite{WS99}. Let us again only
state its essential steps in the following. \\
Let $(V_n)_{n\ge 1}$ be some increasing sequence of relatively compact open subsets in $\mathbb R^d$ such that $\overline{V}_n\subset V_{n+1}$, $n\ge 1$, and $\mathbb R^d = \bigcup_{n\ge 1}
V_n$. Let $f\in L^1 (\mathbb R^d , \mu )$, $f\ge 0$. Then $\lim_{n\to\infty}\overline
{G}^{V_n}_\alpha f =: \overline{G}_\alpha f$ exists $\mu$-a.e. by Lemma \ref{lemma2.1.6}. Since
$$
\int_{\mathbb R^d}\alpha\overline{G}^{V_n}_\alpha f\, d\mu\le\int_{\mathbb R^d} f1_{V_n}\, d\mu
\le \int_{\mathbb R^d} f\, d\mu ,
$$
the sequence converges in $L^1 (\mathbb R^d , \mu )$, and
\begin{equation}
\label{eq:2.1.12}
\int_{\mathbb R^d}\alpha\overline{G}_\alpha f\, d\mu\le\int_{\mathbb R^d} f\, d\mu,
\end{equation}
in particular $\alpha\overline{G}_\alpha$ is a linear contraction on $L^1 (\mathbb R^d , \mu )$.
Since $\alpha\overline{G}^{V_n}_\alpha$ is sub-Markovian, the limit
$\alpha\overline{G}_\alpha$ is sub-Markovian too. Also the resolvent equation follows
immediately. \\
The strong continuity of $(\overline{G}_\alpha )_{\alpha > 0}$ is verified as follows. Let
$u\in D(L^0)_{0,b}$, hence $u\in D(L^{0,V_n})_b$ for large $n$ and thus
$u = \overline{G}^{V_n}_\alpha (\alpha - L^{V_n})u = \overline{G}^{V_n}_\alpha (\alpha - L)u$.
Hence
\begin{equation}
\label{eq:2.1.15}
u = \overline{G}_\alpha (\alpha - L)u.
\end{equation}
In particular,
$$
\begin{aligned}
\|\alpha\overline{G}_\alpha u - u\|_{L^1 (\mathbb R^d ,\mu )}
& = \|\alpha\overline{G}_\alpha u - \overline{G}_\alpha (\alpha - L)u\|_{L^1 (\mathbb R^d ,\mu )}
= \|\overline{G}_\alpha Lu\|_{L^1 (\mathbb R^d ,\mu )} \\
& \le \frac 1\alpha \|Lu\|_{L^1 (\mathbb R^d ,\mu )}\to 0\, , \alpha\to\infty ,
\end{aligned}
$$
for all $u\in C_0^\infty (\mathbb R^d)$ and the strong continuity then follows by a $3 \varepsilon$-argument. \\
Let $(\overline{L}, D(\overline{L}))$ be the generator of $(\overline
{G}_\alpha )_{\alpha > 0}$. Then $(\overline{L}, D(\overline{L}))$ extends
$(L, D(L^0)_{0,b})$ by \eqref{eq:2.1.15}. By the Hille--Yosida Theorem $(\overline{L},
D(\overline{L}))$ generates a $C_0$-semigroup of contractions $(\overline
{T}_t)_{t> 0}$. Since $\overline{T}_t u = \lim_{\alpha\to\infty}\exp (t
\alpha (\alpha \overline{G}_\alpha - 1))u$ for all $u\in L^1 (\mathbb R^d, \mu )$
(cf. \cite[Chapter 1, Corollary 3.5]{Pa}) we obtain that $(\overline{T}_t)_{t> 0}$ is
sub-Markovian. \\
To see that the construction of $(\overline{L}, D(\overline{L}))$ is actually independent
of the exhausting sequence, let $(W_n)_{n\ge 1}$ be another increasing sequence of
relatively compact open subsets in $\mathbb R^d$ such that $\mathbb R^d = \bigcup_{n\ge 1} W_n$.
Compactness of $\overline{V}_n$
then implies that $V_n\subset W_m$ for some $m$, hence $\overline{G}^{V_n}_\alpha f
\le \overline{G}^{W_m}_\alpha f$ by Lemma \ref{lemma2.1.6}, so $\overline{G}_\alpha f
\le\lim_{n\to\infty}\overline{G}^{W_n}_\alpha f$. Similarly, $\lim_{n\to\infty}
\overline{G}^{W_n}_\alpha f\le\overline{G}_\alpha f$, hence (ii) is satisfied. \\
Finally, the integration by parts (iii) is first verified for $u = \overline{G}_\alpha f$,
$f\in L ^1 (\mathbb R^d , \mu )_b$. For such $u$ one first shows that $\lim_{n\to\infty}
\overline{G}^{V_n}_\alpha f = u$ weakly in $D(\mathcal E^0)$, since by \eqref{eq:2.1.8}
\begin{equation*}
\begin{aligned}
\mathcal E^0 (\overline{G}^{V_n}_\alpha f , \overline{G}^{V_n}_\alpha f)
& = - \int_{V_n} \overline{L}^{V_n} \overline{G}^{V_n}_\alpha f\, \overline{G}^{V_n}_\alpha f \, d\mu \\
& = \int_{V_n} (f1_{V_n} - \alpha \overline{G}^{V_n}_\alpha f )\, \overline{G}^{V_n}_\alpha f \, d\mu \\
& \le \frac {1}{\alpha}\, \|f\|_{L^1 (\mathbb R^d ,\mu)}\|f\|_{L^\infty (\mathbb R^d ,\mu)}.
\end{aligned}
\end{equation*}
We can therefore take the limit in \eqref{eq:2.1.7} to obtain the integration by parts (iii)
in this case. To extend (iii) finally to all $u\in D(\overline{L})_b$, it suffices to consider
the limit $u = \lim_{\alpha\to\infty} \alpha\overline{G}_\alpha u$ weakly in $D(\mathcal{E}^0)$.
\end{proof}
\begin{remark}
\label{remark2.1.7}
{\it (i) Clearly, $(\overline{L}, D(\overline{L}))$ is uniquely determined by
properties (i) and (ii) in Theorem \ref{theorem2.1.5}.
\noindent
(ii) Similarly to $(\overline{L}, D(\overline{L}))$ we can construct a closed
extension $(\overline{L}^{\prime}, D(\overline{L}^{\prime}))$ of
\begin{eqnarray}\label{defL'first}
L^{\prime}u:=L^0 u - \langle\mathbf{B},
\nabla u\rangle, \quad u\in D(L^0)_{0,b},
\end{eqnarray}
that generates a sub-Markovian
$C_0$-semigroup of contractions $(\overline{T}'_t)_{t> 0}$. Since for
all $V$ relatively compact in $\mathbb R^d$ by \eqref{eq:2.1.11}
\begin{equation}
\label{eq:2.1.16}
\int_{\mathbb R^d}\overline{G}^V_\alpha u\, \varv\, d\mu = \int_{\mathbb R^d} u\overline{G}_\alpha
^{V,\prime} \varv\, d\mu\text{ for all }u,\varv\in L^1 (\mathbb R^d, \mu )_b,
\end{equation}
it follows that
\begin{equation}
\label{eq:2.1.17}
\int_{\mathbb R^d}\overline{G}_\alpha u\, \varv\, d\mu = \int_{\mathbb R^d} u\,\overline{G}^{\prime}_\alpha \varv\, d\mu
\text{ for all }u,\varv\in L^1 (\mathbb R^d, \mu )_b,
\end{equation}
where $\overline{G}^{\prime}_\alpha = (\alpha - \overline{L}^{\prime})^{-1}$.
\noindent
(iii) The construction of the maximal extension $\overline{L}$ can be extended to the case
of arbitrary open subsets $W$ in $\mathbb R^d$ (see \cite[Theorem 1.5]{WS99} for further details). }
\end{remark}
\begin{definition}
\label{definition2.1.7}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied.
$(\overline{T}_t)_{t>0}$ (see Theorem \ref{theorem2.1.5} (i)) restricted to
$L^1(\mathbb R^d, \mu)_b$ can be extended to a sub-Markovian $C_0$-semigroup of contractions on
$L^s(\mathbb R^d, \mu)$, $s\in (1, \infty)$ and to a sub-Markovian semigroup on
$L^{\infty}(\mathbb R^d, \mu)$. These semigroups will all be denoted by $(T_t)_{t>0}$ and in order
to simplify notations, $(T_t)_{t>0}$ shall denote the semigroup on $L^s(\mathbb R^d, \mu)$ for any
$s \in [1, \infty]$ from now on, whereas $(\overline{T}_t)_{t>0}$ denotes the semigroup
acting exclusively on $L^1(\mathbb R^d, \mu)$. Likewise, we define $(T'_t)_{t>0}$ acting on all
$L^s(\mathbb R^d, \mu)$, $s \in [1, \infty]$, corresponding to $(\overline{T}'_t)_{t>0}$ as in
Remark \ref{remark2.1.7}(ii) which acts exclusively on $L^1(\mathbb R^d, \mu)$. The resolvents
corresponding to $(T_t)_{t>0}$ are also all denoted by $(G_{\alpha})_{\alpha>0}$, those
corresponding to $(T'_t)_{t>0}$ by $(G'_{\alpha})_{\alpha>0}$.\\
Furthermore, we denote by $(L_s,D(L_s))$, $(L_s^{\prime},D(L_s^{\prime}))$, the generators
corresponding to $(T_t)_{t>0}$, $(T'_t)_{t>0}$ defined on $L^s(\mathbb R^d, \mu)$,
$s \in [1, \infty)$, so that in particular $(L_1,D(L_1))=(\overline{L}, D(\overline{L}))$,
$(L_1^{\prime},D(L_1^{\prime}))=(\overline{L}' , D(\overline{L}'))$. \eqref{eq:2.1.17}
implies that $L_2$ and $L_2^{\prime}$ are adjoint operators on $L^2 (\mathbb R^d , \mu )$,
and that $(T_t)_{t > 0}$ is the adjoint semigroup of $(T'_t)_{t> 0}$ when considered on
$L^2 (\mathbb R^d , \mu )$.
\end{definition}
\begin{lemma} \label{lemma2.1.4}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied and $(\overline{L}, D(\overline{L}))$ be as in Theorem \ref{theorem2.1.5}.
The space $D(\overline{L})_b$ is an algebra, i.e., $u,\varv\in D(\overline{L})_b$ implies
$u\varv\in D(\overline{L})_b$. Moreover,
\begin{equation}
\label{eq:2.1.17a}
\overline{L} (u\varv) = \varv \overline{L} u + u \overline{L} \varv + \langle A\nabla u,
\nabla \varv\rangle.
\end{equation}
\end{lemma}
\begin{proof}
It suffices to prove that $u\in D(\overline{L})_b$ implies $u^2\in D(\overline{L})_b$ and
$\overline{L} (u ^2 ) = g := 2u\overline{L}u + \langle A\nabla u, \nabla u\rangle$. To this end it is sufficient to show that
\begin{equation}
\label{eq:2.1.18}
\begin{aligned}
\int_{\mathbb R^d}\overline{L}'\varv\, u^2\, d\mu & = \int_{\mathbb R^d} g\varv\, d\mu \text{ for all }\varv =
\overline{G}'_1h\, , h\in L^1 (\mathbb R^d , \mu )_b,
\end{aligned}
\end{equation}
since then $\int_{\mathbb R^d}\overline{G}_1 (u^2 - g)h\, d\mu = \int_{\mathbb R^d} (u^2 - g)\overline
{G}_1 ' h\, d\mu = \int_{\mathbb R^d} u^2 (\overline{G}_1 ' h - \overline{L}' \overline
{G}_1 ' h) \, d\mu = \int_{\mathbb R^d} u^2 \, h\, d\mu$ for all $h\in L^1 (\mathbb R^d , \mu )_b$.
Consequently, $u^2 = \overline{G}_1 (u^2 - g)\in D(\overline{L})_b$.
\noindent
For the proof of \eqref{eq:2.1.18} fix $\varv = \overline{G}'_1h$, $h\in L^1 (\mathbb R^d, \mu )_b$,
and suppose first that $u=\overline{G}_1f$ for some $f\in L^1 (\mathbb R^d , \mu )_b$.
Let $u_n := \overline{G}_1^{V_n}f$ and $\varv_n = \overline{G}_1^{V_n ,\prime}h$,
where $(V_n)_{n\ge 1}$ is as in Theorem \ref{theorem2.1.5}(ii). Then by Proposition
\ref{prop:2.1} and Theorem \ref{theorem2.1.5}
$$
\begin{aligned}
& \int_{V_n}\overline{L}^{V_n ,\prime}\varv_n\, uu_n \, d\mu = -\mathcal E^0 (\varv_n , uu_n)
- \int_{V_n}\langle\mathbf{B},\nabla \varv_n\rangle uu_n\, d\mu \\
& = -\mathcal E^0 (\varv_n u_n , u) - \frac 12\int_{\mathbb R^d}\langle A\nabla \varv_n , \nabla u_n\rangle u\,
d\mu + \frac 12\int_{\mathbb R^d}\langle A\nabla u_n, \nabla u\rangle \varv_n\, d\mu \\
& \qquad + \int_{\mathbb R^d}\langle\mathbf{B}, \nabla u\rangle \varv_n u_n\, d\mu + \int_{\mathbb R^d}\langle
\mathbf{B}, \nabla u_n \rangle \varv_nu\, d\mu \\
& = \int_{\mathbb R^d}\overline{L}u\, \varv_n u_n\, d\mu + \int_{V_n}\overline{L}^{V_n}u_n\, \varv_n
u\, d\mu + \frac 12\int_{\mathbb R^d}\langle A\nabla u_n, \nabla (\varv_n u)\rangle\, d\mu \\
& \qquad - \frac 12 \int_{\mathbb R^d}\langle A\nabla \varv_n , \nabla u_n\rangle u\, d\mu
+ \frac 12 \int_{\mathbb R^d}\langle A\nabla u_n, \nabla u\rangle \varv_n\, d\mu \\
& = \int_{\mathbb R^d}\overline{L}u\, \varv_n u_n\, d\mu + \int_{V_n}\overline{L}^{V_n}u_n\, \varv_n u\, d\mu
+ \int_{\mathbb R^d}\langle A\nabla u_n, \nabla u\rangle \varv_n\, d\mu.
\end{aligned}
$$
Note that $\lim_{n\to\infty}\int_{\mathbb R^d}\langle A\nabla u_n , \nabla u\rangle \varv_n \,
d\mu = \int_{\mathbb R^d}\langle A\nabla u , \nabla u \rangle \varv\, d\mu $ since
$\lim_{n\to\infty}u_n = u$ weakly in $D(\mathcal E^0)$ and $\lim_{n\to\infty}
\langle A\nabla u , \nabla u\rangle \varv_n^2 = \langle A\nabla u , \nabla u
\rangle \varv^2$ (strongly) in $L^1 (\mathbb R^d , \mu )$. Hence
$$
\begin{aligned}
\int_{\mathbb R^d}\overline{L}'\varv\, u^2\, d\mu
& = \lim_{n\to\infty}\int_{V_n} \overline{L}^{V_n ,\prime}\varv_n\, uu_n \, d\mu \\
& = \lim_{n\to\infty}\int_{\mathbb R^d} \overline{L}u\, \varv_nu_n\, d\mu
+ \int_{V_n}\overline{L}^{V_n}u_n\, \varv_n u\, d\mu + \int_{\mathbb R^d}\langle A\nabla u_n,
\nabla u\rangle \varv_n\, d\mu \\
& = \int_{\mathbb R^d} g\varv\, d\mu.
\end{aligned}
$$
Finally, if $u\in D(\overline{L})_b$ is arbitrary, let $g_\alpha := 2(\alpha
\overline{G}_\alpha u)\overline{L}(\alpha \overline{G}_\alpha u) + \langle
A\nabla \alpha\overline{G}_\alpha u,\nabla \alpha\overline{G}_\alpha u
\rangle$, $\alpha > 0$. Note that by Theorem \ref{theorem2.1.5}(iii)
$$
\begin{aligned}
\mathcal E^0 (\alpha\overline{G}_\alpha u - u , \alpha\overline{G}_\alpha u - u)
& \le -\int_{\mathbb R^d}\overline{L}(\alpha\overline{G}_\alpha u - u )
(\alpha\overline{G}_\alpha u - u) \, d\mu \\
& \le 2\|u\|_{L^\infty (\mathbb R^d , \mu )} \|\alpha\overline{G}_\alpha\overline{L}u
- \overline{L}u \|_{L^1 (\mathbb R^d , \mu )} \to 0
\end{aligned}
$$
if $\alpha\to\infty$, which implies that $\lim_{\alpha\to\infty}\alpha
\overline{G}_\alpha u = u$ in $D(\mathcal E^0)$ and thus $\lim_{\alpha\to\infty}
g_\alpha = g$ in $L^1 (\mathbb R^d , \mu )$. Since $u + (1-\alpha)\overline
{G}_\alpha u\in L^1 (\mathbb R^d , \mu )_b$ and $\overline{G}_1 (u + (1-\alpha )
\overline{G}_\alpha u) = \overline{G}_\alpha u$ by the resolvent
equation it follows from what we have just proved that
$$
\int_{\mathbb R^d}\overline{L}'\varv (\alpha\overline{G}_\alpha u)^2\, d\mu
= \int_{\mathbb R^d} g_\alpha \varv\, d\mu
$$
for all $\alpha > 0$ and thus, taking the limit $\alpha\to\infty$,
$$
\int_{\mathbb R^d}\overline{L}'\varv\, u^2\, d\mu = \int_{\mathbb R^d} g\varv\, d\mu
$$
and \eqref{eq:2.1.18} is shown.
\end{proof}
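For $L = \frac 12\, \frac{d^2}{dx^2}$ (so $A = \mathrm{Id}$) and smooth $u$, the identity \eqref{eq:2.1.17a} with $\varv = u$ reduces to $\frac 12 (u^2)'' = 2u\cdot\frac 12 u'' + (u')^2$; a quick finite-difference check (illustrative grid parameters):

```python
import numpy as np

# Second-order central differences on a fine grid in (0, 1); u = sin(pi x).
n = 2001
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(np.pi * x)

def L(f):
    """(1/2) d^2/dx^2 by central differences, evaluated at interior nodes."""
    return 0.5 * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2

du = (u[2:] - u[:-2]) / (2.0 * h)     # central first difference (interior)
lhs = L(u * u)
rhs = 2.0 * u[1:-1] * L(u) + du**2    # v L u + u L v + <A grad u, grad v>, v = u
assert np.max(np.abs(lhs - rhs)) < 1e-4
```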
\subsubsection{Uniqueness of maximal extensions on $\mathbb R^d$}
\label{subsec:2.1.4}
Having established the existence of maximal extensions $(\overline{L}, D(\overline{L}))$
of $(L, D(L^0)_{0,b})$ in $L^1 (\mathbb R^d, \mu )$, where $L^0$ denotes the generator
of the symmetric Dirichlet form $\mathcal E^0$ (see \eqref{DefGeneratorDF}) and
$D(L^0)_{0,b}$ the subspace of compactly supported bounded functions in $D(L^0 )$,
we now discuss the uniqueness of $\overline{L}$ and the connections of the uniqueness
problem with global properties of the associated semigroup $(\overline{T}_t)_{t> 0}$. \\[3pt]
The uniqueness of maximal extensions of $L$ is linked to the domain $D$ on which we
consider the operator $L$. It is clear that there can be only one maximal extension
of $(L, D)$ if $D\subset D(\overline{L})$ is dense w.r.t. the graph norm of
$\overline{L}$, but in general such dense subsets are quite difficult to characterize.
We will consider this problem in the following exposition for two natural choices: the
domain $D(L^0)_{0,b}$ and the domain $C_0^\infty (\mathbb R^d )$ of compactly
supported smooth functions. \\[3pt]
Let us first introduce two useful notations:
\begin{definition}\label{def:2.1.1}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied. Let $(\overline{T}_t)_{t> 0}$, $(\overline{T}'_t)_{t>0}$ and $(T_t')_{t>0}$ be as in Theorem \ref{theorem2.1.5}, Remark \ref{remark2.1.7}(ii) and Definition \ref{definition2.1.7}.\\[3pt]
(i) Let $r\in [1,\infty )$ and $(A, D)$ be a
densely defined operator on $L^r (\mathbb R^d , \mu )$. We say that $(A ,D)$ is $L^r(\mathbb R^d, \mu)$-{\bf unique} (hereafter written for convenience as \textbf{$L^r$-unique})\index{uniqueness ! $L^r$-unique}, if there is only one extension of $(A , D)$ on
$L^r(\mathbb R^d , \mu )$ that generates a $C_0$-semigroup. It follows from \cite[Theorem A-II, 1.33]{Na86} that if $(A, D)$ is $L^r$-unique and $(\overline A, \overline D)$
its unique extension generating a $C_0$-semigroup, then $D\subset\overline D$
dense w.r.t. the graph norm. Equivalently, $(A, D)$ is $L^r$-unique, if and only if the
range condition\index{operator ! range condition} $(\alpha -A)(D)\subset L^r (\mathbb R^d , \mu )$ dense holds for some $\alpha > 0$. \\[3pt]
(ii) Let $(S_t)_{t> 0}$ be a sub-Markovian $C_0$-semigroup
on $L^1 (\mathbb R^d , \nu )$. We say that $\nu$ is \textbf{$(S_t)_{t> 0}$-invariant}
\index{measure ! invariant}
(resp. \textbf{$\nu$ is $(S_t)_{t> 0}$-sub-invariant})\index{measure ! sub-invariant}, if $\int_{\mathbb R^d} S_t f\, d\nu
= \int_{\mathbb R^d} f\, d\nu$ (resp. $\int_{\mathbb R^d} S_t f\, d\nu \le \int_{\mathbb R^d} f\, d\nu$) for all
$f\in L^1 (\mathbb R^d , \nu )_b$ with $f\ge 0$ and $t>0$.\\[3pt]
In particular, $\mu$ is always $(\overline{T}_t)_{t> 0}$-sub-invariant, since for $f\in L^1 (\mathbb R^d , \mu )_b$, $f\ge 0$ and $t>0$, we have by the sub-Markov property $\int_{\mathbb R^d} \overline{T}_t f\, d\mu
= \int_{\mathbb R^d} fT_t' 1_{\mathbb R^d}\, d\mu\le \int_{\mathbb R^d} f\, d\mu$. Likewise, $\mu$ is always $(\overline{T}'_t)_{t>0}$-sub-invariant.
\end{definition}
\paragraph{Uniqueness of $(L, D(L^0)_{0,b})$}
\begin{proposition}
\label{prop2.1.9}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied. Let $(L^0 , D(L^0 ))$ be the generator of $(\mathcal E^0 , D(\mathcal E^0))$ (see \eqref{DefGeneratorDF}) and recall that as in Theorem \ref{theorem2.1.5}
$$
Lu := L^0 u + \langle \mathbf{B}, \nabla u\rangle, \quad u\in D(L^0)_{0,b}.
$$
The following statements are equivalent:
\item{(i)} $(L, D(L^0)_{0,b})$ is $L^1$-unique.
\item{(ii)} $\mu$ is $(\overline{T}_t)_{t>0}$-invariant.
\item{(iii)} There exist $\chi_n\in H^{1,2}_{loc}(\mathbb R^d , \mu )$ and
$\alpha > 0$ such that
$(\chi_n - 1)^-\in H^{1,2}_0(\mathbb R^d , \mu )_{0,b}$, $\lim_{n\to\infty}
\chi_n = 0$ $\mu$-a.e. and
\begin{equation}
\label{eq:2.1.20}
\mathcal E^0_\alpha (\chi_n , \varv ) + \int_{\mathbb R^d}\langle \mathbf{B} , \nabla\chi_n\rangle\varv \, d\mu
\ge 0
\text{ for all }\varv\in H_0^{1,2} (\mathbb R^d , \mu )_{0,b}, \varv\ge 0.
\end{equation}
\end{proposition}
\begin{proof}
$(i)\Rightarrow(ii)$: Since $\int_{\mathbb R^d}\overline{L}u\, d\mu = 0$ for all $u\in
D(L^0)_{0,b}$ we obtain that $\int_{\mathbb R^d}\overline{L}u\, d\mu = 0$ for all $u\in
D(\overline{L})$ and thus
$$
\int_{\mathbb R^d}\overline{T}_tu\, d\mu = \int_{\mathbb R^d} u\, d\mu + \int_0^t\int_{\mathbb R^d}\overline{L}\,
\overline{T}_su\, d\mu\, ds = \int_{\mathbb R^d} u\, d\mu
$$
for all $u\in D(\overline{L})$. Since $D(\overline{L})\subset
L^1 (\mathbb R^d , \mu )$ dense we obtain that $\mu$ is
$(\overline{T}_t)_{t>0}$-invariant.
\noindent
$(ii)\Rightarrow (iii)$:
As a candidate for a sequence $\chi_n$, $n \geq 1$, of functions satisfying the conditions in
(iii) consider
$$
\chi_n := 1-\overline{G}_1^{V_n ,\prime}(1_{V_n}) \text{ for } V_n = B_n.
$$
Clearly, $\chi_n\in H^{1,2}_{loc}(\mathbb R^d ,\mu )$ and $(\chi_n - 1)^- \in H_0^{1,2} (\mathbb R^d ,\mu )_{0,b}$.
Moreover, $(\chi_n)_{n\ge 1}$ is decreasing by Lemma \ref{lemma2.1.6} and therefore
$\chi_\infty := \lim_{n\to\infty}\chi_n$ exists $\mu$-a.e. To see that $\chi_\infty = 0$ note that
\eqref{eq:2.1.16} implies for $g\in L^1 (\mathbb R^d ,\mu )_b$ that
$$
\begin{aligned}
\int_{\mathbb R^d} g\chi_\infty\, d\mu & = \lim_{n\to\infty}\int_{\mathbb R^d} g\chi_n\, d\mu =
\lim_{n\to\infty} \Big (\int_{\mathbb R^d} g\, d\mu - \int_{\mathbb R^d} g \overline{G}_1^{V_n ,\prime}
(1_{V_n})\, d\mu \Big ) \\
& = \lim_{n\to\infty}\Big (\int_{\mathbb R^d} g\, d\mu - \int_{\mathbb R^d}\overline{G}_1^{V_n}g \,
1_{V_n}\, d\mu \Big ) \\
& = \int_{\mathbb R^d} g\, d\mu - \int_{\mathbb R^d} \overline{G}_1g \, d\mu = 0,
\end{aligned}
$$
since $\mu$ is $(\overline{T}_t)_{t>0}$-invariant, hence
$$
\int_{\mathbb R^d}\overline{G}_1 g\, d\mu = \int_0^\infty \int_{\mathbb R^d} e^{-t}\overline{T}_t g\, d\mu dt
= \int_{\mathbb R^d} g\, d\mu .
$$
It remains to show that $\chi_n$ satisfies \eqref{eq:2.1.20}. To this end we have to consider
the approximation $w_\beta := \beta \overline{G}'_{\beta +1}\overline{G}_1^{V_n ,\prime}(1_{V_n} )$,
$\beta > 0$. Since $w_\beta\ge\beta
\overline{G}_{\beta + 1}^{V_n ,\prime}\overline{G}_1^{V_n ,\prime}(1_{V_n})$
and $\beta\overline{G}_{\beta + 1}^{V_n ,\prime}\overline{G}_1^{V_n ,\prime}
(1_{V_n}) = \overline{G}_1^{V_n ,\prime}(1_{V_n}) - \overline{G}_{\beta + 1}
^{V_n ,\prime}(1_{V_n})\ge \overline{G}_1^{V_n ,\prime}(1_{V_n})
- 1/(\beta + 1)$ by the resolvent equation, it follows that
\begin{equation}
\label{eq:2.1.23}
w_\beta\ge\overline{G}_1^{V_n ,\prime}(1_{V_n}) - 1/(\beta + 1),
\beta > 0.
\end{equation}
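\noindent
Here the bound $\overline{G}_{\beta + 1}^{V_n ,\prime}(1_{V_n})\le 1/(\beta + 1)$ used above follows from the sub-Markov property of the resolvent:
$$
(\beta + 1)\, \overline{G}_{\beta + 1}^{V_n ,\prime}(1_{V_n})
\le (\beta + 1)\, \overline{G}_{\beta + 1}^{V_n ,\prime}(1_{\mathbb R^d}) \le 1 .
$$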
Note that by Theorem \ref{theorem2.1.5}
$$
\begin{aligned}
&\mathcal E^0_1 (w_\beta , w_\beta ) \\
& \le\beta (\overline{G}_1^{V_n ,\prime}
(1_{V_n}) - w_\beta , w_\beta )_{L^2 (\mathbb R^d , \mu )} \\
& \le\beta (\overline{G}_1^{V_n ,\prime}(1_{V_n}) - w_\beta , \overline
{G}_1^{V_n ,\prime}(1_{V_n}))_{L^2 (\mathbb R^d , \mu )} \\
& = \mathcal E^0_1 (w_\beta , \overline{G}_1^{V_n ,\prime}(1_{V_n})) +
\int_{V_n}\langle\mathbf{B}, \nabla w_\beta\rangle \overline{G}_1^{V_n ,\prime}
(1_{V_n})\, d\mu \\
& \le \mathcal E^0_1 (w_\beta , w_\beta )^{\frac 12}(\mathcal E^0_1 (\overline
{G}_1^{V_n ,\prime}(1_{V_n}),\overline{G}_1^{V_n ,\prime}(1_{V_n}))
^{\frac 12} + \sqrt{2\nu_{V_n}}\| \mathbf{B} 1_{V_n}\|_{L^2 (\mathbb R^d, \mathbb R^d, \mu )}).
\end{aligned}
$$
Consequently, $\lim_{\beta\to\infty} w_\beta = \overline{G}_1^{V_n ,\prime}
(1_{V_n})$ weakly in $D(\mathcal E^0)$. Now \eqref{eq:2.1.23} implies for
$u\in H^{1,2}_0 (\mathbb R^d , \mu )_{0,b}$, $u\ge 0$,
$$
\begin{aligned}
\mathcal E^0_1 (\chi_n , u) & + \int_{\mathbb R^d}\langle\mathbf{B}, \nabla\chi_n\rangle u \, d\mu
= \lim_{\beta\to\infty}\Big (\int_{\mathbb R^d} u\, d\mu - \mathcal E^0_1(w_\beta , u)
- \int_{\mathbb R^d}\langle\mathbf{B}, \nabla w_\beta\rangle u\, d\mu\Big ) \\
& = \lim_{\beta\to\infty} \Big (\int_{\mathbb R^d} u\, d\mu
- \beta\int_{\mathbb R^d} (\overline{G}_1^{V_n ,\prime}(1_{V_n}) - w_\beta ) u\, d\mu\Big ) \ge 0.
\end{aligned}
$$
$(iii)\Rightarrow (i)$: It is sufficient to show that if
$h\in L^\infty (\mathbb R^d , \mu )$ satisfies $\int_{\mathbb R^d} (\alpha - L)u\, h\, d\mu = 0$
for all $u\in D(L^0)_{0,b}$, then $h=0$. To this end let
$\chi\in C_0^\infty (\mathbb R^d )$. If $u\in D(L^0)_b$ it is easy to see that
$\chi u\in D(L^0)_{0,b}$ and $L^0 (\chi u) = \chi L^0 u + \langle A\nabla\chi ,
\nabla u\rangle + u L^0\chi$. Hence
\begin{equation}
\label{eq:2.1.21}
\begin{aligned}
\int_{\mathbb R^d} (\alpha -L^0)u (\chi h)\, d\mu
& = \int_{\mathbb R^d} (\alpha -L^0)(u\chi ) h\, d\mu
+ \int_{\mathbb R^d}\langle A \nabla u, \nabla\chi\rangle h\, d\mu \\
& \qquad + \int_{\mathbb R^d} u L^0\chi\, h\,d\mu \\
& = \int_{\mathbb R^d}\langle\mathbf{B} , \nabla (u\chi )\rangle h\,d\mu
+ \int_{\mathbb R^d}\langle A \nabla u, \nabla \chi\rangle h\, d\mu
+ \int_{\mathbb R^d} u L^0\chi\, h\,d\mu.
\end{aligned}
\end{equation}
Since $\|\mathbf{B}\|\in L^2_{loc} (\mathbb R^d , \mu )$ we obtain that $u\mapsto \int_{\mathbb R^d}
(\alpha -L^0)u (\chi h)\, d\mu $, $u\in D(L^0)_b$, is continuous w.r.t. the
norm on $D(\mathcal E^0)$. Hence there exists some element $\varv\in D(\mathcal E^0)$ such that
$\mathcal E_\alpha ^0 (u , \varv ) = \int_{\mathbb R^d} (\alpha -L^0)u (\chi h)\, d\mu $. Consequently,
$\int_{\mathbb R^d} (\alpha -L^0) u (\varv - \chi h)\, d\mu = 0$ for all $u\in D(L^0)_b$, which
now implies that $\varv = \chi h$. In particular, $\chi h\in D(\mathcal E^0)$ and
\eqref{eq:2.1.21} yields
\begin{equation}
\label{eq:2.1.22}
\begin{aligned}
\mathcal E^0_\alpha (u, \chi h)
& = \int_{\mathbb R^d} \langle\mathbf{B} , \nabla (\chi u)\rangle h\, d\mu
+ \int_{\mathbb R^d}\langle A \nabla u, \nabla \chi\rangle h\, d\mu \\
& \qquad + \int_{\mathbb R^d} L^0\chi\, u h\,d\mu
\end{aligned}
\end{equation}
for all $u\in D(L^0)_b$ and subsequently for all $u\in D(\mathcal E^0)$. From \eqref{eq:2.1.22}
it follows that
$$
\mathcal E^0_\alpha (u,h) - \int_{\mathbb R^d}\langle\mathbf{B} ,\nabla u\rangle h\, d\mu = 0
\text{ for all } u\in H_0^{1,2} (\mathbb R^d , \mu )_0.
$$
\noindent
Let $\varv_n := \|h\|_{L^\infty (\mathbb R^d , \mu )} \chi_n - h$. Then $\varv_n^-\in H_0^{1,2}
(\mathbb R^d , \mu )_{0,b}$ and
$$
0\le\mathcal E^0_\alpha (\varv_n , \varv_n^- ) - \int_{\mathbb R^d}\langle\mathbf{B} , \nabla \varv_n^-\rangle \varv_n\,
d\mu \le -\alpha \int_{\mathbb R^d} (\varv_n^-)^2\, d\mu,
$$
since $\int_{\mathbb R^d}\langle\mathbf{B} , \nabla\varv_n^-\rangle\varv_n\, d\mu
= \int_{\mathbb R^d}\langle\mathbf{B} , \nabla\varv_n^-\rangle\varv_n^-\, d\mu = 0$ and
$\mathcal{E}^0(\varv_n^+,\varv_n^-)\leq 0$.
Thus $\varv_n^- = 0$, i.e., $h\le\|h\|_{L^\infty (\mathbb R^d , \mu )}\chi_n$. Similarly,
$-h\le\|h\|_{L^\infty (\mathbb R^d , \mu )}\chi_n$,
hence $|h|\le\|h\|_{L^\infty (\mathbb R^d , \mu )}\chi_n$. Since $\lim_{n\to\infty}\chi_n = 0$ $\mu$-a.e.,
it follows that $h=0$.
\end{proof}
\begin{remark}
\label{rem:2.1.10}
{\it
The proof of $(ii)\Rightarrow (iii)$ in Proposition \ref{prop2.1.9} shows that
if $\mu$ is $(\overline{T}_t)_{t>0}$-invariant then there exists for {\it all}
$\alpha > 0$ a sequence $(\chi_n)_{n\ge 1}\subset H^{1,2}_{loc}(\mathbb R^d ,
\mu )$ such that $(\chi_n - 1)^-\in H^{1,2}_0(\mathbb R^d , \mu )_{0,b}$,
$\lim_{n\to\infty}\chi_n = 0$ $\mu$-a.e. and
$$
\mathcal E^0_\alpha (\chi_n , \varv ) + \int_{\mathbb R^d}\langle\mathbf{B} , \nabla\chi_n\rangle
\varv\, d\mu \ge 0
\text{ for all }\varv\in H_0^{1,2} (\mathbb R^d , \mu )_{0,b}, \varv\ge 0.
$$
Indeed, it suffices to take $\chi_n := 1- \alpha\overline{G}_\alpha
^{V_n ,\prime}(1_{V_n})$, $n\ge 1$. }
\end{remark}
\noindent
Let us state sufficient conditions on $\mu$, $A$ and $\mathbf{G}$ that imply
$(\overline{T}_t)_{t>0}$-invariance of $\mu$, and discuss the interrelation of invariance
with the notion of conservativeness, which we define next.
\begin{definition}
\label{def:3.2.1}
$(T_t)_{t>0}$ as defined in Definition \ref{definition2.1.7} is called {\bf conservative}\index{semigroup ! conservative} if
$$
T_t 1_{\mathbb R^d} = 1, \; \; \text{$\mu$-a.e. \quad for one (and hence all) $t>0$.}
$$
\end{definition}
\noindent
We can then state
the following relations:
\begin{remark}
\label{rem:2.1.10a}
{\it Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied.\\
(i) The measure $\mu$ is $(\overline{T}_t)_{t> 0}$-invariant\index{measure ! invariant} (cf. Definition \ref{def:2.1.1}(ii)), if and only if the dual semigroup
$(T'_t)_{t> 0}$ of $(\overline{T}_t)_{t>0}$, acting on $L^\infty (\mathbb R^d ,\mu )$,
is conservative\index{semigroup ! conservative}. Indeed, if $\mu$ is $(\overline{T}_t)_{t>0}$-invariant, then
for any $f \in C_0^{\infty}(\mathbb R^d)$, $t>0$,
$$
\int_{\mathbb R^d} f d\mu =
\int_{\mathbb R^d} \overline{T}_t f d\mu
=\lim_{n \rightarrow \infty}\int_{\mathbb R^d} f T'_t 1_{B_n} d\mu
= \int_{\mathbb R^d} f T'_t 1_{\mathbb R^d} d\mu,
$$
hence $T'_t 1_{\mathbb R^d}=1$, $\mu$-a.e. The converse follows similarly. Likewise, $\mu$ is $(\overline{T}'_t)_{t> 0}$-invariant, if and only if $(T_t)_{t> 0}$ is conservative.
Since in the {\it symmetric} case (i.e., $\mathbf{G} = \beta^{\rho, A}$)
$\overline{T}'_{t}|_{L^1 (\mathbb R^d , \mu )_b}$ coincides with $\overline{T}_{t}|_{L^1 (\mathbb R^d , \mu )_b}$ we obtain that both notions
coincide in this particular case. Conservativeness in the symmetric case
has been well-studied by many authors. We refer to \cite{Da85}, \cite{FOT}, \cite[Section 1.6]{Sturm94} and references therein. \\
(ii) Suppose that $\mu$ is {\it finite}. Then $\mu$ is $(\overline{T}_t)_{t> 0}$-invariant,
if and only if $\mu$ is $(\overline{T}'_t)_{t> 0}$-invariant. Indeed, let $\mu$ be
$(\overline{T}_t)_{t> 0}$-invariant.
Then, since $1_{\mathbb R^d}\in L^1(\mathbb R^d,\mu)$, we obtain by the $(\overline{T}_t)_{t> 0}$-invariance
$\int_{\mathbb R^d} |1-T_t 1_{\mathbb R^d}|\, d\mu = \int_{\mathbb R^d} (1-\overline{T}_t 1_{\mathbb R^d})\, d\mu= 0$ for all $t>0$, i.e.,
$T_t 1_{\mathbb R^d} = 1$ $\mu$-a.e. for all $t>0$,
which implies that $\mu$ is $(\overline{T}'_t)_{t> 0}$-invariant by (i).
The converse is shown similarly. }
\end{remark}
For $u\in C^2(\mathbb R^d)$, we define (cf. \eqref{eq:2.1.3bis}, \eqref{eq:2.1.5}, \eqref{eq:2.1.5a}, and \eqref{eq:2.1.3})
\begin{eqnarray} \label{eq:2.1.3bis2'}
L^{\prime}u:= L^A u + \langle \beta^{\rho, A} - \mathbf{B},
\nabla u\rangle = L^A u + \langle 2\beta^{\rho, A} - \mathbf{G},\nabla u\rangle.
\end{eqnarray}
\begin{remark}\label{cnulltwocoincideprime}
{\it Similarly to Remark \ref{cnulltwocoincide}, we have
\begin{equation}
\label{eq:2.1.3forprime}
L^A + \langle \beta^{\rho, A} - \mathbf{B},
\nabla \rangle = L^0 - \langle \mathbf{B}, \nabla \rangle \quad \text{ on }\ C^{2}_0(\mathbb R^d).
\end{equation}
Therefore the definitions \eqref{defL'first} and \eqref {eq:2.1.3bis2'} for $L'$ coincide on $D(L^0)_{0,b}\cap C^{2}(\mathbb R^d)=C^{2}_0(\mathbb R^d)$ and are therefore consistent.}
\end{remark}
\begin{proposition}
\label{prop:2.1.10}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied.
Each of the following conditions (i), (ii) and (iii) imply that $\mu$ is $(\overline{T}_t)_{t>0}$-invariant \index{measure ! invariant ! sufficient condition} (cf. Definition \ref{def:2.1.1}(ii)), or equivalently, by Remark \ref{rem:2.1.10a}(i), that $(T'_t)_{t>0}$ is conservative\index{semigroup ! conservative ! sufficient condition}:
\item{(i)} $a_{ij}, g_i - \beta^{\rho, A}_i \in L^1 (\mathbb R^d , \mu)$, $1\le i,j\le d$.
\item{(ii)} There exist $u\in C^2 (\mathbb R^d )$ and $\alpha > 0$ such that
$\lim_{\|x\|\to\infty} u(x) = \infty$ and $L'u = L^A u + \langle \beta^{\rho, A} - \mathbf{B},
\nabla u\rangle\le\alpha u$ $\mu$-a.e. \index{Lyapunov condition}
\item{(iii)} There exists $M\ge 0$, such that $-\langle A(x)x,x\rangle/ (\|x\|^2 + 1) + \frac 12 {\rm trace}(A(x))
+ \langle (\beta^{\rho, A} - \mathbf{B})(x), x\rangle\le M(\|x\|^2 +1)\big ( \ln (\|x\|^2 + 1) + 1\big )$
for $\mu$-a.e. $x\in \mathbb R^d$.
\end{proposition}
\begin{proof}
(i) By Proposition \ref{prop2.1.9} it is sufficient to show that $(L, D(L^0)_{0,b})$
is $L^1$-unique. But if $h\in L^\infty (\mathbb R^d , \mu )$ is such that
$\int_{\mathbb R^d} (1-L)u\, h\, d\mu = 0$ for all $u\in D(L^0)_{0,b}$ we have seen in
the proof of the implication $(iii)\Rightarrow (i)$ in Proposition \ref{prop2.1.9}
that $h\in H^{1,2}_{loc}(\mathbb R^d , \mu )$ and
\begin{equation}
\label{eq:2.1.24}
\mathcal E^0_1 (u, h ) - \int_{\mathbb R^d}\langle \mathbf{B} , \nabla u\rangle h\,d\mu = 0
\text{ for all } u\in H^{1,2}_0(\mathbb R^d , \mu )_0.
\end{equation}
Let $\psi_n\in C_0^\infty (\mathbb R^d )$ be such that $1_{B_n}\le
\psi_n\le 1_{B_{2n}}$ and $\|\nabla\psi_n\|_{L^\infty (\mathbb R^d , \mathbb R^d, \mu )}\le c/n$ for some $c>0$. Then
\eqref{eq:2.1.24} implies that
$$
\begin{aligned}
\int_{\mathbb R^d} & \psi_n^2 h^2 \, d\mu + \mathcal E^0 (\psi_n h , \psi_n h ) =
\mathcal E^0_1 (\psi_n^2 h , h ) + \frac12 \int_{\mathbb R^d}\langle A\nabla\psi_n ,\nabla\psi_n
\rangle h^2\, d\mu \\
& \qquad - \int_{\mathbb R^d}\langle\mathbf{B}, \nabla (\psi_n^2 h)\rangle h\, d\mu +
\int_{\mathbb R^d}\langle\mathbf{B} , \nabla\psi_n\rangle \psi_n h^2\, d\mu \\
& \le \frac{c^2}{2n^2}\|h\|_{L^\infty (\mathbb R^d , \mu )}^2 \Big (\sum_{i,j=1}^d\int_{\mathbb R^d} |a_{ij}| \, d\mu \Big )
+\frac{c}{n} \|h\|_{L^\infty (\mathbb R^d , \mu )}^2 \Big (\sum_{i=1}^d\int_{\mathbb R^d} |g_i - \beta^{\rho, A}_i|\, d\mu\Big )
\end{aligned}
$$
and thus $\int_{\mathbb R^d} h^2\, d\mu = \lim_{n\to\infty}\int_{\mathbb R^d}\psi_n^2 h^2\, d\mu = 0$.
\noindent
(ii) Let $\chi_n := \frac {u}{n}$. Then $\chi_n\in H^{1,2}_{loc}(\mathbb R^d ,
\mu )$, $(\chi_n - 1)^-$ is bounded and has compact support, $\lim_{n\to
\infty}\chi_n = 0$ and
$$
\begin{aligned}
\mathcal E^0_\alpha (\chi_n , \varv ) & + \int_{\mathbb R^d}\langle\mathbf{B}, \nabla\chi_n\rangle \varv\, d\mu \\
& = \frac 1n \int_{\mathbb R^d} (\alpha u - L^A u - \langle \beta^{\rho, A} -\mathbf{B}, \nabla u\rangle )
\varv \, d\mu \ge 0
\end{aligned}
$$
for all $\varv\in H_0^{1,2} (\mathbb R^d , \mu )_0$, $\varv\ge 0$.
By Proposition \ref{prop2.1.9} $\mu$ is $(\overline{T}_t)_{t>0}$-invariant.
\noindent
Finally, (iii) implies (ii) since we can take $u(x) = \ln (\|x\|^2 + 1) + r$ for
$r$ sufficiently large.
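\noindent
Indeed, assuming as before that $L^A u = \frac 12 \sum_{i,j=1}^d a_{ij}\partial_i\partial_j u$, a direct computation with $u(x) = \ln (\|x\|^2 + 1) + r$, $r\ge 1$, gives $\nabla u(x) = 2x/(\|x\|^2 + 1)$ and hence
$$
L'u(x) = \frac{2}{\|x\|^2 + 1}\Big ( \frac 12 {\rm trace}(A(x)) - \frac{\langle A(x)x,x\rangle}{\|x\|^2 + 1}
+ \langle (\beta^{\rho, A} - \mathbf{B})(x), x\rangle \Big )
\le 2M \big (\ln (\|x\|^2 + 1) + 1\big ) \le 2M\, u(x)
$$
for $\mu$-a.e. $x\in\mathbb R^d$, so that (ii) holds with $\alpha = 2M$ (resp. with any $\alpha > 0$ if $M = 0$).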
\end{proof}
\noindent
As a direct consequence of Proposition \ref{prop:2.1.10} and Remark \ref{rem:2.1.10a}(i), we obtain the following result.
\begin{corollary}
\label{cor:2.1.4.1}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied.
Each of the following conditions (i), (ii) and (iii) imply that $(T_t)_{t>0}$ is conservative\index{semigroup ! conservative ! sufficient condition}, or equivalently, by Remark \ref{rem:2.1.10a}(i), that $\mu$ is
$(\overline{T}'_t)_{t>0}$-invariant\index{semigroup ! invariant ! sufficient condition}:
\item{(i)} $a_{ij}, g_i - \beta^{\rho, A}_i \in L^1 (\mathbb R^d , \mu)$, $1\le i,j\le d$.
\item{(ii)} There exist $u\in C^2 (\mathbb R^d )$ and $\alpha > 0$ such that
$\lim_{\|x\|\to\infty} u(x) = \infty$ and $Lu \le\alpha u$ $\mu$-a.e., where $Lu=L^Au+\langle \beta^{\rho, A}+\mathbf{B}, \nabla u \rangle = L^A u +\langle \mathbf{G},\nabla u \rangle$ (see \eqref{eq:2.1.3bis2} and \eqref{eq:2.1.5a}). \index{Lyapunov condition}
\item{(iii)} There exists $M\ge 0$, such that $-\langle A(x)x,x\rangle/ (\|x\|^2 + 1)
+ \frac 12 {\rm trace}(A(x))
+ \langle \mathbf{G}(x), x\rangle\le M(\|x\|^2 +1)\big (\ln (\|x\|^2 + 1) + 1\big )$
for $\mu$-a.e. $x\in \mathbb R^d$.
\end{corollary}
\begin{remark}
\label{remark2.1.11}
{\it
\noindent
(i) Suppose that $\mu$ is {\it finite}, so that according to Remark \ref{rem:2.1.10a}(ii)
$\mu$ is $(\overline{T}_t)_{t> 0}$-invariant if and only if $\mu$ is
$(\overline{T}'_t)_{t> 0}$-invariant. In this case we replace $g_i - \beta^{\rho, A}_i$
(resp. $\beta^{\rho ,A} - \mathbf{B}$) in Proposition \ref{prop:2.1.10}(i) (resp.
\ref{prop:2.1.10}(ii) and (iii)) by $g_i-\beta^{\rho,A}_i$ (resp. $\mathbf{G}$) and still obtain
that $\mu$ is $(\overline{T}_t)_{t>0}$-invariant. \\
(ii) The criteria stated in part (iii) of Proposition \ref{prop:2.1.10} resp. Corollary \ref{cor:2.1.4.1}
involve the logarithmic derivative $\beta^{\rho, A}$ of the density. This assumption can be replaced by
volume growth conditions of $\mu$ on annuli (see Proposition \ref{prop:3.2.9} below).}
\end{remark}
\begin{proposition} \label{prop:2.1.4.1.4}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} be satisfied.
Suppose that there exist a bounded, nonnegative and nonzero function $u\in C^2 (\mathbb R^d )$
and $\alpha > 0$, such that $L'u= L^A u + \langle\beta^{\rho ,A} - \mathbf{B} , \nabla u\rangle \ge\alpha u$.
Then $\mu$ is not $(\overline{T}_t)_{t > 0}$-invariant, or equivalently, by Remark \ref{rem:2.1.10a}(i), $(T'_t)_{t>0}$ is not conservative. In particular, if there exist a bounded, nonnegative and nonzero function $u\in C^2 (\mathbb R^d )$
and $\alpha > 0$ such that $Lu \geq \alpha u$, where $Lu = L^A u +\langle \beta^{\rho, A}+ \mathbf{B} ,\nabla u\rangle =
L^A u+\langle \mathbf{G}, \nabla u \rangle$ (see \eqref{eq:2.1.3bis2} and \eqref{eq:2.1.5a}), then $\mu$ is not $(\overline{T}'_t)_{t>0}$-invariant, or equivalently, by Remark \ref{rem:2.1.10a}(i), $(T_t)_{t>0}$ is not conservative.
\end{proposition}
\begin{proof} We may suppose that $u\le 1$. If $\mu$ were
$(\overline{T}_t)_{t>0}$-invariant, it would follow that there exist $\chi_n\in
H^{1,2}_{loc}(\mathbb R^d , \mu )$, $n\ge 1$, such that $(\chi_n - 1)^-\in
H^{1,2}_0(\mathbb R^d , \mu )_{0,b}$, $\lim_{n\to\infty}\chi_n = 0$ $\mu$-a.e.
and $\mathcal E^0_\alpha (\chi_n , \varv ) + \int_{\mathbb R^d}\langle\mathbf{B} , \nabla\chi_n\rangle
\varv\, d\mu\ge 0$ for all $\varv\in H_0^{1,2} (\mathbb R^d , \mu )_{0,b}$, $\varv\ge 0$
(cf. Remark \ref{rem:2.1.10}). Let $\varv_n := \chi_n - u$. Then $\varv_n^-\in
H_0^{1,2} (\mathbb R^d , \mu )_{0,b}$ and
$$
0\le\mathcal E^0_\alpha (\varv_n , \varv_n^- ) - \int_{\mathbb R^d}\langle\mathbf{B} , \nabla \varv_n^-
\rangle \varv_n\, d\mu \le - \alpha \int_{\mathbb R^d} (\varv_n^-)^2\, d\mu,
$$
since $\int_{\mathbb R^d}\langle\mathbf{B} , \nabla \varv_n^-\rangle \varv_n\, d\mu
= \int_{\mathbb R^d}\langle\mathbf{B} , \nabla \varv_n^-\rangle \varv_n^-\, d\mu = 0$ and
$\mathcal{E}^0(\varv_n^+, \varv_n^-) \leq 0$.
Thus $\varv_n^- = 0$, i.e., $u\le\chi_n$. Since $\lim_{n\to\infty}\chi_n = 0$
$\mu$-a.e. and $u\ge 0$, it follows that $u=0$, contradicting our assumption that $u\neq 0$.
The rest of the assertion follows by replacing $(\overline{T}_t)_{t>0}$ with $(\overline{T}'_t)_{t>0}$.
\end{proof}
\begin{remark}
\label{rem:2.1.12}
{\it
Let us provide two examples illustrating the scope of our results. \\
(i) In the first example the measure $\mu$ is not
$(\overline{T}_t)_{t > 0}$-invariant\index{measure ! invariant ! counterexample}.
To this end let $\mu := e^{-x^2}\, dx$,
$\mathbf{G} (x) = -x -2e^{x^2}$, $x \in \mathbb R$,
$$
Lu := \frac12 u'' + \mathbf{G}\cdot u', \;\; u\in C_0^\infty (\mathbb R),
$$
$(\overline{L}, D(\overline{L}))$ be the maximal extension having properties (i)--(iii) in Theorem \ref{theorem2.1.5} and
$(\overline{T}_t)_{t> 0}$ be the associated semigroup. Let $h(x) := \int_{-\infty}^x e^{-t^2}\, dt $,
$x\in\mathbb R$. Then
$$
\begin{aligned}
\frac12 h''(x) + (\beta^{\rho ,A} - \mathbf{B})(x) h'(x)
& = \frac12 h'' (x) + (-x+ 2e^{x^2}) h'(x) \\
& = - 2xe^{-x^2} + 2 \ge \frac{1}{\sqrt{\pi}} h(x)
\end{aligned}
$$
for all $x\in\mathbb R$.
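\noindent
The last inequality can be checked directly: $0\le h(x)\le \int_{\mathbb R} e^{-t^2}\, dt = \sqrt{\pi}$, so $\frac{1}{\sqrt{\pi}} h(x)\le 1$, while $\max_{x\in\mathbb R} 2xe^{-x^2} = \sqrt{2}\, e^{-1/2} < 1$ (attained at $x = 1/\sqrt 2$), hence
$$
-2xe^{-x^2} + 2 \ge 2 - \sqrt{2}\, e^{-1/2} > 1 \ge \frac{1}{\sqrt{\pi}}\, h(x) \quad\text{ for all } x\in\mathbb R .
$$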
It follows from Proposition \ref{prop:2.1.4.1.4} that $\mu$ is not $(\overline{T}_t)_{t > 0}$-invariant.
Since $\mu$ is finite, $\mu$ is also not $(\overline{T}'_t)_{t> 0}$-invariant according to Remark
\ref{rem:2.1.10a}(ii) and thus both semigroups $(T_t)_{t > 0}$ and $(T'_t)_{t> 0}$ are not conservative according to Remark
\ref{rem:2.1.10a}(i). \\
(ii) In the second example $(T_t)_{t > 0}$ is conservative, but
the dual semigroup $(T'_t)_{t > 0}$ of $(\overline{T}_t)_{t > 0}$ is not conservative\index{semigroup ! conservative ! counterexample}. Necessarily,
the (infinitesimally
invariant) measure $\mu$ must be infinite in this case. To this end let $\mu := e^x \, dx$,
$\mathbf{G} (x) = \frac12 + \frac12 e^{-x}$, \,$x \in \mathbb R$, $Lu := \frac12 u'' + \mathbf{G}\cdot u'$,
$u\in C_0^\infty (\mathbb R)$,
$(\overline{L}, D(\overline{L}))$ be the maximal extension having properties (i)--(iii) in Theorem
\ref{theorem2.1.5} and $(\overline{T}_t)_{t> 0}$ be the associated semigroup. Let $h(x) = 1 + x^2$,
$x\in\mathbb R$. Then
$$
\frac12 h''(x) + (\beta^{\rho ,A} +\mathbf{B}) h'(x) = 1 + x + e^{-x}x \le 2 (1 + x^2) = 2 h(x).
$$
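\noindent
The displayed bound is elementary: for $x\le 0$ we have $x(1 + e^{-x})\le 0$, hence $1 + x + e^{-x}x\le 1\le 2(1+x^2)$, while for $x > 0$ we have $e^{-x}\le 1$ and therefore
$$
1 + x + e^{-x}x \le 1 + 2x \le 2(1 + x^2),
$$
since $2(1+x^2) - (1+2x) = 2\big (x - \tfrac 12\big )^2 + \tfrac 12 > 0$.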
It follows from Proposition \ref{prop:2.1.10} that $\mu$ is $(\overline{T}'_t)_{t > 0}$-invariant, hence $(T_t)_{t > 0}$ is conservative according to Remark
\ref{rem:2.1.10a}(i). To see that $\mu$ is not $(\overline{T}_t)_{t > 0}$-invariant, let
$h(x) = \Psi (e^{-x})$, $x\in\mathbb R$, for some bounded, nonzero and nonnegative function
$\Psi\in C^2 ((0, \infty))$. Then
\begin{equation}
\label{eq:2.1.24a}
\frac12 h''(x) + (\beta^{\rho ,A} - \mathbf{B}) h'(x) = \frac12 h''(x) +(\frac12 -\frac12 e^{-x})h'(x)
\ge \alpha h(x), \;\; \text{ for all $x \in \mathbb R$}
\end{equation}
for some $\alpha > 0$ is equivalent to
\begin{equation}
\label{eq:2.1.24b}
\left( \Psi '' (y) + \Psi ' (y)\right)y^2 \ge 2\alpha \Psi (y)
\end{equation}
for all $y>0$. An example of such a function $\Psi$ is given by
$$
\Psi (y) = \begin{cases}
y^2(6-y) & \text{ if } 0<y \le 3 \\
54 - \frac{81}{y} & \text{ if } 3\le y .
\end{cases}
$$
Indeed, $\Psi(y)>0$ for all $y \in (0, \infty)$, $\Psi \in C^2_b((0, \infty))$, and
$$
(\Psi'' (y)+\Psi'(y))y^2 = \begin{cases}
y^2(-3y^2+6y+12) \ge 3y^2 \ge \frac12 \Psi(y) & \text{ if } 0<y \leq 3\\
81-\frac{162}{y} \geq 27 \geq \frac12 \Psi(y) & \text{ if } 3\le y .
\end{cases}
$$
Thus, $\Psi$ satisfies \eqref{eq:2.1.24b} with $\alpha = \frac14$, hence $h(x) = \Psi (e^{-x})$ satisfies
\eqref{eq:2.1.24a} for the same $\alpha$.
It follows from Proposition \ref{prop:2.1.4.1.4} that $\mu$ is not $(\overline{T}_t)_{t > 0}$-invariant,
hence the dual semigroup $(T'_t)_{t > 0}$ is not conservative by Remark
\ref{rem:2.1.10a}(i).\\
The intuition behind the example is as follows: the density of the measure $\mu$ is monotone
increasing, and so is its derivative. The drift of $L$ is bounded from above on $\mathbb{R}^+$, so the
associated diffusion process does not explode to $+\infty$ in finite time. On $\mathbb{R}^-$ the drift
becomes unboundedly positive, which prevents the associated diffusion process from exploding to
$-\infty$ in finite time. For the dual process the situation is exactly the opposite: since the drift of the dual
process becomes unbounded from below with exponential growth, the dual process explodes in finite time
to $-\infty$. }
\end{remark}
\paragraph{Uniqueness of $(L, C_0^\infty (\mathbb R^d ))$}
We will now discuss the problem of uniqueness of the maximal extension
$(L, C_0^\infty (\mathbb R^d ))$ on $L^1 (\mathbb R^d , \mu )$. To this end we make the following
additional assumption on $A$: Suppose that for any compact $V$ there exist constants $M_V\ge 0$
and $\alpha_V\in (0, 1)$ such that
\begin{equation}
\label{AssumptionUniqueness}
|a_{ij} (x) - a_{ij}(y)|\le M_V \|x-y\|^{\alpha_V}\text{ for all }x,y\in V.
\end{equation}
The following regularity result is then crucial for our further investigations:
\begin{theorem}
\label{theorem2.1.3}
Let \eqref{condition on mu}--\eqref{eq:2.1.4}
and \eqref{AssumptionUniqueness} be satisfied and $L$ be as in Theorem \ref{theorem2.1.5} (in particular $L$ can be expressed as in \eqref{defLfirst} and \eqref{eq:2.1.3bis2} on $C_0^{2}(\mathbb R^d)$).
Let $h\in L^\infty (\mathbb R^d , \mu )$ be such that $\int_{\mathbb R^d} (1-L)u\, h\,d\mu = 0$ for all
$u\in C_0^\infty (\mathbb R^d )$. Then $h\in H_{loc}^{1,2}(\mathbb R^d , \mu )$ and
$\mathcal E^0_1 (u,h)-\int_{\mathbb R^d}\langle\mathbf{B} ,\nabla u\rangle h\, d\mu = 0$ for all
$u\in H_0^{1,2}(\mathbb R^d , \mu )_0$.
\end{theorem}
\begin{proof}
First note that $C_0^2 (\mathbb R^d )\subset D(L^0)_{0,b}
\subset D(\overline{L})_{0,b}$ and that $\int _{\mathbb R^d}(1-L)u\, h\, d\mu = 0$ for all
$u\in C_0^2 (\mathbb R^d )$. Let $\chi\in C_0^\infty (\mathbb R^d )$ and $r > 0$
be such that $\text{supp} (\chi )\subset B_r (0)$. We have to show that
$\chi h\in H^{1,2}_0 (\mathbb R^d,\mu )$. Let $K \ge 0$ and $\overline{\alpha} \in (0,1)$ be constants,
such that $|a_{ij} (x) - a_{ij}(y)|\le K\|x-y\|^{\overline{\alpha}}$ for all
$x,y\in B_r (0)$ and define
$$
\overline{a}_{ij}(x) := a_{ij}\Big (\Big (\frac {r}{\|x\|}\wedge 1\Big )x\Big ), \quad x\in\mathbb R^d.
$$
Then $\overline{a}_{ij}(x) = a_{ij}(x)$ for all $x\in B_r (0)$ and
$|\overline{a}_{ij}(x)-\overline{a}_{ij}(y)|\le 2 K\|x-y\|^{\overline{\alpha}}$ for all
$x,y\in\mathbb R^d$. Let $L^{\overline{A}} = \sum_{i,j=1}^d\overline{a}_{ij}
\partial_{ij}$. By \cite[Theorems 4.3.1 and 4.3.2]{Kr96}, there exists for
all $f\in C_0^\infty (\mathbb R^d )$ and $\alpha > 0$ a unique function
$\overline{R}_\alpha f\in C_b^2 (\mathbb R^d )$ satisfying $\alpha\overline
{R}_\alpha f - L^{\overline{A}}\overline{R}_\alpha f = f$ and $\|\alpha
\overline{R}_\alpha f\|_{C_b (\mathbb R^d)}\le \|f\|_{C_b (\mathbb R^d)}$. Moreover, $\alpha\overline
{R}_\alpha f\ge 0$ if $f\ge 0$ by \cite[Theorem 2.9.2]{Kr96}.
\noindent
Since $C_0^\infty (\mathbb R^d )\subset C_\infty (\mathbb R^d )$ dense, we obtain that $f\mapsto \alpha\overline{R}_\alpha f$,
$f\in C_0^\infty (\mathbb R^d)$, can be uniquely extended to a positive linear
map $\alpha\overline{R}_\alpha : C_\infty (\mathbb R^d)\to C_b (\mathbb R^d )$
such that $\|\alpha\overline{R}_\alpha f\|_{C_b (\mathbb R^d)} \le \|f\|_{C_b (\mathbb R^d)}$ for all
$f\in C_\infty (\mathbb R^d )$. By the Riesz representation theorem there exists
a unique positive measure $V_\alpha (x, \cdot )$ on $(\mathbb R^d , \mathcal B
(\mathbb R^d ))$ such that $V_\alpha f(x) := \int_{ \mathbb R^d } f(y)\, V_\alpha
(x, dy) = \overline{R}_\alpha f(x)$ for all $f\in C_\infty (\mathbb R^d )$,
$x\in\mathbb R^d $. Clearly, $\alpha V_\alpha (\cdot , \cdot )$ is a kernel on
$(\mathbb R^d , \mathcal B (\mathbb R^d ))$ (cf. \cite[Chapter IX, Theorem 9]{DeM88}). Since $\alpha
V_\alpha f = \alpha\overline{R}_\alpha f\le 1$ for all $f\in C_\infty
(\mathbb R^d )$ such that $f\le 1$ we conclude that the linear operator
$f\mapsto \alpha V_\alpha f$, $f\in \mathcal B_b (\mathbb R^d )$, is sub-Markovian.
\noindent
Let $f_n\in C_0^\infty (\mathbb R^d )$, $n\ge 1$, be such that $\|f_n\|_{C_b (\mathbb R^d )} \le
\|h\|_{L^\infty (\mathbb R^d , \mu )}$ and $\tilde h := \lim_{n\to\infty} f_n$ is a $\mu$-version of
$h$. Then $\lim_{n\to\infty}\alpha V_\alpha f_n (x) = \alpha V_\alpha
\tilde h (x)$ for all $x\in\mathbb R^d$ by Lebesgue's theorem and $\|\alpha
V_\alpha\tilde h\|_{C_b (\mathbb R^d )} \le \|h\|_{L^\infty (\mathbb R^d , \mu )} $.
Then
\begin{eqnarray}
\label{eq:2.1.2.2}
&&\mathcal E^0 (\chi\alpha V_\alpha f_n , \chi\alpha V_\alpha f_n ) = -\int_{\mathbb R^d} L^0
(\chi\alpha V_\alpha f_n) \chi\alpha V_\alpha f_n\, d\mu \nonumber \\
& =& -\int_{\mathbb R^d} \chi L^A \chi\, (\alpha V_\alpha f_n)^2 \, d\mu - \int_{\mathbb R^d}\langle A
\nabla\chi , \nabla\alpha V_\alpha f_n\rangle\chi\alpha V_\alpha f_n\, d\mu\nonumber\\
& &\qquad -\int_{\mathbb R^d} L^{\overline{A}}(\alpha V_\alpha f_n )\, \chi^2\, \alpha
V_\alpha f_n\, d\mu -\int_{\mathbb R^d}\langle \beta^{\rho, A},\nabla (\chi\alpha V_\alpha f_n)\rangle
\chi\alpha V_\alpha f_n\, d\mu \nonumber\\
& =& -\int_{\mathbb R^d}\chi L^A\chi\, (\alpha V_\alpha f_n)^2\, d\mu - \int_{\mathbb R^d}\langle A\nabla
\chi ,\nabla (\chi\alpha V_\alpha f_n)\rangle\alpha V_\alpha f_n\, d\mu \nonumber\\
&& \qquad + \int_{\mathbb R^d} \langle A\nabla \chi , \nabla\chi\rangle (\alpha V_\alpha
f_n)^2\,d\mu - \alpha\int_{\mathbb R^d} (\alpha V_\alpha f_n - f_n)\chi^2\,\alpha V_\alpha
f_n\, d\mu \nonumber\\
& &\qquad - \int_{\mathbb R^d}\langle \beta^{\rho, A}, \nabla (\chi\alpha V_\alpha f_n)\rangle\chi\,
\alpha V_\alpha f_n\, d\mu. \\ \nonumber
\end{eqnarray}
\noindent
Hence $\mathcal E^0 (\chi\alpha V_\alpha f_n , \chi\alpha V_\alpha f_n )\le c\,
\mathcal E^0 (\chi\alpha V_\alpha f_n , \chi\alpha V_\alpha f_n )^{1/2} + M$ for
some positive constants $c$ and $M$ independent of $n$. Consequently,
$\sup_{n\ge 1}\mathcal E^0 (\chi\alpha V_\alpha f_n , \chi\alpha V_\alpha f_n )
< + \infty$, hence $\chi\alpha V_\alpha\tilde h\in D(\mathcal E^0)$ and
$\lim_{n\to\infty}\chi\alpha V_\alpha f_n = \chi\alpha V_\alpha\tilde h$
weakly in $D(\mathcal E^0)$.
\noindent
Note that
\begin{eqnarray}
\label{eq:2.1.2.3}
&&-\ \alpha\int_{\mathbb R^d} (\alpha V_\alpha \tilde h - \tilde h)\alpha V_\alpha\tilde h
\chi^2\, d\mu\le - \alpha\int_{\mathbb R^d} (\alpha V_\alpha\tilde h - \tilde h )\tilde h
\chi^2\, d\mu \nonumber\\
& =& \lim_{n\to\infty} - \alpha\int_{\mathbb R^d} (\alpha V_\alpha f_n - f_n )
\tilde h\chi^2\, d\mu \nonumber\\
& =& \lim_{n\to\infty} - \int_{\mathbb R^d} L^{\overline{A}} (\alpha V_\alpha f_n) \tilde h
\chi^2\,d\mu \nonumber\\
& = &\lim_{n\to\infty} \Big (- \int_{\mathbb R^d} L^A (\chi^2\alpha V_\alpha f_n )
\tilde h\,d\mu + 2\int_{\mathbb R^d}\langle A\nabla\chi, \nabla\alpha V_\alpha
f_n\rangle\chi\tilde h \,d\mu \nonumber\\
& &\qquad + \int_{\mathbb R^d} L^A(\chi^2)\alpha V_\alpha f_n\,\tilde h\, d\mu\Big ) \nonumber\\
& =& \lim_{n\to\infty} \Big (
- \int_{\mathbb R^d}\chi^2\alpha V_\alpha f_n\,\tilde h\,
d\mu + \int_{\mathbb R^d}\langle \mathbf{G} ,\nabla (\chi^2\alpha V_\alpha f_n)
\rangle \tilde h\, d\mu \nonumber\\
& &\qquad + \ 2 \int_{\mathbb R^d}\langle A\nabla\chi ,\nabla\alpha V_\alpha f_n
\rangle\chi\tilde h\, d\mu + \int_{\mathbb R^d} L^A (\chi^2)\alpha V_\alpha
f_n\,\tilde h \, d\mu\Big ) \nonumber\\
& =& - \int_{\mathbb R^d}\chi^2(\alpha V_\alpha\tilde h)\,\tilde h\, d\mu + \int_{\mathbb R^d}
\langle \mathbf{G} ,\nabla (\chi\alpha V_\alpha \tilde h )\rangle
\chi\tilde h\, d\mu \nonumber\\
&& \qquad + \int_{\mathbb R^d}\langle \mathbf{G} , \nabla \chi\rangle\chi(\alpha V_\alpha\tilde h)
\, \tilde h\, d\mu + 2\int_{\mathbb R^d}\langle A\nabla\chi ,\nabla (\chi\alpha V_\alpha
\tilde h)\rangle\tilde h\, d\mu \nonumber\\
&& \qquad -\ 2 \int_{\mathbb R^d} \langle A\nabla\chi , \nabla\chi\rangle (\alpha V_\alpha
\tilde h)\, \tilde h\, d\mu + \int_{\mathbb R^d} L^A(\chi^2)(\alpha V_\alpha\tilde h)\,
\tilde h\, d\mu \nonumber\\
&\le& c\, \mathcal E^0 (\chi\alpha V_\alpha\tilde h , \chi\alpha V_\alpha\tilde h )^{1/2} + M \\ \nonumber
\end{eqnarray}
for some positive constants $c$ and $M$ independent of $\alpha$. Combining
\eqref{eq:2.1.2.2} and \eqref{eq:2.1.2.3} we obtain that
$$
\begin{aligned}
\mathcal E^0 (\chi\alpha V_\alpha\tilde h & , \chi\alpha V_\alpha\tilde h)\le
\liminf_{n\to\infty} \mathcal E^0 (\chi\alpha V_\alpha f_n, \chi\alpha
V_\alpha f_n) \\
& \le - \int_{\mathbb R^d}\chi L^A\chi (\alpha V_\alpha \tilde h)^2\, d\mu - \int_{\mathbb R^d}\langle A
\nabla\chi , \nabla (\chi\alpha V_\alpha \tilde h)\rangle\alpha V_\alpha
\tilde h\, d\mu
\\
& \qquad + \int_{\mathbb R^d}\langle A\nabla\chi , \nabla\chi\rangle (\alpha V_\alpha
\tilde h)^2\, d\mu - \alpha\int_{\mathbb R^d} (\alpha V_\alpha\tilde h - \tilde h)\chi^2\,
\alpha V_\alpha \tilde h\, d\mu \\
& \qquad - \int_{\mathbb R^d}\langle\beta^{\rho, A} , \nabla (\chi\alpha V_\alpha \tilde h)\rangle\chi
\alpha V_\alpha \tilde h\, d\mu \\
& \le\tilde c\, \mathcal E^0 (\chi\alpha V_\alpha\tilde h , \chi\alpha V_\alpha
\tilde h )^{1/2} + \tilde M
\end{aligned}
$$
for some positive constants $\tilde c$ and $\tilde M$ independent of $\alpha$.
Hence $(\chi\alpha V_\alpha\tilde h )_{\alpha > 0}$ is bounded in
$D(\mathcal E^0)$.\\
If $u\in D(\mathcal E^0)$ is the limit of some weakly convergent subsequence
$(\chi\alpha_k V_{\alpha_k}\tilde h)_{k\ge 1}$ with $\lim_{k\to\infty}
\alpha_k = +\infty$ it follows for all $\varv\in C_0^\infty (\mathbb R^d )$ that
\begin{eqnarray*}
\int_{\mathbb R^d} (u - \chi \tilde h) \varv \, d\mu &= &\lim_{k\to\infty}\int_{\mathbb R^d}\chi (\alpha_k
V_{\alpha_k}\tilde h - \tilde h)\varv\, d\mu \\
& = &\lim_{k\to\infty}\lim_{n\to\infty} \int_{\mathbb R^d}\chi (\alpha_k V_{\alpha_k}f_n -
f_n ) \varv\, d\mu \\
& =& \lim_{k\to\infty}\lim_{n\to\infty} \int_{\mathbb R^d}\chi L^A (V_{\alpha_k}
f_n) \varv\, d\mu \\
& = &\lim_{k\to\infty}\lim_{n\to\infty} \Big (\int_{\mathbb R^d} V_{\alpha_k}f_n \, L^0
(\chi \varv )\, d\mu -\int_{\mathbb R^d}\langle\beta^{\rho, A} , \nabla V_{\alpha_k}f_n\rangle
\chi \varv\, d\mu \Big )\\
& = &\lim_{k\to\infty}\Big (\int_{\mathbb R^d} V_{\alpha_k}\tilde h \, L^0 (\chi \varv)\,
d\mu - \int_{\mathbb R^d}\langle\beta^{\rho, A} , \nabla (\chi V_{\alpha_k}\tilde h)
\rangle \varv\, d\mu \\
&&\qquad \qquad + \int_{\mathbb R^d}\langle\beta^{\rho, A} , \nabla\chi\rangle V_{\alpha_k}
\tilde h\, \varv\, d\mu\Big ) \\
\end{eqnarray*}
\begin{eqnarray*}
& \le& \lim_{k\to\infty}\frac {1}{\alpha_k}\Big (\|h\|_{L^\infty (\mathbb R^d , \mu )}
\|L^0 (\chi \varv)\|_{L^1 (\mathbb R^d , \mu )} \\
& &\quad \qquad \qquad+ \sqrt {2\nu }\,\big\| \|\beta^{\rho, A} \|\varv\big\|_{L^2 (\mathbb R^d , \mu )}
\mathcal E^0 (\chi\alpha_k V_{\alpha_k} \tilde h , \chi\alpha_k V_{\alpha_k}\tilde h )^{1/2} \\
& &\quad \qquad \qquad\qquad + \sqrt {2\nu }\|h\|_{L^\infty (\mathbb R^d , \mu )} \|\beta^{\rho, A}
\varv\|_{L^2 (\mathbb R^d , \mathbb R^d, \mu )} \mathcal E^0 (\chi , \chi )^{1/2}\Big ) = 0.
\end{eqnarray*}
Consequently, $\chi\tilde h$ is a $\mu$-version of $u$. In particular, $\chi h\in H_0^{1,2}(\mathbb R^d , \mu )$.
\noindent
Let $u\in H_0^{1,2}(\mathbb R^d , \mu )$ with compact support, $\chi\in
C_0^\infty (\mathbb R^d)$ such that $\chi \equiv 1$ on $\text{supp}(|u|\mu )$
and $u_n\in C_0^\infty (\mathbb R^d)$, $n\ge 1$, such that
$\lim_{n\to\infty} u_n = u$ in $H_0^{1,2} (\mathbb R^d ,\mu )$. Then
$$
\begin{aligned}
\mathcal E_1^0 (u,h)
& - \int_{\mathbb R^d}\langle\mathbf{B}, \nabla u\rangle h\, d\mu
= \lim_{n\to\infty}\Big (\mathcal E_1^0 (u_n,h) -\int_{\mathbb R^d}\langle\mathbf{B},
\nabla u_n\rangle h\, d\mu \Big ) \\
& = \lim_{n\to\infty} \int_{\mathbb R^d} (1-L)u_n\, \chi h \, d\mu = 0.
\end{aligned}
$$
\end{proof}
\begin{corollary}
\label{cor2.1.1}
Let \eqref{condition on mu}--\eqref{eq:2.1.4}
and \eqref{AssumptionUniqueness} be satisfied.
Let $(\overline{L}, D(\overline{L}))$ be the maximal extension of
$(L, C_0^\infty (\mathbb R^d ))$ satisfying (i)--(iii) in Theorem \ref{theorem2.1.5} and
$(\overline{T}_t)_{t> 0}$ the associated semigroup. Then
$(L, C_0^\infty (\mathbb R^d ))$ is $L^1$-unique, if and only if
$\mu$ is $(\overline{T}_t)_{t>0}$-invariant (see Definition \ref{def:2.1.1})
\index{uniqueness ! $L^1$-unique}\index{measure ! invariant}.
\end{corollary}
\begin{proof}
Clearly, if $(L, C_0^\infty (\mathbb R^d ))$ is $L^1$-unique
it follows that $(L, D(L^0)_{0,b})$ is $L^1$-unique. Hence $\mu$ is
$(\overline{T}_t)_{t>0}$-invariant by Proposition \ref{prop2.1.9}.
\noindent
Conversely, let $h\in L^\infty (\mathbb R^d, \mu )$ be such that $\int_{\mathbb R^d}
(1-L)u\, h\, d\mu = 0$ for all $u\in C_0^\infty (\mathbb R^d)$. Then
$h\in H_{loc}^{1,2}(\mathbb R^d , \mu )$ and $\mathcal E^0_1 (u,h)-\int_{\mathbb R^d}\langle \mathbf{B} ,
\nabla u \rangle h\, d\mu = 0$ for all $u\in H_0^{1,2}(\mathbb R^d , \mu )_0$
by Theorem \ref{theorem2.1.3}. In particular,
\begin{equation}
\label{eq:2.1.2.4}
\int_{\mathbb R^d} (1-L)u\, h\, d\mu = \mathcal E_1^0(u, h)-\int_{\mathbb R^d}\langle \mathbf{B} ,\nabla u\rangle h
\, d\mu = 0 \text{ for all }u\in D(L^0)_{0,b}.
\end{equation}
Since $\mu$ is $(\overline{T}_t)_{t>0}$-invariant it follows from Proposition \ref{prop2.1.9}
that $(L, D(L^0)_{0,b})$ is $L^1$-unique and \eqref{eq:2.1.2.4} now implies that $h=0$.
Hence $(L, C_0^\infty (\mathbb R^d ))$ is $L^1$-unique too.
\end{proof}
\noindent
In the particular symmetric case, i.e. when $\mathbf{B}=0$, we can reformulate
Corollary \ref{cor2.1.1} as follows:
\begin{corollary} \label{cor2.1.2}
Let \eqref{condition on mu}--\eqref{eq:2.1.4} and \eqref{AssumptionUniqueness} be satisfied.
Let $\mathbf{G} = \beta^{\rho, A}$, i.e. $\mathbf{B}=0$ (cf. \eqref{eq:2.1.5a} and \eqref{eq:2.1.3}).
Then $(L^0, C_0^\infty (\mathbb R^d ))$ (cf. \eqref{eq:2.1.3d}) is $L^1$-unique, if and only if the associated
Dirichlet form $(\mathcal E^0 , D(\mathcal E^0))$ is conservative, i.e. $T_t^0 1_{\mathbb R^d}=1$ $\mu$-a.e. for all $t>0$\index{uniqueness ! $L^1$-unique}\index{semigroup ! conservative}.
\end{corollary}
\begin{proof}
Clearly, $(\mathcal E^0 , D(\mathcal E^0))$ is conservative, if and only if
$T^{\prime}_t 1_{\mathbb R^d} =1$ $\mu$-a.e. for all $t> 0$.
But by Remark \ref{rem:2.1.10a}(i),
$T^{\prime}_t 1_{\mathbb R^d} =1$, $\mu$-a.e., for all $t>0$, if and only if $\int_{\mathbb R^d}\overline{T}_t u \, d\mu
= \int_{\mathbb R^d} u\, d\mu$ for all $u\in L^1 (\mathbb R^d ,\mu )$ and $t>0$, i.e., $\mu$ is
$(\overline{T}_t)_{t>0}$-invariant, which implies the result by Corollary \ref{cor2.1.1}.
\end{proof}
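\noindent
To illustrate Corollary \ref{cor2.1.2}, consider the classical example $A=\mathrm{Id}$, $C\equiv 0$, $\mathbf{H}\equiv 0$ and $\rho\equiv 1$, so that $L^0 = \frac12\Delta$ on $C_0^\infty (\mathbb R^d)$ and $\mu = dx$. The associated semigroup is the heat semigroup,
$$
T_t^0 f(x)=\int_{\mathbb R^d}(2\pi t)^{-d/2}e^{-\|x-y\|^2/2t}\, f(y)\, dy,
$$
which satisfies $T_t^0 1_{\mathbb R^d}=1$ for all $t>0$, i.e. $(\mathcal E^0 , D(\mathcal E^0))$ is conservative. Hence $(\frac12\Delta , C_0^\infty (\mathbb R^d))$ is $L^1$-unique with respect to Lebesgue measure.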
\subsection{Existence and regularity of densities to infinitesimally invariant measures}
\label{sec2.2}
Since the abstract analysis on existence and uniqueness of solutions to the abstract Cauchy problem
\eqref{eq:2.1.0} developed in Section \ref{sec2.1} requires the existence and certain regularity properties
of an infinitesimally invariant measure $\mu$ for $(L, C_0^\infty (\mathbb R^d))$, i.e. a locally finite
nonnegative measure satisfying \eqref{condition on mu}--\eqref{eq:2.1.3} and
\begin{equation}
\label{eq:2.2.0}
\int_{\mathbb R^d} \big (\frac 12 \sum_{i,j =1}^d a_{ij}\partial_{ij}f + \sum_{i=1}^d g_i \partial_i f\big )
\, d\mu = 0, \qquad \forall f\in C_0^\infty (\mathbb R^d),
\end{equation}
we will first identify in Section \ref{subsec:2.2.1} a set of sufficient conditions on the coefficients
$(a_{ij})_{1\le i,j\le d}$ and $(g_i)_{1\le i\le d}$ that imply the existence of such $\mu$. We will in particular obtain existence of a
sufficiently regular density $\rho$, that allows us to apply Theorem \ref{theorem2.1.5} in order to obtain
the existence of a
closed extension of $(L,C_0^{\infty}(\mathbb R^d))$ generating a sub-Markovian
$C_0$-semigroup of contractions $(\overline{T}_t)_{t>0}$ on $L^1 (\mathbb R^d , \mu )$ with the further properties of Theorem \ref{theorem2.1.5}.
As one major aim of this book is to understand $L$ as the generator of a solution to an SDE with
corresponding coefficients, the class of admissible coefficients, i.e. the class of coefficients $(a_{ij})_{1\le i,j\le d}$ and $(g_i)_{1\le i\le d}$ for which \eqref{eq:2.2.0} has a solution $\mu$ with nice density $\rho$, plays an important
role.
\subsubsection{Class of admissible coefficients and the main theorem}
\label{subsec:2.2.1}
In order to understand the class of admissible coefficients better, it will be suitable to write $L$ in divergence form. {\bf Throughout, we let the dimension $d\ge 2$.} The case $d=1$ plays a special role since it allows for explicit and partly
elementary computations with strong regularity results. It is therefore
best treated separately and will not be considered
further from now on. Instead, we include the case $d=1$ in the outlook (cf. Chapter \ref{conclusionoutlook}, part 1.).
We then consider the following class of divergence form operators\index{operator ! divergence form} with respect to a possibly non-symmetric diffusion matrix and perturbation $\mathbf{H}=(h_1,\dots,h_d)$:
\begin{eqnarray}\label{eq:2.2.0first}
Lf & = & \frac12 \sum_{i,j=1}^{d} \partial_i((a_{ij}+c_{ij})\partial_j)f+\sum_{i=1}^{d}h_i\partial_i f, \quad f\in C^{2}(\mathbb R^d),
\end{eqnarray}
where the coefficients $a_{ij}$, $c_{ij}$, and $h_i$, satisfy the following {\bf assumption}\index{assumption ! {\bf (a)}}:
\begin{itemize}
\item[{\bf (a)}] \ \index{assumption ! {\bf (a)}}$a_{ji}= a_{ij}\in H_{loc}^{1,2}(\mathbb R^d) \cap C(\mathbb R^d)$, $1 \leq i, j \leq d$, $d\ge 2$, and $A = (a_{ij})_{1\le i,j\le d}$ satisfies \eqref{eq:2.1.2}.
$C = (c_{ij})_{1\le i,j\le d}$, with $-c_{ji}=c_{ij} \in H_{loc}^{1,2}(\mathbb R^d) \cap C(\mathbb R^d)$, $1 \leq i,j \leq d$, $\mathbf{H}=(h_1, \dots, h_d) \in L_{loc}^p(\mathbb R^d, \mathbb R^d)$ for some $p\in (d,\infty)$.
\end{itemize}
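\noindent
A simple explicit example of coefficients satisfying {\bf (a)} in $d=2$ is given by $a_{11}=a_{22}=1+\|x\|^2$, $a_{12}=a_{21}=0$, $c_{12}=-c_{21}=x_1x_2$, $c_{11}=c_{22}=0$ and $\mathbf{H}(x)=x$: all entries are smooth, $A$ is locally uniformly elliptic (so that, assuming \eqref{eq:2.1.2} is the local strict ellipticity condition on $A$, it is satisfied), $C$ is antisymmetric, and every locally bounded vector field belongs to $L^p_{loc}(\mathbb R^d , \mathbb R^d)$ for all $p\in (d,\infty )$.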
The anti-symmetry $-c_{ji}=c_{ij}$ in assumption {\bf (a)} is needed for the equivalence of infinitesimal invariance \eqref{eq:2.2.0} and variational equality \eqref{eq:2.2.0a}, to switch from divergence form \eqref{eq:2.2.0first} to non-divergence form \eqref{equation G with H}, and to obtain that $\beta^{\rho, C^T}$ has zero divergence in the weak sense with respect to $\mu$ (see Remark \ref{rem:2.2.4}).\\
Under assumption {\bf \text{(a)}}, $L$ as in \eqref{eq:2.2.0first} is written for $f\in C^{2}(\mathbb R^d)$ as
\begin{eqnarray}\label{equation G with H}
Lf &= & \frac12\mathrm{div}\big ( (A+C)\nabla f\big )+\langle\mathbf{H}, \nabla f\rangle \nonumber \\
&= & \frac12\mathrm{trace}\big ( A\nabla^2 f\big )+\big \langle\frac{1}{2}\nabla (A+C^{T})+ \mathbf{H}, \nabla f\big \rangle,
\end{eqnarray}
where for a matrix $B=(b_{ij})_{1 \leq i,j \leq d}$ of functions
\begin{equation}\label{divergence of row}
\nabla B=((\nabla B)_1, \ldots, (\nabla B)_d)
\end{equation}
with
\begin{equation}\label{divergence of row i}
(\nabla B)_i=\sum_{j=1}^d\partial_j b_{ij}, \qquad 1 \leq i \leq d.
\end{equation}
{\bf From now on (unless otherwise stated), we always assume that, under assumption {\bf \text{(a)}}, $\mathbf{G}$ has the following form}:
\begin{eqnarray}\label{form of G}
\mathbf{G}=(g_1, \dots, g_d)=\frac{1}{2}\nabla \big (A+C^{T}\big )+ \mathbf{H},
\end{eqnarray}
where $A$, $C$, and $\mathbf{H}$ are as in assumption {\bf (a)}. Thus $L$ as in \eqref{eq:2.2.0first} is written as a non-divergence form operator\index{operator ! non-divergence form}
\begin{eqnarray*}
Lf & = & \frac12 \sum_{i,j=1}^{d}a_{ij}\partial_{ij}f+\sum_{i=1}^{d}g_i\partial_i f, \quad f \in C^{2}(\mathbb R^d),
\end{eqnarray*}
where
\begin{eqnarray}\label{defofL}
g_i=\frac12\sum_{j=1}^{d} \partial_{j} (a_{ij}+c_{ji})+h_i,\quad 1 \leq i \leq d.
\end{eqnarray}
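\noindent
To illustrate \eqref{defofL} in the simplest nontrivial case, let $d=2$, $A=\mathrm{Id}$, $\mathbf{H}=0$ and $c_{12}=-c_{21}=c$ for some $c\in H_{loc}^{1,2}(\mathbb R^2)\cap C(\mathbb R^2)$, $c_{11}=c_{22}=0$. Then \eqref{defofL} gives $g_1 = -\frac12\partial_2 c$, $g_2=\frac12\partial_1 c$, i.e.
$$
Lf=\frac12\Delta f+\frac12\big\langle (-\partial_2 c , \partial_1 c) , \nabla f\big\rangle ,
$$
so the antisymmetric part $C$ produces the rotated gradient drift $\frac12\nabla^{\perp}c$. Since $\int_{\mathbb R^2}\langle (-\partial_2 c , \partial_1 c) , \nabla f\rangle\, dx = 0$ for all $f\in C_0^\infty (\mathbb R^2)$ by integration by parts, Lebesgue measure ($\rho\equiv 1$) is in this case infinitesimally invariant for $(L, C_0^\infty (\mathbb R^2))$.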
\begin{remark}\label{rem:2.2.7}
{\it The class of admissible coefficients satisfying {\bf \text{(a)}} is quite large. It not only allows us to consider fairly general divergence form operators,
but also a fairly general subclass of non-divergence form operators $L$. Indeed, choose
$a_{ij} \in H_{loc}^{1,p}(\mathbb R^d)\cap C(\mathbb R^d)$, $1\le i,j\le d$, for some $p\in (d,\infty)$, such that $A = (a_{ij})_{1\le i,j\le d}$ satisfies \eqref{eq:2.1.2}, $C\equiv 0$,
and
$$
\mathbf{H}=\mathbf{\widetilde{H}}-\frac12 \nabla A, \text{ with arbitrary }\mathbf{\widetilde{H}}\in L_{loc}^p(\mathbb R^d, \mathbb R^d).
$$
Putting $\mathbf{\widetilde{H}}=\mathbf{G}$, this leads to any non-divergence form operator $L$, such that for any $f \in C^{2}(\mathbb R^d)$
\begin{eqnarray*}
Lf &=& \frac12 \sum_{i,j=1}^{d} a_{ij}\partial_{ij}f+\sum_{i=1}^{d}g_i\partial_i f\ =\ \frac12\mathrm{trace}(A\nabla^2 f)+\langle \mathbf{G}, \nabla f \rangle \label{eq:2.2.1}
with the following {\bf assumption} on the coefficients
\begin{itemize}
\item[{\bf (a$^{\prime}$)}] \index{assumption ! {\bf (a$^{\prime}$)}}for some $p\in (d,\infty)$, $a_{ji}= a_{ij}\in H_{loc}^{1,p}(\mathbb R^d) \cap C(\mathbb R^d)$, $1 \leq i, j \leq d$, $A = (a_{ij})_{1\le i,j\le d}$ satisfies \eqref{eq:2.1.2} and $\mathbf{G}=(g_1,\ldots,g_d) \in L_{loc}^p(\mathbb R^d,\mathbb R^d)$.
\end{itemize}
}
\end{remark}
\noindent
If assumption {\bf (a)} holds and $\rho \in H^{1,2}_{loc}(\mathbb R^d)$, \eqref{eq:2.2.0} is by integration by parts equivalent (cf. \eqref{equation G with H}--\eqref{defofL}) to the following variational equality\index{variational equality}:
\begin{equation}\label{eq:2.2.0a}
\int_{\mathbb R^d} \langle \frac12 (A+C^T) \nabla \rho - \rho \mathbf{H}, \nabla f \rangle dx = 0, \quad \forall f \in C_0^{\infty}(\mathbb R^d).
\end{equation}
In the next section, we show how the variational equation \eqref{eq:2.2.0a} can be adequately solved using classical tools from PDE theory and that for the measure $\rho\,dx$, where $\rho$ is the solution to \eqref{eq:2.2.0a}, \eqref{eq:2.1.1} and \eqref{eq:2.1.3} are satisfied. In particular, replacing $\hat{A}$ and $\hat{\mathbf{H}}$ in Theorem \ref{Theorem2.2.4} below with $\frac12 (A+C^T)$ and $\mathbf{H}$, respectively, we obtain the {\bf following main theorem} of this section.
\begin{theorem}\label{theo:2.2.7}
Under assumption {\bf (a)} (see the beginning of Section \ref{subsec:2.2.1}), there exists $\rho \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ with $\rho(x)>0$ for all $ x\in \mathbb R^d$ such that with $\mu=\rho dx$, and $L$ as in \eqref{equation G with H} (see also \eqref{form of G} and Remark \ref{rem:2.2.7}), it holds that
\begin{equation} \label{eq:2.2.8}
\int_{\mathbb R^d} Lf d\mu = 0, \quad \text{ for all } f \in C_0^{\infty}(\mathbb R^d).
\end{equation}
In particular, $\mu$ as given above satisfies the assumption \eqref{condition on mu} on $\mu$ at the beginning of Section \ref{subsec:2.1.1} and moreover as a simple consequence of assumption {\bf (a)}, the assumptions \eqref{eq:2.1.1}--\eqref{eq:2.1.4} are satisfied and therefore Theorem \ref{theorem2.1.5} applies.
\end{theorem}
By Remark \ref{rem:2.2.7}, the first part of Theorem \ref{theo:2.2.7} is a generalization of \cite[Theorem 1]{BRS} (see also \cite[Theorem 2.4.1]{BKRS}), where the existence of a density $\rho$ with the same properties as in Theorem \ref{theo:2.2.7} is derived under
assumption {\bf (a$^{\prime}$)}.
\subsubsection{Proofs}
\label{subsec:2.2.2}
This section serves to provide the missing ingredients for the proof of Theorem \ref{theo:2.2.7}, in particular Theorem \ref{Theorem2.2.4} below.
\begin{lemma} \label{Lemma2.2.1}
Let $\hat{A}:=(\hat{a}_{ij})_{1 \leq i,j \leq d}$ be a (possibly non-symmetric) matrix of bounded measurable functions on an open ball $B$ and suppose there is a constant $\lambda>0$ such that
\begin{equation} \label{eq:2.2.3}
\lambda \|\xi\|^2 \leq \langle \hat{A}(x) \xi, \xi \rangle, \quad \forall \xi \in \mathbb R^d, x\in B.
\end{equation}
Let $\hat{\mathbf{H}} \in L^{d\vee (2+\varepsilon)}(B, \mathbb R^d)$ for some $\varepsilon>0$.
Then for any $\Phi \in H^{1,2}_0(B)'$, there exists a unique $u \in H^{1,2}_0(B)$ such that
$$
\int_B \langle \hat{A} \nabla u+ u\hat{\mathbf{H}}, \nabla \varphi \rangle dx = \left [\Phi, \varphi \right ], \quad \forall \varphi \in H^{1,2}_0(B),
$$
where $[\cdot,\cdot]$ denotes the dualization between $H^{1,2}_0(B)'$ and $H^{1,2}_0(B)$, i.e. $\left [\Phi, \varphi \right ]=\Phi(\varphi)$.
\end{lemma}
\begin{proof}
For $\alpha \geq 0$, define a bilinear form $\mathcal{B}_{\alpha}: H^{1,2}_0(B) \times H^{1,2}_0(B) \rightarrow \mathbb R$ by
$$
\mathcal{B}_{\alpha}(u,v) = \int_B \langle \hat{A} \nabla u+ u\hat{\mathbf{H}} , \nabla v \rangle dx + \alpha \int_{B} uv dx.
$$
Then by \cite[Lemme 1.5, Th\'eor\`eme 3.2]{St65}, there exist constants $M, \gamma, \delta>0$ such that
$$
|\mathcal{B}_{\gamma}(u,v)| \leq M \|u\|_{H_0^{1,2}(B)} \|v\|_{H_0^{1,2}(B)}\, ,\quad \forall u,v \in H^{1,2}_0(B)
$$
and
\begin{equation} \label{eq:2.2.4}
|\mathcal{B}_{\gamma}(u,u)| \geq \delta \|u\|^2_{H^{1,2}_0(B)}.
\end{equation}
Let $\Psi \in H^{1,2}_0(B)'$ be given.
Then by the Lax--Milgram theorem \cite[Corollary 5.8]{BRE}, there exists a unique $S(\Psi) \in H^{1,2}_0(B)$ such that
$$
\mathcal{B}_{\gamma}(S(\Psi), \varphi) = \left [\Psi, \varphi \right ], \quad \varphi \in H^{1,2}_0(B).
$$
By \eqref{eq:2.2.4}, it follows that the map $S: H_0^{1,2}(B)' \rightarrow H^{1,2}_0(B)$ is a bounded linear operator.
Now define $J: H^{1,2}_0(B) \rightarrow H^{1,2}_0(B)'$ by
$$
[ J(u),v ] = \int_{B} u v dx, \qquad u, v \in H^{1,2}_0(B).
$$
By the compactness of the embedding $H^{1,2}_0(B)\subset L^2(B)$, $J$ is a compact operator, hence $S \circ J: H^{1,2}_0(B) \rightarrow H^{1,2}_0(B)$
is also a compact operator. In particular
$$
\exists v \in H^{1,2}_0(B) \ \text{ with }\ \mathcal{B}_0(v,\varphi) = [ \Psi, \varphi ],\quad \forall \varphi \in H_0^{1,2}(B),
$$
if and only if
$$
\exists v \in H^{1,2}_0(B) \ \text{ with } \ \left(I-\gamma S\circ J\right)(v) = S(\Psi),
$$
where $I:H^{1,2}_0(B) \rightarrow H^{1,2}_0(B)$ is the identity map.
By the maximum principle \cite[Theorem 4]{T77}, $\left(I-\gamma S \circ J\right)(v) =S(0)$ if and only if $v=0$.
Now let $\Phi \in H^{1,2}_0(B)'$ be given. Using the Fredholm alternative \cite[Theorem 6.6(c)]{BRE}, we can see that there exists a unique $u \in H^{1,2}_0(B)$ such that
$$
\left(I-\gamma S\circ J\right)(u) = S(\Phi),
$$
as desired.
\end{proof}
The following Theorem \ref{Theorem2.2.2} originates from \cite[Theorem 1.8.3]{BKRS}, where $\hat{A}$ is supposed to be symmetric, but it straightforwardly extends to non-symmetric $\hat{A}$ by \cite[Theorem 1.2.1]{BKRS} (see \cite[Theorem 2.8]{Kr07} for the original result), since \cite[Theorem 2.8]{Kr07} holds for non-symmetric $\hat{A}$.
\begin{theorem} \label{Theorem2.2.2}
Let $B$ be an open ball in $\mathbb R^d$ and $\hat{A}=(\hat{a}_{ij})_{1 \leq i,j \leq d}$ be a (possibly non-symmetric) matrix of continuous functions on $\overline{B}$ that satisfies \eqref{eq:2.2.3}.
Let $\mathbf{F} \in L^p(B, \mathbb R^d)$ for some $p\in (d,\infty)$. Suppose $u \in H^{1,2}(B)$ satisfies
$$
\int_{B} \langle \hat{A} \nabla u,\nabla \varphi\rangle dx = \int_{B} \langle \mathbf{F}, \nabla \varphi \rangle dx, \quad \forall \varphi \in C_0^{\infty}(B).
$$
Then $u \in H^{1,p}(V)$ for any open ball $V$ with $\overline{V} \subset B$.
\end{theorem}
\begin{theorem} \label{Theorem2.2.4}
Let $\hat{A}:=(\hat{a}_{ij})_{1 \leq i,j \leq d}$ be a (possibly non-symmetric) matrix of locally bounded measurable functions on $\mathbb R^d$.
Assume that for each open ball $B$ there exists a constant $\lambda_B>0$ such that
$$
\lambda_B \|\xi\|^2 \leq \langle \hat{A}(x) \xi, \xi \rangle, \quad \xi \in \mathbb R^d, x\in B.
$$
Let $\hat{\mathbf{H}}\in L^p_{loc}(\mathbb R^d, \mathbb R^d)$ for some $p\in (d,\infty)$. Then it holds that:\\[3pt]
(i) There exists $\rho \in H_{loc}^{1,2}(\mathbb R^d) \cap C(\mathbb R^d)$ with $\rho(x)>0$ for all $x \in \mathbb R^d$
such that
$$
\int_{\mathbb R^d} \langle \hat{A} \nabla \rho+ \rho \hat{\mathbf{H}}, \nabla \varphi \rangle dx = 0, \quad \forall \varphi \in C^{\infty}_0(\mathbb R^d).
$$
(ii) If additionally $\hat{a}_{ij} \in C(\mathbb R^d)$, $1 \leq i,j \leq d$, then $\rho \in H^{1,p}_{loc}(\mathbb R^d)$.
\end{theorem}
\begin{proof}
(i) Let $n \in \mathbb N$. By Lemma \ref{Lemma2.2.1} and \cite[Corollary 5.5]{T77}, there exists $v_n \in H_0^{1,2}(B_n) \cap C(B_n)$ such that
\begin{equation*}
\int_{B_{n}} \langle \hat{A} \nabla v_n+ v_n \hat{\mathbf{H}} , \nabla \varphi \rangle dx = \int_{B_n} \langle-\hat{\mathbf{H}}, \nabla \varphi \rangle dx, \quad \text{ for all } \varphi \in C_0^{\infty}(B_n).
\end{equation*}
Let $u_n:= v_n+1$. Then $T(u_n)=1$ on $\partial B_n$, where $T$ is the trace operator from $H^{1,2}(B_n)$ to $L^2(\partial B_n)$. Moreover,
\begin{equation} \label{eq:2.2.5}
\int_{B_{n}} \langle \hat{A} \nabla u_n+ u_n \hat{\mathbf{H}} , \nabla \varphi \rangle dx = 0, \quad \text{ for all } \varphi \in C_0^{\infty}(B_n).
\end{equation}
Since $0 \leq u_n^{-}\le v_n^-$, we have $u^-_n \in H^{1,2}_0(B_n)$. Thus by
\cite[Lemma 3.4]{LT19}, we get
\begin{equation*}
\qquad \quad \int_{B_n} \langle \hat{A} \nabla u_n^{-} +u_n^{-} \,\hat{\mathbf{H}}, \nabla \varphi \rangle dx \leq 0, \quad \varphi \in C_0^{\infty}(B_n) \text{ with } \varphi \geq 0.
\end{equation*}
By the maximum principle \cite[Theorem 4]{T77}, we have $u_n^- \leq 0$, hence $u_n \geq 0$. Suppose there exists $x_0 \in B_n$ such that $u_n(x_0)=0$. Then applying the Harnack inequality of \cite[Corollary 5.2]{T73} to the solution $u_n$ of \eqref{eq:2.2.5}, we obtain $u_n(x)=0$ for all $x \in B_n$, hence $T(u_n)=0$ on $\partial B_n$, which is a contradiction. Therefore, $u_n$ is strictly positive on $B_n$. Now let $\rho_n(x):= u_n(0)^{-1}u_n(x)$, $x\in B_n, n\in \mathbb N$. Then $\rho_n(0)=1$ and
$$
\int_{B_n} \langle \hat{A} \nabla \rho_n +\rho_n \hat{\mathbf{H}} , \;\nabla \varphi \rangle dx=0 \;\; \text{ for all } \varphi \in C_0^{\infty}(B_n).
$$
Fix $r>0$. Then, by \cite[Corollary 5.2]{T73}
$$
\sup_{x \in B_{2r}}\rho_n(x) \leq C_1 \inf_{x \in B_{2r}}\rho_n(x) \leq C_1 \; \text{ for all } n>2r,
$$
where $C_1>0$ is independent of $\rho_n$, $n>2r$.
By \cite[Lemma 5.2]{St65},
$$
\| \rho_n \|_{H^{1,2}(B_{r})} \leq C_2 \|\rho_n \|_{L^2(B_{2r})} \leq C_1 C_2 \, dx(B_{2r}), \; \text{ for all } n>2r,
$$
where $C_2$ is independent of $(\rho_n)_{n>2r}$. By \cite[Corollary 5.5]{GilbargTrudinger}
$$
\| \rho_n \|_{C^{0,\gamma}(\overline{B}_r)} \leq C_3 \sup_{B_{2r}}|\rho_n | \leq C_1 C_3,
$$
where $\gamma \in (0,1)$ and $C_3>0$ are independent of $(\rho_n)_{n>2r}$.
By weak compactness of balls in $H^{1,2}(B_r)$ and the Arzel\`a--Ascoli theorem, there exists $(\rho_{n,r})_{n \geq 1} \subset (\rho_n)_{n >2r}$ and $\rho_{(r)} \in H^{1,2}(B_r) \cap C(\overline{B}_r)$
such that as $n\to \infty$
$$
\rho_{n,r} \rightarrow \rho_{(r)} \;\text{ weakly in } H^{1,2}(B_r), \qquad \rho_{n,r} \rightarrow \rho_{(r)} \;\text{ uniformly on } \overline{B}_r.
$$
Choosing $(\rho_{n,k})_{n \geq 1} \supset (\rho_{n,k+1})_{n \geq 1}$, $k\in \mathbb N$, we get $\rho_{(k)}=\rho_{(k+1)}$ on $B_{k}$, hence we can well-define $\rho$ as
$$
\rho := \rho_{(k)} \text{ on } B_{k}, k\in \mathbb N.
$$
Finally, applying the Harnack inequality of \cite[Corollary 5.2]{T73}, it holds that $\rho(x)>0$ for all $x \in \mathbb R^d$.\\
(ii) Let $R>0$. Then
$$
\int_{B_{2R}} \langle \hat{A} \nabla \rho, \nabla \varphi \rangle dx = - \int_{B_{2R}} \langle \rho \hat{\mathbf{H}}, \nabla \varphi \rangle dx, \quad \forall \varphi \in C^{\infty}_0(B_{2R}).
$$
Since $\rho \hat{\mathbf{H}} \in L^p(B_{2R}, \mathbb R^d)$, we obtain $\rho \in H^{1,p}(B_R)$ by Theorem \ref{Theorem2.2.2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theo:2.2.7}] By Theorem \ref{Theorem2.2.4} applied with $\hat{A}=A+C^T$, there exists $\rho \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ with $\rho(x)>0$ for all $x \in \mathbb R^d$ such that the variational equation \eqref{eq:2.2.0a} holds.
Using integration by parts, we obtain from \eqref{eq:2.2.0a}
$$
-\int_{\mathbb R^d} \left(\frac12\mathrm{trace}\big ( A\nabla^2 f\big )+\big \langle\frac{1}{2}\nabla (A+C^{T})+ \mathbf{H}, \nabla f\big \rangle \right) \rho dx=0 , \quad \forall f \in C_0^{\infty}(\mathbb R^d).
$$
Letting $\mu=\rho dx$, \eqref{eq:2.2.8} follows. Since $\rho \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ and $\rho(x)>0$ for all $x \in \mathbb R^d$, we obtain $\sqrt{\rho} \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ with the help of the chain rule (\cite[Theorem 4.4(ii)]{EG15}). Moreover, $a_{ij} \in H^{1,2}_{loc}(\mathbb R^d)=H^{1,2}_{loc}(\mathbb R^d, \mu)$ for all $1 \leq i,j \leq d$ and $\mathbf{G}=\frac{1}{2}\nabla (A+C^{T})+ \mathbf{H} \in L^2_{loc}(\mathbb R^d, \mathbb R^d) = L_{loc}^2(\mathbb R^d, \mathbb R^d, \mu)$.
\end{proof}
\subsubsection{Discussion}
\label{subsec:2.2.3}
The converse problem of constructing and analyzing a partial differential operator $(L, C_0^\infty (\mathbb R^d))$
with suitable coefficients, given a prescribed infinitesimally invariant measure, appears in applications
of SDEs, e.g. to the sampling of probability distributions (see \cite{Hwang})
or more generally to ergodic control problems (see \cite{Borkar}). In the following remark we will briefly
discuss the applicability of our setting to this problem.
\begin{remark} \label{rem:2.2.4}
{\it In Theorem \ref{theo:2.2.7}, we derived under the assumption {\bf (a)} the existence of a nice density
$\rho$ such that \eqref{eq:2.2.8} holds. Conversely, if $\rho \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ for some $p\in (d,\infty)$ with
$\rho(x)>0$ for all $x\in \mathbb R^d$ is explicitly given, we can construct a large class of partial differential
operators $(L, C_0^{\infty}(\mathbb R^d))$ as in \eqref{eq:2.2.0first} satisfying condition {\bf (a)} and such
that $\mu=\rho dx$ is an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$, i.e. \eqref{eq:2.2.8}
holds. \\
More specifically, for any $A=(a_{ij})_{1 \leq i,j \leq d}$ and $C=(c_{ij})_{1 \leq i,j \leq d}$
satisfying condition {\bf (a)} of Section \ref{subsec:2.2.1} and any $\mathbf{\overline{B}} \in L_{loc}^p(\mathbb R^d, \mathbb R^d)$ satisfying
$$
\int_{\mathbb R^d} \langle \mathbf{\overline{B}}, \nabla \varphi \rangle \rho dx=0, \quad \text{ for all }
\varphi \in C_0^{\infty}(\mathbb R^d)
$$
it follows that $A$, $C$ and $\mathbf{H}:=\frac{(A+C^T) \nabla \rho}{2\rho}+\mathbf{\overline{B}}$ satisfy
condition {\bf (a)}. In particular (cf. \eqref{eq:2.1.5}, \eqref{eq:2.1.5a} and \eqref{form of G}) $\mathbf{B} =\mathbf{G}-\beta^{\rho,A}
= \beta^{\rho, C^T}+\mathbf{\overline{B}} \in L_{loc}^2(\mathbb R^d, \mathbb R^d, \mu)$, where
$$
\beta^{\rho, C^T} = \frac{1}{2} \nabla C^T +C^T \frac{\nabla \rho}{2 \rho}
$$
(see \eqref{divergence of row} and \eqref{divergence of row i} for the definition of $\nabla C^T$),
and $\rho dx$ is an
infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$, since by integration by parts
$$
\int_{\mathbb R^d} \langle \beta^{\rho, C^T}, \nabla \varphi \rangle \rho dx
= \frac12\int_{\mathbb R^d} \sum_{i,j=1}^{d}\rho c_{ij} \partial_i \partial_j \varphi dx=0, \quad \text{ for all }
\varphi \in C_0^{\infty}(\mathbb R^d),
$$
so that
$$
\int_{\mathbb R^d} \langle \mathbf{B}, \nabla \varphi \rangle \rho dx = 0, \quad \text{ for all } \varphi \in C_0^{\infty}(\mathbb R^d).
$$
In particular, \eqref{condition on mu}--\eqref{eq:2.1.4} hold, so that the results of Section \ref{sec2.1} are applicable.
}
\end{remark}
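\noindent
A standard instance of the construction in Remark \ref{rem:2.2.4} is the Ornstein--Uhlenbeck operator: for the prescribed density $\rho (x)=e^{-\|x\|^2/2}$, take $A=\mathrm{Id}$, $C\equiv 0$ and $\mathbf{\overline{B}}\equiv 0$, so that $\mathbf{H}=\frac{\nabla\rho}{2\rho}=-\frac x2$ and
$$
Lf=\frac12\Delta f-\frac12\langle x, \nabla f\rangle , \qquad f\in C_0^\infty (\mathbb R^d).
$$
Integration by parts shows directly that $\int_{\mathbb R^d}Lf\, e^{-\|x\|^2/2}\, dx =0$ for all $f\in C_0^\infty (\mathbb R^d)$, i.e. $\mu = e^{-\|x\|^2/2}dx$ is infinitesimally invariant for $(L, C_0^\infty (\mathbb R^d))$.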
\subsection{Regular solutions to the abstract Cauchy problem}
\label{sec2.3}
In this section, we investigate the regularity properties of $(T_t)_{t>0}$ as defined in Definition \ref{definition2.1.7}, as well as regularity properties of the corresponding resolvent. The semigroup regularity will play an important role in Chapter \ref{chapter_3} to construct an associated Hunt process that can start from every point in $\mathbb R^d$. The resolvent regularity will be used to derive a Krylov-type estimate for the associated Hunt process in Theorem \ref{theo:3.3}.
Throughout this section, we let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}.\\
Here to obtain the $L^{s}(\mathbb R^d,\mu)$-strong Feller property, $s\in [1, \infty]$, including the strong Feller property of $(P_t)_{t>0}$ (for both definitions see Definition \ref{def:2.3.1} below), we only need condition {\bf (a)} of Section \ref{subsec:2.2.1}. The conservativeness of $(T_t)_{t>0}$ is not needed. Our main strategy is to use H\"{o}lder regularity results and Harnack inequalities for variational solutions to elliptic and parabolic PDEs of divergence type. Indeed, we show that given a sufficiently regular function $f$, $\rho G_{\alpha} f$ and $\rho T_{\cdot} f$ are the variational solutions to elliptic and parabolic PDEs of divergence type, respectively, so that the results of \cite{St65} and \cite{ArSe} apply. \\
To obtain the regularity of $(T_t)_{t>0}$ in our case, one could alternatively apply \cite[Theorem 4.1]{BKR2}, which is based on Sobolev regularity for parabolic equations involving measures. But then it would be required that $a_{ij} \in H^{1,\widetilde{p}}_{loc}(\mathbb R^d)$ for all $1 \leq i,j \leq d$ and $\mathbf{G} \in L^{\widetilde{p}}_{loc}(\mathbb R^d, \mathbb R^d)$ for some $\widetilde{p}>d+2$, and the strong Feller property of the regularized version $(P_t)_{t>0}$ of $(T_t)_{t>0}$ may not be directly derivable without assuming the conservativeness of $(T_t)_{t>0}$. Proceeding in this way would hence be too restrictive.\\
At the end of this section we briefly
discuss related work on regularity results in the existing literature.
\begin{theorem} \label{theorem2.3.1}
Let $q=\frac{pd}{p+d}$, $p\in (d,\infty)$. Suppose {\bf (a)} of Section \ref{subsec:2.2.1} holds and let $g \in
\cup_{r\in [q,\infty]} L^r(\mathbb R^d,\mu)$ with $ g \geq 0$, $\alpha>0$. Then $G_{\alpha} g$ (see Definition \ref{definition2.1.7}) has a locally H\"{o}lder continuous version $R_{\alpha} g$ and for any open balls $U$, $V$ in $\mathbb R^d$, with $\overline{U} \subset V$,
\begin{eqnarray}
\| R_{\alpha}g \|_{C^{0, \gamma}(\overline{U})} \le c\Big (\| g \|_{L^q(V, \mu)} + \| G_{\alpha}g \|_{L^1(V, \mu)}\Big ), \label{eq:2.3.37}
\end{eqnarray}
where $c>0$ and $\gamma \in (0,1)$ are constants, independent of $g$.
\end{theorem}
\begin{proof}
Let $g \in C_0^{\infty}(\mathbb R^d)$ and $\alpha >0$. Then for all $\varphi \in C_0^{\infty}(\mathbb R^d)$,
\begin{eqnarray}\label{eq:2.3.38b}
\int_{\mathbb R^d} (\alpha- L_2' ) \varphi \cdot \big(G_{\alpha} g\big) \,d\mu = \int_{\mathbb R^d} G'_{\alpha}(\alpha-L_2') \varphi \cdot g \,d\mu=\int_{\mathbb R^d} \varphi g \,d\mu,
\end{eqnarray}
and it follows from \eqref{eq:2.1.5a}, Definition \ref{definition2.1.7}, \eqref{eq:2.1.3bis2'}, and \eqref{equation G with H}, that
\begin{eqnarray*}
L_2' \varphi&=& \frac{1}{2}\text{trace}(A \nabla^2 \varphi)+ \langle 2 \beta^{\rho,A}-\mathbf{G}, \nabla \varphi \rangle \\
&=& \frac12 \text{div}((A+C^T) \nabla \varphi )+ \langle -\frac12 \nabla (A+ C)+2 \beta^{\rho,A}-\mathbf{G}, \nabla \varphi \rangle \\
&=& \frac12 \text{div}((A+C^T) \nabla \varphi )+ \langle \frac{A \nabla \rho}{\rho}- \mathbf{H}, \nabla \varphi \rangle.
\end{eqnarray*}
Since by Theorem \ref{theorem2.1.5}, $G_{\alpha} g \in D(\overline{L})_b \subset D(\mathcal{E}^0) \subset H^{1,2}_{loc}(\mathbb R^d)$,
applying integration by parts to the left hand side of \eqref{eq:2.3.38b}, for any $\varphi \in C_0^{\infty}(\mathbb R^d)$,
$$
\int_{\mathbb R^d} \big \langle \frac12 (A+C) \nabla (\rho G_{\alpha} g) + (\rho G_{\alpha} g)(\mathbf{H}-\frac{A \nabla \rho}{\rho}), \nabla \varphi \big \rangle +\alpha(\rho G_{\alpha}g) \varphi dx = \int_{\mathbb R^d} (\rho g) \varphi dx.
$$
Suppose now that $g \geq 0$. Then since $\frac{1}{\rho}$ is locally H\"{o}lder continuous, by \cite[
Th\'eor\`eme 7.2, 8.2]{St65}, $G_{\alpha} g$ has a locally H\"{o}lder continuous version $R_{\alpha}g$ on $\mathbb R^d$ and there exists a constant $\gamma \in (0, 1-d/p)$, independent of $g$, such that
\begin{eqnarray*}
\| \rho R_{\alpha}g\|_{C^{0, \gamma}(\overline{U})} &\leq& c_1 \left( \|\rho G_{\alpha} g\|_{L^2(V)} + \| \rho g\|_{L^q(V)} \right) \\
&\leq& c_1 \left( c_2\inf_{V} (\rho R_{\alpha} g) +c_2\|\rho g\|_{L^q(V)} +\|\rho g\|_{L^q(V)} \right) \\
&\leq& c_3 \Big( \|\rho G_{\alpha} g\|_{L^1(V)}+ \| \rho g\|_{L^q(V)} \Big),
\end{eqnarray*}
where $c_1, c_2, c_3>0$ are constants, independent of $g$.
Since $\rho \in L^{\infty}(V)$ and $\frac{1}{\rho} \in C^{0,\gamma}(\overline{U})$, \eqref{eq:2.3.37} follows for all $g \in C_0^{\infty}(\mathbb R^d)$ with $g \geq 0$.\\
\noindent
Moreover, for such $g$, using the $L^r(\mathbb R^d, \mu)$-contraction property of $\alpha G_{\alpha}$ for $r \in [q, \infty)$ and H\"{o}lder's inequality,
\begin{eqnarray}
&&\| R_{\alpha}g \|_{C^{0, \gamma}(\overline{U})} \le c\left (\| g \|_{L^q(V, \mu)} + \| G_{\alpha}g \|_{L^1(V, \mu)}\right ) \label{eq:2.50new} \\
&\le& c \left( \|\rho\|^{\frac{1}{q}-\frac{1}{r}}_{L^1(V)}
\| g \|_{L^r(\mathbb R^d, \mu)} +\|\rho \|^{\frac{r-1}{r}}_{L^1(V)} \|G_{\alpha} g\|_{L^r(\mathbb R^d, \mu)} \right) \nonumber \\
&\le& c(\|\rho\|^{\frac{1}{q}-\frac{1}{r}}_{L^1(V)} \vee \frac{1}{\alpha}\|\rho \|_{L^1(V)}^{\frac{r-1}{r}}) \|g\|_{L^r(\mathbb R^d, \mu)}. \label{eq:2.51new}
\end{eqnarray}
\noindent
Now, suppose $g \in L^r(\mathbb R^d, \mu)$ for some $r \in [q, \infty)$ and $g \geq 0$. Choose $(g_n)_{n \geq 1} \subset C_0^{\infty}(\mathbb R^d) \cap \mathcal{B}^+(\mathbb R^d)$ with $\lim_{n \rightarrow \infty} g_n= g$ in $L^r(\mathbb R^d, \mu)$. Using a Cauchy sequence argument together with \eqref{eq:2.51new}, there exists $u^g \in C^{0, \gamma}(\overline{U})$ such that
\begin{equation} \label{eq:2.52new}
\lim_{n \rightarrow \infty} R_{\alpha} g_n =u^g \quad \text{ in $C^{0, \gamma}(\overline{U})$}.
\end{equation}
Since $U$ is an arbitrary open ball in $\mathbb R^d$, we can well-define
\begin{equation} \label{eq:2.53def}
R_{\alpha} g:= u^g \quad \text{ on $\mathbb R^d$},
\end{equation}
i.e. $R_{\alpha} g$ is the same for any chosen sequence $(g_n)_{n \geq 1}$ as above. Moreover, $R_{\alpha} g$ is a continuous version of $G_{\alpha} g$ by \eqref{eq:2.52new} and it follows from \eqref{eq:2.50new} that
\begin{eqnarray} \label{eq:2.54new}
\| R_{\alpha}g \|_{C^{0, \gamma}(\overline{U})} &\le& c\Big (\| g \|_{L^q(V, \mu)} + \| R_{\alpha} g \|_{L^1(V, \mu)}\Big ).
\end{eqnarray}
Finally, let $g \in L^{\infty}(\mathbb R^d, \mu)$ with $g \geq 0$ and $g_n:=1_{B_n} \cdot g \in L^1(\mathbb R^d, \mu)_b \subset L^{q}(\mathbb R^d, \mu)$, $n \geq 1$. Then $\lim_{n \rightarrow \infty} g_n=g$ a.e. By the sub-Markovian property of $(G_{\alpha})_{\alpha>0}$ and the continuity of $z \mapsto R_{\alpha} g_n(z)$ on $\mathbb R^d$, $(R_{\alpha} g_n(z))_{n \geq 1}$ is for each $z \in \mathbb R^d$ an increasing sequence, uniformly bounded by $\frac{1}{\alpha}\|g\|_{L^{\infty}(\mathbb R^d, \mu)}$. Applying Lebesgue's theorem, $(R_{\alpha}g_n)_{n \geq 1}$ is a Cauchy sequence in $L^1(V, \mu)$ and $(g_n)_{n \geq 1}$ is a Cauchy sequence in $L^q(V, \mu)$. By using a Cauchy sequence argument together with \eqref{eq:2.54new}, we can well-define $R_{\alpha}g$ on $\mathbb R^d$ as we did in \eqref{eq:2.52new} and \eqref{eq:2.53def}. Hence $R_{\alpha}g$ is a continuous version of $G_{\alpha}g$ and \eqref{eq:2.3.37} holds for all $g \in L^{\infty}(\mathbb R^d, \mu)$ with $g \geq 0$, as desired.
\end{proof}
\noindent
Let $g \in L^r(\mathbb R^d, \mu)$ for some $r \in [q, \infty]$ and $\alpha>0$. By splitting $g=g^+-g^-$, we define
\begin{equation}\label{resoldef}
R_{\alpha} g:=R_{\alpha}g^+ -R_{\alpha} g^-\quad \text{ on \,$\mathbb R^d$.}
\end{equation}
Then $R_{\alpha}g$ is a continuous version of $G_{\alpha}g$ and it follows from \eqref{eq:2.3.37} and the $L^r(\mathbb R^d, \mu)$-contraction property of $\alpha G_\alpha$ that
\begin{equation} \label{eq:2.3.38a}
\| R_{\alpha}g \|_{C^{0, \gamma}(\overline{U})} \leq c_4 \|g\|_{L^r(\mathbb R^d, \mu)},
\end{equation}
where $c_4>0$ is a constant, independent of $g$. Finally, let $f \in D(L_r)$ for some $r \in [q, \infty)$. Then $f = G_1 (1-L_r) f$, and $f$ has a
locally H\"{o}lder continuous version on $\mathbb R^d$ by Theorem \ref{theorem2.3.1}. Moreover, for any open ball $U$, \eqref{eq:2.3.38a} implies
\begin{equation} \label{eq:2.3.39}
\|f\|_{C^{0,\gamma}(\overline{U})} \leq c_4 \| (1-L_r) f\|_{L^r(\mathbb R^d, \mu)} \leq c_4 \|f\|_{D(L_r)}.
\end{equation}
Since also $T_t f \in D(L_r)$, $T_t f$ has a continuous $\mu$-version, say $P_{t}f$, and it follows from \eqref{eq:2.3.39} and the $L^r(\mathbb R^d, \mu)$-contraction property of $(T_t)_{t>0}$ that
\begin{equation} \label{eq:2.3.40}
\|P_t f\|_{C^{0, \gamma}(\overline{U})} \leq c_4 \|T_t f \|_{D(L_r)} \leq c_4 \|f\|_{D(L_r)}.
\end{equation}
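The second inequality in \eqref{eq:2.3.40} can be spelled out as follows, assuming the graph norm $\|f\|_{D(L_r)}=\|f\|_{L^r(\mathbb R^d, \mu)}+\|L_r f\|_{L^r(\mathbb R^d, \mu)}$ (for an equivalent norm the same bound holds up to a constant) and using the standard fact that $T_t L_r f = L_r T_t f$ for $f \in D(L_r)$:
$$
\|T_t f\|_{D(L_r)} = \|T_t f\|_{L^r(\mathbb R^d, \mu)} + \|T_t L_r f\|_{L^r(\mathbb R^d, \mu)} \leq \|f\|_{L^r(\mathbb R^d, \mu)} + \|L_r f\|_{L^r(\mathbb R^d, \mu)} = \|f\|_{D(L_r)}.
$$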
\begin{lemma} \label{eq:2.3.39a}
Let {\bf (a)} of Section \ref{subsec:2.2.1} be satisfied. Then for any $f\in \bigcup_{r\in [q,\infty)} D(L_r)$ the map
$$
(x,t)\mapsto P_t f(x)
$$
is continuous on $ \mathbb R^d\times [0,\infty)$, where $P_0 f:=f$ and $q=\frac{pd}{p+d}$, $p\in (d,\infty)$.
\end{lemma}
\begin{proof}
Let $f\in D(L_r)$ for some $r\in [q,\infty)$ and let $\left ((x_n,t_n)\right )_{n\ge 1}$ be a sequence in $\mathbb R^d\times [0,\infty)$ that converges to $(x_0,t_0)\in \mathbb R^d\times [0,\infty)$.
Let $B$ be an open ball, such that $x_n \in B$ for all $n \geq 1$. Then by \eqref{eq:2.3.39}
for all $n \geq 1$
\begin{eqnarray*}
\left | P_{t_n} f(x_n)-P_{t_0} f(x_0) \right | &\leq& \| P_{t_n} f- P_{t_0}f \,\|_{C(\overline{B})} +\left | P_{t_0} f(x_n)-P_{t_0} f(x_0) \right | \\[3pt]
&\leq& c_4\| P_{t_n} f- P_{t_0}f \,\|_{D(L_r)}+\big | P_{t_0} f(x_n)-P_{t_0} f(x_0) \big |.
\end{eqnarray*}
Using the $L^r(\mathbb R^d, \mu)$-strong continuity of $(T_t)_{t>0}$ and the continuity of $P_{t_0} f$ at $x_0$, the assertion follows.
\end{proof}
\begin{theorem} \label{theo:2.6}
Suppose {\bf (a)} of Section \ref{subsec:2.2.1} holds and that $f \in \cup_{s \in [1, \infty]}L^s(\mathbb R^d, \mu)$, $f \geq 0$. Then $T_t f$, $t>0$ (see Definition \ref{definition2.1.7}) has a continuous $\mu$-version $P_t f$ on $\mathbb R^d$ and $P_{\cdot}f(\cdot)$ is locally parabolic H\"{o}lder continuous on $\mathbb R^d \times (0, \infty)$. Moreover, for any bounded open sets $U$, $V$ in $\mathbb R^d$ with $\overline{U} \subset V$ and $0<\tau_3<\tau_1<\tau_2<\tau_4$, we have the following estimate:
\begin{equation} \label{eq:2.3.41}
\|P_{\cdot} f(\cdot)\|_{C^{\gamma; \frac{\gamma}{2}}(\overline{U} \times [\tau_1, \tau_2])} \leq C_4 \| P_{\cdot} f(\cdot) \|_{L^1( V \times (\tau_3, \tau_4), \mu\otimes dt) },
\end{equation}
where $C_4>0$, $\gamma \in (0,1)$ are constants, independent of $f$.
\end{theorem}
\begin{proof}
First assume $f \in D(\overline{L})_b \cap D(L_2) \cap D(L_q)$ with $f \geq 0$ and $q=\frac{pd}{p+d}$, $p\in (d,\infty)$. Set $u(x,t):=\rho(x) P_t f(x)$. Then by Lemma \ref{eq:2.3.39a}, $u \in C(\mathbb R^d \times [0, \infty))$. Let $B$ be an open ball in $\mathbb R^d$ and $T>0$.
Using Theorem \ref{theorem2.1.5}, one can see $u \in H^{1,2}(B \times (0,T))$. Let $\phi \in C_0^{\infty}(\mathbb R^d)$, $\psi \in C^{\infty}_0((0,T))$ and $\varphi:=\phi \psi$. Then
$$
\frac{d}{dt} \int_{\mathbb R^d} \phi T_t f d\mu=
\int_{\mathbb R^d} \phi L_2 T_t f d\mu =\int_{\mathbb R^d} L_2' \phi \cdot T_t f d\mu,
$$
hence using integration by parts,
\begin{eqnarray} \label{eq:2.3.42}
0=-\int _0^T\int_{\mathbb R^d} \left ( \partial _t \varphi +L'_2\varphi \right ) u\, dxdt.
\end{eqnarray}
By $C^2$-approximation with finite linear combinations $\sum\phi_i \psi_i$, \eqref{eq:2.3.42} extends to all $\varphi \in C_0^{\infty}(\mathbb R^d \times (0,T))$. Applying integration by parts to \eqref{eq:2.3.42}, for all $\varphi \in C_0^{\infty}(\mathbb R^d \times (0,T))$ (see proof of Theorem \ref{theorem2.3.1}),
\begin{equation} \label{eq:2.3.45}
0=\int_0^T\int_{\mathbb R^d} \left ( \frac{1}{2}\langle (A+C) \nabla u, \nabla \varphi \rangle + u \langle \mathbf{H}-\frac{A \nabla \rho}{\rho}, \nabla \varphi \rangle -u\partial_t\varphi \right ) dxdt.
\end{equation}
Let $\bar{x} \in \mathbb R^d$ and $\bar{t} \in (0,T)$. Take a sufficiently small $r>0$ so that $\bar{t}-(3r)^2>0$. Then by \cite[Theorems 3 and 4]{ArSe},
\begin{eqnarray*}
\|u\|_{C^{\gamma;\frac{\gamma}{2}}(\bar{R}_{\bar{x}}(r) \times [\bar{t}-r^2, \bar{t}])} &\leq& C_1\sup\big \{u(z)\, :\,z\in R_{\bar{x}}(3r) \times (\bar{t}-(3r)^2, \bar{t})\big \} \\
&\leq& C_1 C_2 \inf \big \{u(z)\, : \,z\in R_{\bar{x}}(3r) \times \big (\bar{t}+6(3r)^2, \bar{t}+7(3r)^2\big )\big \}\\[3pt]
&\leq & C_1 C_2 C_3 \|u\|_{L^1\big(R_{\bar{x}}(3r) \times (\bar{t}+6(3r)^2, \bar{t}+7(3r)^2) \big)},
\end{eqnarray*}
where $\gamma \in (0, 1-d/p]$, $C_1, C_2, C_3>0$ are constants, independent of $u$. Using a partition of unity and $\frac{1}{\rho} \in C^{0, \gamma}(\bar{R}_{\bar{x}}(3r))$, \eqref{eq:2.3.41} holds for all $f \in D(\overline{L})_b \cap D(L_2) \cap D(L_q)$ with $f \geq 0$.
Moreover, using the $L^1(\mathbb R^d, \mu)$-contraction property of $(T_t)_{t>0}$, for all $f \in D(\overline{L})_b \cap D(L_2) \cap D(L_q)$ with $f \geq 0$, $q=\frac{pd}{p+d}$, it holds that
\begin{eqnarray}
&&\|P_{\cdot} f(\cdot)\|_{C^{\gamma; \frac{\gamma}{2}}(\overline{U} \times [\tau_1, \tau_2])} \leq C_4 \| P_{\cdot} f(\cdot) \|_{L^1( V \times (\tau_3, \tau_4), \mu\otimes dt) } \nonumber \\
&&\leq C_4 \int_{\tau_3}^{\tau_4} \| P_t f \|_{L^1(V, \mu)} dt \leq C_4(\tau_4-\tau_3) \|f\|_{L^1(\mathbb R^d, \mu)}. \label{eq:2.56new}
\end{eqnarray}
Now let $f \in L^1(\mathbb R^d, \mu)_b$ with $f \geq 0$. Then $f_n:=nG_n f \in D(\overline{L})_b \cap D(L_2) \cap D(L_q)$, $n\geq 1$, $f_n \geq 0$ by the sub-Markovian property of $(G_{\alpha})_{\alpha>0}$ and $\lim_{n \rightarrow \infty} f_n=f$ in $L^1(\mathbb R^d, \mu)$ by the $L^1(\mathbb R^d, \mu)$-strong continuity of $(G_{\alpha})_{\alpha>0}$.\\
Using a Cauchy sequence argument together with \eqref{eq:2.56new}, there exists $u^f \in C^{\gamma;\frac{\gamma}{2}}( \overline{U} \times [\tau_1, \tau_2])$ such that
\begin{equation} \label{eq:2.57a}
\lim_{n \rightarrow \infty} P_{\cdot} f_n(\cdot)=u^f \;\; \text{ in } \;C^{\gamma; \frac{\gamma}{2}}(\overline{U} \times [\tau_1, \tau_2]).
\end{equation}
Since $U \times [\tau_1, \tau_2]$ is arbitrarily chosen in $\mathbb R^d \times (0, \infty)$, given $t>0$
we can define
\begin{equation} \label{eq:2.57b}
P_t f:=u^f(\cdot, t), \quad \text{ on $\mathbb R^d$}.
\end{equation}
Then $P_t f$ is a continuous version of $T_t f$ by \eqref{eq:2.57a} and it follows from \eqref{eq:2.56new} that \eqref{eq:2.3.41} holds for all $f \in L^1(\mathbb R^d, \mu)_b$. Moreover, for $r \in [1, \infty)$, using the $L^r(\mathbb R^d, \mu)$-contraction property of $(T_t)_{t>0}$ and H\"{o}lder's inequality, we get
\begin{eqnarray}
&&\|P_{\cdot} f(\cdot)\|_{C^{\gamma; \frac{\gamma}{2}}(\overline{U} \times [\tau_1, \tau_2])} \leq C_4 \| P_{\cdot} f(\cdot) \|_{L^1( V \times (\tau_3, \tau_4), \mu\otimes dt) } \label{eq:2.58a} \\
&& \leq C_4 \int_{\tau_3}^{\tau_4} \| P_t f \|_{L^r(V, \mu)} \| \rho\|^{\frac{r-1}{r}}_{L^1(V)} dt \leq C_4(\tau_4-\tau_3) \| \rho\|^{\frac{r-1}{r}}_{L^1(V)} \|f\|_{L^r(\mathbb R^d, \mu)}. \qquad \;\; \text{} \label{eq:2.58new}
\end{eqnarray}
\\
Now let $f \in L^r(\mathbb R^d, \mu)$ with $f \geq 0$ and $r \in [1, \infty)$. Then there exists $(f_n)_{n \geq 1} \subset L^1(\mathbb R^d, \mu)_b \cap \mathcal{B}^+(\mathbb R^d)$ such that $\lim_{n \rightarrow \infty} f_n = f$ in $L^r(\mathbb R^d, \mu)$. By using a Cauchy sequence argument together with \eqref{eq:2.58new}, we can well-define $P_t f$ on $\mathbb R^d$ as we did in \eqref{eq:2.57a} and \eqref{eq:2.57b}, so that $P_t f$ is a continuous version of $T_t f$ and \eqref{eq:2.3.41} holds for all $f \in \cup_{r \in [1, \infty)}L^r(\mathbb R^d, \mu)$ with $f \geq 0$ by \eqref{eq:2.58a}.
\\
Finally, let $f \in L^{\infty}(\mathbb R^d, \mu)$ with $f \geq 0$ and $f_n:=1_{B_n} \cdot f \in L^1(\mathbb R^d, \mu)_b$ for $n \geq 1$. Then $\lim_{n \rightarrow \infty} f_n=f$ a.e. By the sub-Markovian property of $(T_t)_{t>0}$ and the continuity of $z \mapsto P_t f_n(z)$ on $\mathbb R^d$ for each $t>0$, $(P_t f_n(z))_{n \geq 1}$ is for each $t>0$ and $z \in \mathbb R^d$ an increasing sequence, uniformly bounded by $\|f\|_{L^{\infty}(\mathbb R^d, \mu)}$. Therefore, applying Lebesgue's theorem, $(P_{\cdot} f_n(\cdot))_{n \geq 1}$ is a Cauchy sequence in $L^1( V \times (\tau_3, \tau_4), \mu\otimes dt)$. By using a Cauchy sequence argument together with \eqref{eq:2.58a}, we can define $P_t f$ on $\mathbb R^d$ as we did in \eqref{eq:2.57a} and \eqref{eq:2.57b}. Then $P_t f$ is a continuous version of $T_t f$ and \eqref{eq:2.3.41} holds for all $f \in L^{\infty}(\mathbb R^d, \mu)$ with $f \geq 0$, as desired.
\end{proof}
\noindent
For $f\in L^s(\mathbb R^d, \mu)$ with $s \in [1, \infty]$ and $t>0$, by splitting $f=f^+-f^-$, we define
\begin{equation} \label{semidef}
P_t f:= P_{t} f^+ - P_{t}f^- \quad \text{ on\, $\mathbb R^d$.}
\end{equation}
Then by Theorem \ref{theo:2.6}, $P_t f$ is a continuous version of $T_t f$ and for any bounded open subset $U$ of $\mathbb R^d$ and $0<\tau_1< \tau_2<\infty$, $P_{\cdot} f(\cdot) \in C^{\gamma; \frac{\gamma}{2}}(\overline{U} \times [\tau_1, \tau_2])$, where $\gamma \in (0,1)$ is a constant as in Theorem \ref{theo:2.6}. Moreover, applying the $L^s(\mathbb R^d, \mu)$-contraction property of $(T_t)_{t>0}$ for $s \in [1, \infty]$ and H\"{o}lder's inequality to \eqref{eq:2.3.41}, for any open subset $V$ of $\mathbb R^d$ with $\overline{U} \subset V$, $0<\tau_3<\tau_1<\tau_2<\tau_4<\infty$ and $t \in [\tau_1, \tau_2]$, it follows that
\begin{eqnarray}\label{eq:2.3.46}
\|P_{t} f \|_{C^{0,\gamma}(\overline{U})} & \leq & 2 C_4 (\tau_4-\tau_3) \|\rho \|_{L^1(V)}^{\frac{s-1}{s}} \cdot \|f\|_{L^s(\mathbb R^d, \mu) },
\end{eqnarray}
where $C_4>0$ is the constant of Theorem \ref{theo:2.6} and $\frac{s-1}{s}:=1$ if $s=\infty$ (cf. \eqref{eq:2.58new}).
The H\"older exponent $\gamma$ in \eqref{eq:2.3.46} may depend on the domains and may hence vary for different domains. But the important fact that we need for further considerations is that for a given domain, the constant $\gamma \in (0,1)$ and the constant in front of $\|f\|_{L^s(\mathbb R^d, \mu)}$ in \eqref{eq:2.3.46} are independent of $f$.
\\ \\
In a final remark, we discuss some related regularity results derived previously in the literature.
In order to fix some terminology used there, we first give a definition.
\begin{definition}\label{def:2.3.1}
(i) Let $r\in [1,\infty]$. A family of positive linear operators $(S_t)_{t>0}$ defined on $L^r(\mathbb R^d,\mu)$ is said to be
{\bf $L^r(\mathbb R^d,\mu)$-strong Feller}\index{$L^r(\mathbb R^d,\mu)$-strong Feller}, if $S_t(L^{r}(\mathbb R^d,\mu)) \subset C(\mathbb R^d)$ for any $t>0$.\\[3pt]
(ii) A family of positive linear operators $(S_t)_{t>0}$ defined on $\mathcal{B}_b(\mathbb R^d)$ is said to be {\bf strong Feller}\index{strong Feller}, if $S_t (\mathcal{B}_b(\mathbb R^d)) \subset C_b(\mathbb R^d)$ for any $t>0$.
In particular, the $L^{\infty}(\mathbb R^d,\mu)$-strong Feller property implies the strong Feller property.\\[3pt]
(iii) A family of positive linear operators $(S_t)_{t\ge 0}$ defined on $C_{\infty}(\mathbb R^d)=\{f \in C_b(\mathbb R^d): \lim_{\|x\| \rightarrow \infty} f(x)= 0 \}$ with $S_0=id$, where $C_{\infty}(\mathbb R^d)$ is equipped with the sup-norm $\|\cdot\|_{C_b(\mathbb R^d)}$, is called a {\bf Feller semigroup}\index{semigroup ! Feller}, if:
\begin{itemize}
\item[(a)] $\|S_t f\|_{C_b(\mathbb R^d)}\le \|f\|_{C_b(\mathbb R^d)}$ for any $t>0$,
\item[(b)] $\lim_{t\to 0}S_t f=f$ in $C_{\infty}(\mathbb R^d)$ for any $f\in C_{\infty}(\mathbb R^d)$,
\item[(c)] $S_t(C_{\infty}(\mathbb R^d)) \subset C_{\infty}(\mathbb R^d)$ for any $t>0$.
\end{itemize}
\end{definition}
\noindent
If $(S_t)_{t\ge 0}$ is a Feller semigroup\index{semigroup ! Feller}, then by \cite[Chapter III. (2.2) Proposition]{RYor} and \cite[(9.4) Theorem]{BlGe} there exists a Hunt process (see Definition \ref{def:3.1.1}(ii)) whose transition semigroup is determined by $(S_t)_{t\ge 0}$.
\begin{remark}\label{rem:2.30new}
{\it In \cite{AKR}, \cite{BGS13}, and \cite{ShTr13a}, regularity properties of the resolvent and semigroup associated with a symmetric Dirichlet form are studied. For instance, if one considers a symmetric Dirichlet form defined as the closure of
\begin{equation} \label{eq:2.3.48}
\frac12 \int_{\mathbb R^d} \langle \nabla f, \nabla g \rangle d\mu, \quad f, g \in C_0^{\infty}(\mathbb R^d),
\end{equation}
then, provided $\rho$ has enough regularity, the drift coefficient of the associated generator has the form $\mathbf{G}=\nabla \phi$, where $\phi=\frac12 \ln \rho$. In \cite{AKR} and \cite{BGS13}, using Sobolev regularity for elliptic equations involving measures, $L^r(\mathbb R^d,\mu)$-strong Feller properties of the corresponding resolvent are shown for $r \in (d, \infty]$. In those cases, the $L^s(\mathbb R^d,\mu)$-strong Feller properties of the associated semigroup, $s \in (d, \infty)$, follow immediately from the analyticity of symmetric semigroups. Conservativeness of the semigroup (see for instance \cite[Proposition 3.8]{AKR}) is assumed in order to derive the strong Feller property of the regularized semigroup $(P_t)_{t>0}$ (see Definition \ref{def:2.3.1}).
Similarly, in the sectorial case \cite{RoShTr}, analyticity and conservativeness of the semigroup are used to derive its $L^s(\mathbb R^d,\mu)$-strong Feller properties, $s \in (d, \infty]$, and in \cite[Section 3]{ShTr13a} the special properties of Muckenhoupt weights, which in particular imply conservativeness, lead to the strong Feller property of the semigroup via the joint continuity of the heat kernel and its pointwise upper bound.\\
We mention three further references in which mainly analytical methods are used to construct a semigroup with the strong Feller property. In \cite{MPW02}, a sub-Markovian semigroup on $\mathcal{B}_b(\mathbb R^d)$ is constructed under the assumption that the diffusion and drift coefficients of the associated generator are locally H\"{o}lder continuous on $\mathbb R^d$, and the strong Feller property of the semigroup is derived in \cite[Corollary 4.7]{MPW02} by interior Schauder estimates for parabolic PDEs of non-divergence type. Similarly, the strong Feller property is derived in the presence of an additional zero-order term in \cite[Proposition 2.2.12]{LB07}. In \cite[Theorem 1]{Ki18}, a sub-Markovian and analytic $C_0$-semigroup of contractions on $L^p(\mathbb R^d)$ is constructed, where $p$ lies in a certain open subinterval of $(d-1,\infty)$, $d \geq 3$. The semigroup is associated with the partial differential operator $\Delta+\langle \sigma, \nabla \rangle$, where $\sigma$ is allowed to be in a certain nice class of measures, including absolutely continuous ones with drift components in $L^d(\mathbb R^d)+L^{\infty}(\mathbb R^d)$. It is shown in \cite[Theorem 2]{Ki18} that the associated resolvent has the $L^p(\mathbb R^d)$-strong Feller property. Moreover, the semigroup is also shown to be Feller in \cite[Theorem 2]{Ki18}, so that the existence of an associated Hunt process follows (cf. Definition \ref{def:2.3.1}(iii)).
\\
In \cite[Section 2.3]{Scer}, some probabilistic techniques are used to show the strong Feller property of the semigroup, but the required conditions on the coefficients of the associated generator are quite restrictive. For instance, it is at least required that the diffusion coefficient is continuous and globally uniformly strictly elliptic and that the drift coefficient is locally Lipschitz continuous. We additionally refer to \cite{Bha}, where a possibly explosive diffusion process associated with $(L, C_0^{\infty}(\mathbb R^d))$ is constructed, where $A=(a_{ij})_{1 \leq i,j \leq d}$ satisfies \eqref{eq:2.1.2}, with $a_{ij} \in C(\mathbb R^d)$ for all $1 \leq i,j \leq d$ and $\mathbf{G} \in L^{\infty}_{loc}(\mathbb R^d, \mathbb R^d)$. In that case, the strong Feller property is derived in \cite[Lemma 2.5]{Bha} under the assumption that the explosion time of the diffusion process is infinite (a.s.) for some initial condition $x_0 \in \mathbb R^d$.}
\end{remark}
\subsection{Irreducibility of solutions to the abstract Cauchy problem}
\label{sec2.4}
In order to investigate the ergodic behavior of the regularized semigroup $(P_t)_{t>0}$ in Section \ref{subsec:3.2.3}, irreducibility in the probabilistic sense, as defined in the following definition, together with the strong Feller property are important properties. Throughout this section, we let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}. \\
\begin{definition} \label{def:2.4.4}
$(P_t)_{t>0}$ (see Theorem \ref{theo:2.6}) is said to be {\bf irreducible in the probabilistic sense}\index{irreducible ! in the probabilistic sense}, if for any $x \in \mathbb R^d$, $t>0$, $A \in \mathcal{B}(\mathbb R^d)$ with $\mu(A)>0$, we have $P_t 1_A (x)>0$.
\end{definition}
\noindent
In this section, our main goal is to show the irreducibility in the probabilistic sense (Proposition \ref{prop:2.4.2}), which implies
{\it irreducibility in the classical sense}, i.e. for any $x \in \mathbb R^d$, $t>0$ and any nonempty open set $U \subset \mathbb R^d$, we have $P_t 1_U(x)>0$.\\
To further explain the connections between different notions related to irreducibility in the literature
and our work, let us introduce some notions related to generalized and symmetric Dirichlet form theory and
in particular to our semigroup $(T_t)_{t>0}$.
\begin{definition}
\label{def:2.4.4bis}
$A\in \mathcal{B}(\mathbb R^d)$ is called a {\bf weakly invariant set} relative to $(T_t)_{t>0}$ (see Definition \ref{definition2.1.7}), if
$$
T_t (f \cdot 1_A ) (x)=0,\ \ \text{for } \mu \text{-a.e.} \ \ x\in \mathbb R^d\setminus A,
$$
for any $t> 0$, $f\in L^2(\mathbb R^d,\mu)$. $(T_t)_{t>0}$ is said to be {\bf strictly irreducible}\index{irreducible ! strictly}, if for any
weakly invariant set $A$ relative to $(T_t)_{t>0}$, we have $\mu(A)=0$ or $\mu(\mathbb R^d\setminus A)=0$.\\
\end{definition}
$A \in \mathcal{B}(\mathbb R^d)$ is called a {\it strongly invariant set} relative to $(T_t)_{t>0}$, if
$$
T_t 1_A f = 1_A T_t f, \quad \text{$\mu$-a.e.}
$$
for any $t>0$ and $f \in L^2(\mathbb R^d, \mu)$. $(T_t)_{t>0}$ is said to be {\it irreducible}, if for any
strongly invariant set $A$ relative to $(T_t)_{t>0}$, we have $\mu(A)=0$ or $\mu(\mathbb R^d\setminus A)=0$. One
can check that $A \in \mathcal{B}(\mathbb R^d)$ is a strongly invariant set relative to $(T_t)_{t>0}$, if and only
if $A$ and $\mathbb R^d\setminus A$ are weakly invariant sets relative to $(T_t)_{t>0}$. Therefore, if $(T_t)_{t>0}$ is
strictly irreducible, then $(T_t)_{t>0}$ is irreducible. One can also check that $A \in \mathcal{B}(\mathbb R^d)$
is a weakly invariant set relative to $(T_t)_{t>0}$, if and only if $\mathbb R^d\setminus A$ is a weakly invariant set
relative to $(T'_t)_{t>0}$. Hence, if $(T_t)_{t>0}$ is associated with a symmetric Dirichlet form, then the
strict irreducibility of $(T_t)_{t>0}$ is equivalent to the irreducibility of $(T_t)_{t>0}$.
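The first of these claims can be verified directly. If $A$ is a strongly invariant set relative to $(T_t)_{t>0}$, then for any $t>0$ and $f \in L^2(\mathbb R^d, \mu)$,
$$
T_t(f \cdot 1_A) = 1_A\, T_t f = 0 \quad \mu\text{-a.e. on } \mathbb R^d \setminus A,
$$
and likewise $T_t(f \cdot 1_{\mathbb R^d \setminus A}) = T_t f - 1_A\, T_t f = 1_{\mathbb R^d \setminus A}\, T_t f = 0$ $\mu$-a.e. on $A$, so that $A$ and $\mathbb R^d \setminus A$ are weakly invariant sets relative to $(T_t)_{t>0}$. Conversely, if $A$ and $\mathbb R^d \setminus A$ are weakly invariant, then $T_t(f \cdot 1_A) = 1_A\, T_t(f \cdot 1_A)$ and $T_t(f \cdot 1_{\mathbb R^d \setminus A}) = 1_{\mathbb R^d \setminus A}\, T_t(f \cdot 1_{\mathbb R^d \setminus A})$ $\mu$-a.e., whence
$$
1_A\, T_t f = 1_A\, T_t(f \cdot 1_A) + 1_A\, T_t(f \cdot 1_{\mathbb R^d \setminus A}) = T_t(f \cdot 1_A) \quad \mu\text{-a.e.},
$$
i.e. $A$ is strongly invariant relative to $(T_t)_{t>0}$.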
\begin{remark}\label{rem:2.4.1}
{\it In the symmetric case (see \cite{FOT}), it is shown in \cite[Lemma 1.6.4]{FOT} that if $(T_t)_{t>0}$ is associated with a symmetric Dirichlet form and $(T_t)_{t>0}$ is irreducible, then $(T_t)_{t>0}$ is either recurrent or transient (see Definition \ref{def:3.2.2.2} below). Moreover, it is known from \cite[Exercise 4.6.3]{FOT} that if $(T_t)_{t>0}$ is associated with a symmetric Dirichlet form and has the strong Feller property, then $(T_t)_{t>0}$ is irreducible. Since in our case the associated generator may be non-symmetric and non-sectorial, the above results dealing with symmetric Dirichlet form theory may not apply. Therefore, we use the stronger concept of strict irreducibility of $(T_t)_{t>0}$ covered in \cite{GT2} and originally due to \cite{Ku11}. In \cite[Section 3.2.3]{GT2}, under the assumption that $\mu$ is a Muckenhoupt $\mathcal{A}_{\beta}$-weight, $\beta \in [1, 2]$, and that $(T_t)_{t>0}$ is associated with a symmetric Dirichlet form defined as the closure of \eqref{eq:2.3.48}, the pointwise lower bound of the associated heat kernel leads to the strict irreducibility of $(T_t)_{t>0}$. }
\end{remark}
\noindent
Here, the strict irreducibility of $(T_t)_{t>0}$ merely follows under assumption {\bf (a)} of Section \ref{subsec:2.2.1}. Namely, we show the irreducibility in the probabilistic sense in Lemma \ref{lem:2.7}, which implies the strict irreducibility by Lemma \ref{prop:2.2}. As in the case of Section \ref{sec2.3}, for a sufficiently regular function $f$, $\rho T_{\cdot} f$ is a variational solution to a parabolic PDE of divergence type. We may hence apply the pointwise parabolic Harnack inequality of \cite[Theorem 5]{ArSe}, which is a main ingredient in the derivation of our results.
\begin{lemma} \label{prop:2.2}
Suppose {\bf (a)} of Section \ref{subsec:2.2.1} holds. If $(P_t)_{t>0}$ is irreducible in the probabilistic sense, then $(T_t)_{t>0}$ is strictly irreducible.
\end{lemma}
\begin{proof}
Let $t_0>0$ and $A \in \mathcal{B}(\mathbb R^d)$ be a weakly invariant set relative to $(T_t)_{t>0}$. Let $f_n:=1_{B_n} \in L^2(\mathbb R^d, \mu)$. Then $T_{t_0}(f_n 1_A)(x)=0$ for $\mu$-a.e. $x \in \mathbb R^d \setminus A$, for all $n \in \mathbb N$. Since $f_n \nearrow 1_{\mathbb R^d}$, we have $T_{t_0} (f_n 1_A) \nearrow T_{t_0} 1_A$ $\mu$-a.e. Thus, $T_{t_0} 1_A(x)=0$ for $\mu$-a.e. $x \in \mathbb R^d \setminus A$, so that $P_{t_0} 1_A(x)=0$ for $\mu$-a.e. $x \in \mathbb R^d \setminus A$.
\\
Now suppose that $\mu(A)>0$ and $\mu(\mathbb R^d \setminus A)>0$. Then there exists $x_0 \in \mathbb R^d \setminus A$ such that $P_{t_0}1_A(x_0)=0$, which is a contradiction, since $(P_t)_{t>0}$ is irreducible in the probabilistic sense. Therefore, we have $\mu(A)=0$ or $\mu(\mathbb R^d \setminus A)=0$, as desired.
\end{proof}
\begin{lemma}\label{lem:2.7}
Suppose {\bf (a)} of Section \ref{subsec:2.2.1} holds.
\begin{itemize}
\item [(i)] Let $A \in \mathcal{B}(\mathbb R^d)$ be such that $P_{t_0} 1_A (x_0)=0$ for some $t_0>0$ and $x_0\in \mathbb R^d$. Then $\mu(A)=0$.
\item [(ii)] Let $A \in \mathcal{B}(\mathbb R^d)$ be such that $P_{t_0} 1_A (x_0)=1$ for some $t_0>0$ and $x_0\in \mathbb R^d$. Then $P_t 1_{A}(x)=1$ \;for all $(x,t) \in \mathbb R^d \times (0,\infty)$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) Suppose that $\mu(A)>0$. Choose $r>0$ so that
$$
0<\mu(A \cap B_r(x_0))<\infty.
$$
Let $u:=\rho P_{\cdot}1_{A \cap B_r(x_0)}$. Then $0 \leq u(x_0, t_0) \leq \rho(x_0) P_{t_0}1_A(x_0)=0$. Let $f_n:=nG_n 1_{A \cap B_r(x_0)}$ and $u_n:=\rho P_{\cdot} f_n$. Note that $f_n \in D(\overline{L})_b \cap D(L_2) \cap D(L_q)$ and $\lim_{n \rightarrow \infty} f_n =1_{A \cap B_r(x_0)}$ in $L^1(\mathbb R^d, \mu)$. Let $U$ be a bounded open set in $\mathbb R^d$ and $\tau_1, \tau_2 \in (0, \infty)$ with $\tau_1<\tau_2$. By Theorem \ref{theo:2.6},
\begin{equation}\label{eq:2.4.46}
\lim_{n \rightarrow \infty} u_n = u \;\; \text{ in } C(\overline{U} \times [\tau_1, \tau_2]).
\end{equation}
Fix $T>t_0$ and $U\supset \overline{B}_{r+1}(x_0)$. Then by \eqref{eq:2.3.45}, $u_n \in H^{1,2}(U \times (0,T))$ satisfies
for all $\varphi \in C_0^{\infty}(U\times (0, T))$
\begin{eqnarray*}
\int_0^T\int_{U} \left ( \frac{1}{2}\langle (A+C) \nabla u_n, \nabla \varphi \rangle + u_n \langle \mathbf{H}-\frac{A \nabla \rho}{\rho}, \nabla \varphi \rangle -u_n\partial_t\varphi \right ) dxdt=0.
\end{eqnarray*}
Take arbitrary but fixed $(x,t) \in B_{r}(x_0) \times (0, t_0)$. By \cite[Theorem 5]{ArSe}
\begin{equation}\label{eq:2.4.47}
0 \leq u_n(x,t) \leq u_n(x_0, t_0)\;\exp \left ( C_1 \Big(\frac{\|x_0-x\|^2}{t_0-t}+ \frac{t_0-t}{\min(1,t)} +1 \Big)\right),
\end{equation}
where $C_1>0$ is a constant independent of $n \in \mathbb N$. Applying \eqref{eq:2.4.46} with $\overline{U} \times [\tau_1, \tau_2] \supset \overline{B}_{r+1}(x_0) \times [t,t_0]$ to \eqref{eq:2.4.47}, we have $u(x, t)=0$. Thus, $ P_t 1_{A\cap B_{r}(x_0)}(x)=0$ for all $(x,t) \in B_{r}(x_0) \times (0, t_0)$, so that by strong continuity inherited from $(T_t)_{t>0}$ (see Theorem \ref{theo:2.6} and Definition \ref{definition2.1.7})
$$
0=\int_{\mathbb R^d} 1_{A \cap B_{r}(x_0)} P_t 1_{A\cap B_{r}(x_0)} d\mu \underset{\text{as } t \rightarrow 0+}{ \; \longrightarrow} \mu(B_r(x_0) \cap A)>0,
$$
which is a contradiction. Therefore, we must have $\mu(A)=0$.\\
(ii) Let $y \in \mathbb R^d$ and $0<s<t_0$ be arbitrary but fixed, $r:=2\|x_0-y\|$ and let $B_m$ be an open ball in $\mathbb R^d$ with $A \cap B_m \neq \emptyset$. Let $g_n:= nG_n 1_{A \cap B_m}$. Then $g_n \in D(\overline{L})_b \cap D(L_2) \cap D(L_q)$ and $\lim_{n \rightarrow \infty} g_n = 1_{A \cap B_m}$ in $L^1(\mathbb R^d, \mu)$. By Theorem \ref{theo:2.6},
\begin{equation}
\lim_{n \rightarrow \infty} P_{\cdot} g_n = P_{\cdot} 1_{A \cap B_m} \;\; \text{ in } C(\overline{B}_r(x_0) \times [s/2, 2t_0]).
\end{equation}
Now fix $T>t_0$ and $U \supset \overline{B}_{r+1}(x_0)$. Using integration by parts and \eqref{eq:2.2.0a}, for all $\varphi \in C_0^{\infty}(U \times (0,T))$,
\begin{eqnarray}
&&\int_0^T\int_{U} \left ( \frac{1}{2}\langle (A+C) \nabla \rho, \nabla \varphi \rangle + \rho \langle \mathbf{H}-\frac{A \nabla \rho}{\rho}, \nabla \varphi \rangle -\rho\partial_t\varphi \right ) dxdt \nonumber \\
&&=-\int_0^T \int_{U} \langle \frac12 (A+C^T) \nabla \rho - \rho \mathbf{H}, \nabla \varphi \rangle dx dt = 0. \label{eq:2.4.49}
\end{eqnarray}
By \eqref{eq:2.3.45}, $\rho P_{\cdot} g_n \in H^{1,2}(U \times (0,T))$ satisfies for all $\varphi \in C_0^{\infty}(U\times (0, T))$
\begin{equation} \label{eq:2.4.50}
\int_0^T\int_{U} \left ( \frac{1}{2}\langle (A+C) \nabla (\rho P_{\cdot} g_n), \nabla \varphi \rangle + (\rho P_{\cdot} g_n) \langle \mathbf{H}-\frac{A \nabla \rho}{\rho}, \nabla \varphi \rangle -(\rho P_{\cdot} g_n)\partial_t\varphi \right ) dxdt=0.
\end{equation}
Now let $u_n(x,t):=\rho(x) \left(1- P_t g_n (x) \right)$. Then $u_n \in H^{1,2}(U \times (0,T))$ and $u_n \geq 0$. Subtracting $\eqref{eq:2.4.50}$ from $\eqref{eq:2.4.49}$ implies
\begin{equation*}
\int_0^T\int_{U} \left ( \frac{1}{2}\langle (A+C) \nabla u_n, \nabla \varphi \rangle + u_n \langle \mathbf{H}-\frac{A \nabla \rho}{\rho}, \nabla \varphi \rangle -u_n\partial_t\varphi \right ) dxdt=0.
\end{equation*}
Thus, by \cite[Theorem 5]{ArSe}
$$
0 \leq u_n(y,s) \leq u_n(x_0, t_0)\; \exp \left ( C_2 \Big(\frac{\|x_0-y\|^2}{t_0-s}+ \frac{t_0-s}{\min(1,s)} +1 \Big) \right),
$$
where $C_2>0$ is a constant independent of $n \in \mathbb N$. Letting $n \rightarrow \infty$ and $m \rightarrow \infty$, we obtain
$P_{s}1_{A}(y)=1$. Since $(y, s) \in \mathbb R^d \times (0, t_0)$ was arbitrary, we obtain $P_{\cdot}1_A=1$ on $\mathbb R^d \times (0, t_0]$ by continuity. Then by the sub-Markovian property, $P_{t_0}1_{\mathbb R^d}(y)=1$ for any $y \in \mathbb R^d$. Now let $t \in (0, \infty)$ be given. Then there exists $k \in \mathbb N \cup \left \{0 \right \}$ such that
$$
k t_0<t \leq (k+1) t_0
$$
and so $P_t 1_{A} = P_{kt_0+ (t-kt_0)} 1_{A}= \underbrace{P_{t_0} \circ \cdots \circ P_{t_0}}_{k\text{-times}} \circ P_{t-kt_0} 1_{A} =1$.
\end{proof}
\noindent
The following results are immediately derived from Lemma \ref{lem:2.7}(i) through contraposition and from Lemma \ref{prop:2.2}.
\begin{proposition}\label{prop:2.4.2}
Suppose {\bf (a)} of Section \ref{subsec:2.2.1} holds and let $(P_t)_{t>0}$ be as in Theorem \ref{theo:2.6}. Then:
\begin{itemize}
\item[(i)] $(P_t)_{t>0}$ is irreducible in the probabilistic sense (Definition \ref{def:2.4.4}).
\item[(ii)] $(T_t)_{t>0}$ is strictly irreducible (Definition \ref{def:2.4.4bis}).
\end{itemize}
\end{proposition}
We close this section with two remarks. The first concerns a generalization of the results obtained so far to open sets, and the second concerns related previous work.
\begin{remark}\label{rem:2.4.3}
{\it It is possible to generalize everything that has been achieved so far in Sections \ref{sec2.2}, \ref{sec2.3}, \ref{sec2.4} to general open sets $W\subset\mathbb R^d$. For this let $(W_n)_{n \geq 1}$ be a family of bounded and open sets in $\mathbb R^d$ with Lipschitz boundary $\partial W_n$ for all $n \geq 1$, such that
$$
\overline{W}_n \subset W_{n+1}, \; \forall n \geq 1 \;\; \text{ and } \;\; W = \cup_{n \geq 1}W_n.
$$
Let $(p_n)_{n \geq 1}$ be a sequence in $\mathbb R$, such that $p_n \geq p_{n+1}>d$,\; $\forall n \geq 1$ and
$$
\lim_{n \rightarrow \infty} p_n =d,
$$
and assume that the coefficients $(a_{ij})_{1 \leq i,j \leq d}$, $(c_{ij})_{1 \leq i,j \leq d}$, and $(h_i)_{1 \leq i \leq d}$, satisfy for each $n\ge 1$:
\begin{itemize}
\item[] $a_{ji}= a_{ij}\in H^{1,2}(W_n) \cap C(W_n)$, $1 \leq i, j \leq d$ and $A = (a_{ij})_{1\le i,j\le d}$ satisfies \eqref{eq:2.1.2} on $W_n$, $C = (c_{ij})_{1\le i,j\le d}$, with $-c_{ji}=c_{ij} \in H^{1,2}(W_n) \cap C(W_n)$, $1 \leq i,j \leq d$, $\mathbf{H}=(h_1, \dots, h_d) \in L^{p_n}(W_n, \mathbb R^d)$.
\end{itemize}
Then, taking into account Remark \ref{remark2.1.7}(iii) and adapting the methods of Sections \ref{sec2.2}, \ref{sec2.3}, \ref{sec2.4}, one can derive all results of Sections \ref{sec2.2}, \ref{sec2.3}, \ref{sec2.4}, where $\mathbb R^d$ is replaced by $W$.
}
\end{remark}
\begin{remark}\label{rem:2.3.3}
{\it We can mention at least two references \cite{Scer}, \cite{ZhXi16}, in which mainly probabilistic methods are employed to derive irreducibility in the classical sense.
In \cite[Section 2.3]{Scer}, irreducibility in the classical sense is shown under the same assumptions as those which are used to show the strong Feller property. \\
In \cite{ZhXi16}, to obtain the strong Feller property and irreducibility in the classical sense of the semigroup associated with a diffusion process, restrictive conditions on the coefficients are imposed. The merit is that some time-inhomogeneous cases are covered
in \cite{ZhXi16}, but the results are far from being optimal in the time-homogeneous case (see the discussion in the introduction of \cite{LT18}).\\
In \cite[Corollary 4.7]{MPW02} the irreducibility of the semigroup in the classical sense is shown analytically by using the strict positivity of the associated heat kernel in \cite[Theorem 4.4]{MPW02} (see also \cite[Theorem 2.2.12 and Theorem 2.2.5]{LB07} for the case where there is an additional zero-order term).}
\end{remark}
\subsection{Comments and references to related literature}\label{Comments2}
Chapter \ref{chapter_2} is based on techniques from functional analysis and PDE theory that can be found in textbooks, for instance \cite{BRE}, \cite{EG15}, \cite{Ev11}. We further apply direct variational methods and make use of standard results
from semigroup, potential and operator theory.
In Section \ref{sec2.1}, the Lumer--Phillips theorem (\cite[Theorem 3.1]{LP61}) is used to derive that the closure of a dissipative operator generates a $C_0$-semigroup of contractions. In Section \ref{sec2.2}, the Lax--Milgram theorem (\cite[Corollary 5.8]{BRE}), the maximum principle (\cite[Theorem 4]{T77}), the Fredholm alternative (\cite[Theorem 6.6(c)]{BRE}), and the elliptic Harnack inequality of \cite[Corollary 5.2]{T73} are mainly used to show the existence of an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$. \\
Concerning more recent sources, beyond the classical ones, for the $H^{1,p}_{loc}$-regularity of the density of the infinitesimally invariant measure, \cite[Theorem 1.2.1]{BKRS} and \cite[Theorem 2.8]{Kr07} are used.
In Section \ref{sec2.3}, the elliptic and parabolic H\"{o}lder regularity results (\cite[Th\'eor\`emes 7.2 and 8.2]{St65}, \cite[Theorems 3 and 4]{ArSe}) are used to obtain regularized versions of the resolvent and the semigroup, respectively.
In Section \ref{sec2.4}, the irreducibility of the semigroup is derived by the pointwise parabolic Harnack inequality (\cite[Theorem 5]{ArSe}).\\
The content of Section \ref{sec2.1} is taken from \cite[Part I, Sections 1 and 2]{WS99}. Detailed explanations on the construction of the Markovian semigroup have been added, as well as the new example Remark \ref{rem:2.1.12}(ii). Sections \ref{sec2.2}--\ref{sec2.4} (and Chapter \ref{chapter_3}) originate roughly from \cite{LT18} and \cite{LT19}, but we recombined, reorganized, refined and further developed the results of \cite{LT18} and \cite{LT19}. In particular, the contents of Section \ref{sec2.2} are a refinement of \cite[Theorem 3.6]{LT19}. Some proofs on elliptic regularity (\cite[Lemma 3.3, 3.4]{LT19}) are omitted in this book and the interested reader may check the original source for the technical details.
\section{Stochastic differential equations}
\label{chapter_3}
\subsection{Existence}
\label{sec:3.1}
In Section \ref{sec:3.1} we show that the regularized semigroup $(P_t)_{t>0}$ from Theorem \ref{theo:2.6} and \eqref{semidef} determines the transition semigroup of a Hunt process $(X_t)_{t\geq 0}$ with nice sample paths and that $(R_{\alpha})_{\alpha >0}$ determines its resolvent. For the construction of the Hunt process $(X_t)_{t\geq 0}$, the existence of a Hunt process $(\tilde{X}_t)_{t \ge 0}$ for a.e. starting point, deduced from generalized Dirichlet form theory (Proposition \ref{prop:3.1.3}), is crucial; in addition to assumption {\bf (a)} of Section \ref{subsec:2.2.1}, we need assumption {\bf (b)} of Section \ref{subsec:3.1.1}, which provides a higher resolvent regularity. Since $(\tilde{X}_t)_{t \ge 0}$ has continuous sample paths on the one-point-compactification $\mathbb R^d_{\Delta}$, the same is then true for $(X_t)_{t\geq 0}$. From Remark \ref{rem:3.1.1} in Section \ref{subsec:3.1.1} onwards we assume that {\bf (a)} and {\bf (b)} hold, if not stated otherwise.
As a by-product of the existence of $(X_t)_{t\geq 0}$ and the resolvent regularity derived in Theorem \ref{theorem2.3.1} by PDE theory, we obtain Krylov-type estimates (see Remark \ref{rem:ApplicationKrylovEstimates}). The identification of $(X_t)_{t\geq 0}$ as a weak solution (cf. Definition \ref{def:3.48}(iv)) to \eqref{intro:eq1} then follows along standard lines, by representing continuous local martingales as stochastic integrals with respect to Brownian motion through the knowledge of their quadratic variations. \\
\subsubsection{Regular solutions to the abstract Cauchy problem as transition functions}\label{subsec: 3.1.1first}
Throughout this section we will assume that {\bf (a)} of Section \ref{subsec:2.2.1} holds, and that
$$
\mu=\rho\,dx
$$
is as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}. \\
\begin{proposition}
\label{prop:3.1.1}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds.
Let $(P_t)_{t>0}$ be as in Theorem \ref{theo:2.6} and \eqref{semidef}. Let $(x,t) \in \mathbb R^d \times (0, \infty)$. Then:
\begin{itemize}
\item[(i)] $P_{t}(x, \cdot)$ defined through
$$
P_t(x,A):= P_t 1_{A}(x), \qquad A \in \mathcal{B}(\mathbb R^d)
$$
is a sub-probability measure on $\mathcal{B}(\mathbb R^d)$, i.e. $P_t(x,\mathbb R^d)\le 1$, and equivalent to $\mu$.
\item[(ii)] \ We have
\begin{eqnarray}
\label{eq:3.1.1}
P_t f (x)= \int_{\mathbb R^d} f(y) P_t(x,dy), \qquad \forall f\in \bigcup_{s\in [1,\infty]}L^s(\mathbb R^d,\mu).
\end{eqnarray}
In particular, \eqref{eq:3.1.1} extends by linearity to all $f\in L^1(\mathbb R^d,\mu)+L^\infty(\mathbb R^d,\mu)$, and for such $f$, $P_t f$ is continuous by Theorem \ref{theo:2.6} and \eqref{semidef}.
\end{itemize}
\end{proposition}
\begin{proof}
(i) That $P_{t}(x, \cdot)$ defines a measure is obvious by the properties of $(T_t)_{t>0}$ on $L^\infty(\mathbb R^d,\mu)\supset\mathcal{B}_b(\mathbb R^d)$ and since $P_t 1_{A}$ is a continuous version of $T_t 1_A$. In particular $P_{t}(x, \cdot)$ defines a sub-probability measure since by the sub-Markov property $T_t 1_{\mathbb R^d}\le 1$ $\mu$-a.e. hence by continuity $P_{t}(x, \mathbb R^d)=P_t 1_{\mathbb R^d}(x)\le 1$ for every $x\in \mathbb R^d$. If $N\in \mathcal{B}(\mathbb R^d)$ is such that $\mu(N)=0$, then clearly $P_{t}(x,N)=P_t 1_N(x)=0$ and if $P_{t}(x,N)=P_t 1_N(x)=0$ then $\mu(N)=0$ by Lemma \ref{lem:2.7}(i).\\
(ii) For any $(x,t) \in \mathbb R^d \times (0,\infty)$, we have
$$
P_t f (x)= \int_{\mathbb R^d} f(y) P_t(x,dy)
$$
for $f=1_A$, $A \in \mathcal{B}(\mathbb R^d)$ which extends to any $f\in \bigcup_{s\in [1,\infty]}L^s(\mathbb R^d,\mu)$ in view of \eqref{eq:2.3.46}.
\end{proof}
\begin{proposition}\label{prop:3.1.2}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds.
Let $(R_{\alpha})_{\alpha>0}$ be as in Theorem \ref{theorem2.3.1} and \eqref{resoldef}. Let $(x, \alpha) \in \mathbb R^d \times (0, \infty)$. Then:
\begin{itemize}
\item[(i)]
$\alpha R_{\alpha}(x, \cdot)$, where
$$
R_{\alpha}(x, A):= R_{\alpha} 1_{A}(x), \qquad A \in \mathcal{B}(\mathbb R^d)
$$
is a sub-probability measure on $\mathcal{B}(\mathbb R^d)$, absolutely continuous with respect to $\mu$.
\item[(ii)]\ We have
\begin{eqnarray} \label{eq:3.1.2}
R_{\alpha}g(x)=\int_{\mathbb R^d}g(y) R_{\alpha}(x,dy), \qquad \forall g\in \bigcup_{r\in [q,\infty]} L^r(\mathbb R^d,\mu),
\end{eqnarray}
where $q=\frac{pd}{p+d}$, $p\in (d,\infty)$. In particular, \eqref{eq:3.1.2} extends by linearity to all $g\in L^q(\mathbb R^d,\mu)+L^\infty(\mathbb R^d,\mu)$, and for such $g$, $R_{\alpha}g$ is continuous by Theorem \ref{theorem2.3.1} and \eqref{resoldef}.
\end{itemize}
\end{proposition}
\begin{proof}
In view of \eqref{eq:2.3.38a} the proof is similar to the corresponding proof for Proposition \ref{prop:3.1.1} and we therefore omit it.
\end{proof}
Define
$$
P_0:=id.
$$
\begin{theorem}\label{th:3.1.1}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds.
For $(x, \alpha) \in \mathbb R^d \times (0, \infty)$, it holds that
$$
R_{\alpha}g(x)=\int_{0}^{\infty} e^{-\alpha t}P_t g(x)dt, \qquad g\in \bigcup_{r\in [q,\infty]} L^r(\mathbb R^d,\mu),
$$
where $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$.
\end{theorem}
\begin{proof} First let $g\in C_0^2(\mathbb R^d)$ and let $x_n\to x\in \mathbb R^d$ as $n \rightarrow \infty$. Then by Theorem \ref{theo:2.6} (see also \eqref{semidef}), $P_tg(x_n)$ and $P_tg(x)$ are continuous functions in $t\in (0,\infty)$ and $P_tg(x_n)\to P_tg(x)$ as $n \rightarrow \infty$ for any $t\in (0,\infty)$. Since, furthermore, $\sup_{n\in\mathbb N}|P_tg(x_n)|\le \sup_{y\in \mathbb R^d}|g(y)|<\infty$ for any $t\in (0,\infty)$, Lebesgue's theorem implies that $\int_{0}^{\infty} e^{-\alpha t}P_t g\, dt$ is a continuous function on $\mathbb R^d$. By Theorem \ref{theorem2.3.1}, $R_{\alpha}g$ is continuous. Since $(G_{\alpha})_{\alpha>0}$ is the Laplace transform of $(T_t)_{t>0}$ on $L^2(\mathbb R^d,\mu)$, the two continuous functions $R_{\alpha}g$ and $\int_{0}^{\infty} e^{-\alpha t}P_t g\, dt$ coincide $\mu$-a.e., hence everywhere on $\mathbb R^d$.
Therefore, it holds that
$$
\int_{\mathbb R^d}g(y) R_{\alpha}(x,dy)=\int_0^{\infty}\int_{\mathbb R^d} g(y)P_t(x,dy)e^{-\alpha t}dt, \qquad \forall x\in \mathbb R^d,
$$
for any $g\in C_0^2(\mathbb R^d)$. Since the $\sigma$-algebra generated by $C_0^2(\mathbb R^d)$ equals $\mathcal{B}(\mathbb R^d)$,
by a monotone class argument the latter extends to all $g\in \mathcal{B}_b(\mathbb R^d)$. Finally, splitting $g=g^+-g^-$ into its positive and negative parts and using linearity and monotone approximation through $\mathcal{B}_b(\mathbb R^d)$ functions, the assertion follows for $g\in \bigcup_{r\in [q,\infty]} L^r(\mathbb R^d,\mu)$.
\end{proof}
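The Laplace-transform identity of Theorem \ref{th:3.1.1} can be illustrated numerically in the simplest classical setting, $L=\frac{1}{2}\Delta$ on $\mathbb R$ (a sanity-check sketch only; this constant-coefficient example sits outside the general coefficient assumptions of this chapter, and all function names below are illustrative). There the transition density is the Gaussian heat kernel, and the resolvent kernel of $(\alpha-\frac{1}{2}\Delta)^{-1}$ is known in closed form:

```python
import math

def heat_kernel(t, z):
    # p_t(0, z): transition density of 1D Brownian motion (generator (1/2)*Laplacian)
    return math.exp(-z * z / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def resolvent_laplace(alpha, z, T=60.0, n=120000):
    # trapezoidal approximation of the Laplace transform
    # int_0^T exp(-alpha*t) p_t(0, z) dt; the integrand vanishes as t -> 0 for z != 0
    h = T / n
    total = 0.0
    for k in range(1, n + 1):
        t = k * h
        f = math.exp(-alpha * t) * heat_kernel(t, z)
        total += 0.5 * f * h if k == n else f * h
    return total

def resolvent_closed_form(alpha, z):
    # known resolvent kernel of (alpha - (1/2)*Laplacian)^{-1} on the real line
    return math.exp(-math.sqrt(2.0 * alpha) * abs(z)) / math.sqrt(2.0 * alpha)

alpha, z = 1.5, 1.0
print(resolvent_laplace(alpha, z))      # agrees with the closed form below
print(resolvent_closed_form(alpha, z))
```

The two printed values agree to several decimal places, mirroring $R_{\alpha}g(x)=\int_0^{\infty}e^{-\alpha t}P_tg(x)\,dt$ at the level of kernels.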
\begin{remark}\label{rem:3.1equivalence ralpha}
{\it As a direct consequence of Theorem \ref{th:3.1.1}, the sub-probability measures $\alpha R_{\alpha}(x, dy)$ on $\mathcal{B}(\mathbb R^d)$ are equivalent to $\mu$ for all $\alpha>0$ and $x \in \mathbb R^d$. Indeed, by Proposition \ref{prop:3.1.2}(i), $\alpha R_{\alpha}(x, dy) \ll \mu$ for all $x \in \mathbb R^d$ for $\alpha>0$. For the converse, let $\alpha>0$, $x \in \mathbb R^d$ be given and assume that $A \in \mathcal{B}(\mathbb R^d)$ satisfies $\alpha R_{\alpha}(x, A)=0$. Then by Theorem \ref{th:3.1.1}, $P_t 1_A(x)=0$ for $dt$-a.e. $t \in (0, \infty)$, hence $\mu(A)=0$ by Lemma \ref{lem:2.7}(i), as desired.}
\end{remark}
With the definition $P_0=id$ and Proposition \ref{prop:3.1.1} from above, $(P_t)_{t \ge 0}$ determines a {\bf (temporally homogeneous) sub-Markovian transition function} on $(\mathbb{R}^d,\mathcal{B}(\mathbb{R}^d))$, i.e.:
\begin{itemize}
\item for all $x\in \mathbb{R}^d$, $t\ge 0$:\quad $A\in \mathcal{B}(\mathbb{R}^d)\mapsto P_t(x,A)$ is a sub-probability measure;
\item for all $t\ge 0$, $A\in \mathcal{B}(\mathbb{R}^d)$:\quad $x\in \mathbb{R}^d \mapsto P_t(x,A)$ is $\mathcal{B}(\mathbb{R}^d)$-measurable;
\item for all $x\in \mathbb{R}^d$, $A\in \mathcal{B}(\mathbb{R}^d)$, the Chapman--Kolmogorov equation
$$
P_{t+s}(x,A)=\int_{\mathbb R^d}P_s(y,A)P_t(x,dy), \qquad \forall t,s\ge 0
$$
holds.
\end{itemize}
Here the Chapman--Kolmogorov equation can be rewritten as $P_{t+s}1_A=P_{t}P_{s}1_A$ and therefore holds: both sides are equal $\mu$-a.e. since $T_{t+s}1_A=T_{t}T_{s}1_A$, and $P_{t+s}1_A$ as well as $P_{t}P_{s}1_A$ are continuous functions by Theorem \ref{theo:2.6} if either $t\not=0$ or $s\not=0$, and by the definition $P_0=id$ if $t=s=0$.\\
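The Chapman--Kolmogorov equation can likewise be checked by direct quadrature in the classical Gaussian case (again only an illustrative sketch for one-dimensional Brownian motion, outside the generality of the present framework; the function names are ad hoc):

```python
import math

def p(t, x, y):
    # Gaussian transition density of 1D Brownian motion: N(x, t) evaluated at y
    return math.exp(-(y - x) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def ck_right_hand_side(t, s, x, y, R=12.0, n=24000):
    # trapezoidal approximation of int_{-R}^{R} p_t(x, z) p_s(z, y) dz
    h = 2.0 * R / n
    total = 0.5 * (p(t, x, -R) * p(s, -R, y) + p(t, x, R) * p(s, R, y))
    for k in range(1, n):
        z = -R + k * h
        total += p(t, x, z) * p(s, z, y)
    return total * h

t, s, x, y = 0.4, 0.9, -0.3, 1.1
print(p(t + s, x, y))                  # density of P_{t+s}(x, dy)
print(ck_right_hand_side(t, s, x, y))  # semigroup composition, same value
```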
Since $(P_t)_{t \ge 0}$ only defines a sub-Markovian transition function, we extend it to a Markovian (i.e. conservative) one. To this end, let $\mathbb{R}^d_{\Delta}:=\mathbb{R}^d\cup \{\Delta\}$ be the one-point-compactification of $\mathbb R^d$ with the point at infinity \lq\lq$\Delta$\rq\rq, and
$$
\mathcal{B}(\mathbb{R}^d_{\Delta}):=\{A\subset \mathbb{R}^d_{\Delta} : A\in \mathcal{B}(\mathbb{R}^d) \text{ or } A=A_0\cup\{\Delta\}, \ A_0\in \mathcal{B}(\mathbb{R}^d)\}.
$$
Any function $f$ originally defined on $\mathbb{R}^d$ is extended to $\mathbb{R}^d_{\Delta}$ by setting $f(\Delta)=0$. Likewise any measure $\nu$ originally defined on $\mathcal{B}(\mathbb{R}^d)$ is extended to $\mathcal{B}(\mathbb{R}^d_{\Delta})$ by setting $\nu(\{\Delta\})=0$. For instance, $P_t(x, \{\Delta \})=0$ for all $x \in \mathbb R^d$. Now for $t\ge 0$,
$$
P_t^{\Delta}(x, dy) =
\begin{cases}
\big[1- P_t(x,\mathbb{R}^d)\big] \delta_{\Delta} (dy) + P_t(x, dy), \quad \text{if} \ x \in \mathbb{R}^d\\
\delta_{\Delta} (dy), \quad \text{if} \ x = \Delta
\end{cases}
$$
determines a (temporally homogeneous) Markovian transition function $(P_t^{\Delta})_{t\ge 0}$ on $(\mathbb{R}^d_{\Delta},\mathcal{B}(\mathbb{R}^d_{\Delta}))$.
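That $(P_t^{\Delta})_{t\ge 0}$ is indeed Markovian (conservative) is readily checked from the definition: for $x\in\mathbb R^d$,
$$
P_t^{\Delta}(x,\mathbb{R}^d_{\Delta})=\big[1- P_t(x,\mathbb{R}^d)\big] \delta_{\Delta} (\mathbb{R}^d_{\Delta}) + P_t(x, \mathbb{R}^d_{\Delta})=\big[1- P_t(x,\mathbb{R}^d)\big]+P_t(x,\mathbb{R}^d)=1,
$$
using $P_t(x,\{\Delta\})=0$, and $P_t^{\Delta}(\Delta,\mathbb{R}^d_{\Delta})=\delta_{\Delta}(\mathbb{R}^d_{\Delta})=1$; the Chapman--Kolmogorov equation for $(P_t^{\Delta})_{t\ge 0}$ follows from the one for $(P_t)_{t\ge 0}$ by a direct computation.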
\subsubsection{Construction of a Hunt process}
\label{subsec:3.1.1}
Throughout this section we will assume that {\bf (a)} of Section \ref{subsec:2.2.1} holds (except for Proposition \ref{prop:3.1.3}). Furthermore, we shall {\bf assume} that
\begin{itemize}
\item[{\bf (b)}] \ \index{assumption ! {\bf (b)}}$\mathbf{G}=(g_1,\ldots,g_d)=\frac{1}{2}\nabla \big (A+C^{T}\big )+ \mathbf{H} \in L_{loc}^q(\mathbb R^d, \mathbb R^d)$ (cf. \eqref{equation G with H} and \eqref{form of G}), where $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$,
\end{itemize}
holds. Assumption {\bf (b)} will be needed from Remark \ref{rem:3.1.1} below onwards and implies $C^2_0(\mathbb R^d)\subset D(L_q)$, which is crucial for the construction of the Hunt process in Theorem \ref{th: 3.1.2} below. \\
By the results of Section \ref{sec:3.1}, $(P_t^{\Delta})_{t\ge 0}$ is a (temporally homogeneous) Markovian transition function on $(\mathbb{R}^d_{\Delta},\mathcal{B}(\mathbb{R}^d_{\Delta}))$. Restricting $(P_t^{\Delta})_{t\ge 0}$ to the positive dyadic rationals $S:= \bigcup_{n \in \mathbb{N}} S_n$, $S_n : = \{ k2^{-n} : k \in \mathbb{N} \cup \{0\}\}$, we can hence construct a Markov process
$$
\mathbb{M}^{0} = (\Omega , \mathcal{F}^0, (\mathcal{F}^0_s)_{s \in S}, (X_s^0)_{s \in S} , (\mathbb{P}_x)_{x \in \mathbb{R}^d_{\Delta}})
$$
by Kolmogorov's method (see \cite[Chapter III]{RYor}). Here
$
\Omega : = (\mathbb{R}^d_{\Delta})^S
$
is equipped with the product $\sigma$-field $\mathcal{F}^0$, $X_s^0 : (\mathbb{R}^d_{\Delta})^S \to \mathbb{R}^d_{\Delta}$ are coordinate maps and $\mathcal{F}_s^0 : = \sigma( X_r^0 \ | \ r \in S, r \le s)$. \\
\begin{definition}\label{def:3.1.1}
(i) $\widetilde{\mathbb M}= (\widetilde{\Omega},\widetilde{\mathcal{F}},(\widetilde{X}_t)_{t\ge
0},(\widetilde{\mathbb P}_x)_{x\in \mathbb{R}^d_\Delta})$ is called a {\bf strong Markov process} (resp. a {\bf right process})\index{process ! strong Markov}\index{process ! right}
with state space $\mathbb{R}^d$, lifetime $\widetilde{\zeta}$, and corresponding
filtration $(\widetilde{\mathcal{F}}_t)_{t\ge 0}$, if (M.1)--(M.6) (resp. (M.1)--(M.7)) below are fulfilled:
\begin{itemize}
\item[(M.1)] $\widetilde{X}_t : \widetilde{\Omega} \to \mathbb{R}^d_\Delta$ is $\widetilde{\mathcal{F}}_t/{\cal B}(\mathbb R^d_\Delta)$-
measurable for all $t \ge 0$, and $\widetilde{X}_t(\omega) = \Delta
\Leftrightarrow t \ge \widetilde{\zeta}(\omega)$ for all $\omega \in \widetilde{\Omega}$, where
$(\widetilde{\mathcal{F}}_t)_{t\ge 0}$ is a filtration on $(\widetilde{\Omega}, \widetilde{\mathcal{F}} )$ and
$\widetilde{\zeta} : \widetilde{\Omega} \to [ 0,\infty ]$.
\item[(M.2)] For all $t \ge 0$ there exists a map $\vartheta_t : \widetilde{\Omega}
\to \widetilde{\Omega}$ such that $\widetilde{X}_s \circ \vartheta_t = \widetilde{X}_{s+t}$ for all $s \ge
0$.
\item[(M.3)] $(\widetilde{\mathbb P}_x)_{x\in \mathbb{R}^d_\Delta}$ is a family of probability
measures on $(\widetilde{\Omega},\widetilde{\mathcal{F}})$, such that $x \mapsto \widetilde{\mathbb P}_x (B)$ is ${\cal
B}(\mathbb R^d_\Delta)^*$--measurable (here ${\cal
B}(\mathbb R^d_\Delta)^*$ denotes the universally measurable sets) for all $B \in \widetilde{\mathcal{F}}$ and
${\cal B}(\mathbb{R}^d_\Delta)$-measurable for all $B \in \sigma (\widetilde{X}_t|t \ge 0)$ and
$\widetilde{\mathbb P}_\Delta(\widetilde{X}_0=\Delta) = 1$.
\item[(M.4)] (Markov property) For all $A \in {\cal B}(\mathbb{R}^d_\Delta), s, t \ge 0$, and $x \in
\mathbb{R}^d_\Delta$
$$
\widetilde{\mathbb P}_x(\widetilde{X}_{t+s} \in A|\widetilde{\mathcal{F}}_t) = \widetilde{\mathbb P}_{\widetilde{X}_t}(\widetilde{X}_s \in A)\, , \quad
\widetilde{\mathbb P}_x\mbox{-a.s.}
$$
\item[(M.5)] (Normal property) $\widetilde{\mathbb P}_x(\widetilde{X}_0 = x) = 1$ for all $x \in \mathbb{R}^d_\Delta$.
\item[(M.6)] (Strong Markov property) $(\widetilde{\mathcal{F}}_t)_{t\ge0}$ is right continuous (see \eqref{defrightcon})
and for any $\nu \in {\cal
P}(\mathbb{R}^d_\Delta):=\{\nu:\nu \text{ is a probability measure on }\mathbb R^d_{\Delta}\}$ and $(\widetilde{\mathcal{F}}_t)_{t\ge0}$--stopping time $\tau$
$$
\widetilde{\mathbb P}_\nu(\widetilde{X}_{\tau+s} \in A| \widetilde{\mathcal{F}}_\tau) = \widetilde{\mathbb P}_{\widetilde{X}_\tau}(\widetilde{X}_s \in A)\, , \quad \widetilde{\mathbb P}_\nu\mbox{-a.s.}
$$
for all $A \in {\cal B}(\mathbb{R}^d_\Delta)$, $s \ge 0$, where for a positive measure $\nu$
on $(\mathbb{R}^d_\Delta,{\cal B}(\mathbb{R}^d_\Delta))$ we set $\widetilde{\mathbb P}_\nu(\cdot) := \int_{\mathbb R^d} \widetilde{\mathbb P}_x(\cdot) \,\nu(dx)$.
\item[(M.7)] (Right continuity) $t \mapsto \widetilde{X}_t(\omega)$ is right continuous on
$[0,\infty)$ for all $\omega \in \widetilde{\Omega}$.
\end{itemize}
\noindent
(ii) A right process $\widetilde{\mathbb M}$ is said to be a {\bf Hunt process}\index{process ! Hunt}, if additionally to (M.1)--(M.7), (M.8)--(M.9) below are fulfilled:
\begin{itemize}
\item[(M.8)] (left limits on $[0,\infty)$) $\widetilde{X}_{t-} := \lim_{{s\uparrow t}\atop{s<t}} \widetilde{X}_s$ exists in
$\mathbb{R}^d_\Delta$ for all $t\in(0,\infty)\ \widetilde{\mathbb P}_\nu$-a.s. for all $\nu \in \mathcal{P}(\mathbb R^d_\Delta)$.
\item[(M.9)] (quasi-left continuity on $[0,\infty)$) for all $\nu \in \mathcal{P}(\mathbb R^d_\Delta)$, we have
$\lim_{n\to\infty} \widetilde{X}_{\tau_n} = \widetilde{X}_\tau \ \widetilde{\mathbb P}_\nu$--a.s. on
$\{\tau < \infty\}$ for every increasing sequence
$(\tau_n)_{n\ge1}$ of $(\widetilde{\mathcal{F}}^{\mathbb P_\nu}_t)_{t\ge0}$-stopping times
with limit $\tau$, where for a
sub-$\sigma$-algebra $\mathcal{G} \subset \widetilde{\mathcal{F}}$ we let $\mathcal{G}^{\widetilde{\mathbb P}_\nu}$ be
its $\widetilde{\mathbb P}_\nu$-completion in $\widetilde{\mathcal{F}}$.
\end{itemize}
A strong Markov process $\widetilde{\mathbb M}$ is said to have {\bf continuous sample paths on the one-point-compactification
$\mathbb R^d_{\Delta}$ of $\mathbb R^d$}, if
\begin{itemize}
\item[(M.10)] $\widetilde{\mathbb P}_x(t\mapsto \widetilde{X}_t \text{ is continuous in } t\in [0,\infty) \text{ on } \mathbb R^d_{\Delta})=1 \quad \text{for any } x\in \mathbb R^d_{\Delta}.$
\end{itemize}
Here the continuity is of course w.r.t. the topology of $\mathbb R^d_{\Delta}$.
In particular, if (M.1)--(M.6) and (M.10) hold, then $\widetilde{\mathbb M}$ automatically has left limits on $[0,\infty)$ and is quasi-left continuous on $[0,\infty)$, and therefore $\widetilde{\mathbb M}$ is a {\bf Hunt process (with continuous sample paths on the one-point-compactification
$\mathbb R^d_{\Delta}$ of $\mathbb R^d$)}.\\
\end{definition}
\noindent
In what follows, we will need the following result, deduced from generalized Dirichlet form theory.
\begin{proposition}\label{prop:3.1.3}
Assume \eqref{condition on mu}--\eqref{eq:2.1.4} hold (which is the case if for instance condition {\bf (a)} of Section \ref{subsec:2.2.1} holds, see Theorem \ref{theo:2.2.7} and also Remark \ref{rem:2.2.4}). Then,
there exists a Hunt process
$$
\tilde{\mathbb{M}} = (\tilde{\Omega}, \tilde{\mathcal{F}}, (\tilde{\mathcal{F}}_t)_{t \ge 0}, (\tilde{X}_t)_{t \ge 0}, (\tilde{\mathbb{P}}_x)_{x \in \mathbb R^d \cup \{ \Delta \} })
$$
with state space $\mathbb R^d$, lifetime $\tilde\zeta:=\inf\{t\ge 0\,:\,\tilde{X}_t=\Delta\}$ and cemetery $\Delta$ such that
for any $f\in L^2(\mathbb R^d,\mu)_b$ and $\alpha>0$
$$
\tilde{\mathbb{E}}_{x}\Big [\int_0^{\infty}e^{-\alpha t}f(\tilde{X}_t)dt\Big ] =G_{\alpha}f(x)\qquad \text{for } \mu\text{-a.e. } x\in \mathbb R^d
$$
where $\tilde{\mathbb{E}}_{x}$ denotes the expectation with respect to $\tilde{\mathbb{P}}_x$ and
$G_{\alpha}$ is as in Definition \ref{definition2.1.7}.
Moreover, $\tilde{\mathbb{M}}$ has continuous sample paths on the one-point-compactification $\mathbb R^d_{\Delta}$ of $\mathbb R^d$, i.e. we may assume that
\begin{equation}\label{contipath}
\tilde{\Omega} = \{\omega = (\omega (t))_{t \ge 0} \in C([0,\infty),\mathbb R^d_{\Delta}) \, : \, \omega(t) = \Delta \quad \forall t \ge \tilde{\zeta}(\omega) \}
\end{equation}
and
$$
\tilde{X}_t(\omega) = \omega(t), \quad t \ge 0.
$$
\end{proposition}
\begin{proof}
Using in particular Lemma \ref{lemma2.1.4} it is shown in \cite[proof of Theorem 3.5]{WS99} that the generalized Dirichlet form $\mathcal{E}$ associated with $(L_2,D(L_2))$ (cf. \cite[I.4.9(ii)]{WSGDF}) is quasi-regular and by \cite[IV. Proposition 2.1]{WSGDF} and Lemma \ref{lemma2.1.4} satisfies the structural condition D3 of \cite[p. 78]{WSGDF}. Thus by the theory of generalized Dirichlet forms \cite[IV. Theorem 2.2]{WSGDF}, there exists a standard process $\tilde{\tilde{\mathbb{M}}}$ properly associated with $\mathcal{E}$. Using in a crucial way the existence of $\tilde{\tilde{\mathbb{M}}}$ and Lemma \ref{lemma2.1.4}
it is shown in \cite[Theorem 6]{Tr5} that the generalized Dirichlet form $\mathcal{E}$ is strictly quasi-regular and satisfies the structural condition SD3. Thus the existence of the Hunt process $\tilde{\mathbb{M}}$ follows by generalized Dirichlet form theory from \cite{Tr5}.\\
In order to show that $\tilde{\mathbb{M}}$ can be assumed to have continuous sample paths on the
one-point-compactification $\mathbb R^d_{\Delta}$ of $\mathbb R^d$, it is enough to show that this holds for strictly $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$. Indeed the complement of those points can be assumed to be a trap for $\tilde{\mathbb{M}}$.
Due to the properties of smooth measures with respect to $\text{cap}_{1,{\widehat{G}}_1\varphi}$ in \cite[Section 3]{Tr5}, one can consider the work \cite{Tr2} with cap$_{\varphi}$ (as defined in \cite{Tr2}) replaced by $\text{cap}_{1,{\widehat{G}}_1\varphi}$. In particular, \cite[Lemma 3.2, Theorem 3.10 and Proposition 4.2]{Tr2} apply with respect to the strict capacity $\text{cap}_{1,{\widehat{G}}_1\varphi}$. More precisely, in order to show that $\tilde{\mathbb{M}}$ has continuous sample paths on the one-point-compactification $\mathbb R^d_{\Delta}$ of $\mathbb R^d$ for strictly $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$, one has to adapt three main arguments from \cite{Tr2}. The first one is related to the no-killing-inside condition \cite[Theorem 3.10]{Tr2}. In fact, \cite[Theorem 3.10]{Tr2}, which holds for $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$ under the existence of an associated standard process and standard co-process and the quasi-regularity of the original and the co-form, holds with exactly the same proof for strictly $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$ if we assume the existence of an associated Hunt process and associated Hunt co-process and the strict quasi-regularity of the original and the co-form. \cite[Lemma 3.2]{Tr2}, which holds under the quasi-regularity of $\mathcal{E}$ and the existence of an associated standard process for $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$ and all $\nu \in \hat S_{00}$, holds in exactly the same way under the strict quasi-regularity of $\mathcal{E}$ and the existence of an associated Hunt process for strictly $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$ and all $\nu \in \hat S_{00}^{str}$ (as defined in \cite[Section 3]{Tr5}). 
Finally, \cite[Proposition 4.2]{Tr2} also holds in exactly the same way for the Hunt process and its lifetime and the Hunt co-process and its lifetime for strictly $\mathcal{E}$-quasi-every starting point $x\in \mathbb R^d$. In particular all the mentioned statements then hold for $\mu$-a.e. starting point $x\in \mathbb R^d$.
\end{proof}
Let $\nu : = g\,d\mu$, where $g \in L^1(\mathbb R^d,\mu)$, $g>0$ $\mu$-a.e. and $\int_{\mathbb R^d} g \,d\mu =1$. For instance, we can choose $g(x):=\frac{e^{-\|x\|^2}}{\pi^{d/2} \rho(x)}$, $x\in \mathbb R^d$. Set
$$
\tilde{\mathbb{P}}_{\nu}(\cdot) := \int_{\mathbb R^d} \tilde{\mathbb{P}}_x (\cdot) \ g(x) \ \mu(dx), \qquad \mathbb{P}_{\nu}(\cdot) := \int_{\mathbb R^d} \mathbb{P}_x (\cdot) \ g(x) \ \mu(dx) .
$$
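For the concrete choice of $g$ above, the normalization is readily checked: since $\mu=\rho\,dx$,
$$
\int_{\mathbb R^d} g \,d\mu=\int_{\mathbb R^d}\frac{e^{-\|x\|^2}}{\pi^{d/2}\rho(x)}\, \rho(x)\,dx=\pi^{-d/2}\int_{\mathbb R^d}e^{-\|x\|^2}\,dx=\pi^{-d/2}\cdot\pi^{d/2}=1
$$
by the classical Gaussian integral $\int_{\mathbb R}e^{-r^2}\,dr=\sqrt{\pi}$, and $g>0$ $\mu$-a.e. since $\rho>0$; hence $\nu$ is a probability measure on $\mathbb R^d$.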
Recall the definition of $S$ at the beginning of Section \ref{subsec:3.1.1}. Consider the one-to-one map
$$
G : \tilde{\Omega} \to \Omega,\quad G(\omega): = \omega |_S.
$$
Then $G$ is $\tilde{\mathcal{F}}^0 / \mathcal{F}^0$-measurable and $\tilde{\Omega} \in \tilde{\mathcal{F}}^0$, where $\tilde{\mathcal{F}}^0 : = \sigma(\tilde{X}_s \ | \ s \in S)$, and using Proposition \ref{prop:3.1.3} exactly as in \cite[Lemmas 4.2 and 4.3]{AKR} we can show that
$$
\tilde{\mathbb{P}}_{\nu} |_{\tilde{\mathcal{F}}^0} \circ G^{-1} = \mathbb{P}_{\nu},\quad G(\tilde{\Omega}) \in \mathcal{F}^0, \quad \text{and} \quad\mathbb{P}_{\nu}(G(\tilde{\Omega}))=1.
$$
In particular
\begin{eqnarray}\label{eq:3.4}
\mathbb{P}_x(\Omega \setminus G(\tilde{\Omega}))=0, \quad \text{ for }\mu\text{-a.e. } x\in \mathbb R^d.
\end{eqnarray}
Now, the following holds:
\begin{lemma}\label{lem:3.1.1}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds.
Let
$$
\Omega_1 : = \bigcap_{s > 0, s \in S} \vartheta_s^{-1} (G(\tilde{\Omega})),
$$
where $\vartheta_s : \Omega \to \Omega$, $\vartheta_s(\omega) : = \omega(\cdot +s)$, $s \in S$, is the canonical shift operator. Then
\begin{equation}\label{omega1}
\mathbb{P}_x(\Omega_1) = 1
\end{equation}
for all $x \in \mathbb R^d$.
\end{lemma}
\begin{proof}
Using the Markov property, we have for $x \in \mathbb R^d$, $s\in S$, $s>0$
\begin{eqnarray*}
&&\mathbb{P}_x\big (\Omega\setminus\vartheta_s^{-1} (G(\tilde{\Omega}))\big )
\ = \ \mathbb E_x\big [\mathbb E_{X_s^0}[1_{\Omega\setminus G(\tilde{\Omega})} ]\big ]\ =\ P_s^{\Delta}\big (\mathbb E_{\cdot}[1_{\Omega\setminus G(\tilde{\Omega})} ]\big )(x)\\
&&=\big[1- P_s(x,\mathbb{R}^d)\big] \int_{\mathbb R^d_{\Delta}}\mathbb E_{y}[1_{\Omega\setminus G(\tilde{\Omega})} ]\delta_{\Delta} (dy)\ +\ \int_{\mathbb R^d_{\Delta}}\mathbb E_{y}[1_{\Omega\setminus G(\tilde{\Omega})} ]P_s(x, dy).
\end{eqnarray*}
Now
$$
\int_{\mathbb R^d_{\Delta}}\mathbb E_{y}[1_{\Omega\setminus G(\tilde{\Omega})} ]P_s(x, dy)=\int_{\mathbb R^d}\mathbb E_{y}[1_{\Omega\setminus G(\tilde{\Omega})} ]P_s(x, dy)=0
$$
by \eqref{eq:3.4} since $P_s(x, dy)$ does not charge $\mu$-zero sets, and
$$
\int_{\mathbb R^d_{\Delta}}\mathbb E_{y}[1_{\Omega\setminus G(\tilde{\Omega})} ]\delta_{\Delta} (dy)=\mathbb{P}_{\Delta}(\Omega\setminus G(\tilde{\Omega}))=0,
$$
since for the constant path $\Delta$, we have $\Delta\in G(\tilde{\Omega})$ and $\mathbb{P}_{\Delta}(\Omega\setminus \{\Delta\})=0$. Thus $\mathbb{P}_x\big (\Omega\setminus \vartheta_s^{-1} (G(\tilde{\Omega}))\big )=0$ and the assertion follows.
\end{proof}
\begin{lemma} \label{lem:3.2}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds.
Let $(P_t)_{t>0}$ and $(R_{\alpha})_{\alpha>0}$ be as in Theorems \ref{theo:2.6} and \ref{theorem2.3.1}, respectively. Let $\alpha, t>0$, $x \in \mathbb R^d$ and $f \in \cup_{r \in [q, \infty]} L^r(\mathbb R^d, \mu)$ with $f \geq 0$, where $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$. Then:
\begin{itemize}
\item[(i)]
$P_t R_{\alpha} f (x) = R_{\alpha} P_t f(x) = e^{\alpha t} \int_t^{\infty} e^{-\alpha u} P_u f(x) du$.
\item[(ii)]
$(\Omega , \mathcal{F}^0, (\mathcal{F}^0_s)_{s \in S}, \left ( e^{-\alpha s} R_{\alpha} f(X^0_s)\right )_{s \in S}, \mathbb P_x)$ is a positive supermartingale.
\end{itemize}
\end{lemma}
\begin{proof}
(i) Since $T_t G_{\alpha} f = G_{\alpha} T_t f$ $\mu$-a.e. and $P_t R_{\alpha}f$, $R_{\alpha} P_t f \in C(\mathbb R^d)$, it holds that
$$
P_t R_{\alpha} f (x) = R_{\alpha} P_t f(x).
$$
By Theorem \ref{th:3.1.1},
$$
R_{\alpha} P_t f(x) = \int_0^{\infty} e^{-\alpha u} P_{t+u}f(x) du=e^{\alpha t} \int_t^{\infty} e^{-\alpha u} P_u f(x) du.
$$
(ii) Let $s \in S$. Since $R_{\alpha} f$ is continuous by Theorem \ref{theorem2.3.1}, $\left ( e^{-\alpha s} R_{\alpha} f(X^0_s)\right )_{s \in S}$ is adapted. Moreover, since $R_\alpha f \in \cup_{r \in [q, \infty]} L^r(\mathbb R^d, \mu)$, it follows from Proposition \ref{prop:3.1.1}(i) that
\begin{eqnarray*}
\mathbb E_x\left[ |e^{-\alpha s} R_{\alpha}f(X^0_s)| \right] &=& e^{-\alpha s} P^{\Delta}_s |R_{\alpha} f| (x) =e^{-\alpha s} \int_{\mathbb R^d} |R_{\alpha} f|(y) P_s(x, dy) < \infty.
\end{eqnarray*}
Let $s' \in S$ with $s' \geq s$. Then by the Markov property, (i) and Theorem \ref{th:3.1.1},
\begin{eqnarray*}
&&\mathbb E_x \left[ e^{-\alpha s'} R_{\alpha} f(X^0_{s'}) \,\vert\, \mathcal{F}^0_s \right] \ = \ \mathbb E_{X^0_s} \left[ e^{-\alpha s'} R_{\alpha} f (X^0_{s'-s})\right] \ = \ e^{-\alpha s'} P_{s' -s} R_{\alpha} f(X^0_s) \\
&=& e^{-\alpha s} \int_{s'-s}^{\infty} e^{-\alpha u} P_u f(X^0_s) du \ \leq \ e^{-\alpha s} R_{\alpha} f(X^0_s).
\end{eqnarray*}
\end{proof}
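The change of variables behind Lemma \ref{lem:3.2}(i) and the final estimate in (ii) can be spelled out: substituting $v=t+u$,
$$
\int_0^{\infty} e^{-\alpha u} P_{t+u}f(x)\, du=\int_t^{\infty} e^{-\alpha (v-t)} P_{v}f(x)\, dv= e^{\alpha t}\int_t^{\infty} e^{-\alpha v} P_v f(x)\, dv,
$$
and since $f\ge 0$ implies $P_v f\ge 0$, we have $\int_t^{\infty} e^{-\alpha v} P_v f(x)\, dv\le \int_0^{\infty} e^{-\alpha v} P_v f(x)\, dv=R_{\alpha}f(x)$ by Theorem \ref{th:3.1.1}, which is exactly the supermartingale estimate used above.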
\noindent
$\Omega_1$ defined in Lemma \ref{lem:3.1.1} consists of the paths in $\Omega$ which have unique continuous extensions to $(0,\infty)$ that still lie in $\mathbb R^d_{\Delta}$ and which stay in $\Delta$ once they have hit $\Delta$. In order to handle the limits at $s=0$, the properties presented in the following remark are crucial.
\begin{remark}\label{rem:3.1.1}
{\it Assume that {\bf (a)} of Section \ref{subsec:2.2.1} and that {\bf (b)} of the beginning of this section
hold. Then, in view of Theorems \ref{theorem2.3.1} and \ref{theo:2.6}, and Lemma \ref{eq:2.3.39a} and \eqref{resoldef}}, one can find $\{ u_n : n \ge 1 \} \subset C_0^2(\mathbb R^d)\subset D(L_q)$, satisfying:
\begin{itemize}
\item[(i)] for all $\varepsilon \in \mathbb{Q} \cap (0,1)$ and
$y \in D$, where $D$ is any given countable dense set in $\mathbb R^d$, there exists $n \in \mathbb N$ such that $u_n (z) \ge 1$, for all $z \in \overline{B}_{\frac{\varepsilon}{4}}(y)$ and $u_n \equiv 0$ on $\mathbb R^d \setminus B_{\frac{\varepsilon}{2}}(y)$;
\item[(ii)] $R_1\big( [(1 -L) u_n]^+ \big)$, $R_1\big( [(1 -L) u_n]^- \big)$, $R_1 \big( [(1-L)u_n^2]^+ \big)$, $R_1 \big( [(1-L)u_n^2]^- \big)$ are continuous on $\mathbb R^d$ for all $n \ge 1$;
\end{itemize}
and moreover it holds that:
\begin{itemize}
\item[(iii)] $R_1 C_0(\mathbb R^d) \subset C(\mathbb R^d)$;
\item[(iv)] for any $x \in \mathbb R^d$ and $u\in C_0^2(\mathbb R^d)$, the maps $t \mapsto P_t u(x)$ and $t \mapsto P_t (u^2)(x)$ are continuous on $[0,\infty)$.
\end{itemize}
\end{remark}
Define
$$
\Omega_0 : = \{ \omega \in \Omega_1 : \lim_{s \searrow 0,\, s \in S} X_s^0(\omega) \ \text{exists in } \mathbb R^d\}.
$$
\begin{lemma}\label{akrlemma}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
We have
\begin{equation}\label{normal}
\lim_{\begin{subarray}{l} s \searrow 0 \\ s \in S \end{subarray} } X_s^0 =x \quad \mathbb{P}_x\text{-a.s.} \quad \text{for all } \ x \in \mathbb R^d.
\end{equation}
In particular $\mathbb{P}_x (\Omega_0) = 1$ for any $x \in \mathbb R^d$.
\end{lemma}
\begin{proof}
Let $x \in \mathbb R^d$, $n \ge 1$. Then the processes with time parameter $s\in S$
$$
\big( e^{-s} R_1 \big( [(1-L) u_n]^+ \big) (X_s^0), \mathcal{F}_s^0, \mathbb{P}_x \big) \quad \text{and} \quad \big( e^{-s} R_1 \big( [(1-L) u_n]^- \big) (X_s^0), \mathcal{F}_s^0, \mathbb{P}_x \big)
$$
are positive supermartingales by Lemma \ref{lem:3.2}(ii). Then by \cite[1.4 Theorem 1]{CW} for any $t \ge 0$
$$
\exists \lim_{\begin{subarray}{l} s \searrow t \\ s \in S \end{subarray}} e^{-s} \ R_1 \big( [(1-L) u_n]^{\pm}\big) (X_s^0) \quad \mathbb{P}_x\text{-a.s.}
$$
thus
\begin{equation}\label{existslim}
\exists \lim_{\begin{subarray}{l} s \searrow 0 \\ s \in S \end{subarray}} u_n(X_s^0) \quad \mathbb{P}_x\text{-a.s.}
\end{equation}
We have $u_n = R_1 \big((1-L)u_n \big)$ and $u_n^2 = R_1 \big((1-L)u_n^2 \big)$ $\mu$-a.e., but since both sides are respectively continuous by Remark \ref{rem:3.1.1}(ii), it follows that the equalities hold pointwise on $\mathbb R^d$. Therefore
$$
\mathbb E_x \big[\big(u_n(X_s^0) - u_n(x) \big)^2 \big] = P_s R_1\big( (1-L)u_n^2 \big)(x) - 2u_n(x) P_s R_1\big((1-L)u_n\big)(x) + u_n^2(x)
$$
and so
\begin{equation}\label{separ}
\lim_{\begin{subarray}{l} s \searrow 0 \\ s \in S \end{subarray}} \mathbb E_x\big[ \big(u_n(X_s^0) -u_n(x) \big)^2 \big] =0
\end{equation}
by Remark \ref{rem:3.1.1}(iv). Now \eqref{existslim} and \eqref{separ} imply that
\begin{equation}\label{separating}
\lim_{\begin{subarray}{l} s \searrow 0 \\ s \in S \end{subarray}} u_n(X_s^0 (\omega)) = u_n(x) \quad \text{for all} \ \omega \in \Omega_x^n,
\end{equation}
where $\Omega_x^n \subset \Omega_1$ with $\mathbb{P}_x (\Omega_x^n) = 1$. Let $\omega \in \Omega_x^0 : = \bigcap_{n \ge 1} \Omega_x^n$. Then $\mathbb{P}_x (\Omega_x^0) = 1$. Suppose that $X_s^0(\omega)$ does not converge to $x$ as $s \searrow 0$, $s \in S$. Then there exist $\varepsilon_0 \in \mathbb{Q}$ with $\varepsilon_0>0$ and a sequence $s_k \searrow 0$, $s_k \in S$, such that $\|X_{s_k}^0(\omega)-x\| > \varepsilon_0$ for all $k \in \mathbb N$. By Remark \ref{rem:3.1.1}(i) we can find $y \in D$ and $u_n$ such that $\|x-y\| \le \frac{\varepsilon_0}{4}$, $u_n(z) \ge 1$ for $z \in \overline{B}_{\frac{\varepsilon_0}{4}}(y)$, and $u_n(z) = 0$ for $z \in \mathbb R^d \setminus B_{\frac{\varepsilon_0}{2}}(y)$.
Then
$\|X^0_{s_k}(\omega)-y\| >\frac{3}{4}\varepsilon_0$, so that $u_n(X_{s_k}^0(\omega))= 0$ for all $k \in \mathbb N$ cannot converge to $u_n(x)\ge 1$ as $k \to \infty$, in contradiction to \eqref{separating}.
\end{proof}
Now we define for $t \ge 0$
$$
X_t(\omega) : =
\begin{cases}
\lim_{\begin{subarray}{l} s \searrow t \\ s \in S \end{subarray} } X_s^0(\omega) \quad \text{if} \ \omega \in \Omega_0, \\
0\in \mathbb R^d \qquad \text{if} \ \omega \in \Omega \setminus \Omega_0.
\end{cases}
$$
Then by Remark \ref{rem:3.1.1}(iv) for any $t \ge 0$, $f \in C_0^2(\mathbb R^d)$ and $x \in \mathbb R^d$
$$
\mathbb E_x[f(X_t)] = P_t f(x),
$$
which extends to $f\in C_0(\mathbb R^d)$ using a uniform approximation of $f\in C_0(\mathbb R^d)$ through functions in $C_0^2(\mathbb R^d)$.
Since the $\sigma$-algebra generated by $C_0(\mathbb R^d)$ equals $\mathcal{B}(\mathbb R^d)$, it follows by a monotone class argument that
$$
\mathbb{M} = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, (X_t)_{t\geq0} , (\mathbb{P}_x)_{x \in \mathbb R^d_{\Delta} }),
$$
where $(\mathcal{F}_t)_{t\ge 0}$ is the natural filtration, is a normal Markov process (cf. Definition \ref{def:3.1.1}), such that $\mathbb E_x[f(X_t)] = P_t f(x)$ for any $t \ge 0$, $f \in \mathcal{B}_b(\mathbb R^d)$ and $x \in \mathbb R^d$. Moreover, $\mathbb{M}$ has continuous sample paths up to infinity on $\mathbb R^d_{\Delta}$. The strong Markov property of $\mathbb{M}$ follows from \cite[Section I. Theorem (8.11)]{BlGe} using Remark \ref{rem:3.1.1}(iii). Hence $\mathbb{M}$ is a strong Markov process with continuous sample paths on $\mathbb R^d_{\Delta}$ and has the transition function $(P_t)_{t \ge 0}$ as transition semigroup. In particular, $\mathbb{M}$ is a Hunt process (see Definition \ref{def:3.1.1}(ii)). Summarizing these conclusions, we obtain the following theorem.
\begin{theorem}
\label{th: 3.1.2}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold. Then,
there exists a Hunt process
$$
\mathbb M = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, (X_t)_{t \ge 0}, (\mathbb{P}_x)_{x \in \mathbb R^d\cup \{\Delta\}} )
$$
with state space $\mathbb R^d$ and lifetime
$$
\zeta=\inf\{t\ge 0\,:\,X_t=\Delta\}=\inf\{t\ge 0\,:\,X_t\notin \mathbb R^d\},
$$
having the transition function $(P_t)_{t \ge 0}$ (cf. Proposition \ref{prop:3.1.1} and the paragraph right after Remark \ref{rem:3.1equivalence ralpha}) as transition semigroup,
i.e. for every $t\geq 0$, $x \in \mathbb R^d$ and $f\in \mathcal{B}_b(\mathbb R^d)$ it holds that
\begin{equation}\label{eq:3.1semigroupequal-a.e.}
P_tf(x)=\mathbb E_x[f(X_t)],
\end{equation}
where $\mathbb E_x$ denotes the expectation w.r.t $\mathbb P_x$.
Moreover, $\mathbb M$ has continuous sample paths
on the one point compactification $\mathbb R^d_{\Delta}$ of $\mathbb R^d$ with the cemetery $\Delta$ as a point at infinity,
i.e. we may assume that
\begin{equation*}
\Omega = \{\omega = (\omega (t))_{t \ge 0} \in C([0,\infty),\mathbb R^d_{\Delta}) \, : \, \omega(t) = \Delta \quad \forall t \ge \zeta(\omega) \}
\end{equation*}
and
$$
X_t(\omega) = \omega(t), \quad t \ge 0.
$$
\end{theorem}
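\noindent
As a simple illustration of Theorem \ref{th: 3.1.2} (this special case is not treated separately in the text), consider $A=\mathrm{id}$ and $\mathbf{G}\equiv 0$, so that $L=\frac{1}{2}\Delta$ and $\mu=dx$. Then $(P_t)_{t\ge 0}$ is the Gaussian semigroup,
$$
P_t f(x)=\int_{\mathbb R^d} (2\pi t)^{-d/2} e^{-\frac{\|x-y\|^2}{2t}} f(y)\,dy, \quad t>0,
$$
and the Hunt process $\mathbb M$ is a standard $d$-dimensional Brownian motion; in particular $\zeta=\infty$ $\mathbb P_x$-a.s. for every $x\in \mathbb R^d$.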
\subsubsection{Krylov-type estimate}
\label{subsec:3.1.2}
Throughout this section we will assume that {\bf (a)} of Section \ref{subsec:2.2.1} holds and that assumption {\bf (b)} of Section \ref{subsec:3.1.1} holds (except in the case of Proposition \ref{prop:3.1.5}). Let $(P_t)_{t>0}$ be as in Theorem \ref{theo:2.6} and \eqref{semidef}, let $\mathbb M$ be as in Theorem \ref{th: 3.1.2}, and let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}. \\
\begin{proposition}\label{prop:3.1.4}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $x\in \mathbb R^d$, $\alpha, t>0$. Then (cf. Proposition \ref{prop:3.1.1})
\begin{equation} \label{eq:3.12}
P_t f(x)=\int_{\mathbb R^d} f(y) P_t(x, dy)=\mathbb E_x\left [f(X_t)\right ],
\end{equation}
for any $f\in L^1(\mathbb R^d,\mu)+L^\infty(\mathbb R^d,\mu)$ and (cf. Proposition \ref{prop:3.1.2})
\begin{equation} \label{eq:3.13}
R_{\alpha}g(x)=\int_{\mathbb R^d} g(y) R_{\alpha}(x, dy)=\mathbb E_x \left [\int_0^\infty e^{-\alpha s} g(X_s)ds\right ],
\end{equation}
for any $g\in L^q(\mathbb R^d,\mu)+L^\infty(\mathbb R^d,\mu)$, $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$. \\
In particular, integrals of the form
$\int_0^\infty e^{-\alpha s} h(X_s)ds$, $\int_0^t h(X_s)ds$, $t\ge 0$ are for any $x\in \mathbb R^d$, whenever they are well-defined, $\mathbb P_x$-a.s. independent of the measurable $\mu$-version chosen for $h$.
\end{proposition}
\begin{proof}
Using Theorem \ref{th: 3.1.2} and linearity, \eqref{eq:3.12} first holds for simple functions and extends to $f \in \cup_{r \in [1, \infty]}L^r(\mathbb R^d, \mu)$ with $f \geq 0$ through monotone integration. Then \eqref{eq:3.12} follows by linearity. Using \eqref{eq:3.12} and Theorem \ref{th:3.1.1}, \eqref{eq:3.13} follows by Fubini's theorem.
\end{proof}
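\noindent
For illustration, the last statement of Proposition \ref{prop:3.1.4} in particular implies the following: if $N\in \mathcal{B}(\mathbb R^d)$ with $\mu(N)=0$, then $1_N$ and $0$ are $\mu$-versions of the same element of $L^\infty(\mathbb R^d,\mu)$, hence for any $x\in \mathbb R^d$ and $t\ge 0$,
$$
\int_0^t 1_N(X_s)\,ds=0 \quad \mathbb P_x\text{-a.s.},
$$
i.e. $\mathbb M$ spends zero time in $\mu$-negligible sets, even if the starting point $x$ lies in $N$.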
\begin{proposition}
\label{prop:3.1.5}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds.
Let $(P_t)_{t>0}$ and $(R_{\alpha})_{\alpha>0}$ be as in Theorems \ref{theorem2.3.1} and \ref{theo:2.6}, and let $q=\frac{pd}{p+d}$, $p\in (d,\infty)$.
\begin{itemize}
\item[(i)]
Let $f \in L^r(\mathbb R^d, \mu)$ for some $r \in [q, \infty]$, $B$ be an open ball in $\mathbb R^d$ and $t>0$. Then
$$
\sup_{x \in \overline{B}}\int_0^t P_s |f|(x) ds \le e^t c_{B,r} \|f\|_{L^r(\mathbb R^d, \mu)},
$$
where $c_{B,r}>0$ is a constant independent of $f$ and $t$.
\item[(ii)]
Let $\alpha>0$ and $g \in D(L_r) \subset C(\mathbb R^d)$ for some $r \in [q, \infty)$. Then
$$
R_{\alpha} (\alpha-L_r) g (x) =g (x), \quad \forall x \in \mathbb R^d.
$$
\item[(iii)]
Let $t>0$ and $g \in D(L_r) \subset C(\mathbb R^d)$ for some $r \in [q, \infty)$. Then
$$
P_t g(x)-g(x) = \int_0^t P_s L_r g(x) ds, \quad \forall x \in \mathbb R^d.
$$
\end{itemize}
\end{proposition}
\begin{proof}
(i) By Theorem \ref{th:3.1.1} and \eqref{eq:2.3.38a},
$$
\sup_{x \in \overline{B}} \int_0^t P_s |f|(x) ds \leq e^t \sup_{ x \in \overline{B}} R_1 |f|(x) \leq e^t c_{B,r} \|f\|_{L^r(\mathbb R^d, \mu)},
$$
where $c_{B,r}>0$ is a constant independent of $f$ and $t$.\\
(ii)
We have $G_{\alpha} (\alpha-L_r) g(x) = g(x)$ for $\mu$-a.e. $x \in \mathbb R^d$. Since $R_{\alpha} (\alpha -L_r) g$ is a continuous $\mu$-version of $G_{\alpha} (\alpha-L_r) g$ and $g$ is continuous, the assertion follows. \\
(iii) Let $f:=(1-L_r) g$. Then $R_1 f = g \in L^r(\mathbb R^d, \mu)$. For $x \in \mathbb R^d$, $s\geq 0$, it follows by Theorem \ref{th:3.1.1} that
$$
e^{-s} P_s R_1 f(x) = e^{-s} R_1 P_s f(x) =\int_0^{\infty} e^{-(s+u)} P_{s+u} f(x) du = \int_s^{\infty} e^{-u} P_u f(x) du,
$$
hence by Theorem \ref{th:3.1.1} again,
\begin{equation} \label{eq:3.14}
e^{-s} P_s R_1 f(x) - R_1 f(x) = \int_0^s -e^{-u} P_u f (x) du.
\end{equation}
For $s \in [0,t]$, let $\ell(s):=e^{-s} P_s R_1 f(x)$. Then by \eqref{eq:3.14}, $\ell$ is absolutely continuous on $[0,t]$ and has a weak derivative $\ell' \in L^1([0,t])$ satisfying
$$
\ell'(s)= -e^{-s} P_s f (x), \; \text{ for a.e. } s \in [0,t].
$$
Let $k(s):=e^s$, $s \in [0,t]$. Using the product rule and the fundamental theorem of calculus,
$$
P_t g(x)-g(x)=k(t)\ell(t) -k(0)\ell(0) = \int_0^t k'(s)\ell(s)+k(s)\ell'(s) ds = \int_0^t P_s L_r g(x) ds.
$$
\end{proof}
\noindent
Using Proposition \ref{prop:3.1.5}(i) and Fubini's theorem, we obtain the following theorem.
\begin{theorem}
\label{theo:3.3}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $\mathbb M$ be as in Theorem \ref{th: 3.1.2}. Let $r \in [q, \infty]$, with $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$, $t>0$ and $B$ be an open ball in $\mathbb R^d$.
\begin{itemize}
\item[(i)]
Then for any $f\in L^r(\mathbb R^d,\mu)$,
\begin{equation}
\label{KrylovEstimate}
\sup_{x\in \overline{B}}\mathbb E_x\left [ \int_0^t |f|(X_s) \, ds \right ] \le e^t c_{B,r} \|f\|_{L^r(\mathbb R^d, \mu)},
\end{equation}
where $c_{B, r}>0$ is independent of $f$ and $t>0$. In particular, if $\rho \in L^{\infty}(\mathbb R^d)$, then for any $f \in L^{r}(\mathbb R^d)$,
\begin{equation}\label{kryest1}
\sup_{x\in \overline{B}}\mathbb E_x\left [ \int_0^t |f|(X_s) \, ds \right ] \le e^t c_{B,r} \|\rho\|_{L^{\infty}(\mathbb R^d)} \|f\|_{L^r(\mathbb R^d)}.
\end{equation}
\item[(ii)] Let $V$ be an open ball in $\mathbb R^d$. Then for any $f \in L^q(\mathbb R^d)$ with $\text{supp}(f) \subset V$,
\begin{equation} \label{kryest2}
\sup_{x\in \overline{B}}\mathbb E_x\left [ \int_0^t |f|(X_s) \, ds \right ] \le e^t c_{B,q} \|\rho\|_{L^{\infty}(V)} \, \|f\|_{L^q(\mathbb R^d)},
\end{equation}
where $c_{B, q}>0$ is a constant as in (i).
\end{itemize}
\end{theorem}
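\noindent
To illustrate the order of integrability in Theorem \ref{theo:3.3} (the following choice of $f$ is for illustration only), suppose $\rho \in L^{\infty}(B_1)$ and let $f(y):=\|y\|^{-\beta}1_{B_1}(y)$, $\beta>0$. Since $q=\frac{pd}{p+d}$, we have $f\in L^q(\mathbb R^d)$ if and only if $\beta q<d$, i.e. $\beta<1+\frac{d}{p}$, which allows in particular some $\beta>1$. For such $\beta$, \eqref{kryest2} with $V=B_1$ yields
$$
\sup_{x\in \overline{B}}\mathbb E_x\Big[ \int_0^t \|X_s\|^{-\beta}1_{B_1}(X_s)\,ds \Big]\le e^t c_{B,q}\,\|\rho\|_{L^{\infty}(B_1)}\,\|f\|_{L^q(\mathbb R^d)}<\infty.
$$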
\begin{remark}
\label{rem:ApplicationKrylovEstimates}
{\it The Krylov-type estimate \eqref{KrylovEstimate} and in particular its localization to Lebesgue integrable
functions in Theorem \ref{theo:3.3}(ii) is an important tool in the derivation of tightness
results for solutions of SDEs. Such an estimate is often applied in the approximation of
SDEs by SDEs with smooth coefficients (see, e.g.,
\cite{GyKr}, \cite{Mel}, \cite{MiKr}, \cite{GyMa}
and \cite[p. 54, 4. Theorem]{Kry} for the
original Krylov estimate involving conditional expectation). \\
A priori \eqref{KrylovEstimate} only holds for the Hunt process $\mathbb M$ constructed here. However, if
uniqueness in law holds for the SDE solved by $\mathbb M$ with certain given coefficients (for instance in the
situation of Theorem \ref{theo:3.3.1.8} and Propositions \ref{prop:3.3.1.15} and \ref{prop:3.3.1.16}
below), then \eqref{KrylovEstimate} and its localization to Lebesgue integrable functions hold generally
for any diffusion with the given coefficients. This may then lead to an improvement in the order of integrability $r=q>\frac{d}{2}$ in Theorem \ref{theo:3.3} in comparison to $d$ in \cite[p. 54, 4. Theorem]{Kry}. In fact, the mentioned improvement in the order of integrability can already be observed in an application of Theorem \ref{theo:3.3} to the moment inequalities derived in Proposition \ref{theo:3.2.8}.\\
Estimate \eqref{KrylovEstimate} becomes particularly useful when the density $\rho$ is explicitly known,
which holds for a large class of time-homogeneous generalized Dirichlet forms (see Remark
\ref{rem:2.2.4}). As a particular example consider the non-symmetric divergence form case, i.e. the case
where $\mathbf{H}, \mathbf{\overline{B}}\equiv 0$, in Remark \ref{rem:2.2.4}. Then the explicitly given
$\rho\equiv 1$ defines an infinitesimally invariant measure. In this case $\mu$ in \eqref{KrylovEstimate}
can be replaced by the Lebesgue measure}.
\end{remark}
\subsubsection{Identification of the stochastic differential equation}
\label{subsec:3.1.3}
Throughout this section we will assume that {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold, and we let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}.
\begin{definition}\label{stopping times}
Consider $\mathbb M$ of Theorem \ref{th: 3.1.2} and let $A\in \mathcal{B}(\mathbb R^d)$. Let $B_n:=\{y\in \mathbb R^d : \|y\|<n\}, n\ge 1$. We define the following stopping times:
$$
\sigma_{A}:=\inf\{t>0\,:\, X_t\in A\},\qquad \sigma_n:=\sigma_{\mathbb R^d\setminus B_n},n\ge 1,
$$
and
$$
D_A:=\inf\{t\ge0\,:\, X_t\in A\}, \qquad D_n:=D_{\mathbb R^d\setminus B_n}, n\ge 1.
$$
\end{definition}
\begin{lemma}\label{lem:3.1.4}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $x\in \mathbb R^d$, $t\ge 0$, $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$. Let $\mathbb M$ be as in Theorem \ref{th: 3.1.2}.
Then we have:
\begin{itemize}
\item[(i)] Let $\sigma_n, n\in \mathbb N$, be as in Definition \ref{stopping times}.
$$
\mathbb P_x \Big(\lim_{n \rightarrow \infty} \sigma_{n} = \zeta \Big)=1.
$$
\item[(ii)]
$$
\mathbb P_x\left (\int_0^t |f|(X_s)ds<\infty\right )=1, \text{ if }\ \ f\in \bigcup_{r \in [q, \infty]} L^r(\mathbb R^d,\mu).
$$
\item[(iii)]
$$
\mathbb P_x \Big (\Big \{\int_0^t |f|(X_s)ds<\infty\Big \} \cap \{t<\zeta\}\Big )=\mathbb P_x \left (\{t<\zeta\}\right ), \text{ if } \ \ f\in L^q_{loc}(\mathbb R^d,\mu),
$$
i.e.
$$
\mathbb P_x \Big (1_{\{t<\zeta \}}\int_0^t |f|(X_s)ds<\infty \Big )=1, \;\text{ if } \ \ f\in L^q_{loc}(\mathbb R^d,\mu).
$$
\end{itemize}
\end{lemma}
\begin{proof}
(i) Fix $x \in \mathbb R^d$. By the $\mathbb P_x$-a.s. continuity of $(X_t)_{t \geq 0}$ on $\mathbb R^d_{\Delta}$, it follows that $\sigma_n \leq \sigma_{n+1} \leq \zeta$ for all $n \geq 1$, $\mathbb P_x$-a.s. Define $\zeta':=\lim_{n\to\infty}\sigma_n$.
Then $\sigma_n \leq \zeta' \leq \zeta$, for all $n \in \mathbb N$, $\mathbb P_x$-a.s. Now suppose that $\mathbb P_x(\zeta' < \zeta )>0$. Then $\mathbb P_x\left(X_{\zeta'} \in \mathbb R^d, \zeta'<\infty \right)=\mathbb P_x(\zeta'<\zeta)>0$. Let $\omega \in \{X_{\zeta'} \in \mathbb R^d, \zeta'<\infty\}$. By the $\mathbb P_x$-a.s. continuity of $(X_{t})_{t\geq 0}$, we may assume that $t \mapsto X_t(\omega)$ is continuous on $\mathbb R^d_{\Delta}$ and $\sigma_n(\omega) \leq \zeta'(\omega)$ for all $n\in \mathbb N$. Then there exists $N_{\omega} \in \mathbb N$ such that $\{X_{t}(\omega): 0 \leq t \leq \zeta'(\omega)\} \subset B_{N_{\omega}}$, hence $\zeta'(\omega) <\sigma_{N_{\omega}}(\omega)$, which is a contradiction. Thus, $\mathbb P_x(\zeta' \geq \zeta) = 1$ and since $x \in \mathbb R^d$ was arbitrary, the assertion follows. \\
(ii) follows from Theorem \ref{theo:3.3}(i). \\
(iii) Let $x \in \mathbb R^d$ and $f \in L_{loc}^q(\mathbb R^d, \mu)$. Then there exists $N_0\in \mathbb N$ with $x\in B_{N_0}$ and for any $n\ge N_0$, $X_s\in B_n$ for all $s\in [0,t]$ with $t<\sigma_{n}$, $\mathbb P_x$-a.s. By Theorem \ref{theo:3.3}(i),
$$
\mathbb E_x \Big [1_{\left \{t< \sigma_{n}\right \}}\int_0^t |f|(X_s)ds \Big ]\le
\mathbb E_x \Big [\int_0^t |f|1_{B_n}(X_s)ds \Big ]<\infty, \ \ \ \forall n\ge N_0.
$$
Thus, we obtain
$$
\mathbb P_x \Big (1_{\{t<\sigma_n \}}\int_0^t |f|(X_s)ds<\infty \Big )=1, \;\; \; \forall n\ge N_0,
$$
so that
\begin{equation} \label{eq:3.16}
\mathbb P_x \Big (\Big \{\int_0^t |f|(X_s)ds<\infty\Big \} \cap \{t<\sigma_n \}\Big )=\mathbb P_x \big (\{t<\sigma_n \}\big ).
\end{equation}
Letting $n \rightarrow \infty$ in \eqref{eq:3.16}, the assertion follows from (i).
\end{proof}
\begin{proposition}\label{prop:3.1.6}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $\mathbb M$ be as in Theorem \ref{th: 3.1.2}. Let $u \in D(L_r)$ for some $r \in [q, \infty)$ with $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$, and define
$$
M_t ^u: = u(X_t) - u(x) - \int_0^t L_r u(X_s) \, ds , \quad t \ge 0.
$$
Then $(M_t^u)_{t \geq 0}$ is an $(\mathcal{F}_t)_{t \ge 0}$-martingale under $\mathbb P_x$ for any $x \in \mathbb R^d$. In particular, if $u \in C_0^{2}(\mathbb R^d)$, then
$(M_t^u)_{t \geq 0}$ is a continuous $(\mathcal{F}_t)_{t \ge 0}$-martingale under $\mathbb P_x$ for any $x \in \mathbb R^d$, i.e. $\mathbb P_x$ solves the martingale problem associated with $(L, C_0^2(\mathbb R^d))$ for every $x \in \mathbb R^d$.
\end{proposition}
\begin{proof}
Let $x \in \mathbb R^d$, $u \in D(L_r)$ for some $r \in [q, \infty)$. Then $\mathbb E_x[|M^u_t|]<\infty$ for all $t>0$ by Theorem \ref{theo:3.3}(i). Let $t \geq s \geq 0$. Then using the Markov property and Proposition \ref{prop:3.1.5}(iii),
\begin{eqnarray*}
\mathbb E_x \left[ M_t^u-M_s^u \,\vert\, \mathcal{F}_s \right] &=& \mathbb E_x [u(X_t) \,\vert\, \mathcal{F}_s] - u(X_s) - \mathbb E_x \Big[ \int_s^t L_r u(X_{\varv}) d\varv \, \big\vert\, \mathcal{F}_s \Big] \\
&=& \mathbb E_{X_s} [u(X_{t-s})] -u(X_s) - \mathbb E_{X_s} \Big[ \int_s^t L_r u(X_{\varv-s})d\varv \Big]\\
&=& P_{t-s} u (X_s)-u(X_s) - \int_s^t P_{\varv-s} L_r u (X_s) d\varv = 0.
\end{eqnarray*}
Let $u \in C_0^{2}(\mathbb R^d) \subset D(L_r) \cap C_{\infty}(\mathbb R^d)$. Then $t \mapsto u(X_t)$ is continuous on $[0, \infty)$,
hence $(M_t^u)_{t \geq 0}$ is a continuous $(\mathcal{F}_t)_{t \ge 0}$-martingale under $\mathbb P_x$.
\end{proof}
\begin{proposition}\label{prop:3.1.7}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $\mathbb M$ be as in Theorem \ref{th: 3.1.2}. Let $u \in C_0^2(\mathbb R^d)$, $t\ge 0$. Then the quadratic variation process $\langle M^u \rangle$ of the continuous martingale $M^u$ satisfies for any $x \in \mathbb R^d$
$$
\langle M^u \rangle_t=\int_0^t \langle A\nabla u, \nabla u\rangle(X_s)ds, \quad t\ge 0, \quad \mathbb P_x\text{-a.s.}
$$
In particular, by Lemma \ref{lem:3.1.4}(ii) $\langle M^u \rangle_t$ is $\mathbb P_x$-integrable for any $x \in \mathbb R^d$, $t\ge 0$ and so $M^u$ is square integrable.
\end{proposition}
\begin{proof}
For $u\in C_0^2(\mathbb R^d)\subset D(L_q)$, where $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$, we have $u^2\in C_0^2(\mathbb R^d)\subset D(L_q)$ and $L u^2 = \langle A\nabla u,\nabla u \rangle + 2 u L u$. Thus by Proposition \ref{prop:3.1.6}
\begin{eqnarray*}
u^2(X_t) - u^2(x)= M_t^{u^2} +\int_0^t \left (\langle A\nabla u,\nabla u \rangle(X_s) + 2 u L u(X_s)\right ) ds.
\end{eqnarray*}
Applying It\^o's formula to the continuous semimartingale $(u(X_t))_{t\ge 0}$, we obtain
\begin{eqnarray*}
u^2(X_t) - u^2(x)= \int_0^t 2u(X_s)dM_s^{u} +\int_0^t 2 u L u(X_s)\,ds + \langle M^u \rangle_t.
\end{eqnarray*}
The last two equalities imply that $\big (\langle M^u \rangle_t-\int_0^t \langle A\nabla u, \nabla u\rangle(X_s)ds\big )_{t\ge 0}$ is
a continuous $\mathbb P_x$-martingale of bounded variation for any $x\in \mathbb R^d$, hence constant. This implies the assertion.
\end{proof}
\noindent
For the following result, see for instance \cite[Theorem 1.1, Lemma 2.1]{ChHu}, which we can apply locally.
\begin{lemma}\label{lem:3.1.5}
Under the assumption {\bf (a)} of Section \ref{subsec:2.2.1}, there exists a symmetric non-degenerate matrix of functions $\sigma=(\sigma_{ij})_{1\le i,j\le d}$ with $\sigma_{ij}\in C(\mathbb R^d)$ for all $1 \leq i, j \leq d$ such that
\begin{equation*}
A(x)=\sigma(x)\sigma(x)^{T}, \ \ \forall x\in \mathbb R^d,
\end{equation*}
i.e.
$$
a_{ij}(x)=\sum_{k=1}^d \sigma_{ik}(x)\sigma_{jk}(x), \ \ \forall x\in \mathbb R^d, \ 1\le i,j\le d
$$
and
$$
\det(\sigma(x))>0, \ \ \ \forall x\in \mathbb R^d,
$$
where here $\det(\sigma(x))$ denotes the determinant of $\sigma(x)$.
\end{lemma}
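\noindent
For instance, if $A(x)=(1+\|x\|^2)\,\mathrm{id}$ (a choice made here for illustration only), then Lemma \ref{lem:3.1.5} is realized by the symmetric square root $\sigma(x)=\sqrt{1+\|x\|^2}\,\mathrm{id}$, which is continuous and satisfies $\det(\sigma(x))=(1+\|x\|^2)^{d/2}>0$ for all $x\in \mathbb R^d$. In general one may take for $\sigma(x)$ the unique symmetric positive definite square root of $A(x)$.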
\begin{definition}\label{non-explosive}
$\mathbb M$ (of Theorem \ref{th: 3.1.2}) is said to be {\bf non-explosive}\index{non-explosive}, if
$$
\mathbb P_x(\zeta=\infty)=1, \quad \text{ for all } x \in \mathbb R^d.
$$
\end{definition}
\begin{theorem}\label{theo:3.1.4}
Let $A=(a_{ij})_{1\leq i,j \leq d}$ and $\mathbf{G}=(g_1, \ldots, g_d)=\frac{1}{2}\nabla \big (A+C^{T}\big )+ \mathbf{H}$ (see \eqref{form of G}) satisfy the conditions {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1}.
Consider the Hunt process $\mathbb M$ from Theorem \ref{th: 3.1.2} with coordinates $X_t=(X_t^1,\ldots,X_t^d)$.
\begin{itemize}
\item[(i)] Suppose that $\mathbb M$ is non-explosive.
Let $(\sigma_{ij})_{1 \le i,j \le d}$ be any matrix (possibly non-symmetric) consisting of locally bounded and measurable functions such that $\sigma \sigma^T =A$ (see for instance Lemma \ref{lem:3.1.5} for the existence of such a matrix). Then it holds that $\mathbb P_x$-a.s. for any $x=(x_1,\ldots,x_d)\in \mathbb R^d$,
\begin{equation} \label{itosdeweakglo}
X_t = x+ \int_0^t \sigma (X_s) \, dW_s + \int^{t}_{0} \mathbf{G}(X_s) \, ds, \quad 0\le t <\infty,
\end{equation}
i.e. it holds that $\mathbb P_x$-a.s. for any $i=1,\ldots,d$
\begin{equation}\label{weaksolution}
X_t^i = x_i+ \sum_{j=1}^d \int_0^t \sigma_{ij} (X_s) \, dW_s^j + \int^{t}_{0} g_i(X_s) \, ds, \quad 0\le t <\infty,
\end{equation}
where $W = (W^1,\dots,W^d)$ is a $d$-dimensional standard $(\mathcal{F}_t)_{t \geq 0}$-Brownian motion starting from zero.
\item[(ii)] Let $(\sigma_{ik})_{1 \le i \le d,1\le k \le l}$, $l\in \mathbb N$ arbitrary but fixed, be any matrix consisting of continuous functions such that $\sigma_{ik}\in C(\mathbb R^d)$ for all $1\le i\le d,1\le k\le l$, and such that $A=\sigma\sigma^T$, i.e.
$$
a_{ij}(x)=\sum_{k=1}^l \sigma_{ik}(x)\sigma_{jk}(x), \ \ \forall x\in \mathbb R^d, \ 1\le i, j \leq d.
$$
Then on a standard extension
of $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t\ge 0}, \widetilde{\mathbb P}_x )$, $x\in \mathbb R^d$, which we denote for notational convenience again
by $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb P_x )$, $x\in \mathbb R^d$, there exists for every $n \in \mathbb N$ an $l$-dimensional standard $(\mathcal{F}_t)_{t \geq 0}$-Brownian motion $(W_{n,t})_{t \geq 0} = \big((W_{n,t}^{1},\dots,W_{n,t}^{l})\big)_{t \geq 0}$ starting from zero such that $\mathbb P_x$-a.s. for any $x=(x_1,\ldots,x_d)\in \mathbb R^d$, $i=1,\dots,d$
\begin{equation*}
X_t^i = x_i+ \sum_{k=1}^l \int_0^t \sigma_{ik} (X_s) \, dW_{n,s}^{k} + \int^{t}_{0} g_i(X_s) \, ds, \quad 0\le t \leq D_n,
\end{equation*}
where $D_n, n\in \mathbb N$, is as in Definition \ref{stopping times}.
Moreover, it holds that $W_{n,s} = W_{n+1, s}$ on $\{s \leq D_n\}$, hence with $W^k_s:=\lim_{n \rightarrow \infty} W^k_{n,s}$, $k=1, \ldots, l$ and $W_s:=(W^1_s, \ldots, W^l_s)$ on $\{ s < \zeta \}$ we get for $1 \leq i \leq d$,
\begin{equation*}
X_t^i = x_i+ \sum_{k=1}^l \int_0^t \sigma_{ik} (X_s) \, dW_s^{k} + \int^{t}_{0} g_i(X_s) \, ds, \quad 0\leq t < \zeta,
\end{equation*}
$\mathbb P_x$-a.s. for any $x \in \mathbb R^d$. In particular, if $\mathbb M$ is non-explosive, then $(W_t)_{t \geq 0}$ is a standard $(\mathcal{F}_t)_{t \geq 0}$-Brownian motion.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Since $\mathbb M$ is non-explosive, it follows from Lemma \ref{lem:3.1.4}(i) that $D_n\nearrow \infty$ $\mathbb P_x$-a.s. for any $x\in \mathbb R^d$. Let $\varv\in C^{2}(\mathbb R^d)$. Then we claim that
$$
M_t ^\varv: = \varv(X_t) - \varv(x) - \int_0^t \Big (\frac{1}{2}\sum_{i,j=1}^{d}a_{ij}\partial_i\partial_j \varv+\sum_{i=1}^{d}g_i\partial_i \varv\Big )(X_s) \, ds , \quad t \ge 0,
$$
is a continuous square integrable local $\mathbb P_x$-martingale with respect to the stopping times $(D_n)_{n\ge 1}$ for any $x\in \mathbb R^d$. Indeed, let $(\varv_n)_{n\ge 1}\subset C_0^2(\mathbb R^d)$ be such that $\varv_n=\varv$ pointwise on $\overline{B}_n$, $n\ge 1$.
Then for any $n\ge 1$, we have $\mathbb P_x$-a.s
$$
M_{t\wedge D_n}^\varv=M_{t\wedge D_n}^{\varv_n}, \ \ t\ge 0,
$$
and $(M_{t\wedge D_n}^{\varv_n})_{t\ge 0}$ is a square integrable $\mathbb P_x$-martingale for any $x\in \mathbb R^d$ by Proposition \ref{prop:3.1.7}.
Now let $u_i \in C^{2}(\mathbb R^d)$, $i=1,\dots,d$, be the coordinate projections, i.e. $u_i(x)=x_i$. Then by Proposition \ref{prop:3.1.7}, polarization and localization with respect to $(D_n)_{n\ge 1}$, the quadratic covariation processes satisfy
$$
\langle M^{u_i}, M^{u_j} \rangle_t = \int_0^t a_{ij}(X_s) \, ds, \quad 1 \le i,j \le d, \ t \ge 0.
$$
Using Lemma \ref{lem:3.1.5} we obtain by \cite[II. Theorem 7.1]{IW89} (see also \cite[IV. Proposition 2.1]{IW89}) that there exists a $d$-dimensional standard $(\mathcal{F}_t)_{t \geq 0}$-Brownian motion $(W_t)_{t \ge 0} = (W_t^1,\dots, W_t^d)_{t \ge 0}$ on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb P_x )$, $x\in \mathbb R^d$, such that
\begin{eqnarray}\label{sigma1}
M_t^{u_i} = \sum_{j=1}^{d} \int_0^t \ \sigma_{ij} (X_s) \ dW_s^j, \quad 1 \le i \le d, \ t\ge 0.
\end{eqnarray}
Moreover, for any $x\in \mathbb R^d$, $\mathbb P_x$-a.s.
\begin{eqnarray}\label{drift1}
M_t^{u_i}= X_t^i - x_i - \int_0^t g_i(X_s) \, ds , \quad t \ge 0.
\end{eqnarray}
Combining \eqref{sigma1} and \eqref{drift1} yields \eqref{weaksolution}. \\
(ii) Let $n\in \mathbb N$. Using the same notations and proceeding as in (i), we obtain that
\begin{eqnarray*}
M^{i,n}_t:=M_{t\wedge D_n}^{u_i}= X_{t\wedge D_n}^i - x_i - \int_0^{t\wedge D_n} g_i(X_s) \, ds , \quad t \ge 0,
\end{eqnarray*}
is a continuous square integrable $\mathbb P_x$-martingale for any $x\in \mathbb R^d$ and it holds that
$$
\langle M^{i,n}, M^{j,n}\rangle_t = \int_0^{t\wedge D_n} a_{ij}(X_s) \, ds=\int_0^{t} 1_{[0,D_n]}(s)a_{ij}(X_s) \, ds, \quad 1\leq i,j \leq d, \ t \ge 0.
$$
Let $\Phi_{ij}(s)=a_{ij}(X_s)1_{[0,D_n]}(s)$, $1 \leq i, j \leq d$, $s \geq 0$, so that
$$
\Phi_{ij}(s)=\sum_{k=1}^l \Psi_{ik}(s)\Psi_{jk}(s), \ \text{ with }\ \ \Psi_{ik}(s)=\sigma_{ik}(X_s)1_{[0,D_n]}(s),\ \
1\le i, j \le d, \ 1 \le k \le l.
$$
Then for any $x \in \mathbb R^d$, $\mathbb P_x$-a.s. for all $1\leq i, j \leq d$, $1\leq k\leq l$,
\begin{eqnarray*}
\int_0^t |\Psi_{ik}(s)|^2 ds <\infty, \;\; \int_0^t |\Phi_{ij}(s)| ds< \infty, \quad \text{ for all } t\geq 0.
\end{eqnarray*}
Then by \cite[II. Theorem 7.1']{IW89}, we obtain the existence of an $l$-dimensional standard $(\mathcal{F}_t)_{t \geq 0}$-Brownian motion $(W_{n,t})_{t \geq 0} = \big((W_{n,t}^{1},\dots,W_{n,t}^{l})\big)_{t \geq 0}$ as in the assertion such that
\begin{eqnarray*}
M_t^{i,n} & = &\sum_{k=1}^{l} \int_0^t \ \sigma_{ik} (X_s)1_{[0,D_n]}(s) \ dW_{n, s}^{k}\\
&=& \sum_{k=1}^{l} \int_0^{t\wedge D_n} \ \sigma_{ik} (X_s) \ dW_{n,s}^{k}, \quad 1 \le i \le d, \ t\ge 0
\end{eqnarray*}
$\mathbb P_x$-a.s. for any $x\in \mathbb R^d$. Thus for $1\le i\le d$
\begin{equation*}
X_{t\wedge D_n}^i =x_i +\sum_{k=1}^{l} \int_0^{t\wedge D_n} \ \sigma_{ik} (X_s) \ dW_{n,s}^{k}+ \int_0^{t\wedge D_n} g_i(X_s) \, ds , \quad t \ge 0.
\end{equation*}
From the proof of \cite[II. Theorem 7.1']{IW89}, we can see the consistency $W_{n,s}=W_{n+1,s}$ on $\{ s \leq D_n \}$. This implies the remaining assertions.
\end{proof}
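\noindent
As a one-dimensional illustration of Theorem \ref{theo:3.1.4}(i) (the coefficients below are chosen for illustration only), let $d=1$, $a_{11}(x)=1+x^2$, $C\equiv 0$ and $\mathbf{H}\equiv 0$, so that $g_1=\frac{1}{2}a_{11}'$, i.e. $g_1(x)=x$. If $\mathbb M$ is non-explosive, then with $\sigma_{11}=\sqrt{a_{11}}$, \eqref{itosdeweakglo} becomes
$$
X_t=x+\int_0^t \sqrt{1+X_s^2}\,dW_s+\int_0^t X_s\,ds, \quad 0\le t<\infty,
$$
$\mathbb P_x$-a.s. for any $x\in \mathbb R$, where $W$ is a one-dimensional standard $(\mathcal{F}_t)_{t\ge 0}$-Brownian motion.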
\subsection{Global properties}
\label{sec:3.2}
In this section, we investigate non-explosion, transience and recurrence, and invariant and sub-invariant measures of the Markov process $\mathbb M$, which is described as a weak solution to an SDE in Theorem \ref{theo:3.1.4}. Due to the strong Feller property, conservativeness of $(T_t)_{t>0}$ is equivalent to non-explosion of $\mathbb M$ (see Corollary \ref{cor:3.2.1}).\\
We first develop three sufficient criteria for non-explosion. The first type of such a criterion is related to the existence of a Lyapunov function, which implies a supermartingale property and provides explicit growth conditions on the coefficients given by a continuous function as upper bound (Proposition \ref{prop:3.2.8} and Corollaries \ref{cor:3.2.2} and \ref{cor:3.1.3}). The second type of non-explosion criterion is related to moment inequalities that are derived with the help of a Burkholder--Davis--Gundy inequality or Doob's inequality and finally a Gronwall inequality. Here the growth condition is given by the sum of a continuous function and an integrable function of some order as upper bound (Proposition \ref{theo:3.2.8}) and the growth condition is stated separately for the diffusion and the drift coefficients in contrast to the first type of non-explosion criterion. The third type of non-explosion criterion is a conservativeness criterion deduced from \cite{GT17}, which, in contrast to the first two types of non-explosion criteria, originates from purely analytical means and involves a volume growth condition on the infinitesimally invariant measure $\mu$. It is applicable, if the growth of $\mu$ on Euclidean balls is known, for instance if $\rho$ is explicitly known (see Proposition \ref{prop:3.2.9}).\\
In Section \ref{subsec:3.2.2}, we study transience and recurrence of the semigroup $(T_t)_{t>0}$ and of $\mathbb M$ in the probabilistic sense (see Definitions \ref{def:3.2.2.2} and \ref{def:3.2.2.3}). Since $(T_t)_{t>0}$ is strictly irreducible by Proposition \ref{prop:2.4.2}, we obtain in Theorem \ref{theo:3.3.6}(i) that $(T_t)_{t>0}$ is either recurrent or transient. Moreover, using the technique of \cite{Ge} and the regularity of the resolvent associated with $\mathbb M$, it follows that recurrence and transience of $(T_t)_{t>0}$ are equivalent to recurrence and transience of $\mathbb M$ in the probabilistic sense, respectively (see Theorem \ref{theo:3.3.6}). We present in Proposition \ref{theo:3.2.6} a criterion for the recurrence of $\mathbb M$ in the probabilistic sense and, in the situation of Remark \ref{rem:2.2.4}, another type of recurrence criterion, Corollary \ref{cor:3.2.2.5}, which is a direct consequence of \cite[Theorem 21]{GT2} and Proposition \ref{prop:2.4.2}(ii). \\
In Section \ref{subsec:3.2.3}, we introduce the two notions {\it invariant measure} and {\it sub-invariant measure} for $\mathbb M$, which are strongly connected to the notions of $(\overline{T}_t)_{t>0}$-invariance and sub-invariance, respectively, introduced in Section \ref{subsec:2.1.4}. These will appear later in Section \ref{subsec:3.3.2}, in a result about uniqueness in law. We further analyze the long time behavior of the transition semigroup $(P_t)_{t>0}$ associated with $\mathbb M$, as well as uniqueness of invariant measures for $\mathbb M$ in the case where there exists a probability invariant measure for $\mathbb M$. For that, in Theorem \ref{theo:3.3.8}, the strong Feller property (Definition \ref{def:2.3.1}) and the irreducibility in the probabilistic sense of $(P_t)_{t>0}$ (Definition \ref{def:2.4.4}) are essentially used to apply Doob's theorem, which we further complement by using Lemma \ref{lem:2.7}(ii) and Remark \ref{remark2.1.11}(i). We show in Example \ref{ex:3.8} that a unique invariant measure for $\mathbb M$ need not exist, by presenting two distinct infinite invariant measures for $\mathbb M$, neither of which is a constant multiple of the other.\\
\subsubsection{Non-explosion results and moment inequalities} \label{subsec:3.2.1}
Throughout this section, unless otherwise stated, we will assume that {\bf (a)} of Section \ref{subsec:2.2.1} holds and that assumption {\bf (b)} of Section \ref{subsec:3.1.1} holds. Furthermore, we let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}. In fact, only for Proposition \ref{prop:3.2.9} at the end of this section and in Remark \ref{rem:3.2.1 1} may assumptions {\bf (a)} and {\bf (b)} and the assumption on $\mu$ be omitted.
\\
Due to the strong Feller property, we have:
\begin{corollary} \label{cor:3.2.1}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
$(T_t)_{t>0}$ is conservative\index{semigroup ! conservative} (Definition \ref{def:3.2.1}), if and only if $\mathbb M$ is non-explosive (Definition \ref{non-explosive}).
\end{corollary}
\begin{proof}
Assume that $(T_t)_{t>0}$ is conservative. Then by Theorem \ref{theo:2.6} and Proposition \ref{prop:3.1.4},
\begin{equation*}
\mathbb P_x(\zeta>t) =\mathbb P_x(X_t \in \mathbb R^d)=P_{t} 1_{\mathbb R^d}(x) = 1 \; \text{ for all $(x,t) \in \mathbb R^d \times (0, \infty)$}.
\end{equation*}
Letting $t \rightarrow \infty$, $\mathbb P_x(\zeta = \infty)=1$ for all $x \in \mathbb R^d$. Conversely, assume that $\mathbb M$ is non-explosive. Then
$$
\mathbb P_x(X_t \in \mathbb R^d) = \mathbb P_x(\zeta>t) \geq \mathbb P_x(\zeta=\infty)=1\; \text{ for all $(x,t) \in \mathbb R^d \times (0, \infty)$.}
$$
Consequently, by Theorem \ref{theo:2.6} and Proposition \ref{prop:3.1.4}, $T_t 1_{\mathbb R^d} =1$, $\mu$-a.e. for all $t>0$.
\end{proof}
\begin{remark}
{\it By Corollary \ref{cor:3.2.1} and Lemma \ref{lem:2.7}(ii), it follows that $\mathbb M$ is non-explosive, if and only if there exists $(x_0, t_0)\in \mathbb R^d \times (0, \infty)$ such that
$$
P_{t_0}1_{\mathbb R^d}(x_0)=\mathbb P_{x_0}(X_{t_0} \in \mathbb R^d) = \mathbb P_{x_0}(\zeta>t_{0})=1.
$$
Thus, $\mathbb M$ is non-explosive, if and only if $\mathbb P_{x_0}(\zeta=\infty)=1$ for some $x_0 \in \mathbb R^d$. This property is also derived in \cite[Lemma 2.5]{Bha} under the assumptions of a locally bounded drift coefficient and a continuous diffusion coefficient. In comparison, our conditions {\bf (a)}, {\bf (b)} allow the drift coefficient to be locally unbounded, but require the diffusion coefficient to be continuous with a suitable weak differentiability.}
\end{remark}
\noindent
Consider the following {\bf condition}: \\
\\
{\bf (L)} \index{assumption ! {\bf (L)}}there exists $\varphi \in C^2(\mathbb R^d)$, $\varphi \geq 0$ such that $\displaystyle \lim_{ r \rightarrow \infty} (\inf_{\partial B_r} \varphi)= \infty$ and
$$
L\varphi \leq M \varphi, \quad \text{ a.e. on } \mathbb R^d
$$
for some constant $M>0$. \\ \\
We will call a function $\varphi$ as in {\bf (L)} a {\bf Lyapunov function}\index{Lyapunov function}. In the proof of Proposition \ref{prop:2.1.10}(ii), we saw an analytic method to derive conservativeness (hence non-explosion, by Corollary \ref{cor:3.2.1}) under the assumption of {\bf (L)}. The next proposition provides a probabilistic method to derive the non-explosion of $\mathbb M$ under {\bf (L)}; this method implicitly yields a moment inequality for $\varphi(X_t)$.
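\noindent
For orientation, we record a simple illustration of {\bf (L)} (a standard textbook example, not needed in the sequel; the verification of {\bf (a)} and {\bf (b)} is disregarded here): let $A=id$ and $\mathbf{G}(x)=-x$, i.e. $L$ is an Ornstein--Uhlenbeck type generator, and let $\varphi(x):=\|x\|^2+1$. Then on $\mathbb R^d$
$$
L\varphi(x)=\frac12 \cdot 2d+\langle -x, 2x\rangle = d-2\|x\|^2 \leq d \leq d\,\varphi(x),
$$
and $\displaystyle \inf_{\partial B_r}\varphi=r^2+1 \rightarrow \infty$ as $r \rightarrow \infty$, so that {\bf (L)} holds with $M=d$.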
\begin{proposition} \label{prop:3.2.8}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Under the assumption of {\bf (L)} above, $\mathbb M$ is non-explosive (Definition \ref{non-explosive}) and for any $x \in \mathbb R^d$ it holds that
$$
\mathbb E_x \left[ \varphi(X_t) \right] \leq e^{Mt} \varphi(x), \;\; \; t \geq 0.
$$
\end{proposition}
\begin{proof}
Let $x=(x_1, \ldots, x_d) \in \mathbb R^d$ and take $k_0 \in \mathbb N$ such that $x \in B_{k_0}$. Let $X^{i,n}_t:=X^{i}_{t \wedge \sigma_n}$,\, $n \in \mathbb N$ with $n \geq k_0$, $i \in \{1, \ldots, d\}$, $t \geq 0$, where $\sigma_n, n\in \mathbb N$, is as in Definition \ref{stopping times}. Then by Theorem \ref{theo:3.1.4}(ii), $(X_t^{i, n})_{t \geq 0}$ is a continuous $\mathbb P_x$-semimartingale and $\mathbb P_x$-a.s.
$$
X^{i,n}_t= x_i+ \sum_{j=1}^l \int_0^{t \wedge \sigma_n} \sigma_{ij} (X_s) \, dW_s^j + \int^{t \wedge \sigma_n}_{0} g_i(X_s) \, ds, \;\; \quad 0 \leq t< \infty.
$$
For $j \in \{1, \ldots, d \}$, it follows that $\mathbb P_x$-a.s.
$$
\langle X^{i,n}, X^{j,n} \rangle_t = \int_0^{t \wedge \sigma_n} a_{ij} (X_s) ds, \quad 0 \leq t<\infty.
$$
Thus, by the time-dependent It\^{o} formula, $\mathbb P_x$-a.s.
\begin{eqnarray*}
&&e^{-M(t\wedge \sigma_n)} \varphi(X_{t\wedge \sigma_n}) = \varphi(x) + \sum_{i=1}^{d}\sum_{j=1}^l \int_0^{t \wedge \sigma_n} e^{-Ms} \partial_i \varphi(X_s) \cdot \sigma_{ij} (X_s) \, dW_s^j \\
&&+ \int_0^{t \wedge \sigma_n} -Me^{-Ms} \varphi(X_s) ds + \frac12 \sum_{i,j=1}^{d} \int_0^{t \wedge \sigma_n} e^{-Ms} \partial_i \partial_j \varphi \cdot a_{ij}(X_s)ds \\
&& + \sum_{i=1}^{d} \int_0^{t \wedge \sigma_n} e^{-Ms}\partial_i \varphi \cdot g_i(X_s) ds \\
&&= \varphi(x) + \int_0^{t \wedge \sigma_n} e^{-Ms}\nabla \varphi \cdot \sigma(X_s) dW_s + \int_0^{t \wedge \sigma_n} e^{-Ms}(L-M) \varphi(X_s) ds.
\end{eqnarray*}
Consequently, $\left(e^{-M(t\wedge \sigma_n)} \varphi (X_{t\wedge \sigma_n}) \right)_{t \geq 0}$ is a positive continuous $\mathbb P_x$-supermartingale. Since $\mathbb M$ has continuous sample paths on the one-point-compactification $\mathbb R^d_{\Delta}$ of $\mathbb R^d$ and $e^{-M(t \wedge \sigma_n)} \geq e^{-Mt}$, it follows that
\begin{eqnarray*}
\varphi(x) \geq \mathbb E_x \left[e^{-M(t\wedge \sigma_n)} \varphi(X_{t\wedge \sigma_n}) \right] \geq \mathbb E_x\left[ e^{-M(t \wedge \sigma_n)} \varphi(X_{t \wedge \sigma_n})1_{\{\sigma_{n}\le t\}}\right] \ge e^{-M t} \inf_{\partial B_n} \varphi \cdot \mathbb P_x(\sigma_{n}\le t).
\end{eqnarray*}
Therefore, using Lemma \ref{lem:3.1.4}(i)
$$
\mathbb P_x(\zeta\le t)=\lim_{n\to \infty}\mathbb P_x(\sigma_{n}\le t)\leq \lim_{n \rightarrow \infty} \frac{e^{Mt} \varphi(x) }{\inf_{\partial B_n} \varphi} =0.
$$
Letting $t \rightarrow \infty$, $\mathbb P_x(\zeta<\infty)=0$, hence $\mathbb M$ is non-explosive. Applying Lemma \ref{lem:3.1.4}(i), Fatou's lemma and the supermartingale property, for any $t \geq 0$
$$
\mathbb E_x \left[e^{-Mt} \varphi(X_t)\right] = \mathbb E_x \left[\liminf_{n \rightarrow \infty} e^{-Mt} \varphi(X_{t\wedge \sigma_n})\right] \leq \liminf_{n \rightarrow \infty}\mathbb E_x \left[ e^{-Mt} \varphi(X_{t\wedge \sigma_n})\right] \leq \varphi(x),
$$
as desired.
\end{proof}
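\noindent
As a sanity check of the moment inequality of Proposition \ref{prop:3.2.8}, consider again the illustrative Ornstein--Uhlenbeck case $A=id$, $\mathbf{G}(x)=-x$, $\varphi(x)=\|x\|^2+1$, $M=d$ (the verification of {\bf (a)} and {\bf (b)} is again disregarded). For the corresponding SDE $dX_t=-X_t\,dt+dW_t$, an explicit computation gives
$$
\mathbb E_x\left[\|X_t\|^2\right]=e^{-2t}\|x\|^2+\frac{d}{2}\left(1-e^{-2t}\right), \quad t \geq 0,
$$
hence $\mathbb E_x[\varphi(X_t)] \leq \|x\|^2+1+dt \leq e^{dt}\varphi(x)$, which is consistent with (and here far below) the bound $e^{Mt}\varphi(x)$.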
In the next lemma, we present a condition which is seemingly weaker than {\bf (L)} but nevertheless implies it.
\begin{lemma} \label{lem3.2.6}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $N_0 \in \mathbb N$. Let $g\in C^2(\mathbb R^d \setminus \overline{B}_{N_0}) \cap C(\mathbb R^d)$, $g \geq 0$, with
\begin{equation} \label{eq:3.19}
\lim_{r \rightarrow \infty} (\inf_{\partial B_r} g) = \infty.
\end{equation}
Assume that there exists a constant $M>0$ such that
$$
Lg\le M g \quad \text{a.e. on }\, \mathbb R^d\setminus \overline{B}_{N_0}.
$$
Then there exist a constant $K>0$, $N \in \mathbb N$ with $N \geq N_0+3$ and $\varphi \in C^2(\mathbb R^d)$ with $\varphi \geq K$, $\varphi(x)=g(x)+K$ for all $x \in \mathbb R^d \setminus B_{N}$ such that
$$
L \varphi \le M \varphi \quad \text{a.e. on } \, \mathbb R^d.
$$
In particular, $\mathbb M$ of Theorem \ref{th: 3.1.2} (see also Theorem \ref{theo:3.1.4}) is non-explosive by Proposition \ref{prop:3.2.8}.
\end{lemma}
\begin{proof}
We first show the following claim.\\
{\bf Claim:} If $g\in C^2(\mathbb R^d \setminus \overline{B}_{N_0}) \cap C(\mathbb R^d)$, $g \geq 0$ satisfies \eqref{eq:3.19}, then there exist $N_1 \in \mathbb N$ with $N_1 \geq N_0+2$ and $\psi \in C^2(\mathbb R^d)$ with $\psi \geq 0$ such that $\psi(x) = g(x)$ for all $x \in \mathbb R^d \setminus B_{N_1}$. \\
For the proof of the claim, let $\phi_1 \in C^2(\mathbb R)$ such that $\phi_1(t) \ge 0$ for all $t \in \mathbb R$ and
$$
\phi_1(t)=\begin{cases} \ \sup_{B_{N_0+1}}g& \text{ if } t\le \sup_{B_{N_0 +1}}g,\\
\ t\quad& \text{ if } t\ge 1+\sup_{B_{N_0 +1}}g.
\end{cases}
$$
Define $\psi:= \phi_1 \circ g$. Then $\psi \geq 0$, $\psi \equiv \sup_{B_{N_0+1}}g$ on $\overline{B}_{N_0+1}$ and $\psi \in C^2(\mathbb R^d \setminus \overline{B}_{N_0})$, hence $\psi \in C^2(\mathbb R^d)$. Let $A_1:=\{ x \in \mathbb R^d: |g(x)| \leq 1+\sup_{B_{N_0+1}}g \}$. Then $A_1$ is closed and bounded since $g \in C(\mathbb R^d)$ and \eqref{eq:3.19} holds. Thus, there exists $N_1 \in \mathbb N$ with $N_1 \geq N_0+2$ and $A_1 \subset B_{N_1}$. In particular, $\psi(x)=g(x)$ for all $x \in \mathbb R^d \setminus B_{N_1}$, hence the claim is shown.\\
For the constructed $\psi \in C^2(\mathbb R^d)$ and $N_1 \in \mathbb N$ as in the claim above, it holds that
$$
L \psi \leq M \psi, \quad \text{ a.e. on $\mathbb R^d \setminus B_{N_1}$.}
$$
Let $\phi_2 \in C^2(\mathbb R)$ such that $\phi_2(t), \phi_2'(t) \geq 0$ for all $t \in \mathbb R$ and
$$
\phi_2(t)=\begin{cases} \ \sup_{B_{N_1}} \psi & \text{ if } t\le \sup_{B_{N_1}} \psi,\\
\ t\quad& \text{ if } t\ge 1+\sup_{B_{N_1}} \psi.
\end{cases}
$$
Let $A_2:=\{ x: |\psi(x)| \leq 1+\sup_{B_{N_1}}\psi \}$. As above, there exists $N \in \mathbb N$ with $N \geq N_1+1$ and $A_2 \subset B_{N}$.
Define
$$
K:= \sup_{B_{N}} \left(\psi \cdot \phi_2'(\psi)\right) +\frac{\nu_{B_{N}}}{2M}\sup_{B_{N}} \left(|\phi_2''(\psi)| \| \nabla \psi\|^2\right),
$$
where $\nu_{B_{N}}$ is as in \eqref{eq:2.1.2} and $\varphi:= \phi_2 \circ \psi+K$. Then $\varphi \in C^2(\mathbb R^d)$ with $\varphi \geq K$. In particular, $\varphi(x)=\psi(x)+K=g(x)+K$ for all $x \in \mathbb R^d \setminus B_{N}$. Moreover,
$$
L\varphi=\begin{cases} \ 0 \leq M\varphi& \text{ a.e. on } B_{N_1},\\
\ \phi'_2(\psi) L\psi+\frac12 \phi''_2(\psi) \langle A \nabla \psi, \nabla \psi \rangle \\
\quad \; \leq M \phi_2'(\psi) \psi + \frac12 \nu_{B_{N}} |\phi_2''(\psi)| \| \nabla \psi\|^2 \leq MK \leq M\varphi \quad& \text{ a.e. on }B_{N} \setminus B_{N_1}, \\
\ L \psi \leq M \psi \leq M \varphi \quad& \text{ a.e. on } \mathbb R^d \setminus B_{N}.
\end{cases}
$$
Finally, since $\displaystyle \lim_{r \rightarrow \infty} (\inf_{\partial B_r} \varphi) =\infty$, $\mathbb M$ is non-explosive by Proposition \ref{prop:3.2.8}.
\end{proof}
\noindent
In view of Corollary \ref{cor:3.2.1}, the following result slightly improves the condition of Corollary \ref{cor:2.1.4.1}(iii).
\begin{corollary} \label{cor:3.2.2}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Assume that there exist a constant $M> 0$ and $N_0\in \mathbb N$, such that
\begin{eqnarray}\label{eq:3.20}
-\frac{\langle A(x)x, x \rangle}{ \left \| x \right \|^2 }+ \frac12\mathrm{trace}A(x)+ \big \langle \mathbf{G}(x), x \big \rangle \leq M\left \| x \right \|^2 \big( \ln \left \| x \right \| +1 \big)
\end{eqnarray}
for a.e. $x\in \mathbb R^d\setminus \overline{B}_{N_0}$. Then $\mathbb M$ is non-explosive (Definition \ref{non-explosive}).
\end{corollary}
\begin{proof}
Define $g(x):= \ln(\|x\|^2 \vee N_0^2)+2$. Then $g \in C^{\infty}(\mathbb R^d \setminus \overline{B}_{N_0}) \cap C(\mathbb R^d)$ and
$$
Lg = -2 \frac{\langle A(x)x,x \rangle}{\|x\|^4} + \frac{\text{trace}(A(x))}{\|x\|^2} + \frac{2 \langle \mathbf{G}(x), x \rangle}{\|x\|^2 } \;\; \text{ on $\mathbb R^d \setminus \overline{B}_{N_0}$. }
$$
Since \eqref{eq:3.20} is equivalent to $Lg \leq Mg$ a.e. on $\mathbb R^d \setminus \overline{B}_{N_0}$, the assertion follows from Lemma \ref{lem3.2.6}.
\end{proof}
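\noindent
Corollary \ref{cor:3.2.2} covers, for instance, outward drifts of linear-times-logarithmic growth (again disregarding the verification of {\bf (a)} and {\bf (b)}): for $A=id$ and $\mathbf{G}(x)=\ln(\|x\|)\,x$, the left-hand side of \eqref{eq:3.20} equals $\frac{d}{2}-1+\|x\|^2 \ln \|x\|$, so that \eqref{eq:3.20} holds with $M=1$ for a.e. $x$ outside a sufficiently large ball. This growth order is nearly optimal: for any $\varepsilon>0$, the slightly faster deterministic dynamics $x'(t)=x(t)(\ln x(t))^{1+\varepsilon}$, $x(0)>e$, explodes in finite time, since
$$
\int_e^{\infty} \frac{dx}{x (\ln x)^{1+\varepsilon}}<\infty.
$$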
\noindent
The following corollary allows (in the special case $d=2$) the diffusion coefficient to have arbitrary growth, provided the difference between the minimal and the maximal eigenvalue of the diffusion coefficient has at most quadratic-times-logarithmic growth.
\begin{corollary}
\label{cor:3.1.3}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $d=2$, $N_0 \in \mathbb N$, $\Psi_1, \Psi_2 \in C(\mathbb R^2)$ with $\Psi_1(x), \Psi_2(x)>0$
for all $x \in \mathbb R^2$, and $Q=(q_{ij})_{1 \leq i,j \leq 2}$ be a matrix of measurable functions such that
$Q^T(x)Q(x)=id$ for all $x\in \mathbb R^2\setminus \overline{B}_{N_0}$.
Suppose that, in addition to the assumptions {\bf (a)} and {\bf (b)}, the diffusion coefficient has the form
$$
A(x) = Q^T(x) \begin{pmatrix}
\Psi_1(x) & 0 \\
0 & \Psi_2(x)
\end{pmatrix}Q(x) \; \;\text{ for all $x\in \mathbb R^2\setminus \overline{B}_{N_0}$,}
$$
and that there exists a constant $M > 0$, such that
\begin{equation}
\label{eq:3.2.1.20}
\frac{|\Psi_1(x)-\Psi_2(x)|}{2} + \langle \mathbf{G}(x),x \rangle \leq M \left \| x \right \|^2 \big( \ln \left \| x \right \| +1 \big)
\end{equation}
for a.e. $x\in \mathbb R^2\setminus \overline{B}_{N_0}$. Then $\mathbb M$ is non-explosive (Definition \ref{non-explosive}).
\end{corollary}
\begin{proof}
Let $x \in \mathbb R^2 \setminus \overline{B}_{N_0}$ and $y=(y_1,y_2):=Q(x)x$. Then
$$
\|y\|^2=\langle Q(x)x, Q(x)x \rangle = \langle Q^T(x)Q(x)x,x\rangle=\|x\|^2
$$
and
$$
\langle A(x)x,x \rangle= \left\langle \begin{pmatrix}
\Psi_1(x) & 0 \\
0 & \Psi_2(x)
\end{pmatrix} y, y \right \rangle = \Psi_1(x) y^2_1+\Psi_2(x) y^2_2 \geq (\Psi_1(x) \wedge \Psi_2(x)) \|x\|^2.
$$
Thus,
$$
-\frac{\langle A(x)x, x \rangle}{ \left \| x \right \|^2 }+ \frac12\mathrm{trace}A(x) \leq -(\Psi_1(x) \wedge \Psi_2(x))+\frac{\Psi_1(x)+\Psi_2(x)}{2}=\frac{|\Psi_1(x)-\Psi_2(x)|}{2}.
$$
Now Corollary \ref{cor:3.2.2} implies the assertion.
\end{proof}
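\noindent
As an illustration of Corollary \ref{cor:3.1.3} (with the verification of {\bf (a)} and {\bf (b)} disregarded), let $d=2$, $Q=id$, $\mathbf{G}=0$, $\Psi_1(x)=e^{\|x\|^2}$ and $\Psi_2(x)=e^{\|x\|^2}+\|x\|^2$. Then
$$
\frac{|\Psi_1(x)-\Psi_2(x)|}{2}=\frac{\|x\|^2}{2} \leq \frac12 \left \| x \right \|^2 \big( \ln \left \| x \right \| +1 \big) \quad \text{ for all } x \in \mathbb R^2 \setminus \overline{B}_1,
$$
so that \eqref{eq:3.2.1.20} holds with $M=\frac12$, and $\mathbb M$ is non-explosive, although both eigenvalues of $A$ grow super-exponentially.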
\begin{proposition} \label{theo:3.2.8}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $\sigma= (\sigma_{ij})_{1 \leq i \leq d,1\leq j \leq l}$ be as in Theorem \ref{theo:3.1.4}(ii). Let $h_1 \in L^p(\mathbb R^d, \mu)$, $h_2 \in L^q(\mathbb R^d, \mu)$ with $q=\frac{pd}{p+d}$ and $p\in (d,\infty)$, and $M>0$ be a constant.
\begin{itemize}
\item[(i)] If for a.e. $x \in \mathbb R^d$
\begin{eqnarray*} \label{lineargrowth-2}
\max_{1\leq i \leq d, 1 \leq j \leq l}|\sigma_{ij}(x)| \leq |h_1(x)|+ M(\sqrt{\|x\|}+1),
\end{eqnarray*}
and
$$
\max_{1 \leq i \leq d}|g_i(x)| \leq |h_2(x)| + M(\|x\|+1),
$$
then $\mathbb M$ is non-explosive (Definition \ref{non-explosive}) and moreover, for any $T>0$ and any open ball $B$, there exist constants $D$, $E>0$ such that
$$
\sup_{x \in \overline{B}} \mathbb E_{x}\Big [\sup_{s \leq t} \|X_s \|\Big] \leq D\cdot e^{E \cdot t}, \quad \forall t \in [0, T].
$$
\item[(ii)] If for a.e. $x \in \mathbb R^d$
\begin{eqnarray*}
\max_{1\leq i \leq d, 1 \leq j \leq l} |\sigma_{ij}(x)|+\max_{1 \leq i \leq d}|g_i(x)| \leq |h_1(x)| + M(\|x\|+1),
\end{eqnarray*}
then $\mathbb M$ is non-explosive and moreover, for any $T>0$ and any open ball $B$, there exist constants $D, E>0$ such that
$$
\sup_{x \in \overline{B}} \mathbb E_{x}\Big[\sup_{s \leq t} \|X_s \|^2\Big] \leq D\cdot e^{E\cdot t}, \quad \forall t \in [0, T].
$$
\end{itemize}
\end{proposition}
\begin{proof}
(i) Let $x \in \overline{B}$, $T>0$ and $t \in [0, T]$, and $\sigma_n, n\in \mathbb N$, be as in Definition \ref{stopping times}. For any $i \in \{ 1, \ldots, d\}$ and $n \in \mathbb N$, it holds that $\mathbb P_x$-a.s.
\begin{eqnarray*}
\sup_{0 \leq s \leq t \wedge \sigma_{n}} |X_s^i|
\leq |x_i| + \sum_{j=1}^l \sup_{\;0 \leq s \leq t \wedge \sigma_{n} } \Big|\int_0^{s} \sigma_{ij} (X_u) \, dW_u^j \Big | + \int^{ t \wedge \sigma_{n}}_{0} |g_i(X_u)| \, du.
\end{eqnarray*}
Using the Burkholder--Davis--Gundy inequality (\cite[IV. (4.2) Corollary]{RYor}) and Jensen's inequality, it holds that for any $i \in \{1, \ldots, d\}$ and $j \in \{1,\ldots, l \}$
\begin{eqnarray*}
&& \mathbb E_x \Big [ \sup_{\;0 \leq s \leq t \wedge \sigma_{n} }\Big | \int_0^{s} \sigma_{ij} (X_u) \, dW_u^j \Big | \Big ] \\
&& \leq C_1 \mathbb E_x \Big [ \Big(\int_0^{t \wedge \sigma_{n} } \sigma_{ij}^2(X_{u }) du \Big)^{1/2} \Big ] \leq C_1 \mathbb E_x \Big [ \int_0^{t \wedge \sigma_{n} } \sigma_{ij}^2(X_{u }) du \Big ]^{1/2},
\end{eqnarray*}
where $C_1>0$ is a universal constant. Using Theorem \ref{theo:3.3}(i) and
the inequalities $(a+b+c)^2\le 3(a^2+b^2+c^2)$, $\sqrt{a+b+c}\le \sqrt{a}+\sqrt{b}+\sqrt{c}$ and $\sqrt{a} \leq a +1/4$ which hold for $a,b,c\ge 0$,
\begin{eqnarray*}
&& \mathbb E_x \Big [ \int_0^{t \wedge \sigma_{n} } \sigma_{ij}^2(X_{u }) du \Big ]^{1/2} \leq \mathbb E_{x} \Big [ 3 \int_0^{t \wedge \sigma_n} \Big (|h_1^2(X_u)| + M^2\|X_u\|+M^2\Big ) du \Big]^{1/2}\\
&& \leq \sqrt{3} \mathbb E_x \Big [ \int_0^ T h_1^2(X_u) du\Big]^{1/2}+M\sqrt{3} \cdot \mathbb E_{x} \Big[ \int_0^{t \wedge \sigma_n}\|X_{u }\| du \Big]^{1/2} +M\sqrt{3T} \nonumber \\
&&\leq \underbrace{ (3e^T c_{B, q})^{1/2} \|h_1 \|_{L^{2q}(\mathbb R^d, \mu)}+M\sqrt{3}\Big (\sqrt{T}+\frac{1}{4}\Big )}_{=:C_2} +M \sqrt{3} \int_0^ t \mathbb E_{x} \Big[ \sup_{0 \leq s \leq u \wedge \sigma_n} \|X_s\| \Big] du. \nonumber
\end{eqnarray*}
Concerning the drift term, we have
\begin{eqnarray*}
&&\mathbb E_{x}\Big[\int^{ t \wedge \sigma_{n}}_{0} |g_i(X_u)| \, du \Big] \leq \mathbb E_{x}\Big[ \int_0^{ t \wedge \sigma_{n}} |h_2(X_u) | du\Big] +M\mathbb E_{x} \Big[ \int_0^{t \wedge \sigma_{n}} \big(\|X_{u }\|+1\big) \,du \Big] \\
&&\leq \mathbb E_{x} \Big[ \int_0^T |h_2(X_u)| du \Big]+MT+ M\mathbb E_{x} \Big[ \int_0^{t } \sup_{0 \leq s \leq u \wedge \sigma_n} \|X_s\| du \Big] \\
&& \leq \underbrace{e^T c_{B, q} \|h_2 \|_{L^q(\mathbb R^d, \mu)}+MT}_{=:C_{3}}+ M\int_0^{t }\mathbb E_{x} \Big[ \sup_{0 \leq s \leq u \wedge \sigma_n} \|X_s\|\Big] du.
\end{eqnarray*}
Let $p_n(t):=\mathbb E_{x} \left[ \sup_{0 \leq s \leq t\wedge \sigma_{n}}\|X_{s}\| \right]$. Then
\begin{eqnarray*}
p_n(t) \leq \sum_{i=1}^{d} \mathbb E_x \Big[ \sup_{0 \leq s \leq t \wedge \sigma_{n}} |X_s^i| \Big] \leq \underbrace{ \sqrt{d} \|x\| + d(lC_1C_2+C_{3})}_{=:D} + \underbrace{Md(\sqrt{3}lC_1+1)}_{=:E} \int_0^t p_n(u) du.
\end{eqnarray*}
By Gronwall's inequality,
\begin{equation} \label{eq:3.21}
p_n(t) \leq D\cdot e^{E\cdot t}\;\; \text{ $\forall t \in [0,T]$. }
\end{equation}
Using Markov's inequality and \eqref{eq:3.21},
\begin{eqnarray*}
\mathbb P_x(\sigma_n \leq T) &\leq& \mathbb P_{x} \Big(\sup_{s \leq T \wedge \sigma_n}|X_{s} | \geq n \Big) \\
&\leq& \frac{1}{n} \mathbb E_{x} \Big[ \sup_{s \leq T \wedge \sigma_n}|X_{s} | \Big] \leq \frac{1}{n} D\cdot e^{E \cdot T} \rightarrow 0 \;\text{ as } n \rightarrow \infty.
\end{eqnarray*}
Therefore, letting $T \rightarrow \infty$, $\mathbb M$ is non-explosive by Lemma \ref{lem:3.1.4}(i). Applying Fatou's lemma to \eqref{eq:3.21} and taking the supremum over $\overline{B}$, the last assertion follows. \\
(ii) By Jensen's inequality, for $i \in \{1, \ldots, d \}$, $t \in [0, T]$ and $\sigma_n, n\in \mathbb N$, as in Definition \ref{stopping times}
\begin{eqnarray*}
&&\sup_{0 \leq s \leq t \wedge \sigma_{n}} |X_s^i|^2 \\
&\leq& (l+2) \Big(x_i^2 + \sum_{j=1}^l \Big(\sup_{\;0 \leq s \leq t \wedge \sigma_{n} } \Big | \int_0^{s} \sigma_{ij} (X_u) \, dW_u^j \Big |\Big)^2 + t\int^{ t \wedge \sigma_{n} }_{0} |g_i(X_u)|^2 \, du \Big).
\end{eqnarray*}
Using Doob's inequality (\cite[II. (1.7) Theorem]{RYor}), for any $j=1,\ldots,l$,
\begin{eqnarray*}
\mathbb E_x \Big [ \Big(\sup_{\;0 \leq s \leq t \wedge \sigma_{n} } \Big |\int_0^{s} \sigma_{ij} (X_u) \, dW_u^j \Big |\Big)^2 \Big ] &\leq& 4 \mathbb E_x \Big [ \int_0^{t \wedge \sigma_n } \sigma_{ij}^2(X_{u }) du \Big ].
\end{eqnarray*}
By Theorem \ref{theo:3.3}(i),
\begin{eqnarray*}
&& \mathbb E_x \Big[ \int_0^{t \wedge \sigma_{n} } \sigma_{ij}^2(X_{u }) du \Big] \leq \mathbb E_{x} \Big[ 3 \int_0^{t \wedge \sigma_n} \big(|h_1^2(X_u)| + M^2\|X_u\|^2+M^2 \big)du \Big] \\
&&\;\; \leq \underbrace{(3e^T c_{B,\frac{p}{2}}) \|h_1^2 \|_{L^{p/2}(\mathbb R^d, \mu)} + 3M^2T}_{=:c_1}+3M^2\int_0^{t }\mathbb E_{x} \Big[ \sup_{0 \leq s \leq u \wedge \sigma_n} \|X_s\|^2\Big] du
\end{eqnarray*}
and
\begin{eqnarray*}
&&\mathbb E_{x}\Big[\int_0^{ t \wedge \sigma_{n}} |g_i(X_u)|^2du\Big] \leq c_1+ 3M^2\int_0^{t }\mathbb E_{x} \Big[ \sup_{0 \leq s \leq u \wedge \sigma_n} \|X_s\|^2\Big] du.
\end{eqnarray*}
Let $p_n(t):=\mathbb E_{x} \Big[ \sup_{0 \leq s \leq t\wedge \sigma_{n}}\|X_{s}\|^2 \Big]$. Then
\begin{eqnarray*}
&&p_n(t) \leq \sum_{i=1}^{d} \mathbb E_x \Big[\sup_{0 \leq s \leq t \wedge \sigma_{n}} |X_s^i|^2\Big] \\
&&\leq \underbrace{(l+2) \Big( \|x\|^2 + 4c_1dl+c_1 dT \Big)}_{=:D} + \underbrace{3M^2d(l+2)(4l+T)}_{=:E} \int_0^t p_n(u) du.
\end{eqnarray*}
Using Gronwall's inequality, Markov's inequality and Jensen's inequality, the rest of the proof follows similarly to the proof of (i).
\end{proof}
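\noindent
A standard illustration of the linear growth condition in Proposition \ref{theo:3.2.8}(ii) (with $d=l=1$, $h_1=0$, and the verification of {\bf (a)} and {\bf (b)} disregarded) is the geometric Brownian motion $dX_t=X_t\,dt+X_t\,dW_t$, i.e. $\sigma(x)=g(x)=x$, for which $|\sigma(x)|+|g(x)| \leq 2(|x|+1)$. Here It\^{o}'s formula yields
$$
\mathbb E_x\left[X_t^2\right]=x^2 e^{3t}, \quad t \geq 0,
$$
so the second moment indeed grows at most exponentially in $t$, in accordance with the asserted bound $D \cdot e^{E \cdot t}$.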
\text{}\\
There are examples where the Hunt process $\mathbb M$ of Theorem \ref{th: 3.1.2} is non-explosive but \eqref{eq:3.20} is not satisfied and $\mathbf{G}$ has infinitely many singularities which form an unbounded set.
\begin{eg}\label{exam:3.2.1.4}
\begin{itemize}
\item[(i)]
Let $\eta \in C_0^{\infty}(B_{1/4})$ with $\eta\equiv 1$ on $B_{1/8}$ and define
$$
w(x_1, \dots, x_d):= \eta(x_1, \dots, x_d) \cdot \int_{-\infty}^{x_1} \frac{1}{|t|^{1/d}} 1 _{[-1,1]}(t) dt, \;\; (x_1, \ldots, x_d) \in \mathbb R^d.
$$
Then $w \in H^{1,q}(\mathbb R^d) \cap C_0(B_{1/4})$ with $\partial_1 w \notin L_{loc}^{d}(\mathbb R^d)$. Let
$$
\varv(x_1, \dots, x_d):=\sum_{i=0}^{\infty} \frac{1}{2^i} w(x_1-i, \dots, x_d), \;\; (x_1, \ldots, x_d) \in \mathbb R^d.
$$
Then $\varv \in H^{1,q}(\mathbb R^d) \cap C(\mathbb R^d)$ with $\partial_1 \varv \notin L_{loc}^{d}(\mathbb R^d)$. Now define $P=(p_{ij})_{1 \leq i, j \leq d}$ as
$$
p_{1d}:=\varv, \;\; p_{d1}:=-\varv,\;\quad p_{ij}:=0 \text{ if }\;(i,j) \notin \{(1,d), (d,1)\}.
$$
Let $Q=(q_{ij})_{1 \leq i, j \leq d}$ be a matrix of functions with $q_{ij}=-q_{ji} \in H^{1,2}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ for all $1\leq i,j \leq d$ and assume there exists a constant $M>0$ such that
$$
\| \nabla Q \| \leq M( \|x\|+1), \quad \text{ a.e. on } \mathbb R^d.
$$
Let $A:=id$, $C:=(P+Q)^T$ and $\mathbf{H}:=0$. Then $\mu:=dx$ is an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$ and $\mathbf{G}=\frac12\nabla (A+C^T)= \frac12(\partial_1 \varv \,\mathbf{e}_1 + \nabla Q)$. Observe that
$\partial_1 \varv$ is unbounded in a neighborhood of infinitely many isolated points that form an unbounded set and moreover $\nabla Q$ is a locally bounded vector field which has linear growth. By Proposition \ref{theo:3.2.8}(i), $\mathbb M$ is non-explosive.
\item[(ii)] \ Let $\gamma \in (0,1)$, $\psi(x):=\|x\|^{\gamma}$, $x \in B_{1/4}$ and $p:=\frac{d}{1-\frac{\gamma}{2}}>d$. Then since $p(1-\gamma)<d$, $\psi \in H^{1,p}(B_{1/4})$ with $\nabla \psi(x)=\frac{\gamma}{\|x\|^{1-\gamma}} \frac{x}{\|x\|}$. By \cite[Theorem 4.7]{EG15}, $\psi$ can be extended to a function $\psi \in H^{1,p}(\mathbb R^d)$ with $\psi \geq 0$ and $\text{supp}(\psi) \subset B_{1/2}$.
Let
$$
\rho(x):=1+\sum_{k=0}^{\infty}\psi(x-k\mathbf{e}_1), \;\;\; x \in \mathbb R^d.
$$
Then $\rho \in H_{loc}^{1,p}(\mathbb R^d) \cap C_b(\mathbb R^d)$ with $\rho(x) \geq 1$ for all $x \in \mathbb R^d$ and $\|\nabla \rho\| \notin \cup_{r \in [1, \infty]}L^r(\mathbb R^d)$. Let $\mu:=\rho dx$. Since $\rho$ is bounded above, there exists $c>0$ such that $\mu(B_r) \leq cr^d$ for all $r>0$. Let $\mathbf{F} \in L^p_{loc}(\mathbb R^d, \mathbb R^d)$ be such that
$$
\int_{\mathbb R^d} \langle \mathbf{F}, \nabla \varphi \rangle dx = 0, \quad \text{ for all } \varphi \in C_0^{\infty}(\mathbb R^d)
$$
and for some $M>0$, $N_0 \in \mathbb N$, assume $\|\mathbf{F}(x)\| \leq M\|x\|(\ln\|x\|+1)$ for a.e. $x \in \mathbb R^d \setminus \overline{B}_{N_0}$. Let $A:=id$, $\mathbf{B}:=\frac{\mathbf{F}}{\rho}$ and $\mathbf{G}:=\beta^{\rho,A}+\mathbf{B}=\frac{\nabla \rho}{\rho}+\frac{\mathbf{F}}{\rho}$. Then $\frac{\nabla \rho}{\rho}$ is unbounded in a neighborhood of infinitely many isolated points, which form an unbounded set, and $|\langle\mathbf{B}(x),x\rangle| \leq M\|x\|^{2}(\ln\|x\|+1)$ for a.e. $x \in \mathbb R^d \setminus \overline{B}_{N_0}$. Therefore, \eqref{eq:3.20} is not satisfied, and $\mathbf{G}$ also does not satisfy the conditions of Proposition \ref{theo:3.2.8}(i) and (ii). But by Proposition \ref{prop:3.2.9}(i) below, $(T_t)_{t>0}$ is conservative. Hence, $\mathbb M$ is non-explosive by Corollary \ref{cor:3.2.1}.
\end{itemize}
\end{eg}
So far, we proved non-explosion criteria by probabilistic means. The following proposition, which is an immediate consequence of \cite[Corollary 15(i) and (iii)]{GT17}, completes Proposition \ref{prop:2.1.10} in the sense that conservativeness is proven by purely analytical means. Moreover, it applies in the situation of Section \ref{subsec:2.1.4}, where the density $\rho$ is explicitly given, in contrast to the situation of Theorem \ref{theo:2.2.7}, where $\rho$ is constructed and not known explicitly, except for its regularity properties.
\begin{proposition}
\label{prop:3.2.9}
Suppose that the assumptions \eqref{condition on mu}--\eqref{eq:2.1.4} of Section \ref{subsec:2.1.1}
are satisfied and that the given density $\rho$ additionally satisfies $\rho>0$ a.e.
(Both hold for instance in the situation of Remark \ref{rem:2.2.4} which includes condition {\bf (a)} of Section \ref{subsec:2.2.1}). Suppose further that there exist constants $M, c>0$ and $N_0, N_1 \in \mathbb N$, such that either of the following holds:
\begin{itemize}
\item[(i)]
$$
\frac{\langle A(x)x,x\rangle}{\|x\|^2}+|\langle{\mathbf{B}(x)},x\rangle| \leq M\|x\|^{2} \ln(\|x\|+1),
$$
for a.e. $x\in \mathbb R^d\setminus \overline{B}_{N_0}$ and
$$
\mu(B_{4n}\setminus B_{2n})\le (4n)^{c}\qquad \forall n\ge N_1.
$$
\item[(ii)]
$$
\langle A(x)x,x\rangle+|\langle\mathbf{B} (x),x\rangle| \leq M\|x\|^{2},
$$
for a.e. $x\in \mathbb R^d\setminus \overline{B}_{N_0}$ and
$$
\mu(B_{4n}\setminus B_{2n})\le e^{c(4n)^2}\qquad \forall n\ge N_1.
$$
\end{itemize}
Then $(T_t)_{t>0}$ and $(T'_t)_{t>0}$ are conservative (cf. Definitions \ref{definition2.1.7} and \ref{def:3.2.1}) and $\mu$ is $(\overline{T}_t)_{t>0}$-invariant and $(\overline{T}'_t)_{t>0}$-invariant (cf. Definition \ref{def:2.1.1}(ii)).
\end{proposition}
\begin{remark}\label{rem:3.2.1 1}
{\it Recall that the drift has the form $\mathbf{G} =\beta^{\rho , A}+\mathbf{B} $. Proposition \ref{prop:3.2.9} is a type of conservativeness result, where a growth condition on the logarithmic derivative $\beta^{\rho,A}$ is not explicitly required.
Instead, a volume growth condition on the infinitesimally invariant measure $\mu$ occurs.
Such types of conservativeness results have been studied systematically under more general assumptions on the coefficients: in the symmetric case in \cite{Ta89} and \cite{Sturm94}, in the sectorial case in \cite{TaTr}, and in the possibly non-sectorial case in \cite{GT17}.
In Proposition \ref{prop:3.2.9} there is an interplay between the growth conditions on $A$ and $\mathbf{B}$ and the growth condition of $\mu$ on annuli. The stronger conditions on $A$ and $\mathbf{B}$ in Proposition \ref{prop:3.2.9}(ii) allow for the weaker exponential growth condition of $\mu$ on annuli. In particular, an exponential growth condition as in Proposition \ref{prop:3.2.9}(ii) already appears in \cite{Ta89}.
The exponential growth condition of \cite{Ta89} can even be slightly relaxed in the symmetric case (see \cite[Remarks b), p. 185]{Sturm94}).
For instance, if according to Remark \ref{rem:2.2.4}, $A=id$, $C=0$, and $\mathbf{\overline{B}}=0$ (hence $\mathbf{B}=0$) and $\rho=e^{2\phi}$, with $\phi \in H^{1,p}_{loc}(\mathbb R^d)$ for some $p\in (d,\infty)$, then $\mathbf{G}=\nabla \phi$, $\mu=e^{2\phi}dx$, and by \cite[Theorem 4]{Sturm94}, $(T_t)_{t>0}$ is conservative and $\mu$ is $(\overline{T}_t)_{t>0}$-invariant, if there exist a constant $M>0$ and $N_0 \in \mathbb N$, such that
\begin{equation} \label{eq:3.2.27}
\phi (x) \leq M \|x\|^2 \ln(\|x\|+1), \;\quad \forall x \in \mathbb R^d \setminus \overline{B}_{N_0}.
\end{equation}
Indeed, in this case the intrinsic metric equals the Euclidean metric (see \cite[4.1 Theorem]{Sturm98} and its proof), so that \cite[{\bf Assumption (A)}]{Sturm94} is satisfied.
Moreover, $\mu(B_{r})\le e^{cr^{2} \ln(r+1)}$, for some constant $c$ and $r$ large, which further implies
$$
\int_1^{\infty}\frac{r}{\ln \mu(B_r)} dr =\infty.
$$
Thus the result follows by \cite[Theorem 4]{Sturm94}.}
\end{remark}
\begin{eg}
We saw in Example \ref{exam:3.2.1.4}(ii) that the criterion of Proposition \ref{prop:3.2.9}(i) is not covered by any other non-explosion or conservativeness result of this monograph. The same is true for Proposition \ref{prop:3.2.9}(ii) and the criterion \eqref{eq:3.2.27}.
Here we only show the latter. Hence let $A=id$, $C=0$, $\mathbf{\overline{B}}=0$.
Let $\phi_1(x)=1+\sum_{k=0}^{\infty}\psi(x-k\mathbf{e}_1)$, $x \in \mathbb R^d$, where $\psi$ is defined as in Example \ref{exam:3.2.1.4}(ii), $\phi_2(x)=(\|x\|+1)^2 \ln(\|x\|+1)$, $x \in \mathbb R^d$, $\phi=\phi_1+\phi_2$, and $\mu=\exp(2\phi)dx$. Then $\phi$ satisfies \eqref{eq:3.2.27}, the associated semigroup $(T_t)_{t>0}$ is conservative, and $\mathbb M$ is non-explosive by Remark \ref{rem:3.2.1 1}. Indeed, the volume growth of $\mu$ is as described at the end of Remark \ref{rem:3.2.1 1}, and the drift coefficient $\mathbf{G}$ consists of $\nabla \phi_1$, which has infinitely many singularities that form an unbounded set in $\mathbb R^d$, and of $\nabla \phi_2(x)=(\|x\|+1)(2\ln(\|x\|+1)+1) \frac{x}{\|x\|}$, which has linear-times-logarithmic growth.
Hence, Corollary \ref{cor:3.2.2}, Proposition \ref{theo:3.2.8} and Proposition \ref{prop:3.2.9} cannot be used to determine the conservativeness of $(T_t)_{t>0}$.
\end{eg}
\subsubsection{Transience and recurrence}
\label{subsec:3.2.2}
Throughout this section, we will assume that {\bf (a)} of Section \ref{subsec:2.2.1} holds and that assumption {\bf (b)} of Section \ref{subsec:3.1.1} holds. Furthermore, we let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}. \\ For $f \in L^1(\mathbb R^d, \mu)$ with $f \geq 0$, define $Gf$ through the following pointwise increasing limit
$$
Gf:=\int_0^{\infty} T_t f dt = \lim_{\alpha \rightarrow 0+} \int_{0}^{\infty} e^{-\alpha t} T_t f dt = \lim_{\alpha \rightarrow 0+} G_{\alpha} f, \;\; \text{$\mu$-a.e.},
$$
where $(T_t)_{t>0}$ and $(G_{\alpha})_{\alpha>0}$ are defined in Definition \ref{definition2.1.7}.
\begin{definition} \label{def:3.2.2.2}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} holds. $(T_t)_{t>0}$ (see Definition \ref{definition2.1.7}) is called {\bf recurrent}\index{recurrent ! semigroup}, if for any $f \in L^1(\mathbb R^d, \mu)$ with $f \geq 0$ \, $\mu$-a.e.,
$$
G f(x) \in\{0,\infty\} \;\; \text{for $\mu$-a.e. $x \in \mathbb R^d$}.
$$
$(T_t)_{t>0}$ is called {\bf transient}\index{transient ! semigroup}, if there exists $g \in L^1(\mathbb R^d, \mu)$ with $g> 0$ $\mu$-a.e. such that
$$
G g(x) <\infty, \;\; \text{for $\mu$-a.e. $x \in \mathbb R^d$.}
$$
\end{definition}
\noindent
Note that by \cite[Remark 3(a)]{GT2},
$(T_t)_{t>0}$ is transient, if and only if for any $f \in L^1(\mathbb R^d, \mu)$ with $f \geq 0$ $\mu$-a.e.
$$
G f(x) <\infty,\;\ \text{ for $\mu$-a.e. $x \in \mathbb R^d$. }
$$
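\noindent
A classical example for orientation: let $A=id$ and $\mathbf{G}=0$, so that (up to the verification of the standing assumptions) $\mu=dx$ and $(T_t)_{t>0}$ is the heat semigroup generated by $\frac12 \Delta$. For $d \geq 3$,
$$
Gf(x)=\int_0^{\infty}T_t f(x)\,dt=c_d \int_{\mathbb R^d} \frac{f(y)}{\|x-y\|^{d-2}}\,dy
$$
for some dimensional constant $c_d>0$, which is finite for every $x \in \mathbb R^d$ for, say, $f(y)=e^{-\|y\|^2}$, so that $(T_t)_{t>0}$ is transient in the sense of Definition \ref{def:3.2.2.2}. For $d \in \{1,2\}$, $\int_0^{\infty}p_t(x,y)\,dt=\infty$ for the Gaussian transition kernel $p_t$, and $(T_t)_{t>0}$ is recurrent.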
For $x \in \mathbb R^d$ and $f \in L^1(\mathbb R^d, \mu)$ with $f \geq 0$, define for $(P_t)_{t>0}$ of Proposition \ref{prop:3.1.1} and $\mathbb M$ of Theorem \ref{th: 3.1.2} (see also Theorem \ref{theo:3.1.4}),
\begin{eqnarray*}
R f(x):&=& \int_0^{\infty}P_t f(x) dt=\mathbb E_x \left[ \int_0^{\infty} f(X_t) dt \right] = \lim_{n \rightarrow \infty} \mathbb E_x \left[ \int_0^{\infty} (f \wedge n)(X_t) dt \right] \\
&=&\lim_{n \rightarrow \infty} \lim_{\alpha \rightarrow 0+} \mathbb E_x \left[ \int_0^{\infty} e^{-\alpha t}(f \wedge n)(X_t) dt \right]
= \lim_{n \rightarrow \infty} \lim_{\alpha \rightarrow 0+} \Big( R_{\alpha} (f \wedge n) (x) \Big).
\end{eqnarray*}
Since $Rf$ is the pointwise increasing limit of lower semi-continuous functions, $R f$ is lower semi-continuous on $\mathbb R^d$ by Theorem \ref{theorem2.3.1}. In particular, for any $f, g \in L^1(\mathbb R^d, \mu)$ with $f=g\ge 0$, $\mu$-a.e. it holds that $R f(x)= R g(x)$ for all $x \in \mathbb R^d$. Moreover,
$$
Rf(x) = Gf(x), \;\; \text{ for $\mu$-a.e. $x \in \mathbb R^d$.}
$$
Define the last exit time $L_A$ from $A\in \mathcal{B}(\mathbb R^d)$ by
\begin{equation}\label{lastexittime}
L_A:=\sup \{ t\geq 0: X_t\in A\},\ \ (\sup \emptyset:=0).
\end{equation}
\begin{definition} \label{def:3.2.2.3}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
$\mathbb M$ (see Theorem \ref{th: 3.1.2} and also Theorem \ref{theo:3.1.4}) is called {\bf recurrent in the probabilistic sense}\index{recurrent ! in the probabilistic sense}, if for any non-empty open set $U$ in $\mathbb R^d$
\begin{eqnarray*}
\mathbb P_x(L_U=\infty)=1,\ \ \forall x\in \mathbb R^d.
\end{eqnarray*}
$\mathbb M$ is called {\bf transient in the probabilistic sense}\index{transient ! in the probabilistic sense}, if for any compact set $K$ in $\mathbb R^d$,
\begin{equation*}
\mathbb P_x(L_K<\infty)=1,\ \ \forall x\in \mathbb R^d.
\end{equation*}
\end{definition}
\begin{proposition} \label{prop:3.2.2.11}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
$\mathbb M$ is transient in the probabilistic sense, if and only if
\begin{equation} \label{eq:3.25}
\mathbb P_x(\lim_{t \rightarrow \infty}X_t=\Delta)=1,\ \ \; \forall x\in \mathbb R^d.
\end{equation}
In particular, if $\mathbb M$ is transient, then
\begin{equation*} \label{eq:3.25*}
\lim_{t \rightarrow \infty} P_t f(x) =0
\end{equation*}
for any $x \in \mathbb R^d$ and $f \in \mathcal{B}_b(\mathbb R^d)_0 + C_{\infty}(\mathbb R^d)$.
\end{proposition}
\begin{proof}
Let $x \in \mathbb R^d$ and $K_n:=\overline{B}_n(x)$, $n \in \mathbb N$. Let $\Omega_0:=\cap_{n \in \mathbb N} \{\omega \in \Omega: L_{K_n}(\omega)<\infty\}$. Then it follows that
\begin{equation*}
\Omega_{0}= \{\omega \in \Omega: \lim_{t \rightarrow \infty}X_t(\omega)=\Delta \},\;\; \text{ $\mathbb P_x$-a.s. }
\end{equation*}
Assume that $\mathbb M$ is transient in the probabilistic sense. Then $\mathbb P_x(\Omega_0)=1$, hence \eqref{eq:3.25} holds. Conversely, assume \eqref{eq:3.25} holds. Then $\mathbb P_x(L_{K_n}<\infty)=1$ for all $n \in \mathbb N$. Let $K$ be a compact set in $\mathbb R^d$. Then there exists $N \in \mathbb N$ such that $K \subset K_N$, hence $\mathbb P_x(L_K <\infty)=1$. Thus, $\mathbb M$ is transient in the probabilistic sense. \\
Now assume that $\mathbb M$ is transient. Let $x \in \mathbb R^d$ and $f \in \mathcal{B}_b(\mathbb R^d)_0+C_{\infty}(\mathbb R^d)$. Then, $P_t f(x)=\mathbb E_x[f(X_t)]$ by Proposition \ref{prop:3.1.4}. Since $f$ is bounded and $\lim_{t \rightarrow \infty} f(X_t)=0$ \,$\mathbb P_x$-a.s. by \eqref{eq:3.25}, it follows from Lebesgue's theorem that
$$
\lim_{t \rightarrow \infty} \mathbb E_x[f(X_t)] =0,
$$
as desired.
\end{proof}
\begin{lemma} \label{lem:3.2.6}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold and let $\mathbb M$ be as in Theorem \ref{th: 3.1.2}.
Assume that $\Lambda \in \mathcal{F}$ is $\vartheta_t$-invariant for some $t>0$, i.e. $\Lambda=\vartheta^{-1}_t(\Lambda)$.
Then $x \mapsto \mathbb P_x(\Lambda)$ is continuous on $\mathbb R^d$.
\end{lemma}
\begin{proof}
By the Markov property,
$$
\mathbb{P}_x(\Lambda) = \mathbb{P}_x(\vartheta_t^{-1}(\Lambda))= \mathbb{E}_x[\mathbb{E}_x[ 1_{\Lambda}\circ \vartheta_t \,|\, \mathcal{F}_t]]
= \mathbb{E}_x[\mathbb{E}_{X_t}[ 1_{\Lambda}]]=P_t \mathbb{P}_{\cdot}(\Lambda)(x).
$$
Since $x \mapsto \mathbb P_x(\Lambda)$ is bounded and measurable, the assertion follows by Theorem \ref{theo:2.6}.
\end{proof}
\begin{theorem}\label{theo:3.3.6}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
We have the following:
\begin{itemize}
\item[(i)] $(T_t)_{t>0}$ is either recurrent or transient (see Definition \ref{def:3.2.2.2}).
\item[(ii)] $(T_t)_{t>0}$ is transient, if and only if $\mathbb M$ is transient in the probabilistic sense (see Definition \ref{def:3.2.2.3}).
\item[(iii)] $(T_t)_{t>0}$ is recurrent, if and only if $\mathbb M$ is recurrent in the probabilistic sense.
\item[(iv)] $\mathbb M$ is either recurrent or transient in the probabilistic sense.
\item[(v)] If $(T_t)_{t>0}$ is recurrent, then $(T_t)_{t>0}$ is conservative.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Since $(T_t)_{t>0}$ is strictly irreducible by Proposition \ref{prop:2.4.2}(ii), $(T_t)_{t>0}$ is either recurrent or transient by \cite[Remark 3(b)]{GT2}.\\
(ii) If $(T_t)_{t>0}$ is transient, then by \cite[Lemma 6]{GT2} there exists $g \in L^{\infty}(\mathbb R^d, \mu)$ with $g(x)>0$ for all $x\in \mathbb R^d$ such that
$$
R g =\mathbb E_{\cdot}\Big [\int_0^{\infty} g(X_t) dt\Big ]\in L^{\infty}(\mathbb R^d, \mu).
$$
Since $Rg$ is lower semi-continuous,
$$
V :=\big \{Rg-\|Rg\|_{L^{\infty}(\mathbb R^d, \mu)}>0\big \}
$$
is open. Since $\mu(V)=0$ and $\mu$ has full support, we must have that $V=\varnothing$. It follows that $Rg\le\|Rg\|_{L^{\infty}(\mathbb R^d, \mu)}$ pointwise so that the adapted process $t\mapsto Rg(X_t)$ is $\mathbb{P}_x$-integrable for any $x\in \mathbb R^d$. Using the Markov property,
for any $0 \leq s <t$ and $x\in \mathbb R^d$
\begin{eqnarray*}
\mathbb E_x \left[ R g(X_{t}) \,\vert\, \mathcal{F}_s \right] &=& \mathbb E_{X_s} \left[ R g(X_{t-s})\right] \\
&=& P_{t -s} R g(X_s)\ =\ \int_{t-s}^{\infty} P_u g(X_s) du \ \leq \ R g(X_s)
\end{eqnarray*}
and moreover since $\mathbb M$ is a normal Markov process with right-continuous sample paths, we obtain $Rg(x)>0$ for any $x\in \mathbb R^d$. Thus, $(\Omega , \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \left ( Rg(X_t)\right )_{t \geq 0}, \mathbb P_x)$ is a positive supermartingale for all $x\in \mathbb R^d$. \\
Let $U_n:=\{Rg>\frac{1}{n}\}$, $n \in \mathbb N$. Since $Rg$ is lower semi-continuous, $U_n$ is open in $\mathbb R^d$ and since $Rg>0$ everywhere, $\mathbb R^d = \cup_{n \in \mathbb N} U_n$. \\
Let $K\subset \mathbb R^d$ be an arbitrary compact set. Since $\{U_n\cap B_n\}_{n\in \mathbb N}$ is an open cover of $K$, there exists $N\in \mathbb N$ with $K\subset V_N:=U_N\cap B_N$. Since $\overline{V}_N$ is compact and $\{U_n\}_{n\in \mathbb N}$ is an open cover of $\overline{V}_N$, there exists $M\in \mathbb N$ with
$$
K\subset V_N \subset \overline{V}_N \subset U_M.
$$
By the optional stopping theorem for positive supermartingales, for $t\ge 0$, $x \in \mathbb R^d$, and $\sigma_{V_N}$ as in Definition \ref{stopping times},
\begin{eqnarray*}
P_t Rg(x)&=&\mathbb E_x\left[Rg(X_t)\right] \geq \mathbb E_x\big [ Rg(X_{t+\sigma_{V_N} \circ \vartheta_t}) \big ] \\
&\geq& \mathbb E_x\big [ Rg(X_{t+\sigma_{V_N} \circ \vartheta_t}) 1_{\{ t+\sigma_{V_N} \circ \vartheta_t<\infty \}} \big] \\
&\geq& \frac{1}{M} \cdot \mathbb P_x\left( t+\sigma_{V_N} \circ \vartheta_t<\infty \right),
\end{eqnarray*}
and the last inequality holds since $X_{t+\sigma_{V_N} \circ \vartheta_t} \in \overline{V}_N$, $\mathbb P_x$-a.s. on $\{ t+\sigma_{V_N} \circ \vartheta_t<\infty \}$. Consequently,
$$
\lim_{t\rightarrow \infty}\mathbb P_x\left( t+\sigma_{V_N} \circ \vartheta_t<\infty \right)\leq M \cdot \lim_{t\rightarrow \infty} P_t R g(x)=0,
$$
hence $\mathbb P_x\left( t+\sigma_{V_N} \circ \vartheta_t<\infty \text{ for all } t>0 \right)=0$ for any $x \in \mathbb R^d$. Therefore,
$$
1=\mathbb P_x\left( t+\sigma_{V_N} \circ \vartheta_t=\infty \text{ for some } t>0 \right)=\mathbb P_x(L_{V_N} <\infty).
$$
Since $L_K \leq L_{V_N}<\infty$ \,$\mathbb P_x$-a.s. for all $x\in \mathbb R^d$ (cf. \eqref{lastexittime} for the definition of $L_A$), we obtain the transience of $\mathbb M$ in the probabilistic sense.\\
Conversely, assume that $\mathbb M$ is transient in the probabilistic sense. Then condition (8) of \cite[Proposition 10]{GT2} holds with $B_n$ being the Euclidean ball of radius $n$ about the origin. Consequently, by \cite[Proposition 10]{GT2}, there exists
$g \in L^1(\mathbb R^d, \mu)$ with $g>0$ $\mu$-a.e., such that $Rg(x)<\infty$ for $\mu$-a.e. $x\in \mathbb R^d$. Since $Rg$ is a $\mu$-version of $Gg$, we obtain that $(T_t)_{t>0}$ is transient.\\
(iii) Assume that $(T_t)_{t>0}$ is recurrent. Let $U$ be a nonempty open set in $\mathbb R^d$. Then $U$ is not $\mu$-polar and finely open. Thus by \cite[Proposition 11(d)]{GT2}, $\mathbb P_x(L_U<\infty)=1$ for $\mu$-a.e. $x \in \mathbb R^d$. Since $\{L_U<\infty \}\in \mathcal{F}$ is $\vartheta_t$-invariant for all $t>0$, it follows from Lemma \ref{lem:3.2.6} that
$$
\mathbb P_x(L_U<\infty)=1, \; \text{ for all } x \in \mathbb R^d
$$
as desired. \\
Conversely, if $\mathbb M$ is recurrent in the probabilistic sense, then $\mathbb M$ cannot be transient in the probabilistic sense. Thus $(T_t)_{t>0}$ cannot be transient by (ii). Therefore $(T_t)_{t>0}$ is recurrent by (i).\\
(iv) The assertion follows from (i), (ii) and (iii).\\
(v) This follows from \cite[Corollary 20]{GT2}.
\end{proof}
\begin{lemma} \label{lem:3.2.8}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold and let $\mathbb M$ be as in Theorem \ref{th: 3.1.2} (see also Theorem \ref{theo:3.1.4}).
For any $x \in \mathbb R^d$ and $N\in \mathbb N$, it holds that $\mathbb P_x(\sigma_{N}<\infty)=1$, where $\sigma_N$, $N\in \mathbb N$, is as in Definition \ref{stopping times}.
\end{lemma}
\begin{proof}
Let $x \in \mathbb R^d$ and $N \in \mathbb N$. If $x \in \mathbb R^d \setminus \overline{B}_N$, then $\mathbb P_x(\sigma_N=0)=1$. Assume that $x \in \overline{B}_{N}$. Since $\mathbb M$ is either recurrent or transient in the probabilistic sense by Theorem \ref{theo:3.3.6}, it follows that $\mathbb P_x(L_{\mathbb R^d \setminus \overline{B}_{N}}=\infty)=1$ or $\mathbb P_x(L_{\overline{B}_{N}}<\infty)=1$ (cf. \eqref{lastexittime} for the definition of $L_A$), hence the assertion follows.
\end{proof}
\noindent
The following criterion for recurrence of $\mathbb M$ in the probabilistic sense is proved by a well-known technique involving stochastic calculus (see for instance \cite[Theorem 1.1, Chapter 6.1]{Pi}). However, since we ultimately rely on our own results, namely Lemma \ref{lem:3.2.8}, Theorem \ref{theo:3.3.6}(iv) and the claim of Lemma \ref{lem3.2.6}, we can, in contrast to \cite{Pi}, also treat the case of a locally unbounded drift coefficient.
\begin{proposition} \label{theo:3.2.6}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $N_0 \in \mathbb N$. Let $g\in C^2(\mathbb R^d \setminus \overline{B}_{N_0}) \cap C(\mathbb R^d)$, $g \geq 0$, with
\begin{equation*}
\lim_{r \rightarrow \infty} (\inf_{\partial B_r} g) = \infty
\end{equation*}
and assume that
$$
Lg\le 0 \quad \text{a.e. on }\, \mathbb R^d\setminus \overline{B}_{N_0}.
$$
Then $\mathbb M$ is recurrent in the probabilistic sense (see Definition \ref{def:3.2.2.3}).
\end{proposition}
\begin{proof}
By the claim of Lemma \ref{lem3.2.6}, there exists $N_1 \in \mathbb N$ with $N_1 \geq N_0+2$ and $\psi \in C^2(\mathbb R^d)$ with $\psi(x) \geq 0$ for all $x \in \mathbb R^d$, $\psi(x) =g(x)$ for all $x \in \mathbb R^d\setminus B_{N_1}$, such that
\begin{eqnarray}\label{eq:3.2.28a}
L \psi \leq 0 \quad \text{a.e. on }\, \mathbb R^d\setminus \overline{B}_{N_1}.
\end{eqnarray}
In particular, $\mathbb M$ is non-explosive by Lemma \ref{lem3.2.6}. We first show the following claim.\\
{\bf Claim:} Let $n \geq N_1$ and $x \in \mathbb R^d \setminus \overline{B}_n$ arbitrary. Then $\mathbb P_x(\sigma_{B_n}<\infty)=1$ (for $\sigma_{B_n}$ see Definition \ref{stopping times}). \\
To show the claim, choose any $N \in \mathbb N$, with $x \in B_N$. By It\^{o}'s formula and Theorem \ref{theo:3.1.4}(i), $\mathbb P_x$-a.s. for any $t \in [0, \infty)$
$$
\psi(X_{t \wedge \sigma_{B_n} \wedge \sigma_N})- \psi(x) =\int_0^{t \wedge\sigma_{B_n} \wedge \sigma_N} \nabla \psi \cdot \sigma(X_s) dW_s +\int_0^{t \wedge\sigma_{B_n} \wedge \sigma_N} L \psi (X_s) ds,
$$
where $\sigma=(\sigma_{ij})_{1 \leq i,j \leq d}$ is as in Lemma \ref{lem:3.1.5} and $\sigma_{N}$ is as in Definition \ref{stopping times}. Taking expectations and using \eqref{eq:3.2.28a},
$$
\mathbb E_x \left[ \psi(X_{t \wedge \sigma_{B_n} \wedge \sigma_N}) \right] \leq \psi(x).
$$
Since $\mathbb P_x(\sigma_N<\infty)=1$ by Lemma \ref{lem:3.2.8}, using Fatou's lemma, we obtain that
\begin{eqnarray*}
(\inf_{\partial B_N}\psi)\cdot \mathbb P_x(\sigma_{B_{n}}=\infty)&\le& \mathbb E_x[\psi(X_{\sigma_N})1_{\{\sigma_{B_{n}}=\infty\}}] \le \mathbb E_x[\psi(X_{\sigma_{B_{n}}\wedge\sigma_N})]\\
&\le& \liminf_{t \rightarrow \infty} \mathbb E_x \left[ \psi(X_{t \wedge \sigma_{B_n} \wedge \sigma_N}) \right] \leq \psi(x).
\end{eqnarray*}
Letting $N \rightarrow \infty$, we obtain $\mathbb P_x(\sigma_{B_{n}}=\infty)=0$ and the claim is shown.\\
Now let $x \in \mathbb R^d$ and $N_2:=N_1+1$. If $x \in \mathbb R^d \setminus \overline{B}_{N_2}$, then $\mathbb P_x(\sigma_{B_{N_2}}<\infty)=1$ by the claim. If $x \in B_{N_2}$, then $\mathbb P_x(\sigma_{B_{N_2}}=0)=1$, by the continuity and normal property of $\mathbb M$. Finally if $x \in \partial B_{N_2}$, then by the claim again $\mathbb P_x(\sigma_{B_{N_1}}<\infty)=1$, hence $\mathbb P_x(\sigma_{B_{N_2}}<\infty)=1$ since $\sigma_{B_{N_2}}< \sigma_{B_{N_1}}$, $\mathbb P_x$-a.s. Therefore, we obtain
\begin{equation} \label{eq:3.2.29}
\mathbb P_x(\sigma_{B_{N_2}}<\infty)=1, \;\; \text{ for all $x \in \mathbb R^d$}.
\end{equation}
For $n\in \mathbb N$, define
$$
\Lambda_n:= \{ \omega \in \Omega:X_t(\omega)\in B_{N_2} \text{ for some } t\in[n,\infty) \}.
$$
Then $\Lambda_n=\{\omega \in \Omega:\sigma_{B_{N_2}}\circ \vartheta_n<\infty\}$, $\mathbb P_x$-a.s. Using the Markov property and that $\mathbb M$ is non-explosive, it follows by \eqref{eq:3.2.29} that
\begin{eqnarray*}
\mathbb P_x\big ( \sigma_{B_{N_2}}\circ \vartheta_n<\infty \big ) &=& \mathbb E_x\big[ 1_{\{\sigma_{B_{N_2}}<\infty\}} \circ \vartheta_n \big] = \mathbb E_x\big[\mathbb E_x\big[ 1_{\{\sigma_{B_{N_2}}<\infty\}} \circ \vartheta_n \,\big\vert\, \mathcal{F}_n\big]\big] \\
&=& \mathbb E_x\big[ \mathbb P_{X_n} \big( \sigma_{B_{N_2}}<\infty \big) \big]=1.
\end{eqnarray*}
Therefore, $1=\mathbb P_x(\cap_{n \in \mathbb{N}} \Lambda_n)=\mathbb P_x(L_{B_{N_2}}=\infty)$, hence $\mathbb M$ is not transient. By Theorem \ref{theo:3.3.6}(iv), $\mathbb M$ is recurrent.
\end{proof}
\noindent
Choosing $g(x):= \ln(\|x\|^2 \vee N_0^2)+2$ as in the proof of Corollary \ref{cor:3.2.2} and using the same method as in the proof of Corollary \ref{cor:3.1.3}, the following result is a direct consequence of Proposition \ref{theo:3.2.6}.
\begin{corollary} \label{cor:3.2.2.5}
Assume {\bf (a)} of
Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Assume that there exists $N_0\in \mathbb N$, such that
\begin{eqnarray*}
-\frac{\langle A(x)x, x \rangle}{ \left \| x \right \|^2 }+ \frac12\mathrm{trace}A(x)+ \big \langle \mathbf{G}(x), x \big \rangle \leq 0
\end{eqnarray*}
for a.e. $x\in \mathbb R^d\setminus \overline{B}_{N_0}$. Then $\mathbb M$ is recurrent in the probabilistic sense. In particular, if $d=2$ and $\Psi_1$, $\Psi_2$, $Q$, $A$ are as in Corollary \ref{cor:3.1.3}, and
$$
\frac{|\Psi_1(x)-\Psi_2(x)|}{2} + \langle \mathbf{G}(x),x \rangle \leq 0
$$
for a.e. $x\in \mathbb R^2\setminus \overline{B}_{N_0}$, then $\mathbb M$ is recurrent in the probabilistic sense (see Definition \ref{def:3.2.2.3}).
\end{corollary}
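\noindent
As a quick sanity check of the first criterion in Corollary \ref{cor:3.2.2.5} (a classical special case, stated here only for illustration): for $A=\mathrm{Id}$ and $\mathbf{G}=0$, the left-hand side reads
$$
-\frac{\langle x, x \rangle}{\|x\|^2}+ \frac12\mathrm{trace}(\mathrm{Id}) = \frac{d}{2}-1,
$$
which is nonpositive, if and only if $d \leq 2$. This is consistent with the well-known recurrence of Brownian motion in dimensions one and two, while for $d \geq 3$ the criterion does not apply.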
\noindent
Using Theorem \ref{theo:3.3.6} we obtain the following corollary of \cite[Theorem 21]{GT2}.
\begin{proposition} \label{cor:3.3.2.6}
Consider the situation of Remark \ref{rem:2.2.4}. Define for $r\ge 0$,
\begin{equation*}
\varv_1(r):= \int_{B_r} \frac{\langle A(x)x, x \rangle}{\|x\|^2} \mu(dx), \ \ \ \varv_2(r):=\int_{B_r} \big | \big \langle (\beta^{\rho, C^T}+ \overline{\mathbf{B}})(x),x \big \rangle \big | \mu(dx),
\end{equation*}
and let
$$
\varv(r):=\varv_1(r)+\varv_2(r), \ \ a_n:=\int_1^n \frac{r}{\varv(r)}dr, \ \ n\ge 1.
$$
Assume that
$$
\lim_{n\rightarrow \infty}a_n=\infty \ \ \ \text{and} \ \ \ \lim_{n\rightarrow \infty} \frac{\ln(\varv_2(n)\vee 1)}{a_n}=0.
$$
Then $(T_t)_{t>0}$ and $(T'_t)_{t>0}$ are recurrent (cf. Definitions \ref{def:3.2.2.2} and \ref{definition2.1.7})
and $\mu$ is $(\overline{T}_t)_{t>0}$-invariant (cf. Definition \ref{def:2.1.1}).
Moreover, if $\nabla (A+C^T) \in L^q_{loc}(\mathbb R^d, \mathbb R^d)$, then $\mathbb M$ is recurrent in the probabilistic sense (see Definition \ref{def:3.2.2.3}).
\end{proposition}
\begin{proof}
By \cite[Theorem 21]{GT2} applied with $\rho(x)=\|x\|$ (the $\rho$ of \cite{GT2} is different from our density $\rho$ of $\mu$ defined here), $(T_t)_{t>0}$ is not transient. Hence by Theorem \ref{theo:3.3.6}(i), $(T_t)_{t>0}$ is recurrent. The same applies to $(T'_t)_{t>0}$ by replacing $\beta^{\rho, C^T}+ \overline{\mathbf{B}}$ with $-(\beta^{\rho, C^T}+ \overline{\mathbf{B}})$, hence we obtain that $(T'_t)_{t>0}$ is also recurrent. In particular, $(T'_t)_{t>0}$ is conservative by Theorem \ref{theo:3.3.6}(v), hence $\mu$ is $(\overline{T}_t)_{t>0}$-invariant by Remark \ref{rem:2.1.10a}(i). If $\nabla (A+C^T) \in L^q_{loc}(\mathbb R^d, \mathbb R^d)$, then $A$, $C$, $\mathbf{H}$ defined in Remark \ref{rem:2.2.4} satisfy {\bf (a)} and {\bf (b)} of Sections \ref{subsec:2.2.1} and \ref{subsec:3.1.1}. Hence $\mathbb M$ is recurrent in the probabilistic sense by Theorem \ref{theo:3.3.6}(iii).
\end{proof}
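\noindent
To illustrate the growth conditions of Proposition \ref{cor:3.3.2.6} in a simple (purely illustrative) special case, suppose $A=\mathrm{Id}$, $\mu=dx$ and $\beta^{\rho, C^T}+\overline{\mathbf{B}}=0$. Then $\varv_1(r)$ is the Lebesgue volume of $B_r$, i.e. $\varv_1(r)=c_d r^d$ for some constant $c_d>0$, and $\varv_2 \equiv 0$, so that
$$
a_n=\int_1^n \frac{r}{c_d r^d}dr=\frac{1}{c_d}\int_1^n r^{1-d}dr,
$$
which diverges as $n \rightarrow \infty$, if and only if $d \leq 2$, while the second condition holds trivially since $\ln(\varv_2(n)\vee 1)=0$. Thus in this special case the proposition yields recurrence exactly in dimensions one and two.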
\subsubsection{Long time behavior: Ergodicity, existence and uniqueness of invariant measures, examples/counterexamples}
\label{subsec:3.2.3}
Throughout this section we assume that {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold. Let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}.
\begin{definition}\label{def:invariantforprocess}
Consider a right process
$$
\widetilde{\mathbb M} = (\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t \geq 0}, (\widetilde{X}_t)_{t \ge 0}, (\widetilde{\mathbb P}_x)_{x \in \mathbb R^d\cup \{\Delta\}})
$$
with state space $\mathbb R^d$ (cf. Definition \ref{def:3.1.1}). A $\sigma$-finite measure $\widetilde{\mu}$ on $(\mathbb R^d, \mathcal{B}(\mathbb R^d))$ is called an {\bf invariant measure} for $\widetilde{\mathbb M}$\index{measure ! invariant for $\widetilde{\mathbb M}$}, if for any $t \geq 0$
\begin{eqnarray} \label{eq:3.29}
\int_{\mathbb R^d} \widetilde{\mathbb{P}}_x(\widetilde{X}_t \in A) \widetilde{\mu}(dx)= \widetilde{\mu}(A), \quad \text{ for any } A \in \mathcal{B}(\mathbb R^d).
\end{eqnarray}
$\widetilde{\mu}$ is called a {\bf sub-invariant measure} for $\widetilde{\mathbb M}$\index{measure ! sub-invariant for $\widetilde{\mathbb M}$}, if \eqref{eq:3.29} holds with
\lq\lq$=$\rq\rq \ replaced by \lq\lq$\leq$\rq\rq.
\end{definition}
\begin{remark} \label{rem:3.2.3.3}
{\it Using monotone approximation by simple functions, $\widetilde{\mu}$ is an invariant measure for $\widetilde{\mathbb M}$, if and only if
\begin{eqnarray} \label{eq:3.30}
\int_{\mathbb R^d} \widetilde{\mathbb{E}}_x\big[f(\widetilde{X}_t)\big] \widetilde{\mu}(dx)= \int_{\mathbb R^d} f d \widetilde{\mu}, \quad \text{ for any $f \in \mathcal{B}^+_b(\mathbb R^d)$},
\end{eqnarray}
where $\widetilde{\mathbb E}_x$ denotes the expectation with respect to $\widetilde{\mathbb P}_x$. Likewise, $\widetilde{\mu}$ is a sub-invariant measure for $\widetilde{\mathbb M}$, if and only if \eqref{eq:3.30} holds with \lq\lq$=$\rq\rq \ replaced by \lq\lq$\leq$\rq\rq. By the $L^1(\mathbb R^d, \mu)$ contraction property of $(T_t)_{t>0}$, $\mu$ (as at the beginning of this section) is always a sub-invariant measure for $\mathbb M$. Moreover, $\mu$ is $(\overline{T}_t)_{t>0}$(-sub)-invariant, if and only if $\mu$ is a (sub-)invariant measure for $\mathbb M$ (cf. Definition \ref{def:2.1.1}(ii), Theorem \ref{theo:2.6}, \eqref{semidef} and \eqref{eq:3.1semigroupequal-a.e.})}.
\end{remark}
\begin{lemma} \label{lem:3.2.9}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
$(P_t)_{t \geq 0}$ (cf. Proposition \ref{prop:3.1.1}) is stochastically continuous, i.e.
$$
\lim_{t \rightarrow 0+}P_t(x, B_{r}(x))=1, \quad \text{ for all $r>0$ and $x \in \mathbb R^d$.}
$$
Moreover, for each $t_0>0$, $(P_{t})_{t>0}$ is $t_0$-regular, i.e. for all $x \in \mathbb R^d$, the sub-probability measures $P_{t_0}(x, dy)$ are mutually equivalent.
\end{lemma}
\begin{proof}
By Lebesgue's theorem, for any $r>0$ and $x \in \mathbb R^d$ it holds that for $\mathbb M$ of Theorem \ref{th: 3.1.2},
$$
\lim_{t \rightarrow 0+}P_t(x, B_{r}(x))=\lim_{t \rightarrow 0+} \mathbb E_x\left[ 1_{B_r(x)}(X_t)\right]=1.
$$
By Proposition \ref{prop:3.1.1}(i), $(P_{t})_{t>0}$ is $t_0$-regular for any $t_0>0$.
\end{proof}
\noindent
The following theorem is an application of our results combined with those of \cite{DPZB}.
\begin{theorem} \label{theo:3.3.8}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Assume that there exists a finite invariant measure $\nu$ for $\mathbb M$ (see Definition \ref{def:invariantforprocess}) of Theorem \ref{th: 3.1.2} (see also Theorem \ref{theo:3.1.4}). Let $\mu=\rho\,dx$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}. Then the following assertions hold:
\begin{itemize}
\item[(i)]
$\mathbb M$ is non-explosive (Definition \ref{non-explosive}), hence $P_t(x, dy)$ is a probability measure on $(\mathbb R^d, \mathcal{B}(\mathbb R^d))$ for any $(x,t) \in \mathbb R^d \times (0, \infty)$ and equivalent to the Lebesgue measure (cf. Proposition \ref{prop:3.1.1}).
\item[(ii)] Any sub-invariant measure for $\mathbb M$ is finite and $\mu$ is a finite invariant measure for $\mathbb M$.
\item[(iii)] $\nu$ is unique up to a multiplicative constant. More precisely,
if there exists another invariant measure $\pi$ for $\mathbb M$, then $\pi$ is finite and
$$
\nu(A)= \frac{\nu(\mathbb R^d)}{\pi(\mathbb R^d)}\cdot \pi(A), \quad \text{ for all } A \in \mathcal{B}(\mathbb R^d).
$$
\item[(iv)] For any $s \in [1, \infty)$ and $f \in L^s(\mathbb R^d, \mu)$, we have
\begin{equation} \label{lslimit}
\lim_{t \rightarrow \infty}P_t f = \frac{1}{\mu(\mathbb R^d)}\int_{\mathbb R^d} f d \mu \quad \text{ in } L^s(\mathbb R^d, \mu)
\end{equation}
and for all $x \in \mathbb R^d$, $A \in \mathcal{B}(\mathbb R^d)$
\begin{eqnarray} \label{1dmarginal}
\lim_{t \rightarrow \infty} P_t(x,A)=\lim_{t \rightarrow \infty}\mathbb P_x(X_t \in A) = \frac{\mu(A)}{\mu(\mathbb R^d)}.
\end{eqnarray}
\item[(v)] Let $A \in \mathcal{B}(\mathbb R^d)$ be such that $\mu(A)>0$ and $(t_n)_{n\ge 1}\subset (0,\infty)$ be any sequence with $\lim_{n\to \infty}t_n=\infty$. Then
\begin{equation} \label{eq:3.32*}
\mathbb P_x(X_{t_n}\in A \text{ for infinitely many } n\in \mathbb N)=1,\ \ \text{ for all } x\in \mathbb R^d.
\end{equation}
In particular, $\mathbb P_x(L_{A}=\infty)=1$ for all $x \in \mathbb R^d$ and $\mathbb M$ is recurrent in the probabilistic sense (see Definition \ref{def:3.2.2.3} and \eqref{lastexittime} for the definition of $L_A$).
\end{itemize}
\end{theorem}
\begin{proof}
(i) Since $\nu$ is finite and an invariant measure for $\mathbb M$, it follows from \eqref{eq:3.30} that for any $t>0$
$$
\int_{\mathbb R^d} (1-P_t1_{\mathbb R^d}) d\nu=0,
$$
hence $P_t 1_{\mathbb R^d}=1$, $\nu$-a.e. for any $t>0$. Thus, $P_{t_0}1_{\mathbb R^d}(x_0)=1$ for some $(x_0, t_0) \in \mathbb R^d \times(0, \infty)$, and $(T_t)_{t>0}$ is conservative by Lemma \ref{lem:2.7}(ii). Consequently, $\mathbb M$ is non-explosive by Corollary \ref{cor:3.2.1}. \\
(ii) By (i), Lemma \ref{lem:3.2.9}, \cite[Theorem 4.2.1(i)]{DPZB} it follows that for any $A \in \mathcal{B}(\mathbb R^d)$ and $x \in \mathbb R^d$
\begin{equation} \label{eq:3.33}
\lim_{t \rightarrow \infty}\mathbb P_x(X_t \in A)= \frac{\nu(A)}{\nu(\mathbb R^d)}.
\end{equation}
Now suppose that $\kappa$ is an infinite sub-invariant measure for $\mathbb M$. Since $\kappa$ is $\sigma$-finite, we can choose $A \in \mathcal{B}(\mathbb R^d)$ with $\kappa(A)<\infty$ and $\nu(A)>0$.
Then by \eqref{eq:3.33} and Fatou's lemma,
$$
\infty=\int_{\mathbb R^d} \frac{\nu(A)}{\nu(\mathbb R^d)} d\kappa \leq \liminf_{t \rightarrow \infty}\int_{\mathbb R^d} \mathbb{P}_x(X_t \in A) \kappa(dx) \leq \kappa(A)<\infty,
$$
which is a contradiction. Therefore, any sub-invariant measure is finite. In particular $\mu$ is finite. Since $(T_t)_{t>0}$ is conservative by (i) and $\mu$ is finite, it follows that $\mu$ is $(\overline{T}_t)_{t>0}$-invariant by Remark \ref{remark2.1.11}, so that $\mu$ is a finite invariant measure for $\mathbb M$. \\
(iii) By (i), Lemma \ref{lem:3.2.9}, and \cite[Theorem 4.2.1(ii)]{DPZB}, $\frac{\nu}{\nu(\mathbb R^d)}$ is the unique invariant probability measure for $\mathbb M$. So, if there exists another invariant measure $\pi$ for $\mathbb M$, then $\pi$ must be finite by (ii) and therefore $\frac{\pi}{\pi(\mathbb R^d)}$ is an invariant probability measure for $\mathbb M$ which must then coincide with $\frac{\nu}{\nu(\mathbb R^d)}$.\\
(iv) By (iii), $\nu=\frac{\nu(\mathbb R^d)}{\mu(\mathbb R^d)} \mu$. Hence, \eqref{eq:3.12} (see Proposition \ref{prop:3.1.1}(i)) and \eqref{eq:3.33} imply \eqref{1dmarginal}.
Using \eqref{eq:3.1.1} and that the strong convergence of $(P_t(x,\cdot))$ in \eqref{1dmarginal} implies weak convergence, we get
\begin{equation} \label{pointxg}
\lim_{t \rightarrow \infty} P_t f(x) = \frac{1}{\mu(\mathbb R^d)} \int_{\mathbb R^d} f d\mu, \quad x\in \mathbb R^d,\ f \in C_b(\mathbb R^d).
\end{equation}
Since $\mu$ is finite, \eqref{lslimit} follows from \eqref{pointxg} for any $f \in C_b(\mathbb R^d)$ using Lebesgue's theorem and the sub-Markovian property of $(P_t)_{t>0}$. Finally, using the denseness of $C_b(\mathbb R^d)$ in $L^s(\mathbb R^d, \mu)$ and the $L^s(\mathbb R^d, \mu)$-contraction property of $(P_t)_{t>0}$ for each $s \in [1, \infty)$, \eqref{lslimit} follows by a 3-$\varepsilon$ argument.\\
(v) By \cite[Proposition 3.4.5]{DPZB}, \eqref{eq:3.32*} holds, hence $\mathbb P_x(L_A=\infty)=1$ for all $A \in \mathcal{B}(\mathbb R^d)$ with $\mu(A)>0$ and $x \in \mathbb R^d$. Since $\mu(U)>0$ for any nonempty open set $U$ in $\mathbb R^d$, $\mathbb M$ is recurrent in the probabilistic sense.
\end{proof}
\begin{proposition} \label{prop:3.3.12}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $N_0 \in \mathbb N$. Let $g\in C^2(\mathbb R^d \setminus \overline{B}_{N_0}) \cap C(\mathbb R^d)$, $g \geq 0$, with
\begin{equation} \label{eq:3.34*}
\lim_{r \rightarrow \infty} (\inf_{\partial B_r} g) = \infty.
\end{equation}
Assume that for some $c>0$
$$
Lg\le -c \quad \text{a.e. on }\, \mathbb R^d\setminus \overline{B}_{N_0}.
$$
Then $\mu$ is a finite invariant measure (see Definition \ref{def:invariantforprocess} and right before it) for $\mathbb M$ of Theorem \ref{th: 3.1.2} (see also Theorem \ref{theo:3.1.4}) and Theorem \ref{theo:3.3.8} applies.
\end{proposition}
\begin{proof}
First, $\mathbb M$ is non-explosive by Lemma \ref{lem3.2.6}, hence $(T_t)_{t>0}$ is conservative by Corollary \ref{cor:3.2.1}. By the claim of Lemma \ref{lem3.2.6}, there exists $N_1 \in \mathbb N$ with $N_1 \geq N_0+2$ and $\psi \in C^2(\mathbb R^d)$ with $\psi(x) \geq 0$ for all $x \in \mathbb R^d$ and $\psi(x)=g(x)$ for all $x\in \mathbb R^d \setminus \overline{B}_{N_1}$ such that
$$
L \psi \leq -c \quad \text{a.e. on }\, \mathbb R^d\setminus \overline{B}_{N_1}.
$$
It follows by \cite[2.3.3. Corollary]{BKRS} (see also \cite[Theorem 2]{BRS} for the original result) that $\mu$ is finite and then by Remark \ref{remark2.1.11} that $\mu$ is $(\overline{T}_t)_{t>0}$-invariant. Therefore, $\mu$ is a finite invariant measure for $\mathbb M$ and Theorem \ref{theo:3.3.8} applies with $\nu=\mu$.
\end{proof}
\begin{corollary} \label{cor:3.2.3.7}
Assume {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold.
Let $N_0\in \mathbb N$ and $M>0$. Assume that either
\begin{eqnarray} \label{eq:3.35*}
-\frac{\langle A(x)x, x \rangle}{ \left \| x \right \|^2 }+ \frac12\mathrm{trace}A(x)+ \big \langle \mathbf{G}(x), x \big \rangle \leq -M \|x\|^2
\end{eqnarray}
for a.e. $x\in \mathbb R^d\setminus \overline{B}_{N_0}$ or
\begin{eqnarray} \label{eq:3.36}
\frac12\mathrm{trace}A(x)+ \big \langle \mathbf{G}(x), x \big \rangle \leq -M
\end{eqnarray}
for a.e. $x\in \mathbb R^d\setminus \overline{B}_{N_0}$.
Then $\mu$ is a finite invariant measure (see Definition \ref{def:invariantforprocess}) for $\mathbb M$ of Theorem \ref{th: 3.1.2} and Theorem \ref{theo:3.3.8} applies. In particular, if $d=2$ and $\Psi_1$, $\Psi_2$, $Q$, $A$ are as in Corollary \ref{cor:3.1.3} and
$$
\frac{|\Psi_1(x)-\Psi_2(x)|}{2} + \langle \mathbf{G}(x),x \rangle \leq -M\|x\|^2
$$
for a.e. $x\in \mathbb R^2\setminus \overline{B}_{N_0}$, then \eqref{eq:3.35*} is satisfied in this special situation.
\end{corollary}
\begin{proof}
Let $g(x)= \ln(\|x\|^2 \vee N_0^2)+2$, $x \in \mathbb R^d$, be as in the proof of Corollary \ref{cor:3.2.2}. Then $Lg \leq -2M$ a.e. on $\mathbb R^d\setminus \overline{B}_{N_0}$, if and only if \eqref{eq:3.35*} holds. If $f(x)=\|x\|^2$, $x \in \mathbb R^d$, then $Lf \leq -2M$ a.e. on $\mathbb R^d\setminus \overline{B}_{N_0}$, if and only if \eqref{eq:3.36} holds. Thus, the assertion follows by Proposition \ref{prop:3.3.12}. The last assertion holds, proceeding as in the proof of Corollary \ref{cor:3.1.3}.
\end{proof}
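\noindent
For instance (a purely illustrative special case), let $A=\mathrm{Id}$ and $\mathbf{G}(x)=-x$, i.e. an Ornstein--Uhlenbeck-type drift. Then the left-hand side of \eqref{eq:3.35*} equals
$$
-1+\frac{d}{2}-\|x\|^2 \leq -\frac12 \|x\|^2
$$
for all $x$ with $\|x\|^2 \geq d$, so that \eqref{eq:3.35*} holds with $M=\frac12$ and any $N_0 \in \mathbb N$ with $N_0^2 \geq d$. Hence $\mu$ is a finite invariant measure for $\mathbb M$ in this case.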
\noindent
In Theorem \ref{theo:3.3.8}, we saw that if there exists a finite invariant measure $\nu$ for $\mathbb M$, then any invariant measure for $\mathbb M$ is represented by a constant multiple of $\nu$. The following example illustrates a case where $\mathbb M$ has two infinite invariant measures which are not represented by a constant multiple of each other.
\begin{eg}\label{ex:3.8}
Define
$$
Lf = \frac{1}{2} \Delta f+ \langle \mathbf{e}_1, \nabla f \rangle, \quad f \in C_0^{\infty}(\mathbb R^d).
$$
Then $\mu:=dx$ is an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$. Hence by Theorem \ref{theorem2.1.5}, there exists a closed extension of $(L, C_0^{\infty}(\mathbb R^d))$ that generates a sub-Markovian $C_0$-semigroup $(T_t)_{t>0}$ on $L^1(\mathbb R^d, \mu)$. Then by Proposition \ref{prop:2.1.10}(iii), $(T_t)_{t>0}$ is conservative and $\mu$ is $(T_t)_{t>0}$-invariant. Let
$$
\mathbb M = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, (X_t)_{t \ge 0}, (\mathbb{P}_x)_{x \in \mathbb R^d\cup \{\Delta\}} )
$$
be the Hunt process associated with $(T_t)_{t>0}$ by Theorem \ref{th: 3.1.2}. Let $y \in \mathbb R^d$ be given. By Theorem \ref{theo:3.1.4}(i), there is a $d$-dimensional Brownian motion $((W_t)_{t\geq0}, (\mathcal{F}_t)_{t\geq0})$ on $(\Omega, \mathcal{F}, \mathbb P_y)$ such that $(\Omega, \mathcal{F}, \mathbb P_y, (\mathcal{F}_t)_{t \ge 0}, (X_t)_{t \ge 0},(W_t)_{t \geq 0})$ is a weak solution
(see Definition \ref{def:3.48}(iv)) to
\begin{equation} \label{eq:3.34}
X_t = X_0+W_t + \int_0^t \mathbf{e}_1ds.
\end{equation}
On the other hand, $\widetilde{\mu}:=e^{2 \langle \mathbf{e}_1, x \rangle}dx$ is also an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$. By Theorem \ref{theorem2.1.5}, there exists a closed extension of $(L, C_0^{\infty}(\mathbb R^d))$ that generates a sub-Markovian $C_0$-semigroup $(\widetilde{T}_t)_{t>0}$ on $L^1(\mathbb R^d, \widetilde{\mu})$. Then by Proposition \ref{prop:2.1.10}(iii), $(\widetilde{T}_t)_{t>0}$ is conservative and $\widetilde{\mu}$ is $(\widetilde{T}_t)_{t>0}$-invariant. Let
$$
\widetilde{\mathbb M} = (\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}_t)_{t \ge 0}, (\widetilde{\mathbb{P}}_x)_{x \in \mathbb R^d\cup \{\Delta\}} )
$$
be the Hunt process associated with $(\widetilde{T}_t)_{t>0}$ by Theorem \ref{th: 3.1.2}. By Theorem \ref{theo:3.1.4}(i), there exists a $d$-dimensional standard $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$-Brownian motion $(\widetilde{W}_t)_{t\geq0}$ on the probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}_y)$ such that $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}_y, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}_t)_{t \ge 0},(\widetilde{W}_t)_{t \geq 0})$ is a weak solution to \eqref{eq:3.34}. Since the SDE \eqref{eq:3.34} admits pathwise uniqueness (see Definition \ref{def:3.48}(v))
by \cite[2.9 Theorem, Chapter 5]{KaSh} (see also \cite[Proposition 1]{YW71} for the original result) and pathwise uniqueness implies the uniqueness in law (cf. Definition \ref{def:3.48}(vi)) by \cite[3.20 Theorem, Chapter 5]{KaSh}, it holds that
\begin{equation} \label{eq:3.35}
\mathbb P_y(X_t \in A) = \widetilde{\mathbb P}_y(\widetilde{X}_t \in A), \quad \text{ for all $A \in \mathcal{B}(\mathbb R^d)$ and $t>0$}.
\end{equation}
Since $\mu$ and $\widetilde{\mu}$ are invariant measures for $\mathbb M$ and $\widetilde{\mathbb M}$, respectively, and $y \in \mathbb R^d$ is arbitrarily given, it follows from \eqref{eq:3.35} that both $\mu$ and $\widetilde{\mu}$ are invariant measures for $\mathbb M$ (and $\widetilde{\mathbb M}$). Obviously, $\mu$ and $\widetilde{\mu}$ cannot be represented by a constant multiple of each other.
\end{eg}
\subsection{Uniqueness}
\label{sec:3.3}
In this section, we investigate pathwise uniqueness (cf. Definition \ref{def:3.48}(v)) and uniqueness in law (cf. Definition \ref{def:3.48}(vi)).\\
We will consider the following {\bf condition}:
\begin{itemize}
\item[{\bf (c)}] \ \index{assumption ! {\bf (c)}}for some $p\in (d,\infty)$, $d\ge 2$ (see beginning of Section \ref{subsec:2.2.1}), $\sigma=(\sigma_{ij})_{1 \leq i,j \leq d}$ is possibly non-symmetric
with $\sigma_{ij} \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ for all $1\leq i,j \leq d$ such that
$A=(a_{ij})_{1 \leq i,j \leq d}:=\sigma \sigma^T$ satisfies \eqref{eq:2.1.2} and $\mathbf{G}=(g_1,\ldots,g_d)
\in L^p_{loc}(\mathbb R^d, \mathbb R^d)$.
\end{itemize}
If {\bf (c)} holds, then {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section
\ref{subsec:3.1.1} hold. \\
Our strategy to obtain a pathwise unique and strong solution to the SDE \eqref{eq:3.39} is to apply the Yamada--Watanabe theorem \cite[Corollary 1]{YW71} and the local pathwise
uniqueness result \cite[Theorem 1.1]{Zh11} to the weak solution of Theorem \ref{theo:3.1.4}(i). Under the
mere condition of {\bf (c)} and the assumption that the constructed Hunt process $\mathbb M$ in Theorem \ref{th: 3.1.2}
is non-explosive, it is shown in Proposition \ref{prop:3.3.9} and Theorem \ref{theo:3.3.1.8} that there exists a pathwise unique and strong
solution to the SDE \eqref{eq:3.39} (cf. Definition \ref{def:3.48}). Moreover, Proposition \ref{prop:3.3.1.9} implies that
the local strong solution of \cite[Theorem 1.3]{Zh11} (see also \cite[Theorem 2.1]{KR} for prior work that covers the case of Brownian motion with drift), when considered in the time-homogeneous case, is non-explosive if the
Hunt process $\mathbb M$ of Theorem \ref{th: 3.1.2} is non-explosive. Therefore, any condition for non-explosion of $\mathbb M$ in this monograph is a new criterion for
strong well-posedness of time-homogeneous It\^{o}-SDEs whose coefficients satisfy {\bf (c)}.
As an example for this observation, consider the case where {\bf (c)} and the non-explosion condition \eqref{eq:3.20}
are satisfied. Then we obtain a pathwise unique and strong solution to \eqref{eq:3.39} under the classical-like non-explosion condition \eqref{eq:3.20}, which even allows for an interplay of the diffusion and drift coefficients.
Additionally, $\|\mathbf{G}\|$ is here allowed to have arbitrary growth as long as
$\langle \mathbf{G}(x), x \rangle$ in \eqref{eq:3.20} is negative. A further example is given when $d=2$. Then the diffusion coefficient is allowed to have arbitrary
growth in the situation of \eqref{eq:3.2.1.20} in Corollary \ref{cor:3.1.3}. In summary, one can say that Theorem \ref{theo:3.3.1.8}, Propositions \ref{prop:3.3.9} and \ref{prop:3.3.1.9}, together with further results of this work (for instance those mentioned in Theorem \ref{theo:3.3.1.8}), can be used to complete and to considerably improve various results from \cite{KR}, \cite{ZhXi16}, \cite{Zh05}, and \cite{Zh11} in the time-homogeneous case (see \cite{LT18}, \cite{LT19}, in particular the introduction of \cite{LT18}).
This closes a gap in the literature, which is described at the end of Remark \ref{rem:3.3.1},
where we discuss related work.\\
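The role of the sign condition on $\langle \mathbf{G}(x), x \rangle$ discussed above can be illustrated numerically. The following is a minimal sketch, not part of the theory: an explicit Euler--Maruyama discretization of a one-dimensional SDE with unit dispersion and the dissipative drift $\mathbf{G}(x) = -x^3$, for which $\langle \mathbf{G}(x), x \rangle = -x^4 \le 0$ despite the cubic growth of $|\mathbf{G}|$. The simulated path stays bounded, in line with non-explosion.

```python
import numpy as np

def euler_maruyama(x0, drift, dispersion, dt, n_steps, rng):
    """Explicit Euler-Maruyama scheme for dX = drift(X) dt + dispersion(X) dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    for k in range(n_steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + dispersion(x[k]) * dw[k]
    return x

rng = np.random.default_rng(1)
# Dissipative drift of cubic growth: <G(x), x> = -x^4 <= 0.
path = euler_maruyama(
    x0=2.0,
    drift=lambda x: -x**3,
    dispersion=lambda x: 1.0,
    dt=1e-3,
    n_steps=20_000,
    rng=rng,
)
print(np.max(np.abs(path)))  # remains bounded; no numerical explosion
```

This is of course only a discretized heuristic for a single sample path; the non-explosion statements above concern the exact solution and all starting points.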
In Section \ref{subsec:3.3.2}, under the assumption {\bf (a)} of Section \ref{subsec:2.2.1} and
{\bf (b)} of Section \ref{subsec:3.1.1}, we investigate uniqueness in law, among
all right processes that have a strong Feller transition semigroup (more precisely, such that \eqref{eq:3.41*} holds), that have $\mu$ as a
sub-invariant measure, and where $(L,C_0^{\infty}(\mathbb R^d))$ solves the martingale problem with respect to $\mu$. This sort of uniqueness in law is more restrictive than
uniqueness in law in the classical sense. But under the mere assumption of
{\bf (a)} and {\bf (b)}, classical uniqueness in law is not known to hold. Our main result in
Section \ref{subsec:3.3.2}, Proposition \ref{prop:3.3.1.15}, which is more analytic than probabilistic, is
ultimately derived from the concept of $L^1$-uniqueness of $(L, C_0^{\infty}(\mathbb R^d))$ introduced in Definition
\ref{def:2.1.1}(i). Therefore, as a direct consequence of Proposition \ref{prop:3.3.1.15} and Corollary
\ref{cor2.1.2}, under the assumption that $\mu$ is an invariant measure for $\mathbb M$, we derive in Proposition
\ref{prop:3.3.1.16} our uniqueness in law result. This result is meaningful in that it can handle the case of locally unbounded drift coefficients and explosive $\mathbb M$. We
present various situations in Example \ref{ex:3.4.9} where $\mu$ is an invariant measure for $\mathbb M$, so that our
uniqueness in law result is applicable.
\subsubsection{Pathwise uniqueness and strong solutions}
\label{subsec:3.3.1}
\begin{definition}\label{def:3.48}
\begin{itemize}
\item[(i)]
For a filtration $(\widehat{\mathcal{F}}_t)_{t \geq 0}$ on a probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$, the {\bf augmented filtration} $(\widehat{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0}$
of $(\widehat{\mathcal{F}}_t)_{t \geq 0}$ under $\widetilde{\mathbb P}$ is defined as
$$
\widehat{\mathcal{F}}^{\text{aug}}_t:= \sigma(\widehat{\mathcal{F}}_t \cup \widehat{\mathcal{N}}^{\widetilde{\mathbb P}}), \quad 0\leq t<\infty,
$$
where $\widehat{\mathcal{N}}^{\widetilde{\mathbb P}}:=\{ F \subset \widetilde{\Omega}: F \subset G \text{ for some } G \in \widehat{\mathcal{F}}_{\infty}:=\sigma(\cup_{t \geq 0} \widehat{\mathcal{F}}_t) \text{ with } \widetilde{\mathbb P}(G)=0 \}$. The {\bf completion} $\widehat{\mathcal{F}}^{\text{aug}}$ of $\widetilde{\mathcal{F}}$ under $\widetilde{\mathbb P}$ is defined as
$$
\widehat{\mathcal{F}}^{\text{aug}}:= \sigma(\widetilde{\mathcal{F}} \cup \widetilde{\mathcal{N}}^{\widetilde{\mathbb P}}),
$$
where $\widetilde{\mathcal{N}}^{\widetilde{\mathbb P}}:=\{ F \subset \widetilde{\Omega}: F \subset G \text{ for some } G \in \widetilde{\mathcal{F}} \text{ with } \widetilde{\mathbb P}(G)=0 \}$.
\item[(ii)] Let $l \in \mathbb N$, $\widetilde{\sigma}=(\widetilde{\sigma}_{ij})_{1 \leq i \leq d, 1 \leq j \leq l}$ be a matrix of Borel measurable functions and $\widetilde{\mathbf{G}}=(\widetilde{g}_1, \ldots, \widetilde{g}_d)$ be a Borel measurable vector field.
Given an $l$-dimensional Brownian motion $(\widetilde{W}_t)_{t \geq 0}$ on a probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$, let $(\widehat{\mathcal{F}}_t)_{t \geq 0}:=\left(\sigma(\widetilde{W}_s \,\vert\, s \in \left[0,t\right])\right)_{t \geq 0}$ and $x \in \mathbb R^d$. $(\widetilde{X}_t)_{t \geq 0}$ is called a {\bf strong solution}\index{solution ! strong} to \eqref{eq:3.36*} with Brownian motion $(\widetilde{W}_t)_{t \geq 0}$ and initial condition $\widetilde{X}_0=x$, if (a)--(d) below hold:
\subitem(a) $(\widetilde{X}_t)_{t \geq 0}$ is an $\mathbb R^d$-valued stochastic process adapted to $(\widehat{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0}$,
\subitem(b) $\widetilde{\mathbb P}(\widetilde{X}_0=x)=1$,
\subitem(c) $\widetilde{\mathbb P}\left(\int_0^t (\widetilde{\sigma}^2_{ij}(\widetilde{X}_s) +|\widetilde{g}_i|(\widetilde{X}_s) )ds <\infty \right)=1$ for all $1 \leq i \leq d$, $1 \leq j \leq l$\\
\text{}\quad \quad \;\; and $0 \leq t<\infty$,
\subitem(d)
$\widetilde{\mathbb P}$-a.s. it holds that
\begin{equation} \label{eq:3.36*}
\widetilde{X}_t=\widetilde{X}_0+\int_0^t \widetilde{\sigma}(\widetilde{X}_s)d\widetilde{W}_s + \int_0^t \widetilde{\mathbf{G}}(\widetilde{X}_s)ds, \; 0\leq t<\infty,
\end{equation}
\text{} \quad \quad i.e. $\widetilde{\mathbb P}$-a.s.
\begin{equation*}
\widetilde{X}^i_t = \widetilde{X}^i_0+ \sum_{j=1}^l \int_0^t \widetilde{\sigma}_{ij}(\widetilde{X}_s) d \widetilde{W}^j_s + \int_0^t \widetilde{g}_i(\widetilde{X}_s) ds, \;\; 1 \leq i \leq d, \; 0\leq t<\infty.
\end{equation*}
\item[(iii)] A filtration $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$ on a probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ is said to satisfy the {\bf usual conditions}, if
$$
\widetilde{\mathcal{F}}_{0} \supset \{ F \subset \widetilde{\Omega}: F \subset G \text{ for some } G \in \widetilde{\mathcal{F}} \text{ with } \widetilde{\mathbb P}(G)=0 \}
$$
and
$(\widetilde{\mathcal{F}}_t)_{t \geq 0}$ is right-continuous, i.e.
\begin{equation} \label{defrightcon}
\widetilde{\mathcal{F}}_t = \bigcap_{\varepsilon>0} \widetilde{\mathcal{F}}_{t+\varepsilon}, \quad \forall t \geq 0.
\end{equation}
\item[(iv)]
Let $l \in \mathbb N$ and $\widetilde{\sigma}$, $\widetilde{\mathbf{G}}$, $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P} )$, $(\widetilde{W}_t)_{t \geq 0}$ be as in (ii). We say that
$$
(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}_t)_{t \ge 0},(\widetilde{W}_t)_{t \geq 0})
$$
is a {\bf weak solution} to \eqref{eq:3.36*}\index{solution ! weak} if $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$ is a filtration on $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ satisfying the usual conditions, $(\widetilde{X}_t)_{t \geq 0}$ is an $\mathbb R^d$-valued stochastic process adapted to $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$, $(\widetilde{W}_t)_{t \geq 0}$ is an $l$-dimensional standard $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$-Brownian motion, and (c) and (d) of (ii) hold. In particular, any strong solution as in (ii) is a weak solution as defined in (iv).
\item[(v)]
We say that {\bf pathwise uniqueness holds} for the SDE \eqref{eq:3.36*}\index{uniqueness ! pathwise}, if whenever $x \in \mathbb R^d$ and
$$
(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}^1_t)_{t \ge 0}, (\widetilde{W}_t)_{t \geq 0})
$$
and
$$
(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}^2_t)_{t \ge 0}, (\widetilde{W}_t)_{t \geq 0})
$$
are two weak solutions to \eqref{eq:3.36*} with
$$
\widetilde{\mathbb P}(\widetilde{X}^1_0=\widetilde{X}^2_0=x)=1,
$$
then
\begin{equation*}
\widetilde{\mathbb P}(\widetilde{X}^1_t=\widetilde{X}^2_t,\; t \geq 0 )=1.
\end{equation*}
A weak solution to \eqref{eq:3.36*} is said to be {\bf pathwise unique}, if pathwise uniqueness holds for the SDE \eqref{eq:3.36*}.
\item[(vi)]
We say that {\bf uniqueness in law holds} for the SDE \eqref{eq:3.36*}\index{uniqueness ! in law}, if whenever $x \in \mathbb R^d$ and
$$
(\widetilde{\Omega}^1, \widetilde{\mathcal{F}}^1, \widetilde{\mathbb P}^1, (\widetilde{\mathcal{F}}^1_t)_{t \ge 0}, (\widetilde{X}^1_t)_{t \ge 0}, (\widetilde{W}^1_t)_{t \geq 0})
$$
and
$$
(\widetilde{\Omega}^2, \widetilde{\mathcal{F}}^2, \widetilde{\mathbb P}^2, (\widetilde{\mathcal{F}}^2_t)_{t \ge 0}, (\widetilde{X}^2_t)_{t \ge 0}, (\widetilde{W}^2_t)_{t \geq 0})
$$
are two weak solutions to \eqref{eq:3.36*}, defined on possibly different probability spaces, with
$$
\widetilde{\mathbb P}^1 \circ (\widetilde{X}_0^{1})^{-1}=\widetilde{\mathbb P}^2 \circ (\widetilde{X}_0^{2})^{-1}= \delta_x,
$$
where $\delta_x$ is a Dirac measure in $x \in \mathbb R^d$, then
$$
\widetilde{\mathbb P}^1 \circ (\widetilde{X}^{1})^{-1}=\widetilde{\mathbb P}^2 \circ (\widetilde{X}^{2})^{-1} \text{ on } \; \mathcal{B}(C([0, \infty), \mathbb R^d)).
$$
\end{itemize}
\end{definition}
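The distinction between (v) and (vi) can be made concrete numerically: pathwise uniqueness compares solutions driven by the {\it same} Brownian motion on the same probability space, while uniqueness in law only compares distributions across possibly different spaces. The following is a purely heuristic sketch with simple globally Lipschitz coefficients (not the general coefficients of condition {\bf (c)}): two Euler--Maruyama approximations driven by the same discretized Brownian increments coincide path by path.

```python
import numpy as np

def euler_maruyama(x0, sigma, G, dW, dt):
    """Euler-Maruyama scheme for dX = sigma(X) dW + G(X) dt, with d = l = 1."""
    x = np.empty(len(dW) + 1)
    x[0] = x0
    for k, dw in enumerate(dW):
        x[k + 1] = x[k] + sigma(x[k]) * dw + G(x[k]) * dt
    return x

rng = np.random.default_rng(0)
dt = 1e-3
dW = rng.normal(0.0, np.sqrt(dt), size=5_000)  # one fixed path of Brownian increments

sigma = lambda x: 1.0 + 0.1 * np.sin(x)  # non-degenerate, Lipschitz dispersion
G = lambda x: -x                         # Lipschitz drift

# Two approximate solutions driven by the *same* noise agree path by path.
x1 = euler_maruyama(0.5, sigma, G, dW, dt)
x2 = euler_maruyama(0.5, sigma, G, dW, dt)
print(np.allclose(x1, x2))  # True
```

For uniqueness in law one would instead compare the laws of solutions driven by independent copies of the noise, which agree as distributions but not as individual paths.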
\begin{proposition}
\label{prop:3.3.9}
Let $\sigma$ and $\mathbf{G}$ satisfy assumption {\bf (c)} as at the beginning of Section \ref{sec:3.3}.
Then
pathwise uniqueness holds for the SDE
\begin{equation} \label{eq:3.39}
\widetilde{X}_t = \widetilde{X}_0+ \int_0^t \sigma(\widetilde{X}_s) dW_s +\int_0^t \mathbf{G}(\widetilde{X}_s) ds, \quad 0 \leq t<\infty.
\end{equation}
\end{proposition}
\begin{proof}
Let $x \in \mathbb R^d$ and let $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}^1_t)_{t \ge 0}, (\widetilde{W}_t)_{t \geq 0})$ and $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}^2_t)_{t \ge 0}, (\widetilde{W}_t)_{t \geq 0})$ be two weak solutions to \eqref{eq:3.39} with $\widetilde{\mathbb P}(\widetilde{X}^1_0=\widetilde{X}^2_0=x)=1$. Let $n \in \mathbb N$ be such that $x \in B_n$ and $\tau_n:=\inf \{t>0 : \widetilde{X}^1_t \in \mathbb R^d \setminus B_n \} \wedge \inf \{t>0 : \widetilde{X}^2_t \in \mathbb R^d \setminus B_n \}$. Let $(\widetilde{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0}$ be the augmented filtration of $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$ and $\widetilde{\mathcal{F}}^{\text{aug}}$ be the completion of $\widetilde{\mathcal{F}}$ under $\widetilde{\mathbb P}$. Then $(\widetilde{X}_t^1)_{t \geq 0}$ and $(\widetilde{X}_t^2)_{t \geq 0}$ are still adapted to $(\widetilde{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0}$ and $(\widetilde{W}_t)_{t \geq 0}$ is still a $d$-dimensional standard $(\widetilde{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0}$-Brownian motion.
We can hence from now on assume that we are working on $(\widetilde{\Omega}, \widetilde{\mathcal{F}}^{\text{aug}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0})$. Then, since $(\widetilde{X}_t^1)_{t \geq 0}$ and $(\widetilde{X}_t^2)_{t \geq 0}$ are $\widetilde{\mathbb P}$-a.s. continuous,
$\tau_n$ is an $(\widetilde{\mathcal{F}}^{\text{aug}}_t)_{t \geq 0}$-stopping time and $\widetilde{\mathbb P}(\lim_{n \rightarrow \infty}\tau_n=\infty)=1$. Let $\chi_n \in C_0^{\infty}(\mathbb R^d)$ be such that $0 \leq \chi_n \leq 1$, $\chi_n=1$ on $\overline{B}_n$ and $\text{supp}(\chi_n) \subset B_{n+1}$. Let $\mathbf{G}_n:=\chi_n \mathbf{G}$ and $\sigma^n=(\sigma^n_{ij})_{1 \leq i,j \leq d}$ be defined by
$$
\sigma_{ij}^n(x):= \chi_{n+1}(x)\sigma_{ij}(x)+ \sqrt{\nu_{B_{n+1}}} (1-\chi_{n}(x))\delta_{ij}, \quad x \in \mathbb R^d,
$$
where $(\delta_{ij})_{1 \leq i,j \leq d}$ denotes the identity matrix and the constant $\nu_{B_{n+1}}$ is from \eqref{eq:2.1.2}. Then, $\mathbf{G}_n \in L^p(\mathbb R^d, \mathbb R^d)$, $\nabla \sigma^n_{ij} \in L^p(\mathbb R^d, \mathbb R^d)$ for all $1 \leq i,j \leq d$ and
$$
\nu_{B_{n+1}}^{-1} \|\xi\|^2 \leq \|(\sigma^n)^T(x) \xi\|^2 \leq 4 \nu_{B_{n+1}} \|\xi\|^2, \quad \forall x \in \mathbb R^d, \xi \in \mathbb R^d.
$$
For $i \in \{1, 2 \}$, since $\sigma^n=\sigma$ and $\mathbf{G}_n=\mathbf{G}$ on $\overline{B}_n$ and $\widetilde{X}^i_t \in B_n$ for $0 \leq t<\tau_n$, it holds $\widetilde{\mathbb P}$-a.s. that
$$
\widetilde{X}_t^i = x+\int_0^t \sigma^n(\widetilde{X}^i_s) d \widetilde{W}_s + \int_0^t \mathbf{G}_n(\widetilde{X}^i_s)ds, \quad 0 \leq t<\tau_n.
$$
Then by \cite[Theorem 1.1]{Zh11} applied for $\sigma^n$, $\mathbf{G}_n$ and $\tau_n$,
$$
\widetilde{\mathbb P}(\widetilde{X}^1_t = \widetilde{X}^2_t, \quad 0 \leq t<\tau_n)=1.
$$
Now the assertion follows by letting $n \rightarrow \infty$.
\end{proof}
\begin{theorem} \label{theo:3.3.1.8}
Assume {\bf (c)} as at the beginning of Section \ref{sec:3.3} and
that $\mathbb M$ is non-explosive (cf. Definition \ref{non-explosive}). Then
$$
(\Omega, \mathcal{F}, \mathbb P_x, (\mathcal{F}_t)_{t \ge 0}, (X_t)_{t \ge 0},(W_t)_{t \geq 0})
$$
of Theorem \ref{theo:3.1.4}(i) is for each $x \in \mathbb R^d$ a weak solution to \eqref{eq:3.39} and uniqueness in law holds for \eqref{eq:3.39} (cf. Definition \ref{def:3.48}(vi)).\\
Let further $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ be a probability space carrying a $d$-dimensional standard Brownian motion $(\widetilde{W}_t)_{t \geq 0}$. Let $x \in \mathbb R^d$ be arbitrary. Then there exists a measurable map
$$
h^x: C([0, \infty), \mathbb R^d) \rightarrow C([0, \infty), \mathbb R^d)
$$
such that $(Y^x_t)_{t \geq 0}:= (h^x(\widetilde{W}_t))_{t \geq 0}$ is a pathwise unique and strong solution to \eqref{eq:3.39} on the probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ with Brownian motion $(\widetilde{W}_t)_{t \geq 0}$ and initial condition $Y^x_0=x$.
Moreover, $\mathbb P_x \circ X^{-1}= \widetilde{\mathbb P} \circ (Y^x)^{-1}$ holds, and therefore $((Y^x_t)_{t \geq 0}, \widetilde{\mathbb P})_{x \in \mathbb R^d}$ inherits all properties from $\mathbb M$ that only depend on its law. In particular, the strong Feller properties (Theorem \ref{theorem2.3.1}, Theorem \ref{theo:2.6}, Proposition \ref{prop:3.1.4}), irreducibility (Lemma \ref{lem:2.7}, Proposition \ref{prop:2.4.2}), Krylov-type estimates (Theorem \ref{theo:3.3}), integrability (Lemma \ref{lem:3.1.4}), moment inequalities (Proposition \ref{prop:3.2.8}, Proposition \ref{theo:3.2.8}), properties for recurrence and transience (Proposition \ref{prop:3.2.2.11}, Theorem \ref{theo:3.3.6}, Lemma \ref{lem:3.2.8}, Proposition \ref{theo:3.2.6}, Corollary \ref{cor:3.2.2.5}, Proposition \ref{cor:3.3.2.6}), and ergodic properties including the uniqueness of invariant measures (Theorem \ref{theo:3.3.8}, Proposition \ref{prop:3.3.12}, Corollary \ref{cor:3.2.3.7}) are satisfied with $(X_t)_{t \geq 0}$ and $\mathbb P_x$ replaced by $(Y^x_t)_{t \geq 0}$ and $\widetilde{\mathbb P}$, respectively.
\end{theorem}
\begin{proof}
Since $\mathbb M$ is non-explosive, it follows from Theorem \ref{theo:3.1.4}(i) that there exists a $d$-dimensional standard $(\mathcal{F}_t)_{t\geq0}$-Brownian motion $(W_t)_{t\geq0}$ on $(\Omega, \mathcal{F}, \mathbb P_x)$ such that $(\Omega, \mathcal{F}, \mathbb P_x, (\mathcal{F}_t)_{t \ge 0}, (X_t)_{t \ge 0},(W_t)_{t \geq 0})$ is a weak solution to \eqref{eq:3.39}. Thus, the first assertion follows from Proposition \ref{prop:3.3.9} and \cite[Proposition 1]{YW71}.
Moreover, by Proposition \ref{prop:3.3.9} and \cite[Corollary 1]{YW71}, there exists a measurable map
$$
h^x: C([0, \infty), \mathbb R^d) \rightarrow C([0, \infty), \mathbb R^d)
$$
such that $(X_t)_{t \geq 0}$ and $(h^x(W_t))_{t \geq 0}$ are $\mathbb P_x$-indistinguishable and in particular,
$$
(Y^x_t)_{t \geq 0}:= (h^x(\widetilde{W}_t))_{t \geq 0}
$$
is a strong solution to \eqref{eq:3.39} on the probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ with Brownian motion $(\widetilde{W}_t)_{t \geq 0}$ and $\widetilde{\mathbb P}(Y^x_0=x)=1$. Finally, since \eqref{eq:3.39} enjoys pathwise uniqueness, using \cite[Proposition 1]{YW71}, $\mathbb P_x \circ X^{-1}= \widetilde{\mathbb P} \circ (Y^x)^{-1}$ on $\mathcal{B}(C([0, \infty), \mathbb R^d))$, which concludes the proof.
\end{proof}
\begin{proposition} \label{prop:3.3.1.9}
Assume {\bf (c)} as at the beginning of Section \ref{sec:3.3} and that $\mathbb M$ is non-explosive (cf. Definition \ref{non-explosive}). Let $x \in \mathbb R^d$ and let
$$
(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}^x_t)_{t \ge 0})
$$
be an $\mathbb R^d_{\Delta}$-valued adapted stochastic process with $\widetilde{\mathbb P}(\widetilde{X}^x_0=x)=1$. Assume that there exists an $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$-stopping time $\widetilde{\zeta}$ such that $t \mapsto \widetilde{X}^x_t$ is continuous and $\mathbb R^d$-valued on $[0, \widetilde{\zeta})$ and $\widetilde{X}^x_t=\Delta$ on $\{t \geq \widetilde{\zeta} \}$, both $\widetilde{\mathbb P}$-a.s., and that for each $n \in \mathbb N$
it holds that
$$
\inf \{t>0: \widetilde{X}^x_t \in \mathbb R^d \setminus \overline{B}_n \}<\widetilde{\zeta} \quad \text{$\widetilde{\mathbb P}$-a.s. on $\{ \widetilde{\zeta}<\infty\}$. }
$$
Let $(\widetilde{W}_t)_{t \geq 0}$ be a $d$-dimensional standard $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$-Brownian motion on $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$. If
$$
\widetilde{X}^x_t = x+ \int_0^t \sigma(\widetilde{X}^x_s) d\widetilde{W}_s + \int_0^t \mathbf{G}(\widetilde{X}^x_s) ds, \quad 0 \leq t< \widetilde{\zeta}, \; \text{ $\widetilde{\mathbb P}$-a.s. }
$$
then $\widetilde{\mathbb P}(\widetilde{\zeta}=\infty)=1$ and $(\widetilde{X}^x_t)_{t \geq 0}$ is a strong solution to \eqref{eq:3.39} on the probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ with Brownian motion $(\widetilde{W}_t)_{t \geq 0}$ and $\widetilde{\mathbb P}(\widetilde{X}^x_0=x)=1$. Moreover, $\mathbb P_x \circ X^{-1}= \widetilde{\mathbb P} \circ (\widetilde{X}^x)^{-1}$ \text{on } $\mathcal{B}(C([0, \infty), \mathbb R^d))$.
\end{proposition}
\begin{proof}
Without loss of generality, we may assume that $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$ is right-continuous and contains the augmented filtration of $(\sigma(\widetilde{W}_s; \, 0 \leq s \leq t))_{t \geq 0}$. By Theorem \ref{theo:3.3.1.8}, there exists a measurable map
$$
h^x: C([0, \infty), \mathbb R^d) \rightarrow C([0, \infty), \mathbb R^d)
$$
such that $(Y^x_t)_{t \geq 0}:= (h^x(\widetilde{W}_t))_{t \geq 0}$ is a pathwise unique and strong solution to \eqref{eq:3.39} on the probability space $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P})$ with Brownian motion $(\widetilde{W}_t)_{t \geq 0}$ and $\widetilde{\mathbb P}(Y^x_0=x)=1$, hence
$$
Y^x_t = x+ \int_0^t \sigma(Y^x_s) d\widetilde{W}_s + \int_0^t \mathbf{G}(Y^x_s) ds, \quad 0 \leq t< \infty, \; \text{ $\widetilde{\mathbb P}$-a.s. }
$$
Let $n \in \mathbb N$ be such that $x \in B_n$ and $\tau_n:=\inf \{t>0: \widetilde{X}^x_t \in \mathbb R^d \setminus \overline{B}_n \}$. Then by the $\widetilde{\mathbb P}$-a.s. right continuity of $(\widetilde{X}^x_t)_{t \geq 0}$ and the usual conditions of $(\widetilde{\mathcal{F}}_{t})_{t \geq 0}$, we obtain that $\tau_n$ is an $(\widetilde{\mathcal{F}}_t)_{t \geq 0}$-stopping time. Since $\widetilde{\mathbb P}$-a.s. $t \mapsto \widetilde{X}_t^x$ is continuous and $\mathbb R^d$-valued on $[0, \widetilde{\zeta})$, it follows that $\tau_n<\tau_{n+1}<\widetilde{\zeta}$, $\widetilde{\mathbb P}$-a.s. on $\{ \widetilde{\zeta}<\infty\}$. Moreover, $\widetilde{\mathbb P}$-a.s. $\lim_{n \rightarrow \infty} \tau_n=\widetilde{\zeta}$ and $\widetilde{\mathbb P}$-a.s.
$$
\widetilde{X}_t^x = x+\int_0^t \sigma(\widetilde{X}^x_s) d \widetilde{W}_s + \int_0^t \mathbf{G}(\widetilde{X}^x_s)ds, \quad 0 \leq t<\tau_{n+1}.
$$
By \cite[Theorem 1.1]{Zh11},
$$
\widetilde{\mathbb P}(Y^x_t=\widetilde{X}^x_t, \quad 0\leq t<\tau_{n+1})=1.
$$
Therefore, we obtain
$$
Y^x_{\tau_n} = \widetilde{X}^x_{\tau_n}, \quad \text{$\widetilde{\mathbb P}$-a.s. on $\{\widetilde{\zeta}<\infty \}$}.
$$
Now suppose that $\widetilde{\mathbb P}(\widetilde{\zeta}<\infty)>0$. Then $\widetilde{\mathbb P}$-a.s. on $\{\widetilde{\zeta}<\infty \}$
$$
\|Y^x_{\tau_n}\| = \|\widetilde{X}^x_{\tau_n}\|=n.
$$
Therefore, $\widetilde{\mathbb P}$-a.s. on $\{\widetilde{\zeta}<\infty \}$
$$
\|Y^x_{\widetilde{\zeta}}\| = \lim_{n \rightarrow \infty} \|Y^x_{\tau_n} \| = \infty,
$$
which is a contradiction since $\|Y^x_{\widetilde{\zeta}}\|<\infty$ $\widetilde{\mathbb P}$-a.s. on $\{\widetilde{\zeta}<\infty \}$. Therefore,
$$
\widetilde{\mathbb P}(\widetilde{\zeta}=\infty)=1,
$$
hence by Proposition \ref{prop:3.3.9},
$$
\widetilde{\mathbb P}(Y^x_t=\widetilde{X}^x_t, \; 0\leq t<\infty)=1.
$$
By Theorem \ref{theo:3.3.1.8}, it follows that
$$
\mathbb P_x \circ X^{-1}= \widetilde{\mathbb P} \circ (Y^x)^{-1}=\widetilde{\mathbb P} \circ (\widetilde{X}^x)^{-1} \quad \text{on } \ \mathcal{B}(C([0, \infty), \mathbb R^d)).
$$
\end{proof}
In the following remark, we briefly mention some previous related results about pathwise uniqueness and strong solutions to
SDEs.
\begin{remark}\label{rem:3.3.1}
{\it The classical result developed by It\^{o} about pathwise uniqueness and existence of a strong solution (strong well-posedness) requires dispersion and drift
coefficients to be globally Lipschitz continuous and to satisfy a linear growth condition (cf. \cite[2.9 Theorem,
Chapter 5]{KaSh}). In \cite[Theorem 4]{Zvon}, Dini continuity, which is weaker than global Lipschitz continuity, is assumed for the drift coefficient, but the diffusion and drift coefficients have to be globally bounded. The result of It\^{o} can be localized, imposing only a local Lipschitz condition together with a (global) linear growth condition (cf. \cite[IV. Theorems 2.4 and 3.1]{IW89}).\\
Strong well-posedness results for only measurable coefficients were given starting from \cite{Zvon}, \cite{Ver79}, \cite{Ver81}.
In these works $\sigma$ is non-degenerate and $\sigma, \mathbf{G}$ are bounded. To our knowledge the first strong well-posedness results for unbounded measurable coefficients start with \cite[Theorem 2.1]{GyMa}, but the growth condition there for non-explosion \cite[Assumption 2.1]{GyMa} does not allow for linear growth as in the classical case.
In \cite{KR}, the authors consider the Brownian motion case with drift, covering the condition {\bf (c)}. They obtain strong well-posedness up to an explosion time and certain non-explosion conditions, which also do not allow for linear growth (see \cite[Assumption 2.1]{KR}). The main technique of \cite{Zvon}, now known as Zvonkin transformation, was employed together with Krylov-type estimates in
\cite{Zh11} in order to obtain strong well-posedness for locally unbounded drift coefficient and non-trivial dispersion coefficient up to an explosion time. The assumptions in \cite{Zh11}, when restricted to the time-homogeneous case are practically those of {\bf (c)} (cf. \cite[Remark 3.3(ii)]{LT18} and the corresponding discussion in the introduction there), but again the non-explosion conditions are far from being classical-like linear growth conditions (see also \cite{Zh05}).
Among the references where the technique of Zvonkin transformation together with Krylov-type estimates is used to obtain local strong well-posedness, the best non-explosion conditions up to now, building on the local strong well-posedness result of \cite{Zh11}, can be found in \cite{ZhXi16}. In \cite{ZhXi16}, strong Feller properties, irreducibility, and further properties of the solution are also studied. However, the conditions needed to obtain the results there are quite involved and restrictive, and ultimately do not differ substantially from the classical results for local Lipschitz coefficients (see the discussion in the introduction of \cite{LT18}). In summary, one can say that, in contrast to our results, \cite{GyMa}, \cite{KR}, \cite{Zh05}, \cite{Zh11}, \cite{ZhXi16} also cover the time-inhomogeneous case, but
sharp results that treat SDEs with general locally unbounded drift coefficients in the same detail as Theorem \ref{theo:3.3.1.8}, similarly to classical SDEs with local Lipschitz coefficients, seem not to be at hand. The optimal local regularity assumptions for local well-posedness (as in \cite{Zh11}) need to be strengthened in order to obtain the further important properties of the solution as in Theorem 3.7 (see for instance conditions (H1), (H2), and (H1'), (H2') in \cite{ZhXi16}), contrary to the classical case (of locally Lipschitz coefficients), where important further properties of the solution can be formulated independently of the local regularity assumptions.}
\end{remark}
\subsubsection{Uniqueness in law (via $L^1$-uniqueness)}
\label{subsec:3.3.2}
Throughout this section, we let
$$
\mu=\rho\,dx
$$
be as in Theorem \ref{theo:2.2.7} or as in Remark \ref{rem:2.2.4}.
\begin{definition} \label{def:3.3.2.1}
Consider a right process
$$
\widetilde{\mathbb M} = (\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t \geq 0},(\widetilde{X}_t)_{t \ge 0}, (\widetilde{\mathbb P}_x)_{x \in \mathbb R^d\cup \{\Delta\}})
$$
with state space $\mathbb R^d$ (cf. Definition \ref{def:3.1.1}).
For a measure $\nu$ on $(\mathbb R^d, \mathcal{B}(\mathbb R^d))$, we set
$$
\widetilde{\mathbb P}_{\nu}(A):=\int_{\mathbb R^d} \widetilde{\mathbb P}_{x}(A) \nu(dx), \quad A \in \mathcal{B}(\mathbb R^d).
$$
$\widetilde{\mathbb M}$ is said to {\bf solve the martingale problem for $(L,C_0^{\infty}(\mathbb R^d))$ with respect to $\mu$}\index{solution ! to the martingale problem with respect to $\mu$}, if for all $u\in C_0^{\infty}(\mathbb R^d)$:
\begin{itemize}
\item[(i)] $u(\widetilde{X}_t) - u(\widetilde{X}_0) - \int_0^t L u(\widetilde{X}_s) \, ds$, $t \ge 0$,
is a continuous $(\widetilde{\mathcal{F}}_t)_{t \ge 0}$-martingale under $\widetilde{\mathbb P}_{\varv \mu}$ for any $\varv \in \mathcal{B}_b^+(\mathbb R^d)$ such that $\int_{\mathbb R^d}\varv\,d\mu=1$.
\end{itemize}
\end{definition}
\begin{remark}
{\it Let $\widetilde{\mathbb M} = (\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t \geq 0},(\widetilde{X}_t)_{t \ge 0}, (\widetilde{\mathbb P}_x)_{x \in \mathbb R^d\cup \{\Delta\}})$ be a right process with state space $\mathbb R^d$ and consider the following condition:
\begin{itemize}
\item[(i$^\prime$)]
for all $u \in C_0^{\infty}(\mathbb R^d)$, $u(\widetilde{X}_t) - u(\widetilde{X}_0) - \int_0^t L u(\widetilde{X}_s) \, ds$, $t \ge 0$, is a continuous $(\widetilde{\mathcal{F}}_t)_{t \ge 0}$-martingale under $\widetilde{\mathbb P}_{x}$ for $\mu$-a.e. $x \in \mathbb R^d$.
\end{itemize}
If (i$^\prime$) holds, then (i) of Definition \ref{def:3.3.2.1} holds and $\widetilde{\mathbb M}$ hence solves the martingale problem for $(L, C_0^{\infty}(\mathbb R^d))$ with respect to $\mu$. In particular, by Proposition \ref{prop:3.1.6}, $\mathbb M$ solves the martingale problem for $(L,C_0^{\infty}(\mathbb R^d))$ with respect to $\mu$. Consider the following condition:
\begin{itemize}
\item[(i$^{\prime \prime}$)]
there exists a $d$-dimensional standard $(\widetilde{\mathcal{F}}_t)_{t\geq0}$-Brownian motion $(\widetilde{W}_t)_{t\geq0}$ on $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}_y)$ such that $(\widetilde{\Omega}, \widetilde{\mathcal{F}}, \widetilde{\mathbb P}_y, (\widetilde{\mathcal{F}}_t)_{t \ge 0}, (\widetilde{X}_t)_{t \ge 0},(\widetilde{W}_t)_{t \geq 0})$ is a weak solution to \eqref{eq:3.39} for $\mu$-a.e. $y \in \mathbb R^d$.
\end{itemize}
By It\^{o}'s formula, if (i$^{\prime \prime}$) is satisfied, then (i$^{\prime}$) holds, hence $\widetilde{\mathbb M}$ solves the martingale problem for $(L,C_0^{\infty}(\mathbb R^d))$ with respect to $\mu$.\\
If $\mu$ is a sub-invariant measure for $\widetilde{\mathbb M}$, then by Proposition \ref{prop:3.3.1.15} below, we obtain a resolvent $(\widetilde{R}_{\alpha})_{\alpha>0}$ on $L^1(\mathbb R^d, \mu)$ associated to $\widetilde{\mathbb M}$, hence for any $f \in L^1(\mathbb R^d, \mu)$ and $\alpha>0$, it holds that
$$
\widetilde{R}_{\alpha}f(x)=\widetilde{\mathbb E}_x\left[\int_0^{\infty} e^{-\alpha t} f(\widetilde{X}_t) dt \right], \;\; \text{for $\mu$-a.e. $x \in \mathbb R^d$.}
$$
Thus, we have that $\int_0^t Lu(\widetilde{X}_s)ds$, $t \geq 0$, is $\widetilde{\mathbb P}_\mu$-a.e. independent of the Borel measurable $\mu$-version chosen for $Lu$.}
\end{remark}
\begin{proposition} \label{prop:3.3.1.15}
Suppose that condition {\bf (a)} of Section \ref{subsec:2.2.1} holds and that $(L,C_0^{\infty}(\mathbb R^d))$ is $L^1$-unique (cf. Definition \ref{def:2.1.1}(i)).
Let a right process
$$
\widetilde{\mathbb M} = (\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t \geq 0}, (\widetilde{X}_t)_{t \ge 0}, (\widetilde{\mathbb P}_x)_{x \in \mathbb R^d\cup \{\Delta\}})
$$
solve the martingale problem for $(L,C_0^{\infty}(\mathbb R^d))$ with respect to $\mu$ such that $\mu$ is a sub-invariant measure for $\widetilde{\mathbb M}$. Let
$$
p^{\widetilde{\mathbb M}}_t f(x):=\widetilde{\mathbb E}_x[f(\widetilde{X}_t)], \quad f \in \mathcal{B}_b(\mathbb R^d), \; x \in \mathbb R^d, t>0,
$$
where $\widetilde{\mathbb E}_x$ denotes the expectation with respect to $\widetilde{\mathbb P}_x$. Then $(p^{\widetilde{\mathbb M}}_t)_{t \geq 0}|_{L^1(\mathbb R^d, \mu)_b}$ uniquely extends to a sub-Markovian $C_0$-semigroup of contractions $(S_t)_{t \geq 0}$ on $L^1(\mathbb R^d, \mu)$ and
\begin{equation} \label{eq:3.40*}
S_t f =T_tf \; \text{ in $L^1(\mathbb R^d, \mu)$,\quad for all $f\in L^1(\mathbb R^d,\mu)$, $t\ge 0$}.
\end{equation}
In particular, $\mu$ is an invariant measure for $\widetilde{\mathbb M}$. Moreover, if additionally assumption {\bf (b)} of Section \ref{subsec:3.1.1} holds and
\begin{equation} \label{eq:3.41*}
p^{\widetilde{\mathbb M}}_t(C_0^{\infty}(\mathbb R^d)) \subset C(\mathbb R^d), \quad \forall t>0,
\end{equation}
then
$$
\widetilde {\mathbb P}_x \circ \widetilde{X}^{-1}=\mathbb P_x \circ X^{-1} \quad \text{on } \ \mathcal{B}(C([0, \infty), \mathbb R^d))\ \text{ for all $x\in \mathbb R^d$},
$$
hence $\widetilde{\mathbb M}$ inherits all properties of $\mathbb M$ that only depend on its law.
\end{proposition}
\begin{proof}
Since $\mu$ is a sub-invariant measure for $\widetilde{\mathbb M}$ and $L^1(\mathbb R^d, \mu)_b$ is dense in $L^1(\mathbb R^d, \mu)$, it follows that $(p^{\widetilde{\mathbb M}}_t)_{t \geq 0}|_{L^1(\mathbb R^d, \mu)_b}$ uniquely extends to a sub-Markovian semigroup of contractions $(S_t)_{t> 0}$ on $L^1(\mathbb R^d, \mu)$. We first show the following claim. \\
{\bf Claim}: $(S_t)_{t \geq 0}$ is strongly continuous on $L^1(\mathbb R^d, \mu)$. \\
Let $f \in C_0(\mathbb R^d)$. By the right continuity and the normal property of $(\widetilde{X}_t)_{t \geq 0}$ and Lebesgue's theorem, it follows that
\begin{equation} \label{eq:3.41}
\lim_{t \rightarrow 0+} S_t f(x) = \lim_{t \rightarrow 0+} \widetilde{\mathbb E}_{x} [f(\widetilde{X}_t)]=f(x), \quad \text{ for $\mu$-a.e. $x \in \mathbb R^d$}.
\end{equation}
Now let $B$ be an open ball with $\text{supp}(f) \subset B$. By \eqref{eq:3.41} and Lebesgue's theorem,
$$
\lim_{t \rightarrow 0+}\int_{\mathbb R^d} 1_B |S_t f| d \mu = \int_{\mathbb R^d} 1_B |f| d\mu = \|f\|_{L^1(\mathbb R^d, \mu)},
$$
hence, using the contraction property of $(S_t)_{t>0}$ on $L^1(\mathbb R^d, \mu)$,
\begin{equation} \label{eq:3.42}
\int_{\mathbb R^d} 1_{\mathbb R^d \setminus B} \,|S_t f| d \mu \leq \|f\|_{L^1(\mathbb R^d, \mu)} - \int_{\mathbb R^d} 1_B |S_t f|d \mu \rightarrow 0 \;\; \text{ as } t \rightarrow 0+.
\end{equation}
Therefore, by \eqref{eq:3.41}, \eqref{eq:3.42} and Lebesgue's theorem,
\begin{eqnarray*}
\lim_{t \rightarrow 0+} \int_{\mathbb R^d} |S_t f -f | d \mu &=& \lim_{t \rightarrow 0+} \left(\int_{\mathbb R^d} 1_B |S_t f - f| d \mu + \int_{\mathbb R^d} 1_{\mathbb R^d \setminus B} |S_t f | d \mu \right) =0.
\end{eqnarray*}
Using the denseness of $C_0(\mathbb R^d)$ in $L^1(\mathbb R^d, \mu)$ and the contraction property of $(S_t)_{t>0}$ on $L^1(\mathbb R^d, \mu)$, the claim follows by a $3$-$\varepsilon$ argument: given $f \in L^1(\mathbb R^d, \mu)$ and $\varepsilon>0$, choose $g \in C_0(\mathbb R^d)$ with $\|f-g\|_{L^1(\mathbb R^d, \mu)}<\varepsilon$; then
$$
\|S_t f - f\|_{L^1(\mathbb R^d, \mu)} \leq \|S_t(f-g)\|_{L^1(\mathbb R^d, \mu)} + \|S_t g - g\|_{L^1(\mathbb R^d, \mu)} + \|g-f\|_{L^1(\mathbb R^d, \mu)} \leq 2\varepsilon + \|S_t g - g\|_{L^1(\mathbb R^d, \mu)},
$$
hence $\limsup_{t \rightarrow 0+} \|S_t f - f\|_{L^1(\mathbb R^d, \mu)} \leq 2 \varepsilon$.\\
Denote by $(A, D(A))$ the infinitesimal generator of the $C_0$-semigroup of contractions $(S_t)_{t > 0}$ on $L^1(\mathbb R^d, \mu)$.
Let $u \in C_0^{\infty}(\mathbb R^d)$ and $\varv \in \mathcal{B}_b^+(\mathbb R^d)$ with $\int_{\mathbb R^d} \varv \, d\mu = 1$. Then by Fubini's theorem,
\begin{eqnarray*}
\int_{\mathbb R^d} (S_t u - u) \varv d\mu &=& \widetilde{\mathbb E}_{\varv\mu} \left[ u(\widetilde{X}_t)-u(\widetilde{X}_0) \right]\\
&=&\widetilde{\mathbb E}_{\varv\mu}\left[\int_0^t Lu(\widetilde{X}_s) ds \right]=\int_{\mathbb R^d} \left(\int_0^tS_s Lu ds\right) \varv d\mu,
\end{eqnarray*}
hence we obtain $S_t u-u=\int_0^tS_s Lu ds$ in $L^1(\mathbb R^d, \mu)$. By the strong continuity of $(S_t)_{t>0}$ on $L^1(\mathbb R^d, \mu)$, we get $u \in D(A)$ and $Au =Lu$. Since $(L,C_0^{\infty}(\mathbb R^d))$ is $L^1$-unique, it follows that $(A, D(A))=(\overline{L}, D(\overline{L}))$, hence \eqref{eq:3.40*} follows. Since $\mu$ is $(\overline{T}_t)_{t>0}$-invariant by Proposition \ref{prop2.1.9}, it follows by monotone approximation that $\mu$ is an invariant measure for $\widetilde{\mathbb M}$. If \eqref{eq:3.41*} and additionally {\bf (b)} hold, then by \eqref{eq:3.40*} and the strong Feller property of $(P_t)_{t>0}$,
$$
\int_{\mathbb R^d} f(y) \, \widetilde{\mathbb P}_{x} (\widetilde{X}_t \in dy)=\int_{\mathbb R^d} f(y) \, \mathbb P_{x} (X_t \in dy), \quad \forall f \in C_0^{\infty}(\mathbb R^d), x \in \mathbb R^d, t>0.
$$
By a monotone class argument, the latter implies $\widetilde{\mathbb P}_{x} \circ \widetilde{X}_t^{-1}=\mathbb P_{x} \circ X_t^{-1}$ for all $x \in \mathbb R^d$ and $t>0$. Since the law of a right process is uniquely determined by its transition semigroup (and the initial condition), we have $\widetilde {\mathbb P}_x \circ \widetilde{X}^{-1}=\mathbb P_x \circ X^{-1}$ on $\mathcal{B}(C([0, \infty), \mathbb R^d))$ for all $x \in \mathbb R^d$ as desired.
\end{proof}
\begin{proposition} \label{prop:3.3.1.16}
Suppose the conditions {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold and that $a_{ij}$ is locally H\"{o}lder continuous on $\mathbb R^d$ for all $1 \leq i,j \leq d$, i.e. \eqref{AssumptionUniqueness} holds. Suppose that $\mu$, as at the beginning of this section, is an invariant measure for $\mathbb M$ (see Definition \ref{def:invariantforprocess}) and let
$$
\widetilde{\mathbb M}=(\widetilde{\Omega}, \widetilde{\mathcal{F}}, (\widetilde{\mathcal{F}}_t)_{t \geq 0}, (\widetilde{X}_t)_{t \ge 0}, (\widetilde{\mathbb P}_x)_{x \in \mathbb R^d\cup \{\Delta\}})
$$
be a right process which solves the martingale problem for $(L,C_0^{\infty}(\mathbb R^d))$ with respect to $\mu$ (see Definition \ref{def:3.3.2.1}), such that $\mu$ is a sub-invariant measure for $\widetilde{\mathbb M}$
(Definition \ref{def:invariantforprocess}). Assume further that
$$
\widetilde{\mathbb E}_{\cdot}[f(\widetilde{X}_t)] \in C(\mathbb R^d), \quad \forall f \in C_0^{\infty}(\mathbb R^d), t>0.
$$
Then $\mu$ is an invariant measure for $\widetilde{\mathbb M}$ and
$$
\widetilde {\mathbb P}_x \circ \widetilde{X}^{-1}=\mathbb P_x \circ X^{-1} \quad \text{on } \ \mathcal{B}(C([0, \infty), \mathbb R^d)) \ \text{ for all $x\in \mathbb R^d$},
$$
hence $\widetilde{\mathbb M}$ inherits all properties of $\mathbb M$ that only depend on its law.
\end{proposition}
\begin{proof}
By Corollary \ref{cor2.1.1}, $(L, C_0^{\infty}(\mathbb R^d))$ is $L^1$-unique if and only if $\mu$ is an invariant measure for $\mathbb M$. Therefore, the assertion follows from Proposition \ref{prop:3.3.1.15}.
\end{proof}
\begin{eg} \label{ex:3.4.9}
In (i)--(vi) below, we illustrate different kinds of situations which imply that $\mu$ is an invariant measure for $\mathbb M$, so that Proposition \ref{prop:3.3.1.16} is applicable. Throughout (i)--(iv), $a_{ij}$, $1 \leq i,j \leq d$, is assumed to be locally H\"{o}lder continuous on $\mathbb R^d$.
\begin{itemize}
\item[(i)]
By \cite[Proposition 2.5]{BCR}, $(T_t)_{t>0}$ is recurrent if and only if $(T'_t)_{t>0}$ is recurrent. Therefore, it follows from Theorem \ref{theo:3.3.6}(iii) and (v) that if $\mathbb M$ is recurrent, then $(T'_t)_{t>0}$ is conservative, hence $\mu$ is $(\overline{T}_t)_{t>0}$-invariant by Remark 2.4(i). Thus, under the assumptions of Proposition \ref{theo:3.2.6} or Proposition \ref{cor:3.3.2.6}, we obtain that $\mu$ is an invariant measure for $\mathbb M$ by Remark \ref{rem:3.2.3.3}.
\item[(ii)]
Consider the situation of Remark \ref{rem:2.2.4} and let additionally $\nabla (A+C^T) \in L^q_{loc}(\mathbb R^d, \mathbb R^d)$. Note that this implies {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1}.
Then, under the assumption of Proposition \ref{prop:2.1.10}(i), or Proposition \ref{prop:3.2.9}(i) or (ii) (in particular, Example \ref{exam:3.2.1.4}(i)), it follows that $\mu$ is an invariant measure for $\mathbb M$.
\item[(iii)]
Under the assumption of Example \ref{exam:3.2.1.4}(ii), it follows that the Hunt process $\mathbb M'$ associated with $(T'_t)_{t>0}$ is non-explosive, hence $(T'_t)_{t>0}$ is conservative by Corollary \ref{cor:3.2.1}, so that $\mu$ is an invariant measure for $\mathbb M$ by Remark 2.4(i) and Remark \ref{rem:3.2.3.3}.
\item[(iv)] Suppose that {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold and that Proposition \ref{prop:3.2.9}(i) or (ii) is verified. Then $\mathbb M$ is non-explosive and $\mu$ is an invariant measure for $\mathbb M$, by Proposition \ref{prop:3.2.9} and Remark \ref{rem:3.2.3.3}.
\item[(v)] Suppose that $A=id$, $\mathbf{G}=\nabla \phi$ and $\mu=\exp(2\phi) dx$, where $\phi \in H^{1,p}_{loc}(\mathbb R^d)$ for some $p\in (d,\infty)$, and that \eqref{eq:3.2.27} holds. Then $\mathbb M$ is non-explosive and $\mu$ is an invariant measure for $\mathbb M$, by Remarks \ref{rem:3.2.1 1} and \ref{rem:3.2.3.3}.
\item[(vi)] Let $A=id$ and $\mathbf{G}= (\frac{1}{2}-\frac{1}{2}e^{-x_1}) \mathbf{e}_1$. Then $(L, C_0^{\infty}(\mathbb R^d))$ is given by
$$
Lf = \frac12 \Delta f +(\frac12-\frac12 e^{-x_1} ) \partial_1 f, \; \quad \forall f \in C_0^{\infty}(\mathbb R^d),
$$
and $\mu=e^{x_1}dx$ is an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$. Let $(\overline{L}', D(\overline{L}'))$ be the infinitesimal generator of $(T'_t)_{t>0}$ on $L^1(\mathbb R^d, \mu)$ as in Remark \ref{remark2.1.7}(ii). Then $C_0^{\infty}(\mathbb R^d) \subset D(\overline{L}')$ and it holds that
$$
\overline{L}' f= \frac12 \Delta f +(\frac12+\frac12 e^{-x_1} ) \partial_1 f, \; \quad \forall f \in C_0^{\infty}(\mathbb R^d).
$$
By Remark \ref{rem:2.1.12}(ii) and Proposition \ref{prop:2.1.4.1.4}, $\mu$ is not $(\overline{T}'_t)_{t>0}$-invariant, hence $(T_t)_{t>0}$ is not conservative. But by Proposition \ref{prop:2.1.10}, $\mu$ is $(\overline{T}_t)_{t>0}$-invariant, hence $\mu$ is an invariant measure for $\mathbb M$. Thus, Proposition \ref{prop:3.3.1.16} is applicable, even though $\mathbb M$ is explosive.
\end{itemize}
\end{eg}
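The invariance computation in Example (vi) can be verified symbolically. Below is a minimal sketch, assuming SymPy is available, which checks in the single active coordinate $x=x_1$ that $\rho=e^{x}$ satisfies the formal adjoint (divergence-form) equation $\frac12 \rho'' - (g\rho)' = 0$ for the drift coefficient $g=\frac12-\frac12 e^{-x}$, so that $\mu=e^{x_1}dx$ is indeed infinitesimally invariant for $(L, C_0^{\infty}(\mathbb R^d))$:

```python
# Sketch (illustration, not part of the monograph): symbolic check, in the
# single active coordinate x = x_1, that mu = e^{x} dx is infinitesimally
# invariant for L f = (1/2) f'' + (1/2 - (1/2) e^{-x}) f', i.e. that
# rho = e^{x} solves the formal adjoint equation (1/2) rho'' - (g rho)' = 0.
import sympy as sp

x = sp.symbols('x', real=True)
rho = sp.exp(x)                           # density of mu w.r.t. Lebesgue measure
g = sp.Rational(1, 2) - sp.exp(-x) / 2    # drift coefficient

residual = sp.diff(rho, x, 2) / 2 - sp.diff(g * rho, x)
print(sp.simplify(residual))  # -> 0
```

The same one-line computation applied to the adjoint drift $\frac12+\frac12 e^{-x}$ also vanishes, which is consistent with the example: the failure of $(\overline{T}'_t)_{t>0}$-invariance is a global (conservativeness) phenomenon, not an infinitesimal one.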
\subsection{Comments and references to related literature}\label{Comments3}
The classical probabilistic techniques that we use in Chapter \ref{chapter_3} can be found for instance in
\cite{IW89}, \cite{D96}, \cite{KaSh}. Beyond that, in Section \ref{subsec:3.1.1}, the idea for the construction of the Hunt process $\mathbb M$ whose starting points are all points of $\mathbb R^d$ originates from \cite{AKR}, which originally covers only the case of an underlying symmetric Dirichlet form. More precisely, using the theory of generalized Dirichlet forms (\cite{WSGDF}) and their stochastic counterpart (\cite{Tr2} and \cite{Tr5}), we extend the method of \cite{AKR} in order to obtain Theorem \ref{th: 3.1.2}. In Section \ref{subsec:3.1.3}, the identification of $\mathbb M$ as a weak solution to an SDE is done via a representation theorem for semi-martingales (\cite[II. Theorem 7.1, 7.1']{IW89}).\\
The Krylov-type estimates in Theorem \ref{theo:3.3} (see also Remark \ref{rem:ApplicationKrylovEstimates}), which result from the application of regularity theory of PDEs, seem to be new, even in the classical case of locally Lipschitz continuous coefficients.\\
Concerning Section \ref{subsec:3.2.1}, providing sufficient conditions for non-explosion in terms of Lyapunov functions goes back at least to \cite[Theorem 3.5]{kha12}. In \cite[Example 5.1]{BRSta}, a procedure is explained on how to extend that method to Lyapunov functions that are considered as $\alpha$-superharmonic functions outside an arbitrarily large compact set. This procedure is used to obtain Lemma \ref{lem3.2.6} about non-explosion of $\mathbb M$. Corollary \ref{cor:3.2.2} on a Lyapunov condition for non-explosion is an improved version of \cite[Theorem 4.2]{LT18}.\\
Various results about recurrence and transience in Section \ref{subsec:3.2.2} are obtained by combining results and methods of
\cite{Ge}, \cite{Pi}, \cite{GT2}. Proposition \ref{cor:3.2.2.5} on a Lyapunov condition for recurrence is an improved version of \cite[Theorem 4.13]{LT18}. \\
Doob's theorem on regular semigroups \cite[Theorem 4.2.1]{DPZB}, resp. the Lyapunov condition for finiteness of $\mu$ in \cite[Theorem 2]{BRS} are crucial for the results on ergodicity in Theorem \ref{theo:3.3.8}, respectively the finiteness of $\mu$ in Proposition \ref{prop:3.3.12} in Section \ref{subsec:3.2.3}. Corollary \ref{cor:3.2.3.7} on a Lyapunov condition for ergodicity is an improved version of \cite[Proposition 4.17]{LT18}.
The uniqueness of weak
solutions of SDEs is then applied in Example \ref{ex:3.8} to show non-uniqueness
of invariant measures.
\\
In Section \ref{subsec:3.3.1}, Proposition \ref{prop:3.3.9} on pathwise uniqueness, which is a direct consequence of \cite[Theorem 1.1]{Zh11}, together with the Yamada--Watanabe theorem (\cite[Corollary 1, Proposition 1]{YW71}) is crucial to obtain global strong existence in Theorem \ref{theo:3.3.1.8}.
However, Theorem \ref{theo:3.3.1.8} does not merely combine \cite[Corollary 1, Proposition 1]{YW71} and \cite[Theorem 1.1]{Zh11}: together with the weak existence result and various other results on properties of the weak solution presented in this monograph, it actually discloses new results for the existence of a strong solution to time-homogeneous It\^o-SDEs with rough coefficients and its various properties.\\
The idea to derive uniqueness in law via $L^1$-uniqueness in Section \ref{subsec:3.3.2}
can be found in \cite{AR} (see also \cite{Eberle}).
\section{Conclusion and outlook}\label{conclusionoutlook}
In this book, we studied the existence, uniqueness and stability of solutions to It\^{o} SDEs with non-smooth coefficients, using functional analysis, PDE-techniques and stochastic analysis. Theories that played important roles in developing the contents of this book were elliptic and parabolic regularity theory for PDEs and generalized Dirichlet form theory.
In order to study the existence and various properties of solutions to It\^{o}-SDEs, we could use the functional analytic characterization of a generator and additional analytic properties of the corresponding semigroups and resolvents.
Thus, without restricting the local regularity assumptions on the coefficients that ensure the local uniqueness of solutions,
we could derive strong Feller properties and irreducibility of the semigroup as well as Krylov-type estimates for the solutions to the SDEs. Subsequently, we verified that the solutions of the SDEs with non-smooth coefficients can be further analyzed in very much the same way as the solutions to classical SDEs with Lipschitz coefficients. In particular, through the theory of elliptic PDEs, we could explore the existence of an infinitesimally invariant measure that is not only a candidate for the invariant measure but also a reference measure for our underlying $L^r$-space. Thus, by investigating the conservativeness of the adjoint semigroups, we could characterize the existence of invariant measures and present various criteria for recurrence and ergodicity, as well as for uniqueness of invariant probability measures.\\
Let us provide some outlook to further related topics that can now be investigated based on the techniques developed in this book. \\ \\
{\bf 1. The time-inhomogeneous case and other extensions}\\ \\
The way of constructing weak solutions to SDEs by methods as used in this book is quite robust and was already successfully applied in the degenerate case (see \cite{LT19de}) and to cases with reflection (\cite{ShTr13a}). We may hence think of applying it also in the time-dependent case. As mentioned in the introduction, the local well-posedness result \cite[Theorem 1.1]{Zh11} also holds in
the time-dependent case (and including also the case $d=1$) with some trade-off between the integrability assumptions in time and space.
In particular, the corresponding time-dependent Dirichlet form theory is already well-developed (see \cite{O04, O13, WS04, RuTr4}).
Our method to construct weak solutions independently and separately from local well-posedness, and thereby to extend existing literature, may also work well in the time-inhomogeneous case, if an adequate regularity theory can be developed or exploited. Moreover, we may also think of treating the time-homogeneous case $d=1$. As it allows explicit computations with stronger regularity results and there always exists a symmetrizing measure\index{measure ! symmetrizing} under mild regularity assumptions on the coefficients, one can always apply symmetric Dirichlet form theory (see for instance \cite[Remark 2.1]{GT1}, \cite[Lemma 2.2.7(ii), Section 5.5]{FOT}). Therefore, in the time-homogeneous case $d=1$, we expect to obtain weak existence results under considerably weaker local regularity assumptions on the coefficients than are needed for local well-posedness.\\ \\
{\bf 2. Relaxing the local regularity conditions on the coefficients}\\ \\
By introducing a function space called $VMO$, it is possible to relax the condition {\bf (a)} of
Section \ref{subsec:2.2.1}.
For $g \in L^1_{loc}(\mathbb R^d)$, let us write $g \in VMO$ (cf. \cite{BKRS}) if there exists a positive continuous function $\omega$ on $[0, \infty)$ with $\omega(0)=0$ such that
\begin{equation*} \label{vmoine}
\sup_{z \in \mathbb R^d, r <R} r^{-2d} \int_{B_r(z)} \int_{B_r(z)} |g(x)-g(y)|dx dy \leq \omega(R), \quad \forall R>0.
\end{equation*}
Given an open ball $B$ and $f \in L^1(B)$, we write $f \in VMO(B)$ if there exists an extension $\widetilde{f} \in L^1_{loc}(\mathbb R^d)$ of $f \in L^1(B)$ such that $\widetilde{f} \in VMO$. For $f \in L^1_{loc}(\mathbb R^d)$, we write $f \in VMO_{loc}$ if for each open ball $B$, $f|_{B} \in VMO(B)$. Obviously, $C(\mathbb R^d) \subset VMO_{loc}$. By the Poincar\'{e} inequality (\cite[Theorem 4.9]{EG15}) and an extension result (\cite[Theorem 4.7]{EG15}), it holds that $H^{1,d}_{loc}(\mathbb R^d) \subset VMO_{loc}$.
Note that if the assumption $\hat{a}_{ij} \in C(\overline{B})$ for all $1 \leq i,j \leq d$ in Theorem \ref{Theorem2.2.2} is replaced by $\hat{a}_{ij} \in VMO(B) \cap L^{\infty}(B)$ for all $1 \leq i,j \leq d$, Theorem \ref{Theorem2.2.2} remains true, since it is a consequence of \cite[Theorem 1.8.3]{BKRS} which merely imposes $\hat{a}_{ij} \in VMO(B)$. Therefore, by replacing assumption {\bf (a)} of Section \ref{subsec:2.2.1} with the following assumption:
\begin{itemize}
\item[{\bf ($\tilde{\mathbf{a}}$)}] \ \index{assumption ! {\bf (a)}}$a_{ji}= a_{ij}\in H_{loc}^{1,2}(\mathbb R^d) \cap VMO_{loc}\cap L^{\infty}_{loc}(\mathbb R^d)$, $1 \leq i, j \leq d$, $d\ge 2$, and $A = (a_{ij})_{1\le i,j\le d}$ satisfies \eqref{eq:2.1.2}.
Moreover, $C = (c_{ij})_{1\le i,j\le d}$ with $-c_{ji}=c_{ij} \in H_{loc}^{1,2}(\mathbb R^d) \cap VMO_{loc} \cap L^{\infty}_{loc}(\mathbb R^d)$, $1 \leq i,j \leq d$, and $\mathbf{H}=(h_1, \dots, h_d) \in L_{loc}^p(\mathbb R^d, \mathbb R^d)$ for some $p\in (d,\infty)$,
\end{itemize}
we can achieve analogous results to those derived in this book. Regarding an analytic approach to a class of degenerate It\^{o}-SDEs allowed to have discontinuous coefficients, a systematic study was conducted in \cite{LT19de}. Further studies to relax the assumptions of \cite{LT19de} are required. \\ \\
{\bf 3. Extending the theory of symmetric Dirichlet forms to non-symmetric cases}\\ \\
In the general framework of symmetric Dirichlet forms, many results in stochastic analysis have been derived in \cite{FOT}. However, in the general framework of non-symmetric and non-sectorial Dirichlet forms, it is necessary to confirm in detail whether or not the results of \cite{FOT} can be applied. In particular, the semigroup $(P_t)_{t>0}$ studied in this book is possibly non-symmetric with respect to $\mu$ and may not be an analytic semigroup in $L^2(\mathbb R^d, \mu)$, hence the corresponding Dirichlet form is in general non-symmetric and non-sectorial. The absolute continuity condition of $\mathbb M$, i.e. $P_t(x, dy) \ll \mu$ for each $x \in \mathbb R^d$ and $t>0$, is crucially used in \cite{FOT} to strengthen results that are valid up to a capacity zero set, to results that hold for every (starting) point in $\mathbb R^d$. In our case, under the assumption {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1}, the absolute continuity condition of $\mathbb M$ is fulfilled, so that we expect to derive similar results, related to every starting point in $\mathbb R^d$, such as those in \cite{FOT}. For instance, adapting the proof of \cite[Theorem 4.7.3]{FOT}, we expect to obtain the following result under the assumption that $\mathbb M$ is recurrent: given $x \in \mathbb R^d$ and $f \in L^1(\mathbb R^d, \mu)$ with $f \in L^{\infty}(B_{r}(x))$ for some $r>0$, it holds
$$
\lim_{t \rightarrow \infty}\frac{1}{t} \int_0^t f(X_s) ds = c_f,\qquad \text{$\mathbb P_x$-a.s},
$$
where $c_f = \frac{1}{\mu(\mathbb R^d)} \int_{\mathbb R^d}f d\mu$ if $\mu(\mathbb R^d)<\infty$ and $c_f = 0$ if $\mu(\mathbb R^d)=\infty$. Concretely, under the assumption of Theorem \ref{theo:3.3.8} or Proposition \ref{prop:3.3.12}, we may obtain that $\mu$ is not only a finite invariant measure but also that for any $x \in \mathbb R^d$ and $f \in L^{\infty}(\mathbb R^d, \mu)$ it holds
$$
\lim_{t \rightarrow \infty} \frac{1}{t}\int_0^t f(X_s) ds =\frac{1}{\mu(\mathbb R^d)} \int_{\mathbb R^d}f d\mu, \qquad \text{$\mathbb P_x$-a.s.}
$$
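The ergodic limit above can be illustrated numerically. The following sketch (an illustrative choice, not an example from this monograph) applies an Euler--Maruyama scheme to the Ornstein--Uhlenbeck SDE $dX_t=-X_t\,dt+dW_t$, whose unique invariant probability measure is $m=N(0,\frac12)$, and compares the time average of $f(X_s)$ for $f(x)=x^2$ with $\int_{\mathbb R} f\, dm=\frac12$:

```python
# Sketch: Euler--Maruyama time average for the OU process dX = -X dt + dW,
# illustrating (1/t) int_0^t f(X_s) ds -> int f dm with m = N(0, 1/2).
# The SDE and the choice f(x) = x^2 are illustrative, not from the text.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-2, 200_000        # time horizon t = n_steps * dt = 2000
sqdt = np.sqrt(dt)
x, acc = 0.0, 0.0
for _ in range(n_steps):
    x += -x * dt + sqdt * rng.standard_normal()
    acc += x * x * dt               # accumulate int_0^t X_s^2 ds

time_average = acc / (n_steps * dt)
print(time_average)                 # close to int x^2 dm = 1/2
```

The sketch only illustrates the limiting behavior; the rigorous statement of course rests on the recurrence and ergodicity criteria discussed above.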
\\
{\bf 4. Further exploring infinitesimally invariant measures using numerical approximations
}\\ \\
In this book, the existence of an infinitesimally invariant measure $\rho dx$ for $(L, C_0^{\infty}(\mathbb R^d))$ whose coefficients satisfy condition {\bf (a)} of Section \ref{subsec:2.2.1} follows from Theorem \ref{theo:2.2.7}. In addition, from Theorem \ref{theo:2.2.7} we know that $\rho$ has the local regularity properties $\rho \in H^{1,p}_{loc}(\mathbb R^d) \cap C(\mathbb R^d)$ for some $p \in (d, \infty)$ and $\rho(x)>0$ for all $x \in \mathbb R^d$. However, we do not know the concrete behavior of $\rho$ for sufficiently large $\|x\|$. Of course, we can start with an explicitly given $\rho$ and consider a partial differential operator whose infinitesimally invariant measure is $\rho dx$ as in Remark \ref{rem:2.2.4}. But this approach is restrictive in that it may not deal with arbitrary partial differential operators. Indeed, having concrete information about $\rho$ is important since in the
Krylov-type estimate of Theorem \ref{theo:3.3}, the product of the constants in front of the norm of $f$ in \eqref{kryest1}, \eqref{kryest2} depends on $\rho$. In addition, a certain volume growth condition on $\mu$ is required for the conservativeness and recurrence criteria in Propositions \ref{prop:3.2.9}, \ref{cor:3.3.2.6}, and in Theorem \ref{theo:3.3.8}
the asymptotic behavior of $P_t f$ as $t \rightarrow \infty$ is determined by $\frac{1}{\mu(\mathbb R^d)}\int_{\mathbb R^d} f d\mu$.
Recently, it was shown in \cite{LT21in} that if $\mathbb M$ is recurrent and {\bf (a$^{\prime}$)} of Section \ref{subsec:2.2.1} is assumed, then an infinitesimally invariant measure for $(L, C_0^{\infty}(\mathbb R^d))$ is unique up to a multiplicative constant. Therefore, in the case where $\mathbb M$ is recurrent, if one can explicitly find an infinitesimally invariant measure $\mu=\rho dx$ for $(L, C_0^{\infty}(\mathbb R^d))$, or if one can approximate $\rho$ with controlled error by numerically solving the elliptic PDE of divergence type \eqref{eq:2.2.0a}, this would be a useful supplement to this book. \\ \\ \\
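In dimension one such a numerical approximation is easy to sketch: for $A=id$ and $\mathbf{G}=\nabla \phi$ (the symmetric situation of Example \ref{ex:3.4.9}(v)) the density $\rho=e^{2\phi}$ solves the zero-flux reduction $\rho'=2g\rho$ of the divergence-type equation, which can be integrated by a standard ODE solver and compared with the explicit solution. The concrete $\phi(x)=-x^2/2$ below is an illustrative choice, not taken from the text:

```python
# Sketch: numerically approximate the invariant density rho in d = 1 for
# L f = (1/2) f'' + g f' with g = phi' and phi(x) = -x^2/2 (an illustrative
# choice), by integrating the zero-flux ODE rho' = 2 g rho with rho(0) = 1.
# The exact solution is rho(x) = exp(2 phi(x)) = exp(-x^2).
import numpy as np
from scipy.integrate import solve_ivp

g = lambda x: -x                    # g = phi' for phi(x) = -x^2/2

sol = solve_ivp(lambda x, rho: 2.0 * g(x) * rho, (0.0, 2.0), [1.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
xs = np.linspace(0.0, 2.0, 21)
max_err = np.max(np.abs(sol.sol(xs)[0] - np.exp(-xs**2)))
print(max_err)                      # small (roughly at solver tolerance)
```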
{\bf 5. Uniqueness and stability of classical solutions to the Cauchy problem}\\ \\
Consider the Cauchy problem
\begin{equation} \label{clascauchypro}
\partial_t u_f = \frac12 \sum_{i,j=1}^d a_{ij} \partial_{ij} u_f +\sum_{i=1}^d g_i \partial_i u_f \ \ \text{ in \ $\mathbb R^d\times (0,\infty)$,}\quad u_f(\cdot,0) = f\ \ \text{ in \ $\mathbb R^d$}.
\end{equation}
For $f \in C_b(\mathbb R^d)$, $u_f$ is said to be a {\it classical solution} to \eqref{clascauchypro} if $u_f \in C^{2,1}(\mathbb R^d \times (0, \infty)) \cap C_b(\mathbb R^d \times [0, \infty))$ and $u_f$ satisfies \eqref{clascauchypro}. There is an interesting connection between the uniqueness of classical solutions to \eqref{clascauchypro} and the existence of a global weak solution to \eqref{itosdeweakglo}. Under the assumption that $\mathbb M$ is non-explosive and that {\bf (a)} of Section \ref{subsec:2.2.1} and {\bf (b)} of Section \ref{subsec:3.1.1} hold, every classical solution $u_f$ to \eqref{clascauchypro} is represented as (cf. for instance the proof of \cite[Proposition 4.7]{LT21in})
\begin{equation} \label{eqclastrans}
u_f(x,t) = \mathbb E_x[f(X_t)]=P_t f(x), \quad \; \text{ for all $(x,t) \in \mathbb R^d \times [0, \infty)$}.
\end{equation}
Remarkably, under the assumptions of Theorem \ref{theo:3.3.8} (or those of Proposition \ref{prop:3.3.12}), every classical solution $u_f$ to \eqref{clascauchypro} enjoys by \eqref{eqclastrans} and Theorem \ref{theo:3.3.8}(iv) and its proof the following asymptotic behavior:
\begin{equation} \label{assymen1}
\lim_{t \rightarrow \infty} u_f(x,t) = \int_{\mathbb R^d} f dm\quad \text{for each $x \in \mathbb R^d$}
\end{equation}
and
\begin{equation} \label{assymen2}
\lim_{t \rightarrow \infty} u_f(\cdot, t) = \int_{\mathbb R^d} f dm, \quad \text{ in $L^r(\mathbb R^d, m)$}, \quad \text{for each $r \in [1, \infty)$,}
\end{equation}
where $m=\mu(\mathbb R^d)^{-1}\mu$ is the unique invariant probability measure for $\mathbb M$. Actually, in \cite[Chapter 2.2]{LB07}, under the assumption that the $a_{ij}$ and $g_i$ are locally H\"{o}lder continuous of order $\alpha \in (0,1)$ for all $1 \leq i,j \leq d$ and that $A$ is locally uniformly strictly elliptic, it is shown that there exists a classical solution $u_f\in C_b(\mathbb R^d \times [0, \infty)) \cap C^{2+\alpha, 1+\frac{\alpha}{2}}(\mathbb R^d \times (0, \infty))$ to \eqref{clascauchypro}. Therefore, under the assumption {\bf (a$^{\prime}$)} of Section \ref{subsec:2.2.1} and that the $g_i$ are locally H\"{o}lder continuous of order $\alpha \in (0,1)$ for any $1 \leq i \leq d$, the classical solution $u_f$ to \eqref{clascauchypro} induced by \cite[Chapter 2.2]{LB07} satisfies \eqref{eqclastrans} and enjoys the asymptotic behavior \eqref{assymen1} and \eqref{assymen2}.
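For a simple operator with smooth coefficients, the representation \eqref{eqclastrans} and the limit \eqref{assymen1} can be checked in closed form. The sketch below (an illustrative choice, not an example from the text) uses the Ornstein--Uhlenbeck operator $Lf=\frac12 f''-xf'$, whose transition kernel is Gaussian with mean $xe^{-t}$ and variance $\frac12(1-e^{-2t})$, and $f(x)=x^2$; then $u_f(x,t)=x^2e^{-2t}+\frac12(1-e^{-2t}) \rightarrow \frac12 = \int_{\mathbb R} f\, dm$ with $m=N(0,\frac12)$:

```python
# Sketch: u_f(x,t) = E_x[f(X_t)] for the OU process and f(x) = x^2, computed
# from the Gaussian transition kernel (mean x e^{-t}, variance (1-e^{-2t})/2);
# u_f(x,t) -> int f dm = 1/2 as t -> infinity, for every starting point x.
import numpy as np

def u_f(x, t):
    mean = x * np.exp(-t)
    var = (1.0 - np.exp(-2.0 * t)) / 2.0
    return mean**2 + var            # E[X_t^2] = mean^2 + variance

print(u_f(3.0, 0.0))   # = f(3) = 9.0, the initial condition
print(u_f(3.0, 20.0))  # close to 1/2, independently of x
```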
\subsection*{Notations and conventions}
\addcontentsline{toc}{section}{Notations and conventions}
\begin{itemize}
\item[]
\item[] \centerline{\bf Vector spaces and norms}
\item[$\|\cdot\|$]{the Euclidean norm on the $d$-dimensional Euclidean space $\mathbb R^d$}
\item[$\langle\cdot,\cdot\rangle$]{the Euclidean inner product in $\mathbb R^d$}
\item[$|\cdot |$]{the absolute value in $\mathbb R$}
\item[$\| \cdot \|_{\mathcal{B}}$]{the norm associated with a Banach space $\mathcal{B}$}
\item[$\mathcal{B}'$]{the dual space of a Banach space $\mathcal{B}$}
\item[]
\item[] \centerline{\bf Sets and set operations}
\item[$\mathbb R^d$]{the $d$-dimensional Euclidean space}
\item[$\mathbb R^d_{\Delta}$]{the one-point compactification of $\mathbb R^d$ with the point at infinity \lq\lq$\Delta$\rq\rq}
\item[$(\mathbb R^d_{\Delta})^{S}$]{set of all functions from $S$ to $\mathbb R^d_{\Delta}$, where $S \subset [0,\infty)$}
\item[$\overline{V}$]{the closure of $V\subset \mathbb R^d$}
\item[$B_r(x)$]{for $x\in \mathbb R^d$, $r>0$, defined as $\{y\in \mathbb R^d : \|x-y\|<r\}$}
\item[$\overline{B}_r(x)$]{defined as $\{y\in \mathbb R^d : \|x-y\|\le r\}$}
\item[$B_r$]{short for $B_r(0)$}
\item[$R_{x}(r)$]{the open cube in $\mathbb R^d$ with center $x \in \mathbb R^d$ and edge length $r>0$}
\item[$\overline{R}_{x}(r)$]{the closure of $R_{x}(r)$}
\item[$A+B$]{defined as $\{a+b: a\in A, b\in B\}$, for sets $A,B$ with an addition operation}
\item[]
\item[] \centerline{\bf Measures and $\sigma$-algebras}
\item[]{In this monograph, any measure is always non-zero and positive and if a measure is defined on a subset of $\mathbb R^d$, then it is a Borel measure, i.e. defined on the Borel subsets.} \\
\item[$\mu = \rho \, dx$]{denotes the infinitesimally invariant measure (see \eqref{eq:2.1.4}, Theorem \ref{theo:2.2.7} and Remark \ref{rem:2.2.4})}
\item[$dx$]{the Lebesgue measure on $\mathcal{B}(\mathbb R^d)$}
\item[$dt$]{the Lebesgue measure on $\mathcal{B}(\mathbb R)$}
\item[$\mathcal{B}(\mathbb R^d)$]{the Borel subsets of $\mathbb R^d$ or the space of Borel measurable functions $f:\mathbb R^d\to \mathbb R$}
\item[$\mathcal{B}(\mathbb{R}^d_{\Delta})$]{defined as $\{A\subset \mathbb{R}^d_{\Delta} : A\in \mathcal{B}(\mathbb{R}^d) \text{ or } A=A_0\cup\{\Delta\}, \ A_0\in \mathcal{B}(\mathbb{R}^d)\}$}
\item[$\mathcal{B}(X)$]{smallest $\sigma$-algebra containing the open sets of a topological space $X$}
\item[a.e.]{almost everywhere}
\item[supp$(\nu)$]{the support of a measure $\nu$ on $\mathbb R^d$}
\item[supp$(u)$]{for a measurable function $u:\mathbb R^d\to \mathbb R$ defined as supp$(|u|dx)$}
\item[$\delta_x$]{Dirac measure at $x \in \mathbb R^d_{\Delta}$}
\item[$P_t(x, dy)$]{the sub-probability measure defined by $P_t(x, A) = P_t 1_A (x)$, $A \in \mathcal{B}(\mathbb R^d), \\
(x,t) \in \mathbb R^d \times (0, \infty)$ (see Proposition \ref{prop:3.1.1})}
\item[]
\item[] \centerline{\bf Derivatives of functions, vector fields}
\item[$\partial_t f$]{(weak) partial derivative in the time variable $t$}
\item[$\partial_i f$]{(weak) partial derivative in the $i$-th spatial coordinate}
\item[$\nabla f$]{(weak) spatial gradient, $\nabla f:=(\partial_1 f, \ldots, \partial_d f)$}
\item[$\partial_{ij}f$]{second-order (weak) partial derivatives, $\partial_{ij} f:=\partial_i \partial_j f$}
\item[$\nabla^2 f$]{(weak) Hessian matrix, $(\nabla^2 f)=(\partial_{ij}f)_{1 \leq i,j \leq d}$}
\item[$\Delta f$]{(weak) Laplacian, $\Delta f=\sum_{i=1}^{d}\partial_{ii} f$}
\item[$\text{div}\mathbf{F}$]{(weak) divergence of the vector field $\mathbf{F}=(f_1,\dots,f_d)$, defined as $\sum_{i=1}^d\partial_i f_i$}
\item[$(\nabla B)_i$]{for $1 \leq i \leq d$ and a matrix $B=(b_{ij})_{1 \leq i,j \leq d}$ of functions, $(\nabla B)_i$ is the divergence of the $i$-th row of $B$, i.e. defined as $\sum_{j=1}^d\partial_j b_{ij}$}
\item[$\nabla B$]{defined as $((\nabla B)_1, \ldots, (\nabla B)_d)$}
\item[$B^T$]{for a matrix $B$, the transposed matrix is denoted by $B^T$}
\item[$\text{trace}(B)$]{trace of a matrix of functions $B=(b_{ij})_{1 \leq i,j \leq d}$, $\text{trace}(B)=\sum_{i=1}^{d}b_{ii}$}
\item[$A$]{diffusion matrix $A = (a_{ij})_{1 \leq i,j \leq d}$}
\item[$\mathbf{G}$]{in Section \ref{sec2.1} the drift $\mathbf{G}$ satisfies $\mathbf{G} = (g_1 , \ldots , g_d ) \in L^2_{loc}(\mathbb R^d, \mathbb R^d, \mu)$ (cf. \eqref{eq:2.1.3}). From Section \ref{subsec:2.2.1} on the drift satisfies $\mathbf{G} = (g_1 , \ldots , g_d )=\frac12 \nabla(A+C^T)+\mathbf{H}$
(see assumption {\bf (a)} in Section \ref{subsec:2.2.1} and \eqref{form of G}, but also Remark \ref{rem:2.2.7})}
\item[$\beta^{\rho,B}$]{logarithmic derivative $\beta^{\rho, B} = (\beta_1^{\rho, B}, \ldots , \beta_d^{\rho, B})$ (of $\rho$ associated with $B=(b_{ij})_{1 \leq i,j \leq d}$), where $\beta_i^{\rho, B} = \frac 12 \left( \sum_{j=1}^d \partial_j b_{ij} + b_{ij} \frac{\partial_j \rho}{\rho}\right)$, i.e. $\beta^{\rho, B} = \frac12 \nabla B+ \frac{1}{2\rho} B \nabla \rho$ (see \eqref{eq:2.1.5} and Remark \ref{rem:2.2.4})}
\item[$\mathbf{B}$]{$\mathbf{B} = \mathbf{G} - \beta^{\rho, A}$, divergence zero vector field with respect to $\mu$ (see \eqref{eq:2.1.5a}, \eqref{eq:2.1.6})}
\item[]
\item[] \centerline{\bf Function spaces and norms}
\item[]We always choose the continuous version of a function, if it has one.\\
\item[$q$]{the real number $q$ is given throughout by
$$
q:= \frac{pd}{p+d}
$$
for an arbitrarily chosen real number $p\in (d, \infty)$}
\item[$\mathcal{B}(\mathbb R^d)$]{the Borel subsets of $\mathbb R^d$ or the space of Borel measurable functions $f:\mathbb R^d\to \mathbb R$}
\item[$\mathcal{B}^+(\mathbb R^d)$]{defined as $\{f \in \mathcal{B}(\mathbb R^d): f(x) \geq 0 \text{ for all } x \in \mathbb R^d \}$}
\item[$\mathcal{B}_b(\mathbb R^d)$]{defined as $\{f \in \mathcal{B}(\mathbb R^d): \text{ $f$ is pointwise uniformly bounded} \}$}
\item[$\mathcal{B}(\mathbb R^d)_0$]{defined as $\{f \in \mathcal{B}(\mathbb R^d): \text{supp}(|f| dx) \text{ is a compact subset of } \mathbb R^d \}$}
\item[$\mathcal{B}_b^+(\mathbb R^d)$]{defined as $\mathcal{B}^+(\mathbb R^d) \cap \mathcal{B}_b(\mathbb R^d)$}
\item[$\mathcal{B}_b(\mathbb R^d)_0$]{defined as $\mathcal{B}_b(\mathbb R^d) \cap \mathcal{B}(\mathbb R^d)_0$}
\item[$L^r(U, \nu)$]{the space of $r$-fold integrable functions on $U$ with respect to $\nu$, equipped with the norm $\|f\|_{L^r(U, \nu)}:=(\int_U |f|^r d\nu)^{1/r}$, where $\nu$ is a measure on $\mathbb R^d$, $r \in [1, \infty)$ and $U \in \mathcal{B}(\mathbb R^d)$}
\item[$(f,g)_{L^2(U, \mu)}$]{inner product on $L^2 (U , \mu )$, defined as $\int_U f g d\mu$, $f, g \in L^2(U,\mu)$, where $U \in \mathcal{B}(\mathbb R^d)$}
\item[$L^{\infty}(U, \nu)$]{the space of $\nu$-a.e. bounded measurable functions on $U$, equipped with the norm $\|f\|_{L^{\infty}(U, \nu)}:= \inf\{c>0: \nu(\{|f|>c\})=0 \}$, where $U \in \mathcal{B}(\mathbb R^d)$}
\item[$\mathcal{A}_0$, $\mathcal{A}_b$, $\mathcal{A}_{0,b}$]
If $\mathcal{A} \subset L^s (V ,\mu )$ is an arbitrary subspace, where $V$ is an open subset of $\mathbb R^d$, $s \in [1, \infty]$, denote by $\mathcal{A}_0$ the subspace of
all elements $u\in \mathcal{A}$ such that $\text{supp}(|u|\mu )$ is a compact subset of $V$, and by $\mathcal{A}_b$ the
subspace of all essentially bounded elements in $\mathcal{A}$, and $\mathcal{A}_{0,b}:=\mathcal{A}_0\cap \mathcal{A}_b$
\item[$L^r(U)$]{defined as $L^r(U, dx)$, $r \in [1, \infty]$, where $U \in \mathcal{B}(\mathbb R^d)$}
\item[$L_{loc}^r(\mathbb R^d, \nu)$]{defined as $\{ f \in \mathcal{B}(\mathbb R^d): f1_K \in L^r(\mathbb R^d, \nu)$ \text{ for any compact subset}}
\item[]{$K$ \text{of } $\mathbb R^d \}$, $r \in [1, \infty]$}
\item[$L^r_{loc}(\mathbb R^d)$]{defined as $L^r_{loc}(\mathbb R^d, dx)$, where $r \in [1, \infty]$}
\item[$L^r(U, \mathbb R^d, \nu)$]{defined as $\{\mathbf{F}=(f_1,\ldots,f_d) \in \mathcal{B}(\mathbb R^d)^d: \|\mathbf{F}\| \in L^r(U, \nu) \}$, equipped with the norm $\|\mathbf{F}\|_{L^r(U, \mathbb R^d, \nu)}:=\| \|\mathbf{F}\| \|_{L^r(U, \nu)}$, where $r \in [1, \infty]$ and $U \in \mathcal{B}(\mathbb R^d)$}
\item[$L^r(U, \mathbb R^d)$]{defined as $L^r(U, \mathbb R^d, dx)$, where $r \in [1, \infty]$ and $U \in \mathcal{B}(\mathbb R^d)$}
\item[$L_{loc}^r(\mathbb R^d, \mathbb R^d, \nu)$]{defined as $\{ \mathbf{F} =(f_1,\ldots,f_d) \in \mathcal{B}(\mathbb R^d)^d: 1_K \mathbf{F} \in L^r(\mathbb R^d, \mathbb R^d, \nu)$}
\item[]{\text{for any compact subset} $K$ \text{of } $\mathbb R^d \}$, $r \in [1, \infty]$ }
\item[$L_{loc}^r(\mathbb R^d, \mathbb R^d)$]{defined as $L_{loc}^r(\mathbb R^d, \mathbb R^d,dx)$, where $r \in [1, \infty]$}
\item[$C([0, \infty), \mathbb R^d)$]
{the space of $\mathbb R^d$-valued continuous functions on $[0,\infty)$ equipped with the metric $d$, where for $\omega, \omega' \in C([0, \infty), \mathbb R^d)$
$$
d(\omega, \omega')=\sum_{n=1}^{\infty} 2^{-n} \left(1 \wedge \sup_{t \in [0,n]} |\omega(t)-\omega'(t)| \right)
$$
}
\item[$C([0, \infty), \mathbb R_{\Delta}^d)$]
{the space of $\mathbb R_{\Delta}^d$-valued continuous functions on $[0,\infty)$}
\item[$C(U)$]{the space of continuous functions on $U$, where $U \in \mathcal{B}(\mathbb R^d)$}
\item[$C_b(U)$]{the space of bounded continuous functions on $U$ equipped with the norm $\|f\|_{C_b(U)}:=\sup_{U} |f|$, where $U \in \mathcal{B}(\mathbb R^d)$}
\item[$C^k(U)$]{the set of $k$-times continuously differentiable functions on $U$, where $k \in \mathbb N \cup \{ \infty \}$ and $U$ is an open subset of $\mathbb R^d$}
\item[$C^k_b(U)$]{defined as $C^k(U)\cap C_b(U)$, where $k \in \mathbb N \cup \{ \infty \}$ and $U$ is an open subset of $\mathbb R^d$}
\item[$C_0(U)$]{defined as $\{f \in C(U): \text{supp}(|f|dx) \text{ is a compact subset of } U \}$, where $U$ is an open subset of $\mathbb R^d$}
\item[$C_0^k(U)$]{defined as $C^k(U)\cap C_0(U)$, $k \in \mathbb N \cup \{ \infty \}$, $U\subset \mathbb R^d$ open}
\item[$C_{\infty}(\mathbb R^d)$]{defined as $\{f \in C_b(\mathbb R^d): \lim_{\|x\| \rightarrow \infty} f(x)= 0 \}$ equipped with the norm $\|f\|_{C_b(\mathbb R^d)}$}
\item[$C^{0,\beta}(\overline{V})$]{defined as $ \{f \in C(\overline{V}): \text{h\"ol}_\beta (f,\overline{V}) < \infty \}$, where $V$ is an open subset of $\mathbb R^d$, $\beta \in (0,1)$ and
$$
\text{h\"ol}_\beta (f,\overline{V}) := \sup\left\{\frac{|f(x)-f(y)|}{\|x-y\|^\beta}: x,y \in \overline{V}, x\not=y\right\}
$$
equipped with the norm
$$
\|f\|_{C^{0,\beta}(\overline{V})} := \sup_{x \in \overline{V}} |f(x)| + \mathrm{\text{h\"{o}l}}_\beta (f,\overline{V})
$$
}
\item[$C^{\gamma; \frac{\gamma}{2}}(\overline{Q})$]{defined as $\{f \in C(\overline{Q}): \text{ph\"ol}_\gamma (f,\overline{Q}) < \infty\}$, where $Q$ is an open subset of $\mathbb R^d \times \mathbb R$, $\gamma \in (0,1)$ and
$$
\text{ph\"ol}_\gamma (f,\overline{Q}) := \sup\left\{\frac{|f(x,t)-f(y,s)|}{\left(\|x-y \|+\sqrt{|t-s|}\right)^{\gamma}}: \;(x,t), (y,s) \in \overline{Q}, \;(x,t) \not=(y,s)\right \}
$$
equipped with the norm
$$
\|f\|_{C^{\gamma; \frac{\gamma}{2}}(\overline{Q})} := \sup_{(x,t) \in \overline{Q}} |f(x,t)| +\text{ph\"ol}_\gamma(f,\overline{Q})
$$
}
\item[$H^{1,r}(U)$]{defined as $\{ f \in L^r(U): \partial_i f \in L^r(U), \text{ for all $i=1,\ldots, d$} \}$ equipped with the norm $\|f\|_{H^{1,r}(U)}:=(\|f\|^r_{L^r(U)}+\sum_{i=1}^d\|\partial_i f \|_{L^r(U)}^r)^{1/r}$, if $r \in [1, \infty)$ and $\|f\|_{H^{1,\infty}(U)}:=\|f\|_{L^{\infty}(U)}+\sum_{i=1}^d \|\partial_i f\|_{L^{\infty}(U)}$, if $r=\infty$, where $U$ is an open subset of $\mathbb R^d$}
\item[$H^{1,r}_0(U)$]{the closure of $C_0^{\infty}(U)$ in $H^{1,r}(U)$, where $r \in [1, \infty)$ and $U$ is an open subset of $\mathbb R^d$}
\item[$H_{loc}^{1,r}(\mathbb R^d)$]{defined as $\{f \in L^r_{loc}(\mathbb R^d): f|_B \in H^{1,r}(B) \text{ for any open ball $B$ in $\mathbb R^d$} \}=\{f \in L^r_{loc}(\mathbb R^d): f\chi\in H^{1,r}(\mathbb R^d ) \text{ for any } \chi\in C_0^\infty (\mathbb R^d ) \}$}, where $r\in [1,\infty]$
\item[$H_0^{1,2}(V, \mu)$]{ the closure of
$C_0^\infty (V)$ in $L^2 (V,\mu )$ w.r.t. the norm
$$
\|u\|_{H_0^{1,2} (V,\mu )}
:= \left( \int_V u^2\, d\mu + \int_V \|\nabla u\|^2\, d\mu \right)^{\frac 12},
$$
where $V$ is an open subset of $\mathbb R^d$}
\item[$H^{1,2}_{loc}(V, \mu)$]{the space of all elements $u$ such that
$u\chi\in H_0^{1,2}(V,\mu)$ for all $\chi\in C_0^\infty (V)$, where $V$ is an open subset of $\mathbb R^d$}
\item[]
\item[] \centerline{\bf Operators}
\item[$id$]{identity operator on a given space}
\item[$L^A f$]{defined as $\frac12 \text{trace}(A \nabla^2 f)=\frac12\sum_{i,j=1}^{d} a_{ij}\partial_{ij} f$, $f \in C^2(\mathbb R^d)$ (see \eqref{eq:2.1.3bis})}
\item[$Lf$]{defined as $L^A f + \langle \mathbf{G}, \nabla f \rangle=\frac 12 \sum_{i,j=1}^d a_{ij} \partial_{ij} f+ \sum_{i=1}^d g_i\partial_i f$, $f\in C^{2}(\mathbb R^d)$ (see \eqref{eq:2.1.3bis2}) and as $L^0 u + \langle \mathbf{B}, \nabla u\rangle, u\in D(L^0)_{0,b}$ (see \eqref{defLfirst}). The definitions are consistent, since they coincide on $D(L^0)_{0,b}\cap C^{2}(\mathbb R^d)=C^{2}_0(\mathbb R^d)$ by Remark \ref{cnulltwocoincide}}
\item[$L'f$]{defined as $ L^A f+\langle 2 \beta^{\rho, A}- \mathbf{G}, \nabla f \rangle= L^A f + \langle \beta^{\rho, A} - \mathbf{B},
\nabla f\rangle$, $f\in C^{2}(\mathbb R^d)$ (see \eqref{eq:2.1.3bis2'}) and as $L^0 u - \langle \mathbf{B}, \nabla u\rangle, u\in D(L^0)_{0,b}$ (see \eqref{defL'first} and \eqref{eq:2.1.5a}, \eqref{eq:2.1.6}). The definitions are consistent, since they coincide on $D(L^0)_{0,b}\cap C^{2}(\mathbb R^d)=C^{2}_0(\mathbb R^d)$ by Remark \ref{cnulltwocoincideprime}}
\item[$(\mathcal{E}^{0,V}, D(\mathcal{E}^{0,V}))$]
symmetric Dirichlet form defined as the closure of
$$
\mathcal{E}^{0,V}(u,v) = \frac12 \int_{V} \langle A \nabla u, \nabla v \rangle d\mu, \quad u,v \in C_0^{\infty}(V)
$$
in $L^2(V, \mu)$, where $V$ is an open subset of $\mathbb R^d$. (If $V$ is relatively compact, then $D(\mathcal{E}^{0,V})=H^{1,2}_0(V, \mu)$.)
\item[$(L^{0,V}, D(L^{0,V}))$]{the generator associated with $(\mathcal{E}^{0,V}, D(\mathcal{E}^{0,V}))$}
\item[$(T_t^{0,V})_{t>0}$]{the sub-Markovian $C_0$-semigroup of contractions on $L^2(V, \mu)$ generated by $(L^{0,V}, D(L^{0,V}))$}
\item[$(\mathcal{E}^{0}, D(\mathcal{E}^0))$]{defined as $(\mathcal{E}^{0, \mathbb R^d}, D(\mathcal{E}^{0, \mathbb R^d}))$}
\item[$\mathcal{E}^{0}_{\alpha}(\cdot,\cdot)$]{defined as $\mathcal E^0(\cdot,\cdot)+\alpha(\cdot,\cdot)_{L^2(\mathbb R^d,\mu)}\, , \ \alpha>0$}
\item[$(L^{0}, D(L^{0}))$]{the generator associated with $(\mathcal{E}^{0}, D(\mathcal{E}^{0}))$ (see \eqref{DefGeneratorDF})}
\item[$(T^{0}_t)_{t>0}$]{the sub-Markovian $C_0$-semigroup of contractions on $L^2(\mathbb R^d, \mu)$ generated by $(L^{0}, D(L^{0}))$ }
\item[$(\overline{L}^V, D(\overline{L}^V))$]{the $L^1$-closed extension of $(L, C_0^{\infty}(V))$ generating the sub-Markovian
$C_0$-semigroup $(\overline{T}^V_t)_{t>0}$ on $L^1 (V,\mu )$}, where $V$ is a bounded open subset of $\mathbb R^d$ (see Proposition \ref{prop:2.1})
\item[$(\overline{L}^{V, \prime}, D(\overline{L}^{V, \prime}))$]{the $L^1$-closed extension of $(L^{\prime}, C_0^{\infty}(V))$ generating the sub-Markovian $C_0$-semigroup $(\overline{T}^{V, \prime}_t)_{t>0}$ on $L^1 (V,\mu )$}, where $V$ is a bounded open subset of $\mathbb R^d$ (see Remark \ref{rem2.1.3})
\item[$(\overline{T}^V_t)_{t>0}$]{the sub-Markovian $C_0$-semigroup of contractions on $L^1(V, \mu)$ generated by $(\overline{L}^V, D(\overline{L}^V))$}
\item[$(\overline{T}^{V, \prime}_t)_{t>0}$]{the sub-Markovian $C_0$-semigroup of contractions on $L^1(V, \mu)$ generated by $(\overline{L}^{V, \prime}, D(\overline{L}^{V, \prime}))$}
\item[$(\overline{G}^V_{\alpha})_{\alpha>0}$]{the sub-Markovian $C_0$-resolvent of contractions on $L^1(V, \mu)$ generated by $(\overline{L}^V, D(\overline{L}^V))$}
\item[$(\overline{G}^{V, \prime}_\alpha)_{\alpha>0}$]{the sub-Markovian $C_0$-resolvent of contractions on $L^1(V, \mu)$ generated by $(\overline{L}^{V, \prime}, D(\overline{L}^{V, \prime}))$ }
\item[($\overline{L}, D(\overline{L}))$]{the $L^1$-closed extension of $(L, C_0^{\infty}(\mathbb R^d))$ generating the sub-Markovian $C_0$-semigroup $(\overline{T}_t)_{t>0}$ on $L^1 (\mathbb R^d,\mu )$} (see Theorem \ref{theorem2.1.5})
\item[($\overline{L}', D(\overline{L}'))$]{the $L^1$-closed extension of $(L', C_0^{\infty}(\mathbb R^d))$ generating the sub-Markovian $C_0$-semigroup $(\overline{T}'_t)_{t>0}$ on $L^1 (\mathbb R^d,\mu )$ (see Remark \ref{remark2.1.7}(ii))}
\item[$(\overline{T}_t)_{t>0}$]{the sub-Markovian $C_0$-semigroup of contractions on $L^1(\mathbb R^d, \mu)$ generated by $(\overline{L}, D(\overline{L}))$}
\item[$(\overline{T}'_t)_{t>0}$]{the sub-Markovian $C_0$-semigroup of contractions on $L^1(\mathbb R^d, \mu)$ generated by $(\overline{L}', D(\overline{L}'))$}
\item[$(\overline{G}_{\alpha})_{\alpha>0}$]{the sub-Markovian $C_0$-resolvent of contractions on $L^1(\mathbb R^d, \mu)$ generated by $(\overline{L}, D(\overline{L}))$}
\item[$(\overline{G}'_{\alpha})_{\alpha>0}$]{the sub-Markovian $C_0$-resolvent of contractions on $L^1(\mathbb R^d, \mu)$ generated by $(\overline{L}', D(\overline{L}'))$}
\item[$(T_t)_{t>0}$]{the semigroup corresponding to $(\overline{T}_t)_{t>0}$ on all $L^r(\mathbb R^d,\mu )$-spaces, $r\in [1,\infty]$ (cf. Definition \ref{definition2.1.7})}
\item[$(P_t)_{t>0}$]{the regularized semigroup of $(T_t)_{t>0}$ (cf. Proposition \ref{prop:3.1.1})}
\item[$(T'_t)_{t>0}$]{the semigroup corresponding to $(\overline{T}'_t)_{t>0}$ on all $L^r(\mathbb R^d,\mu )$-spaces, $r\in [1,\infty]$ (cf. Definition \ref{definition2.1.7})}
\item[$(G_{\alpha})_{\alpha>0}$]{the resolvent associated with $(T_t)_{t>0}$ on all $L^r(\mathbb R^d,\mu )$-spaces, $r \in [1,\infty]$}
\item[$(R_{\alpha})_{\alpha>0}$]{the regularized resolvent of $(G_{\alpha})_{\alpha>0}$ (cf. Proposition \ref{prop:3.1.2})}
\item[$(G_{\alpha}')_{\alpha>0}$]{the resolvent associated with $(T'_t)_{t>0}$ on all $L^r(\mathbb R^d,\mu )$-spaces, $r \in [1,\infty]$}
\item[$(L_r,D(L_r))$]{the generator of $(G_{\alpha})_{\alpha>0}$ on $L^r(\mathbb R^d,\mu )$, $r \in [1,\infty)$ (cf. Definition \ref{definition2.1.7})}
\item[$(L_r',D(L_r'))$]{the generator of $(G_{\alpha}')_{\alpha>0}$ on $L^r(\mathbb R^d,\mu )$, $r \in [1,\infty)$ (cf. Definition \ref{definition2.1.7})}
\item[]
\item[] \centerline{\bf Stochastic processes, stopping times and the like}
\item[$\mathbb M$]{the Hunt process $\mathbb M = (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, (X_t)_{t \ge 0}, (\mathbb{P}_x)_{x \in \mathbb R^d\cup \{\Delta\}} )$ whose transition semigroup is $(P_t)_{t \ge 0}$ (see Theorem \ref{th: 3.1.2} and also Theorem \ref{theo:3.1.4})}
\item[a.s.]{almost surely}
\item[$\mathbb E_x$, resp. $\tilde{\mathbb E}_x$]{expectation w.r.t. the probability measure $\mathbb P_x$, resp. $\tilde{\mathbb P}_x$}
\item[$\sigma(\widetilde{X}_s \mid s \in I)$]{smallest $\sigma$-algebra such that all $\widetilde{X}_s, s \in I$ are measurable, where $I \subset [0, \infty)$ with $I \in \mathcal{B}(\mathbb R)$}
\item[$\sigma(\mathcal{S})$]{the smallest $\sigma$-algebra which contains every set of some collection of sets $\mathcal{S}$}
\item[$\sigma_{A}$]{$\sigma_{A}:=\inf\{t>0\,:\, X_t\in A\}$, $A\in \mathcal{B}(\mathbb R^d)$}
\item[$\sigma_n$]{$\sigma_n:=\sigma_{\mathbb R^d\setminus B_n},n\ge 1$}
\item[$D_A$]{$D_A:=\inf\{t\ge0\,:\, X_t\in A\}$, $A\in \mathcal{B}(\mathbb R^d)$}
\item[$D_n$]{$D_n:=D_{\mathbb R^d\setminus B_n}$, $n\ge 1$}
\item[$L_A$] the last exit time from $A\in \mathcal{B}(\mathbb R^d)$, $L_A:=\sup \{ t\geq 0: X_t\in A\}$, $\sup \emptyset:=0$
\item[$\langle X\rangle_t$, $\langle X, Y \rangle_t$]{the quadratic variation up to time $t$ of a continuous stochastic process $(X_t)_{t \geq 0}$, resp. the covariation of two continuous stochastic processes $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$.}
\item[$\zeta$, resp. $\widetilde{\zeta}$]{lifetime of a stochastic process $(X_t)_{t \geq 0}$, resp. $(\widetilde{X}_t)_{t \geq 0}$ (see Theorem \ref{th: 3.1.2}), Definition \ref{def:3.1.1})}
\item[$\vartheta_t$]{the shift operator, i.e. $X_s\circ\vartheta_t=X_{s+t}$, $s,t\ge 0$}
\item[]
\item[] \centerline{\bf Miscellanea}
\item[$\mathbf{e}_1$]{$\mathbf{e}_1:=(1,0, \ldots, 0) \in \mathbb R^d$}
\item[w.r.t]{with respect to}
\item [$a \wedge b$]{minimum value of $a$ and $b$, $a \wedge b=\frac{|a+b|-|a-b|}{2}$}
\item[$a \vee b$]{maximum value of $a$ and $b$, $a \vee b=\frac{|a+b|+|a-b|}{2}$}
\item[$a^+$]{defined as $a \vee 0$}
\item[$a^-$]{defined as $(-a) \vee 0$}
\end{itemize}
\printindex
\addcontentsline{toc}{section}{Index}
\text{}\\
Haesung Lee\\
Department of Mathematics and Computer Science, \\
Korea Science Academy of KAIST, \\
105-47 Baegyanggwanmun-ro, Busanjin-gu,\\
Busan 47162, South Korea\\
E-mail: [email protected]\\ \\
Wilhelm Stannat\\
Institut f\"ur Mathematik\\
Technische Universit\"at Berlin\\
Stra\ss{}e des 17. Juni 136, \\
D-10623 Berlin, Germany \\
E-mail: [email protected]\\ \\
Gerald Trutnau\\
Department of Mathematical Sciences and \\
Research Institute of Mathematics of Seoul National University,\\
1 Gwanak-ro, Gwanak-gu \\
Seoul 08826, South Korea \\
E-mail: [email protected]\\ \\
\end{document}
\begin{document}
\title{Performance of Quantum Preprocessing under Phase Noise}
\begin{abstract}
Optical fiber transmission systems form the backbone of today's communication networks and will be of high importance for future networks as well. Among the prominent noise effects in optical fiber is phase noise, which is induced by the Kerr effect. This effect limits the data transmission capacity of these networks and incurs high processing load on the receiver. At the same time, quantum information processing techniques offer more efficient solutions but are believed to be inefficient in terms of size, power consumption and resistance to noise. Here we investigate the concept of an all-optical joint detection receiver. We show how it contributes to enabling higher baud-rates for optical transmission systems when used as a pre-processor, even under high levels of noise induced by the Kerr effect.
\end{abstract}
\begin{section}{Introduction}
The present and the future of our societies rely more and more on high-speed connectivity between a growing number of services. Advanced communication systems support the operation of everything from communication between machines to communication between humans. Real-time video services as well as emerging applications like telemedicine and connected cars will further increase the demand for connectivity, as can be anticipated from the Cisco annual internet report \cite{cisco}. At the same time, increasingly smaller wireless cells will put a tremendous load on the optical fiber backbone, where increasing data rates, reduced latency and lower power consumption are demanded simultaneously \cite{6Grequirements}.
In recent works \cite{6Gquantum} it has thus been speculated that quantum information processing (QIP) technologies should play a larger role in the development of the next generation of mobile networks. Quantum primitives such as squeezed light and entanglement at transmitter and receiver can significantly improve communication rates. The drawback of QIP, however, is that many solutions exist only on paper, with many mathematical constructs lacking their physical counterpart, while existing implementations are bulky and require excessive cooling to combat environmental noise, which makes them hard to use in practice.
In this work, we investigate a QIP technique, namely the concept of the joint detection receiver (JDR), which promises a practical realization. In the same way as Shannon's work led engineers to perform error correction over multiple received bits to reach superior performance, the work \cite{Holevo1998c,Giovannetti2015} of Holevo and follow-up research motivates performing error correction over multiple received \emph{pulses}. In practice, this latter operation is carried out by a JDR. Such a JDR can be thought of as operating fully in the optical domain, ultimately producing a bit which is handed over to a higher layer for processing. The first proposal for the design of a JDR was made in \cite{guha2011structured}. Due to its superior performance in the low photon number regime, this device has previously been seen as an optimal choice for deep space communications. While this is certainly true, the recent work \cite{noetzelrosati2022} pointed out that power-limited communication at high baud-rates also inevitably leads to a low number of received photons per pulse. Observed trends for baud-rates \cite{highBaudrateComms} let us conjecture that future systems ten to twenty years from now will likely operate at baud-rates from $300$GBd to well above $400$GBd. As techniques emerge which even allow the conversion of entire frequency bands (e.g. C- to O-band) \cite{cbandConverter}, there does not seem to be a natural limit to increasing baud-rates, but rather technological hurdles to be overcome. However, when increasing baud-rates under a power limit, current systems fall short of realizing any reasonable gain \cite{noetzelrosati2022}. In sharp contrast, systems utilizing the QIP technique of joint detection can be expected to benefit from the observed trend \cite{highBaudrateComms}.\\
Data transmission techniques utilizing optical fiber need to deal with fiber nonlinearities. In this domain, the recent work \cite{ludovicoWilde} derived the corresponding capacities. Since any realization of the QIP potential will have to rest on a practical design, we study here the performance limits of a practically implementable design based on phase shift keying, rather than information-theoretic bounds. Under any fixed power limit, high baud-rates will eventually induce the low photon numbers at which the JDR technology outperforms its classical analog, and in that regime it eventually becomes optimal to use, among all phase shift keying formats, the one with the lowest modulation order. Thus, in this work, we study binary phase shift keying (BPSK).\\
Our noise model is derived from the Kerr effect, which induces so-called phase noise and has been the subject of investigation, for example, in the recent work \cite{kunzParisBanaszekKerrMedium}.
In our analysis, we utilize this model in simulations and in the derivation of an analytical channel model.
\end{section}
\begin{section}{System Model and Notation}
Consider a channel with complex valued input and output
\begin{align}
Y = \tau\cdot X\cdot e^{\mathbbm{i}\Phi}, \quad X\in\mathbbm C,\ \Phi \in [0, 2\pi)
\end{align}
where $X$ is the signal, $\tau=e^{-a\cdot L}\in[0,1]$ the transmittivity, $a$ the attenuation coefficient, $L$ the fiber length in km, and $e^{\mathbbm{i}\Phi}$ a random phase noise term \cite{kramerSystemModel}.
When investigating the potential of a JDR for such a system, it is sufficient to replace $X$ by a coherent state $|X\rangle$, which is an element of the Fock space $\mathcal F$ \cite{banaszekQuantumLimits}. The sender encodes their message into coherent states $|\alpha\rangle=\exp(-|\alpha|^2/2)\sum_{n=0}^\infty\alpha^n/\sqrt{n!}|n\rangle$, where $\{\ket{n}\}_{n\in\mathbb N}$ is the photon number basis of $\mathcal F$. For binary phase shift keying (BPSK), the set of signal states is $S_1=\{\ket{\alpha},\ket{-\alpha}\}$. The signal energy per pulse is given by $E= \hbar \omega_0 |\alpha|^2$, with $\hbar \omega_0$ being the energy of a single photon at carrier frequency $\omega_0$. To use the joint detection concept discovered in \cite{guha2011structured} for BPSK signals one can construct a code book of $2\cdot n$ signals $\ket{v_k(s\alpha)}$, $k=1,\ldots,n$, $s=\pm1$, with code-words taking the form
\begin{equation}
\ket{v_k(\alpha)} = \otimes_{j=0}^{n-1} \ket{(H_n)_{j,k} \alpha},
\end{equation}
where the symmetric matrix $H_n$ is defined as
\begin{equation}
(H_n)_{j,k} = (-1)^{j\cdot k},\quad j \cdot k = \textstyle\sum_{t=0}^{\log_2 n - 1} j_t k_t,
\end{equation}
with $j \cdot k$ being the bitwise scalar product of the binary representations of $j,k = 0,...,n-1$ \cite{rosati2016}. The receiver employs a Hadamard transformation $\hat{U}_{Had}^{(n)}$, resulting in the state
\begin{align}
\hat{U}_{Had}^{(n)}\ket{v_k(\alpha)} = \ket{w_k(\alpha)} = \ket{\sqrt{n} \alpha}_k \left( \otimes_{j\neq k} \ket{0}_j \right).
\end{align}
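As a concrete illustration (our own sketch, not code from the paper; $n$, $\alpha$ and $k$ below are assumed example values), the construction of $H_n$ via the bitwise scalar product and the ideal, noiseless routing of a codeword into port $k$ can be checked numerically:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester-ordered Hadamard matrix: (H_n)_{j,k} = (-1)^{j.k},
    where j.k is the bitwise scalar product of j and k (Eq. (3))."""
    H = np.empty((n, n), dtype=int)
    for j in range(n):
        for k in range(n):
            H[j, k] = (-1) ** bin(j & k).count("1")
    return H

# Codeword |v_k(alpha)> of Eq. (2) as a vector of coherent amplitudes.
n, alpha, k = 8, 0.5, 3
v_k = hadamard(n)[:, k] * alpha

# On coherent amplitudes the receiver acts as H_n / sqrt(n): all the
# signal is routed into port k with amplitude sqrt(n)*alpha (Eq. (4)).
w_k = hadamard(n) @ v_k / np.sqrt(n)
```

All ports other than $k$ end up with zero amplitude, i.e. in the vacuum state, as stated above.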
As can be seen, all but one output mode of the Hadamard receiver are in the vacuum state, and the desired phase can be decoded from the $k$-th output mode in state $\ket{\sqrt{n}\alpha}$. In this work, we employ homodyne detection for this task. $\hat{U}_{Had}^{(n)}$
can be implemented using an array of beamsplitters. Each beamsplitter $U_{BS}$ transforms an incoming pair of coherent states as
\begin{align}\label{eqn:beamsplitter}
U_{BS}\ket{\alpha}\otimes\ket{\beta} = \ket{(\alpha+\beta)/\sqrt{2}}\otimes\ket{(\alpha-\beta)/\sqrt{2}}.
\end{align}
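The cascade can be sketched as a butterfly network of $\log_2 n$ beamsplitter layers acting on the vector of coherent amplitudes; the particular mode pairing below is one standard layout (our assumption, the text does not fix one):

```python
import numpy as np

def beamsplitter_network(amplitudes) -> np.ndarray:
    """Apply log2(n) layers of 50/50 beamsplitters, each acting as in
    Eq. (5) on a pair of coherent amplitudes; the full cascade realizes
    the Hadamard transform H_n / sqrt(n) on the amplitude vector."""
    a = np.array(amplitudes, dtype=complex)
    n = a.size
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):
            for m in range(start, start + h):
                x, y = a[m], a[m + h]
                a[m], a[m + h] = (x + y) / np.sqrt(2), (x - y) / np.sqrt(2)
        h *= 2
    return a
```

For $n=4$ and the codeword $0.3\cdot(1,-1,1,-1)$ (column $k=1$ of $H_4$), the network concentrates the amplitude $\sqrt{4}\cdot 0.3 = 0.6$ in port $1$ and leaves the other ports empty.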
The receiver's performance can be evaluated by using classical statistical tools applied to the complex numbers $\alpha, \beta$. To account for the Kerr effect, we consider phase noise. This leads to states of the form $\ket{\alpha e^{\mathbbm{i}\phi}}$ with $\phi$ a random phase.
For a sequence $\Phi^n:=(\phi_1,\ldots,\phi_n)$ of such phases, the output of the Hadamard receiver of order $n=2^K$ at port $k'$ given input $\nu_k(\alpha)$ is
\begin{equation}
\ket{\Lambda^n_{k,k'}(\alpha)}:=\ket{2^{-K/2}\alpha\left(\sum_{m=1}^n H_{k,m}H_{m,k'} e^{\mathbbm{i} \phi_m}\right)}.
\end{equation}
In order to quantify the distribution of the phases $\phi$, we consider the phase noise model derived in \cite{kunzParisBanaszekKerrMedium}, which lets $\phi$ be distributed according to a normal distribution with a variance that is with our parameter choices estimated as
\begin{align}
\sigma^2\approx 6 \cdot 10^{-19}\cdot b,\label{eqn:phase-noise}
\end{align}
where $b$ is the baud-rate (see Section \ref{subsection:noise model}). Each of the homodyne detectors at the output ports of the receiver operates based on a threshold ${\varepsilon}>0$ as follows: if $m$ is the measurement result of the detector, then the received signal is set to $\alpha$ if $m>{\varepsilon}$, to $0$ if $m\in(-{\varepsilon},{\varepsilon})$, and to $-\alpha$ otherwise. This yields a statistical input-output relation (see Subsection \ref{subsec:detection} for details)
\begin{align}\label{eqn:conditional-distribution}
q(y^n|\alpha,k)=p_C^{\varepsilon}(y_k|\alpha)\prod_{k'\neq k}p_I^{\varepsilon}(y_{k'}|\alpha)
\end{align}
where $y^n\in\{-\alpha,0,\alpha\}^n$ collects the detection results at the $n=2^K$ output ports of the receiver. $p_C$ and $p_I$ denote the correct and incorrect detection probabilities, respectively.
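The three-outcome threshold rule can be written out directly; the Gaussian read-out noise in the illustrative run below is our own assumption, as are all parameter values:

```python
import numpy as np

def detect(m: float, alpha: float, eps: float) -> float:
    """Threshold rule of Section 2: alpha if m > eps,
    0 if m is in (-eps, eps), and -alpha otherwise."""
    if m > eps:
        return alpha
    if m > -eps:
        return 0.0
    return -alpha

# Illustrative run: port k carries the amplitude sqrt(n)*alpha, the
# remaining ports carry vacuum; both are read out with Gaussian noise.
rng = np.random.default_rng(1)
n, alpha, eps, noise_std = 16, 0.5, 1.0, 0.2
y_signal = detect(np.sqrt(n) * alpha + rng.normal(0, noise_std), alpha, eps)
y_vacuum = detect(rng.normal(0, noise_std), alpha, eps)
```

Applying `detect` to all $n$ ports yields one sample of $y^n$ distributed according to $q(y^n|\alpha,k)$.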
The capacity of the system without the JDR is calculated as $C(b,E):=b\cdot\max_{p_A}I(Y;A)$, where $p_A$ is the distribution of the input symbols $A$, and as
\begin{align}\label{def:hadamard-capacity}
C(b,E,n):=b\cdot I(Y^n;AN)/n
\end{align}
with $A$ and $N$ uniformly distributed over $\{-\alpha,\alpha\}$ and $\{1,\ldots,n\}$, respectively, if quantum pre-processing (the JDR) is used.
\end{section}
\begin{section}{Results}
To analyze the statistical properties of the received signal we set $t(i,k):=(H_{i,m}H_{k,m})_{m=1}^n$. It holds that $t(i,k)_m\in\{-1,1\}$ and $\sum_m t(i,k)_m=n\,\delta(i,k)$. The received signal then reads as
\begin{align}
\ket{\Lambda^n_{k,k'}(\alpha)}
= \begin{cases}
\ket{2^{-K/2}\alpha \sum_{m}e^{\mathbbm{i}\phi_m} }&,k=k'\\
\ket{2^{-K/2}\alpha \sum_{m}t(k,k')_me^{\mathbbm{i}\phi_m} }&,\mathrm{else}\label{eqn:received-signal}
\end{cases}.
\end{align}
Equation \eqref{eqn:received-signal} naturally allows us to state the following:
\begin{theorem}\label{thm:result}
For each $K\in\mathbb N$ let $n:=2^K$, let the random variables $\Phi_1,\ldots,\Phi_n$ be i.i.d.\ according to a measure $\mu$ on $[0,2\pi)$, and let $t>0$. Then for all $k,k'=1,\ldots,n$:
\begin{align}
\mathbb P\bigg(\bigg|\Lambda^n_{k,k'}(\alpha) - \sqrt{n}\alpha\delta(k,k')\mathbb E_\mu [e^{\mathbbm{i}\Phi}]\bigg|\geq \tfrac{t|\alpha|}{\sqrt{n}}\bigg)&\leq 4e^{-t^2/n}.
\end{align}
\end{theorem}
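A quick Monte Carlo check of this concentration (a sketch with assumed parameters; for a centered Gaussian phase of standard deviation $\sigma$ one has $\mathbb E_\mu[e^{\mathbbm i\Phi}]=e^{-\sigma^2/2}$):

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """(H_n)_{j,k} = (-1)^{j.k} with j.k the bitwise scalar product."""
    return np.array([[(-1) ** bin(j & k).count("1") for k in range(n)]
                     for j in range(n)])

rng = np.random.default_rng(2)
K, alpha, sigma = 4, 1.0, 0.3          # assumed illustrative parameters
n = 2 ** K
H = hadamard(n)
k = kp = 5                             # diagonal port k = k'
t_vec = H[k, :] * H[:, kp]             # t(k,k')_m, all +1 on the diagonal
phases = rng.normal(0.0, sigma, size=(20_000, n))
Lam = alpha / np.sqrt(n) * np.sum(t_vec * np.exp(1j * phases), axis=1)
mean_phase = np.exp(-sigma ** 2 / 2)   # E[e^{i Phi}] for a Gaussian phase
t = float(n)                           # deviation parameter
dev = np.abs(Lam - np.sqrt(n) * alpha * mean_phase)
tail = np.mean(dev >= t * abs(alpha) / np.sqrt(n))
bound = 4 * np.exp(-t ** 2 / n)        # right-hand side of Theorem 1
```

In this regime the empirical tail probability `tail` stays below the theoretical `bound`, and the typical deviation of $\Lambda^n_{k,k}(\alpha)$ from $\sqrt{n}\alpha\,\mathbb E_\mu[e^{\mathbbm i\Phi}]$ is small compared to the signal amplitude.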
We conclude that the Hadamard receiver asymptotically transforms any signal $\alpha H_{1k},\ldots,\alpha H_{nk}$ into the output $\approx\sqrt{n}\alpha\mathbb E_\mu[e^{\mathbbm{i}\Phi}]$ at output port $k$, whereas at every output port $k'\neq k$ one receives almost no signal. To optimize the overall system performance we approach the problem of optimizing the homodyne detectors at the output ports of the system as follows:
\begin{align}\label{eqn:mathematical-problem-statement}
{\varepsilon}_{\max}:=\arg\max_{{\varepsilon}}\ & p_C^{\varepsilon}(\alpha|\alpha)\prod_{k\neq k'}p_I^{\varepsilon}(0|0).
\end{align}
This approach is motivated by three observations: {\bf 1.} The calculation of $\varepsilon$ in \eqref{eqn:conditional-distribution} becomes more challenging as $n$ grows. {\bf 2.} As $n$ grows, the central limit theorem predicts that $\Lambda^{n}_{k,k'}(\alpha)$ will be approximately normally distributed. {\bf 3.} The variance $\sigma^2$ of said normal distribution can be efficiently computed. Turning observation {\bf 2.} into a quantitative \emph{assumption}, we can prove that setting the detection parameters according to the solution of \eqref{eqn:mathematical-problem-statement} yields near-optimal results once the solution ${\varepsilon}_{\max}$ yields $p_C^{{\varepsilon}_{\max}}(\alpha|\alpha)\prod_{k\neq k'}p_I^{{\varepsilon}_{\max}}(0|0)\approx1$:
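Under that Gaussian assumption, problem \eqref{eqn:mathematical-problem-statement} reduces to a one-dimensional optimization that a simple grid search solves; the mean $\mu$ and standard deviation $s$ below are illustrative stand-ins (our assumption) for $\sqrt{n}\alpha\,\mathbb E_\mu[e^{\mathbbm i\Phi}]$ and the read-out noise level:

```python
import numpy as np
from math import erf, sqrt

def objective(eps: float, mu: float, s: float, n: int) -> float:
    """Gaussian-model objective of Eq. (11): correct detection at the
    signal port times correct 'vacuum' outcomes at the n-1 other ports."""
    p_correct = 0.5 * (1.0 - erf((eps - mu) / (s * sqrt(2))))  # P(m > eps), m ~ N(mu, s^2)
    p_idle = erf(eps / (s * sqrt(2)))                          # P(|m| < eps), m ~ N(0, s^2)
    return p_correct * p_idle ** (n - 1)

n, mu, s = 16, 2.0, 0.5                    # assumed illustrative values
grid = np.linspace(0.0, mu, 2001)
vals = np.array([objective(e, mu, s, n) for e in grid])
eps_max = grid[int(np.argmax(vals))]       # approximate solution of Eq. (11)
```

The optimum lies strictly between $0$ (every vacuum port misfires) and $\mu$ (half the signal clicks are lost), reflecting the trade-off between the two factors.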
\begin{theorem}\label{thm:capacity-bound}
For all $b,E,n$ we have $C(b,E,n)\leq b\cdot\log(n)/n$. If $p_I^{\varepsilon}(y|\pm\alpha)=d(y)$ for all $y\in\{-\alpha,0,\alpha\}$ and $p_C^{\varepsilon}(\alpha|\alpha)\geq1-\delta$ and $d(0)\geq1-\delta$ and $\delta<n^{-1}$, then
\begin{align}
C(b,E,n)\geq b\left((1-\delta)\log n - 5\cdot h(\delta)\right)/n,
\end{align}
where $h$ is the binary entropy.
\end{theorem}
We utilize problem statement \eqref{eqn:mathematical-problem-statement} to obtain numerical values for the detection threshold $\varepsilon$. In this way we obtained lower bounds for $C(b,E,n)$ for different values of $b$. In this process, we replaced the Gaussian distribution of $\Phi$ with a von Mises distribution (see Subsection \ref{subsection:noise model} for details and Figures \ref{fig:mutualinfo} and \ref{fig:capacities} for numerical results concerning our particular application). Using the von Mises distribution reduced the dependence of Eq. \eqref{eqn:mathematical-problem-statement} on the Hadamard receiver size, as explained in Subsubsection \ref{subsubsec:detector-optimization-procedure}.
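The substitution can be sanity-checked by comparing the mean phasor $\mathbb E[e^{\mathbbm i\Phi}]$ under both laws; the matching $\kappa\approx1/\sigma^2$ used below is the standard large-concentration correspondence between the two distributions (our choice for illustration, not a prescription from the text):

```python
import numpy as np

# Compare a (wrapped) normal phase with std sigma against a von Mises
# phase with concentration kappa = 1/sigma^2 via the mean phasor E[e^{i Phi}].
rng = np.random.default_rng(3)
sigma = 0.2
kappa = 1.0 / sigma ** 2
wn = rng.normal(0.0, sigma, size=200_000)   # wrapping negligible for small sigma
vm = rng.vonmises(0.0, kappa, size=200_000)
m_wn = abs(np.mean(np.exp(1j * wn)))        # ~ exp(-sigma^2 / 2)
m_vm = abs(np.mean(np.exp(1j * vm)))        # ~ I_1(kappa) / I_0(kappa)
```

For small $\sigma$ the two mean phasors agree closely, which is why the replacement has little effect on the quantities entering Theorem \ref{thm:result}.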
Our simulation results show the different capacities with increasing baud-rates and phase noise:
\begin{figure}
\caption{Average capacity plotted over the baud-rate. The green line represents the classical Shannon capacity, the red line the capacity of the Hadamard receiver with order $n=4$ and the brown line with order $n=32$. At $130$GBd, the received photon number is approximately $0.29$ photons per pulse. The shaded regions are error bars derived from the empirical variance of simulation results. We used the attenuation coefficient $a = 0.046$ and fiber length $L=250$km, resulting in a transmittivity of $\tau\approx10^{-5}$.}
\label{fig:capacitybaudrates}
\end{figure}
As the performance of the JDR and of the standard homodyne receiver are both hard to evaluate analytically for specific parameters, the performance was simulated \cite{ourCode}. For the attenuation we used the formula $\tau=\exp(-0.046\cdot L)$, which approximately models optical fibers \cite{kunzParisBanaszekKerrMedium} of length $L$ kilometers. The attenuation coefficient $a = 0.046$ was taken from \cite[Table 1]{kunzParisBanaszekKerrMedium}. It models attenuation in SMF-28 fiber for communication in the C-band at $1550$nm, which is the most widely used system choice in communication networks.
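For reference, the attenuation figures can be reproduced in a couple of lines ($a=0.046$ per km corresponds to the familiar $\approx 0.2$\,dB/km of SMF-28 at $1550$nm):

```python
from math import exp, log10

def transmittivity(length_km: float, a: float = 0.046) -> float:
    """Power transmittivity tau = exp(-a * L) of a fiber of length L km."""
    return exp(-a * length_km)

tau = transmittivity(250.0)                         # the L = 250 km link of Fig. 1
loss_db_per_km = -10 * log10(transmittivity(1.0))   # ~0.2 dB/km
```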
\begin{figure}
\caption{In this Figure the mutual information of the Hadamard receiver is plotted over the Hadamard order $n$. The dashed lines represent the capacities with the wrapped normal distribution (WN) and the solid lines are representing the von Mises(vM) distribution chosen as the phase noise distribution.}
\label{fig:mutualinfo}
\end{figure}
\begin{figure}
\caption{In this Figure the capacity of the Hadamard receiver is plotted over the Hadamard order $n$. The dashed lines represent the capacities with the wrapped normal distribution (WN) and the solid lines are representing the von Mises (vM) distribution chosen as the phase noise distribution.}
\label{fig:capacities}
\end{figure}
Figure \ref{fig:capacitybaudrates} displays simulation results for $L=250$km. It can be seen that with quantum pre-processing we can achieve a higher capacity at high baud-rates than with a purely classical system. From Figure \ref{fig:capacitybaudrates} we can also see that higher Hadamard orders ($n=32$) tend to be more beneficial when the ratio $E/b$ is extremely small, while lower orders ($n=4$) can already bring performance improvements. The maximum of the red curve at around $130$GBd restates the fact that JDRs outperform conventional receivers in the low photon regime where $\tau\cdot E/b$ is small ($\ll 1$) for our specific design and noise model.
\section{Discussion}
\subsubsection{Advantage of the Hadamard receiver}
While it was known \cite{rosati2016, guha2011structured} that the Hadamard receiver outperforms a conventional receiver, our work clarifies that this advantage persists when phase noise scales with the baud-rate and the Hadamard receiver is used only as a pre-processing stage in the design. Note that we ignore further non-idealities, such as the common mode rejection ratio of the homodyne receiver or the estimation of global phases.
\subsubsection{Accuracy of the Approximation with the von Mises Distribution}
From Figures \ref{fig:mutualinfo} and \ref{fig:capacities} one can see that the results obtained using the von Mises distribution approach those obtained with the wrapped normal distribution as $n$ grows.
\subsubsection{Asymptotics for Problem \eqref{eqn:mathematical-problem-statement}}
By setting $t = c\cdot n^{3/4}$ (with a suitable choice of $c$) in Theorem \ref{thm:result}, one can deduce that, for every $\delta\in(0,1)$, problem statement \eqref{eqn:mathematical-problem-statement} admits a solution ${\varepsilon}_{max}\leq\delta$: namely,
\begin{align*}
\mathbb P(|\Lambda^n_{k,k'}(\alpha) - \sqrt{n}\alpha\delta(k,k')\mathbb E_\mu [e^{\mathbbm{i}\Phi}]|\geq c|\alpha|n^{1/4})&\leq 4e^{-\sqrt{n} c^2}
\end{align*}
so that for $c=\mathbb E_\mu[e^{\mathbbm{i}\Phi}]$ and every choice $(\alpha,k)$ of the transmitter, the received signal at port $k$ is concentrated in the interval between $\alpha\sqrt{n}\mathbb E_\mu[e^{\mathbbm{i}\Phi}](1\pm n^{-1/4})$, while the received signal at every other port lies in the interval between $\alpha\sqrt{n}\mathbb E_\mu[e^{\mathbbm{i}\Phi}](0\pm n^{-1/4})$.
Based on the detection probabilities listed in Subsection \ref{subsec:detection}, we see that for every $\delta>0$ and large enough $n$, the threshold ${\varepsilon}_n:=\alpha\sqrt{n}\mathbb E_\mu[e^{\mathbbm{i}\Phi}]/2$ is asymptotically good enough to simultaneously achieve $d(0)\geq1-\delta$ and $p_C^{\varepsilon}(\alpha|\alpha)\geq1-\delta$.
\subsubsection{Detector Optimization Procedure\label{subsubsec:detector-optimization-procedure}}
Finding the optimal solution to \eqref{eqn:mathematical-problem-statement} for arbitrary distributions is a challenging task.
For the simulations, we thus used a semi-heuristic approach which exploits the Gaussian shape of $\Lambda^n(\alpha)$ as $n$ grows. The basic idea is as follows: for the empirical mean $\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$ of the phase factors $X_i = e^{\mathbbm{i}\Phi_i}$, with mean $\mathbb E_\mu[e^{\mathbbm{i}\Phi}]$ and variance $\sigma^2$ of $X_i$, we know from the central limit theorem that, asymptotically, $\bar{X}_n$ will be distributed as $N(\mathbb E_\mu[e^{\mathbbm{i}\Phi}], \sigma^2/n)$. Since we have $\Lambda_{k,k}^n(\alpha)=\alpha\sqrt{n}\bar{X}_n$, we thus know that (asymptotically) $\Lambda_{k,k}^n(\alpha)$ will be distributed as $N(\alpha\sqrt{n}\mathbb E_\mu[e^{\mathbbm{i}\Phi}], |\alpha|^2\sigma^2)$,
and similarly we get for $k\neq k'$ that $\Lambda_{k,k'}^n(\alpha)$ will be distributed according to $N(0, 2|\alpha|^2\sigma^2)$.
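This Gaussian approximation can be checked by direct Monte Carlo sampling. The following is a minimal sketch (Python standard library only; the values of $\alpha$, $n$, $\kappa$ and the number of trials are illustrative, not those used in our simulations):

```python
import cmath
import random

def lambda_diag(alpha, n, kappa, rng):
    """One sample of Lambda_{k,k}^n(alpha) = alpha * sqrt(n) * (1/n) * sum_i e^{i*Phi_i},
    with i.i.d. zero-mean von Mises phase noise of concentration kappa."""
    s = sum(cmath.exp(1j * rng.vonmisesvariate(0.0, kappa)) for _ in range(n))
    return alpha * s / n ** 0.5

rng = random.Random(0)
alpha, n, kappa, trials = 1.0, 64, 25.0, 2000  # illustrative values
samples = [lambda_diag(alpha, n, kappa, rng) for _ in range(trials)]

mean_re = sum(z.real for z in samples) / trials
var_re = sum((z.real - mean_re) ** 2 for z in samples) / trials
# CLT prediction: mean close to alpha*sqrt(n)*I_1(kappa)/I_0(kappa),
# variance of order |alpha|^2 * Var(cos Phi), i.e. small for large kappa
print(mean_re, var_re)
```

For $\kappa=25$ the mean resultant length $I_1(\kappa)/I_0(\kappa)$ is close to $1$, so the empirical mean concentrates near $\alpha\sqrt{n}$ with small variance, as the approximation predicts.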
Since the expectation value and variance of the von Mises distribution with zero mean is given by the first and second raw moment respectively (see Lemma \ref{lem:raw moments} and Equation \eqref{def:kappa})
\begin{align}
\mathbb E_\mu[e^{\mathbbm{i}\Phi}] &= \frac{I_1(\kappa)}{I_0(\kappa)}\\
\sigma^2_{vM} &= \left(\frac{I_2(\kappa)}{I_0(\kappa)}-\frac{I_1(\kappa)^2}{2I_0(\kappa)^2}\right) - \frac{I_1(\kappa)^2}{2I_0(\kappa)^2},
\end{align}
we can find, for every $n=2^K$, the optimal ${\varepsilon}$ in \eqref{eqn:mathematical-problem-statement} by taking into account the detection probability \eqref{eq:detectalpha} as well as the assumed Gaussian shape \begin{align}
f_t(\beta|\alpha):=\tfrac{1}{t\sigma_{vM}\sqrt{2\pi}}\exp\left(-\left(\tfrac{\beta-\sqrt{n}\mathbb E_\mu[e^{\mathbbm{i}\Phi}]\alpha}{\sqrt{2}t\sigma_{vM}}\right)^2\right)
\end{align}
of the received signal, to arrive at
\begin{align}
p_C^{\varepsilon}(\alpha|\alpha)=\int_{\mathbb R}f_1(\beta|\alpha) \frac{1}{2}(1-{\mathrm{erf}}(\sqrt{2}(\beta-{\varepsilon})))d\beta
\end{align}
\begin{align}
d(0)=\int_{\mathbb R} \frac{f_2(\beta|0)}{2}\sum_{x=0}^1{\mathrm{erf}}(\sqrt{2}({\varepsilon} +(-1)^x\beta))d\beta,
\end{align}
with ${\mathrm{erf}}(a)$ being the error function
\begin{align}
{\mathrm{erf}}(a) = \frac{2}{\sqrt{\pi}} \int_0^a d\xi e^{-\xi^2}.
\end{align}
As the computation of $p_C^{\varepsilon}(\alpha|\alpha)d(0)^{n-1}$ no longer depends critically on $n$, the calculation of ${\varepsilon}_{\max}$ is efficient even for large values of $n$. This is important in the regime where the ratio $\alpha\sqrt{n}/\sigma$ becomes low. When baud-rate and noise level are low, ${\varepsilon}=\alpha\sqrt{n}\mathbb E_\mu[e^{\mathbbm{i}\Phi}]/2$ will be a sufficient choice.
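The integral for $p_C^{\varepsilon}(\alpha|\alpha)$ can be evaluated with a few lines of standard-library Python. The following is a minimal sketch mirroring the formulas above; the values of $n$, the first moment $m_1$ and $\sigma_{vM}$ are illustrative, not taken from our simulations:

```python
import math

def f_t(beta, alpha, t, n, m1, sigma_vm):
    """Assumed Gaussian shape of the received signal: mean sqrt(n)*m1*alpha, std t*sigma_vm."""
    mu = math.sqrt(n) * m1 * alpha
    return math.exp(-0.5 * ((beta - mu) / (t * sigma_vm)) ** 2) / (t * sigma_vm * math.sqrt(2.0 * math.pi))

def p_correct(eps, alpha, n, m1, sigma_vm, steps=4000):
    """Trapezoidal evaluation of int f_1(beta|alpha) * (1 - erf(sqrt(2)*(beta - eps)))/2 dbeta."""
    mu = math.sqrt(n) * m1 * alpha
    lo, hi = mu - 8.0 * sigma_vm, mu + 8.0 * sigma_vm  # +-8 sigma covers essentially all mass
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        b = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * f_t(b, alpha, 1.0, n, m1, sigma_vm) * 0.5 * (1.0 - math.erf(math.sqrt(2.0) * (b - eps)))
    return total * h

# illustrative values: n = 16, first moment m1 = 0.9, sigma_vm = 0.3, eps = sqrt(n)*m1/2
pc = p_correct(eps=math.sqrt(16) * 0.9 / 2, alpha=1.0, n=16, m1=0.9, sigma_vm=0.3)
print(pc)
```

Since the integrand is a density weighted by a value in $[0,1]$, the result is always a probability; scanning over ${\varepsilon}$ with this routine is what makes the optimization cheap even for large $n$.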
\end{section}
\begin{section}{Methods and Proofs}
\subsection{Noise Model}\label{subsection:noise model}
Setting $\xi:=\gamma\hbar\omega_0/(a\cdot b)$, where $\gamma=1$ is a nonlinear interaction coefficient (discussed below \cite[Equation (7)]{kunzParisBanaszekKerrMedium}) and $a$ is the attenuation parameter, we let $N$ be the transmitted number of photons per second, so that we get \cite[Eq. (11)]{kunzParisBanaszekKerrMedium}
\begin{align}
\sigma^2=4\cdot\xi^2\cdot N \cdot b^{-1}\cdot\left(2-\tau-\tau(1-\log\tau)^2\right),
\end{align}
with $\tau\in(0,1)$ being the transmittivity.
Assuming transmission at $1550\,$nm over a standard SMF-28 fiber with attenuation parameter $a=0.046$ \cite[Table 1]{kunzParisBanaszekKerrMedium}, we arrive at
\begin{align}
\xi=2.8\cdot 10^{-18}\cdot b,
\end{align}
and thus, since $\tau=e^{-a\cdot 250}\ll1$, we get
\begin{align}
\sigma^2\approx6\cdot 10^{-35}\cdot b\cdot N.
\end{align}
For ${\approx}1\,$mW transmit power we use $N=10^{16}$, and therefore
\begin{align}
\sigma^2\approx6\cdot 10^{-19}\cdot b.
\end{align}
In order to recover the dependence of the phase noise on the signal energy we note that $1/\sigma^2$ plays a role similar to that of $\kappa$ in the von Mises distribution, so that we set
\begin{align}\label{def:kappa}
\kappa=10^{19}\cdot b^{-1}/6.
\end{align}
As has been noted in \cite{colletLewisDiscriminatingWNDandVMD} the difference between the two models is negligible for sample sizes below $200$. For large sample sizes it is visible for values of $\kappa$ in the interval $(0.1,10)$.
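The closeness of the two noise models under the identification $\kappa=1/\sigma^2$ can be illustrated by comparing the empirical first moments $|\mathbb E_\mu[e^{\mathbbm{i}\Phi}]|$. A small sketch using Python's standard-library samplers (the $\kappa$ values are illustrative):

```python
import cmath
import math
import random

def mean_resultant(draws):
    """Empirical estimate of E[e^{i*Phi}] from a list of angles."""
    return sum(cmath.exp(1j * phi) for phi in draws) / len(draws)

rng = random.Random(1)
samples = 100_000
results = {}
for kappa in (0.5, 2.0, 10.0):
    sigma2 = 1.0 / kappa  # identify kappa with 1/sigma^2
    vm = [rng.vonmisesvariate(0.0, kappa) for _ in range(samples)]
    # wrapping a normal sample mod 2*pi leaves e^{i*Phi} unchanged, so plain gauss suffices
    wn = [rng.gauss(0.0, math.sqrt(sigma2)) for _ in range(samples)]
    results[kappa] = (abs(mean_resultant(vm)), abs(mean_resultant(wn)))

for kappa, (r_vm, r_wn) in sorted(results.items()):
    print(kappa, r_vm, r_wn)
```

The two mean resultant lengths are close for large $\kappa$ and visibly different for $\kappa$ in the critical range, consistent with the comparison in \cite{colletLewisDiscriminatingWNDandVMD}.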
\subsection{Detection}\label{subsec:detection}
We use a homodyne detector at each output port of the receiver, described by the POVM elements
\begin{align}
\Pi_x = \ket{x}\bra{x},
\end{align}
with
\begin{align}
\ket{x} = \left(2/\pi\right)^{\frac{1}{4}} e^{-x^{2}}\textstyle{\sum_{n=0}^{\infty}} H_{n}(\sqrt{2}x)/\sqrt{2^{n}n!}\ket{n},
\end{align}
with $H_n(x)$ being Hermite polynomials.
Then we have the outcome $x$ of the homodyne detector as
\begin{align}
p(x|\beta) = {\mathrm{tr}}\{ \ket{\beta}\bra{\beta}\Pi_x\} = \sqrt{2/\pi}\exp(-2(x-\beta)^2),
\end{align}
with $\ket{\beta}$ being a coherent state. In the case of zero noise, $\beta$ is identical to either $\alpha$, $-\alpha$, or $0$; the last occurs if one listens at an output port where the vacuum state is present. To distinguish between these three events, a threshold ${\varepsilon}>0$ is set and the detected value $x$ is declared as $\alpha$ if $x\geq\sqrt{n}\alpha-{\varepsilon}$, as $-\alpha$ if $x\leq-\sqrt{n}\alpha+{\varepsilon}$, and as $0$ if none of the above holds. The corresponding detection probabilities are equal to
\begin{align}
\mathbb P(\alpha|\beta)
&=\frac{1}{2}\left\{ 1-{\mathrm{erf}}\left(\sqrt{2}(\beta-{\varepsilon})\right)\right\}\label{eq:detectalpha} \\
\mathbb P(0|\beta)
&=\frac{1}{2}\left\{{\mathrm{erf}}\left(\sqrt{2}({\varepsilon} - \beta)\right) + {\mathrm{erf}}\left(\sqrt{2} ({\varepsilon} + \beta)\right)\right\}\label{eq:detect0} \\
\mathbb P(-\alpha|\beta)
&=\frac{1}{2}\left\{ 1 - {\mathrm{erf}}\left(\sqrt{2}({\varepsilon}+ \beta)\right)\right\}.
\end{align}
The conditional distribution \eqref{eqn:conditional-distribution} of the output port events given input $(\beta,k)$ and phase noise realization $\phi_1,\ldots,\phi_n$ can, due to the i.i.d.\ property of $\Phi_1,\ldots,\Phi_n$, be calculated as
\begin{align}
p_C(y|\beta)=\mathbb P(y|2^{-n/2}\beta\textstyle\sum_{m=1}^{n} e^{-\mathbbm{i}\phi_m})\\
p_I(y|\beta)=\mathbb P(y|2^{-n/2}\beta\textstyle\sum_{m=1}^{n}(-1)^m e^{-\mathbbm{i}\phi_m}).
\end{align}
For the simulations, we sampled $1000$ realizations of the phase noise values $(\phi_1,\ldots,\phi_n)$ to obtain an empirical approximation to \eqref{eqn:conditional-distribution}.
\subsection{Proofs}
To prove the convergence of the joint detection receiver towards the expected value and the normal distribution, we first have to record the expected value and variance of the von Mises distribution.
\begin{lemma}[Raw Moments of the von Mises Distribution \label{lem:raw moments}]
Let $f(\theta;\beta,\kappa)= \frac{1}{2\pi I_0(\kappa)}e^{\kappa \cos(\theta-\beta)}$ be the von Mises distribution, with $0 \leq \beta < 2\pi$ and $\kappa \geq 0$ being parameters and $I_n$ the modified Bessel function of the first kind of order $n$. Then the raw trigonometric moments of this distribution are
\begin{align}
m_n = \mathbb E[e^{\mathbbm{i}n\Theta}] = \frac{ I_n(\kappa)}{I_0(\kappa)} e^{\mathbbm{i}n\beta},
\end{align}
where $\Theta$ is distributed according to $f$.
According to \cite{jammalamadaka2001topics} the central trigonometric moments are
\begin{align}
\alpha_n^* = \frac{I_n(\kappa)}{I_0(\kappa)}\cdot \cos(n\beta),
\end{align}
whereas $\beta_n^* = 0$ due to the symmetry of the von Mises density.
\end{lemma}
For the proof see \cite[Chapter 2.2.4]{jammalamadaka2001topics}.
Now we can apply Hoeffding's inequality \cite[p.18, p.30]{vershynin_2018} to the Hadamard receiver with coherent states with phase noise.
\begin{proof}[Proof of Theorem \ref{thm:result}]
Let $\phi_1,\ldots,\phi_n\in[0,2\pi)$ be a realization of the phase noise. Under this realization, the $k$-th sequence of signals $A_k := \alpha\cdot (H_{k,1}, \ldots, H_{k,n})$ is transformed to
\begin{align}
\hat A_k := \alpha\cdot (H_{k,1}\exp{\mathbbm{i}\phi_1}, \ldots, H_{k,n}\exp{\mathbbm{i}\phi_n})
\end{align}
and the $k$-th output of the Hadamard transform $H$ applied to $\hat A_k$ satisfies \begin{align}
\tfrac{\alpha}{\sqrt{n}}\sum_m H_{k,m}H_{m,k}e^{\mathbbm{i}\Phi_m}=\tfrac{\alpha}{\sqrt{n}}\sum_me^{\mathbbm{i}\Phi_m}.
\end{align}
Since the right-hand side is a sum of i.i.d.\ random variables, we get from Hoeffding's inequality, applied separately to the real and imaginary parts:
\begin{align}
\mathbb P\bigg(\bigg|\sum_m H_{k,m}H_{m,k}e^{\mathbbm{i}\Phi_m} - n\mathbb E_\mu [e^{\mathbbm{i}\Phi}]\bigg|\geq t\bigg)&\leq 4e^{-t^2/2n},
\end{align}
with $\mathbb E_\mu [e^{\mathbbm{i}\Phi}]$ being the expectation value of $e^{\mathbbm{i}\Phi}$.
Further, for all $k'\neq k$:
\begin{align}
\mathbb P\bigg(\bigg|\tfrac{\alpha}{\sqrt{n}}\sum_m H_{k',m}H_{m,k}e^{\mathbbm{i}\Phi_m}\bigg|\geq t\bigg)&\leq4e^{-t^2/n}
\end{align}
\end{proof}
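The key identity in the proof above, namely $H_{k,m}H_{m,k}=1$ for a symmetric Hadamard matrix with $\pm1$ entries, can be verified numerically. A short sketch using the Sylvester construction (Python; the values of $n$, $\alpha$, $k$ are illustrative):

```python
import cmath
import math
import random

def hadamard(n):
    """Sylvester-type Hadamard matrix with +/-1 entries; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

n, alpha = 8, 0.7  # illustrative values
H = hadamard(n)
rng = random.Random(2)
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]

k = 3
# k-th port after undoing the Hadamard spreading, as in the proof:
out_k = alpha / math.sqrt(n) * sum(H[k][m] * H[m][k] * cmath.exp(1j * phases[m]) for m in range(n))
expected = alpha / math.sqrt(n) * sum(cmath.exp(1j * p) for p in phases)
print(abs(out_k - expected))  # ~0, since H[k][m]*H[m][k] = 1
```

The same script also confirms the orthogonality $H H^T = nI$ underlying the receiver's port separation.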
\begin{proof}[Proof of Theorem \ref{thm:capacity-bound}]
Assume $w(y^n|x,k)=g(y_k|x)\cdot\prod_{k'\neq k}d(y_{k'})$ for some conditional probability distribution $g$ and probability distribution $d$ on $\{-1,0,1\}$. To derive a lower bound on the capacity of such a system, we assume without loss of generality that the input signals, consisting of BPSK symbols $x\in\{-1,1\}$ (where we set $\alpha=1$), and the input ports are chosen uniformly at random. Then, the output distribution $q$ of the symbols $y^n\in\{-1,0,1\}^n$ can be written as
\begin{align}
q(y^n)
&=\sum_{x,k}\tfrac{1}{2n}g(y_k|x)\prod_{k'\neq k}d(y_{k'}).
\end{align}
We can thus decompose this density into one truncated version on the set $A:=\{y^n:N(0|y^n)=n-1\}$ and one on its complement $B:=A^\complement$. Denote by $q_A$ and $q_B$ the respective re-scaled restrictions of $q$ to $A$ and $B$. Due to concavity of the entropy we have
\begin{align}
H(q)
&\geq q(A)H(q_A).
\end{align}
Then, for $y^n\in A$, without loss of generality $y_1=1$,
\begin{align}
&q_A(y^n)=\tfrac{1}{2n}\bigg(g(y_1|y_1)\cdot d(0)^{n-1} + g(y_1|-y_1)\cdot d(0)^{n-1} +\nonumber\\
&\qquad \sum_{k=2}^n\sum_{x}g(0|x)\cdot d(0)^{n-1}\bigg)\\
&= \tfrac{d(0)^{n-1}g(1|1)}{2n}\bigg(1 + \tfrac{g(1|-1) + (n-1)\sum_{x}g(0|x)}{g(1|1)}\bigg). \label{eq:qa}
\end{align}
With $\pi$ denoting the uniform distribution on $A$ we get
\begin{align}
q_A(y^n)=\lambda\pi(y^n)+(1-\lambda)\hat q_A(y^n)
\end{align}
with $\hat q_A$ defined in the obvious way from \eqref{eq:qa} and
\begin{align}
\lambda:=d(0)^{n-1}g(\alpha|\alpha).
\end{align}
Using concavity of the entropy and $H(\pi)=\log(2n)=K+1$ (with logarithms to base $2$ and $n=2^K$) we get
\begin{align}
H(q_A) &\geq \lambda H(\pi) = \lambda(K+1).
\end{align}
Now if the detector is well designed, then $d(0)\approx1$ and $g(y_1|y_1)\approx1$, so that we get $\lambda\approx1$ and therefore $H(q_A)\approx K+1$.
Following our calculations, the capacity is lower bounded by
\begin{align}
C(b,E,n)&\geq I(Y^n;AN)\\
&\geq\lambda(K+1) - H(Y^n|AN),
\end{align}
where $A$ is a random variable describing the random choice of phases and $N$ a random variable describing the random choice of input port.
The distribution of $Y^n$ given $k$ and $x$ does not depend on the particular choice of $k,x$. Thus for the calculation of $H(Y^n|AN)$ it is sufficient to calculate the following:
\begin{align}
-H&(Y^n|X=1,N=1) = \sum_{y^n}w(y^n|1,1)\log w(y^n|1,1)\\
&= g(1|1)d(0)^{n-1}\log(g(1|1)d(0)^{n-1}) \nonumber\\
&\qquad+ g(-1|1)d(0)^{n-1}\log(g(-1|1)d(0)^{n-1})\nonumber\\
&\qquad -H(g(\cdot|1)) - g(\cdot|1)(n-1)H(d(\cdot))\\
&= \lambda\log(\lambda) +
g(-1|1)d(0)^{n-1}\log(g(-1|1)d(0)^{n-1}) \nonumber\\
&\qquad-H(g(\cdot|1)) - g(\cdot|1)(n-1)H(d(\cdot)).
\end{align}
Thus if $g(1|1)\geq1-\delta$ and $d(0)\geq1-\delta$ we get
\begin{align}
-H(Y^n|AN) &\geq (1-\delta)\log(1-\delta) + \delta\log(\delta)\nonumber\\
&\qquad - h(\delta) - \delta - \delta(n-1)(h(\delta) + \delta)\\
&\geq -2\cdot h(\delta) - 2\delta - \delta(n-1)h(\delta).
\end{align}
It thus follows with $\delta':=1-\delta$
\begin{align}
C(b,E,n) &\geq \delta'(K+1) -2\cdot h(\delta) - 2\delta - \delta(n-1)h(\delta)
\end{align}
and for $\delta<\tfrac{1}{n-1}<\tfrac{1}{n}$
\begin{align}
C(b,E,n)\geq (1-\delta)(K+1) -5 h(\delta).
\end{align}
\end{proof}
\end{section}
\begin{section}{Conclusions}
We have detailed the concept of a (quantum) joint detection receiver as a pre-processing step in classical receiver design. Improving upon earlier modelling, we have incorporated a scaling of the phase noise with the baud-rate. Structural insights have been obtained by theoretical analysis, and our simulation results indicate that the theoretical performance predictions made in earlier works persist in more realistic situations.
\section*{Acknowledgment}
Das Projekt/Forschungsvorhaben ist Teil der Initiative Munich Quantum Valley, die von der Bayerischen Staatsregierung aus Mitteln der Hightech Agenda Bayern Plus gefördert wird.
The project/research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. Funding from the Federal Ministry of Education and Research of Germany, programme "Souver\"an. Digital. Vernetzt." joint project 6G-life, project identification number: 16KISK002 (ZA,JN), DFG Emmy-Noether program under grant number NO 1129/2-1 (JN) and support of the Munich Center for Quantum Science and Technology (MCQST) are acknowledged. Boulat A. Bash's work was supported in part by the National Science Foundation under Grant No. CCF-2006679.
\end{section}
\end{document}
\begin{document}
\baselineskip=16pt
\begin{abstract}
We show that projective structures with torsion are related to connections in a way parallel to the torsion-free ones.
This is done in terms of Cartan connections, in a manner parallel to that of Kobayashi and Nagano.
For this purpose, we make use of a bundle of formal frames, which is a generalization of a bundle of frames.
We will also describe projective structures in terms of Thomas--Whitehead connections by following Roberts.
In particular, we formulate normal projective connections and show the fundamental theorem for Thomas--Whitehead connections regardless of the triviality of the torsion.
We will study some examples of projective structures of which the torsion is non-trivial while the curvature is trivial.
In this article, projective structures are considered to be the same if they have the same geodesics and the same torsions.
\end{abstract}
\title{Notes on projective structures with torsion}
\section*{Introduction}
Projective structures are quite well-studied.
They can be described by Cartan connections and frame bundles, as studied by Kobayashi and Nagano~\cite{Kobayashi-Nagano} et al.
Projective structures can be also described in terms of Thomas--Whitehead connections (TW-connections for short) which are linear connections on a certain line bundle~\cite{Roberts}.
Associated with projective structures are torsions, which are $2$-forms.
If the torsion of a projective structure vanishes, then the structure is said to be \textit{torsion-free} or without torsion.
Actually, the above-mentioned studies are done in the torsion-free case.
One of the most fundamental results is the existence of normal projective connections~\cite{Kobayashi-Nagano}*{Proposition~3} which is a Cartan connection of special kind.
A corresponding result for TW-connections is known as the Fundamental theorem for TW-connections~\cite{Roberts}.
On the other hand, linear connections always induce projective structures even if they are with torsions.
In this article, we study how linear connections with torsions induce projective structures.
Indeed, we will study projective structures with torsion and show that they can be treated in a parallel way to the torsion-free case.
For this purpose, we need a notion of formal frame bundles~\cite{asuke:2022} which is a generalization of frame bundles.
Usually, a $2$-frame at a point is given by a pair $(a^i{}_j,a^i{}_{jk})\in\mathrm{GL}_n({\mathbb R})\times{\mathbb R}^{n^3}$ such that $a^i{}_{jk}=a^i{}_{kj}$.
The symmetry condition is closely related to torsion-freeness, and we have to drop this condition in order to deal with torsions.
This leads us to formal frames.
A formal $2$-frame at a point is a pair $(a^i{}_j,a^i{}_{jk})\in\mathrm{GL}_n({\mathbb R})\times{\mathbb R}^{n^3}$.
We refer to~\cite{asuke:2022} for the precise definition and details of formal frames.
Expecting a better understanding of the torsion, we will study some examples of projective structures of which the torsion is non-trivial while the curvature is trivial.
Finally, we remark that a slightly different approach to projective structures with torsion is presented in~\cite{McKay}*{Section~7}.
\par
In this article, projective structures are considered to be the same if they have the same (unparameterized) geodesics and the same torsions, except in the last part of Section~2.
Throughout this article, $(U,\varphi)$ and $(\widehat{U},\widehat{\varphi})$ denote charts, and $\psi$ denotes the transition function.
Representing (local) tensors, we make use of the Einstein convention.
For example, $a^i{}_\alpha b^\alpha{}_{jk}$ means $\sum_{\alpha}a^i{}_\alpha b^\alpha_{jk}$.
The range of $\alpha$ will be from $1$ to $\dim M$ or from $1$ to $\dim M+1$.
We basically retain notations of \cite{Kobayashi-Nagano} and \cite{Roberts}.
Finally, the order of the lower indices of the Christoffel symbols is reversed in this article (see Notation~\ref{not2.12}).
\section{Cartan connections}
We recall basics of Cartan connections after~\cite{K}.
We will work in the real category, however, we can work in the complex category (not necessarily the holomorphic category) after obvious modifications.
Let $G$ be a Lie group and $H$ a closed subgroup of $G$.
We assume that $P$ is a principal $H$-bundle over $M$.
In what follows, the Lie algebra is represented by the corresponding lower German letter, e.g., $\mathfrak{g}$ will denote the Lie algebra of $G$.
\begin{definition}
A \textit{Cartan connection} is a $1$-form $\omega$ on $P$ with values in $\mathfrak{g}$ which satisfies the following conditions:
\begin{enumerate}
\item
$\omega(A^*)=A$ for any $A\in\mathfrak{h}$, where $A^*$ denotes the fundamental vector field associated with $A$.
\item
$R_a{}^*\omega=\Ad_{a^{-1}}\omega$ for any $a\in H$.
\item
$\omega(X)\neq0$ for any non-zero vector $X$ on $P$.
\end{enumerate}
\end{definition}
\begin{notation}
In what follows, we assume that $G=\mathrm{PGL}_{n+1}({\mathbb R})=\mathrm{GL}_{n+1}({\mathbb R})/Z$, where $Z=\{\lambda I_{n+1}\mid\lambda\neq0\}$.
Let $[x^0:\cdots:x^n]$ be the homogeneous coordinates for ${\mathbb R} P^n$, and $H\subset G$ the isotropy group of $[0:\cdots:0:1]$.
Finally we set $\mathfrak{m}={\mathbb R}^n$, which is understood as a space of column vectors, and let $\mathfrak{m}^*$ denote its dual.
\end{notation}
\begin{definition}
We set
\begin{align*}
G_0&=\left\{\begin{pmatrix}
A & 0\\
\xi & a
\end{pmatrix}\in\mathrm{GL}_{n+1}(\mathbb{R})\;\middle|\;a\det A=1\right\}\hskip-12pt\left.\phantom{\biggl(}\middle/\right.\hskip-3pt Z,\\*
G_1&=\left\{\begin{pmatrix}
I_n & 0\\
\xi & 1
\end{pmatrix}\in\mathrm{GL}_{n+1}(\mathbb{R})\;\middle|\;\xi\in\mathfrak{m}^*\right\}.
\end{align*}
\end{definition}
Note that $G_1$ is naturally a subgroup of $G$ and $G_0$.
We have
\begin{align*}
\mathfrak{g}_0&=\left\{\begin{pmatrix}
A & 0\\
0 & a
\end{pmatrix}\;\middle|\;\mathop{\mathrm{tr}}A+a=0\right\},\\*
\mathfrak{g}_1&=\left\{\begin{pmatrix}
0 & 0\\
\xi & 0
\end{pmatrix}\right\}.
\end{align*}
If we set
\[
\mathfrak{g}_{-1}=\left\{\begin{pmatrix}
0 & v\\
0 & 0
\end{pmatrix}\right\},
\]
then we have
\[
\mathfrak{g}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_0\oplus\mathfrak{g}_1.
\]
We have $\mathfrak{g}_{-1}\cong\mathfrak{m}$, $\mathfrak{g}_0\cong\mathfrak{gl}_n({\mathbb R})$ and $\mathfrak{g}_1\cong\mathfrak{m}^*$ so that $\mathfrak{g}\cong\mathfrak{m}\oplus{\mathfrak g}l_n({\mathbb R})\oplus\mathfrak{m}^*$.
We also have $\mathfrak{h}\cong{\mathfrak g}l_n({\mathbb R})\oplus\mathfrak{m}^*$.
The identifications are given by
\begin{alignat*}{3}
&\begin{pmatrix}
0 & v\\
0 & 0
\end{pmatrix}\in\mathfrak{g}_{-1}&{}\mapsto{}& v\in\mathfrak{m},\\*
&\begin{pmatrix}
A & 0\\
0 & a
\end{pmatrix}\in\mathfrak{g}_0&{}\mapsto{}& U=A-aI_n\in\mathfrak{gl}_n({\mathbb R}),\\*
&\begin{pmatrix}
0 & 0\\
\xi & 0
\end{pmatrix}\in\mathfrak{g}_1&{}\mapsto{}& \xi\in\mathfrak{m}^*.
\end{alignat*}
Note that $U\in\mathfrak{gl}_n({\mathbb R})$ corresponds to $\begin{pmatrix}
U & 0\\
0 & 0
\end{pmatrix}-\dfrac1{n+1}(\mathop{\mathrm{tr}}U)I_{n+1}$.
Under these identifications, the Lie brackets are given as follows.
Let $u,v\in\mathfrak{m}$, $u^*,v^*\in\mathfrak{m}^*$ and $U,V\in{\mathfrak g}l_n({\mathbb R})$.
Then, we have
\begin{align*}
&[u,v]=0,\\*
&[u^*,v^*]=0,\\*
&[U,u]=Uu\in\mathfrak{m},\\*
&[u^*,U]=u^*U\in\mathfrak{m}^*,\\*
&[U,V]=UV-VU\in{\mathfrak g}l_n({\mathbb R}),\\*
&[u,u^*]=uu^*+(u^*u)I_n\in{\mathfrak g}l_n({\mathbb R}).
\end{align*}
In what follows, we always make use of these identifications.
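These bracket identifications can be verified by direct matrix computation in ${\mathfrak g}l_{n+1}({\mathbb R})$. For instance, the relation $[u,u^*]=uu^*+(u^*u)I_n$ is checked by the following sketch (Python, $n=2$, with arbitrary illustrative vectors):

```python
def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

n = 2
u = [2.0, -1.0]      # u in m = R^n (column vector), illustrative values
ustar = [0.5, 3.0]   # u* in m* (row vector), illustrative values

# embeddings into g_{-1} and g_1 as (n+1) x (n+1) matrices
Xu = [[0.0, 0.0, u[0]], [0.0, 0.0, u[1]], [0.0, 0.0, 0.0]]
Xus = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [ustar[0], ustar[1], 0.0]]

C = commutator(Xu, Xus)  # lands in g_0, a block matrix diag(A, a)
A_blk = [[C[0][0], C[0][1]], [C[1][0], C[1][1]]]
a = C[2][2]
# identification g_0 -> gl_n(R): U = A - a*I_n
U = [[A_blk[i][j] - (a if i == j else 0.0) for j in range(n)] for i in range(n)]

# expected bracket: u u* + (u* u) I_n
s = sum(ustar[i] * u[i] for i in range(n))
expected = [[u[i] * ustar[j] + (s if i == j else 0.0) for j in range(n)] for i in range(n)]
ok = all(abs(U[i][j] - expected[i][j]) < 1e-12 for i in range(n) for j in range(n))
print(ok)
```

The trace condition $\mathop{\mathrm{tr}}A+a=0$ defining $\mathfrak{g}_0$ also holds for the computed commutator.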
If $\omega$ is a Cartan connection on $P$, then we represent $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ according to the identification $\mathfrak{g}=\mathfrak{m}\oplus{\mathfrak g}l_n({\mathbb R})\oplus\mathfrak{m}^*$.
\begin{remark}
\label{rem4.3}
Each element $g$ of\/ $\mathrm{PGL}_{n+1}({\mathbb R})$ admits a representative of the form $\begin{pmatrix}
A & \xi^*\\
\xi & 1
\end{pmatrix}$.
By associating $g$ with $(\xi,A,\xi^*)$, we can consider $(a^i,a^i{}_j,a_j)$ as coordinates for $\mathrm{PGL}_{n+1}({\mathbb R})$.
With respect to these coordinates, we have $H=\{(0,a^i{}_j,a_j)\}$.
Let $o=[0:\cdots:0:1]$ denote $H\in\mathrm{PGL}_{n+1}({\mathbb R})/H$.
If $h=(0,a^i{}_j,a_j)\in H$ and if $x=(x^i)=[x^1:\cdots:x^n:1]$ is close enough to $o$, then we have
\begin{align*}
h.x&=\frac{a^i{}_jx^j}{a_jx^j+1}\\*
&=a^i{}_jx^j-a^i{}_jx^ja_kx^k+\cdots\\*
&=a^i{}_jx^j-\frac12(a^i{}_ja_k+a^i{}_ka_j)x^jx^k+\cdots.
\end{align*}
\end{remark}
\begin{definition}
\label{def1.2}
Let $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ be a Cartan connection on $P$.
We set
\begin{alignat*}{3}
&\Omega^i&{}={}&d\omega^i+\omega^i{}_k\wedge\omega^k,\\*
&\Omega^i{}_j&{}={}&d\omega^i{}_j+\omega^i{}_k\wedge\omega^k{}_j+\omega^i\wedge\omega_j-\delta^i{}_j\omega_k\wedge\omega^k,\\*
&\Omega_j&{}={}&d\omega_j+\omega_k\wedge\omega^k{}_j.
\end{alignat*}
We call $\Omega^i$ the \textit{torsion} and $(\Omega^i{}_j,\Omega_j)$ the \textit{curvature} of $\omega$, respectively.
\end{definition}
We refer to $(\Omega^i{}_j)$ as the \textit{curvature matrix} of $\omega$ and consider trace of it.
We have the following
\begin{proposition}[\cite{Kobayashi-Nagano}*{Proposition~2}]
We can represent the torsion and the curvature as
\begin{alignat*}{4}
&\Omega^i&{}={}&\frac12K^i{}_{kl}\omega^k\wedge\omega^l,&\quad &K^i{}_{lk}=-K^i{}_{kl},\\*
&\Omega^i{}_j&{}={}&\frac12K^i{}_{jkl}\omega^k\wedge\omega^l,&\quad &K^i{}_{jlk}=-K^i{}_{jkl},\\*
&\Omega_j&{}={}&\frac12K_{jkl}\omega^k\wedge\omega^l,&\quad &K_{jlk}=-K_{jkl},
\end{alignat*}
where $K^i{}_{kl}$, $K^i{}_{jkl}$ and $K_{jkl}$ are functions on $P$.
\end{proposition}
\begin{remark}
\label{rem4.4}
If $\omega$ is a Cartan connection on $P$, then we have the following\textup{:}
\begin{enumerate}[\textup{\alph{enumi})}]
\item
$\omega^i(A^*)=0$ and $\omega^i{}_j(A^*)=A^i{}_j$ for any $A=(A^i{}_j,A_j)\in\mathfrak{h}={\mathfrak g}l_{n}({\mathbb R})\oplus\mathfrak{m}^*$.
\item
$R_a{}^*(\omega^i,\omega^i{}_j)=\Ad_{a^{-1}}(\omega^i,\omega^i{}_j)$ for any $a\in H$.
\item
Let $X\in TP$.
We have $\omega^i(X)=0$ if and only if $X$ is vertical, namely, tangent to a fiber of $P\to M$.
\end{enumerate}
\end{remark}
Proposition~3 in~\cite{Kobayashi-Nagano} holds in the following form.
The point is that we do not need the condition $\Omega^i{}_i=0$.
See also Remark~\ref{rem2.4}.
\begin{proposition}
\label{prop4.5}
Let $\omega^i$ and $\omega^i{}_j$ satisfy the conditions in Remark~\ref{rem4.4}.
Then, there is a Cartan connection of the form $\omega=(\omega^i,\omega^i{}_j,\omega_j)$.
If $n\geq2$, there uniquely exists a Cartan connection such that $K^i{}_{jil}=0$, that is, $\omega$ is Ricci-flat.
If moreover $n\geq3$ and if $\omega$ is torsion-free, then $\Omega^i{}_i=0$, namely, the curvature matrix $(\Omega^i{}_j)$ is trace-free.
\end{proposition}
\begin{proof}
First we show the existence of a Cartan connection.
Let $\{U_\alpha\}$ be a locally finite open covering of $M$ and $\{f_\alpha\}$ a partition of unity subordinate to $\{U_\alpha\}$.
Let $\pi\colon P\to M$ be the projection.
Suppose that for each $\alpha$, there is a Cartan connection $\omega_\alpha$ on $\pi^{-1}(U_\alpha)$ such that $\omega_\alpha=(\omega^i,\omega^i{}_j,\omega_{j,\alpha})$ for some $\omega_{j,\alpha}$.
If we set $\omega=\sum(f_\alpha\circ\pi)\omega_\alpha$, then $\omega$ is a Cartan connection of the form $(\omega^i,\omega^i{}_j,\omega_j)$.
On the other hand, we may assume that $\pi^{-1}(U_\alpha)$ is trivial.
We fix a trivialization $\pi^{-1}(U_\alpha)\cong U_\alpha\times H$.
If $(x,h)\in U_\alpha\times H$ and if $Y\in T_{(x,h)}P$, then we can represent $Y$ as $Y=X+A$, where $X\in T_xM$ and $A\in\mathfrak{h}$.
If we set $\omega_\alpha(Y)=\Ad_{a^{-1}}(\omega^i(X),\omega^i{}_j(X),0)+A$, then $\omega_\alpha$ is a Cartan connection of the form $(\omega^i,\omega^i{}_j,\omega_{j\alpha})$.\par
From now on, we assume that $n\geq2$.
We show the uniqueness.
Suppose that $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ and $\omega'=(\omega^i,\omega^i{}_j,\omega'_j)$ are Cartan connections as in the proposition.
By the conditions a) and c), we have $\omega_j-\omega'_j=A_{jk}\omega^k$ for some functions $A_{jk}$ on $P$.
We have
\[
\Omega^i{}_j-\Omega'{}^i{}_j=\omega^i{}\wedge(\omega_j-\omega'_j)-\delta^i{}_j(\omega_k-\omega'_k)\wedge\omega^k.
\]
It follows that
\[
K^i{}_{jkl}-K'{}^i{}_{jkl}=-\delta^i{}_lA_{jk}+\delta^i{}_kA_{jl}+\delta^i{}_jA_{kl}-\delta^i{}_jA_{lk}.
\]
Therefore, we have
\begin{align*}
K^i{}_{jil}-K'{}^i{}_{jil}&=-\delta^i{}_lA_{ji}+\delta^i{}_iA_{jl}+\delta^i{}_jA_{il}-\delta^i{}_jA_{li}\\*
&=(n-1)A_{jl}+(A_{jl}-A_{lj})\\*
&=nA_{jl}-A_{lj}.
\end{align*}
Combining this relation with the one obtained by exchanging $j$ and $l$, namely $K^i{}_{lij}-K'{}^i{}_{lij}=nA_{lj}-A_{jl}$, we obtain $(n^2-1)A_{jl}=n(K^i{}_{jil}-K'{}^i{}_{jil})+(K^i{}_{lij}-K'{}^i{}_{lij})$, that is,
\[
\tag{\thetheorem{}a}
\label{eq4.5-a}
A_{jk}=\frac1{n^2-1}(n(K^i{}_{jik}-K'{}^i{}_{jik})+(K^i{}_{kij}-K'{}^i{}_{kij})).
\]
Since $\omega$ and $\omega'$ are Ricci-flat, we have $A_{jk}=0$.
Next, we show the existence of a Cartan connection which is Ricci-flat.
Let $\omega'$ be a Cartan connection of the form $(\omega^i,\omega^i{}_j,\omega_j')$ which is not necessarily Ricci-flat.
If $\omega$ is a Cartan connection which is Ricci-flat, then we have by \eqref{eq4.5-a} that
\[
\tag{\thetheorem{}b}
\label{eq4.5-3}
A_{jk}=-\frac1{n^2-1}(nK'{}^i{}_{jik}+K'{}^i{}_{kij}).
\]
If we conversely define $A_{jk}$ by the equality~\eqref{eq4.5-3} and set $\omega_j=\omega'_j+A_{jk}\omega^k$, then $(\omega^i,\omega^i{}_j,\omega_j)$ is a desired Cartan connection.\par
Finally, we assume that $\omega$ is torsion-free.
Then $\Omega^i{}_i=0$ by Proposition~\ref{prop4.7}, provided that $\dim M\geq3$.
\end{proof}
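The index inversion used to obtain \eqref{eq4.5-a} in the proof above, i.e.\ solving $nA_{jl}-A_{lj}=B_{jl}$ for $A$, can be checked numerically with a short sketch (Python, randomly chosen $A$):

```python
import random

n = 5
rng = random.Random(3)
# choose a "true" A and form B_{jl} = n*A_{jl} - A_{lj}
A = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
B = [[n * A[j][l] - A[l][j] for l in range(n)] for j in range(n)]

# inversion as in the proof: A_{jk} = (n*B_{jk} + B_{kj}) / (n^2 - 1)
A_rec = [[(n * B[j][k] + B[k][j]) / (n * n - 1) for k in range(n)] for j in range(n)]
err = max(abs(A[j][k] - A_rec[j][k]) for j in range(n) for k in range(n))
print(err)  # ~0
```

This reflects the algebraic identity $nB_{jk}+B_{kj}=(n^2-1)A_{jk}$, valid for every $n\geq2$.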
\begin{proposition}
\label{prop4.7}
Suppose that $n\geq3$ and let $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ be a Cartan connection.
Then, we have the following\textup{:}
\begin{enumerate}[\textup{\theenumi)}]
\item
If $d\Omega^i+\omega^i{}_j\wedge\Omega^j=0$, then we have $K^i{}_{jkl}+K^i{}_{klj}+K^i{}_{ljk}=0$.
\item
If $d\Omega^i+\omega^i{}_j\wedge\Omega^j=0$ and if $K^i{}_{jil}=0$, then $\Omega^i{}_i=0$.
\item
If\/ $\Omega^i=0$ and if\/ $\Omega^i{}_i=0$, then we have $K_{jkl}+K_{klj}+K_{ljk}=0$.
\item
If\/ $\Omega^i=0$ and if\/ $\Omega^i{}_j=0$, then $\Omega_j=0$.
\end{enumerate}
\end{proposition}
\begin{proof}
First we will show 1).
We have $\Omega^i=d\omega^i+\omega^i{}_j\wedge\omega^j$.
Hence we have
\begin{align*}
&\hphantom{{}={}}
d\Omega^i+\omega^i{}_j\wedge\Omega^j\\*
&=d\omega^i{}_j\wedge\omega^j-\omega^i{}_j\wedge d\omega^j+\omega^i{}_j\wedge(d\omega^j+\omega^j{}_k\wedge\omega^k)\\*
&=d\omega^i{}_j\wedge\omega^j+\omega^i{}_j\wedge\omega^j{}_k\wedge\omega^k\\*
&=\Omega^i{}_j\wedge\omega^j\\*
&=\frac12K^i{}_{jkl}\omega^k\wedge\omega^l\wedge\omega^j.
\end{align*}
It follows that $K^i{}_{jkl}+K^i{}_{klj}+K^i{}_{ljk}=0$ if $d\Omega^i+\omega^i{}_j\wedge\Omega^j=0$.
Next, we show 2).
Suppose in addition that $K^i{}_{jil}=0$.
Then, we have $0=K^i{}_{ikl}+K^i{}_{kli}+K^i{}_{lik}=K^i{}_{ikl}-K^i{}_{kil}+K^i{}_{lik}=K^i{}_{ikl}$, since $K^i{}_{kil}=0$ and $K^i{}_{lik}=0$ by the assumption $K^i{}_{jil}=0$.
Next, we show 3).
We have
\begin{align*}
d\Omega^i{}_j
&=d\omega^i{}_k\wedge\omega^k{}_j-\omega^i{}_k\wedge d\omega^k{}_j+d\omega^i\wedge\omega_j-\omega^i\wedge d\omega_j-\delta^i{}_j(d\omega_k\wedge\omega^k-\omega_k\wedge d\omega^k)\\*
&=(\Omega^i{}_k-\omega^i{}_l\wedge\omega^l{}_k-\omega^i\wedge\omega_k+\delta^i{}_k\omega_l\wedge\omega^l)\wedge\omega^k{}_j\\*
&\hphantom{{}={}}
-\omega^i{}_k\wedge(\Omega^k{}_j-\omega^k{}_l\wedge\omega^l{}_j-\omega^k\wedge\omega_j+\delta^k{}_j\omega_l\wedge\omega^l)\\*
&\hphantom{{}={}}
+(\Omega^i-\omega^i{}_k\wedge\omega^k)\wedge\omega_j-\omega^i\wedge(\Omega_j-\omega_k\wedge\omega^k{}_j)\\*
&\hphantom{{}={}}
-\delta^i{}_j((\Omega_k-\omega_l\wedge\omega^l{}_k)\wedge\omega^k-\omega_k\wedge(\Omega^k-\omega^k{}_l\wedge\omega^l))\\*
&=\Omega^i{}_k\wedge\omega^k{}_j-\omega^i{}_k\wedge\Omega^k{}_j+\Omega^i\wedge\omega_j-\omega^i\wedge\Omega_j-\delta^i{}_j(\Omega_k\wedge\omega^k-\omega_k\wedge\Omega^k).
\end{align*}
Taking the trace, we obtain
\[
d\Omega^i{}_i=(n+1)(\Omega^i\wedge\omega_i-\omega^i\wedge\Omega_i).
\]
If\/ $\Omega^i=0$ and if\/ $\Omega^i{}_i=0$, then we have $\omega^i\wedge\Omega_i=0$.
Hence $K_{jkl}+K_{klj}+K_{ljk}=0$.
Finally, we show 4).
If $\Omega^i=0$ and if $\Omega^i{}_j=0$, then we have $\omega^i{}\wedge\Omega_j=0$ by 3).
As $n\geq3$, we have $\Omega_j=0$.
\end{proof}
\section{Cartan connections, affine connections and projective structures}
We follow the arguments in~\cite{Kobayashi-Nagano}, taking torsions into account.
First, we briefly recall bundles of formal frames $\widetilde{P}^r(M)$ and groups $\widetilde{G}^r$ which act on $\widetilde{P}^r(M)$ on the right~\cite{asuke:2022}, where $r=1,2$.
Let $M$ be a manifold, and $P^r(M)$ and $G^r$ the bundle of $r$-frames and the group of $r$-jets~\cite{K}.
\begin{definition}
\begin{enumerate}
\item
We set $\widetilde{P}^1(M)=P^1(M)$ and $\widetilde{G}^1=G^1\cong\mathrm{GL}_n({\mathbb R})$.
\item
We set $\widetilde{G}^2=\mathrm{GL}_n({\mathbb R})\ltimes{\mathbb R}^{n^3}$, where the multiplication law is given by $(a^i{}_j,a^i{}_{jk})(b^i{}_j,b^i{}_{jk})=(a^i{}_lb^l{}_j,a^i{}_lb^l{}_{jk}+a^i{}_{lm}b^l{}_jb^m{}_k)$ which is the same as the one in $G^2$.
Indeed, $G^2=\{(a^i{}_j,a^i{}_{jk})\in\widetilde{G}^2\mid a^i{}_{jk}=a^i{}_{kj}\}$.
\end{enumerate}
\end{definition}
The group $\widetilde{G}^2$ consists of the $1$-jets of certain bundle homomorphisms, and the bundle $\widetilde{P}^2(M)$ is a principal $\widetilde{G}^2$-bundle which also consists of the $1$-jets of certain bundle homomorphisms.
We have $\widetilde{P}^2(M)=P^2(M)\times_{G^2}\widetilde{G}^2$.
In view of Remark~\ref{rem4.3}, we introduce the following
\begin{definition}
\label{def2.1}
We define a subgroup $H^2$ of $\widetilde{G}^2$ by setting
\[
H^2=\{(a^i{}_j,a^i{}_{jk})\in\widetilde{G}^2\mid\exists\,a_i,\ a^i{}_{jk}=-(a^i{}_ja_k+a_ja^i{}_k)\}.
\]
We regard $(a^i{}_j,a_j)$ as coordinates for $H^2$.
\end{definition}
It is easy to see that $H^2$ is indeed a subgroup of $\widetilde{G}^2$ isomorphic to $H$ and satisfies $G^1=\widetilde{G}^1<H^2<G^2<\widetilde{G}^2$.
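Indeed, closure under the multiplication law of $\widetilde{G}^2$ can be checked by a direct computation: if $a^i{}_{jk}=-(a^i{}_ja_k+a_ja^i{}_k)$ and $b^i{}_{jk}=-(b^i{}_jb_k+b_jb^i{}_k)$, then
\begin{align*}
a^i{}_lb^l{}_{jk}+a^i{}_{lm}b^l{}_jb^m{}_k
&=-a^i{}_l(b^l{}_jb_k+b_jb^l{}_k)-(a^i{}_la_m+a_la^i{}_m)b^l{}_jb^m{}_k\\
&=-(c^i{}_jc_k+c_jc^i{}_k),
\end{align*}
where $c^i{}_j=a^i{}_lb^l{}_j$ and $c_k=b_k+a_\alpha b^\alpha{}_k$, so that the product again belongs to $H^2$.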
\begin{definition}
\begin{enumerate}
\item
A \textit{projective structure} on $M$ is a subbundle $P$ of $\widetilde{P}^2(M)$ with structure group $H^2$.
\item
A \textit{projective connection} associated with a projective structure $P$ is a Cartan connection $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ on $P$ such that $\omega^i$ coincides with the restriction of the canonical form of order $0$ to $P$.
In order to distinguish from TW-connections, we refer to projective connections also as \textit{Cartan projective connections}.
\end{enumerate}
\end{definition}
\begin{remark}
Let $(\theta^i,\theta^i{}_j)$ be the canonical form on $\widetilde{P}^2(M)$.
We set $\Theta^i=d\theta^i+\theta^i{}_j\wedge\theta^j$.
Then we have $\sigma^*\Omega^i=\sigma^*\Theta^i$.
We have $\Theta^i=0$ on $P^2(M)$.
Indeed, this is just the structural equation.
See~\cite{asuke:2022}\, for details.
\end{remark}
\begin{theorem}[cf.~\cite{McKay}*{Theorem~7}]
\label{thm5.3}
For each projective structure $P$ of a manifold $M$, there is a projective connection $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ with the projective structure $P$.
If $n\geq2$, then there exists a unique $\omega$ with the following properties\textup{:}
\begin{enumerate}[\textup{\theenumi)}]
\item
$(\omega^i,\omega^i{}_j)$ coincides with the restriction of the canonical form on $\widetilde{P}^2(M)$ to~$P$.
\item
$K^i{}_{jil}=0$.
\end{enumerate}
If moreover $\omega$ is torsion-free, namely, if $\Omega^i=0$, then $\Omega^i{}_i=0$, or equivalently, $K^i{}_{ikl}=0$.
\end{theorem}
\begin{proof}
This is a consequence of Proposition~\ref{prop4.5}.
Indeed, the restriction of the canonical form satisfies the conditions in Remark~\ref{rem4.4}.
If $n=2$, then the last part will be later shown as Lemma~\ref{lem2.13}.
\end{proof}
\begin{remark}
\label{rem2.4}
Theorem~\ref{thm5.3} is well-known in the torsion-free case.
Since we do not assume projective structures to be torsion-free, we need canonical forms on $\widetilde{P}^2(M)$ which realize torsions.
The point is that the condition $\Omega^i{}_i=0$ is not needed for the uniqueness in Proposition~\ref{prop4.5}.
\end{remark}
\begin{remark}
\label{rem2.5}
Let $(U,\varphi)$ be a chart.
Then, $u\in\widetilde{P}^2(M)|_U$ naturally corresponds to $(u^i,u^i{}_j,u^i{}_{jk})\in{\mathbb R}^n\times\mathrm{GL}_n({\mathbb R})\times{\mathbb R}^{n^3}$, which are called the natural coordinates \textup{(}\cite{Kobayashi-Nagano}*{p.~225}, \cite{asuke:2022}*{Definition~1.8}\textup{)}.
If $u\in P^2(M)$ and if $u$ is represented by $f\colon{\mathbb R}^n\to M$, then $(u^i,u^i{}_j,u^i{}_{jk})=\left(f^i(o),Df^i{}_j(o),D^2f^i{}_{jk}(o)\right)$.
The canonical form $(\theta^0,\theta^1)$ is represented as
\begin{align*}
\theta^0{}_u&=v^i{}_\alpha du^\alpha,\\*
\theta^1{}_u&=v^i{}_\alpha du^\alpha{}_j-v^i{}_\alpha u^\alpha{}_{j\beta}v^\beta{}_\gamma du^\gamma,
\end{align*}
where $(v^i{}_j)=(u^i{}_j)^{-1}$.
\end{remark}
\begin{definition}
Let $n\geq2$.
The projective connection given by Theorem~\ref{thm5.3} is called the \textit{normal projective connection} associated with $P$.
\end{definition}
The following is clear.
\begin{proposition}
\begin{enumerate}[\textup{\theenumi)}]
\item
There is a one-to-one correspondence between the following objects\textup{:}
\begin{enumerate}[\textup{\alph{enumii})}]
\item
Sections from $M$ to $\widetilde{P}^2(M)/\widetilde{G}^1$.
\item
Sections from $\widetilde{P}^1(M)$ to $\widetilde{P}^2(M)$ equivariant under the $\widetilde{G}^1$-action.
\item
Affine connections on $M$.
\end{enumerate}
\item
There is a one-to-one correspondence between the following objects\textup{:}
\begin{enumerate}[\textup{\alph{enumii})}]
\item
Sections from $M$ to $\widetilde{P}^2(M)/H^2$.
\item
Projective structures on $M$.
\end{enumerate}
\end{enumerate}
\end{proposition}
If $\nabla$ is an affine connection, then $\nabla$ corresponds to a section from $M$ to $\widetilde{P}^2(M)/\widetilde{G}^1$.
Since $\widetilde{G}^1=G^1$ is a subgroup of $H^2$, $\nabla$ induces a section from $M$ to $\widetilde{P}^2(M)/H^2$, namely, a projective structure.
Conversely, given a projective structure, we can find an affine connection which induces the projective structure because $H^2/\widetilde{G}^1$ is contractible.
We introduce the following definition after~\cite{K} (see also Tanaka~\cite{Tanaka}, Weyl~\cite{Weyl}).
\begin{definition}
\label{def2.8}
Let $\nabla$ and $\nabla'$ be linear connections on $TM$.
Let $\omega$ and $\omega'$ be the connection forms of the associated connections on $P^1(M)$.
We say that $\nabla$ and $\nabla'$ are \textit{projectively equivalent} if there is an $\mathfrak{m}^*$-valued function, say $p$, on $P^1(M)$ such that
\[
\omega'-\omega=[\theta,p],
\]
where $\theta$ denotes the canonical form on $P^1(M)$.
\end{definition}
Note that $p$ necessarily satisfies $R_g{}^*p=pg$, where $g\in\mathrm{GL}_n({\mathbb R})$.
\begin{remark}
The torsion is invariant under the projective equivalences in the sense of Definition~\ref{def2.8}.
On the other hand, if we consider the usual equivalence relation based on unparameterized geodesics, then any affine connection is equivalent to a torsion-free one.
See Corollary~\ref{cor2.26} and Remark~\ref{rem2.23}.
\end{remark}
\begin{lemma}
\label{lem1.12}
Linear connections $\nabla$ and $\nabla'$ on $TM$ are projectively equivalent if and only if there is a $1$-form, say $\rho$, on $M$ such that $\nabla'-\nabla=\rho\otimes\mathrm{id}+\mathrm{id}\otimes\rho$.
\end{lemma}
\begin{proof}
If $\nabla$ and $\nabla'$ are projectively equivalent, then there is an $\mathfrak{m}^*$-valued function $p$ such that $\omega'-\omega=[\theta,p]$.
If $x\in M$ and if $v\in T_xM$, then we fix a frame $u$ of $T_x M$ and represent $v=uw$.
We set $\rho_x(v)=p(u)w$, and we have $\nabla'-\nabla=\rho\otimes\mathrm{id}+\mathrm{id}\otimes\rho$.
Conversely, suppose that $\nabla'-\nabla=\rho\otimes\mathrm{id}+\mathrm{id}\otimes\rho$ holds for a $1$-form $\rho$.
Let $u=(e_1,\ldots,e_n)$ be a frame and $(e^1,\ldots,e^n)$ its dual.
We represent $\rho$ as $\rho=\rho_1e^1+\cdots+\rho_ne^n$ and set $p(u)=(\rho_1,\ldots,\rho_n)$.
Then we have $\omega'-\omega=[\theta,p]$.
\end{proof}
\begin{remark}
Let $(x^1,\ldots,x^n)$ be local coordinates and choose $\left(\pdif{}{x^1},\ldots,\pdif{}{x^n}\right)$ as a frame.
If we represent $\rho$ as $\rho=\rho_idx^i$, then we have
\begin{align*}
(\rho\otimes\mathrm{id})^i{}_{jk}&=\delta^i{}_j\rho_k,\\*
(\mathrm{id}\otimes\rho)^i{}_{jk}&=\delta^i{}_k\rho_j,
\end{align*}
where $\delta^i{}_j=\begin{cases}
1, & i=j,\\
0, & i\neq j
\end{cases}$.
\end{remark}
\begin{lemma}
\label{lem1.14}
If we have $\nabla'-\nabla=\rho\otimes\mathrm{id}+\mathrm{id}\otimes\rho=\rho'\otimes\mathrm{id}+\mathrm{id}\otimes\rho'$, then $\rho'=\rho$.
\end{lemma}
\begin{proof}
We have $(\rho\otimes\mathrm{id}+\mathrm{id}\otimes\rho)(e_i,e_i)=2\rho(e_i)$.
Hence $\rho(e_i)=0$ if $\rho\otimes\mathrm{id}+\mathrm{id}\otimes\rho=0$.
\end{proof}
We will make use of the Christoffel symbols with the order of the lower indices reversed.
This is convenient when formal frames are considered.
\begin{notation}
\label{not2.12}
We set $\Gamma^i{}_{jk}=dx^i\left(\nabla_{\textstyle{\frac{\partial}{\partial x^k}}}\pdif{}{x^j}\right)$.
\end{notation}
\begin{lemma}
Affine connections $\nabla$ and $\nabla'$ induce the same projective structure if and only if they are projectively equivalent.
\end{lemma}
\begin{proof}
Let $\Gamma^i{}_{jk}$ and $\Gamma'{}^i{}_{jk}$ be the Christoffel symbols for $\nabla$ and $\nabla'$, respectively.
Then, $\nabla$ corresponds to a section from $M$ to $\widetilde{P}^2(M)/\widetilde{G}^1$ represented by $x\mapsto\sigma_\nabla(x)=(x,\delta^i{}_j,-\Gamma^i{}_{jk})$.
Then, sections $\sigma_\nabla$ and $\sigma_{\nabla'}$ determine the same projective structure if and only if there is an $H^2$-valued function, say $a=(a^i{}_j,-(a^i{}_ja_k+a_ja^i{}_k))$ such that $\sigma_\nabla.a=\sigma_{\nabla'}$.
This condition is equivalent to the condition that
\[
(x,a^i{}_j,-\Gamma^i{}_{lm}a^l{}_ja^m{}_k-(a^l{}_ja_k+a_ja^l{}_k))=(x,\delta^i{}_j,-\Gamma'{}^i{}_{jk})
\]
holds in $\widetilde{P}^2(M)/\widetilde{G}^1$.
The left hand side is equal to $(x,\delta^i{}_j,-\Gamma^i{}_{jk}-(\delta^i{}_ja_k+\delta^i{}_ka_j))$.
Hence $\nabla$ and $\nabla'$ correspond to the same projective structure if and only if we have $\Gamma'{}^i{}_{jk}=\Gamma^i{}_{jk}+\delta^i{}_ja_k+\delta^i{}_ka_j$, that is, $\nabla$ and $\nabla'$ are projectively equivalent.
\end{proof}
\begin{remark}
Affine connections determine geodesics and hence projective structures.
The most standard projective structure is the one on ${\mathbb R} P^n$ and equivalences should be described in terms of linear fractional transformations even if we allow torsions.
This leads to the above definitions.
Recall that projective structures are considered to be the same if they have the same (unparameterized) geodesics and the same torsions in this article.
\end{remark}
Let $\nabla$ be an affine connection.
We will describe the projective structure given by $\nabla$ and the associated normal projective connection.
For this purpose, we introduce the following
\begin{definition}
\label{def2.12}
Let $\nabla$ be an affine connection and $\{\Gamma^i{}_{jk}\}$ the Christoffel symbols with respect to a chart.
We define one-forms $\mu$ and $\nu$ by setting $\mu_j=\frac12(\Gamma^\alpha{}_{\alpha j}-\Gamma^\alpha{}_{j\alpha})$ and $\nu_j=-\frac1{2(n+1)}(\Gamma^\alpha{}_{\alpha j}+\Gamma^\alpha{}_{j\alpha})$.
We refer to $\mu$ as the \textit{reduced torsion} of $\nabla$.
\end{definition}
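For example, if the Christoffel symbols are symmetric in the lower indices, then
\[
\mu_j=\frac12(\Gamma^\alpha{}_{\alpha j}-\Gamma^\alpha{}_{j\alpha})=0
\quad\text{and}\quad
\nu_j=-\frac1{n+1}\Gamma^\alpha{}_{\alpha j},
\]
so that the reduced torsion measures the failure of the symmetry of $\{\Gamma^i{}_{jk}\}$ in the lower indices (cf.~Lemma~\ref{lem2.13}).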
\begin{remark}
\begin{enumerate}
\item
The differential form $\Gamma^\alpha{}_{\alpha j}dx^j$ is the connection form of the connection on $\mathcal{E}(M)$ induced by $\nabla$.
The other differential form $\Gamma^\alpha{}_{k\alpha}dx^k$ also corresponds to a connection on $\mathcal{E}(M)$.
These connections are the same if\/ $\nabla$ is torsion-free.
\item
The differential form $-\mu=-\mu_jdx^j$ is a kind of Ricci tensor of the torsion.
\end{enumerate}
\end{remark}
Cartan connections can be found as follows.
\begin{lemma}
\label{lem2.21}
Let $(\omega^i,\omega^i{}_j,\omega_j)$ be a Cartan connection on $P$.
Let $\sigma\colon U\to P$ be a section, and set $\psi^i=\sigma^*\omega^i=\Pi^i{}_jdx^j$, $\psi^i{}_j=\sigma^*\omega^i{}_j=\Pi^i{}_{jk}dx^k$ and $\psi_j=\sigma^*\omega_j=\Pi_{jk}dx^k$.
Let $(a^i{}_j,a_j)$ be the coordinates for $H^2$ as in Definition~\ref{def2.1} and $(x^i,a^i{}_j,a_j)$ be the product coordinates for $P|_U\cong U\times H^2$, where the identification is given by $\sigma$.
If we set $(b^i{}_j)=(a^i{}_j)^{-1}$, then we have
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{l@{}c@{}l}
\omega^i&{}={}&b^i{}_\alpha\psi^\alpha\\*
&{}={}& b^i{}_\alpha\Pi^\alpha{}_\beta dx^\beta,\\*
\omega^i{}_j&{}={}&b^i{}_\alpha da^\alpha{}_j+b^i{}_\alpha\psi^\alpha{}_\beta a^\beta{}_j+b^i{}_\alpha\psi^\alpha a_j+\delta^i{}_ja_\alpha b^\alpha{}_\beta\psi^\beta\\*
&{}={}&b^i{}_\alpha da^\alpha{}_j+b^i{}_\alpha\Pi^\alpha{}_{\beta\gamma}a^\beta{}_jdx^\gamma+b^i{}_\alpha\Pi^\alpha{}_\beta a_jdx^\beta+\delta^i{}_ja_\alpha b^\alpha{}_\beta\Pi^\beta{}_\gamma dx^\gamma,\\*
\omega_j&{}={}&da_j-a_\alpha b^\alpha{}_\beta da^\beta{}_j-a_\alpha b^\alpha{}_\beta\psi^\beta{}_\gamma a^\gamma{}_j+\psi_\alpha a^\alpha{}_j-a_\alpha b^\alpha{}_\beta\psi^\beta a_j\\*
&{}={}&da_j-a_\alpha b^\alpha{}_\beta da^\beta{}_j-a_\alpha b^\alpha{}_\beta\Pi^\beta{}_{\gamma\delta}a^\gamma{}_jdx^\delta+\Pi_{\alpha\beta}a^\alpha{}_jdx^\beta-a_\alpha b^\alpha{}_\beta\Pi^\beta{}_\gamma a_jdx^\gamma.
\end{array}
\]
\end{lemma}
Let $U$ be a chart of $M$ and $x^i$ the local coordinates on $U$.
Then, Proposition~17 of \cite{Kobayashi-Nagano} holds in the following form.
\begin{proposition}
\label{prop2.9}
Suppose that $n\geq2$ and let $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ be the normal projective connection for the projective structure $P$ determined by $\nabla$.
Then, there is a unique section $\sigma\colon U\to P$ with the following properties\textup{:}
\begin{enumerate}[\textup{\theenumi)}]
\item
We have $\sigma^*\omega^i=dx^i$.
\item
If we set\/ $\sigma^*\omega^i{}_j=\Psi^i{}_j=\Pi^i{}_{jk}dx^k$, then we have $\Pi^i{}_{ik}=\mu_k$.
\end{enumerate}
We have moreover that
\begin{enumerate}
\item[\textup{2')}]
$\Pi^i{}_{ji}=-\mu_j$,
\end{enumerate}
and
\begin{alignat*}{3}
&\Pi^i{}_{jk}&{}={}&\Gamma^i{}_{jk}+\delta^i{}_j\nu_k+\delta^i{}_k\nu_j\\*
& &{}={}&\Gamma^i{}_{jk}-\frac1{2(n+1)}(\delta^i{}_j(\Gamma^\alpha{}_{\alpha k}+\Gamma^\alpha{}_{k\alpha})+\delta^i{}_k(\Gamma^\alpha{}_{\alpha j}+\Gamma^\alpha{}_{j\alpha})),\\*
&\Pi_{jk}&{}={}&\dfrac{-1}{n^2-1}\left(n\left(\pdif{\Pi^i{}_{jk}}{x^i}+\pdif{\mu_j}{x^k}-\mu_\alpha\Pi^\alpha{}_{jk}-\Pi^\alpha{}_{j\beta}\Pi^\beta{}_{\alpha k}\right)\right.\\*
& & &\hphantom{\frac{-1}{n^2-1}\biggl(}\left.+\left(\pdif{\Pi^i{}_{kj}}{x^i}+\pdif{\mu_k}{x^j}-\mu_\alpha\Pi^\alpha{}_{kj}-\Pi^\alpha{}_{k\beta}\Pi^\beta{}_{\alpha j}\right)\right),
\end{alignat*}
where $\{\Gamma^i{}_{jk}\}$ denote the Christoffel symbols and $\sigma^*\omega_j=\Psi_j=\Pi_{jk}dx^k$.
Finally, conditions \textup{2)} and \textup{2')} are interchangeable.
\end{proposition}
\begin{proof}
Let $\sigma_0$ be the section from $M$ to $\widetilde{P}^2(M)/\widetilde{G}^1$ given by the connection, namely, $\sigma_0(x)=(x^i,\delta^i{}_j,-\Gamma^i{}_{jk})$.
Let $\overline{\sigma}_0$ denote the section from $M$ to $\widetilde{P}^2(M)/H^2$ induced by $\sigma_0$.
By the condition~1), $\sigma$ should be of the form $\overline{\sigma}_0.h$, where $h=(\delta^i{}_j,-(\delta^i{}_k\nu'_j+\delta^i{}_j\nu'_k))$ for some $\nu'_j$.
If $\sigma(x)=(x^i,\delta^i{}_j,-\Pi^i{}_{jk})$, then we have $\Pi^i{}_{jk}=\Gamma^i{}_{jk}+\delta^i{}_j\nu'_k+\delta^i{}_k\nu'_j$ (see Remark~\ref{rem2.5}).
Suppose that $\nu'_j$ can be so chosen that $\Pi^i{}_{ik}=\mu_k$ or $\Pi^i{}_{ji}=-\mu_j$.
Then, we accordingly have
\begin{align*}
\mu_k&=\Pi^i{}_{ik}=\Gamma^i{}_{ik}+(n+1)\nu'_k,\ \text{or}\\*
-\mu_j&=\Pi^i{}_{ji}=\Gamma^i{}_{ji}+(n+1)\nu'_j.
\end{align*}
Both conditions are equivalent to
\[
(n+1)\nu'_k=-\frac12(\Gamma^\alpha{}_{\alpha k}+\Gamma^\alpha{}_{k\alpha}).
\]
Hence we have $\nu'=\nu$ in both cases.
The uniqueness also holds.
Conversely, if we define $\Pi^i{}_{jk}$ as in the statement and if we set $\sigma(x)=(x^i,\delta^i{}_j,-\Pi^i{}_{jk})$, then $\sigma$ induces a section to $\widetilde{P}^2(M)/H^2$ by Lemma~\ref{lem2.12} below.
We have $\sigma^*\omega^i=dx^i$ and $\sigma^*\omega^i{}_j=\Psi^i{}_j$.
If we set $\Psi_j=\sigma^*\omega_j$, then we have
\[
\tag{\thetheorem{}a}
\label{eq2.11}
\sigma^*\Omega^i{}_j=d\Psi^i{}_j+\Psi^i{}_k\wedge\Psi^k{}_j+dx^i\wedge\Psi_j-\delta^i{}_j\Psi_k\wedge dx^k.
\]
If we define $k^i{}_{jkl}$ by the conditions that $\sigma^*\Omega^i{}_j=\frac12k^i{}_{jkl}dx^k\wedge dx^l$ and $k^i{}_{jkl}+k^i{}_{jlk}=0$, then \eqref{eq2.11} is equivalent to
\[
k^i{}_{jkl}=\pdif{\Pi^i{}_{jl}}{x^k}-\pdif{\Pi^i{}_{jk}}{x^l}+\Pi^i{}_{\alpha k}\Pi^\alpha{}_{jl}-\Pi^i{}_{\alpha l}\Pi^\alpha{}_{jk}+\delta^i{}_k\Pi_{jl}-\delta^i{}_l\Pi_{jk}-\delta^i{}_j(\Pi_{lk}-\Pi_{kl}).
\]
Since $\omega$ is a normal projective connection, we have
\begin{align*}
0&=k^i{}_{jil}\\*
&=\pdif{\Pi^i{}_{jl}}{x^i}-\pdif{\Pi^i{}_{ji}}{x^l}+\Pi^i{}_{\alpha i}\Pi^\alpha{}_{jl}-\Pi^i{}_{\alpha l}\Pi^\alpha{}_{ji}+n\Pi_{jl}-\Pi_{jl}-(\Pi_{lj}-\Pi_{jl})\\*
&=\pdif{\Pi^i{}_{jl}}{x^i}+\pdif{\mu_j}{x^l}-\mu_\alpha\Pi^\alpha{}_{jl}-\Pi^i{}_{\alpha l}\Pi^\alpha{}_{ji}+n\Pi_{jl}-\Pi_{lj}.
\end{align*}
Regarding this equality as an equation with respect to $\Pi_{jk}$, we see that $\Pi_{jk}$ is given as in the statement.
\end{proof}
\begin{remark}
\label{rem2.17}
If we replace $\nu_j$ by $-\frac1{2(n+1)}(a\Gamma^\alpha{}_{\alpha j}+b\Gamma^\alpha{}_{j\alpha})$ in Definition~\ref{def2.12}, then Proposition~\ref{prop2.9} holds after replacing the conditions by
\begin{align*}
\Pi^\alpha{}_{\alpha k}&=\left(1-\frac{a}2\right)\Gamma^\alpha{}_{\alpha k}-\frac{b}2\Gamma^\alpha{}_{k\alpha},\\*
\Pi^\alpha{}_{j\alpha}&=-\frac{a}2\Gamma^\alpha{}_{\alpha k}+\left(1-\frac{b}2\right)\Gamma^\alpha{}_{k\alpha}.
\end{align*}
These are proportional to the reduced torsion if and only if $a+b=2$.
We choose $a=b=1$ as the simplest case, taking symmetry into account.
The situation is similar in Theorem~\ref{thm6.13}.
\end{remark}
As in the classical case, we have the following.
We choose a branch of the logarithmic function in the complex category.
\begin{lemma}
\label{lem2.12}
Let $(U,\varphi)$ and $(\widehat{U},\widehat{\varphi})$ be charts.
We assume that $U=\widehat{U}$ and set $\psi=\widehat{\varphi}\circ\varphi^{-1}$.
If $\sigma$ and $\widehat{\sigma}$ denote the sections given by Proposition~\ref{prop2.9}, then we have
\[
\psi_*\sigma=\widehat\sigma.(a^i{}_j,-(a_ja^i{}_k+a_ka^i{}_j)),
\]
where $a^i{}_j=D\psi^i{}_j$ and $a_j=-\dfrac1{n+1}\pdif{\log J\psi}{x^j}$ with $J\psi=\det D\psi$.
\end{lemma}
\begin{proof}
We have
\[
\Gamma^i{}_{jk}=(D\psi^{-1})^i{}_\alpha H\psi^\alpha{}_{jk}+(D\psi^{-1})^i{}_\alpha\widehat{\Gamma}^\alpha{}_{\beta\gamma}D\psi^\beta{}_jD\psi^\gamma{}_k,
\]
where $D$ denotes the derivative and $H$ denotes the Hessian.
It follows that
\begin{align*}
\Pi^i{}_{jk}&=(D\psi^{-1})^i{}_\alpha H\psi^\alpha{}_{jk}+(D\psi^{-1})^i{}_\alpha\widehat{\Gamma}^\alpha{}_{\beta\gamma}D\psi^\beta{}_jD\psi^\gamma{}_k\\*
&\hphantom{{}={}}
-\frac1{2(n+1)}\delta^i{}_j\left(\left(\pdif{\log J}{x^k}+\widehat{\Gamma}^\alpha{}_{\alpha\beta}D\psi^\beta{}_k\right)+\left(\pdif{\log J}{x^k}+\widehat{\Gamma}^\alpha{}_{\gamma\alpha}D\psi^\gamma{}_k\right)\right)\\*
&\hphantom{{}={}}
-\frac1{2(n+1)}\delta^i{}_k\left(\left(\pdif{\log J}{x^j}+\widehat{\Gamma}^\alpha{}_{\alpha\beta}D\psi^\beta{}_j\right)+\left(\pdif{\log J}{x^j}+\widehat{\Gamma}^\alpha{}_{\gamma\alpha}D\psi^\gamma{}_j\right)\right)\\*
&=(D\psi^{-1})^i{}_\alpha H\psi^\alpha{}_{jk}+(D\psi^{-1})^i{}_\alpha\widehat{\Pi}^\alpha{}_{\beta\gamma}D\psi^\beta{}_jD\psi^\gamma{}_k\\*
&\hphantom{{}={}}
-\frac1{n+1}\left(\delta^i{}_j\pdif{\log J}{x^k}+\delta^i{}_k\pdif{\log J}{x^j}\right),
\end{align*}
from which the lemma follows.
\end{proof}
If $\nabla$ is torsion-free, then $\Pi^i{}_{jk}$ and $\Pi_{jk}$ are well-known as follows~\cite{Kobayashi-Nagano}*{Proposition~17}, \cite{Roberts}*{Fundamental theorem for TW-connections}.
\begin{lemma}
\label{lem2.13}
If\/ $\nabla$ is torsion-free, then we have $\mu_j=0$ and $\Pi^i{}_{jk}=\Pi^i{}_{kj}$.
We have
\begin{align*}
\Pi^i{}_{jk}&=\Gamma^i{}_{jk}-\frac1{n+1}(\delta^i{}_j\Gamma^\alpha{}_{\alpha k}+\delta^i{}_k\Gamma^\alpha{}_{\alpha j}),\\*
\Pi_{jk}&=\Pi_{kj}=-\frac1{n-1}\left(\pdif{\Pi^i{}_{jk}}{x^i}-\Pi^\alpha{}_{j\beta}\Pi^\beta{}_{\alpha k}\right).
\end{align*}
Moreover, $\Omega^i{}_i=0$.
\end{lemma}
\begin{proof}
The first part is straightforward.
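Indeed, if $\Gamma^i{}_{jk}=\Gamma^i{}_{kj}$, then $\mu_j=0$ and $\Pi^i{}_{jk}=\Gamma^i{}_{jk}+\delta^i{}_j\nu_k+\delta^i{}_k\nu_j$ is symmetric in $j$ and $k$, so that the two summands in the formula for $\Pi_{jk}$ in Proposition~\ref{prop2.9} coincide. Hence
\[
\Pi_{jk}=\frac{-(n+1)}{n^2-1}\left(\pdif{\Pi^i{}_{jk}}{x^i}-\Pi^\alpha{}_{j\beta}\Pi^\beta{}_{\alpha k}\right)
=-\frac1{n-1}\left(\pdif{\Pi^i{}_{jk}}{x^i}-\Pi^\alpha{}_{j\beta}\Pi^\beta{}_{\alpha k}\right).
\]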
To show that $\Omega^i{}_j$ is trace-free, it suffices to show that $k^i{}_{ikl}=0$.
We have
\begin{align*}
k^i{}_{ikl}&=\pdif{\Pi^i{}_{il}}{x^k}-\pdif{\Pi^i{}_{ik}}{x^l}+\Pi^i{}_{\alpha k}\Pi^\alpha{}_{il}-\Pi^i{}_{\alpha l}\Pi^\alpha{}_{ik}+\Pi_{kl}-\Pi_{lk}-n(\Pi_{lk}-\Pi_{kl})\\*
&=\pdif{\mu_l}{x^k}-\pdif{\mu_k}{x^l}+(n+1)(\Pi_{kl}-\Pi_{lk})\\*
&=0.\qedhere
\end{align*}
\end{proof}
In this article, we work with projective structures keeping the torsion invariant.
If we allow the torsion to be modified, we have the following lemma and corollary~\cite{Weyl},~\cite{McKay}*{Lemma~11}.
We include a sketch of a proof for completeness.
\begin{lemma}
\label{lem2.25}
Let $\nabla$ and $\overline{\nabla}$ be connections of which the Christoffel symbols are $\{\Gamma^i{}_{jk}\}$ and $\{\overline{\Gamma}^i{}_{jk}\}$.
Then, the unparameterized geodesics of $\nabla$ and $\overline\nabla$ are the same if and only if\/ $\overline{\Gamma}^i{}_{jk}=\Gamma^i{}_{jk}+\delta^i{}_j\varphi_k+\delta^i{}_k\varphi_j+a^i{}_{jk}$, where $\{\varphi_k\}$ are the components of a $1$-form on $M$, and $\{a^i{}_{jk}\}$ are the components of a $TM$-valued $2$-form on $M$ such that $a^i{}_{kj}=-a^i{}_{jk}$.
\end{lemma}
\begin{proof}
We follow the proof of~\cite{Kobayashi-Nagano}*{Proposition~12}.
We only show that the geodesic equations of $\nabla$ and $\overline{\nabla}$ are equivalent.
Let $s$ and $\overline{s}$ be parameters of geodesics of $\nabla$ and $\overline{\nabla}$, respectively.
Writing down the geodesic equation, we have
\begin{align*}
0&=\frac{d^2x^i}{d\overline{s}^2}+\overline{\Gamma}^i{}_{jk}\frac{dx^j}{d\overline{s}}\frac{dx^k}{d\overline{s}}\\*
&=\left(\frac{d^2x^i}{ds^2}+\Gamma^i{}_{jk}\frac{dx^j}{ds}\frac{dx^k}{ds}\right)\left(\frac{ds}{d\overline{s}}\right)^2+\frac{dx^i}{ds}\left(2\varphi_j\frac{dx^j}{d\overline{s}}\frac{ds}{d\overline{s}}+\frac{d^2s}{d\overline{s}^2}\right)+a^i{}_{jk}\frac{dx^j}{d\overline{s}}\frac{dx^k}{d\overline{s}}\\*
&=\left(\frac{d^2x^i}{ds^2}+\Gamma^i{}_{jk}\frac{dx^j}{ds}\frac{dx^k}{ds}\right)\left(\frac{ds}{d\overline{s}}\right)^2+\frac{dx^i}{ds}\left(2\varphi_j\frac{dx^j}{d\overline{s}}\frac{ds}{d\overline{s}}+\frac{d^2s}{d\overline{s}^2}\right),
\end{align*}
because $a^i{}_{kj}=-a^i{}_{jk}$.
Hence, it suffices to solve the equation $2\varphi_j\frac{dx^j}{d\overline{s}}\frac{ds}{d\overline{s}}+\frac{d^2s}{d\overline{s}^2}=0$.
\end{proof}
\begin{corollary}
\label{cor2.26}
Given an affine connection $\nabla$, we can find a torsion-free affine connection $\overline\nabla$ of which the geodesics are the same.
\end{corollary}
\begin{proof}
Let $T$ be the torsion of $\nabla$.
It suffices to set $\overline\nabla=\nabla+\frac12T$.
\end{proof}
\begin{remark}
\label{rem2.23}
A projective connection similar to the normal projective connection as in Theorem~\ref{thm5.3} is given by Hlavat\'y~\cite{Hlavaty}.
We refer to this connection as the \textup{Hlavat\'y connection}.
The components of the Hlavat\'y connection are given by
\begin{align*}
&\Phi^i{}_{jk}=\Gamma^i{}_{jk}+\frac1{n^2-1}(\delta^i{}_j(\Gamma^\alpha{}_{k\alpha}-n\Gamma^\alpha{}_{\alpha k})+\delta^i{}_k(\Gamma^\alpha{}_{\alpha j}-n\Gamma^\alpha{}_{j\alpha})).
\end{align*}
We have $\Phi^\alpha{}_{\alpha k}=0$ and $\Phi^\alpha{}_{j\alpha}=0$.
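Indeed, contracting the displayed formula over $i=j$ gives
\begin{align*}
\Phi^\alpha{}_{\alpha k}&=\Gamma^\alpha{}_{\alpha k}+\frac1{n^2-1}\bigl(n(\Gamma^\alpha{}_{k\alpha}-n\Gamma^\alpha{}_{\alpha k})+(\Gamma^\alpha{}_{\alpha k}-n\Gamma^\alpha{}_{k\alpha})\bigr)\\
&=\Gamma^\alpha{}_{\alpha k}+\frac{(1-n^2)\Gamma^\alpha{}_{\alpha k}+(n-n)\Gamma^\alpha{}_{k\alpha}}{n^2-1}=0,
\end{align*}
and $\Phi^\alpha{}_{j\alpha}=0$ follows from the same computation with the roles of the lower indices exchanged.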
The Hlavat\'y connection can be obtained as follows.
First consider an affine connection $\overline{\nabla}$ of which the Christoffel symbols $\{\overline{\Gamma}{}^i{}_{jk}\}$ are given by
\[
\overline{\Gamma}{}^i{}_{jk}=\Gamma^i{}_{jk}-\frac1{n-1}(\delta^i{}_j\mu_k-\delta^i{}_k\mu_j).
\]
The geodesics of\/ $\nabla$ and $\overline\nabla$ are the same.
On the other hand, if $T$ and $\overline{T}$ denote the torsion of\/ $\nabla$ and $\overline\nabla$, then we have $\overline{T}{}^i{}_{jk}=T^i{}_{jk}+\frac2{n-1}(\delta^i{}_j\mu_k-\delta^i{}_k\mu_j)$.
We have
\begin{align*}
\overline\Gamma{}^\alpha{}_{\alpha k}&=\Gamma^\alpha{}_{\alpha k}-\mu_k=\frac12(\Gamma^\alpha{}_{\alpha k}+\Gamma^\alpha{}_{k\alpha})=-(n+1)\nu_k,\\*
\overline\Gamma{}^\alpha{}_{j\alpha}&=\Gamma^\alpha{}_{j\alpha}+\mu_j=\frac12(\Gamma^\alpha{}_{\alpha j}+\Gamma^\alpha{}_{j\alpha})=-(n+1)\nu_j.
\end{align*}
Hence we have
\begin{align*}
\overline\mu_j&=\frac12(\overline\Gamma{}^\alpha{}_{\alpha j}-\overline\Gamma{}^\alpha{}_{j\alpha})=0,\\*
\overline\nu_j&=-\frac1{2(n+1)}(\overline\Gamma{}^\alpha{}_{\alpha j}+\overline\Gamma{}^\alpha{}_{j\alpha})=\nu_j.
\end{align*}
By some straightforward calculations, we see that $\overline\Pi{}^i{}_{jk}=\Phi^i{}_{jk}$.
Note that we have $\Phi^i{}_{jk}-\Pi^i{}_{jk}=\overline{\Gamma}{}^i{}_{jk}-\Gamma^i{}_{jk}=-\frac1{n-1}(\delta^i{}_j\mu_k-\delta^i{}_k\mu_j)$.
As $\overline{\mu}_j=0$, we have
\[
\overline\Pi{}_{jk}=\dfrac{-1}{n^2-1}\left(n\left(\pdif{\overline\Pi^i{}_{jk}}{x^i}-\overline\Pi^\alpha{}_{j\beta}\overline\Pi^\beta{}_{\alpha k}\right)+\left(\pdif{\overline\Pi^i{}_{kj}}{x^i}-\overline\Pi^\alpha{}_{k\beta}\overline\Pi^\beta{}_{\alpha j}\right)\right).
\]
\end{remark}
\section{Geodesics and completeness, flatness of projective structures}
Carefully examining arguments in~\cite{Kobayashi-Nagano}*{Sections~7 and 8}, we see that results presented there remain valid for projective structures with torsion.
We always consider equivalences in the sense of Definition~\ref{def2.8}, namely, we require both the geodesics and the torsions to be the same.
As mentioned in the previous section, we have the following
\begin{proposition}[\cite{Weyl}, \cite{Kobayashi-Nagano}*{Proposition~12}]
Let $P$ be a projective structure of $M$ and $\nabla$ an affine connection which belongs to $P$.
If we disregard parametrizations, then geodesics of\/ $\nabla$ are geodesics of $P$ and vice versa.
\end{proposition}
\begin{definition}
\begin{enumerate}
\item
Let $M$ and $M'$ be manifolds with projective structures $P$ and $P'$.
A diffeomorphism $f\colon M\to M'$ is said to be a \textit{projective isomorphism} if $f_*\colon\widetilde{P}^2(M)\to\widetilde{P}^2(M')$ induces a bundle isomorphism from $P$ to $P'$.
\item
Let $M$ and $M'$ be manifolds with projective structures $P$ and $P'$.
A mapping $f\colon M\to M'$ is said to be a \textit{projective morphism} if for each $p\in M$, there exists an open neighborhood $U$ of $p$ such that the restriction of $f$ to $U$ is a projective isomorphism to its image.
\item
A projective structure $P$ on a manifold $M$ is said to be \textit{flat}, if for each $p\in M$, there exists an open neighborhood $U$ of $p$ and a projective isomorphism from $U$ to an open subset of ${\mathbb R} P^n$, where $n=\dim M$.
\end{enumerate}
\end{definition}
If a projective structure $P$ is flat, then the normal projective connection is torsion-free.
Hence we are in the classical settings so that we have the following.
\begin{theorem}[\cite{Kobayashi-Nagano}*{Theorem~15}]
A projective structure $P$ of a manifold $M$ is flat if and only if the torsion and the curvature of the normal projective connection vanish.
\end{theorem}
\begin{remark}
We also have estimates of the dimension of transformation groups which concern projective structures.
The results are parallel to Theorems~13 and 14 of\/ \cite{Kobayashi-Nagano}.
\end{remark}
\section{Thomas--Whitehead connections}
We follow arguments by Roberts~\cite{Roberts}.
Projective structures are described by means of connections on the bundle of volume elements.
Such connections are called Thomas--Whitehead connections.
\begin{definition}
Let $M$ be a manifold of dimension $n$.
If $M$ is orientable, then let $\mathcal{E}(M)$ be the principal ${\mathbb R}_{>0}$-bundle associated with $\bigwedge^nTM$.
If $M$ is non-orientable, we consider $\mathcal{E}(M)/\{\pm1\}$.
We equip an ${\mathbb R}$-action on $\mathcal{E}(M)$ by setting $va=ve^a$ for $v\in\mathcal{E}(M)$ and $a\in{\mathbb R}$.
We call $\mathcal{E}(M)$ the \textit{bundle of volume elements} over $M$.
\end{definition}
\begin{lemma}
The bundle of volume elements $\mathcal{E}(M)$ is a principal ${\mathbb R}$-bundle.
\end{lemma}
\begin{proof}
If $M$ is orientable, then we only consider charts compatible with the orientation.
Let $(U,\varphi)$ be a chart.
Then, $TM|_U$ is trivialized by $\left\{\pdif{}{x^i}\right\}$ so that $\mathcal{E}(M)|_U$ is trivialized by $\epsilon=\pdif{}{x^1}\wedge\cdots\wedge\pdif{}{x^n}$.
Indeed, if $p\in U$ and if $v_p\in\mathcal{E}_p(M)$, then we have $v_p=a\epsilon_p$ for some $a>0$.
Hence we can associate with $v_p$ a pair $(\epsilon_p,\log a)$.
In other words, the inverse of the mapping $(x^1,x^2,\ldots,x^n,x^{n+1})\mapsto(\varphi^{-1}(x^1,\ldots,x^n),\epsilon_{\varphi^{-1}(x^1,\ldots,x^n)}e^{x^{n+1}})$ is a local trivialization of $\mathcal{E}(M)$.
If $(\widehat{U},\widehat{\varphi})$ is another chart and if $\psi$ is the transition function from $U$ to $\widehat{U}$, then we have $\widehat{\epsilon}\det D\psi=\epsilon$.
Hence the transition function from $\mathcal{E}(M)|_U$ to $\mathcal{E}(M)|_{\widehat{U}}$ is given by $(p,t)\mapsto(p,t+\log\det D\psi)$ if $M$ is orientable and $(p,t)\mapsto(p,t+\log\norm{\det D\psi})$ if $M$ is non-orientable.
\end{proof}
\begin{remark}
In the complex category, we fix branches of the logarithms when choosing local trivializations.
\end{remark}
\begin{definition}
We locally set $\Psi=e^{-x^{n+1}}dx^1\wedge\cdots\wedge dx^n\wedge dx^{n+1}$ and call $\Psi$ the \textit{canonical positive odd density}.
\end{definition}
\begin{remark}
If $M$ is orientable, then $\Psi$ is indeed an $(n+1)$-form.
\end{remark}
\begin{definition}
For $a\in{\mathbb R}$ and $v\in\mathcal{E}(M)$, we set $R_av=v.a$.
Let $\mathrm{Lie}({\mathbb R})$ denote the Lie algebra of ${\mathbb R}$.
If $b\in\mathrm{Lie}({\mathbb R})$, then the vector field $X$ defined by $X_u=\left.\pdif{}{t}R_{bt}u\right|_{t=0}$ is called the \textit{fundamental vector field} associated with $b$.
In particular, the fundamental vector field associated with $1\in\mathrm{Lie}({\mathbb R})$ is called the \textit{canonical fundamental vector field} and denoted by $\xi$.
\end{definition}
Since the structure group ${\mathbb R}$ is abelian, the definition of connection forms on $\mathcal{E}(M)$ reduces to the following.
\begin{definition}
\label{def4.7}
A $\mathrm{Lie}({\mathbb R})$-valued $1$-form $\underline{\omega}$ on $\mathcal{E}(M)$ is called a \textit{connection form} if we have
\begin{enumerate}
\item
$\underline{\omega}(\xi)=1$, and
\item
$R_a{}^*\underline{\omega}=\Ad_{-a}\underline{\omega}=\underline{\omega}$ for $a\in{\mathbb R}$.
\end{enumerate}
\end{definition}
\begin{definition}
We set $\mathscr{F}=\left(\pdif{}{x^1},\ldots,\pdif{}{x^n},\pdif{}{x^{n+1}}\right)$ on $T\mathcal{E}(M)$.
\end{definition}
If $\psi$ denotes a change of coordinates, then the transition function is given by $\begin{pmatrix}
D\psi & 0\\
\partial\log J\psi & 1
\end{pmatrix}$, where $J\psi=\det D\psi$ and $\partial\log J\psi=\left(\pdif{\log J\psi}{x^1}\ \cdots\ \pdif{\log J\psi}{x^n}\right)$.
\begin{definition}[\cite{Roberts}, see also \cite{Thomas}]
\label{def1.8}
A \textit{Thomas--Whitehead projective connection}, or a \textit{TW-connection}, is a linear connection $\nabla$ on $T\mathcal{E}(M)$ with the following properties.
Let $\omega=(\omega^i{}_j)$ be the connection form of $\nabla$ with respect to $\mathscr{F}$.
\begin{enumerate}
\item
$\nabla\xi=-\frac1{n+1}\mathrm{id}$, namely, we have
\[
\omega^i{}_{n+1,j}=-\frac{\delta^i{}_j}{n+1},
\]
where $\delta^i{}_j=\begin{cases}
1, & i=j,\\
0, & i\neq j
\end{cases}$.
\item
We have $\omega^i{}_{j,n+1}=-\frac{\delta^i{}_j}{n+1}$.
\item
$R_{a*}(\nabla_XY)=\nabla_{R_{a*}X}(R_{a*}Y)$ for any $X,Y\in\mathfrak{X}(\mathcal{E}(M))$, namely, $\nabla$ is invariant under the right action of ${\mathbb R}$.
\end{enumerate}
We refer to $\nabla^{\underline{\omega}}$ as a \textit{TW-connection} on $TM$ induced by $\nabla$ and $\underline{\omega}$.
\end{definition}
\begin{remark}
TW-connections are usually assumed to be torsion-free.
In this case, the conditions 1) and 2) in Definition~\ref{def1.8} are equivalent.
\end{remark}
\begin{definition}
\label{defTW}
Let $\nabla$ be a TW-connection on $T\mathcal{E}(M)$ and $\underline{\omega}$ a connection form on $\mathcal{E}(M)$.
If $X,Y\in\mathfrak{X}(M)$, then let $\widetilde{X},\widetilde{Y}\in\mathfrak{X}(\mathcal{E}(M))$ be lifts of $X,Y$ horizontal with respect to $\underline{\omega}$.
We set
\[
\nabla^{\underline{\omega}}_XY=\pi_*\left(\nabla_{\widetilde{X}}\widetilde{Y}\right),
\]
where $\pi\colon\mathcal{E}(M)\to M$ is the projection.
\end{definition}
\begin{lemma}[see also Lemma~\ref{lem5.2}]
\label{Lem4.11}
$\nabla^{\underline{\omega}}$ is a connection on $TM$.
If\/ $\nabla$ is torsion-free, then so is $\nabla^{\underline{\omega}}$.
\end{lemma}
\begin{proof}
It is easy to see that $\nabla^{\underline{\omega}}$ is a connection.
If $\nabla$ is torsion-free, then we have
\begin{align*}
\nabla^{\underline{\omega}}_XY-\nabla^{\underline{\omega}}_YX&=\pi_*\left(\nabla_{\widetilde{X}}\widetilde{Y}-\nabla_{\widetilde{Y}}\widetilde{X}\right)\\*
&=\pi_*\left(\left[\widetilde{X},\widetilde{Y}\right]\right)\\*
&=\left[\pi_*\widetilde{X},\pi_*\widetilde{Y}\right]\\*
&=[X,Y].\qedhere
\end{align*}
\end{proof}
Let $\underline{\omega}$ be a connection form on $\mathcal{E}(M)$.
We locally have
\[
\underline{\omega}=f_1dx^1+\cdots+f_ndx^n+dx^{n+1}
\]
for some functions $f_1,\ldots,f_n$.
\begin{remark}
\begin{enumerate}[\textup{\theenumi)}]
\item
The functions $f_1,\ldots,f_n$ are independent of $x^{n+1}$ by 2) of Definition~\ref{def4.7}.
\item
Despite~1), $f_1dx^1+\cdots+f_ndx^n$ is not necessarily well-defined on $M$.
\end{enumerate}
\end{remark}
\begin{definition}
Let $e_i$ be the horizontal lift of $\pdif{}{x^i}$ to $T\mathcal{E}(M)$ with respect to $\underline{\omega}$, that is, we set
\[
e_i=\pdif{}{x^i}-f_i\pdif{}{x^{n+1}}.
\]
We set $e_{n+1}=\pdif{}{x^{n+1}}$ and $\mathscr{F}^H=(e_1,\ldots,e_n,e_{n+1})$.
\end{definition}
\begin{lemma}
\label{lem3.1}
Let $\psi$ be the transition function from $(x^1,\ldots,x^n)$ to $(\widehat{x}^1,\ldots,\widehat{x}^n)$.
We have
\[
\tag{\thetheorem{}a}
\label{eq5.1-1}
\left(\widehat{e}_1,\ldots,\widehat{e}_n,\widehat{e}_{n+1}\right)\begin{pmatrix}
D\psi \\
0 & 1
\end{pmatrix}=\left(e_1,\ldots,e_n,e_{n+1}\right).
\]
\end{lemma}
\begin{proof}
If we set $f=(f_1\ \cdots\ f_n)$, then we have
\[
\left(e_1,\ldots,e_n,e_{n+1}\right)\begin{pmatrix}
I_n\\
f & 1
\end{pmatrix}=\left(\pdif{}{x^1},\ldots,\pdif{}{x^n},\pdif{}{x^{n+1}}\right).
\]
If we set $J\psi=\det D\psi$, then we have
\begin{align*}
&\hphantom{{}={}}
\left(\widehat{e}_1,\ldots,\widehat{e}_n,\widehat{e}_{n+1}\right)\begin{pmatrix}
I_n\\
\widehat{f} & 1
\end{pmatrix}\begin{pmatrix}
D\psi\\
D\log J\psi & 1
\end{pmatrix}\\*
&=\left(\pdif{}{\widehat{x}^1},\ldots,\pdif{}{\widehat{x}^n},\pdif{}{\widehat{x}^{n+1}}\right)\begin{pmatrix}
D\psi\\
D\log J\psi & 1
\end{pmatrix}\\*
&=\left(\pdif{}{x^1},\ldots,\pdif{}{x^n},\pdif{}{x^{n+1}}\right)\\*
&=\left(e_1,\ldots,e_n,e_{n+1}\right)\begin{pmatrix}
I_n\\
f & 1
\end{pmatrix}.
\end{align*}
On the other hand, if we set $dx={}^t(dx^1\ \cdots\ dx^n)$, then we have $\underline{\omega}=\begin{pmatrix}
f & 1
\end{pmatrix}\begin{pmatrix}
dx\\
dx^{n+1}
\end{pmatrix}$.
Hence we have
\[
\begin{pmatrix}
f & 1
\end{pmatrix}\begin{pmatrix}
dx\\
dx^{n+1}
\end{pmatrix}=\begin{pmatrix}
\widehat{f} & 1
\end{pmatrix}\begin{pmatrix}
d\widehat{x}\\
d\widehat{x}^{n+1}
\end{pmatrix}
=\begin{pmatrix}
\widehat{f} & 1
\end{pmatrix}\begin{pmatrix}
D\psi \\
D\log J\psi & 1
\end{pmatrix}\begin{pmatrix}
dx\\
dx^{n+1}
\end{pmatrix}
\]
and consequently that
\[
\begin{pmatrix}
I_n\\
\widehat{f} & 1
\end{pmatrix}\begin{pmatrix}
D\psi\\
D\log J\psi & 1
\end{pmatrix}=\begin{pmatrix}
D\psi\\
f & 1
\end{pmatrix}=\begin{pmatrix}
D\psi\\
0 & 1
\end{pmatrix}\begin{pmatrix}
I_n\\
f & 1
\end{pmatrix}.
\]
Combining these equalities, we obtain the relation as desired.
\end{proof}
Let $\omega$ be the connection form of a TW-connection with respect to $\mathscr{F}$.
If we define $\omega'$ by the property
\[
\omega=\omega'-\frac1{n+1}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix},
\]
then $\omega'=\begin{pmatrix}
\alpha & 0\\
\beta & 0
\end{pmatrix}$, where $\alpha$ and $\beta$ do not involve $dx^{n+1}$.
Moreover, as $\nabla$ is invariant under the ${\mathbb R}$-action, $\alpha$ and $\beta$ project to $M$.
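For the reader's convenience, here is why the decomposition takes this shape; everything follows from Definition~\ref{def1.8}. Condition 1) says that the last column of $\omega$ is $-\frac1{n+1}{}^t(dx^1\ \cdots\ dx^{n+1})$, and condition 2) says that the $dx^{n+1}$-component of $\omega^i{}_j$ is $-\frac{\delta^i{}_j}{n+1}$. Hence
\[
\omega'=\omega+\frac1{n+1}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}
\]
has vanishing last column, and its remaining entries involve no $dx^{n+1}$; this is exactly the stated form $\omega'=\begin{pmatrix}
\alpha & 0\\
\beta & 0
\end{pmatrix}$.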
\begin{remark}
The connection $\nabla$ is torsion-free if and only if we have $\alpha^i{}_{jk}=\alpha^i{}_{kj}$ and $\beta^i{}_{jk}=\beta^i{}_{kj}$.
\end{remark}
\begin{remark}
\label{rem4.17}
The transition rule of $\alpha$ and $\beta$ under changes of coordinates is given as follows.
We have
\begin{align*}
\omega&=\begin{pmatrix}
D\psi & 0\\
\partial\log J\psi & 1\end{pmatrix}^{-1}d\begin{pmatrix}
D\psi & 0\\
\partial\log J\psi & 1\end{pmatrix}+\begin{pmatrix}
D\psi & 0\\
\partial\log J\psi & 1\end{pmatrix}^{-1}\widehat\omega\begin{pmatrix}
D\psi & 0\\
\partial\log J\psi & 1\end{pmatrix}\\*
&=\begin{pmatrix}
(D\psi)^{-1}dD\psi & 0\\
-(\partial\log J\psi)(D\psi)^{-1}dD\psi+d\partial\log J\psi & 0\end{pmatrix}\\*
&\hphantom{{}={}}
+\begin{pmatrix}
(D\psi)^{-1}\widehat\alpha D\psi & 0\\
-(\partial\log J\psi)(D\psi)^{-1}\widehat\alpha D\psi+\widehat\beta D\psi & 0
\end{pmatrix}\\*
&\hphantom{{}={}}
-\frac1{n+1}\left(I_{n+1}d\widehat{x}^{n+1}+\begin{pmatrix}
(D\psi)^{-1}d\widehat{x}\partial\log J\psi & (D\psi)^{-1}d\widehat{x}\\
-(\partial\log J\psi)(D\psi)^{-1}d\widehat{x}\partial\log J\psi & -(\partial\log J\psi)(D\psi)^{-1}d\widehat{x}
\end{pmatrix}\right)\\*
&=\begin{pmatrix}
(D\psi)^{-1}dD\psi & 0\\
-(\partial\log J\psi)(D\psi)^{-1}dD\psi+d\partial\log J\psi & 0\end{pmatrix}\\*
&\hphantom{{}={}}
+\begin{pmatrix}
(D\psi)^{-1}\widehat\alpha D\psi & 0\\
-(\partial\log J\psi)(D\psi)^{-1}\widehat\alpha D\psi+\widehat\beta D\psi & 0
\end{pmatrix}\\*
&\hphantom{{}={}}
-\frac1{n+1}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}\\*
&\hphantom{{}={}}
-\frac1{n+1}\begin{pmatrix}
I_nd\log J\psi+dx\partial\log J\psi & 0\\
-(d\log J\psi)\partial\log J\psi & 0
\end{pmatrix}.
\end{align*}
It follows that
\begin{align*}
\alpha&=(D\psi)^{-1}dD\psi-\frac1{n+1}(I_nd\log J\psi+dx\partial\log J\psi)+(D\psi)^{-1}\widehat\alpha D\psi,\\*
\beta&=-(\partial\log J\psi)(D\psi)^{-1}dD\psi+d\partial\log J\psi+\frac1{n+1}(d\log J\psi)\partial\log J\psi\\*
&\hphantom{{}={}}
-(\partial\log J\psi)(D\psi)^{-1}\widehat\alpha D\psi+\widehat\beta D\psi.
\end{align*}
Note that we have
\begin{align*}
\alpha^i{}_i&=\widehat{\alpha}^i{}_i,\\*
\beta&=-(\partial\log J\psi)\alpha-\frac1{n+1}((\partial\log J\psi)d\log J\psi+(d\log J\psi)\partial\log J\psi)\\*
&\hphantom{{}={}}
+d\partial\log J\psi+\frac1{n+1}(d\log J\psi)\partial\log J\psi+\widehat\beta D\psi\\*
&=d\partial\log J\psi-\frac1{n+1}(\partial\log J\psi)d\log J\psi-(\partial\log J\psi)\alpha+\widehat\beta D\psi.
\end{align*}
\end{remark}
\begin{remark}
\label{rem5.4}
If $\omega^H$ denotes the connection matrix of\/ $\nabla$ with respect to $\mathscr{F}^H$, then we have by the equality~\eqref{eq5.1-1} that
\begin{align*}
\omega^H&=\begin{pmatrix}
I_n\\
-f & 1\end{pmatrix}^{-1}d\begin{pmatrix}
I_n\\
-f & 1\end{pmatrix}+\begin{pmatrix}
I_n\\
-f & 1\end{pmatrix}^{-1}\omega\begin{pmatrix}
I_n\\
-f & 1\end{pmatrix}\\*
&=\begin{pmatrix}
O_n\\
-df & 0
\end{pmatrix}+\begin{pmatrix}
\alpha & 0 \\
f\alpha+\beta & 0
\end{pmatrix}\\*
&\hphantom{{}={}}
-\frac1{n+1}\begin{pmatrix}
I_ndx^{n+1}-dx f & dx\\
fdx^{n+1}-(fdx+dx^{n+1})f & fdx+dx^{n+1}
\end{pmatrix}\\*
&=\begin{pmatrix}
\alpha+\frac1{n+1}(I_nfdx+dxf) & 0\\
-df+\frac1{n+1}fdxf+f\alpha+\beta & 0
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
I_n\underline{\omega} & dx\\
0 & \underline{\omega}
\end{pmatrix}.
\end{align*}
\end{remark}
Note that $(dx^1,\ldots,dx^n,\underline{\omega})$ is the dual to $\mathscr{F}^H$.
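Indeed, this duality is immediate from the definitions: since $e_j=\pdif{}{x^j}-f_j\pdif{}{x^{n+1}}$ and $\underline{\omega}=f_1dx^1+\cdots+f_ndx^n+dx^{n+1}$, we have
\[
dx^i(e_j)=\delta^i{}_j,\qquad dx^i(e_{n+1})=0,\qquad
\underline{\omega}(e_j)=f_j-f_j=0,\qquad \underline{\omega}(e_{n+1})=1.
\]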
\begin{definition}
We set
\begin{align*}
\alpha^H&=\alpha+\frac1{n+1}(I_nfdx+dxf),\\*
\beta^H&=-df+\frac1{n+1}fdxf+f\alpha+\beta.
\end{align*}
\end{definition}
We have the following
\begin{lemma}
\label{lem5.2}
The connection form of\/ $\nabla^{\underline{\omega}}$ with respect to $\left(\pdif{}{x^1},\ldots,\pdif{}{x^n}\right)$ is equal to $\alpha^H$.
Indeed, we have
\begin{align*}
\alpha^H&=D\psi^{-1}dD\psi+D\psi^{-1}\widehat{\alpha}^HD\psi,\\*
\beta^H&=\widehat{\beta}^HD\psi.
\end{align*}
\end{lemma}
\begin{proof}
The first part follows directly from Definition~\ref{defTW}.
Let $(U,\varphi)$ and $(\widehat{U},\widehat{\varphi})$ be charts, and $\omega^H$ and $\widehat{\omega}^H$ connection forms of $\nabla$ with respect to $\mathscr{F}^H$ and $\widehat{\mathscr{F}}^H$, respectively.
Then, by Lemma~\ref{lem3.1}, we have
\begin{align*}
\omega^H&=\begin{pmatrix}
D\psi\\
& 1
\end{pmatrix}^{-1}d\begin{pmatrix}
D\psi\\
& 1
\end{pmatrix}+\begin{pmatrix}
D\psi\\
& 1
\end{pmatrix}^{-1}\widehat{\omega}^H\begin{pmatrix}
D\psi\\
& 1
\end{pmatrix}\\*
&=\begin{pmatrix}
D\psi^{-1}dD\psi & 0\\
0 & 0
\end{pmatrix}+\begin{pmatrix}
D\psi^{-1}\widehat\alpha^HD\psi & 0\\
\widehat\beta^HD\psi & 0
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
I_n\underline{\omega} & D\psi^{-1}d\widehat{x}\\
0 & \underline{\omega}
\end{pmatrix}\\*
&=\begin{pmatrix}
D\psi^{-1}dD\psi+D\psi^{-1}\widehat\alpha^HD\psi & 0\\
\widehat\beta^HD\psi & 0
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
I_n\underline{\omega} & dx\\
0 & \underline{\omega}
\end{pmatrix}.\qedhere
\end{align*}
\end{proof}
\begin{theorem}
\label{thm1.15}
If\/ $\nabla$ is a TW-connection on $T\mathcal{E}(M)$ and if $\underline{\omega}$ and $\underline{\omega}'$ are connection forms on $\mathcal{E}(M)$, then
\begin{enumerate}[\textup{\theenumi)}]
\item
$\underline{\omega}'-\underline{\omega}=\pi^*\rho$ for some $1$-form $\rho$ on $M$, and
\item
We have
\[
\nabla^{\underline{\omega}'}-\nabla^{\underline{\omega}}=\frac1{n+1}\rho\otimes\mathrm{id}+\frac1{n+1}\mathrm{id}\otimes\rho.
\]
\item
$\nabla^{\underline{\omega}}$ and $\nabla^{\underline{\omega}'}$ are projectively equivalent.
\end{enumerate}
\end{theorem}
\begin{proof}
First, we have $\underline{\omega}'(\xi)-\underline{\omega}(\xi)=0$ and $R_a{}^*(\underline{\omega}'-\underline{\omega})=\underline{\omega}'-\underline{\omega}$.
Hence we have $\underline{\omega}'-\underline{\omega}=\pi^*\rho$ for some $1$-form $\rho$ on $M$.
2) follows from Remark~\ref{rem5.4} and Lemma~\ref{lem5.2}.
3) follows from 2) and Lemma~\ref{lem1.12}.
\end{proof}
\begin{theorem}
Fix a TW-connection $\nabla$ on $T\mathcal{E}(M)$ and a connection form $\underline{\omega}$ on $\mathcal{E}(M)$.
Then, there is a one-to-one correspondence between the set of connection forms on $\mathcal{E}(M)$ and the set of linear connections in the projective equivalence class represented by $\nabla^{\underline{\omega}}$.
\end{theorem}
\begin{proof}
Let $\mathcal{D}$ be a linear connection projectively equivalent to $\nabla^{\underline{\omega}}$.
There is a $1$-form $\rho$ such that $\mathcal{D}-\nabla^{\underline{\omega}}=\frac1{n+1}\rho\otimes\mathrm{id}+\frac1{n+1}\mathrm{id}\otimes\rho$.
If we set $\underline{\omega}'=\underline{\omega}+\pi^*\rho$, then we have $\nabla^{\underline{\omega}'}=\mathcal{D}$ by Theorem~\ref{thm1.15}.
Suppose conversely that $\nabla^{\underline{\omega}_1}=\nabla^{\underline{\omega}_2}$.
Then $\underline{\omega}_1=\underline{\omega}_2$ by Lemma~\ref{lem1.14}.
\end{proof}
\begin{definition}
If $\omega$ is a $\mathfrak{gl}_n({\mathbb R})$-valued $1$-form, then we set $R(\omega)=d\omega+\omega\wedge\omega$.
\end{definition}
Needless to say, $R(\omega)$ is the curvature form when $\omega$ is the connection form of a linear connection.
\begin{lemma}
\label{lem4.18}
The curvature form of a TW-connection with respect to $\mathscr{F}$ is given by
\[
R(\omega)=d\omega+\omega\wedge\omega=\begin{pmatrix}
d\alpha+\alpha\wedge\alpha-\frac1{n+1}dx\wedge\beta & -\frac1{n+1}\alpha\wedge dx\\
d\beta+\beta\wedge\alpha & -\frac1{n+1}\beta\wedge dx
\end{pmatrix}.
\]
The TW-connection is torsion-free if and only if $\alpha\wedge dx=0$ and $\beta\wedge dx=0$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
&\hphantom{{}={}}d\omega+\omega\wedge\omega\\*
&=d\omega'+\omega'\wedge\omega'\\*
&\hphantom{{}={}}-\frac1{n+1}\omega'\wedge\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}\wedge\omega'\\*
&\hphantom{{}={}}+\frac1{(n+1)^2}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}\wedge\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}\\*
&=\begin{pmatrix}
d\alpha+\alpha\wedge\alpha & 0\\
d\beta+\beta\wedge\alpha & 0
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
\alpha\wedge dx^{n+1}+dx^{n+1}\wedge\alpha+dx\wedge\beta & \alpha\wedge dx\\
\beta\wedge dx^{n+1}+dx^{n+1}\wedge\beta & \beta\wedge dx
\end{pmatrix}\\*
&=\begin{pmatrix}
d\alpha+\alpha\wedge\alpha & 0\\
d\beta+\beta\wedge\alpha & 0
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
dx\wedge\beta & \alpha\wedge dx\\
0 & \beta\wedge dx
\end{pmatrix}.
\end{align*}
If $\nabla$ is torsion-free, then we have $(\alpha\wedge dx)^i=\alpha^i{}_{jk}dx^k\wedge dx^j=0$.
Similarly, we have $\beta\wedge dx=0$.
The converse is easy.
\end{proof}
In view of Definition~\ref{def1.2}, we introduce the following
\begin{definition}
We regard the curvature form $d\omega+\omega\wedge\omega$ as being valued in $\mathfrak{pgl}_{n+1}({\mathbb R})=\mathfrak{m}\oplus\mathfrak{gl}_n({\mathbb R})\oplus\mathfrak{m}^*$, and represent the curvature form as $(\rho^i,\rho^i{}_j,\rho_j)$.
We call $\rho^i$ the \textit{torsion} and $(\rho^i{}_j,\rho_j)$ the \textit{curvature} of $\nabla$ as a projective connection.
\end{definition}
\begin{lemma}
\label{lem5.4}
We have
\begin{alignat*}{3}
&\rho^i&{}={}&-\frac1{n+1}\alpha\wedge dx,\\*[-2pt]
&\rho^i{}_j&{}={}&d\alpha+\alpha\wedge\alpha-\frac1{n+1}(dx\wedge\beta-\beta\wedge dxI_n),\\*
& &{}={}&d\alpha+\alpha\wedge\alpha+dx\wedge\beta'-\beta'\wedge dxI_n,\\*
&\rho_j&{}={}&d\beta+\beta\wedge\alpha\\*
& &{}={}&-(n+1)(d\beta'+\beta'\wedge\alpha),
\end{alignat*}
where $\beta'=-\frac1{n+1}\beta$.
\end{lemma}
\begin{definition}
We define the Ricci curvature $\mathop{\mathrm{Ric}}(\nabla)$ of a TW-connection $\nabla$ by
\begin{align*}
&\hphantom{{}={}}
\mathop{\mathrm{Ric}}(\nabla)_{jk}\\*
&=\rho^i{}_{jik}\\*
&=\pdif{\alpha^i{}_{jk}}{x^i}-\pdif{\alpha^i{}_{ji}}{x^k}+\alpha^i{}_{\gamma i}\alpha^\gamma{}_{jk}-\alpha^i{}_{\gamma k}\alpha^\gamma{}_{ji}
-\frac1{n+1}(n\beta_{jk}-\beta_{jk}+\beta_{jk}-\beta_{kj})\\*
&=\pdif{\alpha^i{}_{jk}}{x^i}-\pdif{\alpha^i{}_{ji}}{x^k}+\alpha^i{}_{\gamma i}\alpha^\gamma{}_{jk}-\alpha^i{}_{\gamma k}\alpha^\gamma{}_{ji}
-\frac1{n+1}(n\beta_{jk}-\beta_{kj}).
\end{align*}
\end{definition}
The fundamental theorem for TW-connections by Roberts~\cite{Roberts} holds in the following form in the present setting.
\begin{theorem}
\label{thm6.13}
Suppose that $\dim M\geq2$ and a projective structure of $M$ is given by an affine connection $\nabla_M$.
Let $\Psi_M$ be the canonical positive odd scalar density on $\mathcal{E}(M)$ and $\mu_M$ the reduced torsion of\/ $\nabla_M$ regarded as a form on $\mathcal{E}(M)$ by pull-back.
Then, there exists a unique TW-connection $\nabla$ such that
\begin{enumerate}[\textup{\theenumi)}]
\item
$\nabla\Psi_M=-\mu_M\otimes\Psi_M$.
\item
$\nabla$ is Ricci-flat.
\item
$\nabla$ induces the given projective equivalence class on $M$.
\end{enumerate}
Moreover, there is a unique connection on $\mathcal{E}(M)$ such that $\alpha^H$ is the connection form of\/ $\nabla_M$ with respect to~$\left(\pdif{}{x^i}\right)_{1\leq i\leq n}$.
Indeed, if $\{\Gamma^i{}_{jk}\}$ denotes the Christoffel symbols of\/ $\nabla_M$, then we have
\begin{align*}
\alpha^i{}_{jk}&=\Gamma^i{}_{jk}-\frac1{2(n+1)}(\delta^i{}_k(\Gamma^a{}_{aj}+\Gamma^a{}_{ja})+\delta^i{}_j(\Gamma^a{}_{ak}+\Gamma^a{}_{ka})),\\*
\beta_{jk}&=\dfrac{1}{n-1}\left(n\left(\pdif{\alpha^i{}_{jk}}{x^i}+\pdif{\mu_M{}_j}{x^k}-\mu_M{}_a\alpha^a{}_{jk}-\alpha^a{}_{jb}\alpha^b{}_{ak}\right)\right.\\*
&\hphantom{\Pi_{jk}{}={}\frac{-1}{n-1}\biggl(}\left.+\left(\pdif{\alpha^i{}_{kj}}{x^i}+\pdif{\mu_M{}_k}{x^j}-\mu_M{}_a\alpha^a{}_{kj}-\alpha^a{}_{kb}\alpha^b{}_{aj}\right)\right),
\end{align*}
where
\[
\begin{pmatrix}
\alpha & 0\\
\beta & 0
\end{pmatrix}-\frac1{n+1}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}
\]
is the connection matrix of\/ $\nabla$ with respect to $\mathscr{F}$.
The connection form on $\mathcal{E}(M)$ is given by $\underline{\omega}=\frac12(\Gamma^\alpha{}_{\alpha j}+\Gamma^\alpha{}_{j\alpha})dx^j+dx^{n+1}$.
\end{theorem}
\begin{proof}
Let $\{\Gamma^i{}_{jk}\}$ be the Christoffel symbols of $\nabla_M$ and set $\alpha^H=(\Gamma^i{}_{jk}dx^k)$.
If we fix a connection $\underline{\omega}=fdx+dx^{n+1}$ on $\mathcal{E}(M)$, then a TW-connection is given by $\begin{pmatrix}
\alpha^H-\frac1{n+1}(I_nfdx+dxf) & 0\\
df+\frac1{n+1}fdx f-f\alpha^H+\beta^H & 0
\end{pmatrix}-\dfrac1{n+1}\begin{pmatrix}
I_ndx^{n+1} & dx\\
0 & dx^{n+1}
\end{pmatrix}$, where $\beta^H$ is an $\mathfrak{m}^*$-valued $1$-form (see Remark~\ref{rem5.4}).
Note that if we replace $\nabla_M$ by a projectively equivalent connection, then $\underline\omega$ is modified while the TW-connection retains the same form.
We have
\[
\nabla\Psi_M=(-(\alpha^H)^\alpha{}_{\alpha j}dx^j+f_jdx^j)\otimes\Psi_M=(-\Gamma^\alpha{}_{\alpha j}dx^j+f_jdx^j)\otimes\Psi_M.
\]
By the condition~1), we have $\Gamma^\alpha{}_{\alpha j}-f_j=\frac12(\Gamma^\alpha{}_{\alpha j}-\Gamma^\alpha{}_{j\alpha})$ so that
\[
f_j=\frac12(\Gamma^\alpha{}_{\alpha j}+\Gamma^\alpha{}_{j \alpha}).
\]
If we set $\alpha=\alpha^H-\frac1{n+1}(I_nfdx+dxf)$ and $\beta=df+\frac1{n+1}fdx f-f\alpha^H+\beta^H$, then we have by the condition~2) that
\[
\pdif{\alpha^i{}_{jk}}{x^i}-\pdif{\alpha^i{}_{ji}}{x^k}+\alpha^i{}_{\gamma i}\alpha^\gamma{}_{jk}-\alpha^i{}_{\gamma k}\alpha^\gamma{}_{ji}-\frac1{n+1}(n\beta_{jk}-\beta_{kj})=0.
\]
It follows that $\beta_{jk}$ are given as in the statement.
Conversely, if we define $\alpha^i{}_{jk}$ and $\beta_{jk}$ as in the statement, then $\nabla$ is a TW-connection with the required properties.
Since $\alpha^i{}_{jk}$ and $\beta_{jk}$ are independent of $\underline\omega$, $\nabla$ is unique.
\end{proof}
It is natural to introduce the following
\begin{definition}
We call the TW-connection given by Theorem~\ref{thm6.13} the \textit{normal TW-connection}.
\end{definition}
\begin{remark}
If we only require uniqueness of normal TW-connections, then we can modify the normalizing conditions 1) and 2) in Theorem~\ref{thm6.13} for reasons similar to those in Remark~\ref{rem2.17}.
The conditions are so chosen that the components of the normal TW-connection coincide with those of the normal Cartan projective connection up to multiplication by constants.
Actually, $\alpha^i{}_{jk}$ and $\beta'{}_{jk}$ coincide with $\Pi^i{}_{jk}$ and $\Pi_{jk}$ given by Proposition~\ref{prop2.9}.
\end{remark}
\begin{remark}
Suppose that the projective structure in Theorem~\ref{thm6.13} is torsion-free.
Then, $\nabla_M$ is always torsion-free so that the condition~1) reduces to $\nabla\Psi_M=0$, which is independent of\/ $\nabla_M$.
In addition, we have
\begin{align*}
\alpha^i{}_{jk}&=\Gamma^i{}_{jk}-\frac1{n+1}(\delta^i{}_k\Gamma^a{}_{aj}+\delta^i{}_j\Gamma^a{}_{ak}),\\*
\beta_{jk}&=\dfrac{n+1}{n-1}\left(\pdif{\alpha^i{}_{jk}}{x^i}-\alpha^a{}_{jb}\alpha^b{}_{ak}\right).
\end{align*}
\end{remark}
\begin{remark}
If we allow the torsion to be modified while the geodesics are kept, then we can uniquely find a TW-connection which corresponds to the Hlavat\'y connection \textup{(}Remark~\ref{rem2.23}\textup{)}.
We can also uniquely find a TW-connection which corresponds to the connection of which the Christoffel symbols are $\left\{\frac12(\Gamma^i{}_{jk}+\Gamma^i{}_{kj})\right\}$.
\end{remark}
\section{Structural equivalences of TW-connections}
We continue to follow the arguments in \cite{Roberts}.
\begin{definition}[\cite{Roberts2}]
TW-connections $\nabla$ and $\nabla'$ are said to be \textit{structurally equivalent} if $\nabla$ and $\nabla'$ induce the same projective structure.
\end{definition}
\begin{theorem}
\label{thm1.18}
TW-connections $\nabla$ and $\nabla'$ are structurally equivalent if and only if there is a $(0,2)$-tensor $\beta$ on $\mathcal{E}(M)$ such that
\[
\tag{\thetheorem{}a}
\label{eq1.19}
\left\{\begin{aligned}
&L_\xi\beta=0,\\*
&\beta(\xi,\xi)=0,
\end{aligned}
\right.
\]
and
\[
\tag{\thetheorem{}b}
\label{eq1.19-3}
\nabla'=\nabla+(\iota'_\xi\beta)\otimes\mathrm{id}+\mathrm{id}\otimes(\iota'_\xi\beta)-\beta\otimes\xi,
\]
where $\iota'_\xi\beta=\beta(\;\cdot\;,\xi)$.
Such a $\beta$ is unique.
If\/ $\nabla$ and $\nabla'$ are torsion-free, then $\beta$ is symmetric.
\end{theorem}
Before proving Theorem~\ref{thm1.18}, we show the following
\begin{lemma}
If the condition \eqref{eq1.19} holds, then there is a $1$-form $\overline\beta$ on $M$ such that $\iota'_\xi\beta=\pi^*\overline\beta$.
\end{lemma}
\begin{proof}
Let $\iota$ be the usual inner product.
We locally represent $\beta$ as $\beta=\beta_{ij}dx^i\otimes dx^j$.
We have $\iota'_\xi\beta=\beta_{i,n+1}dx^i$.
On the other hand, we have $0=L_\xi\beta=\pdif{\beta_{ij}}{x^{n+1}}dx^i\otimes dx^j$.
Hence we have $\iota_\xi(\iota'_\xi\beta)=\beta(\xi,\xi)=0$ and $\iota_\xi d(\iota'_\xi\beta)=\pdif{\beta_{i,n+1}}{x^{n+1}}dx^i-\pdif{\beta_{n+1,n+1}}{x^j}dx^j=0$ by \eqref{eq1.19}.
It follows that $L_\xi(\iota'_\xi\beta)=\iota_\xi d(\iota'_\xi\beta)+d\iota_\xi(\iota'_\xi\beta)=0$, so that $\iota'_\xi\beta$ is basic and we have $\iota'_\xi\beta=\pi^*\overline\beta$ for some $1$-form $\overline\beta$ on $M$.
\end{proof}
\begin{remark}
\label{rem1.21}
If \eqref{eq1.19-3} holds and if $\underline{\omega}$ is a connection form on $\mathcal{E}(M)$, then we have
\[
\nabla'{}^{\underline{\omega}}=\nabla^{\underline{\omega}}+\overline{\beta}\otimes\mathrm{id}+\mathrm{id}\otimes\overline{\beta}.
\]
\end{remark}
\begin{proof}[Proof of Theorem \textup{\ref{thm1.18}}]
The proof is essentially identical to that of Theorem~3.6 in~\cite{Roberts}.
Keep in mind that connections need not be torsion-free.
First assume that there exists a $\beta$ which satisfies \eqref{eq1.19} and \eqref{eq1.19-3}.
If we set
\[
\widehat\nabla=\nabla+(\iota'_\xi\beta)\otimes\mathrm{id}+\mathrm{id}\otimes(\iota'_\xi\beta)-\beta\otimes\xi,
\]
then $\widehat\nabla$ is a TW-connection.
Note that $\beta$ is invariant under the ${\mathbb R}$-action because $L_\xi\beta=0$.
Let now $\underline{\omega}$ be a connection form on $\mathcal{E}(M)$ and $X,Y\in\mathfrak{X}(M)$.
If $\widetilde{X}$ and $\widetilde{Y}$ denote horizontal lifts of $X$ and $Y$, then we have
\[
\widehat\nabla_{\widetilde{X}}\widetilde{Y}=\nabla_{\widetilde{X}}\widetilde{Y}+\pi^*\overline{\beta}(\widetilde{X})\widetilde{Y}+\pi^*\overline{\beta}(\widetilde{Y})\widetilde{X}-\beta(\widetilde{X},\widetilde{Y})\xi
\]
for some $1$-form $\overline{\beta}$ on $M$.
Hence we have
\[
\tag{\ref{thm1.18}c}
\label{eq1.20}
\widehat\nabla^{\underline{\omega}}{}_XY=\nabla^{\underline{\omega}}{}_XY+\overline\beta(X)Y+\overline{\beta}(Y)X,
\]
which means that $\nabla^{\underline{\omega}}$ and $\widehat\nabla^{\underline{\omega}}$ are projectively equivalent.
Hence $\nabla$ and $\widehat{\nabla}$ are structurally equivalent.
Suppose conversely that $\nabla$ and $\widehat\nabla$ are structurally equivalent.
If we fix a connection form $\underline{\omega}$, then
\[
\widehat\nabla^{\underline{\omega}}=\nabla^{\underline{\omega}}+\overline{\beta}\otimes\mathrm{id}+\mathrm{id}\otimes\overline{\beta}
\]
for some $1$-form $\overline{\beta}$ on $M$.
We set, for $\widetilde{X},\widetilde{Y}\in\mathfrak{X}(\mathcal{E}(M))$,
\[
\beta(\widetilde{X},\widetilde{Y})=\omega(\nabla_{\widetilde{X}}\widetilde{Y}-\widehat{\nabla}_{\widetilde{X}}\widetilde{Y})+\pi^*\overline{\beta}(\widetilde{X})\omega(\widetilde{Y})+\pi^*\overline\beta(\widetilde{Y})\omega(\widetilde{X}).
\]
It is clear that $\beta$ is a $(0,2)$-tensor.
We have $L_\xi\beta=0$ and $\beta(\xi,\xi)=0$ because $\nabla$ and $\widehat\nabla$ are TW-connections.
If in addition $\nabla$ and $\widehat\nabla$ are torsion-free, then $\beta$ is symmetric.
We will show that the equality \eqref{eq1.19-3} holds.
Let $\widetilde{X},\widetilde{Y}\in\mathfrak{X}(\mathcal{E}(M))$.
First assume that $\widetilde{X}$ and $\widetilde{Y}$ are horizontal lifts of $X,Y\in\mathfrak{X}(M)$.
Then, the equality \eqref{eq1.20} holds.
If $\widetilde{\nabla^{\underline{\omega}}{}_XY}$ and $\widetilde{\widehat{\nabla}^{\underline{\omega}}{}_XY}$ denote the horizontal lifts of $\nabla^{\underline{\omega}}{}_XY$ and $\widehat{\nabla}^{\underline{\omega}}{}_XY$, then we have
\begin{align*}
\nabla_{\widetilde{X}}\widetilde{Y}&=\widetilde{\nabla^{\underline{\omega}}{}_XY}+\omega(\nabla_{\widetilde{X}}\widetilde{Y})\xi,\\*
\widehat{\nabla}_{\widetilde{X}}\widetilde{Y}&=\widetilde{\widehat{\nabla}^{\underline{\omega}}{}_XY}+\omega(\widehat{\nabla}_{\widetilde{X}}\widetilde{Y})\xi.
\end{align*}
It follows that
\begin{align*}
\widehat{\nabla}_{\widetilde{X}}\widetilde{Y}&=\widetilde{\widehat{\nabla}^{\underline{\omega}}{}_XY}+\omega(\widehat{\nabla}_{\widetilde{X}}\widetilde{Y})\xi\\
&=\widetilde{\nabla^{\underline{\omega}}{}_XY}+\pi^*\overline\beta(\widetilde{X})\widetilde{Y}+\pi^*\overline\beta(\widetilde{Y})\widetilde{X}+\omega(\widehat{\nabla}_{\widetilde{X}}\widetilde{Y})\xi\\*
&=\nabla_{\widetilde{X}}\widetilde{Y}-\omega(\nabla_{\widetilde{X}}\widetilde{Y})\xi+\pi^*\overline\beta(\widetilde{X})\widetilde{Y}+\pi^*\overline\beta(\widetilde{Y})\widetilde{X}+\omega(\widehat{\nabla}_{\widetilde{X}}\widetilde{Y})\xi.
\end{align*}
On the other hand, we have
\begin{align*}
\iota'_\xi\beta(\widetilde{X})&=\omega(\nabla_{\widetilde{X}}\xi-\widehat{\nabla}_{\widetilde{X}}\xi)+\pi^*\overline\beta(\widetilde{X})\\*
&=\overline\beta(X).
\end{align*}
Similarly, we have $\iota'_\xi\beta(\widetilde{Y})=\overline{\beta}(Y)$.
Hence we have
\begin{align*}
\widehat{\nabla}_{\widetilde{X}}\widetilde{Y}&=\nabla_{\widetilde{X}}\widetilde{Y}-\omega(\nabla_{\widetilde{X}}\widetilde{Y})\xi+\pi^*\overline\beta(\widetilde{X})\widetilde{Y}+\pi^*\overline\beta(\widetilde{Y})\widetilde{X}+\omega(\widehat{\nabla}_{\widetilde{X}}\widetilde{Y})\xi\\*
&=\nabla_{\widetilde{X}}\widetilde{Y}+\iota'_\xi\beta(\widetilde{X})\widetilde{Y}+\iota'_\xi\beta(\widetilde{Y})\widetilde{X}+\omega(\widehat{\nabla}_{\widetilde{X}}\widetilde{Y}-\nabla_{\widetilde{X}}\widetilde{Y})\xi\\*
&=\nabla_{\widetilde{X}}\widetilde{Y}+\iota'_\xi\beta(\widetilde{X})\widetilde{Y}+\iota'_\xi\beta(\widetilde{Y})\widetilde{X}-\beta(\widetilde{X},\widetilde{Y})\xi.
\end{align*}
Next, we assume that $\widetilde{Y}=\xi$.
We have $\beta(\widetilde{X},\widetilde{Y})=\pi^*\overline\beta(\widetilde{X})$ so that
\begin{align*}
&\hphantom{{}={}}
\nabla_{\widetilde{X}}\xi+\iota'_\xi\beta(\widetilde{X})\xi+\iota'_\xi\beta(\xi)\widetilde{X}-\beta(\widetilde{X},\xi)\xi\\*
&=-\frac1{n+1}\widetilde{X}\\*
&=\widehat{\nabla}_{\widetilde{X}}\xi.
\end{align*}
We assume lastly that $\widetilde{X}=\xi$.
We have
\begin{align*}
&\hphantom{{}={}}
\nabla_\xi\widetilde{Y}+\iota'_\xi\beta(\xi)\widetilde{Y}+\iota'_\xi\beta(\widetilde{Y})\xi-\beta(\xi,\widetilde{Y})\xi\\*
&=\nabla_\xi\widetilde{Y}+\iota'_\xi\beta(\widetilde{Y})\xi-\omega(\nabla_\xi\widetilde{Y}-\widehat{\nabla}_\xi\widetilde{Y})\xi-\pi^*\overline{\beta}(\widetilde{Y})\xi\\*
&=\widehat{\nabla}_\xi\widetilde{Y}.
\end{align*}
Therefore, the equality \eqref{eq1.19-3} holds.
Finally, suppose that $\beta'$ also satisfies the equalities \eqref{eq1.19} and \eqref{eq1.19-3} when $\beta$ is replaced with $\beta'$.
Then we have $\iota'_\xi\beta=\pi^*\overline{\beta}$ and $\iota'_\xi\beta'=\pi^*\overline{\beta}'$ for some $1$-forms $\overline{\beta}$ and $\overline{\beta}'$ on $M$.
By Remark \ref{rem1.21}, we have $\overline\beta=\overline\beta'$.
On the other hand, we have
\begin{align*}
\nabla'_{\widetilde{X}}\widetilde{Y}-\nabla_{\widetilde{X}}\widetilde{Y}
&=\iota'_\xi\beta(\widetilde{X})\widetilde{Y}+\iota'_\xi\beta(\widetilde{Y})\widetilde{X}-\beta(\widetilde{X},\widetilde{Y})\xi\\*
&=\overline{\beta}(X)\widetilde{Y}+\overline{\beta}(Y)\widetilde{X}-\beta(\widetilde{X},\widetilde{Y})\xi.
\intertext{Similarly, we have}
\nabla'_{\widetilde{X}}\widetilde{Y}-\nabla_{\widetilde{X}}\widetilde{Y}
&=\overline{\beta}(X)\widetilde{Y}+\overline{\beta}(Y)\widetilde{X}-\beta'(\widetilde{X},\widetilde{Y})\xi.
\end{align*}
Hence we have $\beta=\beta'$.
\end{proof}
\section{Examples}
We present examples whose torsions are non-trivial and whose curvatures are trivial.
Let $T^2={\mathbb R}^2/{\mathbb Z}^2$ be the standard torus and $(x^1,x^2)$ the standard coordinates.
We study projective structures of $T^2$ which are curvature-free and invariant under the standard $T^2$-action.
First of all, the Christoffel symbols of such connections are constant.
Let
\begin{align*}
\mathcal{T}&=\{\text{projective structures of $T^2$ invariant under the $T^2$-action and curvature-free}\},\\*
\mathcal{T}'&=\{\tau\in\mathcal{T}\mid\text{$\tau$ is with torsion}\}.
\end{align*}
Let $\omega=(\omega^i,\omega^i{}_j,\omega_j)$ denote the normal projective connection associated with the projective structure given by an affine connection $\nabla$, and $\sigma$ the section given by Proposition~\ref{prop2.9}.
Let $(\Omega^i,\Omega^i{}_j,\Omega_j)$ be the torsion and the curvature of $\omega$.
We have $\sigma^*\omega^i=dx^i$.
We have naturally $\widetilde{P}^2(T^2)\cong T^2\times\widetilde{G}^2$.
If $P\subset\widetilde{P}^2(T^2)$ is a projective structure, then we have $P\cong T^2\times H^2\subset T^2\times\widetilde{G}^2$.
\begin{example}
\label{ex6.1}
We consider an affine connection $\nabla$ of which the Christoffel symbols~are
\begin{alignat*}{6}
&\Gamma^1{}_{11}=1, &\quad & \Gamma^1{}_{12}=-\frac12, & \qquad & \Gamma^1{}_{21}=-\frac12, &\quad & \Gamma^1{}_{22}=0,\\*
&\Gamma^2{}_{11}=1, & & \Gamma^2{}_{12}=\frac32, & & \Gamma^2{}_{21}=-\frac12, & & \Gamma^2{}_{22}=-1.
\end{alignat*}
We set $g=(\delta^i{}_j,-\Gamma^i{}_{jk})\in\widetilde{G}^2$, which does not belong to $H^2$ because $\Gamma^2{}_{21}\neq\Gamma^2{}_{12}$.
We define $\sigma_0\colon T^2\to\widetilde{P}^2(T^2)$ by $\sigma_0(p)=(p,g)$ and define an $H^2$-subbundle $P$ of $\widetilde{P}^2(T^2)$ by
\[
P=\{u\in\widetilde{P}^2(T^2)\mid\exists p\in T^2,\ h\in H^2,\ u=\sigma_0(p).h\}.
\]
We have $\Gamma^\alpha{}_{\alpha1}=\frac12$, $\Gamma^\alpha{}_{\alpha2}=-\frac32$, $\Gamma^\alpha{}_{1\alpha}=\frac52$ and $\Gamma^\alpha{}_{2\alpha}=-\frac32$ so that
\begin{alignat*}{2}
&\mu_1=-1, & \quad & \mu_2=0,\\*
&\nu_1=-\frac12, & & \nu_2=\frac12.
\end{alignat*}
It follows that
\begin{alignat*}{6}
&\Pi^1{}_{11}=0, & \quad & \Pi^1{}_{12}=0, & \qquad & \Pi^1{}_{21}=0, & \quad & \Pi^1{}_{22}=0,\\*
&\Pi^2{}_{11}=1, & \quad & \Pi^2{}_{12}=1, & \qquad & \Pi^2{}_{21}=-1, & \quad &\Pi^2{}_{22}=0,\\*
&\Pi_{11}=-1, & \quad & \Pi_{12}=0, & \qquad & \Pi_{21}=0, & \quad & \Pi_{22}=0.
\end{alignat*}
We have therefore that
\begin{align*}
&\sigma^*\Omega^1=0,\quad \sigma^*\Omega^2=-2dx^1\wedge dx^2,\\*
&\sigma^*\Omega^i{}_j=0,\\*
&\sigma^*\Omega_j=0.
\end{align*}
Hence the connection $\nabla$ gives an element of $\mathcal{T}$ of which the torsion is non-trivial.
The normal TW-connection which corresponds to $\nabla$ is given as follows.
We have $\mathcal{E}(T^2)=T^2\times{\mathbb R}$.
Let $t$ be the standard coordinate for ${\mathbb R}$.
Then, the normal TW-connection is given by
\begin{align*}
\omega&=\begin{pmatrix}
\Pi^1{}_{1\alpha}dx^\alpha & \Pi^1{}_{2\alpha}dx^\alpha & 0\\
\Pi^2{}_{1\alpha}dx^\alpha & \Pi^2{}_{2\alpha}dx^\alpha & 0\\
-3\Pi_{1\alpha}dx^\alpha & -3\Pi_{2\alpha}dx^\alpha & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & 0 & dx^1\\
0 & dt & dx^2\\
0 & 0 & dt
\end{pmatrix}\\*
&=\begin{pmatrix}
0 & 0 & 0\\
dx^1+dx^2 & -dx^1 & 0\\
3dx^1 & 0 & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & 0 & dx^1\\
0 & dt & dx^2\\
0 & 0 & dt
\end{pmatrix},
\end{align*}
which is with torsion.
We have
\[
R(\omega)=\begin{pmatrix}
0 & 0 & 0\\
0 & 0 & \frac23dx^1\wedge dx^2\\
0 & 0 & 0
\end{pmatrix}
\]
so that $\omega$ is with torsion as a projective connection.
On the other hand, $\omega$ is curvature-free.
The correspondence between $(\Omega^i,\Omega^i{}_j,\Omega_j)$ and the components of $R(\omega)$ is given by Lemma~\ref{lem5.4}.
\end{example}
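The numerical data in Example~\ref{ex6.1} can also be verified mechanically. The following sketch (ours, not part of the formal development; the names \texttt{Gamma}, \texttt{Pi} and the helper \texttt{q} are ad hoc) recomputes $\Pi^i{}_{jk}$ from the Christoffel symbols via the formula of Theorem~\ref{thm6.13} with $n=2$, and $\Pi_{jk}$ via the quadratic formula used in the proof of the next theorem, with the stated values $\mu_1=-1$, $\mu_2=0$.

```python
# Numerical check of Example 6.1 (n = 2, so 2(n+1) = 6 and the overall
# factor in the Pi_{jk} formula is 1/3).  Gamma and mu are the values
# stated in the example; the formulas are those displayed in the text.
n = 2
idx = (1, 2)
Gamma = {(1, 1, 1): 1.0, (1, 1, 2): -0.5, (1, 2, 1): -0.5, (1, 2, 2): 0.0,
         (2, 1, 1): 1.0, (2, 1, 2): 1.5, (2, 2, 1): -0.5, (2, 2, 2): -1.0}
mu = {1: -1.0, 2: 0.0}  # reduced torsion, as stated in the example

# S_j = Gamma^a_{aj} + Gamma^a_{ja}
S = {j: sum(Gamma[a, a, j] + Gamma[a, j, a] for a in idx) for j in idx}

# Pi^i_{jk} = Gamma^i_{jk} - (delta^i_k S_j + delta^i_j S_k) / (2(n+1))
Pi = {(i, j, k): Gamma[i, j, k]
      - ((i == k) * S[j] + (i == j) * S[k]) / (2 * (n + 1))
      for i in idx for j in idx for k in idx}

# q(j, k) = mu_a Pi^a_{jk} + Pi^a_{jb} Pi^b_{ak}
def q(j, k):
    return (sum(mu[a] * Pi[a, j, k] for a in idx)
            + sum(Pi[a, j, b] * Pi[b, a, k] for a in idx for b in idx))

# Pi_{jk} = (2 q(j, k) + q(k, j)) / 3
Pi_low = {(j, k): (2 * q(j, k) + q(k, j)) / 3 for j in idx for k in idx}
```

Running it reproduces $\Pi^2{}_{11}=1$, $\Pi^2{}_{12}=1$, $\Pi^2{}_{21}=-1$ and $\Pi_{11}=-1$, with all other components vanishing, and the two linear torsion identities of the next proof hold for these values.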
Projective structures with torsion are abundant even if we assume the curvatures to be trivial.
\begin{theorem}
The space $\mathcal{T}$ is a cubic subvariety of\/ ${\mathbb R}^6$ of dimension~$4$.
The space $\mathcal{T}'$ is an open subvariety of $\mathcal{T}$ and induces a subvariety of ${\mathbb R} P^5$ of dimension~$3$.
\end{theorem}
If we work in the complex category, then ${\mathbb R}^6$ and ${\mathbb R} P^5$ are replaced by ${\mathbb C}^6$ and~${\mathbb C} P^5$.
\begin{proof}
We make use of the notation of Lemma~\ref{lem2.21}.
Let $\psi^i{}_j=\Pi^i{}_{jk}dx^k$ and $\psi_j=\Pi_{jk}dx^k$.
We have
\[
\mu_j=\Pi^\alpha{}_{\alpha j}=-\Pi^\alpha{}_{j\alpha},
\]
where $\mu$ is the reduced torsion.
This is equivalent to
\begin{align*}
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-0}
&2\Pi^1{}_{11}+\Pi^2{}_{21}+\Pi^2{}_{12}=0,\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-01}
&2\Pi^2{}_{22}+\Pi^1{}_{12}+\Pi^1{}_{21}=0.
\end{align*}
We have
\[
\Pi_{jk}=\frac13\left(2(\mu_\alpha\Pi^\alpha{}_{jk}+\Pi^\alpha{}_{j\beta}\Pi^\beta{}_{\alpha k})+(\mu_\alpha\Pi^\alpha{}_{kj}+\Pi^\alpha{}_{k\beta}\Pi^\beta{}_{\alpha j})\right).
\]
It follows that
\[
\Pi_{jk}=\frac13\left(2(-\Pi^\beta{}_{\alpha\beta}\Pi^\alpha{}_{jk}+\Pi^\alpha{}_{j\beta}\Pi^\beta{}_{\alpha k})+(-\Pi^\beta{}_{\alpha\beta}\Pi^\alpha{}_{kj}+\Pi^\alpha{}_{k\beta}\Pi^\beta{}_{\alpha j})\right).
\]
If $j=k$, then we have
\begin{align*}
-\Pi^\beta{}_{\alpha\beta}\Pi^\alpha{}_{11}+\Pi^\alpha{}_{1\beta}\Pi^\beta{}_{\alpha 1}
&=-\Pi^1{}_{11}\Pi^1{}_{11}-\Pi^1{}_{21}\Pi^2{}_{11}-\Pi^2{}_{12}\Pi^1{}_{11}-\Pi^2{}_{22}\Pi^2{}_{11}\\*
&\hphantom{{}={}}+\Pi^1{}_{11}\Pi^1{}_{11}+\Pi^1{}_{12}\Pi^2{}_{11}+\Pi^2{}_{11}\Pi^1{}_{21}+\Pi^2{}_{12}\Pi^2{}_{21}\\*
&=-\Pi^2{}_{12}\Pi^1{}_{11}-\Pi^2{}_{22}\Pi^2{}_{11}+\Pi^1{}_{12}\Pi^2{}_{11}+\Pi^2{}_{12}\Pi^2{}_{21},\\*
-\Pi^\beta{}_{\alpha\beta}\Pi^\alpha{}_{22}+\Pi^\alpha{}_{2\beta}\Pi^\beta{}_{\alpha 2}
&=-\Pi^1{}_{11}\Pi^1{}_{22}-\Pi^1{}_{21}\Pi^2{}_{22}+\Pi^1{}_{21}\Pi^1{}_{12}+\Pi^2{}_{21}\Pi^1{}_{22}.
\end{align*}
If $i\neq j$, then we have
\begin{align*}
-\Pi^\beta{}_{\alpha\beta}\Pi^\alpha{}_{12}+\Pi^\alpha{}_{1\beta}\Pi^\beta{}_{\alpha 2}
&=-\Pi^1{}_{11}\Pi^1{}_{12}-\Pi^1{}_{21}\Pi^2{}_{12}-\Pi^2{}_{12}\Pi^1{}_{12}-\Pi^2{}_{22}\Pi^2{}_{12}\\*
&\hphantom{{}={}}+\Pi^1{}_{11}\Pi^1{}_{12}+\Pi^1{}_{12}\Pi^2{}_{12}+\Pi^2{}_{11}\Pi^1{}_{22}+\Pi^2{}_{12}\Pi^2{}_{22}\\*
&=-\Pi^1{}_{21}\Pi^2{}_{12}+\Pi^2{}_{11}\Pi^1{}_{22},\\*
-\Pi^\beta{}_{\alpha\beta}\Pi^\alpha{}_{21}+\Pi^\alpha{}_{2\beta}\Pi^\beta{}_{\alpha 1}
&=-\Pi^1{}_{11}\Pi^1{}_{21}-\Pi^1{}_{21}\Pi^2{}_{21}-\Pi^2{}_{12}\Pi^1{}_{21}-\Pi^2{}_{22}\Pi^2{}_{21}\\*
&\hphantom{{}={}}+\Pi^1{}_{21}\Pi^1{}_{11}+\Pi^1{}_{22}\Pi^2{}_{11}+\Pi^2{}_{21}\Pi^1{}_{21}+\Pi^2{}_{22}\Pi^2{}_{21}\\*
&=-\Pi^2{}_{12}\Pi^1{}_{21}+\Pi^1{}_{22}\Pi^2{}_{11}.
\end{align*}
Hence we have
\begin{align}
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-a}
\Pi_{11}&=-\Pi^2{}_{12}\Pi^1{}_{11}-\Pi^2{}_{22}\Pi^2{}_{11}+\Pi^1{}_{12}\Pi^2{}_{11}+\Pi^2{}_{12}\Pi^2{}_{21},\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-b}
\Pi_{12}&=\Pi_{21}=-\Pi^2{}_{12}\Pi^1{}_{21}+\Pi^1{}_{22}\Pi^2{}_{11},\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-c}
\Pi_{22}&=-\Pi^1{}_{11}\Pi^1{}_{22}-\Pi^1{}_{21}\Pi^2{}_{22}+\Pi^1{}_{21}\Pi^1{}_{12}+\Pi^2{}_{21}\Pi^1{}_{22}.
\end{align}
These are the defining equalities for $\Pi_{ij}$.
On the other hand, we have
\begin{alignat*}{3}
&\Omega^i&{}={}&\begin{pmatrix}
-\Pi^1{}_{12}+\Pi^1{}_{21}\\
-\Pi^2{}_{12}+\Pi^2{}_{21}
\end{pmatrix}dx^1\wedge dx^2,\\*
&\Omega^i{}_j&{}={}&\begin{pmatrix}
\Pi^1{}_{2k}\Pi^2{}_{1l}dx^k\wedge dx^l & (\Pi^1{}_{1k}\Pi^1{}_{2l}+\Pi^1{}_{2k}\Pi^2{}_{2l})dx^k\wedge dx^l\\
(\Pi^2{}_{1k}\Pi^1{}_{1l}+\Pi^2{}_{2k}\Pi^2{}_{1l})dx^k\wedge dx^l& \Pi^2{}_{1k}\Pi^1{}_{2l}dx^k\wedge dx^l
\end{pmatrix}\\*
& & &+\begin{pmatrix}
2\Pi_{12}-\Pi_{21} & \Pi_{22}\\
-\Pi_{11} & -2\Pi_{21}+\Pi_{12}
\end{pmatrix}dx^1\wedge dx^2,\\*
&\Omega_j&{}={}&\begin{pmatrix}
(\Pi_{1k}\Pi^1{}_{1l}+\Pi_{2k}\Pi^2{}_{1l})dx^k\wedge dx^l & (\Pi_{1k}\Pi^1{}_{2l}+\Pi_{2k}\Pi^2{}_{2l})dx^k\wedge dx^l
\end{pmatrix}.
\end{alignat*}
The projective structure is with torsion if and only if we have
\[
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-d}
\Pi^1{}_{12}\neq\Pi^1{}_{21}\quad\text{or}\quad\Pi^2{}_{12}\neq\Pi^2{}_{21},
\]
while it is curvature-free, namely $(\Omega^i{}_j,\Omega_j)=(0,0)$, if and only if we have
\begin{align}
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-1}
&\Pi^1{}_{21}\Pi^2{}_{12}-\Pi^1{}_{22}\Pi^2{}_{11}+2\Pi_{12}-\Pi_{21}=0,\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-2}
&\Pi^1{}_{11}\Pi^1{}_{22}-\Pi^1{}_{12}\Pi^1{}_{21}+\Pi^1{}_{21}\Pi^2{}_{22}-\Pi^1{}_{22}\Pi^2{}_{21}+\Pi_{22}=0,\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-3}
&\Pi^2{}_{11}\Pi^1{}_{12}-\Pi^2{}_{12}\Pi^1{}_{11}+\Pi^2{}_{21}\Pi^2{}_{12}-\Pi^2{}_{22}\Pi^2{}_{11}-\Pi_{11}=0,\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-4}
&\Pi^2{}_{11}\Pi^1{}_{22}-\Pi^2{}_{12}\Pi^1{}_{21}-2\Pi_{21}+\Pi_{12}=0,\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-5}
&\Pi_{11}\Pi^1{}_{12}-\Pi_{12}\Pi^1{}_{11}+\Pi_{21}\Pi^2{}_{12}-\Pi_{22}\Pi^2{}_{11}=0,\\*
\stepcounter{equation}
\tag{\thetheorem-\theequation}
\label{eq6.2-6}
&\Pi_{11}\Pi^1{}_{22}-\Pi_{12}\Pi^1{}_{21}+\Pi_{21}\Pi^2{}_{22}-\Pi_{22}\Pi^2{}_{21}=0.
\end{align}
The equalities \eqref{eq6.2-2} and \eqref{eq6.2-3} are equivalent to the equalities~\eqref{eq6.2-c} and~\eqref{eq6.2-a}.
The equalities \eqref{eq6.2-1} and \eqref{eq6.2-4} are equivalent to the equality~\eqref{eq6.2-b}.
Hence we always have $\Omega^i{}_j=0$.
We consider $\tau=(\Pi^1{}_{12},\Pi^1{}_{21},\Pi^1{}_{22},\Pi^2{}_{11},\Pi^2{}_{12},\Pi^2{}_{21})$ as coordinates.
Let $F(\tau)$ be the left hand side of the equality~\eqref{eq6.2-5} and $G(\tau)$ be the left hand side of the equality~\eqref{eq6.2-6}.
We have
\begin{align*}
F(\tau)
&{}={}
\frac12\Pi^2{}_{12}\Pi^2{}_{12}\Pi^1{}_{12}
+\frac32\Pi^2{}_{12}\Pi^2{}_{21}\Pi^1{}_{12}
+\frac32\Pi^1{}_{12}\Pi^2{}_{11}\Pi^1{}_{12}\\*
&\hphantom{{}={}}
-\frac32\Pi^2{}_{12}\Pi^1{}_{21}\Pi^2{}_{12}
-\frac12\Pi^2{}_{12}\Pi^1{}_{21}\Pi^2{}_{21}
+\Pi^1{}_{22}\Pi^2{}_{11}\Pi^2{}_{12}\\*
&\hphantom{{}={}}
-\frac12\Pi^1{}_{21}\Pi^1{}_{21}\Pi^2{}_{11}
-\Pi^1{}_{21}\Pi^1{}_{12}\Pi^2{}_{11}-\Pi^2{}_{21}\Pi^1{}_{22}\Pi^2{}_{11}\\*
&=\frac12\Pi^2{}_{12}\Pi^2{}_{12}(\Pi^1{}_{12}-\Pi^1{}_{21})
+\Pi^2{}_{12}\Pi^2{}_{21}(\Pi^1{}_{12}-\Pi^1{}_{21})
-\Pi^2{}_{12}\Pi^1{}_{21}(\Pi^2{}_{12}-\Pi^2{}_{21})\\*
&\hphantom{{}={}}
+\frac12\Pi^2{}_{12}\Pi^2{}_{21}(\Pi^1{}_{12}-\Pi^1{}_{21})
+\frac12(\Pi^1{}_{12}\Pi^1{}_{12}-\Pi^1{}_{21}\Pi^1{}_{21})\Pi^2{}_{11}
+\Pi^1{}_{12}\Pi^2{}_{11}(\Pi^1{}_{12}-\Pi^1{}_{21})\\*
&\hphantom{{}={}}
+\Pi^1{}_{22}\Pi^2{}_{11}(\Pi^2{}_{12}-\Pi^2{}_{21}).
\end{align*}
Similarly, we have
\begin{align*}
G(\tau)&=
-\frac12\Pi^1{}_{21}\Pi^1{}_{21}\Pi^2{}_{21}
-\frac32\Pi^1{}_{21}\Pi^1{}_{12}\Pi^2{}_{21}
-\frac32\Pi^2{}_{21}\Pi^1{}_{22}\Pi^2{}_{21}\\*
&\hphantom{{}={}}
+\frac32\Pi^1{}_{21}\Pi^2{}_{12}\Pi^1{}_{21}
+\frac12\Pi^1{}_{21}\Pi^2{}_{12}\Pi^1{}_{12}
-\Pi^2{}_{11}\Pi^1{}_{22}\Pi^1{}_{21}\\*
&\hphantom{{}={}}
+\frac12\Pi^2{}_{12}\Pi^2{}_{12}\Pi^1{}_{22}
+\Pi^2{}_{12}\Pi^2{}_{21}\Pi^1{}_{22}+\Pi^1{}_{12}\Pi^2{}_{11}\Pi^1{}_{22}\\*
&=\frac12\Pi^1{}_{21}\Pi^1{}_{21}(\Pi^2{}_{12}-\Pi^2{}_{21})
+\Pi^1{}_{21}\Pi^1{}_{12}(\Pi^2{}_{12}-\Pi^2{}_{21})
-\Pi^1{}_{21}\Pi^2{}_{12}(\Pi^1{}_{12}-\Pi^1{}_{21})\\*
&\hphantom{{}={}}
+\frac12\Pi^1{}_{21}\Pi^1{}_{12}(\Pi^2{}_{12}-\Pi^2{}_{21})
+\frac12(\Pi^2{}_{12}\Pi^2{}_{12}-\Pi^2{}_{21}\Pi^2{}_{21})\Pi^1{}_{22}
+\Pi^2{}_{21}\Pi^1{}_{22}(\Pi^2{}_{12}-\Pi^2{}_{21})\\*
&\hphantom{{}={}}
+\Pi^2{}_{11}\Pi^1{}_{22}(\Pi^1{}_{12}-\Pi^1{}_{21}).
\end{align*}
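The cubic expansions above are mechanical and easy to get wrong; as a sanity check (my own, not part of the original proof), the following SymPy sketch rebuilds $F$ and $G$ from the defining equalities \eqref{eq6.2-0}, \eqref{eq6.2-01}, \eqref{eq6.2-a}--\eqref{eq6.2-c}, \eqref{eq6.2-5} and \eqref{eq6.2-6}, and confirms that they agree with the displayed cubics. The shorthand names `a1`, ..., `b3` for the six coordinates are my own.

```python
# Sanity check of the expansions of F and G with SymPy.
# Shorthand (assumed names): a1,a2,a3 = Pi^1_12, Pi^1_21, Pi^1_22 and
# b1,b2,b3 = Pi^2_11, Pi^2_12, Pi^2_21.
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
half, threehalf = sp.Rational(1, 2), sp.Rational(3, 2)

# (6.2-0), (6.2-01): Pi^1_11 and Pi^2_22 are determined by the trace relations.
P111 = -(b2 + b3) / 2
P222 = -(a1 + a2) / 2

# (6.2-a), (6.2-b), (6.2-c): the defining equalities for Pi_jk.
P_11 = -b2 * P111 - P222 * b1 + a1 * b1 + b2 * b3
P_12 = -b2 * a2 + a3 * b1            # = Pi_21
P_22 = -P111 * a3 - a2 * P222 + a2 * a1 + b3 * a3

# Left-hand sides of (6.2-5) and (6.2-6).
F = P_11 * a1 - P_12 * P111 + P_12 * b2 - P_22 * b1
G = P_11 * a3 - P_12 * a2 + P_12 * P222 - P_22 * b3

# Expanded cubics as displayed in the proof.
F_disp = (half*b2**2*a1 + threehalf*b2*b3*a1 + threehalf*a1**2*b1
          - threehalf*b2**2*a2 - half*b2*a2*b3 + a3*b1*b2
          - half*a2**2*b1 - a2*a1*b1 - b3*a3*b1)
G_disp = (-half*a2**2*b3 - threehalf*a2*a1*b3 - threehalf*b3**2*a3
          + threehalf*a2**2*b2 + half*a2*b2*a1 - b1*a3*a2
          + half*b2**2*a3 + b2*b3*a3 + a1*b1*a3)

assert sp.expand(F - F_disp) == 0
assert sp.expand(G - G_disp) == 0
# Torsion-free (symmetric Pi^i_jk) implies F = G = 0, hence flatness.
assert sp.expand(F.subs({a2: a1, b3: b2})) == 0
assert sp.expand(G.subs({a2: a1, b3: b2})) == 0
```

The last two assertions also confirm the remark made after the proof: every term of $F$ and $G$ carries a factor $\Pi^1{}_{12}-\Pi^1{}_{21}$ or $\Pi^2{}_{12}-\Pi^2{}_{21}$.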
Suppose conversely that we can find $\tau=(\Pi^i{}_{jk})$, where $(i,j,k)\neq(1,1,1),(2,2,2)$, such that $F(\tau)=G(\tau)=0$.
We define $\Pi^1{}_{11}$ and $\Pi^2{}_{22}$ by~\eqref{eq6.2-0} and \eqref{eq6.2-01}, and $\Pi_{ij}$ by~\eqref{eq6.2-a}, \eqref{eq6.2-b} and \eqref{eq6.2-c}.
Then, the projective structure determined by $\tau$ is curvature-free.
It is with torsion if and only if the condition~\eqref{eq6.2-d} is satisfied.
Therefore, we have
\[
\mathcal{T}=\{\tau=(\Pi^i{}_{jk})\mid F(\tau)=G(\tau)=0\}.
\]
Note that if $\tau\in\mathcal{T}$ is torsion-free, then $\tau$ is flat, because $\tau$ is curvature-free.
In this example, if we assume $\Pi^1{}_{12}=\Pi^1{}_{21}$ and $\Pi^2{}_{12}=\Pi^2{}_{21}$, then $F(\tau)$ and $G(\tau)$ are equal to zero, so that $\Omega_j=0$.
This is analogous to the case of dimension greater than two.
In the latter case, the vanishing of $\Omega_j$ is guaranteed by Proposition~\ref{prop4.7}.
Affine connections which induce a given normal projective connection are obtained as follows.
Let $\nu_1,\nu_2\in{\mathbb R}$ be arbitrary, and set $\Gamma^i{}_{jk}=\Pi^i{}_{jk}-(\delta^i{}_j\nu_k+\delta^i{}_k\nu_j)$ for $(i,j,k)\neq(1,1,1),(2,2,2)$, where $\delta^i{}_j=\begin{cases}
1, & i=j,\\
0, & i\neq j
\end{cases}$.
We have then $-6\nu_1-(\Gamma^\alpha{}_{\alpha1}+\Gamma^\alpha{}_{1\alpha})=-6\nu_1-2\Gamma^1{}_{11}-\Pi^2{}_{21}-\Pi^2{}_{12}+\nu_1+\nu_1=-4\nu_1-2\Gamma^1{}_{11}+2\Pi^1{}_{11}$.
Hence we have $\Pi^1{}_{11}=\Gamma^1{}_{11}+2\nu_1$.
Similarly, we have $\Pi^2{}_{22}=\Gamma^2{}_{22}+2\nu_2$.
The affine connection thus defined induces the projective structure given by $\tau=(\Pi^i{}_{jk})$.
Finally, let $F^i{}_{jk}=\pdif{F}{\Pi^i{}_{jk}}$ and $G^i{}_{jk}=\pdif{G}{\Pi^i{}_{jk}}$.
We have
\begin{align*}
F^1{}_{12}(\tau)&=\frac12\Pi^2{}_{12}\Pi^2{}_{12}+\Pi^2{}_{12}\Pi^2{}_{21}+\frac12\Pi^2{}_{12}\Pi^2{}_{21}+\Pi^1{}_{12}\Pi^2{}_{11}\\*
&\hphantom{{}={}}
+2\Pi^1{}_{12}\Pi^2{}_{11}-\Pi^2{}_{11}\Pi^1{}_{21},\\*
F^1{}_{21}(\tau)&=-\frac12\Pi^2{}_{12}\Pi^2{}_{12}-\Pi^2{}_{12}\Pi^2{}_{21}-\Pi^2{}_{12}(\Pi^2{}_{12}-\Pi^2{}_{21})-\frac12\Pi^2{}_{12}\Pi^2{}_{21}\\*
&\hphantom{{}={}}
-\Pi^1{}_{21}\Pi^2{}_{11}-\Pi^1{}_{12}\Pi^2{}_{11},\\*
F^1{}_{22}(\tau)&=\Pi^2{}_{11}(\Pi^2{}_{12}-\Pi^2{}_{21}),\\*
F^2{}_{11}(\tau)&=\frac12(\Pi^1{}_{12}\Pi^1{}_{12}-\Pi^1{}_{21}\Pi^1{}_{21})+\Pi^1{}_{12}(\Pi^1{}_{12}-\Pi^1{}_{21})+\Pi^1{}_{22}(\Pi^2{}_{12}-\Pi^2{}_{21}),\\*
F^2{}_{12}(\tau)&=\Pi^2{}_{12}(\Pi^1{}_{12}-\Pi^1{}_{21})+\Pi^2{}_{21}(\Pi^1{}_{12}-\Pi^1{}_{21})-2\Pi^2{}_{12}\Pi^1{}_{21}+\Pi^1{}_{21}\Pi^2{}_{21}\\*
&\hphantom{{}={}}
+\frac12\Pi^2{}_{21}(\Pi^1{}_{12}-\Pi^1{}_{21}),\\*
F^2{}_{21}(\tau)&=\Pi^2{}_{12}(\Pi^1{}_{12}-\Pi^1{}_{21})+\Pi^2{}_{12}\Pi^1{}_{21}+\frac12\Pi^2{}_{12}(\Pi^1{}_{12}-\Pi^1{}_{21})-\Pi^1{}_{22}\Pi^2{}_{11},\\
G^1{}_{12}(\tau)&=-\Pi^1{}_{21}(\Pi^2{}_{21}-\Pi^2{}_{12})-\Pi^1{}_{21}\Pi^2{}_{12}-\frac12\Pi^1{}_{21}(\Pi^2{}_{21}-\Pi^2{}_{12})+\Pi^2{}_{11}\Pi^1{}_{22},\\*
G^1{}_{21}(\tau)&=-\Pi^1{}_{21}(\Pi^2{}_{21}-\Pi^2{}_{12})-\Pi^1{}_{12}(\Pi^2{}_{21}-\Pi^2{}_{12})+2\Pi^1{}_{21}\Pi^2{}_{12}-\Pi^2{}_{12}\Pi^1{}_{12}\\*
&\hphantom{{}={}}
-\frac12\Pi^1{}_{12}(\Pi^2{}_{21}-\Pi^2{}_{12}),\\*
G^1{}_{22}(\tau)&=-\frac12(\Pi^2{}_{21}\Pi^2{}_{21}-\Pi^2{}_{12}\Pi^2{}_{12})-\Pi^2{}_{21}(\Pi^2{}_{21}-\Pi^2{}_{12})-\Pi^2{}_{11}(\Pi^1{}_{21}-\Pi^1{}_{12}),\\*
G^2{}_{11}(\tau)&=-\Pi^1{}_{22}(\Pi^1{}_{21}-\Pi^1{}_{12}),\\*
G^2{}_{12}(\tau)&=\frac12\Pi^1{}_{21}\Pi^1{}_{21}-\Pi^1{}_{21}\Pi^1{}_{12}+\Pi^1{}_{21}(\Pi^1{}_{21}-\Pi^1{}_{12})+\frac12\Pi^1{}_{21}\Pi^1{}_{12}\\*
&\hphantom{{}={}}
+\Pi^2{}_{12}\Pi^1{}_{22}+\Pi^2{}_{21}\Pi^1{}_{22},\\*
G^2{}_{21}(\tau)&=-\frac12\Pi^1{}_{21}\Pi^1{}_{21}-\Pi^1{}_{21}\Pi^1{}_{12}-\frac12\Pi^1{}_{21}\Pi^1{}_{12}-\Pi^2{}_{21}\Pi^1{}_{22}\\*
&\hphantom{{}={}}
-2\Pi^2{}_{21}\Pi^1{}_{22}+\Pi^1{}_{22}\Pi^2{}_{12}.
\end{align*}
If $\Pi^1{}_{12}=\Pi^1{}_{21}$ and if $\Pi^2{}_{12}=\Pi^2{}_{21}$, then we have
\begin{alignat*}{3}
F^1{}_{12}(\tau)&=2(\Pi^2{}_{12}\Pi^2{}_{12}+\Pi^1{}_{12}\Pi^2{}_{11}), & \quad & G^1{}_{12}(\tau)=-\Pi^1{}_{12}\Pi^2{}_{12}+\Pi^1{}_{22}\Pi^2{}_{11},\\*
F^1{}_{21}(\tau)&=-2(\Pi^2{}_{12}\Pi^2{}_{12}+\Pi^1{}_{12}\Pi^2{}_{11}), & \quad & G^1{}_{21}(\tau)=\Pi^1{}_{12}\Pi^2{}_{12},\\*
F^1{}_{22}(\tau)&=F^2{}_{11}(\tau)=0, & \quad & G^1{}_{22}(\tau)=G^2{}_{11}(\tau)=0,\\*
F^2{}_{12}(\tau)&=-\Pi^1{}_{12}\Pi^2{}_{12}, & \quad & G^2{}_{12}(\tau)=2(\Pi^1{}_{12}\Pi^1{}_{12}+\Pi^2{}_{12}\Pi^1{}_{22}),\\*
F^2{}_{21}(\tau)&=\Pi^1{}_{12}\Pi^2{}_{12}-\Pi^1{}_{22}\Pi^2{}_{11}, & \quad & G^2{}_{21}(\tau)=-2(\Pi^1{}_{12}\Pi^1{}_{12}+\Pi^2{}_{12}\Pi^1{}_{22}).
\end{alignat*}
Hence $\begin{pmatrix}
\pdif{F}{\tau}(\tau) & \pdif{G}{\tau}(\tau)
\end{pmatrix}$ is of rank $2$ for almost every $\tau$.
If $\tau\in\mathcal{T}'$, then we have $\Pi^1{}_{12}\neq\Pi^1{}_{21}$ or $\Pi^2{}_{12}\neq\Pi^2{}_{21}$.
In particular, one of $\Pi^1{}_{12},\Pi^1{}_{21},\Pi^2{}_{12}$ and $\Pi^2{}_{21}$ is non-zero.
Hence $\mathcal{T}'$ induces an open subvariety of ${\mathbb R} P^5$.
\end{proof}
An open subset of dimension $4$ of\/ $\mathcal{T}$ exists by the implicit function theorem; however, it seems difficult to find explicit ones.
We will present a family of elements of $\mathcal{T}$ with three parameters.
\begin{example}
Suppose that $\Pi^1{}_{12}=\Pi^1{}_{21}=\Pi^1{}_{22}=0$.
Then we have $F(\tau)=G(\tau)=0$ and $\Pi^2{}_{22}=0$.
It follows that
\begin{align*}
\Pi_{11}&=-\Pi^2{}_{12}\Pi^1{}_{11}+\Pi^2{}_{21}\Pi^2{}_{12}\\*
&=\frac32\Pi^2{}_{12}\Pi^2{}_{21}+\frac12\Pi^2{}_{12}\Pi^2{}_{12},\\*
\Pi_{12}&=\Pi_{21}=0,\\*
\Pi_{22}&=0.
\end{align*}
Let $a=\Pi^2{}_{11}$, $b=\Pi^2{}_{12}$ and $c=\Pi^2{}_{21}$.
The normal TW-connection is given by
\[
\omega=\begin{pmatrix}
-\frac{b+c}2dx^1 & 0 & 0\\
adx^1+bdx^2 & cdx^1 & 0\\
-\frac32(3bc+b^2)dx^1 & 0 &0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & & dx^1\\
& dt & dx^2\\
& & dt
\end{pmatrix}.
\]
We have $\Omega^1=0$ and $\Omega^2=\frac13(b-c)dx^1\wedge dx^2$.
The torsion of $\omega$ is equal to $\begin{pmatrix}
0\\
-b+c
\end{pmatrix}dx^1\wedge dx^2$.
By setting $a=b=1$ and $c=-1$, we obtain Example~\ref{ex6.1}.
Note that the ratio $a:b:c$ is relevant.
\end{example}
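As a quick check of this example (again my own, using the same shorthand coordinates as before), substituting $\Pi^1{}_{12}=\Pi^1{}_{21}=\Pi^1{}_{22}=0$ into the defining equalities with SymPy confirms $F(\tau)=G(\tau)=0$, $\Pi^2{}_{22}=0$, and the displayed values of $\Pi_{jk}$:

```python
# Check the three-parameter family Pi^1_12 = Pi^1_21 = Pi^1_22 = 0.
# Shorthand (assumed names): a1,a2,a3 = Pi^1_12, Pi^1_21, Pi^1_22 and
# b1,b2,b3 = Pi^2_11, Pi^2_12, Pi^2_21, i.e. b1,b2,b3 = a,b,c of the example.
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
P111 = -(b2 + b3) / 2                 # (6.2-0)
P222 = -(a1 + a2) / 2                 # (6.2-01)
P_11 = -b2 * P111 - P222 * b1 + a1 * b1 + b2 * b3
P_12 = -b2 * a2 + a3 * b1
P_22 = -P111 * a3 - a2 * P222 + a2 * a1 + b3 * a3
F = P_11 * a1 - P_12 * P111 + P_12 * b2 - P_22 * b1
G = P_11 * a3 - P_12 * a2 + P_12 * P222 - P_22 * b3

fam = {a1: 0, a2: 0, a3: 0}           # the family of this example
assert F.subs(fam) == 0 and G.subs(fam) == 0
assert P222.subs(fam) == 0            # Pi^2_22 = 0
# Pi_11 = (3/2) b c + (1/2) b^2 and Pi_12 = Pi_21 = Pi_22 = 0, as displayed.
assert sp.expand(P_11.subs(fam)
                 - (sp.Rational(3, 2)*b2*b3 + sp.Rational(1, 2)*b2**2)) == 0
assert P_12.subs(fam) == 0 and P_22.subs(fam) == 0
```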
We have another kind of one-parameter family.
\begin{example}
Let $\Pi^1{}_{12}=-\Pi^1{}_{21}=\sin\theta$ and $\Pi^2{}_{21}=-\Pi^2{}_{12}=\cos\theta$.
We have $\Pi^1{}_{11}=\Pi^2{}_{22}=0$ by~\eqref{eq6.2-0} and \eqref{eq6.2-01}.
On the other hand, we have
\begin{align*}
F(\tau)&=2(\sin^2\theta-(\cos\theta)\Pi^1{}_{22})\Pi^2{}_{11},\\*
G(\tau)&=-2(\cos^2\theta-(\sin\theta)\Pi^2{}_{11})\Pi^1{}_{22}.
\end{align*}
\begin{enumerate}
\item
If $\sin\theta=0$, then we have $\cos\theta\neq0$.
Since $G(\tau)=0$, we have $\Pi^1{}_{22}=0$.
Hence $\Pi_{12}=\Pi_{21}=0$ by~\eqref{eq6.2-b}.
We have $\Pi_{11}=-1$ and $\Pi_{22}=0$ by~\eqref{eq6.2-a} and~\eqref{eq6.2-c}.
The normal TW-connection is given by
\[
\begin{pmatrix}
0 & 0 & 0\\
\Pi^2{}_{11}dx^1\pm dx^2 & \mp dx^1 & 0\\
3dx^1 & 0 & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & & dx^1\\
& dt & dx^2\\
& & dt
\end{pmatrix},
\]
where the double signs correspond and $\Pi^2{}_{11}$ is arbitrary.
\item
If $\cos\theta=0$, then the normal TW-connection is given by
\[
\begin{pmatrix}
\pm dx^2 & \mp dx^1+\Pi^1{}_{22}dx^2 & 0\\
0 & 0 & 0\\
0 & 3dx^2 & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & & dx^1\\
& dt & dx^2\\
& & dt
\end{pmatrix}.
\]
\item
If $\sin\theta\neq0$ and if $\cos\theta\neq0$, then either $\Pi^1{}_{22}=\Pi^2{}_{11}=0$ or $\Pi^1{}_{22}=\frac{\sin^2\theta}{\cos\theta}$, $\Pi^2{}_{11}=\frac{\cos^2\theta}{\sin\theta}$.
In the first case, the normal TW-connection is given by
\[
\begin{pmatrix}
\sin\theta dx^2 & -\sin\theta dx^1 & 0\\*
-\cos\theta dx^2 & \cos\theta dx^1 & 0\\
0 & 0 & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & & dx^1\\
& dt & dx^2\\
& & dt
\end{pmatrix}.
\]
In the second case, the normal TW-connection is given by
\[
\begin{pmatrix}
\sin\theta dx^2 & -\sin\theta dx^1+\frac{\sin^2\theta}{\cos\theta}dx^2 & 0\\
\frac{\cos^2\theta}{\sin\theta}dx^1-\cos\theta dx^2 & \cos\theta dx^1 & 0\\
0 & 0 & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & & dx^1\\
& dt & dx^2\\
& & dt
\end{pmatrix}.
\]
\end{enumerate}
In both cases, the torsion is given by $2\begin{pmatrix}
-\sin\theta\\
\hphantom{-}\cos\theta
\end{pmatrix} dx^1\wedge dx^2$.
Hence the ratio $K^1{}_{12}:K^2{}_{12}$ can take any value.
The latter connection can be slightly generalized~as
\[
\begin{pmatrix}
r\sin^2\theta\cos\theta dx^2 & -r(\sin^2\theta\cos\theta dx^1+\sin^3\theta dx^2) & 0\\
r(\cos^3\theta dx^1-\sin\theta\cos^2\theta dx^2) & r\sin\theta\cos^2\theta dx^1 & 0\\
0 & 0 & 0
\end{pmatrix}-\frac13\begin{pmatrix}
dt & & dx^1\\
& dt & dx^2\\
& & dt
\end{pmatrix},
\]
of which the torsion is given by $2r\sin\theta\cos\theta\begin{pmatrix}
-\sin\theta\\
\hphantom{-}\cos\theta
\end{pmatrix}dx^1\wedge dx^2$.
\end{example}
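The reductions of $F$ and $G$ stated in this example can likewise be verified symbolically (a sketch of mine, with the assumed names `a` and `p` standing for $\Pi^2{}_{11}$ and $\Pi^1{}_{22}$):

```python
# Check F and G for the one-parameter family
# Pi^1_12 = -Pi^1_21 = sin(theta), Pi^2_21 = -Pi^2_12 = cos(theta).
import sympy as sp

th, a, p = sp.symbols('theta a p')    # a = Pi^2_11, p = Pi^1_22 (assumed names)
s, c = sp.sin(th), sp.cos(th)
a1, a2, a3, b1, b2, b3 = s, -s, p, a, -c, c
P111 = -(b2 + b3) / 2                 # evaluates to 0, i.e. Pi^1_11 = 0
P222 = -(a1 + a2) / 2                 # evaluates to 0, i.e. Pi^2_22 = 0
P_11 = -b2 * P111 - P222 * b1 + a1 * b1 + b2 * b3
P_12 = -b2 * a2 + a3 * b1
P_22 = -P111 * a3 - a2 * P222 + a2 * a1 + b3 * a3
F = P_11 * a1 - P_12 * P111 + P_12 * b2 - P_22 * b1
G = P_11 * a3 - P_12 * a2 + P_12 * P222 - P_22 * b3

assert P111 == 0 and P222 == 0
# F = 2 (sin^2 - cos * Pi^1_22) Pi^2_11, G = -2 (cos^2 - sin * Pi^2_11) Pi^1_22.
assert sp.simplify(F - 2 * (s**2 - c * p) * a) == 0
assert sp.simplify(G + 2 * (c**2 - s * a) * p) == 0
```

Setting the two bracketed factors to zero recovers exactly the two branches of case (3), $\Pi^1{}_{22}=\Pi^2{}_{11}=0$ or $\Pi^1{}_{22}=\frac{\sin^2\theta}{\cos\theta}$, $\Pi^2{}_{11}=\frac{\cos^2\theta}{\sin\theta}$.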
\begin{bibdiv}
\begin{biblist}[\resetbiblist{99}]
\bib{asuke:2022}{article}{
author ={Asuke, Taro},
title ={Formal frames and deformations of affine connections},
note ={Preprint, available at {\tt https://doi.org/10.48550/arXiv.2206.00336}}
}
\bib{Hlavaty}{article}{
author ={Hlavat\'y, V\'aclav},
title ={Bemerkung zur Arbeit von Herrn T.~Y.~Thomas ,,A projective theory of affinely connected manifolds''},
journal ={Math. Z.},
volume ={26},
date ={1928},
pages ={142--146}
}
\bib{K_str}{article}{
author ={Kobayashi, Shoshichi},
title ={Canonical forms on frame bundles of higher order contact},
book ={
title ={Differential Geometry},
series ={Proceedings of Symposia in Pure Mathematics \textbf{3}},
publisher ={Amer. Math. Soc.},
address ={Providence, RI},
date ={1961}
},
pages ={186--193}
}
\bib{K}{book}{
author ={Kobayashi, Shoshichi},
title ={Transformation Groups in Differential Geometry},
publisher ={Springer-Verlag},
address ={Heidelberg-New York},
date ={1972}
}
\bib{Kobayashi-Nagano}{article}{
author ={Kobayashi, Shoshichi},
author ={Nagano, Tadashi},
title ={On projective connections},
journal ={J. Math. Mech.},
volume ={13},
date ={1964},
pages ={215--235}
}
\bib{McKay}{article}{
author ={McKay, Benjamin},
title ={Complete projective connections},
note ={Preprint, available at\\ {\tt https://arxiv.org/abs/math/0504082v5}}
}
\bib{Roberts}{article}{
author ={Roberts, Craig W.},
title ={The projective connections of\/ T.~Y.~Thomas and J.~H.~C.~Whitehead applied to invariant connections},
journal ={Differ. Geom. Appl.},
volume ={5},
date ={1995},
pages ={237--255}
}
\bib{Roberts2}{article}{
author ={Roberts, Craig W.},
title ={Relating Thomas--Whitehead Projective connections by a Gauge Transformation},
journal ={Math.~Phys.~Anal.~Geom.},
volume ={7},
date ={2004},
pages ={1--8}
}
\bib{Tanaka}{article}{
author ={Tanaka, Noboru},
title ={On the equivalence problems associated with a certain class of homogeneous spaces},
journal ={J. Math. Soc. Japan},
volume ={17},
date ={1965},
pages ={103--139}
}
\bib{Thomas}{article}{
author ={Thomas, Tracy Yerkes},
title ={On the projective and equi-projective geometries of paths},
journal ={Proc. Nat. Acad. Sc.},
volume ={11},
date ={1925},
pages ={199--203}
}
\bib{Weyl}{article}{
author ={Weyl, Hermann},
title ={Zur Infinitesimalgeometrie: Einordnung der projektiven und der konformen Auffassung},
journal ={Nachr. Ges. Wiss. G\"ottingen, Math.-Phys. Kl.},
date ={1921},
pages ={99--112}
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\title{Analysis of a second order discontinuous Galerkin finite element method for the Allen-Cahn equation and the curvature-driven geometric flow}
\markboth{HUANRONG LI AND JUNZHAO HU}{2ND ORDER DG METHODS FOR ALLEN-CAHN EQUATION}
\author{
Huanrong Li\thanks{College of Mathematics and Statistics,
Chongqing Technology and Business University, Chongqing 400067,
China. ({\tt [email protected].})
The work of this author was partially supported by the National Science Foundation of China (11101453),
the Natural Science Foundation Project of Chongqing CSTC (2013jcyjA20015, 2015jcyjA00009), and the Chongqing Education Board of Science Foundation (KJ1400602). Corresponding author.}
\and
Junzhao Hu\thanks{Department of Mathematics, Iowa State University,
Ames, IA 50011, U.S.A. ({\tt [email protected].})
}
}
\maketitle
\begin{abstract}
The paper proposes and analyzes an efficient second-order-in-time numerical approximation for the Allen-Cahn equation, a nonlinear singular perturbation equation arising from the phase separation model. We first present a fully discrete interior penalty discontinuous Galerkin (IPDG) finite element method, which is based on the modified Crank-Nicolson scheme and a mid-point approximation of the nonlinear term $f(u)$. We then derive the stability analysis and error estimates for the proposed IPDG method under some regularity assumptions on the initial function $u_0$. There are two key steps in our analysis: one is to establish an unconditionally energy-stable scheme for the discrete solutions; the other is to use a discrete spectrum estimate to handle the midpoint of the discrete solutions $u^m$ and $u^{m+1}$ in the nonlinear term, instead of using the standard Gronwall inequality technique. This discrete spectrum estimate is not trivial to obtain, since the IPDG space and the conforming $H^1$ space are not contained in each other. All our error bounds depend on the reciprocal of the perturbation parameter $\epsilon$ only in some low polynomial order, instead of exponential order. These sharper error bounds are the key elements in proving the convergence of our numerical solution to the mean curvature flow. Finally, numerical experiments are provided to show the performance of the proposed method.
\end{abstract}
\begin{keywords}
the Allen-Cahn equation, phase separation,
interior penalty discontinuous Galerkin, discrete spectral estimate, mean curvature flow.
\end{keywords}
\section{Introduction}\label{sec-1}
Let $\Omega\subseteq R^{d}$ $(d=2,3)$ be a bounded polygonal or polyhedral domain. Consider the following nonlinear singular perturbation model of the reaction-diffusion equation
\begin{equation}\label{eq1.1}
u_t-\Delta u+\frac{1}{\epsilon^2}f(u)=0, \qquad \mbox{in }
\Omega_T:=\Omega\times(0,T).
\end{equation}
We consider the following homogeneous
Neumann boundary condition
\begin{equation}\label{eq1.2}
\frac{\partial u}{\partial \mathbf{n}} =0, \qquad \mbox{in }
\partial\Omega_T:=\partial\Omega\times(0,T),
\end{equation}
and initial condition
\begin{equation}\label{eq1.3}
u =u_0, \qquad \mbox{in }\Omega\times\{t=0\},
\end{equation}
where $\mathbf{n}$ denotes the unit
outward normal vector to the boundary $\partial\Omega$, and the boundary condition \eqref{eq1.2} means that no mass loss occurs through the boundary walls.
Equation \eqref{eq1.1}, which is called the Allen-Cahn equation, was originally introduced by Allen and Cahn in \cite{1, 8, 12} to describe an interface evolving in time in the phase separation process of crystalline solids. Herein, $\epsilon>0$ is a parameter related to the interface thickness, which is small compared to the characteristic length of the laboratory scale, $u$ denotes the concentration
of one of the two metallic species of the alloy, and $f(u)=F'(u)$ with
$F(u)$ being some given energy potential. Several choices of $F(u)$ have been presented in the literature \cite{2,9,3,4,5,6}. In this paper we focus on the following Ginzburg-Landau double-well potential \cite{feng2014finite, feng2017finite}
\begin{equation}\label{eq1.4}
F(u)=\frac{1}{4}(u^2-1)^2\
\ \mathrm{and}\ \ f(u)=F'(u)=(u^{2}-1)u.
\end{equation}
Although the potential term \eqref{eq1.4} has been widely used, its quartic growth at
infinity leads to a variety of technical difficulties in the numerical approximation of the Allen-Cahn equation. For example, in order to ensure that our numerical scheme is second-order in time, we have to employ the modified Crank-Nicolson scheme and a second-order-in-time approximation of the potential term $f(u)$ (see (3.4) in Section 3.1).
An important feature of the Allen-Cahn equation (1.1) is that it can be viewed as the gradient flow with the Liapunov energy functional
\begin{equation}\label{eq1.5}
\textit{J}_\epsilon(u):=\int_\Omega \phi_\epsilon(u)dx \qquad \mbox{and}\qquad \phi_\epsilon(u)=\frac{1}{2}|\nabla u|^2+\frac{1}{\epsilon^2}F(u).
\end{equation}
More precisely, by taking the inner product of (1.1) with $-\Delta u+\frac{1}{\epsilon^2}f(u)$,
we immediately get the following energy law for (1.1)
\begin{equation}\label{eq1.6}
\frac{\partial}{\partial t}\textit{J}_\epsilon(u(t))=-\int_\Omega|-\Delta u+\frac{1}{\epsilon^2}f(u)|^2dx.
\end{equation}
Nowadays, the Allen-Cahn equation has been extensively investigated
due to its connection to the interesting and complicated \emph{curvature-driven geometric flow} known as
\emph{the mean curvature flow} or \emph{the motion by mean curvature} (cf. \cite{25, 8} and the references therein). It was proved (see \cite{25}) that,
as $\epsilon\rightarrow 0$, the zero
level set of the solution $u$ of the problem (1.1)-(1.4), denoted by $\Gamma_t^\epsilon:=\{x\in\Omega;\ u(x,t)=0\}$, converges to the curvature-driven geometric flow,
which refers to the evolution of a surface governed by the geometric law $V=\kappa$,
where $V$ is the inward normal velocity of the surface $\Gamma_t$ and $\kappa$ is its mean curvature,
see \cite{1,9}.
The Allen-Cahn equation has been widely used in many complicated moving interface problems in fluid dynamics, materials science, image processing and biology (cf. \cite{12,Feng_Li15} and the references therein). Therefore, it is very important to develop
accurate and efficient numerical schemes to solve the Allen-Cahn equation. There are several challenges in obtaining numerical approximations of these problems, such as the existence of a nonlinear potential term $f(u)$ and the presence of the small interaction length $\epsilon$.
An appropriate numerical resolution of the Allen-Cahn equation requires a proper relation between physical and numerical scales, that is, the spatial size $h$ and the time size $k$ must be related to the perturbation parameter $\epsilon$.
In the past thirty years, there has been a large body of work on numerical simulations of the Allen-Cahn equation (1.1) (cf. \cite{13,14,15,16,17,18} and the references therein). However, most of these works were conducted for a fixed parameter $\epsilon$. The error estimates,
which are deduced using the Gronwall inequality \cite{Li, song}, depend on $\frac{1}{\epsilon}$ in exponential order. Such an estimate is obviously not useful for small parameter $\epsilon$, in particular, in discussing whether the flow of the computed numerical interfaces converges to the curvature-driven geometric flow. Less commonly investigated are error estimates which depend on $\frac{1}{\epsilon}$ only in some (low) polynomial orders.
In general, numerical analysis yielding dependence on $\frac{1}{\epsilon}$ in some (low) polynomial orders can be significantly more difficult than that in exponential order. Nevertheless, such work has been reported in \cite{16,17,18,Feng_Li15}. One of the important ideas employed in the mentioned works is to use a discrete spectrum estimate to derive the error order. In fact, the first such polynomial-order-in-$\frac{1}{\epsilon}$ a priori estimate was obtained by Feng and Prohl \cite{16} in 2003 for finite element methods for the Allen-Cahn equation. In 2015, Feng and Li \cite{Feng_Li15} developed fully discrete interior penalty discontinuous Galerkin methods for the Allen-Cahn equation; their scheme is first-order accurate in time and is not unconditionally energy-stable.
However, an essential feature of the Allen-Cahn equation is that it satisfies the energy law (1.6), so it is important to design efficient and accurate numerical schemes that satisfy a corresponding discrete energy law, in other words, that are energy stable.
In contrast to the papers referenced above, we propose a second-order-accurate-in-time, unconditionally energy-stable (with respect to the time and space step sizes), fully discrete interior penalty discontinuous Galerkin finite element scheme for the Allen-Cahn problem (1.1)-(1.4). We develop an interior penalty discontinuous Galerkin finite element method based on the modified Crank-Nicolson scheme and a second-order-in-time approximation of the potential term $f(u)$, establish a priori error estimates of polynomial order in $\frac{1}{\epsilon}$, and prove convergence and rates of convergence for the IPDGFE numerical interfaces. To the best of our knowledge,
no such numerical scheme and convergence analysis for the Allen-Cahn problem (1.1)-(1.4) is available in the literature. The highlights of this paper include not only presenting a second-order-accurate-in-time and unconditionally energy-stable scheme, but also
using a discrete spectrum estimate to handle the midpoint of the discrete solutions $u^m$ and $u^{m+1}$ in the nonlinear term, so as to achieve error bounds depending on the reciprocal of the perturbation parameter $\epsilon$ only in some low polynomial order. Thus, the paper is not a trivial extension of the article \cite{Feng_Li15} by Feng and Li.
The remainder of this paper is organized as follows. Section 2 includes a brief description of the notation, and we recall a few facts and lemmas about the problem (1.1)-(1.4). In Section 3, we present a fully discrete, nonlinear interior penalty discontinuous Galerkin method, a second-order-in-time scheme based on a mid-point approximation of the potential term, prove that it is unconditionally energy-stable and uniquely solvable, and provide a rigorous proof of convergence results for the proposed numerical method. In Section 4, we prove the convergence and rates of convergence of the numerical interfaces of the numerical solutions to the sharp interface of the curvature-driven geometric flow. Finally, Section 5 presents some of our numerical experiments to gauge the performance of the proposed interior penalty discontinuous Galerkin method.
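To make the energy law (1.6) concrete before the analysis, here is a minimal 1D finite-difference sketch (my own illustration with assumed parameter values; it uses explicit Euler in time, not the IPDG/Crank-Nicolson method analyzed in this paper) showing that a discrete analogue of $J_\epsilon$ decreases along the flow:

```python
# 1D Allen-Cahn u_t = u_xx - f(u)/eps^2 with homogeneous Neumann conditions,
# explicit Euler in time; the discrete energy J_eps should be non-increasing.
# All parameter values below are illustrative assumptions.
import numpy as np

eps = 0.1
N = 128
h = 1.0 / N
k = 1.0e-5                                     # small enough for explicit stability
steps = 200
x = np.linspace(0.0, 1.0, N + 1)
u = np.tanh((x - 0.5) / (np.sqrt(2.0) * eps))  # typical interface profile

def energy(u):
    """Discrete version of J_eps(u) = int( |u_x|^2/2 + F(u)/eps^2 )."""
    ux = np.diff(u) / h
    return 0.5 * h * np.sum(ux**2) + h * np.sum((u**2 - 1.0)**2) / (4.0 * eps**2)

def laplacian(u):
    """Second difference with reflecting ghost points (Neumann condition)."""
    ue = np.concatenate(([u[1]], u, [u[-2]]))
    return (ue[2:] - 2.0 * ue[1:-1] + ue[:-2]) / h**2

energies = [energy(u)]
for _ in range(steps):
    u = u + k * (laplacian(u) - (u**2 - 1.0) * u / eps**2)
    energies.append(energy(u))

# Energy decreases monotonically (up to rounding), mirroring (1.6).
assert all(e2 <= e1 + 1e-8 for e1, e2 in zip(energies, energies[1:]))
```

The point of the schemes studied below is to guarantee this decay unconditionally, whereas the explicit sketch above needs $k$ small relative to both $h^2$ and $\epsilon^2$.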
\section{Preliminaries}\label{sec-2}
Let $\mathcal{T}_h$ be a quasi-uniform ``triangulation'' of $\Omega$ such that
$\overline{\Omega}=\bigcup_{K\in\mathcal{T}_h} \overline{K}$. Let $h_K$ denote
the diameter of $K\in \mathcal{T}_h$ and $h:=\max\{h_K;\ K\in\mathcal{T}_h\}$.
We recall that the standard broken Sobolev space $H^s(\mathcal{T}_h)$ and DG finite
element space $V_h$ are defined as
\[
H^s(\mathcal{T}_h):=\prod_{K\in\mathcal{T}_h} H^{s}(K), \qquad
V_h:=\prod_{K\in\mathcal{T}_h} P_r(K),
\]
where $P_r(K)$ denotes the set of all polynomials whose degrees do not
exceed a given positive integer $r$. Let $\mathcal{E}_h^I$ denote the set of all
interior faces/edges of $\mathcal{T}_h$, $\mathcal{E}_h^B$ denote the set of all boundary
faces/edges of $\mathcal{T}_h$, and $\mathcal{E}_h:=\mathcal{E}_h^I\cup \mathcal{E}_h^B$. The $L^2$-inner product
for piecewise functions over the mesh $\mathcal{T}_h$ is naturally defined by
\[
(u,v)_{\mathcal{T}_h}:= \sum_{K\in \mathcal{T}_h} \int_{K} u v\, dx,
\]
and for any set $\mathcal{S}_h \subset \mathcal{E}_h$, the $L^2$-inner product
over $\mathcal{S}_h$ is defined by
\begin{align*}
\big\langle u,v\big\rangle_{\mathcal{S}_h} :=\sum_{e\in \mathcal{S}_h} \int_e uv\, ds.
\end{align*}
Let $K, K'\in \mathcal{T}_h$ and $e=\partial K\cap \partial K'$, and assume that the
global labeling number of $K$ is smaller than that of $K'$.
We choose $n_e:=n_K|_e=-n_{K'}|_e$ as the unit normal on $e$ and
define the following standard jump and average notations
across the face/edge $e$:
\begin{alignat*}{4}
[v] &:= v|_K-v|_{K'}
\quad &&\mbox{on } e\in \mathcal{E}_h^I,\qquad
&&[v] :=v\quad
&&\mbox{on } e\in \mathcal{E}_h^B,\\
\{v\} &:=\frac12\bigl( v|_K +v|_{K'} \bigr) \quad
&&\mbox{on } e\in \mathcal{E}_h^I,\qquad
&&\{v\}:=v\quad
&&\mbox{on } e\in \mathcal{E}_h^B
\end{alignat*}
for $v\in V_h$.
Let $M$ be a (large) positive integer. Define $k:=T/M$ and $t_m:=mk$
for $m=0,1,2,\cdots,M$, so that $\{t_m\}_{m=0}^M$ is a uniform partition of $[0,T]$. For a sequence
of functions $\{v^m\}_{m=0}^M$, we define the (backward) difference operator
\begin{equation}
d_t v^m:= \frac{v^m-v^{m-1}}{k}, \qquad m=1,2,\cdots,M. \nonumber
\end{equation}
First, we introduce the DG elliptic projection operator $P_r^h: H^s(\mathcal{T}_h)\to V_h$ by
\begin{equation}\label{eq2.1}
a_h(v-P_r^h v, w_h) + \bigl( v-P_r^h v, w_h \bigr)_{\mathcal{T}_h} =0
\quad\forall w_h\in V_h
\end{equation}
for any $v\in H^s(\mathcal{T}_h)$.
We start with a well-known fact \cite{18} that the
Allen-Cahn equation \eqref{eq1.1} can be interpreted as the $L^2$-gradient
flow for the following Cahn-Hilliard energy functional
\begin{equation}\label{eq2.2}
J_\epsilon(v):= \int_\Omega \Bigl( \frac12 |\nabla v|^2
+ \frac{1}{\epsilon^2} F(v) \Bigr)\, dx.
\end{equation}
The following assumptions on the initial datum $u_0$ are made as in \cite{Feng_Li15, feng2014finite, feng2015analysis, feng2017finite, 16, li2015numerical, xu2016convex, 14} to derive a priori solution estimates.
{\bf General Assumption} (GA)
\begin{itemize}
\item[(1)] There exists a nonnegative constant $\sigma_1$ such that
\begin{equation}\label{eq2.3}
J_{\epsilon}(u_0)\leq C\epsilon^{-2\sigma_1}.
\end{equation}
\item[(2)] There exists a nonnegative constant $\sigma_2$ such that
\begin{equation}\label{eq2.4}
\|\Delta u_0 -\epsilon^{-2} f(u_0)\|_{L^2(\Omega)} \leq C\epsilon^{-\sigma_2}.
\end{equation}
\item[(3)]
There exists a nonnegative constant $\sigma_3$ such that
\begin{equation}\label{eq2.5}
\lim_{s\rightarrow0^{+}} \|\nabla u_t(s)\|_{L^2(\Omega)}\leq C\epsilon^{-\sigma_3}.
\end{equation}
\end{itemize}
The following solution estimates can be found in \cite{16, Feng_Li15}.
\begin{proposition}\label{prop2.1}
Suppose that \eqref{eq2.3} and \eqref{eq2.4} hold. Then the solution $u$ of
problem \eqref{eq1.1}--\eqref{eq1.4} satisfies the following estimates:
\begin{align} \label{eq2.6}
&\underset{t\in [0,\infty)}{\mbox{\rm ess sup }} \|u(t)\|_{L^\infty(\Omega)} \leq 1,\\
&\underset{t\in [0,\infty)}{\mbox{\rm ess sup }}\, J_{\epsilon}(u)
+ \int_{0}^{\infty} \|u_t(s)\|_{L^2(\Omega)}^2\, ds
\leq C \epsilon^{-2\sigma_1},\label{eq2.7} \\
&\int_{0}^{T} \|\Delta u(s)\|^2\, ds \leq C \epsilon^{-2(\sigma_1+1)}, \label{eq2.8}\\
&\underset{t\in [0,\infty)}{\mbox{\rm ess sup }} \Bigl( \|u_t\|_{L^2(\Omega)}^2 +\|u\|_{H^2(\Omega)}^2 \Bigr)
+\int_{0}^{\infty} \|\nabla u_t(s)\|_{L^2(\Omega)}^2\, ds
\leq C \epsilon^{-2\max\{\sigma_1+1,\sigma_2\}}, \label{eq2.9} \\
&\int_{0}^{\infty} \Bigl(\|u_{tt}(s)\|_{H^{-1}(\Omega)}^2
+\|\Delta u_t(s)\|_{H^{-1}(\Omega)}^2\Bigr) \, ds
\leq C \epsilon^{-2\max\{\sigma_1+1,\sigma_2\}}. \label{eq2.10}
\end{align}
In addition to \eqref{eq2.3} and \eqref{eq2.4},
suppose that \eqref{eq2.5} holds, then $u$ also satisfies
\begin{align} \label{eq2.11}
&\underset{t\in [0,\infty)}{\mbox{\rm ess sup }} \|\nabla u_t\|_{L^2(\Omega)}^2 +\int_0^{\infty} \|u_{tt}(s)\|_{L^2(\Omega)}^2 \,ds
\leq C\epsilon^{-2\max\{\sigma_1+2,\sigma_3\}},\\
&\int_{0}^{\infty} \|\Delta u_t(s)\|_{L^2(\Omega)}^2 \,ds
\leq C\epsilon^{-2\max\{\sigma_1+2, \sigma_3\}}. \label{eq2.12}
\end{align}
\end{proposition}
Next, we quote the following well-known error estimate results from
\cite{21, 22}.
\begin{lemma}\label{lem2.2}
Let $v\in W^{s,\infty}(\mathcal{T}_h)$. Then there hold
{\small
\begin{align}\label{eq2.13}
\|v-P_r^h v\|_{L^2(\mathcal{T}_h)} +h\|\nabla(v-P_r^h v)\|_{L^2(\mathcal{T}_h)}
&\leq Ch^{\min\{r+1,s\}}\|v\|_{H^s(\mathcal{T}_h)},\\
\frac{1}{|\ln h|^{\overline{r}}} \|v-P_r^h v\|_{L^\infty(\mathcal{T}_h)}
+ h\|\nabla(v-P_r^h v)\|_{L^\infty(\mathcal{T}_h)}
&\leq Ch^{\min\{r+1,s\}}\|v\|_{W^{s,\infty}(\mathcal{T}_h)}. \label{eq2.14}
\end{align}
}
where $\overline{r}:=\min\{1, r\}-\min\{1, r-1\}$.
\end{lemma}
Define $C_1$ as
\begin{equation}\label{eq2.15}
C_1:=\max_{|\xi|\leq 2}|f''(\xi)|.
\end{equation}
Let $\widehat{P}_r^h$, the counterpart of $P_r^h$, denote the elliptic projection
operator onto the conforming finite element space $S_h:=V_h\cap C^0(\overline{\Omega})$.
Then the following estimate holds (cf. \cite{21}):
\begin{equation}\label{eq2.16}
\|u-\widehat{P}_r^h u\|_{L^{\infty}}\leq Ch^{2-\frac{d}{2}}\|u\|_{H^2}.
\end{equation}
We now state our discrete spectrum estimate for the DG approximation.
\begin{proposition}\label{prop2.3}
Suppose there exists a positive number $\gamma>0$ such that the solution $u$ of
problem \eqref{eq1.1}--\eqref{eq1.4} satisfies
\begin{equation}\label{eq2.17}
\underset{t\in [0,T]}{\mbox{\rm ess sup}}\, \|u(t)\|_{W^{r+1,\infty}(\Omega)}
\leq C\epsilon^{-\gamma}.
\end{equation}
Then there exists an $\epsilon$-independent and $h$-independent constant
$c_0>0$ such that for $\epsilon\in(0,1)$ and a.e. $t\in [0,T]$
\begin{align}\label{eq2.18}
\lambda_h^{\mbox{\tiny DG}}(t):=\inf_{\psi_h\in V_h\atop\psi_h\not\equiv 0}
\frac{ a_h(\psi_h,\psi_h) + \frac{1}{\epsilon^2}\Bigl( f'\bigl(P_r^h u(t)\bigr)\psi_h,
\psi_h \Bigr)_{\mathcal{T}_h}}{\|\psi_h\|_{L^2(\mathcal{T}_h)}^2} \geq -c_0,
\end{align}
provided that $h$ satisfies the constraint
\begin{align}\label{eq2.19}
h^{2-\frac{d}{2}}
&\leq C_0 (C_1C_2)^{-1}\epsilon^{\max\{\sigma_1+3,\sigma_2+2\}},\\
h^{\min\{r+1,s\}}|\ln h|^{\overline{r}} &\leq C_0 (C_1C_2)^{-1}\epsilon^{\gamma+2},
\label{eq2.20}
\end{align}
where $C_2$ arises from the following inequality:
\begin{align}\label{eq2.21}
&\|u-P^h_r u\|_{L^{\infty}((0,T);L^{\infty}(\Omega))}
\leq C_2 h^{\min\{r+1,s\}}|\ln h|^{\overline{r}} \epsilon^{-\gamma},\\
&\|u-\widehat{P}^h_r u\|_{L^{\infty}((0,T);L^{\infty}(\Omega))}
\leq C_2 h^{2-\frac{d}{2}} \epsilon^{-\max\{\sigma_1+1,\sigma_2\}}. \label{eq2.22}
\end{align}
\end{proposition}
\begin{lemma}\label{lem2.4}
Let $\{S_{\ell} \}_{\ell\geq 1}$ be a positive nondecreasing sequence and
$\{b_{\ell}\}_{\ell\geq 1}$ and $\{k_{\ell}\}_{\ell\geq 1}$ be nonnegative sequences,
and $p>1$ be a constant. If
\begin{eqnarray}\label{eq2.23}
&S_{\ell+1}-S_{\ell}\leq b_{\ell}S_{\ell}+k_{\ell}S^p_{\ell} \qquad\mbox{for } \ell\geq 1,
\\ \label{eq2.24}
&S^{1-p}_{1}+(1-p)\mathop{\sum}\limits_{s=1}^{\ell-1}k_{s}a^{1-p}_{s+1}>0
\qquad\mbox{for } \ell\geq 2,
\end{eqnarray}
then
\begin{equation}\label{eq2.25}
S_{\ell}\leq \frac{1}{a_{\ell}} \Bigg\{S^{1-p}_{1}+(1-p)
\sum_{s=1}^{\ell-1}k_{s}a^{1-p}_{s+1}\Bigg\}^{\frac{1}{1-p}}\qquad\text{for}\ \ell\geq 2,
\end{equation}
where
\begin{equation}\label{eq2.26}
a_{\ell} := \prod_{s=1}^{\ell-1} \frac{1}{1+b_{s}} \qquad\mbox{for } \ell\geq 2.
\end{equation}
\end{lemma}
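As a quick numerical sanity check of Lemma \ref{lem2.4} (an illustrative Python sketch, not part of the analysis), one can drive the recurrence \eqref{eq2.23} with equality on a toy sequence and verify the bound \eqref{eq2.25}:

```python
# Illustrative check of the generalized discrete Gronwall lemma (Lemma 2.4):
# run S_{l+1} = S_l + b_l S_l + k_l S_l^p with equality and compare against
# the bound (2.25), where a_l = prod_{s=1}^{l-1} 1/(1+b_s).

p = 2.0
b = [0.1] * 10                 # b_l, l = 1..10
kk = [0.01] * 10               # k_l, l = 1..10
S = [None, 0.1]                # 1-based: S[1] = S_1
for l in range(1, 10):
    S.append(S[l] + b[l - 1] * S[l] + kk[l - 1] * S[l] ** p)

def a(l):
    """a_l = prod_{s=1}^{l-1} 1/(1+b_s); empty product for l = 1."""
    prod = 1.0
    for s in range(1, l):
        prod /= 1.0 + b[s - 1]
    return prod

def bound(l):
    """Right-hand side of (2.25)."""
    acc = S[1] ** (1.0 - p)
    for s in range(1, l):
        acc += (1.0 - p) * kk[s - 1] * a(s + 1) ** (1.0 - p)
    assert acc > 0.0           # condition (2.24)
    return acc ** (1.0 / (1.0 - p)) / a(l)

violations = [l for l in range(2, 11) if S[l] > bound(l) * (1.0 + 1e-12)]
```

For the chosen data, condition \eqref{eq2.24} is satisfied, and the equality-driven sequence stays below the bound at every step.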
\section{Fully discrete IP-DG approximations}\label{sec-3}
\subsection{Discretized DG scheme} \label{sec-3.1}
We are now ready to introduce our fully discrete DG finite element method
for problem \eqref{eq1.1}--\eqref{eq1.4}. It is defined by seeking
$u^{m+1}\in V_h$ for $m=0,1,2,\cdots, M-1$ such that
\begin{alignat}{2}\label{eq3.1}
\bigl( d_t u^{m+1},v_h\bigr)_{\mathcal{T}_h}
+a_h(u^{m+\frac{1}{2}},v_h)+\frac{1}{\epsilon^2}\bigl( f^{m+1},v_h\bigr)_{\mathcal{T}_h} &=0
&&\quad\forall v_h\in V_h,
\end{alignat}
where
\begin{align}\label{eq3.2}
a_h(u,v_h) &:=\bigl( \nabla u,\nabla v_h\bigr)_{\mathcal{T}_h}
-\bigl\langle \{\partial_n u\}, [v_h] \bigr\rangle_{\mathcal{E}_h^I} \\
&\hskip 1.1in
+\lambda \bigl\langle [u], \{\partial_n v_h\} \bigr\rangle_{\mathcal{E}_h^I} + j_h(u,v_h), \nonumber
\end{align}
\begin{align}\label{eq3.3}
j_h(u,v_h)&:=\sum_{e\in\mathcal{E}_h^I}\frac{\sigma_e}{h_e}\big\langle [u],[v_h] \big\rangle_e,
\end{align}
\begin{align}\label{eq3.4}
f^{m+1}&:=\frac{1}{4} \bigl[(u^{m+1})^3+(u^{m+1})^2 u^m+u^{m+1}(u^{m})^2+(u^{m})^3\bigr]-u^{m+\frac{1}{2}}\\
&=\frac{F(u^{m+1})-F(u^m)}{u^{m+1}-u^m}.\nonumber
\end{align}
where $u^{m+\frac{1}{2}}=\frac{u^{m+1}+u^{m}}{2}$, $\lambda=0,\pm 1$, and $\sigma_e$ is a positive piecewise constant
function on $\mathcal{E}_h^I$, which will be chosen later (see Lemma \ref{lem-3.1}). In addition,
we need to supply $u_h^0$ to start the time-stepping; its choice will be specified below.
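As a quick sanity check of \eqref{eq3.4} (an illustrative Python sketch, not part of the analysis), one can verify numerically that the scheme's nonlinear term is the secant slope of the potential $F$:

```python
# Illustrative check of (3.4): for F(v) = (1/4)(v^2-1)^2, the term
#   f^{m+1} = (1/4)[a^3 + a^2 b + a b^2 + b^3] - (a+b)/2,  a = u^{m+1}, b = u^m,
# equals the secant slope (F(a) - F(b)) / (a - b).

def F(v):
    return 0.25 * (v * v - 1.0) ** 2

def f_secant(a, b):
    """The nonlinear term f^{m+1} as written in (3.4)."""
    return 0.25 * (a**3 + a**2 * b + a * b**2 + b**3) - 0.5 * (a + b)

pairs = [(-1.5, -0.9), (-0.3, 0.1), (0.7, 0.8), (1.2, 2.0)]
max_err = max(abs(f_secant(a, b) - (F(a) - F(b)) / (a - b)) for a, b in pairs)
```

This secant identity is what makes the potential term telescope in the stability proof below.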
\begin{lemma}\label{lem-3.1}
There exist constants $\sigma_0, \alpha>0$ such that, if
$\sigma_e>\sigma_0$ for all $e\in \mathcal{E}_h$, then
\begin{equation}\label{eq3.11a}
\Phi^h(v_h)\geq \alpha \|v_h\|_{1,\mbox{\tiny DG}}^2 \qquad\forall v_h\in V_h, \nonumber
\end{equation}
where
\begin{equation}\label{eq3.11b}
\|v_h\|_{1,\mbox{\tiny DG}}^2 := \|\nabla v_h\|_{L^2(\mathcal{T}_h)}^2 + j_h(v_h,v_h). \nonumber
\end{equation}
\end{lemma}
Now we introduce three mesh-dependent energy functionals
which can be regarded as DG counterparts of the continuous
Cahn-Hilliard energy $J_\epsilon$ defined in \eqref{eq2.2}.
\begin{align}\label{eq3.5}
\Phi^h(v) &:=\frac{1}{2} \|\nabla v\|_{L^2(\mathcal{T}_h)}^2
-\bigl\langle \{\partial_n v\}, [v] \bigr\rangle_{\mathcal{E}_h^I} + \frac12 j_h(v,v) \qquad
\forall v\in H^2(\mathcal{T}_h), \\
J_\epsilon^h(v) &:= \Phi^h(v) +\frac{1}{\epsilon^2} \bigl( F(v), 1\bigr)_{\mathcal{T}_h}
\qquad \forall v\in H^2(\mathcal{T}_h), \label{eq3.6} \\
I_\epsilon^h(v) &:= \Phi^h(v) +\frac{1}{\epsilon^2} \bigl( F_c^+(v), 1\bigr)_{\mathcal{T}_h}
\qquad \forall v\in H^2(\mathcal{T}_h), \label{eq3.7}
\end{align}
It is easy to check that $\Phi^h$ and $I_\epsilon^h$ are convex functionals
but $J_\epsilon^h$ is not because $F$ is not convex. Moreover, we have:
\begin{lemma}\label{lem3.2}
Let $\lambda =-1$ in \eqref{eq3.2}. Then, for all $v_h,w_h\in V_h$, there hold
\begin{align}\label{eq3.8}
\Bigl(\frac{\delta \Phi^h(v_h)}{ \delta v_h}, w_h \Bigr)_{\mathcal{T}_h}
&:=\lim_{s\to 0} \frac{\Phi^h(v_h+ s w_h)-\Phi^h(v_h) }{s}
=a_h(v_h, w_h), \\
\Bigl(\frac{\delta J_\epsilon^h(v_h)}{ \delta v_h}, w_h \Bigr)_{\mathcal{T}_h}
:&=\lim_{s\to 0} \frac{J_\epsilon^h(v_h+ s w_h)-J_\epsilon^h(v_h) }{s} \label{eq3.9} \\
&=a_h(v_h, w_h) +\frac{1}{\epsilon^2} \bigl(F'(v_h), w_h \bigr)_{\mathcal{T}_h},
\nonumber\\
\Bigl(\frac{\delta I_\epsilon^h(v_h)}{ \delta v_h}, w_h \Bigr)_{\mathcal{T}_h}
:&=\lim_{s\to 0} \frac{I_\epsilon^h(v_h+ s w_h)-I_\epsilon^h(v_h) }{s} \\
&=a_h(v_h, w_h) +\frac{1}{\epsilon^2} \bigl((F_c^+)'(v_h), w_h \bigr)_{\mathcal{T}_h}.
\nonumber
\end{align}
\end{lemma}
\subsection{Stability of the DG scheme} \label{sec-3.2}
\begin{theorem}
The scheme \eqref{eq3.1}--\eqref{eq3.4} is unconditionally stable for all $h, k>0$.
\end{theorem}
Proof: Recall that the DG scheme reads:
\begin{align}\label{eq3.11}
\big(d_tu^{m+1},v_h\bigr)_{\mathcal{T}_h}+a_h\big(\frac{u^{m+1}+u^{m}}{2},v_h\big)+\frac{1}{\epsilon^2}\big(f^{m+1},v_h\big)_{\mathcal{T}_h}=0.
\end{align}
Taking $v_h=d_tu^{m+1}$, we get:
\begin{align}\label{eq3.12}
\big(d_tu^{m+1},d_tu^{m+1}\bigr)_{\mathcal{T}_h}+a_h\big(\frac{u^{m+1}+u^{m}}{2},d_tu^{m+1}\big)&\\
+\frac{1}{\epsilon^2}\big(\frac{F(u^{m+1})-F(u^m)}{u^{m+1}-u^m},d_tu^{m+1}\big)_{\mathcal{T}_h}&=0.\notag
\end{align}
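Note that the second term above telescopes because $a_h(\cdot,\cdot)$ is symmetric for the choice $\lambda=-1$ in \eqref{eq3.2}:
\begin{align*}
a_h\Bigl(\frac{u^{m+1}+u^{m}}{2},\frac{u^{m+1}-u^{m}}{k}\Bigr)
=\frac{1}{2k}\bigl[a_h(u^{m+1},u^{m+1})-a_h(u^{m},u^{m})\bigr]
=\frac12\, d_t\bigl[a_h(u^{m+1},u^{m+1})\bigr].
\end{align*}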
Rearranging, we obtain:
\begin{align}\label{eq3.13}
\|d_tu^{m+1}\|^2_{L^2(\mathcal{T}_h)}+\frac 1 2 d_t\bigl[a_h(u^{m+1},u^{m+1})\bigr]+\frac{1}{\epsilon^2}d_t \bigl(F(u^{m+1}),1\bigr)_{\mathcal{T}_h}=0,
\end{align}
hence
\begin{align}\label{eq3.14}
d_t\Bigl[ \frac1 2a_h(u^{m+1},u^{m+1})+\frac{1}{\epsilon^2}\bigl(F(u^{m+1}),1\bigr)_{\mathcal{T}_h}\Bigr]\leq0.
\end{align}
Thus the discrete energy is non-increasing, and the proof is complete.
\subsection{Well-posedness of the DG scheme}\label{sec-3.3}
We seek a second-order approximation of the nonlinear term that leads to an
unconditionally energy-stable scheme. To this end, we split the function
$F(v)=\frac14 (v^2-1)^2$ into the difference of two convex parts,
$F(v)=F_c^+(v)-F_c^-(v)$, where $F_c^+(v):= \frac14 (v^4+1)$ and
$F_c^-(v):=\frac12 v^2$. The corresponding second-order secant approximations
of the two convex parts are
\begin{align}
f^+(u^{m+1},u^m)=\frac{F_c^+(u^{m+1})-F_c^+(u^m)}{u^{m+1}-u^m},\nonumber
\end{align}
\begin{align}
f^-(u^{m+1},u^m)=\frac{F_c^-(u^{m+1})-F_c^-(u^m)}{u^{m+1}-u^m}.\nonumber
\end{align}
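As another sanity check (an illustrative Python sketch, assuming nothing beyond the definitions above), the convex split recombines exactly to the secant term $f^{m+1}$ of \eqref{eq3.4}:

```python
# Illustrative check: F_c^+(v) - F_c^-(v) = F(v), and the secant terms
# f^+ - f^- reproduce the secant slope (F(a) - F(b)) / (a - b) = f^{m+1}.

def F(v):       return 0.25 * (v * v - 1.0) ** 2
def F_plus(v):  return 0.25 * (v**4 + 1.0)
def F_minus(v): return 0.5 * v * v

def f_plus(a, b):   return (F_plus(a) - F_plus(b)) / (a - b)
def f_minus(a, b):  return (F_minus(a) - F_minus(b)) / (a - b)

decomp_err = max(abs(F_plus(v) - F_minus(v) - F(v)) for v in (-2.0, -0.3, 0.0, 1.5))
pairs = [(-1.2, 0.4), (0.9, -0.5), (2.0, 1.0)]
split_err = max(abs((f_plus(a, b) - f_minus(a, b)) - (F(a) - F(b)) / (a - b))
                for a, b in pairs)
```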
\begin{theorem}
Under the constraint $k<2\epsilon^2$, there exists a unique solution of the scheme \eqref{eq3.1}--\eqref{eq3.4}.
\end{theorem}
Proof: Define the following functional:
\begin{align}\label{eq3.15}
J(u^{m+1})&=\frac1 4 a_h(u^{m+1},u^{m+1})+ \frac{1}{\epsilon^2}\bigl(F_+(u^{m+1},u^m),1\bigr)_{\mathcal{T}_h}\\
&+\Bigl(\frac{1}{2k}-\frac{1}{4\epsilon^2}\Bigr)\|u^{m+1}\|^2_{L^2(\mathcal{T}_h)}+\frac1 2 a_h(u^m,u^{m+1})+\Bigl(\Bigl(-\frac{1}{2\epsilon^2}-\frac{1}{k}\Bigr)u^m,u^{m+1}\Bigr)_{\mathcal{T}_h},\nonumber
\end{align}
where $F_+(\cdot,u^m)$ denotes an antiderivative of $f^+(\cdot,u^m)$.
Taking the variational derivative of the functional $J(u^{m+1})$, we get:
\begin{align}\label{eq3.16}
\Bigl(\frac{\delta J(u^{m+1})}{ \delta u^{m+1}}, v_h \Bigr)_{\mathcal{T}_h}&=\frac1 2 a_h(u^{m+1},v_h)+\frac{1}{\epsilon^2}\bigl(f^+(u^{m+1},u^m),v_h\bigr)_{\mathcal{T}_h}\\
&+\Bigl(\frac{1}{k}-\frac{1}{2\epsilon^2}\Bigr)(u^{m+1},v_h)_{\mathcal{T}_h}+\frac 1 2 a_h(u^m,v_h)+\Bigl(-\frac{1}{2\epsilon^2}-\frac{1}{k}\Bigr)(u^{m},v_h)_{\mathcal{T}_h}.\nonumber
\end{align}
Rearranging, we see that
\begin{align}\label{eq3.17}
\Bigl(\frac{\delta J(u^{m+1})}{ \delta u^{m+1}}, v_h \Bigr)_{\mathcal{T}_h}=\bigl( d_t u^{m+1},v_h\bigr)_{\mathcal{T}_h}
+a_h(u^{m+\frac{1}{2}},v_h)+\frac{1}{\epsilon^2}\bigl( f^{m+1},v_h\bigr)_{\mathcal{T}_h},
\end{align}
so the scheme \eqref{eq3.1} is exactly the Euler--Lagrange equation of $J$.
From \eqref{eq3.15}, the first two terms of $J(u^{m+1})$ are convex, and the
last two terms are linear in $u^{m+1}$, hence also convex. If the coefficient
of the third term is positive, that is, if $k<2\epsilon^2$, then $J(u^{m+1})$
is strictly convex, and the uniqueness of the solution to the scheme is proved.
\subsection{Error estimates analysis}\label{sec-3.4}
The main result of this subsection is the following error
estimate theorem.
\begin{theorem}\label{thm3.5}
Suppose $\sigma_e>\max\{\sigma_0,\sigma_0'\}$.
Let $u$ and $\{u_h^m\}_{m=1}^M$ denote respectively the solutions of problems
\eqref{eq1.1}--\eqref{eq1.4} and \eqref{eq3.1}--\eqref{eq3.5}.
Assume $u\in H^2((0,T);$ $L^2(\Omega))\cap L^2((0,T); W^{s,\infty}(\Omega))$
and suppose (GA) and \eqref{eq2.17} hold. Then, under the following
mesh and initial value constraints:
\begin{align*}
h^{2-\frac{d}{2}} &\leq C_0 (C_1C_2)^{-1}\epsilon^{\max\{\sigma_1+3,\sigma_2+2\}},
\end{align*}
\begin{align*}
h^{\min\{r+1,s\}}|\ln h|^{\overline{r}} &\leq C_0 (C_1C_2)^{-1}\epsilon^{\gamma+2},
\end{align*}
\begin{align*}
k<A(\epsilon),
\end{align*}
\begin{align*}
u_h^0\in S_h\mbox{ such that }\quad
\|u_0 -u_h^0\|_{L^2(\mathcal{T}_h)} &\leq C h^{\min\{r+1,s\}},
\end{align*}
there hold
\begin{align}\label{eq3.18}
\max_{0\leq m\leq M} \|u(t_m)-u_h^m\|_{L^2(\mathcal{T}_h)}
&\leq C(k^2+h^{\min\{r+1,s\}})\epsilon^{-(\sigma_1+2)},\\
\Bigl( k\sum_{m=1}^M \| u(t_m)-u_h^m\|_{H^1(\mathcal{T}_h)}^2 \Bigr)^{\frac{1}{2}}
&\leq C(k^2+h^{\min\{r+1,s\}-1})\epsilon^{-(\sigma_1+3)}, \label{eq3.19} \\
\max_{0\leq m\leq M} \|u(t_m)-u_h^m\|_{L^\infty(\mathcal{T}_h)}
&\leq C h^{\min\{r+1,s\}} |\ln h|^{\overline{r}} \epsilon^{-\gamma} \label{eq3.20} \\
&\qquad
+Ch^{-\frac{d}{2}}(k^2+h^{\min\{r+1,s\}})\epsilon^{-(\sigma_1+2)}. \nonumber
\end{align}
\end{theorem}
Proof: Since the proof is long, we split it into four steps.\\
{\em Step 1}:
We write
\[
u(t_m)-u^m=\eta^m + \xi^m,\quad \eta^m:=u(t_m)-P_r^h u(t_m),\quad
\xi^m:=P_r^h u(t_m)- u^m.
\]
Testing the Allen--Cahn equation \eqref{eq1.1} at $t=t_{m+\frac{1}{2}}$ with $v_h$ gives
\begin{align}\label{eq3.21}
\bigl(u_t(t_{m+\frac{1}{2}}),v_h\bigr)_{\mathcal{T}_h} +a_h(u(t_{m+\frac{1}{2}}),v_h)
+\frac{1}{\epsilon^2}\bigl( f(u(t_{m+\frac{1}{2}})), v_h\bigr)_{\mathcal{T}_h}
= 0,
\end{align}
for all $v_h\in V_h$, where $t_{m+\frac{1}{2}}=\frac{t_{m+1}+t_{m}}{2}$.
Subtracting \eqref{eq3.1} from \eqref{eq3.21}, we obtain:
\begin{align}\label{eq3.22}
&\bigl(u_t(t_{m+\frac{1}{2}})- \frac{u^{m+1}-u^{m}}{k},v_h\bigr)_{\mathcal{T}_h}
+a_h\big(u(t_{m+\frac{1}{2}})-\frac{u^{m+1}+u^{m}} {2},v_h\big)\\
\hskip 0.0in
&+\frac{1}{\epsilon^2}\bigl( f(u(t_{m+\frac{1}{2}}))-f^{m+1}, v_h \bigr)_{\mathcal{T}_h} = 0.\nonumber
\end{align}
By Taylor expansion,
\[
u(t_{m+1})=u(t_{m+\frac12})+\frac{k}{2}\,u_t(t_{m+\frac12})+R_1^{m},
\qquad R_1^{m}=u_{tt}(\xi_1)\frac{k^2}{4},
\]
\[
u(t_{m})=u(t_{m+\frac12})-\frac{k}{2}\,u_t(t_{m+\frac12})+R_2^{m},
\qquad R_2^{m}=u_{tt}(\xi_2)\frac{k^2}{4},
\]
from which it follows that
\begin{align}\label{eq3.23}
u(t_{m+\frac{1}{2}})=\frac{u(t_{m+1})+u(t_{m})}{2}-\frac{\big(R_1^{m}+R_2^{m}\big)}{2},
\end{align}
\begin{align}\label{eq3.24}
u_t(t_{m+\frac{1}{2}})=\frac{u(t_{m+1})-u(t_{m})}{k}-\frac{\big(R_1^{m}-R_2^{m}\big)}{k}.
\end{align}
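A quick numerical illustration of \eqref{eq3.23}--\eqref{eq3.24} (a Python sketch with the sample function $u(t)=\sin t$, not part of the proof) confirms that both midpoint errors are $O(k^2)$:

```python
# Illustrative check: for smooth u (here u(t) = sin t), the averaging and
# differencing errors at the half step t_{m+1/2} both scale like k^2.
import math

def midpoint_errors(t, k):
    avg = 0.5 * (math.sin(t + k) + math.sin(t))    # (u(t_{m+1}) + u(t_m)) / 2
    diff = (math.sin(t + k) - math.sin(t)) / k     # (u(t_{m+1}) - u(t_m)) / k
    e_avg = abs(math.sin(t + 0.5 * k) - avg)       # error in u(t_{m+1/2})
    e_diff = abs(math.cos(t + 0.5 * k) - diff)     # error in u_t(t_{m+1/2})
    return e_avg, e_diff

e1 = midpoint_errors(0.3, 0.1)
e2 = midpoint_errors(0.3, 0.05)
# Halving k should reduce both errors by roughly a factor of 4.
ratio_avg = e1[0] / e2[0]
ratio_diff = e1[1] / e2[1]
```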
Substituting \eqref{eq3.23} and \eqref{eq3.24} into \eqref{eq3.22}, we obtain:
\begin{align}\label{eq3.25}
&\bigl(\frac{\xi^{m+1}-\xi^{m}}{k}+\frac{\eta^{m+1}-\eta^{m}}{k}-\frac{\big(R_1^{m}-R_2^{m}\big)}{k},v_h\bigr)_{\mathcal{T}_h} \\
\hskip 0.0in
&+a_h\bigl(\frac{\xi^{m+1}+\xi^{m}}{2}+\frac{\eta^{m+1}+\eta^{m}}{2}-\frac{\big(R_1^{m}+R_2^{m}\big)}{2},v_h\bigr)\nonumber \\
\hskip 0.0in
&+\frac{1}{\epsilon^2}\bigl( f(u(t_{m+\frac{1}{2}}))-f^{m+1}, v_h \bigr)_{\mathcal{T}_h} = 0.\nonumber
\end{align}
that is,
\begin{align}\label{eq3.26}
&\bigl(d_t \xi^{m+1},v_h\bigr)_{\mathcal{T}_h} +a_h(\frac{\xi^{m+1}+\xi^{m}}{2},v_h)\\
&+\frac{1}{\epsilon^2}\bigl( f(u(t_{m+\frac1 2}))-f^{m+1}, v_h\bigr)_{\mathcal{T}_h} \nonumber\\
\hskip 0.5in
&= \bigl(\frac{\big(R_1^{m}-R_2^{m}\big)}{k},v_h\bigr)_{\mathcal{T}_h}
-\bigl(d_t \eta^{m+1},v_h\bigr)_{\mathcal{T}_h}\nonumber\\
&-a_h(\frac{\eta^{m+1}+\eta^{m}}{2},v_h)+a_h(\frac{\big(R_1^{m}+R_2^{m}\big)}{2},v_h) \nonumber\\
&= \bigl(\frac{\big(R_1^{m}-R_2^{m}\big)}{k},v_h\bigr)_{\mathcal{T}_h}
-\bigl(d_t \eta^{m+1},v_h\bigr)_{\mathcal{T}_h} \nonumber\\
&+(\frac{\eta^{m+1}+\eta^{m}}{2},v_h)_{\mathcal{T}_h}+a_h(\frac{\big(R_1^{m}+R_2^{m}\big)}{2},v_h). \nonumber
\end{align}
Take $v_h=\frac{\xi^{m+1}+\xi^{m}}{2}$. For the first term on the left-hand side:
\begin{align}\label{eq3.27}
&\bigl(d_t \xi^{m+1},\frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}= \frac12 d_t \|\xi^{m+1}\|_{L^2(\mathcal{T}_h)}^2.
\end{align}
We split the third term on the left-hand side of \eqref{eq3.26} into two parts and deal with them separately:
\begin{align}\label{eq3.28}
&\frac{1}{\epsilon^2}\bigl( f(u(t_{m+\frac1 2}))-f^{m+1}, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&=\frac{1}{\epsilon^2}\bigl( f(u(t_{m+\frac1 2}))-f(\frac{u(t_{m+1})+u(t_{m})}{2}), \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h} \nonumber\\
&+\frac{1}{\epsilon^2}\bigl( f(\frac{u(t_{m+1})+u(t_{m})}{2})-f^{m+1}, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}.\nonumber
\end{align}
Let $\hat{u}(t_{m+\frac{1}{2}})=\frac{u(t_{m+1})+u(t_{m})}{2}$. Then we have the following:
\begin{align}\label{eq3.29}
&f(u(t_{m+\frac1 2}))-f(\hat{u}(t_{m+\frac{1}{2}}))\\
&=f\big(\hat{u}(t_{m+\frac{1}{2}})-\tfrac1 8k^2(u_{tt}(\xi_1)+u_{tt}(\xi_2))\big)-f(\hat{u}(t_{m+\frac{1}{2}}))\nonumber\\
&=-\tfrac1 8k^2 f'(\xi_{12})\big(u_{tt}(\xi_1)+u_{tt}(\xi_2)\big)\geq-Ck^2.\nonumber
\end{align}
Since $f'$ and $u_{tt}$ are both bounded, the Cauchy--Schwarz inequality yields:
\begin{align}\label{eq3.30}
&\frac{1}{\epsilon^2}\bigl(f(u(t_{m+\frac1 2}))-f(\hat{u}(t_{m+\frac{1}{2}})),\frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq -\frac{1}{\epsilon^2}\bigl(Ck^2,\frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\nonumber\\
&\geq -\frac{C}{\epsilon^4}k^4-\|\xi^{m+\frac 1 2}\|^2_{L^2(\mathcal{T}_h)}.\nonumber
\end{align}
For the last term of the right hand side in \eqref{eq3.26}:
\begin{align}\label{eq3.31}
&a_h\big(\frac{R_1^{m}+R_2^{m}}{2},\frac{\xi^{m+1}+\xi^{m}}{2}\big)=a_h\big(\frac{R_1^{m}+R_2^{m}}{2\epsilon},\frac{\epsilon(\xi^{m+1}+\xi^{m})}{2}\big)\\
&\leq a_h\big(\frac{R_1^{m}+R_2^{m}}{2\epsilon},\frac{R_1^{m}+R_2^{m}}{2\epsilon}\big)+a_h\big(\frac{\epsilon(\xi^{m+1}+\xi^{m})}{2},\frac{\epsilon(\xi^{m+1}+\xi^{m})}{2}\big)\nonumber\\
&\leq Ck^4\epsilon^{-2}+\epsilon^2a_h\big(\xi^{m+\frac1 2},\xi^{m+\frac1 2}\big).\nonumber
\end{align}
Substituting \eqref{eq3.27}, \eqref{eq3.30} and \eqref{eq3.31} into \eqref{eq3.26}, we obtain:
\begin{align}\label{eq3.32}
&\frac12d_t \|\xi^{m+1}\|_{L^2(\mathcal{T}_h)}^2 +a_h\big(\frac{\xi^{m+1}+\xi^{m}}{2},\frac{\xi^{m+1}+\xi^{m}}{2}\big)\\
&+\frac{1}{\epsilon^2}\bigl( f(\hat{u}(t_{m+\frac{1}{2}}))-f^{m+1}, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\nonumber\\
&\leq \bigl(\frac{R_1^{m}-R_2^{m}}{k},\frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h} -\bigl(d_t \eta^{m+1},\frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h} \nonumber\\
&+\bigl(\frac{\eta^{m+1}+\eta^{m}}{2},\frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}+a_h\big(\frac{R_1^{m}+R_2^{m}}{2},\frac{\xi^{m+1}+\xi^{m}}{2}\big)+\frac{C}{\epsilon^4}k^4+\|\xi^{m+\frac 1 2}\|^2_{L^2(\mathcal{T}_h)} \nonumber\\
&\leq \Big\|\frac{R_1^{m}-R_2^{m}}{k}\Big\|_{L^2(\mathcal{T}_h)}^2+\|d_t\eta^{m+1}\|_{L^2(\mathcal{T}_h)}^2
+\Big\|\frac{\eta^{m+1}+\eta^{m}}{2}\Big\|_{L^2(\mathcal{T}_h)}^2\nonumber\\
&+Ck^4[\epsilon^{-4}+\epsilon^{-2}]+\epsilon^2a_h\big(\xi^{m+\frac1 2},\xi^{m+\frac1 2}\big) +C\|\xi^{m+\frac 1 2}\|^2_{L^2(\mathcal{T}_h)}. \nonumber
\end{align}
Using the integral form of the Taylor formula, we get:
\[
\Big|\frac{R_1^{m}-R_2^{m}}{k}\Big|=\Big|\frac{k(u_{tt}(\xi_1)-u_{tt}(\xi_2))}{4}\Big|=\Big|\frac{ku_{ttt}(\xi_{11})(\xi_1-\xi_2)}{4}\Big|\leq Ck^2.
\]
Hence
\begin{align}\label{eq3.33}
\|\frac{R_1^{m}-R_2^{m}}{k}\|^2_{L^2(\mathcal{T}_h)}\leq Ck^4.
\end{align}
Summing over $m$ from $1$ to $\ell$ and using \eqref{eq2.13}, \eqref{eq3.32} and \eqref{eq3.33}, we obtain the following inequality:
\begin{align}\label{eq3.34}
&\|\xi^{\ell}\|_{L^2(\mathcal{T}_h)}^2 + 2k\sum_{m=1}^\ell a_h(\frac{\xi^{m}+\xi^{m-1}}{2},\frac{\xi^{m}+\xi^{m-1}}{2})\\
&+2k\sum_{m=1}^\ell \frac{1}{\epsilon^2}\bigl( f(\hat{u}(t_{m-\frac{1}{2}}))-f^{m}, \frac{\xi^{m}+\xi^{m-1}}{2}\bigr)_{\mathcal{T}_h}\nonumber\\
&\leq \|\xi^{0}\|_{L^2(\mathcal{T}_h)}^2 +Ch^{2\min\{r+1,s\}}\, \|u\|_{H^1((0,T);H^s(\Omega))}^2 \nonumber\\
&+2Ck^4[\epsilon^{-4}+\epsilon^{-2}+1]+2k\sum_{m=1}^\ell \epsilon^2a_h\big(\xi^{m-\frac1 2},\xi^{m-\frac1 2}\big) +4k\sum_{m=1}^\ell \|\xi^{m-\frac1 2}\|^2_{L^2(\mathcal{T}_h)}. \nonumber
\end{align}
{\em Step 2}:
We next bound the term $\bigl( f(\hat{u}(t_{m+\frac{1}{2}}))-f^{m+1}, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}$ appearing on the left-hand side of \eqref{eq3.34}:
\begin{align}\label{eq3.35}
f(\hat{u}(t_{m+\frac{1}{2}}))-f^{m+1}=&[f(\hat{u}(t_{m+\frac{1}{2}}))-f(P_r^h \hat{u}(t_{m+\frac{1}{2}}))]\\
&+ [f(P_r^h \hat{u}(t_{m+\frac{1}{2}}))-f^{m+1}].\notag
\end{align}
For the first part on the right-hand side of \eqref{eq3.35}, we get:
\begin{align}\label{eq3.36}
f(\hat{u}(t_{m+\frac{1}{2}}))-f(P_r^h \hat{u}(t_{m+\frac{1}{2}})) &=f'(\xi)\bigl(\hat{u}(t_{m+\frac{1}{2}})-P_r^h \hat{u}(t_{m+\frac{1}{2}})\bigr)\notag\\
&\geq -C\Big|\frac{\eta^{m+1}+\eta^{m}}{2}\Big|.
\end{align}
For the second part on the right-hand side of \eqref{eq3.35}, we get:
\begin{align}\label{eq3.37}
&f(P_r^h \hat{u}(t_{m+\frac1 2}))-f^{m+1}\\
&=\bigl(\frac{P_r^h u(t_{m+1})+P_r^h u(t_{m})}{2}\bigr)^3-\bigl(\frac{P_r^h u(t_{m+1})+P_r^h u(t_{m})}{2}\bigr)\notag\\
&-\bigl[\frac1 4[(u^{m+1})^3+(u^{m+1})^2u^m+u^{m+1}(u^m)^2+(u^m)^3]-\frac{u^{m+1}+u^m}{2}\bigr] \nonumber\\
&=\frac{\bigl(P_r^h u(t_{m+1})+P_r^h u(t_{m})\bigr)^3}{8}\notag\\
&-\frac2 8[(u^{m+1})^3+(u^{m+1})^2u^m+u^{m+1}(u^m)^2+(u^m)^3]\nonumber\\
&-\bigl[\bigl(\frac{P_r^h u(t_{m+1})+P_r^h u(t_{m})}{2}\bigr)-\frac{u^{m+1}+u^m}{2}\bigr] \nonumber\\
&=\frac{\bigl(P_r^h u(t_{m+1})+P_r^h u(t_{m})\bigr)^3}{8}-\frac2 8[(P_r^h u(t_{m+1})-\xi^{m+1})^3\notag\\
&+(P_r^h u(t_{m+1})-\xi^{m+1})^2(P_r^h u(t_{m})-\xi^{m})-\frac{\xi^{m+1}+\xi^m}{2}\nonumber\\
&+(P_r^h u(t_{m+1})-\xi^{m+1})(P_r^h u(t_{m})-\xi^{m})^2+(P_r^h u(t_{m})-\xi^{m})^3].\nonumber
\end{align}
We split the above into four groups of terms: constant, linear, quadratic, and cubic with respect to $\xi^{m+1}$ and $\xi^{m}$.\\
For the constant term, we have
\begin{align}\label{eq3.38}
&\bigl(\frac1 8 (P_r^h u(t_{m+1})-P_r^h u(t_{m}))^2(P_r^h u(t_{m+1})+P_r^h u(t_{m})), \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq -C(h^4+k^2)\bigl(1, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\nonumber\\
&\geq -C(h^8+k^4)-C\|\xi^{m+\frac1 2}\|^2_{L^2(\mathcal{T}_h)}.\nonumber
\end{align}
Here we used the boundedness of $P_r^h u(t_m)$ and the estimate $|P_r^h u(t_{m+1})-P_r^h u(t_{m})|\leq C(h^2+k)$.
For the linear term, we have the following:
\begin{align}\label{eq3.39}
& l= \frac 1 4\bigl\{ \xi^{m+1}[3(P_r^h u(t_{m+1}))^2+P_r^h u(t_{m+1})P_r^h u(t_{m})+(P_r^h u(t_{m}))^2]\\
&+\xi^{m}[3(P_r^h u(t_{m}))^2+P_r^h u(t_{m+1})P_r^h u(t_{m})+(P_r^h u(t_{m+1}))^2]\bigr\}-\frac{(\xi^{m+1}+\xi^m)}{2}\nonumber\\
& =\frac1 4(\xi^{m+1}+\xi^{m})(P_r^h u(t_{m+1})+P_r^h u(t_{m}))^2\nonumber\\
&+\frac1 2[\xi^{m+1}(P_r^h u(t_{m+1}))^2+\xi^{m}(P_r^h u(t_{m}))^2]-\frac{(\xi^{m+1}+\xi^m)}{2}.\nonumber
\end{align}
Hence:
\begin{align}\label{eq3.40}
&\bigl( \frac1 4(\xi^{m+1}+\xi^{m})(P_r^h u(t_{m+1})+P_r^h u(t_{m}))^2\\
&+\frac 12[\xi^{m+1}(P_r^h u(t_{m+1}))^2+\xi^{m}(P_r^h u(t_{m}))^2], \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\nonumber\\
&=\bigl( \frac 1 2( P_r^h u(t_{m+1})+P_r^h u(t_{m}))^2, (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}\nonumber\\
&+(\frac 12[\xi^{m+1}(P_r^h u(t_{m+1}))^2+\xi^{m}(P_r^h u(t_{m}))^2], \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}.\nonumber
\end{align}
Using the Cauchy--Schwarz inequality and $|P_r^h u(t_{m+1})-P_r^h u(t_{m})|\leq C(h^2+k)$, we get the following inequalities for the first and second terms on the right-hand side of \eqref{eq3.40}:
\begin{align}\label{eq3.41}
&\bigl( \frac 1 2( P_r^h u(t_{m+1})+P_r^h u(t_{m}))^2, (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}\\
&\geq\bigl(2(P_r^h u(t_{m}))^2, (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}-C(h^2+k)\|\xi^{m+\frac1 2}\|_{L^2(\mathcal{T}_h)}^2.\nonumber
\end{align}
\begin{align}\label{eq3.42}
&(\frac 12[\xi^{m+1}(P_r^h u(t_{m+1}))^2+\xi^{m}(P_r^h u(t_{m}))^2], \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq\bigl((P_r^h u(t_{m}))^2, (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}-C(h^2+k)\|\xi^{m+\frac 1 2}\|_{L^2(\mathcal{T}_h)}^2.\nonumber
\end{align}
\begin{align}\label{eq3.43}
&\bigl(l, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq\bigl(3(P_r^h u(t_{m}))^2-1, (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}-C(h^2+k)\|\xi^{m+\frac1 2}\|_{L^2(\mathcal{T}_h)}^2\nonumber\\
&=\bigl(f'(P_r^h u(t_{m})), (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}-C(h^2+k)\|\xi^{m+\frac1 2}\|_{L^2(\mathcal{T}_h)}^2.\nonumber
\end{align}
For the quadratic term, we get the inequality below:
\begin{align}\label{eq3.44}
&q=3(\xi^{m+1})^2P_r^h u(t_{m+1})+(\xi^{m+1})^2P_r^h u(t_{m})+2\xi^{m+1}\xi^{m}P_r^h u(t_{m+1})\\
&+(\xi^{m})^2P_r^h u(t_{m+1})+2\xi^{m+1}\xi^{m}P_r^h u(t_{m})+3(\xi^{m})^2P_r^h u(t_{m})\nonumber\\ &\geq -C_1[(\xi^{m+1})^2+(\xi^{m})^2].\nonumber
\end{align}
So we get:\\
\begin{align}\label{eq3.45}
&\bigl(q, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq -C_1\bigl((\xi^{m+1})^2+(\xi^{m})^2, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\nonumber\\
&\geq-C \|\xi^{m+\frac{1}{2}}\|_{L^3(\mathcal{T}_h)}^3.\nonumber
\end{align}
For the cubic term, we have:
\begin{align}\label{eq3.46}
&c= \frac1 4\bigl[(\xi^{m+1})^3+(\xi^{m+1})^2\xi^{m}+\xi^{m+1}(\xi^{m})^2+(\xi^{m})^3\bigr]\nonumber\\
&\ =\frac1 4\bigl[(\xi^{m+1})^2+(\xi^{m})^2\bigr](\xi^{m+1}+\xi^{m}).
\end{align}
Then we have
\begin{align}\label{eq3.47}
\bigl(c, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}=\frac1 8\bigl((\xi^{m+1})^2+(\xi^{m})^2, (\xi^{m+1}+\xi^{m})^2\bigr)_{\mathcal{T}_h}\geq 0.
\end{align}
Combining all of the above, we obtain:
\begin{align}\label{eq3.48}
&\bigl(f(P_r^h \hat{u}(t_{m+\frac1 2}))-f^{m+1}, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq -C\bigl|(\eta^{m+\frac1 2}, \xi^{m+\frac1 2})_{\mathcal{T}_h}\bigr|-C(h^8+k^4)-C\|\xi^{m+\frac1 2}\|^2_{L^2(\mathcal{T}_h)}\nonumber\\
&+\bigl(f'(P_r^h u(t_{m})), (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}-C(h^2+k)\|\xi^{m+\frac1 2}\|_{L^2(\mathcal{T}_h)}^2\nonumber\\
&-C \|\xi^{m+\frac{1}{2}}\|_{L^3(\mathcal{T}_h)}^3+\frac1 8\bigl((\xi^{m+1})^2+(\xi^{m})^2, (\xi^{m+1}+\xi^{m})^2\bigr)_{\mathcal{T}_h}.\nonumber
\end{align}
Multiplying by $\frac{2k}{\epsilon^2}$ and summing over $m$ from $1$ to $\ell$, we obtain the following:
\begin{align}\label{eq3.49}
&\frac{2k}{\epsilon^2}\sum_{m=1}^\ell \bigl( f(P_r^h \hat{u}(t_{m+\frac1 2}))-f^{m+1}, \frac{\xi^{m+1}+\xi^{m}}{2}\bigr)_{\mathcal{T}_h}\\
&\geq -\frac{Ck}{\epsilon^2}\sum_{m=1}^\ell \|\eta^{m+\frac1 2}\|_{L^2(\mathcal{T}_h)}\|\xi^{m+\frac1 2}\|_{L^2(\mathcal{T}_h)}-\frac{C}{\epsilon^2}(h^8+k^4)-\frac{Ck}{\epsilon^2}\sum_{m=1}^\ell \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}\nonumber\\
&+\frac{2k}{\epsilon^2}\sum_{m=1}^\ell \bigl(f'(P_r^h u(t_{m})), (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}-\frac{Ck}{\epsilon^2} (h^2+k)\sum_{m=1}^\ell\|\xi^{m}\|_{L^2(\mathcal{T}_h)}^2\nonumber\\
&-\frac{Ck}{\epsilon^2} \sum_{m=1}^\ell \|\xi^{m+\frac{1}{2}}\|_{L^3(\mathcal{T}_h)}^3
+\frac{k}{4\epsilon^2}\sum_{m=1}^\ell \bigl((\xi^{m})^2+(\xi^{m+1})^2, (\xi^{m}+\xi^{m+1})^2\bigr)_{\mathcal{T}_h}\nonumber\\
&\geq-C h^{2\min\{r+1,s\}} \epsilon^{-4} \|u\|_{L^2((0,T);H^s(\Omega))}^2-\frac{C}{\epsilon^2}(h^8+k^4)\nonumber\\
& +\frac{2k}{\epsilon^2}\sum_{m=1}^\ell \bigl(f'(P_r^h u(t_{m})), (\frac{\xi^{m+1}+\xi^{m}}{2})^2\bigr)_{\mathcal{T}_h}\nonumber\\
&+\frac{k}{4\epsilon^2}\sum_{m=1}^\ell \bigl((\xi^{m})^2+(\xi^{m+1})^2, (\xi^{m}+\xi^{m+1})^2\bigr)_{\mathcal{T}_h}-\frac{Ck}{\epsilon^2} \sum_{m=1}^\ell \|\xi^{m+\frac{1}{2}}\|_{L^3(\mathcal{T}_h)}^3\notag\\
&-\frac{Ck}{\epsilon^2} (h^2+k+1)\sum_{m=1}^\ell\|\xi^{m}\|_{L^2(\mathcal{T}_h)}^2-k^2\sum_{m=1}^\ell\|\xi^{m}\|_{L^2(\mathcal{T}_h)}^2.\nonumber
\end{align}
Substituting the above inequality into \eqref{eq3.34}, we obtain:
\begin{align}\label{eq3.50}
&\|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 +\frac{k}{4\epsilon^2}\sum_{m=1}^\ell \bigl((\xi^{m})^2+(\xi^{m-1})^2, (\xi^{m}+\xi^{m-1})^2\bigr)_{\mathcal{T}_h} \\
&+ 2k(1-\epsilon^2)\sum_{m=1}^\ell \left( a_h(\xi^{m-\frac12},\xi^{m-\frac12})
+\frac{1}{\epsilon^2} \Bigl(f'\bigl(P_r^h u(t_{m-1})\bigr),(\xi^{m-\frac1 2})^2\Bigr)_{\mathcal{T}_h}\right)\nonumber\\
&+2k \sum_{m=1}^\ell \Bigl(f'\bigl(P_r^h u(t_{m-1})\bigr),(\xi^{m-\frac12})^2\Bigr)_{\mathcal{T}_h}
\nonumber \\
&\leq \|\xi^{0}\|_{L^2(\mathcal{T}_h)}^2 +Ch^{2\min\{r+1,s\}}\bigl( \|u\|_{H^1((0,T);H^s(\Omega))}^2 +\epsilon^{-4} \|u\|_{L^2((0,T);H^s(\Omega))}^2\bigr) \nonumber\\
&+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1]\nonumber\\
&+Ck\Bigl(1+k+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^\ell \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+ \frac{Ck}{\epsilon^2} \sum_{m=1}^\ell \|\xi^{m+\frac{1}{2}}\|_{L^3(\mathcal{T}_h)}^3.\notag
\end{align}
{\em Step 3}: In order to control the last two terms on the right-hand
side of \eqref{eq3.50}, we use the following Gagliardo--Nirenberg
inequality \cite{23}:
\[
\|v\|_{L^3(K)}^3\leq C\Bigl( \|\nabla v\|_{L^2(K)}^{\frac{d}2}
\bigl\|v\bigr\|_{L^2(K)}^{\frac{6-d}2} +\|v\|_{L^2(K)}^3 \Bigr)
\qquad\forall K\in \mathcal{T}_h,
\]
to get
\begin{align}\label{eq3.51}
\frac{Ck}{\epsilon^2} \sum_{m=1}^\ell \|\xi^m\|_{L^3(\mathcal{T}_h)}^3
&\leq \epsilon^2\alpha k\sum_{m=1}^\ell \|\nabla \xi^m\|_{L^2(\mathcal{T}_h)}^2
+\epsilon^2 k\sum_{m=1}^\ell \|\xi^m\|_{L^2(\mathcal{T}_h)}^2 \\
&\qquad
+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^\ell\sum_{K\in \mathcal{T}_h}
\bigl\|\xi^m\bigr\|_{L^2(K)}^{\frac{2(6-d)}{4-d}} \nonumber \\
&\leq \epsilon^2\alpha k\sum_{m=1}^\ell \|\nabla \xi^m\|_{L^2(\mathcal{T}_h)}^2 \nonumber \\
&\qquad
+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^\ell
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}. \nonumber
\end{align}
Finally, for the third term on the left-hand side of the above inequality,
we utilize the discrete spectrum estimate \eqref{eq2.18} to
bound it from below as follows:
\begin{align}\label{eq3.52}
&2k(1-\epsilon^2)\sum_{m=1}^\ell \left( a_h(\xi^{m-\frac12},\xi^{m-\frac12})
+\frac{1}{\epsilon^2} \Bigl(f'\bigl(P_r^h u(t_{m-1})\bigr),(\xi^{m-\frac1 2})^2\Bigr)_{\mathcal{T}_h}\right) \\
&+2k \sum_{m=1}^\ell \Bigl(f'\bigl(P_r^h u(t_{m-1})\bigr),(\xi^{m-\frac12})^2\Bigr)_{\mathcal{T}_h}\nonumber\\
&=2k(1-2\epsilon^2)\sum_{m=1}^\ell \left( a_h(\xi^{m-\frac12},\xi^{m-\frac12})
+\frac{1}{\epsilon^2} \Bigl(f'\bigl(P_r^h u(t_{m-1})\bigr),(\xi^{m-\frac1 2})^2\Bigr)_{\mathcal{T}_h}\right) \nonumber\\
&+2k\epsilon^2\sum_{m=1}^\ell a_h(\xi^{m-\frac12},\xi^{m-\frac12})+ 4k \sum_{m=1}^\ell \Bigl(f'\bigl(P_r^h u(t_{m-1})\bigr),(\xi^{m-\frac12})^2\Bigr)_{\mathcal{T}_h}\nonumber\\
&\geq -2(1-2\epsilon^2)c_0 k\sum_{m=1}^\ell \|\xi^{m-\frac12}\|_{L^2(\mathcal{T}_h)}^2 + 2\epsilon^2 \alpha k\sum_{m=1}^\ell \|\xi^{m-\frac{1}{2}}\|_{1,\mbox{\tiny DG}}^2\notag\\
&-Ck \sum_{m=1}^\ell \|\xi^{m-\frac12}\|_{L^2(\mathcal{T}_h)}^2.\notag
\end{align}
{\em Step 4}:
Substituting \eqref{eq3.51} and \eqref{eq3.52} into \eqref{eq3.50}, we obtain the following:
\begin{align}\label{eq3.53}
&\|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 + \epsilon^2 \alpha k\sum_{m=1}^\ell \|\xi^{m-\frac12}\|_{1,\mbox{\tiny DG}}^2+\frac{k}{4\epsilon^2}\sum_{m=1}^\ell \bigl((\xi^{m})^2+(\xi^{m-1})^2, (\xi^{m}+\xi^{m-1})^2\bigr)_{\mathcal{T}_h}\\
&\leq Ck\Bigl(1+k+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^\ell \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^\ell
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}\nonumber\\
&+\|\xi^{0}\|_{L^2(\mathcal{T}_h)}^2 +Ch^{2\min\{r+1,s\}}\bigl( \|u\|_{H^1((0,T);H^s(\Omega))}^2 +\epsilon^{-4} \|u\|_{L^2((0,T);H^s(\Omega))}^2\bigr) \nonumber\\
&+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1].\nonumber
\end{align}
Notice that, on the right-hand side, we need to choose an appropriate initial value $u^0_h$ so that $\|\xi^0\|_{L^2(\mathcal{T}_h)}=O(h^{\min\{r+1,s\}})$
in order to maintain the optimal rate of convergence in $h$. Clearly,
both the $L^2$ projection and the elliptic projection of $u_0$ work,
and in the latter case we get $\xi^0=0$.\\
It then follows from \eqref{eq2.7}, \eqref{eq2.9}, \eqref{eq2.12} and \eqref{eq3.53} that
\begin{align}\label{eq3.54}
&\|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 + 3\epsilon^2 \alpha k\sum_{m=1}^\ell \|\xi^{m}\|_{1,\mbox{\tiny DG}}^2+\frac{2k}{\epsilon^2}\sum_{m=1}^\ell \bigl((\xi^{m})^2+(\xi^{m-1})^2, (\xi^{m}+\xi^{m-1})\bigr)\\
&\leq Ck\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^\ell \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^\ell
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}\nonumber\\
& +C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1].\nonumber
\end{align}
Since $u^{\ell}$ can be written as
\begin{equation}\label{eq3.55}
u^{\ell}=k\mathop{\sum}\limits_{m=1}^\ell d_tu^m+u^0,
\end{equation}
then by \eqref{eq2.3} and \eqref{eq3.11}, we get
\begin{align}\label{eq3.56}
\|u^{\ell}\|_{L^2(\mathcal{T}_h)}
\leq k\mathop{\sum}\limits_{m=1}^\ell \|d_tu^m\|_{L^2(\mathcal{T}_h)}
+\|u^0\|_{L^2(\mathcal{T}_h)}
\leq C\epsilon^{-2\sigma_1}.
\end{align}
By the boundedness of the projection, we have
\begin{equation}\label{eq3.57}
\|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2\leq C\epsilon^{-2\sigma_1}.
\end{equation}
Then \eqref{eq3.54} can be rewritten in the following form:
\begin{equation}\label{eq3.58}
\|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 + k\sum_{m=1}^\ell
3\epsilon^2 \alpha \|\xi^m\|_{1,\mbox{\tiny DG}}^2 \leq H_1+H_2,
\end{equation}
where
\begin{align}\label{eq3.59}
H_1:&=Ck\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^{\ell-1} \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^{\ell-1}
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}\\ \nonumber
&\qquad\qquad
+C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1],\nonumber
\end{align}
\begin{align}\label{eq3.60}
H_2:&=Ck\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr)\|\xi^{\ell}\|_{L^2(\mathcal{T}_h)}^2 + C\epsilon^{-\frac{2(4+d)}{4-d}} k
\bigl\|\xi^{\ell}\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}.
\end{align}
It is easy to check that
\begin{equation}\label{eq3.61}
H_2<\frac12\|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 ,
\qquad\mbox{provided that}\quad k<A(\epsilon).
\end{equation}
By \eqref{eq3.58} we have
\begin{align}\label{eq3.62}
& \|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 + k\sum_{m=1}^\ell
3\epsilon^2 \alpha \|\xi^m\|_{1,\mbox{\tiny DG}}^2 \leq2H_1\\\nonumber
&
\leq 2Ck\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^{\ell-1} \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+2C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^{\ell-1}
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}\nonumber\\
&
+2C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}+\frac{2C}{\epsilon^2}(h^8+k^4)+2Ck^4[\epsilon^{-4}+\epsilon^{-2}+1]\nonumber\\
&
\leq Ck\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^{\ell-1} \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^{\ell-1}
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}}\nonumber\\
&
+C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1].\nonumber
\end{align}
Let $d_{\ell}\geq 0$ be the slack variable such that
\begin{align}\label{eq3.63}
& \|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 + k\sum_{m=1}^\ell
3\epsilon^2 \alpha \|\xi^m\|_{1,\mbox{\tiny DG}}^2 +d_{\ell} \\
&
=Ck\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr)\sum_{m=1}^{\ell-1} \|\xi^{m}\|^2_{L^2(\mathcal{T}_h)}+C\epsilon^{-\frac{2(4+d)}{4-d}} k\sum_{m=1}^{\ell-1}
\bigl\|\xi^m\bigr\|_{L^2(\mathcal{T}_h)}^{\frac{2(6-d)}{4-d}} \nonumber\\
&
+C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1].\nonumber
\end{align}
Define, for $\ell\geq1$,
\begin{align}\label{eq3.64}
S_{\ell+1}:&= \|\xi^\ell\|_{L^2(\mathcal{T}_h)}^2 + k\sum_{m=1}^\ell
3\epsilon^2 \alpha \|\xi^m\|_{1,\mbox{\tiny DG}}^2+d_{\ell},
\end{align}
\begin{align}\label{eq3.65}
S_{1}:&=C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}+\frac{C}{\epsilon^2}(h^8+k^4)+Ck^4[\epsilon^{-4}+\epsilon^{-2}+1],
\end{align}
then we have
\begin{equation}\label{eq3.66}
S_{\ell+1}-S_{\ell}\leq C\Bigl(1+\frac{k\epsilon^2}{\epsilon^2}+\frac{h^2+k}{\epsilon^2}\Bigr) kS_{\ell}+C\epsilon^{-\frac{2(4+d)}{4-d}} kS_{\ell}^{\frac{6-d}{4-d}}\qquad\text{for}\ \ell\geq1.
\end{equation}
Applying Lemma \ref{lem2.4} to $\{S_\ell\}_{\ell\geq 1}$ defined above,
we obtain for $\ell\geq1$
\begin{equation}\label{eq3.67}
S_{\ell}\leq a^{-1}_{\ell}\Bigg\{S^{-\frac{2}{4-d}}_{1}-\frac{2Ck}{4-d}
\sum_{s=1}^{\ell-1}\epsilon^{-\frac{2(4+d)}{4-d}} a^{-\frac{2}{4-d}}_{s+1}\Bigg\}^{-\frac{4-d}{2}},
\end{equation}
provided that
\begin{equation}\label{eq3.68}
\frac12 S^{-\frac{2}{4-d}}_{1}-\frac{2Ck}{4-d}\sum_{s=1}^{\ell-1} \epsilon^{-\frac{2(4+d)}{4-d}}
a^{-\frac{2}{4-d}}_{s+1}>0.
\end{equation}
We note that $a_s\ (1\leq s\leq \ell)$ are all bounded as $k\rightarrow0$;
therefore, \eqref{eq3.68} holds under the mesh constraint stated in the theorem.
It follows from \eqref{eq3.66} and \eqref{eq3.67} that
\begin{equation}\label{eq3.69}
S_{\ell}\leq 2a_\ell^{-1} S_1
\leq Ck^4\epsilon^{-2(\sigma_1+2)}+C h^{2\min\{r+1,s\}}\epsilon^{-2(\sigma_1+2)}.
\end{equation}
Finally, using the above estimate and the properties of the operator $P^h_r$
we obtain \eqref{eq3.18} and \eqref{eq3.19}. The estimate \eqref{eq3.20} follows
from \eqref{eq3.19} and the inverse inequality bounding the $L^\infty$-norm by the
$L^2$-norm and \eqref{eq2.21}. The proof is complete.
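The nonlinear Gronwall-type recursion \eqref{eq3.66} at the heart of this argument can also be illustrated numerically. The following Python sketch iterates a recursion of the same shape; the constants, the time step, and the step count are illustrative assumptions, not the constants of the proof. For a small starting value $S_1$, the superlinear term stays negligible and $S_\ell$ remains within a fixed multiple of $S_1$, mirroring the conclusion \eqref{eq3.69}.

```python
# Illustrative iteration of a recursion of the shape of (3.66):
#   S_{l+1} - S_l <= C1*k*S_l + C2*k*S_l**((6-d)/(4-d)),
# with made-up constants C1, C2 (assumptions for illustration only).
def iterate_recursion(S1, k, steps, C1=1.0, C2=1.0, d=2):
    p = (6 - d) / (4 - d)      # superlinear exponent, p = 2 when d = 2
    S = S1
    for _ in range(steps):
        S = S + C1 * k * S + C2 * k * S ** p
    return S

S1 = 1e-6                      # small initial error, mimicking S_1 = O(h^{2(r+1)})
S_final = iterate_recursion(S1, k=1e-3, steps=1000)
ratio = S_final / S1           # stays O(1) while S_l is small
```

Because $S_1$ is tiny, the quadratic term contributes almost nothing over the whole iteration, and the growth is governed by the linear Gronwall factor.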
\section{Convergence of the numerical interface to the mean curvature flow} \label{sec-4}
In this section, we prove a rate of convergence of the numerical interface to the limiting geometric interface of the Allen-Cahn equation. The convergence theory is based on the maximum norm error estimates proved above. Such a rate can only be obtained from the sharper error estimates, whose constants depend on the interaction length $\epsilon$ through a negative polynomial power \cite{15,17,3}; it cannot be obtained from the coarse error estimates, whose constants grow exponentially in $\epsilon^{-1}$.
For the DG problem, the zero-level set of $u_h^m$ may not be well defined since $u_h^m$ may not be continuous. Therefore, we introduce the finite element approximation
$\widehat{u}_h^m$ of the DG solution $u_h^m$. It is defined by using the averaged degrees of freedom
of $u_h^m$ as the degrees of freedom for determining $\widehat{u}_h^m$ (cf. \cite{24}). We have the following result \cite{24}.
\begin{theorem}\label{lem4.1}
Let $\mathcal{T}_h$ be a conforming mesh consisting of
triangles when $d=2$, and tetrahedra when $d=3$. For $v_h\in V_h$, let
$\widehat{v}_h$ be the finite element approximation of $v_h$ as
defined above. Then for any $v_h\in V_h$ and $i=0,1$ there holds
\begin{align}\label{eqn_KP}
\sum_{K\in\mathcal{T}_h} \|v_h-\widehat{v}_h\|_{H^i(K)}^2
\leq C \sum_{e\in\mathcal{E}_h^I} h^{1-2i}_e \|[v_h]\|_{L^2(e)}^2,
\end{align}
where $C>0$ is a constant independent of $h$ and $v_h$ but may depend on $r$ and the minimal
angle $\theta_0$ of the triangles in $\mathcal{T}_h$.
\end{theorem}
Using the above approximation result we can show that the error estimates
of Theorem \ref{thm3.5} also hold for $\widehat{u}_h^n$.
\begin{theorem}\label{lem4.2}
Let $u_h^{m}$ denote the solution of the DG scheme \eqref{eq3.1}--\eqref{eq3.4}
and $\widehat{u}_h^{m}$ denote its finite element approximation as defined above. Then
under the assumptions of Theorem \ref{thm3.5} the error estimates for $u_h^m$ given in
Theorem \ref{thm3.5} are still valid for $\widehat{u}_h^{m}$, in particular, there holds
\begin{align}\label{eq3.36bx}
\max_{0\leq m\leq M} \|u(t_m)-\widehat{u}_h^m\|_{L^\infty(\mathcal{T}_h)}
&\leq C h^{\min\{r+1,s\}} |\ln h|^{\overline{r}} \epsilon^{-\gamma}\\
&\qquad
+Ch^{-\frac{d}{2}}(k^2+h^{\min\{r+1,s\}})\epsilon^{-(\sigma_1+2)}. \nonumber
\end{align}
\end{theorem}
Proof: We only give a proof for \eqref{eq3.36bx} because other estimates can
be proved likewise. By the triangle inequality we have
\begin{align}\label{eq3.36by}
\|u(t_m)-\widehat{u}_h^m\|_{L^\infty(\mathcal{T}_h)}
\leq \|u(t_m)-u_h^m\|_{L^\infty(\mathcal{T}_h)}+ \|u_h^m-\widehat{u}_h^m\|_{L^\infty(\mathcal{T}_h)}.
\end{align}
Hence, it suffices to show that the second term on the right-hand side
is of the same or higher order than the first.
Let $u^I(t)$ denote the finite element interpolation of $u(t)$ into $S_h$.
It follows from \eqref{eqn_KP} and the trace inequality that
\begin{align}\label{eq3.36bz}
\|u_h^m-\widehat{u}_h^m\|_{L^2(\mathcal{T}_h)}^2
&\leq C\sum_{e\in \mathcal{E}_h^I} h_e \|[u_h^m]\|_{L^2(e)}^2 \\
&= C\sum_{e\in \mathcal{E}_h^I} h_e \|[u_h^m-u^I(t_m)]\|_{L^2(e)}^2 \nonumber\\
& \leq C\sum_{K\in \mathcal{T}_h} h_e h_K^{-1}\|u_h^m-u^I(t_m)\|_{L^2(K)}^2 \nonumber \\
&\leq C \bigl( \|u_h^m-u(t_m)\|_{L^2(\mathcal{T}_h)}^2 + \|u(t_m)- u^I(t_m)\|_{L^2(\mathcal{T}_h)}^2 \bigr). \nonumber
\end{align}
Substituting \eqref{eq3.36bz} into \eqref{eq3.36by} after using the inverse inequality yields
\begin{align*}
&\|u(t_m)-\widehat{u}_h^m\|_{L^\infty(\mathcal{T}_h)}
\leq \|u(t_m)-u_h^m\|_{L^\infty(\mathcal{T}_h)}+ C h^{-\frac{d}2} \|u_h^m-\widehat{u}_h^m\|_{L^2(\mathcal{T}_h)}\\
&\qquad\quad
\leq \|u(t_m)-u_h^m\|_{L^\infty(\mathcal{T}_h)} \nonumber \\
&\qquad\qquad
+ Ch^{-\frac{d}2} \bigl( \|u_h^m-u(t_m)\|_{L^2(\mathcal{T}_h)} + \|u(t_m)- u^I(t_m)\|_{L^2(\mathcal{T}_h)} \bigr),\nonumber
\end{align*}
which together with \eqref{eq3.18} implies the desired estimate \eqref{eq3.36bx}. The proof is complete.
We are now ready to state the main theorem of this section.
\begin{theorem}\label{thm4.3}
Let $\{\Gamma_t\}$ denote the (generalized) mean curvature flow
defined in \cite{25}, that is, $\Gamma_t$ is the zero-level set of
the solution $w$ of the following initial value problem:
\begin{alignat}{2}\label{eq4.1}
w_t &=\Delta w-\frac{D^2w\,Dw\cdot Dw}{|Dw|^2} &&\qquad\mbox{in }
\mathbf{R}^d\times (0,\infty), \\
w(\cdot,0) &=w_0(\cdot) &&\qquad\mbox{in } \mathbf{R}^d. \label{eq4.2}
\end{alignat}
Let $u^{\epsilon,h,k}$ denote the piecewise linear interpolation in time
of the numerical solution $\{\widehat{u}_h^m\}$ defined by
\begin{equation}\label{eq4.3}
u^{\epsilon,h,k}(x,t):=\frac{t-t_m}{k}\widehat{u}_h^{m+1}(x)+\frac{t_{m+1}-t}{k}
\widehat{u}_h^{m}(x), \quad t_m\leq t\leq t_{m+1}
\end{equation}
for $0\leq m\leq M-1$. Let $\{\Gamma_t^{\epsilon,h,k}\}$ denote the zero-level
set of $u^{\epsilon,h,k}$, namely,
\begin{equation}\label{eq4.4}
\Gamma_t^{\epsilon,h,k}=\{x\in \Omega;\, u^{\epsilon,h,k}(x,t)=0\}.
\end{equation}
Suppose $\Gamma_0=\{x\in \overline{\Omega};u_0(x)=0\}$ is a smooth hypersurface
compactly contained in $\Omega$, and $k=O(h^2)$. Let $t_*$ be
the first time at which the mean curvature flow develops a singularity; then
there exists a constant $\epsilon_1>0$ such that for all
$\epsilon\in(0,\epsilon_1)$ and $ 0<t<t_*$ there holds
\[
\sup_{x\in\Gamma_t^{\epsilon,h,k}}\{\mbox{\rm dist}(x,\Gamma_t)\}
\leq C\epsilon^2|\ln\,\epsilon|^2.
\]
\end{theorem}
Proof: We note that since $u^{\epsilon,h,k}(x,t)$ is continuous in both $t$ and $x$,
$\Gamma_t^{\epsilon,h,k}$ is well defined.
Let $I_t$ and $O_t$ denote the inside and the outside of $\Gamma_t$ defined by
\begin{equation}\label{eq4.5}
I_t:=\{ x\in \mathbf{R}^d;\, w(x,t)>0\}, \qquad O_t:=\{ x\in \mathbf{R}^d;\, w(x,t)<0\}.
\end{equation}
Let $d(x,t)$ denote the signed distance function to $\Gamma_t$, which is positive
in $I_t$ and negative in $O_t$. By Theorem 6.1 of \cite{26}, there exist
$\widehat{\epsilon}_1>0$ and $\widehat{C}_1>0$ such that for all $t\geq 0$
and $\epsilon\in(0,\widehat{\epsilon}_1)$ there hold
\begin{alignat}{2}\label{eq4.6}
u_{\epsilon}(x,t) &\geq 1-\epsilon
&&\qquad\forall x\in\{x\in\overline{\Omega};\, d(x,t)\geq \widehat{C}_1\epsilon^2|
\ln\,\epsilon|^2\},\\
u_{\epsilon}(x,t) &\leq -1+\epsilon
&&\qquad\forall x\in\{x\in\overline{\Omega};\, d(x,t)\leq -\widehat{C}_1\epsilon^2|
\ln\,\epsilon|^2\}. \label{eq4.7}
\end{alignat}
Since for any fixed $x\in\Gamma_t^{\epsilon,h,k}$ we have $u^{\epsilon,h,k}(x,t)=0$,
by \eqref{eq3.36bx} with $k=O(h^2)$ we obtain
\begin{align*}
|u^{\epsilon}(x,t)| &=|u^{\epsilon}(x,t)-u^{\epsilon,h,k}(x,t)| \\
&\leq \tilde{C} \Bigl( h^{\min\{r+1,s\}} |\ln h|^{\overline{r}} \epsilon^{-\gamma}
+h^{-\frac{d}{2}}(k^2+h^{\min\{r+1,s\}})\epsilon^{-(\sigma_1+2)} \Bigr).
\end{align*}
Then there exists $\widetilde{\epsilon}_1>0$ such that for
$\epsilon\in(0,\widetilde{\epsilon}_1)$ there holds
\begin{equation}\label{eq4.8}
|u^{\epsilon}(x,t)|<1-\epsilon.
\end{equation}
Therefore, the assertion follows from setting
$\epsilon_1=\min\{\widehat{\epsilon}_1,\widetilde{\epsilon}_1\}$.
The proof is complete.
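As a sanity check of the level-set formulation \eqref{eq4.1}, consider a shrinking circle in the plane: for $w(x,t)=|x|-R(t)$ the curvature term vanishes in the radial direction, and on the zero-level set the equation reduces to $R'(t)=-1/R(t)$, i.e. $R(t)=\sqrt{R_0^2-2t}$. The following Python sketch (the sample point and finite-difference step are illustrative choices, not part of the paper) verifies by central differences that the PDE residual vanishes on $\Gamma_t$:

```python
import math

# Check by finite differences that w(x,t) = |x| - R(t), R(t) = sqrt(R0^2 - 2t),
# satisfies w_t = Lap(w) - (D^2 w Dw . Dw)/|Dw|^2 on its zero-level set
# (a circle shrinking by curvature, d = 2).
R0 = 1.0
def w(x, y, t):
    return math.hypot(x, y) - math.sqrt(R0 * R0 - 2.0 * t)

t, h = 0.1, 1e-4
R = math.sqrt(R0 * R0 - 2.0 * t)
x, y = 0.8 * R, 0.6 * R                  # a point exactly on Gamma_t
wt  = (w(x, y, t + h) - w(x, y, t - h)) / (2 * h)
wxx = (w(x + h, y, t) - 2 * w(x, y, t) + w(x - h, y, t)) / h ** 2
wyy = (w(x, y + h, t) - 2 * w(x, y, t) + w(x, y - h, t)) / h ** 2
wx  = (w(x + h, y, t) - w(x - h, y, t)) / (2 * h)
wy  = (w(x, y + h, t) - w(x, y - h, t)) / (2 * h)
wxy = (w(x + h, y + h, t) - w(x + h, y - h, t)
       - w(x - h, y + h, t) + w(x - h, y - h, t)) / (4 * h ** 2)
curv = (wxx * wx * wx + 2 * wxy * wx * wy + wyy * wy * wy) / (wx * wx + wy * wy)
residual = wt - (wxx + wyy - curv)       # ~ 0 on the zero-level set
```

Note that $w$ solves the PDE only on the level set itself; this is exactly the sense in which $\Gamma_t$ moves by mean curvature.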
\section{Numerical experiments}\label{sec-5}
In this section, we provide two two-dimensional numerical experiments to gauge the accuracy and reliability of the
fully discrete IPDG method developed in the previous sections. We use the square domain $\Omega=[-1,1]\times[-1,1] \subset\mathbf{R}^2$ and $u_0(x)=\tanh(\frac{d_0(x)}{\sqrt{2}\epsilon})$,
where $d_0(x)$ stands for the signed distance from $x$ to the initial curve $\Gamma_0$. See \cite{feng2014multiphysics, Feng_Li15, feng2015analysis,16,li2015numerical, xu2016convex} for the details of similar numerical settings.
The first test uses a smooth initial curve $\Gamma_0$; hence the requirements on $u_0$ are satisfied, and the results established in this paper apply to this test example.
In the test we first verify the spatial rates of convergence given in \eqref{eq3.18} and \eqref{eq3.20}. We then compute
the evolution of the zero-level set of the solution of the Allen-Cahn problem with
$\epsilon= 0.025$ at various time instances.
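Before turning to the tests, the shape of such a tanh initial profile can be illustrated in one dimension; the interface location, the value of $\epsilon$, and the sample points below are illustrative choices, not the exact setup of the tests.

```python
import math

# 1-D illustration of the initial profile u0 = tanh(d0/(sqrt(2)*eps)):
# u0 is close to -1 inside the interface, close to +1 outside, and
# transitions across a layer of width O(eps).
eps = 0.025

def d0(x):                # signed distance to the toy interface {|x| = 0.5}
    return abs(x) - 0.5

def u0(x):
    return math.tanh(d0(x) / (math.sqrt(2.0) * eps))

inside   = u0(0.0)        # d0 = -0.5: deep inside, u0 ~ -1
outside  = u0(0.9)        # d0 = +0.4: well outside, u0 ~ +1
on_curve = u0(0.5)        # on the interface: u0 = 0
```

The transition layer has width proportional to $\epsilon$, which is why the error constants in the preceding sections degenerate as $\epsilon\to0$.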
{\bf Test 1} Consider the Allen-Cahn problem with the following initial condition:
$$
u_0(x)=\left\{\begin{array}{ll}\tanh(\frac{d(x)}{\sqrt{2}\epsilon}), & \mbox{if}\ \frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}\geq1,\\
\tanh(\frac{-d(x)}{\sqrt{2}\epsilon}),& \mbox{if}\ \frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}<1,
\end{array}
\right.
$$
where $d(x)$ stands for the distance function to the ellipse $\frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}=1$.
\begin{center}{{ Table 5.1.\ \ Spatial errors and convergence rates\\}}
{
\begin{small}
\begin{tabular}{ccccc}
\hline
\hspace{0.3cm}$h$\hspace{0.3cm}&$L^\infty(L^2)$ error \hspace{0.3cm}&$L^\infty(L^2)$ order\hspace{0.3cm}&$L^2(H^1)$ error\hspace{0.3cm}& $L^2(H^1)$ order\hspace{0.3cm}
\\
\hline
\hspace{0.3cm}$\sqrt{2}/10$\hspace{0.3cm}&0.02451 \hspace{0.3cm}&\hspace{0.3cm}&0.34216\hspace{0.3cm}& \hspace{0.3cm}\\
\hspace{0.3cm}$\sqrt{2}/20$\hspace{0.3cm}&0.00539 \hspace{0.3cm}&2.1850\hspace{0.3cm}&0.17258\hspace{0.3cm}& 0.9874\hspace{0.3cm}\\
\hspace{0.3cm}$\sqrt{2}/40$\hspace{0.3cm}&0.00142 \hspace{0.3cm}&1.9244\hspace{0.3cm}&0.08394\hspace{0.3cm}& 1.0398\hspace{0.3cm}\\
\hspace{0.3cm}$\sqrt{2}/80$\hspace{0.3cm}&0.00036 \hspace{0.3cm}&1.9798\hspace{0.3cm}&0.04172\hspace{0.3cm}&1.0086\hspace{0.3cm}\\
\hline
\end{tabular}\end{small}
}
\end{center}
Table 5.1 shows the spatial $L^2$- and $H^1$-norm errors and convergence rates,
which are consistent with what is proved for the linear element in the convergence theorem.
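For reference, the convergence orders reported in such tables are the standard log-ratios of successive errors under mesh halving; the following sketch reproduces the $L^\infty(L^2)$ orders of Table 5.1 from the tabulated errors:

```python
import math

# Orders in Table 5.1 are computed as log2(e_h / e_{h/2}) for the
# successive mesh halvings h = sqrt(2)/10, ..., sqrt(2)/80.
errors_L2 = [0.02451, 0.00539, 0.00142, 0.00036]
orders = [math.log2(errors_L2[i] / errors_L2[i + 1])
          for i in range(len(errors_L2) - 1)]
# orders is approximately [2.185, 1.924, 1.980], matching the table
```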
\begin{figure}
\caption{Test 1: Snapshots of the zero-level set of $u^{\epsilon,h,k}$.}
\end{figure}
Figure 5.1 displays six snapshots of the zero-level set of the numerical solution $u^{\epsilon,h,k}$ with $\epsilon=0.125$.
We observe that when $\epsilon$ is small enough, the zero-level set converges to the mean curvature flow $\Gamma_t$
as time evolves.
The second test has a non-smooth initial curve, with $u_0$ defined below. This initial condition does not satisfy the assumptions of the spectrum estimate, but we can still numerically observe the convergence of the solution to the mean curvature flow.\\
{\bf Test 2} Consider the Allen-Cahn problem with the following initial condition:
\begin{equation*}
u_0(x)=\begin{cases}
\tanh(\frac{1}{\sqrt{2}\epsilon}(\text{min}\{d_1(x),d_2(x)\})), & \text{if}\
\frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}\geq1,\frac{x_1^2}{0.04}+\frac{x_2^2}{0.36}\geq1,\\
&\ \text{or}\ \frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}\leq1,\frac{x_1^2}{0.04}+\frac{x_2^2}{0.36}\leq1,\\
\tanh(\frac{-1}{\sqrt{2}\epsilon}(\text{min}\{d_1(x),d_2(x)\})), & \text{if}\
\frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}<1,\frac{x_1^2}{0.04}+\frac{x_2^2}{0.36}>1,\\
&\ \text{or}\ \frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}>1,\frac{x_1^2}{0.04}+\frac{x_2^2}{0.36}<1.\\
\end{cases}
\end{equation*}
where $d_1(x)$ and $d_2(x)$ stand for the distance functions to the ellipses $\frac{x_1^2}{0.36}+\frac{x_2^2}{0.04}=1$ and $\frac{x_1^2}{0.04}+\frac{x_2^2}{0.36}=1$, respectively.
\begin{center}{{ Table 5.2.\ \ Spatial errors and convergence rates\\}}
{
\begin{small}
\begin{tabular}{ccccc}
\hline
\hspace{0.3cm}$h$\hspace{0.3cm}&$L^\infty(L^2)$ error \hspace{0.3cm}&$L^\infty(L^2)$ order\hspace{0.3cm}&$L^2(H^1)$ error\hspace{0.3cm}& $L^2(H^1)$ order\hspace{0.3cm}
\\
\hline
\hspace{0.3cm}$\sqrt{2}/10$\hspace{0.3cm}&0.01032 \hspace{0.3cm}&\hspace{0.3cm}&0.08325\hspace{0.3cm}& \hspace{0.3cm}\\
\hspace{0.3cm}$\sqrt{2}/20$\hspace{0.3cm}&0.00256 \hspace{0.3cm}&2.0098\hspace{0.3cm}&0.03851\hspace{0.3cm}& 1.1123\hspace{0.3cm}\\
\hspace{0.3cm}$\sqrt{2}/40$\hspace{0.3cm}&0.00075 \hspace{0.3cm}&1.7638\hspace{0.3cm}&0.01888\hspace{0.3cm}& 1.0283\hspace{0.3cm}\\
\hspace{0.3cm}$\sqrt{2}/80$\hspace{0.3cm}&0.00022 \hspace{0.3cm}&1.9836\hspace{0.3cm}&0.00939\hspace{0.3cm}&1.0069\hspace{0.3cm}\\
\hline
\end{tabular}\end{small}
}
\end{center}
Table 5.2 shows the spatial $L^2$- and $H^1$-norm errors and convergence rates,
which are consistent with what is proved for the linear element in the convergence theorem.
\begin{figure}
\caption{Test 2: Snapshots of the zero-level set of $u^{\epsilon,h,k}$.}
\label{figure}
\end{figure}
Figure 5.2 displays six snapshots of the zero-level set of the numerical solution $u^{\epsilon,h,k}$ with $\epsilon=0.025$.
Similarly, we observe that when $\epsilon$ is small enough, the zero-level set converges to the mean curvature flow $\Gamma_t$
as time evolves.
\textbf{Acknowledgment:}\ {\small The authors would like to express sincere thanks to Dr. Yukun Li of the Ohio State University for introducing the Allen-Cahn equation to the authors and for his many valuable discussions and suggestions.}
\end{document}
\begin{document}
\title[Prescribed Weingarten curvature equations in warped product manifolds]
{Prescribed Weingarten curvature \\ equations in warped product manifolds}
\author{Ya Gao,~~Chenyang Liu,~~Jing Mao$^{\ast}$}
\address{Faculty of Mathematics and Statistics, Key Laboratory of Applied
Mathematics of Hubei Province, Hubei University, Wuhan 430062, China}
\email{[email protected], [email protected], [email protected]}
\thanks{$\ast$ Corresponding author}
\date{}
\maketitle
\begin{abstract}
In this paper, under suitable assumptions, we obtain the existence
of solutions to a class of prescribed Weingarten curvature equations
in \emph{warped product manifolds} of a special type by the standard
degree theory, based on \emph{a priori estimates} for the
solutions. That is to say, the existence of a closed hypersurface
(which is graphic with respect to the base manifold and whose $k$-th
Weingarten curvature satisfies a prescribed constraint) in a given warped
product manifold of this special type can be assured.
\end{abstract}
{\it \small{{\bf Keywords}: Prescribed Weingarten
curvature equations, $k$-convex, starshaped, warped product
manifolds.}
{{\bf MSC 2020}: 53C42, 35J60.}}
\section{Introduction} \label{S1}
Throughout this paper, let $(M^{n},g)$ be a compact Riemannian
$n$-manifold with the metric $g$, and let $I$ be an (unbounded or
bounded) interval in $\mathbb{R}$. Clearly,
$\bar{M}:=I\times_{f}M^{n}$ is the $(n+1)$-dimensional
warped product manifold (sometimes, for simplicity, simply called a
\emph{warped product}) endowed with the metric
\begin{eqnarray} \label{wpm}
\bar{g}=dt^{2}+f^{2}(t)g,
\end{eqnarray}
where $f:I\rightarrow\mathbb{R}^{+}$ is a positive differentiable
function defined on $I$. Given a differentiable function
$u:M^{n}\rightarrow I$, its graph corresponds to the
graphic hypersurface
\begin{eqnarray} \label{gr-1}
\mathcal{G}=\{X(x)=(u(x),x)\,|\,x\in M^{n}\}
\end{eqnarray}
in $\bar{M}$. Equivalently, we say that $\mathcal{G}$ is graphic
w.r.t. \emph{the base manifold} $M^{n}$. Denote by $\bar\nabla$, $D$
the Riemannian connections on $\bar{M}$ and $M^{n}$, respectively.
Let $\{e_{i}\}_{i=1,2,\cdots,n}$ be an orthonormal frame field on
$M^{n}$. Then one can find an orthonormal frame field
$\{\bar{e}_{\alpha}\}_{\alpha=0,1,\cdots,n}$ on $\bar{M}$ such that
$\bar{e}_{i}=(1/f)e_{i}$ for $1\leq\alpha=i\leq n$ and
$\bar{e}_{0}=\partial/\partial t$. The existence of such frame fields
can always be assured in the tangent space at a prescribed point.
Denote by\footnote{~Clearly, for accuracy, here $D_{i}u$
should be $D_{e_{i}}u$. In the sequel, without confusion and if
needed, we prefer to simplify covariant derivatives like this. In
this setting, $u_{ij}:=D_{j}D_{i}u$ and $u_{ijk}:=D_{k}D_{j}D_{i}u$
mean $u_{ij}=D_{\partial_{j}}D_{\partial_{i}}u$ and
$u_{ijk}=D_{\partial_{k}}D_{\partial_{j}}D_{\partial_{i}}u$,
respectively. We will also simplify covariant derivatives on
$\mathcal{G}$ and $\bar{M}$ similarly if necessary.}
$u_{i}:=D_{i}u$, $u_{ij}:=D_{j}D_{i}u$, and
$u_{ijk}:=D_{k}D_{j}D_{i}u$ the covariant derivatives of $u$ w.r.t.
the metric $g$. Clearly, the tangent vectors of $\mathcal{G}$ are
given by
\begin{eqnarray*}
X_{i}=e_{i}+u_{i}\partial/\partial t
=f\bar{e}_{i}+u_{i}\bar{e}_0, \qquad i=1,2,\ldots,n.
\end{eqnarray*}
Let $\langle\cdot,\cdot\rangle$ be the inner product w.r.t. the
metric $\bar{g}$. Then the induced metric $\widetilde{g}$ on
$\mathcal{G}$ has the form
\begin{equation*}\label{g_{ij}}
\widetilde{g}_{ij}=\langle X_{i},X_{j}\rangle=f^2\delta_{ij}+u_{i}u_{j},
\end{equation*}
and its inverse is given by
\begin{equation*}\label{g^{ij}}
\widetilde{g}^{ij}=\frac{1}{f^2}\left(\delta^{ij}-\frac{u^i u^j}{f^{2}+|D u|^2}\right),
\end{equation*}
where $u^{i}=g^{ij}u_{j}=\delta^{ij}u_{j}$ and $|D u|^2=u^{i}u_{i}$.
Of course, in this paper we use the Einstein summation convention:
repeated superscripts and subscripts are summed over\footnote{~In this
setting, repeated Latin indices are summed from $1$ to $n$.}. The
outward unit normal vector field of $\mathcal{G}$ is given by
\begin{eqnarray*}\label{nu}
\nu=\frac{1}{\sqrt{f^{2}+|D u|^2}}\left(f\frac{\partial}{\partial t}-u^{i}f^{-1}e_{i}\right)
=\frac{1}{\sqrt{f^{2}+|D u|^2}}\left(f\bar{e}_0-u^{i}\bar{e}_i\right),
\end{eqnarray*}
and the component $h_{ij}$ of the second fundamental form $A$ of
$\mathcal{G}$ is computed as follows:
\begin{equation}\label{h_{ij}}
h_{ij}=-\langle\bar{\nabla}_{X_{j}}X_{i},\nu\rangle
=\frac{1}{\sqrt{f^{2}+|D u|^2}}\left(-fu_{ij}+2f'u_{i}u_{j}+f^{2}f'\delta_{ij}\right).
\end{equation}
One can also see \cite[Subsection 2.2]{clw} for the computations of
the above geometric quantities.
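A quick consistency check of \eqref{h_{ij}} in a toy case (the choice of warping function and constant graph here is an illustrative assumption, not part of the paper's setting): take $f(t)=t$, so that $\bar{M}$ is Euclidean space written as a cone over the base, and let $u\equiv c$ be constant. Then $Du=0$, $\widetilde{g}_{ij}=c^2\delta_{ij}$ and $h_{ij}=ff'\delta_{ij}=c\,\delta_{ij}$, so every principal curvature equals $f'/f=1/c$, the curvature of a geodesic sphere of radius $c$:

```python
# Toy check of the second fundamental form formula for a graphic
# hypersurface: warping function f(t) = t and a constant graph u = c,
# so that u_i = 0.  Then g~_ij = f^2 delta_ij, h_ij = f*f' delta_ij,
# and the principal curvatures h_ij / g~_ij reduce to f'/f = 1/c.
def principal_curvatures_constant_graph(c, n=3):
    f, fp = c, 1.0                       # f(u) = u, f'(u) = 1 at u = c
    g_tilde = f * f                      # diagonal entry of induced metric
    h = (1.0 / f) * (f * f * fp)         # diagonal entry of h_ij (Du = 0)
    return [h / g_tilde] * n             # eigenvalues of g~^{-1} h

lams = principal_curvatures_constant_graph(2.0)   # sphere of radius 2
```

This recovers the familiar fact that a round sphere of radius $c$ has all principal curvatures equal to $1/c$.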
Denote by $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$
the principal curvatures of $\mathcal{G}$, which are the
eigenvalues of the matrix $(h_{ij})_{n\times n}$ w.r.t. the metric
$\widetilde{g}$. The so-called \emph{$k$-th Weingarten curvature} at
$X(x)=(u(x),x)\in\mathcal{G}$ is defined as
\begin{eqnarray} \label{kwc}
\sigma_{k}(\lambda_{1}, \lambda_{2}, \cdots,
\lambda_{n})=\sum\limits_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq
n}\lambda_{i_{1}}\lambda_{i_{2}}\cdots\lambda_{i_{k}}.
\end{eqnarray}
$V=f(u)\frac{\partial}{\partial t}$ is the position vector
field\footnote{~In $\mathbb{R}^{n+1}$ or the hyperbolic
$(n+1)$-space $\mathbb{H}^{n+1}$, there is no need to define the
vector field $V$, since these two spaces are two-point homogeneous,
a global coordinate system can be set up, and then $X(x)$ can be
seen as the position vector directly.} of the hypersurface
$\mathcal{G}$ in $\bar{M}$, and clearly, for any $x\in M^{n}$,
$V|_{x}$ is in one-to-one correspondence with $X(x)$. Let $\nu(V)$ be
the outward unit normal vector field along the hypersurface
$\mathcal{G}$ and $\lambda(V)=(\lambda_{1}, \lambda_{2}, \cdots,
\lambda_{n})$ be the principal curvatures of $\mathcal{G}$ at $V$.
Define the annulus domain $\bar{M}^{+}_{-}\subset\bar{M}$ by
\begin{eqnarray*} \label{annd}
\bar{M}^{+}_{-}:=\{(t,x)\in\bar{M}\,|\,r_{1}\leq t\leq r_{2}\}
\end{eqnarray*}
with $r_{1}<r_{2}$. In this paper, we consider the following
Weingarten curvature equation:
\begin{eqnarray} \label{main equation}
\sigma_{k}(\lambda(V))=\sum\limits_{l=0}^{k-1}\alpha_{l}(u(x),x)\sigma_{l}(\lambda(V)),
\quad \forall V\in\mathcal{G}, \quad 2\leq k\leq n,
\end{eqnarray}
where $\{\alpha_{l}(u(x),x)\}_{l=0}^{k-1}$ are given smooth
functions defined on $\mathcal{G}$. The $k$-th Weingarten curvature
$\sigma_{k}(\lambda(V))$ is also called the $k$-th mean curvature.
Besides, for $k=1$, $2$ and $n$, $\sigma_{k}(\lambda(V))$
corresponds to the mean curvature, the scalar curvature and the
Gaussian curvature of $\mathcal{G}$ at $V$, respectively.
We also need the following notion:
\begin{definition}
For $1\leq k\leq n$, let $\Gamma_{k}$ be the cone in $\mathbb{R}^{n}$
determined by
\begin{eqnarray*}
\Gamma_{k}=\{\lambda\in\mathbb{R}^{n}\,|\,\sigma_{l}(\lambda)>0,
~~l=1,2,\ldots,k\}.
\end{eqnarray*}
A smooth graphic hypersurface $\mathcal{G}\subset\bar{M}$ is called
$k$-admissible if at every position vector $V\in\mathcal{G}$,
$(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\in\Gamma_{k}$.
\end{definition}
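The elementary symmetric functions \eqref{kwc} and the cone condition defining $k$-admissibility can be sketched directly in a few lines of Python (a straightforward combinatorial implementation, purely illustrative):

```python
from itertools import combinations
from math import prod

# sigma_k(lams): sum of all k-fold products of the entries, as in the
# definition of the k-th Weingarten curvature.
def sigma(k, lams):
    if k == 0:
        return 1.0                        # convention: sigma_0 = 1
    return sum(prod(c) for c in combinations(lams, k))

# lambda lies in Gamma_k iff sigma_l(lambda) > 0 for l = 1, ..., k.
def in_gamma_k(lams, k):
    return all(sigma(l, lams) > 0 for l in range(1, k + 1))

vals = [sigma(k, (1.0, 2.0, 3.0)) for k in (1, 2, 3)]   # [6.0, 11.0, 6.0]
a = in_gamma_k((2.0, 2.0, -1.0), 1)   # sigma_1 = 3 > 0  -> True
b = in_gamma_k((2.0, 2.0, -1.0), 2)   # sigma_2 = 0      -> False
```

The second example shows that the cones are nested and strictly shrinking: $(2,2,-1)$ is $1$-admissible but fails the $\sigma_2>0$ condition.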
For Eq. (\ref{main equation}), we can prove the following:
\begin{theorem} \label{maintheorem}
Let $M^{n}$ be a compact Riemannian $n$-manifold ($n\geq3$) and let
$\bar{M}=I\times_{f}M^{n}$, with the metric (\ref{wpm}), be the
warped product manifold defined as before. Assume that the warping
function $f$ is positive and differentiable, $f'>0$, and that
$\alpha_{l}(u(x),x)\in C^{\infty}(I\times M^{n})$ are positive
functions for all $0\leq l\leq k-1$. Suppose that
\begin{eqnarray} \label{as-1}
\sigma_{k}(e)\left(\frac{f'}{f}\right)^{k}\geq\sum\limits_{l=0}^{k-1}\alpha_{l}(u,x)\sigma_{l}(e)\left(\frac{f'}{f}\right)^{l}
\qquad for~u\geq r_{2},
\end{eqnarray}
\begin{eqnarray} \label{as-2}
\sigma_{k}(e)\left(\frac{f'}{f}\right)^{k}\leq\sum\limits_{l=0}^{k-1}\alpha_{l}(u,x)\sigma_{l}(e)\left(\frac{f'}{f}\right)^{l}
\qquad for~0<u\leq r_{1},
\end{eqnarray}
and
\begin{eqnarray} \label{as-3}
\frac{\partial}{\partial
u}\left[f^{k-l}(u)\alpha_{l}(u,x)\right]\leq0 \quad for~
r_{1}<u<r_{2},
\end{eqnarray}
where $[r_{1},r_{2}]\subset I$ and $e=(1,1,\cdots,1)$. Then there
exists a smooth, $k$-admissible, closed graphic hypersurface
$\mathcal{G}$ contained in the interior of the annulus
$\bar{M}^{+}_{-}$ and satisfying Eq. (\ref{main equation}).
\end{theorem}
\begin{remark} \label{remark-1}
\rm{ (1) The $k$-admissibility and the graphic property of the
hypersurface $\mathcal{G}$ ensure that Eq. (\ref{main
equation}) is a single scalar second-order elliptic PDE for the
graphic function $u$, which is the cornerstone of the a priori
estimates given below. If, furthermore, $M^{n}$ is convex, then
$M^{n}$ is diffeomorphic to $\mathbb{S}^{n}$ (i.e., the Euclidean
unit $n$-sphere), $\mathcal{G}$ is also a graphic hypersurface over
$\mathbb{S}^{n}$, and it is starshaped. In this setting, Theorem
\ref{maintheorem} degenerates into the following:
\begin{itemize}
\item \underline{\textbf{FACT 1}}. \emph{Under the assumptions of Theorem \ref{maintheorem}, if furthermore $M^{n}$ is convex, then there
exists a smooth, $k$-admissible, starshaped closed hypersurface
$\mathcal{G}$ contained in the interior of the annulus
$\bar{M}^{+}_{-}$ and satisfying Eq. (\ref{main equation}).}
\end{itemize}
(2) We refer readers to, e.g., \cite[Appendix A]{mdw} and \cite[pp.
204-211 and Chapter 7]{bon} for an introduction to the notion and
properties of warped product manifolds. Submanifolds in warped
product manifolds have nice geometric properties, and interesting
results can be expected; see, e.g., the eigenvalue
estimates for the drifting Laplacian and the nonlinear $p$-Laplacian
on minimal submanifolds in warped product manifolds of prescribed
type shown in \cite[Sections 3-5]{lmwz}.\\
(3) Eq. (\ref{main equation}) is actually a combination of
elementary symmetric functions of the eigenvalues of a given
$(0,2)$-tensor. Equations of this type are important not only in the study of PDEs but also
in the study of many important geometric problems. For instance,
if $\lambda(V)$ in Eq. (\ref{main equation}) is replaced by the
eigenvalues of the Hessian $D^{2}u$ of a function $u$ defined over a
bounded $(k-1)$-convex domain $\Omega\subset\mathbb{R}^{n}$, one obtains the PDE
\begin{eqnarray} \label{ME-1}
\sigma_{k}(D^{2}u(x))=\sum\limits_{l=0}^{k-1}\alpha_{l}(x)\sigma_{l}(D^{2}u(x)), \quad \forall x\in\Omega,
\end{eqnarray}
which Krylov \cite{nk} studied with a prescribed
Dirichlet boundary condition (DBC for short) and coefficients $\alpha_{l}(x)\geq0$
for all $0\leq l\leq k-1$; he observed that the natural admissible cone making the equation elliptic
is $\Gamma_{k}$. Recently, Guan-Zhang \cite{gz} showed that, in contrast with Krylov's observation, for
an admissible solution of Eq. (\ref{ME-1}) with prescribed DBC in the
sense that $\lambda(D^{2}u)\in\Gamma_{k-1}$,
there is no sign requirement on the coefficient
function $\alpha_{k-1}(x)$. Moreover, they also investigated the
solvability of the fully nonlinear elliptic equation
\begin{eqnarray*}
\sigma_{k}(D^{2}u+uI)=\sum\limits_{l=0}^{k-1}\alpha_{l}(x)\sigma_{l}(D^{2}u+uI),
\quad \forall x\in\mathbb{S}^n,
\end{eqnarray*}
for an unknown function $u:\mathbb{S}^{n}\rightarrow\mathbb{R}$ defined over $\mathbb{S}^{n}$, where $\alpha_{l}(x)$, $0\leq l\leq k-2$,
are positive functions.
Fu-Yau \cite{fy1,fy2} proposed an
equation of this type in
the study of the Hull-Strominger system in theoretical
physics, and Phong-Picard-Zhang
investigated the Fu-Yau equation and its generalizations in the series of
works \cite{ppz1,ppz2,ppz3}. Recently, inspired by Krylov's and
Guan-Zhang's works \cite{gz,nk}, Chen-Shang-Tu \cite{cst}
considered the equation
\begin{eqnarray} \label{ME-2}
\sigma_{k}(\kappa(X))=\sum\limits_{l=0}^{k-1}\alpha_{l}(X)\sigma_{l}(\kappa(X)),
\quad \forall X\in\mathcal{M}\subset\mathbb{R}^{n+1}, \qquad 2\leq
k\leq n,
\end{eqnarray}
on an embedded, closed, starshaped $n$-dimensional hypersurface $\mathcal{M}$,
$n\geq3$, where $\kappa(X)$ denotes the principal curvatures of
$\mathcal{M}$ at $X$, and $\alpha_{l}(X)$, $0\leq l\leq k-1$,
are positive functions defined over $\mathcal{M}$. Under the
$k$-convexity of $\mathcal{M}$ and several other growth
assumptions (see \cite[Theorem 1.1]{cst}), they showed the existence of solutions to Eq.
(\ref{ME-2}). This result has been generalized by Shang-Tu
\cite{st1} to the situation where the ambient space $\mathbb{R}^{n+1}$ is replaced by
the hyperbolic space $\mathbb{H}^{n+1}$.
}
\end{remark}
If $M^{n}=\mathbb{S}^{n}$ and $I=(0,\ell)$ with $0<\ell\leq\infty$,
putting a one-point compactification topology on $\bar{M}$ by
identifying all pairs in $\{0\}\times\mathbb{S}^{n}$ with a single
point $p^{\ast}$ (see, e.g., \cite[page 705]{fmi} for this notion)
and requiring that $f(0)=0$, $f'(0)=1$, the warped product manifold
$\bar{M}$ becomes the spherically symmetric manifold
$\widetilde{M}:=[0,\ell)\times_{f}\mathbb{S}^{n}$. The single point
$p^{\ast}$ is called the \emph{base point} of $\widetilde{M}$.
Applying \textbf{FACT 1} in Remark \ref{remark-1} directly, one has:
\begin{corollary} \label{coro-1}
Under the assumptions of Theorem \ref{maintheorem}, with additionally
$M^{n}=\mathbb{S}^{n}$, $I=(0,\ell)$ with $0<\ell\leq\infty$, the
one-point compactification topology imposed, $f(0)=0$ and $f'(0)=1$,
there exists a smooth $k$-admissible, starshaped (w.r.t. the
base point $p^{\ast}$), closed hypersurface $\mathcal{G}$ contained
in the interior of the annulus $\bar{M}^{+}_{-}\subset\widetilde{M}$
and satisfying the Eq. (\ref{main equation}).
\end{corollary}
\begin{remark}
\rm{ (1) If furthermore the warping function $f$ satisfies
$f''(t)+Kf(t)=0$ for some constant $K$, i.e., the Jacobi equation,
then
\begin{eqnarray*}
f(t)=\left\{
\begin{array}{lll}
\sin(\sqrt{K}t)/\sqrt{K}, \quad \qquad & K>0,~\ell=\pi/\sqrt{K},\\
t, \quad \qquad & K=0,~\ell=\infty,\\
\sinh(\sqrt{-K}t)/\sqrt{-K}, \quad \qquad & K<0,~\ell=\infty,
\end{array}
\right.
\end{eqnarray*}
and moreover, in this setting, $\widetilde{M}$ corresponds to
$\mathbb{S}^{n+1}(1/\sqrt{K})$ (i.e., the Euclidean $(n+1)$-sphere
with radius $1/\sqrt{K}$) with the antipodal point of $p^{\ast}$
removed, $\mathbb{R}^{n+1}$, and $\mathbb{H}^{n+1}(K)$ (i.e., the
hyperbolic $(n+1)$-space with constant curvature $K<0$),
respectively. From this, one can see that spherically symmetric
manifolds cover space forms as a special case; in fact, they were
called \emph{generalized space forms} by Katz and Kondo \cite{KK}.
\\
(2) Clearly, our Corollary \ref{coro-1} covers Chen-Shang-Tu's and
Shang-Tu's main results in \cite{cst,st1} (mentioned in (3) of Remark \ref{remark-1}) as special cases. \\
(3) Spherically symmetric manifolds have nice symmetry in the
non-radial directions, which makes it possible to use
this kind of manifold as a model space in the study of
comparison theorems. In fact, Prof. J.
Mao and his collaborators have used spherically symmetric manifolds
as model spaces to successfully obtain Cheng-type eigenvalue
comparison theorems for the first Dirichlet eigenvalue of the Laplacian on complete manifolds with
radial (Ricci and sectional) curvatures bounded, an Escobar-type
eigenvalue comparison theorem for the first nonzero Steklov
eigenvalue of the Laplacian on complete manifolds with radial
sectional curvature bounded from above, heat kernel and volume comparison
theorems for complete manifolds with suitable curvature
constraints, and so on; see \cite{fmi,m1-1,m1-2,m1,ywmd} for details.
}
\end{remark}
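Indeed, one can check directly that each branch of the warping function above solves the Jacobi equation with the required normalization; for instance, for $K>0$,
\begin{eqnarray*}
f(t)=\frac{\sin(\sqrt{K}t)}{\sqrt{K}}\quad\Longrightarrow\quad
f''(t)=-\sqrt{K}\sin(\sqrt{K}t)=-Kf(t),\qquad f(0)=0,\quad
f'(0)=\cos(0)=1,
\end{eqnarray*}
and the cases $K=0$ and $K<0$ are verified similarly.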
This paper is organized as follows. In Section \ref{S2}, we will
list some useful formulas, including several basic properties of
$\sigma_{k}$ and the structure equations for hypersurfaces in warped
product manifolds. A priori estimates (including $C^0$, $C^1$ and
$C^2$ estimates) for solutions to the Eq. (\ref{main equation}) will
be established successively in Sections \ref{S3}-\ref{S5}. In Section
\ref{S6}, by applying the degree theory, together with the a priori
estimates obtained, we prove the existence of solutions to
prescribed Weingarten curvature equations of type (\ref{main
equation}).
\section{Some useful formulae} \label{S2}
Besides the notations set in Section \ref{S1}, denote by
$\bar\nabla$, $\nabla$ the Riemannian connections on $\bar{M}$ and
$\mathcal{G}$, respectively. The curvature tensors of $\bar{M}$ and
$\mathcal{G}$ will be denoted by $\bar{R}$ and $R$, respectively.
Let $\{E_{0}=\nu,E_{1},\cdots,E_{n}\}$ be an orthonormal frame field
along $\mathcal{G}$ and let
$\{\omega_{0},\omega_{1},\cdots,\omega_{n}\}$ be its associated dual
frame field. The connection forms $\{\omega_{ij}\}$ and curvature
forms $\{\Omega_{ij}\}$ on $\mathcal{G}$ satisfy the structure
equations
\begin{eqnarray*}
d\omega_{i}-\sum\limits_{j}\omega_{ij}\wedge \omega_{j}=0,\quad \omega_{ij}+\omega_{ji}=0,
\end{eqnarray*}
\begin{eqnarray*}
d\omega_{ij}-\sum\limits_{k}\omega_{ik}\wedge \omega_{kj}=\Omega_{ij}=-\frac{1}{2}\sum\limits_{k,l}R_{ijkl}\omega_{k}\wedge \omega_{l}.
\end{eqnarray*}
The coefficients $h_{ij}$, $1\leq i,j\leq n$, of the second
fundamental form are given by the Weingarten equation
\begin{eqnarray}\label{Weingarten-eq}
\omega_{i0}=\sum\limits_{j}h_{ij}\omega_{j}.
\end{eqnarray}
The covariant derivatives of the second fundamental form $h_{ij}$ on
$\mathcal{G}$ are given by
\begin{eqnarray*}
\sum\limits_{k}h_{ijk}\omega_{k}=dh_{ij}+\sum\limits_{l}h_{il}\omega_{lj}+
\sum\limits_{l}h_{lj}\omega_{li},
\end{eqnarray*}
\begin{eqnarray*}
\sum\limits_{l}h_{ijkl}\omega_{l}=dh_{ijk}+\sum\limits_{l}h_{ljk}\omega_{li}+
\sum\limits_{l}h_{ilk}\omega_{lj}+\sum\limits_{l}h_{ijl}\omega_{lk}.
\end{eqnarray*}
The Codazzi equation is
\begin{eqnarray}\label{codazzi-eq}
h_{ijk}-h_{ikj}=-\bar{R}_{0ijk},
\end{eqnarray}
and the Ricci identity can be stated as follows:
\begin{lemma} (see also \cite[Lemma 2.2]{clw})
Let $X(x)$ be a point of $\mathcal{G}$ and
$\{E_{0}=\nu,E_{1},\cdots,E_{n}\}$ be an adapted frame field such
that each $E_{i}$ is a principal direction and $\omega^{k}_{i}=0$ at
$X(x)$. Let $(h_{ij})$ be the second fundamental form of
$\mathcal{G}$. Then, at the point $X(x)$, we have
\begin{eqnarray}\label{ricci-eq}
\begin{split}
h_{llii}=&h_{iill}-h_{lm}(h_{mi}h_{il}-h_{ml}h_{ii})-h_{mi}(h_{mi}h_{ll}-h_{ml}h_{li})\\
&+\bar{R}_{0iil;l}-2h_{ml}\bar{R}_{miil}+h_{il}\bar{R}_{0i0l}+h_{ll}\bar{R}_{0ii0}\\
&+\bar{R}_{0lil;i}-2h_{mi}\bar{R}_{mlil}+h_{ii}\bar{R}_{0l0l}+h_{li}\bar{R}_{0li0}.
\end{split}
\end{eqnarray}
\end{lemma}
As mentioned in Section \ref{S1}, one can suitably choose local
coordinates such that $\{e_{i}\}_{i=1,2,\cdots,n}$ is an orthonormal
frame field on $M^{n}$, and then one can find an orthonormal frame
field $\{\bar{e}_{\alpha}\}_{\alpha=0,1,\cdots,n}$ on $\bar{M}$ such
that $\bar{e}_{i}=(1/f)e_{i}$, $1\leq\alpha=i\leq n$, and
$\bar{e}_{0}=\partial/\partial t$. Correspondingly, the associated
dual frame field of $\{\bar{e}_{\alpha}\}_{\alpha=0,1,\cdots,n}$
is $\{\bar{\theta}_{\alpha}\}_{\alpha=0,1,\cdots,n}$ with
$\bar{\theta}_{i}=f\theta_{i}$, $1\leq i \leq n$, and
$\bar{\theta}_{0}=dt$. Clearly, $\{\theta_{i}\}_{i=1,\cdots,n}$ is
the dual frame field of the orthonormal frame field
$\{e_{i}\}_{i=1,2,\cdots,n}$. We have the following fact:
\begin{lemma} (see \cite{clw})
On the leaf $M_{t}$ of the warped product manifold
$\bar{M}=I\times_{f}M^{n}$, the curvature satisfies
\begin{eqnarray}\label{R_ijk0}
\bar{R}_{ijk0}=0,
\end{eqnarray}
and the principal curvatures are given by
\begin{eqnarray}\label{kappa}
\kappa(t)=\frac{f'(t)}{f(t)},
\end{eqnarray}
where the outward unit normal vector
$\bar{e}_{0}=\frac{\partial}{\partial t}$ is chosen for each leaf
$M_{t}$.
\end{lemma}
\begin{remark}
\rm{ In fact, the leaf $M_{t}$ can also be seen as a closed graphic
hypersurface in $\bar{M}$, which corresponds to the graph of a
constant function, i.e., $u=const$. Besides, we refer readers to
\cite[Section 2]{clw} or \cite{PP} for the geometry of hypersurfaces
in warped product manifolds if necessary.
}
\end{remark}
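For example, taking $M^{n}=\mathbb{S}^{n}$ and $f(t)=t$ (the Euclidean case), the leaf $M_{t}$ is the round sphere of radius $t$ centered at the base point, and \eqref{kappa} gives
\begin{eqnarray*}
\kappa(t)=\frac{f'(t)}{f(t)}=\frac{1}{t},
\end{eqnarray*}
which is exactly the principal curvature of the sphere of radius $t$ in $\mathbb{R}^{n+1}$ with respect to the outward unit normal.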
Consider two functions $\tau :\mathcal{G}\rightarrow \mathbb{R}$ and
$\Lambda :\mathcal{G}\rightarrow \mathbb{R}$ given by
\begin{eqnarray}\label{f-1}
\tau=f\langle\nu,\bar{e}_{0}\rangle=\langle V,\nu\rangle,\qquad
\Lambda=\int_{0}^{u} f(s) ds,
\end{eqnarray}
where $V=f\bar{e}_{0}=f\frac{\partial}{\partial t}$ is the position
vector field and $\nu$ is the outward unit normal vector field. Then
we have:
\begin{lemma} \label{f-2} (see \cite{ajb})
The gradient vector fields of the functions $\tau$ and $\Lambda$ are
\begin{eqnarray}\label{g-la}
\nabla_{E_{i}}\Lambda=f\left<\bar{e}_{0},E_{i}\right>,
\end{eqnarray}
\begin{eqnarray}\label{g-ta}
\nabla_{E_{i}}\tau=\sum\limits_{j}\nabla_{E_{j}}\Lambda h_{ij},
\end{eqnarray}
and the second order derivatives of $\tau$ and $\Lambda$ are given
by
\begin{eqnarray}\label{d2-la}
\nabla^{2}_{E_{i},E_{j}}\Lambda=-\tau h_{ij}+f'g_{ij},
\end{eqnarray}
\begin{eqnarray}\label{d2-ta}
\nabla^{2}_{E_{i},E_{j}}\tau=-\tau\sum\limits_{k}h_{ik}h_{kj}+f'h_{ij}+\sum\limits_{k}(h_{ijk}+\bar{R}_{0ijk})\nabla_{E_{k}}\Lambda.
\end{eqnarray}
\end{lemma}
The following Newton-Maclaurin inequalities will be used frequently
(see, e.g., \cite{mt1,t2}).
\begin{lemma}\label{NM-ieq}
Let $\lambda \in \mathbb{R}^{n}$. For $0\leq l\leq k\leq n$, $r>s\geq
0$, $k\geq r$, $l\geq s$, we have
\begin{eqnarray*}
k(n-l+1)\sigma_{l-1}(\lambda)\sigma_{k}(\lambda)\leq l(n-k+1)\sigma_{l}(\lambda)\sigma_{k-1}(\lambda)
\end{eqnarray*}
and
\begin{eqnarray*}
\left[\frac{\sigma_{k}(\lambda)/C_{n}^{k}}{\sigma_{l}(\lambda)/C_{n}^{l}}\right]^{\frac{1}{k-l}}
\leq
\left[\frac{\sigma_{r}(\lambda)/C_{n}^{r}}{\sigma_{s}(\lambda)/C_{n}^{s}}\right]^{\frac{1}{r-s}},
\qquad \text{for}~\lambda\in\Gamma_{k}.
\end{eqnarray*}
\end{lemma}
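For example, taking $n=2$, $k=2$, $l=0$, $r=1$, $s=0$ in the second inequality (so that $\sigma_{0}=1$, $C_{2}^{0}=C_{2}^{2}=1$, $C_{2}^{1}=2$), one recovers the arithmetic-geometric mean inequality on $\Gamma_{2}$:
\begin{eqnarray*}
\sqrt{\sigma_{2}(\lambda)}=\left[\frac{\sigma_{2}(\lambda)/C_{2}^{2}}{\sigma_{0}(\lambda)/C_{2}^{0}}\right]^{\frac{1}{2}}
\leq\frac{\sigma_{1}(\lambda)/C_{2}^{1}}{\sigma_{0}(\lambda)/C_{2}^{0}}=\frac{\lambda_{1}+\lambda_{2}}{2},
\end{eqnarray*}
i.e., $\sqrt{\lambda_{1}\lambda_{2}}\leq(\lambda_{1}+\lambda_{2})/2$.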
Finally, we also need the following fact to ensure the ellipticity
of the Eq. \eqref{equation 1.1}.
\begin{lemma}\label{ellip-}
Let $\mathcal{G}=\{\left( u(x),x\right)|x\in M^{n}\}$ be a smooth
$(k-1)$-admissible closed hypersurface in $\bar{M}$ and
$\alpha_{l}(u,x)\geq 0$ for any $x\in M^{n}$ and $0\leq l\leq k-2$.
Then the operator
\begin{eqnarray*}
G\left(h_{ij}(V),u,x\right):=\frac{\sigma_{k}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}
-\sum\limits_{l=0}^{k-2}\alpha_{l}(u,x)\frac{\sigma_{l}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}
\end{eqnarray*}
is elliptic and concave with respect to $h_{ij}(V)$.
\end{lemma}
\begin{proof}
The proof is almost the same as that of \cite[Proposition
2.2]{gz}, so we omit it here.
\end{proof}
\section{$C^0$ estimate} \label{S3}
We consider the family of equations for $0\leq t\leq 1$,
\begin{eqnarray}\label{equation 1.1}
\frac{\sigma_{k}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}-\sum\limits_{l=0}^{k-2}t\alpha_{l}(u,x)\frac{\sigma_{l}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}-\alpha_{k-1}(u,x,t)=0,
\end{eqnarray}
where
\begin{eqnarray*}
\alpha_{k-1}(u,x,t):=t\alpha_{k-1}(u,x)+(1-t)\varphi(u)\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'}{f},
\end{eqnarray*}
and $\varphi$ is a positive function defined on $I$ satisfying
the following conditions:
(a) $\varphi(u)>0$;
(b) $\varphi(u)>1$ for $u \leq r_{1}$;
(c) $\varphi(u)<1$ for $u \geq r_{2}$;
(d) $\varphi'(u)<0$.
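For example, with $0<r_{1}<r_{2}$, one admissible choice is
\begin{eqnarray*}
\varphi(u)=\exp\left(\frac{r_{1}+r_{2}}{2}-u\right),
\end{eqnarray*}
which is positive and strictly decreasing, and satisfies $\varphi(u)>1$ for $u\leq r_{1}<\frac{r_{1}+r_{2}}{2}$ and $\varphi(u)<1$ for $u\geq r_{2}>\frac{r_{1}+r_{2}}{2}$, so that conditions (a)-(d) all hold.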
\begin{lemma}[\textbf{$C^{0}$ estimate}]\label{C0 estimate}
Assume that $0\leq \alpha_{l}(u,x)\in C^{\infty}(I\times M^{n})$.
Under the assumptions \eqref{as-1} and \eqref{as-2} mentioned in
Theorem \ref{maintheorem}, if $\mathcal{G}=\{(u(x),x)|x\in
M^{n}\}\subset \bar{M}$ is a smooth $(k-1)$-admissible, closed
graphic hypersurface satisfying the curvature equation
\eqref{equation 1.1} for a given $t\in[0,1]$, then
\begin{eqnarray*}
r_{1}\leq u(x) \leq r_{2},\qquad \forall x\in M^{n}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Assume that $u(x)$ attains its maximum at $x_{0}\in M^{n}$ and
$u(x_{0})\geq r_{2}$. Then from \eqref{h_{ij}}, one has
\begin{eqnarray*}
h^{i}_{j}=\frac{1}{v}\left[f'\delta^{i}_{j}+\frac{1}{v^{2}}\left(f'u_{j}u^{i}-fu^{i}_{j}\right)\right],
\end{eqnarray*}
where $v=\sqrt{f^{2}+|D u|^{2}}$, which implies
\begin{eqnarray*}
h^{i}_{j}(x_{0})=\frac{1}{f}\left(f'\delta^{i}_{j}-\frac{u^{i}_{j}}{f}\right)\geq\frac{f'}{f}\delta^{i}_{j}.
\end{eqnarray*}
Note that $\frac{\sigma_{k}}{\sigma_{k-1}}$ and $\frac{\sigma_{k-1}}{\sigma_{l}}$ with $0\leq l\leq k-2$ are concave in $\Gamma_{k-1}$. Thus,
\begin{eqnarray*}
\frac{\sigma_{k}}{\sigma_{k-1}}(h^{i}_{j})\geq\frac{\sigma_{k}}{\sigma_{k-1}}\left(\frac{f'}{f}\delta^{i}_{j}\right)+
\frac{\sigma_{k}}{\sigma_{k-1}}\left(-\frac{1}{f^{2}}u_{j}^{i}\right)
\geq\frac{\sigma_{k}}{\sigma_{k-1}}\left(\frac{f'}{f}\delta^{i}_{j}\right).
\end{eqnarray*}
Therefore, it follows that
\begin{eqnarray*}
\frac{\sigma_{k}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}\geq\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'}{f}.
\end{eqnarray*}
Similarly, one can get
\begin{eqnarray*}
\frac{\sigma_{l}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}\leq\frac{\sigma_{l}(e)}{\sigma_{k-1}(e)}\left(\frac{f}{f'}\right)^{k-l-1}.
\end{eqnarray*}
Combining the above two inequalities, we have
\begin{eqnarray*}
\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'}{f}-\sum\limits_{l=0}^{k-2}t\alpha_{l}(u,x)\frac{\sigma_{l}(e)}{\sigma_{k-1}(e)}\left(\frac{f}{f'}\right)^{k-l-1}\leq
\alpha_{k-1}(u,x,t).
\end{eqnarray*}
Clearly, if $t=0$, the above inequality contradicts
\eqref{equation 1.1}. When $0<t\leq1$, we can obtain
\begin{eqnarray*}
\begin{split}
\alpha_{k-1}(u,x)&=\left(1-\frac{1}{t}\right)\varphi\frac{f'}{f}\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}+\frac{1}{t}\alpha_{k-1}(u,x,t)\\
&\geq
\left(\frac{1}{t}\frac{f'}{f}-\left(1-\frac{1}{t}\right)\varphi\frac{f'}{f}\right)\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}
-\sum\limits_{l=0}^{k-2}\alpha_{l}(u,x)\frac{\sigma_{l}(e)}{\sigma_{k-1}(e)}\left(\frac{f}{f'}\right)^{k-l-1}\\
&>\frac{f'}{f}\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}-\sum\limits_{l=0}^{k-2}\alpha_{l}(u,x)\frac{\sigma_{l}(e)}{\sigma_{k-1}(e)}\left(\frac{f}{f'}\right)^{k-l-1},
\end{split}
\end{eqnarray*}
which contradicts
\begin{eqnarray*}
\frac{f'}{f}\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}-\sum\limits_{l=0}^{k-2}\alpha_{l}(u,x)\frac{\sigma_{l}(e)}{\sigma_{k-1}(e)}\left(\frac{f}{f'}\right)^{k-l-1}\geq
\alpha_{k-1}(u,x)
\end{eqnarray*}
in view of \eqref{as-1} and the condition $\varphi(u)<1$ for $u\geq r_{2}$. This shows $\sup u\leq r_{2}$. Similarly,
we can obtain $\inf u\geq r_{1}$ in view of \eqref{as-2} and the condition $\varphi(u)>1$ for $u\leq r_{1}$. Our proof is finished.
\end{proof}
Now, we can prove the following uniqueness result.
\begin{lemma}\label{uni-sol}
For $t=0$, there exists a unique admissible solution of the Eq.
\eqref{equation 1.1}, namely $\mathcal{G}_{0}=\{(u(x),x)\in
\bar{M}|u(x)=u_{0}\}$, where $u_{0}$ is the unique solution of
$\varphi(u_{0})=1$.
\end{lemma}
\begin{proof}
Let $\mathcal{G}_{0}$ be a solution of \eqref{equation 1.1}; then
for $t=0$,
\begin{eqnarray*}
\frac{\sigma_{k}(\lambda(V))}{\sigma_{k-1}(\lambda(V))}-\varphi(u)\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'}{f}=0.
\end{eqnarray*}
Assume that $u(x)$ attains its maximum $u_{\mathrm{max}}$ at
$x_{0}\in M^{n}$. Then one has
\begin{eqnarray*}
\frac{\sigma_{k}(\lambda(V))}{\sigma_{k-1}(\lambda(V))} \geq
\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'}{f},
\end{eqnarray*}
which implies
\begin{eqnarray*}
\varphi(u_{\mathrm{max}}) \geq 1.
\end{eqnarray*}
Similarly, the minimum $u_{\mathrm{min}}$ of $u(x)$ satisfies
\begin{eqnarray*}
\varphi(u_{\mathrm{min}}) \leq 1.
\end{eqnarray*}
Since $\varphi$ is a decreasing function, we obtain
\begin{eqnarray*}
\varphi(u_{\mathrm{max}}) = \varphi(u_{\mathrm{min}}) = 1,
\end{eqnarray*}
which implies that $u(x)=u_{0}$ for any $(u(x),x)\in
\mathcal{G}_{0}$, with $u_{0}$ the unique solution of
$\varphi(u_{0})=1$.
\end{proof}
\section{$C^1$ estimate} \label{S4}
We can rewrite the Eq. \eqref{equation 1.1} as follows:
\begin{eqnarray*}
G(h_{ij}(V),u,x,t)=\frac{\sigma_{k}(\kappa(V))}{\sigma_{k-1}(\kappa(V))}-\sum\limits_{l=0}^{k-2}t\alpha_{l}(u,x)\frac{\sigma_{l}(\kappa(V))}{\sigma_{k-1}(\kappa(V))}=\alpha_{k-1}(u,x,t).
\end{eqnarray*}
For convenience, we will simplify notations as follows:
\begin{eqnarray*}
G_{k}(h_{ij}(V)):=\frac{\sigma_{k}(\lambda(V))}{\sigma_{k-1}(\lambda(V))},\qquad
G_{l}(h_{ij}(V)):=-\frac{\sigma_{l}(\lambda(V))}{\sigma_{k-1}(\lambda(V))},
\end{eqnarray*}
and
\begin{eqnarray*}
G^{ij}(\lambda(V)):=\frac{\partial G}{\partial h_{ij}},\quad
G^{ij,rs}(\lambda(V)):=\frac{\partial^{2} G}{\partial h_{ij}\partial
h_{rs}}.
\end{eqnarray*}
\begin{lemma}[\textbf{$C^{1}$ estimate}]\label{C1 estimate}
Assume that $k\geq 2$ and
\begin{eqnarray*}
\alpha_{l}(u,x)\geq c_{l}>0,\qquad \forall x\in M^{n}
\end{eqnarray*}
for $0\leq l\leq k-1$. Under the assumption \eqref{as-3}, if the
smooth $(k-1)$-admissible, closed graphic hypersurface $\mathcal{G}$
satisfies the Eq. \eqref{main equation} and $u$ has positive upper
and lower bounds, then there exists a constant $C$, depending on $n$,
$k$, $c_{l}$, $|\alpha_{l}|_{C^{1}}$, the $C^{0}$ bound of $f$ and
the curvature tensor $\bar{R}$, and the minimum and maximum values of
$u$, such that
\begin{eqnarray*}
|\nabla u(x)|\leq C,\qquad \forall x\in M^{n}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
First, we know from \eqref{nu} and \eqref{f-1} that
\begin{eqnarray*}
\tau=\frac{f^{2}(u)}{\sqrt{f^{2}(u)+|Du|^{2}}}.
\end{eqnarray*}
It is sufficient to obtain a positive lower bound of $\tau$. Define
\begin{eqnarray*}
\psi=-\log \tau+\gamma(\Lambda),
\end{eqnarray*}
where $\gamma(t)$ is a function to be chosen later. Assume that
$x_{0}$ is the point where $\psi$ attains its maximum. If $V$ is
parallel to the normal direction $\nu$ of $\mathcal{G}$ at $x_{0}$,
our result holds since $\left<V,\nu\right>=|V|$. So, assume that $V$
is not parallel to the normal direction $\nu$ at $x_{0}$; then we
may choose the local orthonormal frame field
$\{E_{1},\cdots,E_{n}\}$ on $\mathcal{G}$ satisfying
\begin{eqnarray*}
\left\langle V,E_{1}\right\rangle\neq 0\quad {\rm and} \quad
\left\langle V,E_{i}\right\rangle= 0,\quad\forall~i\geq2.
\end{eqnarray*}
Then, at $x_{0}$, we have
\begin{eqnarray}\label{tau-i}
\tau_{i}=\tau\gamma'\Lambda_{i}
\end{eqnarray}
and
\begin{eqnarray*}
\begin{split}
\psi_{ii}=&-\frac{\tau_{ii}}{\tau}+\frac{(\tau_{i})^{2}}{\tau^{2}}+\gamma''\Lambda_{i}^{2}+\gamma'\Lambda_{ii}\\
=&-\frac{1}{\tau}\left(\sum\limits_{k}(h_{iik}+\bar{R}_{0iik})\Lambda_{k}+f'h_{ii}-\tau h_{ii}^{2}\right)\\
&+\left((\gamma')^{2}+\gamma''\right)\Lambda_{i}^{2}+\gamma'(f'-\tau h_{ii})
\end{split}
\end{eqnarray*}
in view of
\begin{eqnarray*}
\tau_{ii}=\sum\limits_{k}(h_{iik}+\bar{R}_{0iik})\left\langle
V,E_{k}\right\rangle+f'h_{ii}-\tau\sum\limits_{k} h_{ik}h_{ki}.
\end{eqnarray*}
By \eqref{g-la}, \eqref{g-ta} and \eqref{tau-i}, we have at $x_{0}$
\begin{eqnarray}\label{h-11}
h_{11}=\tau \gamma', \quad h_{1i}=0,\quad \forall~i\geq2.
\end{eqnarray}
Therefore, we can rotate the coordinate system such that
$\{E_{i}\}_{i=1}^{n}$ are the principal curvature directions of the
second fundamental form $h_{ij}$, i.e., $h_{ij}=h_{ii}\delta_{ij}$.
Since $\Lambda_{i}=\left\langle V,E_{i}\right\rangle$, we have
$\Lambda_{1}=\left\langle V,E_{1}\right\rangle\neq0$ and
$\Lambda_{i}=0$ for any $i\geq2$. So, we can get
\begin{eqnarray*}
\begin{split}
G^{ii}\psi_{ii}=&-\frac{f'}{\tau}G^{ii}h_{ii}-\frac{1}{\tau}G^{ii}(h_{ii1}+\bar{R}_{0ii1})\Lambda_{1}+G^{ii}h_{ii}^{2}\\
&+\left((\gamma')^{2}+\gamma''\right)G^{11}\Lambda_{1}^{2}+\gamma'G^{ii}(f'-\tau h_{ii}).
\end{split}
\end{eqnarray*}
Noting that
\begin{eqnarray*}
G^{ij}h_{ij}=G-\sum\limits_{l=0}^{k-2}(k-l)\alpha_{l}G_{l}=\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)\alpha_{l}G_{l}
\end{eqnarray*}
and
\begin{eqnarray*}
G^{ij}h_{ij1}=\nabla_{1}\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t\nabla_{1}\alpha_{l}G_{l},
\end{eqnarray*}
we conclude
\begin{eqnarray}\label{G-psi}
\begin{split}
G^{ii}\psi_{ii}=&\frac{\Lambda_{1}}{\tau}\left(-\nabla_{1}\alpha_{k-1}(u,x,t)+\sum\limits_{l=0}^{k-2}t\nabla_{1}\alpha_{l}G_{l}\right)\\
&+\frac{f'}{\tau}\left(-\alpha_{k-1}(u,x,t)+\sum\limits_{l=0}^{k-2}(k-l)\alpha_{l}G_{l}\right)+G^{ii}h_{ii}^{2}\\
&-\frac{1}{\tau}G^{ii}\bar{R}_{0ii1}\Lambda_{1}+\left((\gamma')^{2}+\gamma''\right)G^{11}\Lambda_{1}^{2}+\gamma'G^{ii}(f'-\tau h_{ii})\\
=&\frac{1}{\tau}\left(-\Lambda_{1}\nabla_{1}\alpha_{k-1}(u,x,t)-f'\alpha_{k-1}(u,x,t)\right)\\
&+\frac{1}{\tau}\sum\limits_{l=0}^{k-2}tG_{l}\left(\Lambda_{1}\nabla_{1}\alpha_{l}+f'(k-l)\alpha_{l}\right)+G^{ii}h_{ii}^{2}\\
&-\frac{1}{\tau}G^{ii}\bar{R}_{0ii1}\Lambda_{1}+\left((\gamma')^{2}+\gamma''\right)G^{11}\Lambda_{1}^{2}+\gamma'G^{ii}(f'-\tau h_{ii}).
\end{split}
\end{eqnarray}
Since $\left\langle V,E_{i}\right\rangle=0$ for $i=2,\cdots,n$, we
obtain
\begin{eqnarray*}
V=\left\langle V,E_{1}\right\rangle
E_{1}+\left<V,\nu\right>\nu=\Lambda_{1}E_{1}+\tau\nu,
\end{eqnarray*}
which results in
\begin{eqnarray*}
\Lambda_{1}\nabla_{1}\alpha_{l}(u,x)+(k-l)f'\alpha_{l}(u,x)=\bar{\nabla}_{V}\alpha_{l}(u,x)+(k-l)f'\alpha_{l}(u,x)-\tau\bar{\nabla}_{\nu}\alpha_{l}(u,x).
\end{eqnarray*}
We know from the assumption \eqref{as-3} that
\begin{eqnarray*}
\left[(k-l)f'\alpha_{l}(u,x)+\nabla_{V}\alpha_{l}(u,x)\right]=\left[(k-l)f'\alpha_{l}(u,x)+f\frac{\partial \alpha_{l}(u,x)}{\partial u}\right]\leq 0.
\end{eqnarray*}
Thus,
\begin{eqnarray}\label{ieq-1}
\Lambda_{1}\nabla_{1}\alpha_{l}(u,x)+(k-l)f'\alpha_{l}(u,x)\leq
-\tau\bar{\nabla}_{\nu}\alpha_{l}(u,x)
\end{eqnarray}
and
\begin{eqnarray}\label{ieq-2}
\Lambda_{1}\nabla_{1}\alpha_{k-1}(u,x,t)+f'\alpha_{k-1}(u,x,t) \leq
(1-t)\varphi'\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}-\tau\bar{\nabla}_{\nu}\alpha_{k-1}(u,x,t).
\end{eqnarray}
Substituting \eqref{ieq-1} and \eqref{ieq-2} into \eqref{G-psi}, we have at $x_{0}$
\begin{eqnarray}\label{ieq-3}
\begin{split}
0\geq~~ &G^{ii}\psi_{ii}\\
\geq~~ &G^{ii}h_{ii}^{2}+\left((\gamma')^{2}+\gamma''\right)G^{11}\Lambda_{1}^{2}+\gamma'G^{ii}(f'-\tau h_{ii})-\frac{1}{\tau}G^{ii}\bar{R}_{0ii1}\Lambda_{1}\\
&-t\sum\limits_{l=0}^{k-2}G_{l}\bar{\nabla}_{\nu}\alpha_{l}(u,x)-\frac{(1-t)}{\tau}\varphi'\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}+\bar{\nabla}_{\nu}\alpha_{k-1}(u,x,t)\\
=~~&G^{ii}\left(h_{ii}-\frac{1}{2}\gamma'\tau\right)^{2}+\left((\gamma')^{2}+\gamma''\right)G^{11}\Lambda_{1}^{2}+G^{ii}\left(\gamma'f'-\frac{1}{4}(\gamma')^{2}\tau^{2}\right)\\
&-\frac{1}{\tau}G^{ii}\bar{R}_{0ii1}\Lambda_{1}-t\sum\limits_{l=0}^{k-2}G_{l}\bar{\nabla}_{\nu}\alpha_{l}(u,x)-\frac{(1-t)}{\tau}\varphi'\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}+\bar{\nabla}_{\nu}\alpha_{k-1}(u,x,t).
\end{split}
\end{eqnarray}
Choosing
\begin{eqnarray*}
\gamma(t)=-\frac{\alpha}{t}
\end{eqnarray*}
for a sufficiently large positive constant $\alpha$, we have
\begin{eqnarray*}
\gamma'(t)=\frac{\alpha}{t^{2}},\quad \gamma''(t)=-\frac{2\alpha}{t^{3}}.
\end{eqnarray*}
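Note that this choice indeed yields the sign condition needed below: evaluated at $t=\Lambda>0$,
\begin{eqnarray*}
(\gamma'(t))^{2}+\gamma''(t)=\frac{\alpha^{2}}{t^{4}}-\frac{2\alpha}{t^{3}}=\frac{\alpha(\alpha-2t)}{t^{4}}\geq 0
\end{eqnarray*}
provided $\alpha\geq 2t$; since $u$ is bounded, so is $\Lambda$ on $\mathcal{G}$, and hence taking $\alpha$ sufficiently large suffices.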
Therefore, \epsilonqref{ieq-3} becomes
\betaegin{equation}gin{eqnarray}\langlebel{ieq-4}
\betaegin{equation}gin{split}
0\mathfrak geq G^{ii}\left(\mathfrak gamma'f'-{\mathfrak f}racac{1}{4}(\mathfrak gamma')^{2}\tau^{2}\rightght)
-c_{1}\left(\,\,\,\,um\limits_{l=0}^{k-2}|G_{l}|+1\rightght)
-{\mathfrak f}racac{1}{\tau}G^{ii}\betaar{R}_{0ii1}\Lambda_{1}
\epsilonnd{split}
\epsilonnd{eqnarray}
in view of
\betaegin{equation}gin{eqnarray*}
(\mathfrak gamma')^{2}+\mathfrak gamma''\mathfrak geq 0,
\epsilonnd{eqnarray*}
where $c_{1}$ is a positive constant depending on $|\alphalpha_{l}|_{C^{1}}$. Since $V=\left\langlengle V,E_{1}\rightght\ranglengle E_{1}+\left\langlengle V,\nu\rightght\ranglengle\nu$, we can
find that $V\partialerp{\mathrm{Span}}(E_{2},\cdots,E_{n})$, i.e., $V$ is
orthogonal with the subspace spanned by $E_{2},\cdots,E_{n}$. On the
other hand, $E_{1},\nu$ are orthogonal with
$\mathrm{Span}(E_{2},\cdots,E_{n})$. It is possible to choose
suitable coordinate system such that $\betaar{E}_{1}\partialerp\mathrm{
Span}(E_{2},\cdots,E_{n})$, which implies that the pairs
$\{V,\betaar{E}_{1}\}$ and $\{\nu,E_{1}\}$ lie in the same plane and
\betaegin{equation}gin{eqnarray*}
{\mathrm{Span}}(E_{2},...,E_{n})={\mathrm{Span}}(\betaar{E}_{2},\cdots,\betaar{E}_{n}),
\epsilonnd{eqnarray*}
where of course $\{\betaar{E}_{0}=\betaar{e}_0,\betaar{E}_{1},\cdots,\betaar{E}_{n}\}$ is a local orthonormal frame field in $\betaar{M}$.
Therefore, we can choose
$E_{2}=\betaar{E}_{2},\ldots,E_{n}=\betaar{E}_{n}$, and then vectors $\nu$
and $E_{1}$ can be decomposed into
\betaegin{equation}gin{eqnarray*}
&&\nu=\left\langlengle\nu,\betaar{e}_{0}\rightght\ranglengle\betaar{e}_{0}+\left\langlengle\nu,\betaar{E}_{1}\rightght\ranglengle\betaar{E}_{1}={\mathfrak f}racac{\tau}{f}\betaar{e}_{0}+\left\langlengle\nu,\betaar{E}_{1}\rightght\ranglengle\betaar{E}_{1},\\
&&\qquad E_{1}=\left\langlengle
E_{1},\betaar{e}_{0}\rightght\ranglengle\betaar{e}_{0}+\left\langlengle
E_{1},\betaar{E}_{1}\rightght\ranglengle\betaar{E}_{1}.
\epsilonnd{eqnarray*}
By \eqref{R_ijk0} and the fact that $V=\Lambda_{1}E_{1}+\tau\nu$, we can
obtain
\begin{eqnarray}\label{eq-5}
\begin{split}
\bar{R}_{0ii1}&=\bar{R}(\nu,E_{i},E_{i},E_{1})\\
&=\frac{\tau}{f}\left\langle E_{1},\bar{e}_{0}\right\rangle\bar{R}(\bar{e}_{0},\bar{E}_{i},\bar{E}_{i},\bar{e}_{0})
+\left\langle\nu,\bar{E}_{1}\right\rangle\left\langle E_{1},\bar{E}_{1}\right\rangle\bar{R}(\bar{E}_{1},\bar{E}_{i},\bar{E}_{i},\bar{E}_{1})\\
&=\frac{\tau}{f}\left\langle E_{1},\bar{e}_{0}\right\rangle\bar{R}(\bar{e}_{0},\bar{E}_{i},\bar{E}_{i},\bar{e}_{0})
-\tau\frac{\left\langle\nu,\bar{E}_{1}\right\rangle^{2}}{\Lambda_{1}}\bar{R}(\bar{E}_{1},\bar{E}_{i},\bar{E}_{i},\bar{E}_{1})\\
&=\tau\left(\frac{1}{f}\left\langle E_{1},\bar{e}_{0}\right\rangle\bar{R}(\bar{e}_{0},\bar{E}_{i},\bar{E}_{i},\bar{e}_{0})
-\frac{\left\langle\nu,\bar{E}_{1}\right\rangle^{2}}{\Lambda_{1}}\bar{R}(\bar{E}_{1},\bar{E}_{i},\bar{E}_{i},\bar{E}_{1})\right),
\end{split}
\end{eqnarray}
where the third equality comes from $\left\langle V,\bar{E}_{1}\right\rangle=0$. Substituting \eqref{eq-5} into
\eqref{ieq-4} yields
\begin{eqnarray}\label{ieq-6}
\begin{split}
0\geq G^{ii}\left(\gamma'f'-\frac{1}{4}(\gamma')^{2}\tau^{2}\right)
-c_{1}\left(\sum\limits_{l=0}^{k-2}|G_{l}|+1\right)
-c_{2}\sum\limits_{i}G^{ii},
\end{split}
\end{eqnarray}
where $c_{2}>0$ depends on the $C^{0}$ bound of $f$ and the
curvature tensor $\bar{R}$. To continue our proof, we need to
estimate $G_{l}$ for $0\leq l\leq k-2$. Let $P\in \mathbb{R}$ be a
fixed positive number.
$(I)$ If $\frac{\sigma_{k}}{\sigma_{k-1}}\leq P$, then we get from
$\alpha_{l}\geq c_{l}$ that
\begin{eqnarray*}
|G_{l}|=\frac{\sigma_{l}}{\sigma_{k-1}}\leq\frac{1}{\alpha_{l}}\left(\frac{\sigma_{k}}{\sigma_{k-1}}+\alpha_{l}(u,x,t)\right)\leq c_{3}(P+1),
\end{eqnarray*}
where the constant $c_{3}>0$ depends on $c_{l}$ and
$|\alpha_{l}|_{C^{0}}$.
$(II)$ If $\frac{\sigma_{k}}{\sigma_{k-1}}>P$, then by Lemma
\ref{NM-ieq}, one has
\begin{eqnarray*}
|G_{l}|=\frac{\sigma_{l}}{\sigma_{k-1}} \leq
\frac{\sigma_{l}}{\sigma_{l+1}}\cdot\frac{\sigma_{l+1}}{\sigma_{l+2}}\cdots\frac{\sigma_{k-2}}{\sigma_{k-1}} \leq
c_{4}\left(\frac{\sigma_{k-1}}{\sigma_{k}}\right)^{k-1-l} \leq
c_{4}P^{-(k-1-l)},
\end{eqnarray*}
where the positive constant $c_{4}$ depends on $k$.
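For the reader's convenience, we sketch how Lemma \ref{NM-ieq} enters the middle step above (the precise constant is absorbed into $c_{4}$): the Newton--MacLaurin inequalities imply that, up to a constant depending only on $n$ and $k$, each quotient in the product is dominated by the last one, i.e.,
\begin{eqnarray*}
\frac{\sigma_{j}}{\sigma_{j+1}}\leq c(n,k)\frac{\sigma_{k-1}}{\sigma_{k}},\qquad l\leq j\leq k-2,
\end{eqnarray*}
and multiplying these $k-1-l$ bounds gives $|G_{l}|\leq c_{4}\left(\sigma_{k-1}/\sigma_{k}\right)^{k-1-l}$.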
So, $|G_{l}|$ can be bounded for any $0\leq l\leq k-2$. By the
definition of the operator $G$ and a direct computation, we have
$\sum_{i}G^{ii}\geq \frac{n-k+1}{k}$, and so we can choose a
sufficiently large $\alpha$ such that
\begin{eqnarray*}
0\geq G^{ii}\left[\gamma'f'-(\gamma')^{2}\tau^{2}\right].
\end{eqnarray*}
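The trace bound $\sum_{i}G^{ii}\geq\frac{n-k+1}{k}$ invoked here can be checked directly for the leading quotient $\sigma_{k}/\sigma_{k-1}$ (a sketch of ours, using the standard identity $\sum_{i}\sigma_{m}^{ii}=(n-m+1)\sigma_{m-1}$ and Newton's inequality $\sigma_{k}\sigma_{k-2}\leq\frac{(k-1)(n-k+1)}{k(n-k+2)}\sigma_{k-1}^{2}$):
\begin{eqnarray*}
\sum\limits_{i}\frac{\partial}{\partial\lambda_{i}}\frac{\sigma_{k}}{\sigma_{k-1}}
=\frac{(n-k+1)\sigma_{k-1}^{2}-(n-k+2)\sigma_{k}\sigma_{k-2}}{\sigma_{k-1}^{2}}
\geq (n-k+1)-\frac{(k-1)(n-k+1)}{k}=\frac{n-k+1}{k};
\end{eqnarray*}
the lower-order terms of $G$ are handled by the same kind of direct computation.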
Thus,
\begin{eqnarray*}
\gamma'f'\leq (\gamma')^{2}\tau^{2},
\end{eqnarray*}
which means
\begin{eqnarray*}
\tau \geq c_{5}
\end{eqnarray*}
for some positive constant $c_{5}$ depending on $n$, $k$, $c_{l}$,
$|\alpha_{l}|_{C^1}$, the $C^{0}$ bound of $f$ and the curvature
tensor $\bar{R}$. The conclusion of Lemma \ref{C1 estimate} follows
directly.
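To unpack the last implication (a sketch, assuming, as in the construction of the auxiliary function, that $\gamma'>0$ on the relevant range): dividing by $(\gamma')^{2}>0$ gives
\begin{eqnarray*}
\tau^{2}\geq\frac{f'}{\gamma'},
\end{eqnarray*}
and since $f'$ is bounded below by a positive constant on the compact range of $u$ while $\gamma'$ is bounded above (both bounds depending only on the quantities listed above), the right-hand side admits a positive lower bound $c_{5}^{2}$.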
\end{proof}
\begin{remark}
\rm{After several careful revisions of the manuscript, we prefer to number (by subscripts) nearly all the constants
in the $C^1$ and $C^{2}$ estimates; we believe that this reveals the relations among the constants clearly to readers.}
\end{remark}
\section{$C^2$ estimates} \label{S5}
This section is devoted to the $C^2$ estimates. Before giving them,
we need to make some preparations. First, we need the following
fact:
\begin{lemma}\label{C2-1}
Let $\mathcal{G}=\{\left( u(x),x\right)| x\in M^{n}\}$ be a
$(k-1)$-admissible solution of Eq. \eqref{equation 1.1} and
assume that $\alpha_{l}(u,x)\geq 0$ for $0\leq l\leq k-1$. Then we
have the following inequality:
\begin{eqnarray*}
G^{ij}h_{ijpp}\geq \nabla_p\nabla_p\alpha_{k-1}(u,x,t)-
\sum\limits_{l=0}^{k-2}\frac{1}{1+\frac{1}{k+1-l}}\frac{t(\nabla_p\alpha_l)^2}{\alpha_l}G_l
-\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_lG_l.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Differentiating Eq. \eqref{equation 1.1} once, we have
\begin{eqnarray*}
\nabla_p\alpha_{k-1}(u,x,t)=
G^{ij}h_{ijp}+\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G_l.
\end{eqnarray*}
Differentiating Eq. \eqref{equation 1.1} twice, we obtain
\begin{eqnarray*}
\nabla_p\nabla_p\alpha_{k-1}(u,x,t)=
G^{ij,rs}h_{ijp}h_{rsp}+G^{ij}h_{ijpp}
+2\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G^{ij}_{l}h_{ijp}
+\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_{l}G_l.
\end{eqnarray*}
Moreover, since the operator
$\left(\frac{\sigma_{k-1}}{\sigma_{l}}\right)^{\frac{1}{k-1-l}}$ is
concave for $0\leq l\leq k-2$, we have (see also (3.10) in
\cite{gz})
\begin{eqnarray*}
G^{ij,rs}h_{ijp}h_{rsp}\leq
\left(1+\frac{1}{k-1-l}\right)G_{l}^{-1}G^{ij}_{l}G^{rs}_{l}h_{ijp}h_{rsp}.
\end{eqnarray*}
Thus, in view of the fact that $G_{k}$ is concave in $\Gamma_{k-1}$, we have
\begin{eqnarray*}
\begin{split}
&\nabla_p\nabla_p\alpha_{k-1}(u,x,t)\\
\leq&\sum\limits_{l=0}^{k-2}t\alpha_{l}G_{l}^{ij,rs}h_{ijp}h_{rsp}+G^{ij}h_{ijpp}
+2\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G^{ij}_{l}h_{ijp}
+\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_{l}G_{l}\\
\leq&\sum\limits_{l=0}^{k-2}t\alpha_{l}G_{l}^{-1}\left(1+\frac{1}{k-1-l}\right)(G^{ij}_{l}h_{ijp})^{2}+G^{ij}h_{ijpp}+
2\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G^{ij}_{l}h_{ijp}
+\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_{l}G_{l}\\
=&\frac{k-l}{k-1-l}\sum\limits_{l=0}^{k-2}t\alpha_{l}G_{l}^{-1}\left(G^{ij}_{l}h_{ijp}+
\frac{1}{1+\frac{1}{k-1-l}}\frac{\nabla_p\alpha_{l}}{\alpha_{l}}G_{l}\right)^{2}+
\sum\limits_{l=0}^{k-2}\frac{1}{1+\frac{1}{k-1-l}}\frac{t(\nabla_p\alpha_{l})^{2}}{\alpha_{l}}G_{l}\\
&+G^{ij}h_{ijpp}+\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_{l}G_{l}\\
\leq&\sum\limits_{l=0}^{k-2}\frac{1}{1+\frac{1}{k-1-l}}\frac{t(\nabla_p\alpha_{l})^{2}}{\alpha_{l}}G_{l}+G^{ij}h_{ijpp}+\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_{l}G_{l},
\end{split}
\end{eqnarray*}
which completes the proof of Lemma \ref{C2-1}.
\end{proof}
We also need the following fact:
\begin{lemma}\label{C2-2}
Let $\mathcal{G}=\{\left( u(x),x\right)| x\in M^{n}\}$ be a
$(k-1)$-admissible solution of Eq. \eqref{equation 1.1} with the
position vector $V$ in $\bar{M}$. We have the following equality:
\begin{eqnarray*}
\begin{split}
&G^{ij}\tau_{ij}+\sum\limits_{k}\tau G^{ij}h_{ik}h_{kj}\\
=&\left(\nabla_p\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G_{l}
+\sum\limits_{p}G^{ij}\bar{R}_{0ijp}\right)\left\langle V,E_{p}\right\rangle
+f'\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right).
\end{split}
\end{eqnarray*}
\end{lemma}
\begin{proof}
By Lemma \ref{f-2}, we have
\begin{eqnarray*}
\tau_{ij}=-\tau\sum\limits_{k}h_{ik}h_{kj}+f'h_{ij}
+\sum\limits_{p}(h_{ijp}+\bar{R}_{0ijp})\left\langle V,E_{p}\right\rangle,
\end{eqnarray*}
which results in
\begin{eqnarray*}
G^{ij}\tau_{ij}=-\tau
G^{ij}\sum\limits_{k}h_{ik}h_{kj}+f'G^{ij}h_{ij}
+\sum\limits_{p}G^{ij}(h_{ijp}+\bar{R}_{0ijp})\left\langle V,E_{p}\right\rangle.
\end{eqnarray*}
Note that
\begin{eqnarray*}
G^{ij}h_{ij}=G-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}
=\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}
\end{eqnarray*}
and
\begin{eqnarray*}
G^{ij}h_{ijp}=\nabla_{p}\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t\nabla_{p}\alpha_{l}G_{l}.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\begin{split}
G^{ij}\tau_{ij}
=&\left(\nabla_p\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G_{l}
+\sum\limits_{p}G^{ij}\bar{R}_{0ijp}\right)\left\langle V,E_{p}\right\rangle\\
&+f'\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
-\sum\limits_{k}\tau G^{ij}h_{ik}h_{kj}.
\end{split}
\end{eqnarray*}
Therefore, we complete the proof.
\end{proof}
Now we begin to estimate the second fundamental form.
\begin{lemma}[\textbf{$C^{2}$ estimates}]\label{C2}
Assume that $k\geq 2$ and
\begin{eqnarray*}
\alpha_{l}(u,x)\geq c_{l}>0,\qquad \forall x\in M^{n}
\end{eqnarray*}
for $0\leq l\leq k-1$. If the $k$-admissible, closed graphic
hypersurface $\mathcal{G}=\{\left( u(x),x\right)| x\in M^{n}\}$
satisfies Eq. \eqref{equation 1.1} with the position vector $V$ in $\bar{M}$, then there exists a constant $C$ depending on
$n$, $k$, $c_{l}$, $|\alpha_{l}|_{C^{2}}$, $|\nabla u|_{C^{0}}$, the
$C^{0}$, $C^{1}$ bounds of $f$ and the curvature tensor $\bar{R}$
such that for $1\leq i\leq n$, the principal curvatures of
$\mathcal{G}$ at $V$ satisfy
\begin{eqnarray*}
|\lambda_{i}(V)|\leq C,\qquad \forall x\in M^{n}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Since $k\geq 2$, $\mathcal{G}$ is $2$-admissible, so for a sufficiently
large constant $c_{6}$ one has
\begin{eqnarray*}
|\lambda_{i}|\leq c_{6}H,
\end{eqnarray*}
where the positive constant $c_{6}$ depends on $n$, $k$. So, we only
need to estimate the mean curvature $H$ of $\mathcal{G}$. Take the
auxiliary function
\begin{eqnarray*}
W(x)=\log H-\log\tau.
\end{eqnarray*}
Assume that $x_{0}$ is the maximum point of $W$. Then at $x_{0}$,
one has
\begin{eqnarray}\label{W-1}
0=W_{i}=\frac{H_{i}}{H}-\frac{\tau_{i}}{\tau}
\end{eqnarray}
and
\begin{eqnarray}\label{W-2}
0\geq W_{ij}(x_{0})=\frac{H_{ij}}{H}-\frac{\tau_{ij}}{\tau}.
\end{eqnarray}
Choose a suitable coordinate system $\{x^{1},x^{2},\ldots,x^{n}\}$ in
the neighborhood of $X_{0}=\left(u(x_{0}),x_{0}\right)\in
\mathcal{G}$ such that the matrix $(h_{ij})_{n\times n}$ is diagonal
at $X_{0}$, i.e., $h_{ij}=h_{ii}\delta_{ij}$. This implies, at
$x_{0}$,
\begin{eqnarray}\label{W-ieq-1}
0\geq
G^{ij}W_{ij}(x_{0})=\sum\limits_{p=1}^{n}\frac{1}{H}G^{ii}h_{ppii}
-\frac{G^{ii}\tau_{ii}}{\tau}.
\end{eqnarray}
By \eqref{ricci-eq}, we can obtain
\begin{eqnarray*}
\begin{split}
h_{ppii}=&h_{iipp}+h_{pp}^{2}h_{ii}-h_{ii}^{2}h_{pp}+\bar{R}_{0iip;p}
+\bar{R}_{0pip;i}-2h_{pp}\bar{R}_{piip}\\
&+h_{ii}\bar{R}_{0i0i}+h_{pp}\bar{R}_{0ii0}
+h_{ii}\bar{R}_{0p0p}+h_{ii}\bar{R}_{0ii0}-2h_{ii}\bar{R}_{ipip}.
\end{split}
\end{eqnarray*}
Note that
\begin{eqnarray*}
G^{ij}h_{ij}=G-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}
=\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}.
\end{eqnarray*}
So, we have
\begin{eqnarray*}
\begin{split}
\sum\limits_{p}G^{ii}h_{ppii}=&\sum\limits_{p}G^{ii}\left(h_{iipp}+\bar{R}_{0iip;p}
+\bar{R}_{0pip;i}\right)-\sum\limits_{p}h_{pp}G^{ii}\left(h_{ii}^{2}+
2\bar{R}_{piip}-\bar{R}_{0ii0}\right)\\
&+\sum\limits_{p}G^{ii}h_{ii}\left(h_{pp}^{2}-2\bar{R}_{ipip}
+\bar{R}_{0i0i}+\bar{R}_{0p0p}+\bar{R}_{0ii0}\right)\\
\geq& \sum\limits_{p}G^{ii}h_{iipp}+\left(|A|^{2}-c_{8}\right)
\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
-c_{7}\\
&-HG^{ii}\left(h_{ii}^{2}+c_{9}\right),
\end{split}
\end{eqnarray*}
where the positive constant $c_{7}$ depends on the $C^{1}$ bound of
the curvature tensor $\bar{R}$, and the positive constants $c_{8}$,
$c_{9}$ depend on the $C^{0}$ bound of the curvature tensor
$\bar{R}$. Together with Lemma \ref{C2-1}, we know that
\eqref{W-ieq-1} becomes
\begin{eqnarray*}
\begin{split}
0\geq&\frac{1}{H}\sum\limits_{p=1}^{n}G^{ii}h_{iipp}+\frac{|A|^{2}-c_{8}}{H}
\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)-\frac{G^{ii}\tau_{ii}}{\tau}\\
&-\frac{c_{7}}{H}\sum\limits_{i}G^{ii}-G^{ii}(h_{ii}^{2}+c_{9})\\
\geq&\frac{1}{H}\sum\limits_{p=1}^{n}\left(\nabla_p\nabla_p\alpha_{k-1}(u,x,t)-
\sum\limits_{l=0}^{k-2}\frac{1}{1+\frac{1}{k+1-l}}\frac{t(\nabla_p\alpha_l)^2}{\alpha_l}G_l
-\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_lG_l\right)-\frac{G^{ii}\tau_{ii}}{\tau}\\
&+\frac{|A|^{2}-c_{8}}{H}\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
-\frac{c_{7}}{H}\sum\limits_{i}G^{ii}-G^{ii}(h_{ii}^{2}+c_{9}).
\end{split}
\end{eqnarray*}
By Lemma \ref{C2-2}, the above inequality becomes
\begin{eqnarray*}
\begin{split}
0\geq&\frac{1}{H}\sum\limits_{p=1}^{n}\left(\nabla_p\nabla_p\alpha_{k-1}(u,x,t)-
\sum\limits_{l=0}^{k-2}\frac{1}{1+\frac{1}{k+1-l}}\frac{t(\nabla_p\alpha_l)^2}{\alpha_l}G_l
-\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_lG_l\right)\\
&+\frac{|A|^{2}-c_{8}}{H}\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
-\frac{c_{7}}{H}\sum\limits_{i}G^{ii}-G^{ii}(h_{ii}^{2}+c_{9})\\
&-\frac{1}{\tau}\left(\nabla_p\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t\nabla_p\alpha_{l}G_{l}
+\sum\limits_{p}G^{ii}\bar{R}_{0iip}\right)\left\langle V,E_{p}\right\rangle\\
&-\frac{f'}{\tau}\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
+G^{ii}h_{ii}^{2}.
\end{split}
\end{eqnarray*}
Hence, we have
\begin{eqnarray*}
\begin{split}
0\geq&\frac{|A|^{2}}{H}\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
-\left(\frac{c_{7}}{H}+c_{9}\right)\sum\limits_{i}G^{ii}-\frac{\left\langle V,E_{p}\right\rangle}{\tau}
\sum\limits_{p}G^{ii}\bar{R}_{0iip}\\
&+\frac{1}{H}\sum\limits_{p=1}^{n}\left(\nabla_p\nabla_p\alpha_{k-1}(u,x,t)
-\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_lG_l\right)
-\frac{\left\langle V,E_{p}\right\rangle}{\tau}\left(\nabla_p\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t
\nabla_p\alpha_{l}G_{l}\right)\\
&-\left(\frac{c_{8}}{H}+\frac{f'}{\tau}\right)
\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right).
\end{split}
\end{eqnarray*}
A direct calculation implies
\begin{eqnarray}\label{nabla-ieq}
|\nabla_p\alpha_{k-1}(u,x,t)|\leq c_{10},\quad
|\nabla_p\nabla_p\alpha_{k-1}(u,x,t)|\leq c_{11}(1+H),
\end{eqnarray}
where the positive constant $c_{10}$ depends on
$|\alpha_{l}|_{C^{1}}$, and the positive constant $c_{11}$ depends
on $|\alpha_{l}|_{C^{2}}$. So
\begin{eqnarray*}
\begin{split}
&-\frac{1}{H}c_{12}\left(\sum\limits_{l=0}^{k-2}|G_{l}|+1\right)(H+1)
-c_{13}\left(\sum\limits_{l=0}^{k-2}|G_{l}|+1\right)\\
\leq&\frac{1}{H}\sum\limits_{p=1}^{n}\left(\nabla_p\nabla_p\alpha_{k-1}(u,x,t)
-\sum\limits_{l=0}^{k-2}t\nabla_p\nabla_p\alpha_lG_l\right)
-\frac{\left\langle V,E_{p}\right\rangle}{\tau}\left(\nabla_p\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}t
\nabla_p\alpha_{l}G_{l}\right)\\
&-\left(\frac{c_{8}}{H}+\frac{f'}{\tau}\right)
\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right),
\end{split}
\end{eqnarray*}
where the positive constant $c_{12}$ depends on $c_{8}$, $c_{10}$,
the $C^{1}$ bound of $f$, and the positive constant $c_{13}$ depends
on $c_{8}$, $c_{11}$, the $C^{1}$ bound of $f$. Then, together with
the fact $|A|^{2}\geq\frac{1}{n}H^{2}$, we have
\begin{eqnarray*}
\begin{split}
\frac{1}{n}&H\alpha_{k-1}(u,x,t)-\left(\frac{c_{7}}{H}+c_{14}\right)\sum\limits_{i}G^{ii}\\
&\leq\frac{|A|^{2}}{H}\left(\alpha_{k-1}(u,x,t)-\sum\limits_{l=0}^{k-2}(k-l)t\alpha_{l}G_{l}\right)
-\left(\frac{c_{7}}{H}+c_{9}\right)\sum\limits_{i}G^{ii}-\frac{\left\langle V,E_{p}\right\rangle}{\tau} \sum\limits_{p}G^{ii}\bar{R}_{0iip},
\end{split}
\end{eqnarray*}
where the positive constant $c_{14}$ depends on $c_{9}$, the $C^{0}$
bound of the curvature tensor $\bar{R}$. Combining the fact
$\sum_{i}G^{ii}\geq \frac{n-k+1}{k}$ with the above two
inequalities, we have
\begin{eqnarray*}
0\geq\frac{1}{n}H\alpha_{k-1}(u,x,t)
-\left(\frac{c_{7}}{H}+c_{14}\right)-\frac{1}{H}c_{12}\left(\sum\limits_{l=0}^{k-2}|G_{l}|+1\right)(H+1)
-c_{13}\left(\sum\limits_{l=0}^{k-2}|G_{l}|+1\right).
\end{eqnarray*}
Let us divide the rest of the proof into two cases.
Case I. If $\frac{\sigma_{k}}{\sigma_{k-1}}\leq H^{\frac{1}{k}}$, then
we get from $\alpha_{l}\geq c_{l}$ that
\begin{eqnarray*}
|G_{l}|=\frac{\sigma_{l}}{\sigma_{k-1}}\leq\frac{1}{\alpha_{l}}\left(\frac{\sigma_{k}}{\sigma_{k-1}}+\alpha_{l}(u,x,t)\right)\leq
c_{15}(H^{\frac{1}{k}}+1),
\end{eqnarray*}
where the positive constant $c_{15}$ depends on $c_{l}$ and
$|\alpha_{l}|_{C^{0}}$. Thus, we get a contradiction when $H$ is
large enough, which implies $H\leq C$.
Case II. If $\frac{\sigma_{k}}{\sigma_{k-1}}> H^{\frac{1}{k}}$, then
by Lemma \ref{NM-ieq}, one has
\begin{eqnarray*}
|G_{l}|=\frac{\sigma_{l}}{\sigma_{k-1}} \leq
\frac{\sigma_{l}}{\sigma_{l+1}}\cdot\frac{\sigma_{l+1}}{\sigma_{l+2}}\cdots\frac{\sigma_{k-2}}{\sigma_{k-1}} \leq
c_{16}\left(\frac{\sigma_{k-1}}{\sigma_{k}}\right)^{k-1-l} \leq
c_{16}H^{-\frac{k-1-l}{k}},
\end{eqnarray*}
where the constant $c_{16}>0$ depends on $k$. In this case, we can
also derive $H\leq C$ easily.
In sum, the conclusion of Lemma \ref{C2} follows directly by using
the fact $|\lambda_{i}|\leq c_{6}H$.
\end{proof}
\section{Existence} \label{S6}
In this section, we use the degree theory for nonlinear elliptic
equations developed in \cite{L} to prove Theorem \ref{maintheorem}.
After establishing the a priori estimates (see Lemmas \ref{C0 estimate},
\ref{C1 estimate} and \ref{C2}), we know that Eq.
\eqref{equation 1.1} is uniformly elliptic. By \cite{E}, \cite{K}
and the Schauder estimates, we have
\begin{eqnarray}\label{Ex-1}
|u|_{C^{4,\alpha}(M^{n})}\leq C
\end{eqnarray}
for any $k$-convex solution $\mathcal{G}$ of the equation
\eqref{equation 1.1}. Define
\begin{eqnarray*}
C_{0}^{4,\alpha}(M^{n})=\{u\in C^{4,\alpha}(M^{n}):
\mathcal{G}=\{\left(u(x),x\right)|x\in M^{n}\}~~{\mathrm{is}}~~
{k\mathrm{-convex}}\}.
\end{eqnarray*}
Let us consider the map
\begin{eqnarray*}
F(\cdot;t):C_{0}^{4,\alpha}(M^{n})\rightarrow C^{2,\alpha}(M^{n}),
\end{eqnarray*}
which is defined by
\begin{eqnarray*}
F(u,x,t)=\frac{\sigma_{k}(\kappa(V))}{\sigma_{k-1}(\kappa(V))}-\sum\limits_{l=0}^{k-2}t
\alpha_{l}(u,x)\frac{\sigma_{l}(\kappa(V))}{\sigma_{k-1}(\kappa(V))}-\alpha_{k-1}(u,x,t).
\end{eqnarray*}
Set
\begin{eqnarray*}
\mathcal{O}_{R}=\{u\in C_{0}^{4,\alpha}(M^{n}):|u|_{C^{4,\alpha}(M^{n})}<R\},
\end{eqnarray*}
which clearly is an open set in $C_{0}^{4,\alpha}(M^{n})$. Moreover,
if $R$ is sufficiently large, $F(u,x,t)=0$ has no solution on
$\partial\mathcal{O}_{R}$ by the a priori estimate established in
\eqref{Ex-1}. Therefore, the degree
$\deg\left(F(\cdot;t),\mathcal{O}_{R},0\right)$ is well-defined for
$0\leq t\leq 1$. Using the homotopy invariance of the degree, we
have
\begin{eqnarray*}
\deg(F(\cdot;1),\mathcal{O}_{R},0)=\deg(F(\cdot;0),\mathcal{O}_{R},0).
\end{eqnarray*}
Lemma \ref{uni-sol} shows that $u=u_{0}$ is the unique solution of
the above equation for $t=0$. By a direct calculation, one has
\begin{eqnarray*}
F(su_{0},x;0)=[1-\varphi(su_{0})]\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'(su_{0})}{f(su_{0})}.
\end{eqnarray*}
Using the fact $\varphi(u_{0})=1$, we have
\begin{eqnarray*}
\delta_{u_{0}}F(u_{0},x;0)=\frac{d}{ds}\Bigg|_{s=1}F(su_{0},x;0)=
-\varphi'(u_{0})\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'(u_{0})}{f(u_{0})}>0,
\end{eqnarray*}
where $\delta F(u_{0},x;0)$ is the linearized operator of $F$ at
$u_{0}$. Clearly, $\delta F(u_{0},x;0)$ has the form
\begin{eqnarray*}
\delta_{\omega}F(u_{0},x;0)=-a^{ij}\omega_{ij}+b^{i}\omega_{i}
-\varphi'(u_{0})\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'(u_{0})}{f(u_{0})}\omega,
\end{eqnarray*}
where $(a^{ij})_{n\times n}$ is a positive definite matrix. Since
$-\varphi'(u_{0})\frac{\sigma_{k}(e)}{\sigma_{k-1}(e)}\frac{f'(u_{0})}{f(u_{0})}>0$,
$\delta F(u_{0},x;0)$ is an invertible operator. Therefore,
\begin{eqnarray*}
\deg(F(\cdot;1),\mathcal{O}_{R},0)=\deg(F(\cdot;0),\mathcal{O}_{R},0)=\pm 1,
\end{eqnarray*}
which implies that we can obtain a solution at $t=1$. This finishes
the proof of Theorem \ref{maintheorem}.
\vspace {5 mm}
\section*{Acknowledgments}
This work is partially supported by the NSF of China (Grant Nos.
11801496 and 11926352), the Fok Ying-Tung Education Foundation
(China) and the Hubei Key Laboratory of Applied Mathematics (Hubei
University). The authors sincerely thank Mr. Agen Shang and Prof.
Qiang Tu for sending them the digital version of the reference \cite{st1}.
\vspace {1 cm}
\begin{thebibliography}{50}
\setlength{\itemsep}{0pt} \small
\bibitem{ajb} F. Andrade, J. Barbosa, J. de Lira, \emph{Closed Weingarten hypersurfaces in warped product manifolds}, Indiana Univ. Math. J. {\bf 58} (2009)
1691--1718.
\bibitem{cst} L. Chen, A.-G. Shang, Q. Tu, \emph{A class of prescribed Weingarten curvature equations
in Euclidean space}, Commun. Partial Differential Equations {\bf 46}(7) (2021) 1326--1343.
\bibitem{clw} D. G. Chen, H. Z. Li, Z. Z. Wang, \emph{Starshaped compact hypersurfaces with prescribed
Weingarten curvature in warped product manifolds}, Calc. Var. Partial Differential Equations {\bf 57} (2018), Article No. 42.
\bibitem{E} L. Evans, \emph{Classical solutions of fully nonlinear, convex, second-order elliptic equations}, Commun.
Pure Appl. Math. {\bf 35} (1982) 333--363.
\bibitem{fmi} P. Freitas, J. Mao, I. Salavessa, \emph{Spherical symmetrization and the first eigenvalue of
geodesic disks on manifolds}, Calc. Var. Partial Differential Equations {\bf 51} (2014) 701--724.
\bibitem{fy1} J. X. Fu, S. T. Yau, \emph{A Monge-Amp\`{e}re type equation motivated by string
theory}, Commun. Anal. Geom. {\bf 15}(1) (2007) 29--76.
\bibitem{fy2} J. X. Fu, S. T. Yau, \emph{The theory of superstring with flux on non-K\"{a}hler manifolds
and the complex Monge-Amp\`{e}re equation}, J. Differential Geom. {\bf 78}(3) (2008) 369--428.
\bibitem{gl} P. F. Guan, J. Li, \emph{A mean curvature type flow in space forms}, Int. Math. Res. Not. {\bf 13} (2015)
4716--4740.
\bibitem{gm} P. F. Guan, X. N. Ma, \emph{The Christoffel-Minkowski problem I: Convexity of solutions of a Hessian
equation}, Invent. Math. {\bf 151}(3) (2003) 553--577.
\bibitem{grw} P. F. Guan, C. Y. Ren, Z. Z. Wang, \emph{Global $C^{2}$-estimates for convex solutions of curvature
equations}, Commun. Pure Appl. Math. {\bf 68} (2015) 1287--1325.
\bibitem{gz} P. F. Guan, X. W. Zhang, \emph{A class of curvature type
equations}, available online at arXiv:1909.03645.
\bibitem{KK} N. N. Katz, K. Kondo, \emph{Generalized space forms},
Trans. Amer. Math. Soc. {\bf 354} (2002) 2279--2284.
\bibitem{K} N. Krylov, \emph{Boundedly inhomogeneous elliptic and parabolic equations in a domain}, Izv. Akad. Nauk
SSSR Ser. Mat. {\bf 47} (1983) 75--108.
\bibitem{nk} N. Krylov, \emph{On the general notion of fully nonlinear second order elliptic
equation}, Trans. Amer. Math. Soc. {\bf 347}(3) (1995) 857--895.
\bibitem{L} Y. Y. Li, \emph{Degree theory for second order nonlinear elliptic operators and its applications},
Commun. Partial Differential Equations {\bf 14} (1989) 1541--1578.
\bibitem{mt1} M. Lin, N. S. Trudinger, \emph{On some inequalities for elementary symmetric functions}, Bull. Aust. Math. Soc. {\bf 50} (1994)
317--326.
\bibitem{lmwz} W. Lu, J. Mao, C. X. Wu, L. Z. Zeng, \emph{Eigenvalue estimates for the drifting Laplacian and the
$p$-Laplacian on submanifolds of warped products}, Applicable Analysis {\bf 100}(11) (2021) 2275--2300.
\bibitem{m1-1} J. Mao, \emph{Eigenvalue inequalities for the $p$-Laplacian on a Riemannian
manifold and estimates for the heat kernel}, J. Math. Pures Appl. {\bf 101} (2014) 372--393.
\bibitem{m1-2} J. Mao, \emph{Volume comparisons for manifolds with radial curvature bounded}, Czech. Math. J. {\bf 66} (2016) 71--86.
\bibitem{mdw} J. Mao, F. Du, C. X. Wu, \emph{Eigenvalue Problems on
Manifolds}, Science Press, Beijing, 2017.
\bibitem{m1} J. Mao, \emph{Geometry and topology of manifolds with integral radial curvature
bounds}, available online at arXiv:1910.12192.
\bibitem{bon} B. O'Neill, \emph{Semi-Riemannian Geometry with Applications to
Relativity}, Pure and Applied Mathematics, vol. 103, Academic Press, San Diego, 1983.
\bibitem{PP} P. Petersen, \emph{Riemannian Geometry}, 2nd edn., Graduate Texts in Mathematics, vol. 171, Springer, New York, 2006.
\bibitem{ppz1} D. H. Phong, S. Picard, X. W. Zhang, \emph{The Fu-Yau equation with negative slope
parameter}, Invent. Math. {\bf 209}(2) (2017) 541--576.
\bibitem{ppz2} D. H. Phong, S. Picard, X. W. Zhang, \emph{On estimates for the Fu-Yau generalization of a
Strominger system}, J. Reine Angew. Math. {\bf 2019}(751) (2019) 243--274.
\bibitem{ppz3} D. H. Phong, S. Picard, X. W. Zhang, \emph{Fu-Yau Hessian
Equations}, J. Differential Geom. {\bf 118}(1) (2021) 147--187.
\bibitem{st1} A.-G. Shang, Q. Tu, \emph{A class of fully nonlinear equations with linear combination of $\sigma_{k}$-curvature
in hyperbolic space}, preprint.
\bibitem{t2} N. S. Trudinger, \emph{The Dirichlet problem for the prescribed curvature equations}, Arch. Ration. Mech. Anal. {\bf 111} (1990)
153--179.
\bibitem{ywmd} Y. Zhao, C. X. Wu, J. Mao, F. Du, \emph{Eigenvalue comparisons in Steklov eigenvalue problem and some other eigenvalue estimates},
Revista Matem\'{a}tica Complutense {\bf 33}(2) (2020) 389--414.
\end{thebibliography}
\end{document}
\begin{document}
\begin{abstract}
We identify those elements of the homeomorphism group of the circle that can be expressed as a composite of two involutions.
\end{abstract}
\maketitle
\section{Introduction}\label{S: intro}
We describe an element $g$ of a group $G$ as \emph{reversible} in $G$ if it is conjugate in $G$ to its own inverse. We say that $g$ is \emph{strongly reversible} in $G$ if there exists an involution $\tau$ in $G$ such that $\tau g \tau = g^{-1}$. This language has developed from the theory of finite groups, where the terms \emph{real} and \emph{strongly real} replace \emph{reversible} and \emph{strongly reversible}. (The word \emph{real} is used because an element $g$ of a finite group is reversible if and only if each irreducible character of $G$ takes a real value when applied to $g$.) Notice that $g$ is strongly reversible if and only if it can be expressed as a composite of two involutions. The strongly reversible elements of the homeomorphism group of the real line were determined by Jarczyk and Young; see \cite{Ja02a,OF04,Yo94}. The purpose of this paper is to determine the strongly reversible maps in the group of homeomorphisms of the circle.
Let \ensuremath{\mathbb{S}} denote the unit circle in $\mathbb{R}^2$ centred on the origin. Denote by \ensuremath{\textup{H}(\mathbb{S})} the group of homeomorphisms of $\ensuremath{\mathbb{S}}$. There is a subgroup \ensuremath{\textup{H}^{+}(\mathbb{S})} of \ensuremath{\textup{H}(\mathbb{S})} consisting of orientation preserving homeomorphisms. The subgroup \ensuremath{\textup{H}^{+}(\mathbb{S})} has a single distinct coset \ensuremath{\textup{H}^{-}(\mathbb{S})} in \ensuremath{\textup{H}(\mathbb{S})} which consists of orientation reversing homeomorphisms. We classify the strongly reversible maps in both groups \ensuremath{\textup{H}^{+}(\mathbb{S})} and $\ensuremath{\textup{H}(\mathbb{S})}$. A classification of the reversible elements of \ensuremath{\textup{H}^{+}(\mathbb{S})} and \ensuremath{\textup{H}(\mathbb{S})} can be extracted from a conjugacy classification in these two groups. We describe the conjugacy classes of \ensuremath{\textup{H}^{+}(\mathbb{S})} and \ensuremath{\textup{H}(\mathbb{S})} in \S\ref{S: conjugacy}, and comment briefly on reversibility.
For points $a$ and $b$ in $\ensuremath{\mathbb{S}} $, we write $(a,b)$ to indicate the open anticlockwise interval from $a$ to $b$ in $\mathbb{S}$. Let $[a,b]$ denote the closure of $(a,b)$. For a proper open interval $I$ in $\mathbb{S}$, we say that \emph{$u<v$ in $I$} if $(u,v)\subset I$. To classify the strongly reversible maps in \ensuremath{\textup{H}(\mathbb{S})} p we need the notion of the \emph{signature} of an orientation preserving homeomorphism which has a fixed point. If $f$ is such a homeomorphism, then each point $x$ in $\mathbb{S}$ is either a fixed point of $f$ or else it lies in an open interval component $I$ in the complement of the fixed point set of $f$. The signature $\Delta_f$ of $f$ is the function from $\ensuremath{\mathbb{S}} $ to $\{-1,0,1\}$ given by the equation
\[
\Delta_f(x) =
\begin{cases}
1 & \text{if $x< f(x)$ in $I$},\\
0 & \text{if $f(x)=x$},\\
-1 & \text{if $f(x)< x$ in $I$}.
\end{cases}
\]
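As a concrete illustration, the signature can be computed numerically. The sketch below is a hypothetical example (not taken from the text) in which the circle is parametrised as $\mathbb{R}/\mathbb{Z}$ and $f$ fixes exactly the points $0$ and $\tfrac12$:

```python
# A numerical sketch of the signature Delta_f, assuming the circle is
# parametrised as R/Z (so "anticlockwise" means increasing x mod 1).
# The map f is a hypothetical example with fixed point set {0, 1/2}.

def f(x):
    x %= 1.0
    if x <= 0.5:
        return x + x * (0.5 - x)          # anticlockwise motion on (0, 1/2)
    return x - (x - 0.5) * (1.0 - x)      # clockwise motion on (1/2, 1)

def signature(g, x, eps=1e-12):
    """Delta_g(x): 0 at a fixed point, otherwise the direction of motion.

    The comparison y > x is valid here because neither component of the
    complement of fix(f) wraps past the point 0.
    """
    x %= 1.0
    y = g(x)
    if abs(y - x) < eps:
        return 0
    return 1 if y > x else -1

# The half-turn h has rotation number 1/2 and Delta_f = -Delta_f o h,
# the condition appearing in the classification below.
h = lambda x: (x + 0.5) % 1.0
```

Here $\Delta_f$ equals $1$ on $(0,\tfrac12)$ and $-1$ on $(\tfrac12,1)$, and the half-turn $h$ satisfies $\Delta_f=-\Delta_f\circ h$.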
The strongly reversible maps in $\textup{H}^{+}(\mathbb{S})$ are classified according to the next theorem, proven in \S\ref{S: two+}.
\begin{theorem}\label{T: two+}
An element $f$ of $\textup{H}^{+}(\mathbb{S})$ is strongly reversible if and only if either it is an involution or else it has a fixed point and there is an orientation preserving homeomorphism $h$ of rotation number $\frac12$ such that $\Delta_f = -\Delta_f\circ h$.
\end{theorem}
Elements of $\textup{H}^{+}(\mathbb{S})$ that cannot be expressed as a composite of two involutions (elements that are not strongly reversible) can nevertheless be expressed as a composite of three involutions. The next theorem is proven in \S\ref{S: three+}.
\begin{theorem}\label{T: three+}
Each member of $\textup{H}^{+}(\mathbb{S})$ can be expressed as a composite of three orientation preserving involutions.
\end{theorem}
We move on to describe the strongly reversible elements in the larger group $\textup{H}(\mathbb{S})$. There are orientation preserving homeomorphisms of $\mathbb{S}$ which are strongly reversible in $\textup{H}(\mathbb{S})$, but not strongly reversible in $\textup{H}^{+}(\mathbb{S})$. Before we state our theorem on strong reversibility in $\textup{H}(\mathbb{S})$, we introduce some notation which is explained in more detail in \S\ref{S: conjugacy}. For a homeomorphism $f$, the \emph{degree} of $f$, denoted $\textup{deg}(f)$, is equal to $1$ if $f$ preserves orientation, and $-1$ if $f$ reverses orientation. Let $\rho(f)$ denote the rotation number of an orientation preserving homeomorphism $f$. The rotation number is an element of $[0,1)$. If $\rho(f)=0$ then $f$ has a fixed point. If $\rho(f)$ is rational then $f$ has a periodic point, in which case the minimal period of $f$ (the smallest positive integer $n$ such that $f^{n}$ has fixed points) is denoted by $n_f$. If $\rho(f)$ is irrational then we denote the minimal set of $f$, that is, the smallest non-trivial $f$ invariant compact subset of $\mathbb{S}$, by $K_f$. This set is either a perfect and nowhere dense subset of $\mathbb{S}$ (a Cantor set) or else equal to $\mathbb{S}$. In the former case we define $I_f$ to be the set of inaccessible points of $K_f$, and in the latter case we define $I_f$ to be $\mathbb{S}$. There is a continuous surjective map $w_f:\mathbb{S}\rightarrow\mathbb{S}$ of degree $1$ with the following properties: (i) $w_f$ maps $I_f$ homeomorphically onto $w_f(I_f)$; (ii) $w_f$ maps each closed interval component of the complement of $I_f$ to a point; (iii) $w_ff=R_\theta w_f$, where $R_\theta$ is the anticlockwise rotation by $\theta=2\pi\rho(f)$. The next theorem is proven in \S\ref{S: two}.
\begin{theorem}\label{T: two}
Let $f$ be an orientation preserving member of $\textup{H}(\mathbb{S})$. Either
\begin{enumerate}
\item[(i)] $\rho(f)=0$, in which case $f$ is strongly reversible if and only if there is a homeomorphism $h$ such that $\Delta_f = -\textup{deg}(h)\cdot \Delta_f\circ h$, and either $h$ preserves orientation and has rotation number $\frac12$ or else $h$ reverses orientation;
\item[(ii)] $\rho(f)$ is non-zero and rational, in which case $f$ is strongly reversible if and only if $f^{n_f}$ is strongly reversible by an orientation reversing involution;
\item[(iii)] $\rho(f)$ is irrational, in which case $f$ is strongly reversible if and only if $w_f(I_f)$ has a reflectional symmetry.
\end{enumerate}
\end{theorem}
Using Theorem~\ref{T: two} we also show that an orientation preserving circle homeomorphism is strongly reversible by an orientation \emph{reversing} involution if and only if it is reversible by an orientation \emph{reversing} homeomorphism.
It remains to state a result on strong reversibility of orientation reversing homeomorphisms. Each orientation reversing homeomorphism has exactly two fixed points. The next theorem is proven in \S\ref{S: two-}.
\begin{theorem}\label{T: two-}
An orientation reversing homeomorphism $f$ is strongly reversible if and only if there is an orientation reversing homeomorphism $h$ that interchanges the pair of fixed points of $f$ and satisfies $h f^2h^{-1} = f^{-2}$.
\end{theorem}
Fine and Schweigert proved in \cite[Theorem 25]{FiSc55} that each member of \ensuremath{\textup{H}(\mathbb{S})} can be expressed as a composite of three involutions (this result follows quickly from Theorems \ref{T: three+} and \ref{T: two}).
We now describe the structure of this paper. Section \ref{S: conjugacy} contains no original material; it consists of a brief review of the conjugacy classification in $\textup{H}^{+}(\mathbb{S})$ and $\textup{H}(\mathbb{S})$. All subsequent sections contain new results. Sections \ref{S: two+} and \ref{S: three+} concern $\textup{H}^{+}(\mathbb{S})$, and Sections \ref{S: two} and \ref{S: two-} concern $\textup{H}(\mathbb{S})$.
\section{Conjugacy classification}\label{S: conjugacy}
Two conjugacy invariants which can be used to determine the conjugacy classes in \ensuremath{\textup{H}(\mathbb{S})} are the rotation number and signature, introduced in \S\ref{S: intro}. We describe here only those properties of these two quantities that we will use. For more information on rotation numbers, see \cite{Gh01}; for more information on signatures, see \cite{FiSc55}.
For an orientation preserving homeomorphism $f$, choose any point $x$ in $\mathbb{S}$, and let $\theta_n$ be the angle in $[0,2\pi)$ measured anticlockwise between $f^{n-1}(x)$ and $f^n(x)$. The \emph{rotation number} of $f$, denoted $\rho(f)$, is the unique number in $[0,1)$ such that the expression
\[
(\theta_1+\dots+\theta_n) - 2\pi n\rho(f)
\]
is bounded for all $n$. The quantity $\rho(f)$ is independent of $x$. The rotation number is invariant under conjugation in $\textup{H}^{+}(\mathbb{S})$, and $\rho(f^n)=n\rho(f) \pmod {1}$ for each integer $n$.
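This definition can be tested numerically. The following sketch (an illustration, with the circle parametrised as $\mathbb{R}/\mathbb{Z}$, so that the increments $(f(x)-x)\bmod 1$ play the role of $\theta_n/2\pi$) estimates $\rho(f)$ by averaging those increments:

```python
import math

def rotation_number(f, x0=0.0, n=20000):
    # Average of the anticlockwise increments (f(x) - x) mod 1, which
    # play the role of the angles theta_n / (2*pi) in the definition.
    x, total = x0, 0.0
    for _ in range(n):
        y = f(x)
        total += (y - x) % 1.0
        x = y
    return (total / n) % 1.0

rigid = lambda x: (x + 0.3) % 1.0    # the rotation by the angle 2*pi*0.3
# A map with a fixed point at 0, hence rotation number 0:
parabolic = lambda x: (x + 0.1 * math.sin(math.pi * x) ** 2) % 1.0
```

For the rigid rotation by $0.3$ the estimate returns $0.3$ (up to rounding), and the identity $\rho(f^2)=2\rho(f)\pmod 1$ can be checked in the same way.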
A straightforward consequence of the definition of $\rho(f)$ is that $\rho(f)=0$ if and only if $f$ has a fixed point. In this case, $\mathbb{S}$ can be partitioned into a closed set $\text{fix}(f)$, consisting of fixed points of $f$, and a countable collection of open intervals on each of which $f$ is free of fixed points. Now suppose that $\rho(f)=p/q$, where $p$ and $q$ are coprime positive integers. Then $f$ has periodic points, that is, there is a positive integer $n$ for which $f^n$ has fixed points. The smallest such $n$, denoted $n_f$, is the \emph{minimal period} of $f$ and is equal to $q$.
The remaining possibility is that $\rho(f)$ is irrational. In this case we define $K_f$ to be the unique minimal element of the poset consisting of the non-empty $f$ invariant compact subsets of $\mathbb{S}$, ordered by inclusion. We describe $K_f$ as the \emph{minimal set} of $f$. Either $K_f=\mathbb{S}$ or else $K_f$ is a perfect subset of $\mathbb{S}$ with empty interior---a Cantor set. In the latter case
there is a sequence of open intervals $(a_i,b_i)$, for $i=1,2,\dotsc$, such that $[a_i,b_i]\cap [a_j,b_j]=\emptyset$ when $i\neq j$, and $K_f$ is the complement of $\bigcup_{i=1}^\infty (a_i,b_i)$. The set $I_f$ of \emph{inaccessible points} of $K_f$ is the complement of
$\bigcup_{i=1}^\infty [a_i,b_i]$. If $K_f=\mathbb{S}$ then we define $I_f=\mathbb{S}$. There is a continuous surjective map $w_f$ of $\mathbb{S}$ of degree $1$ such that $w_ff=R_\theta w_f$, where $\theta =2\pi\rho(f)$. The map $w_f$ is a homeomorphism when restricted to $I_f$, and it maps each interval $[a_i,b_i]$ to a single point. The map $w_f$
is unique up to post composition by rotations.
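For example, suppose that $f$ is itself the rigid rotation $R_\theta$ with $\theta=2\pi\rho(f)$ and $\rho(f)$ irrational. Every orbit of $f$ is then dense in $\mathbb{S}$, so
\[
K_f=\mathbb{S},\qquad I_f=\mathbb{S},\qquad w_f=\textup{id},
\]
and the relation $w_ff=R_\theta w_f$ holds trivially. The Cantor case arises, for example, for Denjoy's well-known examples of circle diffeomorphisms with irrational rotation number and no dense orbit.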
Now suppose that $f$ is an orientation preserving homeomorphism which has a fixed point. The signature of $f$ was defined in \S\ref{S: intro}. The signature $\Delta_f$ takes the value $0$ on $\text{fix}(f)$, and elsewhere it takes either the value $-1$ or the value $1$. Useful properties of the signature are encapsulated in the next elementary lemma.
\begin{lemma}\label{L: handy}
If $f$ is a member of $\textup{H}^{+}(\mathbb{S})$ with a fixed point, and $h$ is a member of $\textup{H}(\mathbb{S})$, then
\begin{enumerate}
\item[(i)] $\Delta_{hfh^{-1}}=\textup{deg}(h)\cdot\Delta_f\circ h^{-1}$,
\item[(ii)] $\Delta_{f^{-1}}=-\Delta_{f}$.
\end{enumerate}
\end{lemma}
We are now in a position to state criteria which determine whether two circle homeomorphisms are conjugate. The results are stated in such a way that one can deduce from them when two orientation preserving circle homeomorphisms are conjugate in each of the groups $\textup{H}^{+}(\mathbb{S})$ and $\textup{H}(\mathbb{S})$. The result on irrational rotation numbers follows from \cite[Theorem 2.3]{Ma70}. The result on orientation preserving homeomorphisms with fixed points is similar to \cite[Theorem 10]{FiSc55}. The other two theorems are well known; they can both be proven directly. Recall that the degree of a circle homeomorphism is $1$ if the map preserves orientation, and $-1$ if it reverses orientation.
\begin{theorem}\label{T: conjFix}
Two orientation preserving circle homeomorphisms $f$ and $g$, each of which has a fixed point, are conjugate by a homeomorphism of degree $\epsilon$ if and only if there is a homeomorphism $h$ of degree $\epsilon$ such that $\Delta_g = \epsilon\Delta_f\circ h$.
\end{theorem}
Note in particular that a map $g$ in $\textup{H}^{+}(\mathbb{S})$ with fixed points is conjugate in $\textup{H}^{+}(\mathbb{S})$ to all of its positive powers.
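For positive powers this can be seen directly: for $n\geq 1$ the maps $g$ and $g^n$ move points in the same direction on each component interval of the complement of $\textup{fix}(g)$, so
\[
\textup{fix}(g^n)=\textup{fix}(g)\qquad\text{and}\qquad \Delta_{g^n}=\Delta_g,
\]
and Theorem~\ref{T: conjFix} applies with $\epsilon=1$ and $h$ the identity map.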
\begin{theorem}\label{T: conjPer}
Two orientation preserving circle homeomorphisms $f$ and $g$, both of which have the same non-zero rational rotation number, are conjugate by a homeomorphism of degree $\epsilon$ if and only if $f^{n_f}$ is conjugate to $g^{n_g}$ by a homeomorphism of degree $\epsilon$.
\end{theorem}
Since $f$ and $g$ have the same non-zero rational rotation number, the integers $n_f$ and $n_g$ in Theorem~\ref{T: conjPer} are equal.
Recall that an orthogonal map of the circle of degree $1$ is a rotation, and an orthogonal map of the circle of degree $-1$ is a reflection in a line through the origin.
\begin{theorem}\label{T: conjIrrat}
Two orientation preserving circle homeomorphisms $f$ and $g$, both of which have the same irrational rotation number, are conjugate by a homeomorphism of degree $\epsilon$ if and only if there is an orthogonal map of degree $\epsilon$ that maps $w_f(I_f)$ to $w_g(I_g)$.
\end{theorem}
It remains to consider conjugacy between orientation reversing maps.
\begin{theorem}\label{T: conj-}
Two orientation reversing circle homeomorphisms $f$ and $g$ are conjugate in $\ensuremath{\textup{H}(\mathbb{S})} $ if and only if $f^2$ and $g^2$ are conjugate in $\ensuremath{\textup{H}(\mathbb{S})} $ by a homeomorphism that maps the pair of fixed points of $f$ to the pair of fixed points of $g$.
\end{theorem}
It follows from Theorems~\ref{T: conjPer} and \ref{T: conj-} that all non-trivial involutions in $\textup{H}^{+}(\mathbb{S})$ are conjugate, and all orientation reversing involutions in $\textup{H}(\mathbb{S})$ are conjugate. (These statements can easily be seen directly.)
We briefly remark on the reversible elements in $\textup{H}^{+}(\mathbb{S})$ and $\textup{H}(\mathbb{S})$.
Suppose that $f$ and $h$ are members of $\textup{H}^{+}(\mathbb{S})$ such that $hfh^{-1}=f^{-1}$. Then
\[
\rho(f^{-1})=\rho(hfh^{-1}) = \rho(f)\pmod {1}.
\]
But $\rho(f^{-1})=-\rho(f)\pmod {1}$, hence $\rho(f)$ is equal to either $0$ or $\frac{1}{2}$. One can construct examples of reversible elements in $\textup{H}^{+}(\mathbb{S})$ with either of these two rotation numbers (there are examples at the end of \S\ref{S: two}). On the other hand, if $h$ reverses orientation and still $hfh^{-1}=f^{-1}$, then
\[
\rho(hfh^{-1}) = -\rho(f)\pmod {1},
\]
so, in this case, the rotation number tells us nothing about reversibility. Notice that if we compare Theorems~\ref{T: conjFix}, \ref{T: conjPer}, and \ref{T: conjIrrat} with Theorem~\ref{T: two}, we see that an orientation preserving map $f$ is reversible by an orientation \emph{reversing} map if and only if $f$ is strongly reversible by an orientation \emph{reversing} involution. There are, however, orientation preserving homeomorphisms that are not strongly reversible in $\textup{H}(\mathbb{S})$, but are nevertheless reversible by orientation \emph{preserving} maps; one example is given at the end of \S\ref{S: two}.
\section{Proof of Theorem~\ref{T: two+}}\label{S: two+}
The following two theorems deal with strong reversibility in $\textup{H}^{+}(\mathbb{S})$ for the two rotation numbers $0$ and $\frac{1}{2}$ separately.
A result similar to Theorem~\ref{T: rot0}, below, has been proven by Jarczyk \cite{Ja02a} and Young \cite{Yo94} for homeomorphisms of the real line. We use an elementary lemma in the proof of Theorem~\ref{T: rot0}.
\begin{lemma}\label{L: elementary}
If $f$ and $g$ are orientation preserving homeomorphisms such that $\Delta_f=\Delta_g$ then there is an orientation preserving homeomorphism $k$ that fixes each of the fixed points of $f$ and $g$ such that $kfk^{-1}=g$.
\end{lemma}
\begin{proof}
We define a homeomorphism $k$ as follows. At each fixed point $x$ of $f$ and $g$, define $k(x)=x$. On each open interval component $(a,b)$ of $\mathbb{S}\setminus\text{fix}(f)$, the signature function takes either the value $1$ for both functions $f$ and $g$, or else the value $-1$ for both functions. In either case the restrictions of $f$ and $g$ to $(a,b)$ are fixed point free and move points in the same direction, so we can choose an orientation preserving homeomorphism $k_0$ of $(a,b)$ such that $k_0 fk_0^{-1}(x)=g(x)$ for $x\in(a,b)$. We then define $k(x)=k_0(x)$ for $x\in(a,b)$. We have constructed the required function $k$. Of course, the existence of a conjugation between $f$ and $g$ follows from Theorem~\ref{T: conjFix}, but we also need the property of $k$ that it fixes each element of $\text{fix}(f)$.
\end{proof}
\begin{theorem}\label{T: rot0}
An element $f$ of $\textup{H}^{+}(\mathbb{S})$ with a fixed point is strongly reversible in $\textup{H}^{+}(\mathbb{S})$ if and only if there is a homeomorphism $h$ in $\textup{H}^{+}(\mathbb{S})$ with rotation number $\frac12$ such that $\Delta_f=-\Delta_f\circ h$.
\end{theorem}
\begin{proof}
If $\sigma f\sigma =f^{-1}$ for an orientation preserving involution $\sigma$, then $\Delta_f = -\Delta_f\circ\sigma$, by Lemma~\ref{L: handy} (i). Conversely, suppose that there is a homeomorphism $h$ in $\textup{H}^{+}(\mathbb{S})$ with rotation number $\frac12$ such that $\Delta_f=-\Delta_f\circ h$.
By Lemma~\ref{L: handy}, $\Delta_{h^{-1} fh}=\Delta_{f^{-1}}$. Using Lemma~\ref{L: elementary} we can construct a map $k$ in $\textup{H}^{+}(\mathbb{S})$ that fixes each fixed point of $f$, and satisfies $k^{-1}h^{-1} fh k = f^{-1}$.
Now choose a fixed point $p$ of $f$. Then $\Delta_{f}(h(p))=-\Delta_{f}(p)=0$, so $h(p)$ is a fixed point of $f$. The points $p$ and $h(p)$ are distinct because $h$ has no fixed points. Define
\[
\mu(x)=
\begin{cases}
hk(x) & \text{if $x\in[p,h(p)]$},\\
k^{-1} h^{-1}(x) & \text{if $x\in[h(p),p]$}.
\end{cases}
\]
One can check that $\mu$ is an involution in $\textup{H}^{+}(\mathbb{S})$ and that $\mu f\mu=f^{-1}$.
\end{proof}
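The construction in Theorem~\ref{T: rot0} can be checked numerically in a concrete case. The sketch below (a hypothetical example, with the circle parametrised as $\mathbb{R}/\mathbb{Z}$) builds $f$ so that it is reversed by the half-turn $\sigma(x)=x+\tfrac12$, an orientation preserving involution: on $[0,\tfrac12]$ take a fixed point free increasing map $g$, and on $[\tfrac12,1]$ take the conjugate $\sigma g^{-1}\sigma$.

```python
def g(x):                        # increasing self-map of [0, 1/2], no interior fixed points
    return x + x * (0.5 - x)

def g_inv(y):                    # invert g on [0, 1/2] by bisection
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f(x):
    x %= 1.0
    if x <= 0.5:
        return g(x)
    return g_inv(x - 0.5) + 0.5  # the conjugate sigma g^{-1} sigma on [1/2, 1]

sigma = lambda x: (x + 0.5) % 1.0

# sigma f sigma = f^{-1} is equivalent to sigma f sigma f being the identity.
```

On $(0,\tfrac12)$ the map $f$ moves points anticlockwise and on $(\tfrac12,1)$ it moves them clockwise, so $\Delta_f=-\Delta_f\circ\sigma$, in line with Theorem~\ref{T: rot0}.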
We move on to orientation preserving homeomorphisms with rotation number $\frac{1}{2}$.
\begin{theorem}\label{T: rot1/2}
An element of $\textup{H}^{+}(\mathbb{S})$ with rotation number $\frac{1}{2}$ is strongly reversible in $\textup{H}^{+}(\mathbb{S})$ if and only if it is an involution.
\end{theorem}
\begin{proof}
All involutions are strongly reversible by the identity map. Conversely, let $f$ be a homeomorphism with rotation number $\frac12$, and let $\sigma$ be an involution in $\textup{H}^{+}(\mathbb{S})$ such that $\sigma f\sigma =f^{-1}$. Choose an element $x$ of $\text{fix}(f^2)$. Then $f(x)$ is also an element of $\text{fix}(f^2)$, and by interchanging $x$ and $f(x)$ if necessary, we may assume that $\sigma(x) \in (x,f(x)]$. Suppose that $\sigma(x)\neq f(x)$. Since $\sigma$ maps $(\sigma(x),x)$ onto $(x,\sigma(x))$, we have that $\sigma f(x)\in (x,\sigma(x))$. Likewise, $f$ maps $(x,f(x))$ onto $(f(x),x)$, therefore $f\sigma(x) \in (f(x),x)$. However, $f\sigma(x)= \sigma f ^{-1}(x)=\sigma f(x)$, and yet $(x,\sigma(x)) \cap (f(x),x)=\emptyset$. This is a contradiction, therefore $\sigma(x)=f(x)$. This means that $\sigma f$ is an orientation preserving involution which fixes $x$; hence it is the identity map. Therefore $f=\sigma$, as required.
\end{proof}
In contrast to Theorem~\ref{T: rot1/2} there are reversible homeomorphisms with rotation number $\frac{1}{2}$ that are not strongly reversible. An example is given at the end of \S\ref{S: two}.
We have all the ingredients for a proof of Theorem~\ref{T: two+}.
\begin{proof}[Proof of Theorem~\ref{T: two+}]
If $f$ is a strongly reversible member of $\textup{H}^{+}(\mathbb{S})$ then it is reversible, so it must have rotation number equal to either $0$ or $\frac{1}{2}$. If it has rotation number $0$ then there is an orientation preserving homeomorphism $h$ with rotation number $\frac12$ such that $\Delta_f=-\Delta_f\circ h$, by Theorem~\ref{T: rot0}. If $f$ has rotation number $\frac{1}{2}$ then it is an involution, by Theorem~\ref{T: rot1/2}. The converse implication follows immediately from Theorem~\ref{T: rot0}.
\end{proof}
\section{Proof of Theorem~\ref{T: three+}}\label{S: three+}
\begin{proof}[Proof of Theorem~\ref{T: three+}]
Choose an element $f$ of $\textup{H}^{+}(\mathbb{S})$ that is not an involution. There exists a point $x$ in $\mathbb{S}$ such that $x$, $f(x)$, and $f^2(x)$ are three distinct points. By replacing $f$ with $f^{-1}$ if necessary we can assume that $x$, $f(x)$, and $f^2(x)$ occur in that order anticlockwise around $\mathbb{S}$. Notice that $f^{-1}(x)$ lies in $(f(x),x)$. Choose a point $y$ in $(x,f(x))$ that is sufficiently close to $f(x)$ that $f^{-1}(y)>f^2(x)$ in $(f(x),x)$. We construct an orientation preserving homeomorphism $g$ from $[x,f(x)]$ to $[f(x),x]$ such that $g(y)=f(y)$, $g(t)<\min(f(t),f^{-1}(t))$ in $(x,y)$, and $f(t)<g(t)<f^{-1}(t)$ in $(y,f(x))$. A graph of such a function is shown in Figure~\ref{F: g}.
\begin{figure}
\caption{The graph of a function $g$ with the required properties.}
\label{F: g}
\end{figure}
Define an involution $\sigma$ in $\textup{H}^{+}(\mathbb{S})$ by the equation
\[
\sigma(t)=
\begin{cases}
g(t) & \text{if $t\in[x,f(x)]$},\\
g^{-1}(t) & \text{if $t\in[f(x),x]$}.
\end{cases}
\]
Let us determine the fixed points of $\sigma f$. For a point $t$ in $[x,f(x)]$ we have that $\sigma f(t)=t$ if and only if $f(t)=g(t)$. This means that either $t=x$ or $t=y$. For a point $t$ in $(f(x),x)$ we have that $\sigma f(t)=t$ if and only if $\sigma f g(w)= g(w)$, where $w=g^{-1}(t)$ is a point in $(x,f(x))$. Therefore $g(w)=f^{-1}(w)$. This equation has no solutions in $(x,f(x))$, hence $\sigma f$ has no fixed points in $(f(x),x)$. It is straightforward to obtain the direction of flow on the complement of $\{x,y\}$ and we find that
\[
\Delta_{\sigma f}(t)=
\begin{cases}
0 & \text{if $t=x,y$},\\
1 & \text{if $t\in(x,y)$},\\
-1 & \text{if $t\in(y,x)$}.
\end{cases}
\]
By Theorem~\ref{T: rot0}, $\sigma f$ is expressible as a composite of two involutions in $\textup{H}^{+}(\mathbb{S})$. Therefore $f$ is expressible as a composite of three involutions in $\textup{H}^{+}(\mathbb{S})$.
\end{proof}
A simple corollary of Theorem~\ref{T: three+} is that $\textup{H}^{+}(\mathbb{S})$ is uniformly perfect, meaning that there is a positive integer $N$ such that each element of $\textup{H}^{+}(\mathbb{S})$ can be expressed as a composite of $N$ or fewer commutators. Since it is easy to express the rotation by $\pi$ as a commutator, and each involution in $\textup{H}^{+}(\mathbb{S})$ is conjugate to the rotation by $\pi$, it follows from Theorem~\ref{T: three+} that $\textup{H}^{+}(\mathbb{S})$ is uniformly perfect with $N=3$. In fact, Eisenbud, Hirsch, and Neumann \cite{EiHiNe81} proved that $\textup{H}^{+}(\mathbb{S})$ is uniformly perfect with $N=1$.
\section{Proof of Theorem~\ref{T: two}}\label{S: two}
For the remainder of this paper we work in the full group of homeomorphisms of the circle. In \S\ref{S: conjugacy} we showed that all reversible maps in $\textup{H}^{+}(\mathbb{S})$ have rotation number either $0$ or $\frac{1}{2}$. This is not the case in $\textup{H}(\mathbb{S})$, because if $h$ is an orientation reversing map, and $f$ an orientation preserving map, then $\rho(hfh^{-1})=-\rho(f)$. Since also $\rho(f^{-1})=-\rho(f)$, the rotation number tells us nothing about reversibility by orientation reversing homeomorphisms. In fact, since all rotations are strongly reversible in $\textup{H}(\mathbb{S})$ by reflections, there are strongly reversible maps in $\textup{H}(\mathbb{S})$ with any given rotation number.
We divide our analysis of strongly reversible maps in $\textup{H}(\mathbb{S})$ into three cases, corresponding to rotation number $0$, non-zero rational rotation number, and irrational rotation number. The first case is Theorem~\ref{T: two} (i). We need a preliminary lemma.
\begin{lemma}\label{L: preliminary}
If $f$ is a fixed point free homeomorphism of an open proper arc $A$ in $\mathbb{S}$, then $f$ is conjugate to $f^{-1}$ on $A$ by an orientation reversing involution of $A$.
\end{lemma}
\begin{proof}
The situation is topologically equivalent to the situation when $f$ is a fixed point free homeomorphism of the real line. Such maps $f$ are conjugate to non-trivial translations, and translations are reversible by the orientation reversing involution $x\mapsto -x$.
\end{proof}
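Explicitly, if $T_a(x)=x+a$ is a non-trivial translation of the real line and $\tau(x)=-x$, then
\[
\tau T_a\tau(x) = -(-x+a) = x-a = T_a^{-1}(x),
\]
so each translation is reversed by the orientation reversing involution $\tau$.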
\begin{proof}[Proof of Theorem~\ref{T: two} (i)]
If $\tau f\tau =f^{-1}$ for an involution $\tau$, then
$\Delta_f = -\text{deg}(\tau)\cdot \Delta_f\circ\tau$, by Lemma~\ref{L: handy}. For the converse, we are given a homeomorphism $h$ that satisfies $\Delta_f=-\text{deg}(h)\cdot\Delta_f\circ h$. Either $h$ preserves orientation and satisfies $\rho(h)=\frac12$, in which case the result follows from Theorem~\ref{T: rot0}, or else $h$ reverses orientation.
In the latter case, by Lemmas~\ref{L: handy} and \ref{L: elementary} there is an orientation preserving homeomorphism $k$ that fixes the fixed points of $f$, and satisfies $k^{-1}h^{-1} fh k = f^{-1}$. Define $s=k^{-1}h^{-1}$. Let $s$ have fixed points $p$ and $q$. Let $a$ denote the point in $\text{fix}(f)$ that is clockwise from $p$, and closest to $p$. Possibly $a=p$. Define $b$ to be the point in $\text{fix}(f)$ that is anticlockwise from $p$ and closest to $p$. Let $I=(a,b)$. Similarly we define an interval $J$ about $q$. If we ignore the trivial case in which $\text{fix}(f)$ has only one component then $I$ and $J$ only intersect, if at all, in their end-points. Now, $f$ fixes $I$ so we can, by Lemma~\ref{L: preliminary}, choose an orientation reversing involution $\tau_I$ of $I$ such that $\tau_I f\tau_I(x)=f^{-1}(x)$ for $x\in I$. Similarly we define $\tau_J$. From the equation $sfs^{-1}=f^{-1}$ we deduce that $s$ fixes $I$ and $J$. Hence we can define
\begin{equation}\label{E: invert}
\mu(x) =
\begin{cases}
\tau_I(x) & \text{if $x\in I$},\\
\tau_J(x) & \text{if $x\in J$},\\
s(x) & \text{if $x\in [p,q]\setminus (I\cup J)$},\\
s^{-1}(x) & \text{if $x\in [q,p]\setminus (I\cup J)$}.
\end{cases}
\end{equation}
One can check that $\mu$ is an orientation reversing involution that satisfies $\mu f\mu=f^{-1}$.
\end{proof}
To prove Theorem~\ref{T: two} (ii) we use a lemma that enables us to deal with rational rotation numbers of the form $1/n$, rather than $m/n$. Recall that $n_f$ denotes the minimal period of $f$.
\begin{lemma}\label{L: simplify}
Let $f$ be an element of $\textup{H}^{+}(\mathbb{S})$ with a periodic point. If $f^d$ is strongly reversible for an integer $d$ in $\{1,2,\dots,n_f-1\}$ that is coprime to $n_f$, then $f$ is strongly reversible.
\end{lemma}
\begin{proof}
There exist integers $u$ and $t$ such that $dt= 1+un_f$. Let $q$ be the positive integer between $0$ and $n_f$, and coprime to $n_f$, such that $\rho(f)=q/n_f$. Observe that
\[
\rho(f^{dt})=(1+un_f)\rho(f)=\rho(f)+uq = \rho(f) \pmod {1}.
\]
Recall that a map $g$ in $\textup{H}^{+}(\mathbb{S})$ with fixed points is conjugate in $\textup{H}^{+}(\mathbb{S})$ to all its positive powers. Let $g=f^{n_f}$. Then $g$ is conjugate to $g^{dt}$. In other words, $f^{n_f}$ is conjugate to $(f^{dt})^{n_f}$. Apply Theorem~\ref{T: conjPer} to the maps $f$ and $f^{dt}$ to see that these two maps are conjugate. The second map $f^{dt}$ is strongly reversible, because $f^d$ is strongly reversible. Conjugacy preserves strong reversibility, therefore $f$ is also strongly reversible.
\end{proof}
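To illustrate the integer arithmetic in Lemma~\ref{L: simplify} with hypothetical values, suppose that $n_f=5$ and $d=2$. Then we may take $t=3$ and $u=1$, because
\[
dt=2\cdot 3=6=1+1\cdot 5=1+un_f,
\]
so that $\rho(f^{dt})=\rho(f)\pmod 1$ and the proof compares $f$ with $f^{6}$.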
We first prove a special case of Theorem~\ref{T: two} (ii).
\begin{lemma}\label{L: reflect}
Let $f$ be an orientation preserving homeomorphism of $\mathbb{S}$ with rotation number $1/n$, for a positive integer $n$. Suppose that there is an open interval $J$ such that the intervals $J, f(J),\dots,f^{n-1}(J)$ are pairwise disjoint, and such that $\textup{fix}(f^n)$ is the complement of $J\cup f(J)\cup\dots\cup f^{n-1}(J)$. Then $f$ is strongly reversible by an orientation reversing involution that maps $f^k(J)$ to $f^{-k+1}(J)$ for each integer $k$.
\end{lemma}
\begin{proof}
Let $R$ denote the anticlockwise rotation by $2\pi/n$. By conjugating $f$ suitably, we may assume that $f^k(J)=R^k(J)$ for all integers $k$. Let $\tau$ denote reflection in a line $\ell$ through the origin that bisects $J$. Thus $\tau$ fixes $J$. Let $\sigma$ denote reflection in a line through the origin that is $\pi/n$ anticlockwise from $\ell$. Then $R=\sigma\tau$. Orient $J$ in an anticlockwise sense, and choose an increasing homeomorphism $\phi$ of $J$, without fixed points, such that $\tau\phi\tau=\phi^{-1}$. This is possible because we can, by conjugation, consider $J$ to be the real line and $\tau$ to be a reflection, in which case $\phi$ can be chosen to be a translation. Now define an orientation preserving circle homeomorphism $g$ by $g(x)=f(x)$ for $x\in\text{fix}(f^n)$, and $g(x)=R^{k+1}\phi R^{-k}(x)$ for $x\in R^k(J)$. This means that $g^n$ has the same signature on all of the intervals $R^k(J)$ (either all $-1$ or all $1$). By Theorem~\ref{T: conjPer}, $g$ is conjugate to $f$. Also, one can check that $\sigma g\sigma =g^{-1}$ and $\sigma (g^k(J))=g^{-k+1}(J)$ for each integer $k$. Since strong reversibility is preserved under conjugation, the result follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T: two} (ii)]
If $f$ is strongly reversible by an involution $\tau$ then $f^{n_f}$ is also strongly reversible by $\tau$. If $\tau$ is orientation preserving then, by Theorem \ref{T: two+}, $f$ is an involution. Therefore $f^{n_f}$ is the identity, and as such it is strongly reversible by any orientation reversing involution.
Conversely, suppose that there is an orientation reversing involution $\tau$ such that $\tau f^{n_f}\tau = f^{-n_f}$. Let $p$ be a fixed point of $f^{n_f}$. There is an integer $d$ that is coprime to $n_f$ such that the distinct points $p, f^d(p),\dots,f^{(n_f-1)d}(p)$ occur in that order anticlockwise around $\mathbb{S}$. The function $(f^d)^{n_f}$ is strongly reversible because it equals $(f^{n_f})^d$. If we can deduce that $f^d$ is strongly reversible then it follows from Lemma~\ref{L: simplify} that $f$ is strongly reversible. In other words, it is sufficient to prove the theorem when the points $p,f(p),\dots,f^{{n_f}-1}(p)$ occur in that order around $\mathbb{S}$.
The map $\tau f$ has a fixed point $q$ which, by replacing $\tau$ with $f^k\tau f^{-k}$ for an appropriate integer $k$, we may assume lies in the interval $(f^{-1}(p),p]$. Either $q$ is a fixed point of $f^{n_f}$ (that is, a periodic point of $f$) or it is not. In the former case let $I=[q,f(q)]$ and define, for each integer $k$,
\begin{equation}\label{E: mu}
\mu(x)= f^k\tau f^k(x),\quad x\in f^{-k}(I).
\end{equation}
This is a well defined homeomorphism because $f^{n_f}\tau f^{n_f}=\tau$, which is a rearrangement of the equation $\tau f^{n_f}\tau = f^{-n_f}$. One can check that $\mu$ is an involution and satisfies $\mu f\mu =f^{-1}$. Figure~\ref{F: qIterates} shows the action of $\mu$ on certain $f$ iterates of $q$ in the case $n_f=2m$.
\begin{figure}
\caption{The action of $\mu$ on certain $f$ iterates of $q$ in the case $n_f=2m$.}
\label{F: qIterates}
\end{figure}
If $q$ is not a fixed point of $f^{n_f}$ then it lies in a unique component $J=(s,t)$ in the complement of $\text{fix}(f^{n_f})$. The interval $J$ is contained in $(f^{-1}(p),p)$, and both $s$ and $t$ lie in $\text{fix}(f^{n_f})$. From the equation $\tau f^{n_f}\tau = f^{-n_f}$, we can deduce that $\tau$ maps $J$ to another open interval component in $\mathbb{S}\setminus\text{fix}(f^{n_f})$. Also, $f(J)$ is a component of $\mathbb{S}\setminus\text{fix}(f^{n_f})$. But $\tau(J)$ and $f(J)$ both contain the point $\tau(q)=f(q)$; therefore $\tau(J)=f(J)$. This means that $\tau(s)=f(t)$ and $\tau(t)=f(s)$.
We define an involution $\mu$ in a similar fashion to \eqref{E: mu}, but this time we do not, yet, define $\mu$ on the intervals $J,f(J),\dots, f^{n_f-1}(J)$. Specifically, let $I=[t,f(s)]$ and define, for each integer $k$,
\begin{equation*}\label{E: mu1}
\mu(x)= f^k\tau f^k(x),\quad x\in f^{-k}(I).
\end{equation*}
We can extend $\mu$ to $J,f(J),\dots, f^{n_f-1}(J)$ using Lemma~\ref{L: reflect}. The now fully defined map $\mu $ is an orientation reversing involutive homeomorphism of $\mathbb{S}$ that satisfies $\mu f\mu=f^{-1}$.
\end{proof}
To prove Theorem~\ref{T: two} (iii) we need a lemma. We prove the lemma explicitly, although it can be deduced quickly from Lemma~\ref{L: reflect}.
\begin{lemma}\label{L: involution}
Let $J$ and $J'$ be two disjoint non-trivial closed intervals in $\mathbb{S}$, and let $g$ be an orientation preserving homeomorphism from $J$ to $J'$. Then there exists an orientation reversing homeomorphism $\gamma$ from $J$ to $J'$ such that $\gamma g^{-1} \gamma =g$.
\end{lemma}
\begin{proof}
Let $a$ and $b$ be points such that $J=[a,b]$. Choose a point $q$ in $(a,b)$. Choose an orientation reversing homeomorphism $\alpha$ from $[a,q]$ to $[g(q),g(b)]$. The map $\gamma$ defined by
\begin{equation*}
\gamma(x) =
\begin{cases}
\alpha(x) & \text{if $x\in [a,q]$},\\
g\alpha^{-1}g(x) & \text{if $x\in [q,b]$},
\end{cases}
\end{equation*}
has the required properties.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T: two} (iii)]
If $f$ is strongly reversible (by an orientation reversing involution) then, since $I_{f^{-1}}=I_f$, we see from Theorem~\ref{T: conjIrrat} that $w_f(I_f)$ has a reflectional symmetry.
Conversely, suppose that there is a reflection $\tau$ in a line through the origin of $\mathbb{R}^2$ that fixes $w_f(I_f)$. We define an involution $\mu$ from $I_f$ to $I_f$ by the equation $\mu(x) = w_f^{-1}\tau w_f(x)$. In this equation, $w_f^{-1}$ is the inverse of the function $w_f:I_f\rightarrow w_f(I_f)$. For $x\in I_f$ we have
\[
\mu f \mu(x) = (w_f^{-1}\tau w_f)(w_f^{-1}R_\theta w_f)(w_f^{-1}\tau w_f)(x) = w_f^{-1}(\tau R_\theta\tau) w_f(x) = w_f^{-1}R_{-\theta}w_f(x) = f^{-1}(x),
\]
since $\tau R_\theta\tau=R_{-\theta}$ for any reflection $\tau$, and the relation $w_ff=R_\theta w_f$ gives $f^{-1}=w_f^{-1}R_{-\theta}w_f$ on $I_f$.
Recall that $K_f$ is the complement in $\mathbb{S}$ of a countable collection of disjoint open intervals $(a_i,b_i)$, and $I_f$ is the complement in $\mathbb{S}$ of the union of the intervals $[a_i,b_i]$. We can extend the definition of $\mu$ to $K_f$ by defining $\mu(a_i)$ to be the limit of $\mu(x_n)$, where $x_n$ is a sequence in $I_f$ that converges to $a_i$. Similarly for $b_i$. The extended map $\mu$ is a homeomorphism from $K_f$ to itself. Notice that $\mu$ has the property that for each integer $i$ there is an integer $j$ such that $\mu$ interchanges $a_i$ and $b_j$, and also interchanges $a_j$ and $b_i$.
We can extend the definition of $\mu$ to the whole of $\mathbb{S}$ by introducing, for each $i$, an orientation reversing homeomorphism $\phi_i:[a_i,b_i]\rightarrow [a_j,b_j]$ (where $\mu(a_i)=b_j$ and $\mu(b_i)=a_j$), and defining $\mu(x)=\phi_i(x)$ for $x\in[a_i,b_i]$. The resulting map $\mu$ will be a homeomorphism. It remains only to show how to choose particular maps $\phi_i$ such that $\mu$ is an involution that satisfies $\mu f\mu = f^{-1}$.
Let $J=(a_i,b_i)$ and let $J'=(a_j,b_j)$. We have two collections
\begin{equation}\label{E: intervals}
\{\dotsc,f^{-1}(J),J,f(J),f^2(J),\dotsc\},\quad \{\dotsc,f^{-1}(J'),J',f(J'),f^2(J'),\dotsc\},
\end{equation}
each consisting of pairwise disjoint intervals. These two collections either share no common members, or else they coincide. In the first case, choose an arbitrary orientation reversing homeomorphism $\gamma$ from $J$ to $J'$. In the second case, there is an integer $m$ such that $f^m(J)=J'$. Apply Lemma~\ref{L: involution} with $g=f^m$ to deduce the existence of an orientation reversing homeomorphism $\gamma$ from $J$ to $J'$ satisfying $\gamma f^{-m} \gamma =f^{m}$. In each case we define, for each $n\in\mathbb{Z}$,
\begin{equation*}\label{E: kappa}
\mu(x) =
\begin{cases}
f^n\gamma f^n(x) & \text{if $x\in f^{-n}(J)$},\\
f^n\gamma^{-1} f^n(x) & \text{if $x\in f^{-n}(J')$}.
\end{cases}
\end{equation*}
One can check that $\mu$ is well-defined for points $x$ in one of the intervals of \eqref{E: intervals}, and that $\mu^2(x)=x$ and $\mu f\mu(x)=f^{-1}(x)$. In this manner $\mu$ can be defined on each of the intervals $(a_k,b_k)$. The resulting map is a homeomorphism of $\ensuremath{\mathbb{S}} $ that is an involution and satisfies $\mu f\mu =f^{-1}$.
\end{proof}
It follows from Theorem~\ref{T: two} and the results on conjugacy in \S\ref{S: conjugacy} that an orientation preserving circle homeomorphism is reversible by an orientation \emph{reversing} involution if and only if it is reversible by an orientation \emph{reversing} homeomorphism. We now sketch the details of an example to show that there are orientation preserving circle homeomorphisms that are reversible by orientation preserving homeomorphisms, but are \emph{not} strongly reversible in $\mathcal{H}$. This means that the concepts of reversibility and strong reversibility are not equivalent in either $\mathcal{H}$ or $\mathcal{H}^+$.
Let $a$ be the point on $\mathbb{S}$ with co-ordinates $(0,1)$ and let $b$ be the point with co-ordinates $(0,-1)$. Let $\dotsb<a_{-2}<a_{-1}<a_0<a_1<a_2< \dotsb$ be an infinite sequence of points in $(b,a)$ that accumulates only at $a$ and $b$. Let $g$ be an orientation preserving homeomorphism from $[b,a]$ to $[b,a]$ that fixes only the points $a_i$, $a$, and $b$. We construct a doubly infinite sequence $s$ consisting of $1$s and $-1$s as follows. Let $u$ represent the string of six numbers $1,1,1,-1,-1,1$. Let $-u$ represent the string $-1,-1,-1,1,1,-1$. Then $s$ is given by $\dotsc,u,-u,u,-u,\dotsc$. We say that two doubly infinite sequences $(x_n)$ and $(y_n)$ are equal if and only if there is an integer $k$ such that $x_{n+k}=y_n$ for all $n$. Our sequence $s$ has been constructed such that $s=-s$, and the sequence formed by reversing $s$ is distinct from $s$.
Suppose that $g$ is defined in any fashion on the intervals
\[
\dotsc,(a_{-2},a_{-1}),(a_{-1},a_{0}),(a_{0},a_{1}),(a_{1},a_{2}),\dotsc
\]
such that the signature of $g$ on these intervals
is determined by $s$. For $x\in(a,b)$ we define $g(x)=R_{\pi}gR_{\pi}(x)$, where $R_{\pi}$ is the rotation by $\pi$. A diagram of the homeomorphism $g$ is shown in Figure~\ref{F: counterexample}.
\begin{figure}\label{F: counterexample}
\end{figure}
Since we can embed $g$ in a flow, we can certainly choose a square-root $\sqrt{g}$ of $g$ (which shares the same signature as $g$). Define another circle homeomorphism $f$ by the formula
\begin{equation*}
f(x) =
\begin{cases}
R_{\pi} \sqrt{g}(x) & \text{if $x\in [b,a]$},\\
\sqrt{g}R_{\pi}(x) & \text{if $x\in (a,b)$}.
\end{cases}
\end{equation*}
This map $f$ has rotation number $\frac12$ and it satisfies $f^2=g$. The map $f$ is not reversible by an orientation preserving involution, by Theorem~\ref{T: rot1/2}, because it is not an involution. Nor is $f$ reversible by an orientation reversing involution; for if it were, then $g$ would also be reversed by the same orientation reversing involution, and from Theorem~\ref{T: two} one can deduce that this would mean that the sequence $s$ coincides with the reversed sequence $s$. Finally, $f$ is reversible since $g$ is reversible, by Theorem~\ref{T: conjPer}; a conjugation from $g$ to $g^{-1}$ can be constructed that maps $a_i$ to $a_{i+6}$ for each $i$.
\section{Proof of Theorem~\ref{T: two-}}\label{S: two-}
\begin{proof}[Proof of Theorem~\ref{T: two-}]
If $f$ is strongly reversible then it is expressible as a composite of two involutions, one of which, $\tau$, must reverse orientation. From the equation $\tau f\tau = f^{-1}$ we see that $\tau f^2\tau=f^{-2}$, and that $\tau$ preserves the pair of fixed points of $f$ as a set. If $\tau$ fixes each of the fixed points of $f$ then $\tau f$ is an orientation preserving involution with fixed points. Hence it is the identity map. Therefore $f=\tau$. The alternative is that $\tau$ interchanges the fixed points of $f$, which is the condition stated in Theorem~\ref{T: two-}.
For the converse, suppose that $a$ and $b$ are the two fixed points of $f$. By Theorem~\ref{T: two} (i) we can construct from $h$ an orientation reversing involution $\mu$ such that $\mu f^2\mu = f^{-2}$. We require that $\mu$, like $h$, interchanges $a$ and $b$; this is not given by the statement of Theorem~\ref{T: two}, however, it is immediate from the definition of $\mu$ in \eqref{E: invert} (where, in that equation, $s$ is our current homeomorphism $h$). Now define
\[
\tau(x)=
\begin{cases}
\mu(x) & \text{if $x\in[a,b]$},\\
f\mu f(x) & \text{if $x\in[b,a]$}.
\end{cases}
\]
Then $\tau$ is an involution in \ensuremath{\textup{H}(\mathbb{S})} and $\tau f\tau =f^{-1}$.
\end{proof}
The hypothesis that $h$ interchanges the pair of fixed points of $f$ cannot be dropped from Theorem~\ref{T: two-}: one can construct examples of orientation reversing homeomorphisms $f$ and involutions $\tau$ such that $\tau f^2\tau=f^{-2}$ even though $f$ is not strongly reversible. There are also examples of orientation reversing circle homeomorphisms that are reversible, but not strongly reversible.
\end{document}
\begin{document}
\captionsetup[subfigure]{labelformat=empty}
\title[Cosmology of Plane Geometry]{Cosmology of Plane Geometry}
\subjclass[2010]{51M04, 51N20}
\keywords{Plane geometry, Analytic geometry}
\author[Alexander Skutin]{Alexander Skutin}
\maketitle
\section{Introduction}
This paper presents a new approach to plane geometry and develops concepts that allow researchers to unify and view plane geometry from a new, meaningful perspective. The present short note is the first chapter of the full paper, which is referenced in the last section.
\section{Deformation principle}
In plane geometry, the deformation principle refers to replacing a certain configuration of points, lines, and circles with a more general one, in which points, lines, or circles that coincide in the original (undeformed) configuration are replaced by their deformed versions: points, lines, and circles that are no longer equal in the general case but remain related to one another. If one knows that points, lines, or circles coincide in the undeformed configuration, one can predict how their deformed versions are connected in the general configuration. Consideration of the undeformed case thus helps to predict which points should be connected in the general, deformed case.
\subsection{Basic deformation example}
For the first example of an application of the deformation principle, consider the configuration of a square $ABCD$ with its center $O$. We can view the point $O$ as the third vertices $O_{ab}$, $O_{bc}$, $O_{cd}$, $O_{da}$ of the triangles $O_{ab}AB$, $O_{bc}BC$, $O_{cd}CD$, $O_{da}DA$ constructed internally on the sides of $ABCD$, which are isosceles right-angled triangles ($\angle AO_{ab}B = 90^{\circ}$, $|O_{ab}A| = |O_{ab}B|$, and so on). So, in the case when $ABCD$ is a square we see that $O=O_{ab}=O_{bc}=O_{cd}=O_{da}$, and thus the deformation principle predicts that in the general (deformed) case of an arbitrary $ABCD$ the deformed points $O_{ab}$, $O_{bc}$, $O_{cd}$, $O_{da}$ should be connected. And in fact they are: $O_{ab}O_{cd}$ is perpendicular to $O_{bc}O_{da}$, and they have equal lengths.
Therefore, we can state the following result.
\textbf{Theorem 1 (Deformation of a square with its center).} Consider any (convex) quadrilateral $ABCD$ and let points $O_{ab}$, $O_{bc}$, $O_{cd}$, $O_{da}$ be chosen inside $ABCD$ such that $O_{ab}AB$, $O_{bc}BC$, $O_{cd}CD$, $O_{da}DA$ are isosceles right-angled triangles (see the picture below).
Then the following properties of the points $O_{ab}$, $O_{bc}$, $O_{cd}$, $O_{da}$ are true:
\begin{enumerate}
\item $O_{ab}O_{cd}\perp O_{bc}O_{da}$
\item $|O_{ab}O_{cd}| = |O_{bc}O_{da}|$.
\end{enumerate}
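Theorem 1 is easy to test numerically. In the sketch below (Python; the quadrilateral is an arbitrary hypothetical example of mine, not from the paper), points are complex numbers, the vertices are listed in counterclockwise order, and the internal apex on a directed side $XY$ is $(X+Y)/2+i(Y-X)/2$:

```python
def apex(X, Y):
    """Apex O with angle(X O Y) = 90 deg and |OX| = |OY|, on the inner side
    of the directed side XY of a counterclockwise quadrilateral."""
    return (X + Y) / 2 + 1j * (Y - X) / 2

# A hypothetical convex quadrilateral, vertices counterclockwise.
A, B, C, D = 0 + 0j, 5 + 0j, 6 + 4j, -1 + 3j

O_ab, O_bc, O_cd, O_da = apex(A, B), apex(B, C), apex(C, D), apex(D, A)

v1 = O_cd - O_ab                        # segment O_ab O_cd
v2 = O_da - O_bc                        # segment O_bc O_da
dot = (v1 * v2.conjugate()).real        # zero iff the segments are perpendicular
len_diff = abs(abs(v1) - abs(v2))       # zero iff they have equal lengths
```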
The previous theorem shows how the deformation principle works when we deform a square together with its center; similarly, we can deform other simple configurations and obtain complicated results. Each time we deform, we must decide which points we wish to deform and how to deform them. For example, in the previous case we can view the square's center $O$ as the intersection of the bisectors of the internal angles $\angle ABC$, $\angle BCD$, $\angle CDA$, $\angle DAB$, which provides another type of deformation of $ABCDO$ (in the case of the angle bisectors the resulting deformational fact may be stated as: the points $O_1$, $O_2$, $O_3$, $O_4$ are concyclic). To see why the deformation principle is natural, we can regard it as the reverse of the process of considering particular cases of theorems. For example, Theorem 1 becomes trivial in the particular case of a square because in this case the points $O_{ab}$, $O_{bc}$, $O_{cd}$, $O_{da}$ coincide. So, the particular case should be seen as evidence for the general one.
\subsection{Possible deformations}
Theorem 1 is the deformation of four coincident points into four connected points in the general case. Similarly, other objects that are equal (or trivially connected) in the undeformed case can be deformed to understand their connected deformations. The following table shows which objects may be considered undeformed and how their deformations may appear:
\begin{table}[htb]
\begin{tabular}{l|l}
\textbf{Undeformed objects} & \textbf{Possible deformations} \\ \hline
Coincident points & Collinear points, concyclic points \\ \hline
Coincident lines & Concurrent lines \\ \hline
Coincident circles & \begin{tabular}[c]{@{}l@{}}Concurrent circles, coaxial circles, circles whose\\ radical line has many nice properties wrt the\\ original configuration (if there are two circles)\end{tabular} \\ \hline
Coincident triangles & \begin{tabular}[c]{@{}l@{}}Perspective triangles, triangles whose vertices\\ lie on the same conic\end{tabular} \\ \hline
Concyclic points & \begin{tabular}[c]{@{}l@{}}Concyclic points, points lying on the same conic\\ \end{tabular} \\ \hline
\end{tabular}
\end{table}
\hspace{-6mm}
\subsection{Deformations of an equilateral triangle}
Instead of a square $ABCD$, this paper uses an equilateral triangle $ABC$ and attempts to deform its center, incircle, circumcircle, and many other closely related objects. Deformation of an equilateral triangle is the most powerful tool for producing and verifying results throughout this paper.
As an example of the deformation of an equilateral triangle, we can consider an equilateral triangle with center $O$ and interpret $O$ as the entire set of Clark Kimberling centers $X_i$ (see \cite{Cl}). Therefore, the deformation theory predicts that, in the general case of an arbitrary triangle $ABC$, the triangle centers $X_i$ should have many relations among themselves, such as collinearity, concyclicity, etc. Many of these relationships may be found in \cite{Cl}.
Next we introduce some first examples of application of the deformation principle in the case of equilateral triangles.
\textbf{Example 1.} Consider any triangle $ABC$. Let $A'BC$ be the equilateral triangle constructed on the side $BC$ such that $A'$ and $A$ are on the same half-plane wrt $BC$. Denote by $O_a$ the center of $A'BC$. Similarly define $O_b$, $O_c$. In the case of equilateral $ABC$ we get that $O_a=O_b=O_c$ coincide with the center $O$ of $ABC$. So, we can predict that in the general case of an arbitrary $ABC$ the points $O_a$, $O_b$, $O_c$ will be connected and have relations with the base triangle $ABC$. And, in fact, the following relation can be formulated: the points $O_a$, $O_b$, $O_c$ form an equilateral triangle whose circumcircle passes through the first Fermat point of $ABC$.
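The equilateral part of this relation (essentially Napoleon's theorem for inwardly erected triangles) can be checked numerically; the Fermat-point part is stated but not tested here. A Python sketch of mine with one hypothetical triangle:

```python
import math

def center_on(X, Y, Z):
    """Center of the equilateral triangle erected on side XY, with its apex
    chosen on the same side of line XY as the point Z."""
    s = math.sqrt(3) / 2
    cand1 = (X + Y) / 2 + s * 1j * (Y - X)
    cand2 = (X + Y) / 2 - s * 1j * (Y - X)

    def side(P):
        # sign of the cross product of (Y - X) with (P - X)
        return ((Y - X).conjugate() * (P - X)).imag

    apex = cand1 if side(cand1) * side(Z) > 0 else cand2
    return (X + Y + apex) / 3

A, B, C = 0 + 0j, 5 + 0j, 1 + 4j       # a hypothetical triangle
O_a = center_on(B, C, A)
O_b = center_on(C, A, B)
O_c = center_on(A, B, C)

sides = sorted((abs(O_a - O_b), abs(O_b - O_c), abs(O_c - O_a)))
```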
\textbf{Example 2.} Consider a triangle $ABC$ with the first and second Fermat points $F_1$, $F_2$. Let $F_a$, $F_b$, $F_c$ be the second Fermat points of $F_1BC$, $F_1AC$, $F_1AB$, respectively. In the case of equilateral $ABC$ we get that $F_1$ coincides with the center $O$ of $ABC$ and $F_2$ coincides with a point $P$ lying on the circumcircle of $ABC$. Additionally, in the equilateral case $F_a$, $F_b$, $F_c$ are the reflections of the center $O$ of $ABC$ wrt its sides, and the points $A$, $B$, $C$, $F_a$, $F_b$, $F_c$ form a regular hexagon whose circumcircle contains $P$. Thus, we can predict that in the general case of an arbitrary $ABC$ the configuration $ABCF_aF_bF_cF_2$ will inherit some properties of a regular hexagon with a point $P$ on its circumcircle. And, in fact, the following relation can be formulated: the point $F_2$ lies on the circumcircle of $F_aF_bF_c$.
\textbf{Example 3.} Consider a triangle $ABC$ and any point $P$. Let $AP$, $BP$, $CP$ meet the circumcircle $(ABC)$ of $ABC$ a second time at $A'$, $B'$, $C'$ (i.e. $A'B'C'$ is the circumcevian triangle of $P$ wrt $ABC$). Consider the nine-point centers $N$, $N_a$, $N_b$, $N_c$ of $ABC$, $A'BC$, $B'AC$, $C'AB$, respectively. Denote
\begin{enumerate}
\item the reflections $N_a'$, $N_b'$, $N_c'$ of $N_a$, $N_b$, $N_c$ wrt $BC$, $AC$, $AB$, respectively
\item the reflections $N_a''$, $N_b''$, $N_c''$ of $N_a$, $N_b$, $N_c$ wrt midpoints of $BC$, $AC$, $AB$, respectively.
\end{enumerate}
In the case of equilateral $ABC$ and $P=O$ (its center) we get that $N_a'=N_b'=N_c'=N=P=O$ and $N_a''=N_b''=N_c''=N=P=O$. So, we can predict that in the general case of an arbitrary $ABC$ and a point $P$ the points $N_a'$, $N_b'$, $N_c'$, $N$, $P$ (respectively $N_a''$, $N_b''$, $N_c''$, $N$, $P$) will be connected with each other. And, in fact, the following connections can be formulated:
\begin{enumerate}
\item Points $N_a'$, $N_b'$, $N_c'$, $N$ lie on the same circle
\item Points $N_a''$, $N_b''$, $N_c''$, $N$ lie on the same circle.
\end{enumerate}
\begin{comment}
\subsection{Near-the-equilateral-triangle shapes}
This paper also uses the following shapes (among others), which may be regarded as shapes close to an equilateral triangle. The deformations of these shapes will appear in the next sections of the paper.
\end{comment}
\section{Complete article}
The full version of this article was written by the author, with contributions of geometric problems by Tran Quang Hung, Kadir Altintas, and Antreas Hatzipolakis.
It can be downloaded from the links below:
Cosmology of Plane Geometry: Concepts and Theorems (2019) \href{https://www.scribd.com/document/421475794}{Scribd/421475794}
Cosmology of Plane Geometry (Improved version 2021) \href{https://www.scribd.com/document/510674976}{Scribd/510674976}.
\end{document}
\begin{document}
\title*{Computational Number Theory in Relation with $L$-Functions}
\author{Henri Cohen}
\institute{Universit\'e de Bordeaux, CNRS, INRIA, IMB, UMR 5251, F-33400 Talence, France,
\email{[email protected]}}
\maketitle
\abstract{We give a number of theoretical and practical methods related to the
computation of $L$-functions, both in the local case (counting points
on varieties over finite fields, involving in particular a detailed
study of Gauss and Jacobi sums), and in the global case (for instance
Dirichlet $L$-functions, involving in particular the study of inverse
Mellin transforms); we also give a number of little-known but very
useful numerical methods, usually but not always related to the computation
of $L$-functions.}
\section{$L$-Functions}\label{sec:one}
This course is divided into five parts. In the first part (Sections 1 and 2),
we introduce the notion of $L$-function, give a number of results and
conjectures concerning them, and explain some of the computational problems
in this theory. In the second part (Sections 3 to 6), we give a number of
computational methods for obtaining the Dirichlet series coefficients of the
$L$-function; this part is \emph{arithmetic} in nature. In the third part
(Section 7), we give a number of \emph{analytic} tools necessary for working
with $L$-functions. In the fourth part (Sections 8 and 9), we give a number of
very useful numerical methods which are not sufficiently well known, most of
which are also related to the computation of $L$-functions. The fifth part
(Sections 10 and 11) gives the {\tt Pari/GP} commands corresponding to most of
the algorithms and examples given in the course. A final Section 12 gives
as an appendix some basic definitions and results used in the course which
may be less familiar to the reader.
\subsection{Introduction}
The theory of $L$-functions is one of the most exciting subjects in number
theory. It includes for instance two of the crowning achievements of
twentieth century mathematics, first the proof of the Weil conjectures
and of the Ramanujan conjecture by Deligne in the early 1970's, using the
extensive development of modern algebraic geometry initiated by Weil himself
and pursued by Grothendieck and followers in the famous EGA and SGA treatises,
and second the proof of the Shimura--Taniyama--Weil conjecture by Wiles et
al., implying among other things the proof of Fermat's last theorem. It
also includes two of the seven 1 million dollar Clay problems for the
twenty-first century, first the Riemann hypothesis, and second the
Birch--Swinnerton-Dyer conjecture which in my opinion is the most beautiful,
if not the most important, conjecture in number theory, or even in the whole
of mathematics, together with similar conjectures such as the
Beilinson--Bloch conjecture.
There are two kinds of $L$-functions: local $L$-functions and global
$L$-functions. Since the proof of the Weil conjectures, local $L$-functions
are rather well understood from a theoretical standpoint, but somewhat less
from a computational standpoint. Much less is known on global $L$-functions,
even theoretically, so here the computational standpoint is much more
important since it may give some insight on the theoretical side.
Before giving a definition of $L$-functions, we look in some detail at a
large number of special cases of global $L$-functions.
\subsection{The Prototype: the Riemann Zeta Function $\zeta(s)$}
The simplest of all (global) $L$-functions is the Riemann zeta function $\zeta(s)$
defined by
$$\zeta(s)=\sum_{n\ge1}\dfrac{1}{n^s}\;.$$
This is an example of a \emph{Dirichlet series} (more generally
$\sum_{n\ge1}a(n)/n^s$, or even more generally $\sum_{n\ge1}1/\lambda_n^s$, but
we will not consider the latter). As such, it has a half-plane of absolute
convergence, here $\Re(s)>1$.
The properties of this function, studied initially by Bernoulli
and Euler, are as follows, given historically:
\begin{enumerate}\item (Bernoulli, Euler): it has \emph{special values}.
When $s=2$, $4$,... is a strictly positive even integer, $\zeta(s)$ is equal
to $\pi^s$ times a \emph{rational number}. $\pi$ is here a \emph{period},
and is of course the usual $\pi$ used for measuring circles. These rational
numbers have elementary \emph{generating functions}, and are equal up to easy
terms to the so-called \emph{Bernoulli numbers}. For example
$\zeta(2)=\pi^2/6$, $\zeta(4)=\pi^4/90$, etc. This was conjectured by Bernoulli
and proved by Euler. Note that the proof in 1735 of the so-called
\emph{Basel problem}:
$$\zeta(2)=1+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\dfrac{1}{4^2}+\cdots=\dfrac{\pi^2}{6}$$
is one of the crowning achievements of mathematics of that time.
\item (Euler): it has an \emph{Euler product}: for $\Re(s)>1$ one has the
identity
$$\zeta(s)=\prod_{p\in P}\dfrac{1}{1-1/p^s}\;,$$
where $P$ is the set of prime numbers. This is exactly equivalent to the
so-called fundamental theorem of arithmetic. Note in passing (this does not
seem interesting here but will be important later) that if we consider
$1-1/p^s$ as a polynomial in $1/p^s=T$, its reciprocal roots all have the
same modulus, here $1$, this being of course trivial.
\item (Riemann, but already ``guessed'' by Euler in special cases): it has
an \emph{analytic continuation} to a meromorphic function in the whole complex
plane, with a single pole, at $s=1$, with residue $1$, and a \emph{functional
equation} $\Lambda(1-s)=\Lambda(s)$, where $\Lambda(s)=\Gamma_{{\mathbb R}}(s)\zeta(s)$,
with $\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$, and $\Gamma$ is the gamma function
(see appendix).
\item As a consequence of the functional equation, we have $\zeta(s)=0$
when $s=-2$, $-4$,..., $\zeta(0)=-1/2$, but we also have \emph{special
values} at $s=-1$, $s=-3$,... which are symmetrical to those at $s=2$, $4$,...
(for instance $\zeta(-1)=-1/12$, $\zeta(-3)=1/120$, etc.). This is the
part which was guessed by Euler.
\end{enumerate}
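As a quick numerical illustration (Python here, as a stand-in for the {\tt Pari/GP} commands of Sections 10 and 11), the partial sums of the series at $s=2$ do approach $\pi^2/6$; the tail beyond $N$ terms has size roughly $1/N$:

```python
import math

# Partial sums of the Dirichlet series at s = 2; the tail beyond N is ~ 1/N.
N = 200_000
partial = sum(1.0 / (n * n) for n in range(1, N + 1))
error = math.pi ** 2 / 6 - partial      # positive, roughly 1/N
```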
Roughly speaking, one can say that a global $L$-function is a function
having properties similar to \emph{all} the above. We will of course be
completely precise below. Two things should be added immediately: first, the
existence of special values will not be part of the definition but, at
least conjecturally, a consequence. Second, all the global $L$-functions
that we will consider should \emph{conjecturally} satisfy a Riemann hypothesis:
when suitably normalized, and excluding ``trivial'' zeros, all the zeros
of the function should be on the line $\Re(s)=1/2$, axis of symmetry of the
functional equation. Note that even for the simplest $L$-function, $\zeta(s)$,
this is not proved.
\subsection{Dedekind Zeta Functions}
The Riemann zeta function is perhaps too simple an example to get the correct
feeling about global $L$-functions, so we generalize:
Let $K$ be a number field (a finite extension of ${\mathbb Q}$) of degree $d$. We
can define its \emph{Dedekind zeta function} $\zeta_K(s)$ for $\Re(s)>1$ by
$$\zeta_K(s)=\sum_{\mathfrak a}\dfrac{1}{\N(\mathfrak a)^s}=\sum_{n\ge1}\dfrac{i(n)}{n^s}\;,$$
where $\mathfrak a$ ranges over all (nonzero) integral ideals of the ring of integers
${\mathbb Z}_K$ of $K$, $\N(\mathfrak a)=[{\mathbb Z}_K:\mathfrak a]$ is the norm of $\mathfrak a$, and $i(n)$ denotes
the number of integral ideals of norm $n$.
This function has very similar properties to those of $\zeta(s)$ (which is the
special case $K={\mathbb Q}$). We give them in a more logical order:
\begin{enumerate}\item It can be analytically continued to the whole complex
plane into a meromorphic function having a single pole, at $s=1$, with known
residue, and it has a functional equation $\Lambda_K(1-s)=\Lambda_K(s)$, where
$$\Lambda_K(s)=|D_K|^{s/2}\Gamma_{{\mathbb R}}(s)^{r_1+r_2}\Gamma_{{\mathbb R}}(s+1)^{r_2}\zeta_K(s)\;,$$
where $(r_1,2r_2)$ are the number of real and complex embeddings of $K$
and $D_K$ its discriminant.
\item It has an Euler product $\zeta_K(s)=\prod_{{\mathfrak p}}1/(1-1/\N({\mathfrak p})^s)$, where
the product is over all prime ideals of ${\mathbb Z}_K$. Note that this can also be
written
$$\zeta_K(s)=\prod_{p\in P}\prod_{{\mathfrak p}\mid p}\dfrac{1}{1-1/p^{f({\mathfrak p}/p)s}}\;,$$
where $f({\mathfrak p}/p)=[{\mathbb Z}_K/{\mathfrak p}:{\mathbb Z}/p{\mathbb Z}]$ is the so-called \emph{residual index}
of ${\mathfrak p}$ above $p$. Once again, note that if we set as usual $1/p^s=T$,
the reciprocal roots of $1-T^{f({\mathfrak p}/p)}$ all have modulus $1$.
\item It has \emph{special values}, but only when $K$ is a \emph{totally real}
number field ($r_2=0$, $r_1=d$): in that case $\zeta_K(s)$ is a \emph{rational
number} if $s$ is a negative odd integer, or equivalently by the functional
equation, it is a rational multiple of $\sqrt{|D_K|}\pi^{ds}$ if $s$ is a
positive even integer.\end{enumerate}
An important new phenomenon occurs: recall that
$\sum_{{\mathfrak p}\mid p}e({\mathfrak p}/p)f({\mathfrak p}/p)=d$, where $e({\mathfrak p}/p)$ is the so-called
\emph{ramification index}, which is equivalent to the
defining equality $p{\mathbb Z}_K=\prod_{{\mathfrak p}\mid p}{\mathfrak p}^{e({\mathfrak p}/p)}$. In particular
$\sum_{{\mathfrak p}\mid p}f({\mathfrak p}/p)=d$ if and only if $e({\mathfrak p}/p)=1$ for all ${\mathfrak p}$, which
means that $p$ is \emph{unramified} in $K/{\mathbb Q}$; one can prove that this is
equivalent to $p\nmid D_K$. Thus, the \emph{local $L$-function}
$L_{K,p}(T)=\prod_{{\mathfrak p}\mid p}(1-T^{f({\mathfrak p}/p)})$ has degree in $T$ exactly
equal to $d$ for all but a finite number of primes $p$, which are exactly
those which divide the discriminant $D_K$, and for those ``bad'' primes
the degree is strictly less than $d$. In addition, note that the
number of $\Gamma_{{\mathbb R}}$ factors in the \emph{completed} function $\Lambda_K(s)$
is equal to $r_1+2r_2$, hence once again equal to $d$.
{\bf Examples:}
\begin{enumerate}\item Let $D$ be the discriminant of a quadratic field, and
let $K={\mathbb Q}(\sqrt{D})$. In that case, $\zeta_K(s)$ \emph{factors} as
$\zeta_K(s)=\zeta(s)L(\chi_D,s)$, where $\chi_D=\lgs{D}{.}$ is the
Legendre--Kronecker symbol, and $L(\chi_D,s)=\sum_{n\ge 1}\chi_D(n)/n^s$.
Thus, the local $L$-function at a prime $p$ is given by
$$L_{K,p}(T)=(1-T)(1-\chi_D(p)T)=1-a_pT+\chi_D(p)T^2\;,$$
with $a_p=1+\chi_D(p)$. Note that $a_p$ is equal to the number of solutions
in ${\mathbb F}_p$ of the equation $x^2=D$.
\item Let us consider two special cases of (1): first $K={\mathbb Q}(\sqrt{5})$.
Since it is a real quadratic field, it has special values, for instance
$$\zeta_K(-1)=\dfrac{1}{30}\;,\quad \zeta_K(-3)=\dfrac{1}{60}\;,\quad \zeta_K(2)=\dfrac{2\sqrt{5}\pi^4}{375}\;,\quad \zeta_K(4)=\dfrac{4\sqrt{5}\pi^8}{84375}\;.$$
In addition, note that its \mathbf emph{gamma factor} is $5^{s/2}\Gamma_{{\mathbb R}}(s)^2$.
Second, consider $K={\mathbb Q}(\sqrt{-23})$. Since it is not a totally real field,
$\zeta_K(s)$ does not have special values. However, because of the factorization
$\zeta_K(s)=\zeta(s)L(\chi_D,s)$, we can look \emph{separately} at the special values
of $\zeta(s)$, which we have already seen (negative odd integers and positive
even integers), and of $L(\chi_D,s)$. It is easy to prove that the special
values of this latter function occur at negative \emph{even} integers
and positive \emph{odd} integers, which have empty intersection with those
of $\zeta(s)$, which explains why $\zeta_K(s)$ itself has none. For instance,
$$L(\chi_D,-2)=-48\;,\quad L(\chi_D,-4)=6816\;,\quad L(\chi_D,3)=\dfrac{96\sqrt{23}\pi^3}{12167}\;.$$
In addition, note that its gamma factor is
$$23^{s/2}\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)=23^{s/2}\Gamma_{{\mathbb C}}(s)\;,$$
where we set by definition
$$\Gamma_{{\mathbb C}}(s)=\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)=2\cdot(2\pi)^{-s}\Gamma(s)$$
by the duplication formula for the gamma function.
\item Let $K$ be the unique cubic field up to isomorphism of discriminant
$-23$, defined for instance by a root of the equation $x^3-x-1=0$. We
have $(r_1,2r_2)=(1,2)$ and $D_K=-23$. Here, one
can prove (it is less trivial) that $\zeta_K(s)=\zeta(s)L(\rho,s)$, where
$L(\rho,s)$ is a holomorphic function. Using both properties of $\zeta_K$ and
$\zeta$, this $L$-function has the following properties:
\begin{itemize}\item It extends to an entire function on ${\mathbb C}$ with a functional
equation $\Lambda(\rho,1-s)=\Lambda(\rho,s)$, with
$$\Lambda(\rho,s)=23^{s/2}\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)L(\rho,s)=23^{s/2}\Gamma_{{\mathbb C}}(s)L(\rho,s)\;.$$
Note that this is the \emph{same} gamma factor as for ${\mathbb Q}(\sqrt{-23})$.
However the functions are fundamentally different, since
$\zeta_{{\mathbb Q}(\sqrt{-23})}(s)$ has a pole at $s=1$, while $L(\rho,s)$ is an
entire function.
\item It is immediate to show that if we let $L_{\rho,p}(T)=L_{K,p}(T)/(1-T)$
be the local $L$ function for $L(\rho,s)$, we have
$L_{\rho,p}(T)=1-a_pT+\chi_{-23}(p)T^2$, with
$a_p=1$ if $p=23$, $a_p=0$ if $\lgs{-23}{p}=-1$, and
$a_p=1$ or $2$ if $\lgs{-23}{p}=1$.
\end{itemize}
\end{enumerate}
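The identity $a_p=1+\chi_D(p)=\#\{x\in{\mathbb F}_p:\,x^2=D\}$ from the first example above can be spot-checked by brute force (a Python sketch of mine, not from the text; $\chi_D(p)$ is computed by Euler's criterion, and the primes chosen are arbitrary odd primes not dividing $D$):

```python
D = 5
checks = []
for p in (3, 7, 11, 13, 17, 19, 23, 29):        # odd primes not dividing D
    # number of solutions of x^2 = D in F_p
    count = sum(1 for x in range(p) if (x * x - D) % p == 0)
    euler = pow(D, (p - 1) // 2, p)             # Euler's criterion
    chi = 1 if euler == 1 else -1               # the Legendre symbol (D/p)
    checks.append(count == 1 + chi)
```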
\begin{remark} In all of the above examples, the function $\zeta_K(s)$
is \emph{divisible} by the Riemann zeta function $\zeta(s)$, i.e., the function
$\zeta_K(s)/\zeta(s)$ is an \emph{entire function}. This is known for some number
fields $K$, but is \emph{not} known in general, even in degree $d=5$ for
instance: it is a consequence of the more precise \emph{Artin conjecture} on
the holomorphy of Artin $L$-functions.\end{remark}
\subsection{Further Examples in Weight $0$}
It is now time to give examples not coming from number fields.
Define $a_1(n)$ by the formal equality
$$q\prod_{n\ge1}(1-q^n)(1-q^{23n})=\sum_{n\ge1}a_1(n)q^n=q-q^2-q^3+q^6+q^8-\cdots\;,$$
and set $L_1(s)=\sum_{n\ge1}a_1(n)/n^s$. The theory of modular forms
(here of the Dedekind eta function) tells us that $L_1(s)$ will satisfy
exactly the same properties as $L(\rho,s)$ with $\rho$ as above.
Define $a_2(n)$ by the formal equality
$$\dfrac{1}{2}\left(\sum_{(m,n)\in{\mathbb Z}\times{\mathbb Z}}q^{m^2+mn+6n^2}-q^{2m^2+mn+3n^2}\right)=\sum_{n\ge1}a_2(n)q^n\;,$$
and set $L_2(s)=\sum_{n\ge1}a_2(n)/n^s$. The theory of modular forms
(here of theta functions) tells us that $L_2(s)$ will satisfy
exactly the same properties as $L(\rho,s)$.
And indeed, it is an interesting \emph{theorem} that
$$L_1(s)=L_2(s)=L(\rho,s)\;.$$
The ``moral'' of this story is the following, which can be made mathematically
precise: if two $L$-functions are holomorphic and have the same gamma factor
(including, in this case, the factor $23^{s/2}$), then (conjecturally in general) they
belong to a finite-dimensional vector space. Thus in particular if this vector
space is $1$-dimensional and the $L$-functions are suitably normalized
(usually with $a(1)=1$), this implies as here that they are equal.
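The coefficient identity $a_1(n)=a_2(n)$ underlying $L_1=L_2$ can be verified directly for the first few $n$ by expanding both $q$-series (a Python sketch of mine, not from the text; the theta difference is kept doubled to avoid a division by $2$):

```python
M = 40  # compare the coefficients a(1), ..., a(39) of both q-series

# a_1(n): expand q * prod_{n>=1} (1-q^n)(1-q^{23n}) mod q^M.
eta = [0] * M
eta[0] = 1
for n in range(1, M):
    for k in (n, 23 * n):
        if k < M:                       # multiply the series by (1 - q^k)
            for i in range(M - 1, k - 1, -1):
                eta[i] -= eta[i - k]
a1 = [0] * M
for i in range(M - 1):
    a1[i + 1] = eta[i]                  # the prefactor q shifts by one

# 2*a_2(n): difference of the two theta series, kept doubled.
twice_a2 = [0] * M
R = 40                                  # |m|, |n| <= R covers all exponents < M
for m in range(-R, R + 1):
    for n in range(-R, R + 1):
        for Q, sgn in ((m*m + m*n + 6*n*n, 1), (2*m*m + m*n + 3*n*n, -1)):
            if 0 < Q < M:
                twice_a2[Q] += sgn

match = all(twice_a2[k] == 2 * a1[k] for k in range(1, M))
```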
\subsection{Examples in Weight $1$}
Although we have not yet defined the notion of weight, let me give two
further examples.
Define $a_3(n)$ by the formal equality
$$q\prod_{n\ge1}(1-q^n)^2(1-q^{11n})^2=\sum_{n\ge1}a_3(n)q^n=q-2q^2-q^3+2q^4+\cdots\;,$$
and set $L_3(s)=\sum_{n\ge1}a_3(n)/n^s$. The theory of modular forms
(again of the Dedekind eta function) tells us that $L_3(s)$ will satisfy
the following properties, analogous to but more general than those satisfied by
$L_1(s)=L_2(s)=L(\rho,s)$:
\begin{itemize}\item It has an analytic continuation to the whole complex
plane, and if we set
$$\Lambda_3(s)=11^{s/2}\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)L_3(s)=11^{s/2}\Gamma_{{\mathbb C}}(s)L_3(s)\;,$$
we have the functional equation $\Lambda_3(2-s)=\Lambda_3(s)$. Note the crucial
difference that here $1-s$ is replaced by $2-s$.
\item There exists an Euler product $L_3(s)=\prod_{p\in P}1/L_{3,p}(1/p^s)$
similar to the preceding ones in that $L_{3,p}(T)$ is for all but a finite
number of $p$ a second degree polynomial in $T$. More precisely,
if $p=11$ we have $L_{3,p}(T)=1-T$, while for $p\ne11$ we have
$L_{3,p}(T)=1-a_pT+pT^2$, for some $a_p$ such that $|a_p|<2\sqrt{p}$.
This is expressed more vividly by saying that for $p\ne11$ we have
$L_{3,p}(T)=(1-\mathfrak alpha_pT)(1-\beta_pT)$, where the reciprocal roots $\mathfrak alpha_p$ and
$\beta_p$ have modulus exactly equal to $p^{1/2}$. Note again the crucial
difference with ``weight $0$'' in that the coefficient of $T^2$ is equal to
$p$ instead of ${\mathfrak p}m1$, hence that $|\mathfrak alpha_p|=|\beta_p|=p^{1/2}$ instead of $1$.
\mathbf end{itemize}
As a second example, consider the equation $y^2+y=x^3-x^2-10x-20$ (an
elliptic curve $E$), and denote by $N_q(E)$ the number of projective points
of this curve over the finite field ${\mathbb F}_q$ (it is clear that there is a unique
point at infinity, so if you want $N_q(E)$ is one plus the number of affine
points). There is a universal recipe to construct an $L$-function out of
a variety which we will recall below, but here let us simplify: for $p$
prime, set $a_p=p+1-N_p(E)$ and
$$L_4(s)=\prod_{p\in P}1/(1-a_pp^{-s}+\chi(p)p^{1-2s})\;,$$
where $\chi(p)=1$ for $p\ne11$ and $\chi(11)=0$. It is not difficult to show
that $L_4(s)$ satisfies exactly the same properties as $L_3(s)$ (using for
instance the elementary theory of modular curves), so by the moral explained
above, it should not come as a surprise that in fact $L_3(s)=L_4(s)$.
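The equality $L_3(s)=L_4(s)$ can be checked coefficient by coefficient on a computer; here is a minimal sketch comparing the $q$-expansion of the eta product with brute-force point counts on $y^2+y=x^3-x^2-10x-20$ (the truncation order and the list of primes are ad hoc choices; the bad prime $11$ is skipped):

```python
B = 40                      # truncation order of the q-expansion

# coefficients of q * prod_{n>=1} (1-q^n)^2 (1-q^{11n})^2
poly = [0] * (B + 1)
poly[1] = 1                 # start from the factor q
for n in range(1, B + 1):
    for m in (n, 11 * n):
        if m > B:
            continue
        for _ in range(2):  # each factor appears squared
            new = poly[:]
            for i in range(B + 1 - m):
                new[i + m] -= poly[i]
            poly = new
a3 = poly                   # a3[n] = a_3(n)

def N_p(p):
    # projective points of y^2 + y = x^3 - x^2 - 10x - 20 over F_p:
    # the affine solutions, plus the single point at infinity
    count = 1
    for x in range(p):
        rhs = (x**3 - x**2 - 10 * x - 20) % p
        count += sum((y * y + y) % p == rhs for y in range(p))
    return count

for p in [2, 3, 5, 7, 13, 17, 19, 23, 29, 31, 37]:
    assert a3[p] == p + 1 - N_p(p)
```

For instance $a_3(2)=-2$, matching $N_2(E)=5$.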
\subsection{Definition of a Global $L$-Function}
With all these examples at hand, it is quite natural to give the following
definition of an $L$-function, which is not the most general but will be
sufficient for us.
\begin{definition} Let $d$ be a nonnegative integer. We say that a
Dirichlet series $L(s)=\sum_{n\ge1}a(n)n^{-s}$ with $a(1)=1$ is an
$L$-function of \emph{degree $d$} and \emph{weight $0$} if the following
conditions are satisfied:
\begin{enumerate}\item (Ramanujan bound): we have $a(n)=O(n^\varepsilon)$ for
all $\varepsilon>0$, so that in particular the Dirichlet series converges
absolutely and uniformly in any half-plane $\Re(s)\ge\sigma>1$.
\item (Meromorphy and functional equation): the function $L(s)$ can be
extended to ${\mathbb C}$ as a meromorphic function of order $1$ (see appendix) having
a finite number of poles; furthermore, there exist complex numbers $\lambda_i$ with
nonnegative real part and an integer $N$, called the \emph{conductor}, such
that if we set
$$\gamma(s)=N^{s/2}\prod_{1\le i\le d}\Gamma_{{\mathbb R}}(s+\lambda_i)\text{\quad and\quad}\Lambda(s)=\gamma(s)L(s)\;,$$
we have the \emph{functional equation}
$$\Lambda(s)=\omega\ov{\Lambda(1-\ov{s})}$$
for some complex number $\omega$, called the \emph{root number}, which will
necessarily be of modulus~$1$.
\item (Euler product): for $\Re(s)>1$ we have an Euler product
$$L(s)=\prod_{p\in P}1/L_p(1/p^s)\text{\quad with\quad}L_p(T)=\prod_{1\le j\le d}(1-\alpha_{p,j}T)\;,$$
and the reciprocal roots $\alpha_{p,j}$ are called the \emph{Satake parameters}.
\item (Local Riemann hypothesis): for $p\nmid N$ we have $|\alpha_{p,j}|=1$,
and for $p\mid N$ we have either $\alpha_{p,j}=0$ or $|\alpha_{p,j}|=p^{-m/2}$
for some $m$ such that $1\le m\le d$.
\end{enumerate}\end{definition}
\begin{remarks}{\rm \begin{enumerate}
\item More generally, Selberg has defined a wider class of $L$-functions,
which first allows factors $\Gamma(\mu_i s+\lambda_i)$ with $\mu_i$ positive real in the gamma factor, and second allows weaker assumptions on $N$ and the Satake parameters.
\item Note that $d$ is \emph{both} the number of $\Gamma_{{\mathbb R}}$ factors
\emph{and} the degree in $T$ of the Euler factors $L_p(T)$, at
least for $p\nmid N$, while the degree decreases for the ``bad'' primes $p$
which divide $N$.
\item The Ramanujan bound (1) is easily seen to be a consequence of the
conditions that we have imposed on the Satake parameters; in Selberg's more
general definition this is not the case.
\end{enumerate}}
\end{remarks}
It is important to generalize this definition in the following trivial way:
\begin{definition} Let $w$ be a nonnegative integer. A function $L(s)$ is said
to be an $L$-function of degree $d$ and \emph{motivic weight} $w$ if
$L(s+w/2)$ is an $L$-function of degree $d$ and weight $0$ as above
(with the slight additional technical condition that the nonzero Satake
parameters $\alpha_{p,j}$ for $p\mid N$ satisfy $|\alpha_{p,j}|=p^{-m/2}$ with
$1\le m\le w$).
\end{definition}
For an $L$-function of motivic weight $w$, it is clear that the functional equation
is $\Lambda(s)=\omega\ov{\Lambda(k-\ov{s})}$ with $k=w+1$,
and that the Satake parameters will satisfy $|\alpha_{p,j}|=p^{w/2}$ for
$p\nmid N$, while for $p\mid N$ we have either $\alpha_{p,j}=0$ or
$|\alpha_{p,j}|=p^{(w-m)/2}$ for some integer $m$ such that $1\le m\le w$.
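Indeed, writing $M(s)=L(s+w/2)$ for the associated weight-$0$ function, with completed function $\Lambda_M(s)=\gamma_M(s)M(s)$ satisfying $\Lambda_M(s)=\omega\ov{\Lambda_M(1-\ov{s})}$, and setting $\Lambda(s)=\Lambda_M(s-w/2)$, the shifted functional equation is immediate:
$$\Lambda(s)=\Lambda_M\left(s-\frac{w}{2}\right)=\omega\,\ov{\Lambda_M\left(1-\ov{s}+\frac{w}{2}\right)}=\omega\,\ov{\Lambda(w+1-\ov{s})}\;,$$
so that indeed $k=w+1$. Similarly, if $\beta_{p,j}$ are the Satake parameters of $M$, then those of $L$ are $\alpha_{p,j}=p^{w/2}\beta_{p,j}$, which converts $|\beta_{p,j}|=1$ into $|\alpha_{p,j}|=p^{w/2}$ for $p\nmid N$.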
Thus, the first examples that we have given are all of weight $0$, and
the last two (which are in fact equal) are of weight $1$. For those who
know the theory of modular forms, note that the motivic weight (that we
denote by $w$) is one less than the weight $k$ of the modular form.
\section{Origins of $L$-Functions}
As can already be seen in the above examples, it is possible to construct
$L$-functions in many different ways. In the present section, we look at three
different ways of constructing $L$-functions: the first is through the theory of
modular forms, or more generally of \emph{automorphic forms} (of which we have
seen a few examples above); the second is through Weil's construction of
local $L$-functions attached to varieties, and more generally
to \emph{motives}; and the third, a special but much simpler case of the
second, is through the theory of \emph{hypergeometric motives}.
\subsection{$L$-Functions coming from Modular Forms}
The basic notion that we need here is that of the \emph{Mellin transform}:
if $f(t)$ is a nice function tending to zero exponentially fast at infinity,
we can define its Mellin transform $\Lambda(f;s)=\int_0^\infty t^sf(t)\,dt/t$,
the integral being written in this way because $dt/t$ is the invariant Haar
measure on the locally compact group ${\mathbb R}_{>0}$. If we set $g(t)=t^{-k}f(1/t)$
and assume that $g$ also tends to zero exponentially fast at infinity,
it is immediate to see by a change of variable that
$\Lambda(g;s)=\Lambda(f;k-s)$. This is exactly the type of functional equation
needed for an $L$-function.
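This change of variable is easy to test numerically. A minimal sketch, with the ad hoc test function $f(t)=e^{-t-1/t}$ (chosen because it decays fast at both $0$ and $\infty$), $k=4$, and a plain Riemann sum on a logarithmic grid:

```python
import math

# Lambda(f;s) = int_0^infty t^s f(t) dt/t, via the substitution t = e^u
# (so that dt/t = du) and a plain Riemann sum over u
def mellin(h, s, lo=-30.0, hi=30.0, n=12000):
    step = (hi - lo) / n
    return step * sum(math.exp(s * (lo + i * step)) * h(math.exp(lo + i * step))
                      for i in range(n + 1))

k = 4.0
f = lambda t: math.exp(-t - 1.0 / t)   # decays fast at both 0 and infinity
g = lambda t: t**(-k) * f(1.0 / t)     # g(t) = t^{-k} f(1/t)

s = 1.7
lhs = mellin(g, s)
rhs = mellin(f, k - s)
assert abs(lhs - rhs) / abs(rhs) < 1e-9   # Lambda(g;s) = Lambda(f;k-s)
```

The two quadrature sums agree to rounding error, since the change of variable $t\mapsto1/t$ maps the grid to itself.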
The other fundamental property of $L$-functions that we need is the existence
of an Euler product of a specific type. This will come from the theory of
\emph{Hecke operators}.
{\bf A crash course in modular forms} (see for instance \cite{Coh-Str} for a
complete introduction): we use the notation $q=e^{2\pi i\tau}$,
for $\tau\in{\mathbb C}$ such that $\Im(\tau)>0$, so that $|q|<1$. A function
$f(\tau)=\sum_{n\ge1}a(n)q^n$ is said to be a modular cusp form of (positive,
even) weight $k$ if $f(-1/\tau)=\tau^kf(\tau)$ for all $\tau$ with $\Im(\tau)>0$.
Note that because of the notation $q$ we also have $f(\tau+1)=f(\tau)$,
hence it is easy to deduce that $f((a\tau+b)/(c\tau+d))=(c\tau+d)^kf(\tau)$
whenever $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$ is an integer matrix of determinant $1$.
We define the $L$-function attached to $f$ as $L(f;s)=\sum_{n\ge1}a(n)/n^s$.
The Mellin transform $\Lambda(f;s)$ of the function $f(it)$ is on the
one hand equal to $(2\pi)^{-s}\Gamma(s)L(f;s)=(1/2)\Gamma_{{\mathbb C}}(s)L(f;s)$, and on the
other hand, as we have seen above, satisfies the functional equation
$\Lambda(f;k-s)=(-1)^{k/2}\Lambda(f;s)$.
One can easily show the fundamental fact that the vector space of modular
forms of given weight $k$ is \emph{finite-dimensional}, and compute its
dimension explicitly.
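For instance, in weight $12$ the space of cusp forms is $1$-dimensional, spanned by the discriminant function $\Delta(\tau)=q\prod_{n\ge1}(1-q^n)^{24}$, and the modularity relation $f(-1/\tau)=\tau^{12}f(\tau)$ can be checked numerically on its $q$-expansion. A sketch (the truncation order and test point are ad hoc choices):

```python
import cmath, math

B = 60  # truncation order; the terms decay like |q|^n, which is ample here

# Ramanujan tau coefficients from q * prod_{n>=1} (1 - q^n)^24
poly = [0] * (B + 1)
poly[1] = 1
for n in range(1, B + 1):
    for _ in range(24):              # multiply by (1 - q^n), 24 times
        new = poly[:]
        for i in range(B + 1 - n):
            new[i + n] -= poly[i]
        poly = new
tau = poly                           # tau[1] = 1, tau[2] = -24, ...

def delta(z):                        # z in the upper half-plane
    q = cmath.exp(2j * math.pi * z)
    return sum(tau[n] * q**n for n in range(1, B + 1))

t = 1.3
lhs = delta(-1 / (1j * t))           # Delta(-1/tau) at tau = i*t
rhs = (1j * t)**12 * delta(1j * t)   # tau^12 Delta(tau)
assert abs(lhs - rhs) / abs(rhs) < 1e-9
```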
If $f(\tau)=\sum_{n\ge1}a(n)q^n$ is a modular form and $p$ is a prime number,
one defines $T(p)(f)$ by $T(p)(f)=\sum_{n\ge1}b(n)q^n$ with
$b(n)=a(pn)+p^{k-1}a(n/p)$, where $a(n/p)$ is by convention $0$ when
$p\nmid n$, or equivalently
$$T(p)(f)(\tau)=p^{k-1}f(p\tau)+\dfrac{1}{p}\sum_{0\le j<p}f\left(\dfrac{\tau+j}{p}\right)\;.$$
Then $T(p)f$ is also a modular cusp form, so $T(p)$ is an operator on the
space of modular forms, and it is easy to show that the $T(p)$ commute and
are diagonalizable; hence they are simultaneously diagonalizable, and there
exists a basis of common \emph{eigenforms} for all the $T(p)$. Since one can
show that for such an eigenform one has $a(1)\ne0$, we can normalize the
eigenforms by requiring that $a(1)=1$, and we then obtain a canonical basis.
If $f(\tau)=\sum_{n\ge1}a(n)q^n$ is such a \emph{normalized eigenform}, it
follows that the corresponding $L$-function $\sum_{n\ge1}a(n)/n^s$ will indeed
have an Euler product, and using the elementary properties of the operators
$T(p)$, that it will in fact be of the form
$$L(f;s)=\prod_{p\in P}\dfrac{1}{1-a(p)p^{-s}+p^{k-1-2s}}\;.$$
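The eigenform property itself can be tested on coefficients: since the space of weight-$12$ cusp forms is one-dimensional, $\Delta$ is automatically an eigenform, so $b(n)=a(pn)+p^{k-1}a(n/p)$ must equal $a(p)a(n)$ for all $n$. A sketch (truncation order ad hoc):

```python
B = 60
# Ramanujan tau(n) from q * prod_{n>=1} (1 - q^n)^24
poly = [0] * (B + 1)
poly[1] = 1
for n in range(1, B + 1):
    for _ in range(24):
        new = poly[:]
        for i in range(B + 1 - n):
            new[i + n] -= poly[i]
        poly = new
tau = poly

k = 12
for p in (2, 3):
    for n in range(1, B // p + 1):
        # b(n) = a(pn) + p^{k-1} a(n/p), with a(n/p) = 0 when p does not divide n
        b = tau[p * n] + (p**(k - 1) * tau[n // p] if n % p == 0 else 0)
        assert b == tau[p] * tau[n]    # T(p) Delta = tau(p) Delta
```

For $p=2$ this says $T(2)\Delta=-24\,\Delta$, e.g. $\tau(4)+2^{11}\tau(1)=(-24)^2=\tau(2)^2$.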
As a final remark, note that the analytic continuation and functional equation
of this $L$-function are an \emph{elementary consequence} of the definition of
a modular form. This is totally different from the motivic cases that we will
see below, where the analytic continuation is in general completely
\emph{conjectural}.
The above briefly describes the theory of modular forms on the modular group
$\mathrm{PSL}_2({\mathbb Z})$. One can generalize (nontrivially) this theory to \emph{subgroups}
of the modular group, the most important being $\Gamma_0(N)$ (matrices as above
with $N\mid c$), to other \emph{Fuchsian groups}, to forms in several
variables, and even more generally to \emph{reductive groups}.
\subsection{Local $L$-Functions of Algebraic Varieties}
The second very important source of $L$-functions comes from algebraic
geometry. Let $V$ be some algebraic object. In modern terms, $V$ may be a
\emph{motive}, whatever that may mean for the moment, but assume for instance
that $V$ is an algebraic variety, in other words that for each suitable field
$K$, $V(K)$ is the set of common zeros of a family of polynomials in several
variables. If $K$ is a \emph{finite} field ${\mathbb F}_q$ (recall that we must then
have $q=p^n$ for some prime $p$, and that ${\mathbb F}_q$ exists and is unique up to
isomorphism), then $V({\mathbb F}_q)$ is also finite.
After studying a number of special cases, such as elliptic curves
(due to Hasse) and quasi-diagonal hypersurfaces in ${\mathbb P}^d$, in 1949 Weil was
led to make a number of more precise conjectures concerning the number of
\emph{projective} points $|V({\mathbb F}_q)|$, assuming that $V$ is a
\emph{smooth projective} variety, and he proved these conjectures in the special
case of curves (the proof is already quite deep).
The first \emph{Weil conjecture} says that (for $p$ fixed) the number
$|V({\mathbb F}_{p^n})|$ of projective points of $V$ over the finite field ${\mathbb F}_{p^n}$
satisfies a (non-homogeneous) linear recurrence with
constant coefficients. For instance, if $V$ is an \emph{elliptic curve}
defined over ${\mathbb Q}$ (such as $y^2=x^3+x+1$) and if we set
$a(p^n)=p^n+1-|V({\mathbb F}_{p^n})|$, then
$$a(p^{n+1})=a(p)a(p^n)-\chi(p)pa(p^{n-1})\;,$$
where $\chi(p)=1$ unless $p$ divides the so-called \emph{conductor} of the
elliptic curve, in which case $\chi(p)=0$ (this is not quite true, because
we must choose a suitable model for $V$, but it suffices for us).
\begin{exercise} Using the above recursion for $a(p^n)$, find the corresponding
recursion for $v_n=|V({\mathbb F}_{p^n})|$.
\end{exercise}
\begin{exercise}\begin{enumerate}\item Given a prime $p$ and $n\ge1$, write a
computer program which runs through all the elements of ${\mathbb F}_{p^n}$,
represented in a suitable way.
\item For the elliptic curve $y^2=x^3+x+1$, compute (on a computer) $a(5)$
and $a(5^2)$, and check the recursion.
\item Similarly, compute $a(31)$ and $a(31^2)$, and check the recursion
(here $\chi(31)=0$).\end{enumerate}
\end{exercise}
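A sketch of part (2), representing ${\mathbb F}_{25}$ as ${\mathbb F}_5[s]/(s^2-3)$ (pairs $(a,b)=a+bs$; this works since $3$ is not a square modulo $5$). With the convention $a(p^0)=2$ (the Euler factor has degree $2$), the recursion for $n=1$ reads $a(p^2)=a(p)^2-2p$:

```python
p = 5

def rhs_p(x):                       # x^3 + x + 1 over F_p
    return (x**3 + x + 1) % p

# a(5): brute force over F_5 (projective points = affine points + infinity)
N5 = 1 + sum((y * y) % p == rhs_p(x) for x in range(p) for y in range(p))
a5 = p + 1 - N5

# F_25 = F_5[s]/(s^2 - 3): elements (a, b) = a + b*s, with s^2 = 3
def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c + 3 * b * d) % p, (a * d + b * c) % p)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

F25 = [(a, b) for a in range(p) for b in range(p)]

def rhs_q(x):                       # x^3 + x + 1 over F_25
    return add(add(mul(mul(x, x), x), x), (1, 0))

N25 = 1 + sum(mul(y, y) == rhs_q(x) for x in F25 for y in F25)
a25 = p * p + 1 - N25

assert a5 == -3                     # hand-checkable
assert a25 == a5 * a5 - 2 * p       # the recursion, with a(p^0) = 2
```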
This first Weil conjecture was proved by Dwork in the early 1960s. It is
better reformulated in terms of \emph{local $L$-functions} as follows:
define the Hasse--Weil zeta function of $V$ as the \emph{formal power series}
in $T$ given by the formula
$$Z_p(V;T)=\exp\Biggl(\sum_{n\ge1}\dfrac{|V({\mathbb F}_{p^n})|}{n}T^n\Biggr)\;.$$
There should be no difficulty in understanding this: setting for simplicity
$v_n=|V({\mathbb F}_{p^n})|$, we have
\begin{align*}
Z_p(V;T)&=\exp(v_1T+v_2T^2/2+v_3T^3/3+\cdots)\\
&=1+v_1T+(v_1^2+v_2)T^2/2+(v_1^3+3v_1v_2+2v_3)T^3/6+\cdots\end{align*}
For instance, if $V$ is projective $d$-space ${\mathbb P}^d$, we have
$|V({\mathbb F}_q)|=q^d+q^{d-1}+\cdots+1$, and since
$\sum_{n\ge1}p^{nj}T^n/n=-\log(1-p^jT)$, we deduce that
$Z_p({\mathbb P}^d;T)=1/((1-T)(1-pT)\cdots(1-p^dT))$.
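This identity is easy to verify on a computer as an equality of power series. A sketch for ${\mathbb P}^2$ and $p=3$ (arbitrary choices), using exact rational arithmetic and the standard recurrence $nE_n=\sum_{k=1}^n v_kE_{n-k}$ for the coefficients of $E=\exp(\sum_{n\ge1}v_nT^n/n)$:

```python
from fractions import Fraction

p, d, B = 3, 2, 8                  # P^2 over F_3, series computed up to T^B
# v_n = |P^d(F_{p^n})| = 1 + p^n + ... + p^{dn}
v = [None] + [sum(p**(j * n) for j in range(d + 1)) for n in range(1, B + 1)]

# E = exp(sum v_n T^n / n): differentiating gives n E_n = sum_{k<=n} v_k E_{n-k}
E = [Fraction(1)]
for n in range(1, B + 1):
    E.append(sum(Fraction(v[k]) * E[n - k] for k in range(1, n + 1)) / n)

# coefficients of 1/((1-T)(1-pT)(1-p^2 T)), by multiplying geometric series
C = [sum(p**a * (p**2)**b
         for a in range(n + 1) for b in range(n + 1 - a))
     for n in range(B + 1)]

assert E == [Fraction(c) for c in C]
```

In particular all the $E_n$ come out as integers, which is already a nontrivial consistency check on the exponential.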
In terms of this language, the existence of the recurrence relation is
equivalent to the fact that $Z_p(V;T)$ is a \emph{rational function} of $T$,
and as already mentioned, this was proved by Dwork in 1960.
The second conjecture of Weil states that this rational function is of the form
$$Z_p(V;T)=\prod_{0\le i\le 2d}P_{i,p}(V;T)^{(-1)^{i+1}}=\dfrac{P_{1,p}(V;T)\cdots P_{2d-1,p}(V;T)}{P_{0,p}(V;T)P_{2,p}(V;T)\cdots P_{2d,p}(V;T)}\;,$$
where $d=\dim(V)$ and the $P_{i,p}$ are polynomials in $T$.
Furthermore, a basic result in algebraic geometry called Poincar\'e duality
implies that $Z_p(V;1/(p^dT))=\pm p^{de/2}T^eZ_p(V;T)$, where $e$ is the
degree of the rational function (called the Euler characteristic of $V$),
which means that there is a relation between $P_{i,p}$ and $P_{2d-i,p}$.
In addition, the $P_{i,p}$ have integer coefficients, and $P_{0,p}(T)=1-T$,
$P_{2d,p}(T)=1-p^dT$. For instance, for \emph{curves} this means that
$Z_p(V;T)=P_1(V;T)/((1-T)(1-pT))$, where the polynomial $P_1$ is of even degree
$2g$ ($g$ being the so-called \emph{genus} of the curve) and satisfies
$p^{dg}P_1(V;1/(p^dT))=\pm P_1(V;T)$.
For knowledgeable readers, in highbrow language: the polynomial $P_{i,p}$ is
the reverse characteristic polynomial of the Frobenius endomorphism acting on
the $i$th $\ell$-adic cohomology group $H^i(V;{\mathbb Q}_{\ell})$, for any $\ell\ne p$.
The third, most important, and most difficult of the Weil conjectures is the
local \emph{Riemann hypothesis}, which says that the reciprocal roots of
$P_{i,p}$ have modulus exactly equal to $p^{i/2}$, in other words that
$$P_{i,p}(V;T)=\prod_j(1-\alpha_{i,j}T)\text{\quad with\quad}|\alpha_{i,j}|=p^{i/2}\;.$$
This last conjecture is the most important in applications.
The Weil conjectures were completely proved by Deligne in the early 1970s,
following a strategy already put forward by Weil; this achievement is
considered one of the two or three major accomplishments of mathematics of
the second half of the twentieth century.
\begin{exercise} (You need to know some algebraic number theory for this.)
Let $P\in{\mathbb Z}[X]$ be a monic irreducible polynomial and let $K={\mathbb Q}(\theta)$, where
$\theta$ is a root of $P$, be the corresponding number field. Assume that
$p^2\nmid\disc(P)$. Show that the Hasse--Weil zeta function at $p$ of the
$0$-dimensional variety defined by $P=0$ is the Euler factor at $p$ of
the Dedekind zeta function $\zeta_K(s)$ attached to $K$, where $p^{-s}$ is
replaced by $T$.\end{exercise}
\subsection{Global $L$-Function Attached to a Variety}
We are now ready to ``globalize'' the above construction and build
\emph{global} $L$-functions attached to a variety.
Let $V$ be an algebraic variety defined over ${\mathbb Q}$, say. We assume that
$V$ is ``nice'', meaning for instance that we choose $V$ to be projective,
smooth, and absolutely irreducible. For all but a finite number of primes $p$
we can consider $V$ as a smooth variety over ${\mathbb F}_p$, so for each $i$ we can
set $L_i(V;s)=\prod_p 1/P_{i,p}(V;p^{-s})$, where the product
is over all the ``good'' primes and the $P_{i,p}$ are as above. The factor
$1/P_{i,p}(V;p^{-s})$ is as usual called the Euler factor at $p$. These
functions $L_i$ can be called the global $L$-functions attached to $V$.
This na\"\i ve definition is insufficient to construct interesting objects.
First, and most importantly, we have omitted a finite number of
Euler factors at the so-called ``bad primes'', which include in particular
those for which $V$ is not smooth over ${\mathbb F}_p$; although there do
exist cohomological recipes to define them, as far as the author is aware
these recipes do not really give practical algorithms. (In highbrow language,
these recipes are based on the computation of $\ell$-adic cohomology groups,
for which the known algorithms are useless in practice; in the simplest case
of Artin $L$-functions, one must determine the action of Frobenius on the
vector space fixed by the inertia group, which can be done reasonably easily.)
Another, much less important, reason is the fact that most of the $L_i$ are
uninteresting or related. For instance, in the case of elliptic curves seen
above, we have (up to a finite number of Euler factors)
$L_0(V;s)=\zeta(s)$ and $L_2(V;s)=\zeta(s-1)$, so the only interesting $L$-function,
called \emph{the} $L$-function of the elliptic curve, is the function
$L_1(V;s)=\prod_p(1-a(p)p^{-s}+\chi(p)p^{1-2s})^{-1}$ (if the model of
the curve is chosen to be \emph{minimal}, this happens to be the correct
definition, including for the ``bad'' primes). For varieties of higher
dimension $d$, as we have mentioned as part of the Weil conjectures,
the functions $L_i$ and $L_{2d-i}$ are related by Poincar\'e duality, and
$L_0$ and $L_{2d}$ are translates of the Riemann zeta function (as above), so
only the $L_i$ for $1\le i\le d$ need to be studied.
\subsection{Hypergeometric Motives}
Still another way to construct $L$-functions is through the use of
\emph{hypergeometric motives}, due to Katz and Rodriguez-Villegas. Although
this construction is a special case of the construction of $L$-functions of
varieties studied above, the corresponding variety is \emph{hidden} (although
it can be recovered if desired), and the computations are in some sense much
simpler.
Let me give a short and unmotivated introduction to the subject. Let
$\gamma=(\gamma_n)_{n\ge1}$ be a finite sequence of (positive or negative) integers
satisfying the essential condition $\sum_nn\gamma_n=0$.
For any finite field ${\mathbb F}_q$ with $q=p^f$ and any character $\chi$ of ${\mathbb F}_q^*$,
recall that the Gauss sum $\mathfrak{g}(\chi)$ is defined by
$$\mathfrak{g}(\chi)=\sum_{x\in{\mathbb F}_q^*}\chi(x)\exp(2\pi i\Tr_{{\mathbb F}_q/{\mathbb F}_p}(x)/p)\;,$$
see Section \ref{sec:gausssum} below. We set
$$Q_q(\gamma;\chi)=\prod_{n\ge1}\mathfrak{g}(\chi^n)^{\gamma_n}$$
and, for any $t\in{\mathbb F}_q\setminus\{0,1\}$,
$$a_q(\gamma;t)=\dfrac{1}{1-q}\left(1+\sum_{\chi\ne\varepsilon}\chi(Mt)Q_q(\gamma;\chi)\right)\;,$$
where $\varepsilon$ is the trivial character and $M=\prod_nn^{n\gamma_n}$ is a
normalizing constant (this is not quite the exact formula, but it will
suffice for our purposes). The theorem of Katz is that for $t\ne0,1$ the
quantity $a_q(\gamma;t)$ is the \emph{trace of Frobenius} on some \emph{motive}
\emph{defined over ${\mathbb Q}$}.
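As a quick sanity check on the Gauss sum (for $q=p$ prime, where the trace is the identity), recall the classical fact that $|\mathfrak{g}(\chi)|^2=p$ for every nontrivial $\chi$, while $\mathfrak{g}(\varepsilon)=-1$. A sketch, building the characters from a generator of ${\mathbb F}_p^*$ (the choices $p=11$, $g=2$ are ad hoc):

```python
import cmath, math

p = 11
g = 2                               # 2 generates F_11^* (it has order 10)
assert sorted(pow(g, j, p) for j in range(p - 1)) == list(range(1, p))

# dlog[x] = discrete logarithm of x to base g
dlog = {pow(g, j, p): j for j in range(p - 1)}

def gauss(k):                       # Gauss sum of chi_k, chi_k(g^j) = zeta^{kj}
    zeta = cmath.exp(2j * math.pi / (p - 1))
    return sum(zeta**(k * dlog[x]) * cmath.exp(2j * math.pi * x / p)
               for x in range(1, p))

for k in range(1, p - 1):           # nontrivial characters
    assert abs(abs(gauss(k))**2 - p) < 1e-9
# trivial character: the sum of all nontrivial p-th roots of unity, i.e. -1
assert abs(gauss(0) - (-1)) < 1e-9
```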
In the language of $L$-functions this means the following:
define as usual the local $L$-function at $p$ by the formal power series
$$L_p(\gamma;t;T)=\exp\left(\sum_{f\ge1}a_{p^f}(\gamma;t)\dfrac{T^f}{f}\right)\;.$$
Then $L_p$ is a rational function of $T$ and satisfies the local Riemann
hypothesis, and if we set
$$L(\gamma;t;s)=\prod_pL_p(\gamma;t;p^{-s})^{-1}\;,$$
then $L$, once completed at the ``bad'' primes, should be a global $L$-function
of the standard type described above.
Let me give one of the simplest examples of a hypergeometric motive and show
how one can recover the underlying algebraic variety. We choose
$\gamma_1=4$, $\gamma_2=-2$, and $\gamma_n=0$ for $n>2$, which does satisfy the condition
$\sum_nn\gamma_n=0$ (we could choose the simpler values $\gamma_1=2$, $\gamma_2=-1$,
but this would give a zero-dimensional variety, i.e., a number field, which is
less representative of the general case). We thus have
$Q_q(\gamma;\chi)=\mathfrak{g}(\chi)^4/\mathfrak{g}(\chi^2)^2$ and $M=1/4$. By the results on
Jacobi sums that we will see below (Proposition \ref{jacgaufq}), if $\chi^2$
is not the trivial character $\varepsilon$ we have $Q_q(\gamma;\chi)=J(\chi,\chi)^2$,
where $J(\chi,\chi)=\sum_{x\in{\mathbb F}_q\setminus\{0,1\}}\chi(x)\chi(1-x)$. As
mentioned above, we did not give the precise formula; here it simply
corresponds to setting $Q_q(\gamma;\chi)=J(\chi,\chi)^2$, including when
$\chi^2=\varepsilon$. Thus
$$a_q(\gamma;t)=\dfrac{1}{1-q}\left(1+\sum_{\chi\ne\varepsilon}\chi(t/4)J(\chi,\chi)^2\right)\;.$$
If, by a temporary abuse of notation\footnote{The definition of $J$ given
below is a sum over all $x\in{\mathbb F}_q$, so that $J(\varepsilon,\varepsilon)=q^2$ and not
$(q-2)^2$.}, we define $J(\varepsilon,\varepsilon)$ by the same formula as above, we have
$J(\varepsilon,\varepsilon)=q-2$, hence $J(\varepsilon,\varepsilon)^2=(q-2)^2$ and
$$a_q(\gamma;t)=\dfrac{1}{1-q}\left(1-(q-2)^2+\sum_{\chi}\chi(t/4)J(\chi,\chi)^2\right)\;.$$
Now,
$$\sum_{\chi}\chi(t/4)J(\chi,\chi)^2=\sum_{x,y\in{\mathbb F}_q\setminus\{0,1\}}\sum_{\chi}\chi(t/4)\chi(x)\chi(1-x)\chi(y)\chi(1-y)\;.$$
The point of writing it this way is that, because of the orthogonality of
characters (Exercise \ref{exoorth} below), the sum on $\chi$ vanishes unless
the argument is equal to $1$, in which case it is equal to $q-1$, so that
$$\sum_{\chi}\chi(t/4)J(\chi,\chi)^2=(q-1)N_q(t)\;,\text{\quad where\quad }N_q(t)=\sum_{\substack{x,y\in{\mathbb F}_q\setminus\{0,1\}\\(t/4)x(1-x)y(1-y)=1}}1$$
is the number of \emph{affine} points over ${\mathbb F}_q$ of the algebraic variety
defined by $(t/4)x(1-x)y(1-y)=1$ (which automatically implies that $x$ and $y$
are different from $0$ and $1$). We have thus shown that
$$a_q(\gamma;t)=\dfrac{1}{1-q}(1-(q-2)^2+(q-1)N_q(t))=q-3-N_q(t)\;.$$
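The orthogonality computation above can be confirmed numerically for small $q=p$. A sketch with the ad hoc choices $p=7$, $t=2$, the complex characters built from a generator of ${\mathbb F}_7^*$, and $J(\varepsilon,\varepsilon)=q-2$ as above:

```python
import cmath, math

p, t = 7, 2
g = 3                                        # 3 generates F_7^*
dlog = {pow(g, j, p): j for j in range(p - 1)}
zeta = cmath.exp(2j * math.pi / (p - 1))
chi = lambda k, x: zeta**(k * dlog[x % p])   # chi_k(x), for x nonzero mod p

t4 = (t * pow(4, -1, p)) % p                 # t/4 in F_7

def J(k):                                    # J(chi_k, chi_k), sum over x != 0, 1
    return sum(chi(k, x) * chi(k, 1 - x) for x in range(2, p))

lhs = sum(chi(k, t4) * J(k)**2 for k in range(p - 1))

N = sum((t4 * x * (1 - x) * y * (1 - y)) % p == 1
        for x in range(2, p) for y in range(2, p))

# sum_chi chi(t/4) J(chi,chi)^2 = (q-1) N_q(t)
assert abs(lhs - (p - 1) * N) < 1e-6
a = p - 3 - N
# via the exercise below, a_q is the trace of an elliptic curve,
# so the Hasse bound |a_q| <= 2 sqrt(q) must hold
assert a * a <= 4 * p
```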
\begin{exercise} By making the change of variables $X=(4/t)(1-1/x)$,
$Y=(4/t)(y-1)(1-1/x)$, show that
$$a_q(\gamma;t)=q+1-|E({\mathbb F}_q)|\;,$$
where $|E({\mathbb F}_q)|$ is the number of projective points over ${\mathbb F}_q$ of the
elliptic curve $Y^2+XY=X(X-4/t)^2$. Thus, the global $L$-function
attached to the hypergeometric motive defined by $\gamma$ is equal to
the $L$-function attached to the elliptic curve $E$.
\end{exercise}
Since we will see below fast methods for computing expressions such as\newline
$\sum_{\chi}\chi(t/4)J(\chi,\chi)^2$, these will consequently give fast
methods for computing $|E({\mathbb F}_q)|$ for an arbitrary elliptic curve $E$.
\begin{exercise}\begin{enumerate}
\item In a similar way, study the hypergeometric motive
corresponding to $\gamma_1=3$, $\gamma_3=-1$, and $\gamma_n=0$ otherwise,
assuming that the correct formula for $Q_q$ corresponds, as above, to
the replacement of quotients of Gauss sums by Jacobi sums for all
characters $\chi$, not only those allowed by Proposition \ref{jacgaufq}.
To find the elliptic curve, use the change of variables $X=-xy$,
$Y=x^2y$.
\item Deduce that the global $L$-function of this hypergeometric motive
is equal to the $L$-function attached to the elliptic curve
$y^2=x^3+x^2+4x+4$ and to the $L$-function attached to the modular form
$q\prod_{n\ge1}(1-q^{2n})^2(1-q^{10n})^2$.
\end{enumerate}
\end{exercise}
\subsection{Other Sources of $L$-Functions}
There exist many other sources of $L$-functions in addition to those that we
have already mentioned, which we will not expand upon:
\begin{itemize}
\item Hecke $L$-functions, attached to Hecke Gr\"ossencharacters.
\item Artin $L$-functions, of which we have met a couple of examples in
Section \ref{sec:one}.
\item Functorial constructions of $L$-functions, such as Rankin--Selberg
$L$-functions, symmetric squares, and more generally symmetric powers.
\item $L$-functions attached to Galois representations.
\item General automorphic $L$-functions.
\end{itemize}
Of course these are not disjoint sets, and as already mentioned, when an
$L$-function lies in an intersection, this usually corresponds to an
interesting arithmetic property. Probably the most general such correspondence
is the \emph{Langlands program}.
\subsection{Results and Conjectures on $L(V;s)$}
The problem with global $L$-functions is that most of their properties are only
\emph{conjectural}. We mention these conjectures in the case of global
$L$-functions attached to algebraic varieties:
\begin{enumerate}\item The function $L_i$ is only defined through its Euler
product, and thanks to the last of Weil's conjectures, the local Riemann
hypothesis proved by Deligne, it converges absolutely for $\Re(s)>1+i/2$.
Note that, with the definitions introduced above, $L_i$ is an $L$-function
of degree $d_i$, the common degree of $P_{i,p}$ for all but a finite number of
$p$, and of motivic weight exactly $w=i$, since the Satake parameters satisfy
$|\alpha_{i,p}|=p^{i/2}$, again by the local Riemann hypothesis.
\item A first conjecture is that $L_i$ should have an
\emph{analytic continuation} to the whole complex plane with a
\emph{finite number} of \emph{known} poles with \emph{known} polar part.
\item A second conjecture, which can in fact be considered as part of the
first, is that this extended $L$-function should satisfy a \emph{functional
equation} when $s$ is changed into $i+1-s$. More precisely, when $L_i$ is
completed with the Euler factors at the ``bad'' primes as mentioned (but not
explained) above, if we set
$$\Lambda_i(V;s)=N^{s/2}\prod_{1\le j\le d_i}\Gamma_{{\mathbb R}}(s+\mu_j)L_i(V;s)\;,$$
then $\Lambda_i(V;i+1-s)=\omega\ov{\Lambda_i(V^*;s)}$ for some variety $V^*$ in
some sense ``dual'' to $V$ and some complex number $\omega$ of modulus $1$. In the
above, $N$ is some integer divisible exactly by all the ``bad'' primes, i.e.,
essentially (but not exactly) the primes for which $V$ reduced modulo $p$ is
not smooth, and the $\mu_j$ are in this case (of varieties) \emph{integers}
which can be computed in terms of the \emph{Hodge numbers} $h^{p,q}$ of
the variety, thanks to a recipe due to Serre \cite{Ser}. The number $i$ is
called the \emph{motivic weight}, and it is important to note that the
``weight'' $k$ usually attached to an $L$-function with functional equation
$s\mapsto k-s$ is equal to $k=i+1$, i.e., to \emph{one more} than the motivic
weight.
In many cases the $L$-function is self-dual, in which case the functional
equation is simply of the form $\Lambda_i(V;i+1-s)=\pm\Lambda_i(V;s)$.
\item The function $\Lambda_i$ should satisfy the generalized Riemann
hypothesis (GRH): all its zeros in ${\mathbb C}$ lie on the vertical line
$\Re(s)=(i+1)/2$. Equivalently, the zeros of $L_i$ are on the one hand
real zeros at certain integers coming from the poles of the gamma factors,
while all the others satisfy $\Re(s)=(i+1)/2$.
\item The function $\Lambda_i$ should have \emph{special values}: at the
integer values of $s$ (called special points), which are those for which
neither the gamma factor at $s$ nor at $i+1-s$ has a pole, it should be
computable ``explicitly'': it should be equal to a \emph{period}
(the integral of an algebraic function on an algebraic cycle) times an algebraic
number. This was stated (conjecturally) in great detail by Deligne in the
1970s.\end{enumerate}
It is conjectured that \emph{all} $L$-functions of degree $d_i$ and motivic
weight $i$ as defined at the beginning should satisfy all the above properties,
not only the $L$-functions coming from varieties.
I now give the status of these conjectures.
\begin{enumerate}\item The first conjecture (analytic continuation) is known
only for a very restricted class of $L$-functions: first, $L$-functions of
degree $1$, which can be shown to be Dirichlet $L$-functions; then $L$-functions of
Hecke characters, $L$-functions attached to modular forms as shown above, and
more generally to \emph{automorphic forms}. For $L$-functions attached to
varieties, one knows this \emph{only} when one can prove that the
corresponding $L$-function comes from an automorphic form: this is how Wiles
proved the analytic continuation of the $L$-function attached to an elliptic
curve defined over ${\mathbb Q}$, a very deep and difficult
result which is, together with Deligne's proof of the Weil conjectures, one of
the most important results of the end of the 20th century. More results of
this type are known for certain higher-dimensional varieties such as certain
\emph{Calabi--Yau manifolds}. Note however that for such simple objects as most
\emph{Artin $L$-functions} (weight $0$, in which case only \emph{meromorphic}
continuation is known) or abelian surfaces, this is not
known, although the work of Brumer--Kramer--Poor--Yuen, as well as more
recent work of G.~Boxer, F.~Calegari, T.~Gee, and V.~Pilloni on the
\emph{paramodular conjecture}, may some day lead to a proof in this last case.
\item The second conjecture on the existence of a functional equation is
of course intimately linked to the first, and the work of Wiles et al.
also proves the existence of this functional equation. But in
addition, in the case of Artin $L$-functions for which only meromorphy
(possibly with infinitely many poles) is known thanks to a theorem of
Brauer, this same theorem implies the functional equation which is thus known
in this case. Also, as mentioned, the Euler factors which we must include
for the ``bad'' primes in order to have a clean functional equation are often
quite difficult to compute.
\item The (global) Riemann hypothesis is not known for \emph{any} global
$L$-function of the type mentioned above, not even for the simplest one, the
Riemann zeta function $\zeta(s)$. Note that it \emph{is} known for other kinds of
$L$-functions such as \emph{Selberg zeta functions}, but these are
functions of order $2$, so they are not in the class considered above.
\item Concerning \emph{special values}: many cases are known, and many more
conjectured. This is probably one of the most \emph{fun} conjectures, since
everything can be computed explicitly to thousands of decimals if desired.
For instance, for modular forms it is a theorem of Manin, for symmetric
squares of modular forms it is a theorem of Rankin, and for higher symmetric
powers one has very precise conjectures of Deligne, which check perfectly
on a computer, but none of them is proved. For the Riemann zeta function
or Dirichlet $L$-functions, of course, all these results, such as $\zeta(2)=\pi^2/6$,
date back essentially to Euler.
In the case of an elliptic curve $E$ over ${\mathbb Q}$, the only special point is
$s=1$, and in this case the whole subject revolves around the \mathbf emph{Birch and
Swinnerton-Dyer conjecture} (BSD) which predicts the behavior of $L_1(E;s)$
around $s=1$. The only known results, already quite deep, due to Kolyvagin
and Gross--Zagier, deal with the case where the \mathbf emph{rank} of the elliptic
curve is $0$ or $1$.
\end{enumerate}
There exist a number of other very important conjectures linked to the behavior
of $L$-functions at integer points which are not necessarily special,
such as the Bloch, Beilinson, Kato, Lichtenbaum, or Zagier conjectures,
but it would carry us too far afield to describe them in general. However,
in the next subsections, we will give three completely explicit numerical
examples of these conjectures, so that the reader can convince himself both
that they are easy to check numerically, and that the results are spectacular.
\subsection{An Explicit Numerical Example of BSD}\label{sec:BSD}
Let us now be a little more precise. Even if this subsection involves notions
not introduced in these notes, we ask the reader to be patient since the
numerical work only involves standard notions.
Let $E$ be an elliptic curve defined over ${\mathbb Q}$. Elliptic curves have a
natural \emph{abelian group} structure, and it is a theorem of Mordell
that the group of rational points on $E$ is \emph{finitely generated}, i.e.,
$E({\mathbb Q})\simeq{\mathbb Z}^r\oplus E_{\text{tors}}({\mathbb Q})$, where $E_{\text{tors}}({\mathbb Q})$ is
a finite group, and $r$ is called the \emph{rank} of the curve.
On the analytic side, we have mentioned that $E$ has an $L$-function $L(E,s)$
(denoted $L_1$ above), and the deep theorem of Wiles et al. says that it has
an analytic continuation to the whole of ${\mathbb C}$ into an entire function with
a functional equation linking $L(E,s)$ to $L(E,2-s)$. The only special point
in the above sense is $s=1$, and a weak form of the Birch and Swinnerton-Dyer
conjecture states that the order of vanishing $v$ of $L(E,s)$ at $s=1$ should
be equal to $r$.
This has been proved for $r=0$ (by Kolyvagin) and for $r=1$
(by Gross--Zagier--Kolyvagin), and nothing is known for $r\ge2$. However,
this is not quite true: if $r=2$ then we cannot have $v=0$ or $1$ by the
previous results, so $v\ge2$. On the other hand, for any given elliptic curve
it is easy to check numerically that $L''(E,1)\ne0$, hence that $v=2$.
Similarly, if $r=3$ we again cannot have $v=0$ or $1$. But for any given
elliptic curve one can compute the \emph{sign} of the functional equation
linking $L(E,s)$ to $L(E,2-s)$, and this will show that if $r=3$ all
derivatives $L^{(k)}(E,1)$ for $k$ even must vanish. Thus we cannot have
$v=2$, and once again for any given $E$ it is easy to check that $L'''(E,1)\ne0$,
hence that $v=3$.
Unfortunately, this argument does not work for $r\ge4$. Assume for instance
$r=4$. The same reasoning will show that $L(E,1)=0$ (by Kolyvagin), that
$L'(E,1)=L'''(E,1)=0$ (because the sign of the functional equation will be
$+$), and that $L''''(E,1)\ne0$ by direct computation. The BSD conjecture
tells us that $L''(E,1)=0$, but this is not known for a single curve.
Let us give the simplest numerical example, based on an elliptic curve with
$r=4$. I emphasize that no knowledge of elliptic curves is needed for this.
For every prime $p$, consider the congruence
$$y^2+xy\equiv x^3-x^2-79x+289\pmod{p}\;,$$
and denote by $N(p)$ the number of pairs $(x,y)\in({\mathbb Z}/p{\mathbb Z})^2$ satisfying it.
We define an arithmetic function $a(n)$ in the following way:
\begin{enumerate}
\item $a(1)=1$.
\item If $p$ is prime, we set $a(p)=p-N(p)$.
\item For $k\ge2$ and $p$ prime, we define $a(p^k)$ by induction:
$$a(p^k)=a(p)a(p^{k-1})-\chi(p)p\cdot a(p^{k-2})\;,$$
where $\chi(p)=1$ unless $p=2$ or $p=117223$, in which case $\chi(p)=0$.
\item For arbitrary $n$, we extend by multiplicativity: if $n=\prod_ip_i^{k_i}$
then $a(n)=\prod_ia(p_i^{k_i})$.
\end{enumerate}
\begin{remarks}{\rm \begin{itemize}
\item The number $117223$ is simply a prime factor
of the discriminant of the cubic equation obtained by completing the square
in the equation of the above elliptic curve.
\item Even though the definition of $a(n)$ looks complicated, it is \emph{very}
easy to compute (see below), taking for instance only a few seconds for a million
terms. In addition $a(n)$ is quite small: for $n=1,2,\dots$ we have
$$a(n)=1,-1,-3,1,-4,3,-5,-1,6,4,-6,-3,-6,5,\ldots$$
\end{itemize}}
\end{remarks}
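The definition above can be transcribed directly in Python (the function names \texttt{a\_of\_p} and \texttt{a\_list} are mine, not from the notes; {\tt Pari/GP} does the same computation far faster):

```python
def a_of_p(p):
    """a(p) = p - N(p), where N(p) counts pairs (x, y) mod p with
    y^2 + x*y = x^3 - x^2 - 79*x + 289."""
    if p == 2:
        N = sum(1 for x in range(2) for y in range(2)
                if (y * y + x * y - (x**3 - x * x - 79 * x + 289)) % 2 == 0)
        return 2 - N
    N = 0
    for x in range(p):
        rhs = (x**3 - x * x - 79 * x + 289) % p
        # complete the square in y: (2y + x)^2 = x^2 + 4*rhs, so for odd p
        # the number of y is 1 + legendre(x^2 + 4*rhs)
        d = (x * x + 4 * rhs) % p
        if d == 0:
            N += 1
        elif pow(d, (p - 1) // 2, p) == 1:
            N += 2
    return p - N

def a_list(B):
    """a(n) for 1 <= n <= B, via the prime-power recursion and
    multiplicativity (smallest-prime-factor sieve)."""
    spf = list(range(B + 1))
    for i in range(2, int(B**0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, B + 1, i):
                if spf[j] == j:
                    spf[j] = i
    a = [0] * (B + 1)
    a[1] = 1
    for n in range(2, B + 1):
        p = spf[n]
        m = n
        while m % p == 0:
            m //= p
        pk = n // m                      # p-part of n
        if m > 1:
            a[n] = a[pk] * a[m]          # multiplicativity
        elif n == p:
            a[n] = a_of_p(p)
        else:                            # n = p^k with k >= 2
            chi = 0 if p in (2, 117223) else 1
            a[n] = a[p] * a[n // p] - chi * p * a[n // (p * p)]
    return a

print(a_list(14)[1:])  # [1, -1, -3, 1, -4, 3, -5, -1, 6, 4, -6, -3, -6, 5]
```

This pure-Python sketch is of course much slower than the few seconds per million terms quoted above, but it reproduces the displayed initial values exactly.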
On the analytic side, define a function $f(x)$ for $x>0$ by
$$f(x)=\int_1^\infty e^{-xt}\log(t)^2\,dt\;.$$
Note that it is very easy to compute this integral to thousands of digits if
desired and also note that $f$ tends to $0$ exponentially fast as $x\to\infty$
(more precisely $f(x)\sim 2e^{-x}/x^3$).
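As a quick numerical sketch (the truncation point and the Simpson rule are my own choices), $f(x)$ and the asymptotic just stated can be compared directly:

```python
import math

def f(x, panels=100_000):
    """f(x) = integral_1^infty exp(-x t) log(t)^2 dt, by composite Simpson
    on [1, 1 + 80/x]; the discarded tail is far below double precision."""
    a, b = 1.0, 1.0 + 80.0 / x
    h = (b - a) / panels
    s = 0.0
    for i in range(panels + 1):
        t = a + i * h
        w = 1 if i in (0, panels) else (4 if i % 2 else 2)
        s += w * math.exp(-x * t) * math.log(t) ** 2
    return s * h / 3.0

# compare with the stated asymptotic f(x) ~ 2 exp(-x) / x^3
for x in (5.0, 10.0, 30.0):
    print(x, f(x), 2 * math.exp(-x) / x**3)
```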
In this specific situation, the BSD conjecture tells us that $S=0$, where
$$S=\sum_{n\ge1}a(n)f\left(\dfrac{2\pi n}{\sqrt{234446}}\right)\;.$$
It takes only a few seconds to compute \emph{thousands} of digits of $S$,
and we can indeed check that $S$ is extremely close to $0$, but as of now
nobody knows how to prove that $S=0$.
\subsection{An Explicit Numerical Example of Beilinson--Bloch}\label{sec:BB}
This subsection is entirely due to V.~Golyshev (personal communication)
whom I heartily thank.
Let $u>1$ be a real parameter. Consider the elliptic curve $E(u)$ with
affine equation
$$y^2=x(x+1)(x+u^2)\;.$$
As usual one can define its $L$-function $L(E(u),s)$ using a general recipe.
The BSD conjecture deals with the value of $L(E(u),s)$ (and its derivatives)
at $s=1$. The Beilinson--Bloch conjectures deal with values at other
integer values of $s$; in the present case we consider $L(E(u),2)$. Once
again it is very easy to compute thousands of decimals of this quantity if
desired.
On the other hand, for $u>1$ consider the function
$$g(u)=2\pi\int_0^1\dfrac{\asin(t)}{\sqrt{1-t^2/u^2}}\,\dfrac{dt}{t}+\pi^2\acosh(u)=\dfrac{\pi^2}{2}\left(2\log(4u)-\sum_{n\ge1}\dfrac{\binom{2n}{n}^2}{n}(4u)^{-2n}\right)\;.$$
The conjecture says that when $u$ is an integer, $L(E(u),2)/g(u)$ should be a
\emph{rational number}. In fact, if we let $N(u)$ be the \emph{conductor}
of $E(u)$ (a notion that I have not defined), then it seems that when
$u\ne4$ and $u\ne8$ we even have $F(u)=N(u)L(E(u),2)/g(u)\in{\mathbb Z}$.
Once again, this is a conjecture which can immediately be tested on
modern computer algebra systems such as {\tt Pari/GP}. For instance, for
$u=2,3,\ldots$ we find \emph{numerically} to thousands of decimal digits
(remember that nothing is proved)
$$F(u)=1,2,4/11,8,32,8,4/3,8,32,64,8,96,256,48,16,16,192,\ldots$$
\begin{exercise} Check numerically that the conjecture seems still to be true
when $4u\in{\mathbb Z}$, i.e., if $u$ is a rational number with denominator $2$ or $4$.
On the other hand, it is definitely wrong for instance if $3u\in{\mathbb Z}$ (and
$u\notin{\mathbb Z}$), i.e., when the denominator is $3$. It is possible that there
is a replacement formula, but Bloch and Golyshev tell me that this is
unlikely.\end{exercise}
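Independently of the conjecture itself (which requires $L(E(u),2)$, e.g., from {\tt Pari/GP}), the two expressions given above for $g(u)$ can be cross-checked against each other numerically; here is a Python sketch (all function names and quadrature choices are mine):

```python
import math

def g_series(u):
    """Closed form: (pi^2/2)(2 log(4u) - sum_{n>=1} binom(2n,n)^2/n (4u)^(-2n)).

    The term is updated iteratively to avoid huge intermediate binomials."""
    s, t, n = 0.0, 4.0 / (4.0 * u) ** 2, 1       # t = term for n = 1
    while t > 1e-18:
        s += t
        t *= (2.0 * (2 * n + 1) / (n + 1)) ** 2 * (n / (n + 1)) / (4.0 * u) ** 2
        n += 1
    return math.pi ** 2 / 2 * (2 * math.log(4 * u) - s)

def g_integral(u, panels=20_000):
    """Integral form; substituting t = sin(theta) removes the square-root
    singularity of asin at t = 1, so composite Simpson converges quickly."""
    h = (math.pi / 2) / panels
    s = 0.0
    for i in range(panels + 1):
        th = i * h
        if i == 0:
            val = 1.0        # limit of theta*cos(theta)/sin(theta) at theta = 0
        else:
            val = th * math.cos(th) / (math.sin(th)
                    * math.sqrt(1.0 - (math.sin(th) / u) ** 2))
        s += (1 if i in (0, panels) else (4 if i % 2 else 2)) * val
    return 2 * math.pi * s * h / 3.0 + math.pi ** 2 * math.acosh(u)

for u in (2, 3, 5):
    print(u, g_series(u), g_integral(u))
```

The two evaluations agree to essentially machine precision, which is a useful sanity check before feeding $g(u)$ into the conjecture.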
\subsection{An Explicit Numerical Example of Mahler Measures}
This example is entirely due to W.~Zudilin (personal communication)
whom I heartily thank. The reader does not need any knowledge of Mahler
measures since we are again going to give the example as an equality
between values of $L$-functions and integrals. Note that this can also be
considered an isolated example of the Bloch--Beilinson conjecture.
Consider the elliptic curve $E$ with equation $y^2=x^3-x^2-4x+4$, of conductor
$24$. Its associated $L$-function $L(E,s)$ can easily be shown to be equal
to the $L$-function associated to the modular form
$$q\prod_{n\ge1}(1-q^{2n})(1-q^{4n})(1-q^{6n})(1-q^{12n})$$
(we do not need this for this example, but this will give us two
ways to create the $L$-function in {\tt Pari/GP}). We have the conjectural
identity due to Zudilin:
$$L(E,3)=\dfrac{\pi^2}{36}\left(\pi G+\int_0^1\asin(x)\asin(1-x)\,\dfrac{dx}{x}\right)\;,$$
where $G=\sum_{n\ge0}(-1)^n/(2n+1)^2=0.91596559\cdots$ is Catalan's constant.
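Both sides of this identity can be approximated in pure Python (a rough sketch with my own truncation choices; the substitution $x=\sin^2 t$ tames the square-root singularities of $\asin$ at the endpoints, and the Dirichlet series for $L(E,3)$ converges absolutely since $|a(n)|\le d(n)\sqrt{n}$):

```python
import math

B = 2000   # a(1..B); the tail of sum a(n)/n^3 is then far below the tolerance

# q-expansion of q * prod_{n>=1} (1-q^(2n))(1-q^(4n))(1-q^(6n))(1-q^(12n))
c = [0] * (B + 1)
c[0] = 1
for m in range(1, B + 1):
    for e in (2 * m, 4 * m, 6 * m, 12 * m):
        if e > B:
            break
        for i in range(B, e - 1, -1):   # in-place multiplication by (1 - q^e)
            c[i] -= c[i - e]
a = [0] + c[:B]                         # a(n) = coefficient of q^n

LE3 = sum(a[n] / n**3 for n in range(1, B + 1))   # truncated L(E,3)

# right-hand side; x = sin(t)^2 makes the integrand smooth at both endpoints
G = 0.915965594177219                   # Catalan's constant, as in the text
panels = 20_000
h = (math.pi / 2) / panels
s = 0.0
for i in range(panels + 1):
    t = i * h
    x = math.sin(t) ** 2
    dx = 2.0 * math.sin(t) * math.cos(t)
    val = 0.0 if i == 0 else math.asin(x) * math.asin(1.0 - x) / x * dx
    s += (1 if i in (0, panels) else (4 if i % 2 else 2)) * val
RHS = math.pi ** 2 / 36 * (math.pi * G + s * h / 3.0)

print(LE3, RHS)    # the two sides agree to within the truncation error
```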
At the end of this course, the reader will find three complete {\tt Pari/GP}
scripts which implement the BSD, Beilinson--Bloch, and Mahler measure examples
that we have just given.
\subsection{Computational Goals}
Now that we have a handle on what $L$-functions are, we come to the
computational and algorithmic problems, which are the main focus of these
notes. This involves many different aspects, all interesting in their own
right.
In a first type of situation, we assume that we are ``given''
the $L$-function, in other words that we are given a reasonably ``efficient''
algorithm to compute the coefficients $a(n)$ of the Dirichlet series
(or the Euler factors), and that we know the gamma factor $\gamma(s)$.
The main computational goals are then the following:
\begin{enumerate}\item Compute $L(s)$ for ``reasonable'' values of $s$:
for example, compute $\zeta(3)$. More sophisticated, but much more interesting:
check the Birch--Swinnerton-Dyer conjecture, the Beilinson--Bloch
conjecture, and the conjectures of Deligne concerning special values of
symmetric power $L$-functions of modular forms.
\item Check the numerical validity of the functional equation, and
in passing, if unknown, compute the numerical value of the \emph{root
number} $\omega$ occurring in the functional equation.
\item Compute $L(s)$ for $s=1/2+it$ for rather large real values of $t$
(in the case of weight $0$, more generally for $s=(w+1)/2+it$),
and/or make a plot of the corresponding $Z$ function (see below).
\item Compute all the zeros of $L(s)$ on the critical line up to a given
height, and check the corresponding Riemann hypothesis.
\item Compute the residue of $L(s)$ at $s=1$ (typically): for instance
if $L$ is the Dedekind zeta function of a number field, this gives the
product $hR$.
\item Compute the \emph{order} of the zeros of $L(s)$ at integer points
(if it has any), and the leading term in the Taylor expansion: for instance
for the $L$-function of an elliptic curve and $s=1$, this gives
the \emph{analytic rank} of the curve, together with the
Birch and Swinnerton-Dyer data.
\end{enumerate}
Unfortunately, we are not always given an $L$-function completely
explicitly. We may lack more or less of the information defining the
$L$-function:
\begin{enumerate}\item One of the most frequent situations is that
one knows the Euler factors for the ``good'' primes, as well as the
corresponding part of the conductor, and that one is lacking both
the Euler factors for the bad primes and the bad part of the conductor.
The goal is then to find numerically the missing factors and missing parts.
\item A more difficult but much more interesting problem is when
essentially nothing is known about the $L$-function except $\gamma(s)$, in
other words the $\Gamma_{{\mathbb R}}$ factors and the constant $N$, essentially equal
to the conductor. It is quite amazing that nonetheless one can quite often
tell whether an $L$-function with the given data can exist, and give
some of the initial Dirichlet coefficients (even when several $L$-functions
may be possible).
\item Even more difficult is when essentially nothing is known except
the degree $d$ and the constant $N$, and one looks for possible $\Gamma_{{\mathbb R}}$
factors: this is the case in the search for Maass forms over $\SL_n({\mathbb Z})$,
which has been conducted very successfully for $n=2$, $3$, and $4$.
\end{enumerate}
We will not consider these more difficult problems.
\subsection{Available Software for $L$-Functions}
Many people working on the subject have their own software. I mention only
the publicly available programs and data.
$\bullet$ M.~Rubinstein's {\tt C++} program {\tt lcalc}, which can compute
values of $L$-functions, make large tables of zeros, and so on.
The program uses the {\tt C++} type {\tt double}, so is limited to 15 decimal
digits, but is highly optimized, hence very fast, and used in most
situations. It is also optimized for large values of the imaginary part,
using the Riemann--Siegel formula. Available in {\tt Sage}.
$\bullet$ T.~Dokchitser's program {\tt computel}, initially written in
{\tt GP/Pari}, rewritten for {\tt magma}, and also available in {\tt Sage}.
Similar to Rubinstein's, but allows arbitrary precision, hence slower,
and has no built-in zero finder, although this is not too difficult
to write. It is not optimized for large imaginary parts.
$\bullet$ Since June 2015, {\tt Pari/GP} has a complete package for computing
with $L$-functions, written by B.~Allombert, K.~Belabas, P.~Molin, and myself,
based on the ideas of T.~Dokchitser for the computation
of inverse Mellin transforms (see below) but put on a more solid footing,
and on the ideas of P.~Molin for computing the $L$-function values themselves,
which avoid computing generalized incomplete gamma functions (see also below).
Note the related complete {\tt Pari/GP} package for computing with modular
forms, available since July 2018.
$\bullet$ Last but not least, not a program but a huge \emph{database}
of $L$-functions, modular forms, number fields, etc., which is the
result of a collaborative effort of approximately 30 to 40 people headed
by D.~Farmer. This database can of course be queried in many different
ways; it is possible and useful to navigate between related pages, and
it also contains {\tt knowls}, bits of knowledge which give the main
definitions. In addition to the stored data, the site can compute
additional required information on the fly using the software mentioned
above ({\tt Pari}, {\tt Sage}, {\tt magma}, and {\tt lcalc}).
It is available at:
\centerline{\tt http://www.lmfdb.org}
\section{Arithmetic Methods: Computing $a(n)$}
We now come to the second part of this course: the computation of
the Dirichlet series coefficients $a(n)$ and/or of the Euler factors,
which is usually the same problem. Of course this depends entirely on how
the $L$-function is \emph{given}: in view of what we have seen, it can be
given for instance (but not only) as the $L$-function attached to a modular
form, to a variety, or to a hypergeometric motive. Since there are so many
relations between these $L$-functions (we have seen several identities above),
we will not separate the way in which they are given, but treat everything
at once.
In view of the preceding section, an important computational problem is the
computation of $|V({\mathbb F}_q)|$ for a variety $V$. This may of course be done by a
na\"\i ve point count: if $V$ is defined by polynomials in $n$ variables, we can
range through the $q^n$ possibilities for the $n$ variables and count the
number of common zeros. In other words, there always exists a trivial
algorithm requiring $q^n$ steps. We of course want something better.
\subsection{General Elliptic Curves}
Let us first look at the special case of \emph{elliptic curves}, i.e.,
a projective curve $V$ with affine equation $y^2=x^3+ax+b$ such that
$p\nmid 6(4a^3+27b^2)$, which is almost the general equation for an
\emph{elliptic curve}. For simplicity assume that $q=p$, but it is immediate
to generalize. If you know the definition of the Legendre symbol, you know
that the number of solutions in ${\mathbb F}_p$ to the equation $y^2=n$ is equal to
$1+\lgs{n}{p}$. If you do not, since ${\mathbb F}_p$ is a field, it is clear that this
number is equal to $0$, $1$, or $2$, and so one can \emph{define} $\lgs{n}{p}$
as one less, so $-1$, $0$, or $1$. Thus, since it is immediate to see that
there is a single projective point at infinity, we have
\begin{align*}|V({\mathbb F}_p)|&=1+\sum_{x\in{\mathbb F}_p}\left(1+\leg{x^3+ax+b}{p}\right)=p+1-a(p)\;,\quad\text{with}\\
a(p)&=-\sum_{0\le x\le p-1}\leg{x^3+ax+b}{p}\;.\end{align*}
Now a Legendre symbol can be computed very efficiently using the
\emph{quadratic reciprocity law}. Thus, considering that it can be computed
in constant time (which is not quite true but almost), this gives a $O(p)$
algorithm for computing $a(p)$, already much faster than the trivial $O(p^2)$
algorithm consisting in looking at all pairs $(x,y)$.
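The $O(p)$ algorithm just described is a few lines of Python (a sketch, with Euler's criterion standing in for fast quadratic reciprocity; function names are mine):

```python
def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def ap(a, b, p):
    """a(p) for y^2 = x^3 + a*x + b over F_p, in O(p) symbol evaluations."""
    return -sum(legendre(x**3 + a * x + b, p) for x in range(p))

print(ap(-1, 0, 11), ap(0, 1, 7))   # 0, -4
```

The first value illustrates the CM phenomenon discussed in the next subsection ($y^2=x^3-x$ with $p\equiv3\pmod4$), and in all cases the output satisfies the Hasse bound $|a(p)|<2\sqrt p$.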
To do better, we have to use an additional and crucial property of an elliptic
curve: it is an \emph{abelian group}. Using this combined with the so-called
Hasse bound $|a(p)|<2\sqrt{p}$ (a special case of the Weil conjectures), and
the so-called \emph{baby-step giant-step algorithm} due to Shanks, one can
obtain a $O(p^{1/4})$ algorithm, which is very fast for all practical
purposes.
However a remarkable discovery due to Schoof in the early 1980's is that
there exists a practical algorithm for computing $a(p)$ which is
\emph{polynomial in $\log(p)$}, for instance $O(\log^6(p))$. The idea is to
compute $a(p)$ modulo $\ell$ for small primes $\ell$ using
\emph{$\ell$-division polynomials}, and then use the Chinese remainder theorem
and the bound $|a(p)|<2\sqrt{p}$ to recover $a(p)$. Several
important improvements have been made on this basic algorithm, in particular
by Atkin and Elkies, and the resulting SEA algorithm (which is implemented
in many computer packages) is able to compute $a(p)$ for $p$ with several
thousand decimal digits. Note however that in practical ranges (say
$p<10^{12}$), the $O(p^{1/4})$ algorithm mentioned above is sufficient.
\subsection{Elliptic Curves with Complex Multiplication}
In certain special cases it is possible to compute $|V({\mathbb F}_q)|$ for an elliptic
curve $V$ much faster than with any of the above methods: when the elliptic
curve $V$ has \emph{complex multiplication}. Let us consider the special
case $y^2=x^3-nx$ (the general case is more complicated but not
really slower). By the general formula for $a(p)$, we have for
$p\ge3$:
\begin{align*}a(p)&=-\sum_{-(p-1)/2\le x\le (p-1)/2}\leg{x(x^2-n)}{p}\\
&=-\sum_{1\le x\le (p-1)/2}\left(\leg{x(x^2-n)}{p}+\leg{-x(x^2-n)}{p}\right)\\
&=-\left(1+\leg{-1}{p}\right)\sum_{1\le x\le(p-1)/2}\leg{x(x^2-n)}{p}\end{align*}
by the multiplicative property of the Legendre symbol. This already
shows that if $\lgs{-1}{p}=-1$, in other words $p\equiv3\pmod4$, we
have $a(p)=0$. But we can also find a formula when $p\equiv1\pmod4$:
recall that in that case by a famous theorem due to Fermat, there
exist integers $u$ and $v$ such that $p=u^2+v^2$. If necessary by
exchanging $u$ and $v$, and/or changing the sign of $u$, we may
assume that $u\equiv-1\pmod4$, in which case the decomposition is
unique, up to the sign of $v$. It is then not difficult to
prove the following theorem (see Section 8.5.2 of \cite{Coh3} for the proof):
\begin{theorem} Assume that $p\equiv1\pmod4$ and $p=u^2+v^2$ with
$u\equiv-1\pmod4$. The number of projective points on the elliptic
curve $y^2=x^3-nx$ (where $p\nmid n$) is equal to $p+1-a(p)$, where
$$a(p)=2\leg{2}{p}\begin{cases}
-u&\text{\quad if\quad $n^{(p-1)/4}\equiv1\pmod{p}$}\\
u&\text{\quad if\quad $n^{(p-1)/4}\equiv-1\pmod{p}$}\\
-v&\text{\quad if\quad $n^{(p-1)/4}\equiv-u/v\pmod{p}$}\\
v&\text{\quad if\quad $n^{(p-1)/4}\equiv u/v\pmod{p}$}\end{cases}$$
(note that one of these four cases must occur).
\end{theorem}
To apply this theorem from a computational standpoint we note the
following two \emph{facts}:
(1) The quantity $n^{(p-1)/4}\bmod p$ can be computed efficiently
by the \emph{binary powering algorithm} (in $O(\log^3(p))$
operations). It is however possible to compute it more efficiently
in $O(\log^2(p))$ operations using the \emph{quartic reciprocity law}.
(2) The numbers $u$ and $v$ such that $u^2+v^2=p$ can be computed
efficiently (in $O(\log^2(p))$ operations) using \emph{Cornacchia's
algorithm} which is very easy to describe but not so easy to prove.
It is a variant of Euclid's algorithm. It proceeds as follows:
$\bullet$ As a first step, we compute a square root of $-1$ modulo $p$,
i.e., an $x$ such that $x^2\equiv-1\pmod{p}$. This is done by choosing
randomly a $z\in[1,p-1]$ and computing the Legendre symbol $\lgs{z}{p}$
until it is equal to $-1$ (we can also simply try $z=2$, $3$, ...).
Note that this is a fast computation. When this is the case, we have
by definition $z^{(p-1)/2}\equiv-1\pmod{p}$, hence $x^2\equiv-1\pmod{p}$
for $x=z^{(p-1)/4}\bmod{p}$. Reducing $x$ modulo $p$ and possibly
changing $x$ into $p-x$, we normalize $x$ so that $p/2<x<p$.
$\bullet$ As a second step, we perform the Euclidean algorithm on the pair
$(p,x)$, writing $a_0=p$, $a_1=x$, and $a_{n-1}=q_na_n+a_{n+1}$
with $0\le a_{n+1}<a_n$, and we stop at the first $n$ for which
$a_n^2<p$. It can be proved (this is the difficult part) that for
this specific $n$ we have $a_n^2+a_{n+1}^2=p$, so up to exchange of
$u$ and $v$ and/or change of signs, we can take $u=a_n$ and $v=a_{n+1}$.
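The two steps, together with the theorem above, can be sketched in Python and cross-checked against the $O(p)$ Legendre-symbol count (helper names are mine):

```python
def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def cornacchia(p):
    """(u, v) with u^2 + v^2 = p and u = -1 (mod 4), for p = 1 (mod 4)."""
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:    # find a quadratic non-residue
        z += 1
    x = pow(z, (p - 1) // 4, p)                # square root of -1 mod p
    if x <= p // 2:
        x = p - x                              # normalize p/2 < x < p
    a, b = p, x
    while b * b >= p:                          # Euclid, stop when a_n^2 < p
        a, b = b, a % b
    u, v = b, a % b                            # then b^2 + (a mod b)^2 = p
    if u % 2 == 0:
        u, v = v, u                            # make u the odd one
    if u % 4 != 3:
        u = -u                                 # force u = -1 (mod 4)
    return u, v

def ap_cm(n, p):
    """a(p) for y^2 = x^3 - n*x (p odd, p not dividing n), per the theorem."""
    if p % 4 == 3:
        return 0
    u, v = cornacchia(p)
    s = pow(n, (p - 1) // 4, p)
    c = 2 * legendre(2, p)
    if s == 1:
        return -c * u
    if s == p - 1:
        return c * u
    uv = u * pow(v, -1, p) % p                 # u/v mod p, a square root of -1
    return -c * v if s == (-uv) % p else c * v

print(ap_cm(1, 13))   # 6
```

A useful sanity check is that the four cases of the theorem reproduce the direct count $a(p)=-\sum_x\lgs{x^3-nx}{p}$ for many small $p$ and $n$.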
Note that Cornacchia's algorithm can easily be generalized to solving
efficiently $u^2+dv^2=p$ or $u^2+dv^2=4p$ for any $d\ge1$, see Section 1.5.2
of \cite{Coh1} (incidentally one can also solve this for $d<0$, but it poses
completely different problems since there may be infinitely many solutions).
The above theorem is given for the special elliptic curves
$y^2=x^3-nx$ which have complex multiplication by the (ring of integers
of the) field ${\mathbb Q}(i)$, but a similar theorem is valid for all curves
with complex multiplication, see Section 8.5.2 of \cite{Coh3}.
\subsection{Using Modular Forms of Weight $2$}
By Wiles' celebrated theorem, the $L$-function of an elliptic curve is
equal to the $L$-function of a modular form of weight $2$ for $\Gamma_0(N)$,
where $N$ is the conductor of the curve. We do not need to give the
precise definitions of these objects, but only a specific example.
Let $V$ be the elliptic curve with affine equation $y^2+y=x^3-x^2$.
It has conductor $11$. It can be shown using classical modular form methods
(i.e., without Wiles' theorem) that the global $L$-function
$L(V;s)=\sum_{n\ge1}a(n)/n^s$ is the same as that of the modular form
of weight $2$ over $\Gamma_0(11)$ given by
$$f(\tau)=q\prod_{m\ge1}(1-q^m)^2(1-q^{11m})^2\;,$$
with $q=\exp(2\pi i\tau)$. Even with no knowledge of modular forms, this
simply means that if we formally expand the product on the right hand side
as
$$q\prod_{m\ge1}(1-q^m)^2(1-q^{11m})^2=\sum_{n\ge1}b(n)q^n\;,$$
we have $b(n)=a(n)$ for all $n$, and in particular for $n=p$ prime.
We have already seen this example above with a slightly different equation
for the elliptic curve (which makes no difference for its $L$-function outside
of the primes $2$ and $3$).
We see that this gives an alternate method for computing $a(p)$ by
expanding the infinite product. Indeed, the function
$$\eta(\tau)=q^{1/24}\prod_{m\ge1}(1-q^m)$$
is a modular form of weight $1/2$ with known expansion:
$$\eta(\tau)=\sum_{n\ge1}\leg{12}{n}q^{n^2/24}\;,$$
and so using Fast Fourier Transform techniques for formal power series
multiplication we can compute all the coefficients $a(n)$ simultaneously
(as opposed to one by one) for $n\le B$ in time $O(B\log^2(B))$. This
amounts to computing each individual $a(n)$ in time $O(\log^2(n))$, so
it seems to be competitive with the fast methods for elliptic curves
with complex multiplication, but this is an illusion since we must
store all $B$ coefficients, so it can be used only for $B\le 10^{12}$,
say, far smaller than what can be reached using Schoof's algorithm,
which is truly polynomial in $\log(p)$ for each fixed prime $p$.
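A minimal sketch of the coefficient computation (naive $O(B^2)$ in-place products rather than FFT, so only for small $B$; variable names are mine), checked against a brute-force point count on the curve:

```python
B = 60

# q-expansion of f = q * prod_{m>=1} (1 - q^m)^2 (1 - q^(11m))^2
c = [0] * (B + 1)
c[0] = 1
for m in range(1, B + 1):
    for e in (m, 11 * m):
        if e > B:
            break
        for _ in range(2):                 # the square of each factor
            for i in range(B, e - 1, -1):  # in-place multiplication by (1 - q^e)
                c[i] -= c[i - e]
b = [0] + c[:B]                            # b(n) = coefficient of q^n in f

def ap_curve(p):
    """Brute-force point count on y^2 + y = x^3 - x^2 over F_p."""
    N = sum(1 for x in range(p) for y in range(p)
            if (y * y + y - (x**3 - x * x)) % p == 0)
    return p - N

for p in (2, 3, 5, 7, 13, 17, 19):
    assert b[p] == ap_curve(p)
print(b[1:11])   # [1, -2, -1, 2, 1, 2, -2, 0, -2, -2]
```

The assertions verify $b(p)=a(p)$ for several primes at once, exactly as the classical result quoted above predicts.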
\subsection{Higher Weight Modular Forms}
It is interesting to note that the dichotomy between elliptic curves
with or without complex multiplication is also valid for modular forms
of higher weight (again, whatever that means, you do not need to know
the definitions). For instance, consider
$$\Delta(\tau)=\Delta_{24}(\tau)=\eta^{24}(\tau)=q\prod_{m\ge1}(1-q^m)^{24}:=\sum_{n\ge1}\tau(n)q^n\;.$$
The function $\tau(n)$ is a famous function called the \emph{Ramanujan
$\tau$ function}, and has many important properties, analogous to those
of the $a(p)$ attached to an elliptic curve (i.e., to a modular
form of weight $2$).
There are several methods to compute $\tau(p)$ for $p$ prime, say. One
is to do as above, using FFT techniques. The running time is similar,
but again we are limited to $B\le 10^{12}$, say. A second more
sophisticated method is to use the \emph{Eichler--Selberg trace formula},
which enables the computation of an individual $\tau(p)$ in time
$O(p^{1/2+\eps})$ for all $\eps>0$. A third very deep method, developed
by Edixhoven, Couveignes, et al., is a generalization of Schoof's algorithm.
While in principle polynomial time in $\log(p)$, it is not yet practical
compared to the preceding method.
For those who want to see the trace formula explicitly, we
let $H(N)$ be the
\emph{Hurwitz class number} (essentially the class number of imaginary
quadratic orders counted with suitable multiplicity): if we set
$H_3(N)=H(4N)+2H(N)$ (note that $H(4N)$ can be computed in terms of $H(N)$),
then for $p$ prime
\begin{align*}\tau(p)&=28p^6-28p^5-90p^4-35p^3-1\\
&\phantom{=}-128\sum_{1\le t<p^{1/2}}t^6(4t^4-9pt^2+7p^2)H_3(p-t^2)\;,
\end{align*}
which is the fastest \emph{practical} formula that I know for computing
$\tau(p)$.
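For comparison, the naive $q$-expansion method (the first method above, here without any FFT refinement, so only for tiny $n$) fits in a few lines of Python:

```python
B = 30

# Delta = q * prod_{m>=1} (1 - q^m)^24, expanded by repeated in-place products
c = [0] * (B + 1)
c[0] = 1
for m in range(1, B + 1):
    for _ in range(24):
        for i in range(B, m - 1, -1):   # multiply by (1 - q^m)
            c[i] -= c[i - m]

def tau(n):
    return c[n - 1]                     # tau(n) = coefficient of q^n

print([tau(n) for n in range(1, 8)])
# [1, -24, 252, -1472, 4830, -6048, -16744]
```

The output reproduces the classical values of Ramanujan's function, including multiplicativity ($\tau(6)=\tau(2)\tau(3)$) and the congruence $\tau(p)\equiv1+p^{11}\pmod{691}$.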
On the contrary, consider
$$\Delta_{26}(\tau)=\eta^{26}(\tau)=q^{13/12}\prod_{m\ge1}(1-q^m)^{26}:=q^{1/12}\sum_{n\ge1}\tau_{26}(n)q^n\;.$$
This is what is called a modular form with complex multiplication. Whatever
the definition, this means that the coefficients $\tau_{26}(p)$ can be
computed in time polynomial in $\log(p)$ using a generalization of
Cornacchia's algorithm, hence very fast.
\begin{exercise} (You need some extra knowledge for this.) In the literature
find an exact formula for $\tau_{26}(p)$ in terms of values of Hecke
\emph{Gr\"ossencharacters}, and program this formula. Use it to compute
some values of $\tau_{26}(p)$ for $p$ prime as large as you can go.
\end{exercise}
\subsection{Computing $|V({\mathbb F}_q)|$ for Quasi-diagonal Hypersurfaces}
We now consider a completely different situation where $|V({\mathbb F}_q)|$ can
be computed without too much difficulty.
As we have seen, in the case of elliptic curves $V$ defined over ${\mathbb Q}$, the
corresponding $L$-function is of \emph{degree $2$}, in other words is of
the form $\prod_p1/(1-a(p)p^{-s}+b(p)p^{-2s})$, where $b(p)\ne0$ for all but
a finite number of $p$. $L$-functions of degree $1$ such as the Riemann
zeta function are essentially $L$-functions of Dirichlet characters, in other
words simple ``twists'' of the Riemann zeta function. $L$-functions of degree
$2$ are believed to be always $L$-functions attached to modular forms,
with $b(p)=\chi(p)p^{k-1}$ for a suitable integer $k$ ($k=2$ for elliptic
curves), the \emph{weight} (note that this is \emph{one more} than the
so-called \emph{motivic weight}). Even though many unsolved questions remain,
this case is also quite well understood. Much more mysterious are $L$-functions
of higher degree, such as $3$ or $4$, and it is interesting to study natural
mathematical objects leading to such functions. A case where this can be done
reasonably easily is that of diagonal or \emph{quasi-diagonal hypersurfaces}. We study a special case:
\begin{definition} Let $m\ge2$, for $1\le i\le m$ let $a_i\in{\mathbb F}_q^*$ be
nonzero, and let $b\in{\mathbb F}_q$. The quasi-diagonal hypersurface defined by this
data is the hypersurface in ${\mathbb P}^{m-1}$ defined by the projective equation
$$\sum_{1\le i\le m}a_ix_i^m-b\prod_{1\le i\le m}x_i=0\;.$$
When $b=0$, it is a diagonal hypersurface.
\end{definition}
Of course, we could study more general equations, for instance where the
degree is not equal to the number of variables, but we stick to this
special case.
To compute the number of (projective) points on this hypersurface, we need
an additional definition:
\begin{definition} We let $\omega$ be a generator of the group of characters
of ${\mathbb F}_q^*$, either with values in ${\mathbb C}$, or in the $p$-adic field ${\mathbb C}_p$
(do not worry if you are not familiar with this).\end{definition}
Indeed, by a well-known theorem of elementary algebra, the multiplicative
group ${\mathbb F}_q^*$ of a finite field is \emph{cyclic}, so its group of
characters, which is \emph{non-canonically isomorphic} to ${\mathbb F}_q^*$, is also
cyclic, so $\omega$ indeed exists.
It is not difficult to prove the following theorem:
\begin{theorem}\label{thmquasi} Assume that $\gcd(m,q-1)=1$ and $b\ne0$, and
set $B=\prod_{1\le i\le m}(a_i/b)$. If $V$ is the above quasi-diagonal
hypersurface, the number $|V({\mathbb F}_q)|$ of \emph{affine} points on $V$ is given by
$$|V({\mathbb F}_q)|=q^{m-1}+(-1)^{m-1}+\sum_{1\le n\le q-2}\omega^{-n}(B)J_m(\omega^n,\dotsc,\omega^n)\;,$$
where $J_m$ is the $m$-variable Jacobi sum.
\end{theorem}
We will study in great detail below the definition and properties of
$J_m$.
Note that the number of \emph{projective} points is simply
$(|V({\mathbb F}_q)|-1)/(q-1)$.
There also exists a more general theorem with no restriction on
$\gcd(m,q-1)$, which we do not give.
The occurrence of Jacobi sums is very natural and frequent in point counting
results. It is therefore important to look at efficient ways to compute them,
and this is what we do in the next section, where we also give complete
definitions and basic results.
\section{Gauss and Jacobi Sums}
In this long section, we study in great detail Gauss and Jacobi sums.
Most results are standard, and I would like to emphasize
that almost all of them can be proved with little difficulty
by easy algebraic manipulations.
\subsection{Gauss Sums over ${\mathbb F}_q$}\label{sec:gausssum}
We can define and study Gauss and Jacobi sums in two different contexts: first,
and most importantly, over finite fields ${\mathbb F}_q$, with $q=p^f$ a prime power
(note that from now on we write $q=p^f$ and not $q=p^n$).
Second, over the ring ${\mathbb Z}/N{\mathbb Z}$. The two notions coincide when $N=q=p$ is prime,
but the methods and applications are quite different.
To give the definitions over ${\mathbb F}_q$ we need to recall some fundamental (and
easy) results concerning finite fields.
\begin{proposition} Let $p$ be a prime, $f\ge1$, and ${\mathbb F}_q$ be the finite field
with $q=p^f$ elements, which exists and is unique up to isomorphism.
\begin{enumerate}\item The map $\phi$ such that $\phi(x)=x^p$ is a field
isomorphism from ${\mathbb F}_q$ to itself leaving ${\mathbb F}_p$ fixed. It is called the
\emph{Frobenius map}.
\item The extension ${\mathbb F}_q/{\mathbb F}_p$ is a \emph{normal} (i.e., separable and Galois)
field extension, with Galois group which is cyclic of order $f$ generated
by~$\phi$.
\end{enumerate}\end{proposition}
In particular, we can define the \emph{trace} $\Tr_{{\mathbb F}_q/{\mathbb F}_p}$ and the
\emph{norm} $\N_{{\mathbb F}_q/{\mathbb F}_p}$, and we have the formulas (where from now on we
omit ${\mathbb F}_q/{\mathbb F}_p$ for simplicity):
$$\Tr(x)=\sum_{0\le j\le f-1}x^{p^j}\text{\quad and\quad}
\N(x)=\prod_{0\le j\le f-1}x^{p^j}=x^{(p^f-1)/(p-1)}=x^{(q-1)/(p-1)}\;.$$
\begin{definition} Let $\chi$ be a character from ${\mathbb F}_q^*$ to an
algebraically closed field $C$ of characteristic $0$. For $a\in{\mathbb F}_q$
we define the \emph{Gauss sum} ${\mathfrak g}(\chi,a)$ by
$${\mathfrak g}(\chi,a)=\sum_{x\in{\mathbb F}_q^*}\chi(x)\zeta_p^{\Tr(ax)}\;,$$
where $\zeta_p$ is a fixed primitive $p$th root of unity in $C$.
We also set ${\mathfrak g}(\chi)={\mathfrak g}(\chi,1)$.
\end{definition}
Note that strictly speaking this definition depends on the choice
of $\zeta_p$. However, if $\zeta'_p$ is some other primitive $p$th root of
unity we have $\zeta'_p=\zeta_p^k$ for some $k\in{\mathbb F}_p^*$, so
$$\sum_{x\in{\mathbb F}_q^*}\chi(x){\zeta'_p}^{\Tr(ax)}={\mathfrak g}(\chi,ka)\;.$$
In fact it is trivial to see (this follows from the next proposition)
that ${\mathfrak g}(\chi,ka)=\chi^{-1}(k){\mathfrak g}(\chi,a)$.
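This dependence on the choice of root of unity is easy to check numerically. The following sketch (my own illustration, not part of the text) works over ${\mathbb F}_p$ with $p=13$, where $\Tr(x)=x$; the prime, the exponent $r$ defining $\chi=\omega^r$, and the values $a$, $k$ are arbitrary choices.

```python
import cmath

# Sanity check over F_p (p prime, so Tr(x) = x); p, r, a, k are arbitrary.
p = 13
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}  # discrete logarithm table

def chi(x, r):  # chi = omega^r where omega(g) = zeta_{p-1}; chi(0) = 0
    return cmath.exp(2j * cmath.pi * r * dlog[x % p] / (p - 1)) if x % p else 0

def gauss(r, a=1):  # g(chi, a) = sum over x in F_p^* of chi(x) zeta_p^{a x}
    return sum(chi(x, r) * cmath.exp(2j * cmath.pi * (a * x % p) / p)
               for x in range(1, p))

r, a, k = 5, 3, 7
assert abs(gauss(r, k * a) - gauss(r, a) / chi(k, r)) < 1e-9
```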
\begin{definition}\label{defeps} We define $\varepsilon$ to be the trivial character,
i.e., such that $\varepsilon(x)=1$ for all $x\in{\mathbb F}_q^*$. We extend characters $\chi$
to the whole of ${\mathbb F}_q$ by setting $\chi(0)=0$ if $\chi\ne\varepsilon$ and $\varepsilon(0)=1$.
\end{definition}
Note that this apparently innocuous definition of $\varepsilon(0)$ is \emph{crucial}
because it simplifies many formulas. Note also that the definition of
${\mathfrak g}(\chi,a)$ is a sum over $x\in{\mathbb F}_q^*$ and not $x\in{\mathbb F}_q$, while for
Jacobi sums we will use all of ${\mathbb F}_q$.
\begin{exercise}\label{exoorth}\begin{enumerate}\item
Show that ${\mathfrak g}(\varepsilon,a)=-1$ if $a\in{\mathbb F}_q^*$ and ${\mathfrak g}(\varepsilon,0)=q-1$.
\item If $\chi\ne\varepsilon$, show that ${\mathfrak g}(\chi,0)=0$, in other words that
$$\sum_{x\in{\mathbb F}_q}\chi(x)=0$$
(here it does not matter if we sum over ${\mathbb F}_q$ or ${\mathbb F}_q^*$).
\item Deduce that if $\chi_1\ne\chi_2$ then
$$\sum_{x\in{\mathbb F}_q^*}\chi_1(x)\chi_2^{-1}(x)=0\;.$$
This relation is called, for evident reasons, \emph{orthogonality of
characters}.
\item Dually, show that if $x\ne0,1$ we have $\sum_{\chi}\chi(x)=0$, where
the sum is over all characters of ${\mathbb F}_q^*$.
\end{enumerate}
\end{exercise}
Because of this exercise, if necessary we may assume that $\chi\ne\varepsilon$
and/or that $a\ne0$.
\begin{exercise} Let $\chi$ be a character of ${\mathbb F}_q^*$ of exact order $n$.
\begin{enumerate}\item Show that $n\mid(q-1)$ and that
$\chi(-1)=(-1)^{(q-1)/n}$. In particular, if $n$ is odd and $p>2$ we have
$\chi(-1)=1$.
\item Show that ${\mathfrak g}(\chi,a)\in{\mathbb Z}[\zeta_n,\zeta_p]$, where as usual $\zeta_m$ denotes
a primitive $m$th root of unity.
\end{enumerate}
\end{exercise}
\begin{proposition}\begin{enumerate}\item If $a\ne0$ we have
$${\mathfrak g}(\chi,a)=\chi^{-1}(a){\mathfrak g}(\chi)\;.$$
\item We have
$${\mathfrak g}(\chi^{-1})=\chi(-1)\ov{{\mathfrak g}(\chi)}\;.$$
\item We have
$${\mathfrak g}(\chi^p,a)=\chi^{1-p}(a){\mathfrak g}(\chi,a)\;.$$
\item If $\chi\ne\varepsilon$ we have
$$|{\mathfrak g}(\chi)|=q^{1/2}\;.$$
\end{enumerate}\end{proposition}
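Items (2) and (4) are easy to confirm in floating point. The following sketch (my illustration; $p=11$ and the tolerance are arbitrary) checks them for every nontrivial character of ${\mathbb F}_{11}^*$.

```python
import cmath

# Check g(chi^{-1}) = chi(-1) * conjugate(g(chi)) and |g(chi)| = sqrt(p)
# for every nontrivial character of F_11^* (numerical, tolerance 1e-9).
p = 11
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(x, r):
    return cmath.exp(2j * cmath.pi * r * dlog[x % p] / (p - 1)) if x % p else 0

def gauss(r):
    return sum(chi(x, r) * cmath.exp(2j * cmath.pi * x / p) for x in range(1, p))

for r in range(1, p - 1):  # all nontrivial characters omega^r
    assert abs(gauss(-r) - chi(-1, r) * gauss(r).conjugate()) < 1e-9
    assert abs(abs(gauss(r)) - p ** 0.5) < 1e-9
```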
\subsection{Jacobi Sums over ${\mathbb F}_q$}
Recall that we have extended characters of ${\mathbb F}_q^*$ by setting $\chi(0)=0$
if $\chi\ne\varepsilon$ and $\varepsilon(0)=1$.
\begin{definition} For $1\le j\le k$ let $\chi_j$ be characters of ${\mathbb F}_q^*$.
We define the Jacobi sum
$$J_k(\chi_1,\dotsc,\chi_k;a)=\sum_{x_1+\cdots+x_k=a}\chi_1(x_1)\cdots\chi_k(x_k)$$
and $J_k(\chi_1,\dotsc,\chi_k)=J_k(\chi_1,\dotsc,\chi_k;1)$.
\end{definition}
Note that, as mentioned above, we do not exclude the cases where some
$x_i=0$, using the convention of Definition \ref{defeps} for $\chi(0)$.
The following easy lemma shows that it is only necessary to study
$J_k(\chi_1,\dotsc,\chi_k)$:
\begin{lemma}\label{lemjactriv} Set $\chi=\chi_1\cdots\chi_k$.
\begin{enumerate}\item If $a\ne0$ we have
$$J_k(\chi_1,\dotsc,\chi_k;a)=\chi(a)J_k(\chi_1,\dotsc,\chi_k)\;.$$
\item If $a=0$, abbreviating $J_k(\chi_1,\dotsc,\chi_k;0)$ to $J_k(0)$ we have
$$J_k(0)=\begin{cases} q^{k-1}&\text{\quad if $\chi_j=\varepsilon$ for all $j$\;,}\\
0&\text{\quad if $\chi\ne\varepsilon$\;,}\\
\chi_k(-1)(q-1)J_{k-1}(\chi_1,\dotsc,\chi_{k-1})&\text{\quad if $\chi=\varepsilon$ and $\chi_k\ne\varepsilon$\;.}\end{cases}$$
\end{enumerate}
\end{lemma}
As we have seen, a Gauss sum ${\mathfrak g}(\chi)$ belongs to the rather large ring
${\mathbb Z}[\zeta_{q-1},\zeta_p]$ (and in general not to a smaller ring). The advantage of
Jacobi sums is that they belong to the smaller ring ${\mathbb Z}[\zeta_{q-1}]$ and, as
we are going to see, they are closely related to Gauss sums. Thus, when
working \emph{algebraically}, it is almost always better to use Jacobi sums
instead of Gauss sums. On the other hand, when working \emph{analytically}
(for instance in ${\mathbb C}$ or ${\mathbb C}_p$), it may be better to work with Gauss sums:
we will see below the use of root numbers (suggested by Louboutin), and of the
Gross--Koblitz formula.
Note that $J_1(\chi_1)=1$. Outside of this trivial case, the close link between
Gauss and Jacobi sums is given by the following easy proposition, whose
apparently technical statement is due only to the possible presence of the
trivial character $\varepsilon$:
if none of the $\chi_j$ nor their product is trivial, we have the simple formula
given by (3).
\begin{proposition}\label{jacgaufq} Denote by $t$ the number of $\chi_j$ equal
to the trivial character $\varepsilon$, and as above set $\chi=\chi_1\cdots\chi_k$.
\begin{enumerate}\item If $t=k$ then $J_k(\chi_1,\dotsc,\chi_k)=q^{k-1}$.
\item If $1\le t\le k-1$ then $J_k(\chi_1,\dotsc,\chi_k)=0$.
\item If $t=0$ and $\chi\ne\varepsilon$ then
$$J_k(\chi_1,\dotsc,\chi_k)=\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)}{{\mathfrak g}(\chi_1\cdots\chi_k)}=\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)}{{\mathfrak g}(\chi)}\;.$$
\item If $t=0$ and $\chi=\varepsilon$ then
\begin{align*}J_k(\chi_1,\dotsc,\chi_k)&=-\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)}{q}\\
&=-\chi_k(-1)\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_{k-1})}{{\mathfrak g}(\chi_1\cdots\chi_{k-1})}=-\chi_k(-1)J_{k-1}(\chi_1,\dotsc,\chi_{k-1})\;.\end{align*}
In particular, in this case we have
$${\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)=\chi_k(-1)qJ_{k-1}(\chi_1,\dotsc,\chi_{k-1})\;.$$
\end{enumerate}
\end{proposition}
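For $k=2$ the proposition can be checked numerically over a small field. The sketch below (my illustration; $p=11$ is arbitrary) verifies item (3) when $\chi_1\chi_2\ne\varepsilon$ and item (4) (which reduces to $J_2=-\chi_2(-1)$ since $J_1=1$) when $\chi_1\chi_2=\varepsilon$.

```python
import cmath

# For k = 2: check J = g*g/g (item (3)) and J = -chi_2(-1) (item (4))
# for all pairs of nontrivial characters of F_11^* (numerical check).
p = 11
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(x, r):
    return cmath.exp(2j * cmath.pi * r * dlog[x % p] / (p - 1)) if x % p else 0

def gauss(r):
    return sum(chi(x, r) * cmath.exp(2j * cmath.pi * x / p) for x in range(1, p))

def jacobi(r1, r2):  # J(chi_1, chi_2), sum over all of F_p (chi(0) = 0)
    return sum(chi(x, r1) * chi(1 - x, r2) for x in range(p))

for r1 in range(1, p - 1):
    for r2 in range(1, p - 1):
        if (r1 + r2) % (p - 1):      # chi_1 chi_2 nontrivial: item (3)
            assert abs(jacobi(r1, r2)
                       - gauss(r1) * gauss(r2) / gauss(r1 + r2)) < 1e-8
        else:                        # chi_1 chi_2 = eps: item (4) with J_1 = 1
            assert abs(jacobi(r1, r2) + chi(-1, r2)) < 1e-8
```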
\begin{corollary}\label{corjacrecur} With the same notation, assume that
$k\ge2$ and all the $\chi_j$ are nontrivial. Setting
$\psi=\chi_1\cdots\chi_{k-1}$, we have the following recursive formula:
$$J_k(\chi_1,\dotsc,\chi_k)=\begin{cases}J_{k-1}(\chi_1,\dotsc,\chi_{k-1})J_2(\psi,\chi_k)&\text{\quad if $\psi\ne\varepsilon$\;,}\\
\chi_{k-1}(-1)qJ_{k-2}(\chi_1,\dotsc,\chi_{k-2})&\text{\quad if $\psi=\varepsilon$\;.}\end{cases}$$
\end{corollary}
The point of this recursion is that the definition of a $k$-fold Jacobi sum
$J_k$ involves a sum over $q^{k-1}$ values for $x_1,\dotsc,x_{k-1}$, the last
variable $x_k$ being
determined by $x_k=1-x_1-\cdots-x_{k-1}$, so neglecting the time to compute
the $\chi_j(x_j)$ and their product (which is a reasonable assumption), using
the definition takes time $O(q^{k-1})$. On the other hand, using the above
recursion boils down at worst to computing $k-1$ Jacobi sums $J_2$, for a
total time of $O((k-1)q)$. Nonetheless, we will see that in some cases it is
still better to use Gauss sums directly, via formula (3) of the proposition.
Since Jacobi sums $J_2$ are the simplest, and the above recursion in fact shows
that one can reduce to $J_2$, we will drop the subscript $2$ and simply write
$J(\chi_1,\chi_2)$. Note that
$$J(\chi_1,\chi_2)=\sum_{x\in{\mathbb F}_q}\chi_1(x)\chi_2(1-x)\;,$$
where the sum is over the whole of ${\mathbb F}_q$ and \emph{not} ${\mathbb F}_q\setminus\{0,1\}$
(which makes a difference only if one of the $\chi_i$ is trivial). More
precisely, it is clear that $J(\varepsilon,\varepsilon)=q$, and that if $\chi\ne\varepsilon$
we have $J(\chi,\varepsilon)=\sum_{x\in{\mathbb F}_q}\chi(x)=0$, which are special cases of
Proposition \ref{jacgaufq}.
\begin{exercise} Let $n\mid(q-1)$ be the order of $\chi$. Prove that
${\mathfrak g}(\chi)^n\in{\mathbb Z}[\zeta_n]$.
\end{exercise}
\begin{exercise} Assume that none of the $\chi_j$ is equal to $\varepsilon$, but that
their product $\chi$ is equal to $\varepsilon$. Prove that (using the same notation
as in Lemma \ref{lemjactriv}):
$$J_k(0)=\left(1-\dfrac{1}{q}\right){\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)\;.$$
\end{exercise}
\begin{exercise} Prove the following reciprocity formula for Jacobi sums:
if the $\chi_j$ are all nontrivial and $\chi=\chi_1\cdots\chi_k$, we have
$$J_k(\chi_1^{-1},\dotsc,\chi_k^{-1})=\dfrac{q^{k-1-\delta}}{J_k(\chi_1,\dotsc,\chi_k)}\;,$$
where $\delta=1$ if $\chi=\varepsilon$, and otherwise $\delta=0$.
\end{exercise}
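Before attempting a proof, the reciprocity formula can be tested numerically for $k=2$; the following sketch (my illustration, $p=11$ arbitrary) checks both the $\delta=0$ and $\delta=1$ cases.

```python
import cmath

# Numerical check of the reciprocity formula for k = 2 over F_11:
# J(chi_1^{-1}, chi_2^{-1}) = q^{1-delta} / J(chi_1, chi_2).
p = 11
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(x, r):
    return cmath.exp(2j * cmath.pi * r * dlog[x % p] / (p - 1)) if x % p else 0

def jacobi(r1, r2):
    return sum(chi(x, r1) * chi(1 - x, r2) for x in range(p))

for r1 in range(1, p - 1):
    for r2 in range(1, p - 1):
        delta = 1 if (r1 + r2) % (p - 1) == 0 else 0
        assert abs(jacobi(-r1, -r2) - p ** (1 - delta) / jacobi(r1, r2)) < 1e-8
```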
\subsection{Applications of $J(\chi,\chi)$}
In this short subsection we give, without proof, a couple of
applications of the special Jacobi sums $J(\chi,\chi)$. Once again
the proofs are not difficult. We begin with the following result,
which is a special case of the Hasse--Davenport relations that we
will give below.
\begin{lemma} Assume that $q$ is odd, and let $\rho$ be the unique
character of order $2$ on ${\mathbb F}_q^*$. For any nontrivial character
$\chi$ we have
$$\chi(4)J(\chi,\chi)=J(\chi,\rho)\;.$$
Equivalently, if $\chi\ne\rho$ we have
$${\mathfrak g}(\chi){\mathfrak g}(\chi\rho)=\chi^{-1}(4){\mathfrak g}(\rho){\mathfrak g}(\chi^2)\;.$$
\end{lemma}
\begin{exercise}\begin{enumerate}
\item Prove this lemma.
\item Show that ${\mathfrak g}(\rho)^2=(-1)^{(q-1)/2}q$.
\end{enumerate}
\end{exercise}
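The lemma itself is easy to confirm numerically; the sketch below (my illustration; $p=13$ arbitrary) checks $\chi(4)J(\chi,\chi)=J(\chi,\rho)$ for every nontrivial character of ${\mathbb F}_{13}^*$.

```python
import cmath

# Numerical check of chi(4) J(chi,chi) = J(chi,rho) for every nontrivial
# character of F_13^*; rho = omega^{(p-1)/2} is the quadratic character.
p = 13
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(x, r):
    return cmath.exp(2j * cmath.pi * r * dlog[x % p] / (p - 1)) if x % p else 0

def jacobi(r1, r2):
    return sum(chi(x, r1) * chi(1 - x, r2) for x in range(p))

rho = (p - 1) // 2
for r in range(1, p - 1):
    assert abs(chi(4, r) * jacobi(r, r) - jacobi(r, rho)) < 1e-8
```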
\begin{proposition}\label{propjac34}\begin{enumerate}
\item Assume that $q\equiv1\pmod4$, let $\chi$ be one of the two
characters of order $4$ on ${\mathbb F}_q^*$, and write $J(\chi,\chi)=a+bi$.
Then $q=a^2+b^2$, $2\mid b$, and $a\equiv-1\pmod4$.
\item Assume that $q\equiv1\pmod3$, let $\chi$ be one of the two
characters of order $3$ on ${\mathbb F}_q^*$, and write $J(\chi,\chi)=a+b\rho$,
where $\rho=\zeta_3$ denotes here a primitive cube root of unity (not the
quadratic character).
Then $q=a^2-ab+b^2$, $3\mid b$, $a\equiv-1\pmod3$, and
$a+b\equiv q-2\pmod{9}$.
\item Let $p\equiv2\pmod3$, $q=p^{2m}\equiv1\pmod3$, and let $\chi$
be one of the two characters of order $3$ on ${\mathbb F}_q^*$. We have
$$J(\chi,\chi)=(-1)^{m-1}p^m=(-1)^{m-1}q^{1/2}\;.$$
\end{enumerate}\end{proposition}
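Item (1) lends itself to an exact check: a character of order $4$ takes values in $\{1,i,-1,-i\}$, so $J(\chi,\chi)$ can be computed with integer pairs in ${\mathbb Z}[i]$, with no floating point at all. A sketch (my illustration; the primes are arbitrary):

```python
# Exact check of item (1): chi of order 4 takes values in {1, i, -1, -i},
# so J(chi,chi) lies in Z[i] and can be computed with integer pairs.
I4 = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # 1, i, -1, -i

def jac44(p):                             # J(chi,chi) for chi of order 4
    g = next(h for h in range(2, p)
             if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
    dlog = {pow(g, k, p): k for k in range(p - 1)}
    a = b = 0
    for x in range(2, p):                 # x = 0, 1 contribute 0 (chi(0) = 0)
        u, v = I4[dlog[x] % 4], I4[dlog[p + 1 - x] % 4]
        a += u[0] * v[0] - u[1] * v[1]    # real part of chi(x) chi(1-x)
        b += u[0] * v[1] + u[1] * v[0]    # imaginary part
    return a, b

for p in (5, 13, 17, 29):                 # arbitrary primes p = 1 (mod 4)
    a, b = jac44(p)
    assert a * a + b * b == p and b % 2 == 0 and a % 4 == 3  # a = -1 (mod 4)
```

This also makes Fermat's two-squares theorem effective for these primes, since $a$ and $b$ are produced explicitly.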
\begin{corollary}\begin{enumerate}
\item (Fermat.) Any prime $p\equiv1\pmod4$ is a sum of two squares.
\item Any prime $p\equiv1\pmod3$ is of the form $a^2-ab+b^2$ with
$3\mid b$, or equivalently $4p=(2a-b)^2+27(b/3)^2$ is of the form
$c^2+27d^2$.
\item (Gauss.) $p\equiv1\pmod3$ is itself of the form $p=u^2+27v^2$
if and only if $2$ is a cube in ${\mathbb F}_p^*$.\end{enumerate}\end{corollary}
\begin{exercise} Assuming the proposition, prove the corollary.
\end{exercise}
\subsection{The Hasse--Davenport Relations}
All the results that we have given up to now on Gauss and Jacobi sums
have rather simple proofs, which is one of the reasons we have not
given them. Perhaps surprisingly, there exist other important
relations which are considerably more difficult to prove. Before
giving them, it is instructive to explain how one can ``guess''
their existence if one knows the classical theory of the gamma
function $\Gamma(s)$ (if you do not know it, skip this part or read
the appendix first).
Recall that $\Gamma(s)$ is defined (at least for $\Re(s)>0$) by
$$\Gamma(s)=\int_0^\infty e^{-t}t^s\,dt/t\;,$$ and the beta function
$B(a,b)$ by $B(a,b)=\int_0^1 t^{a-1}(1-t)^{b-1}\,dt$.
The function $e^{-t}$ transforms sums into products, so is an
\emph{additive} character, analogous to $\zeta_p^t$. The function
$t^s$ transforms products into products, so is a multiplicative
character, analogous to $\chi(t)$ ($dt/t$ is simply the invariant Haar
measure on ${\mathbb R}_{>0}$). Thus $\Gamma(s)$ is a continuous
analogue of the Gauss sum ${\mathfrak g}(\chi)$.
Similarly, since $J(\chi_1,\chi_2)=\sum_t\chi_1(t)\chi_2(1-t)$, we
see the similarity with the function $B$. Thus, it comes as no great
surprise that analogous formulas are valid on both
sides. To begin with, it is not difficult to show that
$B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$, exactly analogous to
$J(\chi_1,\chi_2)={\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)/{\mathfrak g}(\chi_1\chi_2)$.
The analogue of $\Gamma(s)\Gamma(-s)=-{\mathfrak p}i/(s\sin(s{\mathfrak p}i))$ is
$${\mathfrak g}(\chi){\mathfrak g}(\chi^{-1})=\chi(-1)q\;.$$
But it is well-known that the gamma function has a duplication formula
$\Gamma(s)\Gamma(s+1/2)=2^{1-2s}\Gamma(1/2)\Gamma(2s)$, and more generally
a multiplication (or distribution) formula. This duplication
formula is clearly the analogue of the formula
$${\mathfrak g}(\chi){\mathfrak g}(\chi\rho)=\chi^{-1}(4){\mathfrak g}(\rho){\mathfrak g}(\chi^2)$$
given above. The \emph{Hasse--Davenport product relation} is
the analogue of the distribution formula for the gamma function.
\begin{theorem} Let $\rho$ be a character of exact order $m$ dividing
$q-1$. For any character $\chi$ of ${\mathbb F}_q^*$ we have
$$\prod_{0\le a<m}{\mathfrak g}(\chi\rho^a)=\chi^{-m}(m)k(p,f,m)q^{(m-1)/2}{\mathfrak g}(\chi^m)\;,$$
where $k(p,f,m)$ is the fourth root of unity given by
$$k(p,f,m)=\begin{cases}
\leg{p}{m}^f&\text{ if $m$ is odd,}\\
(-1)^{f+1}\leg{(-1)^{m/2+1}m/2}{p}^f\leg{-1}{p}^{f/2}&\text{ if $m$ is even,}\\
\end{cases}$$
where $(-1)^{f/2}$ is to be understood as $i^f$ when $f$ is odd.
\end{theorem}
\begin{remark} For some reason, in the literature this formula is usually
stated in the weaker form where the constant $k(p,f,m)$ is not
given explicitly.\end{remark}
Contrary to the proof of the distribution formula for the gamma
function, the proof of this theorem is quite long. There are
essentially two completely different proofs: one using classical
algebraic number theory, and one using $p$-adic analysis. The latter
is simpler and gives directly the value of $k(p,f,m)$. See
Section 3.7.2 of \cite{Coh3} and Section 11.7.4 of \cite{Coh4} for both
detailed proofs.
Gauss sums satisfy another type of nontrivial relation, also due to
Hasse--Davenport, the so-called \emph{lifting relation}, as follows:
\begin{theorem} Let ${\mathbb F}_{q^n}/{\mathbb F}_q$ be an extension of finite fields,
let $\chi$ be a character of ${\mathbb F}_q^*$, and define the \emph{lift}
of $\chi$ to ${\mathbb F}_{q^n}$ by the formula
$\chi^{(n)}=\chi\circ\N_{{\mathbb F}_{q^n}/{\mathbb F}_q}$. We have
$${\mathfrak g}(\chi^{(n)})=(-1)^{n-1}{\mathfrak g}(\chi)^n\;.$$
\end{theorem}
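The case $n=2$, $q=p$ can be checked directly: realizing ${\mathbb F}_{p^2}$ as ${\mathbb F}_p(\sqrt{c})$ with $c$ a nonresidue, one has $\Tr(a+b\sqrt{c})=2a$ and $\N(a+b\sqrt{c})=a^2-cb^2$, so the lifted Gauss sum is a double sum over ${\mathbb F}_p$. A numerical sketch (my illustration; $p=7$ and $\chi=\omega^2$ are arbitrary choices):

```python
import cmath

# Numerical check of the lifting relation for n = 2 and q = p = 7:
# g(chi o N) should equal -g(chi)^2.
p = 7
c = next(d for d in range(2, p) if pow(d, (p - 1) // 2, p) == p - 1)  # nonresidue
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}
r = 2                                     # chi = omega^r, an arbitrary choice

def chi(x):
    return cmath.exp(2j * cmath.pi * r * dlog[x % p] / (p - 1)) if x % p else 0

def zeta(t):                              # zeta_p^t
    return cmath.exp(2j * cmath.pi * (t % p) / p)

g1 = sum(chi(x) * zeta(x) for x in range(1, p))
g2 = sum(chi(a * a - c * b * b) * zeta(2 * a)        # sum over F_{p^2}^*
         for a in range(p) for b in range(p) if (a, b) != (0, 0))
assert abs(g2 + g1 ** 2) < 1e-9           # g(chi^{(2)}) = -g(chi)^2
```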
This relation is essential in the initial proof of the Weil conjectures
for diagonal hypersurfaces done by Weil himself. This is not surprising,
since we have seen in Theorem \ref{thmquasi} that $|V({\mathbb F}_q)|$ is closely
related to Jacobi sums, hence also to Gauss sums.
\section{Practical Computations of Gauss and Jacobi Sums}
As above, let $\omega$ be a character of order exactly $q-1$, so that
$\omega$ is a generator of the group of characters of ${\mathbb F}_q^*$.
For notational simplicity, we will write $J(r_1,\dotsc,r_k)$ instead of
$J(\omega^{r_1},\dotsc,\omega^{r_k})$. Let us consider the specific example of
efficient computation of the quantity
$$S(q;z)=\sum_{0\le n\le q-2}\omega^{-n}(z)J_5(n,n,n,n,n)\;,$$
which occurs in the computation of the Hasse--Weil zeta function of
a quasi-diagonal threefold, see Theorem \ref{thmquasi}.
\subsection{Elementary Methods}
By the recursion of Corollary \ref{corjacrecur}, we have \emph{generically}
(i.e., except for special values of $n$ which will be considered separately):
$$J_5(n,n,n,n,n)=J(n,n)J(2n,n)J(3n,n)J(4n,n)\;.$$
Since $J(n,an)=\sum_{x}\omega^n(x)\omega^{an}(1-x)$, the cost of
computing $J_5$ as written is $\Os(q)$, where here and in what follows we
write $\Os(q^\alpha)$ to mean $O(q^{\alpha+\varepsilon})$ for all $\varepsilon>0$
(soft-$O$ notation). Thus computing $S(q;z)$ by this direct
method requires time $\Os(q^2)$.
We can however do much better. Since the values of the characters are all in
${\mathbb Z}[\zeta_{q-1}]$, we work in this ring. In fact, even better, we work in the
ring with zero divisors $R={\mathbb Z}[X]/(X^{q-1}-1)$, together with the natural
surjective map sending the class of $X$ in $R$ to $\zeta_{q-1}$. Indeed, let $g$
be the generator of ${\mathbb F}_q^*$ such that $\omega(g)=\zeta_{q-1}$. We have,
again \emph{generically}:
$$J(n,an)=\sum_{1\le u\le q-2}\omega^n(g^u)\omega^{an}(1-g^u)
=\sum_{1\le u\le q-2}\zeta_{q-1}^{nu+an\log_g(1-g^u)}\;,$$
where $\log_g$ is the \emph{discrete logarithm} to base $g$ defined modulo
$q-1$, i.e., such that $g^{\log_g(x)}=x$. If $(q-1)\nmid n$ but $(q-1)\mid an$
we have $\omega^{an}=\varepsilon$, so we must add the contribution of $u=0$, which is $1$,
and if $(q-1)\mid n$ we must add the contributions of $u=0$ \emph{and} of
$x=0$, which give $2$ (recall the \emph{essential} convention that
$\chi(0)=0$ if $\chi\ne\varepsilon$ and $\varepsilon(0)=1$, see Definition \ref{defeps}).
In other words, if we set
$$P_a(X)=\sum_{1\le u\le q-2}X^{(u+a\log_g(1-g^u))\bmod{(q-1)}}\in R\;,$$
we have
$$J(n,an)=P_a(\zeta_{q-1}^n)+\begin{cases}
0&\text{\quad if $(q-1)\nmid an$\;,}\\
1&\text{\quad if $(q-1)\mid an$ but $(q-1)\nmid n$\;, and}\\
2&\text{\quad if $(q-1)\mid n$\;.}\end{cases}$$
Thus, if we set finally
$$P(X)=P_1(X)P_2(X)P_3(X)P_4(X)\bmod{X^{q-1}}\in R\;,$$
we have (still generically) $J_5(n,n,n,n,n)=P(\zeta_{q-1}^n)$.
Assume for the moment that this is true for all $n$ (we will correct this
below), let $\ell=\log_g(z)$, so that $\omega(z)=\omega(g^\ell)=\zeta_{q-1}^\ell$,
and write
$$P(X)=\sum_{0\le j\le q-2}a_jX^j\;.$$
We thus have
$$\omega^{-n}(z)J_5(n,n,n,n,n)=\zeta_{q-1}^{-n\ell}\sum_{0\le j\le q-2}a_j\zeta_{q-1}^{nj}
=\sum_{0\le j\le q-2}a_j\zeta_{q-1}^{n(j-\ell)}\;,$$
hence
\begin{align*}S(q;z)&=\sum_{0\le n\le q-2}\omega^{-n}(z)J_5(n,n,n,n,n)
=\sum_{0\le j\le q-2}a_j\sum_{0\le n\le q-2}\zeta_{q-1}^{n(j-\ell)}\\
&=(q-1)\sum_{0\le j\le q-2,\ j\equiv\ell\pmod{q-1}}a_j=(q-1)a_{\ell}\;.\end{align*}
The result is thus immediate as soon as we know the coefficients of the
polynomial $P$. Since there exist fast methods for computing discrete
logarithms, this leads to a $\Os(q)$ method for computing $S(q;z)$.
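The key generic identity $J(n,an)=P_a(\zeta_{q-1}^n)$ underlying this method is easy to test. The sketch below (my illustration; $p=11$ and the generic pair $a=2$, $n=3$ are arbitrary) builds $P_a$ from a discrete logarithm table and compares with the direct character-sum definition:

```python
import cmath

# Check of the generic identity J(n, a*n) = P_a(zeta_{q-1}^n) for q = p = 11;
# a, n are chosen so that (q-1) divides neither n nor a*n (generic case).
p = 11
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def P(a):                                # coefficient list of P_a(X) in R
    coeff = [0] * (p - 1)
    for u in range(1, p - 1):
        coeff[(u + a * dlog[(1 - pow(g, u, p)) % p]) % (p - 1)] += 1
    return coeff

def omega(x, n):
    return cmath.exp(2j * cmath.pi * n * dlog[x % p] / (p - 1)) if x % p else 0

a, n = 2, 3
direct = sum(omega(x, n) * omega(1 - x, a * n) for x in range(p))  # J(n, a*n)
viaP = sum(m * cmath.exp(2j * cmath.pi * n * j / (p - 1))
           for j, m in enumerate(P(a)))
assert abs(direct - viaP) < 1e-9
```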
To obtain the correct formula, we need to adjust for the special $n$
for which $J_5(n,n,n,n,n)$ is not equal to $J(n,n)J(n,2n)J(n,3n)J(n,4n)$,
namely those for which $(q-1)\mid an$ for some $a$ with $2\le a\le 4$,
together with those for which $(q-1)\mid 5n$. This is easy but boring, and
should be skipped on first reading.
\begin{enumerate}\item For $n=0$ we have $J_5(n,n,n,n,n)=q^4$, and on the
other hand $P(1)=(J(0,0)-2)^4=(q-2)^4$, so the correction term is
$q^4-(q-2)^4=8(q-1)(q^2-2q+2)$.
\item For $n=(q-1)/2$ (if $q$ is odd) we have
$$J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})={\mathfrak g}(\omega^n)^4={\mathfrak g}(\rho)^4$$
since $5n\equiv n\pmod{q-1}$, where $\rho$ is the character of order $2$,
and we have ${\mathfrak g}(\rho)^2=(-1)^{(q-1)/2}q$, so $J_5(n,n,n,n,n)=q^2$.
On the other hand
\begin{align*}P(\zeta_{q-1}^n)&=J(\rho,\rho)(J(\rho,\varepsilon)-1)J(\rho,\rho)(J(\rho,\varepsilon)-1)\\
&=J(\rho,\rho)^2={\mathfrak g}(\rho)^4/q^2=1\;,\end{align*}
so the correction term is $\rho(z)(q^2-1)$.
\item For $n=\pm(q-1)/3$ (if $q\equiv1\pmod3$), writing $\chi_3=\omega^{(q-1)/3}$,
which is one of the two cubic characters, we have
\begin{align*}J_5(n,n,n,n,n)&={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{-n})\\
&={\mathfrak g}(\omega^n)^6/({\mathfrak g}(\omega^{-n}){\mathfrak g}(\omega^n))={\mathfrak g}(\omega^n)^6/q\\
&=qJ(n,n)^2\end{align*}
(check all this). On the other hand
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)J(n,2n)(J(n,3n)-1)J(n,4n)\\
&=\dfrac{{\mathfrak g}(\omega^n)^2}{{\mathfrak g}(\omega^{2n})}\dfrac{{\mathfrak g}(\omega^n){\mathfrak g}(\omega^{2n})}{q}\dfrac{{\mathfrak g}(\omega^n)^2}{{\mathfrak g}(\omega^{2n})}\\
&=\dfrac{{\mathfrak g}(\omega^n)^5}{q{\mathfrak g}(\omega^{-n})}=\dfrac{{\mathfrak g}(\omega^n)^6}{q^2}=J(n,n)^2\;,\end{align*}
so the correction term is
$2(q-1)\Re(\chi_3^{-1}(z)J(\chi_3,\chi_3)^2)$.
\item For $n=\pm(q-1)/4$ (if $q\equiv1\pmod4$), writing $\chi_4=\omega^{(q-1)/4}$,
which is one of the two quartic characters, we have
$$J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})={\mathfrak g}(\omega^n)^4
=\omega^n(-1)qJ_3(n,n,n)\;.$$
In addition, we have
$$J_3(n,n,n)=J(n,n)J(n,2n)=\omega^n(4)J(n,n)^2=\rho(2)J(n,n)^2\;,$$
so
$$J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^4=\omega^n(-1)q\rho(2)J(n,n)^2\;.$$
Note that
$$\chi_4(-1)=\chi_4^{-1}(-1)=\rho(2)=(-1)^{(q-1)/4}\;,$$
(Exercise: prove it!), so that $\omega^n(-1)\rho(2)=1$ and the above
simplifies to $J_5(n,n,n,n,n)=qJ(n,n)^2$.
On the other hand,
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)J(n,2n)J(n,3n)(J(n,4n)-1)\\
&=\dfrac{{\mathfrak g}(\omega^n)^2}{{\mathfrak g}(\omega^{2n})}\dfrac{{\mathfrak g}(\omega^n){\mathfrak g}(\omega^{2n})}{{\mathfrak g}(\omega^{3n})}
\dfrac{{\mathfrak g}(\omega^n){\mathfrak g}(\omega^{3n})}{q}\\
&=\dfrac{{\mathfrak g}(\omega^n)^4}{q}=\omega^n(-1)\rho(2)J(n,n)^2=J(n,n)^2\end{align*}
as above, so the correction term is
$2(q-1)\Re(\chi_4^{-1}(z)J(\chi_4,\chi_4)^2)$.
\item For $n=a(q-1)/5$ with $1\le a\le 4$ (if $q\equiv1\pmod5$), writing
$\chi_5=\omega^{(q-1)/5}$ we have
$J_5(n,n,n,n,n)=-{\mathfrak g}(\chi_5^a)^5/q$, while abbreviating
${\mathfrak g}(\omega^{mn})$ to $g(mn)$ we have
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)J(n,2n)J(n,3n)J(n,4n)\\
&=-\dfrac{g(n)^2}{g(2n)}\dfrac{g(n)g(2n)}{g(3n)}\dfrac{g(n)g(3n)}{g(4n)}\dfrac{g(n)g(4n)}{q}\\
&=-\dfrac{g(n)^5}{q}\;,\end{align*}
so there is no correction term.\end{enumerate}
Summarizing, we have shown the following:
\begin{proposition} Let $S(q;z)=\sum_{0\le n\le q-2}\omega^{-n}(z)J_5(n,n,n,n,n)$.
Let $\ell=\log_g(z)$ and let $P(X)=\sum_{0\le j\le q-2}a_jX^j$ be the polynomial
defined above. We have
$$S(q;z)=(q-1)(T_1+T_2+T_3+T_4+a_{\ell})\;,$$
where $T_m=0$ if $m\nmid(q-1)$ and otherwise
\begin{align*}T_1&=8(q^2-2q+2)\;,\quad T_2=\rho(z)(q+1)\;,\\
T_3&=2\Re(\chi_3^{-1}(z)J(\chi_3,\chi_3)^2)\;,\text{\quad and\quad}T_4=2\Re(\chi_4^{-1}(z)J(\chi_4,\chi_4)^2)\;,\end{align*}
with the above notation.
\end{proposition}
Note that thanks to Proposition \ref{propjac34}, the supplementary Jacobi
sums $J(\chi_3,\chi_3)$ and $J(\chi_4,\chi_4)$ can be computed in logarithmic
time using Cornacchia's algorithm (this is not quite true: one needs a
slight additional computation, do you see why?).
Note also, for future reference, that the above proposition \emph{proves} that
$(q-1)\mid S(q;z)$, which is not clear from the definition.
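The divisibility $(q-1)\mid S(q;z)$ can be confirmed by brute force for a tiny prime, straight from the definition of $J_5$ as a $4$-fold character sum. A sketch (my illustration; $p=7$, $z=3$ are arbitrary, and the $O(p^5)$ loop is only a sanity check, not the algorithm of the proposition):

```python
import cmath

# Brute-force check for tiny p that (p-1) | S(p; z), straight from the
# definition of J_5; O(p^5), so only a sanity check.
p, z = 7, 3
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}

def omega(x, n):                         # extended character: eps(0) = 1
    if x % p:
        return cmath.exp(2j * cmath.pi * n * dlog[x % p] / (p - 1))
    return 1.0 if n % (p - 1) == 0 else 0.0

Sc = 0
for n in range(p - 1):
    J5 = sum(omega(x1, n) * omega(x2, n) * omega(x3, n) * omega(x4, n)
             * omega(1 - x1 - x2 - x3 - x4, n)
             for x1 in range(p) for x2 in range(p)
             for x3 in range(p) for x4 in range(p))
    Sc += omega(z, -n) * J5
assert abs(Sc.imag) < 1e-6               # S(p; z) is a rational integer
S = round(Sc.real)
assert S % (p - 1) == 0
```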
\subsection{Sample Implementations}
For simplicity, assume that $q=p$ is prime. I have written simple
implementations of the computation of $S(q;z)$. In the first implementation,
I use the na\"\i ve formula expressing $J_5$ in terms of $J(n,an)$ and sum on
$n$, except that I use the reciprocity formula which gives
$J_5(-n,-n,-n,-n,-n)$ in terms of $J_5(n,n,n,n,n)$ to sum only over $(p-1)/2$
terms instead of $p-1$. Of course to avoid recomputation, I precompute
a discrete logarithm table.
The timings for $p\approx 10^k$ for $k=2$, $3$, and $4$ are
$0.03$, $1.56$, and $149$ seconds respectively, compatible with $\Os(q^2)$
time.
On the other hand, implementing in a straightforward manner the algorithm
given by the above proposition gives timings for $p\approx 10^k$ for
$k=2$, $3$, $4$, $5$, $6$, and $7$ of $0$, $0.02$, $0.08$, $0.85$, $9.90$,
and $123$ seconds respectively, of course much faster and compatible with
$\Os(q)$ time.
The main drawback of this method is that it requires $O(q)$ storage: it is
thus applicable only for $q\le 10^8$, say, which is more than sufficient
for many applications, but of course not for all. For instance, the case
$p\approx 10^7$ mentioned above already required a few gigabytes of storage.
\subsection{Using Theta Functions}
A completely different way of computing Gauss and Jacobi sums has been
suggested by S.~Louboutin. It is related to the theory of $L$-functions of
Dirichlet characters that we study below, and in our context is valid only
for $q=p$ prime, not for prime powers, but in the context of Dirichlet
characters it is valid in general (simply replace $p$ by $N$ and ${\mathbb F}_p$
by ${\mathbb Z}/N{\mathbb Z}$ in the following formulas when $\chi$ is a primitive character
of conductor $N$, see below for definitions):
\begin{definition} Let $\chi$ be a character on ${\mathbb F}_p$, and let $e=0$ or $1$
be such that $\chi(-1)=(-1)^e$. The \emph{theta function} associated to
$\chi$ is the function defined on the upper half-plane by
$$\Theta(\chi,\tau)=2\sum_{m\ge1}m^e\chi(m)e^{i\pi m^2\tau/p}\;.$$
\end{definition}
The main property of this function, which is a direct consequence of the
\emph{Poisson summation formula}, and is equivalent to the functional
equation of Dirichlet $L$-functions, is as follows:
\begin{proposition} We have the functional equation
$$\Theta(\chi,-1/\tau)=\omega(\chi)(\tau/i)^{(2e+1)/2}\Theta(\chi^{-1},\tau)\;,$$
with the principal determination of the square root, and where
$\omega(\chi)={\mathfrak g}(\chi)/(i^ep^{1/2})$ is the so-called \emph{root number}.
\end{proposition}
\begin{corollary} If $\chi(-1)=1$ we have
$${\mathfrak g}(\chi)=p^{1/2}\dfrac{\sum_{m\ge1}\chi(m)\exp(-\pi m^2/pt)}{t^{1/2}\sum_{m\ge1}\chi^{-1}(m)\exp(-\pi m^2t/p)}$$
and if $\chi(-1)=-1$ we have
$${\mathfrak g}(\chi)=p^{1/2}i\dfrac{\sum_{m\ge1}\chi(m)m\exp(-\pi m^2/pt)}{t^{3/2}\sum_{m\ge1}\chi^{-1}(m)m\exp(-\pi m^2t/p)}$$
for any $t$ such that the denominator does not vanish.
\end{corollary}
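The even case at $t=1$ is easy to test numerically, since the theta series converge extremely fast. A sketch (my illustration; $p=13$ with a character of order $3$, so $\chi(-1)=1$, and the truncation point $200$ is a generous arbitrary choice):

```python
import cmath, math

# Check of the even case at t = 1 for p = 13 and a character of order 3
# (so chi(-1) = 1): compare the theta-quotient with the direct Gauss sum.
p = 13
g = next(h for h in range(2, p)
         if len({pow(h, k, p) for k in range(p - 1)}) == p - 1)
dlog = {pow(g, k, p): k for k in range(p - 1)}
r = (p - 1) // 3                         # chi = omega^r has order 3

def chi(x, s=1):                         # s = -1 gives chi^{-1}; chi(m) = 0 if p | m
    return cmath.exp(2j * cmath.pi * s * r * dlog[x % p] / (p - 1)) if x % p else 0

direct = sum(chi(x) * cmath.exp(2j * cmath.pi * x / p) for x in range(1, p))
num = sum(chi(m) * math.exp(-math.pi * m * m / p) for m in range(1, 200))
den = sum(chi(m, -1) * math.exp(-math.pi * m * m / p) for m in range(1, 200))
assert abs(direct - math.sqrt(p) * num / den) < 1e-9
```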
Note that the optimal choice of $t$ is $t=1$, and (at least for $p$ prime)
it seems that the denominator never vanishes (there are counterexamples
when $p$ is not prime, but apparently only four, see \cite{Coh-Zag}).
It follows from this corollary that ${\mathfrak g}(\chi)$ can be computed numerically as
a complex number in $\Os(p^{1/2})$ operations. Thus,
if $\chi_1$ and $\chi_2$ are nontrivial characters such that
$\chi_1\chi_2\ne\varepsilon$ (otherwise $J(\chi_1,\chi_2)$ is trivial to compute),
the formula $J(\chi_1,\chi_2)={\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)/{\mathfrak g}(\chi_1\chi_2)$ allows
the computation of $J_2$ \emph{numerically} as a complex number in
$\Os(p^{1/2})$ operations.
To recover $J$ itself as an algebraic number, we could either compute all of its
conjugates (but this would require more time than the direct computation of
$J$), or use the LLL algorithm (which, although fast, would also
require some time). In practice, to perform computations such as that of
the sum $S(q;z)$ above, we only need
$J$ to sufficient accuracy: we perform all the elementary operations in
${\mathbb C}$, and since we know that at the end the result will be an integer
for which we know an upper bound, we thus obtain a proven exact result.
More generally, we have generically $J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})$,
which can thus be computed in $\Os(p^{1/2})$ operations. It follows that
$S(p;z)$ can be computed in $\Os(p^{3/2})$ operations, which is slower than
the elementary method seen above. The main advantage is that we do not need
much storage. More precisely, we want to compute $S(p;z)$ to sufficiently
small accuracy that we can recognize it as an integer, so a priori up to
an absolute error of $0.5$. However, we have seen that $(p-1)\mid S(p;z)$:
it is thus sufficient to have an absolute error less than $(p-1)/2$,
hence at worst each of the $p-1$ terms in the sum to an absolute error less
than $1/2$. Since generically $|J_5(n,n,n,n,n)|=p^2$, we need a relative
error less than $1/(2p^2)$, so less than $1/(10p^2)$ on each Gauss sum.
In practice of course this is overly pessimistic, but it does not matter.
For $p\le 10^9$, this means that $19$ decimal digits suffice.
The main term in the theta function computation (with $t=1$) is
$\exp(-\pi m^2/p)$, so we need $\exp(-\pi m^2/p)\le 1/(100p^2)$, say, in other
words $\pi m^2/p\ge 4.7+2\log(p)$, so $m^2\ge p(1.5+0.7\log(p))$.
This means that we will need the values of $\omega(m)$ only up to this limit,
of the order of $O((p\log(p))^{1/2})$, considerably smaller than $p$.
Thus, instead of computing a full discrete logarithm table, which takes
some time but more importantly a lot of memory, we compute only discrete
logarithms up to that limit, using specific algorithms for doing so
which exist in the literature, some of which are quite easy.
A straightforward implementation of this method gives timings for
$k=2$, $3$, $4$, and $5$ of $0.02$, $0.40$, $16.2$, and $663$ seconds
respectively, compatible with $\Os(p^{3/2})$ time. This is faster than
the completely na\"\i ve method, but slower than the method explained above.
Its advantage is that it requires much less memory. For $p$ around $10^7$,
however, it is much too slow so this method is rather useless. We will see
that its usefulness is mainly in the context where it was invented, i.e.,
for $L$-functions of Dirichlet characters.
\subsection{Using the Gross--Koblitz Formula}
This section is of a higher mathematical level than the
preceding ones, but is very important since it gives the best method for
computing Gauss (and Jacobi) sums. We refer to Sections 11.6 and 11.7 of
\cite{Coh4} for complete details, and urge the reader to try to understand
what follows.
In the preceding sections, we have considered Gauss sums as belonging to a
number of different rings: the ring ${\mathbb Z}[\zeta_{q-1},\zeta_p]$ or the field ${\mathbb C}$ of
complex numbers, and for Jacobi sums the ring ${\mathbb Z}[\zeta_{q-1}]$, but also the
ring ${\mathbb Z}[X]/(X^{q-1}-1)$, and again the field ${\mathbb C}$.
In number theory there exist other algebraically closed fields which are
useful in many contexts, the fields ${\mathbb C}_\mathbf ell$ of $\mathbf ell$-adic numbers, one
for each prime number $\mathbf ell$. These fields come with a topology and analysis
which are rather special: one of
the main things to remember is that a sequence of elements tends to $0$
if and only the $\mathbf ell$-adic valuation of the elements (the largest exponent
of $\mathbf ell$ dividing them) tends to infinity. For instance $2^m$ tends to $0$
in ${\mathbb C}_2$, but in no other ${\mathbb C}_{\mathbf ell}$, and $15^m$ tends to $0$ in
${\mathbb C}_3$ and in ${\mathbb C}_5$.
The most important subrings of ${\mathbb C}_{\mathbf ell}$ are the ring ${\mathbb Z}_{\mathbf ell}$
of $\mathbf ell$-adic integers, the elements of which can be written as
$x=a_0+a_1\mathbf ell+\cdots+a_k\mathbf ell^k+\cdots$ with $a_j\in[0,\mathbf ell-1]$, and its field
of fractions ${\mathbb Q}_{\mathbf ell}$, which contains ${\mathbb Q}$, whose elements can be
represented in a similar way as $x=a_{-m}\mathbf ell^{-m}+a_{-(m-1)}\mathbf ell^{-(m-1)}+\cdots+a_{-1}\mathbf ell^{-1}+a_0+a_1\mathbf ell+\cdots.$
In dealing with Gauss and Jacobi sums over ${\mathbb F}_q$ with $q=p^f$,
the only ${\mathbb C}_\ell$ which is of use for us is the one with $\ell=p$
(in highbrow language, we are going to use implicitly \emph{crystalline}
$p$-adic methods, while for $\ell\ne p$ it would be \emph{\'etale} $\ell$-adic
methods).
Apart from this relatively strange topology, many definitions and results
valid on ${\mathbb C}$ have analogues in ${\mathbb C}_p$. The main object that we will
need in our context is the analogue of the gamma function, naturally called
the $p$-adic gamma function, in the present case due to Morita (there is
another one, see Section 11.5 of \cite{Coh4}), and denoted $\Gamma_p$.
Its definition is in fact quite simple:
\begin{definition} For $s\in{\mathbb Z}_p$ we define
$$\Gamma_p(s)=\lim_{m\to s}(-1)^m\prod_{\substack{0\le k<m\\ p\nmid k}}k\;,$$
where the limit is taken over any sequence of positive integers $m$
tending to $s$ for the $p$-adic topology.\end{definition}
It is of course necessary to show that this definition makes sense,
but this is not difficult, and most of the important properties
of $\Gamma_p(s)$, analogous to those of $\Gamma(s)$, can be deduced from it.
\begin{exercise} Choose $p=5$ and $s=-1/4$, so that $p$-adically
$s=1/(1-5)=1+5+5^2+5^3+\cdots$.
\begin{enumerate}\item Compute the right-hand side of
the above definition with small $5$-adic accuracy for $m=1$, $1+5$,
and $1+5+5^2$.
\item It is in fact easy to compute that
$$\Gamma_5(-1/4)=4 + 4\cdot5 + 5^3 + 3\cdot5^4 + 2\cdot5^5 + 2\cdot5^6 + 2\cdot5^7 + 4\cdot5^8+\cdots$$
Using this, show that $\Gamma_5(-1/4)^2/16$ seems to be a $5$-adic root of
the polynomial $5X^2+4X+1$. This is in fact true, see the Gross--Koblitz
formula below.\end{enumerate}
\end{exercise}
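The finite products appearing in the limit defining $\Gamma_p$ are easy to evaluate; the following sketch (Python; the function name is mine) computes them modulo a power of $p$ and can be used to check the exercise numerically:

```python
def gamma_p_approx(m, p, prec):
    """(-1)^m times the product of k for 0 <= k < m with p not dividing k,
    reduced modulo p**prec.  For positive integers m tending p-adically
    to s, these values tend to Gamma_p(s)."""
    mod = p ** prec
    pr = 1
    for k in range(1, m):
        if k % p:
            pr = pr * k % mod
    return (-1) ** m * pr % mod
```

For instance `gamma_p_approx(1 + 5 + 25, 5, 3)` returns $24=4+4\cdot5+0\cdot5^2$, matching the first digits of $\Gamma_5(-1/4)$ given in the exercise, and with this value one checks that $\Gamma_5(-1/4)^2/16$ annihilates $5X^2+4X+1$ modulo $125$.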
We need a much deeper property of $\Gamma_p(s)$ known as the
Gross--Koblitz formula: it is in fact an analogue of a formula for
$\Gamma(s)$ known as the Chowla--Selberg formula, and it is also closely
related to the Davenport--Hasse relations that we have seen above.
The proof of the Gross--Koblitz formula was initially given using tools of
crystalline cohomology, but an elementary proof due to A.~Robert now
exists, see for instance Section 11.7 of \cite{Coh4} once again.
The Gross--Koblitz formula tells us that certain products of $p$-adic gamma
functions at \emph{rational} arguments are in fact \emph{algebraic
numbers}, more precisely \emph{Gauss sums} (explaining their
importance for us). This is quite surprising since usually
transcendental functions such as $\Gamma_p$ take transcendental values.
To give a specific example, we have $\Gamma_5(1/4)^2=-2+\sqrt{-1}$,
where $\sqrt{-1}$ is the square root in ${\mathbb Z}_5$ congruent to
$3$ modulo $5$. In view of the elementary properties of the
$p$-adic gamma function, this is equivalent to the result stated
in the above exercise as $\Gamma_5(-1/4)^2=-(16/5)(2+\sqrt{-1})$.
Before stating the formula we need to collect a number of facts,
both on classical algebraic number theory and on $p$-adic analysis.
None are difficult to prove, see Chapter 4 of \cite{Coh3}. Recall that
$q=p^f$.
$\bullet$ We let $K={\mathbb Q}(\zeta_p)$ and $L=K(\zeta_{q-1})={\mathbb Q}(\zeta_{q-1},\zeta_p)={\mathbb Q}(\zeta_{p(q-1)})$, so that $L/K$ is an extension of degree $\phi(q-1)$.
There exists a unique prime ideal ${\mathfrak p}$ of $K$ above $p$, and we have
${\mathfrak p}=(1-\zeta_p){\mathbb Z}_K$ and ${\mathfrak p}^{p-1}=p{\mathbb Z}_K$, and ${\mathbb Z}_K/{\mathfrak p}\simeq{\mathbb F}_p$. The prime
ideal ${\mathfrak p}$ splits into a product of $g=\phi(q-1)/f$ prime ideals
${\mathfrak P}_j$ of degree $f$ in the extension $L/K$, i.e., ${\mathfrak p}{\mathbb Z}_L={\mathfrak P}_1\cdots{\mathfrak P}_g$,
and for any prime ideal ${\mathfrak P}={\mathfrak P}_j$ we have ${\mathbb Z}_L/{\mathfrak P}\simeq{\mathbb F}_q$.
\begin{exercise} Prove directly that for any $f$ we have $f\mid\phi(p^f-1)$.
\end{exercise}
$\bullet$ Fix one of the prime ideals ${\mathfrak P}$ as above. There exists a unique
group isomorphism $\omega=\omega_{\mathfrak P}$ from $({\mathbb Z}_L/{\mathfrak P})^*$ to the group of
$(q-1)$st roots of unity in $L$, such that for all $x\in({\mathbb Z}_L/{\mathfrak P})^*$ we have
$\omega(x)\equiv x\pmod{\mathfrak P}$. It is called the \emph{Teichm\"uller character},
and it can be considered as a character of order $q-1$ on
${\mathbb F}_q^*\simeq({\mathbb Z}_L/{\mathfrak P})^*$. We can thus \emph{instantiate} the definition of
a Gauss sum over ${\mathbb F}_q$ by defining it as ${\mathfrak g}(\omega_{\mathfrak P}^{-r})\in L$.
$\bullet$ Let $\zeta_p$ be a primitive $p$th root of unity in ${\mathbb C}_p$,
fixed once and for all. There exists a unique $\pi\in{\mathbb Z}[\zeta_p]$
satisfying $\pi^{p-1}=-p$, $\pi\equiv1-\zeta_p\pmod{\pi^2}$, and
we set $K_{\mathfrak p}={\mathbb Q}_p(\pi)={\mathbb Q}_p(\zeta_p)$, and $L_{\mathfrak P}$ the \emph{completion}
of $L$ at ${\mathfrak P}$. The field extension $L_{\mathfrak P}/K_{\mathfrak p}$ is Galois, with Galois
group isomorphic to ${\mathbb Z}/f{\mathbb Z}$ (which is the same as the Galois group of
${\mathbb F}_q/{\mathbb F}_p$, where ${\mathbb F}_p$ (resp., ${\mathbb F}_q$) is the so-called
\emph{residue field} of $K$ (resp., $L$)).
$\bullet$ We set the following:
\begin{definition} We define the \emph{$p$-adic Gauss sum} by
$${\mathfrak g}_q(r)=\sum_{x\in L_{\mathfrak P},\ x^{q-1}=1}x^{-r}\zeta_p^{\Tr_{L_{\mathfrak P}/K_{\mathfrak p}}(x)}\in L_{\mathfrak P}\;.$$
\end{definition}
Note that this depends on the choice of $\zeta_p$, or equivalently of $\pi$.
Since ${\mathfrak g}_q(r)$ and ${\mathfrak g}(\omega_{\mathfrak P}^{-r})$ are algebraic numbers, it is
clear that they are equal, although viewed in fields having different
topologies. Thus, results about ${\mathfrak g}_q(r)$ translate immediately into results
about ${\mathfrak g}(\omega_{\mathfrak P}^{-r})$, hence about general Gauss sums over finite fields.
The Gross--Koblitz formula is as follows:
\begin{theorem}[Gross--Koblitz] Denote by $s(r)$ the sum of digits in base $p$
of the integer $r\bmod{(q-1)}$, i.e., of the unique integer $r'$ such that
$r'\equiv r\pmod{q-1}$ and $0\le r'<q-1$. We have
$${\mathfrak g}_q(r)=-\pi^{s(r)}\prod_{0\le i<f}\Gamma_p\left(\left\{\dfrac{p^{f-i}r}{q-1}\right\}\right)\;,$$
where $\{x\}$ denotes the fractional part of $x$.\end{theorem}
Let us show how this can be used to compute Gauss or Jacobi sums, and in
particular our sum $S(q;z)$. Assume for simplicity that $f=1$, in other
words that $q=p$: the right-hand
side is thus equal to $-\pi^{s(r)}\Gamma_p(\{pr/(p-1)\})$. Since we can always
choose $r$ such that $0\le r<p-1$, we have $s(r)=r$ and
$\{pr/(p-1)\}=\{r+r/(p-1)\}=r/(p-1)$, so the RHS is $-\pi^r\Gamma_p(r/(p-1))$.
Now an easy property of $\Gamma_p$ is that it is differentiable: recall that $p$
is ``small'' in the $p$-adic topology, so $r/(p-1)$ is close to $-r$, more
precisely $r/(p-1)=-r+pr/(p-1)$ (this is how we obtained it in the first
place!). Thus in particular, if $p>2$ we have the Taylor expansion
\begin{align*}\Gamma_p(r/(p-1))&=\Gamma_p(-r)+(pr/(p-1))\Gamma'_p(-r)+O(p^2)\\
&=\Gamma_p(-r)-pr\Gamma'_p(-r)+O(p^2)\;.\end{align*}
Since ${\mathfrak g}_q(r)$ depends only on $r$ modulo $p-1$, we will assume that
$0\le r<p-1$. In that case it is easy to show from the definition that
$$\Gamma_p(-r)=1/r!\text{\quad and\quad}\Gamma'_p(-r)=(-\gamma_p+H_r)/r!\;,$$
where $H_r=\sum_{1\le n\le r}1/n$ is the harmonic sum, and $\gamma_p=-\Gamma'_p(0)$
is the $p$-adic analogue of Euler's constant.
\begin{exercise} Prove these formulas, as well as the congruence for
$\gamma_p$ given below.
\end{exercise}
There exist infinite ($p$-adic)
series enabling accurate computation of $\gamma_p$, but since we only need it
modulo $p$, we use the easily proved congruence
$\gamma_p\equiv((p-1)!+1)/p=W_p\pmod{p}$, the so-called \emph{Wilson quotient}.
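Both the Wilson quotient and the harmonic sums used below are quick to compute; a small sketch (Python; the function names are mine):

```python
from math import factorial

def wilson_quotient(p):
    """W_p = ((p-1)! + 1)/p; this is an integer by Wilson's theorem."""
    q, r = divmod(factorial(p - 1) + 1, p)
    assert r == 0  # Wilson's theorem: (p-1)! = -1 (mod p)
    return q

def harmonic_mod(r, p):
    """H_r = sum_{1 <= n <= r} 1/n, reduced modulo the prime p."""
    return sum(pow(n, -1, p) for n in range(1, r + 1)) % p
```

For instance `wilson_quotient(p) % p` gives $\gamma_p\bmod p$, and `harmonic_mod(p - 1, p) == 0` illustrates the easy part of Wolstenholme's congruence $H_{p-1}\equiv0\pmod p$ used below.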
We will see below that, as a consequence of the Weil conjectures proved
by Deligne, it is sufficient to compute $S(p;z)$ modulo $p^2$. Thus, in the
following $p$-adic computation we only work modulo $p^2$.
The Gross--Koblitz formula tells us that for $0\le r<p-1$ we have
$${\mathfrak g}_q(r)=-\dfrac{\pi^r}{r!}(1-pr(H_r-W_p)+O(p^2))\;.$$
It follows that for $(p-1)\nmid 5r$ we have
$$J(-r,-r,-r,-r,-r)=\dfrac{{\mathfrak g}(\omega_{\mathfrak P}^{-r})^5}{{\mathfrak g}(\omega_{\mathfrak P}^{-5r})}=\dfrac{{\mathfrak g}_q(r)^5}{{\mathfrak g}_q(5r)}=\pi^{f(r)}(a+bp+O(p^2))\;,$$
where $a$ and $b$ will be computed below and
\begin{align*}f(r)&=5r-(5r\bmod{(p-1)})=5r-(5r-(p-1)\lfloor5r/(p-1)\rfloor)\\
&=(p-1)\lfloor 5r/(p-1)\rfloor\;,\end{align*}
so that $\pi^{f(r)}=(-p)^{\lfloor 5r/(p-1)\rfloor}$ since $\pi^{p-1}=-p$.
Since we want the result modulo $p^2$, we consider three intervals together
with special cases:
\begin{enumerate}\item If $r>2(p-1)/5$ but $(p-1)\nmid 5r$, we have
$$J(-r,-r,-r,-r,-r)\equiv0\pmod{p^2}\;.$$
\item If $(p-1)/5<r<2(p-1)/5$ we have
$$J(-r,-r,-r,-r,-r)\equiv(-p)\dfrac{(5r-(p-1))!}{r!^5}\pmod{p^2}\;.$$
\item If $0<r<(p-1)/5$ we have $f(r)=0$ and $0\le 5r<(p-1)$, hence
\begin{align*}J(-r,-r,-r,-r,-r)&=\dfrac{(5r)!}{r!^5}(1-5pr(H_r-W_p)+O(p^2))\cdot\\
&\phantom{=}\cdot(1+5pr(H_{5r}-W_p)+O(p^2))\\
&\equiv\dfrac{(5r)!}{r!^5}(1+5pr(H_{5r}-H_r))\pmod{p^2}\;.\end{align*}
\item Finally, if $r=j(p-1)/5$ we have $J(-r,-r,-r,-r,-r)=p^4\equiv0\pmod{p^2}$
if $j=0$, and otherwise $J(-r,-r,-r,-r,-r)=-{\mathfrak g}_q(r)^5/p$, and since the
$p$-adic valuation of ${\mathfrak g}_q(r)$ is equal to $r/(p-1)=j/5$, that of
$J(-r,-r,-r,-r,-r)$ is equal to $j-1$, which is greater than or equal to $2$
as soon as $j\ge3$. For $j=2$, i.e., $r=2(p-1)/5$, we thus have
$$J(-r,-r,-r,-r,-r)\equiv p\dfrac{1}{r!^5}\equiv(-p)\dfrac{(5r-(p-1))!}{r!^5}\pmod{p^2}\;,$$
which is the same formula as for $(p-1)/5<r\le 2(p-1)/5$.
For $j=1$, i.e., $r=(p-1)/5$, we thus have
$$J(-r,-r,-r,-r,-r)\equiv-\dfrac{1}{r!^5}(1-5pr(H_r-W_p))\pmod{p^2}\;,$$
while on the other hand
$$(5r)!=(p-1)!=-1+pW_p\equiv-1-p(p-1)W_p\equiv-1-5prW_p\;,$$ and
$H_{5r}=H_{p-1}\equiv0\pmod{p}$ (Wolstenholme's congruence, easy), so
\begin{align*}\dfrac{(5r)!}{r!^5}(1+5pr(H_{5r}-H_r))&\equiv-\dfrac{1}{r!^5}(1-5prH_r)(1+5prW_p)\\
&\equiv-\dfrac{1}{r!^5}(1-5pr(H_r-W_p))\pmod{p^2}\;,\end{align*}
which is the same formula as for $0<r<(p-1)/5$.
\end{enumerate}
An important point to note is that we are working $p$-adically, but since the
final result $S(p;z)$ is an integer, this does not matter at the end.
There is one small additional detail to take care of: we have
\begin{align*}S(p;z)&=\sum_{0\le r\le p-2}\omega^{-r}(z)J(r,r,r,r,r)\\
&=\sum_{0\le r\le p-2}\omega^r(z)J(-r,-r,-r,-r,-r)\;,\end{align*}
so we must express $\omega^r(z)$ in the $p$-adic setting. Since
$\omega=\omega_{\mathfrak P}$ is the \emph{Teichm\"uller character}, in the $p$-adic
setting it is easy to show that $\omega(z)$ is the $p$-adic limit of
$z^{p^k}$ as $k\to\infty$. In particular $\omega(z)\equiv z\pmod{p}$, but more
precisely $\omega(z)\equiv z^p\pmod{p^2}$.
\begin{exercise} Let $p\ge3$. Assume that $z\in{\mathbb Z}_p\setminus p{\mathbb Z}_p$ (for
instance that $z\in{\mathbb Z}\setminus p{\mathbb Z}$). Prove that $z^{p^k}$ has a $p$-adic
limit $\omega(z)$ when $k\to\infty$, that $\omega^{p-1}(z)=1$, that
$\omega(z)\equiv z\pmod{p}$, and $\omega(z)\equiv z^p\pmod{p^2}$.
\end{exercise}
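The limit in the exercise is also a practical algorithm: modulo $p^{k}$ the sequence $z^{p^j}$ is stationary after at most $k$ steps, so $\omega(z)$ can be computed by repeated $p$th powers. A sketch (Python; the function name is mine):

```python
def teichmuller(z, p, prec):
    """omega(z) mod p**prec: the p-adic limit of z**(p**k) as k -> infinity."""
    mod = p ** prec
    x = z % mod
    for _ in range(prec):
        x = pow(x, p, mod)  # z^(p^(k+1)) = z^(p^k) (mod p^(k+1)), so this stabilizes
    return x
```

One checks that the result is a $(p-1)$st root of unity modulo $p^{\text{prec}}$, congruent to $z$ modulo $p$ and to $z^p$ modulo $p^2$, as the exercise asserts.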
We have thus proved the following:
\begin{proposition} We have
\begin{align*}S(p;z)&\equiv\sum_{0<r\le(p-1)/5}\dfrac{(5r)!}{r!^5}(1+5pr(H_{5r}-H_r))z^{pr}\\
&\phantom{=}-p\sum_{(p-1)/5<r\le2(p-1)/5}\dfrac{(5r-(p-1))!}{r!^5}z^r\pmod{p^2}\;.\end{align*}
In particular
$$S(p;z)\equiv\sum_{0<r\le(p-1)/5}\dfrac{(5r)!}{r!^5}z^r\pmod{p}\;.$$
\end{proposition}
\begin{remarks}{\rm \begin{enumerate}
\item Note that, as must be the case, all mention of $p$-adic numbers has
disappeared from this formula. We used the $p$-adic setting only in the proof.
The proposition can be proved ``directly'', but with some difficulty.
\item We used the Taylor expansion only to order $2$. It is of course possible
to use it to any order, thus giving a generalization of the above proposition
to any power of $p$.\end{enumerate}}
\end{remarks}
The point of giving all these details is as follows: it is easy to show that
$(p-1)\mid S(p;z)$ (in fact we have seen this in the elementary method above).
We can thus easily compute $S(p;z)$ modulo $p^2(p-1)$. On the other hand,
it is possible to prove (though this is not easy: it is part of the Weil
conjectures proved by Deligne) that $|S(p;z)-p^4|<4p^{5/2}$. It follows that as soon
as $8p^{5/2}<p^2(p-1)$, in other words $p\ge67$, the computation that we
perform modulo $p^2$ is sufficient to determine $S(p;z)$ exactly. It is
clear that the time to perform this computation is $\Os(p)$, and in fact
it is much faster than any method that we have seen.
In fact, implementing in a reasonable way the algorithm
given by the above proposition gives timings for $p\approx 10^k$ for
$k=2$, $3$, $4$, $5$, $6$, $7$, and $8$ of $0$, $0.01$, $0.03$, $0.21$, $2.13$,
$21.92$, and $229.6$ seconds respectively, of course much faster and
compatible with $\Os(p)$ time. The great additional advantage is that we
use very little memory. This is therefore the best known method.
{\bf Numerical example:} Choose $p=10^6+3$ and $z=2$. In $2.13$ seconds we find
that $S(p;z)\equiv a\pmod{p^2}$ with $a=356022712041$. Using the Chinese
remainder formula
$$S(p;z)=p^4+((a-(1+a)p^2)\bmod{((p-1)p^2)})\;,$$
we immediately deduce that
$$S(p;z)=1000012000056356142712140\;.$$
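As a concrete check, the proposition together with this lifting step fits in a few lines; the following Python sketch (the function names are mine) reproduces the numerical example above:

```python
def S_mod_p2(p, z):
    """S(p; z) modulo p^2, via the proposition obtained from Gross-Koblitz."""
    p2 = p * p
    r1, r2 = (p - 1) // 5, (2 * (p - 1)) // 5
    nmax = max(5 * r1, r2, 5 * r2 - (p - 1))
    fact = [1] * (nmax + 1)                 # factorials modulo p^2
    for n in range(1, nmax + 1):
        fact[n] = fact[n - 1] * n % p2
    inv = [0, 1] + [0] * (5 * r1 - 1)       # inverses modulo p
    for n in range(2, 5 * r1 + 1):
        inv[n] = -(p // n) * inv[p % n] % p
    H = [0] * (5 * r1 + 1)                  # harmonic sums H_n modulo p
    for n in range(1, 5 * r1 + 1):
        H[n] = (H[n - 1] + inv[n]) % p
    zp = pow(z, p, p2)                      # omega(z) = z^p (mod p^2)
    S, zpr, zr = 0, 1, pow(z, r1, p2)
    for r in range(1, r1 + 1):              # first sum: 0 < r <= (p-1)/5
        zpr = zpr * zp % p2
        t = fact[5 * r] * pow(fact[r], -5, p2) % p2
        t = t * (1 + 5 * p * r * ((H[5 * r] - H[r]) % p)) % p2
        S = (S + t * zpr) % p2
    for r in range(r1 + 1, r2 + 1):         # second sum, multiplied by -p
        zr = zr * z % p2
        t = fact[5 * r - (p - 1)] * pow(fact[r], -5, p2) % p2
        S = (S - p * t * zr) % p2
    return S

def S_exact(p, z):
    """Lift S(p; z) mod p^2 to the exact value using (p-1) | S and Weil's bound."""
    p2 = p * p
    a = S_mod_p2(p, z)
    return p**4 + ((a - (1 + a) * p2) % ((p - 1) * p2))
```

With `p = 10**6 + 3` and `z = 2` this recovers the value $S(p;z)$ displayed above in a few seconds.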
Here is a summary of the timings (in seconds) that we have mentioned:
\centerline{
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
$k$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ \\
\hline\hline
Na\"\i ve & $0.03$ & $1.56$ & $149$ & $*$ & $*$ & $*$ & $*$\\
\hline
Theta & $0.02$ & $0.40$ & $16.2$ & $663$ & $*$ & $*$ & $*$\\
\hline
Mod $X^{q-1}-1$ & $0$ & $0.02$ & $0.08$ & $0.85$ & $9.90$ & $123$ & $*$\\
\hline
Gross--Koblitz & $0$ & $0.01$ & $0.03$ & $0.21$ & $2.13$ & $21.92$ & $229.6$\\
\hline
\end{tabular}}
\centerline{Time for computing $S(p;z)$ for $p\approx10^k$}
\section{Gauss and Jacobi Sums over ${\mathbb Z}/N{\mathbb Z}$}
Another context in which one encounters Gauss sums is over finite rings
such as ${\mathbb Z}/N{\mathbb Z}$. The theory coincides with that over ${\mathbb F}_q$ when
$q=p=N$ is prime, but is rather different otherwise. These other Gauss sums
enter in the important theory of \mathbf emph{Dirichlet characters}.
\subsection{Definitions}
We recall the following definition:
\begin{definition} Let $\chi$ be a (multiplicative) character from the
multiplicative group $({\mathbb Z}/N{\mathbb Z})^*$ of invertible elements of ${\mathbb Z}/N{\mathbb Z}$ to
the complex numbers ${\mathbb C}$.
We denote by abuse of notation again by $\chi$ the map from ${\mathbb Z}$ to ${\mathbb C}$
defined by $\chi(x)=\chi(x\bmod N)$ when $x$ is coprime to $N$, and
$\chi(x)=0$ if $x$ is not coprime to $N$, and call it the Dirichlet character
modulo $N$ associated to $\chi$.\end{definition}
It is clear that a Dirichlet character satisfies $\chi(xy)=\chi(x)\chi(y)$
for all $x$ and $y$, that $\chi(x+N)=\chi(x)$, and that $\chi(x)=0$
if and only if $x$ is not coprime to $N$. Conversely, it is immediate that
these properties characterize Dirichlet characters.
A crucial notion (which has no equivalent in the context of characters of
${\mathbb F}_q^*$) is that of \emph{primitivity}:
Assume that $M\mid N$. If $\chi$ is a Dirichlet character modulo $M$, we can
transform it into a character $\chi_N$ modulo $N$ by setting
$\chi_N(x)=\chi(x)$ if $x$ is coprime to $N$, and $\chi_N(x)=0$ otherwise.
We say that the characters $\chi$ and $\chi_N$ are \emph{equivalent}.
Conversely, if $\psi$ is a character modulo $N$, it is not always true that
one can find $\chi$ modulo $M$ such that $\psi=\chi_N$. If it is possible,
we say that $\psi$ \emph{can be defined modulo $M$}.
\begin{definition} Let $\chi$ be a character modulo $N$. We say that
$\chi$ is a \emph{primitive character} if $\chi$ cannot be defined modulo
$M$ for any proper divisor $M$ of $N$, i.e., for any $M\mid N$ such that
$M\ne N$.\end{definition}
\begin{exercise} Assume that $N\equiv2\pmod4$. Show that there do not exist
any primitive characters modulo $N$.
\end{exercise}
\begin{exercise} Assume that $p^a\mid N$ with $p$ prime. Show that if $\chi$
is a primitive character modulo $N$, the \emph{order} of $\chi$ (the smallest
$k$ such that $\chi^k$ is a trivial character) is \emph{divisible}
by $p^{a-1}$.
\end{exercise}
As we will see, questions about general Dirichlet characters can always be
reduced to questions about primitive characters, and the latter have much
nicer properties.
\begin{proposition} Let $\chi$ be a character modulo $N$. There exists
a divisor $f$ of $N$ called the \emph{conductor} of $\chi$ (this $f$ has
nothing to do with the $f$ used above such that $q=p^f$), having the following
properties:
\begin{enumerate}\item The character $\chi$ can be defined modulo $f$,
in other words there exists a character $\psi$ modulo $f$ such that
$\chi=\psi_N$ using the notation above.
\item $f$ is the smallest divisor of $N$ having this property.
\item The character $\psi$ is a primitive character modulo $f$.
\end{enumerate}\end{proposition}
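These notions are easy to make algorithmic. The sketch below (Python; the function names are mine) takes a character given as a table of its values on $0,\dots,N-1$, decides whether it can be defined modulo a divisor $M$ (i.e., whether $\chi(x)=1$ for all $x\equiv1\pmod M$ coprime to $N$), and finds the conductor by trying all divisors:

```python
from math import gcd

def can_define_mod(chi, N, M):
    """True if chi (a dict x -> chi(x) on 0..N-1, modulus N) can be defined
    modulo M, i.e. chi(x) = 1 whenever x = 1 (mod M) and gcd(x, N) = 1."""
    return all(chi[x] == 1 for x in range(1, N, M) if gcd(x, N) == 1)

def conductor(chi, N):
    """Smallest divisor f of N such that chi can be defined modulo f."""
    return min(M for M in range(1, N + 1)
               if N % M == 0 and can_define_mod(chi, N, M))
```

For the character modulo $12$ induced by the nontrivial character modulo $4$ (so $\chi(1)=\chi(5)=1$, $\chi(7)=\chi(11)=-1$, and $\chi=0$ elsewhere), this returns conductor $4$; the trivial character modulo $12$ has conductor $1$.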
There is also the notion of \emph{trivial character modulo $N$}: however
we must be careful here, and we set the following:
\begin{definition} The trivial character modulo $N$ is the Dirichlet
character associated with the trivial character of $({\mathbb Z}/N{\mathbb Z})^*$. It is
usually denoted by $\chi_0$ (but be careful, the index $N$ is implicit, so
$\chi_0$ may represent different characters), and its values are as follows:
$\chi_0(x)=1$ if $x$ is coprime to $N$, and $\chi_0(x)=0$ if $x$ is not
coprime to $N$.\end{definition}
In particular, $\chi_0(0)=0$ if $N\ne1$. The character $\chi_0$ can also be
characterized as the only character modulo $N$ of conductor $1$.
\begin{definition} Let $\chi$ be a character modulo $N$. The \emph{Gauss sum}
associated to $\chi$ and $a\in{\mathbb Z}$ is
$${\mathfrak g}(\chi,a)=\sum_{x\bmod N}\chi(x)\zeta_N^{ax}\;,$$
and we write simply ${\mathfrak g}(\chi)$ instead of ${\mathfrak g}(\chi,1)$.
\end{definition}
The most important result concerning these Gauss sums is the following:
\begin{proposition} Let $\chi$ be a character modulo $N$.\begin{enumerate}
\item If $a$ is coprime to $N$ we have
$${\mathfrak g}(\chi,a)=\chi^{-1}(a){\mathfrak g}(\chi)=\ov{\chi(a)}{\mathfrak g}(\chi)\;,$$
and more generally
${\mathfrak g}(\chi,ab)=\chi^{-1}(a){\mathfrak g}(\chi,b)=\ov{\chi(a)}{\mathfrak g}(\chi,b)$.
\item If $\chi$ is a \emph{primitive} character, we have
$${\mathfrak g}(\chi,a)=\ov{\chi(a)}{\mathfrak g}(\chi)$$
for \emph{all} $a$, in other words, in addition to (1), we have
${\mathfrak g}(\chi,a)=0$ if $a$ is not coprime to $N$.
\item If $\chi$ is a \emph{primitive} character, we have
$|{\mathfrak g}(\chi)|^2=N$.\end{enumerate}\end{proposition}
Note that (1) is trivial, and that since $\chi(a)$ has modulus $1$ when
$a$ is coprime to $N$, we can write indifferently $\chi^{-1}(a)$ or
$\ov{\chi(a)}$. On the other hand, (2) is not completely trivial.
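These identities are pleasant to verify numerically. A sketch (Python; the names are mine) for a character of order $6$ modulo $7$, built from the generator $3$ of $({\mathbb Z}/7{\mathbb Z})^*$ via a discrete-logarithm table:

```python
import cmath

def gauss_sum(N, chi, a=1):
    """g(chi, a) = sum over x mod N of chi(x) * zeta_N^(a*x)."""
    return sum(chi(x) * cmath.exp(2j * cmath.pi * a * x / N) for x in range(N))

# A character of order 6 modulo 7: chi(3^k) = exp(2*pi*i*k/6), chi(0) = 0.
dlog = {pow(3, k, 7): k for k in range(6)}
chi = lambda x: cmath.exp(2j * cmath.pi * dlog[x % 7] / 6) if x % 7 else 0
```

Since $7$ is prime and $\chi$ is nontrivial, $\chi$ is primitive, and one finds $|{\mathfrak g}(\chi)|^2=7$ to machine precision, together with ${\mathfrak g}(\chi,a)=\ov{\chi(a)}\,{\mathfrak g}(\chi)$ for all $a$.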
We leave to the reader the easy task of defining Jacobi sums and of proving
the easy relations between Gauss and Jacobi sums.
\subsection{Reduction to Prime Gauss Sums}
A fundamental and little-known fact is that in the context of Gauss
sums over ${\mathbb Z}/N{\mathbb Z}$ (as opposed to ${\mathbb F}_q$), one can in fact always reduce
to prime $N$. First note (with proof) the following easy result:
\begin{proposition} Let $N=N_1N_2$ with $N_1$ and $N_2$ coprime, and
let $\chi$ be a character modulo $N$.\begin{enumerate}
\item There exist unique characters $\chi_i$ modulo $N_i$ such that
$\chi=\chi_1\chi_2$ in an evident sense, and if $\chi$ is primitive,
the $\chi_i$ will also be primitive.
\item We have the identity (valid even if $\chi$ is not primitive):
$${\mathfrak g}(\chi)=\chi_1(N_2)\chi_2(N_1){\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)\;.$$
\end{enumerate}\end{proposition}
\begin{proof} (1). Since $N_1$ and $N_2$ are coprime there exist $u_1$ and $u_2$
such that $u_1N_1+u_2N_2=1$. We define $\chi_1(x)=\chi(xu_2N_2+u_1N_1)$ and
$\chi_2(x)=\chi(xu_1N_1+u_2N_2)$. We leave to the reader to check (1)
using these definitions.

(2). When $x_i$ ranges modulo $N_i$, $x=x_1u_2N_2+x_2u_1N_1$ ranges
modulo $N$ (check it, in particular that the values are distinct!),
and $\chi(x)=\chi_1(x)\chi_2(x)=\chi_1(x_1)\chi_2(x_2)$. Furthermore,
$$\zeta_N=\exp(2\pi i/N)=\exp(2\pi i(u_1/N_2+u_2/N_1))=\zeta_{N_1}^{u_2}\zeta_{N_2}^{u_1}\;,$$
hence
\begin{align*}{\mathfrak g}(\chi)&=\sum_{x\bmod N}\chi(x)\zeta_N^x\\
&=\sum_{x_1\bmod N_1,\ x_2\bmod N_2}\chi_1(x_1)\chi_2(x_2)\zeta_{N_1}^{u_2x_1}\zeta_{N_2}^{u_1x_2}\\
&={\mathfrak g}(\chi_1,u_2){\mathfrak g}(\chi_2,u_1)=\chi_1^{-1}(u_2)\chi_2^{-1}(u_1){\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)\;,\end{align*}
so the result follows since $N_2u_2\equiv1\pmod{N_1}$ and
$N_1u_1\equiv1\pmod{N_2}$.\qed\end{proof}
Thanks to the above result, the computation of Gauss sums modulo $N$ can be
reduced to the computation of Gauss sums modulo prime powers.
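The factorization in the proposition is also easy to test numerically; a sketch (Python; the names are mine) with $N=15$, the quadratic character modulo $3$, and a quartic character modulo $5$:

```python
import cmath
from math import gcd

def gauss_sum(N, chi):
    """g(chi) = sum over x mod N of chi(x) * zeta_N^x."""
    return sum(chi(x) * cmath.exp(2j * cmath.pi * x / N) for x in range(N))

chi1 = lambda x: [0, 1, -1][x % 3]                   # quadratic character mod 3
dlog5 = {pow(2, k, 5): k for k in range(4)}          # 2 generates (Z/5Z)*
chi2 = lambda x: 1j ** dlog5[x % 5] if x % 5 else 0  # quartic character mod 5
chi = lambda x: chi1(x) * chi2(x)                    # product character mod 15

lhs = gauss_sum(15, chi)
rhs = chi1(5) * chi2(3) * gauss_sum(3, chi1) * gauss_sum(5, chi2)
```

One finds `lhs == rhs` to machine precision, and since both factors are primitive, $\chi$ is primitive and $|{\mathfrak g}(\chi)|^2=15$ as well.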
Here a remarkable simplification occurs, due to Odoni: Gauss sums modulo
$p^a$ for $a\ge2$ can be ``explicitly computed'', in the sense that there
is a direct formula not involving a sum over $p^a$ terms for computing
them. Although the proof is not difficult, we do not give it, and refer
instead to \cite{Coh5}, which can be obtained from the author. We use the
classical notation $\mathbf{e}(x)$ to mean $e^{2\pi i x}$. Furthermore, we use
the $p$-adic logarithm $\log_p(m)$, but in a totally elementary manner,
since we will always have $m\equiv1\pmod p$ and can use the standard expansion
$-\log_p(1-x)=\sum_{k\ge1}x^k/k$, which we truncate as soon as all remaining terms
are divisible by $p^n$:
\begin{theorem}[Odoni et al.]\label{thmodoni} Let $\chi$ be a \emph{primitive}
character modulo $p^n$.
\begin{enumerate}\item Assume that $p\ge3$ is prime and $n\ge2$. Write
$\chi(1+p)=\mathbf{e}(-b/p^{n-1})$ with $p\nmid b$. Define
$$A(p)=\dfrac{p}{\log_p(1+p)}\text{\quad and\quad}B(p)=A(p)(1-\log_p(A(p)))\;,$$
except when $p^n=3^3$, in which case we define $B(p)=10$. Then
$${\mathfrak g}(\chi)=p^{n/2}\mathbf{e}\left(\dfrac{bB(p)}{p^n}\right)\chi(b)\cdot\begin{cases}
1&\text{\quad if $n\ge2$ is even,}\\
\leg{b}{p}i^{p(p-1)/2}&\text{\quad if $n\ge3$ is odd.}
\end{cases}$$
\item Let $p=2$ and assume that $n\ge4$. Write
$\chi(1+p^2)=\mathbf{e}(b/p^{n-2})$ with $p\nmid b$. Define
$$A(p)=-\dfrac{p^2}{\log_p(1+p^2)}\text{\quad and\quad}B(p)=A(p)(1-\log_p(A(p)))\;,$$
except when $p^n=2^4$, in which case we define $B(p)=13$. Then
$${\mathfrak g}(\chi)=p^{n/2}\mathbf{e}\left(\dfrac{bB(p)}{p^n}\right)\chi(b)\cdot\begin{cases}
\mathbf{e}\left(\dfrac{b}{8}\right)&\text{\quad if $n\ge4$ is even,}\\
\mathbf{e}\left(\dfrac{(b^2-1)/2+b}{8}\right)&\text{\quad if $n\ge5$ is odd.}
\end{cases}$$
\item If $p^n=2^2$, or $p^n=2^3$ and $\chi(-1)=1$, we have ${\mathfrak g}(\chi)=p^{n/2}$,
and if $p^n=2^3$ and $\chi(-1)=-1$ we have ${\mathfrak g}(\chi)=p^{n/2}i$.
\end{enumerate}\end{theorem}
Thanks to this theorem, we see that the computation of Gauss sums in the
context of Dirichlet characters can be reduced to the computation of Gauss
sums modulo $p$ for prime $p$. This is of course the same as the
computation of a Gauss sum for a character of ${\mathbb F}_p^*$.
We recall the available methods for computing a single Gauss sum of this
type:
\begin{enumerate}\item The na\"\i ve method, time $\Os(p)$ (applicable in
general, time $\Os(N)$).
\item Using the Gross--Koblitz formula, also time $\Os(p)$, but the implicit
constant is much smaller, and also computations can be done modulo $p$ or
$p^2$ for instance, if desired (applicable only to $N=p$, or in the
context of finite fields).
\item Using theta functions, time $\Os(p^{1/2})$ (applicable in general,
time $\Os(N^{1/2})$).\end{enumerate}
\subsection{General Complete Exponential Sums over ${\mathbb Z}/N{\mathbb Z}$}
We have just seen the (perhaps surprising) fact that Gauss sums modulo
$p^a$ for $a\ge2$ can be ``explicitly computed''. This is in fact
a completely general fact. Let $\chi$ be a Dirichlet character modulo $N$,
and let $F\in {\mathbb Q}[X]$ be integer-valued. Consider the following
\emph{complete exponential sum}:
$$S(F,N)=\sum_{x\bmod N}\chi(x)e^{2\pi i F(x)/N}\;.$$
For this to make sense we must of course assume that $x\equiv y\pmod N$
implies $F(x)\equiv F(y)\pmod{N}$, which is for instance the case if
$F\in{\mathbb Z}[X]$. As we did for Gauss sums, using Chinese remaindering we can
reduce the computation to the case where $N=p^a$ is a prime power. But
the essential point is that if $a\ge2$, $S(F,p^a)$ can be ``explicitly
computed'', see \cite{Coh5} for the detailed statement and proof, so
we are again reduced to the computation of $S(F,p)$.
A simplified and incomplete version of the result when $\chi$ is the
trivial character is as follows:
\begin{theorem} Let $S=\sum_{x\bmod{p^a}}e^{2\pi iF(x)/p^a}$, and
assume that $a\ge2$ and $p>2$. Then under suitable assumptions on $F$ we
have the following:
\begin{enumerate}
\item If there does not exist $y$ such that $F'(y)\equiv0\pmod p$ then $S=0$.
\item Otherwise, there exists $u\in{\mathbb Z}_p$ such that
$F'(u)=0$ and $v_p(F''(u))=0$, $u$ is unique, and we have
$$S=p^{a/2}e^{2\pi iF(u)/p^a}g(u,p,a)\;,$$
where $g(u,p,a)=1$ if $a$ is even and otherwise
$$g(u,p,a)=\leg{F''(u)}{p}i^{p(p-1)/2}\;.$$
\end{enumerate}
\end{theorem}
\begin{exercise} Let $F(x)=cx^3+dx$ with $c$ and $d$ integers, and let $p$
be a prime number such that $p\nmid 6cd$. The assumptions of the theorem
will then be satisfied. Compute explicitly
$\sum_{x\bmod{p^a}}e^{2\pi iF(x)/p^a}$ for $a\ge2$. You will need to
introduce a square root of $-3cd$ modulo $p^a$.\end{exercise}
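This stationary-phase evaluation is easy to confirm numerically. In the sketch below (Python; the names are mine) we take $F(x)=x^3+x$. Modulo $5$ the derivative $F'(x)=3x^2+1$ has no root, so the sum over $x\bmod 25$ vanishes, while modulo $7$ it has the two roots $x_0\equiv3,4$ (coming from the two square roots alluded to in the exercise; the simplified theorem above assumes a single critical point), each contributing $p^{a/2}e^{2\pi iF(x_0)/p^a}$ for even $a$:

```python
import cmath

def complete_sum(F, mod):
    """sum over x mod `mod` of exp(2*pi*i*F(x)/mod)."""
    return sum(cmath.exp(2j * cmath.pi * (F(x) % mod) / mod) for x in range(mod))

F = lambda x: x**3 + x

# p = 5: F'(x) = 3x^2 + 1 has no root mod 5, so the sum over x mod 25 vanishes
zero = complete_sum(F, 25)

# p = 7, a = 2: critical points x0 = 3, 4 mod 7; each contributes 7*e(F(x0)/49)
pred = 7 * sum(cmath.exp(2j * cmath.pi * (F(x0) % 49) / 49) for x0 in (3, 4))
```

One checks that `zero` is numerically $0$ and that `complete_sum(F, 49)` agrees with `pred` to machine precision.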
For instance, using a variant of the above theorem, it is immediate to prove
the following result due to Sali\'e:
\begin{proposition} The \emph{Kloosterman sum} $K(m,n,N)$ is defined by
$$K(m,n,N)=\sum_{x\in({\mathbb Z}/N{\mathbb Z})^*}e^{2\pi i(mx+nx^{-1})/N}\;,$$
where $x$ runs over the invertible elements of ${\mathbb Z}/N{\mathbb Z}$. If $p>2$
is a prime such that $p\nmid n$ and $a\ge2$ we have
$$K(n,n,p^a)=\begin{cases}
2p^{a/2}\cos(4\pi n/p^a)&\text{ if $2\mid a$,}\\
2p^{a/2}\leg{n}{p}\cos(4\pi n/p^a)&\text{ if $2\nmid a$ and $p\equiv1\pmod4$,}\\
-2p^{a/2}\leg{n}{p}\sin(4\pi n/p^a)&\text{ if $2\nmid a$ and $p\equiv3\pmod4$.}\end{cases}$$
\end{proposition}
Note that it is immediate to reduce general $K(m,n,N)$ to the case $m=n$
and $N=p^a$, and to give formulas also for the case $p=2$. As usual the
case $N=p$ is \emph{not} explicit, and, contrary to the case of Gauss sums,
where it is easy to show that $|{\mathfrak g}(\chi)|=\sqrt{p}$ for a primitive character
$\chi$, the bound $|K(m,n,p)|\le 2\sqrt{p}$ for $p\nmid mn$ due to Weil is
much more difficult to prove, and in fact follows from his proof of the
Riemann hypothesis for curves.
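Sali\'e's formulas are also easy to check against the defining sum. A sketch (Python; the function name is mine) with $p=7$ (so $p\equiv3\pmod4$), $n=1$ (so $\leg{n}{p}=1$), and $a=2,3$:

```python
import cmath
from math import gcd

def kloosterman(m, n, N):
    """K(m, n, N) = sum over invertible x mod N of e((m*x + n*x^(-1))/N)."""
    return sum(cmath.exp(2j * cmath.pi * ((m * x + n * pow(x, -1, N)) % N) / N)
               for x in range(1, N) if gcd(x, N) == 1)

p, n = 7, 1
even = 2 * p * cmath.cos(4 * cmath.pi * n / p**2)        # case a = 2 (a even)
odd = -2 * p**1.5 * cmath.sin(4 * cmath.pi * n / p**3)   # case a = 3, p = 3 mod 4
```

The direct sums `kloosterman(1, 1, 49)` and `kloosterman(1, 1, 343)` agree with `even` and `odd` to floating-point accuracy.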
\section{Numerical Computation of $L$-Functions}
\subsection{Computational Issues}
Let $L(s)$ be a general $L$-function as defined in Section \ref{sec:one},
and let $N$ be its conductor. There are several computational problems that we
want to solve. The first, but not necessarily the most important, is the
numerical computation of $L(s)$ for given complex values of $s$. This problem
is of widely varying difficulty depending on the size of $N$ and of the
imaginary part of $s$ (note that if the \emph{real part} of $s$ is quite
large, the defining series for $L(s)$ converges quite well, if not
exponentially fast, so there is no problem in that range, and by the
functional equation the same is true if the real part of $1-s$ is quite large).
The problems for $\Im(s)$ large are quite specific, and are already crucial
in the case of the Riemann zeta function $\zeta(s)$. It is by an efficient
management of this problem (for instance by using the so-called
\mathbf emph{Riemann--Siegel formula}) that one is able to compute billions of
nontrivial zeros of $\zeta(s)$. We will not consider these problems here, but
concentrate on reasonable ranges of $s$.
The second problem is specific to general $L$-functions as opposed to
$L$-functions attached to Dirichlet characters for instance: in the general
situation, we are given an $L$-function by an Euler product known outside of
a finite and small number of ``bad primes''. Using recipes dating to the
late 1960s and well explained in a beautiful paper of Serre \cite{Ser}, one
can give the ``gamma factor'' $\gamma(s)$ and some (but not all) of the information
about the ``conductor'', which is the exponential factor, at least in the
case of $L$-functions of varieties, or more generally of motives.
We will ignore these problems and assume that we know all the bad primes,
gamma factor, conductor, and root number. Note that if we know the gamma
factor and the bad primes, using the formulas that we will give below for
different values of the argument it is easy to recover the conductor and the
root number. What is most difficult to obtain are the Euler factors at the
bad primes, and this is the object of current work.
\subsection{Dirichlet $L$-Functions}
Let $\chi$ be a Dirichlet character modulo $N$. We define the $L$-function
attached to $\chi$ as the complex function
$$L(\chi,s)=\sum_{n\ge1}\dfrac{\chi(n)}{n^s}\;.$$
Since $|\chi(n)|\le1$, it is clear that $L(\chi,s)$ converges absolutely
for $\Re(s)>1$. Furthermore, since $\chi$ is multiplicative, as for the
Riemann zeta function we have an \emph{Euler product}
$$L(\chi,s)=\prod_p\dfrac{1}{1-\chi(p)/p^s}\;.$$
The denominator of this product being generically of degree $1$, this is
also called an $L$-function of degree $1$, and conversely, with a suitable
definition of the notion of $L$-function, one can show that these are the
only $L$-functions of degree $1$.
If $f$ is the conductor of $\chi$ and $\chi_f$ is the character modulo $f$
equivalent to $\chi$, it is clear that
$$L(\chi,s)=\prod_{p\mid N,\ p\nmid f}(1-\chi_f(p)p^{-s})L(\chi_f,s)\;,$$
so if desired we can always reduce to primitive characters, and this is
what we will do from now on.
Dirichlet $L$-series have important analytic and arithmetic properties, some
of them conjectural (such as the Riemann Hypothesis), which should (again
conjecturally) be shared by all global $L$-functions; see the discussion
in the introduction. We first give the following:
\begin{theorem} Let $\chi$ be a \emph{primitive} character modulo $N$, and
let $e=0$ or $1$ be such that $\chi(-1)=(-1)^e$.
\begin{enumerate}
\item (Analytic continuation.)
The function $L(\chi,s)$ can be analytically continued to the whole
complex plane into a meromorphic function, which is in fact holomorphic
except in the special case $N=1$, where $L(\chi,s)=\zeta(s)$ has a unique
pole, at $s=1$, which is simple with residue $1$.
\item (Functional equation.)
There exists a \emph{functional equation} of the following form:
letting $\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$, we set
$$\Lambda(\chi,s)=N^{(s+e)/2}\Gamma_{{\mathbb R}}(s+e)L(\chi,s)\;,$$
where $e$ is as above. Then
$$\Lambda(\chi,1-s)=\omega(\chi)\Lambda(\ov{\chi},s)\;,$$
where $\omega(\chi)$, the so-called \emph{root number}, is a complex
number of modulus $1$ given by the formula $\omega(\chi)=\tau(\chi)/(i^eN^{1/2})$,
where $\tau(\chi)=\sum_{a\bmod N}\chi(a)e^{2\pi ia/N}$ is the Gauss sum of $\chi$.
\item (Special values.)
For each integer $k\ge1$ we have the \emph{special values}
$$L(\chi,1-k)=-\dfrac{B_k(\chi)}{k}-\delta_{N,1}\delta_{k,1}\;,$$
where $\delta$ is the Kronecker symbol, and the \emph{generalized Bernoulli
numbers} $B_k(\chi)$ are easily computable algebraic numbers. In particular,
when $k\not\equiv e\pmod{2}$ we have $L(\chi,1-k)=0$ (except when $k=N=1$).
By the functional equation this is equivalent to the formula
for $k\equiv e\pmod{2}$, $k\ge1$:
$$L(\chi,k)=(-1)^{k-1+(k+e)/2}\omega(\chi)\dfrac{2^{k-1}\pi^k\ov{B_k(\chi)}}{N^{k-1/2}k!}\;.$$
\end{enumerate}\end{theorem}
To state the next theorem, which for the moment we state for Dirichlet
$L$-functions, we need still another important special function:
\begin{definition} For $x>0$ we define the \emph{incomplete gamma function}
$\Gamma(s,x)$ by
$$\Gamma(s,x)=\int_x^\infty t^se^{-t}\,\dfrac{dt}{t}\;.$$
\end{definition}
Note that this integral converges for \emph{all} $s\in{\mathbb C}$, and that it
tends to $0$ exponentially fast when $x\to\infty$; more precisely
$\Gamma(s,x)\sim x^{s-1}e^{-x}$. In addition (but this would carry us too far here)
there are many efficient methods to compute it; see however the section
on inverse Mellin transforms below.
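Ignoring efficiency entirely, $\Gamma(s,x)$ can already be approximated by naive quadrature. The following Python sketch (cutoff and step count are ad hoc choices) checks the closed forms $\Gamma(1,x)=e^{-x}$ and $\Gamma(3,x)=(x^2+2x+2)e^{-x}$, which follow by integration by parts:

```python
import math

def inc_gamma(s, x, cutoff=50.0, n=100000):
    # Composite Simpson's rule for Gamma(s, x) = \int_x^\infty t^{s-1} e^{-t} dt.
    # Truncating the integral at t = x + cutoff loses only O(e^{-x-cutoff}).
    h = cutoff / n
    f = lambda t: t ** (s - 1) * math.exp(-t)
    total = f(x) + f(x + cutoff)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(x + k * h)
    return total * h / 3
```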
\begin{theorem} Let $\chi$ be a \emph{primitive} character modulo $N$. For all
$A>0$ we have:
\begin{align*}\Gamma\left(\dfrac{s+e}{2}\right)L(\chi,s)&=\delta_{N,1}\pi^{s/2}\left(\dfrac{A^{(s-1)/2}}{s-1}-\dfrac{A^{s/2}}{s}\right)
+\sum_{n\ge 1}\dfrac{\chi(n)}{n^s}\Gamma\left(\dfrac{s+e}{2},\dfrac{\pi n^2 A}{N}\right)\\
&\phantom{=}+\omega(\chi)\left(\dfrac{\pi}{N}\right)^{s-1/2}\sum_{n\ge 1}\dfrac{\ov{\chi}(n)}{n^{1-s}}\Gamma\left(\dfrac{1-s+e}{2},\dfrac{\pi n^2}{AN}\right)\;.\end{align*}
\end{theorem}
\begin{remarks}{\rm \begin{enumerate}
\item Thanks to this theorem, we can compute numerical values of $L(\chi,s)$
(for $s$ in a reasonable range) in time $\Os(N^{1/2})$.
\item The optimal value of $A$ is $A=1$, but the theorem is stated in this
form for several reasons, one of them being that by varying $A$ (for instance
taking $A=1.1$ and $A=0.9$) one can check the correctness of the
implementation, or even compute the root number $\omega(\chi)$ if it is not known.
\item To compute values of $L(\chi,s)$ when $\Im(s)$ is large, one
does not use the theorem as stated, but variants; see \cite{Rub}.
\item The above theorem, called the \emph{approximate functional equation},
evidently implies the functional equation itself, so it seems to be more
precise; however this is an illusion, since one can show that under very mild
assumptions functional equations in a large class imply corresponding
approximate functional equations.
\end{enumerate}}
\end{remarks}
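To make the theorem concrete, here is a Python sketch (all numerical parameters are ad hoc) applying it to the odd primitive character modulo $4$, for which $N=4$, $e=1$, $\omega(\chi)=1$, $\ov{\chi}=\chi$, and the $\delta_{N,1}$ pole term vanishes. At $s=2$, dividing by $\Gamma((s+e)/2)$ recovers $L(\chi_{-4},2)$, which is Catalan's constant $0.9159655941\dots$, and varying $A$ as in remark (2) above serves as a consistency check:

```python
import math

def inc_gamma(s, x, cutoff=40.0, n=20000):
    # Simpson's rule for Gamma(s, x) = \int_x^\infty t^{s-1} e^{-t} dt, x > 0.
    h = cutoff / n
    f = lambda t: t ** (s - 1) * math.exp(-t)
    total = f(x) + f(x + cutoff)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(x + k * h)
    return total * h / 3

def chi4(n):
    # Real character, so its complex conjugate is itself.
    return (0, 1, 0, -1)[n % 4]

def L_approx_fe(s, A=1.0, nmax=8):
    # Right-hand side of the approximate functional equation for the odd
    # primitive character mod 4 (N = 4, e = 1, omega = 1; the delta_{N,1}
    # pole term vanishes), divided by Gamma((s+e)/2) to return L(chi, s).
    N, e, omega = 4, 1, 1.0
    total = 0.0
    for n in range(1, nmax + 1):
        if chi4(n) == 0:
            continue
        total += chi4(n) / n ** s * inc_gamma((s + e) / 2, math.pi * n * n * A / N)
        total += omega * (math.pi / N) ** (s - 0.5) * chi4(n) / n ** (1 - s) \
                 * inc_gamma((1 - s + e) / 2, math.pi * n * n / (A * N))
    return total / math.gamma((s + e) / 2)

CATALAN = 0.9159655941772190  # L(chi_{-4}, 2)
```

Note how few terms are needed: the incomplete gamma factors kill the series after a handful of $n$, in accordance with the $\Os(N^{1/2})$ remark above.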
\subsection{Approximate Functional Equations}
In fact, let us make this last statement completely precise. For the sake of
simplicity we will assume that the $L$-functions have no poles (this
corresponds for Dirichlet $L$-functions to the requirement that $\chi$ not be
the trivial character). We begin with the following (where we restrict to
certain kinds of gamma products, but it is easy to generalize; incidentally,
recall the \emph{duplication formula} for the gamma function
$\Gamma(s/2)\Gamma((s+1)/2)=2^{1-s}\pi^{1/2}\Gamma(s)$, which allows the reduction of
factors of the type $\Gamma(s+a)$ to several of the type $\Gamma(s/2+a')$ and
conversely).
\begin{definition} Recall that we have defined $\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$,
which is the gamma factor attached to $L$-functions of even characters, for
instance to $\zeta(s)$. A \emph{gamma product} is a function of the type
$$\gamma(s)=f^{s/2}\prod_{1\le j\le d}\Gamma_{{\mathbb R}}(s+b_j)\;,$$
where $f>0$ is a real number. The number $d$ of gamma factors is called the
\emph{degree} of $\gamma(s)$.\end{definition}
Note that the $b_j$ need not be real numbers, but in the case of $L$-functions
attached to motives they will always be, and in fact be integers.
\begin{proposition} Let $\gamma$ be a gamma product.\begin{enumerate}
\item There exists a function
$W(t)$, called the \emph{inverse Mellin transform} of $\gamma$, such that
$$\gamma(s)=\int_0^\infty t^sW(t)\,dt/t$$
for $\Re(s)$ sufficiently large (greater than the real part of the rightmost
pole of $\gamma(s)$ suffices).
\item $W(t)$ is given by the following \emph{Mellin inversion formula} for
$t>0$:
$$W(t)=\M^{-1}(\gamma)(t)=\dfrac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}t^{-s}\gamma(s)\,ds\;,$$
for any $\sigma$ larger than the real part of the poles of $\gamma(s)$.
\item $W(t)$ tends to $0$ exponentially fast when $t\to+\infty$. More
precisely, as $t\to\infty$ we have
$$W(t)\sim C\cdot(t/f^{1/2})^B\exp(-\pi d(t/f^{1/2})^{2/d})$$
with $B=(1-d+\sum_{1\le j\le d}b_j)/d$ and $C=2^{(d+1)/2}/d^{1/2}$.
\end{enumerate}\end{proposition}
\begin{definition} Let $\gamma(s)$ be a gamma product and $W(t)$ its inverse
Mellin transform. The \emph{incomplete gamma product} $\gamma(s,x)$ is defined
for $x>0$ by
$$\gamma(s,x)=\int_x^\infty t^sW(t)\,\dfrac{dt}{t}\;.$$
\end{definition}
Note that this integral always converges since $W(t)$ tends to $0$
exponentially fast when $t\to\infty$. In addition, thanks to the above
proposition it is immediate to show the following:
\begin{corollary}\label{asympunsmooth}\begin{enumerate}
\item For any $\sigma$ larger than the real part of the poles of $\gamma(s)$
we have
$$\gamma(s,x)=\dfrac{x^s}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\dfrac{x^{-z}\gamma(z)}{z-s}\,dz\;.$$
\item For $s$ fixed, as $x\to\infty$ we have, with the same
constants $B$ and $C$ as above,
$$\gamma(s,x)\sim \dfrac{C}{2\pi}x^s(x/f^{1/2})^{B-2/d}\exp(-\pi d(x/f^{1/2})^{2/d})\;,$$
so it has essentially the same exponential decay as $W(x)$.
\end{enumerate}
\end{corollary}
The first theorem, essentially due to Lavrik, is an exercise in complex
integration (recall that a function $f$ is of \emph{finite
order} $\alpha\ge0$ if for all $\varepsilon>0$ and sufficiently large $|z|$ we have
$|f(z)|\le \exp(|z|^{\alpha+\varepsilon})$):
\begin{theorem}\label{thmapprox} For $i=1$ and $i=2$, let
$L_i(s)=\sum_{n\ge 1}a_i(n)n^{-s}$ be
Dirichlet series converging in some right half-plane $\Re(s)\ge\sigma_0$.
For $i=1$ and $i=2$, let $\gamma_i(s)$ be gamma products having the same
degree $d$. Assume that the functions $\Lambda_i(s)=\gamma_i(s)L_i(s)$
extend analytically to ${\mathbb C}$ into holomorphic functions of \emph{finite order},
and that we have the functional equation
$$\Lambda_1(k-s)=w\cdot\Lambda_2(s)$$ for some constant
$w\in{\mathbb C}^*$ and some real number $k$.
Then for all $A>0$, we have
$$\Lambda_1(s)=\sum_{n\ge1}\dfrac{a_1(n)}{n^s}\gamma_1(s,nA)+
w\sum_{n\ge1}\dfrac{a_2(n)}{n^{k-s}}\gamma_2\Bigl(k-s,\dfrac{n}{A}\Bigr)$$
and symmetrically
$$\Lambda_2(s)=\sum_{n\ge1}\dfrac{a_2(n)}{n^s}\gamma_2\Bigl(s,\dfrac{n}{A}\Bigr)+
w^{-1}\sum_{n\ge1}\dfrac{a_1(n)}{n^{k-s}}\gamma_1(k-s,nA)\;,$$
where the $\gamma_i(s,x)$ are the corresponding incomplete gamma products.
\end{theorem}
Note that, as already mentioned, it is immediate to modify this theorem
to take into account possible poles of $L_i(s)$.
Since the incomplete gamma products $\gamma_i(s,x)$ tend to $0$ exponentially
fast when $x\to\infty$, the above formulas are rapidly
convergent series. We can make this more precise: if we write as above
$\gamma_i(s,x)\sim C_ix^{B'_i}\exp(-\pi d(x/f_i^{1/2})^{2/d})$, then since
the convergence of the series is dominated by the exponential term, choosing
$A=1$, to have the $n$th term of the series less than $e^{-D}$, say, we
need (approximately) $\pi d(n/f^{1/2})^{2/d}>D$, in other words
$n>(D/(\pi d))^{d/2}f^{1/2}$, with $f=\max(f_1,f_2)$. Thus, if the
``conductor'' $f$ is large, we may have some trouble. But this stays
reasonable for $f<10^8$, say.
The above argument leads to the belief that, apart from special values which
can be computed by other methods, the computation of values of $L$-functions
of conductor $f$ requires at least $C\cdot f^{1/2}$ operations. It has,
however, been shown by Hiary (see \cite{Hia}) that if $f$ is far from
squarefree (for instance if $f=m^3$ for Dirichlet $L$-functions), the
computation can be done faster (in $\Os(m)$ in the case $f=m^3$), at least in
the case of Dirichlet $L$-functions.
For practical applications, it is very useful to introduce an additional
function as a parameter. We state the following version due to Rubinstein
(see \cite{Rub}), whose proof is essentially identical to that of the
preceding version. To simplify the exposition, we again assume that the
$L$-function has no poles (it is easy to generalize), but also that
$L_2=\ov{L_1}$.
\begin{theorem} Let $L(s)=\sum_{n\ge1}a(n)n^{-s}$ be an $L$-function as above
with functional equation $\Lambda(k-s)=w\ov{\Lambda}(s)$, where
$\Lambda(s)=\gamma(s)L(s)$. For simplicity of exposition, assume that $L(s)$
has no poles in ${\mathbb C}$. Let $g(s)$ be an entire function such that for fixed
$s$ we have $|\Lambda(z+s)g(z+s)/z|\to0$ as $\Im(z)\to\infty$ in any bounded
strip $|\Re(z)|\le \alpha$. We have
$$\Lambda(s)g(s)=\sum_{n\ge1}\dfrac{a(n)}{n^s}f_1(s,n)
+w\sum_{n\ge1}\dfrac{\ov{a(n)}}{n^{k-s}}f_2(k-s,n)\;,$$
where
$$f_1(s,x)=\dfrac{x^s}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\dfrac{\gamma(z)g(z)x^{-z}}{z-s}\,dz\text{\quad and\quad}f_2(s,x)=\dfrac{x^s}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\dfrac{\gamma(z)\ov{g(k-\ov{z})}x^{-z}}{z-s}\,dz\;,$$
where $\sigma$ is any real number greater than the real parts of all the
poles of $\gamma(z)$ and than $\Re(s)$.
\end{theorem}
Several comments are in order concerning this theorem:
\begin{enumerate}\item As already mentioned, the proof is a technical but
elementary exercise in complex analysis. In particular, it is very easy to
modify the formula to take into account possible poles of $L(s)$, see
\cite{Rub} once again.
\item
As in the unsmoothed case, the functions $f_i(s,x)$ are exponentially
decreasing as $x\to\infty$. Thus this gives fast formulas for computing values
of $L(s)$ for reasonable values of $s$. The very simplest case of this
approximate functional equation, even simpler than that for the Riemann zeta
function, is the computation of the value at $s=1$ of the $L$-function of an
\emph{elliptic curve} $E$: if the sign of its functional equation is equal
to $+1$ (otherwise $L(E,1)=0$), the (unsmoothed) formula reduces to
$$L(E,1)=2\sum_{n\ge1}\dfrac{a(n)}{n}e^{-2\pi n/N^{1/2}}\;,$$
where $N$ is the conductor of the curve.
\item It is not difficult to show that as $n\to\infty$ we have a similar
behavior for the functions $f_i(s,n)$ as in the unsmoothed case
(Corollary \ref{asympunsmooth}), i.e.,
$$f_i(s,n)\sim C_i\cdot n^{B'_i}e^{-\pi d(n/N^{1/2})^{2/d}}$$
for some explicit constants $C_i$ and $B'_i$ (in the preceding example $d=2$).
\item The theorem can be used with $g(s)=1$ to compute values of
$L(s)$ for ``reasonable'' values of $s$. When $s$ is unreasonable,
for instance when $s=1/2+iT$ with $T$ large (to check the Riemann
hypothesis for instance), one chooses other functions $g(s)$ adapted
to the computation to be done, such as $g(s)=e^{is\theta}$ or
$g(s)=e^{-a(s-s_0)^2}$; I refer to Rubinstein's paper for detailed
examples.
\item By choosing two very simple functions $g(s)$ such as $a^s$ for
two different values of $a$ close to $1$, one can compute numerically
the value of the root number $\omega$ if it is unknown. In a similar manner,
if the $a(n)$ are known but not $\omega$ nor the conductor $N$, by
choosing a few easy functions $g(s)$ one can find them. But much more
surprisingly, if almost nothing is known apart from the gamma factors and $N$,
say, by cleverly choosing a number of functions $g(s)$ and applying techniques
from numerical analysis such as singular value decomposition and least
squares methods, one can prove or disprove (numerically of course)
the existence of an $L$-function having the given gamma factors and
conductor, and find its first few Fourier coefficients if they exist.
This method has been used extensively by D.~Farmer in his search for
$\mathrm{GL}_3({\mathbb Z})$ and $\mathrm{GL}_4({\mathbb Z})$ Maass forms, by Poor and Yuen in computations
related to the paramodular conjecture of Brumer--Kramer and abelian surfaces,
and by A.~Mellit in the search for $L$-functions of degree $4$ with
integer coefficients and small conductor. Although this is a fascinating and
active subject, it would carry us too far afield to give more detailed
explanations.\end{enumerate}
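As an illustration of the elliptic-curve formula in item (2) above, here is a Python sketch for the conductor-$11$ curve $E\colon y^2+y=x^3-x^2-10x-20$, whose functional equation has sign $+1$. The $a_p$ are obtained by naive point counting, with $a_{11}=1$ at the bad prime; the curve model and the reference value $L(E,1)=0.25384186\dots$ are quoted from standard tables, not derived here:

```python
import math

def a_p(p):
    # Trace of Frobenius a_p = p + 1 - #E(F_p) for the conductor-11 curve
    # E : y^2 + y = x^3 - x^2 - 10x - 20, by naive point counting (p != 11).
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x - x * x - 10 * x - 20) % p
        for y in range(p):
            if (y * y + y - rhs) % p == 0:
                count += 1
    return p + 1 - count

def a_n(n, _cache={1: 1.0}):
    # Hecke coefficients: multiplicative, with a_{p^{e+1}} = a_p a_{p^e} - p a_{p^{e-1}}
    # at the good primes, and a_{11^e} = a_11^e = 1 at the bad prime.
    if n in _cache:
        return _cache[n]
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    m, e = n, 0
    while m % p == 0:
        m, e = m // p, e + 1
    ap = 1.0 if p == 11 else float(a_p(p))
    prev, cur = 1.0, ap
    for _ in range(e - 1):
        prev, cur = cur, ap * cur - (0.0 if p == 11 else p) * prev
    _cache[n] = cur * a_n(m)
    return _cache[n]

def L_E_1(nmax=50, N=11):
    # Unsmoothed formula L(E,1) = 2 sum a(n)/n * exp(-2 pi n / sqrt(N)).
    return 2 * sum(a_n(n) / n * math.exp(-2 * math.pi * n / math.sqrt(N))
                   for n in range(1, nmax + 1))
```

Since $e^{-2\pi n/\sqrt{11}}$ decays like $0.15^n$, about $20$ terms already give full double precision, in line with the $C\cdot f^{1/2}$ heuristic above.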
\subsection{Inverse Mellin Transforms}
We thus see that it is necessary to compute inverse Mellin transforms of some
common gamma factors. Note that the exponential factors (either involving the
conductor and/or ${\mathfrak p}i$) are easily taken into account: if
$\gamma(s)=\M(W)(s)=\int_0^\infty W(t)t^s\,dt/t$ is the Mellin transform of $W(t)$,
we have for $a>0$, setting $u=at$:
$$\int_0^\infty W(at)t^s\,dt/t=\int_0^\infty W(u)u^sa^{-s}\,du/u=a^{-s}\gamma(s)\;,$$
so the inverse Mellin transform of $a^{-s}\gamma(s)$ is simply $W(at)$.
As we have seen, there exists an explicit formula for the inverse Mellin
transform, which is immediate from the Fourier inversion formula.
We will see that although this looks quite technical, it is in practice very
useful for computing inverse Mellin transforms.
Let us look at the simplest examples (omitting the exponential factor $f^{s/2}$
thanks to the above remark):
\begin{enumerate}
\item $\M^{-1}(\Gamma_{{\mathbb R}}(s))=2e^{-\pi x^2}$ (this occurs for $L$-functions of
even characters, and in particular for $\zeta(s)$).
\item $\M^{-1}(\Gamma_{{\mathbb R}}(s+1))=2xe^{-\pi x^2}$ (this occurs for $L$-functions of
odd characters).
\item $\M^{-1}(\Gamma_{{\mathbb C}}(s))=2e^{-2\pi x}$ (this occurs for $L$-functions
attached to modular forms and to elliptic curves).
\item $\M^{-1}(\Gamma_{{\mathbb R}}(s)^2)=4K_0(2\pi x)$ (this occurs for instance for
Dedekind zeta functions of real quadratic fields). Here $K_0(z)$ is a
well-known special function called a $K$-Bessel function. Of course this is
just a name, but it can be computed quite efficiently and can be found in
all computer algebra packages.
\item $\M^{-1}(\Gamma_{{\mathbb C}}(s)^2)=8K_0(4\pi x^{1/2})$.
\item $\M^{-1}(\Gamma_{{\mathbb C}}(s)\Gamma_{{\mathbb C}}(s-1))=8K_1(4\pi x^{1/2})/x^{1/2}$, where
$K_1(z)$ is another $K$-Bessel function, which can be defined by
$K_1(z)=-K_0'(z)$.
\end{enumerate}
\begin{exercise} Prove all these formulas.
\end{exercise}
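The first three closed forms, together with the scaling remark of the previous subsection, are easy to test numerically. A Python sketch, using Simpson quadrature with ad hoc parameters and the normalization $\Gamma_{{\mathbb C}}(s)=2(2\pi)^{-s}\Gamma(s)$ that appears below:

```python
import math

def mellin(W, s, upper=12.0, n=60000):
    # Simpson's rule for \int_0^\infty W(t) t^s dt/t, truncated at `upper`
    # (the W considered here decay at least like e^{-2 pi t}).
    h = upper / n
    f = lambda t: W(t) * t ** (s - 1)
    total = f(0.0) + f(upper)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

gammaR = lambda s: math.pi ** (-s / 2) * math.gamma(s / 2)       # Gamma_R(s)
gammaC = lambda s: 2 * (2 * math.pi) ** (-s) * math.gamma(s)     # Gamma_C(s)
```

The last assertion below checks the scaling rule: the Mellin transform of $W(at)$ is $a^{-s}\gamma(s)$.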
It is clear, however, that when the gamma factor is more complicated we
cannot write such ``explicit'' formulas: for instance, what must be done for
$\gamma(s)=\Gamma_{{\mathbb C}}(s)\Gamma_{{\mathbb R}}(s)$ or $\gamma(s)=\Gamma_{{\mathbb R}}(s)^3$? In fact all of
the above formulas involving $K$-Bessel functions are ``cheats'' in the sense
that we have simply given a \emph{name} to these inverse Mellin transforms,
without explaining how to compute them.
However, the Mellin inversion formula does provide such a method. The
main point to remember (apart of course from the crucial use of the Cauchy
residue formula and contour integration) is that the gamma function
\emph{tends to zero exponentially fast} on vertical lines, uniformly in the
real part (this may seem surprising if you have never seen it, since
the gamma function grows so fast on the real axis; see the appendix).
This exponential decrease implies that in the Mellin inversion
formula we can \emph{shift} the line of integration without changing
the value of the integral, as long as we take into account the residues
of the poles which are encountered along the way.
The line $\Re(s)=\sigma$ has been chosen so that $\sigma$ is larger than
the real part of any pole of $\gamma(s)$, so shifting to the right does not
bring anything. On the other hand, shifting towards the left shows that
for any $r<0$ which is not a pole of $\gamma(s)$ we have
$$W(t)=\sum_{\substack{s_0\text{ pole of $\gamma(s)$}\\ \Re(s_0)>r}}\operatorname{Res}_{s=s_0}(t^{-s}\gamma(s))+\dfrac{1}{2\pi i}\int_{r-i\infty}^{r+i\infty}t^{-s}\gamma(s)\,ds\;.$$
Using the reflection formula for the gamma function
$\Gamma(s)\Gamma(1-s)=\pi/\sin(\pi s)$, it is easy to show that if $r$ stays, say,
half-way between the real parts of two consecutive poles of $\gamma(s)$, then
$\gamma(s)$ will tend to $0$ exponentially fast on $\Re(s)=r$ as $r\to-\infty$;
in other words the integral tends to $0$ (exponentially fast). We thus
have the \emph{exact formula}
$$W(t)=\sum_{s_0\text{ pole of $\gamma(s)$}}\operatorname{Res}_{s=s_0}(t^{-s}\gamma(s))\;.$$
Let us see the simplest examples of this, taken from those given above.
\begin{enumerate}
\item For $\gamma(s)=\Gamma_{{\mathbb C}}(s)=2\cdot(2\pi)^{-s}\Gamma(s)$ the poles of $\gamma(s)$
are at $s_0=-n$, $n$ a positive or zero integer, and since
$\Gamma(s)=\Gamma(s+n+1)/((s+n)(s+n-1)\cdots s)$, the residue at $s_0=-n$ is equal to
$$2\cdot (2\pi t)^n\Gamma(1)/((-1)(-2)\cdots(-n))=2\,(-1)^n(2\pi t)^n/n!\;,$$
so we obtain $W(t)=2\sum_{n\ge0}(-1)^n(2\pi t)^n/n!=2\cdot e^{-2\pi t}$.
Of course we knew that!
\item For $\gamma(s)=\Gamma_{{\mathbb C}}(s)^2=4(2\pi)^{-2s}\Gamma(s)^2$, the inverse Mellin
transform is $8K_0(4\pi x^{1/2})$, whose expansion we do \emph{not} yet know.
The poles of $\gamma(s)$ are again at $s_0=-n$, but here all the poles are
double poles, so the computation is slightly more complicated. More precisely
we have $$\Gamma(s)^2=\Gamma(s+n+1)^2/((s+n)^2(s+n-1)^2\cdots s^2)\;,$$ so setting
$s=-n+\varepsilon$ with $\varepsilon$ small this gives
\begin{align*}\Gamma(-n+\varepsilon)^2&=\dfrac{\Gamma(1+\varepsilon)^2}{\varepsilon^2}\dfrac{1}{(1-\varepsilon)^2\cdots(n-\varepsilon)^2}\\
&=\dfrac{1+2\Gamma'(1)\varepsilon+O(\varepsilon^2)}{n!^2\varepsilon^2}(1+2\varepsilon/1)(1+2\varepsilon/2)\cdots(1+2\varepsilon/n)\\
&=\dfrac{1+2\Gamma'(1)\varepsilon+O(\varepsilon^2)}{n!^2\varepsilon^2}(1+2H_n\varepsilon+O(\varepsilon^2))\;,\end{align*}
where we recall that $H_n=\sum_{1\le j\le n}1/j$ is the harmonic sum.
Since $(4\pi^2t)^{-(-n+\varepsilon)}=(4\pi^2t)^{n-\varepsilon}=(4\pi^2t)^n(1-\varepsilon\log(4\pi^2t)+O(\varepsilon^2))$, it follows
that
$$(4\pi^2t)^{-(-n+\varepsilon)}\Gamma(-n+\varepsilon)^2=\dfrac{(4\pi^2t)^n}{n!^2\varepsilon^2}(1+\varepsilon(2H_n+2\Gamma'(1)-\log(4\pi^2t))+O(\varepsilon^2))\;,$$
so that the residue of $\gamma(s)$ at $s=-n$ is equal to
$4((4\pi^2t)^n/n!^2)(2H_n+2\Gamma'(1)-\log(4\pi^2t))$.
We thus have
$2K_0(4\pi t^{1/2})=\sum_{n\ge0}((4\pi^2t)^n/n!^2)(2H_n+2\Gamma'(1)-\log(4\pi^2t))$,
hence using the easily proven fact that $\Gamma'(1)=-\gamma$, where
$$\gamma=\lim_{n\to\infty}(H_n-\log(n))=0.57721566490\dots$$
is Euler's constant, this finally gives the expansion
$$K_0(t)=\sum_{n\ge0}\dfrac{(t/2)^{2n}}{n!^2}(H_n-\gamma-\log(t/2))\;.$$
\end{enumerate}
\begin{exercise} In a similar manner, or directly from this formula, find the
expansion of $K_1(t)$.
\end{exercise}
\begin{exercise}\label{exga}
Like all inverse Mellin transforms of gamma factors, the
function $K_0(x)$ tends to $0$ exponentially fast as $x\to\infty$
(more precisely, $K_0(x)\sim(2x/\pi)^{-1/2}e^{-x}$). Note that this is
absolutely not ``visible'' in the expansion given above. Use this remark
and the above expansion to write an algorithm which computes Euler's
constant $\gamma$ \emph{very efficiently} to a given accuracy.
\end{exercise}
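One way to carry out the exercise (essentially the Brent--McMillan idea, shown here only as a double-precision sketch with $x$ fixed at $10$): setting $I_0(x)=\sum_{n\ge0}(x/2)^{2n}/n!^2$ and $S(x)=\sum_{n\ge0}(x/2)^{2n}H_n/n!^2$, the expansion of $K_0$ gives $\gamma=S(x)/I_0(x)-\log(x/2)-K_0(x)/I_0(x)$, and neglecting the exponentially small last term costs only about $\pi e^{-2x}$:

```python
import math

def euler_gamma(x=10.0, nmax=60):
    # Accumulate I0(x) and S(x) term by term; `term` is (x/2)^{2n}/n!^2
    # and H is the harmonic number H_n at step n.
    term, H, I0, S = 1.0, 0.0, 0.0, 0.0
    for n in range(nmax):
        I0 += term
        S += term * H
        H += 1.0 / (n + 1)
        term *= (x / 2) ** 2 / ((n + 1) ** 2)
    # Neglecting K0(x)/I0(x), of size about pi * e^{-2x} (~6e-9 for x = 10).
    return S / I0 - math.log(x / 2)
```

Increasing $x$ (and the working precision) improves the accuracy roughly like $e^{-2x}$, which is what makes the method efficient.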
It must be remarked that even though the series defining the inverse Mellin
transform converge for \emph{all} $x>0$, one needs a large number of terms
before the terms become very small when $x$ is large. For instance, we have
seen that for $\gamma(s)=\Gamma(s)$ we have
$W(t)=\M^{-1}(\gamma)(t)=\sum_{n\ge0}(-1)^nt^n/n!=e^{-t}$,
but this series is not very good for computing $e^{-t}$.
\begin{exercise} Show that for $t>0$, to compute $e^{-t}$ to any reasonable
accuracy (even to $1$ decimal) we must take at least $n>3.6\cdot t$ terms
($e=2.718\dots$), and work to accuracy at most $e^{-2t}$ in an evident sense.
\end{exercise}
The reason that this is not a good way is that there is catastrophic
cancellation in the series. One way to circumvent this problem is to
compute $e^{-t}$ as
$$e^{-t}=1/e^t=1/\sum_{n\ge0}t^n/n!\;,$$
and the cancellation problem disappears. However this is very special to
the exponential function, and is not applicable for instance to the
$K$-Bessel function.
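The cancellation is easy to exhibit in double precision (a Python sketch; the value $t=30$ is arbitrary):

```python
import math

def exp_neg_direct(t, nmax=200):
    # Naive alternating sum sum_{n>=0} (-1)^n t^n / n!  -- catastrophic
    # cancellation in floating point for large t.
    total, term = 0.0, 1.0
    for n in range(nmax):
        total += term
        term *= -t / (n + 1)
    return total

def exp_neg_recip(t, nmax=200):
    # Stable alternative: 1 / sum_{n>=0} t^n / n!  (all terms positive).
    total, term = 0.0, 1.0
    for n in range(nmax):
        total += term
        term *= t / (n + 1)
    return 1.0 / total
```

At $t=30$ the largest term of the alternating series is about $7\cdot10^{11}$, so roundoff swamps the true answer $e^{-30}\approx 9.4\cdot10^{-14}$ completely, while the reciprocal of the positive series is accurate to nearly full precision.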
Nonetheless, an important result is that for any inverse Mellin transform
as above, or more importantly for the corresponding incomplete gamma
product, there exist \emph{asymptotic expansions} as $x\to\infty$, in other
words nonconvergent series which nevertheless give a good approximation if limited
to a few terms.
Let us take the simplest example, the incomplete gamma function
$\Gamma(s,x)=\int_x^\infty t^se^{-t}\,dt/t$. The \emph{power series} expansion
is easily seen to be (at least for $s$ not a negative or zero integer,
otherwise the formula must be slightly modified):
$$\Gamma(s,x)=\Gamma(s)-\sum_{n\ge0}(-1)^n\dfrac{x^{n+s}}{n!(s+n)}\;,$$
which has the same type of (bad when $x$ is large) convergence behavior as
the series for $e^{-x}$. On the other hand, it is immediate to prove by integration by parts
that
\begin{align*}\Gamma(s,x)&=e^{-x}x^{s-1}\left(1+\dfrac{s-1}{x}+\dfrac{(s-1)(s-2)}{x^2}+\cdots\right.\\
&\phantom{=}\left.+\dfrac{(s-1)(s-2)\cdots(s-n)}{x^n}+R_n(s,x)\right)\;,\end{align*}
and one can show that in reasonable ranges of $s$ and $x$ the modulus of
$R_n(s,x)$ is smaller than the first ``neglected term'' in an evident sense.
This is therefore quite a practical method for computing these functions
when $x$ is rather large.
\begin{exercise} Explain why the asymptotic series above terminates when
$s$ is a strictly positive integer.
\end{exercise}
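A Python sketch of the resulting method, truncating the (divergent) expansion at the smallest term, the usual rule for asymptotic series; the tests use the terminating cases $s=1$ and $s=3$ and compare $s=1/2$ with $\Gamma(1/2,x)=\sqrt{\pi}\,\operatorname{erfc}(\sqrt{x})$:

```python
import math

def inc_gamma_asymp(s, x, nmax=30):
    # Truncated asymptotic series
    #   Gamma(s, x) ~ e^{-x} x^{s-1} (1 + (s-1)/x + (s-1)(s-2)/x^2 + ...),
    # stopping as soon as the terms stop decreasing in modulus (so x should
    # be large compared with s for this to make sense).
    total, term = 1.0, 1.0
    for n in range(1, nmax + 1):
        new = term * (s - n) / x
        if abs(new) >= abs(term):  # terms started growing: stop here
            break
        term = new
        total += term
        if term == 0.0:            # series terminated (s a positive integer)
            break
    return math.exp(-x) * x ** (s - 1) * total
```

For $s$ a strictly positive integer the factor $(s-n)$ vanishes and the series terminates, giving the exact value, e.g.\ $\Gamma(3,x)=(x^2+2x+2)e^{-x}$.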
\subsection{Hadamard Products and Explicit Formulas}
This could be the subject of a course in itself, so we will be quite
brief. I refer to Mestre's paper \cite{Mes} for a precise and
general statement (note that there are quite a number of evident
misprints in the paper).
In Theorem \ref{thmapprox} we assume that the $L$-series that we consider
satisfy a functional equation, together with some mild growth conditions,
in particular that they are of finite order. According to a well-known
theorem of complex analysis, this implies that they have a so-called
\emph{Hadamard product}; see the appendix. For instance, in the case of the
Riemann zeta function, which is of order $1$, we have
$$\zeta(s)=\dfrac{e^{bs}}{s(s-1)\Gamma(s/2)}\prod_{\rho}\left(1-\dfrac{s}{\rho}\right)e^{s/\rho}\;,$$
where the product is over all nontrivial zeros of $\zeta(s)$ (i.e., those with
$0\le\Re(\rho)\le1$), and $b=\log(2\pi)-1-\gamma/2$. In fact, this can be written
in a much nicer way as follows: recall that
$\Lambda(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$ satisfies $\Lambda(1-s)=\Lambda(s)$. Then
$$s(s-1)\Lambda(s)=\prod_{\rho}\left(1-\dfrac{s}{\rho}\right)\;,$$
where it is now understood that the product is taken as the limit as
$T\to\infty$ of $\prod_{|\Im(\rho)|\le T}(1-s/\rho)$.
However, almost all $L$-functions that are used in number theory not only
have the above properties, but also have \emph{Euler products}. Taking again
the example of $\zeta(s)$, we have for $\Re(s)>1$ the Euler product
$\zeta(s)=\prod_p(1-1/p^s)^{-1}$. It follows that (in a suitable range of $s$)
we have equality between two products, hence, taking logarithms, equality
between two \emph{sums}. In our case the Hadamard product gives
$$\log(\Lambda(s))=-\log(s(s-1))+\sum_{\rho}\log(1-s/\rho)\;,$$
while the Euler product gives
\begin{align*}\log(\Lambda(s))&=-(s/2)\log(\pi)+\log(\Gamma(s/2))-\sum_p\log(1-1/p^s)\\
&=-(s/2)\log(\pi)+\log(\Gamma(s/2))+\sum_{p,k\ge1}1/(kp^{ks})\;.\end{align*}
Equating the two sides gives a relation between, on the one hand, a
sum over the nontrivial zeros of $\zeta(s)$, and on the other hand a
sum over prime powers.
In itself, this is not very useful. The crucial idea is to introduce
a test function $F$, which we will choose to suit our needs,
and obtain a formula depending on $F$ and some transforms of it.
This is in fact quite easy to do, and even though it is not very useful in this
case, let us perform the computation for Dirichlet $L$-functions of
even primitive characters.
\begin{theorem} Let $\chi$ be an even primitive Dirichlet character of
conductor $N$, and let $F$ be a real function satisfying a number of easy
technical conditions (see \cite{Mes}). We have the \emph{explicit formula}:
\begin{align*}\sum_{\rho}\Phi(\rho)&-2\delta_{N,1}\int_{-\infty}^\infty F(x)\cosh(x/2)\,dx\\
&=-\sum_{p,k\ge1}\dfrac{\log(p)}{p^{k/2}}(\chi^k(p)F(k\log(p))+\ov{\chi^k(p)}F(-k\log(p)))\\
&\phantom{=}+F(0)\log(N/\pi)\\
&\phantom{=}+\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}\dfrac{F(x/2)+F(-x/2)}{2}\right)\,dx\;,\end{align*}
where we set
$$\Phi(s)=\int_{-\infty}^\infty F(x)e^{(s-1/2)x}\,dx\;,$$
and as above the sum on $\rho$ is a sum over all the nontrivial zeros of
$L(\chi,s)$ taken symmetrically
($\sum_{\rho}=\lim_{T\to\infty}\sum_{|\Im(\rho)|\le T}$).
\end{theorem}
\begin{remarks}{\rm \begin{enumerate}
\item Write $\rho=1/2+i\gamma$ (if the GRH is true all the $\gamma$ are real,
but even without GRH we can always write this). Then
$$\Phi(\rho)=\int_{-\infty}^\infty F(x)e^{i\gamma x}\,dx=\widehat{F}(\gamma)$$
is simply the value at $\gamma$ of the \emph{Fourier transform}
$\widehat{F}$ of $F$.
\item It is immediate to generalize to odd $\chi$ or to more general
$L$-functions:
\begin{exercise} After studying the proof, generalize to an arbitrary pair
of $L$-functions as in Theorem \ref{thmapprox}.
\end{exercise}
\end{enumerate}}
\end{remarks}
\begin{proof} The proof is not difficult, but involves a number of integral
transform computations. We will omit some detailed justifications which
are in fact easy but boring.
As in the theorem, we set
$$\Phi(s)=\int_{-\infty}^\infty F(x)e^{(s-1/2)x}\,dx\;,$$
and we first prove some lemmas.
\begin{lemma}
We have the inversion formulas, valid for any $c>1$:
$$F(x)=\dfrac{e^{x/2}}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)e^{-sx}\,ds
\text{\quad and\quad}
F(-x)=\dfrac{e^{x/2}}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(1-s)e^{-sx}\,ds\;.$$
\end{lemma}
\begin{proof} This is in fact a hidden version of the Mellin inversion formula:
setting $t=e^x$ in the definition of $\Phi(s)$, we deduce that
$\Phi(s)=\int_0^\infty F(\log(t))t^{s-1/2}\,dt/t$, so that
$\Phi(s+1/2)$ is the Mellin transform of $F(\log(t))$. By Mellin inversion
we thus have for sufficiently large $\sigma$:
$$F(\log(t))=\dfrac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\Phi(s+1/2)t^{-s}\,ds\;,$$
so changing $s$ into $s-1/2$ and $t$ into $e^x$ gives the first formula
for $c=\sigma+1/2$ sufficiently large, and the assumptions on $F$ (which
we have not given) imply that we can shift the line of integration to any
$c>1$ without changing the integral.
For the second formula, we simply note that
$$\Phi(1-s)=\int_{-\infty}^\infty F(x)e^{-(s-1/2)x}\,dx
=\int_{-\infty}^\infty F(-x)e^{(s-1/2)x}\,dx\;,$$
so we simply apply the first formula to $F(-x)$.\qed\end{proof}
\begin{corollary} For any $c>1$ and any $p\ge1$ we have
\begin{align*}
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)p^{-ks}\,ds&=F(k\log(p))p^{-k/2}\text{\quad and}\\
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(1-s)p^{-ks}\,ds&=F(-k\log(p))p^{-k/2}\;.
\end{align*}
\end{corollary}
\begin{proof} Simply apply the lemma to $x=k\log(p)$.\qed\end{proof}
Note that we will also use this corollary for $p=1$.
\begin{lemma} Denote as usual by $\psi(s)$ the logarithmic derivative
$\Gamma'(s)/\Gamma(s)$ of the gamma function. We have
\begin{align*}
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)\psi(s/2)\,ds&=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}F(x/2)\right)\,dx\text{\quad and}\\
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(1-s)\psi(s/2)\,ds&=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}F(-x/2)\right)\,dx\;.\end{align*}
\end{lemma}
\begin{proof} We use one of the most common integral representations of
$\psi$, see Proposition 9.6.43 of \cite{Coh4}: we have
$$\psi(s)=\int_0^\infty\left(\dfrac{e^{-x}}{x}-\dfrac{e^{-sx}}{1-e^{-x}}\right)\,dx\;.$$
Thus, assuming that we can interchange integrals (which is easy to justify),
we have, using the preceding lemma:
\begin{align*}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)\psi(s/2)\,ds&=\int_0^\infty\left(\dfrac{e^{-x}}{x}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)\,ds\right.\\
&\phantom{=}\left.-\dfrac{1}{1-e^{-x}}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)e^{-(s/2)x}\,ds\right)\,dx\\
&=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}F(x/2)\right)\,dx\;,\end{align*}
proving the first formula, and the second follows by changing $F(x)$ into
$F(-x)$.\qed\end{proof}
\noindent
{\it Proof of the theorem.\/} Recall from above that if we set
$\Lambda(s)=N^{s/2}\pi^{-s/2}\Gamma(s/2)L(\chi,s)$ we have the functional
equation $\Lambda(1-s)=\omega(\chi)\Lambda(\ov{\chi},s)$ for some $\omega(\chi)$ of modulus
$1$.
For $c>1$, consider the following integral
$$J=\dfrac{1}{2i\pi}\int_{c-i\infty}^{c+i\infty}\Phi(s)\dfrac{\Lambda'(s)}{\Lambda(s)}\,ds\;,$$
which by our assumptions does not depend on $c>1$. We shift the line of
integration to the left (it is easily seen that this is allowed) to the
line $\Re(s)=1-c$, so by the residue theorem we obtain
$$J=S+\dfrac{1}{2i\pi}\int_{1-c-i\infty}^{1-c+i\infty}\Phi(s)\dfrac{\Lambda'(s)}{\Lambda(s)}\,ds\;,$$
where $S$ is the sum of the residues in the rectangle $[1-c,c]\times{\mathbb R}$.
We first have possible poles at $s=0$ and $s=1$, which occur only for $N=1$,
and they contribute to $S$
$$-\delta_{N,1}(\Phi(0)+\Phi(1))=-2\delta_{N,1}\int_{-\infty}^\infty F(x)\cosh(x/2)\,dx\;,$$
and second, of course, we have the contributions from the nontrivial zeros
$\rho$, which contribute $\sum_{\rho}\Phi(\rho)$, where it is understood that
zeros are counted with multiplicity, so that
$$S=-2\delta_{N,1}\int_{-\infty}^\infty F(x)\cosh(x/2)\,dx+\sum_{\rho}\Phi(\rho)\;.$$
On the other hand, by the functional equation we have
$\Lambda'(1-s)/\Lambda(1-s)=-\overline{\Lambda}'(s)/\overline{\Lambda}(s)$ (note that this does not involve
$\omega(\chi)$), where we write $\overline{\Lambda}(s)$ for $\Lambda(\ov{\chi},s)$, so that
\begin{align*}\int_{1-c-i\infty}^{1-c+i\infty}\Phi(s)\dfrac{\Lambda'(s)}{\Lambda(s)}\,ds
&=\int_{c-i\infty}^{c+i\infty}\Phi(1-s)\dfrac{\Lambda'(1-s)}{\Lambda(1-s)}\,ds\\
&=-\int_{c-i\infty}^{c+i\infty}\Phi(1-s)\dfrac{\overline{\Lambda}'(s)}{\overline{\Lambda}(s)}\,ds\;.\end{align*}
Thus,
\begin{align*}S&=J-\dfrac{1}{2i\pi}\int_{1-c-i\infty}^{1-c+i\infty}\Phi(s)\dfrac{\Lambda'(s)}{\Lambda(s)}\,ds\\
&=\dfrac{1}{2i\pi}\int_{c-i\infty}^{c+i\infty}\left(\Phi(s)\dfrac{\Lambda'(s)}{\Lambda(s)}+\Phi(1-s)\dfrac{\overline{\Lambda}'(s)}{\overline{\Lambda}(s)}\right)\,ds\;.\end{align*}
Now by definition we have as above
$$\log(\Lambda(s))=\dfrac{s}{2}\log(N/\pi)+\log\left(\Gamma\left(\dfrac{s}{2}\right)\right)+\sum_{p,k\ge1}\dfrac{\chi^k(p)}{kp^{ks}}$$
(where the double sum is over primes $p$ and integers $k\ge1$), so
$$\dfrac{\Lambda'(s)}{\Lambda(s)}=\dfrac{1}{2}\log(N/\pi)+\dfrac{1}{2}\psi(s/2)
-\sum_{p,k\ge1}\chi^k(p)\log(p)p^{-ks}\;,$$
and similarly for $\overline{\Lambda}'(s)/\overline{\Lambda}(s)$. Thus, by the above lemmas and corollaries,
we have
$$S=\log(N/\pi)F(0)+J_1-\sum_{p,k\ge1}\dfrac{\log(p)}{p^{k/2}}(\chi^k(p)F(k\log(p))+\ov{\chi^k(p)}F(-k\log(p)))\;,$$
where
$$J_1=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}\dfrac{F(x/2)+F(-x/2)}{2}\right)\,dx\;,$$
proving the theorem.\qed
This theorem can be used in several different directions, and has
been an extremely valuable tool in analytic number theory. Just to
mention a few:
\begin{enumerate}\item Since the conductor $N$ occurs, we can obtain
\emph{bounds} on $N$, assuming certain conjectures such as the
generalized Riemann hypothesis. For instance, this is how
Stark--Odlyzko--Poitou--Serre find \emph{lower bounds for discriminants}
of number fields. This is also how Mestre finds lower bounds
for conductors of abelian varieties, and so on.
\item When the $L$-function has a zero at its central point (here of
course it usually does not, but for more general $L$-functions
it is important), this can give good upper bounds for the order
of the zero.
\item More generally, suitable choices of the test functions
can give information on the nontrivial zeros $\rho$ of small
imaginary part.
\end{enumerate}
\section{Some Useful Analytic Computational Tools}
We finish this course by giving a number of little-known numerical methods
which are not always directly related to the computation of $L$-functions, but
which are often very useful.
\subsection{The Euler--MacLaurin Summation Formula}
This numerical method is \emph{very} well-known (there is in fact even a
whole chapter in Bourbaki devoted to it!), and is as old as Taylor's
formula, but deserves to be mentioned since it is very useful. We will be
vague on purpose, and refer to \cite{Bou} or Section 9.2 of \cite{Coh4} for
details. Recall that the \emph{Bernoulli numbers} are defined by the formal
power series
$$\dfrac{T}{e^T-1}=\sum_{n\ge0}\dfrac{B_n}{n!}T^n\;.$$
We have $B_0=1$, $B_1=-1/2$, $B_2=1/6$, $B_3=0$, $B_4=-1/30$, and
$B_{2k+1}=0$ for $k\ge1$.
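Multiplying the defining series by $e^T-1$ and comparing coefficients of $T^{n+1}$ gives the recurrence $\sum_{0\le j\le n}\binom{n+1}{j}B_j=0$ for $n\ge1$, which generates the $B_n$ in exact rational arithmetic. A minimal Python sketch (the function name is mine):

```python
from fractions import Fraction
from math import comb

def bernoulli_list(nmax):
    # B_0 = 1; for n >= 1, sum_{j=0}^{n} C(n+1, j) B_j = 0, obtained by
    # multiplying T/(e^T - 1) by e^T - 1 and comparing coefficients of T^{n+1}.
    B = [Fraction(1)]
    for n in range(1, nmax + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

B = bernoulli_list(12)  # B[1] = -1/2, B[2] = 1/6, B[4] = -1/30, ...
```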
Let $f$ be a $C^\infty$ function defined on ${\mathbb R}_{>0}$. The basic statement of
the Euler--MacLaurin formula is that there exists a constant $z=z(f)$ such that
$$\sum_{n=1}^Nf(n)=\int_1^N f(t)\,dt+z(f)+\dfrac{f(N)}{2}+\sum_{1\le k\le p}
\dfrac{B_{2k}}{(2k)!}f^{(2k-1)}(N)+R_p(N)\;,$$
where $R_p(N)$ is ``small'', in general smaller than the first neglected term,
as in most asymptotic series.
The above formula can be slightly modified at will, first by changing the
lower bound of summation and/or of integration (which simply changes the
constant $z(f)$), and second by writing
$\int_1^Nf(t)\,dt+z(f)=z'(f)-\int_N^\infty f(t)\,dt$ (when $f$ tends to
$0$ sufficiently fast for the integral to converge), where
$z'(f)=z(f)+\int_1^\infty f(t)\,dt$.
The Euler--MacLaurin summation formula can be used in many contexts, but we
mention the two most important ones.
$\bullet$ First, to have some idea of the size of $\sum_{n=1}^Nf(n)$.
Let us take an example. Consider $S_2(N)=\sum_{n=1}^N n^2\log(n)$. Note
incidentally that
$$\exp(S_2(N))=\prod_{n=1}^N n^{n^2}=1^{1^2}2^{2^2}\cdots N^{N^2}\;.$$
What is the size of this generalized kind of factorial? Euler--MacLaurin
tells us that there exists a constant $z$ such that
\begin{align*}S_2(N)&=\int_1^N t^2\log(t)\,dt+z+\dfrac{N^2\log(N)}{2}\\
&\phantom{=}+\dfrac{B_2}{2!}(N^2\log(N))'+\dfrac{B_4}{4!}(N^2\log(N))'''+\cdots\;.\end{align*}
We have $\int_1^N t^2\log(t)\,dt=(N^3/3)\log(N)-(N^3-1)/9$,
$(N^2\log(N))'=2N\log(N)+N$, $(N^2\log(N))''=2\log(N)+3$, and
$(N^2\log(N))'''=2/N$, so using $B_2=1/6$ we obtain for some other constant
$z'$:
$$S_2(N)=\dfrac{N^3\log(N)}{3}-\dfrac{N^3}{9}+\dfrac{N^2\log(N)}{2}+\dfrac{N\log(N)}{6}+\dfrac{N}{12}+z'+O\left(\dfrac{1}{N}\right)\;,$$
which essentially answers our question, up to the determination of the constant
$z'$. Thus we obtain a generalized Stirling's formula:
$$\exp(S_2(N))=N^{N^3/3+N^2/2+N/6}e^{-(N^3/9-N/12)}C\;,$$
where $C=\exp(z')$ is an a priori unknown constant. In the case of the usual
Stirling's formula we have $C=(2\pi)^{1/2}$, so we can ask for a similar
formula here. And indeed, such a formula exists: we have
$$C=\exp(\zeta(3)/(4\pi^2))\;.$$
\begin{exercise} Do a similar (but simpler) computation for
$S_1(N)=\sum_{1\le n\le N}n\log(n)$. The corresponding constant is explicit
but more difficult (it involves $\zeta'(-1)$; more generally the constant
in $S_r(N)$ involves $\zeta'(-r)$).
\end{exercise}
$\bullet$ The second use of the Euler--MacLaurin formula is to increase
considerably the speed of convergence of slowly convergent series.
For instance, if you want to compute $\zeta(3)$ directly using the series
$\zeta(3)=\sum_{n\ge1}1/n^3$, since the remainder term after $N$ terms is
asymptotic to $1/(2N^2)$ you will never get more than $15$ or $20$ decimals
of accuracy. On the other hand, it is immediate to use Euler--MacLaurin:
\begin{exercise} Write a computer program implementing the computation
of $\zeta(3)$ (and more generally of $\zeta(s)$ for reasonable $s$) using
Euler--MacLaurin, and compute it to $100$ decimals.
\end{exercise}
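As a sketch of what such a program looks like, one can replace the tail $\sum_{n\ge N}n^{-s}$ by its Euler--MacLaurin expansion, which gives
$$\zeta(s)=\sum_{n<N}n^{-s}+\dfrac{N^{1-s}}{s-1}+\dfrac{N^{-s}}{2}+\sum_{k\ge1}\dfrac{B_{2k}}{(2k)!}\,s(s+1)\cdots(s+2k-2)\,N^{-s-2k+1}\;.$$
In Python, at double precision only (so far from the $100$ decimals of the exercise; the cutoffs $N=20$ and $B_{10}$ are my choices):

```python
from math import factorial

# Bernoulli numbers B_2, ..., B_10
B2K = {2: 1 / 6, 4: -1 / 30, 6: 1 / 42, 8: -1 / 30, 10: 5 / 66}

def zeta_em(s, N=20):
    # Partial sum plus the Euler--MacLaurin expansion of the tail
    # sum_{n >= N} n^{-s}; the k-th correction comes from the (2k-1)-st
    # derivative of x^{-s}.
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + N ** (-s) / 2
    for two_k, b in B2K.items():
        c = 1.0
        for j in range(two_k - 1):          # s(s+1)...(s+2k-2)
            c *= s + j
        total += b / factorial(two_k) * c * N ** (-s - two_k + 1)
    return total
```

With these choices the first neglected term is far below the machine epsilon, so `zeta_em(3.0)` returns $\zeta(3)=1.2020569\ldots$ to essentially full double precision.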
A variant of the method is to compute limits: a typical example is the
computation of Euler's constant
$$\gamma=\lim_{N\to\infty}\left(\sum_{n=1}^N\dfrac{1}{n}-\log(N)\right)\;.$$
Using Euler--MacLaurin, it is immediate to find the \emph{asymptotic expansion}
$$\sum_{n=1}^N\dfrac{1}{n}=\log(N)+\gamma+\dfrac{1}{2N}-\sum_{k\ge1}\dfrac{B_{2k}}{2kN^{2k}}$$
(note that this is not a misprint, the last denominator is $2kN^{2k}$, not
$(2k)!N^{2k}$).
\begin{exercise} Implement the above, and compute $\gamma$ to $100$ decimal
digits.\end{exercise}
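In the same spirit as before, a double-precision sketch of this exercise (again nowhere near $100$ digits; truncating the divergent Bernoulli series at $B_8$ is my choice):

```python
from math import log

def euler_gamma(N=20):
    # gamma = H_N - log N - 1/(2N) + sum_k B_{2k}/(2k N^{2k}), truncated at
    # B_8; the first neglected term, B_10/(10 N^10), bounds the error.
    H = sum(1.0 / n for n in range(1, N + 1))
    g = H - log(N) - 1 / (2 * N)
    for two_k, b in ((2, 1 / 6), (4, -1 / 30), (6, 1 / 42), (8, -1 / 30)):
        g += b / (two_k * N ** two_k)
    return g
```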
Note that this is \emph{not} the fastest way to compute Euler's constant,
the method using Bessel functions given in Exercise \ref{exga} is better.
\subsection{Variant: Discrete Euler--MacLaurin}
One problem with the Euler--MacLaurin method is that we need to compute
the derivatives $f^{(2k-1)}(N)$. When $k$ is tiny, say $k=2$ or $k=3$ this
can be done explicitly. When $f(x)$ has a special form, such as
$f(x)=1/x^{\alpha}$, it is very easy to compute all derivatives. In fact, this
is more generally the case when the expansion of $f(1/x)$ around $x=0$ is
known explicitly. But in general none of this is available.
One way around this is to use finite differences instead of derivatives:
we can easily compute
$$\Delta_{\delta}(f)(x)=(f(x+\delta)-f(x-\delta))/(2\delta)$$
and iterates of this, where $\delta$ is some fixed and nonzero number.
The choice of $\delta$ is essential: it should not be too large, otherwise
$\Delta_{\delta}(f)$ would be too far away from the true derivative
(which will be reflected in the speed of convergence of the asymptotic
formula), and it should not be too small, otherwise catastrophic cancellation
errors will occur. After numerous trials, the value $\delta=1/4$ seems
reasonable.
One last thing must be done: find the analogue of the Bernoulli numbers.
This is a very instructive exercise which we leave to the reader.
\subsection{Zagier's Extrapolation Method}
The following nice trick is due to D.~Zagier. Assume that you have
a sequence $u_n$ that you suspect of converging to some limit $a_0$ when
$n\to\infty$ in a regular manner. How do you give a reasonable numerical
estimate of $a_0$?
Assume for instance that as $n\to\infty$ we have
$u_n=\sum_{0\le i\le p}a_i/n^i+O(n^{-p-1})$ for any $p$. One idea would be to
choose suitable values of $n$ and solve a linear system. This would in
general be quite unstable and inaccurate. Zagier's trick is instead to
proceed as follows: choose some reasonable integer $k$, say $k=10$, set
$u'_n=n^ku_n$, and compute the $k$th \emph{forward difference}
$\Delta^k(u'_n)$ of this sequence (the forward difference of a sequence $w_n$
is the sequence $\Delta(w)_n=w_{n+1}-w_n$). Note that
$$u'_n=a_0n^k+\sum_{1\le i\le k}a_in^{k-i}+O(1/n)\;.$$
The two crucial points are the following:
\begin{itemize}\item The $k$th forward difference of a polynomial of degree
less than or equal to $k-1$ vanishes, and that of $n^k$ is equal to
$k!$.
\item Assuming reasonable regularity conditions, the $k$th forward difference
of an asymptotic expansion beginning at $1/n$ will begin at $1/n^{k+1}$.
\end{itemize}
Thus, under reasonable assumptions we have
$$a_0=\Delta^k(u')_n/k!+O(1/n^{k+1})\;,$$
so choosing $n$ large enough can give a good estimate for $a_0$.
A number of remarks concerning this basic method:
\begin{remarks}{\rm \begin{enumerate}
\item It is usually preferable to apply this not to the sequence $u_n$
itself, but for instance to the sequence $u_{n+100}$, if it is not too
expensive to compute, since the first terms of $u_n$ are usually far from
the asymptotic expansion.
\item It is immediate to modify the method to compute further coefficients
$a_1$, $a_2$, etc.
\item If the asymptotic expansion of $u_n$ is (for instance) in powers of
$1/n^{1/2}$, it is not difficult to modify this method, see below.
\end{enumerate}}
\end{remarks}
{\bf Example.} Let us compute numerically the constant occurring in
the first example of the use of Euler--MacLaurin that we have given. We
set
$$u_N=\sum_{1\le n\le N}n^2\log(n)-(N^3/3+N^2/2+N/6)\log(N)+N^3/9-N/12\;.$$
We compute for instance that $u_{1000}=0.0304456\cdots$, which has only
$4$ correct decimal digits. On the other hand, if we apply the above
trick with $k=12$ and $N=100$, we find
$$a_0=\lim_{N\to\infty}u_N=0.0304484570583932707802515304696767\cdots$$
with $28$ correct decimal digits: recall that the exact value is
$$\zeta(3)/(4\pi^2)=0.03044845705839327078025153047115477\cdots\;.$$
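This computation is easy to reproduce with the standard multiprecision `decimal` module; the sketch below uses $k=12$ and $N=100$ as above (the working precision of $60$ digits is my choice, taken large enough to survive the cancellation in the $k$th difference):

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 60      # extra digits absorb the cancellation in Delta^k

def u(N):
    # u_N = sum_{n<=N} n^2 log n - (N^3/3 + N^2/2 + N/6) log N + N^3/9 - N/12
    s = sum(Decimal(n) ** 2 * Decimal(n).ln() for n in range(2, N + 1))
    d = Decimal(N)
    return s - (d**3 / 3 + d**2 / 2 + d / 6) * d.ln() + d**3 / 9 - d / 12

k, N = 12, 100
seq = [Decimal(N + j) ** k * u(N + j) for j in range(k + 1)]  # u'_n = n^k u_n
for _ in range(k):                            # k-th forward difference
    seq = [seq[j + 1] - seq[j] for j in range(len(seq) - 1)]
a0 = seq[0] / factorial(k)                    # estimate of zeta(3)/(4 pi^2)
```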
Assume now that $u_n$ has an asymptotic expansion in integral powers of
$1/n^{1/2}$, i.e.,
$u_n=\sum_{0\le i\le p}a_i/n^{i/2}+O(n^{-(p+1)/2})$ for any $p$. We can modify
the above method as follows. First write
$u_n=v_n+w_n/n^{1/2}$, where $v_n=\sum_{0\le i\le q}a_{2i}/n^i+O(n^{-q-1})$
and $w_n=\sum_{0\le i\le q}a_{2i+1}/n^i+O(n^{-q-1})$ are two sequences as
above. Once again we choose some reasonable integer $k$ such as $k=10$, and
we now multiply the sequence $u_n$ by $n^{k-1/2}$, so we set
$u'_n=n^{k-1/2}u_n=n^{k-1/2}v_n+n^{k-1}w_n$. Thus, when we compute
the $k$th forward difference we will have
$$\Delta^k(n^{k-1/2}v_n)=\dfrac{(k-1/2)(k-3/2)\cdots 1/2}{n^{1/2}}\left(a_0+\sum_{0\le i\le q+k}b_{k,i}/n^i\right)$$
for certain coefficients $b_{k,i}$, while as above since
$n^{k-1}w_n=P_{k-1}(n)+O(1/n)$ for some polynomial $P_{k-1}(n)$ of degree
$k-1$, we have $\Delta^k(n^{k-1}w_n)=O(1/n^k)$. Thus we have essentially
eliminated the sequence $w_n$, so we now apply the usual method to
$v'_n=n^{1/2}\Delta^k(n^{k-1/2}v_n)$, which has an expansion in integral
powers of $1/n$: we will thus have
$$\Delta^k(v'_n)/k!=((k-1/2)(k-3/2)\cdots (1/2))a_0+O(1/n^k)$$
(in fact we do not even have to take the same $k$ for this last step).
This method can immediately be generalized to sequences $u_n$ having an
asymptotic expansion in integral powers of $n^{1/q}$ for small integers $q$.
\subsection{Computation of Euler Sums and Euler Products}
Assume that we want to compute numerically
$$S_1=\prod_p\left(1+\dfrac{1}{p^2}\right)\;,$$
where here and elsewhere, the expression $\prod_p$ always means the product
over all prime numbers. Trying to compute it using a large table of prime
numbers will not give much accuracy: if we use primes up to $X$, we will
make an error of the order of $1/X$, so it will be next to impossible to
have more than $8$ or $9$ decimal digits.
On the other hand, if we simply notice that $1+1/p^2=(1-1/p^4)/(1-1/p^2)$,
by definition of the Euler product for the Riemann zeta function this implies
that
$$S_1=\dfrac{\zeta(2)}{\zeta(4)}=\dfrac{\pi^2/6}{\pi^4/90}=\dfrac{15}{\pi^2}=1.519817754635066571658\cdots\;.$$
Unfortunately this is based on a special identity. What if we wanted instead
to compute $S_2=\prod_p(1+2/p^2)$? There is no special identity to help us
here.
The way around this problem is to approximate the function of which we want
to take the product (here $1+2/p^2$) by \emph{infinite products} of values
of the Riemann zeta function. Let us do it step by step before giving the
general formula.
When $p$ is large, $1+2/p^2$ is close to $1/(1-1/p^2)^2$, which is the
Euler factor for $\zeta(2)^2$. More precisely,
$(1+2/p^2)(1-1/p^2)^2=1-3/p^4+2/p^6$, so we deduce that
$$S_2=\zeta(2)^2\prod_p(1-3/p^4+2/p^6)=(\pi^4/36)\prod_p(1-3/p^4+2/p^6)\;.$$
Even though this looks more complicated, what we have gained is that the
new Euler product converges \emph{much} faster. Once again, if we compute it
for $p$ up to $10^8$, say, instead of having $8$ decimal digits we now
have approximately $24$ decimal digits (convergence in $1/X^3$ instead
of $1/X$). But there is no reason to stop there: we have
$(1-3/p^4+2/p^6)/(1-1/p^4)^3=1+O(1/p^6)$ with evident notation and explicit
formulas if desired, so we get an even better approximation by writing
$S_2=\zeta(2)^2/\zeta(4)^3\prod_p(1+O(1/p^6))$, with convergence in $1/X^5$.
More generally, it is easy to compute by induction exponents $a_n\in{\mathbb Z}$ such
that $S_2=\prod_{2\le n\le N}\zeta(n)^{a_n}\prod_p(1+O(1/p^{N+1}))$
(in our case $a_n=0$ for $n$ odd but this will not be true in general).
It can be shown in essentially all examples that one can pass to the limit,
and for instance here write $S_2=\prod_{n\ge2}\zeta(n)^{a_n}$.
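Numerically, already the two acceleration steps carried out above give essentially full double-precision accuracy. The Python sketch below (the sieve bound $10^4$ is my choice) compares the naive product over $p\le10^4$ with the twice-accelerated one over the same primes:

```python
from math import pi, prod

def primes_upto(X):
    # simple sieve of Eratosthenes
    is_p = bytearray([1]) * (X + 1)
    is_p[0:2] = b"\x00\x00"
    for n in range(2, int(X**0.5) + 1):
        if is_p[n]:
            is_p[n * n :: n] = bytearray(len(range(n * n, X + 1, n)))
    return [n for n in range(2, X + 1) if is_p[n]]

P = primes_upto(10**4)
naive = prod(1 + 2 / p**2 for p in P)    # truncation error ~ 1/X
zeta2, zeta4 = pi**2 / 6, pi**4 / 90
# S_2 = zeta(2)^2 zeta(4)^{-3} prod_p (1 - 3/p^4 + 2/p^6)/(1 - 1/p^4)^3,
# and the remaining Euler factors are 1 + O(1/p^6): error ~ 1/X^5
accel = zeta2**2 / zeta4**3 * prod(
    (1 - 3 / p**4 + 2 / p**6) / (1 - 1 / p**4) ** 3 for p in P
)
```

The two values agree to roughly $1/(X\log X)$, i.e., about five decimals here, while `accel` itself is already correct to essentially machine precision.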
\begin{exercise}\begin{enumerate}
\item Compute explicitly the recursion for the $a_n$ in the example of $S_2$.
\item More generally, if $S={\mathfrak p}rod_pf(p)$, where $f(p)$ has a convergent
series expansion in $1/p$ starting with $f(p)=1+1/p^b+o(1/p^b)$ with $b>1$
(not necessarily integral), express $S$ as a product of zeta values raised
to suitable exponents, and find the recursion for these exponents.
\end{enumerate}\end{exercise}
An important remark needs to be made here: even though the product
$\prod_{n\ge2}\zeta(n)^{a_n}$ may be convergent, it may converge rather slowly:
remember that when $n$ is large we have $\zeta(n)-1\sim1/2^n$, so that in fact
if the $a_n$ grow like $3^n$ the product will not even converge.
The way around this, which must be used even when the product converges, is
as follows: choose a reasonable integer $N$, for instance $N=50$, and
compute $\prod_{p\le 50}f(p)$, which is of course very fast. Then
the tail $\prod_{p>50}f(p)$ of the Euler product will be equal to
$\prod_{n\ge2}\zeta_{>50}(n)^{a_n}$, where $\zeta_{>N}(n)$ is the zeta function
without its Euler factors up to $N$, in other words
$\zeta_{>N}(n)=\zeta(n)\prod_{p\le N}(1-1/p^n)$ (I am assuming here that we have
zeta values at integers as in the $S_2$ example above, but it is immediate
to generalize). Since $\zeta_{>N}(n)-1\sim1/(N+1)^n$,
the convergence of our zeta product will of course be considerably faster.
Note that by using the power series expansion of the logarithm
together with \mathbf emph{M\"obius inversion}, it is immediate to do the same for
Euler \mathbf emph{sums}, for instance to compute $\sum_p1/p^2$ and the like,
see Section 10.3.6 of \cite{Coh4} for details. Using \mathbf emph{derivatives} of the
zeta function we can compute Euler sums of the type $\sum_p\log(p)/p^2$, and
using antiderivatives we can compute sums of the type $\sum_p1/(p^2\log(p))$.
We can even compute sums of the form $\sum_p\log(\log(p))/p^2$, but this
is slightly more subtle: it involves taking derivatives with respect to the
order of \mathbf emph{fractional derivation}.
We can also compute products and sums over primes
which involve Dirichlet characters, as long as their conductor is small,
as well as such products and sums where the primes are restricted to
certain congruence classes:
\begin{exercise} Compute to 100 decimal digits
$$\prod_{p\equiv1\pmod{4}}(1-1/p^2)\quad\text{and}\quad\prod_{p\equiv1\pmod{4}}(1+1/p^2)$$
by using products of $\zeta(ns)$ and of $L(\chi_{-4},ns)$ as above, where
as usual $\chi_{-4}$ is the character $\lgs{-4}{n}$.
\end{exercise}
\subsection{Summation of Alternating Series}
This is due to F.~Rodriguez-Villegas, D.~Zagier, and the author \cite{Coh-Vil-Zag}.
We have seen above the use of the Euler--MacLaurin summation formula to sum
quite general types of series. If the series is \emph{alternating} (the terms
alternate in sign), the method cannot be used as is, but it is trivial to
modify it: simply write
$$\sum_{n\ge1}(-1)^nf(n)=\sum_{n\ge1}f(2n)-\sum_{n\ge1}f(2n-1)$$
and apply Euler--MacLaurin to each sum. One can even do better and avoid this
double computation, but this is not what I want to mention here.
A completely different method which is much simpler since it avoids completely
the computation of derivatives and Bernoulli numbers, due to the above authors,
is as follows. The idea is to express (if possible) $f(n)$ as a \emph{moment}
$$f(n)=\int_0^1 x^nw(x)\,dx$$
for some \emph{weight function} $w(x)$. Then it is clear that
$$S=\sum_{n\ge0}(-1)^nf(n)=\int_0^1\dfrac{1}{1+x}w(x)\,dx\;.$$
Assume that $P_n(X)$ is a polynomial of degree $n$ such that $P_n(-1)\ne0$.
Evidently
$$\dfrac{P_n(X)-P_n(-1)}{X+1}=\sum_{k=0}^{n-1}c_{n,k}X^k$$
is still a polynomial (of degree $n-1$), and we note the trivial fact that
\begin{align*}S&=\dfrac{1}{P_n(-1)}\int_0^1\dfrac{P_n(-1)}{1+x}w(x)\,dx\\
&=\dfrac{1}{P_n(-1)}\left(\int_0^1\dfrac{P_n(-1)-P_n(x)}{1+x}w(x)\,dx
+\int_0^1\dfrac{P_n(x)}{1+x}w(x)\,dx\right)\\
&=\dfrac{1}{P_n(-1)}\sum_{k=0}^{n-1}c_{n,k}f(k)+R_n\;,\end{align*}
with
$$|R_n|\le\dfrac{M_n}{|P_n(-1)|}\int_0^1\dfrac{1}{1+x}w(x)\,dx
=\dfrac{M_n}{|P_n(-1)|}S\;,$$
and where $M_n=\sup_{x\in[0,1]}|P_n(x)|$.
Thus if we can manage to have $M_n/|P_n(-1)|$ small, we obtain a good
approximation to $S$.
It is a classical result that the best choice for $P_n$ is given by the shifted
Chebyshev polynomials defined by $P_n(\sin^2(t))=\cos(2nt)$, but in any
case we can use these polynomials and ignore that they are the best.
This leads to an incredibly simple algorithm which we write explicitly:
$d\gets (3+\sqrt{8})^n$; $d\gets (d+1/d)/2$; $b\gets -1$; $c\gets -d$; $s\gets0$; For $k=0,\dotsc,n-1$ do:
$c\gets b-c$; $s\gets s+c\cdot f(k)$; $b\gets(k+n)(k-n)b/((k+1/2)(k+1))$;
The result is $s/d$.
The convergence is in $5.83^{-n}$.
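In Python the algorithm really is just a few lines; the following is a direct transcription (the test series $\sum_{k\ge0}(-1)^k/(k+1)=\log 2$ is my choice):

```python
from math import log, sqrt

def sum_alternating(f, n=30):
    # Computes sum_{k>=0} (-1)^k f(k), with error roughly 5.83^{-n}.
    d = (3 + sqrt(8)) ** n
    d = (d + 1 / d) / 2
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * f(k)
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1))
    return s / d

# log 2 = 1 - 1/2 + 1/3 - ... : 30 terms already reach machine precision
approx_log2 = sum_alternating(lambda k: 1 / (k + 1))
```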
It is interesting to note that, even though this algorithm is designed to
work with functions $f$ of the form $f(n)=\int_0^1 x^nw(x)\,dx$ with
$w$ continuous and positive, it is in fact valid outside its proven region
of validity. For example:
\begin{exercise}
It is well-known that the Riemann zeta function $\zeta(s)$
can be extended analytically to the whole complex plane, and that we have
for instance $\zeta(-1)=-1/12$ and $\zeta(-2)=0$. Apply the above algorithm to the
\emph{alternating} zeta function
$$\beta(s)=\sum_{n\ge1}(-1)^{n-1}\dfrac{1}{n^s}=\left(1-\dfrac{1}{2^{s-1}}\right)\zeta(s)$$
(incidentally, prove this identity), and show
the nonconvergent ``identities''
$$1-2+3-4+\cdots=1/4\text{\quad and\quad}1-2^2+3^2-4^2+\cdots=0\;.$$
\end{exercise}
\begin{exercise} (B.~Allombert.) Let $\chi$ be a periodic arithmetic function
of period $m$, say, and assume that $\sum_{0\le j<m}\chi(j)=0$ (for instance
$\chi(j)=(-1)^j$ with $m=2$).
\begin{enumerate}\item Using the same polynomials $P_n$ as above, write
a similar algorithm for computing $\sum_{n\ge0}\chi(n)f(n)$, and estimate
its rate of convergence.
\item Using this, compute to 100 decimals
$L(\chi_{-3},k)=1-1/2^k+1/4^k-1/5^k+\cdots$ for $k=1$, $2$, and $3$,
and recognize the exact value for $k=1$ and $k=3$.\end{enumerate}\end{exercise}
\subsection{Numerical Differentiation}
The problem is as follows: given a function $f$, say defined and $C^\infty$
on a real interval, compute $f'(x_0)$ for a given value of $x_0$. To be able
to analyze the problem, we will assume that $f'(x_0)$ is not too close to $0$,
and that we want to compute it to a given \emph{relative accuracy}, which
is what is usually required in numerical analysis.
The na\"\i ve, although reasonable, approach, is to choose a small $h>0$ and
compute $(f(x_0+h)-f(x_0))/h$. However, it is clear that (using the same
number of function evaluations) the formula $(f(x_0+h)-f(x_0-h))/(2h)$
will be better. Let us analyze this in detail. For simplicity we will
assume that all the derivatives of $f$ around $x_0$ that we consider are
neither too small nor too large in absolute value. It is easy to modify the
analysis to treat the general case.
Assume $f$ is computed to a relative accuracy of $\varepsilon$, in other words that
we know values $\tilde{f}(x)$ such that
$\tilde{f}(x)(1-\varepsilon)<f(x)<\tilde{f}(x)(1+\varepsilon)$
(the inequalities being reversed if $f(x)<0$). The absolute error
in computing $(f(x_0+h)-f(x_0-h))/(2h)$ is thus essentially equal to
$\varepsilon |f(x_0)|/h$. On the other hand, by Taylor's theorem we have
$(f(x_0+h)-f(x_0-h))/(2h)=f'(x_0)+(h^2/6)f'''(x)$ for some $x$ close to $x_0$,
so the absolute error made in computing $f'(x_0)$ as
$(f(x_0+h)-f(x_0-h))/(2h)$ is close to $\varepsilon |f(x_0)|/h+(h^2/6)|f'''(x_0)|$.
For a given value of $\varepsilon$ (i.e., the accuracy to which we compute $f$)
the optimal value of $h$ is $(3\varepsilon |f(x_0)/f'''(x_0)|)^{1/3}$, for an
absolute error of $(1/2)(3\varepsilon |f(x_0)|)^{2/3}|f'''(x_0)|^{1/3}$, hence a relative
error of $(3\varepsilon |f(x_0)|)^{2/3}|f'''(x_0)|^{1/3}/(2|f'(x_0)|)$.
Since we have assumed that the derivatives have reasonable size,
the relative error is roughly $C\varepsilon^{2/3}$,
so if we want this error to be less than $\eta$, say, we need $\varepsilon$
of the order of $\eta^{3/2}$, and $h$ will be of the order of $\eta^{1/2}$.
Note that this result is not completely intuitive. For instance,
assume that we want to compute derivatives to $38$ decimal digits.
With our assumptions, we choose $h$ around $10^{-19}$, and perform
the computations with $57$ decimals of relative accuracy. If for some
reason or other we are limited to $38$ decimals in the computation of $f$,
the ``intuitive'' way would be also to choose $h=10^{-19}$, and the above
analysis shows that we would obtain only approximately $19$ decimals.
On the other hand, if we chose $h=10^{-13}$ for instance, close to
$10^{-38/3}$, we would obtain $25$ decimals.
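The analysis is easy to check directly; the sketch below (test function and point are my choices) uses the optimal step $h\approx(3\varepsilon)^{1/3}$ for double precision, where $\varepsilon=2^{-52}$:

```python
import math

def deriv_central(f, x0):
    # (f(x0+h) - f(x0-h))/(2h) with h ~ (3 eps)^(1/3), balancing the
    # roundoff term eps|f|/h against the truncation term (h^2/6)|f'''|
    # (the ratio |f/f'''| is assumed to be of moderate size).
    eps = 2.0 ** -52
    h = (3 * eps) ** (1 / 3)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

d = deriv_central(math.sin, 1.0)
```

For $f=\sin$ at $x_0=1$ this returns $\cos(1)$ to about ten digits, in line with the $C\varepsilon^{2/3}$ estimate, whereas a much smaller $h$ such as $10^{-15}$ loses almost everything to cancellation.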
There are of course many other formulas for computing $f'(x_0)$, or for
computing higher derivatives, which can all easily be analyzed as above.
For instance (exercise), one can look for approximations to $f'(x_0)$ of the
form $S=(\sum_{1\le i\le 3}\lambda_if(x_0+h/a_i))/h$, for any nonzero and pairwise
distinct $a_i$, and we find that this is possible as soon as
$\sum_{1\le i\le 3}a_i=0$ (for instance, if $(a_1,a_2,a_3)=(-3,1,2)$
we have $(\lambda_1,\lambda_2,\lambda_3)=(-27,-5,32)/20$), and the absolute error is then
of the form $C_1/h+C_2h^3$, so the same analysis shows that we should
work with accuracy $\eta^{4/3}$ instead of $\eta^{3/2}$. Even though we
have $3/2$ times more evaluations of $f$, we require less accuracy:
for instance, if $f$ requires time $O(D^a)$ to be computed to $D$ decimals,
as soon as $(3/2)\cdot((4/3)D)^a<((3/2)D)^a$, i.e., $3/2<(9/8)^a$, hence
$a\ge3.45$, this new method will be faster.
Perhaps the best known method with more function evaluations is the
approximation
$$f'(x_0)\approx(f(x_0-2h)-8f(x_0-h)+8f(x_0+h)-f(x_0+2h))/(12h)\;,$$
which requires accuracy $\eta^{5/4}$, and since this requires $4$ evaluations
of $f$, this is faster than the first method as soon as
$2\cdot(5/4)^a<(3/2)^a$, in other words $a>3.81$, and faster than the
second method as soon as $(4/3)\cdot(5/4)^a<(4/3)^a$, in other words
$a>4.46$. To summarize, use the first method if $a<3.45$, the second method
if $3.45\le a<4.46$, and the third if $a>4.46$. Of course this game can
be continued at will, but there is not much point in doing so. In practice
the first method is sufficient.
\subsection{Double Exponential Numerical Integration}
A remarkable although little-known technique invented around 1970 deals with
\emph{numerical integration} (the numerical computation of a definite
integral $\int_a^b f(t)\,dt$, where $a$ and $b$ are allowed to be $\pm\infty$).
In usual numerical analysis courses one teaches very elementary techniques
such as the trapezoidal rule, Simpson's rule, or more sophisticated methods
such as Romberg or Gaussian integration. These methods apply to very general
classes of functions $f(t)$, but are unable to compute more than a few
decimal digits of the result, except for Gaussian integration which we will
mention below.
However, in most mathematical (as opposed for instance to physical) contexts,
the function $f(t)$ is \emph{extremely regular}, typically holomorphic or
meromorphic, at least in some domain of the complex plane. It was observed
in the late 1960's by H.~Takahashi and M.~Mori \cite{Tak-Mor} that
this property can be used to obtain a \emph{very simple} and
\emph{incredibly accurate} method to compute definite integrals of such
functions. It is now instantaneous to compute $100$ decimal digits, and takes
only a few seconds to compute $500$ decimal digits, say.
In view of its importance it is essential to have some knowledge of this
method. It can of course be applied in a wide variety of contexts, but note
also that in his thesis \cite{Mol}, P.~Molin has applied it specifically to
the \emph{rigorous} and \emph{practical} computation of values of
$L$-functions, which brings us back to our main theme.
There are two basic ideas behind this method. The first is in fact a theorem,
which I state in a vague form: If $F$ is a holomorphic function which tends to
$0$ ``sufficiently fast'' when $x\to\pm\infty$, $x$ real, then the most
efficient method to compute $\int_{{\mathbb R}}F(t)\,dt$ is indeed the trapezoidal
rule. Note that this is a \emph{theorem}, not so difficult but a little
surprising nonetheless. The definition of ``sufficiently fast'' can be
made precise. In practice, it means at least like $e^{-ax^2}$ ($e^{-a|x|}$
is not fast enough), but it can be shown that the best results are obtained
with functions tending to $0$ \emph{doubly exponentially fast} such as
$\exp(-\exp(a|x|))$. Note that it would be (very slightly) worse to choose
functions tending to $0$ even faster.
To be more precise, we have an estimate coming for instance from the
\emph{Euler--MacLaurin summation formula}:
$$\int_{-\infty}^{\infty}F(t)\,dt=h\sum_{n=-N}^NF(nh)+R_N(h)\;,$$
and under suitable holomorphy conditions on $F$, if we choose $h=a\log(N)/N$
for some constant $a$ close to $1$, the remainder term $R_N(h)$ will
satisfy $R_N(h)=O(e^{-bN/\log(N)})$ for some other (reasonable) constant $b$,
showing exponential convergence of the method.
The second and of course crucial idea of the method is as follows: evidently
not all functions tend to $0$ doubly exponentially at $\pm\infty$,
and definite integrals are not all from $-\infty$ to $+\infty$. But it is
possible to reduce to this case by using clever \emph{changes of variable}
(the essential condition of holomorphy must of course be preserved).
Let us consider the simplest example, but others that we give below are
variations on the same idea. Assume that we want to compute
$$I=\int_{-1}^1f(x)\,dx\;.$$
We make the ``magical'' change of variable $x=\phi(t)=\tanh(\sinh(t))$, so that
if we set $F(t)=f(\phi(t))$ we have
$$I=\int_{-\infty}^{\infty}F(t)\phi'(t)\,dt\;.$$
Because of the elementary properties of the hyperbolic sine and tangent,
we have gained two things at once: first the integral from $-1$ to $1$ is
now from $-\infty$ to $\infty$, but most importantly the function
$\phi'(t)$ is easily seen to tend to $0$ doubly exponentially. We thus
obtain an \emph{exponentially good approximation}
$$\int_{-1}^1f(x)\,dx=h\sum_{n=-N}^Nf(\phi(nh))\phi'(nh)+R_N(h)\;.$$
To give an idea of the method, if one takes $h=1/200$ and $N=500$, hence
only $1000$ evaluations of the function $f$, one can compute $I$ to several
hundred decimal places!
Before continuing, I would like to comment that in this theory many results
are not completely rigorous: the method works very well, but the proof that
it does is sometimes missing. Thus I cannot resist giving a \emph{proven and
precise} theorem due to P.~Molin (which is of course just an example).
We keep the above notation ${\mathfrak p}hi(t)=\tanh(\sinh(t))$, and note that
${\mathfrak p}hi'(t)=\cosh(t)/\cosh^2(\sinh(t))$.
\begin{theorem}[P.~Molin] Let $f$ be holomorphic on the disc $D=D(0,2)$
centered at the origin and of radius $2$. Then for all $N\ge1$, if we choose
$h=\log(5N)/N$ we have
$$\int_{-1}^1f(x)\,dx=h\sum_{n=-N}^Nf(\phi(nh))\phi'(nh)+R_N\;,$$
where
$$|R_N|\le \left(e^4\sup_{D}|f|\right)\exp(-5N/\log(5N))\;.$$
\end{theorem}
Coming back to the general situation, I briefly comment on the computation
of general definite integrals $\int_a^b f(t)\,dt$.
\begin{enumerate}\item If $a$ and $b$ are finite, we can reduce to $[-1,1]$
by affine changes of variable.
\item If $a$ (or $b$) is finite and the function has an algebraic singularity
at $a$ (or $b$), we remove the singularity by a polynomial change of variable.
\item If $a=0$ (say) and $b=\infty$, then if $f$ does \emph{not} tend to $0$
exponentially fast (for instance $f(x)\sim 1/x^k$), we use
$x=\phi(t)=\exp(\sinh(t))$.
\item If $a=0$ (say) and $b=\infty$ and if $f$ does tend to $0$
exponentially fast (for instance $f(x)\sim e^{-ax}$ or $f(x)\sim e^{-ax^2}$),
we use $x=\phi(t)=\exp(t-\exp(-t))$.
\item If $a=-\infty$ and $b=\infty$, use $x=\phi(t)=\sinh(\sinh(t))$ if
$f$ does not tend to $0$ exponentially fast, and $x=\phi(t)=\sinh(t)$
otherwise.
\end{enumerate}
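As an illustration of item (3), here is a minimal Python sketch (step size and truncation chosen ad hoc for double precision, not rigorously) computing $\int_0^\infty dx/(1+x^2)=\pi/2$ with the change of variable $x=\exp(\sinh(t))$:

```python
import math

def intde_0_inf(f, h=0.05, T=5.0):
    """Trapezoidal rule on [-T,T] after the change of variable
    x = exp(sinh(t)), suitable for f tending to 0 only polynomially."""
    total = 0.0
    n = int(T / h)
    for k in range(-n, n + 1):
        t = k * h
        x = math.exp(math.sinh(t))   # phi(t)
        dx = x * math.cosh(t)        # phi'(t)
        total += f(x) * dx
    return h * total

# intde_0_inf(lambda x: 1/(1 + x*x)) is close to pi/2.
```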
The problem of \emph{oscillating} integrals such as
$\int_0^\infty f(x)\sin(x)\,dx$ is more subtle, but similar methods do exist
when, as here, the oscillations are completely under control.
\begin{remark} The theorems are valid when the function is holomorphic in
a sufficiently large region compared to the path of integration. If the
function is only \emph{meromorphic}, with known poles, the direct application
of the formulas may give totally wrong answers. However, if we take into
account the poles, we can recover perfect agreement. Example of bad behavior:
$f(t)=1/(1+t^2)$ (poles $\pm i$). Integrating on the intervals
$[0,\infty]$, $[0,1000]$, or even $[-\infty,\infty]$, which involve different
changes of variable, gives perfect results (the latter being somewhat
surprising). On the other hand, integrating on $[-1000,1000]$ gives
a totally wrong answer because the poles are ``too close'', but it is easy
to take them into account if desired.
\end{remark}
Apart from the above pathological behavior, let us give a couple of examples
where we must slightly modify the direct use of doubly-exponential
integration techniques.
\newcommand{\hh}[1]{\^{}{#1}}
$\bullet$ Assume for instance that we want to compute
$$J=\int_1^\infty\left(\dfrac{1+e^{-x}}{x}\right)^2\,dx\;,$$
and that we use the built-in function {\tt intnum} of {\tt Pari/GP} for
doing so. The function tends to $0$ slowly at infinity, so we should compute
it using the {\tt GP} syntax {\tt oo} to represent $\infty$, so we write
{\tt f(x)=((1+exp(-x))/x)\hh{2};}, then {\tt intnum(x=1,oo,f(x))}.
This will give some sort of error, because the software will try to
evaluate $\exp(-x)$ for large values of $x$, which it cannot do since there
is exponent underflow. To compute the result, we need to split it into
its slow part and fast part: when a function tends exponentially fast to
$0$ like $\exp(-ax)$, $\infty$ is represented as {\tt [oo,a]}, so we write
$J=J_1+J_2$, with $J_1$ and $J_2$ computed by:
{\tt J1=intnum(x=1,[oo,1],(exp(-2*x)+2*exp(-x))/x\hh{2});} and
{\tt J2=intnum(x=1,oo,1/x\hh{2});}
(which of course is equal to $1$), giving
$$J=1.3345252753723345485962398139190637\cdots\;.$$
Note that we could have tried to ``cheat'' and written directly
{\tt intnum(x=1,[oo,1],f(x))}, but the answer would
be wrong, because the software would have assumed that $f(x)$ tends to $0$
exponentially fast, which is not the case.
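The same splitting can be mimicked outside of {\tt Pari/GP}. Here is a hedged Python sketch: $J_2=\int_1^\infty dx/x^2=1$ is done by hand, and $J_1$ is computed with the change of variable $x=1+\exp(t-\exp(-t))$ (item (4) of the recipe above, shifted to the interval $[1,\infty)$), with ad hoc step size and truncation:

```python
import math

def J1(h=0.05, tmin=-4.0, tmax=6.0):
    """Integral over [1,oo) of (exp(-2x)+2*exp(-x))/x^2, the part of the
    integrand tending to 0 exponentially fast."""
    total = 0.0
    steps = round((tmax - tmin) / h)
    for k in range(steps + 1):
        t = tmin + k * h
        u = math.exp(t - math.exp(-t))     # x = 1 + u
        x = 1.0 + u
        du = u * (1.0 + math.exp(-t))      # dx/dt
        total += (math.exp(-2 * x) + 2 * math.exp(-x)) / (x * x) * du
    return h * total

J = J1() + 1.0   # J2 = integral of 1/x^2 over [1,oo) equals 1
# J agrees with the value 1.33452527537233454859... computed by intnum above.
```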
$\bullet$ A second situation where we must be careful is when we have
``apparent singularities'' which are not real singularities.
Consider the function $f(x)=(\exp(x)-1-x)/x^2$. It has an apparent singularity
at $x=0$ but in fact it is completely regular. If you ask
{\tt J=intnum(x=0,1,f(x))}, you will get a result which is reasonably correct,
but never more than $19$ decimals, say. The reason is \emph{not}
a defect in the numerical integration routine, but rather lies in the computation
of $f(x)$: if you simply write {\tt f(x)=(exp(x)-1-x)/x\hh{2};}, the results
will be bad for $x$ close to $0$.
Assuming that you want $38$ decimals, say, the solution is to write
\noindent
{\tt f(x)=if(x<10\hh{(-10)},1/2+x/6+x\hh{2}/24+x\hh{3}/120,(exp(x)-1-x)/x\hh{2});}
and now we obtain the value of our integral as
$$J=0.59962032299535865949972137289656934022\cdots$$
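The same guard is easy to write in any language; here is a Python version (with the cutoff $10^{-4}$ chosen ad hoc for double precision rather than for $38$ decimals):

```python
import math

def f(x):
    """(exp(x)-1-x)/x^2, evaluated stably near the apparent singularity x=0."""
    if abs(x) < 1e-4:
        # Taylor expansion 1/2 + x/6 + x^2/24 + x^3/120 + ...
        return 0.5 + x / 6 + x * x / 24 + x ** 3 / 120
    return (math.exp(x) - 1.0 - x) / (x * x)
```

For $|x|$ around $10^{-4}$ the direct formula already loses roughly half of the double-precision digits to cancellation, while the truncated series is accurate to far below machine precision.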
\subsection{The Use of Abel--Plana for Definite Summation}
We finish this course by describing an identity, which is first quite amusing
and second can be used efficiently for definite summation. Consider for
instance the following theorem:
\begin{theorem} Define by convention $\sin(n/10)/n$ as equal to its limit
$1/10$ when $n=0$, and define $\sum'_{n\ge0}f(n)$ as
$f(0)/2+\sum_{n\ge1}f(n)$. We have
$$\sump_{n\ge0}\left(\dfrac{\sin(n/10)}{n}\right)^k=\int_0^\infty\left(\dfrac{\sin(x/10)}{x}\right)^k\,dx$$
for $1\le k\le 62$, but not for $k\ge63$.
\end{theorem}
If you do not like all these conventions, replace the left-hand side by
$$\dfrac{1}{2\cdot 10^k}+\sum_{n\ge1}\left(\dfrac{\sin(n/10)}{n}\right)^k\;.$$
It is clear that something is going on: it is the Abel--Plana formula.
There are several forms of this formula; here is one of them:
\begin{theorem}[Abel--Plana] Assume that $f$ is an entire function,
that $f(z)=o(\exp(2\pi|\Im(z)|))$ as $|\Im(z)|\to\infty$ uniformly in
vertical strips of bounded width, plus a number of less important additional
conditions which we omit. Then
\begin{align*}\sum_{m\ge1}f(m)&=\int_0^\infty f(t)\,dt-\dfrac{f(0)}{2}+i\int_0^\infty\dfrac{f(it)-f(-it)}{e^{2\pi t}-1}\,dt\\
&=\int_{1/2}^\infty f(t)\,dt-i\int_0^\infty\dfrac{f(1/2+it)-f(1/2-it)}{e^{2\pi t}+1}\,dt\;.\end{align*}
In particular, if the function $f$ is \emph{even}, we have
$$\dfrac{f(0)}{2}+\sum_{m\ge1}f(m)=\int_0^\infty f(t)\,dt\;.$$
\end{theorem}
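For the even case one can check the formula numerically in a few lines. A Python sketch for $f(x)=(\sin(x/10)/x)^2$ (the case $k=2$ of the theorem above), for which $\int_0^\infty f(t)\,dt=\pi/20$ by the classical Dirichlet integral $\int_0^\infty(\sin(ax)/x)^2\,dx=\pi a/2$:

```python
import math

def f(n):
    # (sin(n/10)/n)^2, with f(0) = (1/10)^2 by continuity
    return 0.01 if n == 0 else (math.sin(n / 10) / n) ** 2

# f(0)/2 + sum_{m>=1} f(m) should equal the integral pi/20.
# The tail of the partial sum is O(1/N), so only a few digits agree here.
s = f(0) / 2 + sum(f(m) for m in range(1, 200001))
```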
Since we have seen above that using doubly-exponential techniques it is easy
to compute numerically a definite \emph{integral}, the Abel--Plana formula can
be used to compute numerically a \emph{sum}. Note that in the first version
of the formula there is an apparent singularity (which is not an actual
singularity) at $t=0$; the second version avoids this problem.
In practice, this summation method is very competitive with other methods
if we use the doubly-exponential method to compute $\int_0^\infty f(t)\,dt$,
but most importantly if we use a variant of \emph{Gaussian integration} to
compute the complex integrals, since the nodes and weights for the
function $t/(e^{2\pi t}-1)$ can be computed once and for all by using
continued fractions, see Section \ref{sec:gauss}.
\section{The Use of Continued Fractions}
\subsection{Introduction}
The last idea that I would like to mention and that is applicable in quite
different situations is the use of continued fractions. Recall that a
continued fraction is an expression of the form
$$a_0+\dfrac{b_0}{a_1+\dfrac{b_1}{a_2+\dfrac{b_2}{a_3+\ddots}}}\;.$$
The problem of \emph{convergence} of such expressions (when they are unlimited)
is difficult and will not be considered here. We refer to any good textbook
on the elementary properties of continued fractions. In particular, recall that
if we denote by $p_n/q_n$ the $n$th \emph{convergent} (obtained by
stopping at $b_{n-1}/a_n$) then both $p_n$ and $q_n$ satisfy the same recursion
$u_n=a_nu_{n-1}+b_{n-1}u_{n-2}$.
We will mainly consider continued fractions representing \emph{functions}
as opposed to simply numbers. Whatever the context, the interest of continued
fractions (in addition to the fact that they are easy to evaluate) is that
they give essentially the \emph{best possible} approximations, both for
real numbers (this is the standard theory of \emph{regular} continued
fractions, where $b_n=1$ and $a_n\in{\mathbb Z}_{\ge1}$ for $n\ge1$), and for
functions (this is the theory of \emph{Pad\'e approximants}).
\subsection{The Two Basic Algorithms}
The first algorithm that we need is the following: assume that we want
to expand a (formal) power series $S(z)$ (without loss of generality
such that $S(0)=1$) into a continued fraction:
$$S(z)=1+c(1)z+c(2)z^2+\cdots = 1+\dfrac{b(0)z}{1+\dfrac{b(1)z}{1+\dfrac{b(2)z}{1+\ddots}}}\;.$$
The following method, called the \emph{quotient-difference} (QD) algorithm,
does what is required:
We define two arrays $e(j,k)$ for $j\ge0$ and $q(j,k)$ for $j\ge1$ by
$e(0,k)=0$, $q(1,k)=c(k+2)/c(k+1)$ for $k\ge0$, and by induction for $j\ge1$
and $k\ge0$:
\begin{align*}e(j,k)&=e(j-1,k+1)+q(j,k+1)-q(j,k)\;,\\
q(j+1,k)&=q(j,k+1)e(j,k+1)/e(j,k)\;.\end{align*}
Then $b(0)=c(1)$ and $b(2n-1)=-q(n,0)$ and $b(2n)=-e(n,0)$ for $n\ge1$.
Three essential implementation remarks. First, keeping the whole arrays is
costly: it is sufficient to keep the latest vectors of $e$ and $q$. Second,
even if the $c(n)$ are rational numbers, it is essential to do the computation
with floating point approximations to avoid coefficient explosion. The
algorithm can become unstable, but this is corrected by increasing the working
accuracy. Third, it is of course possible that some division by $0$ occurs,
and this is in fact quite frequent. There are several ways to overcome this,
probably the simplest being to multiply or divide the power series by
something like $1-z/\pi$.
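Keeping, as suggested, only the latest vectors of $e$ and $q$, the QD algorithm is only a few lines. The following Python sketch (in floating point, as recommended) expands $\exp(z)=\sum_{n\ge0}z^n/n!$, a series for which no division by $0$ occurs; it also includes the bottom-to-top evaluation described below:

```python
import math

def qd(c, M):
    """Quotient-difference: from power-series coefficients c[0..], c[0]=1,
    return [b(0),...,b(M)] with S(z) = 1 + b0 z/(1 + b1 z/(1 + ...))."""
    q = [c[k + 2] / c[k + 1] for k in range(len(c) - 2)]   # q(1,k)
    e = [0.0] * len(q)                                     # e(0,k) = 0
    b = [c[1]]
    while len(b) <= M:
        b.append(-q[0])                                    # b(2n-1) = -q(n,0)
        e = [e[k + 1] + q[k + 1] - q[k] for k in range(len(q) - 1)]
        if len(b) <= M:
            b.append(-e[0])                                # b(2n) = -e(n,0)
        q = [q[k + 1] * e[k + 1] / e[k] for k in range(len(e) - 1)]
    return b

def cfeval(b, z):
    """Evaluate 1 + b0 z/(1 + b1 z/(1 + ...)) from bottom to top."""
    t = 0.0
    for bn in reversed(b):
        t = bn * z / (1.0 + t)
    return 1.0 + t

c = [1.0 / math.factorial(n) for n in range(20)]
b = qd(c, 12)   # one finds b = [1, -1/2, 1/6, -1/6, 1/10, -1/10, ...]
```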
The second algorithm is needed to \emph{evaluate} the continued fraction for
a given value of $z$. It is well-known that this can be done from bottom to
top (start at $b(n)z/1$, then $b(n-1)/(1+b(n)z/1)$, etc.), or from top
to bottom (start at $(p(-1),q(-1))=(1,0)$, $(p(0),q(0))=(1,1)$, and use
the recursion). It is in general better to evaluate from bottom to top, but
before doing this we can considerably improve on the speed by using an identity
due to Euler:
$$1+\dfrac{b(0)z}{1+\dfrac{b(1)z}{1+\dfrac{b(2)z}{1+\ddots}}}
=1+\dfrac{B(0)}{Z+A(1)+\dfrac{B(1)}{Z+A(2)+\dfrac{B(2)}{Z+A(3)+\ddots}}}\;,$$
where $Z=1/z$, $A(1)=b(1)$, $A(n)=b(2n-2)+b(2n-1)$ for $n\ge2$,
$B(0)=b(0)$, $B(n)=-b(2n)b(2n-1)$ for $n\ge1$.
The reason why this is much faster is that we replace
$n$ multiplications ($b(j)*z$) plus $n$ divisions by
$1$ multiplication plus approximately $1+n/2$ divisions, counting as usual
additions as negligible.
This is still not the end of the story, since we can ``compress'' any
continued fraction by taking, for instance, two steps at once instead
of one, which reduces the cost. In any case this leads to a very efficient
method for evaluating continued fractions.
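To make Euler's identity concrete, here is a Python sketch evaluating the classical continued fraction of $\exp(z)$ (coefficients $b(0)=1$, $b(2n-1)=-1/(4n-2)$, $b(2n)=1/(4n+2)$, which is exactly what the QD algorithm produces from the exponential series) both directly from bottom to top and in the contracted Euler form:

```python
import math

# Coefficients of the classical continued fraction for exp(z):
# exp(z) = 1 + b0 z/(1 + b1 z/(1 + b2 z/(1 + ...))).
M = 12
b = [1.0] + [(-1.0 / (2 * m) if m % 2 else 1.0 / (2 * m + 2))
             for m in range(1, M + 1)]

def direct(b, z):
    """Bottom-to-top evaluation of 1 + b0 z/(1 + b1 z/(1 + ...))."""
    t = 0.0
    for bn in reversed(b):
        t = bn * z / (1.0 + t)
    return 1.0 + t

def euler(b, z):
    """Contracted form with Z = 1/z, A(1)=b1, A(n)=b(2n-2)+b(2n-1),
    B(0)=b0, B(n)=-b(2n)b(2n-1); top numerator is B(0)."""
    Z = 1.0 / z
    N = (len(b) - 1) // 2
    A = [0.0, b[1]] + [b[2 * n - 2] + b[2 * n - 1] for n in range(2, N + 1)]
    B = [b[0]] + [-b[2 * n] * b[2 * n - 1] for n in range(1, N + 1)]
    d = Z + A[N]
    for n in range(N - 1, 0, -1):
        d = Z + A[n] + B[n] / d
    return 1.0 + B[0] / d
```

Both evaluations agree with $\exp(z)$ to machine precision for small $z$, the Euler form using about half as many divisions.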
\subsection{Using Continued Fractions for Inverse Mellin Transforms}
We have mentioned above that one can use asymptotic expansions to compute
the incomplete gamma function $\Gamma(s,x)$ when $x$ is large. But this method
cannot give us great accuracy since we must stop the asymptotic expansion
at its smallest term. We can of course always use the power series expansion,
which has infinite radius of convergence, but when $x$ is large this is not
very efficient (remember the example of computing $e^{-x}$).
In the case of $\Gamma(s,x)$, continued fractions save the day: indeed, one can
prove that
$$\Gamma(s,x)=\dfrac{x^se^{-x}}{x+1-s-\dfrac{1(1-s)}{x+3-s-\dfrac{2(2-s)}{x+5-s-\ddots}}}\;,$$
with precisely known speed of convergence. This formula is the best method
for computing $\Gamma(s,x)$ when $x$ is large (say $x>50$), and can give arbitrary
accuracy.
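As a sketch (Python, double precision, with a fixed truncation depth rather than a rigorous error bound), the continued fraction can be evaluated from bottom to top; for $s=1/2$ one can compare with $\Gamma(1/2,x)=\sqrt{\pi}\,\mathrm{erfc}(\sqrt{x})$:

```python
import math

def incgam_cf(s, x, depth=80):
    """Gamma(s,x) via the continued fraction above, evaluated bottom-up.
    Level n has partial numerator n(n-s) and denominator x + 2n + 1 - s.
    Good for x large compared with s."""
    t = 0.0
    for n in range(depth, 0, -1):
        t = n * (n - s) / (x + 2 * n + 1 - s - t)
    return x ** s * math.exp(-x) / (x + 1 - s - t)

# Compare with Gamma(1/2, x) = sqrt(pi) * erfc(sqrt(x)).
```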
However here we were in luck: we had an ``explicit'' continued fraction
representing the function that we wanted to compute. Evidently, in general
this will not be the case.
It is a remarkable idea of T.~Dokchitser \cite{Dok} that it does not really
matter if the continued fraction is not explicit, at least in the context of
computing $L$-functions, for instance for inverse Mellin transforms. Simply do
the following:
\begin{enumerate}\item First compute sufficiently many terms of the asymptotic
expansion of the function to be computed. This is very easy because our
functions all satisfy a \emph{linear differential equation} with polynomial
coefficients, which gives a \emph{recursion} on the coefficients of the
asymptotic expansion.
\item Using the quotient-difference algorithm seen above, compute the
corresponding continued fraction, and write it in the form due to Euler
to evaluate it as efficiently as possible.
\item Compute the value of the function at all desired arguments by evaluating
the Euler continued fraction.\end{enumerate}
The first two steps are completely automatic and rigorous. The whole problem
lies in the third step, the evaluation of the continued fraction. In the case
of the incomplete gamma function, we had a theorem giving us the speed of
convergence. In the case of inverse Mellin transforms, not only do we not
have such a theorem, but we do not even know how to prove that the continued
fraction converges! However, experimentation shows that not only does the
continued fraction converge, but it does so rather fast, in fact at a speed
similar to that of the incomplete gamma function.
Even though this step is completely heuristic, since its introduction by
T.~Dokchitser it has been used in all packages computing $L$-functions, because
it is so useful. It would of course be nice to have a \emph{proof} of its
validity, but for now this seems completely out of reach, except for the
simplest examples where there are at most two gamma factors (for instance the
problem is completely open for the inverse Mellin transform of $\Gamma(s)^3$).
\subsection{Using Continued Fractions for Gaussian Integration and Summation}\label{sec:gauss}
We have seen above the doubly-exponential method for numerical integration,
which is robust and quite generally applicable. However, an extremely classical
method is \emph{Gaussian integration}: it is orders of magnitude faster,
but note the crucial fact that it is much less robust, in that it works
much less frequently.
The setting of Gaussian
integration is the following: we have a measure $d\mu$ on a (compact or
infinite) interval $[a,b]$; you can of course think of $d\mu$ as $K(x)dx$ for
some fixed function $K(x)$. We want to compute $\int_a^bf(x)d\mu$ by
means of \emph{nodes} and \emph{weights}, i.e., for a given $n$ compute $x_i$
and $w_i$ for $1\le i\le n$ such that $\sum_{1\le i\le n}w_if(x_i)$
approximates as closely as possible the exact value of the integral.
Note that \emph{classical} Gaussian integration such as Gauss--Legendre
integration (integration of a continuous function on a compact interval)
is easy to perform because one can easily compute explicitly the necessary
nodes and weights using standard \emph{orthogonal polynomials}. What I want to
stress here is that \emph{general} Gaussian integration can be performed very
simply using continued fractions, as follows.
In general the measure $d\mu$ is (or can be) given through its \emph{moments}
$M_k=\int_a^bx^kd\mu$. The remarkably simple algorithm to compute
the $x_i$ and $w_i$ using continued fractions is as follows:
\begin{enumerate}
\item Set $\Phi(z)=\sum_{k\ge0}M_kz^{k+1}$, and using the
quotient-difference algorithm compute $c(m)$ such that
$\Phi(z)=c(0)z/(1+c(1)z/(1+c(2)z/(1+\cdots)))$ (see the
remark made above in case the algorithm has a division by $0$; it may also
happen that the odd or even moments vanish, so that the continued fraction
is only in powers of $z^2$, but this is also easily dealt with).
\item For any $m$, denote as usual by $p_m(z)/q_m(z)$ the $m$th
convergent obtained by stopping the continued fraction at
$c(m)z/1$, and denote by $N_n(z)$ the reciprocal polynomial of
$p_{2n-1}(z)/z$ (which has degree $n-1$) and by $D_n(z)$ the
reciprocal polynomial of $q_{2n-1}$ (which has degree $n$).
\item The $x_i$ are the $n$ roots of $D_n$ (which are all simple
and in the interval $]a,b[$), and the $w_i$ are given by the formula
$w_i=N_n(x_i)/D'_n(x_i)$.
\end{enumerate}
By construction, this Gaussian integration method will work when the
function $f(x)$ to be integrated is well approximated by polynomials,
but otherwise will fail miserably, and this is why we say that the method
is much less ``robust'' than doubly-exponential integration.
The fact that Gaussian ``integration'' can also be used very efficiently for
numerical \emph{summation} was discovered quite recently by H.~Monien. We
explain the simplest case. Consider the measure on $]0,1]$ given by
$d\mu=\sum_{n\ge1}\delta_{1/n}/n^2$, where $\delta_x$ is the Dirac measure
centered at $x$. Thus by definition
$\int_0^1 f(x)d\mu=\sum_{n\ge1}f(1/n)/n^2$. Let us apply the recipe
given above: the $k$th moment $M_k$ is given by
$M_k=\sum_{n\ge1}(1/n)^k/n^2=\zeta(k+2)$, so that
$\Phi(z)=\sum_{k\ge1}\zeta(k+1)z^k$. Note that this is closely related to the
digamma function $\psi(z)$, but we do not need this. Applying the
quotient-difference algorithm, we write
$\Phi(z)=c(0)z/(1+c(1)z/(1+\cdots))$, and compute the $x_i$ and $w_i$ as
explained above. We will then have that $\sum_iw_if(x_i)$ is a very good
approximation to $\sum_{n\ge1}f(1/n)/n^2$, or equivalently (changing the
definition of $f$) that $\sum_iw_if(y_i)$ is a very good approximation
to $\sum_{n\ge1}f(n)$, with $y_i=1/x_i$.
To take essentially the simplest example, stopping the continued fraction
after two terms we find that
$y_1=1.0228086266\cdots$, $w_1=1.15343168\cdots$,
$y_2=4.371082834\cdots$, and $w_2=10.3627543\cdots$,
and (by definition) we have $\sum_{1\le i\le 2}w_if(y_i)=\sum_{n\ge1}f(n)$
for $f(n)=1/n^k$ with $k=2$, $3$, $4$, and $5$.
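We can check these numbers directly (a Python sketch; the digits quoted above are truncated, so only about $5$--$6$ decimals can be expected):

```python
import math

y = [1.0228086266, 4.371082834]   # nodes as quoted above
w = [1.15343168, 10.3627543]      # weights as quoted above

def gauss_sum(f):
    # 2-point Monien rule: approximates sum_{n>=1} f(n)
    return sum(wi * f(yi) for wi, yi in zip(w, y))

# gauss_sum(lambda n: 1/n**2) is close to zeta(2) = pi^2/6, and similarly
# for 1/n**3, 1/n**4 and 1/n**5.
```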
\section{{\tt Pari/GP} Commands}
In this section, we give some of the {\tt Pari/GP} commands related to the
subjects studied in this course, together with examples. Unless mentioned
otherwise, the commands assume that the current default accuracy is the
default, i.e., $38$ decimal digits.
{\tt zeta(s)}: Riemann zeta function at $s$.
\begin{verbatim}
? zeta(3)
? zeta(1/2+14*I)
- 0.10325812326645005790236309555257383451*I
\end{verbatim}
{\tt lfuncreate(obj)}: create $L$-function attached to mathematical object
{\tt obj}.
{\tt lfun(pol,s)}: Dedekind zeta function of the number field $K$ defined by
{\tt pol} at $s$. Identical to {\tt L=lfuncreate(pol); lfun(L,s)}.
\begin{verbatim}
? L = lfuncreate(x^3-x-1); lfunan(L,10)
? lfun(L,1)
? lfun(L,2)
\end{verbatim}
{\tt lfunlambda(pol,s)}: same, but for the completed function $\Lambda_K(s)$,
identical to {\tt lfunlambda(L,s)} where {\tt L} is as above.
\begin{verbatim}
? lfunlambda(L,2)
\end{verbatim}
{\tt lfun(D,s)}: $L$-function of quadratic character $(D/.)$ at $s$.
\noindent
Identical to {\tt L=lfuncreate(D); lfun(L,s)}.
\begin{verbatim}
? lfun(-23,-2)
? lfun(5,-1)
\end{verbatim}
{\tt L1=lfuncreate(pol); L2=lfuncreate(1); L=lfundiv(L1,L2)}: $L$ function
attached to $\zeta_K(s)/\zeta(s)$.
\begin{verbatim}
? L1 = lfuncreate(x^3-x-1); L2 = lfuncreate(1);
? L = lfundiv(L1,L2); lfunan(L,14)
\end{verbatim}
{\tt lfunetaquo($[m_1,r_1;m_2,r_2]$)}: $L$-function of the eta product
$\eta(m_1\tau)^{r_1}\eta(m_2\tau)^{r_2}$, for instance with
{\tt [1,1;23,1]} or {\tt [1,2;11,2]}.
\begin{verbatim}
? L1 = lfunetaquo([1,1;23,1]); lfunan(L1,14)
? L2 = lfunetaquo([1,2;11,2]); lfunan(L2,14)
\end{verbatim}
{\tt lfuncreate(ellinit(e))}: $L$-function of elliptic curve $e$, for
instance with $e=[0,-1,1,-10,-20]$.
\begin{verbatim}
? e = ellinit([0,-1,1,-10,-20]);
? L = lfuncreate(e); lfunan(L,14)
\end{verbatim}
{\tt ellap(e,p)}: compute $a(p)$ for an elliptic curve $e$.
\begin{verbatim}
? ellap(e,nextprime(10^42))
\end{verbatim}
{\tt eta(q+O(q\^{}B))\^{}m}: compute the $m$th power of $\eta$ to $B$ terms.
\begin{verbatim}
? eta(q+O(q^5))^26
\end{verbatim}
{\tt D=mfDelta(); mfcoefs(D,B)}: compute $B+1$ terms of the Fourier expansion
of $\Delta$.
\begin{verbatim}
? D = mfDelta(); mfcoefs(D,7)
\end{verbatim}
{\tt ramanujantau(n)}: compute Ramanujan's tau function $\tau(n)$ using
the trace formula.
\begin{verbatim}
? ramanujantau(nextprime(10^7))
\end{verbatim}
{\tt qfbhclassno(n)}: Hurwitz class number $H(n)$.
\begin{verbatim}
? vector(13,n,qfbhclassno(n-1))
\end{verbatim}
{\tt qfbsolve(Q,n)}: solve $Q(x,y)=n$ for a binary quadratic form $Q$
(contains in particular Cornacchia's algorithm).
\begin{verbatim}
? Q = Qfb(1,0,1); p = 10^16+61; qfbsolve(Q,p)
\end{verbatim}
{\tt gamma(s)}: gamma function at $s$.
\begin{verbatim}
? gamma(1/4)*gamma(3/4)-Pi*sqrt(2)
\end{verbatim}
{\tt incgam(s,x)}: incomplete gamma function $\Gamma(s,x)$.
\begin{verbatim}
? incgam(1,5/2)
\end{verbatim}
{\tt G=gammamellininvinit(A)}: initialize data for computing inverse Mellin
transforms of $\prod_{1\le i\le d}\Gamma_{{\mathbb R}}(s+a_i)$, with $A=[a_1,\ldots,a_d]$.
{\tt gammamellininv(G,t)}: inverse Mellin transform at $t$ of $A$, with
$G$ initialized as above.
\begin{verbatim}
? G = gammamellininvinit([0,0]); gammamellininv(G,2)
\end{verbatim}
{\tt besselk(nu,x)}: $K_{\nu}(x)$, $K$-Bessel function of (complex) index $\nu$ at
$x$.
\begin{verbatim}
? 4*besselk(0,4*Pi)
\end{verbatim}
{\tt sumnum(n=a,f(n))}: numerical summation of $\sum_{n\ge a}f(n)$ using
discrete Euler--MacLaurin.
\begin{verbatim}
? sumnum(n=1,1/(n^2+n^(4/3)))
\end{verbatim}
{\tt sumnumap(n=a,f(n))}: numerical summation of $\sum_{n\ge a}f(n)$ using
Abel--Plana.
{\tt sumnummonien(n=a,f(n))}: numerical summation using Monien's Gaussian
summation method (there also exists {\tt sumnumlagrange}, which can also be
very useful).
{\tt limitnum(n->f(n))}: limit of $f(n)$ as $n\to\infty$ using a variant
of Zagier's method, assuming asymptotic expansion in integral powers of $1/n$
(also {\tt asympnum} to obtain more coefficients).
\begin{verbatim}
? limitnum(n->(1+1/n)^n)
? asympnum(n->(1+1/n)^n*exp(-1))
\end{verbatim}
{\tt sumeulerrat(f(x))}: $\sum_{p\ge2}f(p)$, $p$ ranging over primes
(a more general variant exists for $\sum_{p\ge a}f(p^s)$).
\begin{verbatim}
? sumeulerrat(1/(x^2+x))
\end{verbatim}
{\tt prodeulerrat(f(x))}: $\prod_{p\ge2}f(p)$, $p$ ranging over primes, with
the same variants.
\begin{verbatim}
? prodeulerrat((1-1/x)^2*(1+2/x))
\end{verbatim}
{\tt sumalt(n=a,(-1)\^{}n*f(n))}: $\sum_{n\ge a}(-1)^nf(n)$, assuming $f$
positive.
\begin{verbatim}
? sumalt(n=1,(-1)^n/(n^2+n))
\end{verbatim}
{\tt f'(x)} (or {\tt deriv(f)(x)}): numerical derivative of $f$ at $x$.
\begin{verbatim}
? -zeta'(-2)
? zeta(3)/(4*Pi^2)
\end{verbatim}
{\tt intnum(x=a,b,f(x))}: numerical computation of $\int_a^b f(x)\,dx$ using
general doubly-exponential integration.
{\tt intnumgauss(x=a,b,f(x))}: numerical integration using Gaussian
integration.
\begin{verbatim}
? intnum(t=0,1,lngamma(t+1))
\end{verbatim}
For instance, for $500$ decimal digits, after the initial computation of
nodes and weights in both cases ({\tt intnuminit(0,1)} and
{\tt intnumgaussinit()}), this example requires $2.5$ seconds by
doubly-exponential integration but only $0.25$ seconds by Gaussian
integration.
\section{Three Pari/GP Scripts}
\subsection{The Birch--Swinnerton-Dyer Example}
Here is a list of commands which implements the explicit BSD example given
in Section \ref{sec:BSD}, again assuming the default accuracy of $38$ decimal
digits.
\begin{verbatim}
? E = ellinit([1,-1,0,-79,289]); /* initialize */
? N = ellglobalred(E)[1] /* compute conductor */
? /* define the integral $f(x)$ */
? f(x) = intnum(t=1,[oo,x],exp(-x*t)*log(t)^2);
? /* check that f(100) is small enough for 38D */
? f(100)
? A = ellan(E,8000); /* compute 8000 coefficients */
? /* Note that $2\pi 8000/sqrt(N) > 100$ */
? S = sum(n=1,8000,A[n]*f(2*Pi*n/sqrt(N)))
? /* compute APPARENT order of vanishing of L(E,s) */
? ellanalyticrank(E)[1]
\end{verbatim}
Note that for illustrative purposes we use the {\tt intnum} command to compute
$f(x)$, corresponding to the use of doubly-exponential integration, but in the
present case there are methods which are orders of magnitude faster.
The last command, which is almost immediate, implements these methods.
\subsection{The Beilinson--Bloch Example}
The code for the explicit Beilinson--Bloch example seen in Section
\ref{sec:BB} is simpler (I have used the integral representation of $g(u)$,
but of course I could have used the series expansion instead):
\begin{verbatim}
? e(u) =
{
my(E = ellinit([0,u^2+1,0,u^2,0]));
lfun(E,2)*ellglobalred(E)[1];
}
? g(u) =
{
my(S);
S = 2*Pi*intnum(t=0,1,asin(t)/(t*sqrt(1-(t/u)^2)));
S+Pi^2*acosh(u);
}
? e(5)/g(5)
? /* we obtain perfect accuracy */
? /* for example: */
? for(u = 2,18,print1(bestappr(e(u)/g(u),10^6)," "))
\end{verbatim}
\subsection{The Mahler Measure Example}
\begin{verbatim}
? L=lfunetaquo([2,1;4,1;6,1;12,1]);
\\ Equivalently L=lfuncreate(ellinit([0,-1,0,-4,4]));
? lfun(L,3)
? (Pi^2/36)*(Catalan*Pi+intnum(t=0,1,asin(t)*asin(1-t)/t))
\end{verbatim}
\section{Appendix: Selected Results}
\subsection{The Gamma Function}
The Gamma function, denoted by $\Gamma(s)$, can be defined in several different
ways. My favorite is the one I give in Section 9.6.2 of \cite{Coh4}, but for
simplicity I will recall the classical definition. For $s\in{\mathbb C}$ we define
$$\Gamma(s)=\int_0^\infty e^{-t}t^s\,\dfrac{dt}{t}\;.$$
It is immediate to see that this converges if and only if $\Re(s)>0$ (there is
no problem at $t=\infty$, the only problem is at $t=0$), and integration by
parts shows that $\Gamma(s+1)=s\Gamma(s)$, so that if $s=n$ is a positive integer,
we have $\Gamma(n)=(n-1)!$. We can now \emph{define} $\Gamma(s)$ for
all complex $s$ by using this recursion backwards, i.e., setting
$\Gamma(s)=\Gamma(s+1)/s$. It is then immediate to check that $\Gamma(s)$ is a meromorphic
function on ${\mathbb C}$ having poles at $s=-n$ for $n=0$, $1$, $2$,\dots, which
are simple with residue $(-1)^n/n!$.
The gamma function has numerous additional properties, the most important
being recalled below:
\begin{enumerate}
\item (Stirling's formula for large $\Re(s)$): as $s\to\infty$, $s\in{\mathbb R}$ (say;
there is a more general formulation) we have
$\Gamma(s)\sim s^{s-1/2}e^{-s}(2\pi)^{1/2}$.
\item (Stirling's formula for large $\Im(s)$): as $|T|\to\infty$, $\sigma\in{\mathbb R}$
being fixed (say, once again there is a more general formulation), we have
$|\Gamma(\sigma+iT)|\sim |T|^{\sigma-1/2}e^{-\pi |T|/2}(2\pi)^{1/2}$. In particular,
it tends to $0$ exponentially fast on vertical strips.
\item (Reflection formula): we have $\Gamma(s)\Gamma(1-s)=\pi/\sin(\pi s)$.
\item (Duplication formula): we have $\Gamma(s)\Gamma(s+1/2)=2^{1-2s}\pi^{1/2}\Gamma(2s)$
(there is also a more general distribution formula giving
$\prod_{0\le j<N}\Gamma(s+j/N)$ which we do not need). Equivalently, if we set
$\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$ and $\Gamma_{{\mathbb C}}(s)=2\cdot(2\pi)^{-s}\Gamma(s)$, we have
$\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)=\Gamma_{{\mathbb C}}(s)$.
\item (Link with the beta function): let $a$ and $b$ be in ${\mathbb C}$ with $\Re(a)>0$
and $\Re(b)>0$. We have
$$B(a,b):=\int_0^1t^{a-1}(1-t)^{b-1}\,dt=\dfrac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\;.$$
\end{enumerate}
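The reflection and duplication formulas are easy to check numerically; a quick Python sanity check using only the standard library (the sample points $s=0.3$ and $t=3.7$ are arbitrary):

```python
import math

s = 0.3
# Reflection formula: Gamma(s)Gamma(1-s) = pi/sin(pi s)
refl = math.gamma(s) * math.gamma(1 - s) - math.pi / math.sin(math.pi * s)

t = 3.7
# Duplication formula: Gamma(t)Gamma(t+1/2) = 2^(1-2t) sqrt(pi) Gamma(2t)
dup = (math.gamma(t) * math.gamma(t + 0.5)
       - 2 ** (1 - 2 * t) * math.sqrt(math.pi) * math.gamma(2 * t))

# Both differences vanish up to rounding errors.
```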
\subsection{Order of a Function: Hadamard Factorization}
Let $F$ be a holomorphic function in the whole of ${\mathbb C}$ (it is immediate
to generalize to the case of meromorphic functions, but for simplicity we
stick to the holomorphic case). We say that $F$ has \emph{finite order} if
there exists $\alpha\ge0$ such that as $|s|\to\infty$ we have
$|F(s)|\le e^{|s|^{\alpha}}$. The infimum of such $\alpha$ is called the order of
$F$. It is an immediate consequence of Liouville's theorem that functions
of order $0$ are polynomials. Most functions occurring in number theory,
and in particular all $L$-functions occurring in this course, have order $1$.
The Selberg zeta function, which we do not consider, is also an interesting
function and has order $2$.
The Weierstrass--Hadamard factorization theorem is the following:
\begin{theorem} Let $F$ be a holomorphic function of order $\rho$, set
$p=\lfloor\rho\rfloor$, let $(a_n)_{n\ge1}$ be the non-zero zeros of $F$
repeated with multiplicity, and let $m$ be the order of the zero at $z=0$.
There exists a polynomial $P$ of degree at most $p$ such that for all
$z\in{\mathbb C}$ we have
$$F(z)=z^me^{P(z)}\prod_{n\ge1}\left(1-\dfrac{z}{a_n}\right)\exp\left(\dfrac{z/a_n}{1}+\dfrac{(z/a_n)^2}{2}+\cdots+\dfrac{(z/a_n)^p}{p}\right)\;.$$
\end{theorem}
In the case of order $1$ which is of interest to us, this reads
$$F(z)=B\cdot z^me^{Az}\prod_{n\ge1}\left(1-\dfrac{z}{a_n}\right)e^{z/a_n}\;.$$
For example, we have
$$\sin(\pi z)=\pi z\prod_{n\ge1}\left(1-\dfrac{z^2}{n^2}\right)\text{\quad and\quad}\dfrac{1}{\Gamma(z+1)}=e^{\gamma z}\prod_{n\ge1}\left(1+\dfrac{z}{n}\right)e^{-z/n}\;,$$
where as usual $\gamma=0.57721\cdots$ is Euler's constant.
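The Hadamard product for the sine converges only like $1/N$, but a small Python sketch of its partial product illustrates the expansion:

```python
import math

def sin_product(z, N=200000):
    """Partial Hadamard product pi*z*prod_{n<=N}(1 - z^2/n^2); the omitted
    tail contributes a relative error of about z^2/N."""
    p = math.pi * z
    for n in range(1, N + 1):
        p *= 1.0 - (z * z) / (n * n)
    return p

# sin_product(0.5) is close to sin(pi/2) = 1, up to a few times 1e-6.
```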
\begin{exercise}\begin{enumerate}
\item Using these expansions, prove the reflection formula and the duplication
formula for the gamma function, and find the distribution formula giving
$\prod_{0\le j<N}\Gamma(s+j/N)$.
\item Show that the above expansion for the sine function is equivalent to
the formula expressing $\zeta(2k)$ in terms of Bernoulli numbers.
\item Show that the above expansion for the gamma function is equivalent to
the Taylor expansion
$$\log(\Gamma(z+1))=-\gamma z+\sum_{n\ge2}(-1)^n\dfrac{\zeta(n)}{n}z^n\;,$$
and prove the validity of this Taylor expansion for $|z|<1$, hence of
the above Hadamard product.
\end{enumerate}
\end{exercise}
\subsection{Elliptic Curves}
We will not need the abstract definition of an elliptic curve. For us, an
elliptic curve $E$ defined over a field $K$ will be a nonsingular projective
curve defined by the (affine) generalized Weierstrass equation with
coefficients in $K$:
$$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6\;.$$
This curve has a \emph{discriminant} (obtained essentially by completing the
square and computing the discriminant of the resulting cubic), and the
essential property of being nonsingular is equivalent to the discriminant being
nonzero.
This curve has a unique point ${\mathcal O}$ at infinity, with projective
coordinates $(0:1:0)$. Using chords and tangents one can define an addition
law on this curve, and the first essential (but rather easy) result is that
it is an \emph{abelian group law} with neutral element ${\mathcal O}$, making
$E$ into an algebraic group.
In the case where $K={\mathbb Q}$ (or more generally a number field), a deeper theorem
due to Mordell states that the group $E({\mathbb Q})$ of rational points of $E$ is a
\emph{finitely generated abelian group}, i.e., is isomorphic to
${\mathbb Z}^r\oplus E({\mathbb Q})_{\text{tors}}$, where $E({\mathbb Q})_{\text{tors}}$ (the torsion
subgroup) is a finite group, and the integer $r$ is called the (algebraic)
\emph{rank} of the curve.
Still in the case $K={\mathbb Q}$, for all prime numbers $p$ except a finite number,
we can \emph{reduce} the equation modulo $p$, thus obtaining an elliptic curve
over the finite field ${\mathbb F}_p$. Using an algorithm due to J.~Tate, we can find
first a \emph{minimal Weierstrass equation} for $E$, second the behavior of
$E$ reduced at the ``bad'' primes in terms of so-called \emph{Kodaira symbols},
and third the algebraic \emph{conductor} $N$ of $E$, product of the bad primes
raised to suitable exponents (and other important quantities).
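Over ${\mathbb F}_p$ the reduced curve can be explored directly. The following minimal brute-force sketch (the curve $y^2=x^3+x+1$ and the prime $p=5$ are arbitrary illustrative choices, and this is of course no substitute for Tate's algorithm or efficient point counting) counts points and recovers the trace of Frobenius $a_p=p+1-\#E({\mathbb F}_p)$:

```python
def count_points(a, b, p):
    """Number of projective points on y^2 = x^3 + a*x + b over F_p:
    all affine solutions plus the single point O = (0:1:0) at infinity."""
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x ** 3 + a * x + b)) % p == 0)
    return affine + 1

p = 5
N = count_points(1, 1, p)   # point count over F_5
a_p = p + 1 - N             # trace of Frobenius
```

The Hasse bound $|a_p|\le 2\sqrt p$ gives a quick sanity check on the resulting count.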
The deep theorem of Wiles et al. tells us that the $L$-function of $E$
(as defined in the main text) is equal to the $L$-function of a rational
Hecke eigenform in the modular form space $M_2(\Gamma_0(N))$, where $N$ is
the conductor of $E$.
A weak form of the Birch and Swinnerton-Dyer conjecture says that the
algebraic rank $r$ is equal to the analytic rank defined as the order of
vanishing of the $L$-function of $E$ at $s=1$.
\end{document}
\begin{document}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{con}[thm]{Conjecture}
\newtheorem{cons}[thm]{Construction}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{claim}[thm]{Claim}
\newtheorem{obs}[thm]{Observation}
\newtheorem{que}[thm]{Question}
\newtheorem{defn}[thm]{Definition}
\newtheorem{example}[thm]{Example}
\def\MAP#1#2#3{#1\colon\,#2\to#3}
\def\VEC#1#2#3{#1_{#2},\ldots,#1_{#3}}
\def\VECOP#1#2#3#4{#1_{#2}#4\cdots #4 #1_{#3}}
\def\SE#1#2#3{\sum_{#1=#2}^{#3}} \def\SGE#1#2{\sum_{#1\ge#2}}
\def\PE#1#2#3{\prod_{#1=#2}^{#3}} \def\PGE#1#2{\prod_{#1\ge#2}}
\def\UE#1#2#3{\bigcup_{#1=#2}^{#3}}
\def\FR#1#2{\frac{#1}{#2}}
\def\FL#1{\left\lfloor{#1}\right\rfloor}
\def\CL#1{\left\lceil{#1}\right\rceil}
\title{The average connectivity matrix of a graph}
\author{
Linh Nguyen\thanks{Department of Applied Mathematics and Statistics, State University of New York, Stony Brook, NY 11794-3600,
USA, [email protected]}\,
Suil O\thanks{Department of Applied Mathematics and Statistics, The State University of New York, Korea, Incheon, 21985, [email protected]. Research supported by NRF-2020R1F1A1A01048226, NRF-2021K2A9A2A06044515, and NRF-2021K2A9A2A1110161711}\,
}
\maketitle
\begin{abstract}
For a graph $G$ and for two distinct vertices $u$ and $v$, let $\kappa(u,v)$ be the maximum number of vertex-disjoint paths joining $u$ and $v$ in $G$.
The average connectivity matrix of an $n$-vertex connected graph $G$, written $A_{\overline{\kappa}}(G)$, is an $n\times n$ matrix whose $(u,v)$-entry is $\kappa(u,v)/{n \choose 2}$ and let $\rho(A_{\overline{\kappa}}(G))$ be the spectral radius of $A_{\overline{\kappa}}(G)$. In this paper, we investigate some spectral properties of the matrix. In particular, we prove that for any $n$-vertex connected graph $G$,
we have $\rho(A_{\overline{\kappa}}(G)) \le \frac{4\alpha'(G)}n$, which implies a result of Kim and O~\cite{KO} stating that for any connected graph $G$, we have $\overline{\kappa}(G) \le 2 \alpha'(G)$, where $\overline{\kappa}(G)=\sum_{u,v \in V(G)}\frac{\kappa(u,v)}{{n\choose 2}}$ and $\alpha'(G)$ is the maximum size of a matching in $G$; equality holds only when $G$ is a complete graph with an odd number of vertices. Also, for bipartite graphs, we improve the bound, namely $\rho(A_{\overline{\kappa}}(G)) \le \frac{(n-\alpha'(G))(4\alpha'(G) - 2)}{n(n-1)}$, and equality in the bound holds only when $G$ is a complete balanced bipartite graph.
\end{abstract}
{\bf MSC}: 05C50, 05C40, 05C70
{\bf Key words}: Average connectivity matrix, average connectivity, matchings, eigenvalues, spectral radius
\section {Introduction}
In this paper, all graphs are simple, finite, and undirected.
To measure how well a graph $G$ is connected, one normally computes the \emph{connectivity} of $G$, written $\kappa(G)$, which is the minimum size of a vertex set $S$
such that $G-S$ is disconnected or trivial.
However, since this value is based on a worst-case situation,
Beineke, Oellermann, and Pippert~\cite{BOP} introduced a parameter to reflect a global amount of connectivity. The \emph{average connectivity} of $G$ is
defined to be the number $\overline{\kappa}(G) = \sum_{u,v \in V(G)}\frac{\kappa(u,v)}{{n \choose 2}}$, where $\kappa(u,v)$ is the minimum number of vertices whose deletion makes $v$ unreachable from $u$. By Menger's Theorem, $\kappa(u,v)$ equals the maximum number of internally disjoint paths between $u$ and $v$.
For convenience, we set $\kappa(v,v)=0$ for every vertex $v \in V(G)$.
The \emph{average connectivity matrix} of a graph $G$ with the vertex set $V(G)=\{v_1,\ldots,v_n\}$, written $A_{\overline{\kappa}}(G)$, is an $n\times n$ matrix whose $(i,j)$-entry is $\kappa(v_i,v_j)/{n\choose 2}$. For a vertex $v \in V(G)$, let the \emph{transmission of $v$}, written $T(v)$, be $\frac 1{{n\choose 2}}\sum_{w \in V(G)}\kappa(v,w)$ and let the \emph{transmission of $G$}, written $T(G)$, be $\max_{v \in V(G)} T(v)$. Note that for an $n$-vertex tree $T_n$, since there is a unique path between any $v_i$ and $v_j$, we have
${n\choose 2} A_{\overline{\kappa}}(T_n)=A(K_n) = J_n - I_n$,
where $A(G)$ is the adjacency matrix of a graph $G$, $J_n$ is the $n\times n$ all-1 matrix, and $I_n$ is the $n\times n$ identity matrix.
Note that $2\overline{\kappa}(G)= \sum_{v \in V(G)}T(v)$.
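The definitions above can be made concrete in a few lines. The following is a minimal pure-Python sketch (illustrative only, not code from this paper): it computes $\kappa(u,v)$ as a unit-capacity maximum flow on the vertex-split digraph (Menger's Theorem), assembles $A_{\overline{\kappa}}(G)$, and estimates the spectral radius by power iteration, here for the $4$-cycle $C_4$, which is uniformly $2$-connected:

```python
from collections import deque
from itertools import combinations

def kappa(adj, s, t):
    """Maximum number of internally vertex-disjoint s-t paths (Menger's
    Theorem), via unit-capacity max flow on the vertex-split digraph:
    node 2v is v_in, node 2v+1 is v_out."""
    n = len(adj)
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)          # residual arc
    for v in range(n):                     # internal capacity 1, except at s, t
        add(2 * v, 2 * v + 1, n if v in (s, t) else 1)
    for u in range(n):
        for v in adj[u]:                   # each edge carries at most one path
            add(2 * u + 1, 2 * v, 1)
    src, snk, flow = 2 * s + 1, 2 * t, 0
    while True:                            # Edmonds-Karp: BFS augmenting paths
        par, q = {src: None}, deque([src])
        while q and snk not in par:
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in par:
                    par[b] = u
                    q.append(b)
        if snk not in par:
            return flow
        v = snk
        while par[v] is not None:          # push one unit along the path found
            u = par[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def avg_conn_matrix(adj):
    n = len(adj)
    denom = n * (n - 1) // 2               # the binomial coefficient C(n, 2)
    A = [[0.0] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        A[u][v] = A[v][u] = kappa(adj, u, v) / denom
    return A

def spectral_radius(A, iters=200):
    """Plain power iteration; adequate here since the Perron root dominates."""
    x = [1.0] * len(A)
    for _ in range(iters):
        y = [sum(row[j] * x[j] for j in range(len(A))) for row in A]
        m = max(abs(t) for t in y)
        x = [t / m for t in y]
    return m

# C_4 is uniformly 2-connected: every off-diagonal entry is 2 / C(4,2) = 1/3
# and every row sums to 2r/n = 1, which is then also the spectral radius.
c4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
A = avg_conn_matrix(c4)
```

For $C_4$ the matrix is $\frac13(J_4-I_4)$, matching the uniformly connected case just discussed.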
\begin{obs}\label{obs}
If $G$ is a connected graph with $n$ vertices, then for $v\in V(G)$, we have $\frac{2\kappa(G)}n \le T(v) \le \frac{2d(v)}n.$
\end{obs}
\begin{proof} For any pair of distinct vertices $u$ and $v$, we have $\kappa(G) \le \kappa(v,u) \le d(v)$, which implies
that $(n-1)\kappa(G) \le {n\choose 2} T(v) \le (n-1)d(v)$.
\end{proof}
Equalities in the bounds in Observation~\ref{obs} hold for complete graphs. For an $n$-vertex tree, equality in the lower bound holds, but for the center vertex $v$ in an $n$-vertex star, there is a big gap between $T(v)$ and $2d(v)/n$.
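To make the gap explicit (a short worked instance of the star case just mentioned): in the star $K_{1,n-1}$ with center $v$, every $\kappa(v,w)=1$, so

```latex
T(v)=\frac{1}{{n\choose 2}}\sum_{w\ne v}\kappa(v,w)=\frac{n-1}{{n\choose 2}}=\frac{2}{n}
=\frac{2\kappa(G)}{n}\;,\qquad \frac{2d(v)}{n}=\frac{2(n-1)}{n}\;,
```

so the lower bound is attained with equality while the upper bound is off by a factor of $n-1$.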
A connected graph $G$ is {\it uniformly $r$-connected}
if for any pair of non-adjacent vertices $u$ and $v$, we have $\kappa(u,v)=r$. Note that if $G$ is uniformly $r$-connected, then the average connectivity matrix has constant row sum equal to $2r/n$.
Since $A_{\overline{\kappa}}(G)$ is real and symmetric, all of its eigenvalues are real. Thus we can index the eigenvalues, say $\lambda_1(A_{\overline{\kappa}}(G)),\ldots,\lambda_n(A_{\overline{\kappa}}(G))$, in non-increasing order. For simplicity, we let $\lambda_i(A_{\overline{\kappa}}(G))=\lambda_i$, and
let $\rho(A_{\overline{\kappa}}(G))=\max\{|\lambda_i|:1\le i \le n\}$. If we let $\rho(A_{\overline{\kappa}}(G))=\rho$, then by Theorem~\ref{PF}, we have $\lambda_1=\rho$.
We prove some relationships between the spectral radius and some quantities of the graph.
\begin{thm} \label{basic}
For an $n$-vertex connected graph $G$, we have $\frac{2\overline{\kappa}(G)}n \le \rho\le T(G)$.
Equalities hold only when $G$ is uniformly connected.
\end{thm}
\begin{proof} By Theorem~\ref{PF}, there exists a unique positive eigenvector $\bold{x}$ corresponding to $\rho$. Let $x_i=\max_{j=1,\ldots,n} x_j$. Then we have
$$\rho x_i=\sum_{j=1}^n(A_{\overline{\kappa}}(G))_{ij}x_j\le x_i\sum_{j=1}^n(A_{\overline{\kappa}}(G))_{ij} = T(v_i)x_i \le T(G)x_i,$$
which implies that $\rho \le T(G)$. Equality holds only when $G$ is a uniformly connected graph.
For the lower bound, by Theorem~\ref{RR}, we have
$$ \frac{2 \overline{\kappa}(G)}n = \frac{{\bf 1}^TA_{\overline{\kappa}}(G){\bf 1}}{{\bf 1}^T{\bf 1}} \le \rho.$$
Equality holds only when $\bold{x}=c{\bf 1}$ for some constant $c$, i.e. $G$ is uniformly connected.
\end{proof}
A \emph{matching} in a graph $G$ is a set of disjoint edges in it.
The \emph{matching number} of a graph $G$, written $\alpha'(G)$,
is the maximum size of a matching in it.
Kim and O~\cite{KO} gave a relation between the matching number and the average connectivity of $G$.
\begin{thm}{\rm \cite{KO}}\label{KO}
For a connected graph $G$, we have
$\overline{\kappa}(G) \le 2\alpha'(G).$
\end{thm}
In Section 3, we prove an upper bound for the spectral radius of $A_{\overline{\kappa}}(G)$ in terms of the matching number and the number of vertices.
\begin{thm}\label{main}
For an $n$-vertex connected graph $G$, we have $$\rho(A_{\overline{\kappa}}(G)) \le \frac{4\alpha'(G)}n.$$
Equality holds only when $G=K_{n}$ and $n$ is odd.
\end{thm}
By Theorems~\ref{main} and \ref{basic}, we have $\frac{2\overline{\kappa}(G)}n \le \rho \le \frac{4\alpha'(G)}n$, which implies Theorem~\ref{KO}. To prove Theorem \ref{main}, we find the maximum spectral radius among $n$-vertex connected graphs with a given matching number. However, this approach does not yield the relation between $\overline{\kappa}(G)$ and the matching number $\alpha'(G)$ in connected bipartite graphs proved by Kim and O (see Theorem~2.3 in \cite{KO}). Nevertheless, in Section 4 we prove an upper bound for the spectral radius of $A_{\overline{\kappa}}(G)$ in $n$-vertex connected bipartite graphs in terms of the matching number.
\begin{thm}\label{bipartitethm}
For an $n$-vertex connected bipartite graph $G$, we have $$\rho(A_{\overline{\kappa}}(G)) \le \frac{(n-\alpha'(G))(4\alpha'(G) - 2)}{n(n-1)}.$$
Equality holds only when $G=K_{n/2,n/2}$.
\end{thm}
For undefined terms of graph theory, see West~\cite{W}. For basic properties of spectral graph theory, see Brouwer and Haemers~\cite{BH} or Godsil and Royle~\cite{GR}.\\
\section{Tools}
In this section, we provide the main tools to prove Theorem \ref{main} and Theorem \ref{bipartitethm}.
\begin{thm} {\rm (Rayleigh-Ritz Theorem; see~{\cite[Theorem~4.2.2]{HJ}})}\label{RR}
If $A$ is an $n\times n$ Hermitian matrix, then
$$\rho(A)=\max_{x \neq {\bf 0}}\frac{x^*Ax}{x^*x}.$$
\end{thm}
Theorem~\ref{RR} is used to prove Theorem~\ref{basic}.
The Perron-Frobenius Theorem implies that $\rho=\lambda_1$ and that the corresponding Perron vector is positive.
\begin{thm} {\rm (Perron-Frobenius Theorem; see~{\cite[Theorem~8.4.4]{HJ}})}\label{PF}
Let $A$ be an $n\times n$ matrix and suppose that $A$ is irreducible and non-negative. Then\\
(a) $\rho(A) > 0;$\\
(b) $\rho(A)$ is an eigenvalue of $A$; \\
(c) There is a positive vector $x$ such that $Ax=\rho(A)x$; and\\
(d) $\rho(A)$ is an algebraically (and hence geometrically) simple eigenvalue of $A$.
\end{thm}
To compare the spectral radii of two average connectivity matrices, we often need Theorem~\ref{PerronCor}.
\begin{thm}{\rm (See~\cite[Theorem~8.4.5]{HJ})}\label{PerronCor}
Let $A$ be an $n\times n$ non-negative and irreducible matrix. If $A \ge |B|$, then we have $\rho(A) \ge \rho(B)$.
\end{thm}
When two spectral radii are difficult to compare directly, we often compare instead the spectral radii of smaller matrices (called quotient matrices) derived from vertex partitions.
Let $A$ be a real symmetric matrix of order $n$, whose rows and columns are indexed by $P=\{1,\ldots,n\}$, and let $\{P_1,\ldots,P_q\}$ be a partition of $P$.
If for each $i \in \{1,\ldots,q\}$, $n_i=|P_i|$, then $n=n_1+\cdots +n_q$. Let $A_{i,j}$ be the submatrix of $A$ formed by rows in $P_i$ and columns in $P_j$.
The \emph{quotient matrix} of $A$ with respect to the partition $P$ is the $q\times q$ matrix $Q=(q_{i,j})$, where $q_{i,j}$ is the average row sum of $A_{i,j}$.
If the row sum of each matrix $A_{i,j}$ is constant, then we call the partition $P$ \emph{equitable}.
\begin{thm}{\rm \cite{YYSX}}\label{quotient}
If a partition $P$ corresponding to the quotient matrix $Q$ of $A$ is equitable, then we have $\rho(Q)=\rho(A)$.
\end{thm}
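A quick numerical illustration of Theorem~\ref{quotient} (a sketch with an arbitrarily chosen graph, not an example from this paper): for the adjacency matrix of the star $K_{1,3}$ with the partition $\{\text{center}\}\cup\{\text{leaves}\}$, every block has constant row sums, so the partition is equitable and $\rho(Q)=\rho(A)=\sqrt3$:

```python
def spectral_radius(M, iters=500, shift=1.0):
    """Power iteration on M + shift*I; the positive shift separates a
    +rho/-rho eigenvalue pair (as occurs for bipartite graphs) and is
    subtracted again at the end."""
    n = len(M)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) + shift * x[i]
             for i in range(n)]
        m = max(abs(t) for t in y)
        x = [t / m for t in y]
    return m - shift

# adjacency matrix of the star K_{1,3}; vertex 0 is the center
A = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]
parts = [[0], [1, 2, 3]]                 # {center} | {leaves}
# quotient matrix: average row sum of each block
Q = [[sum(A[i][j] for i in parts[a] for j in parts[b]) / len(parts[a])
      for b in range(2)] for a in range(2)]
# every block of A has constant row sums, so this partition is equitable
```

Here $\rho(A(K_{1,3}))=\sqrt3$ exactly, so the two computations agree to the iteration tolerance.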
\begin{thm}{\rm \cite[Theorem~6.1.1]{HJ} (Gershgorin circle theorem)}\label{Gctheo} Let $A$ be an $n\times n$ matrix, and let
\[R'_i(A) \equiv \sum_{j=1,j\ne i}^{n}|a_{ij}|, \quad 1 \le i \le n\]
denote the deleted absolute row sums of $A$. Then all the eigenvalues of $A$ are located in the union of $n$ discs
\[\bigcup_{i=1}^n\{z\in\mathbb{C}: |z - a_{ii}| \le R'_i(A) \}. \]
\end{thm}
Since the diagonal entries of $A_{\overline{\kappa}}(G)$ are all zero, the Gershgorin circle theorem implies that $\rho\left(A_{\overline{\kappa}}(G)\right)$ is at most the maximum row sum, which is exactly $T(G)$. Theorem \ref{Gctheo} is used to prove Theorem \ref{bipartitethm}.
\begin{thm}{\rm \cite{BT} (Berge-Tutte formula)}\label{TBform}
For an $n$-vertex graph $G$, we have
\[\alpha'(G)=\min\limits_{S\subseteq V(G)}\frac{1}{2}\left(n - o(G-S) + |S|\right),\]
where $o(H)$ is the number of components with an odd number of vertices in a graph $H$.
\end{thm}
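Theorem~\ref{TBform} is easy to check exhaustively on small graphs. The following brute-force sketch (illustrative only, with arbitrarily chosen test graphs) computes both sides of the formula:

```python
from itertools import combinations

def odd_components(n, edges, removed):
    """Number of components of G - removed with an odd number of vertices."""
    seen, odd = set(removed), 0
    for v in range(n):
        if v in seen:
            continue
        stack, size = [v], 0
        seen.add(v)
        while stack:                       # depth-first search of one component
            u = stack.pop()
            size += 1
            for a, b in edges:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in seen:
                        seen.add(y)
                        stack.append(y)
        odd += size % 2
    return odd

def matching_number(n, edges):
    """alpha'(G) by brute force: largest set of pairwise disjoint edges."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            ends = [v for e in sub for v in e]
            if len(ends) == len(set(ends)):
                return k
    return 0

def berge_tutte(n, edges):
    """min over all vertex subsets S of (n - o(G-S) + |S|) / 2."""
    return min((n - odd_components(n, edges, set(S)) + k) // 2
               for k in range(n + 1)
               for S in combinations(range(n), k))

star = [(0, 1), (0, 2), (0, 3)]   # K_{1,3}: alpha' = 1
path = [(0, 1), (1, 2), (2, 3)]   # P_4:     alpha' = 2
```

Integer division is harmless here: for every $S$ the quantity $n - o(G-S) + |S|$ is at least $2\alpha'(G)$, with equality at an optimal $S$.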
\section{Proof of Theorem~\ref{main}}
In this section, we prove Theorem \ref{main} by using the tools in the previous section.
For a vertex subset $S\subseteq V(G)$, let $\mathrm{def}(S)=o(G-S)-|S|$, and let $\mathrm{def}(G)=\max_{S\subseteq V(G)}\mathrm{def}(S)$. For two disjoint graphs $G_1$ and $G_2$, the \emph{join graph} $G_1 \vee G_2$ is the graph with $V(G_1\vee G_2)=V(G_1)\cup V(G_2)$ and $E(G_1\vee G_2)=E(G_1)\cup E(G_2) \cup \{uv: u \in V(G_1) \text{ and } v \in V(G_2)\}$.
Now we are ready to prove Theorem \ref{main}. The idea basically comes from the paper~\cite{O2}, and many researchers~\cite{KOSS,LM,Z,ZHW,ZL} used it to prove upper bounds for the spectral radius of many types of matrices in an $n$-vertex ($t$-)connected graph.
\quad\\
\textit{Proof of Theorem \ref{main}.} First, we handle the case when $G$ has a (near) perfect matching; in the remaining case, we use the Berge-Tutte Formula.
\begin{claim}\label{claim1}
For an $n$-vertex connected graph $G$,
we have $\rho \le \frac{2(n-1)}n.$
\end{claim}
\begin{proof}
By Theorem~\ref{PerronCor}, we have $\rho(A_{\overline{\kappa}}(G)) \le \rho(A_{\overline{\kappa}}(K_n))=\frac 2{n(n-1)}(n-1)^2=\frac{2(n-1)}n$.
\end{proof}
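The extreme case in Claim~\ref{claim1} is easy to verify numerically (an illustrative sketch, not code from the paper): in $K_n$ every pair has $\kappa(u,v)=n-1$, so $A_{\overline{\kappa}}(K_n)=\frac{n-1}{{n\choose 2}}(J_n-I_n)$, whose spectral radius is $\frac{(n-1)^2}{{n\choose 2}}=\frac{2(n-1)}n$:

```python
def rho_complete(n, iters=100):
    """Spectral radius of the average connectivity matrix of K_n by power
    iteration; the all-ones vector is already the Perron vector here,
    since all row sums are equal."""
    denom = n * (n - 1) // 2
    A = [[0.0 if i == j else (n - 1) / denom for j in range(n)]
         for i in range(n)]
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(y)
        x = [t / m for t in y]
    return m

# rho_complete(n) should equal 2*(n-1)/n for every n >= 2
```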
If $G$ has a perfect matching $(\alpha'(G) = \frac{n}{2})$ or a near perfect matching $(\alpha'(G) = \frac{n-1}{2})$,
then by Claim~\ref{claim1}, we have $$\rho \le \frac {4\alpha'(G)}n.$$
Now we may assume that $\alpha'(G) = \frac{n-t}2$ for some $t \ge 2$. Among $n$-vertex connected graphs $H$ with $\alpha'(H) = \frac{n-t}2$, let $G$ be one maximizing the spectral radius, so that $\rho(A_{\overline{\kappa}}(G)) \ge \rho(A_{\overline{\kappa}}(H))$ for every such $H$.
By Theorem~\ref{TBform}, there exists a vertex subset $S$ attaining the minimum, i.e., with $\mathrm{def}(S)=\mathrm{def}(G)=n-2\alpha'(G)=t$. Let $o(G-S)=q$ and $|S|=s$, so that $t = q-s \ge 2$.
Now, we want to show that $\rho\le\frac {2(n-t)}n$.
\begin{claim}\label{mainclaim}
If $G$ is an $n$-vertex connected graph with $\alpha'(G)=\frac{n-t}2$, where $t \ge 2$, then $\rho(G) \le \rho(K_{s} \vee \left(K_{n-s-q+1}\cup (q-1)K_1 \right)).$
\end{claim}
\begin{proof}
We first show that $G-S$ has no even components.
If $G-S$ had at least two even components, then the graph $G'$ obtained from $G$ by joining two even components of $G-S$ would have a larger spectral radius, contradicting the choice of $G$.
If $G-S$ had exactly one even component, then the graph $G''$ obtained from $G$ by joining that even component to an odd component would likewise have a larger spectral radius, again a contradiction.
Thus all components of $G-S$ are odd, say $G_1,\ldots, G_q$, and for each $i \in \{1,\ldots, q\}$, let $|V(G_i)|=n_i$, where $n_1 \ge \ldots \ge n_q$.
Note that $n=s+n_1+\cdots+n_q$.
By Theorem~\ref{PerronCor}, we have
$$\rho(A_{\overline{\kappa}}(G)) \le \rho\left(A_{\overline{\kappa}}(K_s \vee (K_{n_1}\cup \ldots \cup K_{n_q})\right),$$
and note that $\alpha'(G)=\alpha'\left(K_s \vee (K_{n_1}\cup \ldots \cup K_{n_q})\right)$.
Now, if $n_q \ge 3$, then we show that
\[ \rho\left(A_{\overline{\kappa}}(K_s \vee (K_{n_1}\cup \ldots \cup K_{n_q})\right) \le \rho\left(A_{\overline{\kappa}}(K_s \vee (K_{n_1+2}\cup \ldots \cup K_{n_q-2})\right).\]
Let $Q$ be the quotient matrix corresponding to the vertex partition $P=\{V(K_s),V(K_{n_1}),\ldots,V(K_{n_q})\}$ of $K_s \vee \left(K_{n_1}\cup \ldots \cup K_{n_q}\right)$. Then for $1 \le i, j \le q+1$, we have
\begin{align*}
\begin{pmatrix}n\\2\end{pmatrix}Q_{11}&=(n-1)(s-1)\\
\begin{pmatrix}n\\2\end{pmatrix}Q_{ii}&=(s+n_{i-1} -1)(n_{i-1} -1)\quad (i =2,\ldots, q+1)\\
\begin{pmatrix}n\\2\end{pmatrix}Q_{ij}&=sn_{j-1} \quad (2 \le i \neq j \le q+1)\\
\begin{pmatrix}n\\2\end{pmatrix}Q_{1i}&=(s+n_{i-1} -1)n_{i-1}\quad (i\neq1)\\
\begin{pmatrix}n\\2\end{pmatrix}Q_{i1} &= (s+n_{i-1} -1)s\quad (i\neq1).
\end{align*}
Let $x=(y,x_1,\ldots,x_q)$ be the Perron vector corresponding to $\rho(Q)$, and also note that $x_1 \ge \ldots \ge x_q$ since
$${n\choose 2}\rho x_i = (s + n_i -1)sy + sn_1x_1 + \cdots +sn_{i-1}x_{i-1} + (s + n_i -1)(n_i - 1)x_i + sn_{i+1}x_{i+1} + \cdots + sn_qx_q,$$
which implies
$${n\choose 2}\rho( x_i - \rho x_j)\big/\rho = {n\choose 2}(\rho x_i - \rho x_j)/\rho$$
$${n\choose 2}\rho (x_i - x_j) = (n_i - n_j)sy + (n_i - 1)^2 x_i - (n_j-1)^2 x_j - s(x_i - x_j),$$
that is, for $i \le j$,
$$\left({n\choose 2}\rho + s\right)(x_i - x_j) = (n_i - n_j)sy + (n_i - 1)^2 x_i - (n_j-1)^2 x_j \ge (n_j - 1)^2(x_i - x_j),$$
where the inequality uses $n_i \ge n_j$, $y>0$, and $x_i>0$.
Since ${n\choose 2}\rho \ge {n\choose 2}Q_{22}=(s+n_1-1)(n_1-1) \ge (n_j-1)^2$, we have ${n\choose 2}\rho + s > (n_j-1)^2$, and hence $x_i \ge x_j$ for $i \le j$.
Now we compare the spectral radii of $A_{\overline{\kappa}}\left(K_s \vee (K_{n_1+2}\cup \ldots \cup K_{n_q-2})\right)$ and $A_{\overline{\kappa}}\left(K_s \vee (K_{n_1}\cup \ldots \cup K_{n_q})\right)$. If $Q'$ is the quotient matrix corresponding to the vertex partition $P'=\{V(K_s),V(K_{n_1+2}),$ $\ldots,V(K_{n_q-2})\}$ of $K_s \vee \left(K_{n_1+2}\cup \ldots \cup K_{n_q-2}\right)$,
then we have
$$x^T(Q'-Q)x=(4s+4n_1+2)y x_1+(-4s-4n_q+6)y x_q+(4n_1)x_1^2+(-4n_q+8)x_q^2+2s(x_1-x_q)
\sum_{i=1}^{q} x_i$$
$$=2s(x_1-x_q)(2y+\sum_{i=1}^{q} x_i)+4y(n_1 x_1-n_q x_q)+4(n_1x_1^2-n_q x_q^2)+2y x_1+6y x_q+8x_q^2>0
$$
since $x_1 \ge \ldots \ge x_q > 0$. Therefore
\[\rho\left(A_{\overline{\kappa}}(K_s \vee (K_{n_1}\cup \ldots \cup K_{n_q}))\right) \le \rho\left(A_{\overline{\kappa}}(K_s \vee (K_{n_1+2}\cup \ldots \cup K_{n_q-2}))\right).\]
Essentially, moving $2$ vertices from $K_{n_q}$ to $K_{n_1}$ increases the spectral radius. By the same comparison, moving $2$ vertices from any $K_{n_i}$, $i = 2, \ldots, q$, to $K_{n_1}$ also increases the spectral radius. Thus, we can keep transferring vertices, two at a time, to $K_{n_1}$ until $n_2 = \ldots = n_q = 1$ to obtain $K_s\vee(K_{n-s-q+1}\cup (q-1)K_1)$, and clearly
\[\rho\left(A_{\overline{\kappa}}(K_s \vee (K_{n_1+2}\cup \ldots \cup K_{n_q-2}))\right) \le \rho\left(A_{\overline{\kappa}}(K_s\vee(K_{n-s-q+1}\cup (q-1)K_1))\right),\]
which completes the proof of Claim \ref{mainclaim}.
\end{proof}
The next step is to handle the case $n_1 = n-s-q+1$, $n_2 = \ldots =n_q = 1$. Depending on the size of $n$ relative to $t$, we consider two cases, since there are two different types of extremal graphs. We also rewrite the extremal graphs in terms of $t$: since $q=s+t$, the graph $K_{s} \vee \left(K_{n-s-q+1}\cup (q-1)K_1 \right)$ becomes $K_s\vee({K_{n-2s-t+1}\cup \overline{K_{t+s-1}}})$.
\begin{claim}\label{extremalclaim1}
If $n \ge 3t + 2$, then $\rho\left(A_{\overline{\kappa}}(K_s\vee({K_{n-2s-t+1}\cup \overline{K_{t+s-1}}}))\right) \le \rho\left(A_{\overline{\kappa}}(G_1)\right)$, where $G_1 = K_1\vee({K_{n-t-1}\cup \overline{K_{t}}}).$
\end{claim}
\begin{proof}
The quotient matrix $Q_0$ corresponding to the vertex partition $\{V(K_s),V(K_{n-2s-t+1}),\allowbreak V(\overline{K_{t+s-1}})\}$ is
\[\begin{pmatrix}n\\2\end{pmatrix}Q_0 = \begin{pmatrix}
(n-1)(s-1) & (n-s-t)(n-2s-t+1)&s(s+t-1)\\
(n-s-t)s & (n-s-t)(n-2s-t)&s(s+t-1)\\
s^2&s(n-2s-t+1)&s(s+t-2)
\end{pmatrix}\]
and the quotient matrix $Q_1$ corresponding to the vertex partition $\{V(K_1),V(K_{n-t-1}),\allowbreak V(\overline{K_{t}})\}$ is
\[\begin{pmatrix}n\\2\end{pmatrix}Q_1 = \begin{pmatrix}
0&(n-t-1)^2&t\\
n-t-1&(n-t-1)(n-t-2)&t\\
1&n-t-1&t-1
\end{pmatrix}.\]
\quad
\noindent\textit{Case 1: }If $s \le \frac{n-t}{2} - 1$, let $f(\lambda)$ and $f_1(\lambda)$ be the characteristic polynomials of $\begin{pmatrix}n\\2\end{pmatrix}Q_0$ and $\begin{pmatrix}n\\2\end{pmatrix}Q_1$, respectively, normalized (without loss of generality) so that the coefficient of $\lambda^3$ in each is positive.
If $\theta_1$ is the largest root of $f_1(\lambda)$, then since $K_{n-t}$ is a subgraph of $G_1$,
\[\theta_1 \ge \rho\left({n\choose 2}A_{\overline{\kappa}}(K_{n-t})\right) = (n-t-1)^2.\]
As a result, if $f((n-t-1)^2) > 0$, which implies $(n-t-1)^2$ is larger than the largest root of $f$, then $\rho(A_{\overline{\kappa}}(G_1))\ge\rho(A_{\overline{\kappa}}(G))$. Let $g(\lambda) = f(\lambda) - f_1(\lambda)$. We shall prove that $f((n-t-1)^2) > 0$ by proving that $g((n-t-1)^2) + f_1((n-t-1)^2) > 0$.
Given below is the explicit formula for $g(\lambda)$:
\begin{align*}
&g(\lambda) =\lambda^2(s - 1)(2n - 4t - 3s) +\lambda(s - 1)(2t^3 - 4t^2n + 9t^2s + 5t^2 + 2tn^2 - 10tns - 6tn + 12ts^2\\& - 2ts + 2t+ 2n^2s + n^2 - 6ns^2+ ns + 5s^3 - 5s^2 + s - 1) -(s - 1)(t^4s + t^4 - 2t^3ns - 3t^3n\\& + 6t^3s^2 + 2t^3s + 3t^3 + t^2n^2s + 3t^2n^2 - 8t^2ns^2 - 5t^2ns - 5t^2n + 13t^2s^3 - 6t^2s^2 + 3t^2s + t^2 - tn^3\\& + 2tn^2s^2 + 4tn^2s + tn^2 - 10tns^3 + 4tns^2 - 2tns + 2tn + 12ts^4 - 16ts^3 + 6ts^2 - 2ts - 2t - n^3s +\\& n^3 + n^2s^3 + n^2s^2 - n^2s - 3n^2 - 4ns^4 + 6ns^3 - 3ns^2 + 3ns + 3n + 4s^5 - 9s^4 + 6s^3 - 2s^2 - s - 1).
\end{align*}
If $s = 1$, then $g(\lambda) = 0$ and $G\cong G_1$, so the claim is trivial; hence we may assume $s\ge 2$. Let $\tilde{g}(\lambda) = \frac{g(\lambda)}{s - 1}$; then $g(\lambda) \ge \tilde{g}(\lambda)$. Consider $\tilde{g}((n-t-1)^2)$ as a function $h(s)$ of $s$. We prove that $h(s) \ge h\left(\frac{n-t}{2} - 1\right)$ for all $s \le \frac{n-t}{2} - 1$. Indeed
\begin{align*}
&h(s) - h\left(\frac{n-t}{2}-1\right) = \frac{11n^5}{8} - n^4s - \frac{43n^4t}{8} - \frac{133n^4}{16} - 6n^3s^2 - 2n^3st + 10n^3s + \frac{29n^3t^2}{4}\\& + \frac{63n^3t}{2} + \frac{111n^3}{8} + 4n^2s^3 + 22n^2s^2t + 6n^2s^2 + 12n^2st^2 - 20n^2st - 16n^2s - \frac{13n^2t^3}{4} - \frac{343n^2t^2}{8}\\& - \frac{375n^2t}{8} - \frac{13n^2}{2} + 4ns^4 - 16ns^3 - 22ns^2t^2 - 30ns^2t + 7ns^2 - 14nst^3 + 8nst^2 + 32nst + 8ns - \\& \frac{5nt^4}{8} + \frac{49nt^3}{2} + \frac{421nt^2}{8} + 12nt + 8n - 4s^5 - 12s^4t + 9s^4 - 8s^3t^2 + 26s^3t - s^3 + 6s^2t^3 + 25s^2t^2\\& - 4s^2t - 3s^2 + 5st^4 + 2st^3 - 15st^2 - 10st - s + \frac{5t^5}{8} - \frac{77t^4}{16} - \frac{157t^3}{8} - \frac{19t^2}{2} + 5t - 12
\end{align*}
\begin{align*}
=&\left(\frac{n-t}{2} - s - 1\right)\left(\frac{11n^4}4 + \frac{7n^3s}2 - 8n^3t - \frac{89n^3}8 - 5n^2s^2 - \frac{33n^2st}2 + \frac{19n^2s}4 + \frac{13n^2t^2}2\right.\\
&+ \frac{287n^2t}8 + \frac{11n^2}2 - 2ns^3 + 6ns^2t + \frac{23ns^2}2 + \frac{41nst^2}2 + \frac{7nst}2 - \frac{23ns}2 - \frac{295nt^2}8 - \frac{33nt}2\\
&- 2n + 4s^4 + 10s^3t - 13s^3 + 3s^2t^2 - \frac{59s^2t}2 + 14s^2 - \frac{15st^3}2 - \frac{53st^2}4 + \frac{53st}2 - 11s - \frac{5t^4}4\\&\left.\vphantom{\frac12} + \frac{97t^3}8 + 15t^2 - 11t + 12\right)
\end{align*}
\begin{multline*}
=\left(\frac{n-t}{2} - s - 1\right)A.
\end{multline*}
Now we prove that $A > 0$. Abusing notation slightly, we multiply $A$ by $8$ to make all coefficients integers, which makes for a cleaner presentation, and still denote the result by $A$.
\begin{align*}
A =& 22n^4 + 28n^3s - 64n^3t - 89n^3 - 40n^2s^2 - 132n^2st + 38n^2s + 52n^2t^2 + 287n^2t + 44n^2\\& - 16ns^3 + 48ns^2t + 92ns^2 + 164nst^2 + 28nst - 92ns - 295nt^2 - 132nt - 16n + 32s^4\\& + 80s^3t - 104s^3 + 24s^2t^2 - 236s^2t + 112s^2 - 60st^3 - 106st^2 + 212st - 88s - 10t^4\\& + 97t^3 + 120t^2 - 88t + 96.
\end{align*}
\begin{align*}
&\frac{\partial A}{\partial n} = 88n^3 + 84n^2s - 192n^2t - 267n^2 - 80ns^2 - 264nst + 76ns + 104nt^2 + 574nt + 88n - 16s^3\\
& + 48s^2t + 92s^2 + 164st^2 + 28st - 92s - 295t^2 - 132t - 16,\\
&\frac{\partial^2 A}{\partial n^2} = 264n^2 + 168ns - 384nt - 534n - 80s^2 - 264st + 76s + 104t^2 + 574t + 88,\\
&\frac{\partial^3 A}{\partial n^3} = 528n + 168s - 384t - 534.
\end{align*}
From $n\ge3t+2$ and $n\ge t + 2s + 2$, we have $n\ge 2t + s + 2$ and so
\begin{align*}
&\frac{\partial^3 A}{\partial n^3} > 0,\\
&\frac{\partial^2 A}{\partial n^2} \ge \frac{\partial^2 A}{\partial n^2}(2t + s + 2) = 352s^2 + 744st + 934s + 392t^2 + 850t + 76 > 0,\\
&\frac{\partial A}{\partial n} \ge \frac{\partial A}{\partial n}(2t + s + 2) = 76s^3 + 296s^2t + 605s^2 + 364st^2 + 1174st + 472s + 144t^3 + 569t^2 +\\
& 400t - 204 > 0,\\
&A\ge A(2t + s + 1) = 26s^4 + 84s^3t + 89s^3 + 128s^2t^2 + 473s^2t + 570s^2 + 108st^3 + 615st^2 + 932st\\
& - 100s + 38t^4 + 231t^3 + 386t^2 - 124t - 120 > 0.
\end{align*}
Therefore, $A > 0$, and consequently
\begin{align*}
&\tilde{g}((n-t-1)^2) + f_1((n-t-1)^2) \ge h\left(\frac{n-t}{2}-1\right) + f_1((n-t-1)^2) = \frac{5n^5}8 - \frac{37n^4t}8 + \frac{21n^4}{16}\\& + \frac{51n^3t^2}4 - \frac{7n^3t}2 - \frac{39n^3}8 - \frac{67n^2t^3}4 + \frac{7n^2t^2}8 + \frac{135n^2t}8 + \frac{3n^2}2 + \frac{85nt^4}8 + \frac{7nt^3}2 - \frac{157nt^2}8 + 2nt\\& - 7n - \frac{21t^5}8 - \frac{35t^4}{16} + \frac{61t^3}8 + \frac{t^2}2 - 7t + 12.
\end{align*}
Consider the right-hand side of the above inequality as a function $RHS(n)$ of $n$. Then
\begin{align*}
&RHS'(n) = \frac{25n^4}8 - \frac{37n^3t}2 + \frac{21n^3}4 + \frac{153n^2t^2}4 - \frac{21n^2t}2 - \frac{117n^2}8 - \frac{67nt^3}2 + \frac{7nt^2}4 + \frac{135nt}4 + 3n\\& + \frac{85t^4}8 + \frac{7t^3}2 - \frac{157t^2}8 + 2t - 7,\\
&RHS''(n) = \frac{25n^3}2 - \frac{111n^2t}2 + \frac{63n^2}4 + \frac{153nt^2}2 - 21nt - \frac{117n}4 - \frac{67t^3}2 + \frac{7t^2}4 + \frac{135t}4 + 3,\\
&RHS^{(3)}(n) = \frac{75n^2}2 - 111nt + \frac{63n}2 + \frac{153t^2}2 - 21t - \frac{117}4,\\
&RHS^{(4)}(n) = 75n - 111t + \frac{63}2 > 0.
\end{align*}
Therefore, since $n\ge 3t + 2$
\begin{align*}
&RHS^{(3)}(n)\ge RHS^{(3)}(3t + 2) = 81t^2 + \frac{603t}2 + \frac{735}4> 0,\\
&RHS''(n)\ge RHS''(3t + 2) = 34t^3 + \frac{485t^2}2 + 321t + \frac{215}2> 0,\\
&RHS'(n)\ge RHS'(3t + 2) = 8t^4 + 124t^3 + 273t^2 + 202t + \frac{65}2> 0,\\
&RHS(n)\ge RHS(3t + 2) = 44t^4 + 149t^3 + 189t^2 + 60t + 6 > 0.
\end{align*}
To sum up, $0 < \tilde{g}((n-t-1)^2) + f_1((n-t-1)^2) \le g((n-t-1)^2) + f_1((n-t-1)^2) = f((n-t-1)^2)$.
\quad
\noindent\textit{Case 2: }If $s = \frac{n-t}{2}$, then the largest root of $f(\lambda)$ can be computed explicitly as
\begin{align*}
&\rho(f(\lambda)) = - \frac{nt}4 + \frac{3n^2}8 - \frac{t^2}8 + \frac12
+ \frac{t}2 - n + \\&\frac{1}{8}\sqrt{5n^4 - 12n^3t - 8n^3 + 6n^2t^2 + 16n^2t + 24n^2 + 4nt^3 - 8nt^2 - 16nt - 32n - 3t^4 + 8t^2 + 16}.
\end{align*}
We can prove that $\rho(f(\lambda)) \le (n-t-1)^2$ directly by considering
\begin{align*}
& 8(n-t-1)^2 - 8\rho(f(\lambda)) = 12t - 8n - 14nt + 5n^2 + 9t^2 + 4 - \\&\sqrt{5n^4 - 12n^3t - 8n^3 + 6n^2t^2 + 16n^2t + 24n^2 + 4nt^3 - 8nt^2 - 16nt - 32n - 3t^4 + 8t^2 + 16}.
\end{align*}
Since $12t - 8n - 14nt + 5n^2 + 9t^2 + 4 > 0$, if we can prove that $(12t - 8n - 14nt + 5n^2 + 9t^2 + 4)^2 - (5n^4 - 12n^3t - 8n^3 + 6n^2t^2 + 16n^2t + 24n^2 + 4nt^3 - 8nt^2 - 16nt - 32n - 3t^4 + 8t^2 + 16) \ge 0$, we will have proved that $(n-t-1)^2 \ge \rho(A_{\overline{\kappa}}(G))$. Indeed
\begin{align*}
&(12t - 8n - 14nt + 5n^2 + 9t^2 + 4)^2 - (5n^4 - 12n^3t - 8n^3 + 6n^2t^2 + 16n^2t + 24n^2 + 4nt^3 - 8nt^2\\
&- 16nt - 32n - 3t^4 + 8t^2 + 16)\\
=&20n^4 - 128n^3t - 72n^3 + 280n^2t^2 + 328n^2t + 80n^2 - 256nt^3 - 472nt^2 - 288nt - 32n + 84t^4 + 216t^3 \\&+ 208t^2 + 96t.
\end{align*}
Consider the right-hand side of the above equality as a function $RHS(n)$ of $n$. Then
\begin{align*}
&RHS'(n) = 80n^3 - 384n^2t - 216n^2 + 560nt^2 + 656nt + 160n - 256t^3 - 472t^2 - 288t - 32,\\
&RHS''(n) = 240n^2 - 768nt - 432n + 560t^2 + 656t + 160,\\
&RHS^{(3)}(n) = 480n - 768t - 432 > 0.
\end{align*}
\begin{align*}
\Rightarrow&RHS''(n)\ge RHS''(3t + 2) = 416t^2 + 704t + 256> 0,\\
&RHS'(n)\ge RHS'(3t + 2) = 128t^3 + 384t^2 + 256t + 64> 0,\\
&RHS(n)\ge RHS(3t + 2) = 64t^3 > 0.
\end{align*}
Hence, for any value of $s$, if $n\ge 3t + 2$, then $\rho(A_{\overline{\kappa}}(G))\le\frac{(n-t-1)^2}{{n\choose 2}}\le\rho(A_{\overline{\kappa}}(G_1))$.
\end{proof}
\begin{claim}\label{extremalclaim2}
If $n \le 3t$, then $\rho(A_{\overline{\kappa}}(K_s\vee({K_{n-2s-t+1}\cup \overline{K_{t+s-1}}}))) \le \rho(A_{\overline{\kappa}}(G_2))$, where $G_2 = K_{\frac{n-t}{2}}\vee{\overline{K_{\frac{n+t}{2}}}}.$
\end{claim}
\begin{proof}
The quotient matrix $Q_2$ corresponding to the vertex partition $\left\{V(K_{\frac{n-t}{2}}),V(\overline{K_{\frac{n+t}{2}}})\right\}$ is
\[\begin{pmatrix}n\\2\end{pmatrix}Q_2 = \begin{pmatrix}\frac{(n-1)(n-t-2)}{2}&\frac{n^2-t^2}{4}\\\frac{(n-t)^2}{4}&\frac{(n-t)(n+t-2)}{4}\end{pmatrix}.\]
Let $f(\lambda)$ be the characteristic polynomial of $\begin{pmatrix}n\\2\end{pmatrix}Q_0$ defined in Claim \ref{extremalclaim1} and $f_2(\lambda)$ be the characteristic polynomial of $\begin{pmatrix}n\\2\end{pmatrix}Q_2$, and let $\theta_2$ be the largest root of $f_2(\lambda)$, so that $f_2(\theta_2)=0$. To show that $\theta_2$ is larger than the largest root of $f(\lambda)$, it suffices to show that $f(\theta_2) > 0$.
Therefore, let
\[g_2(\lambda) = f(\lambda) - f_2(\lambda) + \frac{1}{4}(5t^2 - 6tn + 16ts - 4t + n^2 - 8ns + 4n + 12s^2 - 12s)f_2(\lambda).\]
We can prove that $f(\theta_2) > 0$ by proving $g_2(\theta_2) > 0$. Given below is the explicit formula for $g_2(\lambda)$, which is a linear function with respect to $\lambda$:
\begin{align*}
g_2(\lambda) = &\frac{1}{8}\lambda(n-t-2s)(8tn^2 - 2t^3 - 4t^2n - 20t^2s + 11t^2 + 8tns - 10tn - 38ts^2 + 48ts - 6t - 2n^3\\& + 8n^2s - 5n^2 + 14ns^2 - 32ns + 14n - 20s^3 + 40s^2 - 12s - 4)+\frac{1}{64}(n - t - 2s)(n^5 - 5t^5 + t^4n\\
& - 6t^4s + 14t^4 + 10t^3n^2 - 12t^3n + 64t^3s^2 + 16t^3s - 8t^3 - 2t^2n^3 + 12t^2n^2s - 56t^2n^2 - 64t^2ns^2\\
& - 72t^2ns + 88t^2n + 256t^2s^3 - 264t^2s^2 + 56t^2s - 40t^2 - 5tn^4 + 60tn^3 - 56tn^2 - 128tns^3\\
& + 48tns^2 + 160tns - 16tn + 320ts^4 - 688ts^3 + 464ts^2 - 176ts + 32t - 6n^4s - 6n^4 + 56n^3s\\
& - 24n^3 + 56n^2s^2 - 184n^2s + 56n^2 - 64ns^4 + 112ns^3 - 48ns^2 + 112ns - 32n + 128s^5 - 416s^4\\
& + 480s^3 - 256s^2 + 32s).
\end{align*}
Since $n-t\ge 2s$, we analyze the sign of
\begin{align*}
\tilde{g_2}(\lambda) = &\frac{1}{8}\lambda(8tn^2 - 2t^3 - 4t^2n - 20t^2s + 11t^2 + 8tns - 10tn - 38ts^2 + 48ts - 6t - 2n^3 + 8n^2s - 5n^2\\& + 14ns^2 - 32ns + 14n - 20s^3 + 40s^2 - 12s - 4) +\frac{1}{64}(n^5 - 5t^5 + t^4n - 6t^4s + 14t^4 + 10t^3n^2\\& - 12t^3n + 64t^3s^2 + 16t^3s - 8t^3 - 2t^2n^3 + 12t^2n^2s - 56t^2n^2 - 64t^2ns^2 - 72t^2ns + 88t^2n\\& + 256t^2s^3 - 264t^2s^2 + 56t^2s - 40t^2 - 5tn^4 + 60tn^3 - 56tn^2 - 128tns^3 + 48tns^2 + 160tns\\& - 16tn + 320ts^4 - 688ts^3 + 464ts^2 - 176ts + 32t - 6n^4s - 6n^4 + 56n^3s - 24n^3 + 56n^2s^2\\& - 184n^2s + 56n^2 - 64ns^4 + 112ns^3 - 48ns^2 + 112ns - 32n + 128s^5 - 416s^4 + 480s^3 - 256s^2\\& + 32s).
\end{align*}
Let $a = \frac{n-t}{2}, b = \frac{n+t}{2}$, then
\begin{align*}
\tilde{g_2}(\lambda) = &\lambda(- 3a^3 - 4a^2b - 5a^2s + 4a^2 + 3ab^2 + 14abs - 8ab + 13as^2 - 20as + 5a - b^2s - b^2 - 6bs^2 + 4bs \\&+ 2b - 5s^3 + 10s^2 - 3s - 1) + (- 3a^4 + 3a^3b^2 - 7a^3b - 4a^3s^2 - a^3s + 4a^3 - 2a^2b^3 - 3a^2b^2s + 5a^2b^2\\& + 8a^2bs^2 + 9a^2bs - 4a^2b + 12a^2s^3 - 8a^2s^2 - 9a^2s + a^2 + 2ab^3 - 4ab^2s^2 + 6ab^2s - 6ab^2 - 16abs^3\\& + 20abs^2 - 15abs + 6ab - 12as^4 + 25as^3 - 16as^2 + 9as - 2a + 4b^2s^3 - 5b^2s^2 + b^2s + 8bs^4 - 18bs^3\\& + 13bs^2 - 2bs + 4s^5 - 13s^4 + 15s^3 - 8s^2 + s).
\end{align*}
$\theta_2$ can be computed from the quotient matrix
\[\begin{pmatrix}n\\2\end{pmatrix}Q_{2} = \begin{pmatrix}
(a+b-1)(a-1) & ab\\
a^2 & a(b-1)
\end{pmatrix}\]
to be
\[\theta_2 = \frac{a^2}{2} + \frac{1}{2} + ab - \frac{b}{2} - \frac{3a}{2} + \frac12\sqrt{a^4 + 4a^3b - 2a^3 - 2a^2b + 3a^2 + 2ab - 2a + b^2 - 2b + 1}.\]
Since $n\le 3t$, we have $b\ge 2a$. That implies
\[\theta_2\ge\theta_2' = \frac{a^2}{2} + \frac{1}{2} + ab - \frac{b}{2} - \frac{3a}{2} + \frac12(3a^2 - a).\]
Indeed
\begin{align*}
&(a^4 + 4a^3b - 2a^3 - 2a^2b + 3a^2 + 2ab - 2a + b^2 - 2b + 1) - (3a^2 - a)^2\\
=&- 8a^4 + 4a^3b + 4a^3 - 2a^2b + 2a^2 + 2ab - 2a + b^2 - 2b + 1:=h(b).
\end{align*}
Since $a,b\ge 1$, $h'(b) = 4a^3 - 2a^2 + 2a + 2b - 2 > 0$, therefore $h(b) \ge h(2a) = 10a^2 - 6a + 1 > 0$. We can prove that $\tilde{g_2}(\theta_2) \ge \tilde{g_2}(\theta_2') > 0$. First, the coefficient $c_1$ of $\lambda$ in $\tilde{g_2}(\lambda)$ is positive
\begin{align*}
&c_1 = - 3a^3 - 4a^2b - 5a^2s + 4a^2 + 3ab^2 + 14abs - 8ab + 13as^2 - 20as + 5a - b^2s - b^2 - 6bs^2 + 4bs + 2b\\& - 5s^3 + 10s^2 - 3s - 1,\\
&\frac{\partial c_1}{\partial b} = 4s - 2b - 8a + 6ab + 14as - 2bs - 4a^2 - 6s^2 + 2,\\
&\frac{\partial^2c_1}{\partial b^2} = 6a - 2s - 2 > 0
\end{align*}
therefore
\begin{align*}
&\frac{\partial c_1}{\partial b}\ge\frac{\partial c_1}{\partial b}(2a) = 8a^2 + 10as - 12a - 6s^2 + 4s + 2 = (6a^2 - 6s^2) + (2a^2 + 10as - 12a) + 4s + 2 \ge 0,\\
&c_1\ge c_1(2a) = a^3 + 19a^2s - 16a^2 + as^2 - 12as + 9a - 5s^3 + 10s^2 - 3s - 1.
\end{align*}
Consider the right-hand side of the above inequality as a function of $a$; since $a\ge s\ge 1$,
\begin{align*}
&RHS'(a) = 3a^2 + 38as - 32a + s^2 - 12s + 9,\\
&RHS''(a) = 6a + 38s - 32 > 0\\
\Rightarrow &RHS'(a) \ge RHS'(s) = 42s^2 - 44s + 9 > 0,\\
\Rightarrow &RHS(a) \ge RHS(s) = 16s^3 - 18s^2 + 6s - 1 > 0.
\end{align*}
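As another informal check, separate from the argument, one can verify numerically that the coefficient $c_1$ is positive on a grid of admissible $(a,b,s)$ with $a\ge s\ge 1$ and $b\ge 2a$, and that the displayed value of $c_1$ at $b=2a$ is consistent (Python sketch with exact integers):

```python
def c1(a, b, s):
    # coefficient of lambda in g2~(lambda), as displayed above
    return (-3*a**3 - 4*a**2*b - 5*a**2*s + 4*a**2 + 3*a*b**2 + 14*a*b*s
            - 8*a*b + 13*a*s**2 - 20*a*s + 5*a - b**2*s - b**2 - 6*b*s**2
            + 4*b*s + 2*b - 5*s**3 + 10*s**2 - 3*s - 1)

def c1_at_2a(a, s):
    # claimed closed form of c1 at b = 2a
    return (a**3 + 19*a**2*s - 16*a**2 + a*s**2 - 12*a*s + 9*a
            - 5*s**3 + 10*s**2 - 3*s - 1)

for s in range(1, 15):
    # value at a = s claimed in the proof
    assert c1_at_2a(s, s) == 16*s**3 - 18*s**2 + 6*s - 1
    for a in range(s, s + 15):
        # the substitution b = 2a matches the displayed closed form
        assert c1(a, 2*a, s) == c1_at_2a(a, s)
        for b in range(2*a, 2*a + 15):
            assert c1(a, b, s) > 0
```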
It remains to show that $\tilde{g_2}(\theta_2') > 0$.
\begin{align*}
\tilde{g_2}(\theta_2') = &- 6a^5 - 11a^4b - 10a^4s + 11a^4 + 5a^3b^2 + 23a^3bs - \frac{19a^3b}2 + 22a^3s^2 - 31a^3s + \frac{9a^3}2 + a^2b^3 + 9a^2b^2s\\& - 9a^2b^2 + 9a^2bs^2 - \frac{57a^2bs}2 + 17a^2b + 2a^2s^3 - 14a^2s^2 + \frac{45a^2s}{2} - 9a^2 - ab^3s - \frac{ab^3}2 - 10ab^2s^2\\& + 5ab^2s + \frac{7ab^2}2 - 21abs^3 + \frac{71abs^2}2 - 9abs - \frac{11ab}2 - 12as^4 + 35as^3 - \frac{59as^2}{2} + 5as + \frac{5a}2 + \frac{b^3s}2\\& + \frac{b^3}2 + 4b^2s^3 - 2b^2s^2 - \frac{3b^2s}2 - \frac{3b^2}2 + 8bs^4 - \frac{31bs^3}2 + 5bs^2 + \frac{3bs}2 + \frac{3b}2 + 4s^5 - 13s^4 + \frac{25s^3}2 - 3s^2\\& - \frac{s}2 - \frac{1}2,\\
\frac{\partial \tilde{g_2}}{\partial b} =& - 11a^4 + 10a^3b + 23a^3s - \frac{19a^3}2 + 3a^2b^2 + 18a^2bs - 18a^2b + 9a^2s^2 - \frac{57a^2s}2 + 17a^2 - 3ab^2s - \frac{3ab^2}2\\& - 20abs^2 + 10abs + 7ab - 21as^3 + \frac{71as^2}2 - 9as - \frac{11a}2 + \frac{3b^2s}2 + \frac{3b^2}2 + 8bs^3 - 4bs^2 - 3bs - 3b\\& + 8s^4 - \frac{31s^3}2 + 5s^2 + \frac{3s}2 + \frac32,\\
\frac{\partial^2 \tilde{g_2}}{\partial b^2} =& 7a + 3b - 3s - 3ab + 10as + 3bs + 6a^2b - 20as^2 + 18a^2s - 18a^2 + 10a^3 - 4s^2 + 8s^3 - 6abs - 3,\\
\frac{\partial^3 \tilde{g_2}}{\partial b^3} = &3s - 3a - 6as + 6a^2 + 3 > 0.
\end{align*}
Therefore
\begin{align*}
\frac{\partial^2 \tilde{g_2}}{\partial b^2} \ge \frac{\partial^2 \tilde{g_2}}{\partial b^2}(2a) = 22a^3 + 6a^2s - 24a^2 - 20as^2 + 16as + 13a + 8s^3 - 4s^2 - 3s - 3.
\end{align*}
Consider the right-hand side as a function of $a$; since $a\ge s\ge 1$,
\begin{align*}
&RHS'(a) = 66a^2 + 12as - 48a - 20s^2 + 16s + 13 = (66a^2 - 48a - 8s^2) + (12as - 12s^2) + 16s + 13 > 0,\\
\Rightarrow &RHS(a)\ge RHS(s) = 16s^3 - 12s^2 + 10s - 3 > 0.
\end{align*}
Now, since $\frac{\partial^2 \tilde{g_2}}{\partial b^2} > 0$
\begin{align*}
\frac{\partial \tilde{g_2}}{\partial b} \ge \frac{\partial \tilde{g_2}}{\partial b}(2a) = &21a^4 + 47a^3s - \frac{103a^3}2 - 31a^2s^2 - \frac{5a^2s}2 + 37a^2 - 5as^3 + \frac{55as^2}2 - 15as - \frac{23a}2 + 8s^4\\
&- \frac{31s^3}2 + 5s^2 + \frac{3s}2 + \frac32.
\end{align*}
Consider the right-hand side as a function of $a$:
\begin{align*}
&RHS'(a) = 84a^3 + 141a^2s - \frac{309a^2}2 - 62as^2 - 5as + 74a - 5s^3 + \frac{55s^2}2 - 15s - \frac{23}2,\\
&RHS''(a) = 252a^2 + 282as - 309a - 62s^2 - 5s + 74 = (252a^2 - 252a) + (282as - 57a - 62s^2 - 5s)\\& + 74 > 0.\\
\Rightarrow&RHS'(a)\ge RHS'(s) = 158s^3 - 132s^2 + 59s - \frac{23}2 > 0,\\
\Rightarrow &RHS(a) \ge RHS(s) = 40s^4 - 42s^3 + 27s^2 - 10s + \frac32 > 0 \text{ since }s\ge 1.
\end{align*}
Finally, since $\frac{\partial \tilde{g_2}}{\partial b} > 0$
\begin{align*}
\tilde{g_2}\ge \tilde{g_2}(2a) = &64a^4s - 48a^4 - 64a^3s + \frac{113a^3}2 - 24a^2s^3 + 49a^2s^2 - \frac{3a^2s}2 - 26a^2 + 4as^4 + 4as^3 - \frac{39as^2}2\\
&+ 8as + \frac{11a}2 + 4s^5 - 13s^4 + \frac{25s^3}2 - 3s^2 - \frac{s}2 - \frac12.
\end{align*}
Consider the right-hand side as a function of $a$:
\begin{align*}
&RHS'(a) = 256a^3s - 192a^3 - 192a^2s + \frac{339a^2}2 - 48as^3 + 98as^2 - 3as - 52a + 4s^4 + 4s^3 - \frac{39s^2}2 + 8s\\& + \frac{11}{2}\\
&RHS''(a) = 768a^2s - 576a^2 - 384as + 339a - 48s^3 + 98s^2 - 3s - 52\\
&RHS^{(3)}(a) = 1536as - 384s - 1152a + 339 > 0
\end{align*}
therefore
\begin{align*}
&RHS''(a)\ge RHS''(s) = 720s^3 - 862s^2 + 336s - 52 > 0,\\
&RHS'(a) \ge RHS'(s) = 212s^4 - 282s^3 + 147s^2 - 44s + \frac{11}2 > 0,\\
&RHS(a) \ge RHS(s) = 48s^5 - 72s^4 + 48s^3 - 21s^2 + 5s - \frac12 > 0.
\end{align*}
This completes the proof of Claim \ref{extremalclaim2}.
\end{proof}
We have proved that, for graphs with fixed matching number $\frac{n-t}{2}$, the extremal graph is either $G_1 = K_1\vee({K_{n-t-1}\cup \overline{K_{t}}})$ or $G_2 = K_{\frac{n-t}{2}}\vee{\overline{K_{\frac{n+t}{2}}}}$. We now compare the spectral radii of $G_1$ and $G_2$ to $\frac{4\alpha'(G)}{n}$.
\begin{claim}
\label{claim:bound1} $$\rho(A_{\overline{\kappa}}(G_1)) < \frac{2(n-t)}{n}.$$
\end{claim}
\begin{proof}
From the quotient matrix given in the proof of Claim \ref{extremalclaim1}, we can compute
\begin{align*}
&\rho(A_{\overline{\kappa}}(G_1)) = \frac{1}{\begin{pmatrix}
n\\2
\end{pmatrix}}\left(\frac{3t}2 - n - nt + \frac{n^2}2 + \frac{t^2}2 \right.\\&\left.+\frac{1}{2}\sqrt{n^4 - 4n^3t - 4n^3 + 6n^2t^2 + 10n^2t + 8n^2 - 4nt^3 - 8nt^2 - 8nt - 8n + t^4 + 2t^3 + t^2 + 4t + 4}\vphantom{\frac11}\right).
\end{align*}
Therefore
\begin{align*}
&\rho(A_{\overline{\kappa}}(G_1))< \frac{2(n-t)}{n}\\
\Leftrightarrow &\sqrt{n^4 - 4n^3t - 4n^3 + 6n^2t^2 + 10n^2t + 8n^2 - 4nt^3 - 8nt^2 - 8nt - 8n + t^4 + 2t^3 + t^2 + 4t + 4}\\
<&2\left(\begin{pmatrix}
n\\2
\end{pmatrix}\frac{2(n-t)}{n} - \left(\frac{3t}2 - n - nt + \frac{n^2}2 + \frac{t^2}2\right)\right) = n^2 - t^2 - t\\
\Leftrightarrow &0 < 4n^3t + 4n^3 - 8n^2t^2 - 12n^2t - 8n^2 + 4nt^3 + 8nt^2 + 8nt + 8n - 4t - 4.
\end{align*}
Consider the right-hand side of the above inequality as a function of $n$; since $n \ge 3t + 2$,
\begin{align*}
&RHS'(n) = 12n^2t + 12n^2 - 16nt^2 - 24nt - 16n + 4t^3 + 8t^2 + 8t + 8,\\
&RHS''(n) = 24n - 24t + 24nt - 16t^2 - 16 > 0
\end{align*}
\begin{align*}
\Rightarrow & RHS'(n) \ge RHS'(3t + 2) = 64t^3 + 156t^2 + 104t + 24 > 0,\\
&RHS(n) \ge RHS(3t + 2) = 48t^4 + 152t^3 + 152t^2 + 68t + 12 > 0
\end{align*}
which completes the proof of Claim \ref{claim:bound1}.
\end{proof}
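The inequality of Claim \ref{claim:bound1} can also be spot-checked numerically from the displayed closed form for $\rho(A_{\overline{\kappa}}(G_1))$. The following Python sketch is an informal check on a grid, not a proof:

```python
import math

def rho_G1(n, t):
    # closed form for rho(A_kappa_bar(G1)) displayed in the proof above
    disc = (n**4 - 4*n**3*t - 4*n**3 + 6*n**2*t**2 + 10*n**2*t + 8*n**2
            - 4*n*t**3 - 8*n*t**2 - 8*n*t - 8*n + t**4 + 2*t**3 + t**2
            + 4*t + 4)
    binom = n*(n - 1)//2  # C(n, 2)
    return (1.5*t - n - n*t + n*n/2 + t*t/2 + 0.5*math.sqrt(disc))/binom

for t in range(1, 15):
    for n in range(3*t + 2, 3*t + 30):
        # the strict bound of Claim `claim:bound1`
        assert rho_G1(n, t) < 2*(n - t)/n
```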
\begin{claim}
\label{claim:bound2} $$\rho(A_{\overline{\kappa}}(G_2)) < \frac{2(n-t)}{n}.$$
\end{claim}
\begin{proof}
From the quotient matrix given in the proof of Claim \ref{extremalclaim2}, we can compute
\begin{align*}
&\rho(A_{\overline{\kappa}}(G_2)) = \frac{1}{\begin{pmatrix}
n\\2
\end{pmatrix}}\left(\frac{t}2 - n - \frac{nt}4 + \frac{3n^2}8 - \frac{t^2}8 + \frac12\right.\\&\left.+\frac18\sqrt{5n^4 - 12n^3t - 8n^3 + 6n^2t^2 + 16n^2t + 24n^2 + 4nt^3 - 8nt^2 - 16nt - 32n - 3t^4 + 8t^2 + 16}\vphantom{\frac11}\right).
\end{align*}
Therefore
\begin{align*}
&\rho(A_{\overline{\kappa}}(G_2))< \frac{2(n-t)}{n}\\
\Leftrightarrow &\sqrt{5n^4 - 12n^3t - 8n^3 + 6n^2t^2 + 16n^2t + 24n^2 + 4nt^3 - 8nt^2 - 16nt - 32n - 3t^4 + 8t^2 + 16}\\
<&8\left(\begin{pmatrix}
n\\2
\end{pmatrix}\frac{2(n-t)}{n} - \left(\frac{t}2 - n - \frac{nt}4 + \frac{3n^2}8 - \frac{t^2}8 + \frac12\right)\right) = 5n^2 - 6nt + t^2 + 4t - 4\\
\Leftrightarrow &0 < 20n^4 - 48n^3t + 8n^3 + 40n^2t^2 + 24n^2t - 64n^2 - 16nt^3 - 40nt^2 + 64nt + 32n + 4t^4 + 8t^3 - 32t.
\end{align*}
Consider the right-hand side of the above inequality as a function of $n$; since $n \ge t + 2$,
\begin{align*}
&RHS'(n) = 80n^3 - 144n^2t + 24n^2 + 80nt^2 + 48nt - 128n - 16t^3 - 40t^2 + 64t + 32,\\
&RHS''(n) = 240n^2 - 288nt + 48n + 80t^2 + 48t - 128,\\
&RHS^{(3)}(n) = 480n - 288t + 48 > 0.
\end{align*}
\begin{align*}
\Rightarrow & RHS''(n) \ge RHS''(t + 2) = 32t^2 + 480t + 928 > 0,\\
&RHS'(n) \ge RHS'(t + 2) = 96t^2 + 512t + 512 > 0,\\
&RHS(n) \ge RHS(t + 2) = 128t^2 + 320t + 192 > 0
\end{align*}
which completes the proof of Claim \ref{claim:bound2}.
\end{proof}
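Similarly, Claim \ref{claim:bound2} can be spot-checked by computing the largest eigenvalue of the $2\times 2$ quotient matrix of $G_2$ directly, which bypasses the squaring step. In the sketch below (an informal check only), $n-t$ is kept even so that $G_2$ is well defined:

```python
import math

def rho_G2(n, t):
    # largest eigenvalue of the quotient matrix of
    # G2 = K_{(n-t)/2} join bar-K_{(n+t)/2}, divided by C(n, 2)
    a11 = (n - 1)*(n - t - 2)/2
    a12 = (n*n - t*t)/4
    a21 = (n - t)**2/4
    a22 = (n - t)*(n + t - 2)/4
    tr, det = a11 + a22, a11*a22 - a12*a21
    lam = (tr + math.sqrt(tr*tr - 4*det))/2
    return lam/(n*(n - 1)/2)

for t in range(1, 15):
    for n in range(t + 2, t + 40, 2):  # same parity as t
        # the strict bound of Claim `claim:bound2`
        assert rho_G2(n, t) < 2*(n - t)/n
```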
The complete proof of Theorem \ref{main} follows directly from Claims \ref{claim1}, \ref{mainclaim}, \ref{extremalclaim1}, \ref{extremalclaim2}, \ref{claim:bound1} and \ref{claim:bound2}. From Claim \ref{claim1}, equality is achieved when $G$ is a complete graph and $\alpha'(G) = \frac{n-1}{2}$, implying $n$ is odd. In the remaining case, the extremal graphs are $G_1$ and $G_2$, whose spectral radii are strictly smaller than $\frac{4\alpha'(G)}{n}$. This completes the proof of Theorem \ref{main}.\hfill$\qedsymbol$
\section{Proof of Theorem \ref{bipartitethm}}
\textit{Proof of Theorem \ref{bipartitethm}.} We now turn our attention to bipartite graphs, for which we can improve the upper bound of Theorem \ref{main}.
\begin{claim}\label{claim:bipartite1}
$\rho(A_{\overline{\kappa}}(K_{k,n-k})) \le \frac{(n-k)(4k - 2)}{n(n-1)}$.
\end{claim}
\begin{proof}
Without loss of generality, suppose that $k\le n - k$, i.e., $2k \le n$. Partitioning the vertices according to the partite sets, we obtain the quotient matrix
\[\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q} =\begin{pmatrix}
(n-k)(k-1)&k(n-k)\\
k^2 & k(n-k-1)
\end{pmatrix}.\]
The spectral radius of $A_{\overline{\kappa}}(K_{k,n-k})$ can be computed
\[\rho(A_{\overline{\kappa}}(K_{k,n-k})) = \frac{1}{n(n-1)}\left(\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2} + 2nk -n - 2k^2\right)\]
and analyzed as follows
\begin{align*}
&\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2} + 2nk -n - 2k^2\\
=& \frac{4nk^3 - 4nk - 4k^4+4k^2}{\sqrt{(n-2k)^2 + 4nk^3 -4k^4} + n} + 2nk - 2k^2\\
\le& \frac{4nk^3 - 4nk - 4k^4+4k^2}{\sqrt{8k^4 -4k^4} + 2k} + 2nk - 2k^2 \hspace{3cm}{(*)}\\
=& \frac{4nk(k-1)(k+1) - 4k^2(k-1)(k+1)}{2k^2 + 2k} + 2nk - 2k^2\\
=& 2n(k-1) - 2k(k-1) + 2nk - 2k^2\\
=& 2(n-k)(k-1) + 2k(n-k)\\
=& (n-k)(4k-2),
\end{align*}
hence, Claim \ref{claim:bipartite1} is proved.
\end{proof}
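The bound of Claim \ref{claim:bipartite1}, together with the equality case $n=2k$ used at the end of this section, can be checked numerically from the displayed closed form for $\rho(A_{\overline{\kappa}}(K_{k,n-k}))$ (informal Python sketch, not part of the proof):

```python
import math

def rho_biclique(n, k):
    # closed form from the proof of Claim `claim:bipartite1`
    s = math.sqrt(n*n + 4*n*k**3 - 4*n*k - 4*k**4 + 4*k*k)
    return (s + 2*n*k - n - 2*k*k)/(n*(n - 1))

for n in range(2, 60):
    for k in range(1, n//2 + 1):
        # the bound of the claim, with a small float tolerance
        assert rho_biclique(n, k) <= (n - k)*(4*k - 2)/(n*(n - 1)) + 1e-12

# equality in (*) at n = 2k: both sides equal (n-k)(4k-2)/(n(n-1)) = 1... /1
assert abs(rho_biclique(10, 5) - 1.0) < 1e-12
```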
The next step is to show that $K_{k, n-k}$ is the extremal graph with the maximum spectral radius among $n$-vertex connected bipartite graphs with the matching number $k$.
\begin{claim}\label{claim:bipartite2}
Let $G$ be an $n$-vertex connected bipartite graph with the matching number $k$. Then, $\rho(A_{\overline{\kappa}}(G))\le \rho(A_{\overline{\kappa}}(K_{k, n -k}))$.
\end{claim}
\begin{proof}
Denote by $X, Y$ the two partite sets of $G$, and assume that $|X|\le |Y|$. If $G$ has a matching that covers every vertex of $X$, then $|X| = k$ and $E(G)\subset E(K_{k,n-k})$. By Theorem~\ref{PerronCor}, $\rho(A_{\overline{\kappa}}(G))\le\rho(A_{\overline{\kappa}}(K_{k,n-k}))$.
If $G$ does not have a matching that covers every vertex of $X$, by Hall's marriage theorem, there exists a subset $S$ of $X$ such that $\alpha'(G) = |X| - |S| + |N(S)| = k$, where $N(S)$ denotes the neighbors of $S$.
Consider the graph $G^*$ (see Figure \ref{fig:bipartite}) constructed by adding all possible edges between $S$ and $N(S)$, between $X-S$ and $N(S)$, and between $X-S$ and $Y - N(S)$.
\begin{figure}
\caption{The bipartite graph $G^*$}
\label{fig:bipartite}
\end{figure}
By construction, $\alpha'(G^*) = \alpha'(G) = k$ and $\rho(A_{\overline{\kappa}}(G^*))\ge \rho(A_{\overline{\kappa}}(G))$. Let $|X| = x$, $|Y| = y$, $|S| = s$ and $|N(S)| = n_s$; the quotient matrix of $G^*$ corresponding to the partition $\{S, N(S), X - S, Y - N(S)\}$ is then
\[\hspace*{-0.5cm}\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_1} =\begin{pmatrix}
(s-1)n_s&n_s^2&(x-s)n_s&(y-n_s)\min\{n_s,x-s\}\\
sn_s & (n_s - 1)x & (x-s)(n_s+x-s-1) & (y-n_s)(x-s)\\
sn_s & n_s(x-s+n_s-1) & (x-s-1)y & (y-n_s)(x-s)\\
s\min\{n_s,x-s\}&n_s(x-s)&(x-s)^2&(y-n_s-1)(x-s)
\end{pmatrix}.\]
Since $\tilde{Q_1}$ depends on the value of $\min\{n_s, x-s\}$, we consider two cases.
\quad
\noindent\textit{Case 1:} If $n_s \le x-s$, then
\[\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_1} = \begin{pmatrix}
(s-1)n_s&n_s^2&(x-s)n_s&(y-n_s)n_s\\
sn_s & (n_s - 1)x & (x-s)(n_s+x-s-1) & (y-n_s)(x-s)\\
sn_s & n_s(x-s+n_s-1) & (x-s-1)y & (y-n_s)(x-s)\\
sn_s&n_s(x-s)&(x-s)^2&(y-n_s-1)(x-s)
\end{pmatrix}.\]
Consider the graph $G^{**}$ obtained by moving one vertex from $S$ to $Y-N(S)$ and its quotient matrix
\[\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_2} =\begin{pmatrix}
(s-2)n_s&n_s^2&(x-s)n_s&(y-n_s+1)n_s\\
(s-1)n_s & (n_s - 1)(x-1) & (x-s)(n_s+x-s-1) & (y-n_s+1)(x-s)\\
(s-1)n_s & n_s(x-s+n_s-1) & (x-s-1)(y+1) & (y-n_s+1)(x-s)\\
(s-1)n_s&n_s(x-s)&(x-s)^2&(y-n_s)(x-s)
\end{pmatrix},\]
then
\[\begin{pmatrix}
n\\2
\end{pmatrix}(\tilde{Q_2} - \tilde{Q_1}) = \begin{pmatrix}
-n_s&0&0&n_s\\
-n_s&-(n_s-1)&0&x-s\\
-n_s&0&x-s-1&x-s\\
-n_s&0&0&x-s
\end{pmatrix}.\]
Let $u = [u_1, u_2, u_3, u_4]^T > 0$ be the unit Perron vector corresponding to the spectral radius $\rho$ of $\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_1}$. We compare the entries of $u$, namely $u_1$ and $u_4$. If $n_s = x-s$ then
\[(s-1)n_su_1 + n_s^2u_2+n_s^2u_3 + (y-n_s)n_su_4 = \rho u_1,\]
\[sn_su_1 + n_s^2u_2+n_s^2u_3 + (y-n_s-1)n_su_4 = \rho u_4\]
\[\Rightarrow\rho(u_1 - u_4) = -n_s(u_1 - u_4) \Rightarrow u_1 = u_4.\]
Else if $n_s\le x-s-1$ then
\begin{align*}
(y-n_s-1)(x-s) &\ge (y-n_s-1)(n_s+1)\\
&= (y-n_s)n_s + (y-n_s-1) - n_s\\
&\ge (y-n_s)n_s
\end{align*}
(since $x\le y, n_s\le s \Rightarrow y - n_s \ge x-s\ge n_s + 1$). From here, we have
\begin{align*}
&\rho u_1 = (s-1)n_su_1 +n_s^2u_2 + (x-s)n_su_3 + (y-n_s)n_su_4\\
&\le sn_su_1 + n_s(x-s)u_2 + (x-s)^2u_3 + (y-n_s-1)(x-s)u_4 = \rho u_4,
\end{align*}
therefore $u_1 \le u_4$. Next, we compare $u_2$ and $u_3$.
\begin{align*}
&\rho(u_2 - u_3) = [(n_s-1)x - n_s(x-s+n_s-1)]u_2\\
&+ [(x-s)(n_s+x-s-1) - (x-s-1)y]u_3\\
\Rightarrow &\frac{u_2}{u_3} = \frac{\rho - (x-s-1)y+(x-s)(n_s+x-s-1)}{\rho - (n_s-1)x + n_s(n_s+x-s-1)}.
\end{align*}
Subtracting the denominator from the numerator gives
\begin{align*}
&(x-s)(n_s+x-s-1) - (x-s-1)y + (n_s-1)x - n_s(n_s+x-s-1)\\
=&(x-s - n_s)(n_s+x-s-1) - (x-s-1)y + (n_s-1)x\\
=&(x-s - n_s)(n_s+x-s-1) - (x-s-n_s)y - (n_s-1)y + (n_s-1)x\\
=&(x-s-n_s)(n_s+x-s-y-1) - (n_s-1)(y-x) \le 0,
\end{align*}
therefore $u_2 \le u_3$.
Next, we prove that $\begin{pmatrix}
n\\2
\end{pmatrix}u^T(\tilde{Q_2}-\tilde{Q_1})u\ge 0$. Indeed,
\begin{align*}
\begin{pmatrix}
n\\2
\end{pmatrix}u^T(\tilde{Q_2}-\tilde{Q_1})u = &-n_su_1^2 - n_su_1u_2 - n_su_1u_3-(n_s-1)u_2^2 + (x-s)u_2u_4 + (x-s-1)u_3^2\\
&+ (x-s)u_3u_4+ (x-s)u_4^2.
\end{align*}
Combining $u_1\le u_4, u_2\le u_3$ with the assumption $n_s\le x-s$, we have
\begin{align*}
&n_su_1^2\le (x-s)u_4^2,\\
&n_su_1u_2\le (x-s)u_2u_4,\\
&n_su_1u_3\le (x-s)u_3u_4,\\
&(n_s-1)u_2^2\le (x-s-1)u_3^2
\end{align*}
therefore, $u^T(\tilde{Q_2}-\tilde{Q_1})u\ge 0$ and \[\rho\left(\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_2}\right) \ge \begin{pmatrix}
n\\2
\end{pmatrix}u^T\tilde{Q_2}u \ge \begin{pmatrix}
n\\2
\end{pmatrix}u^T\tilde{Q_1}u = \rho\left(\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_1}\right).\]
Each time we move a vertex from $S$ to $Y-N(S)$, we increase the spectral radius, and we can keep doing so without changing the matching number so long as $|S|\ge|N(S)|$ after the vertex is moved. When $|S| = |N(S)|$, clearly $|X| = k$ and $E(G) \subset E(K_{k,n-k})$, so it follows that $\rho(A_{\overline{\kappa}}(G))\le\rho(A_{\overline{\kappa}}(K_{k,n-k}))$.
\quad
\noindent\textit{Case 2:} If $n_s > x - s$, the quotient matrix of $G^*$ now becomes
\[\begin{pmatrix}
n\\2
\end{pmatrix}\tilde{Q_1} = \begin{pmatrix}
(s-1)n_s&n_s^2&(x-s)n_s&(y-n_s)(x-s)\\
sn_s & (n_s - 1)x & (x-s)(n_s+x-s-1) & (y-n_s)(x-s)\\
sn_s & n_s(x-s+n_s-1) & (x-s-1)y & (y-n_s)(x-s)\\
s(x-s)&n_s(x-s)&(x-s)^2&(y-n_s-1)(x-s)
\end{pmatrix}.\]
By Theorem \ref{quotient} and Theorem \ref{Gctheo}, the spectral radius of $\tilde{Q_1}$ is bounded by the largest row sum of $\tilde{Q_1}$. We consider each row sum:
\begin{enumerate}
\item $(s-1)n_s+n_s^2+(x-s)n_s+(y-n_s)(x-s)$
$\le (s-1)n_s+n_s^2+(x-s)n_s+(y-n_s)n_s$
$ = n_s(s-1+n_s+x-s+y-n_s)$
$ = n_s(n-1) \le k(n-1)$.
\item $sn_s+(n_s-1)x+(x-s)(n_s+x-s-1)+(y-n_s)(x-s)$
$= sn_s+(n_s-1)x+(x-s)(y+x-s-1)$
$\le sn_s+(n_s-1)x+(x-s)(n-1)$
$\le (y-1)n_s+n_sx+(x-s)(n-1)$ (since $G$ is connected, $x-s \ge 1 \Rightarrow y - s\ge 1$)
$= n_s(n-1) + (x-s)(n-1)$
$= k(n-1)$.
\item $sn_s+n_s(x-s+n_s-1)+(x-s-1)y+(y-n_s)(x-s)$
$\le sn_s+n_s(x-s+n_s-1) + (x-s-1)y+(y-n_s)n_s$
$= n_s(s +x-s+n_s-1 + y - n_s) + (x-s-1)y$
$= n_s(n-1) + (x-s-1)y$
$\le n_s(n-1) + (x-s)(n-1)$
$= k (n-1)$.
\item $s(x-s) + n_s(x-s) + (x-s)^2 + (y-n_s-1)(x-s)$
$= (x-s)(s+n_s+x-s+y-n_s-1)$
$= (x-s)(n-1) \le k(n-1)$.
\end{enumerate}
Hence, all row sums of $\begin{pmatrix}n\\2\end{pmatrix}\tilde{Q_1}$ are bounded by $k(n-1)$, so all row sums of $\tilde{Q_1}$ are bounded by $\frac{2k}{n}$, which we can prove is no greater than $\rho(A_{\overline{\kappa}}(K_{k,n-k}))$. Consider that
\begin{align*}
\rho(A_{\overline{\kappa}}(K_{k,n-k})) &= \frac{1}{n(n-1)}\left(\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2} + 2nk -n - 2k^2\right)\\
&= \frac{1}{n(n-1)}\left(\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2} + 2(n-1)k - n - 2k^2 + 2k\right)\\
&= \frac{2k}{n} + \frac{1}{n(n-1)}\left(\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2}- n - 2k^2 + 2k\right),
\end{align*}
therefore $\rho(A_{\overline{\kappa}}(K_{k,n-k})) \ge \frac{2k}{n}$ if $\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2}- n - 2k^2 + 2k \ge 0$. Consider the inequality
\begin{align*}
&\sqrt{n^2 + 4nk^3 - 4nk-4k^4+4k^2}\ge n-2k+2k^2\\
\Leftrightarrow&n^2 + 4nk^3 - 4nk-4k^4+4k^2 \ge n^2 + 4k^2 + 4k^4 - 4nk + 4nk^2 - 8k^3\\
\Leftrightarrow&4nk^3 - 8k^4\ge 4nk^2 - 8k^3\\
\Leftrightarrow&(k^3 - k^2)(4n - 8k)\ge 0
\end{align*}
and since $k\ge 1$ and $n\ge 2k$, the inequality is true. The proof of Claim \ref{claim:bipartite2} is now complete.
\end{proof}
Combining Claim \ref{claim:bipartite1} and Claim \ref{claim:bipartite2}, we have the complete proof of Theorem \ref{bipartitethm}. Equality holds only when $G\cong K _{k,n-k}$ and $k = n - k$ because in order to have equality in $(*)$, we must have $n = 2k$. \hspace{13.6cm} $\qedsymbol$
\end{document}
\begin{document}
\title{Towards large genus asymptotics of intersection numbers on moduli spaces of curves}
\begin{abstract}
We explicitly compute the diverging factor in the large genus asymptotics of the Weil-Petersson volumes of the moduli spaces
of $n$-pointed complex algebraic curves. Modulo a universal multiplicative constant we prove the existence of a complete asymptotic expansion of the Weil-Petersson volumes in the inverse powers of the genus with coefficients that are polynomials
in $n$. This is done by analyzing various recursions for the more general intersection numbers of tautological classes, whose large genus asymptotic behavior is also extensively studied.
\end{abstract}
\begin{section}{Introduction and statement of results}
In this note, we study the asymptotic behavior of the Weil-Petersson volumes $V_{g,n}$ of the moduli
spaces $\mathcal{M}_{g,n}$ of $n$-pointed complex algebraic curves of genus $g$ as $g \rightarrow \infty.$ Here
$$V_{g,n}=\frac{1}{(3g-3+n)!} \int_{{\mathcal{M}}_{g,n}} \omega_{g,n}^{3g-3+n},$$ where $\omega_{g,n}$ is the Weil-Petersson symplectic form on $\mathcal{M}_{g,n}.$
The following conjecture was made in \cite{Z:con} on the basis of numerical data:
\begin{conj}\label{conj}
For any fixed $n\geq 0$
$$V_{g,n} =(2g-3+n)! (4\pi^2)^{2g-3+n} \frac{1}{\sqrt{g \pi}} \left(1 + \frac{c_{n}}{g} + O\left(\frac{1}{g^{2}}\right)\right)$$
as $g\rightarrow \infty.$
\end{conj}
The objective of this paper is to prove the statements formulated below.
\begin{theo}\label{theo:main:fixn}
There exists a universal constant $C\in (0,\infty)$ such that for any given $k \geq 1, n\geq 0,$
$$V_{g,n} =C\,\frac{(2g-3+n)!\,(4\pi^2)^{2g-3+n}}{\sqrt{g}} \left(1+\frac{c_n^{(1)}}{g}+\ldots+\frac{c_n^{(k)}}{g^{k}}+ O\left(\frac{1}{g^{k+1}}\right)\right), $$
as $g \rightarrow \infty.$
Each term $c_n^{(i)}$ in the asymptotic expansion is a polynomial in $n$ of degree $2i$ with coefficients in
${\mathbb Q}[\pi^{-2},\pi^2]$ that are effectively computable. Moreover, the leading term of $c_n^{(i)}$ is equal to
$\frac{(-1)^i}{i!\,(2\pi^2)^i}\,n^{2i}.$
\end{theo}
\begin{rema}\label{rem1}
{\rm Note that Conjecture $\ref{conj}$ claims that $C=\frac{1}{\sqrt{\pi}}$.
Numerical data suggest that the coefficients of $c_n^{(i)}$ actually belong to ${\mathbb Q}[\pi^{-2}]$. For example,}
$$c_n^{(1)}=-\frac{n^2}{2\pi^2}-\left(\frac{1}{4}-\frac{5}{2\pi^2}\right)n+\frac{7}{12}-\frac{17}{6\pi^2}.$$
\end{rema}
Our method also implies that given $k\geq 1, n \geq 0$ we have
\begin{equation}\label{1}
\frac{V_{g,n+1}} {8\pi^2 gV_{g,n}}= 1+ \frac{a_n^{(1)}}{g}+\ldots+\frac{a_n^{(k)}}{g^{k}}+O\left(\frac{1}{g^{k+1}}\right),
\end{equation}
\begin{equation}\label{2}
\frac{V_{g-1,n+2}}{V_{g,n}}=1+ \frac{b_n^{(1)}}{g}+\ldots+\frac{b_n^{(k)}}{g^{k}}+O\left(\frac{1}{g^{k+1}}\right),
\end{equation}
as $g\to\infty$, where the coefficients $a_n^{(i)}$ and $b_n^{(i)}$ can be explicitly computed. However, here we do this only for $a_n^{(1)}$ and $b_n^{(1)}$:
\begin{theo}\label{11}
For any fixed $n \geq 0$:
\begin{align}\label{110}
&\frac{V_{g,n+1}}{8\pi^2 gV_{g,n}}= 1+\left(\left(\frac{1}{2}-\frac{1}{\pi^2}\right)n-\frac{5}{4}+\frac{2}{\pi^2}\right)\cdot\frac{1}{g}+ O\left(\frac{1}{g^2}\right),\\
\label{111}
&\frac{V_{g-1,n+2}}{V_{g,n}}=1+\frac{3-2n}{\pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^2}\right).
\end{align}
\end{theo}
With the help of the identity $\frac{V_{g+1,n}}{V_{g,n}}=\frac{V_{g+1,n}}{V_{g,n+2}}\cdot\frac{V_{g,n+2}}{V_{g,n+1}}\cdot\frac{V_{g,n+1}}{V_{g,n}}$ this theorem immediately yields
\begin{coro}\label{c}
Let $n \geq 0$ be fixed, then
$$\frac{V_{g+1,n}}{V_{g,n}}= (4\pi^2)^2(2g+n-1)(2g+n-2)\left(1-\frac{1}{2g}\right)\cdot \left(1+ \frac{r_{n}^{(2)}}{g^2}+ O\left(\frac{1}{g^3}\right)\right),$$
as $g \rightarrow \infty.$
\end{coro}
Note that $r_n^{(2)}=3/8-c_n^{(1)}$ by Theorem \ref{theo:main:fixn}, so that
$$r_n^{(2)}=\frac{n^2}{2\pi^2}+\left(\frac{1}{4}-\frac{5}{2\pi^2}\right)n-\frac{5}{24}+\frac{17}{6\pi^2}.$$
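The arithmetic behind $r_n^{(2)}=3/8-c_n^{(1)}$ can be confirmed with exact rational arithmetic, treating each coefficient as a pair (rational part, coefficient of $\pi^{-2}$). A small Python sketch, included only as a check of the displayed formulas:

```python
from fractions import Fraction as F

# coefficients of c_n^(1) and of the claimed r_n^(2), stored per power of n
# as pairs (rational part, coefficient of 1/pi^2), from the formulas above
c_n1 = {2: (F(0), F(-1, 2)), 1: (F(-1, 4), F(5, 2)), 0: (F(7, 12), F(-17, 6))}
r_n2_claimed = {2: (F(0), F(1, 2)), 1: (F(1, 4), F(-5, 2)), 0: (F(-5, 24), F(17, 6))}

# r_n^(2) = 3/8 - c_n^(1): each coefficient pair is negated,
# with 3/8 added to the rational constant term
for p in (2, 1, 0):
    q, r = c_n1[p]
    base = F(3, 8) if p == 0 else F(0)
    assert (base - q, -r) == r_n2_claimed[p]
```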
\begin{rema}\label{rem2}
{\rm Since $\prod_{g=1}^{\infty} (1+a_{g})$
converges when $a_{g}=O(1/g^2)$, Corollary \ref{c} easily implies that there exists $$\lim_{g\rightarrow \infty} \frac{V_{g,n} \sqrt{g}}{(2g-3+n)!\,(4\pi^2)^{2g-3+n}} =C\in (0,\infty).$$
The proof of Theorem $\ref{theo:main:fixn}$ is based on $(\ref{1})$ and $(\ref{2})$ (see Theorems $\ref{theo:general:1}$,
$\ref{theo:general:2}$) and follows similar lines as well. The method we use also allows to calculate the error terms explicitly: even though we only do the calculation for the coefficient of $1/g$, the error terms of order of $1/g^s$ can be written in terms of values of the intersections of $\psi$ classes on surfaces of genus at most $s$. However, this method does not provide any information about the exact value of $C$.}
\end{rema}
Analyzing the signs of the error terms of order $1/g^2$ in Theorem $\ref{11}$, we get
\begin{coro}\label{m}
Given $n \geq 2,$ there exists $g_{0}$ such that the sequence
$$\left \{\frac{V_{g-1,n+2}} {V_{g,n}}\right\}_{g \geq g_{0}}$$
is increasing.
Similarly, for $n \geq 3$ there exists $g_{0}$ such that the sequence
$$ \left\{ \frac{8\pi^2 g V_{g,n}} {V_{g,n+1}}\right\} _{g \geq g_{0}}$$
is increasing.
\end{coro}
We also obtain somewhat weaker results when $n$ varies as $g\to \infty$:
\begin{theo}\label{Main}
For any sequence $\{n(g)\}_{g=1}^{\infty}$ of non-negative integers with
$$\lim_{g\rightarrow \infty} \frac{n(g)^2}{g}=0,$$
we have
$$V_{g,n(g)}= \frac{C}{\sqrt{g}}\, (2g-3+n(g))!\,(4\pi^2)^{2g-3+n(g)}\, \left(1 + O\left(\frac{1+n(g)^2}{g}\right)\right)$$
as $ g\rightarrow \infty.$
\end{theo}
In fact, we prove that
$$ \frac{\sqrt{g} \,V_{g,n(g)}}{C\,(2g-3+n(g))!\;(4\pi^2)^{2g-3+n(g)}}= O\left(\frac{(g+n)!}{g! \, g^{n}}\right). $$
\noindent
{\bf Notes and remarks.}
\begin{itemize}
\item
\noindent
It may be instructive to compare Theorem \ref{theo:main:fixn} to
the asymptotic formula for the Weil-Petersson volumes for fixed $g\geq 0$ and $n\to\infty$ (cf. \cite{MZ}, Theorem 6.1):
\begin{theo}\label{z}
For any fixed $g\geq 0$
\begin{equation}\label{zn}
V_{g,n} = n! C^{n} n^{(5g-7)/2} \left(c_{g}^{(0)} + \frac{c_{g}^{(1)}}{n} + \dots\right),
\end{equation}
as $n \rightarrow \infty$, where $C=-\frac{2}{x_0\,J'_0(x_0)}$, $x_0$ is the first positive zero of the Bessel function $J_0$, and the coefficients $c_{g}^{(0)},c_{g}^{(1)},\dots$ are effectively computable.
\end{theo}
\item Penner \cite{P:vol} developed a different method for computing Weil-Petersson volumes by integrating the Weil-Petersson volume form over simplices in the cellular decomposition of the moduli space. In \cite{Gr:vol}, this method of integration was used to prove that for a fixed $n>0$ there are constants $C_{1},C_{2}>0$ such that
$$C_{1}^g \cdot (2g)! < V_{g,n} < C_{2}^g\cdot (2g)!.$$
(This result was extended to the case of $n=0$ by an algebro-geometric argument in \cite{ST:vol}.)
Note that these estimates do not give much information about the growth of $V_{g-1,n+2}/V_{g, n}$ and $V_{g,n}/V_{g,n+1}$ when
$g \rightarrow \infty.$
\item The estimates from $\cite{M:large}$ imply that given $n \geq 0$ there exists $m>0$ such that
\begin{equation}\label{compare}
g^{-m} \leq \frac{V_{g,n}}{(2g-3+n)! \,(4\pi^2)^{2g-3+n} }\leq g^{m}.
\end{equation}
In general, $$\lim _{g+n \rightarrow \infty} \frac{\log(V_{g,n})}{(2g+n)\log(2g+n)}=1,$$
but understanding the asymptotics of $V_{g,n}$ for arbitrary $g,n$ seems to be more complicated.
\end{itemize}
\end{section}
\begin{section}{Relations between intersection numbers}\label{Asym}
To begin with, let us recall some well-known facts about tautological classes on $\overline{\mathcal{M}}_{g,n}$ and their intersections. For $n>0,$ there are $n$ tautological line bundles $\mathcal{L}_{i}$ on $\overline{\mathcal{M}}_{g,n}$ whose fiber at the point
$(C,x_{1},\ldots,x_{n})\in \overline{\mathcal{M}}_{g,n}$ is the cotangent line to $C$ at $x_{i},$ and we
put $\psi_{i}= c_{1}(\mathcal{L}_{i}) \in H^{2} (\overline{\mathcal{M}}_{g,n}, {\mathbb Q})$ (cf. e.g. \cite{Harris:book} or \cite{AC}).
\noindent
{\bf Notation.} For $d=(d_{1},\ldots,d_{n})$ with $ d_{i} \in {\mathbb Z} _{\geq 0}$ put $|d|=d_{1}+\ldots+d_{n}$ and, assuming $|d| \leq 3g-3+n,$ put $d_{0}= 3g-3+n-|d|$. Define
\begin{align*}
[\tau_{d_{1}}\ldots \tau_{d_{n}}]_{g,n} =\left[ \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g,n}&= \;\frac{\prod_{i=1}^{n} 2^{2d_i}\,(2d_{i}+1)!!} {d_{0}!} \int_{\overline{\mathcal{M}}_{g,n}} \psi_{1}^{d_{1}} \cdots \psi_{n}^{d_{n}} \omega_{g,n}^{d_{0}}\\
&=\prod_{i=1}^{n} 2^{2d_i}\,(2d_{i}+1)!!\; \frac{(2\pi^2)^{d_0}}{d_0!} \int_{\overline{\mathcal{M}}_{g,n}} \psi_{1}^{d_{1}} \cdots \psi_{n}^{d_{n}}\kappa_1^{d_0},
\end{align*}
where $\kappa_1=\frac{[\omega_{g,n}]}{2\pi^2}$ is the first Mumford class on $\overline{\mathcal{M}}_{g,n}$.
According to $\cite{M:JAMS},$ for $L=(L_{1},\ldots,L_{n})$ the Weil-Petersson volume of the moduli space
of hyperbolic surfaces of genus $g$ with $n>0$ geodesic boundary components of lengths $2L_{1},\ldots ,2L_{n}$ can be written as
\begin{equation}\label{re2}
V_{g,n}(2 L)=\sum_{\sumind{d_1,\dots, d_n}{|d| \leq 3g-3+n}} \left[ \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g,n} \frac{L_{1}^{2d_{1}}}{(2d_{1}+1)!}\cdots \frac{L_{n}^{2 d_{n}}}{(2d_{n}+1)!}.
\end{equation}
\noindent
{\bf Recursive formulas.}
The following recursions for the intersection numbers $[\tau_{d_{1}}\ldots \tau_{d_{n}}]_{g,n}$ hold:
\noindent
\begin{align*}{\rm ({\bf Ia})}\qquad\quad\;
\left[ \tau_{0} \tau_{1} \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g,n+2}=\quad &\left[\tau_{0}^{4} \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g-1,n+4} +\\
+\; 6 \sum_{\genfrac{}{}{0pt}{}{g_{1}+g_{2}=g}{I\amalg J=\{1,\ldots, n\}}} &\left[ \tau_{0}^{2} \; \prod_{i\in I } \tau_{d_{i}}\right]_{g_{1},|I|+2} \cdot \quad\left[\tau_{0}^{2}\prod_{i\in J} \tau_{d_{i}}\right]_{g_{2},|J|+2}.
\end{align*}
\noindent
\begin{align*}{\rm ({\bf Ib})}\qquad\quad
\left[\tau_{0}^2 \tau_{l+1} \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g,n+3}
&=\quad \left[\tau_{0}^{4} \tau_{l} \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g-1,n+5} +\\
+\;8 \sum_{\genfrac{}{}{0pt}{}{g_{1}+g_{2}=g}{I\amalg J=\{1,\ldots, n\}}}
&\left[ \tau_{0}^{2} \tau_{l} \; \prod_{i\in I } \tau_{d_{i}}\right]_{g_{1},|I|+3}
\cdot \quad\left[\tau_{0}^{2}\prod_{i\in J} \tau_{d_{i}}\right]_{g_{2},|J|+2}+\\
+\; 4\sum_{\genfrac{}{}{0pt}{}{g_{1}+g_{2}=g}{I\amalg J=\{1,\ldots, n\}}}
&\left[ \tau_{0}\tau_{l} \; \prod_{i\in I } \tau_{d_{i}}\right]_{g_{1},|I|+2}
\cdot\quad\left[\tau_{0}^{3}\prod_{i\in J} \tau_{d_{i}}\right]_{g_{2},|J|+3}.
\end{align*}
\noindent
$$
{\rm ({\bf II})}\quad (2g-2+n) \left[ \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g,n}= \quad\frac{1}{2} \sum_{l=1}^{3g-2+n} \frac{(-1)^{l-1}\, l\,\pi^{2l-2}}{(2l+1)!} \left[ \tau_{l} \; \prod_{i=1}^{n} \tau_{d_{i}}\right]_{g,n+1}.\quad
$$
\noindent
({\bf III})
Put $a_{i} =(1-2^{1-2i})\,\zeta(2i),$ where $\zeta$ is the Riemann zeta function and $i\in\mathbb{Z}_{\geq 0}$.
Then
$$
[ \tau_{d_{1}}\ldots\tau_{d_{n}}]_{g,n}= {A}_{{d}} + {B}_{{d}}+ {C}_{{d}},
$$
where
\begin{align}
\label{A}
{A}_{{d}}=&8 \; \sum_{j=2}^{n} \sum_{l=0}^{d_{0}} (2d_{j}+1) \; a_{l} \left[\tau_{d_{1}+d_{j}+l-1} \prod_{i\not=1,j} \tau_{d_{i}}\right]_{g,n-1},\\
\label{B}
{B}_{{d}}= &16 \;\sum_{l=0}^{d_{0}} \sum_{\genfrac{}{}{0pt}{}{k_{1}+k_{2}=}{=l+d_{1}-2}} a_{l} \left[\tau_{k_{1}} \tau_{k_{2}} \prod_{i\not=1} \tau_{d_{i}}\right]_{g-1,n+1},\\
\label{C}
{C}_{{d}}=&16\sum_{\genfrac{}{}{0pt}{}{g_{1}+g_{2}=g}{I\amalg J=\{1,\ldots, n\}}} \sum_{l=0}^{d_{0}} \sum_{\genfrac{}{}{0pt}{}{k_{1}+k_{2}=}{=l+d_{1}-2}} a_{l} \; \left[\tau_{k_{1}} \prod_{i\in I } \tau_{d_{i}}\right]_{g_1,|I|+1} \cdot \left[\tau_{k_{2}} \prod_{i\in J} \tau_{d_{i}}\right]_{g_2,|J|+1}.
\end{align}
\noindent
{\bf Basic properties of the sequence $\{a_{i}=(1-2^{1-2i})\,\zeta(2i)\}$.}
It is easy to check that for $i \geq 1$
$$a_i =\frac{1}{(2i-1)!} \int_{0}^{\infty} \frac{t^{2i-1}}{1+e^{t}}\; dt$$
and
\begin{equation}\label{adif}
a_{i+1}-a_{i}=\int_{0}^{\infty} \frac{1}{(1+e^{t})^{2}} \left(\frac{t^{2i+1}}{(2i+1)!}+\frac{t^{2i}}{(2i)!}\right) dt.
\end{equation}
\begin{lemm}\label{aa}
The sequence $\{a_{i}\}_{i=1}^{\infty}$ is increasing.
Moreover,
\begin{enumerate}[(i)]
\item
$$\lim_{i\rightarrow \infty} a_{i}=1,\qquad \sum_{i=0}^{\infty} (a_{i+1}-a_{i})=\frac{1}{2},$$
\item
$a_{i+1}-a_i$ is of order $1/2^{2i}$, i.e., there exists $C>0$ such that
\begin{equation}\label{abound}
\frac{1}{C\cdot 2^{2i}}<a_{i+1}-a_{i} <\frac{C}{2^{2i}},
\end{equation}
\item
\begin{equation}\label{asums}
\sum_{i=0}^{\infty} i (a_{i+1}-a_{i})=\frac{1}{4},
\end{equation}
\item for $j \in {\mathbb Z}, j \geq 2,$ the sum
$$\sum_{i=0}^{\infty} i^{j} (a_{i+1}-a_{i})$$
is a polynomial in $\pi^2$ of degree $[j/2]$ with rational coefficients.
\end{enumerate}
\end{lemm}
\noindent
{\bf Proof.}
Both $(i)$ and $(ii)$ easily follow from the definition of $a_{i}$ and $(\ref{adif})$.
As for $(iii)$, let
$$S_{1}=\sum_{i=0}^{\infty} \int_{0}^{\infty} \frac{1}{(1+e^{t})^{2}}\cdot \frac{t^{2i+1}}{(2i+1)!}\,dt$$
and
$$S_{2}= \int_{0}^{\infty} \frac{t \; e^t}{(1+e^{t})^{2}} dt.$$
We have
$$S_{1}= \int_{0}^{\infty} \frac{(e^{t}-e^{-t})\,dt}{2(1+e^{t})^{2}}= -\frac{1}{2}+\log 2\,,\qquad S_{2}=\log 2,$$
so that from $(\ref{adif})$, $$2 \sum_{i=1}^{\infty} i(a_{i+1}-a_{i})+S_{1}=S_{2},$$ which implies $(\ref{asums}).$
We will prove $(iv)$ by induction in $j$. The base case $j=1$ was checked in $(iii)$; for the inductive step, we observe that
\begin{align*}
\sum_{i=0}^{\infty} \frac{i^{j}\; t^{2i+1}}{(2i+1)!}&=(t D t^{-1})^j(\sinh t)
=\sum_{l=0}^j t^l(a_{1,j}^{(l)}\cosh t + b_{1,j}^{(l)}\sinh t),\\
\sum_{i=0}^{\infty} \frac{i^{j} t^{2i}}{(2i)!}&=D^j (\cosh t)=\sum_{l=0}^j t^l(a_{2,j}^{(l)}\cosh t + b_{2,j}^{(l)}\sinh t),
\end{align*}
where $D=\frac{t}{2}\cdot\frac{d}{dt}$, and the coefficients $a_{*,j}^{(l)},b_{*,j}^{(l)}$ are rational numbers. A standard computation shows that
\begin{align*}
&\int_{0}^{\infty} \frac{t^{l} \; e^{-t}}{(1+e^{t})^{2}} dt= l! (1-2 (1-2^{-l})\zeta(l+1) + (1-2^{1-l})\zeta(l)),\\
&\int_{0}^{\infty} \frac{t^{l} \; e^{t}}{(1+e^{t})^{2}} dt= l! (1-2^{1-l})\zeta(l).
\end{align*}
From here we see that the values of the zeta function at odd $l=2k+1$ do not contribute to the sum in $(iv)$ if and only if
\begin{equation}\label{cond}
(2k+1)\cdot(a_{1,j}^{(2k+1)}+a_{2,j}^{(2k+1)})-a_{1,j}^{(2k)}-a_{2,j}^{(2k)}+b_{1,j}^{(2k)}+b_{2,j}^{(2k)}=0.
\end{equation}
The condition (\ref{cond}) is not hard to verify by induction from $j$ to $j+1$. Thus, only the values of the zeta function at even $l$ contribute to the sum.
$\Box$
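The numerical facts about the sequence $\{a_i\}$ established above are easy to check directly. The following self-contained Python sketch (purely illustrative, not part of the argument) computes $a_i$ as the Dirichlet eta value $\eta(2i)=\sum_{m\geq 1}(-1)^{m-1}m^{-2i}$, which coincides with $(1-2^{1-2i})\zeta(2i)$, and verifies the monotonicity, the limit, the telescoping sum from part $(i)$, and the geometric decay rate from part $(ii)$ of Lemma \ref{aa}.

```python
import math

def a(i, terms=20000):
    # a_i = (1 - 2^(1-2i)) * zeta(2i) = eta(2i), the Dirichlet eta function
    if i == 0:
        return 0.5  # limiting value: (1 - 2) * zeta(0) = (-1) * (-1/2)
    return sum((-1.0) ** (m - 1) * float(m) ** (-2 * i) for m in range(1, terms))

# known closed forms: a_1 = pi^2/12, a_2 = 7 pi^4/720
assert abs(a(1) - math.pi ** 2 / 12) < 1e-6
assert abs(a(2) - 7 * math.pi ** 4 / 720) < 1e-9

vals = [a(i) for i in range(31)]
# the sequence increases to 1 (Lemma (i)); strict comparison only while
# the differences stay above double-precision resolution
assert all(vals[i] < vals[i + 1] for i in range(25))
assert abs(vals[30] - 1.0) < 1e-12
# telescoping: sum of (a_{i+1} - a_i) equals 1 - a_0 = 1/2 (Lemma (i))
assert abs(sum(vals[i + 1] - vals[i] for i in range(30)) - 0.5) < 1e-9
# the differences decay like 4^{-i} (Lemma (ii)): consecutive ratios are near 4
ratio = (vals[11] - vals[10]) / (vals[12] - vals[11])
assert 2.0 < ratio < 8.0
```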
\noindent
{\bf References}.
\begin{itemize}
\item The relationship between the Weil-Petersson volumes and the intersection numbers of $\psi$-classes on $\overline{\mathcal{M}}_{g,n}$ is discussed in \cite{W} and \cite{AC}.
An explicit formula for the volumes in terms of the intersections of $\psi$-classes was given in \cite{KMZ:w}, cf. also \cite{MZ}.
\item Recursion $({\bf Ia})$ is a special case of Proposition $3.3$ in \cite{LX:higher}.
Similarly, recursion $({\bf Ib})$ is a simple corollary of Propositions $3.3$ and $3.4$ in
\cite{LX:higher}.
\item For different proofs of $({\bf II})$ see \cite{DN:cone} and \cite{LX:higher}. In terms of the volume polynomial $V_{g,n}(L)$, recursion
${(\bf II)}$ can be written as follows (\cite{DN:cone}):
$$\frac{\partial V_{g,n+1}}{\partial L_{n+1}}(L_1,\ldots, L_n, 2\pi \sqrt{-1}) = 2\pi \sqrt{-1} (2g-2+ n)V_{g,n}(L_1,\ldots, L_n).$$
When $n=0,$
$$V_{g,1}(2\pi \sqrt{-1})=0,$$
and
\begin{equation}\label{zero}
\frac{\partial V_{g,1}}{\partial L} (2\pi \sqrt{-1}) = 2\pi \sqrt{-1} (2g-2)V_{g,0}.
\operatorname{Ext}nd{equation}
\item For a proof of $({\bf III})$ see \cite{M:In}; note that $({\bf III})$ applies only when $n>0$ (in case of $n=0,$ formula $(\ref{zero})$ gives the necessary estimates on the growth of $V_{g,0}$). In fact, $({\bf III})$ can be interpreted as a recursive formula for the volumes of moduli spaces $\mathcal{M}_{g,n}(L)$
of hyperbolic surfaces of genus $g$ with $n$ geodesic boundary components of lengths $L_{1},\ldots,L_{n}$
that describes a removal of a pair of pants on a surface containing at least one of its boundary
components. Although $({\bf III})$ is written here in purely
combinatorial terms, it is related to the topology
of different pant decompositions of a surface,
cf. also \cite{M:S} and \cite{LX:M}.
\item When $d_{1}+\ldots+d_{n}=3g-3+n,$ recursion $({\bf III})$ reduces to the Virasoro constraints for the intersection numbers of $\psi_{i}$-classes predicted by Witten \cite{W}, cf. also \cite{MuS}.
For different proofs and discussions of these relations see, \cite{Ko:int}, \cite{OP}, \cite{M:JAMS}, \cite{KL:W}, and \cite{EO:WK}.
\item In this paper, we are mainly interested in the intersection numbers of $\kappa_{1}$ and $\psi_{i}$ classes. For generalizations of $({\bf III})$ to the case of intersection numbers involving higher Mumford's $\kappa$-classes see \cite{LX:higher}, \cite{E:M} and \cite{Ka}.
\end{itemize}
\end{section}
\begin{section}{Asymptotics of intersection numbers when $n$ is fixed}
In this section, we prove Theorem $\ref{11}$.
This theorem implies that there exists $C\in (0,\infty)$ such that
\begin{equation}\label{first}
\lim_{g\rightarrow \infty} \frac{V_{g,n} \sqrt{g}}{(2g-3+n)!\,(4\pi^2)^{2g-3+n}}=C.
\end{equation}
This result will be generalized in $\S \ref{error}.$
We recall that for any $n \geq 0$ the results obtained in \cite{M:large} yield
\begin{equation}\label{f1}
\frac{V_{g,n}}{8\pi^2 g \; V_{g-1,n+1}}= 1+O\left(\frac{1}{g}\right)
\end{equation}
and
\begin{equation}\label{f2}
\frac{V_{g,n}}{ V_{g-1,n+2}}= 1+O\left(\frac{1}{g}\right)
\end{equation}
as $g\to\infty$. The main ingredient of the proof is the following property of the intersection numbers:
\begin{equation}\label{inequality:basic}
[\tau_{d_{1}}\ldots\tau_{d_{n}}]_{g,n} \leq [\tau_0^n]_{g,n}=V_{g,n}
\end{equation}
for any $d=(d_1,\ldots, d_n)$. Moreover,
\begin{equation}\label{simple}
\frac{[\tau_{d_{1}}\ldots \tau_{d_{n}}]_{g,n}}{V_{g,n}}=1+O\left(\frac{1}{g}\right),
\end{equation}
as $g \rightarrow \infty.$
\begin{rema}
{\rm The same result holds if $d_{1},\ldots, d_n$ grow slowly with $g$ in such a way that
$$\frac{d_{1}\ldots d_{n}}{g} \rightarrow 0$$
as $g \rightarrow \infty.$
In particular, $(\ref{simple})$ holds if $d_{i}=O(\log g)$ for each $i=1,\ldots, n$.}
\end{rema}
A stronger statement is formulated below:
\begin{theo}\label{theo:tauk} Let $k,n \geq 1.$ Then
\begin{enumerate}[(i)]
\item
$$ \frac{[\tau_k \,\tau_0^{n-1}]_{g,n}}{V_{g,n}}
=1+\frac{e_{n,k}^{(1)}}{g}+O\left(\frac{1}{g^{2}}\right),$$
as $g \rightarrow \infty$, where
$$e_{n,k}^{(1)}=-\frac{k^2+(n-5/2)k-n/2+3/2}{\pi^2}.$$
\item
\begin{align*}
8n [\tau_{k-1}\,\tau_{0}^{n-2}]_{g,n-1}&< [\tau_{k}\,\tau_0^{n-1}]_{g,n} - [\tau_{k+1}\,\tau_0^{n-1}]_{g,n} \leq\\
& \leq 16\; ((n+k) V_{g,n-1}+ k\, V_{g-1,n+1}).
\end{align*}
\end{enumerate}
\end{theo}
\begin{rema}\label{general:2:error}
{\rm In general, one can show that for $k\geq 1$
$$\frac{[\tau_{k+1}\,\tau_{j_{1}} \ldots
\tau_{j_{s}}\,\tau_{0}^{n-1-s}]_{g,n}}{[\tau_{k}\,\tau_{j_{1}}\ldots\tau_{j_{s}}\,\tau_0^{n-1-s}]_{g,n}}
=1- \frac{2(k+j_1+\ldots+j_s)+n-3/2}{\pi^2g}
+O\left(\frac{1}{g^2}\right).$$}
\end{rema}
{\bf Proof of Theorem $\ref{theo:tauk}$.}
We will need the following simple fact.
Let $\{r_{i}\}_{i=1}^{\infty}$ be a sequence of real numbers and $\{k_{g}\}_{g=1}^{\infty}$ be an increasing sequence of positive integers.
Assume that for all $g$ and $i$ we have $0 \leq c_{i,g} \leq c_{i},$ and
$\lim_{g\rightarrow \infty} c_{i,g}=c_{i}.$
If $\sum_{i=1}^{\infty} | c_{i} r_{i}| < \infty,$ then
\begin{equation}\label{fact}
\lim_{g\rightarrow \infty} \sum_{i=1}^{k_{g}} r_{i} c_{i,g} = \sum_{i=1}^{\infty} r_{i} c_{i}.
\end{equation}
To prove part $(i)$ of the Theorem, it is sufficient to show that
$$\frac{[\tau_{1}\,\tau_{0}^{n-1}]_{g,n}}{[\tau_{0}^n]_{g,n}}=1- \frac{n}{2\pi^2 g}+O\left(\frac{1}{g^2}\right),$$
and for $k \geq 1$
\begin{equation}\label{similar}
\frac{[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}}{[\tau_{k}\,\tau_0^{n-1}]_{g,n}}=1- \frac{2k+n-3/2}{\pi^2 g}+O\left(\frac{1}{g^2}\right).
\end{equation}
Here we use the recursive formula $({\bf III})$ to expand the difference $[\tau_{k}\,\tau_{0}^{n-1}]_{g,n}-[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}$ in terms of the intersection numbers on $\overline{\mathcal{M}}_{g-1,n+1}$, $\overline{\mathcal{M}}_{g,n-1}$ and $\overline{\mathcal{M}}_{g_{1},n_{1}} \times \overline{\mathcal{M}}_{g_{2},n_{2}}$. For the sake of brevity let us put
\begin{equation}\label{Obs:III}
[\tau_{k}\,\tau_{0}^{n-1}]_{g,n}-[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}=\widetilde{{A}}_{k,g,n}+\widetilde{{B}}_{k,g,n}+\widetilde{{C}}_{k,g,n},
\end{equation}
where $\widetilde{{A}}_{k,g,n}$, $\widetilde{{B}}_{k,g,n}$, and $\widetilde{{C}}_{k,g,n}$ are the terms corresponding to $(\ref{A}),$ $(\ref{B})$ and $(\ref{C})$ respectively.
We will evaluate these terms separately.
\noindent
{\bf 1. }{\em Contribution from} ($\ref{A}$).
By $({\bf III})$, the numbers $$[\tau_{k-1}\,\tau_{0}^{n-2}]_{g,n-1}, \ldots, [\tau_{3g+n-4}\,\tau_{0}^{n-2}]_{g,n-1}$$ contribute to $[\tau_{k}\,\tau_{0}^{n-1}]_{g,n}$ and $[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}$.
In fact, it is easy to check that
$$\widetilde{{A}}_{k,g,n}= 8\,(n-1)\left(a_{0}\; [\tau_{k-1}\,\tau_{0}^{n-2}]_{g,n-1}\; +\sum_{i=1}^{3g-3+n-k} (a_{i}-a_{i-1})\, [\tau_{k-1+i}\,\tau_{0}^{n-2}]_{g,n-1}\right).$$
The term $[\tau_{k-1}\,\tau_{0}^{n-2}]_{g,n-1}$ is non-zero only when $k\geq 1.$
In order to calculate the asymptotic behavior of $\widetilde{{A}}_{k,g,n}/ V_{g,n-1}$ we simply apply $(\ref{simple})$ and $(\ref{fact})$.
In view of Lemma $\ref{aa}$, we get that
$$\lim_{g \rightarrow \infty} \frac{\widetilde{{A}}_{k,g,n}}{V_{g,n-1}}= 8 (n-1)\;\left (a_{0} \delta+ \sum_{i=0}^{\infty} (a_{i+1}-a_{i})\right)= 8(n-1) (1/2 \; \delta+1/2),$$
where $\delta=0$ when $k=0,$ and otherwise $\delta=1.$ Thus
\begin{equation}\label{yek}
\frac{\widetilde{{A}}_{k,g,n}}{V_{g,n-1}}= 8(n-1) (1/2 \; \delta+1/2)+ O\left(\frac{1}{g}\right)
\end{equation}
as $g\to\infty$. On the other hand, Lemma \ref{aa} also implies that
\begin{equation}\label{1111}
\widetilde{{A}}_{k,g,n} \leq 8(n-1) V_{g,n-1}.
\end{equation}
\noindent
{\bf 2. }{\em Contribution from} ($\ref{B}$).
Similarly, by $({\bf III})$, the numbers $[\tau_{i}\,\tau_{j}\, \tau_{0}^{n-1}]_{g-1,n+1}$ contribute to $[\tau_{k}\,\tau_{0}^{n-1}]_{g,n}$ (resp. to $[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}$) whenever $i+j\geq k-2$ (resp. $i+j \geq k-1$). To simplify the notation, let
$$T_{m,g,n}= \sum_{i+j=m} [\tau_{i}\,\tau_{j}\,\tau_{0}^{n-1}]_{g-1,n+1}$$
(we assume $T_{m,g,n}=0$ for $m <0$).
Then
$$\widetilde{{B}}_{k,g,n}=16\,( a_{0} T_{k-2,g,n}+ (a_{1}-a_{0}) T_{k-1,g,n}+\ldots+ (a_{i+1}-a_{i}) T_{k-1+i,g,n}+\ldots).$$
Note that by $(\ref{simple})$ as $g \rightarrow \infty,$
$$ \lim_{g\rightarrow \infty} \frac{T_{m,g,n}}{V_{g-1,n+1}}=m+1.$$
Now since $a_{0}=1/2,$ Lemma $\ref{aa}\; (iii)$, together with $(\ref{simple})$ and $(\ref{fact})$, implies for $k\geq 1$
\begin{align}
\lim_{g\rightarrow \infty} \frac{\widetilde{{B}}_{k,g,n}}{V_{g-1,n+1}}&=16\left( a_{0}(k-1)+ \sum_{i=0}^{\infty} (i+k) ( a_{i+1}-a_{i})\right)= \nonumber\\
&=16\left(\frac{k-1}{2}+ \frac{k}{2}+\frac14\right)=16\left( k-\frac14\right),\label{do}\\
\lim_{g\rightarrow \infty} \frac{\widetilde{{B}}_{0,g,n}}{V_{g-1,n+1}}&=16\sum_{i=1}^{\infty} i\, (a_{i+1}-a_{i})=4,\nonumber
\end{align}
and we also have
\begin{equation}\label{222}
\widetilde{{B}}_{k,g,n} \leq 16 k V_{g-1,n+1}.
\end{equation}
\noindent
{\bf 3. }{\em Contribution from} ($\ref{C}$).
By the results obtained in \cite{M:large}
$$ \sum_{\sumind{I \amalg J=\{2,\ldots,n\}}{0\leq g' \leq g}} \frac{ V_{g',|I|+1} \cdot V_{g-g',|J|+1}}{V_{g,n}}= O\left(\frac{1}{g^2}\right).$$
Put
$$S_{k_1,k_2,g,n}= \sum_{\sumind {I \amalg J=\{2,\ldots,n\}}{0\leq g' \leq g}} \; \left[\tau_{k_{1}} \prod_{i\in I } \tau_{0}\right]_{g',|I|+1} \cdot \quad\left[ \tau_{k_{2}} \prod_{i\in J} \tau_{0}\right]_{g-g',|J|+1}.$$
Note that by $(\ref{inequality:basic})$ and recursion $({\bf Ia})$,
$$ S_{k_{1},k_{2},g,n} \leq \sum_{\sumind{I \amalg J=\{2,\ldots,n\}}{0\leq g' \leq g}} V_{g',|I|+1} \cdot V_{g-g', |J|+1} \leq V_{g,n-1}.$$
Therefore, the contribution from the term $(\ref{C})$ in $({\bf III})$ satisfies
\begin{align*}
0& \leq \widetilde{{C}}_{k,g,n} \leq 16\;\sum_{i=0}^\infty (a_{i+1}-a_{i}) \sum_{k_{1}+k_{2}=i+k} S_{k_{1},k_{2},g,n} \leq\\
& \leq 16\sum_{i=0}^\infty (i+k+1) (a_{i+1}-a_{i})\; V_{g,n-1} \leq 16\,(3/4+k/2)\;V_{g,n-1}.
\end{align*}
Using $(\ref{fact})$, as in the cases {\bf 1} and {\bf 2} considered above, we see that the contribution from the term $(\ref{C})$ in $({\bf III})$ becomes small as $g\to\infty$:
\begin{equation}\label{se}
\frac{\widetilde{{C}}_{k,g,n}}{V_{g,n}}=O\left(\frac{1}{g^{2}}\right),
\end{equation}
and
\begin{equation}\label{333}
\widetilde{{C}}_{k,g,n} \leq 16 (1+k) \,V_{g,n-1}.
\end{equation}
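The weighted sums $\sum_{i\geq 0}(i+k)(a_{i+1}-a_{i})=\frac14+\frac{k}{2}$, which enter both the limit $(\ref{do})$ and the bounds above through Lemma $\ref{aa}$, can be checked numerically. The Python sketch below is only an illustration under a finite truncation of the rapidly convergent series.

```python
def a(i, terms=20000):
    # a_i = (1 - 2^(1-2i)) zeta(2i), computed as the Dirichlet eta series
    if i == 0:
        return 0.5
    return sum((-1.0) ** (m - 1) * float(m) ** (-2 * i) for m in range(1, terms))

# differences a_{i+1} - a_i; they decay like 4^{-i}, so i < 40 suffices here
diffs = [a(i + 1) - a(i) for i in range(40)]

for k in range(4):
    weighted = sum((i + k) * d for i, d in enumerate(diffs))
    # sum_{i>=0} (i + k)(a_{i+1} - a_i) = 1/4 + k/2
    assert abs(weighted - (0.25 + 0.5 * k)) < 1e-6
```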
Now, in view of $(\ref{f1})$, $(\ref{f2})$ and $(\ref{simple})$, equations $(\ref{yek})$, $(\ref{do})$ and $(\ref{se})$ imply that for $k\geq 1$
$$1-\frac{[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}}{[\tau_{k}\,\tau_{0}^{n-1}]_{g,n}}= \frac{2k+n-3/2}{\pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^2}\right)$$
and
$$1-\frac{[\tau_1\,\tau_0^{n-1}]_{g,n}}{[\tau_0^n]_{g,n}}= \frac{n}{2\pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^2}\right).$$
Finally, the inequalities $(\ref{1111}),$ $(\ref{222})$ and $(\ref{333})$ imply part $(ii)$ of the Theorem.
$\Box$
\noindent
{\bf Proof of Theorem \ref{11}}.
We begin by proving $(\ref{110})$. From $({\bf II})$,
$$ \frac{2(2g-2+n) V_{g,n}}{V_{g,n+1}}=
\sum_{l=1}^{3g-2+n} \frac{(-1)^{l-1}l\, \pi^{2l-2}}{(2l+1)!}\cdot \frac{[ \tau_{l}\,\tau_{0}^n]_{g,n+1}}{V_{g,n+1}}.$$
Differentiating $t^{-1}\sin t$ and putting $t=\pi$ we get
$$ \sum_{l=1}^{\infty} \frac{(-1)^{l-1} l \,\pi^{2l-2}}{(2l+1)!}= \frac{1}{2\pi^2}.$$
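These rapidly convergent sine-type series are convenient to verify numerically. The following Python sketch (illustrative only, not part of the proof) checks the identity above together with the next two moments, whose values $-1/4$ and $(1-\pi^2)/8$ follow from further applications of $\frac{t}{2}\frac{d}{dt}$ to $t^{-1}\sin t$ at $t=\pi$.

```python
import math

def S(k, terms=60):
    # S(k) = sum_{l>=1} (-1)^(l-1) * l^k * pi^(2l) / (2l+1)!, rapidly convergent
    return sum((-1) ** (l - 1) * l ** k * math.pi ** (2 * l) / math.factorial(2 * l + 1)
               for l in range(1, terms))

# the identity in the text: sum (-1)^(l-1) l pi^(2l-2)/(2l+1)! = 1/(2 pi^2),
# i.e. S(1) = 1/2 after multiplying through by pi^2
assert abs(S(1) - 0.5) < 1e-9
assert abs(S(1) / math.pi ** 2 - 1 / (2 * math.pi ** 2)) < 1e-9
# higher moments, obtained by repeated differentiation of sin(t)/t at t = pi
assert abs(S(2) - (-0.25)) < 1e-9
assert abs(S(3) - (1 - math.pi ** 2) / 8) < 1e-9
```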
Now we can use $(\ref{fact})$ and Theorem $\ref{theo:tauk}$ to calculate the error term in $(2g-2+n) V_{g,n}/V_{g,n+1}-1/(4\pi^{2}).$
Clearly,
\begin{equation}\label{Obs:II}
\frac{(2g-2+n) V_{g,n}}{V_{g,n+1}}-\frac{1}{4\pi^2} =
\frac{1}{2}\sum_{l=1}^{3g-2+n} \frac{(-1)^{l-1} l\, \pi^{2l-2}}{(2l+1)!} \cdot\left(\frac{[ \tau_{l}\,\tau_{0}^n]_{g,n+1}}{V_{g,n+1}}-1\right).
\end{equation}
Then Theorem \ref{theo:tauk}, $(i)$ implies that
\begin{align*}
\frac{(2g-2+n) V_{g,n}}{V_{g,n+1}}&-\frac{1}{4\pi^{2}}=\\
= &- \sum_{l=1}^{\infty}\frac{ (-1)^{l-1} (l^2+(n-3/2)l-n/2+1)\,l\, \pi^{2l-2}}{2 (2l+1)! \pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^{2}}\right).
\end{align*}
On the other hand,
$$ \sum_{l=1}^{\infty} \frac{(-1)^{l-1} \,l\, (l^2+(n-3/2)l-n/2+1)\, \pi^{2l-2}}{(2l+1)!}= -\frac{4n+\pi^2-8}{8\pi^2}.$$
Hence,
\begin{equation}\label{eq:n}
\frac{4\pi^2 (2g-2+n)\; V_{g,n}}{V_{g,n+1}}= 1+\frac{4n+\pi^2-8}{4\pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^{2}}\right),
\end{equation}
and
$$ \frac{8\pi^2g\; V_{g,n}}{V_{g,n+1}}=1+\left(\left(\frac{1}{\pi^2}-\frac{1}{2}\right)n+\frac{5}{4}-\frac{2}{\pi^2}\right)\cdot\frac{1}{g}+ O\left(\frac{1}{g^{2}}\right).$$
We proceed with proving $(\ref{111})$.
First, we will check this estimate when $n \geq 2$, that is,
$$ \frac{V_{g-1,n+4}} {V_{g,n+2}}=1-\frac{2n+1}{ \pi^2}\cdot\frac{1}{ g}+O\left(\frac{1}{g^2}\right).$$
From the recursion $({\bf Ia})$ we get
\begin{align}\frac{V_{g-1,n+4}}{V_{g,n+2}}&=
\frac{[ \tau_{1}\,\tau_{0}^{n+1}]_{g,n+2} }{V_{g,n+2}}-\nonumber\\ \label{Obs:I}
&-\frac{6}{V_{g,n+2}} \sum_{\sumind{g_1+g_2=g}{I\amalg J=\{1,\ldots, n\}}} V_{g_{1},|I|+2} \cdot V_{g_{2},|J|+2}.
\end{align}
On the other hand, by Theorem $\ref{theo:tauk}$ we have for $k=1$
$$\frac{[\tau_{1}\,\tau_{0}^{n+1}]_{g,n+2}}{[\tau_{0}^{n+2}]_{g,n+2}}=1- \frac{n+2}{2\pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^2}\right),$$
and $(\ref{f2})$ implies
\begin{align*}
\frac{1}{V_{g,n+2}} \sum_{\sumind{g_{1}+g_{2}=g}{ I\amalg J=\{1,\ldots, n\}} } V_{g_{1},|I|+2} \cdot V_{g_{2},|J|+2}
&= 2 n\; \frac{V_{g,n+1}}{V_{g,n+2}} +O\left(\frac{1}{g^{2}}\right)\\
&= \frac{2 n}{8\pi^{2}} \cdot\frac{1}{g}+O\left(\frac{1}{g^{2}}\right).
\end{align*}
Hence
$$ \frac{V_{g-1,n+4}}{V_{g,n+2}}=1- \left(\frac{n+2}{2\pi^2}+ \frac{12\; n}{8\pi^{2}}\right)\cdot\frac{1}{g}+ O\left(\frac{1}{g^{2}}\right)= 1-\frac{2n+1} {\pi^2 g}+O\left(\frac{1}{g^{2}}\right). $$
The remaining cases $n=0,\,1$ follow from (\ref{110}) and (\ref{111}) for $n\geq2$. For instance, if $n=1$
$$
\frac{V_{g-1,3}}{V_{g,1}}=\frac{V_{g-1,4}}{V_{g,2}}\cdot\frac{V_{g,2}}{V_{g,1}}\cdot\frac{V_{g-1,3}}{V_{g-1,4}}
=1+\frac{1}{\pi^2}\cdot\frac{1}{g}+O\left(\frac{1}{g^2}\right).
$$
The case $n=0$ can be treated similarly.
$\Box$
Theorem \ref{11} immediately implies Corollary $\ref{c}$ about the asymptotic behavior of the ratio
$V_{g+1,n}/V_{g,n}$ (see Introduction). An important consequence of Corollary $\ref{c}$, explained in Remark $\ref{rem2}$, is formula $(\ref{first})$ announced at the beginning of this Section.
As a byproduct of this statement we also get the following estimate that we will need later:
\begin{lemm}\label{estimate:upperbound}
Fix $n_{1}, n_{2},s \geq 0.$ Then
\begin{equation}
\sum_{\sumind{g_{1}+g_{2}=g}{2g_{i}+n_{i} \geq s,\; i=1,2}} V_{g_1,n_1} \cdot V_{g_{2},n_{2}}= O\left(\frac{V_{g,n_{1}+n_{2}}}{g^{s}}\right).
\end{equation}
\end{lemm}
\end{section}
\begin{section}{Error terms in the asymptotics expansions}\label{error}
In this section, we prove Theorem $\ref{theo:main:fixn}$ using the following results:
\begin{theo}\label{theo:general:1} We have the following asymptotic expansions as $g \rightarrow \infty:$
\begin{enumerate}[(i)]
\item
Given the integers $n, s\geq 1,$ and ${d}=(d_{1},\ldots, d_{n})$, there exist $e_{n,{d}}^{(1)},\ldots, e_{n,{d}}^{(s-1)}$ independent of $g$ such that
\begin{equation}
\frac{[\tau_{d_1} \ldots \tau_{d_n}]_{g,n}}{V_{g,n}}= 1+\frac{e_{n,{d}}^{(1)}}{g}+\ldots+\frac{e_{n,{d}}^{(s-1)}}{g^{s-1}}
+O\left(\frac{1}{g^{s}}\right).\label{e}
\end{equation}
\item Given $n\geq 0,$ $s\geq 1$, there exist $a_{n}^{(i)},\;b_{n}^{(i)},\; i=1,\ldots, s-1,$ independent of $g$ such that
\begin{align}
\frac{4\pi^2 (2g-2+n)V_{g,n}}{V_{g,n+1}} = &1+ \frac{a_{n}^{(1)}}{g}+\ldots+\frac{a_{n}^{(s-1)}}{g^{s-1}}
+O\left(\frac{1}{g^{s}}\right),\label{theo:n}\\
\frac{V_{g,n}} {V_{g-1,n+2}}=&1+ \frac{b_{n}^{(1)}}{g}+\ldots+\frac{b_{n}^{(s-1)}}{g^{s-1}}+O\left(\frac{1}{g^{s}}\right).\label{theo:g}
\end{align}
\end{enumerate}
\end{theo}
The coefficients of the above asymptotic expansions (\ref{e})--(\ref{theo:g}) can be characterized more precisely:
\begin{theo}\label{theo:general:2}
We have
\begin{enumerate}[(i)]
\item For any fixed $n$ and $d$ the coefficient $e_{n,{d}}^{(i)}$ is a polynomial in ${\mathbb Q}[\pi^{-2}, \pi^{2}]$ of degree at most $i.$
\item Each $a_{n}^{(i)}$ and $b_{n}^{(i)}$ is a polynomial in ${\mathbb Q}[\,n,\,\pi^{-2}, \pi^2]$ of degree $i$ in $n$ and of degree at most $i$ in $\pi^{-2}$ and $\pi^2$.
\end{enumerate}
\end{theo}
\begin{rema}
{\rm In the simplest case $[\tau_0\tau_k]_{g,2}/V_{g,2}$ we have the following expansions:
\begin{align*}
\frac{[\tau_0\tau_1]_{g,2}}{V_{g,2}}
&=1-\frac{1}{\pi^2g}+\left(\frac{1}{64}-\frac{5}{6\pi^2}+\frac{1}{\pi^4}\right)\cdot\frac{1}{g^2}+O\left(\frac{1}{g^3}\right),\\
\frac{[\tau_0\tau_2]_{g,2}}{V_{g,2}}
&=1-\frac{7}{2\pi^2g}+\left(\frac{1}{64}-\frac{13}{6\pi^2}+\frac{1}{\pi^4}\right)\cdot\frac{1}{g^2}+O\left(\frac{1}{g^3}\right),\\
\frac{[\tau_0\tau_k]_{g,2}}{V_{g,2}}
&=1-\frac{2k^2-k+1}{2\pi^2g}+\left(\frac{k^4}{2\pi^4}-\frac{13k^3}{6\pi^4}-\left(\frac{1}{2\pi^2}-\frac{27}{8\pi^4}\right)\cdot k^2\right.\\
&+\left.\left(\frac{1}{24\pi^2}-\frac{59}{24\pi^4}\right)\cdot k+\left(\frac{1}{64}-\frac{1}{4\pi^2}+\frac{19}{8\pi^4}\right)\right)\cdot\frac{1}{g^2}+O\left(\frac{1}{g^3}\right).
\end{align*}
We see that no (positive) powers of $\pi^2$ appear in these expansions.
The term of order $1/g^2$ is a polynomial in $k$ of degree 4 for $k\geq 3$ (computed numerically). However, the general formula is off by $\frac{1}{8\pi^2}+\frac{5}{8\pi^4}$ for $k=1$ and by $\frac{5}{8\pi^4}$ for $k=2$. This is a manifestation of the ``boundary effect'' in recursion ({\bf III}). These results can be proved using
Remark $\ref{general:2:error}$.}
\end{rema}
\begin{rema}\label{rem33}
{\rm Note that a result similar to $(\ref{theo:n})$ and $(\ref{theo:g})$ holds for the inverse ratios $V_{g,n+1}/ (8\pi^2 g V_{g,n})$ and $V_{g-1,n+2}/V_{g,n}.$
This is because of the following simple fact. Let $\{w_{g}\}_{g=1}^{\infty}$ be a sequence of the form
$$w_{g}=1+ \frac{u_1}{g}+\ldots+\frac{u_{s-1}}{g^{s-1}}+O\left(\frac{1}{g^{s}}\right),$$
then
$$ \frac{1}{w_{g}}= 1+ \frac{v_{1}}{g}+\ldots+\frac{v_{s-1}}{g^{s-1}}+O\left(\frac{1}{g^{s}}\right),$$
where each $v_{i}$ is a polynomial in $u_{1},\ldots, u_{i}$ with integer coefficients. Moreover, if $u_{i}$ is a polynomial of degree $m_{i}$ in $n$, then
$v_k$ is a polynomial of degree at most $\max_{i+j=k}(m_{i}+m_{j})$ in $n$.}
\end{rema}
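For instance, if $w_g=1+u_1/g+u_2/g^2+O(1/g^3)$, then $1/w_g=1-u_1/g+(u_1^2-u_2)/g^2+O(1/g^3)$, so $v_1=-u_1$ and $v_2=u_1^2-u_2$. A short Python sketch (illustrative only, with arbitrary sample coefficients) confirms this numerically:

```python
# sample coefficients of w_g = 1 + u1/g + u2/g^2 (hypothetical values)
u1, u2 = 0.7, -1.3
# predicted coefficients of the inverse expansion: integer polynomials in u1, u2
v1, v2 = -u1, u1 ** 2 - u2

for g in (10 ** 3, 10 ** 4, 10 ** 5):
    w = 1 + u1 / g + u2 / g ** 2
    approx = 1 + v1 / g + v2 / g ** 2
    # agreement up to O(1/g^3): the discrepancy shrinks like g^{-3}
    assert abs(1 / w - approx) < 10 / g ** 3
```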
Let us first outline some general ideas underlying the proofs of Theorems $\ref{theo:general:1}$ and $\ref{theo:general:2}$.
All proofs are by induction in $s$ and are similar to each other. We basically follow the same steps as in the course of proving Theorems $\ref{theo:tauk}$ and $\ref{11}$.
\begin{rema}\label{rem3}
{\rm Let $f: {\mathbb Z}_+\rightarrow {\mathbb R}$ be a function such that $\lim_{g \rightarrow \infty} f(g)$ exists. We say that $f$ has an expansion up to $O(1/g^{s})$ if
there exist $e_{0},\ldots,e_{s-1} \in {\mathbb R}$ so that
$$f(g)=e_{0}+\frac{e_{1}}{g}+\ldots+ \frac{e_{s-1}}{g^{s-1}}+O\left(\frac{1}{g^{s}}\right).$$
Note that if $f_{1},\ldots,f_{k}$ all have expansions of order $s,$ the expansion of the product $f_{1}\cdots f_{k}$ up to $1/g^{s}$ can easily be calculated in terms of the expansions of $f_{i}.$
In this section, we are interested in the expansions of the ratios $\frac{V_{g,n}}{V_{g-1,n+2}}$,
$\frac{4\pi^2(2g-2+n) V_{g,n}}{V_{g,n+1}}$
and
$\frac{[\tau_{d_1}\ldots\tau_{d_{n}}]_{g,n}}{V_{g,n}}. $ Some remarks are in order:
\noindent {\bf 1.}
In general, given $g', n'$, in order to obtain the expansion of $\frac{V_{g-g',n-n'}}{V_{g,n}}$ up to $O(1/g^s),$ it is enough to know the expansions of
$\frac{V_{g,k}}{V_{g-1,k+2}}$ and
$\frac{4\pi^2 (2g-2+k) V_{g,k}}{V_{g,k+1}}$ up to $O(1/g^{s-2g'-n'});$ this is simply because
\begin{align}\label{subsurface}
\frac{V_{g-g',n-n'}}{V_{g,n}}=& \prod_{j=-n'+1}^{2g'} \frac{4 \pi^2 (2g-2g'+n-j+1)V_{g-g',n+j-1}}{V_{g-g',n+j}}\,\cdot \nonumber\\
&\cdot \prod_{j=1}^{g'} \frac{V_{g-j,n+2j}}{V_{g-j+1,n+2j-2}} \cdot \prod_{j=-n'+1}^{2g'} \frac{1}{4 \pi^2 (2g-2g'+n-j+1)}.
\end{align}
\noindent
{\bf 2.} Following $({\bf Ia})$ and $({\bf II})$, the expansion of $\frac{4\pi^2(2g-2+n) V_{g,n}}{V_{g,n+1}}$ up to $O(1/g^s)$ can be
written explicitly in terms of the expansion of $\frac{[\tau_{l}\tau_{0}^{n}]_{g,n+1}}{V_{g,n+1}}$ up to $O(1/g^{s})$; see (\ref{Obs:II}).
Similarly, by $(\ref{Obs:III})$ and $(\ref{subsurface})$ the expansion of $\frac{V_{g,n}}{V_{g-1,n+2}}$ up to $O(1/g^{s})$
can be written in terms of the expansions of $\frac{V_{g,n_1}}{V_{g-1,n_1+2}}$ and $\frac{4\pi^2(2g-2+n_1) V_{g,n_1}}{V_{g,n_1+1}}$ up to
$O(1/g^{s-1}),$ and the expansion of $\frac{[\tau_{1}\tau_0^{n-1}]_{g,n}}{V_{g,n}}$ up to $O(1/g^s).$
\noindent
{\bf 3.} In view of $({\bf III}),$ the expansion of $\frac{[\tau_{d_1}\ldots\tau_{d_{n}}]_{g,n}}{V_{g,n}}$ up to $O(1/g^{s})$
can be written in terms of the expansions of $\frac{[\tau_{c_{1}}\ldots\tau_{c_{m}}]_{g,m}}{V_{g,m}}$
up to $O(1/g^{s-1})$ and the expansions of $\frac{V_{g,n}}{V_{g-1,n+2}}$ and $\frac{4\pi^2(2g-2+n) V_{g,n}}{V_{g,n+1}}$
up to $O(1/g^{s-1}).$ Actually, for our purposes
it will be enough to obtain the expansion of
$\frac{[\tau_{d_{1}+1}\,\ldots \tau_{d_n}]_{g,n}}{[\tau_{d_{1}}\,\ldots \tau_{d_n}]_{g,n}}$ up to
$O(1/g^s).$ As in $(\ref{Obs:III})$, we put
$$[\tau_{d_{1}}\,\ldots \tau_{d_n}]_{g,n}-[\tau_{d_{1}+1}\,\ldots \tau_{d_n}]_{g,n}=\widetilde{{A}}_{{d},g,n}+\widetilde{{B}}_{{d},g,n}+\widetilde{{C}}_{{d},g,n},$$
where $\widetilde{A}_{d,g,n}$, $\widetilde{{B}}_{{d},g,n}$, and $\widetilde{{C}}_{{d},g,n}$ are the terms corresponding to $(\ref{A}),$ $(\ref{B})$ and $(\ref{C})$. Put
\begin{equation}\label{eq:main}
\frac{[\tau_{d_{1}}\,\ldots \tau_{d_n}]_{g,n}-[\tau_{d_{1}+1}\,\ldots \tau_{d_n}]_{g,n}}{V_{g,n}}= S_{1}+S_{2}+S_{3}\;,
\end{equation}
where
\begin{align*}
S_1=&\frac{1}{4 \pi^2 (2g-3+n)}\cdot \frac{4\pi^2 (2g-3+n)V_{g,n-1}}{V_{g,n}}\cdot\frac{\widetilde{{A}}_{{d},g,n}}{V_{g,n-1}},\\
S_2= &\frac{1}{4\pi^2 (2g-3+n)}\cdot \frac{4 \pi^2 (2g-3+n)V_{g-1,n+1}}{V_{g-1,n+2}}\cdot \frac{V_{g-1,n+2}}{V_{g,n}}
\cdot\frac{\widetilde{{B}}_{{d},g,n}}{V_{g-1,n+1}},\\
S_3=&\frac{\widetilde{{C}}_{{d},g,n}}{V_{g,n}}.
\end{align*}
Similar to the case ${\bf 1}$ in the proof of Theorem $\ref{theo:tauk}$, we have
\begin{equation}\label{eq:aa}
\frac{\widetilde{{A}}_{{d},g,n}}{V_{g,n-1}}=8\;\sum_{j=2}^{n}(2d_{j}+1)\sum_{i=0}^{3g-3+n-d_{1}} (a_{i}-a_{i-1})\, \frac{[\tau_{d_{1}+d_{j}+i-1}\tau_{d_{2}}\ldots\widehat{\tau_{d_{j}}}\ldots \tau_{d_{n}}]_{g,n-1}}{V_{g,n-1}}
\end{equation}
(the hat means that the corresponding entry is omitted, and $a_{-1}=0$).
The case {\bf 2} of the same proof now reads
\begin{equation}\label{eq:bb}
\frac{\widetilde{{B}}_{{d},g,n}}{V_{g-1,n+1}}=16\,(a_{0} T_{d_{1}-2,g,n}+ (a_{1}-a_{0}) T_{d_{1}-1,g,n}+\ldots+(a_{i+1}-a_{i}) T_{d_{1}-1+i,g,n}+\ldots),
\end{equation}
where
$$T_{m,g,n}=\frac{\sum_{i+j=m} [\tau_{i}\tau_{j}\tau_{d_{2}}\ldots \tau_{d_{n}}]_{g-1,n+1}}{V_{g-1,n+1}}.$$
Similarly, according to $(\ref{C})$, each term in $\widetilde{{C}}_{{d},g,n}$ has the form
\begin{equation}\label{eq:cc}
\sum_{\genfrac{}{}{0pt}{}{k_{1}+k_{2}=}{=l+d_{1}-2}} (a_{l}-a_{l-1}) \; \left[\tau_{k_{1}} \prod_{i\in I } \tau_{d_{i}}\right]_{g',|I|+1} \cdot \left[\tau_{k_{2}} \prod_{i\in J} \tau_{d_{i}}\right]_{g-g',|J|+1},
\end{equation}
where $I \amalg J=\{2,\ldots,n\},$ and $0\leq g' \leq g.$
In order to obtain the expansions of $S_{1}$ and $S_2$, we can use the expansions of ratios
$\frac{[\tau_{c_{1}}\ldots\tau_{c_{n-1}}]_{g,n-1}}{V_{g,n-1}}$ and
$\frac{[\tau_{c_{1}}\ldots\tau_{c_{n+1}}]_{g-1,n+1}}{V_{g-1,n+1}}$ up to $O(1/g^{s-1})$.
What concerns the term $S_3$, by Lemma $\ref{estimate:upperbound}$ each product
$$\left[\tau_{k_{1}} \prod_{i\in I } \tau_{d_{i}}\right]_{g',|I|+1} \cdot \left[\tau_{k_{2}} \prod_{i\in J} \tau_{d_{i}}\right]_{g-g',|J|+1}$$
is $O(V_{g,n}/g^{s+1})$ unless either $2g'-1+|I| <s$ or $2g-2g'+|J|-1 < s$. In these cases we apply
$(\ref{subsurface})$ to obtain the expansion of $S_{3}$ up to $O(1/g^s).$
Then we can use the expansions of $\frac{4\pi^2(2g-2+n)V_{g-1,n+2}}{V_{g-1,n+3}}$ (for $S_1$) and
$\frac{V_{g,n}}{V_{g-1,n+2}},\;\frac{4\pi^2(2g-3+n) V_{g-1,n+1}}{V_{g-1,n+2}}$ (for $S_2$), all up to $O(1/g^{s-1}),$ to get the expansion of $\frac{[\tau_{d_{1}}\,\ldots \tau_{d_n}]_{g,n}-[\tau_{d_{1}+1}\,\ldots \tau_{d_n}]_{g,n}}{V_{g,n}}$ up to $O(1/g^s),$
which will complete the inductive step.}
\end{rema}
\begin{rema}\label{rneed}
{\rm We will need the following basic facts to prove Theorems $\ref{theo:general:1}$ and $\ref{theo:general:2}$:
\begin{enumerate}
\item For any $k\geq 0$ the sum
$$\sum_{l=0}^{\infty} \frac{(-1)^{l} l^{k} \pi^{2l}}{(2l+1)!}$$
is a polynomial in $\pi^{2}$ of degree at most $2[k/2]$ with rational coefficients
(this can be easily seen by expanding $\frac{\sin x}{x}$ in the Taylor series, differentiating it and putting $x=\pi$).
\item For a polynomial $p(x)=\sum_{j=1}^{m} b_{j} x^{j}$ of degree $m$,
$$\tilde{p}(x)= \sum_{i=1}^{\infty} (a_{i+1}-a_{i}) p(x+i)$$
is again a polynomial of degree $m$. The coefficient of $\tilde{p}(x)$ at $x^{j}$
is equal to $\sum_{j+r \leq m} \genfrac{(}{)}{0pt}{}{j+r}{j} b_{j+r} \cdot A(r)$
where $A(r)=\sum_{i=1}^{\infty} i^{r} (a_{i+1}-a_{i}).$
\item Since $\psi_i$ and $\kappa_1=\frac{[\omega_{g,n}]}{2\pi^2}$ are rational classes (i.e., belong to $H^{2}(\overline{\mathcal{M}}_{g,n},{\mathbb Q}),$ cf. \cite{W:H}, \cite{AC}), $$[\tau_{d_{1}}\ldots \tau_{d_{n}}]_{g,n} \in {\mathbb Q}\cdot \pi^{6g-6+2n-2|d|},$$
where $|d|=\sum_{i=1}^{n} d_{i}$. In particular, $V_{g,n}=[\tau_0\ldots\tau_0]_{g,n}$ is a rational multiple of $\pi^{6g-6+2n}$ (\cite{W:H}, see also \cite{M:In} for a different point of view).
\item The function $S_{m}(n)=\sum_{i=1}^{n} i^{m}$ is a polynomial in $n$ of degree $m+1$ with rational coefficients (Faulhaber's formula).
\end{enumerate}}
\end{rema}
\noindent
{\bf Proof of Theorem $\ref{theo:general:1}$}. First, we use $({\bf Ia}),$ $({\bf II})$ and $({\bf III})$ to prove the existence of $e_{n,d}^{(s)}$, $a_{n}^{(s)}$ and $b_{n}^{(s)}$. This is similar to what we did in the proofs of $(\ref{110})$ and $(\ref{111}).$
In fact, instead of $(i)$ we will prove a stronger statement: namely, there exist polynomials $Q_{n}^{{(s)}}(d_1,\ldots,d_n)$
and $q_{n}^{{(s)}}(d_1,\ldots,d_n)$ in variables $d_1,\ldots,d_n$ of degrees $s+1$ and $s$ respectively such that for any $d=(d_{1},\ldots,d_{n})$
\begin{equation}\label{theo:ineq1}
\left|\frac{[\tau_{d_1} \ldots \tau_{d_n}]_{g,n}}{V_{g,n}}-1-\frac{e_{n,{d}}^{(1)}}{g}-\ldots-\frac{e_{n,{d}}^{(s)}}{g^{s}}\right|
\leq \frac{Q_{n}^{(s)}(d_{1},\ldots,d_{n})}{g^{s+1}},
\end{equation}
and
\begin{equation}\label{theo:ineq2}
|e_{n,{d}}^{(s)}| \leq q_{n}^{(s)}(d_{1},\ldots,d_{n}).
\end{equation}
These formulas follow from the two claims below:
\noindent
{\bf Claim $1$}: {\em Formulas $(\ref{theo:ineq1})$ and $(\ref{theo:ineq2})$ for $s=r$ and formulas $(\ref{theo:n}),$ $(\ref{theo:g})$
for $s<r$ imply $(\ref{theo:n})$ and $(\ref{theo:g})$ for $s=r$.}
In fact, from
$(\ref{Obs:II})$, $(\ref{theo:ineq1})$ and $(\ref{theo:ineq2})$ for $d=(l,0,\ldots,0)$ we have
\begin{equation}\label{f11}
\frac{(2g-2+n)V_{g,n}} {V_{g,n+1}}-\frac{1}{4\pi^2}= \frac{a_{n}^{(1)}}{g}+\ldots+\frac{a_{n}^{(s)}}{g^{s}}+O\left(\frac{1}{g^{s+1}}\right),
\end{equation}
where
\begin{equation}\label{a:n:s}
a_{n}^{(s)}= \sum_{l=1}^{\infty} \frac{(-1)^{l-1} l\, \pi^{2l-2}}{(2l+1)!} e_{n,l}^{(s)},
\end{equation}
with $d=(l,0,\dots,0).$ The existence of $a_{n}^{(s)}$ is guaranteed by the estimate
$$ \sum_{l=N}^{\infty} \frac{(-1)^{l-1} l^{k+1}\, \pi^{2l-2}}{(2l+1)!}=O(e^{-N})$$
valid for any $k\geq 0$.
Similarly, we can use $(\ref{Obs:I})$ to evaluate the error term in the expansion of $\frac{V_{g-1,n+4}}{V_{g,n+2}}-1$.
In this case we apply $(\ref{theo:ineq1})$ with $d=(1,0,\ldots,0)$. Note that by Lemma $\ref{estimate:upperbound}$
\begin{align}
\frac{6}{V_{g,n+2}} &\sum_{\sumind{g_1+g_2=g}{I\amalg J=\{1,\ldots, n\}}} V_{g_{1},|I|+2} \cdot V_{g_{2},|J|+2}= \nonumber\\
=&\sum_{2j+i+2 \leq s} \genfrac{(}{)}{0pt}{}{n}{i} \frac{V_{g-j,n+2-i}}{V_{g,n+2}} \times V_{j,i+2}+
O\left(\frac{1}{g^{s+1}}\right).\label{next}
\end{align}
We can now use $(\ref{subsurface})$, and together with
$(\ref{theo:n})$ and $(\ref{theo:g})$ for $s=r$ this yields the expansion of $\frac{V_{g-j,n+2-i}}{V_{g,n+2}}$ up to $O(1/g^{r+1}).$
\noindent
{\bf Claim $2$}. {\em Formulas $(\ref{theo:n})$, $(\ref{theo:g}),$ $(\ref{theo:ineq1})$ and $(\ref{theo:ineq2})$ for $s<r$ imply $(\ref{theo:ineq1})$ and $(\ref{theo:ineq2})$ for $s=r$. }
According to $(\ref{eq:main}),$ we need to evaluate the contributions from the terms $S_{1},$ $S_{2}$ and $S_{3}$ up to $O(1/g^r)$. In view of $(\ref{eq:aa}),$ $(\ref{eq:bb})$ and $(\ref{eq:cc})$ we can use $(\ref{theo:ineq1})$ for $s=r-1$ to obtain the expansions of $\frac{\widetilde{{A}}_{{d},g,n}}{V_{g,n-1}}$ and $\frac{\widetilde{{B}}_{{d},g,n}}{V_{g-1,n+1}}$ up to $O(1/g^{r-1}).$ Formula $(\ref{eq:aa})$ now takes the form
\begin{equation}\label{tf}
\frac{\widetilde{{A}}_{{d},g,n}}{V_{g,n-1}}=8\,\sum_{i=0}^{\infty} (a_{i+1}-a_{i})
\left(1+\frac{e_{n,{d(i)}}^{(1)}}{g}+\ldots+\frac{e_{n,{d(i)}}^{(r-1)}}{g^{r-1}}+E_{d(i),r}\right),
\end{equation}
where $d(i)=(d_{1}+d_{j}+i-1, d_{2},\ldots,\widehat{d_{j}},\ldots, d_{n})$ and $E_{d(i),r} \leq \frac{Q_{n}^{(r-1)}(d(i))}{g^{r}}$. Note that by Lemma $\ref{aa}$, $(ii)$
$$\sum_{i=N}^{\infty} (a_{i+1}-a_{i}) i^k=O(2^{-N}).$$
This allows us to calculate the contribution
from $S_1$ up to $O(1/g^r)$. The other two terms $S_2$ and $S_3$ can be treated in a similar way.
$\Box$
In order to prove Theorem $\ref{theo:general:2}$ we need two auxiliary lemmas:
\begin{lemm}\label{lem:general:1}
\begin{enumerate}[(i)]
\item Fix $k\; (0<k\leq n)$ and $d_{1},\ldots, d_{k}\in \mathbb{Z}_{\geq 0}$. Then for ${d}=(d_{1},\ldots, d_{k}, 0,\ldots,0)$ each
term $e_{n,{d}}^{(s)}$ in the asymptotic expansion (\ref{e}) is a polynomial in $n$ of degree at most $s$.
\item Each $a_{n}^{(s)}$ and $b_{n}^{(s)}$ in (\ref{theo:n}) and (\ref{theo:g}) is a polynomial in $n$ of degree $s$.
\end{enumerate}
\end{lemm}
\noindent
{\bf Proof}.
The proof is again by induction on $s$. We prove a slightly stronger version of $(i)$:
\noindent
($i'$) For given $k$ and $s$, there exist polynomials $q_{j}(d_{1},\ldots,d_{k}),\; j=0,\ldots, s,$ such that the term $e_{n,d}^{(s)}$
has the form
$$e_{n,d}^{(s)}=\sum_{j=0}^{s} e_{d,j} n^{j}$$
with $|e_{d,j}| \leq q_{j}(d_{1},\ldots,d_{k}).$
In other words, the coefficients of $e_{n,d}^{(s)}$ considered as a polynomial in $n$ grow at most polynomially
in $d_1,\ldots, d_k$. This would imply that
$$\sum_{i=1}^{\infty} (a_{i+1}-a_{i}) e_{d(i),j} <\infty,$$
where $d(i)=(d_1+i,d_{2},\ldots,d_{k},0,\ldots,0)$.
Now, by $(\ref{Obs:I})$ and $(\ref{Obs:II})$, the statement ($i'$) for $s=r$ implies part $(ii)$ of the Lemma for $s=r$;
this is clear in view of $(\ref{a:n:s})$ and $(\ref{next})$.
Moreover, part $(ii)$ for $s=r$ and the statement ($i'$) for $s<r$ imply ($i'$) for $s=r.$
This follows from $(\ref{eq:main})$ by analyzing the contributions from the terms $S_{1},$ $S_{2}$ and $S_{3}$
as in $(\ref{eq:aa})$, $(\ref{eq:bb})$ and $(\ref{eq:cc})$.
$\Box$
\begin{lemm}\label{lem:general:2}
\begin{enumerate}[(i)]
\item Let $n$ and $k$ be fixed, and let $d=(d_{1},\ldots,d_{k},d_{k+1},\ldots,d_{n})$ with
$d_{k+1},\ldots, d_{n}$ fixed. Then there exists a polynomial
$P_s\in {\mathbb R}[x_{1},\ldots,x_{k}]$ (depending on $d_{k+1},\ldots, d_{n}$) of degree $2s$ such that
$e_{n,d}^{(s)}= P_s(d_{1},\ldots,d_{k})$ provided $d_{j} \geq 2s$ for $j=1,\ldots,k$. The coefficient at each monomial
$d_{1}^{\alpha_{1}}\cdots d_{k}^{\alpha_k}$ in $P_s$ is a linear rational
combination of
$\pi^{2s-2\left[\frac{|\alpha|+1}{2}\right]},\ldots,\pi^{-2s},$ where $|\alpha|=\alpha_1+\ldots+\alpha_k.$ Moreover,
for arbitrary $d_j$ the difference $e_{n,{d}}^{(s)}- P_s(d_{1},\ldots,d_{k})$ is a linear rational combination of
$\pi^{2s-2|d|},\ldots, \pi^{-2s}.$
\item Each $a_{n}^{(s)}$ and $b_{n}^{(s)}$ in (\ref{theo:n}) and (\ref{theo:g}) is a rational polynomial of degree at most $s$ in ${\mathbb Q}[\pi^2,\pi^{-2}].$
\end{enumerate}
\end{lemm}
\noindent
{\bf Proof.}
The proof is by induction on $s$ and utilizes the same techniques as before,
so we only sketch it here. The statement of the lemma follows from the following claims:
\noindent
{\bf Claim $1$}: {\em Part (ii) for $s<r$ implies that the coefficient at $1/g^{r}$ in the expansion of $V_{g_{1},n_{1}+1}\cdot
V_{g-g_{1},n-n_{1}+1}/V_{g,n}$ is a polynomial of degree at most $r$ in ${\mathbb Q}[\pi^2, \pi^{-2}]$. More precisely, when
$g_{1}, n_{1}$ and $n_2$ are fixed
$$\frac{V_{g_{1},n_{1}+1}\cdot V_{g-g_{1},n-n_{1}+1}}{V_{g,n}}= \sum_{k=2g_{1}+n_1-1}^s \frac{c^{(k)}_{g_1,n_1}}{g^{k}} +
O\left(\frac{1}{g^{s+1}}\right),$$
where $c^{(k)}_{g_1,n_1}$ is a polynomial of degree at most $s$ in ${\mathbb Q}[\pi^2, \pi^{-2}]$.}
\noindent
This is a simple consequence of Remarks $\ref{rneed}$(2), $\ref{rem3}$(2) and formula $(\ref{subsurface})$.
\noindent
{\bf Claim $2$}: {\em Part (i) of the lemma for $s=r$ and part (ii) for $s<r$ imply part (ii) for $s=r$.}
\noindent
Note that by part $(i)$, each element of the infinite sum $(\ref{a:n:s})$ is a polynomial of degree at most $r$ in ${\mathbb Q}
[\pi^2, \pi^{-2}].$ On the other hand, when $l\geq 2r$ the coefficient $e_{n,l}^{(r)}=P_r(l)$ is a polynomial in $l$. Remark
$\ref{rneed}$(1) and the properties of the coefficients of $P_r(l)$ imply that $a_{n}^{(r)}$ is a rational polynomial of degree at
most $r$ in ${\mathbb Q}[\pi^2,\pi^{-2}].$ The analogous statement for $b_{n}^{(r)}$ follows from $(\ref{Obs:I})$, Claim ${\bf 1}$
and part (i) for $s=r$ (when $k=1$ and $d=(1,0,\ldots,0)$).
\noindent
{\bf Claim $3$}: {\em Part (ii) for $s<r$ and part (i) for $s<r$ imply part (i) for $s=r$.}
\noindent
First, we use $({\bf Ib})$ to find the polynomial $P_r$.
We put $P_r(d_{1},\ldots,d_{k})= e^{(r)}_{n+2,d}$ for $d=(0,0,d_{1},\ldots,d_{k},d_{k+1},\ldots,d_{n})$
with $d_{1},\ldots, d_{k}\geq 2r$. Note that the number $[\tau_{l},\tau_{d_1}\ldots\tau_{d_k}]_{g,k+1} \neq 0$ only when
$l \leq 3g-2+k-|d|,\; |d|=d_1+\ldots+d_k.$ On the other hand, by Lemma $\ref{estimate:upperbound}$, the term
$$\left[\tau_{0}^{2} \tau_{l} \; \prod_{i\in I } \tau_{d_{i}}\right]_{g_{1},|I|+3}
\cdot \quad\left[\tau_{0}^{2}\prod_{i\in J} \tau_{d_{i}}\right]_{g_{2},|J|+2}$$
in $({\bf Ib})$ is of order $O(1/g^{s+1})$ unless either $2g_1+|I|+1 <s $ or $2g_2+|J| < s$. Similarly the term
$$ \left[ \tau_{0}\tau_{l} \; \prod_{i\in I } \tau_{d_{i}}\right]_{g_{1},|I|+2}
\cdot \quad\left[\tau_{0}^{3}\prod_{i\in J} \tau_{d_{i}}\right]_{g_{2},|J|+3}$$
in $({\bf Ib})$ is of order $O(1/g^{s+1})$ unless $2g_1+|I| <s $ or $2g_2+|J|+1 < s$. For both terms we can explicitly
calculate the expansions of the factors up to $O(1/g^{s+1})$ if we know their expansions up to $O(1/g^{s}).$
Then Lemma $\ref{estimate:upperbound}$, formula $({\bf Ib})$ and the induction hypothesis imply that
$$ P_s(d_{1}+1,d_2,\ldots,d_{k})= P_s(d_{1},\ldots,d_{k},0,0)+ P'_s(d_{1},\ldots,d_{k})+P''_s(d_{1},\ldots, d_{k}),$$
where $P'_s$ is a polynomial when $d_{1}\geq s-1,d_2\geq s,\ldots, d_{k} \geq s,$ and $P''_s(d_{1},\ldots,d_{k})$
is nontrivial only if $d_{1},\ldots, d_{k}\leq s.$ The result follows from Lemma
$\ref{lem:general:1}(i)$ and Claim ${\bf 1}.$
In the simplest case $k=1,\;d_{2}=\ldots=d_{n}=0$, and $d_1=d\geq 2s$,
the relation $({\bf Ib})$ implies that
$$[\tau_{d}\tau_0^{n-1}]_{g,n}=[\tau_{s}\tau_{0}^{n+2(d-s)-1}]_{g-(d-s),n+2(d-s)}+ Q(d),$$
where $Q$ is a polynomial in $d$.
Next, we use $({\bf III})$ to prove the statement about the coefficients of $P_s$. We explicitly calculate the expansion of
$[\tau_{d_1}\ldots\tau_{d_n}]_{g,n}$
using (\ref{eq:main}) and Remark $\ref{rem3}$. We evaluate the contributions from the terms $S_{1}, S_{2}$ and $S_{3}$ and show that there exist polynomials $Q^{(s)}_{i},\; i=1,2,3,$ such that:
\begin{enumerate}
\item When $d_{1},\ldots, d_{k}$ are large enough (i.e., $\geq 2s$), the coefficient $S^{(s)}_{i}$ at $1/g^{s}$ in $S_i$
is equal to $Q^{(s)}_{i}(d_{1},\ldots,d_{k});$
\item The coefficient at the monomial $d_{1}^{\alpha_{1}}\cdots d_{k}^{\alpha_k}$ in each $Q^{(s)}_{i}$ is a linear rational
combination of $\pi^{2s-2[(|\alpha|+1)/2]},\ldots,\pi^{-2s},$ where $|\alpha|=\alpha_1+\ldots+\alpha_k;$
\item For all $d_{1},\ldots, d_{k}$, the difference $ S_{i}^{(s)}- Q^{(s)}_{i}(d_{1},\ldots,d_{k})$
is a linear rational combination of $\{\pi^{2s-2|d|},\ldots, \pi^{-2s}\}.$
\end{enumerate}
\noindent
{\em Contributions from $S_1$.}
We use the induction hypothesis and Lemma $\ref{aa}$ to expand the ratio $\widetilde{{A}}_{{d},g,n}/V_{g,n-1}$ up to the order $O(1/g^s)$ by expanding each $[\tau_{d_{1}+d_{j}+i-1}\tau_{d_{2}}\ldots\widehat{\tau_{d_{j}}}\ldots \tau_{d_{n}}]_{g,n-1}/V_{g,n-1}$ up to $O(1/g^{s-1}).$
Put $$q^{(m)}(d) =8 \sum_{j=2}^{n} \,\sum_{i=0}^{\infty} (a_{i+1}-a_{i}) e_{n,{d_{j}(i)}}^{(m-1)},$$
where $d_{j}(i)=(d_{1}+d_{j}+i-1,d_{2},\ldots, \widehat{d_j},\ldots, d_{n}).$
Then, following Remark $\ref{rneed}$(2) and $(\ref{tf})$,
$$S_{1}= \sum_{j=1}^{s} \frac{Q_{1}^{(j)}(d)}{g^{j}}+O\left(\frac{1}{g^{s}}\right),$$
where
$$Q_{1}^{(j)}(d)=\sum_{j_{1}+j_{2}=j} q^{(j_1)}(d) \cdot a_{n-1}^{(j_2)}.$$
Now by the induction hypothesis and part $(ii)$ of the lemma for $s<r$, both $q^{(m)}(d)$ and $Q_{1}^{(m)}(d)$
are polynomials in $d_1,\ldots, d_k$ of degree $2m$ whenever $d_{1},\ldots, d_{k}$ are large enough.
The coefficient at $d_{1}^{\alpha_{1}}\ldots d_{k}^{\alpha_{k}}$ in $q^{(m)}(d)$ is a rational linear combination of
the terms of the form $c_{\alpha_j(r)} \sum_{i=0}^{\infty} i^{r} (a_{i+1}-a_{i}),$ where $c_{\alpha_j(r)}$ is the coefficient at
$x_{1}^{r+\alpha_{1}+\alpha_{j}} x_{2}^{\alpha_{2}} \ldots\widehat{x_j^{\alpha_j}}\ldots x_{k}^{\alpha_k}$ in $P^{(m-1)}(x_{1},\ldots,\widehat{x_{j}},\ldots,x_{k})$. Now by Lemma $\ref{aa},\;(iv)$ the sum $\sum_{i=0}^{\infty} i^{r} (a_{i+1}-a_{i})$
is a rational linear combination of $\pi^{2}, \ldots, \pi^{2[r/2]}$, and by the induction hypothesis $c_{\alpha_j(r)}$ is a rational linear combination of $\pi^{2m-2-2r-|\alpha|} \ldots, \pi^{-2m-2}$. Therefore the coefficients of these polynomials
are rational combinations of $\pi^{2m-|{\alpha}|},\ldots, \pi^{-2m}.$
\noindent
{\em Contributions from $S_2$.}
In the same way, the induction hypothesis and Lemma $\ref{aa}$ allow us to expand $\widetilde{{B}}_{d,g,n}/V_{g-1,n+1}$
up to the order of $1/g^s.$ Repeating the proof of case ({\bf 2}) of Theorem $\ref{theo:tauk}$ we see that each term
in the expansion is of the form $(\ref{eq:bb})$.
In the expansion of $T_{m,g,n}$ the term of order $1/g^{m}$
is a polynomial in $d_1,\ldots, d_n$ of degree $m+1,$ whose coefficients satisfy the properties $1,$ $2$, and $3$
mentioned above.
\noindent
{\em Contributions from $S_3$.}
What is different here compared to step {\bf 3} in the proof of Theorem $\ref{theo:tauk}$ is that
$\widetilde{{C}}_{{d},g,n}$ contributes to the terms of order $1/g^{2},\ldots,1/g^{r}.$
However, by Lemma $\ref{estimate:upperbound}$ these contributions can be evaluated using $(\ref{subsurface}).$
More precisely, as in Remark $\ref{rem3}$, we can write $S_3$ as a sum of finitely many elements of the form
\begin{align*}
\frac{\left[\tau_{k} \prod_{i\in I } \tau_{d_{i}}\right]_{g',|I|+1}}{V_{g', |I|+1}}
\cdot &\frac{V_{g-g',|J|+1}\cdot V_{g', |I|+1}}{V_{g,n}}\times\\
\times &\sum_{i=0}^{\infty} (a_{i+1}-a_{i}) \; \frac{\left[\tau_{d_1+i-2-k} \prod_{i\in J} \tau_{d_{i}}\right]_{g-g',|J|+1}}{V_{g-g',|J|+1}},
\end{align*}
where $I \amalg J=\{2,\ldots,n\},$ $0\leq g' \leq g,$ and $2g'-1+|I| \leq r.$ The result now follows from the induction
hypothesis on the behavior of the coefficient at $1/g^{s}$ in the expansion of
$\frac{\left[\tau_{d_1+i-2-k} \prod_{i\in J} \tau_{d_{i}}\right]_{g-g',|J|+1}}{V_{g-g',|J|+1}}$ for $s<r$, together with
Claim $1$ and Remark $\ref{rneed}(2),(3).$
$\Box$
\noindent
{\bf Proof of Theorem $\ref{theo:general:2}$.}
It is easy to see that part $(i)$ of the theorem is a special case of Lemma $\ref{lem:general:2}$ for $n=k.$
Part $(ii)$ is a consequence of Lemmas $\ref{lem:general:2}(ii)$ and $\ref{lem:general:1}(ii).$
$\Box$
\noindent
{\bf Proof of Theorem $\ref{theo:main:fixn}$.}
For a fixed $n \geq 0$, Theorem $\ref{theo:general:1} (ii)$ applied to the obvious identity $\frac{V_{g+1,n}}{V_{g,n}}=\frac{V_{g+1,n}}{V_{g,n+2}}\cdot\frac{V_{g,n+2}}{V_{g,n+1}}\cdot\frac{V_{g,n+1}}{V_{g,n}}$ immediately yields
\begin{align*}
\frac{V_{g+1,n}}{V_{g,n}}= &(4\pi^2)^2(2g+n-1)(2g+n-2)\times\\
\times &\left(1-\frac{1}{2g}\right)\cdot\left(1+ \frac{r^{(2)}_{n}}{g^2}+ \ldots +\frac{r^{(s)}_{n}}{g^s}+ O\left(\frac{1}{g^{s+1}}\right)\right)
\end{align*}
as $g \rightarrow \infty.$
On the other hand, we have
$$ V_{g,1}= \prod_{j=2}^{g-1} \frac{V_{j+1,1}}{V_{j,1}} \cdot V_{2,1},$$
and therefore the result of Theorem $\ref{theo:main:fixn}$ for $n=1$ is a consequence of the following:
\begin{lemm}
Let $a_{2},\ldots, a_{s} \in {\mathbb R}$ and let $\{c_{j}\}_{j=1}^{\infty}$ be a positive sequence with the property
$$ c_{j}=1+\frac{a_{2}}{j^{2}}+\ldots+\frac{a_{s}}{j^{s}}+ O\left(\frac{1}{j^{s+1}}\right).$$
Then there exist $b_{1},\ldots, b_{s-1}$ such that
$$ \prod_{j=1}^{g} c_{j}= C_0 \; \left(1+\frac{b_{1}}{g}+\ldots+
\frac{b_{s-1}}{g^{s-1}}+ O\left(\frac{1}{g^{s}}\right)\right), $$
as $g \rightarrow \infty,$
where $C_0=\prod_{j=1}^{\infty} c_{j}.$ Moreover, $b_{1},\ldots,b_{s-1}$ are polynomials in $a_{2},\ldots, a_{s}$ with rational coefficients.
\end{lemm}
\noindent
{\bf Proof.}
First, we can write
$$
c_{j}=R_{j}\left(1+\frac{d_{2}}{j^2}\right) \ldots \left(1+\frac{d_{l}}{j^l}\right),
$$
where $d_{2},\ldots, d_{l}$ are polynomials in $a_{2},\ldots, a_{l}$ and $R_{j}=1+ O(1/j^{l+1})$ as $j\to\infty$.
One can check that there exists a constant $R$ such that
$$\prod_{j=1}^{g} R_{j}=R \; \left(1+O\left(\frac{1}{g^l}\right)\right)$$
as $g\to\infty$. So it is enough to prove that
$$\prod_{j=1}^{g} \left(1+\frac{d_{k}}{j^{k}}\right)= C_{k} \; \left(1+\frac{p_{k}^{(1)}}{g}+\ldots+
\frac{p_{k}^{(l-1)}}{g^{l-1}}+ O\left(\frac{1}{g^{l}}\right)\right) $$
as $g\to\infty$, where $p_{k}^{(1)},\ldots, p_{k}^{(l-1)}$ are polynomials in $d_{k}$ with rational coefficients, and $C_{k}= \prod_{j=1}^{\infty} \left(1+\frac{d_{k}}{j^{k}}\right),\; k\geq 2.$
It is enough to get bounds for the error term
\begin{equation}\label{BB}
E_{N,k}=\log \prod_{j=N}^{\infty} \left(1+\frac{d_{k}}{j^{k}}\right)= \sum_{j=N}^{\infty} \log\left(1+ \frac{d_k}{j^k}\right).
\end{equation}
Using the Taylor series $\log(1+x)= x-x^{2}/2+x^{3}/3-\ldots$, we expand each term $\log\left(1+ \frac{d_k}{j^k}\right)$ up to the order $j^{-l}$.
Now we make use of the Euler-Maclaurin summation formula (cf. \cite{E}):
\begin{align*}
\zeta(s)=&\sum_{i=1}^{N} \frac{1}{i^{s}}+\frac{1}{(s-1) N^{s-1}}- \frac{1}{2N^s}+\\
+&\sum_{m=1}^{r-1} \frac{s(s+1)\ldots (s+2m-2)B_{2m}}{(2m)! \, N^{s+2m-1}}+E_{2r}(s),
\end{align*}
where $\zeta(s)$ is the Riemann zeta function, $B_{2m}=(-1)^{m+1} 2 (2m)! \frac{\zeta(2m)}{(2 \pi)^{2m}}$
is the $m$th Bernoulli number, and the error term $E_{2r}(s)$ has an estimate
$$|E_{2r}(s)|<\left| \frac{s(s+1)\ldots (s+2r-2)B_{2r}}{({\rm Re}(s) +2r-1)(2r)! \, N^{s+2r-1}}\right|$$
(the formula holds for all $s$ with ${\rm Re}(s)>-2r+1$).
From here we get that for any integer $k>1$
\begin{align*}
\sum_{i=N+1}^{\infty} \frac{1}{i^{k}}=&\frac{1}{(k-1) N^{k-1}}-\frac{1}{2N^k}+\\
+&\sum_{m=1}^{[(l-k)/2]} \frac{B_{2m}\cdot k(k+1)\ldots (k+2m-2)}{(2m)! \; N^{k+2m-1}}
+O\left(\frac{1}{N^l}\right)
\end{align*}
as $N\to \infty$.
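This tail expansion can be verified numerically (a sanity check, not part of the proof; note that with the summation convention $\sum_{i=1}^{N}$ the term $\frac{1}{2N^k}$ enters with a minus sign):

```python
import math

# Numerical check (not from the paper) of the Euler-Maclaurin tail expansion
#   sum_{i=N+1}^inf 1/i^k
#     = 1/((k-1)N^(k-1)) - 1/(2N^k)
#       + sum_m B_{2m} k(k+1)...(k+2m-2) / ((2m)! N^(k+2m-1)) + O(1/N^l).
B = {2: 1 / 6, 4: -1 / 30, 6: 1 / 42}  # Bernoulli numbers B_2, B_4, B_6

def tail_exact(N, k, M=200_000):
    # brute-force tail, truncated at M (truncation error ~ 1/((k-1) M^(k-1)))
    return sum(1.0 / i ** k for i in range(N + 1, M))

def tail_em(N, k, terms=3):
    s = 1.0 / ((k - 1) * N ** (k - 1)) - 1.0 / (2 * N ** k)
    for m in range(1, terms + 1):
        rising = math.prod(range(k, k + 2 * m - 1))  # k(k+1)...(k+2m-2)
        s += B[2 * m] * rising / (math.factorial(2 * m) * N ** (k + 2 * m - 1))
    return s

print(abs(tail_exact(10, 3) - tail_em(10, 3)))  # far below 1/N^3
```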
Therefore, given $l$, there exist $q_{k}^{(k-1)},\ldots ,q_{k}^{(l)}$ such that
$$E_{N,k}= \frac{q_{k}^{(k-1)}}{N^{k-1}}+\ldots+\frac{q_{k}^{(l)}}{N^l}+ O\left(\frac{1}{N^{l+1}}\right),$$
where each $q_{k}^{(i)}$ is a polynomial with rational coefficients in $d_{k}.$ Now we can easily control the error terms $e^{E_{N,k}}$:
$$e^{E_{N,k}}= 1+\frac{p_{k}^{(k-1)}}{N^{k-1}}+\ldots+\frac{p_{k}^{(l)}}{N^l}+ O\left(\frac{1}{N^{l+1}}\right),$$
where $p_{k}^{(j)}$ is a polynomial in $q_{k}^{(i)},\; i=k-1,\ldots,l$, which implies the result.
$\Box$
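The lemma can be illustrated on a concrete (hypothetical) sequence, not taken from the paper: for $c_{j}=1+a_{2}/j^{2}$ one has $b_{1}=-a_{2}$, since $\log\prod_{j>g}c_{j}\approx a_{2}/g$.

```python
# Numerical illustration (hypothetical example, not from the paper) of the lemma
# for c_j = 1 + a2/j^2: here prod_{j<=g} c_j = C0 (1 + b1/g + ...) with b1 = -a2.
a2 = 0.7

def partial(g):
    p = 1.0
    for j in range(1, g + 1):
        p *= 1 + a2 / j ** 2
    return p

C0 = partial(1_000_000)  # good approximation of the infinite product C0
for g in (100, 200, 400):
    print(g * (partial(g) / C0 - 1))  # approaches -a2 = -0.7 as g grows
```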
As a result, we can write
$$V_{g,1} =C\,\frac{(2g-2)!\,(4\pi^2)^{2g-2}}{\sqrt{g}} \left(1+\frac{c_1^{(1)}}{g}+\ldots+\frac{c_1^{(k)}}{g^{k}}+ O\left(\frac{1}{g^{k+1}}\right)\right),$$
as $g \rightarrow \infty.$ On the other hand,
$$\frac{V_{g,n}}{C\cdot C_{g,n}}= \prod_{j=1}^{n-1} \frac{V_{g,j+1}}{4\pi^2 (2g-2+j) V_{g,j}} \cdot \frac{V_{g,1}}{C \cdot C_{g,n}},$$
where $C_{g,n}=(2g-3+n)!\,(4\pi^2)^{2g-3+n}\,g^{-1/2},$ $C=\lim_{g\rightarrow \infty} \frac{V_{g,1}}{C_{g,1}}.$
The following is elementary:
\noindent
{\it Fact.}
Let $\{f_{i}\}_{i=1}^{\infty}$ be a sequence of functions with the expansion $$f_{i}(g)=1+\frac{p(1,i)}{g}+\ldots+\frac{p(s,i)}{g^s}+O\left(\frac{1}{g^{s+1}}\right).$$
Assume that for a given $j$, $p(j,k)$ is a polynomial in $k$ of degree $j$.
Let $$H(g,n)=\prod_{j=1}^{n} f_{j}(g).$$ Then
$$ H(g,n)=1+\frac{h_1(n)}{g}+\ldots+\frac{h_{s}(n)}{g^s}+O\left(\frac{1}{g^{s+1}}\right),$$
where for a given $j$, $h_{j}(n)$ is a polynomial in $n$ of degree $2j$. Moreover, the leading coefficient of $h_{j}(n)$ is equal to $\frac{l^{j}}{2^j j!},$
where $l$ is the leading coefficient of the linear polynomial $p(1,k)$.
This fact is a consequence of an elementary observation. Given $m_{1},\ldots, m_{k} \in {\mathbb N}$, the sum $$F_{m_1,\ldots,m_k}(d)=\sum_{\sumind{x_{1},\ldots,x_{k} \in \{1,\ldots, d\},} {{x_{i}\neq x_{j}}}} x_{1}^{m_{1}}\cdots x_{k}^{m_{k}}$$ is a polynomial in $d$ of degree $(m_{1}+1)+\ldots+(m_{k}+1)$. Note that for $d<k,$ $F_{m_1,\ldots,m_k}(d)=0.$
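For $k=2$ the observation can be checked directly (an illustration, not from the paper): by inclusion-exclusion, $F_{m_1,m_2}(d)=S_{m_1}(d)S_{m_2}(d)-S_{m_1+m_2}(d)$, a polynomial in $d$ of degree $(m_1+1)+(m_2+1)$.

```python
from fractions import Fraction

# Check (k = 2, illustrative): F_{m1,m2}(d) = S_{m1}(d) S_{m2}(d) - S_{m1+m2}(d),
# where S_m(d) = sum_{i=1}^d i^m, so deg F = (m1+1) + (m2+1).
def S(m, d):
    return sum(Fraction(i) ** m for i in range(1, d + 1))

def F(m1, m2, d):  # direct sum over pairs x1 != x2 in {1,...,d}
    return sum(Fraction(x1) ** m1 * Fraction(x2) ** m2
               for x1 in range(1, d + 1) for x2 in range(1, d + 1) if x1 != x2)

for d in range(0, 12):
    assert F(1, 2, d) == S(1, d) * S(2, d) - S(3, d)
print("F(1,2,d) matches the inclusion-exclusion identity")
```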
Now we use this observation for $f_{j}(g)= \frac{V_{g,j+1}}{4\pi^2 (2g-2+j) V_{g,j}}.$ In this case, $(\ref{eq:n})$ implies that $l= -1/\pi^2$. Hence the result follows from $(\ref{theo:n})$ and Theorem $\ref{theo:general:2},\;(ii)$.
\end{section}
\begin{section}{Asymptotics for variable $n$}
In this section we discuss the asymptotic behavior of $V_{g,n(g)}$ in the case when $n(g)\rightarrow \infty $ as $g \rightarrow \infty$ and prove Theorem $\ref{Main}:$ if
$n(g)^2/g \rightarrow 0$ as $g \rightarrow \infty,$ then
$$\lim_{g\rightarrow \infty} \frac{V_{g,n(g)}}{C_{g,n(g)}}=C,$$
where $C_{g,n}=(2g-3+n)!\,(4\pi^2)^{2g-3+n}\,g^{-1/2}$ and $C=\lim_{g\rightarrow \infty} \frac{V_{g,0}}{C_{g,0}}$.
We need the following basic lemma:
\begin{lemm}\label{lemm:general:n}
There are universal constants $c_{0},c_{1},c_{2}, c_{3}, c_4>0$ such that for $g,n \geq 0$ the following inequalities hold:
\begin{enumerate}[(i)]
\item for any $k \geq 1,$ $$ c_{0}\cdot \frac{n}{2g-2+n} \leq 1-\frac{[\tau_{k}\,\tau_{0}^{n-1}]_{g,n}}{V_{g,n}} \leq c_{1}\cdot \frac{n k^{2}}{2g-2+n},$$
\item $$ \left| \frac{(2g-2+n) V_{g,n}}{V_{g,n+1}}-\frac{1}{4\pi^2}\right| \leq c_{2}\cdot \frac{n}{2g-2+n},$$
\item $$ \frac{V_{g-1,n+4}}{V_{g,n+2}} \leq 1-c_{4}\cdot\frac{n}{2g-2+n}.$$
\end{enumerate}
\end{lemm}
\noindent
{\bf Proof.} The proof follows the same lines as the proofs of formulas $(\ref{f1}), (\ref{f2})$ and $(\ref{simple})$ (see \cite{M:large} for details).
First, observe that
\begin{itemize}
\item Recursion $({\bf III})$ implies
\begin{equation}\label{simple:bound:1}
[\tau_{k+1}\,\tau_{0}^{n-1}]_{g,n}\leq [\tau_{k}\,\tau_{0}^{n-1}]_{g,n} \leq V_{g,n},
\end{equation}
and we also have
$$b \leq \frac{[\tau_{1}\,\tau_{0}^{n-1}]_{g,n}}{ V_{g,n}}$$
where $b=\max\{a_{i}/a_{i+1}\},\; i=0,1,\ldots;$
\item For $l\geq 0$
$$\frac{l\,\pi^{2l-2}}{(2l+1)!} \geq \frac{(l+1)\pi^{2l}}{(2l+3)!},$$
so that from $({\bf II})$ and $(\ref{simple:bound:1})$ we have
\begin{equation}\label{observation:II}
b_{0} \leq \frac{(2g-2+n) V_{g,n}}{V_{g,n+1}} \leq b_{1},
\end{equation}
where
$$b_{0}=b\cdot\left(\frac{1}{6}-\frac{\pi^2}{60}\right), \qquad
b_{1}= \sum_{l=1}^{\infty} \frac{l\,\pi^{2l-2}}{(2l+1)!}.$$
\item By recursion $({\bf Ia})$, for $n \geq 2$
\begin{equation}\label{observation:I}
\frac{V_{g-1,n+4}}{V_{g,n+2}} \leq 1.
\end{equation}
\end{itemize}
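As a side remark (not used in the proof), the constant $b_{1}=\sum_{l\geq 1} l\,\pi^{2l-2}/(2l+1)!$ admits a closed form: since $\sinh x/x=\sum_{l\geq 0} x^{2l}/(2l+1)!$, termwise differentiation gives $b_{1}=\frac{1}{2\pi}\left(\frac{\cosh\pi}{\pi}-\frac{\sinh\pi}{\pi^{2}}\right)$, which one can confirm numerically:

```python
import math

# Side check (not from the paper): with f(x) = sinh(x)/x = sum_{l>=0} x^(2l)/(2l+1)!,
# termwise differentiation gives sum_{l>=1} l x^(2l-2)/(2l+1)! = f'(x)/(2x),
# so b1 = f'(pi)/(2 pi) = (cosh(pi)/pi - sinh(pi)/pi^2)/(2 pi).
series = sum(l * math.pi ** (2 * l - 2) / math.factorial(2 * l + 1)
             for l in range(1, 40))
closed = (math.cosh(math.pi) / math.pi - math.sinh(math.pi) / math.pi ** 2) / (2 * math.pi)
print(series, closed)  # both approximately 0.401
```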
Note that in view of $(\ref{observation:II})$ and $(\ref{observation:I})$, Theorem $\ref{theo:tauk},\;(ii)$ implies the first inequality. In order to prove $(ii)$
we will use $(i)$ and $(\ref{Obs:II})$. As a result, we get
$$ \left| \frac{(2g-2+n) V_{g,n}}{V_{g,n+1}}-\frac{1}{4\pi^2}\right| \leq \sum_{l=1}^{\infty} \frac{ l^3 \, \pi^{2l-2}}{(2l+1)!}\;\cdot \frac{n}{2g-2+n}.$$
The third inequality is a simple consequence of $({\bf Ia})$; the bound can be obtained from the lower bound in $(\ref{observation:II})$ and $(\ref{Obs:I})$.
$\Box$
\noindent
{\bf Proof of Theorem $\ref{Main}$.}
By Lemma \ref{lemm:general:n},
$$\frac{(2g-2+n(g)) V_{g,n(g)}}{V_{g,n(g)+1}} \rightarrow \frac{1}{4\pi^2},\qquad
\frac{V_{g-1,n(g)+4}}{V_{g,n(g)+2}} \rightarrow 1,$$
when $n(g)/g \rightarrow 0$ as $g \rightarrow \infty.$
From the definition of $C_{g,n}$ it immediately follows that
$$\frac{V_{g,n(g)}}{C_{g,n(g)}}=\frac{V_{g,n(g)}}{4\pi^{2} (2g-2+n(g)) V_{g,n(g)-1}}\cdot\frac{V_{g,n(g)-1}}{4\pi^{2}(2g-3+n(g)) V_{g,n(g)-2}}\cdots\frac{V_{g,0}}{C_{g,0}}. $$
Hence, by Theorem $\ref{theo:main:fixn}$ and Lemma $\ref{lemm:general:n},\;(ii)$ we have
\begin{align*}
(1-c_{2}/g)\cdot (1-2\cdot c_{2}/g)& \cdots (1-n(g)\cdot c_{2}/g)\cdot (C+O(1/g))\leq\\
\leq&\frac{V_{g,n(g)}}{C_{g,n(g)}} \leq \\
\leq(1+c_{2}/g)\cdot (1+2\cdot c_{2}/g)& \cdots (1+n(g)\cdot c_{2}/g)\cdot (C+O(1/g)),
\end{align*}
which implies the result.
$\Box$
\end{section}
\noindent
{\bf Acknowledgements.}
The work of MM is partially supported by an NSF grant.
The work of PZ is partially supported by the RFBR grants 11-01-12092-OFI-M-2011 and 11-01-00677-a, and he
gratefully acknowledges the hospitality and support of MPIM (Bonn), QGM (Aarhus) and SCGP (Stony Brook).
We thank Maxim Kazarian and Don Zagier for enlightening discussions.
\begin{thebibliography}{KMZ}
\bibitem[AC]{AC}
E.~Arbarello and M.~Cornalba.
{\em Combinatorial and algebro-geometric cohomology
classes on the Moduli Spaces of Curves,}
J. Algebraic Geometry {\bf 5} (1996), 705--709.
\bibitem[DN]{DN:cone}
N. Do and P. Norbury. {\em Weil-Petersson volumes and cone surfaces,}
Geom. Dedicata {\bf 141} (2009), 93--107.
\bibitem[Ed]{E}
H. M. Edwards. {\em Riemann's Zeta Function}, Academic Press, 1974.
\bibitem[E]{E:M}
B.~Eynard.
{\em Recursion between Mumford volumes of moduli spaces,}
arXiv:0706.4403.
\bibitem[EO]{EO:WK}
B.~Eynard and N.~Orantin.
{\em Invariants of algebraic curves and topological expansion,}
Commun. Number Theory Phys. {\bf 1:2} (2007), 347--452.
\bibitem[Gr]{Gr:vol}
S.~Grushevsky.
{\em An explicit upper bound for Weil-Petersson volumes of the moduli spaces of punctured Riemann surfaces,}
Math. Ann. {\bf 321:1} (2001), 1--13.
\bibitem[HM]{Harris:book}
J.~Harris and I.~Morrison.
{\em Moduli of Curves,} Graduate Texts in Mathematics {\bf 187}, Springer-Verlag, 1998.
\bibitem[KMZ]{KMZ:w}
R.~Kaufmann, Y.~Manin, and D.~Zagier.
{\em Higher Weil-Petersson volumes of moduli spaces of
stable $n$-pointed curves,}
Comm. Math. Phys. {\bf 181} (1996), 736--787.
\bibitem[Ka]{Ka}
M. E.~Kazarian.
{\em Private communication}
(2006).
\bibitem[KL]{KL:W}
M. E.~Kazarian and S. K.~ Lando.
{\em An algebro-geometric proof of Witten's conjecture,}
J. Amer. Math. Soc. {\bf 20} (2007), 1079--1089.
\bibitem[Ko]{Ko:int}
M.~Kontsevich.
{\em Intersection theory on the moduli space of curves and the matrix Airy function,}
Comm. Math. Phys. {\bf 147} (1992), 1--23.
\bibitem[LX1]{LX:higher}
K.~Liu and H.~Xu.
{\em Recursion formulae of higher Weil-Petersson volumes,}
Int. Math. Res. Not. IMRN {\bf 5} (2009), 835--859.
\bibitem[LX2]{LX:M}
K.~Liu and H.~Xu.
{\em Mirzakhani's recursion formula is equivalent to the Witten-Kontsevich theorem,}
Ast\'erisque {\bf 328} (2009), 223--235.
\bibitem[MZ]{MZ}
Yu.~Manin and P.~Zograf. {\em Invertible cohomological field theories and Weil-Petersson
volumes,} Ann. Inst. Fourier {\bf 50:2} (2000), 519--535.
\bibitem[Mc]{M:S}
G.~McShane. {\em Simple geodesics and a series constant over Teichm\"uller space,}
Invent. Math. {\bf 132} (1998), 607--632.
\bibitem[M1]{M:JAMS}
M.~Mirzakhani. {\em Weil-Petersson volumes and intersection theory on the moduli space of curves,}
J. Amer. Math. Soc. {\bf 20:1} (2007), 1--23.
\bibitem[M2]{M:In}
M.~Mirzakhani. {\em Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces,}
Invent. Math. {\bf 167} (2007), 179--222.
\bibitem[M3]{M:large}
M.~Mirzakhani. {\em Growth of Weil-Petersson volumes and random hyperbolic surfaces of large genus,}
arXiv:1012.2167.
\bibitem[MS]{MuS}
M.~Mulase and B.~Safnuk. {\em Mirzakhani's recursion relations, Virasoro constraints and the KdV hierarchy,}
Indian J. Math. {\bf 50} (2008), 189--228.
\bibitem[OP]{OP}
A.~Okounkov and R.~Pandharipande. {\em Gromov-Witten theory, Hurwitz numbers, and matrix
models,} Proc. Symp. Pure Math. {\bf 80.1} (2009), 325--414.
\bibitem[Pe]{P:vol} R.~Penner. {\em Weil-Petersson volumes,} J. Differential Geom. {\bf 35} (1992), 559--608.
\bibitem[ST]{ST:vol}
G.~Schumacher and S.~Trapani.
{\em Estimates of Weil-Petersson volumes via effective
divisors,} Comm. Math. Phys. {\bf 222:1} (2001), 1--7.
\bibitem[W]{W}
E.~Witten. {\em Two-dimensional gravity and intersection theory on moduli spaces,}
Surveys in Differential Geometry {\bf 1} (1991), 243--269.
\bibitem[Wo]{W:H}
S.~Wolpert.
{\em On the homology of the moduli space of stable curves,}
Ann. of Math. {\bf 118:2} (1983), 491--523.
\bibitem[Z]{Z:con}
P.~Zograf. {\em On the large genus asymptotics of Weil-Petersson volumes,}
arXiv:0812.0544.
\end{thebibliography}
\end{document}
\begin{document}
\title{Reading Articles Online\texorpdfstring{\thanks{This paper has been accepted at COCOA 2020.
The final authenticated publication is available online at \url{https://doi.org/10.1007/978-3-030-64843-5_43}}}{}}
\author{Andreas Karrenbauer\inst{1} \and
Elizaveta Kovalevskaya\inst{1,2}}
\authorrunning{A. Karrenbauer and E. Kovalevskaya}
\institute{Max Planck Institute for Informatics, Saarland Informatics Campus, Germany \\ \email{[email protected]}
\and Goethe University Frankfurt, Germany \\ \email{[email protected]}
}
\maketitle
\begin{abstract}
We study the online problem of reading articles that are listed in an aggregated form in a dynamic stream, e.g., in news feeds, as abbreviated social media posts, or in the daily update of new articles on arXiv. In such a context, the brief information on an article in the listing only hints at its content. We consider readers who want to maximize their information gain within a limited time budget, hence either discarding an article right away based on the hint or accessing it for reading. The reader can decide at any point whether to continue with the current article or skip the remaining part irrevocably. In this regard, \raolong{}, \rao{}, differs substantially from the Online Knapsack Problem, but also bears similarities to it. Under mild assumptions, we show that any $\alpha$-competitive algorithm for the Online Knapsack Problem in the random order model can be used as a black box to obtain an $(\ensuremath{\mathrm{e}} + \alpha)C$-competitive algorithm for \rao{}, where $C$ measures the accuracy of the hints with respect to the information profiles of the articles. Specifically, with the current best algorithm for Online Knapsack, which is $6.65<2.45\ensuremath{\mathrm{e}}$-competitive, we obtain an upper bound of $3.45\ensuremath{\mathrm{e}} C$ on the competitive ratio of \rao{}. Furthermore, we study a natural algorithm that decides whether or not to read an article based on a single threshold value, which can serve as a model of human readers. We show that this algorithmic technique is $O(C)$-competitive. Hence, our algorithms are constant-competitive whenever the accuracy $C$ is a constant.
\end{abstract}
\section{Introduction}
There are many news aggregators available on the Internet these days. However, it is impossible to read all news items within a reasonable time budget. Hence, millions of people face the problem of selecting the most interesting articles out of a news stream. They typically browse a list of news items and make a selection by clicking into an article based on brief information that is quickly gathered, e.g., headline, photo, short abstract. They then read an article as long as it is found interesting enough to stick to it, i.e., the information gain is still sufficiently high compared to what is expected from the remaining items on the list. If not, then the reader goes back to browsing the list, and the previous article is discarded -- often irrevocably due to the sheer amount of available items and a limited time budget.
This problem is inspired by research in Human-Computer Interaction~\cite{freire2019foraging}.
In this paper, we address this problem from a theoretical point of view. To this end, we formally model the \raolong{} Problem, \rao{}, show lower and upper bounds on its competitive ratio, and analyze a natural threshold algorithm, which can serve as a model of a human reader.
There are obvious parallels to the famous Secretary Problem: if we could only afford to read one article, we would face a similar problem, in that we would have to make an irrevocable decision without knowing the remaining options. However, in the classical Secretary Problem, it is assumed that we obtain a truthful valuation of each candidate upon arrival. But in our setting, we only get a hint at the content, e.g., by reading the headline, which might be clickbait. However, if there is still time left from our budget after discovering the clickbait, we can dismiss that article and start browsing again, which makes the problem fundamentally more general. Moreover, a typical time budget allows for reading more than one article, or at least a bit of several articles, perhaps of different lengths. Thus, our problem is also related to Online Knapsack but with uncertainty about the true values of the items. Nevertheless, we assume that the reader obtains a hint of the information content before selecting and starting to read the actual article. This is justified because such a hint can be acquired from the headline or a teaser photo in a time that is negligible compared to the time it takes to read an entire article. In contrast, the actual information gain is only realized while reading and only to the extent of the portion that has already been read. For the sake of simplicity, one can assume a sequential reading strategy where the articles are read word for word and the information gain might fluctuate strongly, especially in languages like German where a predicate/verb can appear at the end of a long clause. However, in contrast to spatial information profiles, one can also consider temporal information profiles where the information gain depends on the reading strategy, e.g., cross reading. It is clear that the quality of the hint in relation to the actual information content of the article is a decisive factor for the design and analysis of corresponding online algorithms.
We argue formally that the hint should be an upper bound on the information rate, i.e., the information gain per time unit. Moreover, we confirm that the hint should not be too far off the average information rate to achieve decent results compared to the offline optimum, where all articles with their corresponding information profiles are known in advance. In this paper, we assume that the length of an article, i.e., the time it takes to read it to the end, is revealed together with the hint. This is a mild assumption because this attribute can be retrieved at a glance, e.g., from the number of pages or the size of the scroll bar.
\subsection{Related Work}
To the best of our knowledge, the problem of \rao{} has not been studied in our suggested setting yet.
The closest related problem known is the Online Knapsack Problem \cite{AlbersKnapsackGAP,KnapsackSecretaryProblem,KesselheimPrimalDual} in which an algorithm has to fill a knapsack with restricted capacity
while trying to maximize the sum of the items' values.
Since the input is not known in advance and an item cannot be selected after its arrival, optimal algorithms do not exist.
In the adversarial model where an adversary chooses the order of the items to arrive,
it has been shown in \cite{StochasticOnline} that the competitive ratio is unbounded.
Therefore, we consider the random order model where a permutation of the input is chosen uniformly at random.
A special case of the Online Knapsack Problem is the well-studied Secretary Problem, solved by \cite{Dynkin} among others.
The goal is to choose the best secretary of a sequence without knowing what values the remaining candidates will have. The presented $\ensuremath{\mathrm{e}}$-competitive algorithm is optimal for this problem.
The $k$-Secretary Problem aims to hire at most $k\geq 1$ secretaries while maximizing the sum of their values. In
\cite{MCSecretaryAlgorithm}, an algorithm with a competitive ratio of
$1/(1-5/\sqrt{k})$ for sufficiently large $k$ is presented,
together with a matching lower bound of $\Omega(1/(1-1/\sqrt{k}))$.
Furthermore, \cite{KnapsackSecretaryProblem} contains an algorithm that is $\ensuremath{\mathrm{e}}$-competitive for any $k$.
Some progress for the case of small $k$ was made in~\cite{albers_et_al:LIPIcs:2019:11514}.
The Knapsack Secretary Problem introduced by \cite{KnapsackSecretaryProblem} is equivalent to the
Online Knapsack Problem in the random order model.
They present a $10\ensuremath{\mathrm{e}}$-competitive algorithm. An $8.06$-competitive algorithm ($8.06<2.97\ensuremath{\mathrm{e}}$) is shown in~
\cite{KesselheimPrimalDual} for the Generalized Assignment Problem, which is the Online Knapsack Problem generalized to a setting with multiple knapsacks with different capacities.
The current best algorithm from \cite{AlbersKnapsackGAP} achieves a competitive ratio of $6.65< 2.45\ensuremath{\mathrm{e}}$.
There have been different approaches to studying the Knapsack Problem with uncertainty besides the random order model.
One approach is the Stochastic Knapsack Problem where values or weights are drawn from a known distribution.
This problem has been studied in both online \cite{StochasticOnline} and offline \cite{StochasticOffline} settings.
In \cite{TheRobustKnapsackProblemWithQueries}, an offline setting with unknown weights is considered:
algorithms are allowed to query a fixed number of items to find their exact weight.
A model with resource augmentation is considered for the fractional version of the Online Knapsack Problem in \cite{fractionalKnapsack}.
There, the knapsack of the online algorithm has $1\leq R\leq 2$ times more capacity than the knapsack of the offline optimum.
Moreover, they allow items to be rejected after being accepted.
In our model, this would mean that the reader gets time returned after having already read an article.
Thus, their algorithms are not applicable to \rao{}.
\subsection{Our Contribution}
We introduce \rao{} and prove lower and upper bounds on competitive ratios under various assumptions. We present relations to the Online Knapsack problem and show how ideas from that area can be adapted to \rao{}. Our emphasis lies on the initiation of the study of this problem by the theory community.
We first show lower bounds that grow with the number of articles unless restrictions apply that forbid the corresponding bad instances. That is, whenever information rates may be arbitrarily larger than the hint of the corresponding article,
any algorithm underestimates the possible information gain of a good article.
Hence, the reader must adjust the hints such that they upper bound the information rate to allow for bounded competitive ratios.
While we may assume w.l.o.g.~for the Online Knapsack Problem that no item is larger than the capacity since an optimal solution cannot contain such items, we show that \rao{} without this or similar restrictions suffers from a lower bound of $\Omega(n)$, i.e., any algorithm is arbitrarily bad in the setting where articles are longer than the time budget.
Moreover, we prove that the accuracy of the hints provides a further lower bound for the competitive ratio.
We measure this accuracy as the maximum ratio $C$ of hints and respective average information rates.
Hence, a constant-competitive upper bound for \rao{} is only possible when $C$ is bounded.
Under these restrictions, we
present the first constant-com\-pe\-ti\-ti\-ve algorithm for \rao{}. To this end, we introduce a framework for wrapping any black box algorithm for the Online Knapsack Problem to work for \rao{}.
Given an $\alpha$-competitive algorithm for the Online Knapsack Problem as a black box,
we obtain a $(\ensuremath{\mathrm{e}}+\alpha)C$-competitive algorithm for \rao{}.
This algorithm is $3.45\ensuremath{\mathrm{e}} C$-competitive when using the current best algorithm for the Online Knapsack Problem from \cite{AlbersKnapsackGAP}.
This is the current best upper bound that we can show for \rao{}, which is constant provided that the hints admit a constant accuracy.
However, the algorithm generated by the framework above inherits its complexity from the black box algorithm for the Online Knapsack Problem, which may yield good competitive ratios from a theoretical point of view but might be too complex to serve as a strategy for a human reader. Nevertheless, the existence of constant-competitive ratios (modulo accuracy of the hints) motivates us to strive for simple $O(C)$-algorithms. To this end, we investigate an algorithm that bases its decisions on a single threshold. The Threshold Algorithm can be seen as a formalization of human behavior. While reading, humans decide intuitively whether an article is interesting or not. This intuition is modeled by the single threshold. In case of diminishing information gain, we show that this simplistic approach suffices to obtain an upper bound of $246 C< 90.5 \ensuremath{\mathrm{e}} C$ on the competitive ratio with the current analysis, which might leave room for improvement but nevertheless achieves a constant competitive ratio.
Diminishing information gain means non-increasing information rates, a reasonable assumption, particularly in the light of efficient reading strategies where an article is not read word for word. In such a context, one would consider a temporal information profile that relates the information gain to the reading time. When smoothed to a coarse time scale, the information rates can be considered non-increasing, leading to a saturation of the total information obtained from an article over time.
\section{Preliminaries}
\begin{definition}[\raolong{} (\rao{})] \label{def:RA}
There are $n$ articles that are revealed one by one in a round-wise fashion. The reader has a time budget $T\in\ensuremath{\mathbb{N}}_{>0}$ for reading.
In round $i$, the reader sees article $i$ with its hint $h_i \in\ensuremath{\mathbb{N}}_{>0}$ and time length $t_i\in\ensuremath{\mathbb{N}}_{>0}$.
The actual information rate $c_i:[t_i]\to [h_i]$ is an unknown function.
The reader has to decide whether to start reading the article or to skip to the next article.
After reading time step $j\leq t_i$, the reader obtains $c_i(j)$ information units and can decide to read the next time step of the article or to discard it irrevocably.
After discarding or finishing the current article, the next round begins.
The objective is to maximize $\sum_{i\in[n]} \sum_{j=1}^{\tau_i} c_i(j)$ where $0\le \tau_i\le t_i$ is the number of time steps read by the algorithm and $\sum_{i\in[n]} \tau_i\leq T$.
\end{definition}
For the sake of simplicity, we have chosen a discrete formulation in Def.~\ref{def:RA}, which is justified by considering words or even characters as atomic information units. Since such tiny units might be too fine-grained compared to the length of the articles, we can also extend this formulation with a slight abuse of notation and allow that the $\tau_i$ are fractional, i.e., $\sum_{j=1}^{\tau_i} c_i(j) = \sum_{j=1}^{\lfloor\tau_i\rfloor} c_i(j) + c_i(\lceil \tau_i\rceil)\cdot \{\tau_i\}$, where $\{\tau_i\}$ denotes its fractional part. However, one could also consider a continuous formulation using integrals, i.e., the objective becomes $\sum_{i\in[n]} \int_{0}^{\tau_i} c_i(t) dt$. The lower bounds presented in this section hold for these models as well.
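The discrete objective with the fractional extension above can be sketched in a few lines of Python (an illustrative sketch; `rao_value` is our naming, not part of the model):

```python
from fractions import Fraction

def rao_value(schedule, rates, T):
    """Total information gained by a (possibly fractional) reading schedule.

    schedule: tau_i for each article i, the number of time steps read.
    rates:    functions c_i with c_i(j) = information gained in step j.
    T:        time budget; the schedule must satisfy sum(tau_i) <= T.
    """
    assert sum(schedule) <= T, "schedule exceeds the time budget"
    total = Fraction(0)
    for tau, c in zip(schedule, rates):
        tau = Fraction(tau)
        whole = int(tau)  # completely read time steps
        total += sum(c(j) for j in range(1, whole + 1))
        if tau > whole:   # partially read step contributes proportionally
            total += c(whole + 1) * (tau - whole)
    return total
```

For instance, reading the first article for one step and the second for two steps under the rate tables $(3,2,1)$ and $(5,1,1)$ with $T=3$ yields $3+6=9$ information units.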
We use the random order model
where input order corresponds to a permutation $\pi$ chosen uniformly at random.
\begin{definition}[Competitive Ratio]
We say that an algorithm $\text{\rmfamily\scshape Alg}$ is $\alpha$-competitive,
if, for any instance $I$, the expected value of $\text{\rmfamily\scshape Alg}$ on instance $I$, with respect to permutation of the input and random choices of $\text{\rmfamily\scshape Alg}$, is at least $1/\alpha$
of the optimal offline value $\text{\rmfamily\scshape Opt}(I)$, i.e., $\expected{}{\text{\rmfamily\scshape Alg}(I)} \geq \frac{1}{\alpha}\cdot \text{\rmfamily\scshape Opt}(I).$
\end{definition}
We measure the accuracy of hints with parameter $C$ from Def.~\ref{def:accuracy}. This relation is illustrated in Fig.~\ref{fig:relation}.
Lem.~\ref{lemma:lowerbound_c_upper_cbar} provides a lower bound in dependence on this accuracy.
\begin{figure}
\caption{Relation of hint $h_i$ to the average information rate $\sum_{j=1}^{t_i} c_i(j)/t_i$ of article $i$.}
\label{fig:relation}
\end{figure}
\begin{definition}[Accuracy of Hints] \label{def:accuracy}
The accuracy $C\geq 1$ is the smallest number s.t.
\[
h_i \leq C \cdot \sum_{j=1}^{t_i} \frac{c_i(j)}{t_i} \quad\quad \forall i\in[n] \enspace.
\]
\end{definition}
The hint is a single number giving a cue about a function. Therefore, no matter which measure of accuracy we consider, if the hint is perfectly accurate, the function has to be constant. In Section~\ref{sect:conclusion}, we discuss other ideas on the measure of accuracy of hints such as a multi-dimensional feature vector or the hint being a random variable drawn from the information rate.
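As an illustration, the accuracy of a concrete instance can be computed directly from this definition (a sketch with our own naming; the information rates are given as value tables):

```python
from fractions import Fraction

def accuracy(hints, rate_tables):
    """Smallest C >= 1 with h_i <= C * (average information rate of i),
    mirroring the definition; rate_tables[i] lists c_i(1), ..., c_i(t_i)."""
    C = Fraction(1)
    for h, table in zip(hints, rate_tables):
        avg = Fraction(sum(table), len(table))  # average information rate
        C = max(C, Fraction(h) / avg)
    return C
```

A constant rate table matching its hint gives $C=1$; a hint twice the average rate forces $C=2$.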
We introduce an auxiliary problem to bound the algorithm's expected value.
\begin{definition}[Knapsack Problem for Hints (\kph{})]
Given an instance $I$ for \rao{}, i.e., time budget $T$, hint $h_i$ and length $t_i$ for each article $i\in [n]$,
the Knapsack Problem for Hints, \kph{}, is the fractional knapsack problem
with values $t_ih_i$, weights $t_i$ and knapsack size $T$.
Let $\text{\rmfamily\scshape Opt}_{\kph{}}(I)$ denote its optimal value on
instance $I$.
\end{definition}
As in \cite{KnapsackSecretaryProblem}, we now define an LP for a given subset $Q\subseteq [n]$
and time budget $x$. It finds the optimal fractional
solution for \kph{} with articles from $Q$ and time budget $x$.
\begin{equation*}
\begin{array}{rrrcll}
\max & \sum_{i=1}^n & t_i\cdot h_{i}\cdot y(i)\\[.2cm]
\textrm{s.t.} & \sum_{i=1}^n &t_i\cdot y(i) & \leq & x \\
& & y(i) &= &0 &\quad\forall i\notin Q \\
& & y(i) &\in & [0,1] &\quad\forall i\in [n]
\end{array}
\end{equation*}
The variable $y_Q^{(x)}(i)$ refers to the setting of $y(i)$ in the optimal solution on articles from $Q$ with time budget $x$.
The optimal solution has a clear structure: There exists a threshold hint $\rho_Q^{(x)}$ such that
any article $i\in Q$ with $h_i>\rho_Q^{(x)}$ has $y_Q^{(x)}(i)=1$ and any article
$i\in Q$ with $h_i<\rho_Q^{(x)}$ has $y_Q^{(x)}(i)=0$.
As in \cite{KnapsackSecretaryProblem}, we use the following notation for a subset $R\subseteq [n]$:
\[
v_Q^{(x)}(R) = \sum_{i\in R}t_i\cdot h_i\cdot y_Q^{(x)}(i) \quad\text{ and }\quad
w_Q^{(x)}(R) = \sum_{i\in R} t_i \cdot y_Q^{(x)}(i) \enspace.
\]
The value $v_Q^{(x)}(R)$ and weight $w_Q^{(x)}(R)$ refer to the value and weight that set $R$ contributes to the optimal solution.
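Since every article $i$ has value-to-weight ratio exactly $h_i$ in \kph{}, the LP above is solved by a greedy pass over the articles in order of descending hints, which also exhibits the threshold structure; a sketch (our naming, not the paper's implementation):

```python
def solve_kph(hints, lengths, budget):
    """Optimal fractional knapsack for KPH (values t_i*h_i, weights t_i).

    Because value/weight = h_i, a greedy pass over descending hints is
    optimal; at most one article ends up fractional. Returns the read
    fractions y and the optimal value."""
    y = [0.0] * len(hints)
    opt, remaining = 0.0, budget
    for i in sorted(range(len(hints)), key=lambda i: -hints[i]):
        if remaining <= 0:
            break  # all articles with smaller hints get y(i) = 0
        take = min(lengths[i], remaining)
        y[i] = take / lengths[i]
        opt += take * hints[i]
        remaining -= take
    return y, opt
```

The smallest hint that receives a positive fraction is the threshold hint $\rho$ of the solution.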
We use \kph{}'s solution as an upper bound on the optimal solution, as shown in the following lemma:
\begin{lemma}\label{lemma:relationOpts}
Given instance $I$ for \rao{}, let $\text{\rmfamily\scshape Opt}(I)$ and $\text{\rmfamily\scshape Opt}_{\kph{}}(I)$ be
the respective optimal values.
Then,
$\text{\rmfamily\scshape Opt}_{\kph{}}(I)\geq \text{\rmfamily\scshape Opt}(I).$
\end{lemma}
\begin{proof}
Since the codomain of $c_i$ is $[h_i]$ by Def.~\ref{def:RA}, no algorithm for \rao{} can
obtain more than $h_i$ information units in any time step of article $i$.
Thus, the optimal solution of \kph{} obtains at least the same amount of information units as the optimal solution of \rao{} by reading the same parts. \qed
\end{proof}
\section{Lower Bounds} \label{sect:LBs}
The proofs in this section are constructed such that each family of instances
produces a lower bound only for its own setting and not for the others.
Note that in the proofs of Lem.~\ref{lemma:increasing_functions} and Lem.~\ref{lemma:length}, we have $C\leq 2$.
Moreover, we construct the families of instances such that the lower bounds also hold in a fractional setting. The key idea is to make the first time step(s)
of every information rate small, so that any algorithm is forced to spend a minimal amount of time reading an article before eventually obtaining more than one information unit per time step.
Although Def.~\ref{def:RA} already states that the codomain of $c_i$ is $[h_i]$,
we show a lower bound as a justification for this constraint on $c_i$.
\begin{lemma} \label{lemma:increasing_functions}
If the functions $c_i$ are allowed
to take values larger than hint $h_i$,
then the competitive ratio of any deterministic or randomized algorithm is $\Omega(\sqrt{n})$.
\end{lemma}
\begin{proof}
We construct a family of instances for all $\ell\in \ensuremath{\mathbb{N}}$. Let $n:=\ell^2$. For any fixed $\ell$, we construct instance $I$ as follows.
Set $T:=n$, $t_i:=T=n$ and $h_i:=n$ for all $i\in[n]$.
We define two types of articles.
There are $\sqrt{n}$ articles of type $A$ where
$c_i(j):=1$ for $j\in[T]\setminus \set{\sqrt{n}}$ and $c_i(\sqrt{n}):=n^2$.
The remaining $n-\sqrt{n}$ articles are of type $B$ and have
$c_i(j):=1$ for $j\in[T-1]$ and $c_i(T):=n^2$.
An optimal offline algorithm reads all articles of type $A$ up to time step $\sqrt{n}$. Therefore, $\text{\rmfamily\scshape Opt}(I) = \Theta(n^{2.5}).$
No online algorithm can distinguish between articles of type $A$ and $B$ until time step $\sqrt{n}$.
The value of any online algorithm cannot be better than the value of algorithm $\text{\rmfamily\scshape Alg}$ that reads the first $\sqrt{n}$ time
steps of the first $\sqrt{n}$ articles. Since the input order is chosen uniformly at random, the expected arrival of the first type $A$ article is at round $\sqrt{n}$.
Thus, $\expected{}{\text{\rmfamily\scshape Alg}(I)} = \Theta(n^2)$. Therefore, $\expected{}{\text{\rmfamily\scshape Alg}(I)} \leq \text{\rmfamily\scshape Opt}(I)/\sqrt{n}.$ \qed
\end{proof}
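The family of instances used in this proof can be written down directly; the following sketch (our naming) constructs the instance for $\ell=3$, i.e., $n=9$, and evaluates the offline optimum:

```python
def lower_bound_instance(l):
    """Family from the proof (our naming): n = l^2 articles, T = t_i = n,
    h_i = n; l articles of type A spike at step sqrt(n) = l, the remaining
    n - l articles of type B spike only at the very last step."""
    n = l * l
    def type_a(j): return n * n if j == l else 1
    def type_b(j): return n * n if j == n else 1
    return n, [type_a] * l + [type_b] * (n - l)

n, rates = lower_bound_instance(3)  # n = 9, sqrt(n) = 3
# The offline optimum reads each type-A article up to step sqrt(n),
# spending sqrt(n) * sqrt(n) = n = T time steps in total:
opt = sum(sum(c(j) for j in range(1, 3 + 1)) for c in rates[:3])
```

Here each type-$A$ article contributes $(\sqrt{n}-1)+n^2$ information units, giving $\text{\rmfamily\scshape Opt}(I)=\Theta(n^{2.5})$.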
In \rao{}, solutions admit reading articles fractionally.
Therefore, we show a lower bound whenever
articles are longer than the time budget.
\begin{lemma} \label{lemma:length}
If the lengths $t_i$ are allowed to take values larger than time budget $T$,
then the competitive ratio of any deterministic or randomized algorithm is $\Omega(n)$.
\end{lemma}
\begin{proof}
We construct a family of instances with $T:=2$, $t_i:=3$ and $h_i:=M$ for all $i\in [n]$, where $M\ge 1$ is set later.
One distinguished article $k$ has $c_k(1):=1$, $c_k(2):=M$ and $c_k(3):=1$.
Any other article $i\in [n]\setminus\set{k}$ has $c_i(1):=1$, $c_i(2):=1$ and $c_i(3):=M$.
As the permutation is chosen uniformly at random,
an online algorithm does not know which article is the one with $M$ information units in the second time step.
No algorithm can do better than the algorithm $\text{\rmfamily\scshape Alg}$ that spends the entire time budget on the first article, while $\text{\rmfamily\scshape Opt}(I)=M+1$.
Its expected value is
$\expected{}{\text{\rmfamily\scshape Alg}(I)} = (1/n)\cdot (M+1) + (1-1/n)\cdot2 \leq (2/n + 2/M)\cdot \text{\rmfamily\scshape Opt}(I).$
When setting $M=n$, we obtain the desired bound. \qed
\end{proof}
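The final computation of the proof can be checked numerically for a concrete instance (an illustrative sketch with $M=n$):

```python
from fractions import Fraction

def expected_alg(n, M):
    """E[Alg(I)] in the proof: with probability 1/n the first article is
    the distinguished one (value M + 1); otherwise Alg gains only 2."""
    return Fraction(1, n) * (M + 1) + (1 - Fraction(1, n)) * 2

n = 100          # with M = n, Opt(I) = M + 1
opt = n + 1
ratio = opt / expected_alg(n, n)
```

The ratio grows linearly in $n$, matching the claimed $\Omega(n)$ bound.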
A consequence of the next lemma is that if the accuracy $C$ from Def.~\ref{def:accuracy} is not a constant, then no constant-competitive algorithms can exist.
\begin{lemma} \label{lemma:lowerbound_c_upper_cbar}
Any deterministic or randomized algorithm is $\Omega(\min\set{C,n})$-com\-pe\-ti\-ti\-ve.
\end{lemma}
\begin{proof}
Consider the following family of instances $I$ in dependence on accuracy $C\geq 1$.
Let $T:=2$ and $t_i:=2$ for all $i\in[n]$.
For one distinguished article $k$, set $c_k(1):=1$ and $c_k(2):=C$. We define $c_i(1):=1$ and $c_i(2):=1$ for all $i\in[n]\setminus\set{k}$.
The hints are $h_i:=C$ for all $i\in[n]$, thus, they are $C$-accurate according to Def.~\ref{def:accuracy}.
No algorithm can distinguish the information rates of the articles,
as the hints and the first time steps are all equal. Therefore, no algorithm is better
than $\text{\rmfamily\scshape Alg}$, which reads the first arriving article completely.
The optimal choice is to read article $k$; we obtain the desired bound:
$\expected{}{\text{\rmfamily\scshape Alg}(I)} = (1/n)\cdot (C+1) + (1-1/n)\cdot2 \leq (2/n + 2/C)\cdot \text{\rmfamily\scshape Opt}(I).
$\qed
\end{proof}
\begin{assumption} \label{assum:Caverage}
For any article $i\in [n]$, we assume that $t_i\leq T$ and that the hints $h_i$ and upper bounds $t_ih_i$ on the information gain in the articles
are distinct.\footnote{Distinctness is obtained by random, consistent tie-breaking as described in \cite{KnapsackSecretaryProblem}.}
\end{assumption}
\section{Exploitation of Online Knapsack Algorithms}
In this section, we develop a technique for applying any algorithm for the Online Knapsack Problem on an instance of \rao{}.
The presented algorithm uses the classic Secretary Algorithm that is $\ensuremath{\mathrm{e}}$-competitive for all positive $n$ as shown in \cite{MatroidSecretary}.
The Secretary Algorithm rejects the first $\lfloor n/\ensuremath{\mathrm{e}}\rfloor$ items.
Then it selects the first item that has a better value than the best so far. Note that the Secretary Algorithm selects exactly one item.
We use \kph{} for the analysis as an upper bound on the actual optimal solution with respect to information rates $c_i$. There is exactly one fractional item in the optimal solution of \kph{}. The idea is to make the algorithm robust against two types of instances: the ones with a fractional article of high information amount and the ones with many articles in the optimal solution.
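The Secretary Algorithm just described can be sketched as follows (an illustrative sketch; the fallback of taking the last item when nothing beats the sample is one common convention, chosen here for concreteness):

```python
import math

def secretary(values):
    """Classic Secretary Algorithm: reject the first floor(n/e) items,
    then accept the first item beating the best sampled value. As one
    common convention, the last item is taken if nothing ever does."""
    k = math.floor(len(values) / math.e)
    best_sample = max(values[:k], default=float("-inf"))
    for v in values[k:]:
        if v > best_sample:
            return v
    return values[-1] if values else None
```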
\begin{theorem}\label{thm:reduction}
Given an $\alpha$-competitive algorithm $\text{\rmfamily\scshape Alg}$ for the Online Knapsack Problem,
the Reduction Algorithm
is $(\ensuremath{\mathrm{e}}+\alpha)C$-competitive.
\end{theorem}
\begin{proof}
We fix an instance $I$ and use Lem.~\ref{lemma:relationOpts}.
We split the optimal solution of \kph{} into the fractional article $i_{f}$ that is read $x_{i_{f}}\cdot t_{i_{f}}$ time steps and the set $H_{max}$ of articles
that are read completely.
Since $H_{max}$ is a feasible solution to the integral version of \kph{}, the value of the articles in $H_{max}$ is not larger than
the optimal value $\text{\rmfamily\scshape Opt}_{\kph{}}^{int}(I)$ of the integral version. We denote the optimal integral solution by set $H^*$.
Using Def.~\ref{def:accuracy}, we obtain:
\begin{equation*} \label{eq:reduction}
\begin{aligned}
\text{\rmfamily\scshape Opt}(I) &\leq \text{\rmfamily\scshape Opt}_{\kph{}}(I) = \sum_{i\in H_{max}} t_{i}h_i + x_{i_{f}}t_{i_{f}}h_{i_{f}}
\leq \text{\rmfamily\scshape Opt}_{\kph{}}^{int}(I) + t_{i_{f}}h_{i_{f}}\\
&\leq \sum_{i\in H^*} t_{i}h_i + \max_{i\in[n]} t_{i}h_i
\leq C\cdot \left(\sum_{i\in H^*} \sum_{j=1}^{t_i} c_i(j) + \max_{i\in[n]} \sum_{j=1}^{t_i}c_i(j)\right)\\
&\leq C\cdot \left(\frac{\alpha}{\delta}\cdot \prob{}{b=1}\expected{}{\sum_{i\in S} \sum_{j=1}^{t_i}c_i(j)\bigg|b=1}
\right.
\\
& \quad\quad\quad\quad \left.
+ \frac{\ensuremath{\mathrm{e}}}{1-\delta} \cdot \prob{}{b=0}\expected{}{\sum_{i\in S} \sum_{j=1}^{t_i}c_i(j)\bigg|b=0}\right)\\
&\leq C\cdot \max\set{\frac{\alpha}{\delta},\frac{\ensuremath{\mathrm{e}}}{1-\delta}}\cdot \expected{}{\text{Reduction Algorithm}(I)}\enspace.
\end{aligned}
\end{equation*}
The optimal choice of $\delta$ to minimize $\max\set{\frac{\alpha}{\delta},\frac{\ensuremath{\mathrm{e}}}{1-\delta}}$
is $\delta = \frac{\alpha}{\ensuremath{\mathrm{e}}+\alpha}$.
This is exactly how the Reduction Algorithm sets the probability $\delta \in (0,1)$ in line 1, which yields a competitive ratio of $(\ensuremath{\mathrm{e}}+\alpha)\cdot C$.\qed
\end{proof}
\begin{algorithm}
\label{algo:reduction}
\caption{Reduction Algorithm}
\KwIn{Number of articles $n$, time budget $T$, an $\alpha$-competitive algorithm $\text{\rmfamily\scshape Alg}$ for the Online Knapsack Problem.}
\KwOut{Set $S$ of chosen articles.
}
Set $\delta=\frac{\alpha}{\ensuremath{\mathrm{e}}+\alpha}$ and choose $b\in \set{0,1}$ randomly with $\prob{}{b=1}=\delta$\;
\eIf{$b=1$}{
Apply $\text{\rmfamily\scshape Alg}$ with respect to values $t_ih_i$ and weights $t_i$\;
}
{Apply the Secretary Algorithm with respect to values $t_ih_i$\;}
\end{algorithm}
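The choice of the probability $\delta$ in line 1 can be verified numerically; the following sketch (our naming) evaluates the factor $\max\set{\alpha/\delta, \ensuremath{\mathrm{e}}/(1-\delta)}$ from the proof of Thm.~\ref{thm:reduction}:

```python
import math

def optimal_delta(alpha):
    """Branching probability used in line 1 of the Reduction Algorithm."""
    return alpha / (math.e + alpha)

def reduction_ratio(alpha, delta):
    """Factor max(alpha/delta, e/(1-delta)) from the proof, to be
    multiplied by the accuracy C of the hints."""
    return max(alpha / delta, math.e / (1 - delta))
```

For the currently best knapsack algorithm with $\alpha=6.65$, the optimal choice yields the factor $\ensuremath{\mathrm{e}}+6.65\approx 9.37$, while any other $\delta$ is strictly worse.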
When using the current best algorithm for the Online Knapsack Problem presented in \cite{AlbersKnapsackGAP}, the Reduction Algorithm has a competitive ratio of $(\ensuremath{\mathrm{e}}+6.65)\cdot C\leq 3.45\ensuremath{\mathrm{e}} C$.
Assuming that the accuracy of the hints $C\geq 1$ from Def.~\ref{def:accuracy} is constant, the \rao{} admits a constant upper bound on the competitive ratio.
\begin{remark}
(i) The Reduction Algorithm can be used to obtain an $(\alpha+\ensuremath{\mathrm{e}})$-com\-pe\-ti\-ti\-ve algorithm for the
fractional version of the Online Knapsack Problem given an $\alpha$-competitive algorithm
for the integral version.
The proof is analogous to the proof of Thm.~\ref{thm:reduction}.
(ii) Running an $\alpha$-competitive algorithm for Online Knapsack on
an instance of \rao{}, we obtain a $2\alpha C$-competitive algorithm for \rao{} by a similar proof. Since the current best algorithm for Online Knapsack has $\alpha=6.65>\ensuremath{\mathrm{e}}$,
using the Reduction Algorithm provides better bounds.
The same holds for the fractional Online Knapsack Problem.
\end{remark}
\section{Threshold Algorithm} \label{sect:threshold}
While the Online Knapsack Problem has to take items completely, \rao{}
does not require the reader to finish an article.
Exploiting this possibility, we present
the Threshold Algorithm, which bases its
decisions on a single threshold.
We adjust the algorithm and its analysis from \cite{KnapsackSecretaryProblem}.
From now on, we assume that the information rates $c_i$ are non-increasing and that the reader can stop reading an article at any time, thus allowing fractional time steps. For the sake of presentation, we stick to the discrete notation (avoiding integrals).
In practice,
the information gain diminishes the longer an article is read, and the inherent discretization by words or characters is so fine-grained compared to the lengths of the articles that it is justified to consider the continuum limit.
Before starting to read, one has to decide at which length to stop reading any article in dependence on $T$.
First, we show that cutting all articles of an instance after $gT$ time steps costs at most a factor of
$1/g$ in the competitive ratio.
\begin{lemma}\label{lemma:cutInstance}
Given an instance $I$ with time budget $T$, $g\in(0,1]$, lengths $t_i$, hints $h_i$ and non-increasing information rates $c_i:[t_i]\to[h_i]$,
we define the cut instance $I'_g$ with
time budget $T'=T$, lengths $t'_i=\min\set{t_i,gT}$, hints $h'_i=h_i$ and non-increasing information rates $c'_i:[t'_i]\to[h'_i]$,
where $c'_i(j)=c_i(j)$ for $1\leq j\leq t'_i$.
Then, $\text{\rmfamily\scshape Opt}_{\kph{}}(I)\leq \text{\rmfamily\scshape Opt}_{\kph{}}(I'_g)/g$.
\end{lemma}
\begin{proof}
Since $gt_i\leq gT$ and $gt_i\leq t_i$ we have $gt_i\leq \min\set{gT,t_i}=t'_i$ and obtain:
\noindent $
\text{\rmfamily\scshape Opt}_{\kph{}}(I)
= \frac{1}{g}\sum_{i\in[n]} h_i \cdot gt_i \cdot y_{[n]}^{(T)}(i)
\leq \frac{1}{g}\sum_{i\in[n]} h'_i t'_i \cdot y_{[n]}^{(T)}(i)
\leq \frac{1}{g}\cdot \text{\rmfamily\scshape Opt}_{\kph{}}(I'_g).
$
The last inequality follows as no feasible solution is better than the optimum.
The time budget is respected since $\sum_{i\in[n]} t_i \cdot y_{[n]}^{(T)}(i)\leq T=T'$ and $t_i\geq t'_i$.\qed
\end{proof}
\begin{algorithm} \label{algo:threshold}
\caption{Threshold Algorithm}
\KwIn{Number of articles $n$, time budget $T$, a fraction $g\in (0,1]$.\\
\hspace*{1.05cm} Article $i$ appears in round $\pi(i)$ and reveals $h_i$ and $t_i$.}
\KwOut{
Number of time steps $0\leq s_i\leq t_i$ that are read from article $i\in [n]$.
}
Sample $r\in \set{1,...,n}$ from binomial distribution $\Bin(n,1/2)$\;
Let $X=\set{1,...,r}$ and $Y=\set{r+1,...,n}$\;
\For{round $\pi(i)\in X$}{Observe $h_i$ and $t_i$\;
Set $s_i=0$\;}
Solve \kph{} on $X$ with budget $T/2$, lengths $t'_i=\min\set{gT,t_i}$, and values $t'_ih_i$. \\
Let $\rho_X^{(T/2)}$ be the threshold hint\;
\For{round $\pi(i)\in Y$}{
\uIf{ $h_i\geq \rho_X^{(T/2)}$}{
Set $s_i =\min\left\{t_i,gT, T-\displaystyle\sum_{1\leq j<i} s_j\right\}$\;
Read the first $s_i$ time steps of article $i$\;}
\uElse{Set $s_i=0$\;}
}
\end{algorithm}
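A compact sketch of the Threshold Algorithm, under our own naming; the binomial sample is realized by $n$ independent coin flips, and `read_i(s)` stands for the information gained from the first $s$ time steps of article $i$:

```python
def threshold_hint(cut_articles, budget):
    """Threshold hint rho: the smallest hint taken by the optimal
    fractional KPH solution, via a greedy pass over descending hints."""
    rho, remaining = float("inf"), budget
    for h, t in sorted(cut_articles, key=lambda a: -a[0]):
        if remaining <= 0:
            break
        rho = h
        remaining -= t
    return rho

def threshold_algorithm(articles, T, g, rng):
    """Sketch of the Threshold Algorithm. Each article is a triple
    (h_i, t_i, read_i) in arrival order, where read_i(s) returns the
    information gained from the first s time steps."""
    n = len(articles)
    r = sum(rng.random() < 0.5 for _ in range(n))  # sample size ~ Bin(n, 1/2)
    X, Y = articles[:r], articles[r:]              # observe X, read from Y
    rho = threshold_hint([(h, min(t, g * T)) for h, t, _ in X], T / 2)
    gained, used = 0, 0
    for h, t, read in Y:
        if h >= rho and used < T:
            s = min(t, g * T, T - used)  # cap at gT and remaining budget
            gained += read(s)
            used += s
    return gained
```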
We need the following lemma that can be proven by a combination of normalization, Exercise~4.7 on page~84 and Exercise~4.19 on page~87 in \cite{ProbabilityAndComputing}.
\begin{lemma}\label{lemma:chernoff}
Let $z_1, ..., z_n$ be mutually independent random variables from a finite subset of $[0,z_{max}]$ and $Z=\sum_{i=1}^n z_i$ with $\mu = \expected{}{Z}$.
For all $\mu_H\geq \mu$ and all $\delta>0$,
\[\prob{}{Z\geq (1+\delta)\mu_H}< \exp\left( -\frac{\mu_H}{z_{max}}\cdot
\left[ (1+\delta) \ln(1+\delta) -\delta \right]\right)\enspace.\]
\end{lemma}
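For intuition, the right-hand side of Lem.~\ref{lemma:chernoff} can be evaluated numerically (a sketch; `chernoff_bound` is our name):

```python
import math

def chernoff_bound(mu_h, z_max, delta):
    """Right-hand side of the tail bound in the lemma (our naming)."""
    return math.exp(-(mu_h / z_max)
                    * ((1 + delta) * math.log(1 + delta) - delta))
```

The bound decays exponentially in $\mu_H/z_{max}$; e.g., for $\mu_H=10$, $z_{max}=1$ and $\delta=1$ it is roughly $0.02$.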
We assume that
$\sum_{i\in[n]} \min\set{t_i, gT} = \sum_{i\in[n]} t'_i \ge 3T/2$ for the purpose of the analysis. The same assumption is made in \cite{KnapsackSecretaryProblem} implicitly.
If there are not enough articles, the algorithm can only improve since there are fewer articles that are not part of the optimal solution.
We can now state the main theorem.
\begin{theorem}\label{theorem:threshold}
For $g = 0.0215$, the Threshold Algorithm's competitive ratio is upper bounded by $ 246 C<90.5 \ensuremath{\mathrm{e}} C$.
\end{theorem}
The proof is similar to the proof of Lem.~4 in \cite{KnapsackSecretaryProblem}.
However, we introduce parameters over which we optimize the analysis of the competitive ratio.
This way, we make the upper bound on the competitive ratio as tight as possible for the proof technique used here.
Recall that we may assume w.l.o.g.~by Assumption~\ref{assum:Caverage} that the hints $h_i$ and upper bounds $t_ih_i$ are distinct throughout the proof.
\begin{proof}
Fix an instance $I$.
We refer to the order by permutation $\pi$ chosen uniformly at random.
For simplicity, we scale the instance such that $T=1$ and all $t_i$ are multiplied by $1/T$, which affects neither the hints nor the threshold.
We use \kph{} to show the bound
as $\text{\rmfamily\scshape Opt}_{\kph{}}(I)\geq \text{\rmfamily\scshape Opt}(I)$ holds by Lem.~\ref{lemma:relationOpts}.
As the reader always reads at most the first $gT$ time steps,
we use the bound from Lem.~\ref{lemma:cutInstance} for cutting instance $I$ to $I'_g$.
For better readability, we do not rename the parameters of $I'_g$ and refer to the variables without adding a prime $'$.
We proceed with showing the bound on the expected value of the algorithm on instance $I'_g$ since the algorithm reads at most $g$ time steps of each article.
Now, we proceed as in the proof of Lem.~4 in \cite{KnapsackSecretaryProblem}.
We use two auxiliary knapsacks to bound the algorithm's expected value.
Their optimal, fractional solution is computed offline on instance $I$.
In contrast to \cite{KnapsackSecretaryProblem}, we parameterize the size of the auxiliary knapsacks to find the best possible sizes.
We use a knapsack of size $\beta$ and one of size $\gamma$ where $0<\beta\leq 1$ and $1\leq \gamma\leq \sum_{i\in[n]} t_i$.
Recall that we assumed that $\sum_{i\in[n]} t_i \ge 3/2 = 3T/2$.
We show in the following that for all $i$ where $y_{[n]}^{(\beta)}(i)>0$, there is a $p\in (0,1)$ such that $\prob{}{s_i = t_i}>p$. As a consequence of $p$'s existence, we obtain the following inequalities:
\begin{equation}\label{eq:competitive}
\begin{aligned}
\text{\rmfamily\scshape Opt}_{\kph{}}(I) &\leq \frac{1}{g} \cdot \text{\rmfamily\scshape Opt}_{\kph{}}(I'_g)
\leq \frac{1}{g} \cdot \frac{1}{\beta} \cdot v_{[n]}^{(\beta)}([n])= \frac{1}{g\beta} \cdot \sum_{i\in [n]} t_ih_i y_{[n]}^{(\beta)}(i)\\
&\leq \frac{1}{g\beta p} \cdot \sum_{i\in [n]} \prob{}{s_i= t_i} t_ih_i
\leq \frac{1}{g\beta p} \cdot \sum_{i\in [n]} \prob{}{s_i= t_i} \cdot C \cdot \sum_{j=1}^{s_i}c_i(j)\\
&\leq \frac{C}{g\beta p} \cdot \expected{}{\text{Threshold Algorithm}(I)}\enspace.\\
\end{aligned}
\end{equation}
We lose the factor $C$ as we use the inequality from Def.~\ref{def:accuracy}.
The best possible value for the competitive ratio is the minimum of $C/(g\beta p)$.
We find it by maximizing $g\beta p$.
As $g$ and $\beta$ are settable variables, we determine $p$ first.
We define random variables $\zeta_i$ for all $i\in[n]$, where
\[\zeta_i = \left\{ \begin{array}{ll}
1 & \mbox{if } \pi(i)\in X\\
0 & \mbox{otherwise}\enspace.
\end{array}
\right.\]
As discussed in \cite{KnapsackSecretaryProblem},
conditioned on the value of $r$, $\pi^{-1}(X)$ is a subset of $[n]$ chosen uniformly at random among all subsets of $[n]$ containing exactly $r$ articles.
Since $r$ is chosen from $\Bin(n,1/2)$, it has the same distribution as the size of a uniformly at random chosen subset of $[n]$.
Therefore, $\pi^{-1}(X)$ is a uniformly chosen subset of all subsets of $[n]$.
The variables $\zeta_i$ are mutually independent Bernoulli random variables with $\prob{}{\zeta_i=1}=1/2$.
Now, we fix $j\in[n]$ with $y_{[n]}^{(\beta)}(j)>0$ and define two random variables:
\begin{align*}
Z_1 &:= w_{\pi([n])}^{(\beta)}(X\setminus \set{\pi(j)})
= \sum_{i\in [n]\setminus\set{j}} t_i\cdot y_{[n]}^{(\beta)}(i)\cdot\zeta_i\enspace,\\
Z_2 &:= w_{\pi([n])}^{(\gamma)}(Y\setminus \set{\pi(j)})
= \sum_{i\in {[n]}\setminus\set{j}} t_i\cdot y_{[n]}^{(\gamma)}(i)\cdot(1-\zeta_i) \enspace.
\end{align*}
Note that the event $\pi(j)\in Y$, i.e., $\zeta_j=0$, is independent of $Z_1$ and $Z_2$ since they are defined without $\pi(j)$.
The weights $t_i\cdot y_{[n]}^{(\beta)}(i)\cdot\zeta_i$ and $ t_i\cdot y_{[n]}^{(\gamma)}(i)\cdot(1-\zeta_i)$ within the sum are random variables taking values in $[0,g]$ since the instance is cut.
Now, we reason that
when article $j$ is revealed at position $\pi(j)$ to the Threshold Algorithm,
it has enough time to read $j$ with positive probability.
The next claim is only effective for $g<0.5$ since $Z_1$ and $Z_2$ are non-negative.
\begin{claim}\label{claim:conditionalEta}
Conditioned on $Z_1<\frac{1}{2}-g$ and $Z_2< 1-2g$,
the Threshold Algorithm sets $s_j=t_j$
if $\pi(j)\in Y$ because $h_j\geq \rho_X^{(1/2)}$ and it has enough time left.
\end{claim}
\begin{proof}
Since $t_j\leq gT=g$, article $j$ can only add at most $g$ weight to $X$ or $Y$.
Therefore, $w_{\pi([n])}^{(\beta)}(X)<1/2$ and $w_{\pi([n])}^{(\gamma)}(Y)<1-g$.
Recall that knapsacks of size $\beta$ and $\gamma$ are packed optimally and fractionally. Thus, a knapsack of size $\gamma$ would be full, i.e.,
$w_{\pi([n])}^{(\gamma)}(\pi([n]))=\min\set{\gamma, \sum_{i\in [n]} t_i}$.
Since we assumed in the beginning of the proof that $\gamma\leq \sum_{i\in [n]} t_i$, we have $w_{\pi([n])}^{(\gamma)}(\pi([n]))=\gamma$.
We can bound the weight of $X$ in this solution by $w_{\pi([n])}^{(\gamma)}(X) = \gamma- w_{\pi([n])}^{(\gamma)}(Y)>\gamma-(1-g)$. We choose $\gamma$ such that
$\gamma-(1-g) >1/2.$\footnote{In \cite{KnapsackSecretaryProblem}, it is implicitly assumed that $\sum_{i\in [n]} t_i\ge 3/2$ as their choice of $\gamma$ is $3/2$.}
Thus, the articles in $X$ add weights $w_{\pi([n])}^{(\beta)}(X) <1/2$ and $w_{\pi([n])}^{(\gamma)}(X) >1/2$ to their respective optimal solution.
Since $w_{\pi([n])}^{(\gamma)}(X) = \sum_{i\in X} t_iy^{(\gamma)}_{[n]}(i) >1/2$, the articles in $X$ have a combined length of at least $1/2$. Therefore, the optimal solution of \kph{} on $X$ with time budget $1/2$ satisfies the capacity constraint with equality, i.e., $w_{X}^{(1/2)}(X) =1/2$.
We obtain:
$w_{\pi([n])}^{(\gamma)}(X)>w_{X}^{(1/2)}(X)>w_{\pi([n])}^{(\beta)}(X).$
When knapsack~$A$ has a higher capacity than knapsack~$B$ on the same instance,
then the threshold density of knapsack~$A$ cannot be higher than the threshold density of knapsack~$B$.
For any $X$, the knapsack of size $\beta$ fills less than capacity $1/2$ with articles from $X$, while the knapsack of size $\gamma$ fills more than capacity $1/2$ with articles from the same $X$.
Since both knapsacks are packed by an optimal, fractional solution
that is computed offline on the whole instance, the respective threshold hint for articles from $X$ and articles from $\pi([n])$ is the same.
Therefore, we get the following ordering of the threshold hints:
$\rho_{\pi([n])}^{(\gamma)}\leq \rho_{X}^{(1/2)} \leq \rho_{\pi([n])}^{(\beta)}.$
Now, we show that when the algorithm sees $\pi(j)$, it has enough time left.
Let
$S^+ = \set{\pi(i)\in Y\setminus \set{\pi(j)}: h_i\geq \rho_{X}^{(1/2)}}$ be the set of articles that the algorithm can choose from.
Thus, the algorithm reads every article from $S^+$ (and maybe $\pi(j)$) if it has enough time left at the point when an article from $S^+$ arrives.
By transitivity, every article $\pi(i)\in S^+$ has $h_i\geq \rho_{\pi([n])}^{(\gamma)}$.
Therefore, for all but at most one\footnote{as the hints are distinct by Assumption~\ref{assum:Caverage}}
$\pi(i)\in S^+$, the equation $y_{[n]}^{(\gamma)}(i)=1$ holds.
Since the only article $i\in S^+$ that could have $y_{[n]}^{(\gamma)}(i)<1$ is not longer than $g$,
the total length of articles in $S^+$ can be bounded from above by
$\sum_{i\in S^+} t_i \leq g + w_{\pi([n])}^{(\gamma)}(Y\setminus\set{\pi(j)}) = g+Z_2<1-g.$
As $t_j\leq g$, the algorithm has enough time left to read article $j$ completely, when it arrives at position $\pi(j)$.
Moreover, if $y_{[n]}^{(\beta)}(j)>0$, then $h_{j}\geq \rho_{\pi([n])}^{(\beta)}\geq \rho_{X}^{(1/2)}$.
$\blacksquare$
\end{proof}
We now proceed with the main proof by showing a lower bound on
$p'=\mathbb{P}[Z_1< 1/2-g \text{ and } Z_2< 1-2g]$.
Recall that the event $\pi(j)\in Y$ is independent of $Z_1$ and $Z_2$.
With the preceding claim, we obtain:
\begin{equation}\label{eq:pprim}
\begin{aligned}
\prob{}{s_j=t_j} &= \prob{}{s_j=t_j\big| Z_1<1/2-g \text{ and } Z_2<1-2g}\cdot p'\\
&= \prob{}{\pi(j)\in Y\big| Z_1<1/2-g \text{ and } Z_2<1-2g}\cdot p' = \frac{1}{2}\cdot p'\enspace.
\end{aligned}
\end{equation}
Since we are searching for a lower bound $p<\prob{}{s_j=t_j}$, we use the lower bound for $p'$ multiplied by $1/2$ as the value for $p$.
Moreover,
\begin{equation}\label{eq:primbound}
\begin{aligned}
p' &=\prob{}{Z_1< 1/2-g \text{ and } Z_2< 1-2g} \\
&\geq 1-\prob{}{Z_1\geq 1/2-g} - \prob{}{ Z_2\geq 1-2g}.
\end{aligned}
\end{equation}
We can bound the probabilities $\prob{}{Z_1\geq 1/2-g}$ and $\prob{}{ Z_2\geq 1-2g}$ by the Chernoff Bound from Lem.~\ref{lemma:chernoff}.\footnote{Note that the
Chernoff Bound is indeed applicable since the random variables $t_iy_{[n]}^{(\beta)}(i)\zeta_i$ and $t_iy_{[n]}^{(\gamma)}(i)(1-\zeta_i)$ are discrete and $\zeta_i$ are mutually independent.}
The expected values of $Z_1,Z_2$ are bounded by
\begin{align*}
\expected{}{Z_1} &=
\frac{1}{2}\cdot\left(\beta - t_j\cdot y_{[n]}^{(\beta)}(j) \right) \leq \frac{\beta}{2}\enspace,\\
\expected{}{Z_2} &=
\frac{1}{2}\cdot\left(\gamma - t_j\cdot y_{[n]}^{(\gamma)}(j) \right) \leq \frac{\gamma}{2} \enspace.
\end{align*}
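These bounds follow from linearity of expectation together with $\expected{}{\zeta_i}=1/2$, assuming the fractional knapsack of size $\beta$ is packed full, i.e., $\sum_{i\in[n]} t_i\cdot y_{[n]}^{(\beta)}(i)=\beta$ (which holds since $\beta<\gamma\leq\sum_{i\in[n]}t_i$). A short derivation for $Z_1$ (the case of $Z_2$ is analogous):

```latex
\expected{}{Z_1}
  = \sum_{i\in[n]\setminus\set{j}} t_i\cdot y_{[n]}^{(\beta)}(i)\cdot\expected{}{\zeta_i}
  = \frac{1}{2}\sum_{i\in[n]\setminus\set{j}} t_i\cdot y_{[n]}^{(\beta)}(i)
  = \frac{1}{2}\left(\beta - t_j\cdot y_{[n]}^{(\beta)}(j)\right)
  \leq \frac{\beta}{2}\enspace.
```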
\noindent We use $z_{max}=g$, $\delta_1 = (1-2g)/\beta-1>0$, and $\delta_2 = (2-4g)/\gamma - 1>0$ to obtain:
\begin{equation}\label{eq:pr1pr2}
\begin{aligned}
\prob{}{Z_1\geq 1/2-g}
&<\exp \left(\left(1-\frac{1}{2 g}\right) \cdot \ln \left(\frac{1-2 g}{\beta}\right)-1+\frac{1-\beta}{2 g}\right)\enspace,\\[.3cm]
\prob{}{ Z_2\geq 1-2g}
&<\exp \left(\left(2-\frac{1}{g}\right) \cdot \ln \left(\frac{2-4g}{\gamma}\right)-2+\frac{1-\gamma/2}{g}\right)\enspace.
\end{aligned}
\end{equation}
To conclude the proof,
the final step is numerically maximizing the lower bound on $g\beta p'/2$
obtained by combining Equations~(\ref{eq:pprim}),~(\ref{eq:primbound})~and~(\ref{eq:pr1pr2}):
\begin{samepage}
\begin{equation}\label{eq:maximization}
\begin{array}{r@{\quad}l}
\displaystyle \max_{g,\beta,\gamma} &\frac{\beta g}{2} \cdot \bigg(1-\exp \Big(\big(1-\frac{1}{2 g}\big) \cdot \ln \big(\frac{1-2 g}{\beta}\big)-1
+\frac{1-\beta}{2 g}\Big) \\
& \quad\quad \quad\ - \exp \Big(\big(2-\frac{1}{g}\big) \cdot \ln \big(\frac{2-4g}{\gamma}\big)-2 +\frac{1-\gamma/2}{g}\Big) \bigg)
\end{array}
\end{equation} \vspace*{-0.25cm}
\begin{equation*}
\begin{array}{r@{\quad\quad}l@{\quad\quad}l@{\quad\quad}l}
\textrm{s.t.} & \gamma+g>1.5 & 2g+\beta<1 & 4g+\gamma<2 \\
& 0<g< 0.5 & 0<\beta < 1 & \gamma > 1
\end{array}
\end{equation*}
\end{samepage}
We do not use $\gamma\leq \sum_{i\in[n]} t_i$ as a constraint because the other constraints already imply that $\gamma < 2$, and we assume that the combined length of all articles is huge compared to the time budget.
Numerical maximization of (\ref{eq:maximization}) using \cite{maxima}
yields
$g~=~0.021425,$ $\beta~=~0.565728,$ $ \gamma~=~1.478575.$
As we set $p$ to the lower bound on $p'/2$, we can plug these values into Equation~(\ref{eq:competitive}), so
$
\text{\rmfamily\scshape Opt}(I)< 246 C \cdot \expected{}{\text{Threshold Algorithm}(I)}.
$
{\qed}
\end{proof}
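The numerical maximization in the proof was carried out with \cite{maxima}; the following self-contained script (a hypothetical re-implementation, not the original toolchain) evaluates the objective of (\ref{eq:maximization}) at the reported optimum and recovers a constant close to $246$:

```python
# Hypothetical re-check of the numerical maximization: evaluate the objective
# at the optimum reported in the text (the paper used the Maxima CAS).
import math

def objective(g, beta, gamma):
    # the two Chernoff tail bounds on P[Z_1 >= 1/2 - g] and P[Z_2 >= 1 - 2g]
    tail1 = math.exp((1 - 1 / (2 * g)) * math.log((1 - 2 * g) / beta)
                     - 1 + (1 - beta) / (2 * g))
    tail2 = math.exp((2 - 1 / g) * math.log((2 - 4 * g) / gamma)
                     - 2 + (1 - gamma / 2) / g)
    return beta * g / 2 * (1 - tail1 - tail2)  # lower bound on beta*g*p'/2

g, beta, gamma = 0.021425, 0.565728, 1.478575
# feasibility checks from the constraints of the maximization problem
assert 2 * g + beta < 1 and 4 * g + gamma < 2 and 0 < g < 0.5 and gamma > 1

value = objective(g, beta, gamma)
# 1/value is (up to the factor C) the constant in the competitive ratio
```

At the reported parameters, the reciprocal of the objective lands close to $246$, consistent with the stated bound of $246C$.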
It is interesting to note that our proof suggests limiting the time for each article to about 2\% of the time budget in order to maximize the expected total information gain.
The analysis from Lem.~4 in \cite{KnapsackSecretaryProblem} uses $\beta=3/4$, $\gamma=3/2$ and $t_i \le 1/81$ for all $i\in[n]$.
Applying their analysis on the Threshold Algorithm for $g=1/81$ yields an upper bound of $162\ensuremath{\mathrm{e}} C$ on its competitive ratio.
For these parameters, the optimized analysis in Thm.~\ref{theorem:threshold} gives an upper bound of $125.77\ensuremath{\mathrm{e}} C$.
\section{Open Questions} \label{sect:conclusion}
An open question is whether the analysis of the Threshold Algorithm is tight.
Although we optimize to find the best possible $g$, the Chernoff bound we use is not applicable for $g\geq 1/6$. Moreover, the combined articles' lengths, cut with respect to $g$, have to be at least $1.48$ times the time budget.
For the sake of improving the analysis, a different approach has to be investigated.
We informally related diminishing information gain over time to efficient reading strategies. It would be interesting to formalize this relation with respect to spatial information profiles.
Further directions involve the exploration of new settings and extensions.
There are different measures of the accuracy of the hints worth investigating.
An example would be to interpret the information rate as a distribution of information and the hint to be a random variable drawn from this distribution.
Then, the algorithm's performance is dependent on the information rate's expectation and standard deviation.
An interesting task is to develop an algorithmic strategy for the setting where the length $t_i$ of an article
is not revealed when it arrives.
We believe that the studied techniques in this paper can be used for this setting
if the information gain diminishes, e.g., logarithmically as a function of time, while reading any article.
Another reasonable setting is the one where articles appear in a non-uniform, but still random order.
This is suitable for reading articles since many websites present articles in a categorized order or using recommender systems, where articles are sorted based on the user's preferences. In that light, it would also make sense to extend the scalar hint to a multi-dimensional feature vector. The investigation of related \emph{learning-augmented} online algorithms would be a further interesting development. The idea of a threshold can be considered in that direction: Instead of the learning phase in the Threshold Algorithm, an external threshold can be considered, e.g., from past experience, gut feeling, or rating by a recommender system.
In our opinion, the most interesting extension is to allow the reader to mark a restricted number of articles and return to these articles at any point in time.
For secretary problems with submodular objective functions,
the setting where an algorithm is allowed to remember items and select the output after seeing the whole instance
has recently been discussed by \cite{shortlist}.
Here, they achieve a competitive ratio that is arbitrarily
close to the offline version's lower bound on the approximation factor.
This extension combined with a cross reading strategy, unknown reading lengths, and articles sorted by categories or preferences is the closest setting to real-life web surfing.
\appendix
\end{document}
\begin{document}
\noindent This is a pre-print version of the article \textit{Sharp Estimates for the Principal Eigenvalue of the $p-$Operator} which appeared in Calculus of Variations and Partial Differential Equations 57:49 on April 18th 2018,
{\centering \url{https://link.springer.com/article/10.1007/s00526-018-1331-0}}.
\\
\title{Sharp Estimates for the Principal Eigenvalue of the $p-$Operator}
\author{Thomas Koerber}
\address{ Albert-Ludwigs-Universit\"at Freiburg, Mathematisches Institut, Eckerstr. 1, D-79104 Freiburg, Germany \\
Tel.: +49-761-203-5614}
\email{[email protected]}
\begin{abstract}
Given an elliptic diffusion operator $L$ defined on a compact and connected manifold (possibly with a convex boundary in a suitable sense) with an $L$-invariant measure $m$, we introduce the non-linear $p-$operator $L_p$, generalizing the notion of the $p-$Laplacian. Using techniques of the intrinsic $\Gamma_2$-calculus, we prove the sharp estimate $\lambda\geq (p-1)\pi_p^p/D^p$ for the principal eigenvalue of $L_p$ with Neumann boundary conditions under the assumption that $L$ satisfies the curvature-dimension condition BE$(0,N)$ for some $N\in[1,\infty)$. Here, $D$ denotes the intrinsic diameter of $L$. Equality holds if and only if $L$ satisfies BE$(0,1)$. We also derive the lower bound $\pi^2/D^2+a/2$ for the real part of the principal eigenvalue of a non-symmetric operator $L=\Delta_g+X\cdot\nabla$ satisfying $\operatorname{BE}(a,\infty)$.
\end{abstract}
\date{}
\maketitle
\section{Introduction}
\label{introduction}
Estimating the principal eigenvalue of second-order operators is an important topic for various reasons. In numerical analysis, the principal eigenvalue corresponds to the convergence rate of numerical schemes, and good estimates can lead to an optimization of these schemes. On the other hand, the first eigenvalue often corresponds to the optimal constant of Poincar\'e-type inequalities. While a good understanding of such constants is important for numerical purposes, it can be useful for purely mathematical reasons too (see for instance the solution of the Yamabe problem \cite{yamabe}). Finally, in quantum mechanics, the principal eigenvalue describes the energy of a particle in the ground state (see for instance \cite{cohen}), whereas in thermodynamics, it gives a lower bound on the decay rate of certain heat flows (see \cite{widder}).\\
\indent Given its physical and mathematical importance, the first eigenvalue of the Laplace-Beltrami operator on compact Riemannian manifolds with Neumann boundary conditions has been studied in various articles: one of the first remarkable results was \cite{payne} in 1960, where Payne and Weinberger showed the sharp estimate $\lambda\geq\pi^2/D^2$ for the first eigenvalue of the Laplacian defined on a convex and bounded open subset of $\mathbb{R}^n$ with diameter $D$. In 1970, by relating the principal eigenvalue of $L$ to the isoperimetric constant of a Riemannian manifold, Cheeger showed in \cite{cheeger} that the principal eigenvalue can be estimated by a quantity depending only on the diameter, the Ricci-curvature and the dimension. In 1980, assuming non-negative Ricci-curvature, Li and Yau (\cite{li}) showed the estimate $\lambda\geq\pi^2/4D^2$ for any compact Riemannian manifold, possibly with convex boundary, using a gradient estimate technique. In 1984, Zhong and Yang finally derived the sharp estimate
$
\lambda\geq\pi^2/D^2
$
using a barrier argument (see \cite{Yang}). Afterwards, Chen and Wang (\cite{chen}), and also Kr\"oger (\cite{kroger}), recovered this result independently by comparing the principal eigenvalue to the first eigenvalue of a one-dimensional model space. While Chen and Wang used a variational formula, Kr\"oger used a gradient comparison technique. Meanwhile, more general linear elliptic operators have been studied, and in \cite{emery}, Bakry and Emery introduced intrinsic objects like a generalized metric $\Gamma$, a Hessian $H$, a diameter $D$, and Ricci curvature $R$ related to a so-called diffusion operator $L$. They also introduced a curvature dimension condition solely depending on $L$, which is now known as $\operatorname{BE}(\kappa,N)$, where $\kappa$ is a lower bound for the curvature and $N$ an upper bound for the dimension. In general, $N$ does not coincide with the topological dimension of the manifold. In the year 2000, Bakry and Qian used techniques similar to \cite{kroger} to obtain the sharp estimate $\lambda\geq\pi^2/D^2$ for the principal eigenvalue of $L$ assuming the condition $\operatorname{BE}(0,N)$ for some $N\in[1,\infty)$ and ellipticity (see \cite{Bakry1}). Thereby, they also obtained sharp results for positive or negative lower bounds on the Ricci-curvature. It is worth remarking that positive bounds are considerably easier to deal with and that Lichnerowicz already obtained the sharp estimate $\lambda\geq \kappa N$ in 1958, where $\kappa>0$ is a lower bound for the Ricci-curvature and $N$ the dimension of the manifold (see \cite{lich}).\\
\indent Recently, the attention has turned towards non-linear operators, especially the so-called $p$-Laplacian whose applications range from the description of non-Newtonian fluids (see \cite{newtonian}) to non-linear elasticity problems (\cite{elasticity}). Remarkably, the principal eigenvalue of the $p$-Laplacian could also be linked to the Ricci flow (see \cite{wu}). In 2003, Kawai and Nakauchi showed the lower bound $\lambda\geq \frac{1}{p-1}\frac{\pi_p^p}{(4D)^p}$ for the principal eigenvalue of the $p-$Laplacian with Neumann boundary conditions assuming non-negative Ricci-curvature and $p>2$. Here, $\pi_p$ denotes one half of the period of the so-called $p$-sine function. Similarly to \cite{Bakry1}, they used a gradient comparison technique which they proved with a maximum principle involving the $p$-Laplacian. In 2007, Zhang improved this result to $\lambda\geq (p-1)\pi_p^p/(2D)^p$ for any $p>1$, assuming non-negative Ricci-curvature and at least one point with positive Ricci-curvature. In 2012, assuming non-negative Ricci-curvature, Valtorta finally obtained the sharp estimate $\lambda \geq (p-1)\pi^p_p/D^p$ valid for the first eigenvalue of the $p-$Laplacian for any $p\in(1,\infty)$. The first improvement in his proof was a generalization of the celebrated Bochner formula to the linearization of the $p$-Laplacian, which yielded an improved gradient comparison with a one-dimensional model space using a maximum principle argument. The second improvement was a better estimate for the maximum of the eigenfunction, motivated by the techniques in \cite{Bakry1}. In \cite{val2}, these results were extended to negative lower bounds for the Ricci-curvature using similar techniques with a slightly more complicated model space. \\
\indent In this paper, we combine the approaches of \cite{Bakry1} and \cite{val1} and define a non-linear $p$-operator $L_p$, which arises from a generic elliptic diffusion operator $L$ defined on a compact differentiable manifold which is allowed to have a convex boundary (in a suitable sense). More precisely, we define $L_pu:=\Gamma(u)^\frac{p-2}{2}(Lu+(p-2)H_u(u,u)/\Gamma(u))$, where $\Gamma$ is the so-called Carr\'e du Champ operator, which can be seen as a metric on $T^*M$ induced by $L$. Using intrinsic objects similar to \cite{Bakry1} and constraints which solely depend on the operator $L$, we generalize the approach by \cite{val1}. In particular, we prove the following theorem:
\begin{theorem}
Let $M$ be a compact and connected smooth manifold with an elliptic diffusion operator $L$ with a smooth and $L$-invariant measure $m$ and let $\lambda$ be the principal Neumann-eigenvalue of the $p-$operator $L_pu:=\Gamma(u)^\frac{p-2}{2}(Lu+(p-2)H_u(u,u)/\Gamma(u))$. If $L$ satisfies $\operatorname{BE}(0,N)$ for some $N\in[1,\infty)$ and if the boundary of $M$ is either empty or convex, then the sharp estimate
\begin{align*}
\lambda\geq (p-1)\frac{\pi_p^p}{D^p}
\end{align*}
holds, where $D$ is the diameter associated with the intrinsic metric $d$. Equality holds if and only if $L$ satisfies BE$(0,1)$.
\label{maintheorem1}
\end{theorem}
\indent We emphasize that the constraints are satisfied by a much larger class than the Laplace-Beltrami operators. For instance, every operator, which satisfies the condition BE$(\kappa,N_0)$ for some $\kappa>0$ and $N_0<\infty$, satisfies BE$(0,N)$ for some large $N>N_0$ if $M$ is compact. In particular, our result applies to certain Bakry-Emery Laplacians in the form of $L=\Delta_g+\nabla\phi\cdot\nabla $.\\
The rest of this paper is organized as follows:
in section 2, we briefly review the definitions of $\Gamma_2$-calculus introduced by Bakry, Emery, and Ledoux (\cite{emery}, \cite{bakry2}), define the $p$-operator and discuss the existence of the first eigenvalue. In section 3, we prove Theorem 1.1. More precisely, we prove a generalized $p$-Bochner formula for the linearization $\mathcal{L}^u_p$ of $L_p$ and use a self-improvement property of the $\operatorname{BE}(0,N)$ condition similar to \cite{Bakry1} to obtain a good estimate for $\mathcal{L}^u_p(\Gamma(u)^\frac{p}{2})$. Next, we use a maximum principle argument in the fashion of \cite{val1} to compare the gradient of the eigenfunction with a suitable one-dimensional model space. This also yields a maximum comparison similar to \cite{val1} and allows us to obtain a sharp estimate for the principal eigenvalue. In the case $N=\infty$ the maximum comparison breaks down and we obtain a weaker estimate, which we expect not to be sharp. We will also address the question of equality: While the estimate is sharp regardless of the dimension, equality can only be attained if $\dim(M)=1$ and $L=\Delta_g$ for some Riemannian metric $g$. In particular, $L$ must satisfy the condition BE$(0,1)$, which is in line with the results obtained by Hang and Wang in \cite{hang} and Valtorta in \cite{val1}. Similarly to the $p$-Laplacian, we expect the introduced $p$-operator to be useful to model various non-linear problems in physics. \\
\indent In section 4, we turn our attention towards non-symmetric operators of the form $L=\Delta_g+X\cdot\nabla$ for some Riemannian metric $g$, that is, operators that might not possess an invariant measure. Non-symmetric operators are important in quantum mechanics (see \cite{nonqm}) and can be used to describe damped oscillators. It is a well known fact that such operators have a discrete and typically complex spectrum.
Only recently, in \cite{andrews}, Andrews and Ni proved a lower bound for the first eigenvalue of the so-called Bakry-Emery Laplacian, which is symmetric with respect to a conformal measure. The main ingredient in their proof is a comparison theorem for the modulus of continuity of solutions of the heat equation with drift, which is a variation of the argument in the celebrated proof of the fundamental gap conjecture by Andrews and Clutterbuck (see \cite{andrews2}). In 2015, Wolfson proved a generalized fundamental gap conjecture for non-symmetric Schr\"odinger operators. In the same spirit, we generalize the results obtained in \cite{andrews} to non-symmetric operators; that is, we do not require the first order part to be the gradient of a function. Here, we restrict ourselves to the linear case $p=2$. More precisely, we will prove the following theorem:
\begin{theorem}
Let $(M,g)$ be a compact and connected Riemannian manifold, possibly with a strictly convex boundary together with a non-symmetric diffusion operator $L=\Delta_g+X\cdot\nabla$ satisfying $\operatorname{BE}(a,\infty)$. Let $\lambda$ be a non-zero eigenvalue of $L$ with Neumann Boundary conditions. Then one has the estimate
$$
\operatorname{Re}(\lambda)\geq \frac{\pi^2}{D^2}+\frac{a}{2},
$$
where $D$ is the diameter of the Riemannian manifold.
\label{maintheorem2}
\end{theorem}
The main difference to the result in \cite{andrews} is that we do not impose any additional constraints on the first order term $X$, whereas \cite{andrews} requires $X$ to be the gradient of a function $\phi$. Such operators are called Bakry-Emery Laplacians and have the invariant measure $e^{-\phi}dVol$, which implies that the spectrum is real. If $X$ is not a gradient, then even the principal eigenvalue is generally complex. \\ \indent We follow \cite{andrews} to prove the theorem: indeed, we compare the decay of a heat flow associated with the operator $L$ to the decay of the heat flow in a one-dimensional model space, and the estimate is obtained by using a maximum principle argument. Contrary to the proof of the first theorem, the argument relies heavily on the Riemannian geometry induced by the operator $L$. The distortion of the geometry induced by the first order term $X$ will only play a minor part. Although the eigenvalue comparison with the model space is sharp, we prefer to state the lower bound $\pi^2/D^2+a/2$, which is not sharp, but more useful. A better lower bound can be obtained through a better understanding of the model function. \\
\indent For the non-linear case $p\neq 2$, it is not even clear if an eigenvalue exists. Even if the existence could be established, our methods would not be applicable:
the approach we used to obtain the sharp estimate for the principal eigenvalue of the non-linear $p$-operator explicitly exploits that $L$ is self-adjoint with respect to the invariant measure $m$, particularly that $\lambda$ is real, whereas the argument in our second approach relies heavily on the linearity of $L$. Hence, it does not seem that they could be generalized to the non-symmetric, non-linear case.
\section{Preliminaries}
\label{preliminaries}
\subsection{The geometry of diffusion operators}
We repeat the basic definitions of $\Gamma_2$-calculus in the setting of a smooth manifold and refer to \cite{bakry2} for a good introduction. Let $M$ denote a connected smooth manifold, which is allowed to have a boundary.
\begin{definition}A linear second-order operator $L:C^\infty(M)\to C^\infty(M)$ is defined to be an elliptic diffusion operator \label{diffusion.operatprs}
if for any smooth function $\psi:\mathbb{R}^r\to\mathbb{R}$, any $f,f_i\in C^{\infty}(M)$
and at every point $x\in M$ one has
$$
L(\psi(f_1,\dots,f_r))=\sum_{i=1}^{r}\partial_i\psi L(f_i)+\sum_{i,j=1}^{r}\partial_i\partial_j\psi\Gamma(f_i,f_j)
$$
and $\Gamma(f,f)\geq0$ with equality if and only if $df=0$. Here,
$\Gamma$ denotes the so-called Carr\'e du Champ operator; that is,
$$
\Gamma(f,g):=\frac12\bigg(L(fg)-fLg-gLf\bigg).
$$
\end{definition}
One easily verifies that diffusion operators are exactly the linear, possibly degenerate elliptic differential operators depending only on the first and second derivatives but not on the function itself. If $L$ is elliptic, then the operator $\Gamma$ induces a Riemannian metric $g$ which satisfies $\Gamma(f,f)=|\nabla _g f|^2$ and $L=\Delta_g+X\cdot\nabla_g$ for some vector field $X$. If $X=\nabla_g \phi$ for some smooth function $\phi$, then $L$ is called a Bakry-Emery Laplacian, and one easily verifies that $L$ is the Laplace-Beltrami operator of some metric $\tilde g$ if and only if $\phi$ is constant.
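For instance, for the Laplace-Beltrami operator $L=\Delta_g$, the product rule yields the familiar formula (a routine check, included for concreteness):

```latex
\Gamma(f,h)
  = \frac12\Big(\Delta_g(fh) - f\Delta_g h - h\Delta_g f\Big)
  = \frac12\Big(f\Delta_g h + h\Delta_g f + 2\,\nabla_g f\cdot\nabla_g h
      - f\Delta_g h - h\Delta_g f\Big)
  = \nabla_g f\cdot\nabla_g h\enspace.
```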
\par In order to view the pair $(M,L)$ as a metric measure space we need the following definitions:
\begin{definition}
\label{invariance}
We say that a locally finite Borel measure $m$ is $L$-invariant if there is a generalized function $\nu$ such that
$$
\int_M \Gamma(f,h)dm=-\int_M fLhdm+\int_{\partial M}f\Gamma(h,\nu)dm
$$
holds for all smooth $f,h$. Here, $\nu$ is called the outward normal function and is defined to be a set of pairs $(\nu_i,U_i)_{i\in I}$ for a covering $U_i$ of $\partial M$ such that $\nu_i\in C^\infty(U_i)$ and $\Gamma(\nu_i-\nu_j,\cdot)|_{U_i\cap U_j}\equiv 0$. Furthermore, we say that $m$ is smooth if the pushforward of $m$ by any chart of $M$ has a smooth density with respect to the Lebesgue measure.
\end{definition}
\begin{definition}
The intrinsic distance $d:M\times M \to [0,\infty]$ is defined by
$$
d(x,y):=\operatorname{sup}\bigg\{f(x)-f(y) \big| f\in C^\infty(M), \Gamma(f)\leq1\bigg\}
$$
and the diameter of $M$ by $D:=\sup\{d(x,y)|x,y\in M\}$.
\end{definition}
The above definitions can be iterated to produce the Hessian and $\Gamma_2$-operator. These operators will then induce a geometry on $(M,L)$.
\begin{definition}
For any $f,a,b\in C^{\infty}(M)$, we define the Hessian by
$$
H_f(a,b)=\frac12\bigg(\Gamma(a,\Gamma(f,b))+\Gamma(b,\Gamma(f,a))-\Gamma(f,\Gamma(a,b))\bigg)
$$
and the $\Gamma_2-$operator by
$$
\Gamma_2(a,b)=\frac12\bigg(L(\Gamma(a,b))-\Gamma(a,Lb)-\Gamma(b,La)\bigg).
$$
\end{definition}
The Hessian only depends on the second order terms of $L$ and thus can be seen as the Hessian of some Riemannian metric $g$. On the other hand, $\Gamma_2$ also depends on first order terms of $L$ and thus induces a geometry which in general cannot be seen as a Riemannian object.
\begin{definition}
For $N\in[1,\infty]$, $x\in M$ and $f\in C^{\infty}(M)$, we define the $N-$Ricci-curvature pointwise by
$$
R_N(f,f)(x)=\inf\bigg\{\Gamma_2(\phi,\phi)(x)-\frac{1}{N}(L\phi)^2(x)\big|\phi\in C^{\infty}(M), \Gamma(\phi-f)(x)=0\bigg\}
$$
and the Ricci-curvature by $R:=R_\infty$, where we use the convention $1/\infty=0$.\\
Let $k\in\mathbb{R}$ and $N\in[1,\infty]$. We say that $L$ satisfies
$\operatorname{BE}(k,N)$ (the \textit{Bakry-Emery curvature-dimension condition}) if and only if
\begin{align}
R_N(f,f)\geq k\Gamma(f). \label{BE}
\end{align}
for any $f\in C^{\infty}(M)$.
This inequality is called the \textit{curvature dimension inequality}.
\end{definition}
\begin{remark}
The Bochner-formula implies that $\Delta_g$ satisfies $\operatorname{BE}(\kappa,N)$ if and only if $\operatorname{ric}\geq \kappa$ and $\dim(M)\leq N$, but in general, the curvature-dimension condition does not have such an intuitive meaning. For instance, one can easily show that the Ornstein-Uhlenbeck operator $L(f):=\Delta f-x\cdot\nabla f$ on $M=\mathbb{R}^N$ induces the Euclidean distance and satisfies BE$(1,\infty)$. However, one has $R_N\equiv-\infty$ for all $N\in[1,\infty)$. So we see that the geometry induced by $L$ is very different from the geometric situation in $\mathbb{R}^N$ equipped with the standard inner product.
\end{remark}
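To make the remark concrete, here is the standard computation (a sketch using the Euclidean Bochner identity $\frac12\Delta|\nabla f|^2=|\nabla^2 f|^2_{HS}+\nabla f\cdot\nabla\Delta f$) verifying that the Ornstein-Uhlenbeck operator satisfies BE$(1,\infty)$:

```latex
% For Lf = \Delta f - x\cdot\nabla f on \mathbb{R}^N one has \Gamma(f)=|\nabla f|^2 and
\Gamma_2(f,f)
  = \tfrac12 L\big(|\nabla f|^2\big) - \Gamma(f,Lf)
  = |\nabla^2 f|^2_{HS} + |\nabla f|^2
  \;\geq\; \Gamma(f)\enspace,
% which is exactly the curvature dimension inequality with k=1 and N=\infty:
% the drift term contributes the summand +|\nabla f|^2.
```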
\par \indent
In spite of such unexpected behaviour, the Ricci curvature turns out to be computable in an easier way. Indeed, in \cite{sturm1}, Sturm showed a generalized Bochner formula:
\begin{theorem}\label{2.bochner.formula}
For any diffusion operator defined on a Riemannian manifold, we have
$$
\Gamma_2(f,f)=R(f,f)+|H_f|^2_{HS}
$$
for each $f\in C^\infty(M)$, where $|H_f|^2_{HS}$ denotes the square of the Hilbert-Schmidt norm of the Hessian, that is, $|H_f|^2_{HS}(x):=\operatorname{sup}\big\{\sum_{i,j=1}^{\tilde N(x)}|H_f(e_i,e_j)|^2(x)\big|\{e_i\} \text{ is an ONB of } \Gamma \text{ at } x\big\}$.
\end{theorem}
When studying the principal eigenvalue, the geometry of $\partial M$ will also play an important part, hence we make the following definition.
\begin{definition} \label{s.fund.form}
Let $\nu$ be the outward normal function, $U\subset M$ be an open set,
and $\phi,\eta\in C^\infty(U)$, such that $\Gamma({\nu},\eta),\Gamma({\nu},\phi)\equiv0$ on $U\cap\partial M$. Then we define the second fundamental form on $\partial M$ by
$$
II(\phi,\eta)=-H_\phi(\eta,\nu)=-\frac12\Gamma(\nu,\Gamma(\eta,\phi)).
$$
If for any $\phi\in C^\infty(U)$ as above with $\Gamma(\phi)>0$ on $\partial M\cap U$ we have that
$$
II(\phi,\phi)\leq0
$$
on $\partial M \cap U$, then we say that $\partial M$ is convex or that $M$ has a convex boundary. If the inequality is strict, we say that $M$ has a strictly convex boundary.
\end{definition}
\par \indent
As an example, we consider the Bakry-Emery Laplacian $L=\Delta_{\bar g}+\nabla_{\bar g} \phi\cdot\nabla_{\bar g}$ for some Riemannian metric $\bar g$ and a smooth function $\phi$. We define the conformal metric $g:=e^{-\frac{2}{N}\phi}\bar g$ and let $m=dVol_g,\bar m=dVol_{\bar g}$. Obviously, we have $e^\phi m=\bar m$. Using the divergence theorem, one easily sees that $m$ is $L$-invariant. Furthermore, one can show that the Ricci-curvature of $L$ is the Ricci-curvature of $\bar g$ plus a first-order perturbation of the metric $\bar g$ which can be controlled by the $C^2$-norm of $\phi$. So if $\operatorname{ric}_{\bar g}\geq\kappa \bar g$ and $\phi$ is not too big with respect to its $C^2$ norm, we obtain $R\geq \kappa' \Gamma$ for some $0<\kappa'<\kappa$. Now the Bochner formula, the inequality $(\operatorname{tr}H_f)^2\leq \dim(M)|H_f|^2_{HS}$, and the Cauchy-Schwarz inequality imply that $L$ satisfies BE$(\kappa ',N)$ for some finite $N>\dim(M)$.
\subsection{$p$-operators}
In this section, we will assume that $M$ is a closed manifold and $L$ an elliptic diffusion operator with a smooth $L$-invariant measure $m$. The measure $m$ induces the space $W^{1,p}(M)$, and we can use the Riemannian metric induced by $L$ to define the spaces $C^{k,\alpha}(M)$. We define the $p$-operator to be the natural generalization of the $p$-Laplacian.
\begin{definition}
Let $p\in(1,\infty)$ and $f\in C^\infty(M)$. We define the $p-$operator $L_p$ by
$$
L_pf(x):=\begin{cases}
\Gamma(f)^{\frac{p-2}{2}}\bigg(Lf+(p-2)\frac{H_f(f,f)}{\Gamma(f)}\bigg)\bigg|_x & \text{if } \Gamma(f)(x)\neq 0 \\
0 & \text{else}
\end{cases}
$$
and the main part of its linearization at $f$ by
$$
\mathcal{L}^f_p(\psi)=\begin{cases}
\Gamma(f)^{\frac{p-2}{2}}\bigg(L\psi+(p-2)\frac{H_\psi(f,f)}{\Gamma(f)}\bigg)\bigg|_x & \text{if } \Gamma(f)(x)\neq 0 \\
0 & \text{else}
\end{cases}
$$
for any $\psi\in C^{\infty}(M)$.
\end{definition}
\begin{remark} Since $L_p$ is quasi-linear, we have $\mathcal{L}^f_p(f)=L_p(f)$.
The $p-$operator can often be seen as a first order perturbation of a $p-$Laplacian: Let $L=\Delta_{\tilde g}+\nabla\phi\cdot\nabla$ for some Riemannian metric $\tilde g$. Then we have $\Gamma (f)=|\nabla f|_{\tilde g}^2$ and $H=H^{\tilde g}$. Hence, we have
$$
L_p(f)=\Delta^p_{\tilde g}f+|\nabla f|^{p-2}_{\tilde g}\nabla \phi\cdot\nabla f.
$$
\end{remark}
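As a sanity check of this remark, the following sketch works in the illustrative one-dimensional model $M=\mathbb{R}$ with $Lf=f''+\phi'f'$, so that $\Gamma(f)=(f')^2$ and $H_f(f,f)=f''(f')^2$; it compares the definition of $L_p$ with the perturbed-$p$-Laplacian form at a few sample points (the choices $f=\sin$, $\phi=\cos$ are arbitrary):

```python
import math

# Illustrative assumption: M = R, L f = f'' + phi' f', so Gamma(f) = (f')^2
# and H_f(f, f) = f'' (f')^2.  The definition of L_p then gives
#   L_p f = |f'|^(p-2) ((p-1) f'' + phi' f') = Delta_p f + |f'|^(p-2) phi' f'.

def L_p(fp, fpp, phip, p):
    # definition: Gamma(f)^((p-2)/2) * (L f + (p-2) H_f(f,f)/Gamma(f))
    gamma = fp ** 2
    Lf = fpp + phip * fp
    H = fpp * fp ** 2
    return gamma ** ((p - 2) / 2) * (Lf + (p - 2) * H / gamma)

def perturbed_p_laplacian(fp, fpp, phip, p):
    # Delta_p f + |f'|^(p-2) phi' f', with Delta_p f = (p-1)|f'|^(p-2) f''
    return (p - 1) * abs(fp) ** (p - 2) * fpp + abs(fp) ** (p - 2) * phip * fp

p = 3.5
ok = True
for x in [0.3, 0.7, 1.1, 2.0]:
    fp, fpp = math.cos(x), -math.sin(x)   # f = sin
    phip = -math.sin(x)                   # phi = cos
    ok = ok and abs(L_p(fp, fpp, phip, p)
                    - perturbed_p_laplacian(fp, fpp, phip, p)) < 1e-10
assert ok
```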
Next we would like to define an eigenvalue of $L_p$. Adjusting for scaling factors, we expect an eigenfunction $u$ with eigenvalue $\lambda$ to satisfy
\begin{align*}
\begin{cases}
&L_pu=-\lambda u|u|^{p-2} \text{ on } M^\circ\\
&\Gamma(u,\tilde \nu)=0 \text{ on }\partial M
\end{cases}
\end{align*}
in a suitable sense. Since $L_pu$ will not be defined everywhere, we have to integrate the equation. We start with the following lemma which can easily be deduced from the invariance of $m$.
\begin{lemma}\label{int.by.parts}
Let $\phi\in{C^\infty(M)}$ and $f\in C^2(\operatorname{supp}(\phi))$ as well as $\Gamma(f)>0$ on $\operatorname{supp}(\phi)$. Then we have
$$
\int_M \phi L_pf\,dm=-\int_M\Gamma(f)^{\frac{p-2}{2}}\Gamma(f,\phi)\,dm+\int_{\partial M} \Gamma(f,\tilde{\nu})\Gamma(f)^{\frac{p-2}{2}}\phi\, dm'.
$$
\end{lemma}
This formula suggests the following weak eigenvalue equation. Homogeneous Neumann boundary conditions will arise naturally from this definition if $M$ has a non-empty boundary.
\begin{definition}
We say that $\lambda$ is an eigenvalue of $L_p$ if there is a
$u\in {W^{1,p}(M)}$, such that for any $\phi\in {C^{\infty}(M)}$ the following identity holds:
$$
\int_M\Gamma(u)^{\frac{p-2}{2}}\Gamma(u,\phi)dm=\lambda\int_M \phi u|u|^{p-2}dm.
$$
By density, this also holds for all $\phi\in W^{1,p}(M)$.
\end{definition}
Choosing $\phi\equiv 1$, we see that $\int_M u|u|^{p-2}dm=0$ unless $\lambda=0$. This a priori constraint allows us to show the existence of the principal eigenvalue using standard variational techniques.
\begin{lemma} \label{existence.regularity}
Let $M$ be a compact and smooth Riemannian manifold with an elliptic diffusion operator $L$ and a smooth $L$-invariant measure $m$. Then the principal eigenvalue of $L_p$ (with Neumann boundary conditions) is well-defined and the eigenfunction $u$ is in
$C^{1,\alpha}(M)$ for some $\alpha>0$. $u$ is smooth near points $x\in M$ satisfying
$\Gamma(u)(x),u(x)\neq 0$, and in $C^{3,\alpha}$ and $C^{2,\alpha}$
for $p>2$ and $p<2$, respectively, near points with $\Gamma(u)(x)\neq0$ and $u(x)= 0$.
\end{lemma}
\begin{proof}
This follows by applying the direct method of the calculus of variations to the quantity
$$
\sigma_p:=\inf\bigg\{\frac{\int_M \Gamma(u)^{\frac{p}{2}}dm}{\int_M |u|^{p}dm} \,\bigg|\, u\in W^{1,p}(M),\ \int_M u|u|^{p-2}dm=0\bigg\}.
$$
The regularity statement follows from \cite[Theorem 1]{tol} and standard Schauder estimates.
\end{proof}
\begin{remark}
\label{strong.ev.eqn}
Near interior points where $\Gamma(u)$ does not vanish, we have $L_p(u)=-\lambda u|u|^{p-2}$, and one easily shows that $\Gamma(u,\tilde\nu)|_{\partial M}\equiv 0$. Since
$\int_M u|u|^{p-2}dm=0$, $u$ must change its sign, and the eigenvalue equation is invariant under rescaling, so we can
assume without loss of generality that $\min u=-1$ and $\max u\leq 1$.
\end{remark}
\section{Eigenvalue Estimate for the $p$-Operator}
\label{eigenvalue.estimate}
In this section, we prove a sharp estimate for the principal eigenvalue of $L_p$. Throughout this section, we will assume that $M$ is compact and connected and $L$ an elliptic diffusion operator with invariant measure $m=dVol$. If $M$ has a non-empty boundary, we assume it to be convex. We define $\lambda$ to be the principal eigenvalue of $L_p$ with Neumann boundary conditions and $u$ the corresponding eigenfunction, normalized so that $\min u=-1$ and $\max u\leq 1$. Finally, we assume that $L$ satisfies $\operatorname{BE}(0,N)$ for some $N\in[1,\infty]$. If $M=[-D/2,D/2]$, $L=\Delta$ and $m=\mathcal{L}^1$, then one can easily show that $\lambda=(p-1)\frac{\pi_p^p}{D^p}$ and $u(\cdot)=\sin_p(\frac{\pi_p}{D}\cdot)$, where
$$
\frac12\pi_p =\int_{0}^{1}\frac{ds}{(1-s^p)^{\frac{1}{p}}}
$$
and the $p$-sine function is defined implicitly for $t\in[-\frac{\pi_p}{2},\frac{\pi_p}{2}]$ by
$$
t=\int_{0}^{\sin_p(t)}\frac{1}{(1-s^p)^{\frac{1}{p}}}ds.
$$
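The value of $\pi_p$ can be checked numerically: the sketch below approximates $2\int_0^1(1-s^p)^{-1/p}ds$ by the midpoint rule and compares it with the closed form $\pi_p=2\pi/(p\sin(\pi/p))$, a standard Beta-function evaluation not stated in the text (for $p=2$ this recovers $\pi$):

```python
import math

def pi_p(p, n=400000):
    # midpoint rule for 2 * \int_0^1 (1 - s^p)^(-1/p) ds; the endpoint
    # singularity at s = 1 is integrable, so the midpoint rule still converges.
    h = 1.0 / n
    total = sum((1.0 - ((i + 0.5) * h) ** p) ** (-1.0 / p) for i in range(n))
    return 2.0 * h * total

# known closed form via the Beta function: pi_p = 2*pi / (p * sin(pi/p))
def pi_p_closed(p):
    return 2.0 * math.pi / (p * math.sin(math.pi / p))

assert abs(pi_p(2) - math.pi) < 1e-2          # pi_2 = pi
assert abs(pi_p(3) - pi_p_closed(3)) < 1e-2   # pi_3 = 4*pi/(3*sqrt(3))
```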
It is natural to expect that this operator minimizes the principal eigenvalue amongst all admissible operators. However, the one-dimensional eigenfunctions do not always turn out to be suitable comparison models. Given a model function $w$, we would like to use the function $w^{-1}\circ u$ to estimate the diameter from below. This will only be optimal if $\max u=\max w$. If $\max u=1$, the one-dimensional eigenvalue equation will be a good comparison model; otherwise, we will consider a relaxed equation dampening the growth of $w$ to give $\max w=\max u$.
More precisely, we follow \cite[section 5]{val1}
and consider for any $n\in(1,\infty)$ the equation
\begin{align}
\label{model.ode}
\begin{cases}
\frac{\partial}{\partial t} \big(w'|w'|^{p-2}\big)-Tw'|w'|^{p-2}+\lambda w|w|^{p-2}=0 &\text{ on } (0,\infty)\\
w(a)=-1, \quad w'(a)=0
\end{cases}
\end{align}
where $a\geq 0$ and $T$ is a solution of the differential equation $T^2/(n-1)=T'$, that is, either $T=-(n-1)/t$ or $T\equiv 0$. If $T\equiv 0$, this is simply the eigenvalue equation, otherwise, it can be seen as a relaxed eigenvalue equation. For any $n\in(1,\infty)$, we will denote the solution corresponding to $T=-(n-1)/t$ and $a\geq 0$ by $w_a$ and for ease of notation we define $w_\infty$ to be the solution corresponding to $T\equiv 0$. \\\indent
We define $\alpha:=(\lambda/(p-1))^{\frac{1}{p}}$ and note that
the differential equation implies that for any $t>a$ which is close to $a$, one has $w_a''(t)>0$, so there exists a first time $b=b(a)$ such that $w_a'(b(a))=0$ and $w_a'>0$ in $(a,b(a))$. In particular, the restriction
of $w_a$ to $[a,b(a)]$ is invertible and we identify $w_a=w_a|_{[a,b(a)]}$. Finally, we define $\delta(a):=b(a)-a$ to be the length of the interval and $m(a)=w_a(b(a))$ to be the maximum of $w_a$. As we have seen, $\delta(\infty)=\pi_p/\alpha$ and $m(\infty)=1$ for any $\lambda>0$.
\\ \indent In order to compare the maximum and gradient of an eigenfunction $u$ and $w_a$, we need to understand the behaviour of the solutions of Equation (\ref{model.ode}) and the asymptotic behaviour of the functions $m$ and $\delta$. This is done in the following two theorems.
\begin{theorem}
For any $\lambda>0$, equation (\ref{model.ode}) has a solution $w_a\in C^1(0,\infty)$ which is also in $C^0([0,\infty))$ if $a=0$. Furthermore, $w_a'|w_a'|^{p-2}\in C^1((0,\infty))$. The solution depends continuously on
$n$, $\lambda$, and $a$ in terms of locally uniform convergence of $w_a$ and $w_a'$ in $(0,\infty)$. Additionally, for each solution there is a sequence $t_i\in(0,\infty)$ with $t_i\to\infty$ and $w_a(t_i)=0$.
\end{theorem}
\begin{proof}
This follows from basic ODE theory (see \cite[section 5]{val1}). \end{proof}
\begin{theorem} \label{ode.asymptotics}
For any $n>1$, the function $\delta(a)$ is continuous on $[0,\infty)$
and strictly greater than $\pi_p/\alpha$. Furthermore, we have $m(a)<1$ for any $a\in[0,\infty)$, as well as
$$
\lim_{a\to\infty}\delta(a)=\frac{\pi_p}{\alpha},\qquad
\lim_{a\to\infty}m(a)=1.
$$
\end{theorem}
\begin{proof}
Continuity just follows from the continuous dependence on the data. The rest of the proof is a bit technical, but straightforward: The statement is true for $a=\infty$, hence it suffices to show that $\delta$ is decreasing and $m$ is increasing (see \cite[section 5]{val1}).
\end{proof}
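The statements $\delta(a)>\pi_p/\alpha$ and $m(a)<1$ can be illustrated numerically (this is of course not a proof): the sketch below integrates the model equation with a standard RK4 scheme for the assumed parameters $p=2$, $n=3$, $\lambda=1$ (so $\alpha=1$ and $\pi_p=\pi$) and locates the first critical point $b(a)$ for $a=1$.

```python
import math

def first_max(a, n=3.0, lam=1.0, dt=1e-4, t_max=50.0):
    # Model equation for p = 2: v = w' and v' = T v - lam * w, T = -(n-1)/t,
    # started from w(a) = -1, w'(a) = 0.
    def rhs(t, w, v):
        return v, -(n - 1.0) / t * v - lam * w
    t, w, v = a, -1.0, 0.0
    while t < t_max:
        k1 = rhs(t, w, v)
        k2 = rhs(t + dt / 2, w + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = rhs(t + dt / 2, w + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = rhs(t + dt, w + dt * k3[0], v + dt * k3[1])
        w += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        if v <= 0.0:
            return t, w   # first critical point b(a) and m(a) = w(b(a))
    raise RuntimeError("no critical point found")

b, m = first_max(1.0)
delta = b - 1.0
assert delta > math.pi   # delta(a) > pi_p / alpha = pi
assert m < 1.0           # m(a) < 1
```

The damping term $T w'$ both shrinks the first maximum below $1$ and stretches the distance between critical points beyond $\pi_p/\alpha$, which is exactly what the theorem asserts.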
\subsection{Gradient comparison}
In this subsection, we compare the gradient of the eigenfunction with the gradient of the model function in the one-dimensional model space. Following \cite{val1}, we will prove the gradient comparison using a maximum principle involving the linearization $\mathcal{L}^u_p$. The first step
is to generalize the $p$-Bochner formula:
\begin{lemma}[$p$-Bochner formula] \label{p.bochner.formula} Let $u\in C^{1,\alpha}(M)$ be the first eigenfunction of $L_p$. Let $x\in M$ be a point such that
$\Gamma(u)(x)\neq 0$, and $u(x)\neq 0$ if $1<p<2$. Then we have the $p$-Bochner formula
\begin{align*}
\frac{1}{p}\mathcal{L}^u_p(\Gamma(u)^{\frac{p}{2}})=&\Gamma(u)^{\frac{p-2}{2}}\bigg(\Gamma(L_pu,u)-(p-2)L_puA_u\bigg)\\&+\Gamma(u)^{p-2}\bigg(\Gamma_2(u)+p(p-2)A_u^2\bigg),
\end{align*}
where $A_u=H_u(u,u)/\Gamma(u)$.
\end{lemma}
\begin{proof}
We can assume that $\Gamma(u)(x)=1$, since both sides scale in the same way. In a neighborhood of $x$ we have $u\in C^{3,\alpha}$, so we can perform all of the following computations. Since $L$ is a diffusion operator, we have
\begin{align*}
L(\Gamma(u)^{\frac{p}{2}})&=\frac{p}{2}L(\Gamma(u))+\frac{1}{4}p(p-2)\Gamma(\Gamma(u),\Gamma(u))\\&=p(\Gamma_2(u)+\Gamma(u,Lu))+\frac{1}{4}p(p-2)\Gamma(\Gamma(u),\Gamma(u)).
\end{align*}
For the next calculation we use the chain rule (see {Lemma \ref{chain.rule}} below)
\begin{align*}
H_{\Gamma(u)^{\frac{p}{2}}}(u,u)&=\Gamma(u,\Gamma(u,{\Gamma(u)^{\frac{p}{2}}}))
-\frac{1}{2}\Gamma(\Gamma(u)^{\frac{p}{2}},\Gamma(u))
\\&=\frac{p}{2}\Gamma(u,\Gamma(u,\Gamma(u))\Gamma(u)^{\frac{p-2}{2}})
-\frac{1}{4}p\Gamma(\Gamma(u),\Gamma(u))
\\
&=\frac{p}{2}\Gamma(u,\Gamma(u,\Gamma(u)))+p(p-2)H_u(u,u)^2
-\frac{1}{4}p\Gamma(\Gamma(u),\Gamma(u)).
\end{align*}
Finally,
\begin{align*}
\Gamma(L_pu,u)=&\Gamma(\Gamma(u)^{\frac{p-2}{2}}L(u)+(p-2)\Gamma(u)^{\frac{p-4}{2}}H_u(u,u),u)
\\
=&\Gamma(u,Lu)+\frac{1}{2}(p-2)Lu\Gamma(u,\Gamma(u))+(p-2)\frac12\Gamma(u,\Gamma(u,\Gamma(u)))\\&+(p-2)\frac{p-4}{2}H_u(u,u)\Gamma(u,\Gamma(u))
\\=&\Gamma(u,Lu)+(p-2)LuH_u(u,u)+(p-2)\frac12\Gamma(u,\Gamma(u,\Gamma(u)))\\&+
(p-2)(p-4)H_u(u,u)^2.
\end{align*}
Now, using that $A_u=H_u(u,u)$ for $\Gamma(u)=1$
and the fact that $(p-2)(p-4)-(p-2)^2+p(p-2)=(p-2)^2$ for the $A_u^2$ terms on
the right-hand side, the claim follows. \end{proof}
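The coefficient identity invoked in the last step of the proof can be checked mechanically:

```python
# Check of the coefficient identity used at the end of the proof:
# (p-2)(p-4) - (p-2)^2 + p(p-2) = (p-2)^2 for the A_u^2 terms.
for p in [1.5, 2.0, 2.7, 3.0, 10.0]:
    lhs = (p - 2) * (p - 4) - (p - 2) ** 2 + p * (p - 2)
    assert abs(lhs - (p - 2) ** 2) < 1e-12
```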
In the proof, we have used
\begin{lemma}\label{chain.rule}
Let $a,b,c\in C^2(M)$ and $f:\mathbb{R}\to\mathbb{R}$ be smooth, then
$$\Gamma(a,bc)=\Gamma(a,b)c+\Gamma(a,c)b
$$
and $$
\Gamma(f(a),b)=f'(a)\Gamma(a,b).
$$
Furthermore, we have
$$
H_{ab}(u,u)=bH_a(u,u)+aH_b(u,u)+2\Gamma(u,a)\Gamma(u,b)
$$
and
$$
H_{f(a)}(u,u)=f'(a)H_a(u,u)+f''(a)\Gamma(u,a)^2.
$$
\end{lemma}
\begin{proof}
This follows directly from the diffusion property of $L$. \end{proof}
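In the illustrative one-dimensional model $M=\mathbb{R}$, $L=d^2/dx^2$ (so $\Gamma(a,b)=a'b'$ and $H_f(u,u)=f''(u')^2$), the rules of the lemma reduce to elementary calculus; the following sketch checks the two Hessian identities, including the form $H_{f(a)}(u,u)=f'(a)H_a(u,u)+f''(a)\Gamma(u,a)^2$:

```python
import math

# 1D model (an assumption for illustration): Gamma(a,b) = a'b',
# H_f(u,u) = f'' (u')^2, so H_{ab}(u,u) = (ab)'' (u')^2 etc.
x = 0.8
u, up = math.sin(x), math.cos(x)
a, ap, app = math.exp(x), math.exp(x), math.exp(x)      # a = exp
b, bp, bpp = math.cos(x), -math.sin(x), -math.cos(x)    # b = cos

# H_{ab}(u,u) = b H_a(u,u) + a H_b(u,u) + 2 Gamma(u,a) Gamma(u,b)
H_ab = (app * b + 2 * ap * bp + a * bpp) * up ** 2      # (ab)'' (u')^2
rhs_ab = b * app * up**2 + a * bpp * up**2 + 2 * (up * ap) * (up * bp)
assert abs(H_ab - rhs_ab) < 1e-12

# H_{f(a)}(u,u) = f'(a) H_a(u,u) + f''(a) Gamma(u,a)^2, for f(s) = s^2
f_p, f_pp = 2 * a, 2.0
H_fa = (f_pp * ap ** 2 + f_p * app) * up ** 2           # (f(a))'' (u')^2
rhs_fa = f_p * app * up**2 + f_pp * (up * ap) ** 2
assert abs(H_fa - rhs_fa) < 1e-12
```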
\indent
To apply the maximum principle argument, we will have to estimate the term $\mathcal{L}^u_{p}(\Gamma(u)^{\frac{p}{2}})$ from below. Afterwards, we will rewrite the inequality in terms of a model function which satisfies a certain differential equation and can be expressed in terms of $u$.
We can replace the $L_p(u)$ terms by $-\lambda u|u|^{p-2}$, and bearing in mind that the first derivatives of a function vanish at a maximum point, we will be able to replace the $A_u$ terms, too. Hence, the last ingredient is a good estimate for the $\Gamma_2$ term.
\begin{lemma}\label{improved.BE}
Let $u$ be the first eigenfunction of $L_p$ and consider a point $x\in M$ with $\Gamma(u)(x)\neq0$. Assume that $L$ satisfies $BE(0,N)$ for some $N\in[1,\infty]$. Let $n\geq N$. If $n>1$, we have
$$
\Gamma(u)^{p-2}\bigg(\Gamma_2(u,u)+p(p-2)A_u^2\bigg)\geq\frac{(L_p(u))^2}{n}+\frac{n}{n-1}\bigg(\frac{L_p(u)}{n}-(p-1)\Gamma(u)^{\frac{p-2}{2}}A_u\bigg)^2,
$$
where we use the convention $1/\infty=0$ and $\infty/\infty=1$.
For $n=1$, we get
$$
\Gamma(u)^{p-2}\bigg(\Gamma_2(u,u)+p(p-2)A_u^2\bigg)\geq{(L_p(u))^2}.
$$
\end{lemma}
\begin{proof} Since $\Gamma(u)(x)\neq 0$, it holds that $u\in C^{2,\alpha}$ near $x$, so all of the following calculations can be performed. Since both sides scale in the same way, we can assume that
$\Gamma(u)(x)=1$. The condition BE$(0,N)$ implies BE$(0,n)$, so we can assume that $n=N$. If $N=1$, then the condition $\operatorname{BE}(0,N)$ implies $\operatorname{BE}(0,\dim(M))$ which gives $A_u=\operatorname{tr}H_u=Lu$ and the result follows immediately by using the estimate $\Gamma_2(u)\geq (Lu)^2$. If $N=\infty$, we use the trivial estimate $\Gamma_2(u)\geq|H_u|^2_{HS}\geq A_u^2$, so we can assume that $1<N<\infty$. The idea is that the curvature-dimension inequality has a self-improvement property (see \cite{Bakry1} for the linear case): Let $v$ be an arbitrary smooth function. Since $L$ satisfies $\operatorname{BE}(0,N)$, we have at $x$
\begin{align*}
\Gamma_2(v,v)\geq \frac{1}{N}(Lv)^2&=\frac{1}{N}(\mathcal{L}^u_p(v))^2-\frac{2(p-2)}{N}LvH_v(u,u)-\frac{(p-2)^2}{N}H_v(u,u)^2 \\
&=\frac{1}{N}(\mathcal{L}^u_p(v))^2-\frac{2(p-2)}{N}\mathcal{L}^u_pvH_v(u,u)+\frac{(p-2)^2}{N}H_v(u,u)^2
\\&=:\frac{1}{N}(\mathcal{L}^u_p(v))^2+C(v,v).
\end{align*}
Now we define the quadratic form $B(v,v)=\Gamma_2(v,v)-(\mathcal{L}^u_p(v))^2/N-C(v,v)$ and let $\phi\in C^\infty(\mathbb{R})$ be a smooth function. By assumption, we have $B(\phi(u),\phi(u))(x)\geq 0$. Using that $\Gamma(u)(x)=1$ and $H_u(u,u)(x)=A_u(x)$, we have the following identities at $x$: $$\Gamma_2(\phi(u),\phi(u))=\phi'^2\Gamma_2(u,u)+2\phi'\phi''A_u+\phi''^2,$$
$$
\mathcal{L}^u_p(\phi(u))=\phi'L_p(u)+(p-1)\phi'',\qquad H_{\phi(u)}(u,u)=\phi'A_u+\phi''.
$$
This gives
\begin{align}
B(\phi(u),\phi(u))=&\phi'^2 \notag B(u,u)+2\phi'\phi''\bigg(A_u-\frac{p-1}{N}L_pu+\frac{(p-2)}{N}\big((p-1)A_u+L_pu\big)\\&-\frac{(p-2)^2}{N}A_u\bigg)+\phi''^2\bigg(1-\frac{1}{N}(p-1)^2+2\frac{(p-2)(p-1)}{N}-\frac{(p-2)^2}{N}\bigg) \notag
\\=&\phi'^2 B(u,u)+2\phi'\phi''\bigg(-\frac{L_p u}{N}+\big(1+\frac{p-2}{N}\big)A_u\bigg)+\phi''^2\frac{N-1}{N}. \label{discriminant}
\end{align}
Now for any $a$, we can choose a function $\phi$ such that $\phi'(u(x))=a$ and $\phi''(u(x))=1$, so equation (\ref{discriminant}) becomes a non-negative, quadratic polynomial in $a$ and hence must have a non-negative discriminant, that is:
$$
4B(u,u)\frac{N-1}{N}\geq 4 \bigg(\frac{L_p u}{N}-\big(1+\frac{p-2}{N}\big)A_u\bigg)^2.
$$
This, however, is equivalent to
\begin{align*}
\Gamma_2(u,u)+p(p-2)A_u^2\geq& \frac{1}{N}(L_pu)^2+p(p-2)A_u^2+C(u,u)\\&+\frac{N}{N-1}
\bigg(\frac{L_p u}{N}-\big(1+\frac{p-2}{N}\big)A_u\bigg)^2
\\=&\frac{1}{N}(L_pu)^2+\frac{N}{N-1}
\bigg(\frac{L_p u}{N}-(p-1)A_u\bigg)^2,
\end{align*}
where the last equality can be verified by direct computation.
\end{proof}
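The final "direct computation" of the proof can be verified numerically at random values; here $Lp$ stands for $L_pu$, $A$ for $A_u$, and we use $\Gamma(u)=1$ and $C(u,u)=-\tfrac{2(p-2)}{N}L_pu\,A_u+\tfrac{(p-2)^2}{N}A_u^2$:

```python
import random

# Numerical check of the last equality in the proof: with Gamma(u) = 1,
#   p(p-2)A^2 + C(u,u) + N/(N-1) * (Lp/N - (1+(p-2)/N)A)^2
#     = N/(N-1) * (Lp/N - (p-1)A)^2.
random.seed(0)
for _ in range(100):
    p = random.uniform(1.1, 5.0)
    N = random.uniform(1.5, 20.0)
    Lp = random.uniform(-3.0, 3.0)
    A = random.uniform(-3.0, 3.0)
    C = -2 * (p - 2) / N * Lp * A + (p - 2) ** 2 / N * A ** 2
    lhs = (p * (p - 2) * A ** 2 + C
           + N / (N - 1) * (Lp / N - (1 + (p - 2) / N) * A) ** 2)
    rhs = N / (N - 1) * (Lp / N - (p - 1) * A) ** 2
    assert abs(lhs - rhs) < 1e-9
```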
\begin{remark}
The advantage of the self-improvement property is that it automatically gives a sharp estimate, which is not immediate if we choose a local framework and discard certain terms. Although the estimate for $N=1$ seems slightly weaker, it is in fact just as strong, since $L_pu-(p-1)\Gamma(u)^\frac{p-2}{2}A_u\equiv 0$ in one dimension.
\end{remark}
Before we prove the gradient comparison, we summarize the results we have
obtained so far:
\begin{corollary}\label{estimate.summary}
Let $u$ be an eigenfunction of $L_p$, let $x\in M$ with $\Gamma(u)(x)\neq 0$, and let $n\geq N$ and $n>1$. If $1<p<2$, assume additionally that $u(x)\neq 0$. Then we have at $x$
\begin{align*}
\frac{1}{p}\mathcal{L}^u_p(\Gamma(u)^{\frac{p}{2}})
\geq &-\lambda(p-1)|u|^{p-2}\Gamma(u)^{\frac{p}{2}}+\lambda(p-2)u|u|^{p-2}
\Gamma(u)^{\frac{p-2}{2}}A_u+\frac{\lambda^2u^{2p-2}}{n}\\&+\frac{\lambda^2u^{2p-2}}{n(n-1)}
+2\frac{(p-1)\lambda}{n-1}u|u|^{p-2}\Gamma(u)^{\frac{p-2}{2}}A_u
+\frac{n}{n-1}(p-1)^2\Gamma(u)^{\frac{2p-4}{2}}A_u^2
\\=&-\lambda(p-1)|u|^{p-2}\Gamma(u)^{\frac{p}{2}}+\lambda \frac{(n+1)(p-1)-(n-1)}{(n-1)}u|u|^{p-2}
\Gamma(u)^{\frac{p-2}{2}}A_u\notag\\&+\frac{\lambda^2u^{2p-2}}{n-1}
+\frac{n}{n-1}(p-1)^2\Gamma(u)^{\frac{2p-4}{2}}A_u^2. \notag
\end{align*}
\end{corollary}
\begin{proof}
This follows directly from
{Lemma \ref{p.bochner.formula}}, {Lemma \ref{improved.BE}}, and the strong eigenvalue equation $L_p(u)=-\lambda u|u|^{p-2}$. We remark that $\Gamma(L_pu,u)=\Gamma(-\lambda u|u|^{p-2},u)=-\lambda(p-1)|u|^{p-2}\Gamma(u)$.
\end{proof}
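The regrouping of coefficients in the corollary amounts to the two elementary identities $\tfrac1n+\tfrac1{n(n-1)}=\tfrac1{n-1}$ (for the $\lambda^2$ terms) and $(p-2)+\tfrac{2(p-1)}{n-1}=\tfrac{(n+1)(p-1)-(n-1)}{n-1}$ (for the $A_u$ terms), which the following sketch checks at random values:

```python
import random

# Coefficient checks for the regrouping in the corollary.
random.seed(1)
for _ in range(50):
    n = random.uniform(1.1, 30.0)
    p = random.uniform(1.1, 6.0)
    # lambda^2 terms: 1/n + 1/(n(n-1)) = 1/(n-1)
    assert abs(1 / n + 1 / (n * (n - 1)) - 1 / (n - 1)) < 1e-9
    # A_u terms: (p-2) + 2(p-1)/(n-1) = ((n+1)(p-1) - (n-1))/(n-1)
    assert abs((p - 2) + 2 * (p - 1) / (n - 1)
               - ((n + 1) * (p - 1) - (n - 1)) / (n - 1)) < 1e-9
```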
We are now able to prove the gradient comparison with a suitable one-dimensional
model function. The proof is motivated by \cite[Theorem 4.1]{val1}.
\begin{theorem}[Gradient comparison theorem]\label{grad.comp.thm}
Let $\lambda$ be the principal eigenvalue of $L_p$ with Neumann boundary conditions and $u$ be a corresponding eigenfunction. Assume that $L$ satisfies BE$(0,N)$. Let $n\geq N$ with $n>1$, $a\geq 0$, and
$w=w_a$ be the solution of the one-dimensional model equation (\ref{model.ode}),
with either $T=-(n-1)/t$ or $T\equiv 0$. If $n=\infty$, we let $T\equiv 0$. Let $b=b(a)$ be the first
root of $w'$ after $a$, as above. Now, if
$[\operatorname{min}(u),\operatorname{max}(u)]\subset[w(a),w(b)]$, then the following inequality holds on all of $M$:
$$\Gamma (w^{-1} \circ u)\leq1.$$
\end{theorem}
\begin{proof} We can assume that $[\operatorname{min}(u),\operatorname{max}(u)]
\subset(w(a),w(b))$ by replacing $u$ by $\xi u$ and letting $\xi\nearrow 1$ afterwards. The regularity theory for ordinary differential equations gives that $w$ is smooth on $(a,b)$. Using the chain rule we see that it is equivalent to prove
$$
\Gamma(u)^{\frac12}(x)\leq w'(w^{-1}(u(x)))
$$
for all $x\in M$. In order to prove this,
let $\phi(s):= w'(w^{-1}(s))^p$ and
$\psi\in C^2(\mathbb{R})$
be a positive function which will be specified later. Define
$$
F:=\psi(u)(\Gamma (u)^{\frac{p}{2}}-\phi(u)).
$$
It suffices to show that $F\leq 0$. Since $M$ is compact, $F$ attains its maximum
in $x\in M$. Furthermore, we have $\phi(u)>0$, so it suffices to consider the case
$\Gamma (u)(x)>0$. If $p>2$, then we have $u\in C^{3,\alpha}$ around $x$ and all the following computations can be performed. We will explain below how to modify the proof in the case $1<p<2$. At the point $x$, we have
$$
\Gamma(F,u)(x)=0, \qquad
\mathcal{L}^u_p(F)(x)\leq 0.
$$
This is obvious if $x$ lies in the interior: $\Gamma$ is induced by a Riemannian metric on $T^*M$ so we get $\Gamma(F,\cdot)(x)=0$, which implies the first identity. On the other hand, $\mathcal{L}^u_p$ is elliptic away from critical points of $u$ and the first-order derivatives of $F$ vanish, which implies the inequality $\mathcal{L}^u_p(F)(x)\leq 0$. If $x$ lies on the boundary, we need to be more careful: it is immediate that $\Gamma(F,\cdot)(x)$ vanishes in all directions tangent to the boundary, in particular, $\Gamma(F,u)(x)=0$ since $\Gamma(u,\tilde \nu)|_{\partial M}\equiv 0$. Moreover, the Neumann boundary conditions and the convexity of $\partial M$ imply at $x$
\begin{align}
0\leq
\Gamma(F,\tilde \nu)&=\psi'(u)\Gamma(u,\tilde \nu)\frac{F}{\psi(u)}-\psi(u)\phi'(u)\Gamma(u,\tilde \nu)
+\frac{p}{2}\psi(u)\Gamma(u)^{\frac{p-2}{2}}\Gamma(\Gamma(u),\tilde \nu) \notag \\
&=-p\psi(u)\Gamma(u)^{\frac{p-2}{2}}II(u,u)\leq0, \notag
\end{align}
where $\tilde \nu$ is the outward normal vector at $x$. This gives $\Gamma(F,\tilde \nu)(x)=0$, and since $x$ is a maximum point, this implies that the second derivative in the normal direction must be non-positive. Obviously, all the second-order derivatives in tangent directions must be non-positive as well, so the ellipticity yields $\mathcal{L}^u_p(F)(x)\leq 0$, as desired.
Now the identity $\Gamma(F,u)(x)=0$ and the product and chain rule imply at $x$ that
$$
0
=\psi'(u)\frac{F}{\psi(u)}\Gamma(u)+\psi(u)\frac{p}{2}\Gamma(u)^{\frac{p-2}{2}}\Gamma(\Gamma(u),u)-\psi(u)\phi'(u)\Gamma(u),
$$
which yields
\begin{align}
\label{first.der.zero}
\Gamma (u)^{\frac{p-2}{2}}A_u=-\frac{1}{p}\bigg(\frac{\psi'}{\psi^2}F-\phi'\bigg).
\end{align}
Next, we would like to take a closer look at the inequality $\mathcal{L}^u_p(F)(x)\leq0$. We compute at $x$, using the diffusion property of $L$, that
\begin{align*}
L(F)=&\psi(u)\big(L(\Gamma(u)^{\frac{p}{2}})-L(\phi(u))\big)
+(\Gamma (u)^{\frac{p}{2}}-\phi(u))(\psi'(u)L(u)+\psi''(u)\Gamma(u))
\\&+2\big(p\Gamma(u)^{\frac{p-2}{2}}\psi'(u)H_u(u,u)-\phi'(u)\psi'(u)\Gamma(u)\big).
\end{align*}
On the other hand, using the product and chain rule {Lemma \ref{chain.rule}}, we compute
\begin{align*}
H_F(u,u)=&(\psi'(u)H_u(u,u)+\psi''\Gamma(u)^2)(\Gamma(u)^{\frac{p}{2}}-\phi(u))
+\psi(u)H_{\Gamma(u)^{\frac{p}{2}}}(u,u)\\&-\psi(u)H_{\phi(u)}(u,u)
+2p\Gamma(u)^{\frac{p}{2}}\psi'(u)H_u(u,u)
-2\psi'(u)\phi'(u)\Gamma(u)^2.
\end{align*}
Using these two identities as well as (\ref{first.der.zero}), the strong eigenvalue equation and $\Gamma(u)^{\frac{p}{2}}-\phi(u)=F/\psi(u)$ as well as $\Gamma(u)^{\frac{p}{2}}=F/\psi(u)
+\phi(u)$, we obtain at $x$ that
\begin{align}
\mathcal{L}^u_p(F)=&p\psi(u)\frac{1}{p}\mathcal{L}^u_p(\Gamma(u)^{\frac{p}{2}}) \notag
-\psi(u)\mathcal{L}^u_p(\phi(u))-\lambda F\frac{\psi'(u)}{\psi}u|u|^{p-2}
\\&+(p-1)F\frac{\psi''(u)}{\psi(u)}\bigg(\frac{F}{\psi(u)}+\phi(u)\bigg)
-2(p-1)F\frac{\psi'(u)^2}{\psi(u)^2}\bigg(\frac{F}{\psi(u)}+\phi(u)\bigg). \notag
\end{align}
Now the idea is to use {Corollary \ref{estimate.summary}} to estimate $\mathcal{L}^u_p(F)$ further and express the resulting inequality in terms of $w$. Since $w$ satisfies a differential equation, an appropriate choice of $\psi$ will enforce $F\leq 0$. If $n=\infty$ and $T\equiv 0$, the proof stays the same with the conventions $\infty/\infty=1$ and $1/\infty=0$. Noting that we have excluded the case $\Gamma(u)=0$ and using {Corollary \ref{estimate.summary}}, equation (\ref{first.der.zero}) as well as $\Gamma(u)^{\frac{p}{2}}=F/\psi(u)+\phi(u)$, we see that
\begin{align}
\frac{1}{p}\mathcal{L}^u_p(\Gamma(u)^{\frac{p}{2}})
\geq &-\lambda(p-1)|u|^{p-2}\Gamma(u)^{\frac{p}{2}}+\lambda \frac{(n+1)(p-1)-(n-1)}{(n-1)}u|u|^{p-2}
\Gamma(u)^{\frac{p-2}{2}}A_u \notag
\\&+\frac{\lambda^2u^{2p-2}}{n-1}
+\frac{n}{n-1}(p-1)^2\Gamma(u)^{\frac{2p-4}{2}}A_u^2 \notag
\\=&\frac{\lambda^2u^{2p-2}}{n-1}+\lambda\frac{(n+1)(p-1)-(n-1)}{p(n-1)} \phi'u|u|^{p-2}
\notag
\\&+\frac{n}{n-1}\frac{(p-1)^2}{p^2}\phi'^2
-\lambda \phi(p-1)|u|^{p-2} \notag
\\&+F\bigg(-\lambda(p-1)|u|^{p-2}\frac{1}{\psi}-\lambda\frac{(n+1)(p-1)-(n-1)}{p(n-1)}\frac{\psi'}{\psi^2}u|u|^{p-2}
\notag
\\&-2\frac{n}{n-1}\frac{(p-1)^2}{p^2}\phi'\frac{\psi'}{\psi^2} \bigg)
+F^2\frac{n}{n-1}\frac{(p-1)^2}{p^2}\frac{\psi'^2}{\psi^4}. \label{applied.bochner.formula}
\end{align}
On the other hand, we easily verify that
$$
\mathcal{L}^u_p(\phi(u))=\phi'L_p(u)+(p-1)\phi''\Gamma(u)^\frac{p}{2}
=-\lambda \phi'u|u|^{p-2}+(p-1)\phi''\phi+F(p-1)\frac{\phi''}{\psi}.
$$
Combining these identities and summing up the $u|u|^{p-2}$ terms, we get that
\begin{align}
0\geq\mathcal{L}^u_p(F)\geq&p\psi\bigg(\frac{\lambda^2u^{2p-2}}{n-1}+\lambda\frac{(n+1)(p-1)}{p(n-1)}\phi 'u|u|^{p-2}
\notag\\&+\frac{n}{n-1}\frac{(p-1)^2}{p^2}\phi'^2
-\lambda \phi(p-1)|u|^{p-2}-\frac{(p-1)}{p}\phi''\phi\bigg)\notag
\\&+F\bigg(-\lambda p(p-1)|u|^{p-2}-\lambda\frac{(n+1)(p-1)}{n-1}\frac{\psi'}{\psi}u|u|^{p-2}\notag\\&-2\frac{n}{n-1}\frac{(p-1)^2}{p}\phi'\frac{\psi'}{\psi}-(p-1)\phi''
+(p-1)\phi\bigg(\frac{\psi''}{\psi}-2\frac{\psi'^2}{\psi^2}\bigg)\bigg) \notag
\\& +F^2 \frac{p-1}{\psi}\bigg(\frac{\psi''}{\psi}+\frac{\psi'^2}{\psi^2}\bigg(\frac{n(p-1)}{p(n-1)}-2\bigg)\bigg)
\notag
\\=:& a+bF+cF^2. \label{final.inequality}
\end{align}
The last part of the proof is similar to \cite[Theorem 4.1]{val1} and we only include it for the convenience of the reader. We consider the function $\phi(s):=w'(w^{-1}(s))^p$ and the chain rule gives
$$
\phi'(s)=\frac{p}{p-1}\Delta_p(w)(w^{-1}(s)), \qquad \phi''(s)=\frac{p}{p-1}\frac{(\Delta_p(w))'}{w'}(w^{-1}(s)).
$$
On the other hand, by our assumption there exists $t\in(a,b)$ with $w(t)=u(x)$,
so we obtain at $x$ or $t$ respectively that
\begin{align}
\frac{a}{p\psi}=&\frac{\lambda^2w^{2p-2}}{n-1}+\lambda\frac{n+1}{n-1}\Delta_p(w)w|w|^{p-2}+\frac{n}{n-1}(\Delta_p(w))^2\notag\\&-\lambda (p-1)(w')^p|w|^{p-2}-(\Delta_p(w))'w'^{p-1}. \notag
\end{align}
Now since $T$ is one of the solutions of $T'=T^2/(n-1)$, that is, $T\equiv 0$ or $T=-(n-1)/t$, using $((w')^{p-1})'=\Delta_p w$, one directly verifies that
\begin{align}
\frac{a}{p\psi}=&\frac{1}{n-1}\bigg(\Delta_pw-Tw'^{p-1}+\lambda w|w|^{p-2}\bigg)\bigg(n\Delta_p w+Tw'^{p-1}+\lambda w|w|^{p-2}\bigg)\notag\\&
-w'^{p-1}\bigg(\Delta_pw-Tw'^{p-1}+\lambda w|w|^{p-2}\bigg)'= 0.\notag
\end{align}
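The cancellation $a/(p\psi)=0$ rests on an algebraic identity that holds for any smooth $w$ once $T'=T^2/(n-1)$ and $((w')^{p-1})'=\Delta_pw$ are used; the two factors then vanish because $w$ solves the model equation. The sketch below checks this identity for $p=2$ (where $\Delta_pw=w''$ and $w'^{p-1}=w'$) with the arbitrary, non-solution test function $w(t)=t^3$:

```python
# For p = 2 and T = -(n-1)/t (so T' = T^2/(n-1)), check that
#   lam^2 w^2/(n-1) + lam (n+1)/(n-1) w'' w + n/(n-1) (w'')^2
#     - lam (w')^2 - w''' w'
#   = (w'' - T w' + lam w)(n w'' + T w' + lam w)/(n-1)
#     - w' (w'' - T w' + lam w)'
# as an identity in w; we evaluate both sides for w(t) = t^3.
n, lam, t = 3.0, 0.7, 2.0
w, wp, wpp, wppp = t**3, 3 * t**2, 6 * t, 6.0
T = -(n - 1) / t
Tp = (n - 1) / t**2                       # T' = T^2/(n-1)

lhs = (lam**2 * w**2 / (n - 1) + lam * (n + 1) / (n - 1) * wpp * w
       + n / (n - 1) * wpp**2 - lam * wp**2 - wppp * wp)
# derivative of (w'' - T w' + lam w):
d = wppp - Tp * wp - T * wpp + lam * wp
rhs = ((wpp - T * wp + lam * w) * (n * wpp + T * wp + lam * w) / (n - 1)
       - wp * d)
assert abs(lhs - rhs) < 1e-9
```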
In order to treat the terms $b$ and $c$, we define
$$
X(t):=\lambda^{\frac{1}{p-1}}\frac{w(t)}{w'(t)},
\qquad \psi(t)=\exp\bigg(\int_{0}^{t}h(s)ds\bigg), \qquad f(t)=-h(w(t))w'(t),
$$ for some $h$ which is still to be determined. Using that $w$ solves (\ref{model.ode}), we compute
\begin{align}
f'(t)&=-h'(w(t))w'(t)^2+\frac{f(t)}{p-1}(T-X|X|^{p-2}). \label{f.der}
\end{align}
Now we recall that by definition
$$
c=\frac{p-1}{\psi}\bigg(\frac{\psi''}{\psi}+\frac{\psi'^2}{\psi^2}\big(\frac{n(p-1)}{p(n-1)}-2\big)\bigg),
$$
such that (\ref{f.der}) and a direct computation give
\begin{align}
\frac{c(w(t))\psi(w(t))}{p-1}w'(t)^2=\frac{f}{p-1}(T-X|X|^{p-2})+f^2\bigg(\frac{p-n}{p(n-1)}\bigg)-f'=:\alpha(f,t)-f'. \label{alpha}
\end{align}
On the other hand, we have
\begin{align*}
b=&-\lambda p(p-1)|u|^{p-2}-\lambda\frac{(n+1)(p-1)}{n-1}\frac{\psi'}{\psi}u|u|^{p-2}
\\&-2\frac{n}{n-1}\frac{(p-1)^2}{p}\phi'\frac{\psi'}{\psi}-(p-1)\phi''
+(p-1)\phi\big(\frac{\psi''}{\psi}-2\frac{\psi'^2}{\psi^2}\big).
\end{align*}
This gives
\begin{align}
\frac{b}{(p-1)|w'|^{p-2}}=&\frac{p}{p-1}T\bigg(\frac{n}{n-1}T-X|X|^{p-2}\bigg)\notag \\&-f^2+f\bigg(\bigg(\frac{2n}{n-1}+\frac{1}{p-1}\bigg)T-\frac{p}{p-1}X|X|^{p-2}\bigg)-f' \notag
\\=&:\beta(f,t)-f' \label{beta}
\end{align}
which is again verified by direct computation. Now according to {Lemma \ref{lemma.grad.comp}} below, $f$ can be chosen such that $b,c>0$, that is,
$$
0\geq bF+cF^2\geq bF
$$
which implies $F\leq 0$ as desired (choosing $f$ rather than $h$ does not make a difference since $w$ is invertible and $w'>0$).
\end{proof}
In the proof, we have used
\begin{lemma}
\label{lemma.grad.comp}
Let $\alpha,\beta$ be defined as in (\ref{alpha}),(\ref{beta}). Then for every $\epsilon>0$ there exists a smooth function $f:[a+\epsilon,b(a)-\epsilon]\to\mathbb{R}$ such that
$$
f'(t)<\min\{\alpha(f(t),t),\beta(f(t),t)\}.
$$
\end{lemma}
\begin{proof}
The proof relies on properties of the model function $w$ and uses the Pr\"ufer transformation (see \cite[Lemma 5.2]{val1}).
\end{proof}
\begin{remark}
If $1<p<2$ and $u(x)=0$, it only follows that $u\in C^{2,\alpha}$ near $x$. If this happens, the $p$-Bochner formula is not directly applicable. However, since the gradient of $u$ does not vanish in a neighborhood $U$ of $x$, the set $U':=U\cap\{u\neq 0\}$ is dense and open
in $U$. $u$ is smooth in $U'$ and thus satisfies the strong eigenvalue equation, and hence we can replace the term $\Gamma(u,L_p u)$ arising from $\mathcal{L}^u_p(\Gamma(u)^{\frac{p}{2}})$ by $-\lambda\Gamma(u,u|u|^{p-2})$.
We get another diverging term $-\psi(u)\phi''(u)\Gamma(u)^{\frac{p}{2}}$ from
$-\psi(u)\mathcal{L}^u_p(\phi(u))$, and these two terms cancel out because $\phi''(u)$ includes a $-\lambda u|u|^{p-2}$ term as well. So we have
for $x'\in U'$ that $\mathcal{L}^u_p(F)(x')$ converges as $x'\to x$ and we also denote
the limit by $\mathcal{L}^u_p(F)(x)$. We easily verify that $0\geq\mathcal{L}^u_p(F)(x)$ is still valid. By definition of
$\mathcal{L}^u_p(F)(x)$, the identity (\ref{final.inequality})
still holds with the two diverging terms canceled out. Now we can proceed as in the normal proof.
\label{regularity.grad.comp}
\end{remark}
\subsection{Maximum comparison}
In this subsection, we use the gradient comparison to compare the maximum of the eigenfunctions and the model functions. Again, our approach is to generalize the idea of
\cite{val1}.
Let $u$ be an eigenfunction of $L_p$ with Neumann boundary conditions satisfying $\min u=-1$ and $\max u < 1 $. We assume that $L$ satisfies the condition BE$(0,N)$ for some $N\in[1,\infty)$ where we emphasize that we have excluded the case $N=\infty$. Let $n\geq N$ and $n>1$. By {Theorem \ref{ode.asymptotics}}, there exists a solution $w_a$ to $(\ref{model.ode})$ such that
$[\min(u),\max(u)]\subset[-1,m(a)]$ where $a\in[0,\infty)$. For ease of notation, we will write $w:=w_a$ unless specified. The differential equation implies that $w''$ stays positive until the first root of $w$, so $w$ has a unique root $t_0\in(a,b)$.
By {Theorem \ref{grad.comp.thm}}, the gradient comparison
$$
\Gamma(w^{-1}\circ u)\leq 1
$$
holds.
We will obtain the maximum comparison by comparing the volumes of small balls with respect to certain measures. In order to do that, we let $g:=w^{-1}\circ u$ and define the measure $\mu:=g_{*}m$ on $[a,b(a)]$. That is,
for any measurable function $f:[a,b]\to\mathbb{R}$ we have
$$
\int_{a}^{b}fd\mu=\int_M f\circ g dm.
$$
\indent
The first step in our volume comparison is the following theorem, which can be seen as a comparison theorem for the density of $\mu$.
\begin{theorem} \label{volume.density}
Let $u$ and $w$ be as above and define
$$
E(s):=-\exp\bigg(\lambda\int_{t_0}^s \frac{w|w|^{p-2}}{w'|w'|^{p-2}}dt\bigg)\int_a^sw|w|^{p-2}d\mu.
$$
Then $E$ is increasing on $(a,t_0]$ and decreasing on $[t_0,b)$.
\end{theorem}
This result can also be stated in a more convenient way, as we will soon see:
\begin{theorem}
Under the hypotheses of Theorem \ref{volume.density}, the function
$$
\tilde E(s):=\frac{\int_a^s w|w|^{p-2}d\mu}{\int_a^s w|w|^{p-2}t^{n-1}dt}=
\frac{\int_{u\leq w(s)}u|u|^{p-2}dm}{\int_a^s w|w|^{p-2}t^{n-1}dt}
$$
is increasing on $(a,t_0]$ and decreasing on $[t_0,b)$. \label{volume.density.2}
\end{theorem}
\begin{proof} See \cite[Theorem 6.3]{val1}.
\end{proof}
\begin{proof}[Proof of {Theorem \ref{volume.density}}]
This is again very similar to \cite[Theorem 6.2]{val1}:
Let $H\in C^\infty_c((a,b))$ with $H\geq 0$ and consider the ordinary differential equation
$$
\begin{cases}
&\frac{\partial}{\partial t}\bigg(G(w(t))^{p-1}\bigg)=H(t),\\ &G(-1)=0.
\end{cases}
$$
Since $H$ has compact support, the singularities at the boundary are avoided, so after rewriting the equation, existence follows by the Picard-Lindel\"of theorem ($w(t)$ can be seen as a coordinate change). We therefore have
\begin{align}
(G|G|^{p-2})(w(t))=\int_{a}^{t} H(s)ds, \qquad (p-1)G'(w(t))|G(w(t))|^{p-2}w'(t)=H(t). \label{g.identity}
\end{align}
Next, define $K$ to be an antiderivative of $G$.
Since $L$ is a diffusion operator, we have
$L(K(u))=K''(u)\Gamma(u)+K'(u)L(u)$ and the chain rule gives $\Gamma(K(u))=K'(u)^2\Gamma(u)$ as well as $H_{K(u)}(K(u),K(u))=\frac12 \Gamma(K(u),\Gamma(K(u)))=K'(u)^3H_u(u,u)+K''(u)K'(u)^2\Gamma(u)^2$.
So using that $K'=G$, we obtain at points where $u$ is in $C^{2,\alpha}$, that is, at non-critical points,
\begin{align}
L_p(K(u))&=G(u)|G(u)|^{p-2}L_p(u)+(p-1)G'(u)|G(u)|^{p-2}\Gamma(u)^{\frac{p}{2}}\notag\\
&=
-\lambda u|u|^{p-2}G(u)|G(u)|^{p-2}+(p-1)G'(u)|G(u)|^{p-2}\Gamma(u)^{\frac{p}{2}}.
\label{g.identity2}\end{align}
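This identity can be sanity-checked in the illustrative one-dimensional model $M=\mathbb{R}$, $L=d^2/dx^2$, where $L_pf=(p-1)|f'|^{p-2}f''$; the choices $u=\sin$ and $G=\exp$ (which is non-negative, as required of $G$) are arbitrary:

```python
import math

# 1D check of L_p(K(u)) = G(u)|G(u)|^{p-2} L_p(u)
#                         + (p-1) G'(u)|G(u)|^{p-2} Gamma(u)^{p/2},
# with K' = G and L_p f = (p-1)|f'|^{p-2} f'' in one dimension.
p = 2.6
for x in [0.2, 0.5, 1.0]:
    u, up, upp = math.sin(x), math.cos(x), -math.sin(x)
    G, Gp = math.exp(u), math.exp(u)        # G(s) = e^s, evaluated at u
    Ku_p = G * up                           # (K(u))' = G(u) u'
    Ku_pp = Gp * up**2 + G * upp            # (K(u))''
    lhs = (p - 1) * abs(Ku_p) ** (p - 2) * Ku_pp
    rhs = (G * abs(G) ** (p - 2) * (p - 1) * abs(up) ** (p - 2) * upp
           + (p - 1) * Gp * abs(G) ** (p - 2) * (up ** 2) ** (p / 2))
    assert abs(lhs - rhs) < 1e-9
```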
Now we consider the closed (hence compact) set $C:=\{x\in M \mid \Gamma(K(u))(x)=0 \}$. If $\dim(M)=1$, we can just integrate between critical points. So we assume that $\dim(M)\geq 2$ and choose a cut-off function $\phi\in C^\infty(M)$ satisfying $0\leq\phi\leq1$, $\phi|_{M\setminus B_{2\epsilon }(C)}\equiv 1$, $\phi|_{B_{\epsilon }(C)}\equiv 0$, and $\Gamma(\phi)\leq2/\epsilon^2$. Using the partial integration formula {Lemma \ref{int.by.parts}} together with $\Gamma(u,\tilde \nu)\equiv 0$ and (\ref{g.identity2}), we get
\begin{align}
\int_M\bigg(-\lambda u|u|^{p-2}G(u)|G(u)|^{p-2}+(p-1)G'(u)|G(u)|^{p-2}\Gamma(u)^{\frac{p}{2}}\bigg)\phi \notag \\
=-\int_M \Gamma(K(u))^{\frac{p-2}{2}}\Gamma(K(u),\phi). \label{approximating.equality}
\end{align}
The right-hand side of (\ref{g.identity2}) is integrable on $M$, and the second term vanishes on $C$. Since $u\in C^{1,\alpha}(M)$, $|C|\neq0$ is only possible if $C$ has non-empty interior. Near such interior points, we either have $G(u)\equiv 0$ or $\Gamma(u)\equiv 0$. In the latter case, the weak eigenvalue equation implies $u\equiv 0$ near such points, so the first term vanishes on $C$ up to a set of measure $0$, too. Hence, letting $\epsilon \to 0$, the left-hand side of (\ref{approximating.equality}) converges to
$$
\int_M\bigg(-\lambda u|u|^{p-2}G(u)|G(u)|^{p-2}+(p-1)G'(u)|G(u)|^{p-2}\Gamma(u)^{\frac{p}{2}}\bigg).
$$
On the other hand, the right-hand side of (\ref{approximating.equality}) is identically zero on $C$, and we can estimate using the Cauchy-Schwarz inequality
$$
\bigg|\int_{B_{2\epsilon}(C)\setminus B_\epsilon(C)} \Gamma(K(u))^{\frac{p-2}{2}}\Gamma(K(u),\phi)\bigg|\leq |B_{2\epsilon}(C)|\frac{A}{\epsilon}\to 0
$$
since $\Gamma(K(u))$ is uniformly bounded and $|B_{2\epsilon}(C)|\leq A'\epsilon ^2$. Here, the last statement is implied by the Bishop-Gromov volume growth theorem, valid for the metric measure space $(M,d,m)$ with the dimension upper bound $N\geq\dim(M)\geq2$ (see \cite[Theorem 4]{sturm2}; note that an elliptic operator cannot satisfy a condition BE$(0,N')$ for $N'<\dim (M)$, which is easily seen since this would also imply $\operatorname{tr}H=L$). Since $\Gamma(\phi)\equiv 0$ on $M\setminus B_{2\epsilon}(C)$, we finally obtain
$$
\int_M\lambda u|u|^{p-2}G(u)|G(u)|^{p-2}=\int_M (p-1)G'(u)|G(u)|^{p-2}\Gamma(u)^{\frac{p}{2}}.
$$
Now using the gradient comparison {Theorem \ref{grad.comp.thm}} and the definition of $\mu$, we get
\begin{align*}
\frac{\lambda}{p-1}\int_{a}^{b} w(t)|w(t)|^{p-2}G(w(t))|G(w(t))|^{p-2}d\mu
&\leq \int_{a}^{b}G'(w(t))|G(w(t))|^{p-2}|w'(t)|^pd\mu.
\end{align*}
With the identities (\ref{g.identity}), this reads
\begin{align}
\lambda\int_{a}^{b} w(t)|w(t)|^{p-2}\int_{a}^{t}H(s)dsd\mu\leq\int_{a}^{b}H(t)w'(t)^{p-1}d\mu.
\label{w.inequality}
\end{align}
On the other hand, using Fubini's theorem and $\int_{a}^{b}w(t)|w(t)|^{p-2}d\mu=0$, we get
\begin{align*}
\lambda\int_{a}^{b} w(t)|w(t)|^{p-2}\int_{a}^{t}H(s)dsd\mu&=\lambda\int_{a}^{b}\int_{s}^{b}w(t)|w(t)|^{p-2}d\mu H(s)ds
\\&=-\lambda\int_{a}^{b}\int_{a}^{s}w(t)|w(t)|^{p-2}d\mu H(s)ds.
\end{align*}
Combining this with (\ref{w.inequality}), we obtain
$$
\int_{a}^{b}\bigg(-\lambda\int_{a}^{s}w(t)|w(t)|^{p-2}d\mu\bigg) H(s)ds\leq\int_{a}^{b}H(s)w'(s)^{p-1}d\mu.
$$
Since this is valid for any non-negative function $H\in C^\infty_c((a,b))$, we get
$$
w'(s)^{p-1}g(s)+\lambda\int_{a}^{s}w(t)|w(t)|^{p-2}d\mu\geq 0
$$
for any $s\in(a,b)$, where $g$ is the density of $\mu$. Multiplying by $w|w|^{p-2}/(w'|w'|^{p-2})$, we get
$$
w|w|^{p-2}(s)g(s)+\frac{w|w|^{p-2}}{w'|w'|^{p-2}}(s)\lambda\int_{a}^{s}w(t)|w(t)|^{p-2}d\mu\begin{cases} &\geq 0 \text{ if } s\leq t_0,
\\&\leq 0 \text{ if } s>t_0.
\end{cases}
$$
But this is exactly the derivative of $E$, so the theorem is proven.
\end{proof}
In order to prove the maximum comparison, we want to compare the volumes of small balls near critical points. Therefore, we need the following lemma:
\begin{lemma}
For $\epsilon$ sufficiently small, the set $u^{-1}[-1,-1+\epsilon)$ contains
a ball with radius $r_\epsilon$, where
$$
r_\epsilon=w^{-1}(-1+\epsilon)-a.
$$
\label{ball.lemma}
\end{lemma}
\begin{proof}
Let $x_0\in M $ be a minimum point, that is, $u(x_0)=-1$ and $x\in M$ be
another point. By the gradient comparison, we have $\Gamma(w^{-1}(u))\leq 1$, so by the
definition of the distance function
$$
d(x,x_0)\geq \big|(w^{-1}\circ u)\big|_{x_0}^{x}\big|=w^{-1}(u(x))-w^{-1}(-1)=w^{-1}(u(x))-a.
$$
So if $d(x,x_0)<r_\epsilon$ then
$$
w^{-1}(u(x))<w^{-1}(-1+\epsilon)
$$
and since $w$ is increasing, we must have $u(x)<-1+\epsilon$, which proves the claim.
\end{proof}
Now we are able to prove the following volume comparison:
\begin{theorem}[volume comparison] Let $n\geq N$ and $n>1$.
If $u$ is an eigenfunction satisfying $\operatorname{min}u=-1=u(x_0)$ and
$\operatorname{max}u\leq m(0)=w_0(b(0))$, then there exists a constant $c>0$ such that for all
$r$ sufficiently small we have
$$
m(B_{x_0}(r))\leq cr^n.
$$
\label{vol.comparison}
\end{theorem}
\begin{proof}
This proof is in the spirit of \cite[Theorem 6.5]{val1}. We define the measure $\gamma:=t^{n-1}dt$ on $[0,\infty)$. Let $\epsilon>0$ be small enough such that {Lemma \ref{ball.lemma}} is applicable and that $-1+\epsilon \leq -1/2^{\frac{1}{p-1}}$. Hence, for $u\leq -1+\epsilon$ we have $-u|u|^{p-2}\geq 1/2$. Also, the time $t$ at which $w_0(t)=-1+\epsilon$ occurs before the first zero of $w_0$, so {Theorem \ref{volume.density.2}} implies $E(t)\leq E(t_0)=:C$. Multiplying this inequality by $-\int_{a}^{s}w_0|w_0|^{p-2}d\gamma>0$, we get
\begin{align*}
\operatorname{Vol}(\{u\leq -1+\epsilon\})&\leq -2\int_{\{u\leq -1+\epsilon\}}u|u|^{p-2}dm\\&\leq-2C\int_{\{w_0\leq -1+\epsilon\}}w_0|w_0|^{p-1}d\gamma\\&\leq 2C'\gamma(\{w_0\leq -1+\epsilon\}).
\end{align*}
On the other hand, with $r_\epsilon$ from {Lemma \ref{ball.lemma}}, we get
$$
\operatorname{Vol}(B_{r_\epsilon}(x_0))\leq\operatorname{Vol}(\{u\leq-1+\epsilon\})\leq2C\gamma(\{w_0\leq -1+\epsilon\})=2C\gamma([0,r_\epsilon])=C'r^n_\epsilon
$$
since for $a=0$, we have $w_0(r_\epsilon)=-1+\epsilon$.
\end{proof}
This finally allows us to prove the desired maximum comparison, which will always provide a suitable model function:
\begin{corollary}
Let $n\geq N$, $n>1$, and $w_0$ be the corresponding model function.
If $u$ is an eigenfunction satisfying $\operatorname{min}u=-1=u(x_0)$,
then
$\operatorname{max}u\geq m(0)$. \label{max.comparison}
\end{corollary}
\begin{proof}
This is obvious if $\max(u)=1$, so we can assume that $\max(u)<1$. If the assertion were wrong, then by the continuous dependence on the data, there would exist a solution of the differential equation (\ref{model.ode}) with the same $\lambda$, $a=0$, and $n'>n$, in particular $n'>N$,
whose first maximum would still be bigger than $\operatorname{max} u$. By
{Lemma \ref{improved.BE}}, the gradient comparison would still hold and so would the volume comparison {Theorem \ref{vol.comparison}}, that is, for $r$ sufficiently small,
$$
m(B_{x_0}(r))\leq cr^{n'}.
$$
This, however, contradicts the Bishop-Gromov theorem (see \cite[Theorem 4]{sturm2}): since the metric measure space $(M,d,m)$ satisfies $\operatorname{BE}(0,N)$ and thus also $\operatorname{CD}^e(0,N)$, we get for small $r$ and some constant $c'>0$
$$
m(B_{x_0}(r))\geq c'r^{N}
$$
which contradicts the first estimate for small values of $r$.
\end{proof}
\subsection{Sharp estimate}
\label{sharp.estimate}
We can now combine the estimates for gradient and maximum with the theory of the one-dimensional model. As we have seen, in the one-dimensional case with $L=\Delta$, the first eigenvalue is $(p-1)\pi_p^p/D^p$, where $D$ is the diameter, so the next result is sharp.
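For orientation, with the usual convention $\pi_p=2\pi/(p\sin(\pi/p))$ (so that $\pi_2=\pi$), the case $p=2$ of the estimate below recovers the classical Zhong-Yau bound:
$$
\lambda\;\geq\;(2-1)\,\frac{\pi_2^2}{D^2}\;=\;\frac{\pi^2}{D^2}.
$$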
\begin{theorem}
Let $M$ be compact and connected and $L$ be an elliptic diffusion operator with invariant measure $m$. We assume as well that $L$ satisfies $\operatorname{BE}(0,N)$ for some $N\in[1,\infty)$ and if the boundary is non-empty, we assume it to be convex. Let $\lambda$ be the principal eigenvalue of $L_p$ with Neumann boundary conditions. Then we have the sharp estimate
\begin{align}
\lambda\geq (p-1)\frac{\pi_p^p}{D^p}, \label{sharp.estimate.stated}
\end{align}
where $D$ is the diameter associated with the intrinsic metric $d$.
\label{sharp.estimate.thm}
\end{theorem}
\begin{proof}
Let $u$ be the rescaled eigenfunction such that $\min u=-1$ and
$\max u\leq 1$. By {Corollary \ref{max.comparison}}, we have $m(0)\leq\max u$. On the other hand, according to {Theorem \ref{ode.asymptotics}}, $m(a)$ is a continuous function with $m(a)\to 1$ as $a\to\infty$. Hence, there is a unique $a\in[0,\infty]$ satisfying
$m(a)=\max u$. Let $w_a$ be the corresponding solution. The gradient comparison
gives $\Gamma (w_a^{-1}\circ u)\leq 1$. Let $x,y$ be maximum and minimum points of $u$, respectively. By definition of $D$ and {Theorem \ref{ode.asymptotics}}, we have
$$
D\geq |w_a^{-1}\circ u(x)-w_a^{-1}\circ u(y)|=w_a^{-1}(m(a))-w_a^{-1}(-1)=b(a)-a\geq \frac{\pi_p}{\alpha}.
$$
Since $\delta_a=\pi_p/\alpha$ if and only if $a=\infty$, we obtain that $\max(u)=1$ is a necessary, but, as we will see, not sufficient condition for equality to hold.
\end{proof}
\begin{remark}
If $L$ only satisfies the condition BE$(0,\infty)$, the situation is slightly different:
the maximum comparison does not hold anymore, since we cannot compare the gradient of the eigenfunction with the relaxed model function. However, applying the gradient comparison with $w_\infty$ and using the symmetry of $w_\infty$ we easily obtain the estimate
$$
\lambda \geq (p-1)\frac{1}{2^p} \frac{\pi^p_p}{D^p},
$$
which we expect not to be sharp.
\end{remark}
We now turn towards the case of equality. Once again, motivated by the approach in \cite{val1}, we prove the following necessary condition:
\begin{theorem} \label{equality.statement}
Under the hypotheses of Theorem \ref{sharp.estimate.thm}, we assume that equality holds in the estimate (\ref{sharp.estimate.stated}) and that $u$ is an eigenfunction with $-\min u=\max u=1$. Then the function $e(u)=\Gamma(u)^\frac{p}{2}+\lambda/(p-1)|u|^p$ is constant and equals $\lambda/(p-1)$; in particular, $\Gamma(u)=0$ if and only if $|u|=1$. Furthermore, we have that $R\equiv 0$ and $\dim(M)=1$.
\end{theorem}
\begin{proof}
In the one-dimensional model, we have the identity
$$
w'^p(t)+\frac{\lambda}{p-1}|w|^p(t)\equiv \frac{\lambda}{p-1},
$$
which is readily checked: differentiating and using the one-dimensional eigenvalue equation $(p-1)w'^{p-2}w''=-\lambda w|w|^{p-2}$ (recall that $w'\geq 0$ on the model) gives
$$
\frac{d}{dt}\bigg(w'^p+\frac{\lambda}{p-1}|w|^p\bigg)=\frac{p}{p-1}\,w'\bigg((p-1)w'^{p-2}w''+\lambda w|w|^{p-2}\bigg)=0,
$$
while testing the equation at one of the endpoints, where $w'=0$ and $|w|=1$, fixes the constant. Hence, the gradient comparison gives
$$
\Gamma(u)^{\frac{p}{2}}\leq w'(w^{-1}(u))^p=\frac{\lambda}{p-1}\bigg(1-|u|^p\bigg),
$$
so we have $e\leq \lambda/(p-1)$. Let $x,y\in M$ be minimum and maximum points of $u$, respectively. We have
$$
D\geq d(x,y)\geq \big|(w^{-1}\circ u)\big|^y_x\big|= \frac{\pi_p}{\alpha}=D
$$
by the equality assumption. Hence, the distance between $x,y$ is attained by $w^{-1}\circ u$. Therefore, we must have $\Gamma(w^{-1}\circ u)(z)=1$ for some $z\in M\setminus \{|u|=1\}$, because otherwise the function cannot attain the supremum in the definition of the intrinsic distance. Now we would like to use the strong maximum principle: we consider the operator
\begin{align*}
\mathcal{P}(\eta)=&\mathcal{L}^u_p(\eta)-(p-2)\lambda u|u|^{p-2}\frac{\Gamma(u,\eta)}{\Gamma(u)}\\&+(p-1)^2\Gamma(u)^{\frac{p-4}{2}}\bigg(H_u(u,\eta)-A_u\Gamma(u,\eta)\bigg)
-\frac{(p-1)^2}{p\Gamma(u)}\Gamma\big(\Gamma(u)^{\frac{p}{2}}-\frac{\lambda}{p-1}|u|^p,\eta\big)
\\=&:\mathcal{L}^u_p(\eta)+\mathcal{P}_0(\eta)
\end{align*}
and notice that $\mathcal{P}$ is locally uniformly elliptic in the open set $M\setminus \{\Gamma(u)=0\}$. The first-order term $\mathcal{P}_0$ is chosen so that $\mathcal{P}(e)\geq 0$. Indeed, the chain rule gives
$\mathcal{L}^u_p(|u|^p)=p\big(u|u|^{p-2}L_p(u)+(p-1)^2|u|^{p-2}\Gamma(u)^p\big)$ and one easily verifies that $A_u\Gamma(u)=H_u(u,u)$. Combining this with
the $p$-Bochner formula, the strong eigenvalue equation, and some algebraic manipulations together with the identity $p(p-2)-(p-1)^2=-1$, we obtain
$$
\mathcal{P}(e)=p\Gamma(u)^{p-2}\bigg(R(u,u)+|H_u|^2_{HS}-A_u^2\bigg)\geq 0
$$ by the definition of the Hilbert-Schmidt norm. Now the strong maximum principle (\cite[Theorem 3.5, Chapter 3]{gilbarg}) gives that the set $\{e=\lambda/(p-1)\}$ is open and closed in $M\setminus(\{|u|=1\}\cup\{\Gamma(u)=0\})=:M\setminus E$, in particular, $e\equiv \lambda/(p-1)$ in the component containing $z$, which we denote by $C_0$. If there is another component $C_1$, we let $p'\in C_1$ be such that $u(p')=0$. Similarly as above, we have that
$$
\operatorname{dist}(p',E)\geq |(w^{-1}(\pm 1))-(w^{-1}(0))|=\frac 12 \frac{\pi_p}{\alpha}=\frac12D
$$
by the equality assumption. Now by the intermediate value theorem, we have
$$
D\geq\operatorname{dist}(\{u=1\},\{u=-1\})\geq \operatorname{dist}(\{u=1\},\{u=0\})+
\operatorname{dist}(\{u=0\},\{u=-1\})\geq D.
$$
In particular, the function $w^{-1}\circ u$ attains the maximum in the definition of the distance to some boundary point, so we obtain that there is a point $z'\in C_1$ with $\Gamma(w^{-1}\circ u)(z')=1$. Hence, $e\equiv \lambda/(p-1)$ in $C_1$, and since this holds on $E$ anyway, we get that $e\equiv \lambda/(p-1)$ on $M$. Now by the regular value theorem, the level sets $\{u=t\}$ are smooth submanifolds of dimension $\dim(M)-1$ for any $|t|<1$, and so are the sets $D_t:=C_1\cap\{u=t\}$. In order to get a useful frame, we define the map $\Phi:D_0\times(-D/2,D/2)\to M $ by
\begin{align*}
&\Phi(x,0)=x,
\\ \frac{\partial}{\partial t} &u(\Phi(x,t))=1.
\end{align*}
This is well-defined by the standard theory for ordinary differential equations since $u$ is regular in $D_0$, and because the differential equation enforces $\Phi(x,t)\in D_t$, the solution cannot blow up. In particular, we get $\operatorname{Im}(\Phi)\subset C_1$. Smooth dependence on the data implies that $\Phi$ is smooth. Uniqueness gives that $\Phi$ is a bijection onto $C_1$: surjectivity follows since we can solve the differential equation backwards from a point $x\in D_t$, and if $\Phi(x,t)=\Phi(y,t')$, then we first have $t=t'$ and also $x=y$, because otherwise we could solve the differential equation backwards and obtain two distinct solutions starting at $\Phi(x,t)$, contradicting uniqueness. Now we would like to use the parametrization $\Phi$ to get a better understanding of the geometry of $C_1$: given a smooth function $\tilde v$ on $D_0$, we define $v(x,t):=\tilde v(x,0)$ and note that $v$ is smooth, too.
Since the differential of a function is perpendicular to its level sets we have
$$
\Gamma(u,v)(x,t)=0.
$$
We remark that this also implies $\Gamma(u,\cdot)=\Gamma(u)\frac{\partial}{\partial t}$ since $\frac{\partial}{\partial t}u=1$ in the chosen coordinate frame. Now the important observation is that since $\mathcal{P}(e)\equiv 0$, we have $A_u^2=|H_u|^2_{HS}$ and this implies $H_u(a,b)=\eta\Gamma(u,a)\Gamma(u,b)$ for a smooth function $\eta$, which we do not need to determine. This gives
\begin{align*}
\frac{1}{\Gamma(u)}\frac{\partial}{\partial t}\Gamma(v)(x,t)&=\Gamma(u,\Gamma(v))=\Gamma(u,\Gamma(v))-2\Gamma(v,\Gamma(u,v))\\&=-2H_u(v,v)=-2\eta\Gamma(u,v)^2=0,
\end{align*}
where we used that $\Gamma(u,v)$ identically vanishes on the level sets of $u$. This implies that $\Gamma(v)(x,t)=\Gamma(v)(x,0)$, which forces $M$ to be one-dimensional: if we assume that $D_0$ has at least two points, say $x$ and $y$, then we can find a smooth function $\tilde v$ with $\tilde v(x,0)-\tilde v(y,0)>0$ and $\Gamma(\tilde v)\leq 1$. We define $v(x,t):=\tilde v(x,0)$ and rescale by a constant $c$ such that $\Gamma(v)|_{D_0}\leq 1$. Then we have $v(x,t)-v(y,-t)=c(\tilde v(x,0)-\tilde v(y,0))>0$ for any $t\in[0,D/2)$, and the above consideration gives that $\Gamma(v)\leq 1$ on $D_0\times (-D/2,D/2)$. We have also seen that $\Gamma(u,v)=0$, so using the chain rule, we can directly compute
\begin{align*}
\Gamma\Big(\sqrt{(w^{-1}\circ u)^2+v^2}\Big)&=\frac{\Gamma\big((w^{-1}\circ u)^2+v^2\big)}{4\big((w^{-1}\circ u)^2+v^2\big)}\\&=\frac{(w^{-1}\circ u)^2\Gamma(w^{-1}\circ u)+v^2\Gamma(v)}{(w^{-1}\circ u)^2+v^2}\\&\leq 1.
\end{align*}
If we define $\bar v(x,t):=v(x,t)-v(y,-t)$, we get that $\Gamma(\bar v)=\Gamma(v)$ and similarly for $w^{-1}\circ u$, so we have
$$d(\Phi(x,t),\Phi(y,-t))^2\geq(v(x,t)-v(y,-t))^2+(w^{-1}(t)-w^{-1}(-t))^2>D^2 $$ for $t$ sufficiently close to $D/2$, a contradiction. Thus, it follows that the level sets are discrete and hence $\dim(M)=1$. On the other hand, we have $\mathcal{P}(e)=0$ since $e$ is constant, so it follows that $R(u,u)\equiv 0$, and since $M$ is one-dimensional, we have that $R\equiv 0$. \end{proof}
\indent
Finally, we would like to check whether the necessary conditions derived in {Theorem \ref{equality.statement}} are sufficient. We consider the one-dimensional manifold with boundary $M:=[-D/2,D/2]$ and a general diffusion operator $L(u)=\Delta_g u+bu'$ for some smooth function $b$ and metric $g$. Since all one-dimensional Riemannian manifolds with the same diameter are isometric, we can assume that $g$ is the Euclidean metric, hence $Lu=u''+bu'$ and $\Gamma(u)=u'u'$. Let $\lambda$ be the principal eigenvalue of $L_p$ and assume that equality holds in the eigenvalue estimate (\ref{sharp.estimate.stated}), that is, $\lambda$ coincides with the principal eigenvalue of $\Delta_p$. Let $u$ be the first eigenfunction of $L_p$ and $w$ be the first eigenfunction of $\Delta_p$ on $[-D/2,D/2]$. Theorem \ref{equality.statement} implies that equality holds in the gradient comparison, that is,
$$\Gamma(w^{-1}\circ u)=1$$
or equivalently
$$
|u'|^p=\Gamma(u)^{\frac{p}{2}}=|w'(w^{-1}(u))|^p=\frac{\lambda}{p-1}\bigg(1-|u|^p\bigg).
$$
But this ODE is also solved by $w$ and $w$ satisfies the same boundary conditions as $u$ at $-D/2$ which implies $w\equiv u$. On the other hand, we have $$\Delta_pw=-\lambda w|w|^{p-2}=-\lambda u|u|^{p-2}=L_pu=\Delta_pu+\Gamma(u)^{\frac{p-2}{2}}bu'=\Delta_pw+\Gamma(w)^{\frac{p-2}{2}}bw'$$
which implies $b\equiv 0$. Hence, equality is attained by all Laplace-Beltrami operators in one dimension or, equivalently, by all operators which satisfy BE$(0,1)$.
If $M$ does not have a boundary, we can proceed in a similar fashion, so we have proven Theorem 1.1. \\
\indent
We end this section by demonstrating that although equality can only be attained for $\dim(M)=1$, the estimate is still sharp if we restrict ourselves to any given integer dimension. Let $N\in\mathbb{N}$ with $N\geq 2$ and $D>0$. The idea is to construct a thin tube which collapses to the one-dimensional model space. Precisely, we choose $D'$ such that $\pi D'<D$ and define the product manifold $M=S^1\times S^{N-1}$ with metric
$g:=D'g_{S^1}+ag_{S^{N-1}}$. We choose $L=\Delta_g$, which means that $R=\operatorname{ric}_M=\operatorname{ric}_{S^{N-1}}=(N-2)\frac{1}{a}g_{S^{N-1}}$, in particular, $R\geq 0$. Hence, $L$ satisfies BE$(0,N)$, but does not satisfy BE$(0,N')$ for any $N'<N$. Now let $w_{D'}$ be the first eigenfunction of $(\Delta_{\tilde g})_p$ on $S^1$ with metric $\tilde g:=D'{g_{S^1}}$. Let $\lambda_{D'}$ be the eigenvalue of $w_{D'}$; then we have
$$
\lambda_{D'}=(p-1)\frac{\pi^p_p}{(\pi D')^p}.
$$
Since the diameter depends continuously on $a$, we can choose $a=a(D')$ in a way such that $\operatorname{diam}(M)=D$. If we define $u_{D'}(t,x):=w_{D'}(t)$, then $u_{D'}$ is an eigenfunction of $(\Delta_g)_p$ with eigenvalue $\lambda_{D'}$. If we let $\pi D'\nearrow D$, then $\lambda_{D'}\to (p-1)\pi^p_p/D^p$, so the estimate cannot be improved.
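For completeness, the eigenfunction property of $u_{D'}$ is immediate from the product structure: since $u_{D'}$ is constant in the $S^{N-1}$ directions,
$$
(\Delta_g)_p\, u_{D'}=(\Delta_{\tilde g})_p\, w_{D'}=-\lambda_{D'}w_{D'}|w_{D'}|^{p-2}=-\lambda_{D'}u_{D'}|u_{D'}|^{p-2}.
$$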
\section{Non-symmetric operators}
\label{nonsymmetric.operators}
In this section, we extend our methods to non-symmetric diffusion operators, that is, operators without an invariant measure; more precisely, we prove Theorem \ref{maintheorem1}. We restrict ourselves to the linear case $p=2$, as our approach does not seem to generalize to the non-linear case. \\
\indent
We consider a smooth manifold $M$ with $\dim(M)=N$ and an elliptic diffusion operator $L$ which satisfies $\operatorname{BE}(a,\infty)$ for some $a\in\mathbb R$. We equip $T^*M$ with the metric $\Gamma$ and use the distance function $d$ induced by $L$, and $(M,g)$ thus becomes a Riemannian manifold, where $g$ is the metric on $TM$ coming from the metric $\Gamma$ on $T^*M$. As described in {Section \ref{preliminaries}}, using the metric $g$ we can write $L=\Delta_g+X\cdot\nabla$ for a suitable vector field $X$. We consider the Neumann eigenvalue problem
\begin{align}
\begin{cases}
& Lu=\lambda u\text{ on } M\\ \label{nonsymmetric.ev.eqn}
& \Gamma (u,\tilde\nu)=0 \text{ on } \partial M
\end{cases}
\end{align}
where we require $\partial M$ to be strictly convex. We emphasize that contrary to \cite{andrews}, $X$ does not have to be the gradient of a function, and hence $L$ might not possess an invariant measure. $N$ now denotes the extrinsic dimension of $L$, which is at least the intrinsic dimension of $\Delta_g$.\\ \indent
Eigenvalues of non-symmetric operators with Neumann boundary conditions can be shown to exist by standard methods, but apart from the trivial eigenvalue $\lambda=0$, they are generally complex (see for instance \cite[Theorem 3.2, Chapter 3, Section 3]{lady}). Still, standard Schauder theory gives smoothness of the eigenfunctions. \\ \indent
Again, we will compare the principal eigenvalues of the operator and a one-dimensional model space. Since the principal eigenvalue of the model space is hard to compute, the result in Theorem \ref{maintheorem1} is not sharp. By using the principal eigenvalue of the model rather than $\pi^2/D^2+a/2$ as a lower bound, the estimate becomes sharp but less useful. Furthermore, the lower bound $\pi^2/D^2+a/2$
is the best among all linear functions in $a$ (see \cite{andrews}), which is enough for most applications.
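As a consistency check in the curvature-free case $a=0$, the model eigenvalue can be computed explicitly: the odd function
$$
w(s)=\sin\Big(\frac{\pi s}{D}\Big)
$$
satisfies $w''-as\,w'=w''=-\frac{\pi^2}{D^2}\,w$ on $[-D/2,D/2]$ together with the Neumann conditions $w'(\pm D/2)=0$, so the principal eigenvalue of the model is exactly $\pi^2/D^2$, in agreement with the linear lower bound $\pi^2/D^2+a/2$ at $a=0$.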
\subsection{Modulus of continuity comparison principle}
Similar to \cite{andrews}, we show a comparison theorem for the decay of a heat equation with drift. Since every eigenfunction of $L$ with eigenvalue $\lambda$ corresponds to a solution of a heat equation with decay-rate $\operatorname{Re}(\lambda)$, this will be a suitable eigenvalue comparison, too. For the next theorem, we define the operator $\tilde L$ on $M\times M$ by $\tilde L= L_x +L_y$, where $L_x,L_y$ act on the first or second component, respectively. This also induces a metric $\tilde \Gamma =\Gamma_x+\Gamma_y$. For the first-order vector field we get the decomposition
$\tilde X=X_x+X_y$. We recall that, given a metric space $M$ with diameter $D$ and distance $d$, a continuous function $\varphi:[0,D/2]\to \mathbb{R}_+$ is called a
\textit{modulus of continuity} of a function $u:M\to \mathbb{R}$ if
$|u(x)-u(y)|\leq 2\varphi(d(x,y)/2)$ for any $x,y\in M$.
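For instance, any $K$-Lipschitz function $u$ admits the linear modulus of continuity $\varphi(s)=Ks$, since
$$
|u(x)-u(y)|\leq K\,d(x,y)=2\varphi\Big(\frac{d(x,y)}{2}\Big).
$$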
\begin{theorem} \label{modulus.of.continuity.thm}
Let $(M,L)$ be as above with diameter $D$ and $v$ be a smooth solution of the heat equation with drift
$$
\begin{cases}
&\frac{\partial v}{\partial t}=L(v) \text{ on } M,\\
& \Gamma(v,\tilde{\nu})=0 \text{ on } \partial M.
\end{cases}
$$
Assume further that there exists a smooth function $\phi(s,t):[0,D/2]\times\mathbb{R}_+\to\mathbb R$ such that
\begin{itemize}
\item[(i)] $\phi(\cdot,0)$ is a modulus of continuity for $v(\cdot,0)$,
\item[(ii)]$\frac{\partial \phi}{\partial t}\geq\phi''-as\phi'$,
\item[(iii)] $\phi'>0$,
\item[(iv)] $\phi(0,\cdot)\geq 0. $
\end{itemize}
Then $\phi(\cdot,t)$ is also a modulus of continuity of $v(\cdot,t)$ for any $t\geq 0$.
\end{theorem}
\begin{proof}
The idea of the proof stems from \cite[Proposition 1.1]{andrews}. Let $\epsilon>0$ and define an evolving function on $M\times M\times\mathbb{R}_+$ by
$$
\Phi(x,y,t)=v(y,t)-v(x,t)-2\phi\bigg(\frac{d(x,y)}{2},t\bigg)-\epsilon e^t.
$$
Since we may let $\epsilon\to0$ at the end, it suffices to show that $\Phi$ stays negative for any choice of $\epsilon$.
By assumption $(i)$, we have that $\Phi(x,y,0)\leq-\epsilon<0$. Assume the assertion were wrong; then there exist an $\epsilon>0$ and a first time $t_0$ such that the function attains the value $0$, say at $(x_0,y_0)\in M\times M$. Hence, the function $\Phi(\cdot,\cdot,t_0)$ attains its global maximum at $(x_0,y_0)$. Assumption ($iv$) implies that $x_0\neq y_0$. If for instance $y_0\in\partial M$ and $\tilde\nu$ is the outward normal of $\partial M$ at $y_0$, then
$$
\Gamma_y(\Phi(\cdot,\cdot,t_0),\tilde\nu)(x_0,y_0)=\Gamma_y(v(y_0,t_0),\tilde\nu)-\phi'\bigg(\frac{d(x_0,y_0)}{2},t_0\bigg)\Gamma_y(d(x_0,y_0),\tilde\nu)<0,
$$
where we used the Neumann condition, $(iii)$, and that strict convexity implies that geodesics touching the boundary are outward pointing. This contradicts the maximum assumption, so we can assume that $x_0,y_0$ both lie in the interior. Now we define the functions $f(x,y):=2\phi(d(x,y)/2,t_0)$ and $\psi(x,y):=v(y,t_0)-v(x,t_0)-\epsilon e^{t_0}$. $\psi$ is smooth and touches $f$ at $(x_0,y_0)$ from below by assumption. Furthermore, we have $(\Delta_g)_x(u)=\operatorname{tr}(H_u)$, where the trace is taken with respect to $\Gamma_x$, and similarly for $y$. So in the situation of {Lemma \ref{viscosity.solution.thm}}, we have for any admissible $A\in\mathcal A$ that $\operatorname{tr}(AH_\psi)(x_0,y_0)=\Delta_g v(y_0,t_0)-\Delta_g v(x_0,t_0)$, in particular, $\mathcal{L}(H_\psi)(x_0,y_0)=\Delta_g v(y_0,t_0)-\Delta_g v(x_0,t_0)$. Hence, {Lemma \ref{viscosity.solution.thm}} below implies that
\begin{align}
\frac{\partial}{\partial t}v(y_0,t_0)-
\frac{\partial}{\partial t}v(x_0,t_0)&=\bigg(\Delta_g v+X_y\cdot\nabla v\bigg)(y_0,t_0)-\bigg(\Delta_g v+X_x\cdot\nabla v\bigg)(x_0,t_0) \notag \\
&=
\bigg(\mathcal{L}(H_\psi)+X\cdot\nabla\psi\bigg)(x_0,y_0,t_0)
\notag \\ & \leq 2\phi''\bigg(\frac{d(x_0,y_0)}{2},t_0\bigg)-a\,d(x_0,y_0)\,\phi'\bigg(\frac{d(x_0,y_0)}{2},t_0\bigg)\notag \\&\leq2 \frac{\partial}{\partial t}\phi\bigg(\frac{d(x_0,y_0)}{2},t_0\bigg),
\notag
\end{align}
where we used $(ii)$. Hence, we obtain
$$
\frac{\partial}{\partial t}\Phi(x_0,y_0,t_0)=\frac{\partial}{\partial t}v(y_0,t_0)-
\frac{\partial}{\partial t}v(x_0,t_0)-2 \frac{\partial}{\partial t}\phi\bigg(\frac{d(x_0,y_0)}{2},t_0\bigg)-\epsilon e^{t_0}<0,
$$
where the strict inequality comes from the term $-\epsilon e^{t_0}$. This, however, contradicts the fact that $t_0$ was the first time at which $\Phi$ becomes zero, and the assertion follows.
\end{proof}
We add the technical lemma needed to obtain the contradiction in the previous theorem:
\begin{lemma}
\label{viscosity.solution.thm}
Let $(M,L)$ be a compact and connected manifold without boundary with a possibly non-symmetric diffusion operator $L=\Delta_g+X\cdot\nabla$ satisfying $\operatorname{BE}(a,\infty)$. Let $N=\dim(M)$, $d$ be the distance function induced by $L$, and $D$ the diameter of $M$. Let $\phi:[0,D/2]\to\mathbb{R}$ be a smooth and increasing function and define the function $$f:(M\times M)\setminus\Delta\to\mathbb{R}, \quad (x,y)\mapsto 2\phi\bigg(\frac{d(x,y)}{2}\bigg).$$ Then $f$ is a viscosity supersolution of
$$
\mathcal{L}(H_f)+(X_x+X_y)(f)=2\phi''\bigg(\frac{d(x,y)}{2}\bigg)-ad(x,y)\phi'\bigg(\frac{d(x,y)}{2}\bigg),
$$
where for $B\in\operatorname{Sym}(T_{(x,y)}(M\times M))$
$$
\mathcal{L}(B)=\inf_{A\in\mathcal A} \operatorname{tr}(AB)
$$
with $\mathcal{A}:=\{A\in\operatorname{Sym}(T^*_{(x,y)}(M\times M)) | A\geq 0, A|_{T^*_xM}=\Gamma_x, A|_{T^*_yM}=\Gamma_y\}$.
\end{lemma}
\begin{proof} This is a variation of the argument in \cite[Theorem 3]{julie}: Let $(x,y)\in M\times M\setminus \Delta $ and $\psi$ be a smooth function around $(x,y)$ with $\psi\leq f$ and $\psi(x,y)=f(x,y)$. $M$ is compact, so the Hopf-Rinow theorem implies that $(M,g)$ is complete and we can choose a length-minimizing geodesic $\gamma$ parametrized by the arc-length joining $x$ and $y$, that is, $\gamma(-d/2)=x$ and $\gamma(d/2)=y$, where $d:=d(x,y)$. Next, we define $e_N(s):=\gamma(s)$ and extend it to an orthonormal base $e_i$ of $T_xM$. We use parallel transport along $\gamma$ to get an orthonormal base $e_i(s)\in T_{\gamma(s)}M$ and denote $\tilde e_i:=e_i(d/2)\in T_yM$. Before defining a suitable matrix $A\in\mathcal A$, we compute the derivatives of $\psi$. Since $\phi$ is increasing, we have
$$
\psi(\gamma(s),\gamma(t))\leq 2\phi\bigg(\frac{d(\gamma(s),\gamma(t))}{2}\bigg)
\leq 2\phi\bigg(\frac{|t-s|}{2}\bigg)
$$
with equality if $t=d/2$ and $s=-d/2$. This gives $\partial_{e_N}\psi(x,y)=-\phi'(d/2)$ and $\partial_{\tilde e_N}\psi(x,y)=\phi'(d/2)$. Now we define the smooth family of paths $\gamma^i(r,s):=\exp_{\gamma(s)}(r(\frac{1}{2}+\frac{s}{d})e_i(s))$ starting at $x$. Again, since $\phi$ is increasing, we have
$$
\psi(x,\exp_y(r\tilde e_i))\leq 2\phi\bigg(\frac{L(\gamma^i(r,\cdot))}{2}\bigg)
$$
with equality if $r=0$. The right-hand side is a smooth function of $r$ and $\gamma^i$ is a variation of the minimizing geodesic $\gamma$ which is fixed at $x$ and orthogonal at $y$. Hence, the first variation formula gives that the right-hand side has derivative zero, which implies that $\partial_{\tilde e_i}\psi(x,y)=0$ and similarly $\partial_{ e_i}\psi(x,y)=0$ for $1\leq i\leq N-1$.
So if we define $r(y)=d(x,y)$, we can already compute
\begin{align}
(X_y+X_x)(\psi)&=\phi'(d/2)\bigg(g(X(y),\gamma'(d/2))-g(X(x),\gamma'(-d/2))\bigg)\notag\\
&=\phi'(d/2)\bigg(\int_{-d/2}^{d/2}(g(X(\gamma(s)),\gamma'(s)))'ds\bigg) \notag \\
&=-\phi'(d/2)\bigg(\int_{-d/2}^{d/2}\bigg(\frac12X(\Gamma_y(r,r))-\Gamma_y(X(r),r)\bigg)\circ\gamma(s)ds\bigg). \label{visc1}
\end{align}
To see why the last equality holds, we first note that any $\gamma(s)$ with $s\in(-d/2,d/2)$ is outside of the cut-locus of $x$ because $\gamma$ is length-minimizing. $r$ is smooth near such points and satisfies $\Gamma_y(r,r)\equiv 1$, which implies $X(\Gamma_y(r,r))=0$. Now for any $s$, we choose a normal coordinate frame involving the orthonormal base $e_i(s)$, and since $\gamma$ is a geodesic we have that
\begin{align*}
(g(X(\gamma(s)),\gamma'(s)))'=g(\nabla_{\gamma'(s)}X(\gamma(s)),\gamma'(s))+
g(X(\gamma(s)),\nabla_{\gamma'(s)}\gamma'(s))=\partial_N X^N(\gamma(s)),
\end{align*}
where we used that all Christoffel symbols vanish at $s$ and that in the chosen chart, we have $\gamma'(s)=\partial_N$. On the other hand, we have $dr=\gamma'$ which gives
\begin{align*}
\Gamma_y(X(r),r)(\gamma(s))=\sum_{i=1}^{N}\partial_i(X(r))\partial_i(r)=\partial_N(g(X,\gamma'))(\gamma(s))=\partial_N X^N(\gamma(s)),
\end{align*}
again since $\gamma$ is a geodesic and all the Christoffel symbols vanish. We proceed to prove the lemma. Bearing in mind the asymmetry of the $e_N$ and $\tilde e_N$ derivative, we define
$$
A=(e_N^*,-\tilde e_N^*)\otimes(e_N^*,-\tilde e_N^*)+\sum_{i=1}^{N-1}(e_i^*,\tilde e_i^*)\otimes(e_i^*,\tilde e_i^*),
$$
where we use the metric to produce the dual vectors $e_i^*(s):=g(e_i(s),\cdot)\in T_{\gamma(s)}^*M$.
$A$ is obviously symmetric, and as a sum of non-negative matrices, it is non-negative itself. Since $\{e_i | 1\leq i \leq N\}$ is an orthonormal base, we have
$$
A|_{T_xM}=\sum_{i=1}^{N}e_i^*\otimes e_i^*=\Gamma_x
$$
and similarly for $y$, hence $A\in\mathcal A$. An easy computation gives
$$
\operatorname{tr}(AH_\psi)=\partial_{e_N\otimes -\tilde e_N}\partial_{e_N\otimes -\tilde e_N}\psi+\sum_{i=1}^{N-1}
\partial_{e_i\otimes \tilde e_i}
\partial_{e_i\otimes \tilde e_i} \psi.
$$
Now for any $1\leq i\leq N-1$, we define the geodesic variation $\gamma^i(r,s):=\exp_{\gamma(s)}(re_i(s))$. Again, since $\phi$ is increasing, we have that
$$
\psi(\exp_x(re_i(-d/2)),\exp_y(re_i(d/2)))\leq 2\phi(\frac{L(\gamma^i(r,\cdot))}{2})
$$
with equality if $r=0$. Similarly as above, $\gamma^i$ is an orthogonal variation of the length minimizing geodesic $\gamma$, so the first derivative of $L(\gamma^i(r,\cdot))$ is zero. On the other hand, using the second variation formula, we get
\begin{align}
\frac{\partial ^2}{\partial r^2}L(\gamma^i(r,\cdot))\bigg|_{r=0}=&\,g\Big(\nabla_r \frac{d}{dr}\gamma^i,\gamma'\Big)\bigg|^{s=d/2}_{s=-d/2}-\int_{-d/2}^{d/2} \operatorname{Rim}(e_i,\gamma ',\gamma ',e_i)\,ds\notag\\
=&-\int_{-d/2}^{d/2} \operatorname{Rim}(e_i,\gamma ',\gamma ',e_i)\,ds,\notag
\end{align}
where we used that $\gamma^i(\cdot,s)$ is a geodesic for any $s$ (so the boundary term vanishes) and that $\frac{d}{dr}\gamma^i-g(\frac{d}{dr}\gamma^i,\gamma')\gamma'\big|_{r=0}=e_i(s)$, which is parallel along $\gamma$ by construction. Here, $\operatorname{Rim}$ denotes the Riemannian curvature tensor. Hence, we get that
\begin{align}
\sum_{i=1}^{N-1}\partial_{e_i\otimes\tilde e_i}\partial_{e_i\otimes\tilde e_i}\psi(x,y)&\leq \sum_{i=1}^{N-1} \phi'\bigg(\frac{d(x,y)}{2}\bigg)\frac{\partial^2}{\partial r^2}L(\gamma^i(r,\cdot))\notag \\&=-\phi'\bigg(\frac{d(x,y)}{2}\bigg)\int_{-d/2}^{d/2}\operatorname{ric}(\gamma',\gamma')\,ds. \label{visc2}
\end{align}
Finally, we have
$$
\psi(\gamma(r-d/2),\gamma(d/2-r))\leq 2\phi\bigg(\frac{d-2r}{2}\bigg)
$$
with equality if $r=0$. Using that the argument $(d-2r)/2$ is linear in $r$, we get
\begin{align}
\partial_{e_N\otimes -\tilde e_N}\partial_{e_N \otimes -\tilde e_N}\psi(x,y)\leq 2\phi''\bigg(\frac{d(x,y)}{2}\bigg).
\label{visc3}
\end{align}
Combining (\ref{visc1}), (\ref{visc2}), and (\ref{visc3}), we get that
\begin{align}
\mathcal{L}(H_\psi)+(X_x+X_y)\psi\leq& \operatorname{tr}(AH_\psi)-\phi'\bigg(\frac{d}{2}\bigg)\int_{-d/2}^{d/2}\bigg(\frac12X(\Gamma_y(r,r))-\Gamma_y(X(r),r)\bigg)\circ\gamma(s)ds
\notag
\\
\leq& 2\phi''(d/2)-\phi'(d/2)\int_{-d/2}^{d/2}\operatorname{ric}(\gamma',\gamma')ds
\notag \\\notag &-\phi'(d/2)\int_{-d/2}^{d/2}\bigg(\frac12X(\Gamma_y(r,r))-\Gamma_y(X(r),r)\bigg)\circ\gamma(s)ds
\\ \notag =& 2\phi''(d/2)-\phi'(d/2)\int_{-d/2}^{d/2}R(r,r)
\\ \notag \leq& 2\phi''(d/2)-\phi'(d/2)ad,
\end{align}
where we used that $L$ satisfies $\operatorname{BE}(a,\infty)$ and $\Gamma(r,r)\equiv 1$. The claim follows. \end{proof}
\subsection{A lower bound for the principal eigenvalue}
We are now in the position to prove {Theorem \ref{maintheorem1}}. The idea is, as usual, to compare the eigenfunction of $L$ with an eigenfunction of a one-dimensional model space.
\begin{theorem}
Let $(M,L)$ be as above and $w$ be the first non-constant Neumann eigenfunction of the operator $\frac{\partial^2}{\partial s^2} -as\frac{\partial}{\partial s}$ on $[-D/2,D/2]$ with eigenvalue $\bar \lambda$.
Let $\lambda$ be a non-constant Neumann eigenvalue of $L$. Then we have the estimate
$$\operatorname{Re}(\lambda)\geq\bar \lambda.$$\label{nonsymmetric.final.thm}
\end{theorem}
\begin{proof}
One easily verifies that $w$ is the unique minimizer of the weighted energy functional
$$
F(\psi):=\frac{\int_{-D/2}^{D/2}e^{-a\frac{s^2}{2}}(\psi')^2}{\int_{-D/2}^{D/2}e^{-a\frac{s^2}{2}}(\psi)^2},
$$
among all smooth non-zero functions with zero mean. So $w$ is well-defined and smooth. It is a well-known fact that
$w$ can be chosen such that $w(0)=0$ and such that $w$ is positive on $(0,D/2]$. The equation is invariant under the combined transformation $s\mapsto -s$, $w\mapsto -w$, so it follows that $w$ is odd. Because of the Neumann condition, we cannot directly apply {Theorem \ref{modulus.of.continuity.thm}}, so let $\tilde w$ be the associated solution on the interval $[-\tilde D/2, \tilde D/2]$ for $\tilde D>D$ with eigenvalue $\tilde \lambda$. The uniqueness of solutions of ordinary differential equations gives $\tilde w'(0)>0$, since otherwise $\tilde w''(0)=\tilde w'(0)=\tilde w(0)=0$, and hence $\tilde w\equiv 0$. At $\tilde D/2$, the Neumann condition implies that
$\tilde w''(\tilde D/2)=-\tilde \lambda \tilde w(\tilde D/2)<0$, so we have that $\tilde w'>0$ on $(\tilde D/2-\epsilon,\tilde D/2)$ for some small $\epsilon>0$. Now assume that there is an $s\in (0,\tilde D/2)$ such that $\tilde w'(s)=0$. By the considerations above, we can choose $s$ to be maximal. However, $\tilde w''(s)=-\tilde\lambda \tilde w(s)<0$, since $\tilde w$ is positive in $(0,\tilde D/2]$, contradicting the fact that $s$ is maximal. Hence, we have that $\tilde w'|_{[0,\tilde D/2)}>0$; in particular, $\tilde w'|_{[0,D/2]}>0$.\\
Now we define the function $\tilde \phi(s,t)=Ce^{-\tilde \lambda t}\tilde w(s)$, let $u$ be the eigenfunction of $L$ with eigenvalue $\lambda$, and define $v:=e^{-\lambda t}u$. Since $\tilde w'(0)>0$ and since the gradient of $u$ is uniformly bounded ($M$ is compact), $\tilde\phi(s,0)$ is a modulus of continuity of $\operatorname{Re}(u)$ and $\operatorname{Im}(u)$ for $C$ sufficiently large. Furthermore, $v$, and hence also $\operatorname{Re}(v)$ and $\operatorname{Im}(v)$, satisfies a heat equation with drift. $\tilde\phi$ obviously satisfies the other constraints of {Theorem \ref{modulus.of.continuity.thm}}, so we get that $\tilde\phi(\cdot,t)$ is a modulus of continuity of the real and imaginary parts of $v(\cdot,t)$ for any $t\in\mathbb{R}_+$. Since $v$ is non-constant, this can only happen if $v$ decays at least as fast as $\tilde\phi$, that is, $\operatorname{Re}(\lambda)\geq \tilde \lambda$. This proves the theorem, since $\tilde \lambda \to \bar\lambda$ as $\tilde D\searrow D$. \end{proof}
\begin{proof}[Proof of {Theorem \ref{maintheorem1}}] This follows from {Theorem \ref{nonsymmetric.final.thm}} together with \cite[Proposition 3.1]{andrews}, which states that $\bar{\lambda}\geq \frac{a}{2}+\frac{\pi^2}{D^2}$.
\end{proof}
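As a quick numerical sanity check (independent of the proof above), one can discretize the model operator $\frac{\partial^2}{\partial s^2}-as\frac{\partial}{\partial s}$ with Neumann boundary conditions and verify the bound $\bar\lambda\geq \frac{a}{2}+\frac{\pi^2}{D^2}$ for sample parameters. The following Python sketch uses a second-order finite-difference scheme with mirrored ghost points; grid size and parameter values are arbitrary choices.

```python
import numpy as np

# Finite-difference check of the bound lambda_bar >= a/2 + pi^2/D^2 for the
# first nonzero Neumann eigenvalue of L = d^2/ds^2 - a*s*d/ds on [-D/2, D/2].
# Grid size and parameter values are arbitrary (illustrative only).
def neumann_gap(a, D, n=400):
    s = np.linspace(-D / 2.0, D / 2.0, n)
    h = s[1] - s[0]
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = 1.0 / h**2 + a * s[i] / (2.0 * h)
        A[i, i] = -2.0 / h**2
        A[i, i + 1] = 1.0 / h**2 - a * s[i] / (2.0 * h)
    # Neumann condition via mirrored ghost points; the drift term -a*s*w'(s)
    # vanishes at the endpoints since w' = 0 there.
    A[0, 0], A[0, 1] = -2.0 / h**2, 2.0 / h**2
    A[-1, -1], A[-1, -2] = -2.0 / h**2, 2.0 / h**2
    ev = np.sort(np.linalg.eigvals(-A).real)
    return ev[1]  # ev[0] ~ 0 belongs to the constant eigenfunction

a, D = 1.0, 2.0
gap = neumann_gap(a, D)
bound = a / 2.0 + np.pi**2 / D**2
```

For $a=0$ the scheme reproduces the classical Neumann gap $\pi^2/D^2$, and for $a>0$ the computed gap should stay above the bound, consistent with \cite[Proposition 3.1]{andrews}.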
We have $\frac12as\frac{\partial}{\partial s}(u'u')-(as\frac{\partial}{\partial s}u)'u'=-au'^2$, so the operator
$\frac{\partial^2}{\partial s^2} -as\frac{\partial}{\partial s}$ satisfies $\operatorname{BE}(a,\infty)$. Therefore, the result in {Theorem \ref{nonsymmetric.final.thm}} is sharp. By constructing collapsing warped product manifolds as in the previous section, one can see that the estimate remains sharp even if we restrict to any fixed dimension. However, as can be seen in \cite{andrews}, {Theorem \ref{maintheorem1}} is not sharp, even in the smaller class of operators which can be written as a Bakry--\'Emery Laplacian. The reason is that there is no good understanding of the principal eigenvalue of the model operator.
\end{document}
\begin{document}
\title{Robust quantum parameter estimation: coherent magnetometry with feedback}
\author{John K. Stockton}
\email{[email protected]}
\author{JM Geremia}
\author{Andrew C. Doherty}
\author{Hideo Mabuchi}
\affiliation{Norman Bridge Laboratory of Physics, M.C.
12-33, California Institute of Technology, Pasadena CA 91125}
\date{\today}
\begin{abstract}
We describe the formalism for optimally estimating and controlling
both the state of a spin ensemble and a scalar magnetic field with
information obtained from a continuous quantum limited measurement
of the spin precession due to the field. The full quantum
parameter estimation model is reduced to a simplified equivalent
representation to which classical estimation and control theory is
applied. We consider both the tracking of static and fluctuating
fields in the transient and steady state regimes. By using
feedback control, the field estimation can be made robust to
uncertainty about the total spin number.
\end{abstract}
\pacs{07.55.Ge,03.65.Ta,42.50.Lc,02.30.Yy}
\maketitle
\section{Introduction}\label{Section::Introduction}
As experimental methods for manipulating physical systems near
their fundamental quantum limits improve \cite{Armen2002,
Geremia2003b, Orozco2002, Raithel2002, Rempe2002}, the need for
quantum state and parameter estimation methods becomes critical.
Integrating a modern perspective on quantum measurement theory
with the extensive methodologies of classical estimation and
control theory provides new insight into how the limits imposed by
quantum mechanics affect our ability to measure and control
physical systems \cite{Verstraete2001, Gambetta2001, Mabuchi1996,
Belavkin1999}.
In this paper, we illustrate the processes of state estimation and
control for a continuously-observed, coherent spin ensemble (such
as an optically pumped cloud of atoms) interacting with an
external magnetic field. In the situation where the magnetic
field is either zero or well-characterized, continuous measurement
(e.g., via the dispersive phase shift or Faraday rotation of a far
off-resonant probe beam) can produce a spin-squeezed
\cite{Kitagawa1993} state conditioned on the measurement record
\cite{Kuzmich2000}. Spin-squeezing indicates internal
entanglement between the different particles in the ensemble
\cite{Stockton2003} and promises to improve precision measurements
\cite{Wineland1994}. When, however, the ambient magnetic
environment is either unknown or changing in time, the external
field can be estimated by observing Larmor precession in the
measurement signal \cite{Geremia2003b, Jessen2003, Romalis2003,
Budker2002}, see \reffig{Schematic}. Recently, we have shown that
uncertainty in both the magnetic field and the spin ensemble can
be simultaneously reduced through continuous measurement and
adequate quantum filtering \cite{Geremia2003}.
Here, we expand on our recent results \cite{Geremia2003} involving
Heisenberg-limited magnetometry by demonstrating the advantages of
including feedback control in the estimation process. Feedback is
a ubiquitous concept in classical applications because it enables
precision performance despite the presence of potentially large
system uncertainty. Quantum optical experiments are evolving to
the point where feedback can be used, for example, to stabilize
atomic motion within optical lattices \cite{Raithel2002} and high
finesse cavities \cite{Rempe2002}. Recently, we demonstrated the
use of feedback on a polarized ensemble of laser-cooled Cesium
atoms to robustly estimate an applied magnetic field
\cite{Geremia2003b}. In this work, we investigate the theoretical
limits of such an approach and demonstrate that an external
magnetic field can be measured with high precision despite
substantial ignorance of the size of the spin ensemble.
The paper is organized as follows. In
\refsec{QuantumParameterEstimation}, we provide a general
introduction to quantum parameter estimation followed by a
specialization to the case of a continuously measured spin
ensemble in a magnetic field. By capitalizing on the Gaussian
properties of both coherent and spin-squeezed states, we formulate
the parameter estimation problem in such a way that techniques
from classical estimation theory apply to the quantum system.
\refsec{OptimalEstimationAndControl} presents basic filtering and
control theory in a pedagogical manner with the simplified spin
model as an example. This theory is applied in
\refsec{OptimalPerformance}, where we simultaneously derive
mutually dependent magnetometry and spin-squeezing limits in the
ideal case where the observer is certain of the spin number. We
consider the optimal measurement of both constant and fluctuating
fields in the transient and steady state regimes. Finally, we show
in \refsec{RobustPerformance} that the estimation can be made
robust to uncertainty about the total spin number by using
precision feedback control.
\begin{figure*}
\caption{
(A) A spin ensemble is initially prepared in a coherent state
polarized along x, with symmetric variance in the y and z
directions. Subsequently, a field along y causes the spin to
rotate as the z-component is continuously measured. (B)
Experimental schematic for the measurement process. A far off
resonant probe beam traverses the sample and measures the
z-component of spin via Faraday rotation. The measurement strength
could be improved by surrounding the ensemble with a cavity. (C)
Experimental apparatus subsumed by the \textit{Plant}.
\label{Figure::Schematic}}
\end{figure*}
\section{Quantum parameter estimation}\label{Section::QuantumParameterEstimation}
First, we present a generic description of quantum parameter
estimation \cite{Verstraete2001, Gambetta2001, Mabuchi1996,
Belavkin1999}. This involves describing the quantum system with a
density matrix and our knowledge of the unknown parameter with a
classical probability distribution. The objective of parameter
estimation is then to utilize information gained about the system
through measurement to conditionally update both the density
matrix and the parameter distribution. After framing the general
case, our particular example of a continuously measured spin
ensemble is introduced.
\subsection{General problem}
The following outline of the parameter estimation process could be
generalized to treat a wide class of problems (discrete
measurement, multiple parameters), but for simplicity, we will
consider a continuously measured quantum system with scalar
Hamiltonian parameter $\theta$ and measurement record $y(t)$.
Suppose first that the observer has full knowledge of the
parameter $\theta$. The proper description of the system would
then be a density matrix $\rho_\theta(t)$ conditioned on the
measurement record $y(t)$. The first problem is to find a rule to
update this density matrix with the knowledge obtained from the
measurement. As in the problem of this paper, this mapping may
take the form of a stochastic master equation (SME). The SME is by
definition a filter that maps the measurement record to an optimal
estimate of the system state.
Now if we allow for uncertainty in $\theta$, then a particularly
intuitive choice for our new description of the system is
\begin{eqnarray}
\rho(t)&\equiv&\int_{\theta}\rho_\theta(t)p(\theta,t)\,d\theta\label{Equation::FullEstimate}
\end{eqnarray}
where $p(\theta,t)$ is a probability distribution representing our
knowledge of the system parameter. In addition to the rule for
updating each $\rho_\theta(t)$, we also need to find a rule for
updating $p(\theta,t)$ according to the measurement record. By
requiring internal consistency, it is possible to find a Bayes
rule for updating $p(\theta,t)$ \cite{Verstraete2001}. These two
update rules in principle solve the estimation problem completely.
Because evolving $\rho(t)$ involves performing calculations with
the full Hilbert space in question, which is often computationally
expensive, it is desirable to find a reduced description of the
system. Fortunately, it is often possible to find a closed set of
dynamical equations for a small set of moments of $\rho(t)$. For
example, if $c$ is an observable, then we can define the estimate
moments
\begin{eqnarray*}
\langle c \rangle (t)&\equiv&\textrm{Tr}[\rho(t) c]\\
\langle \Delta c^2 \rangle (t)&\equiv&\textrm{Tr}[\rho(t) (c-\langle c \rangle)^2]\\
\langle \theta \rangle (t) &\equiv& \int p(\theta,t) \theta d\theta\\
\langle \Delta\theta^2 \rangle (t)&\equiv& \int p(\theta,t)
(\theta-\langle \theta \rangle )^2 d\theta
\end{eqnarray*}
and derive their update rules from the full update rules,
resulting in a set of $y(t)$-dependent differential equations. If
those differential equations are closed, then this reduced
description is adequate for the parameter estimation task at hand.
This situation (with closure and Gaussian distributions) is to be
expected when the system is approximately linear.
\subsection{Continuously measured spin system}\label{Section::QPESpins}
This approach can be applied directly to the problem of
magnetometry considered in this paper. The problem can be
summarized by the situation illustrated in \reffig{Schematic}: a
spin ensemble of possibly unknown number is initially polarized
along the x-axis (e.g., via optical pumping), an unknown possibly
fluctuating scalar magnetic field $b$ directed along the y-axis
causes the spins to then rotate within the x-z plane, and the
z-component of the collective spin is measured continuously. The
measurement can, for example, be implemented as shown, where we
observe the difference photocurrent, $y(t)$, in a polarimeter
which measures the Faraday rotation of a linearly polarized far
off resonant probe beam travelling along z \cite{Geremia2003b,
Jessen2003, Deutsch2003}. The goal is to optimally estimate $b(t)$
via the measurement record and unbiased prior information. If a
control field $u(t)$ is included, as it will be eventually, the
total field is represented by $h(t)=b(t)+u(t)$.
In terms of our previous discussion, we have here the observable
$c= \sqrt{M}J_z$, where $M$ is the measurement rate (defined in
terms of probe beam parameters), and the parameter $\theta = b$.
When $b$ is known, our state estimate evolves by the stochastic
master equation \cite{Thomsen2002}
\begin{eqnarray}
d\rho_b(t)=-i[H(b),\rho_b(t)]dt+\mathcal{D}[\sqrt{M}J_z]\rho_b(t)dt\nonumber\\
+\sqrt{\eta}\mathcal{H}[\sqrt{M}J_z]\left(2 \sqrt{M\eta}[y(t)
dt-\langle J_z\rangle_b dt]\right)\rho_b(t)\label{Equation::SME}
\end{eqnarray}
where $H(b)=\gamma J_y b$, $\gamma$ is the gyromagnetic ratio, and
\begin{eqnarray*}
\mathcal{D}[c]\rho&\equiv& c \rho
c^{\dagger}-(c^{\dagger}c\rho+\rho c^{\dagger}c)/2\\
\mathcal{H}[c]\rho&\equiv&c\rho+\rho
c^{\dagger}-\textrm{Tr}[(c+c^{\dagger})\rho]\rho
\end{eqnarray*}
The stochastic quantity $2 \sqrt{M\eta}[y(t) dt-\langle
J_z\rangle_b (t) dt]\equiv d\bar{W}(t)$ is a Wiener increment
(Gaussian white noise with variance $dt$) by the optimality of the
filter. The sensitivity of the photodetection per
$\sqrt{\textrm{Hz}}$ is represented by $1/(2\sqrt{M\eta})$, where
the quantity $\eta$ represents the quantum efficiency of the
detection. If $\eta = 0$, we are essentially ignoring the
measurement result and the conditional SME becomes a deterministic
unconditional master equation. If $\eta=1$, the detectors are
maximally efficient. Note that our initial state
$\rho(0)=\rho_b(0)$ is made equal to a coherent state (polarized
in x) and is representative of our prior information.
It can be shown that the unnormalized probability $\bar{p}(b,t)$
evolves according to \cite{Verstraete2001}
\begin{eqnarray}
d\bar{p}(b,t)=4M\eta\langle J_z\rangle_b (t) \bar{p}(b,t)y(t)
dt\label{Equation::ProbabilityUpdate}
\end{eqnarray}
The evolution \refeqns{SME}{ProbabilityUpdate} together with
\refeqn{FullEstimate} solve the problem completely, albeit in a
computationally expensive way. Clearly, for large ensembles it
would be advantageous to reduce the problem to a simpler
description.
If we consider only the estimate moments $\langle J_z \rangle(t)$,
$\langle \Delta J_z^2 \rangle(t)$, $\langle b \rangle(t)$, and
$\langle \Delta b^2 \rangle(t)$ and derive their evolution with
the above rules, it can be shown that the filtering equations for
those variables are closed under certain approximations. First,
the spin number $J$ must be large enough that the distributions
for $J_y$ and $J_z$ are approximately Gaussian for an x-polarized
coherent state. Second, we only consider times $t\ll 1/M$ because
the total spin becomes damped by the measurement at times
comparable to the inverse of the measurement rate.
Although this approach is rigorous and fail-safe, the resulting
filtering equations for the moments can be arrived at in a more
direct manner as discussed in \refapx{SimplifiedRepresentation}.
Essentially, the full quantum mechanical mapping from $h(t)$ to
$y(t)$ is equivalent to the mapping derived from a model which
appears classical, and assumes an actual, but random, value for
the $z$ component of spin. This correspondence generally holds for
a stochastic master equation corresponding to an arbitrary linear
quantum mechanical system with continuous measurement of
observables that are linear combinations of the canonical
variables \cite{Doherty2003}.
From this point on we will only consider the simplified Gaussian
representation (used in the next section) since it allows us to
apply established techniques from estimation and control theory.
The replacement of the quantum mechanical model with a classical
noise model is discussed more fully in the appendix. Throughout
this treatment, we keep in mind the constraints that the original
model imposed. As before, we assume $J$ is large enough to
maintain the Gaussian approximation and that times are short
compared to the inverse of the measurement-induced damping rate, $t\ll 1/M$.
Also, the description of our original problem demands that
$\langle \Delta J_z^2 \rangle(0)=J/2$ for a coherent state
\footnote{We assume throughout the paper that we have a system of
$N$ spin-$1/2$ particles, so for a polarized state along $x$,
$\langle J_x \rangle = J = N/2$ and $\sigma_{z0}=\langle \Delta
J_z^2 \rangle(0)=J/2=N/4$. This is an arbitrary choice and our
results are independent of any constituent spin value, apart from
defining these moments. In \cite{Geremia2003b}, for example, we
work with an ensemble of Cs atoms, each atom in a ground state of
spin-$4$. \label{Convention}}. Hence our prior information for the
initial value of the spin component will always be dictated by the
structure of Hilbert space.
\section{Optimal estimation and control}\label{Section::OptimalEstimationAndControl}
We now describe the dynamics of the simplified representation.
Given a linear state-space model (L), a quadratic performance
criterion (Q) and Gaussian noise (G), we show how to apply
standard LQG analysis to optimize the estimation and control
performance \cite{Jacobs1996}.
The system state we are trying to estimate is represented by
\begin{flushleft}
\textbf{State}
\begin{eqnarray}
\vec{x}(t)&\equiv&\begin{bmatrix} z(t) \\
b(t)\label{Equation::State}
\end{bmatrix}
\end{eqnarray}
\end{flushleft}
where $z(t)$ represents the small z-component of the collective
angular momentum and $b(t)$ is a scalar field along the y axis.
Our best guess of $\vec{x}(t)$, as we filter the measurement
record, will be denoted
\begin{flushleft}
\textbf{Estimate}
\begin{eqnarray}
\vec{m}(t)&\equiv&\begin{bmatrix} \tilde{z}(t) \\
\tilde{b}(t)\end{bmatrix}\label{Equation::Estimate}
\end{eqnarray}
\end{flushleft}
As stated in the appendix, we implicitly make the associations:
$\tilde{z}(t) = \langle J_z \rangle (t)=\textrm{Tr}[\rho(t)
J_z]$ and $\tilde{b}(t) = \int p(b,t) b \,db$, although no further
mention of $\rho(t)$ or $p(b,t)$ will be made.
We assume the measurement induced damping of $J$ to be negligible
for short times ($J \exp[-M t/2]\approx J$ if $t \ll 1/M$) and
approximate the dynamics as
\begin{flushleft}
\textbf{Dynamics}
\begin{eqnarray}
d\vec{x}(t)&=&\textbf{A}\vec{x}(t)dt+\textbf{B}u(t)
dt+\begin{bmatrix} 0 \\
\sqrt{\sigma_{bF}}\end{bmatrix}dW_1 \label{Equation::Dynamics}\\
\textbf{A}&\equiv&\begin{bmatrix} 0 & \gamma J \\ 0 &
-\gamma_b\end{bmatrix}\nonumber\\
\textbf{B}&\equiv&\begin{bmatrix} \gamma J\\
0\end{bmatrix}\nonumber\\
\mathbf{\Sigma_0}&\equiv&\begin{bmatrix} \sigma_{z0} & 0 \\ 0 &
\sigma_{b0}\end{bmatrix}\nonumber\\
\mathbf{\Sigma_1}&\equiv&\begin{bmatrix} 0 & 0 \\ 0 &
\sigma_{bF}\end{bmatrix}\nonumber
\end{eqnarray}
\end{flushleft}
where the initial value $\vec{x}(0)$ for each trial is drawn
randomly from a Gaussian distribution of mean zero and covariance
matrix $\mathbf{\Sigma}_0$. The initial field variance
$\sigma_{b0}$ is considered to be due to classical uncertainty,
whereas the initial spin variance $\sigma_{z0}$ is inherently
non-zero due to the original quantum state description.
Specifically, we impose $\sigma_{z0}=\langle \Delta J_z ^2 \rangle
(0)$. The Wiener increment $dW_1(t)$ has a Gaussian distribution
with mean zero and variance $dt$. $\mathbf{\Sigma_1}$ represents
the covariance matrix of the last vector in \refeqn{Dynamics}.
We have given ourselves a magnetic field control input, $u(t)$,
along the same axis, y, of the field to be measured, $b(t)$. We
have allowed $b(t)$ to fluctuate via a damped diffusion
(Ornstein-Uhlenbeck) process \cite{Gardiner2002}
\begin{eqnarray}
db(t)&=&-\gamma_b b(t)\,dt+\sqrt{\sigma_{bF}}\,dW_1
\label{Equation::bFluctuations}
\end{eqnarray}
The $b(t)$ fluctuations are represented in this particular way
because Gaussian noise processes are amenable to LQG analysis. The
variance of the field at any particular time is given by the
expectation $\sigma_{bFree}\equiv\textrm{E}[b(t)^2]=\sigma_{bF}/(2\gamma_b)$. (Throughout the paper we use the notation
$\textrm{E}[x(t)]$ to represent the average of the generally
stochastic variable $x(t)$ at the same point in time, over many
trajectories.) The bandwidth of the field is determined by the
frequency $\gamma_b$ alone. When considering the measurement of
fluctuating fields, a valid choice of prior might be
$\sigma_{b0}=\sigma_{bFree}$, but we choose to let $\sigma_{b0}$
remain independent. For constant fields, we set
$\sigma_{bFree}=0$, but $\sigma_{b0}\neq 0$.
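As a sketch of this noise model, a short Euler--Maruyama simulation of \refeqn{bFluctuations} (in Python, with illustrative parameter values) reproduces the steady-state variance $\sigma_{bFree}=\sigma_{bF}/(2\gamma_b)$:

```python
import numpy as np

# Euler-Maruyama simulation of db = -gamma_b*b*dt + sqrt(sigma_bF)*dW,
# checking the steady-state variance sigma_bFree = sigma_bF/(2*gamma_b).
# Parameter values are illustrative, not experimental.
rng = np.random.default_rng(0)
gamma_b, sigma_bF = 2.0, 0.5
dt, n_steps, n_traj = 2e-3, 10000, 1000
b = np.zeros(n_traj)                     # b(0) = 0 for every trajectory
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_traj)
    b += -gamma_b * b * dt + np.sqrt(sigma_bF) * dW
var_est = b.var()                        # empirical E[b(t)^2] at t = 20 >> 1/gamma_b
var_theory = sigma_bF / (2.0 * gamma_b)  # sigma_bFree
```

The empirical variance over trajectories should agree with $\sigma_{bFree}$ up to sampling error, confirming that $\gamma_b$ alone sets the field bandwidth while $\sigma_{bF}/(2\gamma_b)$ sets its stationary spread.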
Note that only the small angle limit of the spin motion is
considered. Otherwise we would have to consider different
components of the spin vector rotating into each other. The small
angle approximation would be invalid if a field caused the spins
to rotate excessively, but using adequate control ensures this
will not happen. Hence, we use control for essentially two reasons
in this paper: first to keep our small angle approximation valid,
and, second, to make our estimation process robust to our
ignorance of $J$. The latter point will be discussed in
\refsec{RobustPerformance}.
Our measurement of $z$ is described by the process
\begin{flushleft}
\textbf{Measurement}
\begin{eqnarray}
y(t) dt&=&\textbf{C}\vec{x}(t)dt + \sqrt{\sigma_M}dW_2(t)\label{Equation::Measurement}\\
\textbf{C}&\equiv&\begin{bmatrix} 1 & 0\end{bmatrix}\nonumber\\
\mathbf{\Sigma_2}&\equiv&\sigma_M\equiv 1/(4M\eta)\nonumber
\end{eqnarray}
\end{flushleft}
where the measurement shot-noise is represented by the Wiener
increment $dW_2(t)$ of variance $dt$. Again, $\sqrt{\sigma_M}$
represents the sensitivity of the measurement, $M$ is the
measurement rate (with unspecified physical definition in terms of
probe parameters), and $\eta$ is the quantum efficiency of the
measurement. The increments $dW_1$ and $dW_2$ are uncorrelated.
Following \cite{Jacobs1996}, the optimal estimator for mapping
$y(t)$ to $\vec{m}(t)$ takes the form
\begin{flushleft}
\textbf{Estimator}
\begin{eqnarray}
d\vec{m}(t)&=&\textbf{A}\vec{m}(t)dt+\textbf{B}u(t) dt
\nonumber\\
&&+\textbf{K}_O(t)[y(t)-\textbf{C}\vec{m}(t)]dt \label{Equation::KalmanFilter}\\
\vec{m}(0)&=&\begin{bmatrix} 0\\ 0 \end{bmatrix} \nonumber\\
\textbf{K}_O(t)&\equiv&\mathbf{\Sigma}(t)\textbf{C}^T
\mathbf{\Sigma}_{2}^{-1}\nonumber\\
\frac{d\mathbf{\Sigma}(t)}{dt}&=&\mathbf{\Sigma}_1+\textbf{A}\mathbf{\Sigma}(t)+ \mathbf{\Sigma}(t)\textbf{A}^{T}\nonumber\\
&&-\mathbf{\Sigma}(t)\textbf{C}^{T}\mathbf{\Sigma}_{2}^{-1}\textbf{C}\mathbf{\Sigma}(t) \label{Equation::Riccati}\\
\mathbf{\Sigma}(t)&\equiv&\begin{bmatrix} \sigma_{zR}(t) & \sigma_{cR}(t)\\ \sigma_{cR}(t) & \sigma_{bR}(t)\end{bmatrix}\label{Equation::Sigma}\\
\mathbf{\Sigma}(0)&=&\mathbf{\Sigma_0}\equiv\begin{bmatrix}
\sigma_{z0} & 0
\\ 0 & \sigma_{b0}\end{bmatrix}\label{Equation::Priors}
\end{eqnarray}
\end{flushleft}
\refeqn{KalmanFilter} is the Kalman filter which depends on the
solution of the matrix Riccati \refeqn{Riccati}. The Riccati
equation gives the optimal observation gain $\textbf{K}_O(t)$ for
the filter. The estimator is designed to minimize the average
quadratic estimation error for each variable:
$\textrm{E}[(z(t)-\tilde{z}(t))^2]$ and
$\textrm{E}[(b(t)-\tilde{b}(t))^2]$. If the model is correct, and
we assume the observer chooses his prior information
$\mathbf{\Sigma}(0)$ to match the actual variance of the initial
data $\mathbf{\Sigma_0}$, then we have the self-consistent result:
\begin{eqnarray*}
\sigma_{zE}(t)&\equiv&\textrm{E}[(z(t)-\tilde{z}(t))^2]=\sigma_{zR}(t)\\
\sigma_{bE}(t)&\equiv&\textrm{E}[(b(t)-\tilde{b}(t))^2]=\sigma_{bR}(t)
\end{eqnarray*}
Hence, the Riccati equation solution represents both the observer
gain \emph{and} the expected performance of an optimal filter
using that same gain.
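The following Python sketch integrates the Riccati \refeqn{Riccati} directly for a constant field ($\sigma_{bF}=\gamma_b=0$); the parameter values are illustrative stand-ins, not experimental numbers.

```python
import numpy as np

# Euler integration of the Riccati equation for the two-state (z, b) model
# with a constant field (sigma_bF = gamma_b = 0).
# All parameter values are illustrative.
gamma, J, M, eta = 1.0, 100.0, 1.0, 1.0
gamma_b, sigma_bF = 0.0, 0.0
sigma_M = 1.0 / (4.0 * M * eta)                 # measurement noise variance
A = np.array([[0.0, gamma * J], [0.0, -gamma_b]])
C = np.array([[1.0, 0.0]])
Sigma1 = np.array([[0.0, 0.0], [0.0, sigma_bF]])
Sigma = np.array([[J / 2.0, 0.0], [0.0, 1.0]])  # priors: sigma_z0 = J/2, sigma_b0 = 1

dt, n_steps = 1e-5, 20000                       # integrate to t = 0.2 (t < 1/M)
b_var = []                                      # records sigma_bR(t)
for _ in range(n_steps):
    dSigma = (Sigma1 + A @ Sigma + Sigma @ A.T
              - Sigma @ C.T @ C @ Sigma / sigma_M)
    Sigma = Sigma + dSigma * dt
    b_var.append(Sigma[1, 1])
```

Because $\sigma_{bF}=0$, the only source term for $\sigma_{bR}$ is the negative quadratic one, so the field estimation variance decreases monotonically from the prior, in line with the transient behavior discussed in \refsec{OptimalPerformance}.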
Now consider the control problem, which is in many respects dual
to the estimation problem. We would like to design a controller
to map $y(t)$ to $u(t)$ in a manner that minimizes the quadratic
cost function
\begin{flushleft}
\textbf{Minimized Cost}
\begin{eqnarray}
H&=&\int_0^T \left[\vec{x}^T(t) \textbf{P} \vec{x}(t) + u(t)
\textbf{Q} u(t)\right] \,dt\nonumber \\
&&+\vec{x}^T(T) \textbf{P}_1 \vec{x}(T) \label{Equation::ControllerCost}\\
\mathbf{P}&\equiv&\begin{bmatrix} p & 0
\\ 0 & 0\end{bmatrix}\nonumber\\
\mathbf{Q}&\equiv&q\nonumber
\end{eqnarray}
\end{flushleft}
where $\textbf{P}_1$ is the end-point cost. Only the ratio $p/q$
ever appears, of course, so we define the parameter
$\lambda\equiv\sqrt{p/q}$ and use it to represent the cost of
control. By setting $\lambda\rightarrow \infty$, as we often
choose to do in the subsequent analysis to simplify results, we
are putting no cost on our control output. This is unrealistic
because, for example, making $\lambda$ arbitrarily large implies
that we can apply transfer functions with finite gain at
arbitrarily high frequencies, which is not experimentally
possible. Despite this, we will often consider the limit
$\lambda\rightarrow \infty$ to set bounds on achievable estimation
and control performance. The optimal controller for minimizing
\refeqn{ControllerCost} is
\begin{flushleft}
\textbf{Controller}
\begin{eqnarray}
u(t)&=&-\textbf{K}_C(t) \vec{m}(t)\\
\textbf{K}_C(t)&\equiv&\textbf{Q}^{-1}\textbf{B}^T \textbf{V}(T-t)\nonumber\\
\frac{d\textbf{V}(T)}{dT}&=&\textbf{P}+\textbf{A}^{T}\textbf{V}(T)+\textbf{V}(T)\textbf{A} \nonumber\\
&&-\textbf{V}(T)\textbf{B}\textbf{Q}^{-1}\textbf{B}^T\textbf{V}(T)\label{Equation::VRiccati}\\
\textbf{V}(T=0)&\equiv&\textbf{P}_1\nonumber
\end{eqnarray}
\end{flushleft}
Here $\textbf{V}(T)$ is solved in reverse time $T$, which can be
interpreted as the time left to go until the stopping point. Thus
if $T\rightarrow\infty$, then we only need to use the steady state
of the $\textbf{V}$ Riccati \refeqn{VRiccati} to give the steady
state controller gain $\textbf{K}_C$ for all times. In this case,
we can ignore the (reverse) initial condition $P_1$ because the
controller is not designed to stop. Henceforth, we will make
$\textbf{K}_C$ equal to this constant steady state value, such
that the only time varying coefficients will come from
$\textbf{K}_O(t)$.
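A minimal numerical sketch of this steady-state limit: integrating the $\textbf{V}$ Riccati \refeqn{VRiccati} forward in reverse time $T$ from $\textbf{P}_1=0$ until convergence yields a constant gain $\textbf{K}_C$ that solves the corresponding algebraic Riccati equation and stabilizes the closed loop. Parameter values below are illustrative.

```python
import numpy as np

# Steady-state controller gain K_C from forward integration of the control
# Riccati equation in reverse time T.  Parameters (gamma*J = 100, gamma_b = 1,
# p = q = 1) are illustrative choices, not taken from the experiment.
gJ, gamma_b, p, q = 100.0, 1.0, 1.0, 1.0
A = np.array([[0.0, gJ], [0.0, -gamma_b]])
B = np.array([[gJ], [0.0]])
P = np.array([[p, 0.0], [0.0, 0.0]])

V = np.zeros((2, 2))            # end-point cost P_1 = 0
dT, n = 2e-4, 50000             # integrate to T = 10 (well past convergence)
for _ in range(n):
    dV = P + A.T @ V + V @ A - V @ B @ B.T @ V / q
    V = V + dV * dT

K_C = (B.T @ V) / q             # steady-state feedback gain
R = P + A.T @ V + V @ A - V @ B @ B.T @ V / q   # algebraic Riccati residual
```

At convergence the residual $\textbf{R}$ vanishes and the closed-loop matrix $\textbf{A}-\textbf{B}\textbf{K}_C$ has strictly negative eigenvalues, i.e., the constant gain stabilizes the small-angle dynamics as required.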
In principle, the above results give the entire solution to the
ideal estimation and control problem. However, in the non-ideal
case where our knowledge of the system is incomplete, e.g. $J$ is
unknown, our estimation performance will suffer. Notation is now
introduced which produces trivial results in the ideal case, but
is helpful otherwise. Our goal is to collect the above equations
into a single structure which can be used to solve the non-ideal
problem. We define the \emph{total} state of the system and
estimator as
\begin{flushleft}
\textbf{Total State}
\begin{eqnarray}
\vec{\theta}(t)&\equiv&\begin{bmatrix} \vec{x}(t) \\
\vec{m}(t)
\end{bmatrix}=\begin{bmatrix} z(t) \\ b(t) \\ \tilde{z}(t) \\ \tilde{b}(t)
\end{bmatrix}
\end{eqnarray}
\end{flushleft}
Consider the general case where the observer assumes the plant
contains spin $J'$, which may or may not be equal to the actual
$J$. All design elements depending on $J'$ instead of $J$ are now
labelled with a prime. Then it can be shown that the total state
dynamics from the above estimator-controller architecture are a
time-dependent Ornstein-Uhlenbeck process
\begin{flushleft}
\textbf{Total State Dynamics}
\begin{eqnarray}
d\vec{\theta}(t) &=& \mathbf{\alpha}(t) \vec{\theta}(t) dt + \mathbf{\beta}(t) d\vec{W}(t)\\
\mathbf{\alpha}(t)&\equiv&\begin{bmatrix} \textbf{A} & -\textbf{B}\textbf{K}'_C \\
\textbf{K}'_O(t) \textbf{C} &
\textbf{A}'-\textbf{B}'\textbf{K}'_C-\textbf{K}'_O(t) \textbf{C}
\end{bmatrix}\nonumber\\
\mathbf{\beta}(t)&\equiv&\begin{bmatrix}0 & 0 & 0 & 0 \\
0 & \sqrt{\sigma_{bF}} & 0 & 0 \\
0 & 0 & \sqrt{\sigma_M} K'_{O1}(t) & 0 \\
0 & 0 & \sqrt{\sigma_M} K'_{O2}(t) & 0
\end{bmatrix}\nonumber
\end{eqnarray}
\end{flushleft}
where the covariance matrix of $d\vec{W}$ is $dt$ times the
identity. Now the quantity of interest is the following covariance
matrix
\begin{flushleft}
\textbf{Total State Covariance}
\begin{eqnarray}
\mathbf{\Theta}(t)&\equiv& \textrm{E}[\vec{\theta}(t)\vec{\theta}^T(t)]\nonumber\\
&\equiv&\begin{bmatrix}\sigma_{zz} & \sigma_{zb} & \sigma_{z\tilde{z}} & \sigma_{z\tilde{b}} \\
\sigma_{zb} & \sigma_{bb} & \sigma_{b\tilde{z}} & \sigma_{b\tilde{b}} \\
\sigma_{z\tilde{z}} & \sigma_{b\tilde{z}} & \sigma_{\tilde{z}\tilde{z}} & \sigma_{\tilde{z}\tilde{b}} \\
\sigma_{z\tilde{b}} & \sigma_{b\tilde{b}} &
\sigma_{\tilde{z}\tilde{b}} &
\sigma_{\tilde{b}\tilde{b}} \end{bmatrix}\\
\sigma_{zz}&\equiv&\textrm{E}[z(t)^2]\nonumber\\
\sigma_{zb}&\equiv&\textrm{E}[z(t)b(t)]\nonumber\\
\vdots &\equiv&\vdots\nonumber
\end{eqnarray}
\end{flushleft}
It can be shown that this total covariance matrix obeys the
deterministic equations of motion
\begin{flushleft}
\textbf{Total State Covariance Dynamics}
\begin{eqnarray}
\frac{d\mathbf{\Theta}(t)}{dt}&=&\mathbf{\alpha}(t) \mathbf{\Theta}(t)+\mathbf{\Theta}(t) \mathbf{\alpha}^T(t) + \mathbf{\beta}(t) \mathbf{\beta}^T(t)\label{Equation::TotalDifferential}\\
\mathbf{\Theta}(t)&=&\exp\left[\int_{0}^{t}\mathbf{\alpha}(t')
dt'\right]\mathbf{\Theta}_0\exp\left[\int_{0}^{t}\mathbf{\alpha}^T(t')
dt'\right]\nonumber\\
&&+\int_0^tdt'\exp\left[\int_{t'}^{t}\mathbf{\alpha}(s)
ds\right]\mathbf{\beta}(t')\mathbf{\beta}^T(t')\nonumber\\
&&\quad\times\exp\left[\int_{t'}^{t}\mathbf{\alpha}^T(s) ds\right] \label{Equation::TotalIntegral}\\
\mathbf{\Theta}_0&=&\begin{bmatrix}\sigma_{z0} & 0 & 0 & 0 \\
0 & \sigma_{b0} & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \nonumber
\end{bmatrix}
\end{eqnarray}
\end{flushleft}
\refeqn{TotalIntegral} is the matrix analog of the standard
integrating-factor solution for scalar ordinary differential equations
with time-dependent coefficients \cite{Gardiner2002} (strictly, a
time-ordered exponential when $\mathbf{\alpha}(t)$ does not commute with
itself at different times). Whether we solve this
problem numerically or analytically, the solution provides the
quantity that we ultimately care about
\begin{flushleft}
\textbf{Average Magnetometry Error}
\begin{eqnarray}
\sigma_{bE}(t)&\equiv& \textrm{E}[(\tilde{b}(t)-b(t))^2]\nonumber\\
&=&\textrm{E}[b^2(t)]+\textrm{E}[\tilde{b}^2(t)]-2\textrm{E}[b(t)\tilde{b}(t)]\nonumber\\
&=&\sigma_{bb}(t)+\sigma_{\tilde{b}\tilde{b}}(t)-2\sigma_{b\tilde{b}}(t)\label{Equation::AveMagnetometryError}
\end{eqnarray}
\end{flushleft}
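For a time-independent $\mathbf{\alpha}$ the integrating-factor solution reduces to $\mathbf{\Theta}(t)=e^{\mathbf{\alpha}t}\mathbf{\Theta}_0 e^{\mathbf{\alpha}^T t}+\int_0^t e^{\mathbf{\alpha}(t-t')}\mathbf{\beta}\mathbf{\beta}^T e^{\mathbf{\alpha}^T (t-t')}\,dt'$, which can be checked against a direct integration of \refeqn{TotalDifferential}. The Python sketch below uses arbitrary stable matrices in place of the magnetometry parameters.

```python
import numpy as np

# Check of dTheta/dt = al Theta + Theta al^T + be be^T against its
# closed-form (integrating-factor) solution for constant al.
# The matrices below are arbitrary illustrative choices.
al = np.array([[-1.0, 0.5], [0.0, -2.0]])
be = np.array([[0.3, 0.0], [0.1, 0.2]])
Theta0 = np.array([[1.0, 0.0], [0.0, 0.5]])
t_final = 1.0

def expm_t(Mmat, t):
    # matrix exponential exp(Mmat*t) via eigendecomposition (Mmat diagonalizable)
    w, V = np.linalg.eig(Mmat)
    return ((V * np.exp(w * t)) @ np.linalg.inv(V)).real

# direct Euler integration of the Lyapunov differential equation
dt = 1e-4
Theta_euler = Theta0.copy()
for _ in range(int(round(t_final / dt))):
    Theta_euler = Theta_euler + (al @ Theta_euler + Theta_euler @ al.T
                                 + be @ be.T) * dt

# closed form: homogeneous part plus trapezoidal quadrature of the noise term
E = expm_t(al, t_final)
s_grid = np.linspace(0.0, t_final, 2001)
h = s_grid[1] - s_grid[0]
terms = np.array([expm_t(al, s) @ be @ be.T @ expm_t(al, s).T for s in s_grid])
integral = h * (terms[1:-1].sum(axis=0) + 0.5 * (terms[0] + terms[-1]))
Theta_closed = E @ Theta0 @ E.T + integral
```

The two computations of $\mathbf{\Theta}(t)$ agree to within the Euler discretization error, and the resulting covariance matrix is symmetric as it must be.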
When all parameters are known (and $J'=J$), this total state
description is unnecessary because
$\sigma_{bE}(t)=\sigma_{bR}(t)$. This equality is by
\emph{design}. However, when the wrong parameters are assumed
(e.g., $J'\neq J$), the equality does not hold, $\sigma_{bE}(t)\neq
\sigma_{bR}(t)$, and either \refeqn{TotalDifferential} or
\refeqn{TotalIntegral} must be used to find $\sigma_{bE}(t)$.
Before addressing this problem, we consider in detail the
performance in the ideal case, where all system parameters are
known by the observer, including $J$.
At this point, we have defined several variables. For clarity,
let us review the meaning of several before continuing. Inputs to
the problem include the field fluctuation strength $\sigma_{bF}$,
\refeqn{bFluctuations}, and the measurement sensitivity
$\sigma_M$, \refeqn{Measurement}. The prior information for the
field is labelled $\sigma_{b0}$, \refeqn{Priors}. The solution to
the Riccati equation is $\sigma_{bR}(t)$, \refeqn{Sigma}, and is
equal to the estimation variance $\sigma_{bE}(t)$,
\refeqn{AveMagnetometryError}, when the estimator model is
correct. In the next section, we additionally use $\sigma_{bS}$,
\refeqn{bSteady}, and $\sigma_{bT}(t)$, \refeqn{bTransient}, to
represent the steady state and transient values of
$\sigma_{bE}(t)$ respectively.
\section{Optimal performance: $J$ known}\label{Section::OptimalPerformance}
We start by observing qualitative characteristics of the
$b$-estimation dynamics. \reffig{bRiccati} shows the average
estimation performance, $\sigma_{bR}(t)$, as a function of time
for a realistic set of parameters. Notice that $\sigma_{bR}$ is
constant for small and large times, below $t_1$ and above $t_2$.
If $\sigma_{b0}$ is non-infinite then the curve is constant for
small times, as it takes some time to begin improving the estimate
from the prior. If $\sigma_{b0}$ is infinite, then $t_1=0$ and the
sloped transient portion extends without bound as $t\rightarrow
0$. At long times, $\sigma_{bR}$ will become constant again, but
only if the field is fluctuating ($\sigma_{bF}\neq 0$ and
$\gamma_b\neq 0$). The performance saturates because one can track
a field only so well if the field is changing and the
signal-to-noise ratio is finite. If the field to be tracked is
constant, then $t_2=\infty$ and the sloped portion of the curve
extends to zero as $t\rightarrow \infty$ (given the approximations
discussed in \refsec{QPESpins}). After the point where the
performance saturates ($t\gg t_2$), all of the observer and
control gains have become time-independent and the filter can be
described by a transfer function.
\begin{figure}
\caption{
The Riccati equation solution gives the ideal field estimation
performance. The parameters used here are $J=10^6$,
$\sigma_{z0}
\label{Figure::bRiccati}
\end{figure}
However, as will be shown, applying only this steady state
transfer function is non-optimal in the transient regime ($t_1\ll
t\ll t_2$), because the time dependence of the gains is clearly
crucial for optimal transient performance.
\subsection{Steady state performance}
We start by examining the steady state performance of the filter.
At sufficiently large times (made precise below),
$\textbf{K}_O$ becomes constant, and if we set $T\rightarrow
\infty$ (ignoring the end-point cost), then $\textbf{K}_C$ is
always constant. Setting $d\mathbf{\Sigma}/dt=0$ and
$d\textbf{V}/dt=0$ we find:
\begin{eqnarray*}
\textbf{K}_O(t)&\rightarrow&
\begin{bmatrix}
\sqrt{2 \gamma J
\sqrt{\frac{\sigma_{bF}}{\sigma_M}}+\gamma_b^2}-\gamma_b\\
\sqrt{\frac{\sigma_{bF}}{\sigma_M}}-\frac{\gamma_b}{\gamma
J}(\sqrt{2 \gamma J
\sqrt{\frac{\sigma_{bF}}{\sigma_M}}+\gamma_b^2}-\gamma_b)
\end{bmatrix}\\
\textbf{K}_C(t)&\rightarrow&\begin{bmatrix} \lambda &
1/(1+\frac{\gamma_b}{\gamma J \lambda})
\end{bmatrix}
\end{eqnarray*}
where $\lambda= \sqrt{\frac{p}{q}}$.
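As a numerical sanity check (a sketch with assumed unit-scale parameters, not the experimental values quoted elsewhere), the closed-form gain $\textbf{K}_O$ can be compared against a direct forward integration of the Riccati equation for the two-state model implied by the text: $dz/dt=\gamma J\,b$, $db=-\gamma_b b\,dt$ plus process noise of strength $\sigma_{bF}$, and measurement $y=z$ plus noise of strength $\sigma_M$.

```python
import numpy as np

# Integrate dSigma/dt = A S + S A^T + D - S C^T C S / sigma_M to steady
# state and compare the observer gain K_O = S C^T / sigma_M with the
# closed form quoted above. Parameters are illustrative, not experimental.
gamma, J, gamma_b = 1.0, 1.0, 0.1
sigma_bF, sigma_M = 1.0, 1.0

A = np.array([[0.0, gamma * J],
              [0.0, -gamma_b]])
C = np.array([[1.0, 0.0]])
D = np.diag([0.0, sigma_bF])          # process-noise covariance

S = np.zeros((2, 2))
dt = 1e-3
for _ in range(20_000):               # integrate to t = 20, well past convergence
    S = S + dt * (A @ S + S @ A.T + D - S @ C.T @ C @ S / sigma_M)

K_O = (S @ C.T / sigma_M).ravel()     # converged observer gain

# Closed-form steady-state gain quoted in the text
r = np.sqrt(sigma_bF / sigma_M)
K1 = np.sqrt(2 * gamma * J * r + gamma_b ** 2) - gamma_b
K2 = r - (gamma_b / (gamma * J)) * K1
print(K_O, [K1, K2])                  # the two pairs agree
```

The Euler iteration shares its fixed points with the continuous Riccati flow, so the converged gain matches the algebraic closed form to high precision.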
Now assuming the gains to be constant, we can derive the three
relevant transfer functions from $y(t)$ to $\vec{m}(t)$
(i.e., to $\tilde{z}$ and $\tilde{b}$) and to $u$. We proceed as follows.
First, we express the estimates in terms of only themselves and
the photocurrent
\begin{eqnarray*}
\frac{d\vec{m}(t)}{dt}&=&\textbf{A}\vec{m}(t)+\textbf{B}u(t) +
\textbf{K}_O(y(t)-\textbf{C}\vec{m}(t))\\
&=&\textbf{A}\vec{m}(t)+\textbf{B}(-\textbf{K}_C \vec{m}(t)) +
\textbf{K}_O(y(t)-\textbf{C}\vec{m}(t))\\
&=&(\textbf{A}-\textbf{B}\textbf{K}_C-\textbf{K}_O\textbf{C})\vec{m}(t)+
\textbf{K}_O y(t)
\end{eqnarray*}
To get the transfer functions, we take the Laplace transform of
the entire equation, use differential transform rules to give $s$
factors (where $s= j\omega$, $j=\sqrt{-1}$), ignore initial
condition factors, and rearrange terms. However, this process only
gives meaningful transfer functions if the coefficients
$\textbf{K}_O$ and $\textbf{K}_C$ are constant. Following this
procedure, we have
\begin{eqnarray*}
\vec{m}(s)&=&(s\textbf{I}-\textbf{A}+\textbf{B}\textbf{K}_C+\textbf{K}_O\textbf{C})^{-1}\textbf{K}_O
y(s)\\
&=& \vec{G}_m(s)y(s)\\
u(s)&=&-\textbf{K}_C \vec{m}(s)\\
&=&-\textbf{K}_C(s\textbf{I}-\textbf{A}+\textbf{B}\textbf{K}_C+\textbf{K}_O\textbf{C})^{-1}\textbf{K}_O
y(s)\\
&=& G_u(s) y(s)
\end{eqnarray*}
where
\begin{eqnarray*}
\vec{G}_m(s)=\begin{bmatrix} G_z(s)\\ G_b(s)
\end{bmatrix}
\end{eqnarray*}
The three transfer functions ($G_z(s)$, $G_b(s)$, and $G_u(s)$)
serve three different tasks. If estimation is the concern, then
$G_b(s)$ will perform optimally in steady state. Notice that,
while the Riccati solution is the same with and without control
($\textbf{K}_C$ non-zero or zero), this transfer function is not
the same in the two cases. Thus, even though the transfer functions
differ, they give the same steady state performance.
Let us now consider the controller transfer function $G_u(s)$ in
more detail. We find the controller to be of the form
\begin{equation}
G_{u}(s)=G_{u,DC}\frac{1+s/\omega_H}{1+(1+s/\omega_Q)s/\omega_L}
\end{equation}
Here each frequency $\omega$ represents a transition in the Bode
plot of \reffig{Bode}. A similar controller transfer function is
derived via a different method in \refapx{RobustFrequencySpace}.
If we are not constrained experimentally, we can make the
approximations
$\lambda^2\gg\sqrt{\sqrt{\sigma_{bF}/\sigma_M}/2\gamma J}$ and
$\gamma J\gg\gamma_b^2\sqrt{\sigma_{M}/\sigma_{bF}}$ giving
\begin{eqnarray*}
G_u(s)&\rightarrow&G_{u,DC}\frac{1+s/\omega_H}{1+s/\omega_L} \\
\omega_L &\rightarrow& \gamma_b\\
\omega_H&\rightarrow&\sqrt{\frac{\gamma J}{2}\sqrt{\frac{\sigma_{bF}}{\sigma_{M}}}}\\
\omega_C&\rightarrow&\sqrt{2\gamma
J\sqrt{\frac{\sigma_{bF}}{\sigma_{M}}}}=2 \omega_H\\
\omega_Q&\rightarrow&\lambda \gamma J\\
G_{u,DC}&\rightarrow&-\frac{1}{\gamma_b}\sqrt{\frac{\sigma_{bF}}{\sigma_{M}}} \\
G_{u,AC}&\rightarrow&G_{u,DC}\frac{\omega_L}{\omega_H}=-\sqrt{\frac{2}{\gamma
J}\sqrt{\frac{\sigma_{bF}}{\sigma_{M}}}}
\end{eqnarray*}
where $G_{u,AC}$ is the gain at high frequencies
($\omega>\omega_H$) and we find the closing frequency $\omega_C$
from the condition $|P_z(j\omega_C)G_u(j\omega_C)|=1$, with the
plant transfer function being the normal integrator $P_z(s)=\gamma
J/s$. Notice that the controller closes in the very beginning of
the flat high frequency region (hence with adequate phase margin)
because $\omega_C=2 \omega_H$.
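The quoted relation $\omega_C=2\omega_H$ follows from using the flat high-frequency gain $G_{u,AC}$ in the unity-gain condition; evaluating the full simplified $G_u$ numerically places the crossing slightly above $2\omega_H$, consistent with the claim that the loop closes at the very start of the flat region. A sketch with assumed parameters (the $\omega_Q$ roll-off is ignored, i.e., $\lambda\rightarrow\infty$):

```python
import numpy as np

# Illustrative (assumed) parameters; r = sqrt(sigma_bF/sigma_M).
gamma, J, gamma_b = 1.0, 1e4, 1e-2
sigma_bF, sigma_M = 1.0, 4.0
r = np.sqrt(sigma_bF / sigma_M)

w_H = np.sqrt(0.5 * gamma * J * r)      # zero of the approximate G_u
w_L = gamma_b                           # pole of the approximate G_u
G_DC = -(1.0 / gamma_b) * r

def loop_gain(w):
    """|P_z(jw) G_u(jw)| with P_z = gamma*J/s and the simplified G_u."""
    s = 1j * w
    G_u = G_DC * (1 + s / w_H) / (1 + s / w_L)
    return abs(gamma * J / s * G_u)

# Bisect for the unity-gain crossing above w_H (loop_gain decreases there).
lo, hi = w_H, 100 * w_H
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if loop_gain(mid) > 1.0 else (lo, mid)
w_C = 0.5 * (lo + hi)
print(w_C / w_H)   # close to 2, as quoted
```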
\begin{figure}
\caption{
The Bode plot of $G_u(s)$, the transfer function of the filter in
steady state, for a typical parameter regime. Notice that the
controller closes the plant with adequate phase margin to avoid
closed-loop instability. At high frequencies the controller rolls
off at $\omega_Q$ if $\lambda\neq \infty$.}
\label{Figure::Bode}
\end{figure}
Finally, consider the steady state estimation performance. These
are the same with and without control (hence
$\lambda$-independent) and, under the simplifying assumption
$\gamma J\gg\gamma_b^2\sqrt{\sigma_{M}/\sigma_{bF}}$, are given by
\begin{eqnarray}
\sigma_{zR}(t)&\rightarrow& \sqrt{2 \gamma J} \sigma_{M}^{3/4}\sigma_{bF}^{1/4}\equiv \sigma_{zS}\label{Equation::zSteady}\\
\sigma_{bR}(t)&\rightarrow& \sqrt{\frac{2}{\gamma
J}}\sigma_{bF}^{3/4}\sigma_M^{1/4}\equiv
\sigma_{bS}\label{Equation::bSteady}
\end{eqnarray}
If the estimator reaches steady state at $t \ll 1/M$, then the
above variance $\sigma_{zR}$ represents a limit to the amount of
spin-squeezing possible in the presence of fluctuating fields.
Also the $J$ scaling of the saturated field sensitivity
$\sigma_{bR}\propto J^{-1/2}$ is not nearly as strong as the $J$
scaling in the transient period $\sigma_{bR}\propto J^{-2}$. Next,
we demonstrate this latter result as we move from the steady state
analysis to calculating the estimation performance during the
transient period.
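A quick numerical cross-check under the stated assumption $\gamma J\gg\gamma_b^2\sqrt{\sigma_M/\sigma_{bF}}$ (with assumed, not experimental, parameter values): integrating the Riccati equation to steady state should reproduce the asymptotic forms $\sigma_{zS}$ and $\sigma_{bS}$ above, and their product satisfies $\sigma_{zS}\sigma_{bS}=2\sigma_M\sigma_{bF}$ identically.

```python
import numpy as np

# Parameters chosen so gamma*J >> gamma_b^2 sqrt(sigma_M/sigma_bF)
# (illustrative values only).
gamma, J, gamma_b = 1.0, 100.0, 1e-2
sigma_bF, sigma_M = 1.0, 1.0

A = np.array([[0.0, gamma * J], [0.0, -gamma_b]])
C = np.array([[1.0, 0.0]])
D = np.diag([0.0, sigma_bF])

# Forward-integrate the Riccati equation to its steady state.
S = np.zeros((2, 2))
dt = 1e-4
for _ in range(100_000):   # t = 10, far past the filter's convergence time
    S = S + dt * (A @ S + S @ A.T + D - S @ C.T @ C @ S / sigma_M)

# Asymptotic steady-state expressions from the text.
sigma_zS = np.sqrt(2 * gamma * J) * sigma_M ** 0.75 * sigma_bF ** 0.25
sigma_bS = np.sqrt(2 / (gamma * J)) * sigma_bF ** 0.75 * sigma_M ** 0.25

print(S[0, 0] / sigma_zS, S[1, 1] / sigma_bS)   # both ratios near 1
```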
\subsection{Transient performance}
We now consider the transient performance of the ideal filter: how
quickly and how well the estimator-controller will \emph{lock}
onto the signal and achieve steady state performance. In many
control applications, the transient response is not of interest
because the time it takes to acquire the lock is negligible
compared to the long steady state period of the system. However,
in systems where the measurement induces continuous decay, this
transient period can be a significant portion of the total
lifetime of the experiment.
\begin{table*}[t]
\caption{Field tracking error, $\sigma_{bR}(t)$, for different
initial variances of $b$ and $z$.}
\begin{tabular*}{\textwidth}{c@{\hspace{.5cm}}|@{\hspace{1cm}}c@{\extracolsep{\fill}}cc@{\hspace{.5cm}}} \hline\hline
& $\sigma_{b0}$ = 0 & $\sigma_{b0}$
& $\sigma_{b0}\rightarrow \infty$ \\ \hline
\rule[-3mm]{0mm}{8mm}
$\sigma_{z0}$ = 0 & $0$ & $3\sigma_{b0}\sigma_M \left(3\sigma_M+\gamma^2 J^2 \sigma_{b0}t^3\right)^{-1}$
& $3\sigma_M \left( \gamma^2 J^2 t^3 \right)^{-1}$ \\
\rule[-3mm]{0mm}{8mm}
$\sigma_{z0}$ & $0$ & $\frac{12\sigma_{b0}\sigma_M (\sigma_M+\sigma_{z0}t)}{12\sigma_M^2+\gamma^2J^2\sigma_{b0}\sigma_{z0}t^4 +4\sigma_M(3\sigma_{z0} t +\gamma^2 J^2 t^3 \sigma_{b0})}$
& $12\sigma_M(\sigma_M+\sigma_{z0}t) \left(\gamma^2 J^2 t^3(4\sigma_M+\sigma_{z0}t)\right)^{-1}$\\
\rule[-3mm]{0mm}{8mm}
$\sigma_{z0}\rightarrow\infty$ & $0$ & $12\sigma_{b0}\sigma_M \left(12\sigma_M+\gamma^2 J^2 t^3 \sigma_{b0}\right)^{-1}$
& $12\sigma_M \left(\gamma^2 J^2 t^3\right)^{-1}$ \\ \hline\hline
\end{tabular*}\label{Table::FieldVariance}
\end{table*}
\begin{table*}[t]
\caption{Spin tracking error, $\sigma_{zR}(t)$, for different
initial variances of $b$ and $z$.}
\begin{tabular*}{\textwidth}{c@{\hspace{.5cm}}|@{\extracolsep{\fill}}ccc@{\hspace{.5cm}}} \hline\hline
& $\sigma_{b0}=0$ & $\sigma_{b0}$ & $\sigma_{b0}\rightarrow\infty$ \\\hline
\rule[-3mm]{0mm}{8mm} $\sigma_{z0}=0$ & $0$ &
$3\gamma^2
J^2\sigma_{b0} \sigma_M t^2 \left(3\sigma_M+\gamma^2 J^2\sigma_{b0}t^3 \right)^{-1}$
& $3\sigma_M t^{-1}$ \\
\rule[-3mm]{0mm}{8mm} $\sigma_{z0}$ &
$\sigma_M\sigma_{z0}\left(\sigma_M+
\sigma_{z0}t\right)^{-1}$
&$\frac{4\sigma_M(\gamma^2J^2\sigma_{b0}
\sigma_{z0}t^3+3\sigma_M(\sigma_{z0}+\gamma^2 J^2
t^2\sigma_{b0}))}{12\sigma_M^2+\gamma^2J^2\sigma_{b0}\sigma_{z0}t^4
+4\sigma_M(3\sigma_{z0} t +\gamma^2 J^2 t^3
\sigma_{b0})}$
& $4\sigma_M(3\sigma_M+\sigma_{z0}t) \left( t(4\sigma_M+\sigma_{z0}t) \right)^{-1}$ \\
\rule[-3mm]{0mm}{8mm} $\sigma_{z0} \rightarrow \infty$
& $\sigma_M t^{-1}$ & $4\sigma_M(3\sigma_M+\gamma^2J^2t^3\sigma_{b0}) \left(12\sigma_M
t+\gamma^2 J^2 t^4\sigma_{b0}\right)^{-1}$
& $4\sigma_M t^{-1}$ \\ \hline\hline
\end{tabular*}\label{Table::SpinVariance}
\end{table*}
We will evaluate the transient performance of two different
filters. First, we look at the ideal dynamic version, with
time-dependent observer gains derived from the Riccati equation; this
reduces to a transfer function at long times, once the gains have
become constant. Second, we numerically look at the case where the
same steady state transfer functions are used for the
\emph{entire} duration of the measurement. Because the gains are
not adjusted smoothly, the small time performance of this
estimator suffers. Of course, for long times the estimators are
equivalent.
\subsubsection{Dynamic estimation and control}
Now consider the transient response of $\mathbf{\Sigma}(t)$
(giving $\textbf{K}_O(t)$). We will continue to impose that
$\textbf{V}$ (thus $\textbf{K}_C$) is constant because we are not
interested in any particular stopping time.
The Riccati equation for $\mathbf{\Sigma}(t)$ (\refeqn{Riccati})
appears difficult to solve because it is non-linear. Fortunately,
it can be reduced to a much simpler linear problem. See
\refapx{RiccatiSolution} for an outline of this method.
The solution to the fluctuating field problem ($\sigma_{bF}\neq 0$
and $\gamma_b\neq0$) is represented in \reffig{bRiccati}. This
solution is simply the constant field solution ($\sigma_{bF}= 0$
and $\gamma_b = 0$) smoothly saturating at the steady state value
of \refeqn{bSteady} at time $t_2$. Thus, considering the long
time behavior of the constant field solution will tell us about
the transient behavior when measuring fluctuating fields. Because
the analytic form for the constant field solution is simple, we
consider only it and disregard the full analytic form of the
fluctuating field solution.
The analytic form of $\mathbf{\Sigma}(t)$ is highly instructive.
The general solutions to $\sigma_{bR}(t)$ and $\sigma_{zR}(t)$,
with arbitrary prior information $\sigma_{b0}$ and $\sigma_{z0}$,
are presented in the central entries of
\reftabs{FieldVariance}{SpinVariance} respectively. The other
entries of the tables represent the limits of these somewhat
complicated expressions as the prior information assumes extremely
large or small values. Here, we notice several interesting
trade-offs.
First, the left hand column of \reftab{FieldVariance} is zero
because if a constant field is being measured, and we start with
complete knowledge of the field ($\sigma_{b0}=0$), then our job is
completed trivially. Now notice that if $\sigma_{b0}$ and
$\sigma_{z0}$ are both non-zero, then at long times we have the
lower right entry of \reftab{FieldVariance}
\begin{eqnarray}
\sigma_{bR}(t)=\frac{12\sigma_M}{\gamma^2 J^2
t^3}\equiv\sigma_{bT}(t)\label{Equation::bTransient}
\end{eqnarray}
This is the same result one gets when the estimation procedure is
simply to perform a least-squares line fit to the noisy
measurement curve for constant fields. (Note that all of these
results are equivalent to the solutions of \cite{Geremia2003}, but
without $J$ damping.) If it were physically possible to ensure
$\sigma_{z0}=0$, then our estimation would improve by a factor of
four to the upper right result. However, quantum mechanics imposes
that this initial variance is non-zero (e.g., $\sigma_{z0}=J/2$
for a coherent state and less, but still non-zero, for a squeezed
state), and the upper right solution is unattainable.
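The correspondence with a least-squares line fit can be made concrete. For sampled data of measurement spectral density $\sigma_M$ (per-sample variance $\sigma_M/\Delta t$), the ordinary-least-squares slope variance is $(\sigma_M/\Delta t)/\sum_k(t_k-\bar{t})^2$, which reduces to $12\sigma_M/t^3$ up to a correction of order $1/N^2$; dividing by $(\gamma J)^2$ recovers $\sigma_{bT}(t)$. A sketch with illustrative (assumed) numbers:

```python
import numpy as np

gamma, J, sigma_M = 1.0, 100.0, 0.5   # illustrative values
t, N = 2.0, 10_000                    # total time, number of samples
dt = t / N
tk = dt * np.arange(1, N + 1)

# White measurement noise of spectral density sigma_M has per-sample
# variance sigma_M/dt; OLS slope variance follows directly.
var_slope = (sigma_M / dt) / np.sum((tk - tk.mean()) ** 2)
var_b = var_slope / (gamma * J) ** 2  # propagate through b = slope/(gamma J)

formula = 12 * sigma_M / (gamma ** 2 * J ** 2 * t ** 3)
print(var_b / formula)                # 1 up to O(1/N^2)
```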
Now consider the dual problem of spin estimation performance
$\sigma_{zR}(t)$ as represented in \reftab{SpinVariance}, where we
can make analogous trade-off observations. If there is no field
present, we set $\sigma_{b0}= 0$ and
\begin{eqnarray}
\sigma_{zR}(t)&=&\frac{\sigma_{z0}\sigma_M}{\sigma_M + t
\sigma_{z0}}
\end{eqnarray}
When $\sigma_{zR}(t)$ is interpreted as the quantum variance
$\langle \Delta J_z^2\rangle (t)$, this is the ideal (non-damped)
conditional spin-squeezing result which is valid at $t \ll 1/M$,
before damping in $J$ begins to take effect \cite{Thomsen2002}. If
we consider the solution for $t\gg 1/JM$, we have the lower left
entry of \reftab{SpinVariance}, $\sigma_{zR}(t)=\sigma_M/t$.
However, if we must include constant field uncertainty in our
estimation, then our estimate becomes the lower right entry
$\sigma_{zR}(t)=4\sigma_M/t$ which is, again, a factor of four
worse.
If our task is field estimation, intrinsic quantum mechanical
uncertainty in $z$ limits our performance just as, if our task is
spin-squeezed state preparation, field uncertainty limits our
performance.
\subsubsection{Transfer function estimation and control}
Suppose that the controller did not have the capability to adjust
the gains in time as it tracked a fluctuating field. One approach
would then be to apply the steady state transfer functions derived
above for the \emph{entire} measurement. While this approach
performs optimally in steady state, it approaches the steady state
in a non-optimal manner compared to the dynamic controller.
\reffig{TFTransient} demonstrates this poor transient performance
for tracking fluctuating fields of differing bandwidth. Notice
that the performance only begins to improve around the time that
the dynamic controller saturates.
Also notice that the transfer function $G_b(s)$ is dependent on
whether or not the state is being controlled, i.e. whether or not
$\lambda$ is zero. The performance shown in \reffig{TFTransient}
is for one particular value of $\lambda$, but others will give
different estimation performances for short times. Still, all of
the transfer functions generated from any value of $\lambda$ will
limit to the same performance at long times. Also, all of them
will perform poorly compared to the dynamic approach during the
transient time.
\section{Robust performance: $J$ unknown}\label{Section::RobustPerformance}
Until this point, we have assumed the observer has complete
knowledge of the system parameters, in particular the spin number
$J$. We will now relax this assumption and consider the
possibility that, for each measurement, the collective spin $J$ is
drawn randomly from a particular distribution. Although we will
be ignorant of a given $J$, we may still possess knowledge about
the distribution from which it is derived. For example, we may be
certain that $J$ never assumes a value below a minimal value
$J_{min}$ or above a maximal value $J_{max}$. This is a realistic
experimental situation, as it is unusual to have particularly long
tails on, for example, trapped atom number distributions. We do
not explicitly consider the problem of $J$ fluctuating during an
individual measurement, although the subsequent analysis can
clearly be extended to this problem.
Given a $J$ distribution, one might imagine completely
re-optimizing the estimator-controller with the full distribution
information in mind. Our initial approach is more basic and in
line with robust control theory: we design our filter as before,
assuming a particular $J'$, then analyze how well this filter
performs on an ensemble with $J \neq J'$. With this information in
mind, we can decide if estimator-controllers built with $J'$ are
robust, with and without control, given the bounds on $J$. We will
find that, under certain conditions, using control makes our
estimates robust to uncertainty about the total spin number.
\begin{figure}
\caption{
Estimation performance for estimators based on the dynamic gain
solution of the Riccati equation, compared against estimators with
constant estimation gain. The latter are the transfer function
limits of the former, hence they have the same long term
performance. Three different bandwidth $b$ processes are
considered.}
\label{Figure::TFTransient}
\end{figure}
The essential reason for this robustness is that when a control
field is applied to zero the measured signal, that control field
must be approximately equal to the field to be tracked. Because
$J$ is basically an effective gain, variations in $J$ will affect
the performance, but not critically, so the error signal will
still be approximately zero. If the applied signal is set to be
the estimate, then the tracking error must also be approximately
zero. (See \refapx{RobustFrequencySpace} for a robustness analysis
along these lines in frequency space. These methods were used in
\cite{Geremia2003b}, but neglect the transient behavior.)
Of course, this analysis assumes that we can apply fields with the
same precision that we measure them. While the precision with
which we can apply a field is experimentally limited, we here
consider the ideal case of infinite precision. In this admittedly
idealized problem, our estimation is limited by only the
measurement noise and our knowledge of $J$.
First, to motivate this problem, we describe how poorly our
estimator performs given ignorance about $J$ without control.
\subsection{Uncontrolled ignorance}
Let us consider the performance of our estimation procedure at
estimating constant fields when $J'\neq J$. In general, this
involves solving the complicated total covariance matrix
\refeqn{TotalIntegral}. However, in the long time limit ($t\gg
1/JM$) of estimating constant fields, the procedure amounts to
simply fitting a line to the noisy measurement with a
least-squares estimate. Suppose we record an open-loop
measurement which appears as a noisy sloped line for small angles
of rotation due to the Larmor precession. Regardless of whether or
not we know $J$, we can measure the slope of that line and
estimate it to be $\tilde{m}$. If we knew $J$, we would know how
to extract the field from the slope correctly:
$\tilde{b}=\tilde{m}/\gamma J$. If we assumed the wrong spin
number, $J' \neq J$, we would get the non-optimal estimate:
$\tilde{b}'=\tilde{m}/\gamma J'=\tilde{b}J/J'$.
First assume that this is a \emph{systematic} error and $J$ is
unknown, but the same, on every trial. We assume that the
constant field is drawn randomly from the $\sigma_{b0}$
distribution for every trial. In this case, if we are wrong, then
we are always wrong by the same factor. It can be shown that the
error always saturates
\begin{eqnarray*}
\sigma_{bE}\rightarrow (1-f)^2\sigma_{b0}
\end{eqnarray*}
where $f = J/J'$. Of course, because this error is systematic, the
variance of the estimate does not saturate, only the error. This
problem is analogous to ignorance of the constant electronic gains
in the measurement and can also be calibrated away.
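A short Monte Carlo illustrates the saturation at $(1-f)^2\sigma_{b0}$: in the long-time, noise-free limit the wrong-$J'$ estimate is simply $\tilde{b}'=f\,b$, so the squared error averaged over the prior is $(1-f)^2\sigma_{b0}$ (illustrative values assumed below).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_b0, f = 2.0, 0.7                # f = J/J', illustrative values
# A fresh constant field drawn from the prior on each of many trials.
b = rng.normal(0.0, np.sqrt(sigma_b0), 1_000_000)
b_est = f * b                         # long-time estimate with the wrong J'
err = np.mean((b_est - b) ** 2)
print(err, (1 - f) ** 2 * sigma_b0)   # both near 0.18
```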
However, a significant problem arises when, on every trial, a
constant $b$ is drawn at random \emph{and} $J$ is drawn at random
from a distribution, so the error is no longer systematic. In this
case, we would not know whether to attribute the size of the
measured slope to the size of $J$ or to the size of $b$. Given
the same $b$ every trial, all possible measurement curves fan out
over some angle due to the variation in $J$. After measuring the
slope of an individual line to beyond this fan-out precision, it
makes no sense to continue measuring.
We should also point out procedures for estimating fields in
open-loop configuration, but \emph{without} the small angle
approximation. For constant large fields, we could observe many
cycles before the spin damped significantly. By fitting the
amplitude and frequency independently, or computing the Fourier
transform, we could estimate the field somewhat independently of
$J$, which only determines the amplitude. However, the point here
is that $b$ might not be large enough to give many cycles before
the damping time or any other desired stopping time. In this case,
we could not independently fit the amplitude and frequency because
they appear as a product in the initial slope. Similar
considerations apply for the case of fluctuating $b$ and
fluctuating $J$. See \cite{Bretthorst1988}, for a complete
analysis of Bayesian spectrum analysis with free induction decay
examples.
Fortunately, using precise control can make the estimation process
relatively robust to such spin number fluctuations.
\subsection{Controlled ignorance: steady state performance}
We first analyze how the estimator designed with $J'$ performs on
a plant with $J$ at tracking fluctuating fields with and without
control. To determine this we calculate the steady state of
\refeqn{TotalDifferential}.
For the case of no control ($\lambda=0$), we simplify the
resulting expression by taking the same large $J'$ approximation
as before. This gives the steady state uncontrolled error
\begin{eqnarray*}
\sigma_{bE} &\rightarrow&
(1-f)^2\frac{\sigma_{bF}}{2\gamma_b}\\
&=&(1-f)^2\sigma_{bFree}
\end{eqnarray*}
where $f=J/J'$. Because the variance of the fluctuating $b$ is
$\sigma_{bFree}$, the uncontrolled estimation performs worse than
no estimation at all if $f>2$.
\begin{figure}
\caption{
Steady state estimation performance for estimator designed with
$J'=10^6$, and actual spin numbers: $J=J'\times [0.5,\, 0.75,\,
1,\, 1.25,\, 2,\, 10,\, 100]$. Other parameters: $\gamma=10^6$,
$M=10^4$, $\gamma_b=10^5$, $\sigma_{bFree}
\label{Figure::SteadyStateJp}
\end{figure}
\begin{figure}
\caption{
Transient estimation performance for controller designed with
$J'=10^6$, and actual spin numbers: $J=J'\times [0.75,\, 1,\,
1.25,\, 2,\, 10,\, 100,\, 1000]$. Other parameters:
$\gamma=10^6$, $M=10^4$, $\gamma_b=0$, $\sigma_{bFree}
\label{Figure::TransientJp}
\end{figure}
On the other hand, when we use precise control the performance
improves dramatically. We again simplify the steady state
solution with the large $J'$ and $\lambda$ assumptions from
before, giving
\begin{eqnarray*}
\sigma_{bS}(J,J') &\rightarrow& \left(\frac{1+f}{2f}\right) \sqrt{\frac{2}{\gamma J'}}\sigma_{bF}^{3/4}\sigma_M^{1/4} \\
&=&\left(\frac{1+f}{2f}\right)\sigma_{bS}(J')
\end{eqnarray*}
where $\sigma_{bS}(J,J')$ is the steady state controlled error
when a plant with $J$ is controlled with a $J'$ controller and
$\sigma_{bS}(J')$ is the error when $J=J'$. One simple
interpretation of this result is that if we set $J'$ to be the
minimum of the $J$ distribution ($f>1$) then we never do worse
than $\sigma_{bS}(J')$ and we never do better than twice as well
($f\rightarrow\infty$). See \reffig{SteadyStateJp} for a
demonstration of this performance.
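The claimed bounds follow directly from the pre-factor $(1+f)/(2f)$: it equals $1$ at $f=1$, decreases monotonically toward $1/2$ as $f\rightarrow\infty$, and exceeds $1$ only for $f<1$. A minimal check:

```python
def robust_factor(f):
    """Steady-state penalty (1+f)/(2f) for a plant with J = f * J'."""
    return (1.0 + f) / (2.0 * f)

# f = 1: matched design; f large: at best twice as good; f < 1: worse.
print(robust_factor(1.0), robust_factor(1e6), robust_factor(0.5))
```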
\subsection{Controlled ignorance: transient performance}
Now consider measuring constant fields with the wrong assumed
$J'$. Again, when control is not used, the error saturates at
\begin{eqnarray*}
\sigma_{bE}\rightarrow (1-f)^2\sigma_{b0}
\end{eqnarray*}
When control is used, the transient performance again improves
under certain conditions. The long time transient solution of
\refeqn{TotalDifferential} is difficult to manage analytically,
yet the behavior under certain limits is again simple. For large
$\lambda$ and $J'$, and for $f>1/2$, we numerically find the
transient performance to be approximately
\begin{eqnarray}
\sigma_{bT}(J,J') &\rightarrow& \left(\frac{f^2+2}{4f^2-1}\right) \frac{12 \sigma_M}{\gamma^2 J'^2 t^3}\nonumber \\
&=&\left(\frac{f^2+2}{4f^2-1}\right)\sigma_{bT}(J')\label{Equation::Transientf}
\end{eqnarray}
where $\sigma_{bT}(J,J')$ is the transient controlled error when a
plant with $J$ is controlled with a $J'$ controller and
$\sigma_{bT}(J')$ is the error when $J=J'$. See
\reffig{TransientJp} for a demonstration of this performance for
realistic parameters. As $f\rightarrow\infty$ the $f$-dependent
pre-factor saturates at a value of $1/4$. However, as
$f\rightarrow 1/2$ then the system takes longer to reach such a
simple asymptotic form, and the solution of \refeqn{Transientf}
becomes invalid.
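The transient pre-factor $(f^2+2)/(4f^2-1)$ behaves as stated: it equals $1$ at $f=1$, falls to $1/4$ as $f\rightarrow\infty$, and blows up as $f\rightarrow 1/2$, signalling the breakdown of the asymptotic form. A minimal check (the blow-up near $f=1/2$ is a property of the pre-factor only, not of the full solution):

```python
def transient_factor(f):
    """Transient penalty (f^2 + 2)/(4 f^2 - 1) for a plant with J = f * J'."""
    return (f ** 2 + 2.0) / (4.0 * f ** 2 - 1.0)

print(transient_factor(1.0), transient_factor(1e3), transient_factor(0.51))
```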
Accordingly, one robust strategy would be the following. Suppose
that the lower bound of the $J$-distribution was known and equal
to $J_{min}$. Also assume that $\sigma_{bT}(J_{min})$ represents
an \emph{acceptable} level of performance. In this case, we could
simply design our estimator based on $J'=J_{min}$ and we would be
guaranteed at least the performance $\sigma_{bT}(J_{min})$ and at
best the performance $\sigma_{bT}(J_{min})/4$.
This approach would be suitable for experimental situations
because typical $J$ distributions are narrow: the difference
between $J_{min}$ and $J_{max}$ is rarely greater than an order of
magnitude. Thus, the overall sacrifice in performance between the
ideal case and the robust case would be small. The estimation
performance still suffers because of our ignorance of $J$, but not
nearly as much as in the uncontrolled case.
\section{Conclusion}\label{Section::Conclusion}
The analysis of this paper contained several key steps which
should be emphasized. Our first goal was to outline the proper
approach to quantum parameter estimation. The second was to
demonstrate that reduced representations of the full filtering
problem are relevant and convenient because, if a simple
representation can be found, then existing classical estimation
and control methods can be readily applied. The characteristic
that led to this simple description was the approximately Gaussian
nature of the problem. Next, we attempted to present basic
classical filtering and control methodology in a self-contained,
pedagogical format. The results emphasized the inherent trade-offs
in simultaneous estimation of distinct, but dynamically coupled,
system parameters. Because these methods are potentially critical
in any field involving optimal estimation, we consider the full
exposition of this elementary example to be a useful resource for
future analogous work.
We have also demonstrated the general principle that precision
feedback control can make estimation robust to the uncertainty of
system parameters. Despite the need to assume that the controller
produced a precise cancellation field, this approach deserves
further investigation because of its inherent ability to precisely
track broadband field signals \cite{Geremia2003b}. It is
anticipated that these techniques will become more pervasive in
the experimental community as quantum systems are refined to
levels approaching their fundamental limits of performance.
\acknowledgments
This work was supported by the NSF (PHY-9987541, EIA-0086038), the
ONR (N00014-00-1-0479), and the Caltech MURI Center for Quantum
Networks (DAAD19-00-1-0374). JKS acknowledges a Hertz fellowship.
The authors thank Ramon van Handel for useful discussions.
Additional information is available at
http://minty.caltech.edu/Ensemble.
\appendix
\section{Simplified representation of the plant}
\label{Appendix::SimplifiedRepresentation}
In \refsec{QuantumParameterEstimation} we outlined a general
approach to quantum parameter estimation based on the stochastic
master equation (SME), but subsequently we derived optimal
observer and controller gains from an explicit representation of
the plant dynamics (\refeqn{Dynamics}). This representation
appears classical in that the plant state is given by a scalar
variable, $z$, rather than a density operator. In this Section we
present a derivation of this simplified representation and discuss
the equivalence of our approach to the original quantum estimation
problem.
From the perspective of quantum filtering theory we will simply
show that a Gaussian approximation to the relevant SME can be
viewed as a Kalman filter, which in turn induces a simplified
representation of the dynamics of the spin state. In this
simplified representation the quantum state of the spin system is
replaced by a scalar variable, $z$, and $\langle J_z\rangle(t)$ is
viewed as the optimal estimate of the random process $z(t)$.
Equations for $dz(t)$ and its relation to the observed
photocurrent $y(t)dt$ are given in \refeqns{Simple}{Currents},
which have the convenient property of being formally
time-invariant. The technical approach in the main body of the
text is then to replace \refeqn{ReducedSME1}, which is derived
from the SME, by a state-space observer derived directly from the
simplified model of \refeqn{Simple}. By doing so we achieve
transparent correspondence with classical estimation and control
theory. We should note that the diagrams in \reffig{Equivalences}
indicate signal flows and dependencies in a way that is quite at
odds with the quantum filtering perspective. This Figure is meant
solely to motivate the simplified model (\refeqn{Simple}) for
readers who prefer a more traditional quantum optics perspective,
in which the Ito increment in the SME corresponds to optical
shot-noise (as opposed to an innovation process derived from the
photocurrent) and the SME itself plays the role of a `physical'
evolution equation mapping $h(t)$ to $y(t)dt$.
Adopting the latter perspective, let us briefly discuss (with
reference to the top diagram in \reffig{Equivalences}) the overall
structure of our estimation problem. The physical system that
exists in the laboratory (the spins and optical probe beam) acts
as a transducer, whose key role in the magnetometry scheme is to
imprint a statistical signature of the magnetic field $h(t)$ onto
the observable photocurrent, $y(t)dt$. Hence whatever theoretical
model we adopt for describing the spin and probe dynamics must
provide an accurate description of the mapping from $h(t)$ to
$y(t)dt$, as represented by the \textit{Plant} in
\reffig{Schematic}C. An open-loop estimator, designed on the basis
of this plant model, would construct a conditional probability
distribution for $h(t)$ based on passive observation of $y(t)dt$.
In a closed-loop estimation procedure we would allow the
controller to apply compensation fields to the system in order to
gain accuracy and/or robustness. In either case, the essential
role of the spin-probe (plant) model in the design process is to
provide an accurate description of the influence of an arbitrary
time-dependent field $h(t)$ on the photocurrent $y(t)dt$. Note
that the consideration of {\em arbitrary} $h(t)$ subsumes all
possible effects of real-time feedback.
\begin{figure}
\caption{Equivalent models for the filtering problem (see
discussion at the beginning of \refapx{SimplifiedRepresentation}).
\label{Figure::Equivalences}}
\end{figure}
Thomsen and co-workers \cite{Thomsen2002} have derived an accurate
plant model for our magnetometry problem, in the form of an SME
(\refeqn{SME}). Following a common convention in quantum optics,
let us here write this SME and the corresponding photocurrent
equation in the form
\begin{eqnarray*}
d\rho(t)&=&-i\,dt[H(h),\rho(t)]+\mathcal{D}[\sqrt{M}J_z]\rho(t)\,dt\\
&&+\sqrt{\eta}\mathcal{H}[\sqrt{M}J_z]\rho(t)d\bar{W}(t),\\
y(t)dt&=&\langle J_z\rangle(t)dt + \sqrt{\sigma_M}\,d\bar{W}(t),
\end{eqnarray*}
where $H(h)=\gamma h J_y$ and $\rho(t)$ is the state of the spin
system conditioned on the measurement record $y(t)dt$. The
quantity $d\bar{W}(t)$ is a Wiener increment that heuristically
represents shot-noise in the photodetection process
\cite{Wiseman1993}; the equations above are to be interpreted as Ito
stochastic differential equations. If $h(t)$ and $y(t)dt$ are
considered as input and output signals, respectively, this pair of
equations jointly implements a plant transfer function as depicted
in \reffig{Equivalences}, with $\rho(t)$ taking on the role of the
plant state.
For a large spin ensemble, however, $\rho(t)$ will have very high
dimension and it would be impractical to utilize the full SME for
design purposes. It is straightforward to derive a reduced model
by employing a moment expansion for the observable of interest.
Extracting the conditional expectation values of the first two
moments of $J_z$ from the SME gives the following scalar stochastic
differential equations:
\begin{eqnarray*}
d\langle J_z \rangle(t)&=&\gamma \langle J_x \rangle(t) h(t)\,dt + \frac{\langle \Delta J_z^2\rangle(t)}{\sqrt{\sigma_M}} d\bar{W}(t)\\
d\langle \Delta J_z^2\rangle(t)&=&-\frac{\langle\Delta J_z^2\rangle^2(t)}{\sigma_M}\,dt\\
&&-i\gamma\langle[\Delta J_z^2 , J_y]\rangle(t)h(t)\,dt\\
&&+ \frac{\langle \Delta J_z^3 \rangle(t)}{\sqrt{\sigma_M}}
d\bar{W}(t)
\end{eqnarray*}
If the spins are initially fully polarized along $x$ and the spin
angle $\sim \langle J_z\rangle/\langle J_x\rangle$ is kept small
({\it e.g.}, by active control), then, by using the evolution
equation for the $x$ component, we can show $\langle J_x
\rangle(t) \approx J \exp[-M t /2] \approx J$ for times $t<1/M$.
Making the Gaussian approximation at small times, the third order
terms $\langle \Delta J_z^3 \rangle$ and $-i\gamma\langle[\Delta
J_z^2 , J_y]\rangle(t)h(t)$ can be neglected. The
Holstein-Primakoff transformation \cite{Holstein1940}, commonly
used in the condensed matter physics literature, makes it possible
to derive this Gaussian approximation as an expansion in $1/J$.
Both of the removed terms can be shown to be smaller than the
retained non-linear term by a factor of order $1/(J\sqrt{J})$.
Additionally, the second removed term will be reduced if
$h(t)\approx 0$ by active control.
These approximations give
\begin{eqnarray}
d\langle J_z \rangle(t)&=&\gamma J h(t)\,dt + \frac{\langle \Delta J_z^2\rangle(t)}{\sqrt{\sigma_M}} d\bar{W}(t)\label{Equation::ReducedSME1}\\
d\langle \Delta J_z^2\rangle(t)&=&-\frac{\langle\Delta
J_z^2\rangle^2(t)}{\sigma_M}\,dt \label{Equation::ReducedSME2}
\end{eqnarray}
which constitute a Gaussian, small-time approximation to the full
SME that represents the essential dynamics for magnetometry. Note
that we can analytically solve
\begin{eqnarray*}
\langle\Delta J_z^2\rangle(t)&=&\frac{\langle\Delta
J_z^2\rangle(0)\sigma_{M}}{\sigma_M+\langle\Delta
J_z^2\rangle(0)t},
\end{eqnarray*}
where $\langle\Delta J_z^2\rangle(0)=J/2$ for an initial coherent
spin state.
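As a quick numerical sanity check, this analytic solution can be compared against direct integration of \refeqn{ReducedSME2}. The sketch below does so in Python; the values of $J$ and $\sigma_M$ are illustrative choices, not experimental parameters:

```python
import numpy as np

# Forward-Euler integration of the deterministic variance equation
# d<ΔJz²>/dt = -<ΔJz²>²/σ_M, compared with the closed-form solution
# σ(t) = σ(0) σ_M / (σ_M + σ(0) t).  J and σ_M are illustrative values.
J, sigma_M = 1.0e6, 1.0e4
sigma0 = J / 2.0                      # coherent-spin-state variance J/2
dt, n_steps = 1.0e-6, 10000

sigma = sigma0
for _ in range(n_steps):
    sigma += -sigma**2 / sigma_M * dt

t = dt * n_steps
sigma_exact = sigma0 * sigma_M / (sigma_M + sigma0 * t)
print(sigma, sigma_exact)             # the two agree closely
```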
At this point we may note that \refeqns{ReducedSME1}{ReducedSME2}
have the algebraic form of a Kalman filter. (This is not at all
surprising since the SME, as written in \refeqn{SME}, represents
an optimal nonlinear filter for the reduced spin state
\cite{Belavkin1999,Verstraete2001} and our subsequent
approximations have enforced both linearity and sufficiency of
second-order moments.) Viewed as such, the quantity $\langle
J_z\rangle (t)$ would represent an optimal (least square) estimate
of some underlying variable $z(t)$ based on observation of a
signal $d\xi(t)$, and $\langle\Delta J_z^2\rangle (t)$ would
represent the uncertainty (variance) of this estimate. It thus
stands to reason that we might be able to simplify our
magnetometry model even further if we could find an `underlying'
model for the evolution of $z(t)$ and $d\xi(t)$, for which our
equations derived from the SME would be the Kalman filter.
It is not difficult to do so, and indeed a very simple model
suffices:
\begin{eqnarray}
dz(t) &=& \gamma J h(t) dt\nonumber\\
d\xi(t) &=& z(t) dt + \sqrt{\sigma_M}
dW(t)\label{Equation::Simple}
\end{eqnarray}
where $dW(t)$ is a Wiener increment that is distinct from (though
related to) $d\bar{W}(t)$. In order to match initial conditions
with the equations derived from the SME, we should assume that the
expected value of $z(t=0)$ is zero and that the variance of our
prior distribution for $z(0)$ is $J/2$. Written in canonical form,
the Kalman filter for this hypothetical system is then
\begin{eqnarray*}
d\tilde{z}(t) &=& \gamma J h(t) dt +
\frac{\sigma_{zR}(t)}{\sqrt{\sigma_M}}
\frac{\left[d\xi(t)-\tilde{z}(t)dt\right]}{\sqrt{\sigma_M}},\\
d\sigma_{zR}(t) &=& -\frac{\sigma_{zR}^2(t)}{\sigma_M}\,dt.
\end{eqnarray*}
Here $\tilde{z}(t)$ is the optimal estimate of $z(t)$ and
$\sigma_{zR}(t)$ is the variance. We exactly recover the SME
model, \refeqns{ReducedSME1}{ReducedSME2}, by the identifications
\begin{eqnarray}
\tilde{z}(t) & \leftrightarrow & \langle J_z \rangle (t)\nonumber\\
\sigma_{zR}(t) & \leftrightarrow & \langle \Delta J_z^2
\rangle(t)\nonumber\\
\frac{\left[d\xi(t)-\tilde{z}(t)dt\right]}{\sqrt{\sigma_M}} &
\leftrightarrow & d\bar{W}(t).
\end{eqnarray}
It is important to note that the quantity
$\left[d\xi(t)-\tilde{z}dt \right]/\sqrt{\sigma_M}$ represents the
so-called {\em innovation process} of this Kalman filter, and it
is thus guaranteed (by least-squares optimality of the filter
\cite{Oksendal1998}) to have Gaussian white-noise statistics.
Hence we have solid grounds for identifying it with the Ito
increment appearing in the SME.
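As a concrete illustration, the following sketch simulates the simplified model \refeqn{Simple} with $h=0$, so that the hidden variable $z$ is a fixed random offset drawn from the prior $N(0,J/2)$, and runs the scalar Kalman filter above on the simulated record. All numerical values are illustrative:

```python
import numpy as np

# Simplified model (Eq. Simple) with h = 0: dz = 0, dξ = z dt + √σ_M dW.
# The scalar Kalman filter tracks z with gain σ_zR/σ_M and the
# deterministic Riccati variance update.  Parameter values illustrative.
rng = np.random.default_rng(0)
J, sigma_M = 2.0, 1.0
dt, n_steps = 1e-3, 20000

z = rng.normal(0.0, np.sqrt(J / 2))   # hidden 'classical' value of Jz
z_est, var = 0.0, J / 2               # filter estimate and variance

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    dxi = z * dt + np.sqrt(sigma_M) * dW        # observed increment
    innovation = dxi - z_est * dt               # dξ - z̃ dt
    z_est += (var / sigma_M) * innovation       # Kalman update (h = 0)
    var += -(var ** 2) / sigma_M * dt           # Riccati variance update

var_exact = (J / 2) * sigma_M / (sigma_M + (J / 2) * dt * n_steps)
print(z, z_est, var, var_exact)
```

The final estimate $\tilde z$ lands within a few final standard deviations of the hidden $z$, and the filter variance follows the analytic Riccati solution.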
Given this insight, we see that our original magnetometry problem
can equivalently be viewed in a way that corresponds to the middle
diagram of \reffig{Equivalences}. In this version, we posit the
existence of a hidden transducer that imprints statistical
information about the magnetic field $h(t)$ onto a signal
$d\xi(t)$. A Kalman filter receives this signal, and from it
computes an estimate $\tilde{z}(t)$ as well as an innovation
process $d\bar{W}(t)$. (Note that as indicated in the diagram, the
Kalman filter will only function correctly if it `has knowledge
of' the true magnetic field $h(t)$ in the way that a physical
system would, but this is not an important point for what
follows.) According to the model equations, the Kalman filter then
emits the following signal to be received by our photo-detector:
\begin{eqnarray}
y(t)dt & = & \tilde{z}(t)dt + \sqrt{\sigma_M}\,d\bar{W}(t).
\end{eqnarray}
Note that $d\bar{W}(t)$ now appears as an internal variable to the
Kalman filter, computed from the input signal $d\xi(t)$ and the
recursive estimate $\tilde{z}(t)$, while the inherent randomness
is referred back to $dW(t)$. Although this may seem like an
unnecessarily complicated story, it should be noted that the
compound model with $z(t)$ and the Kalman filter predicts an {\em
identical} transfer function from $h(t)$ to the
experimentally-observed signal $y(t)dt$ to that of the equations
originally derived from the SME (top diagram in
\reffig{Equivalences}). Hence, for the purposes of analyzing and
designing magnetometry schemes, these are equivalent models.
Combining several definitions above we find
\begin{eqnarray}
y(t)dt & = & \tilde{z}(t)dt +
\sqrt{\sigma_M}\frac{\left[d\xi(t)-\tilde{z}(t)dt\right]}
{\sqrt{\sigma_M}}\nonumber\\
& = & d\xi(t)\label{Equation::Currents}.
\end{eqnarray}
It thus follows that in the compound model, the Kalman filter
actually implements a trivial transfer function and can in fact be
eliminated from the diagram. Doing this, we obtain the simplified
representation in the bottom diagram of \reffig{Equivalences}.
Here the perspective is to pretend that the internal dynamics of
the transducing physical system corresponds to the simplified
model (\refeqn{Simple}), since we can do so without making any
error in our description of the effect of $h(t)$ on the recorded
signal. We thus conclude that for the purposes of open- or
closed-loop estimation of $h(t)$, filters and controllers can in
fact be designed---without loss of performance---using the
simplified model (\refeqn{Simple}).
It is interesting to note that $z(t)$ can loosely be interpreted
as a `classical value' of the spin projection $J_z$. Since the
operator $J_z$ is a back-action evading observable, the continuous
measurement we consider is quantum non-demolition and its
backaction on the system state is minimal (conditioning without
disturbance). Hence if $h(t)=0$, we may think of the measurement
process as gradually `collapsing' the quantum state of the spin
system from an initial coherent state towards an eigenstate of
$J_z$; the hidden variable $z(t)$ in the simplified model
\refeqn{Simple} would then represent the eigenvalue corresponding
to the ultimate eigenstate, and $\tilde{z}(t)=\langle J_z
\rangle(t)$ in the Kalman filter would be our converging estimate
of it. (Again, this is as expected from the abstract perspective
of quantum filtering theory for open quantum systems.) Conditional
spin-squeezing in this case can then be understood as nothing more
than the reduction of our uncertainty as to the underlying value
of $z$ --- as we acquire information about $z$ through observation
of $d\xi(t)= y(t)dt$, our uncertainty $\sigma_{zR}(t)
\leftrightarrow\langle\Delta J_z^2\rangle(t)$ naturally decreases
below its initial coherent-state value of $J/2$. Still, the
quantum-mechanical nature of the spin system is not without
consequence, as it is known that continuous QND measurement
produces entanglement among the spins in the ensemble
\cite{Stockton2003}.
It seems worth commenting on the fact that \refeqn{Simple} clearly
predicts stationary statistics for the photocurrent $y(t)dt$,
whereas \refeqn{ReducedSME1} contains a time-dependent diffusion
coefficient that might color the statistics of $y(t)dt=\langle
J_z\rangle(t)dt + \sqrt{\sigma_M}\,d\bar{W}(t)$. In fact there is
no discrepancy. It is possible \cite{Wiseman1993} to derive the
second order time correlation function of the observed signal
$y(t)dt$ directly from the stochastic master equation, \refeqn{SME},
\begin{eqnarray*}
\langle y(t) y(t+\tau) \rangle &=& (\langle J_{z}(t) J_{z}(t+\tau)
\rangle +\langle J_{z}(t+\tau) J_{z}(t) \rangle)/2
\\&&+\frac{1}{4\eta M}\delta (\tau)
\end{eqnarray*}
(This result could also be obtained from the standard input-output
theory of quantum optics.) Since the master equation results in
linear equations for the mean values $\langle J_{x}(t) \rangle$
and $\langle J_{z}(t)\rangle$ the quantum regression theorem
\cite{Walls1994} allows the correlation functions $\langle
J_{z}(t) J_{z}( t+\tau ) \rangle $ \ and $\langle J_{z}( t+\tau )
J_{z}( t) \rangle $ to be calculated explicitly. In this paper we
are most interested in the early time evolution for which we
obtain the expressions
\begin{eqnarray*}
\langle y(t) \rangle =\langle J_{z}( t) \rangle
&=&\gamma b J t+O( t^{2}) \\
\langle y(0) y(t) \rangle -\langle y(0) \rangle \langle y(t)
\rangle &=&\frac{1}{4\eta M}\delta (t) +\langle \Delta
J_{z}^{2}\rangle (0) \\&& +O(t^{2}) .
\end{eqnarray*}
These correlation functions correspond to white noise superimposed
on a linear ramp of gradient $\gamma b J$ with a random
offset of variance $\langle \Delta J_{z}^{2}\rangle(0)$, in
perfect agreement with our simplified model \refeqn{Simple}. If
the statistics of $y(t)$ were Gaussian, these first- and second-order
moments would be enough to characterize the signal
completely, and indeed for sufficiently large $J$ the problem does
become effectively Gaussian.
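These small-time statistics can be checked by direct sampling of the simplified model: for a constant field $b$, the integrated signal $\xi(T)=\int_0^T y(t)\,dt$ should have mean $\gamma J b T^2/2$ and variance $(J/2)T^2+\sigma_M T$. The sketch below samples the closed-form solution of the SDE rather than simulating increments; the parameter values are illustrative:

```python
import numpy as np

# Direct sampling of ξ(T) = z(0) T + γ J b T²/2 + √σ_M W(T), the
# closed-form integral of the simplified model with constant field b.
# Predicted: mean γ J b T²/2, variance (J/2) T² + σ_M T.
rng = np.random.default_rng(1)
gamma, J, b, sigma_M = 1.0, 2.0, 0.5, 1.0
T, n_traj = 1.0, 200000

z0 = rng.normal(0.0, np.sqrt(J / 2), size=n_traj)   # random offsets
WT = rng.normal(0.0, np.sqrt(T), size=n_traj)       # Wiener endpoints
xi_T = z0 * T + gamma * J * b * T ** 2 / 2 + np.sqrt(sigma_M) * WT

mean_pred = gamma * J * b * T ** 2 / 2              # = 0.5 here
var_pred = (J / 2) * T ** 2 + sigma_M * T           # = 2.0 here
print(xi_T.mean(), mean_pred, xi_T.var(), var_pred)
```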
As a final comment we note that the essential step in the above
discussion is to observe that the equations for the first and
second order moments of the quantum state derived from the
stochastic master equation correspond to a Kalman filter for some
classical model of a noisy measurement. This correspondence holds
for the stochastic master equation corresponding to an arbitrary
linear quantum mechanical system with continuous measurement of
observables that are linear combinations of the canonical
variables \cite{Doherty2003}. In the general case of measurements
that are not QND the equivalent classical model will have
noise-driven dynamical equations as well as noise on the measured
signal. The noise processes driving the dynamics and the measured
signal may also be correlated. The case of position measurement of
a harmonic oscillator shows all of these features
\cite{Doherty1999}.
\section{Riccati equation solution method}\label{Appendix::RiccatiSolution}
The matrix Riccati equation is ubiquitous in optimal control.
Here, following \cite{Reid1972}, we show how to reduce the
non-linear problem to a set of linear differential equations.
Consider the generic Riccati equation:
\begin{equation*}
\frac{d\textbf{V}(t)}{dt}=\textbf{C}-\textbf{D}
\textbf{V}(t)-\textbf{V}(t) \textbf{A}-\textbf{V}(t) \textbf{B}
\textbf{V}(t)
\end{equation*}
We propose the decomposition:
\begin{equation*}
\textbf{V}(t)=\textbf{W}(t) \textbf{U}^{-1}(t)
\end{equation*}
with the linear dynamics
\begin{center}
\begin{tabular}[t]{c}
$\begin{bmatrix} \frac{d\textbf{W}(t)}{dt} \\
\frac{d\textbf{U}(t)}{dt}\end{bmatrix} =
\begin{bmatrix}
-\textbf{D} & \textbf{C}\\
\textbf{B} & \textbf{A}
\end{bmatrix}
\begin{bmatrix} \textbf{W}(t) \\ \textbf{U}(t)\end{bmatrix}$
\end{tabular}
\end{center}
It is then straightforward to show that this linearized solution
satisfies the Riccati equation:
\begin{eqnarray*}
\frac{d\textbf{V}(t)}{dt}&=&\frac{d\textbf{W}(t)}{dt}\textbf{U}^{-1}(t)+\textbf{W}(t)\frac{d\textbf{U}^{-1}(t)}{dt}\\
&=&\frac{d\textbf{W}(t)}{dt}\textbf{U}^{-1}(t)+\textbf{W}(t)(-\textbf{U}^{-1}(t)\frac{d\textbf{U}(t)}{dt}\textbf{U}^{-1}(t))\\
&=&(-\textbf{D}\textbf{W}(t)+\textbf{C}\textbf{U}(t))\textbf{U}^{-1}(t)\\
&&-\textbf{W}(t)\textbf{U}^{-1}(t)(\textbf{B}\textbf{W}(t)+\textbf{A}\textbf{U}(t))\textbf{U}^{-1}(t)\\
&=&\textbf{C}-\textbf{D}\textbf{V}(t)-\textbf{V}(t)\textbf{A}-\textbf{V}(t)\textbf{B}\textbf{V}(t)
\end{eqnarray*}
where we have used the identity
\begin{equation*}
\frac{d\textbf{U}^{-1}(t)}{dt}=-\textbf{U}^{-1}(t)\frac{d\textbf{U}(t)}{dt}\textbf{U}^{-1}(t)
\end{equation*}
Thus the proposed decomposition works, and the problem can be solved
with a linear set of differential equations.
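The reduction can be verified numerically by integrating both the Riccati equation and the linear system for $(\mathbf{W},\mathbf{U})$ and comparing $\mathbf{W}\mathbf{U}^{-1}$ with the direct solution. The $2\times 2$ matrices below are arbitrary illustrative choices:

```python
import numpy as np

# Linearization method for the matrix Riccati equation
# dV/dt = C - D V - V A - V B V: integrate the linear system for (W, U)
# with W(0) = V(0), U(0) = I, and compare V = W U^{-1} against direct
# forward-Euler integration of the Riccati equation itself.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.2, 0.0], [0.0, 0.1]])
C = np.array([[1.0, 0.0], [0.0, 1.0]])
D = np.array([[0.3, 0.0], [0.0, 0.3]])
V0 = np.eye(2)

dt, n_steps = 1e-4, 10000

# Direct integration of the Riccati equation.
V = V0.copy()
for _ in range(n_steps):
    V = V + dt * (C - D @ V - V @ A - V @ B @ V)

# Linear system: dW/dt = -D W + C U,  dU/dt = B W + A U.
W, U = V0.copy(), np.eye(2)
for _ in range(n_steps):
    dW = dt * (-D @ W + C @ U)
    dU = dt * (B @ W + A @ U)
    W, U = W + dW, U + dU

V_lin = W @ np.linalg.inv(U)
print(np.max(np.abs(V - V_lin)))      # small: the two solutions agree
```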
\section{Robust control in frequency
space}\label{Appendix::RobustFrequencySpace}
Here we apply traditional frequency-space robust control methods
\cite{Doyle1990, Doyle1997} to the classical version of our
system. This analysis is different from the treatment in the body
of the paper in several respects. First, we assume nothing about
the noise sources (bandwidth, strength, etc.). Also, this approach
is meant for steady state situations, with the resulting
estimator-controller being a constant gain transfer function. The
performance criterion we present here is only loosely related to
the more complete estimation description above. Despite these
differences, this analysis gives a very similar design procedure
for the steady state situation.
We proceed as follows with the control system shown in
\reffig{BlockDiagram}, where we label $h(t)=u(t)+ b(t)$ as the
total field. Consider the usual spin system but ignore noise
sources and assume we can measure $z(t)$ directly, so that
$z(t)=y(t)$. For small angles of rotation, the transfer function
from $h(t)$ to $y(t)$ is an integrator
\begin{eqnarray*}
\frac{dy(t)}{dt}&=&\frac{dz(t)}{dt}=\gamma J h(t)\\
s y(s)&=&\gamma J h(s)\\
y(s)&=&P(s) h(s)\\
P(s)&=&\gamma J /s
\end{eqnarray*}
\begin{figure}
\caption{
Spin control system with plant transfer function $P(s)=\gamma
J/s$. $r(t)$ is the reference signal, which is usually zero.
$e(t)$ is the error signal. $u(t)$ is the controller output.
$b(t)$ is the external field to be tracked. $h(t)=b(t)+u(t)$ is
the total field. $\tilde{b}(t)=-u(t)$ is the estimate of the external field.
\label{Figure::BlockDiagram}}
\end{figure}
Now we define the performance criterion. First notice that the
transfer function from the field to be measured $b(t)$ to the
total field $h(t)$ is $S(s)$ where
\begin{eqnarray*}
h(s)&=&S(s)b(s)\\
S(s)&=&\frac{1}{1+P(s)C(s)}
\end{eqnarray*}
(Also notice that this represents the transfer function from the
reference to the error signal $e(s)=S(s)r(s)$.) Because our field
estimate will be $\tilde{b}(t)=-u(t)$, we desire $h(t)$ to be
significantly suppressed. Thus we would like $S(s)$ to be small
in magnitude (controller gain $|C(s)|$ large) in the frequency
range of interest. However, because the gain $|C(s)|$ must
physically decrease to zero at high frequencies we must close the
feedback loop with adequate phase margin to keep the closed-loop
system stable. This is what makes the design of $C(s)$
non-trivial.
Proceeding, we now define a function $W_1(s)$ which represents the
degree of suppression we desire at the frequency $s=j\omega$. So
our controller $C(s)$ should satisfy the following performance
criterion
\begin{eqnarray}
\|W_1(s)S(s)\|_{\infty}&<&1\label{Equation::W1Performance}
\end{eqnarray}
Thus the larger $W_1(s)$ becomes, the more precision we desire at
the frequency $s$. We choose the following performance function
\begin{eqnarray*}
W_1(s)&=&\frac{W_{10}}{1+s/\omega_1}
\end{eqnarray*}
such that $\omega_1$ is the frequency below which we desire
suppression $1/W_{10}$.
Because our knowledge of $J$ is imperfect, we need to consider all
plant transfer functions in the range
\begin{eqnarray*}
P=\frac{\gamma}{s}\{J_{min}\rightarrow J_{max}\}
\end{eqnarray*}
Our goal is now to find a $C(s)$ that can satisfy the performance
condition for any plant in this family. We choose our nominal
controller as
\begin{eqnarray*}
C_0(s)&=&\frac{\omega_C}{\gamma J'}
\end{eqnarray*}
So if $J=J'$ then the system closes at $\omega_C$ (i.e.,
$|P(i\omega_C)C_0(i\omega_C)|=1$), whereas in general the system
will close at $\omega_{CR}=\omega_C\frac{J}{J'}$. We choose this
controller because $P(s)C(s)$ should be an integrator ($\propto
1/s$) near the closing frequency for optimal phase margin and
closed-loop stability.
Next we insert this solution into the performance condition. We
make the simplifying assumption $\omega_1\ll\omega_C \frac{J}{J'}$
(we will check this later to be self-consistent). Then the optimum
of the function is obvious and the condition of
\refeqn{W1Performance} becomes
\begin{eqnarray*}
\omega_1 W_{10}<\omega_{CR}=\omega_C \frac{J}{J'}
\end{eqnarray*}
We want this condition to be satisfied for all possible spin
numbers, so we must have
\begin{eqnarray}
\omega_1 W_{10}=\min[\omega_{CR}]=\omega_C \frac{J_{min}}{J'}
\label{Equation::Jmin}
\end{eqnarray}
Experimentally, we are forced to roll off the controller at some
high frequency that we shall call $\omega_Q$. Electronics can
only be so fast. Of course, we never want to close above this
frequency because the phase margin would become too small, so this
determines the maximum $J$ that the controller can reliably handle
\begin{eqnarray}
\omega_Q=\max[\omega_{CR}]=\omega_C
\frac{J_{max}}{J'}\label{Equation::Jmax}
\end{eqnarray}
Combining \refeqns{Jmin}{Jmax} we find our fundamental trade-off
\begin{eqnarray}
\omega_1
W_{10}=\omega_Q\frac{J_{min}}{J_{max}}\label{Equation::W1TradeOff}
\end{eqnarray}
which is the basic result of this section. Given experimental
constraints (such as $J_{min}$, $J_{max}$, and $\omega_Q$), it
tells us what performance to expect ($1/W_{10}$ suppression) below
a chosen frequency $\omega_1$.
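As an illustration of the trade-off, suppose (hypothetically) that the electronics roll off at $\omega_Q = 2\pi\times 10^6$ rad/s, the spin number is known to within $J_{min}/J_{max}=2/3$, and suppression is wanted below $\omega_1 = 2\pi\times 10^3$ rad/s. Then \refeqn{W1TradeOff} fixes the achievable $W_{10}$:

```python
# Hypothetical numbers plugged into the trade-off
# ω1 W10 = ωQ Jmin/Jmax (Eq. W1TradeOff).
from math import pi

J_min, J_max = 0.8e6, 1.2e6
omega_Q = 2 * pi * 1e6        # electronics roll-off (rad/s)
omega_1 = 2 * pi * 1e3        # band of interest (rad/s)

W_10 = (omega_Q / omega_1) * (J_min / J_max)
print(W_10, 1 / W_10)         # suppression factor 1/W_10 below ω1
```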
From \refeqn{W1TradeOff}, we recognize that the controller gain at
the closing frequency needs to be
\begin{eqnarray*}
|C|_C=\frac{\omega_C}{\gamma J'}=\frac{\omega_1 W_{10}}{\gamma
J_{min}}=\frac{\omega_Q}{\gamma J_{max}}
\end{eqnarray*}
In the final analysis, we do not need to use $J'$ and $\omega_C$
to parametrize the controller, only the trade-off and the gain.
Also, notice that now we can express $\min[\omega_{CR}]=\omega_1
W_{10}$.
To check our previous assumption
\begin{eqnarray*}
\omega_1&\ll&\omega_C \frac{J}{J'}\\
&=&\omega_1 W_{10} \frac{J}{J_{min}}
\end{eqnarray*}
which is true if $W_{10}\gg 1$.
Finally, the system will never close below the frequency
$\min[\omega_{CR}]$ so we should increase the gain below a
frequency $\omega_H$ which we might as well set equal to
$\min[\omega_{CR}]$. This improves the performance above and
beyond the criterion above. Of course we will be forced to level
off the gain at some even lower frequency $\omega_L$ because
infinite DC gain (a real integrator) is unreasonable. So the
final controller can be expressed as
\begin{eqnarray*}
C(s)=|C|_C
\frac{1}{1+s/\omega_Q}\frac{\omega_H(1+s/\omega_H)}{\omega_L(1+s/\omega_L)}
\end{eqnarray*}
with the frequencies obeying the order
\begin{eqnarray*}
\omega_L &<& \\
\omega_H &=& \min[\omega_{CR}]=\omega_1 W_{10} < \\
\omega_{CR} &=& \frac{J}{J_{min}}\omega_1 W_{10} < \\
\omega_Q &=& \max[\omega_{CR}]=\frac{J_{max}}{J_{min}}\omega_1
W_{10}
\end{eqnarray*}
Notice that the controller now looks like the steady state
transfer function in \reffig{Bode} derived from the steady state
of the full dynamic filter. (The notation is the same to make
this correspondence clear). Here $\omega_Q$ was simply stated,
whereas there it was a function of $\lambda$ that went to infinity
as $\lambda\rightarrow \infty$. Here the high gain due to
$\omega_L$ and $\omega_H$ was added manually, whereas before it
came from the design procedure directly.
\end{document}
\begin{document}
\begin{abstract}
In the signal-processing literature, a frame is a mechanism for performing analysis and reconstruction in a Hilbert space. By contrast, in quantum theory, a positive operator-valued measure (POVM) decomposes a Hilbert-space vector for the purpose of computing measurement probabilities.
Frames and their most common generalizations can be seen to give rise to POVMs, but does every reasonable POVM arise from a type of frame? In this paper we answer this question using a Radon-Nikodym-type result.
\end{abstract}
\maketitle
\section{Introduction}
Originally studied in quantum mechanics, positive operator-valued measures (POVMs) have recently been suggested in the signal-processing literature as a natural general framework for the analysis and reconstruction of square-integrable functions \cite{moran2013positive,han2014operator,robinson2014operator,li2017radon}. Examples include frames \cite{duffin1952class,daubechies1986painless,christensen2002introduction}, g-frames \cite{sun2006g,kaftal2009operator,han2011operator}, continuous frames \cite{ali1993continuous}, and the overarching continuous g-frames \cite{abdollahpour2008continuous}, all of which we will call simply operator-valued frames (OVFs). The precise relationship between OVFs and POVMs is captured by a Radon-Nikodym-type theorem \cite[Theorem~3.3.2]{robinson2014operator}. But this theorem applies only to POVMs with sigma-finite total variation. The purpose of this paper is to show that the above relationship extends to non-sigma-finite POVMs and densely-defined OVFs.
Even the sigma-finite case has required the attention of many authors. In finite dimensions, for example, Chiribella et al. recently solved the problem using barycentric decompositions \cite{chiribella2010barycentric}. In infinite dimensions, extra assumptions have traditionally been used, such as the finiteness of the POVM's trace \cite{berezanskiui1995spectral} or the weak compactness of its range \cite[VI.8.10]{dunford1967linear}.
One of the main complications in this case is that the space of bounded linear operators on an infinite-dimensional Hilbert space does not have the so-called Radon-Nikodym property \cite{barcenas2003radon}. Another reason the problem is challenging is that it requires proving the conclusion of Naimark's Theorem \cite{neumark1940spectral,neumark1943representation}. The non-sigma-finite case is at least as hard, and as far as we know no one has yet attempted it.
The paper is organized as follows. In Section~\ref{sec:the-question}, we make the relationship we wish to establish precise. Then, in Section~\ref{sec:main-result} we prove a Radon-Nikodym-type theorem from which the desired relationship follows.
\section{The Question} \label{sec:the-question}
Let $\mathcal{H}$ be a separable complex Hilbert space throughout, whose inner product is conjugate-linear in its second variable. A \emph{g-frame} for $\mathcal{H}$ is
a sequence of bounded operators $T_{1},T_{2},\dots$ mapping $\mathcal{H}$
into the sequence of separable Hilbert spaces $\mathcal{K}_{1},\mathcal{K}_{2},\dots$
such that the operator $T:x\in\mathcal{H}\mapsto\left\{ T_{i}x\right\} _{i\ge1}$
maps into $\bigoplus_{i\ge1}\mathcal{K}_{i}$ and is bounded above and below
\cite{sun2006g}. A related concept is the concept of a \emph{frame}
for $\mathcal{H}$. This is usually defined as follows.
\begin{defn}
A sequence $x_{1},x_{2},\dots$
in $\mathcal{H}$ is said to be a \emph{frame }for $\mathcal{H}$ if the operator
$T:x\in\mathcal{H}\mapsto\left\{ \left\langle x,x_{i}\right\rangle \right\} _{i\ge1}$
maps into $\ell^{2}(\mathbb{N})$ and is bounded above and below,
or equivalently, if there are $A,B>0$ such that
\[
A\left\Vert x\right\Vert _{\mathcal{H}}^{2}\le\sum_{i\ge1}\left|\left\langle x,x_{i}\right\rangle \right|^{2}\le B\left\Vert x\right\Vert _{\mathcal{H}}^{2}
\]
for all $x\in\mathcal{H}$. The numbers $A$ and $B$ are called the \emph{frame bounds}.
\end{defn}
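As a concrete toy example (ours, not taken from the references above), the three equally spaced unit vectors in $\mathbb{R}^2$, sometimes called the Mercedes-Benz frame, form a tight frame with $A=B=3/2$. A quick numerical check:

```python
import numpy as np

# The 'Mercedes-Benz' frame: three unit vectors at angles 120° apart
# in R^2.  Its frame operator S = T* T equals (3/2) I, so
# sum_i |<x, x_i>|^2 = (3/2) ||x||^2 for every x, i.e. A = B = 3/2.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
frame = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows x_i

S = frame.T @ frame       # frame operator as a 2x2 matrix
print(S)                  # ≈ (3/2) * identity
```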
Using the self-duality of Hilbert spaces, we can easily see that every
frame $\{x_{i}\}_{i\ge1}$ for $\mathcal{H}$ can be uniquely expressed as
the g-frame $\{T_{i}\}_{i\ge1}$ for $\mathcal{H}$ with $T_{i}=\left\langle \cdot,x_{i}\right\rangle $,
so that the concept of a g-frame subsumes the concept of a frame.
For examples of frames of interest, see \cite{duffin1952class,daubechies1986painless,daubechies1992ten,christensen2002introduction}.
For examples of g-frames of interest that are not frames, see Examples
2.3.7 and 2.3.8 in \cite{robinson2014operator}.
The importance of frames and g-frames is that they provide a framework
for \emph{analysis}, \emph{synthesis}, and \emph{reconstruction} of a function.
The operator $T$ above is usually referred to as the \emph{analysis
operator}. \emph{Reconstruction} can be performed if we can calculate
the result of applying $S^{-1}T^{*}$ to $Tx$, where $T^{*}$ is
the adjoint of $T$ and $S=T^{*}T$ is known as the \emph{frame operator}. The intermediate step of applying
$T^{*}$ to $Tx$ is called \emph{synthesis}. In practice, if one
knows only $T$ and $Tx$, one way to recover $x$ without directly inverting
$S$ is by way of the so-called \emph{frame algorithm}:
\begin{prop}
\cite{grochenig1993acceleration} Let $\{x_{i}\}_{i\ge1}$ be a frame
for $\mathcal{H}$ with frame bounds $A,B>0$. Then every $x\in\mathcal{H}$ can be
reconstructed from $T$ and $Tx$ alone using the iteration
\[
x^{(n)}=x^{(n-1)}+\frac{2}{A+B}S\left(x-x^{(n-1)}\right),
\]
for $n\ge 1$ with $x^{(0)}=0$. Further, $x^{(n)}\to x$ according to
\[
\left\Vert x-x^{(n)}\right\Vert \le\left(\frac{B-A}{B+A}\right)^{n}\left\Vert x\right\Vert .
\]
\end{prop}
\noindent This algorithm applies equally well to frames and g-frames.
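A minimal numerical sketch of the frame algorithm, for a small random frame in $\mathbb{R}^5$ (any finite spanning set is a frame). The frame and vector are arbitrary illustrative choices, and the final error is compared against the stated geometric bound:

```python
import numpy as np

# Frame algorithm for a random frame in R^5.  Note that
# S(x - x_n) = T*(Tx) - S x_n, so each iterate needs only T and Tx.
rng = np.random.default_rng(2)
frame = rng.normal(size=(12, 5))        # rows are the frame vectors x_i
S = frame.T @ frame                     # frame operator S = T* T
A, B = np.linalg.eigvalsh(S)[[0, -1]]   # optimal frame bounds

x = rng.normal(size=5)                  # vector to reconstruct
Tx = frame @ x                          # analysis coefficients <x, x_i>

xn = np.zeros(5)
for _ in range(200):
    xn = xn + (2.0 / (A + B)) * (frame.T @ Tx - S @ xn)

err = np.linalg.norm(x - xn)
bound = ((B - A) / (B + A)) ** 200 * np.linalg.norm(x)
print(err, bound)                       # err respects the stated bound
```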
Another paradigm for analysis and synthesis is that of \emph{continuous
frames} \cite{ali1993continuous}, which we now describe.
\begin{defn}
Let $(\Omega,\Sigma,\mu)$ be a measure space and let $\{x_{t}\}_{t\in\Omega}\subset\mathcal{H}$
be such that for all $x\in\mathcal{H}$, the map $t\in\Omega\mapsto\left\langle x,x_{t}\right\rangle $
is measurable. Then $\left(\mu,\{x_t\}_{t\in\Omega}\right)$ is a \emph{continuous frame} if $T:x\in\mathcal{H}\mapsto\left\{ \left\langle x,x_{t}\right\rangle \right\} _{t\in\Omega}$
maps into $L^{2}(\mu)$ and is bounded above and below. This boundedness condition can also be expressed by saying there are constants $A,B>0$ such that
\[
A\left\Vert x\right\Vert _{\mathcal{H}}^{2}\le\int_{\Omega}\left|\left\langle x,x_{t}\right\rangle \right|^{2}\,d\mu(t)\le B\left\Vert x\right\Vert _{\mathcal{H}}^{2}
\]
for all $x\in\mathcal{H}$.
\end{defn}
\noindent Examples of interest occur
in wavelet and Gabor analysis \cite{daubechies1986painless,daubechies1992ten}.
Encompassing both continuous frames and g-frames is the overarching concept of
a \emph{continuous g-frame} \cite{abdollahpour2008continuous}.
We will generalize this concept slightly and call the result an operator-valued frame. (We note that our terminology differs from the term ``operator-valued frame'' in \cite{kaftal2009operator}, which refers to what we call here g-frames.)
\begin{defn}
Let $(\Omega,\Sigma,\mu)$ be a measure space. Let $\mathcal{M}$ be a dense linear
subspace of $\mathcal{H}$. Let $\{\mathcal{K}(t)\}_{t\in\Omega}$ be a $\Sigma$-measurable
family of Hilbert spaces\footnote{See \cite{dixmier2011neumann} or \cite{robinson2014operator} for the definition of this and the direct integral $\int^\oplus_\Omega \mathcal{K}(t)\, d\mu(t)$ of these spaces.}
and $T(t):\mathcal{M}\to\mathcal{K}(t)$ for every $t\in\Omega$ be a
family of linear maps such that for every $x\in \mathcal{M}$ both $\{T(t)x\}_{t\in\Omega}\in\int_{\Omega}^{\oplus}\mathcal{K}(t)\,d\mu(t)$
and the map $T:x\in \mathcal{M}\mapsto\{T(t)x\}_{t\in\Omega}$ is
bounded above and below. Then we say $\left(\mu,\mathcal{M}, \{T(t)\}_{t\in\Omega}\right)$
is an \emph{operator-valued frame}. This boundedness condition can be expressed by saying that there are $A,B>0$ such that
\[
A\left\Vert x\right\Vert _{\mathcal{H}}^{2}\le\int_{\Omega}\left\langle T(t)^* T(t)x,x\right\rangle\,d\mu(t)\le B\left\Vert x\right\Vert _{\mathcal{H}}^{2},
\]
for every $x\in\mathcal{M}$.
\end{defn}
\noindent Examples of operator-valued frames that are neither continuous frames
nor g-frames arise from the Plancherel theorem for non-compact non-commutative groups; in these examples $A=B$. (See \cite[Equation~7.46]{gb1995course} for more information.)
If $\left(\mu,\mathcal{M}, \{T(t)\}_{t\in\Omega}\right)$ is an operator-valued frame
for $\mathcal{H}$, then the frame operator
\begin{equation}
S=\int_{\Omega}T(t)^{*}T(t)\,d\mu(t) \nonumber
\end{equation}
may be defined as before. Here, $S$ is interpreted as an operator
that satisfies
\[
\left\langle Sx,y\right\rangle =\int_{\Omega}\left\langle T(t)^{*}T(t)x,y\right\rangle \,d\mu(t)
\]
for all $x,y\in \mathcal{M}$. By the boundedness of $T$ and the density of
$\mathcal{M}$, such an operator exists and is unique. As a result, both the frame algorithm and the procedure of applying
$S^{-1}T^{*}$ to $Tx$ for $x\in \mathcal{M}$ recover $x$, as before.
A related concept is that of a positive operator-valued measure (POVM).
POVMs represent measurements that occur in open quantum
systems and generalize the concept of a projection-valued measure or von Neumann measurement. Formally, if $(\Omega,\Sigma)$ is
a measurable space, a POVM is a map from $\Sigma$ to $\mathcal{L}^+(\mathcal{H})$,
the positive semi-definite bounded operators on $\mathcal{H}$, which is $\sigma$-additive in the
weak operator topology. That is, it is a tuple $(\Omega,\Sigma,M)$, where $M:\Sigma\to\mathcal{L}^+(\mathcal{H})$ is a map such
that
\begin{itemize}
\item $M(\varnothing)=0$ and
\item if $E_{1},E_{2},\dots$ are pairwise disjoint members of $\Sigma$
and $x,y\in\mathcal{H}$, then
\[
\left\langle M\left(\cup_{i=1}^{\infty}E_{i}\right)x,y\right\rangle =\sum_{i=1}^{\infty}\left\langle M\left(E_{i}\right)x,y\right\rangle .
\]
\end{itemize}
If, in addition, $M(\Omega)$ is invertible, we say $M$ is a \emph{framed POVM}. We will say that an OVF $\left(\mu, \mathcal{M}, \{T(t)\}_{t\in\Omega} \right)$ \emph{gives rise to} the
POVM $M$ if
\begin{equation*}
\left\langle M(E)x,y\right\rangle =\int_{E}\left\langle T(t)^{*}T(t)x,y\right\rangle \,d\mu(t)
\end{equation*}
for all $x,y\in \mathcal{M}$ and all $E\in\Sigma$.
It is easily seen that every g-frame and every continuous frame give rise to a framed POVM. With a little more work, it can be shown that every OVF gives rise to a framed POVM. The question of this paper is the converse question: ``Does every framed POVM arise from an OVF?''
\section{Main Result} \label{sec:main-result}
For this section, recall that a closed operator on $\mathcal{H}$ is a map $A:D(A)\to \mathcal{H}$ with a closed graph, where $D(A)$ is a dense linear subspace of $\mathcal{H}$. In other words, it is a map for which if $x_n \in D(A)$, $x_n\to x\in \mathcal{H}$, and $Ax_n$ converges to $y\in \mathcal{H}$, then $x\in D(A)$ and $y=Ax$.
The answer to the question of the last section hinges on the following Radon-Nikodym-type theorem.
\begin{thm*}
Suppose $(\Omega,\Sigma, M)$
is a POVM. Suppose that $\mathcal{M}$ is a dense linear manifold in $\mathcal{H}$
and that for each $x\in\mathcal{M}$, the total variation of the vector measure
$\mu_{x}:E\in\Sigma\mapsto M(E)x$ is sigma-finite. Then there is
a sigma-finite measure $(\Omega,\Sigma,\mu)$ and a positive closed
operator-valued function $Q(t):\mathcal{M}\rightarrow\mathcal{H}$, defined for $\mu$-a.e. $t\in\Omega$,
such that
\begin{equation}
M(E)x=\int_{E} Q(t)x \,d\mu(t),\label{eq:radon-decomposition-vector}
\end{equation}
weakly,
for all $E\in\Sigma$ and all $x\in\mathcal{M}$.
Further, if $(Q_1,\mu_1)$ and $(Q_2,\mu_2)$ are operator-valued functions defined on $\mathcal{M}$ and sigma-finite measures satisfying (\ref{eq:radon-decomposition-vector}), we have the following operator equality for $(\mu_1+\mu_2)$-a.e. $t$:
\begin{equation} \label{eq:uniqueness}
Q_1(t)\frac{d\mu_1}{d(\mu_1+\mu_2)}(t)= Q_2(t) \frac{d\mu_2}{d(\mu_1+\mu_2)}(t).
\end{equation}
\end{thm*}
\begin{proof}
Let $\mu_{x,y}:\Sigma\to\mathbb{C}$ for $x,y\in\mathcal{H}$ be the complex
measure defined by $\mu_{x,y}(E)=\left\langle M(E)x,y\right\rangle $. Let $\mu$ be any sigma-finite measure dominating each $\mu_{x,y}$. For example, if $\{x_j\}$ is a countable dense subset of the unit ball, then we may define
\begin{equation*}
\mu(E) = \sum_{j\ge 1} \frac{1}{2^j} \mu_{x_j,x_j}(E).
\end{equation*}
Observe that
\begin{align*}
\left|\mu_{x,y}(E)\right| & =\left|\left\langle M(E)x,y\right\rangle \right|\\
& \le\left\Vert M(E)x\right\Vert \left\Vert y\right\Vert
\end{align*}
so that
\begin{equation}
\left|\mu_{x,y}\right|(E)\le\left|\mu_{x}\right|(E)\left\Vert y\right\Vert \label{eq:not-exceed-x}
\end{equation}
for all $x\in\mathcal{M}$ and all $y\in\mathcal{H}$. We use the term \emph{null set} to mean ``$\mu$-null set.''
For each $x\in\mathcal{M}$, fix a Radon-Nikodym
derivative $g_{x}$ of $|\mu_{x}|$ with respect to $\mu$. The fact
that $\left|\mu_{x}\right|$ is sigma-finite implies that we may assume
$g_{x}$ is finite everywhere. For each $x,y \in\mathcal{M}$,
fix a Radon-Nikodym derivative $g_{x,y}$ of $\mu_{x,y}$ with respect
to $|\mu_{x}|$. By (\ref{eq:not-exceed-x}), we may assume $g_{x,y}$
does not exceed $\left\Vert y\right\Vert $ in absolute value. Similarly, we may choose
$g_{x,x}(t)$ to be non-negative for all $t\in\Omega$. Let $f_{x,y}=g_{x,y}g_x$ for $x,y\in\mathcal{M}$. Then, since both $f_{x,y}$ and $f_{y,x}$ are valid Radon-Nikodym derivatives of $\mu_{x,y}$ with
respect to $\mu$ for $x,y\in\mathcal{M}$, so is the following:
\[
q(t;x,y):=\begin{cases}f_{x,y}(t), & \text{if } |f_{x,y}(t)| \le |f_{y,x}(t)| \\
f_{y,x}(t), & \text{else}.
\end{cases}
\]
Let $\mathbb{F}=\mathbb{Q}+i\mathbb{Q}$, and let $\mathcal{M}_{0}$ be the
finite $\mathbb{F}$-linear span of a countable dense subset of $\mathcal{M}$.
Assume $x,y \in \mathcal{M}_0$ and $t\in \Omega$. By our choice of $q$, we have $|q(t;x,y)|\le g_{y}(t)\left\Vert x\right\Vert $.
Further, $g_{y}(t)$ is finite, so $q(t;\cdot,y)$
is bounded. Let us restrict $t$ to a null-complemented
set $\Omega_{y}\in\Sigma$ such that $q(t;\cdot,y)$ is $\mathbb{F}$-linear. The functional $q(t;\cdot,y)$ extends to a unique $\mathbb{C}$-linear continuous functional on all of $\mathcal{H}$.
Thus,
by the Riesz representation theorem, for all $t\in\Omega':=\cap_{y\in\mathcal{M}_{0}}\Omega_{y}$
there is a vector $z(t;y)\in\mathcal{H}$ such that
\begin{equation} \label{eq:first-reduction}
q(t;x,y)=\left\langle x,z(t;y)\right\rangle
\end{equation}
for all $x,y\in\mathcal{M}_{0}$, and all $t\in\Omega'$.
Let $x\in\mathcal{M}_{0}$. There is a measurable null-complemented set $\Omega_{x}'\subset\Omega'$
such that $y\in\mathcal{M}_{0}\mapsto q(t;x,y)$ is $\mathbb{F}$-conjugate-linear
for all $t\in\Omega_{x}'$. Letting $\Omega''=\cap_{x\in\mathcal{M}_{0}}\Omega_{x}'$,
we may assume this map is $\mathbb{F}$-conjugate-linear for all $x\in\mathcal{M}_{0}$
and all $t\in\Omega''$. Thus, we have
\[
\left\langle x,z(t;ay+by')\right\rangle =\left\langle x,a\,z(t;y)+b\,z(t;y')\right\rangle
\]
for all $x,y,y'\in\mathcal{M}_{0}$, all $a,b\in\mathbb{F}$, and all $t\in\Omega''$.
Letting $x$ range over $\mathcal{M}_{0}$ this gives $\mathbb{F}$-linearity
of the map $y\in\mathcal{M}_{0}\mapsto z(t;y)$ for all $t\in\Omega''$.
Thus, there is an $\mathbb{F}$-linear map $Q_0(t):\mathcal{M}_0\to \mathcal{H}$ such that $z(t;y)=Q_0(t)y$ for all $y\in\mathcal{M}_0$. Combining this with (\ref{eq:first-reduction}) we get
\[
q(t;x,y) = \left\langle x, Q_0(t)y \right\rangle
\]
for all $x,y\in\mathcal{M}_0$, and $t\in\Omega''$.
Let $y\in\mathcal{M}_0$ and $E\in\Sigma$. We have now shown that
\begin{equation} \label{eq:desired-result}
\mu_{x,y}(E) = \int_E \left\langle x,Q_0(t)y\right\rangle\, d\mu(t)
\end{equation}
for all $x\in \mathcal{M}_0$. It follows that
\begin{equation} \label{eq:total-variation-msr-xy}
|\mu_{x,y}|(E) = \int_E \left| \left\langle x,Q_0(t)y \right\rangle \right|\, d\mu(t)
\end{equation}
for all $x\in \mathcal{M}_0$. We now wish to show that (\ref{eq:desired-result}) extends to all $x\in\mathcal{H}$. In other words, we wish to show that the functional $\phi:\mathcal{H} \to \mathbb{C}$ defined by
\[
\phi(x)=\int_E \left\langle x,Q_0(t)y\right\rangle\, d\mu(t)
\]
is well-defined and satisfies $\phi(x)=\mu_{x,y}(E)$ for all $x\in\mathcal{H}$. For this, suppose $x\in\mathcal{H}$ and $x_n$ is a sequence in $\mathcal{M}_0$ converging to $x$. To show $\phi$ is well-defined, we first note that by Fatou's lemma we have
\begin{align}
& \left(\int_E\left|\left\langle x,Q_0(t)y\right\rangle\right|\, d\mu(t)\right)^2 \nonumber \\
& = \left(\int_E\lim_n \left|\left\langle x_n,Q_0(t)y\right\rangle\right|\, d\mu(t)\right)^2 \nonumber \\
& \le \liminf_n \left(\int_E \left|\left\langle x_n,Q_0(t)y\right\rangle\right|\, d\mu(t)\right)^2 \nonumber
\end{align}
By Cauchy-Schwarz, this is bounded by
\begin{align}
& \liminf_n \int_E \left\langle x_n,Q_0(t)x_n\right\rangle\, d\mu(t) \int_E \left\langle y,Q_0(t)y\right\rangle\, d\mu(t). \nonumber
\end{align}
Further, by (\ref{eq:desired-result}), this is equal to
\begin{align}
& \liminf_n \left\langle M(E)x_n,x_n\right\rangle \left\langle M(E)y,y\right\rangle \nonumber \\
& = \left\langle M(E)x,x\right\rangle \left\langle M(E)y,y\right\rangle \label{eq:continuity-xy}
\end{align}
This means that $\phi$ is well-defined. Further, (\ref{eq:continuity-xy}) means that $\phi$ is continuous.
Since $x\in\mathcal{H}\mapsto \mu_{x,y}(E)$ is also continuous and restricts to $\phi$ on $\mathcal{M}_0$, we have $\phi(x)=\mu_{x,y}(E)$, as desired.
We will now show that there is a closed, positive operator-valued function $Q(t)$ with domain $\mathcal{M}$ such that
\begin{equation} \label{eq:final-desired-result}
\mu_{x,y}(E) = \int_E \left\langle Q(t)x,y\right\rangle\, d\mu(t),
\end{equation}
for all $x\in\mathcal{M}$, all $y\in\mathcal{M}_0$, and all $E\in\Sigma$. For this, let $x\in\mathcal{M}$, $y\in\mathcal{M}_0$, and $E\in\Sigma$. By the conclusion of the last paragraph and the definition of total-variation measure, we have:
\[
\int_E \left| \left\langle x, Q_0(t)y\right\rangle\right| \, d\mu(t)= |\mu_{x,y}|(E).
\]
Using (\ref{eq:not-exceed-x}), the right hand side is bounded by $|\mu_x|(E)\left\Vert y\right\Vert$. But since $E$ was arbitrary, this means that
\[
\left| \left\langle x, Q_0(t)y\right\rangle\right| \le g_x(t) \left\Vert y\right\Vert,
\]
for every $t$ in a null-complemented set $\Omega'''$. Fix $t\in\Omega'''$. By finiteness of $g_x(t)$, the above display equation means that if $y_n$ is a sequence in $\mathcal{M}_0$ converging to $y$ we have
\[
\left\langle x, Q_0(t) y_n\right\rangle - \left\langle x, Q_0(t)y \right\rangle \to 0.
\]
Thus, by the Riesz representation theorem, there is a densely defined operator $Q(t)$ with domain $\mathcal{M}$ such that
\[
\left\langle x, Q_0(t)y \right\rangle = \left\langle Q(t)x, y\right\rangle.
\]
Further, $Q(t)$ is closed by \cite[5.1.5]{pedersen2012analysis} and positive since $Q_0(t)$ is positive.
Let $x\in\mathcal{M}$ and $E\in\Sigma$. We will now argue that (\ref{eq:final-desired-result}) extends to all $y\in\mathcal{H}$. For this we define
$\psi:\mathcal{H}\to\mathbb{C}$ by
\[
\psi(y) = \int_E \left\langle Q(t)x, y\right\rangle\, d\mu(t).
\]
It suffices to show that $\psi$ is well-defined and agrees with $\mu_{x,y}(E)$ for all $y\in\mathcal{H}$. But this follows from the extension argument just applied to $\phi:\mathcal{M}_0\to\mathbb{C}$ in the paragraph before last.
This concludes the proof of (\ref{eq:radon-decomposition-vector}).
For (\ref{eq:uniqueness}), suppose that
\begin{equation*}
\int_E \left\langle Q_1(t)x,y \right\rangle\, d\mu_1(t) = \int_E \left\langle Q_2(t)x,y\right\rangle\, d\mu_2(t),
\end{equation*}
for all $E\in \Sigma$ and all $x\in\mathcal{M}$ and $y\in\mathcal{H}$.
Then for $(\mu_1+\mu_2)$-a.e. $t$ and all $x\in \mathcal{M}$ and $y\in\mathcal{H}$, we have
\begin{equation*}
\left\langle Q_1(t)x,y \right\rangle \frac{d\mu_1}{d(\mu_1+\mu_2)}(t) = \left\langle Q_2(t)x,y\right\rangle \frac{d\mu_2}{d(\mu_1+\mu_2)}(t) .
\end{equation*}
But this means
\begin{equation*}
Q_1(t)\frac{d\mu_1}{d(\mu_1+\mu_2)}(t) = Q_2(t)\frac{d\mu_2}{d(\mu_1+\mu_2)}(t),\end{equation*}
as desired.
\end{proof}
The above yields the sigma-finite case (\cite[Theorem~3.3.2]{robinson2014operator}) as an immediate corollary. Here, we say that $E\mapsto M(E)$ is sigma-finite if its total variation with respect to the operator norm is sigma-finite.
\begin{cor*}
Suppose $(\Omega, \Sigma, M)$ is a sigma-finite POVM. Then $\mu_x$ is sigma-finite for all $x\in \mathcal{H}$, and $Q(t)$ is bounded for $\mu$-a.e. $t$.
\end{cor*}
\begin{proof}
We may replace $\mu$ in the previous proof by the total variation measure of $M$ since it dominates $\mu_{x,y}$ for all $x,y\in\mathcal{H}$.
Further, $\mu(E)\left\Vert x \right\Vert$ dominates the total variation measure of $E\mapsto M(E)x$, so the latter is sigma-finite for all $x\in\mathcal{H}$.
By the definition of the total-variation measure of $\mu_{x,y}$ and $M$, we have
\begin{align*}
\int_E \left|\left\langle Q(t)x,y\right\rangle\right|\, d\mu(t) & \le \mu(E) \left\Vert x \right\Vert \left\Vert y \right\Vert
\end{align*}
for all $E\in\Sigma$ and all $x,y\in\mathcal{H}$. Thus, the Radon-Nikodym theorem tells us that for $\mu$-a.e. $t$ and all $x,y\in \mathcal{H}$,
\[
\left|\left\langle Q(t)x,y \right\rangle\right| \le \left\Vert x \right\Vert \left\Vert y \right\Vert,
\]
which means that $Q(t)$ is bounded for $\mu$-a.e. $t$.
\end{proof}
It follows immediately from the Theorem that any framed POVM $M$ satisfying the given sigma-finiteness condition arises from an OVF, as desired. Indeed, letting $\mu$, $Q(t)$, and $\mathcal{M}$ be as in the Theorem, such an OVF is $(\mu, \mathcal{M}, \{T(t)\}_{t\in\Omega})$, where $T(t)=Q(t)^{1/2}$. Further, the uniqueness condition of the Theorem shows that if $(\mu_1, \mathcal{M}, \{T_1(t)\}_{t\in\Omega})$ and $(\mu_2,\mathcal{M}, \{T_2(t)\}_{t\in\Omega})$ are any two OVFs giving rise to $M$, then they are essentially the same in the sense that
$$
T_1(t)^*T_1(t)\frac{d\mu_1}{d(\mu_1+\mu_2)}(t)=T_2(t)^*T_2(t)\frac{d\mu_2}{d(\mu_1+\mu_2)}(t)
$$
for $(\mu_1+\mu_2)$-a.e. $t$. This concludes our argument.
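For a discrete framed POVM the construction $T(t)=Q(t)^{1/2}$ can be carried out explicitly. The following Python sketch (our own finite-dimensional illustration) builds such an OVF from a three-point POVM and verifies that it gives rise to the POVM.

```python
import numpy as np

def psd_sqrt(Q):
    # Positive square root of a positive semidefinite matrix, via the spectral theorem.
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(2)
n = 3
# A discrete POVM on Omega = {0, 1, 2}: M({t}) = Q(t) >= 0.
Qs = []
for _ in range(3):
    G = rng.standard_normal((n, n))
    Qs.append(G.T @ G + 0.1 * np.eye(n))
M_Omega = sum(Qs)
framed = np.linalg.matrix_rank(M_Omega) == n          # M(Omega) invertible

# OVF giving rise to M: T(t) = Q(t)^{1/2}, so that T(t)^* T(t) = Q(t).
Ts = [psd_sqrt(Q) for Q in Qs]
gives_rise = all(np.allclose(T.T @ T, Q) for T, Q in zip(Ts, Qs))
```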
\end{document}
\begin{document}
\title{Numerical radii of accretive matrices}
\author{Yassine Bedrani, Fuad Kittaneh and Mohammed Sababheh}
\subjclass[2010]{15A60, 15B48, 47A12, 47A30, 47A63, 47A64.}
\keywords{Operator monotone function, sectorial matrix, accretive matrices, operator means, numerical radius} \maketitle
\pagestyle{myheadings}
\markboth{\centerline{}}{\centerline{}}
\begin{abstract}
The numerical radius of a matrix is a scalar quantity that has many applications in the study of matrix analysis. Due to the difficulty of computing the numerical radius, inequalities bounding it have received considerable attention in the literature. In this article, we present many new bounds for the numerical radius of accretive matrices. The importance of this study lies in its new approach to a specific class of matrices, namely the accretive ones. The new bounds provide a new set of inequalities, some of which can be considered as refinements of existing ones, while others offer new insight into some known results for positive matrices.
\end{abstract}
\section{Introduction}
Let $\mathcal{M}_n$ be the algebra of all complex $n\times n$ matrices. For $A\in\mathcal{M}_n$, the numerical radius $w(A)$ and the operator norm $\|A\|$ of $A$ are defined, respectively, by
$$w(A)=\max\{|\left<Ax,x\right>|:x\in\mathbb{C}^n, \|x\|=1\}$$ and
$$\|A\|=\max\{|\left<Ax,y\right>|:x,y\in\mathbb{C}^n, \|x\|=\|y\|=1\}.$$
It is well known that $w(\cdot)$ defines a norm on $\mathcal{M}_n$ that is equivalent to the operator norm, via the relation \cite[p.~114]{Halmos}
\begin{equation}\label{eq_num_oper}
\frac{1}{2}\|A\|\leq w(A)\leq \|A\|, A\in\mathcal{M}_n.
\end{equation}
Interest in bounding the numerical radius has grown due to the fact that computing the operator norm is much easier than computing the numerical radius. For this reason, many research papers present bounds for $w(A)$ in terms of the operator norm.
The numerical range of $A\in \mathcal{M}_n$ is defined by the set
$$W(A)=\{\left<Ax,x\right>:x\in\mathbb{C}^n,\|x\|=1\}.$$
If $W(A)\subset (0,\infty),$ we say that $A$ is positive, and we simply write $A> 0.$ It is well known that when $A> 0,$ we have $w(A)=\|A\|.$ A more general class of matrices than that of positive ones is the so called accretive matrices. A matrix $A\in\mathcal{M}_n$ is said to be accretive when $\Re A>0.$ Notice that
$$\Re A>0\Leftrightarrow W(A)\subset (0,\infty)\times (-\infty,\infty)\subset \mathbb{C},$$ where $\Re A=\frac{A+A^*}{2}$ is the real part of $A$. It is clear that when $A$ is positive, it is necessarily accretive.
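These quantities are easy to approximate numerically via the standard characterization $w(A)=\max_{\theta}\lambda_{\max}\!\left(\Re(e^{i\theta}A)\right)$. The following Python sketch (the grid size and the example matrix are our own choices) checks \eqref{eq_num_oper} and the accretivity condition $\Re A>0$ on a sample matrix.

```python
import numpy as np

def num_radius(A, m=4000):
    # w(A) = max_theta lambda_max(Re(e^{i theta} A)), sampled on a theta-grid.
    thetas = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * th) * A
                                   + np.exp(-1j * th) * A.conj().T) / 2)[-1]
               for th in thetas)

rng = np.random.default_rng(3)
# Shift by 5I so that Re A is positive definite, i.e. A is accretive.
A = 5 * np.eye(4) + rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
accretive = np.linalg.eigvalsh((A + A.conj().T) / 2)[0] > 0

w = num_radius(A)
op = np.linalg.norm(A, 2)           # operator (spectral) norm
bounds_hold = (0.5 * op <= w + 1e-3) and (w <= op + 1e-6)
```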
The main goal of this article is to present many new relations for $w(A)$ when $A$ is accretive. Some of these new forms present a new direction in this study, while others can be looked at as refinements of some known results, in a new setting.
When talking about accretive matrices, we need to introduce sectorial matrices. A matrix $A\in\mathcal{M}_n$ is said to be sectorial if, for some $0\leq \alpha<\frac{\pi}{2}$, we have
$$W(A)\subset S_{\alpha}:=\{z\in\mathbb{C}:|\Im z|\leq \tan\alpha\; \Re z\}.$$ The smallest such $\alpha$ will be called the sectorial index of $A$. When $W(A)\subset S_{\alpha},$ we will write $A\in \mathcal{S}_{\alpha}.$ Further, in the sequel, it will be implicitly understood that the notions $S_{\alpha}$ and $\mathcal{S}_{\alpha}$ are defined only when $0\leq \alpha<\frac{\pi}{2}.$
Recently, in \cite{bedr} the operator mean of two accretive matrices $A,B\in \mathcal{M}_n$ has been defined by
\begin{align}\label{eq_def_sigma}
A\sigma_f B=\int^1_0 (A!_sB)\;d\nu_f (s),
\end{align}
where $A!_sB=((1-s)A^{-1}+sB^{-1})^{-1}$ is the harmonic mean of $A,B$, the function $f:(0,\infty)\longrightarrow(0,\infty)$ is an operator monotone function with $f(1)=1$ and $\nu_f$ is the probability measure characterizing $\sigma_f$.
Moreover, the authors of \cite{bedr} also characterized operator monotone functions of an accretive matrix: let $A\in\mathcal{S}_{\alpha}$ and $f:(0,\infty)\longrightarrow(0,\infty)$ be an operator monotone function with $f(1)=1;$ then
\begin{align}\label{eq_def_f(A)}
f(A)=\int^1_0 ((1-s)I+sA^{-1})^{-1}\;d\nu_f (s),
\end{align}
where $\nu_f$ is the probability measure satisfying $f(x)=\int^1_0 ((1-s)+sx^{-1})^{-1}\;d\nu_f (s).$
This definition was motivated by the corresponding definition for positive matrices, and many properties of operator means of accretive matrices were given in \cite{bedr}. In \cite{ftan}, the logarithmic mean of accretive $A,B$ is defined by
\begin{align}\label{loga_matrices}
\mathcal{L}(A,B)=\int^{1}_{0}A\sharp_t B\;dt.
\end{align}
The Heinz mean is defined in \cite{YMao} as
\begin{align}\label{heinz_for_matrices}
\mathcal{H}_t(A,B)=\dfrac{A\sharp_t B+A\sharp_{1-t} B}{2},\;0\leq t\leq 1,
\end{align}
where $\sharp_t$ stands for the weighted geometric mean, which corresponds to the operator monotone function $f(x)=x^t,0\leq t\leq 1.$
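For positive definite matrices (the case $\alpha=0$), the weighted geometric mean has the explicit formula $A\sharp_t B=A^{1/2}\big(A^{-1/2}BA^{-1/2}\big)^tA^{1/2}$, and the classical Heinz inequality $A\sharp B\leq \mathcal{H}_t(A,B)\leq \frac{A+B}{2}$ can be checked numerically. The sketch below is our own illustration of these definitions, not part of the formal development.

```python
import numpy as np

def psd_pow(M, p):
    # M^p for a positive definite M, via the spectral theorem.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** p) @ V.T

def geo_mean(A, B, t):
    # A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}.
    Ah, Aih = psd_pow(A, 0.5), psd_pow(A, -0.5)
    return Ah @ psd_pow(Aih @ B @ Aih, t) @ Ah

rng = np.random.default_rng(5)
n = 4
GA, GB = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = GA @ GA.T + np.eye(n)      # positive definite
B = GB @ GB.T + np.eye(n)      # positive definite

t = 0.3
Ht = (geo_mean(A, B, t) + geo_mean(A, B, 1 - t)) / 2   # Heinz mean H_t(A, B)
# With alpha = 0 the two-sided norm bound reduces to the classical ordering:
lower = np.linalg.norm(geo_mean(A, B, 0.5), 2) <= np.linalg.norm(Ht, 2) + 1e-9
upper = np.linalg.norm(Ht, 2) <= 0.5 * np.linalg.norm(A + B, 2) + 1e-9
```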
When $A\in\mathcal{S}_0,$ we have $w(A)=\|A\|.$ Our first simple observation will be that when $A\in\mathcal{S}_{\alpha},$ we have
$$\cos\alpha\;\|A\|\leq w(A)\leq \|A\|.$$ Notice that this new inequality is better than \eqref{eq_num_oper} when $0\leq \alpha< \frac{\pi}{3}.$ Many extensions of some numerical radius inequalities will be shown for accretive and sectorial matrices, including power inequalities and submultiplicative behavior.
Another set of new inequalities for accretive matrices is the treatment of $w(f(A))$ and $w(A\sigma B)$, where $f$ is an operator monotone function and $\sigma$ is an operator mean. Such inequalities have not been treated in the literature due to the fact that when $A,B$ are positive, $f(A)$ and $A\sigma B$ are positive, and hence their numerical radius and operator norms coincide. So, when $A,B$ are accretive, this presents a new direction.
Many other results will be presented, like subadditivity of the numerical radius, relations among $w(A)$ and $w(\Re A)$, and many others.
For our purpose, we will need the following notation:
$$ \mathfrak{m}=\{f:(0,\infty)\to (0,\infty)\;:\; f\; {\text{is an operator monotone function with}} \;f(1)=1\}.$$
\section{Some preliminary discussion}
In this part of the paper, we discuss some needed results and terminologies related to accretive matrices.
\begin{lemma} \cite{bedr}\label{lemma_real_a_sigma_b_less}
Let $ A, B\in\mathcal{S}_{\alpha} $. If $f\in \mathfrak{m},$ then
\begin{equation}
\Re(A\sigma_f B)\leq \sec^{2}\alpha\;(\Re A)\;\sigma_f\;(\Re B).
\end{equation}
\end{lemma}
\begin{lemma}\label{lemma_ando_zhan}\cite{ando_zhan} Let $A, B\in \mathcal{M}_n$ be two positive matrices. Then, for any non-negative operator monotone function $f$ on $\left[0,\infty \right)$,
\begin{equation}
|||f(A+B)|||\leq |||f(A)+f(B)|||
\end{equation}
\end{lemma}
\begin{lemma}\cite{Ando_2}\label{sigma_norm} Let $A, B\in\mathcal{M}_n$ be positive. If $f\in \mathfrak{m},$ then
\begin{align}
||| A \sigma_f B|||\leq ||| A|||\sigma_f||| B|||.
\end{align}
\end{lemma}
\begin{lemma}\cite{bedr}\label{lemma_f_real_sec_f} Let $ A\in\mathcal{S}_\alpha. $ If $f\in \mathfrak{m},$ then
\begin{equation}
f(\Re A)\leq\Re (f(A))\leq\ \sec^{2}\alpha \;f(\Re A)
\end{equation}
\end{lemma}
\begin{lemma}\cite{bedr}\label{realf_fnorm} Let $ A\in\mathcal{S}_\alpha. $ If $f\in \mathfrak{m},$ then
\begin{align*}
f(\|\Re A\|)\leq\|\Re f(A)\|\leq\sec^2\alpha\;f(\|\Re A\|).
\end{align*}
\end{lemma}
\begin{lemma}\cite{Choi}\label{negtive_power_of_ real} Let $A\in\mathcal{S}_\alpha$ and $t\in[-1,0]$. Then
\begin{align}
\Re A^t\leq \Re^t A \leq \cos^{2t}\alpha\;\Re A^t
\end{align}
\end{lemma}
A companion of Lemma \ref{negtive_power_of_ real}, for powers in $[0,1]$, is as follows.
\begin{lemma}\cite{Choi}\label{positive_power_of_ real} Let $A\in\mathcal{S}_\alpha $ and $t\in[0,1]$. Then
\begin{align}
\cos^{2t}\alpha\;\Re A^t\leq \Re^t A \leq \Re A^t
\end{align}
\end{lemma}
It is well known that for any matrix $A\in\mathcal{M}_n$ and any unitarily invariant norm $|||\cdot|||$ on $\mathcal{M}_n$, we have $|||\Re A|||\leq |||A|||$. The following lemma presents a reversed version of this inequality for sectorial matrices.
\begin{lemma}\cite{Zhang}\label{norm}
Let $ A \in \mathcal{S}_{\alpha} $ and let $ |||\cdot||| $ be any unitarily invariant norm on $\mathcal{M}_n$. Then
\begin{center}
$ \cos\alpha\; ||| A||| \leq\
||| \Re(A)||| \leq |||A|||.$
\end{center}
\end{lemma}
\begin{lemma} \cite{kitt1}\label{nume_real<nemu A} Let $A\in\mathcal{M}_n$. Then
\begin{align}
w(\Re A)\leq w(A).
\end{align}
\end{lemma}
\begin{lemma}\cite{YMao} \label{heinz_norm_bound} Let $ A, B \in \mathcal{S}_{\alpha} $. Then for $t\in (0,1)$,
\begin{align}
\cos^3\alpha\;|||A\sharp B|||\leq |||\mathcal{H}_t(A,B)|||\leq\dfrac{\sec^3\alpha}{2}\;|||A+B|||.
\end{align}
\end{lemma}
\begin{lemma}\cite{drury1}\label{S_alphat} Let $ A \in \mathcal{S}_{\alpha} $ and $t\in (0,1)$. Then $ W(A^t)\subset S_{t\alpha}$.
Also note that $ W(A^{-t})\subset S_{t\alpha}$, by the result stating that if $W(A)\subset S_{\alpha}$ then $W(A^{-1})\subset S_{\alpha}$.
\end{lemma}
\begin{lemma} \cite{abo_omar}\label{abu_omar_kittaneh} Let $ A ,B\in \mathcal{M}_{n} $ be positive matrices. Then
\begin{align}
w\left[ \begin{pmatrix}
0&A\\
B&0
\end{pmatrix}
\right] = \dfrac{1}{2}\|A+B\|.
\end{align}
\end{lemma}
The following two lemmas are well known.
\begin{lemma} \label{max_norm} Let $ A ,B\in \mathcal{M}_{n} $. Then
\begin{align}
\left\| \begin{pmatrix}
0&A\\
B&0
\end{pmatrix}
\right\| = \max(\|A\|,\|B\|).
\end{align}
\end{lemma}
\begin{lemma} \label{lem_negative_power_norm} Let $ A\in \mathcal{M}_{n} $ be invertible. Then
\begin{align}
\|A\|^{-1}\leq \|A^{-1}\|.
\end{align}
\end{lemma}
\begin{lemma}\cite{kubo_ando}\label{monoto_mean} Let $A,B,C,D\in\mathcal{M}_n$ be positive. Then
$A\leq C$ and $B\leq D$ imply $A\sigma B\leq C\sigma D$.
\end{lemma}
\section{Main results}
Now we are ready to present our results, which we organize in three subsections. In the first subsection, we present inequalities for the numerical radii of accretive matrices that extend some well known inequalities for the numerical radius. In the second subsection, we present a new type of numerical radius inequalities that has not been tackled in the literature. The last subsection treats inequalities relating the numerical radius to operator means.
\subsection{Accretive versions of some known numerical radius inequalities}
First, we have the simple accretive version of \eqref{eq_num_oper}.
\begin{proposition} \label{prop_1}
Let $ A \in \mathcal{S}_{\alpha} $. Then
\begin{align}
\cos\alpha\;\|A\|\leq w(A)\leq \|A\|
\end{align}
\end{proposition}
\begin{proof}
Noting that $w(\Re A)=\|\Re A\|$, since $\Re A>0,$ Lemma \ref{norm} implies
\begin{align*}
\cos\alpha\;\|A\|\leq \|\Re A\|=w(\Re A)\leq w(A)\leq\|A\|.
\end{align*}
\end{proof}
\begin{remark}
Notice that when $0<\alpha<\frac{\pi}{3},$ $\cos\alpha>\frac{1}{2}.$ This means that, for such $\alpha$,
$$\frac{1}{2}\|A\|< \cos\alpha\; \|A\|\leq w(A)\leq \|A\|,$$ providing a considerable refinement of the left inequality in \eqref{eq_num_oper}.
\end{remark}
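Proposition \ref{prop_1} can be tested numerically: for $A=P+iH$ with $P>0$ and $H$ Hermitian, the sectorial index satisfies $\tan\alpha=\rho\big(P^{-1/2}HP^{-1/2}\big)$, where $\rho$ is the spectral radius. The Python sketch below (our own construction, not from the text) verifies $\cos\alpha\,\|A\|\leq w(A)\leq\|A\|$ on such an example.

```python
import numpy as np

def num_radius(A, m=4000):
    # w(A) = max_theta lambda_max(Re(e^{i theta} A)), sampled on a theta-grid.
    thetas = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * th) * A
                                   + np.exp(-1j * th) * A.conj().T) / 2)[-1]
               for th in thetas)

def sectorial_index(P, H):
    # tan(alpha) = max_x |<Hx,x>| / <Px,x> = spectral radius of P^{-1/2} H P^{-1/2}.
    w, V = np.linalg.eigh(P)
    Pih = V @ np.diag(w ** -0.5) @ V.T
    return np.arctan(np.abs(np.linalg.eigvalsh(Pih @ H @ Pih)).max())

rng = np.random.default_rng(4)
n = 4
G = rng.standard_normal((n, n))
P = G @ G.T + np.eye(n)                               # Re A, positive definite
H = rng.standard_normal((n, n)); H = (H + H.T) / 2    # Im A, Hermitian
A = P + 1j * H                                        # sectorial matrix

alpha = sectorial_index(P, H)
w, op = num_radius(A), np.linalg.norm(A, 2)
prop1_holds = (np.cos(alpha) * op <= w + 1e-3) and (w <= op + 1e-6)
```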
\begin{corollary} Let $ A \in \mathcal{S}_{\alpha} $. Then for $t\in (-1,1)$,
\begin{align}
\cos t\alpha\;\|A^t\|\leq w(A^t)\leq \|A^t\|.
\end{align}
\end{corollary}
\begin{proof} Proposition \ref{prop_1}, Lemma \ref{norm} and Lemma \ref{S_alphat}, imply the desired result.
\end{proof}
While $w(\Re A)\leq w(A)$ for any matrix $A$, a reversed version can be found via sectorial matrices, as follows.
\begin{corollary} Let $ A \in \mathcal{S}_{\alpha} $. Then
\begin{align}\label{inv_w(rA)}
w(A)\leq \sec\alpha\;w(\Re A).
\end{align}
\end{corollary}
\begin{proof} Let $A\in \mathcal{S}_{\alpha}$. Then $w(\Re A)=\|\Re A\|$, since $\Re A>0.$ Proposition \ref{prop_1} implies
\begin{align*}
w(A)\leq \|A\|\leq \sec\alpha\;\|\Re A\|=\sec\alpha\; w(\Re A).
\end{align*}
\end{proof}
In the next results, we present accretive versions of the well known power inequality \cite{Halmos}
\begin{align}\label{lemma_powers}
w(A^k)\leq w^k(A), A\in\mathcal{M}_n, k=1,2,\cdots.
\end{align}
It should be noted that in \eqref{lemma_powers}, only positive integer powers are treated. Now we add the interval $(0,1)$ to these powers. The significance of these results is the observation that when $A$ is positive, $w(A^t)=\|A^t\|$ for any $t\in (0,1).$ For such powers, we find no version of \eqref{lemma_powers} in the literature. Now we have one that reads as follows.
\begin{theorem} Let $ A \in \mathcal{S}_{\alpha} $. Then, for $t\in(0,1),$
\begin{align}
\cos t\alpha\;\cos^t\alpha\;w^t(A)\leq w(A^t)\leq \sec t\alpha\;\sec^{2t}\alpha\;w^t(A).
\end{align}
\end{theorem}
\begin{proof}
Let $t\in(0,1).$ Then
\begin{align*}
w(A^t)\leq \|A^t\|&\leq\sec t\alpha\;\|\Re A^t\| \hspace{1cm}\text{(by Lemma \ref{norm})}\\
&\leq \sec t\alpha\;\sec^{2t}\alpha\;\|\Re^t A\|\hspace{1cm}\text{(by Lemma \ref{positive_power_of_ real})}\\
&= \sec t\alpha\;\sec^{2t}\alpha\;\|\Re A\|^t\\
&=\sec t\alpha\;\sec^{2t}\alpha\;w^t(\Re A)\\
&\leq \sec t\alpha\;\sec^{2t}\alpha\;w^t(A).\hspace{1cm}\text{(by Lemma \ref{nume_real<nemu A})}
\end{align*}
Thus, we have shown the second inequality. To show the first inequality, we have
\begin{align*}
w(A^t)\geq \cos t\alpha\;\|A^t\|&\geq\cos t\alpha\;\|\Re A^t\| \hspace{1cm}\text{(by Lemma \ref{norm})}\\
&\geq \cos t\alpha\;\|\Re^t A\|\hspace{1cm}\text{(by Lemma \ref{positive_power_of_ real})}\\
&=\cos t\alpha\;\|\Re A\|^t\\
&\geq\cos t\alpha\;\cos^t \alpha\;\|A\|^t\hspace{1cm}\text{(by Lemma \ref{norm})}\\
&\geq\cos t\alpha\;\cos^t \alpha\;w^t(A),\hspace{0.7cm}\text{(by \eqref{eq_num_oper})}.
\end{align*}
This completes the proof.
When $A$ is positive, then $\alpha=0,$ and we obtain the well known equality $\|A^t\|=\|A\|^t$.
\end{proof}
On the other hand, a negative-power version of \eqref{lemma_powers} can be stated as follows.
\begin{theorem} Let $ A \in \mathcal{S}_{\alpha} $. Then, for $t\in[0,1],$
\begin{align}\label{ine_negative_power}
\cos t\alpha\;\cos^{2t} \alpha\;w^{-t}(A)\leq w(A^{-t}).
\end{align}
\end{theorem}
\begin{proof} For such $t\in[0,1]$, we have
\begin{align*}
w(A^{-t})\geq \cos t\alpha\;\|A^{-t}\|&\geq \cos t\alpha\;\|\Re A^{-t}\| \hspace{1.2cm}\text{(by Lemma\; \ref{norm})}\\
&\geq \cos t\alpha\;\cos^{2t}\alpha\;\|\Re^{-t} A\|\hspace{1cm}\text{(by Lemma\; \ref{negtive_power_of_ real})}\\
&\geq\cos t\alpha\;\cos^{2t}\alpha\;\|\Re A\|^{-t}\hspace{1.2cm}\text{(by Lemma\;\ref{lem_negative_power_norm})}\\
&=\cos t\alpha\;\cos^{2t}\alpha\;w^{-t}(\Re A)\\
&\geq \cos t\alpha\;\cos^{2t}\alpha\;w^{-t}(A),\hspace{1cm}\text{(by Lemma\; \ref{nume_real<nemu A})}
\end{align*}
completing the proof.
When $A$ is positive, then $\alpha=0,$ and we obtain the well known inequality $\|A\|^{-t}\leq\|A^{-t}\|$, for $t\in[0,1].$
\end{proof}
In particular, we have the following interesting inverse relation. It should be noted that in general, we have no relation between $w^{-1}(A)$ and $w(A^{-1}).$ Now we have the following accretive version.
\begin{corollary} \label{wA^-1} Let $ A\in \mathcal{S}_{\alpha}. $ Then
\begin{align}
\cos^{3}\alpha\;w^{-1}(A)\leq w(A^{-1}).
\end{align}
\end{corollary}
\begin{proof} Let $t=1$ in \eqref{ine_negative_power}.
\end{proof}
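Corollary \ref{wA^-1} can be checked numerically with the same sectorial construction $A=P+iH$, $P>0$, $H$ Hermitian, and the characterization $w(A)=\max_{\theta}\lambda_{\max}(\Re(e^{i\theta}A))$; the sketch below is our own illustration.

```python
import numpy as np

def num_radius(A, m=4000):
    thetas = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * th) * A
                                   + np.exp(-1j * th) * A.conj().T) / 2)[-1]
               for th in thetas)

rng = np.random.default_rng(6)
n = 4
G = rng.standard_normal((n, n))
P = G @ G.T + np.eye(n)                               # Re A > 0, so A is invertible
H = rng.standard_normal((n, n)); H = (H + H.T) / 2
A = P + 1j * H

# Sectorial index: tan(alpha) = spectral radius of P^{-1/2} H P^{-1/2}.
w_, Vp = np.linalg.eigh(P)
Pih = Vp @ np.diag(w_ ** -0.5) @ Vp.T
alpha = np.arctan(np.abs(np.linalg.eigvalsh(Pih @ H @ Pih)).max())

# Check cos^3(alpha) w^{-1}(A) <= w(A^{-1}).
cor_holds = (np.cos(alpha) ** 3 / num_radius(A)
             <= num_radius(np.linalg.inv(A)) + 1e-4)
```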
In the following result, we present a new submultiplicative inequality for the numerical radius. Recall that for general $A,B\in\mathcal{M}_n$, one has $w(AB)\leq 4w(A)w(B).$ When $A$ and $B$ commute, the factor 4 can be reduced to 2, while it can be reduced to 1 when $A$ and $B$ are normal \cite[p.~114]{Halmos}. The following result presents a new bound that is better than these bounds for $0<\alpha<\frac{\pi}{3}.$
\begin{theorem} \label{nemu_for_AB}Let $ A, B \in\mathcal{S}_{\alpha}$. Then
\begin{align}
w(AB)\leq \sec^2\alpha\;w(A)w(B).
\end{align}
\end{theorem}
\begin{proof} We have
\begin{align*}
w(AB)&\leq\|AB\|\hspace{2.4cm}\text{(by \eqref{eq_num_oper})}\\
&\leq \|A\| \|B\|\\
&\leq \sec^2\alpha\; \|\Re A\| \|\Re B\|\hspace{1.2cm}\text{(by Lemma\; \ref{norm})}\\
&=\sec^2\alpha\; w(\Re A)w(\Re B)\\
&\leq\sec^2 \alpha\;w(A)w(B),\hspace{1.2cm}\text{(by Lemma\; \ref{nume_real<nemu A})}\\
\end{align*}
which completes the proof.
When $A,B$ are positive, then $\alpha=0,$ and we obtain the well known inequality $w(AB)\leq w(A)w(B).$
\end{proof}
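Theorem \ref{nemu_for_AB} can also be tested numerically: we generate two sectorial matrices, take $\alpha$ to be the larger of their sectorial indices (so that both lie in $\mathcal{S}_{\alpha}$), and check the bound. The sketch below is our own illustration.

```python
import numpy as np

def num_radius(A, m=4000):
    thetas = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * th) * A
                                   + np.exp(-1j * th) * A.conj().T) / 2)[-1]
               for th in thetas)

def make_sectorial(rng, n):
    # A = P + iH with P > 0; sectorial index via rho(P^{-1/2} H P^{-1/2}).
    G = rng.standard_normal((n, n))
    P = G @ G.T + np.eye(n)
    H = rng.standard_normal((n, n)); H = (H + H.T) / 2
    w, V = np.linalg.eigh(P)
    Pih = V @ np.diag(w ** -0.5) @ V.T
    alpha = np.arctan(np.abs(np.linalg.eigvalsh(Pih @ H @ Pih)).max())
    return P + 1j * H, alpha

rng = np.random.default_rng(8)
A, aA = make_sectorial(rng, 4)
B, aB = make_sectorial(rng, 4)
alpha = max(aA, aB)                     # A, B both lie in S_alpha

# Check w(AB) <= sec^2(alpha) w(A) w(B).
thm_holds = (num_radius(A @ B)
             <= num_radius(A) * num_radius(B) / np.cos(alpha) ** 2 + 1e-3)
```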
\subsection{The numerical radius and operator monotone functions}
In this subsection, we discuss a new type of numerical radius inequalities, in which relations between $w(f(A))$ and $f(w(A))$ are found. Before studying such inequalities, we first prove that when $A\in\mathcal{S}_{\alpha}$ and $f\in\mathfrak{m}$, then $f(A)\in\mathcal{S}_{\alpha}.$ This follows from the following.
\begin{proposition}\label{prop_asigmab_s}
Let $A,B\in\mathcal{S}_{\alpha}$ and let $f\in\mathfrak{m}.$ Then $A\sigma_fB\in \mathcal{S}_{\alpha}.$
\end{proposition}
\begin{proof}
Let $A,B\in \mathcal{S}_{\alpha}$ and notice that \cite[Definition 4.1]{bedr}
$$A\sigma_fB=\int_{0}^{1}(A!_{s}B)d\nu_f(s)\;\;({\text{see}}\;\eqref{eq_def_sigma})$$ for some probability measure $\nu_f$ on $[0,1].$ Then for any unit vector $x\in\mathbb{C}^n$, we have
\begin{align*}
\left<A\sigma_fBx,x\right>&=\int_{0}^{1}\left<(A!_sB)x,x\right>d\nu_f(s)\\
&=\int_{0}^{1}h(s)d\nu_f(s)\;({\text{where}}\;h(s)=\left<(A!_sB)x,x\right>)\\
&=c+id,
\end{align*}
where
$$c=\Re \int_{0}^{1}h(s)d\nu_f(s), \quad d=\Im \int_{0}^{1}h(s)d\nu_f(s).$$ We notice that for each $s\in [0,1]$, $h(s)\in S_{\alpha}$ since $A,B\in \mathcal{S}_{\alpha}.$ This is due to the fact that $S_{\alpha}$ is invariant under inversion and addition. To show that $A\sigma_fB\in \mathcal{S}_{\alpha},$ we need to show that $\left<(A\sigma_fB)x,x\right>\in S_{\alpha},$ or $|d|\leq \tan (\alpha)\,c.$ In fact, we have
\begin{align*}
|d|&=\left|\Im \int_{0}^{1}h(s)d\nu_f(s)\right|\\
&\leq \int_{0}^{1}\left|\Im h(s)\right|d\nu_f(s)\\
&\leq \int_{0}^{1}\tan(\alpha)\Re h(s)d\nu_f(s)\;\;(\text{since}\;h(s)\in S_{\alpha})\\
&=\tan(\alpha)c.
\end{align*}
This shows that $A\sigma_fB\in \mathcal{S}_{\alpha}$ and completes the proof.
\end{proof}
Noting \eqref{eq_def_f(A)}, we have
\begin{align*}
I\sigma_f A=f(A), f\in\mathfrak{m}.
\end{align*}
Then Proposition \ref{prop_asigmab_s} implies the following.
\begin{corollary}\label{cor_f(A)_sect}
Let $A\in\mathcal{S}_{\alpha}$ and $f\in\mathfrak{m}$. Then $f(A)\in\mathcal{S}_{\alpha}.$
\end{corollary}
As a special case, we have the following.
\begin{corollary}\label{cor_at_s}
Let $A\in \mathcal{S}_{\alpha}$ and $t\in (0,1)$. Then $A^t\in \mathcal{S}_{\alpha}.$
\end{corollary}
It should be noted that in \cite{drury1}, it is shown that if $A\in\mathcal{S}_{\alpha},$ then $A^t\in\mathcal{S}_{t\alpha}, t\in (0,1),$ a stronger version of Corollary \ref{cor_at_s}.\\
Now we are ready to present the following new relation that allows switching the numerical radius and operator monotone functions.
\begin{theorem} Let $ A\in\mathcal{S}_{\alpha}$. If $ f\in\mathfrak{m},$ then
\begin{align}\label{w(f)_leq_f(w)}
\cos\alpha\;f(w(A))\leq w(f(A))\leq \sec^3\alpha\;f(w(A)).
\end{align}
\end{theorem}
\begin{proof} First we note that every $f\in\mathfrak{m}$ is non-negative and concave, so $f(sx)\geq sf(x)$ for every $0\leq s\leq 1$ and $x>0$. Next we estimate the first inequality:
\begin{align*}
\cos\alpha\;f(w(A))&\leq f(\cos\alpha\;w(A))\\
&\leq f(w(\Re A))\hspace{2cm}\text{(by \eqref{inv_w(rA)})}\\
&=f(\|\Re A\|)\\
&\leq\|\Re f(A)\|\hspace{2cm}\text{(by Lemma\; \ref{realf_fnorm})}\\
&=w(\Re f(A))\leq w(f(A)).
\end{align*}
Thus, we have shown the first inequality. To show the second inequality, noting Corollary \ref{cor_f(A)_sect}, we have
\begin{align*}
w(f(A))\leq \|f(A)\|&\leq \sec\alpha\;\|\Re f(A)\|\hspace{1.2cm}\text{(by Lemma\; \ref{norm})}\\
&\leq \sec^3\alpha\; f(\|\Re A\|)\hspace{1.2cm}\text{(by Lemma \ref{realf_fnorm})}\\
&=\sec^3\alpha\; f(w(\Re A))\\
&\leq \sec^3\alpha\; f(w(A)),
\end{align*}
where we have used the fact that $f$ is monotone to obtain the last inequality.
This completes the proof.
\end{proof}
When $A$ is positive, then $\alpha=0,$ and we recover the well-known identity $\|f(A)\|= f(\|A\|).$
\begin{proposition} \label{w_concavity} Let $ A, B\in\mathcal{S}_{\alpha}$. If $ f\in\mathfrak{m},$ then for $\lambda\in (0,1)$,
\begin{align}
w((1-\lambda)f(A)+\lambda f(B))\leq \sec^3\alpha\;f((1-\lambda)w(A)+\lambda w(B)).
\end{align}
\end{proposition}
\begin{proof} We have
\begin{align*}
w((1-\lambda)f(A)+\lambda f(B))&\leq (1-\lambda)w(f(A))+\lambda w(f(B))\\
&\leq \sec^3\alpha\;\left( (1-\lambda)f(w(A))+\lambda f(w(B))\right) \hspace{2cm} \text{(by \eqref{w(f)_leq_f(w)})}\\
&\leq \sec^3\alpha\; f((1-\lambda)w(A)+\lambda w(B)),
\end{align*}
where we have used the fact that $f$ is concave to obtain the last inequality.
This completes the proof.
\end{proof}
\begin{corollary} \label{exm} Let $ A, B\in\mathcal{S}_{\alpha}$. Then, for $0<t<1,$
\begin{align*}
w(A^t +B^t)\leq 2^{1-t}\sec^3\alpha\;\left( w(A)+w(B)\right)^t.
\end{align*}
\end{corollary}
\begin{proof} In Proposition \ref{w_concavity}, let $f(x)=x^t$ with $t\in (0,1)$ and $\lambda=\dfrac{1}{2}$; multiplying the resulting inequality by $2$ yields the desired bound.
\end{proof}
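We remark that the constant in Corollary \ref{exm} cannot be improved in general: when $\alpha=0$ and $A=B$ is positive, both sides equal $2\|A\|^t$, since
$$w(A^t+B^t)=\|2A^t\|=2\|A\|^t\quad\text{and}\quad 2^{1-t}\left( w(A)+w(B)\right)^t=2^{1-t}\left(2\|A\|\right)^t=2\|A\|^t.$$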
On the other hand, we show next a subadditive inequality for the numerical radius with operator monotone functions. This is the numerical radius version of the celebrated result stating that
$$|||f(A+B)|||\leq |||f(A)+f(B)|||,\quad A,B\geq 0,\; f\in\mathfrak{m},$$ shown in \cite{ando_zhan} for any unitarily invariant norm $|||\cdot|||$ on $\mathcal{M}_n;$ note that the numerical radius is not a unitarily invariant norm.
\begin{theorem}\label{nemu_f} Let $ A, B\in\mathcal{S}_{\alpha}$. If $ f\in\mathfrak{m},$ then
\begin{align}
w(f(A+B))\leq \sec^3\alpha\;\left( w(f(A))+w(f(B))\right) .
\end{align}
\end{theorem}
\begin{proof} We have the following chain of inequalities:
\begin{align*}
w\left( f\left( A+B\right) \right)\leq \| f\left( A+B\right)\|&\leq \sec\alpha\;\|\Re f\left( A+B\right)\|\hspace{1.2cm}\text{(by Lemma\; \ref{norm})}\\
&\leq \sec^3\alpha\;\| f\left( \Re A+\Re B\right)\|\hspace{1.2cm}\text{(by Lemma\; \ref{lemma_f_real_sec_f})}\\
&\leq \sec^3\alpha\;\| f\left(\Re A\right) +f\left(\Re B\right)\|\hspace{1.2cm}\text{(by Lemma\; \ref{lemma_ando_zhan})}\\
&\leq \sec^3\alpha\;\| \Re\left(f(A) + f(B)\right)\|\hspace{1.2cm}\text{(by Lemma\; \ref{lemma_f_real_sec_f})}\\
&=\sec^3\alpha\; w\left( \Re\left(f(A) + f(B)\right)\right) \\
&\leq \sec^3\alpha\; w\left( f(A) + f(B)\right) \hspace{1.2cm}\text{(by Lemma\; \ref{nume_real<nemu A})},
\end{align*}
which completes the proof.
\end{proof}
\begin{corollary} \label{cor_f(A+B)}Let $ A, B\in\mathcal{S}_{\alpha}$. Then, for $0<t<1,$
\begin{align}
w((A+B)^t)\leq \sec^{3}\alpha\;w(A^t+B^t).
\end{align}
\end{corollary}
\begin{proof} This is an immediate consequence of Theorem \ref{nemu_f}, upon taking $f(x)=x^t$ for $t\in(0,1)$.
\end{proof}
When $A,B$ are positive, then $\alpha=0,$ and we recover the well-known inequality $\|(A+B)^t\|\leq \|A^t+B^t\|$ for $t\in[0,1]$.
\begin{corollary} Let $ A, B\in\mathcal{S}_{\alpha}$. Then, for $0<t<1,$
\begin{align}
\cos^{3}\alpha\;w((A+B)^t)\leq w(A^t+B^t)\leq 2^{1-t}\sec^3\alpha\;\left( w(A)+w(B)\right)^t.
\end{align}
\end{corollary}
\begin{proof}
This follows from Corollary \ref{exm} and Corollary \ref{cor_f(A+B)}.
\end{proof}
When $A,B\in\mathcal{M}_n$ are positive, then clearly $w(A+B)\geq \max(w(A),w(B)).$ If either $A$ or $B$ is not positive, this inequality can fail; already for the $1\times 1$ matrices $A=(1)$ and $B=(-1)$ we have $w(A+B)=0<1=\max(w(A),w(B))$. However, when $A,B$ are sectorial, we have the following version.
\begin{theorem} Let $ A, B\in\mathcal{S}_{\alpha}$. Then
\begin{align}
\cos^2\alpha\;\max(w(A),w(B))\leq w(A+B).
\end{align}
\end{theorem}
\begin{proof} Let $A, B\in\mathcal{S}_{\alpha}$. Then
\begin{align*}
w(A+B)&\geq \cos\alpha\;\|A+B\| \hspace{2cm} \text{(by Proposition\; \ref{prop_1})}\\
&\geq\cos\alpha\;\|\Re A+\Re B\| \hspace{2cm} \text{(by Lemma\; \ref{norm})}\\
&= 2\cos\alpha\;w\left( \begin{pmatrix}
0&\Re A\\
\Re B&0
\end{pmatrix}\right) \hspace{2cm} \text{(Lemma\;\ref{abu_omar_kittaneh})}\\
&\geq\cos\alpha\;\left\|\begin{pmatrix}
0&\Re A\\
\Re B&0
\end{pmatrix}\right\|\hspace{2cm}\text{(by \; \eqref{eq_num_oper})}\\
&=\cos\alpha\;\max\left( \|\Re A\|,\|\Re B\|\right) \hspace{2cm} \text{(by Lemma\; \ref{max_norm})}\\
&=\cos\alpha\;\max\left( w(\Re A),w(\Re B)\right) \\
&\geq\cos^2\alpha\;\max\left( w(A),w(B)\right),\hspace{2cm} \text{(by \eqref{inv_w(rA)})}
\end{align*}
completing the proof.
\end{proof}
\subsection{The numerical radius and operator mean}
In this part of the paper, we present another new type of numerical radius inequalities, where the numerical radius of operator means is discussed. When $A,B\in\mathcal{M}_n$ are positive, then for any operator mean $\sigma$, one has $$A\sigma B>0\Rightarrow w(A\sigma B)=\|A\sigma B\|.$$ This makes the study of numerical radius inequalities of means of positive matrices trivial.
We now present the following numerical radius inequality for the operator mean of sectorial matrices.
\begin{theorem} Let $ A, B\in\mathcal{S}_{\alpha}$. If $ f\in\mathfrak{m}$, then
\begin{align}\label{num_of_sigma}
w(A \sigma_f B)\leq\sec^3\alpha\;\left( w(A)\sigma_f w(B)\right).
\end{align}
\end{theorem}
\begin{proof}
Noting Proposition \ref{prop_asigmab_s}, we have
\begin{align*}
w(A \sigma_f B)&\leq \|A \sigma_f B\|\\
&\leq \sec\alpha\;\|\Re(A \sigma_f B)\|\hspace{1.2cm}\text{(by Lemma\; \ref{norm})}\\
&\leq\sec^3\alpha\;\|\Re(A)\sigma_f \Re(B)\|\hspace{1.2cm}\text{(by Lemma\; \ref{lemma_real_a_sigma_b_less})}\\
&\leq \sec^3\alpha\;(\|\Re(A)\|\;\sigma_f\;\| \Re(B)\|)\hspace{1.2cm}\text{(by Lemma\; \ref{sigma_norm})}\\
&= \sec^3\alpha\;(w(\Re A)\;\sigma_f\;w(\Re B))\\
&\leq \sec^3\alpha\;(w(A)\;\sigma_f\; w(B)),\hspace{1.2cm}\text{(by Lemma\; \ref{nume_real<nemu A}\; and Lemma\; \ref{monoto_mean})}
\end{align*}
which completes the proof.
\end{proof}
In particular, when $A,B\in\mathcal{M}_n$ are positive, then we may select $\alpha=0$, and \eqref{num_of_sigma} implies the known inequality
$$\|A\sigma_fB\|\leq \|A\|\sigma_f\|B\|.$$
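For example, for the arithmetic mean, which corresponds to the choice $f(x)=\frac{1+x}{2}$, this reduces to the triangle inequality
$$\left\|\frac{A+B}{2}\right\|\leq\frac{\|A\|+\|B\|}{2}.$$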
\begin{corollary} Let $ A, B\in\mathcal{S}_{\alpha}$. Then, for $0<t<1,$
\begin{equation}\label{nume_sharp_inq}
w(A\sharp_{t}B)\leq \sec^{3}\alpha\;w^{1-t}(A)w^t(B).
\end{equation}
\end{corollary}
\begin{proof} Applying \eqref{num_of_sigma} with $\sigma_f =\sharp_t,$ where $t\in(0,1)$, we obtain
\begin{align*}
w(A\sharp_{t}B)&\leq \sec^3\alpha\;(w(A)\;\sharp_t\; w(B))\\
&=\sec^{3}\alpha\;w^{1-t}(A)w^t(B),
\end{align*}
where the last equality follows from the definition of $\sharp_t$.
This completes the proof.
\end{proof}
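We remark that \eqref{nume_sharp_inq} is sharp when $\alpha=0$: for positive scalars $A=(a)$ and $B=(b)$ we have
$$w(A\sharp_{t}B)=a^{1-t}b^t=w^{1-t}(A)w^t(B),$$
so equality holds without any extra constant.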
The logarithmic mean satisfies a similar property, as follows.
\begin{theorem}
Let $ A, B \in \mathcal{S}_{\alpha} $. Then, for $0<t<1,$
\begin{align}
w(\mathcal{L}(A,B))\leq \sec^3\alpha\;\mathcal{L}(w(A),w(B)).
\end{align}
\end{theorem}
\begin{proof} By definition of the logarithmic mean \eqref{loga_matrices}, we get
\begin{align*}
w(\mathcal{L}(A,B))&=w\left( \int^{1}_{0}A\sharp_{t}B\;dt\right) \\
&\leq \int^{1}_{0}w(A\sharp_{t}B)\;dt\\
&\leq \sec^3\alpha\;\int^{1}_{0}w^{1-t}(A)w^t(B)\;dt\hspace{1.2cm}\text{(by\; \eqref{nume_sharp_inq})}\\
&=\sec^3\alpha\;\mathcal{L}(w(A),w(B)),
\end{align*}
completing the proof.
\end{proof}
The Heinz means follow the same theme too.
\begin{theorem} Let $A, B\in \mathcal{S}_\alpha$. Then for $t\in(0,1),$
\begin{align}
w(\mathcal{H}_t(A,B))\leq \sec^3\alpha\;\mathcal{H}_t(w(A),w(B)).
\end{align}
\end{theorem}
\begin{proof}Compute
\begin{align*}
w(\mathcal{H}_t(A,B))&=w\left(\dfrac{A\sharp_t B+A\sharp_{1-t} B}{2}\right) \hspace{4.2cm}\text{(by \;\eqref{heinz_for_matrices})}\\
&\leq \dfrac{w(A\sharp_t B)+w(A\sharp_{1-t} B)}{2}\\
&\leq \dfrac{\sec^3\alpha}{2}\left(w^{1-t}(A)w^t(B)+w^{t}(A)w^{1-t}(B)\right) \hspace{1.2cm}\text{(by\; \eqref{nume_sharp_inq})}\\
&=\sec^3\alpha\;\mathcal{H}_t(w(A),w(B)).
\end{align*}
The proof is complete.
\end{proof}
A Heinz-type inequality for the numerical radii of accretive matrices may be stated as follows.
\begin{theorem} Let $A, B\in \mathcal{S}_\alpha$. Then for $t\in(0,1),$
\begin{align}
\cos^4\alpha\;w(A\sharp B)\leq w(\mathcal{H}_t(A,B))\leq \dfrac{\sec^4\alpha}{2}\;w(A+B).
\end{align}
\end{theorem}
\begin{proof} We prove the first inequality.
\begin{align*}
w(A\sharp B)&\leq\;\|A\sharp B\| \hspace{2.2cm}\text{(by Proposition\; \ref{prop_1})}\\
&\leq \sec^3\alpha\;\|\mathcal{H}_t(A,B)\|\hspace{2.2cm}\text{(by Lemma\; \ref{heinz_norm_bound})}\\
&\leq \sec^4\alpha\;w(\mathcal{H}_t(A,B)).\hspace{2.2cm}\text{(by Proposition\; \ref{prop_1})}
\end{align*}
We now prove the second inequality.
\begin{align*}
w(\mathcal{H}_t(A,B))&\leq \|\mathcal{H}_t(A,B)\|\hspace{2.2cm}\text{(by Proposition\; \ref{prop_1})}\\
&\leq \sec^3\alpha\;\left\|\dfrac{A+B}{2}\right\|\hspace{2.2cm}\text{(by Lemma\; \ref{heinz_norm_bound})}\\
&\leq \dfrac{\sec^4\alpha}{2}\;w(A+B).\hspace{2.2cm}\text{(by Proposition\; \ref{prop_1})}
\end{align*}
The proof is complete.
\end{proof}
\begin{corollary} Let $A, B\in \mathcal{S}_\alpha$. Then for $t\in(0,1),$
\begin{align}
\cos\alpha\;w^{\frac{1}{2}}(AB)\leq \mathcal{H}_t(w(A),w(B)).
\end{align}
\end{corollary}
\begin{proof} By Theorem \ref{nemu_for_AB}, we get
\begin{align*}
\cos\alpha\;w^{\frac{1}{2}}(AB)\leq \sqrt{w(A)w(B)}\leq\mathcal{H}_t(w(A),w(B)).
\end{align*}
\end{proof}
{\tiny \vskip 1 true cm }
{\tiny (Y. Bedrani) Department of Mathematics, The University of Jordan, Amman, Jordan.
\textit{E-mail address:} \bf{[email protected]}}
{\tiny \vskip 0.3 true cm }
{\tiny (F. Kittaneh) Department of Mathematics, The University of Jordan, Amman, Jordan.
\textit{E-mail address:} \bf{[email protected]}}
{\tiny \vskip 0.3 true cm }
{\tiny (M. Sababheh) Dept. of Basic Sciences, Princess Sumaya University for Tech., Amman 11941, Jordan.
\textit{E-mail address:} \bf{[email protected]}}
{\tiny \vskip 0.3 true cm }
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We show that for any $2$-local colouring of the edges of the balanced complete bipartite graph $K_{n,n}$, its vertices can be covered with at most~$3$ disjoint monochromatic paths. Moreover, we can cover almost all vertices of any $r$-locally coloured complete or balanced complete bipartite graph with $O(r^2)$ disjoint monochromatic cycles.\\
We also determine the $2$-local bipartite Ramsey number of a path almost exactly: Every $2$-local colouring of the edges of $K_{n,n}$ contains a monochromatic path on $n$ vertices.
\\
\noindent \textbf{MSC:} 05C38, 05C55.
\end{abstract}
\section{Introduction}
The problem of partitioning a graph into few monochromatic paths or cycles, first formulated explicitly in the beginning of the 80's~\cite{Gya83},
has lately received a fair amount of attention. Its origin lies in Ramsey theory, and its subject is complete graphs (later
substituted with other types of graphs) whose edges are coloured with $r$ colours. Call such a colouring an $r$-colouring;
note that
this need not be a proper edge-colouring. The challenge is now to find a small number of disjoint monochromatic paths, which
together cover the vertex set of the underlying graph. Or, instead of disjoint monochromatic paths, we might ask for disjoint monochromatic cycles.
Here, single vertices and edges count as cycles. Such a cover is called a monochromatic path partition, or a monochromatic cycle partition, respectively.
It is not difficult to construct $r$-colourings that do not allow for partitions into less than $r$ paths, or cycles.\footnote{For instance,
take vertex sets $V_1,\ldots, V_r$ with $|V_i|=2^i$, and for $i\leq j$ give all $V_i$--$V_j$ edges colour $i$.}
At first, the problem was studied mostly for $r=2$, and the complete graph~$K_n$ as the host graph. In this situation, a partition into two disjoint paths always exists~\cite{GG67},
regardless of the size of $n$. Moreover, we can require these paths to have different colours. An extension of this fact, namely that every $2$-colouring of $K_n$ has a partition into two monochromatic
cycles of different colours was conjectured by Lehel, and verified by Bessy and Thomass\'e~\cite{BT10}, after
preliminary work for large $n$~\cite{All08,LRS98}.
A generalisation of these two results for other values of $r$, i.e.~that any $r$-coloured $K_n$ can be partitioned into $r$ monochromatic paths, or into $r$ monochromatic cycles, was conjectured by Gy\'arf\'as~\cite{Gya89} and by Erd\H os, Gy\'arf\'as and Pyber~\cite{EGP91}, respectively. The conjecture for cycles was recently disproved by Pokrovskiy~\cite{Pok14}. He gave counterexamples for all $r\geq 3$, but he also showed that the conjecture for paths is true for $r=3$.
Gy\'arf\'as, Ruszink\'o, S\'ark\"ozy and Szemer\'edi~\cite{GRSS06} showed that any $r$-coloured $K_n$ can be partitioned into $O(r\log r)$ monochromatic cycles, improving an earlier bound from~\cite{EGP91}.
Monochromatic path/cycle partitions have also been studied for bipartite graphs, mainly for $r=2$.
A $2$-colouring of $K_{n,n}$ is called
a split colouring if
there is a colour-preserving homomorphism from the edge-coloured $K_{n,n}$ to a properly edge-coloured $K_{2,2}$. Note that any split colouring allows for a partition into three paths, but not always into two. However, split colourings are the only `problematic' colourings, as the following result shows.
\begin{theorem}[Pokrovskiy~\cite{Pok14}]
\label{thm:pokrovskiy}
Let the edges of $K_{n,n}$ be coloured with 2 colours; then $K_{n,n}$ can be partitioned into two paths of distinct colours or the colouring is split.
\end{theorem}
Split colourings can be generalised to more colours~\cite{Pok14}. This gives a lower bound of $2r-1$ on the path/cycle partition number for $K_{n,n}$. For $r=3$, this bound is asymptotically correct~\cite{LSS15}.
For an upper bound, Peng, R\"odl and Ruci\'nski~\cite{PRR02} showed that any $r$-coloured $K_{n,n}$ can be partitioned into $O(r^2 \log r)$ monochromatic cycles, improving a result of Haxell~\cite{Hax97}. We improve this bound to $O(r^2)$.
\begin{theorem}\label{thm:bip}
For every $r \geq 1$ there is an $n_0$ such that for $n \geq n_0$, for any $r$-locally coloured $K_{n,n}$, we need at most $4 r^2$ disjoint monochromatic cycles to cover all its vertices.
\end{theorem}
Theorem~\ref{thm:bip} follows immediately from Theorem~\ref{thm:complete} (b) below.
Let us mention that the monochromatic cycle partition problem has also been studied for multipartite graphs~\cite{SS14}, and for
arbitrary graphs~\cite{BBG+14,Sar11}, or replacing paths or cycles with other graphs~\cite{GSS+12,SS00,SSS12b}.
Our main focus in this paper is on monochromatic cycle partitions for {\it local colourings} (Theorem~\ref{thm:bip} being only a side-product of our local colouring results). Local colourings are a natural way to generalise $r$-colourings.
A colouring is $r$-local if no vertex is incident with edges of more than $r$ distinct colours. Local colourings have appeared mostly in the context of Ramsey theory~\cite{Bie03,CT93,GLN+87,GLS+87,RT97,Sch97,Tru92,TT87}.
With respect to monochromatic path or cycle partitions,
Conlon and Stein~\cite{CS16} recently generalised some of the above mentioned results to $r$-local colourings. They show that for any $r$-local colouring of $K_n$, there is a partition into $O(r^2 \log r)$ monochromatic cycles, and, if $r=2$, then two cycles suffice.
In this paper we improve their general bound for complete graphs, and give the first bound for monochromatic cycle partitions in bipartite graphs. In both cases, $O(r^2)$ cycles suffice.
\begin{theorem}\label{thm:complete}
For every $r \geq 1$ there is an $n_0$ such that for $n \geq n_0$ the following holds.
\begin{enumerate}[(a)]
\item If $K_{n}$ is $r$-locally coloured, then its vertices can be covered with at most $2 r^2$ disjoint monochromatic cycles.
\item If $K_{n,n}$ is $r$-locally coloured, then its vertices can be covered with at most~$4 r^2$ disjoint monochromatic cycles.
\end{enumerate}
\end{theorem}
We do not believe our results are best possible, but suspect that in both cases ($K_n$ and $K_{n,n}$), the number of cycles needed should be linear in $r$.
\begin{conjecture}
There is a $c$ such that for every $r$, every $r$-local colouring of $K_n$ admits a covering with $cr$ disjoint monochromatic cycles. The same should hold when replacing $K_n$ with $K_{n,n}$.
\end{conjecture}
Our second result is a generalisation of Theorem~\ref{thm:pokrovskiy} to local colourings:
\begin{theorem}
\label{thm:bipartite-2-local-paths-partition}
Let the edges of $K_{n,n}$ be coloured 2-locally. Then $K_{n,n}$ can be partitioned into at most 3 monochromatic paths.
\end{theorem}
So, in terms of monochromatic path partitions, it does not matter whether our graph is 2-locally coloured, or if the total number of colours is 2. For more colours this might be different, but we have not been able to construct $r$-local colourings of $K_{n,n}$ which need more than $2r-1$ monochromatic paths for covering the vertices.
We prove Theorem~\ref{thm:complete} in Section~\ref{sec:2} and Theorem~\ref{thm:bipartite-2-local-paths-partition} in Section~\ref{sec:2-local-path-partition}. These proofs are totally independent of each other.
\medskip
Theorem~\ref{thm:bipartite-2-local-paths-partition} relies on a structural lemma for $2$-local colourings, Lemma~\ref{3possiblecolourings}. This lemma has a second application in local Ramsey theory.
As mentioned above, some effort has gone into extending Ramsey theory to local
colourings.
In particular, Gy\'arf\'as et al.~\cite{GLS+87} determine the $2$-local Ramsey number of the path $P_n$. This number is defined as the smallest number $m$ such that any $2$-local colouring of $K_m$ contains a monochromatic path $P_n$. In~\cite{GLS+87}, it is shown that the $2$-local Ramsey number of the path $P_n$ is $\lceil\frac 32n -1\rceil$. Thus the usual $2$-colour Ramsey number of the path, which is $\lfloor\frac 32n -1\rfloor$, and the $2$-local Ramsey number of the path $P_n$ differ by at most 1 (depending on the parity of $n$).
The {\it bipartite} $2$-colour Ramsey number of the path $P_n$ is defined as a pair $(m_1,m_2)$, with $m_1\geq m_2$ such that for any pair $m_1',m_2'$ we have that $m'_i\geq m_i$ for both $i=1,2$ if and only if every $2$-colouring of $K_{m_1',m_2'}$ contains a monochromatic path $P_{n}$. Gy\'arf\'as and Lehel~\cite{GL73} and, independently, Faudree and Schelp~\cite{FS75} determined the bipartite $2$-colour Ramsey number of $P_{2m}$ to be $(2m-1,2m-1)$. The authors of~\cite{FS75} also show that for the odd path $P_{2m+1}$ this number is $(2m+1,2m-1)$. Observe that suitable split colourings can be used to see the sharpness of these Ramsey numbers.
We use our auxiliary structural result, Lemma~\ref{3possiblecolourings}, and the result of~\cite{GL73} to determine the $2$-local bipartite Ramsey number for the even path $P_{2m}$. As for complete host graphs, it turns out this number coincides with its non-local pendant.
\begin{theorem}\label{thm:ramsey}
Let $K_{2m-1,2m-1}$ be coloured 2-locally. Then there is a monochromatic path on $2m$ vertices.
\end{theorem}
It is likely that similar arguments can be applied to obtain an analogous result for odd paths (but such an analogue is not straightforward). Clearly, the result from~\cite{FS75} together with Theorem~\ref{thm:ramsey} (for $m+1$) imply that the $2$-local bipartite Ramsey number for the odd path $P_{2m+1}$ is one of $(2m+1,2m-1)$, $(2m+1,2m)$, $(2m+1,2m+1)$.
\medskip
In view of the results from~\cite{CS16} and our Theorems~\ref{thm:complete},~\ref{thm:bipartite-2-local-paths-partition} and~\ref{thm:ramsey}, it might seem that in terms of path- or cycle-partitions, $r$-local colourings are not very different from $r$-colourings. Let us give an example where they do behave differently, even for $r=2$.
It is shown in~\cite{SS14} that any 2-coloured complete tripartite graph can be partitioned into at most 2 monochromatic paths, provided that no part of the tripartition contains more than half of the vertices.
This is not true for 2-local colourings:
Let $G$ be a complete tripartite graph with triparts $U$, $V$ and $W$ such that $|U| = 2 |V| = 2 |W| \geq 6$. Pick vertices $u \in U$, $v \in V$ and $w \in W$ and write
$U' = U\setminus \{u\}$, $V' = V\setminus \{v\}$ and $W' = W\setminus \{w\}$.
Now colour the edges of $[W' \cup \{v\}, U']$ red,
$[V' \cup \{w\}, U']$ green and the remaining edges blue.
This is a 2-local colouring. Note that all edges at $U'$ are red or green, so a monochromatic path alternates between $U'$ and either $W'\cup\{v\}$ or $V'\cup\{w\}$, and thus covers at most $|V|+1<|U'|$ vertices of $U'$. Hence no monochromatic path can cover all vertices of $U'$, and we need at least 3 monochromatic paths to cover all of $G$.
Note that in our example, the graph $G$ contains a 2-locally coloured balanced complete bipartite graph. This shows that in the situation of Theorem~\ref{thm:bipartite-2-local-paths-partition}, we might need 3 paths even if the 2-local colouring is not a split colouring (and thus a 2-colouring). Blowing this example up, and adding some smaller sets of vertices seeing new colours, one obtains examples of $r$-local colourings of balanced complete bipartite graphs requiring $2r-1$ monochromatic paths.
\section{Proof of Theorem~\ref{thm:complete}}
\label{sec:2}
In this section we will prove our bounds for monochromatic cycle partitions,
given by Theorem~\ref{thm:complete}.
The heart of this section is Lemma~\ref{lem:local-matchings}. This lemma enables us to use induction on $r$, in order to prove new bounds for the number of monochromatic matchings
needed to cover an $r$-locally coloured graph. In particular, we find these bounds for the complete and the complete bipartite graph. All of this is the topic of Subsection~\ref{match}.
To get from monochromatic connected matchings to the promised cycle cover,
we use a nowadays standard approach, which was first introduced in~\cite{Luc99}.
We find a large robust hamiltonian graph, regularise the rest, find monochromatic matchings covering almost all vertices,
blow them up to cycles, and then absorb the remainder with the robust hamiltonian graph. The interested reader may find a sketch of this well-known method in Subsection~\ref{fromto}.
\subsection{Monochromatic matchings}\label{match}
Given a graph $G$ with an edge colouring, a monochromatic connected matching is a matching in a connected component of the subgraph that is induced by the edges of a single colour.
\begin{lemma}
\label{lem:local-matchings}
For $k \geq 2$, let the edges of a graph $G$ be coloured $k$-locally. Suppose there are $m$ monochromatic components that together cover $V(G)$, of colours $c_1,\ldots,c_m$. \\
Then there are $m$
vertex-disjoint monochromatic connected matchings $M_1,\ldots,$ $M_m$, of colours $c_1,\ldots,c_m$, such that the inherited colouring of $G\setminus V(\bigcup_{i=1}^{m}M_i)$ is a $(k-1)$-local colouring.
\end{lemma}
\begin{proof}
Let $G$ be covered by $m$ monochromatic components $C_1, \ldots, C_m$ of colours $c_1, \ldots, c_m$. Let $M_1 \subseteq C_1$ be a maximum matching
in colour $c_1$. For $ 2 \leq i \leq m$ we iteratively pick maximum matchings $M_i \subseteq C_i \setminus V(\bigcup_{j<i}M_j) $ in colour $c_i$.
Set $M:= \bigcup_{j\leq m}M_j$.
Now let $v$ be any vertex in $H:= G \setminus V(M)$. Say $v \in V(C_i \setminus V(M)).$ In particular, vertex $v$ sees colour $c_i$ in $G$. However, by maximality of $M_i$, vertex $v$ does not see the colour $c_i$ in $H$. Thus in $H$, vertex $v$ sees at most $k-1$ colours. Hence, the inherited colouring of $H$ is a $(k-1)$-local colouring, which is as desired.
\end{proof}
\begin{corollary}\label{cor:robust-cycle-complete}
If $K_n$ is $r$-locally edge coloured, and $H$ is obtained from $K_n$ by deleting $o(n^2)$ edges, then
\begin{enumerate}[(a)]
\item $V(K_n)$ can be covered with at most $r(r+1)/2$ monochromatic connected matchings, and
\item all but $o(n)$ vertices of $H$ can be covered with at most $r(r+1)/2$ monochromatic connected matchings.
\end{enumerate}
Note that the matchings from (b) are connected in $H$.
\end{corollary}
\begin{proof}
The proof is based on the following easy observation. In any colouring of $K_n$,
the closed monochromatic neighbourhoods of any vertex $v$ together cover $K_n$. Since the colouring is $r$-local, we can cover
all of $V(K_n)$ with $r$ components. Now apply Lemma~\ref{lem:local-matchings} successively to obtain the bound from (a).
For (b), it suffices to observe that we can choose at each step a vertex $v$ that has $o(n)$ non-neighbours in the current graph.
Indeed, if at some step there is no such vertex, then a simple calculation shows that we have already covered all but $o(n)$ of $V(K_n)$, and we can hence abort the procedure.
\end{proof}
\begin{corollary}\label{cor:robust-cycle-bipartite}
If $K_{n,n}$ is $r$-locally edge coloured, and $H$ is obtained from $K_{n,n}$ by deleting $o(n^2)$ edges, then
\begin{enumerate}[(a)]
\item $V(K_{n,n})$ can be covered with at most $(2r-1)r$ monochromatic connected matchings, and
\item all but $o(n)$ vertices of $H$ can be covered with at most $(2r-1)r$ monochromatic connected matchings.
\end{enumerate}
Note that the matchings from (b) are connected in $H$.
\end{corollary}
\begin{proof}
The proof is very similar to that of Corollary~\ref{cor:robust-cycle-complete}.
We only note that in any $r$-local colouring of $K_{n,n}$, the closed monochromatic
neighbourhoods of the two endpoints of any edge together cover $V(K_{n,n})$; as the colour of the edge itself is counted only once, this gives at most $2r-1$ covering components.
\end{proof}
\subsection{From matchings to cycles}\label{fromto}
\subsubsection{Regularity}
Regularity is the key for turning our cover of an $r$-locally coloured $K_n$ or $K_{n,n}$ by monochromatic connected matchings into a partition of almost all vertices into monochromatic cycles.
We follow
an approach introduced by \L uczak~\cite{Luc99}, which has become a standard method for cycle embeddings in large graphs. We will focus on the parts where our proof differs
from other applications of this method (see~\cite{GRSS06,GRSS11,LSS15}).
The main result of this section is:
\begin{lemma}\label{lem:asymptotic-cycle-partition}
If $K_n$ and $K_{n,n}$ are $r$-locally edge coloured, then
\begin{enumerate}[(a)]
\item all but $o(n)$ vertices of $K_n$ can be covered with at most $r(r+1)/2$ monochromatic cycles.
\item all but $o(n)$ vertices of $K_{n,n}$ can be covered with at most $(2r-1)r$ monochromatic cycles.
\end{enumerate}
\end{lemma}
Before we start, we need a couple of regularity preliminaries.
For a graph $G$ and disjoint subsets of vertices $A,B \subseteq V(G)$ we denote by $[A,B]$ the bipartite subgraph with biparts $A$ and $B$ and edge set $\{ab \in E(G): a\in A, b \in B\}$.
We write $\deg_G(A,B)$ for the number of edges in $[A,B]$. If $A=\{a\}$ we write shorthand $\deg_G(a,B)$.
The subgraph $[A,B]$ is \emph{$(\eps, G)$-regular} if
$$ \left|\frac{\deg_G(X,Y)}{|X||Y|} - \frac{\deg_G(A,B)}{|A||B|}\right| < \eps$$ for all $X \subseteq A,~Y\subseteq B$ with $|X|> \eps |A|,~|Y| > \eps|B|$.
Moreover, ${[A,B]}$ is \emph{$(\eps,\delta,G)$-super-regular} if it is $(\eps,G)$-regular and
$$ \deg_{G}(a,B) > \delta |B|\text{ for each } a \in A \text{ and }\deg_{G}(b,A) > \delta |A| \text{ for each } b \in B .$$
A vertex-partition $ \{ V_0, V_1 , \ldots , V_l \}$ of the vertex set of a graph $G$ into $l+1$ \emph{clusters} is called \emph{$(\eps,G)$-regular}, if
\begin{enumerate}[\rm (i)]
\item $|V_1| = |V_2| = \ldots = |V_l|;$
\item $|V_0| < \eps n;$
\item apart from at most $\eps \binom{l}{2}$ exceptional pairs, the graphs $[V_i, V_j]$ are $(\eps, G)$-regular.
\end{enumerate}
The following version of Szemer\'edi's regularity lemma is well-known. The given prepartition will only be used for the bipartition of the graph $K_{n,n}$ in Lemma~\ref{lem:asymptotic-cycle-partition}~(b). The colours on the edges are represented by the graphs $G_i$.
\begin{lemma}[Regularity lemma with prepartition and colours]
\label{lem:regularity-lemma}
For every $\eps > 0$ and $m,t \in \mathbb{N}$ there are $M,n_0 \in \mathbb{N}$ such that for all $n \geq n_0$
the following holds.\\ For all
graphs $G_0,G_1, G_2, \ldots, G_t$ with $V(G_0) = V(G_1) = \ldots = V(G_t) =V$ and a partition $A_1 \cup \ldots \cup A_s = V$, where
$s \geq 1$ and $|V| = n$, there is a partition $V_0\cup V_1 \cup \ldots \cup V_l$
of $V$ into $l+1$ clusters such that
\begin{enumerate}[\rm (a)]
\item \label{itm:regularity-a} $m \leq l \leq M;$
\item \label{itm:regularity-b} for each $1 \leq i \leq l$ there is a $1 \leq j \leq s$ such that $V_i \subseteq A_j;$
\item \label{itm:regularity-c} $V_0\cup V_1 \cup \ldots \cup V_l$ is $(\eps,G_i)$-regular for each $0\leq i \leq t.$
\end{enumerate}
\end{lemma}
Observe that the regularity lemma provides regularity only for a number of colours bounded by the input parameter $t$. However,
the total number of colours of an $r$-local colouring is not bounded by any function of $r$ (for an example, see Section~\ref{sec:paths}). Luckily, it turns out that it suffices to focus on the
$t$ colours of largest density, where $t$ depends only on $r$ and $\eps$. This is guaranteed by the following result from~\cite{GLN+87}.
\begin{lemma}
\label{lem:local-edges}
Let a graph $G$ with average degree $a$ be $r$-locally coloured.
Then one colour has at least $ a^2/2r^2$ edges.
\end{lemma}
\begin{corollary}
\label{cor:local-edges}
For all $\eps>0$ and $r\in\mathbb{N}$ there is a $t=t(\eps, r)$ such that
for any $r$-local colouring of $K_n$ or $K_{n,n}$, there are $t$ colours such that all but at most $\eps n^2$ edges use these colours.
\end{corollary}
\begin{proof}
We only prove the corollary for $K_{n,n}$, as the proof for $K_n$ is very similar.
Let $t := \lceil - \frac{2r^2}{\eps}\log \eps\rceil$.
We iteratively take out the edges of the colour with the largest number of edges. We stop either after $t$ steps, or earlier, if the remaining graph has density less than $\eps$.
At each step Lemma~\ref{lem:local-edges} ensures that at least a fraction of $\frac{\eps}{2r^2}$ of the remaining edges has the same
colour.\footnote{Here we use that in a balanced bipartite graph $H$ with $2n$ vertices, $m$ edges, average degree $a$ and density $d$ we have $a^2 = \frac{4m^2}{4n^2} = dm$.}
Hence we can bound the number of edges of the remaining graph by $$\left(1-\frac{\eps}{2r^2}\right)^tn^2\leq e^{-\eps t/2r^2}n^2\leq\eps n^2.$$
\end{proof}
\subsubsection{Proof of Lemma~\ref{lem:asymptotic-cycle-partition}}
We only prove part (b) of Lemma~\ref{lem:asymptotic-cycle-partition}, since the proof of part (a) is very similar and actually simpler.
For the sake of readability, we assume that $ n_0 \gg 0$ is sufficiently large and $0 < \eps \ll 1$ is sufficiently small without calculating exact values.
Let the edges of $K_{n,n}$ with biparts $A_1$ and $A_2$ be coloured $r$-locally and encode the colouring by edge-disjoint graphs $G_1, \ldots, G_s$ on the vertex set of $K_{n,n}$. By Corollary~\ref{cor:local-edges}, there is a $t= t(\eps,r)$
such that
the union of $G_1, \ldots, G_t$ covers all but at most $\eps n^2 /8r^2$ edges of $K_{n,n}$. We merge the remaining
edges into $G_{0} := \bigcup_{i=t+1}^s G_i$.
Note that the colouring remains $r$-local and by the choice of $t$, we have
\begin{equation}\label{equ:colour-t+1}
|E(G_{0})| \leq \eps n^2 /8r^2.
\end{equation}
\newcommand{a}{a}
\newcommand{b}{b}
\newcommand{v}{v}
\newcommand{w}{w}
For $\eps$, $t$ and $m:=1/\eps$, the regularity lemma (Lemma~\ref{lem:regularity-lemma}) provides $n_0$ and $M$ such that for all $n \geq n_0$ there is a vertex-partition
$V_0,V_1, \ldots, V_{l}$ of $K_{n,n}$ satisfying Lemma~\ref{lem:regularity-lemma}(\ref{itm:regularity-a})--(\ref{itm:regularity-c})
for $G_0,G_1, \ldots, G_{t}$.
As usual, we define the reduced graph $R$
which has a vertex $v_i$ for each cluster $V_i$ for $1 \leq i \leq l$.
We place an edge between vertices $v_i $ and $v_j$ if the subgraph $[V_i,V_j]$ of the respective clusters is non-empty and forms
an $(\eps,G_q)$-regular subgraph for all $0 \leq q \leq t$.
Thus, $R$ is a balanced bipartite graph with at least $(1-\eps){\binom{l}{2}}$ edges.
Finally, the colouring of the edges of $K_{n,n}$ induces a \emph{majority colouring} of the edges of $R$. More precisely, we colour
each edge $v_i v_j$ of $R$ with the colour from $\{0,1,\ldots, t\}$ that appears most on the edges of the subgraph $[V_i,V_j] \subseteq G$
(in case of a tie, pick any of the densest colours).
Note that if $v_i v_j$ is coloured~$q$ then by Lemma~\ref{lem:local-edges},
\begin{equation}\label{zweite}
[V_i,V_j] \mbox{ has at least $\frac{1}{2r^2}\left(\frac{n}{2l}\right)^2$ edges of colour $q$.}
\end{equation}
Our next step is to verify that the majority colouring is an $r$-local colouring of $R$. To this end we need the following easy and well-known fact about regular graphs.
\begin{fact}\label{lem:regular-fact}
Let $[A,B]$ be an $\eps$-regular graph of density $d>\eps$. Then at most $\eps |A|$ vertices from $A$ have no neighbours in $B$.
\end{fact}
\begin{claim}\label{cla:regular-r-local}
The colouring of the reduced graph $R$ is $r$-local.
\end{claim}
\begin{proof}
Assume otherwise. Then there is a vertex $v_i \in {V(R)}$ that sees $r+1$ different colours in $R$. By Fact~\ref{lem:regular-fact}, all but at most $(r+1)\eps |V_i|<|V_i|$ of the vertices in $V_i$ see
$r+1$ different colours in $K_{n,n}$, contradicting the $r$-locality of our colouring.
\end{proof}
By~\eqref{equ:colour-t+1}, and by~\eqref{zweite},
we know that $R$ has at most $|E(G_{0})|\frac{4l^2\cdot 2r^2}{n^2} \leq \eps l^2$ edges of colour $0$.
Delete these edges and use Corollary~\ref{cor:robust-cycle-bipartite} to cover all but $o(l)$ vertices of $R$ with $(2r-1)r$ vertex-disjoint
monochromatic matchings $M^1, \ldots, M^{(2r-1)r}$
of spectrum $1, \ldots, t$.
We finish by applying \L uczak's technique for blowing up matchings to cycles~\cite{Luc99}. This is done by using the following (by now well-known) lemma.
\begin{lemma}\label{lem_connmatchTOcycles}
Let $t \geq 1$ and $\gamma >0$ be fixed. Suppose $R$ is the edge-coloured reduced graph of an edge-coloured graph~$H$, for some $\gamma$-regular partition, such that each edge
$v w$ of $R$ corresponds to a $\gamma$-regular pair of density at least $\sqrt\gamma$ in the colour of $v w$.\\
If all but at most $ \gamma |V(R)|$ vertices of $R$ can be covered with $t$ disjoint connected monochromatic matchings,
then there is a set of at most $t$ monochromatic disjoint cycles in $H$, which together cover all but at most $10\sqrt\gamma|V(H)|$ vertices of $H$.
\end{lemma}
For completeness, let us give an outline of the proof of Lemma~\ref{lem_connmatchTOcycles}.
\begin{proof}[Sketch of a proof of Lemma~\ref{lem_connmatchTOcycles}]
We start by connecting in $H$ the pairs corresponding to matching edges with monochromatic paths of the respective colour, following their connections in $R$. We do this in a cyclic manner, that is, if $v_{i_1}v_{j_1},\dots, v_{i_s}v_{j_s}$
forms the matching, then we take paths $P_1,\dots, P_s$ in a way that
$P_\ell$ connects $V_{j_\ell}$ and $V_{i_{\ell+1}}$ (indices modulo $s$). The end-vertex of each $P_\ell$ can be taken as a typical vertex of the graph $ [ V_{i_{\ell}}, V_{j_{\ell}}]$ or $[V_{i_{\ell+1}},V_{j_{\ell+1}}]$
(this is important as we later have to `fill up' the matching edges accordingly).
We find the connecting paths simultaneously for all matchings.
Note that, as $t$ is fixed, the paths chosen above together consume only a constant number of vertices of~$H$.
So we can connect the monochromatic paths using the matching edges, blowing up the edges to long paths, where regularity and density ensure that we can fill up all but a small fraction of the corresponding pairs.
This gives the desired cycles.
A more detailed explanation of this argument can be found in the proof of the main result of~\cite{GRSS06b}.
\end{proof}
\subsection{The absorbing method}\label{absorbing-method}
In this subsection we prove Theorem~\ref{thm:complete}. We apply a well known absorbing argument introduced in~\cite{EGP91}.
To this end we need a few tools.
Call a balanced bipartite subgraph $H$ of a $2n$-vertex graph $\eps$-\emph{hamiltonian}, if any balanced bipartite subgraph of $H$ with at least
$2(1-\eps) n$ vertices is hamiltonian. The next lemma is a combination of results from~\cite{Hax97,PRR02} and can be found in~\cite{LSS15} in the following explicit form.
\begin{lemma}
\label{lem:eps-hamiltonian}
For any $1 >\gamma > 0,$ there is an $n_0 \in \mathbb{N}$ such that any
balanced bipartite graph on $2n \geq 2n_0$ vertices and of
edge density at least $\gamma$ has a $\gamma/4$-hamiltonian subgraph
of size at least $\gamma^{3024/\gamma} n/3$.
\end{lemma}
The following lemma is taken from~\cite{CS16}.
\begin{lemma}\label{lem:absorb}
Suppose that $A$ and $B$ are vertex sets with $|B| \leq |A|/r^{r+3}$ and the edges of the
complete bipartite graph between $A$ and $B$ are $r$-locally coloured. Then all vertices
of $B$ can be covered with at most $r^2$ disjoint monochromatic cycles.
\end{lemma}
\begin{proof}[Sketch of a proof of Theorem~\ref{thm:complete}]
Here we only prove part (b) of Theorem~\ref{thm:complete}, since the proof of (a) is almost identical. The differences are discussed at the end of the section.
Let $A$ and $B$ be the two partition classes of the $r$-locally edge coloured $K_{n,n}$.
We assume that $n \ge n_0$, where we specify $n_0$ later.
Pick subsets $A_1 \subseteq A$ and $B_1 \subseteq B$ of size $\lceil n / 2 \rceil$ each.
Say red is the majority colour of $[A_1,B_1]$. Then by Lemma~\ref{lem:local-edges}, there are at least
$n^2/8r^2$
red edges in $[A_1,B_1]$.
Lemma~\ref{lem:eps-hamiltonian} applied with $\gamma = 1/10r^2$ yields a red $\gamma/4$-hamiltonian sub\-graph $[A_2,B_2]$ of $[A_1,B_1]$ with
\[
|A_2|=|B_2|\ge \gamma^{3024/\gamma} |A_1|/3 \ge \gamma^{3024/\gamma} n/7 .
\]
Set $H := G - (A_2 \cup B_2)$, and note that each bipart of $H$ has order at least $\lfloor n / 2 \rfloor$.
Let $\delta := \gamma^{4000/\gamma}$.
Assuming $n_0$ is large enough, Lemma~\ref{lem:asymptotic-cycle-partition}(b) provides $(2r-1)r$ monochromatic vertex-disjoint cycles covering all but at most
$2 \delta n$ vertices of~$H$.
Let $X_A \subseteq A$ (resp.~$X_B \subseteq B$) be the set of uncovered vertices in $A$ (resp.~$B$).
Since we may assume none of the monochromatic cycles is an isolated vertex, we have $|X_A|=|X_B| \le \delta n$.
By the choice of $\delta$, and since we assume $n_0$ to be sufficiently large, we can apply Lemma~\ref{lem:absorb}
to the bipartite graphs $[A_2,X_B]$ and $[B_2,X_A]$.
This gives~$2r^2$ vertex-disjoint monochromatic cycles that together cover $X_A \cup X_B$.
Again, we assume none of these cycles is trivial.
As $|X_A|=|X_B| \le \delta n$, we know that the remainder of $[A_2,B_2]$ contains
a red Hamilton cycle.
Thus, in total, we found a cover of $G$ with at most $(2r-1)r+2r^2+1 \leq 4 r^2$
vertex-disjoint monochromatic cycles.
As claimed above, the proof of Theorem~\ref{thm:complete}(a) is very similar.
The main difference is that instead of an $\eps$-hamiltonian subgraph we use a large red
triangle cycle. A triangle cycle $T_k$ consists of a cycle on $k$ vertices $\{v_1, \ldots , v_k\}$ and $k$ additional vertices $A = \{a_1, \ldots, a_k\}$, where
$a_i$ is joined to $v_i$ and $v_{i+1}$ (modulo~$k$). Note that $T_k$ remains hamiltonian after the deletion of any subset of vertices of~$A$. We use some classic Ramsey theory
to find a large monochromatic triangle cycle $T_k$ in an $r$-locally coloured $K_n$, as
shown in~\cite{CS16}. Next, Lemma~\ref{lem:asymptotic-cycle-partition}(a) guarantees we can cover most vertices of $K_n \setminus T_k$ with $r(r+1)/2$ monochromatic cycles. We finish by absorbing
the remaining vertices $B$ into $A$ with only one application of Lemma~\ref{lem:absorb}, thus producing $r^2$ additional cycles. As noted above, the remaining part of $T_k$ is hamiltonian and so
we have partitioned $K_n$ into $r(r+1)/2 + r^2 +1 \leq 2r^2$ monochromatic cycles.
\end{proof}
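The robustness property of the triangle cycle $T_k$ used in part (a), namely that it stays hamiltonian after deleting any subset of $A$, can be illustrated computationally. The Python sketch below (illustrative only; function names are ours, and we index from $0$ for convenience) builds $T_k$, deletes some $a_i$, and checks an explicitly constructed Hamilton cycle of the remainder.

```python
def triangle_cycle(k):
    # T_k on 2k vertices: base cycle v_0 ... v_{k-1}, plus vertices a_i,
    # each joined to v_i and v_{(i+1) mod k}.
    adj = {('v', i): set() for i in range(k)}
    adj.update({('a', i): set() for i in range(k)})
    for i in range(k):
        j = (i + 1) % k
        for u, w in ((('v', i), ('v', j)),
                     (('a', i), ('v', i)),
                     (('a', i), ('v', j))):
            adj[u].add(w)
            adj[w].add(u)
    return adj

def hamilton_cycle_after_deletion(k, deleted):
    # Explicit Hamilton cycle of T_k minus a set of a-vertices: walk the
    # base cycle and detour through a_i exactly when a_i survives.
    cyc = []
    for i in range(k):
        cyc.append(('v', i))
        if i not in deleted:
            cyc.append(('a', i))
    return cyc

k, deleted = 7, {1, 4}
adj = triangle_cycle(k)
cyc = hamilton_cycle_after_deletion(k, deleted)
assert len(cyc) == 2 * k - len(deleted)   # spans all surviving vertices
assert all(cyc[(i + 1) % len(cyc)] in adj[cyc[i]] for i in range(len(cyc)))
```

Each detour $v_i,a_i,v_{i+1}$ uses only edges of $T_k$, so the constructed walk is indeed a cycle through all surviving vertices.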
\section{Bipartite graphs with 2-local colourings}
\label{sec:2-local-path-partition}
In this section we prove Theorem~\ref{thm:bipartite-2-local-paths-partition} and Theorem~\ref{thm:ramsey}.
We start by specifying the structure of 2-local colourings of $K_{n,n}$.
Let $G$ be any graph, and let the edges of $G$ be coloured arbitrarily with colours in $\mathbb{N}$.
We denote by $C_i$ the subgraph of $G$ induced by the vertices that are incident with an edge of colour $i$.
Note that $C_i$ can contain edges of colours other than $i$.
If for colours $i,j$ the intersection $V(C_i) \cap V(C_j)$ is empty, we can merge $i$ and $j$ as we are only interested in monochromatic paths.
We call an edge colouring \emph{simple}, if $V(C_i) \cap V(C_j) \neq \emptyset$ for all colours $i,j$ that appear on an edge.
In~\cite{GLS+87} it was shown that the number of colours in a simple 2-local colouring of $K_n$ is bounded by 3.
In the next lemma we will see that for $K_{n,n}$ the number of colours in a simple 2-local colouring is bounded by 4.
For $r \geq 3$, however, simple $r$-local colourings can have an arbitrarily large number of colours:
take a $t \times t$ grid $G$ and colour the edges of column $i$ and row $i$ with colour $i$ for $1\leq i \leq t$. Then add edges of a new colour $t+1$ until $G$ is complete (or complete bipartite) and observe that $G$ is 3-locally edge coloured and simple, but the total number of colours is $t+1$.
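The grid construction above is easy to check by computer. The following Python sketch (illustrative only; the helper names are ours) builds the completed colouring for $t=5$ and verifies that every vertex sees at most three colours while $t+1$ colours appear in total.

```python
from itertools import combinations

def grid_colouring(t):
    # Vertices of the t x t grid; grid edges in column i / row i get
    # colour i, all remaining pairs get the fresh colour t + 1
    # (completing the graph).
    V = [(x, y) for x in range(1, t + 1) for y in range(1, t + 1)]
    colour = {}
    for (x, y) in V:
        if y < t:
            colour[frozenset({(x, y), (x, y + 1)})] = x   # column-x edge
        if x < t:
            colour[frozenset({(x, y), (x + 1, y)})] = y   # row-y edge
    for u, v in combinations(V, 2):
        colour.setdefault(frozenset({u, v}), t + 1)
    return V, colour

def is_r_local(V, colour, r):
    # Every vertex is incident with edges of at most r colours.
    return all(len({c for e, c in colour.items() if v in e}) <= r
               for v in V)

V, colour = grid_colouring(5)
assert is_r_local(V, colour, 3)
assert len(set(colour.values())) == 6   # t + 1 colours in total
```

Vertex $(x,y)$ meets colour $x$ on its column edges, colour $y$ on its row edges, and colour $t+1$ on the filling edges, which is the 3-locality claimed in the text.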
In what follows, we denote partition classes of a bipartite graph $H$ (which we imagine as either top and bottom) by $\bitop{H}$ and $\bibot{H}$.
\begin{lemma}\label{3possiblecolourings}
Let $K_{n,n}$ have a simple 2-local colouring. Then the total number of colours is at most four. In particular, if there are (edges of) colours 1,2,3 and~4, then
\begin{itemize}
\item $\bitop{K_{n,n}}= \bitop{C_1 \cap C_2 } \cup \bitop{C_3 \cap C_4 }$ and
\item $\bibot{K_{n,n}}= \bibot{C_1 \cap C_3} \cup \bibot{C_1 \cap C_4} \cup \bibot{C_2 \cap C_3 } \cup \bibot{C_2 \cap C_4 }$
\end{itemize}
as shown in Figure~\ref{fig:c} (modulo swapping colours and swapping $\bitop{K_{n,n}}$ with $\bibot{K_{n,n}}$).
\end{lemma}
\begin{figure}
\caption{The four colour case of Lemma~\ref{3possiblecolourings}.}
\label{fig:c}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{3possiblecolourings}]
We can assume there are at least four colours in total, as otherwise there is nothing to show.
We start by observing that for any four distinct colours $i,j,k,\ell$, if $v\in V(C_i\cap C_j)$ and $w\in V(C_k\cap C_\ell)$, then, by 2-locality, $v$ and $w$ cannot lie in opposite classes of $K_{n,n}$. Thus either $V(C_i\cap C_j)\cup V(C_k\cap C_\ell)\subseteq \bibot{K_{n,n}}$ or $V(C_i\cap C_j)\cup V(C_k\cap C_\ell)\subseteq \bitop{K_{n,n}}$. Fixing four colours $1,2,3,4$, and considering their six (by simplicity non-empty) intersections, the pigeon-hole principle gives that (after possibly swapping colours and/or top and bottom class of $K_{n,n}$),
\begin{equation}\label{equ:blabla}
V(C_1 \cap C_3)\cup V(C_2 \cap C_4)\cup V(C_1 \cap C_4)\cup V(C_2 \cap C_3)\subseteq \bibot{K_{n,n}}.
\end{equation}
As every colour must see both top and bottom of $K_{n,n}$,
we have that $V(C_1 \cap C_2)\cup V(C_3 \cap C_4)\subseteq \bitop{K_{n,n}}.$ By 2-locality there are no other colours.
\end{proof}
\subsection{Partitioning into paths}\label{sec:paths}
In this subsection we prove Theorem~\ref{thm:bipartite-2-local-paths-partition}. For the sake of contradiction, assume that $K_{n,n}$ is 2-locally edge-coloured such that there is no partition into three monochromatic paths.
Since we are not interested in the actual colours of the path we can assume the colouring to be simple.
Furthermore Theorem~\ref{thm:pokrovskiy} implies that there are at least three colours.
A path is \emph{even} if it has an even number of vertices.
\begin{claim}\label{onethenthree}
There is no even monochromatic path $P$ such that $\bitop{K_{n,n} \setminus P}$ is contained in $\bitop{C_i \cap C_j}$ for distinct colours $i,j$.
\end{claim}
\begin{proof}
Suppose the contrary and let $P$ be as described in the claim and of maximum length. Since the colouring is $2$-local and $\bitop{K_{n,n} \setminus P} \subseteq \bitop{C_i \cap C_j}$, the graph on $K_{n,n} \setminus P$ is 2-coloured. Using Theorem~\ref{thm:pokrovskiy}, we are done unless the colouring on ${K_{n,n} \setminus P}$ is split.
In that case, let $p$ be the endpoint of $P$ in $\bibot{K_{n,n}}$. Since $\bitop{K_{n,n} \setminus P} \subseteq \bitop{C_i \cap C_j}$, the edges between $p$ and $\bitop{K_{n,n} \setminus P}$ have colours $i$ or $j$. So $P$ has colour $k\notin\{i,j\}$, as otherwise we could use the splitness of ${K_{n,n} \setminus P}$ to extend $P$ with two extra vertices. But then, $p$ can only see one more colour apart from $k$, so we may assume that all the edges between $p$ and $\bitop{K_{n,n} \setminus P}$ have colour $i$. Now cover $K_{n,n} \setminus P$ by two paths $P_1$ and $P_2$ of the colour $i$ and one path of the colour $j$. The paths $P_1$ and $P_2$ can be joined using the vertex $p$ to give the three required paths.
\end{proof}
Now the case of four colours of Lemma~\ref{3possiblecolourings} is easily solved:
without loss of generality suppose that $|\bitop{C_1 \cap C_2}| \leq n/2$.
By symmetry between colours $1$ and~$2$ we can assume that $|\bitop{C_2}| \leq |\bibot{C_2}|$.
So there exists an even colour 2 path $P$ covering $\bitop{C_2}=\bitop{C_1\cap C_2}$ and we are done by Claim~\ref{onethenthree}. This proves the following claim.
\begin{claim}\label{drei}
The total number of colours is three.
\end{claim}
Our next aim is to show that the colouring is as shown in Figure~\ref{fig:b}, that is, that every vertex sees two colours. For this, we need the next claim and the following definition.
We say that a subgraph $H \subseteq K_{n,n}$ is connected in colour~$i$, if every two vertices of $H$ are connected by a path of colour $i$ in $H$.
\begin{claim}\label{greatobservation}
There is no even monochromatic path $P$ such that $K_{n,n} \setminus P$ is connected in some colour $i$.
\end{claim}
\begin{proof}
Assume the opposite and let $P$ be as described in the claim. Simplify the colouring of $K_{n,n}\setminus V(P)$ to a $2$-colouring by merging all colours distinct from $i$. (Note that since all vertices see $i$, by 2-locality no vertex can see more than one of the merged colours.) The new colouring is not a split colouring by the assumption on $i$. Hence Theorem~\ref{thm:pokrovskiy} applies, and we are done.
\end{proof}
\begin{claim}\label{aI}
Each vertex sees two colours.
\end{claim}
\begin{proof}
Suppose that there is a vertex in $\bitop{K_{n,n}}$ that sees only colour $1$, say.
Then by 2-locality $\bibot{C_2 \cap C_3} = \emptyset$.
Since the colouring is simple we know that $\bitop{C_2 \cap C_3} \neq \emptyset$.
Therefore $\bibot{K_{n,n} }\subseteq \bibot{(C_1 \cap C_2) \cup (C_1 \cap C_3)}$.
If $|\bitop{C_2 \cap C_3}| > |\bibot{C_1 \cap C_3}|$, we can choose an even path of colour 3 that contains all vertices of $\bibot{C_1 \cap C_3}$ and apply Claim~\ref{onethenthree}.
Otherwise, let $P$ be an even path of colour 3 between $\bitop{C_2 \cap C_3}$ and $\bibot{C_1 \cap C_3}$ that covers all vertices of $\bitop{C_2 \cap C_3}$.
Since all remaining vertices lie in $C_1$, the subgraph $K_{n,n} \setminus P$ is connected in colour 1 and we are done by Claim~\ref{greatobservation}.
\end{proof}
\begin{figure}
\caption{There are three colours and each vertex sees exactly two colours.}
\label{fig:b}
\end{figure}
Claims~\ref{drei} and~\ref{aI} ensure that for the rest of the proof we can assume that the colouring is exactly as shown in Figure~\ref{fig:b} (with some of the sets possibly being empty). Now, let us see how Claim~\ref{onethenthree} implies that we easily find the three paths if one of the $C_i$ is complete bipartite in colour $i$.
\begin{claim}\label{nocompletebipcomp}
For $i \in \{1,2,3\}$, the graph $C_i$ is not complete bipartite in colour $i$.
\end{claim}
\begin{proof}
Suppose the contrary and let $C_i$ contain only edges of colour $i$.
Take out a longest even path of colour $i$ in $C_i$.
This leaves us either with only $\bibot{C_j \cap C_k}$ in the bottom partition class, or with only $\bitop{ C_j \cap C_k}$ in the top partition class (where~$j$ and $k$ are the other two colours).
We may thus finish by applying Claim~\ref{onethenthree}, after possibly switching top and bottom parts.
\end{proof}
\begin{claim}\label{aII}\label{all}
For $i \in \{1,2,3\}$, the graph $C_i$ is connected in colour $i$.
\end{claim}
\begin{proof}
For contradiction, suppose that $C_3$ is not connected in colour 3 (the other colours are symmetric). Then there are two edges $e,f$ of colour $3$ belonging to $C_3$ that are not joined by a path of colour $3$. First assume we can choose $e$ in $E(C_2 \cap C_3)$. Since all edges between $C_1 \cap C_3$ and $C_2 \cap C_3$ have colour 3, we get $f\in E(C_2 \cap C_3)$, and $C_1 \cap C_3$ has no vertices. But this contradicts our assumption that the colouring is simple. Therefore, $C_2 \cap C_3$ and, by symmetry, $C_1 \cap C_3$ contain no edges of colour~3.
By symmetry (between the top and bottom partition) we can assume that $|\bitop{C_1 \cap C_3}| \geq |\bibot{C_1 \cap C_3}|$.
Further, we have $|\bitop{C_1 \cap C_3}| < |\bibot{C_1 \cap C_2}| + |\bibot{C_1 \cap C_3}|$, since otherwise we could find an even path of colour 1 that covers all of $\bibot{C_1 \cap C_2} \cup \bibot{C_1 \cap C_3}$ and use Claim~\ref{onethenthree}.
So we can choose an even path $P$ of colour 1, alternating between $\bitop{C_1 \cap C_3}$ and $\bibot{C_1 \cap C_2} \cup \bibot{C_1 \cap C_3}$, that contains both $\bitop{C_1 \cap C_3}$ and $\bibot{C_1 \cap C_3}$.
Thus $K_{n,n} \setminus P$ is connected in colour 2 and Claim~\ref{greatobservation} applies.
\end{proof}
Let us now show that for pairwise distinct $i,j,k \in \{1,2,3\}$ we have
\begin{equation}\label{equ:maya-1}
\text{at least one of $\bibot{C_i\cap C_j}, \bitop{C_i\cap C_k}$ is not empty.}
\end{equation}
To see this, note that the edges between $\bitop{C_i \cap C_j}$ and $\bibot{C_i \cap C_k}$ are of colour $i$.
Thus if~\eqref{equ:maya-1} does not hold, we can find a colour $i$ (possibly trivial) path~$P$ that covers one of these two sets.
Hence either in the top or in the bottom part of $K_{n,n}$, the path $P$ covers all but $C_j\cap C_k$.
We can thus finish with Claim~\ref{onethenthree}.
Together with the fact that every colour must see both top and bottom class,~\eqref{equ:maya-1} immediately implies that for pairwise distinct $i,j,k \in \{1,2,3\}$ we have
\begin{equation}\label{equ:maya-xx}
\text{at least one of $C_i\cap C_j, C_i\cap C_k$ meets both $\bibot{K_{n,n}}$ and $\bitop{K_{n,n}}$.}
\end{equation}
So, of the three bipartite graphs $C_i\cap C_j$, two have non-empty tops and bottoms. Hence, after possibly swapping colours, we know that
the four sets $\bibot{C_1\cap C_i}$, $\bitop{C_1\cap C_i}$, $i = 2,3$, are non-empty.
Observe that after possibly swapping colours $2$ and $3$, and/or switching partition classes of $K_{n,n}$, we have one of the following situations:
\begin{enumerate}[(i)]
\item $|\bitop{C_1\cap C_2}| \geq |\bibot{C_1\cap C_3}|$ and $|\bibot{C_1\cap C_2}| \geq |\bitop{C_1\cap C_3}|$, or
\item $|\bitop{C_1\cap C_2}| \geq |\bibot{C_1\cap C_3}|$ and $|\bibot{C_1\cap C_2}| \leq |\bitop{C_1\cap C_3}|$.
\end{enumerate}
In either of these situations, note that as all involved sets are non-empty, by Claim~\ref{all}
there is an edge $e_1$ of colour $1$ in $E(C_1\cap C_2)\cup E(C_1 \cap C_3)$.
So if we are in situation~(ii), we can find an even path of colour 1 covering all of $\bibot{C_1\cap C_3}\cup\bibot{C_1\cap C_2}$. Now Claim~\ref{onethenthree} applies, and we are done. So assume from now on we are in situation~(i).
Similarly as above, by~\eqref{equ:maya-xx}, there is an edge $e_2$ of colour 2 in $E(C_3\cap C_2)\cup E(C_1\cap C_2)$. By Claim~\ref{nocompletebipcomp}, $C_3$ is not complete bipartite in colour 3. So we can assume that at least one of $e_1$ or $e_2$ is chosen in~$C_3$ and hence the two edges are not incident.
Extend $e_1$ to an even colour 1 path $P$ covering all of $C_1\cap C_3$, using (apart from $e_1$) only edges from
$[\bibot {C_1\cap C_3},\bitop {C_1\cap C_2}]$ and from $[\bibot {C_1\cap C_2},\bitop {C_1\cap C_3}]$, while avoiding the endvertices of $e_2$, if possible.
If we had to use one of the endvertices of $e_2$ in $P$, then $P$ either covers
all of $\bibot{C_1\cap C_2}$ or all of $\bitop{C_1\cap C_2}$. In either case we may apply Claim~\ref{onethenthree}, and are done.
On the other hand, if we could avoid both endvertices of $e_2$ for $P$, then Claim~\ref{greatobservation} applies and we are done.
This finishes the proof of Theorem~\ref{thm:bipartite-2-local-paths-partition}.
\subsection{Finding long paths}
In this subsection we prove Theorem~\ref{thm:ramsey}. We will use the following theorem, which resolves the problem for the case of 2-colourings.
\begin{theorem}[\cite{FS75,GL73}]\label{thm:faudree-schelp}
Every $2$-edge-coloured $K_{p+q-1,p+q-1}$ contains a colour~1 path of length $2p$ or a colour 2 path of length $2q$.
\end{theorem}
As in the last section, $C_i$ denotes the subgraph induced by the vertices that have an edge of colour $i$. Recall that the length of a path is the number of its vertices.
\begin{lemma}\label{lem:intersection-cover-path}
Let $K_{2m-1,2m-1}$ be 2-locally coloured with colours $1,2,3$. Then for distinct colours $i,j$ there is a monochromatic path of length at least $$\min\{2m,\,2\max(|\bitop{C_i \cap C_j}|,|\bibot{C_i \cap C_j}|)\}.$$
\end{lemma}
\begin{proof}
By symmetry, we can assume that $|\bitop{C_i \cap C_j}| \geq |\bibot{C_i \cap C_j}|$. Moreover, we can assume that $\bitop{C_i \cap C_j}\neq\emptyset$, as otherwise there is nothing to prove. Then by 2-locality,
\begin{equation}\label{kkk}
\bibot{C_k \setminus (C_i \cup C_j)} = \emptyset,
\end{equation}
where $k$ denotes the third colour.
We apply Theorem~\ref{thm:faudree-schelp} to a balanced subgraph of $C_i \cap C_j$ with $p = m - |\bibot{C_i \setminus C_j}|$ and $q = m - |\bibot{C_j \setminus C_i}|$.
For this, note that we have $$p+q-1 = 2m-1-|\bibot{C_i \setminus C_j}| -|\bibot{C_j \setminus C_i}| \overset{\eqref{kkk}}{=} |\bibot{C_i \cap C_j}|\leq |\bitop{C_i \cap C_j}|.$$
By symmetry between $i$ and $j$ we can assume that the outcome of Theorem~\ref{thm:faudree-schelp} is a colour $i$ path $P$ of length $2(m - |\bibot{C_i \setminus C_j}|)$.
Let $R \subseteq [\bitop{C_i \cap C_j}\setminus \bitop{P}, \bibot{C_i \setminus C_j}]$ be a path of colour $i$ and length $$r = \min(2|\bitop{C_i \cap C_j} \setminus \bitop{P}|,2|\bibot{C_i \setminus C_j}|).$$
If $r = 2|\bibot{C_i \setminus C_j}|$, then we can join $P$ and $R$ to a path of length $2m$.
Otherwise $r = 2|\bitop{C_i \cap C_j} \setminus \bitop{P}|$ and we can join $P$ and $R$ to a path of length $2|\bitop{C_i \cap C_j}|$.
\end{proof}
Now let us prove Theorem~\ref{thm:ramsey} by contradiction. To this end, assume that $K_{2m-1,2m-1}$ is coloured 2-locally and has no monochromatic path on $2m$ vertices. Since we are not interested in the actual colours of the path we can assume the colouring to be simple, as in the previous subsection. Furthermore Theorem~\ref{thm:faudree-schelp} implies that there are at least three colours.
We now apply Lemma~\ref{3possiblecolourings}.
The four colour case of Lemma~\ref{3possiblecolourings} is quickly resolved: Without loss of generality suppose that $|\bitop{C_1 \cap C_2}| \geq m$. By symmetry between colours $1$ and $2$, we can assume that $|\bibot{C_1 \cap C_3} \cup \bibot{C_1 \cap C_4}| \geq m$. Thus we easily find a colour 1 path of length $2m$ alternating between these sets. This proves:
\begin{claim}\label{drei2}
The total number of colours is three.
\end{claim}
We can now exclude vertices that see only one colour.
\begin{claim}\label{only-two-colours}
Each vertex sees two colours.
\end{claim}
\begin{proof}
Suppose that there is a vertex in $\bitop{K_{2m-1,2m-1}}$ that sees only colour $1$, say.
Then by 2-locality, $\bibot{C_2 \cap C_3} = \emptyset$.
Since the colouring is simple we know that $\bitop{C_2 \cap C_3} \neq \emptyset$.
Therefore $\bibot{K_{2m -1,2m -1} }\subseteq \bibot{(C_1 \cap C_2) \cup (C_1 \cap C_3)}$.
Since one of $\bibot{C_1 \cap C_2}$ and $\bibot{C_1 \cap C_3}$ must have size at least $m$, we are done by Lemma~\ref{lem:intersection-cover-path}.
\end{proof}
Put together, Claims~\ref{drei2} and~\ref{only-two-colours} allow us to assume that the colouring is as shown in Figure~\ref{fig:b}. The next claim follows instantly from Lemma~\ref{lem:intersection-cover-path}.
\begin{claim}\label{cla:eine-seite-kleiner-n/2}
For distinct colours $i, j$ we have $\max(|\bitop{C_i \cap C_j}|,|\bibot{C_i \cap C_j}|) < m.$
\end{claim}
As the three top parts sum up to $2m-1$, and so do the three bottom parts, we immediately get:
\begin{claim}\label{cla:non-empty-parts}
$\bibot{C_i \cap C_j},\bitop{C_i \cap C_j}\neq \emptyset$ for all distinct $i,j \in \{1,2,3\}$.
\end{claim}
The next claim requires some more work. Recall that a subgraph $H \subseteq K_{n,n}$ is connected in colour~$i$, if every two vertices of $H$ are connected by a path of colour $i$ in $H$.
\begin{claim}\label{cla:zwei-seiten-kleiner-n/2}
If the subgraph $C_i$ is connected in colour $i$, then there are distinct $j,k \in \{1,2,3\} \setminus \{i\}$ such that $|\bitop{C_i \cap C_j}| \geq |\bibot{C_i \cap C_k}|$, $|\bibot{C_i \cap C_j}| > |\bitop{C_i \cap C_k}|$ (modulo swapping top and bottom partition classes) and $|V(C_i \cap C_k)|< m$.
\end{claim}
\begin{proof}
Suppose that $C_i$ is connected in colour $i$ and let $j,k \in \{1,2,3\} \setminus \{i\}$ be such that $|\bitop{C_i \cap C_j}| \geq |\bibot{C_i \cap C_k}|$ (after possible swapping top and bottom partition). By Claim~\ref{cla:non-empty-parts}, and as $C_i$ is connected in colour $i$, we find an edge $e_i \in E(C_i \cap C_j) \cup E(C_i \cap C_k)$ of colour $i$.
Choose an even path $P \subseteq [\bitop{C_i \cap C_j}, \bibot{C_i \cap C_k}]$ which covers $\bibot{C_i \cap C_k}$ and ends in one of the vertices of $e_i$.
For the first part of the claim, assume to the contrary that $|\bibot{C_i \cap C_j}| \leq |\bitop{C_i \cap C_k}|$.
Take an even path $P' \subseteq [\bibot{C_i \cap C_j}, \bitop{C_i \cap C_k}]$ which covers $\bibot{C_i \cap C_j}$ and ends in a vertex of $e_i$.
Since $P$ and $P'$ are joined by $e_i$ we infer that $|\bibot{C_i \cap C_k}|+ |\bibot{C_i \cap C_j}| < m$.
But then $|\bibot{C_j \cap C_k}| \geq m$ in contradiction to Claim~\ref{cla:eine-seite-kleiner-n/2}.
This shows that $|\bibot{C_i \cap C_j}| > |\bitop{C_i \cap C_k}|$, as desired.
This allows us to pick an even path $P'' \subseteq [\bibot{C_i \cap C_j}, \bitop{C_i \cap C_k}]$ of colour $i$, which covers $\bitop{C_i \cap C_k}$ and ends in one of the vertices of $e_i$.
Join $P$ and $P''$ via $e_i$ to obtain a colour $i$ path of length at least $2 |\bitop{C_i \cap C_k}| + 2 |\bibot{C_i \cap C_k}| =2|V(C_i \cap C_k)|$. So by our assumption that there is no monochromatic path of length $2m$, we obtain $|V(C_i \cap C_k)|<m$, as desired.
\end{proof}
\begin{claim}\label{cla:hoechstens-eine-kleiner-n/2}
For at most one pair of distinct indices $i,j \in\{1,2,3\}$ it holds that $|V(C_i \cap C_j)|<m$.
\end{claim}
\begin{proof}
Suppose, on the contrary, that $C_1 \cap C_2$ and $C_1 \cap C_3$ each have less than $m$ vertices.
Then $C_2 \cap C_3$ has at least $2m$ vertices.
Therefore one of its partition classes has size at least $m$, a contradiction to Claim~\ref{cla:eine-seite-kleiner-n/2}.
\end{proof}
We are now ready for the last step of the proof of Theorem~\ref{thm:ramsey}.
We start by observing that if for some $i \in \{1,2,3\}$, the subgraph $C_i$ is not connected in colour $i$, then (letting $j,k$ be the other two indices) the edges of the graphs $C_i \cap C_j$ and $C_i \cap C_k$ are all of colour $j$, or colour $k$, respectively, and thus both $C_j$ and $C_k$ are connected in colour $j$, or colour $k$, respectively. So we can assume that there are at least two distinct indices $j,k \in \{1,2,3\}$, such that the subgraphs $C_j$, $C_k$ are connected in colour $j$, or in colour $k$, respectively. Say these indices are $1$ and $3$.
We use Claim~\ref{cla:zwei-seiten-kleiner-n/2} twice:
For $C_1$ it yields that one of $C_1 \cap C_3$ and $C_1 \cap C_2$ has less than $m$ vertices.
For $C_3$ it yields that one of $C_1 \cap C_3$ and $C_2 \cap C_3$ has less than $m$ vertices.
So by Claim~\ref{cla:hoechstens-eine-kleiner-n/2} we get that necessarily,
\begin{equation}\label{xxxxx}
|V(C_1 \cap C_3)|<m, \ |V(C_1 \cap C_2)|\geq m, \ |V(C_2 \cap C_3)|\geq m.
\end{equation}
Again using Claim~\ref{cla:zwei-seiten-kleiner-n/2}, this implies that $C_2$ is not connected in colour $2$. So by Claim~\ref{cla:non-empty-parts} and the fact that the edges between $C_1 \cap C_2$ and $C_2 \cap C_3$ are complete bipartite in colour 2, we have that \begin{equation}\label{xxxxx+}\text{$C_1 \cap C_2$ is complete bipartite in colour 1.}
\end{equation}
Also, in light of~\eqref{xxxxx}, Claim~\ref{cla:zwei-seiten-kleiner-n/2} with input $i=1$ gives $j=2$ and $k = 3$ and thus $|\bitop{C_1 \cap C_2}| \geq |\bibot{C_1 \cap C_3}|$, $|\bibot{C_1 \cap C_2}| > |\bitop{C_1 \cap C_3}|$ (after possibly swapping top and bottom partition).
Choose two balanced paths of colour 1:
The first path $P \subseteq [\bitop{C_1 \cap C_2}, \bibot{C_1 \cap C_3}]$ such that it covers $\bibot{C_1 \cap C_3}$.
The second path $P' \subseteq [\bibot{C_1 \cap C_2}, \bitop{C_1 \cap C_3}]$ such that it covers $\bitop{C_1 \cap C_3}$.
As by~\eqref{xxxxx+} we know that $C_1 \cap C_2$ is complete bipartite in colour 1, we can join $P$ and $P'$ with a path of colour 1 in $C_1 \cap C_2$, such that the resulting path $P''$ covers one of $\bitop{C_1}$, $\bibot{C_1}$.
Since by assumption, $P''$ has less than~$2 m$ vertices, we obtain that $\bitop{C_2 \cap C_3}$ or $\bibot{C_2 \cap C_3}$ has size at least $m$, a contradiction to Claim~\ref{cla:eine-seite-kleiner-n/2}.
This finishes the proof of Theorem~\ref{thm:ramsey}.
\end{document}
\begin{document}
\preprint{APS/123-QED}
\title{High-fidelity state transfer through long-range correlated disordered quantum channels}
\author{Guilherme M. A. Almeida}
\email{[email protected]}
\affiliation{
Instituto de F\'{i}sica, Universidade Federal de Alagoas, 57072-900 Macei\'{o}, AL, Brazil
}
\author{Francisco A. B. F. de Moura}
\affiliation{
Instituto de F\'{i}sica, Universidade Federal de Alagoas, 57072-900 Macei\'{o}, AL, Brazil
}
\author{Marcelo L. Lyra}
\affiliation{
Instituto de F\'{i}sica, Universidade Federal de Alagoas, 57072-900 Macei\'{o}, AL, Brazil
}
\date{\today}
\begin{abstract}
We study quantum-state transfer in $XX$ spin-$1/2$ chains where
both communicating spins are weakly coupled to a channel featuring disordered on-site magnetic fields.
Fluctuations are modelled by long-range correlated sequences
with self-similar profile obeying a power-law spectrum.
We show that the channel is able to perform almost perfect
quantum-state transfer in most of the samples,
even in the presence of significant amounts of disorder,
provided the degree of those correlations is strong enough.
In that case, we also show that the lack of mirror symmetry
has little effect on the likelihood of high-quality outcomes.
Our results are a further step
towards designing robust devices for
quantum communication protocols.
\end{abstract}
\maketitle
\section{\label{sec1}Introduction}
Transmitting quantum states and establishing entanglement between
distant parties (say Alice and Bob)
are crucial tasks in quantum information processing protocols \cite{cirac97, kimble08}.
In this direction,
spin chains have been widely addressed as quantum channels
for (especially short-distance) communication protocols since
it was proposed in Ref.~\cite{bose03} that
spin chains can be
used to transfer quantum information with minimal control, i.e.,
no manipulation is required during the transmission.
Basically, Alice prepares and
sends out an arbitrary qubit state through the channel and Bob only needs
to make a measurement at some prescribed time. The evolution itself is given
by the natural dynamics of the system.
Since then, several schemes for high-fidelity quantum-state transfer (QST)
\cite{bose03,christandl04,plenio04,osborne04,wojcik05,*wojcik07,li05,huo08, gualdi08, banchi10, *banchi11,*apollaro12,lorenzo13,lorenzo15, almeida16}
and entanglement creation and distribution
\cite{amico04, plenio05, apollaro06,*plastina07,*apollaro08,venuti07, cubitt08, giampaolo09, *giampaolo10, gualdi11, estarellas17, *estarellas17scirep, almeida17-1}
in spin chains have been put forward.
For instance,
it was discovered that \textit{perfect} QST can be achieved in mirror-symmetric chains by a judicious tuning
of the spin-exchange couplings over the entire chain
\cite{christandl04, plenio04} (see \cite{feder06} for a generalization).
While this scheme allows one to perform QST with unit fidelity
over arbitrarily large distances, it is not an easy task, on the practical side,
to engineer the whole chain with the desired
precision, which makes this configuration very sensitive to
perturbations \cite{dechiara05, zwick11, *zwick12}.
An alternative, less demanding approach is based on optimizing the outermost
couplings of a uniform channel so that the linear part of the spectrum dominates the dynamics \cite{banchi10}.
One can also encode the information using multiple spins to
send dispersion-free Gaussian wave-packets through the channel \cite{osborne04}.
Another class of protocols relies on setting \textit{very weak} couplings
between the end spins (those being the sender and receiver sites)
and the bulk of the chain \cite{wojcik05, li05,venuti07,huo08, gualdi08,giampaolo09, giampaolo10, almeida16}
in order to effectively reduce the operating Hilbert space
to that of a two- or three-site chain, depending on
the
resonance conditions.
That way, it is possible to carry out QST with close-to-unit fidelity.
A similar strategy is to apply strong magnetic fields at the sender and receiver spins (or on their nearest neighbors) \cite{plastina07, lorenzo13, paganelli13}.
Each of the aforementioned schemes has its
own peculiarities,
but there is one factor that can seriously compromise
the protocol regardless of the engineering scheme being used: disorder.
Fluctuations either in the local magnetic fields or in the coupling strengths
are inevitably present, due either to manufacturing errors
or to spurious dynamical factors,
leaving us far from the desired output.
Needless to say, finding ways to overcome such difficulties
and testing the robustness of various schemes against such experimental imperfections are
of great importance and have been done extensively \cite{dechiara05,fitzsimons05,burgarth05,tsomokos07,giampaolo10, petrosyan10,yao11,zwick11, bruderer12,kay16}.
Among the many possible configurations for realizing high-quality QST in the presence of disorder, it is preferable to choose a channel upon which
the sender and receiver spins do not heavily depend. With that in mind,
setups featuring communicating parties weakly coupled to the channel \cite{wojcik05}
seem to be a promising choice \cite{yao11}. A combined approach involving modulated couplings with weakly coupled spins has also been put forward in \cite{bruderer12}.
Still, the slightest amount of disorder is already capable of promoting
Anderson localization effects \cite{anderson58, *abrahams79} or, even worse, destroying
the symmetry of the channel \cite{albanese04}.
That is not necessarily true, however, in the case of \textit{correlated} disorder.
The breakdown of Anderson localization has been reported when short- \cite{flores89, *dunlap90,*phillips91} or long-range correlations \cite{demoura98,izrailev99,kuhl00,lima02, *demoura02,*nunes16,demoura03,adame03, gonzales14, almeida17-1} are present in disordered 1D models. In the latter case, in particular,
a set of extended states appears in the middle of the band with well-detached
mobility edges, thereby signalling an Anderson-type metal-insulator transition \cite{demoura98,izrailev99}. This is also manifested in low-dimensional spin chains \cite{lima02, almeida17-1}.
Correlated fluctuations take place in many stochastic processes in nature (see, e.g., Refs. \cite{lam92,peng92,carreras98, carpena02}) and therefore
should not be ruled out
when designing protocols for quantum information processing in solid-state devices \cite{dechiara05,burgarth05}.
Here, we will see that it indeed makes a dramatic difference
in the performance of QST protocols based on weakly-coupled end spins.
Specifically, we consider a one-dimensional $XX$ spin chain
in which
the local magnetic fields (on-site potentials) of the channel
follow a long-range correlated disordered distribution
with power-law spectrum $S(k) \propto 1/k^{\alpha}$, with $k$ being the corresponding wave number and
$\alpha$ being a characteristic exponent governing the degree of such correlations.
We show that when perturbatively attaching two communicating (end) spins to the channel
and setting their frequency to lie in the middle of the band,
we are still able to perform nearly perfect QST rounds
in the presence of correlated disorder.
Surprisingly, this occurs even
in the presence of considerable asymmetries in the channel.
The reason is the appearance of extended states in the middle of the band,
which
offer the necessary end-to-end \textit{effective} symmetry,
thereby supporting the occurrence of Rabi-like oscillations between the sender and receiver spins.
We show that perfect mirror symmetry, despite being very convenient for QST protocols, is not a crucial factor as long as there exists a proper set of delocalized eigenstates in the channel.
In the following, in Sec. \ref{sec2} we
introduce the $XX$ spin Hamiltonian with on-site long-range correlated disorder.
In Sec. \ref{sec3}
we derive an effective two-site Hamiltonian that accounts for the way both communicating parties are coupled to the channel.
In Sec. \ref{sec4} we investigate how the channel responds to disorder
by looking at the resulting effect on the
localization and symmetry properties.
In Sec. \ref{sec5} we present the results for the QST fidelity, and
our final remarks
are given in Sec. \ref{sec6}.
\section{\label{sec2}Spin-chain Hamiltonian}
We consider a
pair of spins (communicating parties) coupled
to a one-dimensional quantum channel
consisting altogether of an open spin-$1/2$ chain featuring
$XX$-type exchange interactions described by
Hamiltonian
$\hat{H} = \hat{H}_{\mathrm{ch}}+\hat{H}_{\mathrm{int}}$
with ($\hbar = 1$)
\begin{equation} \label{Hchannel}
\hat{H}_{\mathrm{ch}} = \sum_{i=1}^{N}\dfrac{\omega_{i}}{2}(\hat{1}-\hat{\sigma}_{i}^{z})-\sum_{\langle i,j \rangle}\dfrac{J_{i,j}}{2}(\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}
+\hat{\sigma}_{i}^{y}\hat{\sigma}_{j}^{y}),
\end{equation}
where $\hat{\sigma}_{i}^{x,y,z}$ are the Pauli operators for the $i$-th spin,
$\omega_{i}$ is the local (on-site) magnetic field,
and $J_{i,j}$ is the exchange coupling strength between
nearest-neighbor nodes.
Supposing the sender ($s$) and receiver ($r$) spins are connected to nodes $1$ and $N$
from the channel at rates $g_{s}$ and $g_{r}$, respectively, the interaction part reads
\begin{align} \label{Hint}
\hat{H}_{\mathrm{int}} &=\dfrac{\omega_{s}}{2}(\hat{1}-\hat{\sigma}_{s}^{z})+\dfrac{\omega_{r}}{2}(\hat{1}-\hat{\sigma}_{r}^{z})\\ \nonumber
& \quad-
\dfrac{g_{s}}{2}(\hat{\sigma}_{s}^{x}\hat{\sigma}_{1}^{x}
+\hat{\sigma}_{s}^{y}\hat{\sigma}_{1}^{y})-
\dfrac{g_{r}}{2}( \hat{\sigma}_{r}^{x}\hat{\sigma}_{N}^{x}
+\hat{\sigma}_{r}^{y}\hat{\sigma}_{N}^{y}).
\end{align}
Note that since $\hat{H}$ conserves the total magnetization of the system, i.e.,
$\left[ \hat{H}, \sum_{i} \hat{\sigma}_{i}^{z}\right] =0$, the
Hamiltonian can be split into independent subspaces with fixed
number of excitations. Here we focus on the single-excitation Hilbert space
spanned by states of the form $\ket{i} = \hat{\sigma}_{i}^{+}\ket{\downarrow\downarrow\ldots\downarrow}$ with $i = r, s, 1,\ldots,N$, that is, every spin pointing down except the one located at the $i$-th position.
In this case, we end up with a
hopping-like matrix of dimension $N+2$.
Indeed
$\hat{H}$ can be mapped onto a system describing non-interacting spinless fermions through
the Jordan-Wigner transformation.
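As a concrete illustration of this single-excitation mapping, the $(N+2)\times(N+2)$ hopping matrix can be assembled numerically. The sketch below is our own minimal Python/NumPy rendering, not part of the original work; the basis ordering $(s,1,\ldots,N,r)$ and the function name are our choices, and the default couplings follow the testbed values used later in the paper.

```python
import numpy as np

def single_excitation_hamiltonian(omega, J=1.0, g_s=0.001, g_r=0.001,
                                  omega_s=0.0, omega_r=0.0):
    """Hopping matrix of H in the single-excitation basis, ordered
    as (s, 1, ..., N, r); `omega` holds the channel on-site fields."""
    N = len(omega)
    H = np.zeros((N + 2, N + 2))
    H[0, 0] = omega_s                      # sender on-site term
    H[N + 1, N + 1] = omega_r              # receiver on-site term
    H[1:N + 1, 1:N + 1] += np.diag(omega)  # channel magnetic fields
    for i in range(1, N):                  # uniform channel hoppings (-J)
        H[i, i + 1] = H[i + 1, i] = -J
    H[0, 1] = H[1, 0] = -g_s               # sender-channel coupling
    H[N, N + 1] = H[N + 1, N] = -g_r       # channel-receiver coupling
    return H
```

Diagonalizing this matrix gives access to the full single-excitation dynamics discussed throughout the paper.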
Let us now make a few assumptions regarding the channel described by Hamiltonian (\ref{Hchannel}). Here we
consider the spin-exchange coupling strengths to be uniform, $J_{i,j}\rightarrow J$, and,
in order to study the robustness of the channel against disorder, we introduce
correlated static fluctuations in the
on-site magnetic fields $\omega_{n}$, $n=1,\ldots,N$.
A straightforward way to generate random sequences featuring internal long-range correlations is through
the trace of the fractional Brownian motion with power-law
spectrum $S(k)\propto1/k^{\alpha}$
\cite{demoura98, adame03}
\begin{equation} \label{disorder}
\omega_{n} = \sum_{k=1}^{N/2}k^{-\alpha/2}
\mathrm{cos}\left( \dfrac{2\pi n k}{N} + \phi_{k} \right),
\end{equation}
where
$k = 1/\lambda$ is the inverse modulation wavelength,
$\lbrace \phi_{k} \rbrace$ are random
phases distributed uniformly within $\left[0,2\pi \right]$,
and $\alpha$ controls the degree of correlations.
This parameter is related to the so-called
Hurst exponent \cite{fractalsbook},
$H = (\alpha -1)/2$, which characterizes
the self-similar character of a given sequence. When
$\alpha= 0$, we recover the case of
uncorrelated disorder (white noise) and for
$\alpha>0$ underlying long-range correlations take place. The resulting long-range correlated sequence becomes nonstationary for $\alpha > 1$.
Furthermore, according to the usual terminology,
when $\alpha > 2$ ($\alpha < 2$) the series increments become persistent (anti-persistent).
Interestingly, this has serious consequences for the spectral
profile of the system. As shown in \cite{demoura98, adame03}, when $\alpha>2$
delocalized states appear in the middle of the one-particle spectrum.
In the QST scenario with weakly coupled spins $r$ and $s$, i.e., $g_{s},g_{r} \ll J$,
this promotes a strong enhancement in the likelihood
of disorder realizations with very high fidelities $F$, most of them yielding $F\approx1$.
This will be elucidated throughout the paper.
Hereafter we set the sequence generated by Eq. (\ref{disorder})
to follow a normalized distribution, that is
$\omega_{n} \rightarrow
\left( \omega_{n}-\langle\omega_{n}\rangle \right) / \sqrt{\langle\omega_{n}^2\rangle-\langle\omega_{n}\rangle^2}$.
We also stress that such a disordered distribution
has no typical length scale, a
property of many natural stochastic series \cite{bak96}.
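For concreteness, the normalized correlated sequence of Eq. (\ref{disorder}) can be generated in a few lines. The following Python/NumPy sketch is our own illustration; the function name and the seeding interface are implementation choices, not part of the original work.

```python
import numpy as np

def correlated_disorder(N, alpha, rng=None):
    """On-site fields with power-law spectrum S(k) ~ 1/k^alpha,
    normalized to zero mean and unit variance [Eq. (3)]."""
    rng = np.random.default_rng() if rng is None else rng
    n = np.arange(1, N + 1)
    k = np.arange(1, N // 2 + 1)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)  # random phases
    # sum over modes k of k^(-alpha/2) * cos(2*pi*n*k/N + phi_k)
    omega = (k[:, None] ** (-alpha / 2.0)
             * np.cos(2.0 * np.pi * np.outer(k, n) / N
                      + phases[:, None])).sum(axis=0)
    # normalize: zero mean, unit variance
    return (omega - omega.mean()) / omega.std()
```

Setting $\alpha = 0$ reproduces uncorrelated (white-noise) disorder, while larger $\alpha$ yields increasingly smooth, long-range correlated landscapes.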
\section{\label{sec3}Effective two-site description}
We now work out a perturbative approach to obtain an effective Hamiltonian involving only the sender and receiver spins, provided they are
very weakly coupled to the channel.
Intuitively, we expect them to span their
own subspace with renormalized parameters, so that QST takes place via effective Rabi oscillations between them \cite{wojcik05, gualdi08, lorenzo13, almeida16}. Our goal here is
to investigate the influence of disorder on such subspaces and to see how
much asymmetry they can tolerate.
Here, we follow the procedure adopted in Refs. \cite{wojcik07, li05}. To begin with, let us express the channel Hamiltonian, Eq. (\ref{Hchannel}), in
terms of its eigenstates $\lbrace \ket{E_{k}} \rbrace$ with corresponding (nondegenerate) frequencies $\lbrace E_{k} \rbrace$ and recast $\hat{H} = \hat{H}_{0}+\hat{V}$, such that
\begin{align}
\hat{H}_{0} = \omega_{s}\ket{s}\bra{s} + \omega_{r}\ket{r}\bra{r} + \sum_{k}E_{k} \ket{E_{k}}\bra{E_{k}}, \\
\hat{V} = \epsilon \sum_{k} \left( g_{s}a_{sk} \ket{s}\bra{E_{k}} + g_{r}a_{rk} \ket{r}\bra{E_{k}} + \mathrm{H.c.} \right)
\end{align}
are now the free and perturbation Hamiltonians, respectively, with $\epsilon$
being a perturbation parameter,
$a_{sk}\equiv\langle 1 \vert E_{k} \rangle$, and
$a_{rk}\equiv\langle N \vert E_{k} \rangle$. Herein we set units such that $J=1$ for convenience.
If we consider that neither $\omega_{s}$ nor $\omega_{r}$ matches any of the normal frequencies $E_{k}$
of the channel and set $\epsilon g_{s}$ and $\epsilon g_{r}$ to be very weak so as not to disturb the nearby
modes, we expect to reach an
effective Hamiltonian of the form $\hat{H}_{\mathrm{eff}} = \hat{H}_{\mathrm{ch}}\oplus\hat{H}_{sr}$
up to some leading order in $\epsilon$, where $\hat{H}_{sr}$ is the decoupled two-spin Hamiltonian which contains all the valuable information on the way the sender and receiver spins ``feel'' the spectrum of the channel. The trick to find $\hat{H}_{\mathrm{eff}}$
is quite straightforward \cite{wojcik07}. Suppose there is a transformation
$\hat{H}_{\mathrm{eff}} = e^{i\hat{S}}\hat{H}e^{-i\hat{S}}$, with $\hat{S}$ being a Hermitian operator which we properly choose to be of the form
\begin{equation}
\hat{S} = i\epsilon \sum_{k}\left( \dfrac{g_{s}a_{sk}}{E_{k}-\omega_{s}}\ket{s}\bra{E_{k}} + \dfrac{g_{r}a_{rk}}{E_{k}-\omega_{r}}\ket{r}\bra{E_{k}}+\mathrm{H.c.} \right).
\end{equation}
This choice is very convenient
because it cancels the first-order terms,
$\hat{V}+i[\hat{S},\hat{H}_{0}] = 0$,
and, up to second-order perturbation theory, we are then left with
\begin{equation}
\hat{H}_{\mathrm{eff}} = \hat{H}_{0}+ i[\hat{S},\hat{V}] +\dfrac{i^{2}}{2!}[\hat{S},[\hat{S},\hat{H}_{0}]] + O(\epsilon^{3}).
\end{equation}
By inspecting the above equation, we see that spins $r$ and $s$ are now
decoupled from the rest of the chain, as intended.
The corresponding Hamiltonian projected onto $\lbrace\ket{s}, \ket{r} \rbrace$ then reads
\begin{equation}\label{Heff2}
\hat{H}_{sr}=
\begin{pmatrix}
h_{s} & -J'\\
-J' & h_{r}
\end{pmatrix},
\end{equation}
with
\begin{equation}\label{weff}
h_{\nu} = \omega_{\nu}-\epsilon^{2} g_{\nu}^{2} \sum_{k}\dfrac{|a_{\nu k}|^{2}}{E_{k}-\omega_{\nu}},
\end{equation}
$\nu \in \lbrace s, r \rbrace$, and
\begin{equation}\label{Jeff}
J' = \dfrac{\epsilon^{2}g_{s}g_{r}}{2}\sum_{k}\left( \dfrac{a_{sk}a_{rk}}{E_{k}-\omega_{s}} + \dfrac{a_{sk}a_{rk}}{E_{k}-\omega_{r}} \right).
\end{equation}
Note that we are assuming all parameters to be real. Hamiltonian (\ref{Heff2}) describes a two-level system which performs Rabi-like oscillations on a time scale
set by the inverse of the gap between its normal frequencies. To achieve QST as close to perfect as possible, one should guarantee that $h_{s} = h_{r}$. This is
automatically fulfilled, given $\omega_{s} = \omega_{r}$ and $g_{s} = g_{r} = g$, for mirror-symmetric chains, since $|a_{sk}| = |a_{rk}|$ for every $k$.
In that case, for a noiseless uniform channel and in the limit of very weak outer couplings, which implies the validity of Hamiltonian (\ref{Heff2}),
an initial state prepared in $\ket{s}$
will evolve in time to $\ket{r}$ with nearly unit amplitude at times $\tau J = n\pi / (2J')= n\pi / (2\epsilon^{2}g^{2})$, with $n$ being an odd integer \cite{wojcik05, wojcik07}.
Note that as $N$ increases more eigenstates enter the middle of the spectrum, and thus
$\epsilon g_{\nu}$ must be adjusted accordingly (we shall drop the perturbation parameter $\epsilon$ hereafter).
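The renormalized parameters of Eqs. (\ref{weff}) and (\ref{Jeff}) are easily evaluated numerically from the channel eigensystem. The sketch below is our own illustration, assuming $g_{s}=g_{r}=g$ with the perturbation parameter $\epsilon$ absorbed into $g$:

```python
import numpy as np

def effective_two_site(H_channel, g=0.001, omega_s=0.0, omega_r=0.0):
    """Renormalized fields h_s, h_r and effective hopping J' of the
    sender-receiver Hamiltonian, per Eqs. (7) and (8), for g_s = g_r = g."""
    E, V = np.linalg.eigh(H_channel)
    a_s, a_r = V[0, :], V[-1, :]   # overlaps a_sk, a_rk with sites 1 and N
    h_s = omega_s - g**2 * np.sum(a_s**2 / (E - omega_s))
    h_r = omega_r - g**2 * np.sum(a_r**2 / (E - omega_r))
    Jp = 0.5 * g**2 * np.sum(a_s * a_r * (1.0 / (E - omega_s)
                                          + 1.0 / (E - omega_r)))
    return h_s, h_r, Jp
```

For a noiseless mirror-symmetric channel this yields $h_{s}=h_{r}$, i.e., zero detuning, consistent with the discussion above.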
In summary, in Rabi-type QST protocols \cite{wojcik05,gualdi08,lorenzo13,almeida16}, a pair of eigenstates of the form $\ket{\psi^{\pm}} \approx (\ket{s}\pm \ket{r})/\sqrt{2}$
is ultimately responsible for the fidelity of the transfer.
We remark that,
for certain classes of channels, such as uniform or dimerized
\cite{venuti07, ciccarello11,almeida16}, one
can obtain analytical forms for those states using perturbation theory as well as work
out the corresponding discrete normal frequencies.
The form expressed by Eq. (\ref{Heff2}), however,
is general and more suited for our purposes, not to mention we are dealing with disordered channels.
We would also like to mention that one can induce an effective three-site
system by properly tuning $\omega_{s} = \omega_{r} = E_{k}$ for a given $k$.
In that case, the transfer is directly \textit{mediated} by the corresponding eigenstate \cite{wojcik07, yao11, paganelli13}.
Likewise, whenever perfect symmetry between sites $1$ and $N$ is available, which
corresponds to equal off-diagonal rates in the effective $3\times3$ hopping matrix, QST can be similarly performed with nearly perfect fidelity in the limit of very small $g_{\nu}$ \cite{wojcik07}.
We do not deal with this scenario here because in our disordered chain there
are no fixed normal frequencies to tune to, since
each sample features a different sequence generated by Eq. (\ref{disorder}).
\section{\label{sec4}Disordered channel properties}
While spatial symmetry is an essential ingredient
in the design of quantum communication protocols in spin chains,
there is no guarantee that all chain parameters will come out as planned.
Experimental imperfections may induce disorder and hence spoil the intended output.
In 1D tight-binding models, pure (uncorrelated) disorder yields
the so-called phenomenon of Anderson localization \cite{anderson58}
in which every eigenstate
becomes exponentially localized around a given site, say $x_{0}$,
$\langle x\vert E_{k}\rangle \sim e^{-\frac{|x-x_{0}|}{\xi_{k}}}$,
where $\xi_{k}$ is the localization length \cite{thouless74}.
Now let us discuss the consequences of this for the two-site effective Hamiltonian,
Eq. (\ref{Heff2}).
Disorder acts on it by inducing an (undesired) detuning $\Delta \equiv h_{s}-h_{r}$.
At first glance, one could naively
think of masking this effect by setting
$J'\gg \Delta$, only to realize that all the Hamiltonian
parameters heavily depend upon the very same factors. First and foremost,
they are built from the overlaps, $a_{sk}$ and $a_{rk}$, between the
spins they are connected to (the outer spins of an open linear chain) and \textit{each} normal mode $k$ of the channel. The presence of disorder then promotes a tremendous asymmetry
in the channel while at the same time decreasing $J'$, because it becomes very unlikely that a given eigenstate $\ket{E_{k}}$ will
simultaneously feature non-negligible
amplitudes
at $\ket{1}$ and $\ket{N}$, thereby diminishing the contribution of each term of the sum in Eq. (\ref{Jeff}).
As a consequence,
the subspace spanned by $\ket{s}$ and $\ket{r}$
becomes even more sensitive to $\Delta$.
One way to compensate for this would be to individually manipulate either
$g_{\nu}$ or $\omega_{\nu}$ [cf. Eq. (\ref{weff})], though this would not work out
very efficiently.
First, note that
$\omega_{\nu}$ is also present in the denominator of Eq. (\ref{weff}).
Also, one must be careful when tuning $g_{s}$ and $g_{r}$ in order to keep
the sender and receiver off-resonantly coupled to the channel.
Normally, the scale imposed by $\Delta$ is such that
it would become necessary to increase one of the outer
couplings $g_{\nu}$ quite considerably, thus disturbing a few normal modes in the neighborhood
of the $\omega_{\nu}$ level and thereby breaking down the validity of the effective description in Eq. (\ref{Heff2}).
Besides all that, in principle there is no way to predict, sample by sample,
the specific disordered outcome,
so we are better off
just fixing $g_{\nu}$ and $\omega_{\nu}$ at some convenient value.
We also remark that when dealing with quantum communication protocols of this kind in
spin chains \cite{bose03}, it is important to keep the level of external control over the system
to a minimum. Initialization and read-out procedures are the only forms
of control that should be allowed.
Everything discussed above holds for standard uncorrelated stochastic fluctuations in the system's parameters. However, a given disordered set of parameters might not always be
uncorrelated, i.e., site-independent \cite{dechiara05,burgarth05}.
Let us now discuss some possible
consequences of correlated disorder sequences on the channel,
particularly those displaying
long-range correlations with power-law spectrum such as the ones generated from
Eq. (\ref{disorder}).
In this case, for 1D tight-binding models
it is known that
the underlying structure of the series induces the appearance of a set of delocalized
states around the middle of the band with well-defined mobility edges \cite{demoura98} provided $\alpha > 2$.
To elucidate this, we numerically calculate the normalized participation ratio for every eigenstate of Hamiltonian (\ref{Hchannel}), defined by
\begin{equation}
\xi_{k} = \dfrac{1}{N\sum_{i=1}^{N}|\langle i \vert E_{k} \rangle|^4},
\end{equation}
which assumes $1/N$ for fully localized states and $1$ for uniformly extended states (that is,
$\langle i \vert E_{k} \rangle = 1/\sqrt{N}$ $\forall$ $i$).
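This quantity follows directly from the eigenvectors of the channel Hamiltonian. The short Python/NumPy sketch below is our own illustration for a real symmetric hopping matrix:

```python
import numpy as np

def participation_ratios(H_channel):
    """Normalized participation ratio xi_k for every eigenstate of the
    channel Hamiltonian; 1/N for fully localized, 1 for uniform states."""
    N = H_channel.shape[0]
    _, V = np.linalg.eigh(H_channel)  # columns of V are the |E_k>
    return 1.0 / (N * np.sum(np.abs(V)**4, axis=0))
```

By construction, every $\xi_{k}$ lies between $1/N$ and $1$, making it a convenient eigenstate-resolved measure of delocalization.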
\begin{figure}
\caption{\label{fig1} Normalized participation ratio $\xi_{k}$, averaged over $10^{3}$ independent samples, for several degrees of long-range correlation $\alpha$ in a channel with $N=100$ spins. The dashed line corresponds to the noiseless case ($\omega_{n}\rightarrow 0$).}
\end{figure}
Figure \ref{fig1} shows the resulting $\xi_{k}$ distribution (averaged over $10^{3}$ independent samples) as the degree of long-range
correlations $\alpha$ is increased for an on-site-disordered channel consisting of $N = 100$ spins, including the noiseless case ($\omega_{n}\rightarrow 0$) for comparison (dashed line).
Note that we are considering the channel Hamiltonian only [cf. Eq. (\ref{Hchannel})], with $g_{\nu} = 0$.
Indeed, a prominent set of delocalized eigenstates
builds up around the band center.
First of all,
we remark that the slight deflection of the $\alpha = 0$ curve (uncorrelated disorder) is solely due to the well-known fact that states at the band edges are more localized than those near the band center.
This becomes much more pronounced for $\alpha = 2$ and higher, as expected.
Indeed, $\alpha > 2$ sets the transition point from an insulating to a metallic phase
in electronic tight-binding models, characterized by vanishing Lyapunov coefficients in the central part of the spectrum \cite{demoura98}.
This happens exactly when
the sequences generated by Eq. (\ref{disorder}) display persistent increments
according to the Hurst classification scheme \cite{fractalsbook}.
The persistence of delocalized states in the presence of substantial amounts of
disorder, not to mention the lack of mirror symmetry due to the on-site magnetic-field distribution across the chain, is quite appealing.
It means there is a suitable region in the frequency band of the
channel -- in our case, in the middle of it, as seen in Fig. \ref{fig1} --
to tune the sender and receiver spins to. The corresponding eigenstates, featuring a delocalized nature, will display a broader amplitude distribution with greater
\textit{balance} between $a_{rk}$ and $a_{sk}$, thereby increasing the chances of inducing
a small detuning $\Delta$ [cf. Eqs. (\ref{Heff2}), (\ref{weff}), and (\ref{Jeff})], which is crucial for
having very high transfer fidelities. Figure \ref{fig2} shows how
the absolute value of the ratio $\Delta/J'$
(averaged over several samples) behaves with $\alpha$, leaving no doubt that the onset of long-range correlations
establishes suitable ground for carrying out quantum communication protocols
with weakly coupled parties.
As discussed earlier, uncorrelated fluctuations ($\alpha=0$)
rule out any possibility of doing so, the ratio being extremely high.
Things then improve rapidly with $\alpha$, suggesting that already for $\alpha > 2$
one should obtain satisfying outcomes in the QST protocol, as we show in the following section.
\begin{figure}
\caption{\label{fig2} Average absolute value of the ratio $\Delta/J'$ as a function of the correlation exponent $\alpha$.}
\end{figure}
\section{\label{sec5}Quantum-state transfer protocol}
The standard QST protocol goes as follows \cite{bose03}. Suppose Alice controls the spin located at position $s$ and wants to send an arbitrary qubit
$\ket{\phi}_{s} = \alpha \ket{\downarrow}_{s}+ \beta \ket{\uparrow}_{s}$ to Bob, who has
access to spin $r$. Now let us assume that the rest of the chain is initialized
in the fully polarized spin-down state, so that
the whole state reads
$\ket{\Psi (0)} = \ket{\phi}_{s}\ket{\downarrow}_{1}\ldots \ket{\downarrow}_{N}\ket{\downarrow}_{r}$. She then lets the system evolve
following its natural dynamics,
$\ket{\Psi(t)} =\hat{\mathcal{U}}(t)\ket{\Psi(0)}$,
where $\hat{\mathcal{U}}(t)\equiv e^{-i\hat{H}t}$ is the unitary time-evolution operator.
Ideally, she expects that at some prescribed time $\tau$
the evolved state takes the form
$\ket{\Psi (\tau)} = \ket{\downarrow}_{s}\ket{\downarrow}_{1}\ldots \ket{\downarrow}_{N}\ket{\phi}_{r}$. At this point,
Bob receives the state $\rho_{r}(\tau) = \mathrm{Tr}_{s,1,\ldots,N}\ket{\Psi (\tau)}\bra{\Psi (\tau)}$,
and thus the transfer fidelity can be evaluated as $F_{\phi}(\tau) = \bra{\phi} \rho_{r}(\tau) \ket{\phi}$.
Note, however, that this measures the performance of QST for a specific input.
In order to properly evaluate the efficiency of the \textit{channel}, we may average the above quantity over all input states $\ket{\phi}_{s}$ (that is, over the Bloch sphere)
which results in \cite{bose03}
\begin{equation}\label{avF}
F(t) = \dfrac{1}{2}+\dfrac{f_{r}(t)}{3}+\dfrac{f_{r}(t)^2}{6}
\end{equation}
for an arbitrary time, with $f_{i}(t) \equiv \vert \bra{i} e^{-i\hat{H}t} \ket{s} \vert$.
Therefore, this state-independent figure of merit of QST depends solely upon the transition amplitude between the sender and receiver spins, with $F(t)=1$ only when
$f_{r}(t) = 1$.
The problem of transmitting a qubit state from one point to another can thus be viewed as
a single-particle continuous-time quantum walk \cite{kemperev} on a network, where the goal is to
find out ways to transfer the excitation between two distant nodes
with the highest possible transition amplitude.
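Numerically, $F(t)$ follows directly from the spectral decomposition of the propagator. The Python/NumPy sketch below is our own illustration of Eq. (\ref{avF}); the sender and receiver are assumed to sit at the first and last basis states, as in the single-excitation mapping of Sec. \ref{sec2}.

```python
import numpy as np

def averaged_fidelity(H, t, s=0, r=-1):
    """Input-averaged fidelity F(t) of Eq. (10), computed from the
    transition amplitude f_r(t) = |<r| exp(-i H t) |s>| via the
    spectral decomposition of H."""
    E, V = np.linalg.eigh(H)
    # <r| exp(-iHt) |s> = sum_k <r|E_k> e^{-i E_k t} <E_k|s>
    amp = np.sum(V[r, :] * np.exp(-1j * E * t) * V[s, :])
    f = np.abs(amp)
    return 0.5 + f / 3.0 + f**2 / 6.0
```

For instance, for a two-site chain with hopping $J=1$ the transfer is complete at $t=\pi/2$, where $F=1$; at $t=0$ the formula gives the classical baseline $F=1/2$.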
In the case of weakly-coupled spins in which an effective two-site
interaction sets in [cf. Eq. (\ref{Heff2})],
the transition amplitude $f_{r}(t)$ will strongly depend upon the resonance between
$h_{s}$ and $h_{r}$, that is, on $\Delta$. In the previous section, we have seen
that the emergence of long-range correlations (see Fig. \ref{fig2})
favors smaller values of $\Delta$.
Now, let us finally examine the resulting QST performance.
As a testbed, we consider an $N=50$ channel, $g_{\nu}=g= 0.001$ (in units of $J$),
and $\omega_{\nu} = 0$.
Given the size of the channel, this value of $g$ ensures that
the subspace spanned by states $\ket{s}$ and $\ket{r}$ is safely shielded from the influence of
channel normal modes lying around the band center. Even if one of them gets close by,
it is very likely that the eigenstate will not be extremely asymmetric
due to the presence of delocalized states
for high enough $\alpha$ [see Fig. \ref{fig1}].
\begin{figure}
\caption{\label{fig3} Sample distribution of the maximum fidelity $F_{\mathrm{max}}$ for increasing values of $\alpha$.}
\end{figure}
In Fig. \ref{fig3} we show the
sample distribution of the maximum fidelity $F_{\mathrm{max}} = \mathrm{max}\lbrace F(t)\rbrace$
[as defined above in Eq. (\ref{avF})]
achieved in the time interval $t \in [0,20\tau]$,
with $\tau=\pi/(2g^{2})$ being the time (in units of $1/J$) at which
a complete transfer would occur in the noiseless case, $f_{r}(\tau) \approx 1$,
as seen in Sec. \ref{sec3}.
That interval is long enough to
guarantee at least one full Rabi cycle in most of the samples.
Recall that the effective sender-receiver hopping strength $J'$ dictates the time scale of the dynamics
and is strongly affected by disorder.
Figure \ref{fig3} ultimately confirms what was suggested
by Fig. \ref{fig2}. Indeed, strong long-range correlations
in the disorder distribution enhance the
figure of merit of QST enormously.
Even more impressive is the fact that, for $\alpha = 2$ and $\alpha=3$ [see Figs. \ref{fig3}(c) and \ref{fig3}(d), respectively],
the number of occurrences of fidelities $F_{\mathrm{max}}\approx 1$ is the highest.
We also note that the fidelities for the $\alpha=2$ case [Fig. \ref{fig3}(c)] are fairly well distributed
across all possible outcomes, indicating
a transition regime.
\begin{figure}
\caption{\label{fig4} (a) Time evolution of the occupation probabilities of the sender, receiver, and channel spins for a representative sample with $\alpha = 3$. (b) Spatial distribution of the channel eigenstates along the spectrum.}
\end{figure}
In order to provide an explicit view on what is actually going on in the QST process, in Fig. \ref{fig4}(a) we show the time evolution of the
occupation probabilities $f_{i}^{2}(t)$ of the sender ($i=s$), receiver ($i=r$),
and channel [$f_{\mathrm{ch}}^{2}(t)\equiv \sum_{n=1}^{N} f_{n}^{2}(t) $] spins
for one
particular (ordinary) sample, out of many successful ones
(meaning $F_{\mathrm{max}}\approx 1$) encountered for $\alpha=3$ [see Fig. \ref{fig3}(d)].
There we see a genuine Rabi-like behavior yielding a very high-quality QST.
We reduced the time scale to $2\tau$
so as to
provide a more detailed view of a complete cycle.
Therefore, in this case the transfer time happens to be
roughly the same as in the noiseless case.
Further, we note that the channel is barely populated for all practical purposes [see the inset of Fig.~\ref{fig4}(a)],
meaning that Eq. (\ref{Heff2})
is a robust approximation. The residual beatings seen
for $f_{\mathrm{ch}}^{2}(t)$
are due to some negligible mixing between the channel
and sender/receiver subspaces.
One could get rid of them by further decreasing $g$.
Care must be taken, though, not to compromise the transfer time scale, since
it increases as $1/g^{2}$.
Figure \ref{fig4}(b) shows the corresponding spatial distribution of eigenstates,
$\vert \langle i \vert E_{k} \rangle \vert^{2}$, along the whole spectrum $k$.
First, note that the outer parts of the spectrum are
mostly populated by localized-like eigenstates. Indeed, the eigenstates get more delocalized
as we move towards the center of the band, as discussed before [see Fig. \ref{fig1}].
We also point out the asymmetrical aspect of the eigenstate distribution.
Still, it turns out to be possible to span an independent subspace involving only the sender and receiver
spins [Eq. (\ref{Heff2})] so that the corresponding eigenstates become
close to $(\ket{s}\pm \ket{r})/\sqrt{2}$.
By looking closely at Fig.~\ref{fig4}(b), we also spot a few eigenstates showing strong asymmetries between
spins $1$ and $N$.
Fortunately, since $a_{sk}$ and $a_{rk}$ are fairly balanced across the spectrum, and
because
the channel eigenstates lying around the middle of the band (the less asymmetric ones) have the greatest
influence on $\Delta/J'$, given that
the terms in the sums in Eqs. (\ref{weff}) and (\ref{Jeff}) go as $\sim 1/E_{k}$,
the sender and receiver spins are able to find a way through such
asymmetries and establish an effective resonant interaction between them,
resulting in an almost perfect QST for most of the samples.
\begin{figure}
\caption{\label{fig5} Maximum fidelity $F_{\mathrm{max}}$, averaged over all samples, versus $\alpha$ for several values of $g$.}
\end{figure}
Last, in order to obtain
a representative outcome of $F_{\mathrm{max}}$ for a given $\alpha$,
in Fig.~\ref{fig5} we plot its average
over all samples
for a large window of $\alpha$ values.
This clearly illustrates
the overall behavior of the occurrences of $F_{\mathrm{max}}$
as one increases the degree of long-range correlations in the disorder distribution.
Note that we also show curves for several values of $g$, to stress the importance of setting this parameter
as small as possible
so as to avoid mixing between the channel and sender/receiver subspaces.
Indeed, we see that the quality of QST is affected by this.
As we go toward smaller values of $g$,
there is a saturation point indicating that
Hamiltonian (\ref{Heff2}) has reached its final form. This means that if we keep decreasing $g$, the QST fidelity will not get any better, while the time scale of the transfer will
increase substantially.
Finally, we identify in Fig.~\ref{fig5}
that the growth of $F_{\mathrm{max}}$
is more pronounced
between $\alpha = 1$ and $\alpha = 3$,
saturating for higher values of $\alpha$.
This is associated with the fact that
the long-range correlated sequence generated by Eq.~(\ref{disorder}) becomes
nonstationary for $\alpha > 1$ and acquires a persistent
character when $\alpha>2$, thereby triggering
the appearance of delocalized states in the middle
of the band \cite{demoura98, adame03}.
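Eq.~(\ref{disorder}) is not reproduced in this excerpt; a standard way to build an on-site energy sequence with power-law spectral density $S(k)\propto k^{-\alpha}$ is Fourier filtering with random phases, sketched below under the assumption that the paper's construction is of this form:

```python
import numpy as np

def correlated_disorder(N, alpha, rng):
    """On-site energies with power-law spectral density S(k) ~ k^{-alpha},
    built by Fourier filtering of random phases (a standard construction;
    the paper's Eq. (disorder) is assumed to be of this type).
    Output is normalized to zero mean and unit variance."""
    k = np.arange(1, N // 2 + 1)                      # Fourier modes
    phi = rng.uniform(0.0, 2.0 * np.pi, size=N // 2)  # random phases
    n = np.arange(N)[:, None]
    # Each mode k is weighted by k^{-alpha/2} so that |amplitude|^2 ~ k^{-alpha}:
    eps = np.sum(k ** (-alpha / 2.0)
                 * np.cos(2.0 * np.pi * k * n / N + phi), axis=1)
    return (eps - eps.mean()) / eps.std()

rng = np.random.default_rng(1)
eps = correlated_disorder(N=128, alpha=2.5, rng=rng)  # persistent regime (alpha > 2)
```

Increasing `alpha` suppresses the high-$k$ modes, producing the smoother, nonstationary landscapes that support the mid-band delocalized states discussed above.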
\section{\label{sec6}Concluding remarks}
We studied a QST protocol through an
$XX$ spin channel with on-site long-range-correlated disorder.
The protocol involves a pair of communicating spins weakly coupled to the channel and
off-resonant with all of its normal modes, so that the transfer takes
place through Rabi-like oscillations between
the ends of the chain \cite{wojcik05, almeida16}.
We focused on the reduced sender/receiver description based on Hamiltonian (\ref{Heff2}),
which embodies all the relevant information about the way
the communicating spins are affected by the channel, thus allowing one to foresee
the QST outcome from the renormalized parameters contained in the two-site effective Hamiltonian.
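The predictive power of the two-site picture can be sketched in a few lines: once the renormalized coupling $J'$ and detuning $\Delta$ are known, the Rabi-like transfer amplitude is fixed, and the averaged fidelity follows from the standard single-excitation expression $F = 1/2 + f/3 + f^{2}/6$ (Bose's formula), with $f$ the sender-to-receiver amplitude magnitude. The numerical parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def rabi_fidelity(J_eff, Delta, times):
    """Maximum averaged QST fidelity F = 1/2 + f/3 + f^2/6 for the
    two-site effective Hamiltonian
        H_eff = [[Delta/2, J_eff], [J_eff, -Delta/2]],
    where f(t) = (|J_eff|/Omega) |sin(Omega t)| is the sender->receiver
    transition amplitude and Omega = sqrt(J_eff^2 + Delta^2/4)."""
    Omega = np.sqrt(J_eff**2 + Delta**2 / 4.0)   # Rabi frequency
    f = (np.abs(J_eff) / Omega) * np.abs(np.sin(Omega * times))
    return float((0.5 + f / 3.0 + f**2 / 6.0).max())

t = np.linspace(0.0, 200.0, 20001)  # long enough to cover a full Rabi cycle
F_res = rabi_fidelity(J_eff=0.05, Delta=0.0, times=t)  # resonant: near-perfect QST
F_det = rabi_fidelity(J_eff=0.05, Delta=0.2, times=t)  # detuned: fidelity reduced
```

On resonance ($\Delta = 0$) the amplitude reaches $f = 1$ and $F \to 1$, while a detuning comparable to $J'$ caps the transfer probability at $J'^{2}/(J'^{2}+\Delta^{2}/4)$, which is why establishing an effective resonance is the decisive ingredient.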
We showed that this class of
weakly coupled models is indeed robust against external perturbations \cite{yao11}, as the effective interaction between the sender and receiver spins does not depend upon
the full eigenstate wavefunctions of the channel but rather on their local amplitudes at the sites the spins are connected to.
Because of that,
we do not necessarily need a perfectly symmetric chain
to achieve almost perfect QST.
When scale-free correlations with a power-law spectral density $S(k)\propto k^{-\alpha}$ set in,
the disorder distribution can support delocalized eigenstates around
the center of the band \cite{demoura98}.
These provide a broader, more balanced distribution
of amplitudes even in the presence of asymmetries, which makes it possible
to induce effective resonant interactions between $\ket{s}$ and $\ket{r}$,
provided $\alpha$ is high enough, thus resulting in extremely high
fidelities, with most of the samples providing $F_{\mathrm{max}} \approx 1$.
Note that we have not considered the case of structural disorder here, that is,
fluctuations in the spin couplings. However, on-site disorder actually
embodies a worst-case, and hence more realistic, scenario since the spectrum also loses
its symmetry, unlike with structural fluctuations.
We remark that disorder, whether correlated or not, may arise naturally from
experimental imperfections in the
manufacturing process of solid-state devices for quantum information processing.
However, one may also consider inducing such correlations
deliberately since, as we have shown,
correlated disorder can be far less detrimental to certain communication tasks
than uncorrelated disorder.
Overall, it should be easier to allow for that than to design a chain
with a very specific set of parameters, which demands a high degree of control.
Our work further promotes the study of quantum communication protocols
in disordered, asymmetric spin chains.
\section{Acknowledgments}
This work was partially supported by CNPq (Grant No. 152722/2016-5),
CAPES, FINEP, and FAPEAL (Brazilian agencies).
\begin{thebibliography}{66}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Cirac}\ \emph {et~al.}(1997)\citenamefont {Cirac},
\citenamefont {Zoller}, \citenamefont {Kimble},\ and\ \citenamefont
{Mabuchi}}]{cirac97}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont
{Cirac}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Zoller}},
\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont {Kimble}}, \ and\
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Mabuchi}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {78}},\ \bibinfo {pages} {3221} (\bibinfo {year}
{1997})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kimble}(2008)}]{kimble08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Kimble}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Nature}\ }\textbf {\bibinfo {volume} {453}},\ \bibinfo {pages} {1023}
(\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bose}(2003)}]{bose03}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bose}},\ }\href {\doibase 10.1103/PhysRevLett.91.207901} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {91}},\ \bibinfo {pages} {207901} (\bibinfo {year}
{2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Christandl}\ \emph {et~al.}(2004)\citenamefont
{Christandl}, \citenamefont {Datta}, \citenamefont {Ekert},\ and\
\citenamefont {Landahl}}]{christandl04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Christandl}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Datta}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ekert}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.~J.}\ \bibnamefont {Landahl}},\ }\href {\doibase
10.1103/PhysRevLett.92.187902} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages}
{187902} (\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Plenio}\ \emph {et~al.}(2004)\citenamefont {Plenio},
\citenamefont {Hartley},\ and\ \citenamefont {Eisert}}]{plenio04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hartley}}, \
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href
{http://stacks.iop.org/1367-2630/6/i=1/a=036} {\bibfield {journal} {\bibinfo
{journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume} {6}},\
\bibinfo {pages} {36} (\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Osborne}\ and\ \citenamefont
{Linden}(2004)}]{osborne04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Osborne}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Linden}},\ }\href {\doibase 10.1103/PhysRevA.69.052315} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{69}},\ \bibinfo {pages} {052315} (\bibinfo {year} {2004})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {W\'ojcik}\ \emph {et~al.}(2005)\citenamefont
{W\'ojcik}, \citenamefont {\L{}uczak}, \citenamefont
{Kurzy\ifmmode~\acute{n}\else \'{n}\fi{}ski}, \citenamefont {Grudka},
\citenamefont {Gdala},\ and\ \citenamefont {Bednarska}}]{wojcik05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{W\'ojcik}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {\L{}uczak}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Kurzy\ifmmode~\acute{n}\else \'{n}\fi{}ski}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Grudka}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Gdala}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Bednarska}},\ }\href {\doibase 10.1103/PhysRevA.72.034303}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {72}},\ \bibinfo {pages} {034303} (\bibinfo {year}
{2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {W\'ojcik}\ \emph {et~al.}(2007)\citenamefont
{W\'ojcik}, \citenamefont {\L{}uczak}, \citenamefont
{Kurzy\ifmmode~\acute{n}\else \'{n}\fi{}ski}, \citenamefont {Grudka},
\citenamefont {Gdala},\ and\ \citenamefont {Bednarska}}]{wojcik07}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{W\'ojcik}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {\L{}uczak}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Kurzy\ifmmode~\acute{n}\else \'{n}\fi{}ski}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Grudka}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Gdala}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Bednarska}},\ }\href {\doibase 10.1103/PhysRevA.75.022330}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {75}},\ \bibinfo {pages} {022330} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Li}\ \emph {et~al.}(2005)\citenamefont {Li},
\citenamefont {Shi}, \citenamefont {Chen}, \citenamefont {Song},\ and\
\citenamefont {Sun}}]{li05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Shi}}, \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Chen}}, \bibinfo {author}
{\bibfnamefont {Z.}~\bibnamefont {Song}}, \ and\ \bibinfo {author}
{\bibfnamefont {C.-P.}\ \bibnamefont {Sun}},\ }\href {\doibase
10.1103/PhysRevA.71.022301} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {022301}
(\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Huo}\ \emph {et~al.}(2008)\citenamefont {Huo},
\citenamefont {Li}, \citenamefont {Song},\ and\ \citenamefont {Sun}}]{huo08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~X.}\ \bibnamefont
{Huo}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {Z.}~\bibnamefont {Song}}, \ and\ \bibinfo {author}
{\bibfnamefont {C.~P.}\ \bibnamefont {Sun}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Europhysics Letters}\ }\textbf {\bibinfo
{volume} {84}},\ \bibinfo {pages} {30004} (\bibinfo {year}
{2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gualdi}\ \emph {et~al.}(2008)\citenamefont {Gualdi},
\citenamefont {Kostak}, \citenamefont {Marzoli},\ and\ \citenamefont
{Tombesi}}]{gualdi08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Gualdi}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Kostak}},
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Marzoli}}, \ and\
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Tombesi}},\ }\href
{\doibase 10.1103/PhysRevA.78.022325} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo
{pages} {022325} (\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Banchi}\ \emph {et~al.}(2010)\citenamefont {Banchi},
\citenamefont {Apollaro}, \citenamefont {Cuccoli}, \citenamefont {Vaia},\
and\ \citenamefont {Verrucchi}}]{banchi10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Banchi}}, \bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont
{Apollaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cuccoli}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vaia}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Verrucchi}},\ }\href {\doibase
10.1103/PhysRevA.82.052321} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {052321}
(\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Banchi}\ \emph {et~al.}(2011)\citenamefont {Banchi},
\citenamefont {Apollaro}, \citenamefont {Cuccoli}, \citenamefont {Vaia},\
and\ \citenamefont {Verrucchi}}]{banchi11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Banchi}}, \bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont
{Apollaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cuccoli}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vaia}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Verrucchi}},\ }\href
{http://stacks.iop.org/1367-2630/13/i=12/a=123006} {\bibfield {journal}
{\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume}
{13}},\ \bibinfo {pages} {123006} (\bibinfo {year} {2011})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Apollaro}\ \emph {et~al.}(2012)\citenamefont
{Apollaro}, \citenamefont {Banchi}, \citenamefont {Cuccoli}, \citenamefont
{Vaia},\ and\ \citenamefont {Verrucchi}}]{apollaro12}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.~G.}\
\bibnamefont {Apollaro}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Banchi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cuccoli}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Vaia}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Verrucchi}},\ }\href {\doibase
10.1103/PhysRevA.85.052319} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {052319}
(\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lorenzo}\ \emph {et~al.}(2013)\citenamefont
{Lorenzo}, \citenamefont {Apollaro}, \citenamefont {Sindona},\ and\
\citenamefont {Plastina}}]{lorenzo13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Lorenzo}}, \bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont
{Apollaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sindona}}, \
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Plastina}},\ }\href
{\doibase 10.1103/PhysRevA.87.042313} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo
{pages} {042313} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lorenzo}\ \emph {et~al.}(2015)\citenamefont
{Lorenzo}, \citenamefont {Apollaro}, \citenamefont {Paganelli}, \citenamefont
{Palma},\ and\ \citenamefont {Plastina}}]{lorenzo15}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Lorenzo}}, \bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont
{Apollaro}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Paganelli}},
\bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {Palma}}, \ and\
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Plastina}},\ }\href
{\doibase 10.1103/PhysRevA.91.042321} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo
{pages} {042321} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Almeida}\ \emph {et~al.}(2016)\citenamefont
{Almeida}, \citenamefont {Ciccarello}, \citenamefont {Apollaro},\ and\
\citenamefont {Souza}}]{almeida16}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~M.~A.}\
\bibnamefont {Almeida}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Ciccarello}}, \bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont
{Apollaro}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~M.~C.}\ \bibnamefont
{Souza}},\ }\href {\doibase 10.1103/PhysRevA.93.032310} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\
\bibinfo {pages} {032310} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Amico}\ \emph {et~al.}(2004)\citenamefont {Amico},
\citenamefont {Osterloh}, \citenamefont {Plastina}, \citenamefont {Fazio},\
and\ \citenamefont {Massimo~Palma}}]{amico04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Amico}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Osterloh}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Plastina}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Fazio}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Massimo~Palma}},\ }\href {\doibase
10.1103/PhysRevA.69.022304} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {022304}
(\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Plenio}\ and\ \citenamefont
{Semi\~ao}(2005)}]{plenio05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}}\ and\ \bibinfo {author} {\bibfnamefont {F.~L.}\ \bibnamefont
{Semi\~ao}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New
Journal of Physics}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {73}
(\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Apollaro}\ and\ \citenamefont
{Plastina}(2006)}]{apollaro06}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.~G.}\
\bibnamefont {Apollaro}}\ and\ \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Plastina}},\ }\href {\doibase 10.1103/PhysRevA.74.062316}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {74}},\ \bibinfo {pages} {062316} (\bibinfo {year}
{2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Plastina}\ and\ \citenamefont
{Apollaro}(2007)}]{plastina07}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Plastina}}\ and\ \bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont
{Apollaro}},\ }\href {\doibase 10.1103/PhysRevLett.99.177210} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {99}},\ \bibinfo {pages} {177210} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Apollaro}\ \emph {et~al.}(2008)\citenamefont
{Apollaro}, \citenamefont {Cuccoli}, \citenamefont {Fubini}, \citenamefont
{Plastina},\ and\ \citenamefont {Verrucchi}}]{apollaro08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Apollaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cuccoli}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fubini}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Plastina}}, \ and\ \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Verrucchi}},\ }\href {\doibase
10.1103/PhysRevA.77.062314} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {062314}
(\bibinfo {year} {2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Campos~Venuti}\ \emph {et~al.}(2007)\citenamefont
{Campos~Venuti}, \citenamefont {Giampaolo}, \citenamefont {Illuminati},\ and\
\citenamefont {Zanardi}}]{venuti07}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Campos~Venuti}}, \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{Giampaolo}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Illuminati}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Zanardi}},\ }\href {\doibase 10.1103/PhysRevA.76.052328} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{76}},\ \bibinfo {pages} {052328} (\bibinfo {year} {2007})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Cubitt}\ and\ \citenamefont
{Cirac}(2008)}]{cubitt08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~S.}\ \bibnamefont
{Cubitt}}\ and\ \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont
{Cirac}},\ }\href {\doibase 10.1103/PhysRevLett.100.180406} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {100}},\ \bibinfo {pages} {180406} (\bibinfo {year}
{2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Giampaolo}\ and\ \citenamefont
{Illuminati}(2009)}]{giampaolo09}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{Giampaolo}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Illuminati}},\ }\href {\doibase 10.1103/PhysRevA.80.050301} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{80}},\ \bibinfo {pages} {050301} (\bibinfo {year} {2009})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Giampaolo}\ and\ \citenamefont
{Illuminati}(2010)}]{giampaolo10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{Giampaolo}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Illuminati}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{New Journal of Physics}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo
{pages} {025019} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gualdi}\ \emph {et~al.}(2011)\citenamefont {Gualdi},
\citenamefont {Giampaolo},\ and\ \citenamefont {Illuminati}}]{gualdi11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Gualdi}}, \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{Giampaolo}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Illuminati}},\ }\href {\doibase 10.1103/PhysRevLett.106.050501} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {106}},\ \bibinfo {pages} {050501} (\bibinfo {year}
{2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Estarellas}\ \emph
{et~al.}(2017{\natexlab{a}})\citenamefont {Estarellas}, \citenamefont
{D'Amico},\ and\ \citenamefont {Spiller}}]{estarellas17}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont
{Estarellas}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {D'Amico}},
\ and\ \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Spiller}},\
}\href {\doibase 10.1103/PhysRevA.95.042335} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo
{pages} {042335} (\bibinfo {year} {2017}{\natexlab{a}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Estarellas}\ \emph
{et~al.}(2017{\natexlab{b}})\citenamefont {Estarellas}, \citenamefont
{D’Amico},\ and\ \citenamefont {Spiller}}]{estarellas17scirep}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont
{Estarellas}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {D’Amico}},
\ and\ \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Spiller}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific
Reports}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {42904}
(\bibinfo {year} {2017}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Almeida}\ \emph {et~al.}(2017)\citenamefont
{Almeida}, \citenamefont {de~Moura}, \citenamefont {Apollaro},\ and\
\citenamefont {Lyra}}]{almeida17-1}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~M.~A.}\
\bibnamefont {Almeida}}, \bibinfo {author} {\bibfnamefont {F.~A. B.~F.}\
\bibnamefont {de~Moura}}, \bibinfo {author} {\bibfnamefont {T.~J.~G.}\
\bibnamefont {Apollaro}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~L.}\
\bibnamefont {Lyra}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {arXiv:1707.05865 [quant-ph]}\ } (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Feder}(2006)}]{feder06}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont
{Feder}},\ }\href {\doibase 10.1103/PhysRevLett.97.180502} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {97}},\ \bibinfo {pages} {180502} (\bibinfo {year}
{2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {De~Chiara}\ \emph {et~al.}(2005)\citenamefont
{De~Chiara}, \citenamefont {Rossini}, \citenamefont {Montangero},\ and\
\citenamefont {Fazio}}]{dechiara05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{De~Chiara}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rossini}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Montangero}}, \ and\
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Fazio}},\ }\href
{\doibase 10.1103/PhysRevA.72.012323} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo
{pages} {012323} (\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zwick}\ \emph {et~al.}(2011)\citenamefont {Zwick},
\citenamefont {\'Alvarez}, \citenamefont {Stolze},\ and\ \citenamefont
{Osenda}}]{zwick11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Zwick}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont
{\'Alvarez}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Stolze}}, \
and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Osenda}},\ }\href
{\doibase 10.1103/PhysRevA.84.022311} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo
{pages} {022311} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zwick}\ \emph {et~al.}(2012)\citenamefont {Zwick},
\citenamefont {\'Alvarez}, \citenamefont {Stolze},\ and\ \citenamefont
{Osenda}}]{zwick12}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Zwick}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont
{\'Alvarez}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Stolze}}, \
and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Osenda}},\ }\href
{\doibase 10.1103/PhysRevA.85.012318} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo
{pages} {012318} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Paganelli}\ \emph {et~al.}(2013)\citenamefont
{Paganelli}, \citenamefont {Lorenzo}, \citenamefont {Apollaro}, \citenamefont
{Plastina},\ and\ \citenamefont {Giorgi}}]{paganelli13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Paganelli}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lorenzo}},
\bibinfo {author} {\bibfnamefont {T.~J.~G.}\ \bibnamefont {Apollaro}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Plastina}}, \ and\
\bibinfo {author} {\bibfnamefont {G.~L.}\ \bibnamefont {Giorgi}},\ }\href
{\doibase 10.1103/PhysRevA.87.062309} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo
{pages} {062309} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fitzsimons}\ and\ \citenamefont
{Twamley}(2005)}]{fitzsimons05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Fitzsimons}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Twamley}},\ }\href {\doibase 10.1103/PhysRevA.72.050301} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{72}},\ \bibinfo {pages} {050301} (\bibinfo {year} {2005})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Burgarth}\ and\ \citenamefont
{Bose}(2005)}]{burgarth05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Burgarth}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bose}},\ }\href {\doibase 10.1088/1367-2630/7/1/135} {\bibfield {journal}
{\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume}
{7}},\ \bibinfo {pages} {135} (\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Tsomokos}\ \emph {et~al.}(2007)\citenamefont
{Tsomokos}, \citenamefont {Hartmann}, \citenamefont {Huelga},\ and\
\citenamefont {Plenio}}]{tsomokos07}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~I.}\ \bibnamefont
{Tsomokos}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Hartmann}}, \bibinfo {author} {\bibfnamefont {S.~F.}\ \bibnamefont
{Huelga}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}},\ }\href {http://stacks.iop.org/1367-2630/9/i=3/a=079} {\bibfield
{journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo
{volume} {9}},\ \bibinfo {pages} {79} (\bibinfo {year} {2007})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Petrosyan}\ \emph {et~al.}(2010)\citenamefont
{Petrosyan}, \citenamefont {Nikolopoulos},\ and\ \citenamefont
{Lambropoulos}}]{petrosyan10}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Petrosyan}}, \bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont
{Nikolopoulos}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Lambropoulos}},\ }\href {\doibase 10.1103/PhysRevA.81.042307} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{81}},\ \bibinfo {pages} {042307} (\bibinfo {year} {2010})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Yao}\ \emph {et~al.}(2011)\citenamefont {Yao},
\citenamefont {Jiang}, \citenamefont {Gorshkov}, \citenamefont {Gong},
\citenamefont {Zhai}, \citenamefont {Duan},\ and\ \citenamefont
{Lukin}}]{yao11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~Y.}\ \bibnamefont
{Yao}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Jiang}}, \bibinfo
{author} {\bibfnamefont {A.~V.}\ \bibnamefont {Gorshkov}}, \bibinfo {author}
{\bibfnamefont {Z.-X.}\ \bibnamefont {Gong}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zhai}}, \bibinfo {author} {\bibfnamefont
{L.-M.}\ \bibnamefont {Duan}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.~D.}\ \bibnamefont {Lukin}},\ }\href {\doibase
10.1103/PhysRevLett.106.040505} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages}
{040505} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bruderer}\ \emph {et~al.}(2012)\citenamefont
{Bruderer}, \citenamefont {Franke}, \citenamefont {Ragg}, \citenamefont
{Belzig},\ and\ \citenamefont {Obreschkow}}]{bruderer12}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Bruderer}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Franke}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ragg}}, \bibinfo {author}
{\bibfnamefont {W.}~\bibnamefont {Belzig}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Obreschkow}},\ }\href {\doibase
10.1103/PhysRevA.85.022312} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {022312}
(\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kay}(2016)}]{kay16}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kay}},\ }\href {\doibase 10.1103/PhysRevA.93.042320} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\
\bibinfo {pages} {042320} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Anderson}(1958)}]{anderson58}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont
{Anderson}},\ }\href {\doibase 10.1103/PhysRev.109.1492} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev.}\ }\textbf {\bibinfo {volume}
{109}},\ \bibinfo {pages} {1492} (\bibinfo {year} {1958})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Abrahams}\ \emph {et~al.}(1979)\citenamefont
{Abrahams}, \citenamefont {Anderson}, \citenamefont {Licciardello},\ and\
\citenamefont {Ramakrishnan}}]{abrahams79}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Abrahams}}, \bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont
{Anderson}}, \bibinfo {author} {\bibfnamefont {D.~C.}\ \bibnamefont
{Licciardello}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~V.}\
\bibnamefont {Ramakrishnan}},\ }\href {\doibase 10.1103/PhysRevLett.42.673}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {42}},\ \bibinfo {pages} {673} (\bibinfo {year}
{1979})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Albanese}\ \emph {et~al.}(2004)\citenamefont
{Albanese}, \citenamefont {Christandl}, \citenamefont {Datta},\ and\
\citenamefont {Ekert}}]{albanese04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Albanese}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Christandl}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Datta}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ekert}},\ }\href
{\doibase 10.1103/PhysRevLett.93.230502} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {230502} (\bibinfo {year} {2004})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Flores}(1989)}]{flores89}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont
{Flores}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of Physics: Condensed Matter}\ }\textbf {\bibinfo {volume} {1}},\
\bibinfo {pages} {8471} (\bibinfo {year} {1989})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dunlap}\ \emph {et~al.}(1990)\citenamefont {Dunlap},
\citenamefont {Wu},\ and\ \citenamefont {Phillips}}]{dunlap90}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~H.}\ \bibnamefont
{Dunlap}}, \bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont {Wu}}, \
and\ \bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont {Phillips}},\
}\href {\doibase 10.1103/PhysRevLett.65.88} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo
{pages} {88} (\bibinfo {year} {1990})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Phillips}\ and\ \citenamefont
{Wu}(1991)}]{phillips91}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Phillips}}\ and\ \bibinfo {author} {\bibfnamefont {H.-L.}\ \bibnamefont
{Wu}},\ }\href {\doibase 10.1126/science.252.5014.1805} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {252}},\ \bibinfo
{pages} {1805} (\bibinfo {year} {1991})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {de~Moura}\ and\ \citenamefont
{Lyra}(1998)}]{demoura98}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A. B.~F.}\
\bibnamefont {de~Moura}}\ and\ \bibinfo {author} {\bibfnamefont {M.~L.}\
\bibnamefont {Lyra}},\ }\href {\doibase 10.1103/PhysRevLett.81.3735}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {81}},\ \bibinfo {pages} {3735} (\bibinfo {year}
{1998})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Izrailev}\ and\ \citenamefont
{Krokhin}(1999)}]{izrailev99}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont
{Izrailev}}\ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Krokhin}},\ }\href {\doibase 10.1103/PhysRevLett.82.4062} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {82}},\ \bibinfo {pages} {4062} (\bibinfo {year}
{1999})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kuhl}\ \emph {et~al.}(2000)\citenamefont {Kuhl},
\citenamefont {Izrailev}, \citenamefont {Krokhin},\ and\ \citenamefont
{Stöckmann}}]{kuhl00}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont
{Kuhl}}, \bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont {Izrailev}},
\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Krokhin}}, \ and\
\bibinfo {author} {\bibfnamefont {H.-J.}\ \bibnamefont {Stöckmann}},\ }\href
{\doibase 10.1063/1.127068} {\bibfield {journal} {\bibinfo {journal}
{Applied Physics Letters}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo
{pages} {633} (\bibinfo {year} {2000})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lima}\ \emph {et~al.}(2002)\citenamefont {Lima},
\citenamefont {Lyra}, \citenamefont {Nascimento},\ and\ \citenamefont
{de~Jesus}}]{lima02}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~P.~A.}\
\bibnamefont {Lima}}, \bibinfo {author} {\bibfnamefont {M.~L.}\ \bibnamefont
{Lyra}}, \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont
{Nascimento}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont
{de~Jesus}},\ }\href {\doibase 10.1103/PhysRevB.65.104416} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{65}},\ \bibinfo {pages} {104416} (\bibinfo {year} {2002})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {de~Moura}\ \emph {et~al.}(2002)\citenamefont
{de~Moura}, \citenamefont {Coutinho-Filho}, \citenamefont {Raposo},\ and\
\citenamefont {Lyra}}]{demoura02}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A. B.~F.}\
\bibnamefont {de~Moura}}, \bibinfo {author} {\bibfnamefont {M.~D.}\
\bibnamefont {Coutinho-Filho}}, \bibinfo {author} {\bibfnamefont {E.~P.}\
\bibnamefont {Raposo}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~L.}\
\bibnamefont {Lyra}},\ }\href {\doibase 10.1103/PhysRevB.66.014418}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo
{volume} {66}},\ \bibinfo {pages} {014418} (\bibinfo {year}
{2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nunes}\ \emph {et~al.}(2016)\citenamefont {Nunes},
\citenamefont {Neto},\ and\ \citenamefont {de~Moura}}]{nunes16}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Nunes}}, \bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont {Neto}}, \
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {de~Moura}},\ }\href
{\doibase 10.1016/j.jmmm.2016.03.026} {\bibfield {journal}
{\bibinfo {journal} {Journal of Magnetism and Magnetic Materials}\ }\textbf
{\bibinfo {volume} {410}},\ \bibinfo {pages} {165} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {de~Moura}\ \emph {et~al.}(2003)\citenamefont
{de~Moura}, \citenamefont {Coutinho-Filho}, \citenamefont {Raposo},\ and\
\citenamefont {Lyra}}]{demoura03}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A. B.~F.}\
\bibnamefont {de~Moura}}, \bibinfo {author} {\bibfnamefont {M.~D.}\
\bibnamefont {Coutinho-Filho}}, \bibinfo {author} {\bibfnamefont {E.~P.}\
\bibnamefont {Raposo}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~L.}\
\bibnamefont {Lyra}},\ }\href {\doibase 10.1103/PhysRevB.68.012202}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo
{volume} {68}},\ \bibinfo {pages} {012202} (\bibinfo {year}
{2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dom\'{\i}nguez-Adame}\ \emph
{et~al.}(2003)\citenamefont {Dom\'{\i}nguez-Adame}, \citenamefont {Malyshev},
\citenamefont {de~Moura},\ and\ \citenamefont {Lyra}}]{adame03}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Dom\'{\i}nguez-Adame}}, \bibinfo {author} {\bibfnamefont {V.~A.}\
\bibnamefont {Malyshev}}, \bibinfo {author} {\bibfnamefont {F.~A. B.~F.}\
\bibnamefont {de~Moura}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~L.}\
\bibnamefont {Lyra}},\ }\href {\doibase 10.1103/PhysRevLett.91.197402}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {91}},\ \bibinfo {pages} {197402} (\bibinfo {year}
{2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Herrera-Gonz\'alez}\ \emph
{et~al.}(2014)\citenamefont {Herrera-Gonz\'alez}, \citenamefont
{M\'endez-Berm\'udez},\ and\ \citenamefont {Izrailev}}]{gonzales14}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~F.}\ \bibnamefont
{Herrera-Gonz\'alez}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont
{M\'endez-Berm\'udez}}, \ and\ \bibinfo {author} {\bibfnamefont {F.~M.}\
\bibnamefont {Izrailev}},\ }\href {\doibase 10.1103/PhysRevE.90.042115}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo
{volume} {90}},\ \bibinfo {pages} {042115} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lam}\ and\ \citenamefont {Sander}(1992)}]{lam92}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.-H.}\ \bibnamefont
{Lam}}\ and\ \bibinfo {author} {\bibfnamefont {L.~M.}\ \bibnamefont
{Sander}},\ }\href {\doibase 10.1103/PhysRevLett.69.3338} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {69}},\ \bibinfo {pages} {3338} (\bibinfo {year}
{1992})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Peng}\ \emph {et~al.}(1992)\citenamefont {Peng},
\citenamefont {Buldyrev}, \citenamefont {Goldberger}, \citenamefont {Havlin},
\citenamefont {Sciortino}, \citenamefont {Simons},\ and\ \citenamefont
{Stanley}}]{peng92}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.-K.}\ \bibnamefont
{Peng}}, \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Buldyrev}},
\bibinfo {author} {\bibfnamefont {A.~L.}\ \bibnamefont {Goldberger}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Havlin}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Sciortino}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Simons}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.~E.}\ \bibnamefont {Stanley}},\ }\href {\doibase
10.1038/356168a0} {\bibfield {journal} {\bibinfo {journal} {Nature}\
}\textbf {\bibinfo {volume} {356}},\ \bibinfo {pages} {168} (\bibinfo {year}
{1992})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carreras}\ \emph {et~al.}(1998)\citenamefont
{Carreras}, \citenamefont {van Milligen}, \citenamefont {Pedrosa},
\citenamefont {Balb\'{\i}n}, \citenamefont {Hidalgo}, \citenamefont {Newman},
\citenamefont {S\'anchez}, \citenamefont {Frances}, \citenamefont
{Garc\'{\i}a-Cort\'es}, \citenamefont {Bleuel}, \citenamefont {Endler},
\citenamefont {Davies},\ and\ \citenamefont {Matthews}}]{carreras98}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~A.}\ \bibnamefont
{Carreras}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {van
Milligen}}, \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Pedrosa}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Balb\'{\i}n}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Hidalgo}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Newman}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {S\'anchez}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Frances}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Garc\'{\i}a-Cort\'es}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Bleuel}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Endler}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Davies}}, \ and\ \bibinfo {author} {\bibfnamefont {G.~F.}\
\bibnamefont {Matthews}},\ }\href {\doibase 10.1103/PhysRevLett.80.4438}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {80}},\ \bibinfo {pages} {4438} (\bibinfo {year}
{1998})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carpena}\ \emph {et~al.}(2002)\citenamefont
{Carpena}, \citenamefont {Bernaola-Galvan}, \citenamefont {Ivanov},\ and\
\citenamefont {Stanley}}]{carpena02}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Carpena}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Bernaola-Galvan}}, \bibinfo {author} {\bibfnamefont {P.~C.}\ \bibnamefont
{Ivanov}}, \ and\ \bibinfo {author} {\bibfnamefont {H.~E.}\ \bibnamefont
{Stanley}},\ }\href {\doibase 10.1038/nature00948} {\bibfield {journal}
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {418}},\ \bibinfo
{pages} {955} (\bibinfo {year} {2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Feder}(1988)}]{fractalsbook}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Feder}},\ }\href@noop {} {\emph {\bibinfo {title} {Fractals}}}\ (\bibinfo
{publisher} {Plenum Press, New York},\ \bibinfo {year} {1988})\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Paczuski}\ \emph {et~al.}(1996)\citenamefont
{Paczuski}, \citenamefont {Maslov},\ and\ \citenamefont {Bak}}]{bak96}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Paczuski}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Maslov}}, \
and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bak}},\ }\href
{\doibase 10.1103/PhysRevE.53.414} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. E}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {414}
(\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ciccarello}(2011)}]{ciccarello11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Ciccarello}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {043802}
(\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Thouless}(1974)}]{thouless74}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Thouless}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physics Reports}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {93}
(\bibinfo {year} {1974})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kempe}(2003)}]{kemperev}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Kempe}},\ }\href {\doibase 10.1080/00107151031000110776} {\bibfield
{journal} {\bibinfo {journal} {Contemporary Physics}\ }\textbf {\bibinfo
{volume} {44}},\ \bibinfo {pages} {307} (\bibinfo {year} {2003})}\BibitemShut
{NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title{Invariant manifolds and equilibrium states for non-uniformly hyperbolic horseshoes}
\author{Renaud Leplaideur\footnote{D\'epartement de math\'ematiques, UMR 6205, Universit\'e de Bretagne Occidentale, 29285 Brest Cedex, France} and Isabel Rios\footnote{Instituto de Matem\'atica, Universidade Federal Fluminense, Rua M\'ario Santos Braga s/n, Niter\'oi, RJ 24.020-140, Brasil} \thanks{This work was partially supported by CNRS-CNPq, UBO, PRONEX-Dynamical Systems, FAPERJ and PROPP-UFF.}
}
\maketitle
\begin{abstract}
In this paper we consider horseshoes containing an orbit of homoclinic tangency accumulated by periodic points. We prove a version of the Invariant Manifolds Theorem, construct finite Markov partitions and use them to prove the existence and uniqueness of equilibrium states associated to H\"older continuous potentials.
\end{abstract}
\section{Introduction and statement of results}
\input intro2.tex
\section{Horseshoes with internal tangencies}\label{sec-horse}
\input horse.tex
\section{Geometric properties of the map $f$}\label{sec-techn-lem}
\input lem-tek.tex
\section{Kergodic charts}\label{kergodic-charts}
\input kergodic.tex
\section{Invariant manifolds and some of their properties}\label{foliations}
\input foliations.tex
\section{Markov partitions and equilibrium states}\label{sec-thermo}
\input thermo2.tex
\end{document}
\begin{document}
\title{Establishing the equivalence between Szegedy's and \\ coined quantum walks using the staggered model}
\author{Renato Portugal\footnote{[email protected]} \\
\\
{\small National Laboratory of Scientific Computing - LNCC} \\
{\small Av. Get\'{u}lio Vargas 333, Petr\'{o}polis, RJ, 25651-075, Brazil}
}
\maketitle
\begin{abstract}
Coined Quantum Walks (QWs) are being used in many contexts with the goal of understanding quantum systems and building quantum algorithms for quantum computers. Alternative models such as Szegedy's and continuous-time QWs were proposed taking advantage of the fact that quantum theory seems to allow different quantized versions based on the same classical model, in this case, the classical random walk. In this work, we show the conditions upon which coined QWs are equivalent to Szegedy's QWs. Those QW models have in common a large class of instances, in the sense that the evolution operators are equal when we convert the graph on which the coined QW takes place into a bipartite graph on which Szegedy's QW takes place, and vice versa. We also show that the abstract search algorithm using the coined QW model can be cast into Szegedy's searching framework using bipartite graphs with sinks.
\end{abstract}
\section{Introduction}
The discrete-time coined QW on the line was proposed in the early 1990s in Ref.~\cite{Aharonov:1993}, and is one of the first quantization models of classical random walks. The generalization for regular graphs was proposed in Ref.~\cite{Aharonov:2000}. Early algorithms based on coined QWs with an advantage over their classical counterparts were obtained for the element distinctness problem~\cite{Ambainis:2004} and for searching a marked node in a hypercube~\cite{Shenvi:2003}. Many important results were obtained about their asymptotic limit~\cite{Kon02}, localization~\cite{IKK04}, universality~\cite{LCETK10}, and many others, as described in reviews~\cite{Ven12,Kon08,Kendon:2007}. Many experimental proposals were described~\cite{Travaglione2002,sanders2003quantum,MPO15}, and experimental implementations were performed~\cite{Karski2009,Zahringer2010,Schreiber2010}.
Using a different quantization procedure, Szegedy~\cite{Szegedy:2004} proposed a new coinless discrete-time QW model on bipartite graphs and was able to provide us with a natural definition of quantum hitting time. Szegedy also developed QW-based search algorithms, which can detect the presence of a marked vertex at a hitting time that is quadratically smaller than the classical average hitting time on ergodic Markov chains. Szegedy's model was also used for the spatial search problem, that is, for finding the location of a marked vertex in a graph~\cite{Magniez:2011,Krovi:2010}, and for searching triangles~\cite{mss07}.
The staggered QW model~\cite{PSFG15} plays an important role in connecting Szegedy's and coined QWs. Ref.~\cite{Meyer96} analyzed a version of quantum cellular automata that can be converted into a one-dimensional staggered QW, which is equivalent to a generalized version of the coined QW on the line, as shown in Refs.~\cite{HKS05,PBF15}. Attempts to obtain a staggered version of QWs for the two-dimensional lattice have appeared in Refs.~\cite{Patel05,Falk:2013}, but Ref.~\cite{PSFG15} showed that the graph considered in those references is a degree-6 crossed lattice, which is not planar. Ref.~\cite{PSFG15} obtained a formulation of staggered QWs on generic graphs, and showed that Szegedy's framework is a subcase of the staggered QW model by using the line graph of the bipartite graph employed in Szegedy's model.
In this work, we characterize which coined QWs can be cast into Szegedy's framework, and which Szegedy's QWs can be converted into the standard coined QW formalism. In the first direction, the shift operator of the coined QW must be Hermitian and the coin must be an orthogonal reflection, which is a unitary and Hermitian operator with special properties in terms of orthogonality of the $(+1)$-eigenvectors. The class of orthogonal reflections includes the Grover and the Hadamard coins. In the other direction, the bipartite graph on which Szegedy's QW takes place must have a special kind of regularity: the degree of the vertices in one of the disjoint sets of vertices must be 2, and the weights associated with the edges incident on those vertices must be equal. Those results show that the Szegedy and the coined QW models share a large class of instances. The staggered QW model bridges the coined and Szegedy's models.
Szegedy's and coined models appear to employ different methods for searching marked vertices. We show that those methods are strongly related. A remarkable searching method based on coined QWs is the abstract search algorithm~\cite{Ambainis:2005,Portugal:book}, which uses coin $(-I)$ on the marked vertices and the Grover coin on the remaining ones. We show that this method can be cast into Szegedy's searching framework on bipartite graphs with sinks. In the other direction, under some assumptions on the stochastic matrix of the bipartite graph, Szegedy's searching framework can be converted into an equivalent searching method using coined QWs.
The structure of the paper is as follows. In Sec.~\ref{sec:MD}, we present formal definitions of the flip-flop coined QW model on regular and non-regular graphs, Szegedy's QW model on bipartite graphs, the staggered QW model, and important concepts that are employed throughout the work. In Sec.~\ref{sec:MR}, we prove two theorems which connect Szegedy's QWs with coined QWs on regular graphs. In Sec.~\ref{sec:GNR}, we extend the connection for non-regular graphs and give an example. The proofs are left to the Appendix. In Sec.~\ref{sec:searching}, we address the equivalence between the abstract search algorithm using coined QWs and Szegedy's searching framework. In Sec.~\ref{sec:conc}, we draw our conclusions.
\section{Main Definitions}\label{sec:MD}
Let $\Gamma(V,E)$ be a simple graph with vertex set $V$ and edge set $E$ with cardinalities $|V|$ and $|E|$, respectively. We associate the set of vertices, which represents the classical positions, with the vectors of an orthonormal basis of the Hilbert space ${\cal H}^{|V|}$. In the coined QW model on $d$-regular graphs, the total Hilbert space is ${\cal H}^{|V|}\otimes {\cal H}^{d}$, and the walker has $d$ classical directions in which to move~\cite{Aharonov:2000}. The edges of simple graphs are non-directed, but in many cases we have to consider a non-directed edge as equivalent to two superposed opposite arrows.
\begin{definition}\label{def:coinedQW}
The \textbf{standard flip-flop coined QW} on a $d$-regular graph $\Gamma(V,E)$ associated with Hilbert space ${\cal H}^{d\,|V|}$ is driven by a unitary operator the form of which is
\begin{equation}
U \,=\, S\,(I\otimes C),
\end{equation}
where $C$ is a $d$-dimensional unitary operator (coin), $I$ is the $|V|$-dimensional identity operator, and $S$ is the shift operator, which permutes the vectors of the computational basis of ${\cal H}^{d\,|V|}$ and satisfies $S^2=I$.
\end{definition}
We make three observations that complement the definition: First, the computational basis of ${\cal H}^{d\,|V|}$ is $\{\ket{v}\ket{j}:\,v\in V,\,0\le j<d\}$, and the action of the \textbf{shift operator} on a vector of the computational basis is
\begin{equation}\label{def_S}
S \ket{v}\ket{j} = \ket{v'}\ket{j'}, \,\forall v\in V, \,0\le j<d,
\end{equation}
where vertices $v$ and $v'$ are adjacent. If the walker is on vertex $v$, direction $j$ points to $v'$ ($j$ is the label of the directed edge from $v$ to $v'$). In the flip-flop case $(S^2=I)$, we have $S \ket{v'}\ket{j'} = \ket{v}\ket{j}$, which means that if the walker is on vertex $v'$, direction $j'$ points back to $v$ ($j'$ is the label of the directed edge from $v'$ to $v$). Second, in $d$-regular graphs with chromatic index equal to $d$, it is possible to find a new shift operator $S'$ similar to $S$ such that $S' \ket{v}\ket{j} = \ket{v'}\ket{j}$, $\forall v\in V$, that is $S'$ does not change the coin value. In this case, if label $j$ points from $v$ to $v'$, the same label $j$ points from $v'$ to $v$. Third, for any discrete-time QW model, if $\ket{\psi_0}$ is the initial state, $U^t\ket{\psi_0}$ is the QW state at step $t$, where $t$ is a non-negative integer. It is interesting to avoid intermediate measurements to take full advantage of the quantum interference.
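As a concrete numerical sketch (an illustration added here, not part of the original text), the flip-flop construction can be checked on the cycle $C_4$ with the Hadamard coin; the direction labeling below (0 pointing to the right neighbor, 1 to the left) is an assumption made for this example.

```python
import numpy as np

N, d = 4, 2  # cycle C_4; each vertex has d = 2 directions

# Basis ordering |v>|j> -> index d*v + j.
# Flip-flop shift on the cycle: S|v,0> = |v+1,1> and S|v,1> = |v-1,0>,
# so the walker arrives at the neighbor pointing back (the coin value flips).
S = np.zeros((d * N, d * N))
for v in range(N):
    S[d * ((v + 1) % N) + 1, d * v + 0] = 1
    S[d * ((v - 1) % N) + 0, d * v + 1] = 1

C = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin
U = S @ np.kron(np.eye(N), C)                 # U = S (I x C)

assert np.allclose(S @ S, np.eye(d * N))           # flip-flop property S^2 = I
assert np.allclose(U.conj().T @ U, np.eye(d * N))  # U is unitary
```

Because the walker arriving at $v'$ points back to $v$, applying $S$ twice returns every basis state to itself, which is exactly the flip-flop condition.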
Definition~\ref{def:coinedQW} does not use the most general shift operator. However, the flip-flop shift operator seems to be the most interesting choice for two reasons: First, it is the one that provides the best speedup in spatial search algorithms~\cite{Ambainis:2005}. Second, alternative definitions employed in the literature use information that is external to the graph, such as going to the right, left, up, or down on the two-dimensional lattice. It is not fair to compare such a QW with classical random walks on the same graph, because the latter do not use that kind of external information.
The extension of Definition~\ref{def:coinedQW} for non-regular graphs is obtained by noticing that $I\otimes C$ is a direct sum of $|V|$ $d$-dimensional matrices, all of them equal to $C$.
\begin{definition}\label{def:nonregularQW}
The \textbf{non-regular flip-flop coined QW} on a graph $\Gamma(V,E)$ associated with Hilbert space ${\cal H}^{2\,|E|}$ is driven by a unitary operator the form of which is
\begin{equation}
U \,=\, S\,C',
\end{equation}
where $C'$ is a direct sum of $|V|$ matrices with dimensions $d_1$, ..., $d_{|V|}$ such that $d_v$ is the degree of vertex $v$, and $S$ is the shift operator, which permutes the vectors of the computational basis of ${\cal H}^{2\,|E|}$ and satisfies $S^2=I$.
\end{definition}
In the non-regular case, the computational basis of ${\cal H}^{2\,|E|}$ is $\{\ket{v,j}: v\in V, \,0\le j<d_v\}$, and the action of the shift operator on a vector of the computational basis is
\begin{equation}\label{def_S2}
S \ket{v,j} = \ket{v',j'}, \,\forall v\in V, \,0\le j<d_v,
\end{equation}
where vertices $v$ and ${v'}$ are adjacent, label $j$ points from $v$ to ${v'}$, and label $j'$ points from ${v'}$ to $v$. Notice that $\ket{v,j}$ is a notation for the basis vectors that cannot be written as $\ket{v}\otimes\ket{j}$ unless the graph is regular. The order of the basis vectors must be consistent with the fact that $C'$ is a direct sum of $|V|$ matrices. The order is $\ket{v_1,0}$, ..., $\ket{v_1,d_1-1}$, $\ket{v_2,0}$, ..., $\ket{v_2,d_2-1}$, etc. An example of non-regular flip-flop coined QW is given in Sec.~\ref{sec:GNR} with labels for coin directions and vertices in the graph $\Gamma$ of Fig.~\ref{fig:example6}.
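The direct-sum structure of $C'$ can be sketched numerically (an illustration added here, not from the original text) on a small non-regular graph: a star with one center of degree 3 and three leaves of degree 1, with the Grover coin on the center and the trivial $1\times 1$ coin on each leaf. The vertex and direction labels are assumptions made for this example.

```python
import numpy as np

# Star graph: vertex 0 (center, degree 3) joined to leaves 1, 2, 3.
degrees = [3, 1, 1, 1]
offset = np.cumsum([0] + degrees)   # start index of each vertex block
dim = offset[-1]                    # dim = 2|E| = 6

def idx(v, j):
    """Index of basis vector |v, j> in the direct-sum ordering."""
    return offset[v] + j

# Flip-flop shift: direction j at the center points to leaf j+1,
# and the single direction 0 at each leaf points back to the center.
S = np.zeros((dim, dim))
for j in range(3):
    S[idx(j + 1, 0), idx(0, j)] = 1
    S[idx(0, j), idx(j + 1, 0)] = 1

# C' = Grover coin (3x3) on the center, direct-summed with 1x1 coins on leaves.
G3 = 2 * np.full((3, 3), 1 / 3) - np.eye(3)
Cp = np.zeros((dim, dim))
Cp[:3, :3] = G3
Cp[3:, 3:] = np.eye(3)

U = S @ Cp
assert np.allclose(S @ S, np.eye(dim))
assert np.allclose(U.conj().T @ U, np.eye(dim))
```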
If we associate two different coins (both $d$-dimensional matrices) with two distinct vertices of a $d$-regular graph, the QW is non-regular because $C'$ cannot be factorized and cast into the form $I\otimes C$.
A multigraph $\Gamma(V,E)$ is a generalization of the concept of a simple graph that allows multiple edges between two vertices. Graphs with loops can be added to this class. The results obtained for simple graphs can be straightforwardly extended to multigraphs, usually at the cost of overburdening the notation. We will avoid the use of multigraphs whenever possible.
Let us define an alternative QW model on a bipartite graph known as Szegedy's model~\cite{Szegedy:2004}. Consider a connected bipartite graph $\Gamma(X,Y,E)$, where $X,Y$ are disjoint sets of vertices and $E$ is the set of non-directed edges. Let
\begin{equation}\label{biadmatrix}
\left(\begin{array}[]{cc}
0 & A \\
A^T & 0
\end{array}\right)
\end{equation}
be the biadjacency matrix of $\Gamma(X,Y,E)$. Using $A$, define $P$ as a probabilistic map from $X$ to $Y$ with entries $p_{xy}$. Using $A^T$, define $Q$ as a probabilistic map from $Y$ to $X$ with entries $q_{yx}$. If $P$ is an $m\times n$ matrix, $Q$ will be an $n\times m$ matrix. Both are right-stochastic, that is, each row sums to 1. Using $P$ and $Q$, it is possible to define unit vectors
\begin{eqnarray}
\ket{\phi_x} &=& \sum_{y\in Y} \sqrt{p_{x y}}\,\textrm{e}^{i\theta_{xy}} \, \ket{x,y}, \label{ht_phi_x} \\
\ket{\psi_y} &=& \sum_{x\in X} \sqrt{q_{y x}}\,\textrm{e}^{i\theta'_{xy}} \, \ket{x,y}, \label{ht_psi_y}
\end{eqnarray}
that have the following properties: $\braket{\phi_x}{\phi_{x'}}=\delta_{xx'}$ and $\braket{\psi_y}{\psi_{y'}}=\delta_{yy'}$. In Szegedy's original definition, $\theta_{xy}=\theta'_{xy}=0$. We call \textbf{extended Szegedy's QW} the version that allows nonzero angles.
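The vectors $\ket{\phi_x}$ and $\ket{\psi_y}$ and their orthonormality can be checked numerically (a sketch added here, not part of the original text), assuming for concreteness the complete bipartite graph $K_{2,3}$ with uniform stochastic maps and $\theta_{xy}=\theta'_{xy}=0$.

```python
import numpy as np

m, n = 2, 3                   # |X| = 2, |Y| = 3, complete bipartite K_{2,3}
P = np.full((m, n), 1 / n)    # right-stochastic map X -> Y (uniform)
Q = np.full((n, m), 1 / m)    # right-stochastic map Y -> X (uniform)

# |phi_x> = sum_y sqrt(p_xy) |x,y> and |psi_y> = sum_x sqrt(q_yx) |x,y>,
# with all phases zero (Szegedy's original choice); index ordering x*n + y.
phi = [np.kron(np.eye(m)[x], np.sqrt(P[x])) for x in range(m)]
psi = [np.kron(np.sqrt(Q[y]), np.eye(n)[y]) for y in range(n)]

# <phi_x|phi_x'> = delta_xx' and <psi_y|psi_y'> = delta_yy'
assert np.allclose([[a @ b for b in phi] for a in phi], np.eye(m))
assert np.allclose([[a @ b for b in psi] for a in psi], np.eye(n))
```

The orthonormality within each family follows from the rows of $P$ and $Q$ summing to 1, regardless of the particular stochastic maps chosen.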
\begin{definition}\label{def:SzegedyQW}
\textbf{Szegedy's QW} on a bipartite graph $\Gamma(X,Y,E)$ with biadjacency matrix (\ref{biadmatrix}) is defined on a Hilbert space ${\cal H}^{mn} = {\cal H}^{m}\otimes {\cal H}^{n}$, where $m=|X|$ and $n=|Y|$, the computational basis of which is $\big\{\ket{x,y}: x\in X, y\in Y\big\}$.
The QW is driven by the unitary operator
\begin{equation}\label{ht_U_ev}
W \,=\, R_1 \, R_0,
\end{equation}
where
\begin{eqnarray}
R_0 &=& 2\sum_{x\in X} \ket{\phi_x}\bra{\phi_x} - I, \label{ht_RA}\\
R_1 &=& 2\sum_{y\in Y} \ket{\psi_y}\bra{\psi_y} - I. \label{ht_RB}
\end{eqnarray}
\end{definition}
Notice that operators $R_0$ and $R_1$ are unitary and Hermitian ($R_0^2=R_1^2=I$).
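A minimal numerical sketch of the full evolution operator (added here as an illustration, not from the original text), again assuming $K_{2,3}$ with uniform stochastic maps:

```python
import numpy as np

m, n = 2, 3
P = np.full((m, n), 1 / n)   # right-stochastic X -> Y
Q = np.full((n, m), 1 / m)   # right-stochastic Y -> X
phi = [np.kron(np.eye(m)[x], np.sqrt(P[x])) for x in range(m)]
psi = [np.kron(np.sqrt(Q[y]), np.eye(n)[y]) for y in range(n)]

# R_0 = 2 sum_x |phi_x><phi_x| - I,  R_1 = 2 sum_y |psi_y><psi_y| - I
I = np.eye(m * n)
R0 = 2 * sum(np.outer(v, v) for v in phi) - I
R1 = 2 * sum(np.outer(v, v) for v in psi) - I
W = R1 @ R0                  # Szegedy's evolution operator

assert np.allclose(R0 @ R0, I) and np.allclose(R0, R0.T)  # unitary, Hermitian
assert np.allclose(R1 @ R1, I) and np.allclose(R1, R1.T)
assert np.allclose(W.T @ W, I)                            # W is unitary
```

Since $R_0=2\Pi-I$ with $\Pi$ a projector (the $\ket{\phi_x}$ are orthonormal), $R_0^2=I$ holds automatically, and likewise for $R_1$.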
Let ${\cal H}^{|V|}$ be the Hilbert space associated with a graph $\Gamma(V,E)$, the vertices of which are labeled by the vectors of the computational basis. If $U$ is unitary and Hermitian in ${\cal H}^{|V|}$, it can be written as
\begin{equation}
U \,=\,\sum_x \ket{\psi_x^+}\bra{\psi_x^+} - \sum_y \ket{\psi_y^-}\bra{\psi_y^-},
\end{equation}
where the set of vectors $\ket{\psi_x^+}$ is an orthonormal basis of the $(+1)$-eigenspace, and the set of vectors $\ket{\psi_y^-}$ is an orthonormal basis of the $(-1)$-eigenspace. Using that $\sum_x \ket{\psi_x^+}\bra{\psi_x^+} + \sum_y \ket{\psi_y^-}\bra{\psi_y^-} = I$, we obtain
\begin{equation}\label{U_or}
U \,=\,2\sum_x\ket{\psi_x^+}\bra{\psi_x^+} - I.
\end{equation}
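This decomposition can be verified numerically (a sketch added here, not part of the original text) for a randomly generated unitary and Hermitian operator; the dimension and eigenvalue multiplicities below are arbitrary choices for this example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Build a random unitary-and-Hermitian U = V diag(+/-1) V^T with V orthogonal.
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
U = V @ np.diag([1, 1, 1, -1, -1]) @ V.T

# Recover an orthonormal basis of the (+1)-eigenspace ...
vals, vecs = np.linalg.eigh(U)
plus = vecs[:, vals > 0]

# ... and check Eq. (12): U = 2 sum_x |psi_x^+><psi_x^+| - I.
assert np.allclose(U, 2 * plus @ plus.T - np.eye(5))
```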
We want to define a special class of reflection operators $U$ associated with a graph $\Gamma(V,E)$ with the following properties: The $(+1)$-eigenvectors $\ket{\psi_x^+}$ must have non-overlapping nonzero entries, and the sum of those eigenvectors must have no zero entries in the orthonormal basis associated with the vertices of $\Gamma(V,E)$. Each vector $\ket{\psi_x^+}$ forms a clique in $\Gamma(V,E)$ because the vertices associated with nonzero entries of $\ket{\psi_x^+}$ for a fixed $x$ are adjacent. The union of all cliques must be an induced subgraph of $\Gamma(V,E)$. This subgraph is a disconnected union of cliques in general, except when $U$ has only one $(+1)$-eigenvector; in this case, the subgraph and $\Gamma(V,E)$ must be the complete graph.
\begin{definition}\label{def:orthrefl}
A unitary and Hermitian operator $U$ in ${\cal H}^{|V|}$ given by Eq.~(\ref{U_or}) is called an \textbf{orthogonal reflection} of a graph $\Gamma(V,E)$ if there is a complete orthonormal set of $(+1)$-eigenvectors $\ket{\psi_x^+}$ in the orthonormal basis associated with the vertices of the graph obeying the following properties: (1)~if the $i$-th entry of $\ket{\psi_x^+}$ for a fixed $x$ is nonzero, the $i$-th entries of the other $(+1)$-eigenvectors are zero, and (2)~vector $\sum_{x} \ket{\psi_x^+}$ has no zero entries.
\end{definition}
For example, the identity operator $I$ is an orthogonal reflection because the canonical computational basis $\big\{\ket{\psi_x^+}=\ket{x}:\,0\le x<|V|\big\}$ obeys properties (1) and (2). Those $(+1)$-eigenvectors will be used to define a tessellation. $(-I)$ is not an orthogonal reflection because no set of $(+1)$-eigenvectors obeys property (2).
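Properties (1) and (2) can be checked mechanically. The helper below is a hypothetical sketch (not from the original text), assuming the $(+1)$-eigenvectors are supplied as the rows of a matrix in the vertex basis.

```python
import numpy as np

def is_orthogonal_reflection_basis(plus_vecs, tol=1e-12):
    """Check properties (1) and (2) of the orthogonal-reflection definition
    for orthonormal (+1)-eigenvectors given as rows of `plus_vecs`."""
    supports = np.abs(plus_vecs) > tol
    non_overlapping = (supports.sum(axis=0) <= 1).all()  # property (1)
    covers_all = (supports.sum(axis=0) >= 1).all()       # property (2)
    return bool(non_overlapping and covers_all)

# I is an orthogonal reflection: the canonical basis obeys (1) and (2).
assert is_orthogonal_reflection_basis(np.eye(4))

# -I has no (+1)-eigenvectors at all, so property (2) fails.
assert not is_orthogonal_reflection_basis(np.zeros((0, 4)))
```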
\begin{definition}
A \textbf{polygon} of $\Gamma(V,E)$ induced by vector $\ket{\psi}\in {\cal H}^{|V|}$ is a clique. Two vertices of $\Gamma(V,E)$ are adjacent if the corresponding entries of $\ket{\psi}$ in the basis associated with $\Gamma(V,E)$ are non-zero. A vertex belongs to the polygon if and only if its corresponding entry in $\ket{\psi}$ is non-zero. An edge belongs to a polygon if and only if the polygon contains the endpoints of the edge.
\end{definition}
\begin{definition}
A \textbf{tessellation} induced by an orthogonal reflection $U$ of $\Gamma(V,E)$ is the union of the polygons induced by the $(+1)$-eigenvectors $\ket{\psi_x^+}$ of $U$ described in Definition~\ref{def:orthrefl}.
\end{definition}
One of the simplest examples of orthogonal reflection is the Grover operator $G=2\ket{\psi}\bra{\psi}-I$, where $\ket{\psi}$ is the normalized uniform superposition of the vectors of the computational basis~\cite{Portugal:book}. Notice that $G^2=I$, and $G$ has only one eigenvector with eigenvalue $(+1)$, which has no zero entries. The complete graph is induced by $G$ because $\ket{\psi}$ is a superposition of all vertices. If an orthogonal reflection is given, the induced graph can be straightforwardly obtained. If a graph $\Gamma(V,E)$ is given, an orthogonal reflection induces a tessellation of this graph only if the graph contains all necessary edges; the cliques induced by the $(+1)$-eigenvectors must be induced subgraphs of $\Gamma(V,E)$. For example, Fig.~\ref{fig:example1} depicts an orthogonal reflection $U$, its corresponding graph $\Gamma_U$, and the induced tessellation in blue. In the general case, polygons of a tessellation do not overlap (property~(1) of Definition~\ref{def:orthrefl}) and a tessellation covers all vertices (property~(2) of Definition~\ref{def:orthrefl}). A tessellation does not need to cover all edges of a predefined graph, unless the graph is induced by an orthogonal reflection such as the one in Fig.~\ref{fig:example1}.
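The stated properties of the Grover operator can be confirmed numerically (a sketch added here, not part of the original text); the dimension $N=4$ is an arbitrary choice.

```python
import numpy as np

N = 4
psi = np.full(N, 1 / np.sqrt(N))          # normalized uniform superposition
G = 2 * np.outer(psi, psi) - np.eye(N)    # Grover operator

assert np.allclose(G @ G, np.eye(N))      # G^2 = I
vals, vecs = np.linalg.eigh(G)
assert np.sum(vals > 0) == 1              # a single (+1)-eigenvector ...
assert (np.abs(vecs[:, vals > 0]) > 1e-12).all()  # ... with no zero entries
```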
\noindent
\begin{figure}
\caption{Example of an orthogonal reflection $U$, its corresponding graph $\Gamma_U$, and the tessellation (in blue) induced by the $(+1)$-eigenvectors of $U$.}
\label{fig:example1}
\end{figure}
\begin{definition}
The \textbf{staggered QW} on a graph $\Gamma(V,E)$ associated with Hilbert space ${\cal H}^{|V|}$ is driven by
\begin{equation}
U \,=\, U_1\,U_0,
\end{equation}
where $U_0$ and $U_1$ are orthogonal reflections of $\Gamma(V,E)$. The union of the tessellations induced by $U_0$ and $U_1$ must cover the edges of $\Gamma(V,E)$.
\end{definition}
The above definition can be readily extended by allowing $U$ to be a product of three or more orthogonal reflections. In this work we focus on the product of only two orthogonal reflections. Ref.~\cite{PSFG15} showed that all Szegedy's QWs are instances of the staggered QW model. In fact, a staggered QW is equivalent to a Szegedy's QW if and only if the intersection of the tessellations induced by $U_0$ and $U_1$ does not contain any edge of $\Gamma(V,E)$.
To use the staggered QW model for searching marked vertices, we have to use partial tessellations, which can be formally defined by using the notion of partial orthogonal reflection.
\begin{definition}
A unitary and Hermitian operator $U$ in ${\cal H}^{|V|}$ is called a \textbf{partial orthogonal reflection} of a graph $\Gamma(V,E)$ if there is an orthonormal basis of its $(+1)$-eigenspace, written in the basis associated with the vertices of the graph, whose vectors $\ket{\psi_x^+}$ obey property~(1) and violate property~(2) of Definition~\ref{def:orthrefl}.
\end{definition}
A partial orthogonal reflection $U$ induces a \textbf{partial tessellation}, which does not contain all vertices of the graph. The most radical example of a partial orthogonal reflection is the minus identity operator $(-I)\in {\cal H}^N$ because it has no $(+1)$-eigenvectors. The graph induced by this partial orthogonal reflection has $N$ disconnected vertices (the empty $N$-graph). If an $N$-graph is given, a partial orthogonal reflection defines a partial tessellation, which is a tessellation with missing polygons. $(-I)$ induces a partial tessellation with no polygons at all. In the staggered QW model, vertices that do not belong to the intersection of polygons of all tessellations are the marked ones.
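A minimal numerical illustration of partial orthogonal reflections, assuming NumPy (a sketch with hypothetical dimensions): $(-I)$ has no $(+1)$-eigenvectors at all, and an operator built from one Grover block and one $(-I)$ block obeys property~(1) while violating property~(2), leaving the last two vertices uncovered:

```python
import numpy as np

# (-I) has no (+1)-eigenvectors: the most radical partial orthogonal reflection
N = 5
minus_I = -np.eye(N)
assert not np.any(np.isclose(np.linalg.eigvalsh(minus_I), 1.0))

# Block operator: a 3-dimensional Grover block next to a (-I) block on 2 vertices.
psi = np.ones(3) / np.sqrt(3)
G = 2 * np.outer(psi, psi) - np.eye(3)
U = np.zeros((5, 5))
U[:3, :3] = G
U[3:, 3:] = -np.eye(2)

# U is unitary and Hermitian, its (+1)-eigenvectors have non-overlapping
# support (property (1)), but they vanish on the last two coordinates,
# violating property (2): a partial orthogonal reflection.
assert np.allclose(U @ U, np.eye(5)) and np.allclose(U, U.T)
vals, vecs = np.linalg.eigh(U)
plus = vecs[:, np.isclose(vals, 1.0)]
assert np.allclose(plus[3:, :], 0)  # the "marked" vertices are uncovered
```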
\begin{definition}
The \textbf{generalized staggered QW} on a graph $\Gamma(V,E)$ associated with Hilbert space ${\cal H}^{|V|}$ is driven by
\begin{equation}
U \,=\, U_1\,U_0,
\end{equation}
where $U_0$ is a partial orthogonal reflection and $U_1$ is an orthogonal reflection of $\Gamma(V,E)$. The union of the tessellations induced by $U_0$ and $U_1$ must cover the vertices of $\Gamma(V,E)$.
\end{definition}
Again, the above definition can be extended by allowing $U$ to be the product of more (partial) orthogonal reflections.
\section{Results for Regular Graphs}\label{sec:MR}
\begin{theorem}\label{theo1}
A standard flip-flop coined QW on a $d$-regular $N$-graph $\Gamma(V,E)$, such that the coin $C$ is an orthogonal reflection, can be cast into the extended Szegedy's QW model.
\end{theorem}
\begin{proof}
We start by obtaining a staggered QW with two tessellations equivalent to the standard flip-flop coined QW. If $C$ is an orthogonal reflection, then
\begin{eqnarray}
C &=& 2\sum_{x=0}^{m-1} \ket{\alpha_x}\bra{\alpha_x} - I, \label{C}
\end{eqnarray}
where $\ket{\alpha_0},...,\ket{\alpha_{m-1}}$ is an orthonormal basis for the invariant eigenspace of $C$ with the following properties: (1)~if the $i$-th entry of $\ket{\alpha_x}$ is nonzero, the $i$-th entries of the other $(+1)$-eigenvectors must be zero, and (2)~vector $\sum_{x=0}^{m-1} \ket{\alpha_x}$ has no zero entries. $C$ has an associated $d$-graph $\Gamma_C$, which is a union of $m$ disjoint cliques. The labels of the vertices of $\Gamma_C$ are the coin values.
Let $\Gamma'(V',E')$ be the graph obtained from $\Gamma(V,E)$ by replacing each vertex $v\in V$ by graph $\Gamma_C$. In the gluing process, a vertex of $\Gamma_C$ with label $j$ is linked by edge $j$ of $\Gamma(V,E)$ ($j$ is the coin direction as in (\ref{def_S})) and receives label $(v,j)$ as a new vertex in $\Gamma'(V',E')$. Fig.~\ref{fig:example2} shows how to obtain $\Gamma'(V',E')$ from the two-dimensional lattice when the coin is the Grover operator. In this example, each vertex is replaced by a $4$-clique, and two different cliques are joined by at most one edge because the two-dimensional lattice is a simple graph.
\begin{figure}
\caption{A standard flip-flop coined QW with the four-dimensional Grover coin on the two-dimensional lattice $\Gamma$ depicted on the left-hand side is equivalent to a staggered QW on graph $\Gamma'$ on the right-hand side (the vertices are the black circles). Each vertex of the lattice $\Gamma$ is converted into a 4-clique of $\Gamma'$.}
\label{fig:example2}
\end{figure}
Define
\begin{eqnarray}
C' &=& I_N\otimes C\nonumber\\
&=& 2\sum_{v=0}^{N-1}\sum_{x=0}^{m-1} \ket{v,\alpha_x}\bra{v,\alpha_x} - I.
\end{eqnarray}
Polygons induced by $\ket{v,\alpha_x},\,\forall v,x$ tessellate $\Gamma'(V',E')$ because each polygon induced by $\ket{v,\alpha_x}$ covers exactly graph $\Gamma_C$ that replaces vertex $v$, and the union of polygons induced by $\ket{v,\alpha_x}$ covers all vertices. Fig.~\ref{fig:example3} shows polygons induced by $\ket{v,\alpha_x}$ in blue for the two-dimensional lattice. By analyzing the non-zero entries of vectors $\ket{v,\alpha_x}$, we can verify that $C'$ is an orthogonal reflection of $\Gamma'(V',E')$, which is another way to verify that the set of vectors $\ket{v,\alpha_x}$ induces a tessellation.
\begin{figure}
\caption{Tessellations of a staggered QW equivalent to a standard flip-flop coined QW. Blue polygons are induced by vectors $\ket{v,\alpha_x}$ and red polygons by vectors $\ket{\beta_{v,j}^+}$.}
\label{fig:example3}
\end{figure}
The second tessellation is obtained using the $(+1)$-eigenvectors of $S$, the set of which is a perfect matching of $\Gamma'(V',E')$ as we now show. Using Eq.~(\ref{def_S}), it is straightforward to verify that, for any $v$ and $j$, vectors
\begin{eqnarray}
\ket{\beta_{v,j}^+} &=& \frac{1}{\sqrt 2}\left(\ket{v}\ket{j}+\ket{v'}\ket{j'}\right),\\
\ket{\beta_{v,j}^-} &=& \frac{1}{\sqrt 2}\left(\ket{v}\ket{j}-\ket{v'}\ket{j'}\right),
\end{eqnarray}
are eigenvectors of $S$ with eigenvalues $(+1)$ and $(-1)$, respectively. Since there are $dN/2$ independent eigenvectors associated with each eigenvalue, it follows that
\begin{equation}
S\,=\, 2 \sum \ket{\beta_{v,j}^+}\bra{\beta_{v,j}^+}-I
\end{equation}
where the sum runs over the set of independent $(+1)$-eigenvectors (the sum has $dN/2$ terms). $S$ is an orthogonal reflection because the set of independent $(+1)$-eigenvectors has non-overlapping nonzero entries, and the sum of those eigenvectors has no zero entries in the computational basis of $\Gamma'(V',E')$. Polygons induced by $\ket{\beta_{v,j}^+}$ cover all vertices and form a perfect matching, which defines a second tessellation of $\Gamma'(V',E')$. Fig.~\ref{fig:example3} shows polygons $\ket{\beta_{v,j}^+}$ in red for the two-dimensional lattice.
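A small numerical sketch of this construction on the cycle with $N=4$ vertices and coin directions $j\in\{0,1\}$ (an assumed toy instance, not from the original text; requires NumPy). It checks $S^2=I$, that the vectors $\ket{\beta_{v,j}^+}$ are $(+1)$-eigenvectors, and the spectral decomposition of $S$:

```python
import numpy as np

N, d = 4, 2                      # cycle: 4 vertices, directions 0 (right), 1 (left)
dim = N * d
idx = lambda v, j: d * v + j     # basis ordering |v>|j>

# Flip-flop shift: S|v,0> = |v+1 mod N, 1>,  S|v,1> = |v-1 mod N, 0>
S = np.zeros((dim, dim))
for v in range(N):
    S[idx((v + 1) % N, 1), idx(v, 0)] = 1
    S[idx((v - 1) % N, 0), idx(v, 1)] = 1

assert np.allclose(S @ S, np.eye(dim))   # flip-flop implies S^2 = I

# (+1)-eigenvectors |beta+> = (|v,0> + |v+1,1>)/sqrt(2), one per edge (dN/2 = 4)
proj = np.zeros((dim, dim))
for v in range(N):
    beta = np.zeros(dim)
    beta[idx(v, 0)] = beta[idx((v + 1) % N, 1)] = 1 / np.sqrt(2)
    assert np.allclose(S @ beta, beta)   # eigenvalue (+1)
    proj += np.outer(beta, beta)

assert np.allclose(S, 2 * proj - np.eye(dim))  # S = 2 sum |beta+><beta+| - I
```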
The union of tessellations $\ket{v,\alpha_x}$ and $\ket{\beta_{v,j}^+}$ covers all edges and is a well-defined staggered QW having one vertex in each polygon intersection. Using Proposition~4.3 of Ref.~\cite{PSFG15}, this staggered QW can be cast into the extended Szegedy's framework.\qed
\end{proof}
Theorem~\ref{theo1} has a converse, which we state as a new theorem.
\begin{theorem}\label{theo2}
Let $\Gamma(X,Y,E)$ be a biregular bipartite graph such that $\deg(x)=d$, $\forall x\in X$ and $\deg(y)=2$, $\forall y \in Y$. Suppose that if one eliminates the zeros of the sequence $p_{x0},p_{x1},p_{x2},...$ then one gets the same sequence $c_0,c_1,...,c_{d-1}$, for all $x\in X$. Suppose also that $q_{yx}$ is either $1/2$ or $0$. Then Szegedy's QW on $\Gamma(X,Y,E)$ is equivalent to a standard flip-flop coined QW on a $d$-regular $|X|$-multigraph.
\end{theorem}
\begin{proof}
Consider the staggered QW model on the line graph $L(\Gamma)$ equivalent to Szegedy's QW on $\Gamma(X,Y,E)$~\cite{PSFG15}. $L(\Gamma)$ has $d|X|=2|Y|$ vertices. The polygons of the staggered model are induced by
\begin{eqnarray}
\ket{\alpha_x} &=& \sum_{y\in Y} \sqrt{p_{x y}} \, \ket{f(x,y)}, \label{ht_alpha_x} \\
\ket{\beta_y} &=& \sum_{x\in X} \sqrt{q_{y x}}\, \ket{f(x,y)}, \label{ht_beta_y}
\end{eqnarray}
where $f$ is the bijection between $E$ and the vertices of $L(\Gamma)$ as described in Ref.~\cite{PSFG15}; and vectors $\ket{\alpha_x},\ket{\beta_y}$ belong to Hilbert space ${\cal H}^{d|X|}$. Using the edge labels described in Fig.~\ref{fig:example5}, vectors $\ket{\alpha_x}$ are given by
\begin{eqnarray}
\ket{\alpha_x} &=& \sum_{j=0}^{d-1} \sqrt{c_j} \, \ket{dx+j} \nonumber \\
&=& \sum_{j=0}^{d-1} \sqrt{c_j} \, \ket{x}\ket{j},
\end{eqnarray}
where vectors $\ket{x}\ket{j}$ belong to the computational basis of Hilbert space ${\cal H}^{|X|}\otimes{\cal H}^{d}$.
\begin{figure}
\caption{Description of the edge labels of the bipartite graph $\Gamma(X,Y,E)$, where $\deg(x)=d$, $\forall x\in X$; $\deg(y)=2$, $\forall y \in Y$. }
\label{fig:example5}
\end{figure}
Then
\begin{eqnarray}
U_0 &=& 2\sum_{x\in X} \ket{\alpha_x}\bra{\alpha_x} - I
\,\,=\,\, I\otimes C,
\end{eqnarray}
where
\begin{equation}\label{coin_C_Sz}
C\,=\, 2\ket{\psi}\bra{\psi}-I
\end{equation}
and
\begin{equation}
\ket{\psi}\,=\, \sum_{j=0}^{d-1} \sqrt{c_j} \, \ket{j}.
\end{equation}
Notice that $C\in {\cal H}^d$ is an orthogonal reflection because $C$ has only one eigenvector with eigenvalue $(+1)$ and this eigenvector has no zero entries. The graph induced by $C$ is a $d$-clique.
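The claim that $C$ is an orthogonal reflection can be checked numerically for any distribution $c_j$ with nonzero entries; the distribution below is a hypothetical choice (a sketch assuming NumPy):

```python
import numpy as np

c = np.array([0.5, 0.3, 0.2])         # hypothetical c_j: sums to 1, all nonzero
psi = np.sqrt(c)                      # |psi> = sum_j sqrt(c_j) |j>
C = 2 * np.outer(psi, psi) - np.eye(len(c))

assert np.allclose(C, C.T) and np.allclose(C @ C, np.eye(len(c)))
vals, vecs = np.linalg.eigh(C)
plus = vecs[:, np.isclose(vals, 1.0)]
assert plus.shape[1] == 1             # unique (+1)-eigenvector ...
assert np.all(np.abs(plus) > 1e-12)   # ... with no zero entries: a d-clique is induced
```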
On the other hand, vectors $\ket{\beta_y}\in {\cal H}^{2|Y|}$ have only two terms
\begin{equation}
\ket{\beta_y}\,=\, \frac{1}{\sqrt 2}\left(\ket{f(x_1,y)}+\ket{f(x_2,y)}\right),
\end{equation}
where $x_1,x_2$ are the neighbors of $y$ ($x_1,x_2$ depend on $y$). Then
\begin{eqnarray}\label{U_1_theo2}
U_1 &=& 2\sum_{y\in Y} \ket{\beta_y}\bra{\beta_y}- I\nonumber\\
&=& \sum_{y\in Y} \,\,\ket{f(x_1,y)}\bra{f(x_2,y)}+\ket{f(x_2,y)}\bra{f(x_1,y)}. \label{U1_theo2}
\end{eqnarray}
$U_1$ is a flip-flop shift operator because $U_1$ permutes basis vectors and $U_1^2=I$. The shift can be understood in the following way: when we convert $\ket{dx_1+j_1}$ into $\ket{x_1}\ket{j_1}$, the interpretation of applying $U_1$ to $\ket{x_1}\ket{j_1}$ is that the walker moves from position $x_1$ in the direction $j_1$ reaching $y$, reflects at $y$, and moves to $x_2$. The state of the walker will be $\ket{x_2}\ket{j_2}$, where $j_2$ points to the same $y$ from $x_2$. Applying $U_1$ to $\ket{x_2}\ket{j_2}$ yields $\ket{x_1}\ket{j_1}$. This inversion of direction characterizes the flip-flop shift operator.
The evolution operator is
\begin{equation}
U \,=\, U_1 \, (I\otimes C),
\end{equation}
where $C$ is the coin operator given by Eq.~(\ref{coin_C_Sz}) and $U_1$ is the flip-flop shift operator given by Eq.~(\ref{U1_theo2}).
Now we have to specify the graph on which the coined QW evolves. The polygons of tessellation $\alpha$ are $d$-cliques, and the polygons of tessellation $\beta$ have two vertices and form a perfect matching of $L(\Gamma)$. Each $d$-clique of $L(\Gamma)$ must be converted into a single vertex. If two $d$-cliques are connected by an edge, the vertices that replace those cliques are adjacent. If two $d$-cliques are connected by more than one edge, the vertices that replace those cliques must be connected by more than one edge generating a $d$-regular $|X|$-multigraph. Fig.~\ref{fig:example4} shows an example of a bipartite graph $\Gamma$ on which the Szegedy's QW takes place and the multigraph $\Gamma'$ on which an equivalent standard flip-flop coined QW takes place.
\begin{figure}
\caption{An example of a bipartite graph $\Gamma$, its line graph $L(\Gamma)$, and the reduced multigraph obtained from the line graph by replacing $3$-cliques by single vertices. Szegedy's QW on $\Gamma$ using vectors $\ket{\phi_x}$ and $\ket{\psi_y}$ is equivalent to a standard flip-flop coined QW on the reduced multigraph.}
\label{fig:example4}
\end{figure}
We have used the staggered QW version of Szegedy's QW because in the original Szegedy's version there is an idle subspace spanned by the non-edges linking $X$ and $Y$ that hinders the decomposition of $R_0$ given by Eq.~(\ref{ht_RA}) into $I\otimes C$.\qed
\end{proof}
\section{Results for Non-Regular Graphs}\label{sec:GNR}
Theorem~\ref{theo1} can be generalized for non-regular flip-flop coined QWs, and Theorem~\ref{theo2} can be generalized for bipartite graphs that are not biregular. The proofs are given in the Appendix.
\begin{theorem}\label{theo3}
A non-regular flip-flop coined QW such that $C'$ is an orthogonal reflection can be cast into the extended Szegedy's model.
\end{theorem}
\begin{theorem}\label{theo4}
Let $\Gamma(X,Y,E)$ be a bipartite graph such that $\deg(y)=2$, $\forall y \in Y$. Suppose that $q_{yx}$ is either $1/2$ or $0$. Then Szegedy's QW on $\Gamma(X,Y,E)$ is equivalent to a non-regular flip-flop coined QW on a $|X|$-multigraph.
\end{theorem}
We give an example that displays the underlying structure of the general proof. Let us start by describing a non-regular flip-flop coined QW on the graph $\Gamma$ depicted on the left-hand side of Fig.~\ref{fig:example6}. Let us use the three-dimensional Grover coin for the vertex of degree 3 and the Hadamard coin for the vertices of degree 2.
\begin{figure}
\caption{Example of a non-regular flip-flop coined QW on graph $\Gamma$ and its equivalent Szegedy's version on the bipartite graph $\Gamma''$. The staggered model on $\Gamma' $ is used as a bridge to go from $\Gamma$ to $\Gamma''$.}
\label{fig:example6}
\end{figure}
The coin operator is
\begin{equation}
C'\,=\,\left[\begin{array}{cccc}
1&0&0&0\\
0&C&0&0\\
0&0&H&0\\
0&0&0&H
\end{array}\right],
\end{equation}
where $H$ is the Hadamard gate and
\begin{equation}
C\,=\,\frac{1}{3}\left[\begin{array}{ccc}
-1&2&2\\
2&-1&2\\
2&2&-1
\end{array}\right].
\end{equation}
The shift operator is
\begin{equation}
S\,=\,\left[\begin{array}{cccccccc}
0&1&0&0&0&0&0&0\\
1&0&0&0&0&0&0&0\\
0&0&0&0&1&0&0&0\\
0&0&0&0&0&0&1&0\\
0&0&1&0&0&0&0&0\\
0&0&0&0&0&0&0&1\\
0&0&0&1&0&0&0&0\\
0&0&0&0&0&1&0&0
\end{array}\right].
\end{equation}
The staggered QW graph equivalent to the coined QW graph is obtained by replacing each vertex of $\Gamma$ by a $d$-clique, where $d$ is the degree of the vertex. Fig.~\ref{fig:example6} shows the resulting graph $\Gamma'$ with the induced tessellations. Each $d$-clique is a polygon in the tessellation $\alpha$ (blue), and the vertices incident to each edge of the original graph is a polygon of the tessellation $\beta$ (red). Vectors $\ket{\alpha_x}$ and $\ket{\beta_y}$ are given by
\begin{center}
\begin{minipage}{0.4\textwidth}
\begin{eqnarray*}
\ket{\alpha_0}&=&\ket{0}\\
\ket{\alpha_1}&=&\frac{\ket{1}+\ket{2}+\ket{3}}{\sqrt 3}\\
\ket{\alpha_2}&=&\frac{{\sqrt {2+\sqrt {2}}}\,\ket{4}+{\sqrt {2-\sqrt {2}}}\,\ket{5}}{2}\\
\ket{\alpha_3}&=&\frac{{\sqrt {2+\sqrt {2}}}\,\ket{6}+{\sqrt {2-\sqrt {2}}}\,\ket{7}}{2}
\end{eqnarray*}
\mbox{\,}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\begin{eqnarray*}
\ket{\beta_0}&=&\frac{1}{\sqrt 2}\big(\ket{0}+\ket{1}\big)\\
\ket{\beta_1}&=&\frac{1}{\sqrt 2}\big(\ket{2}+\ket{4}\big)\\
\ket{\beta_2}&=&\frac{1}{\sqrt 2}\big(\ket{3}+\ket{6}\big)\\
\ket{\beta_3}&=&\frac{1}{\sqrt 2}\big(\ket{5}+\ket{7}\big)
\end{eqnarray*}
\mbox{\,}
\end{minipage}
\end{center}
where $\ket{\alpha_1}$ is the normalized $(+1)$-eigenvector of the three-dimensional Grover coin, $\ket{\alpha_2}$ and $\ket{\alpha_3}$ are each the normalized $(+1)$-eigenvector of the Hadamard gate, and $\ket{\beta_0}$ to $\ket{\beta_3}$ are the $(+1)$-eigenvectors of $S$. It is straightforward to check that
\begin{eqnarray}
C'&=&2\sum_{j=0}^3 \ket{\alpha_j}\bra{\alpha_j} - I,\\
S &=&2\sum_{j=0}^3 \ket{\beta_j}\bra{\beta_j} - I,
\end{eqnarray}
showing that the staggered QW is equivalent to the coined QW because the evolution operators of those QWs are equal.
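Both decompositions can be verified numerically; the following sketch (assuming NumPy) rebuilds $C'$, $S$, and the vectors $\ket{\alpha_x}$ and $\ket{\beta_y}$ of this example and checks the two equalities:

```python
import numpy as np

s2 = np.sqrt(2)
H = np.array([[1, 1], [1, -1]]) / s2            # Hadamard gate
G3 = 2 * np.ones((3, 3)) / 3 - np.eye(3)        # three-dimensional Grover coin

# C' = 1 (+) G3 (+) H (+) H  (block diagonal, 8x8)
Cp = np.zeros((8, 8))
Cp[0, 0] = 1
Cp[1:4, 1:4] = G3
Cp[4:6, 4:6] = H
Cp[6:8, 6:8] = H

# Flip-flop shift S: swaps the basis-state pairs (0,1), (2,4), (3,6), (5,7)
S = np.zeros((8, 8))
for a, b in [(0, 1), (2, 4), (3, 6), (5, 7)]:
    S[a, b] = S[b, a] = 1

e = np.eye(8)
alphas = [e[0],
          (e[1] + e[2] + e[3]) / np.sqrt(3),
          (np.sqrt(2 + s2) * e[4] + np.sqrt(2 - s2) * e[5]) / 2,
          (np.sqrt(2 + s2) * e[6] + np.sqrt(2 - s2) * e[7]) / 2]
betas = [(e[0] + e[1]) / s2, (e[2] + e[4]) / s2,
         (e[3] + e[6]) / s2, (e[5] + e[7]) / s2]

# C' = 2 sum_j |alpha_j><alpha_j| - I   and   S = 2 sum_j |beta_j><beta_j| - I
assert np.allclose(Cp, 2 * sum(np.outer(a, a) for a in alphas) - np.eye(8))
assert np.allclose(S, 2 * sum(np.outer(b, b) for b in betas) - np.eye(8))
```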
Graph $\Gamma''$ on the right-hand side is obtained from $\Gamma'$ by converting each polygon into a vertex of $\Gamma''$ and connecting the vertices of $\Gamma''$ associated with overlapping polygons, as explained in Ref.~\cite{PSFG15}. After performing this connecting procedure, $\Gamma'$ is the line graph of $\Gamma''$. Vectors $\ket{\phi_x}$ and $\ket{\psi_y}$ of Szegedy's QW on $\Gamma''$ are obtained from vectors $\ket{\alpha_x}$ and $\ket{\beta_y}$ employing the bijection between the vertices of $\Gamma'$ and the edges of $\Gamma''$: $\ket{0} \leftrightarrow \ket{0, 0}, \ket{1} \leftrightarrow \ket{1, 0}, \ket{2} \leftrightarrow \ket{1, 1}, \ket{3} \leftrightarrow \ket{1, 2}, \ket{4} \leftrightarrow \ket{2, 1}, \ket{5} \leftrightarrow \ket{2, 3}, \ket{6} \leftrightarrow \ket{3, 2}, \ket{7} \leftrightarrow \ket{3, 3}$. Notice that $W$ is in ${\cal H}^{16}$ while $S(I\otimes C)$ is in ${\cal H}^8$. The non-trivial part of $W$ is equal to the evolution operator of the coined QW or the staggered QW.
Since the staggered QW on $\Gamma'$ is equivalent to Szegedy's QW on $\Gamma''$~\cite{PSFG15}, it follows that the non-regular flip-flop coined QW on $\Gamma$ is equivalent to Szegedy's QW on $\Gamma''$.
\section{Searching Marked Vertices}\label{sec:searching}
One of the most successful methods to search marked vertices in the coined QW model is to use non-regular flip-flop coined QWs with two different coins: $(-I)$ on the marked vertices and the Grover coin on the non-marked ones. The normalized uniform superposition of all vertices is the initial condition to avoid any bias at the beginning. This method is called abstract search algorithm~\cite{Ambainis:2005,Portugal:book} and was used for the hypercube~\cite{Shenvi:2003}, two-dimensional lattice~\cite{Ambainis:2005,Tulsi:2008}, honeycomb network~\cite{Abal:2010}, triangular network~\cite{Abal:2011}, and various graphs~\cite{BW10,LW12}. Let us review this method in the context of non-regular graphs. The coin is defined by
\begin{equation}
C'\ket{v,j}\,=\,\begin{cases} -\ket{v,j} &\mbox{if } v \mbox{ is a marked vertex} \\
-\ket{v,j} + 2\ket{\psi}& \mbox{if } v \mbox{ is not a marked vertex}, \end{cases}
\end{equation}
where $\ket{\psi}=\frac{1}{d_v}\sum_{j=0}^{d_v-1}\ket{v,j}$ (the uniform superposition of all coin directions at vertex $v$, with normalization factor $1/d_v$ so that the expression for $C'$ reproduces the Grover coin), $d_v$ is the degree of vertex $v$, and we are using the notation $\ket{v,j}$ described in the paragraph right after Definition~\ref{def:nonregularQW}. The evolution operator is $U=S\,C'$, characterizing a non-regular flip-flop coined QW (Definition~\ref{def:nonregularQW}) even if the graph $\Gamma(V,E)$, on which the QW takes place, is regular. Let $C\in {\cal H}^{2|E|}$ be the usual coin with the Grover operator $G$ for all vertices (including the marked ones). The searching evolution operator $U$ can be written as $(S\,C)\cdot R$, where $(S\,C)$ is the evolution operator of a QW with no marked vertices and $R$ is a reflection that applies $(-G)$ on the marked vertices and $(+I)$ on the non-marked ones, because $C'=CR$. Using this fact and the spectrum of $(S\,C)$, Ref.~\cite{Ambainis:2005} found two non-trivial eigenvectors of $U$ associated with the eigenvalues with the smallest positive argument, which enabled the authors to determine analytically the time complexity of the algorithm for the spatial search problem on the two-dimensional lattice with one marked vertex. Using the evolution operator $U=S\,C'$, the probability at the marked vertex increases periodically, allowing one to find a marked vertex by performing the measurement at the correct moment.
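The factorization $C'=CR$ can be illustrated numerically; the sketch below (assuming NumPy, with a hypothetical choice of $N$, $d$, and marked set) checks that applying $(-G)$ on the marked vertices and $(+I)$ elsewhere recovers the search coin, since $G(-G)=-G^2=-I$:

```python
import numpy as np

N, d = 4, 3            # hypothetical: 4 vertices, each of degree 3
marked = {2}           # hypothetical marked vertex

psi = np.ones(d) / np.sqrt(d)
G = 2 * np.outer(psi, psi) - np.eye(d)   # Grover coin

def blockdiag(blocks):
    """Assemble the per-vertex coin blocks into one (N*d)x(N*d) operator."""
    M = np.zeros((N * d, N * d))
    for v, B in enumerate(blocks):
        M[v * d:(v + 1) * d, v * d:(v + 1) * d] = B
    return M

Cprime = blockdiag([-np.eye(d) if v in marked else G for v in range(N)])
C = blockdiag([G] * N)                                    # Grover coin everywhere
R = blockdiag([-G if v in marked else np.eye(d) for v in range(N)])

# On marked vertices G(-G) = -I; on non-marked ones G*I = G, so C' = C R
assert np.allclose(Cprime, C @ R)
```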
On the other hand, the quantum search method in Szegedy's framework is an extension of the classical method using random walks. The key concept in the classical case is the \textbf{hitting time}, which is the average time to hit a marked vertex for the first time using a random walk on a graph $\Gamma$ with stochastic matrix $P$ after specifying some initial condition. The classical hitting time can be calculated by converting the original graph $\Gamma$ into a new directed graph $\Gamma'$ (with a new stochastic matrix $P'$) by removing the edges that leave the marked vertices. Marked vertices are converted into sinks. The hitting time obtained using $P'$ is the same as using $P$ because, as soon as the walker hits a marked vertex through some edge coming from a non-marked vertex, the walker need not go any further. Szegedy proposed a quantum version of this procedure. Let $\Gamma(X,E)$ be the original classical graph, where $X$ is the set of vertices and $E$ the set of edges. Define $\Gamma(X,X',E')$ as a bipartite graph obtained from $\Gamma(X,E)$ by duplicating $X$ and by converting edges $\{x_i,x_j\}\in E$ into $\{x_i,x'_j\}\in E'$. Until this point, no vertex has been marked and the quantum walk described in Definition~\ref{def:SzegedyQW} can be used taking $P=Q$. As before, define $\Gamma'(X,X',E'')$ as a directed bipartite graph by removing the edges of $\Gamma(X,X',E')$ that leave the marked vertices of $X$ and $X'$. Add new non-directed edges connecting each marked $x$ with its corresponding copy $x'$, for all marked vertices. The quantum walk described in Definition~\ref{def:SzegedyQW} can be used taking $P'=Q'$, where $P'$ and $Q'$ are the new stochastic matrices of $\Gamma'(X,X',E'')$. With this framework and taking
\begin{equation}
\ket{\psi_0}=\frac{1}{\sqrt n}\sum_{x y} \sqrt{p_{x y}} \ket{x,y}
\end{equation}
as initial condition in ${\cal H}^{n}\otimes {\cal H}^{n}$, Szegedy showed that the detection problem on $\Gamma'(X,X',E'')$ can be solved with a quadratic speedup compared with the time complexity of the same problem using random walks with a symmetric and ergodic stochastic matrix $P$ on $\Gamma(X,E)$~\cite{Szegedy:2004}. In the detection problem, one does not calculate the probability of finding a marked vertex and, therefore, cannot be sure to have found the marked vertex. The searching problem on \textit{bipartite graphs} with a single marked vertex was addressed in Ref.~\cite{KMOR15}.
Szegedy's searching framework can be straightforwardly extended for generic bipartite graphs $\Gamma(X,Y,E)$, which are not obtained from the duplication process of simple classical graphs, but instead are obtained from the duplication process of \textit{directed} classical graphs. The key ingredient is to use sinks, which need not be in both sets $X$ and $Y$. Since the elements of $X$ represent the physical positions while the elements of $Y$ are auxiliary copies, a good strategy is to use sinks only in $X$. To mark vertices in $X$, define a new directed bipartite graph $\Gamma'(X,Y,E')$ by removing the edges of $\Gamma(X,Y,E)$ that leave the marked vertices of $X$; they become sinks. Use the new stochastic matrices $P'$ and $Q'$ of $\Gamma'(X,Y,E')$ and the corresponding vectors~(\ref{ht_phi_x}) and~(\ref{ht_psi_y}) to define Szegedy's searching QW on the new directed bipartite graph. $P'$ has complete rows of zeroes corresponding to the marked vertices and is not a stochastic matrix in the usual sense. $Q'$ does not change, because we are introducing sinks only in $X$. If the initial condition is a uniform superposition of the edges of $\Gamma(X,Y,E)$, Szegedy's QW will find a marked vertex in the sense that the probability associated with the marked vertices will be high if one performs a measurement (projection on the computational basis of $X$) at the correct time. We call this extended method \textbf{Szegedy's searching framework}.
Coin-based search algorithms and Szegedy's searching framework seem to be very different. However, they are strongly related. Let us address the equivalence between the abstract search algorithm and Szegedy's framework using the following strategy: (1)~we review how to convert Szegedy's QWs with sinks into equivalent generalized staggered QWs following~\cite{PSFG15}, and (2)~we show how to convert coined QWs with marked vertices into equivalent generalized staggered QWs. The generalized staggered models coming from those two different directions are equivalent if the stochastic matrix of Szegedy's QW obeys the premises of Theorem~\ref{theo4} and vectors $\ket{\phi_x}$ and $\ket{\psi_y}$ are uniform superpositions.
Let us address item (1)~by reviewing how Szegedy's QWs with sinks are converted into equivalent generalized staggered QWs. Ref.~\cite{PSFG15} showed that Szegedy's searching framework is included in the staggered searching method, which employs partial tessellations. The staggered QW graph is the line graph of the original bipartite graph (before creating sinks). Tessellation $\alpha$ is partial; it does not employ polygons containing the vertices corresponding to the edges that were removed in the process of creating the sinks. Following this process, there will be edges in the line graph that belong to neither partial tessellation $\alpha$ nor tessellation $\beta$. Tessellation $\beta$ is the same used in the proof of Theorem~\ref{theo4}. For example, the directed bipartite graph $\Gamma''$ of Fig.~\ref{fig:figure9a} has one sink: vertex $4\in X$. Graph $\Gamma'$ (with the dashed edges) is the line graph of the graph with no sinks equivalent to $\Gamma''$. $\Gamma'$ has periodic boundary conditions (in the form of a torus). In the figure, we label the polygons of tessellation $\alpha$ (blue) from 0 to 8 (4 is missing) and those of tessellation $\beta$ (red) from 0 to 17. There is a one-to-one mapping between the polygons of tessellation $\alpha$ and the vertex labels of $\Gamma''$ in $X$ (and likewise between the polygons of tessellation $\beta$ and the labels in $Y$). The polygon of tessellation $\alpha$ corresponding to vertex $4\in X$ must be missing. The staggered QW on $\Gamma'$ with partial tessellation $\alpha$ and complete tessellation $\beta$ is equivalent to Szegedy's QW on $\Gamma''$ with vertex $4\in X$ as a sink.
\begin{figure}
\caption{A search algorithm using coined QW on a two-dimensional lattice $\Gamma$ with periodic boundary conditions using $(-I)$ on the marked vertex is equivalent to the generalized staggered QW on $\Gamma'$ with a missing polygon at the center of $\Gamma'$, which is equivalent to Szegedy's search on the directed bipartite graph $\Gamma''$ with vertex $4\in X$ as a sink.}
\label{fig:figure9a}
\end{figure}
Let us address item (2). When we convert a non-regular flip-flop coined QW into an equivalent staggered QW, the vertices with the Grover coin are replaced by cliques while the vertices with coin $(-I)$ are converted into disconnected vertices (empty graphs) with no polygons because $(-I)$ has no $(+1)$-eigenvectors. In this case, tessellation $\alpha$ is partial. Tessellation $\beta$ is complete because there is no change in the shift operator. For example, the two-dimensional lattice $\Gamma$ with periodic boundary conditions of Fig.~\ref{fig:figure9a} with a marked vertex in the center (label-4 blue vertex with an arrow pointing to it) must be converted into graph $\Gamma'$ (without the dashed edges) in the middle of the figure. Notice that the graph and the tessellations obtained from $\Gamma$ coincide with the graph and tessellations obtained from the bipartite graph employed by Szegedy's searching model after removing the dashed edges. The dashed edges can be removed because they play no role in the quantum-walk dynamics; they belong to no polygon. The non-regular flip-flop coined QW on $\Gamma$ with marked vertex 4 is equivalent to Szegedy's QW on $\Gamma''$ with vertex $4\in X$ as a sink in the sense that the evolution operators are exactly the same if we eliminate the idle space in Szegedy's evolution operator and choose bases in the proper ordering. Notice that $\Gamma$ is not the classical graph associated with $\Gamma''$. The classical graph has 17 vertices and is a directed absorbing graph (the sink is vertex 4).
\begin{theorem}\label{theo5}
A non-regular flip-flop coined QW on a graph $\Gamma(V,E)$ with coin $(-I)$ on the marked vertices and the Grover coin on the non-marked vertices can be cast into Szegedy's searching framework.
\end{theorem}
\begin{theorem}\label{theo6}
Let $\Gamma(X,Y,E)$ be a bipartite graph such that $\deg(y)=2$, $\forall y \in Y$. Suppose that $q_{yx}$ is either $1/2$ or $0$. Then Szegedy's QW on $\Gamma(X,Y,E)$ with sinks in set $X$ is equivalent to a non-regular flip-flop coined QW on a $|X|$-multigraph $\Gamma(V,E')$ with coin $(-I)$ on the vertices $v\in V$ associated with the sinks of $X$.
\end{theorem}
The proofs are in the Appendix.
\section{Conclusions}\label{sec:conc}
In this work, we have shown that the coined and Szegedy's models have in common a large class of QWs. Under some assumptions, we can convert the graph on which the coined QW takes place into a bipartite graph on which an equivalent Szegedy's QW takes place, and vice versa. The equivalence means that the QW state at any step $t$ of one model can be exactly obtained using the other model. To go from the coined model on graph $\Gamma$ to Szegedy's model on graph $\Gamma''$, we have to replace each vertex of $\Gamma$ by a clique (for the Grover coin), obtaining a new enlarged graph $\Gamma'$ on which an equivalent staggered QW is defined. As a last step, we have to find the bipartite graph $\Gamma''$, the line graph of which is (isomorphic to) $\Gamma'$. In the other direction, we start with a bipartite graph $\Gamma''$ on which Szegedy's QW takes place, take the line graph of $\Gamma''$, and convert the cliques of one tessellation into vertices, which generates a new graph (or multigraph in some cases) on which the coined QW takes place. The staggered QW model plays a key role in the conversion process, because Szegedy's model has an idle subspace which hinders the direct conversion from Szegedy's to the coined model.
Remarkably, the abstract search algorithm using the coined QW model on (non-regular) graphs can be cast into Szegedy's searching framework, which is based on bipartite graphs with sinks. When converting from the coined to the staggered model, the coin $(-I)$ represents a missing polygon in one tessellation, which is converted into a sink in the equivalent Szegedy's model. The converse holds under some restrictions on the stochastic matrix of the bipartite graph; Szegedy's QWs with sinks can be converted into an equivalent search algorithm in the coined model using $(-I)$ on the marked vertices. One restriction is that the degree of the vertices of set $Y$ must be 2. Szegedy's QWs that do not obey this restriction are not equivalent to coined QWs. In this sense, Szegedy's model is more general. On the other hand, coined QWs using coins that are not reflections, such as the Fourier coin~\cite{Portugal:book}, cannot be cast into Szegedy's model. In this sense, the coined QW model is more general.
In conclusion, Szegedy's and the coined QW models share a large class of QW instances. However, there are Szegedy's QWs that cannot be converted into the coined formalism using the standard and non-regular flip-flop coined QWs defined in Sec.~\ref{sec:MD}, and vice versa. Since Szegedy's model is a subset of the staggered model, we also conclude that coined and staggered models are not equivalent. To pursue further connections between Szegedy's (or staggered) and coined models, one has to generalize the definitions of those models.
As a byproduct, we have shown how to convert coined QWs into coinless QWs on an enlarged graph, when the coin is an orthogonal reflection. The coin becomes a unitary operator acting on the new vertices and edges of the extended graph. The coined model can be understood in a new way, which may help in experimental implementations or in decoherence analysis.
\section*{Appendix}
\subsection*{Proof of Theorem~\ref{theo3}}
Suppose that we have a well-defined non-regular flip-flop coined QW on a graph $\Gamma(V,E)$ with coin $C'$ being an orthogonal reflection. As an intermediate step, we use a staggered QW on a graph $\Gamma(V',E')$ with two tessellations equivalent to the coined QW. If $C'=C_1\oplus ... \oplus C_{|V|}$, the eigenvectors of $C'$ are the direct sum of eigenvectors of $C_v$ and zero vectors, for $1\le v\le |V|$. Let $\ket{\tilde\alpha_x^{(v)}}$, $0\le x < m_v$ be an orthonormal basis for the $(+1)$-eigenspace of $C_v$ and let $\ket{\alpha_{x_v}^{(v)}}$ be the corresponding eigenvectors of $C'$ obtained from $\ket{\tilde\alpha_{x_v}^{(v)}}$ by performing the necessary direct sums with zero vectors. If $C'$ is an orthogonal reflection of graph $\Gamma'(V',E')$, it can be written as
\begin{eqnarray}\label{Cprime2}
C' &=& 2\,\sum_{v=1}^{{|V|}} \sum_{x_v=0}^{m_v-1} \ket{\alpha_{x_v}^{(v)}}\bra{\alpha_{x_v}^{(v)}} - I,
\end{eqnarray}
where the set of $(+1)$-eigenvectors $\ket{\alpha_{x_v}^{(v)}}$ has the following properties: (1)~if the $i$-th entry of $\ket{\alpha_{x_v}^{(v)}}$ is nonzero, the $i$-th entries of the other $(+1)$-eigenvectors must be zero, and (2)~vector $\sum_{v=1}^{{|V|}}\sum_{x_v=0}^{m_v-1} \ket{\alpha_{x_v}^{(v)}}$ has no zero entries. Then each $C_v$ can be written as
\begin{eqnarray}
C_v &=& 2\sum_{x=0}^{m_v-1} \ket{\tilde\alpha_x^{(v)}}\bra{\tilde\alpha_x^{(v)}} - I \label{Cv}
\end{eqnarray}
and the set of vectors $\ket{\tilde\alpha_x^{(v)}}$ inherits properties~(1) and~(2)~in ${\cal H}^{d_v}$. Each $C_v$ is an orthogonal reflection in ${\cal H}^{d_v}$, where $d_v$ is the degree of vertex $v$ in $\Gamma(V,E)$, and has an associated $d_v$-graph $\Gamma_{C_v}$ tessellated by the $(+1)$-eigenvectors of $C_v$. $\Gamma_{C_v}$ is a union of $m_v$ disjoint cliques. If $C_v$ has only one $(+1)$-eigenvector ($m_v=1$), $\Gamma_{C_v}$ is a clique.
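As an illustrative aside (not part of the proof), the claim that each $C_v$ of the form in Eq.~(\ref{Cv}) with a single $(+1)$-eigenvector is an orthogonal reflection can be checked numerically. The degree $d_v=3$ and the uniform eigenvector (the Grover coin case) below are hypothetical choices.

```python
import numpy as np

# Hedged numerical sketch (not from the paper): a coin C_v = 2|a><a| - I built
# from a single normalized (+1)-eigenvector |a> is an orthogonal reflection.
# The degree d = 3 and the uniform eigenvector (Grover coin) are illustrative.
d = 3
a = np.ones(d) / np.sqrt(d)          # uniform (+1)-eigenvector
C = 2 * np.outer(a, a) - np.eye(d)   # C_v = 2|a><a| - I

assert np.allclose(C, C.T)            # real symmetric
assert np.allclose(C @ C, np.eye(d))  # involution: C_v^2 = I
assert np.allclose(C @ a, a)          # |a> is fixed (eigenvalue +1)
```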
Graph $\Gamma'(V',E')$ is obtained from $\Gamma(V,E)$ by replacing each vertex $v\in V$ by the graph $\Gamma_{C_v}$ and gluing the vertices of $\Gamma_{C_v}$, whose labels run from 0 to $d_v-1$, in a one-to-one correspondence with the labels of the coin directions at vertex $v$. The vertices of $\Gamma_{C_v}$ after the gluing process receive the labels of the basis vectors in $\ket{\alpha_{x_v}^{(v)}}$ with nonzero coefficients. For example, the vertices of the 3-clique in graph $\Gamma'$ of Fig.~\ref{fig:example6} have labels 1, 2, and 3 because they correspond to the basis vectors of the $(+1)$-eigenvector $\ket{\alpha_1}=(\ket{1}+\ket{2}+\ket{3})/\sqrt 3$.
Polygons induced by $\ket{\alpha_x^{(v)}},\,\forall v,x$ tessellate $\Gamma'(V',E')$ because polygons induced by $\ket{\alpha_{x}^{(v)}}$, $0\le x<m_v$ exactly cover the graphs $\Gamma_{C_v}$ that replace vertices $v$. Tessellation $\alpha$ covers all vertices and all edges that were added via $\Gamma_{C_v}$, for all $v$. This tessellation does not cover the edges of $\Gamma'(V',E')$ that were inherited from the original graph $\Gamma(V,E)$.
Tessellation $\beta$ is made of size-2 polygons that cover the edges of $\Gamma'(V',E')$ that were inherited from the original graph $\Gamma(V,E)$. This tessellation has $|E|$ polygons, and the set of those polygons is in one-to-one correspondence with an independent set of $(+1)$-eigenvectors of $S$ in the computational basis, which are given by vectors
\begin{equation}
\ket{\beta_{v}^{j}} \,=\, \frac{1}{\sqrt 2}\left(\ket{v,j}+\ket{v',j'}\right)
\end{equation}
using the notation of Eq.~(\ref{def_S2}), where $v\in V$ and $0\le j\le d_v-1$. The cardinality of the independent set of $(+1)$-eigenvectors of $S$ is $|E|$. The shift operator of the non-regular flip-flop coined QW is
\begin{equation}
S\,=\, 2 \sum \ket{\beta_{v}^{j}}\bra{\beta_{v}^{j}}-I
\end{equation}
where the sum runs over the set of independent $(+1)$-eigenvectors (the sum has $|E|$ terms). $S$ is an orthogonal reflection because the set of independent $(+1)$-eigenvectors has non-overlapping nonzero entries and the sum of those eigenvectors has no zero entries in the computational basis of $\Gamma'(V',E')$. Polygons induced by $\ket{\beta_{v}^{j}}$ form a perfect matching of $\Gamma'(V',E')$.
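The flip-flop character of $S$ can be illustrated with a toy example (not part of the proof): built from a perfect matching of size-2 polygons, $S$ squares to the identity and swaps the paired basis states. The 4-dimensional space and the pairing below are hypothetical.

```python
import numpy as np

# Hedged sketch: the flip-flop shift S = 2*sum |beta><beta| - I, with
# |beta> = (|v,j> + |v',j'>)/sqrt(2) over a perfect matching, is an
# involution that swaps the paired basis states. Toy dimensions/pairing.
n = 4
pairs = [(0, 1), (2, 3)]             # one polygon of tessellation beta per edge
S = -np.eye(n)
for i, j in pairs:
    b = np.zeros(n)
    b[i] = b[j] = 1 / np.sqrt(2)     # |beta> = (|v,j> + |v',j'>)/sqrt(2)
    S += 2 * np.outer(b, b)

assert np.allclose(S @ S, np.eye(n))                # S is an involution
assert np.allclose(S @ np.eye(n)[0], np.eye(n)[1])  # flip-flop: swaps |0> and |1>
```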
The union of tessellations $\alpha$ and $\beta$ covers all edges and is a well-defined staggered QW having one vertex in each polygon intersection. Using Proposition~4.3 of Ref.~\cite{PSFG15}, this staggered QW can be cast into the extended Szegedy's framework.\qed
\subsection*{Proof of Theorem~\ref{theo4}}
Consider the staggered QW model on the line graph $L(\Gamma)$ equivalent to Szegedy's QW on $\Gamma(X,Y,E)$. $L(\Gamma)$ has $2|Y|$ vertices. The polygons of the staggered model are induced by
\begin{eqnarray}
\ket{\alpha_x} &=& \sum_{y\in Y} \sqrt{p_{x y}} \, \ket{f(x,y)}, \label{ht_alpha_x2} \\
\ket{\beta_y} &=& \sum_{x\in X} \sqrt{q_{y x}}\, \ket{f(x,y)}, \label{ht_beta_y2}
\end{eqnarray}
where $f$ is the bijection between $E$ and the vertices of $L(\Gamma)$ as described in Ref.~\cite{PSFG15} and vectors $\ket{\alpha_x},\ket{\beta_y}$ belong to Hilbert space ${\cal H}^{2|Y|}$.
Tessellation $\alpha$ is induced by the orthogonal reflection
\begin{eqnarray}
U_0 &=& 2\sum_{x\in X} \ket{\alpha_x}\bra{\alpha_x} - I.
\end{eqnarray}
With a proper choice of $f$, matrix $\ket{\alpha_x}\bra{\alpha_x}$ is a direct sum of zero matrices and a $d_x\times d_x$ matrix $M_x$, which has no zero entries. Define
\begin{equation}
C_x\,=\, 2 M_x - I.
\end{equation}
Then
\begin{equation}\label{coin_C_Sz2}
U_0 \,=\, \bigoplus_{x\in X} C_x.
\end{equation}
Operator $U_1$ is equal to the one described in the proof of Theorem~\ref{theo2} and is given by Eq.~(\ref{U_1_theo2}) because the assumptions about vertices $y\in Y$ are equal to the ones in Theorem~\ref{theo2}. Then $U_1$ swaps pairs of basis vectors and $U_1^2=I$.
The evolution operator is
\begin{equation}
U \,=\, U_1 \,U_0,
\end{equation}
where $U_0$ is the coin operator given by Eq.~(\ref{coin_C_Sz2}) and $U_1$ is the shift operator given by Eq.~(\ref{U1_theo2}). $U$ is an evolution operator of a non-regular flip-flop coined QW on the (multi)graph obtained in the following way: The polygons of tessellation $\beta$ have two vertices and form a perfect matching of $L(\Gamma)$. The remaining cliques belong to tessellation $\alpha$. Each clique of tessellation $\alpha$ must be converted into a single vertex. If two cliques of tessellation $\alpha$ are connected by an edge, the vertices that replace those cliques are adjacent. If two cliques are connected by more than one edge, the vertices that replace those cliques must be connected by more than one edge, generating a non-regular $|X|$-multigraph. \qed
\subsection*{Proof of Theorem~\ref{theo5}}
\begin{proof}
The method employed in the proof of Theorem~\ref{theo3} when the coin is an orthogonal reflection can be straightforwardly extended when the coin is a \textit{partial} orthogonal reflection. In this case, we can convert a non-regular flip-flop coined QW on a graph $\Gamma(V,E)$ with coin $(-I)$ on the marked vertices and the Grover coin on the non-marked vertices into an equivalent \textit{generalized} staggered QW on $\Gamma'(V',E')$, which is obtained from $\Gamma(V,E)$ in the following way: each non-marked vertex $v\in V$ is converted into a $d_v$-clique and each marked vertex $v$ into a disconnected $d_v$-graph (an empty $d_v$-graph), where $d_v$ is the degree of vertex $v$. Tessellation $\alpha$ is partial, with polygons being the cliques associated with non-marked vertices only. Tessellation $\beta$ is the same employed in the proof of Theorem~\ref{theo3}.
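The partial reflection structure can be illustrated numerically (a hedged toy example, not part of the proof): the coin acts as the Grover coin on a non-marked vertex and as $(-I)$ on a marked vertex, whose block has no $(+1)$-eigenvector, corresponding to a missing polygon. The degrees (3 and 2) below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a partial orthogonal reflection coin, with the Grover coin
# on a non-marked vertex (degree 3) and (-I) on a marked vertex (degree 2).
# The (-I) block has no (+1)-eigenvector, so its polygon is "missing" from
# tessellation alpha. Degrees and layout are illustrative assumptions.
def grover(d):
    return 2 * np.full((d, d), 1 / d) - np.eye(d)

C = np.zeros((5, 5))
C[:3, :3] = grover(3)    # non-marked vertex: Grover coin
C[3:, 3:] = -np.eye(2)   # marked vertex: coin (-I) -> sink in Szegedy's model

assert np.allclose(C @ C, np.eye(5))   # still an involution (unitary)
w = np.linalg.eigvalsh(C[3:, 3:])
assert np.all(w < 0)                   # marked block has no (+1)-eigenvector
```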
The next step is to define a new graph $\Gamma''(V',E'')$ by converting the empty $d_v$-graphs into complete graphs by adding new edges to $\Gamma'(V',E')$. Let $\tilde\alpha$ be an extension of partial tessellation $\alpha$ by adding new polygons corresponding to the new complete graphs. $\Gamma''(V',E'')$ is the line graph of some bipartite graph $\Gamma(X,Y,\tilde E)$ because the union of tessellations $\tilde\alpha$ and $\beta$ form a two-colorable Kraus partition of $\Gamma''(V',E'')$.
We have defined a generalized staggered QW on $\Gamma'(V',E')$, which is equivalent to a generalized staggered QW on $\Gamma''(V',E'')$ using partial tessellation $\alpha$, because the edges in $E''\setminus E'$ do not belong to any polygon. Following Ref.~\cite{PSFG15}, we can obtain an equivalent Szegedy's QW; the missing polygons in partial tessellation $\alpha$ create sinks in graph $\Gamma(X,Y,\tilde E)$ by removing the directed edges coming out of the vertices in $X$ associated with the missing polygons. The edges oriented toward the sinks are kept. This process creates a new directed bipartite graph $\Gamma'(X,Y,\tilde E')$. Ref.~\cite{PSFG15} showed that Szegedy's QW on $\Gamma'(X,Y,\tilde E')$ with vectors $\ket{\phi_x}$ and $\ket{\psi_y}$ given by Eqs.~(\ref{ht_phi_x}) and~(\ref{ht_psi_y}) in uniform superposition is equivalent to the generalized staggered QW on $\Gamma'(V',E')$. Then the non-regular flip-flop coined QW on a graph $\Gamma(V,E)$ with coin $(-I)$ on the marked vertices and the Grover coin on the non-marked vertices can be cast into Szegedy's searching framework.
\end{proof}
\subsection*{Proof of Theorem~\ref{theo6}}
\begin{proof}
This theorem is a corollary of Theorem~\ref{theo4} if we employ the method described in Ref.~\cite{PSFG15} for converting Szegedy's QWs on bipartite graphs with sinks into generalized staggered QWs. To convert generalized staggered QWs into the coined QW model, a missing polygon is converted into the coin $(-I)$.
\end{proof}
\end{document}